Lecture Notes in Control and Information Sciences Editors: M. Thoma · M. Morari
331
Panos J. Antsaklis Paulo Tabuada (Eds.)
Networked Embedded Sensing and Control Workshop NESC'05: University of Notre Dame, USA October 2005 Proceedings With 96 Figures
Series Advisory Board F. Allgöwer · P. Fleming · P. Kokotovic · A.B. Kurzhanski · H. Kwakernaak · A. Rantzer · J.N. Tsitsiklis
Editors Panos J. Antsaklis Paulo Tabuada 275 Fitzpatrick Hall Department of Electrical Engineering University of Notre Dame Notre Dame, IN 46556 USA
ISSN 0170-8643
ISBN-10 3-540-32794-0 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-32794-3 Springer Berlin Heidelberg New York
Library of Congress Control Number: 2006921745 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in other ways, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under German Copyright Law. Springer is a part of Springer Science+Business Media springer.com © Springer-Verlag Berlin Heidelberg 2006 Printed in Germany The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Data conversion by editors. Final processing by PTP-Berlin Protago-TEX-Production GmbH, Germany Cover-Design: design & production GmbH, Heidelberg Printed on acid-free paper 89/3141/Yu - 5 4 3 2 1 0
To Melinda and Lily. Ao tinto do Alentejo e à alheira de Mirandela.
Preface
This volume contains the proceedings of the Workshop on Networked Embedded Sensing and Control (NESC) held at the University of Notre Dame, Notre Dame, Indiana, on October 17-18, 2005 (http://nesc.ee.nd.edu). Networked embedded control systems can be roughly described as collections of spatially distributed sensors, actuators and controllers whose behavior is coordinated through wired or wireless communication links. This integration of different technologies and scientific domains presents new and challenging fundamental problems underlying the foundations of this class of systems. The workshop acted both as a forum and as a catalyst for a dialogue between over 60 researchers working on different aspects of networked systems, such as sensing and estimation, control, coding/communications and ad hoc networking. Research experiences and results were exchanged among the participants, and concrete steps were taken towards establishing the scientific foundations and identifying the main scientific challenges. The workshop was organized as a single-track event with 18 contributed papers over two days centered around four invited plenary lectures. The plenary lectures were "Motion coordination for multi-agent networks" by Francesco Bullo, University of California at Santa Barbara; "Embedded sensing and control: Applications and application requirements" by Tariq Samad, Honeywell; "Control over communication networks: Impact of delays on performance" by Dawn Tilbury, University of Michigan; and "The role of information theory in communication constrained control systems" by Sekhar Tatikonda, Yale University. The 18 contributed papers were presented in five sessions: Multi-Agent Control; Simulation and Implementation; and Distributed Sensing, Filtering and Estimation during the first day; and Control over Networks I and Control over Networks II during the second day of the workshop.
What emerged from this workshop was the importance of further exploring the areas between the disciplines of systems and control, information theory, communication networks, distributed and collaborative control, and real-time systems. At the same time, it became clear that we must remain aware of application needs, which help us incorporate the right assumptions into our theoretical results, and of implementation issues, which guide the crafting of our algorithms. We are grateful to all invitees and contributors for making the workshop a success. We would also like to recognize the diligence and expertise of the Program Committee: John Baillieul, Boston University; Michael Branicky, Case Western Reserve University; Magnus Egerstedt, Georgia Institute of Technology; Nicola Elia, Iowa State University; Martin Haenggi, University of Notre Dame; João Hespanha, University of California at Santa Barbara; George J. Pappas, University of Pennsylvania; Sekhar Tatikonda, Yale University; Dawn Tilbury, University of Michigan. We truly appreciate and acknowledge the generous support of our sponsors. The workshop was co-sponsored by the Office of Research of the University of Notre Dame, the Center for Applied Mathematics, the Department of Electrical Engineering, and the H.C. and E.A. Brosey Endowed Chair fund of the University of Notre Dame. It was also co-sponsored by the National Science Foundation EHS program and technically co-sponsored by the IEEE Control Systems Society. We thank them all.
University of Notre Dame, Notre Dame, October 2005
Panos J. Antsaklis Paulo Tabuada
Contents
Part I Multi-agent Control

Plenary Talk - Notes on Multi-agent Motion Coordination: Models and Algorithms
Francesco Bullo ........................................ 3

Discrete Time Kuramoto Models with Delay
Benjamin I. Triplett, Daniel J. Klein, Kristi A. Morgansen ........................................ 9

Symmetries in the Coordinated Consensus Problem
Ruggero Carli, Fabio Fagnani, Marco Focoso, Alberto Speranzon, Sandro Zampieri ........................................ 25

On Communication Requirements for Multi-agent Consensus Seeking
Lei Fang, Panos J. Antsaklis ........................................ 53

Applications of Connectivity Graph Processes in Networked Sensing and Control
Abubakr Muhammad, Meng Ji, Magnus Egerstedt ........................................ 69

Part II Simulation and Implementation

Plenary Talk - Network-Embedded Sensing and Control: Applications and Application Requirements
Tariq Samad ........................................ 85

Simulation of Large-Scale Networked Control Systems Using GTSNetS
ElMoustapha Ould-Ahmed-Vall, Bonnie S. Heck, George F. Riley ........................................ 87
neclab: The Network Embedded Control Lab
Nicholas Kottenstette, Panos J. Antsaklis ........................................ 107

Homogeneous Semantics Preserving Deployments of Heterogeneous Networks of Embedded Systems
Aaron D. Ames, Alberto Sangiovanni-Vincentelli, Shankar Sastry ........................................ 127

Part III Distributed Sensing, Filtering, and Estimation

Distributed Kalman Filtering and Sensor Fusion in Sensor Networks
Reza Olfati-Saber ........................................ 157

Belief Consensus and Distributed Hypothesis Testing in Sensor Networks
Reza Olfati-Saber, Elisa Franco, Emilio Frazzoli, Jeff S. Shamma ........................................ 169

Distributed Evidence Filtering in Networked Embedded Systems
Duminda Dewasurendra, Peter Bauer, Kamal Premaratne ........................................ 183

Part IV Control over Networks I

Plenary Talk - Delays in Control over Communication Networks: Characterization, Impact, and Reduction Strategies
Dawn Tilbury ........................................ 201

Anticipative and Non-anticipative Controller Design for Network Control Systems
Payam Naghshtabrizi, João P. Hespanha ........................................ 203

On Quantization and Delay Effects in Nonlinear Control Systems
Daniel Liberzon ........................................ 219

Performance Evaluation for Model-Based Networked Control Systems
Luis A. Montestruque, Panos J. Antsaklis ........................................ 231

Beating the Bounds on Stabilizing Data Rates in Networked Systems
Yupeng Liang, Peter Bauer ........................................ 251
Disturbance Attenuation Bounds in the Presence of a Remote Preview
Nuno C. Martins, Munther A. Dahleh, John C. Doyle ........................................ 269

Part V Control over Networks II

Plenary Talk - The Role of Information Theory in Communication Constrained Control Systems
Sekhar Tatikonda ........................................ 289

Delay-Reliability Tradeoffs in Wireless Networked Control Systems
Min Xie, Martin Haenggi ........................................ 291

A Cross-Layer Approach to Energy Balancing in Wireless Sensor Networks
Daniele Puccinelli, Emmanuel Sifakis, Martin Haenggi ........................................ 309

Distributed Control over Failing Channels
Cédric Langbort, Vijay Gupta, Richard M. Murray ........................................ 325

References ........................................ 343
Plenary Talk
Notes on Multi-agent Motion Coordination: Models and Algorithms
Francesco Bullo
Mechanical Engineering Department
Center for Control, Dynamical Systems and Computation
University of California at Santa Barbara
[email protected] http://motion.mee.ucsb.edu
Motion coordination is an extraordinary phenomenon in biological systems such as schools of fish and serves as a remarkable tool for man-made groups of robotic vehicles and active sensors. Although each individual agent has no global knowledge about the group as a whole or about the surrounding environment, complex coordinated behaviors emerge from local interactions. From a scientific point of view, the study of motion coordination poses novel challenges for systems and control theory. A comprehensive understanding of this phenomenon requires the joint ambitious study of mobility, communication, computation, and sensing aspects. In this brief document, we review some of our recent work on models and algorithms for coordinating the motion of multi-agent networks. In Section 1, we discuss models and classifications for multi-agent networks, i.e., groups of robotic agents that can sense, communicate and take local control actions. For these networks, we introduce basic notions of communication and control algorithms, coordination tasks and time complexity. Earlier efforts in this direction are documented in [SY99, San01]; our treatment is influenced by [Lyn97a, JT92] and presented in detail in [MBCF05]. In Section 2, we discuss various basic algorithms for (i) rendezvous at a point and (ii) deployment over a given region. The proposed control and communication algorithms achieve these various coordination objectives requiring only spatially-distributed information or, in other words, single-hop communication. These rendezvous and deployment scenarios are treated extensively in [CMB04a, GCB05] and in [CMKB04a, CMB05a, CB04a], respectively. Early efforts on related problems include [AOSY99, LH97]. The proposed models and examples shed some light on a novel class of control problems with insightful connections to the disciplines of distributed algorithms, geometric optimization, and algorithmic robotics.
1 Robotic Networks and Complexity

The global behavior of a robotic network arises from the combination of the local actions taken by its members. Each agent in the network can perform a few basic tasks such as sensing, communicating, processing information and moving according to it. The many ways in which these capabilities can be integrated make a robotic network a versatile and, at the same time, complex system. To understand the trade-offs between performance, reliability and costs, it seems appropriate to propose a modeling framework where the execution of different coordination algorithms can be appropriately formalized, analyzed and compared.

We consider uniform networks of robotic agents defined by a tuple S = (I, A, Ecmm) consisting of a set of unique identifiers I = {1, . . . , N}, a collection of control systems A = {A[i]}_{i∈I}, with A[i] = (X, U, X0, f), and a map Ecmm from X^N to the subsets of I × I called the communication edge map. Here, (X, U, X0, f) is a control system with state space X ⊂ R^d, input space U, set of allowable initial states X0 ⊂ X, and system dynamics f : X × U → X. An edge between two identifiers in Ecmm implies the ability of the corresponding two agents to exchange messages.

A control and communication law for S consists of the sets:

1. T = {t_ℓ}_{ℓ∈N} ⊂ R̄+, an increasing sequence of time instants, called the communication schedule;
2. L, called the communication language, whose elements are called messages;
3. W, the set of values of some logic variables w[i] ∈ W, i ∈ I, and W0 ⊆ W, the subset of allowable initial values. These sets correspond to the capability of agents to allocate additional variables and store sensor or communication data;

and the maps:

4. msg : T × X × W × I → L, called the message-generation function;
5. stf : T × W × L^N → W, called the state-transition function;
6. ctl : R̄+ × X × X × W × L^N → U, called the control function.

To implement a control and communication law, each agent performs the following sequence, or cycle, of actions. At each instant t_ℓ ∈ T, each agent i communicates to each agent j such that (i, j) belongs to Ecmm(x[1], . . . , x[N]). Each agent i sends a message computed by applying the message-generation function to the current values of t_ℓ, x[i] and w[i]. After a negligible period of time, agent i resets the value of its logic variables w[i] by applying the state-transition function to the current value of w[i] and to the messages y[i](t_ℓ) received at t_ℓ. Between communication instants, i.e., for t ∈ [t_ℓ, t_{ℓ+1}), agent i applies a control action computed by applying the control function to its state at the last sample time x[i](t_ℓ), the current values of x[i] and w[i], and to the messages y[i](t_ℓ) received at t_ℓ.
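To make the execution cycle concrete, the following sketch (ours, not part of the original notes) shows how one synchronous round of a control and communication law can be organized; the callables msg, stf, ctl and the edge map Ecmm are assumed to be supplied by the designer, and agents are indexed from 0 rather than 1 here.

```python
# A minimal sketch of one synchronous round of a control and communication law.
def synchronous_round(t_l, x, w, E_cmm, msg, stf):
    """x, w: lists of physical states x[i] and logic states w[i] at instant t_l."""
    N = len(x)
    edges = E_cmm(x)                                 # subset of {0..N-1} x {0..N-1}
    inbox = [[None] * N for _ in range(N)]           # inbox[j][i]: message from i to j
    for (i, j) in edges:                             # message generation
        inbox[j][i] = msg(t_l, x[i], w[i], j)
    w_next = [stf(t_l, w[i], inbox[i]) for i in range(N)]   # state transition
    return w_next, inbox

def control_between_instants(t, x_sample_i, x_i, w_i, inbox_i, ctl):
    """For t in [t_l, t_{l+1}): apply ctl to the last sampled and current data."""
    return ctl(t, x_sample_i, x_i, w_i, inbox_i)
```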
Some remarks are appropriate. In our present definition, all agents are identical and implement the same algorithm; in this sense the control and communication law is called uniform (or anonymous). If W = W0 = ∅, then the control and communication law is static (or memoryless) and no state-transition function is defined. It is also possible for a law to be time-independent if the three relevant maps do not depend on time. Finally, let us also remark that this is a synchronous model in which all agents share a common clock.

Next, we establish the notion of coordination task and of task achievement by a robotic network. A (static) coordination task for a network S is a map T : X^N → {true, false}. Additionally, let CC be a control and communication law for S. We say that CC achieves the task T if for all initial conditions x0[i] ∈ X0, the corresponding network evolution t → x(t) has the property that there exists T ∈ R+ such that T(x(t)) = true for t ≥ T. In control-theoretic terms, achieving a task means establishing a convergence or stability result. Beside this key objective, one might be interested in efficiency as measured by the required communication service, the required control energy, or the speed of completion. We focus on the latter notion. The time complexity to achieve T with CC is TC(T, CC) = sup{ TC(T, CC, x0) | x0 ∈ X0^N }, where TC(T, CC, x0) = inf{ ℓ | T(x(t_k)) = true for all k ≥ ℓ }, and where t → x(t) is the evolution of (S, CC) from x0. The time complexity of T is TC(T) = inf{ TC(T, CC) | CC achieves T }. Some ideas on how to define meaningful notions of communication complexity are discussed in [MBCF05]. In the following discussion, we describe certain coordination algorithms which have been cast into this modeling framework and whose time complexity properties have been analyzed.
2 Example Algorithms and Tasks Key problems in motion coordination include the design of strategies for flocking, motion planning, collision avoidance and others. Numerous such problems remain interesting open challenges as of today. For example, it is still not clear how to prescribe the agents’ motion in such a way as to achieve a generic prescribed geometric pattern; note that certain impossibility results are known [SY99]. Typically, coordination objectives are characterized via appropriate utility functions. We illustrate our approach by discussing two basic types of problems: rendezvous and deployment.
Aggregation algorithms

The rendezvous objective (also referred to as the gathering problem) is to achieve agreement over the location of the agents, that is, to steer each agent to a common location. An early reference on this problem is [AOSY99]. We consider two scenarios which differ in the agents' sensing/communication capabilities and the environment to which the agents belong. First [CMB04a], we consider the problem of rendezvous for agents equipped with range-limited communication in obstacle-free environments. In this case, each agent is capable of sensing its position in the Euclidean space R^d and can communicate it to any other robot within a given distance r. This communication service is modeled by the r-disk graph, in which two agents are neighbors if and only if their Euclidean distance is less than or equal to r. Second [GCB05], we consider visually-guided agents. Here the agents are assumed to belong to a nonconvex simple polygonal environment Q. Each agent can sense within line of sight any other agent, as well as sense the distance to the boundary of the environment. The relationship between the agents can be characterized by the so-called visibility graph: two agents are neighbors if and only if they are mutually visible. In both scenarios, the rendezvous problem cannot be solved with distributed information unless the agents' initial positions form a connected sensing/communication graph. Arguably, a good property of any rendezvous algorithm is that of maintaining connectivity between agents. This connectivity-maintenance objective is interesting on its own. It turns out that this objective can be achieved through local constraints on the agents' motion. Motion constraint sets that maintain connectivity are designed in [AOSY99, GCB05] by exploiting the geometric properties of disk and visibility graphs. These discussions lead to the following algorithm, which solves the rendezvous problem for both communication scenarios. The agents execute what we shall refer to as the Circumcenter Algorithm; an informal description follows. Each agent iteratively performs the following tasks:

1: acquire neighbors' positions
2: compute connectivity constraint set
3: move toward the circumcenter of the point set comprised of its neighbors and of itself, while remaining inside the connectivity constraint set.

One can prove that, under technical conditions, the algorithm does achieve the rendezvous task in both scenarios; see [CMB04a, GCB05]. Additionally, when d = 1, it can be shown that the time complexity of this algorithm is Θ(N); see [MBCF05].
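The sketch below (ours, not from [CMB04a, GCB05]) illustrates the flavor of the Circumcenter Algorithm for the planar r-disk case. The circumcenter (minimum enclosing ball center) is approximated by a simple iteration, and the connectivity constraint set is replaced by a backtracking rule that keeps each agent within r/2 of the midpoint with each current neighbor; it should be read as an illustration under these simplifying assumptions, not as the algorithm analyzed in the references.

```python
import numpy as np

def miniball_center(points, iters=200):
    """Approximate center of the minimum enclosing ball (Badoiu-Clarkson iteration)."""
    c = points.mean(axis=0)
    for k in range(1, iters + 1):
        far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
        c = c + (far - c) / (k + 1)
    return c

def circumcenter_step(p, r, gain=1.0):
    """One synchronous round of the circumcenter rule on the r-disk graph; p is (N, 2)."""
    n, new_p = len(p), p.copy()
    for i in range(n):
        nbrs = [j for j in range(n) if j != i and np.linalg.norm(p[j] - p[i]) <= r]
        target = miniball_center(np.vstack([p[i:i + 1], p[nbrs]]))
        step, lam = gain * (target - p[i]), 1.0
        # Simplified connectivity constraint: stay within r/2 of each pairwise midpoint.
        for _ in range(20):
            cand = p[i] + lam * step
            if all(np.linalg.norm(cand - 0.5 * (p[i] + p[j])) <= r / 2 for j in nbrs):
                break
            lam *= 0.5
        new_p[i] = p[i] + lam * step
    return new_p

# Usage: ten agents scattered in the unit square contract toward a common point.
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, size=(10, 2))
for _ in range(50):
    p = circumcenter_step(p, r=0.6)
```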
Deployment algorithms

The problem of deploying a group of agents over a given region of interest can be tackled with the following simple heuristic. Each agent iteratively performs the following tasks:

1: acquire neighbors' positions
2: compute own dominance region
3: move towards the center of own dominance region

This short description can be made accurate by specifying what notions of dominance region and of center are to be adopted. In what follows we mention two examples and refer to [CMKB04a, CMB05a, CB04a] for more details. First, we consider the area-coverage deployment problem in a convex polygonal environment. The objective is to maximize the area within close range of the mobile nodes. This models a scenario in which the nodes are equipped with some sensors that take measurements of some physical quantity in the environment, e.g., temperature or concentration. Assume that certain regions in the environment are more important than others and describe this by a density function φ. This problem leads to the coverage performance metric
\[
H_{\mathrm{ave}}(p_1,\dots,p_N) \;=\; \int_Q \min_{i\in\{1,\dots,N\}} f\bigl(\lVert q-p_i\rVert\bigr)\,\phi(q)\,dq \;=\; \sum_{i=1}^{N} \int_{V_i} f\bigl(\lVert q-p_i\rVert\bigr)\,\phi(q)\,dq .
\]
Here p_i is the position of the ith node, f measures the performance of an individual sensor, and {V_1, . . . , V_N} is the Voronoi partition of the nodes {p_1, . . . , p_N}. If we assume that each node obeys a first order dynamical behavior, then a simple gradient scheme can be easily implemented in a spatially-distributed manner. Following the gradient of H_ave corresponds, in the previous algorithm, to defining (1) the dominance regions to be the Voronoi cells generated by the agents, and (2) the center of a region to be the centroid of the region (if f(x) = x^2). Because the closed-loop system is a gradient flow for the cost function, performance is locally, continuously optimized. As a special case, when the environment is a segment and φ = 1, the time complexity of the algorithm can be shown to be O(N^3 log(N ε^{-1})), where ε is a threshold value below which we consider the task accomplished; see [MBCF05].

Second, we consider the problem of deploying to maximize the likelihood of detecting a source. For example, consider devices equipped with acoustic sensors attempting to detect a sound source (or, similarly, antennas detecting RF signals, or chemical sensors localizing a pollutant source). For a variety of criteria, when the source emits a known signal and the noise is Gaussian, we know that the optimal detection algorithm involves a matched filter, that detection performance is a function of signal-to-noise ratio, and, in turn, that signal-to-noise ratio is inversely proportional to the sensor-source distance. In this case, the appropriate cost function is
\[
H_{\mathrm{worst}}(p_1,\dots,p_N) \;=\; \max_{q\in Q}\,\min_{i\in\{1,\dots,N\}} f\bigl(\lVert q-p_i\rVert\bigr) \;=\; \max_{i\in\{1,\dots,N\}}\,\max_{q\in V_i} f\bigl(\lVert q-p_i\rVert\bigr),
\]
and a greedy motion coordination algorithm is for each node to move toward the circumcenter of its Voronoi cell. A detailed analysis [CB04a] shows that the detection likelihood is inversely proportional to the circumradius of each node's Voronoi cell, and that, if the nodes follow this algorithm, then the detection likelihood increases monotonically as a function of time.

Acknowledgments

This document summarizes some results of joint work with Anurag Ganguli, Ketan Savla, Jorge Cortés, Emilio Frazzoli, and Sonia Martínez. The author thanks the organizers of the Workshop on Network Embedded Sensing and Control for the warm hospitality throughout the meeting and the opportunity to present this work. The author also gratefully acknowledges the partial support of ONR YIP Award N00014-03-1-0512, NSF SENSORS Award IIS0330008, and ARO MURI Award W911NF-05-1-0219.
Francesco Bullo received the Laurea degree "summa cum laude" in Electrical Engineering from the University of Padova, Italy, in 1994, and the Ph.D. degree in Control and Dynamical Systems from the California Institute of Technology in 1999. From 1998 to 2004, he was an Assistant Professor with the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. He is currently an Associate Professor with the Mechanical & Environmental Engineering Department at the University of California, Santa Barbara. His research interests include motion planning and coordination for autonomous vehicles, geometric control of mechanical systems, and distributed and adaptive algorithms. He is a recipient of the 2003 ONR Young Investigator Award. He is the coauthor (with Andrew D. Lewis) of the book "Geometric Control of Mechanical Systems" (New York: Springer Verlag, 2004, 0-387-22195-6). He has published more than 70 papers in international journals, books, and refereed conferences. He serves on the Editorial Board of the "IEEE Transactions on Automatic Control," "SIAM Journal on Control and Optimization," and "ESAIM: Control, Optimization, and the Calculus of Variations." He also serves as Chairman of the IEEE Control Systems Society Technical Committee on Manufacturing, Automation, and Robotics Control (MARC).
Discrete Time Kuramoto Models with Delay
Benjamin I. Triplett, Daniel J. Klein, and Kristi A. Morgansen
Department of Aeronautics and Astronautics
University of Washington
Seattle, WA 98195-2400
{triplett,djklein,morgansen}@aa.washington.edu

Summary. Motivated by the needs of multiagent systems in the presence of sensing and communication that is delayed, intermittent and asynchronous, we present here a discrete-time Kuramoto oscillator model that incorporates delays. Analytical results are derived for certain cases of heading-based stability of synchronized and balanced equilibrium sets both when delay is not present and when it is. In all cases, agents are assumed to be identical and fully connected, with constant and equal network delays. For complex cases, the analysis is supplemented with Monte Carlo simulations, and all results are demonstrated in simulation.
1 Introduction

During the past several years, a great deal of research attention in the areas of dynamics and control theory has been directed to problems derived from applications in coordinated and multiagent systems. These applications appear in engineered systems (e.g. coordinated control of autonomous vehicles [BA98, BH98, CB04b, EH01, FM04b, Fer03, JLM03d, JK04, LF01]) as well as in natural systems (e.g. emergent patterns in groups of organisms [CDF+01, Gol96, GLR96, Hep97, MEK99, PEK99]). Analysis and control of systems of multiple vehicles or agents provides a great deal of challenge. The order of such systems tends to be large and can be dynamic (as members join and leave the group), the coupling between agents is mathematically nontrivial in most cases of interest (e.g. limited sensing range, dynamic communication), and in many cases allowance must be made for data transmission delay and clock asynchrony. Often, individual agent dynamics are taken to be linear integrators (either single or double). In general, such simplifications of the dynamics do not prevent application of the derived techniques to physical systems, and the mathematics of the resulting system are greatly simplified. However, a number of multiagent systems of interest, both engineered (e.g. aircraft, ground vehicles and underwater vehicles) and natural (e.g. fish, quadrupeds) are composed of
systems whose dynamics are inherently nonholonomic. The key features of the dynamics of the individual agents in such systems are that they have the ability to change their forward/backward velocity and the ability to change their turning radius, usually within finite bounds, but they do not have the ability to directly control lateral motion. The use of Frenet-Serret equations of motion has recently been shown to be a convenient tool for succinct description of such system dynamics [JK04]. By assuming the forward velocity to be constant and taking the turning radius to be a control function composed of relatively weighted spacing and alignment components, where the spacing occurs on a much slower time scale than the alignment, the group dynamics can be studied as a system of coupled oscillators. The study of such systems of oscillators has a well-established history in the physics and biology communities through the original work by Winfree [Win67] whose models were later refined and formalized into the well-known Kuramoto models and studies on collective synchronization (see e.g. [Str00b] for a survey). Oscillator models have recently begun to receive attention in the control theory community for stabilization of groups of autonomous agents [JMB04, SPL03]. Engineering studies have focused on stability results for continuous time versions of the model with either full or partial and dynamic connectivity of the agents. Here, we are interested in the effects of delayed and discrete time interactions between the agents. Some extensions of the continuous time Kuramoto model to include delay have been considered in previous work [AV93, KPR97, YS99]. However, in engineered systems, for example in underwater applications, such delays often occur due to transmission time or due to shared communication media, and occur on a much slower time scale than the time scale for the agent dynamics. These resulting systems are thus best described with discrete time models with delayed interaction coupling. The contributions of the work here are an extension of the continuous time Kuramoto oscillator model to a discrete time version along with stability analysis for cases of systems both with and without delay. The presentation is organized as follows. For completeness, details of the connection between Frenet-Serret dynamics and continuous time oscillator models are given in the next section. In Section 3, stability results for discretized Kuramoto models with no delay are presented. Stability results for discretized systems with delay are given in Section 4. Simulation results are given in Section 5. We conclude with a discussion of ongoing and future work in Section 6.
2 Oscillator Models for Multivehicle Systems As presented in [JK04], planar vehicles with heading control can be described by the planar Frenet-Serret equations
\[
\dot r_i = x_i, \qquad \dot x_i = y_i u_i, \qquad \dot y_i = -x_i u_i, \qquad i = 1,\dots,N, \tag{1}
\]
where r, x, y ∈ R^2, u ∈ R, and θ is such that
\[
x = \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}.
\]
Here, r is the planar position of the center of mass of the vehicle, x is the unit tangent vector in the direction of motion, y is the unit vector normal to the direction of motion, and u is the controlled rate of angular motion, u = θ̇. In [SPL03], this control is composed of a linear combination of alignment and spacing terms where the time scale of the spacing is assumed to be small enough relative to that of the alignment to allow the two to effectively decouple. The form of the alignment control is taken to be that of a system of phase-coupled oscillators as in the Kuramoto model, written as
\[
u = \dot\theta_i(t) = \omega_i - \frac{K}{N}\sum_{j=1}^{N} \sin\bigl(\theta_j(t) - \theta_i(t)\bigr), \tag{2}
\]
where N is the number of agents in the system, ωi is the natural frequency of the ith agent, and θi is its corresponding phase. If the dynamics are represented relative to a rotating coordinate frame, then ωi = 0. As shown in [JMB04] and others, the sign of K determines whether the oscillators will align or anti-align their phases as the system evolves, and the magnitude of K determines the rate of convergence to steady state. The results from [JK04] are studied in the framework of phase coupled oscillators in [SPL03], where the topology of the equilibrium sets of systems like those in the prior work are analyzed and described.
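As an illustration of how the continuous-time model (1)-(2) can be simulated, the following sketch (ours, not part of the original paper) integrates the unit-speed vehicles with a forward-Euler scheme; the step size, gain and horizon are arbitrary choices.

```python
import numpy as np

def simulate_kuramoto_unicycles(theta0, r0, K, t_final, dt=1e-3, omega=None):
    """Forward-Euler integration of the planar vehicles (1) driven by the
    phase-coupled heading control (2), with unit forward speed.
    theta0: (N,) initial headings; r0: (N, 2) initial positions."""
    N = len(theta0)
    omega = np.zeros(N) if omega is None else omega
    theta, r = theta0.copy(), r0.copy()
    for _ in range(int(t_final / dt)):
        # u_i = omega_i - (K/N) * sum_j sin(theta_j - theta_i)
        u = omega - (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        r += dt * np.column_stack((np.cos(theta), np.sin(theta)))  # r_dot = x
        theta += dt * u                                            # theta_dot = u
    return theta, r

# Usage: with the sign convention in (2), a negative gain K drives the headings
# toward alignment, while a positive gain drives them apart.
rng = np.random.default_rng(1)
theta, r = simulate_kuramoto_unicycles(rng.uniform(0, 2 * np.pi, 7),
                                       rng.uniform(0, 5, (7, 2)),
                                       K=-1.0, t_final=20.0)
```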
3 The Discrete Time Kuramoto Model In many applications, networked multiagent control systems suffer from intermittent, delayed, and asynchronous communications and sensing. To accommodate such systems, in which the time scales of the network events are much slower than the underlying agent dynamics, we consider the Kuramoto model in synchronous discrete time with fixed time step ΔT . Our goal in this section is to determine bounds on the quantity KΔT , where K is the control gain in the continuous model, for which the system will converge to either a state of synchrony in which all oscillators have the same phase, or a state of incoherence in which the phasor centroid is zero. The terms synchronized set and balanced set will be used respectively to describe these sets of states. Using a zero-order hold on the data to compute the control between updates at fixed intervals ΔT , the discretized Kuramoto model is
\[
\theta_i(h+1) = \theta_i(h) - \frac{K\Delta T}{N}\sum_{j} \sin\bigl(\theta_j(h) - \theta_i(h)\bigr), \tag{3}
\]
where we assume the natural frequencies of all oscillators in the model are zero. As with the continuous time system (2), this assumption corresponds to writing the system dynamics in a rotating coordinate frame before discretization. Stability analysis for the general system is quite challenging. Using linearization, we will show that the behavior of the discrete-time Kuramoto model without delay converges to the synchronized set when −2 < KΔT < 0, and to the balanced set for 0 < KΔT < 2, provided the initial conditions are in a local neighborhood of the equilibria. Further, in special cases, we will show that these results hold more strongly for any initial condition that is not an unstable equilibrium point for the respective case. Analysis for arbitrary numbers of vehicles in this more general case is not yet available; however, Monte Carlo simulations verify the extension.

3.1 The Two Oscillator Problem

As the equilibria for this system are sets rather than discrete points, for full stability analysis we will apply LaSalle's Invariance Principle rather than Lyapunov's Stability Criteria. For reference, LaSalle's Invariance Principle for discrete-time systems is stated as

Proposition 1. (LaSalle's Invariance Principle for Discrete Time Systems [LaS86]) If V is a Lyapunov function on a linear space D, with closure D̄, and the solution of the discrete time system is in D and bounded, then there is a number c such that the solution approaches M ∩ V^{-1}(c), where M is the largest positively invariant set contained in the set E = {x ∈ R^m | ΔV = 0} ∩ D̄.

To motivate the construction of the Lyapunov functions, we will refer to a parameter often used to study the synchrony of the continuous time system (2), the order parameter [Str00b],
\[
r(\theta(h)) = \frac{1}{N}\left\| \sum_{i} \begin{bmatrix} \cos\theta_i(h) \\ \sin\theta_i(h) \end{bmatrix} \right\| .
\]
Note that for all states in the synchronized set (identical headings), the order parameter has a value of r(θ) = 1, and for all states in the balanced set (opposing headings), r(θ) = 0. Based on this parameter, consider the following candidate Lyapunov function:
\[
V(\theta(h)) \;\equiv\; \frac{N}{2}\, r(\theta(h))^2 \;=\; \frac{1}{2N}\left\| \sum_{i} \begin{bmatrix} \cos\theta_i(h) \\ \sin\theta_i(h) \end{bmatrix} \right\|^2 .
\]
This function attains a maximum value of N/2 and a minimum value of zero for all points on the synchronized and balanced sets, respectively. Using the definitions ΔV(x) ≡ V(f(x)) − V(x) and f(θ_i(h)) = θ_i(h + 1) as above, the ΔT time difference of V is
\[
\Delta V(\theta) = \frac{1}{2N}\left\| \sum_{i} \begin{bmatrix} \cos\Bigl(\theta_i - \frac{K\Delta T}{N}\sum_{j}\sin(\theta_j-\theta_i)\Bigr) \\[2pt] \sin\Bigl(\theta_i - \frac{K\Delta T}{N}\sum_{j}\sin(\theta_j-\theta_i)\Bigr) \end{bmatrix} \right\|^2 - \frac{1}{2N}\left\| \sum_{i} \begin{bmatrix} \cos\theta_i \\ \sin\theta_i \end{bmatrix} \right\|^2 .
\]
Every term in the above equation occurs at the same time, so the index h has been dropped for notational brevity. Choosing φ_i = θ_i − (KΔT/N) Σ_j sin(θ_j − θ_i), the above equation becomes
\[
\Delta V(\theta) = \frac{1}{2N}\left\| \sum_{i} \begin{bmatrix} \cos\phi_i \\ \sin\phi_i \end{bmatrix} \right\|^2 - \frac{1}{2N}\left\| \sum_{i} \begin{bmatrix} \cos\theta_i \\ \sin\theta_i \end{bmatrix} \right\|^2 . \tag{4}
\]
Employing the simplification
\[
\begin{aligned}
\left\| \sum_{i=1}^{N} \begin{bmatrix} \cos\theta_i \\ \sin\theta_i \end{bmatrix} \right\|^2
&= \cos^2\theta_1 + \dots + \cos^2\theta_N + 2\bigl(\cos\theta_1\cos\theta_2 + \dots + \cos\theta_{N-1}\cos\theta_N\bigr) \\
&\quad + \sin^2\theta_1 + \dots + \sin^2\theta_N + 2\bigl(\sin\theta_1\sin\theta_2 + \dots + \sin\theta_{N-1}\sin\theta_N\bigr) \\
&= N + 2\bigl(\cos(\theta_1-\theta_2) + \dots + \cos(\theta_{N-1}-\theta_N)\bigr) \\
&= N + 2\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \cos(\theta_i-\theta_j),
\end{aligned}
\]
(4) can be written as
\[
\Delta V(\theta) = \frac{1}{N}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \Bigl[ \cos(\phi_i - \phi_j) - \cos(\theta_i - \theta_j) \Bigr] . \tag{5}
\]
Using the above, we can state the following.

Theorem 1. For N = 2, the discrete time Kuramoto system (3) will converge to the balanced set, |θ1 − θ2| = π, if 0 < KΔT < 2 and if the initial conditions satisfy θ1(0) ≠ θ2(0), where (θ1, θ2) ∈ T.

Proof. We prove Theorem 1 by LaSalle's Invariance Principle. The synchronized and balanced sets for N = 2 are S = {θ1 − θ2 | θ1 − θ2 = 0} and
B = {θ1 − θ2 | |θ1 − θ2| = π}, respectively. Specify the torus minus the synchronized set as the domain, D = T − S, of the Lyapunov function V(θ). This choice of D contains no unstable equilibria. For N = 2, equation (5) becomes
\[
\begin{aligned}
\Delta V(\theta) &= \tfrac{1}{2}\bigl[\cos(\phi_1 - \phi_2) - \cos(\theta_1 - \theta_2)\bigr] \\
&= \tfrac{1}{2}\Bigl[\cos\Bigl(\theta_1 - \theta_2 - \tfrac{K\Delta T}{2}\sin(\theta_2 - \theta_1) + \tfrac{K\Delta T}{2}\sin(\theta_1 - \theta_2)\Bigr) - \cos(\theta_1 - \theta_2)\Bigr] \\
&= \tfrac{1}{2}\bigl[\cos\bigl(\theta_1 - \theta_2 + K\Delta T\sin(\theta_1 - \theta_2)\bigr) - \cos(\theta_1 - \theta_2)\bigr] .
\end{aligned} \tag{6}
\]
Equation (6) has zeros only at B in D. Therefore, E is composed of only the desired equilibrium points, so all of E is invariant, in which case M = E. To complete the proof we need to show that equation (6) is negative on D − E for the stated ranges of KΔT. We can then use the following inequality derived from equation (6):
\[
\cos(\theta + K\Delta T\sin\theta) < \cos\theta , \tag{7}
\]
where θ = θ1 − θ2. The plot in Figure 1 shows the behavior of equation (7), which depends on the sign of K. Cosine is symmetric about zero, so we can consider one half of a symmetric interval. On the domain 0 ≤ θ ≤ π, cosine is a strictly decreasing function. Hence, for the inequality in question to hold, we must have that θ < θ + KΔT sin θ < 2π − θ. The left side of the inequality amounts to KΔT sin θ > 0, which is true as long as KΔT > 0 because sin θ is always positive on the interval in question. The right side of the inequality is more interesting. Here, we can discover the bounds on KΔT by taking the derivative of both sides with respect to θ, with the result KΔT < 2/cos θ. Since the extreme values of cos θ on 0 ≤ θ ≤ π are ±1, we have the result that −2 < KΔT < 2. The final result 0 < KΔT < 2 comes from taking the intersection of the two conditions on KΔT.

Theorem 2. For N = 2, the discrete time Kuramoto system (3) will converge to the synchronized set, θ1 = θ2, if −2 < KΔT < 0 and if the initial conditions satisfy |θ1(0) − θ2(0)| ≠ π, where (θ1, θ2) ∈ T.

Proof. The proof uses similar analysis to Theorem 1. The main differences are that the Lyapunov function switches sign (the new function is −V) and the domain is D = T − B. The largest invariant set is then M = S. For the new function to be negative on D − E we need θ > θ + KΔT sin θ > −θ, which is true for all θ provided −2 < KΔT < 0.
Fig. 1. Plot of cos (θ + KΔT sin θ) − cos θ demonstrating that the Lyapunov difference equation becomes positive for values of KΔT > 2 (near θ = ±π) and becomes negative for KΔT < −2 (near zero).
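The check underlying Figure 1 can be reproduced numerically; the short sketch below (ours, not the authors' code) evaluates the difference in (7) on a grid and confirms that it first touches zero from below as KΔT crosses 2, near θ = π.

```python
import numpy as np

# Sign check of cos(theta + K*dT*sin(theta)) - cos(theta) on (0, pi).
theta = np.linspace(1e-3, np.pi - 1e-3, 20000)
for KdT in (1.8, 2.0, 2.2):
    diff = np.cos(theta + KdT * np.sin(theta)) - np.cos(theta)
    print(f"K*dT = {KdT}: max of the difference on (0, pi) = {diff.max():+.2e}")
```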
To generalize this result to an arbitrary number of oscillators, two points must be addressed. First, we need to show analytically that ΔV is negative for all points in D − E. This is a difficult problem for arbitrary N with generic D. The second point is that we need to choose an initial set D which does not contain any unstable equilibria. It is important to note that the choice of D is significantly different for convergence to the synchronized and balanced sets. For example, any point on the balanced set is an unstable equilibrium point on the synchronized set, and vice versa. For convergence to the synchronized set, the choice of D = {θi | θi ∈ (γ − π/2, γ + π/2), i = 1, . . . , N} for any constant γ removes all unstable equilibria from the domain. This domain is necessary for analytical formality, but in practice, any initial conditions not precisely on an unstable equilibrium will work. For convergence to the balanced set, the choice of D for arbitrary N remains an open analytical problem, but again in practice this has not been a problem.

3.2 The N-Oscillator Problem

The N-oscillator problem is more challenging. However, if we limit our attention to a smaller neighborhood about the synchronized set, we can prove a stability result for the N-oscillator problem by noting that an appropriate D is a region where all relative phases are small. We can also extend the N = 2
case to single equilibria in the balanced set by simply linearizing the system about the given equilibria. We therefore have the following result.

Theorem 3. There is a non-zero region of attraction for which the system (3) of N phase-coupled oscillators will converge to the same phase provided that −2 < KΔT < 0.

Proof. To begin, express the N-oscillator Lyapunov difference equation as follows:
\[
\begin{aligned}
\Delta V(\theta) &= \frac{1}{N}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \Bigl[ \cos(\phi_i - \phi_j) - \cos(\theta_i - \theta_j) \Bigr] \\
&= \frac{1}{N}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \Bigl[ \cos\Bigl(\theta_i - \theta_j - \frac{K\Delta T}{N}\sum_{q=1}^{N}\sin(\theta_q - \theta_i) + \frac{K\Delta T}{N}\sum_{q=1}^{N}\sin(\theta_q - \theta_j)\Bigr) - \cos(\theta_i - \theta_j) \Bigr] .
\end{aligned}
\]
All equilibria in the synchronized set are given by θi = θ0. Thus we can apply the small angle approximation for Δθij = θi − θj to get
\[
\begin{aligned}
\Delta V(\theta) &\approx \frac{1}{N}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \Bigl[ \cos\Bigl(\theta_i - \theta_j - \frac{K\Delta T}{N}\sum_{q=1}^{N}(\theta_q - \theta_i) + \frac{K\Delta T}{N}\sum_{q=1}^{N}(\theta_q - \theta_j)\Bigr) - \cos(\theta_i - \theta_j) \Bigr] \\
&= \frac{1}{N}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N} \Bigl[ \cos\bigl(\theta_i - \theta_j - K\Delta T(\theta_j - \theta_i)\bigr) - \cos(\theta_i - \theta_j) \Bigr] .
\end{aligned}
\]
Each cosine in the sum is now only in terms of the difference between two phases, so we can compare term by term and use the results of Theorem 1. From continuity of the state space, it must hold that there exists an open neighborhood about the synchronized set where Δθ is small that does not contain any unstable equilibria. Using this neighborhood as D and applying LaSalle’s Invariance Principle completes the proof by noting that the synchronized set contains all of the positively invariant points in D.
Similar analysis for the balanced set has not yet followed from this approach, although the result holds in simulation, so we provide the corresponding result as a conjecture and appeal to Monte Carlo simulations for the general case.

Conjecture 1. There is a non-zero region of attraction for which the system of N phase-coupled oscillators will converge to the balanced set provided that 0 < KΔT < 2, and to the synchronous set provided that −2 < KΔT < 0.

To show the stability of the N oscillator system with the most general region of attraction for the equilibrium sets, we will use Monte Carlo techniques. To this end, we have run simulations of the system for many trials (typically on the order of 50000) over a range of values of KΔT. The minimum and maximum of ΔV for −3 < KΔT < 3 are shown in Figure 2.
Fig. 2. Monte Carlo simulation results for seven oscillators without delay. These results are used to support the notion that the stability bounds of KΔT are independent of N and are typical for any number of vehicles.
This, along with many other simulations not shown here, and at several values of N , supports the notion that the bounds of KΔT are independent of N . Specifically, there are three points worth noting in these results. First,
on −2 < KΔT < 0 and on 0 < KΔT < 2, the minimum and maximum of ΔV were, respectively, zero. This corresponds to initial conditions taken at equilibrium points. Second, for negative KΔT, ΔV could only be negative when KΔT < −2. This supports the notion that the system is guaranteed to converge to the synchronized set only for −2 < KΔT < 0, since the Lyapunov function for the synchronized set is the negative of V. Finally, for positive KΔT, the maximum value of ΔV was positive only for KΔT > 2, which supports the claim that the discrete time N-oscillator system is only guaranteed to converge to the balanced set for 0 < KΔT < 2.
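A compact version of this Monte Carlo experiment is sketched below (our illustration, not the authors' code, and with far fewer trials than the 50000 reported above); it draws random phase vectors, evaluates the Lyapunov difference (5) for the update (3), and reports the observed extremes of ΔV as KΔT is swept.

```python
import numpy as np

def delta_V(theta, KdT):
    """Lyapunov difference (5) for one step of the discrete Kuramoto update (3)."""
    N = len(theta)
    phi = theta - (KdT / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    def pair_sum(x):
        d = x[:, None] - x[None, :]
        return np.cos(d)[np.triu_indices(N, k=1)].sum()   # sum over i < j
    return (pair_sum(phi) - pair_sum(theta)) / N

rng = np.random.default_rng(0)
N, trials = 7, 2000
for KdT in np.linspace(-3, 3, 13):
    vals = [delta_V(rng.uniform(0, 2 * np.pi, N), KdT) for _ in range(trials)]
    print(f"K*dT = {KdT:+.1f}: min dV = {min(vals):+.3f}, max dV = {max(vals):+.3f}")
```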
4 Discrete Time Kuramoto Model with Time Delay

The discrete-time Kuramoto model can be extended to allow for a class of delay with the formulation
\[
\theta_i(h+1) = \theta_i(h) - \frac{K\Delta T}{N}\sum_{j \neq i} \sin\bigl(\theta_j(h-d) - \theta_i(h)\bigr), \tag{8}
\]
where we make the assumption that agent i always has current information about itself, and that information about all other agents is delayed by an integer number of time steps d ∈ N of duration ΔT. The zero-order hold is used to generate the control.

Theorem 4. For any N and within some region of attraction, the discrete time Kuramoto model with any number of delay steps converges to alignment when −N/(N − 1) < KΔT < 0.

Proof. To prove stability for the N oscillator system with any integer time delay, we will use a different approach from that of the non-delayed system analysis. We will linearize about the aligned equilibrium, which will allow us to write a linear matrix equation for the system. Stability is known from the location of the eigenvalues of the system matrix relative to the unit circle. Starting with equation (8), we can buffer the system and write it as
\[
\begin{bmatrix} \theta_1(h) \\ \theta_2(h) \\ \vdots \\ \theta_N(h) \\ \theta_1(h+1) \\ \theta_2(h+1) \\ \vdots \\ \theta_N(h+1) \end{bmatrix}
= \begin{bmatrix} 0 & I \\ A_1 & A_2 \end{bmatrix}
\begin{bmatrix} \theta_1(h-d) \\ \theta_2(h-d) \\ \vdots \\ \theta_N(h-d) \\ \theta_1(h) \\ \theta_2(h) \\ \vdots \\ \theta_N(h) \end{bmatrix}, \tag{9}
\]
where 0 is the N × N matrix of zeros, I is the N × N identity,
\[
A_1 = \begin{bmatrix}
0 & -\frac{K\Delta T}{N} & \cdots & -\frac{K\Delta T}{N} \\
-\frac{K\Delta T}{N} & 0 & \ddots & \vdots \\
\vdots & \ddots & \ddots & -\frac{K\Delta T}{N} \\
-\frac{K\Delta T}{N} & \cdots & -\frac{K\Delta T}{N} & 0
\end{bmatrix}_{N\times N},
\qquad
A_2 = \operatorname{diag}\Bigl(1 + \tfrac{N-1}{N}K\Delta T,\ \dots,\ 1 + \tfrac{N-1}{N}K\Delta T\Bigr).
\]
For the system to be guaranteed to converge to the aligned state (i.e. Δθij = θi − θj, i, j ∈ {1, . . . , N}, satisfies the small angle approximation), we must have that the eigenvalues of the 2N × 2N system matrix above all lie within the unit circle. We find that the eigenvalues of the system matrix are
\[
\lambda_{1,2} = \frac{1}{2} + \frac{1}{2}bc \pm \frac{1}{2}\sqrt{1 + 2bc + b^2c^2 - 4(N-1)b}, \tag{10}
\]
\[
\lambda_r = \frac{1}{2} + \frac{1}{2}bc \pm \frac{1}{2}\sqrt{1 + 2bc + b^2c^2 + 4b}, \tag{11}
\]
where b = KΔT/N and c = N − 1. The pairs of repeated eigenvalues, λr, occur N − 1 times, and all of the eigenvalues are real for any KΔT > 0 and N ≥ 2. For system stability, then, it is required that −1 ≤ λ1,2 ≤ 1 and −1 ≤ λr ≤ 1. The requirement for λ1,2 is satisfied when −N/(N − 1) < KΔT < 0, and for λr when −2N/(N − 2) < KΔT < 0. It is clear then that λ1,2 determine stability.

The balanced set is more difficult to work with, since the linearization for general N still involves cosines. A simple result for N = 2 and linearization about the balanced set is available and shows the same behavior as for the aligned set.

Theorem 5. For N = 2 and within some region of attraction, the discrete time Kuramoto model with any number of delay steps converges to the balanced set when 0 < KΔT < 2.

Proof. The N = 2 system with delay steps d, linearized about the balanced set, is described by
\[
\begin{bmatrix} \theta_1(h) \\ \theta_2(h) \\ \theta_1(h+1) \\ \theta_2(h+1) \end{bmatrix}
= \begin{bmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & \frac{K\Delta T}{2} & 1 - \frac{K\Delta T}{2} & 0 \\
\frac{K\Delta T}{2} & 0 & 0 & 1 - \frac{K\Delta T}{2}
\end{bmatrix}
\begin{bmatrix} \theta_1(h-d) \\ \theta_2(h-d) \\ \theta_1(h) \\ \theta_2(h) \end{bmatrix} . \tag{12}
\]
For the system to be guaranteed to converge for any θ1, θ2 such that Δθ = θ1 − θ2 ≈ π, we must have that the eigenvalues of the 4 × 4 system matrix
above all lie within the unit circle. We find that the eigenvalues of the system matrix are
\[
-\frac{K\Delta T}{2}, \qquad 1, \qquad \frac{2 - K\Delta T \pm \sqrt{K^2\Delta T^2 - 12K\Delta T + 4}}{4} .
\]
For the non-unity eigenvalues to be inside the unit circle, we require 0 < KΔT < 2. Note that one eigenvalue is identically unity. This eigenvalue corresponds to the fact that equilibrium on the balanced set can be about any angle, not specifically θi = 0.
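The eigenvalue argument for the aligned case can be checked numerically; the sketch below (ours, with illustrative names and parameter values) builds the 2N × 2N matrix of (9) for the aligned linearization and inspects its spectral radius on either side of the Theorem 4 bound.

```python
import numpy as np

def aligned_system_matrix(N, KdT):
    """2N x 2N matrix of the buffered, linearized delayed model (9)."""
    b = KdT / N
    A1 = -b * (np.ones((N, N)) - np.eye(N))      # zero diagonal, -K*dT/N off-diagonal
    A2 = (1 + (N - 1) * b) * np.eye(N)           # diag(1 + (N-1)*K*dT/N)
    return np.block([[np.zeros((N, N)), np.eye(N)], [A1, A2]])

# Apart from the eigenvalue at 1 (a common rotation of all phases), the rest
# should stay strictly inside the unit circle exactly for -N/(N-1) < K*dT < 0.
N = 7
for KdT in (-0.5, -N / (N - 1) + 1e-3, -N / (N - 1) - 1e-3):
    mags = np.sort(np.abs(np.linalg.eigvals(aligned_system_matrix(N, KdT))))[::-1]
    print(f"K*dT = {KdT:+.4f}: two largest |eigenvalues| = {mags[0]:.4f}, {mags[1]:.4f}")
```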
5 Simulation Results

To demonstrate the previously derived bounds on gains, we provide simulations of illustrative cases. Figure 3 shows simulation results for two vehicles with identical initial conditions when KΔT is within, and near the critical limits of, the asymptotically stable, marginally stable and unstable regions for synchronization with no delay. As expected, the vehicles quickly synchronize in the asymptotically stable case and do not converge in the marginally stable case. Note that for the unstable case of synchronization the oscillators quickly reach an equilibrium separation.
Fig. 3. Simulation of the discrete time Kuramoto model with two vehicles for (a) KΔT = −1.8, d = 0, (b) KΔT = −2.0, d = 0, (c) KΔT = −2.2, d = 0, and (d) KΔT = −1.8, d = 1.
Additionally, results for the synchronized stability of two vehicles with one-step delay are shown in the final panel of Figure 3. Again the derived results are verified. As indicated in the previous section, adding delay to the discrete-time Kuramoto model decreases the range of KΔT for which the system will converge. Analysis for small systems and Monte Carlo simulations for larger systems show that this result is dependent on the number of oscillators, N, but not the number of delay steps d > 0. The behavior of the system demonstrating these bounds is shown in Figure 4 with and without time delay for combinations of N = 4, N = 20, d = 0, d = 4, d = 8, and KΔT = −0.7. Given the results of the analysis, we expect that, for a choice of KΔT, increasing the delay for a constant number of vehicles will not affect stability (Figure 4(a,b,c)), but that increasing the number of vehicles for a fixed delay will lead to an unstable system (Figure 4(b,d)). Clearly these results validate both the analytical results for delay and the conjectures.
i
4
N = 4, d = 0
2 0
10
4
i
i
20
(a)
30
20
(c)
30
10
4
N = 4, d = 0
10
N = 4, d = 0
2 0
40
i
2 0
4
40
20
(b)
30
40
N = 4, d = 0
2 0
10
20
(d)
30
40
Fig. 4. Simulation of the Kuramoto model with KΔT = −1.4 and (a) N = 4 and d = 0, (b) N = 4 and d = 4, (c) N = 4 and d = 8, (d) N = 20 and d = 4. This simulation indicates that stability is independent from the delay, d, but dependent on the number of vehicles, N .
The connection between the theory of discrete time oscillators and the motivating multivehicle applications is illustrated in Figure 5. Here, a group of seven nonholonomic vehicles, modeled as (1) with control (3), is simulated with two different values of KΔT . In Figure 5(a), 0 < KΔT < 2 so all vehicles align, whereas in 5(b) KΔT > 2 and the vehicles never reach alignment.
6 Conclusions and Directions of Future Work Motivated by the needs of multiagent systems in the presence of delayed, intermittent and asynchronous sensing and communication, we have presented
22
B.I. Triplett, D.J. Klein, and K.A. Morgansen K=0.03 T=1 N=7
(a)
K=0.5 T=5 N=7
(b)
Fig. 5. To illustrate the connection between the discrete time Kuramoto system and multiple vehicles, this plot shows N = 7 discrete time steering control vehicles with (a) a stable gain (KΔT < 2) and (b) an unstable gain (KΔT > 2).
here a discrete-time Kuramoto oscillator model. Analytical results have been shown for certain cases of stability of synchronized and balanced equilibria sets both when delay is not present and when it is. The nonlinearities of the problem make such analysis challenging, but the initial results presented here, both analytical and Monte Carlo, indicate that the phase coupled oscillator approach to multi-nonholonomic-vehicle systems with delay has potential. Several routes of investigation are being pursued beyond the stability analysis presented here. First, we would like to extend the models to systems with communication networks that are not complete. Such scenarios are more realistic in cases where data is obtained through sensors with limited range or with dynamic communication networks (e.g. [JLM03d]). With respect to the resulting delay from dynamic communication, one cannot usually assume that all data will be updated at the same time. In cases of a shared communication media, the data delays will be varied as in [GM05b, HM99]. As shown in continuous time systems, the pattern of communication has strong implications for stability. The combination of general dynamic communication systems with nonholonomic multivehicle models provides a rich context for future work. The applications here also have strong connections to graph theoretic techniques being applied to networked systems. In studies of continuous time systems, the convergence rate of the continuous time Kuramoto model has been shown to be dependent on λ2 (L), the second smallest eigenvalue of the graph Laplacian. This eigenvalues gives a measure of the connectedness of the communication network of the system [JMB04]. The communication network for our systems is described by a complete graph, meaning all agents are coupled to one another, which maximizes λ2 (L) for a given N . It has been shown that the maximum eigenvalue of the graph Laplacian determines convergence bounds for the linear average-consensus problem when delay is present in the
Discrete Time Kuramoto Models with Delay
23
communication network [KJH04, SM03b]. It is likely that similar mechanisms can be derived for the class of discrete time systems being explored here. In practice, multiple vehicle systems may have asynchronous communication dynamics. Asynchronous communication has been examined in the continuous time linear consensus problem [CMA05b], but the linear matrix tools are unavailable in the general phase coupled oscillator problem. The extension could be made to both the continuous and discrete time Kuramoto models for nonholonomic multivehicle control. In all, the use of Kuramoto oscillator models has provided fundamental results for use with continuous time, nonholonomic, multivehicle systems, and the extension to discrete time systems here shows initial promise and a potential for developing rigorous tools for systems with delays, asynchronicity, and dynamic communication systems is strong.
Acknowledgments This work was supported in part by NSF grant CMS-0238461 and in part by NSF grant BE-0313250.
Symmetries in the Coordinated Consensus Problem Ruggero Carli1 , Fabio Fagnani2 , Marco Focoso1 , Alberto Speranzon3 , and Sandro Zampieri1 1
2
3
Department of Information Engineering Universit` a di Padova Via Gradenigo 6/a, 35131 Padova, Italy {carlirug,zampi}@dei.unipd.it Dipartimento di Matematica Politecnico di Torino C.so Duca degli Abruzzi, 24,10129 Torino, Italy
[email protected] Department of Signals, Sensors and Systems Royal Institute of Technology Osquldasv¨ ag 10, SE-10044 Stockholm, Sweden
[email protected] Summary. In this paper we consider a widely studied problem in the robotics and control communities, called consensus problem. The aim of the paper is to characterize the relationship between the amount of information exchanged by the vehicles and the speed of convergence to the consensus. Time-invariant communication graphs that exhibit particular symmetries are shown to yield slow convergence if the amount of information exchanged does not scale with the number of vehicles. On the other hand, we show that retaining symmetries in time-varying communication networks allows to increase the speed of convergence even in the presence of limited information exchange.
1 Introduction The design of coordination algorithms for multiple autonomous vehicles has recently attracted large attention in the control and robotics communities. This is mainly motivated by that multi-vehicle systems have application in many areas, such as coordinated flocking of mobile vehicles [TJP03b, TJP03c], cooperative control of unmanned air and underwater vehicles [BLH01, BL02], multi-vehicle tracking with limited sensor information [MSJH04]. Typically the coordinating vehicles need to communicate data in order to execute a task. In particular they may need to agree on the value of certain coordination state variables. One expects that, in order to achieve coor-
P.J. Antsaklis, P. Tabuada (Eds.): Netw. Emb. Sens. and Cntrl., LNCIS 331, pp. 25–51, 2006. © Springer-Verlag Berlin Heidelberg 2006
26
R. Carli et al.
dination, the variables shared by the vehicles, converge to a common value, asymptotically. The problem of designing controllers that lead to such asymptotic coordination are called coordinated consensus problems, see for example [JLM03b, DD03, FM04a, OSM04a], and reference therein. The interest in these type of problems is not limited to the field of mobile vehicles coordination but also involves problems of synchronization [Str00a, MdV02, LFM05]. The problem that mostly has been studied in the literature is the design of control strategies that lead to consensus when each vehicle shares its information with vehicles inside a neighborhood [JLM03b, TJP03b] and the communication network is time-varying [JLM03b, TJP03c, LMA04b]. Robustness to communication link failure [CMB04b] and the effects of time delays [OSM04a] has also been considered recently. The consensus problem with time-invariant communication networks have been studied in [SBF05, FTBG05]. Randomly time-varying networks have also been analyzed in [HM04a]. In this paper we consider the consensus problem from a different perspective. We are interested to characterize the relationship between the amount of information exchanged by the vehicles and the achievable control performance. More precisely, if we model the communication network by a directed graph, in which each arc represents information transmission from one vehicles to another one, we can expect that good control design methods have to yield better performance for graphs that are more connected. In other words, we model the communication effort of each vehicle as the number of other vehicles it communicates with. In order to formally characterize the trade off between control performance and communication effort we make the following assumption: the graph topology is independent of the relative position of the vehicles, and the vehicles are described by an elementary first order model. The first hypothesis, that could be realistic in networks of coupled oscillators [MdV02], it is certainly less plausible in applications involving mobile vehicles. Nevertheless a clear analysis of such simplified model seems to be a necessary prerequisite in order to better understand more realistic scenarios. The motivation of describing vehicles with an elementary model is that it allows a quite complete and clean analysis of the problem. The paper is organized as follows. In section 2 we formally define the consensus problem. In particular we restrict to linear state feedbacks. We then introduce an optimal control problem where the cost functional is related to the convergence rate to the barycenter of the initial position of the vehicles. Under some assumptions, described in section 3, it turns out that weighted directed graphs for which the adjacency matrix is doubly stochastic, are communication graphs that guarantee consensus. Such graphs can be interpreted as a Markov chain and the convergence rate can be related to the mixing rate of the chain [Beh99]. The problem turns out to be treatable for a class of time-invariant graphs with symmetries. In section 4 we introduce the class of Cayley graphs defined on finite Abelian groups. Using tools for bounding the
Symmetries in the Coordinated Consensus Problem
27
mixing time of Markov chains defined on groups [Beh99, SC04] and algebraic properties of finite Abelian groups we derive a bound on the convergence rate to the consensus. The bound is a function of the number of vehicles and the incoming arcs in each vertex of the communication graph, that is the total information each vehicle receives. The main result shows that imposing symmetries in the communication graph, and thus in the control structure, keeping bounded the number of incoming arcs in each vertex, makes the convergence slower as the number of vehicles increases. In section 5 we consider random solutions. In these strategies the communication graph is chosen randomly at each time step over a family of graphs with the constraint that the number of incoming arcs in each vertex is constant. A simple mean square analysis, shows that, in this way, we can improve the convergence rate obtained with time-invariant communication graphs. This holds true even if the random choice is restricted to families of graphs with symmetries. A similar analysis has been proposed in [HM04a] where however a different model of randomly time varying communication graph was proposed and less neat results were obtained. In section 6 some computer simulations are reported.
2 Problem formulation Consider N > 1 vehicles whose dynamics are described by the following discrete time state equations x+ i = xi + ui
i = 1, . . . , N
where xi ∈ R is the state and represents the vehicle position, x+ i is the updated state and ui ∈ R is the control input. More compactly we can write x+ = x + u where x, u ∈ RN . The goal is to design a feedback control u = Kx,
K ∈ RN ×N
yielding the consensus, namely a control that asymptotically makes all the states xi converging to the same value. More precisely, our objective is to obtain a feedback matrix K such that the closed loop system x+ = (I + K)x , for any initial condition x(0) ∈ RN , satisfies lim x(t) = αv
t→∞
where v := (1, . . . , 1)T and where α is a scalar depending on x(0).
28
R. Carli et al.
Without further constraints the above problem is not particularly interesting because it admits completely trivial solutions. Indeed, if I + K is asymptotically stable (condition which can be trivially obtained by choosing, for instance, K = −I) then the rendezvous problem is clearly solved with α = 0 for all x(0). This is however an inefficient solution since, in this way, nonzero initial states having equal components (in which the rendezvous has already occurred) would produce a useless control action driving all the states to zero. In the following we will impose the condition that the subspace generated by the vector v consists of all equilibrium points, and this happens if and only if Kv = 0 .
(1)
From now on, when we say that K solves the rendezvous problem, we will assume that condition (1) is verified. It is easy to see that in this way the rendezvous problem is solved if and only if the following three conditions hold: (A)the only eigenvalue of I + K on the unit circle is 1; (B)the eigenvalue 1 has algebraic multiplicity one (namely it is a simple root of the characteristic polynomial of I + K) and v is its eigenvector; (C)all the other eigenvalues are strictly inside the unit circle. It is clear that, in order to achieve this goal, it is necessary that the vehicles exchange their position. If no constraint is imposed on the amount of information exchanged, it is still quite easy to solve the above problem. In order to describe the information exchange associated to a specific feedback K it is useful to introduce certain graph theoretic ideas. To any feedback K we associate a directed graph GK with set of vertices {1, . . . , N } in which we add an arc from j to i whenever in the feedback matrix K the element Kij = 0, meaning that in the control of xi it is used the knowledge of xj . The graph GK is said to be the communication graph associated with K. Conversely, given any directed graph G with set of vertices {1, . . . , N }, we say that a feedback K is compatible with G if GK is a subgraph of G (we will use the notation GK ⊆ G). We will say that the rendezvous problem is solvable on a graph G if there exists a feedback K compatible with G and solving the rendezvous problem. From now on we will always assume that G contains all loops (i, i): each system has access to its own state. We would like to have a way to measure the performance of a given control scheme achieving rendezvous. The way to quantify this performance is by no means unique. Suppose we have defined a cost functional R = R(K) to be minimized. We can then define RG = min{R(K) | GK ⊆ G} . We expect a meaningful cost functional to be sensitive to the amount information exchanged by the vehicles, in other words we would like RG to show
Symmetries in the Coordinated Consensus Problem
29
certain range of variation among all the possible communication graphs that can be considered. The simplest performance index is related to the speed of convergence towards the equilibrium point. Let P be any matrix such that P v = v and its spectrum (set of the eigenvalues) σ(P ) is contained in the closed disk centered in 0 and having radius 1. Define ρ(P ) =
1 if dim ker(P − I) > 1 max{|λ| : λ ∈ σ(P ) \ {1}} if dim ker(P − I) = 1 ,
(2)
we can then take R(K) = ρ(I + K). The index R(K) describes the exponential rate of convergence of x(t) towards the equilibrium point. However, such index does not show in general the desired sensitivity to the communication constraints. Indeed, if the graph G is described by 1 → 2 → · · · → N , and we consider the controller −1 1 0 · · · 0 0 0 −1 1 · · · 0 0 .. .. .. . . .. .. K= . . . . . . 0 0 0 · · · −1 1 0 0 0 ··· 0 0 which fulfill condition (1), we obtain R(K) = 0. Thus, adding any communication edge will not lower this index. It is clear however that the above feedback have worse performance than others using a richer communication graph. This means that the spectral radius is not sufficient to highlight these differences. We then need to refine the model. Since we are considering autonomous vehicles it seems reasonable to consider the cost in terms of control effort, as for instance ∞
J(K) :=
||u(t)||2 .
t=0
Then the performance index consisting on the pair (ρ(I + K), J(K)) would better describe the problem. However this cost is hard to be analyzed. Therefore we consider a simpler index which is related to the previous one, namely 2
∞
u(t)
J (K) := t=0
∞
≤
||u(t)||2 .
t=0
Notice that, in our case, we have that J = ||x(∞) − x(0)||2 and so arg min{||x(∞) − x(0)||2 : x(∞) = αv , α ∈ R} =
1 T v x(0) v . N
30
R. Carli et al.
In this paper we will consider as performance index the pair (ρ(I +K), J (K)), which is relevant and treatable. Notice that all feedback strategies K producing rendezvous points that are the barycenter of the initial positions of the vehicles, namely such that lim x(t) =
t→∞
1 T v x(0) v , N
(3)
are all optimal with respect to the index J . This feedback maps are called consensus controls [OSM04a]. When K yields such a behavior, it will be called a barycentric controller. It is easy to see that K is a barycentric controller if and only if vT K = 0 . (4) Thus if we restrict to barycentric controllers that satisfy (1) then the performance index of interest is ρ(I + K). Moreover if we consider the displacement from the barycenter Δ(t) = x(t) −
1 T v x(0) v , N
it is immediate to check that, Δ(t) satisfy the same closed loop equation than x(t). In fact we have Δ(t + 1) = (I + K)Δ(t) . (5) Notice that the initial condition satisfies < Δ(0), v >= 0 .
(6)
Hence the asymptotic behavior of our rendezvous control problem can equivalently be studied by looking at the evolution (5) on the hyperplane characterized by the condition (6). The index ρ(I + K) seems, in this context, appropriate for analyzing the performance.
3 Stochastic and doubly stochastic matrices If we restrict to control laws K making I + K a nonnegative matrix, condition (1) imposes that I +K is a stochastic matrix. Since the spectral structure of such matrices is quite well known, this observation allows to understand easily what are the conditions on the graph that will ensure the solvability of the rendezvous problem. To exploit this we need to recall some notation and results on directed graphs (the reader can further refer to textbooks on graph theory such as [GR01a]). Fix a directed graph G with set of vertices V and set of arcs E ⊆ V ×V . The adjacency matrix A is a {0, 1} valued square matrix indexed by the elements in V defined by (A)ij = 1 if and only (i, j) ∈ E. Define moreover the in-degree
Symmetries in the Coordinated Consensus Problem
31
of a a vertex i as deg(i) = j (A)ji . Vertices of in-degree equal to 0 are called sinks. A graph is called in-regular (of degree k) if each vertex has in-degree equal to k. A path in G consists of a sequences of vertices i1 i2 . . . . . . ir such that (i , i +1 ) ∈ E for every = 1, . . . , r − 1; i1 (resp. ir ) is said to be the initial (resp. terminal) vertex of the path. A vertex i is said to be connected to a vertex j if there exists a path with initial vertex i and terminal vertex j. A directed graph is said to be connected if given any pair of vertices i and j, at least one of the two is connected to the other. A directed graph is said to be irreducible if given any pair of vertices i and j, i is connected to j. Given any directed graph G we can consider its irreducible components, namely strongly connected subgraphs Gk with set of vertices Vk ⊆ V (for k = 1, . . . , s) and set of arcs Ek = E ∩ (Vk × Vk ) such that the sets Vk form a partition of V . The various components may have connections among each other. We define another directed graph TG with set of vertices {1, . . . , s} such that there is an arc from k1 to k2 if there is an arc in G from a vertex in Vk1 to a vertex in Vk2 . It can easily be shown that TG is always a union of disjoint trees. Standard results on stochastic matrices [Gan59, page 88 and 99] yield the following proposition. Proposition 1. Let G be a directed graph. The following conditions are equivalent: (i) The rendezvous problem is solvable on G. (ii) TG is connected and has only one sink vertex. Moreover if the above conditions are satisfied, any K such that I + K is stochastic, GK = G and Kii = −1 for every i ∈ VG is a possible solution. Among all possible solutions of the rendezvous problem, when the graph G satisfies the properties of Proposition 1, a particularly simple one can be written in terms of the adjacency matrix A of G. Consider indeed the matrix P: (A)ji deg(i) if deg(i) > 0 Pij = 0 if deg(i) = 0 then K = P − I solves the rendezvous problem. Notice the explicit form that the closed loop system assumes in this case: x+ i = xi +
1 deg(i)
(xj − xi ) .
(7)
j=i (j,i)∈E
If we restrict to control laws K making I +K a nonnegative matrix, conditions (1) and (4) are equivalent to the fact that I + K is doubly stochastic. This remark permits to obtain the following result (see [Gan59]). Proposition 2. Let G be a directed graph. The following conditions are equivalent:
32
R. Carli et al.
(i) The rendezvous problem on G admits a barycentric controller. (ii) G is irreducible Moreover if the above conditions are satisfied, any K such that I +K is doubly stochastic, GK = G and Kii = −1 for every i ∈ V is a possible solution. Notice that in the special case when the graph G is undirected, namely (i, j) ∈ E if and only if (j, i) ∈ E, it follows that we can find solutions K to the rendezvous problem that are symmetric and that therefore are automatically doubly stochastic. One example is given by (7). We expect the spectral radius to be a meaningful cost functional when restricted to feedback controllers K such that I + K is doubly stochastic. More precisely we conjecture that, by taking ρds G = min{ρ(K) | K doubly stochastic, GK ⊆ G} , G1 ⊂ G2 implies that ρG1 > ρG2 . However we have been not able to prove this so far.
4 Symmetric Controllers The analysis of the rendezvous problem and the corresponding controller synthesis problem becomes more treatable if we limit our search to graphs G and matrices K exhibiting symmetries. We will show however that these symmetries yield rather poor performance in terms of convergence rate. In order to treat symmetries on a graph G in a general setting, we consider Cayley graphs defined on Abelian groups. Let G be any finite Abelian group of order |G| = N , and let S be a subset of G which contains the zero. The Cayley graph G(G, S) is the directed graph with vertex set G and arc set EG(G,S) = {(g, h) : h − g ∈ S} . Notice that a Cayley graph is always in-regular: the in-degree of each vertex is equal to |S|. Notice also that irreducibility can be checked at an algebraic level: it is equivalent to the fact that the set S generates the group G which means that any element in G can be expressed as a finite sum of (not necessarily distinct) elements in S. If S is such that −S = S we say that S is inverseclosed. In this case the graph obtained is undirected. A Cayley graph supports (stochastic) matrices; to construct them it is sufficient to start from any function π : G → R such that π(g) = 0 if g ∈ S. Then define P by Pgh = π(g − h). Such a matrix will be called a Cayley matrix (adapted to the Cayley graph G(G, S)). We will also say that P is the Cayley matrix generated by the function π. To have candidate solutions to our rendezvous problem, we of course need something more. Notice that if it holds g π(g) = 1, then P satisfies both relations P v = v and v T P = v T . In the special case when π is a probability distribution (i.e. π(g) ≥ 0 for every
Symmetries in the Coordinated Consensus Problem
33
g) P is thus automatically a doubly stochastic matrix: such matrices P will be called Cayley stochastic matrices and for the rest of this section we will mostly work with them. Among the many possible choices of the probability distribution π, there is one which is particularly simple: π(g) = 1/|S| for every g ∈ S. In this case we have that P =
1 A, |S|
where A is the adjacency matrix of the Cayley graph G(G, S). Example 1. Let us consider the group ZN of integers modulo N and the Cayley graph G(ZN , S) where S = {−1, 0, 1}. Notice that in this case S is inverseclosed. Consider the uniform probability distribution π(0) = π(1) = π(−1) = 1/3 The corresponding Cayley stochastic matrix is given by 1/3 1/3 0 0 · · · 0 0 1/3 1/3 1/3 1/3 0 · · · 0 0 0 . P = .. .. .. . . .. .. .. . .. . . . . . . . 1/3 0
0 0 · · · 0 1/3 1/3
Notice that in this case we have two symmetries. The first is that the graph is undirected and the second that the graph is circulant. These symmetries can be seen in the structure of the transition matrix P which, indeed, results both symmetric and circulant. The idea of considering Cayley graphs and Cayley stochastic matrices on Abelian groups is helpful in order to compute, or at least bound, the cost functional ρ(P ) defined in (2). We can indeed consider the minimum ρCayley G of the spectral radius ρ(P ) as P varies among the stochastic Cayley matrices compatible with the given Cayley graph G. It will turn out that ρCayley can G ds be evaluated or estimated in many cases and clearly it holds ρCayley ≥ ρ G . G Before continuing we give some short background notions on groups characters and on harmonic analysis on groups, upon which the main results are built. 4.1 Cayley Stochastic Matrices on Finite Abelian Groups Let G be a finite Abelian group of order N , and let C ∗ be the multiplicative group of the nonzero complex numbers. A character on G is a group homomorphism χ : G → C ∗ (χ(g + h) = χ(g)χ(h) for all g, h ∈ G). Since we have that χ(g)N = χ(N g) = χ(0) = 1 , ∀g ∈ G
34
R. Carli et al.
it follows that χ takes values on the N th -roots of unity. The character χ0 (g) = 1 for every g ∈ G is called the trivial character. The set of all characters of the group G forms an Abelian group with respect to the pointwise multiplication: it is called the character group and ˆ The trivial character χ0 is the zero of G. ˆ It can be shown that denoted by G. ˆ is isomorphic to G, in particular its cardinality is N . If we consider the G vector space C G of all functions from G to C with the canonical Hermitian form 1 f1 (g)f2 (g) , < f1 , f2 >= N g∈G
ˆ is an orthonormal basis of C G . it follows that G The Fourier transform of a function f : G → C is defined as ˆ → C , fˆ(χ) = fˆ : G
χ(−g)f (g) . g∈G
Example 2. Consider again the group ZN . The characters are given by 2π
χ (j) = ei N
j
,
j ∈ ZN ,
= 0, . . . , N − 1 .
The correspondence → χ yields an explicit isomorphism between ZN and ˆ N . Given any function f : ZN → C, its Fourier transform is given by Z fˆ(χ ) =
N −1
2π
f (j) e−i N
j
.
j=0
The cyclic case is instrumental to study characters for any finite Abelian group. Indeed it is a well known result in algebra [Hun74], which states that any finite Abelian group G is isomorphic to a finite direct sum of cyclic groups. In order to study characters of G we can therefore assume that G = ZN1 ⊕ · · · ⊕ ZNr . It can be shown [Beh99] that the characters of G are precisely the ˆ i for i = 1, . . . , r. maps (g1 , g2 , . . . , gr ) → χ1 (g1 )χ2 (g2 ) · · · χr (gr ) with χi ∈ G ˆN ⊕ · · · ⊕ Z ˆ N . Fix now a Cayley ˆ is (isomorphic to) Z In other terms, G 1 r graph on G and a Cayley matrix P generated by the function π : G → R. The spectral structure of P is very simple. To see this, first notice that P can be interpreted as a linear function from C G to itself: simply considering, for f ∈ C G , (P f )(g) = h Pgh f (h). Notice that the trivial character χ0 ˆ corresponds to the vector v having all components equal to 1. For every χ ∈ G, it holds (P χ)(g) =
Pgh χ(h) = h∈G
π(g − h)χ(h) = h∈G
π(h)χ(g − h) = π ˆ (χ)χ(g) . h∈S
Hence, χ is an eigenfunction of P with eigenvalue π ˆ (χ). Since the characters form an orthonormal basis it follows that P is diagonalizable and its spectrum is given by
Symmetries in the Coordinated Consensus Problem
35
ˆ . σ(P ) = {ˆ π (χ) | χ ∈ G} We can think of characters as linear functions from C to C G : χ : z → zχ , and their adjoint as linear functionals on C |G| : χ∗ : f →< f, χ > . With this notation, χχ∗ is a linear function from C G to itself, projecting on the eigenspace generated by χ. In this way, P can be represented as π ˆ (χ)χχ∗ .
P = ˆ χ∈G
ˆ → C the matrix Conversely, it can easily be shown that given any θ : G θ(χ)χχ∗ ,
P = ˆ χ∈G
ˆ is a Cayley matrix generated by the Fourier transform π = θ. Suppose now P is the closed loop matrix of a system x+ = P x. The displacement from the barycenter can be represented as Δ = (I − χ0 χ∗0 )x . As we had already remarked Δ is governed by the same law (see equation (5)) Δ+ = P Δ . The initial condition Δ0 is characterized by < Δ0 , χ0 >. Notice that π ˆ (χ)t χ < Δ0 , χ > .
Δt = P t Δ 0 = ˆ χ∈G
Hence,
|ˆ π (χ)|t | < Δ0 , χ > |2 .
||Δt ||2 = ˆ χ∈G
This shows in a very simple way, in this case, the role of the spectral radius ρ(P ) = max |ˆ π (χ)| χ=χ0
in the convergence performance.
36
R. Carli et al.
4.2 The Spectral Radius for Stochastic Cayley Matrices The particular spectral structure of stochastic Cayley matrices allows to obtain asymptotic results on the behavior of the spectral radius ρ(P ) and therefore on the speed of convergence of the corresponding control scheme. Let us start with some examples. Example 3. Consider the group ZN and the Cayley graph G(ZN , S), where S = {0, 1}. Consider the probability distribution π on S described by π(0) = 1 − k , π(1) = k . where k ∈ [0, 1]. The Fourier transform of π is 2π
χ(−g)π(g) = 1 − k + ke−i N ,
ˆ (χ ) = π
= 1, . . . , N − 1 .
g∈S
In this case it can be shown that we have rendezvous stability if and only if 0 < k < 1 and that the rate of convergence is ρ(P ) =
2π
1 − k + ke−i N
max
1≤ ≤N −1
.
Hence, we have that ρCayley = min G k
The optimal
and k are ρCayley G
=
2π
max
1≤ ≤N −1
1 − k + ke−i N
.
= 1 and k = 1/2 yielding 1 1 + cos 2 2
2π N
1 2
1−
π2 1 2 N2
where the last approximation is meant for N → ∞. Example 4. Consider the group ZN and the Cayley graph G(ZN , S), where S = {−1, 0, 1, }. For the sake of simplicity we assume that N is even; very similar results can be obtained for odd N . Consider the probability distribution π on S described by π(0) = k0 , π(1) = k1 , π(−1) = k−1 . The Fourier transform of π is in this case given by 2π
2π
χ(−g)π(g) = k0 + k1 e−i N + k−1 ei N ,
π ˆ (χ ) =
= 1, . . . , N − 1 .
g∈S
We thus have ρCayley = G
min
max
(k0 ,1 ,k−1 ) 1≤ ≤N −1
2π
2π
k0 + k1 e−i N + k−1 ei N
.
Symmetries in the Coordinated Consensus Problem
37
Symmetry and convexity arguments allow to say that a minimum is for sure of the type k1 = k−1 . With this assumption the cost functional reduces to ρ(P ) = max k1
1 − 2k1 1 − cos
2π N
, |1 − 4k1 |
.
The minimum is achieved for k0 =
1 − cos 3 − cos
and we have ρCayley = G
2π N 2π N
,
k1 = k−1 =
1 + cos 3 − cos
2π N 2π N
1 3 − cos
1 − 2π 2
1 N2
2π N
(8)
where the last approximation is meant for N → +∞. Notice the asymptotic behavior of previous two examples: the case of communication exchange with two neighbors offer a better performance. However, in both cases ρCayley → 1 for N → +∞. This fact is more general: if we keep G bounded the number of incoming arcs in a vertex, the spectral radius for Abelian stochastic Cayley matrices will always converge to 1. This is the content of our main result. Theorem 1. Let G be any finite Abelian group of order N and let S ⊆ G be a subset with |S| = ν + 1. Let π be a probability measure associated to the Cayley graph G(G, S). Then −2/ν ρCayley , G(G,S) ≥ 1 − CN
where C > 0 is a constant independent of G and S. Proof. See Appendix A. The consequence of theorem 1 we have that, if we consider any sequence of Cayley stochastic matrices PN adapted on Abelian Cayley graphs (GN , SN ) such that |GN | = N and |SN | = o(ln N ) then, necessarily, ρ(PN ) converges to 1. Notice that in Example 4 we have ν = 2 and an asymptotic behavior ρCayley 1 − 2π 2 N −2 while the lower bound of Theorem 1 is, in this case, G 2 −1 1 − 2π N . Can we achieve the bound performance? In other words, is the lower bound we have just found, tight? The following example shows that this is the case. Example 5. Consider the group ZνN and the Cayley graph G(ZνN , S), where S = {0, e1 , . . . , eν } where ej is the vector with all elements equal to 0 except a 1 in position j. Consider the probability distribution π on S described by
38
R. Carli et al.
π(0) = π(ei ) =
1 , ∀i = 1, . . . , ν . ν+1
The Fourier transform of π is
χ(−g)π(g) =
π ˆ (χ 1 , . . . , χ ν ) = g∈S
where
j
ν
2π 1 1+ e−i N j . ν+1 j=1
= 1, . . . , N − 1 , j = 1, . . . , ν. We thus have that for this graph ν
ρG =
2π 1 max ν+ e−i N ν + 1 1≤ j ≤N −1 j=1
j
.
It is easy to see that the above min-max is reached by j = 1 for every j = 1, . . . , ν or when ∃h ∈ {1, ν} such that h = 1 and for all j = h we have j = 0. This yields the value ρG =
2ν ν2 + 1 + cos 2 (ν + 1) (ν + 1)2
1 2
2π N
1−C
π2 π2 = 1 − C ν 2/ν 2 N (N )
where C is a constant. Notice we have exactly obtained the lower bound proven above.
5 Random Communication Graphs 5.1 Random Circulant Communication Graph A direct graph G = (V, E) is said to be a circulant directed graph if (i, j) ∈ E implies that (i + p mod N, j + p mod N ) ∈ E where p ∈ N. Observe that in a direct circulant graph each vertex has the same in-degree. In the following sometimes we will refer to the in-degree of the graph, meaning the in-degree of any of the vertices of the graph. Let G¯ the set of all circulant directed graphs G = (V, E) with in-degree ν +1 and such that the corresponding adjacency matrix A has aii = 0, ∀ 1 ≤ i ≤ N , meaning that, as we said perviously, each vehicle has access to its own state. In this strategy we suppose that at each time instant t the communication graph G(t) is chosen randomly from the set G¯ accordingly to a uniform distribution. This is equivalent to impose that the adjacency matrix A(t) of the communication graph G(t) is such that ν
A(t) = I +
Π αi (t)
i=1
where Π is the following circulant matrix [Dav79]
Symmetries in the Coordinated Consensus Problem
0 1 0 ··· 0 0 1 ··· 0 0 0 ··· Π= . . . . .. .. .. . . 0 0 0 · · · 1 0 0 ···
0 0 0 .. .
0 0 0 .. .
39
0 1 00
and where {α1 (t)} , . . . , {αν (t)} are ν independent sequences of independent random variables uniformly distributed over the alphabet X = {1, 2, . . . , N }. We consider the following control law u(t) ν
u(t) =
ki Π αi (t)
k0 I +
x(t).
(9)
i=1
The close loop system then becomes ν
x(t + 1) =
(1 + k0 )I +
ki Π αi (t)
x(t)
(10)
i=1
The system (10) can be regarded as a Markov jump linear system [BCN04]. Notice that the state transition matrix in (10) is a circulant matrix and since Πv = v we have that the conditions (1) is satisfied. If we restrict to non-negative matrices then we have that the feedback gains k0 , . . . , kν are ν such that 1 + k0 , k1 , . . . , kν ≥ 0 and k0 + i=1 ki = 0. We conclude this section by observing that this strategy has an evident drawback from an implementation point of view : the same random choice, done at every time instance, needs to be known by all vehicles. A possible way to overcome this limitation is by using a predetermined pseudo-random sequence whose starting seed is known to everybody. 5.2 Random Communication Graph with Bounded In-Degree The strategy that we consider in this subsection is similar to the one presented in the previous subsection, but it overcomes the implementation issues. In this case we do not limit the time-varying communication graph G(t) to be circulant. We assume that each vehicle, besides knowing its own position, receives the state of ν vehicle chosen randomly and independently. Because of this it can happen that the resulting communication graph G(t) could have multiple arcs connecting the same pair of nodes. The feedback control in this case is ν
u(t) = k0 x(t) +
ki Ei (t)x(t)
(11)
i=1
where {Ei (t)}, i = 1, . . . , ν, are ν independent sequences of independent random processes taking value on the set of matrices
40
R. Carli et al. N ×N
Υ := E ∈ {0, 1}
: Ev = v
and equally distributed in such a set. Roughly speaking, the set Υ is constituted by all matrices with entries 0 or 1 that have exactly one element equal to 1 in each row. Since Ei (t)v = v, for all i = 1, . . . , ν and all t ≥ 0, we have that, as in the previous case, the in order for condition (1) to hold and to have a non-negative matric, the feedback gains k0 , . . . , kν must satisfy ν 1 + k0 , k1 , . . . , kν ≥ 0 and k0 + i=1 ki = 0. The close loop system becomes ν
ki Ei (t)x(t) .
x(t + 1) = (1 + k0 )I +
(12)
i=1
Notice that also the system (12) can be regarded as Markov jump linear system. 5.3 Convergence and Performance Analysis In order to study the asymptotic behavior of the two previous strategy, it is convenient to introduce the variable y(t) that is defined in the following way. Consider 1 (13) Y = I − vv T N and let y(t) = Y x(t). (14) Notice that the component yi (t) of y(t), represents, by this definition, the displacement of xi (t) from the barycenter of the initial position of the vehicles, at the time instant t. Clearly we have that lim x(t) = αv
(15)
lim y(t) = 0 .
(16)
t→+∞
if and only if
t→+∞
Note that it holds both Y Π = ΠY and Y E = Y EY where E is any matrix in Υ . Then pre-multiplying (10) and (12) by Y we obtain ν
y(t + 1) =
(1 + k0 )I +
ki Π αi (t)
y(t)
(17)
i=1
if we consider the first strategy and ν
y(t + 1) = F2 (t) =
ki Ei (t) y(t)
(1 + k0 )I +
(18)
i=1
In order to study the asymptotic properties of y(t) we consider E y(t) 2 where the expectation is taken over the set of the graph in which G(t) is chosen. We then have the following definition.
Symmetries in the Coordinated Consensus Problem
41
Definition 1 ([FL02b]). The jump linear Markov systems (17) and (18) are said to be asymptotically second moment stable if for any fixed y(0) ∈ RN it holds (19) lim E y(t) 2 = 0. t→+∞
We can then state the two main results. Proposition 3. The system (17) is asymptotically second moment stable for any initial condition y(0) and for k0 = −ν/(1 + ν) and ki = 1/(1 + ν), 1 ≤ i ≤ ν. Moreover, for these values of ki we obtain the fasted convergence rate and we have that E
y(t)
2
1 1+ν
=
t
y(0) 2 .
(20)
Proof. See Appendix B. Proposition 4. The system (18) is asymptotically second moment stable for any initial condition y(0) and for k0 = −νN/(N + N ν − 1) and ki = N/(N + N ν − 1), 1 ≤ i ≤ ν. Moreover, for these values of ki we obtain the fastest convergence rate and we have that E
y(t)
2
=
N −1 N (1 + ν) − 1
t
y(0) 2 .
(21)
Proof. See Appendix C. Remark 1. Notice that the convergence rate obtained using the strategy with random circulant communication graphs it does not depend on N and ensures convergence even if ν = 1, which corresponds to the case when each vehicle receives the state of at most one another vehicle. The strategy with random communication graphs with in-degree bounded, attains the same converge rate of the first only from N → +∞. However notice that both strategies have a better convergence rate than the one obtained using time-invariant communication graphs. This increase of performance has obtained by randomizing the choice over a pre-assigned family of graphs. Remark 2. Notice that by using random communication graphs with bounded in-degree the vehicles, in general, will not reach the consensus at the barycenter of the initial configurations, since vT
ν
ki Ei (t)
k0 I +
= vT
(22)
i=1
It is meaningful to study where the vehicles will reach consensus with respect to the barycenter of the initials conditions. In order to carry out this analysis we consider the mean square distance from the barycenter of the initials conditions namely we consider x(t)−v T x(0)v/N . We have the following result.
42
R. Carli et al.
Proposition 5. The mean square distance to the barycenter of the initial configuration of the vehicle is bounded, namely, lim E (x(t) − bv)(x(t) − bv)T = α
t→∞
where α=
(1 − N )
1 x(0)T (I − Ω)x(0) N
ν 2 i=1 ki ν 2 1=1 ki − k0 N (k0
+ 2)
and where b = v T x(0)/N , and Ω = vv T /N and where the ki make the system (18) asymptotically second order stable. Proof. See Appendix D. Notice that when N → +∞ then the mean square distance to the νN barycenter tends to zero. If we use the control gains k0 = − N +N ν−1 and k1 = · · · = kν = N/(N + N ν − 1) then we have that lim E (x(t) − bv)(x(t) − bv)T =
t→∞
1 x(0)T (I − Ω)x(0) . N (N (1 + ν) − 1)
Notice that for fixed N if ν grows the mean square distance to the barycenter becomes smaller.
6 Simulation Results In Figures 1-3, some computer simulation results are reported. The simulations show the time evolution of the state of n = 10 vehicles when they can exchange the state with at most one other vehicle, thus in this case we have ν = 1. The initial condition for all the three simulations is the same and the barycenter coordinate is 2.76. Figure 1 refers to the state evolution for time-invariant communication graph when the Cayley graph is the ring graph. Figure 2 refers to random circulant communication graph and Figure 3 refers to random communication graph with bounded in-degree. Notice that the random strategies, as expected, exhibit a faster rate convergence and that the random communication graph with bounded in-degree does not converge to the barycenter (in this case the consensus is reached at 0.71).
7 Conclusions In this paper we have analyzed the relationship between the communication graph and the convergence rate to the rendezvous point for a team of vehicles. Modelling the communication graph with a Cayley graph defined on Abelian groups, namely a graph with symmetries, we have been able to bound the
Symmetries in the Coordinated Consensus Problem
43
10 8 6
Positions xi(t)
4 2 0 2 4 6 8
0
5
10 Time t
15
20
Fig. 1. Time-invariant Cayley graph with ν = 1. 10 8 6
Positions xi(t)
4 2 0 2 4 6 8
0
5
10 Time t
15
20
Fig. 2. Random circulant communication graph.
convergence rate. In particular we have proved that the convergence to the barycenter of the initial configuration becomes slower and slower as the number of vehicles increases if the amount of information received by each vehicle remain constant. We have also considered some particular random strategies that consist in choosing randomly a communication graph in a predefined family of graphs. In particular we have considered the circulant graphs and graphs with bounded in-degree. It turns out that choosing randomly over
44
R. Carli et al. 10 8 6
Positions xi(t)
4 2 0 2 4 6 8
0
5
10 Time t
15
20
Fig. 3. Random communication graph with bounded in-degree.
such family graphs we obtain higher performances then using time-invariant communication graphs. In [CFSZ] the analysis has been extended to random Cayley graphs and to communication graphs where the information is quantized.
Acknowledgments. Part of this research was supported by European Commission through the RECSYS project IST-2001-32515.
A Proof of Theorem 1 In order to prove theorem 1 we need the following technical lemma. Lemma 1. Let T = R/Z ∼ = [−1/2, 1/2[. Let 0 ≤ δ ≤ 1/2 and consider the hypercube V = [−δ, δ]k ⊆ Tk . For every Λ ⊆ Tk such that |Λ| ≥ δ −k , there exist x ¯1 , x ¯2 ∈ Λ with x ¯1 = x ¯2 such that x ¯1 − x ¯2 ∈ V . Proof. For any x ∈ T and δ > 0, define the following set L(x, δ) = [x, x + δ] + Z ⊆ T . Observe that for all y ∈ T, L(x, δ) + y = L(x+y, δ). Now let x ¯ = (¯ x1 , . . . , x ¯k ) ∈ Tk and define
Symmetries in the Coordinated Consensus Problem
45
k
L(¯ xi , δ) .
L(¯ x, δ) = i=1
Also in this case we observe that L(¯ x, δ)+ y¯ = L(¯ x + y¯, δ) for every y¯ ∈ Tk . Consider now the family of subsets {L(¯ x, δ), x ¯ ∈ Λ} . We claim that there exist x ¯1 and x ¯2 in Λ such that x ¯1 = x ¯2 and such that L(¯ x1 , δ) ∩ L(¯ x2 , δ) = ∅. Indeed, if not, we would have |Λ|δ k < 1 which contradicts our assumptions. Notice finally that L(¯ x1 , δ) ∩ L(¯ x2 , δ) = ∅ ⇔ L(0, δ) ∩ L(¯ x2 − x ¯1 , δ) = ∅ ⇔ x ¯2 − x ¯1 ∈ V .
We can now prove theorem 1. Proof. With no loss of generality we can assume that G = ZN1 ⊕ . . . ⊕ ZNr . Assume we have fixed a probability distribution π concentrated on S. Let P be the corresponding stochastic Cayley matrix. It follows from previous considerations that the spectrum of P is given by N1 −1
σ(P ) =
Nr −1
... k1 =0
2π
2π
π(k1 , . . . , kr )ei N1 k1 1 · · · ei Nr kr
r
:
s
∈ ZNs ∀s = 1, . . . , r
kr =0
Denote by k¯j = (k1j , . . . , krj ), for j = 1, . . . , ν, the non-zero elements in S, and consider the subset r
Λ= i=1
r
ki1 i kiν i ,..., Ni Ni i=1
+ Zk |
s
∈ ZNs for 1 ≤ s ≤ r
⊆ Tν .
Let δ = ( i Ni )−1/ν and let V be the corresponding hypercube defined as in Lemma 1. We claim that there exists ¯ = ( 1 , . . . r ) ∈ ZN1 × · · · × ZNr , ¯ = 0 such that r r kiν i ki1 i ,..., + Zk ∈ V . N N i i i=1 i=1 Indeed, if there exist two different ¯ , ¯ ∈ ZN1 × · · · × ZNr such that r i=1
r
kiν i ki1 i ,..., Ni Ni i=1
+ Zν =
r i=1
r
kiν i ki1 i ,..., Ni Ni i=1
+ Zν ,
46
R. Carli et al.
then we have that, r
r
i=1
ki1 i kiν i ,..., Ni Ni i=1
+ Zν = 0 ,
where ¯ = ¯ − ¯ = 0 . On the other hand, if different elements in ZN1 × · · · × ZNr always lead do distinct elements in Λ, then, |Λ| = i Ni = δ −ν . We can then apply Lemma 1 and conclude that there exist two different ¯ , ¯ ∈ ZN × · · · × ZN such that 1 r r i=1
r
kiν i ki1 i ,..., Ni Ni i=1
Hence,
r
+ Zν −
r
ki1 i kiν i ,..., Ni Ni i=1
i=1
r
r i=1
kiν i ki1 i ,..., Ni Ni i=1
+ Zν ∈ V .
+ Zν ∈ V ,
where ¯ = ¯ − ¯ = 0 . Consider now the eigenvalue Nr −1
N1 −1 N2 −1
...
λ=
2π
π(k1 , . . . , kr )ei( N1 k1
kr =0 ν
k1 =0 k2 =0
2π
j
π(k1j , . . . , krj )ej( N1 k1
= π(0, . . . 0) +
2π 1+ N 2
2π 1+ N 2
k2j
k2
2π 2 +···+ Nr
2π 2 +···+ Nr
kr
krj
r)
r)
.
j=1
Its norm can be estimated as follows ν
|λ| ≥ π(0, . . . 0) +
π(k1j , . . . , krj ) cos
j=1 ν
≥ π(0, . . . 0) +
π(k1j , . . . , krj ) −
j=1
ν
2π j k N1 1
1
+
2π j k N2 2
π(k1j , . . . , krj )2π 2
j=1
2
+ ··· +
2π j k Nr r
r
1 1 ≥ 1 − 2π 2 2/ν N 2/ν N
and so we can conclude.
B Proof of Proposition 1. As previously we observe that E [ y(t) ] E[y(t)y T (t)]. We have that
2
= tr E y(t)y T (t) . Let P (t) =
Symmetries in the Coordinated Consensus Problem
P (t + 1) = E y(t + 1)y T (t + 1) ν
= E (1 + k0 )I +
Π αi (t) y(t)y T (t) (1 + k0 )I +
i=1
T
Π αi (t)
i=1 ν
=E E
ν
47
Π αi (t) y(t)×
(1 + k0 )I + i=1
ν
×y T (t) (1 + k0 )I +
T
| α1 (t), . . . , αν (t)
Π αi (t)
i=1
Since y(t) is independent from α1 (t), . . . , αν (t) we obtain ν
P (t + 1) = E (1 + k0 )I +
Π αi (t)
P (t) (1 + k0 )I +
i=1
+ (1 + k0 ) ν
ν
ki E(Π αi (t) ) P (t)+
i=1
ki E(Π −αi (t) ) P (t)+
i=1
j=1 j=i
ki kj E Π αi (t) P (t)E Π −αj (t) +
+
+
Π αi (t)
i=1
ν
ν
T
i=1
= (1 + k0 )2 P (t) + (1 + k0 ) ν
ν
ki2 E Π αi (t) P (t)Π −αi (t)
i=1
By straightforward calculations one can verify that E Π −α(t) = E Π α(t) = and tr P (t)vv T = 0. Hence
1 T N vv
E
y(t + 1)
2
= (1 + k0 )2 E
y(t)
2
ν
+
ki2 tr E Π αi (t) P (t)Π −αi (t)
i=1
Using the fact that tr (AB) = tr (BA) we can conclude that E
y(t + 1)
2
=
(1 + k0 )2 +
ν
ki2
E
y(t)
i=1
=
2
ν
(1 + k0 ) + i=1
t
ki2
y(0)
2
2
(23)
48
R. Carli et al.
Now it is easy to verify that ν ki2 | 1 + k0 , k1 , . . . , kν ≥ 0, min (1 + k0 )2 + i=1
1 kj = 0 = 1+ν j=0 ν
(24)
and that it is obtained by choosing k0 = −ν/(1 + ν) and kj = 1/(ν + 1) for all 1 ≤ j ≤ ν. With such a choice we have the convergence result (20)
C Proof of Proposition 2. 2
We observe that E [ y(t) ] = tr E y(t)y T (t) . Let P (t) = E[y(t)y T (t)]. We have that ν
P (t + 1) = E
(1 + k0 )y(t) +
ki Y Ei (t)y(t) × i=1 T
ν
× (1 + k0 )y(t) +
ki Y Ei (t)y(t) i=1
= (1 + k0 )2 P (t) + E (1 + k0 )y(t)y T (t)
ν
ki EiT (t)Y +
i=1 ν
Y Ei (t)y(t) y T (t)(1 + k0 ) +
+E + E
i=1 ν
ki Y Ei (t)y(t)y T (t)
i=1
ν
kj EjT (t)Y .
j=1
Using the double expectation theorem and the fact that Ei (t) and P (t) are independent for any i = 1, . . . , r, we have that P (t + 1) = (1 + k0 )2 P (t) + (1 + k0 )P (t) E
ν
ki EiT (t)Y +
i=1 ν
+ (1 + k0 ) E +Y E
ki Y Ei (t) P (t)+ i=1 ν
ν
(ki Ei (t))P (t) i=1
(kj EjT (t)) Y.
j=1
Let Ω = vv T /N . Since E[Ei (t)] = Ω and since Y Ω = Ω Y = 0 we have that the first two expectations in the previous equation are equal to zero. To compute the last expectation we need to distinguish two cases:
Symmetries in the Coordinated Consensus Problem
49
i = j : then Ei (t), EjT (t) and P (t) are all independent and thus the expectation factorizes. Two terms of the type Y Ω appear and thus for i = j the expectation is zero, i = j : then, since it can be verified by straightforward calculations that for any M ∈ RN ×N , 1 T (v M v)Ω + N
E Ei (t)M EiT (t) =
1 1 tr M − 2 v t M v I, N N
we have Y E ki Ei (t) P (t) ki EiT (t) Y = ki2 Y E Ei (t)P (t)EiT (t) Y =
ki2 T k2 Y v P (t) v ΩY + i tr (P (t)) Y N N ki2 T − 2 Y v P (t) vY N
The first term of the previous equation is zero since Ω Y is zero. We thus obtain that P (t + 1) = (1 + k0 )2 P (t) + Now we consider E E
y(t + 1)
2
y(t) 2
1 N
ν
ki2 tr (P (t))Y −
i=1
1 N2
ν
ki2 Y v T P (t) vY.
i=1
= tr P (t), then we have
= (1 + k0 )2 E 1 − 2 N
y(t)
ν
2
+
1 N
ν
ki2 tr tr (P (t))Y
i=1
ki2 tr (v T P (t) vY ).
i=1
The term tr tr (P (t))Y = (N − 1) tr (P (t)) since tr (Y ) = N − 1 and the last term is zero since v T P (t) v =
N
N
(P (t))ij = 0 . i=1 j=1
We thus have the following difference equation E
y(t + 1)
2
=
Now it is easy to verify that N −1 min (1 + k0 )2 + k,k1 ,...,kν N
(1 + k0 )2 +
N −1 N
ν
ki2
E
y(t)
2
.
i=1
1 kj = 1 = ki2 | 1 + k0 , k1 , . . . , kν ≥ 0, 1+ν j=1 i=1 ν
ν
(25)
50
R. Carli et al.
and that it is obtained by choosing k0 = −νN/(N (1 + ν) − 1) and kj = νN/(N (1 + ν)) for all 1 ≤ j ≤ ν. With such a choice we have the convergence result (21).
D Proof of Proposition 3. Let us define z(t) = x(t) − b v. It is not difficult to prove that z(t) has the same close loop dynamics as the system in x(t), thus ν
z(t + 1) =
k i Ei
(1 + k0 )I +
z(t).
i=1
Let us consider P (t) = E[(x(t) − bv)(x(t) − bv)T ] = E[z(t)z T (t)]. Similar calculations as those done for the convergence rat yields P (t + 1) = (1 + k0 )2 P (t) + (1 + k0 )
ν
ki (P (t)Ω + ΩP (t)) + i=1
ν
1 T (v P (t) v)Ω + N
ki2
+ i=1
ν
1 1 tr P (t) − 2 v T P (t) v I + N N
ν
+
ki kj ΩP (t)Ω i=1 j=1 i=j
Let us define the following variables 1 tr P (t) N 1 s(t) = 2 v T P (t)v N
w(t) =
We want to compute the mean squared distance from the barycenter at steady state, namely we want to compute w(∞) := limt→∞ w(t). We have that ν ν (1 + k0 )2 + i=1 ki2 −k0 (k0 + 2) − i=1 ki2 w(t + 1) w(t) . = 1 ν 1 ν 2 2 s(t) s(t + 1) 1 − k k N i=1 i N i=1 i Σ
where the transition matrix Σ has eigenvalues λ1 = 1, since when the ν states agree, they do not move anymore and λ2 = k02 + NN−1 i=1 ki2 related to the convergence rate, which was computed before. The time evolution of w(t) and s(t) is thus given by
Symmetries in the Coordinated Consensus Problem
w(t) s(t)
51
= c1 λt1 a1 + c2 λt2 a2
where c1 , c2 are constants and a1 , a2 are the eigenvectors associated to λ1 and λ2 . At steady state the vector (w(∞), s(∞))T is aligned to the dominant eigenvector of Σ and thus w(∞) ≈ c1 . Simple calculations yield w(∞) = α where α=
(1 − N )
1 x(0)T (I − Ω) x(0) , N ν 2 i=1 ki ν 2 1=1 ki − k0 N (k0
+ 2)
.
On Communication Requirements for Multi-agent Consensus Seeking Lei Fang and Panos J. Antsaklis Department of Electrical Engineering University of Notre Dame Notre Dame, IN 46556 {lfang,antsaklis.1}@nd.edu Summary. Several consensus protocols have been proposed in the literature and their convergence properties studied via a variety of methods. In all these methods, the communication topologies play a key role in the convergence of consensus processes. The goal of this paper is two fold. First, we explore communication topologies, as implied by the communication assumptions, that lead to consensus among agents. For this, several important results in the literature are examined and the focus is on different classes of communication assumptions being made, such as synchronism, connectivity, and direction of communication. In the latter part of this paper, we show that the confluent iteration graph unifies various communication assumptions and proves to be fundamental in understanding the convergence of consensus processes. In particular, based on asynchronous iteration methods for nonlinear paracontractions, we establish a new result which shows that consensus is reachable under directional, time-varying and asynchronous topologies with nonlinear protocols. This result extends the existing ones in the literature and have many potential applications.
1 Introduction In recent years, there has been growing interest in the coordinated control of multi-agent systems. One of the fundamental problems is the consensus seeking among agents, that is the convergence of the values of variables common among agents to the same constant value [BS03, JLM03c, LBF04, OSM04b, RB05a]. This need stems from the fact that in order for agents to coordinate their behaviors, they need to use some shared knowledge about variables such as direction, speed, time-to-target etc. This shared variable or information is a necessary condition for cooperation in multi-agent systems, as shown in [RBM04]. The challenge here is for the group to have a consistent view of the coordination variable in the presence of unreliable and dynamically changing communication topology with-
P.J. Antsaklis, P. Tabuada (Eds.): Netw. Emb. Sens. and Cntrl., LNCIS 331, pp. 53–67, 2006. © Springer-Verlag Berlin Heidelberg 2006
54
L. Fang and P.J. Antsaklis
out global information exchange. For an extensive body of related work, see [BS03, BGP03, dCP04, HM04b, LW04, LBF04, RBA05, SW03, XB03]. The aforementioned consensus protocols all operate in a synchronized fashion since each agent’s decisions must be synchronized to a common clock shared by all other agents in the group. This synchronization requirement might not be natural in certain contexts. For example, the agreement of timeon-target in cooperative attack among a group of UAVs depends in turn on the timing of when to exchange and update the local information. This difficulty entails the consideration of the asynchronous consensus problem, where each agent updates on its own pace, and uses the most recently received (but possibly outdated) information from other agents. No assumption is made about the relative speeds and phases of different clocks. Agents communicate solely by sending messages; however there is no guarantee of the time of delivery or even for a successful delivery. Under asynchronous communications, heterogeneous agents, time-varying communication delays and packet dropout can all be taken into account in the same framework. Nevertheless, the asynchronism can destroy convergence properties that the algorithm may possess when executed synchronously. In general, the analysis of asynchronous algorithms is more difficult than that of their synchronous counterparts. We refer readers to [BT89, FS00, Koz03] for surveys on general theories of asynchronous systems. Work reported on the asynchronous consensus problem is relatively sparse compared to its synchronous counterparts. In [FA05], we introduced an asynchronous framework to study the consensus problems for discrete-time multi-agent systems with a fixed communication topology under the spanning tree assumption (All the assumptions in this paragraph will be discussed in detail later). A distributed iterative procedure under the eventual update assumption was developed in [MSP+ 05] for calculating averages on asynchronous communication networks. The asynchronous consensus problem with zero time delay was studied in [CMA05a] where the union of the communication graphs is assumed to have a common root spanning tree. A nice overview of the asynchronous consensus problem is given in [BHOT05] where the authors link the consensus problem considered here to earlier work [TBA86]. For other related problems in asynchronous multi-agent systems, see [BL96, LMA04a, LPP02, VACJ98]. Asynchronism provides a new dimension to consensus problems and makes convergence harder to achieve. Under certain technical conditions, asynchronism is not detrimental to consensus seeking among agents. A natural question is what are the appropriate requirements on communication topologies to guarantee the convergence of consensus processes? In order to answer this question, we first discuss the various assumptions on communication topologies commonly used in the literature and classify several of the existing consensus results by these communication assumptions. In Sect. 3, we prove the convergence of asynchronous consensus with zero time delay involving pseudocontractive mappings. This development is a generalization of the results of [CMA05a] and [XBL05b], and provides insight into why the choice be-
On Communication Requirements for Multi-agent Consensus Seeking
55
tween bidirectional and unidirectional communication assumptions can make the difference in establishing consensus convergence. In Sect. 4, we unify various communication assumptions using the confluent iteration graph proposed in [Pot98]. Furthermore, a new convergence result for nonlinear protocols is developed based on the confluent asynchronous iteration concept. This result contains some existing ones in [BHOT05] as special cases.
2 Preliminaries and Background

2.1 Definitions and Notations

Let G = {V, E, A} be a weighted digraph (or directed graph) of order n with the set of nodes V = {v1, v2, ..., vn}, set of edges E ⊆ V × V, and a weighted adjacency matrix A = [aij] with nonnegative adjacency elements aij. The node indices belong to a finite index set I = {1, 2, ..., n}. A directed edge of G is denoted by eij = (vi, vj). For a digraph, eij ∈ E does not imply eji ∈ E. The adjacency elements associated with the edges of the graph are positive, i.e., aij > 0 if and only if eji ∈ E. Moreover, we assume aii = 0 for all i ∈ I. The set of neighbors of node vi is the set of all nodes which point to (communicate with) vi, denoted by Ni = {vj ∈ V : (vj, vi) ∈ E}.

A digraph G can be used to model the interaction topology among a group of agents, where every graph node corresponds to an agent and a directed edge eij represents a unidirectional information exchange link from vi to vj, that is, agent j can receive information from agent i. The interaction graph represents the communication pattern at a certain time. The interaction graph is time-dependent since the information flow among agents may be dynamically changing. Let Ḡ = {G1, G2, ..., GM} denote the set of all possible interaction graphs defined for a group of agents. Note that the cardinality of Ḡ is assumed to be finite. The union of a collection of graphs {Gi1, Gi2, ..., Gim}, each with the vertex set V, is a graph G with the vertex set V and the edge set equal to the union of the edge sets of Gij, j = 1, ..., m.

A directed path in graph G is a sequence of edges ei1i2, ei2i3, ei3i4, ... in that graph. Graph G is called strongly connected if there is a directed path from vi to vj and from vj to vi between any pair of distinct vertices vi and vj. Vertex vi is said to be linked to vertex vj across a time interval if there exists a directed path from vi to vj in the union of the interaction graphs in that interval. A directed tree is a directed graph where every node except the root has exactly one parent. A spanning tree of a directed graph is a tree formed by graph edges that connect all the vertices of the graph. The condition that a digraph contains a spanning tree is equivalent to the condition that there exists a node having a directed path to all other nodes.

Let xi ∈ R, i ∈ I, represent the state associated with agent i. A group of agents is said to achieve global consensus asymptotically if, for any xi(0), i ∈ I, xi(t) − xj(t) → 0 as t → ∞ for each pair i, j ∈ I. Let 1 denote an n × 1
column vector with all entries equal to 1. A matrix F ∈ R^{n×n} is nonnegative, F ≥ 0, if all its entries are nonnegative, and it is irreducible if and only if (I + |F|)^{n−1} > 0, where |F| denotes the matrix of absolute values of the entries of F. Furthermore, if all its row sums are +1, F is said to be a (row) stochastic matrix.

Let X* be a nonempty closed convex subset of R^n, and let ‖·‖ be a norm on R^n. For any vector x ∈ R^n, y* ∈ X* is a projection vector of x onto X* if ‖x − y*‖ = min_{y ∈ X*} ‖x − y‖. We use P(x) to denote an arbitrary but fixed projection vector of x and dist(x, X*) to denote ‖x − P(x)‖. Let T be an operator on R^n. It is paracontractive if

‖Tx‖ ≤ ‖x‖ for all x ∈ R^n   (1)

and equality holds if and only if Tx = x. An operator is nonexpansive (with respect to ‖·‖ and X*) if

‖Tx − x*‖ ≤ ‖x − x*‖ for all x ∈ R^n, x* ∈ X*,   (2)

and pseudocontractive [SB01] (with respect to ‖·‖ and X*) if, in addition,

dist(Tx, X*) < dist(x, X*) for all x ∉ X*.   (3)

We use T to denote the set of all pseudocontractive operators. In the linear case, pseudocontractive operators are generalizations of paracontractive ones. But the converse is not true. Consider the following inequalities:

‖Tx − P(Tx)‖ ≤ ‖Tx − P(x)‖ ≤ ‖x − P(x)‖, for all x ∉ X*.   (4)
Paracontractivity requires the second inequality to be strict, while pseudocontractivity requires any one of these two inequalities to be strict.

Example 1. Let T ∈ R^{n×n}, X* = {c1 | c ∈ R}. Then T is pseudocontractive with respect to X* and the infinity norm ‖·‖∞ if and only if T1 = 1 and, for any x ∈ R^n such that min_i xi < max_i xi, max_i (Tx)_i − min_i (Tx)_i < max_i xi − min_i xi.

2.2 Synchronous and Asynchronous Consensus Protocols

We consider the following (synchronous) discrete-time consensus protocol [RB05a, OSM04b, Mor03]

xi(t + 1) = (1 / Σ_{j=1}^{n} aij(t)) Σ_{j=1}^{n} aij(t) xj(t)   (5)
where t ∈ {0, 1, 2, ...} is the discrete-time index, i, j ∈ I, and aij(t) > 0 if information flows from vj to vi at time t and is zero otherwise, ∀j ≠ i. The magnitude of aij(t) possibly represents the time-varying relative confidence of
agent i in the information state of agent j at time t, or the relative reliabilities of the information exchange links between agents i and j. We can rewrite (5) in the compact form

x(t + 1) = F(t)x(t)   (6)

where x = [x1, ..., xn]^T and F(t) = [Fij(t)] with Fij(t) = aij(t) / Σ_{j=1}^{n} aij(t), i, j ∈ I. An immediate observation is that the matrix F is a nonnegative stochastic matrix, which has an eigenvalue at 1 with the corresponding eigenvector equal to 1. The protocol (5) or (6) is synchronous in the sense that all the agents update their states at the same time using the latest values of their neighbors' states. The way described above to define F(t) in (6) is only one possible way among many others. In the following we assume that F(t) satisfies Assumption 1 below.

Assumption 1 (Nontrivial interaction strength [BHOT05]) There exists a positive constant α such that
(a) Fii(t) ≥ α, for all i, t.
(b) Fij(t) ∈ {0} ∪ [α, 1], for all i, j, t.
(c) Σ_{j=1}^{n} Fij(t) = 1, for all i, t.

In the asynchronous setting, the order in which the states of the agents are updated is not fixed, and the selection of the previous values of the states used in the updates is also not fixed. Let t0 < t1 < ··· < tn < ··· be the time instants when the state of the multi-agent system undergoes a change. Let xi(k) denote the state of agent i at time tk. The index k is also called in the literature the event-based discrete-time index. The dynamics of the asynchronous system can be written as

xi(k + 1) = Σ_{j=1}^{n} Fij(k) xj(sij(k)),  if i ∈ I(k),
xi(k + 1) = xi(k),  if i ∉ I(k),   (7)
where the sij(k) are nonnegative integers and the I(k) are nonempty subsets of {1, ..., n}. The initial states are specified by x(0) = x(−1) = ···. Henceforth, we write the initial vector x(0) to abbreviate reference to this set of equal initial states. We refer to dij(k) = k − sij(k) as iteration delays and to the I(k) as updating sets. The following assumptions (called regularity assumptions) are usually made in the study of asynchronous linear systems.

Assumption 2 (Partial asynchronism) (a) (Frequency of updating) The updating sets I(k) satisfy

∃B ≥ 0, ∪_{k=i}^{i+B} I(k) = {1, ..., n}, for all i.   (8)
(b) (Bounded-delay asynchronism) There exists a nonnegative integer B such that

0 ≤ k − sij(k) ≤ B < ∞, ∀ (i, j, k).   (9)

(c) sii(k) = k, for all i.

Assumption 2(a) says that every agent should be updated at least once in any B + 1 iteration steps. Assumption 2(b) requires delays to be bounded by some constant B. Assumption 2(c) says that an agent generally has access to its own most recent value. Without loss of generality (but after renumbering in the original definition), we assume that I(k) is a singleton which contains a single element from {1, ..., n}. Furthermore, if for all i there exists a nonnegative integer Bi such that for all j

i ∈ ∪_{k=j}^{j+Bi} I(k),   (10)
we call I(k) an indexwise-regulated sequence [Pot98]. This condition expresses the fact that different agents may have different updating frequencies.

2.3 Other Communication Assumptions

Let us review several important assumptions commonly used in the literature.

Assumption 3 (Connectivity) (a) (Uniform strong connectivity) There exists a nonnegative integer B such that ∪_{s=t}^{t+B} G(s) is strongly connected for all t.
(a') (Uniform spanning tree) There exists a nonnegative integer B such that ∪_{s=t}^{t+B} G(s) contains a spanning tree for all t.
(b) (Nonuniform strong connectivity) ∪_{t≥t0} G(t) is strongly connected for all t0 ≥ 0.
(b') (Nonuniform spanning tree) ∪_{t≥t0} G(t) contains a spanning tree for all t0 ≥ 0.

Assumption 4 (Direction of Communication) (a) (Bidirectional link) If eij ∈ G(t) then eji ∈ G(t). This implies that the updating matrix F(t) or F(k) is symmetric.
(b) (Unidirectional link) eij ∈ G(t) does not imply eji ∈ G(t). In this case, the updating matrix F(t) or F(k) is not symmetric.

Assumption 5 (Reversal link) (a) If (vi, vj) ∈ G(t), then there exists some τ such that |t − τ| < B and (vj, vi) ∈ G(τ) [CMA05a, BHOT05].
(b) There is a nonnegative integer B such that for all t and all vi, vj ∈ V we have that if (vi, vj) ∈ G(t) then vj is linked to vi across [t, t + B] [Mor04].

Assumptions 1-5 play different roles in proving various consensus results and they are not necessarily independent from each other. Assumption 1 (Nontrivial interaction strength) and one of the items in Assumption 3 (Connectivity) are always necessary for the convergence of consensus protocols. They guarantee that any update by any agent has a lasting effect on the states of all other agents. Assumption 2 (Partial asynchronism) describes a class of asynchronous systems. If Assumption 3(a) (Uniform strong connectivity) is satisfied, then Assumption 2(a) (Frequency of updating) and Assumption 5(b) (Reversal link) are satisfied automatically. Assumption 4(a) (Bidirectional link) is a special case of Assumption 5 (Reversal link): instead of requiring an instantaneous reversal link (vj, vi) for the link (vi, vj), as in the bidirectional case, we only need the reversal link (vj, vi) to appear within a certain time window, or just require that vj link back to vi within a certain time window (the edge (vj, vi) may not appear at all).

2.4 A Classification of Consensus Results

For a better understanding of Assumptions 1-5, we categorize some of the existing consensus results in Table 1. Assumption 1 is omitted in the table since it is required in all the results listed.

Table 1. A Categorization of the existing consensus results

No. | Results             | Synchronism                    | Connectivity                        | Direction | Reversal link
 1  | Th. 2 in [JLM03c]   | Sync.                          | A3(a)                               | A4(a)     | NA
 2  | Prop. 2 in [Mor03]  | Sync.                          | A3(b)                               | A4(a)     | NA
 3  | Prop. 1 in [Mor03]  | Sync.                          | A3(a')                              | A4(b)     | NA
 4  | Th. 3.10 in [RB05a] | Sync.                          | A3(a')                              | A4(b)     | NA
 5  | Th. 1 [Mor04]       | Sync.                          | A3(b)                               | A4(b)     | A5(b)
 6  | Th. 2 in [FA05]     | A2(a),(b),(c)                  | Fixed spanning tree                 | A4(b)     | NA
 7  | Th. 4 in [CMA05a]   | A2(a),(c) with zero time delay | Spanning trees with the common root | A4(b)     | NA
 8  | Th. 1 in [BHOT05]   | A2(a),(c) with zero time delay | A3(a)                               | A4(b)     | NA
 9  | Th. 3 in [BHOT05]   | A2(a),(b),(c)                  | A3(a)                               | A4(b)     | NA
10  | Th. 4 in [BHOT05]   | A2(a),(b),(c)                  | A3(b)                               | A4(a)     | NA
11  | Th. 5 in [BHOT05]   | A2(a),(b),(c)                  | A3(b)                               | A4(b)     | A5(a)
Several comments are appropriate. First, asynchronous systems with zero time delay can be mapped to equivalent synchronous systems following the arguments in [FA05]. Therefore, Results No. 3 & 4 may be seen as a special
case of Result No. 6. Second, Table 1 reveals the fact that uniform connectivity is necessary under unidirectional communication but is not necessary under bidirectional communication. The reason behind this fact is not at all clear from an intuitive perspective. In Sect. 3, we give an explanation of the difference between bidirectional and unidirectional communication from the point of view of contractive operators. Third, we utilize the iteration graph in Sect. 4.1 to unify various communication assumptions. Fourth, Result No. 9 considers the most general case among linear protocols; notice that all protocols in Table 1 are linear protocols. A nonlinear asynchronous protocol will be introduced in Sect. 4.2.
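To make the protocols of Sect. 2.2 concrete, the following minimal simulation sketch (ours, not from the paper) runs the synchronous protocol (6) and the asynchronous protocol (7) with bounded delays; the chain topology, weights, update schedule and delay pattern are illustrative assumptions only.

    # Sketch: synchronous protocol (6) and asynchronous protocol (7) with bounded delays.
    import numpy as np

    rng = np.random.default_rng(0)
    n, T, B = 4, 60, 2                      # agents, iterations, maximum delay (assumed)

    # Row-stochastic weight matrix satisfying Assumption 1 (fixed chain topology).
    F = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.25, 0.5, 0.25, 0.0],
                  [0.0, 0.25, 0.5, 0.25],
                  [0.0, 0.0, 0.5, 0.5]])

    # Synchronous protocol (6): x(t+1) = F x(t).
    x = rng.uniform(0, 10, n)
    for _ in range(T):
        x = F @ x
    print("synchronous spread:", x.max() - x.min())

    # Asynchronous protocol (7): one agent updates per event, using neighbor values
    # that may be up to B events old (Assumption 2); own value is always current (2(c)).
    hist = [rng.uniform(0, 10, n)]          # hist[k] = state after event k
    for k in range(T * n):
        i = k % n                           # updating set I(k) = {i}
        s = [max(0, k - int(rng.integers(0, B + 1))) for _ in range(n)]
        s[i] = k                            # sii(k) = k
        new = hist[-1].copy()
        new[i] = sum(F[i, j] * hist[s[j]][j] for j in range(n))
        hist.append(new)
    print("asynchronous spread:", hist[-1].max() - hist[-1].min())

In both cases the spread max_i xi − min_i xi shrinks toward zero, illustrating consensus under the stated assumptions.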
3 Bidirectional Versus Unidirectional Communication

In this section, we investigate why different assumptions on connectivity need to be imposed for bidirectional and unidirectional communication if consensus is to be achieved. Specifically, we view the consensus problem as a matrix iteration problem, where the notions of paracontraction and pseudocontraction introduced in Sect. 2.1 are useful in proving convergence. For ease of exposition, we restrict ourselves to synchronous protocols with time-varying topologies and no time delays. That is, we consider the system updating equation x(t + 1) = F(t)x(t), where the matrix F(t) satisfies Assumption 1. It is easily deduced that F(t) is nonexpansive with respect to the vector norm ‖·‖∞. In the bidirectional case, F(t) is also symmetric. From nonnegative matrix theory [HJ85], we know that the eigenvalues of F(t) lie in (−1, +1] for all t. It is known that any symmetric matrix F whose eigenvalues are in (−1, +1] is paracontracting with respect to the Euclidean norm [BEN94]. However, F(t) is no longer paracontracting when its symmetry is lost.

Example 2 (partly taken from [SB01]). (a) For the weight matrix

F1 = [ 0.5   0.5   0
       0.25  0.5   0.25
       0     0.5   0.5 ],

induced from the simple communication topology shown in Fig. 1(a), the norm and the set X* are the same as in Example 1. For any x, P(x) = 0.5(max_i xi + min_i xi)1. For x = [2, 2, 1]^T, F1 x = [2, 1.75, 1.5]^T, P(x) = 1.5·1, and P(F1 x) = 1.75·1. Thus the first inequality in (4) is strict while the second one is an equality. So this operator is pseudocontractive, not paracontractive.

(b) For an arbitrary F(t), there is no guarantee that it is pseudocontractive, e.g., the weight matrix
F2 = [ 1    0    0
       0.5  0.5  0
       0    0    1 ],

which results from the communication topology in Fig. 1(b). For x = [2, 2, 1]^T, F2 x = [2, 2, 1]^T. Thus equalities hold throughout (4). F2 is not pseudocontractive but it is nonexpansive.

Fig. 1. Possible communication topologies for a three-agent system
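As a quick numerical check of Example 2 (our illustration, not part of the paper), the following sketch evaluates dist(·, X*) under the infinity norm, with P(x) the midrange projection onto X* = {c1 | c ∈ R}.

    # Sketch: verifying pseudocontractivity of F1 and the lack of it for F2 at x = [2, 2, 1]^T.
    import numpy as np

    def P(x):
        # projection of x onto X* = {c*1} under the infinity norm (midrange)
        return 0.5 * (x.max() + x.min()) * np.ones_like(x)

    def dist(x):
        return np.linalg.norm(x - P(x), ord=np.inf)

    F1 = np.array([[0.5, 0.5, 0.0], [0.25, 0.5, 0.25], [0.0, 0.5, 0.5]])
    F2 = np.array([[1.0, 0.0, 0.0], [0.5, 0.5, 0.0], [0.0, 0.0, 1.0]])
    x = np.array([2.0, 2.0, 1.0])
    for name, F in [("F1", F1), ("F2", F2)]:
        y = F @ x
        print(name, "Fx =", y, "dist(x) =", dist(x), "dist(Fx) =", dist(y))

For F1 the distance to X* strictly decreases (0.5 to 0.25), whereas for F2 it stays at 0.5, matching the discussion above.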
An alternative proof of Result No. 2 was provided in [XBL05b], where the convergence of paracontracting matrix products was explored. The following theorem from [EKN90] is the key to this development.

Theorem 1. Suppose that a finite set of square matrices of the same dimensions {F1, ..., Fr} are paracontracting. Let {I(t)}_{t=0}^{∞}, with 1 ≤ I(t) ≤ r, be a sequence of integers, and let J denote the set of all integers that appear infinitely often in the sequence. Then for all x(0) ∈ R^n the sequence of vectors x(t+1) = F_{I(t)} x(t) has a limit x* ∈ ∩_{i∈J} H(Fi), where H(F) denotes the fixed point subspace of a paracontracting matrix F, i.e., its eigenspace associated with the eigenvalue 1, H(F) = {x | x ∈ R^n, Fx = x}.

Does there exist a result similar to Theorem 1 for pseudocontracting matrices? The answer is, fortunately, affirmative.

Theorem 2 ([SB01]). Let {Tt} be a sequence of nonexpansive operators (with respect to ‖·‖ and X*), and assume there exists a subsequence {T_{ti}} which converges to T ∈ T. If T is pseudocontractive and uniformly Lipschitz continuous, then for any initial vector x(0), the sequence of vectors x(t + 1) = Tt x(t), t ≥ 0, converges to some x* ∈ X*.

Note that in our study X* = {c1 | c ∈ R}. We are thus one step away from proving Result No. 3, and this is exactly where Assumption A3(a) (uniform connectivity) comes into play. Result No. 3 is an immediate consequence of Theorem 2 and the following lemma.

Lemma 1. If Assumption A3(a) is satisfied, then there exists an integer B′ such that the matrix product F(t0 + B′)F(t0 + B′ − 1) ··· F(t0) is pseudocontractive, with respect to ‖·‖∞ and X* = {c1 | c ∈ R}, for any t0 ≥ 0.

To prove Lemma 1, we need the following technical results.
Lemma 2 (Lemma 2 in [JLM03c]). Let m ≥ 2 be a positive integer and let F1, F2, ..., Fm be nonnegative n × n matrices. Suppose that the diagonal elements of all of the Fi are positive, and let α and β denote the smallest and largest of these, respectively. Then

Fm Fm−1 ··· F1 ≥ (α² / (2β))^{m−1} (Fm + Fm−1 + ··· + F1).   (11)
Lemma 3 (Proposition 3.2 in [SB01]). Let x be a vector such that x_min < x_max, with x_min = min_i xi and x_max = max_i xi, let F be an irreducible matrix with Fii > 0 for 1 ≤ i ≤ n, and let y = Fx. Then the number of elements in the set {i | yi = x_min or yi = x_max} is at least one less than the number of elements in {i | xi = x_min or xi = x_max}.

Proof (of Lemma 1). Assume that there exists an infinite sequence of contiguous, non-empty, bounded time-intervals [tij, tij+1), j ≥ 1, starting at ti1, with the property that across each such interval the union of the interaction graphs is strongly connected (Assumption A3(a)). Let Htj = F(tij+1 − 1) ··· F(tij + 1)F(tij). By Lemma 2 and the strong connectivity of the union of the interaction graphs, it follows that Htj is irreducible. It is easy to prove that Htj is nonexpansive with positive diagonal elements and Htj 1 = 1. Define H = Htn−1 Htn−2 ··· Ht1 and y = Hx. Applying Lemma 3 repeatedly (n − 1) times, we conclude that at least one of the sets {i | yi = x_min} and {i | yi = x_max} is empty. If, say, {i | yi = x_min} is empty, then yi > x_min for all 1 ≤ i ≤ n, and furthermore

‖y − P(y)‖ = (max_i yi − min_i yi)/2 ≤ (x_max − min_i yi)/2 < (x_max − x_min)/2 = ‖x − P(x)‖;

therefore H is pseudocontractive. In other words, F(t0 + B′)F(t0 + B′ − 1) ··· F(t0) is pseudocontractive for B′ = (n − 1)B.

Let us briefly summarize what we have presented in this section. In the bidirectional case, the weight matrices are always paracontracting; Theorem 1 can be applied directly to infer the convergence of the consensus processes. In the unidirectional case, the weight matrices are generally not pseudocontracting; in order to use Theorem 2, the uniform connectivity condition needs to be imposed so that the matrix products across a certain time interval are pseudocontracting.
4 Iteration Graph and Nonlinear Asynchronous Consensus Protocols In the framework of [Pot98], the consensus problem is regarded as a special case of finding common fixed points of a finite set of paracontracting multiple
point operators. That is, all the operators are defined on (different) products of R^n. To avoid divergent behavior, asynchronous iterations which fulfill certain coupling assumptions, called confluent, are considered. Below we apply this theory of paracontractions and confluence to derive a more general consensus result, which extends Result No. 9 by allowing nonlinear multiple point operators. For the purpose of self-containedness, we introduce several related definitions below.

Let I be a set of indices, m ∈ N a fixed number, and F = {F^i | i ∈ I} a pool of operators F^i : D^{mi} ⊂ R^{n mi} → D, where mi ∈ {1, ..., m}, ∀i ∈ I, and D ⊂ R^n is closed. Furthermore, let XO = {x(0), ..., x(−M)} ⊂ D be a given set of vectors. Then, for sequences I = I(k) (k = 0, 1, ...) of elements in I, and S = {s^1(k), ..., s^{m_{i(k)}}(k)}, k = 0, 1, ..., of m_{i(k)}-tuples from N0 ∪ {−1, ..., −M} with s^l(k) ≤ k for all k ∈ N0, l = 1, ..., m_{i(k)}, we study the asynchronous iteration given by

x(k + 1) = F^{I(k)}(x(s^1(k)), ..., x(s^{m_{I(k)}}(k))), k = 0, 1, ...   (12)

An asynchronous iteration corresponding to F, starting with XO and defined by I and S, is denoted by (F, XO, I, S). A fixed point ξ of a multiple point operator F : R^{nm} → R^n is a vector ξ ∈ R^n which satisfies F(ξ, ..., ξ) = ξ, and a common fixed point of a pool is a fixed point of all its operators in this sense.

4.1 Iteration Graph

In essence, the communication assumptions define the coupling among agents or, more generally, the coupling of an iteration process. The existing assumptions often rely on interaction graphs to describe the "spatial" coupling among agents. However, ambiguity arises when asynchronism (e.g., delays) is allowed, since the "temporal" coupling cannot be described directly. In the asynchronous setting, it is important to differentiate the same agent at different time instants. To this end, we associate an iteration graph with the asynchronous iteration (F, XO, I, S). Every iteration, including the initial vectors, gets a vertex, so the set of vertices is V = N0 ∪ {−1, ..., −M}. A pair (k1, k2) is an element of the set of edges E in the iteration graph (V, E) if and only if the k1-th iteration vector is used for the computation of the k2-th iteration vector.

Below we illustrate the concept of the iteration graph via an example. The interaction topologies of a three-agent system at different time instants are shown in Fig. 2(a). It is easy to see that if the interaction pattern continues, Assumption 3(b) is satisfied. Let y(−1) = x1(0), y(−2) = x2(0), and y(−3) = x3(0). At time k = 0, v2 communicates with v3. By construction, we add a vertex 0 and an edge from vertex -2 to vertex 0 in the associated iteration graph, as shown in Fig. 2(b). Assume that v3 and v1 do not use their own past values. Therefore, we do not add an edge from vertex -3 to vertex 0 in the
iteration graph. At time instant k = 1, v2 uses the value of v1 and its own past value to update its state, resulting in two edges in the iteration graph.

Remark 1. An analogy to the iteration graph is the reachability graph in the Petri net literature. The reachability graph is used for verification and supervisory control and is sometimes obtained via a method called unfolding that simplifies the procedure [BFHJ03].
Fig. 2. Interaction topologies of an asynchronous system and its associated iteration graph.
Definition 1 (Confluent asynchronous iteration [Pot98]). Let (F, XO, I, S) be an asynchronous iteration. The iteration graph of (F, XO, I, S) is the digraph (V, E) whose vertices V are N0 ∪ {−1, ..., −M}, and whose edges E are given by (k, k0) ∈ E iff there is an l, 1 ≤ l ≤ m_{I(k0−1)}, such that s^l(k0 − 1) = k. (F, XO, I, S) is called confluent if there are numbers n0 ∈ N, b ∈ N and a sequence bk (k = n0, n0 + 1, ...) in N, such that for all k ≥ n0 the following is true: (i) for every vertex k0 ≥ k there is a directed path from bk to k0 in (V, E); (ii) k − bk ≤ B; (iii) S is regulated; (iv) for every i ∈ I there is a ci ∈ N so that for all k ≥ n0 there is a vertex w_k^i in V which is a successor of bk and a predecessor of b_{k+ci}, and for which I(w_k^i − 1) = i.

It is worth mentioning that when Assumptions 2(a),(b), 3(b), and 4(b) are fulfilled, the associated asynchronous iteration is confluent. Given an arbitrary iteration, there are simple ways to make its implementation confluent [Pot98].
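The following small sketch (ours, not from [Pot98]) shows one way an iteration graph can be built from an asynchronous schedule and queried for directed paths; the two-agent schedule in the usage example is hypothetical and mirrors the situation of Remark 3 below, where no agent uses its own past value.

    # Sketch: iteration graph of Definition 1 from a schedule S, plus a path query.
    from collections import defaultdict, deque

    def iteration_graph(S):
        """S[k] lists the iterate indices s^l(k) used to compute iterate k+1
        (initial iterates carry negative indices). Returns an adjacency dict."""
        adj = defaultdict(list)
        for k, sources in enumerate(S):
            for s in sources:
                adj[s].append(k + 1)    # edge (s, k+1): iterate s feeds iterate k+1
        return adj

    def has_path(adj, a, b):
        """Breadth-first search for a directed path from vertex a to vertex b."""
        seen, queue = {a}, deque([a])
        while queue:
            v = queue.popleft()
            if v == b:
                return True
            for w in adj.get(v, []):
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return False

    # Hypothetical schedule: each update uses only the other agent's previous iterate.
    S = [(-2,), (-1,), (1,), (2,), (3,), (4,)]
    adj = iteration_graph(S)
    print(has_path(adj, 1, 4), has_path(adj, 1, 5))   # False True: odd/even split

Since odd- and even-numbered iterates never feed each other, no single vertex reaches all later vertices, so the confluence condition of Definition 1 fails for this schedule.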
Remark 2. As opposed to the original development in [CM69], here the whole vector is updated in every iteration step. Also, all components of the vectors have the same delay. This does not impose a restriction since the vectors reduce to scalars in our study.

4.2 Nonlinear Asynchronous Consensus Protocols

Before introducing the main result of the paper, we need the following definition. Note that the paracontracting operator in Definition 2(ii) corresponds to the pseudocontracting operator as defined in (3). (Without much confusion, the original definitions given in [Pot98] are followed for easy reference.)

Definition 2. Let F be a pool of operators as in Definition 1, and X = (x^1, ..., x^{mi}) an element of R^{n mi}.
(i) If for all i ∈ I, X, Y ∈ D^{mi} and a norm ‖·‖,

‖F^i(X) − F^i(Y)‖ < max_j ‖x^j − y^j‖

or ‖F^i(X) − F^i(Y)‖ = ‖x^j − y^j‖, ∀j ∈ {1, ..., mi}, then F is called strictly nonexpansive on D.
(ii) If for all i ∈ I, X ∈ D^{mi} and a norm ‖·‖, F^i is continuous on D^{mi}, then F is paracontracting on D if, for any fixed point ξ ∈ R^n of F^i,

‖F^i(X) − ξ‖ < max_j ‖x^j − ξ‖

or X = (x, ..., x) and x is a fixed point of F^i.

It is easy to see that every strictly nonexpansive pool is paracontracting. Moreover, an (e=extended)-paracontracting notion is introduced in [Pot98]. The (e)-paracontracting operators may be discontinuous and have nonconvex sets of fixed points. A simplified version of the main result in [Pot98] is now given.

Theorem 3. Let F be a paracontracting pool on D ⊂ R^n, and assume that F has a common fixed point ξ ∈ D. Then any confluent asynchronous iteration (F, XO, I, S) converges to a common fixed point of F.

With the help of Theorem 3, Result No. 9 can be obtained by interpreting the different rows of a stochastic matrix as multiple data operators. To see this, let f_{i mi(j)}(k) be, for all i ∈ {1, ..., n}, the jth of the mi nonzero entries in the ith row of F(k), or let mi(j) = j, ∀j = 1, ..., n, and mi = n, if this row is zero. Then the pool F = {F^i | i = 1, ..., Q} (Q, the total number of operators, is finite), defined by F^i(k) : R^{mi} → R,

F^i(k)(y^1, ..., y^{mi}) := Σ_{j=1}^{mi} f_{i mi(j)}(k) y^j,  i = 1, ..., Q,   (13)

is strictly nonexpansive on all closed intervals D ⊂ R, if F^i(1, ..., 1) = 1.
We are now ready to claim a new consensus result where F^i is allowed to be nonlinear. To avoid confusion, we rewrite I(k) in (12) as pk below.

Theorem 4. Consider the iteration

xi(k + 1) = F^i(x1(s^1(k)), x2(s^2(k)), ..., xn(s^n(k))).   (14)

(i) Assume without loss of generality that the numbering of s^l(k), k = 0, 1, ..., is chosen in such a manner that all components xl(s^l(k)) in (14) are themselves updated at time s^l(k), i.e.,

p_{s^i(k)−1} = i, ∀k ∈ N, i ∈ {1, ..., n} with s^i(k) ≥ 1,   (15)

and, also w.l.o.g., that all initial vectors are multiples of 1. Define x(−k) := xk(0)1, ∀k = 1, ..., n, and renumber in this way the elements of the sequences s^l(k), k = 0, 1, ..., l = 1, ..., n, for which s^l(k) = 0. Then the asynchronous iteration (F, YO, I, S), given by

y(k + 1) := F^{pk}(y(s̃^1(k)), ..., y(s̃^{m_{pk}}(k))), k = 0, 1, ...,   (16)

where F = {F^{pk} | k = 0, 1, ...} is paracontracting, I = pk, k = 0, 1, ..., S = {s̃^i(k) | k = 0, 1, ...; i = 1, ..., m_{pk}} is given by

s̃^i(k) := s^{m_{pk}(i)}(k), ∀k ∈ N0, i = 1, ..., m_{pk},   (17)

and YO by y(−l) := x1(−l), l = 1, ..., n, generates

y(k + 1) = x_{pk}(k + 1), ∀k ∈ N0.   (18)

(ii) The pool F = {F^i : R^n → R | i ∈ {1, ..., Q}} is paracontracting and has a common fixed point. Furthermore, there exists an agent i0 which updates its state using only a subset of the pool F. Every operator F^i in this subset is continuously differentiable in x_{i0} and ∂F^i/∂x_{i0} ≠ 0. Assume that s^{i0}(k) = max{k0 ≤ k | p_{k0−1} = i0} for all k > min{k0 ∈ N0 | p_{k0} = i0} with pk = i0. Then, under Assumptions 2(a),(b), 3(b), 4(b), the nonlinear protocol (16) or, equivalently, (14) guarantees asymptotic consensus.

Proof. (i) follows by induction on k. Using the same argument as in the proof of Theorem 5(v) in [Pot98], it can be shown that the iteration (16) is confluent. (ii) is then an immediate consequence of Theorem 3.

Remark 3. In Fig. 2, the iteration graph is confluent when the agent v2 always uses its own past value for updating. Suppose that no agents use their past values during the process (i.e., the dashed edges no longer appear). After removing the dashed edges from Fig. 2(b), the iteration graph is no longer confluent since there is no directed path from an odd-numbered vertex to an even-numbered vertex, and vice versa. This shows the necessity of the existence of i0 in Theorem 4(ii).
Theorem 4 is exact, rather than linearized, and can be used to study multi-agent systems with nonlinear couplings. Potential applications include distributed time synchronization and the rendezvous of multiple robots with nonholonomic constraints.
5 Conclusions

In this paper, we examined the different assumptions made in the various consensus results in the literature so as to better understand their roles in the convergence analysis of consensus protocols. A novel nonlinear asynchronous consensus result was also introduced using the theory of paracontractions and confluence. This result is more general than the existing ones and provides a powerful tool to study a wider range of applications. Many open problems remain; see [FAT05] for a detailed discussion.
Applications of Connectivity Graph Processes in Networked Sensing and Control Abubakr Muhammad, Meng Ji, and Magnus Egerstedt School of Electrical and Computer Engineering Georgia Institute of Technology, Atlanta, GA {abubakr,mengji,magnus}@ece.gatech.edu Summary. This paper concerns the problem of controlling mobile nodes in a network in such a way that the resulting graph-encoding of the inter-node information flow exhibits certain desirable properties. As a sequence of such transitions between different graphs occurs, the result is a graph process. In this paper, we not only characterize the reachability properties of these graph processes, but also put them to use in a number of applications, ranging from multi-agent formation control, to optimal collaborative beamforming in sensor networks.
1 Introduction As the complexity associated with the control design tasks for many modern engineering systems increases, strategies for managing the complexity have become vitally important. This is particularly true in the areas of networked and embedded control design, where the scale of the system renders classical design tools virtually impossible to employ. However, the problem of coordinating multiple mobile agents is one in which a finite representation of the configuration space appears naturally, namely by using graph-theoretic models for describing the local interactions in the formation. In other words, graph-based models can serve as a bridge between the continuous and the discrete when trying to manage the design-complexity associated with formation control problems. Notable results along these lines have been presented in [RM03, Mes02, JLM03a, TJP03a, CMB05b]. The conclusion to be drawn from these research efforts is that a number of questions can be answered in a natural way by abstracting away the continuous dynamics of the individual agents. Several terms such as link graphs, communication graphs and connectivity graphs have been used interchangeably in the literature for graphical models that capture the local limitations of sensing and communication in decentralized networked systems. In this paper, we give several applications of connectivity graphs and their dynamics: the so-called connectivity graph processes.
The outline of this paper is as follows. In Section 2 we provide various characterizations and computational tools that deal with the realization of connectivity graphs in their corresponding configuration spaces, based on our work in [ME05, MEa]. These studies let us distinguish between valid and invalid graphical configurations and provide a computable way of determining valid transitions between various configurations, which is the topic of Section 3. Moreover, notions such as graph reachability and planning will be given a solid foundation. We also develop an optimal control framework, where the configuration space is taken as the space of all connectivity graphs. Finally, Section 4 is devoted to the various applications of this framework. In particular, we study low-complexity formation planning for teams of mobile robots and collaborative beamforming in mobile sensor networks.
2 Formations, Connectivity Graphs, and Feasibility

Graphs can model local interactions between agents when individual agents are constrained by limited knowledge of other agents. In this section we summarize some previous results, found in [ME04a], of a graph-theoretic nature for describing formations in which the primary limitation of perception for each agent is the limited range of its sensor. Suppose we have N such agents with identical dynamics evolving on R². Each agent is equipped with a range-limited sensor by which it can sense the relative displacement of other agents. All agents are assumed to have identical sensor ranges δ. Let the position of each agent be xn ∈ R², and its dynamics be given by ẋn = f(xn, un), where un ∈ R^m is the control for agent n and f : R² × R^m → R² is a smooth vector field. The configuration space C^N(R²) of the agent formation is made up of all ordered N-tuples in R², with the property that no two points coincide, i.e. C^N(R²) = (R² × R² × ... × R²) − Δ, where Δ = {(x1, x2, ..., xN) : xi = xj for some i ≠ j}. The evolution of the formation can be represented as a trajectory F : R+ → C^N(R²), usually written as F(t) = (x1(t), x2(t), ..., xN(t)) to signify time evolution.

The spatial relationship between agents can be represented as a graph in which the vertices represent the agents, and the existence of an edge between a pair of vertices tells us that the corresponding agents are within sensor range δ of each other. Let GN denote the space of all possible graphs that can be formed on N vertices V = {v1, v2, ..., vN}. Then we can define a function ΦN : C^N(R²) → GN, with ΦN(F(t)) = G(t), where G(t) = (V, E(t)) ∈ GN is the connectivity graph of the formation F(t). Here vi ∈ V represents agent i at position xi, and E(t) denotes the edges of the graph: eij(t) = eji(t) ∈ E(t) if and only if ‖xi(t) − xj(t)‖ ≤ δ, i ≠ j. The graphs are always undirected because the sensor ranges are identical. The motion of agents in a formation may result in the removal or addition of edges in the graph; therefore G(t) is a dynamic structure. Lastly and most importantly, not every graph in GN is a connectivity graph. The last observation is not as obvious as the others, and
it has been analyzed in detail in [ME04a]. A realization of a graph G ∈ GN is a formation F ∈ C^N(R²) such that ΦN(F) = G. An arbitrary graph G ∈ GN can therefore be realized as a connectivity graph in C^N(R²) if Φ_N^{−1}(G) is nonempty. We denote by GN,δ ⊆ GN the space of all possible graphs on N agents with sensor range δ that can be realized in C^N(R²). In [abu05] we proved the following result.

Theorem 1. GN,δ is a proper subspace of GN if and only if N ≥ 5.

Formations can produce a wide variety of graphs for N vertices. This includes graphs that have disconnected subgraphs or totally disconnected graphs with no edges. However, the problem of switching between different formations, or of finding interesting structures within a formation of sensor-range-limited agents, can only be tackled if no sub-formation of agents is totally isolated from the rest of the formation. This means that the connectivity graph G(t) of the formation F(t) should always remain connected (in the sense of connected graphs) for all time t.

In [ME05, MEa] we gave a detailed study of feasibility results using semidefinite programming methods and their relation to the Positivstellensatz for semialgebraic sets. In particular, we showed how to set up the feasibility of the geometrical constraints in a possible graph as a linear matrix inequality (LMI) problem as follows. Recall that the connectivity graph (V, E) corresponding to the formation (x1, x2, ..., xN) ∈ C^N(R²) can be described by N(N − 1)/2 relations of the form

δ² − (xi − xj)² − (yi − yj)² ≥ 0, if eij ∈ E,
(xi − xj)² + (yi − yj)² − δ² > 0, if eij ∉ E,

where 1 ≤ i < j ≤ N and xi = (xi, yi). Therefore the realization problem is equivalent to asking if there exist x1, y1, ..., xN, yN such that these inequality constraints are satisfied. In [ME05] we showed that the non-feasibility problem is equivalent to asking if the set

X = {x ∈ R^M | x^T Aij x ≥ 0, 1 ≤ i < j ≤ N, eij ∈ E; x^T Blm x > 0, 1 ≤ l < m ≤ N, elm ∉ E}

is empty for certain Aij and Blm matrices (see [ME05] for details). Moreover, since all semi-algebraic constraints on the set X are quadratic and Aij = Aij^T, Blm = Blm^T, it was also shown that we can use the celebrated S-procedure to transform the feasibility question into a linear matrix inequality (LMI) problem [BGFB94].

Theorem 2. Given symmetric n × n matrices {Ak}_{k=0}^{m}, the following are equivalent:
1. The set {x ∈ R^n | x^T A1 x ≥ 0, x^T A2 x ≥ 0, ..., x^T Am x ≥ 0, x^T A0 x ≥ 0, x^T A0 x ≠ 0} is empty.
2. There exist non-negative scalars {λk}_{k=1}^{m} such that −A0 − Σ_{k=1}^{m} λk Ak ≥ 0.

If a solution to this LMI exists then we say that we have a certificate of infeasibility. It was demonstrated how to use a standard LMI-software [Inc01] to
effectively solve a wide class of such feasibility problems. Once such infeasibility certificates have been obtained, they can be used in formation planning under constraints of communication and sensory perception.
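Before turning to graph processes, the following minimal sketch (ours, not from [ME04a]) illustrates the map ΦN in code: given agent positions and a common sensor range δ, it builds the connectivity graph as an edge set and checks that the graph is connected. The positions and δ used in the example are illustrative.

    # Sketch: Phi_N for a delta-disk connectivity graph, plus a connectivity check.
    import itertools
    import numpy as np

    def connectivity_graph(positions, delta):
        """Return the undirected edge set {(i, j)} with ||x_i - x_j|| <= delta."""
        E = set()
        for i, j in itertools.combinations(range(len(positions)), 2):
            if np.linalg.norm(positions[i] - positions[j]) <= delta:
                E.add((i, j))
        return E

    def is_connected(n, E):
        """Union-find check that the graph (V, E) has a single connected component."""
        parent = list(range(n))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for i, j in E:
            parent[find(i)] = find(j)
        return len({find(i) for i in range(n)}) == 1

    x = np.array([[0.0, 0.0], [1.0, 0.2], [2.1, 0.0], [3.0, 0.4]])
    E = connectivity_graph(x, delta=1.3)
    print(E, is_connected(len(x), E))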
3 Connectivity Graph Processes for Formation Planning

As mentioned in Section 1, formation switching with limited global information is an important problem in multi-agent robotics. However, little work has been done so far that adequately addresses the problem of formation switching under limited range constraints. Therefore the ability to give exact certificates about what can and cannot be achieved under these constraints is a desirable result. Recall that the connectivity graph of the formation evolves over time as G(t) = (V, E(t)) = ΦN(x(t)). Under standard assumptions on the individual trajectories of the agents, one gets a finite sequence of graphs {G0, G1, ..., GM} for each finite interval of evolution [0, T], where ΦN(x(0)) = G0, Gi switches to Gi+1 at time ti, and ΦN(x(T)) = GM. We will often write this as G0 → G1 → ... → GM and call such a sequence a connectivity graph process (a term we borrow from Mesbahi [Mes02]). Such graph processes can be thought of as trajectories on the space GN,δ. In what follows we discuss the role of feasible, reachable and desirable sets when generating trajectories on this space.

3.1 Feasible Connectivity Graph Transitions

The connectivity graph processes are generated through the movement of individual nodes. For a connectivity graph Gj = (Vj, Ej) = ΦN(x(tj)), let the nodes be partitioned as Vj = Vj0 ∪ Vjm, where the movement of the nodes in Vjm facilitates the transition from Gj to the next graph Gj+1 and Vj0 is the set of nodes that are stationary. With the positions x0j = {xm(tj)}_{m ∈ Vj0} being fixed, let Feas(Gj, Vjm, x0j) ⊆ GN,δ be the set of all connected connectivity graphs that are feasible by an unconstrained placement of the positions corresponding to Vjm in R². (We will often denote this set as Feas(Gj, Vjm) when the positions x0j are understood from context.) The set Feas(Gj, Vjm) of feasible graph transitions can be computed using the semi-definite programming methods discussed above.

It is appropriate to explain the reason for keeping track of mobile and stationary nodes at each transition. In principle, it is possible to compute the feasible transitions for an arbitrary set of mobile nodes. However, in order to manage the combinatorial growth in the number of possible graphs, it is desirable to let the transitions be generated by the movements of a small subset of nodes only. In fact, we will investigate the situation where we only move one node at a time. Let Vj0 = {1, ..., k − 1, k + 1, ..., N} and Vjm = {k}. It should be noted that the movement of node k can only result in the addition or deletion of edges that have node k as one of its vertices. Therefore the enumeration of
the possible resulting graphs should count all possible combinations of such deletions and additions. This number can be easily seen to be 2^{N−1} for N nodes. Since we are also required to keep the graph connected at all times, this number is actually 2^{N−1} − 1, obtained after removing the graph in which node k has no edge with any other node. Now, we can use the S-procedure to evaluate whether each of the new graphs resulting from this enumeration is feasible. Since all nodes are fixed except for xk = (x, y), the semi-algebraic set we need to check for non-feasibility is defined by N − 1 polynomial inequalities over R[x, y]. Each of these inequalities has either of the following two forms:

δ² − (x − xi)² − (y − yi)² ≥ 0, if eki ∈ E,
(x − xi)² + (y − yi)² − δ² > 0, if eki ∉ E,

where 2 ≤ i ≤ N, E is the edge set of the new graph, and we denote xi(tj) by (xi, yi) for i ≠ k. This computation can be repeated for all N nodes so that we have a choice of N(2^{N−1} − 1) graphs. Each of the N − 1 inequalities can be written as either

[x y 1] Ai [x y 1]^T ≥ 0, if eik ∈ E,   or   [x y 1] Bi [x y 1]^T > 0, if eik ∉ E,

where

Ai = [ −1   0   xi
        0  −1   yi
        xi  yi  δ² − xi² − yi² ],      Bi = [  1   0  −xi
                                               0   1  −yi
                                             −xi −yi  xi² + yi² − δ² ].

Ignoring the lossy aspect of the S-procedure [BGFB94], we need to solve the LMI

−A_{α1} − Σ_{i≠1, e_{αi k} ∈ E} λ_{αi} A_{αi} − Σ_{j, e_{αj k} ∉ E} λ_{αj} B_{αj} ≥ 0.
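A hedged sketch of this check is given below (ours; the paper used the MATLAB LMI control toolbox [Inc01], whereas this sketch assumes CVXPY with an SDP-capable solver, and the function names, data layout and choice of distinguished matrix are ours).

    # Sketch: S-procedure infeasibility certificate for a single mobile node k.
    import numpy as np
    import cvxpy as cp

    def quadratic_matrices(fixed_xy, delta, edge_to_k):
        """Build the A_i (edge) and B_i (non-edge) matrices for mobile node k."""
        A, B = [], []
        for (xi, yi), has_edge in zip(fixed_xy, edge_to_k):
            Ai = np.array([[-1.0, 0.0, xi],
                           [0.0, -1.0, yi],
                           [xi, yi, delta**2 - xi**2 - yi**2]])
            if has_edge:
                A.append(Ai)              # delta^2 - (x-xi)^2 - (y-yi)^2 >= 0
            else:
                B.append(-Ai)             # (x-xi)^2 + (y-yi)^2 - delta^2 > 0
        return A, B

    def infeasibility_certificate(A, B):
        """True if S-procedure multipliers certify that the candidate graph cannot
        be realized (ignoring the lossy aspect of the S-procedure)."""
        mats = A + B
        A0, rest = mats[0], mats[1:]      # distinguished constraint, as in the LMI above
        lam = [cp.Variable(nonneg=True) for _ in rest]
        lmi = -A0 - sum(l * M for l, M in zip(lam, rest))
        lmi = (lmi + lmi.T) / 2           # keep the expression explicitly symmetric
        prob = cp.Problem(cp.Minimize(0), [lmi >> 0])
        prob.solve()
        return prob.status == cp.OPTIMAL

    # Example: node k asked to be adjacent to two nodes 3 units apart with delta = 1.
    A, B = quadratic_matrices([(0.0, 0.0), (3.0, 0.0)], delta=1.0, edge_to_k=[True, True])
    print(infeasibility_certificate(A, B))   # expected: True (graph is not realizable)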
An example of such a calculation is given in Figure 1, where V 0 = {2, 3, 4, 5} and V m = {1}. The LMI control toolbox [Inc01] for MATLAB has been used to solve the LMI for each of these graphs in order to get the appropriate certificates. 3.2 Reachability and Connectivity Graph Processes Note that Feas(G0 , V0m ) does not depend on the actual movement of the individual nodes. In fact even if G ∈ Feas(G0 , V0m ), it does not necessarily mean
Fig. 1. Feasible and infeasible graphs by movement of node 1. (The panels show the initial graph G0, the set of feasible transitions Feas(G0, {1}), and the infeasible transitions.)
that there exists a trajectory by which G0 → G, or even that G0 → ··· → G. We therefore need some notion of reachability on the space GN,δ. We say that a connectivity graph Gf is reachable from an initial graph G0 if there exists a connectivity graph process of finite length G0 → G1 → ... → Gf and a sequence of vertex-sets {Vkm} such that each Gk+1 ∈ Feas(Gk, Vkm). If Vkm = V m at each transition, then every Gk ∈ Feas(G0, V m). (In particular, Gf ∈ Feas(G0, V m).) Consider all such G that are reachable from G0 with a fixed V m. We will denote this set by Reach(G0, V m). It is easy to see that Reach(G0, V m) ⊆ Feas(G0, V m). In the previous paragraphs, it was shown how to determine membership in the set Feas(G0, V m). But determining membership in Reach(G0, V m) is not trivial. Under the assumption that individual nodes are globally controllable, a computational tool from algebraic geometry known as Cylindrical Algebraic Decomposition [bas] or CAD can be used for this purpose, as shown in [MEa, MEb]. For the semialgebraic set S defined by the
Fig. 2. Selection of Nash cells and the generation of trajectory in a graph process.
union of the sensing discs of the stationary nodes V 0, CAD provides a decomposition of S into so-called Nash cells. It has been proved in [MEb] that, for two given connectivity graphs ΦN(x(0)) and Gf, there exists a finite connectivity graph process ΦN(x(0)) → G1 → ... → Gf, and a corresponding trajectory x(t) ⊆ C^N(R²), t ∈ [t0, tf], such that x(tf) ∈ Φ_N^{−1}(Gf), if and only if there exists a finite collection of Nash cells in the CAD of S such that x(t) belongs to a cell in this collection for all t ∈ [t0, tf]. The construction used in proving this result gives us the trajectories for the actual movement of the nodes. We omit the details of this proof for the sake of brevity. An example of the CAD and the trajectory of the mobile node is depicted in Figure 2.

3.3 Global Objectives, Desirable Transitions and Optimality

The whole purpose of a coordinated control strategy in a multi-agent system is to evolve towards the fulfilment of a global objective. This typically requires the minimization (or maximization) of a cost associated with each global configuration. Viewed in this way, a planning strategy should basically be a search process over the configuration space, evolving towards this optimum. If the global objective is fundamentally a function of the graphical abstraction of the formation, then it is better to perform this search over the space of graphs instead of the full configuration space of the system. By introducing various graphical abstractions in the context of connectivity graphs, we have the right machinery to perform this kind of planning. In other words, we will associate a cost or score with each connectivity graph and then work towards minimizing it. Given Reach(G0, V m), a decision needs to be taken regarding which Gf ∈ Reach(G0, V m) the system should switch to. For this we define a cost function Ψ : GN,δ → R and we choose the transition through

Gf = arg min_{G ∈ Reach(G0, V m)} Ψ(G).
Here Ψ is analogous to a terminal cost in optimal control. If, in addition, we also take into account the cost associated with every transition in the graph
process G0 → G1 → ... → GM = Gf that takes us to Gf, then we would instead consider the minimization of the cost

J = Ψ(Gf) + Σ_{i=0}^{M−1} β(i) L(Gi, Gi+1),
where L : GN,δ × GN,δ → R is the analogue of a discrete Lagrangian, β(i) are weighting constants, and Gi+1 ∈ Reach(Gi , V m ) at each step i. The Lagrangian lets us control the transient behavior of the system during the evolution of the graph process. As an example, let Gi = (Vi , Ei ) and define L(Gi , Gi+1 ) = |(Ei+1 \ Ei ) ∪ (Ei \ Ei+1 )|, where |.| gives the cardinality of a set. This Lagrangian is the symmetric difference of the sets of edges in the two graphs. Here, we penalize the addition or deletion of edges at each transition. The resulting connectivity graph process takes G0 to Gf with minimal structural changes. As another example, if we let L(Gi , Gi+1 ) = |Ei | − |Ei+1 |, then the desired Gf is generated while maximizing connectivity during the graph process.
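A small sketch of these costs (ours, not the authors' code) is given below: the symmetric-difference Lagrangian, the connectivity-maximizing Lagrangian, and the total cost J of a graph process; the three-step process in the usage example is hypothetical.

    # Sketch: transition Lagrangians and total cost J over a connectivity graph process.
    def L_symdiff(Ei, Ej):
        """Number of edges added or deleted in the transition Gi -> Gj."""
        return len(Ei ^ Ej)

    def L_connectivity(Ei, Ej):
        """|Ei| - |Ej|: negative when the transition adds edges."""
        return len(Ei) - len(Ej)

    def total_cost(process, Psi, L, beta=lambda i: 1.0):
        """J = Psi(G_f) + sum_i beta(i) * L(G_i, G_{i+1}) over a graph process."""
        J = Psi(process[-1])
        for i in range(len(process) - 1):
            J += beta(i) * L(process[i], process[i + 1])
        return J

    # Hypothetical three-step process on 4 nodes (graphs as frozensets of edges).
    G0 = frozenset({(0, 1), (0, 2), (0, 3)})
    G1 = frozenset({(0, 1), (1, 2), (0, 3)})
    G2 = frozenset({(0, 1), (1, 2), (2, 3)})
    print(total_cost([G0, G1, G2], Psi=len, L=L_symdiff))   # terminal cost here is |E_f|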
4 Applications of Connectivity Graph Processes

We now give some concrete applications of the connectivity graph processes.

4.1 Production of Low-Complexity Formations

In [ME04b], we have presented a complexity measure for studying the structural complexity of robot formations. The structural complexity is based on the number of local interactions in the system due to perception and communication. When designing control strategies for distributed, multi-agent systems, it is vitally important that the number of prescribed local interactions is managed in a scalable manner. In other words, it should be possible to add new robots to the system without causing a significant increase in the communication and computational burdens of the individual robots. On the other hand, an additional requirement when designing multi-agent coordination strategies should be that enough local interactions are present in order to ensure the proper execution of the task at hand. It turns out that the notion of structural complexity is the right measure to compare these conflicting requirements for the system. Therefore, it would be desirable to obtain graph processes that transform a formation with a high structural complexity to one with a lower complexity (and vice versa). In [ME04b] we defined the structural complexity of a connectivity graph G as

C(G) = Σ_{vi ∈ V} ( deg(vi) + Σ_{vj ∈ V, vj ≠ vi} deg(vj) / d(vi, vj) ),
where d : V × V → R+ is some distance function defined between vertices. It was also observed that if G is a connected connectivity graph then the complexity of G is bounded above and below by C(δN) ≤ C(G) ≤ C(KN), where δN is the δ-chain on N vertices and KN is the complete graph. The lowest-complexity graphs, the δ-chains (which are the line graphs or Hamiltonian paths on all vertices), are important for formations that require minimal coordination. Therefore coming up with a planning mechanism to produce low-complexity formations from an arbitrary initial formation is a useful result in multi-agent coordination.

Using the concepts from the previous sections, we define Ψ(G) = C(G) and L(Gi, Gi+1) = |(Ei+1 \ Ei) ∪ (Ei \ Ei+1)|, where Gi = (Vi, Ei). In this way we tend to produce formations that are lower in complexity by generating a graph process that adds or deletes a small number of edges at each transition. The results of one such simulation are shown in Figure 3. Here, the star-like graph in the upper left corner is the initial connectivity graph (with a higher structural complexity than a δ-chain on 5 vertices). The graph process evolves from left to right and then continues onto the lower rows in the same manner until it reaches the δ-chain in the lower right corner. The process involves transitions to various intermediate graphs. In this example, the mobile node is labelled as 1. As predicted by the CAD decomposition, it first slides up to make an edge with node 2 and then rotates about node 2. It then passes by node 5, making various intermediate graphs, till it comes in the vicinity of node 4. Finally it makes a rotation about node 5 to form the δ-chain. Also note that, due to the choice of the Lagrangian described above, the mobile node makes (or breaks) only a minimum number of edges at each graph transition. In this example, it can be seen that this number is always 1.

Simulations for a relatively large number of nodes and more complex formations have also been done. It should be noted that the optimal trajectories thus obtained are only locally minimizing. A scheme for globally optimal behavior is currently under study. We are also working towards extending the number of mobile nodes by a decomposition of the CAD into non-overlapping regions for each mobile node.

4.2 Collaborative Beamforming in Sensor Networks

Another promising application of the framework presented in Section 3 is collaborative beamforming. In ad-hoc and wireless sensor networks, long range communication between clusters is always expensive due to limitations on power and communication channels. Collaborative beamforming is one way to solve this problem without dramatically increasing the complexity compared to non-collaborative strategies [OMPT, AC00]. With this method, a cluster of sensors synchronize their phases and collaboratively transmit or receive data in a distributed manner. By properly
Fig. 3. A connectivity graph process that generates a δ-chain.
designing the array factor, one can shape and steer the beam pattern in such a way that the array has either a high power concentration in the desired direction with little leakage (when transmitting), or a high gain in the direction of arrival (DOA) of the signal of interest with significant attenuation in the direction of interference (when receiving). These properties enable Space-Division Multiple Access (SDMA) among clusters. In most beamforming applications, the array geometry is assumed to be fixed and the optimal beam pattern is formed by optimally weighting the signals received at the individual nodes [VB88]. In this work, we further optimize the beam pattern by altering the geometry of the sensor array using connectivity graph processes.
Fig. 4. Sensor array geometry.
Fig. 5. Beam pattern and HPBW.
It should be mentioned that finding an optimal geometry is a difficult design problem in array signal processing. Most designs favor a regular equispaced geometry, such as linear, circular, spherical and rectangular grid arrays, over random geometries [VB88]. If one follows the design philosophy in the array processing community, one would tend to drive all nodes to a regular geometry for obtaining better beam patterns. However, more exotic geometries have also been designed for particular applications. Moreover, the placement of nodes in a sensor network is not merely to optimize network communication, but also to maximize some benefit associated with distributed sensing. Therefore, it seems beneficial to optimize the geometries over some cumulative function of both the communication performance and the sensing performance of the network. We present this approach below.

Let us first study the beamforming for an arbitrary geometry. Following the standard notation in the array signal processing literature, we describe the positions of the individual nodes, the signal and the interference in a polar coordinate system. The signal is located at a distance A and azimuthal angle φ0, while the interference is at an angle φi, as shown in Fig. 4. The position xk = (xk, yk) of the k-th node is given by (rk, θk), where rk = √(xk² + yk²) and θk = tan^{−1}(yk/xk). Given the positions of the sensor array r = [r1, r2, ..., rN], θ = [θ1, θ2, ..., θN], we adopt the beamforming algorithm presented in [OMPT], where the gain in direction φ is given by the norm of the array factor

F(φ | r, θ) = e^{j (2π/λ) A} (1/N) Σ_{k=1}^{N} e^{j (2π/λ) rk [cos(φ0 − θk) − cos(φ − θk)]}.
For known signal and interference directions, the objective of beamforming is to obtain high signal to interference ratio (SIR) and fine resolution, i.e. we would like to keep the main lobe of the beam pattern as thin as possible while minimizing the power in the interference direction. Let ΔφH be the half power
beam width (HPBW) of the main lobe, as depicted in Figure 5. The power concentrations (accumulated gain) in the directions of the signal and the interference are respectively given by

Ps(r, θ) = ∫_{φ0 − ΔφH/2}^{φ0 + ΔφH/2} |F(φ | r, θ)|² dφ,   Pi(r, θ) = ∫_{φi − ΔφH/2}^{φi + ΔφH/2} |F(φ | r, θ)|² dφ.

We choose the following metric to evaluate the performance of a sensor array geometry:

J(r, θ) = Ps(r, θ) / (Pi(r, θ) ΔφH) = J(x),

where x is the sensor array geometry in Cartesian coordinates. We wish to maximize the value of this metric over various geometries. To elaborate this point further, we give some example geometries and their respective beam patterns in Figure 6. Here, we assume that the signal direction φ0 is 0 degrees and the interference is coming at an azimuthal angle φi = 90 degrees. For comparison, the values of the metric have also been given on top of the beam patterns. Note that the linear array has a narrower beam but a large leakage in the interference direction. Similarly, the circular geometry at the bottom has low interference but a fat beam (i.e., a large ΔφH) in the signal direction. The irregular patterns in the middle have a higher benefit, although they lie in between the two extremes of beam width and interference nullification. Moreover, as described in the above paragraphs, the metric to extremize may not be a function of the beamforming performance alone. Therefore it is reasonable to search over all possible geometries, rather than driving all nodes to a pre-determined regular geometry.

Note that this metric may be different for different realizations of a particular connectivity graph. We make use of the cylindrical algebraic decomposition (CAD) algorithm for computing reachability to get a representative geometry (r, θ) for the connectivity graph G (the details of this computation have been omitted for the sake of brevity). In this way we let J(x) = J(G), where ΦN(x) = G. If J′(G) is some other performance metric associated with the function of the sensor network, then, using the notation in Section 3, Ψ(G) = ν1 J(G) + ν2 J′(G). Similarly, we choose L(Gi, Gi+1) according to a desired transient behavior in the network. We give a snapshot from one such simulation for a particular choice of cost metric in Figure 7. Here, we have purposely chosen a small number of nodes and relatively small displacements to demonstrate the value of the graph processes. The signal and interference directions are the same as in Figure 6. As the graph process evolves, notice the thinning of the main lobe in the signal direction. More pronounced is the attenuation in the interference direction, thus increasing the value of the metric at each transition.
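A numerical sketch of this metric is given below (ours, not the authors' code). It assumes the array factor as reconstructed above with the constant unit-modulus prefactor dropped, estimates the HPBW from a sampled gain pattern, and accumulates |F|² over HPBW-wide windows around the signal and interference directions; the cluster geometry and parameters are illustrative.

    # Sketch: array factor, HPBW estimate, and the metric J = Ps / (Pi * dphi_H).
    import numpy as np

    def array_factor(phi, r, theta, lam, phi0):
        """|F(phi | r, theta)| with the cluster phased to steer towards phi0."""
        k = 2 * np.pi / lam
        terms = np.exp(1j * k * r * (np.cos(phi0 - theta) - np.cos(phi - theta)))
        return np.abs(terms.mean())

    def beam_metric(xy, lam=1.0, phi0=0.0, phi_i=np.pi / 2):
        """Evaluate J for node positions xy given in Cartesian coordinates."""
        r = np.hypot(xy[:, 0], xy[:, 1])
        theta = np.arctan2(xy[:, 1], xy[:, 0])
        phis = np.linspace(-np.pi, np.pi, 2001)
        dphi = phis[1] - phis[0]
        gain = np.array([array_factor(p, r, theta, lam, phi0) for p in phis])

        # Half-power beam width: extent of the main lobe above 1/sqrt(2) of the peak.
        thresh = gain.max() / np.sqrt(2)
        lo = hi = int(np.argmin(np.abs(phis - phi0)))
        while lo > 0 and gain[lo - 1] >= thresh:
            lo -= 1
        while hi < len(phis) - 1 and gain[hi + 1] >= thresh:
            hi += 1
        dphi_h = phis[hi] - phis[lo]

        def power(center):                     # accumulated |F|^2 over a dphi_h window
            mask = np.abs(phis - center) <= dphi_h / 2
            return np.sum(gain[mask] ** 2) * dphi

        return power(phi0) / (power(phi_i) * dphi_h)

    xy = np.random.default_rng(1).uniform(-2.0, 2.0, size=(8, 2))  # illustrative cluster
    print(beam_metric(xy))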
Fig. 6. Beamforming performance for various geometries.
4.3 Other Applications

In principle, the framework developed in Section 3 can be used for any application that requires optimization over connectivity graphs. In recent work [GM05a, dSGM05], it has been shown that the geometrical problem of determining coverage loss in a sensor network can be studied by looking at certain topological invariants of the topological spaces induced from the connectivity graph of the sensor network. This characterization associates with every connectivity graph a number that measures the number of coverage holes in the network. One can therefore use connectivity graph processes to reduce the coverage holes in the network by setting up the appropriate terminal cost and
Fig. 7. Evolution of beam pattern in a connectivity graph process.
the Lagrangian. A full investigation of this application (and many others) is a subject of current research.
Conclusions

We have presented a generic framework for connectivity graph processes. The concepts of feasibility and reachability are useful for obtaining optimal trajectories on the space of connectivity graphs. These graphical abstractions are computable using the techniques of semi-definite programming and CAD, as demonstrated by various simulation results. The framework can be used for a wide range of applications; in particular, the problems of producing low-complexity formations and of collaborative beamforming have been studied using connectivity graph processes.
Acknowledgments

The authors wish to thank Mehran Mesbahi for discussions on dynamic graphs and graph controllability concepts. This work was supported by the U.S. Army Research Office through grant #99838.
Plenary Talk

Network-Embedded Sensing and Control: Applications and Application Requirements

Tariq Samad
Honeywell Labs
Minneapolis, MN
[email protected] The first part of my talk will cover some recent commercially successful applications of networked and embedded sensing and control solutions. These include applications in the process industries, home health care, and building management systems. In these products and solutions, technological advances in sensors, wireless, networks, and knowledge services have enabled new ways of solving outstanding societal and industry problems. Yet in many ways the current state-of-the-practice has just scratched the surface of technological possibility. Realizing the visions that many of us harbor for distributed sensor networks, however, will require mapping technical benefits to economic value; the second part of the presentation discusses some of the complexities that arise in this process. Finally, I will present some recent research results in networked sensing for a military application–urban surveillance with networked UAVs. This research is still just that, but by discussing it in the context of key application requirements I hope to illustrate that an intimate connection between the two can be achieved.
Tariq Samad is a Corporate Fellow with Honeywell Automation and Control Solutions. He has been with various R&D organizations in Honeywell for 19 years, during which time he has been involved in a number of projects that have explored applications of intelligent systems and intelligent control technologies to domains such as autonomous aircraft, building and facility management, power system security, and process monitoring and control. He was the editor-in-chief of IEEE Control Systems Magazine from 1998 to 2003 and currently serves on the editorial boards of Control Engineering Practice and Neural Processing Letters. He has published about 100 articles in books, journals, and conference proceedings and he holds 11 patents with others pending.
Dr. Samad is a Fellow of the IEEE and the recipient of an IEEE Third Millennium Medal and of a Distinguished Member Award from the IEEE Control Systems Society. He received a B.S. degree in Engineering and Applied Science from Yale University and M.S. and Ph.D. degrees in Electrical and Computer Engineering from Carnegie Mellon University.
Simulation of Large-Scale Networked Control Systems Using GTSNetS

ElMoustapha Ould-Ahmed-Vall, Bonnie S. Heck, and George F. Riley
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA 30332-0250
{eouldahm,bonnie.heck,riley}@ece.gatech.edu
This paper presents a simulation framework for distributed control using wireless sensor networks. The proposed framework is an extension of the Georgia Tech Sensor Network Simulator (GTSNetS). The new features include models for the different elements of a control loop: an actuator class, a plant class, and a user-defined controller application class. These models are integrated with the existing models for the various components of a sensor network, different network protocols, energy models, and collaboration strategies. We present different architectures for sensor networks in distributed control, depending on whether the sensing, controller, or actuator elements of the control loop are distributed or centralized. Our tool allows the simultaneous study of the effects of control and network strategies in a distributed control system. The simulation tool is demonstrated on a fault tolerance scheme. An existing distributed fault tolerance method that is applicable to sensor networks is extended to allow nodes to learn about their own failure probabilities and those of their neighboring nodes. Such an approach can be used to give different weights to the information supplied by different nodes depending on their confidence level. The resulting fault tolerance scheme is implemented in a hierarchical networked control system and evaluated using the new simulation tool.
1 Introduction A wireless sensor network consists of a set of nodes powered by batteries and collaborating to perform sensing tasks in a given environment. It may contain one or more sink nodes (base stations) to collect sensed data and relay it to a central processing and storage system. A sensor node can be divided into three main functional units: a sensing unit, a communication unit and a computing unit.
A networked control system consists of a set of control nodes having sensing, control, and actuation capabilities and interacting through a communication network [BLP03]. In such a system, any of the three main control loop tasks of sensing, control, and actuation can be performed in a distributed manner. The use of wireless sensor networks for distributed control offers several benefits. It allows cost reduction and eliminates the need for wiring, which can become costly and difficult when a large number of control nodes is needed for the sensing and control of a large process. However, the use of wireless sensor networks in a control system introduces new challenges; in particular, these networks can suffer from unbounded delays and packet losses.

There are several sensor network simulators available and widely used by the research community. A few simulation frameworks for networked control systems have also been proposed recently. However, we have not seen any simulator that combines the features of large-scale sensor networks with control applications. The co-simulation of the control and network aspects of a networked control system allows the simultaneous study of the effects of control issues and network characteristics on system performance. The control design issues include stability, adaptability, and fault tolerance. Network characteristics affecting the control system performance include bandwidth, reliability, and scalability.

In this paper, we extend the Georgia Tech Sensor Network Simulator (GTSNetS) [OAVRHR05] to simulate distributed control using sensor networks. Several approaches for using sensor networks for distributed control are presented along with their implementations. One of the main features of our solution is its scalability: GTSNetS has been shown to simulate networks of up to a few hundred thousand nodes. The simulator is implemented in a modular way, and the user can choose from different architectural implementations. If a specific approach or algorithm is not available, the user can easily implement it by extending the simulator. The simulator is demonstrated using an existing Bayesian fault tolerance algorithm.

The second major contribution of this paper is an extension of the Bayesian fault tolerance algorithm to adapt to dynamic failure rates. The enhanced algorithm allows nodes to learn dynamically about the operational conditions of their neighbors. Each node then gives different weight factors (confidence levels) to the information received from each of its neighbors. The weight factors are a function of the failure probability of the specific neighbor, which is in turn computed from the reliability of the information received from that neighbor compared to its other neighbors. The simulator facilitates the implementation and evaluation of the algorithm even for a network containing a large number of nodes.

The remainder of the paper is organized as follows. Section 2 presents some of the related work. Section 3 gives an overview of GTSNetS. Section 4 presents the extension to GTSNetS that allows the simulation of networked control systems. Section 5 presents and compares the original and the enhanced fault tolerance algorithms. Section 6 simulates a distributed control scenario in a
sensor network context with fault tolerance capabilities. Section 7 concludes the paper.
2 Related Work

2.1 Sensor Network Simulation

Several sensor network simulators have been developed in recent years. Some of these simulators focus more on the network aspect, while others are more application specific.

SensorSim [PSS00] was built upon the widely used ns-2 [MF97] simulator. It provides models for the different parts of a sensor network architecture: battery, sensors, radio, CPU, etc. It also models the power consumption of these components. A major feature of SensorSim is its ability to support hybrid simulation: integration between the simulated nodes and real sensor nodes collecting real data and running real applications.

The SWAN [LPN+01] simulator has been shown to be capable of simulating sensor networks of a few tens of thousands of nodes. However, SWAN focuses more on the ad hoc networking aspect of sensor networks and less on the sensing function.

TOSSF [PN02], developed by L. F. Perrone and D. M. Nicol of ISTS, is an adaptation of SWAN intended to simulate the execution of TinyOS applications at the source level. It relaxes a major restriction of TOSSIM [LLWC03], the other TinyOS application simulator: TOSSIM requires that all nodes within the simulated network run the same set of applications. However, these two simulators cannot be used to validate new applications for general use on sensor networks running any platform, since they are specifically tailored to TinyOS. In addition, they focus on application behavior and do not necessarily capture all aspects of a sensor network.

SENS [SKA04] is platform-independent, unlike TOSSF and TOSSIM. A main feature of SENS is its detailed environment model. However, SENS lacks realistic energy models in that it assumes constant power consumption values in each operational mode.

2.2 Networked Control Simulation

The simulation of networked control systems has received a growing amount of interest lately. This section discusses the two main simulation frameworks for networked control systems that have been proposed: TrueTime, a Matlab-based simulation framework, and an Agent/Plant extension to ns-2.

TrueTime [HCA02] is based on Matlab/Simulink and allows the simulation of the temporal behavior of multi-tasking real-time kernels containing controller tasks. It provides two event-driven Simulink blocks: a computer block and a network block. The computer block is used to simulate computer control activities, including task execution and scheduling for user-defined threads and interrupt handlers. The network block is used to simulate the
dynamics of a computer network using parameters such as the message structure and the message prioritizing function. However, this network block is not general enough to simulate various types of networks, especially sensor networks. It also suffers from scalability problems.

The widely used ns-2 simulator was extended to allow simulation of the transmissions of plants and controllers in a networked control system [BLP03]. The authors added a plant class and an agent class to ns-2. To the best of our knowledge, this solution is not yet interfaced with any of the ns-2-based sensor network simulators [PSS00]. In addition, it does not allow the simulation of large networked control systems.
3 GTSNetS Overview

GTSNetS was built as an extension of GTNetS to serve as a simulation framework for sensor networks. It inherits and benefits from the basic design of GTNetS as discussed in [Ril03a, Ril03b]. By leveraging much of the existing capabilities of GTNetS, we were able to implement GTSNetS in a modular and efficient manner, leading to the capability to simulate large-scale wireless sensor networks.

GTSNetS was designed in such a way that it would not impose any architectural or design decisions on the user who wants to simulate a particular sensor network. We did not want GTSNetS to be a "do it yourself" type of simulator. Rather, the user can choose from various implemented alternatives: different energy models, network protocols, applications and tracing options. Several different methods for each of these choices are included in our baseline implementation. Should our models not be sufficient for a particular application, the existing models can easily be extended or replaced by the user.

We consider a sensor node as composed of three main functional units: a sensing unit, a communication unit and a computing unit. A battery provides energy for these three units. In addition to regular sensor nodes, a sensor network can contain one or more sink nodes (base stations). These sink nodes interact with sensor nodes to collect sensed data and serve as a relay to the outside world. A sink node has a similar architecture to that of a regular node; the main difference is that it does not have a sensing unit.

Because of the importance of lifetime in sensor networks, GTSNetS provides the capability to track the overall lifetime of a simulated network. It also measures the energy consumption of each one of the functional units and provides detailed statistics for each, allowing the user to study the effects of different architectural choices on lifetime and energy consumption. Each one of the three units consuming energy (sensing unit, communication unit and computing unit) has several energy models to characterize its energy consumption. The implemented energy models vary in terms of complexity, accuracy and level of detail. For example, for the sensing unit, three energy models have been implemented. The first model is very simple and assumes a
linear relationship between the consumed sensing energy and the size in bits of the sensed data. The second sensing energy model takes into account the sensing range (in meters) in addition to the size of the sensed data when computing the sensing energy. The third model offers greater detail by dividing the sensing energy into two categories: the energy consumed by the amplifiers and the energy consumed by the analog-to-digital conversion process. This detailed model allows a better study of the trade-off between energy and performance (e.g., accuracy of sensed data). Similar energy models are implemented for the computing and communication units. The user can choose from these models or add other energy models when necessary (a schematic sketch of the three sensing-energy models follows the feature list below). The detailed description of the GTSNetS implementation can be found in [OAVRHR05].

For the communication unit, several sensor-network-specific protocols are implemented. For example, as routing protocols we have implemented directed diffusion and several geographical routing protocols. At the MAC layer, 802.11 and Bluetooth are also available. For more details, please refer to [OAVRHR05]. Other protocols for wireless ad hoc networks and wired networks, such as DSR, AODV, IP and TCP, are inherited from GTNetS [Ril03a, Ril03b].

The features of GTSNetS, compared to other sensor network simulators, include:
• A unified framework of existing energy models for the different components of a sensor node. This allows the user to choose the energy model that best suits his needs.
• GTSNetS provides three models of accuracy of sensed data and allows for the addition of new models. Modeling the accuracy of sensed data helps in understanding the trade-off of quality versus lifetime.
• The simulator allows the user to choose among different implemented alternatives: different network protocols, different types of applications, different sensors, and different energy and accuracy models. New models, if needed, can easily be added. This makes GTSNetS very suitable for simulating sensor networks, since such networks are application-dependent and their diversity cannot be represented in a single model.
• GTSNetS is currently, to the best of our knowledge, the most scalable simulator specifically designed for sensor networks. It can simulate networks of up to several hundred thousand nodes. The scalability is the result of active memory management and careful programming.
• Finally, GTSNetS can be used to collect detailed statistics about a specific sensor network at the functional unit level, the node level, as well as at the network level.
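The following C++ sketch illustrates the three sensing-energy models described above. The class names, method signatures and cost shapes are illustrative assumptions made for exposition; they are not the actual GTSNetS classes.

#include <cstdint>

// Common interface: energy consumed to sense a given number of bits at a given range.
class SensingEnergyModel {
public:
    virtual ~SensingEnergyModel() = default;
    virtual double Energy(std::uint32_t bits, double rangeMeters) const = 0;
};

// Model 1: energy linear in the number of sensed bits.
class LinearEnergyModel : public SensingEnergyModel {
public:
    explicit LinearEnergyModel(double joulesPerBit) : jPerBit_(joulesPerBit) {}
    double Energy(std::uint32_t bits, double) const override { return jPerBit_ * bits; }
private:
    double jPerBit_;
};

// Model 2: additionally accounts for the sensing range (a quadratic cost is assumed here).
class RangeAwareEnergyModel : public SensingEnergyModel {
public:
    RangeAwareEnergyModel(double jPerBit, double jPerBitPerM2) : a_(jPerBit), b_(jPerBitPerM2) {}
    double Energy(std::uint32_t bits, double range) const override {
        return bits * (a_ + b_ * range * range);
    }
private:
    double a_, b_;
};

// Model 3: splits the cost into amplifier energy and A/D-conversion energy.
class DetailedEnergyModel : public SensingEnergyModel {
public:
    DetailedEnergyModel(double ampJoulesPerBit, double adcJoulesPerSample, std::uint32_t bitsPerSample)
        : amp_(ampJoulesPerBit), adc_(adcJoulesPerSample), bps_(bitsPerSample) {}
    double Energy(std::uint32_t bits, double) const override {
        return amp_ * bits + adc_ * (bits / bps_);
    }
private:
    double amp_, adc_;
    std::uint32_t bps_;
};

The common interface is what lets a user swap one model for another per node and trade simplicity against fidelity, which is the design idea described in the text.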
4 Extension of GTSNetS to Simulate Distributed Control Using Sensor Networks

Control is an important application area of networked embedded systems and of sensor networks in particular. In fact, distributed control has been a standard for large-scale processes, such as industrial automation and mobile robotics. A sensor network can be used in several ways in a control system.

1. Distributed sensing with centralized control and actuation: This approach consists of using an entire sensor network as the sensing entity in a control system. This allows monitoring of a large plant that cannot be covered using a single sensor. It also allows for fault tolerance, since sensor nodes can be deployed so that several nodes cover each part of the plant. Information collected by the different sensor nodes is fused and given as an input to the controller, which runs on a supervisor node outside of the sensor network. The controller generates a signal (action) that is applied to the plant by the actuator without involving the network. Figure 1 illustrates this approach; the arrows in the figure indicate the information flow.
Fig. 1. Distributed sensing with centralized control and actuation
2. Distributed sensing and actuation with centralized control: This approach is similar to the previous one. The main difference is that any corrective action on the plant is now applied by individual sensor nodes after receiving control commands from the controller. The controller is still run centrally at the supervisor node. Actuation messages are addressed either to all nodes in the network or to specific nodes as a function of their sensor readings. This could be the case, for example, when nodes in a specific region are reporting exceptionally high values. Figure 2 illustrates this approach.
Fig. 2. Distributed sensing and actuation with centralized control
3. Distributed sensing, control and actuation: In this approach, the sensing, the control and the actuation are all performed inside the network. In this case, each sensor node acts as a complete control system: each node can collect information about the plant, run control algorithms (controller role) and apply any necessary actions (actuator role). These nodes collaborate to control an entire system; however, each node monitors and actuates a specific plant that is part of a larger system. Each sensor node therefore contains a controller, an actuator and a plant in addition to its normal components. In this approach, the sink node (base station) does not participate in the control process; it plays its traditional roles of collecting information, storing it and relaying it to the outside world when necessary. Figure 3 illustrates this approach.
Fig. 3. Distributed sensing, control and actuation
4. Hierarchical distributed sensing, control and actuation: This approach is similar to the previous one, except that a supervisor node can now affect the parameters of the control algorithm executed on individual sensor nodes. It can, for example, change the reference value at individual nodes or load a new control algorithm depending on the current state of the plant as reported by the individual nodes. It can also change the parameters to adjust to changing network conditions. For example, if the energy level becomes low at some of the nodes, these nodes can be asked to sample at a lower rate. This approach is simulated later in this paper. Figure 4 illustrates this approach.
Fig. 4. Hierarchical distributed sensing, control and actuation
To implement these different control architectures, several classes were added to GTSNetS. Due to the modularity of GTSNetS, the addition of these classes does not have any negative impact on the existing classes, and it is facilitated by the possibility of code reuse in GTSNetS. The first class (class Plant) models a plant. This class is derived from the real sensed object class that already exists in GTSNetS; the main difference is that the plant can receive a command (signal) from an actuator. A second class implements the actuator (class Actuator). An actuator has a plant object associated with it and can act on the plant in several manners depending on the commands received from the controller. Several methods of actuation are implemented; the user can choose from these methods or implement his own if necessary. The controller is modeled using an application class (class ApplicationSNController) derived from the sensor network application class. Each controller has a specific actuator object attached to it. This actuator object performs actions on the plant depending on the control commands received from the controller application.
The different control architectures are implemented by attaching some or all of these classes to individual nodes.

To implement the first architecture (distributed sensing with centralized control and actuation), we attach a plant object to each sensor node. However, since these nodes do not perform the controller and actuator roles, they each have a regular sensor network application, and no actuator object is attached to them. A controller application is attached to the supervisor node, and an actuator object is attached to this controller. This actuator acts on the plant object, which is sensed by the sensor nodes.

In a similar way, the second architecture is implemented by attaching a controller application to the supervisor node. However, this controller application gives commands to actuators that are attached to individual sensor nodes. Each actuator acts on the plant object attached to its sensor node.

In the third architecture, a controller application is attached to each sensor node. This application commands an actuator that acts on a plant attached to the same node. The sink node does not have any role in a fully distributed control application such as this one.

The fourth architecture is implemented in a similar way to the third one, except that a controller application is now also attached to the supervisor node. This application can modify the control algorithms or other parameters at the individual sensor nodes depending on the information they supply.
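As a rough illustration of how the third (fully distributed) architecture might be wired up, the sketch below attaches a plant, an actuator and a controller object to every node. Only the class names (Plant, Actuator, ApplicationSNController) come from the text; the class bodies, the builder function and the ownership pattern are hypothetical stand-ins.

#include <cstddef>
#include <memory>
#include <vector>

struct Plant {};                                     // stands in for class Plant
struct Actuator {
    explicit Actuator(Plant* p) : plant(p) {}        // an actuator acts on one plant
    Plant* plant;
};
struct ApplicationSNController {
    explicit ApplicationSNController(Actuator* a) : actuator(a) {}  // commands one actuator
    Actuator* actuator;
};

struct SensorNode {
    std::unique_ptr<Plant> plant;
    std::unique_ptr<Actuator> actuator;
    std::unique_ptr<ApplicationSNController> controller;
};

std::vector<SensorNode> BuildFullyDistributedArchitecture(std::size_t nodeCount) {
    std::vector<SensorNode> nodes(nodeCount);
    for (auto& n : nodes) {
        n.plant      = std::make_unique<Plant>();                           // plant attached to the node
        n.actuator   = std::make_unique<Actuator>(n.plant.get());           // actuator acts on that plant
        n.controller = std::make_unique<ApplicationSNController>(n.actuator.get());
    }
    return nodes;   // the sink/base station plays no role in this architecture
}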
5 Fault Tolerance Algorithms

A key requirement for the development of large-scale sensor networks is the availability of low-cost sensor nodes operating on limited energy supplies. Such low-cost nodes are expected to have relatively high failure rates. It is therefore important to develop fault-tolerant mechanisms that can detect faulty nodes and take appropriate actions. A possible solution is to provide high redundancy to replace faulty nodes in a timely manner; however, the cost sensitivity and energy limitations of sensor networks make such an approach unsuitable [KPSV02].

Here, we focus on the problem of event region detection. Nodes are tasked to detect when a specific event is present within their sensing range; such an event could be detected through the presence of a high concentration of a chemical substance [KI04]. Each node first determines whether its sensor reading indicates the presence of an event before sending this information to its neighbors or to a sink node. However, in case of failure the sensor can produce a false positive or a false negative, that is, a high reading indicating an event when none occurred, or a low reading indicating the absence of an event when one occurred.

A distributed solution proposed by Krishnamachari et al. in [KI04] is presented in the subsection below. The simulations reported in [KI04] are
duplicated in our simulator, which gives confidence in the accuracy of our simulator. The algorithm in [KI04] assumes that all nodes in the network have the same failure rate and that this rate is known prior to deployment. These are unrealistic assumptions, and we demonstrate through simulation that the algorithm introduces too many errors when they do not hold. We propose an enhanced version of the algorithm in which nodes learn their failure rates as they operate by comparing their detection results with those of their neighbors. This mechanism is shown to significantly reduce the number of introduced errors.

5.1 Distributed Bayesian Algorithm

The solution presented in [KI04] considers a high sensor reading as an indication of the presence of an event, while a low value is considered normal. It relies on the correlation between a node's reading and the readings of its neighbors to detect faults and take them into account. The following binary variables are used to indicate whether a node is in an event region (value 1) or in a normal region (value 0):
• Ti : indicates the real situation at the node (in an event region or not)
• Si : indicates the situation as obtained from the sensor reading; it can be wrong in the case of a failure
• Ri : gives a Bayesian estimate of the real value of Ti using the Si values of the node and its neighbors

It is assumed that all nodes have the same uncorrelated and symmetric probability of failure, p:

P(S_i = 0 \mid T_i = 1) = P(S_i = 1 \mid T_i = 0) = p.   (1)
The binary values Si are obtained by placing a threshold on the reading of the sensor. The sensor readings in an event region and in a normal region are assumed to have means mf and mn, respectively. The error term is modeled as a Gaussian distribution with mean 0 and standard deviation σ. In such a case, p is computed using the tail probability of a Gaussian distribution:

p = Q\!\left(\frac{m_f - m_n}{2\sigma}\right).   (2)
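For a quick numerical check of Eq. (2), the Gaussian tail probability can be evaluated with the complementary error function, Q(x) = 0.5·erfc(x/√2); the values of mf, mn and σ below are purely illustrative.

#include <cmath>
#include <cstdio>

int main() {
    double mf = 1.0, mn = 0.0, sigma = 0.25;           // illustrative values only
    double x = (mf - mn) / (2.0 * sigma);
    double p = 0.5 * std::erfc(x / std::sqrt(2.0));    // Q(x) = 0.5 * erfc(x / sqrt(2))
    std::printf("p = %.4f\n", p);                      // about 0.0228 for these values
    return 0;
}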
Assume each node has N neighbors. Define the evidence E(a, k) as the event that k of the N neighboring nodes report the same conclusion Si = a. Using the spatial correlation, we have

P(R_i = a \mid E_i(a,k)) = \frac{k}{N}.   (3)
Each node can now estimate the value of Ri given the value of Si = a and Ei(a, k). This is given by

P_{aak} = P(R_i = a \mid S_i = a, E_i(a,k)) = \frac{(1-p)\,k}{(1-p)\,k + p\,(N-k)},   (4)

P(R_i = \bar{a} \mid S_i = a, E_i(a,k)) = 1 - P_{aak} = \frac{p\,(N-k)}{(1-p)\,k + p\,(N-k)}.   (5)
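A compact sketch of the per-node computation, assuming p, N and k are already known at the node (the function names are ours, not the paper's): Eq. (4) gives the posterior that the sensor reading is correct, which the randomized decision scheme listed below uses directly.

#include <cstdlib>

// Eq. (4): probability that the true situation matches the local reading S_i = a,
// given that k of the N neighbours report the same conclusion.
double Paak(double p, int N, int k) {
    return ((1.0 - p) * k) / ((1.0 - p) * k + p * (N - k));
}

// Randomized decision scheme (see the list below): keep S_i with probability Paak,
// otherwise flip it to obtain the estimate R_i.
bool EstimateRi(bool Si, double p, int N, int k) {
    double u = static_cast<double>(std::rand()) / RAND_MAX;   // u drawn in [0, 1]
    return (u <= Paak(p, N, k)) ? Si : !Si;
}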
Three decision schemes are proposed:
• Randomized: determine the values of Si, k and Paak; generate a random number u ∈ (0, 1); if u ≤ Paak, then set Ri = Si, else set Ri = ¬Si.
• Threshold: a threshold θ is fixed in advance; determine the values of Si, k and Paak; if θ ≤ Paak, then set Ri = Si, else set Ri = ¬Si.
• Optimal threshold: determine the values of Si and k; if k ≥ N/2, then set Ri = Si, else set Ri = ¬Si.

It has been proved in [KI04] that the optimal value of θ in the threshold scheme is 1 − p, which is equivalent to using the optimal threshold scheme. We therefore study only the randomized and the optimal threshold schemes.

Several metrics have been developed to evaluate the performance of this Bayesian solution under different settings. These metrics include:
• Number of errors corrected: the number of original sensor errors detected and corrected by the algorithm.
• Number of errors uncorrected: the number of original sensor errors left undetected and uncorrected by the algorithm.
• Reduction in errors: the overall reduction in the number of errors, taking into account the original errors and the ones introduced by the algorithm.
• Number of errors introduced: the number of new errors introduced by the algorithm.

A full description of these metrics as well as their theoretical values can be found in [KI04].

5.2 Simulation of the Bayesian Solution Using GTSNetS

To simulate this particular solution, we only need to add one class (class ApplicationSNFToler). The modularity of GTSNetS allowed us to reuse, without any change, all the other modules: computing unit, communication unit, sensing unit containing a chemical sensor with the appropriate accuracy model, and sensed object.

We simulated a sensor network of 1024 nodes deployed in a region of 680 meters by 680 meters. Neighboring nodes are separated by 20 meters, and the communication range was set to 23 meters. A parameter measuring the fault tolerance range was defined in the ApplicationSNFToler class; nodes within this range are considered neighbors and taken into account by the fault
tolerance algorithm to produce the prediction value Ri. The definition of this parameter decouples the neighborhood region in which neighbors are taken into account by the algorithm from the communication range, which is a more realistic approach than the one used in [KI04]. This parameter was set such that each interior node has 4 neighbors taken into account by the algorithm. One source (sensed object) was placed at the lower left corner of the region of interest, and the sensing range was set to 93 meters. These numeric values differ from the ones in [KI04] only by a scaling factor.

To reduce the size of the exchanged messages, an initial neighbor discovery phase is run prior to the execution of the fault tolerance algorithm. This avoids having to send the node location along with every sensor reading message, which greatly reduces the message size and hence the energy cost of the algorithm, but it requires the existence of an identification mechanism [OAVBRH05, OAVBHR05]. For the communication, every node communicates only with nodes in its fault tolerance range. In this specific simulation, this range is less than the communication range; nodes therefore broadcast their messages to the neighboring nodes and there is no need for any routing protocol. When necessary, the user can use one of our implemented sensor network routing protocols, e.g., geographical routing or directed diffusion.

Figures 5 and 6 give some of the performance metric results for the optimal threshold and randomized schemes for various fault rates. These results were obtained by averaging over 1000 runs. We can see that both decision schemes correct a high percentage of the original errors (about 90% for the optimal threshold and 75% for the randomized scheme at a 10% failure rate). These graphs are in accordance with the ones reported in the original paper [KI04], which increases our confidence in the correctness of GTSNetS. The simulations in [KI04] were conducted using Matlab and did not take into account many of the communication aspects and energy constraints present in sensor networks. Clearly, the optimal threshold scheme performs better than the randomized scheme; this is expected and is in accordance with the findings in [KI04]. However, the randomized scheme has the advantage of giving a level of confidence in its decision to set Ri = Si or not, given by Paak. This is not possible in the case of the optimal threshold scheme.

5.3 Enhancement of the Bayesian Fault Tolerance Algorithm

The Bayesian algorithm assumes that all nodes in the network have the same failure probability p. It also assumes that this failure probability is known to the nodes prior to deployment. These two assumptions are unrealistic. In fact, a node can become faulty over time either because of a lower energy level or because of changing environmental conditions, which increases its fault rate. We can also have a heterogeneous sensor network with nodes that have different operational capabilities and accuracy levels.
[Plot: normalized numbers of corrected and uncorrected errors versus failure probability for the original algorithm using the optimal threshold scheme.]
Fig. 5. Performance metrics for the optimal threshold scheme
Fig. 6. Performance metrics for the randomized scheme
For these reasons, we propose an enhancement to the algorithm under which nodes learn their failure rates as they operate. This is done by comparing, over time, the sensor reading Si with the estimated value Ri obtained by taking into account the values reported by the neighbors. We also give different weights to information reported by nodes with different fault rates: the reported information Sj from neighbor j is given a weight of 1 − pj, where pj is the current fault rate of node j.

Several of the previous equations are modified for these dynamic failure rates. Suppose that k of the neighboring nodes obtained sensor decisions identical to the decision Si = a at the current node i. For simplicity of notation, assume that these nodes are numbered 1 to k and the rest of the neighbors are numbered k + 1 to N. We define Pea as the probability of the event Ri = a given that these first k neighbors each have Sj = a. This probability is constant and equal to k/N in the original algorithm; its value is now given by

P_{ea} = P(R_i = a \mid E(a,k)) = \frac{\sum_{j=1}^{k} (1 - p_j)}{\sum_{j=1}^{N} (1 - p_j)},   (6)

where pj is the current failure probability of the jth neighbor. Each node i keeps track of the current failure probability pj of each of its neighbors by counting the number of times the value Sj reported by neighbor j disagreed with the estimated value Ri. This avoids having to send the probability value along with every decision message, which saves energy. The new value of Paak is given by

P_{aak} = P(R_i = a \mid S_i = a, E(a,k)) = \frac{(1 - p_i)\,P_{ea}}{(1 - p_i)\,P_{ea} + p_i\,(1 - P_{ea})}.   (7)
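A hedged sketch of these two updates follows. The function names and containers are ours; pAgreeing and pAllNeighbors hold the node's running estimates pj of its neighbors' failure probabilities, and pi is the node's own current estimate, with the bookkeeping assumed to happen elsewhere.

#include <vector>

// Eq. (6): weighted evidence.  pAgreeing holds the failure estimates p_j of the k
// neighbours whose decision matches S_i = a; pAllNeighbors holds p_j for all N neighbours.
double WeightedEvidence(const std::vector<double>& pAgreeing,
                        const std::vector<double>& pAllNeighbors) {
    double num = 0.0, den = 0.0;
    for (double pj : pAgreeing)     num += 1.0 - pj;
    for (double pj : pAllNeighbors) den += 1.0 - pj;
    return num / den;
}

// Eq. (7): posterior that the local reading is correct, given the node's own current
// failure-probability estimate p_i and the weighted evidence Pea.
double PaakDynamic(double pi, double Pea) {
    return ((1.0 - pi) * Pea) / ((1.0 - pi) * Pea + pi * (1.0 - Pea));
}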
The parameter pi is the current failure probability of node i. The decision schemes are implemented as in the original algorithm using the new expression for Paak. Since the optimal threshold scheme does not use Paak to decide whether the node is faulty or not, we focus our attention on the randomized scheme.

For each failure rate level p, as used in the original algorithm, we compute two adjacent probabilities to introduce variability:

p_1 = p - 0.05,   (8)
p_2 = p + 0.05.   (9)
This keeps the average failure rate in the overall network unchanged at p. Each sensor is randomly assigned one of these two probability levels. The node does not know in advance the failure probability of its sensor; it starts with a conservative estimate equal to the higher of the two probability levels (p2). A node i learns its real probability level as it operates by looking
at the number of times its sensor decision Si disagreed with the estimate Ri of the real situation. This failure probability, pi, is updated at every round.

We compare the performance of the enhanced version of the algorithm with that of the original algorithm, as well as with the original algorithm given prior knowledge of the exact probability level p, identical for all nodes. We obtain the performance by averaging over 1000 runs as in the previous section, using the same network topology and configuration parameters. Figure 7 gives the normalized number of corrected errors under the different experimental settings. Clearly, the number of corrected errors remains relatively constant and does not suffer any degradation from the introduction of unknown and uneven fault rates. The only change is observed for very small probability levels (around 0.05), which is due to the ratio between p and the constant subtracted from (or added to) it to produce p1 and p2.
Fig. 7. Normalized number of corrected errors using enhanced and original algorithms
Figure 8 gives the normalized number of reduced errors under the different experimental settings. We can see that the performance, as measured by the number of reduced errors, is worst for the original algorithm (for error rates of up to 15%), while the enhanced algorithm gives performance similar to the case where the values of p are identical for all nodes and known prior to deployment. The degradation in the performance of the original
algorithm comes from the increased percentage of errors introduced by the algorithm.
Fig. 8. Normalized number of reduced errors using enhanced and original algorithms
Figure 9 gives the normalized number of errors introduced by the algorithm for different p values. We can see that the performance, as observed before, benefits from the enhancement to the algorithm: the number of introduced errors is controlled and remains close to the levels of the algorithm with prior knowledge of error rates that are uniform throughout the network.

In conclusion of this section, we have shown that prior knowledge of the error rates is not necessary for the Bayesian fault tolerance algorithm to perform well and keep the level of errors under control. It is also clear that the assumption of uniform error rates is not necessary. Relaxing these two assumptions makes the algorithm much more realistic.
6 Hierarchical Distributed Control with Fault Tolerance

In this section, we give an example of how our simulator can be used to study the behavior of a simple distributed hybrid control system. Each node is responsible for sensing, controlling and actuating a plant, and a supervisor node is responsible for changing the parameters of the local controllers. For simplicity, we consider a single-input single-output proportional controller with gain K and measurement y. The plant is chosen as a first-order discrete-
Fig. 9. Normalized number of introduced errors using enhanced and original algorithms
time plant with a pole at 1. A hybrid control element is added by having the supervisor node modify the gain at a specific node if the local measurement y falls outside of a given interval [I1, I2]. This is a simple case of hybrid control in which the controller is reconfigured depending on the operating region in which the plant lies. The node informs the supervisor node that it has changed region by sending a message, and the supervisor node sends back a message to update the K value at the specific node.

A geographic routing scheme was used to communicate between the supervisor node and the sensor nodes. The location of the final destination is contained in the message header. Each message is routed through the neighbor that is closest to the final destination among the neighbors that remain alive, and intermediate nodes route messages in a similar way.

The enhanced fault tolerance algorithm presented in Section 5 is incorporated in the control: a node that determines that it is faulty (by comparing its sensor reading with the readings of its neighbors) does not apply any control action and waits until the next reading.

Each node is powered by a battery containing 2 joules at the beginning of the simulation. Typical energy consumption parameters given in [BC02] were used for the energy models. Sensors collect data every 30 seconds, and each node broadcasts a heartbeat message every 100 seconds. These messages allow nodes to keep an updated list of neighbors that are still alive; this list is used in the geographical routing protocol.
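A minimal sketch of the per-node loop just described follows: a first-order discrete-time plant with a pole at 1, a proportional controller, and a supervisor rule that changes the gain K whenever y leaves [I1, I2]. The plant input gain, interval bounds and replacement gain are illustrative assumptions, not values from the paper; in the simulated system the gain change arrives as a message from the supervisor node rather than being applied locally as it is here.

#include <cstdio>

int main() {
    double y = 0.0, r = 1.0;              // measurement and set-point
    double K = 0.5;                       // initial proportional gain
    const double b = 0.1;                 // assumed input gain of the plant
    const double I1 = -0.5, I2 = 0.8;     // assumed operating range watched by the supervisor

    for (int k = 0; k < 100; ++k) {
        double u = K * (r - y);           // proportional control action
        y = y + b * u;                    // plant: y[k+1] = y[k] + b*u[k]  (pole at 1)

        if (y < I1 || y > I2) {
            // Out of range: the node would report this to the supervisor node,
            // which replies with a new gain; the value here is illustrative.
            K = 0.25;
        }
        std::printf("k=%d  y=%.3f  K=%.2f\n", k, y, K);
    }
    return 0;
}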
All nodes were initially configured with a fault probability p = 0.05. However, this probability is dynamically updated using the enhanced fault tolerance algorithm.

The GTSNetS simulator can report the control performance at each node. In addition, it allows for an exploration of other network-related parameters: communication protocols, energy consumption (network-wide and at individual nodes), collaboration strategies, etc. Here, for example, we study the lifetime of the network as a function of the width of the operating range I = I2 − I1 for a network of 1024 nodes. The lifetime is measured as the time from the start of network operation until the network is considered dead; the base station considers the network dead when more than 3/4 of its neighbors are dead. The death of a node is discovered when it no longer sends heartbeat messages.

Figure 10 gives the network lifetime as a function of the tolerance interval for an initial gain K = 0.5. The lifetime increases with the width of the operating range: for larger operating ranges, the sensor readings of individual nodes are less likely to fall outside of the range, which reduces the number of messages sent to the supervisor node and allows its neighbors to live longer.
Fig. 10. Network lifetime as a function of the min-to-max interval
We also studied the percentage of sensor nodes remaining alive after the network death. Since all nodes communicate with the supervisor node, the neighbors of this node are expected to die much sooner than most other nodes. Figure 11 gives this percentage for an initial value of K = 0.5. We observe that this percentage is very high, as expected, and tends to increase with the width of the operating range. However, it remains relatively constant (between
85 and 95% for most values) because the neighbors of the base station still die much sooner than the rest of the network, even for large operating ranges. This problem could be overcome by increasing the network density near the supervisor node.
Fig. 11. Percentage of remaining nodes as a function of the min-to-max interval
7 Conclusion

We described the design and implementation of an extension of GTSNetS that allows the simulation of distributed control systems using sensor networks. This simulation tool can be used to study the combined effects of network parameters and control strategies in a distributed control system. One of the main features of GTSNetS is its scalability with network size: it can be used to simulate networks of several hundred thousand nodes. Another important feature of GTSNetS is that it is easily extensible by users to implement additional architectural designs and models.

We also presented an extension of an existing Bayesian fault tolerance algorithm to adapt to dynamic fault rates. This extension relaxes two major assumptions of the original algorithm. Nodes are no longer assumed to know their failure rates prior to deployment; a node estimates its failure rate dynamically by comparing its sensing values with those of its neighbors. It is also no longer assumed that all nodes have the same failure rate. Relaxing these two assumptions makes the algorithm much more realistic.
Acknowledgments

This work is supported in part by NSF under contract numbers ANI-9977544, ANI-0136969, ANI-0240477, ECS-0225417, CCR-0209179, and by DARPA under contract number N66002-00-1-8934.
neclab: The Network Embedded Control Lab

Nicholas Kottenstette and Panos J. Antsaklis
Department of Electrical Engineering
University of Notre Dame
Notre Dame, IN 46556
{nkottens,antsaklis.1}@nd.edu

Summary. The network embedded control lab, neclab, is a software environment designed to allow easy deployment of networked embedded control systems, in particular wireless networked embedded control systems (wnecs). A wnecs is a collection of interconnected plant sensors, digital controllers, and plant actuators which communicate with each other over wireless channels. In this paper neclab is introduced and explained using a simple ball and beam control application. We focus on wnecs which use the MICA2 Motes.
1 Introduction

Typically, when a controls engineer needs to develop a new closed-loop control system she develops it in phases. The first phase is to develop a mathematical model of the system and synthesize a controller. The second phase is to simulate the control system using tools such as MATLAB [mat]. In the third phase, using the results from the simulations, the engineer integrates sensors, actuators, and remote data acquisition and control equipment into the system. This is done in order to acquire additional data and refine the models so as to optimize the controller. When the third phase is complete, the engineer has optimized and deployed a robust control system. Systems with a higher degree of autonomy will also have fault detection and remote monitoring systems. Typically these digital control systems are developed using a dedicated data acquisition system attached by cable to a computer running real-time control software, such as RTLinux [Yod]. For control systems in which a wired solution is not possible or not desired, the available design tools are limited at best.

In this paper we introduce neclab, a software environment designed to allow easy deployment of networked embedded control systems, in particular wireless networked embedded control systems (wnecs). The components of neclab are presented in the following and described in terms of a classical control experiment, the ball and beam.
Note that most of the tools currently available to help the engineer develop software for wireless embedded systems are geared specifically towards sensing. The majority use Berkeley's TinyOS [Ber]. Note also that the majority of the TinyOS applications listed in [Ber] are not designed to be wirelessly reconfigurable. For example, one reconfigurable system which uses TinyOS is Harvard's moteLab [WASW05], where each mote is connected to a dedicated programming board that is connected to an Ethernet cable; this is necessary in order for each mote to be reconfigured to use TinyOS. A reliable protocol, called Deluge, to enable wireless programming of TinyOS applications has been developed [HC04]. Deluge is currently part of the TinyOS development tree, and should be an integral part of the next stable release of TinyOS.

We considered Deluge, but in view of our sensor and control applications of interest we decided to work with an alternative to the TinyOS operating system called SOS [UCL]. SOS offered a working design for network reprogramming for the following three reasons. First, the SOS operating system utilizes a basic kernel which should only have to be installed on the mote once. Second, the SOS kernel supports small dynamically loadable modules, typically one-twentieth the size of a TinyOS application, which can be distributed over a network. Last, the SOS kernel supports a robust routing protocol, similar to MOAP [SHE03], to distribute modules over a wireless network.

We built neclab, our networked embedded control system software environment, using SOS. Specifically, neclab is a collection of software consisting of five main components. The first component, build utilities, is a set of utilities designed to build and download all required software tools and libraries needed to use neclab. The second component, SOS, is an operating system developed by the Networked and Embedded Systems Lab (NESL) at UCLA. SOS is a highly modular operating system built around a message passing interface (MPI) which supports various processor architectures, including those on the MICA2 Motes. The third component, sos utilities, is a set of utilities that facilitate code development and deployment of SOS modules. The fourth component, necroot, is a file system structure and language designed to seamlessly interconnect individual motes for distributed control. The fifth component, FreeMat utilities, is a set of utilities to facilitate wnecs design using FreeMat. FreeMat is a free interpreter similar to MATLAB but with two important advantages: first, FreeMat supplies a direct interface for C, C++, and FORTRAN code; second, FreeMat has a built-in API for MPI similar to MatlabMPI [Kep].

neclab provides a mini-language built on facilities similar to those supported by UNIX. A modern UNIX OS supports facilities such as pipes, sockets and filters. neclab allows the engineer to develop a wnecs by designing control modules which can be interconnected using networking message pipes. A networking message pipe is an abstraction to pass arrays of structured binary data from one module to another over a network. A networking message type indicates how the data should be handled; for example, descriptors are used to
indicate standard, error, routing, and control messages which are passed over a network (e.g., control messages are indicated by the control message type). Specifically, a networking message pipe is used to interconnect data flows between networking sources, networking filters, and networking sinks. A networking source creates data which will be sent over a network to networking filters and networking sinks. Similarly, a networking filter will receive data from either a networking source or another networking filter; it will then optionally modify the data and send the new data on to another networking filter or to a networking sink. A networking sink is where the network data flow for a given route ends. In order to implement networking message pipes we use the network message passing protocol provided by SOS.

Like UNIX, SOS provides a way to run and halt programs which have a corresponding process id. These executable programs on SOS are known as modules. neclab provides an interface to pass networking configuration messages to a module in order to configure and enable the network flow of data between modules in the wnecs at run-time. Using these facilities we will demonstrate an implementation of a highly parallel wnecs in which a secondary controller is reconfigured while the primary controller maintains a stable control loop; once reconfigured, the roles of the two controllers are switched. Other highlights will illustrate that a controls engineer can create concise routing tables by simply describing a wnecs with neclab; this is a natural result of wnecs in general.

neclab's use of SOS's dynamic memory allocation services allows for a system which enables a control engineer to work through the second and third phases of her design project. This highly configurable environment without wires would have been difficult to implement with TinyOS, since TinyOS does not support dynamic memory management; SOS, on the other hand, does. Other SOS features which neclab utilizes are the ability to dynamically share functions and to load executable files (modules) over a wireless channel while the SOS kernel is still running. SOS implies flexibility, and as a result it was chosen as the core component of neclab. For a more detailed discussion of the advantages and differences of SOS as compared to other solutions, refer to [HKS+05]. neclab is not the first project to utilize SOS; other projects such as Yale's XYZ Sensor Node project [LS05] and various projects at NESL, such as the RAGOBOT [FLT+05], are starting to use SOS.

In presenting neclab, we will illustrate its use through a typical undergraduate control lab problem modified to be a wnecs. We will then generalize this problem by describing a tutorial application which a user can create if five MICA2 Motes and a programming board are available. As the tutorial application is described we will highlight the various components of neclab and the many issues that have to be addressed in order to develop a robust wnecs.
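The source/filter/sink roles introduced above can be pictured with the following purely conceptual C++ sketch; it is not the SOS or neclab API, just an illustration of the data-flow abstraction that networking message pipes interconnect.

#include <cstdint>
#include <functional>
#include <vector>

enum class MsgType : std::uint8_t { Standard, Error, Routing, Control };

struct NetMessage {
    MsgType type;                          // descriptor telling how the data is handled
    std::vector<std::uint8_t> payload;     // array of structured binary data
};

using Sink   = std::function<void(const NetMessage&)>;
using Filter = std::function<NetMessage(const NetMessage&)>;

// A source creates a message and pushes it down the pipe; the filter may modify it,
// and the sink is where the data flow for this route ends.
void RunSource(const Filter& filter, const Sink& sink) {
    NetMessage m{MsgType::Control, {0x01, 0x02}};   // e.g. a sampled set-point
    sink(filter(m));
}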
2 Problem Description

Consider an undergraduate controls laboratory experiment that teaches a student how to control the position of a metal ball on a beam. The experiment uses a motor as an actuator and two variable Wheatstone bridges for sensing; the bridges measure the angular position of the motor and the absolute position of the ball on the beam. In the first laboratory experiment, the students are required to determine the actual model parameters of the ball and beam plant [Unib]. The next lab [Unia] teaches the student how to control the exact position of the ball on the beam. The student designs and implements the control system using MATLAB, Simulink [mat], and the Wincon server for real-time control [Qua]. The sensor inputs and outputs are accessible through the MultiQ board. We are going to replace this system using neclab, the MICA2 motes, and a general purpose I/O board developed for the MICAbot [MGM03]. Figure 1 illustrates such a system.

With neclab installed on a host computer, a MICA2 N gateway mote is typically accessed via a serial port. Figure 1 indicates that the MICA2 N gateway is interconnected to a MIB510 programming board; see [Cro] for additional details on the MIB510.

In order to control the ball and beam plant, the following control loops (jobs) need to be implemented (spawned). First, the MICA2 N actuator needs to reliably control the angular position α of the beam. This is achieved by controlling the angular position θl of the motor. The actuator will receive a desired angular position set-point and will control the motor; in networking terms, the MICA2 N actuator behaves as a networking sink for networking messages and takes actions based on the messages sent to its control standard input. According to [Unia], the desired response time for controlling α should be around 0.5 seconds. This is a fairly aggressive target for a low-data-rate wireless networked control system; as a result, we have initially kept this control loop internal to the MICA2 N actuator, and we link this code statically to the kernel in order to guarantee a stable control loop on start-up.

The second control loop, involving the actual position of the ball, requires around a 4 second settling time, which is reasonable to implement over the wireless channel. This loop is accomplished by the MICA2 N sensor sampling data from the ball position sensor output with the ATmega 128L built-in 10-bit A/D converter (see [Atm] for additional information on this chip's features). The MICA2 N sensor behaves as a networking source for generating networking messages, sending its data along to MICA2 N controller-A and MICA2 N controller-B respectively. Depending on which controller is enabled, the enabled controller behaves as a networking filter by calculating an appropriate command to send to the MICA2 N actuator based on the desired set-point received from the user. Figure 2 illustrates how this system can be implemented using SOS modules and messages, and we will refer to it as we discuss neclab.
3 neclab
Looking at Figure 2, one can appreciate the number of distinct software components required for an engineer to obtain a working wnecs. In this figure the engineer has successfully built and installed kernels on five motes, loaded all the required modules on to the network, and created routing tables in order to create a stable closed loop controller to monitor and maintain the position of the ball. In order to use neclab the engineer must first download the 1.x version of SOS to a unix based PC. The location where SOS is installed will be referred to as SOSROOT; neclab will be in the SOSROOT/contrib/neclab directory, which will be referred to as NECLABROOT. From there all the engineer needs to do is follow the instructions in the NECLABROOT/README SOS-1.X file. The first task that neclab will perform for the user is to download, build and install all the necessary tools to build and install software on the MICA2 motes and to work with the FreeMat environment. Once the tools are installed, neclab will apply some minor patches in order to fully utilize the SOS software. From there the engineer should test and fully understand the example blink lab. The blink lab is discussed in Appendix A.
3.1 build utilities
neclab has been designed so that a user does not require root access to build and install her tools. This offers two distinct advantages, the first being that the user can modify any appropriate component that is needed to maximize the performance of the system. For example, neclab actually downloads and allows the user to build an optimized BLAS (Basic Linear Algebra Subprograms) library using ATLAS (Automatically Tuned Linear Algebra Software) [WP05]. Second, it provides all users with a consistent tool-kit, eliminating potential software bugs associated with not using a consistent tool-chain. The key tool used is the build tool makefiles program, which reads a configuration file located in NECLABROOT/etc/build.conf and generates a custom makefile for each of the following tools (a sketch of what a configuration entry might look like is given after the list):
• perl – key tool for various utilities in neclab [WOC00]
• avr-binutils – used for the MICA2 and MICAz motes [bin]
• avr-gcc – used for the MICA2 and MICAz motes [avrb]
• avr-libc – used for the MICA2 and MICAz motes [avra]
• ATLAS – used with FreeMat [WP05]
• FreeMat [fre]
• uisp – used to load an image on to the MICA2 and MICAz motes [uis]
• tcl – (Tool Command Language) required for the tk library [tcl]
• tk – graphical user interface toolkit library required for python [tcl]
• python – [pyt] used in conjunction with pexpect [pex] for automation
• SWIG – [swi] is a tool to connect programs written in C and C++ with high-level programming languages such as perl and python.
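The exact fields of a build.conf entry are not listed in this paper, so the fragment below is only a guess at what one might look like, written by analogy with the GNU Make, XML-like-tag style of the network.conf entry shown later in this section; every field name, version number, and URL here is hypothetical.

<tool>
TOOL_NAME    = avr-gcc
TOOL_VERSION = 3.4.3
TOOL_URL     = http://example.org/avr-gcc-3.4.3.tar.gz
CONFIGURE    = --target=avr --prefix=${NECLABROOT}/tools
</tool>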
The auto-generated makefiles are then installed in each corresponding NECLABROOT/src/build utilities/< tool > directory and invoked to download, build and install each < tool >. The installation directory is NECLABROOT/tools/. As a result any user can build and use her own tools, without having to ask the system administrator for permission!
3.2 SOS and sos utilities
Referring back to Figure 2, on the local host PC (HOST PC), the engineer has just finished creating a wnecs using the neclab run tool provided by neclab. Each mote has a kernel; however, the kernels do not have to be unique, as is clearly shown in Figure 2. For example, the MICA2 N gateway mote has the sosbase kernel, which has a statically linked module which we will refer to by its process id SOSBASE PID. Other motes such as MICA2 N sensor, MICA2 N controller-A, and MICA2 N controller-B have a blank sos kernel with no statically linked modules. Finally, the MICA2 N actuator has a custom kernel, custom sos, with statically linked modules ANG POS SEN PID (a module which manages the angular position sensor) and MOT ACT PID (a module which controls the angular position of the motor). The custom sos kernel was required to generate a pwm output in order to drive the H-Bridge on the MICAbot board. The remaining modules, which are dynamically loaded and unloaded over the network, are either in an active or inactive state. When in an inactive state they do not consume any additional processor RAM but do take up program space in the flash. In order to load and unload the modules the following programs are required. First, the sos server, sossrv, needs to be built, installed and started. neclab manages this with the makerules and the neclab {sim,run} commands. The makerules first manage building and installing sossrv into NECLABROOT/tools/bin. The neclab {sim,run} commands can be used either to begin a simulation of a wnecs with the neclab sim command or to run a wnecs using the neclab run command. Either command can be treated as equivalent for this discussion. The neclab run command starts the sossrv and connects it to the MICA2 N gateway mote. Figure 2 indicates that sossrv is listening for clients on HOST PC via the loop-back interface on port 7915 while listening for incoming packets filtered through the MICA2 N gateway attached to /dev/ttyS0. Next, the neclab run tool creates an input fifo and starts the SOS modd gw client. The modd gw client is an SOS application that runs natively on a PC; it provides a shell-like interface in which user input can be redirected to the fifo for automation. The modd gw client maintains a database file .mod version db local to where it is started. This database tracks the different dynamic modules which are loaded and unloaded from the network. If another engineer chooses to use the same motes in the lab, she will either need access to this file or will have to re-install new kernels on all the motes. As a result, neclab makes sure that the modd gw is started such that the database is located in a publicly accessible
directory such as /tmp so others can access it and run their own experiments. The neclab run tool then proceeds to build all the required network modules, load them on to the network, and establish the networking routes for the control system. Lastly, the closed loop control system is enabled and can be monitored and modified as necessary. Another tool neclab provides is the create module proto tool, which generates a new module prototype for beginning development. neclab has built into its makerules a mechanism to generate tags files to assist in tracking all the interrelationships that modules have with the kernel and other modules. Once the engineer has a satisfactory implementation she can use the appropriate options from the neclab run tool to easily rebuild and re-install new module images as necessary. These tools provide a stream-lined mechanism for simulating and generating wnecs. Building on SOS's novel technique of tracking module version numbers as they are dynamically loaded and unloaded, neclab has created a file-structure known as necroot to track, simulate, and develop wnecs.
3.3 necroot
Modules are tracked by their process id, similar to a process id generated by the ps command on a unix machine. This id is actually used to direct messages which are passed on an SOS network. Every message passed on the SOS network has a corresponding destination process id and address. The process id field, however, is only 8 bits, which supports only 255 unique process ids. Clearly more than 255 unique modules will be developed to run on the SOS operating system, so there needs to be a clean way to address this limitation. SOS uses two files to define the association of a process id with a given module, mod pid.h and mod pid.c. The gen mod pid tool, combined with makerules and the necroot design, allows for dynamic generation of mod pid.h and mod pid.c for a corresponding lab project. In necroot the NECLABROOT/src/necroot/modules.conf provides a line by line list in which each entry consists of a full path to a corresponding module and a short description of the module. Each item is delimited by a colon. The module process id is parsed directly from each modules.conf entry and added to mod pid.h. The corresponding description is then added to mod pid.c for simulation and debugging. Furthermore, static modules and modules to be loaded over the network are uniquely identified by grouping these modules between the < static >, < /static > and < network >, < /network > tags respectively. The modules.conf file serves as a global file; in it the key modules for neclab are listed. These entries include the moduled pc module (MOD D PC PID) and neclab's configuring module (CONFIGURING PID). The moduled pc module is a statically linked module which runs on the modd gw client. The configuring module provides an interface for creating networking sources, networking sinks and networking filters. It also provides the interface to configure modules and enable the control
standard input/control standard output networking design referred to in the introduction. The remaining modules are identified in the engineer's NECLABROOT/labs/ball beam lab/etc/modules.conf.local file. The next issue is to create a design which allows a user to easily manage programming and tracking each corresponding mote's kernel and module configuration. This is solved by the construction of the NECLABROOT/src/necroot/etc/network.conf and the corresponding NECLABROOT/labs/ball beam lab/etc/network.conf.local files. Each entry follows nearly the same language structure as the build.conf file described in the build utilities section. The only difference is that instead of identifying a tool, each entry describes a mote and all the properties local to that mote. Using the build mote makefiles tool, a custom makefile for each mote described in the network configuration files is generated. Then the necroot directory structure is generated in the ball beam lab directory. Furthermore, the engineer can use neclab run with the appropriate options to build and install the corresponding kernel on to her motes. For reference, a typical network.conf entry for a mote is as follows:
<mote>
SOS_GROUP = 13
ADDRESS = 2
NECLAB_PLATFORM = mica2
KERNEL_DIR = ${NECLABROOT}/src/patched_sos-1.x/sosbase
X = 6
Y = -12
Z = 5
LOC_UNIT = UNIT_FEET
TX_POWER = 255
#CHANNEL = ? has no effect for mica2
</mote>
The configuration language is GNU Make combined with some XML-like tags to separate each mote entry. Although not true for mobile motes, each non-mobile mote has a fixed location. This information is typically used for geographic routing protocols, such as those implemented on Yale's XYZ platform. SOS has built into its kernel the capability to track the location of each mote; hence the X, Y, Z, and LOC UNIT entries. The neclab project assisted in this design by introducing the Z dimension, a LOC UNIT to associate a relative position with a given dimension, and a gps loc variable to track GPS location information. The gateway mote should typically assign itself a GPS location in order to assist with routing and with interconnecting other laboratories around the globe. Presently, gps loc maintains precision down to one second. A sample GPS entry, here at Notre Dame, would have the following format (note that the Z coordinate is an elevation relative to sea level in feet):
GPS_X_DIR = WEST
GPS_X_DEG = 86
GPS_X_MIN = 14
GPS_X_SEC = 20
GPS_Y_DIR = NORTH
GPS_Y_DEG = 41
GPS_Y_MIN = 41
GPS_Y_SEC = 50
GPS_Z_UNIT = UNIT_FEET
GPS_Z = 780
Note that each platform that SOS supports has a wireless radio setting unique to each mote. In the above sample the radio was set to full power. neclab is also working on formalizing the interface to modify the frequency and channel of the radio. As the comment notes, the MICA2 mote does not currently support the channel interface; however, the MICAz mote does. Other parameters, such as the SOS GROUP id, are used to filter out radio packets in the network that do not belong to that particular group. Each mote entry should have a unique (SOS GROUP, ADDRESS) pair. Being able to group motes and designate different channels provides one way to allow multiple laboratories to interact and perform co-operative tasks. For example, the ball and beam lab can be modified to simulate the transfer of one ball from one station to another, as might be done in transferring parts from one assembly cell to another. Furthermore, by building in the ability to group the motes and assign each group to a separate radio channel, we have created a mechanism to create groups which can co-operate without worrying about generating radio interference. We plan to exploit this feature with the MICAz motes when developing routing algorithms between groups. The routing will be implemented between the gateway motes of a group. Each gateway mote will typically be attached to its respective HOST PC and the routing of packets will occur over the Internet. Finally, the issue of declaring the desired kernel is addressed with the KERNEL DIR entry. The KERNEL DIR entry provides the full path to the desired kernel to be loaded and built. Any modules that the engineer plans to statically link in her kernel should be addressed in her respective kernel makefile. After the engineer has successfully built her newly created necroot file structure, the following root tree will result.
[user@host_pc ball_beam_lab]$ find ./ -name "mote*"
./root/group13/mote1
./root/group13/mote2
./root/group13/mote3
./root/group13/mote4
./root/group13/mote5
./root/group14/mote1
./root/group14/mote2
./root/group14/mote3
./root/group14/mote4
./root/group14/mote5
[user@host_pc ball_beam_lab]$
In each mote directory there is a module configuration file corresponding to every module identified in the corresponding modules.conf and modules.conf.local files. These are used in conjunction with the configuring module to set module parameters and set up routes in the network. Looking in the mote1 directory, for example, the user will see the following additional files.
[user@host_pc ball_beam_lab]$ ls -1 root/group13/mote1/*.mod
root/group13/mote1/ball_beam_con.mod
root/group13/mote1/ball_pos_sen.mod
root/group13/mote1/configuring.mod
root/group13/mote1/moduled_pc.mod
root/group13/mote1/ang_pos_sen.mod
root/group13/mote1/mot_act.mod
[user@host_pc ball_beam_lab]$
These user-editable files provide an interface for using the configuring module to create routes. These routes can be configured using the command line or can optionally be declared in the routing.conf file. All lines starting with a # are comments. What the following segment of routing.conf should illustrate is that we have created a compact language to describe networking routes. The net source, net filter, and net sink commands are intended to emphasize that a module will be configured to be in one of those three states. The optional arguments such as --d{1,2} show that the engineer can describe up to two destinations, and the behavior is determined by the respective --m{1,2} option which describes the networking message type for each destination. This allows an engineer, for example, to redirect control data flow into error data flow for monitoring. It should also be noted that although we are using the necroot file system to maintain and describe our routes, an ad-hoc routing module could be designed to utilize the configuring module interface in order to build arbitrary networking routes based on some metric. This language can further be utilized to describe these routes using a graphical user interface. The ball beam lab's etc/routing.conf file is as follows:
#!/bin/sh
#routing.conf
#define some constants
TIMER_REPEAT=0;TIMER_ONE_SHOT=1
SLOW_TIMER_REPEAT=2;SLOW_TIMER_ONE_SHOT=3
MSG_STANDARD_IO=35;MSG_ERROR_IO=36
MSG_CONTROL_IO=37;MSG_ROUTING_IO=38
cd $NECLABROOT/labs/ball_beam_lab/root/group13
#Main control-loop
#MICA_2_N_sensor -> MICA_2_N_controller-A -> MICA_2_N_actuator
#Secondary control-route
#MICA_2_N_sensor -> MICA_2_N_controller-B
#Configure the MICA_2_N_sensor (mote2/ball_pos_sen) routes
net_source -m mote2/ball_pos_sen -t "$TIMER_REPEAT:1024" \
  --d1=mote3/ball_beam_con --m1=$MSG_CONTROL_IO \
  --d2=mote4/ball_beam_con --m2=$MSG_CONTROL_IO
#Configure MICA_2_N_controller-A (mote3/ball_beam_con) routes
net_filter -m mote3/ball_beam_con -t "$TIMER_REPEAT:2048" \
  --d1=mote5/mot_act --m1=$MSG_CONTROL_IO \
  --d2=mote1/moduled_pc --m2=$MSG_ERROR_IO
#Configure MICA_2_N_controller-B (mote4/ball_beam_con)
net_sink -m mote4/ball_beam_con -t "$TIMER_REPEAT:2048"
#Configure MICA_2_N_actuator (mote5/mot_act)
net_sink -m mote5/mot_act -t "$TIMER_REPEAT:256"
#Activate the routes
net_enable {mote5/mot_act,mote4/ball_beam_con}
net_enable {mote3/ball_beam_con,mote2/ball_pos_sen}
As shown in Figure 2, three routes are clearly established. One route establishes the control loop from sensor to controller to actuator. A second route delivers sensor and controller debugging information back to a gateway mote. A third route enables a second controller to act as a slave. The ball-position-sensor module on mote2 is configured as a networking source, generating a control standard input output message to be sent to the ball-beam-controller module on mote3 every second, or 1024 counts. The ball-beam-controller on mote3 is configured as a networking filter. As a networking filter it handles control standard input output messages from mote2's ball-position-sensor, computes the appropriate command and sends a control standard input output message to mote5's motor-actuator module. Furthermore, mote3's ball-beam-controller module will generate a debugging message destined for the gateway mote1's non-resident moduled pc module every two seconds. By sending information to the gateway mote, neclab can support remote monitoring. The moduled pc module was arbitrarily chosen to illustrate that an arbitrary non-resident module id can be reserved in order to pass a message up to the sossrv program, and a client can be designed to handle and display this message on the engineer's HOST PC display. Terminating the control loop route as a networking sink, mote5's motor-actuator handles control standard input output messages from mote3's ball-beam-controller and actuates the beam. The motor-actuator controls the angular position of the beam and requires a faster control loop of a quarter of a second; hence, the timer is set to 256 counts.
3.4 FreeMat utilities
The FreeMat utilities will provide an interface for users familiar with MATLAB to appropriately modify configuration parameters for neclab modules
designed and written in C. These utilities are currently the least developed for neclab; however, the major design problems have been confronted. There were numerous obstacles which had to be overcome in order to develop these utilities. The first accomplishment was getting FreeMat to build and install. The second was to integrate ATLAS, so that the engineer can have an optimized matrix library for simulation and development. The third was to develop the configuring module to allow a higher level networking protocol to be implemented in order to interconnect modules in a manner similar to the MPI protocols identified in the introduction. Lastly, we identified SWIG as a possible candidate to assist with generating shared libraries to allow us to interface our SOS modules with FreeMat. We have used SWIG to generate interface files and shared libraries for python, which we have used to create our initial graphical user interface application. We have also used SWIG to interface to our configuring module and do initial testing of the module. Although FreeMat does provide a native interface for C and C++ programs, we feel that learning to effectively use SWIG to handle interfacing with SOS will allow a more flexible development environment in which users of neclab can use whatever high-level software tools they desire. The FreeMat utilities will allow an engineer to generate her own routing tables while allowing users to receive and monitor data from modules using the FreeMat client. The FreeMat client will connect to the sossrv and relay all data destined for the FREEMAT MOD PID. Setting parameters and displaying data should be transparent due to the configuring interface provided by the combined configuring module and the data flash swapping interface provided by SOS. The configuring interface provides all the necessary elements for a module to establish up to two routes, configure a timer and change up to 8 bytes of parameter data in the module's RAM. To handle larger data sizes the user has two options. The first is to rely on the larger SOS RAM memory blocks of 128 bytes. The difficulty is that there are only four 128 byte RAM blocks available for the MICA2 and MICAz motes on SOS. The problem is further compounded in that the radio requires at least one 128 byte block to receive an SOS message over the network. In order to simultaneously send and receive a 128 byte message, a second 128 byte block of RAM needs to be allocated by the radio. This means that the user essentially has only two 128 byte blocks of RAM available for allocation, and they should be used for temporary operations such as being allocated to perform linear algebra routines. The second option is to dedicate a section of internal flash memory for storing configuration parameters and reading the configuration parameters into the local stack during module run-time. This is the preferred option because swapping 256 bytes of data into the stack should only require around a tenth of a millisecond. This is a feasible option as long as the engineer utilizes the co-operative scheduler provided by SOS, and avoids interrupt service routines and nested function calls which require large amounts of the stack's memory. Being able to effectively manage the RAM and stack will allow neclab to support a much larger design space. For example, by gaining the ability to
configure up to 256 bytes of data, the engineer can begin to develop four-by-four full-state observers. The following sections of configuring.h illustrate both the networking configuration and the control standard input interface. The networking configuration is defined by the configuring message type. The control standard input is handled using the standard io message t and indicated by the MSG CONTROL IO message id.
#define MSG_CONFIGURING MOD_MSG_START
#define MSG_CONFIGURING_ENABLE (MOD_MSG_START + 1)
#define MSG_CONFIGURING_DISABLE (MOD_MSG_START + 2)
/* #define MSG_*_IO */
#define MSG_STANDARD_IO (MOD_MSG_START + 3)
#define MSG_ERROR_IO (MOD_MSG_START + 4)
#define MSG_CONTROL_IO (MOD_MSG_START + 5)
#define MSG_ROUTING_IO (MOD_MSG_START + 6)
/* #define MSG_*_IO */
#define CONFIGURING_SOURCE (1

0, Y1, Y2, Si, Ri > 0, 4n × 4n matrices Z1i, Z2i, 2n × 4n matrices Ti, n × 1 matrix L, 1 × n matrix K and α > 0 that satisfy the following matrix inequalities for i ∈ {1, 2}:
[ U0ᵀ X U0 − Q0    J0 + αU0 ]
[ *                Y        ]  > 0,        (16)
[ Uiᵀ X Ui − Qi    Ji + αUi ]
[ *                Y        ]  > 0,        (17)
[ Ri    Ti  ]
[ *     Z1i ]  > 0,        (18)
where X = α² Y⁻¹,
X = [ X1  0 ;  0  X2 ],    Y = [ Y1  0 ;  0  Y2 ].
Since A0 , A1 , A2 are linear functions of K and L, the matrix inequalities in (16) are in the form of LMIs also in the unknowns. However, the condition X = α2 Y −1 is not convex. In the next section we introduce a numerical procedure to solve this non-convex problem. Remark 4. A simple but conservative way to make the matrix inequalities in Theorem 1 suitable for controller synthesis for anticipative controllers with regard to Remark 2, consists of requiring that P2 > 0, P3 = ρP2 , for some positive constant ρ > 0 and making the (bijective) change of variables Y = P2 L, which transforms (12) into YC − −T Ψ ρY C < 0, ∗ −S R ∗
ρC Y − CY Z2
> 0,
R ∗
T > 0, Z1
with Ψ given by (13). This inequality is linear in the unknowns and can therefore be solved using efficient numerical algorithms. The observer gain is found using L = P2−1 Y . This procedure introduces some conservativeness because it restricts P3 to be a scalar multiple of P2 .
Numerical procedure
We modify the complementarity linearization algorithm introduced in [GOA97] to construct a procedure to determine the feasibility of the matrix inequalities in Theorem 2 by solving a sequence of LMIs as follows:
1. Pick α.
2. Find a feasible solution, denoted by X0, Y0, for the set of LMIs (16) and
[ X    I     ]
[ I    α⁻² Y ]  ≥ 0.        (19)
3. Find the feasible solution, denoted by Xj+1, Yj+1, that solves the following problem Σj: min tr(Xj Y + X Yj) subject to (16), (19).
4. Check whether (12a) and (12b) are satisfied with N = α⁻¹ X (the stopping criterion). If the matrix inequalities are satisfied, exit. Otherwise set j = j + 1 and go to step 3 if j < c (a preset number).
The sequence of objective values of Σj is monotonically decreasing and, if the LMIs in (16) are feasible with the constraint X = α² Y⁻¹, it is lower bounded by 8n × α⁻², achieved at X = α² Y⁻¹. It is numerically very difficult to obtain the minimum such that tr(Xj Yj+1 + Xj+1 Yj) equals 8n × α⁻². Instead we choose (12a) and (12b) with N = α⁻¹ X as the stopping criterion. If after c iterations the stopping criterion is not satisfied, we pick another α and continue the line search until the stopping criterion is satisfied.
Examples
Example 1. [BPZ00a, ZB01] Consider the following plant
[ ẋ1 ]   [ 0    1   ] [ x1 ]   [ 0   ]
[ ẋ2 ] = [ 0  −0.1  ] [ x2 ] + [ 0.1 ] u,
and assume that all the states are available for feedback. We assume there is no delay and packet dropout in the network and the state feedback K = [3.75 11.5] stabilizes the plant. [BPZ00a] model the system as a hybrid system and show that the closed loop is stable as long as the sampling intervals are constant and equal to 4.5 × 10⁻⁴. Later, [ZB01] find the less conservative upper bound 0.0593 for variable sampling intervals. Based on an exhaustive search, the same authors find that the "true" upper bound is roughly 1.7194. The maximum variable sampling interval based on Theorem 1 is 0.8965.
Fig. 3. Number of consecutive packet dropouts versus the maximum of actual network delay. Results in our paper and [YWCH04] are shown by ∗ and respectively.
Suppose now that just the first state is available for feedback. Based on Theorem 2 and the proposed algorithm, we design a controller to stabilize the plant when the sampling interval of the measurement channel is constant and equal to hsk = 0.5, ∀k ∈ N. A non-anticipative control unit with
K = [ 3.3348  9.9103 ],    L = [ 0.6772  0.1875 ]ᵀ,
stabilizes the plant for any actuation sampling intervals such that ha ≤ 0.7330. With the same measurement sampling interval, an anticipative control unit with
K = [ 28.5347  83.8626 ],    L = [ 0.3518  0.0492 ]ᵀ,
stabilizes the plant for any actuation sampling intervals such that ha ≤ 0.976. As we expected, the anticipative controller stabilizes the plant for larger sampling intervals.
Example 2. [YWCH04] consider the following state-space plant model
[ ẋ1 ]   [ −1.7  3.8 ] [ x1 ]   [ 5    ]
[ ẋ2 ] = [ −1    1.8 ] [ x2 ] + [ 2.01 ] u,
y(t) = [ 10.1  4.5 ] [ x1 ; x2 ],
with the controller directly connected to the actuator, which means that there is no delay, sampling, and packet dropouts in the actuation channel. [YWCH04] show that the controller with gains
K = [ −0.2115  2.5044 ],    L = [ 0.1043  0.0518 ]ᵀ,
stabilizes the system for any fictitious delay τ s in the interval [0, 0.3195]. Consequently, as long as (tk+1+ms − tk) + τks ≤ 0.3195 holds, the closed-loop system remains stable. Fig. 3 shows the number of consecutive packet dropouts versus the actual network delay such that the system remains stable, in which hs := tk+1 − tk = 0.1 s, ∀k ∈ N. Our result reveals a significant improvement in comparison to [YWCH04]. For instance, when the measurement channel is loss-less (ms = 0, τi max − τi min = hs), the LMIs in Theorem 2 are feasible up to τi max = 0.865, which means the closed-loop system with
K = [ −1.7436  1.1409 ],    L = [ 0.0675  0.0267 ]ᵀ,
is stable for any τks ∈ [0, 0.765], ∀k ∈ N. If we assume a maximum number of consecutive packet dropouts ms = 6 (which corresponds to τi max − τi min = 7 × hs), the LMIs are feasible up to τi max = 0.773 and the closed-loop system with
K = [ −0.5310  0.1668 ],    L = [ 0.0564  0.0221 ]ᵀ,
is stable for any τks ∈ [0, 0.073], ∀k ∈ N. As expected, for a smaller number of consecutive packet dropouts, the system remains stable for larger τi max .
Fig. 4. Simulation results considering the nonlinear model for the inverted pendulum (a) L2 norm of state x, (b) L2 norm of error e = x(t + ha ) − z(t).
Example 3. [LCMA05] In this example we design an anticipative controller for an inverted pendulum. The linearized model is given by
ẋ = [ 0   0                          1                        0 ;
      0   0                          0                        1 ;
      0   −3mg/(m + 4M)              −4(β + ε)/(m + 4M)       0 ;
      0   3(M + m)g/(l(m + 4M))      3(β + ε)/(l(m + 4M))     0 ] x
    + [ 0 ;  0 ;  4α/(m + 4M) ;  −3α/(l(m + 4M)) ] u,
where x = [ X  θ  Ẋ  θ̇ ]ᵀ; M, m, l, g are the mass of the cart, the mass of the pendulum, half of the length of the pendulum, and the acceleration due to gravity, α, β are motor specifications and ε is the viscous friction. The values of the variables are given in [LCMA05], and ε = 2. The sampling intervals of the sensor and measurement channels are equal to 0.05, and we assume τ a = 0. Our goal is to find K and L such that the upper bound on the tolerable network delays in the measurement channel τks is maximized. The LMIs are feasible up to τi max = 0.195 when τi min = 0.05 (regarding (10), τi min = ha + τ̄ a, where we assumed τ̄ a = 0). The observer gain L is obtained by solving the LMIs in Remark 4. The state feedback gain K is chosen such that A − BK is asymptotically stable. With
K = [ −0.4802  −5.4503  −14.9865  −2.9440 ],
L = [ 0.9706  −0.1100 ;  −0.2082  −0.7815 ;  0.0369  5.1597 ;  0.4441  26.4242 ],
the closed-loop system remains stable even in the presence of variable measurement channel delay smaller than 0.095 (note that τi max = ha +τks +(tsk+1 −tsk )). Fig. 4(a), Fig. 4(b) show the norm of x(t) and e(t) respectively when the pendulum’s non-linear model in [LCMA05] is used with the initial condition 0.1 0.1 0 0.15 .
5 Conclusions and Future Work We proposed two types of control units: non-anticipative and anticipative. NCSs with an LTI plant model and anticipative or non-anticipative controller can be modeled by a DDE such as (11). We found sufficient conditions for asymptotic stability of (11) in the form of matrix inequalities and presented a procedure to design output feedback control unit for NCSs. Our method shows significant improvement in comparison to the existing results. We will extend our results to H∞ or H2 design. The problem of stabilizing the plant in the presence of saturation in the control loop will also be considered.
Appendix
Equation (11) can be written in the equivalent form [FS02b]
ẋ(t) = y(t),
−y(t) + Σ_{i=0}^{2} Ai x(t) − Σ_{i=1}^{2} Ai ∫_{t−τi}^{t} y(s) ds = 0,        (20)
and consider the following Lyapunov-Krasovskii functional:
V(t) = xᵀ P1 x + Σ_{i=1}^{2} ∫_{−τi max}^{0} ∫_{t+θ}^{t} yᵀ(s) Ri y(s) ds dθ + Σ_{i=1}^{2} ∫_{t−τi min}^{t} xᵀ(s) Si x(s) ds,        (21)
where P1 > 0. Differentiating the first, second and third terms of (21) with respect to t respectively gives
2 xᵀ P1 ẋ(t) = 2 x̃ᵀ(t) Pᵀ [ ẋ ; 0 ],        (22a)
Σ_{i=1}^{2} ( τi max yᵀ(t) Ri y(t) − ∫_{t−τi max}^{t} yᵀ(τ) Ri y(τ) dτ ),        (22b)
Σ_{i=1}^{2} ( xᵀ(t) Si x(t) − xᵀ(t − τi min) Si x(t − τi min) ),        (22c)
where x̃ = [ x(t) ; y(t) ]. Substituting (20) into (22a),
dV(t)/dt ≤ x̃ᵀ(t) Ψ̃ x̃ − Σ_{i=1}^{2} xᵀ(t − τi min) Si x(t − τi min) − Σ_{i=1}^{2} ∫_{t−τi max}^{t} yᵀ(τ) Ri y(τ) dτ + η,
where
Ψ̃ = Pᵀ [ 0  I ;  Σ_{i=0}^{2} Ai  −I ] + [ 0  I ;  Σ_{i=0}^{2} Ai  −I ]ᵀ P + Σ_{i=1}^{2} [ Si  0 ;  0  τi max Ri ],        (23)
η = Σ_{i=1}^{2} 2 x̃ᵀ(t) Pᵀ [ 0 ; Ai ] ∫_{t−τi}^{t} y(s) ds.
By the Moon-Park inequality [MPKL01], a bound on the cross term η can be found as follows:
t
η≤
y(s) x ˜(t)
t−τi min
i=1 2
t−τi min
+ i=1
t−τi
2
t
i=1
t−τi
= 2
y(s) x ˜(t)
i=1
t−τi
2
t
+2
t−τi min
i=1
Ti − 0 Ai P Z1i
Ri ∗
y(s) ds x ˜(t)
T˜i − 0 Ai P Z2i
y(s) ds x ˜(t)
y (s)Ri y(s)ds
t−τi min
+2
Ri ∗
y (s) T˜i − 0
y (s) Ti − 0
Ai P x ˜(t)ds Ai P x ˜(t)ds
2
+
x ˜ (t) τi min Z1i + (τi − τi min Z2i )˜ x(t), i=1
where Ri ∗ By choosing T˜i = 0 2
t
i=1
t−τi
η≤
Ti ≥ 0, Z1i
Ri ∗
T˜3 ≥ 0. Z2i
(24)
Ai P ,
y(s) Ri y(s)ds + 2x (t) Yi − 0
Ai P x ˜(t)
2
x (t − τi min ) Yi − 0
−2
Ai P x ˜(t)
i=1 2
˜(t). x ˜ (t) τi min Z1i + (τi max − τi min )Z2i x
+ i=1
Based on the Lyapunov-Krasovskii theorem, (11) is asymptotically stable if dV/dt ≤ −ε‖x‖² for some ε > 0. Hence the system is asymptotically stable if (12a) holds. However, any row and column of (12a) except the first block row and column can be zero. The inequalities in (12b) and (12c) are in fact non-strict. However, for simplicity, and since there is no numerical advantage, we state them as strict inequalities.
On Quantization and Delay Effects in Nonlinear Control Systems Daniel Liberzon Coordinated Science Laboratory University of Illinois at Urbana-Champaign Urbana, IL 61801
[email protected] Summary. The purpose of this paper is to demonstrate that a unified study of quantization and delay effects in nonlinear control systems is possible by merging the quantized feedback control methodology recently developed by the author and the small-gain approach to the analysis of functional differential equations with disturbances proposed earlier by Teel. We prove that under the action of a robustly stabilizing feedback controller in the presence of quantization and sufficiently small delays, solutions of the closed-loop system starting in a given region remain bounded and eventually enter a smaller region. We present several versions of this result and show how it enables global asymptotic stabilization via a dynamic quantization strategy.
1 Introduction To be applicable in realistic situations, control theory must take into account communication constraints between the plant and the controller, such as those arising in networked embedded systems. Two most common phenomena relevant in this context are quantization and time delays. It is also important to be able to handle nonlinear dynamics. To the best of the author’s knowledge, the present paper is a first step towards addressing these three aspects in a unified and systematic way. It is well known that a feedback law which globally asymptotically stabilizes a given system in the absence of quantization will in general fail to provide global asymptotic stability of the closed-loop system that arises in the presence of a quantizer with a finite number of values. One reason for this is saturation: if the quantized signal is outside the range of the quantizer, then the quantization error is large, and the control law designed for the ideal case of no quantization may lead to instability. Another reason is deterioration of performance near the equilibrium: as the difference between the current and the desired values of the state becomes small, higher precision is required,
and so in the presence of quantization errors asymptotic convergence is typically lost. Due to these phenomena, instead of global asymptotic stability it is more reasonable to expect that solutions starting in a given region remain bounded and approach a smaller region. In [Lib03a] the author has developed a quantized feedback control methodology for nonlinear systems based on results of this type, under a suitable robust stabilization assumption imposed on the controller (see also [Lib03c] for further discussion and many references to prior work on quantized control). As we will see, this robustness of a stabilizing controller with respect to quantization errors (which is automatic in the linear case) plays a central role in the nonlinear results. The effect of a sufficiently small time delay on stability of a linear system can be studied by standard perturbation techniques based on Rouch´e’s theorem (see, e.g., [GKC03, HL01]). When the delay is large but known, it can in principle be attenuated by propagating the state forward from the measurement time. However, the case we are interested in is when the delay is not necessarily small and its value is not available to the controller. There is of course a rich literature on asymptotic stability of time-delayed systems [HL93, GKC03], but these results are not very suitable here because they involve strong assumptions on the system or on the delay, and asymptotic stability will in any case be destroyed by quantization. On the other hand, there are very few results on boundedness and ultimate boundedness for nonlinear control systems with (possibly large) delays. Notable exceptions are [Tee98, WP04, WB99a], and the present work is heavily based on [Tee98]. In that paper, Teel uses a small-gain approach to analyze the behavior of nonlinear feedback loops in the presence of time delays and external disturbances. Our main observation is that his findings are compatible with our recent results on quantization; by identifying external disturbances with quantization errors, we are able to naturally combine the two lines of work. For concreteness, we consider the setting where there is a processing delay in collecting state measurements from the sensors and/or computing the control signal, followed by quantization of the control signal before it is transmitted to the actuators. Other scenarios can be handled similarly, as explained at the end of the paper. Assuming that an upper bound on the initial state is available, our main result (Theorem 1 in Section 3) establishes an upper bound and a smaller ultimate bound on resulting closed-loop trajectories. Section 4 contains some interpretations and modifications of this result, and explains how global asymptotic stability can be recovered by using the dynamic quantization method of [BL00a, Lib03a]. In what follows, we will repeat some of the developments from [Tee98] in order to make the paper self-contained and also because we will need some estimates not explicitly written down in [Tee98]. We remark that the results of [Tee98] were also used in [TNK98] for a different purpose, namely, to study the effects of sampled-data control implementation. We take the delay to be fixed for simplicity; however, everything readily generalizes to the case of a time-varying bounded delay (by simply replacing the value of the delay with
its upper bound in all subsequent arguments). This latter type of delay is used for modeling sampled-data control systems (see, e.g., [FSR04a] and the references therein), thus such an extension could be useful for studying systems with sampled-data quantized feedback.
2 Notation and Preliminaries
Consider the system
ẋ = f(x, u)
where x ∈ Rⁿ is the state, u ∈ Rᵐ is the control, and f : Rⁿ × Rᵐ → Rⁿ is a C¹ function. The inputs to the system are subject to quantization. The (input) quantizer is a piecewise constant function q : Rᵐ → Q, where Q is a finite subset of Rᵐ. Following [Lib03a, Lib03c], we assume that there exist real numbers M > Δ > 0 such that the following condition holds:
|u| ≤ M  ⇒  |q(u) − u| ≤ Δ.        (1)
This condition gives a bound on the quantization error when the quantizer does not saturate. We will refer to M and Δ as the range and the error bound of the quantizer, respectively. We consider the one-parameter family of quantizers
qµ(u) := µ q(u/µ),   µ > 0.        (2)
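As a concrete illustration of (1) and (2), the C sketch below implements one admissible scalar (m = 1) quantizer: a saturated uniform grid with finitely many values, range M and error bound Δ. The particular grid and the numeric values of M and Δ are assumptions made only for this example.

#include <math.h>

#define M_RANGE 8.0     /* range M (assumed value)           */
#define DELTA   0.5     /* error bound Delta (assumed value) */

/* Base quantizer q: finitely many values; |u| <= M implies |q(u) - u| <= Delta. */
double q(double u)
{
    double cell = 2.0 * DELTA;            /* grid spacing 2*Delta       */
    double v = cell * round(u / cell);    /* nearest grid point         */
    if (v >  M_RANGE) v =  M_RANGE;       /* saturate outside the range */
    if (v < -M_RANGE) v = -M_RANGE;
    return v;
}

/* Zoomed quantizer (2): range M*mu and error bound Delta*mu. */
double q_mu(double u, double mu)
{
    return mu * q(u / mu);
}

Increasing µ enlarges the region over which the error bound holds at the price of coarser resolution, which is exactly the trade-off exploited by the dynamic "zooming" strategy discussed in Section 4.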
Here µ can be viewed as a "zoom" variable. This parameter is in general adjustable, but in this section we take µ to be fixed. Assumed given is some nominal state feedback law u = k(x), which is C¹ and satisfies k(0) = 0. We take this feedback law to be stabilizing robustly with respect to actuator errors, in the following sense (see Section 4 for a discussion of this assumption and a way to relax it).
Assumption 1 There exists a C¹ function V : Rⁿ → R such that for some class K∞ functions¹ α1, α2, α3, ρ and for all x ∈ Rⁿ and v ∈ Rᵐ we have
α1(|x|) ≤ V(x) ≤ α2(|x|)        (3)
and
|x| ≥ ρ(|v|)  ⇒  (∂V/∂x) f(x, k(x) + v) ≤ −α3(|x|).        (4)
Recall that a function α : [0, ∞) → [0, ∞) is said to be of class K if it is continuous, strictly increasing, and α(0) = 0. If α is also unbounded, then it is said to be of class K∞ . A function β : [0, ∞) × [0, ∞) → [0, ∞) is said to be of class KL if β(·, t) is of class K for each fixed t ≥ 0 and β(r, t) decreases to 0 as t → ∞ for each fixed r ≥ 0. We will write α ∈ K∞ , β ∈ KL, etc.
We let the state measurements be subject to a fixed delay τ > 0. This delay, followed by quantization, gives the actual control law in the form
u(t) = qµ(k(x(t − τ)))        (5)
and yields the closed-loop system
ẋ(t) = f(x(t), qµ(k(x(t − τ)))).        (6)
We can equivalently rewrite this as
ẋ(t) = f(x(t), k(x(t)) + θ(t) + e(t))        (7)
where
θ(t) := k(x(t − τ)) − k(x(t))        (8)
and
e(t) := qµ(k(x(t − τ))) − k(x(t − τ)).        (9)
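To see how (5)-(9) fit together, here is a small, self-contained C simulation of the closed loop for an assumed scalar plant ẋ = x + u with nominal feedback k(x) = −2x; the plant, gain, delay, step size, and quantizer parameters are all illustrative assumptions rather than values taken from the paper.

#include <math.h>
#include <stdio.h>

#define DT     0.001    /* Euler integration step                     */
#define DSTEPS 50       /* delay tau = DSTEPS * DT = 0.05 s (assumed) */

static double f(double x, double u) { return x + u; }      /* assumed plant    */
static double k(double x)           { return -2.0 * x; }   /* nominal feedback */

/* Saturated uniform quantizer family: grid spacing mu, range 8*mu. */
static double q_mu(double u, double mu)
{
    double v = mu * round(u / mu);
    double sat = 8.0 * mu;
    if (v >  sat) v =  sat;
    if (v < -sat) v = -sat;
    return v;
}

int main(void)
{
    double x = 0.5, mu = 1.0;
    double xbuf[DSTEPS];
    for (int i = 0; i < DSTEPS; ++i) xbuf[i] = x;   /* constant initial data */

    for (int n = 0; n < 20000; ++n) {               /* simulate 20 seconds   */
        int idx = n % DSTEPS;
        double x_delayed = xbuf[idx];               /* x(t - tau)            */
        double u = q_mu(k(x_delayed), mu);          /* control law (5)       */
        xbuf[idx] = x;                              /* store x(t)            */
        x += DT * f(x, u);                          /* one Euler step of (6) */
        if (n % 2000 == 0)
            printf("t = %5.2f   x = %+.4f\n", n * DT, x);
    }
    return 0;
}

With these illustrative numbers the trajectory does not converge to zero but settles into a small neighborhood of the origin, which is precisely the bounded and ultimately bounded behavior that the results below quantify.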
To simplify notation, we will write a ∨ b for max{a, b}. Applying (4) with v = θ + e and defining γ ∈ K∞ by γ(r) := ρ(2r), we have
|x| ≥ γ(|θ| ∨ |e|)  ⇒  (∂V/∂x) f(x, k(x) + θ + e) ≤ −α3(|x|).        (10)
The quantity defined in (8) can be expressed as
θ(t) = − ∫_{t−τ}^{t} k′(x(s)) f(x(s), k(x(s − τ)) + e(s)) ds.        (11)
Substituting this into the system (7), we obtain a system with delay td := 2τ . For initial data x : [−td , 0] → Rn which is assumed to be given, this system has a unique maximal solution2 x(·). We adopt the following notation from [Tee98]: |xd (t)| := maxs∈[t−td ,t] |x(s)| for t ∈ [0, ∞), xd J := supt∈J |xd (t)| for a subinterval J of [0, ∞), and |ed (t)| and ed J are defined similarly. In view of (11), for some γ1 , γ2 ∈ K∞ we have |θ(t)| ≤ τ γ1 (|xd (t)|) ∨ τ γ2 (|ed (t)|) (we are using the fact that f (0, k(0)) = 0, which follows from Assumption 1). Defining γτ (r) := γ(τ γ1 (r)), γˆ (r) := γ(τ γ2 (r) ∨ r), and using (10), we have |x(t)| ≥ γτ (|xd (t)|) ∨ γˆ (|ed (t)|) ⇒ V˙ (t) ≤ −α3 (|x|). In light of (3), this implies 2
It is not hard to see that discontinuities of the control (5) do not affect the existence of Carath´eodory solutions; cf. the last paragraph of Section 2.6 in [HL93].
V(t) ≥ α2 ◦ γτ(|xd(t)|) ∨ α2 ◦ γ̂(|ed(t)|)  ⇒  V̇(t) ≤ −α3(|x|),
hence by the standard comparison principle (cf. [Kha02, Chapter 4])
V(t) ≤ β(V(t0), t − t0) ∨ α2 ◦ γτ(‖xd‖[t0,∞)) ∨ α2 ◦ γ̂(‖ed‖[t0,∞))
where β ∈ KL and β(r, 0) = r. Using (3) again, we have
|x(t)| ≤ β̃(|x(t0)|, t − t0) ∨ γ̃x(‖xd‖[t0,∞)) ∨ γ̃e(‖ed‖[t0,∞))        (12)
where β̃(r, t) := α1⁻¹(β(α2(r), t)), γ̃x(r) := α1⁻¹ ◦ α2 ◦ γτ(r), and γ̃e(r) := α1⁻¹ ◦ α2 ◦ γ̂(r). The full expression for γ̃x (which depends on the delay τ) is
γ̃x(r) = α1⁻¹ ◦ α2 ◦ γ(τγ1(r)).        (13)
Let us invoke the properties of the quantizer to upper-bound the quantization error e defined in (9). Take κ to be some class K∞ function with the property that
κ(r) ≥ max_{|x|≤r} |k(x)|   ∀ r ≥ 0
so that we have |k(x)| ≤ κ(|x|) for all x. Then (1) and (2) give
|xd(t)| ≤ κ⁻¹(Mµ)  ⇒  |e(t)| ≤ Δµ.        (14)
3 Main Result
Theorem 1. Let Assumption 1 hold. Assume that the initial data satisfies
|xd(t0)| ≤ E0        (15)
for some known E0 > 0. Choose a (small) ε > 0 and assume that for some Λ > 0 we have
α1⁻¹ ◦ α2(E0) ∨ ε ∨ γ̃e(Δµ) < Λ < κ⁻¹(Mµ).        (16)
Assume that the delay τ is small enough so that
γ̃x(r) < r   ∀ r ∈ (ε, Λ].        (17)
Then the solution of the closed-loop system (6) satisfies the bound
‖xd‖[t0,∞) ≤ α1⁻¹ ◦ α2(E0) ∨ ε ∨ γ̃e(Δµ)        (18)
and the ultimate bound
‖xd‖[t0+T,∞) ≤ ε ∨ γ̃e(Δµ)        (19)
for some T > 0.
Proof. As in [Tee98], the main idea behind the proof is a small-gain argument combining (12) with the bound |xd (t)| ≤ |xd (t0 )| ·
(1 − sgn(t − td − t0 )) ∨ x 2
[t0 ,∞)
which gives xd
[t0 ,∞)
˜ ≤ |xd (t0 )| ∨ β(|x(t ˜x 0 )|, 0) ∨ γ
xd
∨ γ˜e
[t0 ,∞)
ed
.
[t0 ,∞)
˜ 0) = α−1 ◦ α2 (r) ≥ r. The condition (15) and the first We know that β(r, 1 inequality in (16) imply that there exists some maximal interval [t0 , t¯) on which |xd (t)| < Λ. On this interval, using (14), (16), (17), and causality, we have the (slightly conservative) bound xd
[t0 ,t¯)
˜ d (t0 )|, 0) ∨ γ˜x ≤ β(|x ≤
α1−1
xd
∨ γ˜e
[t0 ,t¯)
ed
[t0 ,t¯)
◦ α2 (E0 ) ∨ ε ∨ γ˜e (Δµ) < Λ
Thus actually t¯ = ∞ and (18) is established. Next, denote the right-hand side of (18) by E and pick a ρ > 0 such that ˜ β(E, ρ) ≤ ε.
(20)
Using (12), we have xd
[t0 +td +ρ,∞)
≤ x
[t0 +ρ,∞)
˜ 0 , ρ) ∨ γ˜x ≤ β(E
xd
[t0 ,∞)
∨ γ˜e (Δµ)
≤ ε ∨ γ˜x (E) ∨ γ˜e (Δµ) From this, using (12) again but with t0 + td + ρ in place of t0 , we obtain xd
[t0 +2(td +ρ),∞)
≤ x
[t0 +td +2ρ,∞)
˜ ≤ β(|x(t ˜x 0 + td + ρ)|, ρ) ∨ γ
xd
[t0 +td +ρ,∞)
∨ γ˜e (Δµ)
˜ ∨ γ˜e (Δµ), ρ) ∨ β(˜ ˜ γx (E), ρ) ∨ γ˜x (ε ∨ γ˜e (Δµ)) ∨ γ˜ 2 (E) ∨ γ˜e (Δµ) ≤ β(ε x ≤ ε ∨ γ˜x2 (E) ∨ γ˜e (Δµ) in view of (16), (17), (20), and continuity of γ˜x at ε. There exists a positive integer n such that γ˜xn (E) ≤ ε ∨ γ˜e (Δµ). Repeating the above calculation, for T := n(td + ρ) we have the bound (19) and the proof is complete.
4 Discussion Hypotheses of Theorem 1 The only constraint placed on the quantizer in Theorem 1 is the hypothesis (16). It says that the range M of q should be large enough compared to
the error bound Δ, so that the last term in (16) is larger than the first one (then a suitable Λ automatically exists). A very similar condition is used in [Lib03a]. One direction for future work is to extend the result to “coarse” quantizers not satisfying such hypotheses (cf. [LH05], [Lib03c, Section 5.3.6]). The small-gain condition (17) is justified by the formula (13), which ensures that for every pair of numbers Λ > ε > 0 there exists a τ ∗ > 0 such that for all τ ∈ (0, τ ∗ ) we have γ˜x (r) < r
∀ r ∈ (ε, Λ].
The value of τ ∗ depends on the relative growth rate of the functions appearing on the right-hand size of (13) for small and large arguments. In particular, if both α1−1 ◦ α2 ◦ γ and γ1 have finite derivatives at 0, then for small enough τ the inequality (17) holds with ε = 0. In this case, the effect of ε (called the offset in [Tee98]) disappears, and the ultimate bound (19) depends on the quantizer’s error bound only. Also note that in the case of linear systems, α1−1 ◦ α2 ◦ γ and γ1 can be taken to be linear, and for small enough τ we have γ˜x (r) < r
∀ r > 0.
(21)
Assumption 1 says that the feedback law k provides input-to-state stability (ISS) with respect to the actuator error v, in the absence of delays (see [Son89, SW95]). This requirement is restrictive in general. However, it is shown in [Son89] that for globally asymptotically stabilizable systems affine in controls, such a feedback always exists. (For linear systems and linear stabilizing feedback laws, such robustness with respect to actuator errors is of course automatic.) One way to proceed without Assumption 1 is as follows. Suppose that k just globally asymptotically stabilizes the system in the absence of actuator errors, so that instead of (4) we only have ∂V f (x, k(x)) ≤ −α3 (|x|). ∂x By virtue of [CT96, Lemma 1] or [Son90, Lemma 3.2], there exist a class K∞ function γ and a C 1 function G : Rn → GL(m, R), i.e., G(x) is an invertible ¯ e¯ ∈ Rm we have m × m matrix for each x, such that for all x ∈ Rn and θ, ¯ ∨ |¯ |x| ≥ γ(|θ| e|)
⇒
∂V α3 (|x|) f (x, k(x) + G(x)θ¯ + G(x)¯ e) ≤ − . (22) ∂x 2
To use this property, let us rewrite the system (7) as ¯ + G(x(t))¯ x(t) ˙ = f x(t), k(x(t)) + G(x(t))θ(t) e(t) where and
¯ := G−1 (x(t))θ(t) θ(t)
e¯(t) := G−1 (x(t))e(t). In view of (11), for some γ¯1 , γ¯2 , γ¯3 ∈ K∞ and a > 0 we have ¯ |θ(t)| ≤ τ γ¯1 (|xd (t)|) ∨ τ γ¯2 (|ed (t)|) and
|¯ e(t)| ≤ (¯ γ3 (|x(t)|) ∨ a)|e(t)|.
Using (22), we have |x(t)| ≥ γ τ γ¯1 (|xd (t)|) ∨ τ γ¯2 (|ed (t)|) ∨ γ¯3 (|x(t)|)|e(t)| ∨ a|e(t)|
⇒
α3 (|x|) V˙ (t) ≤ − . 2
As before, we obtain from this that −1 ¯ |x(t)| ≤ β(|x(t ¯1 ( xd 0 )|, t − t0 ) ∨ α1 ◦ α2 ◦ γ τ γ
∨ τ γ¯2 ( ed
[t0 ,∞) )
∨ γ¯3 ( x
[t0 ,∞) )
e
[t0 ,∞) )
[t0 ,∞)
∨a e
[t0 ,∞)
(23)
¯ 0) = α−1 ◦ α2 (r). Define where β¯ ∈ KL and β(r, 1 γ¯x (r) := α1−1 ◦ α2 ◦ γ τ γ¯1 (r) ∨ γ¯3 (r)Δµ and
γ¯e (r) := α1−1 ◦ α2 ◦ γ(τ γ¯2 (r) ∨ ar).
Then we have a counterpart of Theorem 1, without Assumption 1 and with γ¯x , γ¯e replacing γ˜x , γ˜e everywhere in the statement. The proof is exactly the same modulo this change of notation, using (23) instead of (12). The price to pay is that the function γ¯x depends on Δµ as well as on τ , so the modified small-gain condition requires not only the delay but also the error bound of the quantizer to be sufficiently small. This can be seen particularly clearly in the special case of no delay (τ = 0); in this case we arrive at a result complementary to [Lib03a, Lemma 2] in that it applies to every globally asymptotically stabilizing feedback law but only when the quantizer has a sufficiently small error bound.3
The argument we just gave actually establishes that global asymptotic stability under the zero input guarantees ISS with a given offset on a given bounded region (“semiglobal practical ISS”) for sufficiently small inputs. (A related result proved in [SW96] is that global asymptotic stability under the zero input implies ISS for sufficiently small states and inputs.) This observation confirms the essential role that ISS plays in our developments.
Extensions In Theorem 1, the effects of quantization manifest themselves in the need for the known initial bound E0 in (15) and in the strictly positive term γ ˜e (Δµ) in the ultimate bound (19). If the ultimate bound is strictly smaller than the initial bound, and if the quantization “zoom” parameter µ in (2) can be adjusted on-line, then both of these shortcomings can be removed using the method proposed in [BL00a, Lib03a]. First, the state measurement at time t0 gives us the value of x(t0 − τ ). We assume that no control was applied for t ∈ [t0 − τ, t0 ). If an upper bound τ ∗ on the delay is known and the system is forward complete under zero control, then we have an upper bound of the form (15). This is because for a forward complete system, the reachable set from a given initial condition in bounded time is bounded (see, e.g., [AS99]). An over-approximation of the reachable set can be used to actually compute such an upper bound (see [KV00, HST05] for some results on approximating reachable sets). Now we can select a value of µ large enough to satisfy (16) and start applying the control law (5) for t ≥ t0 . Let us assume for simplicity that (21) is satisfied, so that we can take ε = 0. Applying Theorem 1, we have from (19) that |xd (t0 + T )| ≤ γ˜e (Δµ). Next, at time t0 + T we want to select a smaller value of µ for which (16) holds with E0 replaced by this new bound (which depends on the previous value of µ). For this to be possible, we need to assume that α1−1 ◦ α2 ◦ γ˜e (Δµ) < κ−1 (M µ)
∀ µ > 0.
Repeating this “zooming-in” procedure, we recover global asymptotic stability. We mention an interesting small-gain interpretation of the above strategy, which was given in [NL05]. The closed-loop system can be viewed as a hybrid system with continuous state x and discrete state µ. After the “zooming-out” stage is completed, it can be shown that the x-subsystem is ISS with respect to µ with gain smaller than γˆ (Δ·), while the µ-subsystem is ISS with respect to x with gain γ˜e−1 /Δ = γˆ −1 ◦ α2−1 ◦ α1 /Δ. The composite gain is less than identity, and asymptotic stability follows from the nonlinear small-gain theorem. One important advantage of the small-gain viewpoint (which was also used to establish Theorem 1) is that it enables an immediate incorporation of external disturbances, under suitable assumptions. We refer the reader to [NL05] and [Tee98] for further details. We combined quantization and delays as in (5) just for concreteness; other scenarios can be handled similarly. Assume, for example, that the control takes the form u(t) = k(qµ (x(t − τ )))
i.e., both the quantization and the delay4 affect the state before the control is computed. The resulting closed-loop system can be written as x(t) ˙ = f x(t), k x(t) + θ(t) + e(t) where
θ(t) := x(t − τ ) − x(t)
and
e(t) := qµ (x(t − τ )) − x(t − τ ).
Assumption 1 needs to be modified by replacing (4) with |x| ≥ ρ(|v|)
⇒
∂V f (x, k(x + v)) ≤ −α3 (|x|). ∂x
This is the requirement of ISS with respect to measurement errors, which is restrictive even for systems affine in controls (see the discussion and references in [Lib03a]). For γ(r) := ρ(2r) we have |x| ≥ γ(|θ| ∨ |e|)
⇒
∂V f (x, k(x + θ + e)) ≤ −α3 (|x|) ∂x
and we can proceed as before. The ISS assumption can again be relaxed at the expense of introducing a constraint on the error bound of the quantizer (see also footnote 3 above). A bound of the form (15) for some time greater than the initial time can be obtained by “zooming out” similarly to how it is done in [Lib03a], provided that an upper bound τ ∗ on the delay is known and the system is forward complete under zero control. Global asymptotic stability can then be achieved by “zooming in” as before. It is also not difficult to extend the results to the case when quantization affects both the state and the input (cf. [Lib03a, Remark 1]).
5 Conclusions The goal of this paper was to show how the effects of quantization and time delays in nonlinear control systems can be treated in a unified manner by using Lyapunov functions and small-gain arguments. We proved that under the action of an input-to-state stabilizing feedback law in the presence of both quantization and small delays, solutions of the closed-loop system starting in a given region remain bounded and eventually enter a smaller region. Based on this result, global asymptotic stabilization can be achieved by employing a dynamic quantization scheme. These findings demonstrate that the quantized control algorithms proposed in our earlier work are inherently robust to time delays, which increases their potential usefulness for applications such as networked embedded systems. 4
Their order is not important in this case as long as the quantizer is fixed and does not saturate.
Acknowledgments The author would like to thank Dragan Neˇsi´c and Chaouki Abdallah for helpful pointers to the literature, and Andy Teel and Emilia Fridman for useful comments on an earlier draft. This work was supported by NSF ECS-0134115 CAR and DARPA/AFOSR MURI F49620-02-1-0325 Awards.
Performance Evaluation for Model-Based Networked Control Systems Luis A. Montestruque1 and Panos J. Antsaklis2 1
2
EmNet, LLC Granger, IN
[email protected] Department of Electrical Engineering University of Notre Dame Notre Dame, IN
[email protected] Summary. The performance of a class of Model-Based Networked Control System (MB-NCS) is considered in this paper. A MB-NCS uses an explicit model of the plant to reduce the network bandwidth requirements. In particular, an Output Feedback MB-NCS is studied. After reviewing the stability results for this system and some lifting techniques basics, two performance measures related to the traditional H2 performance measure for LTI systems are computed. The first H2 like performance measurement is called the Extended H2 norm of the system and is based on the norm of the impulse response of the MB-NCS at time zero. The second performance measure is called the Generalized H2 norm and it basically replaces the traditional trace norm by the Hilbert-Schmidt norm that is more appropriate for infinite dimensional operators. The Generalized H2 norm also represents the average norm of the impulse response of the MB-NCS for impulse inputs applied at different times. Examples show how both norms converge to the traditional H2 norm for continuous H2 systems. Finally, with the help of an alternate way of representing lifted parameters, the relationship between the optimal sampler and hold of a sampled data system and the structure of the Output Feedback MB-NCS is shown.
1 Introduction A networked control system (NCS) is a control system in which a data network is used as feedback media. NCS is an important area see for example [bib03] and [NE00c, YTS02, WYB99a]. Industrial control systems are increasingly using networks as media to interconnect the different components. The use of networked control systems poses, though, some challenges. One of the main problems to be addressed when considering a networked control system is the size of bandwidth required by each subsystem. Since each control subsystem must share the same medium the reduction of the individual band-
bandwidth is a major concern. Two ways of addressing this problem are: minimizing the frequency of information transfer between the sensor and the controller/actuator, or compressing or reducing the size of the data transferred at each transaction. A shared characteristic of popular digital industrial networks is a small transport time and a large per-packet overhead, so using fewer bits per packet has little impact on the overall bit rate. Reducing the rate at which packets are transmitted therefore yields greater bit-rate savings than data compression.

The MB-NCS architecture makes explicit use of knowledge about the plant dynamics to enhance the performance of the system. Model-Based Networked Control Systems (MB-NCS) were introduced in [MA02a]. Consider the control of a continuous linear plant where the state estimated by a standard observer is sent to a linear controller/actuator via a network. In this case, the controller and observer use an explicit model that approximates the plant dynamics and makes it possible to stabilize the plant even under slow network conditions. The controller makes use of a plant model, which is updated with the observer estimate, to reconstruct the actual plant state in between updates. The model state is then used to generate the control signal. The main idea is to perform the feedback by updating the model's state using the observer's estimate of the plant state; the rest of the time the control action is based on a plant model that is incorporated in the controller/actuator and runs open loop for a period of h seconds. A disturbance signal w and a performance or objective signal z are also included in the setup. The control architecture is shown in Figure 1.

The observer has as inputs the output and input of the plant. In the implementation, in order to acquire the input of the plant, which is at the other side of the communication link, the observer can hold a copy of the model and controller, together with knowledge of the update time h. In this way the output of the controller, which is the input of the plant, can be generated simultaneously and continuously at both ends of the feedback path, with the only requirement being that the observer makes sure that the controller has been updated. This last requirement ensures that the controller and the observer are synchronized; handshaking protocols provided by most networks can be used for this purpose.

The performance characterization of networked control systems under different conditions is of major concern. In this paper a class of networked control systems called Model-Based Networked Control Systems (MB-NCS) is considered. This control architecture uses an explicit model of the plant in order to reduce the network traffic while attempting to prevent excessive performance degradation. MB-NCS can successfully address several important control issues in an intuitive and transparent way. Necessary and sufficient stability results have been reported for continuous and discrete linear time-invariant systems with state feedback, output feedback, and with network-induced delays (see [MA02a, MA02b, MA03a]). Results for stochastic update times have also been derived [MA04]. We have observed that the
Fig. 1. MB-NCS with disturbance input and objective signal output.
stability of MB-NCS is, in general, a function of the update times, of the difference between the dynamics of the plant and the dynamics of the plant model, and of the control law used, and we have quantified these relations.

The performance of the MB-NCS can be studied using several techniques and under different scenarios. One promising technique is continuous lifting [CF96, BPFT91, DL99]. Lifting is essentially a transformation of a periodic system into a discrete LTI system. The main advantage of this approach is that most of the results available for LTI systems are readily applicable to the lifted system. The disadvantage is that the input and output spaces are infinite dimensional, and thus the parameters of the lifted system are operators rather than matrices. Recent results in this area allow these difficulties to be overcome [MRP99, MP99].

The next section briefly introduces the lifting techniques used to derive the results contained in the paper. Then an H2-like performance measure, the Extended H2 norm, is introduced; here the interplay between the discrete and continuous nature of the system appears in the calculation of the norm. A way of calculating the Extended H2 norm using an auxiliary discrete LTI system is also presented. The following section presents the Generalized H2 norm. This norm is important because it can also be related to the norm of an operator-valued transfer function. Again, a way of computing the norm using an auxiliary discrete LTI system is derived. The paper concludes with a discussion of the techniques described in [MRP99, MP99] and their application to optimal control synthesis problems for sampled-data systems. This makes it possible
to efficiently compute the optimal gains for the controller and observer. Not surprisingly, it is shown that under certain conditions the optimal gains for the H2 optimal observer and controller of the MB-NCS are equivalent to the optimal gains for the non-networked system.

We will start by defining the system dynamics.

Plant dynamics:
$$\dot{x} = Ax + B_1 w + B_2 u, \qquad z = C_1 x + D_{12} u, \qquad y = C_2 x + D_{21} w + D_{22} u$$

Observer dynamics:
$$\dot{\bar{x}} = (\hat{A} - L\hat{C}_2)\bar{x} + (\hat{B}_2 - L\hat{D}_{22})u + Ly$$

Model dynamics:
$$\dot{\hat{x}} = \hat{A}\hat{x} + \hat{B}_2 u$$

Controller:
$$u = K\hat{x}$$
(1)
The model state $\hat{x}$ is updated with the observer state $\bar{x}$ every h seconds. It can be shown that the system dynamics can be described by:
$$G_{zw}:\quad
\begin{bmatrix}\dot{x}\\ \dot{\bar{x}}\\ \dot{e}\end{bmatrix}
=
\begin{bmatrix}
A & B_2K & -B_2K\\
LC_2 & \hat{A}-L\hat{C}_2+\hat{B}_2K+L\tilde{D}_{22}K & -\hat{B}_2K-L\tilde{D}_{22}K\\
LC_2 & L\tilde{D}_{22}K-L\hat{C}_2 & \hat{A}-L\tilde{D}_{22}K
\end{bmatrix}
\begin{bmatrix}x\\ \bar{x}\\ e\end{bmatrix}
+
\begin{bmatrix}B_1\\ LD_{21}\\ LD_{21}\end{bmatrix} w$$
$$z = \begin{bmatrix}C_1 & D_{12}K & -D_{12}K\end{bmatrix}\begin{bmatrix}x\\ \bar{x}\\ e\end{bmatrix}, \quad \forall t \in [t_k, t_{k+1}), \qquad
e = \bar{x} - \hat{x} = 0, \quad t = t_{k+1}$$
(2)

We will also use the following definitions:
$$\varphi(t) = \begin{bmatrix}x(t)\\ \bar{x}(t)\\ e(t)\end{bmatrix}, \quad
B_N = \begin{bmatrix}B_1\\ LD_{21}\\ LD_{21}\end{bmatrix}, \quad
C_N = \begin{bmatrix}C_1 & D_{12}K & -D_{12}K\end{bmatrix},$$
$$\Lambda = \begin{bmatrix}
A & B_2K & -B_2K\\
LC_2 & \hat{A}-L\hat{C}_2+\hat{B}_2K+L\tilde{D}_{22}K & -\hat{B}_2K-L\tilde{D}_{22}K\\
LC_2 & L\tilde{D}_{22}K-L\hat{C}_2 & \hat{A}-L\tilde{D}_{22}K
\end{bmatrix}$$
(3)
Throughout this paper we will assume that the compensated model is stable and that the transportation delay is negligible (that is, the delay resulting from the use of a network is assumed to be zero; this assumption is reasonable for systems in which the inverse of the delay is much larger than the plant
bandwidth). We will also assume that the frequency at which the network updates the state in the controller is constant. The goal is to find the smallest frequency at which the network must update the state in the controller, that is, an upper bound for the update time h. A necessary and sufficient condition for stability of the output feedback MB-NCS is now presented.

Theorem 1. The non-disturbed output feedback MB-NCS described by (1) is globally exponentially stable around the solution $[x^T\ \bar{x}^T\ e^T]^T = 0$ if and only if the eigenvalues of
$$M = \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix} e^{\Lambda h} \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix}$$
are inside the unit circle.

A detailed proof of Theorem 1 can be found in [MA02b]. Note that while there may be a maximum h at which the system is stable (known also as the maximum transfer interval, or MATI [WYB99a]), this does not imply that all smaller transfer times h result in stable systems; some systems exhibit alternating bands of stable and unstable transfer times.

Before defining the performance measures described above, a brief summary of the lifting technique is presented. As pointed out earlier, lifting can transform a periodic linear system such as an MB-NCS into a discrete linear time-invariant system with operator-valued parameters. These parameters are computed for a class of MB-NCS and used throughout the paper.
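As a quick numerical illustration of the Theorem 1 test, the following sketch builds $M = P e^{\Lambda h} P$, where P is the projection that zeroes the error block, and checks its spectral radius. It assumes SciPy is available and uses a made-up Λ and block sizes purely for illustration; it is not an example from this paper.

```python
import numpy as np
from scipy.linalg import expm

def update_projection(n_x, n_xbar, n_e):
    """Block-diagonal projection that keeps (x, xbar) and zeroes the error block e."""
    return np.diag(np.concatenate([np.ones(n_x + n_xbar), np.zeros(n_e)]))

def mbncs_stable(Lam, h, n_x, n_xbar, n_e):
    """Theorem 1 test: the MB-NCS is stable iff all eigenvalues of
    M = P expm(Lam*h) P lie strictly inside the unit circle."""
    P = update_projection(n_x, n_xbar, n_e)
    M = P @ expm(Lam * h) @ P
    return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

# Hypothetical 3-state example (scalar plant, observer and error states),
# with a made-up closed-loop matrix Lam, just to show the call:
Lam = np.array([[-1.0,  0.5, -0.5],
                [ 0.8, -2.0, -0.3],
                [ 0.8, -0.1, -1.5]])
for h in (0.5, 2.0, 8.0):
    print(h, mbncs_stable(Lam, h, 1, 1, 1))
```

Scanning h over a grid in this way is one simple means of locating the stable and unstable bands of transfer times mentioned above.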
2 Continuous Lifting Technique

We will give a brief introduction to the lifting technique. We need to define two Hilbert spaces. The first is
$$L_2[0,h) = \left\{\, u(t) \;:\; \int_0^h u^T(t)\,u(t)\,dt < \infty \right\}$$
(4)

The second Hilbert space of interest is formed by infinite sequences of elements of $L_2[0,h)$:
$$l_2(\mathbb{Z}, L_2[0,h)) = l_2 = \left\{\, [\ldots, u_{-2}, u_{-1}, u_0, u_1, u_2, \ldots]^T \;:\; u_i \in L_2[0,h),\ \sum_{i=-\infty}^{+\infty}\int_0^h u_i^T(t)\,u_i(t)\,dt < \infty \right\}$$
(5)

Now we can define the lifting operator $L$ as a mapping from $L_{2e}$ ($L_2$ extended) to $l_2$:
$$L : L_{2e} \to l_2, \qquad Lu(t) = [\ldots, u_{-2}, u_{-1}, u_0, u_1, u_2, \ldots]^T, \quad \text{where } u_k(\tau) = u(\tau + kh),\ \tau \in [0,h)$$
(6)
It can be shown that $L$ preserves inner products and is thus norm preserving [BPFT91]. Since $L$ is surjective, it is an isomorphism of Hilbert spaces. Lifting therefore transforms a continuous function into a discrete sequence whose elements are continuous functions restricted to $[0,h)$. As an example of the application of this lifting technique we will compute the lifted parameters of the MB-NCS with output feedback presented previously in (1). This system is clearly h-periodic, and therefore we expect to obtain, after the lifting procedure, an LTI system of the form:
$$\varphi_{k+1} = \mathcal{A}\varphi_k + \mathcal{B} w_k, \qquad z_k = \mathcal{C}\varphi_k + \mathcal{D} w_k$$
(7)
To obtain the operators we “chop” the time response of the system described in (1) and evaluate it at times kh. The lifted parameters can be calculated as:
$$\mathcal{A} = \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix} e^{\Lambda h}, \qquad
(\mathcal{B}w) = \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix} \int_0^h e^{\Lambda(h-s)} B_N w(s)\,ds,$$
$$\mathcal{C} = C_N e^{\Lambda\tau}, \qquad
(\mathcal{D}w) = C_N \int_0^\tau e^{\Lambda(\tau-s)} B_N w(s)\,ds$$
(8)

The lifted system is an LTI discrete-time system. Note that the dimension of the state space is left unchanged, but the input and output spaces are now infinite dimensional. Nevertheless, the representation of [MRP99, MP99] allows the results available for discrete LTI systems to be extended to the lifted domain. These tools have traditionally been used to analyze and synthesize sample-and-hold devices and digital controllers. It should be noted, though, that in this application the discrete part is embedded in a controller that does not operate in the same way a typical sampled-data system does: here, for instance, the controller gain operates on a continuous signal, as opposed to a discrete signal as is customary in sampled-data systems.
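To make the chopping operation underlying (6)–(8) concrete, here is a tiny numerical sketch (sampled signals standing in for $L_2$ functions, with made-up values) that splits a signal into its lifted segments $u_k(\tau) = u(\tau + kh)$ and verifies that the total energy is preserved, as stated above for the lifting operator.

```python
import numpy as np

h, fs = 2.0, 100                       # segment length and samples per unit time (discrete stand-in)
t = np.arange(0, 6 * h, 1 / fs)
u = np.exp(-0.3 * t) * np.sin(2 * t)   # an arbitrary signal on [0, 6h)

# Lifting: u_k(tau) = u(tau + k*h), tau in [0, h) -- each row is one lifted segment
U = u.reshape(6, int(h * fs))

# L preserves the L2 norm: the energy of u equals the summed energy of its segments
total = np.sum(u**2) / fs
lifted = sum(np.sum(row**2) / fs for row in U)
print(np.isclose(total, lifted))       # True
```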
3 An H2 Norm Characterization of an MB-NCS

It is clear that, since the MB-NCS is h-periodic, there is no transfer function in the usual sense whose H2 norm can be calculated [BPFT91]. For LTI systems the H2 norm can be computed by obtaining the 2-norm of the response to an impulse applied at $t = t_0$. We will extend this definition to specify an H2 norm or, more properly, to define an H2-like performance index [BPFT91]. We will call this performance index the Extended H2 Norm, and we will study it for the MB-NCS with output feedback presented in the previous section and shown in Figure 1. The Extended H2 Norm is defined as:
$$\|G_{zw}\|_{xh2} = \left( \sum_i \left\| G_{zw}\, \delta(t - t_0)\, e_i \right\|_2^2 \right)^{1/2}$$
(9)
Theorem 2. The Extended H2 Norm, $\|G_{zw}\|_{xh2}$, of the Output Feedback MB-NCS is given by
$$\|G_{zw}\|_{xh2} = \left[\operatorname{trace}\left(B_N^T X B_N\right)\right]^{1/2}$$
where X is the solution of the discrete Lyapunov equation $M(h)^T X M(h) - X + W_o(0,h) = 0$ and $W_o(0,h)$ is the observability Gramian computed as $W_o(0,h) = \int_0^h e^{\Lambda^T t} C_N^T C_N e^{\Lambda t}\,dt$.

Proof. We will compute the Extended H2 norm of the system by obtaining the 2-norm of the objective signal z for an impulse input $w = \delta(t - t_0)$. It can be shown that the response of the system to an input $w = \delta(t - t_0)$ (assuming that the input dimension is one) is:
$$\varphi(t) = e^{\Lambda(t-t_k)} \left( \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix} e^{\Lambda h} \right)^{k} B_N, \qquad t \in [t_k, t_{k+1})$$
(10)
$$z(t) = C_N\,\varphi(t) = \begin{bmatrix}C_1 & D_{12}K & -D_{12}K\end{bmatrix} e^{\Lambda(t-t_k)} \left(M(h)\right)^{k} B_N$$
(11)
with $\varphi(t) = [x(t)^T\ \bar{x}(t)^T\ e(t)^T]^T$, Λ as defined in (3), and $h = t_{k+1} - t_k$.
So we can compute the 2-norm of the output:
$$\|z\|_2^2 = \int_{t_0}^{\infty} z(t)^T z(t)\,dt
= \int_{t_0}^{\infty} B_N^T \left(M(h)^T\right)^{k} e^{\Lambda^T (t-t_k)} C_N^T C_N\, e^{\Lambda(t-t_k)} \left(M(h)\right)^{k} B_N \,dt$$
where
$$M(h) = \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix} e^{\Lambda h}, \qquad
B_N = \begin{bmatrix}B_1\\ LD_{21}\\ LD_{21}\end{bmatrix}, \qquad
C_N = \begin{bmatrix}C_1 & D_{12}K & -D_{12}K\end{bmatrix}$$
(12)

It is easy to see that the norm of a system with more than one input can be obtained by taking the norm of the integral shown in (12), so at this point we can drop the assumption of a one-dimensional input. We now concentrate on the integral expression in (12).
$$\Sigma(h) = \int_{t_0}^{\infty} B_N^T \left(M(h)^T\right)^{k} e^{\Lambda^T (t-t_k)} C_N^T C_N\, e^{\Lambda(t-t_k)} \left(M(h)\right)^{k} B_N \,dt$$
$$\phantom{\Sigma(h)} = B_N^T \left[\sum_{i=0}^{\infty} \left(M(h)^T\right)^{i} \left( \int_{t_i}^{t_{i+1}} e^{\Lambda^T (t-t_i)} C_N^T C_N\, e^{\Lambda(t-t_i)} \,dt \right) \left(M(h)\right)^{i}\right] B_N$$
$$\phantom{\Sigma(h)} = B_N^T \left[\sum_{i=0}^{\infty} \left(M(h)^T\right)^{i} W_o(0,h) \left(M(h)\right)^{i}\right] B_N$$
(13)
where
$$W_o(0,h) = \int_0^h e^{\Lambda^T t} C_N^T C_N\, e^{\Lambda t}\,dt$$
Note that $W_o(0,h)$ has the form of an observability Gramian. Also note that the summation resembles the solution of a discrete Lyapunov equation, which can be expressed as:
$$M(h)^T X M(h) - X + W_o(0,h) = 0$$
(14)
In this equation we note that $M(h)$ is a stable matrix if and only if the networked system is stable, and that $W_o(0,h)$ is positive semidefinite. Under these conditions the solution X is positive semidefinite. ♦

Note that the observability Gramian can be factorized as $W_o(0,h) = C_{aux}^T C_{aux} = \int_0^h e^{\Lambda^T t} C_N^T C_N e^{\Lambda t}\,dt$. This allows us to compute the norm of the system as the norm of an equivalent discrete LTI system.
Corollary 1. Define $C_{aux}^T C_{aux} = \int_0^h e^{\Lambda^T t} C_N^T C_N e^{\Lambda t}\,dt$ and the auxiliary discrete system $G_{aux}$ with parameters $A_{aux} = M(h)$, $B_{aux} = B_N$, $C_{aux}$, and $D_{aux} = 0$. Then the following holds:
$$\|G_{zw}\|_{xh2} = \|G_{aux}\|_2$$
(15)
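Corollary 1 translates directly into a small computation: build $W_o(0,h)$ by quadrature, solve the discrete Lyapunov equation (14), and take the square root of the trace. The sketch below (Python with SciPy) is only an illustration under our own assumptions — made-up matrices and a single scalar error state — and is not the paper's code or example.

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_lyapunov

def extended_h2_norm(Lam, B_N, C_N, h, n_grid=2000):
    """Extended H2 norm per Theorem 2 / Corollary 1:
    W_o(0,h) by trapezoidal quadrature, then M(h)' X M(h) - X + W_o = 0,
    and ||Gzw||_xh2 = sqrt(trace(B_N' X B_N))."""
    n = Lam.shape[0]
    P = np.diag([1.0] * (n - 1) + [0.0])   # assumes one scalar error state (illustration only)
    M = P @ expm(Lam * h)
    dt = h / n_grid
    W_o = np.zeros_like(Lam)
    for i in range(n_grid + 1):
        w = 0.5 if i in (0, n_grid) else 1.0   # trapezoidal weights
        E = expm(Lam * (i * dt))
        W_o += w * dt * (E.T @ C_N.T @ C_N @ E)
    # scipy solves A X A' - X + Q = 0; pass A = M' to get M' X M - X + W_o = 0
    X = solve_discrete_lyapunov(M.T, W_o)
    return float(np.sqrt(np.trace(B_N.T @ X @ B_N)))

# Illustrative call with made-up data (not the double-integrator example below):
Lam = np.array([[-1.0, 0.5, -0.5], [0.8, -2.0, -0.3], [0.8, -0.1, -1.5]])
B_N = np.array([[0.1], [0.2], [0.2]])
C_N = np.array([[1.0, 0.1, -0.1]])
print(extended_h2_norm(Lam, B_N, C_N, h=1.0))
```

Sweeping h and plotting the returned value is exactly how a curve like Figure 2 can be generated.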
Example 1. We now present an example using a double integrator as the plant. 0 1 0.1 0.1 ; The plant dynamics are given by: A = ; B1 = 0 0 0.1 1 B2 = 0 1 ; C1 = ; C2 = ; D12 = 0.1; D21 = 0.1; D22 = 0. 0.1 0 −1 . A state estimator We will use the state feedback controller K = −2 20 100 is used to place the state observer eigenvalwith gain L = ues at −10. We will use a plant model with the following dynamics: Aˆ = 0.1634 0.8957 0.8764 ˆ2 = −0.1686 1.0563 , Cˆ2 = , B , −0.1072 −0.1801 0.1375 ˆ 22 = −0.1304. This model yields a stable NCS for update times up and D to approximately 7.5 sec. In Figure 2 we plot the extended H2 norm of the system as a function of the updates times. Note that as the update time of the MB-NCS approaches zero, the value of the norm approaches the norm of the non-networked compensated system. Also note that the optimal update time is around 0.8 sec, and it starts to
Fig. 2. Extended H2 norm of the system as a function of the update times.
degrade as the update times become smaller. This pattern is repeated with other norms as shown in the next example.
4 A Generalized H2 Norm for MB-NCS

In the previous section the Extended H2 Norm was introduced to study the performance of MB-NCS; this norm was defined as the norm of the output of the system when a unit impulse at $t = t_0$ is applied at the input. Since the MB-NCS is a time-varying system, however, it may seem inappropriate to apply this input only at $t = t_0$. By letting the input be $\delta(t-\tau)$ we arrive at an alternate definition. Since the system is h-periodic we only need to consider $\tau \in [t_0, t_0+h)$. We will call this norm the Generalized H2 Norm and define it as:
$$\|G_{zw}\|_{gh2} = \left( \frac{1}{h} \int_{t_0}^{t_0+h} \sum_i \left\| G_{zw}\, \delta(t-\tau)\, e_i \right\|_2^2 \, d\tau \right)^{1/2}$$
(16)
A detailed study of this norm can be found in [BPFT91]. Note that this norm evaluates the time average of the system response to an impulse applied at different times. Another option for a generalized norm would have been to replace the time average by the maximum over time. However, as shown later, there is a relationship between the time-averaged norm (16) and the norm of the operator-valued transfer function of $G_{zw}$ that is useful for frequency-domain analysis. We will now show some relations arising from this norm. Let a continuous-time linear transformation $G : L_2[0,h) \to L_2[0,\infty)$ be defined by:
$$(Gu)(t) = \int_0^t g(t,\tau)\, u(\tau)\, d\tau$$
(17)
where $g(t,\tau)$ is the impulse response of G. Let G be periodic and let its Hilbert-Schmidt norm $\|G\|_{HS}$ be defined as:
$$\|G\|_{HS} = \left( \int_0^h \int_0^{\infty} \operatorname{trace}\left[ g(t,\tau)^T g(t,\tau) \right] dt\, d\tau \right)^{1/2}$$
(18)
Then it is clear that:
$$\|G_{zw}\|_{gh2} = \frac{1}{\sqrt{h}}\, \|G_{zw}\|_{HS}$$
(19)
Note the slight abuse of notation: originally $G_{zw}$ was considered a transformation with domain $L_2[0,\infty)$, while the Hilbert-Schmidt norm in (18) is defined for transformations with domain $L_2[0,h)$. Now denote the lifted operator $\mathcal{G}_{zw} = L G_{zw} L^{-1}$, with input-output relation given by the convolution:
$$z_k = \sum_{l=0}^{k} g_{k-l}\, w_l$$
(20)
where $g_k : L_2[0,h) \to L_2[0,h)$ and
$$(g_k u)(t) = \int_0^h g(t+kh, \tau)\, u(\tau)\, d\tau$$
The Hilbert-Schmidt norm of $g_k$ is given by:
$$\|g_k\|_{HS} = \left( \int_0^h \int_0^h \operatorname{trace}\left[ g(t+kh,\tau)^T g(t+kh,\tau) \right] dt\, d\tau \right)^{1/2}$$
(21)
Then it is easy to show that:
$$\|G_{zw}\|_{HS}^2 = \sum_{k=0}^{\infty} \|g_k\|_{HS}^2 = \|g\|_2^2$$
(22)
The last expression shows a relationship between the discrete lifted representation of the system and the Generalized H2 Norm. Finally, we will show the relationship between the Generalized H2 Norm and the norm of an operator-valued transfer function:
$$\tilde{g}(\lambda) = \sum_{k=0}^{\infty} g_k\, \lambda^k$$
(23)
By defining the λ-transforms of the input and output of the system in a similar way, we get:
$$\tilde{z}(\lambda) = \tilde{g}(\lambda)\, \tilde{w}(\lambda)$$
(24)
Note that for every λ in their respective regions of convergence, $\tilde{w}(\lambda)$ and $\tilde{z}(\lambda)$ are functions on $[0,h)$, while $\tilde{g}(\lambda)$ is a Hilbert-Schmidt operator. Define the Hardy space $H_2(\mathbb{D}, HS)$ of operator-valued functions that are analytic in the open unit disc, have boundary functions on $\partial\mathbb{D}$, and have finite norm:
$$\|\tilde{g}\|_2 = \left( \frac{1}{2\pi} \int_0^{2\pi} \left\|\tilde{g}\!\left(e^{j\theta}\right)\right\|_{HS}^2 \, d\theta \right)^{1/2}$$
(25)
Note that the norm in $H_2(\mathbb{D}, HS)$ is a generalization of the norm in $H_2(\mathbb{D})$, obtained by replacing the trace norm with the Hilbert-Schmidt norm. It can be shown that:
$$\|G_{zw}\|_{HS}^2 = \|g\|_2^2 = \|\tilde{g}\|_2^2$$
(26)
We will now show how to calculate the Generalized H2 Norm of the Output Feedback MB-NCS. Define the auxiliary discrete LTI system:
$$G_{aux} \overset{s}{=} \begin{bmatrix} \mathcal{A} & B_{aux} \\ C_{aux} & 0 \end{bmatrix}$$
(27)
where
$$B_{aux} B_{aux}^T = \int_0^h \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix} e^{\Lambda\tau} B_N B_N^T\, e^{\Lambda^T\tau} \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix} d\tau, \qquad
C_{aux}^T C_{aux} = \int_0^h e^{\Lambda^T\tau} C_N^T C_N\, e^{\Lambda\tau}\, d\tau$$
(28)
Theorem 3. The Generalized H2 Norm, $\|G_{zw}\|_{gh2}$, of the Output Feedback MB-NCS is given by
$$\|G_{zw}\|_{gh2} = \frac{1}{\sqrt{h}} \left( \|\mathcal{D}\|_{HS}^2 + \|G_{aux}\|_2^2 \right)^{1/2}.$$
Proof. The transfer function of $\mathcal{G}_{zw}$ can be written as:
$$\tilde{g}(\lambda) = \mathcal{D} + \mathcal{C}\, \tilde{g}_t(\lambda)\, \mathcal{B}, \qquad \text{with } \tilde{g}_t(\lambda) \overset{s}{=} \begin{bmatrix} \mathcal{A} & I \\ I & 0 \end{bmatrix}$$
(29)
Note that $\tilde{g}_t(\lambda)$ is a matrix-valued function and that $\tilde{g}_t(0) = 0$; therefore the two terms on the right of (29) are orthogonal and:
$$h\, \|G_{zw}\|_{gh2}^2 = \|\tilde{g}(\lambda)\|_2^2 = \|\mathcal{D}\|_{HS}^2 + \|\mathcal{C}\, \tilde{g}_t(\lambda)\, \mathcal{B}\|_2^2$$
(30)
The second norm on the right can be calculated as:
$$\|\mathcal{C}\, \tilde{g}_t(\lambda)\, \mathcal{B}\|_2^2 = \frac{1}{2\pi} \int_0^{2\pi} \left\| \mathcal{C}\, \tilde{g}_t\!\left(e^{j\theta}\right) \mathcal{B} \right\|_{HS}^2 \, d\theta$$
(31)
Fixing θ, the integrand $F = \mathcal{C}\, \tilde{g}_t\!\left(e^{j\theta}\right) \mathcal{B}$ is a Hilbert-Schmidt operator with impulse response:
$$f(t,\tau) = C_N\, e^{\Lambda t}\, \tilde{g}_t\!\left(e^{j\theta}\right) \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix} e^{\Lambda(h-\tau)} B_N$$
(32)
Then:
$$\|F\|_{HS}^2 = \operatorname{trace}\int_0^h\!\!\int_0^h f(t,\tau)^* f(t,\tau)\, dt\, d\tau
= \operatorname{trace}\left[ B_{aux}^T\, \tilde{g}_t\!\left(e^{j\theta}\right)^{*} C_{aux}^T C_{aux}\, \tilde{g}_t\!\left(e^{j\theta}\right) B_{aux} \right]$$
(33)
So (31) can be calculated as the H2 norm of $C_{aux}\, \tilde{g}_t\!\left(e^{j\theta}\right) B_{aux}$, which corresponds to the H2 norm of $G_{aux}$. ♦

To calculate the Generalized H2 Norm, several quantities need to be computed, among them:
$$\|\mathcal{D}\|_{HS}^2 = \operatorname{trace}\int_0^h\!\!\int_0^t B_N^T\, e^{\Lambda^T\tau} C_N^T C_N\, e^{\Lambda\tau} B_N \, d\tau\, dt$$
$$B_{aux} B_{aux}^T = \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix} P_{22}^T P_{12} \begin{bmatrix}I&0&0\\0&I&0\\0&0&0\end{bmatrix}, \qquad
\begin{bmatrix}P_{11}&P_{12}\\0&P_{22}\end{bmatrix} = \exp\left( h \begin{bmatrix} -\Lambda & B_N B_N^T \\ 0 & \Lambda^T \end{bmatrix} \right)$$
$$C_{aux}^T C_{aux} = M_{22}^T M_{12}, \qquad
\begin{bmatrix}M_{11}&M_{12}\\0&M_{22}\end{bmatrix} = \exp\left( h \begin{bmatrix} -\Lambda^T & C_N^T C_N \\ 0 & \Lambda \end{bmatrix} \right)$$
(34)
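The quantities in (34) and the norm of Theorem 3 can be assembled numerically as in the sketch below. This is only an illustration under our own assumptions (made-up matrices, a single scalar error state, simple quadrature for the double integral), and, as noted in Example 2 below, the matrix-exponential formulas can be poorly scaled for stable Λ and large h, in which case direct integration is preferable.

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_lyapunov

def vanloan_gramian(A, Q, h):
    """int_0^h expm(A'*t) Q expm(A*t) dt from one block matrix exponential (cf. eq. (34))."""
    n = A.shape[0]
    E = expm(np.block([[-A.T, Q], [np.zeros((n, n)), A]]) * h)
    return E[n:, n:].T @ E[:n, n:]

def generalized_h2_norm(Lam, B_N, C_N, h, n_e, n_grid=400):
    """Assemble ||Gzw||_gh2 per Theorem 3 using only the products Baux*Baux' and Caux'*Caux."""
    n = Lam.shape[0]
    P = np.diag([1.0] * (n - n_e) + [0.0] * n_e)          # projection that zeroes the error block
    A_lift = P @ expm(Lam * h)                            # lifted A, i.e. M(h)
    CtC = vanloan_gramian(Lam, C_N.T @ C_N, h)            # Caux' Caux
    BBt = P @ vanloan_gramian(Lam.T, B_N @ B_N.T, h) @ P  # Baux Baux'
    # ||Gaux||_2^2 = trace(Baux' Q Baux), with A' Q A - Q + Caux'Caux = 0
    Q = solve_discrete_lyapunov(A_lift.T, CtC)
    gaux2 = np.trace(Q @ BBt)
    # ||D||_HS^2 by nested quadrature of the double integral in (34)
    dt = h / n_grid
    W = np.zeros_like(Lam)
    d_hs2 = 0.0
    for i in range(1, n_grid + 1):
        E = expm(Lam * (i * dt))
        W += dt * (E.T @ C_N.T @ C_N @ E)                 # inner integral up to t
        d_hs2 += dt * np.trace(B_N.T @ W @ B_N)
    return float(np.sqrt((d_hs2 + gaux2) / h))

# Illustrative call with made-up data (not the paper's example):
Lam = np.array([[-1.0, 0.5, -0.5], [0.8, -2.0, -0.3], [0.8, -0.1, -1.5]])
print(generalized_h2_norm(Lam, np.array([[0.1], [0.2], [0.2]]),
                          np.array([[1.0, 0.1, -0.1]]), 1.0, n_e=1))
```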
Note that in this particular case it was relatively easy to separate the infinite-dimensional components of the system from a finite-dimensional core. This is not always possible. In particular, one might be tempted to apply the previous techniques to obtain a finite-dimensional auxiliary discrete LTI system that can be used to solve an H2 optimal control problem; the separation technique described above cannot be carried out in that case, since the controller and observer gains operate on continuous signals. Nevertheless, it will be shown later how to address an H2 optimal control problem using other techniques.

Example 2. We now calculate the Generalized H2 norm for the same system studied in the previous example. Some computational issues have to be addressed in order to do this. In particular, the formulas given in (34) may yield inaccurate results because of scaling: in the calculation of $B_{aux}$ and $C_{aux}$, the (1,1) block of the computed exponentials may be too large in comparison with the other blocks, because of the negative sign in front of the stable matrix Λ. Direct integration yields better results. Also, the Cholesky factorization is usually only approximate. The results nevertheless appear to represent reality reasonably well; this was verified by varying the tolerances in the integration algorithm and by measuring the error in the Cholesky factorization. The calculated Generalized H2 norm is shown in Figure 3 for the same range of update times used in the previous example.
Fig. 3. Generalized H2 norm of the system as a function of the update times.
In this example we see that the norm again converges to the non-networked H2 norm of the system as the update time goes to zero. The optimal update time in this case is around 1 s, which is roughly consistent with the previous example, where the optimal update time was around 0.8 s. Both examples agree in that the performance starts to degrade for update times beyond about 1 s.
Since the two performance measures are defined differently, no direct comparison can be made between them. The Generalized H2 Norm nevertheless seems more appropriate, since it considers impulse inputs applied at different times, and its link with a well-defined operator-valued transfer function makes it attractive. The next section presents an alternate parameter representation that overcomes the inconvenience of dealing with infinite-dimensional operators.
5 Optimal Controllers for MB-NCS

In this section we address the design of optimal controllers for MB-NCS. We saw previously that lifting can transform a periodic system such as the MB-NCS into a discrete LTI system, and most results for the design of optimal controllers for discrete systems apply directly to the lifted system. Since the parameters of the lifted system are infinite dimensional, however, computations using the integral representation given in (8) can be difficult. This is evident when one considers operators such as $\left(I - \mathcal{D}^*\mathcal{D}\right)^{-1}$, which appears for instance in sampled-data $H_\infty$ problems.

To circumvent some of the problems associated with optimal control, an auxiliary discrete LTI system is usually obtained so that its optimal controller also optimizes the lifted system. The separation of the infinite dimensionality from the problem is not always possible, however. In particular, the controller for the auxiliary system works in the discrete-time domain, while the controller for the lifted system representing the MB-NCS in (8) works in continuous time. This means the controller has to be obtained using the lifted parameters directly.

In this section we start with a brief summary of an alternative representation of the lifted parameters proposed by Mirkin and Palmor [MRP99, MP99]. This representation allows complex computations to be performed using the lifted parameters directly. Results on the computation of an optimal sampler, hold, and controller are then presented and their equivalence with the components of the output feedback MB-NCS is shown.

The representation of lifted parameters proposed by Mirkin and Palmor treats the lifted parameters as continuous LTI systems operating over a finite time interval. The main advantage of this representation is that operations on the parameters reduce to algebraic manipulations of LTI systems with two-point boundary conditions, which can be performed using well-known state-space machinery. Consider the following LTI system with two-point boundary conditions (STPBC):
$$G:\quad
\dot{x}(t) = Ax(t) + Bu(t), \qquad
y(t) = Cx(t) + Du(t), \qquad
\Omega x(0) + \Upsilon x(h) = 0$$
(35)
Here the square matrices Ω and Υ define the boundary conditions. The boundary conditions are said to be well posed if $x(t) = 0$ is the only solution to (35) when $u(t) = 0$. It can be verified that the STPBC G has well-posed boundary conditions if and only if the matrix
$$\Xi_G = \Omega + \Upsilon e^{Ah}$$
(36)
is non-singular. If the STPBC G has well-posed boundary conditions, then its response is uniquely determined by the input u(t) and is given by:
$$y(t) = Du(t) + \int_0^h K_G(t,s)\, u(s)\, ds$$
(37)
where the kernel $K_G(t,s)$ is given by:
$$K_G(t,s) = \begin{cases}
C e^{At}\, \Xi_G^{-1}\, \Omega\, e^{-As} B & \text{if } 0 \le s \le t \le h\\
-C e^{At}\, \Xi_G^{-1}\, \Upsilon\, e^{A(h-s)} B & \text{if } 0 \le t \le s \le h
\end{cases}$$
(38)
We will use the following notation to represent (35):
$$G = \begin{bmatrix} A & (\Omega,\ \Upsilon) & B\\ C & & D \end{bmatrix}$$
(39)
The following is a list of manipulations that are used to perform operations over STPBCs. 1) Adjoint System: G∗ =
−AT AT h T −T e Υ ΞG −B T
−T A Ω T ΞG e
T
h
CT DT
(40)
2) Similarity Transformation: (for T and S non singular) T GT −1 =
T AT −1 SΩT −1 CT −1
SΥ T −1
TB D
(41)
3) Addition:
A1 0 G1 + G2 = C1 4) Multiplication:
0 A2 C2
Ω1 0
0 Ω2
Υ1 0
0 Υ2
B1 B2 D1 + D2
(42)
246
L.A. Montestruque and P.J. Antsaklis
A1 0 G1 G2 = C1
B1 C 2 A2 D 1 C2
Ω1 0
0 Ω2
Υ1 0
B1 D 2 B2 D1 D2
0 Υ2
(43)
5) Inversion (exists if and only if det (D) = 0 and −1 det Ω + Υ e(A−BD C )h = 0) G−1 =
A − BD−1 C Ω −D−1 C
Υ
BD−1 D−1
,
(44)
This representation reduces the complexity of computing operators such −1
∗
as
I −D D
ξ=
. Using the integral representation of (8) one can get that −1
∗
I −D D
ω if and only if: h
ω (t) = ξ (t) +
t
T −Λ e BN
T
(t−s)
T CN CN
s 0
eΛ(s−τ ) BN ξ (τ ) dτ ds
(45)
It is not clear how to solve this equation. On the other hand using the alternative representation we note that: D=
Λ I CN
0
BN 0
(46)
Using the properties previously listed we obtain: ∗
I −D D
−1
=
I−
Λ I CN
0
BN 0
T CN −ΛT CN T −BN BN Λ = T −BN 0
∗
Λ I CN 0 0
0 I
0
BN 0 I 0
−1
(47) 0 0
0 BN I
To be able to represent operators with finite dimension domains or ranges such as B and C two new operators are defined. Given a number θ ∈ [0, h], the impulse operator Iθ transforms a vector η ∈ Rn into a modulated impulse as follows: ς = Iθ η ⇔ ς (t) = δ (t − θ) η
(48)
Also define the sample operator Iθ∗ , which transforms a continuous function ς ∈ Cn [0, h] into a vector η ∈ Rn as follows:
Performance Evaluation for Model-Based Networked Control Systems
η = Iθ∗ ς
⇔ η = ς (θ)
247
(49)
Note that the representation of Iθ∗ is as the adjoint of Iθ , even when this is not strictly true, it is easy to see that given an h ≥ θ the following equality holds: ς, Iθ η =
h 0
T
T
ς (τ ) (Iθ η) (τ ) dτ = ς (θ) η = Iθ∗ ς, η
(50)
The presented results allow to make effective use of the impulse and sample operator. Namely the last two lemmas show how to absorb the impulse operators into an STPBC. Now let us present a result that links the solutions of the lifted algebraic discrete Riccati equation and the algebraic continuous Riccati equation for the continuous system for the H2 control problem. Lemma 1. Let the lifted algebraic discrete Riccati equation for the lifted system G = LGL−1 be as follows: T
∗
A XA − X + C C T
∗
∗
−1
T
∗
T
D C + B XA
D D + B XB
− A XB + C D
=0
(51)
and let the algebraic continuous Riccati equation for G be: AT X + XA + C T C − XB + C T D
−1
DT D
DT C + B T X = 0
(52)
then the conditions for existence of a unique stable solution for both Ricatti equations are equivalent, moreover if they exist, then X = X. This implies that in order to solve the optimal control problem we just need to solve the regular continuous Ricatti equation. We can for example obtain the optimal H2 state feedback “gain” given by ∗
∗
−1
F = − D D + B XB
∗
∗
D C + B XA .
It can be shown [MRP99, MP99] that: F =
A + BF I F
0
I 0
(53)
Here F is the H2 optimal control gain for the continuous system. Note that the expression in (53) exactly represents the dynamics of the actuator/controller for the state feedback MB-NCS when the modelling errors are zero and the feedback gain is the H2 optimal feedback gain. Finally, we present next a result that obtains the H2 optimal sampler, hold and controller.
248
L.A. Montestruque and P.J. Antsaklis
Lemma 2. Given the standard assumptions, when the hold device is given by (Hu) (kh + τ ) = φH (τ ) uk , ∀τ ∈ [0, h] and the sample device is given by h (Sy)k = 0 φS (τ ) y (kh − τ ) dτ , the H2 optimal hold, sampler, and discrete controller for the lifted system G = LGL−1 with B1 B2 A G = C1 0 D12 C2 D21 0 are as follows: Hold: Sampler: Controller: where:
φH (τ ) = F e(A+B2 F )τ φS (τ ) = −e(A+LC2 )τ L Θ I Kd = I 0 h Θ = e(A+B2 F )h + 0 e(A+LC2 )(h−τ ) LC2 e(A+B2 F )τ dτ
(54)
Remarks: Note that the H2 optimization problem solved in [MP99] is related to the Generalized H2 norm previously presented. That is, replacing the trace norm with the Hilbert-Schmidt norm. As it has been observed, there is a strong connection between the H2 optimal hold of a sampled system and the H2 optimal controller of the nonsampled system. As pointed out in [MP99] it is clear that the H2 optimal hold attempts to recreate the optimal control signal that would have been generated by the H2 optimal controller in the non-sampled case. That is, the H2 optimal hold calculated in [MP99] generates a control signal identical to the one generated by the non-sampled H2 optimal controller in the absence of noise and disturbances. Another connection exists between the H2 optimal sampler, hold, and discrete controller calculated in [MP99] and the output feedback MB-NCS. It is clear that when the modeling errors are zero and the gain is the optimal H2 gain, the optimal hold has the same dynamics as the controller/ actuator in the output feedback MB-NCS. The same equivalence can be shown between the combination of optimal sampler/discrete controller dynamics and the output feedback MB-NCS observer. The techniques shown here can be used to solve robust optimal control problems that consider the modeling error. This is possible due to the alternative representation that allows the extension of traditional optimal control synthesis techniques to be used with the infinite dimensional parameters that appear in the lifted domain.
6 Conclusions The study of the performance of MB-NCS shows that a large portion of the available literature on sampled data systems cannot be directly applied to
MB-NCS. Moreover, different definitions of performance yield different performance curves. For a constant controller and observer gains it was shown that the best transmission times are not necessarily the smallest ones. Using an alternate representation of the lifted parameters, a connection between the optimal hold, sampler, and discrete controller and the output feedback MBNCS was established. This representation opens a large new area of research in robust MB-NCS.
Beating the Bounds on Stabilizing Data Rates in Networked Systems

Yupeng Liang and Peter Bauer

Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556
{yliang1,pbauer}@nd.edu

Summary. This paper applies simple, classical feedback control and quantization schemes to the problem of bit rate reduction for stabilizing unstable linear plants. In particular, state feedback, deadbeat control (pole placement at the origin), and standard floating point quantizers (with differentially encoded exponents) are used to reduce stabilizing data rate requirements to levels near or even below the theoretical bounds for certain classes of systems. First order systems and higher order controllable canonical forms are investigated. The robustness of the derived condition for the stabilizing data rate with respect to the system coefficients is also addressed.
1 Introduction Over the last decade, embedded systems in general, and networked control systems in particular, have enjoyed tremendous attention. This is mainly due to the low cost of embedded devices and the wide availability of wireless (and wired) digital networks. One of the most interesting questions that has recently been answered in the area of networked control systems is the problem of minimal bit rate requirements in the feedback path for stabilizing an unstable system. In a linear time-invariant system, this bit rate was shown to be directly related to the unstable eigenvalues of the system. There are a large number of results that analyzed minimum bit rate requirements for linear time-invariant systems [FX03, TM04c, WCWC04, EM01a, WB99b, BL00b, FZ03, Lib03b, Tat00, LL04], time-variant systems [TM04c, Tat00, PBLP04, LB03, LB04a], nonlinear systems [LB04b], and stochastic systems [NE04, Tat00, TSM04]. All these results focused on the theoretical problem of establishing lower bounds for stabilizing data rates and were derived without introducing any compression schemes. In this paper, we explore the idea of using standard and simple methods of implementing quantization, feedback control and compression/encoding
mechanisms to achieve or even beat the derived bounds for first and higher order discrete time linear systems. The introduced system employs state feedback, floating point quantization, and a simple compression scheme of the exponent information by using difference encoding of two consecutively transmitted floating point values for stabilization. It is shown that, for the case of first order systems, the theoretical bit rate bounds can always be achieved and sometimes violated in the average bit rate sense. This requires pole placement such that deadbeat control is achieved. Higher order controllable canonical systems are also investigated, and the obtained results are shown to be close to the theoretical bounds only for certain classes of systems that are theoretically characterized. Simulations show that our theoretically derived mantissa plus sign bit rate requirements are often conservative and the actual bit rates required are even lower. In certain cases, they indeed beat the theoretical minimum presented in previous literature. This is achieved by using a novel methodology developed in this paper, which minimizes the mantissa plus sign rate required for closed loop stabilization, and adopts an efficient compression/encoding scheme for exponent information. This paper is organized as follows: In Section 2, we introduce two model components used in this analysis, and review the previous results about stabilizing data rate bounds. In section 3, we derive stabilizing mantissa plus sign rate bounds for first order and higher order controllable canonical systems. Relationships between the obtained results and previous theoretical bounds are established and conservativeness properties are studied for certain classes of systems. Robustness of the derived results are also briefly discussed. Practical system implementations approaching the theoretical bounds are proposed in Section 4. In particular, an efficient compression/encoding scheme for exponent information and conditions on necessary stabilizing mantissa plus sign rate are presented. Additionally, the possibility of beating the theoretical bounds is discussed and simulation results highlighting such cases are provided. Finally, conclusions are offered in Section 5.
2 Preliminaries and Models 2.1 Model Description Networked embedded systems (NES) implement feedback links over digital communication networks characterized by delays, quantization effects, packet drop, synchronization errors, and data rate constraints. Good models capturing these effects are important for the analysis of stability and performance properties of NES. In what follows, we will introduce two models that are used in the analysis: the quantization (data word) model and the system model. Some preliminary definitions are required. N refers to the set of integers and N0 denotes that of nonnegative integers. The set of real numbers is denoted as R, and that of the nonnegative real numbers is given by R0 . Last,
Fig. 1. Sector bound model for floating point quantization
the discrete time variable is represented as k ∈ N0 and the rational numbers are denoted by Q. 2.1.1. Quantization (Data Word) Model NES send quantized data over digital communication links in the feedforward and feedback paths. Three basic choices of quantization schemes are commonly used in practice: fixed point, floating point and block floating point. It is known that floating point and block floating point schemes present a number of advantages over fixed point schemes [MB95]. In particular, it was proven that the fixed point quantization policy with a finite number of quantization levels cannot achieve an asymptotic closed-loop stability in the case of unstable plants [Del90a]. In contrast, floating and block floating point quantization offer an approximately constant relative quantization error and allow for stabilization of unstable plants. We therefore select floating point quantization as the choice of quantization scheme. A floating point number is represented by x = (−1)s × m × 2e
(1)
where $s \in \{0,1\}$ is the sign of x, $m \in [0.5, 1)$ is the mantissa of x, and $e = \lfloor \log_2 |x| \rfloor + 1 \in \mathbb{N}$ is the exponent of x, with $\lfloor x \rfloor$ denoting the largest integer less than or equal to the real number x. When x is stored using a total of l bits in floating point format, it consists of three parts: one bit for s, $l_m$ bits for m, and $l_e$ bits for e. Obviously, $l = 1 + l_m + l_e$
(2)
Notice that here we have denoted $l_m$ as the number of mantissa bits not counting the 1 bit after the binary point that is always present to satisfy the normalization requirement $1/2 \le m < 1$.
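For concreteness, the following sketch decomposes a number into the sign, mantissa and exponent of (1), keeping one implicit leading mantissa bit plus $l_m$ stored bits (rounding, unbounded exponent range). The function name and example value are ours, purely for illustration.

```python
import math

def flp_parts(x, lm):
    """Decompose x into (sign bit s, quantized mantissa m in [0.5, 1), exponent e)
    following eq. (1); lm counts mantissa bits beyond the implicit leading '1'."""
    s = 0 if x >= 0 else 1
    if x == 0.0:
        return s, 0.0, 0
    e = math.floor(math.log2(abs(x))) + 1
    m = abs(x) / 2.0**e
    m = round(m * 2**(lm + 1)) / 2**(lm + 1)   # keep 1 implicit + lm stored bits
    return s, m, e

print(flp_parts(6.5, lm=3))   # 6.5 = +0.8125 * 2**3  ->  (0, 0.8125, 3)
```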
Fig. 2. State feedback model with floating point quantization
A classical approach for the analysis of quantization effects is to treat the quantization error as a nonlinearity and model it using the sector bound approach [FX03], as shown in Fig. 1. Then, for the case of rounding and infinite dynamic exponent range with no overflow or underflow, the floating point quantizer can be defined as $Q : \mathbb{R}^n \to \mathbb{Q}^n$ such that
$$Q(x(k)) = Flp(x(k)) \triangleq [I + E(k)]\, x(k), \qquad x(k) \in \mathbb{R}^n$$
(3)
with
$$E(k) = [\varepsilon_{ij}(k)], \qquad
\varepsilon_{ij}(k) = \begin{cases} [\underline{\varepsilon}, \bar{\varepsilon}), & i = j\\ 0, & i \ne j \end{cases}, \qquad
\bar{\varepsilon} = -\underline{\varepsilon} = 2^{-(l_m+1)}$$
(4)
Therefore, the relative quantization error for each state $x_i(k)$ has maximum absolute value $\bar{\varepsilon}$. In particular, for single-input single-output (SISO) systems the quantization error term E(k) reduces to a scalar, denoted ε(k), and (3) simplifies to:
$$Q(x(k)) = [1 + \varepsilon(k)]\, x(k).$$
(5)

2.1.2. System Model

Fig. 2 depicts the state feedback model with a floating point quantizer in the feedback path. For simplicity, only SISO systems are considered in this paper, and the sampling time is normalized to 1. The state-space representation of the plant is given by the difference equations
$$x(k+1) = Ax(k) + Bu(k), \qquad y(k) = Cx(k)$$
(6)
where $x \in \mathbb{R}^{n\times 1}$, $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times 1}$ and $u \in \mathbb{R}$. In addition,
$$u(k) = r(k) + Q(K^T \cdot x(k))$$
(7)
where K ∈ R1×n . K T denotes the transpose of K. For zero external input cases, applying (5), (7) becomes u(k) = Q(K T · x(k)) = [1 + ε(k)] · K T · x(k)
(8)
where ε(k) denotes the scalar quantization error originating from quantizing $K^T \cdot x(k)$.

2.2 Overview of Previous Results

In recent years, many researchers [PBLP04, TM04c, NE04, WCWC04, Tat00, LL04, EM01a, WB99b, BL00b, FZ03, Lib03b] have analyzed the effects of communication constraints on the stability of networked control systems. Several theoretical lower bounds on stabilizing data rates have been obtained [PBLP04, TM04c, NE04, Tat00, LL04]; some of them are summarized below.

2.2.1. Lower Bound on Average Stabilizing Data Rate

When the stabilizing data rate is considered in an average sense, it is bounded by [TM04c, NE04, Tat00]
$$R > \sum_{i=1}^{n} \max\left(0, \log_2 |\lambda_i|\right)$$
(9)
where $\lambda_i$ are the eigenvalues of the discrete-time system.

2.2.2. Lower Bound on Constant Stabilizing Data Rate

For a constant rate requirement, the lower bound on the stabilizing data rate is [TM04c]
$$R > \sum_{i=1}^{n} \max\left(0, \lceil \log_2 |\lambda_i| \rceil\right)$$
(10)
where, again, $\lambda_i$ denotes the system eigenvalues. Furthermore, a tighter lower bound [LL04] is
$$R \ge \log_2 \prod_{i=1}^{n} \max\left(1, |\lambda_i|\right)$$
(11)
2.2.3. Deadbeat Cases with Mantissa Requirements and Time-Variant Delay

State feedback and floating point quantization are popular in practical implementations of NES, so the stabilizing data rate for this type of system realization is worth exploring due to its simplicity and practical relevance. In addition, communication networks are usually characterized by time-variant delays; this is especially true of networks employing wireless links. These issues were discussed in previous work [PBLP04]. A deadbeat controller was shown to result in the minimum required stabilizing mantissa rate, and corresponding lower bounds for the stabilizing mantissa length $l_m \in \mathbb{N}_0$ were obtained. For first order systems, the mantissa condition is
$$l_m > T_i \log_2 |\lambda| - 1, \qquad \forall i \in \mathbb{N}_0,\ l_m \in \mathbb{N}_0$$
(12)
where $T_i$, $i \in \mathbb{N}_0$, represents the time-variant access delay modelled in [PBLP04] and λ is the system eigenvalue, assumed to be larger than 1 in magnitude. Furthermore, the stabilizing mantissa plus sign rate $r_{T_i} \in \mathbb{R}_0$ was derived in [PBLP04] as:
$$r_{T_i} = \frac{l_m + 1}{T_i} > \log_2 |\lambda|.$$
(13)
It can be seen that the stabilizing mantissa plus sign rate is not affected by the time-variant access delay Ti present in the feedback path. Finally, an interesting case is worth emphasizing. Note that for an unstable system with 1 < |λ| < 2 and Ti = 1, according to (12), the lowest possible stabilizing mantissa length becomes 0, which is somewhat counterintuitive. In this case, zero mantissa bits except the ’1’ after the binary point are needed. Therefore, only the sign and exponent information needs to be transmitted.
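The counterintuitive zero-mantissa case is easy to reproduce in simulation. The sketch below is our own illustration (not code from [PBLP04]): it closes the first order deadbeat loop studied in Section 3, $x(k+1) = a\,x(k) + b\,Q(K\,x(k))$ with $K = -a/b$, using a floating point quantizer with $l_m = 0$, so that only sign and exponent information is carried by the feedback.

```python
import math

def flp_quantize(v, lm):
    """Round-to-nearest floating point quantizer with an implicit leading mantissa bit,
    lm extra mantissa bits and unbounded exponent range (cf. eqs. (1) and (4))."""
    if v == 0.0:
        return 0.0
    s = -1.0 if v < 0 else 1.0
    e = math.floor(math.log2(abs(v))) + 1       # |v| = m * 2**e with m in [0.5, 1)
    m = abs(v) / 2.0**e
    m_q = round(m * 2**(lm + 1)) / 2**(lm + 1)  # lm bits after the implicit leading '1'
    return s * m_q * 2.0**e

# Deadbeat state feedback u(k) = Q(K x(k)) with K = -a/b, so a + bK = 0.
a, b, lm = 1.8, 1.0, 0     # |a| < 2 and zero mantissa bits: only sign + exponent matter
K = -a / b
x = 1.0
for k in range(10):
    x = a * x + b * flp_quantize(K * x, lm)
    print(k, x)            # |x| contracts, since |a * eps| <= 1.8 * 0.5 = 0.9 < 1
```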
3 Theoretical Results In this section, we extend the work in [PBLP04] by obtaining stabilizing data rate (sign, mantissa and exponent) bounds for systems depicted in Fig. 2, i.e. with a floating point quantizer on the feedback path but without considering time delays. We also investigate the relationship between the obtained bounds with previous results presented in Section 2.2. Interpretations and implications of the derived relationship are explored for different types of systems, and the robustness of the obtained condition for stabilizing data rate with respect to system coefficients is also addressed. Throughout this analysis, the external input is assumed to be zero and the sampling time is normalized to 1.
3.1 Deadbeat Systems with Mantissa Requirements

In the following analysis we focus on two types of deadbeat, delay-free (no inherent time delay) systems: the first order case and the n-th order SISO controllable canonical form.

3.1.1 First Order Deadbeat System

Using (5) and (6), the state difference equation of the first order deadbeat delay-free control system is
$$x(k+1) = a\,x(k) + b\cdot Q(K\cdot x(k)) = -a\cdot\varepsilon(k)\cdot x(k)$$
(14)
by noting that a + bK = 0. In order to ensure the stability of the resulting system, the following inequality needs to be satisfied: |a · ε(k)| < 1
(15)
Applying (4), i.e. |ε(k)| ≤ 2−(lm +1) , we obtain |a| < 2lm +1
(16)
lm > log2 |a| − 1
(17)
which can be rearranged to:
It provides the condition for system coefficient a to ensure closed-loop stability. It can also be reformulated in terms of the system eigenvalue λ as lm > log2 |λ| − 1
(18)
by recognizing the fact that λ = a. For delay-free systems, under the assumption of unit sampling time, the stabilizing mantissa plus sign rate is given by: rms = lm + 1
(19)
Specifically, for first order systems, by applying (17), (18) to (19), we obtain the representation for rms as: rms > log2 |a| = log2 |λ|, rms ∈ N0 . 3.1.2 SISO Controllable Canonical System For SISO delay-free systems in controllable canonical form, we have:
(20)
258
Y. Liang and P. Bauer
A=
0 0 .. .
0 an
1 0 .. .
0 1 .. .
an−1
an−2
0
··· ··· ··· ··· ···
0
0 0 , B = ... 0 1 a1 1 0 0 .. .
using (5) and (6), the system characteristic equation can be written as: n
xn (k)
=
ai xi (k − 1) + u(k − 1) i=1 n
n
ai xn (k − i))
ai xn (k − i) − Q(
= i=1
=
i=1
n
ai · xn (k − i)
−ε(k) ·
(21)
i=1
by applying the fact that xi (k − 1) = xn (k − i), i = 1, 2, · · · , n. Here, ε(k) ∈ [−2−(lm +1) , 2−(lm +1) ]. Note that the system represented in (21) is globally asymptotically stable [BMD93] if n
|ε(k)| ·
n
|ai | ≤ 2−(lm +1) ·
i=1
|ai | < 1
(22)
i=1
Therefore,
n
lm > log2
|ai | − 1
(23)
i=1
providing a lower bound for the mantissa length to ensure the overall system stability. Similar as in the first order case, substituting (23) into (19), we obtain the stabilizing mantissa plus sign data rate, described as: n
rms > log2
|ai |, rms ∈ N0 .
(24)
i=1
3.2 Relationship to Bounds in Literature Below, we study the relationship between the rate bound (24) for deadbeat control with floating point quantization and the general bound (9), for the n-th order controllable canonical forms. Since first order systems can be regarded as a special case of controllable canonical forms, the derived relationship can also be applied to first order systems in a straightforward manner. First, we introduce some notations to be used in this section. λi , i = 1, 2, · · · , n stands for system eigenvalues. In particular, λuj , j = 1, 2, · · · , m
Beating the Bounds on Stabilizing Data Rates in Networked Systems
259
denote the unstable eigenvalues. ai , i = 1, 2, · · · , n denote the system coeffiΔ n cients. Finally, we use Rmsb = log2 i=1 |ai | to represent the lower bound on Δ
m
stabilizing mantissa plus sign rate provided by (24), and Rrb = j=1 log2 |λuj | to describe the stabilizing rate bound introduced in (9). It can be seen from the following discussion, that Rmsb is always greater than or equal to the rate Rrb . For a controllable canonical form, the characteristic polynomial is given as: n
Δ
p(λ) =
(λ − λi ) = λn − a1 · λn−1 − · · · − an−1 · λ − an .
(25)
i=1
When an = 0, Rmsb can be simplified from (24) as: n
Rmsb
=
|ai |
log2 i=1
= =
a2 an−1 a1 | + | | + ··· + | |)] an an an a1 a2 an−1 log2 |an | + log2 (1 + | | + | | + · · · + | |). an an an
log2 [|an | · (1 + |
(26)
Δ
Now, denote Δ = log2 (1 + | aan1 | + | aan2 | + · · · + | an−1 an |). Since Δ ≥ 0, from (26), we conclude: n
n
log2 |λi |
λi | =
Rmsb ≥ log2 |an | = log2 | i=1
(27)
i=1
n
by noticing | i=1 λi | = |an |. This result implies that Rmsb can be more conservative than Rrb , depending on the value of Δ and system eigenvalue specifications. More detailed discussions are presented below. In the particular cases when an = 0, the above derivation becomes invalid. Relationships between the system coefficients and eigenvalues need to be inspected for a comparison between Rmsb and Rrb . We begin our following discussion by interpreting the obtained relationship for different classes of systems by investigating the conservativeness of Rmsb compared to Rrb . 3.2.1 Systems with Unstable ”Butterworth Type” Eigenvalue Distribution If ai = 0, i = 1, 2, · · · , n − 1 and |an | > 1, then the characteristic equation becomes λn +an = 0. Hence, the system eigenvalues occur on a circle centered at the origin with radius n |an | > 1 at equally spaced points, or in other words, distributed in a ”Butterworth type” fashion. Obviously, all the system eigenvalues are unstable and denoted by λui , i = 1, 2, · · · , n. In addition, Δ = 0. Therefore, applying (26), we obtain
260
Y. Liang and P. Bauer n
Rmsb = log2 |an | =
log2 |λui | = Rrb .
(28)
i=1
Hence, Rmsb coincides with Rrb in these systems and turns out to be nonconservative. Obviously, a first order system with an unstable pole constitutes the simplest example of this type of system. 3.2.2 Systems with Repeating Unstable Eigenvalues For systems with only repeating unstable eigenvalues, the characteristic polynomial is given as p(λ) = (λ + β)n where |β| > 1. Therefore, Rmsb = nlog2 |β| + log2 (1 + | βan1 | + | βan2 | + · · · + | aβn−1 n |) according to (26). Because n β i = 0, we have Δ > 0. Therefore, in this situation, Rmsb is ai = i conservative compared to Rrb . 3.2.3 Stable Plants n
It is worth noting that while the condition i=1 |ai | < 1 ensures a stable n plant, i=1 |ai | > 1 does not ensure an unstable plant. Therefore, if one would choose to use feedback in the case of a stable plant, it may be necessary to use a non-zero rate for stabilization. (Of course, one can simply check the eigenvalues of the plant and then decide whether stabilization through feedback is necessary.) n Again, an interesting special case occurs for 1 ≤ i=1 |ai | < 2 (which can correspond to both, a stable or unstable plant): for this particular case, only exponent and sign information needs to be transmitted. 3.2.4 Robustness with Respect to System Coefficients From (22), we know the condition for closed-loop system stability is n
|ai | < 2lm +1
(29)
i=1
where lm represents the stabilizing mantissa length. This property provides an important tool to check the required bit rate by inspection. It also sheds light on the robustness of bit rate requirements with respect to plant coefficients. Fig. 3 shows regions of constant stabilizing mantissa rates as a function of plant coefficients for the second order case. Regions of constant minimum mantissa rate requirements are concentric diamonds centered at the origin. n It should be noted that [BMD93], in the first hyper-quadrant, i=1 |ai | ≥ 1 n describes unstable systems and i=1 |ai | < 1 describes stable systems.
Beating the Bounds on Stabilizing Data Rates in Networked Systems
261
Fig. 3. System coefficient set with stabilizing mantissa length specification
4 Beating the Bounds: Possibility, Implementation, and Simulations In the previous work [LDB05], we investigated realizations of two different system realizations for the purpose of approaching the theoretical stabilization data rate bounds, viz. (a) System Type I: systems with the controller co-located with the plant which is digital in nature. In this type of systems, we have access to the floating point exponent information extracted from the system internally to complete the stabilization of the overall system. In this way, we have saved feedback bandwidth a part of which would have to be devoted to transmitting exponent bits; (b) System Type II: systems with the controller geographically separated from the plant or the plant originates from a system which is analog in nature with states/outputs discretized for feedback over a network link. In this type of systems, the internal system information is not accessible any more, and, hence, both mantissa and exponent information of the quantized feedback data needs to be transmitted to achieve closed-loop stability. In this section, we focus on System Type II. We present the system implementation that minimizes the stabilizing data rate, and use simulation re-
262
Y. Liang and P. Bauer
sults to validate that, in particular cases, one can indeed beat the theoretical bounds. 4.1 Exponent Compression/Encoding Scheme In order to minimize the stabilizing data rate, we propose an efficient ”compression” scheme for exponent information, which needs to be transmitted over the feedback path for stabilization, as detailed below: 1) For the first data packet, transmit complete exponent information (e). Store e at both the transmitter and receiver sides of the feedback network. 2) For the following data packets, store the exponent at the transmitter side and send bits representing its difference (Δe) with the previous one (P rev e) only (including the sign of Δe). 3) At the receiver side, recover the complete feedback exponent information using Δe and previously stored exponent (P rev e). Finally, store the recovered exponent as P rev e for the next round use. This compression scheme is especially efficient for systems that are right at the boundary of stability, since exponent updates are then relatively rare. In addition to transmitting Δe instead of complete exponent bits, we adopt efficient encoding schemes for Δe to further reduce the bits that need to be transmitted. Ideally, we propose an encoding method similar to Huffman coding [Lib03c]. In principle, we assign length-varying codes to different values of Δe based on their probabilities. Specifically, we assign codes with less (more) bits to Δe of higher (lower) probabilities. As a result, the bits representing Δe information is minimized in an average sense. However, in some practical cases, implementation of this method is not feasible. One such example is when there are a large number of possible values for Δe, which may further be stochastic, it is impossible to assign a corresponding code to each possible Δe a prior. Even if it is possible to do so, most times this method generates a long processing delay to assign such probability based length-varying codes, which can profoundly degrade the system performance. To avoid the above drawbacks, in this paper, we adopt another coding methodology for Δe, which is easy to implement and reduces the bits that represent Δe information and need to be transmitted on the feedback path for stabilization, although it may not result in the strictly minimal feedback rate. In particular, the first bit is fixed as the sign bit of Δe, and the remaining bits are the absolute value of Δe represented in binary format, unless Δe = 0. In this case, no bits are transmitted in the feedback path, and it is acknowledged at the receiver side that, the exponent stays the same as the previous one. In fact, this turns out to be a most common case if the system is on the stability boundary. Finally, for the cases when Δe becomes an infinite value, the binary bits are rounded up to a certain predefined length before transmission.
Beating the Bounds on Stabilizing Data Rates in Networked Systems
263
In essence, the above compression schemes are introducing memory into the system to reduce the information that needs to be sent at each sample time. In practice, this technique is commonly used in many applications. For example, it is used when streaming video over networks to reduce the required bit-rate while maintaining image quality. 4.2 Sufficient Mantissa Plus Sign Information for Stabilization As observed in section 3.2, Rmsb can coincide with Rrb in many cases. In addition, we have used conservative results to derive Rmsb in section 3.1, specifically, by employing sector bounds [FX03] and the stability condition in [BMD93]. Hence, we can expect that in certain cases the practically required stabilizing mantissa plus sign rate, denoted as Rms , can be lower than the theoretical bound Rmsb . In fact, many such examples have been identified in simulations. Some of them are presented in section 4.5 below. Nevertheless, a systematic way to determine the minimally required mantissa plus sign rate for stabilization is still open. Finally, it should be noted that the notation Rms here differentiates from the rms introduced in section 3, by representing the practically achievable stabilizing mantissa plus sign rate, while rms denotes a sufficient stabilizing mantissa plus sign rate above the bound Rmsb , which can be conservative and higher than Rms . 4.3 Implementation of Simulation In this section, we present the simulation implementation steps for System Type II: 1) Obtain the system state vector, and multiply it with the feedback gain vector required for pole placement. Using floating point quantization, obtain the mantissa, sign and exponent. The mantissa is rounded up to the minimum length (Lm ) sufficient to stabilize the overall system, assuming infinite exponent range. Lm is identified using simulations, and its typical value is below (17) for first order systems and (23) for higher order SISO systems, due to the conservative results in [FX03] and [BMD93] used in deriving these bounds, similar as discussed in section 4.2. Note that once Lm is determined for a system, it stays constant for all simulations. Similar to the relationship between Rms and rms discussed in section 4.2, Lm stands for the practically achievable minimum stabilizing mantissa length in simulations, while the notation lm used in section 3 and before denotes a sufficient stabilizing mantissa length satisfying the conditions specified in (17), (18) and (23) that can be conservative and higher than the practically required one, Lm . 2) Transmit the mantissa (Lm ) with the sign bit (1) completely. For the exponent part, use the compression scheme introduced in section 4.1, i.e. for the first data packet, transmit complete exponent information (e). After that, store the exponent at the transmitter side and send information representing
its difference with the previous exponent (Δe) only, including the sign of the exponent difference. Δe is encoded efficiently before transmission, as presented in section 4.1. In the case where two consecutive exponents are the same, no exponent information bits need to be transmitted.
3) At the receiver end, recover the feedback states for stabilization using the sign, the mantissa bits and the information on the difference of the corresponding exponents.
Note that under this scheme, we have assumed that the channel capacity is large enough to avoid saturation. According to the above implementation, the resulting stabilizing feedback data rate, denoted Rstab, is given as

Rstab = Rms + Re        (30)
The stabilizing mantissa plus sign rate Rms is given as

Rms = Lm + 1        (31)
where Lm is defined above, representing the minimum mantissa length that is sufficient for stabilization in simulations. The stabilizing exponent rate Re results from the difference encoding of e. Using the second method presented in section 4.1, Re at each sampling time instant is given as

Re = length(e) + 1      for i = 1
   = length(Δe) + 1     for Δe ≠ 0, i = 2, 3, ...
   = 0                  for Δe = 0, i = 2, 3, ...        (32)

where i is the sampling time index and length(x) denotes the number of binary bits representing the absolute value of x. The additional '1' bit per unit time is due to the sign bit of e or Δe.

4.4 Possibility of Beating the Theoretical Bounds

As discussed in section 4.2, there may exist cases in which the actually required stabilizing mantissa plus sign rate Rms violates the theoretical bound Rmsb. If, furthermore, Rmsb coincides with Rrb, then Rms is lower than Rrb. In addition, if the stabilizing exponent rate Re is small enough, the overall stabilizing data rate Rstab given by (30) can indeed beat the theoretical bound Rrb provided by (9). This can be better illustrated using Fig. 4 and Fig. 5 below, which compare the relationships between the different bounds and data rates, respectively. As can be observed, in the cases when Rmsb coincides with Rrb, due to the conservative nature of Rmsb, the practical mantissa plus sign rate Rms can actually be one (or more) bits per unit time less than Rmsb and Rrb. In addition, if the exponent rate Re is small enough (say less than one bit per unit time), the stabilizing total data rate Rstab = Rms + Re can indeed beat the theoretical bound Rrb.

Fig. 4. Relationships between different bounds
Fig. 5. Relationships between different data rates when beating the bound (9)

In fact, such examples do exist, and are presented in section 4.5 below. Finally, it is important to note that the methodology presented in this work has different assumptions from [TM04c] and the other results in the NES literature. In particular, the theoretical bound Rrb was derived without considering any compression schemes, while this paper does adopt an efficient exponent compression/encoding method. This paper, therefore, does not disprove the established theoretical bounds; rather, using these theoretical results as a reference, it proposes a simple and standard system implementation that can actually stabilize the overall system with a feedback rate lower than the theoretical bound Rrb. This is invaluable for practical system design.

4.5 Simulation Results

Based on the implementation method presented in section 4.3, we performed extensive simulations to obtain the practical minimum stabilizing data rates for many first and second order systems. In all simulations, feedback gains are chosen such that deadbeat control is achieved. Each investigated system was
Table 1. Examples of first order systems beating the theoretical bounds

  System pole (λ)   Rrb (9)¹   Rmsb (20)²   Rms³   Re⁴     Rstab⁵
  5000              12.288     12.288       11     0.005   11.005
  −10⁷              23.254     23.254       22     0.004   22.004

Table 2. Examples of second order systems beating the theoretical bounds

  System matrix (A)    Rrb (9)¹   Rmsb (24)⁶   Rms³   Re⁴     Rstab⁵
  [0 1; 100 500]       8.966      9.229        8      0.009   8.009
  [0 1; 100 0]         6.644      6.644        5      0.006   5.006
analyzed for random initial conditions over 100 simulation runs. Stability was verified in the sense of Lyapunov over all simulation runs to determine the minimum data rate required to achieve stability. Each simulation run was 5000 time steps long. The practically achieved minimum stabilizing mantissa plus sign rate Rms is specified by (31), and is constant for each particular system in all simulation runs. The stabilizing exponent rate for each of the 100 simulation runs of a particular system is determined by (32); these values are then averaged to give the stabilizing exponent rate Re for this system. It should be noted that, for a given system, the differences between the stabilizing exponent rates obtained in different simulation runs are actually small. Finally, the minimally required stabilizing total data rate Rstab for each system is given by (30), i.e. produced by summing the constant Rms and the average value Re. The resulting Rstab is therefore obtained in an average sense rather than as a strictly minimal value. One could stabilize the system using an even smaller Re under certain conditions, or dynamically assign mantissa bits at a minimal level instead of keeping Rms constant for all time steps. However, both methods come at the cost of significantly increased implementation complexity, which needs to be avoided in many situations. The value of this paper is to provide an easily implementable system realization that requires a stabilizing data rate close to or even lower than the theoretical bounds. In the following, we present simulation results for some first and second order systems.

¹ Stabilizing data rate bound (9)
² Stabilizing mantissa plus sign rate bound (20)
³ Practical stabilizing mantissa plus sign rate (31)
⁴ Stabilizing exponent rate (32)
⁵ Stabilizing total data rate (30)
⁶ Stabilizing mantissa plus sign rate bound (24)
4.5.1 First Order Case: For first order unstable systems, the lower bound for the stabilizing mantissa plus sign rate provided in (20) coincides with the stabilizing data rate bound in (9). However, as mentioned in section 4.2, (20) is sometimes conservative due to the conservative results in [FX03] and [BMD93]. We can therefore have cases in which the practical stabilizing mantissa plus sign rate Rms is below the bounds in (20) and (9), and hence the resulting stabilizing data rate Rstab beats the theoretical bound (9) under the condition of a small exponent rate Re, according to the analysis in section 4.4. Two such examples are given in Table 1. One can see that the practical stabilizing data rates obtained in the simulations indeed beat the theoretical bound in (9) by more than 1 bit per unit time.

4.5.2 Second Order Case: For second order systems, (24) turns out to be more conservative than (9) except in some particular cases, as discussed in section 3.2. However, results similar to the first order case can still be identified, as shown in Table 2. From this table, one can see that the theoretical bounds are again violated by the stabilizing data rates achieved in simulations.
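The first two columns of Tables 1 and 2 can be reproduced directly from the plant eigenvalues. The sketch below assumes that the bound (9) takes the usual data-rate-theorem form Σᵢ max{0, log2 |λᵢ(A)|} (the exact statement of (9) appears earlier in the paper and is not repeated here); the printed values 12.288, 23.254, 8.966 and 6.644 are recovered to rounding accuracy.

```python
import numpy as np

def rate_bound(eigs):
    """Assumed form of the stabilizing data rate bound (9): sum of log2 of unstable eigenvalue magnitudes."""
    return sum(max(0.0, float(np.log2(abs(lam)))) for lam in eigs)

# first order examples of Table 1: poles 5000 and -10^7
print(rate_bound([5000.0]), rate_bound([-1e7]))               # ~12.288, ~23.254

# second order examples of Table 2 (controllable canonical form)
A1 = np.array([[0.0, 1.0], [100.0, 500.0]])
A2 = np.array([[0.0, 1.0], [100.0, 0.0]])
print(rate_bound(np.linalg.eigvals(A1)), rate_bound(np.linalg.eigvals(A2)))   # ~8.966, ~6.644
```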
5 Conclusion

This paper shows that the recently derived data rate bounds for the stabilization of unstable linear time-invariant discrete time systems can be achieved, and sometimes even violated, with very simple controller, quantization and compression/encoding schemes. Using state feedback and floating point quantization with difference encoding of the exponent information, it was shown that first order systems can always be stabilized at approximately the data rate corresponding to the theoretical bound given by the logarithm of the unstable eigenvalue. In certain cases the stabilizing data rates were found to be well below the bound, which is due to the compression (encoding) of the exponent. It was proved that deadbeat control always allows for stabilizing mantissa plus sign rates that are exactly at the bound. Simulations showed that this mantissa plus sign rate can be conservative and that the required exponent rates were usually well under 1 bit per sample. In the case of higher order controllable canonical forms, the results were typically more conservative, even though classes of systems that yielded results close to or below the bounds were identified. Finally, the robustness of the mantissa rate with respect to the system coefficients was also investigated and resulted in a condition that can be decided by inspection of the plant coefficients.
Acknowledgement

The authors gratefully acknowledge the support provided by the National Science Foundation (NSF) via Grant IIS 0325252 and the Center for Applied Mathematics, University of Notre Dame.
Disturbance Attenuation Bounds in the Presence of a Remote Preview

Nuno C. Martins¹, Munther A. Dahleh², and John C. Doyle³

¹ Electrical Engineering Department and the ISR, University of Maryland, College Park, MD 20742, [email protected]
² Department of Electrical Engineering and Computer Science and LIDS, MIT, Cambridge, MA 02139, [email protected]
³ Department of Electrical Engineering and CDS, CALTECH, Pasadena, CA 91125, [email protected]

Summary. We study the fundamental limits of disturbance attenuation of a networked control scheme, where a remote preview of the disturbance is available. The preview information is conveyed to the controller via an encoder and a finite capacity channel. In this article, we present an example where we design a remote preview system by means of an additive, white and Gaussian channel. The example is followed by a summary of our recent results on general performance bounds, which we use to prove optimality of the design method.
1 Introduction

If S(z) is the sensitivity transfer function [DFT92] of a single-input linear feedback loop, in discrete time, then Bode's [Bod45] integral equation can be written as:

(1/2π) ∫_{−π}^{π} log |S(e^{jω})| dω = Σ_{λ∈UP} log |λ|        (1)
where UP are the unstable poles of the open loop system [DFT92], which is assumed to be rational and strictly proper. By using feedback, one would expect that disturbance rejection can be improved. On the other hand, (1) quantifies a fundamental limitation which says that disturbance rejection can be, at most, shaped in frequency. Due to its importance, Bode’s fundamental limitation has been extended to more general frameworks [SBG97] than the linear and time invariant one. The multi-dimensional version was provided in
[FL88], while the time-varying case has been addressed in [Igl02] and certain non-linear systems have been analyzed in [ZI03, Igl01, SBKM99]. In recent publications, such as [MCF04] and references therein, the study of fundamental limitations generalizes to controllers with preview. Using an information theoretic formulation, Bode's result was extended for feedback systems where the controller belongs to a general class [MD05a, MD04], which might include systems operating on a discrete or finite alphabet. The use of information theoretic quantities, which was first suggested in [Igl01], also allows for the clear differentiation of the roles of causality and information flow in the feedback loop [MD05a, MD04]. While causality is responsible for Bode's fundamental limitation, information constraints in the feedback loop give rise to a new limitation [MD05a, MD04]. The work in [Eli04] also explores the connection between Bode's integral formula and the ability to transmit information over a Gaussian channel, by means of linear and time invariant systems acting as encoders and decoders. Bode's fundamental limitation is derived for a deterministic setting in [YGI+], under certain convergence conditions.

1.1 Problem Formulation

It is well known that the use of disturbance previews may improve controller performance [JWPK+04]. In [TM05] one finds recent results in optimal preview control as well as a source of references to other related approaches. Recent results on fundamental limitations in the presence of reference preview are given in [CRHQ01, MCF04]. In this publication, we consider the diagram of Fig 1, where the controller has access to a remotely transmitted disturbance preview, represented as r. This scheme portrays a formulation where the disturbance results from a physical phenomenon which must travel in space until it reaches the system. The travel time is represented as a delay of m units of time. At the same time, a remote preview signal r may be available to the controller, subject to information transmission/processing constraints at the remote preview system (RPS) block. We also adopt a Markovian model for the disturbance, where G is an auto-regressive shaping filter and w is the innovations process. Examples of remote preview systems can be found in animal life, such as the ones that use vision and hearing to perceive a future physical interaction. In these cases, the information/processing constraints arise from limited vision and hearing resolution as well as noise and limited information processing in the brain [Fit54]. Further examples are the information path of a heat-shock mechanism at the cellular level [ESKD+05] as well as navigation engineering systems. There are two extreme cases in this setup: the first is when the disturbance can be fully transmitted⁴; in that case the disturbance can be canceled by the

⁴ This would require a RPS block with infinite capacity.
Fig. 1. Structure of a Remote Preview System.
controller, and the second is the absence of remote preview information, which is the classical framework. In this paper, we study the situation in between, i.e., we consider that C > I∞(r, d) > 0, where C is a finite positive constant representing the Shannon capacity [CT91] of the RPS block, and I∞(r, d) is the mutual information rate⁵ between the disturbance d and the remote preview signal r, in bits per unit of time.

1.2 Paper Organization

The paper is organized as follows: Sections 1.3 and 1.4 introduce the notation and main definitions. The technical framework is given in section 2, where we also describe the measures of performance adopted in the paper. A motivation example is described in section 3, while the fundamental limitations, in their more general form, are stated in section 4. The conclusions are provided in section 5. Most of the Theorems for the general case are just stated and the proofs may be found in [MD05b].

1.3 Notation

The following notation is adopted:
• Whenever it is clear from the context, we refer to a sequence {a(k)}_{k=0}^{∞} of elements in R^n as a. A finite segment of a sequence a is indicated as a_{kmin}^{kmax} = {a(k)}_{k=kmin}^{kmax}. If kmax < kmin then a_{kmin}^{kmax} def= ∅.
• Random variables are represented using boldface letters, such as a.
• If a(k) is a stochastic process, then we use a(k) to indicate a specific realization. Similar to the convention used for sequences, we may denote the process {a(k)}_{k=0}^{∞} just as a and a realization {a(k)}_{k=0}^{∞} as a. A finite segment of a stochastic process is indicated as a_{kmin}^{kmax}.
• The probability density of a random variable a, if it exists, is denoted as p_a. The conditional probability, given b, is indicated as p_{a|b}.

⁵ This quantity is precisely defined in section 1.4.
• The expectation operator over a is written as E[a].
• We write log2(.) simply as log(.).
• We adopt the convention 0 log 0 = 0.

1.4 Basic Definitions of Information Theory

In this section, we summarize the main definitions of Information Theory used throughout the paper. We adopt [Pin64] as a primary reference, because it considers general probabilistic spaces in a unified framework. The definitions and properties listed in this section hold under general assumptions; we refer to [Pin64] for further details.

Definition 1. (from [Pin64], pp. 9) The mutual information I : (a; b) → R+ ∪ {∞}, between a and b, is given by

I(a; b) = sup Σ_{ij} P_{a,b}(Ei × Fj) log [ P_{a,b}(Ei × Fj) / (P_a(Ei) P_b(Fj)) ]
The supremum is taken over all partitions {Ei} of A and {Fj} of B, where A and B are the alphabets of a and b. The definition of conditional mutual information can be found in [CT91] or in [Pin64] (pp. 37). Notice that, in definition 1, A and B may be different. Without loss of generality, we consider probability spaces which are countable or R^q, for some q. We also define the following quantities, denoted as differential entropy and conditional differential entropy, which are useful in the computation of I(·, ·), for certain cases relevant in this paper.

Definition 2. If a is a random variable with alphabet A = R^q then we define the differential entropy of a as

h(a) = − ∫_{R^q} p_a(γ) log p_a(γ) dγ
If b is a random variable with alphabet B = R^q then we define the conditional differential entropy of a given b as:

h(a|b) = h(a, b) − h(b) = − ∫_{R^q} ∫_{R^q} p_{a,b}(γa, γb) log p_{a|b}(γa, γb) dγa dγb        (2)
If B is countable then h(a|b) is defined as:

h(a|b) = − Σ_{γb∈Sb} ∫_{R^q} p_{a,b}(γa, γb) log p_{a|b}(γa, γb) dγa        (3)
Likewise, the quantity h(a|b, c) is defined by incorporating another sum if the alphabet of c is discrete, or an integral otherwise.
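Definition 1 is easiest to appreciate in the finite-alphabet case, where the supremum is attained by the finest partition and mutual information reduces to a sum over the joint probability mass function. The following small sketch (illustrative, not part of the paper) computes I(a; b) in bits for a toy joint distribution.

```python
import math

def mutual_information(p_ab):
    """I(a;b) in bits for a finite joint pmf given as {(a, b): probability}."""
    p_a, p_b = {}, {}
    for (a, b), p in p_ab.items():
        p_a[a] = p_a.get(a, 0.0) + p
        p_b[b] = p_b.get(b, 0.0) + p
    return sum(p * math.log2(p / (p_a[a] * p_b[b]))
               for (a, b), p in p_ab.items() if p > 0.0)      # convention 0 log 0 = 0

# toy example: uniform binary input observed through a symmetric channel with crossover 0.1
pe = 0.1
joint = {(0, 0): 0.5 * (1 - pe), (0, 1): 0.5 * pe,
         (1, 0): 0.5 * pe,       (1, 1): 0.5 * (1 - pe)}
print(mutual_information(joint))      # ~0.531 bits = 1 - H(0.1)
```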
In order to simplify our notation, we also define the following quantities:

Definition 3. (Information Rate) Let a and b be stochastic processes. The following is the definition of (mutual) information rate⁶:

I∞(a; b) = lim sup_{N→∞} I(a_0^{N−1}; b_0^{N−1}) / N
The use of information rates is motivated by its universality [CT91], i.e., it quantifies the rate at which information can be reliably transmitted through an arbitrary communication medium.

Definition 4. (Entropy Rate) For a given stochastic process, we also define the entropy rate as:

h∞(a) = lim sup_{N→∞} h(a_m^{N−1}) / (N − m)        (4)

where m is the time delay represented in Fig 5. In our formulation, e_0^{m−1} and d_0^{m−1} may be deterministic or may take values on a countable set. Since the differential entropy for these variables is undefined, we choose to define the entropy rate as in (4). On the other hand, for any e_0^{m−1} and d_0^{m−1}, there are no technical problems in using them with mutual information or conditional entropy, such as I((e_0^{m−1}, b); c) and h(b|c, e_0^{m−1}). In the real world, all signals in Fig 5 should have some noise, with finite differential entropy, added to them. If that were the case then we could have defined the entropy rate in the standard way. We chose not to add noise everywhere, because it would complicate the paper and it would lead to the same results and conclusions.

In this paper, we will refer to channels, which are stochastic operators conforming to the following definition:

Definition 5. (Channel) Let V and R be given input and output alphabets, along with a stochastic process, denoted as c, with alphabet C. In addition, consider a causal map f : C^∞ × V^∞ → R^∞. The pair (f, c) defines a channel.

The following are examples of channels:
• Additive white Gaussian channel: V = R = C = R, c is an i.i.d. white Gaussian sequence with unit variance and f(c, v)(k) = c(k) + v(k).
• Binary symmetric channel: V = R = C = {0, 1}, c is an i.i.d. sequence satisfying P(c(k) = 1) = pe and f(c, v)(k) = c(k) ⊕ v(k) (addition modulo 2).

For any given channel specified by (f, c), the supremum of the rates at which information can be reliably transmitted is a fundamental quantity denoted as capacity [CT91]. The formal definition of capacity, denoted by C, can be found in [CT91], for which the following holds:

sup_{p_v} I∞(v; f(v, c)) ≤ C        (5)

⁶ Throughout the paper, for simplicity, we refer to mutual information rate simply as information rate.
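The two example channels of Definition 5 can be written directly as maps of an input sequence and a noise sequence. The sketch below (illustrative, with assumed parameter values) follows f(c, v)(k) = c(k) + v(k) for the Gaussian case and the modulo-2 addition for the binary symmetric case.

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_channel(v):
    """Additive white Gaussian channel: r(k) = v(k) + c(k), c(k) i.i.d. N(0, 1)."""
    return np.asarray(v, dtype=float) + rng.standard_normal(len(v))

def bsc_channel(v, pe=0.1):
    """Binary symmetric channel: r(k) = v(k) XOR c(k), with P(c(k) = 1) = pe."""
    c = (rng.random(len(v)) < pe).astype(int)
    return np.bitwise_xor(np.asarray(v, dtype=int), c)

print(awgn_channel(rng.standard_normal(5)))
print(bsc_channel(rng.integers(0, 2, size=8)))
```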
1.5 Spectral Properties of Asymptotically Stationary Stochastic Processes

We adopt the following definition of asymptotic power spectral density.

Definition 6. A given zero mean real stochastic process a is asymptotically stationary if the following limit exists for every γ ∈ N:

R̄_a(γ) def= lim_{k→∞} E[(a(k + γ) − E[a(k + γ)])(a(k) − E[a(k)])]        (6)

We also use (6) to define the following asymptotic power spectral density:

F̂_a(ω) = Σ_{k=−∞}^{∞} R̄_a(k) e^{−jωk}        (7)
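Definitions (6) and (7) can be checked numerically for a simple process. The sketch below is illustrative only: it assumes ergodicity so that time averages can stand in for the expectation in (6), uses an arbitrary stable AR(1) process, and compares the truncated spectral sum (7) with the known spectrum of that process.

```python
import numpy as np

rng = np.random.default_rng(1)

# zero-mean AR(1) sample path: a(k) = a1*a(k-1) + w(k), w white with unit variance
N, a1 = 100_000, 0.9
w = rng.standard_normal(N)
a = np.zeros(N)
for k in range(1, N):
    a[k] = a1 * a[k - 1] + w[k]

def R_bar(x, gamma):
    """Time-average estimate of the asymptotic autocovariance (6)."""
    x = x - x.mean()
    return float(np.mean(x[gamma:] * x[:len(x) - gamma]))

def F_hat(x, omega, max_lag=200):
    """Truncated version of the spectral sum (7)."""
    return sum(R_bar(x, abs(k)) * np.exp(-1j * omega * k)
               for k in range(-max_lag, max_lag + 1)).real

omega = 0.5
print(F_hat(a, omega))                                     # estimate of F_a(omega)
print(1.0 / abs(1.0 - a1 * np.exp(-1j * omega)) ** 2)      # known AR(1) spectrum, ~4.34
```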
1.6 Elementary Properties of h and I

Below, we provide a short list of properties of differential entropy and mutual information. A more complete list is included in the Appendix, at the point where the more complex proofs are derived. Consider that a and b are given random variables and that the alphabet of a is A = R^q, while B, the alphabet of b, may be a direct product between R^q and a countable set⁷. Under such assumptions, the following holds [CT91]:
• I(a; b) = I(b; a) = h(a) − h(a|b)
• for any arbitrary function φ, h(a|b) = h(a + φ(b)|b) ≤ h(a), where equality holds if a and b are independent.
• (Data Processing Inequality) I(a; b) ≥ I(φ(a); θ(b)), where equality holds if φ and θ are injective.

The above properties hold if we replace entropy by entropy rate and mutual information by information rate. If a is a stationary process⁸ then [CT91]:

h∞(a) ≤ (1/4π) ∫_{−π}^{π} log(2πe F̂_a(ω)) dω        (8)

where equality holds if a is Gaussian.
⁷ Notice that if b takes values in a direct product between R^q and a countable set, then we can represent it as b = (b̄, c), where b̄ takes values in R^q and c has a countable alphabet. Using such a decomposition, one can use our definitions of conditional differential entropy to perform computations.
⁸ As we show in [MD05b], this inequality also holds for asymptotically stationary processes.
2 Technical Framework and Assumptions

Regarding the general scheme of Fig 1, the following assumptions are made:
• w is a scalar (w(k) ∈ R), unit variance, identically and independently distributed stochastic process. For each k, w(k) is distributed according to a density p_w, satisfying |h∞(w(k))| < ∞.
• G is an all-pole stable filter of the form:

G(z) = α / (1 − Σ_{m=1}^{p} a_m z^{−m})        (9)
where p ≥ 1, a_i and α > 0 are given. We chose this form of G, as a way to model the disturbance d, not only because it is convenient that G^{−1} is well defined and causal, but also because there exists a very large class of power spectral densities that can be arbitrarily well approximated by |G(e^{jω})|² [Pri83]. In addition, we assume that G has zero initial conditions.
• given n, P is a single input plant with state x(k) ∈ R^n, which satisfies the following state-space equation:

x(k + 1) = [x_u(k + 1); x_s(k + 1)] = [A_u 0; 0 A_s] x(k) + [b_u; b_s] e(k)        (10)
y(k) = Cx(k),  |λ_i(A_u)| ≥ 1,  |λ_i(A_s)| < 1  and  k ≥ 0

The state partitions x_u and x_s represent the unstable and stable open-loop dynamics, respectively. In addition, the initial state x(0) is a random variable satisfying |h(x_u(0))| < ∞.
• q, w, x(0) and c are mutually independent, where c represents the channel noise according to definition 5.
• the measurement noise q is such that the following holds: I(x(0); um−1 ) 0, would add an extra term on F̂_e(ω) that could mask the beneficial effect of the RPS block.
• in general, if we allow K to be non-linear, or if the RPS block is present, then S_{d,e} will depend on G. Consequently, the fundamental limitations expressed as a function of S_{d,e} should be interpreted as point-wise in the space of disturbances, in contrast with the no preview, linear and time-invariant case, for which S_{d,e} is uniform over the set of disturbances. Limitations in terms of the ratio represented by S_{d,e} should be interpreted as follows: once we have a spectral model of the disturbance, say F̂_d, then limitations in S_{d,e} translate immediately to limitations in F̂_e. Clearly, for
⁹ The first fact follows from standard results in [CT91]. Since the last fact does not follow immediately from [CT91], we refer the reader to a proof in [MD05b].
¹⁰ When we refer to e as asymptotically stationary, we mean that e is asymptotically stationary for all σ_q > 0.
each spectral model of the disturbance, S_{d,e} gives as much information about F̂_e as a classic sensitivity function would. As a consequence, our results show, for any given disturbance spectrum, that there are limits of attenuation and that certain spectra F̂_e are not attainable. It has been suggested in previous publications [DS92] that such a point-wise property, instead of the uniformity characteristic of induced measures, is needed in the study of feedback loops with general non-linear controllers. In section 4, we derive inequalities in terms of S_{d,e}, which hold regardless of K, G and the RPS block.
• in our formulation, d is asymptotically stationary, but e may not be. If e is not asymptotically stationary then S_{d,e}(ω) is undefined. We have not tried to attribute a frequency domain interpretation based on other non-stationary notions, such as wavelets or evolutionary power spectral densities. In the absence of stationarity, we resort directly to entropy rates.
3 Motivation Case Study: A Remote Preview System Based on an Additive White Gaussian Channel

In this section, we consider a particular case of the scheme of Fig 1, where the RPS block is constructed by means of an encoder and an additive white Gaussian channel, as depicted in Fig 2. We minimize the log-sensitivity integral by searching over linear and time-invariant E and K, and show that the optimum explicitly depends on the capacity of the channel. Since such a solution is derived for the linear and time-invariant case, we end the section by asking the question: "how good is this particular selection of E and K?". The answer relies on the results of section 4, where we derive universal bounds that hold for arbitrary channels, controllers and encoders, which might be non-linear and time-varying and operate on arbitrary alphabets. This example is intended for motivation only and the general results are stated and proved in section 4 and in the Appendix.
Fig. 2. Structure of a Remote Preview System (RPS), using an additive white Gaussian channel.
3.1 Technical Framework for the Case-Study

For the purposes of this example, we consider the following:
• w(k) is unit variance, zero mean, white and Gaussian.
• the channel is specified by r = v + c, where c(k) is white, zero mean and Gaussian with variance σ_c² > 0.
• P is stabilizable by means of a stable, linear and time invariant controller H(z).
• q is i.i.d. zero mean and Gaussian with variance σ_q² > 0.

3.2 Optimal Design of a Linear and Time-Invariant Remote Preview Control Scheme

In this example, we will derive an optimal choice of linear and time-invariant E and K, for the scheme of Fig 2.
Fig. 3. Complete scheme, comprising the feedback loop and the Remote Preview System, using an additive white Gaussian channel.
We wish to find a scheme that achieves the following minimum:

min_{E,K ∈ LTI} (1/2π) ∫_{−π}^{π} log S_{d,e}(ω) dω        (12)
under the following power constraint at the output of E:

lim sup_{k→∞} Var(v(k)) ≤ 1        (13)
where the minimum is taken over linear and time-invariant (LTI) E and K, while S_{d,e} is the sensitivity of definition 7. We have decided to investigate (12) because we wanted to determine by how much Bode's equality can be relaxed in the presence of an AWGN channel, along with linear and time-invariant K and E. Since K is assumed linear, we choose to design the preview system and the feedback controller separately, as shown in Fig 3. The preview system is composed of an encoder E and a decoder D, while H is responsible for
stabilization and performance. For this scheme, the overall controller K results from the additive contribution of D and H, according to:

K(r, y) = Dr + Hy        (14)
At the end of section 3.4, we explain why our solution is optimal. In addition, we show that the optimal solution is not unique because the cost function is invariant under any LTI, stable and stabilizing H. Such a degree of freedom is useful because it allows H to be designed to meet additional specifications.

3.3 First Step: Design of E and D

As a first step, we choose to determine the following minimum:

min_{E,D ∈ LTI} (1/2π) ∫_{−π}^{π} log S_{d,d̃}(ω) dω        (15)
under the following power constraint at the output of E:

lim sup_{k→∞} Var(v(k)) ≤ 1        (16)
where S_{d,d̃} is a particular case of the sensitivity of definition 7, which is given by:

S_{d,d̃}(ω) def= (F̂_{d̃}(ω))^{1/2} / |G(e^{jω})| = (F̂_{d̃}(ω) / F̂_d(ω))^{1/2}        (17)

From a functional point of view, E and D of Fig 3 should generate the best possible d̂ so that the effects of d can be reduced by means of disturbance cancellation (subtraction). The disturbance residual is indicated as d̃ = d − d̂. Our approach consists in designing E as a whitening filter that recovers w from d, and constructing D as the mean-square optimal estimate of w, followed by a filter which replicates the effect of G, as well as a matching delay. The resulting E and D are given by:
Fig. 4. Structure of the forward loop used to study the properties of the optimal design of E and D, in the presence of an additive white Gaussian channel.
E = G^{−1}        (18)

D = − (z^{−m} / (1 + σ_c²)) G        (19)
In order to gauge the quality of our design, we focus on the forward loop depicted in Fig 4. The following is the resulting asymptotic power spectral density of d̃:

F̂_{d̃}(ω) = |G(e^{jω})|² / (1 + 1/σ_c²)        (20)

From (20), we also find that:

log F̂_{d̃}(ω) = 2 log |G(e^{jω})| − 2C        (21)
where C = (1/2) log(1 + 1/σ_c²) is the capacity of the remote preview channel¹¹. Clearly, (21) indicates that our scheme makes use of the channel in reducing the overall power-spectral density of d̃. The larger the capacity C, the better the disturbance reduction. Using (21), we can also derive the following equality:

(1/2π) ∫_{−π}^{π} log S_{d,d̃}(ω) dω = −C        (22)
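Equations (18)–(22) can be checked with a short simulation. The sketch below is illustrative only: it picks an arbitrary first-order all-pole G and channel noise level, omits the common delay z^{-m}, and uses variance ratios because the resulting sensitivity S_{d,d̃} is flat; it verifies that Var(d̃)/Var(d) ≈ σ_c²/(1 + σ_c²) = 2^{−2C}, so that the log-sensitivity integral in (22) indeed equals −C.

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma_c = 200_000, 0.5                 # assumed channel noise std (illustrative)
alpha, a1 = 1.0, 0.8                      # G(z) = alpha / (1 - a1*z^-1), an arbitrary stable choice

w = rng.standard_normal(N)                # unit-variance white Gaussian innovations
c = sigma_c * rng.standard_normal(N)      # channel noise

def apply_G(u):                           # all-pole filter G
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = a1 * y[k - 1] + alpha * u[k]
    return y

d = apply_G(w)                            # disturbance d = G w
r = w + c                                 # encoder E = G^{-1} gives v = w; the channel adds c
d_hat = apply_G(r) / (1.0 + sigma_c**2)   # decoder D = -G z^{-m}/(1+sigma_c^2); sign/delay omitted here
d_tilde = d - d_hat                       # disturbance residual

C = 0.5 * np.log2(1.0 + 1.0 / sigma_c**2)             # capacity, cf. footnote 11
print(np.var(d_tilde) / np.var(d), 2.0 ** (-2 * C))   # both ~ 0.2 for sigma_c = 0.5
```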
At this point, the natural question to ask is: "Assuming that S_{d,d̃} is well defined¹², can we do better than (22) by using some other choice of non-linear and time-varying E and D?". The answer to the question is no, i.e., the following holds, under the power constraint (16):

∀ E, D (such that d̃ is asymptotically stationary):  (1/2π) ∫_{−π}^{π} log S_{d,d̃}(ω) dω ≥ −C        (23)
In particular, we find that our choices (18)-(19) are optimal and that the minimum in (15) is equal to −C. The rigorous statement of (23) is given in Theorem 2, for the more general situation where feedback is also present. From Theorem 2, we also conclude that (23) holds regardless of E, D and the channel. Below, we sketch a simple proof of (23) as a way to illustrate the basic principle, which, due to its simplicity, can be explained in just a few steps. In order to make it short, we will assume familiarity with the properties listed in

¹¹ This formula gives the capacity of an additive white Gaussian channel [CT91], with noise power σ_c² and input power constraint σ_v² ≤ 1.
¹² Since d is asymptotically stationary, S_{d,d̃} is well defined provided that d̃ is asymptotically stationary. Without asymptotic stationarity, we resort directly to entropy rates to define measures of performance. The complete analysis is provided in section 4.
section 1.6. Since this result is proved in detail and generality in the Appendix, the reader has the option to skip directly to section 3.4.

Sketch of a proof of (23). By expressing mutual information as a difference of differential entropies [CT91], we get:

h∞(d) − h∞(d|d̂) = I∞(d, d̂) ≤ C        (24)

where the last inequality results from the data-processing inequality, i.e., I∞(d, d̂) ≤ I∞(v, r), and from I∞(v, r) ≤ C, which holds for any channel. Using basic properties of h, we arrive at:

h∞(d|d̂) = h∞(d − d̂|d̂) ≤ h∞(d − d̂) = h∞(d̃)        (25)
Together with (24), (25) leads to the following entropy rate inequality, which is valid regardless of E, D and the channel:

h∞(d̃) ≥ h∞(d) − C        (26)
Assuming asymptotic stationarity of d̃, so that F̂_{d̃} and S_{d,d̃} are well defined, we have¹³:

h∞(d̃) ≤ (1/4π) ∫_{−π}^{π} log(2πe F̂_{d̃}(ω)) dω        (27)

Since d is Gaussian, the following holds:

h∞(d) = (1/4π) ∫_{−π}^{π} log(2πe F̂_d(ω)) dω        (28)
In summary, substituting (27) and (28) into (26), we obtain:

(1/2π) ∫_{−π}^{π} log (F̂_{d̃}(ω) / F̂_d(ω))^{1/2} dω ≥ −C        (29)
which holds regardless of E, D and the channel.

3.4 Optimal Scheme in the Presence of a Linear Feedback Controller

We go back and focus on the original problem (12), by re-writing it as:

min_{E,D,H ∈ LTI} (1/2π) ∫_{−π}^{π} log S_{d,e}(ω) dω        (30)
under the following power constraint at the output of E:

lim sup_{k→∞} Var(v(k)) ≤ 1        (31)

¹³ This inequality is proved in [MD05b], under asymptotic stationarity assumptions. The standard proof, assuming stationarity, is given in [CT91].
At this point, we investigate how our choices (18)-(19) will affect the complete scheme of Fig 3, in the presence of an arbitrary stable and stabilizing linear and time-invariant controller H. In terms of Fig 2, if we substitute (18)-(19) in (14), we obtain the following:

K(r, y) = − (z^{−m} / (1 + σ_c²)) G r + H y        (32)
Standard computations lead to the following asymptotic power spectral density:

F̂_e(ω) = [F̂_{d̃}(ω) + |H(e^{jω})|² σ_q²] / |1 − P(e^{jω})H(e^{jω})|²        (33)

Using definition 7, (17) and (33), we arrive at the following:

S_{d,e}(ω) = lim sup_{σ_q→0} (F̂_e(ω) / F̂_d(ω))^{1/2} = S_{d,d̃}(ω) / |1 − P(e^{jω})H(e^{jω})|        (34)
By making use of (34), we can compute the following integral:

∫_{−π}^{π} log S_{d,e}(ω) dω = ∫_{−π}^{π} log [1 / |1 − P(e^{jω})H(e^{jω})|] dω + ∫_{−π}^{π} log S_{d,d̃}(ω) dω        (35)
dω (35)
which, by means of Bode’s equality [DFT92] and (22) applied to the right hand side, can be written as: 1 2π
n
π −π
log Sd,e (ω)dω =
max{0, log |λi (A)|} − C
(36)
i=1
From (23) and (35), we conclude that (36) attains the optimal value of (30). 3.5 Motivation for Studying the General Case If C = 0, i.e. in the absence of a RPS block, then (36) is identical to Bode’s integral equation. Otherwise, if C > 0 then the identity (36) shows that our choices (18)-(19) make good use of the channel because they relax Bode’s integral by a factor of −C. If we allow plants P which are not stabilizable by means of a stable, linear and time-invariant H then (35) indicates that the following inequality still holds for any linear and time invariant E and K: 1 2π
π −π
n
max{0, log(|λi (A)|)} − C
log Sd,e (ω)dω ≥ i=1
(37)
where we used the fact that the first term on the right hand side of (35) will also account for the unstable poles of H, if any. This motivated us to ask the following questions, which are addressed in section 4:
• adopting the general assumptions of section 2 plus asymptotic stationarity of e, can we beat (37) by allowing a larger class of E and K, such as non-linear and operating over arbitrary alphabets? Notice that by allowing non-linear K, we also consider the case where K cannot be written as an additive contribution from r and y, such as in (32).
• what if, beyond the extensions of the previous question, we allow arbitrary channels: can (37) be beaten?
• for the case where e may not be asymptotically stationary, general E, K and arbitrary channels, can we extend (37) by means of another inequality involving entropy rates?
In section 4, we show that the answers to the above questions are no, no and yes, respectively.
4 Fundamental Limitations for the General Case

In this section, we present performance bounds for the scheme¹⁴ of Fig 5. In this case, the remote preview system is constructed by means of an arbitrary channel and a general encoder. The proofs of the Theorems may be found in [MD05b]. In the rest of the paper, we adopt the following assumptions:
• (A1) E and K are causal operators defined in the appropriate spaces, i.e., the output of E must belong to the channel input alphabet, which might be discrete or continuous. Similarly, the output of the channel must be defined in the alphabet of r, at the input of K (see Fig 5).
• (A2) (Feedback well-posedness) we assume that the feedback system is well-posed, i.e., that there exists a causal operator J such that the following is well defined:

∀k ≥ 0,  u(k) = J(x(0), r, d, q)(k)        (38)

4.1 Derivation of a General Bound Involving Entropy Rates

As we have discussed in section 2.1, we use h∞(e) − h∞(d) as a performance measure for the most general case, where we do not require e to be asymptotically stationary. The following Theorem provides a universal lower bound for h∞(e) − h∞(d) as a function of the unstable poles of P and the capacity
¹⁴ Although our results are valid for the general scheme of Fig 1, for simplicity, we consider the concrete scenario depicted in Fig 5.
Fig. 5. Structure of a remote preview system, using a general communication channel.
of the remote preview channel. All of the remaining results in this section are, in one way or another, consequences of this universal lower bound.

Theorem 1. [MD05b] Consider the feedback interconnection represented in Fig 5. In addition, assume that the state of P satisfies sup_k E[x(k)^T x(k)] < ∞. For any encoder E and controller K satisfying (A1) and (A2), the following is true:

h∞(e) − h∞(d) ≥ Σ_{i=1}^{n} max{0, log |λ_i(A)|} − C        (39)
where C represents the capacity [CT91] of the remote preview channel.

The proof of Theorem 1 can be found in [MD05b]. In addition, [MD05b] comprises not only all the details of the proof, but it also contains preliminary results, which clarify aspects such as the role of causality and stability. As shown in [MD05b], the effect of stability is related to the results in [TM04a, YB03, NE00b]. The aforementioned references address the problem of control under finite-rate constraints, but their proofs regarding the minimum stabilizing rate hold in general. An example of a control design method for such a class of problems is given in [SDS04].

4.2 Expressing Performance Limitations by Means of Asymptotic Power Spectral Densities: An Extension to Bode's Integral Formula

Under asymptotic stationarity assumptions, Theorem 2 ascribes a frequency domain interpretation to Theorem 1.

Theorem 2. [MD05b] Consider the feedback interconnection represented in Fig 5. In addition, assume that the state of P satisfies sup_k E[x^T(k) x(k)] < ∞. If the encoder E and the controller K are such that (A1)-(A2) are satisfied and e is asymptotically stationary, then the following is true:

(1/2π) ∫_{−π}^{π} log(√(2πe) S_{d,e}(ω)) dω ≥ Σ_{i=1}^{n} max{0, log |λ_i(A)|} − C + h∞(w)        (40)
where C represents the capacity [CT91] of the RPS channel. In addition, if w is Gaussian then (40) is given by:

(1/2π) ∫_{−π}^{π} log S_{d,e}(ω) dω ≥ Σ_{i=1}^{n} max{0, log |λ_i(A)|} − C        (41)
Remark 1. Now, we can confirm our assertion, at the end of section 3.5. Any other choice of non-linear E and K, in the example of section 3, still has to satisfy (41) and (37).
5 Conclusions

In this paper, we have analyzed an example of a networked control scheme, where a finite horizon preview of the disturbance is remotely available to the controller. In our results, the capacity of the preview communication system is an essential quantity. Under asymptotic stationarity assumptions, we have shown, for a given finite capacity, that Bode's integral formula can be extended. We have illustrated how some general bounds, derived in [MD05b], might be achieved.
Acknowledgments

The authors would like to thank Ola Ayaso (MIT), Jorge Goncalves (Cambridge U., U.K.) and Mustafa Khammash (UCSB) for providing references to examples in Biology. We also would like to express our gratitude to Prakash Narayan (U. Maryland) for carefully reading our first manuscript, to Sanjoy Mitter (MIT) for interesting suggestions and to Jie Chen (U California Riverside) for pointing us to a few references in Preview Control. This work was sponsored by the UCLA, MURI project title: "Cooperative Control of Distributed Autonomous Vehicles in Adversarial Environments", award: 0205-G-CB222. Nuno C. Martins was supported by the Portuguese Foundation for Science and Technology and the European Social Fund, PRAXIS SFRH/BPD/19008/2004/JS74.
Plenary Talk

The Role of Information Theory in Communication Constrained Control Systems

Sekhar Tatikonda
Department of Electrical Engineering, Yale University
[email protected]

Communication is an important component of distributed, networked, and cooperative control systems. In most current systems the communication aspects are designed separately from the control aspects. Recently there has been an increasing interest in studying control systems employing multiple sensors and actuators that are geographically distributed. For these systems this design separation is no longer tenable. Hence there is a need for a unified view of control and communication. In this talk we discuss how tools from information theory can be used to develop such a view. Specifically we apply source coding, channel coding, binning, and coding with side-information techniques to a variety of control with communication problems including settings with non-traditional information patterns, multiple sensors, and multiple agents.
Sekhar Tatikonda is currently an assistant professor of electrical engineering at Yale University. He received his PhD degree in EECS from MIT in 2000. He was a postdoctoral fellow in EECS at UC-Berkeley from 2000-2002. His research interests span topics in communications, information theory, control, and machine learning. He has made contributions in the areas of control with communication constraints, feedback in communication channels, and iterative message-passing algorithms.
Delay-Reliability Tradeoffs in Wireless Networked Control Systems

Min Xie and Martin Haenggi
Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556
{mxie, [email protected]}

Summary. Networked control systems (NCS) require data to be communicated in a timely and reliable manner. However, random transmission errors incurred by wireless channels make it difficult to achieve these qualities simultaneously; therefore, a tradeoff between reliability and latency exists in NCSs with wireless channels. Previous work on NCSs usually assumed a fixed transmission delay, which implies that a failed packet is discarded without retransmission, and thus reliability is reduced. When the channel errors are severe, the NCS cannot afford the resulting packet loss. In this paper, a delay-bounded (DB) packet-dropping strategy is combined with automatic repeat request (ARQ) at the link layer such that failed packets are retransmitted unless they are out-of-date. On the one hand, the packet delay is controlled to be below the predetermined delay bound. On the other hand, reliability is improved because failed packets are given more retransmission opportunities. This paper investigates the tradeoff between packet delay and packet loss rate with DB strategies. Due to the multihop topology of the NCS, medium access control (MAC) schemes are needed to schedule the transmission of multiple nodes. Moreover, spatial reuse should be taken into account to improve the network throughput. In this paper, two MAC schemes are considered, m-phase time division multiple access (TDMA) and slotted ALOHA. They are compared for different sampling rates and delay bounds. TDMA outperforms ALOHA in terms of both end-to-end (e2e) delay and loss rate when the channel reception probability is above 0.5 and/or traffic is heavy. However, ALOHA shows a self-regulating ability in that the effective node transmit probability depends only on the sampling rate and channel reception probability, but is essentially independent of the ALOHA parameters. Hence, for light traffic, a simple ALOHA with transmit probability 1 is preferred over TDMA in NCSs. The derived relationship between the sampling rate, the e2e delay (or delay bound), and the packet loss rate is accurate and realistic and can be used in NCSs for more accurate performance analyses.
1 Introduction

Networked sensing and control systems require real-time and reliable data communication. In the wireless environment, due to random channel errors, it is difficult, if not impossible, to guarantee hard delay bounds with full reliability. Many applications of networked control systems (NCS) are delay-sensitive, but they can tolerate a small amount of data loss. Therefore, it is sufficient to provide NCSs with a balanced guarantee between the delay and the loss rate. Previous work on the QoS in NCS usually assumed that the packet transmission delay is fixed [LMT01, LG03, LG05, BPZ00b]. In the practical wireless environment, this assumption hardly holds since the network-induced delay is random. In order to make this assumption hold, in many previous works the packets are allowed only one transmission attempt. In [LG03, LG05], the wireless channel is modeled by a Bernoulli process with a success probability ps. Failed packets are immediately discarded without any retransmission attempts. In this case, the packet delay is a constant equal to one time slot, but the network is rather unreliable: reliability depends entirely on the channel parameter ps. If ps ≪ 1, the packet loss rate pL = 1 − ps will be too large to be tolerated. On the other hand, in a fully reliable network, in order to guarantee 100% reliability, failed packets are retransmitted until they are received successfully. Then, the resulting delay is tightly controlled by the channel parameter ps. Similarly, if ps ≪ 1, the delay will be very long and cannot be tolerated by real-time applications. Meanwhile, more energy is wasted to retransmit packets that are outdated and thus useless for the controller. In addition, from the perspective of network stability, the traffic rate (or the data sampling rate) is constrained to be smaller than the channel reception probability ps. In an interference-limited network, ps at the hotspot nodes is very small, so only light traffic can be accommodated by such networks. A feasible solution to balance latency and reliability is to drop a small percentage of packets. A simple strategy is "finite buffer" (FB) [Zor02]: if the buffer is full, some packets are thrown out. To guarantee a hard delay bound, consider a bounded delay (BD) dropping strategy [Zor02], in which failed packets are retransmitted until they are received correctly or their delay exceeds a delay bound B. The maximum packet delay is then guaranteed to be B. The dropping strategy is associated with the node scheduling algorithm to determine which packets are eliminated. For instance, NCS applications always prefer new packets over old packets; therefore, using priority scheduling (high priority to new packets) or Last-Come-First-Serve (LCFS) scheduling, old packets are dropped to yield buffer space for new packets. With respect to real-time NCS applications, the BD strategy is employed in this paper. An NCS using the BD scheme is referred to as a Delay Bounded (DB) network. Compared to the fully reliable network, the DB network has
several advantages. First, network stability is not an issue: if the traffic load is too heavy to be accommodated by the channel, some packets will be discarded, and the network can finally self-stabilize. Hence, a traffic rate higher than the channel reception probability ps is allowed in the DB network. Second, less energy is wasted to retransmit packets that will be dropped eventually (we refer to these packets as "marked" packets). As a matter of fact, given the multihop topology, the sooner the marked packets are dropped, the better. Third, in interference-limited networks, as the overall traffic load decreases, the channel reception probability ps will be enhanced to admit more traffic. This feature is particularly helpful for NCS applications: at the cost of discarding outdated packets, more recent packets will be transmitted to the controller. The disadvantage caused by the BD strategy is unreliability, which is measured by the packet loss rate pL in this paper. However, with coding or network control packets, the transmission errors can be combated and reliability will be improved. Using B and pL as the maximum delay and the reliability and plugging them into the controller [LG03, LG05], a more accurate lower bound on the NCS performance can be obtained. The system model for an NCS is outlined in Fig. 1(a), where the source (e.g., sensor) data are transmitted over multiple wireless hops to the controller. Nodes 1 to N are relays. The data generated at the source node is time-critical, e.g., periodic data used for updating the controller output. A loop exists between the source, the controller and the plant. The set of communication links is modeled as a tandem queueing network (Fig. 1(b)). Given the multihop topology, multiple nodes in tandem compete for transmission opportunities. MAC schemes are therefore needed to schedule the node transmission order to avoid collisions and take advantage of spatial reuse. If packets are of fixed length, the objective is to analyze the discrete-time tandem queueing network controlled by wireless MAC schemes.
Fig. 1. System models: (a) Multihop NCS model; (b) Regular line network.
1.1 Related Work

The analysis of multihop MAC schemes in wireless networks essentially involves two issues, the wireless MAC scheme and the tandem queueing system. Previous work usually analyzed these two issues separately. The analysis of tandem networks focuses on the queueing delay, without consideration of the MAC-dependent access and backoff delay and the retransmission delay induced by collisions. For example, in [PP92, PR98], the delay performance is derived assuming that the node transmits immediately if the server is free, and the transmission is always successful. In [Leh96, Leh97], real-time queueing theory is proposed to explore real-time Jackson networks with Poisson arrivals and exponential servers for heavy traffic. Poisson arrivals and exponential service possess the memoryless property, which significantly simplifies the analysis from the perspective of queueing theory. However, most traffic models are not memoryless like the Poisson model and cause more difficulties in the analysis, particularly when associated with MAC schemes like TDMA. Moreover, the traffic load in NCS is generally not heavy; the assumption of heavy traffic in [Leh96, Leh97] leads to an underestimation of the network performance. On the other hand, the analysis of MAC schemes concentrates on only the access and transmission delay while ignoring the queueing delay. For simplicity, it is usually assumed that the network has a single-hop topology, an infinite number of nodes, and a single packet at each node such that no queueing delay is incurred. More importantly, the traffic flow is generated in an impractical way: every node always has packets to transmit whenever it is given a transmission opportunity. [Fer77, YY03] study the access delay of Bernoulli and Poisson arrivals. In [BTR92], the queueing delay was claimed to be derived with a special MAC scheme that allows one node to completely transmit all its packets as long as it captures the channel; for a specific node, the so-called queueing delay actually includes only the access delay and the transmission delay. In the wireless environment, the analysis of MAC schemes additionally considers the impact of the wireless channel characteristics. In [ZP94, AB87], a capture model is used to calculate the average packet transmission delay of ALOHA and CSMA for Rayleigh fading channels. [FM03] proposes optimum scheduling schemes for a line sensory network to minimize the end-to-end (e2e) transmission delay. With respect to the BD scheme, its effect upon a single node is discussed in [LC04]. In [SS99], various TDMA schemes combined with the BD scheme are studied in the single-hop scenario. This paper extends the analysis to the multihop scenario and studies the e2e performance.

1.2 Our Contributions

This paper investigates the tradeoff between the end-to-end (e2e) delay, reliability, and the sampling rate in multihop NCS. The main contribution is to
jointly study the MAC scheme, the dropping strategy and the tandem network. The network performance, such as delay and packet loss rate, is determined by several factors, including the traffic, the routing protocol, the channel characteristics, and the MAC scheme. Emphasizing the MAC scheme, we focus on a regular line network as shown in Fig. 1(b), which disposes of the routing and inter-flow interference problems. The obtained performance provides an upper bound for general two-dimensional networks because 1) the inter-flow interference is zero; 2) networks with equal node distances achieve better performance than those with unequal or random node distances. Two MAC schemes are studied, m-phase TDMA and slotted ALOHA. In the former, every node is allocated to transmit once in m time slots, and nodes m hops apart can transmit simultaneously. In the latter, each node independently transmits with transmit probability pm. The wireless channel is characterized by its reception probability ps. The sampled source data (generated by the source in Fig. 1(a)) is modeled as constant bit rate (CBR). The CBR traffic flow is not only easy to generate but also more practical than the traffic models used in previous work on MAC schemes, where the traffic load is assumed to be so heavy that the node is always busy transmitting; in practice, this heavy traffic assumption leads to an unstable network. There is no restriction on the node scheduling algorithm. To simplify the analysis, we assume First-Come-First-Serve (FCFS). Other scheduling algorithms like LCFS and priority scheduling can be chosen to better serve the NCS application demand. We will show later that in the TDMA mode, the CBR arrival results in a non-Markovian queueing system, which substantially complicates the analysis. Despite the enormous queueing theory literature, it is still difficult to track the transient behavior of non-Markovian systems [Tri01], while the BD strategy is implemented based on the system transient behavior. Even if the source is deterministic and smooth, calculating the delay distribution under a time-dependent BD strategy is still a challenge. In addition, non-Poisson arrivals cause correlations between the delays and queue lengths at individual nodes. Closed-form solutions exist only for some special networks, like Jackson networks, which do not include the network in this paper. In short, accurate analyses are almost impossible in a multihop network with a long path. Therefore, we start the analysis with the first node, and then investigate the network performance through simulation results. Combining MAC with the BD strategy, we compare DB-TDMA and DB-ALOHA in terms of the delay and the packet loss rate. The DB-MAC networks are also compared with their non-dropping counterparts to exhibit the advantage of the BD strategy. Since the traffic intensity is not necessarily close to 1, ALOHA possesses a self-regulating property, which does not exist under the heavy traffic assumption: although ALOHA attempts to control the packet transmissions with the transmit probability pm, the node actually transmits with a probability pt that is essentially independent of pm.
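The two access rules just described are simple enough to state in a few lines of code; the sketch below is illustrative only and uses the notation above (the phase index for m-phase TDMA and the transmit probability pm for slotted ALOHA).

```python
import random

def tdma_may_transmit(i, t, m):
    """m-phase TDMA: node i owns every m-th slot, so nodes i, i+m, i+2m, ... share a phase."""
    return t % m == i % m

def aloha_attempts(pm, rng=random):
    """Slotted ALOHA: a backlogged node transmits in a slot independently with probability pm."""
    return rng.random() < pm

# with m = 3, nodes 1, 4 and 7 are allowed to transmit in the same slot (spatial reuse)
print([i for i in range(1, 9) if tdma_may_transmit(i, t=7, m=3)])   # -> [1, 4, 7]
```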
This paper is organized as follows. Section 2 describes the system model and presents some results for the non-dropping MAC schemes. TDMA and ALOHA are then studied in Section 3, where the self-regulating property of ALOHA is specially discussed. Their performance is compared through a set of simulation results in Section 4. The paper is concluded in Section 5.
2 System Model
The set of communication links in the NCS (Fig. 1(a)) is modeled as a tandem queueing network, which is composed of a source node (node 0), N relay nodes, and a unique destination node (node N + 1, also referred to as the sink node). A CBR flow of fixed-length packets is generated at the source node. Relay nodes do not generate traffic. Time is divided into time slots; one time slot corresponds to the transmission time of one packet. The sampling rate of the CBR traffic is 1/r, r ∈ N. Given the channel reception probability ps, the channel is modeled as a Bernoulli process with parameter ps. The e2e delay bound is B. From the perspective of energy efficiency, it is not advisable to drop a packet only once its e2e delay exceeds B, which typically happens when the packet has already reached the last few nodes before the sink. The longer the route a packet has traversed, the more energy is wasted if the packet is eventually dropped. A local BD strategy is therefore preferred. In order to determine how the local BD strategy should be implemented, we first review the cumulated delay distribution in non-dropping tandem networks. In [XH05], a fully reliable tandem queueing network is studied. The CBR traffic is transformed into correlated and bursty traffic by the error-prone wireless channels. Even with this correlation, the e2e delay is approximately linear in the number of nodes, with respect to both the delay mean and the delay variance, as confirmed by the simulation results in Fig. 2. It is therefore reasonable to allocate B uniformly among the nodes based on their relative distances to the source node [Pil01], i.e., the local delay bound Di is set to Di = iB/N = iD (i ∈ [1, N], Di ∈ N). Packets are dropped at node i if their cumulated delay exceeds Di. Intuitively, if a packet experiences a cumulated delay longer than Di at node i, it is very likely to exceed B at the final node N. Parameters of interest include the cumulated delay di and the packet loss rate pL^(i) at node i (1 ≤ i ≤ N). As proved in [XH05], the e2e delay mean of ALOHA is about ps/(1 − ps) times that of TDMA. The gap is even larger for the delay variance, as shown in Fig. 2(b). However, in a wireless multihop network, TDMA is difficult to implement, and simple MAC schemes like ALOHA are more desirable, even though TDMA substantially outperforms ALOHA in terms of both throughput and delay. The BD strategy is a way to reduce the performance gap between TDMA and ALOHA.
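To make the system model concrete, the following minimal Python sketch (our own illustration, not taken from the paper; the slotted-ALOHA access rule, the FCFS queues, and all parameter values are assumptions chosen to mirror the setup described above) simulates a DB tandem network with the local bounds Di = iD and estimates the e2e delay mean and packet loss rate.

```python
import random

def simulate_db_aloha(N=8, r=4, ps=0.8, pm=1/3, D=10, slots=300_000, seed=0):
    """Tandem of N hops fed by a CBR source; node i drops its HOL packet once the
    cumulated delay exceeds the local bound D_i = i*D (interference folded into ps)."""
    random.seed(seed)
    queues = [[] for _ in range(N)]            # queues[i]: arrival slots of packets waiting at hop i+1
    delivered, dropped, delays = 0, 0, []
    for t in range(slots):
        if t % r == 0:
            queues[0].append(t)                # CBR source: one packet every r slots
        for i in reversed(range(N)):           # serve downstream hops first: one hop per packet per slot
            while queues[i] and t - queues[i][0] > (i + 1) * D:
                queues[i].pop(0); dropped += 1 # local deadline violated -> drop before transmitting
            if queues[i] and random.random() < pm and random.random() < ps:
                birth = queues[i].pop(0)       # transmission opportunity and successful reception
                if i == N - 1:
                    delivered += 1; delays.append(t + 1 - birth)
                else:
                    queues[i + 1].append(birth)
    return sum(delays) / max(delivered, 1), dropped / max(dropped + delivered, 1)

mean_delay, loss = simulate_db_aloha()
print("e2e delay mean (slots):", round(mean_delay, 1), " e2e loss rate:", round(loss, 3))
```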
(a) delay means    (b) delay variances
Fig. 2. Comparison of delay performance in the TDMA and ALOHA network with m = 3, r = 4, ps = 0.8, N = 8
Note that in the DB network, packets are dropped according to their delays. Conventional queueing theory keeps track of the buffer size and cannot capture the packet dropping event [Zor02]. We therefore use a delay model [LC04], in which the system state is the packet delay, so that the delay-dependent packet dropping event can be directly captured through the system state.
3 Delay Bounded Wireless Multihop Networks
3.1 m-Phase TDMA
The m-phase TDMA scheduler takes advantage of spatial reuse so that nodes i, m + i, 2m + i, . . . (1 ≤ i ≤ m) can transmit simultaneously. A transmission can be either the transmission of a new packet or the retransmission of a failed packet. Time is now divided into frames of m time slots rather than individual slots. For a node, the beginning of a frame is the beginning of the time slot allocated to this node. The transmission rate is 1/m, and each transmission is successful with probability ps. To guarantee system stability, we need r > m; to model heavy traffic, we assume m < r < 2m. At the frame level, the service time is geometric with parameter ps. We start with the first node since it determines the traffic pattern of all subsequent nodes. At the frame level, the interarrival time 1 < r/m < 2 is not an integer even though it is constant. As a consequence, the number of packet arrivals in one frame alternates between 0 and 1, depending on the arrival pattern of all previous r − 1 frames. This dependence makes the resulting queueing system more complicated than the G/M/1 system, where the interarrival times are independent and identically distributed. Hence, a standard Markov chain cannot be established in the usual way to keep track of the buffer size. Instead, we resort to the delay model
that takes the delay of the Head of Line (HOL) packet as the system state [LC04]. The system state is the waiting time of the HOL packet in terms of time slots, but the state transitions happen at the frame boundaries. Since the waiting time is the difference between the present time and the packet arrival time, the state value may be negative when the queue is empty and the next packet has not yet arrived; the absolute value of a negative state represents the remaining time until the next packet arrival. With a constant interarrival time r, the transition probability matrix P = {Pij} is

    Pij = { ps,  0 ≤ i ≤ D − m, j = i − Δ,
          { qs,  0 ≤ i ≤ D − m, j = i + m,                                   (1)
          { 1,   D − m < i ≤ D, j = i − Δ, or i < 0, j = i + m,

where Δ := r − m > 0. At frame t, let the HOL packet be packet k with waiting time wk(t). If the transmission is successful, packet k departs at frame t, and the subsequent packet k + 1 becomes the HOL packet at frame t + 1. The waiting time of packet k + 1 at frame t is wk+1(t) = wk(t) − r; it increases by m to wk+1(t + 1) = wk(t) − r + m = wk(t) − Δ at frame t + 1. Therefore, the system state transits from wk(t) to wk(t) − Δ with probability ps. If wk(t) < Δ, packet k is the last packet in the buffer and the buffer becomes empty after its transmission; the system then transits to a negative state i = −(Δ − wk(t)) < 0. For m < r < 2m, the server idle time does not exceed one frame, so there must be a packet arrival during frame t + 1. This new packet may arrive in the middle of frame t + 1 and cannot be transmitted immediately; its waiting time until it can access the channel is m + i > 0. Hence the negative state i transits to the positive state m + i with probability 1. If the transmission fails and wk(t) ≤ D − m, the HOL packet remains in the buffer and is retransmitted after one frame; its delay increases by m to wk(t + 1) = wk(t) + m ≤ D with probability qs. If wk(t) > D − m, the HOL packet k will experience a delay greater than D after one frame and is discarded (possibly in the middle of the frame). Then, at the beginning of frame t + 1, packet k + 1 becomes the HOL packet with delay wk+1(t + 1) = wk(t) − Δ. Recall that if the transmission is successful, the positive state wk(t) also transits to state wk(t) − Δ. In other words, if wk(t) > D − m, the system state always transits to wk(t) − Δ with probability 1, regardless of whether the transmission succeeds or fails. The steady-state probability distribution {πi} can be obtained either iteratively or by solving π = πP numerically. For the critical case Δ = 1, {πi} is expressed in terms of π0 as follows (qs := 1 − ps):

    πi = { (π0/ps^i) [ 1 + Σ_{k=1}^{Ki} (−qs ps^m)^k g(k) ],   i ≤ D − m,
         { qs Σ_{j=i−m}^{D−m} πj,                               i > D − m,        (2)
where

    g(k) = C(i − km + 1, k) − ps C(i − km, k),   Ki = ⌊(i + 1)/(m + 1)⌋,

with C(n, k) denoting the binomial coefficient. If the HOL packet is transmitted successfully, its delay at the first node is wk(t) plus one time slot for the transmission itself. Therefore, the delay distribution {di} is completely determined by the probabilities of the non-negative states,

    di = πi−1 / Σ_{j≥0} πj.        (3)
In the m-phase TDMA network, the node transmits at the frame boundaries. A packet may be dropped in the middle of a frame if it experiences a delay greater than D − m at the beginning of this frame and fails to be transmitted. The packet dropping probability is

    pL^(1) = (qs/λ) Σ_{i=D−m+1}^{D} πi,   λ = m/r.        (4)

By observing the balance equations, we obtain

    πi = { qs ( Σ_{j=1}^{i} πj + π0 ),   1 ≤ i < m,
         { qs Σ_{j=i−m}^{k} πj,          i ≥ m,            (5)
where k = min{i, D − m}. Relation (5) holds for the delay distribution {di} as well. Evidently, di is jointly determined by l = min{i, m, B − i} consecutive states below i, which leads to a backward iteration. If D − m > m, the probability mass function (pmf) of the first-node delay is composed of three sections, [1, m], [m + 1, D − m + 1], and [D − m + 2, D + 1]. If D < 2m, the pmf is simpler. Since a smaller D causes a higher dropping probability, a general condition for the delay constraint is D > 2m, which ensures that every packet has at least one transmission opportunity. Because both m and D are positive integers, the smallest possible value of D is 2m + 1. Note that as D → ∞, [XH05] has shown that the output of the first node is a correlated on-off process. This correlation persists even for D < ∞. The subsequent relay nodes are therefore fed with bursty and correlated traffic, which makes the resulting network difficult to analyze. Hence, for D < ∞, the network performance is investigated through simulation results. The e2e delay is the sum of all local delays. The e2e delay mean µ and the e2e dropping probability pL can be upper bounded as follows:

    µ = Σ_{i=1}^{N} d̄i ≤ N µ1,        (6)

where d̄i is the mean local delay at node i and µ1 = d̄1, and

    pL = 1 − Π_{i=1}^{N} (1 − pL^(i)) ≤ 1 − (1 − pL^(1))^N.        (7)

The tightness of these upper bounds depends on pL^(1). The fewer packets are dropped, the closer pL^(i) is to pL^(1), and the tighter the bounds.
3.2 Slotted ALOHA
In slotted ALOHA, each node independently transmits with probability pm. Note that pm represents the node's transmission opportunity: the node actually transmits only if it is given a transmission opportunity and it has packets to transmit, which depends on its buffer occupancy. The traffic and the channel model are the same as in TDMA. Again, we start with the first node, which is observed at the time slot level. Given the success probability ps, a packet departs the node if and only if the node is scheduled to transmit and the transmission is successful, which happens with probability a = ps pm. Otherwise, the packet is retained in the buffer or discarded. The service time is geometric with parameter a. The system state is the waiting time of the HOL packet. The state transition probabilities are

    Pij = { 1 − a,  i ∈ [0, D), j = i + 1,
          { a,      i ∈ [0, D), j = i − r + 1,                                (8)
          { 1,      i < 0, j = i + 1, or i = D, j = D − r + 1.

At slot t, assume that the HOL packet is packet k with delay wk(t). If wk(t) < D and the packet successfully departs the node (with probability a), the next packet becomes the HOL packet at slot t + 1 with delay wk+1(t + 1) = wk+1(t) + 1 = wk(t) − r + 1. Otherwise, if the packet fails to depart the node (with probability 1 − a), it remains the HOL packet and its delay increases by one. If wk(t) = D, then whether or not the packet is transmitted successfully, it has to be removed from the buffer since its delay would exceed the bound D at slot t + 1. In this case, the system state transits from D to D − r + 1 with probability 1. The negative states indicate an empty buffer, with the system waiting for the next packet arrival. The delay distribution {di} is calculated from {πi} and (3) as in TDMA. Since packet dropping can only occur at the time slot boundaries, the packet dropping probability is
    pL^(1) = (1 − a) πD / λ = r (1 − ps pm) πD.        (9)
Rewriting the balance equations, we obtain

    πi = { a Σ_{j=k}^{i+r−1} πj,   0 ≤ i ≤ D − r + 1,
         { (1 − a) πi−1,           D − r + 1 < i ≤ D.

Analogously to the TDMA case, we require D > 2(r − 1), so that the pmf contains all three sections. Note that the node transmits with probability pm only when its buffer is nonempty. In other words, the effective transmit probability is pt = PB pm, where
PB is the node busy probability (note that PB is essentially a function of pm). In previous work, the traffic load is so heavy that PB = 1; the effective transmit probability pt is then identical to the transmit probability pm, so the performance of ALOHA networks can be optimized by manipulating pm. However, if the traffic load is light and PB < 1, which is highly likely in ALOHA networks, simply optimizing pm does not necessarily optimize the network performance. In fact, from queueing theory, the busy probability of node i is PBi = λi/(ps pm), where λi is the arrival rate to node i. As the delay bound D goes to infinity, it is easy to show that λi = 1/r for all i and thus PB = 1/(r ps pm). The effective transmit probability pt = 1/(r ps) then depends only on the traffic rate 1/r and the channel reception probability ps, and is completely independent of pm. Since the network performance is essentially determined by the effective transmit probability pt, this observation implies that the ALOHA parameter pm does not contribute to changing the network performance. In the DB-ALOHA network, due to packet dropping, the arrival rate λi to node i is a function of the loss rate: λi = λi−1 (1 − pL^(i)). The packet loss rate pL^(i) depends on the delay bound D, the service rate a = ps pm, and the arrival rate λi−1. Strictly speaking, pt is therefore not completely independent of pm. However, in most cases the loss rate is required to be small; then PBi ≈ 1/(r ps pm), and it is reasonable to regard pt as independent of pm. From the analysis in [XH05] we see that the longer delay of ALOHA is mostly caused by the longer access delay, which is proportional to 1/pm. When the traffic load is light, a large pm leads to a small access delay and significantly improves the e2e delay. In this sense, to optimize the delay performance of ALOHA, pm should be chosen based on the traffic load, which has been ignored in previous work because of the heavy traffic assumption. For example, in [LG04], the throughput is maximized by pm = 1/N without taking into account that a node does not need to contend for transmission opportunities when its buffer is empty.
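The self-regulating property is easy to check numerically. The following Monte Carlo sketch (our own illustration with assumed parameter values; D = ∞, i.e., no dropping) measures the effective transmit probability of the first node for several values of pm and compares it with 1/(r ps):

```python
import random

def effective_transmit_prob(pm, ps=0.8, r=4, slots=400_000, seed=1):
    """First ALOHA node with D = infinity: count how often the node actually transmits."""
    random.seed(seed)
    queue, transmissions = 0, 0
    for t in range(slots):
        if t % r == 0:                            # CBR arrival every r slots
            queue += 1
        if queue > 0 and random.random() < pm:    # nonempty buffer and a transmission opportunity
            transmissions += 1
            if random.random() < ps:              # successful reception: the packet departs
                queue -= 1
    return transmissions / slots

for pm in (0.4, 0.6, 0.9):                        # all keep the queue stable (ps * pm > 1/r)
    print("pm =", pm, " measured pt =", round(effective_transmit_prob(pm), 3),
          " predicted 1/(r*ps) =", round(1 / (4 * 0.8), 4))
```

For every stable choice of pm the measured pt stays close to 1/(r ps) ≈ 0.31, illustrating that pm mainly shapes the access delay rather than the long-run transmission rate.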
4 Simulation Results
Extensive simulation results are provided to illustrate the performance of the DB-TDMA and DB-ALOHA networks. First, the fully reliable (non-dropping) TDMA network is compared with the DB-TDMA network in Fig. 3. We set m = 3 and ps = 0.8 for both networks. For the DB network, we additionally set the interarrival time r = 4 and D = 4, which results in an e2e dropping probability pL ≈ 0.20. This means that 20% of the packets are discarded and the throughput is (1 − 0.2)/r = 1/5. In the non-dropping network, all generated packets are eventually delivered to the sink and the throughput is exactly the traffic rate 1/r. Accordingly, for the non-dropping network, we
set the interarrival time r = 5 so that both networks are compared at identical throughput.
(a) Delay means    (b) Delay variances
Fig. 3. Comparison of the non-dropping and DB-TDMA network with m = 3, N = 9, ps = 0.8
With the BD strategy, the e2e delay decreases. The improvement is particularly visible in the delay variance, which is reduced by 60%. On the one hand, simply reducing the traffic load at the first node does not improve the network performance significantly. On the other hand, although introducing redundant packets increases the traffic load, it enhances the delay performance; in addition, the lost packets can be compensated for by the redundant packets, which ensures reliability. In this sense, the BD strategy is very helpful for achieving a good balance between latency and reliability.
4.1 m-Phase TDMA Network
Throughout this section, the m-phase TDMA network is assumed to have m = 3, r = 4, N = 8. Compared to the pmf of the non-dropping TDMA network, the pmf of the cumulated delay from node 0 to node i (1 ≤ i ≤ N) is truncated according to B/D (Fig. 4). For D > 2m, the pmf of the cumulated delay is scaled. In Fig. 5, the e2e delay mean and variance are shown for ps = 0.8, D = 10. The delay mean increases approximately linearly with the number of nodes. In comparison with the non-dropping TDMA network (D = ∞), the delay mean is reduced by 40% and the delay variance by 75%. The resulting e2e packet loss rate pL = 0.0414 (as listed in Table 1) is acceptable when the aforementioned packet-level coding scheme is applied to combat the network unreliability. The dropping probability is traded off against the delay. A smaller D results in a smaller delay and a higher local dropping probability pL^(i) at node i, as shown in Fig. 6(a). As D increases, pL^(i) decreases more slowly.
(a) D = 20    (b) D = 10    (c) D = 6
Fig. 4. pmf of the cumulated delay in the TDMA network with m = 3, r = 4, ps = 0.8, N = 8

Table 1. E2e dropping probabilities of the TDMA network with m = 3, r = 4, N = 8

ps \ D    1        5        10       15       20
0.7       0.9615   0.3742   0.2183   0.1678   0.1431
0.75      0.9331   0.2621   0.1229   0.0803   0.0589
0.8       0.8880   0.1550   0.0414   0.0142   0.0058
(a) Delay means    (b) Delay variances    (c) Dropping probability
Fig. 5. Performance comparison of the system with m = 3, r = 4, N = 8, ps = 0.8, D = 10
This implies that the major packet loss occurs at the first few nodes of the chain. This property is desirable since the downstream nodes do not need to spend energy transmitting packets that will eventually be discarded. For large D, the per-node dropping probability asymptotically converges to zero. Fig. 7 demonstrates the effect of D. Both the delay mean and the delay variance are nearly linear in D, particularly when D is small. Unsurprisingly, the e2e dropping probability pL asymptotically decreases with D. For large D, say D > 10, the decrease of pL becomes very slow. Thus, simply increasing D does not help to improve reliability, but does harm the delay performance. There may exist an optimal D that achieves the best balance between the delay and the packet loss.
(a) TDMA    (b) ALOHA
Fig. 6. Packet dropping performance for the system with m = 3, r = 4, N = 9, ps = 0.8
Unlike in fully reliable networks [XH05], the delay performance is not severely deteriorated by a drop in ps (Fig. 8). Moreover, as long as D is sufficiently large, even if the traditional stability condition does not hold, the resulting e2e dropping probability is moderate enough that both data latency and reliability are guaranteed. For instance, in the critical case ps = m/r = 0.75 and D ≥ 20, the packet loss pL ≤ 0.05 is negligible. For a smaller D such as D = 10, where pL ≤ 0.13, it is not difficult to introduce redundant packets for reliability.
(a) Delay means    (b) Delay variances    (c) pL
Fig. 7. The impact of D in the m-phase TDMA network with m = 3, r = 4, N = 8, ps = 0.8
4.2 Slotted ALOHA
This section discusses the performance of DB-ALOHA. For comparison with TDMA, we set pm = 1/m and m = 3, r = 4, N = 8. Unlike in the TDMA system, the pmf tail is both truncated and twisted by the BD scheme (Fig. 9), but the delay central moments and the dropping probability behave similarly to TDMA. Specifically, the delay mean and variance
(a) Delay means    (b) Delay variances    (c) pL^(i)
Fig. 8. The impact of ps in the m-phase TDMA network with m = 3, r = 4, N = 9, D = 10
increase linearly with the number of nodes (Fig. 5(a) and Fig. 5(b)), the per-node dropping probability pL^(i) diminishes with the node index i, and the first node experiences the maximum packet loss (Fig. 5(c) and Fig. 6(b)). The e2e dropping probability pL is listed in Table 2.
(a) D = 50    (b) D = 10    (c) D = 3
Fig. 9. pmf of the cumulated delay in the ALOHA network with m = 3, r = 4, ps = 0.8, N = 8

Table 2. E2e dropping probabilities of the ALOHA network

ps \ D    1        10       20       30       40       50       100
0.70      0.9999   0.4243   0.2505   0.1911   0.1599   0.1423   0.1048
0.75      0.9999   0.3481   0.1784   0.1164   0.0863   0.0692   0.0341
0.80      0.9998   0.2731   0.1020   0.0477   0.0238   0.0127   0.0000
As in the TDMA network, both the delay mean and variance are approximately linear in D (Fig. 10(a) and Fig. 10(b)). The dropping probability pL (Fig. 10(c)) decreases sharply for small D. However, for D sufficiently
large, say D ≥ 30, the decrease slows down and pL eventually converges to zero. Evidently, a larger D than in TDMA is needed for pL to approach zero. The impact of ps is displayed in Fig. 11. Again, a drop in ps causes only a small difference in the delay mean and variance, but results in an increase of the per-node packet dropping rate. Moreover, the per-node dropping probability asymptotically converges to zero.
(a) Delay means    (b) Delay variances    (c) pL
Fig. 10. The impact of D in the ALOHA network with m = 3, r = 4, N = 8, ps = 0.8
(a) Delay means    (b) Delay variances    (c) pL^(i)
Fig. 11. The impact of ps in the ALOHA network with m = 3, r = 4, N = 8, D = 10
4.3 Comparison
In the fully reliable network, TDMA substantially outperforms ALOHA in terms of delay (Fig. 2). In the DB network, the performance gap between TDMA and ALOHA becomes fairly small. As shown in Fig. 5, the difference in the delay mean between the TDMA and ALOHA networks decreases from 300% (reliable) to 20% (DB), while the difference in the delay variance shrinks from 750% (reliable) to 10% (DB). The main performance degradation caused by the random access is the dropping probability. For ALOHA, the local dropping probability pL^(i) at node i is almost five times that of TDMA. Moreover, pL^(i) of ALOHA converges to zero more slowly
than that of TDMA. However, if packet-level coding is applied to introduce redundant packets that compensate for the dropped packets, ALOHA is a feasible MAC scheme that achieves good delay performance. As a tradeoff, when the gap in pL is reduced, the gap in the delay moments increases. The pair (B, pL) can be used in the controller design to optimize the NCS performance, as shown in [Nil98a, LG04]. With a nonzero packet loss pL and a maximum packet delay B, the control system can be modeled as a Markovian Jump Linear System (MJLS). Optimizing the MJLS then optimizes the NCS operating over the given MAC scheme.
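As a hedged illustration of this last step (the specific matrices, the jump rule, and the zero-input policy below are our own assumptions, not taken from [Nil98a, LG04]), a minimal jump-linear closed loop could be written as

    x(k + 1) = A_{θ(k)} x(k),   θ(k) ∈ {0, 1},

with A_1 = A + BK when the control packet arrives within the bound B, and A_0 = A when the packet is lost (probability pL) and the actuator applies zero input. Mean-square stability of such an MJLS can then be checked with coupled Lyapunov inequalities of the form A_i^T ((1 − pL) P_1 + pL P_0) A_i − P_i ≺ 0 for some P_0, P_1 ≻ 0, i = 0, 1.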
5 Conclusions
This paper aims to provide more accurate measures for optimizing NCS networks with MAC schemes. Previous work usually modeled the wireless channel as a constant-delay link, which is not practical. We derive the e2e delay and packet loss probability of DB multihop wireless networks for two MAC schemes, m-phase TDMA and probabilistic slotted ALOHA. These parameters can be used by the controller to evaluate the performance. The e2e delay mean and variance are approximately linear in the number of nodes. The local dropping probabilities pL^(i) asymptotically converge to zero. A moderate delay bound B is sufficient to guarantee a small packet loss and thus achieve a good balance between reliability and latency. Compared to fully reliable networks, the e2e delay of DB networks is less sensitive to the channel reception probability ps. This improvement is desirable since the network performance should not fluctuate rapidly with ps, which basically cannot be controlled. Moreover, with the BD strategy, the delay performance gap between TDMA and ALOHA is reduced. Due to its implementation complexity and overhead, TDMA is less favored than ALOHA, but ALOHA has poor delay performance; with the reduced performance gap, ALOHA becomes more practical. We also show that ALOHA possesses a self-regulating property. For light traffic, unlike previous work [LG04], we find that a large transmit probability pm does not degrade the network performance as it does under heavy traffic. Previously, pm = 1/N (where N is the number of nodes) was thought to optimize the throughput; as the network grows, this pm becomes very small and the resulting delay long. From the delay perspective, a large pm is preferred, particularly when it does not decrease the throughput substantially. This paper considers FCFS node scheduling for analytical tractability. Other scheduling algorithms that favor newly arriving packets, such as LCFS and priority scheduling, may be more desirable for NCS applications. In future work we will investigate how NCSs perform when the BD strategy is combined with other scheduling algorithms.
Acknowledgment
The authors gratefully acknowledge the support of the Center for Applied Mathematics (CAM) Fellowship of the University of Notre Dame and the partial support of the NSF (grant ECS03-29766).
A Cross-Layer Approach to Energy Balancing in Wireless Sensor Networks Daniele Puccinelli, Emmanuel Sifakis, and Martin Haenggi Department of Electrical Engineering University of Notre Dame Notre Dame, IN {dpuccine,esifakis,mhaenggi}@nd.edu Summary. We consider a many-to-one real-time sensor network where sensing nodes are to deliver their measurements to a base station under a time constraint and with the overall target of minimizing the energy consumption at the sensing nodes. In wireless sensor networks, the unreliability of the links and the limitations of all resources bring considerable complications to routing. Even in the presence of static nodes, the channel conditions vary because of multipath fading effects due to the motion of people or objects in the environment, which modify the patterns of radio wave reflections. Also, sensing nodes are typically battery-powered, and ongoing maintenance may not be possible: the progressive reduction of the available energy needs to be factored in. The quality of the links and the remaining energy in the nodes are the primary factors that shape the network graph; link quality may be measured directly by most radios, whereas residual energy is related to the node battery voltage, which may be measured and fed into the microcontroller. These quantities may be used to form a cost function for the selection of the most efficient route. Moreover, the presence of a time constraint requires the network to favor routes over a short number of hops (a.k.a. the long-hop approach, in the sense that a small number of long hops is used) in order to minimize delay. Hop number information may be incorporated into the cost function to bias route selection toward minimum-delay routes. Thus, a cross-layer cost function is obtained, which includes raw hardware information (remaining energy), physical layer data (channel quality), and a routing layer metric (number of hops). A route selection scheme based on these principles intrinsically performs node energy control for the extension of the lifetime of the individual nodes and for the achievement of energy balancing in the network; intuitively, the long-hop approach permits the time-sharing of the critical area among more nodes. A novel, practical algorithm based on these principles is proposed with the constraints of the currently available hardware platforms in mind. Its benefits are investigated with the help of computer simulation and are illustrated with an actual hardware implementation using Berkeley motes.
1 Introduction
A large portion of wireless sensor network research over the past few years has concentrated on routing [AKK04]. In this extensive body of work, the most common metric used for the assessment of routing quality is the number of hops in a route [JM96]. However, nodes in wireless sensor networks are typically static, and link quality has to be taken into account to avoid routing over lossy links. In the literature on ad hoc networks, various metrics are suggested in order to take packet loss into account. An interesting example is ETX (Expected Transmission Count) [CABM03], which uses per-link measurements of packet loss ratios in both directions of each wireless link to find the high-throughput paths which require the smallest number of transmissions to get a packet to its destination. Since sensor nodes have heavy energy constraints, the battery capacity of individual nodes should also be taken into account. Battery-aware routing algorithms [SWR98] typically try to extend the lifetime of the network by distributing the transmission paths among nodes with greater remaining battery resources. The key observation is that minimum-energy routes can often unfairly penalize a subset of the nodes. Most efforts in this direction are targeted toward ad hoc networks, and are often not portable to wireless sensor networks, which are the focus of the present work. The distinguishing features of wireless sensor networks make routing much more challenging. Wireless sensor networks often need to self-organize and operate in an untethered fashion. Sensor nodes are usually static, but location information is often unavailable, and geographic routing is thus not an option. Typically, constraints in terms of energy, processing, and storage capacities are extremely tight: resources need to be carefully managed, and lifetime extension of the sensing nodes is a major concern. The adoption of a many-to-one traffic scheme is the primary cause of the energy imbalance that leads to the premature discontinuation of node activity. If the base station is not located within the reach of a source, a multihop scheme needs to be adopted, and other nodes are used as relays to guarantee a connection from that source to the base station. However, if many sources are in that situation, the nodes directly within the reach of the base station are located on most forwarding paths, have an increased workload, and their lifetime is likely to be shortened. In the special case of one-to-one traffic, schemes that find and rely on optimal routing paths also cause energy imbalance, as they unevenly distribute the workload across the network and shorten the lifetime of the nodes along the optimal path. In [SR02], the occasional use of suboptimal routes is suggested as a countermeasure; this solution is shown to yield a significant lifetime improvement with respect to methods based on optimal routes alone. An efficient attempt to balance the energy distribution in the network is necessarily reliant upon some form of monitoring of the quality of the links which allows an assessment of connectivity. The remaining battery power is also of interest, since we aim at a reasonably uniform energy balance.
Our main goal in this paper is to investigate the benefits of a joint metric which considers hop count, link quality, and remaining battery power. We look at the routing problem in wireless sensor networks from a cross-layer perspective, by introducing an algorithm that builds a routing tree by means of a heuristic metric encompassing physical layer information, such as link strength and battery power, as well as routing layer information (number of hops). The scenario is as follows: in a wireless sensor network comprising N severely energy-constrained (battery-operated) lower-end sensing nodes, we intend to use fairly efficient routes for node lifetime extension; hence, we propose a strategy with energy-balancing guarantees. We assess its validity and benefits by means of simulation, and describe a hardware implementation with the most recent generation of Berkeley motes, MICAz. The remainder of the paper is organized as follows. Section 2 describes our algorithm and illustrates it with examples obtained with a custom network simulator. Section 3 presents an evaluation of the performance of our scheme obtained by means of extensive simulation, and Section 4 details the hardware implementation. Section 5 includes some closing remarks.
2 A Routing Scheme with Energy-Balancing Guarantees
From a dynamic programming perspective, the problem of routing from a source s to a destination node d corresponds to defining a policy at a generic node ri for the choice of a relay ri+1 in the direction of d given the list of nodes s, r0, r1, ..., ri−1. Routing over many short hops (e.g., nearest-neighbor routing) enjoys a lot of support. From an energy-consumption viewpoint, it is claimed that dividing a hop over a distance d into n short hops yields an energy benefit of n^(α−1), where α is the path loss exponent [Rap01]. Unfortunately, this claim comes from the use of oversimplified link models such as the disk model [Gil61]: a transmission can either fail or succeed depending on whether the distance is larger or smaller than the so-called transmission radius. This model totally ignores fading and is therefore inaccurate for a realistic modeling of the wireless medium [Rap01]. It is shown in [Hae04] that for a block fading channel, multihop routing does not offer any energy benefits for α = 2. Routing over fewer (but longer) hops indeed carries many advantages [HP]; from our point of view, the most interesting are the reduction of energy consumption at the sensing nodes, the achievement of a better energy balance, the more aggressive exploitation of sleep modes, and the lack of route maintenance overhead. If the sensing nodes only occasionally need to act as relays, they can sleep longer and only consume energy to make their own data available. With these ideas in mind, we propose a lightweight multihop routing scheme with energy-balancing guarantees. Our scheme is an example of flat routing, in the sense that all nodes are assigned similar roles (hierarchies are not defined). It can be considered dynamically proactive, since all routes are computed prior to being used and periodically updated.
In a network of N nodes (N − 1 sensing nodes and a base station), the generation of the routing tree is performed as follows:
• Every sensing node sends a test packet to the base station; due to the unreliability of wireless links, the base station is only able to receive a fraction of such test packets.
• The base station uses the received test packets to measure the quality of the links to the sensing nodes, and feeds this information back to them.
• The sensing nodes that receive a reply from the base station broadcast a route setup packet to advertise their nearest-neighbor status as well as the quality of their link to the base and their battery voltage. Route setup packets may be seen as pointers toward the direction of the base station.
• Other sensing nodes that receive these packets generate and broadcast similar packets to advertise that they are two hops away from the base station.
• The information travels upstream from the base station into the network until all nodes know their depth and the tree is fully defined.
The route setup is targeted at downstream communication, and its effectiveness is reliant upon link symmetry. In environments where asymmetric links are abundant, link quality estimation from reverse link quality information often does not work and handshakes between nodes are necessary.
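As a rough illustration of these tree-formation steps (this sketch is ours; the flooding order and data structures are assumptions, and link-quality filtering is omitted), the depth propagation can be emulated as follows:

```python
from collections import deque

# Hypothetical connectivity (it happens to match the 5-node example of Fig. 1 further below);
# neighbors[i] lists the nodes whose broadcasts node i can hear.
neighbors = {5: [3, 4], 4: [5, 2], 3: [5, 2], 2: [4, 3, 1], 1: [2]}
BASE = 5

depth = {BASE: 0}                 # the base station's setup packet carries depth 0
frontier = deque([BASE])
while frontier:                   # each node that hears a setup packet records depth+1 and rebroadcasts once
    sender = frontier.popleft()
    for nbr in neighbors[sender]:
        if nbr not in depth:
            depth[nbr] = depth[sender] + 1
            frontier.append(nbr)

print(depth)   # {5: 0, 3: 1, 4: 1, 2: 2, 1: 3}
```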
Table 1. List of symbols.

Symbol   Meaning
Ai       Address of node i.
li→k     Quality of the link between node i and node k.
Mi,k     Metric of the route found at node i going through k.
Li       min(li→j, Lj), where j is such that Mi,j = max_{0≤k≤N−1} Mi,k, i ≠ k. LBS = 1.
Di       Depth of node i.
vi       Battery voltage of node i.
Vi       min(vi, Vk), where k is such that Mi,k = max_{0≤j≤N−1} Mi,j, i ≠ j. VBS = 1.
l[i, k]  min(li→k, Lk) (k-th entry in the table internal to node i).
We integrate a route selection scheme into the tree formation procedure by adding the following operations:
• Each sensing node maintains a table indicating the nodes it can reach, the quality of the links to such nodes, their battery voltage, and their depth. All this information can be inferred from the route setup packets mentioned above. In particular, the depth can be inferred by using a counter initialized to 0 in the setup packets sent by the base station. The nearest neighbors of the base station will thus set it to 1, their nearest neighbors will increment it to 2, and so forth.
Table 2. Setup packet sent by the base station.

Abs   Lbs   Vbs   Dbs
bs    1     1     0
The setup packet from the base station has the default structure shown in Table 2. All symbols are explained in Table 1.

Table 3. Structure of a setup packet sent by node k after choosing to route packets over node j.

Ak   Lk                 Vk              Dk
k    min(lk→j, Lj)      min(vk, Vj)     Dj + 1
Table 4. Structure of the internal table for node i, which is able to communicate with nodes k and j.

A    L                 V     D        M
k    min(li→k, Lk)     Vk    Dk + 1   Mi,k
j    min(li→j, Lj)     Vj    Dj + 1   Mi,j
• Sensing node i at depth Di receives a setup packet from node k at depth Dk; the structure of setup packets is shown in Table 3. In the remainder of the paper, we assume the quantities L and V to be normalized so as to lie within the interval [0, 1]. If Dk ≥ Di, the setup packet from node k is discarded (it does not point toward the base station). If Dk < Di, on the other hand, the k-th entry in the table internal to node i is processed as displayed in Table 4. It should be observed that the base station does not need to maintain an internal table. The rationale behind these operations is to keep track of possible routes to the base station and be able to order them on the basis of a joint metric favoring good links, relays with abundant energy resources, and a low number of hops. By keeping track of the minimum link quality and the lowest voltage in the route, bottlenecks may be identified.
• In node i, a metric is computed for each entry in the internal table. For instance, a possible metric for the k-th entry is given by

    Mi,k = [ min(li→k, Lk) + Vk + 1/(Dk + 1) ] / 3,        (1)

where k is a downstream nearest neighbor of i. This metric favors energy balancing: the number of hops from node i to the base station through
node k is given by Dk + 1, and using its inverse in the metric results in a larger metric for routes over a small number of hops. Node i decides to route over the node whose entry in the table has the largest metric. We will use this metric in the numerical examples. This procedure is to be repeated periodically for dynamic route maintenance and update (in case of topology changes, e.g., due to mobility or to the death of one or more nodes). The effectiveness of this approach is reliant upon the existence of a solid MAC scheme minimizing in-network interference.
Fig. 1. An example of a sensor network with 5 nodes. Node 5 represents the base station. The numbers near the links indicate their quality, whereas the numbers below the nodes represent the battery voltage.
We will now show a simple example of a network with N = 5 nodes (shown in Figure 1) in order to clarify how the algorithm works.
• Nodes 1 through 4 send a test packet to node 5. Due to the features of the wireless channel, node 5 can only communicate with nodes 3 and 4: the test packets sent by the other nodes are not received by node 5.
• Node 5 sends a setup packet to 3 and 4; this packet is shown in Table 5.
Table 5. Setup packet sent by node 5.

A5   L5   V5   D5
5    1    1    0
• Node 4 fills the row pertaining to node 5 in its internal table as follows:

    l[4, 5] = min(l4→5, L5) = 0.9        (2)
    V5 = 1        (3)
    D5 + 1 = 1        (4)
    M4,5 = (0.9 + 1 + 1)/3 = 0.97.        (5)

Table 6. Setup packet sent by node 4.

A4   L4              V4                   D4
4    l[4, 5] = 0.9   min(v4, V5) = 0.6    1

On the basis of the internal table, the packet shown in Table 6 is sent. Since there is only one possible route (node 5 is the only entry in the internal table), L4 = l[4, 5].
(6)
V5 = 1
(7)
D5 + 1 = 1
(8)
0.4 + 1 + 1 = 0.8. (9) 3 On the basis of the table, the packet shown in table 7 is sent. Since there is only one possible route (node 5 is the only entry in the internal table), L3 = l[3, 5]. • Only nodes 2 and 5 receive the test packets from 3 and 4. Node 5 does not process them, as they come from higher-depth nodes. M4,5 =
316
D. Puccinelli, E. Sifakis, and M. Haenggi Table 7. Setup packet sent by node 3. A 3 L3 3
V3
D3
l[3, 5]=0.4 min(v3 , V5 ) = 0.2 1
• Node 2 fills the rows pertaining to nodes 3 and 4 on the basis of the received setup packets. The row for node 3 is filled as follows:

    l[2, 3] = min(l2→3, L3) = 0.4        (10)
    V3 = 0.2        (11)
    D3 + 1 = 2        (12)
    M2,3 = (0.4 + 0.2 + 0.5)/3 = 0.37.        (13)

The row for node 4 is filled as follows:

    l[2, 4] = min(l2→4, L4) = 0.1        (14)
    V4 = 0.6        (15)
    D4 + 1 = 2        (16)
    M2,4 = (0.1 + 0.6 + 0.5)/3 = 0.4        (17)

Table 8. Setup packet sent by node 2.

A2   L2              V2                   D2
2    l[2, 4] = 0.1   min(v2, V4) = 0.6    2
The route with the highest metric, namely the one related to entry 4, is selected, L2 = l[2, 4], and the setup packet shown in Table 8 is sent out.
• Only nodes 1, 3, and 4 receive the setup packet from 2. Nodes 3 and 4 do not process it, as it comes from a higher-depth node.
• Node 1 fills the row pertaining to node 2 in its internal table as follows:

    l[1, 2] = min(l1→2, L2) = 0.1        (18)
    V2 = 0.6        (19)

since node 2 has chosen route (2, 4, 5), and node 4 (v4 = 0.6) is the bottleneck of route (1, 2, 4, 5) in terms of battery voltage.

    D2 + 1 = 3        (20)
    M1,2 = (0.1 + 0.6 + 0.3)/3 = 0.33.        (21)

• Node 1 is thus able to reconstruct the route to node 5: the first step is node 2, the only entry in the table internal to node 1. Node 2 knows that packets need to be routed over node 4, whose entry has the best metric. Node 4 is a nearest neighbor of the base station, so the chosen route is (1, 2, 4, 5).
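To make the selection rule concrete, the short sketch below (ours; the variable names are not from the paper) recomputes node 2's decision from the example above using metric (1):

```python
# Internal-table entries of node 2 from the example: bottleneck link quality L,
# bottleneck voltage V, and depth D advertised by each downstream neighbor.
table = {
    4: {"L": 0.9, "V": 0.6, "D": 1},
    3: {"L": 0.4, "V": 0.2, "D": 1},
}
link = {4: 0.1, 3: 0.4}        # measured link quality l_{2->k} toward each neighbor

def metric(l_ik, entry):
    """Joint metric (1): bottleneck link quality, bottleneck voltage, inverse hop count."""
    return (min(l_ik, entry["L"]) + entry["V"] + 1.0 / (entry["D"] + 1)) / 3.0

scores = {k: metric(link[k], e) for k, e in table.items()}
next_hop = max(scores, key=scores.get)
print(scores)                  # {4: 0.4, 3: 0.366...}
print("route over node", next_hop)
```

The route over node 4 wins (0.4 > 0.37), matching equations (13) and (17).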
3 Performance Analysis
We have used a custom network simulator to assess the validity of our routing scheme in a scenario with N = 100 nodes uniformly randomly deployed in a 30m × 30m environment. The source and destination are placed at the opposite ends of one of the diagonals of the square deployment area. We will show that the use of our proposed joint metric can lead to a significant gain in terms of energy balancing with respect to approaches that only consider link quality. A one-to-one scenario with one node trying to communicate with the base station is simulated. The simulator evaluates link quality on the basis of a multipath channel model with additive Gaussian noise. Our simulator initially finds a total of 72 routes: 8 over 3 hops, and 64 over 4 hops. Routes with more hops are discarded. Figure 2 shows the performance of a routing scheme using bottleneck link quality as the only metric for route selection. The route with the best bottleneck link quality is chosen, which does not at all guarantee energy balancing: each chosen route is used until its relaying nodes are fully depleted. The use of a joint metric between node k and node 0 along the route (k, k − 1, ..., 1, 0), obtained as

    Mk = [ min(Lk→k−1, ..., L1→0) + min(Vk−1, ..., V0) + 1/Dk ] / 3,        (22)
dramatically improves energy balancing: Figure 3 shows that the death of the first node is delayed from step 12 to step 118. The impact of this result can be better appreciated by considering that a wireless sensor network typically
Fig. 2. The number of 3-hop and 4-hop routes steadily decreases as the batteries of the nodes are depleted: a routing scheme based on bottleneck link quality does not ensure a proper energy balancing.
follows a many-to-one traffic pattern: keeping as many nodes alive for as long as possible is extremely important for the sensing capabilities of the network. Other metrics are also possible. A stronger emphasis may be placed on either energy balancing or link quality by using a weighted average of the type

    Mk = α min(Lk→k−1, ..., L1→0) + β min(Vk−1, ..., V0) + (1 − α − β) (1/Dk).        (23)
Figure 4 shows the performance of our scheme with a weight assignment of the form α=1/6 and β=1/2, which places an even stronger emphasis on energy balancing, and further delays the death of the first node (from step 118 to step 131). With both this weighted metric and the unweighted bottleneck metric, a gain of an order of magnitude can be achieved with respect to the scheme based on bottleneck link quality. These gains are of course upper bounds, as in our simulation we assume that the energy consumption is negligible if the nodes are not actively relaying packets. These upper bounds can be approached with an aggressive use of sleep modes and low-power listening techniques which reduce the receive energy, which is normally comparable to the transmit energy. We have seen the advantages of a joint metric, but the analysis above is based on the idea of using bottleneck quantities. Figure 5 shows that in-
Fig. 3. Our joint metric ensures better energy balancing: routes start to become unusable after 118 steps.
corporating average route link quality and average route battery voltage (as opposed to minima) into the selection metric fails to provide energy balancing and leads to disadvantages such as the premature loss of shorter routes. Bottlenecks should not be ignored: if a K-hop route includes K − 1 good hops and a bottleneck, it will have a good average but will also prematurely cease to exist if selected.
4 Hardware Implementation We implemented our energy-aware routing scheme in hardware using the low power MICAz platform. MICAz represents the latest generation of Berkeley motes, and is commercialized by Crossbow. It is built around an ATMega128L microprocessor, and features a CC2420 802.15.4-compliant ZigBee-ready radio. IEEE 802.15.4 is a standard for low-rate, wireless personal area networks which provides specification for the physical and the MAC layer. At the physical layer, it defines a low-power spread spectrum radio operating at 2.4GHz with a bit rate of 250kb per second. ZigBee is a collection of high level communication protocols built on top of the IEEE 802.15.4 MAC layer. We implemented the algorithm with the operating system TinyOS [HSW+ 00]; our code has a reasonably small footprint (about 10KB of ROM occupancy, and less than 500B of RAM occupancy).
Fig. 4. The use of weights within the metric can lead to even better energy balancing results.
We placed 10 MICAz motes in a lab environment in the arrangement shown in Figure 6. This network is to deliver a continuous stream of packets at a rate of 46 packets/min from node 1 to node 10 over a duration of 105 hours and 20 minutes (the time that was needed for the radio of the source node to become unusable due to the overly low voltage). Although the nodes themselves are static, people moving in the lab cause a certain amount of fading. Fresh batteries were used, the transmit power is -15dBm for both control and application traffic, the MAC scheme is standard CSMA/CA, and no acknowledgments are used to make links reliable. In this implementation, unused nodes sleep 66% of the time, and low power listening modes are not used. The lifetime gain in this experiment can therefore be expected to be fairly far from the upper bounds, but can still provide us with a valid proof of concept. In the 105h experiment, 290720 packets were transmitted and 256313 were successfully received at the BS; this corresponds to a packet loss of about 12%. Given that there were no retransmissions at all and the transmit power was relatively small, this loss is quite acceptable. Among all the routes found, 9% were single-hop, 89% were two-hop, and 1.8% three-hop. The mean path loss over a distance of about 8m and an obstructed line-of-sight path prevents packets from being received at a transmit power of -15dBm. So the fact that single-hop routes exist indicates that the algorithm exploits positive fading states, i.e., is opportunistic, thereby allowing all relay nodes to sleep for some
Fig. 5. The use of average quantities (as opposed to bottleneck quantities) leads to the premature loss of 3-hop routes.
(Node layout in a 7 m × 4 m indoor area; node 1 is the source and node 10 the base station. All nodes are elevated by 1 m, except node 1, which is on the floor, and node 8, which is at 1.6 m.)
Fig. 6. Setup of the 10-node experiment in an indoor environment.
time. More precisely, one-hop routes dominate between hours 58 and 64; in this time lapse, the room was empty, and the particular arrangement of the furniture created a pattern of reflections that placed the destination in a good fading spot with respect to the source, thereby allowing one-hop communication. As soon as people entered the room again, the particular arrangement that had created this situation was modified, and the source was no longer able to exploit static fading to reach the destination directly.
Fig. 7. Battery discharge curves for the 10 nodes.
Fig. 8. Cumulated number of packets relayed by each node.
With short-hop routing, the discharge curves of the nodes would all lie below the one for the source node in Figure 7, since all the relays would
Fig. 9. Number of control packets as the network size increases.
not only transmit each packet but also receive each packet, and the receive energy is substantial. The use of sleep modes is certainly not aggressive in the algorithm used; by increasing the length of the sleeping periods, the high gains shown by our simulations can be approached. This is easier in networks with a high number of nodes, as the chances of losing connectivity because of the excessive use of sleep modes are reduced. This shows that our scheme has very interesting scalability properties; our long-hop approach enhances such properties, in the sense that with as few long hops as possible the volume of routing traffic is minimized. The number of control packets for route discovery is related to the number of nodes by the relation N − 1 + 2n1 + Σ_{i=2}^{M} ni, where N is the number of nodes, M is the maximum depth in the routing tree, and ni is the number of nodes which can reach the base station in i hops. In fact: 1) N − 1 nodes send a test packet to the base station; 2) only n1 nodes can actually reach it, and the base station thus sends n1 feedback packets to such nodes; 3) these n1 nodes broadcast one route setup packet each; 4) so do their n2 nearest neighbors; 5) this goes on until the nM nearest neighbors at M hops from the base station are reached. Figure 9 shows that the number of control packets as predicted by our simulator for three different node densities scales approximately linearly with the number of nodes, which is again promising for scalability.
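For a quick sanity check of this count (the depth profile below is hypothetical), the formula can be evaluated directly:

```python
# Control packets for route discovery: N-1 test packets, n1 feedback packets,
# and one setup broadcast per node at depths 1..M.
def control_packets(n):            # n[i] = number of nodes at depth i+1, i.e. n = [n1, n2, ..., nM]
    N = sum(n) + 1                 # sensing nodes plus the base station
    return (N - 1) + 2 * n[0] + sum(n[1:])

print(control_packets([5, 20, 40]))   # hypothetical profile: 65 sensing nodes + BS -> 135 control packets
```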
5 Closing Remarks The main contribution of the present work is the introduction of a new approach to route selection in wireless sensor networks. With the constraints of real hardware in mind, we suggest the use of a joint metric as part of a cross-layer approach to achieve energy-balancing. Simulation results show a gain of up to one order of magnitude in node lifetime extension with respect to routing schemes based on link quality. A successful hardware implementation with MICAz motes is indicative of the lightweight nature of our scheme. Control applications would greatly benefit from a scheme that relies on a reduced number of hops because of the inherent delay benefit, and the dramatic extension of network lifetime provided by our routing strategy is extremely appealing.
Acknowledgements The support of NSF (grants ECS 03-29766 and CAREER CNS 04-47869) is gratefully appreciated.
Distributed Control over Failing Channels Cédric Langbort, Vijay Gupta, and Richard M. Murray Division of Engineering & Applied Sciences California Institute of Technology 1200 E. California Blvd., Pasadena CA 91125 {clangbort,gupta,murray}@ist.caltech.edu Summary. We give sufficient convex conditions for the existence of a distributed controller, with the same interconnection structure as the plant, when the latter is composed of heterogeneous subsystems that communicate with their neighbors over packet-dropping channels. Our linear matrix inequalities generalize previous results for the case of ideal communication links and, although conservative, are typically far more tractable for large-scale systems than recently obtained necessary and sufficient conditions.
1 Introduction Ever since the early works reported in [Bel74], the field of Distributed Control has mostly been concerned with the design of controllers with particular geometrical structures or information patterns for large-scale systems involving multiple cooperating and/or coupled agents. However, in cases where this coupling corresponds to actual communication as, e.g., in a team of vehicles broadcasting their positions to teammates over a wireless network, the constraints experienced by such systems go beyond the mere specification of “which subsystem can communicate with which” and communication limitations should also be taken into account. This latter point, i.e., the analysis of the role of such degradations as quantization, packet drops and random delays, has received relatively little attention in the structured control community, apart from notable exceptions like [RL04, Vou00], that included propagation delays in their theory. The field of Control over Networks, on the other hand, has focused on understanding the role of these limitations. In particular, researchers have considered control problems with non-ideal communication links between the sensor, actuator or controller and the plant. Beginning with the seminal work of Delchamps [Del90b], quantization issues have been studied, among others, by Wong & Brockett [WB99c], Nair & Evans [NE00a], Tatikonda & Mitter [TM04b] and so on. Effects of packet-drops on stability and performance have
Fig. 1. A schematic representation of the control architecture. Communication links are only present between the lighter subsystems (the controller’s) if they are between the darker ones (the plant’s). Both links fail according to the same random process.
also been addressed by, e.g., Nilsson [Nil98b], Sinopoli et al. [SSF+04], Azimi-Sadjadi [AS03], and Gupta et al. [GSHM05], among others. Unfortunately, all these results, derived in the context of a single centralized plant, cannot be applied directly to large-scale distributed control systems. Our goal, in this paper, is to initiate a merger of the tools of Control over Networks and Distributed Control to arrive at a theory of Networked Control Systems that can handle the effects and implications of both spatial structure and limited communication on the control design problem. We restrict our attention to networks of possibly different Linear Time Invariant (LTI) systems, where coupling variables are exchanged between units over non-ideal communication links that can fail and, hence, drop packets stochastically. We consider several possible models for these failures, but assume that enough bits are allocated to each coupling variable and that the error-correcting code operating at the output of each link is efficient enough for each packet containing the value of a variable to be either destroyed or transmitted without error. This is the so-called packet erasure model of a channel. For such systems, we consider both the stability and performance analysis problem and the control synthesis problem. The resulting controller has the same structure as the plant, i.e., it is itself an interconnected system with the same communication links as the plant, as illustrated in Figure 1. Such an architecture is appropriate in cases where it is too costly or impossible to create a specific network for communication between controllers, but some information still needs to be exchanged to obtain satisfactory performance. Our work relies on dissipativity arguments similar to those used in [LCD04], and the results take the form of a set of Linear Matrix Inequalities (LMIs) with a particular structure that can be exploited for efficient numerical resolution, as was done, e.g., in [LXDB04].
After explaining the main elements of our approach and discussing some robustness and conservatism issues for the simplified case of a single failing communication link in Section 2, we present our analysis and synthesis results for general interconnections in Section 3. Finally, a numerical example is given in Section 4, which illustrates the trade-offs and computational benefits of our distributed control conditions.
2 The Case of Two Interconnected Subsystems
We first focus on the simplest type of interconnected system: that consisting of two (different) subsystems with one communication link between them. This is done so as to introduce the main ideas at play in the paper, while avoiding unnecessary notational complications. The general case will be addressed in Section 3. Each subsystem is LTI and represented in state-space by
$$x_i(k+1) = A_i x_i(k) + B_i v_i(k), \qquad (1)$$
$$w_i(k) = C_i x_i(k) + D_i v_i(k), \qquad i = 1, 2. \qquad (2)$$
$x_i$ is the state of subsystem $i$ and takes its values in $\mathbb{R}^{m_i}$, while $v_i$ and $w_i$ respectively stand for the input received from and the output sent to subsystem $i$'s neighbor. We will assume that both signals take their values in $\mathbb{R}^{n_i}$. To account for the fact that information exchanged between subsystems can be dropped randomly at each time-step, we assume the following interconnection relation:
$$\begin{bmatrix} v_1(k) \\ v_2(k) \end{bmatrix} = \delta(k) \begin{bmatrix} w_2(k) \\ w_1(k) \end{bmatrix} \quad \text{for all } k \geq 0, \qquad (3)$$
where $\delta(k)$ is a stochastic process taking values in $\{0, 1\}$ at each time $k$. Implicit in equation (3) is the fact that $n_1 = n_2$. In the remainder of this section, we will denote both dimensions by $n$. This way of representing failing channels as random structured perturbations is close in spirit to the approach taken in [EM01b]. In contrast with this reference, however, we will exploit the particular structure of this ‘perturbation’ to obtain a controller with the same interconnection topology as the plant. Also, we will consider the following two models of failure processes, which correspond to various degrees of knowledge about the properties of the network.

Definition 1. We will say that a property of the interconnected system holds in
– the independent failure model: if it holds when $\delta(k)$, $k \geq 0$, are i.i.d. random variables, taking value 1 with a given probability $p$ and 0 with probability $(1-p)$;
– the arbitrary failure model: if it holds for $\delta$ being any time-inhomogeneous Markovian process.
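To make the two failure models concrete, the following minimal Python sketch simulates one sample path of the interconnection (1)-(3) under the independent failure model. All numerical values (the subsystem matrices and the success probability p) are hypothetical placeholders chosen only for illustration; the sketch is not part of the original development.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical subsystem data: two states each, scalar coupling signals (n = 1).
A1 = np.array([[0.9, 0.2], [0.0, 0.7]])
B1 = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 0.0]])
D1 = np.array([[0.1]])
A2 = np.array([[0.8, -0.1], [0.1, 0.9]])
B2 = np.array([[1.0], [0.0]])
C2 = np.array([[0.0, 1.0]])
D2 = np.array([[0.0]])

# Nominal well-posedness (cf. Definition 2 below): I - [[0, D2], [D1, 0]] invertible.
M = np.eye(2) - np.block([[np.zeros((1, 1)), D2], [D1, np.zeros((1, 1))]])
assert np.linalg.matrix_rank(M) == 2

p = 0.8                                   # packet success probability (placeholder)
x1, x2 = np.array([1.0, -1.0]), np.array([0.5, 2.0])
for k in range(50):
    if rng.random() < p:                  # delta(k) = 1: packets go through
        # Solve the algebraic loop (I - [[0, D2], [D1, 0]]) v = [[0, C2], [C1, 0]] x.
        rhs = np.block([[np.zeros((1, 2)), C2], [C1, np.zeros((1, 2))]]) @ np.concatenate([x1, x2])
        v1, v2 = np.split(np.linalg.solve(M, rhs), 2)
    else:                                 # delta(k) = 0: both coupling inputs vanish
        v1 = v2 = np.zeros(1)
    x1, x2 = A1 @ x1 + B1 @ v1, A2 @ x2 + B2 @ v2

print(np.linalg.norm(np.concatenate([x1, x2])))   # state norm along this sample path

A single sample path of this kind does not certify second moment stability; the analysis conditions developed below address that question directly.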
Relations (1) and (3) do not specify the interconnected system entirely, as the various signals involved may not exist or be unique. We thus need to introduce the following notion of well-posedness.

Definition 2. The interconnection of subsystems (1) with communication link (3) is nominally well-posed if and only if the matrix
$$I - \begin{bmatrix} 0 & D_2 \\ D_1 & 0 \end{bmatrix}$$
is invertible.
When an interconnected system is nominally well-posed, states $x_1$ and $x_2$ satisfy a set of jump difference equations, namely
$$\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \end{bmatrix} = A(\delta(k)) \begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix}, \qquad (4)$$
where $A$ is a rational matrix function of $\delta$,
$$A(\delta) = \begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix} + \begin{bmatrix} B_1 & 0 \\ 0 & B_2 \end{bmatrix} \left( I - \delta \begin{bmatrix} 0 & D_2 \\ D_1 & 0 \end{bmatrix} \right)^{-1} \delta \begin{bmatrix} 0 & C_2 \\ C_1 & 0 \end{bmatrix},$$
and their first and second moments are well-defined at all times $k \geq 0$.

2.1 Distributed analysis
We are interested in testing stability of the subsystems' interconnection and, ultimately, in designing distributed stabilizing controllers for them. We will use the notion of second moment stability, which has several equivalent definitions [Lop]. We choose to follow that of [FL02a, SS05], namely

Definition 3. We say that a well-posed interconnected system with communication link (3) is stable when
$$\lim_{k \to \infty} E_{\Delta(k-1)} \left\{ \begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} \begin{bmatrix} x_1^t(k) & x_2^t(k) \end{bmatrix} \right\} = 0,$$
where the expectation is taken with respect to $\Delta(k-1) = \{\delta(1), \ldots, \delta(k-1)\}$.

Using results of [FL02a] and [Lib03d], we can give the following analysis conditions.

Proposition 1. Assume the interconnected system is nominally well-posed. Then it is stable
– with independent failures, if and only if there exists $P > 0$ such that
$$(1-p)\, A(0)^t P A(0) + p\, A(1)^t P A(1) < P; \qquad (5)$$
– with arbitrary failures, if there exists $P > 0$ such that
$$A(0)^t P A(0) < P \quad \text{and} \quad A(1)^t P A(1) < P.$$
0 for i = 1, 2. I
But, since S2 = −E(1)t S1 E(1), I D2 which implies that Im
t
S1
Closed-loop stability is guaranteed if there exist $P_i^c > 0$ and symmetric matrices $S_1^c$ and $S_2^c$ such that, for $i = 1, 2$,
$$\begin{bmatrix} (A_i^c)^t P_i^c A_i^c - P_i^c & (A_i^c)^t P_i^c B_i^c \\ (B_i^c)^t P_i^c A_i^c & (B_i^c)^t P_i^c B_i^c \end{bmatrix} < \begin{bmatrix} C_i^c & D_i^c \\ 0 & I \end{bmatrix}^t S_i^c \begin{bmatrix} C_i^c & D_i^c \\ 0 & I \end{bmatrix}, \qquad (19)$$
$$S_2^c = -E(1)^t S_1^c E(1), \qquad (20)$$
$$S_i^c =: \begin{bmatrix} X_i^c & Y_i^c \\ (Y_i^c)^t & Z_i^c \end{bmatrix}, \quad X_i^c < 0. \qquad (21)$$
Because the closed-loop subsystems' matrices are affine functions of the controller data $\Theta_i$, conditions (19) involve products of $P_i^c$ and $S_i^c$ with these data and are therefore bilinear, rather than jointly convex, in $P_i^c$, $S_i^c$ and the controller data. The following theorem shows that they are nevertheless equivalent to a set of LMIs.
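Both the analysis condition (5) and the synthesis conditions below are semidefinite feasibility problems that off-the-shelf solvers handle directly. As a minimal illustration (not part of the original text), the following sketch checks (5) with CVXPY for hypothetical matrices A(0), A(1) and a hypothetical success probability p; the small epsilon margins stand in for the strict inequalities.

import numpy as np
import cvxpy as cp

# Hypothetical A(0) (link down) and A(1) (link up); placeholder values only.
A0 = np.array([[0.9, 0.2, 0.0, 0.0],
               [0.0, 0.7, 0.0, 0.0],
               [0.0, 0.0, 0.8, -0.1],
               [0.0, 0.0, 0.1, 0.9]])
A1 = A0.copy()
A1[1, 3] = 0.3     # coupling terms that become active when the packet gets through
A1[2, 0] = 0.3
p, n, eps = 0.8, 4, 1e-6

# Feasibility of (5): (1 - p) A(0)^t P A(0) + p A(1)^t P A(1) < P with P > 0.
P = cp.Variable((n, n), symmetric=True)
M = (1 - p) * A0.T @ P @ A0 + p * A1.T @ P @ A1 - P
constraints = [P >> eps * np.eye(n), 0.5 * (M + M.T) << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)
print(problem.status)   # 'optimal' means a feasible P was found, i.e., (5) holds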
Theorem 1. There exist matrices such that stability conditions (19)-(21) are satisfied in closed-loop if and only if there exist symmetric matrices $P_1^g$, $P_2^g$, $\tilde{P}_1^g$, $\tilde{P}_2^g$, $X_1^g$, $X_2^g$, $\tilde{X}_1^g$, $\tilde{X}_2^g$ and matrices $Y^g$, $\tilde{Y}^g$ such that the following LMIs are satisfied:
$$P_i^g > 0, \quad X_i^g < 0, \qquad (22)$$
$$\begin{bmatrix} P_i^g & I \\ I & \tilde{P}_i^g \end{bmatrix} \geq 0 \quad \text{for all } i = 1, 2, \qquad (23)$$
$$\begin{bmatrix} -X_i^g & I \\ I & -\tilde{X}_i^g \end{bmatrix} \geq 0 \quad \text{for all } i = 1, 2, \qquad (24)$$
t g A1 B1 0 0 A1 B1 P1 0 g 1 0 0 1 t I 0 0 −P1 I 0 (NX (NX ) ) 0 (NX ˜) g g t ˜ ˜ 0 −I 0 0 −X2 (Y ) 0 −I X t t ˜g C2 D 2 C2t D2t 0 0 Y˜ g X 1 (25)
where the columns of $N_X^i$ and $N_{\tilde X}^i$ span the null-spaces of $\begin{bmatrix} C_i^y & D_i^y \end{bmatrix}$ and $\begin{bmatrix} (B_i^u)^t & (D_i^u)^t \end{bmatrix}$, respectively, for all $i$. The corresponding controller can be computed once a feasible point is known, and its subsystems share three times as many signals as the plant's subsystems, i.e., $n_k = 3n$.
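The null-space basis matrices entering Theorem 1 are easy to obtain numerically. The sketch below is an illustration with hypothetical measurement and input data (not taken from the text); scipy.linalg.null_space returns an orthonormal basis of the kernel of its argument.

import numpy as np
from scipy.linalg import null_space

# Hypothetical data for one subsystem: C_i^y, D_i^y, B_i^u, D_i^u.
Cy = np.array([[1.0, 0.0]])
Dy = np.array([[0.2]])
Bu = np.array([[0.0], [1.0]])
Du = np.array([[0.0]])

N_X = null_space(np.hstack([Cy, Dy]))            # columns span the null space of [C_i^y  D_i^y]
N_Xtilde = null_space(np.hstack([Bu.T, Du.T]))   # columns span the null space of [(B_i^u)^t (D_i^u)^t]
print(N_X.shape, N_Xtilde.shape)                 # number of columns = kernel dimension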
Proof. The equivalence between (19)-(21) and the LMIs of Theorem 1 is proved in a similar way as in [LCD04], where distributed stabilizing controllers are constructed for systems with ideal channels. One starts with the bilinear matrix inequalities (19) and tries to obtain equivalent convex conditions using the Elimination Lemma of [Sch01]. Of crucial importance for this purpose is the fact that the inertia condition
$$\mathrm{in} \begin{bmatrix} S_1^c & 0 \\ 0 & S_2^c \end{bmatrix} = (n + n_k,\ 0,\ n + n_k)$$
holds. This is implied by condition (20). It is also to guarantee this property that we need to choose nk = 3n when reconstructing the controller from the LMI conditions of Theorem 1.
3 General Multi-link Interconnections
We now consider distributed control design for interconnections of $N > 2$ subsystems, where all communication links can fail separately. In addition to internal stability, we ask that the $H_\infty$-norm of the closed-loop system be less than one. Being able to synthesize such suboptimal controllers will allow us to explore the influence of decentralization on performance in Section 4. The dynamics of each subsystem is described by
$$x_i(k+1) = A_i x_i(k) + B_i v_i(k) + B_i^u u_i(k) + B_i^d d_i(k), \qquad (26)$$
$$w_i(k) = C_i x_i(k) + D_i v_i(k) + D_i^u u_i(k) + D_i^d d_i(k), \qquad (27)$$
$$y_i(k) = C_i^y x_i(k) + D_i^y v_i(k) + D_i^{yd} d_i(k), \qquad (28)$$
$$z_i(k) = C_i^z x_i(k) + D_i^z v_i(k) + D_i^{zu} u_i(k) + D_i^{zd} d_i(k), \qquad (29)$$
where $d_i$ and $z_i$ are the exogenous disturbance and performance output of the $i$th subsystem, respectively. In (26), $v_i$ and $w_i$ represent the aggregate signals exchanged between subsystem $i$ and its neighbors, i.e.,
$$v_i^t = \begin{bmatrix} v_{i1}^t & \ldots & v_{iN}^t \end{bmatrix}; \qquad w_i^t = \begin{bmatrix} w_{i1}^t & \ldots & w_{iN}^t \end{bmatrix},$$
each $v_{ij}$ and $w_{ij}$ belonging to $\mathbb{R}^{n_{ij}}$ ($n_{ij} = n_{ji}$) and corresponding to signals shared by subsystems $i$ and $j$. When subsystems $i$ and $j$ do not communicate, we simply let $n_{ij} = 0$. As in Section 2, interconnection links between two subsystems are described by
$$\begin{bmatrix} v_{ij}(k) \\ v_{ji}(k) \end{bmatrix} = \delta_{ij}(k) \begin{bmatrix} w_{ji}(k) \\ w_{ij}(k) \end{bmatrix} \quad \text{for all } k \geq 0,\ 1 \leq i \leq j \leq N, \qquad (30)$$
for a stochastic process $\delta_{ij}$ taking values in $\{0, 1\}$. In the remainder of this section, we will restrict ourselves to the arbitrary failure model and assume that each $\delta_{ij}$ is a switching signal of the form presented in Section 2. The concept of well-posedness, which guarantees that all signals are well-defined irrespective of the system's failure mode, generalizes naturally to this case.

Definition 4. We say that a system described by (26) and (30) is well-posed if, for any value of $\delta_{ij} = \delta_{ji}$ in $\{0, 1\}$ ($1 \leq i \leq N$, $1 \leq j \leq N$), the only vectors $\{v_{ij}\}_{1 \leq i, j \leq N}$, $\{w_{ij}\}_{1 \leq i, j \leq N}$ satisfying
$$v_{ij} = \delta_{ij} w_{ji} \quad \text{and} \quad \begin{bmatrix} w_i \\ v_i \end{bmatrix} \in \mathrm{Im} \begin{bmatrix} D_i \\ I \end{bmatrix} \quad \text{for all } i, j$$
are zero.
Finally, following [SS05], we define the $H_\infty$-norm of the interconnected system (26)-(30) as
$$\sup_{\delta(0)} \; \sup_{d \neq 0} \; \frac{\sum_{k=0}^{\infty} E_{\Delta(k-1)}\, z(k)^t z(k)}{\sum_{k=0}^{\infty} E_{\Delta(k-1)}\, d(k)^t d(k)}, \qquad (31)$$
where, in (31), we have let
$$z^t(k) = \begin{bmatrix} z_1^t(k) & \ldots & z_N^t(k) \end{bmatrix} \quad \text{and} \quad d^t(k) = \begin{bmatrix} d_1^t(k) & \ldots & d_N^t(k) \end{bmatrix}$$
for all $k \geq 0$ and assumed $x_i(0) = 0$ for all $i = 1, \ldots, N$. When this norm is strictly less than one, we say that the system is contractive. With these definitions in place, we can state the counterpart of Proposition 4 for general interconnections.

Proposition 6. The interconnection of subsystems (26) (with $\dim u_i = \dim y_i = 0$ for all $i$) with communication link (30) is well-posed, stable and contractive with arbitrary failures, if there exist symmetric matrices $P_i > 0$ and $X_{ij} < 0$ for all $i, j = 1, \ldots, N$ and matrices $Y_{ij}$ for all $i \geq j$, with $Y_{ii}$ skew-symmetric, such that, for all $i = 1, \ldots, N$,
$$\begin{bmatrix} A_i & B_i & B_i^d \\ I & 0 & 0 \\ C_i & D_i & D_i^d \\ 0 & I & 0 \\ C_i^z & D_i^z & D_i^{zd} \\ 0 & 0 & I \end{bmatrix}^t \begin{bmatrix} P_i & 0 & 0 & 0 & 0 & 0 \\ 0 & -P_i & 0 & 0 & 0 & 0 \\ 0 & 0 & Z_i^{11} & Z_i^{12} & 0 & 0 \\ 0 & 0 & (Z_i^{12})^t & Z_i^{22} & 0 & 0 \\ 0 & 0 & 0 & 0 & I & 0 \\ 0 & 0 & 0 & 0 & 0 & -I \end{bmatrix} \begin{bmatrix} A_i & B_i & B_i^d \\ I & 0 & 0 \\ C_i & D_i & D_i^d \\ 0 & I & 0 \\ C_i^z & D_i^z & D_i^{zd} \\ 0 & 0 & I \end{bmatrix} < 0, \qquad (32)$$
where
$$Z_i^{11} := -\,\mathrm{diag}\,(X_{ij})_{1 \leq j \leq N}, \qquad Z_i^{22} := \mathrm{diag}\,(X_{ji})_{1 \leq j \leq N},$$
$$Z_i^{12} := \mathrm{diag}\left( \mathrm{diag}\,(-Y_{ij})_{1 \leq j \leq i},\ \mathrm{diag}\,(Y_{ji}^t)_{i < j \leq N} \right).$$
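As a concrete reading of these definitions, the sketch below assembles $Z_i^{11}$, $Z_i^{12}$ and $Z_i^{22}$ from per-link variables $X_{ij}$ and $Y_{ij}$. The number of subsystems, the common signal dimension and all numerical values are hypothetical placeholders (and indices are 0-based), so this only illustrates the block structure, not a feasible point of (32).

import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
N, d = 3, 2   # hypothetical: three subsystems, every shared signal of dimension n_ij = 2

# Multiplier variables of Proposition 6 (placeholder values): X_ij symmetric and
# negative definite for all i, j; Y_ij defined for i >= j, with Y_ii skew-symmetric.
X = {(i, j): -(1.0 + rng.random()) * np.eye(d) for i in range(N) for j in range(N)}
Y = {}
for i in range(N):
    for j in range(i + 1):
        W = rng.standard_normal((d, d))
        Y[(i, j)] = W - W.T if i == j else W

def Z_blocks(i):
    # Z_i^{11} = -diag(X_ij)_j,  Z_i^{22} = diag(X_ji)_j,
    # Z_i^{12} = diag(-Y_ij for j <= i, Y_ji^t for j > i).
    z11 = block_diag(*[-X[(i, j)] for j in range(N)])
    z22 = block_diag(*[X[(j, i)] for j in range(N)])
    z12 = block_diag(*([-Y[(i, j)] for j in range(i + 1)] +
                       [Y[(j, i)].T for j in range(i + 1, N)]))
    return z11, z12, z22

z11, z12, z22 = Z_blocks(1)
print(z11.shape, z12.shape, z22.shape)   # each is (N*d, N*d) = (6, 6)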
Proof. We will only give the proof of well-posedness. From LMIs (32) we obtain, in particular, that
$$\sum_{j=1}^{N} \begin{bmatrix} w_{ij} \\ v_{ij} \end{bmatrix}^t S_{ij} \begin{bmatrix} w_{ij} \\ v_{ij} \end{bmatrix} \geq 0 \qquad (33)$$
for all $i = 1, \ldots, N$ and all $w_{ij}$, $v_{ij}$ such that
$$\begin{bmatrix} w_i \\ v_i \end{bmatrix} \in \mathrm{Im} \begin{bmatrix} D_i \\ I \end{bmatrix}, \qquad (34)$$
with equality only if all $w_{ij}$'s and $v_{ij}$'s are zero. In (33), we have let
$$S_{ij} := \begin{cases} \begin{bmatrix} X_{ij} & Y_{ij} \\ Y_{ij}^t & -X_{ji} \end{bmatrix} & \text{if } j \leq i, \\[2mm] \begin{bmatrix} X_{ij} & -Y_{ji}^t \\ -Y_{ji} & -X_{ji} \end{bmatrix} & \text{if } j > i. \end{cases}$$
Now assume that, in addition to relation (34), $v_{ij} = \delta_{ij} w_{ji}$ for all $i, j$, and let $F := \{(i,j) \,|\, \delta_{ij} = 0\}$ be the set of failing communication links and $E := \{(i,j) \,|\, \delta_{ij} = 1\}$ its complementary set in the set of non-oriented edges of the complete graph over $N$ vertices, with self-loops. We can then rewrite (33) as
$$\sum_{(i,j) \in F} w_{ij}^t X_{ij} w_{ij} + \sum_{(i,j) \in E} \begin{bmatrix} w_{ij} \\ w_{ji} \end{bmatrix}^t S_{ij} \begin{bmatrix} w_{ij} \\ w_{ji} \end{bmatrix} \geq 0$$
for all $i$. Summing this latter expression over $i$ and using the definition of $S_{ij}$, we get
$$\sum_i \underbrace{\sum_{(i,j) \in F} w_{ij}^t X_{ij} w_{ij}}_{\leq 0} + \sum_i \underbrace{\sum_{(i,j) \in E} \begin{bmatrix} w_{ij} \\ w_{ji} \end{bmatrix}^t S_{ij} \begin{bmatrix} w_{ij} \\ w_{ji} \end{bmatrix}}_{= 0} \geq 0.$$
Hence, $w_{ij} = 0$ for all $(i,j) \in F$ and, in turn, $\sum_{(i,j) \in E} \begin{bmatrix} w_{ij} \\ w_{ji} \end{bmatrix}^t S_{ij} \begin{bmatrix} w_{ij} \\ w_{ji} \end{bmatrix} = 0$ for all $i$. As a result, we have equality in inequality (33), which implies $w_{ij} = v_{ij} = 0$ for all $(i,j) \in E$ as well and proves well-posedness.
The proof of stability and contractiveness amounts to showing that (32) implies the Bounded Real Lemma conditions of [SS03]. The steps are very similar to those used in the proof of Proposition 4 and are thus omitted. Nevertheless, we would like to draw the reader's attention to two points:
1. Analysis conditions of Proposition 6 can handle interconnections with self-loops (i.e., signals of the form $v_{ii}$ and $w_{ii}$). This will prove useful when we treat a practical example in Section 4.
2. When LMIs (32) are satisfied, stability and contractiveness are guaranteed irrespective of whether different links fail independently of each other or not. This is because these conditions require each failing link to decrease the stored energy $x_i^t P_i x_i$ of its adjacent subsystems, no matter what mode the other channels are in. In contrast, the multi-link counterpart of the probability-dependent condition given in Proposition 2 would not have this property, since every link would only decrease the subsystems' stored energy on average, which requires knowing the joint failure probability and accounting for all global failure modes.
Probability-dependent stability tests can be given for general interconnections when $D_i = 0$ for all subsystems (as we did in Proposition 2). However, since we do not know how to ensure this property for a closed-loop system and we ultimately want to formulate synthesis conditions, we decided to focus on the arbitrary failure model for the multi-link case. Our distributed control synthesis theorem also has a counterpart for general interconnections.

Theorem 2. There exists a distributed controller, with the same structure as the plant, such that the closed-loop system is well-posed, stable and contractive with arbitrary failures, if there exist symmetric matrices $P_i^g > 0$, $\tilde{P}_i^g > 0$, $X_{ij}^g$, $\tilde{X}_{ij}^g$ for all $i, j = 1, \ldots, N$ and matrices $Y_{ij}^g$, $\tilde{Y}_{ij}^g$ for $i \geq j$, with $Y_{ii}^g$, $\tilde{Y}_{ii}^g$ skew-symmetric, such that
$$\begin{bmatrix} P_i^g & I \\ I & \tilde{P}_i^g \end{bmatrix} \geq 0, \qquad \begin{bmatrix} -X_{ij}^g & I \\ I & -\tilde{X}_{ij}^g \end{bmatrix} \geq 0, \qquad (35)$$
0 0 Ai Bi Bid 0 0 I 0 0 12 d 0 0 Z i i g g C i D i D i NX