General Systems Theory: Mathematical Foundations

M. D. Mesarovic and Yasuhiko Takahara
Systems Research Center
Case Western Reserve University, Cleveland, Ohio
ACADEMIC PRESS
New York
San Francisco London
A Subsidiary of Harcourt Brace Jovanovich, Publishers
1975
COPYRIGHT © 1975, BY ACADEMIC PRESS, INC. ALL RIGHTS RESERVED. NO PART OF THIS PUBLICATION MAY BE REPRODUCED OR TRANSMITTED IN ANY FORM OR BY ANY MEANS, ELECTRONIC OR MECHANICAL, INCLUDING PHOTOCOPY, RECORDING, OR ANY INFORMATION STORAGE AND RETRIEVAL SYSTEM, WITHOUT PERMISSION IN WRITING FROM THE PUBLISHER.
ACADEMIC PRESS, INC.
111 Fifth Avenue, New York, New York 10003
United Kingdom Edition published by ACADEMIC PRESS, INC. (LONDON) LTD. 24/28 Oval Road, London NW1
Library of Congress Cataloging in Publication Data

Mesarović, Mihajlo D.
Foundations for the mathematical theory of general systems.
(Mathematics in science and engineering)
Bibliography: p.
1. System analysis. I. Takahara, Y., joint author. II. Title. III. Series.
QA402.M47 003 74-1639
ISBN 0-12-491540-X
PRINTED IN THE UNITED STATES OF AMERICA
To Gordana and Mitsue
with affection and gratitude
CONTENTS

Preface

I. Introduction
   1. General Systems Theory: What Is It and What Is It For?
   …

   … of Minimal Realizations
   2. Characterization of the Minimal Realization of Stationary Systems
   3. Uniqueness of Minimal Input-Response Realization

IX. Stability
   1. General Concept of Stability
   2. Stability of Sets for General Systems

X. Interconnections of Subsystems, Decomposition, and Decoupling
   1. Connecting Operators
   2. Subsystems, Components, and Decomposition
   3. Feedback Connection of Components
   4. Decoupling and Functional Controllability
   5. Abstract Pole Assignability
   6. Simplification through Decomposition of Discrete-Time Dynamical Systems

XI. Computability, Consistency, and Completeness
   1. Computation as Dynamical Process
   2. Fundamental Diagonalization (Goedel) Theorem
   3. Application of the Fundamental Theorem to Formal Systems
   4. Realization by Turing Machines

XII. Categories of Systems and Associated Functors
   1. Formation of Categories of General Systems and Homomorphic Models
   2. Categories of General Systems
   3. Categories of Time Systems
   4. Categories of Dynamical Systems

Appendix I. References and Historical Account

Appendix II. Alternative Basis for Mathematical General Systems Theory
   1. Axiomatic Logic Structures
   2. Topological, Functional Analysis, and Quantitative Approaches
   3. Algebraic Systems Theory
   4. Restricted Notion of a System

Appendix III. Open Systems and Goal-Seeking Systems
   1. Open Systems
   2. Goal-Seeking Systems

Appendix IV. Basic Notions in Category Theory

Index
PREFACE
This book reports on the development of a mathematical theory of general systems, initiated ten years ago. The theory is based on a broad and ambitious program aimed at formalizing all major systems concepts and the development of an axiomatic and general theory of systems. The present book provides the foundations for, and the initial steps toward, the fulfillment of that program. The interest in the present volume is strictly in the mathematical aspects of the theory. Applications and philosophical implications will be considered elsewhere.

The basic characteristics and the role of the proposed general systems theory are discussed in some detail in the first chapter. However, the unifying power of the proposed foundations ought to be specifically singled out; within the same framework, using essentially the same mathematical structure for the specification of a system, such diverse topics are considered and associated results proven as: the existence and minimal axioms for state-space construction; necessary and sufficient conditions for controllability of multivalued systems; minimal realization from input-output data; necessary and sufficient conditions for Lyapunov stability of dynamical systems; Goedel consistency and completeness theorem; feedback decoupling of multivariable systems; Krohn-Rhodes decomposition theorem; classification of systems using category theory.

A system can be described either as a transformation of inputs (stimuli) into outputs (responses) - the so-called input-output approach (also referred to as the causal or terminal systems approach), or in reference to the fulfillment of a purpose or the pursuit of a goal - the so-called goal-seeking or decision-making approach. In this book we deal only with the input-output approach. Originally, we intended to include a general mathematical theory of goal-seeking, but too many other tasks and duties have prevented us
from carrying out that intention. In fairness to our research already completed, we ought to point out that the theory of multilevel systems which has been reported elsewhere,† although aimed in a different direction, does contain the elements of a general theory of complex goal-seeking systems. For the sake of completeness we have given the basic definition of a goal-seeking system and of an open system (another topic of major concern) in Appendix III.

We have discussed the material over the years with many colleagues and students. In particular the advice and help of Donald Macko and Seiji Yoshii were most constructive. The manuscript would have remained a scribble of notes on a pile of paper if it were not for the tireless and almost nondenumerable series of drafts retyped by Mrs. Mary Lou Cantini.
† M. D. Mesarović, D. Macko, and Y. Takahara, “Theory of Hierarchical Multilevel Systems.” Academic Press, New York, 1970.
Chapter I
INTRODUCTION
1. GENERAL SYSTEMS THEORY: WHAT IS IT AND WHAT IS IT FOR?
Systems theory is a scientific discipline concerned with the explanation of various phenomena, regardless of their specific nature, in terms of the formal relationships between the factors involved and the ways they are transformed under different conditions; the observations are explained in terms of the relationships between the components, i.e., in reference to the organization and functioning rather than with an explicit reference to the nature of the mechanisms involved (e.g., physical, biological, social, or even purely conceptual). The subject of study in systems theory is not a “physical object,” a chemical or social phenomenon, for example, but a “system”: a formal relationship between observed features or attributes. For conceptual reasons, the language used in describing the behavior of systems is that of information processing and goal seeking (decision making, control).

General systems theory deals with the most fundamental concepts and aspects of systems. Many theories dealing with more specific types of systems (e.g., dynamical systems, automata, control systems, game-theoretic systems, among many others) have been under development for quite some time. General systems theory is concerned with the basic issues common to all of these specialized treatments. Also, for truly complex phenomena, such as those found predominantly in the social and biological sciences, the specialized descriptions used in classical theories (which are based on special mathematical structures such as differential or difference equations, numerical or abstract algebras, etc.) do not adequately and properly represent the actual
events. Either because of this inadequate match between the events and the types of descriptions available or because of a pure lack of knowledge, for many truly complex problems one can give only the most general statements, which are qualitative and too often even only verbal. General systems theory is aimed at providing a description and explanation for such complex phenomena. Our contention is that one and the same theory can serve both of these purposes. Furthermore, in order to do that, it ought to be simple, elegant, general, and precise (unambiguous). This is why the approach we have taken is both mathematical and perfectly general. At the risk of oversimplification, the principal characteristics of the approach whose foundation is presented in this book are:

(i) It is a mathematical theory of general systems; basic concepts are introduced axiomatically, and the system's properties and behavior are investigated in a precise manner.

(ii) It is concerned with the goal-seeking (decision making, control) and similar representations of systems, as much as with the input-output or “transformational” (causal) representations. For example, the study of hierarchical, multilevel, decision-making systems was a major concern from the very beginning.

(iii) The mathematical structures required to formalize the basic concepts are introduced in such a way that precision is obtained without losing any generality.

It is important to realize that nothing is gained by avoiding the use of a precise language, i.e., mathematics, in making statements about a system of concern. We take exception, therefore, to considering general systems theory as a scientific philosophy; rather, we consider it a scientific enterprise, without denying, however, the impact of such a scientific development on philosophy in general and epistemology in particular. Furthermore, once a commitment to the mathematical method is made, logical inferences can be drawn about the system's behavior.
Actually, the investigation of the logical consequences of systems having given properties should be of central concern for any general systems theory which cannot be limited solely to a descriptive classification of systems. The decision-making or goal-seeking view of a system’s behavior is of paramount importance. General systems theory is not a generalized circuit theory-a position we believe has introduced much confusion and has contributed to the rejection of systems theory and the systems approach in fields where goal-seeking behavior is central, such as psychology, biology, etc. Actually, the theory presented in this book can just as well be termed general cybernetics, i.e., a general theory of governing and governed systems. The
term “general systems theory” was adopted at the initiation of the theory as reflecting a broader concern. However, in retrospect, it appears that the choice might not have been the happiest one, since that term has already been used in a different context. The application of the mathematical theory of general systems can play a major role in the following important problem areas. (a) Study of Systems with Uncertainties
On too many occasions, there is not enough information about a given system and its operation to enable detailed mathematical modeling (even if knowledge about the basic cause-effect relationships exists). A general systems model can be developed for such a situation, thus providing a solid mathematical basis for further study or a more detailed analysis. In this way, general systems theory, as conceived in this book, significantly extends the domain of application of mathematical methods to include the most diverse fields and problem areas not previously amenable to mathematical modeling.

(b) Study of Large-Scale and Complex Systems

Complexity in the description of a system with a large number of variables might be due to the way in which the variables and the relationships between them are described, or to the number of details taken into account, even though they are not necessarily germane to the main purpose of study. In such a case, by developing a model which is less structured and which concentrates only on the key factors, i.e., a general systems model in the set-theoretic or algebraic framework, one can make the analysis more efficient, or even make it possible at all. In short, one uses a mathematically more abstract, less structured description of a large-scale and complex system. Many structural problems, such as decomposition, coordination, etc., can be considered on such a level. Furthermore, even some more traditional problems such as Lyapunov stability can be analyzed algebraically using more abstract descriptions.

The distinction between classical methods of approximation and the abstraction approach should be noted. In the former, one uses the same mathematical structure, and simplification is achieved by omitting some parts of the model that are considered less important; e.g., a fifth-order differential equation is replaced with a second-order equation by considering only the two “dominant” state variables of the system.
In the latter approach, however, one uses a different mathematical structure which is more abstract, but which still considers the system as a whole, although from a less detailed viewpoint. The simplification is not achieved by the omission of variables but by the suppression of some of the details considered unessential.
(c) Structural Considerations in Model Building

In both the analysis and synthesis of systems of various kinds, structural considerations are of utmost importance. Actually, the most crucial step in the model-building process is the selection of a structure for the model of a system under consideration. It is a rather poor strategy to start investigations with a detailed mathematical model before major hypotheses are tested and a better understanding of the system is developed. Especially when the system consists of a family of interrelated subsystems, it is more efficient first to delineate the subsystems and to identify the major interfaces before proceeding with a more detailed modeling of the mechanisms of how the various subsystems function. Traditionally, engineers have used block diagrams to reveal the overall composition of a system and to facilitate subsequent structural and analytical considerations. The principal attractiveness of block diagrams is their simplicity, while their major drawback is a lack of precision. General systems theory models eliminate this drawback by introducing the precision of mathematics, while preserving the advantage, i.e., the simplicity, of block diagrams. The role of general systems theory in systems analysis can be represented by the diagram in Fig. 1.1.

Fig. 1.1. Verbal description of problem → Block diagram → General systems model → Detailed mathematical model
(d) Precise Definition of Concepts and Interdisciplinary Communication General systems theory provides a language for interdisciplinary communication, since it is sufficiently general to avoid introduction of constraints of its own, yet, due to its precision, it removes misunderstandings which can be quite misleading. (For example, the different notions of adaptation used in the fields of psychology, biology, engineering, etc., can first be formalized in
general systems theory terms and then compared.) It is often stated that systems theory has to reflect the “invariant” structural aspects of different real-life systems, i.e., those that remain invariant for similar phenomena from different fields (disciplines). This similarity can be truly established only if the relevant concepts are defined with sufficient care and precision. Otherwise, the danger of confusion is too great. It is quite appropriate, therefore, to consider the mathematical theory of general systems as providing a framework for the formalization of any systems concept. In this sense, general systems theory is quite basic for the application of the “systems approach” and systems theory in almost any situation. The important point to note when using general systems theory for concept definition is that when a concept has been introduced in a precise manner, what is crucial is not whether the definition is “correct” in any given interpretation, but rather, whether the concept is defined with sufficient precision so that it can be clearly and unambiguously understood and as such can be further examined and used in other disciplines. It is in this capacity that the general systems theory offers a language for interdisciplinary communication. Such an application of general systems theory might seem trivial from the purely mathematical standpoint, but is not so from the viewpoint of managing a large team effort in which specialists from different disciplines are working together on a complex problem, as is often found in the fields of environmental, urban, regional, and other large-scale studies. (e) Unification and Foundation for More Specialized Branches of Systems Theory
Questions regarding basic systems problems which transcend many specialized branches of systems theory (e.g., the question of state-space representation) can be properly and successfully considered on the general systems level. This will be demonstrated many times in this book. The problems of foundations are of interest for extending and making proper use of systems theory in practice, for pedagogical reasons, and for providing a coherent framework to organize the facts and findings in the broad areas of systems research.
2. FORMALIZATION APPROACH FOR THE DEVELOPMENT OF THE MATHEMATICAL THEORY OF GENERAL SYSTEMS
The approach that we have used to develop the general systems theory reported in this book is the following:†
† A comparison with some other possible approaches is given in Appendix II.
(i) The basic systems concepts are introduced via formalization. By this we mean that, starting from a verbal description of an intuitive notion, a precise mathematical definition for the concept is given using minimal mathematical structure, i.e., as few axioms as the correct interpretation would allow.

(ii) Starting from basic concepts introduced via formalization, the mathematical theory of general systems is further developed by adding more mathematical structure as needed for the investigation of various systems properties.

Such a procedure allows us to establish how fundamental some particular systems properties really are and also what is the minimal set of assumptions needed in order that a given property or relationship holds.
The starting point for the entire development is the concept of a system defined on the set-theoretic level. Quite simply and most naturally for that level, a system is defined as a relation in the set-theoretic sense, i.e., it is assumed that a family of sets is given,

V = { V_i : i ∈ I }

where I is the index set, and a system, defined on V, is a proper subset of the Cartesian product:

S ⊂ ×{ V_i : i ∈ I }

The components of S, V_i, i ∈ I, are termed the systems objects. We shall primarily be concerned with a system consisting of two objects, the input object X and the output object Y:

S ⊂ X × Y     (1.1)
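As a concrete, entirely invented finite illustration of (1.1), a system can be represented in Python simply as a set of input-output pairs; the sets X and Y and the pairs below are toy values of our own choosing, not anything from the text:

```python
# A system as a set-theoretic relation S ⊂ X × Y (finite illustration).
X = {"x1", "x2"}          # input object
Y = {"y1", "y2"}          # output object

# The system is just a subset of the Cartesian product X × Y.
S = {("x1", "y1"), ("x1", "y2"), ("x2", "y1")}

# Sanity check: S is indeed a relation on X × Y.
assert all(x in X and y in Y for (x, y) in S)

# S is a relation, not a function: the input "x1" is related to two outputs.
outputs_of_x1 = {y for (x, y) in S if x == "x1"}
print(sorted(outputs_of_x1))  # ['y1', 'y2']
```

The point of the sketch is only that nothing beyond set membership is needed to state what a general system is.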
Starting a mathematical theory of general systems on the set-theoretic level is fully consistent with the stated objective of starting with the least structured and most widely applicable concepts and then proceeding with the development of a mathematical theory in an axiomatic manner. To understand better some of the reasons for adopting the concept of a system as a set-theoretic relation, the following remarks are pertinent.

A system is defined in terms of observed features or, more precisely, in terms of the relationship between those features rather than what they actually are (physical, biological, social, or other phenomena). This is in accord with the nature of the systems field and its concern with the organization and interrelationships of components into an (overall) system rather than with the specific mechanisms within a given phenomenological framework.

The notion of a system as given in (1.1) is perfectly general. On the one hand, if a system is described by more specific mathematical constructs, e.g., a set of
equations, it is obvious that these constructs define or specify a relation as given in (1.1). Different systems, of course, have different methods of specification, but they all are but relations as given in (1.1). On the other hand, in the case of the most incomplete information, when the system can be described only in terms of a set of verbal statements, these still, by their linguistic function as statements, define a relation as in (1.1). Indeed, every statement contains two basic linguistic categories: nouns and functors, nouns denoting objects and functors denoting the relationships between them. For any proper set of verbal statements there exists a (mathematical) relation which represents the formal relationship between the objects denoted by nouns (technically referred to as a model for these statements). The adjective “proper” refers here, of course, to the conditions for the axioms of a set theory. In short, then, a system is always a relation, as given in (1.1), and various types of systems are more precisely defined by the appropriate methods: linguistic, mathematical, computer programs, etc.

A system is defined as a set (of a particular kind, i.e., a relation). It stands for the collection of all appearances of the object of study rather than for the object of study itself. This is necessitated by the use of mathematics as the language for the theory, in which a “mechanism” (a function or a relation) is defined as a set, i.e., as a collection of all proper combinations of components. Such a characterization of a system ought not to create any difficulty, since the set relation, with additional specifications, contains all the information about the actual “mechanism” we can legitimately use in the development of a formal theory.

The specification of a given system is often given in terms of some equations defined on appropriate variables. To every variable there corresponds a systems object which represents the range of the respective variable.
Stating that a system is defined by a set of equations on a set of variables, one essentially states that the system is a relation on the respective systems objects specified by the variables (each one with a corresponding object as a range) such that for any combination of elements from the objects, i.e., the values for the variables, the given set of equations is satisfied.

To develop any kind of theory starting from (1.1), it is necessary to introduce more structure into the system as a relation. This can be done in two ways:
(i) by introducing additional structure into the elements of the system objects, i.e., by considering an element v_i ∈ V_i as a set itself with additional appropriate structure;

(ii) by introducing the structure in the object sets, V_i, i ∈ I, themselves.

The first approach leads to the (abstract) time system concept, the second to the concept of an algebraic system.
(a) Time Systems
This approach will be introduced precisely in Chapter II and will be used extensively throughout the book; therefore, a brief sketch is sufficient here. If the elements of an object are functions, e.g., v : T_v → A_v, the object is referred to as a family object or a function-generated object. Of particular interest is the case when both the domain and codomain of all the functions in the given object V are the same, i.e., any v ∈ V is a function from T into A, v : T → A. T represents the index set for V, while A is referred to as the alphabet for V. Notice that A can be of arbitrary cardinality.

If the index set is linearly ordered, it is called a time set. This term was selected because such an index set captures the minimal property necessary for the concept of time, particularly as it relates to the time evolution and dynamic behavior of systems. A function defined on a time set is called an (abstract) time function. An object whose elements are time functions is referred to as a time object. A system defined on time objects represents a time system. Of particular interest are time systems whose input and output objects are defined, respectively, on the sets X ⊆ A^T and Y ⊆ B^T. The system is then

S ⊂ A^T × B^T
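A finite sketch may make the time-system idea concrete; the one-step delay system below, with time set T = {0, 1, 2} and alphabets A = B = {0, 1}, is an invented example, not one from the book:

```python
from itertools import product

# Finite sketch of a time system S ⊂ A^T × B^T, with T = {0, 1, 2}.
T = (0, 1, 2)          # linearly ordered time set
A = (0, 1)             # input alphabet
B = (0, 1)             # output alphabet

# A time function is represented here as a tuple indexed by T.
AT = list(product(A, repeat=len(T)))   # all functions T -> A
BT = list(product(B, repeat=len(T)))   # all functions T -> B

# An illustrative time system: the output at each t is the input delayed
# by one step, with the initial output fixed to 0.
S = {(x, (0,) + x[:-1]) for x in AT}

assert all(x in AT and y in BT for (x, y) in S)
x = (1, 0, 1)
y = next(y for (xx, y) in S if xx == x)
print(y)  # (0, 1, 0)
```

The delay system happens to be functional and nonanticipatory, two properties that are treated precisely in Chapter II.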
(b) Algebraic Systems
An alternative way to introduce the mathematical structure in a system's object V necessary for constructive specification is to define one or more operations in V so that V becomes an algebra. In the simplest case, a binary operation is given, R : V × V → V, and it is assumed that there exists a subset W of V, often of finite cardinality, such that any element in V can be obtained by the application of R on the elements of W or previously generated elements. The set W is referred to as the set of generators, or also as an alphabet, and its elements as the symbols; the elements of the object V are referred to as words. If R is concatenation, the words are simply sequences of elements from the alphabet W.

A distinction should be noticed between the alphabet for a time object and for an algebraic object. For objects with finite alphabets, these are usually the same sets, i.e., the object, whose elements are sequences from the given set, can be viewed either as a set of time functions (on different time intervals, though) or as a set generated by an algebraic operation from the same set of symbols. When the alphabet is infinite, complications arise, and the set of generators and the codomain of the time functions are different sets, generally even of different cardinality.
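The generator idea can be sketched for the concatenation case; the alphabet W = {a, b} and the length bound below are invented purely for illustration:

```python
from itertools import product

# Sketch: the algebraic object V generated from a finite alphabet W
# by the concatenation operation.
W = {"a", "b"}   # generators (symbols)

def words_up_to(n):
    """All words of length 1..n obtainable by concatenating symbols of W."""
    out = set()
    for k in range(1, n + 1):
        out |= {"".join(p) for p in product(sorted(W), repeat=k)}
    return out

V3 = words_up_to(3)
print(len(V3))  # 2 + 4 + 8 = 14 words
assert "aba" in V3 and "bbb" in V3
```

Only a finite initial segment of V is enumerated, of course; the object itself is the infinite set of all words over W.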
In a more general situation, an algebraic object is generated by a family of operations. Namely, given a set of elements, termed primitive elements, W, and a set of operations R = {R_1, . . . , R_n}, the object V contains the primitive elements themselves, W ⊂ V, and any element that can be generated by repeated application of the operations from R.

We shall use primarily the time systems approach in this book because it allows a more appealing intuitive interpretation, in particular for the phenomena of time evolution and state transition. Actually, it can be shown that the two approaches are, by and large, equivalent. It should be emphasized, however, that we shall be using the algebraic structure both within a general system, S ⊂ X × Y, and a general time system, S ⊂ A^T × B^T, although not necessarily for the problems related to time evolution. It is interesting to note that the two methods mentioned above correspond to the two basic ways to define a set constructively: by (transfinite) induction on an ordered set and by algebraic induction. The implication and meaning of this interesting fact will not be pursued here any further.
Chapter II

BASIC CONCEPTS
In this chapter we shall introduce some basic systems notions on the set-theoretic level and establish some relationships between them. First, we shall define a general system as a relation on abstract sets and then define general time and dynamical systems as general systems defined on sets of abstract time functions. In order to enable more specific definition of various types of systems, certain kinds of so-called auxiliary functions are introduced. They are abstract counterparts of the relationships, often given in the form of a set of equations, in terms of which a system is defined. Auxiliary functions also enable a more detailed analysis of systems, in particular their evolution in time. In order to define various auxiliary functions, new auxiliary objects, termed state objects, had to be introduced; the elements of such an object are termed states. The primary functions of the state, as introduced in this chapter, are:

(i) to enable a system or its restrictions, which are both in general relations, to be represented as functions;

(ii) to enable the determination of a future output solely on the basis of a given future input and the present state, completely disregarding the past (the state at any given time embodies the entire past history of the system);

(iii) to relate the states at different times so that one can determine whether the state of a system has changed over time and in what way.

This third requirement leads to the concept of a state space. A general dynamical system is defined in such a state space.
Some basic conditions are given for the existence of various types of auxiliary functions in general and in reference to such system properties as input completeness and linearity. A classification of systems in reference to various kinds of time invariance of certain auxiliary functions is given. Finally, some questions of time causality are considered. Two notions are introduced in this respect:

(i) A system is termed nonanticipatory if there exists a family of state objects such that the future values of any output are determined solely by the state at a previous time and the input in this time period.

(ii) A system is termed past-determined if, after a certain initial period of time, the values of any output are determined solely by the past input-output pair.

Conditions are then given for time systems to be nonanticipatory or past-determined.

1. SET-THEORETIC CONCEPT OF A GENERAL SYSTEM
(a) General System, Global States, and Global-Response Function

The starting point for the entire development is provided by the following definitions.

Definition 1.1. A (general) system is a relation on nonempty (abstract) sets

S ⊂ ×{ V_i : i ∈ I }     (2.1)

where × denotes the Cartesian product and I is the index set. A component set V_i is referred to as a system object. When I is finite, (2.1) is written in the form

S ⊂ V_1 × ⋯ × V_n     (2.2)

Definition 1.2. Let I_x ⊂ I and I_y ⊂ I be a partition of I, i.e., I_x ∩ I_y = ∅, I_x ∪ I_y = I. The set X = ×{ V_i : i ∈ I_x } is termed the input object, while Y = ×{ V_i : i ∈ I_y } is termed the output object. The system S is then

S ⊂ X × Y     (2.3)

and will be referred to as an input-output system. The form (2.3) rather than (2.2) will be used throughout this book.

Definition 1.3. If S is a function

S : X → Y     (2.4)

it is referred to as a function-type (or functional) system.
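Definitions 1.1-1.3 can be exercised on small finite relations; the single-valuedness test below is a straightforward reading of Definition 1.3, applied to toy data of our own choosing:

```python
from collections import defaultdict

def is_function_type(S):
    """A system S ⊂ X × Y is function-type iff each input
    occurring in S is related to exactly one output."""
    outputs = defaultdict(set)
    for x, y in S:
        outputs[x].add(y)
    return all(len(ys) == 1 for ys in outputs.values())

S1 = {(1, "a"), (2, "b")}            # functional: S1 is a map {1: 'a', 2: 'b'}
S2 = {(1, "a"), (1, "b"), (2, "b")}  # a genuine relation: 1 has two outputs
print(is_function_type(S1), is_function_type(S2))  # True False
```

The same check underlies the distinction, made throughout the chapter, between systems that are relations and systems that happen to be functions.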
Notice that the same symbol S is used both in (2.2) and (2.3), although, strictly speaking, the elements of the relation in (2.2) are n-tuples while those of the relation in (2.3) are pairs. This convention is adopted for the sake of simplicity of notation. Which of the forms for S is used will be clear from the context. An analogous comment applies to the use of the same symbol S in (2.3) and (2.4).

For notational convenience, we shall adopt the following conventions: the brackets in the domain of any function, e.g., F : (A) → B, will indicate that the function F is only partial, i.e., it is not defined for every element in the domain A. The domain of F will be denoted by 𝒟(F) ⊂ A, and the range by ℛ(F) ⊂ B. Similarly, the domain and the range of S ⊂ X × Y will be denoted, respectively, by

𝒟(S) = {x : (∃y)((x, y) ∈ S)}   and   ℛ(S) = {y : (∃x)((x, y) ∈ S)}

For the sake of notational simplicity, 𝒟(S) = X is always assumed unless stated otherwise.

Definition 1.4. Given a general system S, let C be an arbitrary set and R a function, R : (C × X) → Y, such that

(x, y) ∈ S ↔ (∃c)[R(c, x) = y]

C is then a global state object or set, its elements being global states, while R is a global (systems-)response function (for S).
Theorem 1.1. Every system has a global-response function which is not partial, i.e.,

R : C × X → Y

PROOF. Let F = Y^X = {f : f : X → Y}. Let G = {f_c : c ∈ C} ⊆ F be such that f_c ∈ G ↔ f_c ⊆ S, where C is an index set of G. Let R : C × X → Y be such that R(c, x) = f_c(x). Then we claim that S = {(x, y) : (∃c)(y = R(c, x))}. Let S′ = {(x, y) : (∃c)(y = R(c, x))}. Let (x, y) ∈ S′ be arbitrary. Then y = R(c, x) = f_c(x) for some c ∈ C. Hence, (x, y) ∈ S because f_c ⊆ S. Therefore, S′ ⊆ S. Conversely, let (x, y) ∈ S be arbitrary. Since 𝒟(S) = X ∋ x, S is nonempty. Let f″ ∈ G. Let f = (f″∖{(x, f″(x))}) ∪ {(x, y)}. Then f ∈ F and f ⊆ S. Hence, f = f_c′ for some c′ ∈ C. Consequently, y = f_c′(x), i.e., (x, y) ∈ S′, and hence S ⊆ S′. Therefore, S = S′. Q.E.D.
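The construction used in the proof of Theorem 1.1 can be replayed mechanically on a finite relation: enumerate all functions f : X → Y that are selections of S (i.e., f ⊆ S), let C index them, and define R(c, x) = f_c(x). The particular S below is an invented toy example:

```python
from itertools import product

# Finite replay of the Theorem 1.1 construction: a non-partial
# global-response function R : C × X -> Y for S with D(S) = X.
X = (1, 2)
Y = ("a", "b")
S = {(1, "a"), (1, "b"), (2, "a")}

# G = all functions f : X -> Y that are selections of S (f ⊆ S).
G = [dict(zip(X, ys)) for ys in product(Y, repeat=len(X))
     if all((x, y) in S for x, y in zip(X, ys))]

C = range(len(G))          # the global state set: an index set for G
def R(c, x):               # the global-response function
    return G[c][x]

# The theorem's claim: S = {(x, y) : (∃c)(y = R(c, x))}.
S_prime = {(x, R(c, x)) for c in C for x in X}
assert S_prime == S
print(len(G))  # 2 selection functions, hence two global states
```

For this S the two selections are {1↦a, 2↦a} and {1↦b, 2↦a}; each global state c fixes one single-valued "mode" of the relation, which is exactly what Definition 1.4 asks of a state.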
In the preceding theorem, no additional requirements are imposed on either C or R. However, if R is required to have a certain property, the global response function, although it might still exist, cannot be defined on the entire C × X; i.e., R remains a partial function. Such is the case, e.g., when R is required to be causal. Since the case when R is not a partial function is of special importance, we shall adopt the following convention: R will be referred to as the global-response function only if it is not a partial function. Otherwise, it will be referred to as a partial global-response function.

(b) Abstract Linear System
Although many systems concepts can be defined solely by using the notion of a general system, the development of meaningful mathematical results is often possible only if additional structure is introduced. In order to avoid a proliferation of definitions, we shall, as a rule, introduce specific concepts on the same level of abstraction on which the mathematical results of interest can be developed; e.g., the concept of a dynamical system will be introduced only in the context of time systems. The concept of linearity, however, can be usefully introduced on the general systems level. We shall first introduce the notion of linearity, which is used as standard in this book.

Definition 1.5. Let 𝒜 be a field, X and Y linear algebras over 𝒜, and S a nonempty relation S ⊂ X × Y such that

(i) s ∈ S & s' ∈ S → s + s' ∈ S
(ii) s ∈ S & α ∈ 𝒜 → αs ∈ S

where + is the additive operation in X × Y and α ∈ 𝒜.† S is then an (abstract) complete linear system.

In various applications, one encounters linear systems that are not complete, e.g., a system described by a set of linear differential equations whose set of initial conditions is not a linear space. For the sake of simplicity, we shall consider in this book primarily the complete systems, and, therefore, every linear system will be assumed to be complete unless explicitly stated otherwise. This is hardly a loss of generality, since any incomplete linear system can be made complete by a perfectly straightforward and natural completion procedure.

The following theorem is fundamental for linear systems theory.

Theorem 1.2. Let X and Y be linear algebras over the same field 𝒜. S ⊂ X × Y is then a linear system if and only if there exists a global response function R : C × X → Y such that:

(i) C is a linear algebra over 𝒜;
(ii) there exists a pair of linear mappings R₁ : C → Y and R₂ : X → Y
† The operation + and the scalar multiplication on X × Y are defined by: (x, y) + (x̂, ŷ) = (x + x̂, y + ŷ) and α(x, y) = (αx, αy), where (x, y), (x̂, ŷ) ∈ X × Y and α ∈ 𝒜.
Chapter II   Basic Concepts

such that for all (c, x) ∈ C × X

    R(c, x) = R₁(c) + R₂(x)
PROOF. The if part is clear. Let us prove the only if part. First, we shall show that there exists a linear mapping R₂ : X → Y such that {(x, R₂(x)) : x ∈ X} ⊂ S. Let X₁ be a subspace of X and L₁ : X₁ → Y a linear mapping such that {(x, L₁(x)) : x ∈ X₁} ⊂ S. Such X₁ and L₁ always exist. Indeed, let (x̂, ŷ) ∈ S ≠ ∅; then X₁ = {αx̂ : α ∈ 𝒜} and L₁ : X₁ → Y such that L₁(αx̂) = αŷ are the desired ones. If X₁ = X, then L₁ is the desired linear mapping. If X₁ ≠ X, then L₁ can always be extended by Zorn's lemma so that X₁ = X is achieved. Let ℒ = {L₁} be the class of all linear mappings defined on subspaces of X such that when the subspace X₁ is the domain of L₁, {(x, L₁(x)) : x ∈ X₁} ⊂ S holds. Notice that ℒ is not empty. Let ≤ be an ordering on ℒ defined by: when L' and L'' are in ℒ, L' ≤ L'' iff L' ⊆ L''. Since a mapping is a relation between the domain and the codomain and since a relation is a set, the above definition is proper. Let P ⊂ ℒ be an arbitrary linearly ordered subset of ℒ. Let L° = ∪P, where ∪P is the union of the elements of P. We shall show that L° is in ℒ. Suppose (x, y) and (x, y') are elements of L°. Then, since L° = ∪P, there exist two mappings L and L' in P such that (x, y) ∈ L and (x, y') ∈ L'. Since P is linearly ordered, e.g., L ≤ L', (x, y) ∈ L' holds. Since L' is a mapping, y = y'; that is, L° is a mapping. Next, suppose (x', y') and (x'', y'') are in L°. Then the same argument implies that (x', y') ∈ L'' and (x'', y'') ∈ L'' for some L'' ∈ P. Since L'' is linear, (x' + x'', y' + y'') ∈ L'' ⊆ L°. Furthermore, if (x', y') ∈ L° and α ∈ 𝒜, then there exists L' in P such that (x', y') ∈ L'; that is, (αx', αy') ∈ L' ⊆ L° holds. Hence, L° is a linear mapping. Finally, if (x', y') ∈ L°, then (x', y') ∈ L' for some L' in P. Hence, (x', y') ∈ S, or L° ⊂ S. Therefore, L° ∈ ℒ. Consequently, L° is an upper bound of P in ℒ. We can then conclude by Zorn's lemma that there is a maximal element R₂ in ℒ. We claim that 𝒟(R₂) = X.
If it is not so, 𝒟(R₂) is a proper subspace of X. Then there is an element x̂ in X such that x̂ is not an element of 𝒟(R₂). Then X' = {αx̂ + x : α ∈ 𝒜 & x ∈ 𝒟(R₂)} is a linear subspace which includes 𝒟(R₂) properly. Notice that every element x' of X' is expressed in the form x' = αx̂ + x uniquely. As a matter of fact, if x' = αx̂ + x = βx̂ + y, then (α − β)x̂ = y − x. If α ≠ β, then x̂ = (α − β)⁻¹(y − x) ∈ 𝒟(R₂), which is a contradiction. By using this fact, we can define a linear mapping L' : X' → Y such that L'(αx̂ + x) = αŷ + R₂(x), where (x̂, ŷ) ∈ S and x ∈ 𝒟(R₂). Then L' is linear and {(x', L'(x')) : x' ∈ X'} ⊂ S, and R₂ is a proper subset of L', which contradicts the maximality of R₂. Hence, R₂ is the desired mapping. To complete the construction of R, let C be C = {(0, y) : (0, y) ∈ S}. C is, apparently, a linear space over 𝒜 when the addition and the scalar multiplication are defined as: (0, y) + (0, y') = (0, y + y') and α(0, y) = (0, αy), where α ∈ 𝒜. Let R₁ : C → Y be such that R₁((0, y)) = y. Then R₁ is a linear mapping. Let

    R(c, x) = R₁(c) + R₂(x)
We shall show that

    S = {(x, y) : (∃c)(c ∈ C & y = R(c, x))} ≡ S'

Suppose (x, y) ∈ S. Then (x, R₂(x)) ∈ S. Since S is linear, (x, y) − (x, R₂(x)) = (0, y − R₂(x)) ∈ S. Hence, (∃c)(c ∈ C & y = R₁(c) + R₂(x)); that is, S ⊆ S' holds. Conversely, suppose (x, R₁(c) + R₂(x)) ∈ S'. Since (0, R₁(c)) ∈ S and (x, R₂(x)) ∈ S and since S is linear,

    (x, R₂(x)) + (0, R₁(c)) = (x, R₁(c) + R₂(x)) ∈ S

Hence, S' ⊆ S holds. Q.E.D.
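As a concrete illustration of the theorem (a hypothetical example, not from the text), take S = {(x, (2x, x + c)) : x, c ∈ ℝ} ⊂ ℝ × ℝ². With C = ℝ, the decomposition R(c, x) = R₁(c) + R₂(x) can be sketched and checked numerically:

```python
# R1 and R2 below are the global state response and global input response
# for the made-up linear system S = {(x, (2x, x + c)) : x, c in R}.
def R1(c):            # linear in c:  C -> Y
    return (0.0, c)

def R2(x):            # linear in x:  X -> Y
    return (2.0 * x, x)

def R(c, x):          # global response: R(c, x) = R1(c) + R2(x)
    y1, y2 = R1(c)
    z1, z2 = R2(x)
    return (y1 + z1, y2 + z2)

# linearity of S: the sum of two input-output pairs is again in S
x, c, xp, cp = 1.0, 3.0, -2.0, 0.5
y, yp = R(c, x), R(cp, xp)
assert R(c + cp, x + xp) == (y[0] + yp[0], y[1] + yp[1])
```

The check exercises exactly condition (i) of Definition 1.5: adding the pairs (x, y) and (x', y') componentwise lands back in S.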
The fundamental character of the preceding theorem is illustrated by the fact that every result on linear systems developed in this book is based on it. We can now introduce the following definition.

Definition 1.6. Let S ⊂ X × Y be a linear system and R a mapping R : C × X → Y. R is termed a linear global-response function if and only if

(i) R is consistent with S, i.e.,

    (x, y) ∈ S ↔ (∃c)[y = R(c, x)]

(ii) C is a linear algebra over the field of X and Y;
(iii) there exist two linear mappings R₁ : C → Y and R₂ : X → Y such that for all (c, x) ∈ C × X

    R(c, x) = R₁(c) + R₂(x)

C is referred to as the linear global state object. The mapping R₁ : C → Y is termed the global state response, while R₂ : X → Y is the global input response.
Notice the distinction between the global-response function and the linear global-response function. The first concept requires only (i), while for the second, conditions (ii) and (iii) have to be satisfied. A linear system, therefore, can have a response function which is not linear. From Theorem 1.2 we have immediately the following proposition.
Proposition 1.1. A system is linear if and only if it has a linear global-response function.

The concept of a linear system as given by Definition 1.5 uses more than a "minimal" mathematical structure. The most abstract notion of a linear system consistent with the formalization approach is actually given by the following definition.
Definition 1.7. Let X be an (abstract) algebra with a binary operation · : X × X → X and a family of endomorphisms ᾱ = {α : X → X}; similarly, let Y have a binary operation ∗ : Y × Y → Y and a family β̄ = {β : Y → Y}. A function system S : X → Y is a general linear system if and only if there exists a one-to-one mapping ψ : ᾱ → β̄ such that:

(i) (∀x, x')[S(x · x') = S(x) ∗ S(x')]
(ii) (∀x)(∀α)[S(α(x)) = ψ(α)(S(x))]

There could be other concepts of a linear system with a structure between that of Definitions 1.5 and 1.7, e.g., by assuming that X and Y are modules rather than abstract linear spaces. We have not considered such "intermediate" concepts in this book because for some essential results the structure of an abstract linear space is needed. Actually, one might argue that the concept of linearity based on the module structure is not satisfactory because it is neither most abstract (such as Definition 1.7) nor sufficiently rich in structure to allow proofs of basic mathematical results such as Theorem 1.2.
2. GENERAL TIME AND DYNAMICAL SYSTEMS
(a) General Time System
In order to introduce the concept of a general time system, we have to formalize the notion of time. In accordance with the strategy announced in Chapter I, we have to define the notion of time by using minimal mathematical structure and such that it captures the most essential feature of the intuitive notion of time. This seems a very easy task, yet the decision at this juncture is quite crucial. The selection of structure for such a basic concept as the time set has important consequences for the entire subsequent development and for the richness and elegance of the mathematical results. We shall use the following notion.

Definition 2.1. A time set (for a general time system) is a linearly ordered (abstract) set. The time set will be denoted by T and the ordering in T by ≤.
Apparently, the minimal property of a time set is considered to be that its elements follow each other in an orderly succession. This reflects our intended usage of the concept of time for the study of the evolution of systems. No restrictions regarding cardinality are imposed on the time set. However, the time set might have some additional structure, e.g., that of an Abelian group. We shall introduce such additional assumptions when needed.
For notational convenience, T will be assumed to have the minimal element o. In other words, we assume that there exists a superset T̂ with a linear ordering ≤ and a fixed element denoted by o in T̂ such that T is defined by T = {t : t ≥ o}. We can introduce now the following definition.

Definition 2.2. Let A and B be arbitrary sets, T a time set, A^T and B^T the sets of all maps of T into A and B, respectively, X ⊂ A^T and Y ⊂ B^T. A general time system S on X and Y is a relation on X and Y, i.e., S ⊂ X × Y. A and B are called the alphabets of the input set X and the output set Y, respectively. X and Y are also termed time objects, while their elements x : T → A and y : T → B are abstract time functions. The values of x and y at t will be denoted by x(t) and y(t), respectively.

In order to study the dynamical behavior of a time system, we need to introduce the appropriate time segments. In this respect, we shall use the following notational convention. For every t, t' > t,
    T_t = {t' : t' ≥ t},   T^t = {t' : t' < t},   T_tt' = {t* : t ≤ t* < t'}
    T̄_tt' = T_tt' ∪ {t'},   T̄^t = T^t ∪ {t}

Corresponding to various time segments, the restrictions of x ∈ A^T will be defined as follows:

    x_t = x | T_t,   x^t = x | T^t,   x_tt' = x | T_tt',   x̄_tt' = x | T̄_tt'
    X_t = {x_t : x_t = x | T_t & x ∈ X},   X^t = {x^t : x^t = x | T^t & x ∈ X}
    X_tt' = {x_tt' : x_tt' = x | T_tt' & x ∈ X},   X(t) = {x(t) : x ∈ X}

The following conventions will also be used:

    x° = φ,   X° = {φ}

Based on the restriction operation, we shall introduce another operation called concatenation. Let x ∈ A^T and x* ∈ A^T. Then for any t we can define another element x̂ in A^T:

    x̂(τ) = x(τ) if τ < t,   x̂(τ) = x*(τ) if τ ≥ t

x̂ is represented by x̂ = x^t · x_t*. The restrictions of a pair s = (x, y) are defined componentwise, i.e., s_t = (x_t, y_t), s^t = (x^t, y^t), and s_tt' = (x_tt', y_tt').

We shall also use the following notational convention:

    X_t = X | T_t,   X^t = X | T^t,   X_tt' = X | T_tt'

and completely analogously for Y and for S, e.g.,

    S_t = S | T_t,   S^t = S | T^t,   S_tt' = S | T_tt'
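For a discrete time set T = {0, 1, 2, ...} the restriction and concatenation operations can be sketched as follows (representing a time function as a dict {t: value} is an assumption of this sketch, not the text's notation):

```python
def restrict(x, t, t_end=None):
    """x_t = x|T_t (or x_tt' = x|T_tt' when t_end is given)."""
    return {tau: v for tau, v in x.items()
            if tau >= t and (t_end is None or tau < t_end)}

def restrict_past(x, t):
    """x^t = x|T^t, the restriction to {t' : t' < t}."""
    return {tau: v for tau, v in x.items() if tau < t}

def concat(x_past, x_future):
    """x^t · x_t : concatenation of a past and a future restriction."""
    return {**x_past, **x_future}

x = {0: 'a', 1: 'b', 2: 'c', 3: 'd'}      # x : T -> A on T = {0, 1, 2, 3}

# splitting at t and concatenating recovers the original time function
assert concat(restrict_past(x, 2), restrict(x, 2)) == x
assert restrict(x, 1, 3) == {1: 'b', 2: 'c'}
```

The first assertion is the identity x = x^t · x_t that the concatenation convention is designed to provide.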
Definition 2.4. Let S be a time system S ⊂ A^T × B^T. The initial state object for S and the initial systems-response function are, respectively, the global state object and the global systems response of S. The initial systems response will be denoted by ρ_o, i.e., ρ_o : C_o × X → Y such that

    (x, y) ∈ S ↔ (∃c)[ρ_o(c, x) = y]

We can now introduce the following definition.
Definition 2.5. Let S be a time system and t ∈ T. The state object at t, denoted by C_t, is an initial state object for the restriction S_t; i.e., it is an abstract set such that there exists a function ρ_t : C_t × X_t → Y_t such that

    (x_t, y_t) ∈ S_t ↔ (∃c)[ρ_t(c, x_t) = y_t]

ρ_t is referred to as the (systems-)response function at t.

A family of all response functions for a given system, i.e.,

    ρ̄ = {ρ_t : C_t × X_t → Y_t & t ∈ T}

is referred to as a response family for S, while C̄ = {C_t : t ∈ T} is a family of state objects.
Definition 2.6. Let S be a time system S ⊂ X × Y, and ρ_t an arbitrary function such that ρ_t : C_t × X_t → Y_t. ρ_t will be termed consistent with S if and only if it is a response function at t for S, i.e.,

    (x_t, y_t) ∈ S_t ↔ (∃c)[ρ_t(c, x_t) = y_t]

Let

    S_t^ρ = {(x_t, y_t) : (∃c)(y_t = ρ_t(c, x_t))}

Then the consistency condition is expressed as

    S_t^ρ = S_t

Let ρ̄ = {ρ_t : C_t × X_t → Y_t} be a family of arbitrary functions. Then ρ̄ is consistent with a time system S if and only if ρ̄ is a (systems-)response family for S, i.e., for all t ∈ T

    S_t^ρ = S_t = S_o^ρ | T_t

Regarding the existence of a response family, we have the following direct consequence of Theorem 1.1.
Proposition 2.1. Every time system has a response family.

Not every family of arbitrary maps, however, can be a response family of a time system, as seen from the following theorem.

Theorem 2.1. Let ρ̄ = {ρ_t : C_t × X_t → Y_t & t ∈ T} be a family of arbitrary maps. There exists a time system S ⊂ X × Y consistent with ρ̄, that is, ρ̄ is a response family of S, if and only if for all t ∈ T the following conditions hold:
(P1) (∀c_o)(∀x_t)(∀x^t)(∃c_t)[ρ_t(c_t, x_t) = ρ_o(c_o, x^t · x_t) | T_t]

(P2) (∀c_t)(∀x_t)(∃c_o)(∃x^t)[ρ_t(c_t, x_t) = ρ_o(c_o, x^t · x_t) | T_t]
PROOF. First, we shall prove the if part. We have to prove that S_t^ρ = S_o^ρ | T_t is satisfied for every t ∈ T. Let (x_t, y_t) ∈ S_t^ρ be arbitrary. Then y_t = ρ_t(c_t, x_t) for some c_t ∈ C_t. Property (P2) implies that

    y_t = ρ_t(c_t, x_t) = ρ_o(c_o, x^t · x_t) | T_t

for some (c_o, x^t) ∈ C_o × X^t. Hence,

    (x_t, y_t) = (x^t · x_t, ρ_o(c_o, x^t · x_t)) | T_t

or (x_t, y_t) ∈ S_o^ρ | T_t. Therefore, we have S_t^ρ ⊆ S_o^ρ | T_t. Conversely, let (x, y) ∈ S_o^ρ be arbitrary. Then

    y = ρ_o(c_o, x) = ρ_o(c_o, x^t · x_t)

for some c_o. Property (P1) implies, then, that

    y | T_t = ρ_o(c_o, x^t · x_t) | T_t = ρ_t(c_t, x_t)

for some c_t, or (x, y) | T_t ∈ S_t^ρ. Hence, S_o^ρ | T_t ⊆ S_t^ρ. Combining the first result with the present one, we have S_t^ρ = S_o^ρ | T_t.

Next, consider the only if part. Let x and c_o be arbitrary. Then (x, ρ_o(c_o, x)) ∈ S_o^ρ. Since S_o^ρ | T_t ⊆ S_t^ρ, we have that

    (x, ρ_o(c_o, x)) | T_t = (x_t, ρ_o(c_o, x^t · x_t) | T_t) ∈ S_t^ρ

or

    ρ_o(c_o, x^t · x_t) | T_t = ρ_t(c_t, x_t)

for some c_t. Hence, we have (P1); property (P2) is obtained similarly from the inclusion S_t^ρ ⊆ S_o^ρ | T_t. Q.E.D.
(b) General Dynamical System
The concept of a dynamical system is related to the way a system evolves in time. It is necessary, therefore, to establish a relationship between the values of the system objects at different times. For this purpose, the response function is not sufficient, and another family of functions has to be introduced.
Definition 2.7. A time system S ⊂ X × Y is a dynamical system (or has a dynamical system representation) if and only if there exist two families of mappings

    ρ̄ = {ρ_t : C_t × X_t → Y_t & t ∈ T}

and

    φ̄ = {φ_tt' : C_t × X_tt' → C_t' & t, t' ∈ T & t' > t}

such that

(i) ρ̄ is a response family consistent with S;
(ii) the functions φ_tt' in the family φ̄ satisfy the following conditions:

    (α) ρ_t(c_t, x_t) | T_t' = ρ_t'(φ_tt'(c_t, x_tt'), x_t'), where x_t = x_tt' · x_t'
    (β) φ_tt''(c_t, x_tt'') = φ_t't''(φ_tt'(c_t, x_tt'), x_t't''), where x_tt'' = x_tt' · x_t't''

φ_tt' is termed the state-transition function (on T_tt'), while φ̄ will be referred to as the state-transition family. φ_tt' is defined for t < t'; however, the following convention:

    (γ) φ_tt(c_t, x_tt) = c_t, for every t ∈ T

will be used. Since a dynamical system is completely specified by the two families of mappings ρ̄ and φ̄, the pair (ρ̄, φ̄) itself will be referred to as a dynamical system representation or simply as a dynamical system. If a response family has a consistent state-transition family, it will be called a dynamical systems-response family. It will be shown that not every response family is a dynamical systems-response family.

Condition (α) represents the consistency property of the state-transition family (with the given response family), while (β) represents the state-transition composition property (also referred to as the semigroup property). Conditions (α) and (β) are rather strongly related. Actually, under fairly general conditions, property (β) is implied by (α), so that only the consistency of φ̄ with a response family ρ̄ is required in order for φ̄ to be qualified as a state-transition family. To arrive at these conditions, we need the following definition.
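As a discrete-time illustration (a made-up example, not from the text), a simple summing system has a natural state-transition family; the sketch below checks the consistency condition (α) and the semigroup property (β) numerically:

```python
def rho(t, c, x):
    """Response at t: y(tau) = c + sum of x(k) for t <= k <= tau."""
    y, acc = {}, c
    for tau in sorted(x):
        acc += x[tau]
        y[tau] = acc
    return y

def phi(t, t_prime, c, x_seg):
    """State transition on T_tt': the state absorbs the inputs on [t, t')."""
    return c + sum(x_seg.values())

x = {0: 1, 1: 2, 2: 3, 3: 4}
c0, t, tp, tpp = 5, 0, 2, 4
seg = lambda a, b: {k: v for k, v in x.items() if a <= k < b}

# (alpha) consistency: rho_t(c, x_t)|T_t' = rho_t'(phi_tt'(c, x_tt'), x_t')
full = rho(t, c0, seg(t, 4))
tail = rho(tp, phi(t, tp, c0, seg(t, tp)), seg(tp, 4))
assert {k: full[k] for k in full if k >= tp} == tail

# (beta) semigroup: phi_tt'' = phi_t't'' composed with phi_tt'
assert phi(t, tpp, c0, seg(t, tpp)) == \
       phi(tp, tpp, phi(t, tp, c0, seg(t, tp)), seg(tp, tpp))
```

For this system the state at t' is simply the accumulated sum, which is why (β) reduces to associativity of addition over the two subsegments.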
Definition 2.8. Let ρ̄ be a response family consistent with a time system S. ρ̄ is a reduced response family if and only if for all t ∈ T

    (∀c_t)(∀ĉ_t)[(∀x_t)(ρ_t(c_t, x_t) = ρ_t(ĉ_t, x_t)) → c_t = ĉ_t]

The reduction of ρ̄, i.e., of the associated state objects C̄ = {C_t : t ∈ T}, does not represent a significant restriction. It only requires that if two states at any time t ∈ T lead to the identical future behavior of the system, they ought to be recognized as being the same. We have now the following theorem.

Theorem 2.2. Let ρ̄ = {ρ_t : C_t × X_t → Y_t} be a response family and φ̄ = {φ_tt' : C_t × X_tt' → C_t'} a family of functions consistent with ρ̄, i.e., satisfying condition (α) from Definition 2.7:

    ρ_t(c_t, x_t) | T_t' = ρ_t'(φ_tt'(c_t, x_tt'), x_t')

Then if ρ̄ is reduced, φ̄ has the state-transition composition property, i.e., condition (β) from Definition 2.7 is satisfied.

PROOF.
From the consistency of φ̄, i.e., condition (α), it follows, for t ≤ t' ≤ t'', that

    ρ_t(c_t, x_t) | T_t' = ρ_t'(φ_tt'(c_t, x_tt'), x_t')
    ρ_t(c_t, x_t) | T_t'' = ρ_t''(φ_tt''(c_t, x_tt''), x_t'')
    ρ_t'(φ_tt'(c_t, x_tt'), x_t') | T_t'' = ρ_t''(φ_t't''(φ_tt'(c_t, x_tt'), x_t't''), x_t'')

Since

    ρ_t'(φ_tt'(c_t, x_tt'), x_t') | T_t'' = (ρ_t(c_t, x_t) | T_t') | T_t'' = ρ_t(c_t, x_t) | T_t''

we have

    ρ_t''(φ_tt''(c_t, x_tt''), x_t'') = ρ_t''(φ_t't''(φ_tt'(c_t, x_tt'), x_t't''), x_t'')

for every x_t'' ∈ X_t''. Since ρ̄ is reduced, we have

    φ_tt''(c_t, x_tt'') = φ_t't''(φ_tt'(c_t, x_tt'), x_t't'')    Q.E.D.
It should be pointed out that in the definition of a time system, both input and output objects are defined on the same time set. This is obviously not the most general case (e.g., an output can be defined as a point rather than as a function). We have selected this approach because it provides a convenient framework for the results of traditional interest in systems theory (e.g., the realization theory presented in Chapter III). The properties and behavior of systems that have input and output objects defined on different time sets can be derived from the more complete case we are considering in this book.

(c) General Dynamical Systems in State Space

The concept of a state object as introduced so far has a major deficiency because there is no explicit requirement that the states at any two different times are related; i.e., it is possible, in general, that for any t ≠ t', C_t ∩ C_t' = ∅.
To use the potential of the state concept fully, the states at different times ought to be represented as related in an appropriate manner. It should be possible, e.g., to recognize when the system has returned into the "same" state it was in before, or has remained in the same state, i.e., did not change at all. In short, the equivalence between states at different times ought to be recognized. What is needed then is a set C such that C_t = C for every t ∈ T. Such a set would represent a state space for the system. At any time the state of the system is then an element of the state space, and the dynamics (i.e., change in time) of the system for any given input can be represented as a mapping of the state space into itself. This consideration leads to the following definition.

Definition 2.9. Let S be a time system S ⊂ X × Y and C an arbitrary set. C is a state space for S if and only if there exist two families of functions ρ̄ = {ρ_t : C × X_t → Y_t} and φ̄ = {φ_tt' : C × X_tt' → C} such that

(i) for all t ∈ T, S_t ⊆ S_t^ρ and S_o^ρ = {(x, y) : (∃c)(y = ρ_o(c, x))} = S;
(ii) for all t, t', t'' ∈ T

    (α) ρ_t(c, x_t) | T_t' = ρ_t'(φ_tt'(c, x_tt'), x_t')
    (β) φ_tt''(c, x_tt'') = φ_t't''(φ_tt'(c, x_tt'), x_t't'')
    (γ) φ_tt(c, x_tt) = c

where x_t = x_tt' · x_t' and x_tt'' = x_tt' · x_t't''. S is then a dynamical system in the state space C.

Notice that, in general, S_t is a proper subset of S_t^ρ. This is so because ρ_t is defined on the entire state space C, while the system might not accept all states at any particular time; i.e., the set of possible states might be restricted at a specified instant of time. This leads to the following definition.

Definition 2.10. A dynamical system in a state space C is full if and only if S_t = S_t^ρ for all t ∈ T.
Many full dynamical systems, e.g., those that are also linear and time invariant, will be considered in this book. However, in general, a system need not be full; e.g., even a finite automaton is, in general, not a full dynamical system. A simple example of such a case is given by a Mealy-type automaton specified by:

    Input alphabet: {1},   Output alphabet: {1, 2},   State set: {1, 2}

The state transition and the output function of the system are given by the state-transition diagram in Fig. 2.1.
FIG. 2.1
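The transition and output tables of Fig. 2.1 are given only in the diagram, so the tables below are assumed for illustration rather than taken from the text; the sketch merely shows how a Mealy-type automaton over these alphabets defines a response from each state.

```python
# Hypothetical transition/output tables for a Mealy automaton with
# input alphabet {1}, output alphabet {1, 2}, state set {1, 2}.
delta = {(1, 1): 2, (2, 1): 1}     # next state for (state, input)
lam   = {(1, 1): 1, (2, 1): 2}     # output for (state, input)

def rho(c, xs):
    """Response from state c for an input string xs over the alphabet {1}."""
    ys = []
    for a in xs:
        ys.append(lam[(c, a)])
        c = delta[(c, a)]
    return ys

assert rho(1, [1, 1, 1]) == [1, 2, 1]
```

Distinct starting states yield distinct responses here, which illustrates why the set of states actually reachable at a later time can be a proper subset of the state set, i.e., why such a system need not be full.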
Throughout this book, when a state space is used to specify a system, response-function consistency is taken to mean condition (i) of Definition 2.9 and not condition (i) of Definition 2.7. The fact that a linear time-invariant system is a full dynamical system is an important property of that class of systems and is responsible for many of their convenient properties (see Chapter VIII).
3. AUXILIARY FUNCTIONS AND SOME BASIC CLASSIFICATION OF SYSTEMS
(a) Auxiliary Functions
Definition of a system as a relation, S ⊂ X × Y, or of a time system as a relation on time objects, S ⊂ A^T × B^T, provides a starting point for the development of a general theory of systems. In order to be able to study the behavior and properties of a given system or to investigate more specific characterizations of various systems, it is necessary to introduce additional concepts. A system is, in general, a relation, and one cannot determine the output of the system when a given input is applied. In order to be able to do that, the concept of an initial state object is introduced so that the output can be determined whenever an input-initial state pair is given; i.e., the system is represented as a function, termed the initial-response function, ρ_o. This, however, is not enough for an efficient specification of the outputs. For instance, when S is a time system and ρ_o is a given initial-response function, for any given x ∈ X the output is given by ρ_o(c_o, x) = y, where x : T → A and y : T → B. ρ_o is often a function of very high cardinality; e.g., when T is an infinite set, it specifies the output only in the abstract. What is needed for an efficient specification of the output is to find some simpler functions, preferably of lower cardinality, which can be used to characterize the systems behavior, e.g., by defining ρ_o recursively. For the class of time systems, these functions can be obtained in a convenient way by considering the restrictions on subsets of T. There is a whole collection of such functions, which we shall term the auxiliary functions. We shall consider functions that have been traditionally used in the definition of specific types of systems. We have already seen two
auxiliary functions: the response function ρ_t : C_t × X_t → Y_t and the state-transition function φ_tt' : C_t × X_tt' → C_t'. We shall introduce in this section some additional auxiliary functions, which will be used subsequently to characterize various systems properties and to provide more efficient specification of the systems-response function.

(i) Output-Generating Function

One way to provide a more effective method to specify the systems response is to give a "procedure" in terms of a function which will generate the value of the output at any given time; i.e., for any given x^t and c_o, the required function will give y(t). If such a function is given, the output y : T → B is defined for any given pair (x, c_o) in the sense that y(t), the value of that output at any time t ∈ T, can be determined. This leads to the following definition.
Definition 3.1. Given a response family ρ̄ for a time system, let μ_tt' be a relation

    μ_tt' ⊂ C_t × X̄_tt' × Y(t')

such that

    (c_t, x̄_tt', y(t')) ∈ μ_tt' ↔ (∃x_t)(∃y_t)[y_t(t') = y(t') & y_t = ρ_t(c_t, x_t) & x̄_tt' = x_t | T̄_tt']

When μ_tt' is a function such that

    μ_tt' : C_t × X̄_tt' → Y(t')

it is termed an output-generating function on T̄_tt'. μ̄ = {μ_tt' : t, t' ∈ T & t' ≥ t} is termed an output-generating family (Fig. 3.1).

FIG. 3.1

Notice that the input restriction in the domain of μ_tt' is defined on T̄_tt' = T_tt' ∪ {t'} rather than on T_tt'.

Apparently, the relation μ_tt' exists and is well defined for any general time system. However, the existence of an output-generating function, i.e., μ_tt' as a function, requires certain conditions to be satisfied.
(ii) Output Function

Time evolution of a dynamical system is customarily described in terms of the state transition, and it is of interest to relate the changes in states to the changes in outputs; specifically, the state at any time t ∈ T ought to be related with the value of the output at that time. This leads to the following definition.

Definition 3.2. Let S be a time system with the response family ρ̄ and λ_t a relation

    λ_t ⊂ C_t × X(t) × Y(t)

such that

    (c_t, x(t), y(t)) ∈ λ_t ↔ (∃x_t)(∃y_t)[y_t = ρ_t(c_t, x_t) & x(t) = x_t(t) & y(t) = y_t(t)]

When λ_t is a function such that

    λ_t : C_t × X(t) → Y(t)

it is termed an output function at t, while λ̄ = {λ_t : t ∈ T} is an output-function family (Fig. 3.2).
FIG. 3.2
Apparently, λ_t is a well-defined relation and exists for any general time system. The conditions for the existence of an output function will be presented in the next section.
(iii) State-Generating Function

For a dynamical system, the state at any time t is determined by the initial state c_o and the initial input restriction x^t. However, for certain classes of systems, there exists a time t̂ ∈ T such that the state at any subsequent time is determined solely by the past input and output restrictions; i.e., no reference to the state is needed. This leads to the following definition.
Definition 3.3. Let ρ̄ be a response family for a time system S and η^t a relation

    η^t ⊂ X^t × Y^t × C_t

such that

    (x^t, y^t, c_t) ∈ η^t ↔ (∀x_t)(∀y_t)[(x^t · x_t, y^t · y_t) ∈ S → y_t = ρ_t(c_t, x_t)]

When η^t is a function such that

    η^t : X^t × Y^t → C_t

it is termed a state-generating function at t, while η̄ = {η^t : X^t × Y^t → C_t & t ∈ T} is termed a state-generating family (Fig. 3.3).

FIG. 3.3

Again, although η^t is always defined, the existence of a state-generating family requires certain conditions which, in this case, are of a more special kind.

(b) Some Classification of Time Systems
The auxiliary functions for any t E T are, in general, different. However, when some of them are the same for all t E T or are obtained from the same function by appropriate restrictions, various forms of time invariance can be introduced.
(i) Static and Memoryless Systems

The first type of time invariance refers to the relationship between the system objects at any given time and is intimately related with the response function.

Definition 3.4. A system S is static if and only if there exists an initial response function ρ_o : C_o × X → Y for S such that for all t ∈ T

    (∀c_o)(∀x)(∀x̂)[x(t) = x̂(t) → ρ_o(c_o, x)(t) = ρ_o(c_o, x̂)(t)]
In other words, the system is static if and only if for any t ∈ T there exists a map K_t : C_o × X(t) → Y(t) such that

    (x, y) ∈ S ↔ (∃c_o ∈ C_o)(∀t)(y(t) = K_t(c_o, x(t)))

Any time system that is not static is termed a dynamic system. Intuitively, a system is static if the value of its output at any time t depends solely on the current value of the input and the state from which the evolution initially started; i.e., if x(t) becomes constant over a period of time, y(t) becomes constant too. On the other hand, the output of a dynamic system depends not only on the current value of the input but also on the past "history" of that input. Notice that, in general, reference to the initial state had to be made (Fig. 3.4).
FIG. 3.4
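The contrast can be sketched with two made-up systems (not from the text): for an input that is constant over a period, the static system's output is constant over that period, while the dynamic system's output still evolves with the input's history.

```python
def static_sys(c0, x):
    """Static: y(t) = K(c0, x(t)), the same map K at every t."""
    K = lambda c, a: c * a            # K_t : C_o × X(t) -> Y(t)
    return [K(c0, a) for a in x]

def dynamic_sys(c0, x):
    """Dynamic: y(t) depends on the whole past of x (a running sum)."""
    y, acc = [], c0
    for a in x:
        acc += a
        y.append(acc)
    return y

x = [1, 1, 1, 1]                      # input constant over a period...
assert static_sys(2, x) == [2, 2, 2, 2]      # ...static output is constant
assert dynamic_sys(2, x) == [3, 4, 5, 6]     # ...dynamic output keeps changing
```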
It should be noticed that a distinction is made between a dynamic and a dynamical system. For the former, it is sufficient that the system is not static, while the latter requires that a state-transition family is defined. This, perhaps, is not the most fortunate choice of terminology; however, it has been selected because it corresponds to the common usage in the already established specialized theories. Related to a static system is the following notion.
Definition 3.5. A time system S is memoryless if and only if it is a static system such that the maps K_t do not depend on the initial state; i.e., there exists a mapping K_t* : X(t) → Y(t) such that K_t*(x(t)) = K_t(c_o, x(t)) (Fig. 3.5).

FIG. 3.5

Apparently, a memoryless system is completely characterized by the map K_t* : A → B. A system that does not satisfy Definition 3.5 is termed a system with memory.

(ii) Time-Invariant Dynamical Systems
The second kind of time invariance refers to how system responses at two different times compare. To introduce the appropriate concepts, we shall assume for this kind of time invariance that the time set T is a right segment of a linearly ordered Abelian group T̂, whose group operation (addition) will be denoted by +. More precisely, T = {t : t ≥ o}, where o is the identity element of T̂ and the addition is related with the linear ordering as

    t ≤ t' ↔ t' − t ≥ o
The time set T defined above will be referred to as the time set for stationary systems. For each t ∈ T̂, let F^t denote the operator defined on time functions by

    (∀t')[F^t(x)(t') = x(t' − t)]
Notice that F^t is defined for t < o as well as t ≥ o, and that whether or not F^t is meaningful depends on its argument. In general, F^t(x_t't'') ∈ X_(t'+t)(t''+t) holds whenever F^t(x_t't'') is defined. F^t is termed the shift operator; it simply shifts a given time function by the time interval indicated by the superscript, leaving it otherwise completely unchanged (Fig. 3.6). We shall use the same symbol F^t for the shift operator in Y, and F^t is defined for S elementwise such that

    F^t(x, y) = (F^t(x), F^t(y))
FIG. 3.6
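For discrete time functions, the shift operator can be sketched as follows (the dict representation is an assumption of this sketch):

```python
def F(t, x):
    """Shift operator: F^t(x)(t') = x(t' - t), i.e., move every sample by t."""
    return {tp + t: v for tp, v in x.items()}

x_seg = {2: 'a', 3: 'b'}              # a restriction defined on {2, 3}
assert F(1, x_seg) == {3: 'a', 4: 'b'}        # now defined on {3, 4}
assert F(-2, x_seg) == {0: 'a', 1: 'b'}       # F^t is defined for t < 0 as well
assert F(-1, F(1, x_seg)) == x_seg            # F^(-t) undoes F^t
```

As noted above, shifting a restriction defined on a segment yields a function on the correspondingly shifted segment.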
We can introduce now the following definition.

Definition 3.6. A time system defined on the time set for stationary systems is fully stationary if and only if (Fig. 3.7)

    (∀t)[t ∈ T → F^t(S) = S_t]

and stationary if and only if

    (∀t)(∀t' ≥ t)[t, t' ∈ T → S_t' ⊆ F^(t'−t)(S_t)]

FIG. 3.7
Apparently, if a system is fully stationary, F^{−t}(S_t) = F^{−t'}(S_{t'}) for any t ≥ t', t, t' ∈ T; i.e., starting from any given time, its future evolution is the same except for the shift over the appropriate time interval. When some given input and output objects X and Y satisfy the conditions

(∀t)[X_t = F^t(X)]  and  (∀t)[Y_t = F^t(Y)]

they will be referred to as the objects for a stationary system.
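The two stationarity conditions can be exercised on a small discrete-time model. The sketch below is illustrative and not from the text: it assumes T = {0, ..., N−1}, represents time functions as tuples indexed by absolute time, and compares F^t(S) with S_t on the finite window where both are defined; the example systems are invented.

```python
from itertools import product

def restriction_pairs(S, t):
    """S_t: every input-output pair of S restricted to T_t = {t, ..., N-1}."""
    return {(x[t:], y[t:]) for (x, y) in S}

def shifted_pairs(S, t, N):
    """F^t(S) viewed on the window {t, ..., N-1}: F^t(x)(t') = x(t' - t)."""
    return {(x[:N - t], y[:N - t]) for (x, y) in S}

def fully_stationary(S, N):
    """Check F^t(S) = S_t for every t, on the common finite window."""
    return all(shifted_pairs(S, t, N) == restriction_pairs(S, t)
               for t in range(N))

N = 4
inputs = list(product((0, 1), repeat=N))

# Memoryless system y(t) = 1 - x(t): fully stationary.
S_memoryless = {(x, tuple(1 - v for v in x)) for x in inputs}

# A system whose output marks absolute time 0: not stationary.
S_clocked = {(x, (1,) + (0,) * (N - 1)) for x in inputs}
```

On this model, `fully_stationary(S_memoryless, N)` holds while `fully_stationary(S_clocked, N)` fails, since the clocked system distinguishes the absolute origin of time.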
3. Auxiliary Functions and Some Basic Classification of Systems
When the systems response for a time system is given, we have the following definition.

Definition 3.7. Let (ρ̄, φ̄) be a dynamical system defined on the time set for stationary systems, with C_t = C for every t ∈ T. The system is time invariant if and only if

(∀t)(∀c)(∀x_t)[ρ_t(c, x_t) = F^t(ρ_0(c, F^{−t}(x_t)))]

and

(∀t)(∀t')(∀c)(∀x_{tt'})[φ_{tt'}(c, x_{tt'}) = φ_{0τ}(c, F^{−t}(x_{tt'}))]

where t' = τ + t, i.e., φ_{tt'} = φ_{0τ} ∘ F^{−t}.
Apparently, for a time-invariant system, the state-transition function at any time can be obtained from the initial transition function by applying the shift operator. Regarding the relationship between stationarity and time invariance, the following is an immediate consequence of the definitions.
Proposition 3.1. A system with a time-invariant response (and therefore a time-invariant dynamical system) is stationary.
4. CAUSALITY
Causality refers essentially to the ability to predict the outcome or consequences of some events in the future; in other words, one has a causal representation of a phenomenon if one can recognize the associated causes and effects and has a way to describe how the causes result in effects. Given a general system S ⊂ V_1 × ⋯ × V_n, causality is introduced in the description of S in three steps:

(i) It is established which objects are inputs and which are outputs.
(ii) It is recognized which output objects depend explicitly on other outputs and only implicitly on the system inputs.
(iii) A description of the time evolution of the system is provided such that the value of the output at any time depends solely on the past, i.e., the preceding state-input pair.

We shall be concerned in this section with the third aspect, which can also be referred to as time causality; it is the most pertinent causality concept for the study of time systems.

(a) Time Causality Concepts
There are two causality-type concepts. The first is based on a given initial state object, or rather a given initial-response function, and is also referred to as nonanticipation. The second is given solely in terms of the inputs and outputs and is referred to as past-determinacy. The adjective "causal" will be used as a generic term covering both cases.

(i) Nonanticipation
Let ρ_0 be an initial-response function for a time system S. We have then the following definition.

Definition 4.1. An initial systems-response function ρ_0 : C_0 × X → Y for a time system S ⊂ X × Y is nonanticipatory if and only if

(∀t)(∀c_0)(∀x)(∀x̂)[x | T̄^t = x̂ | T̄^t → ρ_0(c_0, x) | T̄^t = ρ_0(c_0, x̂) | T̄^t]

In a nonanticipatory system, the changes in the output cannot precede ("anticipate") the changes in the input (Fig. 4.1). For certain classes of systems, the concept of nonanticipation can be strengthened as follows.
FIG. 4.1
Definition 4.2. An initial systems-response function ρ_0 : C_0 × X → Y is strongly nonanticipatory if and only if

(∀t)(∀c_0)(∀x)(∀x̂)[x | T^t = x̂ | T^t → ρ_0(c_0, x) | T̄^t = ρ_0(c_0, x̂) | T̄^t]
Notice that Definitions 4.1 and 4.2 refer to time systems rather than to dynamical systems. The difference between the nonanticipatory and the strongly nonanticipatory response functions is that in the latter the present value of the output, y(t), does not depend upon the present value of the input, x(t), since the restriction in the antecedent of Definition 4.2 is on T^t, while in Definition 4.1 it is on T̄^t = T^t ∪ {t}; in both cases the output is restricted to T̄^t. It should also be pointed out that ρ_0 is defined as a full function. This is an important restriction in the case of nonanticipation because it prevents some systems from having a nonanticipatory systems-response function. We shall therefore introduce the following definition.

Definition 4.3. Let R ⊂ C_0 × X and ρ_0 : R → Y. ρ_0 is an incomplete nonanticipatory initial systems response of S if and only if

(i) ρ_0 is consistent with S, i.e.,

(x, y) ∈ S ↔ (∃c_0)[ρ_0(c_0, x) = y & (c_0, x) ∈ R]

(ii) (∀t)(∀c_0)(∀x)(∀x̂)[(c_0, x) ∈ R & (c_0, x̂) ∈ R & x | T̄^t = x̂ | T̄^t → ρ_0(c_0, x) | T̄^t = ρ_0(c_0, x̂) | T̄^t]
Definition 4.4. We shall say that the system S is (strongly) nonanticipatory if and only if it has a complete (strongly) nonanticipatory initial-response function.
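For a finite time set, Definitions 4.1 and 4.2 reduce to a direct check over all pairs of arguments. A minimal sketch, assuming T = {0, ..., N−1}, tuple-valued time functions, and a response function given as a finite table (the delay and identity examples are invented):

```python
from itertools import product

def nonanticipatory(rho0, N, strong=False):
    """Check Definition 4.1 (or 4.2 when strong=True) on a finite table
    rho0: dict mapping (c0, x) -> y, with x, y tuples of length N.
    The antecedent restricts inputs to T-bar^t = {0, ..., t}, or to
    T^t = {0, ..., t-1} in the strong case; outputs to T-bar^t."""
    items = list(rho0.items())
    for t in range(N):
        k = t if strong else t + 1
        for (c0, x), y in items:
            for (c0b, xb), yb in items:
                if c0 == c0b and x[:k] == xb[:k] and y[:t + 1] != yb[:t + 1]:
                    return False
    return True

N = 3
X = list(product((0, 1), repeat=N))

delay = {(0, x): (0,) + x[:-1] for x in X}   # y(t) = x(t-1), y(0) = 0
ident = {(0, x): x for x in X}               # y(t) = x(t)
```

The delay is strongly nonanticipatory; the identity is nonanticipatory but not strongly so, since y(t) depends on the present input value x(t).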
(ii) Past-Determinacy

Definition 4.5. A time system S ⊂ A^T × B^T is past-determined from t̂ if and only if there exists t̂ ∈ T such that (see Fig. 4.2)

(i) (∀(x, y) ∈ S)(∀(x̂, ŷ) ∈ S)(∀t ≥ t̂)[((x^t̂, y^t̂) = (x̂^t̂, ŷ^t̂) & x_{t̂t} = x̂_{t̂t}) → y_{t̂t} = ŷ_{t̂t}]

(ii) (∀(x^t̂, y^t̂))(∀x_t̂)(∃y_t̂)[(x^t̂, y^t̂) ∈ S^t̂ → (x^t̂ · x_t̂, y^t̂ · y_t̂) ∈ S]

FIG. 4.2

Past-determinacy means that there exists t̂ ∈ T such that for any t ≥ t̂, the future evolution of the system is determined solely by the past observations, and there is no need to refer to an auxiliary set such as, e.g., the initial state object. Condition (ii), which will be referred to as the completeness property, is introduced as a mathematical convenience.
(b) Existence of Causal-Response Family

Theorem 4.1. Every time system has an incomplete nonanticipatory initial systems response.

PROOF. Let ≅ ⊂ S × S be the relation such that (x, y) ≅ (x̂, ŷ) if and only if y = ŷ. Then, apparently, ≅ is an equivalence relation. Let S/≅ = {[s]} = C_0, where [s] = {s* : s* ≅ s & s* ∈ S}.† Let ρ_0 : C_0 × X → Y be such that

ρ_0([s], x) = y ↔ (x, y) ∈ [s]

† In this book the following convention will be used for a quotient set. Let E be an equivalence relation on a set X. Then the quotient set X/E will be represented by X/E = {[x]}, where [x] is the equivalence class of x, i.e.,

[x] = {x* : (x, x*) ∈ E & x* ∈ X}

and [·] will be considered as the natural mapping, i.e., [·] : X → X/E.

Notice that ρ_0 is properly defined, because if (x, y) ∈ [s] and (x, y') ∈ [s], then y = y'. In general, ρ_0 is a partial function. First, we shall show that

S = {(x, y) : (∃c)(c ∈ C_0 & y = ρ_0(c, x))} = S'

If (x, y) ∈ S, then ρ_0([(x, y)], x) = y by definition. Hence, (x, y) ∈ S', or S ⊆ S'. Conversely, if (x, y) ∈ S', then y = ρ_0([s], x) for some [s] ∈ C_0. Then (x, y) ∈ [s] follows from the definition of ρ_0. Hence, S' ⊆ S. Furthermore, whenever the values ρ_0([s], x) and ρ_0([s], x') are defined, ρ_0([s], x) = ρ_0([s], x') for any x and x'. Hence, condition (ii) in Definition 4.3 is trivially satisfied. Q.E.D.

Theorem 4.1 cannot be extended to the full initial systems-response function, i.e., to the case when ρ_0 is a full function. There are time systems that do not have a (complete) nonanticipatory initial systems response as defined in Definition 4.1; in other words, the requirement that the initial response be a full function prevents a causal representation of the system in the sense of nonanticipation. Such systems can either be considered to be essentially noncausal, or it can be assumed that only an incomplete description of the system is available and that the noncausality is due to having only partial information. This can best be shown by the example given in Fig. 4.3.
FIG.4.3
Consider a system S which has only two elements, S = {(x_1, y_1), (x_2, y_2)}, as shown in Fig. 4.3. Since the initial segments of x_1 and x_2 are the same, while those of y_1 and y_2 are different, the initial state object ought to have at least two elements if we want to have a nonanticipatory initial-response function. Let C_0 = {c, c'} with ρ_0(c, x_1) = y_1 and ρ_0(c', x_2) = y_2. If ρ_0 is a full function, (c, x_2) is also in the domain of ρ_0. Therefore, either ρ_0(c, x_2) = y_1 or ρ_0(c, x_2) = y_2. But ρ_0(c, x_2) = y_1 implies (x_2, y_1) ∈ S, i.e., ρ_0 is not consistent with S, while ρ_0(c, x_2) = y_2 violates the nonanticipation condition in Definition 4.1, since the initial segments of x_1 and x_2 are the same while those of y_1 and y_2 are not. The system S does not have, therefore, a complete nonanticipatory response function.

Definition 4.6. A response family ρ̄ = {ρ_t : t ∈ T} is called nonanticipatory if and only if every ρ_t is an initial nonanticipatory response function of S_t.

Theorem 4.2. A time system has a nonanticipatory response family if and only if it has an initial nonanticipatory response function.

PROOF. The only if part is obvious. Let us consider the if part. Let ρ_0 : C_0 × X → Y be an initial nonanticipatory systems response. Let C_t = C_0 × X^t and ρ_t : C_t × X_t → Y_t be such that if c_t = (c_0, x̂^t), then

ρ_t(c_t, x_t) = ρ_0(c_0, x̂^t · x_t) | T_t,  for t ∈ T
Then the consistency conditions in Theorem 2.1 are trivially satisfied, i.e., ρ̄ = {ρ_t : t ∈ T} is a response family. Furthermore, suppose x_t | T_{tt'} = x_t' | T_{tt'} for t' ≥ t. Then, if c_t = (c_0, x̂^t),

ρ_t(c_t, x_t) | T_{tt'} = ρ_0(c_0, x̂^t · x_t) | T_{tt'}

and

ρ_t(c_t, x_t') | T_{tt'} = ρ_0(c_0, x̂^t · x_t') | T_{tt'}

Since ρ_0 is nonanticipatory and x̂^t · x_t | T^{t'} = x̂^t · x_t' | T^{t'}, we have

ρ_t(c_t, x_t) | T_{tt'} = (ρ_0(c_0, x̂^t · x_t) | T^{t'}) | T_{tt'} = (ρ_0(c_0, x̂^t · x_t') | T^{t'}) | T_{tt'} = ρ_0(c_0, x̂^t · x_t') | T_{tt'} = ρ_t(c_t, x_t') | T_{tt'}

Hence, ρ_t is nonanticipatory. Q.E.D.
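Both the quotient construction in the proof of Theorem 4.1 and the impossibility illustrated by Fig. 4.3 can be replayed on a finite model. In this sketch (the concrete x_i, y_i and all names are illustrative assumptions), states are output classes, the resulting ρ_0 is partial, and a brute-force search confirms that no full, consistent, nonanticipatory ρ_0 exists for the two-element system:

```python
from itertools import product

def incomplete_response(S):
    """Theorem 4.1 construction: one state per distinct output, and
    rho0([y], x) = y defined only where (x, y) is in S (a partial map)."""
    C0 = {y for (_, y) in S}
    rho0 = {(y, x): y for (x, y) in S}
    return C0, rho0

def full_nonanticipatory_exists(S):
    """Search every total map rho0: C0 x X -> Y (states = output classes)
    for one that is both consistent with S and nonanticipatory."""
    X = sorted({x for (x, _) in S})
    C0 = sorted({y for (_, y) in S})
    pairs = [(c, x) for c in C0 for x in X]
    for choice in product(C0, repeat=len(pairs)):
        rho = dict(zip(pairs, choice))
        consistent = (all(any(rho[(c, x)] == y for c in C0) for (x, y) in S)
                      and all((x, rho[(c, x)]) in S for (c, x) in pairs))
        if consistent and all(
                rho[(c, xa)][:t + 1] == rho[(c, xb)][:t + 1]
                for c in C0 for xa in X for xb in X
                for t in range(len(xa)) if xa[:t + 1] == xb[:t + 1]):
            return True
    return False

# The system of Fig. 4.3: inputs share their initial segment,
# outputs differ there already.
x1, y1 = (0, 0, 0), (0, 0, 0)
x2, y2 = (0, 0, 1), (1, 1, 1)
S = {(x1, y1), (x2, y2)}
C0, rho0 = incomplete_response(S)
```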
(c) Causality and Output Functions
Conditions for the existence of an output function and an output-generating function can be given directly in terms of the nonanticipation of the systems response. The dependence of the output function upon nonanticipation is clearly indicated by the following proposition.

Proposition 4.1. A time system has an output-function family λ̄ = {λ_t : C_t × X(t) → Y(t)} if the system is nonanticipatory.
PROOF. Since the system is nonanticipatory, there exists, by Theorem 4.2, a nonanticipatory response family ρ̄ = {ρ_t : t ∈ T}. Let λ_t be defined for the nonanticipatory response family. Suppose (c_t, x(t), y(t)) ∈ λ_t and (c_t, x'(t), y'(t)) ∈ λ_t, where x(t) = x'(t). Since

x_t | {t} = x(t) = x'(t) = x_t' | {t}

and since ρ_t is nonanticipatory, we have that

ρ_t(c_t, x_t) | {t} = ρ_t(c_t, x_t') | {t}

or y(t) = y'(t). Hence, λ_t is a mapping such that λ_t : C_t × X(t) → Y(t). Q.E.D.
The concept of an output function illustrates one of the important roles of the concept of state: if a state is given and the system is nonanticipatory, all information about the past of the system necessary to specify the present value of the output is contained in the state itself. The dependence of the output-generating function upon the nonanticipation of the system is quite similar to that of the output function and is given by the following proposition.

Proposition 4.2. A time system has an output-generating family μ̄ = {μ_{tt'} : C_t × X_{tt'} → Y(t')} if the system is nonanticipatory.

PROOF. The proof is similar to that of Proposition 4.1. Q.E.D.
From Proposition 4.1 it is obvious that the output of a nonanticipatory time system can be determined solely by the present state and the present value of the input. For some systems, however, the present output depends solely upon the present state and does not depend upon the present value of the input. For the analysis of these systems, a somewhat stronger notion, namely that of strong nonanticipation, is needed.

Proposition 4.3. A time system has an output function such that for all t ∈ T

(∀x(t))(∀x̂(t))[λ_t(c_t, x(t)) = λ_t(c_t, x̂(t))]

if it is a strongly nonanticipatory system.

PROOF. Since the system is strongly nonanticipatory, there exists a strongly nonanticipatory initial system response ρ_0. Then the same procedure as used in Proposition 4.1 proves the statement. Q.E.D.

When the system is strongly nonanticipatory, for every t ∈ T there apparently exists a map K_t : C_t → B such that

K_t(c_t) = λ_t(c_t, x(t))  for any x(t)
(d) Existence of Past-Determinacy

First of all, a distinction between past-determinacy and functionality ought to be made. A system can be functional (i.e., S : X → Y), yet it does not have to be past-determined, and vice versa. Notice also that for a functional system (i.e., when there is only one initial state), the concepts of strong nonanticipation and past-determinacy from o coincide. For any (x^t, y^t) ∈ S^t, let S(x^t, y^t) be the set

S(x^t, y^t) = {(x_t, y_t) : (x^t · x_t, y^t · y_t) ∈ S}

where S(x^t, y^t) is the family of "successors" to (x^t, y^t). We now have the following proposition.

Proposition 4.4. A system is past-determined from t̂ if and only if, for all t ≥ t̂, S(x^t, y^t) is a strongly nonanticipatory function

S(x^t, y^t) : X_t → Y_t

PROOF. First, let us consider the only if part. Suppose (x_t, y_t) ∈ S(x^t, y^t) and (x̂_t, ŷ_t) ∈ S(x^t, y^t). Suppose x_t | T_{tt'} = x̂_t | T_{tt'} for t' > t. Then

x^t · x_t | T^t = x^t · x̂_t | T^t  and  y^t · y_t | T^t = y^t · ŷ_t | T^t  &  x^t · x_t | T_{tt'} = x^t · x̂_t | T_{tt'}

hold. Since the system is past-determined, y^t · y_t | T_{tt'} = y^t · ŷ_t | T_{tt'} holds, which implies y_t | T_{tt'} = ŷ_t | T_{tt'}. Since t' > t is arbitrary, we have that if x_t = x̂_t, then y_t = ŷ_t, which implies that S(x^t, y^t) is functional. Hence, S(x^t, y^t) is a strongly nonanticipatory function. Conversely, suppose (x, y) ∈ S and (x̂, ŷ) ∈ S. Furthermore, suppose (x^t̂, y^t̂) = (x̂^t̂, ŷ^t̂) and x_{t̂t} = x̂_{t̂t}. Since S(x^t̂, y^t̂) is a strongly nonanticipatory function, y_{t̂t} = ŷ_{t̂t}. Q.E.D.

Past-determinacy is related to the state-generating family η̄. In this respect, we have the following proposition.

Proposition 4.5. A system is past-determined from t̂ if and only if for any t ≥ t̂, it has a state-generating function η_t such that ρ_t is strongly nonanticipatory.
PROOF. First, we shall consider the only if part. Suppose S is past-determined. Let C_t = S^t for t ≥ t̂. Then the desired result immediately follows from Proposition 4.4, where the state-generating function η_t : S^t → C_t is the identity function and ρ_t : C_t × X_t → Y_t is given by ρ_t(c_t, x_t) = S(c_t)(x_t). Conversely, suppose S has a state-generating function η_t : S^t → C_t for t ≥ t̂. Suppose (x^t, y^t) = (x̂^t, ŷ^t) ∈ S^t and x_{tt'} = x̂_{tt'} for t' ≥ t ≥ t̂. Let c_t = η_t(x^t, y^t) = η_t(x̂^t, ŷ^t). Since x^t · x_{tt'} = x̂^t · x̂_{tt'}, and since ρ_t is strongly nonanticipatory by assumption, the following holds:

ρ_t(c_t, x_t) | T_{tt'} = ρ_t(c_t, x̂_t) | T_{tt'}

where x_t and x̂_t are such that x_t | T_{tt'} = x_{tt'} and x̂_t | T_{tt'} = x̂_{tt'}. Hence the system is past-determined. Q.E.D.
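The successor sets S(x^t, y^t) and the functionality test of Proposition 4.4 can be computed directly for a finite system. A sketch (the unit delay with a hidden initial value is an invented example; its successor sets become single valued exactly when the past contains at least one output observation):

```python
from itertools import product

N = 3
# A unit delay with an unknown initial value c: past-determined from 1.
S = {(x, (c,) + x[:-1]) for x in product((0, 1), repeat=N) for c in (0, 1)}

def successors(S, x_past, y_past):
    """S(x^t, y^t): the set of future pairs extending the given past."""
    t = len(x_past)
    return {(x[t:], y[t:]) for (x, y) in S
            if x[:t] == x_past and y[:t] == y_past}

def is_function(pairs):
    """True when each future input tail determines a unique output tail."""
    return len({xf for (xf, _) in pairs}) == len(pairs)
```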
Chapter III

GENERAL REALIZATION THEORY
Realization theory for the class of dynamical systems is concerned with the existence of a dynamical representation for an appropriately defined time system. Usually, there is given a family of response-type functions ρ̄, and the realization problem consists in determining whether there exist a family of state-transition functions φ̄ and a time system S so that (ρ̄, φ̄) is a dynamical realization of S. Problems of this type are considered in this chapter. Since some basic facts of the realization theory can be proven for a system whose dynamics is described solely in reference to a family of state objects rather than a state space, a portion of the realization theory is considered in that framework first. This arrangement is in accord with the formalization approach proclaimed in Chapter I, namely, to consider any given problem with a system having minimal structure. In Section 1, conditions for the consistency and realizability of a family of response functions are given; these are nontrivial only if the response family consists of full functions, i.e., partial functions are not accepted as the elements of ρ̄. In Section 2, it will be shown that nonanticipation is necessary and sufficient for a decomposition of a time system into two families of functions: the state-transition functions and the output functions. Furthermore, when the system is strongly nonanticipatory, the output functions can be defined solely on the state objects. Since the output function is static, the entire dynamics of the system is then described by the state-transition family. Because of this important and convenient property, the representation of a system by the pair (φ̄, λ̄) is referred to as canonical. Furthermore, it is
shown how the states can be characterized by an equivalence relation defined on the input and initial-state pairs. In Section 3, the relationship between the auxiliary functions (and various systems representations) is shown to be a commutative diagram (Fig. 3.4). This procedure provides an important insight into the nature of the state space, the origin of that concept, and the reasons for its importance. Often, the state space is given as a primary concept in the very definition of a dynamical system. By showing how the state space can be constructed from a given set of input-output pairs, we have justified our viewpoint that the state space is a secondary, i.e., derived, concept. It is shown how the state space can be generated by appropriate equivalence relations (Fig. 3.9). A procedure (based on the results developed in the preceding chapters) to construct the state-space representation when a time system (i.e., the set of input-output pairs) is given is then outlined (Fig. 3.10).

1. REALIZABILITY AND DYNAMICAL REPRESENTATION
A pair of functions (ρ̄, φ̄) is a dynamical system representation of a general time system

S ⊂ A^T × B^T    (3.1)

if the appropriate conditions (as given in Chapter II) are satisfied so that ρ̄ is consistent with S and φ̄ is a family of state-transition functions. The time system is defined in (3.1) as a set, and as such it has to be specified by a defining function. This is often done by giving the systems-response family ρ̄ = {ρ_t : C_t × X_t → Y_t, t ∈ T}, either directly or in terms of an output-generating function. One encounters then the situation in which there is given a family of response-type functions ρ̄, and the question arises whether there indeed exists a dynamical system for which ρ̄ is a response family. This question can be answered in two steps:

(i) Given ρ̄, is there a system S such that ρ̄ is consistent with S?
(ii) Given ρ̄, which is a response family for a system S, is there a family of transition functions φ̄ so that (ρ̄, φ̄) is a dynamical representation of S?

These two problems will be considered in this section.
Definition 1.1. Given a family of functions ρ̄ = {ρ_t : C_t × X_t → Y_t, t ∈ T}, we shall say that ρ̄ has a dynamical realization or, simply, is realizable if and only if there exist a time system S and a family of functions φ̄ = {φ_{tt'} : C_t × X_{tt'} → C_{t'}} so that ρ̄ is consistent with S and (ρ̄, φ̄) is a dynamical representation of S.
The realizability of a set of maps ρ̄ = {ρ_t : C_t × X_t → Y_t} depends upon the properties of those maps and of the sets involved. Recall that the assumptions made for ρ̄ were the following:

(i) C_t is an arbitrary set.
(ii) X_t ⊂ A^{T_t} and Y_t ⊂ B^{T_t}, and furthermore

(∀x^t)(∀x_t)[x^t · x_t ∈ X]

(iii) ρ_t is a full function, i.e., it is defined on the entire set C_t × X_t.

(a) Consistency of ρ̄
The first realizability question, concerned with the consistency of the given maps ρ̄ with a time system S, has already been answered by Theorem 2.1, Chapter II. It has been shown there that when a family of arbitrary maps ρ̄ satisfies conditions (i), (ii), and (iii), there exists a time system S consistent with ρ̄ if and only if conditions (P1) and (P2) hold for all t ∈ T. The relation

ρ_t(c_t, x_t) = ρ_0(c_0, x^t · x_t) | T_t

could be interpreted to mean that the state c_t "properly" connects the previous history, represented by (c_0, x^t), and the future evolution of the system, represented by x_t, in the sense that the output starting from c_t, i.e., y_t = ρ_t(c_t, x_t), is precisely the remainder of the output y_0 = ρ_0(c_0, x^t · x_t), i.e., y_0 | T_t = ρ_0(c_0, x^t · x_t) | T_t = y_t. Condition (P1), which can also be written in the form

(∀(c_0, x^t))(∀x_t)(∃c_t)[ρ_t(c_t, x_t) = ρ_0(c_0, x^t · x_t) | T_t]

then means that for any past history (c_0, x^t) and for any future input x_t, there exists a state c_t at t which connects them properly. Condition (P2), which can also be written in the form

(∀(c_t, x_t))(∃(c_0, x^t))[ρ_t(c_t, x_t) = ρ_0(c_0, x^t · x_t) | T_t]

then means that for any (c_t, x_t), i.e., for any future behavior of the system, there exists a past history, i.e., an initial state and an initial input segment, so that this history leads up to c_t, which properly connects (c_0, x^t) and x_t.
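Conditions (P1) and (P2) can be verified mechanically for the standard choice of state c_t = (c_0, x^t). The sketch below is a finite toy model, not from the text; the unit-delay ρ_0 and all names are assumptions:

```python
from itertools import product

N = 3
A = (0, 1)
C0 = (0, 1)

def rho0(c0, x):
    """An assumed initial response: a unit delay seeded with c0."""
    return (c0,) + x[:-1]

def rho_t(t, ct, xt):
    """States in the style of Theorem 1.2: c_t = (c0, x^t), with
    rho_t(c_t, x_t) = rho0(c0, x^t . x_t) | T_t."""
    c0, x_past = ct
    return rho0(c0, x_past + xt)[t:]

def states(t):
    return [(c0, xp) for c0 in C0 for xp in product(A, repeat=t)]

def P1(t):
    """(P1): every past (c0, x^t) is properly connected by some c_t."""
    return all(any(rho_t(t, ct, xt) == rho0(c0, xp + xt)[t:]
                   for ct in states(t))
               for c0 in C0 for xp in product(A, repeat=t)
               for xt in product(A, repeat=N - t))

def P2(t):
    """(P2): every state c_t arises from some past (c0, x^t)."""
    return all(any(rho_t(t, ct, xt) == rho0(c0, xp + xt)[t:]
                   for c0 in C0 for xp in product(A, repeat=t))
               for ct in states(t) for xt in product(A, repeat=N - t))
```

With this choice of states both conditions hold by construction, which is exactly the point of taking the whole past as the state.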
(b) Realizability of ρ̄
Theorem 1.1. Let ρ̄ be a family of maps satisfying conditions (i) to (iii). Then ρ̄ is realizable (i.e., there exists a state-transition family φ̄ so that (ρ̄, φ̄) is a dynamical representation of a time system S) if and only if, for all t, t' ∈ T with t' ≥ t, ρ̄ has the properties

(P3) (∀c_t)(∀x_{tt'})(∃c_{t'})(∀x_{t'})[ρ_{t'}(c_{t'}, x_{t'}) = ρ_t(c_t, x_{tt'} · x_{t'}) | T_{t'}]

(P4) (∀c_t)(∀x_t)(∃c_0)(∃x^t)[ρ_t(c_t, x_t) = ρ_0(c_0, x^t · x_t) | T_t]

PROOF. We shall consider the only if part first. Let ρ̄ be realizable, and let c_t and x_{tt'} be arbitrary. By the consistency of φ̄,

ρ_{t'}(φ_{tt'}(c_t, x_{tt'}), x_{t'}) = ρ_t(c_t, x_{tt'} · x_{t'}) | T_{t'}  for every x_{t'}

and since there exists c_{t'} such that c_{t'} = φ_{tt'}(c_t, x_{tt'}), (P3) holds. Since (P4) is equal to (P2) in Theorem 2.1, Chapter II, and since ρ̄ is consistent with the time system S, (P4) also holds.

Consider now the if part. Let (P3) and (P4) be satisfied. Since (P3) and (P4) imply (P1) and (P2) of Theorem 2.1, Chapter II, the system consistency is naturally satisfied by ρ̄. For each t ∈ T, let E_t ⊂ C_t × C_t be such that

(c_t, c_t') ∈ E_t ↔ (∀x_t)[ρ_t(c_t, x_t) = ρ_t(c_t', x_t)]

E_t is obviously an equivalence relation for each t. Let ρ̃_t : (C_t/E_t) × X_t → Y_t be such that

ρ̃_t([c_t], x_t) = ρ_t(c_t, x_t)

where ρ̃_t is apparently well defined. Now, we claim that there exists a family of mappings

{f_{tt'}} : (C_t/E_t) × X_{tt'} → C_{t'}/E_{t'}

such that

ρ̃_{t'}(f_{tt'}([c_t], x_{tt'}), x_{t'}) = ρ̃_t([c_t], x_{tt'} · x_{t'}) | T_{t'}    (3.2)

for every [c_t], x_{tt'}, and x_{t'}. Let

f_{tt'} ⊂ ((C_t/E_t) × X_{tt'}) × (C_{t'}/E_{t'})

be such that

(([c_t], x_{tt'}), [c_{t'}]) ∈ f_{tt'} ↔ (∀x_{t'})[ρ̃_{t'}([c_{t'}], x_{t'}) = ρ̃_t([c_t], x_{tt'} · x_{t'}) | T_{t'}]

(P3) implies that f_{tt'} ≠ ∅. Suppose (([c_t], x_{tt'}), [c_{t'}]) ∈ f_{tt'} and (([c_t], x_{tt'}), [ĉ_{t'}]) ∈ f_{tt'}. Then we have that

ρ̃_{t'}([c_{t'}], x_{t'}) = ρ̃_{t'}([ĉ_{t'}], x_{t'})  for every x_{t'}

which implies that [c_{t'}] = [ĉ_{t'}]. Furthermore, (P3) implies that the domain of f_{tt'} is (C_t/E_t) × X_{tt'}. Hence, f_{tt'} is a mapping such that

f_{tt'} : (C_t/E_t) × X_{tt'} → C_{t'}/E_{t'}

and

ρ̃_{t'}(f_{tt'}([c_t], x_{tt'}), x_{t'}) = ρ̃_t([c_t], x_{tt'} · x_{t'}) | T_{t'}  for every x_{t'}

Then

f_{t't''}(f_{tt'}([c_t], x_{tt'}), x_{t't''}) = f_{tt''}([c_t], x_{tt'} · x_{t't''})    (3.3)

holds, because {f_{tt'}} is a state-transition family consistent with the reduced systems-response family {ρ̃_t}, so that Theorem 2.2, Chapter II, is applicable to {f_{tt'}}. Finally, let p_t : C_t/E_t → C_t be such that p_t([c_t]) ∈ [c_t]; p_t can be any function that satisfies p_t([c_t]) ∈ [c_t]. Let

φ_{tt'} : C_t × X_{tt'} → C_{t'}    (3.4)

be given by φ_{tt'}(c_t, x_{tt'}) = p_{t'}(f_{tt'}([c_t], x_{tt'})).
The interpretation of (P'3) is then as follows: for any state-input pair (c_t, x_{tt'}), there exists a state c_{t'} at t' such that the further time evolution for any subsequent input segment is consistent with the given state-input pair. Condition (P'3) requires, therefore, that the state object at any time t' contain a state c_{t'} which "properly" connects the preceding and following segments, x_{tt'} and x_{t'}, so as to be consistent with ρ_t. In particular, for any initial system's evolution, as specified by (c_t, x_{tt'}), there exists a state c_{t'} which properly accounts for the continuation of that evolution.

(c) Dynamical Representation of a Time System

It has been shown in Section 2, Chapter II, that every time system has a response family ρ̄ = {ρ_t : C_t × X_t → Y_t} if no constraints on ρ̄ are assumed (Proposition 2.1, Chapter II). A similar fact is true for the dynamical representation of a time system, i.e., when a family of state-transition functions φ̄ consistent with ρ̄ and S is required to exist.

Theorem 1.2. Every time system S has a dynamical representation, i.e., there exist two families of mappings (ρ̄, φ̄) consistent with S.

PROOF. Let ρ_0 : C_0 × X → Y be an initial systems response of S. Let C_t = C_0 × X^t and ρ_t : C_t × X_t → Y_t be such that if c_t = (c_0, x^t), then

ρ_t(c_t, x_t) = ρ_0(c_0, x^t · x_t) | T_t

Then ρ̄ = {ρ_t} satisfies the conditions of Theorem 1.1, and hence ρ̄ is realizable. Q.E.D.

Consider now the effect of the causality requirements. Theorem 4.2, Chapter II, has indicated that a time system has a nonanticipatory response family if and only if it has a nonanticipatory initial systems response. This result can be easily extended to dynamical systems.

Theorem 1.3. Let (ρ̄, φ̄) be a dynamical system such that for every t ∈ T, φ_{0t} is an onto map. Then ρ̄ is a nonanticipatory systems-response family if and only if ρ_0 is an initial nonanticipatory systems response.

PROOF. Let us consider the if part first. Suppose ρ_0 is nonanticipatory. Let x_t, x_t', and c_t be arbitrary elements. Since φ_{0t} is an onto map, for some (ĉ_0, x̂^t),

ρ_t(c_t, x_t) = ρ_t(φ_{0t}(ĉ_0, x̂^t), x_t) = ρ_0(ĉ_0, x̂^t · x_t) | T_t

and similarly

ρ_t(c_t, x_t') = ρ_0(ĉ_0, x̂^t · x_t') | T_t
Since

(x̂^t · x_t) | T^{t'} = (x̂^t · x_t') | T^{t'}

when x_t | T_{tt'} = x_t' | T_{tt'}, we have that

ρ_0(ĉ_0, x̂^t · x_t) | T^{t'} = ρ_0(ĉ_0, x̂^t · x_t') | T^{t'}

Therefore, if x_t | T_{tt'} = x_t' | T_{tt'},

ρ_t(c_t, x_t) | T_{tt'} = ρ_0(ĉ_0, x̂^t · x_t) | T_{tt'} = ρ_0(ĉ_0, x̂^t · x_t') | T_{tt'} = ρ_t(c_t, x_t') | T_{tt'}

The only if part is apparent from the definition. Q.E.D.

We have now the following consequence for a dynamical system representation.

Theorem 1.4. A time system has a nonanticipatory dynamical system representation if and only if it is nonanticipatory.

PROOF. If a time system has a nonanticipatory dynamical system representation, then it is nonanticipatory by definition. Conversely, if the system is nonanticipatory, then there exists an initial nonanticipatory response function from which a dynamical system can be constructed (refer to Theorem 1.2). It is possible that some state transition φ_{0t} may not be onto, but in that case, as the proof of Theorem 1.1 shows, the state object at t can be restricted to the range of φ_{0t} so that every φ_{0t} is onto. Then it follows from Theorem 1.3 that the resulting dynamical system representation is nonanticipatory. Q.E.D.

The following result, which is of some conceptual importance, follows from Theorem 4.1, Chapter II.

Theorem 1.5. If the response functions and hence the state-transition functions are accepted as partial functions, every time system has a nonanticipatory dynamical representation.

PROOF. The proof is immediately obtained when Theorem 4.1, Chapter II, is applied to Theorem 1.4. The procedure to define φ̄ is exactly the same as described in Theorem 1.2. Q.E.D.
Nonanticipation of a time system is defined with respect to its initial systems-response function. It is clearly true that even if a system is nonanticipatory, an initial systems-response function for such a system, which in general can be chosen arbitrarily, may not satisfy the nonanticipation conditions. A general condition for a time system to be nonanticipatory is not known yet. However, a past-determined system is intrinsically nonanticipatory, and moreover, its "natural" initial systems-response function is nonanticipatory too. This fortunate fact is of special importance in systems theory in general, in view of the widespread application of past-determined systems. Past-determined systems will be discussed in this light in Chapter V.

2. CANONICAL REPRESENTATION (DECOMPOSITION) OF DYNAMICAL SYSTEMS AND CHARACTERIZATION OF STATES
(a) Canonical Representation of Dynamical Systems

Let ρ̄ be an arbitrary response family for a system S and C̄ = {C_t : t ∈ T} the family of associated state objects. Since C_t is arbitrary, it might have more states than necessary for consistency with S. An obvious way to eliminate some of the redundant states is to consider any two states c_t and ĉ_t as equal if the future behavior of the system starting from c_t and from ĉ_t is the same. More precisely, let E_t ⊂ C_t × C_t be the relation such that

(c_t, ĉ_t) ∈ E_t ↔ (∀x_t)[ρ_t(c_t, x_t) = ρ_t(ĉ_t, x_t)]

Apparently, E_t is an equivalence relation. Starting from an arbitrary C_t, we can introduce a reduced state object Ĉ_t = C_t/E_t whose elements are equivalence classes. The corresponding response family ρ̂ = {ρ̂_t : Ĉ_t × X_t → Y_t} such that

ρ̂_t([c_t], x_t) = y_t ↔ ρ_t(c_t, x_t) = y_t

is termed reduced. The following simple fact will be used in the sequel.

Proposition 2.1. Let ρ̄ be an arbitrary response family and ρ̂ the corresponding reduced family. ρ̂ is consistent with a system S if and only if ρ̄ is consistent with S.

PROOF. Since the relation

ρ_t(c_t, x_t) = ρ_0(c_0, x^t · x_t) | T_t ↔ ρ̂_t([c_t], x_t) = ρ̂_0([c_0], x^t · x_t) | T_t

holds, the proposition follows from Theorem 2.1, Chapter II. Q.E.D.
Definition 2.1. Let S be a time system with a given family of output-generating functions μ̄ = {μ_{tt'} : t, t' ∈ T}. A pair (φ̄, λ̄), where φ̄ is a state-transition family while λ̄ is a family of output functions, will be referred to as a canonical (dynamical) representation of S if and only if for any t, t' ∈ T the diagram
is commutative, where (φ_{tt'}, λ_{t'})(c_t, x_{tt'}) = λ_{t'}(φ_{tt'}(c_t, x_{tt'}), x_{tt'}(t')).

We have now the following theorem.

Theorem 2.1. A time system has a canonical representation if and only if it is nonanticipatory.

PROOF. Let us consider the if part first. When a time system is nonanticipatory, it follows from Theorem 1.4 that it has a nonanticipatory dynamical system representation (ρ̄, φ̄). Then an output function λ_t : C_t × X(t) → Y(t) such that

λ_t(c_t, x_t(t)) = ρ_t(c_t, x_t)(t)

and an output-generating function μ_{tt'} : C_t × X_{tt'} → Y(t') such that

μ_{tt'}(c_t, x_{tt'}) = ρ_t(c_t, x_{tt'} · x̂_{t'})(t')

where x̂_{t'} is arbitrary with x̂_{t'}(t') = x_{tt'}(t'), are properly defined. Furthermore, since

ρ_t(c_t, x_{tt'} · x̂_{t'})(t') = ρ_{t'}(φ_{tt'}(c_t, x_{tt'}), x̂_{t'})(t') = λ_{t'}(φ_{tt'}(c_t, x_{tt'}), x̂_{t'}(t')) = μ_{tt'}(c_t, x_{tt'})
we have the commutative diagram. Hence, the system has a canonical representation. Conversely, let a time system have a canonical representation. Let ρ_t : C_t × X_t → Y_t be such that

ρ_t(c_t, x_t) = y_t ↔ y_t(τ) = λ_τ(φ_{tτ}(c_t, x_{tτ}), x_t(τ))  for τ ≥ t and x_{tτ} = x_t | T_{tτ}

Now, we shall show that

x | T̄^t = x' | T̄^t → ρ_0(c_0, x) | T̄^t = ρ_0(c_0, x') | T̄^t

for any c_0 ∈ C_0. Let τ ∈ T̄^t be arbitrary; then

ρ_0(c_0, x)(τ) = λ_τ(φ_{0τ}(c_0, x^τ), x(τ))

and

ρ_0(c_0, x')(τ) = λ_τ(φ_{0τ}(c_0, x'^τ), x'(τ))

Since x^τ = x'^τ and x(τ) = x'(τ) for τ ∈ T̄^t, we have

ρ_0(c_0, x)(τ) = ρ_0(c_0, x')(τ)  for τ ∈ T̄^t

Hence ρ_0 is nonanticipatory. Q.E.D.
The canonical representation essentially means a decomposition of the system into subsystems in a cascade, as shown in Fig. 2.1.

FIG. 2.1 (cascade decomposition: the dynamic subsystem φ_{tt'}, followed by the static subsystems μ_{tt'} and λ_{t'})

The first subsystem, denoted by φ_{tt'} in Fig. 2.1, represents fully the dynamic behavior of the system, while the other two subsystems, μ_{tt'} and λ_{t'}, are static and indicate only how the output values are generated from the given states and the current input values. The first subsystem is defined solely in terms of φ̄, and therefore the dynamics of the system is fully represented by this single family of functions. If one is concerned solely with the dynamics, attention can be restricted to φ̄. A somewhat stronger decomposition is possible if the system satisfies the conditions for strong nonanticipation. The following theorem is then valid.

Theorem 2.2. The system is strongly nonanticipatory if and only if there exist a state-transition family φ̄, an output-function family λ̄ = {λ_t : C_t → Y(t)}, and an output-generating-function family μ̄ = {μ_{tt'} : C_t × X_{tt'} → Y(t')} such that the diagram is commutative.

PROOF. The proof is completely analogous with the proof of Theorem 2.1. Q.E.D.
The canonical decomposition is now given as in Fig. 2.2. The output values depend solely on the present state and are not explicitly dependent upon the input.

FIG. 2.2
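The canonical cascade for a strongly nonanticipatory system can be miniaturized. For a unit delay (an invented example; all names are assumptions), the state is the most recent input value, the transition φ absorbs input segments, and the static output λ reads the state alone:

```python
def phi(c, x_seg):
    """State transition of the unit delay: the state is the most recent
    input value (or the seed c0 before any input has arrived)."""
    for v in x_seg:
        c = v
    return c

def lam(c):
    """Static output function: the output depends on the state alone,
    never on the current input value (strong nonanticipation)."""
    return c

def response(c0, x):
    """Reconstruct y from the canonical pair (phi, lam):
    y(t) = lam(phi(c0, x^t))."""
    return tuple(lam(phi(c0, x[:t])) for t in range(len(x)))
```

Here `response` reproduces the direct delay response (c0, x0, ..., x_{n-2}); the entire dynamics lives in `phi`, as the text observes.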
(b) Characterization of States as Equivalence Classes

Conceptually, one of the roles of the concept of state is to reflect the past history of the system's behavior. If two different past behaviors lead to the same state, they are apparently equivalent as far as the present and future behavior of the system are concerned. The state at any time can, therefore, be represented as corresponding to an equivalence class generated by an equivalence relation defined on the past of the system. The equivalence that characterizes the states depends upon the particular family of state objects used, and conversely, by introducing an appropriate equivalence relation, one can define, or construct, the corresponding family of state objects and the state space itself. It should be noticed that we are talking here about equivalences defined strictly in terms of the past. These equivalence relations ought to be distinguished from the equivalences that are defined in reference to the future behavior and are used to eliminate redundant states.

(a) Let ρ_0 be an initial systems-response function of a time system S, i.e., ρ_0 : C_0 × X → Y. The present and future output y_t of the system is
uniquely specified then by the initial-state–initial-input segment pair (c_0, x^t); that is, it is given by the restriction y_t = ρ_0(c_0, x^t · x_t) | T_t. This observation suggests the selection of the initial-state–initial-input segment pair (c_0, x^t) as a state at time t, a choice justified by Theorem 1.1. In other words, C_0 × X^t itself can be used as a state object at time t. However, it is possible that two different elements (c_0, x^t) and (ĉ_0, x̂^t) yield completely the same future output. Then, as far as the behavior of the system is concerned, we need not distinguish (c_0, x^t) from (ĉ_0, x̂^t). This observation leads to an equivalence relation E^t on C_0 × X^t, which is referred to in the literature as the Nerode equivalence [2].

Proposition 2.2. Let ρ_0 be an initial systems-response function of a time system S. For every t ∈ T, let E^t ⊂ (C_0 × X^t) × (C_0 × X^t) be an equivalence relation such that
((c_0, x^t), (ĉ_0, x̂^t)) ∈ E^t  ↔  (∀x_t)(ρ_0(c_0, x^t · x_t) | T_t = ρ_0(ĉ_0, x̂^t · x_t) | T_t)

There exists then a family of functions ρ̄ = {ρ_t : C_t × X_t → Y_t} such that C_t = (C_0 × X^t)/E^t and ρ̄ is consistent with S and realizable.

PROOF. E^t is apparently an equivalence relation. Let C_t = (C_0 × X^t)/E^t = {[c_0, x^t]}. Let ρ_t : C_t × X_t → Y_t be such that

ρ_t([c_0, x^t], x_t) = ρ_0(c_0, x^t · x_t) | T_t

Since ((c_0, x^t), (ĉ_0, x̂^t)) ∈ E^t implies that

ρ_0(c_0, x^t · x_t) | T_t = ρ_0(ĉ_0, x̂^t · x_t) | T_t

for every x_t, ρ_t is properly defined. Then, since the consistency conditions (P1) and (P2) of Theorem 2.1, Chapter II, are trivially satisfied, ρ̄ is consistent with S. In a similar way, the conditions of Theorem 1.1 are shown to be satisfied. Q.E.D.

Proposition 2.3. Let (ρ̄, φ̄) be a dynamical system with state objects C̄_t, and let E_t ⊂ C̄_t × C̄_t be a relation such that

(c_t, ĉ_t) ∈ E_t ↔ (∀x_t)(ρ̄_t(c_t, x_t) = ρ̄_t(ĉ_t, x_t))

Then the mapping F : (C_0 × X^t)/E^t → C̄_t/E_t such that F([c_0, x^t]) = [φ_0t(c_0, x^t)] is well defined and one to one.

PROOF. Since

[c_0, x^t] = [ĉ_0, x̂^t] ↔ ((c_0, x^t), (ĉ_0, x̂^t)) ∈ E^t
  ↔ (∀x_t)(ρ_0(c_0, x^t · x_t) | T_t = ρ_0(ĉ_0, x̂^t · x_t) | T_t)
  ↔ (∀x_t)(ρ̄_t(φ_0t(c_0, x^t), x_t) = ρ̄_t(φ_0t(ĉ_0, x̂^t), x_t))
  ↔ (φ_0t(c_0, x^t), φ_0t(ĉ_0, x̂^t)) ∈ E_t
  ↔ [φ_0t(c_0, x^t)] = [φ_0t(ĉ_0, x̂^t)]

F is well defined and one to one. Q.E.D.
The preceding two propositions suggest a construction procedure for the reduced state objects (where redundant states are eliminated) whenever an initial-response function ρ_0 is given. First, the equivalences {E^t : t ∈ T} are introduced, and the state objects are defined by C_t = (C_0 × X^t)/E^t. The response functions are then given by ρ_t : C_t × X_t → Y_t such that

ρ_t(c_t, x_t) = ρ_0(c_0, x^t · x_t) | T_t,    where c_t = [c_0, x^t]

Apparently, ρ_t is well defined, and Theorem 1.1 guarantees that ρ̄ = {ρ_t} is realizable. Furthermore, Proposition 2.3 guarantees that C = {C_t : t ∈ T} is the set of smallest state objects in the sense that, starting from any other set of state objects C̄_t and the initial systems-response function ρ̄_0, after such state objects are reduced by the appropriate equivalence E_t (i.e., after redundant states are eliminated), there exists a one-to-one map from C_t into the reduced state object C̄_t/E_t.
(β) The preceding equivalence relation E^t is defined for a given initial response function ρ_0, i.e., in reference to a given initial state object C_0. When the system is past-determined, it is possible to characterize reduced states by an equivalence defined solely on the input-output objects, i.e., without reference to the initial states. The states are then defined in terms of the primary concepts X and Y, which are used to define the system S itself. This will be discussed in Chapter V.

3. CONSTRUCTIVE ORIGIN OF STATE-SPACE REPRESENTATION
(a) Construction of a State Space and a Canonical Representation
A state space can be introduced directly by assuming an abstract set that satisfies the necessary requirements, or by constructing the state space from the previously introduced family of state objects. The general procedure for such a construction involves the following steps:
(1) All state objects are aggregated, e.g., by the union operation Ĉ = ∪{C_t : t ∈ T} or by the Cartesian product operation Ĉ = ×_{t∈T} C_t.
(2) An equivalence relation E_c ⊂ Ĉ × Ĉ, which has to satisfy some necessary conditions, is introduced as given below.
(3) The quotient set Ĉ/E_c, or a set isomorphic to it, C = Ĉ/E_c, is taken as the state space.

The necessary conditions mentioned in (2) are the following:

(i) (∀[c])(∀x_tt')(∃[c'])(∀c_t)(c_t ∈ [c] → φ_tt'(c_t, x_tt') ∈ [c'])
(ii) (∀[c])(∀c_t)(∀c_t')(∀x_t)(c_t ∈ [c] & c_t' ∈ [c] → ρ_t(c_t, x_t) = ρ_t'(c_t', x_t))

or

(ii)' (∀[c])(∀c_t)(∀c_t')(∀a)(c_t ∈ [c] & c_t' ∈ [c] → λ_t(c_t, a) = λ_t'(c_t', a))

Condition (i) requires simply that the evolution of the system in time is represented as "uninterrupted" much as in the family of state objects. Condition (ii) is used when the system is defined by its dynamical representation [i.e., the pair (ρ̄, φ̄)], while condition (ii)' is used when the canonical representation [i.e., the pair (φ̄, λ̄)] is given.

If the state objects have additional algebraic structures (for instance, they are linear algebras as required in the linear system case), the equivalence relation which generates a state space should preserve those structures in the quotient set.

The results derived in terms of the state objects have apparent counterparts when the state space is introduced. Of particular interest is the canonical representation. Before presenting the form of the canonical representation in the state space, we shall introduce the following notational convention. Let T_tt' ⊂ T_t, and let X_tt' be an input set. A mapping R_{T_tt'} can then be defined,

R_{T_tt'} : X_t → X_tt'

such that

(∀x_t)[R_{T_tt'}(x_t) = x_t | T_tt']

where R_{T_tt'} is termed a restriction operator. For simplicity of notation, the index will always denote the codomain of a restriction operator, i.e., the subset on which the original function is restricted, while the domain (i.e., the time set of the original function being restricted) is understood from the context. The same symbol R will be used in a similar way for other objects too, e.g., Y_t, C_t, etc. We shall have then, e.g.,

R_{T_t} : Y → Y_t

such that

(∀y)[R_{T_t}(y) = y | T_t]

Furthermore, let φ̂_tt' and μ̂_tt' represent the state-transition and the output-generating functions defined on the state space, while φ_tt' and μ_tt' represent the respective functions defined on the family of state objects. (How φ̂_tt' and μ̂_tt' are defined by means of φ_tt' and μ_tt' will be shown in reference to specific types of equivalences in the subsequent sections.) The canonical representation is now defined in terms of a family of state-space transitions φ̂ = {φ̂_tt' : C × X_tt' → C} and maps λ̂ = {λ̂_t : C × A → B}. The diagram of the decomposition is as given in Figs. 3.1 and 3.2. The pair of mappings (φ̂_tt', R_(t')) used in Fig. 3.1 is applied in the following way:

(φ̂_tt', R_(t'))(c, x_tt') = (φ̂_tt'(c, x_tt'), x_tt'(t'))

FIG. 3.1

FIG. 3.2
The importance of the canonical representation can now be fully recognized: The dynamics of the system is represented by a family of transformations of the set C and the corresponding input restriction x_tt' into C itself. The set C, both in the domain and in the range of φ̂_tt', is the same for any t, t' ∈ T; the only difference between the state-transition functions for any two intervals, e.g., φ̂_tt' and φ̂_t''t''', is in the corresponding input restriction.
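As an illustration of such a family of transformations on a fixed state space C (the system below is invented, not taken from the text), consider a scalar discrete-time system: each input segment induces a map of C into itself, and the maps over adjacent intervals compose.

```python
# Illustrative discrete-time system on state space C = R (not from the text):
# c(k+1) = a*c(k) + b*x(k).  The transition over an interval depends only on
# the input restricted to that interval; domain and codomain are both C.
A_COEF, B_COEF = 0.5, 1.0

def phi(c, x_seg):
    """State transition phi_tt'(c, x | T_tt'): fold the input segment into C."""
    for u in x_seg:
        c = A_COEF * c + B_COEF * u
    return c

x = [1.0, 2.0, 0.5, 3.0]     # an input restricted to four steps
c0 = 0.25

# Composition (semigroup) property: transitioning over the whole interval
# equals transitioning over the first half and then the second half.
whole = phi(c0, x)
split = phi(phi(c0, x[:2]), x[2:])
print(abs(whole - split) < 1e-12)   # -> True
```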
A specific connection can now be established with the classical concept of a dynamical system as used, e.g., in topological dynamics. To any input x ∈ X of a general dynamical system corresponds a set of transformations

φ̄^x = {φ^x_tt' : C → C}

such that

φ^x_tt' = φ̂_tt' | ({x | T_tt'} × C)

If the state space satisfies the necessary additional requirements, e.g., it is an appropriately defined topological space, the family φ̄^x is referred to as a dynamical system, since it fully specifies the time evolution of the system for the given input x. It can be readily shown that φ̄^x has the required properties for a dynamical system as stated in the classical literature, in particular, the composition (semigroup) property and the consistency property [3].

In conclusion, the usefulness of the state-space approach for the study of time systems is primarily due to the following:

(1) The evolution in time, the dynamics, for any given input, is described fully by the mappings of the state space C into itself.
(2) The output at any time is obtained by using a static function from the state, if the system is strongly nonanticipatory, or from the current input value also, if the system is only nonanticipatory.

(b) State-Space-Generating Equivalences and Dynamical Systems in a State Space

There are many equivalences that satisfy conditions (i) and (ii) from Section 3a; i.e., the quotient set generated by such an equivalence can be used as a state space. In this section, we shall introduce two typical state-space-generating equivalences.

(α) The first way to define an equivalence in Ĉ is in terms of an output function. Informally, two states will be considered as equivalent, even if at different times, if for the same values of inputs they have the same values of outputs and, furthermore, the state-transition function always generates equivalent states if the inputs are the same. More precisely, we shall introduce a relation

E_α^λ ⊂ Ĉ × Ĉ

where

Ĉ = ∪_{t∈T} C_t
such that

(c_t, ĉ_t') ∈ E_α^λ → (∀a)[λ_t(c_t, a) = λ_t'(ĉ_t', a)]
    & {t = t' → (∀x_tt'')[(φ_tt''(c_t, x_tt''), φ_tt''(ĉ_t', x_tt'')) ∈ E_α^λ]}    (3.5)

Let E be the family of all such equivalences that satisfy relation (3.5), E = {E_α^λ : α ∈ A}. The trivial equivalence relation I = {(c_t, c_t) : c_t ∈ Ĉ} satisfies (3.5). Therefore, E is not empty. In general, there are many equivalence relations that satisfy (3.5). The family E can be ordered by the set-theoretic inclusion ⊆. For such an ordered set, the following lemma holds.
Lemma 3.1.† The set E has a maximal element E_m^λ with respect to the ordering ⊆.

PROOF. Let P be an arbitrary nonempty chain in E; that is, P is a linearly ordered subset of E, where P is denoted by P = {E_β^λ : β ∈ B}. Let

E_o = ∪_{β∈B} E_β^λ

E_o will be shown to be an element of E, i.e., E_o ∈ E. For any c_t, c_t', and c_t'', E_o satisfies:

Reflexivity: (c_t, c_t) ∈ E_β^λ for every β ∈ B → (c_t, c_t) ∈ E_o.
Symmetry: (c_t, c_t') ∈ E_o → (c_t, c_t') ∈ E_β^λ for some β ∈ B → (c_t', c_t) ∈ E_β^λ → (c_t', c_t) ∈ E_o.
Transitivity: (c_t, c_t') ∈ E_o & (c_t', c_t'') ∈ E_o → (c_t, c_t') ∈ E_β^λ and (c_t', c_t'') ∈ E_β'^λ for some β, β' ∈ B. Since P is linearly ordered, (c_t, c_t') ∈ E_β''^λ and (c_t', c_t'') ∈ E_β''^λ, where β'' = β or β'' = β' → (c_t, c_t'') ∈ E_β''^λ → (c_t, c_t'') ∈ E_o.

E_o is an equivalence relation, and furthermore, for any (c_t, ĉ_t') the following holds:

(c_t, ĉ_t') ∈ E_o → (c_t, ĉ_t') ∈ E_β^λ for some β ∈ B
  → (∀a)(λ_t(c_t, a) = λ_t'(ĉ_t', a))
    & {t = t' → (∀x_tt'')[(φ_tt''(c_t, x_tt''), φ_tt''(ĉ_t', x_tt'')) ∈ E_β^λ]}
  → (∀a)(λ_t(c_t, a) = λ_t'(ĉ_t', a))
    & {t = t' → (∀x_tt'')[(φ_tt''(c_t, x_tt''), φ_tt''(ĉ_t', x_tt'')) ∈ E_o]}

Hence, E_o is an element of E. Consequently, since an arbitrary chain in E has been shown to have an upper bound, E has a maximal element E_m^λ by Zorn's lemma. Q.E.D.

† This result holds under a more general condition. Let E^α ⊂ Ĉ × Ĉ be such that

(c_t, ĉ_t') ∈ E^α → P(c_t, ĉ_t') & {t = t' → (∀a)[λ_t(c_t, a) = λ_t'(ĉ_t', a)] & (∀x_tt'')[(φ_tt''(c_t, x_tt''), φ_tt''(ĉ_t', x_tt'')) ∈ E^α]}

where P(c_t, ĉ_t') is an arbitrary predicate such that P(c_t, c_t) is true for any c_t. If E is the family of all such equivalences as E^α, Lemma 3.1 holds.

Apparently, it is desirable to use the maximal equivalence E_m^λ and to define the state space as

C = Ĉ/E_m^λ
since E_m^λ yields the most "reduced" state space.

Consider now how the auxiliary functions can be defined in terms of the state space C so as to be consistent with the respective functions defined in terms of the state objects. Let ι_t : C/E_m^λ → C_t be such that

ι_t([c]) = c_t,    if c_t ∈ [c] ∩ C_t
ι_t([c]) = c_t*,   if [c] ∩ C_t = ∅

where c_t* is an arbitrary element of C_t and, in general, different for each [c]. Then let the state-transition function φ̂_tt' in the state space C be defined by

φ̂_tt' : (C/E_m^λ) × X_tt' → C/E_m^λ

such that

φ̂_tt'([c], x_tt') = [φ_tt'(ι_t([c]), x_tt')]    (3.6)

Now we can prove the composition property of φ̂:
φ̂_t't''(φ̂_tt'([c], x_tt'), x_t't'') = [φ_t't''(ι_t'([φ_tt'(ι_t([c]), x_tt')]), x_t't'')]
  = [φ_t't''(φ_tt'(ι_t([c]), x_tt'), x_t't'')]
  = [φ_tt''(ι_t([c]), x_tt' · x_t't'')]
  = φ̂_tt''([c], x_tt' · x_t't'')

where the second equality follows from condition (i).

Consider now the output function λ̂_t defined on the state space,

λ̂_t : (C/E_m^λ) × A → B

such that

λ̂_t([c], a) = λ_t(ι_t([c]), a)    (3.7)

as required. In summary, then, from (3.6) and (3.7), respectively, we have

φ̂_tt'([c], x_tt') = [φ_tt'(ι_t([c]), x_tt')]    and    λ̂_t([c], a) = λ_t(ι_t([c]), a)

so that the following diagram is commutative:

FIG. 3.3

Hence, C = Ĉ/E_m^λ is the state space, and the mappings

φ̂_tt' : C × X_tt' → C
λ̂_t : C × A → B

define a state transition and an output function, respectively.
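For a finite illustration of the quotient construction (all state sets, output tables, and transitions below are invented, not taken from the text), an output equivalence in the spirit of E_m^λ can be computed by partition refinement over the pooled state objects: start from blocks with identical output rows and refine until the transition condition is respected by the partition.

```python
# Illustrative finite construction: states of a small machine at two times
# are pooled into C^ = C_t U C_t', then merged by output-equivalence
# refinement.  All tables below are invented for the example.
STATES = ['p0', 'p1', 'q0', 'q1']
ALPHABET = [0, 1]
OUT = {('p0', 0): 0, ('p0', 1): 1, ('p1', 0): 1, ('p1', 1): 0,
       ('q0', 0): 0, ('q0', 1): 1, ('q1', 0): 1, ('q1', 1): 0}   # lambda(c, a)
NEXT = {('p0', 0): 'p0', ('p0', 1): 'p1', ('p1', 0): 'p1', ('p1', 1): 'p0',
        ('q0', 0): 'q0', ('q0', 1): 'q1', ('q1', 0): 'q1', ('q1', 1): 'q0'}

def block_of(partition, s):
    """Index of the block of the partition that contains state s."""
    return next(i for i, b in enumerate(partition) if s in b)

# Initial blocks: states with identical output rows lambda(c, .).
groups = {}
for s in STATES:
    groups.setdefault(tuple(OUT[(s, a)] for a in ALPHABET), set()).add(s)
partition = list(groups.values())

# Refine until transitions map each block consistently into blocks.
changed = True
while changed:
    changed = False
    for b in list(partition):
        sig = {}
        for s in b:
            key = tuple(block_of(partition, NEXT[(s, a)]) for a in ALPHABET)
            sig.setdefault(key, set()).add(s)
        if len(sig) > 1:
            partition.remove(b)
            partition.extend(sig.values())
            changed = True

print(sorted(sorted(b) for b in partition))  # -> [['p0', 'q0'], ['p1', 'q1']]
```

Here the four pooled states collapse to two equivalence classes, each class pairing a state at time t with its behavioral twin at time t'.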
(β) An alternative way to construct a state space is in terms of an equivalence relation defined by means of the shift operator. Essentially, the states will be defined as equivalent (regardless of the times at which they appear) if the respective system's behavior is the same (i.e., the same input-output functions) except for a shift in time. To introduce this equivalence, the following assumptions will be added:

(iii) The time set T is a stationary time set.
(iv) X is a stationary object, i.e., for every t ∈ T,

F^t(X) = X_t

where F^t is the shift operator defined in Section 3, Chapter II.

Let Ĉ = ∪_{t∈T} C_t. We shall introduce a relation E_α^ρ ⊂ Ĉ × Ĉ such that, for t' ≥ t,

(c_t, ĉ_t') ∈ E_α^ρ → (∀x_t)[F^{t'-t}(ρ_t(c_t, x_t)) = ρ_t'(ĉ_t', F^{t'-t}(x_t))]    (3.8)

and the transitions from such equivalent states under correspondingly shifted inputs again yield states equivalent in E_α^ρ.

Let E^ρ be the family of equivalence relations which satisfy (3.8); the trivial equivalence relation {(c_t, c_t) : c_t ∈ Ĉ} again satisfies (3.8). Hence, E^ρ = {E_α^ρ : α ∈ A} is not empty; furthermore, there may be many equivalence relations satisfying (3.8). The existence of a maximal equivalence relation E_m^ρ in E^ρ can be proven by the same argument as used to show the existence of E_m^λ. When the maximal equivalence is used, the state space is given by the quotient set

C = Ĉ/E_m^ρ

and the system response at t is the function

ρ̂_t : (Ĉ/E_m^ρ) × X_t → Y_t

such that

ρ̂_t([c], x_t) = ρ_t(c_t, x_t),    (∀c_t ∈ [c] ∩ C_t)

while the state-transition function is

φ̂_tt' : (Ĉ/E_m^ρ) × X_tt' → Ĉ/E_m^ρ

such that

φ̂_tt'([c], x_tt') = [φ_tt'(c_t, x_tt')],    (∀c_t ∈ [c] ∩ C_t)

If ρ̂_t and φ̂_tt' are to be total functions, they ought to be extended appropriately; in particular, the extended system ought to preserve basic properties of the original system, e.g., linearity.

Which of the above equivalences, E_α^λ or E_α^ρ, is used to introduce the state space depends ultimately upon the circumstances of the application. However, if both equivalences are equally good candidates, a comparison between the two approaches indicates the following key distinctions:

(1) Method (β) requires additional assumptions, namely, (iii) and (iv).
(2) The state space constructed by E_m^ρ is, in general, larger than the space constructed by E_m^λ.

The state space can be constructed by using other equivalence relations. A notable example is when T is a metric space. This case can be shown to be a special case of method (β).

In conclusion, it should be pointed out again that we consider the state space a secondary concept, which when introduced has to be consistent with the primary information given in the input-output pairs or has to be constructed in the manner shown in this section. Often the state space is given a priori in the definition of the system itself. The discussion in this section explains what the origin of such a concept is and what conditions it has to satisfy even if introduced a priori.
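The shift-based equivalence of method (β) can be illustrated for a time-invariant discrete-time system (the system below is invented): a state at time t and the same state reused at a later time t' produce the same output once the input is shifted by t' − t.

```python
# Illustrative check of the shift-based equivalence (3.8) for an invented,
# time-invariant system: the response does not depend on absolute time, so
# only the relative labeling of the input samples matters.
def rho(t, c, x_t):
    """Response at time t: running sum started from state c (time-invariant,
    so the absolute time t does not enter the computation)."""
    out, acc = [], c
    for u in x_t:
        acc += u
        out.append(acc)
    return out

t, t_prime, c = 0, 2, 5.0
x_t = [1.0, -2.0, 4.0]

# F^{t'-t} relabels the time axis; the sample values are unchanged, so the
# shifted output of (c at t) equals the output of (c at t') on shifted input.
lhs = rho(t, c, x_t)
rhs = rho(t_prime, c, x_t)
print(lhs == rhs)   # -> True: the two states are E^rho-equivalent
```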
(c) Commutative Diagram of Auxiliary Functions

Relationships between auxiliary functions can be represented in a convenient way by the diagram shown in Fig. 3.4. The arrows denote mappings, and each of the loops represents a commutative relationship of the mappings involved. The mappings in the diagram are: the auxiliary functions ρ_0, ρ_t, ρ_t', λ_t, λ_t'; the restrictions R_{T_t'}, R_(t'), etc.; the identity operator I; and the appropriate compositions of those mappings, F_1, F_2, F_3, F_4, defined as follows.

The mapping F_1 is given by the algebraic diagram in Fig. 3.5, where I represents the identity, and I × R_(t) indicates that the mapping is applied componentwise; i.e., the first component is simply mapped into itself, while the second is transformed by the restriction operator.

The second composite mapping, F_2, is defined by the diagram in Fig. 3.6. Again, I × R_{T_t'} indicates that the first component is mapped into itself while the second is restricted to T_t'. The third mapping, F_3, is given by the diagram in Fig. 3.7, where φ_tt' is the state-transition function, while I × R_{T_tt'} has the same interpretation as for F_1 and F_2. Finally, the mapping F_4 is given by the diagram in Fig. 3.8, where nat E_m^λ is the natural mapping of the equivalence relation E_m^λ.
FIG. 3.4 (S ⊂ A^T × B^T and the auxiliary functions)

FIG. 3.5

FIG. 3.6

FIG. 3.7

FIG. 3.8
(d) Construction of the State-Space Representation

We can now indicate how the results developed so far are related in providing conditions for the existence of various auxiliary functions and associated system representations, and, furthermore, how they can be used in an orderly procedure to construct all the necessary machinery for a state-space description of a general time system. Figure 3.9 indicates the relationship between the theorems that give the conditions for the existence of different kinds of representations. At each step a new auxiliary function is introduced, and a different representation of the system becomes possible. Some standard terminology usually associated with the conditions given by the corresponding theorems is also indicated in the diagram.
FIG. 3.9. Conditions, system representations, and auxiliary functions:

Time system: S ⊂ A^T × B^T
  ↓ Existence of initial-response function (Theorems 1.1 and 4.2, Chapter II)
ρ_0 : C_0 × X → Y (initial-response function)
  ↓ Existence of systems-response family
ρ̄ = {ρ_t : C_t × X_t → Y_t} (systems-response family)
  ↓ Realizability (Theorems 1.1 and 1.2, Chapter III)
φ̄ = {φ_tt' : C_t × X_tt' → C_t'} (state-transition family)
  ↓ Canonical (decomposition) representation (Theorem 2.1, Chapter III)
λ̄ = {λ_t : C_t × X(t) → Y(t)} (output-function family)
  ↓ State-space representation (Theorem 3.1, Chapter III)
(φ̂, λ̂)
Figure 3.10 gives a procedure for the construction of the state space. Starting from a given initial state object C_0, there are three steps in the construction process:

(i) First, for any t ∈ T, the set C_0 × X^t is taken as the state object, i.e., C_t = C_0 × X^t.
FIG. 3.10. Construction of the state space:

Initial state object C_0
  → Step (i): C_t = C_0 × X^t (state object at t)
  → Step (ii): Ĉ = ∪_{t∈T} C_t (aggregation of the state objects)
  → Step (iii): C = Ĉ/E_m^λ (state space)

(ii) Second, the union of all state objects is formed:

Ĉ = ∪_{t∈T} C_t

(iii) Finally, the equivalence E_m^λ is defined in Ĉ, and the quotient set C = Ĉ/E_m^λ is taken as the state space.
Let us show how the auxiliary functions associated with the construction process shown in Fig. 3.9 can be defined. Given the initial response

ρ_0 : C_0 × X → Y

the response function at t,

ρ_t : C_t × X_t → Y_t

associated with the state object C_t = C_0 × X^t, is defined in terms of ρ_0 so that

ρ_t((c_0, x^t), x_t) = ρ_0(c_0, x^t · x_t) | T_t    (3.9)

The state-transition function on T_tt',

φ_tt' : C_t × X_tt' → C_t'

associated with C_t = C_0 × X^t and C_t' = C_0 × X^t', is defined by

φ_tt'((c_0, x^t), x_tt') = (c_0, x^t · x_tt')    (3.10)

To show consistency of φ̄ as defined by Eq. (3.10) with the given response family ρ̄, observe that for a given c_t = (c_0, x^t),

ρ_t'(φ_tt'(c_t, x_tt'), x_t') = ρ_t'((c_0, x^t · x_tt'), x_t')
  = ρ_0(c_0, x^t · x_tt' · x_t') | T_t'
  = (ρ_0(c_0, x^t · x_tt' · x_t') | T_t) | T_t'
  = ρ_t(c_t, x_tt' · x_t') | T_t'

To show that φ̄ has the required composition property, notice that, for a given c_t = (c_0, x^t),

φ_t't''(φ_tt'(c_t, x_tt'), x_t't'') = φ_t't''((c_0, x^t · x_tt'), x_t't'')
  = (c_0, x^t · x_tt' · x_t't'')
  = φ_tt''((c_0, x^t), x_tt' · x_t't'')
  = φ_tt''(c_t, x_tt' · x_t't'')

Therefore, (ρ̄, φ̄) as defined by Eqs. (3.9) and (3.10) is a dynamical system representation of the system S consistent with the given ρ_0. The output function λ_t is again defined in terms of ρ_t:

λ_t(c_t, x_t(t)) = ρ_t(c_t, x_t)(t)

Finally, the state-transition and output functions on the state space C = Ĉ/E_m^λ are constructed in the manner described in Section 3b, namely,

ι_t([c]) = c_t,    if c_t ∈ [c] ∩ C_t
ι_t([c]) = c_t*,   otherwise

φ̂_tt'([c], x_tt') = [φ_tt'(ι_t([c]), x_tt')]
λ̂_t([c], a) = λ_t(ι_t([c]), a)

where c_t* is an arbitrary element of C_t. As has been shown in Section 3b, (φ̂, λ̂) as defined above satisfies the required consistency and composition properties, so that (φ̂, λ̂) is a canonical state-space representation of S.
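The construction of Eqs. (3.9) and (3.10) can be transcribed directly for a discrete-time illustration (ρ_0 below is an invented running-sum system): a state at time t is the pair (c_0, x^t), the transition is concatenation of input segments, and both the composition and the consistency properties can be checked numerically.

```python
# Direct transcription of Eqs. (3.9)-(3.10) for an invented discrete-time
# system: rho_0 replays the whole input from the initial state, the state at
# time t is the pair (c0, x^t), and the transition concatenates segments.
def rho0(c0, x):
    """Invented initial response: running sum started from c0."""
    out, acc = [], c0
    for u in x:
        acc += u
        out.append(acc)
    return out

def phi(state, x_seg):                 # phi_tt'((c0, x^t), x_tt')
    c0, prefix = state
    return (c0, prefix + x_seg)

def rho(state, x_t):                   # rho_t((c0, x^t), x_t), Eq. (3.9)
    c0, prefix = state
    return rho0(c0, prefix + x_t)[len(prefix):]   # restriction to T_t

c0, seg1, seg2 = 1.0, [2.0, 3.0], [4.0]
s0 = (c0, [])

# Composition property: stepping over two segments equals one long step.
assert phi(phi(s0, seg1), seg2) == phi(s0, seg1 + seg2)

# Consistency: responding from the advanced state matches the restriction
# of the response from the earlier state.
assert rho(phi(s0, seg1), seg2) == rho(s0, seg1 + seg2)[len(seg1):]
print('composition and consistency hold')
```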
The construction procedure shown in Figs. 3.9 and 3.10 starts from a given initial systems response, i.e., a given initial state object. For different selections of initial state objects, different state spaces will be constructed. A past-determined system, however, yields a unique natural state space and a unique state-space representation. This will be discussed in Chapter V.
Chapter IV

Linearity

The objective of this chapter is to investigate how the problems considered so far are affected by the assumption of linearity. The results from this chapter will point out some properties due to linearity and will be used in subsequent chapters when dealing with linear systems.

The separability of the system's response function ρ into the state response ρ_1 and the input response ρ_2 is established for the case of a linear system. Then the realizability theory for linear systems is developed in reference to these two parts of the system's response. Of particular interest is the realizability of the input-response family ρ_2. When the system is strongly nonanticipatory, ρ_2 is realizable if and only if it is equivalent to the consecutive application of two transformations: the first transformation maps inputs into states, while the second maps states into the output values. When a linear system is time invariant, the representation of ρ_2 by two consecutive maps appears in a form which is convenient for direct application, e.g., to the case of a system specified by differential equations. Actually, the developed results (Theorem 3.3, Chapter IV) give the minimal assumptions needed to represent ρ_2 by the two maps having the required property, which shows that the key assumption is linearity rather than specification of the system by differential equations.

On the basis of the developed results, an orderly procedure for the construction of the state space of a linear system is given. In contrast to the case of a general time system, the construction is not based on an assumed initial state object but rather on a special subset of the total input-output set, referred to as the algebraic core. For any linear system, the state space constructed from such an algebraic core is unique; furthermore, such a state space is minimal in an appropriate sense.

1. LINEAR TIME SYSTEMS
Linearity is an important property for a system because it enables one to arrive at conclusions valid for the entire class of inputs while considering only a few of them. For a system to be linear, first of all the input object has to have an appropriate structure and, second, the system's transformation has to preserve that structure in a given sense, so that an analogous structure is also present in the output set. Recall the definition of a (complete) linear system given in Chapter II: A system S ⊂ X × Y is linear if and only if X and Y are linear algebras over the same field 𝒜, and S satisfies the following:

(∀s)(∀s')[s ∈ S & s' ∈ S → s + s' ∈ S]
(∀s)(∀α)[α ∈ 𝒜 & s ∈ S → α · s ∈ S]
If X and Y are time objects, a usual realization of the algebra operations in X and Y is in terms of the linear structure in the alphabets A and B. For X we have

x'' = x + x' ↔ (∀t)[x''(t) = x(t) + x'(t)]
x' = αx ↔ (∀t)[x'(t) = αx(t)]    (4.1)

and completely analogously for Y. In what follows we shall assume that X and Y have the algebras defined by Eq. (4.1). More specifically, then, we have the following definition.
Definition 1.1. Let S be a time system such that, furthermore:

(i) A and B are linear algebras over the same field 𝒜;
(ii) X is a linear algebra with the operations + and · such that

(∀t)[(x + x')(t) = x(t) + x'(t)]
(∀α)[(α · x)(t) = α · x(t)]

and Y is a linear algebra similarly defined.

Then S is a (complete) linear time system if and only if

(iii) (x, y) ∈ S & (x̂, ŷ) ∈ S → (x + x̂, y + ŷ) ∈ S
(iv) (x, y) ∈ S & α ∈ 𝒜 → (αx, αy) ∈ S
We shall also assume that the object X is complete relative to concatenation (Definition 2.3, Chapter II), which, due to the linearity of X, can be expressed by the condition

(∀x)(x ∈ X → x^t · o ∈ X)

As a matter of fact, suppose X satisfies the above condition and let x, x̂ be two arbitrary elements of X; then x^t · o ∈ X and x̂^t · o ∈ X by definition. Since X is a linear algebra, x̂ − x̂^t · o = o · x̂_t ∈ X, and hence x^t · o + o · x̂_t = x^t · x̂_t ∈ X, which satisfies the completeness condition of X as given in Definition 2.3, Chapter II.

When a system is linear, some deeper and more specific results can be developed. This is, of course, due to the linear structure that has been added to the system's objects. The development in this chapter will follow the same line as for the general time systems in the preceding chapters. However, the results for the linear systems will not be a direct application of the general results, since additional conditions will have to be satisfied (e.g., the requirement for the linearity of the state objects and the state space). Furthermore, some special facts specific to linear systems (e.g., the realizability of the strongly nonanticipatory systems) will have to be established.

2. DECOMPOSITION OF SYSTEMS RESPONSE: STATE AND INPUT-RESPONSE FUNCTIONS
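A minimal sketch of conditions (iii) and (iv), using an invented discrete-time convolution system: the set of input-output pairs of such a system is closed under addition and scalar multiplication, which is exactly what the definition of a linear time system requires.

```python
# Illustrative linear time system (invented here): y(t) = sum_k h(k) x(t-k).
# Conditions (iii)-(iv): if (x1, y1) and (x2, y2) are in S, then so are
# (x1 + x2, y1 + y2) and (alpha*x1, alpha*y1).
H = [1.0, 0.5, 0.25]        # an illustrative impulse response

def apply_system(x):
    """One input-output pair (x, y) of the system S."""
    return [sum(H[k] * x[t - k] for k in range(len(H)) if 0 <= t - k < len(x))
            for t in range(len(x))]

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [0.5, -1.0, 0.0, 2.0]
alpha = 3.0

y1, y2 = apply_system(x1), apply_system(x2)
y_sum = apply_system([a + b for a, b in zip(x1, x2)])
y_scaled = apply_system([alpha * a for a in x1])

print(all(abs(s - (a + b)) < 1e-12 for s, a, b in zip(y_sum, y1, y2)))   # -> True
print(all(abs(s - alpha * a) < 1e-12 for s, a in zip(y_scaled, y1)))     # -> True
```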
As shown in Chapter II, the systems-response function for a linear system can be decomposed, in reference to the algebraic operation in Y, into two functions representing the response of the system due to the state and due to the input separately. The total response is then the sum of the two separated responses. Such a separation of responses is, of course, possible for linear time systems also.
Definition 2.1. Let S ⊂ X × Y be a linear time system and ρ_0 a mapping ρ_0 : C_0 × X → Y. ρ_0 is termed a linear initial systems-response function if and only if

(i) ρ_0 is consistent with S, i.e.,

(x, y) ∈ S ↔ (∃c)[y = ρ_0(c, x)]

(ii) C_0 is a linear algebra over the field 𝒜;
(iii) there exist two linear mappings ρ_10 : C_0 → Y and ρ_20 : X → Y such that for all (c, x) ∈ C_0 × X

ρ_0(c, x) = ρ_10(c) + ρ_20(x)

C_0 is referred to as the linear initial state object. The mapping ρ_10 : C_0 → Y is termed the initial state response, while ρ_20 : X → Y is the initial input response.
Notice the distinction between the initial systems-response function for a general time system and the linear initial systems-response function; the first concept requires only (i), while for the second, conditions (ii) and (iii) have to be satisfied also. From Theorem 1.2, Chapter II, we have immediately the following proposition.

Proposition 2.1. A time system is linear if and only if it has a linear initial systems-response function.

A response family for a linear system is defined in the obvious manner in the following definition.
Definition 2.2. Let S be a linear time system. Then a family of linear maps ρ̄ = {ρ_t : C_t × X_t → Y_t} is the linear-response family for S if and only if ρ̄ is consistent with S; that is, for every t ∈ T, ρ_t is a linear initial systems response for S_t.
In view of Definition 2.2, every linear initial systems response can be decomposed into two maps ρ_1t : C_t → Y_t and ρ_2t : X_t → Y_t such that

ρ_t(c_t, x_t) = ρ_1t(c_t) + ρ_2t(x_t)

ρ_1t and ρ_2t will be referred to as the state response and the input response at t, respectively. From Theorem 1.2, Chapter II, we have immediately the following proposition.
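The decomposition ρ_t = ρ_1t + ρ_2t can be illustrated with an invented first-order linear system: the total response is the sum of the zero-input (state) response and the zero-state (input) response.

```python
# Illustrative decomposition rho_t(c, x) = rho_1t(c) + rho_2t(x) for an
# invented linear system x(k+1) = a x(k) + u(k), y(k) = x(k).
A_COEF = 0.5

def rho(c, u):
    """Total response: evolve from state c under input sequence u."""
    out, s = [], c
    for v in u:
        s = A_COEF * s + v
        out.append(s)
    return out

def rho1(c):                      # state response: input held at zero
    return rho(c, [0.0] * 4)

def rho2(u):                      # input response: initial state zero
    return rho(0.0, u)

c, u = 2.0, [1.0, -1.0, 0.5, 0.0]
total = rho(c, u)
split = [a + b for a, b in zip(rho1(c), rho2(u))]
print(all(abs(p - q) < 1e-12 for p, q in zip(total, split)))   # -> True
```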
Proposition 2.2. Every linear time system has a linear-response family.

Finally, a linear dynamical system is defined in the following manner.

Definition 2.3. Let S be a linear system and ρ̄ its linear-response family. S is a linear dynamical system (i.e., has a linear dynamical-system representation) if and only if for all t, t' ∈ T there exists a pair of linear maps φ_1tt' : C_t → C_t' and φ_2tt' : X_tt' → C_t', with sum

φ_tt'(c_t, x_tt') = φ_1tt'(c_t) + φ_2tt'(x_tt')

such that (ρ̄, φ̄) is a dynamical-system representation of S, i.e.,

ρ_t'(φ_tt'(c_t, x_tt'), x_t') = ρ_t(c_t, x_tt' · x_t') | T_t'
φ_t't''(φ_tt'(c_t, x_tt'), x_t't'') = φ_tt''(c_t, x_tt' · x_t't'')
φ_tt(c_t, x_tt) = c_t
3. REALIZATION THEORY
The realization problem for linear systems is analogous to the case of general time systems, except that the initially given systems-response function (whose dynamic realizability is investigated) is linear and is actually given by its component mappings ρ_1 and ρ_2. For the realization by a linear dynamical system, it is required in addition, of course, that φ̄ is linear in the sense that the conditions of Definition 2.3 are satisfied.

The consistency conditions and the realizability conditions for ρ̄ are given in Theorem 2.1, Chapter II, and Theorem 1.1, Chapter III, for general time systems. As mentioned above, these theorems cannot cover directly the case of linear time systems. However, some useful information for the linear time systems can be deduced from them.

For the state response, i.e., ρ_1, the case when the input is zero, i.e., x_t = o, is of special interest. Conditions (P2) and (P3) now take the form

(∀c_t)(∀x_tt')(∃c_t')(ρ_1t'(c_t') = ρ_t(c_t, x_tt' · o) | T_t')    (4.2)

(∀c_t')(∃c_t)(∃x_tt')(ρ_1t'(c_t') = ρ_t(c_t, x_tt' · o) | T_t')    (4.3)

Conditions (4.2) and (4.3) will be referred to as the state-response consistency condition. Their conceptual interpretation is the following: By definition, for every (c_t, x_tt'), if the state-input pair (c_t, x_tt' · o) is applied to the system, the output observed from t' is y_t' = ρ_t(c_t, x_tt' · o) | T_t'. Condition (4.2) then requires that for every such output there exists a state c_t' so that if the state-input pair (c_t', o) is applied, y_t' will be obtained. Condition (4.3) furthermore requires that only such states are acceptable in C_t'.

For the input response, i.e., ρ_2, the case of interest is when c_t = o and x_tt' = o, so that condition (P3) becomes

(∃c_t')(∀x_t')(ρ_t'(c_t', x_t') = ρ_t(o, o · x_t') | T_t')

Since ρ_t is linear, ρ_t'(c_t', o) = ρ_t(o, o) | T_t' = o holds for x_t' = o. Hence we have

(∀x_t')(ρ_2t'(x_t') = ρ_2t(o · x_t') | T_t')    (4.4)

Condition (4.4) will be referred to as the input-response consistency condition. It requires that zero input, when applied to the input-response function over a given time interval, does not affect the future evolution of the input-response function. In other words, when the input is such that its values are zero up to the time t', i.e., x_t = o · x_t', the restriction of the corresponding output from time t' on, ρ_2t(o · x_t') | T_t', is equal to the output of the input-response function starting from t' and applying x_t'. In general,

ρ_2t'(x_t') ≠ ρ_2t(x_tt' · x_t') | T_t'
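Condition (4.4) can be checked numerically for an invented time-invariant convolution system: prepending zeros to the input and restricting the output from t' on reproduces the input response started at t'.

```python
# Sketch of the input-response consistency condition (4.4) for an invented
# convolution system (time-invariant, zero initial state).
H = [1.0, 0.5, 0.25]        # an illustrative impulse response

def rho2(x):
    """Zero-state input response: y(t) = sum_k h(k) x(t-k)."""
    return [sum(H[k] * x[t - k] for k in range(len(H)) if 0 <= t - k < len(x))
            for t in range(len(x))]

t_prime = 3
x_tp = [2.0, -1.0, 4.0]

direct = rho2(x_tp)                                  # rho_2t'(x_t')
padded = rho2([0.0] * t_prime + x_tp)[t_prime:]      # rho_2t(o . x_t') | T_t'

print(all(abs(a - b) < 1e-12 for a, b in zip(direct, padded)))   # -> True
```

The equality holds here because the illustrative system is time invariant and starts from the zero state, which is exactly the situation condition (4.4) describes.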
The consistency of ρ̄ for the case of linear time systems can now be characterized in terms of the newly introduced consistency conditions.

Theorem 3.1. Let X and Y be linear time objects, X ⊂ A^T, Y ⊂ B^T, C̄ = {C_t : t ∈ T} a family of linear spaces, and ρ̄₁, ρ̄₂ two families of linear mappings, ρ̄₁ = {ρ_{1t} : C_t → Y_t}, ρ̄₂ = {ρ_{2t} : X_t → Y_t}, with

ρ̄ = {ρ_t : C_t × X_t → Y_t & (∀(c_t, x_t))[ρ_t(c_t, x_t) = ρ_{1t}(c_t) + ρ_{2t}(x_t)]}

Suppose ρ̄₂ satisfies the input-response consistency condition: for all t ≤ t',

(∀x_{t'})[ρ_{2t'}(x_{t'}) = ρ_{2t}(o · x_{t'}) | T_{t'}]

Then there exists a linear system S ⊂ X × Y such that ρ̄ is a linear response family for S if and only if ρ̄ satisfies the state-response consistency condition: for all t ≤ t',

(i) (∀(c_t, x_{tt'}))(∃c_{t'})(ρ_{1t'}(c_{t'}) = ρ_t(c_t, x_{tt'} · o) | T_{t'})
(ii) (∀c_{t'})(∃(c_t, x_{tt'}))(ρ_{1t'}(c_{t'}) = ρ_t(c_t, x_{tt'} · o) | T_{t'})

PROOF. First, we shall prove the if part. Let S_t^ρ = {(x_t, ρ_t(c_t, x_t)) : c_t ∈ C_t & x_t ∈ X_t} and S = S_o^ρ. Suppose (x_t, y_t) ∈ S_t^ρ and t ≤ t'. Then, due to the definition of S_t^ρ, y_t = ρ_{1t}(c_t) + ρ_{2t}(x_t) holds for some c_t ∈ C_t. Hence,

y_t | T_{t'} = (ρ_{1t}(c_t) + ρ_{2t}(x_t)) | T_{t'}
= (ρ_{1t}(c_t) + ρ_{2t}(x_{tt'} · o) + ρ_{2t}(o · x_{t'})) | T_{t'}
= ρ_t(c_t, x_{tt'} · o) | T_{t'} + ρ_{2t}(o · x_{t'}) | T_{t'}

It follows from condition (i) and the input-response consistency that y_t | T_{t'} = ρ_{1t'}(c_{t'}) + ρ_{2t'}(x_{t'}) for some c_{t'} ∈ C_{t'}. Hence, (x_t, y_t) | T_{t'} ∈ S_{t'}^ρ, or S_t^ρ | T_{t'} ⊂ S_{t'}^ρ. Conversely, suppose (x_{t'}, y_{t'}) ∈ S_{t'}^ρ. Then y_{t'} = ρ_{1t'}(c_{t'}) + ρ_{2t'}(x_{t'}) for some c_{t'} ∈ C_{t'}. It follows from condition (ii) that

ρ_{1t'}(c_{t'}) = ρ_t(c_t, x_{tt'} · o) | T_{t'} = (ρ_{1t}(c_t) + ρ_{2t}(x_{tt'} · o)) | T_{t'}

for some c_t ∈ C_t and x_{tt'} ∈ X_{tt'}. Hence,

y_{t'} = (ρ_{1t}(c_t) + ρ_{2t}(x_{tt'} · o)) | T_{t'} + ρ_{2t}(o · x_{t'}) | T_{t'} = (ρ_{1t}(c_t) + ρ_{2t}(x_{tt'} · x_{t'})) | T_{t'}

Therefore, (x_{t'}, y_{t'}) ∈ S_t^ρ | T_{t'}, or S_{t'}^ρ ⊂ S_t^ρ | T_{t'}. Combining the first part of the proof with the present one, we have S_{t'}^ρ = S_t^ρ | T_{t'} for t ≤ t'; therefore, S_t^ρ = S_o^ρ | T_t, i.e., ρ̄ is a response family for S.

Next, we shall prove the only if part. Notice that S_t^ρ = S^ρ | T_t for every t ∈ T implies and is implied by S_{t'}^ρ = S_t^ρ | T_{t'} for any t ≤ t', because S_t^ρ = S_o^ρ | T_t and S_{t'}^ρ = S_o^ρ | T_{t'} imply

S_t^ρ | T_{t'} = (S_o^ρ | T_t) | T_{t'} = S_o^ρ | T_{t'} = S_{t'}^ρ

Conditions (i) and (ii) then follow from S_{t'}^ρ = S_t^ρ | T_{t'} by the same decomposition of ρ_t as used in the first part. Q.E.D.
3. Realization Theory
We can now present the main result in the realization theory for linear systems.

Theorem 3.2. Let X and Y be linear time objects, X ⊂ A^T, Y ⊂ B^T, C̄ = {C_t : t ∈ T} a family of linear spaces, and ρ̄₁, ρ̄₂ two families of linear mappings, ρ̄₁ = {ρ_{1t} : C_t → Y_t}, ρ̄₂ = {ρ_{2t} : X_t → Y_t}, with

ρ̄ = {ρ_t : C_t × X_t → Y_t & (∀(c_t, x_t))[ρ_t(c_t, x_t) = ρ_{1t}(c_t) + ρ_{2t}(x_t)]}

ρ̄ is then realizable by a linear dynamical system (i.e., there exists a family of linear state-transition functions φ̄ such that (ρ̄, φ̄) is a linear dynamical system) if and only if ρ̄ satisfies both the state-response consistency condition and the input-response consistency condition, i.e., for all t ≤ t' in T,

(i) (∀(c_t, x_{tt'}))(∃c_{t'})[ρ_{1t'}(c_{t'}) = ρ_t(c_t, x_{tt'} · o) | T_{t'}] and (∀c_{t'})(∃(c_t, x_{tt'}))[ρ_{1t'}(c_{t'}) = ρ_t(c_t, x_{tt'} · o) | T_{t'}]

(ii) (∀x_{t'})[ρ_{2t'}(x_{t'}) = ρ_{2t}(o · x_{t'}) | T_{t'}]
PROOF. Let us consider the only if part first. Let (ρ̄, φ̄) be a linear dynamical system, where ρ̄ = {ρ_t} is the family of linear mappings described in Theorem 3.2. Then, since

ρ_t(c_t, x_t) | T_{t'} = ρ_{t'}(φ_{tt'}(c_t, x_{tt'}), x_{t'})

holds for every c_t, x_{tt'}, and x_{t'}, we have

ρ_{2t'}(x_{t'}) = ρ_{2t}(o · x_{t'}) | T_{t'}

when c_t = o and x_{tt'} = o. Hence, {ρ_t} satisfies the input-response consistency. It then follows from Theorem 3.1 that the state-response consistency is also satisfied.

Next, let us consider the if part. First, let us consider the case where ρ_{1t} is a one-to-one mapping for every t ∈ T, so that ρ_{1t} : C_t → ρ_{1t}(C_t) has the inverse ρ_{1t}^{-1}. To prove the realizability, we have to (i) show that ρ̄ = {ρ_t} is a response family; (ii) construct a linear state-transition family {φ_{tt'}}; (iii) show that {φ_{tt'}} is consistent with {ρ_t} and satisfies the composition property.

(i) Since {ρ_t} satisfies the state-response consistency and the input-response consistency, {ρ_t} is a response family by Theorem 3.1.

(ii) Let φ_{1tt'} : C_t → C_{t'} be such that

φ_{1tt'}(c_t) = ρ_{1t'}^{-1}(ρ_{1t}(c_t) | T_{t'})

φ_{1tt'} is properly defined because the state-response consistency implies that ρ_{1t}(c_t) | T_{t'} = ρ_{1t'}(c_{t'}) for some c_{t'}, or ρ_{1t}(c_t) | T_{t'} ∈ ρ_{1t'}(C_{t'}) for every c_t, and ρ_{1t'} is a one-to-one mapping. Let φ_{2tt'} : X_{tt'} → C_{t'} be such that

φ_{2tt'}(x_{tt'}) = ρ_{1t'}^{-1}(ρ_{2t}(x_{tt'} · o) | T_{t'})

φ_{2tt'} is also properly defined because the state-response consistency implies that ρ_{2t}(x_{tt'} · o) | T_{t'} ∈ ρ_{1t'}(C_{t'}). Since the restriction is a linear operation and since the inverse of a linear operator is linear, φ_{1tt'} and φ_{2tt'} are linear, and hence so is φ_{tt'}, where

φ_{tt'}(c_t, x_{tt'}) = φ_{1tt'}(c_t) + φ_{2tt'}(x_{tt'})

(iii) From the construction of {ρ_t} and {φ_{tt'}} and the input-response consistency, it follows that

ρ_{t'}(φ_{tt'}(c_t, x_{tt'}), x_{t'}) = ρ_{1t'}(φ_{tt'}(c_t, x_{tt'})) + ρ_{2t'}(x_{t'})
= ρ_{1t'}(φ_{1tt'}(c_t)) + ρ_{1t'}(φ_{2tt'}(x_{tt'})) + ρ_{2t'}(x_{t'})
= ρ_{1t}(c_t) | T_{t'} + ρ_{2t}(x_{tt'} · o) | T_{t'} + ρ_{2t}(o · x_{t'}) | T_{t'}
= ρ_t(c_t, x_t) | T_{t'}

Hence, {φ_{tt'}} is consistent with {ρ_t}. Since ρ̄ is reduced (for ρ_{1t} is a one-to-one mapping), Theorem 2.2, Chapter II, is applicable to the present case, and hence {φ_{tt'}} satisfies the composition property. Hence, {φ_{tt'}} is a desired state-transition family.
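The construction of φ_{1tt'} in part (ii) of the proof can be tried out on a finite-dimensional case. The sketch below (an assumed observable LTI pair, not an example from the book) computes ρ_{1t'}^{-1}(ρ_{1t}(c_t) | T_{t'}) by least squares on a finite output window and checks that it agrees with the familiar state transition c_{t'} = A^{t'-t} c_t.

```python
# Sketch (assumed example): for an observable pair (A, H), the state response
# rho_{1t}(c)(tau) = H A^(tau - t) c is one-to-one, so the transition map
# phi_{1tt'} = rho_{1t'}^{-1}( rho_{1t}(c) | T_{t'} ) can be recovered by
# solving a least-squares problem on a finite output window.
import numpy as np

A = np.array([[0.0, 1.0], [-0.5, 1.0]])
H = np.array([[1.0, 0.0]])

def obs_matrix(k):
    # stacks H, HA, ..., HA^(k-1): a finite-window version of rho_{1t'}
    return np.vstack([H @ np.linalg.matrix_power(A, i) for i in range(k)])

k = 4                       # window length, enough for observability (n = 2)
O = obs_matrix(k)

t, tp = 0, 3                # t < t'
c_t = np.array([1.0, -2.0])

# rho_{1t}(c_t) | T_{t'} : the zero-input output observed from time t' on
y_future = np.vstack([H @ np.linalg.matrix_power(A, tp - t + i) @ c_t
                      for i in range(k)])

# phi_{1tt'}(c_t) = rho_{1t'}^{-1}(y_future), computed via least squares
c_tp, *_ = np.linalg.lstsq(O, y_future, rcond=None)
c_tp = c_tp.ravel()

assert np.allclose(c_tp, np.linalg.matrix_power(A, tp - t) @ c_t)
```

The point of the exercise is that the transition map is not postulated: it is forced on us by the response family, exactly as in the proof.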
The general case, where ρ_{1t} is not necessarily a one-to-one mapping, will be considered now. Let E_t ⊂ C_t × C_t be such that

(c_t, c_t') ∈ E_t ↔ ρ_{1t}(c_t) = ρ_{1t}(c_t')

Then it is easily verified that E_t is a congruence relation. Let C_t/E_t = {[c_t]} be the quotient set, which is a linear space in the obvious sense, and for every t ∈ T let ρ̂_{1t} : C_t/E_t → Y_t be the linear map such that ρ̂_{1t}([c_t]) = ρ_{1t}(c_t); ρ̂_{1t} is properly defined because E_t is a congruence relation. It is easy to show that {ρ̂_t = (ρ̂_{1t}, ρ_{2t})} satisfies the state-response consistency and the input-response consistency when {ρ_t = (ρ_{1t}, ρ_{2t})} does. Therefore, we have a family of state-transition functions {φ̂_{tt'}} which is consistent with {ρ̂_t}. We shall proceed in two steps: (i) define a family of state transitions {φ_{tt'}} corresponding to {ρ_t}; (ii) show that {(ρ_t, φ_{tt'})} is a linear dynamical system.

The first step: let μ_t : C_t/E_t → C_t be a linear mapping such that μ_t([c_t]) ∈ [c_t], which can be constructed by using the same technique as used in the proof of Theorem 1.2, Chapter II. We shall write μ for μ_t when the meaning is apparent from the context. The state-transition family is then defined as

φ_{1tt'} : C_t → C_{t'} such that φ_{1tt'}(c_t) = μ(φ̂_{1tt'}([c_t]))
φ_{2tt'} : X_{tt'} → C_{t'} such that φ_{2tt'}(x_{tt'}) = μ(φ̂_{2tt'}(x_{tt'}))

The second step: we have only to show that {φ_{tt'}} defined above satisfies the composition property and is consistent with {ρ_t}. First, we should notice that from the definitions of ρ̂_{1t} and [c_t],

ρ_{1t}(μ([c_t])) = ρ̂_{1t}([c_t])    (4.6)

It can be readily shown that

[φ_{tt'}(c_t, x_{tt'})] = φ̂_{tt'}([c_t], x_{tt'})    (4.8)

The validity of the consistency then follows from (4.6) and (4.8), and the validity of the composition property follows from the chain

φ_{t't''}(φ_{tt'}(c_t, x_{tt'}), x_{t't''}) = μ(φ̂_{t't''}([φ_{tt'}(c_t, x_{tt'})], x_{t't''}))
= μ(φ̂_{t't''}(φ̂_{tt'}([c_t], x_{tt'}), x_{t't''}))    [from (4.8)]
= μ(φ̂_{tt''}([c_t], x_{tt''}))    [by the composition property of {φ̂_{tt'}}]
= φ_{tt''}(c_t, x_{tt''})
Q.E.D.

Let us briefly compare the realizability conditions for a general time system and for a linear time system. Regarding consistency of ρ̄ with a time system S, Theorems 2.1 (Chapter II) and 3.1 ought to be compared. Suppose condition (i) from Theorem 3.1 holds, i.e.,

(∀c_o)(∀x^t)(∃c_t)(ρ_{1t}(c_t) = (ρ_{1o}(c_o) + ρ_{2o}(x^t · o)) | T_t)    (4.9)

Then, combining Eq. (4.9) with the input-response consistency, we have

(∀c_o)(∀x^t)(∀x_t)(∃c_t)(ρ_{1t}(c_t) + ρ_{2t}(x_t) = (ρ_{1o}(c_o) + ρ_{2o}(x^t · x_t)) | T_t)

which is exactly (P1) stated in Theorem 2.1, Chapter II. Property (P2) can similarly be derived from the state-response consistency. Regarding the realizability of ρ̄, Theorems 1.1 (Chapter III) and 3.2 ought to be compared. If ρ̄ satisfies the input-response consistency, then condition (i) from Theorem 3.2 implies (P3). As a matter of fact, it follows from condition (i) in Theorem 3.2 that

(∀(c_t, x_{tt'}))(∃c_{t'})(∀x_{t'})(ρ_{1t'}(c_{t'}) + ρ_{2t'}(x_{t'}) = ρ_t(c_t, x_{tt'} · o) | T_{t'} + ρ_{2t'}(x_{t'}))

Since (∀x_{t'})(ρ_{2t'}(x_{t'}) = ρ_{2t}(o · x_{t'}) | T_{t'}), we have

(∀(c_t, x_{tt'}))(∃c_{t'})(∀x_{t'})(ρ_{t'}(c_{t'}, x_{t'}) = ρ_t(c_t, x_{tt'} · x_{t'}) | T_{t'})

which is (P3). We can conclude, then, that the realizability conditions for a linear system are almost the same as for the case of a general time system. However, the present form is more suitable for application in the case of a linear system, since the conditions are separated into two parts of importance in linear systems analysis: one related to ρ_{1t} and the other to ρ_{2t}. This fact will be used later when considering the problem of a natural realization of a linear time system.

The preceding theorem can also be used for the question of realizability of the component functions ρ̄₁ and ρ̄₂. For example, let a family of input-response functions ρ̄₂ be given; then ρ̄₂ is considered realizable if and only if there exists a state-response family ρ̄₁ such that (ρ̄₁, ρ̄₂) is realizable.
Theorem 3.2 then implies that ρ̄₂ is realizable if and only if it satisfies the consistency condition (ii) and, furthermore, there exists a mapping ρ̄₁ such that condition (i) (given in terms of both ρ̄₁ and ρ̄₂) is satisfied. When the system satisfies the conditions for strong nonanticipation, more specific and stronger conditions for realizability can be derived. In this respect, we have the following theorem.
Theorem 3.3. Let ρ̄₂ = {ρ_{2t} : X_t → Y_t} be a family of linear maps. ρ̄₂ is realizable by a strongly nonanticipatory linear system (i.e., it is an input response of a strongly nonanticipatory linear dynamical system) if and only if

(i) ρ_{2t}(x_t)(τ) = ρ_{2t}(x_{tτ} · o)(τ) for τ ≥ t (strong nonanticipation condition);
(ii) there exist two families of linear mappings {F_{t'τ} : C_{t'} → B} and {G_{tt'} : X_{tt'} → C_{t'}} such that for any τ ∈ T, τ ≥ t' ≥ t,

ρ_{2t}(x_{tt'} · o)(τ) = F_{t'τ}(G_{tt'}(x_{tt'}))

where G_{tt'} satisfies the condition
(iii) G_{tt''}(o · x_{t't''}) = G_{t't''}(x_{t't''}) for t ≤ t' ≤ t''.

PROOF. First, we consider the if part. We shall check the input-response consistency and the state-response consistency. Notice that conditions (i) and (ii) imply that ρ_{2t}(x_t)(τ) = ρ_{2t}(x_{tτ} · o)(τ) = F_{ττ} · G_{tτ}(x_{tτ}). Hence, we have that for τ ≥ t'

(ρ_{2t}(o · x_{t'}) | T_{t'})(τ) = ρ_{2t}(o · x_{t'τ} · o)(τ) = F_{ττ} · G_{tτ}(o · x_{t'τ}) = F_{ττ} · G_{t'τ}(x_{t'τ}) = ρ_{2t'}(x_{t'})(τ)

Hence, the input-response consistency is satisfied. Now, let ρ_{2o} | T_t : X_{ot} → Y_t be such that

(ρ_{2o} | T_t)(x_{ot}) = ρ_{2o}(x_{ot} · o) | T_t

Then we have the following diagram, where C_t = G_{ot}(X_{ot}):

[Diagram: X_{ot} is mapped onto C_t by G_{ot} and into Y_t by ρ_{2o} | T_t; the mapping ρ_{1t} : C_t → Y_t to be constructed closes the triangle.]
We shall show that there exists a linear mapping ρ_{1t} : C_t → Y_t such that the above diagram is commutative. As a matter of fact,

G_{ot}(x_{ot}) = G_{ot}(x'_{ot}) → F_{tτ} · G_{ot}(x_{ot}) = F_{tτ} · G_{ot}(x'_{ot}) → ρ_{2o}(x_{ot} · o)(τ) = ρ_{2o}(x'_{ot} · o)(τ)

for every τ ≥ t. Hence,

G_{ot}(x_{ot}) = G_{ot}(x'_{ot}) → ρ_{2o}(x_{ot} · o) | T_t = ρ_{2o}(x'_{ot} · o) | T_t → (ρ_{2o} | T_t)(x_{ot}) = (ρ_{2o} | T_t)(x'_{ot})

Furthermore, G_{ot} is an onto mapping (onto C_t) by assumption. Hence, ρ_{1t} : C_t → Y_t can be defined as

ρ_{1t}(c_t) = (ρ_{2o} | T_t)(x_{ot}), where c_t = G_{ot}(x_{ot})

Let c_t ∈ C_t and x_{tt'} ∈ X_{tt'} be arbitrary elements. Then

(ρ_{1t}(c_t) + ρ_{2t}(x_{tt'} · o)) | T_{t'} = (ρ_{2o}(x_{ot} · o) | T_t + ρ_{2o}(o · x_{tt'} · o) | T_t) | T_{t'}
= ρ_{2o}(x_{ot} · x_{tt'} · o) | T_{t'} = ρ_{1t'}(c_{t'})

where c_{t'} = G_{ot'}(x_{ot} · x_{tt'}). Let c_{t'} be an arbitrary element of C_{t'}. Then

ρ_{1t'}(c_{t'}) = ρ_{2o}(x_{ot'} · o) | T_{t'}    [where c_{t'} = G_{ot'}(x_{ot'})]
= ((ρ_{2o}(x_{ot} · o) + ρ_{2o}(o · x_{tt'} · o)) | T_t) | T_{t'}
= (ρ_{1t}(c_t) + ρ_{2t}(x_{tt'} · o)) | T_{t'}

where c_t = G_{ot}(x_{ot}) and x_{ot'} = x_{ot} · x_{tt'}. Hence, the state-response consistency is satisfied. The strong nonanticipation is a direct consequence of condition (i) of Theorem 3.3.

Let us consider the only if part. Let {(ρ_t, φ_{tt'})} be a strongly nonanticipatory dynamical system whose input-response family is equal to the {ρ_{2t}} given in the theorem. Then the realizability conditions require that

ρ_{2t}(x_{tt'} · o) | T_{t'} = ρ_{1t'}(φ_{2tt'}(x_{tt'}))

for every x_{tt'} ∈ X_{tt'}. Hence, let G_{tt'}(x_{tt'}) = φ_{2tt'}(x_{tt'}) and F_{t'τ}(c_{t'}) = ρ_{1t'}(c_{t'})(τ). Then ρ_{2t}(x_{tt'} · o)(τ) = F_{t'τ} · G_{tt'}(x_{tt'}) for τ ≥ t'. It follows from the composition property of φ_{tt'} that

φ_{2tt''}(o · x_{t't''}) = φ_{t't''}(φ_{2tt'}(o), x_{t't''}) = φ_{2t't''}(x_{t't''})

or G_{tt''}(o · x_{t't''}) = G_{t't''}(x_{t't''}). Finally, the strong causality implies

ρ_{2t}(x_t)(τ) = ρ_{2t}(x_{tτ} · o)(τ) for τ ≥ t

Q.E.D.
Condition (ii) of the preceding theorem shows that the effect of an input applied during an interval [t, t') can be separated into two parts: (1) the change of the state during the interval [t, t') under the direct influence of x_{tt'}; (2) the free evolution of the state after that interval and its mapping into the output values. For the special case when the system is specified as a linear differential equation system, it is easily seen that conditions (i) and (iii) are satisfied. For instance, suppose G_{tt''} is represented by

c_{t''} = G_{tt''}(x_{tt''}) = ∫_t^{t''} W(t'', σ) x(σ) dσ

If x(σ) = o for t ≤ σ ≤ t', then

G_{tt''}(o · x_{t't''}) = ∫_{t'}^{t''} W(t'', σ) x(σ) dσ = G_{t't''}(x_{t't''})

which is condition (iii).

When ρ̄ is a linear systems-response family which is also realizable, there exists a convenient way to construct its state-transition family. Recall that a response family ρ̄ is reduced if and only if

(∀x_t)[ρ_t(c_t, x_t) = ρ_t(c_t', x_t)] → c_t = c_t'

In the linear case, the response family ρ̄ is reduced if and only if ρ_{1t} is a one-to-one mapping. We have now the following theorem.

Theorem 3.4. Let ρ̄ be a family of linear reduced response functions which is realizable. Its associated linear state-transition family is then uniquely determined by

φ_{1tt'}(c_t) = ρ_{1t'}^{-1}(ρ_{1t}(c_t) | T_{t'})    (4.10)
φ_{2tt'}(x_{tt'}) = ρ_{1t'}^{-1}(ρ_{2t}(x_{tt'} · o) | T_{t'})    (4.11)

where ρ_{1t}^{-1} : ρ_{1t}(C_t) → C_t is such that ρ_{1t}^{-1}(ρ_{1t}(c_t)) = c_t.
PROOF. By Definition 2.3, the consistency of a state-transition family with ρ̄ requires that

ρ_{1t'}(φ_{1tt'}(c_t)) = ρ_{1t}(c_t) | T_{t'}    (4.12)
ρ_{1t'}(φ_{2tt'}(x_{tt'})) = ρ_{2t}(x_{tt'} · o) | T_{t'}    (4.13)

Since ρ̄ is reduced, Eqs. (4.12) and (4.13) imply that

φ_{1tt'}(c_t) = ρ_{1t'}^{-1}(ρ_{1t}(c_t) | T_{t'})
φ_{2tt'}(x_{tt'}) = ρ_{1t'}^{-1}(ρ_{2t}(x_{tt'} · o) | T_{t'})

In other words, φ_{1tt'} and φ_{2tt'} are uniquely determined by ρ̄. Q.E.D.

The preceding theorem provides a way to construct the state-transition family for a linear system starting from a given response family ρ̄.

4. CONSTRUCTION OF THE STATE SPACE FOR A LINEAR SYSTEM
The algebraic diagram indicating the commutative relationships of the various auxiliary functions for a linear time system has the same format as for a general time system. However, the state objects and the associated auxiliary functions can be constructed more effectively when the special properties due to linearity of the system are used. Given a (complete) linear time system S ⊂ X × Y, for every t ∈ T let S_t^o ⊂ S_t denote the set

S_t^o = {(o, y_t) : (o, y_t) ∈ S_t}

Notice that S_t^o is a linear space for any t ∈ T. We have now the following theorem.

Theorem 4.1. Let S be a linear time system. Let C_t = S_t^o for every t ∈ T. Then there exists an initial linear response ρ_o of S,

ρ_o : C_o × X → Y

such that

ρ_o(c_o, x) = ρ_{1o}(c_o) + ρ_{2o}(x)

where ρ_{1o}((o, y)) = y and ρ_{2o} is linear. Furthermore, if ρ̄ = {ρ_t} is defined as

ρ_{1t}((o, y_t)) = y_t,    ρ_{2t}(x_t) = ρ_{2o}(o · x_t) | T_t

and

ρ_t(c_t, x_t) = ρ_{1t}(c_t) + ρ_{2t}(x_t)

then ρ̄ is realizable, and since ρ̄ is reduced, a dynamical system representation associated with ρ̄ is uniquely determined.

PROOF. The first part of the proposition is a direct consequence of Theorem 1.2, Chapter II. We have only to check, therefore, whether {ρ_t} satisfies the input-response consistency and the state-response consistency.
(i) Input-response consistency: It follows from the definition that

ρ_{2t}(x_t) = ρ_{2o}(o · x_t) | T_t

and, for t ≤ t',

ρ_{2t}(o · x_{t'}) = ρ_{2o}(o · o · x_{t'}) | T_t = ρ_{2o}(o · x_{t'}) | T_t

Hence,

ρ_{2t}(o · x_{t'}) | T_{t'} = ρ_{2o}(o · x_{t'}) | T_{t'} = ρ_{2t'}(x_{t'})

The input-response consistency is therefore satisfied.

(ii) State-response consistency: Let c_t = (o, y_t) ∈ S_t and x_{tt'} be arbitrary. Since (o · x_{tt'} · o, ρ_{2o}(o · x_{tt'} · o)) ∈ S, we have

(x_{tt'} · o, ρ_{2o}(o · x_{tt'} · o) | T_t) ∈ S_t

Since S_t is linear, we have

(x_{tt'} · o, y_t + ρ_{2o}(o · x_{tt'} · o) | T_t) ∈ S_t

Hence,

(o, y_t | T_{t'} + ρ_{2o}(o · x_{tt'} · o) | T_{t'}) ∈ S_{t'}

or

ρ_{1t'}(c_{t'}) = ρ_{1t}(c_t) | T_{t'} + ρ_{2t}(x_{tt'} · o) | T_{t'} for some c_{t'} ∈ C_{t'}

(Notice that we used the input-response consistency.) Let c_{t'} = (o, y_{t'}) ∈ S_{t'} be arbitrary. Then there exists (x^{t'}, y^{t'}) such that (x^{t'} · o, y^{t'} · y_{t'}) ∈ S. Hence, for some c_o ∈ C_o,

y^{t'} · y_{t'} = ρ_{1o}(c_o) + ρ_{2o}(x^{t'} · o)

Therefore, writing x^{t'} = x^t · x_{tt'},

y_{t'} = (ρ_{1o}(c_o) + ρ_{2o}(x^{t'} · o)) | T_{t'}
= ((ρ_{1o}(c_o) + ρ_{2o}(x^t · o) + ρ_{2o}(o · x_{tt'} · o)) | T_t) | T_{t'}
= (ρ_{1t}(c_t) + ρ_{2t}(x_{tt'} · o)) | T_{t'}

for some c_t ∈ C_t. Notice that the existence of c_t is guaranteed by the first part of the proof of the state-response consistency. ρ̄ is apparently reduced. Q.E.D.

The final result comes from Theorem 3.4.

Definition 4.1. Let S be a linear system. The set S^o = {(o, y) : (o, y) ∈ S} will be referred to as the (algebraic) core of S.
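For a finite-dimensional discrete-time system, the algebraic core of Definition 4.1 has a concrete description: it consists of the zero-input (free) responses. The following sketch (assumed matrices, not an example from the book) shows that on a finite horizon the core is the column space of the observability matrix.

```python
# Sketch (assumed example): the core S^o = {(o, y) : (o, y) in S} of a
# discrete-time system z' = A z, y = H z consists of the free responses
# y = (H z0, H A z0, ...).  On a horizon of N steps these are exactly the
# vectors O z0, with O the observability matrix.
import numpy as np

A = np.array([[0.0, 1.0], [-0.25, 1.0]])
H = np.array([[1.0, 0.0]])
N = 5                                   # output horizon

# rows H, HA, ..., HA^(N-1)
O = np.vstack([H @ np.linalg.matrix_power(A, k) for k in range(N)])

# every core element (o, y) is a free response y = O z0 for some state z0
z0 = np.array([1.0, 2.0])
y = O @ z0

core_dim = np.linalg.matrix_rank(O)
assert core_dim == 2    # here the core carries the full state information
```

This is why the core can serve as the state object: when the rank is full, distinct states yield distinct core elements, i.e., the resulting response family is reduced.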
On the basis of Theorem 4.1, we can now give a specific procedure for the construction of the auxiliary functions and the associated state space for a linear system based on the selection of the algebraic core as the state set. The steps in the construction process are shown in Fig. 4.1:

[Fig. 4.1. Step (a): C_o = S^o, where S^o = {(o, y) : (o, y) ∈ S}. Step (b): C_t = S_t^o, where S_t^o = {(o, y_t) : (o, y_t) ∈ S_t}.]
(a) First, the algebraic core S^o is selected as the initial state object. (b) Second, the state object at any time t is selected as the core of the respective restriction of the system, i.e., C_t = S_t^o. (c) Third, an appropriate equivalence relation is introduced, and the state space is defined as the quotient set in the usual manner. The conditions that enable such a construction procedure are indicated in Fig. 4.2.

The major distinction between the state-space construction procedure for the case of a general dynamical system and for a linear system is in the selection of the state objects. For the general case, the initial state object is defined a priori; it represents the necessary initial information. For the linear system, the initial state object is defined uniquely by the system itself. The same is true for the selection of the state objects at other times. Another difference between the general and linear cases is that in the former the states are defined in terms of the past, while in the latter the states are defined in terms of the future. The present construction procedure will prove convenient, in particular, for a stationary linear time system (as considered in Chapter VI), since it yields a state space in a natural way.
[Fig. 4.2. Construction scheme: S ⊂ X × Y; C_o = S^o with ρ_o : C_o × X → Y; by Theorem 4.1, C_t = S_t^o with ρ_t : C_t × X_t → Y_t; φ_{tt'} : C_t × X_{tt'} → C_{t'}; then, as in Section 4, φ̄_{tt'} : C × X_{tt'} → C and λ̄ : C × A → B.]
Theorem 4.1 also indicates what the restrictions on the selection of the response functions are. In principle, unless nonanticipation is considered, any function f : X → Y that satisfies the relation (x, f(x)) ∈ S for every x and the linearity requirement can be chosen for ρ_{2o}, independently of ρ_{1o}. Such a selection of ρ_{2o} is essentially the only freedom in the construction procedure implied by Theorem 4.1; everything else is defined by the given system S itself. It should be noticed that ρ̄ in Theorem 4.1 is already a reduced realizable response family, and hence the state-transition family φ̄ associated with ρ̄ is uniquely determined by Theorem 3.4, that is, by Eqs. (4.10) and (4.11). Furthermore, if nonanticipation is satisfied, the system can also be represented by the aggregated functions φ̄_{tt'} and λ̄. This part of the procedure is completely the same as for the general time systems case, except that X is automatically linear in the present case.
Consider now the selection of an equivalence which specifies a state space and the construction of the necessary auxiliary functions. Let C = ∪_{t∈T} C_t. The equivalence E_m* as introduced for the general time system cannot be used in the present linear case, because the quotient set C/E_m* may not be a linear space, while the state space of a linear system has to be linear. In order to overcome this obstacle, we start with a congruence relation. Recall that an equivalence relation E_a ⊂ C × C is a congruence relation if and only if

(c_t, c_{t'}) ∈ E_a & (ĉ_t, ĉ_{t'}) ∈ E_a → (c_t + ĉ_t, c_{t'} + ĉ_{t'}) ∈ E_a

and

(c_t, c_{t'}) ∈ E_a → (βc_t, βc_{t'}) ∈ E_a

Now, let E_a ⊂ C × C be a congruence relation which satisfies the condition

(c_t, c_{t'}) ∈ E_a → (∀a)(λ_t(c_t, a) = λ_{t'}(c_{t'}, a)) & (t = t' → (∀x_{tt''})((φ_{tt''}(c_t, x_{tt''}), φ_{tt''}(c_{t'}, x_{tt''})) ∈ E_a))

The trivial equivalence relation I = {(c_t, c_t) : c_t ∈ C} ⊂ C × C is such a congruence relation. Using the same procedure as in Section 3, Chapter III, we can prove the existence of a maximal congruence relation E_m of this kind, which can then be used to construct a state space. It is easy to show that such a state space C̃ = C/E_m = {[c]} can be made linear. Indeed, assume that the system is full in the sense that

(∀t)(∀[c])[C_t ∩ [c] ≠ ∅]

i.e., at every time t there exists at least one state for each of the equivalence classes of E_m. Then C̃ is a linear space in the sense that

[c] + [ĉ] = [c_o + ĉ_o], where c_o ∈ [c] and ĉ_o ∈ [ĉ]
α[c] = [αc_o], where c_o ∈ [c]

Since E_m is a congruence relation, the above definitions are proper, and furthermore, for every t ∈ T, if c_t ∈ [c] and ĉ_t ∈ [ĉ],

[c] + [ĉ] = [c_t + ĉ_t] and α[c] = [αc_t]

The state-transition and output functions can now be defined by

φ̄_{tt'} : C̃ × X_{tt'} → C̃ such that φ̄_{tt'}([c], x_{tt'}) = [φ_{tt'}(c_t, x_{tt'})], where c_t ∈ [c]

and

λ̄ : C̃ × A → B such that λ̄([c], a) = λ_t(c_t, a), where c_t ∈ [c]

Notice that λ̄ is time invariant (because the system is assumed full). Because the state objects constructed above are defined directly in terms of the primary systems concepts (i.e., inputs and outputs), the dynamical representation for a linear system based on the construction process as illustrated in Figs. 4.1 and 4.2 will be referred to as a natural realization.
Chapter V

Past-Determinacy
In this chapter, we shall investigate in some more detail the class of past-determined systems. For the sake of illustration, some examples of specific past-determined systems are given in order to show how widespread these systems really are. It is shown then how a well-behaved response function can be constructed directly from the input-output pairs, i.e., without explicit reference to the initial state object. Furthermore, such a response function is nonanticipatory and realizable, and the associated state objects themselves can be generated from the past input-output pairs. Such a response family and state objects are termed natural, since their construction is fully determined by the given input-output pairs, i.e., by the system itself. This then allows the construction of a state space, which is also termed natural for the same reason. In order to characterize some past-determined systems, the notion of a finite-memory system is introduced. It is shown that a past-determined and stationary system is of finite memory, and, conversely, that if the time set of a finite-memory and stationary system is well ordered, the system is past-determined.

1. ON THE CLASS OF PAST-DETERMINED SYSTEMS
A central idea in our approach via formalization is to stay as close as possible to the observations in the process of developing an explanation for a phenomenon and introduce additional constructs only when absolutely necessary and with minimal additional assumptions. Of special interest, therefore, are the systems in which additional constructs can be introduced with no extra assumptions, directly from observations, i.e., input-output
pairs. If one is interested in the state-type representation, a class of systems that allows such a natural treatment is the class of past-determined systems. The concept of past-determinacy has already been introduced in Chapter II as a causality-type concept. Conceptually, for a past-determined system, if one observes the input-output pair long enough, one can deduce everything needed to construct the state-transition machinery and therefore study the dynamic aspects of the system's behavior. Specifically, we shall use in this chapter the following concept.

Definition 1.1. A time system S ⊂ A^T × B^T is a past-determined system (from t̂) if and only if there exists t̂ ∈ T such that

(i) (x^t̂, y^t̂) = (x̂^t̂, ŷ^t̂) and x^t = x̂^t → y^t = ŷ^t, for all x, x̂ ∈ X and t ≥ t̂;
(ii) (∀(x^t̂, y^t̂) ∈ S^t̂)(∀x_t̂)(∃y_t̂)((x^t̂ · x_t̂, y^t̂ · y_t̂) ∈ S)

Recall that condition (ii) is called the completeness property. The completeness property allows the existence of the natural systems-response family. The importance of past-determined systems, in reference to the state-type representation, is, among others, due to the following factors:

(i) Past-determined systems are intrinsically nonanticipatory, as will be shown subsequently.
(ii) It is possible to construct a response function for the system in a "natural" way that preserves essential properties such as linearity, nonanticipation, stationarity, and others. Most of the results in the representation theory of general systems depend on the existence of a well-behaved response function. Unfortunately, in general it is not known how to construct such a function; however, for past-determined systems such a procedure is possible, as we shall show in this chapter.
(iii) The class of past-determined systems is rather large, covering a number of different types of systems of interest both in application and for theoretical investigation.
(iv) A conceptual argument can be made that a well-defined system, in principle, ought to be past-determined and that the absence of past-determinacy results from a lack of information on the system itself.

To illustrate (iii), we shall consider several examples.

(a) Linear Constant-Coefficient Ordinary Differential Equation Systems

Let us consider the following differential equation system:

dz/dt = Fz + Gx    (5.1)
y = Hz    (5.2)
where F, G, and H are constant matrices, while x, y, and z are vectors. The dimension of z is assumed to be n. Let T = [o, ∞). Then we have the following proposition.

Proposition 1.1. The class of linear time-invariant ordinary differential equation systems described by Eqs. (5.1) and (5.2) is past-determined from any t̂ > 0, where t̂ is an arbitrarily small positive number.

PROOF. It follows from Eqs. (5.1) and (5.2) that

y(t) = H e^{Ft} z(o) + H ∫_0^t e^{F(t-σ)} G x(σ) dσ    (5.3)

Let t̂ be an arbitrary positive number. Let (x̂, ŷ) be such that

ŷ(t) = H e^{Ft} ẑ(o) + H ∫_0^t e^{F(t-σ)} G x̂(σ) dσ    (5.4)

Then (x^t̂, y^t̂) = (x̂^t̂, ŷ^t̂) implies that H e^{Fσ} z(o) = H e^{Fσ} ẑ(o) holds for o ≤ σ < t̂. Consequently, we have from an elementary calculation that Hz(o) = Hẑ(o), HFz(o) = HFẑ(o), ..., HF^{n-1} z(o) = HF^{n-1} ẑ(o). Therefore, H e^{Fσ} z(o) = H e^{Fσ} ẑ(o) holds for any σ ≥ 0, which implies that if x^t = x̂^t (t ≥ t̂), then y^t = ŷ^t holds. The completeness property is clearly satisfied. Q.E.D.
(b) Linear Constant-Coefficient Difference Equation System

Let us consider the following difference equation system:

z_{k+1} = F z_k + G x_k    (5.5)
y_k = H z_k    (5.6)

where F, G, and H are constant matrices, while x_k, y_k, and z_k are vectors. The dimension of z_k is assumed to be n. Let T = {0, 1, 2, ...}. Then we can introduce the following proposition.

Proposition 1.2. The class of linear constant-coefficient difference equation systems described by Eqs. (5.5) and (5.6) is past-determined from t̂ = n, where n is the dimension of the state space.
PROOF. We have from Eqs. (5.5) and (5.6) that

y_k = H F^k z_o + H Σ_{i=0}^{k-1} F^{k-i-1} G x_i

Let

ŷ_k = H F^k ẑ_o + H Σ_{i=0}^{k-1} F^{k-i-1} G x̂_i

Suppose (x^t̂, y^t̂) = (x̂^t̂, ŷ^t̂) for t̂ = n. Then we have that H z_o = H ẑ_o, H F z_o = H F ẑ_o, ..., and H F^{n-1} z_o = H F^{n-1} ẑ_o. Consequently, H F^k z_o = H F^k ẑ_o holds for any k ≥ 0. This result shows that if x^t = x̂^t holds, then y^t = ŷ^t for t ≥ t̂ = n. The completeness property is clearly satisfied. Q.E.D.

The proof of Proposition 1.2 shows that, in general, t̂ can be smaller than n.
(c) Automata Systems

As shown below, a general automaton is not a past-determined system. However, every finite automaton has a subsystem which is past-determined. Let S = (A, B, C, δ, λ) be a finite automaton, where A is a finite input alphabet, B a finite output alphabet, C a finite state space, δ a next-state function δ : C × A → C, and λ an output function λ : C × A → B. Let the cardinality of C be n. Let A* and B* be the free monoids of A and B, respectively, and let δ : C × A* → C and λ : C × A* → B* be the extensions of δ and λ as usually defined. Let ā be the infinite string of a's, i.e., ā = aaa⋯. Let us define a subsystem S̄ of S as below: let S̄ be a discrete time system with the time set T = {0, 1, 2, ...} such that

S̄ = {(ā, λ(c, ā)) : c ∈ C}    (5.7)

where a ∈ A. Then we have the following proposition.

Proposition 1.3. Let S be a finite automaton, S = (A, B, C, δ, λ), and n the cardinality of C. S then has a subsystem S̄ as defined by Eq. (5.7) which is past-determined from n.

PROOF. Let us consider the finite automaton

S_a = ({a}, B, C, δ | C × {a}, λ | C × {a})

Notice that the only input strings of S_a are of the form a^k (a string of k a's), where k = 0, 1, 2, .... Then it follows from the well-known property of a finite automaton that λ(c, a^n) = λ(ĉ, a^n) implies λ(c, a^k) = λ(ĉ, a^k) for any k (k = 0, 1, 2, ...). Consequently, S̄ is past-determined from t̂ = n. The completeness property is clearly satisfied. Q.E.D.
(d) Non-Past-Determined Automata Systems

As we mentioned above, an automaton, in general, is not past-determined. Let us give a simple example to show that this is the case. Let a finite automaton S = (A, B, C, δ, λ) be defined by

A = B = C = {0, 1}
δ(0, 0) = δ(0, 1) = 0,  δ(1, 0) = δ(1, 1) = 1
λ(0, 0) = λ(0, 1) = 0,  λ(1, 0) = 0,  λ(1, 1) = 1

The state-transition diagram is given in Fig. 1.1.

[Fig. 1.1. Two states, 0 and 1, each with self-loops only: state 0 carries the labels 0/0 and 1/0, state 1 carries the labels 0/0 and 1/1 (input/output).]

It is clear from the definition that for any large n, λ(0, 0^n) = λ(1, 0^n) holds. However, λ(0, 0^n · 1) ≠ λ(1, 0^n · 1). Consequently, such an automaton is not past-determined.
2. STATE-SPACE REPRESENTATION
In the previous two chapters, state-space representations of time systems and general dynamical systems have been considered. The results, in essence, depend on showing the existence of a nonanticipatory systems-response function. No general procedure is presently known to find such a systems-response function for a given general system except when the system is past-determined.
(a) Past-Determined General Time Systems

Most of the results in the investigation of past-determined systems are based on the following proposition.

Proposition 2.1. Let S be a past-determined system from t̂. Then

(i) there exists a strongly nonanticipatory initial systems response ρ_t̂ at t̂ whose state object is S^t̂ = S | T^t̂;
(ii) there exists a one-to-one correspondence between S^t (t ≥ t̂) and S^t̂ × X_{t̂t}, i.e., S^t ↔ S^t̂ × X_{t̂t}, and hence S^t can be used as a state object for a realizable strongly nonanticipatory response family of S_t̂.

PROOF. It follows from Proposition 4.4, Chapter II, that

S(x^t̂, y^t̂) = {(x_t̂, y_t̂) : (x^t̂ · x_t̂, y^t̂ · y_t̂) ∈ S}

is a strongly nonanticipatory function. Then let C_t̂ = S^t̂ and let ρ_t̂ : C_t̂ × X_t̂ → Y_t̂ be such that ρ_t̂(c_t̂, x_t̂) = S(c_t̂)(x_t̂). Then ρ_t̂ is a strongly nonanticipatory initial systems response. Let F : S^t̂ × X_{t̂t} → S^t be such that for c_t̂ = (x^t̂, y^t̂) and for x_t̂ | T_{t̂t} = x_{t̂t},

F((x^t̂, y^t̂), x_{t̂t}) = (x^t̂ · x_{t̂t}, y^t̂ · (ρ_t̂(c_t̂, x_t̂) | T_{t̂t})) ∈ S^t

F is well defined because ρ_t̂ is nonanticipatory. If F((x^t̂, y^t̂), x_{t̂t}) = F((x̂^t̂, ŷ^t̂), x̂_{t̂t}), then

(x^t̂ · x_{t̂t}, y^t̂ · (ρ_t̂(c_t̂, x_t̂) | T_{t̂t})) = (x̂^t̂ · x̂_{t̂t}, ŷ^t̂ · (ρ_t̂(ĉ_t̂, x̂_t̂) | T_{t̂t}))

holds, where c_t̂ = (x^t̂, y^t̂) and ĉ_t̂ = (x̂^t̂, ŷ^t̂). Hence, x^t̂ · x_{t̂t} = x̂^t̂ · x̂_{t̂t} and y^t̂ = ŷ^t̂ hold, which implies that F is a one-to-one mapping. Let (x^t̂ · x_{t̂t}, y^t̂ · y_{t̂t}) ∈ S^t. Then

F((x^t̂, y^t̂), x_{t̂t}) = (x^t̂ · x_{t̂t}, y^t̂ · (ρ_t̂(c_t̂, x_t̂) | T_{t̂t})) ∈ S^t

where x_t̂ | T_{t̂t} = x_{t̂t}. Since S is past-determined, ρ_t̂(c_t̂, x_t̂) | T_{t̂t} = y_{t̂t}. Consequently,

F((x^t̂, y^t̂), x_{t̂t}) = (x^t̂ · x_{t̂t}, y^t̂ · y_{t̂t})

Hence, F is a one-to-one correspondence. Let C_t = S^t, and let ρ_t : C_t × X_t → Y_t be such that when F^{-1}(c_t) = (c_t̂, x_{t̂t}),

ρ_t(c_t, x_t) = ρ_t̂(c_t̂, x_{t̂t} · x_t) | T_t

Then Theorem 1.1, Chapter III, shows that {ρ_t : t ≥ t̂} is a realizable response family of S_t̂. Q.E.D.

Proposition 2.1 shows explicitly that a past-determined system has a nonanticipatory realizable response family and state objects which are determined from the input-output pairs of the system without introducing an auxiliary object, i.e., an initial state object. (Refer to the state-generating function of Proposition 4.5, Chapter II.) Hence, those response family and state objects will be referred to as natural.
The natural state objects are, in general, not reduced. As an illustration, let us consider Eq. (5.1). Let z(o) ≠ ẑ(o), x^t̂ ≠ x̂^t̂, and y^t̂ ≠ ŷ^t̂ be such that

e^{Ft̂} z(o) + ∫_0^t̂ e^{F(t̂-σ)} G x^t̂(σ) dσ = e^{Ft̂} ẑ(o) + ∫_0^t̂ e^{F(t̂-σ)} G x̂^t̂(σ) dσ    (5.8)

[Fig. 2.1. Past-determined system from t̂: S ⊂ X × Y; ρ_t̂ : C_t̂ × X_t̂ → Y_t̂ (Proposition 2.1); ρ_t : C_t × X_t → Y_t, t ≥ t̂ (Theorem 1.1, Chapter III; Theorem 2.1, Chapter III); then (φ̄, λ̄), with φ̄_{tt'} : C × X_{tt'} → C and λ̄ : C × A → B.]
2 . State-Space Representation
Apparently, (x^t̂, y^t̂) ≠ (x̂^t̂, ŷ^t̂), but Eq. (5.8) implies that ρ_t̂((x^t̂, y^t̂), x_t̂) = ρ_t̂((x̂^t̂, ŷ^t̂), x_t̂) holds for every x_t̂, where ρ_t̂ is the natural systems-response function. Once a nonanticipatory response family is given, a state space and other auxiliary functions can be constructed in a way analogous to the case of general time systems. Such a construction is shown in Figs. 2.1 and 2.2.
FIG. 2.2 [Diagram: for each t ≥ t̂, the state object at t is C_t = S^t; aggregation of the state objects gives Ĉ, and the state space is the quotient C = Ĉ/E by the output-equivalence relation E.]
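The construction sketched in Fig. 2.2 can be illustrated on a toy example. The following is a hedged sketch, not the book's construction: a hypothetical finite time system S over T = {0, 1, 2} is given as a set of (input word, output word) pairs, the natural state objects are the restrictions of S to the past, and a brute-force check shows that distinct natural states may be equivalent (i.e., the natural states are not reduced, as in the discussion of Eq. (5.8)).

```python
from itertools import product

# Hypothetical finite time system S over T = {0,1,2}, given as a set of
# (input word, output word) pairs. Here y = x (memoryless), so S is
# trivially past-determined from t_hat = 1.
T = 3
S = {(x, x) for x in product((0, 1), repeat=T)}

t_hat = 1
# Natural state objects C_t_hat = S^t_hat: restrictions of S to [0, t_hat).
C_that = {(x[:t_hat], y[:t_hat]) for (x, y) in S}

def rho(state, future_x):
    """Natural response: the unique continuation consistent with the past."""
    px, py = state
    outs = {y[t_hat:] for (x, y) in S
            if x[:t_hat] == px and y[:t_hat] == py and x[t_hat:] == future_x}
    assert len(outs) == 1        # past-determinacy: the continuation is unique
    return outs.pop()

# The natural states need not be reduced: distinct past pairs may respond
# identically to every future input (the situation of Eq. (5.8)).
equiv = [(c1, c2) for c1 in C_that for c2 in C_that if c1 < c2
         and all(rho(c1, fx) == rho(c2, fx)
                 for fx in product((0, 1), repeat=T - t_hat))]
print(len(C_that), len(equiv))   # -> 2 1
```

Aggregating the states and dividing by this equivalence is exactly the passage from Ĉ to the state space C = Ĉ/E of Fig. 2.2.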
Comparison between the present construction method and the one given previously in Chapter III indicates the following:

(i) The method given in Fig. 3.9, Chapter III, is more general, since there the system does not have to be past-determined.
(ii) The method given in Fig. 2.2 here yields a unique state-space representation defined solely in terms of the primary information (the input-output restrictions), i.e., without introducing a secondary concept such as the initial state object required to initiate the construction procedure given in Fig. 3.10, Chapter III.
(iii) The method given in Fig. 2.1 here yields a representation of S_t̂ rather than of S itself. This may be viewed as a shortcoming, but, as Proposition 1.1 shows, for certain classes of systems t̂ can be taken as small as we wish.

(b) Past-Determined Linear Time Systems
The procedure for determining the state-space representation of a linear past-determined system is completely analogous to the general, nonlinear case.
Proposition 2.2. Let ρ_t̂ : S^t̂ × X_t̂ → Y_t̂ be the natural systems-response function of a past-determined linear system S ⊂ X × Y from t̂. Then (i) S^t̂ is a linear algebra; (ii) ρ_t̂ is a nonanticipatory linear mapping; (iii) the state-response function ρ_1t̂ : S^t̂ → Y_t̂ and the input-response function ρ_2t̂ : X_t̂ → Y_t̂ are given by

ρ_1t̂(x^t̂, y^t̂) = y_t̂ ↔ (x^t̂ · o, y^t̂ · y_t̂) ∈ S
ρ_2t̂(x_t̂) = y_t̂ ↔ (o · x_t̂, o · y_t̂) ∈ S

PROOF. Notice that

ρ_t̂((x^t̂, y^t̂), x_t̂) = y_t̂ ↔ (x^t̂ · x_t̂, y^t̂ · y_t̂) ∈ S

The result follows, then, from direct computations. Q.E.D.

For the natural realization of a linear system, consider the object S_t̂o = {y_t̂ : (o, y_t̂) ∈ S_t̂}. Proposition 2.2 shows that the state-response function ρ_1t̂ of the natural response function is given by

ρ_1t̂(x^t̂, y^t̂) = y_t̂ ↔ (x^t̂ · o, y^t̂ · y_t̂) ∈ S

Therefore, if ρ_1t̂ is considered a mapping from S^t̂ into S_t̂o, then ρ_1t̂ : S^t̂ → S_t̂o is an onto function, and furthermore, the reduced natural state object is isomorphic to S_t̂o. Since the natural realization method is applicable to the present case, a state space and auxiliary functions are constructed in exactly the same way as before.
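The identification of the reduced natural state with a zero-input future response can be made concrete. The sketch below is a hedged illustration with a hypothetical discrete-time linear system c' = Fc + Gx, y = Hc (the matrices are illustrative choices): when the map from states to zero-input responses is one-to-one, the zero-input response itself can serve as the reduced state, mirroring the role of S_t̂o above.

```python
import numpy as np

# Hypothetical discrete-time linear system c' = F c + G x, y = H c.
F = np.array([[0.5, 1.0], [0.0, 0.3]])
G = np.array([[1.0], [0.5]])
H = np.array([[1.0, 0.0]])

def state_from_past(xs):
    """State reached from c = 0 under the past input segment xs."""
    c = np.zeros((2, 1))
    for u in xs:
        c = F @ c + G * u
    return c

def zero_input_response(c, n=6):
    """Future output under the zero input o -- an element of S_t̂o."""
    ys = []
    for _ in range(n):
        ys.append(float(H @ c))
        c = F @ c
    return np.array(ys)

c1 = state_from_past([1.0, -2.0, 0.5])
c2 = state_from_past([0.0, 1.0, 1.0])
# For this (F, H) the map c -> zero-input response is one-to-one, so distinct
# states give distinct responses, and the response can stand in for the state.
print(np.allclose(zero_input_response(c1), zero_input_response(c2)))  # -> False
```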
3. CHARACTERIZATION OF PAST-DETERMINED SYSTEMS
A general characterization of past-determined systems is not known at present. However, linear past-determined systems can be characterized by nonanticipation and a certain continuity property. For this we need the following concept.
Definition 3.1. Let ρ_o : C × X → Y be a linear systems-response function of a linear time system. For some t̂, if ρ_o satisfies the condition

(∀c)(ρ_1o(c) | T^t̂ = 0 | T^t̂ → ρ_1o(c) = 0)

ρ_o is called analytic from the left at t̂.
Proposition 3.1. If a linear time system S ⊂ X × Y has a systems-response function ρ_o : C_o × X → Y which is strongly nonanticipatory and analytic from the left at t̂, then S is past-determined from t̂.

PROOF. Let ρ_o : C_o × X → Y be a linear systems-response function which is strongly nonanticipatory and analytic from the left at t̂. Suppose (x^t̂, y^t̂) = (x̂^t̂, ŷ^t̂), where y = ρ_1o(c) + ρ_2o(x) and ŷ = ρ_1o(ĉ) + ρ_2o(x̂). Since ρ_o is strongly nonanticipatory, x^t̂ = x̂^t̂ implies that ρ_2o(x) | T^t̂ = ρ_2o(x̂) | T^t̂. Consequently, we have ρ_1o(c) | T^t̂ = ρ_1o(ĉ) | T^t̂ because y | T^t̂ = ŷ | T^t̂. Furthermore, since ρ_o is analytic, we have that

ρ_1o(c − ĉ) | T^t̂ = 0 | T^t̂ → ρ_1o(c − ĉ) = 0 → ρ_1o(c) = ρ_1o(ĉ)

Therefore, x | T^t = x̂ | T^t implies y | T^t = ŷ | T^t, where t ≥ t̂. Hence, S is past-determined from t̂. The completeness property is clearly satisfied. Q.E.D.
Notice that a system described by Eqs. (5.1) and (5.2) is strongly nonanticipatory, and ρ_1o is an analytic function because ρ_1o is the integral of dz/dt = Fz. Consequently, Proposition 1.1 is a direct result of the above proposition. The converse of Proposition 3.1 is given by Proposition 3.2 in the following.

Proposition 3.2. Suppose a time system S ⊂ X × Y is past-determined from t̂. Then S_t̂ is strongly nonanticipatory, and furthermore, if S is linear, every linear systems-response function of S is analytic from the left at t̂.

PROOF. The first part is proved in Proposition 2.1. Let ρ_o : C_o × X → Y be a linear systems-response function. Let c ∈ C_o be arbitrary such that ρ_1o(c) | T^t̂ = 0 | T^t̂. Let x ∈ X be arbitrary. Then (x, ρ_2o(x)) ∈ S and (x, ρ_1o(c) + ρ_2o(x)) ∈ S hold. Hence, ρ_1o(c) | T^t̂ = 0 | T^t̂ implies that

(x | T^t̂, ρ_2o(x) | T^t̂) = (x | T^t̂, (ρ_1o(c) + ρ_2o(x)) | T^t̂)

Consequently, past-determinacy implies that ρ_2o(x) = ρ_1o(c) + ρ_2o(x), that is, ρ_1o(c) = 0. Q.E.D.
The concept of a finite-memory machine defined in automata theory [4] is intimately related to that of a past-determined system. Let us explore that relationship.

Definition 3.2. Let S ⊂ A^T × B^T be a time system, where T is a stationary time set. Let t̂ ∈ T be fixed. Suppose there exists a mapping h : X^t̂ × Y^t̂ → B such that y(t') = h(F^{−t}(x_tt', y_tt')) for any (x, y) ∈ S and t' = t + t̂. Then S is called a finite-memory system whose memory is t̂.
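Definition 3.2 says that each output value is computed by one fixed map h from the input-output pair on the window of length t̂ that precedes it. The following is a hedged sketch of a hypothetical finite-memory system with memory t̂ = 2 (the map h and the convention for the initial segment are illustrative choices, not taken from the book):

```python
# Hypothetical finite-memory system with memory t_hat = 2: each output value
# y(t') = h((x, y) | [t, t')) with t' = t + t_hat, for a fixed map h.
T_HAT = 2

def h(window):
    """h : X^t_hat x Y^t_hat -> B; here it only uses the input part."""
    xs, ys = window
    return xs[0] + xs[1]

def run(x):
    """Generate the output: y[t + T_HAT] = h of the window starting at t."""
    y = [0] * T_HAT                      # initial segment, fixed by convention
    for t in range(len(x) - T_HAT):
        window = (x[t:t + T_HAT], y[t:t + T_HAT])
        y.append(h(window))
    return y

print(run([1, 2, 3, 4, 5]))   # -> [0, 0, 3, 5, 7]
```

Note how this matches Proposition 3.4 below: once the pair (x^t̂, y^t̂) on the initial window is fixed, the remaining output is forced by the input, which is precisely past-determinacy from t̂.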
Proposition 3.3. Suppose a time system S ⊂ A^T × B^T is past-determined from t̂ and stationary. Then S is a finite-memory system whose memory is t̂.

PROOF. Let ρ_t̂ : S^t̂ × X_t̂ → Y_t̂ be the natural systems-response function of S. Let h : X_tt' × Y_tt' → B (t' = t + t̂) be such that

h(x_tt', y_tt') = ρ_t̂(F^{−t}(x_tt', y_tt'), x_t̂)(t̂)

Since ρ_t̂(F^{−t}(x_tt', y_tt'), x_t̂)(t̂) is independent of x_t̂ because of past-determinacy, x_t̂ can be arbitrary in defining h. We shall show that S is a finite-memory system with respect to h. Let (x, y) ∈ S be arbitrary. (x_t, y_t) ∈ S_t and S_t ⊂ F^t(S) imply F^{−t}(x_t, y_t) ∈ S. Consequently,

ρ_t̂(F^{−t}(x_t, y_t) | T^t̂, F^{−t}(x_t) | T_t̂) = F^{−t}(y_t) | T_t̂

holds. Let t' = t + t̂. Then

(F^{−t}(y_t))(t̂) = y(t')
= ρ_t̂(F^{−t}(x_t, y_t) | T^t̂, F^{−t}(x_t) | T_t̂)(t̂)
= ρ_t̂(F^{−t}(x_tt', y_tt'), F^{−t}(x_t'))(t̂)
= h(x_tt', y_tt')

Therefore, S is a finite-memory system whose memory is t̂. Q.E.D.
The above proposition shows that a differential equation system described by Eqs. (5.1) and (5.2) is a finite-memory system. The converse of the above proposition is given by Proposition 3.4 in the following.

Proposition 3.4. Let S ⊂ A^T × B^T be a time system, where T is the time set for stationary systems and, in addition, is well ordered. If S is a finite-memory system whose memory is t̂, then S is past-determined from t̂.

PROOF. Let (x, y) ∈ S and (x̂, ŷ) ∈ S be arbitrary. Since the system is of finite memory, y(t') = h(x_tt', y_tt') and ŷ(t') = h(x̂_tt', ŷ_tt') hold for any t ∈ T and t' = t + t̂. We shall show that (x^t̂, y^t̂) = (x̂^t̂, ŷ^t̂) and x^t = x̂^t imply y^t = ŷ^t, where t ≥ t̂. Let T_e = {τ : y^t(τ) ≠ ŷ^t(τ)}. If T_e = ∅, then y^t = ŷ^t. Suppose T_e ≠ ∅. Since T is well ordered, τ_o = min_{τ ∈ T_e} τ exists. Since (x^t̂, y^t̂) = (x̂^t̂, ŷ^t̂),

y(t̂) = h(x^t̂, y^t̂) = h(x̂^t̂, ŷ^t̂) = ŷ(t̂)

should hold, which implies that t̂ < τ_o. Since τ_o is the minimum element of T_e, (x^τₒ, y^τₒ) = (x̂^τₒ, ŷ^τₒ) holds. Let σ = τ_o − t̂, which is positive. Then we have

y(τ_o) = h(x_στₒ, y_στₒ) = h(x̂_στₒ, ŷ_στₒ) = ŷ(τ_o)

which implies that τ_o ∉ T_e. This contradicts the fact that τ_o is the minimum of T_e. Therefore, T_e is empty. Q.E.D.
Chapter VI
STATIONARITY AND TIME INVARIANCE
The class of stationary systems is important because, in applications, many systems can be considered stationary with a sufficient degree of accuracy, and because in theoretical studies the additional property of stationarity allows a deeper analysis of such a class of systems, yielding a better understanding of their behavior. A concept closely related to stationarity is that of time invariance; while stationarity is defined in reference to the system itself and its restrictions on appropriate time subsets, time invariance is defined in reference to the auxiliary functions. The class of systems considered in this chapter will be assumed to have a state-space representation. This will simplify the developments, but will require a slight modification of earlier definitions. In practice, dynamical systems are usually defined in reference to a given state space. Realization theory for stationary and time-invariant systems is developed in this chapter both for the general case and for the case when the linearity property holds. Conditions for a strongly nonanticipatory realization are given, and the decomposition of a realizable linear input-response function into two parts, namely, a map from inputs to states and a map from states to output values, is proven. After considering the effect of past-determinacy, it is shown on an axiomatic basis how the developed theory can be linked directly with classical linear systems theory.
1. STATE-SPACE STATIONARITY AND TIME INVARIANCE
For the class of systems considered in this chapter, it will be assumed that the time set is stationary and that the input and output objects are such that X_t = F^t(X) and Y_t = F^t(Y) for every t ∈ T. It will also be assumed at the outset that a state space is associated with the system under consideration. We have then the following definition.

Definition 1.1. A time system S ⊂ X × Y is stationary if and only if

(∀t)(S_t ⊂ F^t(S))

The case when S_t = F^t(S) will be referred to as full stationarity.
Definition 1.2. A time system S is time invariant if and only if there exists a response-function family p for S defined on a state space C,
p
= {pr:C x
x, y} --*
such that (V~WC)(VX,) (P,(C, x,)
=
F'(Po(C, F
-w)))
Notice that because the state space is used directly in definition, the state object S , is, in general, a proper subset of S,P :
s, = StP = {(x,, Y t ) : ( W Y , = P , k , x,))) Definition 1.3. A dynamical system ( i j ,
4)is time invariant if and only if
(i) the response family p is time invariant ; (ii) the state-transition family 6 is such that ( ~ t ) ( ~ t ' ) ( ~ C ) ( ~ X ~ , , ) ( 4 , ,x,,,) , ( C ,= +or(c,
where t
=
~-f(xtt,)))
t' - t .
Linearity obviously does not affect the concepts of stationarity and time invariance. However, in reference to the linearity of a system, the time invariance can be defined in a special form. Definition 1.4. A dynamical linear system is time invariant if and only if
(9 (W(VC)[P1r(C) = FYP 1o(C))l (ii) ( W V X , )[P2,(X,) = ~ Y P 2 0 ~ ~ - ~ ~ ~ , ~ ~ ~ 1 (iii) (Vt)(Vt')(Vc)[t'2 t -+ $J~,,,(c)= 410,(c)]where z = t' - t (iv) (Vt)(Vt')(tfx,,,)[t' 2 t -, 4 2 , , k , , ) = 4 2 0 , ( F - ~ ( ~ ~ ~ , ) ) l If p satisfies (i) and (ii), p is called a time-invariant linear-response family.
99
2. Realization Theory of Time-Invariant Systems 2. REALIZATION THEORY OF TIMEINVARIANT SYSTEMS
(a) General Time System
x&
t E T } be a family of timeTheorem 2.1. Let p = { p r : p t: C x X , + invariant maps. Then p is realizable by a time-invariant dynamical system if and only if for all t E T, p satisfies the condition x’. xr) I
( ~ ~ ) ( ~ x ‘ ) ( 3 c ’ ) ( ~ x r ) [ pxt) r ( c=’ ,
TI
PROOF. The proof is similar to Theorem 3.1, Chapter 111, except that every state-transition function has to be shown time invariant. The only if part is clear. Let us consider the if part. Let E c C x C be such that
(c, c’) E E
4.+ (V’x)(Po(C,x)
=
PO(C’,
x’))
Since p is time invariant, we can define p , : ( C / E )x X , + p,(c, xi). Let 4,. c (C/E x X,,,) x (CIE) be such that
x as P,([c],x,) =
(([cl, xrr,), [c‘l) E Li, 4.+ (vxr,)( Pr*([c’l,xr*) = PrNcI, xii, xr,) I T,) Then A,. is a mapping such that f,,.: C/E x X,,, + C/E and
P t U J [ c l , Xrr,), Xr.1 = Pt([cl,xrc, . xr,) I T , for every x,,.We will show that f = {f;,.}is time invariant.
Since p is time
invariant, F-‘(pr,(C‘, xi,)) =
PAC’, F-TXr,))
and F-‘(p,(c, Xrr, . xr,) I ‘4.1 = po(c, F-‘(xri,). F-YXr,))
hold where z
=
I T,
t’ - t . Consequently, we have that
f satisfies the semigroup property. Let p : C/E + C be such that p ( [ c ] )E [c]. Let & : C x .X,,.
-+ C
be such that 4rr,(C,
Then
=
{&}
xrr,) =
~(414~19 xir,))
satisfies the semigroup property and is consistent with
p.
100
Chapter V I Stationarity and Time Invariance
Furthermore, since f is time invariant, we have that
Theorem 2.2. A time system S is stationary if and only if it has a timeinvariant dynamical system representation (p, +). PROOF. Let us consider the if part first. Let ( x , , y,) E S , . Since S , c S,P holds, there exists c E C such that y, = p,(c, x,). Since i j is time invariant, y , = F'(po(c, F-'(x,))) holds, that is, F-'(x,, y t )E S . Hence, S is stationary. Conversely, suppose S is stationary. If we can find for S a systems-response function p o :C x X 4 Y such that the following holds :
( v c ) ( ~ x ' ) ( w ([PO(C, w
Xf
*
F W ) I IT;
= F'(Po(C',
x))l
then from po we can derive p which satisfies the condition of Theorem 2.1, where p , : C x X , -+ Y, is given by P , k x , ) = F'(Po(C7 F - f(x,N)
Then we have the desired result from Theorem 2.1. We shall show that a stationary system has a systems-response function which satisfies the above condition. Let
c = {c:c:X Let po:C x X Let
--*
Y & c E S}
Y be po(c,x ) = c(x). Suppose c E C and x' are arbitrary.
4 .
f We shall show that f
=
F - ' [ c ( x ' * F ' ( - ) ) ) T , ] : X -+ Y
E S.
Let x
EX
be arbitrary. Then
( x t . F'(x), c(x' . F'(x)))E s
implies ( F W , c(x' * F'(x)) I IT;) E s,
Since S, c F'(S) holds, we have ( x , F-'(c(x' . F'(x)) I IT;)) E s
that is, f c S . Then it follows from the definition of C that there exists c' E C such that c' = f.Therefore, the above condition is satisfied. Q.E.D.
101
2 . Realization Theory of Time-Invariant Systems
Notice that Theorem 2.2 does not take into consideration the nonanticipation conditions. This case will be dealt with later. (b)
Linear Time System
Proposition 2.1. Suppose a family of time-invariant linear maps p = { p l : p t : Cx + K & t € T }
x,
is reduced and realizable. Then a family of linear state-transition functions 6 associated with p is uniquely given by 4l,t'(C) =
I IT;,)
P;t'l(Plt(c)
42tt"Xtt') = P l t ' l ( P 2 t ( X f f ' . 0)I
IT;,)
Furthermore, (p, 6)is time invariant. PROOF. The first part is given in Theorem 3.4, Chapter IV. We shall show = c'. Then that (p, 6)is time invariant. Let $JItt,(c)
4ltt44 = c'
+
P l t W
= Pltk)
I IT;,
+
= ( F f ( p l o ( c ) )I)IT;, = F'(plo(c) I
Fr(~lo(c')) = P ~ O ( C ) I T,
Similarly, let c'
+
Ft'(P1O(C"
where
T,)
Plr(c') = P ~ O ( C ) I T,
+
t =
t' - t
C' = $Jlor(C)
= 4 2 t t 4 x t t p ) .Then
c' = 4Ztt'(Xtt*)
-+
P l t W =
= Ff(P20(F-'(xtt,
PZt(X,f' * 0)I
. 0)))I IT;,
+
T,
Ft'(P1O(C"
+
Ft'(Pl0(C'))
= F ' ( P Z o ( F - ' ( x , , ~ . 0))I T,)
where z +
Plr(c')
I T,
= P z o ( F - ' ( x t , , ) . 0)
=
t' - t
C' = 4 2 0 r ( F - ' ( x t t , ) )
Hence, the dynamical system representation is time invariant.
Q.E.D.
Theorem 2.3. Let
P={pt:pt:Cx X , + ~ & ~ E T }
be a family of time-invariant linear maps. Then p is realizable by a linear time-invariant dynamical system if and only if p satisfies the input-response consistency and
(W,~ ' ) ) ( W ( = Pp0(c, ~ ~x (' . C 0)I' T ))
(6.1)
which is one of the state-response consistency conditions. PROOF. Since the proof is similar to that of Theorem 3.2, Chapter IV, we shall sketch the procedure. For the case of time-invariant systems, S, = SopI T, may be a proper subset of Srp.Hence, the state-response consistency need
102
Chapter VI Stationarity and Time Invariance
not be satisfied in the complete form. Furthermore, due to the time invariance, S, c Srp++condition (6.1)c,(V(c, xtt,))(3c')(pIt,(c') = p,(c, x t f ,'0)1 T,)
holds. The main procedure for the if part is the following. First, derive the reduced-response family and apply Proposition 2.1. Next, define a statetransition family as done in Theorem 3.2, Chapter IV, and show that it is time invariant. The details are left for the reader. Q.E.D. Classical theory of realization is mainly concerned with the realization of an input-response function [5]. The counterpart in our general systems theory is given by the following theorem. Theorem 2.4. Let
p 2 = { p z t : p z t : X t +I : & t E T } be a family of linear maps. p 2 is realizable by a time-invariant strongly nonanticipatory linear dynamical system if and only if the following conditions are satisfied : (i) p 2 satisfies the input-response consistency; (ii) p 2 satisfies the strongly nonanticipatory condition, i.e., ( ~ , ' X t ) ( P 2 0 ( 0 .Xr)
(iii)
I T'
= 0)
p 2 is time invariant, i.e., (QX,)(P2r(X,)
= F'(PZo(F- '(X'))))
If p 2 is realized, a state space C and a state-response function plr :C + yt are given by c = { F - ' ( p z O ( x ' . o ) 1( 7 ; ) : x ' E X ' & t E T ) Pl,(C)
= F'W
PROOF. The only if part is clear. Let us consider the i f part. We shall show that the state space C mentioned in Theorem 2.4 is a linear algebra. Let c = F - 1 ( p 2 0 ( ~0) ' . 1 17;) and CL E d be arbitrary. Then we have
cic = F-'(p,,(ax'
*
I
0) 'T)E
c
Let i. = F - i ( p 2 0 ( 2 i .0)1 TJ be arbitrary. Suppose t 2 2 such that z (z 2 0).Conditions (i) and (iii) imply that p20(0
. F'(2').
I
0)
T, = ~ z ' ( F ' ( 2 ~0) ) . = F'(p20(Ai . 0))
Consequently, we have that p z 0 ( 2 ' . 0)I q = F-'(p,,(o. F ' ( 9 ) . 0)I T,)I = F-'((p,,(o
'
q
F'(9) * 0)1 T )1 T,)
= F-'(p,,(o~ F'(9). 0)1 T,)
t =
i+
2. Realization Theory of Time-Invariant Systems
103
Therefore, c
+ 2 = F-'(p,,(x'. =
F-'(p,o((X'
0)
I 7J
+ F-'(p,O(O.
F'(2'). 0)I 7;)
+ 0 .F'(2')). 0)I 7;)E c
Hence, C is a linear algebra. Let p l f : C-, be such that pl,(c) = F'(c). We shall show that the conditions of Theorem 2.3 are satisfied. Let c = F-'(p,,(x'. 0 )I
and x' be arbitrary. Let t
7;)E c
+ i = z. Then we have
p2o(x'. 0 ) = F - ' ( p z 0 ( o . F'(x'). 0 )1 7;)
Consequently,
h O ( 4 + PZO(X'. 0))I 7; = (c + P 2 O ( X f . 0))I 7; = ( F - ' ( p , 0 ( x i . 0 )I 7;) + F-'(p,,(o F'(x'). 0 )I 7;))I 7; = F-'(p,,(x'. F'(x'). 0 ) I 7;)I '7; = F-'(p,,(x'. F'(x'). 0 ) I T,) *
Therefore, for c'
=
F-'(p,,(x'. F'(x'). 0 ) I T,)E c
we have p '(c') = po(c,x' . 0)I 7;.
Definition 2.1. A mapping E' :X
Q.E.D. -, X
(or E' : Y + Y), which is defined by
E"(x) = 0'.~ ' ( x ) or P ( y ) = 0' . ~ ' ( y )
where 0' = o I T', is called an extended shift operator. Corollary 2.1. A linear map p 2 , : X -, Y is realizable by a time-invariant strongly nonanticipatory linear dynamical system if and only if the following diagram is commutative :
104
Chapter VI Stationariry arid Time Invariance
that is, p z 0 is a homomorphism with respect to the extended shift operator p, i.e.,
(W(W(E'(P 2o(x)) = P 2 0 ( & 4 ) PROOF. From any given p z 0 :X invariant maps by P2t(Xt)
-+
Y we can derive a family
p 2 of time-
=~ Y P 2 o ( ~ - W )
Then Theorem 2.4 says that i j 2 is realizable by a time-invariant strongly nonanticipatory linear dynamical system if and only if the following conditions are satisfied : (input-response consistency)
(VX,)(F'(~~~(F-'= ( Xpz0(o , ) ) ) . x,) I IT;) ( W ( P Z O ( 0 .
x,) I T'
(strong nonanticipation)
= 0)
Those two conditions imply and are implied by ( V x ) ( ~ ' ( p z 0 ( x=) )pzo(E'(x))). Q.E.D. The following corollary is an expression of the classical realization theory [5] in the framework of general systems theory.
Corollary 2.2. A linear map p z 0 : X -+ Y is realizable by a time-invariant strongly nonanticipatory linear dynamical system if and only if the following conditions are satisfied : (i) p 2 0 is strongly nonanticipatory. (ii) There exist a linear algebra C over the field d and two linear maps F ( t ) :C -+ B and G ( t ) : X ' -+ C such that p2O(x1.o)(T)= F(T - t)[G(t)x']
holds for r and z( 2 t), where G(t)satisfies the following property : G(t')(o. x,,.) = G(r' - t ) ( F - t ( ~ , , , ) )
Ifp,, is realizable, let the state space and p l rbe as given in Theorem 2.4. Then let F(t) :C -+ B and G(t):X' + C be as follows : PROOF.
F(t)(c)= PlO(C)(t)
G(r)(x')=
=
c(t)
I 7;)
F - 1 ( P 2 0 ( X ' . 0)
Then for z 2 t , we have PZO(X'
*
o ) ( d = ( P 2 0 ( X f . 0)I THz) =
F-'(p,O(x'.o)I
=
F(T - t ) [ G ( t )(x')]
IT;)(t-
t)
105
2. Realization Theory of Time-Invariant Systems
Q.E.D.
The following theorem is a basic result on a linear stationary system.
Theorem 2.5. Let S be a stationary linear system. There exists then a linear dynamical time-invariant representation of S if and only if there exists a linear map p20 :X -, Y such that
PROOF. Suppose a linear mapping p20 :X -, Y, which satisfies the above two conditions, can be found. We shall apply the natural realization to S, where the input-response function is p 2 0 . Then in the process of natural realization, the response function p2, is given by p 2 , : X r-, such that P ~ , ( x ,= ) p 2 0 ( 0 . X J I 2;. Hence, ~ 2 r ( x r )= F f ( ~ 2 0 ( F - f ( ~ , )Let ) ) * C = {Y :(o, Y )E S}. Let p l o : C -, Y be such that plo(c) = c. Let p l , : C + Y be such that pl,(c) = Fr(plo(c)).Then p = { p f = (plt,p2,)} clearly satisfies the inputresponse consistency. Let c E C and x' be arbitrary. Let y, = po(c,x' . 0)I ?;. Since (0,y,) E S, c F'(S), we have (0,F-,(y,)) E S ; that is, there exists c' E C such that F-'(y,) = plo(c'). Consequently, we have plt(c')= po(c,x' . 0)I 17;. Therefore, it follows from Theorem 2.3 that p is realizable by a linear timeinvariant dynamical system. Apparently, Sop = S. Conversely, suppose S has a time-invariant linear dynamical system representation (p, $). Then by definition of time invariance, p2,(x,) = l % 7 2 0 ( F - r ( x f ) ) ) . Since { ( p l , , ~ 2 , > > is realizable, p2r(xr) = p 2 0 ( 0 . xr) I T for every x,. Hence,
Chapter VZ Stationarity and Time Invariance
106
3. STATIONARY PAST-DETERMINED SYSTEMS
The representation of a stationary system by a time-invariant dynamical system depends on the availability of a family of response functions that possesses the required property. Although the existence of such a family of functions is already expected by the very fact of stationarity, there is no general procedure for constructing such a function. The objective of this section is to prove that for the class of past-determined systems, the natural systems-responsefunction (as defined for that class of systems in Chapter V) satisfies the required conditions and can be used for a dynamic time-invariant representation of a stationary system.
Theorem 3.1. Suppose a stationary time system S c X x Yis past-determined from 2. Then Sican be represented by a time-invariant strongly nonanticipatory dynamical system. PROOF. Let p i : S f x X i+ 4 be the natural systems-response function. Let Si= C . For any t 2 Z let p t : C x X, -, Y, be such that P,(C,
x,) = F' -f(Pdc, F'
- '(X'N
Then pi = { p , : t 2 2) is time invariant. If we show that
(tfc)(vxi,)(3c')(Vx,)[Pt(Cf, xt) = P ~ ( c ,it. xt) I TI the desired result follows from Theorem 2.1. Let x, and c = (xi,y') E C be arbitrary. Since the system is past-determined, there exists a unique yi,such that (x' . xi,,y' ytt) E S'. Let t - t = CT 2 0.Let x' . xi, = x' and y' yit = y'. Since S is stationary, we have that
-
(XI,
y') I T,, E S' I T,, = S , I T,, c F"(S) I T,' = F"(S I Ti)
Consequently, F-,((x', y') I T,,)E S'. Let (XI,y') 1 5,= (x,,, y,'). Let c' = F-,(x,,, y,,) E C.Let x, be arbitrary. Then p&, xi, . x,) = y, for some 9,. The definition of pi implies then that (xz. x,, y' .j),) E S . We have from the stationarity that (XI x,,y' ~ 9 ,I )T, E S , c F"(S), that is, a j ) ,
F-"[(x'.x,,y'.EJI T,1
=
F-%,,.x
t,Yar.9r)ES
On the other hand, let yi = pi@', F-"(x,)). Then we have that (F-,(x,,. xt), F-"(y,,). yt)E S . Consequently, since S is past-determined from 2, we have F-"@,) = yi,that is, Pt(C', x,) = FYYJ = 9 t = P i k Xi'. X') I T Q.E.D.
Theorem 3.2. Let a stationary linear system S c X x Y be past-determined from 2. Then Sican be represented by a time-invariant strongly nonanticipatory linear dynamical system.
4 . Axiomatic Construction of a Class of Dynamical Systems
107
the desired result follows from Theorem 2.5. Let x, be arbitrary. Let yi = pzi(F-"(x,))where 0 = t - 2 2 0. Then (0.F-"(x,), 0 .y i ) S~ holds. Notice that S is past-determined from 2 and that (0,0)E S. Then pzi(o. x,) = o . 9,for some j , , that is, (0.x , , o .9,)E S. Since S is stationary, we have that (0*
x , , 0 .9,)I T,E s, c F"(S)
that is (0* F U ( X t ) , 0 *
Consequently, y,
=
F"(9,))E s
F-"@,). Hence,
4. AXIOMATIC CONSTRUCTION OF A CLASS OF DYNAMICAL SYSTEMS
The general systems theory we are developing in this book obviously covers a full spectrum of specialized theories which are concerned with various classes of systems having deeper and more specific mathematical structures. However, it is of interest to establish specific bridges linking the present theory with the classical theories in a precise and rigorous manner. This task requires a careful treatment. It is important because such links show precisely which of the results in the general systems theory are valid abstractions of the specific, more limited results derived earlier, and in that way show which the general structural properties are that the system ought to possess in order to behave in a certain way. We shall consider in this section the link between general systems and dynamical systems described by the constant coefficient linear ordinary differential equations. In addition to the interest in application, these systems actually play a central role in the classical theory of linear systems. The classical theory of linear systems can very well be considered as the theory of the class of linear differential equation systems.
108
Chapter VI Stationarity and Time Invariance
The class of dynamical systems considered in this section will be described by a set of constant coefficient linear ordinary differential equations of the following fotm : dc/dt = FC
y
=
+ GX
HC
where F, G, and H are constant matrices, while x, y, and c are vector-valued functions. We shall arrive at such a description on an axiomatic basis starting from the general linear time system S c AT x BT. Axiom 4.1. The input and output alphabets, A, B, the time set T, and the field d needed for the specification of a linear system are defined as : A = Em ; B = E'; T = R' (the set of nonnegative real numbers); d = R ; X = L J O , 03).
Axiom 4.2. S is stationary and past-determined from 2. Axiom 4.3. Si = {yi:(o,yi)~ Si}= C is a finite-dimensional vector space with the dimension n.
Let p 2 i : C x Xi-, & be the icput-response function associated with the natural systems-response function. Let {cl,. . . ,c,} be a basis of C and let #:C -, En be such that n @(c)=
((XI,.
. . , a,) * c =
1 aici
i= 1
Notice that 4 is linear. Let p l i ( c ) = c. Since the time-invariant systemsresponse.family derived from pzi and p is realizable, the following condition (state-response consistency) holds : (vxir)(3c)(p,r(c) = Fr-'(~1i(c)) = PZi(Xit.
0)
I T)
that is, { F ' - ' ( p z i ( x i r . o ) I(I ; ) : X i r E X i r & t E T }
is a linear subspace of C. This fact will be used in the subsequent arguments. The next axiom assures continuity in an appropriate sense. Axiom 4.4. (i) For each x E X, p2i(xi): q -+ B( = E') is differentiable. (ii) For each t E 7;, p Z i (- ) ( t ) : X i -+ B is continuous. (5) 4pli( - ) :C -+ En is continuous. (iv) c E C is an analytic function.
4 . Axiomatic Construction of a Class of Dynamical Systems
109
Notice that the system described by Eqs. (6.2) and (6.3)is past-determined from 2, where 2 is any positive number as proven in Chapter V. It is clear that these four axioms are satisfied by the system of Eqs. (6.2)and (6.3). We shall show now how these equations can be derived from these axioms. For notational convenience, 2 is replaced by 0,which can be interpreted to mean that the time axis is shifted left by 2. In other words, we shall wrte pz0 and plo for pzi and p l i . &, Xi,and other restrictions should be replaced accordingly. No generality is lost by doing so. Since p z o ( - ) ( t ) : X -+ E' is continuous and X = L,(o, m), the representation theorem of linear functionals on H space implies that there exists w(t)E L,(o, m) such that p&)(t)
=
w(t, 7 ) ~ ( 7d7 )
JOm
where w(t, 7) = w(t)(7):Em-+ E' is a matrix. Axiom 4.2 implies that p , is time invariant and satisfies the input-response consistency, i.e., pz0(o.F'(x)) I 7; = F ' ( P ~ ~ (holds. X ) ) Consequently, for each 7 2 t, we have
=
=
=
IOm
w(7, o ) ( o .F'(x))(a)do
lm lm
4 7 , ~[F'(x)l(o) ) do
w(7,o)x(a - t ) da
= [om
w(7,o
+ t)x(o)do
On the other hand,
=
Sby
w(7 - t, o)x(a) do
Therefore, we have JOm
~ ( 7o ,
+ t)x(o)do =
Sby
w(7 - t, o)x(o) do
110
Chapter VI Stationariry and Time Invariance
Since the above equality is satisfied for every x E X, we have w(7, a + t ) = W(T - t, a)(r - t 2 0). Hence, for a = o we have w(z, t) = w(7 - t, 0). Let W(T - t, 0 ) E w0(7 - t). Then pzo(x)(t) =
wo(t
- a)x(a)da
(6.4)
Furthermore, since pz0 satisfies the strong nonanticipation, we have that for each x, E X, m
pZo(0. xr)(t) =
wo(t
- z)xr(r) dz = 0
that is, t c o + wo(t) = 0. This is the nonanticipatory condition usually used for weighting functions. In summary, we have pzo(x)(t) =
wo(t
- T)X(Z) dz
Following Corollary 2.2 of Theorem 2.4, let F(t):E"--+ E' be such that n
F(t)(a,, . . . ,a,) =
C aici(t) i= 1
Since dF-'(pzo( - . 0)I 7 J :X'
+
E"
is continuous, there exists a matrix G(t, a) :Em+ E" such that
Since Corollary 2.2 of Theorem 2.4 says that G is time invariant and since the system is strongly nonanticipatory, we have from the same argument used for w that G(t,a) = G(t - a,0)I Go(t - a) and Go(t)= o for t c 0.Hence p z o ( ~ ' o)(z) . = F(T - t )
l
G,(t - O)X'(G)-~O
On the other hand, we have from Eq. (6.4) that P ~ o ( x 'o)(z) . =
6
W,(T
- a)xf(a)da
Let 'c - t = 7' and t - a = - a'. Then we have w(z', a') = F(t') - Go(- a'). Since ci is an analytic function, F(t) is continuously differentiable.Therefore,
5. Abstract Transfer Function
111
the classical realization theory is applicable so that the system can be represented by Eqs. (6.2) and (6.3).
5. ABSTRACT TRANSFER FlJNCTIONt
When a dynamical system is expressed by a constant coefficient ordinary linear differential equation or by a constant coefficient linear difference equation, one of the most powerful techniques used in the analysis of such a system is based on a transfer function. It is, therefore, important to consider how a transfer function can be represented in the abstract framework as developed in this book. In order to attack the problem, we have to introduce a deep structure for the input object X and the output object Y. Let V be a linear algebra such that A = V mand B = V‘, where rn and r are positive integers. Let U c V T be a time object such that the following conditions are satisfied : (i) U is a linear algebra over d . (ii) U is a commutative ring whose multiplication operation * satisfies thecondition:foruandu’EU,u*u’=ot,u=oV u’=o. (iii) The input object X and the output object Y are expressed by U as X = U” and Y = U’. U will be referred to as a basic time object. As a concrete example, let T = [o, co), V = R, and U each t~ T, let (U
* Q)(t)
=
sb
u(t -
CT)
*
=
C(o, co). For
Q(o)do
It is easy to show that the operation * is commutative. Furthermore, it can be shown that u * G = o c,u = o V a = o.? Then C(o, co) is a basic time object for such X and Y in the previous section. Suppose a linear input-response function pzO:X + Y is expressed as m
p z O ( x )= y
c-)
yi
=
for each i (= 1,. . . ,r )
11 wij * x j
(6.5)
j=
where w_{ij} ∈ U. The right-hand side of the above expression is an abstract form of the expression of a dynamical system by a weighting function. Let us introduce an equivalence relation E ⊂ U² × U² such that

((u, u′), (û, û′)) ∈ E ↔ u * û′ = û * u′

† See Mikusiński [6].
Chapter VI Stationarity and Time Invariance
The following is a well-known result [7]:

(i) U²/E is a field, called the quotient field, whose operations are defined by

zero element = [o, u],   identity = [u, u],   where u ≠ o is arbitrary
[u, u′] + [û, û′] = [u * û′ + û * u′, u′ * û′]
[u, u′] × [û, û′] = [u * û, u′ * û′]
(ii) Let u₀ ∈ U be a nonzero fixed element of U. Let h: U → U²/E be such that h(u) = [u * u₀, u₀]. Then h is a homomorphism in the sense that

h(u + u′) = h(u) + h(u′)   and   h(u * u′) = h(u) × h(u′)
An abstract transfer function is, then, given by applying h to Eq. (6.5), that is,

h(y_i) = Σ_{j=1}^{m} h(w_{ij}) × h(x_j)
Therefore, the abstract transfer function TF(ρ_{z₀}) of ρ_{z₀} is given by the following matrix form:

TF(ρ_{z₀}) = [ h(w_{11}) … h(w_{1m}) ]
             [    ⋮            ⋮     ]
             [ h(w_{r1}) … h(w_{rm}) ]

In particular, if r = m = 1, then

TF(ρ_{z₀}) = h(w_{11}) = h(y_1)/h(x_1)
In order to demonstrate that the abstract transfer function is a valid abstraction of the usual concept of a transfer function, we shall consider how a differential or integral operator can be represented in the quotient field U²/E. Let D: U → U be a linear operator defined by

D(u) = u′ ↔ u = l * u′ + α(u)   (6.6)

where l ∈ U is a fixed element and α: U → U is a linear operator. In the
usual calculus, D represents the differential operator, while relation (6.6) represents the following:

du/dt = u′ ↔ u(t) = ∫₀ᵗ u′(τ) dτ + u(0)

where l is a constant function such that l(t) = 1 for all t. When the homomorphism h is applied to relation (6.6), we have

h(D(u)) = h(u′) ↔ h(u) = h(l) × h(u′) + h(α(u))

that is,
h(D(u)) = h(u′) = (1/h(l)) × (h(u) − h(α(u)))

If 1/h(l) is replaced by s, the above relation is expressed by

h(D(u)) = s(h(u) − h(α(u)))   (6.7)
If h is considered as a Laplace transformation, Eq. (6.7) is exactly the basic formula for the differential operator in the Laplace transformation theory. Similarly, if the integral operator I: U → U is represented by the following relation:

I(u) = u′ ↔ u′ = l * u

the operator I is represented by the following:

h(I(u)) = h(u)/s   (6.8)

Equation (6.8) is also a basic formula in the Laplace transformation theory. Furthermore, it is easy to show that h is an injection. Hence, h has the "inverse transformation."
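The quotient-field construction U²/E can be made concrete on a discrete stand-in for (U, *): polynomials over the rationals under the Cauchy (convolution) product form a commutative ring without zero divisors, so the same pair construction goes through. A small sketch (hypothetical example, not from the text):

```python
# Pairs [u, u'] are identified by ((u, u'), (uh, uh')) in E  <->  u * uh' = uh * u'.
from fractions import Fraction

def mul(u, v):                      # Cauchy product: discrete analogue of *
    out = [Fraction(0)] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            out[i + j] += a * b
    return out

def trim(u):                        # drop trailing zero coefficients
    while len(u) > 1 and u[-1] == 0:
        u = u[:-1]
    return u

def eq_in_E(p, q):                  # [u, u'] ~ [uh, uh']  <->  u * uh' = uh * u'
    return trim(mul(p[0], q[1])) == trim(mul(q[0], p[1]))

one = [Fraction(1)]
t = [Fraction(0), Fraction(1)]      # the polynomial "t"
h = lambda u: (u, one)              # h(u) = [u * u0, u0] with u0 = 1

# h is multiplicative: h(t * t) equals h(t) x h(t) in U^2/E
prod = (mul(t, t), mul(one, one))
assert eq_in_E(h(mul(t, t)), prod)
# and [t^2, t] reduces to [t, 1], since t^2 * 1 = t * t
assert eq_in_E((mul(t, t), t), (t, one))
```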
Chapter VII
CONTROLLABILITY
Some basic controllability-type concepts are defined in this chapter in a general setting. These concepts involve both a system and a performance function; classical controllability-type notions such as state controllability, functional controllability, or reproducibility are shown to be special cases of the more general notions. Necessary conditions are given for the general controllability of a multivalued system, i.e., a system evaluated not by a single element but by an n-tuple, e.g., a system with more than one state variable for the case of state controllability. For the class of linear systems, both necessary and sufficient conditions for general controllability are derived. It is shown then that the classical conditions for the state controllability and functional controllability of differential equation systems can be obtained by the application of the general results. The chapter concludes with the investigation of mutual relationships between various properties of the time systems as they relate to controllability and realizability conditions.
1. BASIC CONCEPTS
To formalize the controllability notions on a general level, two remarks are of interest:

(i) The ability of the system to perform in a certain way is evaluated usually by the output, but depends ultimately upon the input. The conditions
for a system having a given property will be expressed, therefore, in reference to the existence of certain inputs.
(ii) To specify a notion within this category, it is necessary, in general, to introduce a performance or evaluation function in terms of which the desired behavior is specified. This ought to be emphasized because in the classical theory (e.g., controllability of differential equation systems in the state space) such a function is not shown explicitly. We shall assume, therefore, that in addition to the system S ⊂ X × Y, there is given a map

G: X × Y → V
termed the evaluation function. We shall define the system in this chapter in terms of a map on two objects M and U, i.e.,

S: M × U → Y
Several interpretations of S are possible; e.g., S is the initial-response function of a general system which is actually a relation; the system itself is a function; S is the union of a family of response functions; etc. When a specific interpretation for S is intended, it will be indicated as such. The evaluation function is now G: M × U × Y → V. The composition of S and G is a mapping g: M × U → V such that for all (m, u) ∈ M × U

g(m, u) = G(m, u, S(m, u))
There are three generic notions in this category.

Definition 1.1. A set V′ ⊂ V is reproducible (attainable, reachable) relative to g if and only if

(∀v)[v ∈ V′ → (∃(m, u))(g(m, u) = v)]

When the mapping g is clear from the context, the explicit reference to g will be omitted. A point v ∈ V is reproducible if and only if there exists a reproducible set V′ such that v ∈ V′.
Definition 1.2. A set V′ ⊂ V is completely controllable relative to g (G and S) if and only if

(∀v)(∀u)[v ∈ V′ & u ∈ U → (∃m)(g(m, u) = v)]
When the mapping g is clear from the context, the explicit reference will be omitted. A variation of Definition 1.2 is worth noticing, although it will not be explored in any detail in this book. Namely, the requirement in Definition 1.2
116
Chapter VII Controllability
might appear too restrictive for some practical applications where it would be sufficient to have the performance within a subset V′ for any u ∈ U rather than requiring that the performance attain a given fixed value; the condition for complete controllability is then

(∀u)[u ∈ U → (∃m)(g(m, u) ∈ V′)]
The third notion is given for the case when the value object V has more than one component, i.e.,

V = V_1 × ⋯ × V_k = ×{V_j : j ∈ I_k}

where I_k = {1, . . . , k}. For notational convenience, we shall use the symbol V^k for V to indicate how many component sets there are in V. When V has more than one component, the system is referred to as multivalued. For each i ∈ I_k, p_i will denote the projection map p_i: V^k → V_i; i.e., p_i(v) is the ith component of v. We shall refer to V′, a subset V′ ⊂ V^k, as being a Cartesian set if and only if

V′ = p_1(V′) × ⋯ × p_k(V′)

i.e., if V′ is the Cartesian product of its component sets. Given an arbitrary subset V′ ⊂ V^k, the Cartesian set V̂′ generated by V′ is the set V̂′ = p_1(V′) × ⋯ × p_k(V′). Obviously, a set is Cartesian if and only if V′ = V̂′. We can now introduce the following concept.

Definition 1.3. A set V′ ⊂ V^k is cohesive (for a given S and G) if and only if the Cartesian set V̂′ generated by V′ is not reproducible; i.e., V′ is cohesive if and only if the following statement is true:
(∃v)[v ∈ V̂′ & (∀(m, u))(g(m, u) ≠ v)]
Alternately, V′ is a noncohesive set if and only if

(∀v)[v ∈ V̂′ → (∃(m, u))(g(m, u) = v)]
Interpretation of the reproducibility is obvious: Any element of a reproducible set can be generated when desired. Controllability is a stronger condition requiring that the given value, v ∈ V′, be achieved over a range of different conditions, i.e., for all u ∈ U. Cohesiveness is a more special concept since it is defined only for the systems evaluated in terms of several components. A system is cohesive relative to a Cartesian set V′ = V_1′ × ⋯ × V_k′ if not all of the possible combinations of component values can be achieved simultaneously; i.e., a given value component, say v̂_i ∈ p_i(V′), can be achieved only in conjunction with some of the values from the remaining component sets. It appears as if there were an internal relationship
between the components of the value set, hence the term "cohesiveness." If the set V′ is cohesive, there exists a nontrivial relation

Ψ ⊂ p_1(V′) × ⋯ × p_k(V′)

such that

(v_1, . . . , v_k) ∈ Ψ ↔ (∃(m, u))(g(m, u) = (v_1, . . . , v_k))

Ψ will be referred to as the cohesiveness relation. When Ψ is a function, i.e.,

Ψ: p_1(V′) × ⋯ × p_{k−1}(V′) → p_k(V′)

it is referred to as the cohesiveness function, and the set will be termed functionally cohesive. Several additional conventions will be adopted in this chapter:
(i) When the selection of m and/or u is restricted, e.g., to a set M′ × U′ ⊂ M × U, we shall talk about reproducibility and controllability relative to M′ × U′, and the set V′ will be denoted by V′[M′ × U′].
(ii) If V′ = ℛ(g), the system itself will be referred to as cohesive or as controllable depending on which property is considered. The system is then noncohesive if and only if the range ℛ(g) is a Cartesian set.
(iii) If V′ ⊂ g(M × {û}) where û is a given element of U, we shall say that the system is controllable from û or simply û-controllable in V′. Analogously, if V′ is a unit set V′ = {v̂}, we shall say that the system is controllable to v̂ or simply v̂-controllable.

(a) Relationship between Basic Controllability-Type Concepts

Relationships between various concepts introduced in the previous pages can be easily derived from the definitions themselves.

Proposition 1.1. If V′ is a completely controllable set, it is also a reproducible set.

The reverse of Proposition 1.1 is not true, however.

Proposition 1.2. If V′ is a noncohesive set, it is also reproducible.

The relationship between the complete controllability and cohesiveness is slightly more subtle. Noncohesiveness does not imply complete controllability nor vice versa, as one might intuitively expect. To show the validity of this assertion, let V′ be noncohesive and furthermore let there exist û ∈ U
and v̂ ∈ V such that g(m, û) = v̂ for all m ∈ M. This does not violate noncohesiveness, but surely precludes complete controllability, since an arbitrary v*, v* ≠ v̂, cannot be reproduced regardless of the selected m ∈ M as long as u = û. Alternatively, let V′ be completely controllable. There is no reason why V̂′ should be reproducible. Of course, if V′ = V̂′, then complete controllability does imply noncohesiveness of V′ by definition. From this conclusion, some sufficient conditions for the absence of controllability can be expressed in terms of cohesiveness.

Proposition 1.3. Let V′ ⊂ V^k be a Cartesian set, i.e., V′ = V̂′. If V′ is cohesive, it cannot be completely controllable.

The preceding proposition is important in various applications: If the given set of interest is Cartesian, in order to show that it cannot be completely controllable, it is sufficient to show that it is cohesive.

Proposition 1.4. Let V′ ⊂ V^k be a Cartesian set; V′ is completely controllable over U′ if and only if V′[M × {u}] is a noncohesive set for every u ∈ U′.

PROOF. Let V′ be noncohesive within M × {u} for every u ∈ U′; then V′ ⊂ g(M × {u}) for every u ∈ U′; i.e., V′ is completely controllable by definition. Alternatively, let V′ be completely controllable over U′; then V′ ⊂ g(M × {u}) for every u ∈ U′. V′ is then reproducible by definition, and since V′ is a Cartesian set, it is noncohesive. Q.E.D.
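A toy finite example (hypothetical, not from the text) separates the notions just discussed: the image of the map g below is reproducible and noncohesive, yet not completely controllable, consistent with Proposition 1.2 and the remark that noncohesiveness does not imply complete controllability.

```python
M = [0, 1]
U = [0, 1]

def g(m, u):
    # toy evaluation map with V = V1 x V2; the second component is tied to both inputs
    return (m, m ^ u)

image = {g(m, u) for m in M for u in U}
V1 = {v[0] for v in image}
V2 = {v[1] for v in image}
cartesian = {(a, b) for a in V1 for b in V2}

# every point of the image is reproducible (Definition 1.1):
assert all(any(g(m, u) == v for m in M for u in U) for v in image)
# the image equals its Cartesian closure, so this g is noncohesive:
assert image == cartesian
# yet V' = image is NOT completely controllable (Definition 1.2):
# for a fixed v, only one u admits an m with g(m, u) = v
completely_controllable = all(
    any(g(m, u) == v for m in M) for v in image for u in U
)
assert not completely_controllable
```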
2. SOME GENERAL CONDITIONS FOR CONTROLLABILITY
Controllability depends upon the character of the mapping g and the constraints in the domain M × U, while the conditions for controllability depend upon the more specific definition of these sets and mappings. However, for the case of a multivalued system, i.e., when the value object has more than one component, it is possible to derive rather general conditions for controllability which reflect the interdependence between the value components. Obviously, this is a question intimately related with the concept of cohesiveness. Indeed, the generic problem we shall consider here is the following: Given a multivalued system g, what are the conditions for the noncohesiveness of g? That is, is the Cartesian set generated by g(M × U) reproducible? From this, the controllability conditions follow immediately. Controllability in this case depends strictly upon the interdependence between the value components, and we shall refer to it as algebraic (or structural) controllability. The importance
of this case can be seen from the fact that almost all controllability-type conditions developed so far for various specific systems are of this type. We can now give sufficient conditions for cohesiveness.
Theorem 2.1. Let g: M × U → V^k be a multivalued system. If there exist a positive integer j < k and a function F: V^j → V^k such that the diagram

           g
  M × U -----> V^k
       \        ^
        \       |
  p^j∘g  \      | F
          \     |
           > V^j

is commutative (i.e., F ∘ p^j ∘ g = g) and p^{k−j} g(M × U) is nonempty and not a unit set, the system is cohesive [i.e., g(M × U) is not a Cartesian set], where V^j = V_1 × ⋯ × V_j, V^{k−j} = V_{j+1} × ⋯ × V_k, and p^j and p^{k−j} are the projections from V^k into V^j and from V^k into V^{k−j}, respectively.
PROOF. Let q = k − j. Since p^q · g(M × U) has at least two points, let v^q and v̂^q be two distinct points in p^q · g(M × U) such that v^q = p^q · g(m, u) and v̂^q = p^q · g(m̂, û). Suppose the system is noncohesive. Then, since (p^j · g(m, u), p^q · g(m̂, û)) belongs to the Cartesian set generated by g(M × U), there exists (m′, u′) ∈ M × U such that g(m′, u′) = (p^j · g(m, u), p^q · g(m̂, û)). Since the diagram is commutative, it follows that

F · p^j · g(m, u) = g(m′, u′) ≠ g(m, u) = F · p^j · g(m, u)
This is a contradiction. Q.E.D.

To derive necessary and sufficient conditions, more structure has to be added to the specification of the system. In particular, this can be achieved by introducing linear structure in the value object V. This, however, has to be done so that V can be recognized as a multicomponent set, which is necessary for the cohesiveness and algebraic controllability. There are two ways that this can be done.

(i) Let V be linear and P = (p_1, . . . , p_k) a family of projection operators, p_i: V → V_i, such that each V_i is a linear subspace and V is a direct sum, V = V_1 ⊕ ⋯ ⊕ V_k. Component sets of the direct sum V are now considered as the components of the value set V. Notice that, starting from a given linear value set V, the component sets and the number of those sets depend upon the family of projection operators P which, in general, is not unique for a given space, since there are many different ways to decompose a linear space into subspaces. Cohesiveness in V has to be defined, therefore, in reference to a given family of projections.
(ii) Let V = V_1 × ⋯ × V_k = V^k, where each component set V_i is a linear space. The set V is then a linear space too.
We can now give necessary and sufficient conditions for cohesiveness.
Theorem 2.2. Let V be a linear space on a field 𝒜 and g(M × U) a nonzero linear subspace of V. There exists a projection family for which the system is cohesive if and only if there exist a proper linear subspace V_1 ⊂ V and a function F: V_1 → V such that the diagram

           g
  M × U -----> V
       \       ^
        \      |
  p_1∘g  \     | F
          \    |
           > V_1

is commutative, where p_1 is the projection from V onto V_1.

PROOF. Let us consider the if part first. If (I − p_1) · g(M × U) has a nonzero element, it has at least two elements. Hence, the cohesiveness of the system follows from Theorem 2.1. Suppose (I − p_1) · g(M × U) = {o}, that is, g(M × U) ⊂ V_1. Let V̂_1 = g(M × U). Let V_2 be a linear subspace such that V = V̂_1 ⊕ V_2. Then another linear subspace V̂_1′ can be constructed by the same method used in Theorem 1.2, Chapter II, such that V = V̂_1′ ⊕ V_2 and V̂_1 ≠ V̂_1′. Consequently, there exists v̂ ≠ o such that v̂ ∈ V̂_1 and v̂ ∉ V̂_1′. Let the projection family of {V̂_1′, V_2} be {p̂_1, p̂_2}. Then p̂_2 v̂ ≠ o. Suppose the system is noncohesive with respect to {p̂_1, p̂_2}. Then, since for any α ∈ 𝒜 and α′ ∈ 𝒜, α p̂_1 v̂ ∈ V̂_1′ and α′ p̂_2 v̂ ∈ V_2, we have

α p̂_1 v̂ + α′ p̂_2 v̂ = α v̂ + (α′ − α) p̂_2 v̂ ∈ V̂_1

Since (α′ − α) p̂_2 v̂ ∈ V_2, we have a contradiction with respect to V = V̂_1 ⊕ V_2.

Let us consider the only if part. The relation V ⊃ ĝ(M × U) ⊃ g(M × U), where ĝ(M × U) is the Cartesian set generated by g(M × U), always holds. If the system is cohesive, there exists a nonzero element v̂ ∈ V such that v̂ ∉ g(M × U). Let V_2 = {α v̂ : α ∈ 𝒜}. Then, since g(M × U) is a linear subspace and hence V_2 ∩ g(M × U) = {o}, V_2 is a proper subspace of V. Then there exists another subspace V_1 such that V = V_1 ⊕ V_2 and g(M × U) ⊂ V_1. Let F: V_1 → V be such that F(v_1) = v_1. If v = g(m, u), then v ∈ V_1. Hence, p_1 · g(m, u) = g(m, u); that is, F · p_1 · g(m, u) = g(m, u). Q.E.D.
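In coordinates, the situation of Theorem 2.2 can be pictured with a hypothetical two-dimensional example: take V = R², let the image of g be the line {(a, 2a)}, let V_1 be the first coordinate axis, and let F lift V_1 back onto the line. The diagram then commutes, so a projection family exists for which the system is cohesive.

```python
def g(m, u):
    a = m + 2.0 * u           # any scalar combination of the inputs
    return (a, 2.0 * a)       # image is the line {(a, 2a)} in R^2

def p1(v):
    return v[0]               # projection onto V1 = first coordinate axis

def F(v1):
    return (v1, 2.0 * v1)     # factor map: lifts V1 back onto the line

# F o p1 o g = g on a sample of inputs
for m in (-1.0, 0.5, 3.0):
    for u in (0.0, 2.0):
        assert F(p1(g(m, u))) == g(m, u)
```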
Theorem 2.3. Let V^k be a finite-dimensional linear space on a field 𝒜, where each V_i is generated by v̂_i ≠ o; that is, V_i = {α v̂_i : α ∈ 𝒜}. Let g(M × U) be a linear subspace of V^k. The system then is cohesive if and only if there
exists a proper linear subspace V^j ⊂ V^k and a function F: V^j → V^k such that the diagram

           g
  M × U -----> V^k
       \        ^
        \       |
  p^j∘g  \      | F
          \     |
           > V^j

is commutative and p^{k−j} · g(M × U) has a nonzero element.
PROOF. The proof of the if part comes from Theorem 2.1. Let us prove the only if part. Let V′ = g(M × U) and V_i′ = p_i g(M × U). Since V′ is cohesive, V′ is a proper subset of V_1′ ⊕ ⋯ ⊕ V_k′. Then there exists one component set V_i′ such that V_i′ ⊄ V′, for otherwise V_1′ ⊕ ⋯ ⊕ V_k′ ⊂ V′. Hence, suppose V_k′ ⊄ V′. Then there exists one element v_k′ such that o ≠ v_k′ ∈ V_k′ and v_k′ ∉ V′. Since V_k is generated by v̂_k, we can assume v_k′ = v̂_k without losing generality. Let λ ⊂ V^{k−1} × V_k be such that

(v^{k−1}, v_k) ∈ λ ↔ (∃(m, u))(v^{k−1} = p^{k−1} · g(m, u) and v_k = p_k g(m, u))

We shall show that λ is a function, λ: 𝒟(λ) → V_k. Since V′ is a proper linear subspace, there exists a linear function f: V → 𝒜 such that f(v̂_k) = 1 and f(v′) = 0 if v′ ∈ V′. Suppose (v^{k−1}, v_k) ∈ λ and (v^{k−1}, v_k′) ∈ λ, where v_k = α v̂_k and v_k′ = α′ v̂_k. Then f(v^{k−1} + v_k) = 0 = f(v^{k−1} + v_k′). Consequently, α f(v̂_k) = α′ f(v̂_k), or α = α′. Hence v_k = v_k′; that is, λ is a function. Let F: V^{k−1} → V^k be such that F(v^{k−1}) = v^{k−1} + λ(v^{k−1}). Then

F · p^{k−1} · g(m, u) = p^{k−1} · g(m, u) + λ(p^{k−1} · g(m, u))
                      = p^{k−1} · g(m, u) + p_k g(m, u) = g(m, u)   Q.E.D.
Notice that no direct requirement on the linearity of g is imposed. Of course, the easiest way to satisfy the conditions of both Theorems 2.2 and 2.3 is when M and U are linear spaces and g is a linear function. Since by Proposition 1.3 cohesiveness precludes controllability, we have the following immediate corollaries.

Corollary 2.1. The set g(M × U) cannot be completely controllable if the conditions from Theorem 2.1 are satisfied.

Corollary 2.2. Suppose g(M × U) is a linear subspace of a linear space V. There exists a family of projections such that g(M × U) cannot be completely controllable if the conditions from Theorem 2.2 are satisfied.
Corollary 2.3. Suppose g(M × U) is a linear subspace of a finite-dimensional linear space V^k. g(M × U) cannot be completely controllable if the conditions from Theorem 2.3 are satisfied.

Corollary 2.4. Suppose g(M × {û}) is a linear subspace of a linear space V, where U = {û}. The set V cannot be completely controllable if and only if the conditions from Theorem 2.2 are satisfied.

PROOF. The proof of the if part comes from Corollary 2.2. Let us consider the only if part. Since V is not controllable, g(M × {û}) is a proper subspace of V. Let V_1 = g(M × {û}). Then there exists another subspace V_2 such that V = V_1 ⊕ V_2. Let F: V_1 → V be such that F(v_1) = v_1. Then we have the desired result. Q.E.D.
To appreciate the conceptual importance of the corollaries, it should be noticed that the function F: V^j → V^k satisfies the relation

p^{k−j} · F · p^j · g(m, u) = p^{k−j} · g(m, u)   (7.1)

i.e., p^{k−j} · F is the cohesiveness function. g is a function with a multicomponent range, and the existence of F means that there exists a relationship between these components; namely, if j components are fixed, the remaining (k − j) are determined by F. Statement (7.1) suggests then the following test for controllability. One starts from the set of equations

v_i = g_i(m, u),   i = 1, . . . , k   (7.2)

where g_i(m, u) = p_i · g(m, u). If it is possible to derive from (7.2) a set of equations of the form

v_{j+1} = F_1(v^j), . . . , v_k = F_{k−j}(v^j)

where v^j ∈ V^j and j < k, the set ĝ(M × U) cannot be controllable. In other words, a sufficient condition for noncontrollability is that it is possible to eliminate the inputs m and u from the systems equations (expressed in terms
of g) and to derive a relationship solely between the components of the value sets. In connection with this procedure, the following remarks should be noted:

(i) Only the proof of the existence of the functions F_1, . . . , F_{k−j} is needed in order to prove noncontrollability, the actual form of these functions being irrelevant. In this sense, the mapping F plays a similar role in the controllability theory as the Lyapunov functions in the stability theory.
(ii) Starting from a given system, S, the character of the function F depends upon G; e.g., when S is a differential equation system, F can be the relationship between the values of the solutions at a (or any) given time, e.g., F(y^j(t̂)) = y^k(t̂), or it might be the relationship between the entire functions, e.g., y^k = F(y^j), where y^j and y^k correspond to v^j and v^k.

Theorem 2.2 clearly indicates the role of linearity in the controllability theory. In essence, it makes sufficient conditions both necessary and sufficient. For linear systems, several other results can be derived, however. For example, under certain conditions it is sufficient to test controllability only with respect to a single element of U.
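The elimination test sketched above can be run on a hypothetical two-output example: with v_1 = m + u and v_2 = 2m + 2u, eliminating the inputs yields v_2 = F_1(v_1) = 2 v_1, so the value components are linked and the system cannot be completely controllable.

```python
def g(m, u):
    # v1 = g1(m, u), v2 = g2(m, u); the inputs enter both components the same way
    return (m + u, 2 * m + 2 * u)

F1 = lambda v1: 2 * v1          # the relation obtained by eliminating m and u

points = [g(m, u) for m in range(-3, 4) for u in range(-3, 4)]
# every attainable value satisfies v2 = F1(v1) ...
assert all(v2 == F1(v1) for v1, v2 in points)
# ... so a value violating the relation, e.g. (1, 3), is never attained
assert (1, 3) not in points
```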
Theorem 2.4. Let M be a monoid, V^k and U Abelian groups, and g such that g(m, u) = g¹(m) + g²(u), where g² is a homomorphism; i.e., g²(u + u′) = g²(u) + g²(u′). Let V̂′ = ∩{g(M, {u}) : u ∈ U}. If there exists u′ ∈ U such that V̂′[M × {u′}] is completely controllable, then V̂′[M × U] is completely controllable.
PROOF. Let v̂ be an arbitrary element of V̂′. Let v̂′ = v̂ + g²(u′). We shall show v̂′ ∈ V̂′. It follows from the definition of V̂′ that v̂_i′ − g_i²(u′) = v̂_i ∈ ∩_u g_i(M, u), where v̂_i′ = p_i v̂′, g_i²(u′) = p_i · g²(u′), v̂_i = p_i · v̂, and g_i(m, u) = p_i · g(m, u). Hence,

v̂_i′ ∈ ∩_u g_i(M, u) + g_i²(u′) = ∩_u g_i(M, u + u′) = ∩_u g_i(M, u)

which implies that v̂′ ∈ V̂′. Hence, if V̂′[M × {u′}] is completely controllable, v̂′ = v̂ + g²(u′) can be expressed as v̂′ = g(m̂, u′) for some (m̂, u′) ∈ M × {u′}; that is, v̂ = g¹(m̂) + g²(o). Consequently, V̂′[M × {o}] is completely controllable. Similarly, we can show that if V̂′[M × {o}] is completely controllable, V̂′[M × {u}] is completely controllable for any u ∈ U. Therefore, if V̂′[M × {u′}] is completely controllable for some u′ ∈ U, V̂′[M × U] is completely controllable. Q.E.D.
Although Theorem 2.4 is proven for Abelian groups rather than for linear algebras, it certainly holds for a linear algebra also since such an algebra is an Abelian group.
3. CONTROLLABILITY OF TIME SYSTEMS
(a) State-Space Controllability

In this section, we shall be concerned with a special kind of controllability suitable for dynamical systems. Specifically, the following assumptions will be made:
(i) S is a dynamical system with a canonical representation (φ̄, λ̄) and the associated state space C.
(ii) U = C and V = C.
(iii) The evaluation map g is defined in terms of the state-transition family in the manner indicated in the definitions of specific notions given below.

We shall be primarily concerned in this section with the following concepts.

Definition 3.1. A dynamical system is (state-space) completely controllable if and only if

(∀c)(∀ĉ)(∃x^t)[ĉ = φ_{0t}(c, x^t)]

Definition 3.2. A dynamical system is (state) controllable from the state c_0 if and only if

(∀ĉ)(∃x^t)[ĉ = φ_{0t}(c_0, x^t)]

For simplicity, when c_0 is the zero element of the linear space, the system will be referred to as (state) controllable.

Definition 3.3. A dynamical system is (state) controllable to the state c_0 or zero-state controllable if and only if

(∀c)(∃x^t)[c_0 = φ_{0t}(c, x^t)]

The importance of the above state-space controllability notions for dynamical systems is due to the canonical representation; namely, according to such a representation, the dynamic behavior of a system can be fully explored in terms of its state-transition family. The output of the system is determined by the output function, which is static and has no effect on the dynamic evolution of the system. It should also be noted that state-space controllability is the controllability-type notion which was historically introduced first and, as a matter of fact, the only one explored in detail in classical systems theory for the class of linear systems [8]. The concepts given in Definitions 3.1-3.3 are obviously rather close, but they are not equivalent even when the system is linear. To make these
concepts equivalent, additional conditions have to be assumed as shown in the following.
Proposition 3.1. Let S be a linear time-invariant dynamical system such that

(∀c)(∀x^t)(∀t′)(∃x̂^{t′})(c = φ_{0t}(o, x^t) → c = φ_{0t′}(o, x̂^{t′}))   (7.3)

(∀c)(∀x^t)(∀t′)(∃x̂^{t′})(o = φ_{0t}(c, x^t) → o = φ_{0t′}(c, x̂^{t′}))   (7.4)

(∀t)(∀c′)(∃c)(c′ = φ_{0t}(c, o))   (7.5)
Then the three definitions of state-space controllability are equivalent, where c_0 is the zero element o of the linear space C.

PROOF
(i) Definition 3.2 → Definition 3.3: Let c be arbitrary. Let c′ = φ_{10t}(c). Since the system is state controllable, c′ = φ_{20t̂}(x̂^{t̂}) for some x̂^{t̂}. However, condition (7.3) implies that c′ = φ_{20t}(x^t) for some x^t. Hence,

o = φ_{10t}(c) + φ_{20t}(−x^t) = φ_{0t}(c, −x^t)

Hence, Definition 3.3 is satisfied.
(ii) Definition 3.3 → Definition 3.2: Let c′ be arbitrary. It follows from condition (7.5) that c′ = φ_{10t}(c) for some c ∈ C. Since the system is controllable to the state o, o = φ_{0t̂}(c, x̂^{t̂}) for some x̂^{t̂}. Then o = φ_{0t}(c, x^t) for some x^t, which comes from condition (7.4). Since

o = φ_{10t}(c) + φ_{20t}(x^t) = c′ + φ_{20t}(x^t)

we have c′ = φ_{0t}(o, −x^t).
(iii) Definitions 3.2 and 3.3 → Definition 3.1: Let c and c′ be arbitrary. Since the system is state controllable and simultaneously controllable to the state o, o = φ_{0t}(c, x^t) and c′ = φ_{0t′}(o, x̂^{t′}) hold for some x^t and x̂^{t′}. Since the system is time invariant, c′ = φ_{tt″}(o, F^t(x̂^{t′})), where t″ = t′ + t. Notice that it follows from the properties of a dynamical system that o = φ_{0t″}(c, x^t · o) and c′ = φ_{tt″}(o, F^t(x̂^{t′})) = φ_{0t″}(o, o · F^t(x̂^{t′})). (Refer to Chapter IV.) Hence, c′ = φ_{0t″}(c, x^t · F^t(x̂^{t′})).
(iv) Definition 3.1 → Definition 3.2: This is obvious. Q.E.D.
The desired result comes from the combination of (i), (ii), (iii), and (iv). As will be shown later, conditions (7.3), (7.4), and (7.5) follow from some basic properties of linear systems. Apparently, conditions (7.3), (7.4), and (7.5) are not unique for making the three definitions equivalent. However, it can be proven that those conditions are satisfied by usual linear time-invariant differential equation systems,
while condition (7.5) may not be satisfied by a linear time-invariant difference equation. This fact shows one point of difference in the system's behavior between a differential equation system and a difference equation system. The state space of a dynamical system most often has several components, and it is meaningful to talk about algebraic-type state-space controllability, i.e., about the interdependence between the state components implied by the system. Let M = ∪_{t∈T} X^t and ψ: C × M → C be such that

ψ(c, x^t) = φ_{0t}(c, x^t)
Then Definitions 3.1-3.3 become

(i) complete controllability: (∀c)(∀ĉ)(∃m)(ĉ = ψ(c, m))
(ii) controllability from c_0: (∀ĉ)(∃m)(ĉ = ψ(c_0, m))
(iii) controllability to c_0: (∀c)(∃m)(c_0 = ψ(c, m))
Apparently, ψ takes the role of g in Section 1. The basic conditions for cohesiveness and algebraic controllability, as given in Theorems 2.1 and 2.2, then apply directly here; specifically, Corollaries 2.1 and 2.2 take on the following form.

Corollary 3.1. Let S be a dynamical system (ρ̄, φ̄) with a multicomponent state space C^k = C_1 × ⋯ × C_k. If there exist a positive integer j < k and a function F: C^j → C^k such that for every t ∈ T the diagram

           ψ
  C × M -----> C^k
       \        ^
        \       |
  p^j∘ψ  \      | F
          \     |
           > C^j

is commutative and p^{k−j} ψ(C × M) has at least two elements, the system cannot be completely controllable in C^k, where C^j = C_1 × ⋯ × C_j.
Corollary 3.2. Let S be a linear time-invariant dynamical system. Then the system is not state controllable in C if and only if there exist a proper linear subspace C_1 in the state space C and a function F: C_1 → C such that for
every t ∈ T the diagram

             ψ
  {o} × M -----> C
        \        ^
         \       |
   p_1∘ψ  \      | F
           \     |
            > C_1

is commutative.

PROOF. We shall show that ψ({o} × M) is a linear subspace of C. Let c = ψ(o, x^t) and ĉ = ψ(o, x̂^{t′}) be arbitrary. Then, since φ_{0t} is linear, αc = ψ(o, αx^t). Suppose t′ ≥ t. Let t′ = t + τ. Then, since φ_{0t} is linear and time invariant, c = φ_{0t}(o, x^t) = φ_{0t′}(o, o · F^τ(x^t)), and hence

c + ĉ = φ_{0t′}(o, o · F^τ(x^t) + x̂^{t′}) = ψ(o, o · F^τ(x^t) + x̂^{t′})
Consequently, ψ({o} × M) is a linear subspace. Then Corollary 2.4 is directly applicable to the present case except that, in Corollary 3.2, (I − p_1)ψ({o} × M) may not have a nonzero element. However, if (I − p_1)ψ({o} × M) does not have a nonzero element, ψ({o} × M) is apparently included in a proper subspace C_1, and hence the system is not controllable. Q.E.D.

Corollary 3.2 can be reformulated using the linear space properties of C. When the state space C is a linear algebra on a field 𝒜, a linear functional f^c on C is a linear function f^c: C → 𝒜. Let F^c be the class of all linear functionals on C. An addition and a scalar multiplication by the elements from 𝒜 can be defined on F^c as

(f^c + f̂^c)(c) = f^c(c) + f̂^c(c),   and   (αf^c)(c) = α · f^c(c)

If f^c(c) = o for every c ∈ C, f^c will be denoted by o. Furthermore, since

f^c(c + c′) = f^c(c) + f^c(c′)   and   f^c(αc) = α f^c(c)

hold, F^c itself can be considered a linear algebra on the field 𝒜, and hence F^c will be referred to as the algebraic conjugate space of C. The following lemma is basic for the later arguments [9].
Lemma 3.1. Let C_1 be a proper subspace of the linear space C, and suppose c_1 ∈ C∖C_1. Then there exists an element f^c ∈ F^c such that f^c(c) = o if c ∈ C_1 and f^c(c_1) = 1, where 1 is the unit element of 𝒜.

We can now give another necessary and sufficient condition for the state-space controllability of linear dynamical systems.
Theorem 3.1. Let S be a linear time-invariant dynamical system and F^c the algebraic conjugate space of C. The system is then state controllable if and only if, for any f^c ∈ F^c,

(∀x^t)[f^c(φ_{20t}(x^t)) = o] → [f^c = o]   (7.6)

PROOF. As shown in Corollary 3.2,

C_1 = {c : c = φ_{20t}(x^t) & x^t ∈ X^t & t ∈ T}

is a linear subspace of the state space C. Then it follows from Lemma 3.1 that the system is not state controllable iff C_1 is a proper subspace of C, i.e., iff

(∃f^c)(f^c ≠ o & (∀x^t)(f^c(φ_{20t}(x^t)) = o))

Notice that it follows from the definition that f^c ≠ o implies (∃c_1 ∈ C)(f^c(c_1) ≠ o). Q.E.D.
Application of Theorem 3.1 yields directly the well-known conditions for the controllability of linear differential equation systems. Namely, let the state-transition family be defined by a linear vector-matrix differential equation

dz/dt = Fz + Gx,   y = Hz   (7.7)

where F, G, and H are time invariant. Condition (7.6) from the preceding theorem then takes on the following form. The system is not state controllable if and only if

(∃f^c ≠ o)(∀x^t)(f^c(φ_{20t}(x^t)) = o)

Using the explicit expression for the state transition obtained from Eq. (7.7) and representing f^c by a vector ξ, the condition becomes

∫₀ᵗ ξᵀ e^{F(t−σ)} G x(σ) dσ = 0

for any t ∈ T and x, with ξ ≠ 0; that is,

(∀t ∈ T)(ξᵀ e^{Ft} G = 0)

This, finally, holds if and only if

rank [G, FG, . . . , F^{n−1}G] < n

where n is the dimension of F, which is a well-known condition for the controllability of differential equation systems [10].
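The rank test can be exercised on a hypothetical two-state example (n = 2), where a 2 × 2 controllability matrix has full rank exactly when its determinant is nonzero; a pure-Python sketch:

```python
F = [[0.0, 1.0],
     [0.0, 0.0]]          # double-integrator state matrix
G = [0.0, 1.0]            # input column driving the second state

# controllability matrix [G, FG]
FG = [sum(F[i][k] * G[k] for k in range(2)) for i in range(2)]
det = G[0] * FG[1] - FG[0] * G[1]
assert det != 0           # rank 2: state controllable

G2 = [1.0, 0.0]           # input that only drives the first state
FG2 = [sum(F[i][k] * G2[k] for k in range(2)) for i in range(2)]
det2 = G2[0] * FG2[1] - FG2[0] * G2[1]
assert det2 == 0          # rank < 2: not state controllable
```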
Regarding zero-state controllability, Theorem 3.1 can be generalized to systems that are time varying.

Theorem 3.2. Let S be a linear dynamical system. S is zero-state controllable if and only if, for any f^c ∈ F^c,

(∀c_1)[c_1 ∈ C_1 → f^c(c_1) = o] → [f^c = o]

where

C_1 = {c : (∃x^t)(φ_{0t}(c, x^t) = o)}
PROOF. We shall show that C_1 is a linear subspace. Suppose φ_{0t}(c, x^t) = o. Then φ_{0t}(αc, αx^t) = o. Suppose φ_{0t}(c, x^t) = o and φ_{0t′}(c′, x̂^{t′}) = o, with t′ ≥ t. Then, since

φ_{0t′}(c, x^t · o) = φ_{tt′}(φ_{0t}(c, x^t), o) = φ_{tt′}(o, o) = o

we have

φ_{0t′}(c + c′, x^t · o + x̂^{t′}) = o

Hence, C_1 is a linear subspace of C. The desired result follows by the same argument as used in Theorem 3.1. Q.E.D.
Hence, C₀ is a linear subspace of C. The desired result follows by the same argument as was used in Theorem 3.1. Q.E.D.

(b) Output-Functional Controllability

In this section, we shall again be concerned with a dynamical system, but with the case when the system is evaluated by its entire output function rather than by its state at a certain time. Specifically, the following assumptions will be made to particularize the concept of reproducibility:

(i) S is a dynamical system.
(ii) U is the initial state object, U = C₀, while V is the output object, V = Y, and M = X.
(iii) g is the initial-response function, ρ₀ : C₀ × X → Y.
We shall illustrate the application of the general results from Section 2 only for the concept of cohesiveness.

Definition 3.4. Let S be a dynamical, multivariable system, Y = Y₁ × ⋯ × Yₖ = Y^k. S is functionally output cohesive if and only if ρ₀(C₀ × X) is not a Cartesian set.

Conditions for cohesiveness now follow directly from the general conditions given in Section 2.

Corollary 3.3. Let S be a multivariable dynamical system, Y = Y₁ × ⋯ × Yₖ = Y^k. If there exists a positive integer j < k and a function F : Y^j → Y^k
such that the diagram

[commutative diagram: ρ₀ : C₀ × X → Y^k factored through Y^j; not legible in this scan]

is commutative and p^{k−j}ρ₀(C₀ × X) has more than two elements, the system is functionally output cohesive.

Corollary 3.4. Let S be a linear time-invariant dynamical system. Then the system is functionally output cohesive if and only if there exists a proper linear subspace Y′ ⊂ Y and a function F : Y′ → Y such that the diagram
[commutative diagram: ρ₀ : C × X → Y factored through Y′; not legible in this scan]

is commutative and (I − p₁)ρ₀(C × X) has a nonzero element, where p₁ is the projection operator p₁ : Y → Y′. Further application of the general results leads to the well-known results on functional reproducibility for linear differential equation systems [11].

4. OVERVIEW OF SOME BASIC LINEAR TIME SYSTEMS PROPERTIES RELATED TO CONTROLLABILITY
The existence of various auxiliary functions and of various systems properties depends upon the conditions that the systems-response family has to satisfy. For example, realizability depends upon (P1)-(P4) from Chapter III, Section 1, while state-space controllability for linear systems depends upon the conditions given in Theorem 3.1. It is of interest to provide an overview of all the conditions considered so far that involve the response-function family, in particular for the class of linear time-invariant systems, where they can be expressed in terms of the state- and input-response
families separately. Table 7.1 is then a list of such conditions, either considered so far or of potential interest when studying some additional properties. The table also indicates what system property depends upon the given conditions.

TABLE 7.1
BASIC PROPERTIES OF THE SYSTEMS-RESPONSE FUNCTION
(the "Form" column and part of the P2 entry are not legible in this scan)

Number   Name
P1       existence of φ₁₀ₜ
P2       existence of —
P3       controllability
P4       strong controllability
P5       zero-state controllability
P6       strong zero-state controllability
P7       free-response completeness
P8′      finite connectedness from zero
P8″      finite connectedness to zero
P9       state-response consistency
P10      strong complete controllability
P11      observability
P12      past-determinacy
P13      complete controllability
P14      reducibility
P15′     left analyticity
P15″     right analyticity
P16      finite-dimensional state space
P17′     nonanticipation
P17″     strong nonanticipation
Most of the above properties are intimately related to the state transition. Indeed, the meaning of any property listed in Table 7.1 can be grasped more easily when it is expressed in terms of state transitions than in terms of a response family. In particular, if ρ₁ₜ has an inverse, or if ρ₁ₜ is reduced, the above properties can be described on the state space as given in Table 7.2.
TABLE 7.2
BASIC PROPERTIES OF THE STATE-TRANSITION FAMILY
(the Number column, P1-P17″, is as in Table 7.1; the "Form" column is not legible in this scan)
Obviously, the properties listed in Table 7.2 are interdependent; while some are basic, others are induced by the primary ones. In order to clarify the interrelationships among them, we shall first show that P14-P16 induce past-determinacy, finite connectedness, and free-response completeness. The relations among past-determinacy, strong nonanticipation, and left analyticity are given in Chapter V.
Proposition 4.1. Suppose a time-invariant linear dynamical system (ρ, φ) is reduced. Then, if its state space C is finite dimensional (P16), it satisfies finite connectedness from zero (P8′).

PROOF. Let

C₀ = {c : ρ₁ₜ(c) = ρ₂₀(x^t·o) | Tₜ & c ∈ C & x^t ∈ X^t & t ∈ T}

C₀ is the set of states reachable from the origin. We shall show that C₀ is a linear subspace of C. Let c ∈ C₀ and α ∈ 𝒜 be arbitrary, where ρ₁ₜ(c) = ρ₂₀(x^t·o) | Tₜ. Since ρ is linear, ρ₁ₜ(αc) = ρ₂₀(α(x^t)·o) | Tₜ holds; that is, c ∈ C₀ → αc ∈ C₀. Let ĉ ∈ C₀ be arbitrary, where ρ₁ₜ̂(ĉ) = ρ₂₀(x̂^t̂·o) | Tₜ̂. Let t′ = t + t̂. Then

ρ₁ₜ′(c) = ρ₂₀(o·F^t̂(x^t)·o) | Tₜ′    (input-response consistency)

Similarly, we have that ρ₁ₜ′(ĉ) = ρ₂₀(o·F^t(x̂^t̂)·o) | Tₜ′. Consequently, we have that

ρ₁ₜ′(c + ĉ) = ρ₂₀((o·F^t̂(x^t) + o·F^t(x̂^t̂))·o) | Tₜ′

that is, c + ĉ ∈ C₀. Hence, C₀ is a linear subspace of C. Since C is finite dimensional, C₀ ⊂ C is also finite dimensional. Let {c₁, …, cₙ} be a basis of C₀, which is assumed n-dimensional, where for each i, ρ₁ₜᵢ(cᵢ) = ρ₂₀(xᵢ^{tᵢ}·o) | Tₜᵢ. Let t̂ = max{t₁, …, tₙ}. Then for each i,

ρ₁ₜ̂(cᵢ) = ρ₂₀(o·F^{t̂−tᵢ}(xᵢ^{tᵢ})·o) | Tₜ̂

holds. Let c ∈ C₀ be arbitrary. Then c = Σᵢ₌₁ⁿ λᵢcᵢ for some λᵢ ∈ 𝒜 (i = 1, …, n). Consequently,

ρ₁ₜ̂(c) = ρ₂₀((Σᵢ₌₁ⁿ λᵢ(o·F^{t̂−tᵢ}(xᵢ^{tᵢ})))·o) | Tₜ̂

holds; that is, every element of C₀ is reachable from the origin within the fixed time t̂, which is P8′.
Q.E.D.

Proposition 4.2. Suppose a time-invariant linear dynamical system (ρ, φ) is reduced. Then, if its state space C is finite dimensional (P16), it satisfies finite connectedness to zero (P8″).

PROOF. Let

C₀ = {c : ρ₀(c, x^t·o) | Tₜ = o & c ∈ C & x^t ∈ X^t & t ∈ T}

We shall show that C₀ is a linear subspace of C. Let α ∈ 𝒜 and c ∈ C₀ be arbitrary, where ρ₀(c, x^t·o) | Tₜ = o. Then ρ₀(αc, α(x^t)·o) | Tₜ = o; that is, c ∈ C₀ → αc ∈ C₀. Let ĉ ∈ C₀ be arbitrary, where ρ₀(ĉ, x̂^t̂·o) | Tₜ̂ = o. Let t′ = t + t̂. Then, since
ρ₀(c, x^t·o) | Tₜ′ = o and ρ₀(ĉ, x̂^t̂·o) | Tₜ′ = o hold, we have

ρ₀(c + ĉ, x^t·o + x̂^t̂·o) | Tₜ′ = o

that is, c + ĉ ∈ C₀. Let {c₁, …, cₙ} be a basis of C₀ as before, where ρ₀(cᵢ, xᵢ^{tᵢ}·o) | Tₜᵢ = o (i = 1, …, n). Let t̂ = max{t₁, …, tₙ}. Then ρ₀(cᵢ, xᵢ^{tᵢ}·o) | Tₜ̂ = o. Consequently, for c = Σᵢ₌₁ⁿ λᵢcᵢ ∈ C₀,

ρ₀(c, (Σᵢ₌₁ⁿ λᵢ(xᵢ^{tᵢ}·o))) | Tₜ̂ = o

holds; that is, every element of C₀ can be transferred to the origin within the fixed time t̂, which is P8″. Q.E.D.
Finite connectedness (P8) does not hold only for linear systems. A finite automaton is a typical example of a nonlinear system that possesses this property. However, one can construct a linear system with an infinite-dimensional state space that does not satisfy this property. In this sense, finite connectedness can be considered a basic property of "finite" state systems.
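The role of finite dimensionality can be seen in a hypothetical discrete-time sketch (not the book's formalism): with x[k+1] = A x[k] + B u[k], the subspace reachable from the origin is span[B, AB, A²B, …], and in an n-dimensional state space it stops growing after at most n steps. That stopping index is the uniform time bound t̂ behind Propositions 4.1 and 4.2.

```python
# Reachable-subspace stabilization: the span of [B, AB, ..., A^(k-1)B]
# is stationary for all k >= n when the state space is n-dimensional.
from fractions import Fraction

def rank(M):
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def steps_to_stabilize(A, B):
    # first k at which span[B, ..., A^(k-1)B] stops growing
    n, blocks, P, prev = len(A), [], B, -1
    for k in range(2 * n + 1):
        blocks.append(P)
        r = rank([sum((blk[i] for blk in blocks), []) for i in range(n)])
        if r == prev:
            return k
        prev, P = r, mat_mul(A, P)

# a 3-dimensional chain of integrators: the span grows for three steps
# and is then stationary forever
A = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
B = [[0], [0], [1]]
assert steps_to_stabilize(A, B) <= len(A)
```

An infinite-dimensional state space offers no such bound, which is why finite connectedness can fail there.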
Proposition 4.3. Suppose a time-invariant linear dynamical system (ρ, φ) is reduced. If its state space is finite dimensional and the system is right analytic, then it satisfies free-response completeness.

PROOF. Let φ₁₀ₜ(c) = o. Then

φ₁₀ₜ(c) = o → ρ₁ₜ(φ₁₀ₜ(c)) = o → ρ₁₀(c) | Tₜ = o → ρ₁₀(c) = o → c = o

where the third implication follows from right analyticity and the last one from reducedness. That is, φ₁₀ₜ : C → C is a one-to-one mapping. Furthermore, since C is finite dimensional, φ₁₀ₜ must be a one-to-one correspondence; that is, φ₁₀ₜ⁻¹ exists. Q.E.D.

The above result is well known for a constant-coefficient linear ordinary differential equation system. The traditional proof, however, is based on a purely mathematical argument, namely the convergence of an operator sequence, rather than on system-theoretic arguments. Propositions 4.1-4.3 result in the following corollaries with the help of Proposition 3.1.
Corollary 4.1. Suppose a time-invariant finite-dimensional linear dynamical system (ρ, φ) is reduced and satisfies right analyticity. Then the three kinds of state-space controllability introduced in Section 3 are mutually equivalent.
Corollary 4.2. The three kinds of state-space controllability are mutually equivalent for a constant-coefficient linear ordinary differential equation system, i.e.,

dc/dt = Fc + Gx

where F and G are matrices, while x and c are vectors.
The interrelationships are summarized in a diagram (not reproduced in this scan). The interpretation of the diagram is straightforward: an arrow denotes which conditions can be derived starting from given assumptions. For instance, the diagram shows that if a linear response family {ρₜ} satisfies P3 and P8, then P4 is satisfied also; or when P9, P4, and P1 are given, P10 can be derived, etc. The essential independent assumptions on which the diagram is based are the following: (1) realizability, (2) analyticity, (3) finite-dimensional state space, (4) past-determinacy. They can therefore be considered the most basic properties for linear dynamical systems. Any other property listed can be derived from some of these basic assumptions. We shall prove some of the relationships from the diagram.
P3 & P8 → P4: Let c′ be arbitrary and t̂ be given by P8. Then

P3 → (∃t)(∃x^t)(ρ₁ₜ(c′) = ρ₂₀(x^t·o) | Tₜ)

and

P8 → (∃x̂^t̂)(ρ₁ₜ̂(c′) = ρ₂₀(x̂^t̂·o) | Tₜ̂)

Therefore,

(∃t̂)(∀c′)(∃x̂^t̂)(ρ₁ₜ̂(c′) = ρ₂₀(x̂^t̂·o) | Tₜ̂)

which is P4.

P4 & P1 → P6: Let c be arbitrary and t̂ be given by P4. Then
P1 → (∃c′)(ρ₁ₜ̂(c′) = ρ₁₀(c) | Tₜ̂)

Then

P4 → (∃x^t)(t ≤ t̂ & ρ₁ₜ(c′) = ρ₂₀(x^t·o) | Tₜ)

On the other hand, since ρ is time invariant, for τ = t̂ − t,

F^τ(ρ₁ₜ(c′)) = ρ₁ₜ̂(c′) = F^τ(ρ₂₀(x^t·o) | Tₜ) = (F^τ(ρ₂₀(x^t·o))) | Tₜ̂ = (ρ₂₀(F^τ(x^t·o))) | Tₜ̂ = ρ₂₀(o·F^τ(x^t)·o) | Tₜ̂

Hence,

(∃t̂)(∀c)(∃t)(∃x^t)(t ≤ t̂ & ρ₁₀(c) | Tₜ = ρ₂₀(x^t·o) | Tₜ)
which is P6.

P7 & P6 → P4: Let c′ be arbitrary and t̂ = (t̂ of P6) + (t̂ of P7). Then, since ρ is time invariant,

P6 → (∃x^t)(ρ₁₀(c) | Tₜ = ρ₂₀(x^t·o) | Tₜ)

Hence,

(∃t̂)(∀c′)(∃t)(∃x^t)(t ≤ t̂ & ρ₁ₜ(c′) = ρ₂₀(x^t·o) | Tₜ)
which is P4.

P6 & P4 → P10: Let t̂ = (t̂ of P4) + (t̂ of P6). Let c and c′ be arbitrary. Let t′ = (t̂ of P6). Then

P6 → (∃x″)(o = ρ₁₀(c) | Tₜ′ + ρ₂₀(x″·o) | Tₜ′)

Let t″ = (t̂ of P4). Since ρ is time invariant,

P4 → (∃x‴)(ρ₁ₜ″(c′) = ρ₂₀(x‴·o) | Tₜ″)

Let x^t̂ = x″·F^{t′}(x‴). Then

(ρ₁₀(c) + ρ₂₀(x^t̂·o)) | Tₜ̂ = ((ρ₁₀(c) + ρ₂₀(x″·o)) | Tₜ′) + ρ₂₀(o·F^{t′}(x‴)·o) | Tₜ̂ = ρ₂ₜ′(F^{t′}(x‴)·o) | Tₜ̂ = ρ₂₀(x‴·o) | Tₜ″ = ρ₁ₜ″(c′)

Therefore, for any c and c′ there is an input x^t̂ of fixed length taking the response of c into that of c′, which is P10.
Chapter VIII

MINIMAL REALIZATION
Starting from the primary concept of a system defined on the input and output objects, we can introduce many different dynamical realizations, with reference to a variety of state spaces. It is of practical interest to find a state space that is the "smallest" in an appropriate sense and to have a procedure to construct such a space starting from some initial information about the system. This is the problem considered in this chapter. Several different concepts of minimal realization are used in specialized systems theories, and we first introduce the related set of concepts in our general framework in a precise manner. The relationships between these concepts are then investigated, and conditions are given for the existence of such minimal realizations. The uniqueness of some minimal realizations is also established. It is also shown how minimal realizations can be characterized by systems properties, in particular controllability and reducibility.

1. CONCEPTS OF MINIMAL REALIZATIONS
A given time system S can have many different dynamical realizations, and two pairs (ρ, φ) and (ρ̂, φ̂) can be dynamical realizations of the same system. Among the class of dynamical realizations, it is of interest to find out which are equivalent in an appropriate sense, which have some special property, and in particular which are the "simplest" or the "smallest" in a given sense. This leads to the concern for a minimal realization. Since the dynamic behavior of a system is described in terms of changes of the states, the minimality of a realization refers to the state space itself.
Actually, there are two distinct concepts of state-space minimality used in different branches of systems theory, namely:

(i) In automata theory, a realization is minimal if the state space is of the smallest cardinality [12].
(ii) In dynamical systems theory or control theory, a minimal realization has the smallest number of state variables [5].

Both of these concepts refer to the state space and use similar terminology, although they are apparently quite distinct. In order to develop a general theory of minimal realizations (which will include the specialized cases considered earlier), we shall first have to consider quite carefully the concept of minimality itself. A precise procedure to introduce a concept of minimal realization consists of three steps. First, a class 𝒮_D of systems of interest is identified, e.g., stationary, linear, finite in a given sense, etc. Since a system is described as a rule by means of some auxiliary functions, the systems of interest are defined in terms of a class of auxiliary functions of a certain kind. Second, an equivalence relation is established within the class of interest. Third, an ordering is defined on the equivalence classes, with respect to which the minimality of a realization is identified. We shall consider three types of equivalences.

Definition 1.1. Let 𝒮_D be a class of dynamical realizations. Two realizations (ρ, φ), (ρ̂, φ̂) ∈ 𝒮_D are input-output equivalent if and only if

S₀^ρ = S₀^ρ̂

that is,

(∀c)(∀x)(∃ĉ)[ρ₀(c, x) = ρ̂₀(ĉ, x)] & (∀ĉ)(∀x)(∃c)[ρ₀(c, x) = ρ̂₀(ĉ, x)]
Definition 1.2. Let 𝒮_D be a class of dynamical realizations. Two realizations (ρ, φ), (ρ̂, φ̂) ∈ 𝒮_D are response equivalent if and only if

(∀c)(∃ĉ)(∀x)[ρ₀(c, x) = ρ̂₀(ĉ, x)] and (∀ĉ)(∃c)(∀x)[ρ₀(c, x) = ρ̂₀(ĉ, x)]

The third type of equivalence refers to the input part of the response function. For a linear system, the response function can be decomposed into the state-response part (for zero input) and the input-response part (for zero initial state). Generalizing this idea for any c₀, the function ρ₀(c₀, ·) : X → Y will be referred to as the c₀ input response. To identify the c₀ in question, we shall refer to it as the reference point, since in interpretation it has, as a rule, a special role. We have then the following definition.
Definition 1.3. Let 𝒮_D be a class of dynamical realizations, (ρ, φ), (ρ̂, φ̂) ∈ 𝒮_D, and c₀, ĉ₀ the reference points of (ρ, φ) and (ρ̂, φ̂), respectively. (ρ, φ) and (ρ̂, φ̂) are input-response equivalent if and only if

ρ₀(c₀, ·) = ρ̂₀(ĉ₀, ·)

that is,

(∀x)[ρ₀(c₀, x) = ρ̂₀(ĉ₀, x)]
The first type of equivalence is apparently the most general, but the two other types are of special interest in practice. For example, for the class of linear systems, if one is primarily interested in the steady state, i.e., when the transient due to the initial condition is practically negligible, the system can be described solely in terms of the input response. Actually, the analysis that uses Laplace and other transform techniques is based on this premise. We shall consider two types of orderings.
Definition 1.4. Let 𝒮_DE be an equivalence class of dynamical systems, (ρ, φ), (ρ̂, φ̂) ∈ 𝒮_DE, and C, Ĉ the corresponding state spaces with the cardinalities K(C) and K(Ĉ), respectively. A relation ≤ in 𝒮_DE is then defined by

(ρ, φ) ≤ (ρ̂, φ̂) ↔ K(C) ≤ K(Ĉ)
Definition 1.5. Let 𝒮_DE be an equivalence class of dynamical systems, (ρ, φ), (ρ̂, φ̂) ∈ 𝒮_DE, and C, Ĉ the corresponding state spaces. A relation ≼ in 𝒮_DE is then defined by

(ρ, φ) ≼ (ρ̂, φ̂) ↔ there exists an epimorphism h : Ĉ → C

It can be readily shown that ≼ is a partial ordering, where equality is understood as

(ρ, φ) = (ρ̂, φ̂) ↔ [(ρ, φ) ≼ (ρ̂, φ̂) & (ρ̂, φ̂) ≼ (ρ, φ)]
If a state space has no algebraic structure, the ordering relation ≤ can be considered a special case of the ordering relation ≼, since in such a case any surjection can be considered an epimorphism. The ordering ≤ is of the type used in automata theory, while the ordering ≼ is used in control theory, where the minimal realization problem is considered with respect to Euclidean spaces Eⁿ. The ordering in the latter case is determined according to the dimensions of the spaces involved, and since for any two positive integers m and n such that m < n there is a linear surjection from Eⁿ to Eᵐ but not conversely, the ordering relation is of the type given in Definition 1.5. Using different types of equivalences with different types of ordering, one has the following six minimality concepts.
Definition 1.6. (ρ, φ) is a minimal state-space realization if and only if for any dynamical system (ρ̂, φ̂) in the same class 𝒮_D, the following holds:

S₀^ρ = S₀^ρ̂ → K(C) ≤ K(Ĉ)
Definition 1.7. (ρ, φ) is a minimal dimension realization if and only if for any dynamical system (ρ̂, φ̂) in the same class 𝒮_D, the following holds:

S₀^ρ = S₀^ρ̂ → [(ρ̂, φ̂) ≼ (ρ, φ) → (ρ, φ) ≼ (ρ̂, φ̂)]

Definition 1.8. (ρ, φ) is a minimal state-space response realization if and only if for any dynamical system (ρ̂, φ̂) in the same class 𝒮_D, the following holds:

(∀c)(∃ĉ)(∀x)(ρ₀(c, x) = ρ̂₀(ĉ, x)) & (∀ĉ)(∃c)(∀x)(ρ₀(c, x) = ρ̂₀(ĉ, x)) → K(C) ≤ K(Ĉ)
Definition 1.9. (ρ, φ) is a minimal dimension response realization if and only if it satisfies the following: For any dynamical system (ρ̂, φ̂) in the same class 𝒮_D, if there exists an epimorphism h : C → Ĉ such that ρ₀(c, x) = ρ̂₀(h(c), x) for every c and x, i.e., such that the corresponding diagram is commutative, then there exists an epimorphism ĥ : Ĉ → C such that ρ̂₀(ĉ, x) = ρ₀(ĥ(ĉ), x) for every ĉ and x, i.e., such that the corresponding diagram is commutative.

Definition 1.10. (ρ, φ) is a minimal state-space input-response realization if and only if for any dynamical system (ρ̂, φ̂) in the same class 𝒮_D, the following holds:

(∀x)(ρ₀(c₀, x) = ρ̂₀(ĉ₀, x)) → K(C) ≤ K(Ĉ)

where c₀ and ĉ₀ are the reference points of C and Ĉ, respectively.
Definition 1.11. (ρ, φ) is a minimal dimension input-response realization if and only if for any dynamical system (ρ̂, φ̂) in the same class 𝒮_D, the following holds:

(∀x)(ρ₀(c₀, x) = ρ̂₀(ĉ₀, x)) → [(ρ̂, φ̂) ≼ (ρ, φ) → (ρ, φ) ≼ (ρ̂, φ̂)]

Definition 1.8 is used in automata theory, while in linear control theory the concepts of Definitions 1.7 and 1.11 are used. The interrelationships between the various minimality concepts will be clarified in the next section.
2. CHARACTERIZATION OF THE MINIMAL REALIZATION OF STATIONARY SYSTEMS
In this section, the characterization of various minimal realizations will be considered, using the concepts of controllability and reducibility. Controllability and reducibility are defined in Definition 3.1, Chapter VII, and Definition 2.8, Chapter II, respectively, as follows.

Definition 2.1. Let (ρ, φ) be a time-invariant dynamical system, where c₀ ∈ C is the reference point. Then (ρ, φ) is controllable if and only if

(∀c)(∃x^t)(c ∈ C → c = φ₀ₜ(c₀, x^t))
As mentioned in the preceding section, if the state space C is linear, the origin is taken as the reference point.

Definition 2.2. Let (ρ, φ) be a time-invariant dynamical system. Then (ρ, φ) is reduced if and only if for any c and c′

(∀x)(ρ₀(c, x) = ρ₀(c′, x)) → c = c′

Various forms of "finiteness" of a state space play an important role in minimal realization theory. We shall, therefore, introduce two types of finite systems.

Definition 2.3. Let (ρ, φ) be a dynamical system. (ρ, φ) is a finite dynamical system if and only if the state space of (ρ, φ) is a finite set.

Definition 2.4. Let (ρ, φ) be a dynamical system with the linear state space C on the field 𝒜. (ρ, φ) is finite dimensional if and only if C has a finite base, i.e., there exists a set of k elements {ĉ₁, …, ĉₖ} in C such that for any c ∈ C there exists a unique set of elements of the field 𝒜, α₁, …, αₖ ∈ 𝒜,
such that c = α₁ĉ₁ + ⋯ + αₖĉₖ.
On the basis of the above definitions, we can now characterize minimal realizations of a system.

Proposition 2.1. Let (ρ, φ) be a time-invariant finite linear dynamical system. The system (ρ, φ) is a minimal realization in the sense of Definition 1.6 if and only if (ρ, φ) is reduced.

PROOF. We shall consider the only if part first. Suppose (ρ, φ) is not reduced. Then the equivalence (congruence) relation E ⊂ C × C, where

(c, c′) ∈ E ↔ (∀x)(ρ₀(c, x) = ρ₀(c′, x))

is not trivial. Let a new function ρ̂ₜ : (C/E) × Xₜ → Yₜ be such that

ρ̂ₜ([c], xₜ) = ρₜ(c, xₜ)

Then ρ̂ = {ρ̂ₜ : t ∈ T} is a time-invariant linear realizable input-output response. (Since C is a linear algebra, C/E is a linear algebra in the usual sense.) Consequently, we have a time-invariant finite linear dynamical system (ρ̂, φ̂) such that S₀^ρ = S₀^ρ̂. Since E is not trivial, we have K(C/E) < K(C). (Notice that C is finite.) Consequently, (ρ, φ) is not a minimal realization in the sense of Definition 1.6.

Next, we shall consider the if part. Suppose (ρ̂, φ̂) is of the same type as (ρ, φ), and suppose (ρ, φ) is reduced. It follows from S₀^ρ = S₀^ρ̂ that

(∀c)(∃ĉ)(ρ₁₀(c) = ρ̂₁₀(ĉ)) and (∀ĉ)(∃c)(ρ₁₀(c) = ρ̂₁₀(ĉ))

because ρ₀ and ρ̂₀ are linear. Let a relation h ⊂ Ĉ × C be such that

(ĉ, c) ∈ h ↔ ρ̂₁₀(ĉ) = ρ₁₀(c)

If (ĉ, c) ∈ h and (ĉ, c′) ∈ h hold, then we have that ρ₁₀(c) = ρ̂₁₀(ĉ) = ρ₁₀(c′). Consequently, c = c′ because (ρ, φ) is reduced. Furthermore, since (∀ĉ)(∃c)(ρ₁₀(c) = ρ̂₁₀(ĉ)) holds, we have 𝒟(h) = Ĉ. Therefore, h is a mapping, i.e., h : Ĉ → C. Moreover, since (∀c)(∃ĉ)(ρ₁₀(c) = ρ̂₁₀(ĉ)) holds, h is a surjection. Consequently, we have K(C) ≤ K(Ĉ). Q.E.D.

In general, if a state space is finite, the system is a minimal realization in the sense of Definition 1.6 only if it is reduced, which can be proven in the same way as in Proposition 2.1. However, when a system is nonlinear, the converse is not true, as shown by the following example.
Example 2.1. Let us consider two finite automata, S₁ and S₂. S₁ is given by

the time set: T = {0, 1, 2, …}
the state space: C₁ = {1, 2, 3}
the input alphabet: A = {a₁, a₂}
the output alphabet: B = {0, 1}
The state transition φ₁ : C₁ × A → C₁ and the output function λ₁ : C₁ × A → B for S₁ are given by a table that is not legible in this scan; the property of it used below is that state 3 agrees with state 1 under input a₁ and with state 2 under input a₂:

φ₁(3, a₁) = φ₁(1, a₁),  λ₁(3, a₁) = λ₁(1, a₁),  φ₁(3, a₂) = φ₁(2, a₂),  λ₁(3, a₂) = λ₁(2, a₂)
S₂ is given by

the time set: T = {0, 1, 2, …}
the state space: C₂ = {1, 2}
the input alphabet: A = {a₁, a₂}
the output alphabet: B = {0, 1}

The state transition φ₂ : C₂ × A → C₂ and the output function λ₂ : C₂ × A → B for S₂ are given by

φ₂ = φ₁ | C₂ × A,    λ₂ = λ₁ | C₂ × A

Let us denote the systems-response functions of S₁ and S₂ by ρₜ¹ : C₁ × Xₜ → Yₜ and ρₜ² : C₂ × Xₜ → Yₜ, where Xₜ and Yₜ are A^{Tₜ} and B^{Tₜ}, respectively. It is clear from the definition of S₁ and S₂ that
S₀^ρ¹ = S₀^ρ² ∪ {(x, ρ₀¹(3, x)) : x ∈ X}

where X = X₀. Notice that x takes one of the following two forms:†

x = a₁·x₁  or  x = a₂·x₁,  where x₁ ∈ A^{T₁}

If x = a₁·x₁, then

ρ₀¹(3, x) = ρ₀¹(3, a₁·x₁) = λ₁(3, a₁)·ρ₁¹(φ₁(3, a₁), x₁) = λ₁(1, a₁)·ρ₁¹(φ₁(1, a₁), x₁) = ρ₀¹(1, a₁·x₁)

† In order to simplify the notation, we write a₁·x₁ instead of (0, a₁)·x₁.
If x = a₂·x₁, then

ρ₀¹(3, x) = ρ₀¹(3, a₂·x₁) = λ₁(3, a₂)·ρ₁¹(φ₁(3, a₂), x₁) = λ₁(2, a₂)·ρ₁¹(φ₁(2, a₂), x₁) = ρ₀¹(2, a₂·x₁)
Therefore, we have S₀^ρ¹ = S₀^ρ². It is easy to show that ρ¹ is reduced. Consequently, this example shows that reducibility does not imply minimal realization in the sense of Definition 1.6 even if a system is time invariant and finite.
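Since the transition table of S₁ is illegible in the scan, the check below uses a hypothetical table that satisfies the one property the argument needs (state 3 copies state 1 under a₁ and state 2 under a₂). It verifies on all short input words that S₁ and its restriction S₂ have the same input-output relation, while ρ¹ still separates all three states.

```python
from itertools import product

# hypothetical table: only the constraints phi1(3,a1)=phi1(1,a1),
# lam1(3,a1)=lam1(1,a1), phi1(3,a2)=phi1(2,a2), lam1(3,a2)=lam1(2,a2)
# are taken from the text; the remaining entries are invented
phi1 = {(1, 'a1'): 1, (1, 'a2'): 2, (2, 'a1'): 2, (2, 'a2'): 1,
        (3, 'a1'): 1, (3, 'a2'): 1}
lam1 = {(1, 'a1'): 0, (1, 'a2'): 1, (2, 'a1'): 1, (2, 'a2'): 0,
        (3, 'a1'): 0, (3, 'a2'): 0}

def response(c, word):
    # output sequence produced from initial state c
    out = []
    for a in word:
        out.append(lam1[(c, a)])
        c = phi1[(c, a)]
    return tuple(out)

words = [w for n in range(1, 7) for w in product(('a1', 'a2'), repeat=n)]

# every behaviour of state 3 is already produced by state 1 or state 2,
# so restricting the state space to {1, 2} leaves S_0 unchanged
for w in words:
    assert response(3, w) in {response(1, w), response(2, w)}

# yet rho^1 is reduced: all three states are pairwise distinguishable
for c in (1, 2, 3):
    for cp in (1, 2, 3):
        if c != cp:
            assert any(response(c, w) != response(cp, w) for w in words)
```

The point of the example survives the invented entries: a reduced automaton can still fail to have the smallest state space realizing its input-output relation.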
A characterization of the minimal realization in the sense of Definition 1.7 is given by the following proposition.

Proposition 2.2. Let (ρ, φ) be a time-invariant finite-dimensional linear dynamical system. Then (ρ, φ) is a minimal realization in the sense of Definition 1.7 if and only if (ρ, φ) is reduced.

PROOF. We shall consider the only if part first. Suppose (ρ, φ) is not reduced. Then the congruence relation E ⊂ C × C, which is defined by

(c, c′) ∈ E ↔ ρ₁₀(c) = ρ₁₀(c′)

is not trivial. Let ρ̂₁ₜ : C/E → Yₜ be such that

ρ̂₁ₜ([c]) = ρ₁ₜ(c)

This definition is proper because ρ₁ₜ(c) = F^t(ρ₁₀(c)). Then ρ̂ = {(ρ̂₁ₜ, ρ₂ₜ) : t ∈ T} is a realizable systems response of the same type as ρ. Furthermore, S₀^ρ = S₀^ρ̂ holds. The natural mapping h : C → C/E, where h(c) = [c], is an epimorphism but not an isomorphism because E is not trivial. If (ρ, φ) is a minimal realization, there exists an epimorphism ĥ : C/E → C. Then we have the following contradiction:

dim(C) = dim(ℛ(ĥ)) ≤ dim(C/E) < dim(C)

where dim(C) is the dimension of C. Consequently, (ρ, φ) is not a minimal realization in the sense of Definition 1.7.

We shall consider the if part next. Suppose (ρ, φ) is reduced. Let (ρ̂, φ̂) be a dynamical system of the same type as (ρ, φ) such that S₀^ρ = S₀^ρ̂ holds. Let h : C → Ĉ be an epimorphism. Let E ⊂ C × C be the congruence relation where

(c, c′) ∈ E ↔ h(c) = h(c′)
For each t ∈ T let a linear function ρ̂′₁ₜ : C/E → Yₜ be such that

ρ̂′₁ₜ([c]) = ρ̂₁ₜ(h(c))

Since h is an epimorphism, ρ̂′ = {(ρ̂′₁ₜ, ρ̂₂ₜ) : t ∈ T} is a realizable response family, and S₀^ρ̂ = S₀^ρ̂′ holds. Since S₀^ρ = S₀^ρ̂ = S₀^ρ̂′ holds, we have the following relations:

(∀c)(∃[c′])(ρ₁₀(c) = ρ̂′₁₀([c′]))    (8.1)

(∀[c′])(∃c)(ρ₁₀(c) = ρ̂′₁₀([c′]))    (8.2)

Since ρ is reduced and since Eq. (8.2) implies that ρ₁₀(C) ⊃ ρ̂′₁₀(C/E), we have a linear function ρ₁₀⁻¹·ρ̂′₁₀ : C/E → C, where ρ₁₀⁻¹ : ρ₁₀(C) → C is such that ρ₁₀⁻¹(ρ₁₀(c)) = c for every c ∈ C. Furthermore, Eq. (8.1) implies that ρ₁₀⁻¹·ρ̂′₁₀ is an epimorphism. Since C/E is isomorphic to Ĉ, there exists therefore an epimorphism ĥ : Ĉ → C. Q.E.D.
Proposition 2.2 holds when dynamical systems are finite dimensional. This requirement seems too restrictive. However, as the following example shows, Proposition 2.2 does not hold, in general, for infinite-dimensional systems even if they are linear and time invariant.

Example 2.2. Let C = E^∞, whose typical element is represented by c = (c₁, c₂, …). Let

C₁ = {(o, c₂, c₃, …) : (o, c₂, c₃, …) ∈ C}

Let a projection P : C → C₁ be such that P((c₁, c₂, …)) = (o, c₂, c₃, …). Let θ : C₁ → C be such that θ((o, c₂, c₃, …)) = (c₂, c₃, …). Notice that P and θ
are linear and that θ is an isomorphism. Let a time-invariant linear dynamical system (ρ, φ) be defined by

the time set: T = {0, 1, 2, …}
the state space: C = E^∞ = {(c₁, c₂, …)}
the input alphabet: A = an arbitrary linear algebra
the output alphabet: B = E^∞

The state transition φ : C × A → C and the output function λ : C × A → B are given by

φ(c, a) = P(c),    λ(c, a) = P(c)

The initial systems response ρ₀ : C × X → Y is then given by: For every t ∈ T,

ρ₀(c, x)(t) = P(c)
(ρ, φ) is not reduced. Let (ρ′, φ′) be the reduced system of (ρ, φ); that is, C′ = C/E and ρₜ′ : C′ × Xₜ → Yₜ are given by

(c, c′) ∈ E ↔ ρ₁₀(c) = ρ₁₀(c′)

and ρₜ′([c], xₜ) = ρₜ(c, xₜ). Let (ρ̂, φ̂) be an arbitrary time-invariant linear dynamical system which satisfies the relation S₀^ρ = S₀^ρ̂. Then, since S₀^ρ′ = S₀^ρ = S₀^ρ̂ holds, we have an epimorphism h′ : Ĉ → C′. (Refer to the last part of the proof of Proposition 2.2.) Furthermore, since C′ is isomorphic to C₁, where the isomorphism is expressed by h″ : C′ → C₁, there is an epimorphism θ·h″·h′ : Ĉ → C. Consequently, although (ρ, φ) is not reduced, (ρ, φ) is minimal in the sense of Definition 1.7.
Next, we shall consider Definition 1.8. Since the ordering used in the definition is defined by the cardinality, finite state spaces are of practical interest in this case again.

Proposition 2.3. Let (ρ, φ) be a time-invariant finite dynamical system. Then (ρ, φ) is a minimal realization in the sense of Definition 1.8 if and only if ρ is reduced.

PROOF. The only if part is clear. We shall consider the if part. Let (ρ̂, φ̂) be a finite time-invariant dynamical system. Suppose the condition

(∀c)(∃ĉ)(∀x)(ρ₀(c, x) = ρ̂₀(ĉ, x)) & (∀ĉ)(∃c)(∀x)(ρ₀(c, x) = ρ̂₀(ĉ, x))

holds. Let a relation h ⊂ Ĉ × C be such that

(ĉ, c) ∈ h ↔ (∀x)(ρ₀(c, x) = ρ̂₀(ĉ, x))

If (ĉ, c) ∈ h and (ĉ, c′) ∈ h hold, we have

(∀x)(ρ₀(c, x) = ρ̂₀(ĉ, x) = ρ₀(c′, x))

Since ρ is reduced, c = c′ holds; that is, h is functional. Furthermore, the condition that (ρ, φ) and (ρ̂, φ̂) are equivalent in the sense of Definition 1.2 yields that h : Ĉ → C is an onto mapping. Therefore, K(C) ≤ K(Ĉ). Q.E.D.
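The quotient construction C/E that recurs throughout this section can be computed for a finite machine by partition refinement. The machine below is a hypothetical three-state example (not from the text) in which two states are response equivalent.

```python
# Build the equivalence classes of E, where (c, c') in E iff no input
# word separates the responses of c and c' (a Moore-style refinement).
def reduce_states(states, inputs, delta, lam):
    # initial partition: group states by their one-step outputs
    part = {c: tuple(lam[(c, a)] for a in inputs) for c in states}
    while True:
        # refine by current class plus classes of all successors
        sig = {c: (part[c], tuple(part[delta[(c, a)]] for a in inputs))
               for c in states}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values()), key=repr))}
        new = {c: ids[sig[c]] for c in states}
        if len(set(new.values())) == len(set(part.values())):
            return new            # state -> index of its class [c]
        part = new

states, inputs = [0, 1, 2], ['a']
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 2}
lam = {(0, 'a'): 0, (1, 'a'): 1, (2, 'a'): 1}

classes = reduce_states(states, inputs, delta, lam)
assert classes[1] == classes[2]                    # states 1, 2 merged
assert classes[0] != classes[1]
assert len(set(classes.values())) < len(states)    # K(C/E) < K(C)
```

The map c ↦ classes[c] is exactly the natural mapping h(c) = [c] of the proofs above: the quotient machine is reduced and never larger than the original.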
As Example 2.1 shows, a time-invariant finite dynamical system is not in general a minimal realization in the sense of Definition 1.6 even if the system is reduced, whereas a reduced system is minimal in the sense of Definition 1.8. Definition 1.6 requires stronger properties of a system than Definition 1.8, and indeed the linearity condition is necessary for Proposition 2.1. The minimal realization in the sense of Definition 1.9 is characterized by the following proposition.

Proposition 2.4. Let (ρ, φ) be a time-invariant dynamical system. Suppose (ρ, φ) is linear and its state space is of finite dimension, or the state space C
is a finite set without any algebraic structure. Then (ρ, φ) is a minimal realization in the sense of Definition 1.9 if and only if it is reduced.

PROOF. We shall consider the only if part first. Suppose (ρ, φ) is not reduced. Let a congruence relation E ⊂ C × C be

(c, c′) ∈ E ↔ (∀x)(ρ₀(c, x) = ρ₀(c′, x))

Since E is a congruence relation, C/E has the same algebraic structure as C in the usual sense. Let ρ̂ₜ : (C/E) × Xₜ → Yₜ be

ρ̂ₜ([c], xₜ) = ρₜ(c, xₜ)

Since (ρ, φ) is time invariant, ρ̂ₜ is well defined as above, and ρ̂ = {ρ̂ₜ : t ∈ T} is realizable. Let h : C → C/E be the natural mapping, that is, h(c) = [c]. Then h is an epimorphism and satisfies the commutative diagram of Definition 1.9. However, since E is not trivial, h is not an isomorphism. The desired result follows from arguments similar to those used in Proposition 2.2.

We shall consider the if part. Suppose (ρ, φ) is reduced. Let h : C → Ĉ be an epimorphism which satisfies the commutative diagram of Definition 1.9. We shall show that h is an injection. Suppose h(c) = h(c′) for some elements c, c′ in C. Then it follows from the diagram of Definition 1.9 that for any x ∈ X

ρ₀(c, x) = ρ̂₀(h(c), x) = ρ̂₀(h(c′), x) = ρ₀(c′, x)

Consequently, (∀x)(ρ₀(c, x) = ρ₀(c′, x)) holds, which implies that c = c′ because (ρ, φ) is reduced. Q.E.D.
Proposition 2.4 is proven for systems that are linear or whose state space does not have any algebraic structure. However, as the proof of that proposition shows, the proposition holds for any system provided the relation E ⊂ C × C is a congruence relation with respect to the algebraic structure of the state space, where E is defined by

(c, c′) ∈ E ↔ (∀x)(ρ₀(c, x) = ρ₀(c′, x))
When the equivalence relation used to define a minimal realization is given with respect to the input response only, the minimal realization is termed a minimal input-response realization of the respective kind. The importance of this type of minimal realization is due to the fact that most problems in the design of linear control systems (or in filter design) are specified by giving some weighting function as an input-response function. We shall consider this type of minimal realization using Definitions 1.10 and 1.11.
Proposition 2.5. Let (ρ, φ) be a time-invariant finite dynamical system. (ρ, φ) is a minimal realization in the sense of Definition 1.10 if and only if (ρ, φ) is controllable and reduced.
PROOF. We shall consider the only if part first. If (ρ, φ) is not reduced, then the equivalence E ⊂ C × C, where

(c, c′) ∈ E ↔ (∀x)(ρ₀(c, x) = ρ₀(c′, x))

is not trivial. Let ρ̂ₜ : (C/E) × Xₜ → Yₜ be such that

ρ̂ₜ([c], xₜ) = ρₜ(c, xₜ)

Since ρ is time invariant, the above ρ̂ₜ is well defined. Then ρ̂ = {ρ̂ₜ : t ∈ T} is a realizable systems response. Suppose c₀ ∈ C is the reference point of C, and let [c₀] be the reference point of C/E. Then

{(x, ρ₀(c₀, x)) : x ∈ X} = {(x, ρ̂₀([c₀], x)) : x ∈ X}

holds. Since E is not trivial and since C is finite, we have K(C/E) < K(C). Consequently, (ρ, φ) is not a minimal realization in the sense of Definition 1.10.

Suppose (ρ, φ) is not controllable. Let C₀ ⊂ C and ρ̂ₜ : C₀ × Xₜ → Yₜ be such that

C₀ = {c : (∃x^t)(c = φ₀ₜ(c₀, x^t))},    ρ̂ₜ = ρₜ | C₀ × Xₜ

where c₀ ∈ C is the reference point. We shall show first that ρ̂ = {ρ̂ₜ : t ∈ T} can be realized by a time-invariant finite dynamical system. Let c ∈ C₀ and x^τ ∈ X^τ be arbitrary. Then c = φ₀ₜ(c₀, x^t) for some x^t. Consequently,

φₜₜ′(c, x^τ) = φₜₜ′(φ₀ₜ(c₀, x^t), x^τ) = φ₀ₜ′(c₀, x^t·x^τ) ∈ C₀

where t′ = t + τ. Therefore, we have that φₜₜ′(C₀ × X^τ) ⊂ C₀; that is, φ̂ₜₜ′ : C₀ × Xₜₜ′ → C₀ can be defined by φ̂ₜₜ′ = φₜₜ′ | C₀ × Xₜₜ′. It is clear that (ρ̂, φ̂) is a time-invariant finite dynamical system, where φ̂ = {φ̂ₜₜ′ : t, t′ ∈ T}. Since c₀ = φ₀₀(c₀, x⁰), we have c₀ ∈ C₀ and

{(x, ρ₀(c₀, x)) : x ∈ X} = {(x, ρ̂₀(c₀, x)) : x ∈ X}

Let c₀ be the reference point of C₀. Then, since K(C₀) < K(C) holds,
(ρ, φ) is not a minimal realization in the sense of Definition 1.10.
We shall consider the if part. Suppose (ρ, φ) is reduced and controllable. Suppose (ρ, φ) is not a minimal realization in the sense of Definition 1.10. Then there exists a finite set Ĉ and a time-invariant finite dynamical system (ρ̂, φ̂), whose state space is Ĉ, such that the following holds:

1. K(Ĉ) < K(C),
2. (∀x)(ρ_0(c_0, x) = ρ̂_0(ĉ_0, x))   (8.3)

where c_0 ∈ C and ĉ_0 ∈ Ĉ are the reference points. Since ρ and ρ̂ are realizable, we have

(∀x^t)(∃c′ ∈ C)(∀x_t)(ρ_t(c′, x_t) = ρ_0(c_0, x^t · x_t) | T_t)

(∀x^t)(∃ĉ′ ∈ Ĉ)(∀x_t)(ρ̂_t(ĉ′, x_t) = ρ̂_0(ĉ_0, x^t · x_t) | T_t)   (8.4)
2. Characterization of the Minimal Realization
Since (ρ, φ) is controllable, we have

(∀c ∈ C)(∃x^t)(c = φ_{0t}(c_0, x^t))

that is,

(∀c ∈ C)(∃x^t)(∀x_t)(ρ_t(c, x_t) = ρ_0(c_0, x^t · x_t) | T_t)   (8.5)

It follows from (8.3)–(8.5) that

(∀c ∈ C)(∃ĉ′ ∈ Ĉ)(∀x_t)(ρ_t(c, x_t) = ρ̂_t(ĉ′, x_t))   (8.6)

Let Ĉ_0 ⊂ Ĉ be such that Ĉ_0 = {ĉ′ : (∃c)(∀x_t)(ρ_t(c, x_t) = ρ̂_t(ĉ′, x_t))}. Relation (8.6) implies that Ĉ_0 ≠ ∅. Let g ⊂ Ĉ_0 × C be such that

(ĉ′, c) ∈ g ↔ (∀x_t)(ρ_t(c, x_t) = ρ̂_t(ĉ′, x_t))   (8.7)

Relation (8.7) implies that 𝒟(g) = Ĉ_0. If (ĉ′, c) ∈ g and (ĉ′, ĉ) ∈ g hold, then

(∀x_t)(ρ_t(c, x_t) = ρ̂_t(ĉ′, x_t) = ρ_t(ĉ, x_t))

Since (ρ, φ) is reduced, c = ĉ holds. Therefore, g is a mapping; i.e., g : Ĉ_0 → C. Furthermore, g is a surjection due to (8.6). Consequently, we have that K(C) ≤ K(Ĉ_0) ≤ K(Ĉ), which contradicts the condition K(Ĉ) < K(C). Q.E.D.
Before characterizing minimal realization in the sense of Definition 1.11, we shall introduce the following lemma.
Lemma 2.1. Suppose that (ρ, φ) is a reduced time-invariant linear dynamical system. Let

C_0 = {c : (∃x^t)(φ_{20t}(x^t) = c) & t ∈ T}

Then

C_0 = {c : (∃x^t)(ρ_{1t}(c) = ρ_{20}(x^t · 0) | T_t) & t ∈ T}

and C_0 is a linear subspace of C.

PROOF. In general,

C_0 ⊂ {c : (∃x^t)(ρ_{1t}(c) = ρ_{20}(x^t · 0) | T_t) & t ∈ T} = C_0′

Suppose ρ_{1t}(c) = ρ_{20}(x^t · 0) | T_t. Since {ρ_t} is reduced,

c = ρ_{1t}⁻¹(ρ_{20}(x^t · 0) | T_t) = φ_{20t}(x^t)
(Refer to Chapter IV, Section 3.) Hence C_0 = C_0′. The latter part follows from the proof of Corollary 3.2, Chapter VII. Q.E.D.
We now have the following proposition.

Proposition 2.6. Let (ρ, φ) be a time-invariant finite-dimensional linear dynamical system. Then (ρ, φ) is a minimal realization in the sense of Definition 1.11 if and only if it is controllable and reduced.

PROOF. We shall consider the only if part first. If (ρ, φ) is not reduced, there exists a nontrivial congruence relation E ⊂ C × C, where

(c, c′) ∈ E ↔ ρ_{10}(c) = ρ_{10}(c′)

Let ρ̄_{1t} : C/E → Y_t be such that

ρ̄_{1t}([c]) = ρ_{1t}(c)

Then ρ̄ = {(ρ̄_{1t}, ρ_{2t}) : t ∈ T} can be realized by a time-invariant finite-dimensional linear dynamical system. Let h : C → C/E be the natural mapping; i.e., h(c) = [c]. Then h is an epimorphism. Since E is not trivial, h is not an isomorphism. Therefore, the argument used in Proposition 2.2 implies that (ρ, φ) is not a minimal realization in the sense of Definition 1.11.
Suppose (ρ, φ) is reduced but not controllable. Let C_0 ⊂ C be such that

C_0 = {c : (∃x^t)(c = φ_{20t}(x^t))}

Then Lemma 2.1 says that C_0 is a subspace of C, and C_0 is a proper subspace due to uncontrollability. Let ρ̄_{1t} : C_0 → Y_t be such that ρ̄_{1t} = ρ_{1t} | C_0. We shall show that {(ρ̄_{1t}, ρ_{2t})} is realizable. The input-response consistency is obviously satisfied because {(ρ_{1t}, ρ_{2t})} is realizable. We want to show that

(∀t)(∀t′)(∀c ∈ C_0)(∀x_{tt′})(∃c′ ∈ C_0)(ρ̄_{1t′}(c′) = ρ̄_{1t}(c) | T_{t′} + ρ_{2t}(x_{tt′} · 0) | T_{t′})

Since {(ρ_{1t}, ρ_{2t})} is realizable, the following holds:

(∀t)(∀x^t)(∀c ∈ C_0)(∃c′ ∈ C)(ρ_{1t}(c′) = ρ_{10}(c) | T_t + ρ_{20}(x^t · 0) | T_t)

We shall show that c′ ∈ C_0. Since c ∈ C_0, ρ_{1t′}(c) = ρ_{20}(x̂^{t′} · 0) | T_{t′} for some t′ ∈ T and x̂^{t′}. Let τ = t + t′. Then, since {ρ_t} is time invariant, we have

ρ_{1τ}(c′) = ρ_{1t′}(c) | T_τ + ρ_{2t′}(F^{t′}(x^t) · 0) | T_τ = ρ_{20}(x̂^{t′} · 0) | T_τ + ρ_{2t′}(F^{t′}(x^t) · 0) | T_τ
Since {ρ_t} is time invariant, we have the result. Therefore, {(ρ̄_{1t}, ρ_{2t})} is realizable by a time-invariant finite-dimensional linear dynamical system. Let h : C → C_0 be the projection from C onto C_0. Then h is an epimorphism but not an isomorphism because C_0 is a proper subspace of C. Consequently, the argument used in Proposition 2.2 implies that (ρ, φ) is not a minimal realization in the sense of Definition 1.11.
Next, we shall consider the if part. Suppose (ρ, φ) is reduced and controllable. Suppose (ρ̂, φ̂) is a time-invariant finite-dimensional linear dynamical system such that (∀x)(ρ̂_0(0, x) = ρ_0(0, x)) holds, and, moreover, there exists an epimorphism h : C → Ĉ. Let a congruence relation E ⊂ C × C be such that (c, c′) ∈ E ↔ h(c) = h(c′). Let a linear function ρ′_{1t} : C/E → Y_t be such that

ρ′_{1t}([c]) = ρ̂_{1t}(h(c))

Then ρ′ = {(ρ′_{1t}, ρ_{2t}) : t ∈ T} is realizable by a time-invariant finite-dimensional linear dynamical system. (Refer to the realization theory.) Since (∀x)(ρ̂_0(0, x) = ρ_0(0, x)) implies that ρ̂_{20}(x) = ρ_{20}(x) for every x, we have from the realizability condition that

(∀t)(∀x^t)(∃[c])(ρ′_{1t}([c]) = ρ_{20}(x^t · 0) | T_t)   (8.8)

Since (ρ, φ) is controllable,

C = {c : (∃x^t)(c = φ_{20t}(x^t))}

that is,

(∀c)(∃t)(∃x^t)(c ∈ C → ρ_{1t}(c) = ρ_{20}(x^t · 0) | T_t)   (8.9)

holds. Relations (8.8) and (8.9) imply that

(∀c)(∃t)(∃[c′])(c ∈ C → ρ_{1t}(c) = ρ′_{1t}([c′]))

Since ρ′ = {(ρ′_{1t}, ρ_{2t}) : t ∈ T} is time invariant, the above relation yields

(∀c)(∃[c′])(c ∈ C → ρ_{10}(c) = ρ′_{10}([c′]))

Let Ĉ′ ⊂ C/E be such that

Ĉ′ = {[c′] : (∃c ∈ C)(ρ_{10}(c) = ρ′_{10}([c′]))}

Ĉ′ is a linear subspace of C/E. Since (ρ, φ) is reduced, there exists a linear surjection ρ_{10}⁻¹ · ρ′_{10} : Ĉ′ → C, and hence dim C ≤ dim Ĉ′ ≤ dim(C/E) ≤ dim Ĉ. Q.E.D.
3. UNIQUENESS OF MINIMAL INPUT-RESPONSE REALIZATION
A minimal realization is, in general, not unique (as can easily be seen from the proofs of the propositions in the previous section). In this section, we shall show, however, that a minimal input-response realization is unique within an isomorphism. Some other minimal realizations will be considered in Chapter XII. We shall first consider the uniqueness problem for a general case.
Proposition 3.1. Let (ρ, φ) and (ρ̂, φ̂) be time-invariant dynamical systems. Suppose (ρ, φ) and (ρ̂, φ̂) are reduced and controllable. They are then realizations of the same input-response family, i.e., for every x, ρ_0(c_0, x) = ρ̂_0(ĉ_0, x), if and only if there exists a one-to-one correspondence h : C → Ĉ such that h(c_0) = ĉ_0 and the following diagram is commutative for every x^t and x_{tt′} and for 0 ≤ t ≤ t′; that is,

h · φ_{0t}(c_0, x^t) = φ̂_{0t}(ĉ_0, x^t),  h · φ_{tt′}(c, x_{tt′}) = φ̂_{tt′}(h(c), x_{tt′}),  ρ_t(c, x_t) = ρ̂_t(h(c), x_t)

PROOF. We shall consider the if part first. Suppose there exists a one-to-one correspondence h : C → Ĉ such that the above diagram is commutative. Then, for any x ∈ X and t′ ≥ 0,

ρ̂_0(ĉ_0, x) = ρ̂_0(h(c_0), x) = ρ_0(c_0, x)

Therefore, ρ_0(c_0, x) = ρ̂_0(ĉ_0, x) for every x ∈ X.
Next we shall consider the only if part. Let h ⊂ C × Ĉ be a relation such that

(c, ĉ) ∈ h ↔ (∃x^t)(c = φ_{0t}(c_0, x^t) & ĉ = φ̂_{0t}(ĉ_0, x^t))

Suppose (c, ĉ) ∈ h and (c, ĉ′) ∈ h hold such that

c = φ_{0t}(c_0, x^t) & ĉ = φ̂_{0t}(ĉ_0, x^t)

and

c = φ_{0t′}(c_0, x̂^{t′}) & ĉ′ = φ̂_{0t′}(ĉ_0, x̂^{t′})
Then we have that for any x,

ρ_t(φ_{0t}(c_0, x^t), x) = ρ̂_t(φ̂_{0t}(ĉ_0, x^t), x)   (8.10)

ρ_{t′}(φ_{0t′}(c_0, x̂^{t′}), x) = ρ̂_{t′}(φ̂_{0t′}(ĉ_0, x̂^{t′}), x)   (8.11)

Since ρ and ρ̂ are time invariant, Eqs. (8.10) and (8.11) imply that

(∀x)(ρ_0(c, x) = ρ̂_0(ĉ, x))

and

(∀x)(ρ_0(c, x) = ρ̂_0(ĉ′, x))

Consequently, we have that

(∀x)(ρ̂_0(ĉ, x) = ρ̂_0(ĉ′, x))

Since (ρ̂, φ̂) is reduced, ĉ = ĉ′ holds. Let c ∈ C be arbitrary. Since (ρ, φ) is controllable, there exists x̂^t such that c = φ_{0t}(c_0, x̂^t) holds. Let ĉ = φ̂_{0t}(ĉ_0, x̂^t). Then we have that (c, ĉ) ∈ h. Consequently, 𝒟(h) = C. Therefore, h is a surjection from C onto Ĉ. Moreover, since C and Ĉ are related with h in a symmetric way, the inverse h⁻¹ of h exists. Hence, h is a one-to-one correspondence. The relation h(c_0) = ĉ_0 follows from the fact that c_0 = φ_{00}(c_0, x^0) and ĉ_0 = φ̂_{00}(ĉ_0, x^0). We shall show that the diagram is commutative with respect to h. Since

(φ_{0t}(c_0, x^t), φ̂_{0t}(ĉ_0, x^t)) ∈ h

we have that

φ̂_{0t}(ĉ_0, x^t) = h · φ_{0t}(c_0, x^t),  for any x^t

The relation that for any c ∈ C and x_{tt′},

φ̂_{tt′}(h(c), x_{tt′}) = h · φ_{tt′}(c, x_{tt′})

is proven as follows. Since (ρ, φ) is controllable, there exists x^τ such that
c = φ_{0τ}(c_0, x^τ) holds. Let t̂ = (t′ − t) + τ. Then

(φ_{0t̂}(c_0, x^τ · F^{τ−t}(x_{tt′})), φ̂_{0t̂}(ĉ_0, x^τ · F^{τ−t}(x_{tt′}))) ∈ h

→ (φ_{τt̂}(φ_{0τ}(c_0, x^τ), F^{τ−t}(x_{tt′})), φ̂_{τt̂}(φ̂_{0τ}(ĉ_0, x^τ), F^{τ−t}(x_{tt′}))) ∈ h

→ (φ_{τt̂}(c, F^{τ−t}(x_{tt′})), φ̂_{τt̂}(h(c), F^{τ−t}(x_{tt′}))) ∈ h

→ φ̂_{tt′}(h(c), x_{tt′}) = h · φ_{tt′}(c, x_{tt′})

Similarly, we can prove that for any c ∈ C and x_t,

ρ̂_t(h(c), x_t) = ρ_t(c, x_t)

Therefore, the diagram is commutative with respect to h.
Q.E.D.
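The construction of h in the only if part can be sketched concretely for finite machines. Below, two hypothetical machines with the same input-response family (the second is the first with states relabeled) are given, and h is built by pairing the states reached from the reference states under the same input strings; the sketch checks that the pairing is single-valued, bijective, and commutes with the transitions:

```python
from itertools import product

# Two hypothetical machines realizing the same input-response family;
# machine 2 is machine 1 with states relabeled (a -> p, b -> q).
d1 = {("a", 0): "b", ("a", 1): "a", ("b", 0): "a", ("b", 1): "b"}
d2 = {("p", 0): "q", ("p", 1): "p", ("q", 0): "p", ("q", 1): "q"}
c0, chat0 = "a", "p"                       # reference states

def run(delta, c, xs):
    for x in xs:
        c = delta[(c, x)]
    return c

# Pair phi_0t(c0, x^t) with phihat_0t(chat0, x^t) over all short input strings.
h = {}
consistent = True
for n in range(5):
    for xs in product([0, 1], repeat=n):
        c, chat = run(d1, c0, xs), run(d2, chat0, xs)
        if c in h and h[c] != chat:
            consistent = False               # would violate single-valuedness
        h[c] = chat

bijective = len(set(h.values())) == len(h) == 2
commutes = all(h[d1[(c, x)]] == d2[(h[c], x)] for c in h for x in (0, 1))
```

The check h[c0] = chat0 corresponds to the relation h(c_0) = ĉ_0 of the proposition.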
Proposition 3.1 states that if two dynamical systems (ρ, φ) and (ρ̂, φ̂), which are reduced and controllable, are realizations of the same input-response family, they are isomorphic in the sense that there exists a one-to-one correspondence h : C → Ĉ such that (ρ̂, φ̂) is determined by (ρ, φ) as follows:

ρ̂_t(ĉ, x_t) = ρ_t(h⁻¹(ĉ), x_t)   (8.12)

φ̂_{tt′}(ĉ, x_{tt′}) = h · φ_{tt′}(h⁻¹(ĉ), x_{tt′})   (8.13)

h satisfies the relation h(c_0) = ĉ_0, and hence if C and Ĉ are considered algebras with nullary operators c_0 and ĉ_0, then h is considered an isomorphism. Combining Proposition 3.1 with Proposition 2.5, we have the following corollary.

Corollary 3.1. Given two time-invariant finite dynamical systems (ρ, φ) and (ρ̂, φ̂), which are minimal realizations in the sense of Definition 1.10. They are then realizations of the same input-response family if and only if the diagram of Proposition 3.1 is commutative, where h is a one-to-one correspondence and h(c_0) = ĉ_0 holds.

According to the above corollary, a minimal realization in the sense of Definition 1.10 of a time-invariant finite dynamical system is uniquely determined within an isomorphism. [Refer to Eqs. (8.12) and (8.13).] Regarding a linear minimal realization, an equivalent proposition to Proposition 3.1 can be proven. However, in order to emphasize the intimate relationship between the present result and the classical theory, we shall present that fact in the form of a uniqueness proposition as follows.
Proposition 3.2. Given two strongly nonanticipatory time-invariant linear dynamical systems S = (ρ, φ) and Ŝ = (ρ̂, φ̂), which are reduced and controllable. They are then realizations of the same input-response family, i.e., for every x_t, ρ_{2t}(x_t) = ρ̂_{2t}(x_t), if and only if there exists a linear one-to-one correspondence h : C → Ĉ such that the following diagram is commutative, i.e., h carries the reachability maps φ_{20t} into φ̂_{20t} and the output functions satisfy λ = λ̂ · h, where λ : C → B and λ̂ : Ĉ → B are the output functions of (ρ, φ) and (ρ̂, φ̂).

PROOF. Let us consider the only if part first. Notice that the output function λ : C → B of a strongly nonanticipatory time-invariant linear dynamical system (ρ, φ) is given by: for any t, x_t, and c,

λ(c) = ρ_t(c, x_t)(t) = ρ_{1t}(c)(t)
For any t and x^t, we have, by time invariance,

ρ_{1t}(c)(t) = ρ_{10}(c)(0)

Consequently, for every t′ ≥ t, we have

λ(c) = ρ_{1t′}(c)(t′) = ρ_{10}(c)(0)

Since S and Ŝ are reduced, ρ_{1t} and ρ̂_{1t} are one to one; that is, they have the inverses on ρ_{1t}(C) and ρ̂_{1t}(Ĉ). Furthermore, since S and Ŝ are controllable,

C = ∪_t φ_{20t}(X^t)  and  Ĉ = ∪_t φ̂_{20t}(X^t)

hold. Let h_c : C → Ĉ be such that

h_c(c) = ρ̂_{1t}⁻¹ · ρ_{1t}(c)

where t is arbitrary if the right-hand side is properly defined. Notice that since the systems are time invariant, for any t and t′,

ρ̂_{1t}⁻¹(ρ_{1t}(c)) = ρ̂_{1t′}⁻¹(ρ_{1t′}(c))

holds if both sides are properly defined, and that

C = ∪_t φ_{20t}(X^t)  and  ρ̂_{1t}(φ̂_{20t}(x^t)) = ρ_{1t}(φ_{20t}(x^t))

for every x^t imply 𝒟(h_c) = C. Consequently, h_c is a one-to-one correspondence (i.e., h_c⁻¹ = ρ_{1t}⁻¹ · ρ̂_{1t} for some t), and for every t and t′ ≥ t,

h_c(φ_{1tt′}(φ_{20t}(x^t))) = φ̂_{1tt′}(φ̂_{20t}(x^t))

holds, and in particular, if t′ = t,

h_c(φ_{20t}(x^t)) = φ̂_{20t}(x^t)

holds. The if part is apparent. Q.E.D.
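In the classical finite-dimensional case the linear correspondence h of Proposition 3.2 is a similarity transform. As a sketch (with hypothetical matrices, not taken from the text), two discrete-time realizations (A, B, Λ) and (TAT⁻¹, TB, ΛT⁻¹) are checked to produce the same weighting pattern Λ A^k B, which plays the role of the common input-response family:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Hypothetical minimal realization (A, B, Lam) and its transform by T.
A   = [[0.5, 0.0], [0.0, 0.25]]
B   = [[1.0], [1.0]]
Lam = [[1.0, 2.0]]
T     = [[1.0, 1.0], [0.0, 1.0]]
T_inv = [[1.0, -1.0], [0.0, 1.0]]     # inverse of T

A2   = matmul(matmul(T, A), T_inv)    # T A T^-1
B2   = matmul(T, B)                   # T B
Lam2 = matmul(Lam, T_inv)             # Lam T^-1

def markov(Lam, A, B, k):
    """Weighting pattern Lam A^k B of the realization."""
    M = B
    for _ in range(k):
        M = matmul(A, M)
    return matmul(Lam, M)[0][0]

same = all(abs(markov(Lam, A, B, k) - markov(Lam2, A2, B2, k)) < 1e-12
           for k in range(6))
```

The transform leaves every weighting coefficient unchanged, which is the content of the "if part is apparent" direction.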
By combining Proposition 3.2 with Proposition 2.6, we have the following result.

Corollary 3.2. Given two strongly nonanticipatory time-invariant finite-dimensional linear dynamical systems (ρ, φ) and (ρ̂, φ̂), which are minimal realizations in the sense of Definition 1.11 and whose state spaces satisfy the conditions from Proposition 2.6, they are then realizations of the same linear input-response family if and only if the diagram of Proposition 3.2 is commutative, where h is a linear one-to-one correspondence.
Chapter IX

STABILITY

Stability is a vast and important subject in systems theory, and in this chapter we shall only open that subject to considerations on the general systems level. It will be shown how several concepts of stability can be formalized within the general systems theory framework developed in this book, but only one of them, namely, the stability of a subset of a state space, will be investigated in detail. In order to show how some conventional notions and problems deeply rooted in the analysis framework can be successfully dealt with even on the general systems level, the concept of a Lyapunov-type function is introduced for a general dynamical system, and the necessary and sufficient conditions for stability are given in terms of that function.
Conceptually, the stability depends on the system and the way its behavior is evaluated to determine the "inertness" (i.e., stability) of a given mode of operation. In addition to the definition of a system, one, therefore, also needs a way to determine whether the system's behavior has changed appreciably after a perturbation. The assessment is done in terms of a properly defined concept of a neighborhood, and the system is then stable in reference to the given notion of neighborhood if the changes in the system's behavior are small enough for small variations in operating conditions.
The generality of the stability considerations and results derived in this chapter should be noticed. All the systems objects used are defined solely as abstract sets, and only one auxiliary set, which is ordered for the sake of a proper definition of the Lyapunov-type function, is introduced. The information on the system is condensed in the form of a preordering. Using only that
much structure, we have been able to prove the basic Lyapunov-type theorem, giving necessary and sufficient conditions for stability. The results are immediately applicable to more structured systems, e.g., to sets with a topology or a uniform topology, a metric space, etc.

1. GENERAL CONCEPT OF STABILITY
The essential idea of the stability concept in general is the following. Let d̂ and ê represent the cause and effect of a phenomenon, respectively; i.e., there is assumed a mapping F such that ê = F(d̂). In some other instances, another cause, for example, d, yields another effect e = F(d). The cause-effect pair (d̂, ê) is stable if small deviations from ê are caused by small deviations from d̂; i.e., for all d close to d̂, the corresponding effect e = F(d) is close to ê. Intuitively, then, small perturbations around d̂ will not change the effect appreciably.
To formalize the notion of stability, it is apparently necessary to have a way to express "closeness." In general, this can be done simply by using a specified family of subsets, otherwise arbitrary. Let V be an arbitrary set, Π(V) the family of all subsets of V, and θ_V a given family of subsets of V; i.e., θ_V ⊂ Π(V). For any point v ∈ V, the neighborhood system 𝔑(v) of v relative to θ_V is then defined by

𝔑(v) = {α : α ∈ θ_V & v ∈ α}

A general concept of stability is then given by the following definition.
Definition 1.1. Let F : D → E be a given mapping, θ_D and θ_E given families of subsets of D and E, respectively, and (d̂, ê) ∈ D × E such that ê = F(d̂). (d̂, ê) is stable relative to θ_D and θ_E if and only if

(∀α ∈ 𝔑(ê))(∃β ∈ 𝔑(d̂))(∀d)(d ∈ β → F(d) ∈ α)

where 𝔑(ê) ⊂ θ_E and 𝔑(d̂) ⊂ θ_D are the neighborhood systems of ê and d̂ in the sense of θ_E and θ_D, respectively.

(a) Stability of Response

The stability notion from Definition 1.1 can be used to derive a variety of system stability concepts. For example, let D be the initial (or global) state object C, E the output object Y, and let the initial (global)-response function ρ_0 of a system for a given input x̂ ∈ X be taken for F. We have then the following definition.
Definition 1.2. Let θ_C and θ_Y be given families of subsets of C and Y, respectively. The response ŷ = ρ_0(ĉ_0, x̂) is then stable (relative to θ_C, θ_Y, and for the given x̂) if and only if

(∀α ∈ 𝔑(ŷ))(∃β ∈ 𝔑(ĉ_0))(∀c_0)(c_0 ∈ β → ρ_0(c_0, x̂) ∈ α)

where 𝔑(ŷ) and 𝔑(ĉ_0) are the neighborhood systems of ŷ and ĉ_0, respectively.
When S is a dynamical system, the question of stability is traditionally explored in terms of the state-transition family. This is certainly completely satisfactory when the system has a canonical representation (φ, λ), since the output function λ is a static map and has no effect on stability. Actually, in such an approach [13] the only information on the system necessary for the stability considerations is the succession of states. All required information on the system can then be presented in a compact form in terms of an ordering in the state space, Ψ ⊂ C × C. Let us derive such an ordering from the state-transition family φ. Let S be a time-invariant dynamical system (φ, λ) with the state space C. For any x ∈ X, the system S defines a relation Ψ ⊂ C × C such that

(c, c′) ∈ Ψ ↔ (∃t)(∃t′)(c′ = φ_{tt′}(c, x_{tt′}))   (9.1)
The relation Ψ satisfies the following properties: for any c, c′, and c″ in C,

(i) (c, c) ∈ Ψ,
(ii) (c, c′) ∈ Ψ & (c′, c″) ∈ Ψ → (c, c″) ∈ Ψ.

Condition (i) follows from the convention adopted for the state transition that φ_{tt}(c, x_{tt}) = c, while condition (ii) follows from the time invariancy, the input completeness, and the composition property.†
Let D and E of Definition 1.1 be C and Π(C), respectively. Let F : C → Π(C) be such that F(c) = Ψ(c) = {c′ : (c, c′) ∈ Ψ}. Let θ_D = θ_E = θ ⊂ Π(C). Then Definition 1.1 yields the following definition.

Definition 1.3. A point c ∈ C is stable relative to Ψ and θ if and only if

(∀α ∈ 𝔑(Ψ(c)))(∃β ∈ 𝔑(c))(Ψ(β) ⊂ α)

† As a matter of fact, let (c, c′) ∈ Ψ and (c′, c″) ∈ Ψ, or c′ = φ_{tt′}(c, x_{tt′}) and c″ = φ_{ss′}(c′, x̂_{ss′}) for some x_{tt′} and x̂_{ss′}; let t″ = (s′ − s) + t′. (Notice that since S is time invariant, the time set is an Abelian group with a linear ordering.) Then c″ = φ_{t′t″}(c′, F^{t′−s}(x̂_{ss′})) because S is time invariant; since x_{tt′} · F^{t′−s}(x̂_{ss′}) ∈ X_{tt″}, and since c″ = φ_{t′t″}(φ_{tt′}(c, x_{tt′}), F^{t′−s}(x̂_{ss′})) = φ_{tt″}(c, x_{tt′} · F^{t′−s}(x̂_{ss′})), (c, c″) ∈ Ψ holds.
where Ψ(β) = ∪_{c∈β} Ψ(c) and the neighborhood of a set Ψ(c) is defined in the usual manner; i.e., 𝔑(Ψ(c)) = {α : α ∈ θ & Ψ(c) ⊂ α}.
It should be pointed out, again, that in Definition 1.3 the only reference to the given system is in terms of the relation Ψ, which satisfies (i) and (ii); i.e., it is a preordering relation. Therefore, when using Definition 1.3, we shall talk about stability of the preordering Ψ, with the understanding that the relation Ψ is defined by the state-transition family of a system.
As an illustration of the application of Definition 1.3, let us consider a dynamical system defined on a Euclidean space with a norm topology, where θ corresponds to that topology. Suppose the dynamical system has no input, i.e., φ_{0t} : C → C, and ĉ ∈ C is an equilibrium point, φ_{0t}(ĉ) = ĉ for every t. Definition 1.3 then can be interpreted as

(∀ε)(∃δ)(∀c)(‖c − ĉ‖ < δ → (∀t)(‖φ_{0t}(c) − ĉ‖ < ε))

This represents a Lyapunov-type stability, and hence Definition 1.3 is a generalization of Lyapunov-type stability.†
Definition 1.3 can be extended to the stability of a set rather than a point.

Definition 1.4. A set C′ ⊂ C is stable relative to Ψ and θ if and only if

(∀α ∈ 𝔑(Ψ(C′)))(∃β ∈ 𝔑(C′))(Ψ(β) ⊂ α)

(b) Stability of an Isolated Trajectory†
The Lyapunov-type stability of a given element Ψ(ĉ), as considered in the preceding section, is defined in relation to other elements Ψ(c). But there is another type of stability, where the stability of Ψ(ĉ) is defined in terms of Ψ(ĉ) itself. A typical example of this kind is the Poisson-type stability. Let ẑ_c(t) = φ_{0t}(c, x̂) for a dynamical system with a fixed input x̂; a trajectory ẑ_c : T → C is positively Poisson stable if and only if

(∃t̂)(∀α ∈ 𝔑(ẑ_c(t̂)))(∀t)(∃t′)(t ≤ t′ & ẑ_c(t′) ∈ α)

For instance, if ẑ_c(t) = sin ct for c ∈ E¹, ẑ_c is stable in the above sense, where t̂ can be any positive number. As another illustration, let a dynamical system S be defined on a Euclidean space with a norm topology, and furthermore, let S have a unique input x̂. Let ẑ(t) = φ_{0t}(c, x̂). A classical concept of stability is then expressed as: a trajectory ẑ : T → C is stable if and only if

(∀ε)(∃t̂)(∀t)(∀t′)(t̂ ≤ t ≤ t′ → ‖ẑ(t) − ẑ(t′)‖ < ε)   (9.2)

† See Nemytskii and Stepanov [14].
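The two notions can be contrasted numerically. In the sketch below (sampled trajectories, an illustration rather than a proof), ẑ(t) = 1/(1 + t) satisfies the Cauchy-type condition (9.2) while sin t does not, whereas sin t keeps returning near its earlier values in the manner of Poisson stability:

```python
import math

def stable_9_2(z, eps, t_hat, horizon, step=0.1):
    # (9.2): all values of z beyond t_hat stay within eps of one another.
    ts = [t_hat + k * step for k in range(int((horizon - t_hat) / step))]
    vals = [z(t) for t in ts]
    return max(vals) - min(vals) < eps

z1 = lambda t: 1.0 / (1.0 + t)     # classically stable, not Poisson stable
z2 = math.sin                      # Poisson stable, not stable in (9.2)

cauchy_z1 = stable_9_2(z1, 0.1, 20.0, 1000.0)
cauchy_z2 = stable_9_2(z2, 0.1, 20.0, 1000.0)

def returns_near(z, t_hat, eps, checkpoints, horizon, step=0.05):
    # Poisson-type: after every checkpoint t there is t' >= t with
    # z(t') within eps of z(t_hat).
    target = z(t_hat)
    for t in checkpoints:
        ts = [t + k * step for k in range(int((horizon - t) / step))]
        if not any(abs(z(s) - target) < eps for s in ts):
            return False
    return True

poisson_z2 = returns_near(z2, 1.0, 0.1, [10.0, 100.0, 500.0], 1000.0)
poisson_z1 = returns_near(z1, 1.0, 0.1, [10.0, 100.0, 500.0], 1000.0)
```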
In relation (9.2) the stability is defined by the trajectory ẑ itself. However, the classical stability is, in general, not a Poisson-type stability. Consider the trajectory ẑ(t) = 1/(1 + t). ẑ is stable in the sense of (9.2), but not stable in the sense of Poisson. Furthermore, ẑ_c(t) = sin ct is not stable in the sense of (9.2). Under some conditions, the classical stability becomes a Lyapunov-type stability, i.e., an asymptotical stability.

(c) Structural Stability†

The concept of structural stability is related to the "morphology" of a system. Let S ⊂ X × Y be a general system, which is "parameterized" by a set D in the sense that for any d ∈ D the system is in a certain "mode." Furthermore, let the system's behavior be classified in terms of various "forms" of behavior; i.e., there is given a function

P : 𝒮 → E

such that e = P(S) denotes that S ∈ 𝒮 has the form e, where 𝒮 = {S : S ⊂ X × Y}. Let R : D → 𝒮, and let F be the composition of P and R, i.e., F : D → E such that

e = F(d) = P(R(d))

Usually, the neighborhood system of e ∈ E is defined as 𝔑(e) = {{e}}. Then the form ê ∈ E is structurally stable if and only if for each d̂ ∈ D, where ê = F(d̂),

(∃β ∈ 𝔑(d̂))(∀d)(d ∈ β → F(d) = ê)

As a simple example, let us consider the following dynamic system

d²z/dt² + 2d(dz/dt) + z = 0

Usually, according to the value of the parameter d, the forms of dynamic behavior of the above system are classified as "overdamped" (= e₁), "critically damped" (= e₂), "underdamped" (= e₃), "steady-state oscillation" (= e₄), and "unstable" (= e₅). Then we have that

F(d) = e₁ if d > 1
       e₂ if d = 1
       e₃ if 0 < d < 1
       e₄ if d = 0
       e₅ if d < 0

† See Thom [15].
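The classification above, and the persistence of each form under perturbation of d, can be sketched as follows (the tolerance δ used for the numerical check is an arbitrary choice):

```python
def F(d):
    """Form of behavior of z'' + 2d z' + z = 0 as a function of d."""
    if d > 1:  return "e1"   # overdamped
    if d == 1: return "e2"   # critically damped
    if d > 0:  return "e3"   # underdamped
    if d == 0: return "e4"   # steady-state oscillation
    return "e5"              # unstable

def structurally_stable(d_hat, delta=1e-6):
    # The form at d_hat persists on a whole neighborhood (d_hat - delta, d_hat + delta).
    return F(d_hat - delta) == F(d_hat) == F(d_hat + delta)

stable_forms = {F(d) for d in (2.0, 0.5, -1.0) if structurally_stable(d)}
unstable_forms = {F(d) for d in (1.0, 0.0) if not structurally_stable(d)}
```

Any perturbation of d = 1 or d = 0 changes the form, so e₂ and e₄ fail the test, while e₁, e₃, and e₅ survive.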
If θ_D and θ_E are taken as the usual topology and the discrete topology, respectively, it is easy to see that the forms e₁, e₃, and e₅ are structurally stable, while the others are not.
In conclusion, the following remark is of conceptual importance. The stability concept in this chapter is defined relative to a given family of subsets θ. If no restrictions are imposed, every system can be viewed as stable by an appropriate selection of θ. However, most often the family θ is given, and the question is to determine the stability of the system relative to that θ. We shall consider that problem extensively for the stability of sets as introduced in Definition 1.4.

2. STABILITY OF SETS FOR GENERAL SYSTEMS
Stability can be characterized in terms of the properties of the function F. Specifically, we shall derive in this section necessary and sufficient conditions for stability of sets as given in Definition 1.4, and therefore the term "stability" in this section will refer henceforth solely to that concept.

(a) Preliminaries

As mentioned in Section 1, a family of subsets θ ⊂ Π(C) is introduced in order to evaluate the changes in C. This notion will be strengthened by introducing the concept of a generalized distance function. This will enable the introduction of a Lyapunov-type function and a characterization of stability in terms of the existence of such a function.
Definition 2.1. Given a set C and a family of subsets of C, θ ⊂ Π(C). Suppose there are given a complete lattice W with the least element 0 and, for each c, a subset W_c ⊂ W, such that there is given a function ρ : C × C → W which satisfies the following:

(i) ρ(c, c′) ≥ 0 and ρ(c, c) = 0
(ii) ρ(c, c′) = ρ(c′, c)
(iii) ρ(c, c″) ≤ ρ(c, c′) ∨ ρ(c′, c″)
(iv) Let S(c, w) = {c′ : ρ(c, c′) ≤ w}. Then {S(c, w) : w ∈ W_c} is a base of c ∈ C in the sense

(α) (∀w ∈ W_c)(S(c, w) ∈ θ)
(β) (∀α ∈ θ)(c ∈ α → (∃w ∈ W_c)(c ∈ S(c, w) ⊂ α))

Then ρ is called a generalized pseudodistance function with respect to θ. If W_c is a sublattice for each c, ρ is called a generalized distance function. The following is a main result.
Proposition 2.1. Let C be an arbitrary set and θ a family of subsets of C. (C, θ) has a generalized pseudodistance function ρ.

PROOF. Let θ = {α_i : i ∈ I}, where I is an index set for θ. Let W = {0, 1}^I = {w : I → {0, 1}}. If an ordering ≤ is defined on W as w ≤ w′ ↔ (∀i)(w(i) ≤ w′(i)), where 0 < 1, then W is a complete lattice whose greatest element is 1̄, where 1̄(i) = 1 for every i ∈ I, and whose least element is 0̄, where 0̄(i) = 0 for every i ∈ I. Let ρ : C × C → W be such that

ρ(c, c′) = w ↔ w(i) = 1 if (c ∈ α_i & c′ ∉ α_i) ∨ (c ∉ α_i & c′ ∈ α_i), and w(i) = 0 otherwise

Let w_i ∈ W be such that

w_i(j) = 0 if i = j, and w_i(j) = 1 otherwise

Let W_c = {w_i : c ∈ α_i & i ∈ I}. Then we can show that conditions (i), (ii), (iii), and (iv) are satisfied.
Condition (i). Apparently, ρ(c, c′) ≥ 0̄, and since c ∈ α_i & c ∉ α_i is impossible, ρ(c, c) = 0̄.
Condition (ii). Let ρ(c, c′) = w and ρ(c′, c) = w′. Then for any i ∈ I,

w(i) = 1 ↔ (c ∈ α_i & c′ ∉ α_i) ∨ (c ∉ α_i & c′ ∈ α_i) ↔ w′(i) = 1

Hence, ρ(c, c′) = ρ(c′, c).
Condition (iii). Let ρ(c, c′) = w, ρ(c′, c″) = w′, and ρ(c, c″) = w″. Then for any i ∈ I,

w″(i) = 1 → (c ∈ α_i & c″ ∉ α_i) ∨ (c ∉ α_i & c″ ∈ α_i) → w(i) = 1 ∨ w′(i) = 1

Hence, ρ(c, c″) ≤ ρ(c, c′) ∨ ρ(c′, c″).
Condition (iv). (α) Let w_i ∈ W_c be arbitrary. Then we shall show that ρ(c, c′) ≤ w_i ↔ c′ ∈ α_i. Let ρ(c, c′) = w′. If c′ ∈ α_i, then w′(i) = 0. Hence w′ ≤ w_i. If c′ ∉ α_i, then w′(i) = 1. Hence w′ ≰ w_i. Consequently, S(c, w_i) = α_i ∈ θ. (β) Let α_i ∈ θ be arbitrary. When c ∈ α_i holds, w_i ∈ W_c. Hence, c ∈ S(c, w_i) = α_i. Q.E.D.

It is clear that the generalized distance function introduced in the proof of Proposition 2.1 is a formalization of the intuition that the change from c to c′ is smaller than that from c to c″ if (∀α ∈ θ)(c ∈ α & c″ ∈ α → c′ ∈ α).
We shall consider a characterization of the stability by using a generalized distance function. Let C′ be a subset of C and θ a subset of Π(C). Let {α : α ∈ θ & C′ ⊂ α} be denoted by 𝔑(C′). Then we have the following corollary.

Corollary 2.1. Let C′ be a subset of C and θ a subset of Π(C). Let ρ : C × C → W be the generalized pseudodistance function given in the proof of Proposition
2.1. Let ρ(C′, c) = inf_{c′∈C′} ρ(c′, c). Then there exists W_{C′} ⊂ W for C′ such that the following relations hold:

(a) ρ(C′, c) ≥ 0̄ and if c ∈ C′, then ρ(C′, c) = 0̄
(b) ρ(C′, c″) ≤ ρ(C′, c′) ∨ ρ(c′, c″)
(c) (∀w)(w ∈ W_{C′} → S(C′, w) ∈ 𝔑(C′))
(d) (∀α ∈ 𝔑(C′))(∃w ∈ W_{C′})(C′ ⊂ S(C′, w) ⊂ α)

PROOF. Let W_{C′} = ∩_{c∈C′} W_c. We shall show that conditions (a), (b), (c), and (d) are satisfied.
Condition (a). ρ(C′, c) ≥ 0̄ is clear. If c ∈ C′, ρ(C′, c) ≤ ρ(c, c) = 0̄. Hence, ρ(C′, c) = 0̄.
Condition (b). ρ(c, c″) ≤ ρ(c, c′) ∨ ρ(c′, c″) holds for every c ∈ C′. Hence,

ρ(C′, c″) ≤ ρ(c, c″) ≤ ρ(c, c′) ∨ ρ(c′, c″)

holds for every c ∈ C′, which implies that ρ(C′, c″) ≤ ρ(C′, c′) ∨ ρ(c′, c″).
Condition (c). If w ∈ W_{C′}, then w(i) takes the value 1 for all i ∈ I except one point where w(i) = 0 and α_i ∈ 𝔑(C′). Hence, S(C′, w) = {c : ρ(C′, c) ≤ w} = α_i ∈ 𝔑(C′), where w(i) = 0.
Condition (d). If α_i ∈ 𝔑(C′), then w_i ∈ W_{C′} and hence C′ ⊂ S(C′, w_i) = α_i. Q.E.D.

(b) Main Theorem†
Without introducing any more structure, we can give necessary and sufficient conditions for stability by using Corollary 2.1. Let ψ be a preordering relation in a set C. Let N be an arbitrary partially ordered set and N⁺ a fixed subset of N. We can then introduce the following definition [13].

Definition 2.2. A function f : C → N is a Lyapunov-type function for a subset C* ⊂ C if and only if

(i) (∀c)(∀c′)[(c, c′) ∈ ψ → f(c) ≥ f(c′)]
(ii) (∀n)(∃α)(∀c)[n ∈ N⁺ & C* ⊂ α & c ∈ α → f(c) ≤ n]
(iii) (∀α)(∃n)(∀c)[n ∈ N⁺ & ψ(C*) ⊂ α & f(c) ≤ n → c ∈ α]

Theorem 2.1. Let C be an abstract set, ψ a preorder in C, and θ an arbitrary family of subsets of C. Then a subset C* is stable relative to θ and ψ if and only if there exists a Lyapunov-type function f : C → N.

PROOF. Let us consider the if part first. Let α ∈ 𝔑(ψ(C*)) be arbitrary. Then it follows from condition (iii) that for some n̂ ∈ N⁺, {c : f(c) ≤ n̂} ⊂ α. By applying condition (ii) to n̂, we have that there exists β ∈ 𝔑(C*) such that
† See Yoshii [42].
β ⊂ {c : f(c) ≤ n̂} ⊂ α. Furthermore, condition (i) implies that if f(c) ≤ n̂ and if (c, c′) ∈ ψ, then f(c′) ≤ n̂. Hence, ψ(β) ⊂ {c : f(c) ≤ n̂} ⊂ α. Consequently, C* is stable.
Let us consider the only if part next. Let N = W and N⁺ = W_Ĉ, where Ĉ = ψ(C*) and f(c) = sup_{c′∈ψ(c)} ρ(Ĉ, c′), and where W, W_Ĉ, and ρ are given in Corollary 2.1. We shall show that the three conditions (i), (ii), and (iii) are satisfied.
Condition (i). Since f(c) = sup_{c′∈ψ(c)} ρ(Ĉ, c′) and since ψ is transitive, condition (i) is obviously satisfied.
Condition (ii). Let n ∈ N⁺ be arbitrary. Since Ĉ ⊂ S(Ĉ, n) ∈ 𝔑(Ĉ) from Corollary 2.1 and since C* is stable, there exists α ∈ 𝔑(C*) such that ψ(α) ⊂ S(Ĉ, n); that is,

(∀c)(∀c′)(c ∈ α & (c, c′) ∈ ψ → ρ(Ĉ, c′) ≤ n)

Condition (iii). Let α ∈ 𝔑(Ĉ) be arbitrary. Then there exists n ∈ N⁺ from Corollary 2.1 such that Ĉ ⊂ S(Ĉ, n) ⊂ α. If f(c) ≤ n, then

ρ(Ĉ, c) ≤ f(c) ≤ n

which implies c ∈ S(Ĉ, n) ⊂ α. Q.E.D.

In many cases, the Lyapunov stability is investigated in reference to an invariant set C*; i.e., C* = ψ(C*). When C* is invariant, Definition 1.4 can be restated as follows. An invariant set C* ⊂ C is stable relative to ψ and θ if and only if

(∀α ∈ 𝔑(C*))(∃β ∈ 𝔑(C*))(ψ(β) ⊂ α)
In other words, when C* is not an invariant set, α is taken as a neighborhood of ψ(C*), but when C* is invariant, α is taken as a neighborhood of C*. Consequently, in the case of an invariant set, the definition of a Lyapunov-type function becomes the following definition.

Definition 2.3. A function f : C → N is a Lyapunov-type function for an invariant set C* ⊂ C if and only if

(i) (∀c)(∀c′)((c, c′) ∈ ψ → f(c) ≥ f(c′))
(ii) (∀n)(∃α)(∀c)(n ∈ N⁺ & C* ⊂ α & c ∈ α → f(c) ≤ n)
(iii) (∀α)(∃n)(∀c)(n ∈ N⁺ & C* ⊂ α & f(c) ≤ n → c ∈ α)

The difference between Definition 2.2 and Definition 2.3 is that in condition (iii) of Definition 2.2, α is a superset of ψ(C*), while in Definition 2.3 α is a superset of C*. Apparently, Theorem 2.1 holds for Definition 2.3 also.
Corollary 2.2. Let C be an abstract set, ψ a preordering in C, and θ an arbitrary family of subsets of C. Then an invariant set C* ⊂ C is stable relative to θ and ψ if and only if there exists a Lyapunov-type function f : C → N as specified in Definition 2.3.
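As a concrete illustration of Definition 2.3 (a hypothetical choice of system, not from the text), take C = R with the usual topology, the preorder ψ generated by the input-free transition c ↦ c/2, the invariant set C* = {0}, N = R with N⁺ the positive reals, and f(c) = |c|. The sketch checks conditions (i)–(iii) on sampled data, using intervals (−r, r) as the neighborhoods α:

```python
# Successors of c under the input-free transition c -> c/2
# (psi(c) sampled to finite depth; hypothetical system).
def psi(c, depth=20):
    return [c / 2.0 ** k for k in range(depth)]

f = abs                         # candidate Lyapunov-type function for C* = {0}

samples = [x / 10.0 for x in range(-50, 51)]

# (i) f does not increase along the preorder psi.
cond_i = all(f(cp) <= f(c) for c in samples for cp in psi(c))

# (ii) for n > 0 the neighborhood alpha = (-n, n) of C* forces f <= n on alpha.
def cond_ii(n):
    return all(f(c) <= n for c in samples if -n < c < n)

# (iii) for a neighborhood alpha = (-r, r) of C*, n = r/2 works:
#       f(c) <= n implies c in alpha.
def cond_iii(r):
    n = r / 2.0
    return all(-r < c < r for c in samples if f(c) <= n)

ok = (cond_i and all(cond_ii(n) for n in (0.1, 1.0, 3.0))
      and all(cond_iii(r) for r in (0.2, 1.0, 4.0)))
```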
(c) Application of the Main Theorem It should be noticed that the set C so far has no additional structure except that implied by the arbitrary family of subsets 8. Theorem 2.1 can be applied to a large number of specialized cases by adding more structure to basic sets and functions involved. It is interesting to note that the basic structure of a Lyapunov-type function and the necessary and sufficient conditions are captured by Definition 2.2 and Theorem 2.1. The theorem applies directly to more specific systems, and to show the validity of necessary and sufficient conditions for any particular case, one only has to check that the additional structure is not introduced in a way that violates the basic conditions. Apparently, Theorem 2.1 is true when 8 is a general topology; i.e., it satisfies the conditions : (i) C E 8 and the empty set 4 E 8. (ii) The union of any number of members of 8 is again a member of 8. (iii) The intersection of any finite number of members of 8 is again a member of 8. We have therefore the following corollary.
Corollary 2.3. Let C be a general topological space, C* ⊂ C, and ψ a preorder in C. C* is then stable if and only if there exists a Lyapunov-type function f : C → N.

Finally, we shall consider stability in a metric space [16]. A metric space is interesting on two accounts. First, a metric space can illustrate the basic idea of the generalized distance function. Since the actual construction of a Lyapunov-type function is based on the generalized distance function, the idea of a Lyapunov-type function can be explored clearly in a metric space. Second, the range of a Lyapunov-type function is, in general, much more complicated than that of a conventional scalar function. However, when C is a metric space, a Lyapunov-type function whose range is the real line (exactly speaking, the nonnegative real line) can be constructed.

Let C be a metric space whose metric is ρ : C × C → R and whose topology 𝒯 is the metric topology. For a subset C′ ⊂ C, let

ρ(C′, c) = inf {ρ(c′, c) : c′ ∈ C′}
S(C′, ε) = {c : ρ(C′, c) < ε}

Then if WC′ = R⁺, where R⁺ is the set of positive real numbers, the following facts are apparently true:

(i) ρ(C′, c) ≥ 0, and if c ∈ C′, then ρ(C′, c) = 0
(ii) (∀w)(w ∈ WC′ → S(C′, w) ∈ N(C′))
where N(C′) is the whole family of open sets including C′. Furthermore, if C′ is a compact set, the following also holds:

(iii) (∀α ∈ N(C′))(∃w ∈ WC′)(C′ ⊂ S(C′, w) ⊂ α)

As a matter of fact, if (iii) is not true, then for any n (n = 1, 2, 3, ...) there exists cn ∈ S(C′, 1/n) ∩ (C \ α). Since cn ∈ S(C′, 1/n), there exists ĉn ∈ C′ such that ρ(cn, ĉn) < 2/n; since C′ is compact, the sequence {ĉn} has an accumulation point ĉ0 ∈ C′. For the sake of simplicity, suppose {ĉn} converges to ĉ0; then

ρ(cn, ĉ0) ≤ ρ(cn, ĉn) + ρ(ĉn, ĉ0) → 0    as n → ∞

This contradicts the fact that cn ∈ C \ α.
As the proof of Theorem 2.1 shows, the existence of a Lyapunov-type function for the stability of C*, where C′ = ψ(C*), depends on the above three properties, and the suggested Lyapunov-type function f : C → R⁺ ∪ {0} is such that

f(c) = sup {ρ(ψ(C*), c′) : c′ ∈ ψ(c)}

or, if C* is an invariant set,

f(c) = sup {ρ(C*, c′) : c′ ∈ ψ(c)}

Formally, we have the following theorem.

Theorem 2.2. Let (C, ρ) be a metric space, C* a compact subset of C, and ψ a preorder on C. Then C* is stable relative to 𝒯 and ψ if and only if there exists a Lyapunov-type function f : C → R⁺ ∪ {0}, where R⁺ is the set of positive real numbers and N⁺ = R⁺.

PROOF. The if part is obvious. In the proof of the only if part, we have to be careful with condition (iii). In the present case, n ≥ ρ(C′, c) does not imply that c ∈ S(C′, n). However, since there exists 0 < n′ < n such that C′ ⊂ S(C′, n′) ⊂ S(C′, n) ⊂ α, we have that n′ ≥ ρ(C′, c) implies c ∈ S(C′, n) ⊂ α. Q.E.D.
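The generalized distance and the sets S(C′, ε) are easy to state concretely. The following Python sketch (the helper names are ours; a finite subset of the Euclidean plane stands in for the metric space C, so the infimum is a minimum) computes ρ(C′, c) and tests membership in S(C′, ε):

```python
import math

def dist_to_set(C_prime, c):
    """rho(C', c) = inf { rho(c', c) : c' in C' }, for a finite C'
    in the Euclidean plane (so the infimum is attained)."""
    return min(math.dist(p, c) for p in C_prime)

def in_S(C_prime, c, eps):
    """Membership in S(C', eps) = {c : rho(C', c) < eps}."""
    return dist_to_set(C_prime, c) < eps

C_star = [(0.0, 0.0), (1.0, 0.0)]        # a compact (finite) subset C*
print(dist_to_set(C_star, (1.0, 2.0)))   # 2.0
print(in_S(C_star, (1.0, 2.0), 2.5))     # True
```

Property (i) above is visible directly: a point of C′ itself has distance 0 and therefore lies in every S(C′, ε).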
Chapter X
INTERCONNECTIONS OF SUBSYSTEMS, DECOMPOSITION, AND DECOUPLING
An important application of general systems theory is in the large-scale systems area. Essential in this application is the ability to deal with a system as a family of explicitly recognized and interconnected subsystems. Actually, this is how the so-called "systems approach" is defined in many instances in practice, and the entire systems theory concerned with a system as an indivisible entity is viewed as nothing but a necessary prerequisite for the consideration of the large-scale and complex problems of real importance. In this chapter, we shall be concerned with various questions related to interactions between subsystems that are interconnected and constitute a given system; the objective is to provide a foundation for new developments in systems theory aimed at large-scale systems applications. To demonstrate the breadth of possible applications, specific problems in two different areas are considered. Conditions for decoupling of a multivariable system by means of feedback are given in terms of the functional controllability of the system; both the general and the linear systems cases are considered. Conditions for decomposition of a finite discrete-time system into a special arrangement of simpler subsystems are given.
1. CONNECTING OPERATORS
Interconnection of two or more systems is an extremely simple operation. All one needs to do is to connect an output of a system with the input of another system or to apply the same input to two systems; in practice, e.g., that might simply mean "connecting the wires" as indicated. Unfortunately, formalization of that simple notion is rather cumbersome, primarily because of the "bookkeeping" difficulties, i.e., the need to express precisely what is connectable with what and indeed what is actually being connected. To avoid being bogged down by unduly pedantic definitions, we shall first introduce a class of connectable systems and then define various connecting operations in that class.

For any object V = V1 × ⋯ × Vn, we shall denote by V̄ the family of component sets of V, V̄ = {V1, ..., Vn}. Let Si ⊂ Xi × Yi be a general system with the objects

Xi = ×{Xij : j ∈ Ixi},    Yi = ×{Yij : j ∈ Iyi}

In general, some, but not all, of the component sets of Xi are available for connection. Let Zxi denote the Cartesian product of such component sets; i.e., Xij ∈ Z̄xi means that Xij is a component set of Xi and is available for connection. Denote by X̄i* the family of all component sets not included in Z̄xi, X̄i* = {Xij : Xij ∉ Z̄xi}, and by Xi* the Cartesian product of the members of X̄i*, Xi* = ×{Xij : Xij ∈ X̄i*}. The input object can then be represented as the product of two composite components, Xi = Xi* × Zxi. Analogously, let Zyi be the Cartesian product of the output components available for connection. Given a system Si ⊂ Xi × Yi, there can be defined, in general, many "different" connectable systems Siz ⊂ (Xi* × Zxi) × (Yi* × Zyi), depending upon the selection of Zxi and Zyi. The relationship between Si as defined on Xi and Siz as defined on Xi*, Zxi, Yi*, and Zyi is obvious. In both cases, we have essentially the same system except for an explicit recognition of the availability of some components for connection. We can now define a class of connectable systems

𝒮z = {Siz : Siz ⊂ (Xi* × Zxi) × (Yi* × Zyi)}

and the connection operations within that class.

Definition 1.1. Let ∘ : 𝒮z × 𝒮z → 𝒮z be such that S1 ∘ S2 = S3, where S1 ⊂ X1 × (Y1* × Z), S2 ⊂ (X2* × Z) × Y2, Zy1 = Zx2 = Z, S3 ⊂ (X1 × X2*) × (Y1* × Y2), and

((x1, x2), (y1, y2)) ∈ S3 ↔ (∃z)((x1, (y1, z)) ∈ S1 & ((x2, z), y2) ∈ S2)

∘ is termed the cascade (connecting) operation.
Definition 1.2. Let + : 𝒮z × 𝒮z → 𝒮z be such that S1 + S2 = S3, where

S3 ⊂ (X1* × X2* × Z) × (Y1 × Y2),    Zx1 = Zx2 = Z

and

((x1, x2, z), (y1, y2)) ∈ S3 ↔ ((x1, z), y1) ∈ S1 & ((x2, z), y2) ∈ S2

+ is termed the parallel (connecting) operation.

Definition 1.3. Let F be the mapping F : 𝒮z → 𝒮z such that F(S1) = S2, where S1 ⊂ (X* × Zx) × (Y* × Zy), S2 ⊂ X* × Y*, Zx = Zy = Z, and

(x, y) ∈ S2 ↔ (∃z)(((x, z), (y, z)) ∈ S1)

F is termed the feedback (connecting) operation.

Examples of the application of the connection operators are given in Fig. 1.1. It should be noticed that the connection operators could also be defined in alternative ways. For example, rather than defining a feedback using a single system and connecting an output with an input as shown in Fig. 1.1, another subsystem could have been assumed in the feedback path as shown in Fig. 1.2. However, the three basic operations as given in Definitions 1.1 to 1.3 cover in combination most of the cases of interest, and in this sense they can be considered as the primary connecting operators. For example, the connection for the case in Fig. 1.2 is given by F(S1 ∘ S2), as shown in Fig. 1.3.

FIG. 1.1 (cascade connection, parallel connection, feedback operation)

FIG. 1.2

Notice that the operations ∘, +, and F are defined as partial functions. Although it is possible to make these functions total, we shall not do that, for the sake of simplicity. When (S1 ∘ S2) ∘ S3 is defined, we have (S1 ∘ S2) ∘ S3 = S1 ∘ (S2 ∘ S3); this follows directly from Definition 1.1 when S1, S2, and S3 are defined on appropriate objects.
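On finite systems, the three operations of Definitions 1.1 to 1.3 can be written down directly as operations on sets of input-output pairs. The following Python sketch (the encodings are ours, chosen only for illustration) treats a connectable system as a set of pairs:

```python
def cascade(S1, S2):
    """S1 o S2 per Definition 1.1: S1 in X1 x (Y1 x Z), S2 in (X2 x Z) x Y2;
    the z-output of S1 is fed into the z-input of S2."""
    return {((x1, x2), (y1, y2))
            for (x1, (y1, z)) in S1
            for ((x2, z2), y2) in S2 if z2 == z}

def parallel(S1, S2):
    """S1 + S2 per Definition 1.2: both components read the shared input z."""
    return {((x1, x2, z), (y1, y2))
            for ((x1, z), y1) in S1
            for ((x2, z2), y2) in S2 if z2 == z}

def feedback(S):
    """F(S) per Definition 1.3: (x, y) in F(S) iff ((x, z), (y, z)) in S."""
    return {(x, y) for ((x, z), (y, zp)) in S if z == zp}

S = {((0, 0), (1, 0)), ((0, 1), (2, 1)), ((1, 1), (3, 0))}
print(feedback(S) == {(0, 1), (0, 2)})   # True
```

Only the pairs whose z-input equals their z-output survive the feedback operation, which is exactly the existential condition of Definition 1.3.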
FIG. 1.3
Similarly, we have

(S1 + S2) + S3 = S1 + (S2 + S3)
if both sides are defined.

The operation ∘ does not have an identity element. However, on both the input and the output objects one can define identities Ix ⊂ X × X, Ix = {(x, x) : x ∈ X}, and Iy ⊂ Y × Y, Iy = {(y, y) : y ∈ Y}. For a given system S ⊂ X × Y, it is then possible to define the left inverse ⁻¹S ⊂ Y × X and the right inverse S⁻¹ ⊂ Y × X such that

⁻¹S ∘ S = Ix    and    S ∘ S⁻¹ = Iy
A function f defined on a family of subsets X̄i of X such that f(Xi) ∈ Xi is called a choice function. We have then the following immediate propositions.

Proposition 1.1. A system S ⊂ X × Y has a right inverse S⁻¹ ⊂ Y × X if and only if there exists a choice function f : {S(x) : x ∈ 𝒟(S)} → ℛ(S) such that {f(S(x))} ∩ S(x′) = ∅ for any x′ ≠ x, where S(x) = {y : (x, y) ∈ S}.

Proposition 1.2. Suppose Y = ℛ(S). Then a system S ⊂ X × Y has a left inverse ⁻¹S ⊂ Y × X if and only if there exists a choice function f : {(y)S : y ∈ ℛ(S)} → 𝒟(S) such that {f((y)S)} ∩ (y′)S = ∅ for any y′ ≠ y, where (y)S = {x : (x, y) ∈ S}.
For the operation +, the empty system ∅ is the identity. The three operations are interrelated; for instance, F(S1 + S2) = F(S1 ∘ S2) holds if both sides are defined.
We shall now consider how some properties of the subsystems are affected by interconnections. First, we shall look into the question of nonanticipation. In this respect, we have the following propositions.
Proposition 1.3. If S1 ⊂ X1 × (Y1 × Z) and S2 ⊂ (X2 × Z) × Y2 are nonanticipatory, so is S3 = S1 ∘ S2.

PROOF. Let (y1, z) = (ρ1y(c1, x1), ρ1z(c1, x1)) and y2 = ρ2(c2, (x2, z)) be nonanticipatory global-state representations. Suppose (x1, x2) | Tᵗ = (x1′, x2′) | Tᵗ. Then y1 | Tᵗ = y1′ | Tᵗ and z | Tᵗ = z′ | Tᵗ, where (y1′, z′) = (ρ1y(c1, x1′), ρ1z(c1, x1′)), and consequently y2 | Tᵗ = y2′ | Tᵗ, where y2′ = ρ2(c2, (x2′, z′)). Hence S1 ∘ S2 is nonanticipatory. Q.E.D.
Proposition 1.4. If S1 ⊂ (X1 × Z) × Y1 and S2 ⊂ (X2 × Z) × Y2 are nonanticipatory systems, so is S3 = S1 + S2.
Proposition 1.5. Suppose S ⊂ (X × Z) × (Y × Z) is nonanticipatory such that (y, z′) = (ρy(c, x, z), ρz(c, x, z)) is a nonanticipatory systems-response function. If z | Tᵗ = ρz(c, x, z) | Tᵗ has a unique solution z | Tᵗ for each (c, x) and t, then F(S) is also nonanticipatory.

PROOF. It follows from the definition of F(S) that

(x, y) ∈ F(S) ↔ (∃z)(((x, z), (y, z)) ∈ S)
↔ (∃z)(∃c)(y = ρy(c, x, z) & z = ρz(c, x, z))

Since z = ρz(c, x, z) has a unique solution for each (c, x), let z = φ(c, x) ↔ z = ρz(c, x, z). Then

(x, y) ∈ F(S) ↔ (∃c)(y = ρy(c, x, φ(c, x)))

Let ρ(c, x) = ρy(c, x, φ(c, x)). If ρ(c, x) is nonanticipatory, the desired result follows immediately. Let x̂ | Tᵗ = x̂′ | Tᵗ, ẑ = ρz(c, x̂, ẑ), and ẑ′ = ρz(c, x̂′, ẑ′). Since ρz is nonanticipatory, ρz(c, x̂, ẑ) | Tᵗ = ρz(c, x̂′, ẑ) | Tᵗ. Hence φ(c, x̂) | Tᵗ = φ(c, x̂′) | Tᵗ. Since ρy is nonanticipatory, we have ρ(c, x̂) | Tᵗ = ρ(c, x̂′) | Tᵗ. Q.E.D.

As an example of the application of Proposition 1.5, consider the system given in Fig. 1.4 and defined by

((x, z), (y, z′)) ∈ S ↔ (∃y(0))( y(t) = z′(t) = y(0) + ∫₀ᵗ (x(τ) + z(τ)) dτ )
FIG. 1.4
The global-state response function for z′ is then given by

ρz(c, x, z) = z′ ↔ z′(t) = c + ∫₀ᵗ (x(τ) + z(τ)) dτ

that is, z | Tᵗ = ρz(c, x, z) | Tᵗ has a unique solution z | Tᵗ for each (c, x) and t. Therefore, by Proposition 1.5, the system is nonanticipatory even after the feedback is closed. This can easily be verified by observing that the system after the feedback is closed is defined by

y(t) = z(t) = y(0) + ∫₀ᵗ (x(τ) + y(τ)) dτ,    i.e.,    dy/dt = x + y
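The closed-loop behavior dy/dt = x + y can also be checked numerically. The sketch below (a forward-Euler discretization with an assumed step size; not part of the text) illustrates the nonanticipation guaranteed by Proposition 1.5: two inputs that agree up to a given time produce outputs that agree up to that time.

```python
def closed_loop(x, y0=0.0, dt=0.01, steps=100):
    """Forward-Euler integration of dy/dt = x(t) + y(t), y(0) = y0 --
    the system of Fig. 1.4 after the feedback z = y is closed."""
    y, ys = y0, [y0]
    for k in range(steps):
        y = y + dt * (x(k * dt) + y)
        ys.append(y)
    return ys

x1 = lambda t: 1.0
x2 = lambda t: 1.0 if t < 0.5 else -1.0   # agrees with x1 on [0, 0.5)

y1, y2 = closed_loop(x1), closed_loop(x2)
print(y1[:50] == y2[:50])   # True: outputs agree while the inputs agree
print(y1[-1] == y2[-1])     # False: the outputs diverge afterwards
```

The output up to any time t is determined by the input restricted to [0, t), which is precisely the nonanticipation property.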
The linearity of a system is preserved by the three interconnecting operations. Recall that a linear system is defined on a linear space.

Proposition 1.6. Suppose S1 and S2 are linear systems. Then S1 ∘ S2, S1 + S2, and F(S1) are linear whenever S1 ∘ S2, S1 + S2, and F(S1) are defined.
PROOF. We shall consider only the case of feedback. Let S1 ⊂ (X × Z) × (Y × Z). If (x, y) ∈ F(S1) and (x′, y′) ∈ F(S1), then there exist z ∈ Z and z′ ∈ Z such that (x, z, y, z) ∈ S1 and (x′, z′, y′, z′) ∈ S1.† Since S1 is linear, (x + x′, z + z′, y + y′, z + z′) ∈ S1. Hence, (x + x′, y + y′) ∈ F(S1). Similarly, (αx, αy) ∈ F(S1), where α is a scalar multiplier. Q.E.D.
In subsequent sections we shall consider functional systems. Recall that:

(i) a system S ⊂ X × Y is functional if and only if

(x, y) ∈ S & (x, y′) ∈ S → y = y′

(ii) a system S ⊂ X × Y is one-to-one functional if and only if S is functional and

(x, y) ∈ S & (x′, y) ∈ S → x = x′
Proposition 1.7. If S1 and S2 are functional, then S1 ∘ S2 and S1 + S2 are functional if they are defined. Furthermore, one-to-one functionality is also preserved by the cascade and parallel operations.
In general, functionality is not preserved by the feedback operation.

Proposition 1.8. Suppose S ⊂ (X × Z) × (Y × Z) is functional. Let

S(x) = {z : (∃y)((x, z, y, z) ∈ S)}
S(x, y) = {z : (∃z′)((x, z, y, z′) ∈ S)}

Then F(S) is functional if and only if for each x ∈ X

(C1)    (∃y)(S(x) = S(x, y))

holds. In particular, if S satisfies the relation

(x, z, y, z) ∈ S & (x, z′, y′, z′) ∈ S → z = z′

F(S) is also functional.

PROOF. Suppose condition (C1) holds. Suppose (x, y) ∈ F(S) and (x, y′) ∈ F(S). Then there exist z and z′ such that (x, z, y, z) ∈ S and (x, z′, y′, z′) ∈ S hold. Therefore, z ∈ S(x) and z′ ∈ S(x). It follows from condition (C1) that there exists ŷ such that z ∈ S(x, ŷ) and z′ ∈ S(x, ŷ) hold; that is, (x, z, ŷ, ẑ) ∈ S and (x, z′, ŷ, ẑ′) ∈ S hold for some ẑ and ẑ′. Since S is functional by assumption, (x, z, y, z) ∈ S and (x, z, ŷ, ẑ) ∈ S imply that y = ŷ. Similarly, we have y′ = ŷ. Consequently, y = y′.
† For notational convenience, we shall write (x, z, y, z) instead of ((x, z), (y, z)).
Conversely, suppose F(S) is functional. Let x̂ ∈ X be arbitrary. If there is no ŷ such that (x̂, ŷ) ∈ F(S), then S(x̂) = ∅, and S(x̂) ⊂ S(x̂, y) trivially holds. Suppose (x̂, ŷ) ∈ F(S). Let z ∈ S(x̂) be arbitrary. Then there exists y such that (x̂, z, y, z) ∈ S. Since F(S) is functional, y = ŷ holds. Consequently, z ∈ S(x̂, ŷ). Q.E.D.

2. SUBSYSTEMS, COMPONENTS, AND DECOMPOSITION
There are many ways in which the subunits of a system can be defined. We shall introduce only some of the notions that seem to be among the most interesting in application.

Definition 2.1. Let S ⊂ X × Y be a general system. A subsystem of S is a subset S′ ⊂ S, S′ ⊂ X × Y. A component of S is a system S* such that S can be obtained from S* (possibly in conjunction with some other systems) by the application of the basic connecting operators.

It should be noticed that in application the term "subsystem" is used for a whole range of different concepts, including the notion of a component as given in Definition 2.1. This should be borne in mind in the interpretation of the statements from this chapter.

We shall first consider the component-system relationship. Given two systems S1 ⊂ X1 × Y1 and S2 ⊂ X2 × Y2, let Π1 and Π2 be the projection operators

Π1 : (X1 × X2) × (Y1 × Y2) → (X1 × Y1)
Π2 : (X1 × X2) × (Y1 × Y2) → (X2 × Y2)

such that

Π1(x1, x2, y1, y2) = (x1, y1)    and    Π2(x1, x2, y1, y2) = (x2, y2)

Let S ⊂ (X1 × X2) × (Y1 × Y2).
=
n,(S) are considered
+ S , , and (n,(S),n,(S))
Definition 2.3. A noninteractive decomposition ( S , , . . . , S,), where S , + = S is called a maximal noninteractive decomposition if and only if no Si has a (nontrivial) noninteractive decomposition.
..’ + S ,
2 . Subsystems, Components, and Decomposition
179
It is an interesting question whether or not a system has a maximal noninteractive decomposition. It is obvious that if a system has finitely many components, i.e., X = X , x ... x X , and Y = Y, x ... x Y, for some n and rn, it has a maximal noninteractive decomposition. This problem will be considered in relation to the feedback in Section 4. Let us consider basic decompositions by components. Proposition 2.1. Every system S c ( X , x X,) x (Y, x Y,) can be decomposed as S = %(S, S,), that is, in cascade and feedback components, where S , c ( X , x Z , ) x (Y, x Z , ) and S2 c ( X , x Z,) x (Y, x Zl), and Z1and 2, are auxiliary sets. (See Fig. 2.1.) 0
FIG.2.1
Proposition 2.2. Every system S c ( X , x X,) x (Y, x Yz) can be decomposed into cascade components as shown in Fig. 2.2.
180
Chapter X
Subsystems, Decomposition, Decoupling
FIG.2.2
Proposition 2.3. Let S c (X, x X , ) x (Y, x Y2) and S(x) = {y : (x, y) E S}, where X = X , x X, and Y = Yl x Y,. Let n l ( y l ,y2),= y1 and n2(y1,y2) = y,, Then, if and only if S(x) = n,(S(x)) x n,(S(x)) for every x E 9(S), S can be decomposed by some S, and S , as shown in Fig. 2.3.
FIG.2.3 PROOF. Suppose S(x) = nI(S(x)) x n2(S(x))for every x ~ 9 ( S )Let . S, c X x Yl be such that
(X,Yl)ESl ~ ( ~ Y , ) ( ( X , Y l ~ Y , ) ~ S )
181
2. Subsystems, Components, and Decomposition and S, c X x Y, be such that ( x ,Y,)
E Sz c-) V Y , ) ( ( XY ,I
Y Z )E S )
Apparently (X,Y,,YZ)ES
+
(x,Y,)ES,&(x,Y,)ES*
Conversely, suppose ( x ,y , ) E S , & ( x ,y,) E S , . Then y , E n , S ( x ) and y 2 E n,S(x); that is, ( y , , y , ) ~ S ( x because ) S(x) = ll,(S(x))x ll,(S(x)). Consequently, ( x ,Y , ,Y z ) E s. Suppose there exist S , c X x Y, and S , c X x Y, such that the above decomposition is realized. Apparently, S(x) c ll,(S(x))x H,(S(x)). Let y , E II,(S(x))and y , E lT,(S(x))be arbitrary. Then ( x ,y,) E S , and ( x ,y,) E S2 Q.E.D. should hold. Consequently, ( x ,y , ,y,) E S. Proposition 2.4. Every system S c X x Y can be decomposed by some S‘ as shown in Fig. 2.4, where
--
( x ,x’)E Ex
S ( X ) = S(X’)
(Y, Y‘) E E ,
(Y)S = ( Y ’ P
-
S(x) and (y)S are defined in Propositions 1.1 and 1.2, and q x : X X I E X , q, : Y -+ Y / E , are the natural mappings ; that is, ( [ y ] ,y ) E q i l b]= qy(y). -+
Suppose 2 E [ x ] and 9 E [ y ] such that (a,9)E S. Such elements exist for ( [ x ][, y ] )E S’ by definition. Since x’ E [XI, we have S(2) = S(x‘). Furthermore, 9 E S(2) implies (x’,9) E S. Consequently, x‘ E (9)s.On the other hand, y’ E b] and 9 E [ y ] imply (y‘)S = (9)s.Hence, we have x‘ E (y‘)S,that is, (x’,y’) E S. Q.E.D. In conclusion, we shall briefly consider the subsystem-system relationship. Definition 2.4. Let Si (i E I ) be a functional subsystem of S c X x Y; i.e., Si c S and Si:9(S) -+ Y. If Si = S, then {Si : i E I } is called a functional
system decomposition.
uieI
182
Chapter X
Subsystems, Decomposition, Decoupling
A functional system decomposition is apparently equivalent to a globalstate representation. Such a decomposition is, therefore, always possible and is not unique. Definition 2.5. Let S, = { Sai: i E I,} and S, =_ { Sp,; i E I,} be two functional system decompositions of S c X x Y. Let S, I S , if and only if S, c S,. Then if a decomposition Sosatisfies the relation that for any functional system decomposition S,
s, I So + So = S, So is called a minimal functional system decomposition of S . It can be shown quite readily that every finite system has a minimal functional system decomposition.
3. FEEDBACK CONNECTION OF COMPONENTS
In this section, we shall consider the feedback system as given in Fig. 3.1. The overall system is defined as
s c(X
x Z,) x ( Y x Z,)
and the feedback component itself is
Denote by S , the set of all input-output pairs of the system S in X x Y, i.e.,
and by S , the class of all possible feedback components
S, = { S , : S , c z,x Z,} Let
be the map e : S , + S such that %(Sf) = F ( S S,) 0
183
3. Feedback Connection of Components X
*Y
zY
FIG.3.1
For any given feedback component, Fsgives the resulting overall system. % therefore indicates the consequences of applying a given feedback. Some conceptually important properties of % are given by the following propositions.
Q.E.D.
Proposition 3.2. Let 4 :lI(S,)
+
sfbe such that
(z’,z) E %(Sf) * (3(x, Y ) E S’)(((X,
4,( Y , z‘))
E S)
Then S’ c E(gs(S’)) holds. If S satisfies the relation
(C1) then
s
(x, z, y, z’) E & (a,z,
*((Y(S’))
=
9,z’) E s + (x, y) = (a,9) S’ * s’E a(*)
184
Chapter X
PROOF.
Subsystems, Decomposition, Decoupling
In general, since S’ c S,, ( x ,Y )E S’
+
+
(3(z,z’))((x,z, Y , z? E s & (z’, z ) E %(V) ( x ,Y ) E %(%S’))
Suppose condition (Cl) holds. Obviously, Fs(%(S)) = S’ + S’ E a(%). Conversely, we have that (x, Y ) E
Y(S”
W’,z))((z’,z ) E %(Sf) & ( x ,z, y , z’) E S )
+
(3(z’,z))(3(%9))((%9) E S’ & (?, z , 9, z‘)
+
E
s
& ( x ,z, Y , z’) E S ) +
(x, Y ) = (299)E S’
(because of the condition stated in the proposition). Consequently, since S’ c %(%(s’)) generally holds, we have S’ = %(%(Sf)). Q.E.D. Let
sf, = ((z’,z) :(3(x,y))((x,z, y , z’) E S ) } c Z, Sf* = (sf* : sf*c Sf,}
x 2,. Let
Proposition 3.3. Suppose S satisfies the relation ( x ,z, y, z’)
E
s & (x, 2, y, 1’)E s + (z, z’) = (2, 1’)
The restriction of % to Sf*, S’ = %(Sf*),then Sf* = %‘(S‘). PROOF.
--
Sf* is then a one-to-one mapping, and if
Notice that if S’ = %(Sf*)holds, we have
(z’,4 E SS(S’)
(3(x,Y ) ) ( ( X , Y ) E S‘ & ( x ,z, y , z’) E S )
( Y x ,Y ) ) W ’ , 2))@’, 2) E Sf*& (x, 2, Y , 1’)E s 8t ( x ,z, y, z’) E S )
(z’, z ) = (2’, 2) E s,* Q.E.D.
Consider now the conceptual interpretation of Propositions 3.1, 3.2, and 3.3. Important properties of a feedback and the consequences of applying feedback to a given system (e.g., for the purpose of decoupling) can be studied in reference to the properties of Fs. Proposition 3.1 indicates that the application of any feedback results in a restriction in SB. To illustrate the importance of Proposition 3.2, consider a feedback connection defined on a linear space as given in Fig. 3.2, i.e., S ⊂ (X × Zx) × (Y × Zy), Zx = X, Zy = Y, and

(x, z, y, z′) ∈ S ↔ z′ = y & (x + z, y) ∈ S1
FIG. 3.2
Since X is linear, the addition operation is defined in X. Assume that S is functional (e.g., an initial state is fixed) and the input space is reduced such that S1 is a one-to-one mapping. The conditions from Propositions 3.2 and 3.3 are then satisfied. Suppose S′ is an arbitrary subsystem of SB, S′ ⊂ SB. Proposition 3.2 then gives the conditions under which S′ can be synthesized by a feedback: S′ can be obtained from S by applying feedback if and only if Fs(Gs(S′)) = S′; furthermore, if S′ can be synthesized, the required feedback is given by Gs(S′). This will be studied further in Section 4.

Additional properties of Fs and Gs are given in the following propositions.
Proposition 3.4. Fs is an order homomorphism, i.e., Sf ⊂ Sf′ → Fs(Sf) ⊂ Fs(Sf′).

Proposition 3.5. Gs is an order homomorphism, i.e., S′ ⊂ Ŝ′ → Gs(S′) ⊂ Gs(Ŝ′).

Proposition 3.6. Suppose S satisfies the condition of Proposition 3.3. Then the class of the systems that can be synthesized by feedbacks forms a closure system on SB, and Fs ∘ Gs is the closure operator for that closure system.
PROOF. Let J = Fs ∘ Gs : Π(SB) → Π(SB). It follows from Propositions 3.4 and 3.5 that if S′ ⊂ Ŝ′ ⊂ SB, then J(S′) ⊂ J(Ŝ′). Proposition 3.2 shows that S′ ⊂ J(S′) for any S′ ⊂ SB. Furthermore, since Fs(Gs(S′)) ∈ ℛ(Fs) for any S′, it follows from Proposition 3.2 also that JJ(S′) = J(S′) for any S′ ⊂ SB. Since J(S′) = S′ ↔ S′ ∈ ℛ(Fs), ℛ(Fs) forms a closure system on SB. Q.E.D.
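For finite encodings, the maps Fs and Gs are directly computable, and the containment S′ ⊂ Fs(Gs(S′)) of Proposition 3.2 can be checked mechanically. A sketch (with an assumed four-tuple encoding (x, z, y, z′) of the elements of S; the function names are ours):

```python
def F_s(S, Sf):
    """Fs(Sf) = F(S o Sf): keep (x, y) when the fed-back pair lies in Sf."""
    return {(x, y) for (x, z, y, zp) in S if (zp, z) in Sf}

def G_s(S, Sp):
    """Gs(S'): the feedback suggested by a target behavior S' in SB."""
    return {(zp, z) for (x, z, y, zp) in S if (x, y) in Sp}

S = {(0, 0, 1, 0), (0, 1, 2, 1), (1, 0, 3, 1)}
target = {(0, 1)}
print(target <= F_s(S, G_s(S, target)))   # True: S' is contained in Fs(Gs(S'))
```

Iterating F_s(S, G_s(S, -)) reproduces the closure operator J of Proposition 3.6 on this encoding.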
4. DECOUPLING AND FUNCTIONAL CONTROLLABILITY
The objective of this section is to give necessary and sufficient conditions for the decoupling of a system via feedback. It represents an illustration of the use of the general systems framework developed in this chapter. Many other systems problems that are essentially structural or algebraic in nature can be treated in a similar way.

(a) Decoupling of Functional Systems
We shall consider a system S ⊂ (X × Zx) × (Y × Zy) with a feedback component Sf ⊂ Zy × Zx connected as shown in Fig. 4.1. Furthermore, it will always be assumed that the following condition is satisfied:

(P1)    (x, zx, y, zy) ∈ S → y = zy

FIG. 4.1

i.e., Zy = Y. We shall consider, therefore, the feedback system as defined on (X × Zx) × Y rather than on (X × Zx) × (Y × Zy). The notation F(S ∘ Sf), where Sf ⊂ Y × Zx, is then not quite consistent with the previous definitions. However, this inconsistency should be allowed here for simplification of the notation. Before considering the question of decoupling explicitly, some additional properties of the feedback connection need to be established.
Proposition 4.1. Consider a feedback system S ⊂ (X × Zx) × Y whose feedback component is Sf ⊂ Y × Zx. Suppose

(i) Sf and Fs(Sf) are functional, i.e., Sf : (Y) → Zx and Fs(Sf) : (X) → Y;
(ii) S satisfies the condition

(P2)    (x, z, y) ∈ S & (x′, z, y) ∈ S → x = x′

Then Fs(Sf) : (X) → Y is a one-to-one function.
PROOF. Suppose Fs(Sf)(x) = y and Fs(Sf)(x′) = y. Then

(∃z)((x, z, y) ∈ S & z = Sf(y))

and

(∃z′)((x′, z′, y) ∈ S & z′ = Sf(y))

Consequently, we have z = z′. Hence, (P2) implies x = x′. Q.E.D.
Using Proposition 4.1 we shall show a basic result.
Proposition 4.2. Suppose

(i) S is functional, S : 𝒟(S) → Y;
(ii) S satisfies (P2);
(iii) S satisfies the condition

(P3)    (x, z, y) ∈ S & (x, z′, y) ∈ S → z = z′

Let 𝒮f ⊂ Π(Y × Zx) be the set of all functional feedback components defined in Y × Zx, i.e., Sf : 𝒟(Sf) → Zx, such that F(S ∘ Sf) is functional. Let S̄ be an arbitrary system, S̄ ⊂ X × Y, and let K(X, Y) be the family of subsystems of S̄ such that

K(X, Y) = {S′ : S′ ⊂ S̄ & S′ is one-to-one functional}

Then

Fs(𝒮f) = K(X, Y)

if and only if

S̄ = SB

where

SB = {(x, y) : (∃z)((x, z, y) ∈ S)}
PROOF. We consider the only if part first. Suppose Fs(𝒮f) = K(X, Y) holds. Let (x, y) ∈ S̄ be arbitrary. Since Fs(𝒮f) = K(X, Y) and since ⋃K(X, Y) = S̄, there exists Sf ∈ 𝒮f such that (x, y) ∈ Fs(Sf); that is, (∃z)((y, z) ∈ Sf & (x, z, y) ∈ S). Hence, (x, y) ∈ SB. Let (x̂, ŷ) ∈ SB be arbitrary with (∃ẑ)((x̂, ẑ, ŷ) ∈ S). Let Sf = {(ŷ, ẑ)}. We shall show that Sf ∈ 𝒮f. Notice that if (x, y) ∈ F(S ∘ Sf) and (x, y′) ∈ F(S ∘ Sf), then y = ŷ = y′ holds; that is, F(S ∘ Sf) is trivially functional. Furthermore, Sf is functional. Therefore, Sf ∈ 𝒮f. Naturally, (x̂, ŷ) ∈ Fs(Sf). Since Fs(𝒮f) = K(X, Y), (x̂, ŷ) ∈ S̄.

Next, we consider the if part. Suppose S̄ = SB holds. Let S′ ∈ Fs(𝒮f) be arbitrary. Then there exists Sf ∈ 𝒮f such that S′ = Fs(Sf). S′ is functional due to the assumption on 𝒮f and satisfies the relation S′ ⊂ SB = S̄ because S′ = F(S ∘ Sf). Since S′ is one to one by Proposition 4.1, S′ is an element of K(X, Y).

Let S′ ∈ K(X, Y) be arbitrary. Since S̄ = SB, we have S′ ⊂ SB. Let Sf = Gs(S′); that is,

Sf = {(y, z) : (∃x)((x, y) ∈ S′ & (x, z, y) ∈ S)}

Suppose (y, z) ∈ Sf and (y, z′) ∈ Sf. Then

(∃x)(∃x′)((x, y) ∈ S′ & (x′, y) ∈ S′ & (x, z, y) ∈ S & (x′, z′, y) ∈ S)

Since S′ is one to one, we have x = x′. Consequently, (P3) implies that z = z′. Therefore, Sf is functional. We can show F(S ∘ Sf) = S′ because

(x, y) ∈ F(S ∘ Sf) ↔ (∃z)((x, z, y) ∈ S & (y, z) ∈ Sf)
↔ (∃z)(y = S(x, z) & (∃x̂)(y = S′(x̂) & y = S(x̂, z)))
↔ (∃z)(∃x̂)(y = S(x, z) & y = S′(x̂) & y = S(x̂, z))
↔ (∃z)(y = S(x, z) & y = S′(x))

because (P2) implies x = x̂, and because S′ ⊂ SB. Therefore, F(S ∘ Sf) = S′, where S′ is functional by definition. Consequently, Sf ∈ 𝒮f, which implies S′ ∈ Fs(𝒮f). Q.E.D.

We shall apply Proposition 4.2 to decoupling of multivariable systems. We shall first introduce the formal definitions of decoupling by feedback and of functional controllability.

Definition 4.1. A general system S ⊂ X × Y is functionally controllable if and only if

(∀y ∈ Y)(∃x ∈ X)((x, y) ∈ S)
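Definition 4.1 is a plain surjectivity condition and is easy to test on a finite system (a sketch; the function name is ours):

```python
def functionally_controllable(S, Y):
    """Definition 4.1: every y in Y is produced by some input,
    i.e., (for all y in Y)(exists x)((x, y) in S)."""
    produced = {y for _, y in S}
    return set(Y) <= produced

S = {("a", 1), ("b", 2), ("b", 1)}
print(functionally_controllable(S, {1, 2}))     # True
print(functionally_controllable(S, {1, 2, 3}))  # False
```

Note that functional controllability concerns the output object only; it does not require S itself to be functional.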
Definition 4.2. A multivariable general system S ⊂ (X1 × ⋯ × Xn) × Zx × (Y1 × ⋯ × Yn) is called decoupled by feedback if and only if there exists a feedback system Sf ⊂ …

… there exists a feedback Sf ∈ 𝒮f such that

f(φ̂10t) = (φ̂10t)ⁿ + a1(φ̂10t)ⁿ⁻¹ + ⋯ + anI = 0
holds, where pointwise addition, pointwise scalar multiplication, and multiplication by composition are assumed for the powers of φ̂10t, and I : C → C is the identity mapping. The relationship between pole assignability and state controllability is given by the following proposition.

Proposition 5.1. Let S be a linear dynamical system specified as above whose state space C is of dimension n. Assume that the field 𝒜 has more than (n + 1) elements and that the state response φ10t : C → C of S is an isomorphism. Then, if S is pole assignable, the system S is controllable from zero.
PROOF. Let ĉ ∈ C be arbitrary. Since φ10t is an isomorphism, there exists α1 ≠ 0 such that (φ10t)ⁿ + α1I = L1 is an isomorphism. Choose Sf such that φ̂10t satisfies (φ̂10t)ⁿ + α1I = 0. Then we have

(φ̂10t)ⁿ − (φ10t)ⁿ = −L1

Since L1 is an isomorphism, there exists c ∈ C such that ((φ̂10t)ⁿ − (φ10t)ⁿ)(c) = ĉ; that is, φ̂10t̂(c) − φ10t̂(c) = ĉ (due to the state composition property), where t̂ = nt. Furthermore, we have from the definition of φ̂ that for any c ∈ C and Sf ∈ 𝒮f, φ̂10t̂(c) = φ10t̂(c, Sf(y)) holds, where y satisfies the relation ρ0(c, Sf(y)) = y. Consequently, ĉ can be reached from the zero state by an appropriate input, i.e., S is controllable from zero.
Q.E.D.

In the above proposition, φ10t is assumed to be an isomorphism. As Proposition 4.3, Chapter VII, shows, this holds under rather mild conditions; in particular, it is true for a linear time-invariant differential equation system.

6. SIMPLIFICATION THROUGH DECOMPOSITION OF DISCRETE-TIME DYNAMICAL SYSTEMS†
In the preceding sections, we have considered the decomposition of a system, primarily into two subsystems of a rather general kind. In the present section, we shall consider decomposition of a system into a family of subsystems of a prescribed, much simpler type. We shall consider time-invariant discrete-time dynamical systems defined by the maps

ρt : C × Xt → Yt,    φtt′ : C × Xtt′ → C

† See Hartmanis and Stearns [12].
where T = {0, 1, 2, ...}. To simplify the developments, the following will be assumed:

(i) The system is (strongly) nonanticipatory, which is the most interesting case in practice, so that the dynamic behavior of the system is completely represented by φ̄ = {φtt′ : t, t′ ∈ T}. In this section, we shall consider, therefore, only φ̄, refer to it as a state-transition system, and be concerned with the decomposition of φ̄.

(ii) Since the system is time invariant and discrete time, φ̄ is characterized by a single mapping, namely, φ01 : C × A → C; φtt′, for arbitrary t and t′, can be determined uniquely from φ01 by using the composition property of the state-transition function. We shall denote φ01 by φ and refer to φ itself as a state-transition system.

(iii) Let X = Aᵀ and X̂ = ⋃{Xᵗ : t ∈ T}. If a composition operation ∘ : X̂ × X̂ → X̂ is defined on X̂ such that xᵗ ∘ x̂ᵗ′ = xᵗ · Fᵗ(x̂ᵗ′), where Fᵗ is the shift operator, X̂ is essentially a free monoid on A; the null element of that monoid will be denoted by x°. In considering the structure of X̂ later on in this section, we shall treat X̂ as a monoid in the above sense.
For the sake of notational convenience, and in order to follow the conventional literature, we shall define a special form of the cascade connection for state-transition systems.

Definition 6.1. Let φ1 : C1 × A → C1 and φ2 : C2 × (C1 × A) → C2 be two state-transition systems, where A is the input alphabet for φ1, while C1 × A is the input alphabet for φ2. The cascade connection φ̂ of φ1 and φ2 is then

φ̂ : (C1 × C2) × A → C1 × C2

such that

φ̂((c1, c2), a) = (φ1(c1, a), φ2(c2, (c1, a)))
Definition 6.2. A state-transition system φ : C × A → C is decomposed into a cascade connection of two state-transition systems φ1 : C1 × A → C1 and φ2 : C2 × (C1 × A) → C2 if and only if there exists a mapping R : C1 × C2 → C such that R is onto and the following diagram is commutative:

(C1 × C2) × A --φ̂--> C1 × C2
     |                   |
   R × I                 R
     ↓                   ↓
   C × A  -----φ----->   C

where φ̂ is the cascade connection of φ1 and φ2; that is, φ(R(c1, c2), a) = R(φ̂((c1, c2), a)).
Intuitively, φ is decomposed into the cascade connection of φ1 and φ2 if and only if the combined action of φ1 and φ2, followed by R, reproduces the action of φ. (Strictly speaking, some notational modifications of φ1 and φ2 are necessary for φ1, φ2, and R to be compatible with the cascade operation introduced in Section 1.) The following is a basic method to decompose a state-transition system.
Proposition 6.1. Let φ : C × A → C be a state-transition system. Given a class of subsets {C_α : α ∈ C₁} of C such that

(i) C = ∪_{α∈C₁} C_α
(ii) (∀C_α)(∀a)(∃C_β)(φ(C_α, a) ⊂ C_β)

then there exists a set C₂ and two state-transition systems φ₁ : C₁ × A → C₁ and φ₂ : C₂ × (C₁ × A) → C₂ such that φ is decomposed into the cascade connection of φ₁ and φ₂.

PROOF. Let R̂ ⊂ C₁ × C be a general system such that (α, c) ∈ R̂ ↔ c ∈ C_α. Let a set C₂ be a global state object of R̂; that is, there exists a mapping R : C₁ × C₂ → C such that c ∈ C_α ↔ (α, c) ∈ R̂ ↔ (∃c₂)(c = R(α, c₂)). Let φ₁ : C₁ × A → C₁ and φ₂ : C₂ × (C₁ × A) → C₂ be such that

φ₁(α, a) = β → φ(C_α, a) ⊂ C_β

φ₂(c₂, (α, a)) = c₂′ → φ(R(α, c₂), a) = R(φ₁(α, a), c₂′)

There may be more than one β that satisfies the relation φ(C_α, a) ⊂ C_β. In that case, any β satisfying the condition can be used to define φ₁. As for φ₂, we shall show that there exists at least one c₂′ that satisfies the relation φ(R(α, c₂), a) = R(φ₁(α, a), c₂′). The definition of R implies R(α, c₂) ∈ C_α. Let φ₁(α, a) = β. The definition of φ₁ then implies φ(C_α, a) ⊂ C_β. Consequently, φ(R(α, c₂), a) ∈ C_β. Therefore, there exists c₂′ ∈ C₂ such that φ(R(α, c₂), a) = R(β, c₂′). If there is more than one c₂′ satisfying the condition, any one of them can be used to define φ₂.
The cascade connection φ̂ of φ₁ and φ₂ is

φ̂((c₁, c₂), a) = (φ₁(c₁, a), φ₂(c₂, (c₁, a)))

We shall show that R(φ̂((c₁, c₂), a)) = φ(R(c₁, c₂), a) holds. Let φ₂(c₂, (c₁, a)) = c₂′. The definition of φ₂ then implies that φ(R(c₁, c₂), a) = R(φ₁(c₁, a), c₂′). Therefore, R(φ̂((c₁, c₂), a)) = φ(R(c₁, c₂), a) holds. Furthermore, since C = ∪_{α∈C₁} C_α, R is onto. Q.E.D.
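The constructive proof admits a small executable sketch. The 3-state system and its transition table below are invented; the cover C_α = C − {α} satisfies conditions (i) and (ii), and the first-level transition φ₁ is obtained exactly as in the proof, with any admissible β allowed:

```python
# Cascade decomposition from a cover (Proposition 6.1, finite case).
# Hypothetical 3-state system with cover C_alpha = C - {alpha}.

C = {0, 1, 2}
# Invented transition table: 'p' cycles the states (a permutation input),
# 'r' resets every state to 0 (a reset input).
phi = {('p', 0): 1, ('p', 1): 2, ('p', 2): 0,
       ('r', 0): 0, ('r', 1): 0, ('r', 2): 0}

cover = {alpha: C - {alpha} for alpha in C}      # condition (i) holds

def phi1(alpha, a):
    """First-level transition: pick any beta with phi(C_alpha, a) in C_beta."""
    image = {phi[(a, c)] for c in cover[alpha]}
    for beta in sorted(C):                       # condition (ii) supplies one
        if image <= cover[beta]:
            return beta
    raise ValueError("cover violates condition (ii)")

print(phi1(0, 'p'), phi1(2, 'p'), phi1(0, 'r'))  # -> 1 0 1
```

As in the proof, the choice of β is not unique in general; the sketch simply takes the smallest admissible one.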
Let us apply Proposition 6.1 and its proof to the case when the state space C of a state-transition system is finite. Let C_α = C − {α}, where α ∈ C. Then it is easy to show that {C_α : α ∈ C} satisfies the conditions of Proposition 6.1. In other words, a finite state-transition system always has a cascade decomposition. In order to discuss stronger results on decomposition in the finite case, we shall first introduce the following definitions.
Chapter X
Subsystems, Decomposition, Decoupling
Definition 6.3. Let φ : C × A → C be a state-transition system. For each a ∈ A, let φ_a : C → C be such that φ_a(c) = φ(c, a). If φ_a is onto, the input a is called a permutation input. If φ_a is a constant function, the input a is called a reset input. If φ_a is the identity function, the input a is called an identity input.

Definition 6.4. Let φ : C × A → C be a state-transition system. If all the inputs of φ are permutations, φ will be referred to as a P system. If all the inputs of φ are resets or identities, φ will be referred to as an R system. If all the inputs of φ are either permutations or resets, φ will be referred to as a P-R system.

Let us return to the finite case where C_α = C − {α}. If the input a is a permutation for φ, it is also a permutation for the component subsystem φ₁ as given in Proposition 6.1. (Refer to the proof of Proposition 6.1.) If the input a is not a permutation, there exists C_β ∈ {C_α : α ∈ C} such that φ(C_α, a) ⊂ C_β for all α ∈ C₁ = C. Consequently, the input a is a reset for φ₁. Therefore, φ₁ is a P-R system. Notice that the cardinality of C₂ can be less than that of C₁ or C. If we apply repeatedly the procedure of the cascade decomposition given in Proposition 6.1 to a state-transition system, the system is decomposed finally into the cascade connection of P-R systems, because a state-transition system whose state space has two elements has to be a P-R system. Consequently, we have the following proposition.

Proposition 6.2. A finite state-transition system can be decomposed into a cascade connection of P-R systems.

In the obvious manner, we can extend the domain of a state-transition system φ : C × A → C to C × X̄, where X̄ is the free monoid on A; that is, φ(c, x^t) = φ_0t(c, x^t). Let φ_x : C → C be such that φ_x(c) = φ(c, x). It is easy to show that φ_x′ · φ_x = φ_{x·x′}, where φ_x′ · φ_x is the usual composition of the two functions φ_x′ and φ_x.
Consequently, we can consider x ∈ X̄ itself as a mapping, i.e., x : C → C such that x(c) = φ_x(c) and Λ(c) = c, where Λ = x_o is the identity element of the monoid X̄. From now on, X̄ will be considered a monoid of functions in the above sense. Let a relation E ⊂ X̄ × X̄ be defined by

(x, x′) ∈ E ↔ (∀c)(x(c) = x′(c))

E is apparently an equivalence relation. Moreover, E is a congruence relation with respect to the monoid operation of X̄. Let X̄/E = {[x] : x ∈ X̄}. A composition operation on X̄/E, therefore, can be defined as

[x] · [x′] = [x · x′]

X̄/E is also a monoid with respect to the new operation.
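For a finite system the quotient X̄/E is finite and can be computed by brute force; it is the transition monoid of the automaton. A sketch with an invented two-input system (one permutation input, one reset input):

```python
from itertools import product

# Transition monoid X/E of a finite state-transition system: identify
# input strings x, x' whenever they induce the same map C -> C.

C = (0, 1, 2)
A = ('p', 'r')
step = {('p', 0): 1, ('p', 1): 2, ('p', 2): 0,   # 'p': a permutation input
        ('r', 0): 0, ('r', 1): 0, ('r', 2): 0}   # 'r': a reset input

def as_map(word):
    """The function c -> phi(c, word), tabulated as a tuple over C."""
    out = []
    for c in C:
        for a in word:
            c = step[(a, c)]
        out.append(c)
    return tuple(out)

# Enumerate words up to length 3 and collect the distinct induced maps.
monoid = {as_map(())}            # identity map = class of the null word
for n in (1, 2, 3):
    for word in product(A, repeat=n):
        monoid.add(as_map(word))
print(len(monoid))  # -> 6
```

For this example the monoid consists of the three cyclic permutations and the three constant maps; longer words contribute nothing new, which is why a small enumeration bound already suffices here.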
Proposition 6.3. Let φ : C × A → C be a state-transition system. Let C₁ be a group in X̄/E, where the group operation is the monoid operation of X̄/E. The state-transition system φ then can be decomposed into the cascade connection of two state-transition systems φ₁ : C₁ × A → C₁ and φ₂ : C × (C₁ × A) → C given by

φ₁(c₁, a) = c₁ · [a], if [a] ∈ C₁, and φ₁(c₁, a) = c₁ otherwise
φ₂(c, (c₁, a)) = (φ₁(c₁, a))⁻¹(φ(c₁(c), a))

where c₁⁻¹ is the inverse of c₁ in the group C₁.

PROOF. Let R : C₁ × C → C be defined by R(c₁, c) = c₁(c). Then

R(φ₁(c₁, a), φ₂(c, (c₁, a))) = φ₁(c₁, a)((φ₁(c₁, a))⁻¹(φ(c₁(c), a))) = φ(c₁(c), a) = (c₁ · [a])(c) = φ(R(c₁, c), a)

Furthermore, if c₁ = [Λ] ∈ C₁, we have c₁(c) = c. Hence, R is onto. Q.E.D.
Let us consider a finite state-transition system again. Proposition 6.2 shows that every finite state-transition system can be decomposed into a cascade connection of P-R systems. Let us apply Proposition 6.3 to a P-R system. Let P be the set of all permutation inputs. Let P* be the free monoid on P. Then it is easy to show that P*/E ⊂ X̄/E is a finite group. Furthermore, since every element of P* is an onto mapping, if an input a is a reset, we have [a] ∉ P*/E. Consequently, if a is a reset, φ₂(c, (c₁, a)) of Proposition 6.3 does not depend on c, and if a is a permutation, φ₂(·, (c₁, a)) is the identity mapping, where C₁ = P*/E. Therefore, φ₂ is an R system.

Proposition 6.4. Every finite state-transition system can be decomposed into a cascade connection of P systems and R systems.
We shall now consider φ₁ of Proposition 6.3. Let us recall the following definitions.

Definition 6.5. A subgroup H of a group G is called a normal subgroup if and only if aH = Ha for every a ∈ G, where aH = {x : (∃y)(y ∈ H & x = a · y)}.
Definition 6.6. A group G is a simple group if and only if it has no nontrivial (i.e., different from G itself or the identity) normal subgroup.

Definition 6.7. Let φ : C × A → C be a state-transition system, where C is a group. If C is simple, the system is called simple.
Let φ : C × A → C be a state-transition system whose state space C is a group. Let G be a group and let ψ : C → G be a group homomorphism. Let E_ψ ⊂ C × C be such that

(c, c′) ∈ E_ψ ↔ ψ(c) = ψ(c′)

Then E_ψ is apparently a congruence relation. Let H_ψ = {c : ψ(c) = e}, where e is the identity element of G.

Proposition 6.5. Let ψ : C → G be a group homomorphism such that the condition

ψ(c) = ψ(c′) → ψ(φ(c, a)) = ψ(φ(c′, a))    for every a ∈ A

holds. Then φ : C × A → C can be decomposed into the cascade connection of φ₁ : (C/E_ψ) × A → (C/E_ψ) and φ₂ : H_ψ × ((C/E_ψ) × A) → H_ψ.
PROOF. Let C_α = [α] ∈ C/E_ψ, where α ∈ C. Then ∪_{α∈C} C_α = C. Furthermore, we can show that for any C_α and a ∈ A, φ(C_α, a) ⊂ C_β, where β = φ(α, a). Let c ∈ C_α be arbitrary; that is, ψ(c) = ψ(α). Then the property of ψ assumed in Proposition 6.5 implies that ψ(φ(c, a)) = ψ(φ(α, a)); i.e., φ(c, a) ∈ C_β, where β = φ(α, a). Let h : C/E_ψ → C be a choice function; i.e., h([α]) ∈ C_α, where α ∈ C. Let R : (C/E_ψ) × H_ψ → C be defined by R(C_α, y) = h(C_α) · y, where y ∈ H_ψ. Then

c ∈ C_α ↔ ψ(c) = ψ(α)
       ↔ (∃y)(y ∈ H_ψ & c = α · y)
       ↔ (∃y)(∃ŷ)(y ∈ H_ψ & ŷ ∈ H_ψ & c = α · ŷ · ŷ⁻¹ · y),    where h([α]) = α · ŷ
       ↔ (∃y)(y ∈ H_ψ & c = R(C_α, y))

The final result can be proven in the same way as Proposition 6.1. Q.E.D.

Let us return to φ₁ : C₁ × A → C₁ of Proposition 6.3.
Proposition 6.6. Let φ₁ : C₁ × A → C₁ be the cascade component of Proposition 6.3. Let ψ : C₁ → G be an arbitrary group homomorphism. Then ψ(c₁) = ψ(c₁′) implies that ψ(φ₁(c₁, a)) = ψ(φ₁(c₁′, a)) for every a ∈ A. Consequently, if C₁ is not a simple group, the state-transition system φ₁ can be decomposed into the cascade connection of φ₁₁ : (C₁/H) × A → (C₁/H) and φ₁₂ : H × ((C₁/H) × A) → H, where H ⊂ C₁ is a normal subgroup.
PROOF. Suppose ψ(c₁) = ψ(c₁′). Suppose [a] ∈ C₁. The definition of φ₁ implies that

ψ(φ₁(c₁, a)) = ψ(c₁ · [a]) = ψ(c₁) · ψ([a]) = ψ(c₁′) · ψ([a]) = ψ(c₁′ · [a]) = ψ(φ₁(c₁′, a))

If [a] ∉ C₁, then

ψ(φ₁(c₁, a)) = ψ(c₁) = ψ(c₁′) = ψ(φ₁(c₁′, a))

Suppose H is a normal subgroup of C₁. Then there exists another group G and a group homomorphism ψ : C₁ → G such that H = H_ψ and αH = [α], where α ∈ C₁. The final result follows by Proposition 6.5. Q.E.D.

As we mentioned in Proposition 6.6, the state-transition system φ₁ has a peculiar property; i.e., for every group homomorphism ψ,

ψ(c₁) = ψ(c₁′) → ψ(φ₁(c₁, a)) = ψ(φ₁(c₁′, a))    for every a ∈ A

This property is also preserved by φ₁₁ and φ₁₂ when φ₁₁ and φ₁₂ are defined in the same way as φ₁ and φ₂ in Proposition 6.1. Notice that if φ : C × A → C is a finite state-transition system, the state space C₁ of φ₁ of Proposition 6.3 is finite. Consequently, we have the following proposition.
Proposition 6.7. If φ : C × A → C is a finite state-transition system, the system can be decomposed into a cascade connection of state-transition systems which are either simple or reset.
A reset system has a parallel decomposition. We shall define a parallel decomposition of a state-transition system in the following way.

Definition 6.8. Let φ : C × A → C be a state-transition system. Let {φᵢ : Cᵢ × A → Cᵢ : i = 1, ..., n} be a family of state-transition systems. If there exists a mapping R : C₁ × ... × Cₙ → C such that R is onto and the following diagram is commutative:

    (C₁ × ... × Cₙ) × A ---(φ₁,...,φₙ)---> C₁ × ... × Cₙ
          |R × I                                |R
          v                                     v
        C × A --------------φ--------------->   C

then φ̄ = {φᵢ : i = 1, ..., n} is called a parallel decomposition of φ.
Proposition 6.8. Let φ : C × A → C be a reset system. Given a family of sets {Cᵢ : i = 1, ..., n} and an onto mapping R : C₁ × ... × Cₙ → C, there exists a family of state-transition systems {φᵢ : Cᵢ × A → Cᵢ : i = 1, ..., n} such that φ̄ = {φᵢ : i = 1, ..., n} is a parallel decomposition of the reset system φ.
PROOF. Let a state-transition system φᵢ : Cᵢ × A → Cᵢ be given by

φᵢ(cᵢ, a) = cᵢ, if a is an identity input, i.e., φ_a is the identity
φᵢ(cᵢ, a) = c̄_ai, if a is a reset input

where c̄_a = (c̄_a1, ..., c̄_an) ∈ C₁ × ... × Cₙ is such that R(c̄_a) = φ(c, a) for every c ∈ C. Since R is onto, when a is a reset input, it is clear that there exists c̄_a ∈ C₁ × ... × Cₙ such that R(c̄_a) = φ(c, a) for every c. If there are two or more c̄_a that satisfy the above condition, any one of them can be used to define φᵢ. We shall show that R satisfies the commutative diagram of Definition 6.8. Suppose a is an identity input. Then

R(φ̄((c₁, ..., cₙ), a)) = R(c₁, ..., cₙ) = φ(R(c₁, ..., cₙ), a)

Suppose a is a reset input. Then

R(φ̄((c₁, ..., cₙ), a)) = R(c̄_a) = φ(c, a) = φ(R(c₁, ..., cₙ), a)    Q.E.D.
Suppose the state space C of a reset system is finite. There exist then a positive integer n and a mapping R : {0, 1}ⁿ → C such that R is onto. Consequently, it follows from Proposition 6.8 that every finite reset system φ : C × A → C has a parallel decomposition

{φᵢ : {0, 1} × A → {0, 1} : i = 1, ..., n}
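The binary coding behind this parallel decomposition can be sketched concretely. The state names, the code, and the reset system below are invented; the two-state factors follow the construction in the proof of Proposition 6.8:

```python
# Parallel decomposition of a finite reset system (Proposition 6.8 sketch).
# Each state of C gets a binary code; an identity input fixes every
# coordinate, a reset input forces the code of its target state.

C = ['s0', 's1', 's2']
n = 2                                               # 2**n >= |C|
code = {'s0': (0, 0), 's1': (0, 1), 's2': (1, 0)}   # a choice of R^{-1}

# Invented reset system: 'e' is an identity input, 'a' resets to s1,
# 'b' resets to s2.
reset_target = {'a': 's1', 'b': 's2'}

def component(i):
    """The i-th two-state factor phi_i : {0, 1} x A -> {0, 1}."""
    def phi_i(bit, inp):
        if inp == 'e':                       # identity input: keep the bit
            return bit
        return code[reset_target[inp]][i]    # reset input: force coordinate i
    return phi_i

phis = [component(i) for i in range(n)]

def R(bits):
    """Decode a bit vector back to a state (onto, since code is injective here)."""
    return next(s for s, b in code.items() if b == bits)

# Commutativity check of Definition 6.8 for the reset input 'a':
bits = tuple(phis[i](code['s0'][i], 'a') for i in range(n))
print(R(bits))  # -> s1
```

Each factor is a two-state system, which is the form used in Proposition 6.9 below.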
Definition 6.9. If the state space C of a state-transition system φ has two elements, φ is called a two-state transition system.

Combining Definition 6.9 with Proposition 6.7 by using Proposition 6.8, we have the following proposition.

Proposition 6.9 (Krohn-Rhodes [17]). If φ : C × A → C is a finite state-transition system, the system can be decomposed into a cascade and parallel connection of state-transition systems which are either simple or two-state.
The results of the principal decomposition theorems in this section are shown diagrammatically in Fig. 6.1.

[Fig. 6.1. Decomposition of a finite-state discrete-time dynamical system (FSDTD): by Proposition 6.1 into a cascade of component subsystems; by Proposition 6.2 into a cascade of P-R systems; by Proposition 6.4 into P systems and R systems; by Proposition 6.9 into a cascade and parallel connection of simple and two-state systems. Legend: P-R, P-R system; P, P system; R, R system; S, simple system; 2, two-state system.]
Chapter XI
COMPUTABILITY, CONSISTENCY, AND COMPLETENESS
There are a number of important discrete-time processes, such as computation, theorem proving, symbol-manipulation processes, and the like, which can be represented by dynamical systems. We shall show in this chapter how this can be done and, using an abstract formulation for such a system, prove some key results regarding the properties of such systems. In particular, we shall consider the questions of consistency and completeness of a formal (or axiomatic logic) system and the computability problems. We shall prove the famous Goedel theorem in the context of the consistency and completeness properties of a general system. The Goedel diagonalization argument will be developed without reference to the specifics of the "dynamics" of a logical system. In this way, we shall reveal the most essential feature of an important result, in the spirit of the general methodology presented in the first chapter.
1. COMPUTATION AS DYNAMICAL PROCESS
A computation can be described essentially in the following way. There is given a set of initial data to which there is applied a sequence of transformations according to a preassigned arrangement (schedule or algorithm); when the process stops, the last set of data represents the result of the computation. Conditions for ending the transformation process are contained in the rules for applying the transformations themselves and depend also on the initial data. It is quite apparent that a computation represents a dynamical process.
Since the computation process by definition is nonanticipatory, stationary, and discrete in time, the dynamical system used to represent a computation can be specified solely in reference to a state-transition function. We shall start, therefore, with a class of time-invariant state-transition families φ̃ = {φ̄_x : x ∈ X}, where φ̄_x = {φ_x,0t : C → C : t ∈ T}, and an output function λ : C → B. Since the state-transition family φ̄_x, which specifies a computation process, is, by definition, time invariant, φ̄_x is specified by {φ_x,0t}. Furthermore, since φ̄_x is discrete in time, that is, T = {0, 1, 2, ...}, φ̄_x is essentially specified by φ_x,01. Each φ̄_x of φ̃ represents one algorithm (schedule), and so the selection of the class φ̃ itself specifies a type of algorithm (for example, the class of Turing machines). Since φ̄_x is free from an input, (φ̄_x, λ) will be referred to as a free dynamical system. An output function λ : C → B represents a "decoding" in the sense that it gives an interpretation of the present state. In this chapter, one fixed output function λ will be assumed.

For any given state-transition family φ̄_x ∈ φ̃, a state c ∈ C is called an equilibrium state if and only if

(∀t)(φ_x,0t(c) = c)

holds. Let C_x^e be the set of equilibrium states of φ̄_x; i.e.,

C_x^e = {c : (∀t)(φ_x,0t(c) = c)}

In this chapter, we assume for the class φ̃ that there exists a subset C^e ⊂ C such that it represents the equilibrium set common to every member φ̄_x of φ̃; i.e., C_x^e = C^e for every x. Starting from a given initial state (initial data) c and an algorithm φ̄_x ∈ φ̃, we note that the changes of the state represent the evolution of the computational process in time, and if the system reaches an equilibrium state in a finite time, the computation process has been completed. Computation, therefore, can be interpreted as the so-called finite time stability, i.e., the ability of a system to reach an equilibrium state in a finite time. A trajectory reaching an equilibrium state in a finite time will be called a computation. This leads to the following definition.

Definition 1.1. A function f : C → B is computable by φ̃ if and only if there exists φ̄_x ∈ φ̃ such that

(∀c)(∃t)(φ_x,0t(c) ∈ C^e & f(c) = λ(φ_x,0t(c)))
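Definition 1.1 can be phrased operationally: iterate the one-step transition until the trajectory enters the equilibrium set, then decode. In this sketch the "algorithm" is invented, and equilibrium is detected as a fixed point of the one-step transition:

```python
# Computation as finite-time stability (Definition 1.1): iterate the
# one-step transition until an equilibrium state is reached, then decode.

def compute(step, decode, c, max_steps=10_000):
    """Run the free dynamical system from c; decode the first equilibrium
    state reached, or return None if the bound is exhausted."""
    for _ in range(max_steps):
        nxt = step(c)
        if nxt == c:              # equilibrium: the state no longer changes
            return decode(c)
        c = nxt
    return None                   # trajectory is not (seen to be) a computation

# Invented algorithm: from (0, n) accumulate n + (n-1) + ... + 1.
step = lambda s: s if s[1] == 0 else (s[0] + s[1], s[1] - 1)
decode = lambda s: s[0]           # the output function lambda : C -> B

print(compute(step, decode, (0, 4)))  # -> 10
```

The `max_steps` bound is a practical assumption of the sketch; the definition itself only requires that *some* finite time suffices.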
Before proceeding any further, it will be convenient to simplify the framework still more. Namely, for the basic question of computability, of importance is only the fact that the process specified by a given φ̄_x and c has terminated in an equilibrium state in a finite time. How this is achieved, i.e., the computation process itself, is of no importance. We can, therefore, dispense with the dynamics of each state-transition family and consider solely a static mapping from the state space into the set of computation processes. To that end we shall define a new system
process itself, is of no importance. We can, therefore, dispense with the dynamics of each state-transition family and consider solely a static mapping from the state space into the set of computation processes. To that end we shall define a new system p:c x
such that X
=
x4Y
6, Y = C T ,and 6i) = Y * (V~)(Y([)= 4bt(c))
For notational convenience, we introduce a subset Y₀ ⊂ Y and a mapping Res : Y → C as follows:

Y₀ = {y : (∃t̂)(∀t)(t ≥ t̂ → y(t) = y(t̂) & y(t̂) ∈ C^e)}

and

Res(y) = y(t̂), if y ∈ Y₀; undefined, otherwise
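A finite-horizon sketch of Res follows. Membership in Y₀ quantifies over all t, so a program can only approximate it by inspecting a finite prefix; the horizon and the equilibrium set below are assumptions of the example:

```python
# Finite-horizon sketch of Res: return the settling value of a trajectory
# that enters the equilibrium set Ce and stays constant from some t-hat on.

def res(y, Ce, horizon):
    """Res(y) if y is (detectably) in Y0 within the horizon, else None."""
    tail = [y(t) for t in range(horizon)]
    for t_hat in range(horizon):
        if tail[t_hat] in Ce and all(v == tail[t_hat] for v in tail[t_hat:]):
            return tail[t_hat]
    return None

y = lambda t: max(0, 3 - t)          # trajectory 3, 2, 1, 0, 0, 0, ...
print(res(y, Ce={0}, horizon=10))    # -> 0
```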
In the present framework, Definition 1.1 can be restated as in the following definition.

Definition 1.2. A function f : (C) → B is computable by ρ if and only if there exists x̂ ∈ X such that

(i) c ∈ (C) ↔ ρ(c, x̂) ∈ Y₀
(ii) f(c) = λ · Res · ρ(c, x̂)

It should be noticed that in Definition 1.2, the function f can be a partial function when (C) is a proper subset of C. The development can proceed now solely by using the mapping ρ. It is conceptually important, however, to have in mind the interpretation of ρ as a dynamic system, i.e., as resulting from a computation process. To help make this bridge, we shall give in the last section some standard interpretations of ρ. We can introduce now the following concepts.

Definition 1.3. A given subset C′ ⊂ C is acceptable by ρ if and only if there exists x̂ ∈ X such that

c ∈ C′ ↔ ρ(c, x̂) ∈ Y₀

i.e., C′ is the domain on which computations can exist for a given x̂.

Definition 1.4. A given subset Y′ ⊂ Y₀ is representable by ρ if there exists x̂ ∈ X such that

y ∈ Y′ ↔ (∃c)(ρ(c, x̂) = y)
Definition 1.5. A subset C′ ⊂ C is decidable if and only if both C′ and C∖C′ are acceptable by ρ.

Definition 1.6. Let W be an arbitrary subset of Y. The system ρ is W-consistent if and only if

W ∩ Y₀ = ∅

and W-complete if and only if

W ∪ Y₀ = Y
2. FUNDAMENTAL DIAGONALIZATION (GOEDEL) THEOREM
We shall now prove a fundamental theorem, which contains a basic argument for the impossibility of solving certain kinds of decidability problems. The theorem is an abstraction of the famous Goedel theorem, and we shall present it in the context of consistency and completeness properties of a system.

Let g : X → C be an injection, which we shall term a Goedel mapping. In reference to the given Goedel mapping, we can now define a norm or diagonalization y_x for any x ∈ X by

y_x = ρ(g(x), x)

Let Q be an arbitrary subset of Y; there is defined, then, a set of elements X_Q^d whose norms are in Q, i.e.,

x ∈ X_Q^d ↔ ρ(g(x), x) ∈ Q

Let C_Q^d be the image of X_Q^d under g; i.e.,

C_Q^d = g(X_Q^d)

Theorem 2.1. Let ρ : C × X → Y be a given system and W ⊂ Y a subset of the outputs. If C_W^d is an acceptable set, the system is either W-inconsistent or W-incomplete.

PROOF.
Since C_W^d is an acceptable set, there exists x* ∈ X such that, for any c ∈ C,

c ∈ C_W^d ↔ ρ(c, x*) ∈ Y₀

and, in particular, for g(x*) ∈ C, we have

g(x*) ∈ C_W^d ↔ ρ(g(x*), x*) ∈ Y₀    (11.1)
where C_W^d, by definition, is the set of Goedel images of the norms that are elements of W, i.e.,

g(x) ∈ C_W^d ↔ ρ(g(x), x) ∈ W    (11.2)

Since relation (11.2) holds for any x ∈ X, it follows from (11.1) and (11.2) that, for the given x* ∈ X,

ρ(g(x*), x*) ∈ Y₀ ↔ ρ(g(x*), x*) ∈ W    (11.3)

Denote by y* the output of the system such that y* = ρ(g(x*), x*). From (11.3) it follows that either y* ∈ W ∩ Y₀, i.e., the system is W-inconsistent, or y* ∉ W ∪ Y₀, i.e., the system is W-incomplete. Q.E.D.
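The diagonal argument can be checked exhaustively on a finite toy universe. Below, W is taken as the complement of Y₀, so the system is W-consistent and W-complete by construction, and the theorem then forces C_W^d to be unacceptable for *every* possible behavior table; the sets and the Goedel injection are invented:

```python
from itertools import product

# Finite sanity check of Theorem 2.1: choose W = Y \ Y0, so the system is
# W-consistent and W-complete by construction; the theorem then says C_W^d
# can be acceptable by no input.  All sets below are invented.

X = ['x0', 'x1', 'x2']
C = ['c0', 'c1', 'c2', 'c3']
Y0 = {'halt'}
Y = {'halt', 'loop'}
W = Y - Y0
g = {'x0': 'c0', 'x1': 'c1', 'x2': 'c2'}     # a Goedel injection X -> C

# Exhaustively try every behavior table rho : C x X -> Y.
for table in product(sorted(Y), repeat=len(C) * len(X)):
    rho = {(c, x): table[i * len(X) + j]
           for i, c in enumerate(C) for j, x in enumerate(X)}
    Cwd = {g[x] for x in X if rho[(g[x], x)] in W}   # Goedel images of norms in W
    # Acceptability by some x*: c in Cwd <-> rho(c, x*) in Y0, for every c.
    accepted = any(all((c in Cwd) == (rho[(c, xs)] in Y0) for c in C)
                   for xs in X)
    assert not accepted      # diagonalization: fails at c = g(x*) every time
print("no behavior table makes C_W^d acceptable")
```

The failure always occurs at the diagonal point c = g(x*), exactly as in the proof: there, (11.1) demands membership in Y₀ while (11.2) demands membership in W, and the two sets are disjoint with union Y.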
3. APPLICATION OF THE FUNDAMENTAL THEOREM TO FORMAL SYSTEMS
We shall now show how the fundamental theorem can be applied to specific systems with more structure. We shall consider the so-called formal or symbol-manipulating systems as being the closest to the general framework developed so far. Similar application can be made for other types of systems, as, e.g., logical systems of different varieties. As a representative of a formal system, we shall use the representation system [17] defined as an ordered sextuple

K = (E, S, T, R, P, Φ)

such that

(1) E is a denumerable set and represents expressions;
(2) S ⊂ E represents sentences;
(3) T ⊂ S represents theorems of S;
(4) R ⊂ S represents refutable sentences;
(5) P ⊂ E are (unary) predicates;
(6) N denotes the set of integers.

Let two mappings be given by g : E → N, Φ : E × N → E, such that g is an injection and Φ(e, n) ∈ S whenever e is a predicate, e ∈ P. Then, for any e ∈ E, g(e) is the Goedel number of e. It is quite easy to construct a general system for K by establishing the following correspondences:

(i) Predicates P are inputs of the system;
(ii) Expressions E are the states;
(iii) Sentences S are outputs;
(iv) The Goedel (restricted) function ḡ of the general system is a restriction of the mapping g of K, ḡ = g|P, and the state representation ρ : E × P → S is

ρ(e, p) = Φ(p, g(e))

(v) The theorem set T ⊂ S corresponds to Y₀.
We can now state a Goedel(-like) theorem for the representation system K.

Theorem 3.1. Let R* be the set of Goedel numbers for all the elements whose norm is in R ⊂ S. If R* is representable in K, then K is either R-inconsistent or R-incomplete.

PROOF. Let ρ be a general system for K constructed as above. To R* of K corresponds a set of inputs C_W^d precisely as defined in Section 2. Since R* is representable, C_W^d is an acceptable set. But then, by Theorem 2.1, there exists an input x ∈ X such that

ρ(g(x), x) ∈ Y₀ ↔ ρ(g(x), x) ∈ W

There exists, then, a corresponding predicate p ∈ P such that

Φ(p, g(p)) ∈ T ↔ Φ(p, g(p)) ∈ R

Φ(p, g(p)) is an example of a sentence which is either in both T and R or is outside of both T and R. K is, therefore, either inconsistent or incomplete. Q.E.D.
4. REALIZATION BY TURING MACHINES†
The abstract dynamical system and the concepts introduced in Section 1 can be realized in terms of many specific models used in the computation field. We shall consider here briefly the realization as a Turing machine. Let I be the set of nonnegative integers; i.e., I = {0, 1, 2, ...}. Let V be a countable set; i.e., V = {v₀, v₁, ...}. Let V* be the set of finite sequences from V; that is, V* is the free monoid on V, whose identity element is Λ (the empty string). A Turing machine is a time-invariant free dynamical system such that:

(i) the time set T is the set of nonnegative integers;
(ii) the state space C = V* × I × V*;

† See Davis [18].
(iii) the output object B is V* ; (iv) for each Turing machine, there are given two positive integers m and n and a mapping + : I f l x V, -+ , p x I , , where I , = (0,. . . , n } , V, = { u o , . . . ,urn}, and p , = V, u { R , L}. R and L are special elements, and 4 satisfies the condition that &i, uj) # ( u j , i) for all i E I , and uj E V, except for i = 1 where +(i, u j ) = ( u j , i) for every u j E V,. The state transition 4olis then given by
401(a’k,
i, ujB) =
(mkuj,
i’, fl),
if +(i, u j ) = ( R , i’) and
(wkuj,
i’, uo),
if
&i, u j ) = ( R , i’)
and
(a, i’, u k u j f l ) ,
if
4(i,u j ) = (L,i’)
and muk # A
(A, i’, uoujfl),
if #(i, u j ) = (L,i‘) and
I
fl
# A =A
auk =
A
4(i,uj) = (ue, i’)
(auk, i’, u J ) ,
if
(auk, i, ujfl),
otherwise
where a, a‘, fl, and fl’ are elements of V * , while i and i’ are elements of I . Since the state-transition family is time invariant and since the time set is discrete, the state-transition family is uniquely determined by 401. The pair of integers (m, n ) will be referred to as the characteristic numbers ; (v) the output function A :C + B is given by A(E, i,
p) = a . p
Let V_m* be the free monoid on V_m. Let (m, n) be the characteristic numbers of a Turing machine S. Then, if an initial state is in V_m* × I_n × V_m*, the state of the system S remains in V_m* × I_n × V_m* at all t ∈ T. In other words, if an initial state is in V_m* × I_n × V_m*, the set of reachable and interesting equilibrium points is a subset of V* × {1} × V*. (The states of the form (α, i, Λ) are also trivial equilibrium states.) Following the custom, we assume that an initial state c = (α, i, β) of a Turing machine with the characteristic numbers (m, n) is selected from {Λ} × {0} × V_m* such that c = (Λ, 0, β), where β ≠ Λ. Consequently, for the class φ̃ of Turing machines, the set V* × {1} × V* can be taken as C^e. Then (φ̃, λ) is the entire class of all Turing machines, and a general system ρ is defined by these Turing machines as given in Section 1.
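A minimal executable sketch of this realization follows. It mirrors the triple (α, i, β) and the halting convention i = 1 from the text only loosely; the blank symbol, the move handling, and the program δ are assumptions of the sketch:

```python
# Toy Turing machine as a time-invariant free dynamical system: a state is
# (alpha, i, beta) with the head scanning the first symbol of beta; state
# i = 1 is the halting state, as in the text.  The program delta is invented.

BLANK = '0'

def step(state, delta):
    alpha, i, beta = state
    beta = beta or BLANK                        # empty beta: scanning a blank
    head, rest = beta[0], beta[1:]
    action, i2 = delta.get((i, head), (head, i))   # missing entry: no change
    if action == 'R':                           # move right
        return (alpha + head, i2, rest or BLANK)
    if action == 'L':                           # move left (pad with blank)
        if alpha:
            return (alpha[:-1], i2, alpha[-1] + head + rest)
        return ('', i2, BLANK + head + rest)
    return (alpha, i2, action + rest)           # write a symbol in place

def output(state):                              # lambda(alpha, i, beta) = alpha.beta
    alpha, _, beta = state
    return alpha + beta

# Invented program: rewrite 1s to 0s moving right; on reading 0 in state 0,
# halt (enter state 1, where the machine no longer changes).
delta = {(0, '1'): ('0', 2), (2, '0'): ('R', 0), (0, '0'): ('0', 1)}

s = ('', 0, '110')
for _ in range(10):
    s = step(s, delta)
print(output(s), s[1])  # -> 000 1
```

Once state 1 is reached the one-step transition leaves the state fixed, so such trajectories land in the equilibrium set C^e, exactly as the general framework requires of a computation.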
Chapter XII

CATEGORIES OF SYSTEMS AND ASSOCIATED FUNCTORS
In the preceding chapters, we have considered a number of rather general systems properties, giving conditions when a system has some stated properties. Obviously, this classifies the systems into different types in reference to the various properties they possess. It is of interest to define these classes more precisely and to establish the relationships between them in a rigorous manner. An appropriate framework for this is provided by categorical algebra, which is concerned with classes and the relationships between them.† Various classes of general systems are defined in this chapter as categories, with the homomorphisms between the objects of a category representing a modeling relation. Relationships between classes of systems are established in terms of the appropriate functors, using results from the representation and realization theories as derived in the preceding chapters. The natural transformations between these functors are also established. The diagram in Fig. 4.1 displays the relationships between the main concepts.

1. FORMATION OF CATEGORIES OF GENERAL SYSTEMS AND HOMOMORPHIC MODELS
(a) Formation of Categories of General Systems

To construct a category one needs a class of objects and a class of morphisms between them. For the objects we shall simply take systems, i.e., relations on the appropriately defined sets. The choice is less obvious as far

† We shall use the objective concept of a category given in the appendix.
as morphisms are concerned. Actually there are many ways to introduce the necessary morphisms, and we shall present some of the approaches that we have found interesting.

(i) General Approach

Since our approach to general systems theory is set-theoretic, the most obvious candidates for morphisms are simply all the functions defined on the respective sets. A category of general systems will be defined by all relations S ⊂ X × Y and the mappings between them. Other categories are constructed in reference to additional structure of the sets X and Y and the selection of appropriate functions as morphisms.
(ii) Structure Modeling Approach

When the structure of a system is of prime concern, a relational homomorphism between systems can be used as a morphism. Let S ⊂ X × Y and S′ ⊂ X′ × Y′ be two general systems, and let h and k be the mappings h : X → X′, k : Y → Y′. The mapping h × k : X × Y → X′ × Y′ is called a relational homomorphism from S to S′ if the following holds:

(∀(x, y))((x, y) ∈ S → (h(x), k(y)) ∈ S′)

A category of general systems will then be defined by all relations S ⊂ X × Y and the relational homomorphisms between them.

(iii) Algebraic Modeling Approach
If X, X′, Y, and Y′ are algebras, the mappings h and k of the structure modeling approach can be limited to homomorphisms from X to X′ and from Y to Y′, respectively, to yield yet another category of general systems. The categories constructed in this way are somewhat more restrictive than the ones defined by using the structure modeling approach, but they have important advantages too, namely:

(i) Additional structure allows a deeper analysis of various systems properties. The introduction of such additional structure algebraically is most appropriate for the categorical considerations.

(ii) The homomorphisms have a rather nice interpretation as "modeling functions." Namely, in various applications, the homomorphisms can be defined as simplifying the structure of the original system while preserving the essential relationships and suppressing the unessential details. The homomorphic image then represents "a simplified" system, i.e., a model of the original system. This argument is also applicable to the structure modeling approach. However, if we require a model of, for instance, a linear system to be linear, the structure modeling approach does not satisfy the requirement.
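The relational-homomorphism condition of the structure modeling approach is mechanically checkable on finite relations. A toy sketch, with the relations and the maps invented for the example:

```python
# Relational homomorphism check: h x k maps S into S' iff
# (x, y) in S implies (h(x), k(y)) in S'.

def is_relational_hom(S, S2, h, k):
    return all((h(x), k(y)) in S2 for (x, y) in S)

# Invented example: S relates n to n^2; h and k reduce mod 3, and S2 is
# the mod-3 image of the squaring relation.
S  = {(n, n * n) for n in range(10)}
S2 = {(a, (a * a) % 3) for a in range(3)}
h = lambda n: n % 3
k = lambda m: m % 3

print(is_relational_hom(S, S2, h, k))  # -> True
```

The check is one-directional, matching the definition: pairs of S′ need not come from pairs of S.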
In this chapter, we shall follow, in principle, the algebraic modeling approach. However, when we treat only the categories of general systems without any specific algebraic structures, the approach can eventually be considered equal to the structure modeling approach. More specifically, when considering general systems, our approach can be viewed as a special case of the algebraic modeling approach, where, since no algebraic structure is explicitly assumed to exist in the objects, every function can be a homomorphism.

If a system S ⊂ X × Y is a time system, elements of X and Y have a special structure; i.e., x : T → A and y : T → B, where X ⊂ A^T and Y ⊂ B^T. Consequently, when we call a function h : X → X′ a homomorphism of inputs of time systems, we always assume that h is constructed from a function (homomorphism) h_A : A → A′ such that h(x)(t) = h_A(x(t)) for every t ∈ T, where X′ ⊂ A′^T. This construction implies the following property:

(∀x^t)(∀x_t′)(∀x_t″)(x^t · x_t′ ∈ X & x^t · x_t″ ∈ X → h(x^t · x_t′)|T^t = h(x^t · x_t″)|T^t)

Therefore, we can define an extension of a homomorphism h : X → X′ of a time system as h^t : X^t → X′^t such that h^t(x^t) = h(x^t · x_t)|T^t, where x_t is arbitrary provided x^t · x_t ∈ X holds. Similarly, h_tt′ : X_tt′ → X′_tt′ can be defined in an analogous way. In other words, a homomorphism of time objects is required to be at least a homomorphism with respect to the operations of concatenation and restriction.
(b) Homomorphic Models

In general, there are many ways to formulate the concept of models. However, conceptually, the most convincing mathematical formulation is by means of homomorphisms. This is the modeling approach used in this chapter. We shall explore briefly the meaning of such homomorphic models. Let X, X′, Y, and Y′ be Ω-algebras, while

h₁ : X → X′ and h₂ : Y → Y′

are homomorphisms; it is easy to show then that

h = (h₁, h₂) : X × Y → X′ × Y′ such that h(x, y) = (h₁(x), h₂(y))
is also a homomorphism, where the algebraic structures of X × Y and X′ × Y′ are defined in the usual way. The concept of a homomorphic model is then given in the following definition.

Definition 1.1. Let S ⊂ X × Y and S′ ⊂ X′ × Y′ be general systems, where h = (h₁, h₂) is a homomorphism from X × Y into X′ × Y′ and h₁ is onto. Then S′ is a model of S if and only if

(∀(x, y))((x, y) ∈ S → h(x, y) ∈ S′)

Furthermore, S and S′ are called equivalent if and only if h is an isomorphism and

(∀(x′, y′))((x′, y′) ∈ S′ → h⁻¹(x′, y′) ∈ S)

is also satisfied. Similarly, in the case of dynamical systems, the concept of homomorphic models is given by the following definition.
Definition 1.2. Let

S = {(ρ_t : C_t × X_t → Y_t, φ_tt′ : C_t × X_tt′ → C_t′) : t, t′ ∈ T}

and

Ŝ = {(ρ̂_t : Ĉ_t × X̂_t → Ŷ_t, φ̂_tt′ : Ĉ_t × X̂_tt′ → Ĉ_t′) : t, t′ ∈ T}

be dynamical systems, where h = (h_X, h_Y, h_C) is a homomorphism from X_t × Y_t × C_t into X̂_t × Ŷ_t × Ĉ_t for every t ∈ T and h_X is onto. Then Ŝ is a model of S if and only if, for every c_t and x_t,

h_Y(ρ_t(c_t, x_t)) = ρ̂_t(h_C(c_t), h_X(x_t))

Furthermore, S and Ŝ are called equivalent if and only if h is an isomorphism.

Similar definitions can be introduced for other kinds of system representations, such as global-state response, input response, and others. As mentioned above, a homomorphic model has a convenient property; i.e., it preserves the algebraic structures that are of interest and suppresses noninteresting details. This is fully in accord with the intuitive concept of a model. Let us examine the situation in some detail for the class of dynamical systems. Given the response function of a dynamical system, ρ_t : C_t × X_t → Y_t, and homomorphisms h = (h_X, h_Y, h_C), let Ĉ_t, X̂_t, and Ŷ_t be the homomorphic images of C_t, X_t, and Y_t, respectively, and let ρ̂_t be the homomorphic image of ρ_t. Let E_X ⊂ X_t × X_t be such that

(x_t, x_t′) ∈ E_X ↔ h_X(x_t) = h_X(x_t′)
1. General Systems and Homomorphic Models
Naturally, E_X is an equivalence relation. Similarly, let E_Y and E_C be the equivalence relations derived from h_Y and h_C. Since h_X, h_Y, and h_C are homomorphisms, the quotient sets X_t/E_X, Y_t/E_Y, and C_t/E_C can have the same algebraic structures as X_t, Y_t, and C_t. However, two different elements x_t and x_t', where (x_t, x_t') ∈ E_X, may not be considered equivalent with respect to another algebraic structure not embodied in the definition of the homomorphism h_X; such structure is completely ignored by h_X. Let

ρ_tM([c_t], [x_t]) = [ρ_t(c_t, x_t)]
where c_t ∈ [c_t] ∈ C_t/E_C and x_t ∈ [x_t] ∈ X_t/E_X. The above definition is proper because

h_Y(ρ_t(c_t, x_t)) = ρ̂_t(h_C(c_t), h_X(x_t))

holds. It is easy to show that ρ_tM and ρ̂_t are equivalent; i.e., an isomorphism can be found between them. We have, therefore, the following commutative diagram, where η_X, η_Y, and η_C are the natural mappings:

[commutative diagram relating ρ_t, ρ_tM, and ρ̂_t through the natural mappings η_X, η_Y, η_C and the homomorphism h; not recoverable from this reproduction]

ρ̂_t is a simplified model of ρ_t because of the isomorphic relationship it has with ρ_tM, which is defined on the quotient sets. It should be noticed that any homomorphism can generate a model; i.e., a model is determined by a homomorphism, and the selection of a homomorphism depends on the properties of the original system that are taken as important for modeling. Also, notice that only h_X is required to be an onto mapping. Hence, the homomorphic image of the original system is, in general, a proper subset of the model. The model, then, has certain implications not present in the original system, which, again, is compatible with the intuitive notion of a model.
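The quotient construction above can be made concrete for finite sets: E_X is the kernel of h_X, and ρ_tM is well defined because the value of ρ_t is constant on each class. A sketch with illustrative sets and maps (assumptions, not data from the text):

```python
# Quotients by the kernels of h_C and h_X, and a check that
# ρ_tM([c], [x]) = [ρ_t(c, x)] does not depend on representatives.
from itertools import product

C_t, X_t = ['c1', 'c2'], ['x1', 'x2', 'x3']
h_C = {'c1': 'C', 'c2': 'C'}
h_X = {'x1': 'P', 'x2': 'P', 'x3': 'Q'}

def classes(elems, h):
    """Quotient of elems by the kernel of h: [e] = h^{-1}(h(e))."""
    out = {}
    for e in elems:
        out.setdefault(h[e], []).append(e)
    return list(out.values())

def rho_t(c, x):                       # an illustrative response function,
    return h_C[c] + h_X[x]             # constant on E_C × E_X classes

for cls_c, cls_x in product(classes(C_t, h_C), classes(X_t, h_X)):
    values = {rho_t(c, x) for c, x in product(cls_c, cls_x)}
    assert len(values) == 1            # ρ_tM is well defined on the quotient
print("rho_tM is well defined")
```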
The homomorphism h = (h_X, h_Y, h_C) in Definition 1.2 is given solely in reference to ρ; it is not required, therefore, that the following diagram be satisfied:

[commutative diagram for the state-transition functions φ_tt' and φ̂_tt'; not recoverable from this reproduction]
However, we shall show now that an appropriately defined state-transition function of a model given by Definition 1.2 is related with the state transition of the corresponding original system in a way which is quite natural from the input-output viewpoint, where the concept of the state is a secondary, derived concept. Let E_t ⊂ C_t × C_t be such that

(c_t, c_t') ∈ E_t ↔ (∀x_t)(ρ_t(c_t, x_t) = ρ_t(c_t', x_t))

Then E_t is an equivalence relation. It should be noticed that if (c_t, c_t') ∈ E_t holds, c_t need not be distinguished from c_t', since both always produce the same output for the same input. So we can use the quotient set C_t/E_t for a state object. We shall show that E_t is a congruence relation with respect to h_C and φ_tt'(·, x_tt'); i.e.,
(i) [c_t] = [c_t'] → [h_C c_t] = [h_C c_t']
(ii) [c_t] = [c_t'] → (∀x_tt')([φ_tt'(c_t, x_tt')] = [φ_tt'(c_t', x_tt')])

where, naturally, [h_C c_t] is an equivalence class on Ĉ_t/Ê_t and Ê_t ⊂ Ĉ_t × Ĉ_t. For (i),

[c_t] = [c_t']
→ (∀x_t)(ρ_t(c_t, x_t) = ρ_t(c_t', x_t))
→ (∀x_t)(h_Y ρ_t(c_t, x_t) = h_Y ρ_t(c_t', x_t))
→ (∀x_t)(ρ̂_t(h_C c_t, h_X x_t) = ρ̂_t(h_C c_t', h_X x_t))
→ (∀x̂_t)(ρ̂_t(h_C c_t, x̂_t) = ρ̂_t(h_C c_t', x̂_t))      (h_X is onto!)
→ [h_C c_t] = [h_C c_t']
Consequently, we can define extensions of φ_tt' and h_C as follows:

φ̃_tt' : (C_t/E_t) × X_tt' → C_t'/E_t'

such that

φ̃_tt'([c_t], x_tt') = [φ_tt'(c_t, x_tt')]

and correspondingly h̃_C([c_t]) = [h_C c_t]; these are proper by (ii) and (i), respectively.
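For a finite system, the reduction by E_t amounts to merging states with identical response rows and passing the transition to the quotient. The following sketch uses a simplified one-step analogue of E_t with purely illustrative data:

```python
# States are E_t-equivalent here when they yield the same output for
# every input; φ̃([c], x) = [φ(c, x)] is proper when E_t is a congruence.
states = ['s0', 's1', 's2']
inputs = ['u', 'v']
rho = {('s0', 'u'): 0, ('s0', 'v'): 1,      # s0 and s1 are equivalent
       ('s1', 'u'): 0, ('s1', 'v'): 1,
       ('s2', 'u'): 1, ('s2', 'v'): 1}
phi = {('s0', 'u'): 's2', ('s0', 'v'): 's0',
       ('s1', 'u'): 's2', ('s1', 'v'): 's1',
       ('s2', 'u'): 's2', ('s2', 'v'): 's2'}

def state_class(c):
    """[c] under E_t: the states with the same response row as c."""
    row = tuple(rho[(c, x)] for x in inputs)
    return frozenset(d for d in states
                     if tuple(rho[(d, x)] for x in inputs) == row)

# Congruence check: equivalent states transit into equivalent states.
for c in states:
    for d in state_class(c):
        for x in inputs:
            assert state_class(phi[(c, x)]) == state_class(phi[(d, x)])
print(sorted(len(k) for k in {state_class(c) for c in states}))  # [1, 2]
```

The redundant state s1 disappears in the quotient, which is exactly the elimination of "useless" states discussed below.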
We have now the following proposition.

Proposition 1.1. The commutative diagram associated with ρ_t as given in Definition 1.2 implies the commutativity of the following diagram:

[commutative diagram for the extended maps φ̃_tt' and h̃_C; the diagram, the accompanying definitions, and the proof are not recoverable from this reproduction]
Q.E.D.

The above proposition means that if the state object C_t is allowed to have redundant, "useless," states, the state transition may not satisfy the commutative diagram; however, the state transition does satisfy the desired commutativity if we consider only the essential aspect of C_t, i.e., if the redundant states are eliminated. It should be noticed again that the redundancy of states is a direct consequence of considering the state objects as a secondary concept.

Before moving to the categorical considerations of general systems, let us illustrate the basic idea which we shall use for the categorical classifications. Let A and B in Fig. 1.1 be two categories of systems. The objects in each category (i.e., the systems of a given type) are related by a homomorphic modeling relation, which is a morphism in the respective category. The categories of systems are related by functors; specifically, we shall be interested in two types of functors, termed constructive functor and forgetful functor, respectively. The constructive functor maps the systems with less structure into the systems with more structure (i.e., more structure in the respective sets S).
[Fig. 1.1. Two categories of systems, A and B. Within each category, structured system representations are related by homomorphic modeling (morphisms); the categories are related by a constructive functor in one direction and an abstraction (forgetful) functor in the other.]
For instance, a constructive functor maps general systems into systems with response functions, or time systems into dynamical systems. The forgetful functor maps the systems in the opposite direction; e.g., it maps dynamical systems into time systems, etc.

2. CATEGORIES OF GENERAL SYSTEMS
In this section, we shall construct several categories of general systems and establish the existence of some functors and relationships between them. Since it is, in general, necessary to prove that a class of objects and a class of morphisms form a category, we shall present various definitions, including those of categories, in the form of propositions. Where the required conditions are obviously satisfied, the proof will be omitted. Following the algebraic approach, the most immediate category is given by the following proposition.

Proposition 2.1. Let Ob S be the class of all relations S ⊂ X × Y, where X and Y are arbitrary sets and X = 𝒟(S), and for each pair S, S' ∈ Ob S, let Mor (S, S') be the set of all pairs of functions f = (h_X, h_Y), h_X : X → X' and h_Y : Y → Y', such that h_X is onto and

(x, y) ∈ S → (h_X(x), h_Y(y)) ∈ S'
Define the composition in {Mor (S, S') : S, S' ∈ Ob S} by

(h_X, h_Y) · (h_X', h_Y') = (h_X · h_X', h_Y · h_Y')

where h_X · h_X' and h_Y · h_Y' are the usual compositions of functions, i.e.,

(h_X · h_X')(x) = h_X(h_X'(x))
(Ob S, {Mor (S, S') : S, S' ∈ Ob S}) is then a category, which will be denoted by S and termed the category of general systems.

The category of general systems as defined in Proposition 2.1 presents a difficulty in subsequent theoretical developments. This difficulty is due to the fact that under the general functions used in Proposition 2.1, a function-type relation R, which is defined by

(∀(x, y))(∀(x', y'))((x, y) ∈ R & (x', y') ∈ R & x = x' → y = y')

can be mapped into a nonfunction relation, i.e., a single-valued relation into a many-valued relation. To avoid this, we shall introduce a subcategory of S.
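The difficulty is easy to exhibit on a finite example: a morphism of S may merge two inputs while keeping their outputs distinct. The data below are illustrative assumptions:

```python
# A functional relation mapped to a nonfunctional one by a morphism
# of S, motivating the function-preserving subcategory S_f.
R = {(1, 'a'), (2, 'b')}            # functional: one y per x
h_X = {1: 0, 2: 0}                  # onto X' = {0}; merges the inputs
h_Y = {'a': 'A', 'b': 'B'}          # keeps the outputs distinct

image = {(h_X[x], h_Y[y]) for (x, y) in R}

def is_functional(rel):
    xs = [x for (x, _) in rel]
    return len(xs) == len(set(xs))

print(is_functional(R), is_functional(image))   # True False
```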
Proposition 2.2. Let Ob S_f be the class as defined in Proposition 2.1 and Mor_f (S, S') the subclass of functions defined in Proposition 2.1 such that for any f ∈ Mor_f (S, S'), if R ∈ Ob S_f is functional, so is f(R).

(Ob S_f, {Mor_f (S, S') : S, S' ∈ Ob S_f}) is then a category, which will be denoted by S_f and termed the function-preserving category of general systems.

The classes of objects for S and S_f are the same, but when we want to indicate which category is in question, the class of objects for S_f will be denoted by Ob S_f. Although both categories refer to the same class of systems, the modeling relationships used are different, with the requirements for S_f being slightly stronger than for S. Namely, a system S' can be a model of another system S in S even if not so in S_f; conversely, however, if S' is a model of S in S_f, it is always so in S as well. This fact can be expressed by a functor in the following proposition.

Proposition 2.3. Let H₁ : S_f → S be the identity; that is, for every S ∈ Ob S_f, H₁(S) = S, and for every (h_X, h_Y) ∈ Mor (S, S') in S_f, H₁((h_X, h_Y)) = (h_X, h_Y). H₁ is then a functor.
We shall now introduce a category for the consideration of systems-response functions.

Proposition 2.4. Let Ob S_ρ be the class of all functions ρ : C × X → Y, where C, X, and Y are arbitrary sets. For any ρ, ρ' ∈ Ob S_ρ, let Mor (ρ, ρ') be the set of all triplets of functions f = (h_X, h_Y, h_C), h_X : X → X', h_Y : Y → Y', h_C : C → C', such that h_X is onto and the diagram

[commutative square from C × X to Y via ρ and from C' × X' to Y' via ρ', with vertical maps h_C × h_X and h_Y]

is commutative, i.e.,

h_Y · ρ(c, x) = ρ'(h_C · c, h_X · x)

for every x ∈ X and c ∈ C. Define the composition in {Mor (ρ, ρ') : ρ, ρ' ∈ Ob S_ρ} by

(h_X, h_Y, h_C) · (h_X', h_Y', h_C') = (h_X · h_X', h_Y · h_Y', h_C · h_C')
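The commutativity condition of Proposition 2.4 is again finitely checkable. A sketch with illustrative response functions and maps (assumptions, not data from the text):

```python
# A morphism of S_ρ is a triple (h_X, h_Y, h_C) with
# h_Y(ρ(c, x)) = ρ'(h_C(c), h_X(x)) for all c and x.
C, X = [0, 1], [0, 1]
rho = lambda c, x: (c + x) % 2               # ρ : C × X → Y, Y = {0, 1}
rho2 = lambda c, x: 0                        # ρ' collapses all outputs
h_X = {0: 0, 1: 1}                           # onto
h_Y = {0: 0, 1: 0}
h_C = {0: 0, 1: 0}

commutes = all(h_Y[rho(c, x)] == rho2(h_C[c], h_X[x])
               for c in C for x in X)
print(commutes)                              # True
```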
(Ob S_ρ, {Mor (ρ, ρ') : ρ, ρ' ∈ Ob S_ρ}) is then a category, which we shall denote by S_ρ and refer to as the category of global-systems responses.

Consider now the relationship among the categories S, S_f, and S_ρ. A functor has already been established between S and S_f. Relationships of S and S_f with S_ρ are characterized by the existence of two functors: a forgetful functor F₁ : S_ρ → S, which to every global-response function associates the corresponding general system, and the constructive functor G₁ : S_f → S_ρ, which to every general system assigns a global response. For each relation S ⊂ X × Y, let

Fun(S) = {f : f : X → Y & f ⊆ S}

Proposition 2.5. Let G₁ : S_f → S_ρ be a mapping such that

(i) for every S ∈ Ob S_f, G₁(S) = ρ, where C = Fun(S) ⊆ Y^X and ρ : C × X → Y is such that

ρ(c, x) = c(x)

for all (c, x) ∈ C × X;
(ii) for every (h_X, h_Y) ∈ Mor (S, S'), G₁
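On objects, the constructive functor G₁ can be computed directly for finite systems: enumerate the functions X → Y contained in S and use them as the state object. A sketch with illustrative data:

```python
# G_1 on objects: C = Fun(S) = {f : X → Y | f ⊆ S}, ρ(c, x) = c(x).
from itertools import product

X, Y = [1, 2], ['a', 'b']
S = {(1, 'a'), (1, 'b'), (2, 'a')}          # a general system S ⊂ X × Y

def fun(S, X, Y):
    """Fun(S): all functions X → Y contained in S, each as a dict."""
    out = []
    for ys in product(Y, repeat=len(X)):
        f = dict(zip(X, ys))
        if all((x, f[x]) in S for x in X):
            out.append(f)
    return out

C = fun(S, X, Y)                            # state object of G_1(S)
rho = lambda c, x: c[x]                     # global response ρ(c, x) = c(x)
print(len(C))                               # 2: {1↦a, 2↦a} and {1↦b, 2↦a}
```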