GENERAL DYNAMICAL PROCESSES
A Mathematical Introduction
THOMAS G. WINDEKNECHT
Department of Electrical Engineering
Michigan Technological University
Houghton, Michigan
ACADEMIC PRESS
NEW YORK AND LONDON
COPYRIGHT © 1971, BY ACADEMIC PRESS, INC.
ALL RIGHTS RESERVED. NO PART OF THIS BOOK MAY BE REPRODUCED IN ANY FORM, BY PHOTOSTAT, MICROFILM, RETRIEVAL SYSTEM, OR ANY OTHER MEANS, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS.
ACADEMIC PRESS, INC.
111 Fifth Avenue, New York, New York 10003
United Kingdom Edition published by ACADEMIC PRESS, INC. (LONDON) LTD.
Berkeley Square House, London W1X 6BA
LIBRARY OF CONGRESS CATALOG CARD NUMBER: 79-137604
AMS (MOS) 1970 Subject Classification: 93A10
PRINTED IN THE UNITED STATES OF AMERICA
To Margaret and Laura, John, and Beth
Contents

PREFACE
ACKNOWLEDGMENTS

1. General Processes
1.1 Introduction
1.2 Sets and Classes
1.3 Relations and Functions
1.4 Time Sets
1.5 Processes
1.6 Processors
1.7 Free, Functional, and Uncoupled Processors
1.8 Process Morphisms
1.9 Exercises
and the union of a class,
∪𝒜 = {a | (∃A): a ∈ A & A ∈ 𝒜}
It can be proved that
∩∅ = 𝒰,  ∪∅ = ∅,  ∩𝒰 = ∅,  ∪𝒰 = 𝒰
Next, we define
A ⊂ B ⇔ (∀a): (a ∈ A ⇒ a ∈ B)
and we say A is a subclass of B, A is contained in B, and B contains A iff A ⊂ B. The following elementary properties hold in general:
∅ ⊂ A,  A ⊂ A,  A ⊂ 𝒰
A = B ⇔ A ⊂ B & B ⊂ A
A ⊂ B & B ⊂ C ⇒ A ⊂ C
A ⊂ B ⇔ A ∪ B = B
A ⊂ B ⇔ A ∩ B = A
A ⊂ B ⇒ ∪A ⊂ ∪B & ∩B ⊂ ∩A
A ∈ B ⇒ A ⊂ ∪B & ∩B ⊂ A
The power class of a class A is the class of all subclasses of A, i.e.,
2^A = {B | B ⊂ A}
The singleton of a class A is the class
{A} = {B | B ∈ 2^A & A ∈ 2^B}
It follows from the definition that
{A} = {B} ⇔ A = B
’I’hc doublet o f two classes A and B is thc class
{A-l, B )
--
(9)U { R )
and it follows that
In Kelley, the concept of a set is defined in the following manner:
A is a set ⇔ (∃B): A ∈ B
That is, a set is a class which is an element of some class. From this definition, the above axiom schema about extensions of formulas reduces to
a ∈ {a | 𝒫} ⇔ (a is a set & 𝒫)
Thus, {a | 𝒫} is precisely the class of all sets a for which 𝒫 is true.† Three of the axioms of set theory have to do with postulating that certain of the elementary operations on classes introduced above, when applied to sets, yield sets. For example, the following is among Kelley's seven axioms: If A is a set, then there exists a set B such that (∀C): C ⊂ A → C ∈ B. Using this axiom, it is proved that: (i) if B is a set and A ⊂ B, then A is a set; and (ii) for any set A, 2^A is a set. The other two such axioms are: If A is a set and B is a set, then A ∪ B is a set. If A is a set, then ∪A is a set. From these axioms, Kelley carefully develops the fact that if A and B are sets, then A ∩ B, A − B, ∩A (A ≠ ∅), {A}, and {A, B} are sets. Finally, it is proved that (i) A ∈ 𝒰 ⇔ A is a set; (ii) 𝒰 ∉ 𝒰; (iii) 𝒰 is not a set. Here, A ∉ B ⇔ ¬(A ∈ B). The universal class is thus the class of all sets. It is furthermore a prime example of a class which is not a set.
1.3
RELATIONS AND FUNCTIONS
Many of the above basic set-theoretic concepts play only a relatively minor role in our considerations. Of much greater significance are the concepts of relations and functions and their elementary properties. Again, we shall only sketch the basic prerequisites using Kelley [46] as a basis. If A and B are classes, then the Cartesian product of A and B is the class
A × B = {(a, b) | a ∈ A & b ∈ B}
Here, (a, b) is an ordered pair, which is defined as
(a, b) = {{a}, {a, b}}
† The condition "a is a set" in the axiom schema is important in avoiding certain classical paradoxes in set theory. However, in our general systems developments it is a mere technicality whose verification, being straightforward, is generally omitted altogether.
Kelley proves that if a and b are sets, then (a, b) is a set, and
(a, b) = (c, d) ⇔ a = c & b = d
It is in this latter sense that an ordered pair is "ordered," and we see that a doublet is an "unordered" pair. A relation is a class of ordered pairs, i.e.,
S is a relation ⇔ (∀s): (s ∈ S → (∃a)(∃b): s = (a, b))
We write aSb ⇔ (a, b) ∈ S. The classes
𝒟S = {a | (∃b): aSb},  ℛS = {b | (∃a): aSb}
then are the domain and range of S, respectively. A relation S is a function or a map iff
aSb & aSc ⇒ b = c
(in which case one writes S: 𝒟S → ℛS (onto), and S: 𝒟S → Y if ℛS ⊂ Y), and is improper iff
S = 𝒟S × ℛS
In general, S ⊂ 𝒟S × ℛS, so S is improper iff 𝒟S × ℛS ⊂ S; indeed, if for any classes A and B,
S = A × B
then S is improper, for in this case 𝒟S = A and ℛS = B. In general, we define
aS = ∩{b | aSb}
If S is a function, then, as is readily proved,† b = aS ⇔ aSb. Thus, in this case,
S = {(a, b) | b = aS}
† We thus choose to denote the image of a function S at a point a as aS instead of as the more usual S(a).
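As an illustrative aside (not part of the original text), the following Python sketch mirrors the preceding definitions on small finite relations; the particular relations S, F, and G and the helper names are hypothetical choices made only for the example.

    # Finite relations as Python sets of ordered pairs.
    S = {(1, 'a'), (2, 'b'), (2, 'c')}     # not a function: 2 has two images
    F = {(1, 'a'), (2, 'b'), (3, 'b')}     # a function, but not 1 : 1
    G = {('a', 'x'), ('b', 'y'), ('c', 'y')}

    def domain(R):
        return {a for (a, b) in R}

    def rng(R):
        return {b for (a, b) in R}

    def is_function(R):
        # aRb & aRc  =>  b = c
        return all(b == c for (a, b) in R for (a2, c) in R if a == a2)

    def compose(R, S):
        # R o S = {(a, b) | exists c: aRc & cSb}; note a(R o S) = (aR)S
        return {(a, b) for (a, c) in R for (c2, b) in S if c == c2}

    def converse(R):
        return {(b, a) for (a, b) in R}

    def restrict(R, A):
        return {(a, b) for (a, b) in R if a in A}

    print(is_function(S), is_function(F))            # False True
    print(domain(compose(F, G)), rng(compose(F, G))) # {1, 2, 3} {'x', 'y'}
    print(converse(F) == {('a', 1), ('b', 2), ('b', 3)})   # True
    print(restrict(F, {1, 2}))                       # {(1, 'a'), (2, 'b')}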
Moreover, if S and R are functions, then
S = R ⇔ (∀a): aS = aR
If R ⊂ S, then 𝒟R ⊂ 𝒟S and ℛR ⊂ ℛS. The composition of R and S is defined as
R ∘ S = {(a, b) | (∃c): aRc & cSb}
In general, R ∘ S is a relation, and
𝒟(R ∘ S) ⊂ 𝒟R  and  ℛ(R ∘ S) ⊂ ℛS
If ℛR ⊂ 𝒟S, then 𝒟(R ∘ S) = 𝒟R. If 𝒟S ⊂ ℛR, then ℛ(R ∘ S) = ℛS. Quite importantly,
(R ∘ S) ∘ U = R ∘ (S ∘ U)
The composition of two functions is itself a function. In fact, if S and R are functions, then for all a ∈ 𝒟(R ∘ S),
a(R ∘ S) = (aR)S
The converse of a relation S is the relation
S⁻¹ = {(b, a) | aSb}
In general, 𝒟S⁻¹ = ℛS and ℛS⁻¹ = 𝒟S. Also, (S⁻¹)⁻¹ = S and, for any relations R and S,
(R ∘ S)⁻¹ = S⁻¹ ∘ R⁻¹
For any A, the class
1_A = {(a, a) | a ∈ A}
is the identity relation on A. We see 𝒟1_A = ℛ1_A = A. Moreover, (1_A)⁻¹ = 1_A. A relation S is a 1 : 1 function iff S is a function and
aS = bS ⇒ a = b
or, in other words, for all aSc, bSd: c = d ⇒ a = b.
It readily follows that S is a 1 : 1 function iff both S and S⁻¹ are functions. Also, S is a 1 : 1 function iff S is a function and
S ∘ S⁻¹ ⊂ 1_𝒰
If S is a 1 : 1 function, then S⁻¹ is a 1 : 1 function. For any A, 1_A is a 1 : 1 function. Finally, if S and R are 1 : 1 functions, then S ∘ R is a 1 : 1 function. If S is a relation, then for any A,
S/A = {(a, b) | aSb & a ∈ A}
is the restriction of S on A. A relation R is an extension of S iff for some A, S = R/A. In general, S/A is itself a relation, 𝒟(S/A) = 𝒟S ∩ A, and ℛ(S/A) ⊂ ℛS. Indeed, S/A ⊂ S. We see S/𝒟S = S and (S/A)/B = S/(A ∩ B). Any subclass of a relation [a function; a 1 : 1 function] is a relation [a function; a 1 : 1 function]. If S is a function, then R is a subclass of S (i.e., R ⊂ S) iff R = S/𝒟R. It follows that if S is a function,
R ⊂ S & 𝒟R = 𝒟S ⇒ R = S
Now, very importantly for our work, we note
B^A = {f | f: A → B}
That is, B^A denotes the class of all functions mapping A into B. The following conditions hold in the general case:
(i) B ⊂ C ⇒ B^A ⊂ C^A.
(ii) B^A ⊂ (B ∪ C)^A.
We now can consider the remaining axioms of set theory. Kelley gives: If S is a function and 𝒟S is a set, then ℛS is a set. He subsequently proves: (i) {A} × B is a set, (ii) A × B is a set, and (iii) B^A is a set whenever both A and B are sets. He further proves: If S is a function and 𝒟S is a set, then S is a set. The remaining axioms of set theory are: If A ≠ ∅, then there exists a class B ∈ A such that A ∩ B = ∅. This axiom leads to A ∉ A and ¬(A ∈ B & B ∈ A). Another axiom is that there exists
a set B such that ∅ ∈ B and A ∪ {A} ∈ B whenever A ∈ B. This axiom leads to a derivation of the Peano postulates for the natural numbers (nonnegative integers) and hence to the existence of this set. The existence of other such sets follows. Finally, there is the axiom of choice. This axiom, which admits a truly magnificent set of alternative formulations and which has been the subject of some controversy in the past, is particularly well treated by Suppes [49]. There, it is shown that one formulation of the axiom of choice (and one which is particularly useful for our work) is: For every relation R there exists a function f ⊂ R such that 𝒟f = 𝒟R.
1.4
TIME SETS
As we mentioned above, we will address ourselves in this book to dynamical processes (or systems) which are parametrized by (evolve in) time. Our first order of business is then to develop a suitable mathematical representation of time, which we shall do in this section. Little motivation for the particular representation of time we choose can be given here. However, at the beginning of Chapter 3, some pertinent discussion will be presented. It suffices to say that the main mathematical structure we shall have in our considerations will be that imposed on a time set, i.e., that which is developed in the next few pages.
A semigroup [43, 44, 47] is a set T together with a function +: T × T → T [whose images are written as t + t′ instead of as (t, t′)+] which satisfies
t + (t′ + t″) = (t + t′) + t″
for all t, t′, t″ ∈ T. A semigroup T is a monoid if there exists an element 0 ∈ T such that
t + 0 = t = 0 + t
for all t ∈ T. 0 is an identity. A semigroup T has at most one identity, i.e., if 0 and 0′ are identities in T, then
0 = 0 + 0′ = 0′
Thus, the identity of a monoid T is unique. (Proper) left division over a monoid T is the relation < on T such that
t < t′ ⇔ (∃t″): t″ ≠ 0 & t′ = t + t″   (t″ ∈ T)
1.4.1 Definition. Let T be a monoid. T is a time set iff
(I) (∃t₁)(∃t₂): (t₁ = 0 ∨ t₂ = 0) & t₁ + t = t′ + t₂   (t₁, t₂ ∈ T).
(II) t₁ + t = t + t₂ ⇔ t₁ = t₂.
(III) t₁ + t = 0 ⇒ t₁ = 0.
We think of a time set as a mathematical model of real time. As such, it is clear that a time set should have a number of properties, most especially a simple ordering. That it does is shown in the following theorem.
1.4.2 Theorem. Let T be a monoid. If T is a time set, then
(i) t + t′ = t′ + t (commutativity).
(ii) t + t′ = t + t″ ⇒ t′ = t″ (left cancellation).
(iii) t < t′ ∨ t = t′ ∨ t′ < t (connectedness).
(iv) ¬(t < t) (irreflexivity).
(v) t < t′ ⇒ ¬(t′ < t) (asymmetry).
(vi) t < t′ & t′ < t″ ⇒ t < t″ (transitivity).
(vii) t < t′ ⇔ t″ + t < t″ + t′ (left invariance).
(viii) t < t′ ⇒ t < (t′ + t″) (right extension).
(ix) t ≠ 0 ⇔ 0 < t (least element).
(x) t′ ≠ 0 ⇔ t < t + t′.
PROOF. We have
(i) t′ = t′ ⇒ t′ + t = t + t′ (by axiom II).
(ii) t + t′ = t + t″ ⇒ t′ + t = t + t″ (by i) ⇒ t′ = t″ (by II).
(iii) (∃t₁)(∃t₂): (t₁ = 0 ∨ t₂ = 0) & t₁ + t = t′ + t₂ (i.e., I) ⇒ (t₁ ≠ 0 & t₂ = 0 & t₁ + t = t′ + t₂) ∨ (t₁ = 0 & t₂ = 0 & t₁ + t = t′ + t₂) ∨ (t₁ = 0 & t₂ ≠ 0 & t₁ + t = t′ + t₂) ⇒ (t₁ ≠ 0 & t′ = t + t₁) ∨ (t = t′) ∨ (t₂ ≠ 0 & t = t′ + t₂) (using i) ⇒ t < t′ ∨ t = t′ ∨ t′ < t.
(iv) t′ ≠ 0 & t = t + t′ ⇒ t′ ≠ 0 & 0 + t = t + t′ ⇒ t′ ≠ 0 & t′ = 0, which is impossible.
(v) t₁ ≠ 0 & t′ = t + t₁ & t₂ ≠ 0 & t = t′ + t₂ ⇒ t₁ ≠ 0 & t₂ ≠ 0 & t = (t + t₁) + t₂ ⇒ t₁ + t₂ ≠ 0 & t = t + (t₁ + t₂) (using III), which contradicts (iv).
(vi) t₁ ≠ 0 & t′ = t + t₁ & t₂ ≠ 0 & t″ = t′ + t₂ ⇒ t₁ ≠ 0 & t₂ ≠ 0 & t″ = (t + t₁) + t₂ ⇒ (t₁ + t₂) ≠ 0 & t″ = t + (t₁ + t₂).
(vii) t₁ ≠ 0 & t′ = t + t₁ ⇔ t₁ ≠ 0 & t″ + t′ = t″ + (t + t₁) ⇔ t₁ ≠ 0 & t″ + t′ = (t″ + t) + t₁ (using ii).
(viii) t₁ ≠ 0 & t′ = t + t₁ ⇒ t₁ ≠ 0 & t′ + t″ = (t + t₁) + t″ ⇒ (t₁ + t″) ≠ 0 & t′ + t″ = t + (t₁ + t″) (using III).
(ix) t ≠ 0 ⇒ t ≠ 0 & t = 0 + t ⇒ 0 < t; t = 0 ⇒ ¬(0 < t).
(x) t′ ≠ 0 ⇔ 0 < t′ ⇔ t + 0 < t + t′ ⇔ t < t + t′ (using vii and ix). ∎
REMARK. There are two time sets of particular importance. The set ρ of nonnegative real numbers is a time set with identity 0 under ordinary addition of reals. The set ω of nonnegative integers (the natural numbers) is a time set under addition of integers. Again, 0 is the identity. In both of these cases, left division turns out to be the usual inequality relation < on numbers. Throughout the following, we shall denote the above sets of numbers by ρ and ω, respectively. We shall presume the reader to be familiar with the elementary properties of these sets of numbers. In fact, we shall presume a familiarity with certain properties of these number sets beyond those indicated here for time sets in general.
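As an illustrative aside (not part of the original text), the following Python sketch treats the time set ω of nonnegative integers: it checks axioms (II) and (III) of Definition 1.4.1 on a finite sample and computes left division and proper subtraction. The function names and the sampled ranges are assumptions made only for the sketch.

    # The time set w of nonnegative integers under ordinary addition, identity 0.

    def left_divides(t, t_prime):
        # (Proper) left division: exists t'' != 0 with t' = t + t''
        return any(t_prime == t + t2 for t2 in range(1, t_prime + 1))

    def proper_subtract(t_prime, t):
        # complementation (-): t' - t is defined iff t = t' or t < t'
        if t_prime == t or left_divides(t, t_prime):
            return t_prime - t
        return None                     # undefined in the time set

    sample = range(6)
    axiom_II = all((t1 + t == t + t2) == (t1 == t2)
                   for t in sample for t1 in sample for t2 in sample)
    axiom_III = all(not (t1 + t == 0) or t1 == 0
                    for t in sample for t1 in sample)
    print(axiom_II, axiom_III)                        # True True
    print(left_divides(2, 5), left_divides(5, 5))     # True False
    print(proper_subtract(5, 2), proper_subtract(2, 5))   # 3 None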
1.4.3 Lemma. Let T be a monoid. If T is a time set, then the relation
(−) = {((t, t + t′), t′) | t, t′ ∈ T}
is a function.
PROOF. Immediate from the fact that T admits left cancellation. ∎
REMARK. The function (−) above is complementation over T. We write t′ − t instead of (t, t′)(−) for the images of (−). Given t, t′ ∈ T, t′ − t is defined iff either t = t′ or t < t′. In the case of the time sets ρ and ω, (−) is proper subtraction.
1.4.4 Lemma. Let T be a time set. A subset T′ ⊂ T is itself a time set iff: (i) 0 ∈ T′; (ii) t, t′ ∈ T′ ⇒ (t + t′) ∈ T′; and (iii) t, t′ ∈ T′ & t < t′ ⇒ (t′ − t) ∈ T′.
PROOF. As an exercise.
We conclude this section by establishing one further fact about time sets (in fact, about monoids in general). Let T be a semigroup with identity 0 under + and let T′ be a semigroup with identity 0′ under +′. A function h: T → T′ is a (monoid) homomorphism iff
0h = 0′  and  (t + t′)h = th +′ t′h   (t, t′ ∈ T)
T and T′ are isomorphic iff there exists a homomorphism h: T → T′ (1 : 1 onto). T and T′ are order isomorphic iff there exists a homomorphism h: T → T′ (1 : 1 onto) such that
t < t′ ⇔ th <′ t′h
where < and <′ are left division on T and T′, respectively.
1.4.5 Theorem. Isomorphic monoids are order isomorphic (and conversely).
PROOF. Let T and T′ be as above.
We have a simple characterization theorem for uncoupled T-processors.
1.7.2 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is uncoupled.
(ii) P¹P² ⊂ P.
(iii) There exist T-processes Q and R such that P = QR.
(iv) P = P¹P².
(v) P_* is improper.
PROOF. (i) ⇒ (ii). If P is uncoupled, then using Lemma 1.5.15,
uy ∈ P¹P² ⇒ u ∈ P¹ & y ∈ P² ⇒ uy ∈ P
(ii) ⇒ (iii). By Lemma 1.6.6, P ⊂ P¹P². Hence if P¹P² ⊂ P, then P = P¹P², and we may take Q = P¹ and R = P².
(iii) ⇒ (iv). If P = QR, then either P ≠ ∅, in which case Lemma 1.6.8 applies and
P¹ = (QR)¹ = Q,  P² = (QR)² = R
so that P = P¹P²; or P = ∅, in which case Q = ∅ ∨ R = ∅, so P¹ = P² = ∅ and again P = P¹P².
(iv) ⇒ (v). Using Lemma 1.6.10 and Corollary 1.6.12,
u(𝒟P_* × ℛP_*)y ⇒ u ∈ 𝒟P_* & y ∈ ℛP_* ⇒ u ∈ P¹ & y ∈ P² ⇒ uy ∈ P¹P² ⇒ uy ∈ P ⇒ uP_*y
that is, 𝒟P_* × ℛP_* ⊂ P_*. Thus, P_* is improper.
(v) ⇒ (i). If P_* is improper,
u ∈ P¹ & y ∈ P² ⇒ u ∈ 𝒟P_* & y ∈ ℛP_* ⇒ uP_*y ⇒ uy ∈ P
so P is uncoupled. ∎
1.7.3 Corollary. If P and Q are T-processes, then PQ is an uncoupled T-processor.
We remark that an uncoupled T-processor is in an important sense trivial.
1.7.4
Definition. If P is a T-processor, then the set
P⁻¹ = {yu | uy ∈ P}
is the inverse of P. If Q is a T-process, then the set
I_Q = {qq | q ∈ Q}
is the identity on Q.
REMARK. P⁻¹ is itself a T-processor. In general, (P⁻¹)_* = (P_*)⁻¹. By Corollary 1.6.12, it follows that (P⁻¹)⁻¹ = P. Also, (P⁻¹)¹ = P² and (P⁻¹)² = P¹. I_Q is a T-processor. We see that (I_Q)_* = 1_Q. Moreover, (I_Q)¹ = (I_Q)² = Q.
1.7.5 Definition. Let P be a T-processor. P is functional [bifunctional] iff for all uy, vz ∈ P,
u = v ⇒ y = z   [u = v ⇔ y = z]
1.7.6 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is functional [bifunctional].
(ii) P_* is a function [a 1 : 1 function].
(iii) P_*: P¹ → P² (onto) [P_*: P¹ → P² (1 : 1 onto)] and
uy ∈ P ⇔ y = uP_*
PROOF. Obvious. ∎
1.7.7 Lemma. Let P be a T-processor. P is bifunctional iff both P and P⁻¹ are functional. For any T-process Q, I_Q is bifunctional.
PROOF. Obvious. ∎
REMARK. For functional T-processors, the output y is unique given the input u, i.e., y = uP_*. It is no great distortion then to regard y as being "determined by" or "caused by" u. Indeed, this is exactly how the situation is usually regarded in systems theory, where the concept of "causality" is exceedingly important. The functional T-processors are distinguished by a very simple type of causality, and some much more sophisticated types exist. In Chapter 4, we shall consider some of these very sophisticated types in great detail.
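As an illustrative aside (not part of the original text), the following Python sketch represents a small functional T-processor over a three-point time set, with T-time functions written as tuples of values; the particular set P is a hypothetical example.

    # A processor is a set of (input, output) pairs of T-time functions, T = {0,1,2}.
    P = {
        ((0, 0, 0), (0, 0, 0)),
        ((1, 0, 0), (1, 1, 1)),
        ((1, 1, 0), (1, 2, 2)),
    }

    def is_functional(P):
        # u = v  =>  y = z   for all uy, vz in P
        return all(y == z for (u, y) in P for (v, z) in P if u == v)

    def is_bifunctional(P):
        # Lemma 1.7.7: bifunctional iff both P and its inverse are functional
        return is_functional(P) and is_functional({(y, u) for (u, y) in P})

    def P_star(P):
        # meaningful when P is functional: the map u |-> uP_*
        return {u: y for (u, y) in P}

    print(is_functional(P), is_bifunctional(P))   # True True
    print(P_star(P)[(1, 1, 0)])                   # (1, 2, 2)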
In the following definition, we introduce a deceivingly simple dichotomy of the T-processors. Actually, this dichotomy is very old and very important conceptually. Perhaps even now, though, it is not fully understood.
1.7.8 Definition. Let P be a T-processor. P is free iff 𝒜P¹ has one and only one element. If P is not free, it is said to be forced.
REMARK. A relation R is constant iff
aRb & cRd ⇒ b = d
It follows that a relation R is constant iff for some b, ℛR = {b}. Every constant relation is a function.
1.7.9
Definition. For any a,
ā = {(t, a) | t ∈ T}
REMARK. ā is a constant T-time function. It is easy to prove that a T-time function v is constant iff for some a, v = ā.
1.7.10 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is free.
(ii) For some a, 𝒜P¹ = {a}.
(iii) For some a, P¹ = {a}^T.
(iv) For some a, P¹ = {ā}.
(v) For some a, P = {ā}P².
(vi) For some a and some T-process Q, P = {ā}Q.
(vii) For some constant T-time function u and some T-process Q, P = {u}Q.
(viii) For some constant T-time function u, P¹ = {u}.
PROOF. (iv) ⇒ (v). If uy ∈ P, then u = ā and y ∈ P². Thus, uy ∈ {ā}P². Conversely,
uy ∈ {ā}P² ⇒ u = ā & y ∈ P² ⇒ u = ā & (∃v): vy ∈ P ⇒ u = ā & (∃v): (v ∈ P¹ & vy ∈ P) ⇒ u = ā & (∃v): (v = ā & vy ∈ P) ⇒ uy ∈ P
Thus, P = {ā}P². (vii) ⇒ (viii) is by Lemma 1.6.8. All of the remaining implications are obvious. ∎
REMARK. The concepts of free and forced processors (or systems) are quite well known in the various branches of systems theory. For example, the theory of dynamical systems or motions [63-67] is essentially a theory of certain kinds of free, continuous-time processors. We give an introduction to dynamical motions in Chapter 4.
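As an illustrative aside (not part of the original text), the following Python sketch builds a free T-processor in the sense of Theorem 1.7.10: its input set is a single constant T-time function. The particular sets u_bar and Q are hypothetical.

    u_bar = (5, 5, 5)                          # the constant T-time function "5" over T = {0,1,2}
    Q = {(0, 1, 2), (3, 3, 3)}                 # an arbitrary T-process used as output set
    P = {(u_bar, q) for q in Q}                # P = {u_bar}Q, as in condition (vi)

    P1 = {u for (u, y) in P}                   # the input set P^1
    P2 = {y for (u, y) in P}                   # the output set P^2
    is_free = len(P1) == 1 and all(len(set(u)) == 1 for u in P1)
    print(is_free)                             # True: P^1 = {u_bar}, condition (viii)
    print(P == {(u_bar, y) for y in P2})       # True: P = {u_bar}P^2, condition (v)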
1.7.11 Theorem. Let P be a T-processor. If P is free, then (i) P is uncoupled, (ii) P_* is a constant function, and (iii) P⁻¹ is functional.
PROOF. As an exercise.
1.7.12 Definition. Let P be a T-processor. P is multivariable iff either P¹ or P² or both are T-processors.
REMARK. The concept of a multivariable processor (or system) is well known in the control field. For example, see Mesarovic [20]. We include Definition 1.7.12 mainly to emphasize the fact that processors can be "nested" to arbitrary (finite) depth. For example, it can happen that (say) (P¹)¹, etc., are T-processors as well as P¹.
1.8
PROCESS MORPHISMS
In this section, we introduce the notion of a "homomorphism" of T-processes. As we shall show, we have already encountered several instances of such homomorphisms, one of considerable conceptual importance.†
1.8.1 Definition. Let P and Q be T-processes. Q is an image of P iff there exists a function h: 𝒜P → 𝒜Q such that
Q = {p ∘ h | p ∈ P}
{P ( 3 z ) :uu’t 1,Q & 2L u?, t I Q
>
u
-+
F
y
0& uP,y
Q)y
> u(P,
that is, (10 P)k P , Son, ( P * 1.6.1 1, \\e see ’ P C P. I
ro
0.
~
0)C P ,
. ‘l’hus, by Lemma
K r v w k . ‘Thus, the series interconnection c P serves to denote the “restriction” of the T-processor P on the input set \\here 0 is ‘1 qiveii 7’-process and 0 C Pl.
0,
2.2.10
Lemma.
PK~OE.
1’2
f=
If P
l(P) Q &
0
0
Let P m d be 7’-processors. If P is functional.
functional, then Z(P‘)
0
C Q is
0 is functional, we hax.t
>‘ZL
i
[(P) 0
y y E I ( P ) & 3’2c Q & y“L E Q Y i PZ & y 3 r C,, & J ’ W E ,Q ( 3 )UJ :
i I’&y,-ty&4’ZL i Q
( 3 u ) : u z i I ’ ~ ~ & u z l !9 ~P ->z
T h i s proves I ( P ) [I
is
functional.
I
w
2.3. 2.3
PROCESSORS IN PARALLEL
45
PROCESSORS IN PARALLEL
V’c continue with a similar elementary investigation of the “parallel” interconnection of T-processors.
2.3.1 Definition. If P and Q are T-processors, then the parallel interconnection of P and Q is the set
P//Q = {(uv)(yz) | uy ∈ P & vz ∈ Q}
REMARK. We see P//Q ⊂ (P¹Q¹)(P²Q²), and it follows that P//Q is a T-processor. Moreover, (P//Q)¹ ⊂ P¹Q¹ and (P//Q)² ⊂ P²Q²; hence, both (P//Q)¹ and (P//Q)² are T-processors. Thus, P//Q is always multivariable.
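As an illustrative aside (not part of the original text), the following Python sketch forms the parallel interconnection of two small processors, modelling a pair of T-time functions pointwise as uv(t) = (u(t), v(t)); the sets P and Q are hypothetical.

    def pair(u, v):
        return tuple(zip(u, v))             # (uv)(t) = (u(t), v(t)), here T = {0,1}

    P = {((0, 1), (1, 2)), ((1, 1), (2, 2))}
    Q = {((0, 0), (0, 1)), ((1, 0), (1, 1))}

    def parallel(P, Q):
        # P//Q = {(uv)(yz) | uy in P and vz in Q}
        return {(pair(u, v), pair(y, z)) for (u, y) in P for (v, z) in Q}

    PQ = parallel(P, Q)
    print(len(PQ))                          # 4 = |P| * |Q|
    # Lemma 2.3.2: the input set of P//Q consists exactly of the paired inputs
    P1 = {u for (u, _) in P}
    Q1 = {v for (v, _) in Q}
    print({uv for (uv, _) in PQ} == {pair(u, v) for u in P1 for v in Q1})   # True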
2.3.2 Lemma. If P and Q are T-processors, then (P,’/Q)l= P’Q’ and (P/lQ)2= P2Q2. PROOF.
We see
(I’i/Q)l = { u u , ( 3 y ) : u y € P & ( 3 z ) : U Z € Q } { u v ~ u r P l & v r Q 1 } = ply‘ 1
T h e condition ( P / l Q ) 2= p2Q2 is similar.
[
By Theorem 1.7.2, we have:
2.3.3 Corollary. If P and 0 are T-processors, then (P //Q)l and (P//Q12 are 7‘-processors. Moreover, ( P / / Q ) l and (Pl’’Q)2are uncoupled. 2.3.4 Lemma. If P and Q are T-processors, then ( P / i Q )is free iff both P and Q are free. PROOF. If u and v are T-time functions, then uu is constant iff both u and v are. In fact,
(a)@)
=
(a, b)
Now, since (P/’Q)l PIQ1, it follows that (P,’!Q)l contains precisely one constant T-time function iff both P’ and Q1 do. [ ~
46
2. IlASIC Lemma. If I’ and
2.3.5
INTERCONNECTIONS
0 are
(P/‘Q)-’ iwooi~.
2.3.6
=
T-processors, then P-yIg-1
For the reader.
Lemma. If P and 0 are nonempty T-processors, then is functional [bifunctional] iff both P and Q are functional c t i o nal] .
IW)OF. I,et P and Q be functional. Choose ( u v ) ( y z )E ( P / / Q ) and ( u ‘ v ’ ) ( y ’ z E’ )(1’ 0). Clearly, uy, u‘y’ E P and DZ, v’z’E Q. I~-sing1,emma I .5.15 - fL’V’
1(7)
-. u
-z u‘
&v
=
@’
3
y
.-:
y‘ & z
=
z‘
=> y z = y’z’
that is, ( P i j Q ) is functional. Conversely, let ( P / / Q )be functional. F P. Sincc 0 is nonempty, there exists some vz E Q. 0)is functional, u = u‘
-:. uv
=
yz
u‘t1 ~2
~
y‘z
3
y
= y‘
that is, P is functional. Similarly, Q is functional. T h c condition for the bifunctional case follows from Lemmas 1.7.7 and 2.3.5.
2.3.7 Lemma. Let P, 0,K, and S be T-processors. If P C R and Q C S, then ( P i ‘ Q )C ( R S). PIU)OI;.
Obvious.
1 , e r n n ~2.2.7 inakcs clear the fact that any T-processor admits (tri\,ial) series ctccompositions which are proper. ’l’here is an issile i n the par;dlel case which we can easily settle:
2.3.8 Theorem. If P is a noncmpty T-processor, then the follov in? statements are equivalent: (1)
(11)
’l’hcre exist 7’-processors R and Q such that P Q//Ii. 1’1 m d P’ are 7’-processors and I’ 1 1 I/, where ~
L
I
{ u y , (32)(32): ~
{vz
,
( u v ) ( v z )t C’]
, ( 3 u ) ( 3 y ) : (.v)(yz)
E PI
2.3.
47
PROCESSORS IN PARALLEL
(iii) P’ and P2 are T-processors, and (uv’)(yz’)E P & (u’.)(y’x)
P * (u.)(yx)
E
t
P
(ii) 3 (i) is trivial. (i) 3 (ii). If P = Q / / R , then obviously both P’ and P2 arc T-processors. Now, since P is nonempty, both Q and R are V. nonempty. I n this case, clearly, Q = U and R (ii) 3 (iii). If P = U / / V ,then PROOF.
2
(uv’)(yz’)E P & (u’.)(y’.)
E
P
3
uy E
u& 7Jx E v
-. (u.)(y.)
(iii)
3
E
P
-
( u v ) ( y x )E ( U / / V )
Given (iii), U / / V C P. I n fact,
(ii).
( u v ) ( y z )E (L’//L’) * uy E U & U Z E v
3
(3v’)(32’): (uv’)(yz’)E P
& (3uf)(3y’):(u’v)(y’z)E P
2
( u v ) ( y z )E 1’
But, also, P C ( U / / V ) .T h a t is, (u.)(yz)
Hence, P
2.3.9
=
E
P
* (uy) E U & (u.)
U//V.
E
I; * (u.)(yz)
E
(U//l.)
I
Theorem, If P, Q, R , and S are T-processors, then ( P / / Q ) (RIIS) = ( P R ) / / ( Q S) O
O
We conclude this section by proving:
2.3.10 Theorem. If P and Q are nonempty T-processors, then both P and Q are images of P / / Q .
2.
48
Consider the function
IW)OF.
Clc,irly, t i 7’,
BASIC INTERCONNECTIOSS
/I:
/I(P
t ( ( U Z ’ ) ( yz)
0)
+
I/)
(t((IW)(JJ2)))h ~
that
/I IS, (U~)(J
The sets
P: = {u(uy) | uy ∈ P}  and  :P = {(uy)y | uy ∈ P}
are the feedforward and the feedback of P, respectively.
REMARK. P: ⊂ P¹P, so P: is a T-processor. Moreover, (P:)² ⊂ P, so P: is multivariable. :P ⊂ PP², which implies :P is a T-processor. Also, (:P)¹ ⊂ P, so :P is likewise multivariable. We note
u(xy) ∈ P: ⇔ uy ∈ P & u = x
(uy)z ∈ :P ⇔ uy ∈ P & y = z
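As an illustrative aside (not part of the original text), the following Python sketch computes the feedforward P: and the feedback :P of a small processor, using a pointwise pairing of time functions; the set P is a hypothetical example.

    def pair(u, v):
        return tuple(zip(u, v))

    P = {((0, 1), (1, 2)), ((1, 1), (2, 2))}

    feedforward = {(u, pair(u, y)) for (u, y) in P}    # P: = {u(uy) | uy in P}
    feedback    = {(pair(u, y), y) for (u, y) in P}    # :P = {(uy)y | uy in P}

    # Lemma 2.4.2: (P:)^1 = P^1 and (:P)^2 = P^2
    print({u for (u, _) in feedforward} == {u for (u, _) in P})   # True
    print({y for (_, y) in feedback} == {y for (_, y) in P})      # True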
2.4.2 Lemma, If P is a T-processor, then (P:)l= Pl, (P:)2= P, (:P)I = P, and ( : P ) 2-=P2. PROOF.
Obvious.
2.4.3 Lemma. If P and Q are 7’-processors, then ( P : ) C ( Q : ) o P C Q e( : P ) C ( : Q ) PROOF. Clearly, if P C Q , then ( P : )C ( Q : ) and ( : P ) C ( : Q ) . Conversely, if ( : P )C (:Q), we have
u y ~ P a ( u y ) y ~ : P( u* y ) y ~ : Q - u y ~ Q
and if ( P : )C (Q:), uy E P
u(uy) E P:
that is, in either case, P C Q. -I. Unfortunately,
* u(uy) €9: * uy E Q
I
by the same token, we should not expect to prove a great
deal about the closed loop operation in general; i.e., what we are mostly doing here is showing preservation of various properties under operations.
50
2.
BASIC INTERCONNECTIONS
2.4.4 Corollary. If P and Q are T-processors, then ( P : ) = ( 0 : )e P
=
Q 0 ( : P )= (:Q)
2.4.5 Theorem. If P is a T-processor, then the following statements are equivalent: (i) There exists a T-processor Q such that P (ii) P2 [P'] is a T-processor, and u =z
u(zy)E P
[(uy). E P
(iii) P 2 [PI]is a T-processor and P
=
3
y
= Q: [ P =
:Q];
= z]
( P z ) :[P = :(P1)].
PROOF. Wc shall treat the latter case and leave the former as an exercise.
(i) 3 (ii) is obvious. (ii) 3 (iii). If (ii) holds, then uy E Pl
Thus, P :(I"). (iii) 3 (i) is trivial.
0
(32): (uy).
E P e (uy)y E
P
~~
I
2.4.6 Lemma. If P is a T-processor, then P : is free iff P is free. :P is functional. PROOF.
2.4.7
Obvious.
Lemma. If P is a T-processor, then P
FKOOF.
=
P: :P 0
LVc have :P 0 ( 3 4 3 2 ) : u(s2) E P: & u = x & (.z)y E :P & z = y -a u(uy) E P: & ( u y ) yE :P cr> uy E P & uy E P 0 uy E P
uy E P: 0 : P c- (3x)(3z): u(xz) E P: & ( x z ) y E
that is, P
-
P: o :P.
I
2.4.
51
PROCESSOR PROJECTIONS
Lemma 2.4.7 shows in the given weak sense, for any REMARK. T-processor P, P : and :P are “inverses” of each other. This interesting property leads to some other useful identities. 2.4.8
Lemma. If P, Q, R, and S are T-processors, then (POQ) = ( P o Q : ) :Q 0
( P o 8)o ( R o S ) = ( P PROOF.
0
(8
0
R):)0 (:(Q R) S ) 0
0
Application of Lemma 2.4.7 gives (PoQ)=Po(Q:o:Q)
=(PoQ:)o:Q
( P o Q ) o ( R o S ) = P o ( Q 0 ( R 0 S ) ) = P O ( @0 R ) 0 S )
(((Q R ) : :(Q R ) ) 0 S ) = P ((Q R): (:(Q R ) S ) ) = ( P (Q R ) : ) (:(Q R ) S ) =P
0
0
0
0
0
0
0
0
0
0
0
0
0
0
where we have repeatedly used the associative law established in Corollary 2.2.3. I 2.4.9 Theorem. If P is a T-processor, then P , P : , and :P are pairwise isomorphic. PROOF.
We note
Therefore, consider the relation h
Clearly, h: GlP (uy)y. Hence,
--f
4, ( ( a , b), 6)) I a ( @ W }
= {((a,
Gl(:P) (1 : 1 onto) and for all uy
E P,
uy 0 h
=
52
2.
BASIC INTERCONNECTIONS
and h is an isomorphism from P to :P. Similarly, the relation R
~
{ ( ( a ,( a , b ) ) , (66)) I a ( W b 1
is an isomorphism from P : to P. Finally, g 0 h is an isomorphism from P: to :P. I 2.5
CLOSED LOOP PROCESSORS
Algain,“feedback” is one of the most intriguing concepts in the entire systems area. In this section, we introduce the notion of the “closed loop” of a T-processor. T his can be regarded as a direct attempt to formalize the concept of feedback in as general a \lay as possible still retaining the flavor of the concept as it is employed in engineering. T h e closed loop of a T-processor is u hat that T-processor “looks like” with feedback introduced in the form of a direct connection from output to input. As we shall show, the closed loop operation and the feedback operation introduced in Section 2.4 arc intimately related. 2.5.1 Definition. I,et P be a 7’-processor. If P’ is a T-processor, then the closed loop of P is the set #P
=
{UY,(U,V)Y EP:
KFVAHK.In general, # P C PI. Thus, # P is itself a T-processor. C (P’)’ and (#P)2 C n P2.Also, # P C Pl o I(P2),
\Ye see (#P)’ 1.e.,
u?/r#P~~(u3’)yEP~uy€P’&y€P2 f’
u-y E
P‘ & y y
E
I ( P ) 2 uy € P’ Z(p2) 0
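As an illustrative aside (not part of the original text), the following Python sketch computes the closed loop #P of a small processor whose inputs are paired time functions; the set P is a hypothetical example.

    def pair(u, v):
        return tuple(zip(u, v))

    # uy in P, where the input u is itself a pair (external input, fed-back output)
    P = {(pair((0, 1), (1, 2)), (1, 2)),      # fed-back component equals the output
         (pair((0, 1), (0, 0)), (3, 3)),      # fed-back component differs
         (pair((1, 1), (2, 2)), (2, 2))}

    def closed_loop(P):
        # #P = {uy | (uy)y in P}: keep pairs whose fed-back component is the output itself
        out = set()
        for (uv, y) in P:
            u = tuple(a for (a, b) in uv)
            v = tuple(b for (a, b) in uv)
            if v == y:
                out.add((u, y))
        return out

    print(closed_loop(P))   # two elements survive: ((0,1),(1,2)) and ((1,1),(2,2))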
2.5.2 Lemma. Let P and Q be T-processors with P1 and Q1 IikeLvise T-processors. Then, PCQ
=“
#PC#Q
PROOF. Obvious. REMARK. ‘l’hc converse of Lemma 2.5.2 fails and this is important.
2.5.
53
CLOSED LOOP PROCESSORS
Next, we see how the closed loop and feedback operations are related: 2.5.3 Theorem. If Q is a T-process, then #(:Q) = Q. If P is a T-processor with Pl a T-processor, then :(#P) C P. PROOF.
We note (:Q)l is a T-processor, so #( is: well (I defined. )
We have
#(:~)={~Y/(~~)YE:~}={~YI~YE~}=Q Next, we see (.Y)Y
that is, :(#P) C P.
E
:(#P)
UY E
#P * (.Y>Y
Ep
I
REMARK. Theorem 2.5.3 interprets to say :(#P) is the least T-processor whose closed loop is the same as that of P, i.e., #(:(#P)) = #P. Thus, for a T-processor R, :R is the least “open loop” of R. :(#P) also interprets to be that part of P that can be “observed” or “measured” under closed loop conditions.
An interesting special case, namely where :(#P) characterized as follows:
=
P can be
2.5.4 Theorem. Let P be a T-processor. If Pl is a T-processor, then the following statements are equivalent: (i) :(#P) = P. (ii) P = :(I“). (iii) U P = 21ZP1. (iv) (uy)z E P * y = z . (v) P = : ( P l ) & # P = P’.
-
(ii) is by Theorem 2.4.5. (i) (iii). I n the proof of Theorem 2.4.9, wc showed 2aQ for any T-processor Q. Thus, here PROOF.
(ii)
3
OlP
(iii)
3
(iv).
(uy) z E P
=
a(:@=
@(:(Pl))= 2aPl
Given (iii),
* (Vt): t((uy)z)E OlP * (Vt): t((uy)z)E 2 a r ’ 3
(Vt): ((tu, ty), t z ) E 2@P1
(Vt): ty
=
tz
2
y
=
z
54
2.
BASIC INTERCONNECTIONS
(iv) 3 (v). Given (iv), P :(P1)by Theorem 2.4.5. But then by Theorem 2.5.3, #P = #(:(PI)) = Pl 1
so (v) holds.
(v)
3
(i). We have :(#P)
=
:(PI)= P
I
Another interesting special case is to characterize when # P For this, we have:
=
Pl.
2.5.5 Theorem. Let P be a 7'-processor. If Pl is a T-processor, then the following statements are equivalent: (i) (ii) (iii) (iv) (v)
# P = Pl. uy E Pl u (uy)y uy E P 1 3 (uy)y :(PI)C P. P' c #P.
IWWF.
(i)
3
(ii). uy
E E
P. P.
Given (i), E
Pl
0
uy E # P u (uy)y € P
(ii) * (iii) is trivial. (iii) 3 (iv): ( u y ) z t :(PI) 3 u y E P ' & y
=
z
3
( u y ) y t P & y = z =. ( U Y ) X € P
(iv) (v). P' = #(:(P1))by Theorem 2.5.3. Using Lemma 2.5.2 then, if :(PI)C P, we see P1 C #P. (v) 3 (i). I n general, # P C PI. I Ifre have a number of identities involving the closed loop operation:
2.5.6
Lemma. If P, Q, and R are 7'-processors and R1 is
T-processor, then
#((PiiQ) R ) = P #((W2)//Q) R) O
O
O
a
55
EXERCISES PROOF.
We see
uy E #((P//Q)0 R ) 0 (uyly E (PllQ) R 0
P / / Q & (xz) y E R P & y z E Q & (xz) y E R
u (3x)(3z): (uy)(xz)E
o (3x)(3z): ux E
o (3x)(3z): ux E P & x E P2 & y z E Q & (xz) y
ER P & xx € I ( P 2 )& y z E Q & ( x z ) y E R o (3x)(3z): ux E P & (xy)(xz) E Z(P2)//Q& (xz)y E R
u (3x)(3z): ux E
u (3x): ux E
-
P & (xy) y E (Z(P2)//Q) o R
o (3x): ux E P & xy E #((Z(P')//Q) 0 R )
uy E p
O
#((V2)//Q) R) O
2.5.7 Lemma. If P and Q are T-processors with Q1 a T-processor, then #(PllQ) = Po (#Q): PROOF.
4 ~ E4#(PI@)
0
( u ( y z ) ) ( y z E) P / / Q
-
UY E
P&(
~ 4 EO
#Q e uy E P & y ( y z ) E (#Q): e (3x): ux E P & x ( y z ) E (#Q): 0 u(y.) E P (#Q):
o uy E P & y z E
0
2.5.8 Lemma. If P and Q are T-processors with Q' a T-processor, then P o #Q = # ( p / / Q ) :(#el 0
PROOF. Combining Lemma 2.5.7 and the first condition of Lemma 2.4.8,
(P
O
#Q)
=
(P
O
(#!a:):(##!a = # ( P / / Q ) :(#!a I O
O
Exercises 2-1. If P is a T-processor and Q is a T-process, prove ( P O Q)* = (V*)-*/Q)Y
2-2. Prove for any T-processors P and Q that P / / Q and PQ are isomorphic.
56
2.
DASIC INTERCONNECTIONS
2 - 3 . P r m c or give a counterexample: For any T-processors P and 0,P Q is uncoupled iff both P and Q are uncoupled.
2-4.
ProLC for any T-processors P and (2, P, iL,
2-5.
r-
(P//Z(Q')) ( I ( P 2 ) / / Q )
Prove for any T-processor P, #((P:)-1
2-6.
0
P)
=
P
Let P be a T-processor with both P' and P2 T-processors, and let lT = {uy , (3v)(31): (uv)(y.) € P )
I-
= (u2
(%)(3y): (uv)(yz)E P }
Prove that if C' is functional and V is uncoupled, then Pp is u n co u p 1e d .
2-7. Prove for any 7'-processors P and Q,
P.0: = { u ( y z ) ~ 2 I ~ y E P & y S E Q ) 2-8.
Give necessary and sufficient conditions on a T-processor P that there exist T-processors 0 and R such that P = Q R : , where Q 9 R: is proper. 0
2-9.
Let I' be a T-processor with P' a T-processor. Prove
(P'): P 0
=
{uy ,
(32):( u 2 ) y
E P)
2-10. Let P be a T-processor xvith Pl a 7'-processor. Prove if ( P I ) : P is functional that # P is functional.
-
2- 1 1. Prove if P is a T-processor with P' a T-processor that
#r = ( ( P I ) :
0
P : ) :(:(PI)) 0
and hence that # P may always be represented as a (often improper) series interconnection.
EXERCISES
57
2-12. (Feedback compensation problem.) If P and Q are T-processors, prove that the following statements are equivalent: (i) There exists a T-processor R with R‘ a T-processor such that P = #(R 0 Q), where R2 C Q1 and R1= P. (ii) P 2 C Q2. (iii) P = # ( ( : P 0 Q-l) 0 Q). 2-13. Develop and prove a theorem analogous to that of Exercise 2-12 for series compensation of T-processors, i.e., for the case of P = R Q. 0
2-14. In automata theory, the following operation on T-processors is defined and called “parallel interconnection”: P$Q ={u(~z)Iu~EP&uzEQ}
Has this operation been subsumed in our theory of interconnections ? 2-15. Work out an elementary theory of the interconnection given in Exercise 2-14.
Time-Evolution
3.1
INTRODUCTION
I n this chapter, we consider the notion of the “evolution” of a T-process in time. This concept is not a concept that is commonly dealt with in systems theory except perhaps informally. What we mean by the evolution of a process in time is “how that process appears (as a process) at various instants of time.” But does not a process always appear the same at various instants of time? T h i s is a very subjective question and whether it does or not depends very much on what we take time to be. This is the basic point of Weiner [31] who, in his discussion of time, eloquently argues for the existence of processes with nontrivial time-evolution. Given a T-process P, the element 0 E T interprets to be the then the transition relation of A is the set
REMARK.
I n general, A
==
IJ.SA
64
3.
TIME-EVOLUTION
-4lS0,
bAh’ u (3m):b’
=
(m, h ) A
3.2.6 Lemma. Let A be an action of M . Then the following statements are equivalent: (i) bAb’.
b’ E c ~ A b . (iii) &Ab’ C .#Ah. (11)
PROOF.
(i)
(ii).
-3
bAh’ u (3m): h’ (ii)
u
We have =
( m ,0) A
u
mAb u 6’
(3m):b’
(iii). Assume &A6’C J A ” . Now, h’ h’
=
( 0, b’)i3
t
G
%’Ab
9?A”’,i.e.,
-- O A b ’
T h u s , b’ €.#Ah. Conversely, if b ’ t #A‘), then for some m , b’ ( m , b ) A . Now, if h” E gA”’ then for somc m‘,b” -- ( m ’ , b’)A. But, h“
=
(m’, h’)A = = (m’, (m, h ) d ) A
-
so h” t .#’A”. In other words, &AtJ‘C &A”.
(m
+ m‘, h)A
1
In view of condition (iii) of 1,emma 3.2.6, the following is apparent:
3.2.7 Corollary. If A is an action of M, then A is a preorder of ℛA, i.e., A ⊂ ℛA × ℛA and
(i) bAb (reflexivity).
(ii) bAb′ & b′Ab″ ⇒ bAb″ (transitivity).
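As an illustrative aside (not part of the original text), the following Python sketch exhibits an action of the monoid (ω, +, 0) on a four-element set and checks that the induced relation "b reaches b′" is reflexive and transitive, as in Corollary 3.2.7. The particular action and the finite horizons are assumptions made only for the sketch.

    STATES = range(4)

    def act(m, b):
        # an action of (w, +, 0): rotation modulo 4
        return (b + m) % 4

    # action laws: act(0, b) = b and act(m + m2, b) = act(m2, act(m, b))
    assert all(act(0, b) == b for b in STATES)
    assert all(act(m + m2, b) == act(m2, act(m, b))
               for b in STATES for m in range(8) for m2 in range(8))

    def reaches(b, b2, horizon=8):
        return any(act(m, b) == b2 for m in range(horizon))

    print(all(reaches(b, b) for b in STATES))                       # reflexive
    print(all(not (reaches(a, b) and reaches(b, c)) or reaches(a, c)
              for a in STATES for b in STATES for c in STATES))     # transitive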
I n the remaining part of the section, we develop the elementary theory of invariant and minimal invariant sets for actions of a monoid. Several of the important definitions and proofs are due to I,. It. XIarino (private communication).
3.2.8
Definition.
Let A be an action of hf:
(i) A is strongly connected iff for every 6, 6’ some m E such that 6’ = (m,6)A.
E &A
there exists
3.2.
65
ACTIONS OF A MONOID
(ii) An element b E 9 A is initial iff for every 6’ E &?A there exists some m E IC.1such that ( m , b)A = 6’. (iii) An element b E WA is terminul iff for every b’ E .#A there exists some m E M such that ( m , b’)A = b. (iv) .4n element b E 2 A is recurrent iff for every m E M there exists some m‘ E M such that ( m m’, b)A = b.
+
3.2.9
Lemma.
Let A be an action of M :
(i) 6 E .@Ais initial iff d A b = &A. (ii) b E WA is recurrent iff 6’ E 9 A b 3 9A6’ = &A6. (iii) A is strongly connected iff for every b E &A, .%’Ab = 9 A . PROOF. (i) and (iii) are obvious. T o see (ii), first we note that b is recurrent iff (b’ E &’Ab 3 b E 9 A b ’ ) ,i.e.,
(Vm)(3m’):( m
+ m‘, b ) d = b
0
(Vm)(3m’):(m’, ( m , b)A).-l = b
e(Vm)(3m’):( ( m ,b),4 = b’
3
(m’,b’),4 = b )
o (6’ ~ W ‘ i = 2.~(3m’):(m’,b’)A - b ) o (b’ E B24b 3 (3m’):m‘Ab’ = b )
o (b’ E B’i2b* b E BAgb’)
Now, using Lemma 3.2.6, (6’ E W 4 b
Thus, (ii) holds.
3
b E BAb‘)
o (b’ E BAb
2
gab‘C W A b& b E A?Ab’)
o (6‘ E BAb
3
W‘,4b’ C 9 ‘ 4 b & B A b C W‘i2b’)
o (b‘ E WAb
3
WAb’ = .G’Ab)
[
3.2.10 Theorem. If A is an action, then the following statements are equivalent: (i) (ii) (iii) (iv)
A is strongly connected. Every b E .%A is initial. Every b E B?Ais initial and recurrent. Every b E 2 A is terminal.
66 IW)OP.
(1)
(iii) a
(ii).
3.
-
TIME-EVOLUTION
(ii) is iminediatc from Lcinma 3.2.9. By Lemma 3.2.9, .#A6”- YllA for d11 b” €%A.
, T
1 hus, h‘
t .9A4b >
9Ab’
z
9-4
-
~ L 4 b
h is recurrent h y 1,cmnia 3.2.9. (iii) (iv). Choose h’ E ,//AA. Since h’ is initial, 9A6’ a n d there exists some rn E M such that SO
%:
=
.%A
(nz,b’)A --= mAb‘ = b
’l’hus, h is terminal. (iv) ;1(i). Choose 6, 6’ E 9 A . Since h’ is terminal, there exists sonic m such that ( m , b)A = b’. Thus, A is strongly Connected. I 3.2.11 Theorem. Let A be an action of M . If bc9Z’A is initial, then the following statements are equivalent:
(i) b is terminal. (ii) A is strongly connected. ( i i i ) Every eleiiient of &A is recurrent. (iv) h is recurrent. PROOF. (i) * (ii). Let b be initial and choose b’, b” E %A. If b is terminal, for some m, ( m ,h’)A = 6. Since b is initial, for some m’, ( i n ’ , /))A h“. Thus, ~
( m !- m‘,b’)A = (m’,( m , b’)A)A = (m’, b ) A
=
b”
that is, A is strongly connected. (ii) * (iii) is by Theorem 3.2.10. (iii) 3 (iv) is trivial. (iv) (i). Choose 6’ E .%”A.Since b is initial, ,%A‘) = %’A and 6‘ E .#A”. Since b is recurrent,
-
IIcncc, there exists some m such that (m,b’)A terminal. I
=
b. Thus, b is
3.2.
ACTIONS OF A MONOID
67
3.2.12 Definition. Let A be an action of M . A subset C C %A is invariant (with respect to A ) iff mEM&bEC
3
(m,b)AEC
An invariant set C is minimal iff DCC&Disinvariant*D=
@vD=C
where D is the empty set. REMARK. .@A itself is invariant and D is invariant. 0 is a minimal invariant set.
We have the following theorem characterizing invariant sets.
3.2.13 Theorem. Let A be an action of M . If C C , @ A , then the following statements are equivalent: (i) C is invariant. (ii) A / ( M x C) is an action of M . (iii) b E C * .@Ab C C. PROOF.
For the reader.
3.2.14 Lemma. Let A be an action of M . If C, D C -@Aare invariant, then both C u D and C n D are invariant. For any b E g A , 9 A b is invariant. PROOF.
Let C and D be invariant. We have
r n E M & b E ( C n D )3 m e M & b e C & b E D
3
( m E M & b E C )& ( m E M & b E D )
( m , b ) A ~ C & ( m , b ) A ~ D = > ( m , bs )( iC3 n D )
Thus, (C n 0)is invariant. T h e proof for (C u D) is similar. Finally, by Lemma 3.2.6, for any b E 9 A , b'
E
%'Ab 3 BAb' C BAb
Thus, by condition (iii) of Theorem 3.2.13, .%Ab is invariant.
I
REMARK. Contrary to the case of actions of a group, %?A- C need not be invariant when C is invariant.
68
3.
TIME-EVOLUTION
We have a characterization theorem for minimal invariant sets analogous to that for invariant sets:
3.2.15 Theorem. Let A be an action of M . If C C 9 A , then the following statements are equivalent: (i) C is invariant and minimal. (ii) A / ( M x C) is a strongly-connected action of M . (iii) b E C => %Ab = C. PROOF. (i) v (iii). If C is invariant then h E C 3 9 A b C C by 'l'heorem 3.2.13. But &?AtJis nonempty (i.e ., b E 9 A b ) and, by Lemma 3.2.14, &!Aois invariant. Thus, if b E C, then 9 A b = C. Conversely, let D C C be nonempty and invariant. Since D is invariant, b E D * W A b C D. Since D is nonempty, there exists some b E D.But b E C then, and so .%?Ab = C. Thus,
D CC
=
%'AbC D
so D = C. Thus, C is invariant (by Theorem 3.2.13) and minimal. (ii) u (iii) is obvious from Lemma 3.2.9. [ REMARK. Thus, invariant sets correspond 1 : 1 with subactions of a given action, and minimal invariant sets correspond 1 : 1 with strongly connected subactions.
i r e conclude by relating the notions of minimal invariant sets and recurrent elements.
3.2.16 Theorem. Let A be an action of M . A nonempty subset C C &A is a minimal invariant set iff for some recurrent element b c c%A,C r~Ab. ~
PROOF. Let C be invariant and minimal. Since C is nonempty, there exists some b E C. 9 A 6 = C by Theorem 3.2.15. Hence,
b' E %'A0 3 b' E C
3
(WAb' = C
= ,!%?Ab)
and, by Lemma 3.2.9, b is recurrent. Conversely, if b is recurrent and C = 2 A b , then h' E C
3
6' E * A b
2
(WAb' = %''Ab
=
C)
Thus, by Theorem 3.2.15, C is minimal and invariant.
[
3.3.
69
TIME-EVOLUTION OF PROCESSES
TIME-EVOLUTION OF PROCESSES
3.3
I n this section, we introduce the concept of “left translations” of T-time functions and T-processes. This notion leads readily to the concept of time-evolution.
3.3.1 Definition. If v is a T-time function and t t-left translation of v is the set ~ l = t
3.3.2
(i) (ii) (iii) (iv)
{(t’,( t
E
T, then the
+ t ’ ) ~j )t’ E T }
Lemma. If v is a T-time function, then for all t, t’
E
T:
v l is a T-time function. t’v, = ( t t‘)v. gal C 9v. vo = ZI.
(4 U t + t ’
+
= (VJt,
.
PROOF. (i) v l is clearly a function and 93%~~ = T, i.e., v l is a T-time function. (ii) follows immediately from (i). (iii) We note
av,= { ( t + t’)v 1 t’ E T } c {t”v I t“ E TI (iv) For all t
E
=
av
T, tv,
= (0
+ t)v = tv
so v, = v. (v) For all t , t’, t” E T, t”v,+,.= ( ( t + t’)
that is, v l - l ’
=
+ t”)v
(t
+ (t’ + t”))v = ( t ’ + t”)v,= t”(v,),.
( v ~ ) .~ , I
v l interprets to be “what v looks like as a T-time function” starting at time t . I n order to make a , in fact a T-time function, the time base must be “shifted” so as to make tv (for example) correspond to the time instant 0. This is of course literally a left translation of ZI.
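As an illustrative aside (not part of the original text), the following Python sketch computes t-left translations of a discrete-time function over T = ω, recalling Definition 3.3.1, and checks the identity v_{t+t′} = (v_t)_{t′} from Lemma 3.3.2 on a sample; the particular function v is hypothetical.

    def v(t):
        return t * t                        # an arbitrary T-time function on w

    def translate(v, t):
        # v_t(t') = v(t + t')
        return lambda t_prime: v(t + t_prime)

    v3 = translate(v, 3)
    print([v3(k) for k in range(4)])        # [9, 16, 25, 36] = [v(3), v(4), v(5), v(6)]
    print(all(translate(v, 2 + 5)(k) == translate(translate(v, 2), 5)(k)
              for k in range(10)))          # True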
3.3.3 Definition. If P is a T-process and t E T , then the t-left trmslation of P is the set p, = { P t I P
E
P>
P , interprets to be “what P looks like as a T-process” at the starting time t . T h e following lemma provides formal verification: If P is a T-process, then for all t E T , P , is a
3.3.4 Lemma. 1 -process. , I
By 1,emma 3.3.2, P , is a set of T-time functions.
PROOF.
3.3.5
I
L e m m a . If P is a T-process, then for all t E T, OlP, C OlP. By I.emma 3.3.2, .&pi C .%p. Thus,
PROOF.
‘ZP,
-
U{a?p, p E P ) c U { 9 p 1 p E P } = 6TP
that is, /I€‘, C { T P .
~
I
Since every subset of a relation is a relation, the following is imrnediate:
3.3.6 Corollary. T-processor.
If P is a T-processor, then for all t E T , P , is a
’l’he next lemma is crucial for all our subsequent results.
3.3.7 L e m m a . If u and y are T-time functions, then for all t E T, (UYh PK~OF.
i‘(uy)t
W e have
=
( t :- t’)uy
that is, (uy), = u l y , .
3.3.8
=: U t Y t
=
((t
;
t’)u,
(t
+ t’)y)
---
(t’u,
, t’y,) = t’(u,y,)
I
L e m m a . If P and Q are T-processes, then for all t E T,
3.3.
71
TIME-EVOLUTION OF PROCESSES
Lemma. If P is a T-processor, then for all t
3.3.9
(PtY
=
t
T,
(Wt
(PJZ = (P”t
We have
PROOF.
(P,)’
= {V
~
( 3 ~ ) vz : E Pt} = {ut i ( 3 ~ )UY:
E P } = {u,
~
u E P’)
=
(P’),
1
T h e condition on ( P J 2is similar.
Combining this with Lemma 3.3.5, we have:
3.3.10 Corollary.
If P is a T-processor, then for all t
QZ(P,)’ C GYP1
3.3.11
Lemma. If P is a T-processor, then for all t =
T,
LT(Pt)zC GYP2
and
(P-yt
E
E
T,
(Pt)-l
If (2 is a T-process, then
(m,
= [(Qt)
Using Lemma 3.3.7,
PROOF.
3.3.12 for all t PROOF.
3.3.13
Lemma. Let P and Q be T-processes. If P C Q , then T, P , C Q l .
E
Obvious.
Lemma. If P is a T-process, then for all t , t’ E T , t’P,
=
( t -1 t’)P
72
3.
TIME-EVOLUTION
PROOF.
t’P,
{t’p, 1 p
c; P ) = { ( t
+ t’)p 1 p E P ) = ( t + t’)P
Corollary. If P is a T-processor, then for all t , t‘ E T ,
3.3.14
t’(P,)l == ( t
+ t’) Pl
t’(P,)Z = ( t
+ t ’ ) PZ
Lemma. If P is a T-process, then Po = P, and for all t ,
3.3.15 t‘ E T,
p,,f’
= (Pt)t,
By Lemma 3.3.2.
PKOOF.
We now formalize the concept of time-evolution of T-processes:
3.3.16 Definition. Let C be any set. T h e T-evolution in C is the relation &(’
~
(((1,
P ) , Pt) 1 t E T & P C C T }
{ B 1 B C A} is the
Iiecall that if A is a set, then 2A power set of A. I~ILIAIIK.
3.3.17
Theorem. For any set C, GC is an action of T on 2(cT).
Pwor. Let K = 2(c7) and consider GC. Clearly, P E K iff P is a 7‘-process and f7P C C. Moreover, 9GC = T x K. Now by Lemma 3.3.5, U P , C U P . Thus, P F K =+ P , E K. I n other words, 9‘GC --- K . I t follows that bC: T x K + K . Now using Lemma 3.3.15,
(0, P ) t C (t
=
P” - P
t ’ , P ) GC = P,,
t’ --
(P,),, = ( t ’ , Pt) 8 C
Thus, AC is an action of T .
=
(t’, ( t , P ) &C) bC
I
I n the light of Theorem 3.3.17, it is possible to formalize what is meant by a “time-invariant” class of T-processes and a “timeinvariance” property of T-processes.
3.3.
TIME-EVOLUTION OF PROCESSES
73
3.3.18 Definition. Let 9 be the class of all T-processes. A subclass A C 9 is time invariant iff for any set C, the set K
=J i!n 2(CT)
is an invariant set of &C. If 9 is a formula depending on the (variable) T-process P, then 9 is a time-inaariance property of T-processes iff the class JZ
=
{P 1
F}
is time invariant.
3.3.19 Theorem. If A is a class of T-processes, then the following statements are equivalent: (i) A is time invariant. (ii) P E A 3 ( V t ) : P , E A ( t E T). (iii) P E A 0 ( V t ) : P , E A ( t E T ) .
(i) =, (ii). Choose P E A and let C ==e%P and K = Then, P E J Z n K and 4n K is an invariant set of bC. We have PROOF.
2'C'). t
E
T&P
E
( An K ) 3 ( t , P ) G C E (.A' n K)* P, E (A' n K ) 3 f,E J ~ '
that is, (ii). (ii) 3 (iii). Po = P and, hence if for all t E T, P , E A', then PEA. If P E K , then (iii) 3 (i). Choose a set C and let K = 2(c7). for all t E T, P , E K . Thus, if P E A n K , then for all f E T, P , E A? n K . Moreover then, t E T & P E ( d Z n K ) = .P , E ( A n K ) * ( t , P ) t " C E ( A n K )
that is, A n K is an invariant set of &C. Finally, since C was arbitrary, JA! is a time-invariant class of T-processes. I
As an example of a time-invariant class of T-processes, we see by Corollary 3.3.6,
3.3.20 Corollary.
T h e class of T-processors is time invariant.
74
3.
TIME-EVOLUTION
I n our later considerations, we shall encounter many other clas;hcs of 7'-processes (in particular, subclasses of the T-processors) \I
hich are time invariant.
3.4
THE TRANSLATION CLOSURE OPERATION
(;I\ en thc concept of left translations of 7'-time functions and T-processes, it is natural to consider the set of all left translations
of a given 7'-process. 'I'his leads to a closure operation on the class of all 7'-proccsses which is of considerable importance.
3.4.1 Definition. If P is a T-process, then the translation closiire of I' is the sct P - { p , l p ~ P & t ~ T ] RI 11IRK.
P
IS
itself a ?'-process and P
=
u(P,i t E TI
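As an illustrative aside (not part of the original text), the following Python sketch forms the translation closure of a small discrete-time process, representing each time function by a finite prefix that continues with its last value; this finite representation and the chosen horizon are assumptions made only for the sketch.

    def shift(p, t):
        # p_t over the represented horizon: p_t(t') = p(t + t')
        return tuple(p[min(t + k, len(p) - 1)] for k in range(len(p)))

    P = {(3, 2, 1, 0, 0), (1, 1, 1, 1, 1)}

    def closure(P, horizon=5):
        return {shift(p, t) for p in P for t in range(horizon)}

    Pbar = closure(P)
    print(P <= Pbar)                 # Theorem 3.4.2(i): P is contained in its closure
    print(closure(Pbar) == Pbar)     # the closure is stable under further shifts here
    print(sorted(Pbar))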
Theorem.? If P and Q are T-processes, then
3.4.2 (I)
P
(11)
P
c I-'. P.
I-'u Q
(iii)
.
(I)
i'i h, P E P ]
3.5.
Now, for all p
By Lcmma 3.4.3, h: c1"P + G!Q.
t ' ( p , h) = (t'p,) h 0
that is, p , h 0
Q
79
TIME-EVOLUTION OF INTERCONNECTIONS
=
=
((t
+ t') p ) h = ( t + t ' ) ( p
E 0
P and all t
h)
=
E
T,
t ' ( p h)t 0
( p h ) , . Therefore, 0
{q, q E Q & t E T } = { ( p o h ) , i ~ E P &T~} =E{ p , h 1 p E P & t E T } ={voh~v€P} =
0
which proves Q is an image of P. Next, choose t E T. Clearly, OIP, C OlP. Hence, consider g = h/OIP,. Since t'( p , o h) = t'( p o h ) ,, where p 0 h E Q, we see g: LZP, OIQ, . Moreover, ---f
Qt -= {qt i
q E Q}
={(P
0
h)t I P E PI
Hence Q, is an image of P , .
{ p t h 1 P E PI 0
=
{v h 1 a E Pt> 0
I
3.4.17 Corollary. Let P and Q be T-processes. If h is a homomorphism from P to Q, then h is a homomorphism from p to Q and, for all t E T, h/OIP, is a homomorphism from P , to Q, . TIME-EVOLUTION OF INTERCONNECTIONS
3.5
I n this section, we investigate how the various interconnections of T-processors which were introduced in Chapter 2 evolve in time. Somewhat surprisingly, we find that, in some cases, the left translations of interconnections are proper subsets of, rather than equal to, the interconnections of the corresponding left translations. Thus, we develop necessary and sufficient conditions for equality.
3.5.1 Theorem. If P and Q are T-processors, then ( P OQ) C P o and, for all t E T , ( P o Q), C P , Q , . Also, ( P Q ) , = P , Q t iff
0
0
0
0
U V E P &xz E Q &V , = x t PROOF.
(3p)(3~)(3q):p~ E P & y g E Q & p t = u,& qt = Z,
Using Lemma 3.3.7, ( P Q), 0
P, Qt 0
= {U,Z,I =
(3y): uy E P & y z
E Q}
{u,z,1 (3~)(3.~): uv E P & xz E Q & ~
l = t XJ
80
3.
TIME-EVOLUTION
so (I' r Q)! C P , Q , and ( P Q), = P , 0 Q, iff the given condition holds. Also, we have 0
0
~~~
( P 0 0) {utzt I (33)): uy E P &yZ E Q & t E T } ) : E P & xz E Q & t , t' E T & V , = P Q - (2ltzt,, ( 3 ~ ) ( 3 ~uv 0
'I'hus, since this is the case, ( P o Q ) C P
g. I
o
3.5.3 Theorem. If P is a 7'-processor with Pl a T-processor, then (#P) C # ( p ) and, for all t E T , ( # P ) , C #(Pt). Also, (#O/= #(PJ iff (uy)x E I ' & y t mom.
= Xf
3
(3v)(32): ( V Z ) Z
P & V f = Ut & Z f
t
--
yt
Wc have (#PI,
@*Yt ! (UYlY E P l
#(Pi)
{ WI(3.v): f
(v).6 P & Y t
=
4
Thus, ( # P ) ( C #(P,) and # ( P l ) = # ( P t ) iff the given condition holds. Vow, (#I)) #(P)
T h u s , clearly,
=
{tLtyt 1 (uy)y E P & t E
=
( u , Y 1~( 3 ~ ) (UY)X :
(#p) C #(p>.
I
E
7-1
P& t
t
T&y,
=
x/>
3.6.
81
CONTRACTING PROCESSES
Theorem. If P is a T-processor, then
3.5.4
(F)= ( P ) : and
(:p) = : ( p ) Moreover, . for all t E T, ( P : ) l= (P,):and ( : P ) t= : ( P J PROOF
P:)t= {(4WCI uy E PI =
{v(vz)1
v.2 E
P,}
=
=
{u,(uy),I
UY E
PI
=
{.,(u,y,) I uy E PI
(P,):
(p,) = {(u(.y)), I uy 6 P & t E T } = {u,(uy), uy E P & 2 E T } = {u,(u,y,) 1 uy E P & t E T } = {v(vz)I vz E P } = ( P ) : ~
Similarly, ( : P ) t= :(Pi) and
(,p) = :(P).
I
Thus, the various interconnections of T-processors appear to evolve in time. 3.6
CONTRACTING PROCESSES
I n the next two sections, we examine three special kinds of T-processes. These types, contracting, expanding, and stationary T-processes, are distinguished by a special property of timeevolution. As we shall see, each of these defining properties is a time invariance property of T-processes.
3.6.1
Definition. Let P be a T-process. P is contracting iff
for all t
E
T, P , C P.
REMARK. For example, if P contains only constant T-time functions, then
P_t = {p_t | p ∈ P} = {p | p ∈ P} = P
and, in particular, P_t ⊂ P, so P is contracting. The property of contraction on a T-process has a simple intuitive interpretation: it says the process "has no new elements at any later time." A small computational sketch of this closure property follows; after it, we give a theorem which characterizes contracting T-processes.
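As an illustrative aside (not part of the original text), the following Python sketch exhibits a contracting process, the set of nonincreasing time functions on ω, and checks on a sampled horizon that left translation keeps its elements inside the process; the finite-horizon membership test is an assumption made only for the sketch.

    def in_P(p, horizon=20):
        # membership in P: p is nonincreasing (checked on a sampled horizon)
        return all(p(k) >= p(k + 1) for k in range(horizon))

    def translate(p, t):
        return lambda k: p(t + k)

    p = lambda k: max(0, 10 - k)            # a nonincreasing member of P
    print(in_P(p))                                          # True
    print(all(in_P(translate(p, t)) for t in range(5)))     # True: each p_t stays in P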
3.6.2
Theorem. If P is a T-process, then the following state-
ments are equivalent:
(i) P is contracting.
(ii) P̄ = P.
(iii) p ∈ P & t ∈ T ⇒ p_t ∈ P.
(iv) For all t, t′ ∈ T, P_{t+t′} ⊂ P_t.
(v) For all t ∈ T, P_t is contracting.
(vi) For all t, t′ ∈ T, t < t′ ⇒ P_{t′} ⊂ P_t.
PROOF.
Clearly, P C P. We are given P , C P for all t ;
(i) -> (ii).
hencc, PCP
P. that is, P (ii) * (iii). P
=
{ p ,1 p
=
E
U{P,1
T)CP
P & t E T}, so if P
P € P & t € 1' * p
t'
t E
, E P =>
=
P, then
ptEP
(iii) * (iv). LfTe use the commutative law on 7' here. Fix t, 7' and choose q E P,,,, . For some p E P, q = p,,,, . Now,
E
q
=
P,,,, = P,,,.,= ( P t . 1 ,
But p E P & t' E II' * p , , E P. 'rhus, q E P , . (i\-) 3 (v). Choose t E T . For all t' E T , (Pt)t, = pt+t,c p ,
that is, P , is contracting. (v) 2 (vi). Let t < t'. By hypothesis, there exists some t" E T t". But then such that t" f 0 and t' = t
+
P,,
that is, (vi). (vi) * (i).
P,+t-
(P,),,,C P,
For all t
E
T, 0
(since
P, is contracting)
< t and
0 < t * P,CP, But P,, == P and so, for all t
E
T, P , C P.
I
In Theorem 3.6.2, (ii) shows that the contracting T-processes are precisely those which are equal to their translation closure; (iii) shows they are those which are closed under left translation
3.6.
83
CONTRACTING PROCESSES
of their elements; and (iv)-(vi) give conditions on their timeevolution. Condition (v) yields: 3.6.3 Corollary. invariant.
T h e class of contracting 7’-processes is time
We consider next the special case of contracting T-processors. We note the intersection of two time invariant classes of T-processes is, in fact, time invariant. Hence: 3.6.4 Theorem. T h e class of contracting T-processors is time invariant. 3.6.5 Lemma. Let P be a T-processor. If P is contracting, then both Pl and P2 are contracting. PROOF.
If P is contracting, P
P and by Lemma 3.4.5,
=
(p’) = (P)’
x
p1
that is, Pl is contracting. Similarly, P 2 is contracting. 3.6.6 (i) (ii) (iii) (iv)
a
Lemma. Let P be a 7’-process. If P is contracting, then OP
= mP. For all t E T, t P C OP. For all t , t’ E T , t < t’ =z t‘P C t P . For all t , t‘ E T, ( t $- t ’ ) P C tP.
PROOF.
If P is contracting, P
OP
1
==O
P
P. Hence, by Lemma 3.4.3,
=
@P
Now, using Lemmas 1.5.10 and 3.3.13, if P is contracting, t
< t’
P,.CP,
3
OP,.COP,
3
(t’ + O ) P C ( t
4-0)P =- t ’ P C t P
that is, (iii). Rut (ii) follows from (iii) since either t = 0 or 0 < t . Similarly, (iv) follows from (iii) since t < t’ iff there exists some t“ E T (t” f 0) such that t‘ = t t”. I
+
84
3.
TIME-EVOLUTION
RFMARK. ‘l’hus, for a contracting T-process, the attainable space also is “contracting” in time. I n particular, every element of the attainable space is attainable at time 0.
Combining Lemmas 3.6.5 and 3.6.6, thr following is immediate:
3.6.7 Theorem.
Let P be a T-processor. If P is contracting,
then (i) OP’ ((PI and OP2 GYP2. (ii) For all t E T , t P 1 C OP’ and t P 2 C OP2. t’Pl C tP’ & t‘P2 C tP2. (iii) For all t , t’ E II’, t < t’ (iv) For all t , t’ E T , ( t t ’ ) P C tP’ and ( t t’)P2C tP2. -
+
+
REMARK. I n words, for a contracting T-processor, the attainable spaces of both input set and output set also are “contracting” in time.
3.6.8 Theorem. Let P and ,O be T-processes. If ,O is an image of P and I’ is contracting, then 0 is contracting. IW)OF. \I’e are given P P. By Corollary 3.4.17, any homomorphism from P to 0 is also a homomorphism from P to g. ‘l’hercfore, if /z is a homomorphism, ~
Q ={pohlp€P] -{pohlpEP] -0 and Q is contracting.
I
‘I’heorem 3.6.8 leads immediately to several other results:
3.6.9 Theorem. If P is a T-processor, then P is contracting iff P-I is contracting. PROOF. By Theorem 1.8.8, P and P-’ are isomorphic, hence, images of each other. ‘The theorem is then imrnediatc from Theorem 3.6.8.
3.6.10 Theorem. If P is a 7’-process, then P is contracting iff I P is contracting. PROOF.
Similar to Theorem 3.6.9.
3.7.
85
EXPANDING A N D STATIONARY PROCESSES
3.6.11 Theorem. Let P be a T-processor. If P is free, then P is contracting iff P2 is contracting. PROOF. By Theorem 1.8.6, P and P2 are isomorphic. T h e theorem is immediate.
Finally, we T-processors:
consider
the interconnections
of
contracting
3.6.12 Theorem. If P is a T-processor, then the following statements are equivalent: (i) P is contracting. (ii) P : is contracting. (iii) : P is contracting. PROOF. By Theorem 2.4.9, P, P:, and :P are pairwise isomorphic. Thus, each is an image of the other. T h e theorem follows from Theorem 3.6.8.
3.6.13 Theorem. Let P and Q be T-processors. If both P and Q are contracting, then P o Q and P / / Q are both contracting. If P1 is a T-processor and P is contracting, then # P is contracting.
By Theorem 3.5.1, ( P Q ) , C P , 0 Q t for all t PROOF. fore, if P and 0 are contracting, by Lemma 2.2.6,
E
T. There-
1
P,,!/Ql by
0
P, C P & Q ,CQ
* P,.Q, C P o Q
and we see P o Q is contracting. Similarly, ( P / / Q ) l Theorem 3.5.2, and by using Lemma 2.3.7, Pt C P & Qt C Q
* PtlIQt C PIIQ
so P / / Q is contracting. Finally, using Theorem 3.5.3, if P # P C (#p)C # ( P ) -
that is, (#P) 3.7
==
=
=
P,
#P
#P. Thus, # P is contracting.
EXPANDING AND STATIONARY PROCESSES
T h e concepts of “expanding” and “stationary” T-processes are very similar to the above concept of contracting T-processes. For the most part, we get completely analogous results.
86
3.
3.7.1
Definition.
TIME-EVOLUTION
Let P bc a T-process. P is expanding E T, P C P , [for all t E T, P = P J .
[strrtioiiarj~]iff for all t
Evidently, P is stationary iff P is both contracting and Rt x w K . c\panding. *Igain, if every element of P is constant, then P is st,itionary and expanding as 1% ell as contracting. Thus, for example, thc input set of any free T-processor is stationary. \Vc have characterization theorems for expanding and stationary T-processes as follows:
3.7.2 Theorem. If P is a 7’-process, then the following statements arc equivalent: (i) P is expanding. (ii) p t I ’ & t ~ T :. ( 3 q ) : q ~ P k p y,. ( i i i ) I:or a11 t , t‘ t 7’, P , C I’trt, . (iv) For all t F 7’, P , is expanding. (L) For all f , t‘ E T , f -:t’ ;- P , C P,’ . ~~
i w o o i ~ . Similar to the proof of Theorem 3.6.2. We leave the d c‘t;I i 1s to t 11e rca d e r .
3.7.3 Corollary. The class of expanding T-processes is time invariant. The class of expanding T-processors is time invariant.
3.7.4 Theorem. If P is a T-process, then the following statements are equivalent:
(i) P is stationary.
(ii) For all t ∈ T, P_t is stationary.
(iii) For all t, t′ ∈ T, P_t = P_{t′}.
PROOF. (i) ⇒ (ii). We know P_{t′} = P for all t′ ∈ T. Fix t. Then,
(P_t)_{t′} = P_{t+t′} = P = P_t
that is, P_t is stationary.
(ii) ⇒ (iii). For all t, t′ ∈ T, by hypothesis on T, there exist t_1 and t_2 such that
t + t_1 = t′ + t_2
with either t_1 = 0 or t_2 = 0. If t_1 = 0, then
P_t = P_{t′+t_2} = (P_{t′})_{t_2} = P_{t′}
since P_{t′} is stationary. If t_2 = 0, then
P_{t′} = P_{t+t_1} = (P_t)_{t_1} = P_t
since P_t is stationary. Thus, (iii) holds.
(iii) ⇒ (i). For any t ∈ T,
P_t = P_0 = P
so P is stationary. ∎
Condition (iii) of Theorem 3.7.4 shows that a stationary T-process has a constant and hence a trivial time-evolution. Condition (ii) gives the usual corollary:
3.7.5 Corollary. The class of stationary T-processes is time invariant. The class of stationary T-processors is time invariant.

3.7.6 Lemma. Let P be a T-process. If P is expanding [stationary], then for all t, t′ ∈ T, t < t′ ⇒ tP ⊆ t′P [for all t, t′ ∈ T, tP = t′P].
PROOF. Like Lemma 3.6.6. ∎
The next theorem leads to almost all of our other preliminary results:
3.7.7 Lemma. Let P be a T-process. If P is expanding, then for all t ∈ T, 𝒜P_t = 𝒜P.
PROOF. By Lemma 3.3.5, 𝒜P_t ⊆ 𝒜P. By Lemma 1.5.10, if P ⊆ P_t, then 𝒜P ⊆ 𝒜P_t. Thus, if P is expanding, 𝒜P_t = 𝒜P for all t. ∎
3.7.8 Theorem. Let P and Q be T-processes. If Q is an image of P and P is expanding [stationary], then Q is expanding [stationary].
PROOF. Let h: 𝒜P → 𝒜Q be a homomorphism. By Corollary 3.4.17, for all t ∈ T, h | 𝒜P_t is a homomorphism from P_t to Q_t. If P is expanding, then for all t ∈ T, P ⊆ P_t. Moreover, by Lemma 3.7.7, 𝒜P_t = 𝒜P and it follows that h | 𝒜P_t = h. We have
Q = {p ∘ h | p ∈ P} ⊆ {p ∘ h | p ∈ P_t} = Q_t
that is, Q is expanding. Similarly, if P is stationary, P_t = P for all t. Every stationary T-process is expanding, hence, again, h = h | 𝒜P_t. Thus, for any t ∈ T,
Q = {p ∘ h | p ∈ P} = {p ∘ h | p ∈ P_t} = Q_t
which proves Q is stationary. ∎
3.7.9 Lemma. Let P be a T-processor. If P is expanding [stationary], then both P¹ and P² are expanding [stationary].
PROOF. In Theorem 1.8.4, we showed P¹ and P² are images of P. ∎
Combining Lemmas 3.7.6 and 3.7.9, we get
3.7.10 Corollary. Let P be a T-processor. If P is expanding [stationary], then for all t, t′ ∈ T:
(i) t < t′ ⇒ tP¹ ⊆ t′P¹ [tP¹ = t′P¹].
(ii) t < t′ ⇒ tP² ⊆ t′P² [tP² = t′P²].
Thus the attainable spaces of both input set and output set of an expanding T-processor are "expanding," and for a stationary T-processor they are "constant."
3.7.11 Theorem. Let P be a T-processor. P is expanding [stationary] iff P⁻¹ is expanding [stationary].
PROOF. P and P⁻¹ are isomorphic. ∎
3.7.12 Theorem. Let P be a T-process. P is expanding [stationary] iff IP is expanding [stationary].
PROOF. P and IP are isomorphic. ∎
3.7.13 Theorem. Let P be a T-processor. If P is free, then P is expanding [stationary] iff P² is expanding [stationary].
PROOF. If P is free, then P and P² are isomorphic. ∎
3.7.14 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is expanding [stationary].
(ii) P: is expanding [stationary].
(iii) :P is expanding [stationary].
PROOF. P, P:, and :P are pairwise isomorphic. ∎
REMARK. It happens that if two T-processors P and Q are expanding [stationary], the series interconnection P ∘ Q need not be expanding [stationary]. Also, if #P is a T-processor, then the closed loop #P may not be expanding [stationary] when P is.
This somewhat surprising result is traceable to the situation discussed at the beginning of Section 3.5. As we showed there, the conditions (P_t ∘ Q_t) ⊆ (P ∘ Q)_t and #(P_t) ⊆ (#P)_t hold only in special cases, and these conditions are essentially what we need in the present situation. This result leads us to conclude that of the three concepts, contracting, expanding, and stationary T-processes, the contracting T-process is the most interesting and important, since it admits the significant additional property of being closed under various interconnections in the processor case. We shall encounter additional results unique to the contracting case in Chapter 4 that tend to support this conclusion. For the expanding and stationary T-processors, we get only the following theorem:
3.7.15 Theorem. Let P and Q be T-processors. If both P and Q are expanding [stationary], then P // Q is expanding [stationary].
PROOF. Using Lemma 2.3.7 and Theorem 3.5.2, we have
P ⊆ P_t & Q ⊆ Q_t ⇒ (P // Q) ⊆ (P_t // Q_t) = (P // Q)_t
Thus, P // Q is expanding when both P and Q are. Similarly,
P = P_t & Q = Q_t ⇒ (P // Q) = P_t // Q_t = (P // Q)_t
so P // Q is stationary when both P and Q are. ∎
3.8 SPECIALIZATION TO DISCRETE TIME

In some instances, we get a significant simplification of things when we restrict attention to discrete-time processes. Our first example of this is the following, in which we consider the concepts of contracting, expanding, and stationary ω-processes.
3.8.1 Theorem. If P is an ω-process, then the following statements are equivalent:
(i) P is contracting.
(ii) p ∈ P ⇒ p_1 ∈ P.
(iii) P_1 ⊆ P.
(iv) For all t ∈ ω, P_{t+1} ⊆ P_t.
PROOF. We know P_1 ⊆ P and 1 ∈ ω. Thus,
p ∈ P ⇒ p_1 ∈ P_1 ⇒ p_1 ∈ P
that is, (i) ⇒ (ii).
(ii) ⇒ (iii). If q ∈ P_1, then q = p_1 for some p ∈ P. But, by (ii), p_1 ∈ P; hence, q ∈ P.
(iii) ⇒ (iv). We see
P_{t+1} = P_{1+t} = (P_1)_t ⊆ P_t
where we have used Lemma 3.3.12.
(iv) ⇒ (i) is by induction on t. We will show P_t ⊆ P for all t ∈ ω. Choosing t = 0 in (iv), we have
P_1 ⊆ P_0 = P
which provides the basis for the induction. Assume P_t ⊆ P. Using (iv),
P_{t+1} ⊆ P_t ⊆ P
This completes the induction and the theorem is proved. ∎
We shall simply state the result in the remaining two cases and leave the proofs as exercises.
3.8.2 Theorem. If P is an ω-process, then the following statements are equivalent:
(i) P is expanding.
(ii) p ∈ P ⇒ (∃q): q ∈ P & p = q_1.
(iii) P ⊆ P_1.
(iv) For all t ∈ ω, P_t ⊆ P_{t+1}.
3.8.3 Theorem. If P is an ω-process, then the following statements are equivalent:
(i) P is stationary.
(ii) P = P_1.
(iii) For all t ∈ ω, P_t = P_{t+1}.
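The discrete-time criteria in Theorems 3.8.1-3.8.3 lend themselves to a mechanical check. The following is a minimal illustrative sketch, not taken from the text: an ω-time function is modelled as a Python callable, a process as a finite collection of such callables, and equality of time functions is only tested up to a finite horizon, so the verdicts are approximations of the set-theoretic conditions. All names are invented for illustration.

```python
# Finite-horizon check of the criteria of Theorems 3.8.1-3.8.3 (illustrative only).

def shift(p, t):
    """The translate p_t, i.e. the time function s -> p(t + s)."""
    return lambda s: p(t + s)

def same(p, q, horizon):
    """Finite-horizon stand-in for equality of omega-time functions."""
    return all(p(s) == q(s) for s in range(horizon))

def member(p, process, horizon):
    return any(same(p, q, horizon) for q in process)

def is_contracting(process, horizon=20):
    # Condition (ii) of Theorem 3.8.1: p in P implies p_1 in P.
    return all(member(shift(p, 1), process, horizon) for p in process)

def is_expanding(process, horizon=20):
    # Condition (ii) of Theorem 3.8.2: every p in P is q_1 for some q in P.
    return all(any(same(p, shift(q, 1), horizon) for q in process) for p in process)

def is_stationary(process, horizon=20):
    # Stationary iff both contracting and expanding (see the remark after Definition 3.7.1).
    return is_contracting(process, horizon) and is_expanding(process, horizon)

# The constant functions form a stationary process; a single decreasing "ramp" is not contracting.
constants = [lambda t, c=c: c for c in (0, 1)]
ramp = [lambda t: -t]
print(is_stationary(constants), is_contracting(ramp))   # True False
```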
Exercises

3-1. Prove that for any function f: B → B the set
A = {((t, b), b f^t) | t ∈ ω & b ∈ B}
is an action of ω, where f^0 = 1_B; f^1 = f; and f^{t+1} = f^t ∘ f. Characterize 𝒫A in the given case.
3-2. Let A be an action of the monoid M. Prove if b ∈ ℬA, then GAb is a minimal invariant set iff b is recurrent.
3-3. (Marino's theorem.) Let A be an action of a monoid M. Prove the set I of nonempty minimal invariant sets of A is a partition of the set of recurrent elements of A. (Note: A partition of a set K is a set I of nonempty subsets of K such that: (i) ∪I = K; and (ii) for all i, j ∈ I, i ≠ j ⇒ i ∩ j = ∅.)
3-4. Prove that for a T-time function p, the following statements are equivalent:
(i) p is constant.
(ii) For all t ∈ T, p_t = p.
(iii) For all t, t′ ∈ T, p_t = p_{t′}.
(iv) For all t, t′ ∈ T, tp = t′p.
(v) For all t ∈ T, tp = 0p.
3-5. Prove that for any T-process P, the set {P_t | t ∈ T} is a time-invariant class.
3-6. Prove that for any T-time function v, the set
T_v = {t | t ∈ T & v_t = v}
is a time set.
3-7. A T-time function v is weakly periodic iff for some t ∈ T (t ≠ 0), v_t = v. v is periodic iff for some t ∈ T (t ≠ 0),
T_v = {t^n | n ∈ ω}
where T_v is as in Exercise 3-6, and t^0 = 0; t^1 = t; t^{n+1} = t^n + t. t is the period of v. Show that an ω-time function is periodic iff weakly periodic.
3-8. A time set T has Archimedean order iff for every t, t′ ∈ T (t ≠ 0), t′ < t^{n+1} for some n ∈ ω. Prove if T has Archimedean order that a T-time function v is periodic iff there exists some t ∈ T (t ≠ 0) such that v_t = v and for all t′ ∈ T (0 < t′ < t), v_{t′} ≠ v. (Hint: First show that if 0 < t < t′, there exists some n ∈ ω and some t_1 ∈ T such that t′ = t^n + t_1, where t_1 < t.)
3-9. Prove Theorem 3.2.13.
3-10. Give a counterexample to the following proposition: If the time set T has Archimedean order, then every weakly periodic T-time function is either periodic or constant. (Hint: Consider Exercise 1-30.)
3-11. Prove that if A is an action of T (where T is a time set), then 𝒫A is a contracting T-process.

3-12. Prove for any set A, the set A^T is a stationary T-process.

3-13. Write a short paragraph discussing your interpretation of the properties of contraction, expansion, and stationarity on T-processes.

3-14. A T-process P is weakly periodic iff for some t ∈ T (t ≠ 0), P_t = P. Prove the class of weakly periodic T-processes is time invariant.
3-15. Prove a T-process P is stationary iff, for all t ∈ T, P_t = P.
3-16. If P is an ω-process, then the unit delay on P is the ω-processor
dP = {(p_1)p | p ∈ P}
Give necessary and sufficient conditions on dP that P be contracting. Prove that if P is contracting, then dP is contracting.

3-17. Prove that every finite automaton is contracting.

3-18. Prove that every differential motion is contracting.

3-19. Let T be a time set with Archimedean order. Prove that for a T-time function p, if there exists some t ∈ T (t ≠ 0) such that p_t = p and for all t′ ∈ T, t′ < t ⇒ p_{t′} = p, then p is constant.

3-20. Characterize a T-process P such that the set {P_t | t ∈ T} is a minimal invariant set of GC, where C = 𝒜P. Prove that such a T-process is weakly periodic (Exercise 3-14).
Strong Types of Causality
4.1 INTRODUCTION
The properties of contraction, expansion, and stationarity are what might properly be termed "strong" types of time invariance for T-processes. Though the properties are strong, their exposition may be justified in two ways:
(i) Some mathematical models for real technological processes do in fact possess these strong properties.
(ii) These properties are suggestive of other (weaker) properties that a process might more realistically possess.

Another type of property that a process or processor might very well possess is some type of "causality." Now what do we mean by this term as used here? Certainly the best word to substitute for "causality" is "functionality." However, we also do not see any fundamental reason for distinguishing the words "causality" and "determinism" as properties of processes. There is a reason; namely, the term "nondeterminism" has, over the years, become equated with "random" or "stochastic" in some parts of systems theory. This is unfortunate because there would appear to be a broad area between "nondeterministic" and "stochastic," wherein "nondeterministic" processes would be studied by methods other than the theory of random processes. Of course, such an area does not, in fact, exist. Indeed, on the contrary, it would appear that even the theory of random processes takes us back to some type or other of process functionality, hence, back in the end to causality, although the functionality may be on some process or processor other than the one we started with. Let us disregard this for now. Here, causality means functionality, and functionality means:
(I) For a T-process, for each p ∈ P and t ∈ T, tp is given as the image of some function f associated with P, whose arguments do not include tp itself.
(II) For a T-processor, for each uy ∈ P and t ∈ T, ty (i.e., the output) is given as the image of some function f associated with P, whose arguments do not include ty itself.
In each of the above cases, f is called an auxiliary function for P. We have already seen one example of an auxiliary function for T-processes. Namely, for a functional T-processor P, the function f: P¹ × T → 𝒜P² such that
(u, t)f = t(uP*)
is an auxiliary function for P. That is, for all uy ∈ P and all t ∈ T,
ty = (u, t)f
The concept of auxiliary function is due to Mesarovic, as is the concept of a constructive specification, and it was Mesarovic who first pointed out the important relationship between the two concepts in [26]. Roughly, a constructive specification for a T-process P is a formula 𝒮 depending on one or more auxiliary functions, such that the following formula is true:
p ∈ P ⇔ 𝒮
Above, the auxiliary function f for functional T-processors gives rise to a constructive specification for such T-processors. Recall that in Theorem 1.7.6 we showed uy ∈ P ⇔ uP* = y in the functional case. It follows that for any functional T-processor,
uy ∈ P ⇔ (∀t): ty = (u, t)f
Hence, the formula (∀t): ty = (u, t)f is a constructive specification for P. The reader will recall, of course, that in Section 1.9 a number of examples of T-processes (in particular, T-processors) was presented. Looking back, essentially all of these example T-processes were defined by some sort of constructive specification or another. In other words, the most common way of defining processes in systems theory is by the use of auxiliary functions and constructive specifications. This is an important observation and gives some insight. Actually, the ready availability of constructive specifications for processes has tended in the past to obscure what, in essence, a process really is and to make generalization (such as we are trying for here) more difficult. It is quite well understood in systems theory that certain constructive specifications for processes are "canonical" in the sense that they exist for a host of important processes. In fact, this is one of the common ways of classifying processes. One speaks of "ordinary differential equation" processes (or systems); "integral equation" processes; "finite-difference equation" processes; etc. This understanding (that "canonical" constructive specifications exist) is, however, informal. In the setup we have established here, it is possible to attack this issue formally. What we can do is show for a certain classification of T-process that a constructive specification of a certain form exists. If we do not find any surprises in this, we shall at least take comfort in the new precision we bring to bear on this issue. Fundamentally, what we propose to do in this chapter is to consider a number of "strong" types of causality for T-processes. Such properties in the case of mathematical models for real technological processes arise:
(i) When the process is a processor, and there is a very strong property of "cause" and "effect" from input to output underlying the given phenomenon,
(ii) When said property of "cause" and "effect" is well understood (as in the case of many processes which obey physical "laws") and is duly accounted for in choosing the mathematical model for the processor.
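To make the role of auxiliary functions concrete, here is a minimal sketch, not from the text, of a functional discrete-time processor given by a constructive specification: the invented helper aux(u, t) may inspect the whole input time function but never the output value being defined, mirroring the formula ty = (u, t)f above. All names are assumptions made for illustration.

```python
# A functional discrete-time processor defined by an auxiliary function (illustrative sketch).
from typing import Callable

TimeFunction = Callable[[int], float]

def processor_from_aux(aux: Callable[[TimeFunction, int], float]):
    """Return the input/output map u -> y determined by the specification
    (for all t) ty = aux(u, t)."""
    def apply(u: TimeFunction) -> TimeFunction:
        return lambda t: aux(u, t)
    return apply

# Example auxiliary function: a running sum of the input up to time t.
def running_sum(u: TimeFunction, t: int) -> float:
    return sum(u(s) for s in range(t + 1))

P_star = processor_from_aux(running_sum)
y = P_star(lambda t: 1.0)        # constant input 1, 1, 1, ...
print([y(t) for t in range(5)])  # [1.0, 2.0, 3.0, 4.0, 5.0]
```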
The main concepts we shall be espousing are, of course, auxiliary functions and constructive specifications. Building on what we do in this chapter, in Chapter 5 we shall consider the concept of state in our theory. With the concepts of auxiliary functions, constructive specifications, and state spaces for T-processors, we will then be able to explain the "state space" approach in systems theory, i.e., the approach to processes on which almost all of systems theory is today constructed. This, then, we consider to be perhaps the most basic chapter of this book.

4.2 STATIC PROCESSORS
One of the simplest yet most important concepts of causality in the systems field is the notion of a "static" processor. A static processor is one which is "instantaneously" a function from input to output. Static processors are often referred to as "instantaneous" or "memoryless" as well as "static." As we shall see, the property of being "static" is a very strong property of processors. The importance of the concept stems from the fact that many real processors can be regarded as "instantaneous" for the purposes of analysis relative to the processors with which they are interconnected. Thus, in the given interconnection, they are relatively simple constituents. Recall that if P is a T-processor, then 𝒜P is a relation and, for all t ∈ T, tP is a relation. We distinguish four types of static T-processors:
4.2.1 Definition. Let P be a T-processor. P is weakly static iff 0P is a function. P is static iff for all t ∈ T, tP is a function. P is uniformly static [bistatic] iff 𝒜P is a function [a 1 : 1 function].
REMARK. Recall that 𝒜P is the attainable space of P and that tP is the attainable space at t ∈ T. We are going to discover the rather remarkable fact that these attainable spaces can also be auxiliary functions for T-processors.
4.2.2 Lemma. Let P be a T-processor. If P is static, it is weakly static. If P is uniformly static, it is static. If P is bistatic, it is uniformly static.
PROOF. The first and third statements are obvious. To see the second, recall that for all t ∈ T, tP ⊆ 𝒜P. If 𝒜P is a function, then for all t ∈ T, tP is a function since every subset of a function is a function. Thus, if P is uniformly static, it is static. ∎
We have characterization theorems for each of the types of static T-processors. However, only the theorems for static and uniformly static T-processors are at all sophisticated. For weakly static T-processors, we have:
4.2.3 Theorem. A T-processor P is weakly static iff 0P: 0P¹ → 0P² (onto).
PROOF. For any function f, f: 𝒟f → ℛf (onto). But, by Lemma 1.6.5,
𝒟0P = 0P¹   and   ℛ0P = 0P²
Hence, 0P is a function iff 0P: 0P¹ → 0P² (onto). ∎
4.2.4 Theorem. If P is a T-processor, then the following statements are equivalent: (i) P is static. (ii) For all t E T, P , is weakly static. (iii) For all t E T, P , is static. (iv) For all t E T, tP: tP1 tP2 (onto). (v) P,: PI P2 (onto); for all t E T, tP: tP1 -+ tP2 (onto); and -j
--f
t(uP*) = (tu)(tP)
(vi) P,: Pl
+ P2 (onto)
and
tu = tv =r
(vii) For all uy, z1z
E
t(uP*) = t(vP*)
P and all t E T, tu = tv -aty = t z
+
PROOF. (i) 3 (ii). By Lemma 3.3.13, OP, = ( t 0 ) P = tP. Thus, if for all t , tP is a function, then for all t , OP, is a function.
100
4.
STRONG TYPES OF CAUSALITY
(ii) >: (iii). Let OP,, be a function for all t‘. Choose t consider tP. By 1,emma 3.3.13, t ’ pt
~~~~~
(t
t’)P == ( ( t -tt’)
7-
E
T and
+ 0)P = optit,
Hence, for all t’ E T, t’P, is a function. (iii) 3 (iv). If P , is static, then it is weakly static via Lemma 4.2.2. Thus, by Theorem 4.2.3, OP,: O(P,)I 0 ( P J 2 (onto). But O P , = tPand using Lemmas 3.3.9 and 3.3.13, --j
0(Pt)1 = O(Pl), --= tP’
and similarly for P2. Thus, tP: t P 1 + tP2 (onto). (iv) * (v). If for all t~ T, t P : tP1 -+ tP2 (onto), then P is functional, i.e., uy € 1’ 8i uz E 1’
3
(Vt):(tu)(tP) ==ty & (tu)(tP) = tz
3
( V t ) : ty :== t z
3
y
=
z
so P,: Pl ---t Pz (onto) by Theorem 1.7.6. Then, if uy E P and t E 7’, w e have U P , -=3’ and t(uZ’,)
(v)
=
(tu)(tP)
Given (v), P,: P’--t P’ (onto) and
: ~ >(vi).
tu
ty
=
tv =’ (tu)(tP)=- ( t v ) ( t P )
(vi) -+ (vii) and (vii)
3
(i) are trivial.
t(uI’*) ~=t(vP*)
I
Condition (v) of Theorem 4.2.4 shows that all static processors are functional and indicates how the output y may be calculated from the input u in a point-by-point manner using the functions tP.
4.2.5
Corollary. Every static T-processor is functional.
Condition (iii) of Theorem 4.2.4 gives us the familiar result:
4.2.6 Corollary. T h e class of static 7-processors is time invariant.
4.2.
STATIC PROCESSORS
101
T h e next theorem shows there exists a uniform scheme for defining all static T-processors. Such a scheme is what we have discussed in the introduction above, namely, a constrzrctize specification for static processors.
4.2.7 Theorem. If P is a static T-processor, then uy E P 0u E PI& ( V t ) : ty
=
(tu)(tP)
PROOF. Clearly, if uy E P, then the given conditions are satisfied. Conversely, assume the given conditions. If u E P' then, for some z , uz E P. Then, since P is static, for all t E T,
tz
that is, z
= y.
Thus, uy
=
E
(tu)(tP)= ty
P.
I
REMARK. Thus, the formula u E P 1 & (Vt):ty = (tu)(tP) is a constructive specification for any static T-processor P. T h e interpretation of this is important. It says that in order to define a static T-processor P, it suffices to give the input set P' together with the family of functions tP for t E T. We note that Theorem 4.2.7 makes clear the fact that each of the functions tP is an auxiliary function for P. Thus, we have an example of a constructive specification involving an infinite number of auxiliary functions.
We note in passing that the weakly static T-processors do not constitute a time-invariant class. This downgrades the importance of the concept somewhat. We justify our interest in these T-processes by the role they play in the section in making proofs.
4.2.8 Theorem. If P is a T-processor, then the following statements are equivalent: (i) P is uniformly static. (ii) P is weakly static. (iii) For all t E T, P , is uniformly static. (iv) OlP: GlP' + a P 2 (onto). (v) P,: Pl -+ P2 (onto); LZP: GIP1 + @P2 (onto); and UP, = u
0
aP
102
4.
STRONG TYPES OF CAUSALITY
tu = t’E
( \ 1 1 ) for all
ZLJ?, 7’2F
t’(,P*)
P and all t , t‘ t T, tu =
iwx)~.
t(,P*)
t’v * ty
t‘z
Stniilar to the proof for Theorem 4.2.4 and using Lemma ( l P ) extensively. I
3.4.3 (nainclv, OP
(’ondttion (1) of ‘I’hcorem 4.2.8 shows how the function U P may be used to calculate the output y given the input u in a point-bypoint manner, i.e., t(llP,)
~
I(u
2
f2P)
(tu)m’
All,o, It g11es:
4.2.9 Corollary. ti1 not i o na 1.
Every
uniformly
static
T-processor
is
Condition (vii) shows that the functional relationship between tu and t j docs not depend on t , i.e., it does not vary with time. T h u s , Uniformly static processors are what are sometimes referred to as “time-invariant” ones, and static processors are what are sometimes refered to as “times-varying” ones. Condition (iii) of Thcorem 4.2.8 gives:
4.2.10 Corollary.
T h e class of uniformly static T-processors is
time invariant. 1:or uniformly static T-processors, there is only the one auxiliary function ( / P . Such processors admit the following simple sort of constructive specification:
4.2.11 Theorem. If P is a uniformly static T-processor, then 2‘3’ PROOF.
€
I’
-
u E P I & ( V t ) : ty
=
(tu)tYP
Similar to the proof for Theorem 4.2.7.
4.2.
103
STATIC PROCESSOR5
REMARK. T h u s we see that to define a uniformly static ?’-processor, it suffices to give the input set P1 and the auxiliary function L7P of P.
Finally, there is a very simple relationship between the bistatic 5”-processors and the uniformly static ones:
4.2.12 Theorem. Let P be a T-processor. P is bistatic iff both P and P-l are uniformly static. PROOF.
It suffices to show that Ql(P-l) =
U(P-1) = { t ( y u )1 uy E P & t =
E
We see
T } = {(ty,tu) 1 uy E P & t
E
T}
{ ( b , a ) 1 a6TPb) = ( m y
Now, since MP is a 1 : 1 function iff both CIP and (UP)-l are functions, the theorem is immediate. T h e following conditions follow readily:
4.2+13 Corollary.
Every bistatic T-processor is bifunctional.
4.2.14 Corollary. invariant.
T h e class of bistatic T-processors is time
\Ye have on hand a very important and general example of a bistatic T-processor:
4.2.15 Lemma. T-processor. PROOF.
For
any
T-process P, I P is a
bistatic
-4s an exercise.
4.2.16 Lemma. Let P and Q be T-processors, such that P C 0.If Q is weakly static [static; uniformly static; bistatic], then P is M eakly static [static; uniformly static; bistatic]. PROOF. If P C Q, then for all t E T, t P C tQ and U P C UQ by Lemma 1.5.10. T h e lemma follows from the fact that any subset of a function [a 1 : 1 function] is a function [a I : 1 function]. 1
104
4.
STRONG TYPES OF CAUSALITY
b’e come now to a very important set of results which give to the uniformly static ”-processors and the bistatic T-processors a very promintmt role in our theory.
4.2.17 Theorem. A T-processor P is uniformly static [bistatic] iff f / / ’ is a homomorphism [isomorphism] from Pi to P2. PROOF. If fTP is a homomorphism [isomorphism], then OZP is a function [a 1 : 1 function]. Thus, P is uniformly static [bistatic]. Con\ ersely, if P is uniformly static [bistatic], then fTP: (tP1 G‘P2 (onto) [UP: (/Pi ( l P 2( 1 : 1 onto)] by Theorem 4.2.8. Moreover, by condition (t) of 4.2.8, ---f
---f
P2
=
{ U P * 1 u E 1”)
(u
0
(TI’ 1 u E P’)
‘Thus, if P is uniformly static, then U P is a homomorphism, and if I-’ is bistatic, then U P is an isomorphism from P1to P2. I
4.2.18 Corollary. Let P be a T-processor. If P is uniformly static [bistatic], then P2is an image of Pi[PI and P 2are isomorphic]. 4.2.19 Theorem. Let P and Q be 7‘-processes. If h is a homomorphism [isomorphisnil from P to Q, then the set I? = { p ( p o h ) l p € P ] is n uniformly static [bistatic] T-processor. Moreover, U R PKOOE.
f 7lZ
~
~
=
h.
Clearly, R is a T-processor. Now, {t(p( p h ) ) 1 p E P & t E 7‘) { ( t p , t( Z, h ) ) p F P 8i t t TI { ( t p , (tp)h) 1 p E P & t E T ) - { ( a , ah) I a F m’) h ~1
~
Ilcncc, if I? is a homomorphism [isomorphism], U R is a function [a I : I function] anct K is uniformly static [bistatic]. I
4.2.20 Theorem. If P is a uniformly static 7’-processor, then I’ anct Z’l are isomorphic.
4.2. PROOF.
STATIC
105
PROCESSORS
Clearly, the function h
IUlP
:
= { ( ( a ,b), U )
I ~@Pbj
is a homomorphism from P to P‘ (see proof of Theorein 1.8.4). Now if P is uniformly static, 1aP is 1 : 1. I n fact, choose aOPb and caPd. There exist elements uy, vx E P and t , t’ E T such that: (i) (ii) (iii) (iv)
tu = a. ty = b. t’v = c. t’x = d .
Now, using condition (vii) of Theorem 4.2.8, we have u =
c
3
u
=
( a , 6)
c&tu =
=
t’v
=> u = C &
ty
=
t‘z > u = c & b
--
d
(c, 4
Thus, h is an isomorphism.
I
4.2.21 Corollary. If P is a uniformly static T-processor, then ICTP is an isomorphism from P to Pl.
Theorem 4.2.22 gives a result which shows when uniformly static T-processors are going to be contracting, expanding, and stationary:
4.2.22 Theorem. Let P be a T-processor. If P is uniformly static, then P is contracting [expanding; stationary] iff Pl is contracting [expanding; stationary]. PROOF. Since P and Pl are isomorphic, they are images of each other. Thus, apply Theorems 3.6.8 and 3.7.8. I
Finally, we have a theorem which shows to some extent that the assumption of contraction on a T-processor is indeed a strong assumption. We merely note that being weakly static is, in fact, a “weak” assumption on a T-processor relative to being uniformly static. Then:
106
4.
STRONG TYPES OF CAUSALITY
4.2.23 Theorem. I,et P be a 7’-processor. If P is contracting, thcn the follo\ving statements are equivalent: (i) I’ is uniformly static. ( i i ) P is static. ( i i i ) I’ is ~ e a k l ystatic. IW)OF. (iii) hold by Lemma 4.2.2. ( i ) =:. ( i i ) a n d (ii) (iii) -2- (i). If I’ is contracting, then P : P. Thus, if P is weakly static, thcn I’ is weakly static. However, by l h e o r e m 4.2.8, if P is u ~ a k l ystatic, then P is uniformly static. I
4.3
STATIC INTERCONNECTIONS
I n this section, ~ v econsider interconnections of the various types of static 7’-processors.
4.3.1
Theorem.
any
I:or
I
.
I-processor
P, : P is uniformly
static . See
PROOF.
//(:P)
{t((uy)?) uv ~
{ ( ( t u , ty), ty)
t
P 8; t E 2“)
uy
E
P&tE T) h ) , 6) 1 aTlP6)
{(t(uy),ty) 1 uy
1’8; t t 7’)
= ((((1,
t
2(/P
4.3.2 Theorem. itf f’: is histatic.
1,et P be a 7‘-processor. P is uniformly static
r i t o o ~ . Consider //( 1’:). \Ye have
(/(I?)
--
{r(u(ujv)) rL-v t 1’8; t t 7’)
= :
{([u, ( t u , I!))
=
(1clP-l
U? E
==
{ ( t u ,t(uy)) , uy
1’8; t t 7’)
-
t
1’8; t E TI
{ ( a , ( ( I , 6)) aClPbJ ~
So\\, according to Corollary 4.2.21, if P is Uniformly static, then l l l P is an isomorphism from P to PI, hence, a 1 : 1 function. Clearlv, thcn ( I U P ) - 1 is also a 1 : 1 function, i.e., U ( P : )is 1 : 1 and
4.3.
107
STATIC INTERCONNECTIONS
P : is bistatic. Conversely, if P: is bistatic, then (161!P)-t is a 1 : 1 function. Choose aCtPb and clZPd. Clearly, a(lTirP) = ( a , 6) and c(lCZP)-* = ( c , d). Thus, a = c
= (a, b ) = (c, d ) = b
=
d
that is, U P is a function. I n other words, P is uniformly static.
I
4.3.3 Theorem. Let P be a T-processor. P is weakly static [static] iff P: is weakly static [static]. T h e reader can show that for all t
PROOF.
E
T,
a(tP:)(a,b) o a(tP)b
and it follows that tP: is a function iff tP is a function.
I
4.3.4 Theorem. Let P and Q be T-processors. If P and Q are both weakly static [static; uniformly static; bistatic], then P ∘ Q is weakly static [static; uniformly static; bistatic].
PROOF.
t(P
For any t E T, 0
0 ) = {(tu, tz) 1 (3y):uy E Z’&yz
t P 0 tQ
= {(tu, tz)
EQ}
1 (3y)(3s):uy € P & x z € Q & ty
=
tx}
and we see that t ( P 0 Q) C t P tQ. If both OP and OQ are functions, then OPo O Q is a function and O(P0Q) is a function, since O(P Q) C OP 0 OQ. Thus, if both P and Q are weakly static, then P 0 Q is weakly static. Similarly, if for all t E T , tP and tQ are functions, then tP tQ is a function, whence, t ( P Q) is a function. I n other words, if P and Q are both static, then P 0 Q is static. Next, we see 0
0
0
0
6qP.Q))
==
{ ( t u , t z ) 1 (3y):u y € P & y z € Q &t € TI
LM’ flQ
==
{(ti%, t‘z) 1 (3y)(&):uy
E
P & XZ E Q & t , t’ t T & ty
=
t’x}
and, again, U ( P Q) C U P UQ. If both U P and lYQ are functions [I : 1 functions], then LIP 6TQ is a function [a 1 : 1 function], and we see that U ( P Q) is a function [a 1 : 1 function]. Thus, if both P and Q are uniformly static [bistatic], then P 0 Q is uniformly static [bistatic]. I 0
0
0
0
4.
108
STRONG TYPES OF CAUSALITY
‘l’he cross product of two relations R and S is the
RE\iARK.
relation RX.S
= { ( ( a ,c ) ,
(b, d ) ) 1 uRb & cSd)
’I’hc cross product of two functions [two 1 : 1 functions] is a Eiinction [a 1 : 1 function]. I n fact, if H and S are functions, ( a , c ) ( R X S )= ( a n , C S )
4.3.5 Theorem. Let P and Q be T-processors. If P and Q are both weakly static [static; uniformly static; bistatic], then P // Q is weakly static [static; uniformly static; bistatic].
PROOF.
For any t E T, t ( / ’ / / Q ) = { l ( ( u u ) ( y z ) )1 u y E P & Z I Z E Q ] { ( ( t u ,tv), (ty, t z ) ) 1 uy E P & 2’2€Q} = { ( ( a ,c ) ,
( 6 , 4) I a(tP)b c(tQ)d>
tf“YtQ
:
\ihcrc ( X ) is the abovc cross product. It follows that if both P and
0a r c
M
eakly static [static], then P / / Q i s weakly static [static]. Next,
u e sc‘c that ([(f’,
‘0) { ( ( t u ,tv),(ty, t z ) )1 uy E P & uz t Q & t E 7’;
11PX/IQ
{ ( ( t u , f’Z)),
(ty, t ’ z ) )I uy
€
P & uz & 0 & t , t’ E T }
and that 𝒜(P // Q) ⊆ 𝒜P (×) 𝒜Q. If both 𝒜P and 𝒜Q are functions [1 : 1 functions], then 𝒜P (×) 𝒜Q is a function [a 1 : 1 function], and hence 𝒜(P // Q) is a function [a 1 : 1 function]. In other words, if both P and Q are uniformly static [bistatic], then P // Q is uniformly static [bistatic]. ∎
(i). Let P⁻¹ be free and contracting. Then P is strictly nonanticipatory by Theorem 4.4.12, and P is contracting by Theorem 3.6.9.
REMARK. Thus a T-processor which is both strictly nonanticipatory and contracting has (precisely) one constant output T-time function.
We get an analogous result for the case of strictly nonanticipatory T-processors which are stationary:
4.4.15 Theorem. If P is a nonempty T-processor, then the following statements are equivalent: (i) P is stationary and strictly nonanticipatory. (ii) P¹ is stationary and 0P² has one and only one element. (iii) P⁻¹ is free and stationary.
PROOF.
As a n exercise.
4.4.
117
N O N A N T I C I P A T O R Y PROCESSORS
REMARK. Classically, processors which are both free and stationary are called “autonomous.”
For later use, we record what we showed in the proof of Theorem 4.4.14 about strictly nonanticipatory T-processors:

4.4.16 Corollary. Let P be a nonempty T-processor. If P is strictly nonanticipatory, then 0P² has one and only one element.
Next we consider the class of contracting nonanticipatory T-processors. Here we get a very interesting result:

4.4.17 Lemma. Let P be a T-processor. If P is nonanticipatory, then P is weakly static.
PROOF. Our observation is the same as that leading to Corollary 4.4.16. For all uy, vz ∈ P,
ou -=
ov * d
* uo
= 0 &Ou -= ov
* (u”, O U )
= (VO, O V ) 2
oy
=
vO&Ou
= Ov
= oz
T h a t is, P is weakly static since OP is a function.
[
4.4.18 Theorem. If P is a T-processor, then the following statements are equivalent:
(i) P is contracting and nonanticipatory. (ii) P is contracting and uniformly static. (iii) P¹ is contracting, and P is uniformly static.
PROOF. (i) ⇒ (ii). If P is nonanticipatory, it is weakly static by Lemma 4.4.17. If P is contracting and weakly static, it is uniformly static by Theorem 4.2.23. (ii) ⇒ (iii). If P is contracting, then P¹ is contracting. (iii) ⇒ (i). If P is uniformly static and P¹ is contracting, then P is contracting by Theorem 4.2.22. If P is uniformly static, it is static and hence nonanticipatory by Lemma 4.4.11. ∎
4.4.19 Corollary. Let P be a T-processor. If P is contracting, then P is nonanticipatory iff P is uniformly static.
4.
118
STRONG TYPES OF CAUSALITY
Again we get an analogous result in the stationary case:
4.4.20 Theorem. If P is a T-processor, then the following statements are equivalent: (i) P is stationary and nonanticipatory. (ii) P is stationary and uniformly static. (iii) P¹ is stationary, and P is uniformly static.
PROOF.
For the reader.
IEXUK. Again contraction and stationarity arc shown to be strong propcrtics of 7'-processors.
-1s was the case with contracting, expanding, and stationary 7'-processors, we get a simplification of nonanticipation in the discrete-time case:
4.4.21 Theorem. If P is an w-processor, then P is nonanticipatory iff P is almost nonanticipatory. PROOF.
If
11
and
I n fact, in t h e Cled rl y , then ,
7'
are w-time functions, then for all t
case, (zilj*
w
(U/)]
=-
(V,)l
y L = x t (since P is almost nonanticipatory) &
4.
120
STRONG TYPES OF CAUSALITY
ty -= tx * ( y t ,t y ) == (xt,tx) 3 t z tw, so P is strictly nonanticipatory. z’t y t --= x t ~3 tz == tw, so P 0 Q is strictly non(ii) anticipatory. (iii) ( z i t , t z t ) -= ( c t , tv) * y t = x t (since P is aln-lost nonanticipatory) & ty = t x ( y t ,ty) :=- (xt,tx) 3 tz = tw, so P Q is nonanticipatory. (iv) u1 -= v t * y t ==: xt zt = w t ,so P Q is almost nonanticipatory. 1 -7
I
% -
0
0
REMARK. Since each of two T-processors may independently be strictly nonanticipatory, nonanticipatory, or almost nonanticipatory, there are actually nine cases of P ∘ Q in all. The four cases considered in Theorem 4.5.1 subsume the other five. For example, if both P and Q are strictly nonanticipatory, then by (i), P ∘ Q is strictly nonanticipatory (since Q is also then nonanticipatory).
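For finite, finite-horizon ω-processors the notions being compared here can be checked directly. The following sketch is not from the text: a processor is modelled as a set of (input, output) pairs of equal-length tuples, u[:t] stands for the truncation u^t used in the surrounding proofs, and the three predicates encode the implications that distinguish strictly nonanticipatory, nonanticipatory, and almost nonanticipatory behavior. All names and the example data are invented for illustration.

```python
# Checking the three causality notions for a finite-horizon discrete-time processor.

def _pairs(P):
    for (u, y) in P:
        for (v, z) in P:
            for t in range(len(u)):
                yield u, y, v, z, t

def is_almost_nonanticipatory(P):
    return all(y[:t] == z[:t]
               for u, y, v, z, t in _pairs(P) if u[:t] == v[:t])

def is_nonanticipatory(P):
    return all(y[:t + 1] == z[:t + 1]
               for u, y, v, z, t in _pairs(P) if u[:t + 1] == v[:t + 1])

def is_strictly_nonanticipatory(P):
    return all(y[:t + 1] == z[:t + 1]
               for u, y, v, z, t in _pairs(P) if u[:t] == v[:t])

# A unit delay (output repeats the previous input) is strictly nonanticipatory;
# a "predictor" that copies the next input value is not even almost so.
delay = {((0, 1, 0), (0, 0, 1)), ((1, 1, 0), (0, 1, 1))}
predict = {((0, 1, 0), (1, 0, 0)), ((0, 0, 0), (0, 0, 0))}
print(is_strictly_nonanticipatory(delay), is_almost_nonanticipatory(predict))  # True False
```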
4.5.2 Lemma. If u , y , z’, and z are T-time functions, then for all t E T , u”‘ El,!$ ut L vl & yl ;z 21 (1) (ii) (uy)l = ( E Z ) ~2 ut = vt & yt = 21 PROOF. (ii) follows from (i) by Lemma 4.4.5. For all t’ t T (t’ < t ) , ~
t‘(vtz‘)u (t’ut, t’yt)
t’(utyt)
3
Iience, (1) holds.
t’ut
=
=
(t’vt, t’zt)
t’vt & t’yt
t’zt
1
4.5.3 Theorem. Let P and Q be T-processors. If both P and Q are strictly nonanticipatory [nonanticipatory; almost nonanticipatory], then P // Q is strictly nonanticipatory [nonanticipatory; almost nonanticipatory].
2 3
ut = p t & vt == X t
(ty, t z )
= i
==
(q,tzu)
* ty 3
I=
t(yz)
tq & tz J Z
=
t(qw)
tw
4.5.
121
NONANTICIPATORY INTERCONNECTIONS
that is, the strictly nonanticipatory case. Next, (uv)t = (
p q & t(uv) = t(p.) * ut
& Zlt = Xt & tu = t p & tv = tx
tp) & (vt,tv) = (Xt, t x )
3
(ut, tu) =
3
(ty, tz) = (tq, tw)
(pi,
=pt
3
ty
= tg
& t z = tw
t ( y z ) = t(qw)
that is, the nonanticipatory situation. Finally, (uv)t = ( p x ) t
* ut
= pt
& vt = xt
* yt
= qt
& zt = Wt
3
( y z ) t = (gw)t
which proves the theorem for almost nonanticipation. REMARK.
I
Again the given conditions subsume all nine possible
cases. Both Theorems 4.5.1 and 4.5.3 admit interesting corollaries connecting the static and nonanticipatory situations of interconnections. We recall that every static 7'-processor is nonanticipatory. Then, 4.5.4
Corollary. Let P and Q be 7'-processors:
(i) If is (ii) If is (iii) If
P is strictly nonanticipatory and Q is static, then P 0 Q strictly nonanticipatory. P is static and (2 is strictly nonanticipatory, then P o Q strictly nonanticipatory. P is static and Q is nonanticipatory, then both P Q and P / / Q are nonanticipatory. (iv) If P is nonanticipatory and Q is static, then both P Q and P / / Q are nonanticipatory. (v) If P is static and Q is almost nonanticipatory, then both P Q and P / / Q are almost nonanticipatory. (vi) If P is almost nonanticipatory and Q is static, then both P Q and PiIQ are almost nonanticipatory. 0
0
0
0
4.5.5
Theorem. For any T-processor P, :P is nonanticipatory.
PROOF. By Theorem 4.3.1, :P is uniformly static, hence, static. Thus :P is nonanticipatory. ∎
4.
122
STRONG TYPES OF CAUSALITY
4.5.6 Theorem. I,et P be a T-processor. P is strictly nonanticipatory iff P is almost nonanticipatory, and :P is strictly nonanticipatory. PKOOF. Let P be strictly nonanticipatory. P is clearly also almost nonanticipatory. Choose (uy)y E :P, ( u z )z E :P, and t E T :
(Uyy
= (W)f
=- Ut = 29