Mathematical Methods in Robust Control of Linear Stochastic Systems
MATHEMATICAL CONCEPTS AND METHODS IN SCIENCE AND ENGINEERING Series Editor: Angelo Miele Mechanical Engineering and Mathematical Sciences Rice University Latest Volumes in this series: MATHEMATICAL METHODS IN ROBUST CONTROL OF LINEAR STOCHASTIC SYSTEMS • Vasile Dragan, loader Morozan, and Adrian-Mihail Stoica PRINCIPLES OF ENGINEERING MECHANICS: VOLUME 2. DYNAMICS—THE ANALYSIS OF MOTION • Millard F. Beatty, Jr. CONSTRAINED OPTIMIZATION AND IMAGE SPACE ANALYSIS: VOLUME 1. SEPARATION OF SETS AND OPTIMALITY CONDITIONS • Franco Giannessi ADVANCE DESIGN PROBLEMS IN AEROSPACE ENGINEERING: VOLUME 1. ADVANCED AEROSPACE SYSTEMS • Editors Angelo Miele and Aldo Frediani UNIFIED PLASTICITY FOR ENGINEERING APPLICATIONS • Sol R Bodner THEORY AND APPLICATIONS OF PARTIAL DIFFERENTIAL EQUATIONS • Peiro Bassanini and Alan R. Elcrat NONLINEAR EFFECTS IN FLUIDS AND SOLIDS • Editors Michael M. Carroll and Michael A. Hayes
Mathematical Methods in Robust Control of Linear Stochastic Systems Vasile Dragan Institute of Mathematics of the Romanian Academy Bucharest, Romania
Toader Morozan Institute of Mathematics of the Romanian Academy Bucharest, Romania
Adrian-Mihail Stoica University Politehnica of Bucharest Bucharest, Romania
^ S p r i nger
Vasile Dragan Institute of Mathematics of the Romanian Academy RO. Box 1-764, Ro 70700 Bucharest, Romania
[email protected] Bucharest, Romania ainstoica@rdslink. r o
Toader Morozan Institute of Mathematics of the Romanian Academy P.O. Box 1-764, Ro 70700 Bucharest, Romania t o a d e r .morozan@ imar. r o
Adrian-Mihail Stoica University Politehnica of Bucharest Str. Polizu, No. 1, Ro-77206 Bucharest, Romania amstoica^rdslink.ro
Mathematics Subject Classification (2000): 93EXX, 34F05 Library of Congress Control Number: 2006927804 ISBN-10: 0-387-30523-8
e-ISBN: 0-387-35924-9
ISBN-13: 978-0-387-30523-8 Printed on acid-free paper. ©2006 Springer Science+Business Media LLC All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed in the United States of America 9 876 54 321 springer.com
To our wives, Vioricay Elena and Dana for their love, patience and support.
Contents
Preface 1
2
3
Preliminaries to Probability Theory and Stochastic Differential Equations 1.1 Elements of measure theory 1.2 Convergence theorems for integrals 1.3 Elements of probability theory 1.4 Independence 1.5 Conditional expectation 1.6 Stochastic processes 1.7 Stochastic processes with independent increments 1.8 Wiener process and Markov chain processes 1.9 Stochastic integral 1.10 An Ito-type formula 1.11 Stochastic differential equations 1.12 Stochastic linear differential equations
ix
1 1 5 7 7 8 9 11 12 14 19 26 29
Exponential Stability and Lyapunov-Type Linear Equations 2.1 Linear positive operators on the Hilbert space of symmetric matrices 2.2 Lyapunov-type differential equations on the space S^ 2.3 A class of linear differential equations on the space (R")^ 2.4 Exponential stability for Lyapunov-type equations on vS^ 2.5 Mean square exponential stability 2.6 Numerical examples 2.7 Affine systems Notes and references
33 34 37 45 47 62 75 79 82
Structural Properties of Linear Stochastic Systems 3.1 Stabilizability and detectability of stochastic linear systems 3.2 Stochastic observability
85 85 93
viii
Contents
3.3
Stochastic controllability Notes and references
102 108
4
The Riccati Equations of Stochastic Control 4.1 Preliminaries 4.2 The maximal solution of SGRDE 4.3 Stabilizing solution of the SGRDE 4.4 The case 0 e T^ 4.5 The filtering Riccati equation 4.6 Iterative procedures Notes and references
109 110 114 124 132 140 142 157
5
Linear Quadratic Control Problem for Linear Stochastic Systems 5.1 Formulation of the linear quadratic problem 5.2 Solution of the linear quadratic problems 5.3 The tracking problem 5.4 Stochastic H^ controllers Notes and references
159 159 161 173 178 207
6
Stochastic Version of the Bounded Real Lemma and Applications . . . . 6.1 Input-output operators 6.2 Stochastic version of the Bounded Real Lemma 6.3 Robust stability with respect to linear structured uncertainty Notes and references
209 209 218 240 256
7
Robust Stabilization of Linear Stochastic Systems 7.1 Formulation of the disturbance attenuation problem 7.2 Robust stabilization of linear stochastic systems. The case of full state access 7.3 Solution of the DAP in the case of output measurement 7.4 DAP for linear stochastic systems with Markovian jumping 7.5 An //^-type filtering problem for signals corrupted with multiplicative white noise Notes and references
257 257 259 273 292 298 303
Bibliography
305
Index
311
Preface
This monograph presents a thorough description of the mathematical theory of robust Unear stochastic control systems. The interest in this topic is motivated by the variety of random phenomena arising in physical, engineering, biological, and social processes. The study of stochastic systems has a long history, but two distinct classes of such systems drew much attention in the control literature, namely stochastic systems subjected to white noise perturbations and systems with Markovian jumping. At the same time, the remarkable progress in recent decades in the control theory of deterministic dynamic systems strongly influenced the research effort in the stochastic area. Thus, the modem treatments of stochastic systems include optimal control, robust stabilization, and H^- and //^-type results for both stochastic systems corrupted with white noise and systems with jump Markov perturbations. In this context, there are two main objectives of the present book. The first one is to develop a mathematical theory of linear time-varying stochastic systems including both white noise and jump Markov perturbations. From the perspective of this generalized theory the stochastic systems subjected only to white noise perturbations or to jump Markov perturbations can be regarded as particular cases. The second objective is to develop analysis and design methods for advanced control problems of linear stochastic systems with white noise and Markovian jumping as linear-quadratic control, robust stabilization, and disturbance attenuation problems. Taking into account the major role played by the Riccati equations in these problems, the book presents this type of equation in a general framework. Particular attention is paid to the numerical aspects arising in the control problems of stochastic systems; new numerical algorithms to solve coupled matrix algebraic Riccati equations are also proposed and illustrated by numerical examples. The book contains seven chapters. Chapter 1 includes some prerequisites concerning measure and probability theory that will be used in subsequent developments in the book. In the second part of this chapter, detailed proofs of some new results, such as the Ito-type formula in a general case covering the classes of stochastic systems with white noise perturbations and Markovian jumping, are given. The Ito-type formula plays a crucial role in the proofs of the main results of the book.
X
Preface
Chapter 2 is mainly devoted to the exponential stability of linear stochastic systems. It is proved that the exponential stability in the mean square of the considered class of stochastic systems is equivalent with the exponential stability of an appropriate class of deterministic systems over a finite-dimensional Hilbert space. Necessary and sufficient conditions for exponential stability for such deterministic systems are derived in terms of some Lyapunov-type equations. Then necessary and sufficient conditions in terms of Lyapunov functions for mean square exponential stability are obtained. These results represent a generalization of the known conditions concerning the exponential stability of stochastic systems subjected to white noise and Markovian jumping, respectively. Some structural properties such as controllability, stabilizability, observability, and detectability of linear stochastic systems subjected to both white noise and jump Markov perturbations are considered in Chapter 3. These properties play a key role in the following chapters of the book. In Chapter 4 differential and algebraic generalized Riccati-type equations arising in the control problems of stochastic systems are introduced. Our attention turns to the maximal, minimal, and stabilizing solutions of these equations for which necessary and sufficient existence conditions are derived. The final part of this chapter provides an iterative procedure for computing the maximal solution of such equations. In the fifth chapter of the book, the linear-quadratic problem on the infinite horizon for stochastic systems with both white noise and jump Markov perturbations is considered. The problem refers to a general situation: The considered systems are subjected to both state and control multiplicative white noise and the optimization is performed under the class of nonanticipative stochastic controls. The optimal control is expressed in terms of the stabilizing solution of coupled generalized Riccati equations. As an application of the results deduced in this chapter, we consider the optimal tracking problem. Chapter 6 contains corresponding versions of some known results from the deterministic case, such as the Bounded Real Lemma, the Small Gain Theorem, and the stability radius, for the considered class of stochastic systems. Such results have been obtained separately in the stochastic framework for systems subjected to white noise and Markov perturbations, respectively. In our book, these results appear as particular situations of a more general class of stochastic systems including both types of perturbations. In Chapter 7 the y-attenuation problem of stochastic systems with both white noise and Markovian jumping is considered. Necessary and sufficient conditions for the existence of a stabilizing /-attenuating controller are obtained in terms of a system of coupled game-theoretic Riccati equations and inequalities. These results allow one to solve various robust stabilization problems of stochastic systems subjected to white noise and Markov perturbations, as illustrated by numerical examples. The monograph is based entirely on original recent results of the authors; some of these results have been recently published in control journals and conferences proceedings. There are also some other results that appear for the first time in this book.
Preface
xi
This book is not intended to be a textbook or a guide for control designers. We had in mind a rather larger audience, including theoretical and applied mathematicians and research engineers, as well as graduate students in all these fields, and, for some parts of the book, even undergraduate students. Since our intention was to provide a self-contained text, only the first chapter reviews known results and prerequisites used in the rest of the book. The authors are indebted to Professors Gerhard Freiling and Isaac Yaesh for fruitful discussions on some of the numerical methods and applications presented in the book. Finally, the authors wish to thank the Springer publishing staff and the reviewer for carefully checking the manuscript and for valuable suggestions.
October 2005
Preliminaries to Probability Theory and Stochastic Differential Equations
This first chapter collects for the readers' convenience some definitions and fundamental results concerning the measure theory and the theory of stochastic processes which are needed in the following developments of the book. Classical results concerning measure theory, integration, stochastic processes, and stochastic integrals are presented without proofs. Appropriate references are given; thus for the measure theory, we mention [27], [43], [55], [59], [95], [110]; for the probability theory we refer to [26], [55], [96], [104], [110] and for the theory of stochastic processes and stochastic differential equations we cite [5], [26], [55], [56], [69], [81], [97], [98]. However several results that can be found only in less accessible references are proved. In Section 1.10 we prove a general version of the Ito-type formula which plays a key role in the developments of Chapters 3-5. The results concerning mean square exponential stability in Chapter 2 may be derived using an Ito-type formula which refers to stochastic processes that are solutions to a class of stochastic differential equations. This version of the Ito-type formula can be found in Theorem 39 of this chapter. Theorem 34, used in the proof of the Ito-type formula and also in Lemma 22 in Chapter 6 to estimate the stability radius, appears for the first time in this book.
1.1 Elements of measure theory 1.1.1 Measurable spaces Definition 1. A measurable space is a pair (Q, T), where Q is a set and T is a G-algebra of subsets ofQ; that is, T is a family of subsets A C ^ with the properties (i) ^ e T; (ii) if A G J^, then QAeT; (iii) if An eJ=',n>l, then U^^ A„ e T. \iT\ and T2 are two a-algebras of subsets of ^ , by T\ v^2 we denote the smallest cr-algebra of subsets of ^ which contains the a-algebras T\ and T2By i3(R") we denote the a-algebra of Borel subsets of R'^, that is, the smallest (J-algebra containing all open subsets of R'^.
2
1 Preliminaries to Probability Theory and Stochastic Differential Equations
For a family C of subsets of ^ , a (C) will denote the smallest a-algebra of subsets of Q containing C; a(C) will be termed the a-algebra generated by C. If (Q\,G\) and (^2, G2) are two measurable spaces, by Qi (g) Q2 we denote the smallest a-algebra of subsets of ^1 x ^2 which contains all sets A x B, A e Qi, Be 02. Definition 2. A collection C of subsets ofQ is called a TT-system if (i) 0 e C, and (ii) if A, B eC, then AH B e C. The next result proved in [118] is frequently used in probability theory. Theorem 1. IfC is a n-system and Q is the smallest family of subsets ofQ such that (i) C c G; (ii) if A e G, then QAeG; (iii) An e G,n > I, andAiHAj = (t)fori 7^ j implies iJ"^^^ e G, then a (C) = GProof Since a{C) verifies (i), (ii), and (iii) in the statement, it follows that
Gca(C). To prove the opposite inclusion, we show first that ^ is a TT-system. Let A G e and define G(A) = {B; B e G stnd AH B e 0}. Since A - B = Q - [(A n B)U (Q - A)], it is easy to check that G(A) verifies the conditions (ii) and (iii), and if A € C, then (i) is also satisfied. Hence for A G C we have G(A) = G\ consequently, if A G C and B e G, then AHBeGBut this implies G{B) D C and therefore G{B) = G for any B e G- Hence 5 is a jr-system and now, since G verifies (ii) and (iii), it is easy to verify that ^ is a a-algebra and the proof is complete. D
1.1.2 Measures and measurable functions Definition 3. (a) Given a measurable space {Q, T), a function ii'.T^^ called a measure if:
[0, 00] is
(i) n (0) = 0; (ii) if An e T,n >\, and A, H Aj = (p for i ^ j , then
l^{^T=X^n) = Y.^^^^-^(b) A triplet (^, ^ , /x) is said to be a space with measure. (c) If/ji(Q) — I, we say that /JL is a probability on T, and in this case the triplet (^, T, /JL) is termed a probability space. A measure p. is said to be a-finite if there exists a sequence An,n > I, An e. T with A/ n Aj = (j)for i 7^ j and Q. = U ^ j A,, and /x(A„) < oofor every n.
1.1 Elements of measure theory
3
Definition 4. Given a measurable space (Q,T), a function f: Q i—> R is said to be a measurable function if for every A e B{R) we have f~^(A) e T, where /-i(A) = { a ; G ^ ; / M G A } . It is easy to prove that/: ^ I—> R is measurable if and only if/~^((—oo, a)) G T for every of G R. Remark 1. It is not difficult to verify that \i{Q.\,T\) and (^2, ^1) are two measurable spaces and if / : ^1 x ^2 -^ R is J^i ( ^ ^ 2 measurable, then for each 002 e ^2 the function (0\ 1—> f{(jo\, 0)2) is T\ measurable and for each oox e Q\ the function CL>2 I—> f{cjO\, C02) is J^2 measurable. Definition 5. A measurable function f : Q 1—> R is said to be a simple measurable function if it takes only a finite number of values. We shall write a.a. and a.e. for almost all and almost everywhere, respectively; f = g a.e. means /x(/ ^ g) = 0. Definition 6. Let (Q, T, \i) be a space with measure /„ : ^ -> R, /i > 1, and f : Q ^^ R be measurable functions. (i) We say that fn converges to / for a.a. co eQor equivalently lim„_^oo fn = f a.e. (fn ^ ' / ) if fi Ico; lim f„(co) 7^ f(co) = 0. (ii) We say that the sequence /„ converges in measure io f (fn -^ f) if for every 6 > 0,we have lim^^oo M{ / , then there exists a subsequence fn,^ of the sequence fn such that lim^_>oo fik — f ^•^^ Corollary 4. Let (^, JF, /x) be a space with measure such that ii(Q) < 00. Then the following assertions are equivalent: (i) fn ^ / ; (ii) any subsequence of fn contains a subsequence converging a.e. to f.
D
As usual, in the measure theory two measurable functions / and g are identified if f = g a.e. Moreover, if f : Q -^ R = [—00, 00] is measurable, that is, f~\[-oo,a))eT for every a G R and if MCI/I = 00) = 0, then / will be identified with a function ^ : ^ -> R defined as follows: I f(co) if \f(a))\ < 00, and
^("^=1 0 if|/M|=oc. Theorem 5. If (Qi, J^\, /xi) and (^2, ^2^ M2) 0 be a measurable function. Let us define I —1
fn{(0)= Y^ - — - X A , „ M , where A/,„ = \o)'/——
< f(co) < ;;^ ,/ = 1,2, . . . , 2 " « ,
and XA(^) is the indicator function of the set A; that is, XA{^) = lif(i>eA and XA((^) = 0 if CO e Q — A. Then we have: (i) 0< f„ < fn+i andlim„^oo fn((^)^ = f{co), co e Q\ (ii) 0 < fl„ < fl«-fi, where a^ = X]?J!^^ ^TTMC^/.^) (^ith the convention that D 0 . oo = Oj. Definition 7. (i) Let f > 0 be a measurable function on a space with measure (Q, T, IJL) and fn, a^, n > I be the sequences defined in the statement of Theorem 6. By definition an = f^ fnd/x and f^ fd/x = lim„^oo (^n(ii) A measurable function f :Q -^ K is called an integrable function if f^\f\dfi 1, and g e L^(^) with
ll/^lll oo
\fn-f\Pdn f \fn-
in L^ or fn -> f if = 0,
Theorem 9. If fn ^ f then fn^f
D
1.2 Convergence theorems for integrals Let (Q, T, /x) be a space with measure. The following results are well known in measure theory. Theorem 10. (Fatou's Lemma) Let fn ^ 0,n > I, be a sequence of measurable functions. Then / (lim/«)^M < lim / fndii.
D
Theorem 11. (Lebesgue 's Theorem) Let fn, f be measurable functions and | /„ | < g, n > I, a.e. where g is an integrable function. If lim„^oo fn = f ci.e., then fn -^ / , and therefore lim„^oo JQ fndli = / ^ fdji. • Theorem 12. Let fn, f be measurable functions. //" |/„| < g, « > 1, for some IJL
O
integrable function g and fn -^ / , then fn ^^ f-
^
6
1 Preliminaries to Probability Theory and Stochastic Differential Equations
Theorem 13. [26], [55], [106] Let fn, f be integrable functions. Suppose that li{Q) < oo and there exists a > 1 such that sup / |/„r^/x < oo. n JQ V fn -^ / , then fn^
f and therefore lim^^oo JQ fndfi = J^ fd/x.
Theorem 14. [43], [95] Iff
O
: [a, b] ^^ R is an integrable function, then
1 f
lim - /
f{s)ds = fit)
a.e., t e [a, b].
O
h^0+ h Jmax{t-h,a}
Definition 9. Let ii\ and /X2 be two measures on the measurable space {Q,T)\ we say that /xi is absolutely continuous with respect to /X2 (and we write /xi 2)dp\ is T2 measurable, the function co\ I—> f^ f{oL>\, co2)dp2 is T\ measurable, and /
fdipx
X M2) = /
( /
= /
( /
fi(Ouco2)dp2 ) dpi f((i)ua)2)dpi]dp2-
(b) A measurable function f : ^1 x ^2 -^ ^ Is integrable (on the space (Qi X ^2,^1 (8)^2, Ml X P2)) if and only if /
( /
\f(couO)2)\dp2 ] dpi < 00.
/Q2
(c) If f : Qi X Q2 -^ R is an integrable function, then: (i) For a.a. coi e Qi the function (pi((i)i) = f^ f{co\, 2)dpi is well defined, finite, and measurable and integrable on the space {^2. •?^2, l^i}(iii) /fiix^2 ^^^^1 ^ ^2) = /^, (pxdpi = f^^ ndl^i^ •
1.4 Independence
7
1.3 Elements of probability theory Throughout this section and throughout this monograph, {Q,J^,P} is a given probabiHty space (see Definition 3(c)). In probabiHty theory a measurable function is called a random variable and the integral of a random variable / is called the expectation of f and is denoted by Ef
orE{f)Ah^iis,Ef
=
f^fdP.
A random vector is a vector whose components are random variables. All random vectors are considered column vectors. In probability theory the words almost surely (a.s.) and with probability 1 are often used instead of almost everywhere. As usual, two random variables (random vectors) x, j are identified if x = y a.s. With this convention the space L^(Q, P) of all random variables x with £"1x^1 < oc is a real Hilbert space with the inner product (x, y) = E(xy). If Xa.aeA is a family of random variables, by a(jc«,QfGA) we denote the smallest a-algebra Q C T with respect to which all functions jCa,Qf G A are measurable. 1.3.1 Gaussian random vectors Definition 10. An n-dimensional random vector x is said to be Gaussian if there exist m G R" and K an n x n symmetric positive semidefinite matrix such that
for all w G R", where u* denotes the transpose ofu and i := >/—T. Remark 3. The above equality implies m = Ex and K = E(x — m)(x — m)*.
(1.1)
Definition 11. A Gaussian random vector x is said to be nondegenerate if K is a positive definite matrix. Ifx is a nondegenerate Gaussian random vector, then P(x eA) =
1 [ ^-iO--')*^"^()^--^)jj ((27r)^detA:)2 J A
for every A e 6(R").
1.4 Independence Definition 12. (i) The cr-algebras T\,T2,...
,Tn,Ti
0.
1.6 Stochastic processes
9
Theorem 18. Let x, y be integrable random variables and Q,H C T, a-algebras. Then the following assertions hold: (i)E(E[x\G])=Ex; (ii) E[E[x\g]\n] = E[x\n] a.s. ifQ D H; (iii) E[(ax + Py)\g] =aE[x\g] + PE[x\g] a.s. if a, ^^ e R; (iv) £'[x>'|^] = ^^[xl^] a.s. if y is measurable with respect to g and xy is integrable; (v) ifx is independent of g, then E[x\g] = Ex\ D (vi) X > 0 implies E[x\g] > 0 a.s. Remark 4. It is easy to verify that: (i) If X is an integrable random variable and y is 3. simple random variable with values cu ... ,Cn, then Elx\y] = J2 Xy=CjE[x\y = Cj], jeM
where M = {j e {1,2,..., n}; Piy = cj) > 0}. (ii) If A e T,gA = [ ^o ^Ith p
tn,to e J implies x(tn) -> x{to). (iv) X is called to be a measurable process if it is measurable on the product space with respect to the a-algebra B(J) 0 ^ , I3{J) being the a-algebra ofBorel sets in J.
10
1 Preliminaries to Probability Theory and Stochastic Differential Equations
Remark 5. (i) Every right continuous stochastic process is a measurable process. (ii) From the Fubini theorem it follows that if jc : 7 x ^ -> R is a measurable process and E fj \x(t)\dt < oo, then for a.a. co, fj x{t)dt is a random variable. Definition 15. Two stochastic processes x\ = [x\{t)}tej, -^2 = {-^2(0}rey /jci. Now let us consider a family M = {Mt}tej of a-algebras MtC T with the property that t\ < ti implies Mt^ C Mt^Definition 16. We say that the process x = [x{t)]t^j is nonanticipative with respect to the family M, if (i) X is a measurable process; (\i)for each t e J,x{t, •) is measurable with respect to the a-algebra Mf. When (ii) holds we say that x{t) is A i r adapted. As usual by LP{J X Q, R ^ ) , p > 1, we denote the space of all m-dimensional measurable stochastic processes jc : 7 x ^ -^ R'". By L^(J) we denote the space of all jc G LP (J X ^ , R"") which are nonanticipative with respect to the family M = (Mr), t e 7. Theorem 19. If for every t e 7, the a-algebra Mt contains all sets M e T with P(M) = 0, then L%^(J) is a closed subspace ofLP{J x ^ , R'"). Proof Let Xn e L^(J), n > l , b e a sequence which converges tox € L^(7 x ^ , R^). We have to prove that there exists x e Lj^{J) such that x„ converges to x in the space L^(7 x ^ , R^). Indeed, since lim / E\x„(t)-x{t)\Pdt
= 0,
»oo
by Theorem 9 the sequence of functions E\Xn{t) - x(t)\P converges in measure to zero. Hence by virtue of Riesz's Theorem there exists a subsequence x„^ and a set N c J with fi(N) = 0 (/x being the Lebesgue measure) such that lim E\xn(t) - x(t)\P =0 n-^oo
for all ^ G J — N. Let t e J — N be fixed. Again applying Theorem 9 and Riesz's Theorem, one concludes that the sequence jc„^(r), A: > 1, has a subsequence which converges a.e. to x(t). But x„^.(r) are A^^adapted and Mt contains all sets M e T with P(M) = 0. Therefore jc(0 is measurable with respect to Mt for each t e J — N. Now, define x : J x Q -^ R^ as follows: _ |-^(^0, is an r-dimensional stochastic process with independent increments, then a(x(t) — x{a), t e [a, b]) is independent ofa(x(b -{- h) — x(b), h > 0) for alio < a < b. Proof Let M be the family of all sets of the form nf^j(jc(r/) — x(a))~\Ai) where a < tt < b and A/ e B(W), 1 < / < p, and let J\f be the family of all sets of the form H'H^ixib + hi) - x(b))-\Bi), where 0 < /z,, 5/ e B(W), I 0). First, we prove that P{M r\N) = P{M) - P{N)\i M e M and N eN. Indeed, let M = n^^^(xiti)-x(a))-\Ai), N = nll^(x(b -\- hi) - x(b))-\Bi) with a < ti < "' < tp < b, 0 < hi < ' •' < hn, Ai e B(R'), Bi e
B(R').
Since cr(x(ti) - x(a), I 0). Then, applying Theorem 1 again, we prove that P(A n B) = P(A) • P{B) if A € G{M) and B 6 G{M). The proof is complete. D
12
1 Preliminaries to Probability Theory and Stochastic Differential Equations
Theorem 22. [106] Ifx{t), t > 0, is a continuous r-dimensional stochastic process with independent increments, then all increments x{t2) — ^(^i) cire Gaussian random vectors. •
1.8 Wiener process and Markov chain processes In the following definitions, / is the interval [0, oo). Definition 18. A continuous stochastic process fi = {P(t)]tei is called a standard Brownian motion or a standard Wiener process if: (i)y^(0) = 0; (ii) P(t) is a stochastic process with independent increments; (iii) E^it) =0j e I, E\P(t) - p{s)\^ = \t-s\ with t,s e I. Definition 19. An r-dimensional stochastic process w{t) = (w\(t),..., Wr(t))*, t e /, is called an r-dimensional standard Wiener process if each process Wt (t) is a standard Brownian motion and the a-algebras cf{Wi(t), t e I),l < i < r, are independent. For each r > 0, by Tj we denote the smallest a-algebra which contains all sets M e T with P{M) = 0 and with respect to which all random vectors {w{s)}sOMt =cx(w(t-^h)-w(t),h > 0). From Theorem 21 it follows that for each t e I,J^t ^^ independent ofUf. Remark 6. (i) Since w(t) — w(s) is independent of ^^ if ^ > 5 (see Theorem 21), from Theorem 18(v) it follows that E[(w{t)-w(s))\Ts]=0, (1.2) E[iw(t) - w(s)){w{t) - w(s)y I Ts] = Irit -s), t > s, a.e. (ii) The increments w(t) — w(s), t ^ s art nondegenerate Gaussian random vectors (see Theorem 22 and (1.1)). The converse assertion in (i) is also valid. Theorem 23. [52], [81] Let w(t), t > 0, be a continuous r-dimensional stochastic process with w(0) =0 and adapted to an increasing family of a-algebras J^t^t > 0, such that (1.2) hold. Then w(t),t > 0, is a standard r-dimensional Wiener process and all increments w(t2) — w^(^i), ^2 # ti, are nondegenerate Gaussian random vectors. D Theorems 22 and 23 will not be used in this book, but they are given because they are interesting by themselves and they give a more detailed image of the properties of these stochastic processes.
1.8 Wiener process and Markov chain processes
13
Definition 20. A family P{t) = [pij{t)], t e (0, oo), ofd x d matrices is said to be a transition semigroup if the following two conditions are satisfied: (i) For each t>0, P(t) is a stochastic matrix, that is, 0< pij{t) 0. The equality (ii) is termed the homogeneous Chapman-Kolmogorov relation. Definition 21. A stochastic process r]{t),t e [0, oo), is called a standard homogeneous Markov chain with state-space the set D = {1, 2 , . . . , J} and the transition semigroup P(t) = [pij{t)], t > 0, if: (i) r]{t, 0)) e Vfor all t > 0 and co e Q; (ii) P[r](t + h = j)\r](s),s < t] = Pr^(t)j(h) a.s.forallt >0,h>OJe V; (iii) lim/^^o+ ^ ( 0 = h^h is the identity matrix of dimension (d x d); (iv) r]{t),t > 0 is a right continuous stochastic process. In fact, the above definition says that a standard homogeneous Markov chain is a triplet {r](t), P(t), V) satisfying (i)-(iv), P{t), t > 0, being a transition semigroup. The next result is proved in [26]. Theorem 24. The standard homogeneous Markov chain has the following properties: (i) P{r](t + h) = j\rj(t) = i] = pijQi) for all ij e V,h > 0,t > 0 with P{ri(t) = i] > 0. (ii) P{ri(t + h) = j\r]is),s < t} = P[r](t + h) = j\r]{t)lt >0,h>0, j e V, a.s. (iii) Ifx is a bounded random variable measurable with respect to the a-algebra crirjis), s > t), then E[x\r]{u), u 0. (iv) r](t) is continuous in probability. (v) Pii(t) > Ofor all i eV,t >0. (vi) limt^ooP{t) exists. (vii) There exists a constant matrix Q such that P{t) = e^^, t > 0, Q = [qij] is a matrix with qtj >Oifi ^ j and Xlj=i ^ij = 0 ^ In fact (ii) follows from (iii) since Xr){t+h)=j is measurable with respect to the (J-algebra a(^(w), u > t). The assertion (iii) in Theorem 24 is termed the Markov property ofthe process r]{t). The fact that a transition semigroup P{t)t > 0, with the property that lim^_^o+ P(t) = I^ admits an infinitesimal generator Q {P(t) = e^\t > 0) follows from the general theory of semigroups in Banach algebras [63], but in the theory of Markov processes a probabilistic proof is given in [16], [26], [55]. We assume in the following that TTI := P[r]{0) = /} > 0 for all / e V. Remark 7. From the above assumption and from the equality d
P{r]it) = i] = ^ 7 r , P { ^ ( r ) = i\r]iO) = j}, 7=1
we deduce that P{r]{t) = i} > TCipait) > Oj >0J
eV.
14
1 Preliminaries to Probability Theory and Stochastic Differential Equations
In the following developments ^r, ^ > 0, denotes the family of a-algebras Qt = (j(r](s)', 0 < s 0 is the family of a-algebras Vt = (j{r]{s), s > t).
1.9 Stochastic integral Throughout this section and throughout the monograph we consider the pair (w(t),r]{t)),t > 0, where w(t) is an r-dimensional standard Wiener process and ri(t) is a standard homogeneous Markov chain (see Definitions 19, 21). Assume that the (J-algebra J^t is independent of Qt for every r > 0, where Tt and Qt have been defined in the preceding section. Denote by Ht :=Tt'^GtJ > 0. Letg = a(r]{t),t > 0). Theorem 25. For every t > 0, J^t is independent of Q and Ut is independent of Tt V Q. Therefore, Ut and Ht are independent a-algebras for every t > 0. Proof First one proves that J^t is independent of Qs for alU > 0, s > 0. Indeed, ift < ^ we have Tt C Ts and since JF^ is independent of Qs it follows that Tt and Ts are independent a-algebras. Similarly one proves that t > s. Now let Mo be the family of all sets of the form Pij^^j {^(f^) = ik), with tk > 0, tk y^te,ifky/^i and ik ^^A 0, Bi e B(W), I < i < p. Obviously M, Aft, and St are jr-systems and cr(M) = g, a{Nt) = TtVg, and a(St) = Ut. Define g(F) = [G e §; P(G n F) = P(G)P(F)} for each F e Tt. Since Tt is independent of ^^ for all ^ > 0, it follows that M C g(F). By using the equality F — G = F — (FOG) one verifies easily that g(F) satisfies conditions (ii) and (iii) in Theorem 1. Thus, by virtue of Theorem 1, 5(F) = ^ for all F e Tt and thus the first assertion in the theorem is proved. further, if 5" eSt,H eAft, H = Gr\F, G eg, F e J^„ since j ; is independent of g for every u >0 and Ut is independent of Tt (see Theorem 21), we have p(S n //) = P(5 n G n F) = P(G)P(S n F)
== P(G)P(S)P(F) = P{S)P(H). Therefore, by using Theorem 1, one gets P{U (1 H) = P(U)P(H) for all U eUt, H e Aft and applying Theorem 1 again, one concludes that P{UnV) = PiU)P{V) if U eUt, V e TtV g. The proof is complete. D If [a,b] C [0, oo) we denote by L^f^j^,[fl, Z?] the space of all nonanticipative processes f(t), t e [a, b], with respect to the family H = (Ht), t e [a, b], with
Ef^'f'P(t)dt 0. Since the family of a -algebras Ht,t e [a,b], has the properties used in the theory of the ltd stochastic integral, namely: (ei)Hty C nt2 ifti < t2\ (b) G{p{t + h)- ^ ( 0 , /z > 0) is independent of ?^, (see Theorem 25); (c) ^{t) is measurable with respect to Ht\ (d) H t contains all sets M e T with P(M) = 0 for every t > 0, we can define the Ito stochastic integral f^ f(t)dp(t) (see [52], [55], [81], [97], [98]) with/ eLlJa,b]. Definition 22. A stochastic process f(t), t e [a, b], is called a step function if there exists a partition a = to < t\ - • - < t^ = b of [a,b] such that f{t) = f(ti) if t e [tiJi^i), I 0, where the a-algebras Ut and H are as defined in Section 1.8.
20
1 Preliminaries to Probability Theory and Stochastic Differential Equations
Theorem 34. If^ is an integrable random variable measurable with respect to IZt, that is, ^ e L^Q.Ur, P), then Ei^lHt] = E[^\ii{t)]a.s. Proof. The proof is made in two steps. In the first step we show that the equality in the statement holds for ^ = XB for all B elZt, and in the second step we consider the general situation when ^ is integrable. Step 1 Define z = E[^\r](t)]. We must prove that EizxA) = E(^XA)^Aenr.
(1.5)
First we shall prove that (1.5) holds in the particular case when ^ = XMXN, M eUt,N eVt. Let M be the family of all sets A e J^ verifying (1.5). It is obvious that M verifies (ii) and (iii) in Theorem 1. Let C = {F n G; F G J?>, G e Qt); it is easy to check that C is a TT system. We show now that C C Al.Indeed, let F e Tt,G eQt\ we must prove that £'(ZXFXG) = E(^XFXG)' But since XM is independent of {XN,r)(t)} (see Theorem 25) we can write [
E(xM)E[xN\ri(t)]dP
= EixM) [
J{r]it)=i}
E[xN\ri(t)]dP
J{T]U)=i}
= E(XM)E(XNXrj{t)=i) = E(XMXNXm=i) = /
XMXNdP.
Ar](t)=i}
Hence z = F(XM)F[XA^ 1^(01 (in our case z = E[xMXN\r]{t)]). From Theorem 24(iii) we have E[xN\r](t)] = EIXNIGI]' Further, since XM is independent of {XF.XCXN} and XF is independent of (XG. E[xN\^(t)]} (see Theorem 25), we can write, applying Theorems 17 and 18, that: E(^XFXG)
= E(XMXNXFXG) =
E(ZXFXG)
=
E(XM)E(XNXFXG)
E(XM)E(XF)E{XNXG).
= E(xM)E(xFXGE[XN\r]{t)]) = E(xM)E(xF)E(xGE[xN\ri(t)]) = = =
EixM)E{xF)E(xGE[xN\Gt]) (ExM)iExF)E(E[xGXNm) E{XM)E{XF)E{XNXG)'
Thus we proved that C 0, and thus the equality in the statement takes place for ^"^ and ^" and therefore, according to Theorem 18, the proof is complete. D Theorem 35. (Ito-type formula) Let us consider a = ( ^ i , . . . , (2„)* with au e L^^ x (ltoT]),l R ^ F : R+ x R" x P -^R"^^ and for each / G V, fi', •, /)andF(-, •, /) are measurable with respect to S(R+xR"), where ;B(R+xR") denotes the a-algebra of Borel sets in R+ x R"; (C3) For each 7 > 0 there exists yiT) > 0 such that \fit,xx,i)-
fit,X2,i)\^\\Fit,x,,i)-Fit,X2^i)\\
to, where 'HtQj = cr(w(s) - w(to), r](s); s e [to, t]). Based on the inequality (1.19) one can obtain an Ito-type formula for the solution of the system (1.16) in case a = 0,a = 0 and in more general assumptions for the functions v{t,x,i) than the ones used in Theorem 35. The result giving such a formula has been proved in [80]. Theorem 39. Assume that the hypotheses of Theorem 37 are fulfilled and additionally / ( • , •, /), F(', -, i) are continuous on R+ x R", for all i e V. Let v :R+ xW xV he a function which for each i € V is continuous together with its derivatives Vt^v^ and Vxx. Assume also that there exists y > 0 such that \v(t,x,i)\
+
0 depends on T, Then we have: E [v {s, x{s), r](s)) \r](to) = i] - v(to, xo, i) dv
= E JtQ
dt
(t, x(t), r](t)) +
/dv_ — (^ x(t), r](t))
f(t, x(t), r](t))
+ ]-TrF' (t, xit), rjit)) ^ (t, x(t), r]{t)) 2 dxdx X F(t, x(t), r](t)) -^-Y^vit,
x(t), r](t)) qr^uy dt\r](to) = i
x(t) = X (t, to, Xo), Xo eR"", t >to> 0, for all s > to, i G V.
(1.20)
1.12 Stochastic linear differential equations
29
Proof. From Theorem 37 it follows that for all positive integers p we have sup E[\x{t)\'P\r^{t^) = i]y+2. (ii) From Theorems 36 and 37, for the system (1.16), Theorem 39 is not applicable, while in the case when a(t) = 0 and a{t) = 0 we can use Theorem 39 due to the estimate (1.19). (iii) In many cases, in the following developments we shall consider the system (1.16) with a(t) / 0 and a ( 0 7^ 0, being thus obliged to use Theorem 35.
1.12 Stochastic linear differential equations Since the problems investigated in this book refer to stochastic linear controlled systems we recall here some facts concerning the solutions of stochastic linear differential equations.
30
1 Preliminaries to Probability Theory and Stochastic Differential Equations Let us consider the system of linear differential equations r
dxit) = Ao(r, r]{t))x{t)dt + ^
Ak{t, r](t))x(t)dwk{t),
(1.22)
where / -^ Ak(t, /) : R+ -^ R"^^\ i e V, are bounded and continuous matrixvalued functions. The system (1.22) has two important particular forms: (i) Akit, /) = 0, ^ = 1 , . . . , r, / > 0. In this case (1.22) becomes x{t) = A(r, r^{t))x{t), t > 0,
(1.23)
where A (^ r]{t)) stands for Ao(^ ^^(0) and it corresponds to the case when the system is subjected only to Markovian jumping. (ii) D = {1}; in this situation the system (1.22) becomes r
dx(t) = Ao(t)x{t)dt + J2 Ak{t)x(t)dwk{t),
t > 0,
(1.24)
k=i
where A;t(0 '•= ^A:(^ 1), ^ = 0, . . . , r, r > 0, representing the case when the system is subjected only to white noise-type perturbations. Definition 24. We say that the system (1.22) is time invariant (or it is in the stationary case) if Ak(t, i) = A^(/) for all k = 0,..., r, t e R+ and i e V. In this case the system (1.22) becomes r
dxit) = Ao{ri(t))x(t)dt + J2 Mri(t))x(t)dwk(t).
(1.25)
k=\
Applying Theorem 37, it follows that for each ^o > 0 and each random vector §, Ti^Q-measurable and E\^\^ < +oo, the system (1.22) has a unique solution x(t; to, ^) which verifies x{to; to, ?) = §. Moreover, if £"1^^^ < +oo, p > I, then sup E[\xit, to, ^)\^P I r](to) = i] < cE[\^\^P \ r](to) = H te[to,T]
i eV,c > 0 depending upon T,T — to, and p. For each /: e {1, 2 , . . . , «} we denote ^j^(t, to) = x{t, to, ek) where ^^ = (0, 0 , . . . , 1, 0, . . . , 0)* and set O ( r , r o ) = {^\{t,to)
^2{t,to)'"^n{t,to)).
^{t,to) is the matrix-valued solution of the system (1.22), which verifies O(ro, to) = In- If § is a random vector W^-measurable with E\^\^ < oo, we denote x(t) = 0 ( ^ to)^- By Remark 10 it is easy to verify that x{t) is a solution of the
1.12 Stochastic linear differential equations
31
system (1.22) verifying x(t) = ^. Then, by uniqueness arguments, we conclude that x{t) = x(t, to, ^) a.s., / > ^0. Hence we have the representation formula x(tJo,^)
= ^it,to)^
a.s.
The matrix ^{t,to),t > to >0, will be termed iht fundamental matrix solution of the system of stochastic linear differential equations (1.22). By the uniqueness argument it can be proved that ^(t, s)(s, to) = 0(r, ^o) a.s., t > s
>to>0.
Proposition 40. The matrix 0 ( ^ ^o) is invertible and its inverse is given by 0)-^ (r, ^o) = 0*(r, to) a.s., t
>to>0,
where 0 ( ^ to) is the fundamental matrix solution of the stochastic linear differential equation: dyit)
(1.26)
y(t)dt r
-Y,Al{t,r){t))y{t)dwk{t). k=\
Proof. Applying Ito's formula (Theorem 33) to the function v{t, X, y) = y*x, t >to, x,y e R"" and to the systems (1.22) and (1.26), we obtain y * $ * ( ^ to)^{t,
to)x - y*x =0 a.s., t >to>0,
x,y e R";
hence 0 * ( ^ to)^(t, to) = In a.s., t > to, and the proof is complete. Let us consider the affine system of stochastic differential equations: dx(t) = [Ao(t, r](t))x(t) + fo(t)]dt
D (1.27)
r
+ ^ [ A , ( f , ri(t))x(t) +
fkit)]dwk{t),
k=\
t > 0, where /^ : R+ x ^ —> R'^ are stochastic processes with components in L^^([0, T]) for all T > 0. Using Theorem 36 we deduce that for all ro > 0 and for all random vectors ^, W^^-measurable with E\^\^ < oo, the system (1.27) has a unique solution Xf(t,to,^), f — (/o, / i , • • •, / r ) - Additionally, for all T > to, there exists a positive constant c depending onT,T — to such that sup E\\xfit,to.^)f te[to,T]
(1.28)
I ^(^) = /l
L
J
l§l' + J2 I k=o •^^o
\fk(s)\^ds]\r](to) /
ds
Let ^(t,to), t > to >0, be the fundamental matrix solution of the linear system (1.27) and set z(t) = 0~^(r, to)xfit, to, ^ ) . Applying obtained by taking fk=Om
32
1 Preliminaries to Probability Theory and Stochastic Differential Equations
Ito's formula (Theorem 33) to the function v{t,x,y) systems (1.26) and (1.27), we obtain
= y*x , x,y e R", and to the
/o(5)-^A,(^,r/(^))/fc(5)
j*z(0 = y"z(to) + y
ds
k=\
k=\
t >tQ,y eW.
^(0
"^^0
Since y is arbitrary in R^' we may conclude that ds
(^, ^) JtQ
k=\
)fk(s)dwk(s)
a.s.,
k=\ -^^0
t > to. Thus we obtained the following representation formula: Xf(t,to,^)
(1.29)
= O(r,ro)
^(5(/)//(/)), 5, // € S'^,
(2.1)
We introduce on S^ the following norm: \S\ = m a x | 5 ( / ) | ,
(2.2)
ieV
where \S(i)\ is the norm induced by the Euclidean norm on /?", that is: \S(i)\ = sup |5(/)x| = \x\0. (ii) From (2.1), it is sufficient to show that if 5, M e 5^, with 5 > 0, M > 0, then Tr[SM] > 0. Since 5 > 0, there exist the orthogonal vectors ^ 1 , . . . , ^„ and the nonnegative numbers X i , . . . , A,„ such that Y^XiCie
36
2 Exponential Stability and Lyapunov-Type Linear Equations
(see, e.g., [7]). Then we have Tr[SM] = ^XiTr[eie';M]
= J^^i^i^^i
i= \
^ ^
i= \
and the proof is complete.
D
Proposition l.IfT e S^ -^ Sf^ is a linear and positive operator then the adjoint operator T* : Sf^ -> S^ is positive too. Proof. Let S eSf^, 5 > 0. We show that 7*5 > 0. Indeed, if H eS^,H >0, ihenTH > 0 and hence, according to Lemma l(ii), we obtain (5, TH) > 0. Therefore {r*5, //) > 0 for all H e S^^. Invoking part (i) in Lemma 1 we conclude that D r * 5 > 0 and the proof ends. The result stated in the next theorem provides a method for determining || T || for a positive operator T. Theorem 3.//" r : <S^ -> <S,f is a linear positive operator then \\T\\ = \TJ^\. Proof. From (2.4) one can see that I ry^ I < UriKLet^" e 5^^with|5| < l,thatis, 1^(01 < 1 for all / G V. Hence -!„ < 5(1) < /„ for all / e V and - 7 ^ < S < J"^. Since T is a positive operator it follows that - 7 7 ^ 0
if/ ^ 7 .
(2.7)
38
2 Exponential Stability and Lyapunov-Type Linear Equations For each r G T we define the linear operator C(t) : S^ -> S^ by (C(t)S)(i) = Ao(t, i)S(i) + SiOA^it, i)
(2.8)
r
d
+ ^ A,(r, /)5(/)A*(r, /) + ^ ^ , , 5 0 ' ) , k=\
j=\
i e V, S e S^, It is easy to see that t i—> L{t) is a continuous operator-valued function. Definition 3. The operator C(t) defined by (2.8) is called the Lyapunov operator associated with AQ, . . . , A^ and Q. The Lyapunov operator C{t) defines the following linear differential equation
on Si: -S{t) = C{t)S{t), tel, (2.9) dt For each to el and H e S^, S(t, to, H) stands for the solution of the differential equation (2.9) which verifies the initial condition S{to, to, H) = H. Let us denote by T(t, to) the linear evolution operator on S^ defined by the differential equation (2.9), that is T{t, to)H = S(t, to, H)\ t,toeI,
H e S^.
It is said that T{t, to) is the evolution operator associated with the system
(Ao,...,A,;e). We have
-T{tJo) = C{t)T{tJo). dt T{t,to) = j \ where J^ : S^ ^^ S^ is the identity operator. It is easy to check that T{t, s)T(s, r) = T(t, r) for all t,s,T e I. For all pairs t,T eX, the operator T(t, z) is invertible and its inverse is T~^ (t, z) = T(z, t). If T*{t,z) denotes the adjoint operator of r(r, r), the following hold: r{tJo)
= T'(s,to)T'{t,s),
(2.10)
r(t,s) = (THz,s))rit,z),
(2.11)
^T\t,s) = r{t,s)L\t), dt
(2.12)
4-r*(5, 0 = -C\t)T\s,
(2.13)
t).
dt It is not difficult to see that the adjoint operator C{t) \ S^ -^ S^ is given by {C\t)S){i)
= A*(r, i)S{i) + 5(/)Ao(r, /) r
(2.14) d
+ Y, A*(/, i)S{i)A,{t, i) + Y, qijS(j), k=\
i eV, S e St
j=\
2.2 Lyapunov-type differential equations on the space <S^
39
Remark 3. (i) If Ak(t,i),k = 1 , . . . , r, do not depend on t, then the operator C defined by (2.8) is independent on t. More precisely, if Ajc = (A^(l),..., A^(J)), then r
(CSm = Ao(/)5(/) + S(i)Al{i) + J2 Akii)S{i)Alii)
(2.15)
d
+ ^^,,5(7),
i e V, S e S^.ln this situation the evolution operator defined by the differential equation ^S{t) at
= CS{t)
is given by T(t,to)=e^^'-'''\
(2.16)
where Ct
_
^
e-:=E
k=0
kl
(the above series being uniform convergent on every compact subset of the real axis). C^ stands for the k-iteration of the operator C and C^ = J^, (ii) If A^ :I ^ X ^ are 6>-periodic functions, then T{t + Ojo + 0) = T(t, to) for all tJo el such that t -\-0,to-\-0 el. In order to motivate the definition of the Lyapunov operator C(t) and its corresponding evolution operator T(t,to), we shall prove the following result which establishes the relationship between the evolution operator T(t, to) and the fundamental matrix solution of a system of stochastic linear differential equations of type (1.22). Theorem 4. Assume that J = R+ and that the elements of Q satisfy (2.7) and the additional condition X!/=i ^U — ^' ^ ^ ^- Under these assumptions we have (r*(r, to)H)ii) = £[0*(r, to)H(r](t)) to >0, H e S^,i eV, where cl>(r, to) is the fundamental matrix solution of the system (1.22). Proof Let U(t, to) : S^ -^ S^ be defined by (U(t, to)iH))(i) = £[0*(r, to)H(rj(t))(t, to)\r](to) = H H e Si, ieVj> to. Taking H e 5^, we define v{t, jc, /) = JC*//(/)JC, X eWJ
eV,t>
0.
40
2 Exponential Stability and Lyapunov-Type Linear Equations
Applying Theorem 35 of Chapter 1 to the function v{t,x,i) (1.22), we obtain
7''
jc*(W(r,ro)(//))(/)jc-x*//(/)jc = x * /
and to the equation
(U(sJo)(C%s)H))ii)dsx.
JtQ
Hence
at Since Uito, to) = T*(to, to) and using (2.12) it follows that U(t,s) = T*(t,s), t > s, and the proof is complete.
D
As we shall see in Section 2.5, the above result allows us to reduce the study of the exponential stability for the linear stochastic system (1.22) to the problem of the exponential stability for a deterministic system of type (2.9). Remark 4. (i) If in the system (1.22) we have Ak(t -i-O) = Ak(i), t >0, i eV, then from Theorem 4 and Remark 3(ii) we deduce that E[\to>0, i eV, xoe R\ (ii) If the system (1.22) is time invariant, then according to Theorem 4 and Remark 3(i), we have E[\{t,to)xo\^\r](to) = i] = E[\cP(t-to,0)xo\^\r](0) for all t >to>0,
i eV, xoe
= i]
R\
Theorem S, If Tit, to) are linear evolution operators on S^ defined by the linear differential equation (2.9), then the following hold: (i) T{t, to) > 0, T*(t, to) > 0 for all t > to. r, ^o G I; (ii) ift -> Ak(t) are bounded functions, then there exist 8 > 0,y > 0 such that T(t, to)J'^ > 5^-^^^-'oV^ T*{t, to)J'^ > Se-^^'-'^^J"^ for all t > to, ^ to e T. Proof To prove (i) we consider the linear operators C\{t) : S^ ^^ S^, C{t) : S^ -^ S^ defined by (C,(t)H)ii)
= Uoit, i) + ^quln) H(i) + H(i) (AOO, i) + ^qulnj r
{C(t)H)(i) = J2^k(t,i)H(i)Al(tJ)+ H = (//(I), / / ( 2 ) , . . . , H{d)) e 5,^ t e I.
d
J2
^JiHU)J^^^
,
2.2 Lyapunov-type differential equations on the space <S,f
41
It is easy to see that for each t e J , the operator C(t) is a positive operator on S^. Let us consider the linear differential equation ^Sit) at
= Cdt)Sit)
(2.17)
and denote Ti(t, to) the linear evolution operator on S^ defined by (2.17). By direct calculation, we obtain that (ri(^ to)Hm
= toJ eV, H e S^, where O/ (r, to) is a fundamental matrix solution of the deterministic differential equation on R", 1
d
xit).
It is clear that for each t > to,T\{t, to) > 0. Since the linear differential equation (2.9) is written as ^S(t) = Cdt)S(t)^C(t)S(t), at we may write the following representation formula: T{t,to)H = Ti(tJo)H+
f
Tdt,s)C(s)T(sJo)Hds
JtQ
for all H eS^,t > to, t, to e I. Let H e S^, H >Ohc fixed. We define the sequence of Volterra approximations Sk(t),k>0,t>to,hy Soit) = Ti(t,to)H, Sk+iit) = T,(t,to)H+
f Tdt,s)L(s)Sk(s)ds,
k=l,2,...,
JtQ
Since ^l(^ to) is a positive operator on 5^, we get inductively that Skis) > 0 for all 5 > to,k = 1,2, Taking into account that Hm^_^oo Sk(t) = T{t, to)H we conclude that T(tJo)H > 0, hence T(t, to) > 0. By using Proposition 2 we get that the adjoint operator T*(t, to) is positive. (ii) First, we show that there exist 5 > 0, y > 0, such that \T(t,to)H\ >8e-^^'-'^^\H\, |r*(/,ro)//| >8e-^^'-'^^\H\ for all H eS^,t
> to, ^ to e I.
(2.18)
42
2 Exponential Stability and Lyapunov-Type Linear Equations Let us denote v(t) - ^\\\T{t, to)H\\\^ = ^{T{t, to)H, T(t, to)H},
where 111 • | {| denotes the norm induced by the inner product, that is, 1•1| l i : = ( - , - ) 2 By direct calculation, we obtain d v(t) - {C{t)T(t, to)H, Tit, to)H), t > to. dt Under the considered assumptions there exists y > 0 such that d
to.
— v(t) > -2yv(t), at
t > to,
Further, we have
or equivalently -ri;(/)^2K(r-ro)1>o
dt^ -• for all t > to. Hence the function t -^ v{t)e'^^^^~^^^ is not decreasing and v(t) > ^-2K(r-ro)^(^Q) Considering the definition of v{t) and using (2.3), we conclude that there exists 8 > 0 such that \T(t,to)H\
>8e-^^'-''^^\H\,
which is the first inequality in (2.18). To prove the second inequality in (2.18), we consider the function 1 o ^ v(s) = - | | | r * ( / , 5 ) / / | | | 2 , H eS^,s v{t) for all s e-^^'-'^\\\H\\\.
Using again (2.3) we obtain the second inequality in (2.18).
2.2 Lyapunov-type differential equations on the space <S^ Let X eW,
43
i eV,bc fixed; consider H e S^ defined by 0 if j ^ i . •^
' JCJC* if J =
I.
We may write successively xHT(t, to)J'^)(nx = Tr[xx'(T(t,
to)J'^m]
= {H, T(t, to)J'^)
d
= {r(t,
to)H, j') = J2 Tr[r(t,
to)H]{i)
i=\ d
> V \{T*{t, to)Hm\
> max \{T*it, to)H){i)\
= |r*(r,ro)H| ^u-^^'-'^\x\!-. Since x G R'^ is arbitrary we get (r(r, h)J%i)
> 8e-^^'-'^'>In, (V)/
eVj>to>0,
or equivalently \T(t,to)J^\ > 8e~^^^~^^^J Wt > ro. The second inequality in (ii) may be proved in the same way. D Remark 5. Combining the result in Theorem 5 with Remark 1 we obtain that T(tJo)j' T'(t,to)j'
|>C(OII is a bounded function, we deduce easily that there exists y > 0 such that
foralU > ^0, t,to e I, Corollary 6. Suppose that Ak,0 < k < r, are continuous and bounded functions. Then there exist 5 > 0 and y > 0 such that Se-y^'-^o)jd < j(^^ f^^^jd ^ ey(t-to)jd^
(2.20)
Se-y^'-'o)jd < T*it,to)J'^ < e^^'-'^^J"^ for all t > ^0, t, to eX.
D
44
2 Exponential Stability and Lyapunov-Type Linear Equations Let us close this section with two important particular cases: (a) Ak(t) = 0 , k = 1 , . . . , r; in this case the linear operator (2.8) becomes {C(t)S)(i) = Ao(t, i)S{i) + 5(/)A*(r, /)
(2.21)
d
7=1
/ eV,SeS^.lt is easy to check that the evolution operator T(t, to) defined by (2.9) has the representation T(t, to) = f (/, ^o) + / T(t, s)C2(s)T(s, to) ds,
(2.22)
JtQ
t > to, t, to e I, where T(t, to) is the evolution operator on S^ defined by the differential equation ^S{t) dt and £2(0 :S^ ^ S^ is defined by
= C{t)S{t)
r
{C2(t)H){i) = J2 ^k(t, i)H{i)Al(t, i), k=\
t el,
H eS^,
i e V.
Remark 6. Since T{t, to) > 0, f (r, ro) > 0, r > ^0, and £2(0 > 0, t e I, from (2.22) it follows that T(t, to) > f(t, to) for all t > ^o, t, to e J , and hence, using Theorem 3, we get \\TitJo)\\ > \\f(t,to)l
t>to,
tjoel.
The evolution operator T{t, to) will be called the evolution operator on the space S^ defined by the pair (AQ, Q). If additionally Q verifies the assumptions in Theorem 4, then (2.21) is the Lyapunov-type operator associated with the system (1.23). (b) D = {1} and q\\ = 0 . In this case S^ reduces to Sn and the operator C{t) is defined by r
C{t)S = Ao(t)S + SA*(t) + Yl MOSAlit),
(2.23)
k=\
t eX, S e Sn, where we denoted Ak(t) := Ak(t, 1). The evolution operator T(t,to) will be called the evolution operator on Sn defined by the system (AQ, . . . , A^). The operator (2.23) corresponds to the stochastic linear system (1.24). Proposition 7. If I = R+ and T(t, to) is the linear evolution operator on Sn defined by the Lyapunov operator (2.23), then we have the following representation formulae: T(t,to) = E[to>0, system (1.24).
^(t,to)
=
E[ 0 for / 7^ j and Ylj=i ^U = ^- It is easy to check that for each t > 0, M{t) is a linear and bounded operator on the Hilbert space (R'')'^ and t i—> ||M(OII is a bounded function, || • || denoting the operatorial norm induced by the norm in (R")"^.
46
2 Exponential Stability and Lyapunov-Type Linear Equations Let us consider the linear differential equation on (R")"^:
^y{t) = M{t)y{t). (2.26) at Let R (t, ^o) be the linear evolution operator associated with the equation (2.26), that is, -R(t, at
to) = M(t)R(t, to), R(to, to)y = y
foralU,ro >0, y e (R'^)^. By M*(t) and /?*(^ to) we denote the adjoint operators of M(t) and R(t, to), respectively, on (R^)"^. One can easily see that r
(M*(OJ)(/)
^R\tJo) at d — R\sj) dt
= A*(^ i)y{i) + J2 "^'jyU)^ ieV^ye
(R")^
= R\tJo)M^{t),
(2.27)
= -M*(0/?*(5,r)
for alU, 5 G /?+. The operator R(t, to) will be termed the evolution operator on (R^)^ defined by the pair (A, Q). The next result provides the relationship between the evolution operator R{t, to) and the fundamental matrix solution ^(t,to) of the stochastic system (L23). Proposition 8. Under the assumptions given at the beginning of the section, the following equality holds: (/?*(/, to)y)ii) = E[%t, to)y(r](t)) \ r](to) = H ieV,y
=
t>to>0,
(y(l),...,y{d))e(R")'.
Proof Lett >to>0
and the operator V(r, ^o) : (R")^ -> (R'^)^ be defined by
(V(t, to)y)(i) = £[0*(r, to)y(riit)) I ^?(^o) = H i e T), y = (_y(l), . . . , y(d)) e (R'^)^. Let y be fixed and consider the function i; : R'^ X P ^ R by v(x,i) =x*y(i). Applying the Ito-type formula (Theorem 35 of Chapter 1) to the function v and to the system (1.23), we obtain: E[vix(t)^r](t))\r](to) / x*is)
=
i]-x^y(i)
A*(5, r](s))y(r](s)) + X^^,(.))>^(7)
ds \ r](to) = i
2.4 Exponential stability for Lyapunov-type equations on <S^
47
where x(s) = 0(5, ^)JCO. Further, we write x'o(yit,to)ym-x'oy(i)=x',
[
{VisJo)M'is)y)(i)ds
JtQ
for all t > to >0,xo eR^,
i eV. Therefore, we may conclude that
V(tJo)y-y=
f
V(s,to)MHs)yds
JtQ
foralU >tomdy e (R^'Y. By differentiation, we deduce that ^V(t,to)y at
=
VitJo)M''(t)y
for all y e (R")^, and hence — VitJo) = V{t,to)M'(t), at
t>to.
Since V{to, to) = R*ito, to), from (2.27), V(t, to) = R*(t, to) for all t > to > 0, and the proof ends. •
2.4 Exponential stability for Lyapunov-type equations on S^ In this section J C R denotes a right-unbounded interval. Consider the Lyapunov operator (2.8) on S^, where Q satisfies (2.7) and A^ are continuous and bounded functions. Let T(t, to) be the linear evolution operator on S^ defined by (2.9). Definition 4. We say that the Lyapunov-type operator C(t) generates an exponentially stable evolution, or equivalently, the system (AQ, . . . , A^-; Q ) is stable if there exist the constants j6 > 1, a > 0 such that \\T{t, to)\\ < Pe-^^'-'^\ t > to, to e I.
(2.28)
Remark 8. From Remark 6 immediately follows that if (AQ, . . . , A^; 2 ) is stable, then there exists fi > I and a > 0 such that \\f(t,to)\\ for diWt >toJ,to pair (Ao, Q).
el,
S^ be a continuous function. Assume that the integral fj T*{s,t)H(s)ds is convergent for all t eX. Set oo
Ki,y.= l
T\s,t)H{s)ds.
Then K(t) is a solution of the affine differential equation ^K(t) dt
+ C{t)K{t)-\-H(t)
= 0.
Proof Let z > t be fixed. Then we have /*cx^
r
/ r*(5, t)H(s) ds-h
T*(s, t)H(s) ds.
Based on (2.11) we get K{t) = r*(r, t)K(T) + r*(r, t) j
T*(s, T)H(S)
ds.
Using (2.12) we obtain that Kit) is differentiable and ^K(t) = -C*(t)K(t)-H(t), dt and the proof ends. D The next lemma shows that the integrals used in this section are absolutely convergent. Lemma 10. Let H : X ^^ S^ be a continuous function such that H(t) > 0 for all t e L Then the following are equivalent: (i) The integral J^ \T*{s, t)H(s)\ds is convergent for all t e X. (ii) The integral J^ T'^is, t)H(s) ds is convergent for all t eX. Proof, (i) => (ii) follows immediately, (ii) =^ (i) Let y(t) =
/ \Jt
T*(s,t)H(s)ds\,
t eX. I
We have oo
/
T''(s,t)H(s)ds
< y{t)j\
t GX,
which leads to oo
{T\s,t)H{s)){i)ds
/
< y(t)In, i eV, t eX.
Hence oo
/
Tr{T*(s, t)H(s))ii)ds
< nyit), i eV,t
el.
2.4 Exponential stability for Lyapunov-type equations on S^
49
from which we deduce that j Tr{T\s, t)H{s)){i) ds < ny(t), T >t. The above inequality gives
j
\{T\sj)H{s)m\ds (ii) From (2.28) it follows that T(t,s)\\ds JtQ
< a
for all t > to. (ii) zz> (iii) immediately follows from (2.6) and Theorem 5.
50
2 Exponential Stability and Lyapunov-Type Linear Equations
(iii) =^ (i) Let / / : J -> 5f be a continuous and bounded function. It follows that the real constants 8i, 82 exist such that 81J^ < H(s) < 82 J^ for all s el. Since T(t,s) is a positive operator defined on 5f, we deduce 8\T{t,s)J^ < T(t, s)H(s) < 82Tit, s)J'^ for all r > 5 > ro, to e I. Hence 81 I T{t,s)jUs
0 such that oo poo
/
jd T\sj)H{s)ds < V28r
for all t e X. We define K(t) = = /I
T\sj)H{s)ds.
Based on (2.13) we obtain that K(t) defined above is a solution of (2.30). (vi) =^ (vii) From vi) it follows that the affine differential equation — K(t) -h C\t)K{t) at
+ H{t) + 7^ = 0
has a uniform positive and bounded solution which also solves (2.31). (vii) ^ (viii) It is obvious that any solution of (2.31) is a solution of — K{t) + C\t)K{t)^0. dt
(2.33)
2.4 Exponential stability for Lyapunov-type equations onS^
53
(viii) => (iv) Let A' : J ^- 5f be a bounded and uniform positive solution of (2.33) with bounded derivative. We define M(r) = (M(t, 1 ) , . . . , M(r, d)) by M(t) =
-C\t)K.
Y dt
Therefore, there exists the constants /Ii > 0 and jli > 0 such that Ai/^ <M{t) ^0. Since r*(r, ^o) is a positive operator, from (2.36) we obtain that /S 11 5i T*(t, to)J'' > -G(t) > -T*(t,
tQ)j\
(2.37)
54
2 Exponential Stability and Lyapunov-Type Linear Equations
which leads to at
0
from which it follows that
^GitJ) to, hence
(r{t,to)j'm 0. Hence K verifies (2.42). (iii) =^ (iv) follows immediately (taking H = J^). (iv) =^ (i) follows from Proposition 13. (i) =^ (v) Let / / > 0. Therefore fiiJ"^ < H ft > 0. Since Ik^'ll < fie-'^^t > 0, for some y6 > l , a > 0 the integral K = f^e^'Hdt IS convergent, and since e^^ is a positive operator we have according to (2.20) o
poo
^ ft7^ < ft / e^'jUt Jo
oo in the above inequality, one gets K = f^ e^^Hds = K and thus the proof of (i)=>(v) is complete. (v) => (vi) follows by using the same reasoning as in the proof (ii) => (iii). (vi) =^ (vii) follows immediately (taking H = J^). (vii) ^ (i) Let H = -CK. Thus CK + H = 0 with // > 0 and /^ > 0. Since K is a constant solution of the equation jjK{t) = CK{t) + // we have K = e^^'-'^^K + / e^^'-'^Hds, t > to. JtQ
Since e^^ is a positive operator and H > yj^ with some y > 0 we can write y f e^^'-'^jUs JtQ
< [ e^^'-'^Hds 2, and for /? = 1 we have r
d
k=\
7=1
Since for each / e P , Ao(t, i) + ^