RANDOM DIFFERENTIAL INEQUALITIES
This is Volume 150 in MATHEMATICS IN SCIENCE AND ENGINEERING, A Series of Monographs and Textbooks, Edited by RICHARD BELLMAN, University of Southern California. The complete listing of books in this series is available from the Publisher upon request.
RANDOM DIFFERENTIAL INEQUALITIES
G. S. Ladde and V. Lakshmikantham, Department of Mathematics, University of Texas at Arlington, Arlington, Texas
1980
ACADEMIC PRESS A Subsidiary of Harcourt Brace Jovanovich, Publishers
New York
London
Toronto Sydney
San Francisco
COPYRIGHT © 1980, BY ACADEMIC PRESS, INC. ALL RIGHTS RESERVED. NO PART OF THIS PUBLICATION MAY BE REPRODUCED OR TRANSMITTED IN ANY FORM OR BY ANY MEANS, ELECTRONIC OR MECHANICAL, INCLUDING PHOTOCOPY, RECORDING, OR ANY INFORMATION STORAGE AND RETRIEVAL SYSTEM, WITHOUT PERMISSION IN WRITING FROM THE PUBLISHER.
ACADEMIC PRESS, INC.
111 Fifth Avenue, New York, New York 10003
United Kingdom Edition published by ACADEMIC PRESS, INC. (LONDON) LTD., 24/28 Oval Road, London NW1
Library of Congress Cataloging in Publication Data
Ladde, G. S.
Random differential inequalities.
(Mathematics in science and engineering)
Bibliography: p.
Includes index.
1. Stochastic differential equations. 2. Differential inequalities. I. Lakshmikantham, V., joint author. II. Title. III. Series.
QA274.23.L32  515.3'5  80-521
ISBN 0-12-432750-8
PRINTED IN THE UNITED STATES OF AMERICA
8081 8283
9 8 7 6 5 4 3 2 1
CONTENTS

Preface   vii
Notations and Abbreviations   ix

CHAPTER 1  Preliminary Analysis
1.0. Introduction   1
1.1. Events and Probability Measure   1
1.2. Random Variables, Distribution Functions, and Expectations   3
1.3. Convergence of Random Sequences   6
1.4. Conditional Probabilities and Expectations   10
1.5. Random Processes   12
1.6. Separability of Random Processes   18
1.7. Deterministic Comparison Theorems   19
     Notes   20

CHAPTER 2  Sample Calculus Approach
2.0. Introduction   21
2.1. Sample Calculus   22
2.2. Existence and Continuation   30
2.3. Random Differential Inequalities   37
2.4. Maximal and Minimal Solutions   39
2.5. Random Comparison Principle   41
2.6. Uniqueness and Continuous Dependence   42
2.7. The Method of Variation of Parameters   50
2.8. Random Lyapunov Functions   56
2.9. Scope of Comparison Principle   60
2.10. Stability Concepts   69
2.11. Stability in Probability   72
2.12. Stability with Probability One   80
2.13. Stability in the pth Mean   88
      Notes   91

CHAPTER 3  Lp-Calculus Approach
3.0. Introduction   92
3.1. Lp-Calculus   93
3.2. Interrelationships between Sample and Lp-Solutions   95
3.3. Existence and Uniqueness   97
3.4. Continuous Dependence   105
3.5. Comparison Theorems   108
3.6. Stability Criteria   110
     Notes   112

CHAPTER 4  Itô-Doob Calculus Approach
4.0. Introduction   113
4.1. Itô's Calculus   114
4.2. Existence and Uniqueness   131
4.3. Continuous Dependence   142
4.4. The Method of Variation of Parameters   149
4.5. Stochastic Differential Inequalities   152
4.6. Maximal and Minimal Solutions   157
4.7. Comparison Theorems   159
4.8. Lyapunov-Like Functions   160
4.9. Stability in Probability   166
4.10. Stability in the pth Mean   172
4.11. Stability with Probability One   175
      Notes   178

Appendix
A.0. Introduction   180
A.1. Moments of Random Functions   180
A.2. Spectral Representations of Covariance and Correlation Functions   183
A.3. Some Properties of Gaussian Processes   186
A.4. Brownian Motion   188
A.5. Martingales   189
A.6. Metrically Transitive Processes   191
A.7. Markov Processes   195
A.8. Closed Graph Theorem   200
     Notes   201

References   202
Index   207
PREFACE
The mathematical modeling of several real-world problems leads to differential systems that involve some inherent randomness due to ignorance or uncertainties. If the randomness is eliminated, we have, of course, deterministic differential systems. Also, many important problems of the physical world are nonlinear. Consequently, the study of nonlinear random differential systems is a very important area in modern applied mathematics. A differential system can involve random behavior in three ways: (i) random forcing functions, (ii) random initial conditions, and (iii) random coefficients. Problems in which randomness is limited to (i) and (ii) are relatively simple to investigate. However, the most interesting case of random differential equations is (iii) when combined with (i) and (ii). The objective, of course, is to discuss fundamental properties and qualitative behavior of solutions by various probabilistic modes of approach. Also, since the solutions are stochastic processes, various statistics of the solution processes are to be found. As is well known, the theory of differential inequalities plays a crucial role in the study of deterministic differential equations. Furthermore, the theory of differential inequalities together with the concept of a Lyapunov function provides a suitable and effective mechanism for investigating a variety of qualitative aspects of solutions, including stability theory. It is natural to expect that a corresponding theory of random differential inequalities will play an equally important role in the theory of random differential equations. This is the basis for the evolution of this book. The present book offers a systematic treatment of random differential inequalities and their theory and applications, depending on the different modes of probabilistic analysis, namely, approach through sample calculus, Lp-mean calculus, and Itô-Doob calculus. The book is divided into four chapters. The first chapter consists of preliminary material that is required for the rest of the book. We list here needed basic concepts and results in a logical sequence. In the second chapter we develop the fundamental theory of random differential equations and inequalities in the framework of sample calculus. Chapter 3 is devoted to the
treatment of random differential equations and inequalities through pth mean calculus. The last chapter investigates the differential equations of Itô type in the context of Itô-Doob calculus. Finally, an appendix is given to supplement the material of the book. Several examples and a carefully selected set of problems are incorporated in the body of the text. Some of the important features of the book are the following: (i) inclusion of the study of random differential equations through sample calculus, (ii) development of the theory of random differential inequalities through various modes of probabilistic analysis and the application of these results to discuss various properties of solution processes, (iii) a unified treatment of stability theory through random Lyapunov functions and the random comparison method, and (iv) stress on the role of the method of variation of parameters in the stability analysis of stochastically perturbed systems. This monograph can be used as a textbook at the graduate level and as a reference book. A good background in probability theory and differential equations is adequate to follow the contents of this book. We wish to express our warmest thanks to Professor Richard Bellman whose interest and enthusiastic support made this work possible. We are immensely pleased that our book appears in his series. The staff of Academic Press has been most helpful. We thank our colleagues who participated in the seminar on stochastic differential equations at The University of Texas at Arlington. In particular, we appreciate the comments and criticism of Professors Stephen R. Bernfeld, Jerome Eisenfeld, Pat Sutherland, and Randy Vaughn. Moreover, we wish to thank Mrs. Mary Ann Crain for her excellent typing of the manuscript. The first-mentioned author would like to acknowledge encouragement and support of his friend and colleague, Professor Clarence F. Stephens, SUNY-Potsdam. Furthermore, he would like to acknowledge the Research Foundation of the State University of New York for its continuous encouragement and support for the development of several results in this book. A large part of the book was completed while the first author was associated with SUNY-Potsdam. The preparation of this book was facilitated by U.S. Army Research Grant DAAG29-77-G0062, and we express our gratitude for the support.
NOTATIONS AND ABBREVIATIONS
For the convenience of readers we collect below the various notations and abbreviations employed in the monograph. Vectors (column vectors) of dimension n are basically treated as n × 1 matrices. All relations such as equations, inequalities, belonging to, and limits, involving random variables or functions are valid with probability one. Sometimes the symbols x(t) and x(t, ω) are used interchangeably for a random function.
R"
An n-dimensional Euclidean space with a convenient norm
I/ * I 1
The norm of a vector or matrix The set of all deterministic real numbers or real line The set of all t E R such that t 3 0 A probability space The cr-algebra of Bore1 sets in R" A complete probability space The collection of all random vectors defined on a complete P ) into R" probability space ( 9, A collection of all n x m random inatrices A = (aij) such that aij E R [ i l , R ] The collection of all n-dimensional random vectors x such that E(llxllp) < forp 3 1 A collection of all equivalence classes of random vectors such that an element of an equivalence class belongs to Yp The transpose of a vector or matrix x Almost surely or almost certainly In probability or stochastically pth mean or moment A characteristic or indicator function with respect to an event A
a,
Y
P
LPIJXR"l XT
a.s. i.p. P.m I.4
ix
X
Notations and Abbreviations
An arbitrary index set, in particular, a finite, countable set or any interval in R A class of random functions defined on I into R [ R , R " ] RII,RIQ,Rrl]] The class of deterministic continuous functions defined on C[E,R"l an open (t,x) subset E of R"+' into R" R [[u,b],R [a,Rtl]] A collection of all R"-valued separable random functions defined on [u,b]with a state space (R",Sc,'),a,b E R C[[a,b], R [ Q , R " ] ] A collection of all R"-valued separable and sample continuous random functions defined on [u,b] with a state space ( R t ' , P 1 ) , a,b E R w.p. 1 With probability one M[[u,b],R[R,R"]] A collection of all random functions in R[[u,b],R[Q,R"]] m x P), which are product-measurable on ([a,b]XR , F I X w h e r e 0 = ( 0 , F , P ) and ( [ a , b ] , 9 l , r nare ) a complete probability space and a Lebesgue-measurable space, respectively The set of all x E R" such that I - zI 1 < p for given z E R" B(2,P) and positive real number p The set B ( z , p ) with z = 0 E R" B( P) B(Z,P) The closure of B ( z , p ) C [ R +x B ( z ,p), R [Q,R"]] A class of sample continuous R"-valued random functi0nsflt.x) whose realizations are denoted by f l t , x , 0) M [ R +x B ( z , p ) ,RIQ,Rfl]] A class of R"-valued random functions such that f l t , x ( t ) ) is product-measurable whenever x(t) is product-measurable The class of random functions K E M [ I ,R [Q,R +]] such that its sample Lebesgue integral is bounded with probability one [ t o , t,,+a], where to E R + and a is a positive real number Almost everywhere or except a set of measure zero The determinant of a square matrix A The trace of a square matrix A The inverse of a square matrix A The logarithmic norm of a random square matrix A( W) The spectrum of random square matrix A( o) The class of functions b E C[[O,p),R+] such that b(0) = 0 and b(r) is strictly increasing in r , where 0 ~p s m The class of functions b E C[[O,p),R+] such that b(0) = 0 and b(r) is convex and strictly increasing in r The class of functions a E C [ R + X [O,p), R+] such that W X a(t,O) = 0 and a(r,u) is concave and strictly increasing in u for each t E R +
I
xi
Notations and Abbreviations
R[[u,b],L’[fl,R”]] A collection of all functions defined on [u,b]with values in L’[fl, R ” ] C[[u,b],L’ [ f l , R ” ] ] A collection of functions in R [ [ u , b ] ,L’[fl,R”]] which are L”-continuous on [u,b] [R, xL”[fl,R ” ] , L p [ f R l , ” ] ] A collection of all functionsf(t,.r) defined on R , X L’[fl, Rf’]into L”[fl, R “ ] A sub-u-algebra of Bdefined for t E R , 8 The smallest sub-u-algebra of Fgenerated by an 25 m-dimensional normalized Wiener process z(t) for t E R , The smallest subu-algebra of Ygenerated by z(t) - z(s) for 3; s 3 t 3 0, where z(t) is an m-dimensional normalized Wiener process An N X n Jacobian matrix of V(r,x), where V E C [ R + X I/ ax (t,x) R”, R”]
Gw
V,,
=
a*v
M,[u,bl
m.s.
(t.1)
An n x n Hessian matrix of V E C [ R + x R”, R N ]whose elements ( a 2 V /&&,) (r,x) are N-dimensional vectors A set of all nonanticipating n x m matrix or n-vector functions G defined on [u,b]into R [ f l R n m ]or R [ R , R ” ]such that the sample Lebesgue integral I IG(s, w) I l2 ds exists with probability one Mean square
s:
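As a concrete note on one of the less familiar symbols above, for the Euclidean norm the logarithmic norm of an n × n matrix A is given explicitly by

μ(A) = lim_{h→0+} (||I + hA|| − 1)/h = λ_max[(A + Aᵀ)/2],

the largest eigenvalue of the symmetric part of A; for a random matrix A(ω) the same quantity is taken samplewise.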
CHAPTER 1

PRELIMINARY ANALYSIS

1.0. INTRODUCTION
This chapter is essentially introductory in nature. Its main purpose is to introduce some basic probabilistic concepts that are essential to the study of random differential equations. We list some known results from standard textbooks and sketch some useful results that are not so well known. Sections 1.1 and 1.2 are concerned with events, probability measure, random variables, expectations, and distribution functions. Section 1.3 is devoted to convergence of random sequences depending on various modes of convergence. Several important and well-known theorems about various kinds of convergence are stated. Furthermore, some results concerning the weak convergence of measures are also included. A brief discussion about conditional probabilities and expectations is given in Section 1.4. Section 1.5 surveys certain fundamental notions and results in the general theory of stochastic processes. We sketch an important concept due to Doob, namely, the concept of separability of random processes, in Section 1.6. Finally, we collect in Section 1.7 some basic deterministic comparison results.

1.1. EVENTS AND PROBABILITY MEASURE
Let us consider a random experiment E whose outcomes, the elementary events ω, are elements of a set Ω. Ω is called a sample space. Let F denote a σ-algebra of subsets of the sample space Ω. Elements of F are called events. A real-valued set function P defined on F is called a probability measure or simply a probability if

(i) P(A) ≥ 0 for all A ∈ F (nonnegativity);
(ii) P(Ω) = 1 (normed finiteness);
(iii) P(∪_{n=1}^∞ A_n) = Σ_{n=1}^∞ P(A_n) for A_n ∈ F and n ≥ 1, A_n ∩ A_m = ∅ (n ≠ m) (σ-additivity).
The triplet (Ω, F, P) is called a probability space. A subset of an event of zero probability is called a null event or null set. A probability space (Ω, F, P) is said to be complete if every null event is an event. If (Ω, F, P) is not complete, P can be uniquely extended to the σ-algebra F̄ generated by F and its null events. This procedure is called completion. Let B^n denote the σ-algebra of Borel sets in R^n generated by the n-dimensional intervals

I^n = {x ∈ R^n : c < x ≤ d}.

For p > 0, ε > 0, x ∈ L^p, we have

P(||x|| ≥ ε) ≤ ε^{-p} ||x||_p^p    (Markov's or Chebyshev's inequality).

For n = 1, the number

var(x) = E((x − E(x))²) = σ²(x),    x ∈ L²,

is called the variance of x, the number σ = [var(x)]^{1/2} is called the standard deviation of x, the number E(x^k) is called the kth moment of x, the number E((x − E(x))^k) is called the kth central moment of x, and the number

cov(x, y) = E((x − E(x))(y − E(y)))

is called the covariance of x and y for x, y ∈ L². For n-dimensional random vectors x, y, the symmetric nonnegative definite n × n matrix

cov(x, y) = E((x − E(x))(y − E(y))^T) = (cov(x_i, y_j))

is called the covariance matrix of random vectors x and y. Let x be an n-dimensional random vector. The characteristic function of a random vector x is defined by

Q(a) = Q_x(a) = E(e^{i a^T x}) = ∫_{R^n} e^{i a^T x} dF(x)

for a ∈ R^n. If F(a) is absolutely continuous with density f(a), then the above equation reduces to

Q(a) = ∫_{R^n} e^{i a^T x} f(x) dx,

and f(a) can be obtained from Q(a) by the inversion formula of the Fourier integral,

f(x) = (2π)^{-n} ∫_{R^n} e^{-i a^T x} Q(a) da.

Neveu [74]
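For instance, taking p = 2 and applying the inequality to the deviation x − E(x) of a scalar random variable x with variance σ² gives

P(|x − E(x)| ≥ ε) ≤ σ²/ε²;

thus a random variable with standard deviation σ = 0.1 differs from its mean by more than 1 with probability at most 0.01.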
1.3. CONVERGENCE OF RANDOM SEQUENCES
Let x, x₁, x₂, . . . be n-dimensional random vectors defined on a probability space (Ω, F, P).

Definition 1.3.1. A sequence of random variables {x_n} is said to converge almost surely or almost certainly to x if there exists an event N ∈ F such that P(N) = 0, and for every ω ∈ Ω\N,

lim_{n→∞} x_n(ω) = x(ω),

and we write x_n → x a.s. as n → ∞.
Definition 1.3.2. A sequence {x_n} is said to converge in probability or stochastically to x if for every ε > 0,

lim_{n→∞} P{ω : ||x_n(ω) − x(ω)|| > ε} = 0,

and we denote it x_n → x i.p. as n → ∞.
Definition 1.3.3. A sequence of random variables {x_n} is said to converge in the pth mean or moment (p > 0) to x if

lim_{n→∞} E[||x_n − x||^p] = 0,

and we use the notation x_n → x p.m. as n → ∞.
Remark 1.3.1. In Definition 1.3.3, for p = 1, the pth convergence is simply convergence in the mean, and for p = 2, the pth convergence is simply convergence in the mean square or in the quadratic mean.

Let {F_n} and F denote the distribution functions of {x_n} and x, respectively.

Definition 1.3.4. A sequence of random variables {x_n} is said to converge in distribution to x if

lim_{n→∞} F_n(x) = F(x)

at every point at which F is continuous, or

lim_{n→∞} Q_n(u) = Q(u)    for all u ∈ R^n,

where Q_n and Q are the characteristic functions of x_n and x, respectively.
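To illustrate the characteristic-function criterion, if x_n is a scalar Gaussian random variable with mean 0 and variance 1 + 1/n, then

Q_n(u) = exp[−(1 + 1/n)u²/2] → exp[−u²/2]    as n → ∞,

for every u ∈ R, so x_n converges in distribution to a standard Gaussian random variable.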
Remark 1.3.2. These convergence concepts are related to each other as follows: convergence in the pth mean implies convergence in the rth mean for r ≤ p; convergence in the pth mean implies convergence in probability; and convergence in probability implies convergence in distribution.
Remark 1.3.3. (1) If there exists a number A such that ||x_n|| ≤ A a.s. for n ≥ 1 (that is, x_n is a.s. uniformly bounded), then convergence of {x_n} in probability implies convergence in the pth mean. (2) If x_n ≤ x_{n+1} a.s. for all n ≥ 1 or if x_n ≥ x_{n+1} a.s. for all n, then each one of the three convergences (in probability, a.s., and quadratic mean) implies the other two. (3) A sequence converges in probability if and only if every subsequence of it contains an a.s. convergent subsequence.

We present some well-known results that are useful in our study.
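As a simple illustration of why the implications above cannot be reversed, let {x_n} be independent random variables with P(x_n = 1) = 1/n and P(x_n = 0) = 1 − 1/n. Then for 0 < ε < 1,

P(|x_n| > ε) = 1/n → 0    and    E(|x_n|^p) = 1/n → 0,

so x_n → 0 in probability and in the pth mean; but since Σ_{n=1}^∞ P(x_n = 1) = ∞ and the events are independent, the second part of the Borel-Cantelli lemma (Theorem 1.3.4 below) shows that x_n = 1 infinitely often with probability one, so x_n does not converge to 0 almost surely.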
Theorem 1.3.1 (Monotone convergence). Let {x_n} be an increasing sequence of nonnegative random variables converging a.s. to x. Then E(x) = lim_{n→∞} E(x_n).

Theorem 1.3.2 (Fatou-Lebesgue lemma). Let {x_n} be a sequence of random variables. Suppose that there exist integrable random variables x, y such that x_n(ω) ≥ x(ω) and x_n(ω) ≤ y(ω) a.s. for all n. Then

lim inf_{n→∞} E(x_n) ≥ E(lim inf_{n→∞} x_n)

and

lim sup_{n→∞} E(x_n) ≤ E(lim sup_{n→∞} x_n).

Theorem 1.3.3 (Dominated convergence). Let {x_n} be a sequence of random variables converging almost surely to x. Suppose that there exists an integrable random variable y such that ||x_n(ω)|| ≤ ||y(ω)|| a.s. for all n. Then lim_{n→∞} E(x_n) = E(x).
Theorem 1.3.4 (Borel-Cantelli lemma). For an arbitrary sequence of events {A_n}, Σ_{n=1}^∞ P(A_n) < ∞ implies P(lim sup_{n→∞} A_n) = 0. Moreover, if the sequence {A_n} is independent, Σ_{n=1}^∞ P(A_n) = ∞ implies P(lim sup_{n→∞} A_n) = 1.
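For instance, if P(A_n) = n^{-2}, then Σ_{n=1}^∞ P(A_n) = π²/6 < ∞, so with probability one only finitely many of the events A_n occur; no independence assumption is needed for this half of the lemma.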
Let {x_n} be a sequence of real-valued mutually independent random variables defined on (Ω, F, P). Without loss of generality, we assume that E(x_n) = 0, σ_n² = E(x_n²) for all n ≥ 1. Let us set

s_n = Σ_{i=1}^n x_i.

The law of convergence of s_n is said to be a weak law of convergence if the convergence is in probability; the law of convergence of s_n is said to be a strong law of convergence if the convergence is a.s.
Theorem 1.3.5 (Weak law of large numbers). Let {x_n} be a sequence of real-valued mutually independent random variables with E(x_i) = 0 and finite variances σ_i². Then

(1/n) s_n → 0    in the mean square as n → ∞

if and only if

(1/n²) Σ_{i=1}^n σ_i² → 0    as n → ∞.

Theorem 1.3.6 (Strong law of large numbers). Let {x_n} be a sequence of real-valued mutually independent random variables with E(x_i) = 0, σ_i² < ∞. Then

Σ_{i=1}^∞ σ_i²/i² < ∞

implies (1/n) s_n → 0 a.s. as n → ∞.
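For instance, if the x_i are independent with E(x_i) = 0 and uniformly bounded variances σ_i² ≤ c, then (1/n²) Σ_{i=1}^n σ_i² ≤ c/n → 0 and Σ_{i=1}^∞ σ_i²/i² ≤ c Σ_{i=1}^∞ i^{-2} < ∞, so both the weak and the strong law apply and the sample means (1/n) s_n converge to 0 in mean square and almost surely.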
Theorem 1.3.7 (Central limit theorem). Let {x_n} be a sequence of real-valued independent random variables with a common distribution function, having finite mean μ and variance σ². Then (s_n − nμ)/(σ√n) is asymptotically Gaussian with expectation 0 and variance 1, that is,

P{(s_n − nμ)/(σ√n) ∈ A} → (2π)^{-1/2} ∫_A exp(−s²/2) ds    as n → ∞,

uniformly in A.

Let us recall that B^n is the σ-algebra of Borel sets in the n-dimensional Euclidean space R^n. Let P₁ and P₂ be probability measures defined on (R^n, B^n). For every closed subset F of R^n, let ε₁₂ be the infimum of ε > 0 such that

P₁(F) < P₂(O_ε(F)) + ε,    (1.3.1)

where O_ε(F) is the ε-neighborhood of F. By interchanging the roles of P₁ and P₂ in (1.3.1), ε₂₁ can be defined analogously.
Definition 1.3.5. The Prohorov distance D(P₁, P₂) is defined by

D(P₁, P₂) = max(ε₁₂, ε₂₁).

We note that the set of probability measures together with the distance D in Definition 1.3.5 is a complete separable metric space and D-convergence is equivalent to weak convergence of measures. For details, see [75]. Let x, y ∈ R[Ω, R^n], and let P_x, P_y be the corresponding probability measures defined on (R^n, B^n). The Prohorov distance between their probability laws, that is, P_x, P_y, is denoted by D(x, y) = D(P_x, P_y). We notice that D(x, y) = 0 means that x and y have the same probability measure or law. We remark that P{ω : lim_{n→∞} ||x_n(ω) − x(ω)|| = 0} = 1 implies that x_n is a D-Cauchy sequence. The converse is also true in the following sense.

Theorem 1.3.8 (Skorokhod's theorem). Let x_n ∈ R[Ω, R^n] be a D-Cauchy sequence of random variables. Then one can construct a sequence of random variables y_n ∈ R[Ω, R^n] and a random variable y ∈ R[Ω, R^n] such that

D(y_n, x_n) = 0    and    P{ω : lim_{n→∞} ||y_n(ω) − y(ω)|| = 0} = 1.

Definition 1.3.6. A collection S = {x_a : a ∈ A} ⊂ R[Ω, R^n] is said to be totally D-bounded if every infinite sequence {x_{a_m}} ⊂ S has a D-Cauchy subsequence, where A is an index set.

The following theorem gives a necessary and sufficient condition for a set in R[Ω, R^n] to be totally D-bounded.

Theorem 1.3.9 (Prohorov's theorem). Let S ⊂ R[Ω, R^n]. Then S is totally D-bounded in R[Ω, R^n] iff for every ε > 0, there exists a compact subset K_ε of R^n such that

P(x ∈ K_ε) > 1 − ε

for every x ∈ S.

We remark that Theorems 1.3.8 and 1.3.9 are valid for a collection of random variables that are defined on a complete probability space with values in a complete separable metric space (M, d) with a distance d. In the light of this, we present a result which shows that the direct product of a finite number of totally D-bounded sets in a complete separable metric space is totally D-bounded in a direct product metric space. Let (M_i, d_i), i = 1, 2, . . . , n, be complete separable metric spaces. Then the direct product (M, d) = (M₁, d₁) × (M₂, d₂) × . . . × (M_n, d_n) is also a complete separable metric space with d = Σ_{i=1}^n d_i and M = M₁ × M₂ × . . . × M_n. Let S = {x_a = (x_{a1}, x_{a2}, . . . , x_{an}) ∈ R[Ω, (M, d)] : a ∈ A} be a subset of R[Ω, (M, d)]. Using Prohorov's
theorem and recalling the fact that the direct product of compact sets is compact and that the projection of a compact set is compact, we can immediately see the validity of the following result.
Lemma 1.3.1. The set S ⊂ R[Ω, (M, d)] is totally D-bounded if and only if S_i = {x_{ai} ∈ R[Ω, (M_i, d_i)] : a ∈ A} is totally D-bounded for every i = 1, 2, . . . , n.
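As a simple illustration of Definition 1.3.5, let P₁ and P₂ be the point masses at x and y in R^n. Taking F = {x} in (1.3.1) shows that the condition fails for every ε ≤ min(||x − y||, 1) and holds for every larger ε, so

D(P₁, P₂) = min(||x − y||, 1);

in particular, point masses at nearby points are close in the Prohorov distance even though the measures are mutually singular.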
1.4. CONDITIONAL PROBABILITIES AND EXPECTATIONS
Let (Ω, F, P) denote a probability space and let F₁ ⊂ F denote a sub-σ-algebra of F. Let x ∈ L¹[(Ω, F, P), R^n]. The probability space (Ω, F₁, P) is a coarsening of (Ω, F, P), and x is, in general, no longer F₁-measurable.
Definition 1.4.1. The conditional expectation of x relative to F₁ is an F₁-measurable random variable, denoted by E(x|F₁), and is defined by

∫_A E(x|F₁) P(dω) = ∫_A x P(dω),    A ∈ F₁.

According to the Radon-Nikodym theorem, E(x|F₁) exists and is almost surely unique.
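For example, if F₁ is generated by a finite partition A₁, . . . , A_m of Ω with P(A_k) > 0, then the defining relation above gives

E(x|F₁)(ω) = E(x I_{A_k})/P(A_k)    for ω ∈ A_k,

that is, the conditional expectation is constant on each partition set and equals the average of x over that set.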
Theorem 1.4.1 (Radon-Nikodym). Let (Ω, F, P) be a probability space and let μ be an absolutely continuous measure on (Ω, F). Then there exists a finite-valued measurable function x on Ω such that

μ(A) = ∫_A x P(dω),    A ∈ F.

Furthermore, x is unique up to sets of P-measure zero.

Conditional expectations possess the following important properties with probability one:
(i) For F₁ = {∅, Ω}, E(x|F₁) = E(x).
(ii) If x ≥ 0, then E(x|F₁) ≥ 0.
(iii) If x is F₁-measurable, then E(x|F₁) = x.
(iv) If x = a, then E(x|F₁) = a.
(v) For F₁ ⊂ F, if E[x] exists, then E[E[x|F₁]] = E[x].
(vi) If c₁, c₂, . . . , c_m are constants, then E(Σ_{i=1}^m c_i x_i | F₁) = Σ_{i=1}^m c_i E(x_i|F₁).
(vii) If x ≤ y, then E(x|F₁) ≤ E(y|F₁).
(viii) If x, F₁ are independent, then E(x|F₁) = E(x).
(ix) For F₁ ⊂ F₂ ⊂ F, then E[E[x|F₂]|F₁] = E[x|F₁].
(viii) If x , Fl are independent, then E(xl=%)
(ix)
= E(x),
For Fl c Fz c .F then ,
Furthermore, conditional expectation also has the convergence properties of expectation; the results corresponding to Theorems 1.3.1-1.3.3 are also valid. Let (Ω, F, P) be a probability space and A ∈ F, F₁ ⊂ F.
p(A]91)
where I , is defined as IA(O)
=
I 0
if ~ E A , if o $ A .
Let x be an n-dimensional random variable on (Q, 9, P). Consider the conditional probability P ( x E B ] F l )= P ( { w : x ( o ) E B ) ] g l ) ,
where B E P. There exists a function p ( o , B ) defined on Q x 8”with the following properties: For fixed o E Q, the function p(w, .) is a probability on 9”; for fixed B, the function p ( . , B ) is a version of P ( x E B ] F 1 ) ,that is, p ( . ,B ) is .F,-measurable and P ( A n { o : x ( o )E B } ) = J A p ( w , B ) d P ( w ) ,
A
E
Fl.
Such a function p is called the conditional probability distribution of x for given F1. For f ( x ) E L’,
If Fl is the a-algebra generated by the random variable y , then E(xl@J= E(xly),
and where P ( x E BIy
= a) = p ( a , B ) .
1.
12
Preliminary Analysis
If p(a, .) has density h(x, a) in R", this density is called the conditional density of x under the condition y = a:
1.5.
RANDOM PROCESSES
In this section, we shall present a very brief survey of fundamental notions and results in the general theory of stochastic processes. P ) denote a probLet I denote an arbitrary index set, and let R = (R, 9, denote the state space. ability space and (R", 9") Definition 1.5.1. A family { x ( t ) ,t E I ] of R"-valued random variables defined on a probability space (Q, 9, P ) is called a stochastic process or random process or random function with parameter set I and state space (R",9" The ). class of random functions defined on I into R[R, R"] is denoted by R[I,R[R, R"l1.
If I is a finite or countable set, then we are dealing with finitely many random variables or a random sequence. In what follows, I is always one of the intervals of the type [ t o , t , ) , [ t o , t l ] , (to,tl), and (t0,tI], where -a I to I t , I 00. If { x ( t ) : tE I} is a stochastic process, then for each t E I , x(t, .) is an R"-valued random variable which is called a section at t of x(t), whereas for each o E R, x ( .,o)is an R"-valued function defined on I . Hence x ( ., o)is an element of the product space (R")'.It is called a section at o or realization or trajectory or path of the stochastic process. The finite-dimensional distribution of a stochastic process { x ( t ) : tE I } is given by P { x ( t ) I x } = F,(x),
that is, P ( x ( t l ) I x l , . . . , x ( t , ) I x,} = F,,,,,, . . . , , , ( x 1 , x 2 , .. . , x J . where t, ti E I , x , xi E R" for i = 1,2,. . . ,n. Note that this distribution function satisfies the following two conditions: (i) Condition of symmetry. If i,, iz, . . . ,in is a permutation of numbers { 1,2,. . . ,n), then for arbitrary instances and n 2 1, 't,1,f,2,..
. , t l n ( x i l ~ x .i 2. ,xi,,) ~ . = F t l , t 2 ,.. , , t n ( x 1 > x 2., .,xn). .
(ii) Condition of compatibility. For m < n and arbitrary t,+ 'tl,f2
, . . . ,t , , , , t m + l , . . . ,~ , ( x ~ , x z , . . . , ~ ~ , c2 oa) ,...
-
7xrn).
- ' t l , r 2 . . . . , t m ( ~ 1 , ~ z t . . -
,,.. . ,t , E I ,
1.5. Random Processes
13
In practice, one knows a family ofdistributions P t , , . . ,JB1, B , , . . . ,B,) or their distribution functions F,,,,,, Jxl, x2,. . . ,x,) which satisfies the symmetry and compatibility conditions. ,
,, ,
Theorem 1.5.1 (Kolmogorov's fundamental theorem). For every family of distribution functions that satisfies the symmetry and compatibility conP ) and a stochastic process ditions, there exists a probability space (R, 9, (x(t):t E I ) having the given finite-dimensional distributions. In fact, we take f2 = (R")', x(t,o) = value of o at
(1.5.1)
t.
Note that {x(t,o),o E (R")',t E I ) as defined in (1.5.1)is called the coordinate function, x(t, o)is the t-th coordinate of o.Sets of the form {ci.,:x(t,,o) E B1,x ( t 2 , w ) E B 2 , . . . ,x(t,,o) E B,), where B , , B 2 ,. . . ,B, are n-dimensional Bore1 sets, are called cylinder sets. Let 9x be the minimal o-algebra containing the cylinder sets. The probability determined by the above cylinder sets can be uniquely extended on Fx.
Definition 1.5.2. Two processes with the same state space and index set are said to be equivalent iff their finite dimensional distributions are identical. In the following, we present a few well-known examples of random functions.
Definition 1.5.3. A stochastic process x E R[I,R[Q,R"]] is said to be a process with independent increments if for all to < t , < . . . < tk in I , the quantities x(to),x(tl) - x(t2),. . . , X(tk) - x(tk- 1) are mutually independent. Definition 1.5.4. A process x(t) with independent increments with values in R is said to be a Poisson process P(A(t))if the increments x(t) - x(s) are distributed according to the Poisson law with parameter A(t) - A(s), where A(t) is a nondecreasing real-valued function defined on I . In the case A ( t ) = At, A > 0, the process x(t) is said to be a homogeneous Poisson process P(A). Thus for the process P(A(t)),we have P{x(t) - x(s) = k )
1 k!
= - [A(t) - A(s)Ikexp[A(s)- A(t)],
where N is a set of natural numbers, E(x(t))
= A(t) = var(x(t)),
and Qt(0)= exp{(A(t) - A(s))(eie- 1)).
k
E
N,
1.
14
Preliminary Analysis
Definition 1.5.5. A process x E R[Z,R[Q,R"]] is said to be Gaussian if it has Gaussian (normal) distribution P{x(t)< a } =
s"'
1
J(2n)" det( V ( t ) )
...J :
m
-
x ( V ( t ) ) - ' ( u - m(t))]dul
exp[-+(u - r n ( t ) ) ~
. . du,, '
that is, x ( t ) has a density function f such that
where rn(t) and V ( t ) are the mean n-vector and variance n x n symmetric matrix functions on I , respectively. In the case m(t) = rnt and V ( t )= Dt, where D is a symmetric nonnegative definite matrix and m is a constant n-vector, the process is called a homogenous Gaussian process. Thus for the Gaussian process, the characteristic function is given by
Qt(a)= exp[iaTm(t) - $aTV(t)a], where a E R", t E 1. Definition 1.5.6. A Gaussian process with independent increments is called a Wiener process. In particular, a Gaussian process with independent increments is called a process of Brownian motion or normalized Wiener process if E[x(t)
-
x(s)]= 0,
E[(x(t) - x(s))T(x(t)- x(s))]
= Z l t - sI
for t, s E I ,
where Z is an n x n identity matrix. Thus for the Wiener process, we have P(x(t) < x) =
a s":-. . .
~
1
JTmexp[ -&uTu]dul
du2 . . . dun
and the characteristic function
Definition 1.5.7. A random process x E R[Z,R[Q,R"]]is said to be a Markov process if for any increasing collection t l < t 2 < . . . < t,, in I and B E F", P(x(tfl) E Blx(t,), -Y(t*),.
' ' 3
x(t,-
1))
=
P(x(t,) E Blx(tfl-1)). (1.5.2)
1.5.
Random Processes
15
We give various equivalent formulations of the definition of the Markov process.
Theorem 1.5.2. Each of the following conditions is equivalent to relation (15 2 ) :
(i) For
toI sI t, t o ,s, t E
I, and B E 9",
P ( x ( t )E B I 9 J
= P ( x ( t )E B ~ X ( S ) ) ,
where Fs= .F;[to,s] is the smallest sub-a-algebra of 9 generated by all random variables x(u) for t o Iu Is. (ii) For to I s 5 t I a, t o ,s, t E I , and y Ta-measurable and integrable, E(YI9J
(3) For
to I sI
t I a and A
E
= E(ylx(s)).
Fa,
P(AIFS)= P(AIX(S)). (iv) For
to
I t,
I t I t, I a , A , E F t , A 2E Fa,
P(A1 n A2199
= P(A,Ix(t))P(A,lx(t)).
As indicated in Section 1.4, for the conditional probability P ( x ( t )E BIx(s)), there exists a conditional distribution P(x(t) E Blx(s)) = P(s, x(s),t, B ) that is a function of four arguments s, t E I with s I t, x E R", and B E 9". This function has the following properties: t and B (a) For fixed s I
E
F", we have
P(s, x(s),t, B) = P(x(t)E Blx(s)) w.p. 1.
(b) P(s, x,t, .) is a probability on 9" for fixed s I t and x E R". (c) P(s, ., t, B) is 9"-measurable for fixed s 5 t and B E 9". (d) For t o 5 s I u 5 t I a and B E 9" and for all x E R", we have the
sRn
Chapman-Kolmogorov equation
W,x,t , B ) =
P(u, Y , t , B)P(s,x , u, dy) W.P.1.
(1 5 3 )
Definition 1.5.8. A function P(s, x , t, B) with the above properties (b), (c), and (d) is called a transition probability or transition function. If x(t) is a Markov process and P(s, x , t, B) is a transition probability of the Markov process such that property (a) is satisfied, then P(s, x , t, B) is called a transition probability of the Markov process x(t). We shall also denote P(s,x,t, B ) by P ( x ( t )E BJx(s)= x). If the probability P(s, x , t, .) has a density, that is, if for all t, s E I, s < t, all x E R" and all B E P', we have P(S, x,t, B ) = JB P b , x, 4 Y ) &
16
1.
Preliminary Analysis
Definition 1.5.9. A Markov process x ( t ) is said to be homogeneous if its transition probability P(s,x , t, B ) is stationary, that is, if the function P(s
+ u, x , t + u, B ) = P(s, x, t, B).
Let us consider a Markov process x(t), t E I defined on (R, B P ) with state space ( R " , F ) and transition function P . A random time z is an Fmeasurable mapping of R into [O,a] such that P(z < a) > 0. A Markov time is a random time z such that for all t E I , the set {co:z(co) It } E Ft, the a-algebra generated by x(s), to I sI t . Let R' = {co:z(o) I a}, F'= (Rr n A : A E P), P' = P/P(R'). Denote PIthe sub-o-algebra of 8' such that A E F implies A n { o : z < t ] E Fffor all t E I. Definition 1.5.10. A Markov process x ( t ) ,t E I , is said to be a jump Markov process if its transition function satisfies the relations
uniformly in (s, x, B), where t E I,x E R", and B E F. (b) For fixed (x,B), the function q ( s , x , B ) is continuous with respect to s E I and uniformly continuous with respect to (x, B). Remark 1.5.1. It follows from (a) that there exists a K > 0 such that Iq(s,x,B)]IK,
SEI,
XER", B E F .
Example 1.5.1. A Weiner process is a homogeneous n-dimensional Markov process o(t)defined on [0, 00) with stationary transition probability
P(t,x;)
=
J2.t
[
1x~x], exp -2t
t>O
\Probability measure centered at x,
t
= 0.
An important class of random processes is the class of stationary processes. These are processes whose probabilistic characteristics do not change with displacement of time. More precisely, we define as follows: Definition 1.5.11. A random process x E R[Z,R[R,R"]]is said to be strictly stationary if for arbitrary m, h, and t , , t , , . . . ,t, such that t j + h E I , i = 1,2, . . . ,m,the joint distribution function of random vectors x ( t , + h), x ( t 2 h), x(t h),. . . ,x ( t , + h) is independent of h, i.e.,
+
+
Ftl+h,tz+h,.
..,t,+h(X1,X2?.
where xiE R", i = 1,21. . . ,m.
. . ? x m ) = Ft,,t,,
.. .,t,(X1,X2>.
. . ?Xrn),
1.5.
Random Processes
17
Definition 1.5.12. A random process x E R [ I , R [ R , R " ] ] is said to be a process with stationary increment if the joint distribution of the differences x(tz h) - x(t1 + h), x ( t , h) - x ( t 2 h), . . . , x ( t , h) - x ( t , - , h) is independent of h for arbitrary m, h and for t l , t 2 , . . . ,t, such that ti h E I , i = 4 2 , . . . ,m.
+
+
+
+
+ +
Remark 1.5.2. The definition of a stationary process is equivalent to the following: For any bounded continuous function f :R" + R, the mean E [ f ( x ( t l h), x ( t , h), . . . ,x ( t , h ) ) ] is independent of h for arbitrary m,h, and for t , , t z , . . . , t , such that ti + h ~ forl i = 1,2,. . . , m .
+
+
+
Remark 1.5.3. For every continuous functionf: R" process x(t), the random process y(h) = f ( x ( t 1 + h)7 x(tZ
+ h),
* *
+
R and a stationary
., X(tm + h ) )
is also stationary. Definition 1.5.13. A process x related increments if
E
R[Z,R[SZ,R"]] is said to have uncor-
E[IIX(t) - x(4ll'I < 00,
4 s E I,
(15 4 )
and if whenever s1 I t l < s2 I t 2 , the increments x ( t l ) - x(sl) and x ( t 2 ) - x ( s 2 )are uncorrelated with each other, i.e., E((X(t2)- x ( s z ) ) T ( x ( t l) x(s,),> = E ( X ( t 2 ) - x ( s z ) ) T E ( x ( t ,) x(s1)), (1.5.5)
where x is a column vector. Definition 1.5.14. A process x E R[I,R[Q, R"]]is said to have orthogonal increments if (1.5.4)holds and (1.5.5) is replaced by E((X(t2)- x(s2f)T(x(fl)- x(s1f)I = 0.
Remark 1.5.4. If the process x ( t ) has uncorrelated increments, then the process y( t ) defined by
At) = x(t)- W
t ) )
has uncorrelated and orthogonal increments. Remark 1.5.5. If a process has independent increments and satisfies (1.5.5), then the process has uncorrelated increments. Remark 1.5.6. Let x E R[I,R[R, R"]] be a process with orthogonal increments. Then F(t) can be defined to satisfy E[IIX(t) - x(s)II'] = F ( t ) - F(s),
s < t.
(1.5.6)
18
1.
Preliminary Analysis
Note that F(t) is monotone nondecreasing and is determined by (1.5.6) up to an additive constant. For example, one can define F(t) =
1.6.
4 0 - x(to))121, -E[IJx(t) - X(to))/21,
t 2 to, t < to.
SEPARABILITY OF RANDOM PROCESSES
It is obvious from the properties of random functions that we cannot determine the probability of events defined by means of an uncountable set of index values. For example, the set {o:x(t,o) 2 0 for all t E I } = n t p l { o : x ( t , o )2 0) may not be an event since it involves an uncountable intersection of events. Similarly, y = suptel x ( t , o ) may not be a random variable because sets of the form y(o) I y}
=
n
re:
{ o : x ( t , o)I y f
may not be events. The satisfactory removal of this difficulty is due to J. L. Doob who introduced the notion of separability. Definition 1.6.1. A process x E R[I,R[Q,R"]]is said to be separable if there exist a countable set S c I and a fixed null event A such that for any closed set F c R" and any open interval 0, the two sets { o : x ( t , o ) E F, t
E
I n O),
{ w : x ( t , o )E F, t E I n S )
differ by a subset of A. The countable set S is called the separant or separating set. The usefulness of the separability concept is demonstrated by the following. Theorem 1.6.1. The following properties are equivalent to the separability of process x E R(Z,R [ Q , R"]] with the separating set S: for every open interval 0 c I , == S U P r E O n I x ( t ) ; (1) inftssnox(t) = inft..n,x(t),supresnox(t) (ill inheSnOx(t) 5 inftson,x(t), S U P t e O n I x ( t ) L SUPtsSnOx(t); (iii) infteSnO x(t) I x(t) I S U P ~ ~ ~ ~ ~ X ( ~ ) ; (iv) lim inft,-to x(t,) = lim inftftox(t), lim S U ~ ~ x(t,) , , ~ = ~ lim S U ~ x~( t )-; (v) lim inftnjrox(t,) I x ( t ) _< hm x(t,); (vi) lim infrn,tox(tJ I lim inft,to x(t), lim S U ~ ~ ,x(t,) , ~ ~5 lim suptjrox ( t ) ; where t, E S n 0, t o E 0 n I .
The next result shows that the separability requirement is not a serious restriction.
~
~
1.7.
19
Deterministic Comparison Theorems
Theorem 1.6.2. For every stochastic process x E R [ I ,R [ Q R"]] there exists a process x E R[Z,R[Q R"]] defined on the same probability space with values in R" such that (i) x(t) is separable, (ii) P { x ( t )= x ( t ) } = 1 for each t E I . Theorem 1.6.3. Let x ( t ) be a separable jump Markov process. Then P(X(U) = x
1.7.
for all u E [s, t]lx(s) = x) = exp
DETERMINISTIC COMPARISON THEOREMS
In our later discussion, we shall need to employ deterministic comparison results several times. We shall give below some important results in this direction. Let E be an open (t,u)-set in R"+'. We shall mean by C[E,R"] the class of continuous functions from E into R". We shall be using vectorial inequalities freely with the understanding that the same inequalities hold between their corresponding components. We shall consider the deterministic differential system with an initial condition, written in the vectorial form u' = g(t,4, 4 t o ) = uo, (1.7.1) where g E C [ E ,R"]. We require that the function g(t,u) satisfy certain monotonic properties. Definition 1.7.1. The function g(t, u) is said to possess a quasi-monotone nondecreasing property if for u, v E R" such that u 5 u and ui = ui, then gi(t, u) s gi(t,v) for any i = 1,2, . . . ,n and fixed t. We now quote the following deterministic comparison theorem [59] : Theorem 1.7.1. Assume that (i) g E C [ E ,R"] and that g(t, u) is quasi-monotone nondecreasing in u for each t, where E is an open ( t ,u)-set in R"" ; (ii)' [ t o ,to + a) is the largest interval of existence of the maximal solution r(t) = r(t,to,uo)of(1.7.1); (iii) m E C [ [ t o to , a),R"],(t,m(t))E E, t E [ t o ,to a), and for a fixed Dini derivative (D), the inequality
+
+
M t ) 5 d t , 4)
holds for t E [ t o ,to + a). Then
(1.7.2)
20
1.
implies m(t) I u(t),
t E [ t o , to
Preliminary Analysis
+ a).
(1.7.4)
Corollary 1.7.1. If in Theorem 1.7.1 inequalities (1.7.2) and (1.7.3) are reversed, then conclusion (1.7.4)is to be replaced by
W )2 P(t),
+ 4,
t E [to, to
where p ( t ) = p ( t , t o ,uo) is the minimal solution of (1.7.1). We shall consider an important result concerning the systems of integral inequalities that are reducible to differential inequalities.
Theorem 1.7.2. Let assumptions (i) and (ii) of Theorem 1.7.1 hold, except the quasi-monotone nondecreasing property of g(t,u) in u for fixed t is replaced by the nondecreasing property of g(t,u) in u for fixed t . Let m E C [ [ t o , t o+ a),R"], (t,m(t))E E, t E [ t o ,to a), m(to)s u o , and
+
m(t) 5 moo) + J$m(s))ds,
t
E [to,
to
+ a).
Then
Notes
The preliminary material concerning various definitions, notions, and results listed in Sections 1.1 to 1.6 is based on the well-known books and papers, namely, Doob [14], Gikhman and Skorokhod [19], Loève [61], Neveu [74], Wong [87], Arnold [1], Iosifescu and Tăutu [27], Prohorov [75], and Skorokhod [80]. The results of Section 1.7 are adapted from the book by Lakshmikantham and Leela [59].
CHAPTER
2
SAMPLE CALCULUS APPROACH
2.0.
INTRODUCTION
The dynamics of several biological, physical, and social systems of n interacting species is described by an initial-value problem of the type

x′ = f(t, x, ω),    x(t₀) = x₀(ω),    (2.0.1)

where x is an n-vector, the prime represents a probabilistic derivative, and
f is a random rate process. Randomness arises in a system due to different
types of unforeseen exogenous factors. This chapter emphasizes the study of systems (2.0.1) through sample calculus. In Section 2.1 we present essentials of sample calculus or a.s. sample calculus. We begin by defining stochastic and sample continuity of random processes and present some well-known results about them. A result that gives a sufficient condition for the Dboundedness of a subset of a space of sample continuous functions is also given. Next we define the sample differentiability of random functions. Finally, we discuss the measurability of random processes and then outline sample integrals of Riemann- and Lebesgue-type and state some useful results. We prove in Section 2.2 the basic sample existence theorem of Caratheodory-type for (2.0.1) and then study the continuation of sample solutions. Section 2.3 is devoted to the fundamental results concerning systems of random differential inequalities. Sufficient conditions are given for the existence of sample maximal and minimal solutions of (2.0.1) in Section 2.4. Section 2.5 deals with the basic random comparison theorems that are useful in estimating the sample solutions. Uniqueness and continuous dependence on parameters of sample solutions of (2.0.1) are discussed in Section 2.6. Moreover, the sample continuity and differentiability of solutions with respect to the initial state are also considered. The method of variation of parameters for sample solutions of linear and nonlinear systems of the type (2.0.1) forms the content of Section 2.7. These formulas
22
2.
S a m p l e Calculus Approach
give an alternate technique for studying the qualitative behavior of sample solutions of stochastic systems under constantly acting random disturbances. By employing the concept of random vector Lyapunov functions and the theory of random differential inequalities, very general comparison results are developed in Section 2.8. In Section 2.9 we discuss the scope of the general comparison theorems and illustrate them by means of several results in terms of logarithmic norm of a random matrix processes. The results developed in this section are computationally attractive. Depending on the mode of convergence in the probabilistic analysis, several stability notions relative to the given solution of (2.0.1) are formulated in Section 2.10. Finally, in Sections 2.11, 2.12, and 2.13 sufficient conditions for stability in probability, stability with probability one, and stability in the pth mean of the trivial solution of (2.0.1) are presented in a systematic and unified way. Some of the stability conditions are expressed in the framework of laws of large numbers and the random rate functions of the system.
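As a simple illustration of an initial-value problem of the form (2.0.1) with a random coefficient, consider the scalar equation

x′(t, ω) = a(ω) x(t, ω),    x(t₀, ω) = x₀(ω),

where a is a given random variable. For each fixed ω its sample solution is x(t, ω) = x₀(ω) exp[a(ω)(t − t₀)], so every statistic of the solution process (mean, variance, and so on) is inherited from the joint distribution of a and x₀; this is the sample-calculus point of view developed in this chapter.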
2.1.
SAMPLE CALCULUS
In this section we shall discuss sample qalculus or a.s. sample calculus for random functions. Because of Theorem 1.6.2, we can suppose without loss of generality that all random processes in our considerations are separable processes. Let 52 = (52, 8, P ) be a complete probability space, and let R[[a,b], R[Q R"]] stand for a collection of all R"-valued separable random functions a, b E R. or processes defined on [a, b] with a state space (R",Fn), Definition 2.1.1. A random process x E R[[a,b],R[Q,R"]] is said to be continuous in probability (or stochastically continuous) at t E (a, b) if for any y > 0,
+
P{w:l(x(t h , o ) - x(t,o)ll 2 y }
-+
0
as h -+ 0.
If a process is continuous in probability at every t E (a,b) and has one-sided continuity in probability at the end points a and b, then it is said to be continuous in probability on [a, b]. We state a result that shows that every countable dense set in [a,b] is a separating set. (i) Let x E R [ [ a ,b], R[R,R"]]. If x is continuous in probability on [a, b], then every countable dense set in [a, b] is a separating set. In the following, we list some results concerning stochastic continuity of well-known random functions.
2.1.
Sample Calculus
23
(ii) A random function x E R[[a,b], R[R, R”]]with independent increments is stochastically continuous at to E (a,b) if and only if its characteristic function Q, is continuous with respect to t at to. (iii) A Gaussian process x E R [ [a, b], R[Q, R”]]is stochastically continuous at t E (a,b) if and only if the mean m(t) and the variance V ( t ) are continuous functions o f t E (a, b). (iv) Let P(ll(t)) be a separable and stochastically continuous Poisson process with lim,+all(t)= CQ, then all its sample paths are monotone nondecreasing step functions with probability one and with jumps equal to 1. (v) A jump Markov process is stochastically continuous. Definition 2.1.2.
A random process x E R[[a,b], R[R, R”]]is said to be
(i) almost-surely continuous ( a s . continuous) at t E (a,b) if
(ii) sample continuous (almost-surely sample continuous) in t E (a,b) if
'{fib){
o:lim[)(x(t+ h , o ) - x(t,o)ll]# hAO
o
If a process is as. (as. sample) continuous at every tE(a,b) and has one-sided a.s. (as. sample) continuity at the end points a and b, then it is said to be as. (as. sample) continuous on [a, b]. We note that as. continuity of a process on [a,b] is not equivalent to sample continuity. This is simply because
{
1
Q(t)= o:lim[l)x(t + h, o)- x(t, w)ll] z 0 ,
utE[a,bl
h+O
being a null event for every t E [a, b], does not imply that the uncountable union Q(t) is a null event. However, if a process is sample continuous on [a, b], then it is necessarily as. continuous on [a, b]. Definition 2.1.3.
A random process x E R [ [ a ,b],R[R, R”]]is said to be
(i) bounded in probability (or stochastically bounded) on [a, b] if for every 9 > 0, there exists a positive number K such that P{o:llx(t,w)l)> K ) < y
for all t E [a,b];
(ii) sample bounded (or as. sample bounded) on [a,b] if there exists a Positive number K such that
P{o:I(x(t,o)(l> K for all t E [a, 61) = 0.
2.
24
S a m p l e C a l c u l u s Approach
Random functions that are sample or stochastic continuous enjoy all elementary properties of deterministic continuous functions. Without proof, we state a few results that ensure the sample continuity of a random process. Let us denote a(&,S) = ess inf ocfl
[
sup
assis sample equicontinuous and equibounded, and hence by Ascoli-Arzela's theorem, {xm(t))is a compact subset of C [ [ t o ,t o b], R"]. Recall that x,(t) E B(z,p) for all t E [ t o , t o b] w.p. 1. Now by application of Prohorov's theorem, {x,} is totally D-bounded in R [ Q C [ [ t o ,t o b], R"]]. We also note that {xm(to)= xo>is compact. Hence (x, xo) is also compact because the direct product of a finite number of compact sets is compact if and only if each set is compact. Thus by the application of Prohorov's theorem, there exists a D-Cauchy subsequence
+
+
+ +
+
2.2.
Existence and Continuation
35
{(xm,,xo)>of { ( x m y, o ) } . Let us denote this sequence xmr by {xr}.By Skorokhod's theorem, we can construct a sequence {( Y,, Yo)} and x such that
r,,x E R[Q, C"t0, D((Y,, Yo,), (x,,x0))
to
+ b ] ,R " ] ] , for r
= 0,
=
(2.2.12)
1,2, . . .
and
P((Y,> Yo,)
+
(~2x0)= )
(2.2.13)
1.
It is obvious that x ( t ) is a sample continuous and product-measurable random variable on [ t o ,t o b]. Notice that D ( ( Y , , Yo,), (x,,xo)) = 0 means that (Y,, Yor)and (x,,xo) have the same distribution. Hence Y, E B(z,p ) w.p. 1, so also x E B(z, p ) w.p. 1 in view of (2.2.13). Next we shall show that x ( t ) is a solution process of (2.2.1). Since from (2.2.3) we have
+
Ilf(t,Y , ( ~ > 4 > 4 1I 1 w,4
on t E [ t o , t o + b]
andf is sample continuous in x w.p. 1 for fixed t E
[to,to
f ( t , Y,(t,o),o) + f ( t , x ( t , ~ ) , u ) w.p. 1 as
for every fixed t E [ t o , to theorem, we get
Jl f ( s , U s ,
+ b]. By the
w),01 ds
as r + 00 for any t E [ t o ,t o continuity of functions yield F(t,W)
+
r
+
co
Lebesgue dominated convergence
Jl A s ,
x(s,
01, 4 ds
W.P. 1
(2.2.14)
+ b]. Relations (2.2.5) and (2.2.12) and sample
+ s' f ( s , Y,(s, 0), 0)ds J'
= x~(o)
+ b], it follows that
-
f0
f -b/r
f ( s , Y,(s, 0),
W ) ds.
(2.2.15)
By (2.2.3), (2.2.4), and the uniform sample continuity and boundedness of M(t,w), it follows that the sample integral
tends to zero as r r -+ co,that
-+
co.This, together with (2.2.12)-(2.2.15),shows, by letting
x(t, 4 = x o ( 4
+
f(s, x(s,O ) , 4 ds.
This completes the proof, by virtue of Lemma 2.2.1. The following corollary of Theorem 2.2.1 is useful in applications. Corollary 2.2.1. Let D be an open set in R" and E = J x D.Let E , be a compact subset of E . Suppose that f E M [ E , R [ Q , R " ] ]and f ( t , x , ~ is)
2. Sample Calculus Approach
36
sample continuous in x for each t. Let K IIf(t,x,w)ll IK(t,w)
E ZB[J, R[R,
R +]I and
for (4x1 E Eo W.P. 1.
Then there exists a positive number b such that if (t,,x,) E E,, (2.2.1) has a solution x(t) on [ t o , to + b]. Moreover, x(t) E E , w.p. 1 for all t E [ t o ,to + b]. Definition 2.2.2. A solution process of (2.2.1) defined on an interval J , is said to be continuable if there exists a solution process y ( t ) of (2.2.1) defined on an interval J 2 2 J , such that y ( t ) = x ( t ) w.p. 1 on J , .
The next theorem deals with the problem of continuation or extention of solutions up to the boundary of E. Theorem 2.2.2. Assume that f E M [ E , R[R, R "] ] and f ( t ,x, o)is sample continuous in x for each t. Furthermore, let K E I B [ J , R[R, R + ] ]and
Ilf(t,x,4ll 5 K ( 4 4
for ( t , x ) E E ,
W.P. 1,
where E , is any compact subset of E . Let x ( t ) be a solution process of (2.2.1) on some interval [to,bo]with (bo,limt4~o x(t)) E Eo w.p. 1. Then the solution x(t) can be continued to the right of bo. Moreover, it can be continued to the boundary of E. Proof. Since (b,, limt460x(t)) E E,, by Corollary 2.2.1, there exists a solution 2(t) through (bo,limt+,x(t)) on [b,, b, + b] for some b > 0. If y(t) is defined by Y(t)=
i
x(t)
2(t)
for t E [to,bol, for t E [b,, bo
+ b],
then y ( t ) is a solution on [ t o ,bo + b], and y(t,) = x,. The proof of the last part of the statement is similar to the proof of Theorem 1.1.3in Lakshmikantham and Leela [59]. Remark 2.2.1. In the foregoing discussion we have assumed that K E IB[J,R[R, R + ] ] .Instead of this, we can assume that K satisfies the conditions of De La Vallee-Poussin's theorem.
We conclude this section by giving an example which shows that the type of general random differential equations considered include other important classes of equations. Example 2.2.1. Let v] E M [ J , R[R, R"]]. Let F : J x B ( z , p l ) x B ( y ,p z ) -+ R". Suppose that f(t, x, y ) is continuous in (x,y) for fixed t E J and measurable in t for fixed (x,y). Set ' f ( 4 x,
= F(4 x, v](t,4).
(2.2.16)
2.3.
37
Random Differential Inequalities
It is clear that iff in (2.2.16) satisfies the hypotheses of Theorem 2.2.1, then the initial-value problem XV,
4 = f ( t ,x,r(t,a)),
has at least one solution on [ t o ,to + b].
x(t0,4 = xo(w)
We note that the random function ~ ( w t ,) in Example 2.2.1 can be a separable and stochastically continuous process with independent increments (Poisson process, Gaussian process) or a separable jump Markov process.
2.3
RANDOM DIFFERENTIAL INEQUALITIES
We shall state and prove some fundamental results concerning systems of random differential inequalities. Theorem 2.3.1.
Assume that
(H,) g E M [ [ t o ,to + a) x D,R[R,R"]],g(t, u, w ) is a s . quasi-monotone nondecreasing in u for each t, D is an open set in R", and a > 0, (H,) u, u E c[to, to + a), R[Q, R"]] for (t, u(t,a)), ( t , v(t, 0)) E [ t o , t o + 4 x D, D - 44 0)I g(t, 44 w),w),
and D_u(t,w)>g(t,u(t,o),w) (H3)
a.e.in t ~ [ t ~ , t , + a ) ,
4 t o , 4 < u(t0,w).
Then u(t,w) < u(t,w)
for t E [ t o , to
+ a).
(2.3.1)
Proof. Suppose that (2.3.1) is false; then without loss of generality, there exist t , > t o , an index j , and R j c R such that (i) u j ( t l ,w ) = u j ( t l , o),w E R j with P(Rj) > 0, (ii) uj(t,w)> u j ( t , o ) , t E [ t o ,t l ) w.p. 1, (iii) ui(t,,w) 2 ui(t,,u), w E Rj, i f j , (iv) hypothesis (H,) holds for (tl,w),where w E (Rj- N) and N c R j is any null event.
For w E Rj and small h < 0, we have
2. Sample Calculus Approach
38
which implies that D-uj(tl,o) 5 D-uj(tl,o). This, together with (H,), for w E
Qj,
yields
gj(t1, ~ ( t l , ~0) )< , g j ( t 1 , v(tl, w), 0).
(2.3.2)
On the other hand, from (HI)and (i)-(iv), we have gj(t1, ti, o), 0)5 gj(ti, u(ti,m), 0)
for
0 E Qj.
for
0E Q j .
Inequality (2.3.2) now leads to a contradiction: gj(t1,
4 t l , o ) ,0)< g j ( t 1 , u(t,,o), 0)
Hence P(Qj) = 0 and such a t1 does not exist. This proves the theorem. Remark 2.3.1. It is obvious from the proof that the inequalities in hypothesis (H2) can also be replaced by
D - u(t,0)< s(t,v(t7 4,4, and o-u(t,o))g(t,u(t,o),o),
a.e.in
tE[to,to+a).
We note that the proof does not demand the validity of the inequalities in hypothesis (H,) for all (t,o)E [ t o , to + a) x R. In fact, it is enough to assume that the inequalities
D - vj(t, a)5 g j ( t , ~ ( 01, t , 01, D-uj(t, a)> Qj(t,u(t,o), 0) aresatisfiedforte { t : u j ( t , w )= u j ( t , o ) ) andwEQj(t) = {w:uj(t,w)= uj(t,o)} with P(Q,(t))> 0 and 1 I j _< n. By assuming one-sided estimates on g(t, u, o),hypothesis (H,) can be relaxed, and a weaker kind of inequality can be established. Theorem 2.3.2. Assume that hypotheses (HJ, (H2) hold. Further assume that for t E [ t o ,t o a), u, u, and g satisfy the inequalities
+
D - U ( t > 4 5 g(t, u ( t , 0 ) ,
0)
and a.e. in t E [to, to + a); moreover, g satisfies the one-sided estimate D - u ( t , o ) 2 g(t, u(t,w),o)
(2.3.3)
2.4.
Maximal and Minimal Solutions
39
+
for x,y E D such that x > y , where L E M [ [ t o , to a), R[SZ,Ry]] and is sample Lebesgue-integrable on [ t o , t o a). Then u(to,w)I u(to,o)implies
+
for t E
v(t,o) I u ( ~ , w )
Proof. Set

w(t,ω) = u(t,ω) + exp[2 ∫_{t₀}^{t} L(s,ω) ds] ε,

where ε = (ε₁, ε₂, …, εₙ)ᵀ and ε_j > 0 for all 1 ≤ j ≤ n. It is obvious that w(t,ω) > u(t,ω) on [t₀, t₀ + a), and

D₋ w(t,ω) = D₋ u(t,ω) + 2L(t,ω) exp[2 ∫_{t₀}^{t} L(s,ω) ds] ε.

This, together with inequality (2.3.3), yields

D₋ w(t,ω) ≥ g(t, u(t,ω), ω) + 2L(t,ω) exp[2 ∫_{t₀}^{t} L(s,ω) ds] ε > g(t, w(t,ω), ω)  w.p. 1,

since w > u. We note that v(t₀,ω) < w(t₀,ω) w.p. 1. Now, by Theorem 2.3.1, we have

v(t,ω) < w(t,ω)  for t ∈ [t₀, t₀ + a).

By taking the limit as ε → 0, we get the desired inequality.
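The comparison principle behind Theorems 2.3.1 and 2.3.2 is easy to probe numerically on individual sample paths. The following Python sketch is a minimal illustration under assumed data: it fixes one ω, takes the scalar comparison function g(t, u, ω) = K(t, ω)u with a hypothetical random coefficient K(t, ω) = 0.5 + 0.3 sin(t + θ(ω)), and checks that an under-function v with v' ≤ K v and v(t₀) < u(t₀) stays strictly below the solution u of u' = K u.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 3.0, 601)
theta = rng.uniform(0, 2 * np.pi)              # one omega
K = 0.5 + 0.3 * np.sin(t + theta)              # sample path of K(t, omega) >= 0.2

def cumint(f, t):
    """Cumulative trapezoidal integral of f over t, starting at 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))

u = 1.0 * np.exp(cumint(K, t))                 # u' = K u,              u(t0) = 1
v = 0.9 * np.exp(cumint(K - 0.2, t))           # v' = (K - 0.2) v <= K v, v(t0) = 0.9 < u(t0)

print(bool(np.all(v < u)))                     # strict domination, as the theorem predicts
```

Repeating the check over many realizations of θ gives an empirical (though of course not rigorous) confirmation of the w.p. 1 statement.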
2.4. MAXIMAL AND MINIMAL SOLUTIONS
By introducing the notion of maximal and minimal solutions of

u'(t,ω) = g(t, u(t,ω), ω),   u(t₀,ω) = u₀(ω),   (2.4.1)

sufficient conditions are given for the existence of maximal and minimal solutions of (2.4.1).

Definition 2.4.1. Let r(t,ω) be a sample solution process of the system of random differential equations (2.4.1) on [t₀, t₀ + a). Then r(t,ω) is said to be a sample maximal solution process of (2.4.1) if for every sample solution u(t,ω) existing on [t₀, t₀ + a), the inequality

u(t,ω) ≤ r(t,ω),   t ∈ [t₀, t₀ + a),   (2.4.2)

holds. A sample minimal solution ρ(t,ω) may be defined similarly by reversing inequality (2.4.2).
We shall now consider the existence theorem for maximal and minimal solution processes of (2.4.1).

Theorem 2.4.1. Assume that the hypotheses of Theorem 2.2.1 hold and that g(t, u, ω) possesses the sample quasi-monotone nondecreasing property in u for fixed t ∈ J. Then there exist sample maximal and minimal solutions of (2.4.1) on [t₀, t₀ + b] for some b > 0.

Proof. Let ε > 0 be such that ||ε|| < ρ/4. Consider the initial-value problem

u'(t,ω) = g_ε(t, u(t,ω), ω),   u(t₀,ω) = u₀(ω) + ε,   (2.4.3)

where g_ε(t, u, ω) = g(t, u, ω) + ε and u₀(ω) ∈ B(z, ρ/4). It is easy to see that g_ε(t, u, ω) satisfies all the hypotheses of Theorem 2.2.1 with K(t,ω) replaced by K(t,ω) + ρ/4. Hence, by Theorem 2.2.1, the initial-value problem (2.4.3) has a sample solution u(t, ω, ε) for t ∈ [t₀, t₀ + b] for some b > 0. We shall prove the existence of the sample maximal solution only, since the proof for the case of the sample minimal solution is similar. Let ε₁ < ε₂ ≤ ε and let u₁(t,ω) = u(t, ω, ε₁), u₂(t,ω) = u(t, ω, ε₂) be solutions of
u₁'(t,ω) = g_{ε₁}(t, u₁(t,ω), ω),   u₁(t₀,ω) = u₀(ω) + ε₁,

and

u₂'(t,ω) = g_{ε₂}(t, u₂(t,ω), ω),   u₂(t₀,ω) = u₀(ω) + ε₂,

w.p. 1. This implies that

u₂'(t,ω) > g_{ε₁}(t, u₂(t,ω), ω)   a.e. in t ∈ [t₀, t₀ + b]

and u₂(t₀,ω) > u₁(t₀,ω). By applying Theorem 2.3.1, we get

u₁(t,ω) < u₂(t,ω) on [t₀, t₀ + b].
m'(t,ω) ≤ ||f(t, x(t,ω), ω) − f(t, y(t,ω), ω)|| ≤ g(t, m(t,ω), ω),

using (2.6.2). Note that m(t₀,ω) = 0 w.p. 1. By Theorem 2.5.1, we have

m(t,ω) ≤ r(t,ω)  on [t₀, t₀ + b],

where r(t,ω) is the maximal solution process of (2.6.1). This, together with assumption (iii), implies that m(t,ω) ≡ 0 on [t₀, t₀ + b] w.p. 1. Thus the proof of the theorem is complete.
Corollary 2.6.1. The function g(t, u, ω) = K(t,ω)u is admissible in Theorem 2.6.1. In this case, condition (2.6.2) reduces to the well-known local Lipschitz condition.

Remark 2.6.1. Under the conditions of Theorem 2.6.1 with g(t, u, ω) = K(t,ω)u, the proof of the existence theorem (Theorem 2.2.1) can be simplified by following the method of successive approximations. That is, define x₀(t,ω) = x₀(ω) and

x_{m+1}(t,ω) = x₀(ω) + ∫_{t₀}^{t} f(s, x_m(s,ω), ω) ds,   m = 1, 2, …,   (2.6.3)

on J.
In this case, the sequence x_m(t) itself converges to the solution process, which is unique. So the argument used to obtain the convergent subsequence in the proof of Theorem 2.2.1 is not necessary. Furthermore, assumption (vi) is not required.

Remark 2.6.2. In addition to the hypotheses of Theorem 2.2.1, if we assume that the initial-value problem (2.2.1) has a unique solution whenever it exists, then the argument used to select the convergent subsequence can be modified. Further, note that the uniqueness assumption on (2.2.1) seems reasonable because the system (2.2.1) that describes the dynamic model of a practical system is well defined. In formulating a mathematical model for a physical or biological system, we make errors in constructing the function f(t, x, ω) as well as errors in the initial conditions. For mathematical purposes, it is sufficient to know that the change in the solutions can be made arbitrarily small by making arbitrarily small changes in the differential equations and the initial values.
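The successive-approximation scheme (2.6.3) of Remark 2.6.1 is also constructive in practice: on each fixed sample path it is the classical Picard iteration. The Python sketch below is only an illustration under assumed data (the right-hand side f(t, x, ω) = −a(ω)x + sin t with a lognormal coefficient a(ω) is a hypothetical choice).

```python
import numpy as np

def picard(f, x0, t_grid, iterations=8):
    """Successive approximations (2.6.3) on one sample path, via the trapezoidal rule."""
    x = np.full_like(t_grid, x0, dtype=float)           # x_0(t, omega) = x_0(omega)
    for _ in range(iterations):
        fx = f(t_grid, x)
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (fx[1:] + fx[:-1]) * np.diff(t_grid)))
        )
        x = x0 + integral                                # x_{m+1} = x_0 + int f(s, x_m(s)) ds
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    a = rng.lognormal()                                  # one realization of a(omega) > 0
    f = lambda s, x: -a * x + np.sin(s)
    t = np.linspace(0.0, 2.0, 401)
    print(picard(f, x0=1.0, t_grid=t)[-1])
```

Under a sample Lipschitz bound K(t, ω), successive iterates contract path by path, which is exactly the convergence asserted in Remark 2.6.1.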
The following theorem establishes the continuous dependence of solutions on the parameters.

Theorem 2.6.2. Assume that

(H₁)_λ f ∈ M[J × D × Λ, R[Ω, R^n]], f(t, x, λ, ω) is sample continuous in (x, λ) for each fixed t, D is an open set in R^n, and Λ is an open parameter λ-set in R^m;
(H₂)_λ g ∈ M[J × R₊, R[Ω, R₊]], g(t, u, ω) is sample continuous in u for each t ∈ J, and g(t, u(t), ω) is sample Lebesgue-integrable whenever u(t) is sample absolutely continuous;
(H₃)_λ u(t) ≡ 0 is the unique solution w.p. 1 of (2.6.1) such that u(t₀) = 0 w.p. 1, and the solutions u(t, t₀, u₀, ω) of (2.6.1) are sample continuous with respect to u₀;
(H₄)_λ for (t, x, λ), (t, y, λ) ∈ J × D × Λ,

||f(t, x, λ, ω) − f(t, y, λ, ω)|| ≤ g(t, ||x − y||, ω);   (2.6.4)

(H₅)_λ K ∈ IB[J, R[Ω, R₊]] and satisfies

||f(t, x, λ, ω)|| ≤ K(t, ω)  for (t, x, λ) ∈ J × E₁ × Λ   (2.6.5)

w.p. 1, where E₁ is any compact subset of D.

Then, given any scalar number ε > 0 and λ₀ ∈ Λ, there exists a δ = δ(ε) > 0 such that for every λ with ||λ − λ₀|| ≤ δ(ε), the differential equation

x'(t, λ, ω) = f(t, x(t, λ, ω), λ, ω),   x(t₀, λ, ω) = x₀(λ, ω),   (2.6.6)

admits a unique solution x(t, λ, ω) = x(t, t₀, x₀(λ, ω), ω) satisfying

||x(t, λ, ω) − x(t, λ₀, ω)|| < ε,   t ∈ J.   (2.6.7)
Proof. Under hypotheses (H₁)_λ and (H₅)_λ, because of Corollary 2.2.1, the initial-value problem (2.6.6) has at least one solution on [t₀, t₀ + b] whenever (t₀, x₀, λ) belongs to a compact subset of J × D × Λ. Furthermore, by (H₁)_λ–(H₄)_λ and Theorem 2.6.1, x(t, λ) is the unique solution process of (2.6.6). Let λ₀ be any element of Λ. Since the family {x(t, λ)} is sample equicontinuous and equibounded, it follows that for almost all fixed ω ∈ Ω there exists a sequence λₙ such that λₙ → λ₀ as n → ∞ and, depending on ω, the uniform limit

lim_{λₙ→λ₀} x(t, λₙ, ω) = x(t, λ₀, ω)   (2.6.8)

exists. From the sample continuity of f(t, x, λ, ω) in (x, λ) for fixed t ∈ [t₀, t₀ + b], we get

f(t, x(t, λₙ, ω), λₙ, ω) → f(t, x(t, λ₀, ω), λ₀, ω)   (2.6.9)
as n(ω) → ∞ for fixed ω ∈ Ω in (2.6.8). By the application of the Lebesgue dominated convergence theorem, we see that

∫_{t₀}^{t} f(s, x(s, λₙ, ω), λₙ, ω) ds → ∫_{t₀}^{t} f(s, x(s, λ₀, ω), λ₀, ω) ds.   (2.6.10)

It is obvious that for fixed ω ∈ Ω in (2.6.8), x(t, λ₀, ω) is the solution of the deterministic system

x'(t, λ₀, ω) = f(t, x(t, λ₀, ω), λ₀, ω),   x(t₀, λ₀) = x₀(λ₀, ω).   (2.6.11)

For λ = λ₀, the solution process of (2.6.6) is unique. Therefore, for every sequence λₙ such that λₙ → λ₀ as n → ∞, the uniform limit

lim_{λₙ→λ₀} x(t, λₙ, ω) = x(t, λ₀, ω)  for fixed ω ∈ Ω in (2.6.8)   (2.6.12)

exists, which implies that lim_{λₙ→λ₀} x(t, λₙ, ω) = x(t, λ₀, ω) w.p. 1. Hence x(t, λ₀, ω) is a random process satisfying (2.6.11), which shows that the limits (2.6.8), (2.6.9), (2.6.10), and (2.6.12) hold a.s. Moreover, (2.6.12) and the selection of the sequence λₙ imply that

lim_{λ→λ₀} x(t, λ, ω) = x(t, λ₀, ω)  uniformly in t ∈ [t₀, t₀ + b]  w.p. 1,

which shows that for any given scalar number ε > 0 and λ₀ ∈ Λ, there exists a δ = δ(ε) > 0 such that for every λ with ||λ − λ₀|| ≤ δ(ε), the solution process x(t, λ) of (2.6.6) satisfies (2.6.7) for t ∈ [t₀, t₀ + b]. Now, by using Theorem 2.2.2 and the argument used in the proof of that theorem, the solution can be extended to J. This completes the proof of Theorem 2.6.2.
Remark 2.6.3. Assume that all the hypotheses of Theorem 2.6.2 hold, except that hypotheses (HJA-(HJ2 are replaced by the assumption that the initial-value problem (2.6.6) has a unique solution; then the conclusion of Theorem 2.6.2 remains valid. Remark 2.6.4. We note that the parameter A in Theorem 2.6.2 can be replaced by the random parameter A E R(Q, R”]. The proof remains the same with slight modifications. Now we present a theorem which establishes the continuous dependence of the solution processes of (2.2.1) with respect to the initial conditions ( t o 7 xo).
Theorem 2.6.3. Assume that the hypotheses of Theorem 2.6.1 hold. Furthermore, the solutions u(t, w ) of (2.6.1) through (to, uo) are sample continuous with respect to the initial conditions ( t o , u o ) . Then the solutions
x(t, t o , x o ) of
2.2.1) through (t,,x,) are a.s. unique and sample continuous with respect to the initial conditions ( t o ,xo).
Proof. The proof of the existence and uniqueness of solutions x(r) of (2.2.1) follows from the proof of Theorem 2.6.1. The proof of the continuous dependence on initial conditions can be formulated on the basis of the proof of Theorem 2.6.2. Consider the following system of random differential equations x'(t,t o xo(o), 0) = f(t, x(t, t o xo(o), a),w), 9
9
x ( t o , 0) = xo(o),
(2.6.13)
where ( t o ,x,) is considered a random parameter. In view of the hypotheses of the theorem and the fact that the function f(t, x, w ) is independent of the parameter, it is very easy to see that (2.6.13) satisfies the hypotheses of Theorem 2.6.2. However, the formal proof of the continuous dependence of solutions of (2.2.1) on the initial conditions can be formulated as follows: Let x,(t, w ) = x(t, t l , xl(w), w ) and x,(r, o)= x(t, t 2 ,x2(0),o)be solutions of (2.2.1)through (t,, x,) and ( t 2 ,x2), respectively. Without loss of generality, assume that t z I t,. Define
m ( 4 4 = Ilx,(t,o)
-x2(t,49
m(t,,w) = 11x1 - XZ(tl,w)11. (2.6.14)
From (2.6.2),we obtain m'(t, 0)I g(t, m(t,w), w),
t 2 t,.
(2.6.15)
From (2.6.14), (2.6.15), and Theorem 2.5.1, we have m(t,o)I r(t, o),
t 2t
,,
(2.6.16)
where r(t,w ) = r(t, t,, m(t,,o), w ) is the maximal solution process of (2.6.1) through ( t , , m(tl)).Since x,(t, o)is sample continuous in t and r(t, t o ,uo(o),w ) is by hypothesis sample continuous in (to, u0), it follows from (iii), (2.6.16), and the definition of m(t,o)that lim rn(t,w)5 r(t,t,,O)
=0
w.p. 1.
t1-:2 XIJX2
Hence lim jlx,(t, w ) - x2(t,o)ll = 0 w.p. 1, f l +f2 XI - x 2
and the proof is complete. Lemma 2.6.1. Assume that
(HI) f
E M [ J x D, R[Q,R"] and its sample derivative Jf/dx = (?/?x)f(f,X, o)exists and is sample continuous in x for each t E J ,
where D is an open convex set in R". Then
Proof. Setting
F(s, 0, t ) = f ( t , sy
+ (1 - S)X,w),
0s sI 1,
the convexity of D implies that F(s, w, t ) is well-defined. It is obvious that F(s, w, t) is sample continuous in s. Since f(t, x, w ) is sample differentiable in x, it follows that F(s, o,t ) is sample differentiable in s E [0, I]. Moreover, its sample derivative 2F/as is a sample continuous random variable. Hence
a as
B ax
- F(s,w,t) = -f ( t , s y
+ (1 - s)x, w)(y - x).
(2.6.17)
Since F(1, w, t ) = f(t, y, o) and F(0,w, t ) = f ( t ,x, w), the result follows by sample integrating (2.6.17) in the sense of Riemann with respect to s from 0 to 1. In the following we shall prove that the solution process x ( t , t O , x 0 , w ) of (2.2.1) is sample differentiable with respect to the initial conditions ( t o ,xo) and that the sample derivatives (d/ax,)x(t, t o ,xo, w ) and (a/ato)x(t,t o ,xo, o) exist and satisfy the equation of variation of (2.2.1) along the solution process x(t, to, xo, 0).
Theorem 2.6.4. Assume that (H,) f E M [ J x R", R[Q,R"]] and its sample derivative
-af_ -
a f(t,x,w) ax ax
exists and is sample continuous in x for each t E J ; (H2) K E I B [ J,R[Q, R + ] ]and satisfies Ilf,(t,x,w)lI 5 K(t,w)
for (t,x) E J x B(z,p), (2.6.18)
where f,(t, x,w ) = (d/dx)f(t, x, a); (H3) the solution x ( t , w ) = x(t, t o ,xo(w),o)of (2.2.1) exists for t 2 t o . Then (a) the sample derivative (d,/dxk,)x(t, to, xo(w),o)exists for all k = 1,2, 3 , . . . ,n and satisfies the systems of linear random differential equations y ' ( t , o ) = f ( t , x(t, t o , Xo(W),m), w)Y(t,w),Y ( t o ,0) = ek,
4
(2.6.19)
where ek = (e:,e:, . . . ,et, . . . ,e:)' is an n-vector such that = 0 if j # k and et = 1 and xko is the kth component of x o = (xlo,xk0,.. . , x ~ and ~ ) ~
(b) the sample derivative (a/ato)x(t,t o ,xo ( o ) w , )exists and satisfies (2.6.19) with
, exists w.p. 1, where @(t,t o , x o , o )is the fundamental whenever f ( t o , x o ( o )o) matrix solution process of (2.6.19). Moreover, @(t,t o ,x o ,o) satisfies the random matrix differential equation
X’
X(to,o)= unit matrix.
= f x ( t ,x(t),o ) X ,
(2.6.21)
Proof. First we shall show that (a) holds. From hypotheses (HI) and (HJ, Lemma 2.6.1, and Corollary 2.6.1, it is obvious that the solution x(t,w) in (H,) is unique. Moreover, by Theorem 2.6.3, the solutions x(t, to,xo)are continuous with respect to the initial conditions ( t o ,xo). For small A, x(t,A, o)= x(t,t o ,xo(o)+ Ae,, o)and x(t,w ) = x ( t,t o , xo(w),w ) are solution processes of (2.2.1) through ( t o ,xo + Ae,) and ( t o ,xo),respectively. From the continuous dependence on initial conditions, it is clear that lim x(t,A,o)= x(t, to,o)
1-0
uniformly on J
w.p. 1.
(2.6.22)
Set
A # 0, x = x(t,w ) and y 2.6.1, yields
= x(t, E,, 0). This,
AX;(^, o)=
together with the application of Lemma
Jolfx(t, sy + (1 - s)x, o)Ax,(t, o)ds.
Define f,(t, x ( ~ , o )1, , LO) =
lof,(t, sx(t,;I,o)+ (1 1
(2.6.23)
- s)x(t,w), o ) d s .
(2.6.24)
(2.6.25)
Note that the integral is the sample Riemann integral. In view of (Ht), f x ( t ,x , A, a)is a product-measurable random process which is sample continuous in (x,3,) for fixed t. Furthermore, Ilf,(t, x , A o)ll s
W ,o)
for (t,x , A) E J x B(z,p ) x A,
where A c R is an open neighborhood of 0 E R. From (2.6.22)and the sample continuity of f,(t, x, w), we have lim f x ( t , sx(t, A , o )
1+0
+ (1 - s)x(t,w), o)= f,(t,
x(t,o), o)
(2.6.26)
uniformly in s E [0,1] w.p. 1. This, together with (2.6.25), yields lim fx(t, x(t,w),I , w ) = fx(t, x(t,w), w ) w.p. 1.
1-0
(2.6.27)
From (2.6.23) and (2.6.25), relation (2.6.24) reduces to Ax;(t,w) = f x ( t ,x(t,w), I , w)Ax,(t, w),
Ax,(to, w ) = e k . (2.6.28)
It is obvious that the initial-value problem (2.6.28) satisfies hypotheses (Hl),-(H5)1, and hence by the application of Theorem 2.6.2, we have lim Ax,(t, o)= y ( t , w )
1-0
uniformly on J
w.p. 1,
(2.6.29)
where y(t,w) is the solution process of (2.6.19). Because of (2.6.23), we note that the limit of Ax,(t,w) in (2.6.29) is equivalent to the derivative (d/dx,,)x(t, t o ,x,(o), 0).Hence (d/dx,,)x(t, t o ,xo(o),w ) exists and is the solution process of (2.6.19). This is true for every k = 1,2,. . . ,n. Thus (d/dxo)x(t,t o ,xo(w),w ) is the fundamental random matrix solution process of (2.6.19) and satisfies the random matrix equation (2.6.21). (d/dx,)x(t,t o ,xo(w),w ) is denoted by @(t,t o ,xo(w),0). This completes the proof of (a) and the last part of (b). To prove the first part of (b), define
where x(t,I , w ) = x(t, t o + I , xo(w),w ) and x(t,w) = x(t,t o , x,(w),w) are solution processes of (2.2.1) through (to + 2, x o ) and (to,xo),respectively. Again, by imitating the proof of part (a) and by replacing Ax1(&w )by ALx^,(t,w), one can conclude that the sample derivative (d/dt,)x(t,t o ,xo(w),o)exists and is the solution of (2.6.19) whenever lim Ax^n(to,w)exists and is equal to --f(to,x,(o),w).
1-0
(2.6.31)
We shall show that (2.6.31) is true. By uniqueness of solutions, we have
+ I , xo(w),w)- x(t0 + 2, t o + 2, x o ( 4 , 4)/1 - - x(t0 + A, to + 2, x o ( w ) , 4 - x(t0, to + I , x o ( o ) , 4
A21(to,w ) = (x(t0,to -
I
This, together with the sample Lebesgue integrability of f(s,
x(s, to + 2, xo(w),o), w),
implies that
lirn AZ1(t0,w ) 1-0
exists and is equal to --f(to, xo(w),o)w.p. 1. Now by following the proof of the deterministic Theorem 2.5.3 in Lakshmikantham and Leela [ S S ] , we can conclude that
This completes the proof of the first part of (b).Hence the proof of the theorem is complete. Remark 2.6.5. It is easy to see that the fundamental random matrix
@(t,t o ,xo,o)is sample continuous in (to, xo) for fixed t. This is because of
the application of Theorem 2.6.3 and the facts that f,(t, x,o)is sample continuous in x for fixed t and the solution process x(t, t o , xo, o)is sample continuous in (to,xo)for fixed t.
2.7. THE METHOD OF VARIATION OF PARAMETERS

Let us present some elementary results about the linear random differential system

x'(t,ω) = A(t,ω)x(t,ω),   x(t₀,ω) = y₀(ω),   (2.7.1)

where A(t,ω) = (a_{ij}(t,ω)) is a product-measurable random matrix function defined on J × Ω into R^{n²}. Assume also that A(t,ω) is a.s. sample Lebesgue-integrable on J. Let Φ(t,ω) be the n × n matrix whose columns are the n-vector solutions of (2.7.1) with the initial conditions y₀(ω) = e_k, k = 1, 2, …, n. Clearly, Φ(t₀,ω) = unit matrix, and Φ(t,ω) satisfies the random matrix differential equation

Φ'(t,ω) = A(t,ω)Φ(t,ω),   Φ(t₀,ω) = unit matrix.   (2.7.2)

The following result is analogous to the corresponding deterministic result, and its proof can be formulated similarly.

Lemma 2.7.1. Let A(t,ω) be a product-measurable random matrix function defined on J × (Ω, F, P) into R^{n²} and sample Lebesgue-integrable a.s. on J. Then the fundamental matrix solution process Φ(t,ω) of (2.7.2) is nonsingular on J. Moreover,

det Φ(t,ω) = exp[∫_{t₀}^{t} tr A(s,ω) ds],   t ∈ J,   (2.7.3)

where tr A(t,ω) = Σ_{i=1}^{n} a_{ii}(t,ω) and det stands for the determinant of a matrix.
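The Liouville-type identity (2.7.3) can be verified numerically on a sample path: integrate (2.7.2) for a fixed ω and compare det Φ(t,ω) with exp ∫ tr A(s,ω) ds. The Python sketch below is an illustration only; the 2 × 2 matrix A(t,ω) with a uniformly distributed phase θ(ω) is a hypothetical choice, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0, 2001)
h = t[1] - t[0]
theta = rng.uniform(0, 2 * np.pi)                       # one omega

def A(s):
    """A 2 x 2 sample path of a random matrix function A(t, omega)."""
    return np.array([[-1.0, np.sin(s + theta)],
                     [np.cos(s + theta), -0.5]])

Phi = np.eye(2)
trace_integral = 0.0
for s in t[:-1]:                                        # Euler step for Phi' = A Phi, Phi(t0) = I
    Phi = Phi + h * A(s) @ Phi
    trace_integral += h * np.trace(A(s))

print(np.linalg.det(Phi), np.exp(trace_integral))       # the two numbers agree up to O(h)
```

Nonsingularity of Φ(t,ω), which the variation-of-parameters formula below relies on, is visible here as the determinant staying strictly positive along the path.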
Consider the random perturbed system of the linear system (2.7.1),

y'(t,ω) = A(t,ω)y(t,ω) + F(t, y(t,ω), ω),   y(t₀,ω) = y₀(ω),   (2.7.4)

where A(t,ω) is as defined above and F ∈ M[J × R^n, R[Ω, R^n]]. The following result establishes the integral representation for the solution processes of (2.7.4) in terms of the solution process of (2.7.1).

Theorem 2.7.1. Assume that

(i) A(t,ω) satisfies the hypotheses of Lemma 2.7.1;
(ii) F ∈ M[J × R^n, R[Ω, R^n]] and F(t, x, ω) is sample continuous in x for fixed t;
(iii) K ∈ IB[J, R[Ω, R₊]] and satisfies ||F(t, x, ω)|| ≤ K(t,ω) for (t, x) ∈ J × B(z, ρ).

Then any solution process y(t,ω) = y(t, t₀, y₀(ω), ω) of (2.7.4) satisfies the random integral equation

y(t,ω) = x(t,ω) + ∫_{t₀}^{t} Φ(t,ω)Φ⁻¹(s,ω)F(s, y(s,ω), ω) ds,   t ≥ t₀,   (2.7.5)

where Φ(t,ω) is the fundamental matrix solution of (2.7.1).
Proof. Let x(t,ω) = Φ(t,ω)y₀(ω) be a solution process of (2.7.1), existing for t ≥ t₀. The method of variation of parameters requires the determination of an a.s. sample differentiable function z(t,ω) so that

y(t,ω) = Φ(t,ω)z(t,ω),   z(t₀,ω) = y₀(ω),   (2.7.6)

is a solution process of (2.7.4). By sample differentiation with respect to t, we get
A(t,o)y(t,w) F ( t , y(t, w), 0)= 0 and i E I, and llxlle = (xTQX)'12,respectively.
I/x//~
Consider a particular kind of nonlinear system of random differential equations

x'(t,ω) = A(t, x(t,ω), ω) f(x(t,ω), ω),   x(t₀,ω) = x₀(ω),   (2.9.20)

where x ∈ R^n, A(t, x, ω) = (a_{ij}(t, x, ω)) is an n × n random matrix function whose elements a_{ij} ∈ M[R₊ × B(z,ρ), R[Ω, R]], a_{ij}(t, x, ω) is a.s. sample continuous in x for fixed t, f ∈ R[B(z,ρ), R[Ω, R^n]], f(x,ω) = (f₁(x,ω), f₂(x,ω), …, f_n(x,ω))ᵀ, and its sample derivative (∂/∂x)f(x,ω) exists and is sample continuous in x. Further, assume that the random functions A(t, x, ω) and f(x,ω) are smooth enough to assure the existence of a sample solution process x(t,ω) of (2.9.20), as far as x(t) ∈ B(z,ρ), for t ≥ t₀ w.p. 1.

Theorem 2.9.2. Let A(t, x, ω) and f(x,ω) be as described above. Let V(t, x, ω) = ||f(x,ω)||, where ||·|| is any norm on R^n. Then for any solution process x(t,ω) of (2.9.20),

V(t, x(t,ω), ω) ≤ V(t₀, x₀(ω), ω) exp[∫_{t₀}^{t} μ(f_x(x(s,ω), ω)A(s, x(s,ω), ω)) ds],   (2.9.21)

as far as x(t) ∈ B(z,ρ), for t ≥ t₀.

Proof. For small h > 0, we have

V(t + h, x + hA(t, x, ω)f(x, ω), ω) = ||f(x + hA(t, x, ω)f(x, ω), ω)||
  = ||f(x, ω) + h f_x(x, ω)A(t, x, ω)f(x, ω) + o(h)||,
which implies
This, together with Definition 2.9.1 and (2.8.2), shows that D&.9.zo,Wx9 4 5 P(f,(.& 4 4 4 x,4 ) W , x , 0) for ( t , x) E B(z, p). (2.9.22)
By (2.9.22) and Theorem 2.8.1, we then get
which completes the proof of the theorem.
Problem 2.9.3. Assume that the hypotheses of Theorem 2.9.2 hold. Then
(i) (MN,) in Problem 2.9.1 and V(t,x , o)= ( f T ( x o, ) f ( x ,w))’” imply that p(fx(x,o)A(t,x, w)) in (2.9.21) is the largest eigenvalue of 1
4m,x ,
+f
TC(fX(X,
x k
4 4 4 x, 4 1 ;
(ii) (MN,) in Problem 2.9.1 and V(t,x,o) = ~ u p ~ , ~ ( l f i ( x , oimply )l>
= f x ( x ,o)A(t,x , 0); where (bjj(t,x , 0)) (iii) (MN,) in Problem 2.9.1 and V(t,x , o)=
P ( ~ X w)A(t, ( ~ ? x, O))= sup kel
[
bkk(t,
x7
If;(x, w)l imply
+
where (bij(t,x , 0)) is as in (ii); (iv) (MN,) in Problem 2.9.1 and V(t,x,co) =
n
i= 1 i#k
(bik(t,
I
x , O))
9
cS=ldi\J(x,a)\imply
(v) (MN,) in Problem 2.9.1 and V(t,x , o)= ( f T ( x w)Qf(x))’” , imply p ( f x ( x ,w)A(t,x,0)) is the largest eigenvalue of + Q - ’ [ ( f ( x ,4
4 t , x, 4)’Q + Wx(x,w)A(t,x , 4 1 .
The following problem shows the usefulness of the concept of the logarithmic norm relative to the random matrix differential equation (2.6.21).

Problem 2.9.4. Show that the solution process of (2.6.21) satisfies the estimate

||Φ(t, t₀, x₀, ω)|| ≤ exp[∫_{t₀}^{t} μ(f_x(s, x(s,ω), ω)) ds],

where Φ(t, t₀, x₀, ω) is the solution process of (2.6.21) through (t₀, I) and x(t,ω) is the solution process of (2.2.1) through (t₀, x₀).

The following problem shows the relationship between the solution processes of (2.2.1) and (2.6.21).

Problem 2.9.5. Suppose that the hypotheses of Theorem 2.7.4 hold. Then

||x(t, t₀, x₀, ω) − x(t, t₀, y₀, ω)|| ≤ ||y₀(ω) − x₀(ω)|| ∫₀¹ exp[∫_{t₀}^{t} μ(f_x(u, x(u, t₀, x₀ + s(y₀ − x₀), ω), ω)) du] ds,   (2.9.24)

where for s ∈ [0, 1], x(u,ω) = x(u, t₀, x₀ + s(y₀ − x₀), ω) is the solution process of (2.2.1) through (t₀, x₀ + s(y₀ − x₀)).
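For reference, the logarithmic norms that enter estimates such as (2.9.21) and (2.9.24) are straightforward to evaluate for the usual vector norms: for the 1-norm and ∞-norm one takes diagonal entries plus off-diagonal absolute column (respectively row) sums, and for the Euclidean norm μ(A) is the largest eigenvalue of (A + Aᵀ)/2. The Python sketch below is an illustration of these standard formulas (the test matrix is arbitrary).

```python
import numpy as np

def log_norm(A, kind="2"):
    """Logarithmic norm mu(A) of a real square matrix for the 1-, 2-, or inf-norm."""
    A = np.asarray(A, dtype=float)
    off = np.abs(A) - np.diag(np.abs(np.diag(A)))       # |a_ij| with the diagonal zeroed
    if kind == "1":
        return np.max(np.diag(A) + off.sum(axis=0))     # column sums
    if kind == "inf":
        return np.max(np.diag(A) + off.sum(axis=1))     # row sums
    return np.max(np.linalg.eigvalsh(0.5 * (A + A.T)))  # largest eigenvalue of (A + A^T)/2

A = np.array([[-2.0, 1.0], [0.5, -1.0]])
print(log_norm(A, "1"), log_norm(A, "2"), log_norm(A, "inf"))
```

With such a routine, the exponential bounds of Problems 2.9.4 and 2.9.5 can be evaluated along numerically computed sample paths.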
The following example illustrates the comparison theorem relative to the system (2.8.1).

Example 2.9.1. Suppose that (2.8.1) satisfies

||x_i + h f_i(t, x, ω)|| ≤ ||x_i|| + h Σ_{j=1}^{m} a_{ij}(t,ω) ||x_j||,   i = 1, 2, …, m,   (2.9.25)

for (t, x) ∈ R₊ × R^n and sufficiently small h > 0, where x_i ∈ R^{n_i}, n = Σ_{i=1}^{m} n_i, a_{ij} ∈ M[R₊, R[Ω, R]], a_{ij}(t,ω) is a.s. sample locally Lebesgue-integrable on R₊, and a_{ij}(t,ω) ≥ 0 w.p. 1 for i ≠ j. Set

V(t, x, ω) = (V₁(t, x, ω), V₂(t, x, ω), …, V_m(t, x, ω))ᵀ,  where V_i(t, x, ω) = ||x_i||, i = 1, 2, …, m.   (2.9.26)

It is obvious that

D⁺_{(2.8.1)} V(t, x, ω) ≤ g(t, V(t, x, ω), ω),

where g(t, u, ω) = A(t,ω)u and A(t,ω) = (a_{ij}(t,ω)) is an m × m random matrix function. Furthermore, A(t,ω)u satisfies the sample quasi-monotone nondecreasing property in u for fixed t ∈ R₊. Then all hypotheses of Theorem 2.8.1 are satisfied. Hence, for t ≥ t₀,

V(t, x(t,ω), ω) ≤ r(t,ω),

where x(t,ω) is any solution process of (2.8.1) and r(t,ω) is the maximal solution process of (2.8.4) with g(t, u, ω) = A(t,ω)u and u₀(ω) = V(t₀, x₀(ω), ω).

Remark 2.9.7. We note that most of our previous illustrations are based on the scalar version of Theorem 2.8.1. However, the real usefulness of Theorem 2.8.1 will be seen in the context of the stability analysis of (2.8.1). Furthermore, this discussion of the scalar version of Theorem 2.8.1 relates the earlier work in this field in a systematic and unified way.

2.10. STABILITY CONCEPTS
Let x(t,ω) = x(t, t₀, x₀, ω) be any sample solution process of (2.8.1). Without loss of generality, we assume that x(t,ω) ≡ 0 is the unique solution process of (2.8.1) through (t₀, 0). Otherwise, if one finds a solution process z(t,ω) of the random algebraic equation f(t, x, ω) = 0, then one can use the transformation y = x − z(t,ω) to reduce the steady state z(t,ω) of (2.8.1) to y(t,ω) ≡ 0 of the transformed system

y'(t,ω) = f(t, y + z(t,ω), ω) = H(t, y(t,ω), ω),   y(t₀,ω) = x₀(ω).

On the other hand, if one knows a solution process z(t,ω) of (2.8.1) through (t₀, z₀) and is interested in the stability properties of z(t,ω), then again one can use the above transformation to reduce the stability analysis of z(t,ω) to that of y(t,ω) ≡ 0 of the transformed system

y'(t,ω) = H(t, y(t,ω), ω),   y(t₀,ω) = y₀(ω),

where H(t, y, ω) = f(t, y + z(t,ω), ω) − f(t, z(t,ω), ω). Now, depending on the mode of convergence in the probabilistic analysis, we shall formulate some definitions of stability.

Definition 2.10.1. The trivial solution of (2.8.1) is said to be

(SP₁) stable in probability if for each ε > 0, η > 0, t₀ ∈ R₊, there exists a positive function δ = δ(t₀, ε, η) that is continuous in t₀ for each ε and η such that the inequality

P{ω : ||x₀(ω)|| > δ} < η  implies  P{ω : ||x(t,ω)|| ≥ ε} < η,   t ≥ t₀;

(SP₂) uniformly stable in probability if (SP₁) holds with δ independent of t₀;
(SP₃) asymptotically stable in probability if it is stable in probability and if for any ε > 0, η > 0, t₀ ∈ R₊, there exist numbers δ₀ = δ₀(t₀) and T = T(t₀, ε, η) such that

P{ω : ||x₀(ω)|| > δ₀} < η  implies  P{ω : ||x(t,ω)|| ≥ ε} < η,   t ≥ t₀ + T;

(SP₄) uniformly asymptotically stable in probability if (SP₁) and (SP₃) hold with δ, δ₀, and T independent of t₀;

(SS₁) stable with probability one (or a.s. sample stable) if for each ε > 0, t₀ ∈ R₊, there exists a positive function δ = δ(t₀, ε) such that the inequality ||x₀(ω)|| ≤ δ w.p. 1 implies ||x(t,ω)|| < ε w.p. 1, t ≥ t₀;

(SS₂) uniformly stable with probability one if (SS₁) holds with δ independent of t₀;

(SS₃) asymptotically stable with probability one (or a.s. sample asymptotically stable) if it is stable with probability one and if for any ε > 0, t₀ ∈ R₊, there exist δ₀ = δ₀(t₀) and T = T(t₀, ε) such that the inequality ||x₀(ω)|| ≤ δ₀ w.p. 1 implies ||x(t,ω)|| < ε w.p. 1, t ≥ t₀ + T;

(SS₄) uniformly asymptotically stable with probability one if (SS₁) and (SS₃) hold with δ, δ₀, and T independent of t₀;

(SM₁) stable in the pth mean if for each ε > 0, t₀ ∈ R₊, there exists a positive function δ = δ(t₀, ε) such that the inequality ||x₀||_p ≤ δ implies ||x(t)||_p < ε, t ≥ t₀;

(SM₂) uniformly stable in the pth mean if (SM₁) holds with δ independent of t₀;

(SM₃) asymptotically stable in the pth mean if it is stable in the pth mean and if for any ε > 0, t₀ ∈ R₊, there exist δ₀ = δ₀(t₀) and T = T(t₀, ε) such that the inequality ||x₀||_p ≤ δ₀ implies ||x(t)||_p < ε, t ≥ t₀ + T;

(SM₄) uniformly asymptotically stable in the pth mean if (SM₁) and (SM₃) hold with δ, δ₀, and T independent of t₀.
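The concepts (SP), (SS), and (SM) are statements about probabilities and moments of the solution process, so they can be probed empirically by Monte Carlo over sample paths. The Python sketch below is an illustration under assumed data: the scalar test equation x' = (−1 + a(ω) sin t)x and the distributions of a(ω) and x₀(ω) are hypothetical choices, and the output is only an estimate of P{ω : sup_{t≥t₀} |x(t,ω)| ≥ ε}.

```python
import numpy as np

def sample_sup_norm(a, x0, t_grid):
    """sup_t |x(t, omega)| for x' = (-1 + a sin t) x, using the closed-form solution."""
    # integral of (-1 + a sin s) ds from t0 = 0 to t
    I = -t_grid + a * (1.0 - np.cos(t_grid))
    return np.max(np.abs(x0 * np.exp(I)))

rng = np.random.default_rng(4)
t = np.linspace(0.0, 20.0, 4001)
eps, n_paths = 0.5, 2000
exceed = 0
for _ in range(n_paths):
    a = rng.normal(0.0, 0.3)           # random coefficient a(omega)
    x0 = rng.uniform(-0.2, 0.2)        # random initial condition, small in absolute value
    exceed += sample_sup_norm(a, x0, t) >= eps
print(exceed / n_paths)                # estimate of P{ sup_t |x(t)| >= eps }
```

Small estimated exceedance probabilities for small initial data are the empirical counterpart of (SP₁); replacing the supremum by E|x(t)|^p gives the analogous check for (SM₁).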
Remark 2.10.1. Based on Definition 2.10.1, one can formulate other definitions of stability and boundedness. See Lakshmikantham and Leela C591.
Remark 2.10.2. We note that our stability definitions are local in character, as is usual. If one wants to study the global behavior of solutions, such as boundedness and Lagrange stability, we need to take ρ = ∞. For convenience, we shall now introduce certain classes of monotone functions.

Definition 2.10.2. A function φ(u) is said to belong to the class 𝒦 if φ ∈ C[[0, ρ), R₊], φ(0) = 0, and φ(u) is strictly increasing in u, where 0 < ρ ≤ ∞.
So} >q implies P ( w :Ilx(t, w)ll 2 E } < q,
t 2 to
+ T.
(2.11.20)
Assume that (SPQ)holds. Then given b ( ~>) 0, q > 0, and to E R +, there exist = 6' and T(to,E , q ) = T > 0 such that numbers so(t0)
whenever
As before, we choose uo so that (2.11.5) holds and choose 6g*(to) = 6g* such that P(w:a(to,l/XO(~)l~,O) > S O ) = P(o:l/xo(o)ll >
@*I.
Then choose h0 = min(6;,6;*). With this h0, we claim that (2.11.20) holds. Otherwise, there exists a sequence {t,,), t , 2 to + T , t, + a3 as n ---* co,such that for some solution process (2.8.1) satisfying P{w:IIxo(w)ll> 6,) < q, we have P{o:IIx(t,,,w)ll 2 E } =: q, t, 2 to + T. This, together with (2.11.9) and (2.1 1.21), will establish the validity of (2.11.20). This completes the proof of the theorem.
Example 2.11.2. Let μ(A(t,ω)) be the logarithmic norm of A(t,ω) in (2.9.15). Then for any η > 0 and t₀ ∈ R₊, there exists a number T(t₀, η) = T > 0 such that

P{ω : ∫_{t₀}^{t} μ(A(s,ω)) ds ≥ 0} < η,   t ≥ t₀ + T,   (2.11.22)

implies (SP₃) of (2.9.15).
Proof. From the proof of Example 2.11.1, the functions V ( t , x , w )= jlx1/ and g(t, u, w ) = p(A(t,w))u satisfy the hypotheses of Theorem 2.1 1.3. Now it remains to show that (SP:) holds. Let E > 0, y > 0, and to E R , be given. Note that
From this and following the proof of Example 2.11.1, we can conclude that (SPY) holds for (2.9.15).From (2.11.22),it follows that
Hence we conclude that (SPQ) holds for (2.9.15).
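Condition (2.11.22) is itself a statement about a probability, and it can be estimated by Monte Carlo once A(t,ω) is specified. The Python sketch below is an illustration only: the 2 × 2 symmetric matrix with a random amplitude b(ω) is a hypothetical choice, and the Euclidean logarithmic norm is used.

```python
import numpy as np

def mu2(A):
    """Logarithmic norm for the Euclidean norm: largest eigenvalue of (A + A^T)/2."""
    return np.max(np.linalg.eigvalsh(0.5 * (A + A.T)))

rng = np.random.default_rng(5)
t = np.linspace(0.0, 30.0, 1501)
h = t[1] - t[0]
count, n_paths = 0, 200
for _ in range(n_paths):
    b = rng.uniform(0.0, 0.8)                              # one random amplitude b(omega)
    integral = sum(h * mu2(np.array([[-1.0 + b * np.cos(s), 0.2],
                                     [0.2, -1.0 - b * np.cos(s)]])) for s in t[:-1])
    count += integral >= 0.0
print(count / n_paths)   # estimate of P{ int_{t0}^{t} mu(A(s, omega)) ds >= 0 } at t = 30
```

For this choice the estimated probability is essentially zero, so (2.11.22) holds for large t and the example indicates stability in probability of the corresponding linear system.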
Remark 2.11.3. For Problem 2.9.1, the stability condition (2.11.22)corresponding to the norms (MN,)-(MN,) reduces to the following respective stability conditions: the random matrix 3[AT(t,w ) + A ( t , o ) ] is P-stable in probability, (2.11.23) p
~
:
~
~
~+ ~k~#l i~ u~i k ( ~s ~ w() ~ s
~
o
) t 2 to
+ T,
(2.11.24)
t 2 to + T, (2.11.25)
P t 2 to
+ T,
(2.11.26)
and the random matrix $Q-'[AT(t,~)Q + QA(t,o)]is P-stable in probability. (2.11.27)
Theorem 2.11.4. Assume that the hypotheses of Theorem 2.11.2 hold. Then
(SPX)
implies (SP,).
Proof. The proof follows from the proofs of Theorems 2.1 1.2 and 2.1 1.3. In certain cases, results that are based on the comparison theorem (Theorem 2.8.2) are useful in discussing the stability properties of (2.8.1). We shall simply state the result corresponding to Theorem 2.1 1.1, leaving its proof as an exercise.
Theorem 2.11.5. Assume that the hypotheses of Theorem 2.11.1 hold except that relation (2.11.1) is replaced by A(t)%3.1
%$4 + A'fOV(4 x, 0)Id t , A(t)t/'(t,x, 44,
X?
where A(t) = (aij(t)),aijE C [ R + ,R[(R,F,P),R ] ] , A - l ( t ) exists and its elements belong to M [ R + , R[R, R + ] ] w.p. 1, and A-'(t)A'(t) is productmeasurable with off-diagonal elements aij, i # j , nonnegative for t E R+. Then the (SPT)property of the trivial solution of (2.8.11) implies (SP,). Now we shall use the method of variation of parameters to derive some stability results in probability for (2.8.1) and its nonlinear perturbations (2.7.9).
Theorem 2.11.6. Assume that (i) the hypotheses of Theorem 2.7.4 hold; (ii) f ( t , O , ~ )= 0 w.p. 1, t E R,; (iii) there exists a random function CY E C [ R + ,R[R, R ] ] such that for each t E R + , x0(o) in (2.8.1) and cr(t,o)are independent random variables and P(f,(t, x, 4)5
@, W ) ,
(t,x) E R+ x BfP).
Then for any q > 0 and to E R,, there exists a positive number T such that
implies (SP,) of (2.8.1).
(2.11.28) =
T(to,q)
Proof. From hypothesis (ii), we have x(t, t o , O ) = 0 w.p. 1, where x(t, t o ,xo, w ) is a solution process of (2.8.1). This, together with Theorem 2.7.4 and Problem 2.9.5 and relation (2.11.28),yields
Now the rest of the proof follows by using the arguments in the proofs of Examples 2.11.1 and 2.11.2. Theorem 2.11.7. Assume that (i) (ii) (iii) (iv)
the hypotheses of Theorem 2.7.3 hold, f(t,O,w)~F(s,O,w)-Ow.p. 1 , t ~ R + , hypothesis (iii) of Theorem 2.1 1.6 holds, and IIF(t,x,w)lJ = o ( ~ ~asx x~ + ~ )O uniformly in t w.p. 1.
Then relation (2.11.29) implies (SP,) of (2.7.9). Proof. Let E > 0 be sufficiently small. Then relation (2.11.29) implies P
i
o:E(t -
to)
I
+ j'a(s,w)ds 2 0 f0
< q,
t2
to
+ T.
(2.11.31)
Hence one can choose a positive constant K such that
With the foregoing E and because of hypothesis (iv), there exists a 6 > 0 such that llxll < 6 implies IIF(t,x,w)(l < Now for Theorem 2.9.5, (2.7.14), and (2.11.30), we obtain
EI(x(I.
so long as I(y(t,to,xo(w),o)ll< 6. Multiplying both sides of (2.11.33) by exp[ - J:o a(s, o)ds] and applying the scalar version of Theorem 1.7.2, we get
so long as IIy(t, t o , xo(o),#)[I < 6. Now (2.11.34) shows that
Otherwise for any q > 0, there exists a T such that P(o:IlY(T,to,Xo(W),O)11 2 6) = q.
Then from (2.11.34)and (2.11.32),we get the contradiction q
= P { w :IlY(T,t o , X O ( 4 , O ) l l
2 6)
which proves our claim. Thus (2.11.34) holds for all t 2 t o ,and this, together with (2.1 1.31), yields the desired conclusion. 2.12.
STABILITY WITH PROBABILITY ONE
In this section, we shall present some stability criteria that assure the stability with probability one or almost-surely sample stability properties of the trivial solution of (2.8.1). Furthermore, some illustrations are given to show that the stability conditions are connected with the statistical properties of the random rate function of the differential equations. An example is also worked out to exhibit the advantage of the use of the vector Lyapunov function over the single Lyapunov function. First, we shall prove some results for stability with probability one in the context of random differential inequalities and Lyapunov functions. We then consider results for stability with probability one in the framework of the method of variation of parameters. Theorem 2.12.1. Assume that conditions (i)-(iii) of Theorem 2.1 1.1 hold. Then
(SST)
implies (SS,).
Proof. Let 0 < E < p and to E R , be given. Assume that (SST) holds. Then given b ( ~>) 0 and to E R + ,we can find a positive number d1 = 6(t0,&) such that
(2.12.1) implies
(2.12.3) E %? and a E C [ R + x R + , R [ Q , R + ] ] , we can find 6 = Since a(t, .,o) 6(to,E ) > 0 such that
4 t o ? I ( x o ( w ) l l , 4I 61
and
6
IIxo(4II
(2.12.4)
6, then hold simultaneously. We claim that if [Ix,(o)II I IIX(t,tO,XO,Oll < E ,
t 2 to.
Suppose that this claim is false. Then there would exist a sample solution ) IIx0(o)II I 6, Ql c 52, P(G1) > 0, and t , > to process x ( t , t O , x 0 , ~with such that IIx(tl,o)II = E ,
oE Ql,
Ilx(t,w)l( < E,
and
This implies that x ( t ) E B ( p ) for t E we have
[ t o ,tl),
t E [ t o , t l ) . (2.12.5)
and hence, from Theorem 2.8.1,
V(t,x(t, o), o)I r(t, t o ,u o , o )
for t E [ t o ,tl).
(2.12.6)
for t E
(2.12.7)
It therefore follows that m
c m
1
b(llx(t,w)ll) I W t , x(t, w), w ) I i= 1
i= 1
ri(t, o)
[ t o ,tl).
Relations (2.12.2), (2.12.5), and (2.12.7) and the continuity of the functions involved lead us to the contradiction
c V ( t 1 ,x ( t , , o ) , m
b ( ~I)
i= 1
m
0) I
C ri(t1,a)< b ( ~ ) ,
wEQ1-
i= 1
This establishes the fact that (SS,) holds. The proof of the theorem is complete. Example 2.12.1. Let p(A(t,w)) be the logarithmic norm of A(t,o) in (2.9.15). Then for t o E R+, there exist deterministic positive numbers K = K(to)and T = T(to)such that P{~:J: ~p(A(s,
implies ( S S , ) of (2.9.15).
ds I K , t 2
to
+T
I
=
1
(2.12.8)
Proof. By following the argument used in the proof of Example 2.11.1, the conclusion of the example remains true, provided that (SS:) holds for
(2.10.1) with m = 1 and g(t,u,w) = p(A(t,w))u. Observe that the sample p(A(s,w ) )ds is sample absolutely continuous. ThereLebesgue integral fore, from (2.12.8) and (2.9.16), the conclusion of the example remains true.
lt0
Remark 2.12.1. From Problem 2.9.1, stability condition (2.12.8) corresponding to the norms (MN,)-(MN,) reduces to the following respective stability conditions: the random matrix $[AT(t,o)+ A(t,w)] is P-bounded w.p. 1, (2.12.9) n
I
T =1,
1
t 2 t o + T =1,
(2.12.10)
(2.12.11)
(2.12.12)
the random matrix $Q- l[AT(t, w)Q
+ QA(t,w)] is P-bounded w.p. 1. (2.12.13)
Based on the proofs of Theorems 2.11.1,2.11.2, and 2.12.1, it is not difficult to construct the proofs of the rest of the notions of stability w.p. 1. We shall incorporate such results in the following problem, leaving the details to the reader.
Problem 2.12.1. Under the hypotheses of Theorem 2.11.1, show that (SS?) implies (SS,). If the assumptions of Theorem 2.11.2 hold, show that (SS?) implies (SS,) and ( S S t ) implies (SS,). Problem 2.12.2. Let p ( A ( t , o ) ) be the logarithmic norm of A(t,o)in (2.9.15). Then for to E R,, there exists a deterministic positive number
a = a(to) > 0 such that
implies (SS,) relative to (2.9.15). Remark 2.12.2. By Problem 2.9.1, stability condition (2.12.14) corresponding to the norms (MN,)-(MN,) reduces to the following respective stability conditions:
+
the random matrix i [ A T ( t , w ) A(t,w)] is P-stable w.p. 1, (2.12.15)
< -a@,)
-
I
=
1,
(2.12.16)
(2.12.17)
I
(2.12.18)
I -a(to) = 1,
and
the random matrix $Q-'[AT(t, w)Q + QA(t,o)]is P-stable w.p. 1. (2.12.19) Let us now discuss an example that shows a certain gain over various stability conditions. Consider the special type of random differential system ~ ' ( tO) , =
A(o)x + B(t,O ) X ,
x(~,,co)= x,(w),
(2.12.20)
K w.p. 1, E[l(B(t,o)ll]< 00 for t E R,, and the where x E R", IIA(o)ll I elements bij(t, w ) of the random matrix function B(t,o)= (bij(t, o))are product-measurable. Under these conditions it is obvious that the initialvalue problem (2.12.20)has a unique sample solution for t > t o .
In the following, we shall only discuss (SS,) of (2.12.20). However, other properties (SS,) and (SS,) can also be discussed analogously, which we leave as exercises. Set A(t,o)x = [ A ( o )+ B(t, w)]x = A(w)x + B(t,o ) x . This, together with (2.12.20) and the stability condition (2.12.14) in Problem 2.12.2, for (2.12.20), yields
1 1
p(A(w)+ B ( s , o ) ) d s I
-U
= 1, (2.12.21)
where a is as described in Problem 2.12.2. O n the other hand, if one uses the property of the logarithmic norm, p ( A ( o )+ B(t,w))5 p ( A ( o ) )+ p(B(t,o)), then in view of Theorem 2.9.1, the stability condition for (2.12.20) is given by
o:p(A(w))+ lim sup t-+m
[ 1 J', ~
to
1 1
p(B(s,0)) ds I - u = 1. (2.12.22)
Furthermore, if we utilize the property that )p(B(t,w))1 5 []B(t, w ) ] ) then , again in view of Theorem 2.9.1, the stability condition for (2.12.20)reduces to
From the above development of stability conditions (2.12.21)-(2.12.23), one can easily see that condition (2.12.23)demands more than condition (2.12.22), and condition (2.12.22)demands more than condition (2.12.21). Furthermore, if we assume that the elements b,,(t,w) of the random matrix function B(t, w ) are strictly stationary metrically transitive stochastic processes, then, by Theorem 2.1.5, stability conditions (2.12.21)-(2.12.23) reduce to
+
P ( w : p ( A ( o ) ) E [ B ( t o , o ) ] 5 - a } = 1,
(2.12.24)
and
+
P { o : p ( A ( o ) ) EI(B(to,o)l[I - a }
=
1,
(2.12.26)
respectively. Remark 2.12.3. By Problem 2.9.1, stability conditions (2.12.24)-(2.12.26) with respect to the norms (MNJ-(MN,) can easily be formulated. Nonetheless, in order to show the richness of the preceding stability analysis, we
exhibit stability condition (2.12.25), corresponding to (MN3);that is,
which implies that at least one of the matrices A ( o ) and E I B ( t o , o ) ] is P-stable. We shall next offer a simple example to illustrate the fact that the stability conditions presented can be interpreted in the context of the laws of large numbers. Consider the linear differential system with random parameters X'(4 4 = A(t,o)x(t,4,
x ( t o , 4 = xo(w),
(2.12.28)
where x E R", E [ ( ( A ( t , w ) ( exists (] for t E R,, and the elements aij(t,o) of the random matrix A(t,w ) = (aij(t,w ) )are product-measurable. Again, under these conditions the initial-value problem (2.12.28) has a unique sample solution for t 2 t o . Set C(t)= E [ A ( t ,a)]
and
B(t,o)= A(t,o)- E[A(t,o)].
Then the system (2.12.28) can be rewritten as X'(t,W ) = C(t)x
+ B(t,W ) X ,
x(t0, ~
0 =)
x~(o).
(2.12.29)
Now analogously stability conditions (2.12.21)-(2.12.23) can be rewritten relative to (2.12.29). For example, conditions (2.12.22) and (2.12.23) become
(2.12.30) and
I
-.]
=
1,
(2.12.31)
respectively. We note that conditions (2.12.30) and (2.12.31) show that the random function A(t,o)satisfies the strong law of large numbers. Similarly, other laws of large numbers can be associated with other stability conditions such as (2.11.22).
Remark 2.12.4. From the foregoing discussion, we note that in the case of non-white-noise coefficients such as strictly stationary metrically transitive random coefficients, the randomness may be a stabilizing agent. This remark can be justfied from (2.12.25)and (2.12.30). To exhibit the fruitfulness of using a vector Lyapunov function instead a single Lyapunov function, we discuss the following example.
Example 2.12.2. Consider the system of differential equations with random coefficients

x'(t,ω) = A(t, x, ω)x(t,ω),   x(t₀,ω) = x₀(ω),   (2.12.32)

where x ∈ R² and A(t, x, ω) is a 2 × 2 random matrix function defined by

A(t, x, ω) = ( e^{−t} − f₁(t, x, ω)    sin(2πt + θ(ω)) ;  sin(2πt + θ(ω))    e^{−t} − f₂(t, x, ω) ),

f₁, f₂ ∈ M[R₊ × R², R[Ω, R₊]], f_i(t, 0, ω) = 0 w.p. 1, and θ ∈ R[Ω, R₊] is uniformly distributed over [0, 2π]. It is obvious that sin(2πt + θ(ω)) is an ergodic process. First, we choose a single Lyapunov function V(t, x, ω) given by V(t, x, ω) = x₁² + x₂². It is easy to see that

D⁺_{(2.12.32)} V(t, x, ω) ≤ [2e^{−t} + 2|sin(2πt + θ(ω))|] V(t, x, ω),

because 2|ab| ≤ a² + b² and f_i(t, x, ω) ≥ 0. Clearly, the trivial solution process of the comparison equation

u'(t,ω) = 2[e^{−t} + |sin(2πt + θ(ω))|] u(t,ω)

is not stable w.p. 1. Hence we cannot deduce any information about the stability of (2.12.32) from the scalar version of Theorem 2.11.1, even though it is stable. Now we attempt the stability analysis of (2.12.32) by employing vector Lyapunov functions. We take
It is easy to assume that components V, and V2of V are not positive definite and hence do not satisfy the scalar version of Theorem 2.11.1. However, V(t,x, o)does satisfy the hypotheses of Theorem 2.11.l. In fact,
and the vectorial inequality
is satisfied with
It is clear that g ( t , u , o ) is sample quasi-monotone nondecreasing in u for t E R,. It is obvious that u ' ( t , o ) = g(t, u ( t , o ) , o)is stable w.p. 1. Consequently, the trivial solution of (2.12.32) is stable by Theorem 2.1 1.1.
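The conclusion of Example 2.12.2 can also be observed by direct simulation of (2.12.32). The Python sketch below is an illustration under simplifying assumptions: it takes f₁ = f₂ = 0 (the argument above only uses f_i ≥ 0), draws θ(ω) uniformly on [0, 2π], and records the largest value of ||x(t,ω)|| over many sample paths started from a small initial condition.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 40.0, 4001)
h = t[1] - t[0]
worst = 0.0
for _ in range(100):
    theta = rng.uniform(0.0, 2.0 * np.pi)       # theta(omega) uniform on [0, 2*pi]
    x = np.array([0.1, -0.05])                  # small initial condition
    sup_norm = np.linalg.norm(x)
    for s in t[:-1]:                            # Euler step for (2.12.32) with f1 = f2 = 0
        A = np.array([[np.exp(-s), np.sin(2 * np.pi * s + theta)],
                      [np.sin(2 * np.pi * s + theta), np.exp(-s)]])
        x = x + h * (A @ x)
        sup_norm = max(sup_norm, np.linalg.norm(x))
    worst = max(worst, sup_norm)
print(worst)   # remains a moderate multiple of the initial condition on every path
```

The bounded output across all sampled θ reflects the stability with probability one that the vector Lyapunov argument establishes, while the scalar bound 2[e^{−t} + |sin(2πt + θ)|] would allow unbounded growth.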
Problem 2.12.3. Assume that the hypotheses of Theorem 2.11.5 hold. Then (i) (SST) of (2.8.11) implies (SS,); (ii) (SS;) of (2.8.11) implies (SS,). In the following, the method of variation of parameters will be used to derive the results on the stability w.p. 1 with respect to the trivial solutions. of (2.8.1)and its nonlinear perturbation (2.7.9).
Theorem 2.12.2. Let the hypotheses of Theorem 2.1 1.6 hold. Then for to E R,, there exists a positive number a = a(to)such that =
1
(2.12.33)
implies (SS,) of (2.8.1). Proof. By following the proof of Theorem 2.11.6, one can arrive at (2.11.30). Now the proof of the theorem follows from relation (2.12.33).
Theorem 2.12.3. Let the hypotheses of Theorem 2.1 1.7 hold. Then relation (2.12.33) implies (SS,) of (2.7.9). Proof. Let E > 0 and be sufficiently small. Then relation (2.12.33)implies
'+J J: 1 i [ +J : 1 {
P o:lim ~ (-t to)+
a(s,w)ds
I -a/2
I I
=
1.
(2.12.34)
Hence there exists a positive deterministic number K such that P w:exp ~ (-t t o )
a(s,o)ds i K , t 2 to = 1.
(2.12.35)
Because of hypothesis (iv), by using the argument in the proof of Theorem 2.11.7 with Ilxo(o)II I 6 / K , we arrive at (2.11.34). From (2.11.34) one can show that (ly(t,to,xo(w),w)((< 6 for all t 2 to. Otherwise there exists a T such that Ily(t,to,xo(w),o)ll= 6 and Ily(t,t,,x,(o),w)II I 6 for to I t IT w.p. 1. Then from (2.11.34) and (2.12.35), we get the contradiction 6 < ( 6 / K ) K = 6, which proves our claim. Thus (2.11.34) holds for all t 2 t o and this, together with (2.12.34),yields the desired conclusion.
2.13. STABILITY IN THE pth MEAN
Now, we shall present some stability criteria that assure the pth mean or moment stability properties of the trivial solutions of (2.8.1) and (2.7.9). Theorem 2.13.1. Assume that conditions (i) and (ii) of Theorem 2.11.1 hold. Suppose further that
(5) for (t,x) E R , x B ( p )
c m
b(llX11”)I K/k(t,X,4I 4 ,IIxII”), i= 1
where b E V X ,a E %X,and p 2 1. Then
(SMT)
impfies (SM,).
Proof: Let 0 < E < p, t o E R , be given. Assume that (SMT) holds. Then for b(E) > 0 and t o E R,, there exists 6, = 6(t0,&)such that rn
1 E[uidw)l 5 6
i= 1
implies m
(2.13.1) We chose uo(w)such that V(to,xo(w),o)I uo(o) and m
Since a(to,.) E X , we can find a 6 = 6(t0,&)such that E [ 1 / ~ ~5 ( 16~ ] implies
afto, E[llxo(o)((P])6,.
(2.13.3)
Now we claim that if ( E [ I I x ~ I I ~ ]I ) ”6,~then ( E [ I I X ( ~ ) I I ~
xn
J: f(s, xn(s,o), ds.
+ l(t, 0) = xo(o) +
0)
By Remark 2.6.1, it follows that x_{n+1}(t,ω) → x(t,ω) uniformly in t w.p. 1, where x(t,ω) is a sample solution of (3.2.3). Note that

||x_{n+1}(t,ω) − x_n(t,ω)|| ≤ ∫_{t₀}^{t} K(s,ω) ||x_n(s,ω) − x_{n−1}(s,ω)|| ds.

Since

||x₁(t,ω) − x₀(ω)|| ≤ ∫_{t₀}^{t} ||f(s, x₀(ω), ω)|| ds ≤ L(t,ω) ≤ u₁(t,ω) − u₀(t,ω)

and

u_{n+1}(t,ω) − u_n(t,ω) = ∫_{t₀}^{t} K(s,ω)(u_n(s,ω) − u_{n−1}(s,ω)) ds,
i(t, W ) - xn(t, w)ll 2
un+
i(t, 0)-
W)
M~Q~~FFB
and II~i+l(t*w) - xXt,w>11 I K(t,o)(un(t,w)- u n - l ( t , m ) ) 5 ub+ l(t, w ) - ub(t, 0).
Hence by (d), x,(t,w) -+ x(t,co) uniformly in t w.p. 1, uniformly in Lp, and in the Lp-integral sense. Similarly, by (f), xh(t, o)+ x’(t, w ) w.p. 1 for almost all t, in Lp for almost all t, and in the LP-integralsense. We note that f ( t ,Xn(t,w), 0)==f ( t ,xdw), W) n
+ i1 [ f ( t ,xi(C a), 0) - f ( t , xi - l(t,m),w)l = 1 and hence that I l f ( t J , ( t , 4 > 4 ( I 5 ((f(t,X0(4,W)(( n
n
Consequently, jlf(t9
X&))l\,
5
Jlf(4Xo))j, + I l W l l p .
Also,
Ilf(4Xfl(t))Jl,
+
llf(4x(t))lI,
as n -+ GO for almost all t E J.Thus by the application of Lebesgue dominated convergence theorem we conclude that
s:,llf(t>
x(t))ll,dt < GO
on
J.
The proof of uniqueness follows from the uniqueness of sample solutions of (3.2.3) Corollary 3.3.1. Assume that all hypotheses of Theorem 3.3.1 hold, except that (ii) and (iv) are replaced by
on J . (3.3.6) Then the conclusion of Theorem 3.3.1 remains true.
Proof. It is enough to show that the sample solution u(t,w) is an Lpsolution of (3.3.4). This means that it is sufficient to show that (3.3.7)
where u_n(ω) is as defined in (3.3.5); this, in turn, is implied by (3.3.6). Hence (3.3.7) is true, proving the corollary.

Problem 3.3.1. Consider the linear sample problem

x'(t,ω) = A(t,ω)x(t,ω) + P(t,ω),   x(t₀,ω) = x₀(ω),   (3.3.8)

where A(t,ω) is an n × n product-measurable random matrix function whose entries a_{ij}(t,ω) are sample Lebesgue-integrable on J, P(t,ω) is a product-measurable n-dimensional random vector function whose elements are Lᵖ-integrable on J, and x₀ ∈ Lᵖ[Ω, R^n]. Let K(t,ω) = ||A(t,ω)||. Suppose that A(t,ω), P(t,ω), and x₀(ω) are independent random functions for all t ∈ J. Let L(s, t) = E(exp(sK(t,ω))). Then show that the sample solution x(t,ω) of (3.3.8) is a (unique) Lᵖ-solution on J if

∫_{t₀}^{t} L(s*, u) du < ∞   for t ∈ J,   (3.3.9)

and

p < s* a⁻¹.   (3.3.10)
(Hint: By using the Holder inequality, Lemma 3.1.4, and the properties of a moment-generating function L(s,t ) of K(t,w), show that
1
llK(s,w )
r'" fo
K(v, w )dv erp(
Jl K(u,w )du)l/ ds < cc
on J.)
P
Problem 3.3.2. Consider the sample initial-value problem (3.3.8) in which A(t, w), P(t,w), and xo(w) satisfy all the hypotheses listed in Problem 3.3.1 except (3.3.9). Further, assume that the elements aij(t,w)are mutually independent. Define Lij(s,t ) = E(exp(saij(t,0))).Then show that (i) if for some s* > 0 and each (i,j ) ,
and
J',Lij(- s*, t ) dt
(3.3.11)
are finite on J , then the sample solution of (3.3.8)is an LP-solution, provided that p < s*n2a-';
(ii) if for some s* > 0 and each (i,j ) ,
66
(Lij(s*,t))"' dt
and
(3.3.12)
6'
(Lij(- s*, t ) Y 2dt
(3.3.13)
are finite on J , then the sample solution of (3.3.8) is an LP-solution, provided that p < s*a-
(3.3.14)
We note that if the moment-generating functions Lij(s,t ) of aij(t,w ) in Problem 3.3.2 are defined everywhere as functions of s and are locally L" functions of t E J, then the sample solution of (3.3.8) is an LP-solution for all p for which P(t) is LP-integrable. This observation leads to the following problem. Problem 3.3.3. Suppose that xo, A(t,o),and P(t,w ) are as in Problem 3.3.2. Suppose further that the coefficients aij(t,o)and p j ( t ,o)of A(t, o)and P(t, w), respectively, are normally distributed with means and variances that are bounded on J . Then show that the sample solution x(t,w) of (3.3.8) is the LP-solution for all p on J . Problem 3.3.4. Consider a scalar random differential equation x'(t,0)= a(o)x(t,o),
x(t,, 0)= 1,
(3.3.15)
where a E R[R, R ] has the density function +(q) = iexp[ - IqI]. Then show that the sample solution of (3.3.15) x(t,w) is an LP-solution for t~ [ t o , t o + (l/P)). ( H i n t : Note that
provided that t E [ t o , to
+ (l/p)).)
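The threshold t₀ + 1/p in Problem 3.3.4 is easy to confirm numerically: since x(t,ω) = exp(a(ω)(t − t₀)), one has E|x(t)|^p = E[exp(p a(ω)(t − t₀))], and for the double-exponential density this moment generating function equals 1/(1 − s²) for |s| < 1, with s = p(t − t₀). The Python sketch below is an illustration of this computation by Monte Carlo (the sample size and the chosen time points are arbitrary, and t is kept well below t₀ + 1/p so that the estimator itself has finite variance).

```python
import numpy as np

rng = np.random.default_rng(7)
p, t0 = 2, 0.0
a = rng.laplace(loc=0.0, scale=1.0, size=2_000_000)      # density (1/2) exp(-|q|)

for t in (0.1, 0.2):                                      # both below t0 + 1/p = 0.5
    s = p * (t - t0)
    mc = np.mean(np.exp(a * s))                           # Monte Carlo E|x(t)|^p = E[e^{p a (t - t0)}]
    exact = 1.0 / (1.0 - s**2)                            # moment generating function of the Laplace law
    print(t, mc, exact)
# For t >= t0 + 1/p the expectation diverges, so x(t) fails to be an L^p random variable.
```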
The following Peano-type existence result gives the sufficient conditions under which every sample solution of (3.2.3) is also an LP-solution.
Theorem 3.3.2. Assume that (HI) Wz)
f E M[J X
B(z, p), R[Q, R"]] and f(t, x, o)is sample continuous in x; 9 E M [ J x [092pl, R[Q R + I l , g(t, u, o)is sample continuous nondecreasing in u for each t E J w.p. 1, and g(t,u(t),w) is sample Lebesgue-integrable whenever u(t) is sample absolutely continuous on J and satisfies Jlf(t?441)5 g(t, JfxJJ, 4
(HJ
(3.3.16)
a.e. on J and for x E B(z,p); u(t,w ) is the sample maximal solution of u ' ( ~ , o )= g(t,u(t,o),o),
u(to,o) = u ~ ( o ) ,
(3.3.17)
existing for t E J , which is also an LP-solution; (H4) x o ( 4 E B(z73p) and IIxo(4ll 5 I l U o ( 4 J I W.P. 1. Then the initial-value problem (3.2.3) has at least one LP-solution x(t, t o ,xo) on [ t o , t o b] for some b > 0.
+
Proof. By Theorem 2.2.1 it follows that the initial-value problem (3.2.3) has at least one sample solution x(t, t o ,xo(o),o)= x(t, o)on [ t o ,t o b] for some b > 0. By Lemma 2.2.1, we have
+
u(t,o) = uo(o)
+ lg(s,u(s,o),w)ds
x(t, 4 = xo(4
+1 1f(s, x(s, 4,4ds,
and
on J
t E [to
9
to
+ bl,
where the integral denotes the sample integral. Using this, together with (3.3.17), (H2),(HJ, and the scalar version of Theorem 2.5.1, we get for t E [ t o , t o + b].
(Ix(t,o)(I5 u ( t , o )
(3.3.18)
Now we show that [x(t,w)] = x ( t ) is an LP-solution of (3.2.3). By (3.3.18) it is obvious that x(t) E LPIS1,Rn] on [ t o , t o b]. From (3.3.16) and (3.3.17), we obtain
+
Ilx(t,w) - x(ti,W)II 5
(U(t,W) -
u ( t i , o ) ( W.P. 1
(3.3.19)
for all t , , t E [ t o , t o + 61, which implies that (Ix(t)- x(h)l(pI I(u(t)- 4 t , ) l l p
for all t, t ,
E [to, to
+ bl.
This establishes the pth mean continuity of x ( t ) because of the fact that u(t) is pth mean continuous. Now we show that x(t) has a pth mean derivative and satisfies x'(t) = f ( t , x(t)),where f ( t , x ( t ) ) = [ f ( t ,x(t, w),a)]. For any t E [ t o , to + b] and for any h E R such that t + h E [ t o ,to + b], we see, because of (3.3.19),that
+
~ ( th, O)- ~ ( O t ,)
w.p. 1.
(3.3.20)
We observe that (3.3.21) a.e. on
[ t o , to
+ b] and (3.3.22)
and in the sense of the LP-integral a.e. on [ t o ,to + b]. From (3.3.20), (3.3.21), (3.3.22), and by the application of the generalized Lebesgue dominated convergence theorem (Theorem 3.1.2), we conclude that
in the sense of the LP-integral a.e. on [ t o ,to + b]. This proves that x(t) = [x(t, w ) ] has a pth mean derivative and satisfies the equation x'(t) = f ( t ,x ( t ) )
a.e. on [to,to + b],
which proves the theorem. Problem 3.3.5. Let g(t, u, w ) in (3.3.17)be defined by
s(t,u, 4 =
w, w)u + PO,4,
(3.3.23)
where K , p E M[J, R [ Q R + ] ] ,K(t, w ) is sample Lebesgue-integrable on J and p ( t , w ) is LP-integrableon J . Let xo E LP[Q R"]. Let K(t, w), p ( t , o), and xo(w) be independent on J. Let L(s,t ) = E[exp(sK(t, o))]. Then show that ifL(s, t )and p satisfy (3.3.9)and (3.3.10),respectively,(3.2.3)has an Lp-solution. (Hint: Apply Problem 3.3.1 and Theorem 3.3.2.)
Remark 3.3.1. If p = GO in Theorems 3.3.1 and 3.3.2, then the corresponding results establish the global existence of solutions of LP-problems. Also, based on the results on continuation of sample solutions of (3.2.3) in
Section 2.2, one can formulate the corresponding results covering the continuation of LP-solutions of (3.2.3). We do not attempt this. The following generalization of Theorem 3.3.1 is interesting in itself.
Theorem3.3.3. Assume that all the hypotheses of Theorem 3.3.2 are satisfied. Further, assume that f satisfies the relation
Ilfk x, 4 - f(t,y, w)JJI w, IJx- YJJ,0) W.P. 1
(3.3.24)
a.e. on J and x, y E B(z,p), where G E M [ J x [ 0 , 2 p ] , R [ Q , R + ] ] ,G(t,u,w) is sample continuous is u for almost all t E J w.p. 1, and G(t,u(t),w ) is sample Lebesgue-integrable on J whenever u(t) is sample absolutely continuous on J . Let u(t) = 0 be the unique LP-solution of ~ ' (W t ,) = G ( t ,u ( ~ , w ) , w ) ,
~ ( t ,O , ) = 0,
(3.3.25)
existing on J. Then the differential system (3.2.3) has a unique LP-solution on J . Proof. By Theorem 3.3.2, the initial-value problem (3.2.3) has an Lpsolution. Let x(t,w) = x ( t , t , , x , , o ) and y(t,w) = y(t,to,x,,w) be two Lpsolutions of (3.2.3)on J. Set
m(t,w) = Ilx(t,w) - y(t,o)ll
so that
m(t,,w) = 0.
From (3.3.25) we have
m ' k 4 I Ilf(4x(t,
4
3
0
)
- f k y(t2 4,w)Il
I G ( t ,m(t,w),w ) w.p. 1.
By Theorem 2.5.1 we then obtain m(t, 4 5 r(t,4, t E
[ t o ,t o
+ a],
(3.3.26)
where r(t,w ) is the maximal solution of (3.3.25) through (to,O). From (3.3.26) we get
ll~(~I ) l (Ir(t)llp ~p
on
J.
By assumption, r(t) = 0 on J , and so m(t) = 0 on J . This completes the proof.
Remark 3.3.2. The function G(t,u, w ) = K(t,u ) u is admissible in Theorem 3.3.3, where K E M [ J , R[Q, R , ] ] and K(t,u) is sample Lebesgueintegrable on J . In this case, Theorem 3.3.3 reduces to Theorem 3.3.1. 3.4.
CONTINUOUS DEPENDENCE
This section deals with the continuous dependence of solutions on the parameter.
Theorem 3.4.1. Assume that
f E M [ J x B(z, p ) x A, R[O,R"]] and f ( t ,x, A, o)is sample continuous in x for each ( t ,A) and continuous in (x, A) for each fixed t, A being an open A-set in R"; G E M [ J x R,, R[O,R , ] ] , G(t,u,w) is sample continuous nondecreasing in u for each t E J , and G(t,u(t),o)is sample Lebesgueintegrable whenever u(t) is sample absolutely continuous; u(t) = 0 is the unique LP-solution of (3.3.25) such that u(to) = 0 and the solutions u(t,to, u o , o ) of (3.3.25) are LP-continuous with respect to (to, uo); for ( t ,x,A), ( t ,x, A) E J x B ( z , p ) x A, Ilf(t,x,A,0)- f ( t , Y , Lo)ll I G(t7IIx - ~ 1 1 , a), W.P. 1; (3.4.1)
there exist product-measurable functions K , p E M [ J , R[Q, R + ] ] such that K(t,w ) is sample Lebesgue-integrable on J and p ( t , o) is LP-integrable on J ; also, let xo(,l, o)E LP[QR"] be LP-continuous in A and K(t, o),p ( t , w), x&, o)be independent; furthermore, suppose that Ilf(t,x,o)ll I K(t,o)llx(l + p ( t , o) w.p. 1
(3.4.2)
and
on J; (H6)* f ( t , x,I,, o)is LP-continuous in 3, E A.
Then given E > 0, there exists a 6 ( ~>) 0 such that for every I,, Illb - Ao\/ < S(E), and the system of differential equations x'(4 4 = f ( t ,x(t,a), A,a),
x(t,,
A,w ) = x&,
0)
(3.4.4)
admits unique LP-solution x ( t ,w ) = x(t, t o ,xo, I,, o)satisfying Ilx(t,A,o)- x(t,Ao,o)llp< E
for t
E
J.
(3.4.5)
Proof. Under hypotheses (HI) and (H5), Theorem 3.3.2 shows that the initial-value problem (3.4.4) has at least one LP-solution x(t, A, o)= x(t,t,,x,(A,o),o) on [ t o , to + b]. Furthermore, because of (H2)A-(H4)A and Theorem 3.3.3, x(t, 3,) is the unique LP-solution process of (3.4.4). To prove that (3.4.5) holds, let x(t, A) and x(t,&,) be LP-solutions of (3.4.4) with j/A < 6 ( ~ )Set . m ( 4 o ) = I l x ( t , b ) - x(t,&,w)ll
Setting
444 = r] + J~G(s,m(s,w),w),
u ( t o , o ) = vl(o),
relation (3.4.6),together with the nondecreasing property of G, yields
4 44 I G(t,4 4 4,4,
4 t 0 , w)= V(0).
Hence by Theorem 2.5.1 we have u(t,w) I r(t,t,,uo,4, t E [ t o , to + bl, is the maximal solution of (3.3.25) through ( t o ,uo) with where r(t, t o , #,,a) uo = r]. This, together with (3.4.6), gives
Since the solutions u(t, t o , uo, o)of (3.3.25) are assumed to be LP-continuous with respect to u o , it follows that limr(t,t,,r])
= r(t, to,O)
r1-0
and, by hypotheses, r(t, t o , O ) (HJ2, and r], yields
= 0. This, in
view of the definition of rn(t,w),
lim Ilx(t,A,w) - x(t,Ao,w)ll,
A+&
which establishes the theorem.
= 0,
We are now ready to prove a theorem that establishes the continuous dependence of solutions of (3.2.3) on the initial conditions (t₀,x₀).

Theorem 3.4.2. Assume that the hypotheses of Theorem 3.3.3 hold. Furthermore, assume that the solution process u(t,t₀,u₀,ω) is Lᵖ-continuous with respect to its initial data. Then the solutions x(t,t₀,x₀,ω) of (3.2.3) through (t₀,x₀) are Lᵖ-unique and Lᵖ-continuous with respect to the initial conditions (t₀,x₀).

Proof. The proof follows easily from the proofs of Theorems 2.6.3 and 3.4.1. We omit the details.

We remark that the Lᵖ-differentiability of solutions with respect to initial conditions can be studied analogously to the differentiability of solutions of abstract differential systems. See Ladas and Lakshmikantham [49]. Moreover, based on the previous results, results similar to those of Section 2.7 can be developed. We omit such results to avoid monotony.

3.5. COMPARISON THEOREMS
Consider the system of first-order differential equations

x′ = f(t,x),   x(t₀) = x₀,   (3.5.1)

where the prime denotes the pth mean derivative of x, f ∈ C[R₊ × B(x₀,ρ), Lᵖ[Ω, Rⁿ]], and f(t,x) is smooth enough to guarantee the existence of Lᵖ-solutions x(t) = x(t,t₀,x₀) of (3.5.1) for t ≥ t₀, t₀ ∈ R₊. The following results give estimates for the solutions of (3.5.1) in terms of maximal solutions of a deterministic comparison system.

Theorem 3.5.1. Assume that

(i) g ∈ C[R₊ × Rᵐ, Rᵐ] and g(t,u) is quasi-monotone nondecreasing in u for fixed t ∈ R₊;
(ii) r(t) = r(t,t₀,u₀) is the maximal solution of a deterministic differential system

u′ = g(t,u),   u(t₀) = u₀,   (3.5.2)

existing for t ≥ t₀;
(iii) V ∈ C[R₊ × Lᵖ[Ω, Rⁿ], Rᵐ], V(t,x) satisfies a local Lᵖ-Lipschitz condition in x, and for (t,x) ∈ R₊ × Lᵖ[Ω, Rⁿ],

D⁺V(t,x) = lim sup_{h→0⁺} (1/h)[V(t + h, x + h f(t,x)) − V(t,x)]

satisfies

D⁺V(t,x) ≤ g(t, V(t,x));   (3.5.3)
(iv) x(t) is any Lᵖ-solution of (3.5.1), existing for t ≥ t₀, such that

V(t₀, x₀) ≤ u₀.   (3.5.4)

Then

V(t, x(t)) ≤ r(t)  for t ≥ t₀.   (3.5.5)

Proof. Set m(t) = V(t, x(t)), so that m(t₀) = V(t₀, x₀), where x(t) is any Lᵖ-solution of (3.5.1). For small h > 0, we have

m(t + h) − m(t) = V(t + h, x(t + h)) − V(t, x(t))
  = V(t + h, x(t) + h f(t, x(t))) − V(t, x(t)) + V(t + h, x(t + h)) − V(t + h, x(t) + h f(t, x(t))).

Estimating the last difference by the local Lᵖ-Lipschitz condition of V in x, hypothesis (iii) now yields the inequality

D⁺m(t) ≤ g(t, m(t)),   t ≥ t₀.   (3.5.6)

By Theorem 1.7.1, we then have

m(t) ≤ r(t)  for t ≥ t₀,

and the proof is complete.

Corollary 3.5.1. Assume that the hypotheses of Theorem 3.5.1 hold with g(t,u) ≡ 0. Then the function V(t, x(t)) is nonincreasing in t, and V(t, x(t)) ≤ V(t₀, x₀) for t ≥ t₀.
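The comparison argument used above is easy to check numerically. The following sketch (Python; the comparison function g and the defect term are arbitrary illustrative choices, not taken from the text) integrates the comparison equation (3.5.2) by Euler's method together with a function m satisfying D⁺m(t) ≤ g(t, m(t)) and m(t₀) ≤ u₀, and confirms the scalar conclusion m(t) ≤ r(t) of Theorem 3.5.1.

```python
import numpy as np

# Minimal numerical sketch of the scalar comparison principle behind Theorem 3.5.1:
# if D+ m(t) <= g(t, m(t)) and m(t0) <= u0, then m(t) <= r(t, t0, u0),
# where r is the solution of u' = g(t, u).  g and the "defect" below are illustrative.

def g(t, u):
    return -0.5 * u + 0.2 * np.sin(t)

t0, T, n = 0.0, 10.0, 10000
ts = np.linspace(t0, T, n + 1)
h = ts[1] - ts[0]

u0 = 1.0
r = np.empty(n + 1); r[0] = u0          # comparison solution r(t, t0, u0)
m = np.empty(n + 1); m[0] = 0.8         # m(t0) <= u0
for k in range(n):
    r[k + 1] = r[k] + h * g(ts[k], r[k])
    # m satisfies a differential *inequality*: m' = g(t, m) - defect with defect >= 0
    m[k + 1] = m[k] + h * (g(ts[k], m[k]) - 0.1 * (1 + np.cos(ts[k])))

print("max(m - r) =", np.max(m - r))    # expected to be <= 0
```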
Theorem 3.5.2. Let the hypotheses of Theorem 3.5.1 hold, except that inequality (3.5.3) is strengthened to

A(t) D⁺V(t,x) + A′(t) V(t,x) ≤ g(t, A(t) V(t,x))   (3.5.7)

for (t,x) ∈ R₊ × Lᵖ[Ω, Rⁿ], where A(t) is a continuously differentiable m × m matrix function such that A⁻¹(t) exists and its elements are nonnegative, and the elements of the matrix A⁻¹(t)A′(t) = (a_{ij}(t)) satisfy a_{ij}(t) ≤ 0 for i ≠ j. Then

A(t₀) V(t₀, x₀) ≤ u₀   (3.5.8)

implies

V(t, x(t)) ≤ R(t),   t ≥ t₀,   (3.5.9)

where R(t) = R(t, t₀, v₀) is the maximal solution of the differential system

v′ = A⁻¹(t)[−A′(t)v + g(t, A(t)v)],   v(t₀) = v₀,   (3.5.10)

existing for t ≥ t₀, and A(t₀)v₀ = u₀.
Proof. Set W(t,x) = A(t)V(t,x). From (3.5.7) and the nature of A(t), it is obvious that

D⁺W(t,x) ≤ g(t, W(t,x))

for (t,x) ∈ R₊ × Lᵖ[Ω, Rⁿ]. Now, by following the argument used in the proof of Theorem 2.8.2, it is easy to conclude the proof of the theorem.

3.6. STABILITY CRITERIA
Let x(t) = x(t, t₀, x₀) be any solution of (3.5.1). Without loss of generality, we assume that x(t) ≡ 0 is the unique solution of (3.5.1) through (t₀, 0). We list a few definitions concerning the stability of the trivial solution of (3.5.1).

Definition 3.6.1. The trivial solution of (3.5.1) is said to be

(S₁) stable if for each ε > 0 and t₀ ∈ R₊, there exists a positive function δ = δ(t₀, ε) that is continuous in t₀ for each ε such that ‖x₀‖ₚ < δ implies ‖x(t)‖ₚ < ε for t ≥ t₀;
(S₂) uniformly stable if (S₁) holds with δ independent of t₀;
(S₃) asymptotically stable if it is stable and if for any ε > 0 and t₀ ∈ R₊, there exist positive numbers δ₀ = δ₀(t₀) and T = T(t₀, ε) such that ‖x₀‖ₚ ≤ δ₀ implies ‖x(t)‖ₚ < ε for t ≥ t₀ + T;
(S₄) uniformly asymptotically stable if (S₁) and (S₃) hold with δ, δ₀, and T independent of t₀.

It is obvious that the stability, uniform stability, asymptotic stability, and uniform asymptotic stability of the trivial solution of (3.5.1) are equivalent to (SM₁)–(SM₄), respectively, in Definition 2.10.1. Let us assume that the comparison equation (3.5.2) possesses the trivial solution. Then we can define the corresponding stability concepts (S₁*)–(S₄*) for the trivial solution of (3.5.2). For example, (S₁*) is given by the following definition:

Definition 3.6.2. The trivial solution u ≡ 0 of (3.5.2) is said to be (S₁*) stable if for each ε > 0 and t₀ ∈ R₊, there exists a positive function δ = δ(t₀, ε) that is continuous in t₀ for each ε such that Σ_{i=1}^{m} u_{i0} ≤ δ implies Σ_{i=1}^{m} u_i(t, t₀, u₀) < ε for t ≥ t₀. The definitions of (S₂*)–(S₄*) can be formulated similarly.
We now present a result concerning the stability of the trivial solution of (3.5.1).

Theorem 3.6.1. Assume that

(i) g ∈ C[R₊ × Rᵐ, Rᵐ] and g(t,u) is quasi-monotone nondecreasing in u for fixed t ∈ R₊;
(ii) V ∈ C[R₊ × B(ρ), Rᵐ], V(t,x) satisfies a local Lᵖ-Lipschitz condition in x, and for (t,x) ∈ R₊ × B(ρ),

D⁺_{(3.5.1)} V(t,x) ≤ g(t, V(t,x)),   (3.6.1)

where B(ρ) = {x ∈ Lᵖ[Ω, Rⁿ]: ‖x‖ₚ < ρ};
(iii) for (t,x) ∈ R₊ × B(ρ), relation (3.6.2) holds, where b, a(t,·) ∈ 𝒦 and a ∈ C[R₊ × R₊, R₊].

Then (S₁*) implies (S₁).
Proof. Let ε > 0 and t₀ ∈ R₊ be given. Assume that (S₁*) holds. Then given b(ε) > 0 and t₀ ∈ R₊, there exists a positive function δ₁ = δ₁(t₀, ε), continuous in t₀ for each ε, such that Σ_{i=1}^{m} u_{i0} ≤ δ₁ implies

Σ_{i=1}^{m} u_i(t, t₀, u₀) < b(ε),   t ≥ t₀.   (3.6.3)

Let us choose u₀ = (u_{10}, u_{20}, …, u_{m0})ᵀ so that V(t₀, x₀) ≤ u₀ and u_{i0} = a(t₀, ‖x₀‖ₚ) for x₀ ∈ B(ρ). Since a(t₀,·) ∈ 𝒦 and a ∈ C[R₊ × R₊, R₊], we can find a δ = δ(t₀, ε) that is continuous in t₀ for each ε such that ‖x₀‖ₚ ≤ δ implies a(t₀, ‖x₀‖ₚ) < δ₁. We claim that this δ is good for (S₁), that is, ‖x₀‖ₚ ≤ δ implies that ‖x(t, t₀, x₀)‖ₚ < ε, t ≥ t₀. In fact, from relations (3.6.2), (3.5.5), and (3.6.3), we get

b(‖x(t, t₀, x₀)‖ₚ) ≤ Σ_{i=1}^{m} V_i(t, x(t, t₀, x₀)) ≤ Σ_{i=1}^{m} r_i(t, t₀, u₀) < b(ε)

for t ≥ t₀. Since b(r) is strictly increasing in r,

‖x(t, t₀, x₀)‖ₚ < ε,   t ≥ t₀.

Therefore, (S₁) holds.

Problem 3.6.1. Assuming the hypotheses of Theorem 3.6.1, prove that (S₃*) implies (S₃).

Problem 3.6.2. Let the hypotheses of Theorem 3.6.1 be satisfied with a(t,r) = a(r). Then show that (a) (S₂*) implies (S₂); (b) (S₄*) implies (S₄).

We remark that one can formulate the stability results with regard to the trivial solution of (3.5.1) and its perturbed system

y′ = f(t, y) + F(t, y),   y(t₀) = x₀,   (3.6.4)
by means of the method of variation of parameters. The formulation and the proofs of these results can be given on the basis of the results in Sections 2.11–2.13 and the results in Ladas and Lakshmikantham [49]. Such a detailed formulation and the proofs are left to the reader as exercises.

Notes

The contents of Section 3.1 are based on well-known books, namely, Dunford and Schwartz [15], Hille and Phillips [25], and also Ladas and Lakshmikantham [49]. The results of Section 3.2 are taken from Strand [83, 84]. See also Edsinger [16]. Theorem 3.3.1 is adapted from Strand [84]. All the examples and problems given in Section 3.3 are taken from Strand [83]. See also Bharucha-Reid [5] and Tsokos and Padgett [85, 86]. Theorems 3.3.2 and 3.3.3 are new. All the results of Section 3.4 are new and are based on the results in Ladas and Lakshmikantham [49]. The results in Section 3.5 are formulated in the framework of Lakshmikantham and Leela [59]. In Section 3.6 sufficient conditions are given for the stability of the trivial solution of the Lᵖ-initial-value problem in the context of vector Lyapunov functions and the theory of deterministic differential inequalities.
CHAPTER 4

ITÔ-DOOB CALCULUS APPROACH
4.0. INTRODUCTION

Several phenomena in biological, physical, and social sciences can also be described by a system of stochastic differential equations of Itô-type,

dx = f(t, x) dt + σ(t, x) dz,   x(t₀) = x₀,   (4.0.1)

where dx is a stochastic differential of x, z an m-dimensional normalized Wiener process defined on a complete probability space (Ω, ℱ, P), f(t,x) an average rate vector, and σ(t,x) a diffusion rate matrix function. This chapter concentrates on the study of Itô-type systems of the form (4.0.1). In Section 4.1 we sketch Itô's calculus. We begin with basic definitions and arrive at the definition of Itô's integral. We state important properties of stochastic integrals, introduce the stochastic differential, and derive a well-known differential formula known as Itô's formula. Some useful applications of Itô's formula are also given. Finally, a special class of Markov processes is discussed, and the Itô-type stochastic differential equation is formulated. Certain properties of diffusion processes are also stated. A Peano-type existence theorem for Itô-type stochastic differential systems is proved in Section 4.2. Several existence and uniqueness theorems are also developed. Continuous dependence on parameters, as well as continuity and differentiability properties of solutions with respect to initial data, is discussed in Section 4.3. Section 4.4 develops variation of constants formulas with respect to both stochastic and deterministic perturbations. Some basic results on stochastic differential inequalities of Itô-type are given in Section 4.5. The notion of maximal and minimal solutions relative to (4.0.1) is introduced in Section 4.6. In Section 4.7 basic comparison theorems for (4.0.1) are listed. By using the concept of Lyapunov function and the theory of stochastic and deterministic differential inequalities, several comparison theorems are developed in Section 4.8. Finally, in Sections 4.9, 4.10, and 4.11 a variety of
stability results concerning stability in probability, stability in the pth mean, and stability with probability one of the trivial solution of (4.0.1) are presented in a coherent way. These results are developed in the framework of vector Lyapunov functions and the theory of differential inequalities as well as variation of constants formulas. This approach captures stabilizing as well as destabilizing effects of random disturbances. Moreover, the stability conditions are formulated in the context of the laws of large numbers.

4.1. ITÔ'S CALCULUS

In this section we shall discuss Itô-type stochastic integral and differential calculus. Let (Ω, ℱ, P) be a complete probability space. Let ℱₜ be a sub-σ-algebra of ℱ defined for t ∈ R₊. Let z(t) be an m-dimensional normalized Wiener process, and let 𝒵ₜ and 𝒵ₜ* be the smallest sub-σ-algebras of ℱ generated by all random variables z(t) and z(s) − z(t) for s ≥ t ≥ 0, respectively. We need the following definitions.

Definition 4.1.1. A family {ℱₜ: t ∈ R₊} of sub-σ-algebras of ℱ is said to be nonanticipating with respect to the m-dimensional normalized Wiener process if it satisfies the following conditions: (i) ℱₛ ⊂ ℱₜ for all t > s ≥ 0; (ii) 𝒵ₜ ⊂ ℱₜ for all t ∈ R₊; (iii) ℱₜ is independent of 𝒵ₜ*.

Let us note that the family {𝒵ₜ: t ≥ 0} is the smallest nonanticipating family of sub-σ-algebras of ℱ.

Definition 4.1.2. An n × m product-measurable matrix function G(t,ω) defined on R₊ × Ω into R^{n×m} is said to be nonanticipating with respect to a nonanticipating family {ℱₜ: t ∈ R₊} of sub-σ-algebras of ℱ if G(t,ω) is ℱₜ-measurable.

Let M₂[a,b] denote the set of all nonanticipating n × m matrix functions G = G(t,ω) defined on [a,b] × Ω into R^{n×m} such that

∫_a^b ‖G(s,ω)‖² ds < ∞  w.p. 1,

where the integral is the sample Lebesgue integral. We note that every deterministic function G(t,ω) = G(t) is a nonanticipating function. Also, if G is nonanticipating, then every product-measurable function g(t,G) defined on R₊ × R^{n×m} into Rⁿ is nonanticipating. We are now in a position to define the stochastic integral of a step function G(t,ω) ∈ M₂[a,b] relative to z(t).
Definition 4.1.3. Let G ∈ M₂[a,b] be a step function, and let a = t₀ < t₁ < ⋯ < tₙ = b be a partition of the interval [a,b] such that G(t,ω) = G(tᵢ,ω) for t ∈ [tᵢ, tᵢ₊₁), i = 0, 1, …, n − 1. The stochastic integral of G with respect to an m-dimensional Wiener process z(t) is then defined by

∫_a^b G(t) dz(t) = Σ_{i=0}^{n−1} G(tᵢ)[z(tᵢ₊₁) − z(tᵢ)],   (4.1.1)

where ∫_a^b G(t) dz(t) = ∫_a^b G(t,ω) dz(t,ω).

We list below some needed properties of this stochastic integral:

(a) If G₁, G₂ ∈ M₂[a,b] are step functions, then

∫_a^b (αG₁(t) + βG₂(t)) dz(t) = α ∫_a^b G₁(t) dz(t) + β ∫_a^b G₂(t) dz(t),   (4.1.2)

where α, β ∈ R.

(b) If G ∈ M₂[a,b] is a step function, then

∫_a^b G(t) dz(t) = ( Σ_{j=1}^{m} ∫_a^b G_{1j}(t) dz_j(t), …, Σ_{j=1}^{m} ∫_a^b G_{nj}(t) dz_j(t) )ᵀ;   (4.1.3)

(c) E[‖G(t)‖] < ∞, t ∈ [a,b], implies E[∫_a^b G(t) dz(t)] = 0;

(d) E[‖G(t)‖²] < ∞, t ∈ [a,b], implies

E[ (∫_a^b G(t) dz(t)) (∫_a^b G(t) dz(t))ᵀ ] = ∫_a^b E[G(t)Gᵀ(t)] dt,

and, in particular,

E[ ‖∫_a^b G(t) dz(t)‖² ] = ∫_a^b E[‖G(t)‖²] dt;

(e) for all ε > 0 and c > 0,

P( ‖∫_a^b G(t) dz(t)‖ > ε ) ≤ c/ε² + P( ∫_a^b ‖G(t)‖² dt > c ).
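As an illustration of Definition 4.1.3 and of properties (c) and (d), the following sketch (Python; the integrand G(t) = z(t) is an illustrative choice, not taken from the text) approximates the Itô integral of z over [0, b] by left-endpoint sums as in (4.1.1) over many sample paths, and checks that the sample mean is near 0 and the sample second moment is near ∫₀ᵇ E[z(t)²] dt = b²/2; it also compares with the closed form given later in (4.1.8).

```python
import numpy as np

# Left-endpoint Ito sums for  I = \int_0^b z(t) dz(t)  and a check of
# properties (c) and (d):  E[I] = 0  and  E[I^2] = \int_0^b E[z(t)^2] dt = b^2/2.
# The integrand G(t) = z(t) is an illustrative choice.

rng = np.random.default_rng(0)
b, n_steps, n_paths = 1.0, 2000, 20000
dt = b / n_steps

dz = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))    # Wiener increments
z = np.cumsum(dz, axis=1)
z_left = np.hstack([np.zeros((n_paths, 1)), z[:, :-1]])       # z at left endpoints t_i

I = np.sum(z_left * dz, axis=1)                               # sums as in (4.1.1)
print("E[I]   ~", I.mean(), "(exact 0)")
print("E[I^2] ~", (I ** 2).mean(), "(exact b^2/2 =", b ** 2 / 2, ")")
# comparison with the closed form (4.1.8):  I = (z(b)^2 - b)/2
print("mean |I - (z(b)^2 - b)/2| ~", np.abs(I - (z[:, -1] ** 2 - b) / 2).mean())
```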
The following result provides a motivation for the stochastic integral of an arbitrary n x m matrix function G(t, o)that belongs to M 2 [ a ,b].
Lemma 4.1.1. Let G ∈ M₂[a,b], and let Gₙ(t,ω) be a sequence of n × m matrix step functions such that Gₙ ∈ M₂[a,b] and

∫_a^b ‖G(t,ω) − Gₙ(t,ω)‖² dt → 0   (4.1.4)

in probability as n → ∞. Then the random sequence ∫_a^b Gₙ(t) dz(t) converges in probability to some limit. Moreover, this limit is independent of the selection of the sequence.

Proof. From (4.1.4) it follows that
∫_a^b ‖Gₙ(t) − Gₘ(t)‖² dt → 0

in probability as n, m → ∞. Applying property (e) to the step function Gₙ(t) − Gₘ(t), we get, for every ε > 0,

lim sup_{n,m→∞} P{ ‖∫_a^b Gₙ(t) dz(t) − ∫_a^b Gₘ(t) dz(t)‖ > ε } ≤ ε.

Since ε is arbitrary, we have

lim_{n,m→∞} P{ ‖∫_a^b Gₙ(t) dz(t) − ∫_a^b Gₘ(t) dz(t)‖ > ε } = 0.
This implies that ∫_a^b Gₙ(t) dz(t) is a Cauchy sequence in probability and hence converges in probability to an n-random vector Z(G). The limit is almost surely uniquely determined and independent of the special choice of sequences for which (4.1.4) holds. We are now ready to define the stochastic integral.

Definition 4.1.4. Let G ∈ M₂[a,b], and let Gₙ(t,ω) be a sequence of n × m matrix step functions such that Gₙ ∈ M₂[a,b] and

∫_a^b ‖G(t,ω) − Gₙ(t,ω)‖² dt → 0

in probability as n → ∞. The stochastic integral or Itô's integral of G with respect to an m-dimensional Wiener process z(t) is defined by

∫_a^b G(t) dz(t) = lim_{n→∞} ∫_a^b Gₙ(t) dz(t)  in probability.

The stochastic integral of G ∈ M₂[a,b] possesses the following properties:

(I) It possesses properties (a), (b), and (e) of the stochastic integral of an n × m matrix step function that belongs to M₂[a,b].
(11) If (:E[\lG(t)ll’] dt < 00, then (i) there exists a sequence G, E M,[a, b] of step functions such that
c
E“IG,(t)l121 dt
E ) I P ( A J , # I , - AIJ
+ P(llAJ,II > E ) < E.
This proves the desired statement.

We shall now introduce the notion of a stochastic differential. Let x(t) be a stochastic indefinite integral defined by

x(t) = x(a) + ∫_a^t f(s) ds + ∫_a^t G(s) dz(s),   (4.1.13)

where z(t) is an m-dimensional Wiener process, G ∈ M₂[a,b], x(a) is an n-dimensional random vector relative to ℱₐ, and f is an n-dimensional nonanticipating random vector relative to a nonanticipating family {ℱₜ: t ∈ R₊} of sub-σ-algebras of ℱ and is sample Lebesgue-integrable on [a,b], that is,

∫_a^b ‖f(s,ω)‖ ds < ∞  w.p. 1.

We further note that the integrals ∫_a^t f(s) ds and ∫_a^t G(s) dz(s) are in the sense of Lebesgue and Itô, respectively.
Definition 4.1.6. We shall say that the process x(t) defined by Eq. (4.1.13) possesses the stochastic differential f(t) dt + G(t) dz(t), and we shall write

dx(t) = f(t) dt + G(t) dz(t).   (4.1.14)

Remark 4.1.1. Let z(t) be a scalar normalized Wiener process. By (4.1.8), the stochastic indefinite integral of z(t) relative to z(t) is given by

∫_a^t z(s) dz(s) = (1/2)[z²(t) − z²(a) − (t − a)].

Hence its stochastic differential form is equal to

d(z²(t)) = 2z(t) dz(t) + dt.   (4.1.15)

On the other hand, by applying Taylor's theorem, we obtain

d(z²(t)) = 2z(t) dz(t) + (dz(t))².

This suggests that in the case of the stochastic differential of z²(t) we must regard the first two terms as first-order terms and must replace (dz(t))² with dt.

We shall next derive a well-known differential formula known as Itô's formula.

Theorem 4.1.1 (Itô's formula). Let V ∈ C[R₊ × Rⁿ, R^N], and let Vₜ, Vₓ, and Vₓₓ exist and be continuous for (t,x) ∈ R₊ × Rⁿ, where Vₓ is an N × n Jacobian matrix of V(t,x) and Vₓₓ(t,x) is an n × n Hessian matrix whose elements (∂²/∂xᵢ∂xⱼ)V(t,x) are N-dimensional vectors. Suppose that an n-dimensional random function x(t) has the stochastic differential dx(t) = f(t) dt + G(t) dz(t) and x(a) = x₀, where z(t) is an m-dimensional normalized Wiener process, G ∈ M₂[a,b], and f is an n-dimensional nonanticipating random vector relative to the nonanticipating family {ℱₜ: t ∈ R₊} of sub-σ-algebras of ℱ, which is sample Lebesgue-integrable on [a,b]; x(a) is a random vector relative to ℱₐ. Then the process m(t) = V(t, x(t)) possesses a stochastic differential, and moreover

dm(t) = [Vₜ(t, x(t)) + Vₓ(t, x(t)) f(t) + (1/2) tr(Vₓₓ(t, x(t)) G(t) Gᵀ(t))] dt + Vₓ(t, x(t)) G(t) dz(t).   (4.1.16)
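Before turning to the proof, Remark 4.1.1 and formula (4.1.16) can be checked numerically in the simplest case V(t,x) = x², x(t) = z(t), where (4.1.16) reduces to (4.1.15). The following sketch (Python; the horizon and step size are arbitrary choices) verifies on one sample path that z(T)² − 2∫₀ᵀ z dz ≈ T, and that the quadratic variation Σ(Δz)² is also close to T.

```python
import numpy as np

# Sample-path check of Ito's formula for V(t, x) = x^2 and x(t) = z(t):
# (4.1.16) gives d(z^2) = 2 z dz + dt, so  z(T)^2 - 2 * \int_0^T z dz  should be ~ T.

rng = np.random.default_rng(1)
T, n = 2.0, 200000
dt = T / n

dz = rng.normal(0.0, np.sqrt(dt), size=n)
z = np.concatenate([[0.0], np.cumsum(dz)])

ito_integral = np.sum(z[:-1] * dz)            # left-endpoint approximation of \int z dz
print("z(T)^2 - 2*int z dz =", z[-1] ** 2 - 2 * ito_integral, "(close to T =", T, ")")
print("sum of (dz)^2       =", np.sum(dz ** 2), "(quadratic variation, also close to T)")
```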
Proof. Since the proof of the theorem is exactly similar to the proof of its scalar version (n = m = N = 1), we shall give only the proof of its scalar version. Let us first prove the formula for the case in which f(t) and G(t) are independent of t. Then x(t) is a process of the form x(t) = x(a) + f·(t − a) + G·(z(t) − z(a)), and hence the process m(t) takes the form

m(t) = V(t, x(t)) = V(t, f·(t − a) + G·(z(t) − z(a)) + x(a))

with m(a) = V(a, x₀). Suppose that t₀ = a < t₁ < ⋯ < tₙ = t ≤ b. Then

m(t) − m(a) = Σ_{i=1}^{n} [V(tᵢ, x(tᵢ)) − V(tᵢ₋₁, x(tᵢ₋₁))].   (4.1.17)
From the assumption on V(t,x), Taylor's formula yields

V(tᵢ, x(tᵢ)) − V(tᵢ₋₁, x(tᵢ₋₁)) = Vₜ(tᵢ₋₁ + θᵢ(tᵢ − tᵢ₋₁), x(tᵢ₋₁))(tᵢ − tᵢ₋₁) + Vₓ(tᵢ₋₁, x(tᵢ₋₁))(x(tᵢ) − x(tᵢ₋₁)) + (1/2) Vₓₓ(tᵢ₋₁, x(tᵢ₋₁) + ψᵢ(x(tᵢ) − x(tᵢ₋₁)))(x(tᵢ) − x(tᵢ₋₁))²,   (4.1.18)

where θᵢ, ψᵢ ∈ (0,1) for all i = 1, 2, …, n. From the sample continuity of the indefinite integral and the continuity of Vₜ and Vₓₓ, one can find an αₙ that approaches zero w.p. 1 as δₙ → 0 and satisfies the inequalities

max_{1≤i≤n} |Vₜ(tᵢ₋₁ + θᵢ(tᵢ − tᵢ₋₁), x(tᵢ₋₁)) − Vₜ(tᵢ₋₁, x(tᵢ₋₁))| < αₙ

and

max_{1≤i≤n} |Vₓₓ(tᵢ₋₁, x(tᵢ₋₁) + ψᵢ(x(tᵢ) − x(tᵢ₋₁))) − Vₓₓ(tᵢ₋₁, x(tᵢ₋₁))| < αₙ
= 0,
1 , 2 we have
- X11kP(~,XJ,44
I E-z-d+kJR"IIy
- x(12+JP(s,x,t,dy)=
O(t
- s)
for all E > 0. Let us discuss the meaning of relations (4.1.26)-(4.1.28). Relation (4.1.26) means that large changes in x ( t ) over a short period of time are improbable, that is, P(IIX(t) - x(s)(I I E ( X ( S ) = x ) = 1 - O ( t - s). We replace the truncated moments in (4.1.27) and (4.1.28) with the usual ones. Then the first moment of x ( t ) - x(s) under the condition x(s) = x as t -+ s+ is mx(t)-x(~))I
and
=
=
JI l v - x l l ~ (y - x ) ~ ( sx, t, d y ) E
where cov[(x(t) − x(s)) | x(s) = x] is the covariance matrix of x(t) − x(s) with respect to the probability P(s, x, t, ·). Consequently, f(s,x) is the mean velocity vector of the random motion described by x(t) under the assumption that x(s) = x, whereas b(s,x) is a measure of the local magnitude of the fluctuation of x(t) − x(s) about the mean value. If we neglect the last term o(t − s), we can write

x(t) − x(s) = f(s, x(s))(t − s) + σ(s, x(s))[w(t) − w(s)],

where

E[(w(t) − w(s)) | x(s) = x] = 0,   cov[(w(t) − w(s)) | x(s) = x] = I(t − s),

σ(s,x) is any n × m matrix with the property σσᵀ = b, and I is the identity matrix. The shift to differentials yields

dx(t) = f(t, x(t)) dt + σ(t, x(t)) dw(t).   (4.1.29)
4.2.
EXISTENCE AND UNIQUENESS
Let R = (Q9, P ) be a complete probability space, and let R[R, R"] be the system of all R"-valued random variables. Let J = [to,t + a], to 2 0, a > 0. We consider the stochastic differential system of It6-type dx =f(t,x)dt
+ o(t,x)dz,
x ( t 0 )= x 0 ,
(4.2.1)
where dx is a stochastic differential of x , x,f
E
C [ J x R", R"],
o E C [ J x R", R",],
and z ( t ) E R[R, R"] is a normalized Wiener process. Let us now define a solution process for the stochastic system (4.2.1). Definition 4.2.1. A random process x ( t ) is called a solution process or a solution of (4.2.1) on J if it satisfies the following conditions: (1) x ( t ) is 9t-measurable, that is, nonanticipating, where for t the smallest sub-a-algebra generated by xo and z ( t ) ; (2) x ( t ) is sample continuous and verifies
( 3 ) x ( t ) satisfies Eq. (4.2.1) for t
E
J w.p. 1;
EJ,
Ft is
We have intentionally chosen to consider the deterministic functions f and a in the It8-type equation (4.2.1). However, one could utilize random . This functions f and a as long as they were nonanticipating relative to F, change does not in any way create significant problems in our discussion. We shall prove the following basic existence theorem of Peano-type. Theorem 4.2.1.
Assume that
(HI) there exist positive numbers L and M such that IIf(t,X)II + I l ~ ( t 4 ) lI l L
+ MllXll
for (t,x)E J x R"; x ( t o ) is independent of z ( t ) and E[llx(to)l14]< c1 for some constant c1 > 0.
(H,)
Then the stochastic system (4.2.1) possesses at least one solution x ( t ) = x(t, to,xo)on J . Proof. We define a sequence of random functions {x,(t)} on J by setting
for to I t I to
[x, xo
+
+
f ( s , x,(s)) ds
+ u/n
+ s'
-a'n
f0
a(s, x,(s))dz(s)
(4.2.2)
+ +
for to ka/n I t I t o ( k l)u/n, k = 1,2, . . . , n - 1. The first part defines x,(t) on [ t o ,to u/n], the second part defines x,(t), first on [to+ a/n, to 2u/n], then on the interval [ t o 2 4 1 , t o 3u/n], and so on. In fact,
+
+
+
+
(4.2.3)
+
+
for t E [ t o ku/n, to (k + l)u/n],k = 1, 2, . . . , n - 1. By the continuity off and a and (HJ, f(t, x,(t)) and a(t, x,(t)) are nonanticipating random functions relative to the family PpConsequently, a(t, x,(t)) E M 2 [ t 0 ,t o u] andf(t, x,(t)) is integrable. Thus the sequence {xn(t)}satisfies the properties of a stochastic indefinite integral. We shall first show that there exists a positive number CI independent of n, such that
+
E[IIxn(t)l14]I c1
for t E J .
(4.2.4)
We note that because of (Hz),
E[IIxn(t)l14] 5 c1
for all t E [ t o , to
+ a/n].
(4.2.5)
Using the inequalities (a + b + c ) 5~ 33(a4 + b4 + c4), 2a2b25 a4 + b4, 4a3b I 3a4 b4, Holder's inequality, Lemma 4.1.2, (HI), and (4.2.3), we obtain successively
+
+ 8(a3 + 6")s' + to
+
ka/n
(..+
M4E[l/xn(s -
;)l14]
dr]
+
for t E [to ka/n, to (k + l)a/n]. Starting now from (4.2.5), it is easy to see by induction that E[IIxn(t)l14] < co,t E J . By It63 formula, we get
(4.2.6) for t E
[to
+ a/n, to + a], and
where b(t, x) = o(t,x)oT(t,x). Since E[Ilxn(t)ll4] < GO, it follows by (H,) that E [ ~ ~ ~ , ( s ) ~ 1-~a/n, / ~ ox,(s ( s - u/n))11] < GO. Thus (4.2.6) reduces to
which implies m(t) I @1
+
a2
s:,m(s)
ds
in view of relations (4.2.5) and (4.2.7) and the inequalities 4a3b I 3a4 + b4, 2a2b2Iu4 + b4, where a1 = c1 + ( L 3L2 3LM)a, a2 = 3L 4M 3L2 6M2 + 9LM, and m(t) = s ~ p - ~ , , ~ ~ ~ ~ E [ (0)11"]. ( x , (This, t together with the scalar version of Theorem 1.7.2 with g(t, m(t))= cr,m(t) and m(to) = a, , yields
+
+
m(t) < c(l exp[a,a],
+
+
+
+
t EJ,
which proves relation (4.2.4) with a = a , exp[a,a]. We shall now prove that there exists a positive number of n such that E[IIXn(t) - ~n(s)11~1Plt -
p independent
~ 1 ~ ' ~
(4.2.8)
for all t, s E J . For this purpose, we apply ItS's formula to the expression Ilx,(t) - x,,(~)11~ and obtain
Applying Holder's inequality, we then have
+
a4 b4 and 4abc2 I a4 + Here we have utilized the inequalities 2a'b' I b4 2c4, the identity (2ab 3 ~ ' )=~ 4a2b2 12abc' 9c4, and hypothesis (Hl),together with suitable majorization to the nearest perfect cube number. It now follows from (4.2.4) that
+
+
+
ECllXn(4 - x,(s)l141 5 M , f,f(ECJlx,(u)
+
- x,(s>ll
4
1)1/2 du,
(4.2.9)
where M , = 3 2 ( ~ + L4 + M4c1)1/2. On the other hand, we have E[IIxfl(t) - X,(S)II"l 5 8(E[Ilxn(t)1l41 + E[IIXfl(41l41)>
which because of(4.2.4),shows that E[IIx,(t) - x,(s))(~]5 1 6 ~It. now follows from (4.2.9) that E[IIX,(t) - x,O11"1 5 +B(t -
where
B = ~M,(ct)'". This, together E[((x,(t)
-
x,(s)\\"] I B(t
+
$7
with (4.2.9), establishes the inequality
- s)3/2
for
to I sI t I to
+ c1.
For t o I tI sI to a, the above inequality can be proved similarly, which proves inequality (4.2.8).
We recall that the collection {xn(t)}consists of continuous random processes defined on J into R[Q R"]. In view of relations (4.2.4) and (4.2.8) and Lemma 2.1.1, we conclude that the collection is totally D-bounded. Using Prohorov's theorem and recalling the fact that the direct product of compact sets and the projection of a compact set are also compact, we can easily see that {(x,,(t),z,(t), x n o ) } is totally bounded, where z,(t) = z ( t ) and xn0 = x(to). Therefore, one can find a D-Cauchy subsequence {(x,,,(t),z,,(t), x,,,~)} of {(xn(t),z,(t), xno)}. By Skorokhod's theorem, we can construct a sequence {(y,,(t),wn,(t), y,,,)} and a random function ( y ( t ) ,w(t), y o ) on a certain probability measure space such that D((xn,(t), zn,(t),
Xn,o),
(Ynp(t),wn,(t), Ynro))= 0
(4.2.10)
1
(4.2.11)
for n l , n 2 , n 3 , . . . and p((yn,(t),wn,(t), Yn,.o)
+
( ~ ( t4) t, h Y O ) )
=
as r GO, where the convergence is understood in the sense of the sup norm. For the sake of simplicity, hereafter we shall denote the subscript n, by just r. From (H2),(4.2.2), and (4.2.10), it follows that w,(t)and y,, are independent. This, together with (4.2.1l), implies that w(t) and y o are independent. Since D ( ( w r ( t ) , Yro),4 t h x o ) ) = 0 for r = 1,2, . . . and P((w,(t), yro) (w(t), Y O ) ) = 1 as r + GO, we have --f
+
D((w(t),Yo), ( z ( t ) , x o ) )= 0,
(4.2.12)
which implies that (w(t), y o ) and (z(t),xo) have the same probability law. Hence the sub-a-algebra generated by w(t) and y o can be identified as Fz. Because of (4.2.10) and (4.2.11), it is obvious that y,(t) and y ( t ) possess all the properties of x,(t) as defined in (4.2.2).Furthermore, { y,(t)} is a D-Cauchy sequence. Finally, we shall prove that Y ( t ) = Yo + J:,f(s,y ( s ) ) d s + J:,.(s.
y(s))dz(s).
By following the definition of x,(t) in (4.2.2), one can construct
(4.2.13)
It is obvious that ( y " ( t ) }and { y;(t)} for r = 1, 2, . . . possess all the properties of { ~ ~ ( in t )(4.2.2). ) This, together with the continuity off and B, the fact that (yr(t)} is a D-Cauchy sequence, relation (4.2.1l), Proposition 4.1.1, and Example 4.1.2, yields for any r = 1, 2, . . . and given E > 0, P(l(Qt) - I(t)lJ> E ) < E
and
P((IC(t)- I r ( t ) ( l > E ) < E
(4.2.14)
for all n 2 N(E),where Ir(t) =
J*f(s, yr(s))ds+ J 1 : ~ ~Yr(s))dwr(s), (s~ to
I ( t ) = ibf(s, y ( s ) ) d s +
I:@)
jll&
= S'-"'"f(s, f0 y:(s))ds
y(s))dw(s),
+ y a ( s , y:(s))dwr(s),
and I"(t) = Jb-*"f(s, y"(s))ds
+ S'-a/nQ(s,
y"(s))dw(s).
+
y:(s))dor(s).
f0
Set =
y i ( s ) ) d s J;"'.(s,
J-+f(s,
Again, by the continuity off and P(Z:(t)
This implies that for any r 2 ro = rO(e),
+
E
B
and (4.2.1I), it is obvious that
I"(t)) = 1
as r
+ GO.
> 0, there exists a positive number r such that
Ilf(s9 YW) - f(s, y"(s))II
E ) < E,
r 2 ro.
This, together with (4.2.12) and (4.2.14), shows that P ( ( ( Y ( t) Yo - WIl > 6 4 = p < l l ( ~ (t >~ : ( t )+ ) ( Y r o - Y O ) - (l(t) - l"(t)) + ( C ( t ) - In(t)) + ( l r ( t ) - ZXt)) + ( C ( t ) - z r ( f ) ) l / > 6 ~ ) P ( ( ( Y (~ )~ : ( t ) l>( E ) + P(llYr0 - Y O [ / > E )
> 4 + P(JJlf(t) - Z"(t)(l > E ) P(lllr(t) - C(t>ll > 1' + ~ ( j l C ( t )- lr(t)lJ> E )
+ P(JlW- I"(C)JI
-t
< 68
for n, I 2 max(N(E),rO(E)).
Since E > 0 is arbitrary, this implies (4.2.13). Hence y ( t ) = x ( t ) is a solution of ( 4 . 2 4 and the proof is complete. Remark 4.2.1. We note from the proof of Theorem 4.2.1 that hypothesis (HI) can be replaced by a weaker hypothesis, namely, (HT) There exist positive numbers L and M such that (If(t,x)114 + 1\44x)l14 L + MJIX1I4.
Also, one can prove the existence of a solution on [ t o ,Q), provided that f E C [ [ t o ,co) x R", R"] and 6 E C [ [ t o ,co) x R", R""']. We shall next give sufficient conditions on f and 0,in order to ensure the uniqueness of solutions of (4.2.1). Theorem 4.2.2.
Assume that f and
CT
satisfy the relation
2(x - Y)'(f(t>x ) - f ( t , Y ) )
+ tr [W,x ) - 4 4 Y ) ) ( d tx, ) 4,Y))'] -
(4.2.15) for ( t , x ) ,(t, y ) E J x R", where g E C [ J x R+,R] is concave, nondecreasing in u for fixed t. Let u(t) = 0 be the unique solution of
I g(t, IIx
- YIIZ)
u' = g(t,u), u(t) = 0. (4.2.16) Then the stochastic system (4.2.1) has at most one solution on J . Proof. Let x ( t ) = x(t, t o ,xo) and y(t) = y(t,t o ,xo) be two solutions of (4.2.1) on J . Applying It6's formula to Ilx(t) - y(t)\I2and using the assumption of (4.2.15), we find
-qIX(t)
- Y(t)1I21
Because of Jensen's inequality, the concavity of g(t, u) now gives
J;
E[llx(t) - Y(t)11'3 I g(s, E[llx(s) - y(s)ll21)ds,
t E J.
Set m(t) = E[IIx(t) - y(t)!12] so that rn(to)= 0. Define =
J$
E[IIx(s) - y(s)ll21)ds.
Then we have u' = g(t, m(t)).
Since m(t) I u(t), the monotonicity of g yields the differential inequality u'(t) Ig o , u(t)),
U ( t 0 ) = 0.
By the scalar version of Theorem 1.7.1, we have u(t) s 4 t h
t E [ t o , to
+ a],
where r(t) is the maximal solution of (4.2.16) through (to,O). This implies that m(t) 5 r(t), t E [ t o , to + u ] , and consequently, the assumption r(t) = 0 on J proves the theorem. Remark 4.2.2. We note that when the solution of (4.2.1) is unique, then the solution process is a Markov process and also a diffusion process.
The following result gives sufficient conditions under which (4.2.1) has a unique solution. Theorem 4.2.3.
Assume that
(H,) the functions f(t, x ) and o(t,x ) in (4.2.1) satisfy
+
I l f ( t , ~ ) 1 1 ~Ilc~(t,x)11~ 5 L2(1
+ 11x11')
(growth condition)
and
Ilf(4x ) - f(4Y)JI + Ilok x ) - 4 4 Y)ll 5 Lllx - YII (Lipschitz condition) for all (t,x), (t, y) E J x R",where L is some positive number.
(H,) x ( t o ) is independent of z(t) and satisfies E [ ] I X , ! ! ~ 4+l l~l 4 ~ > X J ) I l L + MllXll for (t,x,A)E J x R" x A; ( H 3 ) l there exists a function g E C [ J x R,, R ] , which is concave and nondecreasing in u for fixed t E J , such that 2[(x - Y)=(f(t,x, 4- f k Y , 4) + Ilo(t,x, 4- 44 Y , 4ll'l (4.3.1) I s(t,IIx - Y11') for all (t,x, A), (t, y, A) E J x R" x A;
(H4)1 x(to,A)is independent of z(t) and lim E[llx(to,4 - x(to,~o)11'3= 0;
(4.3.2)
A+lO
( H 5 ) l E [ I l ~ ( t ~ , A ) 1I1 c1 ~ ] for some constant c1 > 0; (H6)A for any E > 0, N > 0, r
+ Ilo(t,xA - o(t,x,Ao)lI)> E (H,)
u(t )
= 0 is the unique solution of
1
= 0;
(4.3.3)
(4.3.4)
u' = g(t, u)
with u(to) = 0 and the solutions u(t, t o ,uo) of (4.3.4)with u(to) = uo are continuous with respect to ( t o ,uo). Then given E > 0, there exists a b ( ~>) 0 such that for every A, 111 - Aoll < b ( ~ )the , differential system dx
= f(t, X , A ) dt
+ ~ ( X,t ,A) dz(t),
x ( t o ,A ) = ~ o ( l l )
(4.3.5)
admits a unique solution x(t,A) = x(t, t o , x(to,A))satisfying (E[IIx(t,A) - x(t,Ao)112])112
<E
for all
t E J.
(4.3.6)
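Before the proof, the conclusion (4.3.6) can be seen in a simulation: solve (4.3.5) for two nearby parameter values along the same Wiener path and estimate the mean-square distance of the solutions. In the sketch below (Python) the drift and diffusion are illustrative choices that are Lipschitzian in x and continuous in λ; they are not taken from the text.

```python
import numpy as np

# Mean-square continuous dependence on a parameter (cf. Theorem 4.3.1):
# solve dx = f(x, lam) dt + s(x, lam) dz for lam and lam0 with the SAME Wiener
# increments, and estimate E||x(T, lam) - x(T, lam0)||^2.
# f and s are illustrative choices, Lipschitz in x and continuous in lam.

def f(x, lam):
    return -x + lam

def s(x, lam):
    return 0.2 + 0.1 * lam * np.cos(x)

rng = np.random.default_rng(3)
T, n, n_paths = 1.0, 1000, 5000
dt = T / n
lam0 = 1.0

for lam in (1.5, 1.1, 1.01):
    x = np.full(n_paths, 0.5)                             # same initial value for both
    y = np.full(n_paths, 0.5)
    for _ in range(n):
        dz = rng.normal(0.0, np.sqrt(dt), size=n_paths)   # shared Wiener increments
        x = x + f(x, lam) * dt + s(x, lam) * dz
        y = y + f(y, lam0) * dt + s(y, lam0) * dz
    print(f"|lam - lam0| = {abs(lam - lam0):.2f},  E||x - y||^2 at T ~ {np.mean((x - y) ** 2):.2e}")
```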
Proof. Under hypotheses ( H z ) k ,(H4)1,and ( H 5 ) k ,the initial value problem (4.3.5)has at least one solution x(t, L) = x(t, t o , x(to,A)) by Theorem 4.2.1. That this solution is unique follows by Theorem 4.2.2 in view of hypotheses (HJ1 and (H7)1.Moreover, x(t, A) satisfies the estimate (4.2.4) for all A E A. By applying It6's formula to the function I1x - y(I2, assumption ( H 3 ) k ,and concavity of g in u, we get
E[II.(tJ,
-
x(tJ0))121
Using hypotheses (H2)A,(H4)1, (H,)A and the Lebesgue dominated convergence theorem, relations (4.2.4) and (4.3.2) give the inequality E[IIX(t,4- x(t, ~o)112] I
r + Jl: g(s, E[llx(s,4 - x(s, &)lIZ)4
(4.3.7)
for t E J , whenever 1111 - ,lol I h(yl), q > 0 being arbitrary. Set m(t) = 4- x(t, J.o)l/21 and
E[IIX(4
U(t) = Y
+ J:ds,m(s))&
4t0)
=
r.
Then relation (4.3.7), together with the nondecreasing property of g, shows that 4 t ) 5 s(t,@)I, @o) = r. Hence by the scalar version of Theorem 1.7.1, we have
where r(t, to,y) is the maximal solution of (4.3.4) through ( t o ,uo) with uo = y. This, together with (4.3.7) and the definition of u(t),yields t E J. m(t)I r(t, t o , 11, Since the solutions u(t, t o ,uo) of (4.3.4) are assumed to be continuous with respect to uo, it follows that
lim r ( t ,t o , 3) =
7-0
t o , 01,
and by hypothesis, r(t,to,O) = 0. This, in view of the definition of rn(t), yields lim E[J)x(t,A) -X ( ~ , A ~ )= ] ]0, ~]
I-&
which establishes the theorem. Now we shall state a theorem which ensures the continuous dependence of , solutions of (4.2.1) on the initial conditions ( t o xo).
Theorem 4.3.2. Assume that the hypotheses of Theorems 4.2.1 and 4.2.2 hold. Furthermore, suppose that the solutions u(t,t o ,uo) of (4.3.4) through every point (to,uo)are continuous with respect to ( t o ,uo).Then the solutions x(t, t o , x o ) of (4.2.1) are unique and continuous with respect to the initial conditions (to,xo). Proof: The existence and uniqueness of the solutions x(t,x o ,xo)of (4.2.1) follow from Theorems 4.2.1 and 4.2.2. The proof of continuous dependence can be formulated based on the proof of Theorem 4.3.1. We shall merely indicate the necessary steps in the proof. Let x(t, t o , x o )and x(t,t,, yo) be solutions of (4.2.1) through ( t o , x o )and
( t l , yo), respectively, By ItB's formula we get, assuming t , 2 t o ,
EIIllxct, t o , xo) -
t l , YO)1l21
= EC((x(t,,to.xo)- Yo1(21 -I- E [ f J 2 ( x ( s . t 0 , x o ) - x(s,tl?Yo))'
x (f(s,X ( S > t o , xo)) - f(s, 4% tl, Yo)))]
+ tr[(ds,
x(s, t o , xo)) - 4% x(s, t l , Y o ) ) )
1
x (ds,x(s,to, xo)) - 4% X(S,tl*Yo)))']ds .
This, together with (4.2.15) and the fact that E[(lx(t1,to,xo)-
YOIIZ1 5 2EC((x(tdo,xo)- x(to,to?xo)1121+ EC((x0- Yol121,
Now the rest of the proof can be constructed by following the proof of the Theorem 4.3.1. We leave the details to the reader. in Theorem 4.3.1 is replaced by Remark 4.3.1. If hypothesis (H5)A
(HT)n E[lIx(t,A)/l2] 5 c, for some constant
c1
> 0,
then the conclusion of the theorem remains valid. We now turn to the question of the differentiability of the solution of (4.2.1) with respect to the initial conditions (to,xo), We shall consider the mean square derivatives (dx/dx,)(t, t o ,xo) and (ax/dto)(t,t o ,xo). Theorem 4.3.3. Assume that
(H,) the functions f(t, x) and o(t,x) in (4.2.1) satisfy the relations Ilf(t,x)1I2 l I f ( 4 4 - f(t,Y)ll
+ Ilo(t,X)112 5 L2(1 + llX1l2),
+ I l a x ) - 4 4 Y)ll
5 Lllx - YJI
for (t,x), (t,y) E J x R", where L is a positive constant; (H,) x(to) is independent of z(t) and E[lJ~(t,)1/~]< 00; (H,) f(t, x) and o(t, x) are continuously differentiable with respect to x for fixed t E J and their derivativesfx(t, x)and a,(t,x) are continuous in (t,x) E J x R" and satisfy the condition "fx(t7x)Il
+ J l ~ x ( ~ A cJ l
for some positive constant C. Then exists for k = 1, (a) The mean square derivative (a/dx,,)x(t, t o ,),x, 2, . . . , n and satisfies the It8-type system of linear differential equations dY
= fX(4 W
) Y dt
+ ox(4 x(t))y dz
(4.3.8)
where y(to) = ek = (el, e;, . . . , e:, . . . ,e:) is an n-vector such that el = 0, j # k, and ei = 1, and xkois the kth component of x,; moreover, d4 = f X ( 4 X(t))4 d t
where 4
=
4@,t o ,XO)= (dx/dx,)(t,
+ ox(t, x(t))4 dz
t o ,xo);
(4.3.9)
(b) the mean square derivative (d/dto) x(t,t o ,xo)exists and satisfies (4.3.8) x(t,t o , x o )= -@(r, ~ o , x o ~ f ( ~ o , x o ~ . with (W0) Proof. From hypotheses (H,) and (H2) and by the method of successive approximation, it has been shown that (4.2.1) has a unique solution x ( t ) = x(t,t o ,xo). Moreover, by Theorem 4.3.2, solutions are continuously dependent with respect to (to,xo).For small A, let x(t,A) = x(t, t o , x o Aek) be a solution process through ( t o ,x o k k ) . From the continuous dependence on initial conditions, it is clear that
+
+
x(4A) -!!% x ( t )
as A
-+
0 uniformly on J .
(4.3.10)
Set Ax(t,I) = [x(t,A)- x(t)]/A, x
Ax(t0,A) = ek,
and
= x(t),
y
I # 0,
= x(t,I).
This, together with the application of Lemma 2.6.1, yields
and o(t,y ) - a(t,x)=
So 1
ox(t,sy
+ (1 - s)x)(y- x)ds.
(4.3.12)
Let us denote F(t7 x(t),A>= Jol f X ( 4 sy
+ (1 - s)x) ds
G(t,x(t),A)= Jolox(t,sy
+ (1
and -
s)x)ds.
This, together with (H3),yields IIW,x(t),4ll I C
and
~~G(t,x(r)I , AC ) ~. ~
(4.3.13)
Furthermore, from (4.3.10),we note that F(t, x , I ) and G(t,x,A) are continuous in (t,x , A) and satisfy F(t,x,lb)
as A
-+
fx(t,X)
and
G(t,x,A)
o,(t,x)
(4.3.14)
0 uniformly on J . On the other hand, 4 Y - x ) = Cfk Y ) - At,4 1 dt
+ [.(t,
Y ) - o(t,4 3 dz.
This, together with (4.3.11)and (4.3.12) and the definitions of Ax(t,A), F(t, x , I), and G(t,x , A), yields dAx(t,A) = F(t,x,A)Ax(t,A)dt + G(t,x,A)Ax(t,A)dz, Ax(to,I)= e k .
(4.3.15)
From (4.3.13), (4.3.14), (4.3.15), Chebyshev’s inequality, the continuous dependence of the solution on the initial state, and the definition of Ax(t,A), it follows that IIF(t,XJ)Y(l-k IIG(t4J)YlI p ( t ,x ,
-
F(t, x, A h 1 1
L
+ M((Y(J>
+ //G(4x, 4 Y l - G(t,x>4 Y 2 j l
and
where M = 2C, E > 0 is any number, A,, E A, and L is any positive constant. By the application of Theorem 4.2.3, (4.3.15) has a unique solution Ax(t,A) with Ax(to,A)= ek on J . As noted in the proof of Theorem 4.2.3, we have
+
2(Y, - y2)TF(t,x,4y,- m , x , 4 Y z ( ( G( t, X J ) Y,- G(t,xJ)Y,(12 5 C(C + 2)J(Y,- Yz112 = s(4 IlYl - Y21I2). Therefore, by the application ofTheorem 4.3.1 in the context of Remark 4.3.1, limA+ Ax(t,A) exists in the mean square, and it is a solution of (4.3.8).From the definition of Ax(t,A), the limit of Ax(t,A) as A -+ 0 is equal to the partial derivative of x(t, t o ,x o )with respect to the kth component of x o . This is true for all k = 1, 2, . . . , n. Thus (d/dxo)x(t,to,xo)is the fundamental solution process of (4.3.8) and satisfies the ItB-type random matrix differential equation dY =f , ( t , x ( t ) ) Y d t
+ a,(t,x(t))Ydz,
(4.3.16)
@(to)= I ,
where I is a random identity matrix and x ( t ) = x(t, t o ,xo) is the solution process of (4.2.1). We note that the fundamental solution of (4.3.8) is denoted by (d/dxo)x(t,t o ,x o ) = @(t,t o ,xo). This establishes the proof of (a). To prove part (b), without loss of generality, take to I s < s A I t 5 to b, and define
+
+
+
where x(t,A) = x(t, s A, x o ) and x ( t ) = x(t,s, x o ) are solution processes through (s 2, xo) and (s,xo), respectively. Again, by imitating the proof of the part (a), one can conclude that the derivative (d/dt,)x(t,t o ,xo) exists in
+
the mean square sense, and it is the solution of (4.3.8)whenever
exists in the mean square sense. The proof is complete. The following example shows that under certain conditions the solution process x(t, t o ,xo)is twice differentiable with respect to xo. Example 4.3.1. In addition to the hypotheses of Theorem 4.3.3, we assume that f ( t , x) and a(t, x) are twice continuously differentiable and their x) satisfy the inequality derivatives f,,(t, x) and a,(t,
Ilfx,(~,x)ll+ I l ~ x x ( t 4 1 5 1 L2(1 + IIXllrn2)
(4.3.17)
for some positive numbers m2 and L 2 . Then the mean square derivative
(a2/dx0dxo)x(t,t o ,xo) exists and satisfies the ItB-type stochastic matrix
differential equation
dY = [fxx(t, X ( t ) P T ( 4 t o , xo)W, t o , xo) + f,@, x(t))YIdt + [ d t , x(t))@'(t,t o , xo)@(t,t o , xo) + ax(4x(t))Y] dz where @(t,t o ,xo)is the fundamental matrix solution of (4.3.8). Proof. The validity of the above statement can be established by following the argument used in the proof of Theorem 4.3.3. However, we note that relation (4.3.17)is needed first to show that Ax(t,A)is bounded which in turn implies that
and
converge to zero in the mean square uniformly on J . Further details are left as an exercise. 4.4.
THE METHOD OF VARIATION OF PARAMETERS
Consider a system of deterministic differential equations x' = f ( t , 4,
x(t0) = xo
9
(4.4.1)
where f e C [ R + x R", R"]. The stochastic differential system of ItB-type (4.2.l), namely, dY
= f ( t , Y ) dt
+ o(t,Y )4
t h
Y ( t 0 ) = xo >
(4.4.2)
can be viewed as a perturbed system relative to (4.4.1) with a constantly acting stochastic perturbation. With this understanding, we shall obtain a nonlinear variation of constants formula for the solutions of (4.2.1). Theorem 4.4.1. Assume that f ( t , x) is twice continuously differentiable withrespecttoxforeachtE R , andthaty(t)=y(t,to,xo)andx(t)=x(t,to,xo) are solutions of (4.2.1) and (4.4.1), respectively, existing for t 2 t o . Then
where b(t, y ) = o(t,y)oT(t, y). Proof. Consider x(t,s,y(s)). Under the assumption on f , it is known (see Hartman [24]) that the solution x ( t , t o , x o )of (4.4.1) is continuously differentiable with respect to to and twice continuously differentiable with respect to x o . As a result, applying It6's formula to x(t,s, y(s)),we have
We also know that
and consequently, we derive from (4.4.4)
Integrating this from to to t, the desired result (4.4.3) follows. Corollary 4.4.1. If f ( t , x ) = A x , where A is an n x n matrix, that is, f is linear, formula (4.4.3) reduces to the form Y(t,tO,XO)= x(t,to,x,)
+ J',e"~"'.(s,
y(s))dz(s),
t 2 t,.
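For a scalar linear drift f(t,x) = ax and a constant diffusion coefficient, the formula of Corollary 4.4.1 can be verified path by path. In the sketch below (Python; a, s0, x₀ and the grid are arbitrary illustrative choices) y(T) is computed once by Euler-Maruyama and once from the variation-of-constants representation, using the same Wiener increments.

```python
import numpy as np

# Path-wise check of Corollary 4.4.1 for n = 1, f(t, x) = a*x, sigma(t, y) = s0:
#   y(t) = x0 * exp(a*(t - t0)) + s0 * \int_{t0}^{t} exp(a*(t - s)) dz(s).
# a, s0, x0 and the grid are illustrative choices.

rng = np.random.default_rng(4)
a, s0, x0 = -1.0, 0.4, 2.0
t0, T, n = 0.0, 3.0, 30000
dt = (T - t0) / n
ts = t0 + dt * np.arange(n)                   # left endpoints of the grid

dz = rng.normal(0.0, np.sqrt(dt), size=n)

# Euler-Maruyama solution of dy = a*y dt + s0 dz
y = x0
for k in range(n):
    y = y + a * y * dt + s0 * dz[k]

# Variation-of-constants representation with the same increments
y_voc = x0 * np.exp(a * (T - t0)) + s0 * np.sum(np.exp(a * (T - ts)) * dz)

print("Euler-Maruyama y(T):        ", y)
print("variation-of-constants y(T):", y_voc)   # the two agree up to discretization error
```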
Problem 4.4.1. By applying It6’s formula to Ilx(t,s, y(s))llP7p 2 1, find a variation of constants formula for Ily(t, t o ,xo)ll” similar to relation (4.4.3). Deduce from it the corresponding formula when f ( t ,x ) is linear. Consider now the It6 system x(to) = x,,,
dx = o(t,x)dz(t),
(4.4.5)
to be an unperturbed system so that (4.4.2) can be viewed as a perturbed system with the deterministic perturbation f(t, x)dt. We can also derive a nonlinear variation of constants formula relative to this setup. Theorem 4.4.2. Suppose that o(t,x ) is twice continuously differentiable with respect to x for each t E R , and that ~ ( t=) x ( t , t o , x o ) and y(t) = y(t, t o ,x o ) are solutions of (4.4.5) and (4.4.2), respectively, existing for t 2 t o . Then
) u(t) is a solution of where (dx/axo)(t,t o , x o ) 3 $(t, t o 7 x o and = *-‘(t,to,u(t)lf(t, x(t,to,u(t))),
4 t O ) = xo.
Proof. Let x(t,t o , x o )be the solution of (4.4.5) for t 2 t o . The method of variation of constants requires that we determine a process u(t) such that y(t9 t07x0) = x(t, ‘07 u(r)),
= xo?
is a solution of (4.4.2). Since we know that x(t, t o , x o ) is twice continuously differentiable with respect to x o and that I+- ‘ ( t , t o ,xo) exists, where $(t, t o , x o )= (ax/ax,)(t, t o , x o ) is the solution of the variational system relative to (4.4.9, we have by It6’s formula dy = d x
+ $ d u + d$du +
Let us suppose that u ( t ) satisfies du = q(t,u)dt
+ G(t,u)dz(t),
u(t0) = x O ,
and find g and G. Thus we have f ( t , y)dt = $ d u
+ -t1.(2 Consequently, we see that G
+ o,(t,x)$Gdt a2x
( t , t o , u)G(t,u)TG(t, u)
axoax0
= 0 and that
Hence it follows that
for t 2 to. As an illustration of Theorem 4.4.2, consider a(t, x ) = A(t)x in (4.4.5). Then we know by Example 4.1.3 that
-3
x(t, t o ,xo) = x o ex,[
Hence $(t, to)= exp[
-4
s' A2(s) + s' A(s) f0
i',
,12(s)ds
ds
f0
1
dz(s)
+ jto'A(s)dz(s)
and consequently any solution y(t) of (4.4.2)is of the form Y(t) = tt/k t o )
Problem 4.4.2.
Consider the perturbed system dY = [.(t9 Y )
+ m,Y ) ] dz + f(t,
Y)dt
relative to (4.4.5). Obtain the variation of constants formula for the solution y ( t ) ,assuming that a(t, x) is twice continuously differentiable in x. 4.5.
STOCHASTIC DIFFERENTIAL INEQUALITIES
In this section we shall present some basic results on stochastic differential inequalities of lt8-type. The following lemmas play an important role in our discussion. Lemma 4.5.1.
Assume that
(i) m(t) is a solution of dm = g(t,m)dt
+ G(t,m)dz(t),
rn(to)= mo,
where g, G E C [ R + x R, R ] and z(t) is a normalized Wiener process; (ii) IG(t,u)I2 I h(lu1) for (t,u)E R , x R, where h E C [ R + , R + ]such that h(0) = 0, h(u) is increasing in u, and ds
so+
hcs> =
Then we have E(jm(t)() 5 E(jm0j) + E(~olg(r>m(s))lds).
Proof. Let a.
=
. . . ,J:;-'ds/h(s)
1 > a , > a, > . . . > a,
-+ 0
as n
+
t 2 to.
co.Define for n =
= n. Then there exists a twice continuously differentiable function H,,(u) on R , such that H,,(O) = 0,
1,2,
i::
H;(u) = between 0 and 1, and
Hi(u) =
0 I u 5 a,, a, < u < a,,-,, u 2 a,-,,
i::
0I u I a,,,
2 between 0 and __ nh(u)'
a, < u < a,-,, u 2 an-l.
We then extend H,(u) to ( - co,00) symmetrically, that is, H,(u) = H,(lul). Clearly, H,(u) is a twice continuously differentiable function on (- co,a), and H,(u) 1.1 as n -+ co. Using Itd's formula, we get --f
Hn(m(r)) Hn(mo) +
+
l;
+$
J',H;(m(s))G(s,m(s)) dz(s)
HXm(s))g(s,m(s))ds
s'
t0
= H,(rno)
f&'(m(s))[G(s,m(s))]' ds
+ I , + 12 + 13.
It is obvious that E ( 1 , ) = 0. Since \HL(u)\5 1, we have
Also,
Consequently, we obtain
and the proof is complete. An extension of Lemma 4.5.1 to It8-type systems of differential equations will be needed to derive component-wise estimates. We merely state such a result since its proof is similar, with appropriate modifications, to the proof of Lemma 4.5.1. As usual, inequalities between two vectors are to be understood component-wise. If m E R", we use the notation lml = (lmll, lm21,. . . , lm,,l).
Lemma 4.5.2. (1)
Assume that
m(t) is a solution of the It8-type system
dm = g(t,m) dt -tG(t,m) dz(t),
m(to)= m,,
where g E C [ R + x R", R " ] , G E C [ R + x R", R""], and z ( t ) is a normalized m-vector Wiener process; (ii) ~ ~ = l I G i j ( t , ~ I ) 1hi(luil) 2 for ( t ,u) E R , x R", where for each i = 1, 2, . . . , n, hi E C [ R + R,], , hi(0) = 0, hi(s)is increasing in s, a n d l o +ds/hi(s)= m. Then we have
Let us introduce the notion of quasi-positivity of a function. Definition 4.5.1. A function g E C [ R + x R", R"] is said to be strictly quasi-positive in u E R" for each t E R + if u 2 0 and ui = 0 imply gi(t,u) > 0 for each t E R + . We are now in position to prove the following result concerning the nonnegativity of solutions of It6 systems, which result is a useful tool in proving theorems on differential inequalities.
Theorem 4.5.1. Let the assumptions of Lemma 4.5.2 hold. Suppose further that g(t,u) is strictly quasi-positive in u for each t > t o . Then m(to)= m, 2 0 implies m(t) 2 0 for t 2 to a s . Proof. Let aR: denote the boundary of R;. For any m, E dR;, one can find a nonempty index subset I , of I = { 1,2, . . . , n ) , depending on m,, such that moi = 0 for i E I , and m,, > 0 for i E f\f,, w.p. 1. We first prove that P(there exists a t > t,:m(s) 2 0 for s E [t,,t]) = 1.
(4.5.1)
To prove this, set
and z, = inf
IJ [s:mi(s)< 01.
iEl-lo
Because of the quasi-positivity of g in u, gi(t,mo)> 0 for i E I,, t > t o . This, together with the continuity of g and the sample continuity of rn(t), gives P(z > to)= 1, where z = min(t,, t,).Let T > to be fixed, and set t = min(t, T ) . Then for i E I , , we find that E(mi(t))= E
(6:
1
(4.5.2)
gi(s, m(s))ds .
Since t I z, we see that gi(s,m(s))2 0 for s E [ t o ,t ] and for each i E I,. Hence for i E I , , Lemma 4.5.2 gives
which, together with (4.5.2) and the fact that moi(t)= 0, yields E ( ( mi( t)I () E(mi(t)). This implies that for i E I,, rni(t) 2 0 a.s. From the definitions of t, t,, and the nature of mio, mi(t)2 0 as. for i E Z\I,. This implies that m(t) 2 0 a s . Since this is true for all t = min(z, T ) ,the sample continuity of m(t)shows that P(t E [ t o , z ] ==- m(t) 2 0) = 1,
uy=,
which proves (4.5.1). [s:mi(s)< 01. To prove that m(t) 2 0 as. for all t 2 t o ,we let t , = inf It is enough to show that t , = co as. Suppose that P(tl < co) > 0. Set fi = [ w : t , ( w ) < 003, g t = @t+tl-tol$, @ = Slb, and &= l) P(A)/ P ( b ) ,A E 9. On the space (fi,@=, p),we set G(t,u ) = G(t + t , - to,u), g(t,u) = g(t t , - t o , u), 6 ( t )= m(t t , - to),and T ( t ) = z(t + t , - to). Then it follows that f i ( t o )= m(tJ E aR: as. and
+
+
d f i = Lj(t,%)dt
+ G(t,A)dF((t),
fi(t0) = fio
We now apply (4.5.1) and obtain P(there exists a t > t o : f i ( s ) 2 0 for s E [ t o , t ] ) = 1. But this contradicts the definition of t,. Hence t , m(t) 2 0 a.s. for t 2 t o , whenever mo E dR;.
= 00
a.s., and hence
To complete the proof of the theorem, we need to show that if R'!+\aRn+,then m(t) 2 as. for t 2 t o . To show this, set
moE
u [s:m,(s)< 01, n
t,
= inf
i =1
and by using an argument similar to the above, we can arrive at a contradiction to the definition of t,. The proof of the theorem is complete. Corollary 4.5.1. The conclusion of Theorem 4.5.1 remains valid when the strict quasi-positivity of g for t > t o is weakened to quasinonnegativity for t > t o if the uniqueness of the solution is guaranteed for the It8 system.
Proof. Let E > 0, and let m(t,E) be a solution of dm = [g(t,m) + E ] d t
+ G(t,m)dz(t),
m(to,E)= mo
+ E.
By Theorem 4.5.1, r n ( t , E ) 2 0 as. for t 2 to. Now the uniqueness of the solutions implies that m(t) = lirn,,,rn(t,E) 2 0 a s . for t 2 t o . We shall next prove some basic results on differential inequalities. Theorem 4.5.2.
Assume that
(i) f~ C [ R + x R", R"], F E C [ R + x R", R""'], z(t) is a normalized rnvector Wiener process, and f ( t ,x) is quasi-monotone nondecreasing in x for each t ; ,IFij(t, u) - Fij(t,u)lz 5 hi((ui- uil) and (t,u), (t,u) E R+ x R", (ii) where for each i = 1,2, . . . , n, hi E C [ R + , R + ]hi(0) , = 0, h,(s) is increasing in s, and So+ ds/hi(s)= 00; (iii) u(t),u(t) are solutions of
+ F(t, U)dz(t), dU = PZ(t)dt + F ( t , ~ ) d z ( t ) ,
du = Pl(t)dt
to) = U O to) = u O ,
respectively, such that Pl(t)If ( t ,u(t)) and inequalities being strict.
Pz(t) 2 f(t,
t 2 to, u(t)), one of the
Then uo 5 uo implies u(t) Iu(t), t 2 to, a.s. Proof. We set m(t) = u(t) - u(t) so that dm = g(t,m)dt
where
+ G(t,rn)dz(t),
(4.5.3)
From the quasi-monotonicity of f(t, x) in x, it follows that g(t, x) is quasimonotone in x; moreover, if one of the inequalities p2 2 f(t, u), p1I f(t, u) is strict, we see that g ( t , X) is strictly quasi-positive in x for t > to. In view of assumption (ii), G(t,rn) satisfies condition (ii) of Lemma 4.5.2. Hence by Theorem 4.5.1, we get rn(t) 2 0, t 2 t o , as., which implies the stated result.
Corollary 4.5.2. If in Theorem 4.5.2, the strictness of one of the inequalities p2 2 f(t, u), p1 I f(t,u) is dropped and the uniqueness of solutions of (4.5.3) is assumed, the conclusion of Theorem 4.5.2 remains valid. The following form of Theorem 4.5.2 is more suitable for later discussion. We merely state it.
Theorem 4.5.3.
Assume that
(i) u(t),u ( t ) are solutions of
du = f l ( t , U)dt d~ = f 2 ( t , u ) d t
+ F(t, U)dz(t), + F(t,u)dz(t),
to) = u O , to) = u o ,
t2
respectively, where f l , f 2 E C [ R + x R", R"], F E C [ R + x R", R""], z ( t ) is a normalized rn-vector Wiener process, f 2 ( t ,x) is quasi-monotone nondecreasing in x for each t, and fl(t, x) < f2(t, x); (ii) condition (ii) of Theorem 4.5.2 holds. Then uo I uo implies u(t) I u(t), t 2
t o , as.
It might seem that the results contained in Theorems 4.5.2 and 4.5.3 do not look like theorems on differential inequalities, although intrinsically they are. For example, in view of assumption (iii), it is clear that u, u satisfy
du I f(t, u ) d t du 2 f(t, u) d t
+ F(t, u) dz(t), + F(t, u) dz(t),
(4.5.4)
and consequently Theorem 4.5.2 is implicitly a result on stochastic differential inequalities. Unfortunately, it is not possible to prove the assertion of Theorem 4.5.2 by starting from (4.5.4) as is usual. 4.6.
MAXIMAL AND MINIMAL SOLUTIONS
We shall introduce the notion of maximal and minimal solutions relative to the ItG-type stochastic system
+
du = f ( t , ~ ) d t .(t,u)dz(t),
to) = u O ,
(4.6.1)
where f E C [ J x R", R"], 0 E C [ J x R", R""], and z(t) E R[Q, R"] is a normalized Weiner process. Here J = [ t o , t o + a). Let us begin by defining the extremal solutions.
Definition 4.6.1. Let r(t) be a solution of (4.6.1) on J . Then r(t) is said to be the maximal solution of (4.6.1) if for every solution u(t) existing on J , the inequality u(t) I r(t),
t
EJ,
w.p. 1,
(4.6.2)
holds. A minimal solution is defined similarly by reversing inequality (4.6.2). We shall now prove the existence of extremal solutions.
Theorem 4.6.1. Let assumptions (H,) and (H,) of Theorem 4.2.1 hold. Assume further that
(H3) (a) f ( t , u ) is quasi-monotone nondecreasing in u for each t ; (b) Cj”=I laij(t,u) - cij(t, u)l’ I hi( Iui - uil), (t,u), (t,0 ) E J x R”, where for each i, hiE C [ R + ,R,], hi(0)= 0, hi@) is increasing in s, and So+ ds/hi(s)= co. Then there exist maximal and minimal solutions of (4.6.1) on J . Proof. We shall prove the existence of the maximal solution only since the case of the minimal solution is very similar. Let E > 0 be such that 1 1 ~ 1 1I L , and consider the problem
+
+
du = [ f ( t , ~ ) ~ ] d t o(t,u)dz(t),
to,^)
= UO
+
(4.6.3)
E.
It is easy to see that the hypotheses of Theorem 4.2.1 are satisfied for (4.6.3) with L replaced b y 2L. Hence there is a solution u ( t , ~ of ) (4.6.3) on J . Let 0 < E , < E, I E , and let u, = u ( t , ~ u2 ~ )= u ( ~ , E ,be ) solutions of du, = [ f ( t , % f du2 = [ f ( t , ~
+ Elldt + 4 , U , ) W ) ,
+
2 ) E,]
dt
+ a(t,~ 2 ) d z ( t ) ,
u,(to) = uo u,(to) = uo
+ El, +~ 2
.
Then an application of Theorem 4.5.3 gives @,El)
I u(~,E~), t E J,
W.P. 1.
Choose a decreasing sequence ( E ~ } such that E~ + 0 as k + 00. Then it is clear {u(t,q)) is a decreasing sequence and thus the uniform limit r(t) = 1imkdmu(t, &k) w.p. 1 exists on J . Obviously r(t,) = u,. We have to show that r ( t ) satisfies (4.6.1). For this purpose, we set
as k -+ co uniformly on J,. In view of this, after some manipulations (similar to the ones used in the last part of the proof of Theorem 4.2.1), we obtain E(IIu(t,Ek)
- y(t)112) 5 3(t
+
u(S,Ed)-.f(s,r(s))l12ds)
- tO)E(S,’llfcs,
St: E(llg(s,
<e3
u(S,Ek))- g(s, r(s))l12)dsf
for k 2 k,,
38;
~EJ,
Hence by Chebyshev’s inequality, it follows that P[llu(t,&k)- y(t)ll > &] < &
on J ,
for k 2 k , ,
which implies that u(t,&k)+ y ( t ) w.p. 1
as k
-+
co
on J,.
This proves that r(t) is a solution of (4.6.1) on J. We shall now show that r(t) is the desired maximal solution of (4.6.1). Let u(t) be any solution of (4.6.1)on J. Then an application of Theorem 4.5.3 gives u(t) I u(t,Ek)
on J
w.p. 1.
This proves the theorem since u(t) I limk+mtd(t,&k)= r(t) on J w.p. 1.
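The construction in the proof above can also be visualized numerically: along one fixed Wiener path, the solutions u(t,ε) of the ε-perturbed equation decrease as ε decreases, and their limit approximates the maximal solution r(t). In the scalar sketch below (Python) f and σ are illustrative choices, not taken from the text.

```python
import numpy as np

# Sketch of the construction in Theorem 4.6.1 (scalar case): solutions u(t, eps) of
#   du = [f(t, u) + eps] dt + sigma(t, u) dz,   u(t0, eps) = u0 + eps,
# computed along one fixed Wiener path, decrease as eps decreases; eps -> 0 gives an
# approximation to the maximal solution r(t).  f and sigma are illustrative choices.

def f(t, u):     return -u + np.sin(t)
def sigma(t, u): return 0.2 * u

rng = np.random.default_rng(6)
t0, T, n = 0.0, 2.0, 20000
dt = (T - t0) / n
dz = rng.normal(0.0, np.sqrt(dt), size=n)       # one fixed Wiener path
u0 = 1.0

def solve(eps):
    u, t = u0 + eps, t0
    for k in range(n):
        u = u + (f(t, u) + eps) * dt + sigma(t, u) * dz[k]
        t += dt
    return u

for eps in (0.5, 0.25, 0.1, 0.01, 0.0):
    print(f"eps = {eps:5.2f}   u(T, eps) = {solve(eps):.6f}")
# The printed values decrease with eps; eps = 0 approximates r(T) on this path.
```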
4.7.
COMPARISON THEOREMS
An important technique in the theory ofdifferential equations is concerned with estimating a function that satisfies a differential inequality by the extremal solutions of the corresponding differential equation. In this section we shall obtain comparison results of a similar type for the stochastic differential equation (4.6.1). Theorem 4.7.1. Assume that (i) r(t) is the maximal solution of (4.6.1) on J ; (ii) m(t) is a solution of dm = P(t)dt + a(t,rn)dz(t),
m(to)= m,,
on J ,
where P ( t ) < f ( t ,m ( t ) )and f ( t , x) is quasi-monotone nondecreasing in x for each t ; (iii) condition H,(b) of Theorem 4.6.1 holds. Then m, 5 u, implies m(t) r(t) on J w.p. 1.
Proof. Let u ( t , ~be ) any solution of
du = [f(t, u) -t E ] dt
+ o(t,U ) dz(t),
~ ( t ,= ) uo
+ E,
for E > 0 sufficientlysmall. Then by an application of Theorem 4.5.3, we have
m(t) I u ( ~ , E ) Since limc+ou(t,E )
on J
w.p. 1.
= r(t) on J , the proof is complete.
Corollary 4.7.1. If in Theorem 4.7.1, the inequality B(t) I f(t,rn(t)) is reversed, then the conclusion is to be replaced by m(t)2 p ( t )
on J
w.p. 1,
where p ( t ) is the minimal solution of (4.6.1). The scalar version of Theorem 4.7.1, which we shall merely state below, is needed in later applications. Theorem 4.7.2.
Assume that
(i) r ( t ) is the maximal solution of
du = f ( t , U ) d t + a(t,u)dz(t),
u(to)= uo,
where f,o E C[J x R, R] and z(t) is a normalized Weiner process; (ii) rn(t)is a solution of
drn = B ( t ) d t
+ o(t,rn)dz(t),
m(to)= rno,
where B(t) I f ( t ,m ( t ) ) ; (iii) lo(t,u)- o(t,u)I2I h(lu - ul) and ( t , ~ ) (,t , u ) E J x R, where h E C [ R + , R + ] h(0) , = 0, h(t) is increasing in s, and ds/h(s) = co.
so+
Then m₀ ≤ u₀ implies m(t) ≤ r(t) on J w.p. 1.

4.8. LYAPUNOV-LIKE FUNCTIONS
As is well known, the Lyapunov second method has played an important role in the qualitative study of solutions of differential equations. In this section, by using the concept of the Lyapunov function and the theory of differential inequalities (stochastic and deterministic), we develop some results that furnish very general comparison theorems.

Definition 4.8.1. Let G be a function on Rⁿ into Rᵐ. The function G is said to be convex if each component Gᵢ, 1 ≤ i ≤ m, is convex, and G is said to be concave if −Gᵢ is convex.
Consider the Itô-type stochastic differential system

dx = f(t, x) dt + σ(t, x) dz(t),  x(t₀) = x₀,  t₀ ∈ R₊,   (4.8.1)

where f ∈ C[R₊ × Rⁿ, Rⁿ], σ ∈ C[R₊ × Rⁿ, R^{n×m}], and z(t) ∈ R[Ω, R^m] is a normalized Wiener process. We shall assume that the functions f and σ are smooth enough to assure the existence of the solution process. Let V ∈ C[R₊ × Rⁿ, R^N] be such that V_t, V_x, and V_xx exist and are continuous for (t, x) ∈ R₊ × Rⁿ. By Itô's formula, we obtain

dV(t, x) = LV(t, x) dt + V_x(t, x) σ(t, x) dz(t),   (4.8.2)

where

LV(t, x) = V_t(t, x) + V_x(t, x) f(t, x) + ½ tr(V_xx(t, x) σ(t, x) σᵀ(t, x)).   (4.8.3)
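For a quadratic Lyapunov function and a linear Itô system the operator (4.8.3) can be written out in closed form. The following sketch (Python; an illustration added here, not part of the original text; the matrices A, B_i, P and the quadratic V(x) = xᵀPx are hypothetical choices) evaluates LV both term by term and via the matrix identity xᵀ(PA + AᵀP + Σ BᵢᵀPBᵢ)x, the same expression that reappears in (4.9.13).

import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))                        # hypothetical drift matrix
B = [rng.standard_normal((n, n)) for _ in range(m)]    # hypothetical diffusion matrices
P = np.diag([1.0, 2.0, 3.0])                           # hypothetical symmetric P

def LV_direct(x):
    # Evaluate (4.8.3) term by term for V(x) = x^T P x, so V_t = 0, V_x = 2 x^T P, V_xx = 2 P.
    sigma = np.column_stack([Bi @ x for Bi in B])      # sigma(t, x) = [B_1 x, ..., B_m x], n x m
    drift_term = 2.0 * x @ P @ (A @ x)                 # V_x(t, x) f(t, x)
    trace_term = 0.5 * np.trace(2.0 * P @ sigma @ sigma.T)
    return drift_term + trace_term

def LV_matrix(x):
    # Closed form: LV(x) = x^T (P A + A^T P + sum_i B_i^T P B_i) x.
    M = P @ A + A.T @ P + sum(Bi.T @ P @ Bi for Bi in B)
    return x @ M @ x

x = rng.standard_normal(n)
print(LV_direct(x), LV_matrix(x))   # the two evaluations agree up to rounding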
We can now formulate the basic comparison theorems in terms of Lyapunov-like functions.

Theorem 4.8.1. Assume that
(i) V ∈ C[R₊ × Rⁿ, R^N], V_t, V_x, and V_xx exist and are continuous for (t, x) ∈ R₊ × Rⁿ, and for (t, x) ∈ R₊ × Rⁿ,

LV(t, x) ≤ g(t, V(t, x)),   (4.8.4)

where L is the operator defined in (4.8.3);
(ii) g ∈ C[R₊ × R^N, R^N], g(t, u) is concave and quasi-monotone nondecreasing in u for each fixed t ∈ R₊, and r(t) = r(t, t₀, u₀) is the maximal solution of the auxiliary differential system

u' = g(t, u),  u(t₀) = u₀,   (4.8.5)

existing for t ≥ t₀;
(iii) for the solution process x(t) = x(t, t₀, x₀) of (4.8.1), E[V(t, x(t))] exists for t ≥ t₀.

Then

E[V(t, x(t))] ≤ r(t, t₀, u₀),  t ≥ t₀,   (4.8.6)

whenever

E[V(t₀, x₀)] ≤ u₀.   (4.8.7)
Proof. Set

m(t) = E[V(t, x(t))]  for t ≥ t₀.

Then assumption (iii), together with the continuity of V(t, x) and the solution process x(t), implies that m(t) is continuous for t ≥ t₀. Applying Itô's formula to V(t, x(t)), we get

V(t + h, x(t + h)) − V(t, x(t)) = ∫_{t}^{t+h} LV(s, x(s)) ds + ∫_{t}^{t+h} V_x(s, x(s)) σ(s, x(s)) dz(s).

For h > 0 sufficiently small, this, together with assumption (i) and the concavity of g(t, u), implies

E[V(t + h, x(t + h))] − E[V(t, x(t))] = E[∫_{t}^{t+h} LV(s, x(s)) ds] ≤ ∫_{t}^{t+h} g(s, E[V(s, x(s))]) ds.   (4.8.8)

It therefore follows that

D⁺m(t) ≤ g(t, m(t)),  t ≥ t₀.   (4.8.9)

An application of Theorem 1.7.1 yields immediately the stated result (4.8.6), completing the proof.

Corollary 4.8.1. Suppose that conditions (i) and (ii) of Theorem 4.8.1 hold. Assume further that the initial value u₀ in (4.8.5) is a random variable, so that Eq. (4.8.5) may be viewed as a random differential equation, and that E[V(t, x(t)) | x₀] exists for t ≥ t₀ w.p. 1. Then

E[V(t, x(t)) | x₀] ≤ r(t, t₀, u₀),  t ≥ t₀,  w.p. 1,

provided that V(t₀, x₀) ≤ u₀ w.p. 1. In particular, if g(t, u) ≡ 0, we have E[V(t, x(t)) | x(s)] ≤ V(s, x(s)), t ≥ s ≥ t₀, w.p. 1, which implies that {V(t, x(t)), F_t, t ≥ t₀} is a supermartingale.

Proof. Proceeding as in the proof of Theorem 4.8.1 with m(t) = E[V(t, x(t)) | x₀] and noting that m(t) is sample continuous, we arrive at the random differential inequality (4.8.9). We then apply the comparison theorem (Theorem 2.5.1) to derive the stated result. If g ≡ 0, we employ m(t) = E[V(t, x(t)) | x(s)], t ≥ s ≥ t₀, to get the desired conclusion.
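To make the supermartingale statement of Corollary 4.8.1 concrete, the sketch below (an illustration added here, not from the text) uses a hypothetical scalar equation dx = a x dt + b x dz with 2a + b² ≤ 0, so that LV ≤ 0 = g for V(x) = x², and checks by Euler–Maruyama simulation that the sample average of V(x(t)) is (approximately) nonincreasing in t.

import numpy as np

rng = np.random.default_rng(1)
a, b = -1.0, 0.5            # 2a + b^2 = -1.75 <= 0, so LV(x) = (2a + b^2) x^2 <= 0
paths, steps, dt = 20000, 200, 0.005
x = np.full(paths, 1.0)      # x(0) = 1 on every path
mean_V = [np.mean(x**2)]
for _ in range(steps):
    dz = rng.normal(0.0, np.sqrt(dt), size=paths)
    x = x + a * x * dt + b * x * dz
    mean_V.append(np.mean(x**2))
# E[V(x(t))] should be (approximately) nonincreasing along the time grid:
print(mean_V[0], mean_V[50], mean_V[100], mean_V[200])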
Remark 4.8.1. The drawback to Theorem 4.8.1 is assumption (iii). However, under certain conditions one could show that assumption (iii) holds. For example, let V(t, x) ≥ 0 and V(t, x) ≤ a(t, ||x||), where a ∈ C[R₊ × R₊, R₊^N] and a(t, u) is concave in u for fixed t ∈ R₊. Then we could have

0 ≤ E[V(t, x(t))] ≤ a(t, E[||x(t)||]),   (4.8.10)

from which condition (iii) follows, using the concavity of a and the fact that the solution process x(t) ∈ L²[Ω, Rⁿ]. Assumption (iii) in Theorem 4.8.1 can be avoided if we employ integral inequalities. But then other restrictions would be needed. Nonetheless, the comparison results that emerge from this approach are useful. We discuss such a comparison theorem next.
Theorem 4.8.2. Assume that hypotheses (i) and (ii) of Theorem 4.8.1 are satisfied, except that the quasi-monotone nondecreasing property of g(t, u) in u for fixed t ∈ R₊ is replaced by the nondecreasing property of g(t, u) in u for fixed t ∈ R₊. Suppose further that the function V(t, x) is bounded below. Then the conclusion of Theorem 4.8.1 remains valid.

Proof. Let x(t) be the solution process of the system (4.8.1) such that E[V(t₀, x₀)] ≤ u₀. Let τₙ be the first exit time of x(t) from B(n) = {x ∈ Rⁿ : ||x|| ≤ n} for a positive integer n ≥ 1. Define τₙ(t) = min(t, τₙ). Then τₙ(t) is a Markov time. By Itô's formula (the expectation of the stochastic integral up to the Markov time τₙ(t) vanishes), we obtain

E[V(τₙ(t), x(τₙ(t)))] = E[V(t₀, x₀)] + E[∫_{t₀}^{τₙ(t)} LV(s, x(s)) ds].

Hence condition (4.8.4) yields

E[V(τₙ(t), x(τₙ(t)))] ≤ E[V(t₀, x₀)] + E[∫_{t₀}^{τₙ(t)} g(s, V(s, x(s))) ds].   (4.8.11)

This inequality, together with the continuity of the functions involved and the fact that V(t, x) is bounded below, implies that E[V(τₙ(t), x(τₙ(t)))] exists for all t ≥ t₀. Clearly, for t₀ ≤ s ≤ τₙ(t), m(s) = E[V(s, x(s))] is continuous. Furthermore, since g(t, u) is concave in u, we have

E[g(s, V(s, x(s)))] ≤ g(s, m(s)),  t₀ ≤ s ≤ τₙ(t).   (4.8.12)

Consequently, the assumption that g(t, u) ≥ 0 and the continuity of g(t, u) in (t, u) yield

0 ≤ E[∫_{t₀}^{s} g(t, V(t, x(t))) dt] ≤ ∫_{t₀}^{s} g(t, m(t)) dt < ∞

for t₀ ≤ s ≤ τₙ(t). Hence, applying Fubini's theorem, we have

E[∫_{t₀}^{s} g(t, V(t, x(t))) dt] = ∫_{t₀}^{s} E[g(t, V(t, x(t)))] dt.

This, together with (4.8.11), (4.8.12), and the definition of m(s), leads to the integral inequality

m(s) ≤ m(t₀) + ∫_{t₀}^{s} g(t, m(t)) dt,  t₀ ≤ s ≤ τₙ(t).

We now apply Theorem 1.7.2 to obtain the inequality

E[V(s, x(s))] ≤ r(s, t₀, u₀),  t₀ ≤ s ≤ τₙ(t),

which implies E[V(τₙ(t), x(τₙ(t)))] ≤ r(τₙ(t), t₀, u₀). Since g(t, u) ≥ 0, the solutions u(t, t₀, u₀) of (4.8.5) are nondecreasing in t, and consequently we have

E[V(τₙ(t), x(τₙ(t)))] ≤ r(t, t₀, u₀).

We now apply Fatou's lemma to the left-hand side of this inequality to get the desired result

E[V(t, x(t))] ≤ r(t, t₀, u₀),  t ≥ t₀.
The proof is complete. The following variant of Theorem 4.8.1 is often more useful in applications.
Theorem 4.8.3. Let the hypotheses of Theorem 4.8.1 hold, except that inequality (4.8.4) is strengthened to

L[A(t)V(t, x)] ≤ g(t, A(t)V(t, x))   (4.8.13)

for (t, x) ∈ R₊ × Rⁿ, where A(t) is a continuously differentiable N × N matrix function such that A⁻¹(t) exists and its elements are nonnegative and continuous, and the elements of the matrix A⁻¹(t)A'(t) = (α_ij(t)) satisfy α_ij(t) ≤ 0 for i ≠ j. Then E[A(t₀)V(t₀, x₀)] ≤ u₀ implies

E[V(t, x(t))] ≤ R(t, t₀, u₀),  t ≥ t₀,   (4.8.14)

where R(t, t₀, u₀) is the maximal solution of the auxiliary differential system

u' = A⁻¹(t)[−A'(t)u + g(t, A(t)u)],  u(t₀) = u₀,   (4.8.15)

existing for t ≥ t₀.

Proof. Setting W(t, x) = A(t)V(t, x), we see (because of (4.8.13)) that

LW(t, x) ≤ g(t, W(t, x)).

This shows that the function W(t, x) satisfies all the assumptions of Theorem 4.8.1, and as a consequence, we have

E[W(t, x(t))] ≤ r(t, t₀, v₀),  t ≥ t₀,   (4.8.16)

provided that E[W(t₀, x₀)] ≤ v₀. Here r(t, t₀, v₀) is the maximal solution of (4.8.5). It is easy to see that

A(t)R(t, t₀, u₀) = r(t, t₀, v₀)   (4.8.17)

with A(t₀)u₀ = v₀. From (4.8.16), (4.8.17), the properties of the mean E, the definition of W(t, x), and the properties of A(t), we have

E[V(t, x(t))] ≤ R(t, t₀, u₀),  t ≥ t₀.

Thus the proof is complete.
Remark 4.8.2. Let B(t) be a diagonal matrix function whose elements are nonnegative. Then the function A(t) = exp[∫_{t₀}^{t} B(s) ds] is admissible in Theorem 4.8.3. In the following we shall present a basic comparison theorem in the framework of stochastic differential inequalities.
Theorem 4.8.4. Assume that
(i) V ∈ C[R₊ × Rⁿ, R^N], V_t, V_x, and V_xx exist and are continuous for (t, x) ∈ R₊ × Rⁿ, and that for (t, x) ∈ R₊ × Rⁿ,

dV(t, x) = α(t, x) dt + G(t, V(t, x)) dz(t),   (4.8.18)

where α ∈ C[R₊ × Rⁿ, R^N], g ∈ C[R₊ × R^N, R^N], G ∈ C[R₊ × R^N, R^{N×m}], α(t, x) ≤ g(t, V(t, x)), and α(t, x) = LV(t, x);
(ii) (a) g(t, u) is quasi-monotone nondecreasing in u for each t;
(b) Σ_{j=1}^{m} |G_ij(t, u) − G_ij(t, v)|² ≤ h_i(|u_i − v_i|), (t, u), (t, v) ∈ R₊ × R^N, where h_i ∈ C[R₊, R₊], h_i(0) = 0, h_i(s) is increasing in s, and ∫₀₊ ds/h_i(s) = ∞, i = 1, 2, . . . , N;
(iii) r(t) = r(t, t₀, u₀) is the maximal solution of the Itô-type stochastic differential system

du = g(t, u) dt + G(t, u) dz(t),   (4.8.19)

existing for t ≥ t₀;
(iv) x(t) = x(t, t₀, x₀) is a solution process of (4.8.1) such that

V(t₀, x₀) ≤ u₀  w.p. 1.   (4.8.20)

Then

V(t, x(t)) ≤ r(t, t₀, u₀),  t ≥ t₀,  w.p. 1.   (4.8.21)

Proof. Let x(t) be any solution process of (4.8.1) defined for t ≥ t₀ such that (4.8.20) holds. Define

m(t) = V(t, x(t)),

so that m(t₀) ≤ u₀. This, together with (4.8.18), yields

dm(t) = β(t) dt + G(t, m(t)) dz(t),

where β(t) = α(t, x(t)) = LV(t, x(t)). Applying Theorem 4.7.1, we obtain the desired result (4.8.21).
4.9. STABILITY IN PROBABILITY
Let x(t) = x(t, t₀, x₀) be any solution of the Itô-type stochastic differential system (4.8.1). Without loss of generality, we assume that f(t, 0) ≡ 0 and σ(t, 0) ≡ 0 for all t ∈ R₊ and that x(t) ≡ 0 is the unique solution of (4.8.1) through (t₀, 0). The stability concepts (SP₁)-(SP₄), (SS₁)-(SS₄), and (SM₁)-(SM₄) of Section 2.10 with respect to the trivial solution x = 0 of (4.8.1) will be used in our discussion. To derive the stability properties of the trivial solution of (4.8.1) in the framework of the second method of Lyapunov, we need to know the stability properties of the corresponding auxiliary or comparison differential system. We shall utilize the stochastic comparison system (4.8.19) as well as the deterministic system (4.8.5), where g, G, and z are as defined in Theorem 4.8.4. Corresponding to the stability definitions (SP₁)-(SP₄), (SS₁)-(SS₄), and (SM₁)-(SM₄), we shall designate by (SP₁*)-(SP₄*), (SS₁*)-(SS₄*), and (SM₁*)-(SM₄*) the concepts of Section 2.10 concerning the stability of the equilibrium solution u = 0 of (4.8.19). Similarly, we shall denote by (S₁*)-(S₄*) the concepts of Section 3.6 concerning the stability of the equilibrium solution u = 0 of (4.8.5). We begin with the following stability criteria, recalling the definition of the function LV(t, x) given in (4.8.3).
Theorem 4.9.1. Assume that
(i) g ∈ C[R₊ × R₊^N, R^N], g(t, 0) ≡ 0, and g(t, u) is concave and quasi-monotone nondecreasing in u for each t ∈ R₊;
(ii) V ∈ C[R₊ × B(ρ), R^N], V_t(t, x), V_x(t, x), and V_xx(t, x) exist and are continuous on R₊ × B(ρ), and for (t, x) ∈ R₊ × B(ρ),

LV(t, x) ≤ g(t, V(t, x)),

where B(ρ) = {x ∈ Rⁿ : ||x|| < ρ};
(iii) for (t, x) ∈ R₊ × B(ρ),

b(||x||) ≤ Σ_{i=1}^{N} V_i(t, x) ≤ a(t, ||x||),   (4.9.1)

where b belongs to the class 𝒦 and a belongs to the class 𝒞𝒦.

Then (S₁*) implies (SP₁).
Proof. Let x(t) be the solution process associated with (4.8.1). By assumption (iii), we have

b(||x(t)||) ≤ Σ_{i=1}^{N} V_i(t, x(t)) ≤ a(t, ||x(t)||)

so long as x(t) ∈ B(ρ) for t ≥ t₀. This inequality assures, as noted in Remark 4.8.1, that E[V(t, x(t))] exists. Hence by Theorem 4.8.1, the estimate

E[V(t, x(t))] ≤ r(t, t₀, u₀)   (4.9.2)

is valid so long as x(t) ∈ B(ρ) for t ≥ t₀, provided that E[V(t₀, x₀)] ≤ u₀. It is obvious that the relation (4.9.2) yields the estimates (4.9.3) and (4.9.4).

Let 0 < η < 1, 0 < ε < ρ, and t₀ ∈ R₊ be given. Then, given ε₁ = ηb(ε) > 0 and t₀ ∈ R₊, there exists a positive function δ₁ = δ₁(t₀, ε, η) that is continuous in t₀ for each ε and η such that Σ u_{i0} ≤ δ₁ implies

Σ_{i=1}^{N} u_i(t, t₀, u₀) < ηb(ε),  t ≥ t₀.   (4.9.5)

We choose u₀ so that

E[V(t₀, x₀)] ≤ u₀  and  Σ_{i=1}^{N} u_{i0} = a(t₀, E[||x₀||]).   (4.9.6)

Since a ∈ 𝒞𝒦, we can find a δ₂ = δ₂(t₀, ε, η) > 0 that is continuous in t₀ for each ε and η such that the inequalities

E[||x₀||] ≤ δ₂  and  a(t₀, E[||x₀||]) ≤ δ₁   (4.9.7)

hold simultaneously. Now we choose a δ = δ(t₀, ε, η) that is continuous in t₀ for each ε and η such that δ₂ < ηδ. This, together with Chebyshev's inequality and (4.9.7), yields

P[ω : ||x₀(ω)|| > δ] < η.   (4.9.8)
We claim that (SP₁) holds with this δ. If this claim is not true, then there exists a t₁ > t₀ such that

P[||x(t₁)|| ≥ ε] ≥ η.   (4.9.9)

Let τ_ε be the first exit time of x(t) from {x ∈ Rⁿ : ||x|| < ε}, and let τ_ε(t) = min(t, τ_ε). Hence from (4.9.4) and (4.9.5), we get

Σ_{i=1}^{N} E[V_i(τ_ε(t), x(τ_ε(t)))] < ηb(ε),

which implies that, for t = t₁,

Σ_{i=1}^{N} ∫_{Ω₂} V_i(τ_ε(t₁), x(τ_ε(t₁))) P(dω) < ηb(ε),

where Ω₁ = {ω ∈ Ω : τ_ε(t₁) > t₁} and Ω₂ = {ω ∈ Ω : τ_ε(t₁) ≤ t₁}. This, in view of condition (iii), gives

Σ_{i=1}^{N} ∫_{Ω₂} V_i(τ_ε(t₁), x(τ_ε(t₁))) P(dω) ≥ b(ε)P(Ω₂) ≥ ηb(ε),

a contradiction, proving (SP₁). Given ηb(ε) > 0 and t₀ ∈ R₊, there exists a T = T(t₀, ε, η) > 0 such that
Σ_{i=1}^{N} u_i(t, t₀, u₀) < ηb(ε),  t ≥ t₀ + T,   (4.9.10)

whenever u_{i0} ≤ δ₁*. As before, we choose u₀ so that (4.9.6) holds and also choose δ(t₀) so that (4.9.7) holds. Choose δ₀ so that min(δ, δ₁*) < ηδ₀.
This, together with Chebyshev's inequality and (4.9.7), gives

P[ω : ||x₀(ω)|| > δ₀] < η,   (4.9.11)

which implies that

Σ_{i=1}^{N} E[V_i(t, x(t))] ≤ Σ_{i=1}^{N} u_i(t, t₀, u₀),  t ≥ t₀,

with probability greater than or equal to 1 − η₀, since ||x(t)|| < ε with probability greater than or equal to 1 − η₀. As a result, we have (because of (4.9.10))

Σ_{i=1}^{N} E[V_i(t, x(t))] < ηb(ε),  t ≥ t₀ + T,

with probability greater than or equal to 1 − η₀, which implies, arguing as before, that

P[||x(t)|| ≥ ε] < η,  t ≥ t₀ + T,

whenever (4.9.11) holds. This proves (SP₃), and therefore the proof is complete.
Theorem 4.9.4. Assume that all the hypotheses of Theorem 4.9.2 hold. Then (S₄*) implies (SP₄).

Proof. The proof follows from the proofs of Theorems 4.9.2 and 4.9.3.

Remark 4.9.1. Instead of using the comparison theorem (Theorem 4.8.1) for the stability considerations as was done in the foregoing discussion, we could utilize the comparison result contained in Corollary 4.8.1. This would necessitate employing the random differential system (4.8.5) (note that only the initial value will be random) as the comparison system. Thus if the trivial solution of (4.8.5) is stable in probability (SP₁*), we would then obtain the following type of stability for the given system (4.8.1), namely, P[||x₀|| ≥ δ] < η implies P[E(||x(t)|| | x₀) ≥ ε] < η, t ≥ t₀. On the other hand, if we assume stability in the mean for the comparison system, we have a similar concept holding for (4.8.1), provided that the function b(u) is also convex (refer to Theorem 4.9.2). In particular, if the initial value x₀ is constant w.p. 1 and g ≡ 0, we get stability in probability for (4.8.1). Thus it is clear that the comparison result in Corollary 4.8.1 offers another unified and systematic approach to investigating slightly different kinds of stability notions. We do not wish to pursue this treatment.
As an illustration, consider the Itô-type linear system of stochastic differential equations

dx = Ax dt + Σ_{i=1}^{m} B_i x dz_i,   (4.9.12)

where z(t) is a Wiener process and A and B_i, i = 1, 2, . . . , m, are n × n constant matrices. Take V(x) = xᵀPx as a Lyapunov function, where P is the positive definite matrix solution of

0 = Q + PA + AᵀP + Σ_{i=1}^{m} B_iᵀPB_i   (4.9.13)

and Q is any given positive definite matrix. Noting that g(t, u) = −αu, where α = λ_m(P⁻¹Q) and λ_m(P⁻¹Q) is the smallest eigenvalue of the matrix P⁻¹Q, it is clear that the trivial solution of (4.9.12) is uniformly asymptotically stable in probability.
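To use this illustration in practice one has to produce P from (4.9.13). The sketch below (Python; an added illustration with hypothetical matrices A, B_i, Q, not data from the text) solves (4.9.13) by rewriting it as a linear system in vec(P) via Kronecker products and then reads off α = λ_min(P⁻¹Q), so that LV(x) = −xᵀQx ≤ −αV(x), i.e., g(t, u) = −αu.

import numpy as np

# Hypothetical data for (4.9.12): a stable drift A and small noise matrices B_i.
n = 2
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = [0.2 * np.eye(n), np.array([[0.0, 0.1], [-0.1, 0.0]])]
Q = np.eye(n)                      # any positive definite Q

# Solve 0 = Q + P A + A^T P + sum_i B_i^T P B_i for P by vectorization:
# (A^T kron I + I kron A^T + sum_i B_i^T kron B_i^T) vec(P) = -vec(Q).
I = np.eye(n)
M = np.kron(A.T, I) + np.kron(I, A.T) + sum(np.kron(Bi.T, Bi.T) for Bi in B)
P = np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)
P = 0.5 * (P + P.T)                # symmetrize against round-off

assert np.all(np.linalg.eigvalsh(P) > 0), "P should be positive definite"
alpha = float(np.min(np.linalg.eigvals(np.linalg.solve(P, Q)).real))   # smallest eigenvalue of P^{-1} Q
print("alpha =", alpha)            # comparison function: g(t, u) = -alpha * u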
The following example demonstrates the advantage of a vector Lyapunov function over a single Lyapunov function.

Example 4.9.1. Consider the system of stochastic differential equations

dx = F(t, x)x dt + σ(t, x) dz,   (4.9.14)

where x ∈ R², z(t) is a normalized scalar Wiener process, σ ∈ C[R₊ × R², R²], σ(t, 0) ≡ 0,   (4.9.15)

λ ∈ C[R₊, R₊] ∩ L₁[0, ∞), and f₁ ∈ C[R₊ × R², R], f₁(t, x) ≥ 0 on B(ρ), and f₁(t, 0) = 0.

First, we choose a single Lyapunov function V(t, x), given by V(t, x) = x₁² + x₂². Then it is evident that

LV(t, x) ≤ [2e^{−t} + 2|sin t| + λ(t)] V(t, x),

using the inequalities 2|ab| ≤ a² + b² and f₁(t, x) ≥ 0 in B(ρ). Clearly, the trivial solution of the comparison equation u' = [2e^{−t} + 2|sin t| + λ(t)]u is not stable. Hence we cannot deduce any information about the stability of (4.9.14) from the scalar version of Theorem 4.9.1, that is, with N = 1, even though (4.9.14) is stable.
171
Stability in Probability
Now we attempt to seek stability information of (4.9.14) by employing a vector Lyapunov function. We choose
Note that the components of V , Vl ,and V2are not positive definite. However, Vl and V2 satisfy the hypotheses of Theorem 4.9.1 with N = 2. In fact, (x:
+ $1
2
5
1
Y(t,X)
I 2(x:
i= 1
+ y;)
and the vectorial inequality L W x )I s(t,W,X)) are satisfied in B(p) with g(t' u,
=
[
(2e-' (2e-t
I.
+ 2 sin t + l ( t ) ) u l + 2 sin t + l ( t ) ) u 2
It is easy to observe that g(t, u) is concave and quasi-monotone nondecreasing in u for fixed t and that the trivial solution of (4.8.5) is uniformly stable. Consequently, the trivial solution of (4.9.14) is stable in probability. In the following, we shall illustrate the use of the variation of parameters method to study the stability in probability of ItS-type stochastic differential systems. Theorem 4.9.5.
Assume that
(i) o(t,x) in (4.8.1) is an n x 1 matrix function such that o(t,x) = B(t)x and z is a scalar normalized Wiener process, where B E C [ R , , R n 2 ]is an n x n matrix function; (ii) for any q > 0 and t o E R , , there exists a positive number T = T(t,, q ) such that P ( o : j ( t , t o ) 2 O} < y ~ ,
t 2 to
+ T,
where j ( t ,t o ) is the largest eigenvalue of the stochastic matrix function
+
-+S:,BT(u)B(u)du S:,B(u)dz(u); (iii) f(t,O) = 0 for t E R,; (iv) Ilf(t,x)II = o ( ~ ~asx x~ + ~ )0 uniformly in t
E
R,.
Then the trivial solution of (4.8.1) is asymptotically stable ir, probability.
172
4.
It6-Doob Calculus Approach
f . From Theorem 4.4.2, we have
e
he fundamental matrix solution process of
dx
(4.9.17)
= B(t)xdz.
Let E > 0 be sufficiently small. Then because of hypothesis (iv), there exists a 6 > 0 such that llxll < 6 implies Ilf(t, x)II < EIIxII.This, together with (4.9.16) and the argument that is used in Theorem 2.11.7, yields
IlY(tdo,xo>ll 5 KlllXOIl exP[IE(t - to) + B(t,to)l so long as IIy(t, t o , x0)" < 6 w.p. 1, where K , satisfies
(4.9.18)
With the foregoing E and hypothesis (ii),one can find a positive constant K such that P { o : ~ (-t t o )
+ P ( t , t o ) 2 K } , < U]
t 2 to
+ T,
and choose 6 = E/KleK. Now the remaining proof of the theorem can be reformulated on the basis of the proof of Theorem 2.11.7.
Problem 4.9.1. Assume that hypotheses (i), (iii), and (iv) of Theorem 4.9.5 hold. Further assume that for any U] > 0 and to E R,, there exists a positive number T = T(t0,q)such that
where b(t) is the largest eigenvalue of -+BT(t)B(t). Show that the trivial solution of (4.8.1) is asymptotically stable in probability. 4.10.
STABILITY IN THE PTH MEAN
In this section we shall present some stability criteria that assure the pth mean stability properties of the trivial solution of (4.8.1).
4.10.
Stability in the pth Mean
173
Theorem 4.10.1. Assume that
(i) g E C [ R , x R?, R N ] , g(t,O) = 0, and g ( t , u ) is concave and quasimonotone nondecreasing in u for t E R,; (ii) I/ E C [ R + x R”, R N ] , V ( t , x ) , V,(t,x), and V,,(t,x) exist and are continuous on R , x R”, and for (t,x) E R , x R”, L V x ) I g(t, v ( t , X ) ) ;
(iii) for (t,x ) E R , x R“, N
b(ll.11”) I
1
w
i= 1
7
x ) s a(t, IIXII”),
where b E “ f X , a E %?X, and p 2 1; Then
(ST)
implies (SM,).
Proof. By Theorem 4.8.1, we have E[W,x(t))I I r(t,tOt~O),
t2
to,
(4.10.1)
for the solution process x ( t ) = x(t, t o ,xo) of (4.8.1) whenever (4.10.2)
E [ W o , x o ) l I uo,
where r(t, t o ,uo) is the maximal solution of (4.8.5). Let E > 0 and to E R , be given, and suppose that u = 0 of (4.8.5) is equistable. Then given b ( ~>~0 ) and to E R,, there exists a 6, = 6,(t0,E) such that uio I 6, implies N
1 ui(t, t o , u o ) < b(EP),
t 2 to,
(4.10.3)
i= 1
where u(t, t o ,uo) is any solution of (4.8.5). Let us choose uo so that N
E I V ( t o , x o ) ]I uo
and
1ui0 = a ( t , , E [ ~ ~ x o ~ I P ](4.10.4) ).
i= 1
Since a E %‘X,we can find a 6 = 6(t0,E) > 0 that is continuous in to for each > 0 such that ( ( x ~I ( 6( ~ implies a ( t o , ~ [ l ( x o l l P to such that Ilx(t1, t0,XO)IJp= E .
(4.10.5)
174
4.
It&-Doob Calculus Approach
From (iii), we have
Relations (4.10.1), (4.10.3), (4.
c E[K(t N
b(EP)
I
i= 1
thus proving the theorem. Based on Theorem 4.10.1, it is easy to state and prove other types of pth mean stability properties. In order to avoid monotony, we do not venture proving these results. In the following we give some results that show the role of the method of variation of parameters in the stability analysis of ItG-type systems. First we consider a stability result concerning (4.8.1) as a perturbation of (4.4.5).
Theorem 4.10.2. Assume that hypotheses (i), (iii), and (iv) of Theorem 4.9.5aresatisfied. Further assume that there exists a positive number a = a(to) and a function 1 E C [ R + , R + ]such that - a(to)= lim s
r+m
u p [ L
( t - to)
(ib[b(s)+ A(s)]d s ) ]
(4.10.7)
and
where b(t) is the largest eigenvalue of -$BT(t)B(t). Then the trivial solution of (4.8.1) is pth mean asymptotically stable. Proof. From (4.9.16),we get
This, together with hypothesis (iv), yields
4.11.
Stability with Probability One
175
so long as Ily(t,to,xo)ll < 6 w.p. 1, where 6 = E / K . First, we take the pth exponent and then the expected value of the inequality, and using the fact that xo is independent of the increment z(t) - z(s) of the Wiener process z ( t ) for t 2 s, we get
so long as Ily(t, to,xo)ll < 6 w.p. 1. This, together with (4.10.8),yields
so long as ~ ~ y ( t , t o , x 0, t o E J , there exists a positive function 6, = bl(tO,&)that is continuous in t o for each E such that uio s 6, implies N
1 ui(t,t o ,
i= 1
ug)
< b ( ~ ) , t 2 t o > W.P. 1,
(4.11.3)
where u(t, t o ,uo) is any solution process of (4.8.19). We choose uo so that N
and
V ( t o , x o )I uo
uio = a(to,llxoll) w.p. 1.
(4.11.4)
i= 1
Since a(t, .) E X , we can find a 6 = 6(t0,&)that is continuous in 6 and satisfies the inequalities
IlxolI2 6
and
a(to,JJxoJJ) I 6,
W.P. 1
t o for
each
(4.11.5)
simultaneously. We claim that (SS,) holds with this 6. Suppose that this is not true. Then there would exist a solution process x ( t ) = x(t, to,xo) with llxoll I 6 w.p. 1 and a t , > t o such that W.P.1, (4.11.6) E ) c S(p) because of the choice of E. which implies x(t) E {x E R":((XI( I From (4.11.11, (4.11.3), (4.11.4), (4.11.6), and hypothesis (iii), we are led to the contradiction t E [ t o , tl],
IIx(t)ll I E,
IIX(to)ll = E ,
N
N
i= 1
i= 1
b ( ~=) b(llx(ti>ll)I K(ti,x(ti)) I C ri(ti,to,uo) < b(E) W.P. 1,
1
proving (SS,). The proof of the theorem is complete.
4.11.
Stability with Probability One
Example 4.11.1. equations
177
Consider the linear system of stochastic differential dx
= A(t)xdt
+ B(t)xdz,
(4.11.7)
where z ( t ) is a scalar Wiener process and A and B are n x n continuous matrix functions defined on R , into R”’. Take V ( x )= (xTPx)’/’, where P is a positive definite matrix and b(t) is the eigenvalue of the matrix $P-’[BT(t)P + PB(t)] of multiplicity n. Let a(t) be the largest eigenvalue of $P- ’ [ A T ( t ) P+ PA(t) BT(t)PB(t)].Assume that
+
t’co
t - to
[j-‘ca@)
- b2(s))ds+
for a > 0. Then it is evident that dV(x) = a(t, x ) dt
s’ b(s)dz(s) f0
+ b(t)V(x)dz(t),
where a(t,x) I [a@)- $b’(t)]V(x)
and
a(t,x) = $ ( x ~ P x ) - ~ ” [ x ~ (+AP~AP + BTPB)x] - +(xTPx)-3/’[xT(BTP + PB)x]’.
+
Clearly, the trivial solution of du = (a(t)- +b’(t))udt b(t)udz(t) is uniformly asymptotically stable with probability one. Consequently, the trivial solution of (4.11.7)is uniformly asymptotically stable with probability one. Based on the proof of Theorem 4.11.1, it is not difficult to prove other as. stability properties. We leave the formulation of such results to the reader. We further note that Lyapunov-like functions and stochastic differential inequalities of It8-type can also be used to derive the stability results in probability as well as in the pth mean. In order to avoid monotony, we do not undertake the formulation of such results. However, we give a sufficient condition that illustrates the asymptotic stability in probability of the trivial solution of (4.11.7). We assume that a(t),b(t),and z(t) satisfy the relation
and that the trivial solution of du = (a(t)- 3b2(t))udu+ b(t)udz(t) is uniformly asymptotically stable in probability. Consequently, the trivial solution of (4.11.7) is uniformly asymptotically stable in probability. Finally, we shall give a simple result that will show the usefulness of the method of variation of parameters in the stability study of (4.8.1).
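Before stating that result, here is a quick numerical check of the scalar comparison equation used above (an added illustration, not from the text). For constant coefficients a and b the equation du = (a − ½b²)u dt + b u dz(t) has the closed-form solution u(t) = u₀ exp[(a − b²)t + b z(t)], so (1/t) log u(t) → a − b² w.p. 1; the hypothetical constants below satisfy a − b² < 0, and the printed sample exponents should cluster near that value.

import numpy as np

a, b = -0.5, 0.8                      # hypothetical constants, a - b^2 = -1.14 < 0
rng = np.random.default_rng(3)
T, steps, paths = 50.0, 5000, 5
dt = T / steps
for p in range(paths):
    z = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=steps))      # one Wiener path on [0, T]
    u_T = 1.0 * np.exp((a - b**2) * T + b * z[-1])                # closed-form solution at time T
    print("path", p, " (1/T) log u(T) =", np.log(u_T) / T)        # should be close to a - b^2
print("theoretical exponent a - b^2 =", a - b**2)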
178
Theorem 4.11.2.
4.
It6-Doob Calculus Approach
Assume that
(i) o(t,x) in (4.8.1) is an n x 1 matrix function such that a(t,x) = B(t)x and z is a scalar normalized Wiener process, where B E C [ R + R"'] , is an n x n matrix function; (ii) f ( t ,0) = 0 for t E R + ; (iii) I l f ( ~ , x)JJ= o(1lxll) as x + 0 uniformly in t E R , ;
for some positive real number a
= a(to).
Then the trivial solution of (4.8.1) is asymptotically stable with probability one. Proof. By following the proof of Theorem 4.10.2, we arrive at (4.10.9). From hypothesis (iv) and following the argument (used in the proof of Theorem 2.12.3), the proof of the theorem can be completed very easily. We therefore leave the details to the reader.
Problem 4.11.1. Assume that hypotheses (i)-(iii) of Theorem 4.11.2 hold. Further assume that β(t, t₀) in Theorem 4.9.5 satisfies

for some positive number α = α(t₀). Then the trivial solution of (4.8.1) is asymptotically stable with probability one.

Notes

The Itô calculus presented in Section 4.1 follows the well-known texts on the theory of random processes, namely, Doob [14], Gikhman and Skorokhod [19], and Wong [87]. See also Itô [28-33], Friedman [18], and McShane [64]. Theorem 4.2.1 is based on the work of Itô and Nisio [34]. See also Gikhman and Skorokhod [20]. Theorem 4.2.2 is new. Theorem 4.2.3 is adapted from Arnold [1]. See also Bharucha-Reid [5], Gikhman and Skorokhod [20], Girsanov [21], Goldstein [22], Hardiman and Tsokos [23], Morozan [68], Ruymgaart and Soong [76], and Wong [87]. Theorem 4.4.2 is new. Theorems 4.3.1 and 4.3.2 are new and are direct extensions of the corresponding deterministic results given in Lakshmikantham and Leela [59]. Theorem 4.3.3 is based on the work of Gikhman and Skorokhod [20] and the deterministic results of Lakshmikantham and Leela [59]. Theorem 4.4.1 is taken from the work of Kulkarni and Ladde [47].
Theorem 4.4.2 is new and is an extension of the deterministic result of Lord and Mitchell [62]. The results of Sections 4.5 and 4.6 are adapted from Ladde and Lakshmikantham [56]. See also Ikeda and Watanabe [26], Gikhman and Skorokhod [20], and Yamada [90]. The results of Section 4.7 are due to Ladde and Lakshmikantham [56]. Theorems 4.8.1, 4.8.2, and 4.8.3 are due to Ladde [50]. See also Ladde, Lakshmikantham, and Liu [54], Bucy [6], Cumming [13], and Kushner [48]. Theorem 4.8.4 is adapted from Ladde and Lakshmikantham [57]. Theorems 4.9.1, 4.9.2, and 4.9.3 are based on the work of Ladde, Lakshmikantham, and Liu [55]. This section illustrates the significance of the comparison result and related earlier stability results. Example 4.9.1, which is adapted from Ladde [50], demonstrates the usefulness of a vector Lyapunov function over a single Lyapunov function. For further related results, see Kleinman [42], Kozin [45], Nevel'son [70, 71], Nevel'son and Khas'minskii [73], and Wonham [88]. Theorem 4.10.1 is based on the result of Ladde, Lakshmikantham, and Liu [55]. See also Kushner [48], Ladde and Leela [SS], McLane [63], Morozan [68], Miyahara [65], Nevel'son [71], Nevel'son and Khas'minskii [72, 73], and Zakai [91]. Theorem 4.11.1 and Example 4.11.1 are adapted from Ladde and Lakshmikantham [57]. The stability results in Sections 4.9, 4.10, and 4.11 that are based on the method of variation of parameters are developed on the basis of deterministic results in Lakshmikantham and Leela [59]. For problems connected with stochastic control theory, see Astrom [2], Fleming and Rishel [17], and Wonham [89]. Furthermore, see also the introductory study of stochastic differential equations and their applications due to Srinivasan and Vasudevan [82].
A.0. INTRODUCTION

The main objective of the following appendixes is to provide a brief survey of additional concepts and results from the theory of probabilistic analysis which are used in the text or frequently used in the related literature. Section A.1 deals with moments of random functions and their applications in L²-calculus. In Section A.2 a spectral representation of covariance and correlation functions is given. Section A.3 deals with some additional properties of Gaussian processes. Brownian motion and martingales are discussed in Sections A.4 and A.5, respectively. In Section A.6 strictly stationary and metrically transitive processes are presented. Section A.7 deals with Markov processes and the Kolmogorov backward and forward equations. Furthermore, Kolmogorov equations for diffusion processes are also considered. The study of the Kolmogorov equations (backward and forward) provides the method to compute the so-called transition function. The nature of this problem has been well known and well recognized in the literature. Finally, in Section A.8 the closed graph theorem is stated.

A.1.
MOMENTS OF RANDOM FUNCTIONS
Some of the most important properties of random functions are characterized by their moments. We shall list some concepts related to the different kinds of moments of a stochastic process. Definition A.l.l. Let x E R [ [ a , b ] ,R [ R , R " ] ] ,and let f ( t , x ) be a density function of x ( t ) for t E [a, b ] .
(i) The mean orjirst moment of a random function x ( t ) is defined by
A.l.
Moments of Random Functions
181
(ii) The variance or second central moment of x ( t ) is defined by V ( t )= E [ ( x ( t )- m(t))(x(t)- m(t))T]= SRn(x- m)(x - m)Tf(t,x)dx. (A.1.2)
We note that the mean m(t) is an n-vector function and the variance V ( t ) is an n x n matrix function.
Definition A.1.2. Let x, y E R [ [ a , b ] ,R[O,R"]],and let g(t,x; s, y ) be a joint density function of the random functions x ( t ) and y(s) for t,s E [a,b]. Furthermore, let f ( t , x1; s,x2) be a joint density function of x(t) and x(s) for t, s E [a, b]. (i)
The cross-correlationfunction of x(t) and y(s)is defined by
rXyw =~[x(t)(y(s))~ S,.S,.xyTg(t,x;s,y)dxdy. ]=
~ 1 . 3 )
(ii) The autocorrelation function of x(t)is defined by
r ( t , s )= E [ ~ ( t ) ( x ( s ) = ) ~~] R n ~ R n x l x ~ f s,x,)dxdx. (t,xl;
(A.1.4)
(iii) The cross-covariance function of x ( t ) and y(s) is defined by Cxy(t, s) = EC(x(t)- mx(t))(y(s)- m,(s))7 = JRJR.(X
- mx)(y- mJTg(t,x;S,Y)dXdY.
('4.1.5)
(iv) The autocouariance function of x(t) is defined by
C(t,S) = E[(x(t)- m ( t ) ) ( X ( S ) - m($)T1 = JRn
(v)
sRn
( x - m(t))(x - m(s))Tf(t, xl; s,x2)d x , dx,.
(A.1.6)
The correlation-coeficient function is defined by p ( t , s) = C(t,S ) / W W .
(A.1.7)
Remark A.l.l. We note that rxy(t, s), r(t,s), Cxy(t, s), and C(t,s) are n x n matrix functions defined on [a, b] x [a,b] into R"'. Furthermore, r(t,s) = C(t,s) if and only if m(t) = 0 on [a,b], and rx,,(t7s) = C,,(t,s) if and only if mx(t)5 m,(t) = 0 on [a,b]. The functions rxy, I-, C x yand , C in Definition A.1.2 satisfy the following properties: H(t,s) = H(s, t ) for all t, s E [a,b], (A.1.8) H 2 ( t , s ) IH(t, t)H(s,s)
for all t , s E [a,b],
(A.1.9)
where H(t,s) stands for any one of the functions, rxy(t7 s), r(4s), Cxy(t,S), and C(t,4.
Appendix
182
The functions r(t,s) and C(t,s) satisfy the following property. The matrix functions r(t,s) and C(t,s)are nonnegative definite functions on [a, b] x [a,b ] , that is, for every n, t , , t,, . . . ,t, E [ a , b ] , and for an arbitrary deterministic function g ( t ) defined on [a,b], fl
r(tj, t k ) g ( t j ) g ( t k )
j ,k= 1
(A. 1.10)
2 O,
and an inequality similar to (A.l.10) holds for C(t,s). Example A . l . l . Consider a random function defined by x(t) = Asin(bt
+ O),
where A and b are real constants and B E R[Q, R , ] , moreover, it is uniformly distributed in the interval [0,271]. We obtain easily A ~ [ x ( t )=] ~ [ ~ s i n ( + b o)] t =271
0
sin(bt
+ s)ds = 0,
and
r(t,s) = AZE[sin(bt + 0) sin@ + O)] = A2 cos(b(t 2 -
-
s)). (A.l.ll)
In the following we shall list certain properties of a random function x E R [ [ a , b ] , L2[sZ,R"]] in terms of its autocorrelation or covariance functions. Theorem A.l.l. Let x, y E R [ [ a , b ] , L2[Q,R"]].For &,,so E [a,b], let US assume t -+ to and s -+ so for t, s E [a,b], such that x(t) % x(to) and y(so) as t + to and s -+ t o . Then y(s)
Theorem A.1.2. Let x , E R [ [ a , b ] , L2[Q,R"]] be a sequence of random functions. Then x , converges to x E R [[a, b ] ,L2[Q,R"]] on [a,b] if and only ~] to a finite function on [a, b] for all if the functions E [ ~ , ( t ) ( x , ( t ) )converge n and m as n, m 3 no, that is,
r,,,,(t, 4 -, r,,(t, 4
as n
--f
co
on [ a , b ] x [ a , b ] . Theorem A.1.3. Let x E R [ [a, b ] , L 2 [ QR"]].Then x E C [[a, b ] , L2[Q, R"]] if and only if r(t,s) is continuous at ( t ,t).
A.2.
Spectral Representations of Covariance and Correlation Functions
183
Theorem A.1.4. If an autocorrelation function r(t,s) of x E R[[a,b], R[Q R"]] on [a, bl x [a, b] is continuous at every (t,t ) E [a, b) x [a, b], then it is continuous on [ a J l x [a,bl. Theorem A.1.5.
Let y ( t ) be a random function defined by y ( t ) = Jab G(t,s)x(s)ds,
(A.1.12)
where G ( t ; ) E C[[a,b],R"'"],x E R[ [ a , b ] ,L2[!2,R"]],and the integral is a Riemann integral in the mean square sense. y ( t ) in (A.1.12) exists if and only if the ordinary double Riemann integral
cc
G(t, u ) W , s)(G(t,4 IT du ds
(A. 1.13)
exists and is finite. This theorem suggests the definition of an autocorrelation function of y ( t ) in (A.1.12). Definition A.1.3. The autocorrelation function of y ( t ) in (A.1.12) is given by
ryy(t, s) =
W, m u , U ) ( G ( ~ , u) du do.
(A. 1.14)
Let us conclude this section by noting the following. Example A.1.2.
cw,
Let y ( t ) be a random function defined by
At)=
s)x(s)ds,
(A. 1.15)
where x E R[[a,b], L2[R, R"]] and G(t,s) E C[ [ a ,b] x [a, 61, R"'"]. By the application of Theorem A.1.5, one can conclude that y ( t ) in (A.1.15) exists if and only if
CJd
~ ( U)r(u, t , s ) ( ~ ( S))T t , du ds.
(A. 1.16)
Therefore, the autocorrelation function of y ( t ) in (A.1.15) is given by
ryy(t, s) = A.2.
s' s" a
a
~ ( u)r(u, t , U ) ( G ( ~ , u ) ) ~ d do. u
(A.1.17)
SPECTRAL REPRESENTATIONS OF COVARIANCE AND CORRELATION FUNCTIONS
Here we discuss spectral representations of the autocovariance and autocorrelation functions of a stationary process.
184
Appendix
Definition A.2.1. A random function x E R[[a, b], R[R, R"]] is said to be a broad- or wide-sense stationary process if x E R[[a, b], L2[R, R"]], = r(t,s) for all t, s E [a, b]. E[x(t)] = rn = constant, and E[~(t)(x(s))~] Remark A.2.1. We remark that for a stationary process x(t) in the broad sense, the variance V ( t )of x(t) is independent oft, that is, V ( t )= r(0) on [a,b]. It is obvious that a strictly stationary process that belongs to R[[a, b], L2[R, R"]] is also wide-sense stationary, but the converse is not true, for example, the Gaussian process.
We shall present additional properties of the autocorrelation and autocovariance functions of a wide-sense stationary process. Let x, y E R[( - 00, m), R[R, R"]] be a wide-sense stationary process, and let r(t)= E[x(s)(x(s + t ) ) 7 and C(t)= E[(x(s) - m)(x(t s) - rn)T] be the autocorrelation and autocovariance functions of x(t), respectively. From (A.1.8)-(A.1.10) and the definitions of r(t)and C(t),it is obvious that
+
r(0)2 0
and
C(0) 2 0,
r(t)and C ( t )are even functions on ( - 00, m), that is, r(t)= r(-t) and C(t)= C(- t) for t E (- m, m), and r(t)and C(t) are bounded functions on (- co,m). Moreover, Ir(t)l I r(0)= q x " ( s ) ) T 1 , Ic(t)lI C(0) = E[(x(s) - rn)(x(s)- m)T],
(A.2.1) (A.2.2) (A.2.3) (A.2.4)
and they are nonnegative definite functions on R. Furthermore, if r(t)and C ( t ) are continuous at t = 0, then they are uniformly continuous on (-m, -a).
In the following we state representation theorems for autocorrelation and autocovariance functions. To avoid monotony, we state only a representation theorem for C(t). TheoremA.2.1 [19]. For an n x n matrix function to be the autocovariance function of x E R[R, R[R, R"]], which is stationary in the broad sense and satisfies the condition lim E[IIx(t
h+O
+ h) - ~(t)11~] = 0,
(A.2.5)
it is necessary and sufficient that it have a representation of the form
m,
m: J
C ( t )=
exp[iut] dF(u),
(A.2.6)
where i = t E R, and F(u) = ( F i j ( u ) )is an n x n matrix function satisfying the following conditions: (a) for any ul,u E R and u1 < u, the matrix
A.2.
Spectral Representations of Covariance and Correlation Functions
185
AF(u) = F(u) - F(u,) is nonnegative definite, and (b) tr(F( + 00) - F( - 00)) is finite. Remark A.2.2. We remark that the positive definiteness of the matrix AF(u) implies that IAFij(u)12< AFii(u)AFjj(u),so that
where AF(uk)= F(uJ - F(uk- for k = 1,2,. . . , m and uo < u1 < . . . < u,. Furthermore, condition (a) implies that the diagonal elements of the matrix F(u) are nondecreasing functions of u. The condition (b) is equivalent to the requirement that each diagonal element Fjj(u) of the matrix F(u) be of bounded variation on R. This, together with (A.2.7), implies that the offdiagonal elements Fij(u),i # j , of the matrix F(u) are also functions ofbounded variation.
RemarkA.2.3. Relation (A.2.5) is equivalent to the continuity of the function C ( t )at t = 0. Remark A.2.4. If the function F(u) in (A.2.6) is absolutely continuous on R, then we have (dF/du)(u)= S(u) a.e. on R. Hence relation (A.2.6) reduces to ~ ( t=) exp~iutld ~ ( u ) . (A.2.8)
sp”,
We further note that the C(t)in (A.2.8) is the inverse Fourier transform of S(u).
Definition A.2.2. The spectral function of a wide-sense stationary random function x E R[R, R[Q R”]] is defined by the representation
Jrm
~ ( t=)
exp~iut]d ~ ( u ) .
Remark A.2.5. If we assume that the autocovariance function is absoIlC(t)lldt < 00, then from (A.2.8) we lutely integrable on R ; that is, if I?m have 1 ~ ( u=) 2.n exp[ - iutlc(t) dt. (A.2.9)
m: J
Definition A.2.3. The spectral density function of a wide-sense stationary process x E R[R, R[Q, R”]] is defined by (A.2.9). Remark A.2.6. We remark that S(u) is a real and nonnegative function. Furthermore, C ( t ) is an even function, and therefore relations (A.2.8) and (A.2.9) can be written as
c(t)= 2 JTm cos(ut)~(u)du
(A.2.10)
Appendix
186
and S(u) =
;f-"n
cos(ut)C(t)dt,
(A.2.11)
respectively. We further note that for u = 0, Eq. (A.2.11) reduces to 1 S(0) = -
s-" C(t)dt.
7 r "
A.3.
(A.2.12)
SOME PROPERTIES OF GAUSSIAN PROCESSES
In view of the central limit theorem, a Gaussian process can be expected to occur whenever a sum of a random sample xl, x 2 , .. . , x, (where xl, x2, . . . , x, denote mutually independent random variables in R[R, R"] which have the same, possibly unknown, distribution) is drawn from a population with finite mean p and finite standard deviation o,Therefore, we can expect that Gaussian processes are models of many physical and biological phenomena. The Wiener process or the process of Brownian motion, for example, is a Gaussian process. Gaussian processes have a number of important properties that have several mathematical advantages. Theorem A.3.1. A Gaussian process is completely specified by its mean and covariance functions. Theorem A.3.2. Linear transformations of Gaussian processes map Gaussian distributions into Gaussian distributions. (If x(t) is a Gaussian process, then A x ( t ) is also a Gaussian process, where A is a linear mapping and x E R[Z, R[R,R"]].) Theorem A.3.3. Let x,(t) E R[Z, R [ Q R"]] be a sequence of n-dimensional Gaussian processes having Gaussian distributions with parameters m,(t) and The sequence of distributions of the processes x,(t) converges in distribution to some limiting distribution if and only if rnfl(t)+ m(t)and c(L)+ V(t)for each t E I . Then the limiting distribution is also a Gaussian distribution with parameters m(t) and V(t).
c(t).
Problem A.3.1. Let G be an n x m deterministic matrix function that belongs to M,[a,b], and let z ( t ) be a normalized Wiener process. For any sub-o-algebra Ft 3 .Tt, show that the It6-type indefinite integral Y ( t ) = Jat G ( 4 d z ( 4
A.3.
Some Properties of Gaussian Processes
187
is a normally distributed process. Moreover, the mean is 0 and variance is
JZ llG(s)1I2ds.
Theorem A.3.4. Let x let y(t),defined by
E
R [ a ,b], R[R, R"]] be a Gaussian process, and
Y(t) =
c
G(t,M
s ) ds,
(A.3.1)
belong to R[[a,b], L2[R, R"]],where G(t, .) E C [ [ a ,b], Rm"]for t E [a,b]. Then y(t), t E [a,b], is a Gaussian process. Theorem A.3.5. Let x E R[[a,b], R[R, R"]]be a Gaussian process, and let x'(t) be the L2-derivative of x(t), t E [a, b]. Then x'(t), t E [a, b], is a Gaussian process. RemarkA.3.1. From the preceding properties, it is obvious that the Gaussian properties are preserved under L2-integration and differentiation. Example A.3.1. Let x E R[[a,b], R[R,R"]]be a Gaussian process. Then Theorem A.3.4 shows that the process defined by (A.3.1)is a Gaussian process. Now by Example A.1.2, it is obvious that the autocorrelation function r(t,s) of y ( t ) in (A.3.1) is given by
r(t,s) = Ji J:
~ ( Ut ),q u ,
U ) ( G ( ~ , U)'
du dU,
(A.3.2)
provided that
exists. If we assume that E[x(t)]= 0 on [a,b],then by Remark A . l . l , the autocorrelation function r(t,s) in (A.3.2)of y ( t ) in (A.3.1)is an autocovariance function of y(t). Therefore, the Gaussian process in (A.3.1)has zero mean and variance V(t),where V ( t )=
11
G(t,u)C(u,s)(G(t,s ) ) ~duds.
(A.3.3)
If we further assume that the Gaussian process is stationary in the broad or wide sense, then r(t,s)in (A.3.2) reduces to r(t - s), and thus the variance V ( t )in (A.3.3) of y ( t ) in (A.3.1) becomes V ( t )=
i j
G(t,u)C(u - s)(G(t,s ) ) duds. ~
(A.3.4)
Example A.3.2. Let x E R[[a,b], R[R, R]] be a Gaussian process with E(x(t)]= 0 on [a, b] with its autocovariance function C(t,s) = C(t - s). Let y ( t ) be defined by
y(t) = Js(s)x(.) ds,
(A.3.5)
Appendix
188
where g E C [ [ u , b], R]. From Example A.3.1, we infer that y(t) in (A.3.5) is a Gaussian process with E[y(t)] = 0 on [a,b] and variance V(t),where V ( t )=
1J;
(A.3.6)
g(u)C(u - s)g(s) du ds.
It is obvious that for p 2 1, ECexP[PY(t)ll = J=== 1 JmmexP[PY - m]dY Y2 2nV(t) = exp[+p21/(t)] = exp[$p2
11
g(u)C(u - s)g(s) duds
1
.
(A.3.7)
The following result gives sufficient conditions for a process to be Gaussian. Theorem A.3.6. Let x E C [ [ a , b], R[R, R"]] be a process with independent increments. Then x(t) is a Gaussian process. A.4.
BROWNIAN MOTION
A Wiener process or a process of Brownian motion x E R[I, R[Q, R"]] is defined as a Gaussian process with independent increments. We note that for a process x(t) with independent increments, the matrix function D(s, t ) defined by D(s, t ) = E[(x(t)
- x(s) - m(t)
+ m(s))(x(t) - x(s) - m(t) + m ( ~ ) ) ~ ]
satisfies the relation D(s, t ) = D(s, u)
+ D(u, t )
for s < u < t,
(A.4.1)
where E[x(t)] = m(t). If x(t) is a process with stationary increments, then D(s, t ) = D(t - s), and (A.4.1) can be written as D(s,
+
~ 2= )
D(s1)
+D(s~),
~ 1 s2 ,
> 0.
(A.4.2)
If D(t) is continuous, then it is obvious that the solutions of equation (A.4.2) are of the form D(s) = Ds, (A.4.3) where D is a nonnegative definite matrix. Formula (A.4.3) completely describes the structural function of a process of Brownian motion with stationary increments. Thus the process x E R [ I , R[R, R"]] is defined as a homogeneous Gaussian process with independent increments such that E[x(t)] = 0 and aTD(t)a= a2tlla112for a E R".
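As a quick numerical illustration of the structural relation D(s) = Ds (written here for the scalar case D = σ², so that aᵀD(t)a = σ²t||a||²), the sketch below, which is an added illustration with hypothetical parameters and not part of the text, builds Brownian paths from independent stationary Gaussian increments and checks that the sample mean of x(t) stays near 0 while the sample variance grows linearly in t.

import numpy as np

rng = np.random.default_rng(4)
sigma, T, steps, paths = 1.5, 2.0, 200, 20000
dt = T / steps
# x(t) built from independent stationary Gaussian increments: a scalar Brownian motion.
increments = rng.normal(0.0, sigma * np.sqrt(dt), size=(paths, steps))
x = np.cumsum(increments, axis=1)
for k in (49, 99, 199):
    t = (k + 1) * dt
    print("t =", t, " sample mean ~", x[:, k].mean(),
          " sample variance ~", x[:, k].var(), " sigma^2 * t =", sigma**2 * t)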
A.5.
189
Martingales
Let us now discuss a physical application of a Brownian motion process. Consider a sufficientlysmall particle (a molecule of gas or liquid) suspended in a liquid, and consider its motion under the influence of collisions with the molecules of the liquid, which are in chaotic thermal motion. This phenomenon in physics is known as “Brownian motion.” For the probabilistic analysis of this phenomenon, it is natural to assume that the velocities of the molecules with which the particle collides are random. If the liquid is homogeneous, then one can assume that the distribution of the velocities of the molecules are independent of the position of the molecules. It is obvious that the distribution may depend on the temperature, which is uniform everywhere. Let x ( t ) = ( x l ( t ) x, z ( t ) ,x 3 ( t ) )be a position of a particle at a time t. If we further assume that the velocities of the different molecules are independent of each other and if we neglect the mass of the particle, then the displacement x ( t h) - x ( t ) of the particle from a position x ( t ) to x(t + h) for any t 2 0 and h > 0 will be independent of the position of the particle and its previous motion. This means that the position of the particle x ( t ) is a process with independent increments. If the physical state of the liquid is independent of time, then it is easy to see that the process x ( t ) is continuous and homogeneous relative to t 2 0. By Theorem A.3.6, it is obvious that the process is a Gaussian process. Now we shall assume that x(0) = 0. Suppose that E[x(t)] = trn and aTD(t)a = t(aTAa),where rn E R3 and A is a 3 x 3 symmetric matrix. In the case of a homogeneous liquid in which there are no currents, the process must be isotropic (since the distribution of the projections of the velocities of the molecules of the liquid in an arbitrary direction is independent of that direction), that is, (rn,a) and (Aa,a) are independent of a for llall = 1. This is possible when (rn,a) = 0 and (Aa,a) = ozllallz.Thus from the most general considerations of the physical phenomenon of Brownian motion, we have arrived at the definition of a Brownian motion process.
+
A.5.
MARTINGALES
Let x E [ R , , R [ R , R “ ] ] such that E[11x(t)ll] < co, and let {.Ff:t E R , } be an increasing family of sub-o-algebras of 9 such that x ( t ) is Ff-measurable t E R , ) is said to be martingale and -integrable for all t E R , . Then { x ( t ) ,Ff: if t > s implies E[x(t)IFs]
= x(s)
w.p. 1
for all t E R , .
The process is said to be a supermartingale (submartingale) if the above equality is replaced by E[x(t))?J
I X b )
(x(4 I ECx(t)JFl).
190
Appendix
For any Brownian motion {x(t):t E [0, m)}, ifwe take Ftto be the smallest o-algebra with respect to which {x(s);s I t } are all measurable, then {x(t), F t : t E [0, a)}is a martingale. In fact, E[x(t)IFsl
[ W - 4 s ) ) + x(s)IFs] = -"(t) - x(s))IFd + E [ x ( s ) I ~ J = x(s) w.p. 1. =E
Let us list some important properties of martingales. (i) If {x(t), Ft:tE R,} is a submartingale and if (€I is a real function of the real variable r, which is monotone nondecreasing and convex, with E{@(~~x(to)l~} < co for some to E J , then { @ ( ~ ~ x ( Ft, t)~~ t E) ,R,, t I t o ) is a submartingale. (ii) If {x(t),Ft:tE R,} is a martingale, then {IIx(t)ll,F t : t E R,, t I to} is a submartingale. (iii) Let {x(t):t E R,} beaseparablemartingalesuch thatE[llx(t)l12] < m for t E R,. Then
Remark A.5.1. If {x(t):t E R,} is a separable process with independent increments such that P(x(t) - x(s) > 0) = P(x(t) - x(s) < 0), s I t, then we have I2P(IIX(b)ll 2 E). Problem A.5.1. Let x E R[[a,b], R [ Q , R"]] be a Markov process with a specified transition probability distribution function. That is, we suppose that there is a given function P(s, x, t, y ) that defines a Baire function of x for fixed s, t, y and defines a distribution function in y for fixed x, s, t. Then show that the stochastic process defined by z(t) = P(t,x(t), y,b) = P{x(b,oj) I ylx(s) for a I s I t }
is a martingale. The following result shows that the It6 indefinite integral in Section 4.1 is a martingale.
A.6.
Metrically Transitive Processes
191
Proposition A.5.1. Let z ( t ) be a normalized Wiener process and let G be an n x m random matrix function belonging to M 2 [ a ,b]. Define a process x(t) = j; G(s) dz(s). Then the process x(t) is a martingale for all t E [a, b]. A.6.
METRICALLY TRANSITIVE PROCESSES
In this section we briefly summarize translation groups of a stationary process and metrically transitive processes. Definition A.6.1. A family {T,:tE R } of transformations defined on 0 into itself is called a translation group of measure-preserving one-to-one point transformations if each T , is a one-to-one measure-preserving point transformation and T,,, = T,T,. Note that if x is a random variable and if { T,:t E R } is a translation group of measure-preserving one-to-one point transformation groups, the stochastic process { T,x: t E R } is strictly stationary. Remark A.6.1. Each translation group of measure-preserving one-to-one point transformations induces a translation group of measure-preserving set transformations (see Doob [14, Chap. 101). Definition A.6.2. Let { T,:t E R } be a translation group of measurepreserving one-to-one point transformations, and let A E 8. Consider the the set { ( t , o ) : wE T,A}. If this set is (t,o)-measurable for each A E 9, translation group is said to be measurable. Definition A.6.3.
If x is a random variable, then the process defined by x(t,o) = (T,x)(o)= x ( T - , o )
is measurable in the pair (t,w) if the translation group is measurable. Definition A.6.4. A E 8 is called an invariant set under a translation group of measure-preserving point transformations if the set differs from its image under T , by A, such that P(AJ = 0. Note that the invariant sets form a Bore1 field of w-sets. Definition A.6.5. A random variable x is called invariant under a translation group of measure-preserving point transformations T , if for each t E R, x = T,x W.P.1. Definition A.6.6. A translation group of measure-preserving point transformations is called metrically transitive if the only invariant sets are those that have probability 0 or 1, i.e., if the only invariant random variables are those that are indentically constant w.p. 1.
192
Appendix
Remark A.6.2. Definitions A.6.1 -A.6.6 can be formulated for t E [O,m], in which case the word "group" is replaced by "semigroup."
From Definitions A.6.1-A.6.6 and Remarks A.6.1 and A.6.2, one can formulate the corresponding definitions for a translation group or semigroup of measure-preserving one-to-one set transformations. We illustrate a definition corresponding to Definition A.6.1. Definition A.6.7. A family { T,:t E R } of transformations defined on R into itself is called a translation group of measure-preserving set transformations if T,,, = T,T, holds except for the modulo sets of probability zero, that is, if for every A E F and every choice of TsTtA,and this choice is one of the images of A under T,, t . Let x E R[R, R[Q, R"]]be strictly stationary, that is, P { w : x ( t , , o )E B,,x ( t , , o ) E B,, . . . ,X ( t , , O ) E B,) = P { o : x ( ~ , h, 0 )E B,,~ ( t , h, 0)E B,, . . . ,~ ( t , h, 0 )E B,}
+
+
+
for all t,, t , , . . . , t,, h E R, and B,,B , , . . . , B, Bore1 sets in R". Let Fxbe the minimal a-algebra containing the cylinder sets A: where A
=
{o:x(t1,o)E B l , x ( t 2 , 0 ) E B 2 , .. . ,x(t,,w) E B,}.
(A.6.1)
Let
fi = (6:Q= $(t), 4 : R + R"). Let @ be the a-algebra generated by the sets
-
A
=
{Q:4([')E ' , , 4 ( t z )
E B2,
. . ., 4 ( t n )
(A.6.2)
E Bm},
and let
U:R
-+
fi
be defined by
Uw = Q = x ( t , ~ ) .
From (A.6.1) and (A.6.2), we note that the inverse image of U is equal to A E Fx.Hence
Fx= { u- '(A): AE
(A.6.3)
2 E @ under
n}.
For every t E R, define a transformation
T t : n+ fi,
Ft(8)= 4(t + s)
(A.6.4)
whenever 8 = 4(s)
for
SE
R.
The transformation Ft is called the shift transformation. Furthermore, it is defined on fi into itself.
A.6.
Metrically Transitive Processes
193
For every A E Fxand t E R, we define a set transformation Tt:& -+ & as
U-'(A). (A.6.5) We note that the set T,(A)is not uniquely determined by A ; however, it may be proved that T is single valued, with modulo sets of probability zero. In fact, if A , = Tt(A)and A , = Tt(A),then A , is equivalent to A , , that is, T,(A)= U-'(Tt(A)),
P ( A , AAJ
= 0,
where A
where A , AA2
=
= ( A , - A,)
u ( A 2 - A , ) . (A.6.6)
In addition, the family of transformations T , possesses the following properties : P(T,A)= P ( A )
for every A E F,,
and
Tt(Q- A ) = R - Tt(A). We note that the above equations are valid w.p. 1. This shows that the family of transformations Tt is a family of measure-preservingset transformations. Thus for each strictly stationary process, there corresponds a unique translation group or semigroup of measure-preserving set transformations. These transformations are called shift transformations, and the group or semigroup is called the shift group or semigroup. Sets and random variables invariant relative to the shift group or semigroup are called invariant sets and random variables of the process, respectively. Definition A.6.8. Let x E R [ R , R[Q, R"]] be a strictly stationary process. The process x ( t ) is called metrically transitive if the shift group is metrically transitive. In the following we present some results that establish the metric transitivity of a stochastic process. TheoremA.6.1. A process is metrically transitive if and only if the corresponding coordinate space process for which the shift transformations become point transformations is metrically transitive. Theorem A.6.2. If g: R" .+ RN is Borel-measurable and if x E M [ R , R[R, R"]] is a strictly stationary and metrically transitive process, then the process y ( t ) = g ( x ( t ) ) is a measurable, strictly stationary, and metrically transitive process.
Appendix
194
Proof. Since g is Borel-measurable and x ( t ,o)is product-measurable, it follows that g(x(t))is a product-measurable process. We have
+
+
P { w : y ( t , h, W )E B1, y(tz + h, 0)E B z , . . . ,y(t, h, W ) E B,} = P { o : x ( t , h, w ) E g - l ( B 1 ) , x(L2 h, w ) E 9 - l(BJ, . . . , X(L, h, 0)E 9 - V,)} = P ( w : x ( t , , o )E g-I(B1),
+
+
x(t,,o) E 9 -
+
'('A . . .
>
9-
X ( t r n , 0) E
= P ( w : y ( t , , w )E B1, y ( t 2 , ~E) B z ,
'(Bm)}
. . ., y(tm,W) E B m }
for all t,, t 2 , . . , , t,, h E R, and B , , B,, . . . , B, Borel sets in RN. We note that g is a Borel-measurable function and that therefore g- '(Bl), g- '(B,), . . . , g- '(B,) are Borel sets in R". The above identity proves that the function y ( t ) = g(x(t)) is strictly stationary. Let Fy be the minimal o-algebra containing the cylinder sets corresponding to y ( ~ )and , let the transformations U, and T; given in (A.6.3) and (A.6.5) correspond to the process y(t). Since any cylinder set relative to the process y(t) is a cylinder set relative to the process x(t), we have Fy5 Fx.We shall prove that for every A E Fy,we have TJA)equivalent to T;(A). Let F*= { A E Fx: T,(A) T?(A)for all t E R}. F*contains the cylinder sets relative to the process y(t). In fact, letting
-
A = ( w : y ( t , , w ) E By, . . . , y(t,,o)
E
Bi),
we have A = U ; '(J),where A" is as in (A.6.2). Therefore, T:(A) = U ; But
Tz(A)= { 5 : 4 ( t ,+ h) E B1, 4(t2 + h) E B,, . . . ,$(tm
+ h)
E
'(TzJ). B,}
and
UT'(Tz(A))= { o : y ( t , + h) E B,, y ( t , + h) E B,, . . . , y(t, = {O:x(tl
x(t,
=
+ h) E g - l(Bl),
+ h)
f
u-'(&),
9- '(BZ), . . . > x ( t m
+ h)
+ h) E 9 - '(Bm)}
where
A, = {5:4(t1+ h) E g-I(B1), 4(t2
+ h) E g-'(Bz), . . .
>
4(tm
+ h) E 9- '(Brn)].
Hence U Y 1 ( T t ( A ) )=
u-'(A,)= U-'(Tc(A2)),
E
B,)
A.7.
Markov Processes
195
where
A",
= {&:d)(tJ E 9 -
l(Bl)>M
2 )E
g-1(B2), . . . , d)(t,)
E
9- *(&)}.
On the other hand A = lT1(J2), and hence T,(A)= U-1(?',(A2)). Consequently, by (A.6.6),T,(A)is equivalent to T:(A) and therefore contains the cylinder sets corresponding to the process y(t). From the properties of T, in (A.6.5),it is obvious that F* is a o-algebra. Since 8* is a o-algebra containing the cylinder sets corresponding to y(t), it follows from the choice of FY that F*= Fy.Thus for every A E FY, we have T,(A) T,Z(A)for all t E R. Let A E Fybe an invariant set relative to y(t). Then T;(A) A for all t E R. But T,(A) T;(A)implies that T,(A) A for all t E R. Hence A is an invariznt set relative to the process x(t). Furthermore, the probability of A is either zero or one in view of the fact that x(t) is a metrically transitive process. By definition, this implies that y ( t ) is a metrically transitive process.
-
-
A.7.
-
-
MARKOV PROCESSES
Several phenomena that take place in physical and biological sciences are random processes. Many phenomena require the ability to compute the so-called transition function P(s, x; t, B) expressing the probability that a system with initial state x at time s arrives in the set B at time t. In 1931 Kolmogorov, under certain general conditions, showed that this probability can be calculated from certain differential equations. Let us consider a Markov process x E R[[a,b],R[Q,R"]] with fixed 0algebra F and state space (R",9"). Let P(s, x; t, B) denote the transition For s = t, we have function for s < t, where s, t E [a, b], x E R", and B E 9". P(s, x; x, B) = f B ( x ) ,where ZB(x) is a characteristic function of the set B. The investigation of the behavior of the trajectories of a Markov process x E R[[a,b], R[O, R" ] ] requires a strengthening of the Markov process.
Definition A.7.1. The process x ∈ R[[a, b], R[Ω, R^n]] is said to be a strong Markov process if it is measurable and if
(i) P(·, ·; t, B) is a (ℬ^1[a, b] × ℬ^n)-measurable function for fixed t ∈ [a, b] and B ∈ ℬ^n, where ℬ^1[a, b] is the σ-algebra of Borel sets in [a, b];
(ii) for an arbitrary Markov time τ of x(t), t ∈ [a, b], P(x(τ + t) ∈ B | F_τ) = P(τ, x(τ); τ + t, B) a.s. for all t ∈ [a, b], B ∈ ℬ^n.
Definition A.7.2. For a Markov process x ∈ R[[a, b], R[Ω, R^n]], assume that

lim_{h+k→0+} [P(t - h, x; t + k, B) - I_B(x)] / (h + k)   (A.7.1)
exists, and denote it by Q(t, x, B). This function Q(t, x, B) is called the transition intensity function corresponding to P(s, x; t, B). The transition intensity function Q(t, x, B) satisfies the following properties:
(i) Q is completely additive with respect to B ∈ ℬ^n;
(ii) Q is ℬ^n-measurable with respect to x ∈ R^n; and
(iii) Q(t, x, B) ≤ 0 if x ∈ B, Q(t, x, B) = 0 if B = ∅, and Q(t, x, B) ≥ 0 if x ∉ B.
Let us represent Q(t, x, B) in terms of two other functions that have an interesting probabilistic interpretation. Set

q(t, x) = -Q(t, x, {x})

and

a(t, x, B) = Q(t, x, B - {x}) / q(t, x)  if q(t, x) ≠ 0,   a(t, x, B) = 0  if q(t, x) = 0,

where x ∈ R^n, B ∈ ℬ^n, and t ∈ [a, b]. It is obvious that the properties of Q(t, x, B) imply that (i) q is nonnegative and ℬ^n-measurable with respect to x ∈ R^n and (ii) a is nonnegative, ℬ^n-measurable relative to x ∈ R^n, and completely additive with respect to B ∈ ℬ^n with a(t, x, R^n) ≤ 1. Furthermore,

Q(t, x, B) = q(t, x)[-I_B(x) + a(t, x, B)].   (A.7.2)
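Indeed, (A.7.2) can be checked directly from the definitions of q and a: if x ∈ B, then Q(t, x, B) = Q(t, x, {x}) + Q(t, x, B - {x}) = q(t, x)[-1 + a(t, x, B)], while if x ∉ B, then Q(t, x, B) = Q(t, x, B - {x}) = q(t, x) a(t, x, B); when q(t, x) = 0, both sides vanish, since 0 ≤ Q(t, x, B - {x}) ≤ Q(t, x, R^n - {x}) = q(t, x) = 0.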
The probabilistic interpretations of q and a are as follows:

P(s, x; t, {x}) = 1 - (q(s, x) + o(1))(t - s);   (A.7.3)

consequently, q(s, x)(t - s) + o(t - s) is the probability of no longer being in x at time t, given that the process was in x at time s. Finally, if q(s, x) ≠ 0, we may write

a(s, x, B) = lim_{t→s+} P(s, x; t, B - {x}) / [1 - P(s, x; t, {x})].   (A.7.4)

Thus a(s, x, B) might be thought of as the probability of a jump from x at time s into the set B - {x}, given that a jump has in fact occurred.
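For instance, for a homogeneous Poisson process with parameter λ > 0 and states {0, 1, 2, . . .}, one has q(t, x) = λ and a(t, x, B) = I_B(x + 1): the waiting time in each state is exponential with mean 1/λ, and every jump moves the process from x to x + 1.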
Theorem A.7.1 (Kolmogorov's backward equation). Assume that

lim_{h+k→0+} [P(t - h, x; t + k, B) - I_B(x)] / (h + k)
exists for t ∈ [a, b], x ∈ R^n, and B ∈ ℬ^n. Then the partial derivative (∂P/∂s)(s, x; t, B) exists for almost all s ∈ [a, b] (the exceptional set of values of s may depend on x ∈ R^n, t ∈ [a, b], and B ∈ ℬ^n) and satisfies the equation

∂P/∂s (s, x; t, B) = q(s, x)[P(s, x; t, B) - ∫_{R^n} P(s, y; t, B) a(s, x, dy)],   (A.7.5)

and moreover, by (A.7.2),

∂P/∂s (s, x; t, B) = -∫_{R^n} P(s, y; t, B) Q(s, x, dy).   (A.7.6)
Equations (A.7.5) and (A.7.6) are called the Kolmogorov backward equations because they involve differentiation with respect to the earlier time s. The following result gives conditions under which the Kolmogorov forward equation holds, that is, the equation that arises by differentiation with respect to the later time t.

Theorem A.7.2 (Kolmogorov's forward equation). Assume that the hypotheses of Theorem A.7.1 hold. Further assume that the function q(t, x) = -Q(t, x, {x}) is bounded in x ∈ R^n for each t ∈ [a, b]. Then the partial derivative (∂P/∂t)(s, x; t, B) exists for almost all t ∈ [a, b] (the exceptional set of values of t may depend on s ∈ [a, b], x ∈ R^n, and B ∈ ℬ^n) and satisfies the equation

∂P/∂t (s, x; t, B) = ∫_{R^n} q(t, y)[a(t, y, B) - I_B(y)] P(s, x; t, dy).   (A.7.7)
Remark A.7.1. To obtain Eqs. (A.7.5) and (A.7.7), stronger restrictions must be imposed on P(s, x; t, B) and the limit (A.7.1) for each s, t ∈ [a, b]. Thus to deduce the backward equation it is enough to assume that P(·, x; t, B) is right-continuous uniformly with respect to x ∈ R^n and that the limit in (A.7.1) is uniform with respect to B ∈ ℬ^n. Similarly, to derive the forward equation, it suffices to assume that P(s, x; ·, B) is left-continuous on [a, b] uniformly with respect to B ∈ ℬ^n and that the limit in (A.7.1) is uniform with respect to x ∈ R^n.

Example A.7.1. Let x ∈ R[[a, b], R[Ω, R]] be a Markov process that takes a finite set of values, say {1, 2, . . . , m}. The transition function P(s, x; t, A) = P(s, i; t, {j}) will be determined by the transition matrix function

P(s, t) = (p_ij(s, t)),   (A.7.8)

where p_ij(s, t) = P(s, i; t, {j}). Clearly, we have

p_ij(s, t) ≥ 0,   Σ_{j=1}^m p_ij(s, t) = 1.   (A.7.9)
The Chapman-Kolmogorov equation reduces to

p_ij(s, t) = Σ_{k=1}^m p_ik(s, u) p_kj(u, t)

for a ≤ s ≤ u ≤ t ≤ b, with p_ij(s, s) = δ_ij, s ∈ [a, b], where δ_ij = 0 if i ≠ j and δ_ij = 1 if i = j. Suppose that p_ij(s, t) ∈ C[[a, b] × [a, b], R] for all i, j = 1, 2, . . . , m. Then it is clear that

lim_{h+k→0+} [P(t - h, i; t + k, {j}) - I_{{j}}(i)] / (h + k)   (A.7.10)

exists and is equal to q_ij(t). Moreover, (∂P/∂s)_ij and (∂P/∂t)_ij exist for all i, j = 1, 2, . . . , m. The matrix function Q(t) = (q_ij(t)) is said to be the transition intensity matrix function corresponding to P(s, t). It is obvious that

q_ij(t) ≤ 0  if i = j,   q_ij(t) ≥ 0  if i ≠ j,

and

Σ_{j=1}^m q_ij(t) = 0   for i ≥ 1 and t ∈ [a, b].
Thus, assuming the continuity of the p_ij(s, t) and the existence of Q(t), we can write the backward and the forward equations (A.7.5) and (A.7.7), respectively, as follows:

∂P/∂s (s, t) = -Q(s)P(s, t),   (A.7.11)

∂P/∂t (s, t) = P(s, t)Q(t),   P(s, s) = I.   (A.7.12)
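As a quick numerical illustration (an editorial sketch, not taken from the text), the forward system (A.7.12) can be integrated directly and compared with the matrix exponential that solves it when the intensity matrix is constant. The two-state matrix Q below is a made-up example, and the function name forward is arbitrary.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical constant intensity matrix Q = (q_ij) for a two-state chain:
# nonnegative off-diagonal entries and rows summing to zero, as required above.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])

def forward(Q, t, steps=20000):
    """Euler integration of the forward equation dP/dt = P Q with P(s, s) = I."""
    P = np.eye(Q.shape[0])
    h = t / steps
    for _ in range(steps):
        P = P + h * (P @ Q)
    return P

t = 1.5
P_num = forward(Q, t)
P_exact = expm(t * Q)              # for constant Q, P(s, s + t) = exp(tQ)
print(np.allclose(P_num, P_exact, atol=1e-3))   # True
print(P_num.sum(axis=1))                        # each row remains a probability vector
```

Because the rows of Q sum to zero, the Euler iteration preserves the row sums of P exactly, so each row of the computed P(s, t) stays a probability distribution over the two states.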
Remark A.7.2. If a Markov jump process with discrete state space is homogeneous, then the limit in (A.7.10) exists without any additional assumptions.

Remark A.7.3. Based on our discussion in Example A.7.1 and the representation of Q(t, x, B) as a function of q(t, x) and a(t, x, B), one can derive the backward and forward equations for a Markov jump process with a denumerable number of values.

The objective of determining the transition function for a Markov process reduces to answering the following problems: (a) the existence of solutions of (A.7.5) and (A.7.7), (b) the uniqueness of solutions of (A.7.5) and (A.7.7), and (c) finding a solution or an estimate for solutions of
Eqs. (A.7.5) and (A.7.7). These problems (a)-(c) are equivalent to the corresponding problems for deterministic ordinary integrodifferential equations. The determination of the transition function of an a.s. continuous Markov process (diffusion process) x ∈ R[[a, b], R[Ω, R^n]] is equivalent to the problem of solving a certain kind of partial differential equation of parabolic type. We give simple examples of diffusion processes:
(1) Uniform motion with velocity u is a diffusion process with f = u and b = 0.
(2) The Wiener process w(t) is a diffusion process with drift vector f = 0 and diffusion matrix b = I.
To each diffusion process with coefficients f and b there is assigned the second-order differential operator

Lg(s, x) = Σ_{i=1}^n f_i(s, x) ∂g(s, x)/∂x_i + (1/2) tr[b(s, x) g_xx(s, x)],   (A.7.13)

where tr stands for the trace of a matrix and g_xx denotes the matrix of second partial derivatives of g with respect to x. This operator can be calculated as follows:

dg(s, x) = lim_{t→0+} (1/t) ∫_{R^n} [g(s + t, y) - g(s, x)] P(s, x; s + t, dy)   (A.7.14)

by means of a Taylor expansion of g(s + t, y) about (s, x), under the assumption that g is defined and bounded on I × R^n and is twice continuously differentiable with respect to x and once continuously differentiable with respect to s. Together with relations (4.1.27) and (4.1.28), the right-hand member of (A.7.14) gives the operator (∂/∂s) + L, where d = (∂/∂s) + L. For a time-independent function g and the homogeneous case, d = L. We shall next state the following well-known results.
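Before doing so, it may help to note how (A.7.13) specializes to the two diffusion processes listed above. For uniform motion with velocity u (f = u, b = 0), Lg(x) = Σ_{i=1}^n u_i ∂g(x)/∂x_i, while for the Wiener process (f = 0, b = I), Lg(x) = (1/2) Σ_{i=1}^n ∂²g(x)/∂x_i², one half of the Laplacian; these are the operators that appear in the backward equations of Theorems A.7.3 and A.7.4 below.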
Theorem A.7.3. Let x(t), t ∈ I, be an n-dimensional diffusion process with continuous coefficients f and b. The limit relations in (4.1.26)-(4.1.28) hold uniformly in s ∈ I. Let g(x) denote a continuous bounded scalar function such that

u(s, x) = ∫_{R^n} g(y) P(s, x; t, dy)

has bounded continuous first and second derivatives with respect to x. Then u(s, x) is differentiable with respect to s and satisfies Kolmogorov's backward equation

∂u/∂s + Lu = 0,
where L is as defined in (A.7.13), with boundary condition lim_{s→t-} u(s, x) = g(x).
Theorem A.7.4. Let the assumptions of Theorem A.7.3 regarding x(t) be satisfied. If P(s, x; t, ·) has a density p(s, x; t, y) that is continuous with respect to s, and if the derivatives ∂p/∂x, ∂²p/∂x∂x exist and are continuous with respect to s, then p is called a fundamental solution of the backward equation

∂p/∂s + Lp = 0,

and it satisfies the boundary condition

lim_{s→t-} p(s, x; t, y) = δ(x - y),

where δ is Dirac's delta function.
Theorem A.7.5. Let x(t) be an n-dimensional diffusion process for which the limit relations (4.1.26)-(4.1.28) hold uniformly in s and x, and let it possess a transition density p(s, x; t, y). Furthermore, assume that ∂p/∂t, (∂/∂y)(fp), and (∂²/∂y∂y)(bp) exist and are continuous functions. Then for fixed s and x such that s ≤ t, this transition density p(s, x; t, y) is a fundamental solution of the Kolmogorov forward equation, or the Fokker-Planck equation,

∂p/∂t (s, x; t, y) = -Σ_{i=1}^n ∂/∂y_i [f_i(t, y) p(s, x; t, y)] + (1/2) Σ_{i,j=1}^n ∂²/∂y_i∂y_j [b_ij(t, y) p(s, x; t, y)].   (A.7.15)
Example A.7.2. For the Wiener process, the forward equation (A.7.15) for the homogeneous transition density

p(t, x, y) = (2πt)^{-1/2} exp[-(2t)^{-1} ||y - x||^2]

becomes

∂p/∂t = (1/2) Σ_{i=1}^n ∂²p/∂y_i²,

which in this case is identical to the backward equation with y replaced by x.
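As a quick symbolic check (an editorial addition, not part of the original text), the scalar case of this statement can be verified with a computer algebra system: the snippet below confirms that the one-dimensional density p(t, x, y) = (2πt)^{-1/2} exp[-(y - x)²/(2t)] satisfies the heat equation both in y (forward) and in x (backward).

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x, y = sp.symbols('x y', real=True)
p = (2*sp.pi*t)**sp.Rational(-1, 2) * sp.exp(-(y - x)**2 / (2*t))

# Forward (Fokker-Planck) equation in y:  dp/dt = (1/2) d^2p/dy^2
print(sp.simplify(sp.diff(p, t) - sp.diff(p, y, 2) / 2))   # prints 0

# Backward equation in x (y replaced by x):  dp/dt = (1/2) d^2p/dx^2
print(sp.simplify(sp.diff(p, t) - sp.diff(p, x, 2) / 2))   # prints 0
```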
A.8. CLOSED GRAPH THEOREM
Let X be a normed linear space. A linear operator T with domain D(T) ⊆ X is said to be closed if whenever x_n → x as n → ∞ with x_n ∈ D(T), and Tx_n → y as n → ∞, then x ∈ D(T) and Tx = y.
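For example, on X = C[0, 1] with the supremum norm, the differentiation operator Tx = x' with domain D(T) = C^1[0, 1] is closed but not bounded; this does not contradict Theorem A.8.1 below, since D(T) is a proper subspace of the Banach space C[0, 1].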
Theorem A.8.1 (Closed graph theorem). A closed linear operator mapping a Banach space into a Banach space is bounded (and thus continuous).

Notes

Section A.1 is based on the material in the book of Soong [81]. Theorem A.2.1 is adapted from Gikhman and Skorokhod [19]. The results of Section A.3 are also taken from Soong [81]. For Theorem A.3.6, refer to Iosifescu and Tautu [27] and Arnold [1]. Section A.4 is based on the discussion in the book of Gikhman and Skorokhod [19]. Section A.5 is taken from Doob [14]; see also Arnold [1]. The material in Section A.6 is formulated on the basis of the discussion in Doob [14]. Theorem A.6.2 is adapted from Morozan [66]. The results in Section A.7 are contained in Iosifescu and Tautu [27] and Doob [14]. Theorem A.8.1 is based on the result in Dunford and Schwartz [15].
REFERENCES
[1] Arnold, L., "Stochastic Differential Equations: Theory and Applications." Wiley, New York, 1974.
[2] Astrom, K. J., "Introduction to Stochastic Control Theory." Academic Press, New York, 1970.
[3] Bartlett, M. S., "Introduction to Stochastic Processes," 2nd ed. Cambridge University Press, London, 1966.
[4] Bertram, J. E., and Sarachik, P. E., Stability of circuits with randomly time-varying parameters. IRE Trans. Special Suppl. PGIT-5 (1959), 260-270.
[5] Bharucha-Reid, A. T., "Random Integral Equations." Academic Press, New York, 1972.
[6] Bucy, R. S., Stability and positive supermartingales. J. Differential Equations (1965), 151-155.
[7] Bunke, H., Stabilität bei stochastischen Differentialgleichungssystemen. Z. Angew. Math. Mech. 43 (1963), 63-70.
[8] Bunke, H., Über die fast sichere Stabilität linearer stochastischer Systeme. Z. Angew. Math. Mech. 43 (1963), 533-535.
[9] Bunke, H., Über das asymptotische Verhalten von Lösungen linearer stochastischer Differentialgleichungssysteme. Z. Angew. Math. Mech. 45 (1965), 1-9.
[10] Bunke, H., On the stability of ordinary differential equations under persistent random disturbances. Z. Angew. Math. Mech. 51 (1971), 543-546.
[11] Caughey, T. K., Comments on "On the stability of random systems." J. Acoust. Soc. Amer. 32 (1960), 1356.
[12] Caughey, T. K., and Gray, A. H., Jr., On the almost sure stability of linear dynamic systems with stochastic coefficients. ASME J. Appl. Mech. 32 (1965), 365-372.
[13] Cumming, I. G., Derivation of the moments of a continuous stochastic system. Internat. J. Control 5 (1967), 85-90.
[14] Doob, J. L., "Stochastic Processes." Wiley, New York, 1953.
[15] Dunford, N., and Schwartz, J., "Linear Operators," Vol. I. Wiley (Interscience), New York, 1957.
[16] Edsinger, R. W., Random Ordinary Differential Equations. Doctoral dissertation, University of California, Berkeley, California, 1968.
[17] Fleming, W. H., and Rishel, R. W., "Deterministic and Stochastic Optimal Control." Springer-Verlag, Berlin and New York, 1975.
[18] Friedman, A., "Stochastic Differential Equations and Applications," Vols. I and II. Academic Press, New York, 1975.
[19] Gikhman, I. I., and Skorokhod, A. V., "Introduction to the Theory of Random Processes." Saunders, Philadelphia, Pennsylvania, 1969.
[20] Gikhman, I. I., and Skorokhod, A. V., "Stochastic Differential Equations." Springer-Verlag, Berlin and New York, 1972.
[21] Girsanov, I. V., An example of non-uniqueness of the stochastic equation of K. Itô. Theor. Probability Appl. 7 (1962), 325-331.
[22] Goldstein, J. A., An existence theorem for linear stochastic differential equations. J. Differential Equations 3 (1967), 78-87.
[23] Hardiman, S. T., and Tsokos, C. P., Existence theory for a stochastic differential equation. Internat. J. Systems Sci. 5 (1974), 615-621.
[24] Hartman, P., "Ordinary Differential Equations." Wiley, New York, 1964.
[25] Hille, E., and Phillips, R. S., "Functional Analysis and Semi-Groups." Amer. Math. Soc. Colloq. Publ. Vol. 31. Amer. Math. Soc., Providence, Rhode Island, 1957.
[26] Ikeda, N., and Watanabe, S., A comparison theorem for solutions of stochastic differential equations and its applications. Osaka J. Math. 14 (1977), 619-633.
[27] Iosifescu, M., and Tautu, P., "Stochastic Processes and Applications in Biology and Medicine," Vol. 1, Theory. Springer-Verlag, Berlin and New York, 1973.
[28] Itô, K., On stochastic processes. Japan. J. Math. 18 (1942), 261-524.
[29] Itô, K., Stochastic integral. Proc. Imp. Acad. Tokyo 20 (1944), 519-524.
[30] Itô, K., On stochastic integral equations. Proc. Japan Acad. 22 (1946), 32-35.
[31] Itô, K., Stochastic differential equations in a differentiable manifold. Nagoya Math. J. 1 (1950), 35-47.
[32] Itô, K., On a formula concerning stochastic differentials. Nagoya Math. J. 3 (1951), 55-65.
[33] Itô, K., On stochastic differential equations. Mem. Amer. Math. Soc. 4 (1951).
[34] Itô, K., and Nisio, M., On stationary solutions of a stochastic differential equation. J. Math. Kyoto Univ. 4 (1964), 1-75.
[35] Kats, I. Ia., On the stability of stochastic systems in the large. Prikl. Mat. Meh. 28 (1964), 366-372.
[36] Kats, I. Ia., and Krasovskii, N. N., On the stability of systems with random parameters. Prikl. Mat. Meh. 24 (1960), 809-823.
[37] Khas'minskii, R. Z., On the stability of the trajectory of Markov processes. Prikl. Mat. Meh. 26 (1962), 1025-1032.
[38] Khas'minskii, R. Z., On the dissipativity of random processes defined by differential equations. Problemy Peredachi Informatsii 1 (1965), 84-104.
[39] Khas'minskii, R. Z., On the stability of nonlinear stochastic systems. Prikl. Mat. Meh. 30 (1966), 915-921.
[40] Khas'minskii, R. Z., Stability in the first approximation for stochastic systems. Prikl. Mat. Meh. 31 (1967), 1021-1027.
[41] Khas'minskii, R. Z., "Ustoychivost' sistem differentsial'nykh uravneniy pri sluchaynykh vozmushcheniyakh (Stability of Systems of Differential Equations in the Presence of Random Disturbances)." Nauka, Moscow, 1969.
[42] Kleinman, D. L., On the stability of linear stochastic systems. IEEE Trans. Automatic Control AC-14 (1969), 429-430.
[43] Kozin, F., On almost sure stability of linear systems with random parameters. J. Math. Phys. 43 (1963), 59-67.
[44] Kozin, F., On the relations between moment properties and almost sure Lyapunov stability for linear stochastic systems. J. Math. Anal. Appl. 10 (1965), 342-352.
[45] Kozin, F., On almost sure asymptotic sample properties of diffusion processes defined by stochastic differential equations. J. Math. Kyoto Univ. 4 (1965), 515-528.
[46] Kozin, F., A survey of stability of stochastic systems. Automatica-J. IFAC 5 (1969), 95-112.
[47] Kulkarni, R. M., and Ladde, G. S., Stochastic perturbations of nonlinear systems of differential equations. J. Mathematical and Physical Sci. 10 (1976), 33-45.
[48] Kushner, H. J., "Stochastic Stability and Control." Academic Press, New York, 1967.
[49] Ladas, G., and Lakshmikantham, V., "Differential Equations in Abstract Spaces." Academic Press, New York, 1972.
[50] Ladde, G. S., Systems of differential inequalities and stochastic differential equations II. J. Mathematical Phys. 16 (1975), 894-900.
[51] Ladde, G. S., Systems of differential inequalities and stochastic differential equations III. J. Mathematical Phys. 17 (1976), 2113-2120.
[52] Ladde, G. S., Logarithmic norm and stability of linear systems with random parameters. Internat. J. Systems Sci. 8 (1977), 1057-1066.
[53] Ladde, G. S., Stability technique and thought provocative dynamical systems II. In "Applied Nonlinear Analysis" (V. Lakshmikantham, ed.), pp. 215-218. Academic Press, New York, 1979.
[54] Ladde, G. S., Lakshmikantham, V., and Liu, P. T., Differential inequalities and Itô type stochastic differential equations. Proc. Int. Conf. Nonlinear Differential and Functional Equations, Bruxelles et Louvain, Belgium, September 3-8, 1973 (P. Janssens, J. Mawhin, and N. Rouche, eds.), pp. 611-640. Hermann, Paris, 1973.
[55] Ladde, G. S., Lakshmikantham, V., and Liu, P. T., Differential inequalities and stability and boundedness of stochastic differential equations. J. Math. Anal. Appl. 48 (1974), 341-352.
[56] Ladde, G. S., and Lakshmikantham, V., Stochastic differential inequalities of Itô type. Proc. Conf. Appl. Stochastic Processes, Athens, Georgia, May 14-19, 1978. Academic Press, New York (to be published).
[57] Ladde, G. S., and Lakshmikantham, V., "Systems of Differential Inequalities and Stochastic Differential Equations V" (to appear).
[58] Ladde, G. S., and Leela, S., Instability and unboundedness of Itô type stochastic differential equations. Rev. Roumaine Math. Pures Appl. XXII (1977), 933-939.
[59] Lakshmikantham, V., and Leela, S., "Differential and Integral Inequalities, Theory and Applications," Vol. I. Academic Press, New York, 1969.
[60] Leibowitz, M. A., Statistical behavior of linear systems with randomly varying parameters. J. Mathematical Phys. 4 (1963), 852-858.
[61] Loeve, M., "Probability Theory," 3rd ed. Van Nostrand-Reinhold, Princeton, New Jersey, 1963.
[62] Lord, M. E., and Mitchell, A. R., A new approach to the method of nonlinear variation of parameters. Appl. Math. Computation 4 (1978), 95-105.
[63] McLane, P. J., Asymptotic stability of linear autonomous systems with state-dependent noise. IEEE Trans. Automatic Control AC-14 (1969), 754-755.
[64] McShane, E. J., "Stochastic Calculus and Stochastic Models." Academic Press, New York, 1974.
[65] Miyahara, Y., Ultimate boundedness of the systems governed by stochastic differential equations. Nagoya Math. J. 47 (1972), 111-144.
[66] Morozan, T., Stability of some linear stochastic systems. J. Differential Equations 3 (1967), 153-169.
[67] Morozan, T., Stability of linear systems with random parameters. J. Differential Equations 3 (1967), 170-178.
[68] Morozan, T., "Stabilitatea sistemelor cu Parametri Aleatori." Editura Academiei Republicii Socialiste România, Bucharest, 1969.
[69] Natanson, I. P., "Theory of Functions of a Real Variable," Vols. I & II. Ungar, New York, 1964.
[70] Nevel'son, M. B., Stability in the large of trajectories of diffusion-type Markov processes. Differencial'nye Uravnenija 2 (1966), 1052-1060.
[71] Nevel'son, M. B., Some remarks concerning the stability of a linear stochastic system. Prikl. Mat. Meh. 30 (1966), 1124-1127.
[72] Nevel'son, M. B., and Khas'minskii, R. Z., Stability of a linear system with random disturbances of its parameters. Prikl. Mat. Meh. 30 (1966), 404-409.
[73] Nevel'son, M. B., and Khas'minskii, R. Z., Stability of stochastic systems. Problemy Peredachi Informatsii 2 (1966), 76-91.
[74] Neveu, J., "Mathematical Foundations of the Calculus of Probability." Holden-Day, San Francisco, California, 1965.
[75] Prohorov, Yu. V., Convergence of random processes and limit theorems in probability theory. Theor. Probability Appl. 1 (2) (1956), 157-214.
[76] Ruymgaart, P. A., and Soong, T. T., A sample treatment of Langevin-type stochastic differential equations. J. Math. Anal. Appl. 34 (1971), 325-338.
[77] Samuels, J. C., On the stability of random systems and the stabilization of deterministic systems with random noise. J. Acoust. Soc. Amer. 32 (1960), 594-601.
[78] Samuels, J. C., Theory of stochastic linear systems with Gaussian parameter variations. J. Acoust. Soc. Amer. 33 (1961), 1782-1786.
[79] Shur, M. G., O lineinykh differentsial'nykh uravneniiakh so sluchaino vozmushchennymi parametrami (Linear differential equations with random parametric excitation). Izv. Akad. Nauk SSSR Ser. Mat. 29 (1964), 783-806.
[80] Skorokhod, A. V., Limit theorems for stochastic processes. Theor. Probability Appl. 1 (3) (1956), 261-290.
[81] Soong, T. T., "Random Differential Equations in Science and Engineering." Academic Press, New York, 1973.
[82] Srinivasan, S. K., and Vasudevan, R., "Introduction to Random Differential Equations and Their Applications." Amer. Elsevier, New York, 1971.
[83] Strand, J. L., Stochastic Ordinary Differential Equations. Doctoral dissertation, University of California, Berkeley, California, 1967.
[84] Strand, J. L., Random ordinary differential equations. J. Differential Equations 7 (1970), 538-553.
[85] Tsokos, C. P., and Padgett, W. J., "Random Integral Equations with Applications to Stochastic Systems." Springer-Verlag, Berlin and New York, 1971.
[86] Tsokos, C. P., and Padgett, W. J., "Random Integral Equations with Applications to Life Sciences and Engineering." Academic Press, New York, 1974.
[87] Wong, E., "Stochastic Processes in Information and Dynamical Systems." McGraw-Hill, New York, 1971.
[88] Wonham, W. M., Lyapunov criteria for weak stochastic stability. J. Differential Equations 2 (1966), 195-207.
[89] Wonham, W. M., Random differential equations in control theory. In "Probabilistic Methods in Applied Mathematics" (A. T. Bharucha-Reid, ed.), Vol. 2, pp. 132-212. Academic Press, New York, 1970.
[90] Yamada, T., On a comparison theorem for solutions of stochastic differential equations and its applications. J. Math. Kyoto Univ. 13 (1973), 497-512.
[91] Zakai, M., On the ultimate boundedness of moments associated with solutions of stochastic differential equations. SIAM J. Control 5 (1967), 588-593.
INDEX
A
Additivity, 4
Approximations,
  Caratheodory-type, 32
  Peano-type, 132
  successive, 43, 139
  stochastic, 119
Asymptotic stability, 70
  in probability, 70, 76, 78, 168, 171
  with probability one, 70, 82, 87, 90, 177
  in pth mean, 70, 89, 110, 111, 174
Autocorrelation function, 181
Autocovariance function, 181
B
Borel-Cantelli lemma, 7
Boundedness, 70
  in probability, 23
  sample, 23
Brownian motion, 14, 188
C
Caratheodory-type condition, 71
Cartesian product, 2
Central limit theorem, 8
Chapman-Kolmogorov equation, 15
Chebyshev's inequality, 5
Closed graph theorem, 201
Comparison differential equations,
  deterministic, 19, 110, 166
  Itô-type or stochastic, 166
  with random parameters, 59
Comparison theorems,
  deterministic, 19, 108
  random, 41
  scalar version, 69, 160
  stochastic, 159
Compatibility condition, 12
Continuity,
  almost-sure, 23
  L^p-, 93
  in mean square, 93
  in probability, 22
  in pth mean, 93
  sample, 23
  strong, 93
Continuous dependence with respect to,
  initial data, 45, 108, 145
  parameters, 43, 105, 142
Convergence,
  almost surely or almost certainly, 6
  D-, 9
  in distribution, 6
  in mean square, 6
  in probability, 6
  in pth mean or moment, 6
  in quadratic mean, 6
  strong law of, 8
  weak law of, 8
Correlation-coefficient function, 181
Covariance, 5
Cross-correlation function, 181
Cross-covariance function, 181
Cylinder set, 2
D
D-bounded, 9
D-Cauchy sequence, 9
De La Vallee-Poussin theorem, 29
Density, 3
  conditional, 12
  joint, 3
Derivative,
  almost-sure, 26
  L^p-, 93
  mean square, 146
  pth mean, 93
  sample, 26, 47
  strong, 93
Differential inequality, 19
  deterministic, 19
  random, 37
  stochastic, 152
Differential operator,
  d, 131, 199
  L, 161, 199
Diffusion,
  differential system, 131
  coefficients, 129
  matrix, 129
  process, 129
Dini derivatives,
  deterministic, 19
  random, 26
Dirac's delta function, 200
Distribution, 3
  conditional probability, 11
  Gaussian, 14
  marginal, 3
  normal, 14
  Poisson, 13
Distribution function, 3
  joint, 3
  marginal, 3
Dominated convergence theorem, 7
Drift vector, 129
E
Eigenvalue, 63
Ergodic process, 30
Estimates, 59, 65, 108, 160
Event, 1
  elementary, 1
  independent, 2
  null, 2
Existence of solution,
  local, 31, 103, 132
  global, 104, 138
Existence theorem,
  Caratheodory-type, 31, 97
  Peano-type, 102, 132
Expectation, 3
  conditional, 10
F
Fatou-Lebesgue lemma, 7
Fokker-Planck equation, 200
Fubini's theorem, 29
Function,
  characteristic, 5, 195
  spectral, 185
  spectral density, 185
  transition, 15
  truncated, 121
Fundamental matrix solution process, 48, 148
G
Gaussian processes, 14, 186
  with independent increments, 14
  homogeneous, 14
Generalized dominated convergence theorem, 95
Growth condition, 139

H
Hincin condition, 129
Hölder's inequality, 5
Homogeneity, 4
I
Initial-value problem, 31, 96, 131
Integral,
  Lebesgue-Stieltjes, 4
  L^p-, 93
  sample Lebesgue, 28
  sample Riemann, 28
  stochastic, 115, 116
Integral equation,
  Itô-type stochastic, 121, 136
  L^p-, 96
  sample Lebesgue, 31
Integral inequality, 20
Inverse Fourier transform, 5
Itô,
  differential, 122
  formula, 122
  integral, 116
Itô-type stochastic differential equation, 131
  comparison, 166
  existence of solution, 131
  linear, 146
  matrix, 148
  maximal and minimal solution, 157
  scalar, 177
  solution, 131
  system, 131
  uniqueness of solution, 138
J
Jensen's inequality, 5

K
Kolmogorov,
  backward equation, 196
  condition, 13
  forward equation, 197
  fundamental theorem, 13
  theorem, 24
kth central moment, 5
L
Lebesgue,
  integral, 28
  measurable space, 27
  measure, 27
  Stieltjes measure, 3
Lipschitz condition,
  global, 139
  local L^p-, 111
  local sample, 43, 98
Logarithmic norm, 61
L^p-solution, 96
  existence, 103
  uniqueness, 108
Lyapunov function, 56, 160
  deterministic scalar, 170
  deterministic vector, 108, 161, 170
  random scalar, 86
  random vector, 57, 86
Lyapunov second method, 56, 72, 80, 110, 160, 166, 173, 175
M
Markov inequality, 5
Markov processes, 14, 195
  with countable state, 198
  with finite state, 197
  homogeneous, 16
  jump, 16, 198
  time, 195
Martingales, 189
Matrix,
  covariance, 5
  positive definite, 170
  random, 3
  transpose, 4
Mean, 3, 180
Measurable function, 3
  Lebesgue, 27
  L^p-, 94
  product, 27
Measurable processes, 27
Measure,
  Lebesgue, 27
  Lebesgue-Stieltjes, 3
  probability, 1
  σ-finite, 29
Minkowski's inequality, 5
Moment generating function, 102
Monotone convergence theorem, 7
N
Nonanticipating family of σ-algebras, 114
Nonanticipating function, 114
Nonnegativity, 1, 154
Non-white-noise, 86
Normed finiteness, 1

O
Order preservation, 4
P
Path of stochastic processes, 12
P-bounded,
  in probability, 72
  with probability one, 72
Perturbations,
  deterministic, 151
  random, 51
  stochastic, 150
Poisson processes, 13
  homogeneous, 13
Probability, 1
  conditional, 10
  measure, 1
  product, 2
  stationary transition, 16
  transition, 15
Probability space, 2
  complete, 2
  product, 2
Prohorov's,
  distance, 9
  theorem, 9
P-stable,
  in probability, 72
  with probability one, 72
Q
Quasi-monotone nondecreasing function,
  deterministic, 19
  random, 37
Quasi-positivity, 154
R
Radon-Nikodym theorem, 10
Random differential equations, 31
  comparison, 59
  linear, 47
  matrix, 48
  scalar, 42
  system, 31
Random matrix, 3
  expectation, 4
  positive definite, 64
  spectrum, 62
Random variable, 3
  characteristic function, 5
  distribution, 3
  equivalence class, 4
  expectation, 4
  independent, 8
Realization, 12
S
Sample maximal and minimal solution processes, 39
  existence, 40
Sample solution process, 31
  continuation, 36
  existence, 30
  uniqueness, 42
Sample space, 1
Schwarz's inequality, 5
Second central moment, 181
Semiadditivity, 1
Separable processes, 18
σ-algebra, 1
  independent, 2
  product, 2
Skorokhod's theorem, 9
Solution processes, 31, 96, 131
  equilibrium, 166
  L^p-, 96
  maximal, 19
  minimal, 20
  sample, 31
  trivial, 69, 166
  zero, 69, 166
Spectral density, 195
Spectral representation theorem, 184
Stability, 69
  Lagrange, 71
  in probability, 69, 72, 166, 176
  with probability one, 70, 80
  in pth mean or moment, 70, 88, 110, 111, 173
Standard deviation, 5
Stationary processes, 16
  strictly, 16
  wide-sense, 184
Stochastic differential, 122
Stochastic processes, 12
  with independent increments, 13
  with orthogonal increments, 17
  with stationary increments, 17
  with uncorrelated increments, 17
Strong law of large numbers, 8, 30
Submartingales, 189
Supermartingales, 189
Symmetry conditions, 12
T
Tonelli's theorem, 29
Trajectory, 12
Transition intensity function, 196
Transition intensity matrix function, 198
Transition matrix function, 197
Translation group, measure-preserving,
  point transformation, 191
  set transformation, 192
U
Uniform asymptotic stability, 70
  in probability, 70, 78, 164
  with probability one, 70, 82, 177
  in pth mean or moment, 70, 89, 110, 111, 174
Uniform stability, 69
  in probability, 70, 76, 168
  with probability one, 70, 82, 177
  in pth mean or moment, 70, 89, 110, 111, 174
V
Variance, 5, 181
Variation of constants formula, 54, 150, 151
Variation of parameter method, 51, 78, 87, 90, 112, 149, 171, 174, 178
Vector,
  random, 3
  transpose, 4
Vectorial inequality, 19, 154

W
Weak law of large numbers, 8
Wiener processes, 14
  normalized, 14