Research Notes in Mathematics
S.E.A. Mohammed
Stochastic functional differential equations
Pitman Advanced Publishing Program  BOSTON · LONDON · MELBOURNE
99
S.E.A. Mohammed, University of Khartoum
PITMAN PUBLISHING LIMITED, 128 Long Acre, London WC2E 9AN
PITMAN PUBLISHING INC, 1020 Plain Street, Marshfield, Massachusetts 02050
Associated Companies: Pitman Publishing Pty Ltd, Melbourne; Pitman Publishing New Zealand Ltd, Wellington; Copp Clark Pitman, Toronto
© S.E.A. Mohammed 1984
First published 1984

AMS Subject Classifications: (main) 60H05, 60H10, 60H99, 60J25, 60J60; (subsidiary) 35K15, 93E15, 93E20

Library of Congress Cataloging in Publication Data
Mohammed, S. E. A.
Stochastic functional differential equations.
(Research notes in mathematics; 99)
Bibliography: p. Includes index.
1. Stochastic differential equations. 2. Functional differential equations. I. Title. II. Series.
QA274.23.M64 1984 519.2 83-24973
ISBN 0-273-08593-X

British Library Cataloguing in Publication Data
Mohammed, S. E. A.
Stochastic functional differential equations. (Research notes in mathematics; 99)
1. Stochastic differential equations
I. Title II. Series
519.2 QA274.23
ISBN 0-273-08593-X

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording and/or otherwise, without the prior written permission of the publishers. This book may not be lent, resold, hired out or otherwise disposed of by way of trade in any form of binding or cover other than that in which it is published, without the prior consent of the publishers.

Reproduced and printed by photolithography in Great Britain by Biddles Ltd, Guildford
To Salwa and Yasar with love
Contents

Preface

I. PRELIMINARY BACKGROUND
 §1. Introduction
 §2. Measure and Probability
 §3. Vector Measures and the Dunford-Schwartz Integral
 §4. Some Linear Analysis
 §5. Stochastic Processes and Random Fields
 §6. Martingales
 §7. Markov Processes
 §8. Examples: (A) Gaussian Fields (B) Brownian Motion (C) The Stochastic Integral

II. EXISTENCE OF SOLUTIONS AND DEPENDENCE ON THE INITIAL PROCESS
 §1. Basic Setting and Assumptions
 §2. Existence and Uniqueness of Solutions
 §3. Dependence on the Initial Process

III. MARKOV TRAJECTORIES
 §1. The Markov Property
 §2. Time-Homogeneity: Autonomous Stochastic FDE's
 §3. The Semigroup

IV. THE INFINITESIMAL GENERATOR
 §1. Notation
 §2. Continuity of the Semigroup
 §3. The Weak Infinitesimal Generator
 §4. Action of the Generator on Quasi-tame Functions

V. REGULARITY OF THE TRAJECTORY FIELD
 §1. Introduction
 §2. Stochastic FDE's with Ordinary Diffusion Coefficients
 §3. Delayed Diffusion: An Example of Erratic Behaviour
 §4. Regularity in Probability for Autonomous Systems

VI. EXAMPLES
 §1. Introduction
 §2. Stochastic ODE's
 §3. Stochastic Delay Equations
 §4. Linear FDE's Forced by White Noise

VII. FURTHER DEVELOPMENTS, PROBLEMS AND CONJECTURES
 §1. Introduction
 §2. A Model for Physical Brownian Motion
 §3. Stochastic FDE's with Discontinuous Initial Data
 §4. Stochastic Integro-Differential Equations
 §5. Infinite Delays

REFERENCES

INDEX
Preface
Many physical phenomena can be modelled by stochastic dynamical systems whose evolution in time is governed by random forces as well as intrinsic dependence of the state on a finite part of its past history. Such models may be identified as stochastic (retarded) functional differential equations (stochastic FDE's). Our main concern in this book is to elucidate a general theory of stochastic FDE's on Euclidean space.

In order to have an idea about what is actually going on in such differential systems, let us consider the simplest stochastic delay differential equation. For a non-negative number r, this looks like

    dx(t) = x(t-r) dw(t),                                  (SDDE)

where w is a one-dimensional Brownian motion on a probability space (Ω,F,P) and the solution x is a real-valued stochastic process. It is interesting to compare (SDDE) with the corresponding deterministic delay equation

    dy(t) = y(t-r) dt.                                     (DDE)

One can then immediately draw the following analogies between (SDDE) and (DDE):

(a) If both equations are to be integrated forward in time starting from zero, then it is necessary to specify a priori an initial process {θ(s): -r ≤ s ≤ 0} for (SDDE) and a deterministic function η:[-r,0] → R for (DDE). In the ordinary case (r = 0), a simple application of the (Itô) calculus gives the following particular solutions of (SDDE) and (DDE) in closed form:

    x(t) = e^{w(t) - t/2},   y(t) = e^t,   t ≥ 0.

For positive delays (r > 0), no simple closed-form solution of (SDDE) is known to me. On the other hand, (DDE) admits exponential solutions y(t) = e^{λt}, t ∈ R, where λ ∈ C solves the characteristic equation

    λ = e^{-λr}.                                           (i)
(b) When r > 0, both (SDDE) and (DDE) can be solved uniquely through θ and η, respectively, just by integrating forward over steps of size r; e.g.

    x(t) = θ(0) + ∫_0^t θ(u-r) dw(u),   0 ≤ t ≤ r,
    x(t) = θ(t),                        -r ≤ t ≤ 0,

and

    y(t) = η(0) + ∫_0^t η(u-r) du,   0 ≤ t ≤ r,
    y(t) = η(t),                     -r ≤ t ≤ 0.
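The first step of this scheme is easy to carry out numerically. The following sketch (an editorial illustration, not part of the original text; all names are assumptions) approximates x on [0,r] for (SDDE) by an Itô left-point discretisation of ∫_0^t θ(u-r) dw(u), with the constant initial process θ ≡ 1:

```python
import numpy as np

def sdde_first_step(theta, r=1.0, n=1000, seed=0):
    """Method of steps on [0, r] for dx(t) = x(t-r) dw(t):
    x(t) = theta(0) + int_0^t theta(u-r) dw(u), the integral being
    approximated by a left-point (Ito) Riemann sum on n subintervals."""
    rng = np.random.default_rng(seed)
    dt = r / n
    t = np.linspace(0.0, r, n + 1)
    dw = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments
    x = np.empty(n + 1)
    x[0] = theta(0.0)
    for i in range(n):
        # on [0, r] the integrand x(u-r) = theta(u-r) is known a priori
        x[i + 1] = x[i] + theta(t[i] - r) * dw[i]
    return t, x

t, x = sdde_first_step(lambda s: 1.0)
# with theta = 1, x(t) = 1 + w(t): on the first step the solution is
# simply a shifted Brownian path
```

Over [r,2r] one would repeat the step with the just-computed segment in place of θ, exactly as in (b).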
Similar formulae hold over the successive intervals [r,2r], [2r,3r], etc.

(c) In just the same way as the continuation property fails for actual solutions y:[-r,∞) → R of (DDE), it is clear that a Markov property cannot hold for solutions {x(t): t ≥ -r} of (SDDE) in R when r > 0. Heuristically speaking, a positive delay upsets Markov behaviour in a stochastic delay equation.

(d) To overcome the difficulty in (c), we let C := C([-r,0],R) denote the Banach space of all continuous real functions on [-r,0] given the supremum norm ||·||_C. For each t ≥ 0, pick the slice of the solution paths over the interval [t-r,t] and so obtain trajectories {x_t: t ≥ 0}, {y_t: t ≥ 0} of (SDDE) and (DDE) traced out in C, where x_t(s) := x(t+s), -r ≤ s ≤ 0. It now follows from our results in Chapter III (Theorems (III.1.1), (III.2.1), (III.3.1)) that the trajectories {x_t: t ≥ 0} describe a time-homogeneous continuous Feller process on C.

(e) As functions of the initial path η ∈ C, the trajectories {ηx_t: t ≥ 0}, {ηy_t: t ≥ 0} of (SDDE) and (DDE) through η define a trajectory field

    T̂_t : C → L²(Ω,C),   η ↦ ηx_t,   t ≥ 0,

and a semi-flow

    T_t^d : C → C,   η ↦ ηy_t,   t ≥ 0,                    (ii)

respectively. Both T̂_t and T_t^d are continuous linear, where L²(Ω,C) is the complete space of all F-measurable θ:Ω → C such that ∫_Ω ||θ(ω)||²_C dP(ω) < ∞, furnished with the L² semi-norm

    ||θ||_{L²} = [∫_Ω ||θ(ω)||²_C dP(ω)]^{1/2}.
(Cf. Theorem (II.3.1).) However, in Chapter V §3, we show that the trajectory field T̂_t, t ≥ 0, does not admit 'good' sample function behaviour. Thus, despite the fact that Borel-measurable versions always exist, no such version of the trajectory field has almost all sample functions locally bounded (or even linear) on C (cf. Corollary (V.4.7.1), V §3, VI §3). It is intriguing to observe here that this type of erratic behaviour is peculiar to delayed diffusions (SDDE) with r > 0. Indeed, for the ordinary case r = 0 it is well-known that the trajectory field has sufficiently smooth versions with almost all sample functions diffeomorphisms of Euclidean space onto itself (Kunita [45], Ikeda and Watanabe [35], Malliavin [51], Elworthy [19], Bismut [5]).

(f) At times t > r, the deterministic semi-flow T_t^d maps continuous paths into C¹ paths, while the corresponding trajectory field T̂_t takes continuous paths into α-Hölder continuous ones with 0 < α < 1/2.

Chapter II deals with the existence and uniqueness of solutions of stochastic FDE's of the general form

    dx(t) = g(t,x_t) dz(t),   t > 0,   x_0 = θ ∈ L²(Ω,C),  (iii)

where the coefficient process g:[0,a] × L²(Ω,C) → L²(Ω,Rⁿ) and the initial process θ ∈ L²(Ω,C) are given, with z a McShane-type noise on a filtered probability space (Ω,F,(F_t)_{t≥0},P). (Refer to Conditions (E)(i) of Chapter II.)
Chapter III essentially says that for systems of type (iii),

    dx(t) = H(t,x_t) dt + G(t,x_t) dw(t),   t > 0,   x_0 = η ∈ C,

the trajectory field {ηx_t: t ≥ 0} describes a Feller process on the state space C. Here the drift coefficient H:R⁺ × C → Rⁿ takes values in Rⁿ, and the diffusion coefficient G:R⁺ × C → L(Rᵐ,Rⁿ) has values in the space L(Rᵐ,Rⁿ) of all linear maps Rᵐ → Rⁿ. The noise w is a (standard) m-dimensional Brownian motion. If the stochastic FDE is autonomous, the trajectory field describes a time-homogeneous diffusion on C.

In Chapter IV we look at autonomous stochastic FDE's and investigate the structure of the associated one-parameter semi-group {P_t}_{t≥0} given by the time-homogeneous diffusion on C. A novel feature of such diffusions when r > 0 is that the semi-group {P_t}_{t≥0} is never strongly continuous on the Banach space C_b(C,R) = C_b of all bounded uniformly continuous real-valued functions on C endowed with the supremum norm (Theorem (IV.2.2)). Hence a weak generator A of {P_t}_{t≥0} can be defined on the latter's domain of strong continuity C_b⁰ ⊆ C_b, and a general formula for A is established in Theorem (IV.3.2). Due to the absence of non-trivial differentiable functions on C having bounded supports, we are only able to define a weakly dense class of smooth functions on C which is rich enough to generate the Borel σ-algebra of C. These are what we call quasi-tame functions (IV §4). On such functions the weak generator assumes a particularly simple and concrete form (Theorem (IV.4.3)).

Distributional and sample regularity properties for trajectory fields of autonomous stochastic FDE's are explored in Chapter V. We look at two extreme examples: the highly erratic delayed diffusions mentioned above, and the case of stochastic FDE's with ordinary diffusion coefficients, viz.

    dx(t) = H(x_t) dt + g(x(t)) dw(t),   t > 0.

If g satisfies a Frobenius condition, the trajectory field of the latter class admits sufficiently smooth and locally compactifying versions for t > r (Theorem (V.2.1), Corollaries (V.2.1.1)-(V.2.1.4)). In general, the compactifying nature of the trajectory field for t > r is shown to persist in a distributional sense for autonomous stochastic FDE's with arbitrary Lipschitz coefficients (Theorems (V.4.6), (V.4.7)).
There are many examples of stochastic FDE's. In Chapters VI and VII we highlight only a few. Among these are stochastic ODE's (r = 0, VI §2), stochastic delay equations (VI §3), linear FDE's forced by white noise (VI §4), a model for physical Brownian motion (VII §2), stochastic FDE's with discontinuous initial data (VII §3), stochastic integro-differential equations (VII §4), and stochastic FDE's with an infinite memory (r = ∞, VII §5). Chapter VII also contains some open problems and conjectures with a view to future developments.

From a historical point of view, equations with zero diffusions (RFDE's) or zero retardation (stochastic ODE's) have been the scene of intensive study during the past few decades. There is indeed a vast amount of literature on RFDE's, e.g. Hale [26], [27], [28], Krasovskii [43], El'sgol'tz [18], Mishkis [56], Jones [42], Banks [3], Bellman and Cooke [4], Halanay [25], Nussbaum [62], [63], Mallet-Paret [49], [50], Oliva [64], [65], Mohammed [57], and others. On stochastic ODE's, one could point out the outstanding works of Ito [36], [37], [38], Ito and McKean [40], McKean [52], Malliavin [51], McShane [53], Gihman and Skorohod [24], Friedman [22], Stroock and Varadhan [73], Kunita [45], Ikeda and Watanabe [35], and Elworthy [19]. However, general stochastic FDE's have so far received very little attention from stochastic analysts and probabilists. In fact a surprisingly small amount of literature is available to us at present on the theory of stochastic FDE's. The first work that we are aware of goes back to an extended article of Ito and Nisio [41] in 1964 on stationary solutions of stochastic FDE's with infinite memory (r = ∞). The existence of invariant measures for non-linear FDE's with white noise and a finite memory was considered by M. Scheutzow in [69], [70]. Apart from Section VII §5, and except when otherwise stated, all the results in Chapters II-VII are new.
Certain parts of Chapters II, III and IV were included in preprints [58], [59], [60], by the author during the period 1978-1980. Section VI §4 is joint work of S.E.A. Mohammed, M. Scheutzow and H.v. Weizsäcker.

The author wishes to express his deep gratitude to K.D. Elworthy, K.R. Parthasarathy, P. Baxendale, R.J. Elliott, H.v. Weizsäcker, M. Scheutzow and S.A. Elsanousi for many inspiring conversations and helpful suggestions. For financial support during the writing of this book I am indebted to the British Science and Engineering Research Council (SERC), the University of Khartoum and the British Council. Finally, many thanks go to Terri Moss who managed to turn often illegible scribble into beautiful typescript.

Salah Mohammed
Khartoum 1983
I Preliminary background
§1. Introduction
In this chapter we give an assortment of basic ideas and results from Probability Theory and Linear Analysis which are necessary prerequisites for reading the subsequent chapters. Due to limitations of space, almost all proofs have been omitted. However, we hope that the referencing is adequate.

§2. Measure and Probability

A measurable space (Ω,F) is a pair consisting of a non-empty set Ω and a σ-algebra F of subsets of Ω. If E is a real Banach space, an E-valued measure μ on (Ω,F) is a map μ:F → E such that (i) μ(∅) = 0, (ii) μ is σ-additive, i.e. for any disjoint countable family of sets {A_k}_{k=1}^∞ in F the series Σ_{k=1}^∞ μ(A_k) converges in E and μ(∪_{k=1}^∞ A_k) = Σ_{k=1}^∞ μ(A_k). When E = R, μ is called a signed measure, and if μ takes values in [0,∞) it is called a positive measure. A positive measure P on (Ω,F) with P(Ω) = 1 is a probability measure; the triple (Ω,F,P) is then a probability space. The set of all finite real-valued measures on Ω is denoted by M(Ω) and the subset of all probability measures by M_p(Ω).

A probability space (Ω,F,P) is complete if every subset of a set of P-measure zero belongs to F, i.e. whenever B ∈ F, P(B) = 0 and A ⊆ B, then A ∈ F. In general any probability space can be completed with respect to its underlying probability measure. Indeed, let (Ω,F,P) be an arbitrary probability space and take

    F_P = {A ∪ Δ: A ∈ F, Δ ⊆ Δ_0 ∈ F, P(Δ_0) = 0}

to be the completion of F under P. Extend P to F_P by setting P(A ∪ Δ) = P(A), Δ ⊆ Δ_0, P(Δ_0) = 0. Then (Ω,F_P,P) is the smallest P-complete probability space with F ⊆ F_P. Because of this property, we often find it technically simpler to assume from the outset that our underlying probability space is complete.
When Ω is a Hausdorff topological space and F is its Borel σ-algebra, Borel Ω, generated by all open (or closed) sets, a measure μ on Ω is regular if

    μ(B) = sup {μ(C): C ⊆ B, C closed} = inf {μ(U): B ⊆ U, U open}.

If Ω is metrizable, every μ ∈ M(Ω) is regular and hence completely determined by its values on the open (or closed) sets in Ω (Parthasarathy [66], Chapter II, Theorem 1.2, p. 27).

Let Ω be a Hausdorff space and E a real Banach space. An E-valued measure μ on (Ω, Borel Ω) is tight if (i) sup {|μ(B)|: B ∈ Borel Ω} < ∞, where |·| denotes the norm on E; and (ii) for each ε > 0, there is a compact set K_ε in Ω such that |μ(Ω \ K_ε)| < ε.

Theorem (2.1): Let Ω be a Polish space, i.e. a complete separable metrizable space. Then every finite real Borel measure on Ω is tight.

Proof: Parthasarathy ([66], Chapter II, Theorem 3.2, p. 29); Stroock and Varadhan ([73], Chapter 1, Theorem (1.1.3), pp. 9-10). □

Let Ω be a separable metric space and F = Borel Ω. Denote by C_b(Ω,R) the Banach space of all bounded uniformly continuous functions φ:Ω → R given the supremum norm

    ||φ||_{C_b} = sup {|φ(η)|: η ∈ Ω}.
The natural bilinear pairing

    C_b(Ω,R) × M(Ω) → R,   (φ,μ) ↦ ∫_{η∈Ω} φ(η) dμ(η),   φ ∈ C_b(Ω,R), μ ∈ M(Ω),

induces an embedding

    M(Ω) → C_b(Ω,R)*,

where C_b(Ω,R)* is the strong dual of C_b(Ω,R). Indeed each μ ∈ M(Ω) corresponds to the continuous linear functional φ ↦ ∫_{η∈Ω} φ(η) dμ(η), because for every φ ∈ C_b(Ω,R)

    |∫_{η∈Ω} φ(η) dμ(η)| ≤ ||φ||_{C_b} v(μ)(Ω),

where

    v(μ)(Ω) = sup {Σ_{k=1}^p |μ(A_k)|: A_k ∈ F, k = 1,...,p, disjoint, Ω = ∪_{k=1}^p A_k, p < ∞}

is the total variation of μ on Ω (Dunford and Schwartz [15], Chapter III, pp. 95-155). As a subset of C_b(Ω,R)*, give M(Ω) the induced weak* topology. Now this turns out to be the same as the weak topology or vague topology on measures, because of the following characterizations.

Theorem (2.2): Let Ω be a metric space and μ, μ_k ∈ M(Ω) for k = 1,2,3,.... Then the following statements are all equivalent:

(i) μ_k → μ as k → ∞ in the weak* topology of M(Ω);

(ii) lim_{k→∞} ∫_{η∈Ω} φ(η) dμ_k(η) = ∫_{η∈Ω} φ(η) dμ(η), for every φ ∈ C_b(Ω,R);

(iii) lim sup_{k→∞} μ_k(C) ≤ μ(C) for every closed set C in Ω;

(iv) lim inf_{k→∞} μ_k(U) ≥ μ(U) for every open set U in Ω;

(v) lim_{k→∞} μ_k(B) = μ(B) for every B ∈ Borel Ω such that μ(∂B) = 0.
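A concrete instance of statement (ii), offered as an editorial illustration (not in the original text): take Ω = [0,1], μ_k the uniform measure on the finite set {1/k, 2/k, ..., 1}, and μ Lebesgue measure on [0,1]. Then μ_k → μ weakly, and the integrals of any φ ∈ C_b([0,1],R) converge:

```python
import numpy as np

def integral_mu_k(phi, k):
    """int phi d(mu_k) for mu_k = uniform measure on {1/k, 2/k, ..., 1}."""
    return float(np.mean(phi(np.arange(1, k + 1) / k)))

phi = np.cos
# statement (ii): int phi d(mu_k) -> int phi d(mu) = int_0^1 cos(s) ds = sin(1)
vals = [integral_mu_k(phi, k) for k in (10, 100, 1000, 10000)]
```

Statement (v) also shows its boundary condition here: for B = {1/2}, μ_k(B) oscillates between 0 and 1/k while μ(B) = 0, which is consistent since μ(∂B) = 0 fails only for sets charged by μ's boundary, not in this case; the convergence μ_k(B) → 0 does hold.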
For proofs of the above theorem, see Parthasarathy ([66], Chapter II, Theorem 6.1, pp. 40-42) or Stroock and Varadhan ([73], Theorem 1.1.1, pp. 7-9).

The weak topology on M(Ω), when Ω is a separable metric space, can be alternatively described in the following two equivalent ways:

(a) Define a base of open neighbourhoods of any μ ∈ M(Ω) by

    N(μ: φ_1,...,φ_p, ε) = {ν ∈ M(Ω): |∫_Ω φ_k dν - ∫_Ω φ_k dμ| < ε, k = 1,2,...,p},

where φ_1,...,φ_p ∈ C_b(Ω,R), ε > 0 and p is a positive integer.
(b) Furnish M(Ω) with a metric ρ in the following manner. Compactify the separable metric space Ω to obtain Ω̃. Then C_b(Ω̃,R) is Banach space-isomorphic to the separable Banach space C(Ω̃,R) of all continuous real functions on Ω̃, given the supremum norm. Pick a countable dense sequence {φ_k}_{k=1}^∞ in C(Ω̃,R) and define the metric ρ on M(Ω) by

    ρ(μ,ν) = Σ_{k=1}^∞ (1/2^k) |∫_Ω φ_k dμ - ∫_Ω φ_k dν| / ||φ_k||_{C_b},   μ,ν ∈ M(Ω)

(Stroock and Varadhan [73], Theorem 1.1.2, p. 9; Parthasarathy [66], pp. 39-52). Note that M(Ω) is complete if and only if Ω is so. Similarly, M(Ω) is compact if and only if Ω is compact. More generally, compact subsets of M(Ω) are characterized by the well-known theorem of Prohorov given in Chapter V (Theorem (V.4.5)).

There is a theory of (Bochner) integration for maps X:Ω → E where E is a real Banach space and (Ω,F,μ) a real measure space (Dunford and Schwartz [15], Chapter III, §1-6). On a probability space (Ω,F,P) an (F, Borel E)-measurable map X:Ω → E is called an E-valued random variable. Such a map is P-integrable if there is a sequence X_n:Ω → E, n = 1,2,..., of simple (F, Borel E)-measurable maps so that X_n(ω) → X(ω) as n → ∞ for a.a. ω ∈ Ω and

    lim_{m,n→∞} ∫_{ω∈Ω} |X_n(ω) - X_m(ω)|_E dP(ω) = 0.

Define the expectation of an integrable random variable X:Ω → E by

    EX = ∫_Ω X(ω) dP(ω) = lim_{n→∞} ∫_Ω X_n(ω) dP(ω) ∈ E.

This definition is independent of the choice of sequence {X_n}_{n=1}^∞ converging a.s. to X (Rao [67], Chapter I, §1.4; Yosida [78], Chapter V §5, pp. 132-136).

For a separable Banach space E, X:Ω → E is a random variable if and only if one of the following conditions holds:

(i) There is a sequence X_n:Ω → E, n = 1,2,..., of simple (F, Borel E)-measurable maps such that X_n(ω) → X(ω) as n → ∞ for a.a. ω ∈ Ω;

(ii) X is weakly measurable, i.e. for each λ ∈ E*, λ∘X:Ω → R is (F, Borel R)-measurable.

(Elworthy [19], Chapter I, §1(C), pp. 2-4; Rao [67], Chapter I, §1.4.)

Denote by L⁰(Ω,E;F) the real vector space of all E-valued random variables X:Ω → E on the probability space (Ω,F,P). The space L⁰(Ω,E;F) is a complete
TVS under a complete pseudo-metric generating convergence in probability, for X¹, X² ∈ L⁰(Ω,E;F). The norm in our real Banach space E is always denoted by |·|_E or sometimes just |·|.

A sequence {X_n}_{n=1}^∞ of random variables X_n:Ω → E converges in probability to X ∈ L⁰(Ω,E;F) if for every ε > 0

    lim_{n→∞} P{ω: ω ∈ Ω, |X_n(ω) - X(ω)|_E > ε} = 0.

A random variable X:Ω → E is (Bochner) integrable if and only if the function ω ↦ |X(ω)|_E is P-integrable, in which case

    |∫_Ω X(ω) dP(ω)|_E ≤ ∫_Ω |X(ω)|_E dP(ω),   i.e.   |EX|_E ≤ E|X(·)|_E.

The space L¹(Ω,E;F) of all integrable random variables is a complete real TVS with respect to the L¹-semi-norm

    ||X||_{L¹} = ∫_Ω |X(ω)|_E dP(ω),   X ∈ L¹(Ω,E;F).
Similarly, for any integer k ≥ 1, define the complete space L^k(Ω,E;F) of all F-measurable maps X:Ω → E such that ∫_Ω |X(ω)|_E^k dP(ω) < ∞. The modes of convergence just introduced are related as follows:

(i) If X_n → X as n → ∞ a.s., then X_n → X as n → ∞ in probability.

(ii) If X_n → X as n → ∞ in L^k(Ω,E;F) for some k ≥ 1, then X_n → X as n → ∞ in probability.

(iii) If X_n → X as n → ∞ in probability, then there is a subsequence {X_{n_i}}_{i=1}^∞ of {X_n}_{n=1}^∞ such that X_{n_i} → X as i → ∞ a.s.

(iv) Dominated Convergence: Let X_n ∈ L¹(Ω,E;F), n = 1,2,..., and X ∈ L⁰(Ω,E;F) be such that X_n → X as n → ∞ in probability. Suppose there exists Y ∈ L¹(Ω,R;F) such that, for a.a. ω ∈ Ω, |X_n(ω)|_E ≤ Y(ω) for all n ≥ 1. Then X ∈ L¹(Ω,E;F) and

    ∫_Ω X(ω) dP(ω) = lim_{n→∞} ∫_Ω X_n(ω) dP(ω).

Chebyshev's inequality also holds for Banach-space-valued random variables X. It follows trivially by applying its classical version to the real-valued random variable |X(·)|_E (Chung [7], p. 48).

Theorem (2.4) (Chebyshev's Inequality): If E is a Banach space and X ∈ L^k(Ω,E;F), k ≥ 1, then for every ε > 0

    P{ω: ω ∈ Ω, |X(ω)|_E ≥ ε} ≤ (1/ε^k) ∫_Ω |X(ω)|_E^k dP(ω).
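As a quick sanity check (an editorial illustration, not part of the original text), take E = R and X standard Gaussian, so that E|X|² = 1 and P{|X| ≥ ε} = erfc(ε/√2); the k = 2 bound 1/ε² dominates the exact tail for every ε > 0:

```python
import math

def chebyshev_bound(eps, k, kth_abs_moment):
    """Right-hand side of Theorem (2.4): E|X|^k / eps^k."""
    return kth_abs_moment / eps ** k

# exact Gaussian tails P{|X| >= eps} for X ~ N(0,1)
tails = {eps: math.erfc(eps / math.sqrt(2.0)) for eps in (0.5, 1.0, 2.0, 3.0)}
# e.g. chebyshev_bound(2.0, 2, 1.0) = 0.25, against an exact tail near 0.0455
```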
Every X ∈ L⁰(Ω,E;F) has a distribution P_X = P∘X⁻¹ ∈ M_p(E), and the map L^k(Ω,E;F) → M_p(E), X ↦ P_X, is continuous, for each k ≥ 1. If E is separable, then the above map is continuous also for k = 0.
In a probability space (Ω,F,P), two events A, B ∈ F are independent (under P) if P(A ∩ B) = P(A)P(B); two sub-σ-algebras G_1, G_2 of F are independent (under P) if P(A ∩ B) = P(A)P(B) for all A ∈ G_1 and all B ∈ G_2; two random variables X, Y:Ω → E are independent (under P) if the σ-algebras σ(X), σ(Y) generated by X, Y respectively are independent under P.

Theorem (2.5) (Borel-Cantelli Lemma): Let (Ω,F,P) be a probability space and {Ω_k}_{k=1}^∞ ⊆ F.

(i) If Σ_{k=1}^∞ P(Ω_k) < ∞, then P(∩_{n=1}^∞ ∪_{k=n}^∞ Ω_k) = 0.

(ii) If the events {Ω_k}_{k=1}^∞ are independent and Σ_{k=1}^∞ P(Ω_k) = ∞, then P(∩_{n=1}^∞ ∪_{k=n}^∞ Ω_k) = 1.

§4. Some Linear Analysis

Let E and F be real Banach spaces. A bilinear map B:E × F → R is continuous if and only if there is a constant K > 0 with |B(x,y)| ≤ K|x|_E |y|_F for all x ∈ E and all y ∈ F. The space L(E,F;R) of all continuous bilinear maps on E × F is a Banach space when furnished with the norm

    ||B|| = sup {|B(x,y)|: x ∈ E, y ∈ F, |x|_E ≤ 1, |y|_F ≤ 1}.
The projective tensor product allows us to identify each continuous bilinear map E × F → R with a continuous linear functional on E ⊗_π F, viz.

Theorem (4.3): L(E,F;R) is norm-isomorphic to [E ⊗_π F]*.

Proof: Treves [75], Proposition 43.12, pp. 443-444. □
As an example consider the following situation. Let S be a compact Hausdorff space and E = F = C(S,R). Then every continuous bilinear form β:C(S,R) × C(S,R) → R corresponds to a signed bimeasure on S × S, viz. a set function ν:Borel S × Borel S → R such that for any A, B ∈ Borel S, ν(A,·) and ν(·,B) are measures on S. An integration theory for signed bimeasures has been developed by Ylinen [77]. Note that a continuous bilinear form on C(S,R) does not necessarily correspond to a measure on S × S because of the strict embedding

    C(S,R) ⊗_π C(S,R) ⊊ C(S × S, R),   η ⊗ ξ ↦ {(s,s') ↦ η(s)ξ(s')}
(Treves [75], cf. Exercise 44.2, p. 450).

The following theorem of Mercer will prove to be useful. For a proof see Riesz and Sz.-Nagy ([68], Chapter VI, §97-98, pp. 242-246) or Courant and Hilbert ([10], pp. 122-140).

Theorem (4.4) (Mercer's Theorem): Let I ⊆ R be a closed bounded interval and K:I × I → R a continuous symmetric kernel which is positive in the sense that

    ∫_I ∫_I K(s,s') η(s) η(s') ds ds' ≥ 0

for all η ∈ C(I,R). Suppose {φ_k}_{k=1}^∞ ⊆ C(I,R) and {λ_k}_{k=1}^∞ ⊆ R are solutions of the eigenvalue problem

    ∫_I K(s,s') φ_k(s') ds' = λ_k φ_k(s),   for all s ∈ I.

Then K can be expanded as the uniformly convergent series

    K(s,s') = Σ_{k=1}^∞ λ_k φ_k(s) φ_k(s')

on I × I.
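For a concrete case (an editorial illustration, not in the original text): the Brownian covariance kernel K(s,s') = min(s,s') on I = [0,1] is positive in the above sense, with eigenfunctions φ_k(s) = √2 sin((k-1/2)πs) and eigenvalues λ_k = ((k-1/2)π)⁻². The uniform convergence of its Mercer expansion can be checked numerically:

```python
import numpy as np

def mercer_partial_sum(s, t, n_terms=2000):
    """Partial sum sum_k lambda_k phi_k(s) phi_k(t) for K(s,t) = min(s,t)
    on [0,1]: phi_k(s) = sqrt(2) sin((k-1/2) pi s), lambda_k = ((k-1/2) pi)^-2."""
    om = (np.arange(1, n_terms + 1) - 0.5) * np.pi
    return float(np.sum(2.0 * np.sin(om * s) * np.sin(om * t) / om ** 2))

# sup over a grid of the deviation from min(s,t); it shrinks as n_terms grows
grid = np.linspace(0.0, 1.0, 11)
err = max(abs(mercer_partial_sum(s, t) - min(s, t)) for s in grid for t in grid)
```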
Remark: The validity of ∫_I ∫_I K(s,s') η(s) η(s') ds ds' ≥ 0 for all η ∈ C(I,R) is equivalent to λ_k ≥ 0 for all k ≥ 1.

Theorem (4.5) (Douady): For real Banach spaces E, F, every Borel-measurable linear map T:E → F is continuous.

Proof: Schwartz ([11], Part II, Chapter I, §2, pp. 155-160). □

§5. Stochastic Processes and Random Fields
Let (Ω,F,P) be a probability space, T a complete separable metric space and E a real Banach space. An E-valued stochastic process x on (Ω,F,P) parametrized by T is a map x:T × Ω → E such that x(t,·) ∈ L⁰(Ω,E;F) for all t ∈ T.

§7. Markov Processes

Let E be a real Banach space. An E-valued Markov process is a stochastic process x:[0,∞) × Ω → E together with a family of probability measures {P_{s,η}: s ≥ 0, η ∈ E} such that:

(i) For each s ≥ 0 and η ∈ E, (Ω,F^s,(F_t^s)_{t≥s},P_{s,η}) is a filtered probability space in which every P_{s,η} is a probability measure defined on the σ-algebra F^s = σ{F_t^s: t ≥ s}.

(ii) If s' ≤ s and t ≤ t', then F_t^s ⊆ F_{t'}^{s'}.

(iii) The process x:[0,∞) × Ω → E is (Borel [0,∞) ⊗ F, Borel E)-measurable and x(t,·) is (F_t^s, Borel E)-measurable whenever t ≥ s.

(iv) For fixed 0 ≤ s ≤ t < ∞ and B ∈ Borel E, the function

    E → R,   η ↦ P_{s,η}{ω: ω ∈ Ω, x(t,ω) ∈ B},

is Borel-measurable.

(v) For each s ≥ 0 and η ∈ E, P_{s,η}{ω: ω ∈ Ω, x(s,ω) = η} = 1.

(vi) If B ∈ Borel E, 0 ≤ s ≤ t ≤ u and η ∈ E, then the Markov property

    P_{s,η}(x(u,·) ∈ B | F_t^s) = P_{t,x(t,·)}(x(u,·) ∈ B)
holds a.s.-P_{s,η} on Ω.

An E-valued Markov process defines a family of transition probabilities {p(s,η,t,·): η ∈ E, 0 ≤ s ≤ t < ∞} on E given by

    p(s,η,t,B) = P_{s,η}(x(t,·) ∈ B),   B ∈ Borel E.

These have the properties:

(i) for fixed 0 ≤ s ≤ t and B ∈ Borel E, p(s,·,t,B) is Borel-measurable;

(ii) for fixed 0 ≤ s ≤ t and η ∈ E, p(s,η,t,·) is a Borel probability measure on E;

(iii) the Chapman-Kolmogorov identity

    p(s,η,u,B) = ∫_{ξ∈E} p(s,η,t,dξ) p(t,ξ,u,B)

holds for 0 ≤ s ≤ t ≤ u, η ∈ E, B ∈ Borel E (Dynkin [16], p. 85).

A Markov process is time-homogeneous if its associated family of transition probabilities satisfies p(s,η,t,·) = p(0,η,t-s,·) for all η ∈ E and 0 ≤ s ≤ t < ∞. Then a time-homogeneous Markov process on E with transition probabilities {p(0,η,t,·): η ∈ E, t ≥ 0} yields a contraction one-parameter semi-group {P_t}_{t≥0} of continuous linear operators P_t: b(E,R) → b(E,R), where b(E,R) denotes the Banach space of bounded Borel-measurable real functions on E with the supremum norm, defined by

    P_t(φ)(η) = ∫_{ξ∈E} φ(ξ) p(0,η,t,dξ),   η ∈ E,

for each φ ∈ b(E,R) (Dynkin [16], Chapter II, §1-2, pp. 47-61; Chung [8], pp. 1-12). A time-homogeneous Markov process on a Banach space E with semi-group P_t: b(E,R) → b(E,R), t ≥ 0, is called a Feller process if

(i) {P_t}_{t≥0} leaves invariant the closed linear subspace C_b(E,R) ⊆ b(E,R) of all bounded uniformly continuous functions E → R;

(ii) {P_t}_{t≥0} is weakly continuous on C_b(E,R) at t = 0, i.e. for every η ∈ E

    lim_{t→0+} P_t(φ)(η) = φ(η)   for all φ ∈ C_b.

As a consequence of our results in Chapter IV (Theorem (IV.2.2)), for infinite-dimensional E the last condition (ii) in the above definition of Feller process does not imply strong continuity at t = 0 of the semi-group {P_t}_{t≥0} on C_b(E,R). Compare this with the locally compact case in Chung ([8], pp. 48-56), where strong continuity is implied by weak continuity at t = 0. For a further study of (time-homogeneous) Markov processes, their transition semi-groups and infinitesimal generators, the reader may consult Dynkin ([16]).
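The simplest illustration of this semi-group (an editorial sketch, not in the original text) is one-dimensional Brownian motion, whose transition probabilities p(0,x,t,·) are the Gaussian measures N(x,t). The code below evaluates P_t by Gauss-Hermite quadrature and checks the semi-group (Chapman-Kolmogorov) identity P_s ∘ P_t = P_{s+t} on the test function φ(x) = x², for which P_t φ(x) = x² + t exactly:

```python
import numpy as np

def P(t, phi):
    """(P_t phi)(x) = E[phi(x + sqrt(t) Z)], Z ~ N(0,1): the transition
    semi-group of one-dimensional Brownian motion, evaluated by
    Gauss-Hermite quadrature (exact for polynomial phi)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(60)
    w = weights / np.sqrt(2.0 * np.pi)     # normalise to the N(0,1) density
    return lambda x: sum(wi * phi(x + np.sqrt(t) * zi)
                         for wi, zi in zip(w, nodes))

phi = lambda x: x * x                      # P_t phi (x) = x^2 + t
lhs = P(0.5, P(0.25, phi))(1.0)            # (P_s P_t phi)(1)
rhs = P(0.75, phi)(1.0)                    # (P_{s+t} phi)(1)
```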
§8. Examples

Let us wind up by looking at some useful examples of stochastic processes which are indispensable for our forthcoming discussion of stochastic FDE's.

(A) Gaussian Fields

If T is a metric space, a real stochastic process x:T × Ω → R on a probability space (Ω,F,P) is a Gaussian process if for every t ∈ T the random variable x(t,·) belongs to L²(Ω,R;F) and has a Gaussian distribution (Hida [31], pp. 31-43). For a Gaussian process x:T × Ω → R define the mean m:T → R and covariance V:T × T → R by setting

    m(t) = E x(t,·),   t ∈ T,

    V(s,t) = E{[x(s,·) - m(s)][x(t,·) - m(t)]},   s,t ∈ T.
A Gaussian process x:T × Ω → R is called a Gaussian system if every finite linear combination from the family {x(t,·): t ∈ T} has a Gaussian distribution (Hida [31], §1.6). The following results concerning Gaussian systems are well-known.

Theorem (8.1):

(i) Any subsystem of a Gaussian system is also a Gaussian system.

(ii) For any Gaussian system x:T × Ω → R the closed linear hull of {x(t,·): t ∈ T} in L²(Ω,R;F) is a Gaussian system.

(iii) If x:T × Ω → R is a Gaussian system, then a necessary and sufficient condition for the family {x(t,·): t ∈ T} to be independent under P is V(s,t) = 0 whenever s ≠ t.

(iv) If x:Z⁺ × Ω → R is a (discrete-time) Gaussian system, then the sequence {x(n,·)}_{n=1}^∞ converges in probability if and only if it converges in L²(Ω,R;F). In this case lim_{n→∞} x(n,·) is a Gaussian random variable.
lf Tis a real Banach space, a Gaussian system x:T x n +:Ron a probability space (n,F,P) will be called a Gaussian (random) field or just a Gaussian field. Gaussian fields parametrized by i.nfini.te-dimensional Hilbert or Banach spacesdo not in general admit continuous or locally bounded versions 21
which are defined on the whole of the paraQteter space (Dudley [13] and Chapter V. §3 of thi.s book). For a deeper study of sample regularity properties of Gaussian processes, the i.nterested reader could refer to the works of Dudley [13], [14], Feldman [20] and Fernique [21]. (B)
(B) Brownian Motion

Let (Ω,F,(F_t)_{t≥0},P) be a filtered probability space. A process w:R⁺ × Ω → R is a one-dimensional Brownian motion (or a Wiener process) on (Ω,F,(F_t)_{t≥0},P) if

(i) it is (F_t)_{t≥0}-adapted;

(ii) it is a Gaussian system on (Ω,F,P) with mean and covariance given by

    E w(t,·) = 0,   E w(t,·)w(s,·) = min(t,s),   for all t,s ≥ 0.

A Brownian motion w can always be normalized so that w(0,·) = 0 a.s. A process w:R⁺ × Ω → Rᵐ is an m-dimensional Brownian motion if it is of the form w = (w¹,w²,...,wᵐ), where the coordinate processes {wⁱ}_{i=1}^m are independent one-dimensional Brownian motions on (Ω,F,(F_t)_{t≥0},P).

For easy reference, we list here some of the basic properties of Brownian motion in Rᵐ. These are well-understood and the reader may look at Hida [31], Chung [8], Friedman [22] and McKean [52] for proofs.

Theorem (8.2): Let w:R⁺ × Ω → Rᵐ be an m-dimensional Brownian motion on the filtered probability space (Ω,F,(F_t)_{t≥0},P). Then the following is true:

(i) w is an (F_t)_{t≥0}-martingale.

(ii) The sample paths of w are almost all nowhere differentiable.

(iii) There is a version of w such that, for any a > 0 and any 0 < γ < 1/2, almost all its sample paths are γ-Hölder continuous on [0,a].

(iv) If t_0 ≥ 0, then the process {w(t+t_0,·) - w(t_0,·): t ≥ 0} is an m-dimensional Brownian motion on (Ω,F,(F_t)_{t≥t_0},P).
(vi) Let Ω̃ = C(R⁺,Rᵐ) be the space of all continuous paths R⁺ → Rᵐ and let F̃ be the σ-algebra generated by all cylinder sets in Ω̃ of the form {f: f ∈ Ω̃, (f(t_1),...,f(t_k)) ∈ B_k}, t_1,...,t_k ∈ R⁺, B_k ∈ Borel (Rᵐ)^k. Then there is a unique probability measure μ_w (called Wiener measure) on (Ω̃,F̃) giving the finite-dimensional distributions of w, viz.

    μ_w{f ∈ Ω̃: (f(t_1),...,f(t_k)) ∈ B_k}
        = ∫_{B_k} g(t_1, t_2-t_1, ..., t_k-t_{k-1}; x_1, x_2-x_1, ..., x_k-x_{k-1}) dx_1 dx_2 ... dx_k,

where the kernel g:(R⁺)^k × (Rᵐ)^k → R⁺ is given by

    g(u_1,...,u_k; y_1,...,y_k) = Π_{j=1}^k (2π u_j)^{-m/2} e^{-|y_j|²/(2u_j)}

for u_1,...,u_k ∈ R⁺, y_1,...,y_k ∈ Rᵐ.

(vii) If 0 ≤ s < t, then

    E(|w(t,·) - w(s,·)|^{2k} | F_s) = (2k-2+m)(2k-4+m) ··· m |t-s|^k

a.s. for every integer k ≥ 1.

(viii) For 0 < t_1 < t_2 < t_3, the increments w(t_2,·) - w(t_1,·), w(t_3,·) - w(t_2,·) are conditionally independent under P given the σ-algebra σ{w(u,·): 0 ≤ u ≤ t_1}.

(ix) w is a Feller process with stationary transition probabilities given by

    p(0,x,t,B) = (2πt)^{-m/2} ∫_B e^{-|y-x|²/(2t)} dy,   B ∈ Borel Rᵐ.

Furthermore, the associated contraction semi-group {P_t}_{t≥0} is strongly continuous on C_b(Rᵐ,R). Its infinitesimal generator A:D(A) ⊆ C_b(Rᵐ,R) → C_b(Rᵐ,R) is given on C² functions φ:Rᵐ → R with compact support by

    A(φ) = ½ Δφ,

where Δ is the Laplacian

    Δφ(x) = Σ_{i=1}^m ∂²φ(x)/∂x_i².
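Property (vii) can be checked directly in the one-dimensional case (an editorial sketch, not part of the original text): w(t)-w(s) ~ N(0,h) with h = t-s, so E|w(t)-w(s)|^{2k}, computed by Gauss-Hermite quadrature, should match the product (2k-2+m)(2k-4+m)···m · h^k with m = 1:

```python
import numpy as np

def increment_moment(m, k, h):
    """Right-hand side of (vii): (2k-2+m)(2k-4+m)...m * h^k."""
    prod = 1.0
    for j in range(1, k + 1):
        prod *= 2 * j - 2 + m          # the factors m, m+2, ..., 2k-2+m
    return prod * h ** k

nodes, weights = np.polynomial.hermite_e.hermegauss(40)
wts = weights / np.sqrt(2.0 * np.pi)   # N(0,1) expectation weights
h = 0.7
moments = [float(np.sum(wts * (np.sqrt(h) * nodes) ** (2 * k)))
           for k in (1, 2, 3)]
# k = 2 recovers the classical fourth moment 3 h^2 of an N(0,h) variable
```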
(C) The Stochastic Integral

We adopt the viewpoint of McShane ([53], Chapters II, III, IV), which contains adequate information for all our purposes. However, it is interesting to note that Elworthy's recent treatment ([19], Chapters III, IV, V) contains far-reaching generalizations of McShane's stochastic integral to infinite dimensions. Other references for stochastic integration include Ito [39], McKean [52], Meyer [55], Friedman [22], Gihman and Skorohod [24], Ikeda and Watanabe [35], Metivier and Pellaumail [54], and Arnold [2]. In this section we shall content ourselves with giving a brief account of the stochastic integral, including some of its most important properties.

Let (Ω,F,(F_t)_{t≥0},P) be a filtered probability space. For a closed interval [a,b] ⊆ R, a partition Π = (t_1,...,t_{k+1}; τ_1,...,τ_k), a = t_1 < t_2 < ... < t_{k+1} = b, τ_i ∈ R, i = 1,2,...,k, is belated if τ_i ≤ t_i, i = 1,...,k, and is Cauchy if τ_i = t_i, i = 1,...,k. If f:[a,b] × Ω → L(Rᵐ,Rⁿ) and z:[a,b] × Ω → Rᵐ are (F_t)_{a≤t≤b}-adapted processes, form the Riemann sum S(Π) ∈ L⁰(Ω,Rⁿ;F_b) by setting

    S(Π)(ω) = Σ_{i=1}^k f(τ_i,ω)(z(t_{i+1},ω) - z(t_i,ω))

for all ω ∈ Ω. For belated Π define

    mesh Π = max {t_{i+1} - τ_i: 1 ≤ i ≤ k}.

The stochastic integral ∫_a^b f(t) dz(t) ∈ L⁰(Ω,Rⁿ;F_b) is defined as

    ∫_a^b f(t) dz(t) = lim_{mesh Π → 0} S(Π),

whenever the above limit exists in probability as mesh Π → 0 over all belated partitions Π of [a,b]. Conditions under which this limit exists are given below, together with some of the basic properties of the stochastic integral. For proofs see McShane [53], Elworthy [19] and Friedman [22].

Theorem (8.3):

(i) Let f_i:[a,b] × Ω → L(Rᵐ,Rⁿ) and z_i:[a,b] × Ω → Rᵐ, i = 1,2, be (F_t)_{a≤t≤b}-adapted processes.
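For a feel of the definition (an editorial illustration, not in the original text): with f = z = w, a one-dimensional Brownian path on [0,1], the belated (left-point) Riemann sums converge in probability to the Itô value ∫_0^1 w dw = ½(w(1)² - 1), which differs from the classical value ½ w(1)² by half the quadratic variation:

```python
import numpy as np

def belated_sum(f_vals, z_vals):
    """S(Pi) for the Cauchy belated partition tau_i = t_i:
    sum_i f(t_i) (z(t_{i+1}) - z(t_i))."""
    return float(np.sum(f_vals[:-1] * np.diff(z_vals)))

rng = np.random.default_rng(42)
n = 200_000
# sampled Brownian path on a uniform grid of [0,1]
w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))))

approx = belated_sum(w, w)            # approximates int_0^1 w dw
exact = 0.5 * (w[-1] ** 2 - 1.0)      # value given by Ito's formula
```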
g(t,y(t)) ∈ ℒ²(Ω,L(ℝ^m,ℝ^n))

is also adapted to (F_t)_{t∈[0,a]}.

Remarks (1.1)

(1) In condition (E)(i) the Lipschitz function λ can be replaced by a process λ̃: Ω × [0,a] → ℝ^m adapted to (F_t)_{t∈[0,a]} and whose sample paths are almost all Lipschitz with a uniform Lipschitz constant, i.e. there exists ℓ > 0 such that

|λ̃(ω)(t_1) − λ̃(ω)(t_2)| ≤ ℓ|t_1 − t_2|

for almost all ω ∈ Ω and all t_1, t_2 ∈ [0,a].

(2) The stochastic FDE (I) includes both the ones with random coefficients and the (ordinary) stochastic differential equations (without retardation, [22], [53]); for suppose g̃:[0,a] × C(J,ℝ^n) × Ω → L(ℝ^m,ℝ^n) is a coefficient process corresponding to a stochastic FDE with random coefficients. Assume that g̃ is ℒ² with respect to the third variable. Then we can define the coefficient process g:[0,a] × ℒ²(Ω,C(J,ℝ^n)) → ℒ²(Ω,L(ℝ^m,ℝ^n)) by setting

g(t,ψ)(ω) = g̃(t,ψ(ω),ω)

for all t ∈ [0,a], a.a. ω ∈ Ω, and all ψ ∈ ℒ²(Ω,C(J,ℝ^n)).
§2. Existence and Uniqueness of Solutions

The following lemmas will be needed in order to establish the main theorem for existence and uniqueness of solutions to the stochastic FDE (I).

Lemma (2.1): Suppose x:[-r,a] × Ω → ℝ^n is a process with almost all sample paths continuous. Assume that x|[0,a] is adapted to (F_t)_{t∈[0,a]} and x(s,·) is F_0-measurable for all s ∈ J. Then the process

[0,a] × Ω → C(J,ℝ^n),  (t,ω) ↦ x_t(ω),

is adapted to (F_t)_{t∈[0,a]} with almost all sample paths continuous.

Proof:
Fix t ∈ [0,a]. Since x|[-r,t] has continuous sample paths, it induces an F-measurable map x|[-r,t]: Ω → C([-r,t],ℝ^n). This map is in fact F_t-measurable; to see this observe that Borel C([-r,t],ℝ^n) is generated by cylinder sets of the form ∩_{i=1}^n p_{t_i}^{-1}(B_i), where B_i ∈ Borel ℝ^n, t_i ∈ [-r,t], and p_{t_i}: C([-r,t],ℝ^n) → ℝ^n is evaluation at t_i for 1 ≤ i ≤ n. Thus it is sufficient to check F_t-measurability of x|[-r,t] on such sets. With the above notation,

(x|[-r,t])^{-1}[∩_{i=1}^n p_{t_i}^{-1}(B_i)] = ∩_{i=1}^n {ω: x(t_i,ω) ∈ B_i}.

By hypotheses, if t_i ∈ J, then {ω: x(t_i,ω) ∈ B_i} ∈ F_0; and if t_i ∈ [0,t], then {ω: x(t_i,ω) ∈ B_i} ∈ F_{t_i}. Hence {ω: x(t_i,ω) ∈ B_i} ∈ F_t for 1 ≤ i ≤ n. So

(x|[-r,t])^{-1}[∩_{i=1}^n p_{t_i}^{-1}(B_i)] ∈ F_t.

Now x_t = m(t,·)∘(x|[-r,t]) and the deterministic memory m(t,·): C([-r,t],ℝ^n) → C(J,ℝ^n) is continuous, so x_t must be F_t-measurable. □

Lemma (2.2): Suppose that z: Ω → C([0,a],ℝ^m) is a process satisfying condition E(i). Let f:[0,a] → ℒ²(Ω,L(ℝ^m,ℝ^n)) be an ℒ²-continuous process adapted to (F_t)_{t∈[0,a]}. Define the process F:[-r,a] → ℒ²(Ω,ℝ^n) by

F(t) = ∫_0^t f(u)dz(·)(u),  t ∈ [0,a];  F(t) = 0,  t ∈ J,

a.s., where the integral is McShane's belated integral of f with respect to z. Then F corresponds to a process belonging to ℒ²(Ω,C([-r,a],ℝ^n)) and adapted to (F_t)_{t∈[0,a]}. Indeed there is an M > 0 such that

E( sup_{t∈[0,a]} |∫_0^t f(u)dz(·)(u)|² ) ≤ M ∫_0^a E(‖f(u)‖²)du.   (1)

The process [0,a] ∋ t ↦ F_t ∈ ℒ²(Ω,C(J,ℝ^n)) is adapted to (F_t)_{t∈[0,a]} with almost all sample paths continuous (i.e. it belongs to C_A([0,a], ℒ²(Ω,C(J,ℝ^n)))). M is independent of f.

Proof:
With the notation of Condition E(i),

F(t)(ω) = ∫_0^t f(u)(ω)dλ(u) + (ω)∫_0^t f(u)dz^m(·)(u),  t ∈ [0,a], a.a. ω ∈ Ω.   (2)

The first integral on the right-hand side of (2) is a Riemann–Stieltjes integral for a.a. ω and is therefore continuous in t for a.a. ω ∈ Ω; it thus defines an F-measurable map Ω → C([0,a],ℝ^n). As f(u) is F_u-measurable for all 0 ≤ u ≤ t, then ∫_0^t f(u)(·)dλ(u) is F_t-measurable (being a.s. a limit of F_t-measurable Riemann–Stieltjes sums). Since z^m is a martingale adapted to (F_t)_{t∈[0,a]} with a.a. sample paths continuous, then so is the McShane integral on the right-hand side of (2) (§I.8(C)). Hence by the martingale inequality we have

E( sup_{t∈[0,a]} |∫_0^t f(u)dz^m(·)(u)|² ) ≤ 4E( |∫_0^a f(u)dz^m(·)(u)|² )  (Theorem (I.6.1))
≤ 4C ∫_0^a E(‖f(u)(·)‖²)du   (3)

where C = 2Ka^{1/2} + K^{1/2} (Theorem (I.8.4)). If ℓ > 0 is the Lipschitz constant for λ, then it is easy to see that for a.a. ω ∈ Ω

|∫_0^t f(u)(ω)dλ(u)|² ≤ ℓ²a ∫_0^a |f(u)(ω)|²du.

Hence

E( sup_{t∈[0,a]} |∫_0^t f(u)(·)dλ(u)|² ) ≤ ℓ²a ∫_0^a E(|f(u)(·)|²)du.   (4)

The inequality (1) now follows from (3) and (4) and the fact that

E( sup_{t∈[0,a]} |∫_0^t f(u)dz(·)(u)|² ) ≤ 2E( sup_{t∈[0,a]} |∫_0^t f(u)(·)dλ(u)|² ) + 2E( sup_{t∈[0,a]} |∫_0^t f(u)dz^m(·)(u)|² ).

Take M = 2(ℓ²a + 4C). Note that C (and hence M) is independent of f. It follows immediately from (1) that F ∈ ℒ²(Ω,C([-r,a],ℝ^n)), and from Lemma (2.1) that [0,a] ∋ t ↦ F_t ∈ ℒ²(Ω,C(J,ℝ^n)) is adapted to (F_t)_{t∈[0,a]} with the sample paths t ↦ F_t(ω) continuous for a.a. ω ∈ Ω. □

Here is the main existence and uniqueness theorem for solutions of the stochastic FDE (I):

Theorem (2.1): Suppose Conditions (E) of §1 are satisfied, and let θ ∈ ℒ²(Ω,C(J,ℝ^n)) be F_0-measurable. Then the stochastic FDE (I) has a solution x ∈ ℒ²(Ω,C([-r,a],ℝ^n)) adapted to (F_t)_{t∈[0,a]} and with initial process θ. Furthermore,

(i) x is unique up to equivalence (of stochastic processes) among all solutions of (I) belonging to ℒ²(Ω,C([-r,a],ℝ^n)) and adapted to (F_t)_{t∈[0,a]}; i.e. if x̃ ∈ ℒ²(Ω,C([-r,a],ℝ^n)) is a solution of (I) adapted to (F_t)_{t∈[0,a]} and with initial process θ, then x(·)(t) = x̃(·)(t) a.s. for all t ∈ [0,a];

(ii) the trajectory [0,a] ∋ t ↦ x_t ∈ ℒ²(Ω,C(J,ℝ^n)) is a C(J,ℝ^n)-valued process adapted to (F_t)_{t∈[0,a]} with a.a. sample paths continuous. (It belongs to C_A([0,a], ℒ²(Ω,C(J,ℝ^n))).)

Proof:
We look for solutions of (I) by successive approximation in ℒ²(Ω,C([-r,a],ℝ^n)). Suppose θ ∈ ℒ²(Ω,C(J,ℝ^n)) is F_0-measurable. Note that this is equivalent to saying that θ(·)(s) is F_0-measurable for all s ∈ J, because θ has a.a. sample paths continuous. We prove by induction that there is a sequence of processes ᵏx:[-r,a] × Ω → ℝ^n, k = 1,2,…, such that each ᵏx has the properties P(k):

(i) ᵏx ∈ ℒ²(Ω,C([-r,a],ℝ^n)) and is adapted to (F_t)_{t∈[0,a]}.

(ii) For each t ∈ [0,a], ᵏx_t ∈ ℒ²(Ω,C(J,ℝ^n)) and is F_t-measurable.

(iii) ‖^{k+1}x − ᵏx‖²_{ℒ²(Ω,C([-r,a],ℝ^n))} ≤ [(ML²)^{k-1} a^{k-1}/(k−1)!] ‖²x − ¹x‖²_{ℒ²(Ω,C)},   (5)

where M is the constant of Lemma (2.2). Take ¹x:[-r,a] × Ω → ℝ^n to be

¹x(t,ω) = θ(ω)(0),  t ∈ [0,a];  ¹x(t,ω) = θ(ω)(t),  t ∈ J,

a.s., and

^{k+1}x(t,ω) = θ(ω)(0) + (ω)∫_0^t g(u, ᵏx_u)dz(·)(u),  t ∈ [0,a];  ^{k+1}x(t,ω) = θ(ω)(t),  t ∈ J,   (6)

a.s. Since θ ∈ ℒ²(Ω,C(J,ℝ^n)) and is F_0-measurable, then ¹x ∈ ℒ²(Ω,C([-r,a],ℝ^n)) and is trivially adapted to (F_t)_{t∈[0,a]}. By Lemma (2.1), ¹x_t ∈ ℒ²(Ω,C(J,ℝ^n)) and is F_t-measurable for all t ∈ [0,a]. P(1)(iii) holds trivially.

Now suppose P(k) is satisfied for some k ≥ 1. Then by Condition (E)(ii), (iii) and the continuity of the stochastic memory, it follows from P(k)(ii) that the process u ↦ g(u, ᵏx_u) is continuous and adapted to (F_t)_{t∈[0,a]}. We can therefore apply Lemma (2.2) to the right-hand side of (6), obtaining P(k+1)(i) and P(k+1)(ii). To check P(k+1)(iii), consider
‖^{k+2}x − ^{k+1}x‖²_{ℒ²(Ω,C)}
≤ E( sup_{t∈[0,a]} |∫_0^t {g(u,^{k+1}x_u) − g(u,ᵏx_u)}dz(·)(u)|² )
≤ M ∫_0^a ‖g(u,^{k+1}x_u) − g(u,ᵏx_u)‖²_{ℒ²} du   (P(k+1)(ii) and Lemma (2.2))
≤ ML² ∫_0^a ‖^{k+1}x_u − ᵏx_u‖²_{ℒ²(Ω,C)} du   (Condition E(ii))
≤ ML² [(ML²)^{k-1}/(k−1)!] ‖²x − ¹x‖²_{ℒ²(Ω,C)} ∫_0^a u^{k-1}du
= (ML²)^k (a^k/k!) ‖²x − ¹x‖²_{ℒ²(Ω,C)}.

Complete the proof of P(k+1)(iii) by noting that

‖^{k+2}x_t − ^{k+1}x_t‖²_{ℒ²(Ω,C)} ≤ ‖^{k+2}x − ^{k+1}x‖²_{ℒ²(Ω,C)}.
Therefore P(k) holds for all k ≥ 1. For each k > 1, write

ᵏx = ¹x + Σ_{i=1}^{k-1} (^{i+1}x − ⁱx).

Now ℒ²_A(Ω,C([-r,a],ℝ^n)) is closed in ℒ²(Ω,C([-r,a],ℝ^n)); so the series

Σ_{i=1}^∞ (^{i+1}x − ⁱx)

converges in ℒ²_A(Ω,C([-r,a],ℝ^n)) because of (5) and the convergence of

Σ_{i=1}^∞ [(ML²)^{i-1} a^{i-1}/(i−1)!]^{1/2}.

Hence {ᵏx}_{k=1}^∞ converges to some x ∈ ℒ²_A(Ω,C([-r,a],ℝ^n)). Clearly x|J = θ and is F_0-measurable, so we can apply Lemma (2.2) to the difference
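The comparison series controlling this convergence can be inspected numerically. The constants below (M = L = 1, a = 2) are illustrative choices, not values from the text; the point is only the factorial decay of the terms.

```python
import math

# Partial sums of sum_k sqrt(c^k / k!), the type of dominating series used
# above (here c stands for M * L^2 * a; c = 2 is an illustrative value).
# The factorial in the denominator forces rapid stabilization.

c = 2.0
terms = [math.sqrt(c ** k / math.factorial(k)) for k in range(60)]
partial_sums = [sum(terms[: i + 1]) for i in range(60)]
print(partial_sums[9], partial_sums[-1])   # already essentially converged
```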
obtaining

E( sup_{t∈[0,a]} |∫_0^t g(u,ᵏx_u)dz(·)(u) − ∫_0^t g(u,x_u)dz(·)(u)|² ) ≤ ML²a ‖ᵏx − x‖²_{ℒ²(Ω,C)} → 0  as k → ∞.

Thus viewing the right-hand side of (6) as a process in ℒ²(Ω,C([-r,a],ℝ^n)) and letting k → ∞, it follows from the above that x must satisfy the stochastic FDE (I) a.s. for all t ∈ [-r,a].

To prove uniqueness, let x̃ ∈ ℒ²_A(Ω,C([-r,a],ℝ^n)) also be a solution of (I) with initial process θ. Then it is easy to see from the Lipschitz condition (E(ii)) and Lemma (2.2) that

‖x_t − x̃_t‖²_{ℒ²(Ω,C)} ≤ ML² ∫_0^t ‖x_u − x̃_u‖²_{ℒ²(Ω,C)} du

for all t ∈ [0,a]. Therefore we must have x_t − x̃_t = 0 for all t ∈ [0,a], so x = x̃ in ℒ²(Ω,C([-r,a],ℝ^n)) a.s. The last assertion of the theorem follows immediately from Lemma (2.1). □

Remarks (2.1): Let 0 ≤ t_1 ≤ t ≤ a and ℒ²(Ω,C(J,ℝ^n);F_{t_1}) …
associating with each η ∈ C(J,ℝ^n) the constant map ι(η)(ω) = η for all ω ∈ Ω. Let T^{t_1}_t(η) denote T^{t_1}_t(ι(η)) for all t ≥ t_1. With the above convention one has:

Lemma (1.2): For each η ∈ C(J,ℝ^n), the trajectory {T^{t_1}_t(η)}_{t≥t_1} of (I) (or (II)) (with φ = η) is adapted to the filtration (F_t ∩ g^{t_1})_{t≥t_1} (with each F_t ∩ g^{t_1} independent of F_{t_1}).
Proof: For fixed t_1 ≥ 0 define the process w^{t_1}: Ω → C([t_1,a],ℝ^m) by

w^{t_1}(t) = w(t) − w(t_1),  t ≥ t_1.

Then w^{t_1} is adapted to (F_t ∩ g^{t_1})_{t≥t_1}. Now let η ∈ C(J,ℝ^n) and ^ηx be the solution of (II) with φ = η. In (II) one can replace w by w^{t_1}, thus getting a unique solution through η which is adapted to (F_t ∩ g^{t_1})_{t≥t_1}. But ^ηx also satisfies

^ηx(ω)(t) = η(0) + (ω)∫_{t_1}^t G(u, ^ηx_u(·))dw^{t_1}(u),  t ≥ t_1;
^ηx(ω)(t) = η(t − t_1),  t_1 − r ≤ t ≤ t_1,  a.a. ω ∈ Ω.

Then by uniqueness of solutions {T^{t_1}_t(η)}_{t≥t_1} is adapted to (F_t ∩ g^{t_1})_{t≥t_1}. □
The next lemma is probably well known for the case J = {0} (stochastic ODEs): it essentially says that one need only solve (I) on the subspace C(J,ℝ^n) of the configuration space ℒ²(Ω,C(J,ℝ^n)).

Lemma (1.3): Suppose Hypotheses (M) are satisfied. Let ψ ∈ ℒ²(Ω,C(J,ℝ^n);F_{t_1}) be an F_{t_1}-simple function, i.e. there are η_j ∈ C(J,ℝ^n), j = 1,2,…,k, and a partition {Ω_j}_{j=1}^k ⊂ F_{t_1} of Ω such that ψ = Σ_{j=1}^k η_j χ_{Ω_j}, where χ_{Ω_j} is the characteristic (indicator) function of Ω_j. Then

T^{t_1}_t(ψ) = Σ_{j=1}^k T^{t_1}_t(η_j)χ_{Ω_j},   (2)

i.e.

T^{t_1}_t(ψ)(ω) = Σ_{j=1}^k T^{t_1}_t(η_j)(ω)χ_{Ω_j}(ω)  for a.a. ω ∈ Ω.
Remark: In particular, if Ω_0 ∈ F_{t_1}, then

T^{t_1}_t(χ_{Ω_0}) = T^{t_1}_t(1)χ_{Ω_0} + T^{t_1}_t(0)χ_{Ω_0ᶜ}.
Proof: Let 1 ≤ j ≤ k. Then χ_{Ω_j} is F_{t_1}-measurable because Ω_j ∈ F_{t_1}. Solving the stochastic FDE (II) at η_j ∈ C(J,ℝ^n), we get a solution ^{η_j}x satisfying

^{η_j}x(·)(t) = η_j(0) + ∫_{t_1}^t G(u, ^{η_j}x_u(·))dw(u),  t ≥ t_1;
^{η_j}x(·)(t) = η_j(t − t_1),  t_1 − r ≤ t ≤ t_1,   (III)'

a.s. Since χ_{Ω_j} is F_{t_1}-measurable, then the process u ↦ G(u, ^{η_j}x_u(·))χ_{Ω_j} is adapted to (F_t)_{t≥t_1}; and so by a property of the stochastic integral (Theorem …)
a.s. Clearly, … for all u ≥ t_1. Thus

Σ_{j=1}^k ^{η_j}x(·)(t)χ_{Ω_j} = Σ_{j=1}^k η_j(0)χ_{Ω_j} + ∫_{t_1}^t Σ_{j=1}^k G(u, ^{η_j}x_u(·))χ_{Ω_j} dw(u),  t ≥ t_1,

and

Σ_{j=1}^k ^{η_j}x(·)(t)χ_{Ω_j} = Σ_{j=1}^k η_j(t − t_1)χ_{Ω_j},  t_1 − r ≤ t ≤ t_1.

Therefore by uniqueness of solutions to the stochastic FDE (II) (Theorem (II.2.1)), we obtain

T^{t_1}_t(ψ) = Σ_{j=1}^k ^{η_j}x(·)(t)χ_{Ω_j},  t_1 − r ≤ t …   (8)
… j = 1,2,…; {Ω_{j,i}} a partition of Ω. Now the left-hand side of (7) is equal to

lim_{j→∞} ∫_A f( Σ_{i=1}^{n_j} T^{t_1}_{t_2}(Φ_{j,i})(ω)χ_{Ω_{j,i}}(ω) )dP(ω)
= lim_{j→∞} Σ_{i=1}^{n_j} ∫_A f{T^{t_1}_{t_2}(Φ_{j,i})(ω)}χ_{Ω_{j,i}}(ω)dP(ω)
= lim_{j→∞} lim_{k→∞} ∫_A Σ_{i=1}^{n_j} Σ_{h=1}^{m_k} f(θ^{j,i}_{k,h})χ_{Ω*_{k,h}}(ω)χ_{Ω_{j,i}}(ω)dP(ω)
= lim_{j→∞} lim_{k→∞} Σ_{i=1}^{n_j} Σ_{h=1}^{m_k} f(θ^{j,i}_{k,h})P(Ω*_{k,h} ∩ Ω_{j,i} ∩ A).   (9)
Conversely, the right-hand side of (7) is equal to

lim_{j→∞} ∫_A ∫_Ω f[T^{t_1}_{t_2}(ψ_j(ω'))(ω)]dP(ω)dP(ω')
= lim_{j→∞} ∫_A ∫_Ω f[( Σ_{i=1}^{n_j} T^{t_1}_{t_2}(Φ_{j,i})χ_{Ω_{j,i}}(ω'))(ω)]dP(ω)dP(ω')
= lim_{j→∞} ∫_A ∫_Ω Σ_{i=1}^{n_j} f[T^{t_1}_{t_2}(Φ_{j,i})(ω)]χ_{Ω_{j,i}}(ω')dP(ω)dP(ω')
= lim_{j→∞} lim_{k→∞} ∫_A Σ_{i=1}^{n_j} Σ_{h=1}^{m_k} f(θ^{j,i}_{k,h})χ_{Ω*_{k,h}}(ω)χ_{Ω_{j,i}}(ω')dP(ω)dP(ω') …

… χ_B(η) = 1 if η ∈ B, and χ_B(η) = 0 if η ∉ B, for all η ∈ C(J,ℝ^n). Therefore f_m: C(J,ℝ^n) → ℝ, m = 1,2,…, is a sequence of uniformly bounded and uniformly continuous functions converging pointwise to χ_B. We can therefore replace f by f_m in (7), then pass to the limit as m → ∞ using the Lebesgue dominated convergence theorem. We obtain (6) in this way for any open set B ⊆ C(J,ℝ^n). To get (6) for any Borel set B in C(J,ℝ^n) we reason as follows. Let ν: Borel C(J,ℝ^n) → ℝ be the set function
ν(B) = ∫_A ∫_Ω χ_B{[T^{t_1}_{t_2}(T_{t_1}(θ)(ω'))](ω)}dP(ω)dP(ω'),  B ∈ Borel C(J,ℝ^n),   (11)

where A ∈ F_{t_1} is fixed. Then ν is a finite measure on C(J,ℝ^n); to prove this let {B_i}_{i=1}^∞ be a countable disjoint class of Borel sets. Then

ν(∪_{i=1}^m B_i) = ∫_A ∫_Ω Σ_{i=1}^m χ_{B_i}{[T^{t_1}_{t_2}(T_{t_1}(θ)(ω'))](ω)}dP(ω)dP(ω') = Σ_{i=1}^m ν(B_i).   (12)

Since χ_{∪_{i=1}^m B_i}(η) → χ_{∪_{i=1}^∞ B_i}(η) as m → ∞ for all η ∈ C(J,ℝ^n), then by the dominated convergence theorem we can let m → ∞ in (12) to obtain

Σ_{i=1}^∞ ν(B_i) = lim_{m→∞} ∫_A ∫_Ω Σ_{i=1}^m χ_{B_i}{[T^{t_1}_{t_2}(T_{t_1}(θ)(ω'))](ω)}dP(ω)dP(ω')
= ∫_A ∫_Ω χ_{∪_{i=1}^∞ B_i}{[T^{t_1}_{t_2}(T_{t_1}(θ)(ω'))](ω)}dP(ω)dP(ω')
= ν(∪_{i=1}^∞ B_i).
Therefore ν is a finite (Borel) measure on C(J,ℝ^n). Since C(J,ℝ^n) is metric, ν is regular (see §(I.2)). The set function ν*: Borel C(J,ℝ^n) → ℝ,

ν*(B) = ∫_A χ_B(T_{t_2}(θ)(ω))dP(ω),  B ∈ Borel C(J,ℝ^n),

is also a regular (finite) measure on C(J,ℝ^n); so for any B ∈ Borel C(J,ℝ^n),

ν*(B) = inf{ν*(U): U ⊆ C(J,ℝ^n) open, B ⊂ U} = inf{ν(U): U ⊆ C(J,ℝ^n) open, B ⊂ U} = ν(B).
Therefore (6) holds for all Borel sets B in C(J,ℝ^n). It is now sufficient to check that the left-hand side p(t_1, T_{t_1}(θ)(·), t_2, B) is measurable with respect to the σ-algebra generated by T_{t_1}(θ); this is valid because the function p(t_1,·,t_2,B),

C(J,ℝ^n) ∋ η ↦ p(t_1,η,t_2,B) = ∫_Ω χ_B(T^{t_1}_{t_2}(η)(ω))dP(ω),

is measurable. If B ⊆ C(J,ℝ^n) is open, let {f_m}_{m=1}^∞ be a sequence of uniformly bounded, uniformly continuous functions f_m: C(J,ℝ^n) → ℝ converging pointwise to χ_B. Therefore

p(t_1,η,t_2,B) = lim_{m→∞} ∫_Ω f_m(T^{t_1}_{t_2}(η)(ω))dP(ω).

If η_k → η in C(J,ℝ^n), then T^{t_1}_{t_2}(η_k) → T^{t_1}_{t_2}(η) in ℒ²(Ω,C); so f_m(T^{t_1}_{t_2}(η_k)(ω)) → f_m(T^{t_1}_{t_2}(η)(ω)) in probability; so by the Lebesgue dominated convergence theorem, for each m,

∫_Ω f_m(T^{t_1}_{t_2}(η_k)(ω))dP(ω) → ∫_Ω f_m(T^{t_1}_{t_2}(η)(ω))dP(ω)  as k → ∞.

Hence η ↦ ∫_Ω f_m(T^{t_1}_{t_2}(η)(ω))dP(ω) is continuous (for each m). Therefore p(t_1,·,t_2,B) is measurable because it is a pointwise limit of continuous functions. If B ∈ Borel C(J,ℝ^n), then

p(t_1,η,t_2,B) = inf{p(t_1,η,t_2,U): U open in C(J,ℝ^n), B ⊆ U}.

But each η ↦ p(t_1,η,t_2,U) is measurable, so p(t_1,·,t_2,B) must be measurable. Therefore

p(t_1, T_{t_1}(θ)(·), t_2, B) = E[P(T_{t_2}(θ) ∈ B | F_{t_1}) | T_{t_1}(θ)] = P(T_{t_2}(θ) ∈ B | T_{t_1}(θ)),

since the σ-algebra generated by T_{t_1}(θ) is contained in F_{t_1}.
Finally we check the Chapman–Kolmogorov identity

p(t_1,η,t_2,B) = ∫_{ξ∈C(J,ℝ^n)} p(u,ξ,t_2,B) p(t_1,η,u,dξ)

for all B ∈ Borel C(J,ℝ^n), 0 ≤ t_1 ≤ u ≤ t_2 …
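A finite-state analogue makes the Chapman–Kolmogorov identity concrete: with transition matrices in place of transition probabilities, composing over an intermediate time u is matrix multiplication. This is my own illustrative sketch, not material from the text.

```python
import numpy as np

# Chapman-Kolmogorov for a finite-state chain: P(t1, t2) = P(t1, u) @ P(u, t2),
# the discrete counterpart of integrating p(u, xi, t2, B) against p(t1, eta, u, d xi).

rng = np.random.default_rng(1)

def random_stochastic(n):
    m = rng.random((n, n))
    return m / m.sum(axis=1, keepdims=True)   # rows sum to 1

P_t1_u = random_stochastic(4)    # transitions over [t1, u]
P_u_t2 = random_stochastic(4)    # transitions over [u, t2]
P_t1_t2 = P_t1_u @ P_u_t2        # composition over the intermediate time u

print(P_t1_t2.sum(axis=1))       # still a stochastic matrix
```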
… L(t, x^i(t,ω)), i = 1,2, are isonomous.

Proof: Suppose B_1,…,B_k ∈ Borel F, t_1,t_2,…,t_k ∈ T. Since L(t,·) is measurable, then L(t,·)^{-1}(B_j) ∈ Borel E for all 1 ≤ j ≤ k, so that by isonomy of x¹ and x² we have
Lemma (2.2): Suppose x¹, x²:[-r,a] × Ω → ℝ^n have a.a. sample paths continuous. Then the following statements are equivalent:

(i) x¹ and x² are isonomous.

(ii) The induced random variables x̃¹, x̃²: Ω → C([-r,a],ℝ^n) have the same distribution.

(iii) The processes [0,a] × Ω → C(J,ℝ^n), (t,ω) ↦ x^i_t(ω), i = 1,2, are isonomous.

Proof: We prove the implications (i) ⟹ (ii) ⟹ (iii) ⟹ (i). Suppose x¹ ∼ x². Since x¹, x² have a.a. sample paths continuous, then x̃¹, x̃²: Ω → C([-r,a],ℝ^n) are (Borel) measurable; these will have the same distribution if and only if the measures P∘(x̃¹)^{-1}, P∘(x̃²)^{-1} agree on cylinder sets in C([-r,a],ℝ^n) (see Parthasarathy [66], Theorem (2.1), p. 212). For each t ∈ [-r,a], let

p_t: C([-r,a],ℝ^n) → ℝ^n,  α ↦ α(t),

be evaluation at t. Let t_1,t_2,…,t_k ∈ [-r,a] and B_1,B_2,…,B_k ∈ Borel ℝ^n. Then

P∘(x̃¹)^{-1}[∩_{j=1}^k p_{t_j}^{-1}(B_j)] = P(x¹(·)(t_j) ∈ B_j for all 1 ≤ j ≤ k)
= P(x²(·)(t_j) ∈ B_j for all 1 ≤ j ≤ k)
= P∘(x̃²)^{-1}[∩_{j=1}^k p_{t_j}^{-1}(B_j)].

… so by uniqueness of the limit in the weak topology of measures we must have

P∘(F¹(t_1,·),…,F¹(t_h,·))^{-1} = P∘(F²(t_1,·),…,F²(t_h,·))^{-1}

on Borel(ℝ^n × … × ℝ^n) (h times) (§(I.2)). Hence F¹ ∼ F². To prove F¹ × z¹ ∼ F² × z², fix s = (s_1,…,s_p) and define z^i(s) = (z^i(s_1),…,z^i(s_p)). Write

(F^i, z^i(s)) = (F^i, 0) + (0, z^i(s)),  i = 1,2.

As F¹ ∼ F² and z¹ ∼ z², it follows from Lemma (2.1) that (F¹,0) ∼ (F²,0), (0,z¹(s)) ∼ (0,z²(s)) and

(F¹,z¹(s)) = (F¹,0) + (0,z¹(s)) ∼ (F²,0) + (0,z²(s)) = (F²,z²(s)).

Thus F¹ × z¹ ∼ F² × z². □
Lemma (2.4): Suppose G: C(J,ℝ^n) → L(ℝ^m,ℝ^n) is Lipschitz and z^i: Ω → C([0,a],ℝ^m), i = 1,2, are processes satisfying the conditions of Existence E(i) (§II.1). For each η ∈ C(J,ℝ^n) let {^iT_t(η)}_{t∈[0,a]}, i = 1,2, be the trajectories of the stochastic FDEs:

^ix(ω)(t) = η(0) + (ω)∫_0^t G(^ix_u(·))dz^i(·)(u),  t ∈ [0,a];  ^ix(ω)(t) = η(t),  t ∈ J,  i = 1,2,  a.a. ω ∈ Ω.

If z¹ and z² are isonomous, then {^1T_t(η)}_{t∈[0,a]} and {^2T_t(η)}_{t∈[0,a]} are also isonomous.

Proof: Use the method of proof of Theorem (II.2.1). Assume inductively that there are sequences ^ix^k: Ω → C([-r,a],ℝ^n), k = 1,2,…, i = 1,2, such that

(i) ‖^ix^{k+1} − ^ix^k‖²_{ℒ²(Ω,C)} ≤ (ML²)^{k-1} (a^{k-1}/(k−1)!) …, i = 1,2, and

(ii) ^1x^k × z¹ ∼ ^2x^k × z², for all k ≥ 1.

Now ^1x^k × z¹ ∼ ^2x^k × z² implies

(^1x^k_{t_1}(·),…,^1x^k_{t_m}(·)) × z¹ ∼ (^2x^k_{t_1}(·),…,^2x^k_{t_m}(·)) × z²   (Lemma (2.2)).

Hence

(G(^1x^k_{t_1}(·)),…,G(^1x^k_{t_m}(·))) × z¹ ∼ (G(^2x^k_{t_1}(·)),…,G(^2x^k_{t_m}(·))) × z²,

for t_1,…,t_m ∈ [0,a], by Lemma (2.1); i.e.

G(^1x^k(·)) × z¹ ∼ G(^2x^k(·)) × z².

Thus ^1x^{k+1} × z¹ ∼ ^2x^{k+1} × z² because of Lemma (2.3). Obviously (ii) is valid for k = 1 because (η,z¹) ∼ (η,z²). Therefore (i) and (ii) hold for all k. In particular (ii) implies ^1x^k ∼ ^2x^k for all k ≥ 1. Thus P∘(^1x^k)^{-1} = P∘(^2x^k)^{-1} for all k ≥ 1. Now ^ix^k → ^ix as k → ∞ a.s. (or in ℒ²) from the proof of Theorem (II.2.1); so P∘(^ix^k)^{-1} → P∘(^ix)^{-1}, i = 1,2, as k → ∞ in the weak topology of Borel measures on C([-r,a],ℝ^n). Hence by uniqueness of limits, we must have P∘(^1x)^{-1} = P∘(^2x)^{-1} on Borel C([-r,a],ℝ^n). So by Lemma (2.2), {^1T_t(η)}_{t∈[0,a]} = {t ↦ ^1x_t} is isonomous to {^2T_t(η)}_{t∈[0,a]} = {t ↦ ^2x_t}. □

The following theorem is our second main result; it says that the Markov process given by trajectories of the autonomous stochastic FDE (IV) is in fact time-homogeneous, in the sense that each transition probability p(t_1,η,t_2,·), t_1 ≤ t_2, η ∈ C(J,ℝ^n), depends on t_2 − t_1 (and η) only.

Theorem (2.1) (Time-homogeneity): Suppose that the autonomous stochastic FDE (IV) satisfies Hypotheses (A). For 0 ≤ t_1 ≤ t_2 …
Thus 1xk+ 1 x z1 - 2xk+t x z2 because of Lemma (2.3). Obviously (;;) is valid fork = 1 because (n,z1) - (n,z 2). Therefore (i) and (ii) hold for all k. In particular (ii) implies. 1xk- 2xk for all k > 1. Thus po( 1xk)-t =Po ( 2xk)-t for all k > 1• . k . 2 Now 1 x - - > 1 x as k + oo a.s. (or in t ) from the proof of Theorem (11.2.1); soP o (ixk)- 1 ---)Po (ix)- 1, i = 1,2, ask ... oo in the weak topology of Borel measures on C([-r,a]JRn}. Hence by uniqueness of limits, we must have P o ( 1 x)- 1 = P o ( 2x)-t on Borel C([-r,a]JRn). So by Lemma (2.2), {~Tt(n)}t€[O,a] = {t ~ 1xt} is isonomous to £2Tt(n)}t€[O,a] = {t--->. xt}. o The following the9rem is our second main result; it says that the Markov process given by trajectories of the autonomous stochastic FOE (IV) is in fact time-homogeneous in the sense that each transition probability p(t 1.n,t2,.), t 1 < t 2, n € C(JJRn) depends on t 2 - t 1 (and n) only. Theorem (2.1) (~me-homogeneity): Suppose that the autonomous stochastic FOE (IV) satisfies Hypotheses (A). For 0 < t 1 < t 2 O 1
= {
t • e: J
n( t •)
a.s. Now by property of Brownian motion, we know that the process u' ~> w(u'+t 1)-w(t 1) is isonomous to w (Elworthy [19], Corollary 3A, p. 19). So comparing the above stochastic FOE with our original one viz: n(O) + (·) nx(·)(t')
= {
n(t')
Jt'
G(n> 0,
… A ↦ ∫_Ω Φ(f^A(ω))dP(ω) ∈ ℝ. If

lim_{t→0+} ∫_Ω |f^A_t(ω) − f^A(ω)|²_E dP(ω) = 0  uniformly in A ∈ A,

then for every ε > 0,

lim_{t→0+} P(|f^A_t − f^A|_E > ε) = 0  uniformly in A ∈ A.

Moreover,

lim_{t→0+} ∫_Ω Φ(f^A_t(ω))dP(ω) = ∫_Ω Φ(f^A(ω))dP(ω)  uniformly in A ∈ A.

Proof: (i) Let ε > 0 be given. By uniform continuity of Φ, there is a δ' > 0 such that whenever ξ_1, ξ_2 ∈ E and |ξ_1 − ξ_2|_E < δ', then |Φ(ξ_1) − Φ(ξ_2)| < ε. … there is a δ > 0 such that if A, A' ∈ A and d(A,A') < δ, then

∫_Ω |f^A(ω) − f^{A'}(ω)|²_E dP(ω) < εδ'²,

where d is the metric on A. Therefore, if d(A,A') < δ, we get

P{ω ∈ Ω : |f^A(ω) − f^{A'}(ω)|_E > δ'} ≤ (1/δ'²) ∫_Ω |f^A − f^{A'}|²_E dP(ω) < ε

by Chebyshev's inequality.
Define the shifts S_t: C_b → C_b, t ≥ 0, by setting

S_t(Φ)(η) = Φ(η̃_t),  η ∈ C(J,ℝ^n), Φ ∈ C_b.

The next result then gives a canonical characterization for the strong continuity of {P_t}_{t>0} in terms of the shifts {S_t}_{t>0}.
Theorem (2.1): The shifts {S_t}_{t≥0} form a contraction semigroup on C_b such that, for each η ∈ C(J,ℝ^n), lim_{t→0+} S_t(Φ)(η) = lim_{t→0+} P_t(Φ)(η) = Φ(η) for all Φ ∈ C_b. Furthermore lim_{t→0+} P_t(Φ)(η) = Φ(η) uniformly in η ∈ C(J,ℝ^n) if and only if lim_{t→0+} S_t(Φ)(η) = Φ(η) uniformly in η ∈ C(J,ℝ^n).

Proof: Let t_1, t_2 ≥ 0, η ∈ C(J,ℝ^n), Φ ∈ C_b, s ∈ J. Then

(η̃_{t_1})_{t_2}(s) = (η̃_{t_1})(t_2 + s) = η(0),  t_2 + s ≥ 0;
(η̃_{t_1})_{t_2}(s) = (η̃_{t_1})(t_2 + s) = η̃_{t_1+t_2}(s),  −r ≤ t_2 + s < 0,

i.e. (η̃_{t_1})_{t_2} = η̃_{t_1+t_2}. Hence

S_{t_1}(S_{t_2}(Φ))(η) = Φ((η̃_{t_1})_{t_2}) = Φ(η̃_{t_1+t_2}) = S_{t_1+t_2}(Φ)(η).

Since lim_{t→0+} η̃_t = η, it is clear that lim_{t→0+} S_t(Φ)(η) = Φ(η) for each η ∈ C(J,ℝ^n), Φ ∈ C_b. Also by sample path continuity of the trajectory {^ηx_t: t ≥ 0} of (I) (Theorem (II.2.1)) together with the dominated convergence theorem, one obtains

lim_{t→0+} P_t(Φ)(η) = Φ(η)

for each Φ ∈ C_b and η ∈ C(J,ℝ^n).

To prove the second part of the theorem, suppose K > 0 is such that |H(η)| ≤ K and ‖G(η)‖ ≤ K for all η ∈ C(J,ℝ^n). Then for each t ≥ 0 and almost all ω ∈ Ω we have

^ηx_t(ω)(s) = η(0) + ∫_0^{t+s} H(^ηx_u(ω))du + (ω)∫_0^{t+s} G(^ηx_u(·))dw(·)(u),  t + s ≥ 0;
^ηx_t(ω)(s) = η(t + s),  −r ≤ t + s < 0.
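The shifted path η̃_t appearing in this proof is easy to implement, and the semigroup computation (η̃_{t_1})_{t_2} = η̃_{t_1+t_2} can be checked pointwise. The path and parameters below are illustrative choices of mine.

```python
import numpy as np

# The shifted path: for eta in C(J, R), J = [-r, 0],
# eta~_t(s) = eta(t + s) when t + s < 0, and eta(0) when t + s >= 0.
# We verify the composition rule (eta~_{t1})_{t2} = eta~_{t1 + t2}.

r = 1.0

def shift(eta, t):
    """Return the shifted path eta~_t as a function on J = [-r, 0]."""
    return lambda s: eta(0.0) if t + s >= 0.0 else eta(t + s)

eta = lambda s: np.sin(3.0 * s) + s ** 2      # a continuous path on J
t1, t2 = 0.25, 0.4

lhs = shift(shift(eta, t1), t2)               # (eta~_{t1})_{t2}
rhs = shift(eta, t1 + t2)                     # eta~_{t1 + t2}
gap = max(abs(lhs(s) - rhs(s)) for s in np.linspace(-r, 0.0, 201))
print(gap)
```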
Find η″ ∈ ∂B (where ∂B is the boundary of B) such that η″ lies on the line segment joining η and η', i.e. find λ_0 ∈ [0,1] such that η″ = (1−λ_0)η + λ_0η' and ‖η″‖ = 1. Define the function f: [0,1] → ℝ by

f(λ) = ‖(1−λ)η + λη'‖ − 1,  λ ∈ [0,1].

Then f is clearly continuous. Also f(0) = ‖η‖ − 1 < 0 and f(1) = ‖η'‖ − 1 > 0. Hence by the intermediate value theorem there exists λ_0 ∈ (0,1) such that f(λ_0) = 0; i.e. take η″ = (1−λ_0)η + λ_0η' and ‖η″‖ = 1. Hence

|ψ(η) − ψ(η')| ≤ |ψ(η) − ψ(η″)| + |ψ(η″) − ψ(η')| …
{Φ_t : t > 0} in C_b converges weakly to Φ ∈ C_b as t → 0+ if lim_{t→0+} ⟨Φ_t, μ⟩ = ⟨Φ, μ⟩ for all μ ∈ M(C(J,ℝ^n)). Write this as Φ = w-lim_{t→0+} Φ_t.

Proposition (3.1) (Dynkin [16], p. 50): For each t > 0 let Φ_t, Φ ∈ C_b. Then Φ = w-lim_{t→0+} Φ_t if and only if {‖Φ_t‖ : t > 0} is bounded and Φ_t(η) → Φ(η) as t → 0+ for each η ∈ C(J,ℝ^n).

Proof: For each η ∈ C(J,ℝ^n) let δ_η be the Dirac measure concentrated at η, defined by

δ_η(B) = 1, η ∈ B;  δ_η(B) = 0, η ∉ B,  for all B ∈ Borel C(J,ℝ^n).

Define … ∈ [M(C(J,ℝ^n))]* by …

… ξ^k(s) → ξ(s) as k → ∞ for all s ∈ J for some ξ ∈ C(J,ℝ^n) ⊕ F_n; then α̃(ξ^k) → α̃(ξ) as k → ∞. The extension map

e: C(J,ℝ^n)* → [C(J,ℝ^n) ⊕ F_n]*
is a linear isometry into.

Proof: We prove the lemma first for n = 1. Suppose α ∈ C(J,ℝ)*. By the Riesz representation theorem (Dunford and Schwartz [15], §IV.6.3, p. 265) there is a (unique) regular finite measure μ: Borel J → ℝ such that

α(η) = ∫_{-r}^0 η(s)dμ(s)  for all η ∈ C(J,ℝ).

Define α̃: C(J,ℝ) ⊕ F_1 → ℝ by

α̃(η + v_1χ_{{0}}) = α(η) + v_1μ({0}),  η ∈ C(J,ℝ), v_1 ∈ ℝ.

α̃ is clearly a continuous linear extension of α. Let {ξ^k}_{k=1}^∞ be a bounded sequence in C(J,ℝ) such that ξ^k(s) → η(s) + v_1χ_{{0}}(s) as k → ∞ for all s ∈ J, where η ∈ C(J,ℝ), v_1 ∈ ℝ. By the dominated convergence theorem, …
The map

e: C(J,ℝ)* → [C(J,ℝ) ⊕ F_1]*,  α ↦ α̃,

is clearly linear.

Higher dimensions n > 1 may be reduced to the 1-dimensional situation as follows. Write α ∈ C(J,ℝ^n)* in the form

α(η) = Σ_{i=1}^n α^i(η_i),

where η = (η_1,…,η_n) ∈ C(J,ℝ^n), η_i ∈ C(J,ℝ), 1 ≤ i ≤ n, and α^i(η*) = α(0,0,…,0,η*,0,…,0) with η* ∈ C(J,ℝ) occupying the i-th place. Hence α^i ∈ C(J,ℝ)* for 1 ≤ i ≤ n. Write F_n = F_1^n, where F_1 = {v*χ_{{0}}: v* ∈ ℝ}, by taking vχ_{{0}} = (v_1χ_{{0}},…,v_nχ_{{0}}) for each v = (v_1,…,v_n) ∈ ℝ^n, v_i ∈ ℝ, 1 ≤ i ≤ n. Let α̃^i ∈ [C(J,ℝ) ⊕ F_1]* be the extension of α^i described before and satisfying (w_1). It is easy to see that

C(J,ℝ^n) ⊕ F_n = [C(J,ℝ) ⊕ F_1] × … × [C(J,ℝ) ⊕ F_1]  (n copies),

i.e. η + vχ_{{0}} = (η_1 + v_1χ_{{0}},…,η_n + v_nχ_{{0}}). Define α̃ ∈ [C(J,ℝ^n) ⊕ F_n]* by

α̃(η + vχ_{{0}}) = Σ_{i=1}^n α̃^i(η_i + v_iχ_{{0}})

when η = (η_1,…,η_n), v = (v_1,…,v_n). Since each α̃^i is a continuous linear extension of α^i, then α̃ is a continuous linear extension of α. Let {ζ^k}_{k=1}^∞ be bounded in C(J,ℝ^n) such that ζ^k(s) → ζ(s) as k → ∞ for all s ∈ J, where ζ ∈ C(J,ℝ^n) ⊕ F_n. Let ζ^k = (ζ^k_1,…,ζ^k_n), ζ = (ζ_1,…,ζ_n), ζ^k_i ∈ C(J,ℝ), ζ_i ∈ C(J,ℝ) ⊕ F_1, 1 ≤ i ≤ n. Hence {ζ^k_i}_{k=1}^∞ is bounded in C(J,ℝ) and
ζ^k_i(s) → ζ_i(s) as k → ∞, s ∈ J, 1 ≤ i ≤ n. Therefore

lim_{k→∞} α̃(ζ^k) = lim_{k→∞} Σ_{i=1}^n α̃^i(ζ^k_i) = Σ_{i=1}^n α̃^i(ζ_i) = α̃(ζ).

So α̃ satisfies (w_1). To prove uniqueness let ᾰ ∈ [C(J,ℝ^n) ⊕ F_n]* be any continuous linear extension of α satisfying (w_1). For any vχ_{{0}} ∈ F_n choose a bounded sequence {ζ^k_v}_{k=1}^∞ in C(J,ℝ^n) such that ζ^k_v(s) → vχ_{{0}}(s) as k → ∞ for all s ∈ J; e.g. take

ζ^k_v(s) = (ks + 1)v,  −1/k ≤ s ≤ 0;  ζ^k_v(s) = 0,  −r ≤ s < −1/k.

Note that ‖ζ^k_v‖ = |v| for all k ≥ 1. Also by (w_1) one has … for all η ∈ C(J,ℝ^n). Thus ᾰ = α̃. Since the extension map e is linear in the one-dimensional case, it follows from the representation of α̃ in terms of the α̃^i that

e: C(J,ℝ^n)* → [C(J,ℝ^n) ⊕ F_n]*,  α ↦ α̃,
is also linear. But α̃ is an extension of α, so ‖α̃‖ ≥ ‖α‖. Conversely, let ζ = η + vχ_{{0}} ∈ C(J,ℝ^n) ⊕ F_n and construct {ζ^k_v}_{k=1}^∞ in C(J,ℝ^n) as above. Then

α̃(ζ) = lim_{k→∞} α(η + ζ^k_v).

But

|α(η + ζ^k_v)| ≤ ‖α‖ ‖η + ζ^k_v‖ ≤ ‖α‖ [‖η‖ + ‖ζ^k_v‖] = ‖α‖ [‖η‖ + |v|] = ‖α‖ ‖ζ‖  for all k ≥ 1.

Hence

|α̃(ζ)| = lim_{k→∞} |α(η + ζ^k_v)| ≤ ‖α‖ ‖ζ‖  for every ζ ∈ C(J,ℝ^n) ⊕ F_n.

Thus ‖α̃‖ ≤ ‖α‖. So ‖α̃‖ = ‖α‖ and e is an isometry into. □
Lemma (3.2): Let β: C(J,ℝ^n) × C(J,ℝ^n) → ℝ be a continuous bilinear map. Then β has a unique (continuous) bilinear extension β̃: [C(J,ℝ^n) ⊕ F_n] × [C(J,ℝ^n) ⊕ F_n] → ℝ satisfying the weak continuity property:

(w_2) if {ξ^k}_{k=1}^∞, {η^k}_{k=1}^∞ are bounded sequences in C(J,ℝ^n) such that ξ^k(s) → ξ*(s), η^k(s) → η*(s) as k → ∞, for all s ∈ J, for some ξ*, η* ∈ C(J,ℝ^n) ⊕ F_n, then β̃(ξ^k,η^k) → β̃(ξ*,η*) as k → ∞.

Proof: Here we also deal first with the 1-dimensional case. Write the continuous bilinear map β: C(J,ℝ) × C(J,ℝ) → ℝ as a continuous linear map β: C(J,ℝ) → C(J,ℝ)*. Since C(J,ℝ)* is weakly complete (Dunford and Schwartz [15], §IV.13.22, p. 341), β is weakly compact as a continuous linear map C(J,ℝ) → C(J,ℝ)* (Theorem (I.4.2); Dunford and Schwartz [15], §VI.7.6, p. 494). Hence there is a unique measure Λ: Borel J → C(J,ℝ)* (of finite semi-variation ‖Λ‖(J) < ∞) such that

β(ξ) = ∫_{-r}^0 ξ(s)dΛ(s)  for all ξ ∈ C(J,ℝ)

(Theorem (I.4.1)). Using a similar argument to that used in the proof of Lemma (3.1), the above integral representation of β implies the existence of a unique continuous linear extension β̂: C(J,ℝ) ⊕ F_1 → C(J,ℝ)* satisfying (w_1). To prove this, one needs the dominated convergence theorem for vector-valued measures (Dunford and Schwartz [15], §IV.10.10, p. 328; cf. Theorem (I.3.1)(iv)). Define β̃: C(J,ℝ) ⊕ F_1 → [C(J,ℝ) ⊕ F_1]* by β̃ = e ∘ β̂, where e is the extension isometry of Lemma (3.1).

Clearly β̃ gives a continuous bilinear extension of β to [C(J,ℝ) ⊕ F_1] × [C(J,ℝ) ⊕ F_1]. To prove that β̃ satisfies (w_2), let {ξ^k}_{k=1}^∞, {η^k}_{k=1}^∞ be bounded sequences in C(J,ℝ) such that ξ^k(s) → ξ*(s), η^k(s) → η*(s) as k → ∞, for all s ∈ J, for some ξ*, η* ∈ C(J,ℝ) ⊕ F_1. By (w_1) for β̂ we get β̂(ξ*) = lim_{k→∞} β̂(ξ^k). But by (w_1) for β̃(ξ*) we have

lim_{k→∞} |β̃(ξ*)(η^k) − β̃(ξ*)(η*)| = 0.

Since {‖η^k‖}_{k=1}^∞ is bounded, it follows from the last inequality that lim_{k→∞} β̃(ξ^k)(η^k) = β̃(ξ*)(η*).

When n > 1, we use coordinates as in Lemma (3.1) to reduce to the 1-dimensional case. Indeed, write any continuous bilinear map β: C(J,ℝ^n) × C(J,ℝ^n) → ℝ as the sum of continuous bilinear maps C(J,ℝ) × C(J,ℝ) → ℝ in the following way:

β((ξ_1,…,ξ_n),(η_1,…,η_n)) = Σ_{i,j=1}^n β^{ij}(ξ_i,η_j),

where (ξ_1,…,ξ_n), (η_1,…,η_n) ∈ C(J,ℝ^n), ξ_i, η_i ∈ C(J,ℝ), 1 ≤ i ≤ n, and β^{ij}: C(J,ℝ) × C(J,ℝ) → ℝ is the continuous bilinear map

β^{ij}(ξ',η') = β((0,0,…,0,ξ',0,…,0),(0,0,…,0,η',0,…,0)),

for ξ', η' ∈ C(J,ℝ) occupying the i-th and j-th places respectively, 1 ≤ i, j ≤ n.
Hence

…   (3)

and

…   (4)

Now by the main existence theorem (Theorem (II.2.1)), the map [0,a] ∋ u ↦ ^ηx_u ∈ ℒ²(Ω,C(J,ℝ^n)) is continuous; so lim_{u→0+} E‖^ηx_u − η‖² = 0. Therefore the last two inequalities (3) and (4) imply that {E|H(^ηx_u)|² : u ∈ [0,a]} is bounded and lim_{u→0+} E‖G(^ηx_u) − G(η)‖² = 0. Letting t → 0+ in (2) yields (1).

Since β is bilinear,

(1/t)β(^ηx_t − η̃_t, ^ηx_t − η̃_t) − β(G(η)∘w̃_t, G(η)∘w̃_t)
= β((1/√t)(^ηx_t − η̃_t) − G(η)∘w̃_t, (1/√t)(^ηx_t − η̃_t) − G(η)∘w̃_t)
+ β((1/√t)(^ηx_t − η̃_t) − G(η)∘w̃_t, G(η)∘w̃_t)
+ β(G(η)∘w̃_t, (1/√t)(^ηx_t − η̃_t) − G(η)∘w̃_t).

Thus, by continuity of β and Hölder's inequality, one gets

|(1/t)Eβ(^ηx_t − η̃_t, ^ηx_t − η̃_t) − Eβ(G(η)∘w̃_t, G(η)∘w̃_t)|
≤ ‖β‖ E‖(1/√t)(^ηx_t − η̃_t) − G(η)∘w̃_t‖²
+ 2‖β‖ [E‖(1/√t)(^ηx_t − η̃_t) − G(η)∘w̃_t‖²]^{1/2} [E‖G(η)∘w̃_t‖²]^{1/2}   (5)

for all t > 0. But

E‖G(η)∘w̃_t‖² ≤ (1/t) E( sup_{s∈[-t,0]} |w(t+s) − w(0)|² ) ‖G(η)‖²   (6)

for all t > 0. Combining (6) and (5) and letting t → 0+ gives the required result. □
Lemma (3.5): Let i_n: ℝ^n → F_n be the isomorphism i_n(v) = vχ_{{0}}, v ∈ ℝ^n, and let G(η) × G(η) denote the linear map

(v_1,v_2) ↦ (G(η)(v_1), G(η)(v_2)).

Then for any continuous bilinear form β on C(J,ℝ^n),

lim_{t→0+} (1/t) Eβ(^ηx_t − η̃_t, ^ηx_t − η̃_t) = trace [β̃ ∘ (i_n × i_n) ∘ (G(η) × G(η))]

for each η ∈ C(J,ℝ^n), where β̃ is the continuous bilinear extension of β to C(J,ℝ^n) ⊕ F_n (Lemma (3.2)). Indeed, if {e_j}_{j=1}^m is any basis for ℝ^m, then

lim_{t→0+} (1/t) Eβ(^ηx_t − η̃_t, ^ηx_t − η̃_t) = Σ_{j=1}^m β̃(G(η)(e_j)χ_{{0}}, G(η)(e_j)χ_{{0}}).

Proof:

In view of Lemma (3.4) it is sufficient to prove that

lim_{t→0+} Eβ̃(A∘w̃_t, A∘w̃_t) = Σ_{j=1}^m β̃(A(e_j)χ_{{0}}, A(e_j)χ_{{0}})   (7)

for any A ∈ L(ℝ^m,ℝ^n). We deal first with the case m = n = 1, viz. we show that

lim_{t→0+} Eβ̃(w̃_t, w̃_t) = β̃(χ_{{0}}, χ_{{0}})   (7)'
for one-dimensional Brownian motion w. If ξ, η ∈ C = C(J,ℝ), let ξ ⊗ η stand for the function J × J → ℝ defined by (ξ ⊗ η)(s,s') = ξ(s)η(s') for all s, s' ∈ J. The projective tensor product C ⊗_π C is the vector space of all functions of the form Σ_{i=1}^N ξ_i ⊗ η_i, where ξ_i, η_i ∈ C, i = 1,2,…,N. It carries the norm

‖h‖_π = inf { Σ_{i=1}^N ‖ξ_i‖ ‖η_i‖ : h = Σ_{i=1}^N ξ_i ⊗ η_i, ξ_i, η_i ∈ C, i = 1,2,…,N }.

The infimum is taken over all possible finite representations of h ∈ C ⊗_π C. Denote by C ⊗̂_π C the completion of C ⊗_π C under the above norm. It is well known that C ⊗̂_π C is continuously and densely embedded in C(J × J,ℝ), the Banach space of all continuous functions J × J → ℝ under the supremum norm (Treves [75], pp. 403–410; §I.4). Since C is a separable Banach space, so is C ⊗̂_π C. For let Y ⊂ C be a countable dense subset of C. Then the countable set

Y ⊗ Y = { Σ_{i=1}^N ξ_i ⊗ η_i : ξ_i, η_i ∈ Y, i = 1,…,N, N = 1,2,… }

is dense in C ⊗_π C and hence in C ⊗̂_π C.

The continuous bilinear form β̃ on C corresponds to a continuous linear functional B̃ ∈ [C ⊗̂_π C]* (Treves [75], pp. 434–445; cf. Theorem (I.4.3)). Now let ξ_1, ξ_2 ∈ ℒ²(Ω,C). The map

C × C → C ⊗̂_π C,  (ξ,η) ↦ ξ ⊗ η,

is clearly continuous bilinear. Thus

ξ_1(·) ⊗ ξ_2(·): Ω → C ⊗̂_π C,  ω ↦ ξ_1(ω) ⊗ ξ_2(ω),

is Borel measurable. But ‖ξ_1(ω) ⊗ ξ_2(ω)‖_π ≤ ‖ξ_1(ω)‖ ‖ξ_2(ω)‖ for almost all ω ∈ Ω; hence by Hölder's inequality the integral ∫_Ω ‖ξ_1(ω) ⊗ ξ_2(ω)‖_π dP(ω) exists. From the separability of C ⊗̂_π C the Bochner integral (§I.2)

E(ξ_1(·) ⊗ ξ_2(·)) = ∫_Ω ξ_1(ω) ⊗ ξ_2(ω)dP(ω)

exists in C ⊗̂_π C. Furthermore, it commutes with the continuous linear functional B̃.
Fix 0 < t < r and consider

(1/t) E[w(·)(t+s) − w(·)(0)][w(·)(t+s') − w(·)(0)] = 1 + (1/t) min(s,s'),  s, s' ∈ [-t,0];
(1/t) E[w(·)(t+s) − w(·)(0)][w(·)(t+s') − w(·)(0)] = 0,  s ∈ [-r,-t) or s' ∈ [-r,-t).   (9)

Define K_t: J × J → ℝ by letting

K_t(s,s') = [1 + (1/t) min(s,s')] χ_{[-t,0]}(s) χ_{[-t,0]}(s'),  s, s' ∈ J,   (10)

i.e. K_t = E[w̃_t(·) ⊗ w̃_t(·)]. Since w̃_t ∈ ℒ²(Ω,C), it is clear from (8) that K_t ∈ C ⊗̂_π C and

Eβ̃(w̃_t(·), w̃_t(·)) = B̃(K_t).   (11)
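The covariance computation behind (9)–(10) reduces to the Brownian identity E[(w(t+s) − w(0))(w(t+s') − w(0))] = min(t+s, t+s'). The sketch below (illustrative) checks the algebraic identity on a grid and confirms it empirically at one pair of time points.

```python
import numpy as np

# For s, s' in [-t, 0]: min(t+s, t+s') = t * (1 + min(s, s')/t) = t * K_t(s, s').

t = 0.4
s_vals = np.linspace(-t, 0.0, 9)
for s in s_vals:
    for sp in s_vals:
        exact = min(t + s, t + sp)             # Brownian covariance
        kernel = t * (1.0 + min(s, sp) / t)    # t * K_t(s, s')
        assert abs(exact - kernel) < 1e-12

# Monte-Carlo confirmation at (s, s') = (-0.1, -0.3), i.e. times 0.3 and 0.1:
rng = np.random.default_rng(2)
n = 200_000
w_a = rng.normal(0.0, np.sqrt(0.1), n)         # w(0.1) - w(0)
w_b = w_a + rng.normal(0.0, np.sqrt(0.2), n)   # w(0.3) - w(0), via increment
print(np.mean(w_a * w_b))                      # ~ min(0.3, 0.1) = 0.1
```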
In order to calculate lim_{t→0+} B̃(K_t), we shall obtain a series expansion of K_t. We appeal to the following classical technique. Note that K_t is continuous; so we can consider the eigenvalue problem

∫_{-r}^0 K_t(s,s')ξ(s')ds' = λξ(s),  s ∈ J.   (12)

Since the kernel K_t is symmetric, all eigenvalues λ of (12) are real. By (10), rewrite (12) in the form

∫_{-t}^0 ξ(s')ds' + (1/t)∫_{-t}^0 min(s,s')ξ(s')ds' = λξ(s),  s ∈ [-t,0],  (i)
0 = λξ(s),  s ∈ [-r,-t);  (ii)   (13)

that is, from (13)(i),

∫_{-t}^0 ξ(s')ds' + (1/t)[∫_{-t}^s s'ξ(s')ds' + s∫_s^0 ξ(s')ds'] = λξ(s),  s ∈ [-t,0].   (14)

Differentiate (14) with respect to s, keeping t fixed, to obtain

∫_s^0 ξ(s')ds' = λt ξ'(s),  s ∈ [-t,0].   (15)

Differentiating once more,

−ξ(s) = λt ξ''(s),  s ∈ [-t,0].   (16)

When λ = 0, choose ξ_0: J → ℝ to be any continuous function such that ξ_0(s) = 0 for all s ∈ [-t,0], normalized by ∫_{-r}^{-t} ξ_0(s)²ds = 1.

Suppose λ ≠ 0. Then (13)(ii) implies that

ξ(s) = 0 for all s ∈ [-r,-t).   (17)

In (14) put s = −t to get

ξ(−t) = 0.   (18)

In (15) put s = 0 to obtain

ξ'(0) = 0.   (19)

Hence for λ ≠ 0, (12) is equivalent to the differential equation (16) coupled with the conditions (17), (18) and (19). Now solutions of this are given by
A,et-h--~is
+
A2e-t-h-~is • s E [-t,O]
E;(s) = {
(20) s E [-r,-t), i = r-T
0
Condition (19) implies immediately that A1 = A2 = 1, say. e
-t-~A-~it
+
e
t-~A-~it
From (18) one gets
= 0.
Since the real exponential function has no zeros, it follows that ).~ cannot 91
he
imaginary i.e. A > 0.
Being a covariance function, each kernel K_t is non-negative definite in the sense that
$$\int_J\int_J K_t(s,s')\,\xi(s)\,\xi(s')\,ds\,ds' \ge 0 \quad \text{for all } \xi\in C.$$
Using (18), we get the eigenvalues of (12) as solutions of the equation
$$\xi(-t) = 2\cos\bigl(t^{1/2}\lambda^{-1/2}\bigr) = 0.$$
Therefore the eigenvalues of (12) are given by
$$\lambda_k = \frac{4t}{\pi^2(2k+1)^2}, \qquad k = 0,1,2,3,\dots \qquad (21)$$
and the corresponding eigenfunctions, after being normalized through the condition $\int_J \xi_k^t(s)^2\,ds = 1$, $k = 0,1,2,\dots$, by
$$\xi_k^t(s) = \Bigl(\frac{2}{t}\Bigr)^{1/2}\chi_{[-t,0]}(s)\,\cos\Bigl[\frac{(2k+1)\pi s}{2t}\Bigr], \qquad s\in J,\ k = 0,1,2,\dots \qquad (22)$$
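The eigenvalues (21) can be checked numerically by discretizing the integral operator (12). The following sketch (an illustrative addition, not part of the original text; the value of t, the grid size, and the iteration count are arbitrary choices) recovers the largest eigenvalue λ₀ = 4t/π² by power iteration on a midpoint-rule discretization of the kernel K_t(s,s') = 1 + min(s,s')/t on [−t,0].

```python
import math

# Nystrom (midpoint-rule) discretization of the integral operator with
# kernel K_t(s, s') = 1 + min(s, s')/t on [-t, 0]; t and n are illustrative.
t = 0.5
n = 200
h = t / n
grid = [-t + (i + 0.5) * h for i in range(n)]

def apply_K(v):
    # (Kv)(s_i) = h * sum_j K_t(s_i, s_j) * v_j
    return [h * sum((1.0 + min(si, sj) / t) * vj for sj, vj in zip(grid, v))
            for si in grid]

v = [1.0] * n
for _ in range(50):                      # power iteration for lambda_0
    w = apply_K(v)
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
lam0 = sum(vi * wi for vi, wi in zip(v, apply_K(v)))
print(lam0, 4 * t / math.pi ** 2)        # both approximately 4t / pi^2
```

The ratio λ₁/λ₀ = 1/9 makes the power iteration converge quickly, so a modest number of iterations suffices.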
Now, by Mercer's theorem (Courant and Hilbert [10] p. 138, Riesz and Sz.-Nagy [68] p. 245), the continuous non-negative definite kernel K_t can be expanded as a uniformly and absolutely convergent series
$$K_t(s,s') = \sum_{k=0}^{\infty}\lambda_k\,\xi_k^t(s)\,\xi_k^t(s'), \qquad s,s'\in J \qquad (23)$$
$$= \begin{cases}\displaystyle\sum_{k=0}^{\infty}\frac{8}{\pi^2(2k+1)^2}\cos\Bigl[\frac{(2k+1)\pi s}{2t}\Bigr]\cos\Bigl[\frac{(2k+1)\pi s'}{2t}\Bigr], & s,s'\in[-t,0]\\[4pt] 0, & s\in[-r,-t)\ \text{or}\ s'\in[-r,-t).\end{cases} \qquad (24)$$
But from the definition of K_t, one has K_t(0,0) = 1 for every t > 0. Thus, putting s = s' = 0 in (24), we obtain
$$\sum_{k=0}^{\infty}\frac{8}{\pi^2(2k+1)^2} = 1. \qquad (25)$$
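As a quick numerical illustration (an addition for this edition, not in the original text), the series in (25) can be summed directly: since $\sum_{k\ge 0}1/(2k+1)^2 = \pi^2/8$, the partial sums approach 1.

```python
import math

# Partial sum of sum_{k>=0} 8 / (pi^2 (2k+1)^2); the full series equals 1,
# which is identity (25), i.e. K_t(0,0) = 1.
s = sum(8.0 / (math.pi ** 2 * (2 * k + 1) ** 2) for k in range(100000))
print(s)  # just under 1; the tail of the truncated series is O(1/N)
```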
From the absolute and uniform convergence of (24), it is easy to see that K_t can be expressed in the form
$$K_t = \sum_{k=0}^{\infty}\frac{8}{\pi^2(2k+1)^2}\,\tilde\xi_k^t\otimes\tilde\xi_k^t, \qquad (26)$$
where
$$\tilde\xi_k^t(s) = \cos\Bigl[\frac{(2k+1)\pi s}{2t}\Bigr]\,\chi_{[-t,0]}(s), \qquad s\in J.$$
Note that the series (26) converges (absolutely) in the projective tensor product norm on $C\hat\otimes C$. Hence we can apply $\tilde\beta$ to (26), getting from (11) the equality
$$\frac1t\,E\,\beta(w_t(\cdot),w_t(\cdot)) = \tilde\beta(K_t) = \sum_{k=0}^\infty\frac{8}{\pi^2(2k+1)^2}\,\tilde\beta\bigl(\tilde\xi_k^t\otimes\tilde\xi_k^t\bigr) = \sum_{k=0}^\infty\frac{8}{\pi^2(2k+1)^2}\,\beta\bigl(\tilde\xi_k^t,\tilde\xi_k^t\bigr). \qquad (27)$$
But $\|\tilde\xi_k^t\| \le 1$ for all k ≥ 0 and all 0 < t < r; so the series (27) is uniformly convergent in t, by comparison with the convergent series $\sum_{k=0}^\infty \frac{8}{\pi^2(2k+1)^2}$. Moreover, for each s ∈ J, $\tilde\xi_k^t(s)\to\chi_{\{0\}}(s)$ as t → 0+, k = 0,1,2,…. Thus if we let t → 0+ in (27), we obtain
$$\lim_{t\to 0+}\frac1t\,E\,\beta(w_t(\cdot),w_t(\cdot)) = \sum_{k=0}^\infty\frac{8}{\pi^2(2k+1)^2}\,\beta\bigl(\chi_{\{0\}},\chi_{\{0\}}\bigr) = \beta\bigl(\chi_{\{0\}},\chi_{\{0\}}\bigr),$$
using (25) and Lemma 2. This proves (7)'. For dimensions n > 1, write $\beta: C(J,\mathbf R^n)\times C(J,\mathbf R^n)\to\mathbf R$ in the form
$$\beta(\eta_1,\eta_2) = \sum_{i,j=1}^{n}\beta^{ij}(\eta_1^i,\eta_2^j),$$
where η₁ = (η₁¹,…,η₁ⁿ), η₂ = (η₂¹,…,η₂ⁿ) and each $\beta^{ij}: C(J,\mathbf R)\times C(J,\mathbf R)\to\mathbf R$ is continuous bilinear. Let A ∈ L(R^m, R^n) and let $\{e_k\}_{k=1}^m$, $\{e'_i\}_{i=1}^n$ be the canonical bases for R^m and R^n, respectively.
Write m-dimensional Brownian motion w in the form w = (w¹, w², …, w^m), where w^k(t) = ⟨w(t), e_k⟩, k = 1,…,m, are independent one-dimensional Brownian motions. Then letting t → 0+ and using (7)' gives
$$\lim_{t\to 0+}\frac1t\,E\,\beta\bigl(A\circ w_t,\,A\circ w_t\bigr) = \sum_{k=1}^{m}\sum_{i,j=1}^{n}\beta^{ij}\bigl((Ae_k)^i\chi_{\{0\}},\,(Ae_k)^j\chi_{\{0\}}\bigr) = \sum_{k=1}^{m}\beta\bigl((Ae_k)\chi_{\{0\}},\,(Ae_k)\chi_{\{0\}}\bigr).$$
To obtain the final statement of the lemma, take A = G(η) and note that the last trace term is independent of the choice of basis in R^m. □

Let $\mathcal D(S)\subset C_b^s$ be the domain of the weak generator S of the shift semigroup $\{S_t\}_{t\ge 0}$ of §2. We can now state our main theorem, which basically says that if φ ∈ D(S) is sufficiently smooth, then it is automatically in D(A); furthermore, A equals S plus a second-order partial differential operator on C(J,R^n) taken along the canonical direction F_n. The following conditions on a function φ: C(J,R^n) → R are needed.

Conditions (DA):
(i) φ ∈ D(S);
(ii) φ is C²;
(iii) Dφ, D²φ are globally bounded;
(iv) D²φ is globally Lipschitz on C(J,R^n).

Theorem (3.2): Suppose φ: C(J,R^n) → R satisfies Conditions (DA). Then
φ ∈ D(A) and for each η ∈ C(J,R^n),
$$A(\varphi)(\eta) = S(\varphi)(\eta) + \bigl(\overline{D\varphi(\eta)}\circ i_n\bigr)(H(\eta)) + \tfrac12\,\mathrm{trace}\bigl[\overline{D^2\varphi(\eta)}\circ(i_n\times i_n)\circ(G(\eta)\times G(\eta))\bigr],$$
where $\overline{D\varphi(\eta)}$, $\overline{D^2\varphi(\eta)}$ denote the canonical weakly continuous extensions of Dφ(η) and D²φ(η) to $C(J,\mathbf R^n)\oplus F_n$, and $i_n:\mathbf R^n\to F_n$ is the natural identification $v\mapsto v\chi_{\{0\}}$. Indeed, if $\{e_j\}_{j=1}^m$ is any basis for R^m, then
$$A(\varphi)(\eta) = S(\varphi)(\eta) + \overline{D\varphi(\eta)}\bigl(H(\eta)\chi_{\{0\}}\bigr) + \tfrac12\sum_{j=1}^{m}\overline{D^2\varphi(\eta)}\bigl(G(\eta)(e_j)\chi_{\{0\}},\,G(\eta)(e_j)\chi_{\{0\}}\bigr).$$

Proof: Fix η ∈ C(J,R^n) and let $^\eta x$ be the solution of the SFDE (I) through η. Suppose φ satisfies (DA). Since φ is C², by Taylor's Theorem (Lang [47]) we have
$$\varphi({}^\eta x_t) = \varphi(\tilde\eta_t) + D\varphi(\tilde\eta_t)({}^\eta x_t-\tilde\eta_t) + R_2(t), \quad \text{where} \quad R_2(t) = \int_0^1(1-u)\,D^2\varphi\bigl(\tilde\eta_t+u({}^\eta x_t-\tilde\eta_t)\bigr)\bigl({}^\eta x_t-\tilde\eta_t,\,{}^\eta x_t-\tilde\eta_t\bigr)\,du. \qquad (28)$$
Taking expectations, we obtain
$$\frac1t\,E[\varphi({}^\eta x_t)-\varphi(\eta)] = \frac1t\,[\varphi(\tilde\eta_t)-\varphi(\eta)] + \frac1t\,E\,D\varphi(\tilde\eta_t)({}^\eta x_t-\tilde\eta_t) + \frac1t\,E\,R_2(t). \qquad (29)$$
Since φ ∈ D(S), then
$$\lim_{t\to 0+}\frac1t\,[\varphi(\tilde\eta_t)-\varphi(\eta)] = S(\varphi)(\eta). \qquad (30)$$
In order to calculate $\lim_{t\to 0+}\frac1t\,E[\varphi({}^\eta x_t)-\varphi(\eta)]$, one needs to work out the following two limits:
$$\lim_{t\to 0+}\frac1t\,E\,D\varphi(\tilde\eta_t)({}^\eta x_t-\tilde\eta_t), \qquad (31)$$
$$\lim_{t\to 0+}\frac1t\,E\,R_2(t). \qquad (32)$$
We start by considering (31). From Lemma (3.3), there exists a K > 0 such that
Hence, letting t → 0+ and using the continuity of Dφ at η, we obtain
$$\lim_{t\to 0+}\frac1t\,E\,D\varphi(\tilde\eta_t)({}^\eta x_t-\tilde\eta_t) = \lim_{t\to 0+}\frac1t\,E\,D\varphi(\eta)({}^\eta x_t-\tilde\eta_t) = \overline{D\varphi(\eta)}\bigl(H(\eta)\chi_{\{0\}}\bigr) \qquad (33)$$
by Lemma (3.3). Secondly, we look at the limit (32). Observe that if K is a bound for H and G on C(J,R^n) and 0 < t < r, then
$$E\,\|{}^\eta x_t-\tilde\eta_t\|^4 \le 8K^4t^4 + 8K^2t\int_0^t E\|G({}^\eta x_u)\|^4\,du \le \tilde K(t^4+t^2), \qquad \text{some } \tilde K > 0, \qquad (34)$$
where we have used Theorem (1.8.5) for the Itô integral. Furthermore, if u ∈ [0,1] and 0 < t < r, then
$$\Bigl|E\,D^2\varphi\bigl(\tilde\eta_t+u({}^\eta x_t-\tilde\eta_t)\bigr)\bigl({}^\eta x_t-\tilde\eta_t,\,{}^\eta x_t-\tilde\eta_t\bigr) - E\,D^2\varphi(\tilde\eta_t)\bigl({}^\eta x_t-\tilde\eta_t,\,{}^\eta x_t-\tilde\eta_t\bigr)\Bigr|$$
$$\le 2L^2\|\tilde\eta_t-\eta\|^2 + 2L^2\bigl[E\|{}^\eta x_t-\tilde\eta_t\|^4\bigr]^{1/2} \le 2L^2\|\tilde\eta_t-\eta\|^2 + 2L^2\tilde K^{1/2}\,t\,(t^2+1)^{1/2} \qquad (35)$$
because of the inequality (34).
Letting t → 0+ in (35) and (36), we obtain
$$\lim_{t\to 0+}\frac1t\,E\,D^2\varphi\bigl(\tilde\eta_t+u({}^\eta x_t-\tilde\eta_t)\bigr)\bigl({}^\eta x_t-\tilde\eta_t,\,{}^\eta x_t-\tilde\eta_t\bigr) = \sum_{j=1}^{m}\overline{D^2\varphi(\eta)}\bigl(G(\eta)(e_j)\chi_{\{0\}},\,G(\eta)(e_j)\chi_{\{0\}}\bigr) \qquad (37)$$
uniformly in u ∈ [0,1], and hence
$$\lim_{t\to 0+}\frac1t\,E\,R_2(t) = \frac12\sum_{j=1}^{m}\overline{D^2\varphi(\eta)}\bigl(G(\eta)(e_j)\chi_{\{0\}},\,G(\eta)(e_j)\chi_{\{0\}}\bigr). \qquad (38)$$
Since φ ∈ D(S) and has its first and second derivatives globally bounded on C(J,R^n), it is easy to see that all three terms on the right-hand side of (29) are bounded in t and η. The statement of the theorem now follows by letting t → 0+ in (29) and putting together the results of (30), (33) and (38). □
It will become evident in the sequel that the set of all functions satisfying Conditions (DA) is weakly dense in C_b. Indeed, within the next section we exhibit a concrete weakly dense class of functions in C_b satisfying (DA), on which the generator A assumes a definite form.

§4. Action of the Generator on Quasi-tame Functions

The reader may recall that in the previous section we gave the algebra C_b of all bounded uniformly continuous functions on C(J,R^n) the weak topology induced by the bilinear pairing
$$(\varphi,\mu)\ \mapsto\ \int_{C(J,\mathbf R^n)}\varphi(\eta)\,d\mu(\eta),$$
where φ ∈ C_b and μ runs through all finite regular Borel measures on C(J,R^n). Moreover, the domain of strong continuity $C_b^s$ of $\{P_t\}_{t\ge 0}$ is a weakly dense proper
subalgebra of C_b. Our aim here is to construct a concrete class T_q of smooth functions on C(J,R^n), viz. the quasi-tame functions, with the following properties:
(i) T_q is a subalgebra of $C_b^s$ which is weakly dense in C_b;
(ii) T_q generates Borel C(J,R^n);
(iii) T_q ⊂ D(A), the domain of the weak generator A of $\{P_t\}_{t\ge 0}$;
(iv) for each φ ∈ T_q and η ∈ C(J,R^n), A(φ)(η) is a second-order partial differential expression with coefficients depending on η.
Before doing so, let us first formulate what we mean by a tame function. A mapping between two Banach spaces is said to be C^p-bounded (1 ≤ p ≤ ∞) if it is bounded, C^p and all its derivatives up to order p are globally bounded; e.g. Conditions (DA) imply C²-boundedness, and C³-boundedness implies (DA)(ii), (iii), (iv).

Definition (4.1) (Tame Function): A function φ: C(J,R^n) → R is said to be tame if there is a finite set {s₁, s₂, …, s_k} ⊂ J and a C^∞-bounded function f: (R^n)^k → R such that
$$\varphi(\eta) = f\bigl(\eta(s_1),\dots,\eta(s_k)\bigr) \quad \text{for all } \eta\in C(J,\mathbf R^n). \qquad (*)$$
The above representation of φ is called minimal if for any projection p: (R^n)^k → (R^n)^{k−1} there is no function g: (R^n)^{k−1} → R with f = g∘p; in other words, no partial derivative D_j f: (R^n)^k → L(R^n, R), j = 1,…,k, of f vanishes identically. Note that each tame function admits a unique minimal representation. Although the set T of all tame functions on C(J,R^n) is weakly dense in C_b and generates Borel C(J,R^n), it is still not good enough for our purposes, because 'most' tame functions tend to lie outside $C_b^s$ (and hence are automatically not in D(A)). In fact we have

Theorem (4.1): (i) The set T of all tame functions on C(J,R^n) is a weakly dense subalgebra of C_b, invariant under the shift semigroup $\{S_t\}_{t\ge 0}$ and generating Borel C(J,R^n).
(ii) Let φ ∈ T have a minimal representation
$$\varphi(\eta) = f\bigl(\eta(s_1),\dots,\eta(s_k)\bigr), \qquad \eta\in C(J,\mathbf R^n),$$
where k ≥ 2. Then φ ∉ $C_b^s$.

Proof: For simplicity we deal with the case n = 1 throughout.
(i) It is easy to see that T is closed under linear operations. We prove the closure of T under multiplication. Let φ₁, φ₂ ∈ T be represented by φ₁(η) = f₁(η(s₁),…,η(s_k)) and φ₂(η) = f₂(η(s'₁),…,η(s'_m)) for all η ∈ C(J,R), where f₁: R^k → R and f₂: R^m → R are C^∞-bounded functions and s₁,…,s_k, s'₁,…,s'_m ∈ J. Define f₁₂: R^{k+m} → R by
$$f_{12}(x_1,\dots,x_k,x'_1,\dots,x'_m) = f_1(x_1,\dots,x_k)\,f_2(x'_1,\dots,x'_m)$$
for all x₁,…,x_k, x'₁,…,x'_m ∈ R. Clearly f₁₂ is C^∞-bounded, and φ₁φ₂ is the tame function represented by f₁₂ and the evaluations at s₁,…,s_k, s'₁,…,s'_m. Thus φ₁φ₂ ∈ T, and T is a subalgebra of C_b. It is immediately obvious from the definition of S_t that if φ ∈ T factors through evaluations at s₁,…,s_k ∈ J, then S_t(φ) will factor through evaluations at the points t + s_j ≤ 0. So T is invariant under S_t for each t ≥ 0.

Next we prove the weak density of T in C_b. Let T₀ be the subalgebra of C_b consisting of all functions φ: C(J,R) → R of the form
$$\varphi(\eta) = f\bigl(\eta(s_1),\dots,\eta(s_k)\bigr), \qquad \eta\in C(J,\mathbf R), \qquad (1)$$
where f: R^k → R is bounded and uniformly continuous and s₁,…,s_k ∈ J. Observe first that T is (strongly) dense in T₀ with respect to the supremum norm on C_b. To see this, it is sufficient to prove that if ε > 0 is given and f: R^k → R is any bounded uniformly continuous function on R^k, then there is a C^∞-bounded function g: R^k → R such that |f(x) − g(x)| < ε for all x ∈ R^k. We prove this using a standard smoothing argument via convolution with a C^∞ bump function (Hirsch [32], pp. 45–47). By uniform continuity of f there is a δ > 0 such that |f(x₁) − f(x₂)| < ε whenever |x₁ − x₂| < 2δ.
Let h be a C^∞ bump function on R^k supported in the ball B(0,2δ) with $\int_{\mathbf R^k}h(y)\,dy = 1$, and put g = f * h. Then |f(x) − g(x)| < ε for all x ∈ R^k, and for each p ≥ 1,
$$\|D^p g(x)\| \le V\cdot\sup_{z\in\mathbf R^k}\|D^p h(z)\|\cdot\sup_{z\in\mathbf R^k}|f(z)| = N, \text{ say},$$
for all x ∈ R^k, where N is independent of x and $V = \int_{B(0,2\delta)}dy'$ is the volume of a ball of radius 2δ in R^k. Hence g is C^∞-bounded.

Secondly, we note that T₀ is weakly dense in C_b. Let Π_k: −r = s₁ < s₂ < s₃ < … < s_k = 0, k = 1,2,…, be a sequence of partitions of J such that mesh Π_k → 0 as k → ∞. Define the continuous linear embedding I_k: R^k → C(J,R) by letting I_k(v₁,v₂,…,v_k) be the piecewise linear path
$$I_k(v_1,\dots,v_k)(s) = \frac{s-s_{j-1}}{s_j-s_{j-1}}\,v_j + \frac{s_j-s}{s_j-s_{j-1}}\,v_{j-1}, \qquad s\in[s_{j-1},s_j],\ j = 2,\dots,k,$$
joining the points v₁,…,v_k ∈ R.
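The smoothing-by-convolution step above can be illustrated numerically in one dimension. In this sketch (an illustration added here; the particular f, bump width, and grids are arbitrary assumptions) a bounded uniformly continuous but non-smooth f is replaced by a normalized discrete convolution g with a bump of radius 2δ, and the uniform error stays within the oscillation of f over a 2δ-ball.

```python
import math

# Approximate a bounded uniformly continuous f by a smooth g = f * h_delta,
# where h_delta is a bump supported in (-2*delta, 2*delta), discretized and
# normalized so that its weights sum to 1.
def f(x):
    return abs(math.sin(x))             # bounded, uniformly continuous, not C^1

delta = 0.01

def bump(y):                            # C-infinity bump, support (-2d, 2d)
    u = y / (2 * delta)
    return math.exp(-1.0 / (1 - u * u)) if abs(u) < 1 else 0.0

ys = [-2 * delta + k * (4 * delta / 400) for k in range(401)]
Z = sum(bump(y) for y in ys)            # normalizing constant

def g(x):                               # weighted average of f over a 2d-ball
    return sum(f(x - y) * bump(y) for y in ys) / Z

err = max(abs(f(x) - g(x)) for x in [i * 0.01 for i in range(-300, 301)])
print(err)  # bounded by the oscillation of f over distance 2*delta
```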
Denote by $\underline s_k$ the k-tuple (s₁,…,s_k) ∈ J^k, and by $\rho_{\underline s_k}$ the map C(J,R) → R^k, η ↦ (η(s₁),…,η(s_k)). Employing the uniform continuity of each η ∈ C(J,R) on the compact interval J, the reader may easily check that
$$\lim_{k\to\infty}\bigl(I_k\circ\rho_{\underline s_k}\bigr)(\eta) = \eta \quad \text{in } C(J,\mathbf R). \qquad (5)$$
Now if φ ∈ C_b, define φ_k: C(J,R) → R by φ_k = φ∘I_k∘ρ_{\underline s_k}, k = 1,2,…. Since φ is bounded and uniformly continuous, so is φ∘I_k: R^k → R. Thus each φ_k ∈ T₀, and lim_{k→∞} φ_k(η) = φ(η) for all η ∈ C(J,R), because of (5). Finally, note that $\|\varphi_k\|_{C_b}\le\|\varphi\|_{C_b}$ for all k ≥ 1. Therefore φ = w-lim φ_k, and T₀ is weakly dense in C_b. From the weak density of T in T₀ and of T₀ in C_b, one concludes that T is weakly dense in C_b.

Borel C(J,R) is generated by the class {φ⁻¹(U): U ⊂ R open, φ ∈ T}. For any finite collection $\underline s_k$ = (s₁,…,s_k) ∈ J^k let $\rho_{\underline s_k}$: C(J,R) → R^k be as before. Write each φ ∈ T in the form φ = f∘ρ_{\underline s_k} for some C^∞-bounded f: R^k → R.
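The convergence (5) of the piecewise-linear interpolates $(I_k\circ\rho_{\underline s_k})(\eta)$ to η can also be observed numerically. In this sketch (an illustration added here; the sample path η and the uniform meshes are arbitrary choices) the sup-norm interpolation error on J = [−1,0] shrinks as the mesh is refined.

```python
import math

# Piecewise-linear interpolation of a continuous path eta on J = [-1, 0]
# over a uniform mesh of k nodes; the sup-norm error tends to 0 as k grows.
def eta(s):
    return math.sin(5 * s) + s * s

def interp_error(k):
    nodes = [-1 + j / (k - 1) for j in range(k)]
    vals = [eta(s) for s in nodes]
    def Ik(s):
        j = min(int((s + 1) * (k - 1)), k - 2)   # mesh interval containing s
        s0, s1 = nodes[j], nodes[j + 1]
        return ((s1 - s) * vals[j] + (s - s0) * vals[j + 1]) / (s1 - s0)
    grid = [-1 + i / 1000 for i in range(1001)]
    return max(abs(Ik(s) - eta(s)) for s in grid)

print(interp_error(5), interp_error(50), interp_error(500))  # decreasing
```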
It is
[Figure: graph of the piecewise-linear paths η_n constructed in the proof of Theorem (4.1)(ii).]
well-known that Borel C(J,R) is generated by the cylinder sets
$$\bigl\{\rho_{\underline s_k}^{-1}(U_1\times\dots\times U_k):\ U_i\subset\mathbf R \text{ open},\ i = 1,\dots,k,\ \underline s_k = (s_1,\dots,s_k)\in J^k,\ k = 1,2,\dots\bigr\}$$
(Parthasarathy [66] pp. 212–213). Moreover, it is quite easy to see that Borel R^k is generated by the class {f⁻¹(U): U ⊂ R open, f: R^k → R C^∞-bounded} (e.g. from the existence of smooth bump functions on R^k). But for each open set U ⊂ R, φ⁻¹(U) = ρ_{\underline s_k}⁻¹[f⁻¹(U)]. Therefore it follows directly from the above that Borel C(J,R) is generated by T.

(ii) Let φ ∈ T have a minimal representation φ = f∘ρ_{\underline s_k}, where $\underline s_k$ = (s₁,…,s_k) ∈ J^k is such that −r ≤ s₁ < s₂ < … < s_k ≤ 0 and k ≥ 2. Take 1 ≤ j₀ < k so that −r < s_{j₀} < 0. Since the representation of φ is minimal, there is a k-tuple (x₁,…,x_k) ∈ R^k and a neighbourhood [x_{j₀}−ε₀, x_{j₀}+ε₀] of x_{j₀} in R, with ε₀ > 0, such that
$$D_{j_0}f(x_1,\dots,x_{j_0-1},x,x_{j_0+1},\dots,x_k) \ne 0 \quad \text{for all } x\in[x_{j_0}-\varepsilon_0,\ x_{j_0}+\varepsilon_0].$$
Define the function g: [x_{j₀}−ε₀, x_{j₀}+ε₀] → R by g(x) = f(x₁,…,x_{j₀−1},x,x_{j₀+1},…,x_k) for all x ∈ [x_{j₀}−ε₀, x_{j₀}+ε₀]. Hence Dg(x) ≠ 0 for all x ∈ [x_{j₀}−ε₀, x_{j₀}+ε₀], and g is a C^∞ diffeomorphism onto its range. Therefore there is a Λ > 0 such that
$$|x'-x''| \le \Lambda\,|g(x')-g(x'')| \quad \text{for all } x',x''\in[x_{j_0}-\varepsilon_0,\ x_{j_0}+\varepsilon_0]. \qquad (6)$$
Pick δ₀ > 0 so that δ₀ < ε₀ and the intervals (s_j − 2δ₀, s_j + 2δ₀), j = 1,2,…,k, are mutually disjoint. In the remaining part of the argument we may assume, with no loss of generality, that all integers n are such that 1/n < δ₀. Construct a sequence {η_n} in C(J,R) looking like the picture opposite,
viz. η_n ∈ C(J,R) is the piecewise-linear path determined by:
$$\eta_n(s) = x_j \quad \text{for } s_j-\delta_0\le s\le s_j+\delta_0,\ j\ne j_0;$$
$$\eta_n(s_{j_0}) = x_{j_0}+\varepsilon_0; \qquad \eta_n(s) = x_{j_0}+\varepsilon_0\bigl(1-n(s-s_{j_0})\bigr) \quad \text{for } s_{j_0}\le s\le s_{j_0}+\tfrac1n;$$
$$\eta_n(s) = x_{j_0} \quad \text{for } s_{j_0}+\tfrac1n\le s\le s_{j_0}+\delta_0;$$
with η_n interpolating linearly between these values and 0 on the intervals [s_j − 2δ₀, s_j − δ₀] and [s_j + δ₀, s_j + 2δ₀], and η_n(s) = 0 outside $\bigcup_j(s_j-2\delta_0,\,s_j+2\delta_0)$. (7)
Suppose, if possible, that φ ∈ $C_b^s$. Then
$$\lim_{t\to 0+}S_t(\varphi)(\eta_n) = \lim_{t\to 0+}f\bigl(\eta_n(t+s_1),\dots,\eta_n(t+s_k)\bigr) = f\bigl(\eta_n(s_1),\dots,\eta_n(s_k)\bigr)$$
uniformly in n. But, for j ≠ j₀ and 0 < t < δ₀, η_n(t+s_j) = x_j for all n. Therefore
$$\lim_{t\to 0+}g\bigl(\eta_n(t+s_{j_0})\bigr) = g\bigl(\eta_n(s_{j_0})\bigr)$$
uniformly in n; i.e. for any ε > 0 there is a 0 < δ < δ₀ such that
$$\bigl|g(\eta_n(t+s_{j_0})) - g(\eta_n(s_{j_0}))\bigr| < \varepsilon \quad \text{for all } n,\ \text{all } 0 < t < \delta. \qquad (8)$$
Now suppose 0 < t < δ₀. If 1/n < t < δ₀, then η_n(t+s_{j₀}) − x_{j₀} = 0. If 0 < t < 1/n, then
$$|\eta_n(t+s_{j_0}) - x_{j_0}| = |-n\varepsilon_0 t + x_{j_0} + \varepsilon_0 - x_{j_0}| = \varepsilon_0(1-nt) < \varepsilon_0.$$
Hence η_n(t+s_{j₀}) ∈ [x_{j₀}−ε₀, x_{j₀}+ε₀] for all 0 < t < δ₀. Applying (6) and (8), we get for 0 < t < δ
$$|\eta_n(t+s_{j_0}) - \eta_n(s_{j_0})| \le \Lambda\,\bigl|g(\eta_n(t+s_{j_0})) - g(\eta_n(s_{j_0}))\bigr| < \Lambda\,\varepsilon.$$
Note that δ is independent of n. In the above inequality, fix t < δ and choose n₀ large enough such that 1/n₀ < t. Then $s_{j_0}+\tfrac{1}{n_0} < t+s_{j_0} < s_{j_0}+\delta_0$ and $\eta_{n_0}(t+s_{j_0}) = x_{j_0}$, so
$$\bigl|\eta_{n_0}(t+s_{j_0}) - \eta_{n_0}(s_{j_0})\bigr| = \bigl|x_{j_0} - (x_{j_0}+\varepsilon_0)\bigr| = \varepsilon_0 < \Lambda\,\varepsilon,$$
which clearly contradicts the arbitrary choice of ε. Therefore φ ∉ $C_b^s$. □
Definition (4.2) (Quasi-tame Functions): A function φ: C(J,R^n) → R is quasi-tame if there is an integer k > 0, C^∞-bounded maps f_j: R^n → R^n, 1 ≤ j ≤ k−1, h: (R^n)^k → R, and piecewise C¹ functions g_j: J → R, 1 ≤ j ≤ k−1, such that
$$\varphi(\eta) = h\Bigl(\int_{-r}^{0}f_1(\eta(s))\,g_1(s)\,ds,\ \dots,\ \int_{-r}^{0}f_{k-1}(\eta(s))\,g_{k-1}(s)\,ds,\ \eta(0)\Bigr)$$
for all η ∈ C(J,R^n). Each derivative g'_j is assumed to be absolutely integrable over J. Denote by T_q the set of all quasi-tame functions on C(J,R^n).

Theorem (4.2):
(i) T_q ⊂ D(S);
(ii) T_q is invariant under the shift semigroup $\{S_t\}_{t\ge 0}$;
(iii) T_q is a weakly dense subalgebra of C_b generating Borel C(J,R^n).
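As a concrete illustration of Definition (4.2) (added here; the particular f, g, h and η are arbitrary choices, not taken from the text), a quasi-tame function with k = 2 can be evaluated by approximating its defining integral with a Riemann sum:

```python
import math

# A sample quasi-tame function on C(J, R) with J = [-r, 0], r = 1:
#   phi(eta) = h( integral_{-r}^{0} f(eta(s)) g(s) ds, eta(0) )
r = 1.0
def f(x): return math.tanh(x)            # C-infinity-bounded
def g(s): return 1.0 + s                 # piecewise C^1 on J
def h(m, v): return math.exp(-m * m) * math.cos(v)
def eta(s): return math.sin(3 * s)       # a path in C(J, R)

n = 20000                                # midpoint Riemann sum
step = r / n
m_eta = sum(f(eta(-r + (i + 0.5) * step)) * g(-r + (i + 0.5) * step)
            for i in range(n)) * step
phi = h(m_eta, eta(0.0))
print(m_eta, phi)
```

For this η the integrand is nonpositive on J, so m_eta is negative and phi = exp(−m_eta²) lies strictly between 0 and 1.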
Proof: In proving statements (i) and (ii) of the theorem, we shall assume for simplicity that φ = h∘m, where h: (R^n)² → R is C^∞-bounded and $m(\eta) = \bigl(\int_{-r}^{0}f(\eta(s))g(s)\,ds,\ \eta(0)\bigr)$ for some C^∞-bounded map f: R^n → R^n and a (piecewise) C¹ function g: J → R. Let 0 < t < r and consider the expression
$$\frac1t\,[\varphi(\tilde\eta_t)-\varphi(\eta)] = \frac1t\,[h(m(\tilde\eta_t))-h(m(\eta))] = Dh(z_t^\eta)\Bigl(\frac1t\,[m(\tilde\eta_t)-m(\eta)]\Bigr),$$
using the Mean-Value Theorem for h, with z_t^η a point on the segment joining m(η̃_t) and m(η). [...] For (ii), it is sufficient to prove that T_q is invariant under S_t, 0 ≤ t ≤ r, due to the semigroup property. So let φ be as above and t ∈ [0,r]. Then
$$S_t(\varphi)(\eta) = h\Bigl(\int_{t-r}^{0}f(\eta(s))\,g(s-t)\,ds + f(\eta(0))\int_{-t}^{0}g(s)\,ds,\ \eta(0)\Bigr)$$
for all η ∈ C(J,R^n). Define g̃: J → R by
$$\tilde g(s) = \begin{cases} g(s-t), & t-r\le s\le 0\\ 0, & -r\le s < t-r.\end{cases}$$
Then clearly g̃ is piecewise C¹ with g̃' absolutely integrable over J. Define also m̃: C(J,R^n) → (R^n)², F: (R^n)² → (R^n)² and h̃: (R^n)² → R by
$$\tilde m(\eta) = \Bigl(\int_{-r}^{0}f(\eta(s))\,\tilde g(s)\,ds,\ \eta(0)\Bigr) = \Bigl(\int_{t-r}^{0}f(\eta(s))\,g(s-t)\,ds,\ \eta(0)\Bigr),$$
$$F(x,y) = \Bigl(f(y)\int_{-t}^{0}g(s)\,ds,\ 0\Bigr), \qquad \tilde h(x,y) = h\bigl[F(x,y)+(x,y)\bigr]$$
for all x, y ∈ R^n, η ∈ C(J,R^n). Since f and h are C^∞-bounded, so is F and hence h̃. Therefore S_t(φ) = h̃∘m̃ ∈ T_q.

We show that T_q is a weakly dense subalgebra of C_b. It is an easy matter to check that T_q is closed under addition and multiplication of functions in C_b. To prove weak density of T_q in C_b, it is sufficient to show that T_q is weakly dense in T ∪ T_q, because the set of tame functions T is already weakly dense in C_b (Theorem (4.1)(i)). Let φ ∈ T have the representation
$$\varphi(\eta) = f\bigl(\eta(s_1),\dots,\eta(s_k)\bigr), \qquad \eta\in C(J,\mathbf R^n),$$
where f: (R^n)^k → R is C^∞-bounded and s₁,…,s_k ∈ J. For each 1 ≤ j ≤ k, construct a sequence $\{g_j^m\}_{m=1}^\infty$ of piecewise linear functions on J, concentrated near s_j, such that the measures $g_j^m(s)\,ds$ converge to the unit point mass at s_j as m → ∞; e.g.
[Figure: graph of a typical g_j^m — a piecewise-linear spike of unit area concentrated at s_j.]

Also choose a sequence $\{\theta_m\}_{m=1}^\infty$ of C^∞ bump functions on R^n such that
$$\theta_m(x) = \begin{cases} x, & |x|\le m\\ 0, & |x|\ge m+1\end{cases}$$
and |θ_m(x)| ≤ |x| for all x ∈ R^n. For each η ∈ C(J,R^n) one then has
$$\eta(s_j) = \lim_{m\to\infty}\int_{-r}^{0}\theta_m(\eta(s))\,g_j^m(s)\,ds, \qquad j = 1,\dots,k. \qquad (12)$$
Therefore
$$\varphi(\eta) = \lim_{m\to\infty}f\Bigl(\int_{-r}^{0}\theta_m(\eta(s))\,g_1^m(s)\,ds,\ \dots,\ \int_{-r}^{0}\theta_m(\eta(s))\,g_k^m(s)\,ds\Bigr). \qquad (13)$$
Denoting the expression under the limiting operation in (13) by φ_m(η), it is clear that φ_m ∈ T_q for each m and |φ_m(η)| ≤ sup|f| for all η; so φ = w-lim φ_m, and T_q is weakly dense in T ∪ T_q. [...] Let B' be the complement of an open ball B in R^n of radius b centred at the origin, and for each integer p ≥ 1 let U_p be the complement of the concentric closed ball of radius b − 1/p. Let the sequences $\{\theta_m\}_{m=1}^\infty$ and $\{g_j^m\}_{m=1}^\infty$ be as before, so that (12) is satisfied. Let φ_m ∈ T_q be given by
$$\varphi_m(\eta) = \int_{-r}^{0}\theta_m(\eta(s))\,g_j^m(s)\,ds, \qquad m\ge 1,\ \eta\in C(J,\mathbf R^n).$$
We contend that
$$\{\eta:\eta(s_j)\in B'\} = \bigcap_{p=1}^{\infty}\ \liminf_{m}\ \{\eta:\varphi_m(\eta)\in U_p\}. \qquad (14)$$
To see this, let η(s_j) ∈ B'. Then η(s_j) ∈ U_p for all p ≥ 1. Since η(s_j) = lim_m φ_m(η), for each p there is an m₀ > 0 such that φ_m(η) ∈ U_p for all m ≥ m₀. Hence, for each p ≥ 1, η belongs to liminf_m {η: φ_m(η) ∈ U_p}; i.e. η belongs to the set on the right-hand side of (14). Conversely, let η be in liminf_m {η: φ_m(η) ∈ U_p} for each p ≥ 1. Then for every p ≥ 1 there is an m₀ ≥ 1 such that φ_m(η) ∈ U_p for all m ≥ m₀. Taking m → ∞ gives η(s_j) ∈ Ū_p for all p ≥ 1, i.e. η(s_j) ∈ $\bigcap_{p=1}^\infty \bar U_p$. But B' = $\bigcap_{p=1}^\infty \bar U_p$, so η(s_j) ∈ B'. Thus our contention is proved. As the sets {η: φ_m(η) ∈ U_p} are clearly in σ(T_q), it follows directly from (14) that {η: η(s_j) ∈ B'} ∈ σ(T_q). Therefore σ(T_q) = Borel C(J,R^n). □
The final result in this chapter asserts that every quasi-tame function is in the domain D(A) of the weak generator A of $\{P_t\}_{t\ge 0}$.

Theorem (4.3): Every quasi-tame function on C(J,R^n) is in the domain of the weak generator A of $\{P_t\}_{t\ge 0}$. Indeed, if φ ∈ T_q is of the form φ(η) = h(m(η)), η ∈ C(J,R^n), where
$$m(\eta) = \Bigl(\int_{-r}^{0}f_1(\eta(s))g_1(s)\,ds,\ \dots,\ \int_{-r}^{0}f_{k-1}(\eta(s))g_{k-1}(s)\,ds,\ \eta(0)\Bigr),$$
then
$$A(\varphi)(\eta) = \sum_{j=1}^{k-1}D_jh(m(\eta))\Bigl(f_j(\eta(0))g_j(0) - f_j(\eta(-r))g_j(-r) - \int_{-r}^{0}f_j(\eta(s))\,g'_j(s)\,ds\Bigr) + D_kh(m(\eta))(H(\eta)) + \tfrac12\,\mathrm{trace}\bigl[D_k^2h(m(\eta))\circ(G(\eta)\times G(\eta))\bigr]$$
for all η ∈ C(J,R^n). Here again the D_j h denote the partial derivatives of h: (R^n)^k → R considered as a function of k n-dimensional variables.

Proof: To prove that T_q ⊂ D(A), we shall show that each φ = h∘m ∈ T_q satisfies Conditions (DA) of §3. First, it is not hard to see that each φ ∈ T_q is C^∞. Also, by applying the Chain Rule and differentiating under the integral sign, one gets
$$D\varphi(\eta)(\zeta) = Dh(m(\eta))\Bigl(\int_{-r}^{0}Df_1(\eta(s))(\zeta(s))\,g_1(s)\,ds,\ \dots,\ \zeta(0)\Bigr),$$
$$D^2\varphi(\eta)(\zeta_1,\zeta_2) = D^2h(m(\eta))(\dots) + Dh(m(\eta))\Bigl(\int_{-r}^{0}D^2f_1(\eta(s))(\zeta_1(s),\zeta_2(s))\,g_1(s)\,ds,\ \dots,\ 0\Bigr)$$
for all η, ζ, ζ₁, ζ₂ ∈ C(J,R^n). Since all derivatives of h and of the f_j, 1 ≤ j ≤ k−1, are bounded, it is easy to see from the above formulae that Dφ and D²φ are bounded on C(J,R^n). By induction it follows that φ is C^∞-bounded. Hence Conditions (DA)(ii), (iii), (iv) are automatically satisfied. Condition (DA)(i) is fulfilled by virtue of Theorem (4.2)(i). From the above two formulae we see easily that the unique weakly continuous extensions $\overline{D\varphi(\eta)}$, $\overline{D^2\varphi(\eta)}$ of Dφ(η) and D²φ(η) to
$C(J,\mathbf R^n)\oplus F_n$ are given by
$$\overline{D\varphi(\eta)}\bigl(v\chi_{\{0\}}\bigr) = Dh(m(\eta))(0,\dots,0,v) = D_kh(m(\eta))(v),$$
$$\overline{D^2\varphi(\eta)}\bigl(v_1\chi_{\{0\}},\,v_2\chi_{\{0\}}\bigr) = D^2h(m(\eta))\bigl((0,\dots,0,v_1),\,(0,\dots,0,v_2)\bigr) = D_k^2h(m(\eta))(v_1,v_2)$$
for all v, v₁, v₂ ∈ R^n. The given formula for A(φ)(η) now follows directly from Theorem (3.2) and Theorem (4.2). □

Definition (4.3) (Dynkin [16]): Say η⁰ ∈ C(J,R^n) is an absorbing state for the trajectory field $\{{}^\eta x_t: t\ge 0,\ \eta\in C(J,\mathbf R^n)\}$ of the stochastic FDE (I) if
$$P\{\omega\in\Omega:\ {}^{\eta^0}x_t(\omega) = \eta^0\} = 1 \quad \text{for all } t\ge 0,$$
i.e. p(0, η⁰, t, {η⁰}) = 1 for all t ≥ 0 (here we take a = ∞).

The following corollary of Theorem (4.3) gives a necessary condition for η⁰ ∈ C(J,R^n) to be an absorbing state for the trajectory field of the stochastic FDE (I).

Corollary: Let η⁰ ∈ C(J,R^n) be an absorbing state for the trajectory field of the stochastic FDE (I). Then
(i) η⁰(s) = η⁰(0) for all s ∈ J, i.e. η⁰ is constant;
(ii) H(η⁰) = 0 and G(η⁰) = 0.

Proof:
Let η⁰ ∈ C(J,R^n) be an absorbing state for $\{{}^\eta x_t: t\ge 0,\ \eta\in C(J,\mathbf R^n)\}$. For each t ≥ 0 and s ∈ J, define the F_t-measurable sets
$$\Omega_t = \{\omega\in\Omega:\ {}^{\eta^0}x_t(\omega) = \eta^0\}, \qquad \Omega_t(s) = \{\omega\in\Omega:\ {}^{\eta^0}x_t(\omega)(s) = \eta^0(s)\}.$$
Then Ω_t ⊂ Ω_t(s) for all t ≥ 0, s ∈ J, and since P(Ω_t) = 1, it follows that P[Ω_t(s)] = 1 for t ≥ 0, s ∈ J. Suppose, if possible, that there exist s₁, s₂ ∈ J such that η⁰(s₁) ≠ η⁰(s₂); without loss of generality take −r ≤ s₁ < s₂ ≤ 0. For each ω ∈ Ω, ${}^{\eta^0}x_{s_2-s_1}(\omega)(s_1) = \eta^0(s_2) \ne \eta^0(s_1)$, and so $\Omega_{s_2-s_1}(s_1) = \emptyset$. Hence $P[\Omega_{s_2-s_1}(s_1)] = 0$, which contradicts P[Ω_t(s)] = 1 for t ≥ 0, s ∈ J. So η⁰ must be a constant path. This proves (i).

To prove that η⁰ satisfies (ii), note that the absorbing state η⁰ must satisfy A(φ)(η⁰) = 0 for all φ ∈ D(A) (Dynkin [16], Lemma (5.3), p. 137). In particular A(φ)(η⁰) = 0 for every quasi-tame φ: C(J,R^n) → R. Note first that since η⁰ is constant, so is the map t ↦ η̃⁰_t, and hence S(φ)(η⁰) = 0 for every φ ∈ D(S). Take any C^∞-bounded ψ: R^n → R and define the (quasi-)tame function φ: C(J,R^n) → R by φ(η) = ψ(η(0)) for all η ∈ C(J,R^n). Then by Theorem (4.3), φ ∈ D(A) and
$$A(\varphi)(\eta^0) = D\psi(\eta^0(0))(H(\eta^0)) + \tfrac12\,\mathrm{trace}\bigl[D^2\psi(\eta^0(0))\circ(G(\eta^0)\times G(\eta^0))\bigr] = 0.$$
The last identity holds for every C^∞-bounded ψ: R^n → R. Choose such a ψ with the properties Dψ(η⁰(0)) = 0 and D²ψ(η⁰(0)) = ⟨·,·⟩, the Euclidean inner product on R^n; e.g. take ψ of the form ψ(v) = |v − η⁰(0)|² in some neighbourhood of η⁰(0) in R^n. Then
$$\mathrm{trace}\bigl[D^2\psi(\eta^0(0))\circ(G(\eta^0)\times G(\eta^0))\bigr] = \sum_{j=1}^{m}\bigl|G(\eta^0)(e_j)\bigr|^2 = 0$$
for any basis $\{e_j\}_{j=1}^m$ of R^m. Therefore G(η⁰) = 0. Thus Dψ(η⁰(0))(H(η⁰)) = 0 for every C^∞-bounded ψ: R^n → R. Now pick any C^∞-bounded ψ̃ such that ψ̃(v) = ⟨H(η⁰), v⟩ for all v in some neighbourhood of η⁰(0) in R^n. Then Dψ̃(η⁰(0))(H(η⁰)) = |H(η⁰)|² = 0; so H(η⁰) = 0 and (ii) is proved. □

Remarks:
(i) We conjecture that conditions (i) and (ii) of the corollary are also sufficient for η⁰ to be an absorbing state for the trajectory field $\{{}^\eta x_t: t\ge 0,\ \eta\in C(J,\mathbf R^n)\}$. It is perhaps enough to check Lemma (5.3) of Dynkin ([16], p. 137) on the weakly dense set of quasi-tame functions in D(A).
(ii) An absorbing state η⁰ corresponds to the Dirac measure $\delta_{\eta^0}$ being invariant under the adjoint semigroup $\{P_t^*\}_{t\ge 0}$ associated with the stochastic FDE (I). Thus a necessary condition for the existence of invariant Dirac measures is that the coefficients H, G should have a common zero (cf. §III.3).
V Regularity of the trajectory field
§1.
Introduction
Given a filtered probability space (Ω, F, (F_t)_{t≥0}, P), [...] Suppose t₁, t₂ ≥ r and |t₁ − t₂| < δ. Then one has
$$\|h_{t_1}-h_{t_2}\|_{C^\alpha} \le \|h_{t_1}-h^0_{t_1}\|_{C^\alpha} + \|h^0_{t_1}-h^0_{t_2}\|_{C^\alpha} + \|h^0_{t_2}-h_{t_2}\|_{C^\alpha} \le 2\,\|h-h^0\|_{C^\alpha} + \|h^0_{t_1}-h^0_{t_2}\|_{C^\alpha} < \varepsilon.$$
Thus we need only prove the lemma for h ∈ C¹([0,a], R). If so, let r ≤ t₁ < t₂ ≤ a, −r ≤ s₁ < s₂ ≤ 0 and consider
$$(h_{t_1}-h_{t_2})(s_2) - (h_{t_1}-h_{t_2})(s_1) = \int_{s_1}^{s_2}(h_{t_1}-h_{t_2})'(u)\,du = \int_{s_1}^{s_2}\bigl[h'(t_1+u)-h'(t_2+u)\bigr]\,du.$$
By the uniform continuity of h and h', if ε > 0 is given there exists δ₀ > 0 such that $|h(u)-h(v)| < \varepsilon/(1+r^{1-\alpha})$ and $|h'(u)-h'(v)| < \varepsilon/(1+r^{1-\alpha})$ whenever u, v ∈ [0,a] and |u−v| < δ₀. Hence if |t₁ − t₂| < δ₀, then
$$\|h_{t_1}-h_{t_2}\|_{C^\alpha} = \sup_{\substack{s_1,s_2\in J\\ s_1\ne s_2}}\bigl|(h_{t_1}-h_{t_2})(s_2)-(h_{t_1}-h_{t_2})(s_1)\bigr|\,|s_2-s_1|^{-\alpha} + \|h_{t_1}-h_{t_2}\|_{C},$$
and
$$\|h_{t_1}-h_{t_2}\|_{C} = \sup_{s\in J}\bigl|h(t_1+s)-h(t_2+s)\bigr| < \frac{\varepsilon}{1+r^{1-\alpha}}.$$
Therefore
$$\|h_{t_1}-h_{t_2}\|_{C^\alpha} < \frac{\varepsilon\,r^{1-\alpha}}{1+r^{1-\alpha}} + \frac{\varepsilon}{1+r^{1-\alpha}} = \varepsilon.$$
This completes the proof of the lemma. □

Lemma (2.2): Suppose F: R^p → R^n is a C¹ map with DF: R^p → L(R^p, R^n) locally Lipschitz. Then the map
$$C^\alpha(J,\mathbf R^p)\ni\xi\ \longmapsto\ F\circ\xi\in C^\alpha(J,\mathbf R^n)$$
is Lipschitz on every bounded set in C^α(J, R^p).

Proof: Let s₁, s₂ ∈ J and ξ₁, ξ₂ ∈ C^α(J, R^p). For each u ∈ [0,1] define y(u), z(u) ∈ R^p by y(u) = uξ₁(s₂) + (1−u)ξ₂(s₂) and z(u) = uξ₁(s₁) + (1−u)ξ₂(s₁). By the Mean-Value Theorem for F, the expression
$$E(s_1,s_2) = F(\xi_1(s_1)) - F(\xi_2(s_1)) - \bigl[F(\xi_1(s_2)) - F(\xi_2(s_2))\bigr]$$
$$= \int_0^1 DF(z(u))\bigl(\xi_1(s_1)-\xi_2(s_1)\bigr)\,du - \int_0^1 DF(y(u))\bigl(\xi_1(s_2)-\xi_2(s_2)\bigr)\,du$$
$$= \int_0^1 DF(z(u))\bigl[(\xi_1-\xi_2)(s_1)-(\xi_1-\xi_2)(s_2)\bigr]\,du + \int_0^1\bigl[DF(z(u))-DF(y(u))\bigr]\bigl(\xi_1(s_2)-\xi_2(s_2)\bigr)\,du.$$
Now for each u ∈ [0,1], $|y(u)|\le\|\xi_1\|_C+\|\xi_2\|_C$ and $|z(u)|\le\|\xi_1\|_C+\|\xi_2\|_C$; suppose these norms are at most 1. Since DF is locally Lipschitz, there is a K₁ > 0 such that
$$\|DF(z(u))-DF(y(u))\| \le K_1\,|z(u)-y(u)| \quad \text{for all } u\in[0,1].$$
Letting $K_2 = \sup\{\|DF(v)\|: v\in\mathbf R^p,\ |v|\le 2\}$, we get
$$|E(s_1,s_2)| \le K_2\bigl[\|\xi_1-\xi_2\|_{C^\alpha} - \|\xi_1-\xi_2\|_{C}\bigr]\,|s_1-s_2|^{\alpha} + K_1\bigl[\|\xi_1\|_{C^\alpha}+\|\xi_2\|_{C^\alpha}\bigr]\,\|\xi_1-\xi_2\|_{C}\,|s_1-s_2|^{\alpha}.$$
Together with the elementary bound $\|F\circ\xi_1-F\circ\xi_2\|_{C}\le K_2\|\xi_1-\xi_2\|_{C}$, this shows that there is a K > 0 such that $\|F\circ\xi_1-F\circ\xi_2\|_{C^\alpha}\le K\,\|\xi_1-\xi_2\|_{C^\alpha}$ on the given bounded set; i.e. the map ξ ↦ F∘ξ is Lipschitz on every bounded set in C^α(J, R^p). □

Lemma (2.3): Suppose g is a C² map satisfying the Frobenius condition (Fr). Then g admits a C² flow F: R^m × R^n → R^n. For s ∈ R, x ∈ R^n and ξ ∈ R^m, let φ(s,x,ξ) denote the solution of
$$\frac{\partial\varphi}{\partial s}(s,x,\xi) = g\bigl(\varphi(s,x,\xi)\bigr)(\xi), \qquad \varphi(0,x,\xi) = x, \qquad (1)$$
and set F(ξ,x) = φ(1,x,ξ) for all ξ ∈ R^m, x ∈ R^n. Clearly F is C². As g(·)(0) = 0, φ(s,x,0) = x for all s ∈ R; thus F(0,x) = φ(1,x,0) = x for all x ∈ R^n. Since g is C¹, we can differentiate both sides of (1) with respect to ξ, getting
$$\frac{\partial}{\partial s}\bigl[D_3\varphi(s,x,\xi)(v)\bigr] = \bigl\{Dg(\varphi(s,x,\xi))\bigl[D_3\varphi(s,x,\xi)(v)\bigr]\bigr\}(\xi) + g(\varphi(s,x,\xi))(v), \qquad D_3\varphi(0,x,\xi)(v) = 0 \qquad (2)$$
for all s ∈ R, x ∈ R^n, ξ, v ∈ R^m. Fix v, ξ ∈ R^m and x ∈ R^n; define u: R → R^n by
$$u(s) = D_3\varphi(s,x,\xi)(v) - s\,g(\varphi(s,x,\xi))(v) \quad \text{for all } s\in\mathbf R.$$
Differentiating with respect to s gives
$$u'(s) = \bigl\{Dg(\varphi(s,x,\xi))\bigl[D_3\varphi(s,x,\xi)(v)\bigr]\bigr\}(\xi) - s\bigl\{Dg(\varphi(s,x,\xi))\bigl[D_1\varphi(s,x,\xi)(1)\bigr]\bigr\}(v)$$
$$= \bigl\{Dg(\varphi(s,x,\xi))(u(s))\bigr\}(\xi) + s\,\bigl\{Dg(\varphi(s,x,\xi))\bigl[g(\varphi(s,x,\xi))(v)\bigr]\bigr\}(\xi) - s\bigl\{Dg(\varphi(s,x,\xi))\bigl[g(\varphi(s,x,\xi))(\xi)\bigr]\bigr\}(v)$$
$$= \bigl\{Dg(\varphi(s,x,\xi))(u(s))\bigr\}(\xi) \qquad (3)$$
for all s ∈ R, because of (1), (2) and (Fr). Also, by definition of u, u(0) = D₃φ(0,x,ξ)(v) = 0. Since g is C¹, the map s ↦ Dg(φ(s,x,ξ))(·)(ξ) is continuous, and therefore the linear differential equation
$$y'(s) = Dg(\varphi(s,x,\xi))(y(s))(\xi),\quad s\in\mathbf R; \qquad y(0) = 0 \qquad (4)$$
has y = 0 as its unique solution. Since u satisfies (4), u(s) = 0 for all s ∈ R. In particular, putting s = 1 one gets
$$D_1F(\xi,x)(v) - g(F(\xi,x))(v) = D_3\varphi(1,x,\xi)(v) - g(\varphi(1,x,\xi))(v) = u(1) = 0$$
for all v ∈ R^m, i.e.
$$D_1F(\xi,x) = g(F(\xi,x))$$
for all ξ ∈ R^m, x ∈ R^n. Note that for each ξ ∈ R^m, F(ξ,·) = φ(1,·,ξ) is a C² diffeomorphism of R^n. To prove the group property, let ξ₁ ∈ R^m and define z, z̃: R^m → R^n by
$$z(\xi) = F\bigl(\xi,\,F(\xi_1,x)\bigr), \qquad \tilde z(\xi) = F(\xi+\xi_1,\,x) \quad \text{for all } \xi\in\mathbf R^m,$$
where x ∈ R^n is fixed. Then
$$Dz(\xi) = g\bigl[F(\xi,F(\xi_1,x))\bigr] = g(z(\xi)), \qquad D\tilde z(\xi) = D_1F(\xi+\xi_1,x) = g\bigl[F(\xi+\xi_1,x)\bigr] = g(\tilde z(\xi))$$
for all ξ ∈ R^m, and z(0) = F(0, F(ξ₁,x)) = F(ξ₁,x) = z̃(0). But, for each fixed ξ ∈ R^m, the maps s ↦ z(sξ) and s ↦ z̃(sξ) are clearly both solutions of
$$h'(s) = g(h(s))(\xi),\quad s\in\mathbf R; \qquad h(0) = F(\xi_1,x).$$
So by uniqueness of solutions, z(sξ) = z̃(sξ) for all s ∈ R. In particular z(ξ) = z̃(ξ), i.e. F(ξ, F(ξ₁,x)) = F(ξ+ξ₁, x) for all ξ ∈ R^m, all x ∈ R^n. □

Remark (2.1): If the C² condition on g is weakened to g being C¹ but with Dg locally
Lipschitz, a unique C¹ flow F: R^m × R^n → R^n for g still exists. Furthermore, DF is locally Lipschitz, due to the following argument. Use the notation in the proof of the lemma. Let B₁ ⊂ R^n and B₂ ⊂ R^m be bounded sets. By continuity of φ, the set S = {φ(s,x,ξ): s ∈ [0,1], x ∈ B₁, ξ ∈ B₂} is relatively compact in R^n. Since S can be covered by a finite number of arbitrarily small balls and Dg is locally Lipschitz, it follows that Dg is Lipschitz on S with Lipschitz constant K₁, say. Let x, x' ∈ B₁ and ξ, ξ' ∈ B₂. Then from differentiating (1) with respect to x we obtain
$$\bigl|D_2\varphi(\sigma,x,\xi)(y) - D_2\varphi(\sigma,x',\xi')(y)\bigr| = \Bigl|\int_0^\sigma\Bigl\{\bigl(Dg(\varphi(s,x,\xi))[D_2\varphi(s,x,\xi)(y)]\bigr)(\xi) - \bigl(Dg(\varphi(s,x',\xi'))[D_2\varphi(s,x',\xi')(y)]\bigr)(\xi')\Bigr\}\,ds\Bigr|$$
$$\le K_2\int_0^\sigma\|Dg(\varphi(s,x,\xi))\|\,\bigl|D_2\varphi(s,x,\xi)(y)-D_2\varphi(s,x',\xi')(y)\bigr|\,ds + K_2\int_0^\sigma\|Dg(\varphi(s,x,\xi))-Dg(\varphi(s,x',\xi'))\|\,\bigl|D_2\varphi(s,x',\xi')(y)\bigr|\,ds$$
$$\le K_2K_3\int_0^\sigma\bigl|D_2\varphi(s,x,\xi)(y)-D_2\varphi(s,x',\xi')(y)\bigr|\,ds + |y|\,K_4K_1\int_0^\sigma\bigl|\varphi(s,x,\xi)-\varphi(s,x',\xi')\bigr|\,ds, \qquad \sigma\in[0,1],$$
where y ∈ R^n,
$$K_2 = \max\Bigl[\sup_{x\in B_1}|x|,\ \sup_{\xi\in B_2}|\xi|\Bigr], \qquad K_3 = \sup\bigl\{\|Dg(\varphi(s,x,\xi))\|: s\in[0,1],\ x\in B_1,\ \xi\in B_2\bigr\},$$
$$K_4 = \sup\bigl\{\|D_2\varphi(s,x,\xi)\|: s\in[0,1],\ x\in B_1,\ \xi\in B_2\bigr\}.$$
But φ is C¹ and therefore Lipschitz on the compact set [0,1] × B̄₁ × B̄₂; i.e. there is a K₅ > 0 such that
$$\bigl|\varphi(s,x,\xi)-\varphi(s,x',\xi')\bigr| \le K_5\bigl[|x-x'| + |\xi-\xi'|\bigr] \quad \text{for all } s\in[0,1].$$
Therefore
$$\bigl|D_2\varphi(\sigma,x,\xi)(y)-D_2\varphi(\sigma,x',\xi')(y)\bigr| \le K_2K_3\int_0^\sigma\bigl|D_2\varphi(s,x,\xi)(y)-D_2\varphi(s,x',\xi')(y)\bigr|\,ds + K_1K_4K_5\,|y|\,\bigl[|x-x'|+|\xi-\xi'|\bigr]$$
for all σ ∈ [0,1]. By Gronwall's lemma,
$$\bigl|D_2\varphi(\sigma,x,\xi)(y)-D_2\varphi(\sigma,x',\xi')(y)\bigr| \le K_1K_4K_5\,e^{K_2K_3}\,|y|\,\bigl[|x-x'|+|\xi-\xi'|\bigr]$$
for all σ ∈ [0,1]. In particular,
$$\|D_2F(\xi,x)-D_2F(\xi',x')\| = \|D_2\varphi(1,x,\xi)-D_2\varphi(1,x',\xi')\| \le K_1K_4K_5\,e^{K_2K_3}\,\bigl[|x-x'|+|\xi-\xi'|\bigr]$$
for all x, x' ∈ B₁ and ξ, ξ' ∈ B₂. Hence D₂F is Lipschitz on bounded sets in R^m × R^n. Also D₁F = g∘F is Lipschitz on bounded sets in R^m × R^n. Therefore so is DF. □

Theorem (2.1): In the stochastic FDE (II), suppose H is Lipschitz and g is a C² map satisfying the Frobenius condition. Then the trajectory field $\{{}^\eta x_t: t\in[0,a],\ \eta\in C(J,\mathbf R^n)\}$ has a version X: Ω × [0,a] × C(J,R^n) → C(J,R^n) with the following properties. For any 0 < α < ½, there is a set Ω_α ⊂ Ω of full P-measure such that, for every ω ∈ Ω_α:
(i) the map X(ω,·,·): [0,a] × C(J,R^n) → C(J,R^n) is continuous;
(ii) for every t ∈ [r,a] and η ∈ C(J,R^n), X(ω,t,η) ∈ C^α(J,R^n);
(iii) X(ω,·,·): [r,a] × C(J,R^n) → C^α(J,R^n) is continuous;
(iv) for each t ∈ [r,a], X(ω,t,·): C(J,R^n) → C^α(J,R^n) is Lipschitz on every bounded set in C(J,R^n), with a Lipschitz constant independent of t ∈ [r,a]. In particular, each X(ω,t,·): C(J,R^n) → C(J,R^n) is a compact map.
Proof: Suppose H is Lipschitz and g is a C² map satisfying the Frobenius condition. Employing a technique of Sussmann ([74]) and Doss ([12]), we show that a version X: Ω × [0,a] × C(J,R^n) → C(J,R^n) of the trajectory $\{{}^\eta x_t: t\in[0,a],\ \eta\in C(J,\mathbf R^n)\}$ of (II) can be constructed, sample-function-wise, by first solving a suitably defined random family H̃(·,·,ω): [0,a] × C(J,R^n) → R^n of retarded FDE's, and then deforming their semi-flows into X using the flow F of g parameterized by Brownian time w(t). The random family H̃ and the flow F are sufficiently regular to guarantee the required sample-function properties of X.

Indeed, by Lemma (2.3), we let F: R^m × R^n → R^n be the C² flow of g, viz.
$$D_1F(\xi,x) = g(F(\xi,x)), \qquad F(0,x) = x \qquad (5)$$
for all ξ ∈ R^m, x ∈ R^n. For any ξ ∈ C(J,R^m) and η ∈ C(J,R^n), define the map F∘(ξ,η): J → R^n by
$$[F\circ(\xi,\eta)](s) = F\bigl(\xi(s),\,\eta(s)\bigr) \quad \text{for all } s\in J.$$
For each ξ ∈ R^m, F(ξ,·) is a diffeomorphism of R^n; so for any x ∈ R^n the linear map D₂F(ξ,x): R^n → R^n is invertible. Define the 'Brownian motion' w⁰: Ω → C([−r,a], R^m) by
$$w^0(\omega)(t) = \begin{cases} w(\omega)(t)-w(\omega)(0), & t\in[0,a]\\ 0, & t\in J\end{cases} \qquad (6)$$
for a.a. ω ∈ Ω. Recall that w⁰_t: Ω → C(J,R^m) is the slice of w⁰ at t ∈ [0,a], i.e. w⁰_t(ω) ∈ C(J,R^m) for a.a. ω ∈ Ω. We now define a random family of retarded FDE's on R^n by letting H̃: [0,a] × C(J,R^n) × Ω → R^n be the map
$$\tilde H(t,\eta,\omega) = \bigl[D_2F\bigl(w^0(\omega)(t),\eta(0)\bigr)\bigr]^{-1}\Bigl\{H\bigl[F\circ(w^0_t(\omega),\eta)\bigr] - \tfrac12\,\mathrm{trace}\Bigl(Dg\bigl[F(w^0(\omega)(t),\eta(0))\bigr]\circ g\bigl[F(w^0(\omega)(t),\eta(0))\bigr]\Bigr)\Bigr\} \qquad (7)$$
for a.a. ω ∈ Ω, t ∈ [0,a], η ∈ C(J,R^n). We shall show that, for a.a. ω ∈ Ω, there is a map ${}^\eta\tilde\xi(\omega): [-r,a]\to\mathbf R^n$, continuous on J and C¹ on [0,a], which is a solution of the RFDE:
$$\frac{d\,{}^\eta\tilde\xi(\omega)}{dt}(t) = \tilde H\bigl(t,\,{}^\eta\tilde\xi_t(\omega),\,\omega\bigr), \quad 0 < t\le a; \qquad {}^\eta\tilde\xi_0(\omega) = \eta. \qquad (8)$$
First of all, note that for a.a. ω ∈ Ω every term appearing on the right-hand side of (7) is Lipschitz in η over bounded subsets of C(J,R^n), uniformly with respect to t ∈ [0,a]. Referring to the proof of Lemma (2.3), one has
$$\bigl\|F\circ(w^0_t(\omega),\eta)\bigr\|_C \le \|F(w^0(\omega)(\cdot),0)\| + e^{L\|w^0(\omega)\|}\,\|\eta\|_C, \qquad (15)$$
where $\|F(w^0(\omega)(\cdot),0)\| = \sup\{|F(w^0(\omega)(t),0)|: t\in[-r,a]\}$ and L is a Lipschitz constant for g. Using (14) and (15) gives
$$\bigl|H\bigl[F\circ(w^0_t(\omega),\eta)\bigr]\bigr| \le |H(0)| + L_1\,\|F(w^0(\omega)(\cdot),0)\| + L_1\,e^{L\|w^0(\omega)\|}\,\|\eta\|_C \qquad (16)$$
for all t ∈ [0,a], η ∈ C(J,R^n). Similarly, by the inequality (17), we get
$$\bigl|g\bigl[F(w^0(\omega)(t),\eta(0))\bigr]\bigr| \le \|g(0)\| + L_1\,\|F(w^0(\omega)(\cdot),0)\| + L_1\,e^{L\|w^0(\omega)\|}\,\|\eta\|_C. \qquad (18)$$
Finally, use (7), (11)', (16) and (18) to deduce that
$$|\tilde H(t,\eta,\omega)| \le \tilde K(\omega)\bigl(1+\|\eta\|_C\bigr), \qquad \eta\in C(J,\mathbf R^n),\ t\in[0,a],$$
where
$$\tilde K(\omega) = e^{\|Dg\|\,\|w^0(\omega)\|}\max\Bigl\{L_1e^{L\|w^0(\omega)\|},\ |H(0)| + L_1\|F(w^0(\omega)(\cdot),0)\|,\ \tfrac12\|Dg\|\bigl(\|g(0)\| + L_1\|F(w^0(\omega)(\cdot),0)\|\bigr),\ \tfrac12\|Dg\|\,L_1e^{L\|w^0(\omega)\|}\Bigr\}.$$
Thus for a.a. ω ∈ Ω and every η ∈ C(J,R^n), the RFDE (8) has a unique solution ${}^\eta\tilde\xi(\omega)$ defined on [−r,a] and starting at η.
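The idea behind (7)–(8) can be illustrated in the simplest non-delay scalar case (an illustration added here, not from the text): for the Itô equation dx = −λx dt + σx dw, the flow of g(x) = σx is F(ξ,x) = x e^{σξ}, the conjugated random ODE ξ' = −(λ + σ²/2)ξ absorbs the Itô correction term, and deforming its solution by F at Brownian 'time' w(t) recovers the known closed-form solution. The values of λ, σ, x₀ and T below are arbitrary.

```python
import math, random

# Doss-Sussmann construction for the scalar Ito equation
#   dx = -lam*x dt + sig*x dw,   with g(x) = sig*x.
# Flow of g: F(xi, x) = x * exp(sig * xi).  The conjugated random ODE
#   xi'(t) = -(lam + sig**2 / 2) * xi(t)
# is solved pathwise; deforming by F at Brownian "time" w(t) gives x(t).
lam, sig, x0, T = 1.0, 0.4, 1.0, 1.0    # illustrative parameters
random.seed(0)
wT = math.sqrt(T) * random.gauss(0.0, 1.0)         # w(T) for one sample path
xi_T = x0 * math.exp(-(lam + 0.5 * sig ** 2) * T)  # random-ODE solution at T
x_doss = math.exp(sig * wT) * xi_T                 # x(T) = F(w(T), xi(T))
x_exact = x0 * math.exp(-(lam + 0.5 * sig ** 2) * T + sig * wT)
print(x_doss, x_exact)                             # agree up to rounding
```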
Define the random field x̃: Ω × [−r,a] × C(J,R^n) → R^n by
$$\tilde x(\omega,t,\eta) = F\bigl(w^0(\omega)(t),\ {}^\eta\tilde\xi(\omega)(t)\bigr) \qquad (19)$$
for a.a. ω ∈ Ω, all t ∈ [−r,a] and all η ∈ C(J,R^n). Note that each x̃(·,t,η) is defined on a set of full P-measure in Ω which is independent of t and η, e.g. Ω₀. Also, since g is C², so is F, because of Lemma (2.3). Therefore we can apply Itô's formula (Elworthy [19], p. 77) to (19), getting for each fixed η ∈ C(J,R^n)
$$d\tilde x(\cdot,t,\eta) = D_1F\bigl(w^0(\cdot)(t),{}^\eta\tilde\xi(\cdot)(t)\bigr)\,dw^0(\cdot)(t) + D_2F\bigl(w^0(\cdot)(t),{}^\eta\tilde\xi(\cdot)(t)\bigr)\,d\,{}^\eta\tilde\xi(\cdot)(t) + \tfrac12\,D_1^2F\bigl(w^0(\cdot)(t),{}^\eta\tilde\xi(\cdot)(t)\bigr)\bigl(dw^0(\cdot)(t),\,dw^0(\cdot)(t)\bigr), \qquad t>0. \qquad (20)$$
But for a > t > 0 we have

dw₀(·)(t) = dw(·)(t),
dη̃(·)(t) = H̃(t, η̃ₜ(·), ·) dt,
D₁²F(w₀(·)(t), η̃(·)(t))(dw₀(·)(t), dw₀(·)(t)) = trace D₁²F(w₀(·)(t), η̃(·)(t)) dt,

so (20) yields
"' dx(.,t,n) = D1F(wo(.)(t), n~(·)(t))dw(.)(t)
"'
~ o2 F(w0 (.)(t),n~(·)(t)){H(t,n~t(•),·)dt +
i trace D~F(w (•)(t),n~(·)(t))dt, 0
a> t > 0.
(21)
Now in (5), keep x fixed and differentiate with respect to ξ ∈ ℝᵐ; then

D₁²F(ξ,x) = Dg(F(ξ,x)) ∘ D₁F(ξ,x) = Dg(F(ξ,x)) ∘ g(F(ξ,x))

for all ξ ∈ ℝᵐ, x ∈ ℝⁿ. In particular,

(22)    trace D₁²F(w₀(ω)(t), η̃(ω)(t)) = Dg[F(w₀(ω)(t), η̃(ω)(t))] ∘ g[F(w₀(ω)(t), η̃(ω)(t))] = Dg[X̃(ω,t,η)] ∘ g[X̃(ω,t,η)]

for t ∈ [0,a] and a.a. ω ∈ Ω.
From (7) we see that

D₂F(w₀(ω)(t), η̃(ω)(t)){H̃(t, η̃ₜ(ω), ω)} = H[F ∘ (w₀ᵗ(ω), η̃ₜ(ω))] − ½ trace (Dg[F(w₀(ω)(t), η̃(ω)(t))] ∘ g[F(w₀(ω)(t), η̃(ω)(t))]) = H[X̃ₜ(ω,·,η)] − ½ trace Dg[X̃(ω,t,η)] ∘ g[X̃(ω,t,η)],   0 < t ≤ a.

Let L₂(ω) > 0 be the Lipschitz constant of H̃(t,·,ω) on B̃(ω). Take any η₁, η₂ ∈ B, t ∈ [r,a] and −r ≤ s₁ ≤ s₂ ≤ 0. Then

(32)    |[η̃₁ₜ(ω) − η̃₂ₜ(ω)](s₂) − [η̃₁ₜ(ω) − η̃₂ₜ(ω)](s₁)| ≤ …
for all t' ∈ [0,a]. Therefore (32) gives

(34)    |[η̃₁ₜ(ω) − η̃₂ₜ(ω)](s₂) − [η̃₁ₜ(ω) − η̃₂ₜ(ω)](s₁)| ≤ ‖η₁ − η₂‖_C e^{L₂(ω)a} L₂(ω) |s₁ − s₂|.
This last inequality clearly shows that the map (30) is Lipschitz on B uniformly with respect to t ∈ [r,a]. Now ω ∈ Ω_a, so the map

[0,a] → Cᵃ(J,ℝᵐ),   t ↦ w₀ᵗ(ω)

is continuous (Lemma (2.1)). By continuity of (26), the map

[r,a] × C(J,ℝⁿ) → Cᵃ(J,ℝᵐ) × Cᵃ(J,ℝⁿ) ≅ Cᵃ(J,ℝᵐ⁺ⁿ),   (t,η) ↦ (w₀ᵗ(ω), η̃ₜ(ω))

is also continuous. Thus, composing with the continuous map

(36)    (ξ,η) ↦ F ∘ (ξ,η)

(Lemma (3.2)), one gets the continuity of the sample function

X̃(ω,·,·): [r,a] × C(J,ℝⁿ) → Cᵃ(J,ℝⁿ),   (t,η) ↦ F ∘ (w₀ᵗ(ω), η̃ₜ(ω)).
To prove the final assertion (iv) of the theorem, fix t ∈ [r,a] and compose the isometric embedding

C(J,ℝⁿ) → Cᵃ(J,ℝᵐ) × Cᵃ(J,ℝⁿ),   η ↦ (w₀ᵗ(ω), η)

with the map (30), deducing that the map

C(J,ℝⁿ) → Cᵃ(J,ℝᵐ) × Cᵃ(J,ℝⁿ),   η ↦ (w₀ᵗ(ω), η̃ₜ(ω))

is Lipschitz on each bounded set in C(J,ℝⁿ) with Lipschitz constant independent of t ∈ [r,a]. Applying Lemma (3.2) once more shows that X̃(ω,t,·): C(J,ℝⁿ) → Cᵃ(J,ℝⁿ) is Lipschitz on every bounded set in C(J,ℝⁿ), with Lipschitz constant independent of t ∈ [r,a]. The compactness of X̃(ω,t,·): C(J,ℝⁿ) → C(J,ℝⁿ) is then a consequence of the last statement together with the compactness of the embedding Cᵃ(J,ℝⁿ) ↪ C(J,ℝⁿ) (viz. Ascoli's Theorem). □

The reader may easily check that the construction in the above proof still works when the drift coefficient is time-dependent, i.e. for the stochastic FDE dx(t)
= H₁(t,xₜ)dt + g(x(t))dw(t),   0 < t ≤ a,   x₀ = η ∈ C(J,ℝⁿ),

with H₁: [0,a] × C(J,ℝⁿ) → ℝⁿ and g: ℝⁿ → L(ℝᵐ,ℝⁿ). Suppose the map (t,η) ↦ H₁(t,η) in the stochastic FDE (III) is continuous, and Lipschitz in η over bounded subsets of C(J,ℝⁿ) uniformly with respect to t ∈ [0,a]. Assume also that H₁ satisfies a linear growth condition:
(37)    |H₁(t,η)| ≤ K(1 + ‖η‖_C),   η ∈ C(J,ℝⁿ), t ∈ [0,a],

for some K > 0. Then all the conclusions of Theorem (2.1) hold for (III). Unfortunately, it is not possible to deal with time-dependent diffusions as above, except perhaps in the following rather special case:

Corollary (2.1.2): In the stochastic FDE
dx(t) = H₁(t,xₜ)dt + g₁(t,x(t))dw(t),   0 < t ≤ a, … suppose (t,η) ↦ D₂H₁(t,η) is continuous. Then for all ω ∈ Ω_a … Represent the coefficients of (D III) as maps H̃: Ω × [0,a] × C(J,ℝⁿ) → ℝⁿ, g̃: Ω × [0,a] × C(J,ℝⁿ) → L(ℝᵐ,ℝⁿ) defined by

H̃(ω,t,ξ) = D₂H₁(t, X(ω,t,η))(ξ),   g̃(ω,t,ξ) = Dg(X(ω,t,η)(0))(ξ(0))

for ω ∈ Ω_a, t ∈ [0,a], ξ ∈ C(J,ℝⁿ), with η ∈ C(J,ℝⁿ) fixed. If θ₁, θ₂ ∈ L²(Ω,C(J,ℝⁿ)), then

∫_Ω |H̃(ω,t,θ₁(ω)) − H̃(ω,t,θ₂(ω))|² dP(ω) ≤ ‖D₂H₁‖² ‖θ₁ − θ₂‖²_{L²(Ω,C)}
for all t ∈ [0,a]. Also, for any θ ∈ L²(Ω,C(J,ℝⁿ);Fₜ) the map ω ↦ H̃(ω,t,θ(ω)) is Fₜ-measurable, because it is almost surely a limit as h → 0 of the Fₜ-measurable maps

ω ↦ (1/h)[H(X(ω,t,η) + hθ(ω)) − H(X(ω,t,η))].

The map g̃ satisfies similar conditions. Hence Condition (E) of Chapter II is satisfied, and by Theorem (II.2.1) equation (D III) has a unique solution in L²(Ω,C([-r,a],ℝⁿ)). Assume now that D₂H₁ is continuous but not necessarily bounded on [0,a] × C(J,ℝⁿ). Take any two (Fₜ)₀≤t≤a-adapted processes … ; the maps

ω ↦ (w₀ᵗ(ω), η),   ω ↦ H̃(t,η,ω)

are of class L^p for any p > 1, for fixed t ∈ [0,a], η ∈ C(J,ℝⁿ). Therefore X̃(·,t,η) ∈ L^p(Ω,C(J,ℝⁿ)) for all p > 1. Just as for stochastic ODE's (Sussmann [74], Doss [12]), the Frobenius condition is not required when the noise w is one-dimensional.

Corollary (2.1.4): If w is one-dimensional Brownian motion (m = 1), the Frobenius conditions (Fr) and (38) may be dropped in Theorem (2.1) and its corollaries.

Proof: Note that every C¹ map g: ℝⁿ → L(ℝ,ℝⁿ) satisfies the identity
§3. Delayed Diffusion: An Example of Erratic Behaviour

In the previous section (§2) it was established, in some detail, that for ordinary diffusion coefficients a stochastic FDE may admit a nice version of its trajectory field. It is our intention in the present section to show that such continuous versions of the trajectory random field never exist if the diffusion coefficient depends on the past. To see this, let us consider the one-dimensional linear stochastic delay differential equation (SDDE):

(VI)    dx(t) = x(t−1)dw(t)
0 < t ≤ 1, x₀ = η ∈ C. … the map η ↦ p(0,η,t,·) is (uniformly) continuous into the narrow (weak*) topology of probability measures on C. So if we set X̃(ω̃,t,η) = w̃(t,η), ω̃ ∈ Ω̃, t ∈ [0,1], η ∈ C, then because P̃ ∘ p̃(t,η)⁻¹ = p(0,η,t,·) one can easily see that X̃ is isonomous to the trajectory field {ηxₜ: t ∈ [0,1], η ∈ C} and is continuous in probability on [0,1] × C. Notice that X̃(·,t,η) is measurable with respect to the σ-algebra F̃ₜ = σ{w̃(u,η): u ∈ [0,t], η ∈ C} for each t ∈ [0,1], η ∈ C. Of course, for a.a. ω̃ ∈ Ω̃ and all t ∈ (0,1] the map X̃(ω̃,t,·) is discontinuous everywhere on C.

(iii) A further discussion of stochastic DDE's will be carried out in the course of our next chapter on examples of stochastic FDE's. There we shall see that a measurable version X: Ω × [0,1] × C → C of the trajectory field {ηxₜ: t ∈ [0,1], η ∈ C} always exists. It is however highly non-linear on C, i.e. for each t ∈ [0,1]

P{ω: ω ∈ Ω, X(ω,t,λη₁ + μη₂) = λX(ω,t,η₁) + μX(ω,t,η₂) for all η₁, η₂ ∈ C, λ, μ ∈ ℝ} = 0

despite the linearity of the equation (VI), and the fact that

P{ω: ω ∈ Ω, X(ω,t,λη₁ + μη₂) ≠ λX(ω,t,η₁) + μX(ω,t,η₂)} = …

for all λ, μ ∈ ℝ, t ∈ [0,1], η₁, η₂ ∈ C. See §VI.3.

(iv) In the next section we shall show that on the interval [1,2] the map η ↦ p(0,η,t,·) is locally compact into the weak* topology of all regular Borel probability measures on C. So by a theorem of Prohorov (Schwartz [71], Theorem 21, p. 74; Rao [67], p. 125) we can construct a process isonomous to {ηxₜ: t ∈ [1,2], η ∈ C} on the probability space (C^{[1,2]} × C, …, …), with … a Radon probability measure on C^{[1,2]} × C. It is not clear, however, if the trajectory field {ηxₜ: t ∈ [1,2], η ∈ C} is then isonomous to a continuous process Ω̃ × [1,2] × C → C.

§4.
Regularity in Probability for Autonomous Systems
So far we have been looking at the question of establishing suitable versions for the trajectory (random) field of a stochastic FDE. The simple but pathological example of §3 kills all hope of obtaining a sample-continuous version X: Ω × [0,a] × C → C of the trajectory field {ηxₜ: t ∈ [0,a], η ∈ C} for our general stochastic FDE

(I)    dx(t) = H(xₜ)dt + G(xₜ)dw(t),   x₀ = η ∈ C = C(J,ℝⁿ),   J = [-r,0].
On the other hand, Theorem (2.1) of §2 indicates that the trajectory field admits good compactifying versions in the case of ordinary diffusion coefficients. It therefore seems necessary, in dealing with regularity of (I), to shift our viewpoint towards distributional properties of the trajectory field rather than sample-function behaviour. Indeed, we motivate ourselves by the conclusions of Theorem (2.1) to show that their parallels in probability hold good under fairly general assumptions on the coefficients of the stochastic FDE (I) above. In particular, these conclusions are valid for the delayed diffusion example (VI) in §3 as well as the ordinary 'Frobenius type' one (II) of §2. To fix ideas, recall that (I) is defined on the filtered probability space (Ω,F,(Fₜ)₀≤t≤a,P). Our study of the regularity properties of these transition probabilities will turn on the following result, which is taken from Stroock and Varadhan [73] (Garsia, Rodemich and Rumsey [23]):

Theorem (4.1): Let E be a real Banach space and y: Ω × [0,a] → E an (F ⊗ Borel [0,a], Borel E)-measurable process with almost all sample paths continuous. Suppose that for each t ∈ [0,a], y(·,t) ∈ L^{2k}(Ω,E;F) in the Bochner sense, and there is a number C = C(a,k) > 0 such that

E|y(·,t₁) − y(·,t₂)|^{2k} ≤ C|t₁ − t₂|^k

for all t₁, t₂ ∈ [0,a]. Then for every 0 < α < … and any real N > 0, …

Proof: The result follows directly from Corollary 2.1.4 and Exercise 2.4.1 of Stroock and Varadhan ([73], pp. 47-61), noting that
∫₀ᵃ ∫₀ᵃ |t₁ − t₂|^{k(1−2α)−2} dt₁ dt₂ = 2a^{k(1−2α)} / (k(1−2α)[k(1−2α)−1]),   0 < α < ½. □
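The hypothesis of Theorem (4.1) is a Kolmogorov-type moment condition, E|y(·,t₁) − y(·,t₂)|^{2k} ≤ C|t₁ − t₂|ᵏ. Brownian motion satisfies it for every k, since an increment over dt is N(0, dt) and so E|dw|^{2k} = (2k−1)!! dtᵏ, e.g. E|dw|⁴ = 3 dt². A quick Monte Carlo check of the k = 2 case (sample size and seed are arbitrary choices of this sketch):

```python
import numpy as np

# Check E|dw|^4 = 3 dt^2 for Brownian increments of span dt.
rng = np.random.default_rng(0)
dt = 0.01
incr = rng.normal(0.0, np.sqrt(dt), size=200_000)  # Brownian increments
ratio = np.mean(incr ** 4) / dt ** 2               # should be close to 3
```

The same scaling with exponent k on |t₁ − t₂| is exactly what feeds the α-Hölder continuity conclusion below.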
The first main result tells us that the sample paths of the solution ηx of (I) are almost all α-Hölder continuous on [0,a] for any 0 < α < ½. Thus the trajectory field segment {ηxₜ: t ∈ [r,a], η ∈ C} on [r,a] will always lie in Cᵃ(J,ℝⁿ) for 0 < α < ½. Before proving this we need estimates on higher-order moments of the trajectory field. Such estimates are obtained from the following two theorems, which are well known for stochastic ODE's (r = 0) (Gihman and Skorohod [24], pp. 44-50; Friedman [22], pp. 102-107). The reader is invited to make the obvious modifications in the classical proofs.

Theorem (4.2) (Local Uniqueness): Suppose Hᵢ: Ω × C → ℝⁿ, Gᵢ: Ω × C → L(ℝᵐ,ℝⁿ), i = 1,2, are such that Hᵢ(·,η), Gᵢ(·,η) are F₀-measurable for each η ∈ C, and there is a constant L > 0 so that

|Hᵢ(ω,η₁) − Hᵢ(ω,η₂)| ≤ L‖η₁ − η₂‖,   ‖Gᵢ(ω,η₁) − Gᵢ(ω,η₂)‖ ≤ L‖η₁ − η₂‖, …

… where the constant in Theorem (4.3) depends only on k, m and n (independent of η, H, G). Using the linear growth condition on H, G together with Theorem (4.3), it follows that there are positive constants K_{k,2}, K_{k,3}, K_{k,4}, K_{k,5}, depending on K, k, a, m, n and independent of η, t, so that

(1)    E|ηx(t₁) − ηx(t₂)|^{2k} ≤ K_{k,2}|t₁ − t₂|^{2k−1} ∫_{t₁}^{t₂} (1 + E‖xᵤ‖^{2k}) du + K_{k,3}|t₁ − t₂|^{k−1} ∫_{t₁}^{t₂} (1 + E‖xᵤ‖^{2k}) du ≤ K_{k,5}(tᵏ + 1)(1 + ‖η‖^{2k})|t₁ − t₂|ᵏ.
Now apply Theorem (4.1) to the continuous process ηx|[0,t], choosing k > 1 sufficiently large such that 0 < α < ½ − 1/(2k); thus

(2)    P{ω: ω ∈ Ω, sup_{t₁≠t₂} |ηx(ω)(t₁) − ηx(ω)(t₂)| / |t₁ − t₂|^α > N} ≤ K_{k,6}(tᵏ + 1)(1 + ‖η‖^{2k}) / N^{2k},

independent of t ∈ [0,a], for every N > 0 and η ∈ C, where K_{k,6} = 2a^{k(1−2α)} K_{k,5} / (k(1−2α)[k(1−2α)−1]) up to a factor depending only on k and α. Define the sequence {Ω'ₖ}ₖ₌₁^∞ of subsets Ω'ₖ of Ω by Ω'ₖ = {ω: ω ∈ Ω, sup …}. Then Ω'ₖ ∈ F, Ω'ₖ₊₁ ⊂ Ω'ₖ, and

… > N} ≤ … + K_{k,7}(tᵏ + 1)(1 + ‖η‖^{2k}) / N^{2k},

where K_{k,7} is independent of t, η. Choosing C'₂ = K_{k,7} + 2^{2k}C'₁ we immediately obtain assertion (4) of the theorem. We next prove that for any bounded set B ⊂ C, the set of probability measures {p(0,η,t,·): η ∈ B, t ∈ [r,a]} is uniformly tight and hence relatively weakly* compact in M_p(C) by Prohorov's Theorem. Let ε > 0 and choose M, N_ε > 0 sufficiently large so that B ⊂ {η: η ∈ C, ‖η‖_C ≤ M} and C₂(aᵏ + 1)(1 + M^{2k}) / N_ε^{2k} < ε. Therefore

sup {p(0,η,t,C \ K_{N_ε}): η ∈ B, t ∈ [r,a]} ≤ ε,

so the map η ↦ p(0,η,t,·) takes bounded sets into relatively weakly* compact sets. □

Denote by M_p(Cᵃ) the space of all Borel probability measures on Cᵃ(J,ℝⁿ) (0 < α < ½), given the weak* topology. We shall show below that for t > r the transition probabilities satisfy p(0,η,t,·) ∈ M_p(Cᵃ) for 0 < α < ½.
d₀(θ₁,θ₂) = inf_{ε>0} [ε + P{ω: ω ∈ Ω, ‖θ₁(ω) − θ₂(ω)‖_C > ε}]

for all θ₁, θ₂ ∈ L⁰(Ω,C), and

d₀ᵃ(θ₁,θ₂) = inf_{ε>0} [ε + P{ω: ω ∈ Ω, ‖θ₁(ω) − θ₂(ω)‖_{Cᵃ} > ε}]

for all θ₁, θ₂ ∈ L⁰(Ω,Cᵃ). Under these pseudo-metrics and a global Lipschitz condition on the coefficients H, G of the stochastic FDE (I), we get our last main result concerning the regularity of the map η ↦ ηxₜ into L⁰(Ω,C) or L⁰(Ω,Cᵃ):

Theorem (4.7): Suppose H: C → ℝⁿ and G: C → L(ℝᵐ,ℝⁿ) in (I) are globally Lipschitz with a common Lipschitz constant L > 0. Let 0 < α < ½. Then

(5)    P{ω: ω ∈ Ω, sup_{t₁,t₂∈[0,a], t₁≠t₂} |…(t₁) − …(t₂)| / |t₁ − t₂|^α > N} ≤ …,
(6)    P{ω: ω ∈ Ω, ‖η₁xₜ(ω) − η₂xₜ(ω)‖_{Cᵃ} > N} ≤ (C₃ / N^{2k}) ‖η₁ − η₂‖_C^{2k},   N > 0, t ∈ [r,a];

(7)    d₀(η₁xₜ, η₂xₜ) ≤ C₄ ‖η₁ − η₂‖_C^{2k/(2k+1)},   t ∈ [r,a]. …

(10)    E|y(·)(t₁) − y(·)(t₂)|^{2k} ≤ K₂K₅ |t₁ − t₂|ᵏ ‖η₁ − η₂‖_C^{2k} = K₆ |t₁ − t₂|ᵏ ‖η₁ − η₂‖_C^{2k}
for 0 < t₁ < t₂ < a, where K₆ = K₂K₅. Now apply Theorem (4.1) to get a constant C₃ = C₃(k,α,L,m,n,a), independent of η₁, η₂, N > 0, such that

P{ω: ω ∈ Ω, sup_{t₁,t₂∈[0,a], t₁≠t₂} |y(·)(t₁) − y(·)(t₂)| / |t₁ − t₂|^α > N} ≤ (C₃ / N^{2k}) ‖η₁ − η₂‖_C^{2k}.
d₀(η₁xₜ, η₂xₜ) = inf_{ε>0} [ε + P{ω: ω ∈ Ω, ‖η₁xₜ(ω) − η₂xₜ(ω)‖_C > ε}] ≤ inf_{ε>0} [ε + (K₅/ε^{2k}) E‖yₜ‖^{2k}] ≤ inf_{ε>0} [ε + (K₅/ε^{2k}) ‖η₁ − η₂‖_C^{2k}].

Define the C^∞ function f: (0,∞) → ℝ by

f(ε) = ε + (K₅/ε^{2k}) ‖η₁ − η₂‖_C^{2k},   ε > 0.

Then f attains its absolute minimum value at ε₀ > 0, where f'(ε₀) = 0 and f''(ε₀) > 0, i.e.

ε₀^{2k+1} = 2k K₅ ‖η₁ − η₂‖_C^{2k}.

Therefore

d₀(η₁xₜ, η₂xₜ) ≤ inf_{ε>0} f(ε) = f(ε₀) = ε₀(1 + 1/(2k)) = (2kK₅)^{1/(2k+1)} ‖η₁ − η₂‖_C^{2k/(2k+1)} (1 + 1/(2k)).

Hence (7) holds with C₄ = (2kK₅)^{1/(2k+1)}(1 + 1/(2k)).
Choose and fix an integer k₀ = k₀(α) such that k₀ > 1/(1−2α). Then by a similar argument to the above one, we can prove that

… > 0 such that …

for a.a. ω ∈ Ω, all η₁, η₂ ∈ C.
VI Examples
§1.
Introduction
In this chapter we illustrate our general results of Chapters II, III, IV and V on various examples of stochastic FDE's. These examples include stochastic delay equations (§3) and linear FDE's which are forced by white noise (§4). The latter class of stochastic FDE's corresponds to equations of the form

dx(t) = H(xₜ)dt + g(t)dw(t),   t > 0
x₀ = η ∈ C = C(J,ℝⁿ), where H: C → ℝⁿ is a continuous linear drift and g: ℝ → L(ℝᵐ,ℝⁿ) is locally integrable. Section 4 is joint work of the author with Heinrich Weizsäcker and Michael Scheutzow: the asymptotic behaviour of trajectories to the above equation is analysed along the stable and unstable subspaces in C of the deterministic linear system dx(t)
=
H(xt)dt, t > 0.
Applying well-known results of J. Hale ([26] pp. 165-190), we find that, if g is constant, the forced system is globally asymptotically stochastically stable (§3) whenever the unforced linear system is globally asymptotically stable. The stochastically stable case corresponds to the existence of a limiting invariant Gaussian measure under the stochastic flow. On the other hand, when g is periodic and the unforced system is asymptotically stable, the transition probabilities converge to a periodic family of Gaussian measures on C.

§2. Stochastic ODE's

These correspond to the case when our system has no memory, i.e. r = 0; thus J = {0} and C(J,ℝⁿ) is just the Euclidean n-dimensional space ℝⁿ. Our basic stochastic FDE (I) of Chapter II then takes the simple differential form
(I)    dx(t) = g(t,x(t))dz(t),   t > 0,   x(0) = v ∈ L²(Ω,ℝⁿ).
Such stochastic ODE's were first studied by Itô ([36]) in 1951 and have since been the subject of intensive research. Indeed there are now excellent texts on the subject, such as the works of Gihman-Skorohod [24], Friedman [22], Stroock-Varadhan [73] on Euclidean space, and Ikeda-Watanabe [35] and Elworthy [19] on differentiable manifolds. We therefore make no attempt to give any account of the behaviour of trajectories to (I), but content ourselves by noting that Theorems (II.2.1), (II.2.2), (II.3.1), (III.1.1), (III.2.1), (III.3.1) are all well known to hold for stochastic ODE's of type (I) or its autonomous versions

(II)    dx(t) = g(x(t))dz(t),   t > 0,   x(0) = v

(III)    dx(t) = h(x(t))dt + g(x(t))dw(t),   t > 0,   x(0) = v
for z a continuous semi-martingale, w m-dimensional Brownian motion, and coefficients h: ℝⁿ → ℝⁿ, g: ℝⁿ → L(ℝᵐ,ℝⁿ). The associated semigroup {Pₜ}ₜ≥₀ on C_b(ℝⁿ,ℝ) for (III) is, however, always strongly continuous in the supremum norm of C_b(ℝⁿ,ℝ). A strong infinitesimal generator A: D(A) ⊂ C_b → C_b can therefore be computed to get

(*)    A(φ)(v) = Dφ(v)(h(v)) + ½ trace D²φ(v) ∘ (g(v) × g(v)),   v ∈ ℝⁿ,
for φ C²-bounded on ℝⁿ. The reader may check that this agrees formally with the conclusion of Theorem (IV.3.2) (or that of Theorem (IV.4.3)). See [22] and [24] for a classical derivation of formula (*). The trajectory field {ᵛx(t): t ≥ 0, v ∈ ℝⁿ} of (III) has a measurable version consisting a.s. of diffeomorphisms in case h and g are sufficiently smooth (Elworthy [19], Kunita [46], Malliavin [51]). Furthermore, if h and g are linear maps, then selecting a separable measurable version X: Ω × ℝ≥0 × ℝⁿ → ℝⁿ of the trajectory field implies that for a.a. ω ∈ Ω the map X(ω,t,·): ℝⁿ → ℝⁿ is a linear bijection for all t ≥ 0. This sharply contrasts the corresponding behaviour for linear delayed diffusions of §(IV.3).
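The generator formula (*) can be sanity-checked numerically via the short-time quotient (E φ(x(t)) − φ(v))/t → A(φ)(v). In one dimension (*) reads A(φ)(v) = φ'(v)h(v) + ½g(v)²φ''(v); below we pick the illustrative coefficients h(x) = −x, g(x) = 1 (an Ornstein-Uhlenbeck process, our own choice of example, not one from the text) and φ(x) = x², for which A(φ)(v) = −2v² + 1.

```python
import numpy as np

# Monte Carlo estimate of (E phi(x(t)) - phi(v)) / t for small t, under
# Euler-Maruyama for dx = -x dt + dw; expect about A(phi)(1) = -1.
rng = np.random.default_rng(1)
v, t_small, n_steps, n_paths = 1.0, 0.01, 10, 400_000
dt = t_small / n_steps
x = np.full(n_paths, v)
for _ in range(n_steps):                        # Euler-Maruyama steps
    x += -x * dt + np.sqrt(dt) * rng.normal(size=n_paths)
gen_est = (np.mean(x ** 2) - v ** 2) / t_small  # should be near -1
```

The residual error combines Monte Carlo noise, Euler bias, and the O(t) error of the difference quotient, so only rough agreement is expected.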
§3. Stochastic Delay Equations
Let (Ω,F,(Fₜ)ₜ≥₀,P) be a filtered probability space with z: Ω → C(ℝ≥0,ℝᵐ) a continuous ℝᵐ-valued martingale adapted to (Fₜ)ₜ≥₀ and satisfying

(E)(i)    E(z(·)(t₂) − z(·)(t₁) | F_{t₁}) ≤ K(t₂ − t₁),   E(|z(·)(t₂) − z(·)(t₁)|² | F_{t₁}) ≤ K(t₂ − t₁)

a.s. whenever 0 ≤ t₁ < t₂, for some K > 0. Suppose hⱼ: ℝⁿ → ℝⁿ, j = 1,…,p, and gᵢ: ℝⁿ → L(ℝᵐ,ℝⁿ), i = 1,…,q, are Lipschitz maps. Let there be given p + q delays as random variables rⱼ: Ω → ℝ≥0, j = 1,…,p, dᵢ: Ω → ℝ≥0, i = 1,…,q, which are P-essentially bounded and F₀-measurable. Define

r = max_{1≤j≤p, 1≤i≤q} (essup rⱼ(ω), essup dᵢ(ω)).
Define h̃: L²(Ω,C(J,ℝⁿ)) → L²(Ω,ℝⁿ) and g̃: L²(Ω,C(J,ℝⁿ)) → L²(Ω,L(ℝᵐ,ℝⁿ)) by

h̃(ψ)(ω) = Σⱼ₌₁ᵖ hⱼ[ψ(ω)(−rⱼ(ω))],   g̃(ψ)(ω) = Σᵢ₌₁^q gᵢ[ψ(ω)(−dᵢ(ω))]

for ψ ∈ L²(Ω,C(J,ℝⁿ)) and a.a. ω ∈ Ω. To see that h̃, g̃ are globally Lipschitz, let L > 0 be a common Lipschitz constant for all the hⱼ's and gᵢ's. Then, if ψ₁, ψ₂ ∈ L²(Ω,C(J,ℝⁿ)), we have
‖h̃(ψ₁) − h̃(ψ₂)‖²_{L²(Ω,ℝⁿ)} ≤ p Σⱼ₌₁ᵖ ∫_Ω |hⱼ[ψ₁(ω)(−rⱼ(ω))] − hⱼ[ψ₂(ω)(−rⱼ(ω))]|² dP(ω) ≤ pL² Σⱼ₌₁ᵖ ∫_Ω |ψ₁(ω)(−rⱼ(ω)) − ψ₂(ω)(−rⱼ(ω))|² dP(ω) ≤ p²L² ‖ψ₁ − ψ₂‖²_{L²(Ω,C)}.

Similarly g̃ is Lipschitz with Lipschitz constant qL. It remains now to verify the adaptability condition E(iii) of §(II.1). To do this it is sufficient to show that for each ψ ∈ L²(Ω,C;Fₜ), h̃(ψ) and g̃(ψ) are Fₜ-measurable, for t ≥ 0. Let ψ ∈ L²(Ω,C) be Fₜ-measurable, and ρ: J × C → ℝⁿ the evaluation map (s,η) ↦ η(s), s ∈ J, η ∈ C. Then ρ is continuous, and since each rⱼ is F₀-measurable, it follows that

ω ↦ h̃(ψ)(ω) = Σⱼ₌₁ᵖ hⱼ[ρ(−rⱼ(ω), ψ(ω))]

is Fₜ-measurable. Therefore h̃(ψ) ∈ L²(Ω,ℝⁿ;Fₜ); similarly g̃(ψ) ∈ L²(Ω,L(ℝᵐ,ℝⁿ);Fₜ). Hence all the conditions of Theorem (II.2.1) are satisfied, and so a unique strong solution θx ∈ L²(Ω,C([-r,a],ℝⁿ)) of the stochastic DDE (IV) exists with initial process θ. The trajectory {θxₜ: 0 ≤ t ≤ a} is defined for every a > 0, is (Fₜ)-adapted and has continuous sample paths. Moreover each map

L²(Ω,C;F₀) → L²(Ω,C;Fₜ),   θ ↦ θxₜ,   t ≥ 0,

is globally Lipschitz, by Theorem (II.3.1). Now suppose in (IV) that z is m-dimensional Brownian motion w adapted to (Fₜ)ₜ≥₀. … given ε, δ > 0 there exists k₀ = k₀(ε,δ) such that E|rₖ − r₀|² < ε' = δ²ε for all k > k₀. Suppose k > k₀ and for any u ∈ [0,a] denote by
the characteristic functions of the sets {ω: ω ∈ Ω, r₀(ω) < u}, {ω: ω ∈ Ω, r₀(ω) ≥ u}, {ω: ω ∈ Ω, rₖ(ω) < u}, {ω: ω ∈ Ω, rₖ(ω) ≥ u} in F₀, and A = {(ω,v): ω ∈ Ω, v ∈ [0,a], v + r₀(ω) > u}, Bₖ = {(ω,v): ω ∈ Ω, v ∈ [0,a], v + rₖ(ω) < u} in F₀ ⊗ Borel [0,a], respectively. Using Theorem (I.8.3) for the stochastic integral, we may write
… = η(0) − η(u−r₀), if rₖ < u and r₀ > u.

∫ χ_{(r₀ ≥ u)} … dP ≤ ε + 4‖η‖²_C P{ω: ω ∈ Ω, |r₀(ω) − rₖ(ω)| > δ} ≤ ε + (4/δ²)‖η‖²_C E|r₀ − rₖ|² ≤ ε + 4‖η‖²_C ε    (4)

because k > k₀. Similarly, for k > k₀, we get (5), and

E|η(u−rₖ) − η(u−r₀)|² χ_{(rₖ ≥ u)} = ∫_{|r₀−rₖ| ≤ δ} |η(u−rₖ) − η(u−r₀)|² χ_{(rₖ ≥ u)} dP + ∫_{|r₀−rₖ| > δ} |η(u−rₖ) − η(u−r₀)|² χ_{(rₖ ≥ u)} dP ≤ ε + (4/δ²)‖η‖²_C E|r₀ − rₖ|² ≤ (1 + 4‖η‖²_C)ε.    (6)

Combining (4), (5) and (6), one gets

(7)    E|ηx(u−rₖ) − ηx(u−r₀)|² ≤ …   for all u ∈ [0,a], k > k₀.

Note that k₀ is independent of u ∈ [0,a]. A similar argument applied to the last integrand in (1) yields, for every ε > 0, a k₀ > 0 such that

(8)    …

for all u ∈ [0,a], k > k₀. Now put together the inequalities (1), (2), (3), (7) and (8) to obtain
E sup … ; ηx^{ω'} = limₖ ηx^{k,ω'} in L²(Ω,C([-r,a],ℝⁿ)) for a.a. ω' ∈ Ω. But (10) implies that the map Ω ∋ ω' ↦ ηx^{k,ω'} ∈ L²(Ω,C([-r,a],ℝⁿ)) is F₀-measurable and Ω × Ω ∋ (ω',ω) ↦ ηx^{k,ω'}(ω) ∈ C([-r,a],ℝⁿ) is F₀ ⊗ F_a-measurable for each k ≥ 1. By the Stricker-Yor Lemma (Theorem (I.5.1)) the random family {ηx^{ω'}: ω' ∈ Ω} admits an F₀ ⊗ F_a-measurable version ηx: Ω × Ω → C([-r,a],ℝⁿ) such that for a.a. ω' ∈ Ω, ηx(ω',·) = ηx^{ω'}(·) a.s. Moreover the map Ω ∋ ω' ↦ ηx^{ω'} ∈ L²(Ω,C([-r,a],ℝⁿ)) is F₀-measurable and is in fact essentially bounded because, for a.a. ω' ∈ Ω,

E sup |ηx^{ω'}(t)|² ≤ 3‖η‖²_C + 3 E sup |∫₀ᵗ h(ηx^{ω'}(u−r(ω'))) du|² + …

… sequences of F₀-measurable delays converging to r₀, d₀ in L². The reader may use a very similar argument to the one used in deriving (9) to prove that for every ε > 0, there is a k₀ = k₀(ε,η) > 0 such that

sup_{ω'∈Ω} E … < ε.

Let the delays be independent of F_a^w = σ{w(·)(t): 0 ≤ t ≤ a}. Then

P{ω: ω ∈ Ω, ηxₜ(ω) ∈ B} = ∫_Ω p(ω',0,η,t,B) dP(ω')

for every η ∈ C(J,ℝⁿ), t ∈ [0,a] and B ∈ Borel C. Furthermore the trajectory field is a time-homogeneous continuous process on C(J,ℝⁿ), viz. … ηᵏ(−1 + j/k) … for u ∈ [j/k,(j+1)/k], j = 0,1,2,…,k. Fix any t₀ ∈ [0,1] and let 0 ≤ j₀ ≤ k be such that t₀ ∈ [j₀/k,(j₀+1)/k]. Then
∫₀^{t₀} w(ω)(u) ηᵏ(u−1)(ηᵏ)'(u−1) du
= Σⱼ₌₀^{j₀−1} k[η(−1 + (j+1)/k) − η(−1 + j/k)] ∫_{j/k}^{(j+1)/k} w(ω)(u) ηᵏ(u−1) du
+ k[η(−1 + (j₀+1)/k) − η(−1 + j₀/k)] ∫_{j₀/k}^{t₀} w(ω)(u) ηᵏ(u−1) du.

But the maps η ↦ η(−1 + j/k), (u,η) ↦ ηᵏ(u−1) are continuous and (ω,u) ↦ w(ω)(u) is measurable, so the integrals

(ω,η) ↦ ∫_{j/k}^{(j+1)/k} w(ω)(u) ηᵏ(u−1) du,   (ω,η) ↦ ∫_{j₀/k}^{t₀} w(ω)(u) ηᵏ(u−1) du

depend measurably on the pair (ω,η). From the preceding equality, it follows that the map

(ω,η) ↦ ∫₀ᵗ w(ω)(u) ηᵏ(u−1)(ηᵏ)'(u−1) du

is (F ⊗ Borel C)-measurable for every t ∈ [0,1]. Since this indefinite integral is continuous in t for each ω ∈ Ω₀, η ∈ C, it is easy to see that the map

(ω,t,η) ↦ ∫₀ᵗ w(ω)(u) ηᵏ(u−1)(ηᵏ)'(u−1) du

is (F ⊗ Borel [0,1] ⊗ Borel C)-measurable. From the definition of Yᵏ, it follows that (ω,t,η) ↦ Yᵏ(ω,t,η)(s) is measurable for each s ∈ [-1,0]. Thus Yᵏ is (F ⊗ Borel [0,1] ⊗ Borel C, Borel C)-measurable. Moreover, using integration by parts (Elworthy [19] p. 79), one has a.s.

Yᵏ(·,t,η)(s) = ∫₀^{t+s} [ηᵏ(u−1)]² dw(u)   for s ∈ [−t,0],   and   Yᵏ(·,t,η)(s) = 0   for s ∈ [−1,−t),
for t ∈ [0,1], η ∈ C, a.s. Since ηᵏ(s) → η(s) as k → ∞ uniformly in s ∈ [-1,0], it follows easily from Doob's inequality for the stochastic integral (Theorem (I.8.5)) that Yᵏ(·,t,η) → ηxₜ − η̃ₜ as k → ∞ in L²(Ω,C). Hence by the Stricker-Yor lemma for C-valued mappings we get a measurable version Y: Ω × [0,1] × C → C for the field {ηxₜ − η̃ₜ: t ∈ [0,1], η ∈ C}. A very similar argument to the above gives a measurable version for the trajectory field of the one-dimensional polynomial delay equation

dx(t) = [x(t−1)]^l dw(t),   0 < t ≤ 1, …

For the linear system

(X)    …,   x(0) = v ∈ ℝⁿ,   h ∈ L(ℝⁿ), g ∈ L(ℝⁿ,L(ℝᵐ,ℝⁿ)),

the trajectory field {ᵛx(t): t ≥ 0, v ∈ ℝⁿ} possesses a measurable version X: Ω × ℝ≥0 × ℝⁿ → ℝⁿ which is a.s. linear on ℝⁿ, i.e. for a.a. ω ∈ Ω, all t ≥ 0, X(ω,t,·) ∈ L(ℝⁿ). This follows from the easily-verifiable fact that for a measurable field Ω × ℝⁿ → ℝⁿ linearity in probability is equivalent to almost sure linearity. For the simple one-dimensional linear stochastic ODE dx(t) = x(t)dt
+ c x(t)dw(t),   t > 0,   x(0) = v ∈ ℝ,
Itô's formula shows that the process X(ω,t,v) = v e^{(1 − ½c²)t + c[w(ω)(t) − w(ω)(0)]}, a.a. ω ∈ Ω, t ≥ 0, v ∈ ℝ, gives a measurable version of the trajectory field which is a.s. linear in the third variable v ∈ ℝ. More generally a measurable version for the trajectory field of the linear system (X) can be constructed by solving the associated fundamental matrix equation, e.g. as in Arnold ([2], pp. 141-144).
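The explicit solution just displayed can be compared path by path with an Euler-Maruyama approximation driven by the same simulated Brownian increments; the Euler scheme is also exactly linear in the initial value v, mirroring the a.s. linearity of the trajectory field. Step size, c, and the sampled path are arbitrary choices of this sketch.

```python
import numpy as np

# dx = x dt + c x dw versus the closed form v*exp((1 - c^2/2)t + c w(t)).
rng = np.random.default_rng(2)
c, T, n = 0.5, 1.0, 10_000
dt = T / n
dw = np.sqrt(dt) * rng.normal(size=n)          # one Brownian path's increments

def euler(v):
    x = v
    for i in range(n):
        x += x * dt + c * x * dw[i]            # Euler-Maruyama step
    return x

exact = 2.0 * np.exp((1.0 - 0.5 * c ** 2) * T + c * dw.sum())
rel_err = abs(euler(2.0) - exact) / abs(exact)
lin_gap = euler(6.0) - 3.0 * euler(2.0)        # zero up to rounding: linear in v
```

The linearity check is exact because each Euler step multiplies the state by the same factor 1 + dt + c·dwᵢ regardless of v.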
§4. Linear FDE's Forced by White Noise

As before, we take (Ω,F,(Fₜ)_{t∈ℝ},P) to be a filtered probability space satisfying the usual conditions. Note that here we require the filtration (Fₜ)_{t∈ℝ} to be parametrized by all time, with an m-dimensional standard Brownian motion w: Ω → C(ℝ,ℝᵐ) adapted to it. Let H: C = C(J,ℝⁿ) → ℝⁿ be a continuous linear map and g: ℝ → L(ℝᵐ,ℝⁿ) be measurable such that ‖g(·)‖ is locally integrable over ℝ, where ‖·‖ is the operator norm on L(ℝᵐ,ℝⁿ). Consider the forced linear system

(XI)    dx(t) = H(xₜ)dt + g(t)dw(t),   t > 0,   x₀ = η ∈ C
as opposed to the unforced deterministic linear RFDE:

(XII)    dy(t) = H(yₜ)dt,   y₀ = η ∈ C.
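A quick numerical sketch of the pair (XI)/(XII): for the scalar one-delay drift H(φ) = −a·φ(−r) with constant forcing σ (all constants below are illustrative choices, not from the text), the forced equation reads dx(t) = −a x(t−1)dt + σ dw(t). Since the noise is additive with mean zero, the Monte Carlo mean of the forced paths should track the unforced deterministic system (XII).

```python
import numpy as np

# Euler-Maruyama for dx(t) = -a x(t-1) dt + sigma dw(t), x = 1 on [-1, 0],
# with the unforced path integrated alongside for comparison.
rng = np.random.default_rng(3)
a_coef, sigma, r, T, h = 1.0, 0.2, 1.0, 2.0, 1e-3
n_hist, n_fwd = int(round(r / h)), int(round(T / h))
n_paths = 2000

x = np.ones((n_paths, n_hist + n_fwd + 1))   # forced paths, history = 1
y = np.ones(n_hist + n_fwd + 1)              # unforced comparison path (XII)
for i in range(n_hist, n_hist + n_fwd):
    dw = np.sqrt(h) * rng.normal(size=n_paths)
    x[:, i + 1] = x[:, i] - a_coef * h * x[:, i - n_hist] + sigma * dw
    y[i + 1] = y[i] - a_coef * h * y[i - n_hist]

mean_gap = abs(x[:, -1].mean() - y[-1])      # small: forcing has mean zero
```

This is the pathwise counterpart of the variation-of-parameters picture developed below, where the forced solution is the unforced semiflow plus a stochastic convolution.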
The dynamics of (XII) is well understood via the fundamental work of J. Hale ([26], Chapter 7). In particular, the state space C splits in the form

(1)    C = U ⊕ S.

The subspace U is finite-dimensional, S is closed, and the splitting is invariant under the semigroup Tₜ: C → C, t ≥ 0, Tₜ(η) = yₜ for all η ∈ C, t ≥ 0 (Hale [26] pp. 168-173, cf. Mohammed [57] pp. 94-104). According to Hale [26], the subspace U is constructed by using the generalized eigenspaces corresponding to eigenvalues with non-negative real parts of the infinitesimal generator A_H of {Tₜ}ₜ≥₀, viz.

…   s ∈ [-r,0),   s = 0.
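For a concrete feel for this spectral construction, take the scalar one-delay drift H(φ) = a·φ(−r) (our own illustrative example): the eigenvalues of A_H are the roots of the characteristic equation λ = a·e^{−λr}, and U is spanned by eigenfunctions of roots with Re λ ≥ 0. For a = r = 1 there is exactly one real root (the Omega constant, about 0.5671), which Newton's method locates quickly:

```python
import numpy as np

# Newton iteration on f(l) = l - a*exp(-l*r) for the real characteristic root.
def real_root(a=1.0, r=1.0, lam=0.5, tol=1e-12):
    for _ in range(100):
        f = lam - a * np.exp(-lam * r)
        fp = 1.0 + a * r * np.exp(-lam * r)   # derivative of f
        lam -= f / fp
        if abs(f) < tol:
            break
    return lam

lam = real_root()   # ~ 0.567143, so dim U >= 1 for this choice of H
```

The remaining roots of this equation are complex with negative real parts, so here U is one-dimensional and S carries the decaying modes.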
For convenience, identify the spaces L(ℝᵐ,ℝⁿ), L(ℝⁿ) with the corresponding spaces of n × m and n × n real matrices. From the Riesz representation theorem, there is a (unique) L(ℝⁿ)-valued measure μ on J such that

H(φ) = ∫₋ᵣ⁰ φ(s) dμ(s)   for all φ ∈ C(J,ℝⁿ).
It is therefore possible to extend H to the Banach space C̃ = C̃(J,ℝⁿ) of all bounded measurable maps J → ℝⁿ, given the supremum norm. We denote this extension also by H. Solving the linear FDE (XII) for initial data in C̃, we can extend the semigroup {Tₜ}ₜ≥₀ to one on C̃, denoted also by Tₜ: C̃ → C̃, t ≥ 0. The splitting (1) of C is topological, so the projections Π^U: C → U, Π^S: C → S are continuous linear maps. Since dim U < ∞, Π^U has a representation by an L(ℝⁿ,U)-valued measure ρ on J, viz. Π^U(φ) = ∫₋ᵣ⁰ φ(s) dρ(s). This formula gives a natural extension to a continuous linear map Π^U: C̃ → U. Defining S̃ = {φ: φ ∈ C̃, Π^U(φ) = 0}, we see that C̃ has a topological splitting

(2)    C̃ = U ⊕ S̃.

The projection Π^S̃: C̃ → S̃ is continuous linear, being given by Π^S̃(φ) = φ − Π^U(φ) for all φ ∈ C̃.
"' by ~U and ~s"' respectively. The following When ~ E "'C, denote rr 0 (~) and ITS(~) lemma shows that the spli.tti.ng (2) is invariant under the semigroup {Tt}t>O. Lemma (4.1):
For each
~
E "'C, and t > 0,
\'le
have
"' [Tt(~)Ju = Tt(~u), [Tt(~)Js"' = Tt(~s). Proof: For E C, the result follows directly from the well-known invariance ---of the splitting ( 1) under {Tt}t>O. To prove it for E "'C, the ~
~
cons~der
following definition of weak continuity for linear opet·ators on C. A linear operator B~C + is '!..eakly continuous if whenever {~k}k=t is a uniformly bounded sequence in C with ~k(s) + 0 as k + oo for each s E J, then B(~k)(s) + ( ask+ oo for each s E J (cf. the 'weak continuity property (w 1)• of Lemma
C
(IV.3.1)).
The Riesz representation theorem implies that every continuous linear map C + U has a unique weakly continuous extension "'C + u. Hence for the first assertion of the lellllla to hold, it is enough to show that rr 0oTt and Ttorr0 are 192
both weakly continuous for all t > 0. lt is clear from the definition of nU:c"' -+ u that it is weakly conti.nuous. As the composition of weakly continuous linear operators on "'C is also weakly conti.nuous, it remains to show that each Tt:C"' -+ "'C is weakly continuous. This is so by the following lemma. The second assertion of the lemma follows from the first because
[Tₜ(φ)]^S̃ = Tₜ(φ) − [Tₜ(φ)]^U = Tₜ(φ) − Tₜ(φ^U) = Tₜ(φ^S̃),   t ≥ 0, φ ∈ C̃. □
Lemma (4.2): For each t ≥ 0, Tₜ: C̃ → C̃ is weakly continuous.

Proof: Let v(μ) be the total variation measure of the L(ℝⁿ)-valued measure μ on J representing H. Fix ξ ∈ C̃ and define ξ_f: [0,∞) → ℝ by

ξ_f(t) = ∫₋ᵣ⁰ |Tₜ(ξ)(s)| dv(μ)(s) + |Tₜ(ξ)(0)|,   t ≥ 0.

Now

Tₜ(ξ)(s) = ξ(0) + ∫₀^{t+s} H(T_u(ξ)) du   if t+s ≥ 0,   and   Tₜ(ξ)(s) = ξ(t+s)   if −r ≤ t+s < 0,

and C = v(μ)(J) + 1. By Gronwall's lemma, we obtain

ξ_f(t) ≤ ξ_h(t) + C ∫₀ᵗ ξ_h(u) e^{C(t−u)} du   for all t ≥ 0.
Now let {ξᵏ}ₖ₌₁^∞ be a uniformly bounded sequence in C̃ converging pointwise to 0; then the sequence {ξᵏ_h}ₖ₌₁^∞ is uniformly bounded on [0,t] and ξᵏ_h(t) → 0 as k → ∞ for each t ≥ 0, by the dominated convergence theorem. The last estimate then implies, again by dominated convergence, that ξᵏ_f(t) → 0 as k → ∞ for each t ≥ 0. In particular, Tₜ(ξᵏ)(0) → 0 as k → ∞ for every t ≥ 0. But for each s ∈ J,

Tₜ(ξᵏ)(s) = ξᵏ(t+s)   if −r ≤ t+s < 0,   and   Tₜ(ξᵏ)(s) = T_{t+s}(ξᵏ)(0)   if t+s ≥ 0,   k ≥ 1,

so Tₜ(ξᵏ)(s) → 0 as k → ∞ for each s ∈ J, t ≥ 0. □
By analogy with deterministic forced linear FDE's, our first objective is to derive a stochastic variation-of-parameters formula for the forced linear system (XI). The main idea is to look for a stochastic interpretation of the deterministic variation-of-parameters formula corresponding to non-homogeneous linear systems

(XIII)    dy(t) = H(yₜ)dt + g(t)dt,   t > 0,   y₀ = η ∈ C

(cf. Hale [26] pp. 143-147; Hale and Meyer [29]). To start with, we require some notation. Denote by Δ: J → L(ℝⁿ) the map
Δ = χ_{{0}} I, where I ∈ L(ℝⁿ) is the identity n × n matrix. Also, for any linear map B: C(J,ℝⁿ) → C(J,ℝⁿ) and any A ∈ C(J,L(ℝᵐ,ℝⁿ)), write

A(s) = (a₁(s), a₂(s), …, a_m(s)),   s ∈ J,

where aⱼ(s) is the j-th column of the n × m matrix A(s) for each 1 ≤ j ≤ m and s ∈ J. Thus each aⱼ ∈ C(J,ℝⁿ), and we let

BA = (B(a₁), B(a₂), …, B(a_m)) ∈ C(J,L(ℝᵐ,ℝⁿ)).

If F: [a,b] → C(J,L(ℝᵐ,ℝⁿ)) is a map, define the stochastic integral ∫ₐᵇ F(t) dw(t) by

[∫ₐᵇ F(t) dw(t)](s) = ∫ₐᵇ F(t)(s) dw(t),   s ∈ J,

whenever the Itô integral ∫ₐᵇ F(t)(s) dw(t) ∈ ℝⁿ exists for every s ∈ J. This will exist, for example, if F is measurable and ∫ₐᵇ ‖F(t)‖²_C dt < ∞. In case ∫ₐᵇ F(t) dw(t) ∈ C(J,ℝⁿ) a.s., its transform under a continuous linear map C(J,ℝⁿ) → ℝⁿ is described by:
Lemma (4.3): Let L: C(J,ℝⁿ) → ℝⁿ be continuous linear and suppose L̃: C̃(J,ℝⁿ) → ℝⁿ is its canonical continuous linear extension using the Riesz representation theorem. Assume that F: [a,b] → C(J,L(ℝᵐ,ℝⁿ)) is such that ∫ₐᵇ F(t) dw(t) ∈ C(J,ℝⁿ) a.s. Then ∫ₐᵇ L̃F(t) dw(t) exists and

L̃(∫ₐᵇ F(t) dw(t)) = ∫ₐᵇ L̃F(t) dw(t) = Σⱼ₌₁ᵐ ∫ₐᵇ L̃(fⱼ(t)) dwⱼ(t)

a.s., where fⱼ(t) is the j-th column of F(t) and wⱼ(t) is the j-th coordinate of w(t), j = 1,…,m.

Proof:
Represent L by an L(ℝᵐ,ℝⁿ)-valued measure on J via the Riesz representation theorem; then use coordinates to reduce to the one-dimensional case m = n = 1. Namely, it is sufficient to prove that if μ is any finite positive measure on J and f ∈ L²([a,b] × J, ℝ; dt ⊗ dμ), then

(3)    ∫ₐᵇ ∫₋ᵣ⁰ f(t,s) dμ(s) dw(t) = ∫₋ᵣ⁰ ∫ₐᵇ f(t,s) dw(t) dμ(s)   a.s.
195
Suppose first that f
=
X[a,a]x[y, 6] , the characteristic function of the
=
rectangle [a,a] x [y,6] [a,b] x J. Then (3) holds trivially. Also, by linearity of the integrals in f, (3) is true for all simple functions on [a,b] x J with rectangular steps. Since these are dense in iua,b] x J, dt 8 d\1), we need only check that each side of (3) is continuous in f € £ 2([a,b] x JJR,dt 8 d~). But this is implied by the easy inequalities: EIJb J 0 a -r
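On a fixed partition, identity (3) is just the interchange of two finite sums over the same Brownian increments, so both sides agree to floating-point precision. The sketch below illustrates this numerically; the function f, the uniform choice of \mu on J, and the grid sizes are illustrative assumptions, not data from the text.

```python
import numpy as np

# Monte-Carlo illustration of the stochastic Fubini identity (3):
# swapping the measure mu (on J = [-r, 0]) with the Ito integral dw(t)
# on [a, b].  On a fixed partition both sides are the same finite
# double sum over the Brownian increments, so they agree exactly up to
# floating-point error.

rng = np.random.default_rng(0)
a, b, r = 0.0, 1.0, 1.0
nt, ns = 200, 50                        # partitions of [a, b] and J
t = np.linspace(a, b, nt + 1)[:-1]      # left endpoints (Ito convention)
dt = (b - a) / nt
s = np.linspace(-r, 0.0, ns)
dmu = np.full(ns, r / ns)               # mu ~ uniform measure on J

f = lambda t, s: np.exp(-t) * np.cos(s)         # f in L^2(dt x dmu)
dw = rng.normal(0.0, np.sqrt(dt), nt)           # Brownian increments

F = f(t[:, None], s[None, :])                   # shape (nt, ns)
lhs = np.sum((F * dmu).sum(axis=1) * dw)        # int_a^b (int_J f dmu) dw
rhs = np.sum((F * dw[:, None]).sum(axis=0) * dmu)  # int_J (int_a^b f dw) dmu

print(abs(lhs - rhs))    # agreement to machine precision
```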
Given \varepsilon > 0, there is a \delta_1 > 0 such that

    \|G(t) - G(t_j)\|^2 < \varepsilon,

since |t - t_j| \le |t_{j+1} - t_j| \le \text{mesh}\,\pi < \delta_1 for all t \in [t_j, t_{j+1}]. Applying Lemma 2 of Elworthy ([19], Chapter III, §2, pp. 25-28), we obtain

    E\Big|\sum_{j=1}^{k-1} G(t_j)\Delta_j w - \int_a^b G(t)\,dw_\pi(t)\Big|^2 \le \sum_{j=1}^{k-1} \Delta_j t\,\Big\|G(t_j) - \frac{1}{\Delta_j t}\int_{t_j}^{t_{j+1}} G(t)\,dt\Big\|^2 .

Also there is a \delta_2 > 0 such that

    E\Big|\int_a^b G(t)\,dw(t) - \sum_{j=1}^{k-1} G(t_j)\Delta_j w\Big|^2 < \varepsilon

if mesh \pi < \delta_2. So if mesh \pi < \delta = \min(\delta_1, \delta_2),

    E\Big|\int_a^b G(t)\,dw(t) - \int_a^b G(t)\,dw_\pi(t)\Big|^2 \le 2E\Big|\int_a^b G(t)\,dw(t) - \sum_{j=1}^{k-1} G(t_j)\Delta_j w\Big|^2 + 2E\Big|\sum_{j=1}^{k-1} G(t_j)\Delta_j w - \int_a^b G(t)\,dw_\pi(t)\Big|^2,

which tends to 0 as mesh \pi \to 0.
of the flow onto the subspaces U and S.

To begin with, recall that U has finite dimension d. Therefore it is possible to think of \{x_t^U\}_{t \ge 0} as the solution of an unstable stochastic ODE (without delay) on \mathbb{R}^d. We make this more precise by appealing to the following considerations, which are taken from Hale ([26], pp. 173-190). Define

    C^* = C([0,r], \mathbb{R}^{n*}),

where \mathbb{R}^{n*} is the Euclidean space of all n-dimensional row vectors. The continuous linear map H: C \to \mathbb{R}^n defines a continuous bilinear pairing C^* \times C \to \mathbb{R}:

    (\alpha, \varphi) = \alpha(0)\varphi(0) + \int_{-r}^0 \int_0^s \alpha(s - s')\,d\mu(s')\,\varphi(s)\,ds,                (10)

where \mu is the L(\mathbb{R}^n)-valued measure on J representing H, \alpha \in C^* and \varphi \in C. With reference to this bilinear pairing, the generator A_H of \{T_t\}_{t \ge 0} possesses a (formal) adjoint A_H^*: \mathcal{D}(A_H^*) \subseteq C^* \to C^* defined by the relations

    (A_H^* \alpha)(t) = \begin{cases} -\alpha'(t), & 0 < t \le r, \\ \int_{-r}^0 \alpha(-s)\,d\mu(s), & t = 0, \end{cases}

    \mathcal{D}(A_H^*) = \Big\{\alpha : \alpha \in C^*,\ \alpha \text{ is } C^1,\ -\alpha'(0) = \int_{-r}^0 \alpha(-s)\,d\mu(s)\Big\}.

Furthermore, \sigma(A_H) = \sigma(A_H^*) and the spectra are discrete, consisting only of eigenvalues with finite multiplicities. Both \sigma(A_H) and \sigma(A_H^*) are invariant under complex conjugation, and the multiplicities of the eigenvalues coincide. Construct U^* \subseteq C^* using the generalized eigenspaces of A_H^* which correspond to eigenvalues with non-negative real parts. Then \dim U^* = \dim U = d. Take a basis \Phi = (\varphi_1, \dots, \varphi_d) for U and a basis

    \Psi = \begin{pmatrix} \psi_1 \\ \vdots \\ \psi_d \end{pmatrix}

for U^* such that (\psi_j, \varphi_i) = \delta_{ji}, i,j = 1,2,\dots,d. The basis \Phi of U defines a unique matrix representation B \in L(\mathbb{R}^d) of A_H|U, i.e. A_H\Phi = \Phi B, A_H^*\Psi = B\Psi, where A_H\Phi, \Phi B, A_H^*\Psi, B\Psi are all formally defined like matrix multiplication.
Note that the eigenvalues of B are precisely those \lambda \in \sigma(A_H) with \operatorname{Re}\lambda \ge 0. The reader should also observe here that the splitting (1) of C is realized by the bilinear pairing (10) through the formula

    \varphi^U = \Phi(\Psi, \varphi)   for all \varphi \in C.                (11)
The results quoted in this paragraph are all well-known for linear FDE's, and proofs may be found in Hale [26]. We would like to extend formula (11) so as to cover all \varphi \in \tilde{C}. First note that the bilinear pairing (10) extends to a continuous bilinear map C^* \times \tilde{C} \to \mathbb{R} defined by the same formula. So the right hand side of (11) makes sense for all \varphi \in \tilde{C}. But both sides of (11) are continuous with respect to pointwise convergence of uniformly bounded sequences in C, because of the dominated convergence theorem and the weak continuity of \pi^U: \tilde{C} \to U. As \tilde{C} is closed under pointwise limits of uniformly bounded sequences, (11) holds for all \varphi \in \tilde{C}.

In view of the above considerations we may now state the following corollary of Theorem (4.1).

Corollary (4.1.1): Define \Phi \subseteq U, \Psi \subseteq U^* and B \in L(\mathbb{R}^d) as above. Let \{x_t : t \in [0,a]\} be the trajectory of (XI) through \eta \in C. Define the process z: \Omega \times [0,a] \to \mathbb{R}^d by z(t) = (\Psi, x_t), 0 \le t \le a. Then z satisfies the stochastic ODE

    dz(t) = Bz(t)\,dt + \Psi(0)g(t)\,dw(t),   0 < t \le a.

(iii) There is a process {}^\infty x such that {}^\infty x is the unique solution of the stochastic FDE

    d\,{}^\infty x(t) = H({}^\infty x_t)\,dt + g(t)\,dw(t),   t > 0,
    {}^\infty x_0 = \int_{-\infty}^0 \tilde{T}_{-u}\,\tilde{\Delta}^S g(u)\,dw(u) \in \mathcal{L}^2(\Omega, \tilde{C});                (XIV)

(iv) if g is constant, then {}^\infty x is stationary;

(v) if g is periodic with period k > 0, then {}^\infty x is periodic in distribution with period k.
Proof: Suppose g and its derivative g' are both globally bounded on \mathbb{R}. Define the process {}^\infty x: \Omega \times [-r,\infty) \to \mathbb{R}^n by

    {}^\infty x(t) = \begin{cases} \int_{-\infty}^t (\tilde{T}_{t-u}\,\tilde{\Delta}^S g(u))(0)\,dw(u), & t \ge 0, \\ \int_{-\infty}^0 (\tilde{T}_{-u}\,\tilde{\Delta}^S g(u))(t)\,dw(u), & t \in J. \end{cases}                (12)

To see that {}^\infty x is well-defined, we note the existence of the limit

    \int_{-\infty}^t (\tilde{T}_{t-u}\,\tilde{\Delta}^S g(u))(0)\,dw(u) = \lim_{v \to -\infty} \int_v^t (\tilde{T}_{t-u}\,\tilde{\Delta}^S g(u))(0)\,dw(u)

a.s. Indeed the map u \mapsto (\tilde{T}_{t-u}\,\tilde{\Delta}^S g(u))(0) is C^1, and so integrating by parts gives the classical pathwise integral

    \int_v^t (\tilde{T}_{t-u}\,\tilde{\Delta}^S g(u))(0)\,dw(u) = \tilde{\Delta}^S(0)g(t)w(t) - (\tilde{T}_{t-v}\,\tilde{\Delta}^S g(v))(0)w(v) - \int_v^t \frac{d}{du}\{(\tilde{T}_{t-u}\,\tilde{\Delta}^S g(u))(0)\}\,w(u)\,du

a.s. for all v < t. Now by Hale ([26], p. 187) there are constants M > 0, \beta < 0 such that

    \|\tilde{T}_t\,\tilde{\Delta}^S g(u)\| \le M e^{\beta t}\|g(u)\|                (13)

for all t \ge 0, u \in \mathbb{R}. But the law of the iterated logarithm for Brownian motion (Theorem (I.8.2)(iv)) implies that there is a constant L > 0 such that for a.a. \omega \in \Omega there exists T(\omega) > 0 beyond which |w(\omega)(t)| grows more slowly than any exponential. Combining this with (13), one obtains a.s., for t > r,

    \|x_t^S - {}^\infty x_t\|_C \le M e^{\beta t}\|\eta^S\|_C + M e^{\beta(t-r)}\|g(0)\|\,|w(0)| + M(\|H\|\,\|g\|_C + \|g'\|_C)\,K_1 e^{\beta(t-r)}.

To establish assertion (ii) of the theorem, note that by dominated convergence and Fubini's theorem one gets

    E\Big(\int_{-\infty}^0 e^{-\beta u}|w(u)|\,du\Big)^2 \le \int_{-\infty}^0 e^{-\beta u}\,du \int_{-\infty}^0 e^{-\beta u} E|w(u)|^2\,du = m \int_{-\infty}^0 e^{-\beta u}\,du \int_{-\infty}^0 (-u)e^{-\beta u}\,du = -\frac{m}{\beta^3}.

Thus E\|x_t^S - {}^\infty x_t\|_C^2 \le K e^{\alpha t} for t > r, with constants K > 0, \alpha < 0.
if e \in \mathcal{L}^2(\Omega, C). In fact, we can write {}^\infty x in the form

    {}^\infty x(t) = \begin{cases} \tilde{T}_t(e)(0) + \int_0^t [\tilde{T}_{t-u}\,\tilde{\Delta}^S g(u)](0)\,dw(u), & t \ge 0, \\ e(t), & t \in J, \end{cases}                (15)

provided we can show that e \in \mathcal{L}^2(\Omega, C). Now the stochastic integral depends measurably on parameters, so the process

    e(s) = \int_{-\infty}^0 [\tilde{T}_{-u}\,\tilde{\Delta}^S g(u)](s)\,dw(u),   s \in J,

has a measurable version e: \Omega \times J \to \mathbb{R}^n according to a result of Stricker and Yor ([72], Théorème 1, §5, p. 119). Indeed e is sample continuous. To see this, use integration by parts to write

    e(s) = \int_{-\infty}^s [\tilde{T}_{-u+s}\,\tilde{\Delta}^S g(u)](0)\,dw(u)
         = \tilde{\Delta}^S(0)g(s)w(s) - \lim_{v \to -\infty} \int_v^s [-H(\tilde{T}_{-u+s}\,\tilde{\Delta}^S g(u)) + \{\tilde{T}_{-u+s}\,\tilde{\Delta}^S g'(u)\}(0)]\,w(u)\,du                (16)

a.s. for all s \in J. Since g, g' and w are continuous, the processes

    s \mapsto \tilde{\Delta}^S(0)g(s)w(s),
    s \mapsto \int_v^s [-H(\tilde{T}_{-u+s}\,\tilde{\Delta}^S g(u)) + \{\tilde{T}_{-u+s}\,\tilde{\Delta}^S g'(u)\}(0)]\,w(u)\,du

are sample continuous on J. Thus for e to have continuous sample paths one needs to check that the a.s. limit on the right-hand side of (16) is uniform for s \in J. Now, for v < s \le 0,

    \Big|\int_v^s [-H(\tilde{T}_{-u+s}\,\tilde{\Delta}^S g(u)) + \{\tilde{T}_{-u+s}\,\tilde{\Delta}^S g'(u)\}(0)]\,w(u)\,du - \int_{-\infty}^s [-H(\tilde{T}_{-u+s}\,\tilde{\Delta}^S g(u)) + \{\tilde{T}_{-u+s}\,\tilde{\Delta}^S g'(u)\}(0)]\,w(u)\,du\Big|
for all t > -r. Thus {}^\infty x|[0,\infty) is periodic in distribution with period k, i.e. P \circ {}^\infty x(t)^{-1} = P \circ {}^\infty x(t+k)^{-1} for all t > -r. In particular, if g is constant, one can take the 'period' k to be arbitrary, and so P \circ {}^\infty x(t)^{-1} is independent of t > -r. Hence {}^\infty x is stationary. This completes the proof of the theorem.  \square
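The stationary regime asserted in (iv) can be illustrated numerically for the simplest scalar case. The sketch below runs Euler-Maruyama with a delay buffer for dx(t) = -a x(t-r) dt + \sigma dw(t); the parameter choices (a = r = \sigma = 1, so that ar < \pi/2 and the drift FDE is stable) and the step sizes are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Euler-Maruyama for the scalar linear stochastic delay equation
#     dx(t) = -a x(t - r) dt + sigma dw(t),
# a stand-in instance of (XI) with H(eta) = -a eta(-r), g = sigma.
# With a*r < pi/2 the drift FDE is asymptotically stable, so x settles
# into a stationary Gaussian regime whose variance we estimate.

rng = np.random.default_rng(2)
a, r, sigma = 1.0, 1.0, 1.0            # a*r = 1 < pi/2: stable case
dt = 0.01
lag = int(r / dt)
n = 40000                              # 400 time units
dw = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments

x = np.zeros(n + lag + 1)              # x[0..lag] is the zero initial path on [-r, 0]
for k in range(n):
    i = k + lag
    x[i + 1] = x[i] - a * x[i - lag] * dt + sigma * dw[k]

burn = 10000                           # discard the transient (first 100 time units)
var_emp = np.var(x[lag + burn:])
print(var_emp)
```

For this equation the stationary variance is often quoted in the delay-equation literature (Küchler-Mensch) as \sigma^2(1+\sin ar)/(2a\cos ar) \approx 1.70 for these parameters; a single long path gives an empirical estimate of that order.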
Corollary (4.2.1): Let H: C \to \mathbb{R}^n be continuous linear and g: \mathbb{R} \to L(\mathbb{R}^m,\mathbb{R}^n) be C^1-bounded. Suppose that the deterministic RFDE

    dy(t) = H(y_t)\,dt,   t > 0,                (XII)

is globally asymptotically stable, i.e. \operatorname{Re}\lambda < 0 for every \lambda \in \sigma(A_H), where A_H is the infinitesimal generator of \{T_t\}_{t \ge 0}. Then the process {}^\infty x: \Omega \times [-r,\infty) \to \mathbb{R}^n given by

    {}^\infty x_t = \int_{-\infty}^t \tilde{T}_{t-u}\,\tilde{\Delta}\,g(u)\,dw(u),   t \ge 0,

is a sample continuous Gaussian solution of the stochastic FDE

    dx(t) = H(x_t)\,dt + g(t)\,dw(t),   t > 0,                (XI)

such that for any trajectory \{x_t : t \ge 0\} \subseteq \mathcal{L}^2(\Omega,C) of (XI) there are constants K > 0, \alpha < 0 with

    E\|x_t - {}^\infty x_t\|_C^2 \le K e^{\alpha t}   for all t > r.

If g is constant, {}^\infty x is stationary; if g is periodic with period k, {}^\infty x is periodic in distribution with period k.

Proof: If \sigma(A_H) lies to the left of the imaginary axis, the splittings (1) and (2) reduce to the trivial case

    C = S,   U = \{0\},   \tilde{C} = \tilde{S},   \pi^S = \mathrm{id}_C,   \tilde{\pi}^S = \mathrm{id}_{\tilde{C}}.
Hence x_t^S = x_t for all t \ge 0; so all conclusions of the corollary follow immediately from the theorem.  \square

Corollary (4.2.2): Suppose all the conditions of Corollary (4.2.1) hold and let g be constant, i.e. g(t) = G \in L(\mathbb{R}^m,\mathbb{R}^n) for all t \in \mathbb{R}. Then the stochastic FDE

    dx(t) = H(x_t)\,dt + G\,dw(t),   t > 0,                (XVI)

is (globally) asymptotically stochastically stable, i.e. for every \eta \in C, \lim_{t \to \infty} P \circ ({}^\eta x_t)^{-1} exists in M_p(C) and is an invariant Gaussian measure for the stochastic FDE (XVI) (§(III.3)).

Proof: As {}^\infty x is stationary, let P \circ ({}^\infty x_t)^{-1} = \mu_0 = P \circ e^{-1} for all t \ge 0. We show that the transition probabilities \{p(0,\eta,t,\cdot) = P \circ ({}^\eta x_t)^{-1} : t \ge 0, \eta \in C\} of (XVI) converge to \mu_0 in M_p(C) as t \to \infty for every \eta \in C. Thus it is sufficient to prove that

    \lim_{t \to \infty} \int_{\xi \in C} \phi(\xi)\,p(0,\eta,t,d\xi) = \int_{\xi \in C} \phi(\xi)\,d\mu_0(\xi)

for every bounded uniformly continuous function \phi on C. But by Corollary (4.2.1) above, \lim_{t \to \infty} E\|{}^\eta x_t - {}^\infty x_t\|_C^2 = 0 for every \eta \in C; so it follows from
the proof of Lemma (III.3.1)(ii) that

    \lim_{t \to \infty} \Big[\int_\Omega \phi({}^\eta x_t(\omega))\,dP(\omega) - \int_\Omega \phi({}^\infty x_t(\omega))\,dP(\omega)\Big] = 0,   \phi \in C_b,

i.e.

    \lim_{t \to \infty} \Big[\int_{\xi \in C} \phi(\xi)\,p(0,\eta,t,d\xi) - \int_{\xi \in C} \phi(\xi)\,d\mu_0(\xi)\Big] = 0,   \phi \in C_b.

Thus \lim_{t \to \infty} p(0,\eta,t,\cdot) = \mu_0, a Gaussian measure on C. This implies that P_t^*(\mu_0) = \mu_0 for all t \ge 0, where \{P_t^*\}_{t \ge 0} is the adjoint semigroup on M(C) associated with the stochastic FDE (XVI) (§(III.3)).  \square

Remark (4.3): Note that the invariant measure \mu_0 of Corollary (4.2.2) is uniquely determined independently of all initial conditions \eta \in C (or in \mathcal{L}^2(\Omega,C)).

In contrast with the asymptotically stable case, we finally look at the situation when \sigma(A_H) has some elements with positive real parts. In this case it turns out that the variance of every one-dimensional projection of {}^\eta x_t^U in U diverges exponentially to \infty as t \to \infty for each \eta \in C. Indeed we have
Theorem (4.3): Define \Psi \subseteq U^*, B \in L(\mathbb{R}^d), z: \Omega \times \mathbb{R}^{\ge 0} \to \mathbb{R}^d as in Corollary (4.1.1). For the stochastic FDE (XVI) (constant G), let C = \Psi(0)G \in L(\mathbb{R}^m,\mathbb{R}^d). Suppose \{\lambda_j = a_j \pm i b_j : 1 \le j \le p\} is the set of all eigenvalues of B, where a_j > 0 and b_j \in \mathbb{R}, j = 1,2,\dots,p. Denote by \langle\cdot,\cdot\rangle the (Euclidean) inner product on \mathbb{R}^d, i.e. for u = (u_1,\dots,u_d), v = (v_1,\dots,v_d) \in \mathbb{R}^d, \langle u,v\rangle = \sum_{i=1}^d u_i v_i.

(i) Assume that the pair (B,C) is controllable, viz.

    \operatorname{rank}(C, BC, B^2C, \dots, B^{d-1}C) = d.                (18)

Then for every v \in \mathbb{R}^d there exist 1 \le j \le p and t_0 > 0 such that, given \varepsilon > 0, we can find D_1, D_2, D_3 > 0 with the property

    D_1 e^{2a_j t} - D_2 \le E\langle v, z(t) - e^{tB}z(0)\rangle^2 \le D_3 e^{2(a_j+\varepsilon)t}

for all t > t_0.
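Condition (18) is the Kalman rank condition and is straightforward to check numerically. The sketch below implements the rank test; the matrices B and C are arbitrary numerical examples, not data from the text.

```python
import numpy as np

# Kalman rank test for condition (18): the pair (B, C) is controllable
# iff rank[C, BC, ..., B^{d-1}C] = d.  The matrices below are arbitrary
# numerical examples.

def controllable(B, C):
    d = B.shape[0]
    blocks = [C]
    for _ in range(d - 1):
        blocks.append(B @ blocks[-1])
    K = np.hstack(blocks)            # d x (d*m) controllability matrix
    return np.linalg.matrix_rank(K) == d

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C_good = np.array([[0.0],
                   [1.0]])           # controllable single-input pair
C_bad = np.array([[1.0],
                  [0.0]])            # fails: [C, BC] = [[1, 1], [0, 0]]

print(controllable(B, C_good), controllable(B, C_bad))   # True False
```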
Proof: The following argument is taken directly from (Mohammed, Scheutzow, Weizsäcker [61]). Fix t > 0. Since z satisfies the stochastic ODE

    dz(t) = Bz(t)\,dt + C\,dw(t),

it follows that

    z(t) = e^{tB}z(0) + \int_0^t e^{(t-u)B}C\,dw(u).

Then, expanding e^{uB} via the Jordan form of B (so that each g_{jk} below is a globally bounded trigonometric function),

    E\langle v, z(t) - e^{tB}z(0)\rangle^2 = \sum_{j=1}^m \int_0^t \Big\{\sum_{k=1}^p u^{m_k} e^{a_k u} g_{jk}(u)\Big\}^2 du.                (21)

If for all k one has \sum_{i=1}^d v_i b_{ijk} = 0, and if for all k with b_k \ne 0 one has \sum_{i=1}^d v_i a_{ijk} = 0, then E\langle v, z(t) - e^{tB}z(0)\rangle^2 = 0. According to (Zabczyk [79]) this cannot happen if (B,C) is controllable. We therefore assume that \sum_{j=1}^m |g_{jk}(u)|^2 \not\equiv 0 for some k. Now among all k with \sum_{j=1}^m |g_{jk}(u)|^2 \not\equiv 0 pick one for which a_k is largest, and if this is not unique take one with the highest exponent m_k. Call this index k_0. Let j_0 be any index with g_{j_0 k_0}(u) \not\equiv 0. Then from (21)

    E\langle v, z(t) - e^{tB}z(0)\rangle^2 \ge \int_0^t \Big\{\sum_{k=1}^p u^{m_k} e^{a_k u} g_{j_0 k}(u)\Big\}^2 du.                (22)
Suppose \gamma is a positive constant such that I = \{u : |g_{j_0 k_0}(u)| \ge \gamma\} \ne \emptyset, and let M_k, k = 1,2,\dots,p, be constants so that |g_{j_0 k}(u)| \le M_k for all u \ge 0, all k. Note that this is possible because each g_{j_0 k} is globally bounded. Then there exists t_0 > 0 such that

    u^{m_{k_0}} e^{a_{k_0} u}\gamma > 2\sum u^{m_k} e^{a_k u} M_k   for all u > t_0,

where the sum is taken over all k \ne k_0 for which |g_{j_0 k}(u)|^2 \not\equiv 0. Thus

    \int_0^t \Big\{\sum_{k=1}^p u^{m_k} e^{a_k u} g_{j_0 k}(u)\Big\}^2 du \ge \int_{[t_0,t]\cap I} \Big\{u^{m_{k_0}} e^{a_{k_0} u} g_{j_0 k_0}(u) - \sum u^{m_k} e^{a_k u} M_k\Big\}^2 du
    \ge \frac{1}{4}\int_{[t_0,t]\cap I} |u^{m_{k_0}} e^{a_{k_0} u}\gamma|^2\,du \ge D_1 e^{2a_{k_0} t} - D_2

for some constants D_1, D_2 > 0, since g_{j_0 k_0} is periodic. Hence by (22) we have

    E\langle v, z(t) - e^{tB}z(0)\rangle^2 \ge D_1 e^{2a_{k_0} t} - D_2   for all t > t_0.

On the other hand,

    E\langle v, z(t) - e^{tB}z(0)\rangle^2 = \sum_{j=1}^m \int_0^t \sum_{k=1}^p \sum_{l=1}^p u^{m_k+m_l} e^{(a_k+a_l)u} g_{jk}(u)g_{jl}(u)\,du
    \le \sum_{k=1}^p \sum_{l=1}^p \int_0^t u^{m_k+m_l} e^{(a_k+a_l)u} \sum_{j=1}^m |g_{jk}(u)|\,|g_{jl}(u)|\,du,   t > 0.

Defining k_0, j_0 as before and letting

    M_{kl} = \sup\Big\{\sum_{j=1}^m |g_{jk}(u)|\,|g_{jl}(u)| : u \ge 0\Big\},

we can find u_1 > 0 such that

    u^{2m_{k_0}} e^{2a_{k_0} u}\bar{\gamma} > 2\sum_{(k,l)} u^{m_k+m_l} e^{(a_k+a_l)u} M_{kl}

for all u > u_1, where summation is taken over all k, l \ne k_0 with M_{kl} \ne 0, and \bar{\gamma} = \max\{M_{kl} : 1 \le k,l \le p\}. Therefore

    E\langle v, z(t) - e^{tB}z(0)\rangle^2 \le D_3 \int_0^t u^{2m_{k_0}} e^{2a_{k_0} u}\,du,   t > 0.
Define

    \tilde{H}: \mathbb{R}^{\ge 0} \times C(J,\mathbb{R}^6) \to \mathbb{R}^6,

    \tilde{H}(t,\eta) = \begin{cases} \Big(\eta^v(0),\ -\int_{-t}^0 S(-s)\,\eta^v(s)\,ds\Big), & 0 \le t \le r, \\ \Big(\eta^v(0),\ -\int_{-r}^0 S(-s)\,\eta^v(s)\,ds\Big), & t > r, \end{cases}

where \eta = (\eta^\xi, \eta^v) \in C(J,\mathbb{R}^3 \times \mathbb{R}^3). Setting

    x(t) = (\xi(t), v(t)) \in \mathbb{R}^6,   t \ge -r,   \tilde{w}(t) = (0, w(t)) \in \mathbb{R}^6,

it is easily seen that equation (I) is equivalent to the stochastic FDE

    dx(t) = \tilde{H}(t, x_t)\,dt + \tilde{\gamma}(x(t))\,d\tilde{w}(t),   t > 0,                (II)
in \mathbb{R}^6. Note that this stochastic FDE is time-dependent for 0 \le t \le r but becomes autonomous for all t > r. If S \in \mathcal{L}^1([0,r],\mathbb{R}), it follows that H and \tilde{H} are continuous and Lipschitz in the second variable uniformly with respect to t \in \mathbb{R}^{\ge 0}. In fact H(t,\cdot), \tilde{H}(t,\cdot) are continuous linear maps with norms \|H(t,\cdot)\|, \|\tilde{H}(t,\cdot)\| uniformly bounded in t \in \mathbb{R}^{\ge 0}. Now x_0 is specified by v_0, and so from Theorems (II.2.1), (III.1.1) the stochastic FDE (II) has a unique Markov trajectory \{x_t\}_{t \ge 0} in C(J,\mathbb{R}^6) with given v_0. The Markov process \{x_t\}_{t \ge 0} is time-homogeneous for t > r. In contrast to the classical Ornstein-Uhlenbeck process, observe here that the pair \{(\xi(t), v(t)) : t \ge -r\} does not correspond to a Markov process on \mathbb{R}^6, yet the trajectory \{(\xi_t, v_t) : t \ge 0\} in C(J,\mathbb{R}^6) does have the Markov property.

We would like to consider the velocity process \{v(t) : t \ge -r\} in the simple case when the noise coefficient \gamma is identically constant, i.e. let \gamma(x,y) = \gamma_0 \in \mathbb{R} for all x, y \in \mathbb{R}^3. Then v satisfies the autonomous stochastic FDE

    dv(t) = H_0(v_t)\,dt + \gamma_0\,dw(t)                (III)

for t > r, where H_0: C(J,\mathbb{R}^3) \to \mathbb{R}^3 is given by

    H_0(\eta) = -\int_{-r}^0 S(-s)\,\eta(s)\,ds,   \eta \in C(J,\mathbb{R}^3).
By Theorem (VI.4.1), write

    v_t = \tilde{T}_{t-r}(v_r) + \int_r^t \tilde{T}_{t-u}\,\Delta\,\gamma_0\,dw(u),   t \ge r,

a.s., where \{T_t\}_{t \ge 0} is the semigroup of the deterministic linear drift FDE

    dy(t) = H_0(y_t)\,dt,   t > r.

Now suppose \int_{-r}^0 S(-s)\,ds < \pi/2r. We show that if \lambda \in \sigma(A_{H_0}), the spectrum of the generator of \{T_t\}, then \operatorname{Re}\lambda < 0. Write \lambda = \lambda_1 + i\lambda_2 \in \sigma(A_{H_0}) for some \lambda_1, \lambda_2 \in \mathbb{R}. Suppose if possible that \lambda_1 \ge 0. Using Hale [26] (Lemma (2.1), p. 168), \lambda satisfies the characteristic equation

    \lambda + \int_{-r}^0 S(-s)e^{\lambda s}\,ds = 0.                (1)

Hence

    \lambda_1 + \int_{-r}^0 S(-s)e^{\lambda_1 s}\cos\lambda_2 s\,ds = 0                (2)

and

    \lambda_2 - \int_{-r}^0 S(-s)e^{\lambda_1 s}\sin\lambda_2 s\,ds = 0.                (3)

But from (3),

    |\lambda_2 s| \le |\lambda_2| r \le r\int_{-r}^0 S(-s)e^{\lambda_1 s}|\sin\lambda_2 s|\,ds \le r\int_{-r}^0 S(-s)\,ds < \pi/2

for all s \in J. Therefore \cos\lambda_2 s \ge 0 for all s \in J and is positive on some open subinterval of J. So from (2),

    \lambda_1 = -\int_{-r}^0 S(-s)e^{\lambda_1 s}\cos\lambda_2 s\,ds < 0,

where S is assumed positive on a set of positive Lebesgue measure in J. This is a contradiction, and \operatorname{Re}\lambda must be less than zero for all \lambda \in \sigma(A_{H_0}). Therefore, according to Corollary (VI.4.2.1), we obtain
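The spectral conclusion just derived can be checked numerically by discretizing the history segment. The sketch below uses an upwind "linear chain" approximation of the drift FDE; the constant kernel S \equiv 0.5 on [0,r] with r = 1 (so \int_{-r}^0 S(-s)\,ds = 0.5 < \pi/2r) and the discretization scheme itself are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Discretize the drift FDE  dy(t) = -(int_{-r}^0 S(-s) y(t+s) ds) dt
# on a grid of the history: state u_i(t) ~ y(t - i h), i = 0..N.
# Row 0 carries a quadrature of the drift integral; the remaining rows
# are an upwind transport of the history segment.  The eigenvalues of
# the matrix approximate sigma(A_{H_0}) and should all have negative
# real parts under the pi/2r condition.

r, N = 1.0, 200
h = r / N
S = lambda u: 0.5 * np.ones_like(u)        # constant kernel on [0, r]

A = np.zeros((N + 1, N + 1))
A[0, :] = -S(np.arange(N + 1) * h) * h     # quadrature of the drift integral
for i in range(1, N + 1):
    A[i, i - 1] = 1.0 / h                  # upwind transport of the history
    A[i, i] = -1.0 / h

max_re = np.linalg.eigvals(A).real.max()
print(max_re)    # negative: the zero solution is asymptotically stable
```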
Theorem (2.1): In the system (I) assume that \gamma is constant (\equiv \gamma_0), S has compact support in [0,r], S \in \mathcal{L}^1([0,r],\mathbb{R}) and 0 < \int_{-r}^0 S(-s)\,ds < \pi/2r. Then there is a pair ({}^\infty\xi, {}^\infty v) associated with the trajectories \{(\xi_t, v_t) : t > r\} of (I) and positive real numbers K, a such that

(i)

    {}^\infty\xi(t) = {}^\infty\xi(r) + \int_r^t {}^\infty v(u)\,du,   t \ge r,

    {}^\infty v_t = \int_{-\infty}^t \tilde{T}_{t-u}\,\Delta\,\gamma_0\,dw(u)   a.s.;

(ii) for every solution (\xi, v) of (I),

    E\|\xi_t - {}^\infty\xi_t\|^2 \le K e^{-at},
    E\|v_t - {}^\infty v_t\|^2 \le K e^{-at}

for all t > 2r;

(iii) {}^\infty v is stationary and {}^\infty\xi has a.a. sample paths C^1.

Remark: Physically speaking, the above theorem implies that the 'heat bath' will always eventually stabilize itself into a stationary Gaussian distribution for the velocity of the molecule.

§3.
Stochastic FDE's with Discontinuous Initial Data

This is a class of stochastic FDE's with initial process having a.a. sample paths of type \mathcal{L}^2, allowing for a possible finite jump discontinuity at 0. These equations were studied by T.A. Ahmed, S. Elsanousi and S.E.A. Mohammed and can be formulated thus:

    dx(t) = H(t, x(t), x_t)\,dt + G(t, x(t), x_t)\,dz(t),   t > 0,
    x(0) = v \in \mathcal{L}^2(\Omega, \mathbb{R}^n),                (IV)
    x(s) = \theta(s)   for all s \in [-r, 0).

In (IV) the initial condition is a pair (v,\theta) where v \in \mathcal{L}^2(\Omega,\mathbb{R}^n) and \theta \in \mathcal{L}^2(\Omega, L^2(J,\mathbb{R}^n)). Note that here we confuse \mathcal{L}^2 with L^2, the Hilbert space of all equivalence classes of (Lebesgue-)square-integrable maps J \to \mathbb{R}^n. The trajectory of (IV) is then defined as the pairs \{(x(t), x_t) : t \ge 0\} in \mathbb{R}^n \times L^2(J,\mathbb{R}^n). We assume that the coefficients
are measurable, with the maps H(t,\cdot,\cdot), G(t,\cdot,\cdot) globally Lipschitz on \mathbb{R}^n \times L^2(J,\mathbb{R}^n), having their Lipschitz constants independent of t \in \mathbb{R}^{\ge 0}. The noise process z: \mathbb{R}^{\ge 0} \times \Omega \to \mathbb{R}^m is a sample continuous martingale on the filtered probability space (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P) with z(t,\cdot) \in \mathcal{L}^2(\Omega,\mathbb{R}^m;\mathcal{F}_t) for all t \in \mathbb{R}^{\ge 0}, and satisfying McShane's Condition II(E)(i). Using the method of successive approximations (cf. Theorem (II.2.1)), it can be shown that there is a unique measurable solution x: [-r,\infty) \times \Omega \to \mathbb{R}^n through (v,\theta) \in \mathcal{L}^2(\Omega,\mathbb{R}^n;\mathcal{F}_0) \times \mathcal{L}^2(\Omega, L^2(J,\mathbb{R}^n);\mathcal{F}_0) with a continuous trajectory \{(x(t), x_t) : t \ge 0\} adapted to (\mathcal{F}_t)_{t \ge 0} (Ahmed [1]). From the point of view of approximation theory, a Cauchy-Maruyama scheme can be constructed for the stochastic FDE (IV) in the spirit of McShane ([53], Chapter V, §§3,4, pp. 165-179). For more details on this matter see [1]. In addition we would like to suggest the following conjectures:

Conjectures

(i) In the stochastic FDE (IV), suppose the coefficients H, G satisfy the conditions of existence mentioned above. Let z = w, m-dimensional Brownian motion adapted to (\mathcal{F}_t)_{t \ge 0}. Then the trajectory \{(x(t), x_t) : t \ge 0\} corresponds to a Feller process on \mathbb{R}^n \times L^2(J,\mathbb{R}^n). If H, G are autonomous, viz.

    dx(t) = H(x(t), x_t)\,dt + G(x(t), x_t)\,dw(t),   t > 0,                (V)

then the above process is time-homogeneous. The transition probabilities \{p(t_1,(v,\eta),t_2,\cdot) : 0 \le t_1 \le t_2,\ v \in \mathbb{R}^n,\ \eta \in L^2(J,\mathbb{R}^n)\} are given by

    p(t_1,(v,\eta),t_2,B) = P\{\omega : \omega \in \Omega,\ ({}^{(v,\eta)}x(\omega)(t_2),\ {}^{(v,\eta)}x_{t_2}(\omega)) \in B\},

where B \in \text{Borel}(\mathbb{R}^n \times L^2(J,\mathbb{R}^n)) and {}^{(v,\eta)}x is the unique solution of (IV) through (v,\eta) \in \mathbb{R}^n \times L^2(J,\mathbb{R}^n) at t = t_1.
(ii) Let C_b = C_b(\mathbb{R}^n \times L^2(J,\mathbb{R}^n), \mathbb{R}) be the Banach space of all uniformly continuous and bounded real functions on \mathbb{R}^n \times L^2(J,\mathbb{R}^n). Define the semigroup \{P_t\}_{t \ge 0} for the stochastic FDE (V) by

    P_t(\phi)(v,\eta) = E\,\phi({}^{(v,\eta)}x(t),\ {}^{(v,\eta)}x_t),   t \ge 0,   \phi \in C_b.

Define the shift semigroup S_t: C_b \to C_b, t \ge 0, by setting

    S_t(\phi)(v,\eta) = \phi(v, \tilde{\eta}_t),   t \ge 0,

for each \phi \in C_b, where \tilde{\eta} denotes the extension of \eta to [-r,\infty) with \tilde{\eta}(s) = v for s \ge 0. The semigroups \{P_t\}_{t \ge 0} and \{S_t\}_{t \ge 0} will have the same domain of strong continuity C_b^0 \subseteq C_b (cf. Theorem (IV.2.1)), but it is not clear if C_b^0 \ne C_b in this case (cf. Theorem (IV.2.2)). However it is easily shown that both semigroups are weakly continuous. Let A, S be their respective weak infinitesimal generators (cf. IV §3); then we conjecture the following analogue of Theorem (IV.3.2):

Suppose \phi \in \mathcal{D}(S), \phi is C^2, D\phi globally bounded, and D^2\phi globally bounded and globally Lipschitz. Then \phi \in \mathcal{D}(A) and

    A(\phi)(v,\eta) = S(\phi)(v,\eta) + D_1\phi(v,\eta)(H(v,\eta)) + \frac{1}{2}\sum_{j=1}^m D_1^2\phi(v,\eta)\big(G(v,\eta)(e_j),\ G(v,\eta)(e_j)\big),

where D_1\phi, D_1^2\phi denote the partial derivatives of \phi with respect to the first variable and \{e_j\}_{j=1}^m is any basis for \mathbb{R}^m.

Remark: In contrast with the non-Hilbertable Banach space C(J,\mathbb{R}^n), the state space \mathbb{R}^n \times L^2(J,\mathbb{R}^n) carries a natural real Hilbert space structure, and so C_b(\mathbb{R}^n \times L^2(J,\mathbb{R}^n), \mathbb{R}) contains a large class of smooth (non-zero) functions with bounded supports. By a result of E. Nelson (Bonic and Frampton [6]), a differentiable function on C(J,\mathbb{R}^n) with bounded support must be identically zero.

§4. Stochastic Integro-Differential Equations

In the stochastic integro-differential equation (SIDE)
    dx(t) = \Big\{\int_{-r}^0 h(s, x(t+r(s)))\,ds\Big\}\,dt + \Big\{\int_{-r}^0 g(s, x(t+d(s)))\,ds\Big\}\,dz(t),   t > 0,                (VI)
    x(t) = \theta(\cdot)(t),   t \in J = [-r,0],

z: \Omega \to C(\mathbb{R}^{\ge 0},\mathbb{R}^m) is a continuous \mathbb{R}^m-valued martingale on a filtered probability space (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, P), satisfying the usual conditions of McShane (Conditions E(i) of Chapter II). The coefficients h: J \times \mathbb{R}^n \to \mathbb{R}^n, g: J \times \mathbb{R}^n \to L(\mathbb{R}^m,\mathbb{R}^n) are continuous maps which are globally Lipschitz in the second variable uniformly with respect to the first. Denote their common Lipschitz constant by L > 0. The delay processes r, d: J \times \Omega \to J are assumed to be (\text{Borel } J \otimes \mathcal{F}_0, \text{Borel } J)-measurable, and the initial condition \theta \in \mathcal{L}^2(\Omega, C(J,\mathbb{R}^n); \mathcal{F}_0).

To establish the existence of a unique solution we shall first cast the stochastic IDE (VI) into the general format of Chapter II §1. Indeed, let us define the maps \tilde{h}: \mathcal{L}^2(\Omega,C) \to \mathcal{L}^2(\Omega,\mathbb{R}^n), \tilde{g}: \mathcal{L}^2(\Omega,C) \to \mathcal{L}^2(\Omega, L(\mathbb{R}^m,\mathbb{R}^n)) as follows:

    \tilde{h}(\psi)(\omega) = \int_{-r}^0 h(s, \psi(\omega)(r(s,\omega)))\,ds,
    \tilde{g}(\psi)(\omega) = \int_{-r}^0 g(s, \psi(\omega)(d(s,\omega)))\,ds,

for all \psi \in \mathcal{L}^2(\Omega,C), a.a. \omega \in \Omega. Observe now that (VI) becomes the stochastic FDE

    dx(t) = \tilde{h}(x_t)\,dt + \tilde{g}(x_t)\,dz(t),   t > 0.

Note also that the coefficients \tilde{h}, \tilde{g} are globally Lipschitz, because if \psi_1, \psi_2 \in \mathcal{L}^2(\Omega,C), then

    \|\tilde{h}(\psi_1) - \tilde{h}(\psi_2)\|^2_{\mathcal{L}^2(\Omega,\mathbb{R}^n)} = \int_\Omega \Big|\int_{-r}^0 \{h(s,\psi_1(\omega)(r(s,\omega))) - h(s,\psi_2(\omega)(r(s,\omega)))\}\,ds\Big|^2 dP
    \le rL^2 \int_\Omega \int_{-r}^0 |\psi_1(\omega)(r(s,\omega)) - \psi_2(\omega)(r(s,\omega))|^2\,ds\,dP
    \le r^2 L^2 \|\psi_1 - \psi_2\|^2_{\mathcal{L}^2(\Omega,C)}.
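The functional \tilde{h} and the Lipschitz bound just derived can be spot-checked numerically. In the sketch below the kernel h, the (deterministic) delay function r(s), and the two test paths are illustrative assumptions, not data from the text.

```python
import numpy as np

# Numerical sketch of the drift functional
#     h_tilde(psi) = int_{-r}^0 h(s, psi(r(s))) ds
# with a deterministic delay r: J -> J and a Riemann-sum quadrature,
# plus a spot-check of the Lipschitz bound
#     |h_tilde(psi1) - h_tilde(psi2)| <= r * L * ||psi1 - psi2||_C.

r_len = 1.0
s = np.linspace(-r_len, 0.0, 401)

L = 2.0
h = lambda s, x: L * np.sin(x) * np.cos(s)      # |h(s,x) - h(s,y)| <= L|x - y|
delay = lambda s: -np.abs(np.sin(np.pi * s))    # a map J -> J

def h_tilde(psi):
    vals = h(s, psi(delay(s)))                  # delayed integrand on the grid
    return np.sum(vals) * (s[1] - s[0])         # Riemann-sum quadrature

psi1 = lambda u: np.cos(3 * u)
psi2 = lambda u: np.cos(3 * u) + 0.1 * u

lhs = abs(h_tilde(psi1) - h_tilde(psi2))
sup = np.max(np.abs(psi1(s) - psi2(s)))         # ||psi1 - psi2||_C on the grid
print(lhs <= r_len * L * sup)                   # True
```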
A similar inequality holds for \tilde{g}. To check that \tilde{h}, \tilde{g} satisfy the adaptability condition E(iii) of Chapter II §1, notice that the processes

    (s,\omega) \mapsto h(s, \psi(\omega)(r(s,\omega))),   (s,\omega) \mapsto g(s, \psi(\omega)(d(s,\omega)))

are (\text{Borel } J \otimes \mathcal{F}_t)-measurable whenever \psi \in \mathcal{L}^2(\Omega,C;\mathcal{F}_t), for t \ge 0. Thus by Theorem (II.2.1) the stochastic IDE (VI) has a unique sample continuous trajectory \{x_t : t \ge 0\} in C(J,\mathbb{R}^n) through \theta.

The trajectory field of (VI) describes a time-homogeneous Feller process on C if z = w, m-dimensional Brownian motion, and the delay processes r, d are just (deterministic) continuous functions r, d: J \to J. According to Theorem (IV.3.2), the weak generator A of the associated semigroup \{P_t\}_{t \ge 0} is given by the formula

    A(\phi)(\eta) = S(\phi)(\eta) + D\phi(\eta)\Big(\int_{-r}^0 h(s,\eta(r(s)))\,ds\ \chi_{\{0\}}\Big) + \frac{1}{2}\sum_{j=1}^m D^2\phi(\eta)