(ii) Suppose that

∫_Ω |f^t_A(ω) − f_A(ω)|²_E dP(ω) → 0 uniformly in A ∈ 𝒜.

Then the first assertion in (ii) follows immediately from the Chebyshev inequality

P(|f^t_A − f_A|_E > ε) ≤ (1/ε²) ∫_Ω |f^t_A(ω) − f_A(ω)|²_E dP(ω),

which holds for any ε > 0, t > 0, A ∈ 𝒜 (Theorem (I.2.4)). To prove the second assertion in (ii), use the first assertion and a similar argument to that in (i). □

We now have the following theorem.

Theorem (3.1): For the stochastic FDE (I) suppose Hypotheses (M) are satisfied. Then the family {P^{t₁}_{t₂} : 0 ≤ t₁ ≤ t₂ ≤ a} satisfies:

(i) each P^{t₁}_{t₂} : C_b → C_b is a contraction, with P^{t₁}_{t₁} = id;

(ii) P^{t₁}_{t₂} ∘ P^{t₂}_{t₃} = P^{t₁}_{t₃}, 0 ≤ t₁ ≤ t₂ ≤ t₃ ≤ a.

In particular, for the autonomous stochastic FDE (IV), the family {P_t = P⁰_t : 0 ≤ t ≤ a} is a one-parameter contraction semigroup on C_b:

P_{t₁} ∘ P_{t₂} = P_{t₁+t₂}, t₁, t₂ ∈ [0,a], t₁ + t₂ ∈ [0,a].

Proof:
(i) By Theorem (II.3.1), the map T^{t₁}_{t₂} : C(J,ℝⁿ) → L²(Ω, C(J,ℝⁿ); F_{t₂}) is Lipschitz for 0 ≤ t₁ ≤ t₂ ≤ a; together with the obvious bound |P^{t₁}_{t₂}(φ)(η)| ≤ ‖φ‖, this gives (i).

(ii) One computes

{P^{t₁}_{t₂}[P^{t₂}_{t₃}(φ)]}(η) = ∫_Ω ∫_Ω φ{ T^{t₂}_{t₃}[T^{t₁}_{t₂}(η)(ω)](ω′) } dP(ω′) dP(ω) = ∫_Ω φ[T^{t₁}_{t₃}(η)(ω)] dP(ω) = P^{t₁}_{t₃}(φ)(η)

(cf. proof of Theorem (1.1)). In the autonomous case,

P^{t₁}_{t₁+t₂} = P⁰_{t₂} = P_{t₂}, t₁, t₂ ∈ [0,a], t₁ + t₂ ∈ [0,a],

because of time-homogeneity. Thus P_{t₁} ∘ P_{t₂} = P⁰_{t₁} ∘ P^{t₁}_{t₁+t₂} = P_{t₁+t₂}. □

The semigroup {P_t}_{t≥0} of the above theorem will be studied in some detail in the next chapter. Let M(C) be the complete topological vector space of all finite Borel measures on C := C(J,ℝⁿ), given the weak* (or vague) topology (§I.2; Parthasarathy [66], Schwartz [71], Stroock and Varadhan [73]). Then we have a bilinear pairing
⟨·,·⟩ : C_b × M(C) → ℝ, ⟨φ,μ⟩ = ∫_C φ(η) dμ(η), φ ∈ C_b, μ ∈ M(C).

Following Dynkin [16], define the adjoint semigroup {P*_t}_{0≤t≤a} on M(C) by

(P*_t μ)(B) = ∫_{η∈C} p(0,η,t,B) dμ(η), B ∈ Borel C(J,ℝⁿ), 0 ≤ t ≤ a.

Then for any φ ∈ C_b,

⟨φ, P*_t μ⟩ = ∫_{ξ∈C} φ(ξ) d(P*_t μ)(ξ) = ∫_{ξ∈C} φ(ξ) ∫_{η∈C} p(0,η,t,dξ) dμ(η)
= ∫_{η∈C} ∫_{ξ∈C} φ(ξ) p(0,η,t,dξ) dμ(η) = ∫_{η∈C} P_t(φ)(η) dμ(η) = ⟨P_t(φ), μ⟩.

Since {P_t}_{0≤t≤a} is a one-parameter semigroup, so is {P*_t}_{0≤t≤a}:

P*_{t₁} ∘ P*_{t₂} = P*_{t₁+t₂}, t₁, t₂ ∈ [0,a], t₁ + t₂ ∈ [0,a].

An invariant probability measure for the stochastic FDE (IV) is a probability measure μ₀ ∈ M(C) such that P*_t μ₀ = μ₀ for all t ∈ [0,a]. If a = ∞, a probability measure μ₀ ∈ M(C) is invariant if and only if lim_{t→∞} p(0,η,t,·) = μ₀ in M(C) for some η ∈ C(J,ℝⁿ). This follows easily from the fact that P*_t p(0,η,t′,·) = p(0,η,t+t′,·), t, t′ ≥ 0, η ∈ C(J,ℝⁿ). Observe also that the family of transition probabilities {p(0,η,t,·) : η ∈ C(J,ℝⁿ), t ≥ 0} for (IV) is left invariant by the semigroup {P*_t}_{t≥0} when a = ∞. It would be interesting to find generic conditions on the coefficients H, G of (IV) which guarantee the existence of a (unique) invariant probability measure. Some partial results in this connection may be found among the examples of Chapter VI (§VI.4). See also Itô and Nisio [41] and Scheutzow [69].
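The duality ⟨φ, P*_t μ⟩ = ⟨P_t(φ), μ⟩ established above has a transparent finite-state analogue which may help fix ideas: for a Markov chain on finitely many states, P_t becomes a row-stochastic matrix acting on observables, P*_t its transpose acting on measures, and an invariant probability measure is a fixed point of the transpose. The following sketch (pure Python; the chain, the observable φ and the measure μ are illustrative choices, not taken from the text) checks the duality identity and approximates the invariant measure by iterating the push-forward.

```python
# Finite-state analogue of the duality <phi, P*_t mu> = <P_t phi, mu>.
# The matrix P plays the role of the semigroup acting on observables,
# and its transpose plays the role of P*_t acting on measures.
# The chain, phi and mu below are illustrative, not from the text.

P = [[0.9, 0.1, 0.0],
     [0.2, 0.7, 0.1],
     [0.0, 0.3, 0.7]]   # row-stochastic transition matrix

phi = [1.0, 2.0, 4.0]   # a bounded "observable"
mu  = [0.5, 0.3, 0.2]   # a probability measure

def apply_P(phi):
    """(P phi)(i) = sum_j P[i][j] phi(j): expected value after one step."""
    return [sum(P[i][j] * phi[j] for j in range(3)) for i in range(3)]

def apply_P_star(mu):
    """(P* mu)(j) = sum_i mu(i) P[i][j]: push-forward of the measure."""
    return [sum(mu[i] * P[i][j] for i in range(3)) for j in range(3)]

def pair(phi, mu):
    """The bilinear pairing <phi, mu>."""
    return sum(p * m for p, m in zip(phi, mu))

# duality: <phi, P* mu> = <P phi, mu>
lhs = pair(phi, apply_P_star(mu))
rhs = pair(apply_P(phi), mu)

# invariant measure as a fixed point of P*: iterate the push-forward
nu = [1/3, 1/3, 1/3]
for _ in range(2000):
    nu = apply_P_star(nu)
```

Iterating μ ↦ P*μ mirrors the remark above that convergence of p(0,η,t,·) as t → ∞ characterizes invariance.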
IV The infinitesimal generator

§1. Notation

For the present chapter we keep the notation, general set-up and the standing assumptions of the last chapter (§III.1,2). In particular, we focus our attention on the autonomous stochastic FDE:

x(ω)(t) = { η(0) + ∫₀ᵗ H(x_u(ω)) du + ∫₀ᵗ G(x_u(·)) dw(·)(u),  t ≥ 0
          { η(t),  t ∈ J = [−r,0].   (I)

Very frequently, the solution x through η will be denoted by ηx; and throughout the chapter we shall assume that the coefficients H : C(J,ℝⁿ) → ℝⁿ, G : C(J,ℝⁿ) → L(ℝᵐ,ℝⁿ) are globally bounded and Lipschitz. The driving Brownian motion w is in ℝᵐ, generating a filtration (F_t)_{t≥0} on the probability space (Ω,F,P). For brevity, symbolize the above stochastic functional equation by the differential notation

dx(t) = H(x_t) dt + G(x_t) dw(t), t > 0;  x₀ = η ∈ C(J,ℝⁿ).   (I)

Similarly, for any t₁ ≥ 0 we represent the equation

x(ω)(t) = { η(0) + ∫_{t₁}ᵗ H(x_u(ω)) du + ∫_{t₁}ᵗ G(x_u(·)) dw(·)(u),  t ≥ t₁
          { η(t − t₁),  t₁ − r ≤ t < t₁

by the stochastic-differential notation

dx(t) = H(x_t) dt + G(x_t) dw(t), t ≥ t₁ ≥ 0;  x_{t₁} = η ∈ C(J,ℝⁿ).   (I)′
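For intuition, equation (I) can be discretized by an Euler–Maruyama scheme in which the segment x_t is carried as a sliding window of grid values over the delay interval. The sketch below is illustrative only: it uses the point-delay drift H(x_t) = −x_t(−r) and a constant diffusion coefficient (this particular H is Lipschitz but not globally bounded, so it satisfies the chapter's standing assumptions only after truncation).

```python
import random

# Euler-Maruyama discretization of dx(t) = H(x_t) dt + G(x_t) dw(t),
# x_0 = eta, where the segment x_t = x(t+s), s in [-r, 0], is kept as a
# sliding window of grid values. Illustrative point-delay coefficients:
# H(x_t) = -x(t - r), G = constant sigma (not the book's general setting).

def euler_maruyama_sdde(eta, r=1.0, T=2.0, dt=0.001, sigma=0.0, seed=0):
    rng = random.Random(seed)
    n_hist = int(round(r / dt))          # grid points in the delay interval
    n_steps = int(round(T / dt))
    # initial segment: eta sampled on [-r, 0]
    path = [eta(-r + k * dt) for k in range(n_hist + 1)]
    for k in range(n_steps):
        x_now = path[-1]
        x_delayed = path[-1 - n_hist]    # x(t - r): the segment at s = -r
        drift = -x_delayed               # H(x_t) = -x(t - r)
        dw = rng.gauss(0.0, dt ** 0.5)
        path.append(x_now + drift * dt + sigma * dw)
    return path

# With sigma = 0 and eta == 1 the scheme approximates the deterministic
# delay equation x'(t) = -x(t - 1), whose solution is x(t) = 1 - t on [0, 1].
path = euler_maruyama_sdde(lambda s: 1.0, sigma=0.0)
```

Replacing `sigma=0.0` by a positive value produces one sample path of the stochastic equation; averaging functionals of many such paths approximates P_t(φ)(η).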
Recall that at the end of the previous chapter we constructed a contraction semigroup {P_t}_{t≥0} associated with the stochastic FDE (I) and defined on the Banach space C_b of all bounded uniformly continuous functions φ : C(J,ℝⁿ) → ℝ.

Now, for an ordinary stochastic differential equation (stochastic ODE, r = 0), it is well known that the semigroup {P_t}_{t≥0} is strongly continuous on C_b with respect to the supremum norm, and its strong infinitesimal generator is a second-order partial differential operator on the state space of the solution process ([16], [22]). Our first objective is to show that, when r > 0, the semigroup {P_t}_{t≥0} is never strongly continuous on the Banach space C_b. Furthermore, we shall derive an explicit formula for the (weak) infinitesimal generator of {P_t}_{t≥0}.

§2. Continuity of the Semigroup

For each η ∈ C(J,ℝⁿ), define η̃ : [−r,∞) → ℝⁿ by

η̃(t) = { η(0),  t ≥ 0
       { η(t),  t ∈ J.

Define the shift S_t : C_b → C_b, t ≥ 0, by setting

S_t(φ)(η) = φ(η̃_t), η ∈ C(J,ℝⁿ), φ ∈ C_b.

The next result then gives a canonical characterization of the strong continuity of {P_t}_{t≥0} in terms of the shifts {S_t}_{t≥0}.
Theorem (2.1): The shifts {S_t}_{t≥0} form a contraction semigroup on C_b such that, for each η ∈ C(J,ℝⁿ), lim_{t→0+} S_t(φ)(η) = lim_{t→0+} P_t(φ)(η) = φ(η) for all φ ∈ C_b. Furthermore, lim_{t→0+} P_t(φ)(η) = φ(η) uniformly in η ∈ C(J,ℝⁿ) if and only if lim_{t→0+} S_t(φ)(η) = φ(η) uniformly in η ∈ C(J,ℝⁿ).

Proof: Let t₁, t₂ ≥ 0, η ∈ C(J,ℝⁿ), φ ∈ C_b, s ∈ J.
Then

S_{t₁}(S_{t₂}(φ))(η) = S_{t₂}(φ)(η̃_{t₁}) = φ((η̃_{t₁})~_{t₂}),

where, for s ∈ J,

(η̃_{t₁})~_{t₂}(s) = { η̃_{t₁}(0) = η(0),  t₂ + s ≥ 0
                   { η̃_{t₁}(t₂ + s) = η̃(t₁ + t₂ + s),  −r ≤ t₂ + s < 0
= η̃_{t₁+t₂}(s).

Hence

S_{t₁}(S_{t₂}(φ))(η) = φ(η̃_{t₁+t₂}) = S_{t₁+t₂}(φ)(η),

i.e. S_{t₁} ∘ S_{t₂} = S_{t₁+t₂}. Since lim_{t→0+} η̃_t = η in C(J,ℝⁿ), it is clear that lim_{t→0+} S_t(φ)(η) = φ(η) for each η ∈ C(J,ℝⁿ), φ ∈ C_b. Also, by sample-path continuity of the trajectory {ηx_t : t ≥ 0} of (I) (Theorem (II.2.1)) together with the dominated convergence theorem, one obtains lim_{t→0+} P_t(φ)(η) = φ(η)
for each φ ∈ C_b and η ∈ C(J,ℝⁿ).

To prove the second part of the theorem, suppose K > 0 is such that |H(η)| ≤ K and ‖G(η)‖ ≤ K for all η ∈ C(J,ℝⁿ). Then for each t ≥ 0 and almost all ω ∈ Ω we have

ηx_t(ω)(s) − η̃_t(s) = { ∫₀^{t+s} H(ηx_u(ω)) du + ∫₀^{t+s} G(ηx_u(·)) dw(·)(u),  t + s > 0
                     { 0,  t + s ≤ 0,  s ∈ J.

Thus, using the martingale inequality for the stochastic integral (Theorem (I.8.5)), one gets

‖ηx_t(·) − η̃_t‖²_{L²(Ω,C)} ≤ 2E sup_{s∈[−t,0]} |∫₀^{t+s} H(ηx_u(·)) du|² + 2E sup_{s∈[−t,0]} |∫₀^{t+s} G(ηx_u(·)) dw(·)(u)|²
≤ 2K²t² + 2K₁ ∫₀ᵗ E‖G(ηx_u(·))‖² du ≤ 2K²t² + 2K₁′ t, some K₁, K₁′ > 0.

Therefore lim_{t→0+} ‖ηx_t − η̃_t‖_{L²(Ω,C)} = 0 uniformly in η ∈ C(J,ℝⁿ). Using the uniform continuity of φ ∈ C_b, it is then not hard to see that

lim_{t→0+} {E φ(ηx_t(·)) − φ(η̃_t)} = 0 uniformly in η ∈ C(J,ℝⁿ)

(cf. proof of Lemma (III.3.1)). So lim_{t→0+} {P_t(φ)(η) − S_t(φ)(η)} = 0 uniformly in η ∈ C(J,ℝⁿ). Finally, writing

P_t(φ)(η) − φ(η) = [P_t(φ)(η) − S_t(φ)(η)] + [S_t(φ)(η) − φ(η)],

the second assertion of the theorem is now obvious. □

Let C⁰_b ⊆ C_b be the set of all φ ∈ C_b such that lim_{t→0+} P_t(φ) = φ (= lim_{t→0+} S_t(φ)) in C_b. Then C⁰_b is a closed linear subalgebra of C_b which is invariant under the semigroups {P_t}_{t≥0}, {S_t}_{t≥0}. Both {P_t}_{t≥0} and {S_t}_{t≥0} restrict to strongly continuous semigroups on C⁰_b (Dynkin [16], pp. 22-26).
Theorem (2.2): For r > 0, the semigroup {P_t}_{t≥0} is not strongly continuous on C_b with respect to the supremum norm.

Proof: It is sufficient to find φ ∈ C_b(C(J,ℝⁿ),ℝ) such that S_t(φ) ↛ φ as t → 0+ in C_b (but S_t(φ)(η) → φ(η) as t → 0+ for each η ∈ C(J,ℝⁿ)). Let B ⊂ C(J,ℝⁿ) be the closed unit ball. Fix any −r < s₀ < 0 and define φ : C(J,ℝⁿ) → ℝ by

φ(η) = { η(s₀),  ‖η‖ ≤ 1
       { η(s₀)/‖η‖,  ‖η‖ > 1.
Clearly φ is continuous; indeed φ is globally Lipschitz (and hence uniformly continuous) on C(J,ℝⁿ). To prove this, let η, η′ ∈ C(J,ℝⁿ) and consider the following cases.

(i) η, η′ ∈ B. Then |φ(η) − φ(η′)| = |η(s₀) − η′(s₀)| ≤ ‖η − η′‖.

(ii) η, η′ ∉ int B, i.e. ‖η‖ ≥ 1, ‖η′‖ ≥ 1. Write

|φ(η) − φ(η′)| = | η(s₀)/‖η‖ − η′(s₀)/‖η′‖ |
≤ | η(s₀)/‖η‖ − η(s₀)/‖η′‖ | + | η(s₀)/‖η′‖ − η′(s₀)/‖η′‖ |
= |η(s₀)| · |‖η′‖ − ‖η‖| / (‖η‖ ‖η′‖) + |η(s₀) − η′(s₀)| / ‖η′‖
≤ ‖η′ − η‖/‖η′‖ + ‖η − η′‖/‖η′‖ ≤ 2‖η − η′‖.

(iii) η ∈ int B, η′ ∉ B, i.e. ‖η‖ < 1, ‖η′‖ > 1. Find η″ ∈ ∂B (where ∂B is the boundary of B) such that η″ lies on the line segment joining η and η′; i.e. find λ₀ ∈ (0,1) such that η″ = (1 − λ₀)η + λ₀η′ and ‖η″‖ = 1. To do this, define the function f : [0,1] → ℝ by f(λ) = ‖(1−λ)η + λη′‖ − 1, λ ∈ [0,1]. Then f is clearly continuous. Also f(0) = ‖η‖ − 1 < 0 and f(1) = ‖η′‖ − 1 > 0. Hence by the intermediate-value theorem there exists λ₀ ∈ (0,1) such that f(λ₀) = 0; i.e. take η″ = (1−λ₀)η + λ₀η′ with ‖η″‖ = 1. Hence, by cases (i) and (ii),

|φ(η) − φ(η′)| ≤ |φ(η) − φ(η″)| + |φ(η″) − φ(η′)| ≤ ‖η − η″‖ + 2‖η″ − η′‖ ≤ 2(‖η − η″‖ + ‖η″ − η′‖) = 2‖η − η′‖,

since η″ lies on the segment between η and η′.
Next consider, for each fixed integer n ≥ 1 such that s₀ + 1/n < 0, the path ηⁿ defined by

ηⁿ(s) = { 0,  s₀ + 1/n ≤ s ≤ 0
        { [s₀ + 1/n − s] n,  s₀ ≤ s ≤ s₀ + 1/n
        { 1,  −r ≤ s ≤ s₀.

Note that ‖ηⁿ‖ = 1 for all such n; i.e. ηⁿ ∈ ∂B for all n.
Then

S_t(φ)(ηⁿ) = φ(η̃ⁿ_t) = η̃ⁿ_t(s₀) = ηⁿ(t + s₀), 0 ≤ t ≤ −s₀.

Therefore

lim_{t→0+} S_t(φ)(ηⁿ) = lim_{t→0+} ηⁿ(t + s₀) = ηⁿ(s₀) = φ(ηⁿ).   (*)

Now (*) is not uniform in n; for if (*) were uniform in n, then given 0 < ε < 1 there would be a δ with 0 < δ < −s₀ (δ independent of n) such that |ηⁿ(t + s₀) − ηⁿ(s₀)| < ε for all 0 < t < δ and all n ≥ 1. Fix 0 < t₀ < δ and choose n₀ such that n₀ > 1/t₀, i.e. t₀ + s₀ > s₀ + 1/n₀. Thus

|ηⁿ⁰(t₀ + s₀) − ηⁿ⁰(s₀)| = |0 − 1| = 1.

But, by the choice of δ, |ηⁿ⁰(t₀ + s₀) − ηⁿ⁰(s₀)| < ε. Hence 1 < ε, which contradicts the choice of ε. Therefore φ ∉ C⁰_b. □
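The mechanism of the proof is easy to check numerically: the shifted values S_t(φ)(ηⁿ) = ηⁿ(t + s₀) converge to φ(ηⁿ) pointwise in n, yet sup_n |S_t(φ)(ηⁿ) − φ(ηⁿ)| = 1 for every fixed t > 0. A minimal sketch (with r = 1 and s₀ = −1/2; all ηⁿ lie on the unit sphere, so the normalization in the definition of φ is inactive):

```python
# Numerical illustration of the proof of Theorem (2.2): pointwise but
# not uniform convergence of S_t(phi)(eta_n) as t -> 0+.
# Here r = 1, s0 = -0.5 and phi(eta) = eta(s0) on the unit ball.

r, s0 = 1.0, -0.5

def eta_n(n, s):
    """1 on [-r, s0], sloping down to 0 on [s0, s0 + 1/n], then 0."""
    if s <= s0:
        return 1.0
    if s <= s0 + 1.0 / n:
        return (s0 + 1.0 / n - s) * n
    return 0.0

def shift_term(n, t):
    """S_t(phi)(eta_n) = eta_n(t + s0) for 0 <= t <= -s0."""
    return eta_n(n, t + s0)

phi_value = eta_n(10, s0)          # phi(eta_n) = eta_n(s0) = 1 for every n

# pointwise convergence: for fixed n the defect vanishes as t -> 0+
defect_small_t = abs(shift_term(10, 1e-6) - phi_value)

# no uniform convergence: for fixed t, any n > 1/t gives defect 1
t = 0.01
worst_defect = max(abs(shift_term(n, t) - phi_value) for n in range(1, 500))
```

Any fixed n is eventually well-approximated, but the "peak" of ηⁿ can always be placed inside the window (s₀, s₀ + t), so the supremum over n never decreases.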
Remarks:

(i) One could have chosen φ* ∈ C_b in the above proof such that

φ*(η) = { 1,  ‖η‖ ≤ 1
        { 1/‖η‖,  ‖η‖ > 1.
It can be shown that φ* ∉ C⁰_b. However, note that, while φ* is only continuous, φ is smooth in any small neighbourhood of 0.

(ii) Later on, in §4, we construct a concrete class of smooth functions on C(J,ℝⁿ) lying entirely in C⁰_b. These functions are called quasi-tame functions. The class of quasi-tame functions is sufficiently rich to generate Borel C(J,ℝⁿ), in exactly the same fashion as the tame functions do.

(iii) Note that the domain of strong continuity C⁰_b of {P_t}_{t≥0} is independent of the coefficients H, G and the Brownian motion w. As yet, a complete characterization of C⁰_b is unknown to me.

§3. The Weak Infinitesimal Generator

In this section we obtain a general formula for the weak infinitesimal generator of {P_t}_{t≥0} (Dynkin [16], Vol. 1, pp. 36-43). Define a weak topology on C_b as follows. Let M(C(J,ℝⁿ)) be the Banach space of all finite regular measures on Borel C(J,ℝⁿ), given the total variation norm. Then there is a continuous bilinear pairing ⟨·,·⟩ : C_b × M(C(J,ℝⁿ)) → ℝ defined by

⟨φ,μ⟩ = ∫_{C(J,ℝⁿ)} φ(η) dμ(η), φ ∈ C_b, μ ∈ M(C(J,ℝⁿ)).

Say a family {φ_t : t > 0} in C_b converges weakly to φ ∈ C_b as t → 0+ if lim_{t→0+} ⟨φ_t,μ⟩ = ⟨φ,μ⟩ for all μ ∈ M(C(J,ℝⁿ)). Write this as φ = w-lim_{t→0+} φ_t.
Proposition (3.1) (Dynkin [16], p. 50): For each t > 0 let φ_t, φ ∈ C_b. Then φ = w-lim_{t→0+} φ_t if and only if {‖φ_t‖ : t > 0} is bounded and φ_t(η) → φ(η) as t → 0+ for each η ∈ C(J,ℝⁿ).

Proof: For each η ∈ C(J,ℝⁿ) let δ_η be the Dirac measure concentrated at η, defined by

δ_η(B) = { 1,  η ∈ B
         { 0,  η ∉ B,  for all B ∈ Borel C(J,ℝⁿ).

Define φ̂_t ∈ [M(C(J,ℝⁿ))]* by

φ̂_t(μ) = ⟨φ_t, μ⟩, μ ∈ M(C(J,ℝⁿ)).

If φ = w-lim_{t→0+} φ_t, then

φ(η) = ⟨φ, δ_η⟩ = lim_{t→0+} ⟨φ_t, δ_η⟩ = lim_{t→0+} φ_t(η)

for all η ∈ C(J,ℝⁿ), and the set {φ̂_t(μ) : t > 0} is bounded for each μ ∈ M(C(J,ℝⁿ)). By the uniform boundedness principle, {‖φ̂_t‖ : t > 0} is bounded. But ‖φ̂_t‖ = ‖φ_t‖ for each t > 0, so {‖φ_t‖ : t > 0} is bounded.

Conversely, suppose {‖φ_t‖ : t > 0} is bounded and φ_t(η) → φ(η) as t → 0+ for all η ∈ C(J,ℝⁿ). By the dominated convergence theorem

∫_{C(J,ℝⁿ)} φ_t(η) dμ(η) → ∫_{C(J,ℝⁿ)} φ(η) dμ(η) = ⟨φ,μ⟩

as t → 0+, for all μ ∈ M(C(J,ℝⁿ)). Thus φ = w-lim_{t→0+} φ_t. □

The weak infinitesimal generator A : D(A) ⊆ C_b → C_b of {P_t}_{t≥0} is defined by

A(φ) = w-lim_{t→0+} [P_t(φ) − φ]/t,
where D(A) is the set of all φ for which the above weak limit exists. By continuity of the sample paths and the dominated convergence theorem it is easy to see that w-lim_{t→0+} P_t(φ) = φ for every φ ∈ C_b. Moreover, the following properties can be found in Dynkin ([16], Vol. 1, Chapter I, §6, pp. 36-43).

Theorem (3.1):

(i) D(A) ⊆ C⁰_b; D(A) is weakly dense in C_b, and P_t(D(A)) ⊆ D(A) for all t ≥ 0.

(ii) If φ ∈ D(A), then the weak derivative (d/dt) P_t(φ) exists and

(d/dt) P_t(φ) = A(P_t(φ)) = P_t(A(φ)),  P_t(φ) − φ = ∫₀ᵗ P_u(A(φ)) du for all t ≥ 0.

(iii) A is weakly closed; i.e. if {φ_k}_{k=1}^∞ ⊂ D(A) is weakly convergent and {A(φ_k)}_{k=1}^∞ is also weakly convergent, then w-lim_{k→∞} φ_k ∈ D(A) and

w-lim_{k→∞} A(φ_k) = A(w-lim_{k→∞} φ_k).

(iv) For each λ > 0, λ·id − A is a bijection of D(A) onto C_b. Indeed

(λ·id − A)⁻¹(ψ) = ∫₀^∞ e^{−λt} P_t(ψ) dt for all ψ ∈ C_b.

The resolvent R_λ = (λ·id − A)⁻¹ is bounded linear and ‖R_λ‖ ≤ 1/λ for all λ > 0.

(v) w-lim_{λ→∞} λ R_λ(ψ) = ψ for every ψ ∈ C_b.
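Part (iv) can be illustrated in the simplest possible setting: a scalar semigroup P_t(ψ) = e^{at} ψ with generator A = a < 0, for which the resolvent formula reduces to the elementary Laplace transform ∫₀^∞ e^{−λt} e^{at} dt = 1/(λ − a). The sketch below (the values of a, λ and ψ are arbitrary illustrative choices) compares the closed form with a numerical Laplace integral and checks the bound ‖R_λ‖ ≤ 1/λ.

```python
import math

# Scalar illustration of Theorem (3.1)(iv): for P_t(psi) = e^{a t} psi
# (generator A = a, with a < 0 so ||P_t|| <= 1), the resolvent
# (lambda id - A)^{-1} psi = psi / (lambda - a) equals the Laplace
# transform integral of e^{-lambda t} P_t(psi). Values are illustrative.

a, lam, psi = -0.7, 2.0, 3.0

closed_form = psi / (lam - a)

# numerical Laplace transform by the trapezoidal rule on [0, T];
# the tail beyond T is negligible since the integrand decays like e^{-2.7 t}
T, n = 40.0, 400000
h = T / n
f = lambda t: math.exp(-lam * t) * math.exp(a * t) * psi
laplace = h * (0.5 * f(0.0) + sum(f(k * h) for k in range(1, n)) + 0.5 * f(T))

# resolvent bound ||R_lambda|| <= 1/lambda from part (iv)
bound_ok = abs(closed_form / psi) <= 1.0 / lam + 1e-12
```

The same identity, with P_t the FDE semigroup, is what part (iv) asserts operator-theoretically.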
In order to derive a general formula for the weak infinitesimal generator A, one needs to augment the state space C(J,ℝⁿ) by adjoining a canonical n-dimensional direction. The generator A will then be equal to the weak infinitesimal generator S of the shift semigroup {S_t}_{t≥0} of §2 plus a second-order partial differential operator along this new direction. The construction is as follows.

Let F_n be the vector space of all simple functions of the form vχ_{0}, where v ∈ ℝⁿ and χ_{0} : J → ℝ is the characteristic function of {0}. Clearly C(J,ℝⁿ) ∩ F_n = {0}. Form the direct sum C(J,ℝⁿ) ⊕ F_n and give it the complete norm

‖η + vχ_{0}‖ = sup_{s∈J} |η(s)| + |v|,  η ∈ C(J,ℝⁿ), v ∈ ℝⁿ.

Indeed C(J,ℝⁿ) ⊕ F_n is the space of all functions ξ : J → ℝⁿ which are continuous on [−r,0) and possibly have a finite jump discontinuity at 0. The following lemma shows that the above construction admits weakly continuous extensions of linear and bilinear forms on C(J,ℝⁿ) to C(J,ℝⁿ) ⊕ F_n.

Lemma (3.1): Let α ∈ C(J,ℝⁿ)*. Then α has a unique (continuous) linear extension ᾱ : C(J,ℝⁿ) ⊕ F_n → ℝ satisfying the weak continuity property:

(w1) if {ξᵏ}_{k=1}^∞ is a bounded sequence in C(J,ℝⁿ) such that ξᵏ(s) → ξ(s) as k → ∞ for all s ∈ J, for some ξ ∈ C(J,ℝⁿ) ⊕ F_n, then α(ξᵏ) → ᾱ(ξ) as k → ∞.

The extension map e : C(J,ℝⁿ)* → [C(J,ℝⁿ) ⊕ F_n]*, α ↦ ᾱ,
is a linear isometry into.

Proof: We prove the lemma first for n = 1. Suppose α ∈ C(J,ℝ)*. By the Riesz representation theorem (Dunford and Schwartz [15], §IV.6.3, p. 265) there is a (unique) regular finite measure μ : Borel J → ℝ such that

α(η) = ∫_{−r}^0 η(s) dμ(s) for all η ∈ C(J,ℝ).

Define ᾱ : C(J,ℝ) ⊕ F₁ → ℝ by

ᾱ(η + v₁χ_{0}) = α(η) + v₁ μ{0}, η ∈ C(J,ℝ), v₁ ∈ ℝ.

Then ᾱ is clearly a continuous linear extension of α. Let {ξᵏ}_{k=1}^∞ be a bounded sequence in C(J,ℝ) such that ξᵏ(s) → η(s) + v₁χ_{0}(s) as k → ∞ for all s ∈ J, where η ∈ C(J,ℝ), v₁ ∈ ℝ. By the dominated convergence theorem,

lim_{k→∞} α(ξᵏ) = lim_{k→∞} ∫_{−r}^0 ξᵏ(s) dμ(s) = ∫_{−r}^0 [η(s) + v₁χ_{0}(s)] dμ(s) = α(η) + v₁ μ{0} = ᾱ(η + v₁χ_{0}),

so ᾱ satisfies (w1). The map e : C(J,ℝ)* → [C(J,ℝ) ⊕ F₁]*, α ↦ ᾱ, is clearly linear.
Higher dimensions n > 1 may be reduced to the 1-dimensional situation as follows. Write α ∈ C(J,ℝⁿ)* in the form

α(η) = Σ_{i=1}^n αⁱ(η_i),

where η = (η₁,...,η_n) ∈ C(J,ℝⁿ), η_i ∈ C(J,ℝ), 1 ≤ i ≤ n, and αⁱ(η*) = α(0,0,...,0,η*,0,...,0) with η* ∈ C(J,ℝ) occupying the i-th place. Hence αⁱ ∈ C(J,ℝ)* for 1 ≤ i ≤ n. Write F_n = F₁ⁿ, where F₁ = {v*χ_{0} : v* ∈ ℝ}, by taking vχ_{0} = (v₁χ_{0},...,v_nχ_{0}) for each v = (v₁,...,v_n) ∈ ℝⁿ, v_i ∈ ℝ, 1 ≤ i ≤ n. Let ᾱⁱ ∈ [C(J,ℝ) ⊕ F₁]* be the extension of αⁱ described before and satisfying (w1). It is easy to see that

C(J,ℝⁿ) ⊕ F_n = [C(J,ℝ) ⊕ F₁] × ... × [C(J,ℝ) ⊕ F₁]  (n copies),

i.e. η + vχ_{0} = (η₁ + v₁χ_{0},...,η_n + v_nχ_{0}). Define ᾱ ∈ [C(J,ℝⁿ) ⊕ F_n]* by

ᾱ(η + vχ_{0}) = Σ_{i=1}^n ᾱⁱ(η_i + v_iχ_{0})

when η = (η₁,...,η_n), v = (v₁,...,v_n). Since each ᾱⁱ is a continuous linear extension of αⁱ, ᾱ is a continuous linear extension of α. Let {ξᵏ}_{k=1}^∞ be bounded in C(J,ℝⁿ) such that ξᵏ(s) → ξ(s) as k → ∞, for all s ∈ J, where ξ ∈ C(J,ℝⁿ) ⊕ F_n. Let ξ = (ξ₁,...,ξ_n), ξᵏ = (ξ₁ᵏ,...,ξ_nᵏ), ξᵢᵏ ∈ C(J,ℝ), ξ_i ∈ C(J,ℝ) ⊕ F₁, 1 ≤ i ≤ n. Hence {ξᵢᵏ}_{k=1}^∞ is bounded in C(J,ℝ) and ξᵢᵏ(s) → ξ_i(s) as k → ∞, s ∈ J, 1 ≤ i ≤ n. Therefore

lim_{k→∞} α(ξᵏ) = lim_{k→∞} Σ_{i=1}^n αⁱ(ξᵢᵏ) = Σ_{i=1}^n ᾱⁱ(ξ_i) = ᾱ(ξ),

so ᾱ satisfies (w1).

To prove uniqueness, let α̃ ∈ [C(J,ℝⁿ) ⊕ F_n]* be any continuous linear extension of α satisfying (w1). For any vχ_{0} ∈ F_n choose a bounded sequence {ξ₀ᵏ}_{k=1}^∞ in C(J,ℝⁿ) such that ξ₀ᵏ(s) → vχ_{0}(s) as k → ∞, for all s ∈ J; e.g. take

ξ₀ᵏ(s) = { (ks + 1)v,  −1/k ≤ s ≤ 0
         { 0,  −r ≤ s < −1/k.

Note that ‖ξ₀ᵏ‖ = |v| for all k ≥ 1. Also by (w1) one has

α̃(η + vχ_{0}) = lim_{k→∞} α(η + ξ₀ᵏ) = ᾱ(η + vχ_{0})

for all η ∈ C(J,ℝⁿ). Thus α̃ = ᾱ.

Since the extension map e is linear in the one-dimensional case, it follows from the representation of ᾱ in terms of the ᾱⁱ that e : C(J,ℝⁿ)* → [C(J,ℝⁿ) ⊕ F_n]*, α ↦ ᾱ, is also linear. But ᾱ is an extension of α, so ‖ᾱ‖ ≥ ‖α‖. Conversely, let ξ = η + vχ_{0} ∈ C(J,ℝⁿ) ⊕ F_n and construct {ξ₀ᵏ}_{k=1}^∞ in C(J,ℝⁿ) as above. Then

ᾱ(ξ) = lim_{k→∞} α(η + ξ₀ᵏ).

But

|α(η + ξ₀ᵏ)| ≤ ‖α‖ ‖η + ξ₀ᵏ‖ ≤ ‖α‖ [‖η‖ + ‖ξ₀ᵏ‖] = ‖α‖ [‖η‖ + |v|] = ‖α‖ ‖ξ‖ for all k ≥ 1.

Hence |ᾱ(ξ)| = lim_{k→∞} |α(η + ξ₀ᵏ)| ≤ ‖α‖ ‖ξ‖ for every ξ ∈ C(J,ℝⁿ) ⊕ F_n. Thus ‖ᾱ‖ ≤ ‖α‖. So ‖ᾱ‖ = ‖α‖ and e is an isometry into. □
Lemma (3.2): Let β : C(J,ℝⁿ) × C(J,ℝⁿ) → ℝ be a continuous bilinear map. Then β has a unique (continuous) bilinear extension β̄ : [C(J,ℝⁿ) ⊕ F_n] × [C(J,ℝⁿ) ⊕ F_n] → ℝ satisfying the weak continuity property:

(w2) if {ξᵏ}_{k=1}^∞, {ηᵏ}_{k=1}^∞ are bounded sequences in C(J,ℝⁿ) such that ξᵏ(s) → ξ*(s), ηᵏ(s) → η*(s) as k → ∞, for all s ∈ J, for some ξ*, η* ∈ C(J,ℝⁿ) ⊕ F_n, then β(ξᵏ,ηᵏ) → β̄(ξ*,η*) as k → ∞.

Proof: Here we also deal first with the 1-dimensional case. Regard the continuous bilinear map β : C(J,ℝ) × C(J,ℝ) → ℝ as a continuous linear map β : C(J,ℝ) → C(J,ℝ)*. Since C(J,ℝ)* is weakly complete (Dunford and Schwartz [15], §IV.13.22, p. 341), β is weakly compact as a continuous linear map C(J,ℝ) → C(J,ℝ)* (Theorem (I.4.2); Dunford and Schwartz [15], §VI.7.6, p. 494). Hence there is a unique measure Λ : Borel J → C(J,ℝ)* (of finite semivariation ‖Λ‖(J) < ∞) such that

β(ξ) = ∫_{−r}^0 ξ(s) dΛ(s) for all ξ ∈ C(J,ℝ)

(Theorem (I.4.1)). Using a similar argument to that used in the proof of Lemma (3.1), the above integral representation of β implies the existence of a unique continuous linear extension β̂ : C(J,ℝ) ⊕ F₁ → C(J,ℝ)* satisfying (w1); to prove this, one needs the dominated convergence theorem for vector-valued measures (Dunford and Schwartz [15], §IV.10.10, p. 328; cf. Theorem (I.3.1)(iv)), which yields the convergence β(ξᵏ) → β̂(ξ*) in the norm of C(J,ℝ)*. Define the bilinear extension by β̄(ξ*, η*) = e(β̂(ξ*))(η*), where e is the extension isometry of Lemma (3.1). Clearly β̄ gives a continuous bilinear extension of β to [C(J,ℝ) ⊕ F₁] × [C(J,ℝ) ⊕ F₁]. To prove that β̄ satisfies (w2), let {ξᵏ}_{k=1}^∞, {ηᵏ}_{k=1}^∞ be bounded sequences in C(J,ℝ) such that ξᵏ(s) → ξ*(s), ηᵏ(s) → η*(s) as k → ∞ for all s ∈ J, for some ξ*, η* ∈ C(J,ℝ) ⊕ F₁. Now for any k,

|β(ξᵏ)(ηᵏ) − β̄(ξ*, η*)| ≤ ‖β(ξᵏ) − β̂(ξ*)‖ ‖ηᵏ‖ + |β̄(ξ*, ηᵏ) − β̄(ξ*, η*)|.

But by (w1) for β̄(ξ*, ·) we have lim_{k→∞} |β̄(ξ*, ηᵏ) − β̄(ξ*, η*)| = 0. Since {‖ηᵏ‖}_{k=1}^∞ is bounded, it follows from the last inequality that lim_{k→∞} β(ξᵏ)(ηᵏ) = β̄(ξ*, η*).

When n > 1, we use coordinates as in Lemma (3.1) to reduce to the 1-dimensional case. Indeed, write any continuous bilinear map β : C(J,ℝⁿ) × C(J,ℝⁿ) → ℝ as the sum of continuous bilinear maps C(J,ℝ) × C(J,ℝ) → ℝ in the following way:

β((ξ₁,...,ξ_n),(η₁,...,η_n)) = Σ_{i,j=1}^n βⁱʲ(ξ_i, η_j),

where (ξ₁,...,ξ_n), (η₁,...,η_n) ∈ C(J,ℝⁿ), ξ_i, η_i ∈ C(J,ℝ), 1 ≤ i ≤ n, and βⁱʲ : C(J,ℝ) × C(J,ℝ) → ℝ is the continuous bilinear map

βⁱʲ(ξ′,η′) = β((0,0,...,0,ξ′,0,...,0), (0,0,...,0,η′,0,...,0)),

for ξ′, η′ ∈ C(J,ℝ) occupying the i-th and j-th places respectively, 1 ≤ i, j ≤ n. Define

β̄(ξ*, η*) = Σ_{i,j=1}^n β̄ⁱʲ(ξᵢ*, ηⱼ*),

where ξ* = (ξ₁*,...,ξ_n*), η* = (η₁*,...,η_n*) ∈ C(J,ℝⁿ) ⊕ F_n, ξᵢ*, ηⱼ* ∈ C(J,ℝ) ⊕ F₁, 1 ≤ i, j ≤ n. It is then easy to see that β̄ is a continuous bilinear extension of β satisfying (w2).

Finally we prove uniqueness. Let β̃ : [C(J,ℝⁿ) ⊕ F_n] × [C(J,ℝⁿ) ⊕ F_n] → ℝ be any continuous bilinear extension of β satisfying (w2). Take ξ* = ξ + v₁χ_{0}, η* = η + v₂χ_{0} ∈ C(J,ℝⁿ) ⊕ F_n, where ξ, η ∈ C(J,ℝⁿ), v₁, v₂ ∈ ℝⁿ. Choose bounded sequences {ξ₀ᵏ}_{k=1}^∞, {η₀ᵏ}_{k=1}^∞ in C(J,ℝⁿ) such that ξ₀ᵏ(s) → v₁χ_{0}(s), η₀ᵏ(s) → v₂χ_{0}(s) as k → ∞, s ∈ J, with ‖ξ₀ᵏ‖ = |v₁|, ‖η₀ᵏ‖ = |v₂| for all k ≥ 1. Let ξᵏ = ξ + ξ₀ᵏ, ηᵏ = η + η₀ᵏ. Then {ξᵏ}_{k=1}^∞, {ηᵏ}_{k=1}^∞ are bounded sequences in C(J,ℝⁿ) such that ξᵏ(s) → ξ*(s), ηᵏ(s) → η*(s) as k → ∞, for all s ∈ J. So by (w2) for β̃ and β̄ one gets

β̃(ξ*, η*) = lim_{k→∞} β(ξᵏ, ηᵏ) = β̄(ξ*, η*).

Thus β̃ = β̄. □
For each η ∈ C(J,ℝⁿ) let ηx ∈ L²(Ω, C([−r,a],ℝⁿ)) be the solution of (I) through η, and let η̃_t ∈ C(J,ℝⁿ) be defined as in §2 for each t ∈ [0,a].

Lemma (3.3): There is a K > 0 (independent of t, η) such that

‖(1/t) E(ηx_t − η̃_t)‖_C ≤ K for all t > 0 and η ∈ C(J,ℝⁿ).

Also, if α ∈ C(J,ℝⁿ)*, then

lim_{t→0+} (1/t) E α(ηx_t − η̃_t) = ᾱ(H(η)χ_{0}) for each η ∈ C(J,ℝⁿ).

Proof: Denote by E the expectation for ℝⁿ- or C(J,ℝⁿ)-valued random variables. Let K > 0 be such that |H(η)| ≤ K for all η ∈ C(J,ℝⁿ). Now

ηx_t(ω)(s) − η̃_t(s) = { ∫₀^{t+s} H(ηx_u(ω)) du + ∫₀^{t+s} G(ηx_u(·)) dw(·)(u),  t + s > 0
                     { 0,  t + s ≤ 0,

for each t ∈ [0,a], a.a. ω ∈ Ω, s ∈ J. Since evaluation at each s ∈ J is a continuous linear map C(J,ℝⁿ) → ℝⁿ, it commutes with the expectation; i.e.

[E{(1/t)(ηx_t − η̃_t)}](s) = E{(1/t)[ηx_t(·)(s) − η̃_t(s)]} = { (1/t) ∫₀^{t+s} E(H(ηx_u)) du,  t + s > 0
                                                         { 0,  t + s ≤ 0,

s ∈ J, t > 0, using the martingale property of the Itô integral. Thus

lim_{t→0+} [E{(1/t)(ηx_t − η̃_t)}](s) = { lim_{t→0+} (1/t) ∫₀ᵗ E(H(ηx_u)) du = H(η),  s = 0
                                       { 0,  −r ≤ s < 0
= H(η)χ_{0}(s) for all s ∈ J.

We prove next that ‖(1/t) E(ηx_t − η̃_t)‖_C is bounded in t > 0 and η. Clearly

|(1/t) ∫₀^{t+s} E(H(ηx_u)) du| ≤ (1/t) ∫₀^{t+s} E|H(ηx_u)| du ≤ K whenever t + s > 0.

Therefore ‖(1/t) E(ηx_t − η̃_t)‖_C ≤ K for all t > 0 and η ∈ C(J,ℝⁿ).

If α ∈ C(J,ℝⁿ)*, then

(1/t) E α(ηx_t − η̃_t) = (1/t) α(E(ηx_t − η̃_t)) = α[(1/t) E(ηx_t − η̃_t)].

By the weak continuity property (w1), one gets

lim_{t→0+} (1/t) E α(ηx_t − η̃_t) = ᾱ(H(η)χ_{0}). □
Lemma (3.4): For each t > 0 and a.a. ω ∈ Ω, define w̃_t(ω) ∈ C(J,ℝᵐ) by

w̃_t(ω)(s) = { (1/√t)[w(ω)(t+s) − w(ω)(0)],  −t ≤ s ≤ 0
            { 0,  −r ≤ s < −t.

Let β be a continuous bilinear form on C(J,ℝⁿ). Then

lim_{t→0+} [(1/t) E β(ηx_t − η̃_t, ηx_t − η̃_t) − E β(G(η) ∘ w̃_t, G(η) ∘ w̃_t)] = 0.

Proof: We prove first that

lim_{t→0+} E ‖(1/√t)(ηx_t − η̃_t) − G(η) ∘ w̃_t‖²_C = 0.   (1)

Observe that

(1/√t)[ηx_t(ω)(s) − η̃_t(s)] − [G(η) ∘ w̃_t(ω)](s)
= { (1/√t) ∫₀^{t+s} H(ηx_u) du + (1/√t) ∫₀^{t+s} [G(ηx_u) − G(η)] dw(u),  −t ≤ s ≤ 0
  { 0,  −r ≤ s < −t,

almost surely. Hence, by the martingale inequality for the stochastic integral,

E ‖(1/√t)(ηx_t − η̃_t) − G(η) ∘ w̃_t‖²_C ≤ 2 ∫₀ᵗ E|H(ηx_u)|² du + (2K₁/t) ∫₀ᵗ E‖G(ηx_u) − G(η)‖² du   (2)

for t > 0 and some K₁ > 0. Since H and G are Lipschitz with Lipschitz constant L, then almost surely

|H(ηx_u)|² ≤ 2|H(η)|² + 2L² ‖ηx_u − η‖²_C, u ≥ 0,

and

‖G(ηx_u) − G(η)‖² ≤ L² ‖ηx_u − η‖²_C, u ≥ 0.

Hence

E|H(ηx_u)|² ≤ 2|H(η)|² + 2L² E‖ηx_u − η‖²_C   (3)

and

E‖G(ηx_u) − G(η)‖² ≤ L² E‖ηx_u − η‖²_C.   (4)

Now by the main existence theorem (Theorem (II.2.1)), the map [0,a] ∋ u ↦ x_u ∈ L²(Ω, C(J,ℝⁿ)) is continuous; so lim_{u→0+} E‖ηx_u − η‖²_C = 0. Therefore the last two inequalities (3) and (4) imply that {E|H(ηx_u)|² : u ∈ [0,a]} is bounded and lim_{u→0+} E‖G(ηx_u) − G(η)‖² = 0. Letting t → 0+ in (2) yields (1).

Since β is bilinear,

(1/t) β(ηx_t − η̃_t, ηx_t − η̃_t) − β(G(η)∘w̃_t, G(η)∘w̃_t)
= β((1/√t)(ηx_t − η̃_t) − G(η)∘w̃_t, (1/√t)(ηx_t − η̃_t) − G(η)∘w̃_t)
+ β((1/√t)(ηx_t − η̃_t) − G(η)∘w̃_t, G(η)∘w̃_t) + β(G(η)∘w̃_t, (1/√t)(ηx_t − η̃_t) − G(η)∘w̃_t).

Thus, by continuity of β and Hölder's inequality, one gets

|(1/t) E β(ηx_t − η̃_t, ηx_t − η̃_t) − E β(G(η)∘w̃_t, G(η)∘w̃_t)|
≤ ‖β‖ E‖(1/√t)(ηx_t − η̃_t) − G(η)∘w̃_t‖²_C
+ 2‖β‖ [E‖(1/√t)(ηx_t − η̃_t) − G(η)∘w̃_t‖²_C]^{1/2} [E‖G(η)∘w̃_t‖²_C]^{1/2}   (5)

for all t > 0. But, by Doob's inequality,

E‖G(η)∘w̃_t‖²_C ≤ (1/t) E sup_{s∈[−t,0]} |w(t+s) − w(0)|² ‖G(η)‖² ≤ K₂ ‖G(η)‖²   (6)

for some K₂ > 0 and all t > 0. Combining (6) and (5) and letting t → 0+ gives the required result. □
Lemma (3.5): Let i_n : ℝⁿ → F_n be the isomorphism i_n(v) = vχ_{0}, v ∈ ℝⁿ, and let G(η) × G(η) denote the linear map

ℝᵐ × ℝᵐ → ℝⁿ × ℝⁿ, (v₁,v₂) ↦ (G(η)(v₁), G(η)(v₂)).

Then for any continuous bilinear form β on C(J,ℝⁿ),

lim_{t→0+} (1/t) E β(ηx_t − η̃_t, ηx_t − η̃_t) = trace [β̄ ∘ (i_n × i_n) ∘ (G(η) × G(η))]

for each η ∈ C(J,ℝⁿ), where β̄ is the continuous bilinear extension of β to C(J,ℝⁿ) ⊕ F_n (Lemma (3.2)). Indeed, if {e_j}_{j=1}^m is any basis for ℝᵐ, then

lim_{t→0+} (1/t) E β(ηx_t − η̃_t, ηx_t − η̃_t) = Σ_{j=1}^m β̄(G(η)(e_j)χ_{0}, G(η)(e_j)χ_{0}).

Proof: In view of Lemma (3.4) it is sufficient to prove that

lim_{t→0+} E β(A ∘ w̃_t, A ∘ w̃_t) = Σ_{j=1}^m β̄(A(e_j)χ_{0}, A(e_j)χ_{0})   (7)

for any A ∈ L(ℝᵐ,ℝⁿ). We deal first with the case m = n = 1; viz. we show that

lim_{t→0+} E β(w̃_t, w̃_t) = β̄(χ_{0}, χ_{0})   (7)′

for one-dimensional Brownian motion w.

If ξ, η ∈ C := C(J,ℝ), let ξ ⊗ η stand for the function J × J → ℝ defined by (ξ ⊗ η)(s,s′) = ξ(s)η(s′) for all s, s′ ∈ J. The projective tensor product C ⊗_π C is the vector space of all functions of the form Σ_{i=1}^N ξ_i ⊗ η_i, where ξ_i, η_i ∈ C, i = 1,2,...,N. It carries the norm

‖h‖_π = inf { Σ_{i=1}^N ‖ξ_i‖ ‖η_i‖ : h = Σ_{i=1}^N ξ_i ⊗ η_i, ξ_i, η_i ∈ C, i = 1,...,N }.

The infimum is taken over all possible finite representations of h ∈ C ⊗_π C. Denote by C ⊗̂_π C the completion of C ⊗_π C under the above norm. It is well known that C ⊗̂_π C is continuously and densely embedded in C(J × J, ℝ), the Banach space of all continuous functions J × J → ℝ under the supremum norm
(Treves [75], pp. 403-410; §I.4). Since C is a separable Banach space, so is C ⊗̂_π C. For let Y ⊂ C be a countable dense subset of C. Then the countable set

Y ⊗ Y = { Σ_{i=1}^N ξ_i ⊗ η_i : ξ_i, η_i ∈ Y, i = 1,...,N, N = 1,2,... }

is dense in C ⊗_π C and hence in C ⊗̂_π C.

The continuous bilinear form β on C corresponds to a continuous linear functional β̃ ∈ [C ⊗̂_π C]* (Treves [75], pp. 434-445; cf. Theorem (I.4.3)). Now let ξ₁, ξ₂ ∈ L²(Ω,C). The map

C × C → C ⊗̂_π C, (ξ,η) ↦ ξ ⊗ η,

is clearly continuous bilinear. Thus

ξ₁(·) ⊗ ξ₂(·) : Ω → C ⊗̂_π C, ω ↦ ξ₁(ω) ⊗ ξ₂(ω),

is Borel measurable. But ‖ξ₁(ω) ⊗ ξ₂(ω)‖_π ≤ ‖ξ₁(ω)‖ ‖ξ₂(ω)‖ for almost all ω ∈ Ω; hence by Hölder's inequality the integral ∫_Ω ‖ξ₁(ω) ⊗ ξ₂(ω)‖_π dP(ω) exists and is finite. From the separability of C ⊗̂_π C the Bochner integral (§I.2)

E ξ₁(·) ⊗ ξ₂(·) = ∫_Ω ξ₁(ω) ⊗ ξ₂(ω) dP(ω)

exists in C ⊗̂_π C. Furthermore, it commutes with the continuous linear functional β̃; viz.

E β(ξ₁(·), ξ₂(·)) = β̃(E ξ₁(·) ⊗ ξ₂(·)).   (8)
Fix 0 < t < r and consider

E[w̃_t(·)(s) w̃_t(·)(s′)] = (1/t) E[w(·)(t+s) − w(·)(0)][w(·)(t+s′) − w(·)(0)]
= { 1 + (1/t) min(s,s′),  s, s′ ∈ [−t,0]
  { 0,  s ∈ [−r,−t) or s′ ∈ [−r,−t).   (9)

Define K_t : J × J → ℝ by letting

K_t(s,s′) = [1 + (1/t) min(s,s′)] χ_{[−t,0]}(s) χ_{[−t,0]}(s′), s, s′ ∈ J,   (10)

i.e.

K_t = E[w̃_t(·) ⊗ w̃_t(·)].

Since w̃_t ∈ L²(Ω,C), it is clear from (8) that K_t ∈ C ⊗̂_π C and

E β(w̃_t(·), w̃_t(·)) = β̃(K_t).   (11)
In order to calculate lim_{t→0+} β̃(K_t), we shall obtain a series expansion of K_t. We appeal to the following classical technique. Note that K_t is continuous; so we can consider the eigenvalue problem

∫_{−r}^0 K_t(s,s′) ξ(s′) ds′ = λ ξ(s), s ∈ J.   (12)

Since the kernel K_t is symmetric, all eigenvalues λ of (12) are real. By (10), rewrite (12) in the form

{ ∫_{−t}^0 ξ(s′) ds′ + (1/t) ∫_{−t}^0 min(s,s′) ξ(s′) ds′ = λ ξ(s),  s ∈ [−t,0]   (i)
{ 0 = λ ξ(s),  s ∈ [−r,−t),   (ii)   (13)

i.e.

∫_{−t}^0 ξ(s′) ds′ + (1/t) ∫_{−t}^s s′ ξ(s′) ds′ + (s/t) ∫_s^0 ξ(s′) ds′ = λ ξ(s), s ∈ [−t,0].   (14)

Differentiate (14) with respect to s, keeping t fixed, to obtain

(1/t) ∫_s^0 ξ(s′) ds′ = λ ξ′(s), s ∈ [−t,0].   (15)

Differentiating once more,

−ξ(s) = λ t ξ″(s), s ∈ [−t,0].   (16)

When λ = 0, choose ξ⁰_t : J → ℝ to be any continuous function such that ξ⁰_t(s) = 0 for all s ∈ [−t,0], normalized through the condition ∫_{−r}^{−t} ξ⁰_t(s)² ds = 1.

Suppose λ ≠ 0. Then (13)(ii) implies that

ξ(s) = 0 for all s ∈ [−r,−t).   (17)

In (14) put s = −t to get

ξ(−t) = 0.   (18)

In (15) put s = 0 to obtain

ξ′(0) = 0.   (19)

Hence for λ ≠ 0, (12) is equivalent to the differential equation (16) coupled with the conditions (17), (18) and (19). Now solutions of this are given by

ξ(s) = { A₁ e^{is/(λt)^{1/2}} + A₂ e^{−is/(λt)^{1/2}},  s ∈ [−t,0]
       { 0,  s ∈ [−r,−t),  i = √(−1).   (20)

Condition (19) implies immediately that A₁ = A₂ = 1, say. From (18) one gets

e^{−it/(λt)^{1/2}} + e^{it/(λt)^{1/2}} = 0.

Since the real exponential function has no zeros, it follows that λ^{1/2} cannot be imaginary, i.e. λ > 0. (Being a covariance function, each kernel K_t is in any case non-negative definite in the sense that ∫_J ∫_J K_t(s,s′) ξ(s) ξ(s′) ds ds′ ≥ 0 for all ξ ∈ C.) Using (18), we get the eigenvalues of (12) as solutions of the equation

ξ(−t) = 2 cos [t/(λt)^{1/2}] = 0.

Therefore the eigenvalues of (12) are given by

λᵏ_t = 4t/(π²(2k+1)²), k = 0,1,2,...,   (21)

and the corresponding eigenfunctions by

ξᵏ_t(s) = (2/t)^{1/2} χ_{[−t,0]}(s) cos [(2k+1)πs/(2t)], s ∈ J, k = 0,1,2,...,   (22)

after being normalized through the condition ∫_J ξᵏ_t(s)² ds = 1, k = 0,1,2,....
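The eigenpairs (21)-(22) can be checked numerically: applying the integral operator with kernel K_t to ξᵏ_t by quadrature should reproduce λᵏ_t ξᵏ_t. A minimal sketch (the value of t and the evaluation point are arbitrary illustrative choices):

```python
import math

# Numerical check of the eigenpairs (21)-(22): for the kernel
# K_t(s,s') = 1 + min(s,s')/t on [-t,0]^2 (zero otherwise), verify by
# quadrature that  int_J K_t(s,s') xi_k(s') ds' = lambda_k xi_k(s),
# with lambda_k = 4t / (pi^2 (2k+1)^2) and
# xi_k(s) = sqrt(2/t) cos((2k+1) pi s / (2t)) on [-t, 0].

t = 0.4

def K(s, sp):
    if s < -t or sp < -t:
        return 0.0
    return 1.0 + min(s, sp) / t

def lam(k):
    return 4.0 * t / (math.pi**2 * (2*k + 1)**2)

def xi(k, s):
    if s < -t:
        return 0.0
    return math.sqrt(2.0 / t) * math.cos((2*k + 1) * math.pi * s / (2.0 * t))

def apply_kernel(k, s, n=20000):
    """Trapezoidal quadrature of int_{-t}^0 K(s, s') xi_k(s') ds'."""
    h = t / n
    total = 0.5 * (K(s, -t) * xi(k, -t) + K(s, 0.0) * xi(k, 0.0))
    for j in range(1, n):
        sp = -t + j * h
        total += K(s, sp) * xi(k, sp)
    return total * h

s = -0.1
results = [(apply_kernel(k, s), lam(k) * xi(k, s)) for k in range(3)]
```

Each pair in `results` agrees up to quadrature error, confirming the boundary conditions ξ(−t) = 0 and ξ′(0) = 0 built into (22).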
Now, by Mercer's theorem (Courant and Hilbert [10], p. 138; Riesz and Sz.-Nagy [68], p. 245), the continuous non-negative definite kernel K_t can be expanded as a uniformly and absolutely convergent series

K_t(s,s′) = Σ_{k=0}^∞ λᵏ_t ξᵏ_t(s) ξᵏ_t(s′)   (23)
= { Σ_{k=0}^∞ [8/(π²(2k+1)²)] cos[(2k+1)πs/(2t)] cos[(2k+1)πs′/(2t)],  s, s′ ∈ [−t,0]
  { 0,  s ∈ [−r,−t) or s′ ∈ [−r,−t).   (24)

But from the definition of K_t one has K_t(0,0) = 1 for every t > 0. Thus, putting s = s′ = 0 in (24), we obtain

Σ_{k=0}^∞ 8/(π²(2k+1)²) = 1.   (25)

From the absolute and uniform convergence of (24) it is easy to see that K_t can be expressed in the form

K_t = Σ_{k=0}^∞ [8/(π²(2k+1)²)] ξ̂ᵏ_t ⊗ ξ̂ᵏ_t,   (26)

where

ξ̂ᵏ_t(s) = cos[(2k+1)πs/(2t)] χ_{[−t,0]}(s), s ∈ J.

Note that the series (26) converges (absolutely) in the projective tensor product norm on C ⊗̂_π C. Hence we can apply β̃ to (26), getting from (11) the equality

E β(w̃_t(·), w̃_t(·)) = β̃(K_t) = Σ_{k=0}^∞ [8/(π²(2k+1)²)] β̃(ξ̂ᵏ_t ⊗ ξ̂ᵏ_t) = Σ_{k=0}^∞ [8/(π²(2k+1)²)] β(ξ̂ᵏ_t, ξ̂ᵏ_t).   (27)

But ‖ξ̂ᵏ_t‖ ≤ 1 for all k ≥ 0 and all 0 < t < r; so the series (27) is uniformly convergent in t, when compared with the convergent series Σ_{k=0}^∞ 8/(π²(2k+1)²). Moreover, for each s ∈ J, ξ̂ᵏ_t(s) → χ_{0}(s) as t → 0+, k = 0,1,2,.... Thus, if we let t → 0+ in (27), we obtain

lim_{t→0+} E β(w̃_t, w̃_t) = Σ_{k=0}^∞ [8/(π²(2k+1)²)] β̄(χ_{0}, χ_{0}),
using (25) and Lemma 2. This proves (7)'. For dimensions n > 1, write e:C(JJRn) X C(JJRn) +R in the form n
..
.
.
e<~1.~2> = . ~ e,J<~~.~~> 1 ,J=1  ( 1 n) lJ. ( ) ( ) where ~ 1  ( ~ 11 ••••• ~ n) 1 , ~ 2  ~ 2 •••• ,~; 2 and each B .C JJR x C JJR ~ R is continuous bilinear. Let A € L(Rm, Rn) and {ek}~= 1 ' {ei}~= 1 be the canonical bases for Rm and Rn, respectively; i.e.
93
Write mdimensi.onal Brownian motion w in the form w = (w1,w2,. ••• ~) where wk (t) = ~(t),ek>' k = 1, ••• ,m, are independent onedimensional Brownian motions. Then
Letting t
=
n 1: i ,j=1
=
n 1: i ,j=1
+
0+ and using (7)' gives
lim Ea(Aowt,Aowt) t+O+
=
m 1: k=1
To obtain the final statement of the lemma, take A = G(n) and note that the last trace term is independent of the choice of basis in Rm. c Let V(S) c C~ be the domain of the weak generator S for the shift semigroup {St}t>O of §2. We can now state our main theorem which basically says that if' € V(S) is sufficiently smooth, then it is automatically in V(A). Furthermore, A is equal to S plus a second order partia~ differential operator on C(J,Rn) taken along the canonical direction Fn. The following conditions on a function ':C(J,Rn) + R are needed. Conditions (OA): (i )
' € 1)( s)t
( i i) ' is c2; (iii) 0,, o2' are globally bounded; (iv) o2, is globally Lipschitz on C(J,Rn). Theorem (3.2): Suppose ':C(J,Rn) ~ R satisfies Conditions (OA). Then 94
• €
O(A) and for each n
C(JJR")
€
A( 4> )(n) = S( l!>)(n)+( Dl!>(n )o in)(H(n))+Hrace[D24>(n) o( in Xin) o(G(n) XG(n))]
where~. D24>(n) denote the canonical weakly continuous extensions of o•(n) and D24>(n) to C(JJR"> e Fn' and in~" ~ Fn is the natural identification v t> vx{O} • Indeed if {ej}J= 1 is any basis for R", then n :2:::'

A(l!>)(n) = S(l!>)(n)+DII>(n)(H(n)x{O})+i j: 1o l!>(n)(G(n)(ej)X{o}•G(n)(ej)X{o}>· ~: Fix n € C(JJR") and let nx be the solution of the SRFDE (I) through n. Suppose 4> satisfies (DA). Since 4> is c2, then by Taylor•s Theorem (Lang [47]) we have
where (28)
Taking expectations, we obtain
Since 4>
€
O(S), then (30)
In order to calculate lim following two limits
t~+
t E[ll>(nxt)  l!>(n)], one needs to work out the
"' n ~+ t1 E04>(nt)( xt  "' nt)
lim
t~O+
t ER2(t)
We start by considering (31). that
(31) (32)
From Lemma (3.3), there exists a K > 0 such
95
Hence
Let t
+
0+ and use the cont;nu;ty of 0' at n to obta;n
"n ' = 1;m t1 EO,(n)( n " nt) ' 1;m t1 EO,(nt)( xt "  nt) xt
t~+
t~+
(33) by Lenvna (3.3). Secondly, we look at the l;m;t (32). Observe that ;f K ;s a bound for H and G on C(JJRn) and 0 < t < r, then
E‖ηx_t − η̃_t‖⁴ ≤ 8K⁴t⁴ + 8K₂t ∫₀ᵗ E‖G(ηx_u)‖⁴ du ≤ K̃(t⁴ + t²), some K₂, K̃ > 0,   (34)

where we have used Theorem (1.8.5) for the Itô integral. Furthermore, if u ∈ [0,1] and 0 < t < r, then

(1/t)|E D²φ(η̃_t + u(ηx_t − η̃_t))(ηx_t − η̃_t, ηx_t − η̃_t) − E D²φ(η)(ηx_t − η̃_t, ηx_t − η̃_t)|
≤ (1/t)E[‖D²φ(η̃_t + u(ηx_t − η̃_t)) − D²φ(η)‖ ‖ηx_t − η̃_t‖²]
≤ [E‖D²φ(η̃_t + u(ηx_t − η̃_t)) − D²φ(η)‖²]^{1/2} [(1/t²)E‖ηx_t − η̃_t‖⁴]^{1/2}
≤ K̃^{1/2}(t² + 1)^{1/2} [E‖D²φ(η̃_t + u(ηx_t − η̃_t)) − D²φ(η)‖²]^{1/2}.   (35)
But D²φ is globally Lipschitz, with Lipschitz constant L say; so

E‖D²φ(η̃_t + u(ηx_t − η̃_t)) − D²φ(η)‖² ≤ L²E(‖η̃_t − η‖ + ‖ηx_t − η̃_t‖)²
≤ 2L²‖η̃_t − η‖² + 2L²[E‖ηx_t − η̃_t‖⁴]^{1/2}
≤ 2L²‖η̃_t − η‖² + 2L²K̃^{1/2} t(t² + 1)^{1/2}   (36)

because of the inequality (34). Letting t → 0+ in (35) and (36), we obtain

lim_{t→0+} (1/t)E D²φ(η̃_t + u(ηx_t − η̃_t))(ηx_t − η̃_t, ηx_t − η̃_t) = lim_{t→0+} (1/t)E D²φ(η)(ηx_t − η̃_t, ηx_t − η̃_t)   (37)

uniformly in u ∈ [0,1].
Hence

lim_{t→0+} (1/t)E R₂(t) = ½ Σ_{j=1}^m D̄²φ(η)(G(η)(e_j)χ_{{0}}, G(η)(e_j)χ_{{0}}).   (38)

Since φ ∈ D(S) and has its first and second derivatives globally bounded on C(J,R^n), it is easy to see that all three terms on the right-hand side of (29) are bounded in t and η. The statement of the theorem now follows by letting t → 0+ in (29) and putting together the results of (30), (33) and (38). □
It will become evident in the sequel that the set of all functions satisfying Conditions (DA) is weakly dense in C_b. Indeed, within the next section we exhibit a concrete weakly dense class of functions in C_b satisfying (DA) and upon which the generator A assumes a definite form.

§4. Action of the Generator on Quasitame Functions

The reader may recall that in the previous section we gave the algebra C_b of all bounded uniformly continuous functions on C(J,R^n) the weak topology induced by the bilinear pairing (φ,μ) ↦ ∫_{C(J,R^n)} φ(η)dμ(η), where φ ∈ C_b and μ runs through all finite regular Borel measures on C(J,R^n). Moreover, the domain of strong continuity C_b^0 of {P_t}_{t≥0} is a weakly dense proper subalgebra of C_b. Our aim here is to construct a concrete class T_q of smooth functions on C(J,R^n), viz. the quasitame functions, with the following properties:

(i) T_q is a subalgebra of C_b^0 which is weakly dense in C_b;
(ii) T_q generates Borel C(J,R^n);
(iii) T_q ⊆ D(A), the domain of the weak generator A of {P_t}_{t≥0};
(iv) for each φ ∈ T_q and η ∈ C(J,R^n), A(φ)(η) is a second-order partial differential expression with coefficients depending on η.
Before doing so, let us first formulate what we mean by a tame function. A mapping between two Banach spaces is said to be C^p-bounded (1 ≤ p ≤ ∞) if it is bounded, C^p and all its derivatives up to order p are globally bounded; e.g. Conditions (DA) imply C²-boundedness, and C³-boundedness implies (DA)(ii), (iii), (iv).

Definition (4.1) (Tame Function): A function φ: C(J,R^n) → R is said to be tame if there is a finite set {s₁, s₂, …, s_k} ⊆ J and a C^∞-bounded function f: (R^n)^k → R such that

φ(η) = f(η(s₁), …, η(s_k)) for all η ∈ C(J,R^n).   (*)
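For intuition, here is a minimal numerical sketch of a tame function (the particular f and all names are hypothetical, not from the text): φ depends on a path only through its values at finitely many points of J, so any two paths agreeing at those points give the same value.

```python
import numpy as np

# A hypothetical tame function phi(eta) = f(eta(s1), eta(s2)) on J = [-1, 0],
# with f smooth and bounded (here f(u, v) = tanh(u) * sin(v)).
r = 1.0
s_points = [-0.75, -0.25]          # the finite set {s_1, s_2} in J

def f(u, v):
    return np.tanh(u) * np.sin(v)  # a C^infinity-bounded function on R^2

def phi(eta):
    """eta: a callable path on J = [-r, 0]; phi only sees eta(s_1), eta(s_2)."""
    return f(eta(s_points[0]), eta(s_points[1]))

# Two different paths that agree at s_1 and s_2 give the same value of phi:
eta1 = lambda s: s ** 2
eta2 = lambda s: s ** 2 + np.sin(np.pi * s / 0.25)  # the sine vanishes at s_1, s_2
print(phi(eta1), phi(eta2))
```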
The above representation of φ is called minimal if for any projection p: (R^n)^k → (R^n)^{k−1} there is no function g: (R^n)^{k−1} → R with f = g ∘ p; in other words, no partial derivative D_j f: (R^n)^k → L(R^n,R), j = 1,…,k, of f vanishes identically. Note that each tame function admits a unique minimal representation. Although the set T of all tame functions on C(J,R^n) is weakly dense in C_b and generates Borel C(J,R^n), it is still not good enough for our purposes, because "most" tame functions lie outside C_b^0 (and hence are automatically not in D(A)). In fact we have

Theorem (4.1):
(i) The set T of all tame functions on C(J,R^n) is a weakly dense subalgebra of C_b, invariant under the shift semigroup {S_t}_{t≥0} and generating Borel C(J,R^n).
(ii) Let φ ∈ T have a minimal representation

φ(η) = f(η(s₁), …, η(s_k)), η ∈ C(J,R^n),

where k ≥ 2. Then φ ∉ C_b^0.

Proof: For simplicity we deal with the case n = 1 throughout.
(i) It is easy to see that T is closed under linear operations. We prove the closure of T under multiplication. Let φ₁, φ₂ ∈ T be represented by φ₁(η) = f₁(η(s₁),…,η(s_k)), φ₂(η) = f₂(η(s′₁),…,η(s′_m)) for all η ∈ C(J,R), where f₁: R^k → R, f₂: R^m → R are C^∞-bounded functions and s₁,…,s_k, s′₁,…,s′_m ∈ J. Define f₁₂: R^{k+m} → R by

f₁₂(x₁,…,x_k, x′₁,…,x′_m) = f₁(x₁,…,x_k) f₂(x′₁,…,x′_m)

for all x₁,…,x_k, x′₁,…,x′_m ∈ R. Clearly f₁₂ is C^∞-bounded and represents φ₁φ₂. Thus φ₁φ₂ ∈ T, and T is a subalgebra of C_b.

It is immediately obvious from the definition of S_t that if φ ∈ T factors through evaluations at s₁,…,s_k ∈ J, then S_t(φ) factors through evaluations at the points (t + s_j) ∧ 0. So T is invariant under S_t for each t ≥ 0.

Next we prove the weak density of T in C_b. Let T₀ be the subalgebra of C_b consisting of all functions φ: C(J,R) → R of the form

φ(η) = f(η(s₁),…,η(s_k)), η ∈ C(J,R),   (1)

where f: R^k → R is bounded and uniformly continuous and s₁,…,s_k ∈ J. Observe first that T is (strongly) dense in T₀ with respect to the supremum norm on C_b. To see this, it is sufficient to prove that if ε > 0 is given and f: R^k → R is any bounded uniformly continuous function on R^k, then there is a C^∞-bounded function g: R^k → R such that |f(x) − g(x)| < ε for all x ∈ R^k. We prove this using a standard smoothing argument via convolution with a C^∞ bump function (Hirsch [32], pp. 45-47). By uniform continuity of f there is a δ > 0 such that |f(x₁) − f(x₂)| < ε whenever |x₁ − x₂| < δ; let h: R^k → R be a non-negative C^∞ bump function supported in B(0,δ) with ∫_{R^k} h(y)dy = 1.
Define g: R^k → R by

g(x) = ∫_{R^k} h(y)f(x−y)dy = ∫_{B(0,δ)} h(y)f(x−y)dy, x ∈ R^k,   (2)

the integral being a Lebesgue one on R^k. By the choice of δ and the properties of h, it follows that

|f(x) − g(x)| = |∫_{R^k} h(y)f(x)dy − ∫_{R^k} h(y)f(x−y)dy| ≤ ε ∫_{R^k} h(y)dy = ε

for all x ∈ R^k. To prove the smoothness of g, use the change of variable y′ = x−y in order to rewrite (2) in the form

g(x) = ∫_{B(x,δ)} h(x−y′)f(y′)dy′ for all x ∈ R^k.   (3)

Now fix x₀ ∈ R^k and note that B(x,δ) ⊆ B(x₀,2δ) whenever x ∈ B(x₀,δ). Therefore

g(x) = ∫_{B(x₀,2δ)} h(x−y′)f(y′)dy′ for all x ∈ B(x₀,δ).   (4)

Since f is continuous and the map x ↦ h(x−y′) is smooth, it follows from (4) that g is C^∞ on B(x₀,δ) and hence on the whole of R^k, because x₀ was chosen arbitrarily. Indeed g and all its derivatives are globally bounded on R^k, for

‖D^p g(x)‖ ≤ ∫_{B(x₀,2δ)} ‖D^p h(x−y′)‖ |f(y′)|dy′ ≤ V sup_{z∈R^k} ‖D^p h(z)‖ sup_{z∈R^k} |f(z)| = N, say,

for all x ∈ R^k, where N is independent of x₀ and V = ∫_{B(0,2δ)} dy′ is the volume of a ball of radius 2δ in R^k.

Secondly, we note that T₀ is weakly dense in C_b. Let Π_k: −r = s₁ < s₂ < s₃ < … < s_k = 0, k = 1,2,…, be a sequence of partitions of J such that mesh Π_k → 0 as k → ∞. Define the continuous linear embedding I_k: R^k → C(J,R) by letting I_k(v₁,v₂,…,v_k) be the piecewise linear path

I_k(v₁,…,v_k)(s) = ((s − s_{j−1})/(s_j − s_{j−1})) v_j + ((s_j − s)/(s_j − s_{j−1})) v_{j−1}, s ∈ [s_{j−1}, s_j],
joining the points v₁, …, v_k ∈ R, j = 2,…,k.
Denote by s̲_k the k-tuple (s₁,…,s_k) ∈ J^k, and by ρ_{s̲_k} the map C(J,R) → R^k, η ↦ (η(s₁),…,η(s_k)). Employing the uniform continuity of each η ∈ C(J,R) on the compact interval J, the reader may easily check that

lim_{k→∞} (I_k ∘ ρ_{s̲_k})(η) = η in C(J,R).   (5)
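The convergence (5) is easy to observe numerically. The following sketch (function names hypothetical) implements ρ_{s̲_k} followed by the piecewise linear embedding I_k on a uniform partition and watches the sup-norm error shrink as the mesh does.

```python
import numpy as np

def project_and_embed(eta, k, r=1.0):
    """Evaluate eta on a uniform partition of J = [-r, 0] (the map rho_{s_k}),
    then interpolate piecewise linearly (the embedding I_k)."""
    s = np.linspace(-r, 0.0, k)            # partition -r = s_1 < ... < s_k = 0
    v = eta(s)                             # rho_{s_k}(eta) in R^k
    return lambda t: np.interp(t, s, v)    # I_k(v): the piecewise linear path

eta = lambda s: np.cos(5.0 * s)            # a uniformly continuous path on J
grid = np.linspace(-1.0, 0.0, 2001)

errors = []
for k in [5, 50, 500]:
    approx = project_and_embed(eta, k)
    errors.append(np.max(np.abs(approx(grid) - eta(grid))))

print(errors)   # sup-norm errors decrease as the mesh tends to 0
```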
Now if φ ∈ C_b, define φ_k: C(J,R) → R by φ_k = φ ∘ I_k ∘ ρ_{s̲_k}, k = 1,2,…. Since φ is bounded and uniformly continuous, so is φ ∘ I_k: R^k → R. Thus each φ_k ∈ T₀ and lim_{k→∞} φ_k(η) = φ(η) for all η ∈ C(J,R), because of (5). Finally, note that ‖φ_k‖_{C_b} ≤ ‖φ‖_{C_b} for all k ≥ 1. Therefore φ = w-lim_k φ_k and T₀ is weakly dense in C_b. From the weak density of T in T₀ and of T₀ in C_b, one concludes that T is weakly dense in C_b.

Borel C(J,R) is generated by the class {φ^{−1}(U): U ⊆ R open, φ ∈ T}. For any finite collection s̲_k = (s₁,…,s_k) ∈ J^k let ρ_{s̲_k}: C(J,R) → R^k be as before. Write each φ ∈ T in the form φ = f ∘ ρ_{s̲_k} for some C^∞-bounded f: R^k → R. It is
It is 101
a: 8
,}~~~r·,~~,0 I I I
. r
1
I I
.a
:.
~~
.e....
+~ ~
.a + i
• .6

~
.aI
~
102
well known that Borel C(J,R) is generated by the cylinder sets {ρ_{s̲_k}^{−1}(U₁×…×U_k): U_i ⊆ R open, i = 1,…,k, s̲_k = (s₁,…,s_k) ∈ J^k, k = 1,2,…} (Parthasarathy [66], pp. 212-213). Moreover, it is quite easy to see that Borel R^k is generated by the class {f^{−1}(U): U ⊆ R open, f: R^k → R C^∞-bounded} (e.g. from the existence of smooth bump functions on R^k). But for each open set U ⊆ R, φ^{−1}(U) = ρ_{s̲_k}^{−1}[f^{−1}(U)]. Therefore it follows directly from the above that Borel C(J,R) is generated by T.

(ii) Let φ ∈ T have a minimal representation φ = f ∘ ρ_{s̲_k} where s̲_k = (s₁,…,s_k) ∈ J^k is such that −r ≤ s₁ < s₂ < … < s_k ≤ 0, f: R^k → R is C^∞-bounded and k ≥ 2. Take 1 ≤ j₀ ≤ k so that −r < s_{j₀} < 0. Since the representation of φ is minimal, there is a k-tuple (x₁,…,x_k) ∈ R^k and a neighbourhood [x_{j₀}−ε₀, x_{j₀}+ε₀] of x_{j₀} in R such that

D_{j₀}f(x₁,…,x_{j₀−1}, x, x_{j₀+1},…,x_k) ≠ 0 for all x ∈ [x_{j₀}−ε₀, x_{j₀}+ε₀], with ε₀ > 0.

Define the function g: [x_{j₀}−ε₀, x_{j₀}+ε₀] → R by

g(x) = f(x₁,…,x_{j₀−1}, x, x_{j₀+1},…,x_k) for all x ∈ [x_{j₀}−ε₀, x_{j₀}+ε₀].

Hence Dg(x) ≠ 0 for all x ∈ [x_{j₀}−ε₀, x_{j₀}+ε₀] and g is a C^∞ diffeomorphism onto its range. Therefore there is a λ > 0 such that

|x′ − x″| ≤ λ|g(x′) − g(x″)| for all x′, x″ ∈ [x_{j₀}−ε₀, x_{j₀}+ε₀].   (6)

Pick δ₀ > 0 so that δ₀ < ε₀ and the intervals (s_j−2δ₀, s_j+2δ₀), j = 1,2,…,k, are mutually disjoint. In the remaining part of the argument we may assume, with no loss of generality, that all integers n are such that 1/n < δ₀. Construct a sequence {η_n} in C(J,R),
viz. η_n piecewise linear, with

η_n(s) = x_j, s_j − δ₀ ≤ s ≤ s_j + δ₀, j ≠ j₀,
η_n(s) = x_{j₀} + ε₀ − nε₀(s − s_{j₀}), s_{j₀} ≤ s ≤ s_{j₀} + 1/n,
η_n(s) = x_{j₀}, s_{j₀} + 1/n ≤ s ≤ s_{j₀} + δ₀,   (7)

η_n vanishing outside the intervals (s_j − 2δ₀, s_j + 2δ₀) and linear in between.
Suppose, if possible, that φ ∈ C_b^0. Then

lim_{t→0+} S_t(φ)(η_n) = lim_{t→0+} f(η_n(t+s₁),…,η_n(t+s_k)) = f(η_n(s₁),…,η_n(s_k))

uniformly in n. But, for j ≠ j₀ and 0 < t < δ₀, η_n(t+s_j) = x_j for all n. Therefore

lim_{t→0+} g(η_n(t+s_{j₀})) = g(η_n(s_{j₀}))

uniformly in n; i.e. for any ε > 0, there is a 0 < δ < δ₀ such that

|g(η_n(t+s_{j₀})) − g(η_n(s_{j₀}))| < ε for all n, all 0 < t < δ.   (8)

Now suppose 0 < t < δ₀. If 1/n ≤ t < δ₀, then η_n(t+s_{j₀}) − x_{j₀} = 0. If 0 < t < 1/n, then |η_n(t+s_{j₀}) − x_{j₀}| = |−nε₀t + x_{j₀} + ε₀ − x_{j₀}| = ε₀(1−nt) < ε₀. Hence η_n(t+s_{j₀}) ∈ [x_{j₀}−ε₀, x_{j₀}+ε₀] for all 0 < t < δ₀. Applying (6) and (8), we get, for 0 < t < δ,

|η_n(t+s_{j₀}) − η_n(s_{j₀})| ≤ λ|g(η_n(t+s_{j₀})) − g(η_n(s_{j₀}))| < λε.
Note that δ is independent of n. In the above inequality, fix t < δ and choose n₀ large enough that 1/n₀ < t. Then s_{j₀} + 1/n₀ < t + s_{j₀} < s_{j₀} + δ₀, so η^{n₀}(t+s_{j₀}) = x_{j₀} and

|η^{n₀}(t+s_{j₀}) − η^{n₀}(s_{j₀})| = |x_{j₀} − (x_{j₀} + ε₀)| = ε₀ < λε,

which clearly contradicts the arbitrary choice of ε. Therefore φ ∉ C_b^0. □
Definition (4.2) (Quasitame Functions): A function φ: C(J,R^n) → R is quasitame if there is an integer k > 0, C^∞-bounded maps f_j: R^n → R^n, 1 ≤ j ≤ k−1, h: (R^n)^k → R, and piecewise C¹ functions g_j: J → R, 1 ≤ j ≤ k−1, such that

φ(η) = h(∫_{−r}^0 f₁(η(s))g₁(s)ds, …, ∫_{−r}^0 f_{k−1}(η(s))g_{k−1}(s)ds, η(0))

for all η ∈ C(J,R^n). Each derivative g_j′ is assumed to be absolutely integrable over J. Denote by T_q the set of all quasitame functions on C(J,R^n).

Theorem (4.2):
(i) T_q ⊆ D(S) ∩ C_b^0. Indeed, if φ = h ∘ m ∈ T_q where

m(η) = (∫_{−r}^0 f₁(η(s))g₁(s)ds, …, ∫_{−r}^0 f_{k−1}(η(s))g_{k−1}(s)ds, η(0))

and h, f_j, g_j, j = 1,…,k−1, are as in Definition (4.2), then

S(φ)(η) = Σ_{j=1}^{k−1} D_j h(m(η)){f_j(η(0))g_j(0) − f_j(η(−r))g_j(−r) − ∫_{−r}^0 f_j(η(s))g_j′(s)ds}

for all η ∈ C(J,R^n). The function h: (R^n)^k → R is considered as one of k n-dimensional variables and the D_j h are the corresponding partial derivatives (j = 1,…,k).
(ii) T_q is invariant under the shift semigroup {S_t}_{t≥0}.
(iii) T_q is a weakly dense subalgebra of C_b generating Borel C(J,R^n).
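For intuition, a quasitame function replaces the point evaluations of a tame function by smoothed integrals against weights g_j, plus the present value η(0). A minimal numerical sketch (all ingredients hypothetical) with k = 2:

```python
import numpy as np

r = 1.0
grid = np.linspace(-r, 0.0, 2001)

# Hypothetical ingredients of a quasitame phi (scalar case n = 1, k = 2):
f1 = np.tanh                           # C^infinity-bounded map R -> R
g1 = lambda s: 1.0 + s                 # piecewise C^1 weight on J = [-1, 0]
h = lambda y, v: np.sin(y) + 0.5 * v   # smooth h on R^2

def trap(vals):
    """Trapezoid-rule quadrature over the grid on J."""
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)))

def phi(eta):
    """phi(eta) = h( integral_{-r}^{0} f1(eta(s)) g1(s) ds , eta(0) )."""
    y = trap(f1(eta(grid)) * g1(grid))  # the smoothed memory term
    return h(y, eta(0.0))

eta = lambda s: np.exp(s)
print(phi(eta))
```

Unlike a tame function, φ here depends continuously on the whole path through the integral, which is why quasitame functions can sit inside the domain of strong continuity.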
Proof: In proving statements (i) and (ii) of the theorem, we shall assume for simplicity that φ = h ∘ m where h: (R^n)² → R is C^∞-bounded and m(η) = (∫_{−r}^0 f(η(s))g(s)ds, η(0)) for some C^∞-bounded map f: R^n → R^n and a (piecewise) C¹ function g: J → R. Let 0 < t < r and consider the expression

(1/t)[φ(η̃_t) − φ(η)] = (1/t)[h(m(η̃_t)) − h(m(η))] = ∫₀¹ Dh(z_t^u){(1/t)[m(η̃_t) − m(η)]}du,   (10)

using the Mean-Value Theorem for h, with z_t^u = (1−u)m(η) + u m(η̃_t), 0 ≤ u ≤ 1. But

(1/t)[m(η̃_t) − m(η)] = (1/t)(∫_{−r}^{−t} f(η(t+s))g(s)ds + ∫_{−t}^0 f(η(0))g(s)ds − ∫_{−r}^0 f(η(s))g(s)ds, 0)
= (∫_{t−r}^0 f(η(s)) (1/t){g(s−t) − g(s)}ds + f(η(0)) (1/t)∫_{−t}^0 g(s)ds − (1/t)∫_{−r}^{t−r} f(η(s))g(s)ds, 0).

Now f is bounded and g is (piecewise) C¹, so all three terms in the above expression are bounded in t and η. Moreover, letting t → 0+ we obtain

lim_{t→0+} (1/t)[m(η̃_t) − m(η)] = (−∫_{−r}^0 f(η(s))g′(s)ds + f(η(0))g(0) − f(η(−r))g(−r), 0).   (11)

As z_t^u is bounded in η and continuous in (t,u), it follows from (10), (11) and the dominated convergence theorem that φ ∈ D(S) and

S(φ)(η) = Dh(m(η)){lim_{t→0+} (1/t)[m(η̃_t) − m(η)]} = D₁h(m(η)){f(η(0))g(0) − f(η(−r))g(−r) − ∫_{−r}^0 f(η(s))g′(s)ds}.
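The formula just derived for S(φ) can be sanity-checked numerically by comparing the difference quotient (φ(η̃_t) − φ(η))/t for small t with the boundary-plus-derivative expression; the sketch below does this for hypothetical choices of f, g, h, where η̃_t denotes the shifted path s ↦ η((t+s) ∧ 0).

```python
import numpy as np

r = 1.0
grid = np.linspace(-r, 0.0, 4001)

f = np.tanh                          # C^infinity-bounded f (hypothetical)
g = lambda s: np.cos(s)              # C^1 weight on J
dg = lambda s: -np.sin(s)            # its derivative
h = lambda y, v: np.sin(y) * np.cos(v)
D1h = lambda y, v: np.cos(y) * np.cos(v)   # partial derivative in the memory slot

eta = lambda s: np.exp(s)

def trap(vals):
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(grid)))

def phi(path):
    return h(trap(f(path(grid)) * g(grid)), path(0.0))

def shifted(path, t):
    # eta_tilde_t(s) = path(t+s) for t+s <= 0, and path(0) otherwise
    return lambda s: path(np.minimum(t + s, 0.0))

t = 1e-4
fd = (phi(shifted(eta, t)) - phi(eta)) / t           # difference quotient
m1 = trap(f(eta(grid)) * g(grid))
exact = D1h(m1, eta(0.0)) * (f(eta(0.0)) * g(0.0)
                             - f(eta(-r)) * g(-r)
                             - trap(f(eta(grid)) * dg(grid)))
print(fd, exact)   # the two numbers should agree closely
```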
For T_q to be invariant under {S_t}_{t≥0}, it is sufficient to prove that it is invariant under S_t, 0 ≤ t ≤ r, because of the semigroup property. So let φ be as above and t ∈ [0,r]. Then

S_t(φ)(η) = h(∫_{t−r}^0 f(η(s))g(s−t)ds + f(η(0)) ∫_{−t}^0 g(s)ds, η(0))

for all η ∈ C(J,R^n). Define g̃: J → R by

g̃(s) = g(s−t), t−r ≤ s ≤ 0; g̃(s) = 0, −r ≤ s < t−r.

Then clearly g̃ is piecewise C¹ with g̃′ absolutely integrable over J. Define also m̃: C(J,R^n) → (R^n)², F: (R^n)² → (R^n)² and h̃: (R^n)² → R by

m̃(η) = (∫_{−r}^0 f(η(s))g̃(s)ds, η(0)) = (∫_{t−r}^0 f(η(s))g(s−t)ds, η(0)),
F(x,y) = (f(y) ∫_{−t}^0 g(s)ds, 0), h̃(x,y) = h[F(x,y) + (x,y)],
for all x, y ∈ R^n, η ∈ C(J,R^n). Since f and h are C^∞-bounded, so are F and hence h̃. Therefore S_t(φ) = h̃ ∘ m̃ ∈ T_q.

We show that T_q is a weakly dense subalgebra of C_b. It is an easy matter to check that T_q is closed under addition and multiplication of functions in C_b. To prove weak density of T_q in C_b, it is sufficient to show that T_q is weakly dense in T ∪ T_q, because the set of tame functions T is already weakly dense in C_b (Theorem (4.1)(i)). Let φ ∈ T have the representation

φ(η) = f(η(s₁),…,η(s_k)), η ∈ C(J,R^n),

where f: (R^n)^k → R is C^∞-bounded and s₁,…,s_k ∈ J. For each 1 ≤ j ≤ k, construct a sequence {g_j^m}_{m=1}^∞ of piecewise linear functions on J (e.g. triangular spikes of unit mass concentrating at s_j) such that ∫_{−r}^0 ψ(s)g_j^m(s)ds → ψ(s_j) as m → ∞ for every continuous ψ on J. Also choose a sequence {θ_m}_{m=1}^∞ of C^∞ bump functions on R^n such that

θ_m(x) = x, |x| ≤ m; θ_m(x) = 0, |x| ≥ m+1,

so that, for each s_j ∈ J and η ∈ C(J,R^n),

η(s_j) = lim_{m→∞} ∫_{−r}^0 θ_m(η(s))g_j^m(s)ds.   (12)

Therefore

φ(η) = lim_{m→∞} f(∫_{−r}^0 θ_m(η(s))g₁^m(s)ds, …, ∫_{−r}^0 θ_m(η(s))g_k^m(s)ds).   (13)

Denoting the expression under the limiting operation in (13) by φ_m(η), it is clear that φ_m ∈ T_q for each m and |φ_m(η)| ≤ sup_x |f(x)| for all η and m. Therefore φ = w-lim_m φ_m and T_q is weakly dense in C_b.

Finally, T_q generates Borel C(J,R^n). Indeed, Borel C(J,R^n) is generated by the cylinder sets through the projections ρ_{s̲_k}: C(J,R^n) → (R^n)^k, s̲_k = (s₁,…,s_k), s_j ∈ J, j = 1,…,k, k = 1,2,….
But this is again generated by sets of the form {η: η(s_j) ∈ C} where C ⊆ R^n is closed. Now each closed set in R^n is a countable intersection of complements of open balls, so we need only prove that if B ⊆ R^n is any open ball and B′ is its complement, then {η: η(s_j) ∈ B′} ∈ σ(T_q) for any s_j ∈ J. Suppose B has radius b > 1 and for each integer p > 0 let U_p be the complement of the concentric closed ball of radius b − 1/p. Let the sequences {θ_m}_{m=1}^∞ and {g_j^m}_{m=1}^∞ be as before, so that (12) is satisfied. Let ψ_m ∈ T_q be given by

ψ_m(η) = ∫_{−r}^0 θ_m(η(s))g_j^m(s)ds, m ≥ 1, η ∈ C(J,R^n).

We contend that

{η: η(s_j) ∈ B′} = ⋂_{p=1}^∞ lim inf_m {η: ψ_m(η) ∈ U_p}.   (14)

To see this, let η(s_j) ∈ B′. Then η(s_j) ∈ U_p for all p ≥ 1. Since η(s_j) = lim_m ψ_m(η), for each p there is an m₀ > 0 such that ψ_m(η) ∈ U_p for all m ≥ m₀. Hence, for each p ≥ 1, η belongs to lim inf_m {η: ψ_m(η) ∈ U_p}; i.e. η belongs to the set on the right-hand side of (14). Conversely, let η be in lim inf_m {η: ψ_m(η) ∈ U_p} for each p ≥ 1. Then for every p ≥ 1, there is an m₀ ≥ 1 such that ψ_m(η) ∈ U_p for all m ≥ m₀. Taking m → ∞ gives η(s_j) ∈ U̅_p for all p ≥ 1. But B′ = ⋂_{p=1}^∞ U̅_p, so η(s_j) ∈ B′. Thus our contention is proved. As the sets {η: ψ_m(η) ∈ U_p} are clearly in σ(T_q), it follows directly from (14) that {η: η(s_j) ∈ B′} ∈ σ(T_q). Therefore σ(T_q) = Borel C(J,R^n). □
The final result in this chapter asserts that every quasitame function is in the domain D(A) of the weak generator A of {P_t}_{t≥0}.

Theorem (4.3): Every quasitame function on C(J,R^n) is in the domain of the weak generator A of {P_t}_{t≥0}. Indeed, if φ ∈ T_q is of the form φ(η) = h(m(η)), η ∈ C(J,R^n), where m, h, f_j, g_j are as in Theorem (4.2), then

A(φ)(η) = Σ_{j=1}^{k−1} D_j h(m(η)){f_j(η(0))g_j(0) − f_j(η(−r))g_j(−r) − ∫_{−r}^0 f_j(η(s))g_j′(s)ds} + D_k h(m(η))(H(η)) + ½ trace[D_k² h(m(η)) ∘ (G(η) × G(η))]

for all η ∈ C(J,R^n). Here again the D_j h denote the partial derivatives of h: (R^n)^k → R considered as a function of k n-dimensional variables.

Proof: To prove that T_q ⊆ D(A), we shall show that each φ = h ∘ m ∈ T_q satisfies Conditions (DA) of §3. First, it is not hard to see that each φ ∈ T_q is C^∞. Also, by applying the Chain Rule and differentiating under the integral sign, one gets
Dφ(η)(ξ) = Dh(m(η))(∫_{−r}^0 Df₁(η(s))(ξ(s))g₁(s)ds, …, ∫_{−r}^0 Df_{k−1}(η(s))(ξ(s))g_{k−1}(s)ds, ξ(0)),

and an analogous formula for D²φ(η)(ξ₁,ξ₂) involving D²h(m(η)) together with terms of the form Dh(m(η))(∫_{−r}^0 D²f₁(η(s))(ξ₁(s),ξ₂(s))g₁(s)ds, …), for all η, ξ, ξ₁, ξ₂ ∈ C(J,R^n). Since all derivatives of h and of the f_j, 1 ≤ j ≤ k−1, are bounded, it is easy to see from these formulae that Dφ and D²φ are bounded on C(J,R^n). By induction it follows that φ is C^∞-bounded. Hence Conditions (DA)(ii), (iii), (iv) are automatically satisfied. Condition (DA)(i) is fulfilled by virtue of Theorem (4.2)(i). From the above two formulae we see easily that the unique weakly continuous extensions D̄φ(η), D̄²φ(η) of Dφ(η) and D²φ(η) to C(J,R^n) ⊕ F_η are given by

D̄φ(η)(vχ_{{0}}) = Dh(m(η))(0,…,0,v) = D_k h(m(η))(v),
D̄²φ(η)(v₁χ_{{0}}, v₂χ_{{0}}) = D²h(m(η))((0,…,0,v₁),(0,…,0,v₂)) = D_k² h(m(η))(v₁,v₂),

for all v, v₁, v₂ ∈ R^n. The given formula for A(φ)(η) now follows directly from Theorem (3.2) and Theorem (4.2). □

Definition (4.3) (Dynkin [16]): Say η⁰ ∈ C(J,R^n) is an absorbing state for the trajectory field {ηx_t: t ≥ 0, η ∈ C(J,R^n)} of the stochastic FDE (I) if

P{ω: ω ∈ Ω, η⁰x_t(ω) = η⁰} = 1 for all t ≥ 0,

where we take a = ∞ here; i.e. p(0,η⁰,t,{η⁰}) = 1 for all t ≥ 0.

The following corollary of Theorem (4.3) gives a necessary condition for η⁰ ∈ C(J,R^n) to be an absorbing state for the trajectory field of the stochastic FDE (I).

Corollary: Let η⁰ ∈ C(J,R^n) be an absorbing state for the trajectory field of the stochastic FDE (I). Then
(i) η⁰(s) = η⁰(0) for all s ∈ J, i.e. η⁰ is constant;
(ii) H(η⁰) = 0 and G(η⁰) = 0.

Proof:
Let η⁰ ∈ C(J,R^n) be an absorbing state for {ηx_t: t ≥ 0, η ∈ C(J,R^n)}. For each t ≥ 0 and s ∈ J, define the F_t-measurable sets

Ω_t = {ω: ω ∈ Ω, η⁰x_t(ω) = η⁰},
Ω_t(s) = {ω: ω ∈ Ω, η⁰x_t(ω)(s) = η⁰(s)}.

Then Ω_t ⊆ Ω_t(s) for all t ≥ 0, s ∈ J, and since P(Ω_t) = 1, it follows that P[Ω_t(s)] = 1 for t ≥ 0, s ∈ J. Suppose, if possible, that there exist s₁, s₂ ∈ J such that η⁰(s₁) ≠ η⁰(s₂). Without loss of generality take −r ≤ s₁ < s₂ ≤ 0. For each ω ∈ Ω_{s₂−s₁}, η⁰x_{s₂−s₁}(ω)(s₁) = η⁰(s₂) ≠ η⁰(s₁), and so Ω_{s₂−s₁}(s₁) ∩ Ω_{s₂−s₁} = ∅. Hence P[Ω_{s₂−s₁}(s₁)] = 0, which contradicts P[Ω_t(s)] = 1,
t ≥ 0, s ∈ J. So η⁰ must be a constant path. This proves (i). To prove that η⁰ satisfies (ii), note that the absorbing state η⁰ must satisfy A(φ)(η⁰) = 0 for all φ ∈ D(A) (Dynkin [16], Lemma (5.3), p. 137). In particular, A(φ)(η⁰) = 0 for every quasitame φ: C(J,R^n) → R. Note first that since η⁰ is constant then so is the map t ↦ η̃⁰_t, and so S(φ)(η⁰) = 0 for every φ ∈ D(S). Take any C^∞-bounded φ̃: R^n → R and define the (quasi)tame function φ: C(J,R^n) → R by φ(η) = φ̃(η(0)) for all η ∈ C(J,R^n). Then by Theorem (4.3), φ ∈ D(A) and

A(φ)(η⁰) = Dφ̃(η⁰(0))(H(η⁰)) + ½ trace[D²φ̃(η⁰(0)) ∘ (G(η⁰) × G(η⁰))] = 0.

The last identity holds for every C^∞-bounded φ̃: R^n → R. Choose such a φ̃ with the property that Dφ̃(η⁰(0)) = 0 and D²φ̃(η⁰(0)) = ⟨·,·⟩, the Euclidean inner product on R^n; e.g. take φ̃ to be of the form φ̃(v) = |v − η⁰(0)|² in some neighbourhood of η⁰(0) in R^n. Then

½ Σ_{j=1}^m |G(η⁰)(e_j)|² = 0

for any basis {e_j}_{j=1}^m of R^m. Therefore G(η⁰) = 0. Thus Dφ̃(η⁰(0))(H(η⁰)) = 0 for every C^∞-bounded φ̃: R^n → R. Now pick any C^∞-bounded φ̃ which agrees with v ↦ ⟨H(η⁰), v⟩ in a neighbourhood of η⁰(0); then |H(η⁰)|² = Dφ̃(η⁰(0))(H(η⁰)) = 0, so H(η⁰) = 0. □
V Regularity of the trajectory field
§1. Introduction
Given a filtered probability space (Ω, F, (F_t)_{0≤t≤a}, P), consider the stochastic FDE

dx(t) = H(x_t)dt + G(x_t)dw(t), 0 ≤ t ≤ a,
x₀ = η ∈ C = C(J,R^n),   (I)

with coefficients H: C → R^n, G: C → L(R^m,R^n) and m-dimensional Brownian motion w. A version of the trajectory field of (I) is a measurable process X: Ω × [0,a] × C → C such that X(·,t,η) = ηx_t a.s. for all t ∈ [0,a] and η ∈ C. In order to deal with the question of the existence of versions of the trajectory field with reasonable sample function behaviour, one needs to isolate two qualitatively different examples of stochastic FDE's. The first of these is when the diffusion coefficient G is independent of the past history, in which case we show that the trajectory field has a version whose sample functions are almost all compactifying (see Theorem (2.1)). In the second example, where G incorporates explicit dependence on the past (e.g. as in stochastic delay equations), we shall see that all versions of the trajectory field are almost surely highly irregular (see §3). Such erratic behaviour is essentially due to the Gaussian-type nature of delayed diffusions coupled with the infinite-dimensionality of the state space C. Under these considerations we are led naturally to investigate regularity in probability of the trajectory field for the general system (I). This is done in §4, where the compactifying nature of the trajectory field is still shown to persist, though in a distributional sense, as in Theorem (4.6). It is perhaps worth noting here that for ordinary stochastic DE's (r = 0), satisfactory results on the sample function behaviour of the trajectory field are known. See Remark (i) of §3.
§2. Stochastic FDE's with Ordinary Diffusion Coefficients

Consider the stochastic FDE

dx(t) = H(x_t)dt + g(x(t))dw(t), 0 ≤ t ≤ a,
x₀ = η ∈ C = C(J,R^n).

Given θ ∈ L²(Ω, C(J,R^n); F₀), J = [−r,0], consider the stochastic delay equation (with finite random delays):

dx(t) = Σ_{j=1}^p h_j(x(t−r_j))dt + Σ_{i=1}^q g_i(x(t−d_i))dz(t), t > 0,
x₀ = θ.   (IV)
The random delays r_j, d_i, j = 1,…,p, i = 1,…,q, in (IV) may not all be essentially bounded away from zero; and so a step-by-step direct integration of (IV) is not in general possible. Nevertheless we can still apply our basic Existence Theorem (II.2.1). First we need to check that (IV) does indeed satisfy Conditions (E) of Chapter II. Define h̃: L²(Ω,C(J,R^n)) → L²(Ω,R^n), g̃: L²(Ω,C(J,R^n)) → L²(Ω,L(R^m,R^n)) by

h̃(ψ)(ω) = Σ_{j=1}^p h_j[ψ(ω)(−r_j(ω))], g̃(ψ)(ω) = Σ_{i=1}^q g_i[ψ(ω)(−d_i(ω))]

for ψ ∈ L²(Ω,C(J,R^n)), a.a. ω ∈ Ω. To see that h̃, g̃ are globally Lipschitz, let L > 0 be a common Lipschitz constant for all the h_j's and g_i's. Then, if ψ₁, ψ₂ ∈ L²(Ω,C(J,R^n)), we have

‖h̃(ψ₁) − h̃(ψ₂)‖²_{L²(Ω,R^n)} ≤ p Σ_{j=1}^p ∫_Ω |h_j[ψ₁(ω)(−r_j(ω))] − h_j[ψ₂(ω)(−r_j(ω))]|² dP(ω)
≤ p L² Σ_{j=1}^p ∫_Ω |ψ₁(ω)(−r_j(ω)) − ψ₂(ω)(−r_j(ω))|² dP(ω)
≤ p² L² ‖ψ₁ − ψ₂‖²_{L²(Ω,C)}.
Similarly, g̃ is Lipschitz with Lipschitz constant qL. It remains now to verify the adaptability condition E(iii) of §(II.1). To do this it is sufficient to show that for each ψ ∈ L²(Ω,C;F_t), h̃(ψ) and g̃(ψ) are F_t-measurable, for t ≥ 0. Let ψ ∈ L²(Ω,C) be F_t-measurable, and ρ: J × C → R^n the evaluation map (s,η) ↦ η(s), s ∈ J, η ∈ C. Then ρ is continuous, and since each r_j is F₀-measurable, it follows that

ω ↦ h̃(ψ)(ω) = Σ_{j=1}^p h_j[ρ(−r_j(ω), ψ(ω))]

is F_t-measurable. Therefore h̃(ψ) ∈ L²(Ω,R^n;F_t); similarly g̃(ψ) ∈ L²(Ω,L(R^m,R^n);F_t). Hence all the conditions of Theorem (II.2.1) are satisfied and so a unique strong solution θx ∈ L²(Ω,C([−r,a],R^n)) of the stochastic DDE (IV) exists with initial process θ. The trajectory {θx_t: a ≥ t ≥ 0} is defined for every a > 0, is (F_t)-adapted and has continuous sample paths. Moreover each map

L²(Ω,C;F₀) → L²(Ω,C;F_t), θ ↦ θx_t, t ≥ 0,

is globally Lipschitz, by Theorem (II.3.1). Now suppose in (IV) that z is m-dimensional Brownian motion w adapted to (F_t)_{0≤t≤a}:

dx(t) = Σ_{j=1}^p h_j(x(t−r_j))dt + Σ_{i=1}^q g_i(x(t−d_i))dw(t), t > 0,
x₀ = η ∈ C(J,R^n),   (V)
with deterministic initial condition η. Note that the coefficients in (V) can also be viewed as F₀ ⊗ Borel C-measurable maps H̃: Ω × C(J,R^n) → R^n, G̃: Ω × C(J,R^n) → L(R^m,R^n), given by

H̃(ω,η) = Σ_{j=1}^p h_j(η(−r_j(ω))), G̃(ω,η) = Σ_{i=1}^q g_i(η(−d_i(ω))),

a.a. ω ∈ Ω, all η ∈ C(J,R^n). So (V) becomes the stochastic RFDE with random coefficients:

dx(t) = H̃(·,x_t)dt + G̃(·,x_t)dw(t), 0 < t ≤ a,
x₀ = η.   (V′)

Because of the randomness in the coefficients of (V′), we cannot apply the results of Chapter III to obtain a Markov property for the trajectory field of (V). However, if the delays r_j, d_i are all independent of the Brownian σ-algebra F_a^w = σ{w(·)(t): 0 ≤ t ≤ a}, more can be said. To illustrate, consider the stochastic DDE with a single random delay in each coefficient:
dx(t) = h(x(t−r₀))dt + g(x(t−d₀))dw(t), 0 < t ≤ a,
x₀ = η ∈ C(J,R^n),   (VI)

where h: R^n → R^n, g: R^n → L(R^m,R^n) are Lipschitz maps.

Lemma (3.1):
Suppose the delays r₀, d₀ ∈ L^∞(Ω,R^{≥0};F₀) are simple functions with respect to the delay σ-algebra F^D = σ{r₀, d₀} ⊆ F₀, i.e. there are 0 ≤ r^{m′}, d^{m′} ≤ r, m′ = 1,…,k, and {Ω_{m′}}_{m′=1}^k ⊆ F^D such that

r₀ = Σ_{m′=1}^k r^{m′} χ_{Ω_{m′}}, d₀ = Σ_{m′=1}^k d^{m′} χ_{Ω_{m′}}.

For each m′ = 1,…,k, let ηx^{m′}: Ω → C([−r,a],R^n) be the unique solution of

dx^{m′}(t) = h(x^{m′}(t − r^{m′}))dt + g(x^{m′}(t − d^{m′}))dw(t), 0 < t ≤ a,
x₀^{m′} = η.   (VI(m′))

Then x(t) ≡ Σ_{m′=1}^k ηx^{m′}(t)χ_{Ω_{m′}}, −r ≤ t ≤ a, gives the unique solution of (VI) in L²(Ω, C([−r,a],R^n)) starting at η ∈ C(J,R^n) and adapted to (F_t)_{0≤t≤a}.
Proof: By the definition of x,

x_t(·) = Σ_{m′=1}^k ηx_t^{m′}(·) χ_{Ω_{m′}}(·) for all t ≥ 0.

Then x₀ = Σ_{m′=1}^k η χ_{Ω_{m′}}(·) = η. Also, since r₀, d₀ are simple functions,

h(x(t−r₀)) = Σ_{m′=1}^k h(ηx^{m′}(t−r^{m′})) χ_{Ω_{m′}}, g(x(t−d₀)) = Σ_{m′=1}^k g(ηx^{m′}(t−d^{m′})) χ_{Ω_{m′}},

and so by Theorem (I.8.3)(i) for the stochastic integral one has

x(t) = Σ_{m′=1}^k χ_{Ω_{m′}} {η(0) + ∫₀ᵗ h(ηx^{m′}(u−r^{m′}))du + ∫₀ᵗ g(ηx^{m′}(u−d^{m′}))dw(u)}, 0 < t ≤ a,
x(t) = η(t), t ∈ J,

= η(0) + ∫₀ᵗ h(x(u−r₀))du + ∫₀ᵗ g(x(u−d₀))dw(u), 0 < t ≤ a, and x(t) = η(t), t ∈ J.

As each ηx^{m′} is (F_t)_{0≤t≤a}-adapted, so is x. Hence x is the unique solution of (VI). □
For a.a. ω′ ∈ Ω, let ηx^{ω′} ∈ L²(Ω,C([−r,a],R^n)) be the unique solution, adapted to (F_t^w)_{0≤t≤a}, of

dx(t) = h(x(t−r₀(ω′)))dt + g(x(t−d₀(ω′)))dw(t), 0 < t ≤ a,
x₀ = η,

with fixed delays r₀(ω′), d₀(ω′) ∈ [0,r]. The trajectory fields {ηx_t^{ω′}: t ∈ [0,a], η ∈ C, ω′ ∈ Ω} therefore correspond to a random family {X^{ω′}: ω′ ∈ Ω} of continuous time-homogeneous Feller processes on C(J,R^n) adapted to (F_t^w)_{0≤t≤a}, with transition probabilities

p(ω′,0,η,t,B) = Σ_{m′=1}^k p^{m′}(0,η,t,B) χ_{Ω_{m′}}(ω′), a.a. ω′ ∈ Ω,

and

∫_Ω p(ω′,0,η,t,B)dP(ω′) = p(0,η,t,B) ≡ P{ω: ω ∈ Ω, ηx_t(ω) ∈ B}.

Note that for each B ∈ Borel C, the map (ω′,η,t) ↦ p(ω′,0,η,t,B) is F₀ ⊗ Borel C ⊗ Borel [0,a]-measurable.

Proof:
By Lemma (3.1), the trajectory field of (VI) is given by

ηx_t(ω) = Σ_{m′=1}^k ηx_t^{m′}(ω) χ_{Ω_{m′}}(ω), a.a. ω ∈ Ω, t ∈ [0,a], η ∈ C.

The Ω_{m′} form a partition of Ω and are all independent of the ηx_t^{m′}. Let ω′ ∈ Ω. Then there is an m′ such that ω′ ∈ Ω_{m′}. Hence ηx^{ω′} = ηx^{m′}, r₀(ω′) = r^{m′} and d₀(ω′) = d^{m′}, and so

p(ω′,0,η,t,B) = P{ω: ηx_t^{ω′}(ω) ∈ B} = P{ω: ω ∈ Ω, ηx_t^{m′}(ω) ∈ B} = p^{m′}(0,η,t,B)

for every B ∈ Borel C. Thus

p(ω′,0,η,t,B) = Σ_{m′=1}^k p^{m′}(0,η,t,B) χ_{Ω_{m′}}(ω′)

for all ω′ ∈ Ω. Since each p^{m′}(0,·,·,B) is Borel C ⊗ Borel [0,a]-measurable (Theorem (III.1.1)), it is clear that p(·,0,·,·,B) is F₀ ⊗ Borel C ⊗ Borel [0,a]-measurable. Also, by independence of χ_{Ω_{m′}} and ηx_t^{m′} for t ∈ [0,a], we get

∫_Ω p(ω′,0,η,t,B)dP(ω′) = Σ_{m′=1}^k p^{m′}(0,η,t,B) P(Ω_{m′}) = ∫_Ω χ_B(ηx_t(ω))dP(ω) = P{ω: ω ∈ Ω, ηx_t(ω) ∈ B}

for every B ∈ Borel C. □
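The averaging identity ∫_Ω p(ω′,0,η,t,B)dP(ω′) = p(0,η,t,B) can be illustrated by Monte Carlo: drawing the delay first and then the Brownian path gives the same law for x(t) as mixing the fixed-delay laws with weights P(Ω_{m′}). A hypothetical scalar sketch (all coefficients and parameters invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-2, 100                      # a = 1, Euler-Maruyama grid
delays = [0.1, 0.3]                    # atoms r^{m'} of the simple delay r_0
probs = [0.4, 0.6]                     # weights P(Omega_{m'})

def terminal_value(r_delay, rng):
    """One Euler-Maruyama sample of x(1) for dx = -x(t - r)dt + 0.5 dw, eta = 1."""
    lag = int(round(r_delay / dt))
    x = list(np.ones(lag + 1))         # constant initial path on J
    for i in range(lag, lag + n):
        x.append(x[i] - x[i - lag] * dt + 0.5 * rng.normal(0, np.sqrt(dt)))
    return x[-1]

# Joint sampling: draw the delay, then the path (the trajectory field of (VI)).
joint = [terminal_value(rng.choice(delays, p=probs), rng) for _ in range(4000)]

# Mixture sampling: fixed-delay laws mixed with the weights P(Omega_{m'}).
per_delay = {r: [terminal_value(r, rng) for _ in range(4000)] for r in delays}
mixed_mean = sum(p * np.mean(per_delay[r]) for r, p in zip(delays, probs))

print(np.mean(joint), mixed_mean)      # means agree up to Monte Carlo error
```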
The following is a general approximation lemma for stochastic DDE's:

Lemma (3.3): Suppose the coefficients h, g in (VI) are globally bounded and Lipschitz. Let {r_k}_{k=1}^∞, {d_k}_{k=1}^∞ be increasing sequences in L^∞(Ω,R^{≥0};F^D) such that r_k → r₀ and d_k → d₀ as k → ∞ in L²(Ω,R^{≥0};F^D). Let ηx^k and ηx^{k,ω′} denote the solutions of the stochastic DDE's

dx^k(t) = h(x^k(t−r_k))dt + g(x^k(t−d_k))dw(t), 0 < t ≤ a,
x₀^k = η,   (VI_k)

and

dx^{k,ω′}(t) = h(x^{k,ω′}(t−r_k(ω′)))dt + g(x^{k,ω′}(t−d_k(ω′)))dw(t), 0 < t ≤ a,
x₀^{k,ω′} = η,

respectively, for k = 1,2,3,…, a.a. ω′ ∈ Ω. Then the map Ω ∋ ω′ ↦ ηx^{ω′} ∈ L²(Ω,C([−r,a],R^n)) is in L^∞(Ω, L²(Ω,C([−r,a],R^n)); F^D), and

lim_{k→∞} ∫_{ω′∈Ω} {∫_{ω∈Ω} ‖ηx^{k,ω′}(ω) − ηx^{ω′}(ω)‖²_C dP(ω)} dP(ω′) = 0.

Remark: For each η ∈ C(J,R^n), the random family {ηx^{ω′}: ω′ ∈ Ω} has a unique version ηx̄ ∈ L²(Ω × Ω, C([−r,a],R^n); F^D ⊗ F_a) such that for a.a. ω′ ∈ Ω, ηx̄(ω′,·) = ηx^{ω′}(·) a.s.

Proof: Suppose h and g are bounded by a common bound K > 0 and have a common Lipschitz constant L > 0. Let r_k → r₀, d_k → d₀ as k → ∞ in L²(Ω,R^{≥0};F₀). Fix η ∈ C(J,R^n) and take any integer k > 0. Consider

ηx^k(t) − ηx(t) = ∫₀ᵗ {h(ηx^k(u−r_k)) − h(ηx(u−r₀))}du + ∫₀ᵗ {g(ηx^k(u−d_k)) − g(ηx(u−d₀))}dw(u), 0 < t ≤ a,
ηx^k(t) − ηx(t) = 0, t ∈ J.
By Doob's inequality, we have, for any t ∈ [0,a],

E sup_{0≤v≤t} |ηx^k(v) − ηx(v)|² ≤ 4E sup_{0≤v≤t} |∫₀^v {h(ηx^k(u−r_k)) − h(ηx(u−r_k))}du|²
+ 4E sup_{0≤v≤t} |∫₀^v {h(ηx(u−r_k)) − h(ηx(u−r₀))}du|²
+ 4K₁ ∫₀ᵗ E‖g(ηx^k(u−d_k)) − g(ηx(u−d_k))‖² du
+ 4K₁ ∫₀ᵗ E‖g(ηx(u−d_k)) − g(ηx(u−d₀))‖² du, some K₁ > 0,

where ‖A‖ denotes the operator norm sup{|A(e)|: e ∈ R^m, |e| ≤ 1} for A ∈ L(R^m,R^n). Therefore, by the Lipschitz condition, one gets

E sup_{−r≤v≤t} |ηx^k(v) − ηx(v)|² ≤ 4aL² ∫₀ᵗ E|ηx^k(u−r_k) − ηx(u−r_k)|²du + 4aL² ∫₀ᵗ E|ηx(u−r_k) − ηx(u−r₀)|²du + 4K₁L² ∫₀ᵗ E|ηx^k(u−d_k) − ηx(u−d_k)|²du + 4K₁L² ∫₀ᵗ E|ηx(u−d_k) − ηx(u−d₀)|²du   (1)

for t ∈ [0,a]. Clearly,

E|ηx^k(u−r_k) − ηx(u−r_k)|² ≤ E sup_{−r≤v≤u} |ηx^k(v) − ηx(v)|²   (2)

and

E|ηx^k(u−d_k) − ηx(u−d_k)|² ≤ E sup_{−r≤v≤u} |ηx^k(v) − ηx(v)|²   (3)
for all u ∈ [0,a]. We must estimate the second integrand E|ηx(u−r_k) − ηx(u−r₀)|² in (1) for sufficiently large k, uniformly in u ∈ [0,a]. To do this, let ε > 0 and note that by uniform continuity of η on J there is a 0 < δ < 1, δ = δ(ε,η), such that |η(s₁) − η(s₂)|² < ε whenever s₁, s₂ ∈ J, |s₁ − s₂| < δ. Let ε′ = δ²ε; by convergence of {r_k}_{k=1}^∞ in L²(Ω,R^{≥0};F₀) there exists k₀ = k₀(ε,δ) such that E|r_k − r₀|² < ε′ = δ²ε for all k ≥ k₀. Suppose k ≥ k₀ and for any u ∈ [0,a] denote by χ_{(r₀<u)}, χ_{(r₀≥u)}, χ_{(r_k<u)}, χ_{(r_k≥u)} the characteristic functions of the sets {ω: ω ∈ Ω, r₀(ω) < u}, {ω: ω ∈ Ω, r₀(ω) ≥ u}, {ω: ω ∈ Ω, r_k(ω) < u}, {ω: ω ∈ Ω, r_k(ω) ≥ u} in F₀, and by χ_A, χ_{B_k} those of A = {(ω,v): ω ∈ Ω, v ∈ [0,a], v + r₀(ω) ≥ u}, B_k = {(ω,v): ω ∈ Ω, v ∈ [0,a], v + r_k(ω) < u} in F₀ ⊗ Borel [0,a], respectively. Using Theorem (I.8.3) for the stochastic integral, we may write
ηx(u−r_k) − ηx(u−r₀)
= ∫₀^a χ_{(r_k<u)} χ_{B_k}(·,v) χ_A(·,v) h(ηx(v−r₀))dv
+ ∫₀^a χ_{(r_k<u)} χ_{B_k}(·,v) χ_A(·,v) g(ηx(v−d₀))dw(v)
+ [η(0) − η(u−r₀)] χ_{(r_k<u)} χ_{(r₀≥u)}
+ [η(u−r_k) − η(u−r₀)] χ_{(r_k≥u)};

in particular, the difference reduces to η(0) − η(u−r₀) plus the integral terms when r_k < u and r₀ ≥ u. Therefore, using the bounds |h| ≤ K, ‖g‖ ≤ K,

E|ηx(u−r_k) − ηx(u−r₀)|² ≤ 12K²(a + K₁) E|r₀ − r_k|² + 6E[|η(0) − η(u−r₀)|² χ_{(r_k<u)} χ_{(r₀≥u)}] + 6E[|η(u−r_k) − η(u−r₀)|² χ_{(r_k≥u)}].   (4)

Now

E[|η(0) − η(u−r₀)|² χ_{(r_k<u)} χ_{(r₀≥u)}]
= ∫_{|u−r₀|<δ} |η(0) − η(u−r₀)|² χ_{(r_k<u)} χ_{(r₀≥u)} dP + ∫_{|u−r₀|≥δ} |η(0) − η(u−r₀)|² χ_{(r_k<u)} χ_{(r₀≥u)} dP
≤ ε + 4‖η‖²_C P{ω: ω ∈ Ω, r₀(ω) − r_k(ω) > δ}
≤ ε + (4/δ²)‖η‖²_C E|r₀ − r_k|² ≤ ε + 4‖η‖²_C ε,   (5)

because k ≥ k₀. Similarly, for k ≥ k₀, we get

E[|η(u−r_k) − η(u−r₀)|² χ_{(r_k≥u)}]
= ∫_{|r₀−r_k|<δ} |η(u−r_k) − η(u−r₀)|² χ_{(r_k≥u)} dP + ∫_{|r₀−r_k|≥δ} |η(u−r_k) − η(u−r₀)|² χ_{(r_k≥u)} dP
≤ ε + (4/δ²)‖η‖²_C E|r₀ − r_k|² ≤ (1 + 4‖η‖²_C)ε.   (6)

Combining (4), (5) and (6), one gets

E|ηx(u−r_k) − ηx(u−r₀)|² ≤ 12K²(a+K₁)E|r₀−r_k|² + 12(1 + 4‖η‖²_C)ε
≤ 12K²(a+K₁)δ²ε + 12(1 + 4‖η‖²_C)ε
≤ 12[K²(a+K₁) + 1 + 4‖η‖²_C]ε   (7)

for all u ∈ [0,a], k ≥ k₀. Note that k₀ is independent of u ∈ [0,a]. A similar argument applied to the last integrand in (1) yields, for every ε > 0, a k₀ > 0 such that

E|ηx(u−d_k) − ηx(u−d₀)|² ≤ 12[K²(a+K₁) + 1 + 4‖η‖²_C]ε   (8)

for all u ∈ [0,a], k ≥ k₀. Now put together the inequalities (1), (2), (3), (7) and (8) to obtain

E sup_{−r≤v≤t} |ηx^k(v) − ηx(v)|² ≤ (4aL² + 4K₁L²) ∫₀ᵗ E sup_{−r≤v≤u} |ηx^k(v) − ηx(v)|² du + 12a(4aL² + 4K₁L²)[K²(a+K₁) + 1 + 4‖η‖²_C]ε   (9)

for k ≥ k₀ and for all t ∈ [0,a]. By Gronwall's lemma, it follows that, if k ≥ k₀, then

E sup_{−r≤v≤t} |ηx^k(v) − ηx(v)|² ≤ 48aL²(a+K₁)[K²(a+K₁) + 1 + 4‖η‖²_C]ε e^{4L²(a+K₁)t}

for all t ∈ [0,a]. In particular, take t = a to get

∫_Ω ‖ηx^k(ω) − ηx(ω)‖²_C dP(ω) ≤ 48aL²(a+K₁)[K²(a+K₁) + 1 + 4‖η‖²_C]ε e^{4L²(a+K₁)a}

for all k ≥ k₀; i.e. {ηx^k}_{k=1}^∞ converges to ηx in L²(Ω, C([−r,a],R^n)).
To demonstrate the F0measurability of the map n 3 w' + nxw' E £ 2 (n,C([r,a]JRn)) assume for the moment that the delays rk' dk are simple ones of the form nk ".. r m' ,k Xnm ' , k , rk m'=1 for some nk > 0, 0 < rm',k, dm',k < r, nm',k E F0 , 1 < m' < nk, k = 1,2,3, ••• Denote by nxm',k E £ 2(n,C([r,a]JRn)) the unique solution of the stochastic DOE
k xm' 0 ' =n
with fixed (deterministic) delays r m' ' k , dm' ' k , for each m' = 1, ••• ,nk, k = 1,2,3, •••• Clearly the approximations {nxk,w' :w' E S'2} are then given by PO)
for a.a. w, w' En, k = 1,2,3, •••• Since the simple delays can always be chosen so that rk < rk+ 1 , dk < dk+ 1 for all k > 1 and rk(w') + r 0{w'), dk(w') + d0(w') as k +~for a.a. w' En, it follows from what has already 177.
been proved that n>f' = Hm nxk,w' in ico,C([r,a],Rn)) for a.a. w•
€
o.
k+co
But (10) implies that the map Ω ∋ ω' ↦ ηx^{k,ω'} ∈ L²(Ω,C([−r,a],ℝⁿ)) is F₀-measurable and Ω × Ω ∋ (ω',ω) ↦ ηx^{k,ω'}(ω) ∈ C([−r,a],ℝⁿ) is F₀ ⊗ F_a-measurable for each k ≥ 1. By the Stricker-Yor Lemma (Theorem (1.5.1)) the random family {ηx^{ω'} : ω' ∈ Ω} admits an F₀ ⊗ F_a-measurable version ηx : Ω × Ω → C([−r,a],ℝⁿ) such that, for a.a. ω' ∈ Ω, ηx(ω',·) = ηx^{ω'}(·) a.s. Moreover the map Ω ∋ ω' ↦ ηx^{ω'} ∈ L²(Ω,C([−r,a],ℝⁿ)) is F₀-measurable and is in fact essentially bounded because, for a.a. ω' ∈ Ω,

    E sup_{−r≤t≤a} |ηx^{ω'}(t)|² ≤ 3‖η‖²_C + 3E sup_{0≤t≤a} |∫₀ᵗ h(ηx^{ω'}(u − r(ω')))du|²
        + 3E sup_{0≤t≤a} |∫₀ᵗ g(ηx^{ω'}(u − d(ω')))dw(u)|²
    ≤ 3‖η‖²_C + 3aK₂ + 3K₁aK₂ = 3(‖η‖²_C + aK₂ + K₁aK₂).

Therefore, by Theorem 17 (p. 198) of Dunford and Schwartz [15], the measurable version ηx of {ηx^{ω'} : ω' ∈ Ω} is uniquely determined up to a set of P × P-measure zero in Ω × Ω.

Now assume once more that {r^k}_{k=1}^∞, {d^k}_{k=1}^∞ are our original increasing sequences in L^∞(Ω,ℝ^{≥0};F₀) converging to r⁰, d⁰ in L². The reader may use a very similar argument to the one used in deriving (9) to prove that for every ε > 0 there is a k₀ = k₀(ε,η) > 0 such that

    ∫_{ω'∈Ω} E sup_{−r≤v≤t} |ηx^{k,ω'}(v) − ηx^{ω'}(v)|² dP(ω')
    ≤ (4aL² + 4K₁L²) ∫₀ᵗ {∫_{ω'∈Ω} E sup_{−r≤v≤u} |ηx^{k,ω'}(v) − ηx^{ω'}(v)|² dP(ω')} du
        + 12a(4aL² + 4K₁L²)[K₂(a+K₁) + 1 + 4‖η‖²_C]ε

for all k ≥ k₀. So, by Gronwall's lemma,

    ∫_{ω'∈Ω} E sup_{−r≤v≤t} |ηx^{k,ω'}(v) − ηx^{ω'}(v)|² dP(ω') ≤ 48aL²(a+K₁)[K₂(a+K₁) + 1 + 4‖η‖²_C] ε e^{4L²(a+K₁)t}

for all t ∈ [0,a]. In particular, the final assertion of the lemma holds. □
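The fixed-delay equations through which the approximations ηx^{m',k} above are defined are straightforward to simulate. The following is a minimal Euler-Maruyama sketch for a scalar equation dx(t) = h(x(t−r))dt + g(x(t−d))dw(t) with constant delays; the coefficients, initial path and step size are illustrative choices of ours, not taken from the text.

```python
import numpy as np

def euler_maruyama_sdde(h, g, eta, r, d, a=1.0, dt=1e-3, rng=None):
    """Euler-Maruyama scheme for dx = h(x(t-r))dt + g(x(t-d))dw(t) on [0, a],
    with initial path x = eta on [-max(r, d), 0]."""
    rng = rng or np.random.default_rng(0)
    lag = max(r, d)
    n_hist = int(round(lag / dt))          # grid points covering the history
    n = int(round(a / dt))                 # time steps on [0, a]
    x = np.empty(n_hist + n + 1)
    for i in range(n_hist + 1):            # seed the history segment with eta
        x[i] = eta(-lag + i * dt)
    kr, kd = int(round(r / dt)), int(round(d / dt))
    for i in range(n_hist, n_hist + n):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over one step
        x[i + 1] = x[i] + h(x[i - kr]) * dt + g(x[i - kd]) * dw
    return x

# usage: linear drift with delay 0.5, multiplicative noise with delay 0.25
path = euler_maruyama_sdde(h=lambda v: -v, g=lambda v: 0.5 * v,
                           eta=lambda s: 1.0, r=0.5, d=0.25)
```

With g ≡ 0 the scheme reduces to an Euler method for the deterministic delay equation, which is a convenient sanity check.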
We can now prove the following theorem, which says that the distribution of the trajectory field {ηx_t : t ∈ [0,a], η ∈ C} of the stochastic DDE (VI) is the expectation of the associated family of transition probabilities {p(ω',0,η,t,·) : ω' ∈ Ω, η ∈ C, t ∈ [0,a]} of (VI)(ω').

Theorem (3.1): In (VI) suppose the coefficients h, g are globally Lipschitz on ℝⁿ. Let the delays r⁰, d⁰ ∈ L^∞(Ω,ℝ^{≥0}) be independent of F_a^w = σ{w(·)(t) : 0 ≤ t ≤ a}. Then

    P{ω : ω ∈ Ω, ηx_t(ω) ∈ B} = ∫_Ω p(ω',0,η,t,B)dP(ω')

for every η ∈ C(J,ℝⁿ), t ∈ [0,a] and B ∈ Borel C. Furthermore, the trajectory field is a time-homogeneous continuous process on C(J,ℝⁿ), viz.

    P ∘ (T^{t₁}_{t₂}(η))⁻¹ = P ∘ (T⁰_{t₂−t₁}(η))⁻¹, 0 ≤ t₁ ≤ t₂ ≤ a,

where T^{t₁}_{t₂}(η) = y_{t₂} is the segment at time t₂ of the solution y of (VI) with y_{t₁} = η.
Proof: We first prove the theorem under the additional assumption that the coefficients h, g are globally bounded and Lipschitz. Choose increasing sequences of simple delays {r^k}_{k=1}^∞, {d^k}_{k=1}^∞ independent of F_a^w and converging almost surely to r⁰, d⁰ respectively. Use the notation of Lemma (3.3) to denote by ηx the version in L²(Ω × Ω, C([−r,a],ℝⁿ); F₀ ⊗ F_a) of the random family {ηx^{ω'} : ω' ∈ Ω} of solutions to (VI)(ω'), ω' ∈ Ω. Let {ηx^k}_{k=1}^∞, {ηx^{k,ω'}}_{k=1}^∞ be as in Lemma (3.3), with ηx^k ∈ L²(Ω × Ω, C([−r,a],ℝⁿ); F₀ ⊗ F_a) and ηx^k(ω',ω) = ηx^{k,ω'}(ω) for a.a. ω,ω' ∈ Ω and each k = 1,2,3,…. Because of Lemma (3.3), one can select a subsequence {ηx^{k'}}_{k'=1}^∞ of the L²-convergent sequence {ηx^k}_{k=1}^∞ such that ηx^{k'}(ω',ω) = ηx^{k',ω'}(ω) → ηx(ω',ω) as k' → ∞ for a.a. ω,ω' ∈ Ω. Note also that the subsequence {ηx^{k'}}_{k'=1}^∞ converges in L² to ηx as k' → ∞. In view of Lemma (3.1), write ηx_t^{k'}, ηx_t^{k',ω'} in the form

    ηx_t^{k'}(ω) = Σ_{m'=1}^{n_{k'}} ηx_t^{m',k'}(ω) χ_{Ω^{m',k'}}(ω),    (11)

    ηx_t^{k',ω'}(ω) = Σ_{m'=1}^{n_{k'}} ηx_t^{m',k'}(ω) χ_{Ω^{m',k'}}(ω'),

for a.a. ω,ω' ∈ Ω, t ∈ [0,a], where Ω^{m',k'} ∈ F₀, r^{k'} = Σ_{m'=1}^{n_{k'}} r^{m',k'} χ_{Ω^{m',k'}}, d^{k'} = Σ_{m'=1}^{n_{k'}} d^{m',k'} χ_{Ω^{m',k'}}, and

    dx^{m',k'}(t) = h(x^{m',k'}(t − r^{m',k'}))dt + g(x^{m',k'}(t − d^{m',k'}))dw(t), 0 ≤ t ≤ a,    (12)
    x₀^{m',k'} = η,

for 1 ≤ m' ≤ n_{k'}, k' = 1,2,…. If φ ∈ C_b is any bounded (uniformly) continuous real-valued function on C(J,ℝⁿ), consider

    ∫_{ω∈Ω} φ(ηx_t^{m',k'}(ω)) χ_{Ω^{m',k'}}(ω)dP(ω) = P(Ω^{m',k'}) ∫_{ω∈Ω} φ(ηx_t^{m',k'}(ω))dP(ω),

due to the independence of ηx_t^{m',k'} and χ_{Ω^{m',k'}}, for each k' ≥ 1, 1 ≤ m' ≤ n_{k'}. Summing over m' and using (11) gives

    ∫_{ω∈Ω} φ(ηx_t^{k'}(ω))dP(ω) = ∫_{ω'∈Ω} ∫_{ω∈Ω} φ(ηx_t^{k',ω'}(ω))dP(ω)dP(ω')    (13)

(cf. proof of Theorem (1.1)). Hence

    ∫_{ω∈Ω} φ(ηx_t(ω))dP(ω) = lim_{k'→∞} ∫_{ω'∈Ω} ∫_{ω∈Ω} φ(ηx_t^{k',ω'}(ω))dP(ω)dP(ω')
        = ∫_{ω'∈Ω} {∫_{ω∈Ω} φ(ηx_t^{ω'}(ω))dP(ω)}dP(ω')    (14)

for every φ ∈ C_b, η ∈ C(J,ℝⁿ), t ∈ [0,a].
Now let B ⊂ C(J,ℝⁿ) be open. Use a uniformly continuous partition of unity in C(J,ℝⁿ) to construct a sequence {φ_l}_{l=1}^∞ ⊂ C_b such that φ_l(η) → χ_B(η) as l → ∞ for each η ∈ C(J,ℝⁿ) and |φ_l(η)| ≤ 1 for all η ∈ C(J,ℝⁿ), l ≥ 1 (cf. proof of Theorem (III.1.1)). Put φ = φ_l in (14) and pass to the limit as l → ∞ via the dominated convergence theorem, thus obtaining

    P{ω : ω ∈ Ω, ηx_t(ω) ∈ B} = ∫_{ω'∈Ω} P{ω : ω ∈ Ω, ηx_t(ω',ω) ∈ B}dP(ω')
        = ∫_{ω'∈Ω} P{ω : ω ∈ Ω, ηx_t^{ω'}(ω) ∈ B}dP(ω')
        = ∫_{ω'∈Ω} p(ω',0,η,t,B)dP(ω').    (15)

Since this holds for every open set B in C(J,ℝⁿ) and Borel C is generated by all open sets, it follows by uniqueness of measure-theoretic extensions that (15) must be true for all B ∈ Borel C. Hence the first assertion of the theorem.

Suppose now that the coefficients h, g are Lipschitz but not necessarily globally bounded on ℝⁿ. We maintain that (14) still holds in this case. The idea of the proof is to approximate the stochastic DDE's (VI), (VI)(ω') by sequences of stochastic DDE's whose coefficients are globally bounded and Lipschitz. For each integer N > 0 and v ∈ ℝⁿ, set

    h_N(v) = h(v) if |v| ≤ N;  h(v)(2 − |v|/N) if N < |v| ≤ 2N;  0 if |v| > 2N,
    g_N(v) = g(v) if |v| ≤ N;  g(v)(2 − |v|/N) if N < |v| ≤ 2N;  0 if |v| > 2N.
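The cutoff above is easy to realise numerically. The following sketch (the helper name `truncate` and the test function v³ are our own illustrative choices) returns the truncated coefficient h_N for a given h and N.

```python
import numpy as np

def truncate(f, N):
    """Radial cutoff f_N of f: equals f on |v| <= N, tapers by the factor
    (2 - |v|/N) on N < |v| <= 2N, and vanishes for |v| > 2N."""
    def f_N(v):
        m = np.linalg.norm(np.atleast_1d(v))
        if m <= N:
            return f(v)
        if m <= 2 * N:
            return f(v) * (2.0 - m / N)
        return np.zeros_like(f(v))
    return f_N

h = lambda v: v ** 3          # locally Lipschitz, superlinear growth
h_5 = truncate(h, 5.0)        # globally bounded, agrees with h on |v| <= 5
```

Note that h_N agrees with h_{N'} on the ball of radius N whenever N' ≥ N, which mirrors the agreement of the truncated trajectories used below.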
Then clearly h_N, g_N are globally bounded on ℝⁿ by

    sup_{|v|≤2N} |h(v)|,    sup_{|v|≤2N} |g(v)|,

and are also Lipschitz, with the same Lipschitz constant L as that of h, g. For each N > 0, let ηx^N ∈ L²(Ω,C([−r,a],ℝⁿ)) and ηx^{N,ω'}, a.a. ω' ∈ Ω, be the unique solutions of the stochastic DDE's

    dx^N(t) = h_N(x^N(t − r⁰))dt + g_N(x^N(t − d⁰))dw(t), 0 ≤ t ≤ a,    (VI)_N
    x₀^N = η ∈ C,

    dx^{N,ω'}(t) = h_N(x^{N,ω'}(t − r⁰(ω')))dt + g_N(x^{N,ω'}(t − d⁰(ω')))dw(t), 0 ≤ t ≤ a,    (VI)_N(ω')
    x₀^{N,ω'} = η.
View (VI)_N and (VI)_N(ω') as stochastic FDE's with random coefficients H_N : Ω × C → ℝⁿ, G_N : Ω × C → L(ℝᵐ,ℝⁿ), viz.

    dx^N(t) = H_N(·,x_t^N)dt + G_N(·,x_t^N)dw(t), 0 ≤ t ≤ a,
    x₀^N = η,

    dx^{N,ω'}(t) = H_N(ω',x_t^{N,ω'})dt + G_N(ω',x_t^{N,ω'})dw(t), 0 ≤ t ≤ a,
    x₀^{N,ω'} = η,

where H_N(ω,ξ) = h_N(ξ(−r⁰(ω))), G_N(ω,ξ) = g_N(ξ(−d⁰(ω))), for a.a. ω ∈ Ω and all ξ ∈ C. It is easy to see that H_N(ω,·), G_N(ω,·) are globally Lipschitz uniformly in ω, and there is a K > 0 (independent of ω, ξ, N) such that

    |H_N(ω,ξ)|² + ‖G_N(ω,ξ)‖² ≤ K(1 + ‖ξ‖²_C)

for a.a. ω ∈ Ω, all ξ ∈ C. So H_N, G_N satisfy all the conditions of Theorem (V.4.2). Indeed, if B_N = {ξ : ξ ∈ C, ‖ξ‖_C ≤ N} for each N > 0, then H_N(ω,·)|B_N = H_{N'}(ω,·)|B_N and G_N(ω,·)|B_N = G_{N'}(ω,·)|B_N if N' ≥ N, a.a. ω ∈ Ω; so the trajectories {ηx_t^N}_{t≥0}, {ηx_t^{N'}}_{t≥0} agree up to the time they leave the ball B_N. Using the uniform estimates

    E‖ηx_t^N‖²_C ≤ C*(1 + ‖η‖²_C),    E‖ηx_t^{N,ω'}‖²_C ≤ C*(1 + ‖η‖²_C)

for all N > 0, all t ∈ [0,a], and a.a. ω' ∈ Ω, it is not hard to see that ηx^N → ηx as N → ∞ a.s. in L²(Ω,C([−r,a],ℝⁿ)). Similarly, for a.a. ω' ∈ Ω, ηx^{N,ω'} → ηx^{ω'} as N → ∞ a.s. in L²(Ω,C([−r,a],ℝⁿ)). Therefore, if φ ∈ C_b, we have by the dominated convergence theorem

    ∫_{ω∈Ω} φ(ηx_t(ω))dP(ω) = lim_{N→∞} ∫_{ω∈Ω} φ(ηx_t^N(ω))dP(ω)

and

    ∫_{ω∈Ω} φ(ηx_t^{ω'}(ω))dP(ω) = lim_{N→∞} ∫_{ω∈Ω} φ(ηx_t^{N,ω'}(ω))dP(ω), a.a. ω' ∈ Ω.
By the remark following Lemma (3.3), the random family {ηx^{N,ω'} : ω' ∈ Ω} has a version in L²(Ω × Ω, C([−r,a],ℝⁿ); F₀ ⊗ F_a) for each N > 0; so each map

    ω' ↦ ∫_{ω∈Ω} φ(ηx_t^{N,ω'}(ω))dP(ω)

is F₀-measurable, by Tonelli's Theorem (Dunford and Schwartz [15], p. 194). Hence the map ω' ↦ ∫_{ω∈Ω} φ(ηx_t^{ω'}(ω))dP(ω) is also F₀-measurable. Therefore, we can once more apply the dominated convergence theorem to get

    ∫_{ω'∈Ω} ∫_{ω∈Ω} φ(ηx_t^{ω'}(ω))dP(ω)dP(ω') = lim_{N→∞} ∫_{ω'∈Ω} ∫_{ω∈Ω} φ(ηx_t^{N,ω'}(ω))dP(ω)dP(ω').

Now, by the first part of the proof,

    ∫_{ω∈Ω} φ(ηx_t^N(ω))dP(ω) = ∫_{ω'∈Ω} ∫_{ω∈Ω} φ(ηx_t^{N,ω'}(ω))dP(ω)dP(ω')
for each N > 0; so (14) holds by letting N → ∞ in the last equality. The relation (15) then follows as before. To prove time-homogeneity, note that by the above result and Theorem (III.2.1) we have

    (P ∘ T^{t₁}_{t₂}(η)⁻¹)(B) = ∫_{ω'∈Ω} p(ω',t₁,η,t₂,B)dP(ω')
        = ∫_{ω'∈Ω} p(ω',0,η,t₂−t₁,B)dP(ω')
        = (P ∘ T_{t₂−t₁}(η)⁻¹)(B)

for every B ∈ Borel C, 0 ≤ t₁ ≤ t₂ ≤ a, η ∈ C. This completes the proof of the theorem. □
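The first assertion of the theorem lends itself to a simulation check: averaging a functional of the trajectory of (VI), with the delay drawn afresh for each sample path, should agree with first conditioning on the delay ω' and then averaging the fixed-delay means. A rough Monte Carlo sketch follows; the coefficients, the delay distribution and the sample sizes are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
dt, a = 1e-2, 1.0
h, g = (lambda v: -v), (lambda v: 0.3)   # illustrative globally Lipschitz coefficients

def x_at_a(r, rng):
    """One Euler-Maruyama path of dx = h(x(t-r))dt + g(x(t-r))dw(t),
    x = 1 on [-r, 0]; returns x(a)."""
    k = max(int(round(r / dt)), 1)
    x = [1.0] * (k + 1)
    for i in range(k, k + int(round(a / dt))):
        x.append(x[i] + h(x[i - k]) * dt + g(x[i - k]) * rng.normal(0.0, np.sqrt(dt)))
    return x[-1]

# left-hand side: the delay r0 is drawn independently for every path of (VI)
lhs = np.mean([x_at_a(rng.uniform(0.1, 0.5), rng) for _ in range(2000)])
# right-hand side: average, over sampled delays w', of the fixed-delay means
rhs = np.mean([np.mean([x_at_a(r, rng) for _ in range(40)])
               for r in rng.uniform(0.1, 0.5, size=50)])
```

Up to Monte Carlo error, `lhs` and `rhs` estimate the same quantity, in line with (15); the independence of the sampled delay from the driving noise is essential here, exactly as in the hypotheses of the theorem.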
As a consequence of the above theorem one can show that the trajectory field of the stochastic DDE (V) (or (VI)) possesses all the distributional regularity properties of §(V.4). In particular, we get

Corollary (3.1.1): If the coefficients h_j, g_i, j = 1,…,p, i = 1,…,q in (V) are Lipschitz, then the trajectory field of (V) satisfies all the conclusions of Theorems (VI.3.1), (V.4.4), (V.4.6), (V.4.7).

Proof: These results can in fact be proved using the methods of Chapter V. However, they follow more or less directly from the identity (15) of the theorem. □

Definition (3.1): The trajectory field {ηx_t : t ≥ 0, η ∈ C} of a stochastic FDE is asymptotically stochastically stable if lim_{t→∞} P∘(ηx_t)⁻¹ exists in the space M_P(C) of all probability measures on C.

The next corollary says that in order for the stochastic DDE (V) to be asymptotically stochastically stable it is sufficient to verify this property for every stochastic DDE with fixed delays of the type

    dx(t) = Σ_{j=1}^p h_j(x(t − r_j))dt + Σ_{i=1}^q g_i(x(t − d_i))dw(t), t > 0,    (VII)
    x₀ = η ∈ C,

and arbitrary {r_j}_{j=1}^p, {d_i}_{i=1}^q ⊂ [0,r]. A similar result for the case g_i = 0, i = 1,…,q, h_j linear, j = 1,…,p, has been established by Lidskii ([48], 1965). Note however that Lidskii allows the delays r_j to be time-dependent, Markovian in general.

Corollary (3.1.2): Suppose that for every choice of fixed deterministic delays r_j, d_i, j = 1,…,p, i = 1,…,q the system of stochastic DDE's (VII) is asymptotically stochastically stable. Then so is the stochastic DDE (V) for all random delays r_j, d_i ∈ L^∞(Ω,[0,r];F₀), j = 1,…,p, i = 1,…,q, which are independent of the Brownian σ-algebra F^w = ∨_{t≥0} F_t^w.

Proof: Take r_j = r_j(ω'), d_i = d_i(ω') for a.a. ω' ∈ Ω in (VII).
Then by hypothesis, lim_{t→∞} p(ω',0,η,t,·) = μ(ω'), a.a. ω' ∈ Ω, where {μ(ω') : ω' ∈ Ω} is a random family of probability measures on C. If {ηx_t : t ≥ 0, η ∈ C} is the trajectory field of (V), then the identity

    {P∘(ηx_t)⁻¹}(B) = ∫_{ω'∈Ω} p(ω',0,η,t,B)dP(ω'), B ∈ Borel C,

the dominated convergence theorem, and Theorem (I.2.2)(v) imply that

    lim_{t→∞} P∘(ηx_t)⁻¹ = μ = ∫_{ω'∈Ω} μ(ω')dP(ω') in M_P(C). □
Remarks
(i) In our next section we shall give some sufficient conditions, in the linear case, under which the system with fixed delays (VII) becomes asymptotically stochastically stable.
(ii) We conjecture that the first assertion of Theorem (3.1) also holds if in (V) we allow the delays r_j, d_i, j = 1,…,p, i = 1,…,q to be continuous stochastic processes, each independent of w.
(iii) Although the trajectory field {ηx_t : t ≥ 0, η ∈ C} of (V) may not in general be isonomous to a Markov process on C, we can still apply the results of Chapter IV to obtain the random family of weak infinitesimal generators A^{ω'} : D(A^{ω'}) ⊂ C_b → C_b associated with the family of semigroups {P_t^{ω'}}_{t≥0} defined by the systems

    dx(t) = Σ_{j=1}^p h_j(x(t − r_j(ω')))dt + Σ_{i=1}^q g_i(x(t − d_i(ω')))dw(t), t > 0,    (V(ω'))
    x₀ = η ∈ C,

for a.a. ω' ∈ Ω. Using the notation of Chapter IV and the results of Theorems (IV.3.2), (IV.4.2), we observe that when h_j, g_i are globally bounded the weakly dense subalgebra T_q of all quasi-tame functions in C_b is determined independently of the choice of the random parameter ω' in V(ω'). In particular, T_q ⊂ D(A^{ω'}) for a.a. ω' ∈ Ω. So suppose φ ∈ T_q and, using functional calculus, write formally P_t^{ω'}(φ) = e^{tA^{ω'}}(φ), a.a. ω' ∈ Ω, t ≥ 0. Then by (14) of the proof of Theorem (3.1) we get

    ∫_{ω∈Ω} φ(ηx_t(ω))dP(ω) = ∫_{ω'∈Ω} P_t^{ω'}(φ)(η)dP(ω') = ∫_{ω'∈Ω} e^{tA^{ω'}}(φ)(η)dP(ω'),

where A^{ω'} is the weak infinitesimal generator of {P_t^{ω'}}_{t≥0}. Recall that S is the weak infinitesimal generator of the shift semigroup {S_t}_{t≥0} and {e_k}_{k=1}^m is any basis for ℝᵐ. To get the distribution P{ω : ω ∈ Ω, ηx_t(ω) ∈ B} of the trajectory field of (V), approximate each characteristic function χ_B weakly by a directed family {φ_l} in T_q; thus

    P{ω : ω ∈ Ω, ηx_t(ω) ∈ B} = lim_l ∫_{ω'∈Ω} e^{tA^{ω'}}(φ_l)(η)dP(ω').
As our final topic in this section, we now turn to the question of selecting a measurable version of the trajectory field of the stochastic DDE (V). If the coefficients h_j, g_i are all Lipschitz, it is easy to check that Remark (V.4.3)(ii) is fulfilled, and so the trajectory fields {ηx_t : t ∈ [0,a], η ∈ C}, {ηx_t : t ∈ [−r,a], η ∈ C} admit measurable versions Ω × [0,a] × C → C, Ω × [−r,a] × C → C, for any 0 < a < ∞. Note here that we do not require the delay σ-algebra F₀ to be independent of F_a^w.

Alternatively, and in relation to our Remark (V.3.1)(iii) on delayed diffusions, it is perhaps instructive to indicate here how a measurable version may be obtained for the trajectory field of the one-dimensional quadratic delay equation

    dx(t) = [x(t−1)]²dw(t), 0 < t < 1,    (VIII)
    x₀ = η ∈ C = C([−1,0],ℝ).

Note that in this case r = 1, m = n = 1 and the diffusion coefficient g : ℝ → ℝ, g(v) = v², is clearly not globally Lipschitz on ℝ. In fact g is locally Lipschitz but does not satisfy a linear growth condition, so none of the regularity theorems in §(V.4) can be directly applied to the stochastic DDE (VIII). Nevertheless, we proceed to find a measurable version of the trajectory field {ηx_t : t ∈ [0,1], η ∈ C} as follows. It is sufficient to find a measurable version for the field {ηx_t − η̃_t : t ∈ [0,1], η ∈ C}, where

    η̃_t(s) = η(0) for s ∈ (−t,0], η̃_t(s) = η(t+s) for s ∈ (−1,−t], t ∈ (0,1],

as in §(IV.2). For each η ∈ C define the sequence {η^k}_{k=1}^∞ of piecewise linear approximations η^k ∈ C([−1,0],ℝ) by

    η^k(s) = [k(s+1) − j]η(−1 + (j+1)/k) + [j + 1 − k(s+1)]η(−1 + j/k)
    if s ∈ [−1 + j/k, −1 + (j+1)/k], j = 0,1,…,k−1, k = 1,2,3,….

[Figure: graph of the piecewise linear approximation η^k of η, with nodes at −1 + j/k.]

Clearly η^k → η as k → ∞ in C, and each η^k is piecewise C¹ and hence of bounded variation on [−1,0]. Define the processes Y^k : Ω × [0,1] × C → C, k = 1,2,…, by setting

    Y^k(ω,t,η)(s) = [η^k(t+s−1)]² w(ω)(t+s) − [η^k(−1)]² w(ω)(0)
        − 2∫₀^{t+s} w(ω)(u) η^k(u−1)(η^k)'(u−1)du, s ∈ [−t,0],
    Y^k(ω,t,η)(s) = 0, s ∈ [−1,−t),

for all t ∈ [0,1], η ∈ C and all ω ∈ Ω₀, where Ω₀ is a set of full P-measure such that w(ω) ∈ C([0,1],ℝ) for each ω ∈ Ω₀. Therefore for all ω ∈ Ω₀, t ∈ [0,1], η ∈ C it is clear that Y^k(ω,t,η) ∈ C for every k ≥ 1. Moreover, each process Y^k is (F ⊗ Borel [0,1] ⊗ Borel C, Borel C)-measurable. To see
this, note that, since Borel C is generated by evaluations at s ∈ [−1,0], it is sufficient to check that for any fixed s ∈ [−1,0] the map (ω,t,η) ↦ Y^k(ω,t,η)(s) is measurable. Now for each k, (η^k)'(u−1) = kη(−1 + (j+1)/k) − kη(−1 + j/k) for u ∈ (j/k, (j+1)/k), so, with j₀ such that t₀ ∈ [j₀/k, (j₀+1)/k),

    ∫₀^{t₀} w(ω)(u)η^k(u−1)(η^k)'(u−1)du
    = Σ_{j=0}^{j₀−1} k[η(−1 + (j+1)/k) − η(−1 + j/k)] ∫_{j/k}^{(j+1)/k} w(ω)(u)η^k(u−1)du
        + k[η(−1 + (j₀+1)/k) − η(−1 + j₀/k)] ∫_{j₀/k}^{t₀} w(ω)(u)η^k(u−1)du.

But the maps η ↦ η(−1 + j/k), (u,η) ↦ η^k(u−1) are continuous and (ω,u) ↦ w(ω)(u) is measurable, so the integrals

    (ω,η) ↦ ∫_{j/k}^{(j+1)/k} w(ω)(u)η^k(u−1)du,    (ω,η) ↦ ∫_{j₀/k}^{t₀} w(ω)(u)η^k(u−1)du

depend measurably on the pair (ω,η). From the preceding equality, it follows that the map

    (ω,η) ↦ ∫₀^{t₀} w(ω)(u)η^k(u−1)(η^k)'(u−1)du

is (F ⊗ Borel C)-measurable for every t₀ ∈ [0,1]. Since this indefinite integral is continuous in t for each ω ∈ Ω₀, η ∈ C, it is easy to see that the map

    (ω,t,η) ↦ ∫₀ᵗ w(ω)(u)η^k(u−1)(η^k)'(u−1)du

is (F ⊗ Borel [0,1] ⊗ Borel C)-measurable. From the definition of Y^k, it follows that (ω,t,η) ↦ Y^k(ω,t,η)(s) is measurable for each s ∈ [−1,0]. Thus Y^k is (F ⊗ Borel [0,1] ⊗ Borel C, Borel C)-measurable. Moreover, using integration by parts (Elworthy [19] p. 79), one has a.s.

    Y^k(·,t,η)(s) = ∫₀^{t+s} [η^k(u−1)]²dw(u), s ∈ [−t,0];    Y^k(·,t,η)(s) = 0, s ∈ [−1,−t),

t ∈ [0,1], η ∈ C, a.s. Since η^k(s) → η(s) as k → ∞ uniformly in s ∈ [−1,0], it follows easily from Doob's inequality for the stochastic integral (Theorem (1.8.5)) that Y^k(·,t,η) → ηx_t − η̃_t as k → ∞ in L²(Ω,C). Hence by the Stricker-Yor lemma for C-valued mappings we get a measurable version Y : Ω × [0,1] × C → C for the field {ηx_t − η̃_t : t ∈ [0,1], η ∈ C}. A very similar argument to the above gives a measurable version for the trajectory field of the one-dimensional polynomial delay equation dx(t)
= [x(t−1)]^l dw(t), 0 < t < 1,    (IX)
    x₀ = η ∈ C = C([−1,0],ℝ),

where l ∈ ℤ^{>0} is a positive integer. Furthermore, if l is odd, then by the method of §(V.3) it is not hard to see that the trajectory field does not admit a locally bounded (or continuous) version Ω × [0,1] × C → C. In particular, when l = 1, every measurable version X : Ω × [0,1] × C → C for the trajectory field of

    dx(t) = x(t−1)dw(t), 0 < t < 1,

is a.s. non-linear on C; i.e. for every t ∈ [0,1] the set

    Ω_L^t = {ω : ω ∈ Ω, X(ω,t,·) : C → C is linear}

is contained in a set of P-measure zero in F. For simplicity, take t = 1 and assume that the probability space (Ω,F,P) is complete; so it is sufficient to prove that P(Ω_L^1) = 0. To do this, let ω₀ ∈ Ω_L^1. From the measurability of X, X(ω₀,1,·)(0) : C → ℝ is measurable linear, and by a theorem of Douady (Theorem (1.4.5)) it must be continuous. So ω₀ must belong to the set {ω : ω ∈ Ω, X(ω,1,·)(0) : C → ℝ is continuous}, which is of P-measure zero by §(V.3). By completeness of P, Ω_L^1 ∈ F and P(Ω_L^1) = 0. Note, however, that linearity in probability (or in distribution) always holds in this case; i.e. for any η¹, η² ∈ C, u, v ∈ ℝ, t ∈ [0,1], we have

    ^{uη¹+vη²}x_t = u·(^{η¹}x_t) + v·(^{η²}x_t) a.s.
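This linearity in probability can be seen directly in simulation: for l = 1 an Euler scheme driven by a fixed Brownian path is exactly linear in the initial path, so solving for η¹, η² and uη¹ + vη² with the same noise gives identical results. A sketch, with grid size and initial paths our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200                      # grid points on [0,1]; the history lives on [-1,0]
dt = 1.0 / n
dw = rng.normal(0.0, np.sqrt(dt), size=n)

def solve(eta):
    """Euler scheme for dx = x(t-1) dw(t) on [0,1], x_0 = eta on [-1,0].

    eta is sampled on the grid -1, -1+dt, ..., 0 (n+1 points); returns the
    solution segment on [0,1]."""
    x = list(eta)
    for i in range(n):
        x.append(x[-1] + x[i] * dw[i])   # delayed value x(t_i - 1) = x[i]
    return np.array(x[n:])

s = np.linspace(-1.0, 0.0, n + 1)
e1, e2 = np.cos(s), s ** 2
u, v = 2.0, -3.0
lhs = solve(u * e1 + v * e2)             # solve with the combined initial path
rhs = u * solve(e1) + v * solve(e2)      # combine the two solutions afterwards
```

Here `lhs` and `rhs` agree path by path (up to floating-point rounding), which is the pathwise-uniqueness mechanism behind the distributional linearity; the non-linearity discussed above concerns measurable versions, not individual solution paths.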
Indeed this last result is valid for linear stochastic FDE's

    dx(t) = H(x_t)dt + G(x_t)dw(t), t > 0,
    x₀ = η,

with coefficients H ∈ L(C,ℝⁿ), G ∈ L(C,L(ℝᵐ,ℝⁿ)), due to (pathwise) uniqueness of solutions (Theorem (II.2.1)). It is interesting to observe here that for stochastic linear ODE's on Euclidean space ℝⁿ,

    dx(t) = h(x(t))dt + g(x(t))dw(t), t > 0,    (X)
    x(0) = v ∈ ℝⁿ, h ∈ L(ℝⁿ), g ∈ L(ℝⁿ,L(ℝᵐ,ℝⁿ)),

the trajectory field {vx(t) : t ≥ 0, v ∈ ℝⁿ} possesses a measurable version X : Ω × ℝ^{≥0} × ℝⁿ → ℝⁿ which is a.s. linear on ℝⁿ; i.e. for a.a. ω ∈ Ω, all t ≥ 0, X(ω,t,·) ∈ L(ℝⁿ). This follows from the easily verifiable fact that, for a measurable field Ω × ℝ^{≥0} × ℝⁿ → ℝⁿ, linearity in probability is equivalent to almost sure linearity. For the simple one-dimensional linear stochastic ODE

    dx(t) = x(t)dt + cx(t)dw(t), t > 0,
    x(0) = v ∈ ℝ,

Itô's formula shows that the process X(ω,t,v) = v e^{(1 − ½c²)t + c[w(ω)(t) − w(ω)(0)]}, a.a. ω ∈ Ω, t ≥ 0, v ∈ ℝ, gives a measurable version of the trajectory field which is a.s. linear in the third variable v ∈ ℝ. More generally, a measurable version for the trajectory field of the linear system (X) can be constructed by solving the associated fundamental matrix equation, e.g. as in Arnold ([2], pp. 141-144).
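The explicit solution just quoted can be checked numerically: an Euler-Maruyama approximation of dx(t) = x(t)dt + cx(t)dw(t) on a fixed Brownian path converges to v e^{(1−c²/2)t + cw(t)}, and the scheme itself is exactly linear in v. A sketch (the step size, c and the initial values are our choices):

```python
import numpy as np

rng = np.random.default_rng(7)
c, n = 0.4, 20000
dt = 1.0 / n
dw = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments on [0, 1]

def em(v):
    """Euler-Maruyama for dx = x dt + c x dw(t) on [0,1], x(0) = v."""
    x = v
    for d in dw:
        x += x * dt + c * x * d
    return x

# closed form from Ito's formula: X(1, v) = v exp((1 - c^2/2) + c w(1))
closed = 2.0 * np.exp((1.0 - 0.5 * c * c) + c * dw.sum())
```

The approximation `em(2.0)` lies close to `closed`, and `em` scales exactly with its argument, illustrating the a.s. linearity of the version in v.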
§4. Linear FDE's Forced by White Noise

As before, we take (Ω,F,(F_t)_{t∈ℝ},P) to be a filtered probability space satisfying the usual conditions. Note that here we require the filtration (F_t)_{t∈ℝ} to be parametrized by all time, with an m-dimensional standard Brownian motion w : Ω → C(ℝ,ℝᵐ) adapted to it. Let H : C = C(J,ℝⁿ) → ℝⁿ be a continuous linear map and g : ℝ → L(ℝᵐ,ℝⁿ) be measurable such that ‖g(·)‖ is locally integrable over ℝ, where ‖·‖ is the operator norm on L(ℝᵐ,ℝⁿ). Consider the forced linear system

    dx(t) = H(x_t)dt + g(t)dw(t), t > 0,    (XI)
    x₀ = η ∈ C,

as opposed to the unforced deterministic linear RFDE

    dy(t) = H(y_t)dt,    (XII)
    y₀ = η ∈ C.

The dynamics of (XII) is well understood via the fundamental work of J. Hale ([26], Chapter 7). In particular, the state space C splits in the form

    C = U ⊕ S.    (1)

The subspace U is finite-dimensional, S is closed, and the splitting is invariant under the semigroup T_t : C → C, t ≥ 0, T_t(η) = y_t for all η ∈ C, t ≥ 0 (Hale [26] pp. 168-173, cf. Mohammed [57] pp. 94-104). According to Hale [26], the subspace U is constructed by using the generalized eigenspaces corresponding to eigenvalues with non-negative real parts of the infinitesimal generator A_H of {T_t}_{t≥0}.
For convenience, identify the spaces L(ℝᵐ,ℝⁿ), L(ℝⁿ) with the corresponding spaces of n × m and n × n real matrices. From the Riesz representation theorem, there is a (unique) L(ℝⁿ)-valued measure μ on J such that

    H(ξ) = ∫_{−r}^0 ξ(s)dμ(s)    for all ξ ∈ C(J,ℝⁿ).

It is therefore possible to extend H to the Banach space C̃ = C̃(J,ℝⁿ) of all bounded measurable maps J → ℝⁿ, given the supremum norm. We denote this extension also by H. Solving the linear FDE (XII) for initial data in C̃, we can extend the semigroup {T_t}_{t≥0} to one on C̃, denoted also by T_t : C̃ → C̃, t ≥ 0. The splitting (1) of C is topological, so the projections π^U : C → U, π^S : C → S are continuous linear maps. Since dim U < ∞, π^U has a representation by an L(ℝⁿ,U)-valued measure ρ on J, viz. π^U(ξ) = ∫_{−r}^0 ξ(s)dρ(s). This formula gives a natural extension to a continuous linear map π^U : C̃ → U. Defining S̃ = {ξ : ξ ∈ C̃, π^U(ξ) = 0}, we see that C̃ has a topological splitting

    C̃ = U ⊕ S̃.    (2)

The projection π^S̃ : C̃ → S̃ is continuous linear, being given by π^S̃(ξ) = ξ − π^U(ξ) for all ξ ∈ C̃. When ξ ∈ C̃, denote π^U(ξ) and π^S̃(ξ) by ξ^U and ξ^S̃ respectively. The following lemma shows that the splitting (2) is invariant under the semigroup {T_t}_{t≥0}.

Lemma (4.1): For each ξ ∈ C̃ and t ≥ 0, we have

    [T_t(ξ)]^U = T_t(ξ^U),    [T_t(ξ)]^S̃ = T_t(ξ^S̃).

Proof: For ξ ∈ C, the result follows directly from the well-known invariance of the splitting (1) under {T_t}_{t≥0}. To prove it for ξ ∈ C̃, consider the following definition of weak continuity for linear operators on C̃. A linear operator B : C̃ → C̃ is weakly continuous if, whenever {ξ^k}_{k=1}^∞ is a uniformly bounded sequence in C̃ with ξ^k(s) → 0 as k → ∞ for each s ∈ J, then B(ξ^k)(s) → 0 as k → ∞ for each s ∈ J (cf. the 'weak continuity property (w₁)' of Lemma (IV.3.1)).
The Riesz representation theorem implies that every continuous linear map C → U has a unique weakly continuous extension C̃ → U. Hence, for the first assertion of the lemma to hold, it is enough to show that π^U ∘ T_t and T_t ∘ π^U are both weakly continuous for all t ≥ 0. It is clear from the definition of π^U : C̃ → U that it is weakly continuous. As the composition of weakly continuous linear operators on C̃ is also weakly continuous, it remains to show that each T_t : C̃ → C̃ is weakly continuous. This is so by the following lemma. The second assertion of the lemma follows from the first because

    [T_t(ξ)]^S̃ = T_t(ξ) − [T_t(ξ)]^U = T_t(ξ) − T_t(ξ^U) = T_t(ξ − ξ^U) = T_t(ξ^S̃), t ≥ 0, ξ ∈ C̃. □
Lemma (4.2): For each t ≥ 0, T_t : C̃ → C̃ is weakly continuous.

Proof: Let ν(μ) be the total variation measure of the L(ℝⁿ)-valued measure μ on J representing H. Fix ξ ∈ C̃ and define ξ_f : ℝ^{≥0} → ℝ by

    ξ_f(t) = ∫_{−r}^0 |T_t(ξ)(s)|dν(μ)(s) + |T_t(ξ)(0)|, t ≥ 0.

Now

    T_t(ξ)(s) = ξ(0) + ∫₀^{t+s} H(T_u(ξ))du if t+s ≥ 0;    T_t(ξ)(s) = ξ(t+s) if −r ≤ t+s < 0.

Thus

    ξ_f(t) = ∫_{−r}^{max(−r,−t)} |ξ(t+s)|dν(μ)(s) + ∫_{max(−r,−t)}^0 |ξ(0) + ∫₀^{t+s} H(T_u(ξ))du| dν(μ)(s)
        + |ξ(0) + ∫₀ᵗ H(T_u(ξ))du|
    ≤ ∫_{−r}^{max(−r,−t)} |ξ(t+s)|dν(μ)(s) + [ν(μ)(J) + 1](|ξ(0)| + ∫₀ᵗ |H(T_u(ξ))|du)
    ≤ ξ_h(t) + [ν(μ)(J) + 1](∫₀ᵗ |T_u(ξ)(0)|du + ∫₀ᵗ ∫_{−r}^0 |T_u(ξ)(s)|dν(μ)(s)du)
    = ξ_h(t) + C ∫₀ᵗ ξ_f(u)du,

where

    ξ_h(t) = ∫_{−r}^{max(−r,−t)} |ξ(t+s)|dν(μ)(s) + [ν(μ)(J) + 1]|ξ(0)|, t ≥ 0,

and C = ν(μ)(J) + 1; here we used |H(T_u(ξ))| ≤ ∫_{−r}^0 |T_u(ξ)(s)|dν(μ)(s). By Gronwall's lemma, we obtain

    ξ_f(t) ≤ ξ_h(t) + C ∫₀ᵗ ξ_h(u)e^{C(t−u)}du for all t ≥ 0.

Now let {ξ^k}_{k=1}^∞ be a uniformly bounded sequence in C̃ converging pointwise to 0; then the sequence {ξ^k_h}_{k=1}^∞ is uniformly bounded on [0,t] and ξ^k_h(t) → 0 as k → ∞ for each t ≥ 0, by the dominated convergence theorem. The last estimate then implies, again by dominated convergence, that ξ^k_f(t) → 0 as k → ∞ for each t ≥ 0. In particular, T_t(ξ^k)(0) → 0 as k → ∞ for every t ≥ 0. But for each s ∈ J,

    T_t(ξ^k)(s) = ξ^k(t+s) if −r ≤ t+s < 0;    T_t(ξ^k)(s) = T_{t+s}(ξ^k)(0) if t+s ≥ 0, k ≥ 1,

so T_t(ξ^k)(s) → 0 as k → ∞ for each s ∈ J, t ≥ 0. □
By analogy with deterministic forced linear FDE's, our first objective is to derive a stochastic variation of parameters formula for the forced linear system (XI). The main idea is to look for a stochastic interpretation of the deterministic variation of parameters formula corresponding to non-homogeneous linear systems

    dy(t) = H(y_t)dt + g(t)dt, t > 0,    (XIII)
    y₀ = η ∈ C

(cf. Hale [26] pp. 143-147, Hale and Meyer [29]). To start with, we require some notation. Denote by Δ : J → L(ℝⁿ) the map Δ = χ_{{0}} I, where I ∈ L(ℝⁿ) is the identity n × n matrix. Also, for any linear map B : C(J,ℝⁿ) → C(J,ℝⁿ) and any A ∈ C(J,L(ℝᵐ,ℝⁿ)), let

    BA = (B(a₁), B(a₂), …, B(a_m)) ∈ C(J,L(ℝᵐ,ℝⁿ)),

where A(s) = (a₁(s), a₂(s), …, a_m(s)), s ∈ J, and a_j(s) is the j-th column of the n × m matrix A(s) for each 1 ≤ j ≤ m, s ∈ J. Thus each a_j ∈ C(J,ℝⁿ) and BA ∈ C(J,L(ℝᵐ,ℝⁿ)). If F : [a,b] → C(J,L(ℝᵐ,ℝⁿ)) is a map, define the stochastic integral ∫_a^b F(t)dw(t) by

    [∫_a^b F(t)dw(t)](s) = ∫_a^b F(t)(s)dw(t), s ∈ J,

whenever the Itô integral ∫_a^b F(t)(s)dw(t) ∈ ℝⁿ exists for every s ∈ J. This will exist, for example, if F is measurable and ∫_a^b ‖F(t)‖²_C dt < ∞. In case ∫_a^b F(t)dw(t) ∈ C(J,ℝⁿ) a.s., its transform under a continuous linear map C(J,ℝⁿ) → ℝⁿ is described by:
Lemma (4.3): Let L : C(J,ℝⁿ) → ℝⁿ be continuous linear and suppose L̃ : C̃(J,ℝⁿ) → ℝⁿ is its canonical continuous linear extension using the Riesz representation theorem. Assume that F : [a,b] → C̃(J,L(ℝᵐ,ℝⁿ)) is such that ∫_a^b F(t)dw(t) ∈ C(J,ℝⁿ) a.s. Then ∫_a^b L̃F(t)dw(t) exists and

    L(∫_a^b F(t)dw(t)) = ∫_a^b L̃F(t)dw(t) = Σ_{j=1}^m ∫_a^b L̃(f_j(t))dw_j(t)

a.s., where f_j(t) is the j-th column of F(t) and w_j(t) is the j-th coordinate of w(t), j = 1,…,m.

Proof: Represent L by a matrix-valued measure on J via the Riesz representation theorem; then use coordinates to reduce to the one-dimensional case m = n = 1. Namely, it is sufficient to prove that if μ is any finite positive measure on J and f ∈ L²([a,b] × J,ℝ; dt ⊗ dμ), then

    ∫_a^b ∫_{−r}^0 f(t,s)dμ(s)dw(t) = ∫_{−r}^0 ∫_a^b f(t,s)dw(t)dμ(s) a.s.    (3)

Suppose first that f = χ_{[α,β]×[γ,δ]}, the characteristic function of the rectangle [α,β] × [γ,δ] ⊂ [a,b] × J. Then (3) holds trivially. Also, by linearity of the integrals in f, (3) is true for all simple functions on [a,b] × J with rectangular steps. Since these are dense in L²([a,b] × J, dt ⊗ dμ), we need only check that each side of (3) is continuous in f ∈ L²([a,b] × J,ℝ; dt ⊗ dμ). But this is implied by the easy inequalities:

    E|∫_a^b ∫_{−r}^0 f(t,s)dμ(s)dw(t)|² = ∫_a^b [∫_{−r}^0 f(t,s)dμ(s)]²dt ≤ μ(J) ∫_a^b ∫_{−r}^0 |f(t,s)|²dμ(s)dt = μ(J)‖f‖²_{L²}

and

    E|∫_{−r}^0 ∫_a^b f(t,s)dw(t)dμ(s)|² ≤ μ(J) ∫_{−r}^0 E|∫_a^b f(t,s)dw(t)|²dμ(s) = μ(J)‖f‖²_{L²}. □
Remark: Since f is L², there is a version of the process s ↦ ∫_a^b f(t,s)dw(t) with almost all sample paths in L²(J,ℝ;dμ).

Next we shall need the following result on 'differentiating' a stochastic integral with respect to a parameter.

Lemma (4.4): Assume that f : [0,a] × [0,a] → L(ℝᵐ,ℝⁿ) is continuous on {(t,u) : 0 ≤ u ≤ t ≤ a} and has a continuous partial derivative ∂f/∂t there. Let

    z(t) = ∫₀ᵗ f(t,u)dw(u), t ∈ [0,a].

Then

    dz(t) = f(t,t)dw(t) + {∫₀ᵗ (∂/∂t)f(t,u)dw(u)}dt,

viz.

    ∫₀ᵗ f(t,u)dw(u) = ∫₀ᵗ f(u,u)dw(u) + ∫₀ᵗ {∫₀^v (∂/∂v)f(v,u)dw(u)}dv    (4)

for all t ∈ [0,a], a.s.

Proof: Suppose first that w̃ is a process on [0,a] with almost all sample paths piecewise C¹. Define z̃ by

    z̃(t) = ∫₀ᵗ f(t,u)dw̃(u) = ∫₀ᵗ f(t,u)w̃'(u)du, t ∈ [0,a],

a.s. Then almost all sample paths of z̃ are differentiable and a.s.

    z̃'(t) = f(t,t)w̃'(t) + ∫₀ᵗ (∂/∂t)f(t,u)w̃'(u)du

for all t ∈ (0,a). Thus

    dz̃(t) = z̃'(t)dt = f(t,t)w̃'(t)dt + [∫₀ᵗ (∂/∂t)f(t,u)w̃'(u)du]dt
        = f(t,t)dw̃(t) + [∫₀ᵗ (∂/∂t)f(t,u)dw̃(u)]dt;
i.e. (4) holds if w is replaced by the piecewise C¹ process w̃. If w is Brownian motion on ℝᵐ, define piecewise linear approximations of w as follows. Let Π : 0 = t₁ < t₂ < t₃ < … < t_k = a be a partition of [0,a]. Define the process w^Π in ℝᵐ by

    w^Π(t) = w(t_j) + (Δ_j w / Δ_j t)(t − t_j), t_j ≤ t ≤ t_{j+1}, 1 ≤ j ≤ k−1,

where Δ_j w = w(t_{j+1}) − w(t_j), Δ_j t = t_{j+1} − t_j. Suppose G : Ω × [0,a] → L(ℝᵐ,ℝⁿ) is an (F_t)_{0≤t≤a}-adapted process such that the map [0,a] ∋ t ↦ G(t) := G(·,t) ∈ L²(Ω,L(ℝᵐ,ℝⁿ)) is (uniformly) continuous. Let ε > 0. Then there is a δ₁ > 0 such that

    E|G(t) − G(t_j)|² < ε²/(4a) whenever mesh Π < δ₁,

since |t − t_j| ≤ |t_{j+1} − t_j| ≤ mesh Π < δ₁ for all t ∈ [t_j, t_{j+1}]. Applying Lemma 2 of Elworthy ([19], Chapter III, §2, pp. 25-28), we obtain

    E|Σ_{j=1}^{k−1} G(t_j)Δ_j w − ∫₀^a G(t)dw^Π(t)|² ≤ Σ_{j=1}^{k−1} (1/Δ_j t) ∫_{t_j}^{t_{j+1}} E|G(t_j) − G(t)|²dt · Δ_j t ≤ (ε²/4a) Σ_{j=1}^{k−1} Δ_j t ≤ ε²/4.

But, by a standard property of the stochastic integral, there is a δ₂ > 0 such that

    E|∫₀^a G(t)dw(t) − Σ_{j=1}^{k−1} G(t_j)Δ_j w|² < ε²/4

if mesh Π < δ₂. So if mesh Π < δ = min(δ₁,δ₂), one gets

    E|∫₀^a G(t)dw(t) − ∫₀^a G(t)dw^Π(t)|²
    ≤ 2E|∫₀^a G(t)dw(t) − Σ_{j=1}^{k−1} G(t_j)Δ_j w|² + 2E|Σ_{j=1}^{k−1} G(t_j)Δ_j w − ∫₀^a G(t)dw^Π(t)|²
    < 2·(ε²/4) + 2·(ε²/4) = ε².

Thus we have proved that

    lim_{mesh Π → 0} E|∫₀^a G(t)dw(t) − ∫₀^a G(t)dw^Π(t)|² = 0.    (5)
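The convergence (5) can be illustrated numerically for a deterministic integrand (here G(t) = cos t and equal-step partitions, both our own illustrative choices): the integral against the polygonal approximation w^Π approaches the Itô sum as the mesh shrinks.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096
t = np.linspace(0.0, 1.0, n + 1)
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))])

G = np.cos(t)                          # deterministic integrand
ito = np.sum(G[:-1] * np.diff(w))      # fine-grid Ito sum for the integral of G dw

def against_polygonal(k):
    """Integral of G(t) dw^Pi(t), where w^Pi is the piecewise-linear
    interpolant of w over a partition of [0,1] into k equal steps (k | n)."""
    idx = np.arange(0, n + 1, n // k)
    w_pi = np.interp(t, t[idx], w[idx])
    return np.sum(G[:-1] * np.diff(w_pi))

coarse, fine = against_polygonal(8), against_polygonal(1024)
# as the mesh shrinks, the polygonal integral approaches the Ito sum, as in (5)
```

For a random adapted integrand the same limit holds under the L²-continuity hypothesis above; in the application below the integrand is in fact deterministic.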
In particular, since the functions u ↦ f(t,u), u ↦ f(u,u), u ↦ (∂/∂v)f(v,u) are continuous for fixed t and v, we get

    ∫₀ᵗ f(t,u)dw^Π(u) → ∫₀ᵗ f(t,u)dw(u),    ∫₀ᵗ f(u,u)dw^Π(u) → ∫₀ᵗ f(u,u)dw(u),
    ∫₀^v (∂/∂v)f(v,u)dw^Π(u) → ∫₀^v (∂/∂v)f(v,u)dw(u)

in L² as mesh Π → 0, for each t ∈ [0,a], v ∈ [0,t]. Therefore the equality

    ∫₀ᵗ f(t,u)dw^Π(u) = ∫₀ᵗ f(u,u)dw^Π(u) + ∫₀ᵗ {∫₀^v (∂/∂v)f(v,u)dw^Π(u)}dv    (6)

will yield the required assertion (4) of the lemma, if we let mesh Π → 0 and show that the second integral on the right-hand side of (6) tends to ∫₀ᵗ {∫₀^v (∂/∂v)f(v,u)dw(u)}dv in L² as mesh Π → 0. To see this, consider

    E|∫₀ᵗ {∫₀^v (∂/∂v)f(v,u)dw^Π(u)}dv − ∫₀ᵗ {∫₀^v (∂/∂v)f(v,u)dw(u)}dv|²
    ≤ t ∫₀ᵗ E|∫₀^v (∂/∂v)f(v,u)dw^Π(u) − ∫₀^v (∂/∂v)f(v,u)dw(u)|²dv

for each t ∈ [0,a], by Hölder's inequality and Fubini's Theorem. From the dominated convergence theorem, the right-hand side will tend to zero as mesh Π → 0 if we can prove that the expression

    E|∫₀^v (∂/∂v)f(v,u)dw^Π(u) − ∫₀^v (∂/∂v)f(v,u)dw(u)|²

is bounded in v and Π. To see this, write

    E|∫₀^v (∂/∂v)f(v,u)dw^Π(u) − ∫₀^v (∂/∂v)f(v,u)dw(u)|²
    ≤ 2E|∫₀^v (∂/∂v)f(v,u)dw^Π(u)|² + 2K₁ ∫₀^v ‖(∂/∂v)f(v,u)‖²du

for all v ∈ [0,a] and some K₁ > 0 (depending on m, n, a). Let M = sup{‖(∂/∂v)f(v,u)‖ : 0 ≤ u ≤ v ≤ a}. If t_{k'} ≤ v < t_{k'+1}, then

    E|∫₀^v (∂/∂v)f(v,u)dw^Π(u)|²
    = E|Σ_{j=1}^{k'−1} (1/Δ_j t)(∫_{t_j}^{t_{j+1}} (∂/∂v)f(v,u)du)Δ_j w + (1/Δ_{k'}t)(∫_{t_{k'}}^v (∂/∂v)f(v,u)du)Δ_{k'}w|²
    ≤ M² Σ_{j=1}^{k'} Δ_j t = M² t_{k'+1} ≤ M²a

for every partition Π of [0,a]. Here we have again used Lemma 2 of Elworthy ([19], pp. 25-28). Thus

    E|∫₀^v (∂/∂v)f(v,u)dw^Π(u) − ∫₀^v (∂/∂v)f(v,u)dw(u)|² ≤ 2M²a + 2K₁M²a

for all v ∈ [0,a] and all partitions Π of [0,a]. This completes the proof of the lemma. □

We are now in a position to state and prove a stochastic variation of parameters formula for the trajectory of the forced system (XI). By virtue of the splitting (2) of C̃, the formula gives a convenient representation for the projections of the flow onto the subspaces U and S̃ of C̃.

Theorem (4.1): In the stochastic FDE (XI) suppose H : C → ℝⁿ is continuous linear and g : ℝ → L(ℝᵐ,ℝⁿ) is locally integrable. Then the trajectory {x_t : t ∈ [0,a]} of (XI) through η ∈ C satisfies
    x_t = T_t(η) + ∫₀ᵗ T_{t−u} Δg(u)dw(u),    (7)

    x_t^U = T_t(η^U) + ∫₀ᵗ T_{t−u} Δ^U g(u)dw(u),    (8)

    x_t^S̃ = T_t(η^S̃) + ∫₀ᵗ T_{t−u} Δ^S̃ g(u)dw(u),    (9)

for all t ∈ [0,a], where Δ = χ_{{0}} I and I ∈ L(ℝⁿ) is the identity n × n matrix.
Remark (4.1): It is evident from the following proof that the stochastic variation of parameters formula (7) (together with (8) and (9)) still holds if η is replaced by any θ ∈ L²(Ω,C(J,ℝⁿ);F₀).

Proof: We prove the formula first for C¹ maps g : ℝ → L(ℝᵐ,ℝⁿ). Define the process y : Ω × [−r,a] → ℝⁿ by

    y(t) = T_t(η)(0) + ∫₀ᵗ [T_{t−u} Δg(u)](0)dw(u), 0 ≤ t ≤ a;    y(t) = η(t), t ∈ J.
+
LORmJRn} c1 • Define the process
~g(u)](O)dw(u),
0
< t
= {
n(t)
t
€
J
Since gi[O,a] is c 1bounded, a straightforward integration by parts implies that the process t
~
y(t)  Tt(n)(O)
=
J: [Ttu
~g(u)](O)dw(u)
has continuous sample paths and belongs to £ 2 (n,c([O,a]JRn}). Thus y has a.a. sample paths continuous and belongs to t 2(n,c([r,a]JRn}). Fix t € [O,a] and s € J. Then [Ttu ~g(u)](s)
_f [T tu+s l 0
~g ( u)] ( 0) , r
<
tu+s >
tU+S <
0
0.
Hence
201
t+s Tt+s(n)(O) + J 0 [Tt+su~g(u)](O)dw(u), t+s yt(s)
=
y(t+s)
= {
n(t+s),
f
=
> 0
Tt(n)(s) +
· n ( t+s) ,
r < t +s < 0
[Tt u ~g(u)](s)dw(u) 0 r < t+s < 0
Jt
t+s > 0
i.e. Yt
=
Tt(n) +
Jt0 Tt u ~g(u)dw(u),
t > 0.
a.s.,
We prove next that y solves the stochastic FOE (XI). f~[O,a] ~ [O,a] ~ L(RmJRn) by [Ttu f(t,u)
~g(u)](O)
Defining
0< u< t < a
= {
u > t
0
we see that~ f(t,u) = H(Ttu ~g(u)) for 0 < u < t
=
{Jt Tt(n)(O)}dt t
+ {J0 =
a
]t
+ (Ttt~g(t))(O)dw(t)
[Ttu ~g(u)](O)dw(u)Jdt
H(Tt(n))dt + g(t)dw(t) + {J: H(Ttu
~g(u))dw(u)}dt.
Now
Jt Tt u ~g(u)dw(u) 0

=
n
Yt  Tt(n) E C(JJR ) a.s.,
so by Lemma (4.3), dy(t)
=
H(Tt(n))dt + g(t)dw(t) + H[J: Ttu
=
H[Tt(n) + Jt Ttu 0
=
202
~g(u)dw(u)]dt
H(yt)dt + g(t)dw(t)
0 < t
~g(u)dw(u)]dt
+ g(t)dw(t)
;.e. y is a solution of (Xl) in £ 2(g,C([r,a]JRn)). By the uniqueness theorem (Theorem (11..2.1)), it follcms that for a.a. w € Q, y(w) = x(w) in C([r,a]JRn). Hence xt
=
Yt
=
Tt(n) *
J:
Ttu 6g(u)dw(u), 0 < t
a.s. l.f g is just locally integrable, approximate gi[O,a] by a sequence {gk};= 1 of c1 maps gk~[O,a] + L(RmJRn) such that
J: Let xk FOE
€
llg(t)  gk(t) 11 2 dt
0 as k
+
+
oo
•
iCQ,C([r,a]JRn)) denote the uni.que solution of the stochastic dxk(t) = H(x~)dt + gk(t)dw(t), 0
xk0
=
<
t
<
a
n
for each k > 1. Suppose£> 0. Then there is a k0 > 0 such that ~ llg(t)  gk(t) 11 2 dt < £ for all k > k0 • For k > k0 , consider Ell x~ xtll ~ < E sup
lxk(t+s)  x(t+s)l 2
S€J
t+s>O < 2E sup S€J
J
t+S
I0
k
{H(xu)  H(xu)}dul
2
t+s>O
J
t+S
+ 2E sup sEJ t+s>O
I 0 [gk(u)  g(u)]dw(u)l 2
< 2a IIHII 2 Jt E llx~ xull 2 du + K1 Jt llgk(u)  g(u)ll 2 du
c
0
<
2a II HII 2 Jt E II xk  x 0
u
u
0
II 2 du + K1£ c
Where K1 > 0 is some positive constant (Theorem (1.8.5)). lemma, we obtain
By Gronwall's
203
$$E\|x^k_t-x_t\|_C^2\le K_1\varepsilon\,e^{2a\|H\|^2t}$$
for all $t\in[0,a]$ and all $k\ge k_0$. This implies that $x^k_t\to x_t$ as $k\to\infty$ in $\mathcal{L}^2(\Omega,C(J,\mathbb{R}^n))$, uniformly for $t\in[0,a]$. But
$$x^k_t=T_t(\eta)+\int_0^t T_{t-u}\,\Delta g_k(u)\,dw(u),\qquad 0\le t\le a,\ k=1,2,\dots$$
We let $k\to\infty$ to get
$$x_t=T_t(\eta)+\lim_{k\to\infty}\int_0^t T_{t-u}\,\Delta g_k(u)\,dw(u),\qquad 0\le t\le a,$$
where the limit exists in $\mathcal{L}^2(\Omega,C(J,\mathbb{R}^n))$. Indeed
$$E\Big|\int_0^t(T_{t-u}\,\Delta g_k(u))(s)\,dw(u)-\int_0^t(T_{t-u}\,\Delta g(u))(s)\,dw(u)\Big|^2$$
$$\le K_1\int_0^t\|T_{t-u}[\Delta g_k(u)-\Delta g(u)](s)\|^2\,du\le K_1\int_0^t\|T_{t-u}\,\Delta(g_k(u)-g(u))\|_C^2\,du,\qquad s\in J.$$
Since $\{T_t\}_{t\ge 0}$ is a strongly continuous semigroup on $\tilde{C}$, there exist $\mu>0$ and $M>0$ such that
$$\|T_t\,\Delta(g_k(u)-g(u))\|_C\le Me^{\mu t}\|g_k(u)-g(u)\|$$
for $0\le u\le t$, $k=1,2,\dots$ (Hale [26] p. 180). Hence
$$E\Big|\int_0^t(T_{t-u}\,\Delta g_k(u))(s)\,dw(u)-\int_0^t(T_{t-u}\,\Delta g(u))(s)\,dw(u)\Big|^2\le K_1M^2e^{2\mu a}\int_0^t\|g_k(u)-g(u)\|^2\,du$$
$$\le K_1M^2e^{2\mu a}\int_0^a\|g_k(u)-g(u)\|^2\,du,\qquad k=1,2,\dots,$$
for each $s\in J$. But the last expression tends to zero as $k\to\infty$, so
$$\lim_{k\to\infty}\int_0^t(T_{t-u}\,\Delta g_k(u))(s)\,dw(u)=\int_0^t(T_{t-u}\,\Delta g(u))(s)\,dw(u)$$
in $\mathcal{L}^2$ for each $s\in J$.
Therefore
$$x_t=T_t(\eta)+\int_0^t T_{t-u}\,\Delta g(u)\,dw(u),\qquad 0\le t\le a,$$
a.s., which proves the stochastic variation of parameters formula (7) for locally integrable $g$.
It remains to prove formulae (8) and (9). This is indeed quite simple: just apply the projections $\pi^U$ and $\pi^S$ to both sides of (7). Indeed
$$x^U_t=\pi^U(x_t)=\pi^UT_t(\eta)+\pi^U\int_0^t T_{t-u}\,\Delta g(u)\,dw(u)$$
$$=T_t(\eta^U)+\int_0^t\pi^UT_{t-u}\,\Delta g(u)\,dw(u)$$
$$=T_t(\eta^U)+\int_0^t T_{t-u}\,\pi^U\Delta g(u)\,dw(u)$$
$$=T_t(\eta^U)+\int_0^t T_{t-u}\,\Delta^Ug(u)\,dw(u),\qquad 0\le t\le a,$$
because of Lemmas (4.1) and (4.3). Similarly for (9). $\Box$
Remark (4.2): By a slight modification of the above argument, it follows that for any $t_0\in\mathbb{R}$ the unique solution of the stochastic FDE
$$dx(t)=H(x_t)\,dt+g(t)\,dw(t),\quad t\ge t_0,\qquad x_{t_0}=\eta\in C$$
satisfies
$$x_t=T_{t-t_0}(\eta)+\int_{t_0}^t T_{t-u}\,\Delta g(u)\,dw(u),$$
$$x^S_t=T_{t-t_0}(\eta^S)+\int_{t_0}^t T_{t-u}\,\Delta^Sg(u)\,dw(u),$$
$$x^U_t=T_{t-t_0}(\eta^U)+\int_{t_0}^t T_{t-u}\,\Delta^Ug(u)\,dw(u)$$
a.s. for all $t\ge t_0$.
The next step in our analysis is to use the representations (8) and (9) of Theorem (4.1) in order to study the asymptotic behaviour as $t\to\infty$ of the projections $\{x^U_t\}_{t\ge 0}$, $\{x^S_t\}_{t\ge 0}$ of the flow onto the subspaces $U$ and $S$.
To begin with, recall that $U$ has finite dimension $d$. Therefore it is possible to think of $\{x^U_t\}_{t\ge 0}$ as the solution of an unstable stochastic ODE (without delay) on $\mathbb{R}^d$. We make this more precise by appealing to the following considerations, which are taken from Hale ([26], pp. 173-190).
Define $C^*=C([0,r],\mathbb{R}^{n*})$, where $\mathbb{R}^{n*}$ is the Euclidean space of all $n$-dimensional row vectors. The continuous linear map $H:C\to\mathbb{R}^n$ defines a continuous bilinear pairing
$$C^*\times C\to\mathbb{R}:\quad(\alpha,\varphi)=\alpha(0)\varphi(0)+\int_{-r}^0\int_0^s\alpha(s-s')\,d\mu(s')\,\varphi(s)\,ds\tag{10}$$
where $\mu$ is the $L(\mathbb{R}^n)$-valued measure on $J$ representing $H$, $\alpha\in C^*$ and $\varphi\in C$. With reference to this bilinear pairing, the generator $A_H$ of $\{T_t\}_{t\ge 0}$ possesses a (formal) adjoint $A^*_H:\mathcal{D}(A^*_H)\subset C^*\to C^*$ defined by the relations
$$(A^*_H\alpha)(t)=\begin{cases}\alpha'(t), & 0<t\le r\\ \int_{-r}^0\alpha(-s)\,d\mu(s), & t=0,\end{cases}$$
$$\mathcal{D}(A^*_H)=\Big\{\alpha:\alpha\in C^*,\ \alpha\text{ is }C^1,\ \alpha'(0)=\int_{-r}^0\alpha(-s)\,d\mu(s)\Big\}.$$
Furthermore, $\sigma(A_H)=\sigma(A^*_H)$ and the spectra are discrete, consisting only of eigenvalues with finite multiplicities. Both $\sigma(A_H)$ and $\sigma(A^*_H)$ are invariant under complex conjugation, and the multiplicities of the eigenvalues coincide. Construct $U^*\subset C^*$ using the generalized eigenspaces of $A^*_H$ which correspond to eigenvalues with non-negative real parts. Then $\dim U^*=\dim U=d$. Take a basis $\Phi=(\varphi_1,\dots,\varphi_d)$ for $U$ and a basis
$$\Psi=\begin{pmatrix}\psi_1\\ \vdots\\ \psi_d\end{pmatrix}$$
for $U^*$ such that $(\psi_j,\varphi_i)=\delta_{ji}$, $i,j=1,2,\dots,d$. The basis $\Phi$ of $U$ defines a unique matrix representation $B\in L(\mathbb{R}^d)$ of $A_H|U$, i.e. $A_H\Phi=\Phi B$, $A^*_H\Psi=B\Psi$, where $A_H\Phi$, $\Phi B$, $A^*_H\Psi$, $B\Psi$ are all formally defined like matrix multiplication.
Note that the eigenvalues of $B$ are precisely those $\lambda\in\sigma(A_H)$ with $\operatorname{Re}\lambda\ge 0$. The reader should also observe here that the splitting (1) of $C$ is realized by the bilinear pairing (10) through the formula
$$\varphi^U=\Phi(\Psi,\varphi)\quad\text{for all }\varphi\in C.\tag{11}$$
The results quoted in this paragraph are all well-known for linear FDE's, and proofs may be found in Hale [26].
We would like to extend formula (11) so as to cover all $\varphi\in\tilde{C}$. First note that the bilinear pairing (10) extends to a continuous bilinear map $C^*\times\tilde{C}\to\mathbb{R}$ defined by the same formula. So the right-hand side of (11) makes sense for all $\varphi\in\tilde{C}$. But both sides of (11) are continuous with respect to pointwise convergence of uniformly bounded sequences in $C$, because of the dominated convergence theorem and the weak continuity of $\pi^U:\tilde{C}\to U$. As $\tilde{C}$ is closed under pointwise limits of uniformly bounded sequences, (11) holds for all $\varphi\in\tilde{C}$.
In view of the above considerations we may now state the following corollary of Theorem (4.1).
Corollary (4.1.1): Define $\Phi\subset U$, $\Psi\subset U^*$ and $B\in L(\mathbb{R}^d)$ as above. Let $\{x_t:t\in[0,a]\}$ be the trajectory of (XI) through $\eta\in C$. Define the process $z:\Omega\times[0,a]\to\mathbb{R}^d$ by $z(t)=(\Psi,x_t)$, $0\le t\le a$. Then $x^U_t=\Phi z(t)$ for $t\in[0,a]$, and $z$ is the unique solution of the stochastic ODE
$$\left.\begin{aligned}dz(t)&=Bz(t)\,dt+\Psi(0)g(t)\,dw(t),\quad 0\le t\le a\\ z(0)&=(\Psi,\eta).\end{aligned}\right\}\tag{XIV}$$
Proof: Use the definition of $z$, the stochastic variation of parameters formula and Lemma (4.3) to obtain
$$z(t)=(\Psi,T_t(\eta))+\int_0^t(\Psi,T_{t-u}\,\Delta g(u))\,dw(u)\quad\text{a.s.}$$
for all $t\in[0,a]$. Take differentials and use properties of the bilinear pairing and the generator $A_H$ of $\{T_t\}_{t\ge 0}$:
$$dz(t)=(\Psi,A_H(T_t(\eta)))\,dt+(\Psi,\Delta g(t))\,dw(t)+\Big\{\int_0^t\Big(\Psi,\frac{\partial}{\partial t}T_{t-u}\,\Delta g(u)\Big)dw(u)\Big\}dt$$
$$=(A^*_H\Psi,T_t(\eta))\,dt+\Psi(0)g(t)\,dw(t)+\Big\{\int_0^t(A^*_H\Psi,T_{t-u}\,\Delta g(u))\,dw(u)\Big\}dt\quad\text{(Lemma (4.4))}$$
$$=(B\Psi,T_t(\eta))\,dt+\Psi(0)g(t)\,dw(t)+\Big\{\int_0^t(B\Psi,T_{t-u}\,\Delta g(u))\,dw(u)\Big\}dt$$
$$=\Big(B\Psi,\;T_t(\eta)+\int_0^t T_{t-u}\,\Delta g(u)\,dw(u)\Big)dt+\Psi(0)g(t)\,dw(t)\quad\text{(Lemma (4.3))}$$
$$=(B\Psi,x_t)\,dt+\Psi(0)g(t)\,dw(t)=B(\Psi,x_t)\,dt+\Psi(0)g(t)\,dw(t)$$
$$=Bz(t)\,dt+\Psi(0)g(t)\,dw(t),\qquad 0\le t\le a.$$
Thus $z$ is the unique solution of the stochastic ODE (XIV) through $(\Psi,\eta)\in\mathbb{R}^d$. The relation $x^U_t=\Phi z(t)$, $t\in[0,a]$, follows directly from (11). This completes the proof of the corollary. $\Box$
The next theorem specifies the asymptotic behaviour as $t\to\infty$ of the stable projection $\{x^S_t\}_{t\ge 0}$ of our stochastic FDE (XI). In particular the trajectory $\{x^S_t\}_{t\ge 0}$ is asymptotically close to a continuous Gaussian process.
Theorem (4.2): Let $H:C\to\mathbb{R}^n$ be continuous linear, $g:\mathbb{R}\to L(\mathbb{R}^m,\mathbb{R}^n)$ $C^1$-bounded and $\{x_t:t\ge 0\}$ the trajectory of the stochastic FDE (XI) through $\eta\in C$. Then there is a sample-continuous Gaussian process ${}^\infty x:\Omega\times[-r,\infty)\to\mathbb{R}^n$ satisfying:
(i) ${}^\infty x_t=\int_{-\infty}^t T_{t-u}\,\Delta^Sg(u)\,dw(u)$ a.s. for all $t\ge 0$;
(ii) there are constants $K>0$, $\alpha<0$ such that $E\|x^S_t-{}^\infty x_t\|_C^2\le Ke^{\alpha t}$ for all $t\ge r$;
(iii) ${}^\infty x$ is the unique solution of the stochastic FDE
$$\left.\begin{aligned}d\,{}^\infty x(t)&=H({}^\infty x_t)\,dt+(\Delta^Sg(t))(0)\,dw(t),\quad t\ge 0\\ {}^\infty x_0&=\int_{-\infty}^0 T_{-u}\,\Delta^Sg(u)\,dw(u)\in\mathcal{L}^2(\Omega,C);\end{aligned}\right\}\tag{XV}$$
(iv) if $g$ is constant, then ${}^\infty x$ is stationary;
(v) if $g$ is periodic with period $k>0$, ${}^\infty x$ is periodic in distribution with period $k$.
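Assertions (i) and (iv) can be made concrete in a scalar caricature. If the semigroup is just $T_t=e^{\beta t}$ with $\beta<0$ and $g\equiv\gamma$ is constant, then the convolution in (i) has variance $\int_0^\infty e^{2\beta s}\gamma^2\,ds=\gamma^2/(2|\beta|)$, independent of $t$ — which is the stationarity in (iv). The following sketch checks this by a truncated Riemann sum; $\beta$, $\gamma$, the step and the truncation horizon are illustrative values, not from the text.

```python
import math

# Scalar model of (i): x_inf(t) = int_{-inf}^t e^{beta (t-u)} gamma dw(u),
# whose variance is int_{-inf}^t (e^{beta (t-u)} gamma)^2 du.
beta, gamma, du, horizon = -0.8, 0.5, 0.001, 40.0

def variance_at(t):
    # Riemann sum over u = t - k*du; since t - u = k*du, t drops out of the
    # summand entirely -- the discrete footprint of stationarity.
    n = int(horizon / du)
    return sum((math.exp(beta * k * du) * gamma) ** 2 * du for k in range(n))

v3, v7 = variance_at(3.0), variance_at(7.0)
print(abs(v3 - gamma ** 2 / (2 * abs(beta))) < 0.01)   # -> True
print(v3 == v7)                                        # -> True (stationarity)
```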
Proof: Suppose first that $g$ and its derivative $g'$ are both globally bounded on $\mathbb{R}$. We define the process ${}^\infty x:\Omega\times[-r,\infty)\to\mathbb{R}^n$ by
$${}^\infty x(t)=\begin{cases}\int_{-\infty}^t(T_{t-u}\,\Delta^Sg(u))(0)\,dw(u), & t\ge 0\\[2pt] \int_{-\infty}^0(T_{-u}\,\Delta^Sg(u))(t)\,dw(u), & t\in J.\end{cases}\tag{12}$$
To see that ${}^\infty x$ is well-defined, we note the existence of the limit
$$\int_{-\infty}^t(T_{t-u}\,\Delta^Sg(u))(0)\,dw(u)=\lim_{v\to-\infty}\int_v^t(T_{t-u}\,\Delta^Sg(u))(0)\,dw(u)$$
a.s. Indeed the map $u\mapsto(T_{t-u}\,\Delta^Sg(u))(0)$ is $C^1$, and so integrating by parts gives the classical pathwise integral
$$\int_v^t(T_{t-u}\,\Delta^Sg(u))(0)\,dw(u)=(\Delta^Sg(t))(0)w(t)-(T_{t-v}\,\Delta^Sg(v))(0)w(v)-\int_v^t\frac{\partial}{\partial u}\{(T_{t-u}\,\Delta^Sg(u))(0)\}w(u)\,du$$
a.s. for all $v<t$. Now by Hale ([26], p. 187) there are constants $M>0$, $\beta<0$ such that
$$\|T_t\,\Delta^Sg(u)\|\le Me^{\beta t}\|g(u)\|\tag{13}$$
for all $t\ge 0$, $u\in\mathbb{R}$. But the law of the iterated logarithm for Brownian motion (Theorem (I.8.2)(iv)) implies that there is a constant $L>0$ such that for a.a. $\omega\in\Omega$ there exists $T(\omega)<0$ with the property
$$|w(\omega)(t)|\le L(|t|\log\log|t|)^{1/2}\tag{14}$$
for all $t\le T(\omega)$. In particular, we have a.s.
$$\lim_{v\to-\infty}|(T_{t-v}\,\Delta^Sg(v))(0)w(v)|\le M\lim_{v\to-\infty}e^{\beta(t-v)}\|g(v)\|\,|w(v)|$$
$$\le ML\|g\|_Ce^{\beta t}\lim_{v\to-\infty}e^{-\beta v}(|v|\log\log|v|)^{1/2}\le ML\|g\|_Ce^{\beta t}\lim_{v\to-\infty}|v|e^{-\beta v}=0,$$
where $\|g\|_C=\sup\{\|g(t)\|:t\in\mathbb{R}\}$.
Next we consider the existence of the a.s. limit
$$\lim_{v\to-\infty}\int_v^t\frac{\partial}{\partial u}\{(T_{t-u}\,\Delta^Sg(u))(0)\}w(u)\,du.$$
To prove its existence, it is sufficient to demonstrate that
$$\int_{-\infty}^t\Big|\frac{\partial}{\partial u}\{(T_{t-u}\,\Delta^Sg(u))(0)\}w(u)\Big|\,du<\infty\quad\text{a.s.}$$
Note first that
$$\frac{\partial}{\partial u}\{(T_{t-u}\,\Delta^Sg(u))(0)\}=-H(T_{t-u}\,\Delta^Sg(u))+(T_{t-u}\,\Delta^Sg'(u))(0)$$
for $u<t$; so
$$\Big|\frac{\partial}{\partial u}\{(T_{t-u}\,\Delta^Sg(u))(0)\}w(\omega)(u)\Big|\le\|H\|\,\|T_{t-u}\,\Delta^Sg(u)\|\,|w(\omega)(u)|+\|T_{t-u}\,\Delta^Sg'(u)\|\,|w(\omega)(u)|$$
$$\le M\|H\|\,\|g\|_Ce^{\beta(t-u)}|w(\omega)(u)|+M\|g'\|_Ce^{\beta(t-u)}|w(\omega)(u)|$$
$$=M(\|H\|\,\|g\|_C+\|g'\|_C)e^{\beta t}e^{-\beta u}|w(\omega)(u)|\le LM(\|H\|\,\|g\|_C+\|g'\|_C)e^{\beta t}|u|e^{-\beta u}$$
for all $u<\min(T(\omega),t)$ and a.a. $\omega\in\Omega$. Since $\beta<0$, the integral $\int_{-\infty}^{\min(T(\omega),t)}|u|e^{-\beta u}\,du$ is finite, and hence so is
$$\int_{-\infty}^t\Big|\frac{\partial}{\partial u}\{(T_{t-u}\,\Delta^Sg(u))(0)\}w(u)\Big|\,du=\int_{-\infty}^{\min(T(\omega),t)}\big|\cdots\big|\,du+\int_{\min(T(\omega),t)}^t\big|\cdots\big|\,du$$
for $t\ge 0$.
From the definition of ${}^\infty x$ the reader may easily verify that the trajectory $\{{}^\infty x_t\}_{t\ge 0}$ satisfies assertion (i) of the theorem. Thus we can look at the difference
$$(x^S_t-{}^\infty x_t)(s)=T_t(\eta^S)(s)-\int_{-\infty}^0\{T_{t-u+s}\,\Delta^Sg(u)\}(0)\,dw(u)$$
for all $s\in J$ and $t\ge r$. Let $A_t:\Omega\times J\to\mathbb{R}^n$ be the process
$$A_t(s)=\int_{-\infty}^0\{T_{t-u+s}\,\Delta^Sg(u)\}(0)\,dw(u)$$
for each fixed $t\ge r$. Then integration by parts gives
$$A_t(s)=\{T_{t+s}\,\Delta^Sg(0)\}(0)w(0)-\lim_{v\to-\infty}\int_v^0\big[-H(T_{t-u+s}\,\Delta^Sg(u))+\{T_{t-u+s}\,\Delta^Sg'(u)\}(0)\big]w(u)\,du$$
a.s. for $t\ge r$, $s\in J$. From the subsequent discussion we shall see that $A_t$ is $(F_0\otimes\text{Borel }J,\text{Borel }\mathbb{R}^n)$-measurable and has a.a. sample paths continuous on $J$. Furthermore, one has the following estimates satisfied a.s.:
$$|A_t(s)|\le Me^{\beta(t+s)}\|g(0)\|\,|w(0)|+M\lim_{u\to-\infty}e^{\beta(t-u+s)}\|g(u)\|\,|w(u)|$$
$$+M\|H\|\int_{-\infty}^0e^{\beta(t-u+s)}\|g(u)\|\,|w(u)|\,du+M\int_{-\infty}^0e^{\beta(t-u+s)}\|g'(u)\|\,|w(u)|\,du$$
for $t\ge r$, $s\in J$. In the above inequalities all limits exist a.s., because for a.a. $\omega\in\Omega$, $\lim_{u\to-\infty}e^{-\beta u}|w(u)|=0$; and so if we let
$$K_1(\omega)=\int_{-\infty}^0e^{-\beta u}|w(\omega)(u)|\,du$$
for a.a. $\omega\in\Omega$, we get
$$\sup_{s\in J}|A_t(s)|\le Me^{\beta(t-r)}\|g(0)\|\,|w(0)|+M(\|H\|\,\|g\|_C+\|g'\|_C)K_1e^{\beta(t-r)}$$
a.s. for all $t\ge r$. Thus a.s.
$$\|x^S_t-{}^\infty x_t\|_C\le Me^{\beta t}\|\eta^S\|_C+\sup_{s\in J}|A_t(s)|$$
$$\le Me^{\beta t}\|\eta^S\|_C+Me^{\beta(t-r)}\|g(0)\|\,|w(0)|+M(\|H\|\,\|g\|_C+\|g'\|_C)K_1e^{\beta(t-r)}$$
for $t\ge r$. In particular, $\|x^S_t-{}^\infty x_t\|_C\to 0$ a.s. as $t\to\infty$.
To establish assertion (ii) of the theorem, note that by dominated convergence and Fubini's theorem one gets
$$EK_1^2=E\Big(\int_{-\infty}^0e^{-\beta u}|w(u)|\,du\Big)^2\le\Big(\int_{-\infty}^0e^{-\beta u}\,du\Big)\Big(\int_{-\infty}^0e^{-\beta u}E|w(u)|^2\,du\Big)=\Big(-\frac{1}{\beta}\Big)\int_{-\infty}^0(-u)e^{-\beta u}\,du<\infty.$$
Thus
$$E\|x^S_t-{}^\infty x_t\|_C^2\le 3M^2e^{2\beta t}\|\eta^S\|_C^2+3M^2e^{2\beta(t-r)}\|g(0)\|^2E|w(0)|^2+3M^2(\|H\|\,\|g\|_C+\|g'\|_C)^2(EK_1^2)e^{2\beta(t-r)}$$
for $t\ge r$. Now take
$$K=3M^2\|\eta^S\|_C^2+3M^2e^{-2\beta r}\|g(0)\|^2E|w(0)|^2+3M^2(\|H\|\,\|g\|_C+\|g'\|_C)^2(EK_1^2)e^{-2\beta r}$$
and $\alpha=2\beta<0$. Then
$$E\|x^S_t-{}^\infty x_t\|_C^2\le Ke^{\alpha t},\qquad t\ge r.$$
We next show that ${}^\infty x$ is the solution of the stochastic FDE (XV) through
$$\theta=\int_{-\infty}^0T_{-u}\,\Delta^Sg(u)\,dw(u).$$
By our main existence theorem (Theorem (II.2.1)), the stochastic FDE (XV) will have a unique solution in $\mathcal{L}^2(\Omega,C([-r,a],\mathbb{R}^n))$ for any $a>0$ if $\theta\in\mathcal{L}^2(\Omega,C)$. In fact, we can write ${}^\infty x$ in the form
$${}^\infty x(t)=\begin{cases}T_t(\theta)(0)+\int_0^t[T_{t-u}\,\Delta^Sg(u)](0)\,dw(u), & t\ge 0\\[2pt] \theta(t), & t\in J,\end{cases}\tag{15}$$
provided we can show that $\theta\in\mathcal{L}^2(\Omega,C)$. Now the stochastic integral depends measurably on parameters, so the process
$$\theta(s)=\int_{-\infty}^0[T_{-u}\,\Delta^Sg(u)](s)\,dw(u),\qquad s\in J,$$
has a measurable version $\theta:\Omega\times J\to\mathbb{R}^n$ according to a result of Stricker and Yor ([72], Théorème 1, §5, p. 119). Indeed $\theta$ is sample-continuous. To see this, use integration by parts to write
$$\theta(s)=\int_{-\infty}^s[T_{-u+s}\,\Delta^Sg(u)](0)\,dw(u)$$
$$=(\Delta^Sg(s))(0)w(s)-\lim_{v\to-\infty}\int_v^s\big[-H(T_{-u+s}\,\Delta^Sg(u))+\{T_{-u+s}\,\Delta^Sg'(u)\}(0)\big]w(u)\,du\tag{16}$$
a.s. for all $s\in J$. Since $g$, $g'$ and $w$ are continuous, the processes
$$s\mapsto(\Delta^Sg(s))(0)w(s),\qquad s\mapsto\int_v^s\big[-H(T_{-u+s}\,\Delta^Sg(u))+\{T_{-u+s}\,\Delta^Sg'(u)\}(0)\big]w(u)\,du$$
are sample-continuous on $J$. Thus for $\theta$ to have continuous sample paths one needs to check that the a.s. limit on the right-hand side of (16) is uniform for $s\in J$. Now, for $v<s\le 0$,
$$\Big|\int_v^s\big[\cdots\big]w(u)\,du-\int_{-\infty}^s\big[\cdots\big]w(u)\,du\Big|\le\int_{-\infty}^v\{M\|H\|\,\|g\|_Ce^{\beta(-u+s)}+M\|g'\|_Ce^{\beta(-u+s)}\}|w(u)|\,du$$
$$\le M(\|H\|\,\|g\|_C+\|g'\|_C)e^{-\beta r}\int_{-\infty}^ve^{-\beta u}|w(u)|\,du.\tag{17}$$
Since $\lim_{v\to-\infty}\int_{-\infty}^ve^{-\beta u}|w(u)|\,du=0$, it follows immediately from (17) that the limit on the right-hand side of (16) is uniform in $s\in J$, and so $\theta(\omega)\in C$ for a.a. $\omega\in\Omega$. As the map $J\ni s\mapsto\theta(\omega)(s)\in\mathbb{R}^n$ is Borel-measurable for a.a. $\omega\in\Omega$, it follows that $\theta\in\mathcal{L}^0(\Omega,C)$. Indeed, starting from (16), one easily obtains an a.s. bound on $|\theta(s)|$ which is uniform in $s\in J$; taking suprema and expectations then gives $E\|\theta\|_C^2<\infty$. Hence $\theta\in\mathcal{L}^2(\Omega,C)$. In fact, by a similar argument, one can show that ${}^\infty x_t\in\mathcal{L}^2(\Omega,C)$ for all $t\ge 0$. Note also that this is a consequence of assertion (ii) of the theorem for $t\ge r$.
Since $\{w(t):t\in\mathbb{R}\}$ is a Gaussian system (§§I.8(A),(B)), the right-hand side of (16) is also a Gaussian system. Thus $\theta:\Omega\times J\to\mathbb{R}^n$ is a Gaussian process (Theorem (I.8.1)(ii)). In a similar fashion ${}^\infty x:\Omega\times[-r,\infty)\to\mathbb{R}^n$ and ${}^\infty x_t:\Omega\times J\to\mathbb{R}^n$ are Gaussian for each fixed $t\ge 0$.
The stochastic FDE (XV) for ${}^\infty x$ may be derived directly from (15) by taking stochastic differentials as in the proof of the stochastic variation of parameters formula (Theorem (4.1)). Thus
$$d\,{}^\infty x(t)=\Big\{\frac{d}{dt}T_t(\theta)(0)\Big\}dt+(\Delta^Sg(t))(0)\,dw(t)+\Big\{\int_0^t\frac{\partial}{\partial t}[T_{t-u}\,\Delta^Sg(u)](0)\,dw(u)\Big\}dt$$
$$=\Big\{H(T_t(\theta))+\int_0^tH(T_{t-u}\,\Delta^Sg(u))\,dw(u)\Big\}dt+(\Delta^Sg(t))(0)\,dw(t)$$
$$=H({}^\infty x_t)\,dt+(\Delta^Sg(t))(0)\,dw(t),\qquad t\ge 0.$$
Let us now check assertion (v) of the theorem. Suppose $g$ is periodic on $\mathbb{R}$ with period $k>0$. Consider ${}^\infty x(t+k)$ for $t\ge -r$. Then a.s.
$${}^\infty x(t+k)=\int_{-\infty}^{t+k}T_{t+k-u}\,\Delta^Sg(u)\,dw(u)=\int_{-\infty}^tT_{t-u'}\,\Delta^Sg(u'+k)\,dw(u'+k)\qquad(u'=u-k).$$
But $g(u'+k)=g(u')$, and $\{w(u'):u'\in\mathbb{R}\}$, $\{w(u'+k)-w(k):u'\in\mathbb{R}\}$ are isonomous, so by isonomy properties of the stochastic integral (Lemma (III.2.3)) we get
$${}^\infty x(t+k)\cong\int_{-\infty}^tT_{t-u'}\,\Delta^Sg(u')\,dw(u')={}^\infty x(t)$$
for every $t\ge -r$. Thus ${}^\infty x|[0,\infty)$ is periodic in distribution with period $k$, i.e. $P\circ{}^\infty x(t)^{-1}=P\circ{}^\infty x(t+k)^{-1}$ for all $t\ge -r$. In particular, if $g$ is constant, one can take the 'period' $k$ to be arbitrary, and so $P\circ{}^\infty x(t)^{-1}$ is independent of $t\ge -r$. Hence ${}^\infty x$ is stationary. This completes the proof of the theorem. $\Box$
Corollary (4.2.1): Let $H:C\to\mathbb{R}^n$ be continuous linear and $g:\mathbb{R}\to L(\mathbb{R}^m,\mathbb{R}^n)$ be $C^1$-bounded. Suppose that the deterministic RFDE
$$dy(t)=H(y_t)\,dt,\qquad t\ge 0\tag{XII}$$
is globally asymptotically stable, i.e. $\operatorname{Re}\lambda<0$ for every $\lambda\in\sigma(A_H)$, where $A_H$ is the infinitesimal generator of $\{T_t\}_{t\ge 0}$. Then the process ${}^\infty x:\Omega\times[-r,\infty)\to\mathbb{R}^n$ given by
$${}^\infty x_t=\int_{-\infty}^tT_{t-u}\,\Delta g(u)\,dw(u),\qquad t\ge 0,$$
is a sample-continuous Gaussian solution of the stochastic FDE
$$dx(t)=H(x_t)\,dt+g(t)\,dw(t),\qquad t\ge 0\tag{XI}$$
such that for any trajectory $\{x_t:t\ge 0\}\subset\mathcal{L}^2(\Omega,C)$ of (XI) there are constants $K>0$, $\alpha<0$ with
$$E\|x_t-{}^\infty x_t\|_C^2\le Ke^{\alpha t}\quad\text{for all }t\ge r.$$
If $g$ is constant, ${}^\infty x$ is stationary; if $g$ is periodic with period $k$, ${}^\infty x$ is periodic in distribution with period $k$.
Proof: If $\sigma(A_H)$ lies to the left of the imaginary axis, the splittings (1) and (2) reduce to the trivial case
$$C=S,\quad U=\{0\},\quad\tilde{C}=\tilde{S},\quad\pi^S=\mathrm{id}_C,\quad\tilde{\pi}^S=\mathrm{id}_{\tilde{C}}.$$
Hence $x^S_t=x_t$ for all $t\ge 0$; so all conclusions of the corollary follow immediately from the theorem. $\Box$
Corollary (4.2.2): Suppose all the conditions of Corollary (4.2.1) hold, and let $g$ be constant, i.e. $g(t)=G\in L(\mathbb{R}^m,\mathbb{R}^n)$ for all $t\in\mathbb{R}$. Then the stochastic FDE
$$dx(t)=H(x_t)\,dt+G\,dw(t),\qquad t\ge 0\tag{XVI}$$
is (globally) asymptotically stochastically stable, i.e. for every $\eta\in C$, $\lim_{t\to\infty}P\circ({}^\eta x_t)^{-1}$ exists in $M_p(C)$ and is an invariant Gaussian measure for the stochastic FDE (XVI) (§(III.3)).
Proof: As ${}^\infty x$ is stationary, let $P\circ({}^\infty x_t)^{-1}=\mu_0=P\circ\theta^{-1}$ for all $t\ge 0$. We show that the transition probabilities $\{p(0,\eta,t,\cdot)=P\circ({}^\eta x_t)^{-1}:t\ge 0,\ \eta\in C\}$ of (XVI) converge to $\mu_0$ in $M_p(C)$ as $t\to\infty$ for every $\eta\in C$. Thus it is sufficient to prove that
$$\lim_{t\to\infty}\int_{\xi\in C}\phi(\xi)\,p(0,\eta,t,d\xi)=\int_{\xi\in C}\phi(\xi)\,d\mu_0(\xi)$$
for every bounded uniformly continuous function $\phi$ on $C$. But by Corollary (4.2.1) above, $\lim_{t\to\infty}E\|{}^\eta x_t-{}^\infty x_t\|_C^2=0$ for every $\eta\in C$; so it follows from the proof of Lemma (III.3.1)(ii) that
$$\lim_{t\to\infty}\Big[\int_\Omega\phi({}^\eta x_t(\omega))\,dP(\omega)-\int_\Omega\phi({}^\infty x_t(\omega))\,dP(\omega)\Big]=0,\qquad\phi\in C_b,$$
i.e.
$$\lim_{t\to\infty}\Big[\int_{\xi\in C}\phi(\xi)\,p(0,\eta,t,d\xi)-\int_{\xi\in C}\phi(\xi)\,d\mu_0(\xi)\Big]=0,\qquad\phi\in C_b.$$
Thus $\lim_{t\to\infty}p(0,\eta,t,\cdot)=\mu_0$, a Gaussian measure on $C$. This implies that $P^*_t(\mu_0)=\mu_0$ for all $t\ge 0$, where $\{P^*_t\}_{t\ge 0}$ is the adjoint semigroup on $M(C)$ associated with the stochastic FDE (XVI) (§(III.3)). $\Box$
Remark (4.3): Note that the invariant measure $\mu_0$ of Corollary (4.2.2) is uniquely determined, independently of all initial conditions $\eta\in C$ (or in $\mathcal{L}^2(\Omega,C)$).
In contrast with the asymptotically stable case, we finally look at the situation when $\sigma(A_H)$ has some elements with positive real parts. In this case it turns out that the variance of every one-dimensional projection of ${}^\eta x^U_t$ in $U$ diverges to $\infty$ exponentially as $t\to\infty$, for each $\eta\in C$. Indeed we have
Theorem (4.3): Define $\Psi\subset U^*$, $B\in L(\mathbb{R}^d)$, $z:\Omega\times[0,\infty)\to\mathbb{R}^d$ as in Corollary (4.1.1). For the stochastic FDE (XVI) (constant $G$), let $C=\Psi(0)G\in L(\mathbb{R}^m,\mathbb{R}^d)$. Suppose $\{\lambda_j=a_j\pm ib_j:1\le j\le p\}$ is the set of all eigenvalues of $B$, where $a_j\ge 0$ and $b_j\in\mathbb{R}$, $j=1,2,\dots,p$. Denote by $\langle\cdot,\cdot\rangle$ the (Euclidean) inner product on $\mathbb{R}^d$, i.e. for $u=(u_1,\dots,u_d)$, $v=(v_1,\dots,v_d)\in\mathbb{R}^d$, $\langle u,v\rangle=\sum_{i=1}^du_iv_i$.
(i) Assume that the pair $(B,C)$ is controllable, viz.
$$\operatorname{rank}(C,BC,B^2C,\dots,B^{d-1}C)=d.\tag{18}$$
Then for every $v\in\mathbb{R}^d$, there exist $1\le j\le p$ and $t_0>0$ such that, given $\varepsilon>0$, we can find $D_1,D_2,D_3>0$ with the property
$$D_1e^{2a_jt}-D_2\le E|\langle v,z(t)-Ez(t)\rangle|^2\le D_3e^{(2a_j+\varepsilon)t}\tag{19}$$
for all $t\ge t_0$.
(ii) If the rank condition (18) is not satisfied, then either the inequalities (19) hold or else $E|\langle v,z(t)-Ez(t)\rangle|^2=0$ for all $t\ge 0$.
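Condition (18) is the classical Kalman rank criterion for the pair $(B,C)$, and it can be checked mechanically. The sketch below does this with exact rational arithmetic (to avoid floating-point rank ambiguities) for an illustrative $2\times 2$ pair; the matrices are examples of my own, not taken from the text.

```python
# Kalman rank test  rank(C, BC, ..., B^{d-1} C) = d  for a pair (B, C).
from fractions import Fraction

def rank(rows):
    """Row-reduce a list of row vectors over the rationals; count pivots."""
    rows = [list(map(Fraction, r)) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def controllable(B, C):
    d = len(B)
    blocks, M = [], C
    for _ in range(d):          # build C, BC, ..., B^{d-1}C
        blocks.append(M)
        M = matmul(B, M)
    # collect all columns of the blocks, each as a length-d row, then rank them
    cols = [[blk[i][j] for i in range(d)] for blk in blocks for j in range(len(C[0]))]
    return rank(cols) == d

B = [[0, 1], [-1, 0]]    # rotation generator
C = [[0], [1]]           # scalar noise entering the second component only
print(controllable(B, C))   # -> True: the noise reaches both components
```

With $B$ the identity and the same $C$ repeated in both components, the matrix $(C,BC)$ has rank 1, so the pair is not controllable and case (ii) applies.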
Proof: From the stochastic ODE (XIV),
$$dz(t)=Bz(t)\,dt+C\,dw(t),$$
it follows that
$$z(t)=e^{tB}z(0)+\int_0^te^{(t-u)B}C\,dw(u)$$
and $Ez(t)=e^{tB}z(0)=e^{tB}(\Psi,\eta)$, $t\ge 0$. Let $v=(v_1,\dots,v_d)\in\mathbb{R}^d$, $w=(w_1,\dots,w_m)$. Then
$$E|\langle v,z(t)-Ez(t)\rangle|^2=E\Big[\sum_{i=1}^dv_i\int_0^t\sum_{j=1}^m\big(e^{(t-u)B}C\big)_{ij}\,dw_j(u)\Big]^2.\tag{20}$$
Now consider the entry $\big(e^{(t-u)B}C\big)_{ij}$ of the matrix $e^{(t-u)B}C$, where $C=(C_{ij})$, $i=1,\dots,d$, $j=1,\dots,m$. Using (Hirsch and Smale [33], §6.5, Theorem 1) we can write
$$\big(e^{(t-u)B}C\big)_{ij}=\sum_{k=1}^p(t-u)^{m_k}e^{a_k(t-u)}\big[\alpha_{ijk}\sin b_k(t-u)+\beta_{ijk}\cos b_k(t-u)\big]$$
for some real constants $\alpha_{ijk}$, $\beta_{ijk}$ and integers $m_k\ge 0$, $i=1,\dots,d$, $j=1,\dots,m$, $k=1,\dots,p$.
If we insert the above expression into (20), we get
$$E|\langle v,z(t)-Ez(t)\rangle|^2=E\Big\{\sum_{j=1}^m\int_0^t\sum_{k=1}^p(t-u)^{m_k}e^{a_k(t-u)}\Big[\sum_{i=1}^dv_i\alpha_{ijk}\sin b_k(t-u)+\sum_{i=1}^dv_i\beta_{ijk}\cos b_k(t-u)\Big]dw_j(u)\Big\}^2$$
$$=\sum_{j=1}^m\int_0^t\Big\{\sum_{k=1}^p(t-u)^{m_k}e^{a_k(t-u)}\Big[\sum_{i=1}^dv_i\alpha_{ijk}\sin b_k(t-u)+\sum_{i=1}^dv_i\beta_{ijk}\cos b_k(t-u)\Big]\Big\}^2du$$
because of the independence of $w_j$, $j=1,2,\dots,m$. Now let the functions $g_{jk}:\mathbb{R}\to\mathbb{R}$ be defined by
$$g_{jk}(u)=\sum_{i=1}^dv_i\alpha_{ijk}\sin b_ku+\sum_{i=1}^dv_i\beta_{ijk}\cos b_ku,\qquad u\ge 0.$$
Then
$$E|\langle v,z(t)-Ez(t)\rangle|^2=\sum_{j=1}^m\int_0^t\Big\{\sum_{k=1}^pu^{m_k}e^{a_ku}g_{jk}(u)\Big\}^2du.\tag{21}$$
If $\sum_{i=1}^dv_i\beta_{ijk}=0$ for all $j$, $k$, and if for all $k$ with $b_k\ne 0$ one has $\sum_{i=1}^dv_i\alpha_{ijk}=0$, then $E|\langle v,z(t)-Ez(t)\rangle|^2=0$ for all $t\ge 0$. Otherwise choose $j_0$ and $k_0$ so that $g_{j_0k_0}\not\equiv 0$ and $a_{k_0}$ is maximal among the $a_k$ with $g_{j_0k}\not\equiv 0$.
Suppose $\gamma$ is a positive constant such that $I=\{u:|g_{j_0k_0}(u)|\ge\gamma\}\ne\emptyset$, and let $M_k$, $k=1,2,\dots,p$, be constants so that $|g_{j_0k}(u)|\le M_k$ for all $u\ge 0$, all $k$. Note that this is possible because each $g_{j_0k}$ is globally bounded. Then there exists $t_0>0$ such that
$$u^{m_{k_0}}e^{a_{k_0}u}\gamma>2\sum\nolimits'\big|u^{m_k}e^{a_ku}M_k\big|$$
for all $u\ge t_0$, where the sum $\sum'$ is taken over all $k\ne k_0$ for which $g_{j_0k}\not\equiv 0$. Thus
$$\int_0^t\Big\{\sum_{k=1}^pu^{m_k}e^{a_ku}g_{j_0k}(u)\Big\}^2du\ge\int_{[t_0,t]\cap I}\Big\{u^{m_{k_0}}e^{a_{k_0}u}g_{j_0k_0}(u)-\sum\nolimits'\big|u^{m_k}e^{a_ku}M_k\big|\Big\}^2du$$
$$\ge\frac{1}{4}\int_{[t_0,t]\cap I}\big|u^{m_{k_0}}e^{a_{k_0}u}\gamma\big|^2du\ge D_1e^{2a_{k_0}t}-D_2$$
for some constants $D_1,D_2>0$, since $g_{j_0k_0}$ is periodic. Hence by (21) we have
$$E|\langle v,z(t)-Ez(t)\rangle|^2\ge D_1e^{2a_{k_0}t}-D_2,\qquad t\ge t_0.$$
For the upper bound in (19), note that
$$E|\langle v,z(t)-Ez(t)\rangle|^2=\sum_{j=1}^m\int_0^t\sum_{k=1}^p\sum_{l=1}^pu^{m_k+m_l}e^{(a_k+a_l)u}g_{jk}(u)g_{jl}(u)\,du$$
$$\le\sum_{k=1}^p\sum_{l=1}^p\int_0^tu^{m_k+m_l}e^{(a_k+a_l)u}\sum_{j=1}^m|g_{jk}(u)|\,|g_{jl}(u)|\,du,\qquad t\ge 0.$$
Defining $k_0$, $j_0$ as before and letting
$$M_{kl}=\sup\Big\{\sum_{j=1}^m|g_{jk}(u)|\,|g_{jl}(u)|:u\ge 0\Big\},$$
we can find $u_1>0$ such that
$$u^{2m_{k_0}}e^{2a_{k_0}u}\bar{\gamma}>2\sum_{(k,l)}u^{m_k+m_l}e^{(a_k+a_l)u}M_{kl}$$
for all $u\ge u_1$, where summation is taken over all $k,l\ne k_0$ with $M_{kl}\ne 0$, and $\bar{\gamma}=\max\{M_{kl}:1\le k,l\le p\}$. Therefore
$$E|\langle v,z(t)-Ez(t)\rangle|^2\le\bar{D}+\frac{3}{2}\int_{u_1}^tu^{2m_{k_0}}e^{2a_{k_0}u}\bar{\gamma}\,du$$
for some $\bar{D}>0$ and all $t\ge u_1$. But for any $\varepsilon>0$, there exists a $\tilde{D}>0$ such that
$$\frac{3}{2}\int_{u_1}^tu^{2m_{k_0}}e^{2a_{k_0}u}\bar{\gamma}\,du\le\tilde{D}e^{(2a_{k_0}+\varepsilon)t}.$$
So
$$E|\langle v,z(t)-Ez(t)\rangle|^2\le D_3e^{(2a_{k_0}+\varepsilon)t}$$
for all $t\ge t_0$. This completes the proof of the theorem. $\Box$
Remark (4.4): In the hyperbolic case, when no element of $\sigma(A_H)$ lies on the imaginary axis, it is easy to see that there are constants $M,\lambda>0$ such that $|Ez(t)|\ge Me^{\lambda t}$ for all $t\ge 0$ (Hirsch and Smale [33], Chapter 7 §1, Theorem 2, pp. 144-150).
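The exponential variance growth asserted in (19) can be seen explicitly in a small example. Take $B=aI+J$ with $J$ the $90^\circ$ rotation generator (eigenvalues $a\pm i$), $C=(0,1)^T$ and $v=(1,0)$; then $\langle v,e^{sB}C\rangle=-e^{as}\sin s$, so the variance is $\int_0^te^{2as}\sin^2s\,ds$, whose logarithmic growth rate tends to $2a$. The choices of $B$, $C$, $v$ and $a$ below are illustrative, not from the text.

```python
import math

# E|<v, z(t) - E z(t)>|^2 = int_0^t <v, e^{sB} C>^2 ds for dz = Bz dt + C dw.
# With B = a*I + J (J the rotation generator), C = (0,1)^T, v = (1,0):
#     <v, e^{sB} C> = -e^{a s} sin(s).
a = 0.5

def variance(t, n=20000):
    # midpoint-rule quadrature of int_0^t e^{2 a s} sin(s)^2 ds
    ds = t / n
    return sum((math.exp(a * (k + 0.5) * ds) * math.sin((k + 0.5) * ds)) ** 2 * ds
               for k in range(n))

rate = math.log(variance(8.0)) / 8.0   # log-variance growth rate at t = 8
print(abs(rate - 2 * a) < 0.1)         # -> True: rate is close to 2a, as in (19)
```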
VII Further developments, problems and conjectures
§1. Introduction
The greater part of the discussion in this chapter will be sketchy and often speculative in nature. Our purpose is to introduce the reader to some further examples and open problems in stochastic functional differential equations. The examples include a generalised Ornstein-Uhlenbeck process (§2), stochastic FDE's with discontinuous initial data (§3), stochastic integro-differential equations (§4) and stochastic FDE's with infinite delays (§5). The latter class of equations was first systematically studied by K. Itô and M. Nisio [41] in 1964.
§2. A Model for Physical Brownian Motion
The classical problem of describing the random motion of a molecule in a gas 'heat bath' goes back to Einstein [17], Ornstein and Uhlenbeck [76]. In 1966 R. Kubo [44] proposed the following modification of the Ornstein-Uhlenbeck process:
$$\left.\begin{aligned}d\xi(t)&=v(t)\,dt\\ m\,dv(t)&=-m\Big[\int_{t_0}^t\beta(t-t')v(t')\,dt'\Big]dt+\gamma(\xi(t),v(t))\,dw(t),\quad t>t_0.\end{aligned}\right\}\tag{I}$$
The molecule is of mass $m$ and moves under no external forces; $\beta$ represents a frictional (viscosity) coefficient function having compact support, and $\gamma:\mathbb{R}^3\times\mathbb{R}^3\to\mathbb{R}$ is a function giving the random gas forces on the molecule. The position and velocity of the molecule at time $t>t_0$ are denoted by $\xi(t)$, $v(t)\in\mathbb{R}^3$ respectively. Without loss of generality we may assume that $t_0=0$, $\operatorname{supp}\beta\subset[0,r]$ for some $r>0$ and $m=1$. It is clear that one needs to specify $v|J$, $J=[-r,0]$, for equations (I) to make sense. As usual $w$ stands for 3-dimensional Brownian motion.
Now (I) is a stochastic FDE. To see this note that
$$\int_0^t\beta(t-t')v(t')\,dt'=\int_0^t\beta(s)v_t(-s)\,ds,$$
and define the mappings $H:[0,\infty)\times C(J,\mathbb{R}^3)\to\mathbb{R}^3$, $\tilde{H}:[0,\infty)\times C(J,\mathbb{R}^6)\to\mathbb{R}^6$, $\rho_0:C(J,\mathbb{R}^3)\to\mathbb{R}^3$ by
$$H(t,\eta)=\begin{cases}-\int_0^t\beta(s)\eta(-s)\,ds, & 0\le t\le r\\[2pt] -\int_0^r\beta(s)\eta(-s)\,ds, & t>r,\end{cases}$$
$$\rho_0(\eta)=\eta(0),\qquad\tilde{H}(t,(\eta^1,\eta^2))=\begin{pmatrix}\rho_0(\eta^2)\\ H(t,\eta^2)\end{pmatrix}$$
for $\eta\in C(J,\mathbb{R}^3)$, $(\eta^1,\eta^2)\in C(J,\mathbb{R}^6)$. Setting
$$x(t)=\begin{pmatrix}\xi(t)\\ v(t)\end{pmatrix}\in\mathbb{R}^6,\quad t\ge -r,\qquad\tilde{w}(t)=\begin{pmatrix}w(t)\\ w(t)\end{pmatrix}\in\mathbb{R}^6,$$
it is easily seen that equation (I) is equivalent to the stochastic FDE
$$dx(t)=\tilde{H}(t,x_t)\,dt+\tilde{\gamma}(x(t))\,d\tilde{w}(t),\qquad t\ge 0\tag{II}$$
in $\mathbb{R}^6$ (with $\tilde{\gamma}$ built from $\gamma$ so that the noise enters the velocity equation only). Note that this stochastic FDE is time-dependent for $0\le t\le r$ but becomes autonomous for all $t>r$. If $\beta\in\mathcal{L}^1([0,r],\mathbb{R})$, it follows that $H$ and $\tilde{H}$ are continuous and Lipschitz in the second variable, uniformly with respect to $t\ge 0$. In fact $H(t,\cdot)$, $\tilde{H}(t,\cdot)$ are continuous linear maps with norms $\|H(t,\cdot)\|$, $\|\tilde{H}(t,\cdot)\|$ uniformly bounded in $t\ge 0$. Now $x|J$ is specified by $v|J$, and so from Theorems (II.2.1), (III.1.1) the stochastic FDE (II) has a unique Markov trajectory $\{x_t\}_{t\ge 0}$ in $C(J,\mathbb{R}^6)$ with given $v|J$. The Markov process $\{x_t\}_{t\ge 0}$ is time-homogeneous for $t\ge r$. In contrast to the classical Ornstein-Uhlenbeck process, observe here that the pair $\{(\xi(t),v(t)):t\ge r\}$ does not correspond to a Markov process on $\mathbb{R}^6$, yet the trajectory $\{(\xi_t,v_t):t\ge 0\}$ in $C(J,\mathbb{R}^6)$ does have the Markov property.
We would like to consider the velocity process $\{v(t):t\ge r\}$ in the simple case when the noise coefficient $\gamma$ is identically constant, i.e. let $\gamma(x,y)=m\gamma_0\in\mathbb{R}$ for all $x,y\in\mathbb{R}^3$. Then $v$ satisfies the autonomous stochastic FDE
$$dv(t)=H_0(v_t)\,dt+\gamma_0\,dw(t)\tag{III}$$
for $t\ge r$, where $H_0:C(J,\mathbb{R}^3)\to\mathbb{R}^3$ is given by
$$H_0(\eta)=-\int_{-r}^0\beta(-s)\eta(s)\,ds,\qquad\eta\in C(J,\mathbb{R}^3).$$
By Theorem (VI.4.1), write
$$v_t=T_{t-r}(v_r)+\int_r^tT_{t-u}\,\Delta\gamma_0\,dw(u),\qquad t\ge 2r,$$
a.s., where $\{T_t\}_{t\ge 0}$ is the semigroup of the deterministic linear drift FDE
$$dy(t)=H_0(y_t)\,dt,\qquad t\ge r.$$
Now suppose $\int_0^r\beta(s)\,ds<\pi/2r$. We show that if $\lambda\in\sigma(A_{H_0})$, the spectrum of the generator of $\{T_t\}$, then $\operatorname{Re}\lambda<0$. Write $\lambda=\lambda_1+i\lambda_2\in\sigma(A_{H_0})$ for some $\lambda_1,\lambda_2\in\mathbb{R}$. Suppose, if possible, that $\lambda_1\ge 0$. Using Hale [26] (Lemma (2.1), p. 168), $\lambda$ satisfies the characteristic equation
$$\lambda+\int_0^r\beta(s)e^{-\lambda s}\,ds=0.\tag{1}$$
Hence
$$\lambda_1+\int_0^r\beta(s)e^{-\lambda_1s}\cos\lambda_2s\,ds=0\tag{2}$$
and
$$\lambda_2-\int_0^r\beta(s)e^{-\lambda_1s}\sin\lambda_2s\,ds=0.\tag{3}$$
But from (3),
$$|\lambda_2s|\le|\lambda_2|r\le r\int_0^r\beta(s)e^{-\lambda_1s}|\sin\lambda_2s|\,ds\le r\int_0^r\beta(s)\,ds<\pi/2$$
for all $s\in[0,r]$. Therefore $\cos\lambda_2s\ge 0$ for all $s\in[0,r]$, and is positive on some open subinterval of $[0,r]$. So from (2),
$$\lambda_1=-\int_0^r\beta(s)e^{-\lambda_1s}\cos\lambda_2s\,ds<0,$$
where $\beta$ is assumed positive on a set of positive Lebesgue measure in $[0,r]$. This is a contradiction, and $\operatorname{Re}\lambda$ must be less than zero for all $\lambda\in\sigma(A_{H_0})$. Therefore, according to Corollary (VI.4.2.1), we obtain
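The spectral bound just proved implies that all solutions of the deterministic drift equation $dy(t)=H_0(y_t)\,dt$ decay to zero. This is easy to observe numerically: the sketch below integrates the equation with the constant friction kernel $\beta_0=1$ on $[0,r]$, $r=1$ (so $\int_0^r\beta=1<\pi/2r$), using an Euler step with a Riemann-sum approximation of the distributed delay. The step size, horizon and decay threshold are illustrative choices.

```python
# Euler integration of  y'(t) = -int_0^r beta(s) y(t-s) ds  with beta = 1 on
# [0, r], r = 1.  Since int_0^r beta = 1 < pi/(2r), all characteristic roots
# have negative real part, so y(t) -> 0.
beta0, r, dt = 1.0, 1.0, 0.01
n_delay = int(r / dt)
hist = [1.0] * (n_delay + 1)          # constant initial path on [-r, 0]

for _ in range(3000):                  # integrate up to t = 30
    # Riemann sum for int_0^r y(t - s) ds over the stored segment
    integral = beta0 * dt * sum(hist[-1 - j] for j in range(n_delay))
    hist.append(hist[-1] - integral * dt)

print(abs(hist[-1]) < 0.1)   # -> True: the solution has decayed
```

Replacing $\beta_0$ by a value with $\beta_0r>\pi/2r$ produces growing oscillations instead, illustrating the sharpness of the condition.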
Theorem (2.1): In the system (I) assume that $\gamma$ is constant ($=m\gamma_0$), $\beta$ has compact support in $[0,r]$, $\beta\in\mathcal{L}^1([0,r],\mathbb{R})$ and $\int_0^r\beta(s)\,ds<\pi/2r$. Then there are a sample-continuous Gaussian process $({}^\infty\xi,{}^\infty v)$ of (I) and positive real numbers $K$, $\alpha$ such that
(i) $${}^\infty\xi(t)={}^\infty\xi(r)+\int_r^t{}^\infty v(u)\,du,\quad t\ge r,\qquad{}^\infty v_t=\int_{-\infty}^tT_{t-u}\,\Delta\gamma_0\,dw(u)$$ a.s.,
(ii) for every solution $(\xi,v)$ of (I),
$$E\|\xi_t-{}^\infty\xi_t\|^2\le Ke^{-\alpha t},\qquad E\|v_t-{}^\infty v_t\|^2\le Ke^{-\alpha t}$$
for all $t\ge 2r$,
(iii) ${}^\infty v$ is stationary, and ${}^\infty\xi$ has a.a. sample paths $C^1$.
Remark: Physically speaking, the above theorem implies that the 'heat bath' will always eventually stabilize itself into a stationary Gaussian distribution for the velocity of the molecule.
§3.
Stochastic FDE's with Discontinuous Initial Data
This is a class of stochastic FDE's whose initial process has a.a. sample paths of type $\mathcal{L}^2$, allowing for a possible finite jump discontinuity at $0$. These equations were studied by T.A. Ahmed, S. Elsanousi and S.E.A. Mohammed, and can be formulated thus:
$$\left.\begin{aligned}dx(t)&=H(t,x(t),x_t)\,dt+G(t,x(t),x_t)\,dz(t),\quad t\ge 0\\ x(0)&=v\in\mathcal{L}^2(\Omega,\mathbb{R}^n)\\ x(s)&=\theta(s)\quad\text{for all }s\in[-r,0).\end{aligned}\right\}\tag{IV}$$
In (IV) the initial condition is a pair $(v,\theta)$ where $v\in\mathcal{L}^2(\Omega,\mathbb{R}^n)$ and
a € t 2(g,£ 2(JJR")). Note that here we confuse £2 with L2, the Hilbert space of all equivalence classes of (Lebesgue)square integrable maps J +R". The trajectory of (IV) is then defined as pairs {(x(t),xt):t > O} in R"xt 2(JJR"). We assume that the coefficients
are measurable with the maps H(t,.,.), G(t,.,.) globally Lipschitz on R" x t 2(JJR") having their Lipschitz constants independent oft € ~. The noise process z·:R x g +Rm is a sample continuous martingale on the filtered probability space (g,F,(Ft)t>O,P) with z(t,•) € t 2(g,Rm;Ft) for all t € ~ and satisfying McShane's Condition II(E)(i). Using the method of successive approximations (cf. Theorem (11.2.1)), it can be shown that there is a unique measurable solution x:[r,m) x g +Rn through (v,e) € i(g,JR";F0 ) x t 2(g,i(J.R");F0 ) with a continuous trajectory {(x(t),xt):t>O} adapted to (Ft)t>O (Ahmed [1]). From the point of view of approximation theory, a CauchyMaruyama scheme can be constructed for the stochastic FOE (IV) in the spirit of McShane ([53], Chapter V, §§3,4, pp. 165179). For more details on this matter see [1]. In addition we would like to suggest the following conjectures: Conjectures (i) In the stochastic FOE (IV), suppose the coefficients H,G satisfy the conditions of existence mentioned above. Let z = w, mdirnensional Brownian motion adapted to (Ft)t>O. Then the trajectory {(x(t),xt):t > 0} corresponds to a Feller process on R" x t 2(JJR"). If H, G are autonomous viz dx(t) = H(x(t),xt)dt
+
G(x(t),xt)dw(t), t
>
0,
(V)
then the above process is timehomogeneous. The transition probabilities {p(t 1,(v,n),t 2,.):0 < t 1 < t 2, v € R", n € t 2(JJR")} are given by p(t 1,(v,n),t2,B)
=
P{w :w € g, ((v,n)x(w)(t 2 ), (v,n)xt (w)) € B} 2
where B € Borel OR" x t 2(JJR")} and (v,n)x is the unique solution of (IV) through (v,n) € R" x t 2(JJR") at t = t 1• 227
(ii) Let $C_b=C_b(\mathbb{R}^n\times\mathcal{L}^2(J,\mathbb{R}^n),\mathbb{R})$ be the Banach space of all uniformly continuous and bounded real functions on $\mathbb{R}^n\times\mathcal{L}^2(J,\mathbb{R}^n)$. Define the semigroup $\{P_t\}_{t\ge 0}$ for the stochastic FDE (V) by
$$P_t(\phi)(v,\eta)=E\phi\big({}^{(v,\eta)}x(t),{}^{(v,\eta)}x_t\big),\qquad t\ge 0,\ \phi\in C_b,$$
and the shift semigroup $S_t:C_b\to C_b$, $t\ge 0$, by setting
$$S_t(\phi)(v,\eta)=\phi(v,\tilde{\eta}_t),\qquad t\ge 0,$$
for each $\phi\in C_b$. The semigroups $\{P_t\}_{t\ge 0}$ and $\{S_t\}_{t\ge 0}$ will have the same domain of strong continuity $C_b^0\subset C_b$ (cf. Theorem (IV.2.1)), but it is not clear if $C_b^0\ne C_b$ in this case (cf. Theorem (IV.2.2)). However it is easily shown that both semigroups are weakly continuous. Let $A$, $S$ be their respective weak infinitesimal generators (cf. IV §3); then we conjecture the following analogue of Theorem (IV.3.2): Suppose $\phi\in\mathcal{D}(S)$ is $C^2$, with $D\phi$ globally bounded and $D^2\phi$ globally bounded and globally Lipschitz. Then $\phi\in\mathcal{D}(A)$ and
$$A(\phi)(v,\eta)=S(\phi)(v,\eta)+D_1\phi(v,\eta)(H(v,\eta))+\frac{1}{2}\sum_{j=1}^mD_1^2\phi(v,\eta)\big(G(v,\eta)(e_j),G(v,\eta)(e_j)\big)$$
where $D_1\phi$, $D_1^2\phi$ denote the partial derivatives of $\phi$ with respect to the first variable and $\{e_j\}_{j=1}^m$ is any basis for $\mathbb{R}^m$.
Remark: In contrast with the non-Hilbertable Banach space $C(J,\mathbb{R}^n)$, the state space $\mathbb{R}^n\times\mathcal{L}^2(J,\mathbb{R}^n)$ carries a natural real Hilbert space structure, and so $C_b(\mathbb{R}^n\times\mathcal{L}^2(J,\mathbb{R}^n),\mathbb{R})$ contains a large class of smooth (non-zero) functions with bounded supports. By a result of E. Nelson (Bonic and Frampton [6]), a differentiable function on $C(J,\mathbb{R}^n)$ with bounded support must be identically zero.
§4. Stochastic Integro-Differential Equations
In the stochastic integro-differential equation (SIDE)
$$\left.\begin{aligned}dx(t)&=\Big\{\int_{-r}^0h(s,x(t+r(s)))\,ds\Big\}dt+\Big\{\int_{-r}^0g(s,x(t+d(s)))\,ds\Big\}dz(t),\quad t\ge 0\\ x(t)&=\theta(\cdot)(t),\quad t\in J=[-r,0],\end{aligned}\right\}\tag{VI}$$
$z$ is a continuous $\mathbb{R}^m$-valued martingale on a filtered probability space $(\Omega,F,(F_t)_{t\ge 0},P)$, satisfying the usual conditions of McShane (Conditions E(i) of Chapter II). The coefficients $h:J\times\mathbb{R}^n\to\mathbb{R}^n$, $g:J\times\mathbb{R}^n\to L(\mathbb{R}^m,\mathbb{R}^n)$ are continuous maps which are globally Lipschitz in the second variable, uniformly with respect to the first. Denote their common Lipschitz constant by $L>0$. The delay processes $r,d:J\times\Omega\to J$ are assumed to be $(\text{Borel }J\otimes F_0,\text{Borel }J)$-measurable, and the initial condition $\theta\in\mathcal{L}^2(\Omega,C(J,\mathbb{R}^n);F_0)$.
To establish the existence of a unique solution we shall first cast the stochastic IDE (VI) into the general format of Chapter II §1. Indeed, let us define the maps $\tilde{h}:\mathcal{L}^2(\Omega,C)\to\mathcal{L}^2(\Omega,\mathbb{R}^n)$, $\tilde{g}:\mathcal{L}^2(\Omega,C)\to\mathcal{L}^2(\Omega,L(\mathbb{R}^m,\mathbb{R}^n))$ as follows:
$$\tilde{h}(\psi)(\omega)=\int_{-r}^0h(s,\psi(\omega)(r(s,\omega)))\,ds,\qquad\tilde{g}(\psi)(\omega)=\int_{-r}^0g(s,\psi(\omega)(d(s,\omega)))\,ds$$
for all $\psi\in\mathcal{L}^2(\Omega,C)$, a.a. $\omega\in\Omega$. Observe now that (VI) becomes the stochastic FDE
$$dx(t)=\tilde{h}(x_t)\,dt+\tilde{g}(x_t)\,dz(t),\qquad t\ge 0.$$
Note also that the coefficients $\tilde{h}$, $\tilde{g}$ are globally Lipschitz, because if $\psi_1,\psi_2\in\mathcal{L}^2(\Omega,C)$, then
$$\|\tilde{h}(\psi_1)-\tilde{h}(\psi_2)\|^2_{\mathcal{L}^2(\Omega,\mathbb{R}^n)}=\int_\Omega\Big|\int_{-r}^0\{h(s,\psi_1(\omega)(r(s,\omega)))-h(s,\psi_2(\omega)(r(s,\omega)))\}\,ds\Big|^2dP$$
$$\le rL^2\int_\Omega\int_{-r}^0|\psi_1(\omega)(r(s,\omega))-\psi_2(\omega)(r(s,\omega))|^2\,ds\,dP\le r^2L^2\|\psi_1-\psi_2\|^2_{\mathcal{L}^2(\Omega,C)}.$$
A similar inequality holds for $\tilde{g}$. To check that $\tilde{h}$, $\tilde{g}$ satisfy the adaptability condition E(iii) of Chapter II §1, notice that the processes $(s,\omega)\mapsto h(s,\psi(\omega)(r(s,\omega)))$, $(s,\omega)\mapsto g(s,\psi(\omega)(d(s,\omega)))$ are $(\text{Borel }J\otimes F_t)$-measurable whenever $\psi\in\mathcal{L}^2(\Omega,C;F_t)$, for $t\ge 0$. Thus by Theorem (II.2.1) the stochastic IDE (VI) has a unique sample-continuous trajectory $\{x_t:t\ge 0\}$ in $C(J,\mathbb{R}^n)$ through $\theta$.
The trajectory field of (VI) describes a time-homogeneous Feller process on $C$ if $z=w$, $m$-dimensional Brownian motion, and the delay processes $r$, $d$ are just (deterministic) continuous functions $r,d:J\to J$. According to Theorem (IV.3.2), the weak generator $A$ of the associated semigroup $\{P_t\}_{t\ge 0}$ is given by the formula
$$A(\phi)(\eta)=S(\phi)(\eta)+D\phi(\eta)\Big(\Big\{\int_{-r}^0h(s,\eta(r(s)))\,ds\Big\}\chi_{\{0\}}\Big)+\frac{1}{2}\sum_{j=1}^mD^2\phi(\eta)\Big(\Big\{\int_{-r}^0g(s,\eta(d(s)))\,ds\Big\}(e_j)\chi_{\{0\}},\Big\{\int_{-r}^0g(s,\eta(d(s)))\,ds\Big\}(e_j)\chi_{\{0\}}\Big)$$
where $\phi\in C_b=C_b(C,\mathbb{R})$ satisfies the conditions of Theorem (IV.3.2), and the notation is the same as that of IV §3.
If the delay processes $r,d:J\times\Omega\to J$ are assumed to be independent of the Brownian motion $w$, we believe that the trajectory field of the stochastic IDE (VI) corresponds to a random family of Markov processes on $C(J,\mathbb{R}^n)$, in very much the same spirit as that of Chapter III §3 (Theorem (VI.3.1) for stochastic DDE's).
For deterministic delays $r,d:J\to J$ and Brownian noise $z=w$, the coefficients $H:C\to\mathbb{R}^n$, $G:C\to L(\mathbb{R}^m,\mathbb{R}^n)$,
$$H(\eta)=\int_{-r}^0h(s,\eta(r(s)))\,ds,\qquad G(\eta)=\int_{-r}^0g(s,\eta(d(s)))\,ds,\qquad\eta\in C,$$
are clearly globally Lipschitz, and so all the regularity properties of Chapter V §4 hold in this case, viz. Theorems (V.4.2), (V.4.3), (V.4.4), (V.4.6) and Corollaries (V.4.4.1), (V.4.7.1). It is not clear, however, if the trajectory field admits versions with continuous or locally bounded sample functions.
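The global Lipschitz property of the distributed-delay coefficient $H(\eta)=\int_{-r}^0h(s,\eta(r(s)))\,ds$ (with constant $rL$) is easy to see numerically. The sketch below evaluates $H$ by a Riemann sum for the illustrative choices $h(s,x)=\sin x$ (so $L=1$), $r(s)=s$ and $r=1$, and checks the bound on two sample paths; none of these specific choices come from the text.

```python
import math

# H(eta) = int_{-r}^0 h(s, eta(r(s))) ds for h(s,x) = sin(x), r(s) = s, r = 1,
# approximated by a midpoint Riemann sum.
r, n = 1.0, 1000
ds = r / n

def H(eta):
    return sum(math.sin(eta(-r + (k + 0.5) * ds)) * ds for k in range(n))

eta1 = lambda s: s            # two sample paths in C([-1, 0], R)
eta2 = lambda s: s + 0.3      # uniformly 0.3 away from eta1

# |H(eta1) - H(eta2)| <= r * L * ||eta1 - eta2||_C  with L = 1 for sin
lhs = abs(H(eta1) - H(eta2))
print(lhs <= r * 1.0 * 0.3 + 1e-9)   # -> True
```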
Infinite Delays
The problem of determining sufficient conditions for the existence (and 230
uniqueness) of stationary solutions to stochastic FOE's was first considered by K. Ito and M. Nisio ([41], 1964) in the context of an infinite retardation time (r =co). We quote here solll! of their results without proofs. For further details the reader should consult [41]. For simplicity, we only consider the onedimensional case. The state space is the Polish space C = C((.co,O]JR} of all real~alued continuous functions (co,O] .... R furnished with the lll!tri.c co llnt;lln p(n,E;) = ~ 2n n=1 1 + lin t;ll n where llnt;lln = FOE
ln(s)  E;(s) I, n, E;
sup
n<S
dx(t) = H(xt)dt
+
G(xt)dw(t),
t
>
E
C. Consider the stochastic
0.
(VII)
for a onedimensional Brownian motion won a filtered probability space (n,F,(Ft)t>O,P) and continuous coeffici.ents H,G:C .... :R. Assume the following~
Theorem (5.1) (Itô-Nisio): Suppose that
(i) equation (VII) has a solution ⁰x: ℝ × Ω → ℝ such that ⁰x(s) = 0 a.s. for all s ≤ 0, and there is a constant a > 0 with E|⁰x(t)|⁴ ≤ a for all t ≥ 0;
(ii) there is a number M > 0 and a finite positive measure μ on (-∞,0] such that

|H(η)| + |G(η)| ≤ M + ∫_{-∞}^{0} |η(s)| dμ(s)

for all η ∈ C.
Then the stochastic FDE (VII) has a stationary solution.

In [41] several conditions on the coefficients H, G are given in order to guarantee the existence of a stationary solution, e.g.

Theorem (5.2) (Itô-Nisio): Under the assumptions
(i) H is of the form H(η) = H₀(η)η(0) + H₁(η), η ∈ C, for some continuous H₀, H₁: C → ℝ;
(ii) G is continuous;
(iii) there are constants m, M, M₁, M₂ > 0 and finite positive measures μ₁, μ₂ on (-∞,0] such that for all η ∈ C

-M ≤ H₀(η) ≤ -m,
|H₁(η)|⁴ ≤ M₁ + ∫_{-∞}^{0} |η(s)|⁴ dμ₁(s),
|G(η)|⁴ ≤ M₂ + ∫_{-∞}^{0} |η(s)|⁴ dμ₂(s),

the stochastic FDE (VII) has a stationary solution.

Theorem (5.3) (Itô-Nisio): Suppose H, G can be written in the form

H(η) = H₀(η)η(0) + H₁(η),  G(η) = G₀(η)η(0) + G₁(η),  η ∈ C,

with H₀, H₁, G₀, G₁: C → ℝ all continuous and bounded on C. If there is a positive constant m (> 0) such that

2H₀(η) + |G₀(η)|² ≤ -m

for all η ∈ C, then the stochastic FDE (VII) has a stationary solution.

Now in (VII) assume that the coefficients H, G are linear, or rather affine, of the form

H(η) = M₁ + ∫_{-∞}^{0} η(s) dμ₁(s),  G(η) = M₂ + ∫_{-∞}^{0} η(s) dμ₂(s),  η ∈ C,
for M₁, M₂ ∈ ℝ, and μ₁, μ₂ finite signed measures on (-∞,0]. Let |μ₁|, |μ₂| denote the total variation measures of μ₁, μ₂. Define the constants c, c₁, c₂ ≥ 0 by

c = -(jump of μ₁ at 0),
c₁ = |μ₁|((-∞,0)),
c₂ = |μ₂|((-∞,0]).

Then the stochastic FDE (VII) will have a stationary solution in each of the following cases:
(i) c₁ + 2c₂ < c, provided μ₁, μ₂ have compact supports;
(ii) c₁ + (c₂² + 4c₁c₂²)^{1/2} < c; in this case the stationary solution is unique among those with sup_{t∈ℝ} E|x(t)|² < ∞ ([41] §11, pp. 47-56).
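For a concrete feel (our sketch, not from [41]), take μ₁ = -a δ₀ + b δ₋₁ and G ≡ σ constant, so that c = a, c₁ = b and c₂ = 0; the conditions above then reduce to c₁ < c, i.e. b < a. The Euler-Maruyama discretization below simulates the resulting delay equation dx(t) = (-a x(t) + b x(t-1))dt + σ dw(t); all names are our own.

```python
import random

def euler_delay(a=1.0, b=0.5, sigma=0.0, dt=0.01, T=5.0, seed=0):
    """Euler-Maruyama sketch for dx(t) = (-a*x(t) + b*x(t-1))*dt + sigma*dw(t),
    with constant initial path x(s) = 1 for -1 <= s <= 0 (delay length 1).
    Returns the terminal value x(T)."""
    rng = random.Random(seed)
    lag = int(round(1.0 / dt))       # grid points per unit delay
    path = [1.0] * (lag + 1)         # discretized initial segment on [-1, 0]
    for _ in range(int(round(T / dt))):
        x_now, x_delayed = path[-1], path[-1 - lag]
        dw = rng.gauss(0.0, dt ** 0.5)
        path.append(x_now + (-a * x_now + b * x_delayed) * dt + sigma * dw)
    return path[-1]
```

With σ = 0 and 0 < b < a the deterministic solution decays toward 0, consistent with stability of the zero solution; with σ > 0 the simulated paths fluctuate about a statistical equilibrium.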
Taking the further special case G(η) = 1 for all η ∈ C, i.e. M₂ = 1 and μ₂ = 0, Itô and Nisio ([41], §12, pp. 51-56) also proved the existence of a unique stationary solution to

dx(t) = H(x_t)dt + dw(t),  t > 0,

under the conditions μ₁((-∞,0]) < 0 and ∫_{-∞}^{0} |s| d|μ₁|(s) < 1. Note here that in the case when μ₁ has compact support (i.e. a finite retardation time) the last two conditions imply that the characteristic equation

β(λ) = λ - ∫_{-∞}^{0} e^{λs} dμ₁(s) = 0

of H has all its roots to the left of the imaginary axis (Lemma (12.1), p. 54, in [41]). Therefore this last result of [41] is indeed a special case of our results in VI §4, viz. Corollaries (VI.4.2.1), (VI.4.2.2). Finally, in view of our analysis in Chapter III and the fact that C is a Polish space, it is tempting to believe that for the general non-linear stochastic FDE (VII) the trajectory field {ηx_t: t ≥ 0, η ∈ C} also describes a time-homogeneous Markov process on C.
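To illustrate the root condition (our example, not taken from [41]), one may take μ₁ = -a δ₀ + b δ₋₁ with 0 ≤ b < min(a,1). Then

μ₁((-∞,0]) = b - a < 0,  ∫_{-∞}^{0} |s| d|μ₁|(s) = b < 1,

and the characteristic equation becomes

β(λ) = λ + a - b e^{-λ} = 0.

If some root had Re λ ≥ 0, then |λ + a| = b|e^{-λ}| ≤ b, while |λ + a| ≥ Re λ + a ≥ a > b, a contradiction; so all roots lie strictly to the left of the imaginary axis, as the quoted lemma asserts.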
References
[1] Ahmed, T.A., Stochastic Functional Differential Equations with Discontinuous Initial Data, M.Sc. Thesis, University of Khartoum, Khartoum, Sudan (1983).
[2] Arnold, L., Stochastic Differential Equations: Theory and Applications, John Wiley and Sons, Inc., New York (1974).
[3] Banks, H.T., The Representation of Solutions of Linear Functional Differential Equations, J. Differential Equations, 5 (1969), 399-410.
[4] Bellman, R. and Cooke, K.L., Differential-Difference Equations, Academic Press, New York-London (1963).
[5] Bismut, J.-M., A Generalized Formula of Itô and Some Other Properties of Stochastic Flows, Z. Wahr. verw. Geb., 55 (1981), 331-350.
[6] Bonic, R. and Frampton, J., Differentiable Functions on Certain Banach Spaces, Bull. Amer. Math. Soc. 71 (1965), 393-395.
[7] Chung, K.L., A Course in Probability Theory, Academic Press, New York-London (1974).
[8] Chung, K.L., Lectures from Markov Processes to Brownian Motion, Springer-Verlag, New York-Heidelberg-Berlin (1982).
[9] Cohn, D.L., Measurable Choice of Limit Points and the Existence of Separable and Measurable Processes, Z. Wahr. verw. Geb. 22 (1972), 161-165.
[10] Courant, R. and Hilbert, D., Methods of Mathematical Physics Vol. 1, Interscience Publishers, Inc., New York (1953).
[11] Dieudonné, J.A., Foundations of Modern Analysis, Academic Press (1960).
[12] Doss, H., Liens entre équations différentielles stochastiques et ordinaires, Ann. Inst. Henri Poincaré, Vol. XIII, no. 2 (1977), 99-125.
[13] Dudley, R.M., Sample Functions of the Gaussian Process, Ann. Prob. 1 (1973), 66-103.
[14] Dudley, R.M., The Sizes of Compact Subsets of Hilbert Space and Continuity of Gaussian Processes, J. Functional Analysis 1 (1967), 290-330.
[15] Dunford, N. and Schwartz, J.T., Linear Operators, Part I: General Theory, Interscience Publishers, New York (1958).
[16] Dynkin, E.B., Markov Processes Vols. I, II, Springer-Verlag, Berlin (1965).
[17] Einstein, A., Investigations on the Theory of Brownian Movement, Methuen, London (1926).
[18] El'sgol'tz, L.E., Introduction to the Theory of Differential Equations with Deviating Arguments, Holden-Day, Inc. (1966).
[19] Elworthy, K.D., Stochastic Differential Equations on Manifolds, LMS Lecture Note Series 70, Cambridge University Press, Cambridge (1982).
[20] Feldman, J., Sets of Boundedness and Continuity for the Canonical Normal Process, Proc. Sixth Berkeley Symp. Math. Statist. Prob. 2, University of California Press (1971), 357-368.
[21] Fernique, X., Régularité de Processus Gaussiens, Invent. Math. 12 (1971), 304-320.
[22] Friedman, A., Stochastic Differential Equations and Applications Vols. 1, 2, Academic Press, New York-San Francisco-London (1975).
[23] Garsia, A., Rodemich, E., and Rumsey (Jr.), H., A Real Variable Lemma and the Continuity of Paths of Some Gaussian Processes, Indiana University Math. J., 20 (1970), 565-578.
[24] Gihman, I.I. and Skorohod, A.V., Stochastic Differential Equations, Springer-Verlag, New York (1973).
[25] Halanay, A., Differential Equations: Stability, Oscillations, Time-Lags, Academic Press (1966); Teoria Calitativa a Ecuatiilor Diferentiale (Romanian), Editura Acad. Rep. Populare Romine (1963).
[26] Hale, J.K., Theory of Functional Differential Equations, Springer-Verlag, New York-Heidelberg-Berlin (1977).
[27] Hale, J.K., Linear Functional Differential Equations with Constant Coefficients, Cont. Diff. Eqns. 2 (1963), 291-319.
[28] Hale, J.K., Sufficient Conditions for Stability and Instability of Autonomous Functional Differential Equations, J. Differential Equations 1 (1965), 452-482.
[29] Hale, J.K. and Meyer, K.R., A Class of Functional Differential Equations of Neutral Type, Mem. Amer. Math. Soc. 76 (1967).
[30] Halmos, P.R., Measure Theory, D. Van Nostrand Company, Inc., Toronto-New York-London (1950).
[31] Hida, T., Brownian Motion, Springer-Verlag, New York-Heidelberg-Berlin (1980).
[32] Hirsch, M.W., Differential Topology, Graduate Texts in Mathematics 33, Springer (1976).
[33] Hirsch, M.W. and Smale, S., Differential Equations, Dynamical Systems and Linear Algebra, Academic Press, New York-San Francisco-London (1974).
[34] Hoffmann-Jørgensen, Existence of Measurable Modifications of Stochastic Processes, Z. Wahr. verw. Geb. 25 (1973), 205-207.
[35] Ikeda, N. and Watanabe, S., Stochastic Differential Equations and Diffusion Processes, North Holland-Kodansha, Amsterdam-Tokyo (1981).
[36] Itô, K., On Stochastic Differential Equations, Mem. Amer. Math. Soc. 4 (1951).
[37] Itô, K., On Stochastic Differential Equations on a Differentiable Manifold, Nagoya Math. J. 1 (1950), 35-47.
[38] Itô, K., On a Formula Concerning Stochastic Differentials, Nagoya Math. J. 3 (1951), 55-65.
[39] Itô, K., Stochastic Integral, Proc. Imp. Acad. Tokyo, 20 (1944), 519-524.
[40] Itô, K. and McKean, H.P., Diffusion Processes and Their Sample Paths, Springer-Verlag, Berlin (1965).
[41] Itô, K. and Nisio, M., On Stationary Solutions of a Stochastic Differential Equation, J. Math. Kyoto University, 4-1 (1964), 1-75.
[42] Jones, G.S., Asymptotic Fixed Point Theorems and Periodic Solutions of Functional Differential Equations, Cont. Diff. Eqns. 2 (1963), 385-405.
[43] Krasovskii, N., Stability of Motion, Moscow (1959); translated by Stanford University Press (1963).
[44] Kubo, R., The Fluctuation-Dissipation Theorem and Brownian Motion, in Many-Body Theory, edited by R. Kubo, Syokabo and Benjamin (1966), 1-16.
[45] Kunita, H., On the Decomposition of Solutions of Stochastic Differential Equations, Proc. Durham LMS Symposium on Stochastic Integrals (1980), Lecture Notes in Mathematics 851, Springer-Verlag, Berlin-Heidelberg-New York (1981), 213-255.
[46] Kunita, H., On Backward Stochastic Differential Equations, to appear in Stochastics (1981).
[47] Lang, S., Differential Manifolds, Addison-Wesley Publishing Company, Inc. (1972).
[48] Lidskii, E.A., Stability of Motions of a System with Random Retardations, Differentsial'nye Uravneniya, Vol. 1, No. 1 (1965), 96-101.
[49] Mallet-Paret, J., Generic and Qualitative Properties of Retarded Functional Differential Equations, Meeting Func. Diff. Eqns. Braz. Math. Soc., São Carlos, July 1975.
[50] Mallet-Paret, J., Generic Properties of Retarded Functional Differential Equations, Bull. Amer. Math. Soc. 81 (1975), 750-752.
[51] Malliavin, P., Stochastic Calculus of Variation and Hypoelliptic Operators, Proc. Intern. Symp. Stoch. Diff. Eqns., Kyoto 1976, edited by K. Itô, Wiley, Tokyo-New York (1978), 195-263.
[52] McKean, H.P., Stochastic Integrals, Academic Press, New York (1969).
[53] McShane, E.J., Stochastic Calculus and Stochastic Models, Academic Press, New York (1974).
[54] Métivier, M. and Pellaumail, J., Stochastic Integration, Academic Press, London-New York (1980).
[55] Meyer, P.A., Un Cours sur les Intégrales Stochastiques, Séminaire de Probabilités X, Proceedings 1974-75, edited by P.A. Meyer, Lecture Notes in Mathematics No. 511, Springer-Verlag, Berlin-Heidelberg-New York (1976).
[56] Mishkis, A.D., General Theory of Differential Equations with a Retarded Argument, Amer. Math. Soc. Transl. No. 55 (1951).
[57] Mohammed, S.E.A., Retarded Functional Differential Equations: A Global Point of View, Research Notes in Mathematics 21, Pitman Books Limited, London-San Francisco-Melbourne (1978).
[58] Mohammed, S.E.A., Stochastic Functional Differential Equations and Markov Processes I, II, School of Mathematical Sciences, University of Khartoum, Khartoum, Sudan (1978) (Preprints).
[59] Mohammed, S.E.A., Generators of Stochastic Functional Differential Equations, School of Mathematical Sciences, University of Khartoum, Khartoum, Sudan (1980) (Preprint).
[60] Mohammed, S.E.A., The Infinitesimal Generator of a Stochastic Functional Differential Equation, Proceedings of the Seventh Conference on Ordinary and Partial Differential Equations, Dundee, Scotland, March 29-April 2, 1982; Lecture Notes in Mathematics 964, Springer-Verlag, Berlin-Heidelberg-New York (1982).
[61] Mohammed, S.E.A., Scheutzow, M. and Weizsäcker, H.v., Growth of the Solutions of Stochastic Delay Equations on Certain Subspaces of the State Space (Preprint).
[62] Nussbaum, R., Some Asymptotic Fixed-Point Theorems, Trans. Amer. Math. Soc. 171 (1972), 349-375.
[63] Nussbaum, R., Periodic Solutions of Some Non-Linear Autonomous Functional Differential Equations, Ann. Mat. Pura Appl. 101 (1974), 263-306.
[64] Oliva, W.M., Functional Differential Equations on Compact Manifolds and an Approximation Theorem, J. Differential Equations 5 (1969), 483-496.
[65] Oliva, W.M., Functional Differential Equations: Generic Theory, Proc. Int. Symp. Diff. Eqns. Dyn. Syst., Brown University, August 1974; Dynamical Systems: An International Symposium, Vol. 1, Academic Press (1976), 195-209.
[66] Parthasarathy, K.R., Probability Measures on Metric Spaces, Academic Press, New York-London (1967).
[67] Rao, M.M., Foundations of Stochastic Analysis, Academic Press, New York (1981).
[68] Riesz, F. and Sz.-Nagy, B., Functional Analysis, translated by L.F. Boron, Frederick Ungar Publishing Co., New York (1978).
[69] Scheutzow, M., Qualitative Behaviour of Stochastic Delay Equations with a Bounded Memory (to appear in Stochastics) (1982).
[70] Scheutzow, M., Qualitatives Verhalten der Lösungen von eindimensionalen nichtlinearen stochastischen Differentialgleichungen mit Gedächtnis, Ph.D. Thesis, Kaiserslautern (1982).
[71] Schwartz, L., Radon Measures on Arbitrary Topological Spaces and Cylindrical Measures, Tata Institute of Fundamental Research, Oxford University Press (1973).
[72] Stricker, C. and Yor, M., Calcul stochastique dépendant d'un paramètre, Z. Wahr. verw. Geb., 45 (1978), 109-133.
[73] Stroock, D.W. and Varadhan, S.R.S., Multidimensional Diffusion Processes, Springer-Verlag, Berlin-Heidelberg-New York (1979).
[74] Sussmann, H.J., On the Gap between Deterministic and Stochastic Ordinary Differential Equations, Ann. Prob., 6, 1 (1978), 19-41.
[75] Trèves, F., Topological Vector Spaces, Distributions and Kernels, Academic Press, New York (1967).
[76] Uhlenbeck, G.E. and Ornstein, L.S., On the Theory of Brownian Motion, Physical Review 36 (1930), 823-841.
[77] Ylinen, K., On Vector Bimeasures, Ann. Mat. Pura Appl. (4), 117 (1978), 115-138.
[78] Yosida, K., Functional Analysis, Springer-Verlag, Berlin-Heidelberg-New York (1971).
[79] Zabczyk, J., Linear Stochastic Systems in Hilbert Spaces: Structural Properties and Limit Behaviour, Polish Academy of Sciences, Preprint No. 236.
Index
Absorbing state 111
Adapted 14, 30
Adjoint semigroup 69
α-Hölder continuous (iii); norm 114
Algebraic tensor product 8
Almost sure convergence 5; linearity 190
Approximation of stochastic DDE's 172
Asymptotic behaviour 165; fixed point theorems 236; stochastic stability 184
Backward stochastic DE's 237
Banach space (ii)
Banach-space valued random variable 4
Bilinear pairing 2
Bochner integration 4; integrable random variable 4
Borel-Cantelli lemma 6
Borel measurable version (iii); σ-algebra (iv), 2
Brownian filtration 46; motion, one-dimensional (i), 22; m-dimensional 22
Brownian σ-algebra 169; time 122
Bump function 99
Cauchy-Maruyama scheme 227
Cᵖ-bounded function 98
Characteristic equation (i); function 49
Chapman-Kolmogorov identity 19
Chebyshev's inequality 6
Coefficient process (iii), 31
Compact Hausdorff space 1; map 121
Compactifying version 113
Compactness in probability 155
Complete filtered probability space 28
Completion of a probability space 1
Condition (D) 41; (DA) 94; (E) 32
Conditional expectation 7; independence 7; probability 7
Configuration space 31
Continuation property (ii); of trajectories 40
Continuity of the semigroup 71; in probability 148
Continuous bilinear form 12; bilinear maps 12; dependence (iii); Feller process (ii); function (ii); linear functional 2
Continuous path (iii)
Contraction semigroup 20, 67
Controllable 218
Convergence in Lᵏ 6; in probability 5
Convolution 99
Covariance 21; function 92
Cylinder sets 23
Decomposition of solutions 236
Delay (i); processes 229; σ-algebra 170
Delayed diffusions (iii)
Deterministic delay equation (i); memory map 31
Deviating argument 235
Diffeomorphism (iii), 117
Differential system (i); difference equations 234; notation 70
Differentiation of stochastic integral with respect to parameters 196
Diffusion coefficient (iv); RFDE 46
Dirac measure 77
Direct sum 79
Discontinuous initial data (v), 226
Discrete-time Gaussian system 21
Distribution 59
Distributional regularity (iv)
Domain of strong continuity (iv), 76
Dominated convergence 6
Doob's inequality 17
Drift coefficient (iv); RFDE 46
Dunford-Schwartz (DS) integral 8, 10
(DS)-integrable 9
Eigenfunctions 92
Eigenvalues 92
Eigenvalue problem 13
Erratic behaviour 144
Estimates on higher order moments 150
Euclidean inner product 218; norm 23; space (i)
Evaluation map 30
Existence (iii), 33
Expectation 4
Feller process (ii), 20
Filtration 14
Filtered probability space (iii), 14
Finite-dimensional distributions 15; jump discontinuity 79; memory (v)
First hitting time 151
Fluctuation-dissipation 236
Forced linear system 191
Formal adjoint 206
Fréchet differentiability 41
Frictional coefficient function 223
Frobenius condition (Fr) (iv), 114
Functional calculus 185
Gateaux differentiable 42
Gaussian distribution 21
Gaussian process 21; random field 21; random variable 21; system 21
Generalized eigenspace 191; Ornstein-Uhlenbeck process 223
Generic conditions 69
Globally asymptotically stable 216; stochastically stable 165
Globally Lipschitz 46
Group of diffeomorphisms 117; property 119
Growth of solutions 238
Hausdorff topological space 2
Heat bath 223
Hyperbolic case 222
Hypotheses (A) 58; (M) 46
Increments of Brownian motion 23
Independence of events 6; random variables 6; sub-σ-algebras 6
Indicator function 49
Infinite memory (or delay) (v), 239
Infinitesimal generator, strong 24; weak 71
Initial data 31
Initial path (ii); process (i)
Invariant Dirac measures 112; Gaussian measure 165
Invariant probability measure 69
Isonomous 15
Itô belated integral 26
Itô calculus (i)
Itô's formula 126
Itô integral 27
Kolmogorov-Bochner theorem 147
Kolmogorov-Totoki theorem 147
Kronecker delta 146
Laplacian 24
Law of the iterated logarithm 23
L²-continuous process 34
Linear drift 165; FDE's with white noise 191; growth condition 123; maps (iv)
Linear stochastic DDE 144
Linearity in probability 28, 190
Lipschitz coefficients (iv)
Local uniqueness 150
Localization technique 143
Locally bounded (iii); compactifying version (iv)
L¹-seminorm; L²-seminorm (iii)
μ-almost everywhere convergence 9
Markov behaviour (iii); process 19; property (ii), 19, 51; trajectories 46
Martingale 16; inequality 35
McShane belated integral 31; type noise (iii)
Mean 21
Measure 1
Measure, vector-valued 1; finite 1; positive 1; signed 1
Measurable space
Measurable version 15, 148
Mercer's theorem 13
Mesh 24
μ-essentially bounded 10
Minimal representation 98
μ-measurable 9
Neutral FDE's 235
Noise (iii); coefficient 224
Noise process 31
Non-existence of continuous versions 144; locally bounded versions 145; measurable linear versions 148
Non-negative definite kernel 92
One-dimensional polynomial delay equation 189; quadratic delay equation 186
One-parameter semigroup (iv)
Ordinary diffusion coefficient (iv)
Oscillations 235
Parameter space 22
Partition 24; belated 24; Cauchy 24; of unity 53
Periodic family of Gaussian measures 165
Periodicity in distribution 209
Physical Brownian motion (v), 223
Piecewise linear approximations 187; path 100
Polish space 2
Position 223
Probability measure
Probability space (i)
Product probability measure 148
Progressively measurable 16
Prohorov's theorem 155
Pseudo-metric 5
Quasitame function (iv), 105
Random coefficients 33; delays 167; family of Feller processes 169, 171; field 15; gas forces 223; infinitesimal generators 185; RFDE 122; retardations 237; semigroups 185; transition probabilities 171
Regular measure 2
Regularity properties 113; in probability 113, 149
Resolvent 78
Retardation (v)
Retarded FDE's (v)
Riemann-Stieltjes integral 35; sums 35
Riemann sum 24
Riesz representation theorem 11
Right continuous 16
σ-additive 1
σ-algebra 1
Sample function (iii); regularity (iv)
Semigroup property 68
Semimartingale 143
Semiflow (ii)
Semivariation 9
Separable martingale 32; measurable version 163
Shift 71; semigroup 78
Signed bimeasure 12
Simple delay 169
Simple function 8
Slice (ii)
Solution process 31
Splitting of C 191
Stable projection 208; subspace 165
State space 15
Stationary transition probabilities 23
Stationary solutions (v), 231
Stochastic DDE (i), 167; dynamical system (i); memory map 31; ODE's (v), 49, 165
Stochastic process (i); a.s. continuous 14; continuous 14; locally bounded 14; of class Lᵏ 14; square-integrable 14; vector-valued 14; with continuous sample functions 14
Stochastic indefinite integral 26; integral (McShane) 24; integro-differential equations (v), 228; RFDE (i), 31; variation of parameters 194
Stopping time 151
Stricker-Yor lemma 15
Strong continuity (iv), 20; dual 2; infinitesimal generator 166
Submartingale 17
Successive approximation 36
Supremum norm (ii)
Symmetric kernel 13; continuous 13; positive 13
Tame function 98
Taylor's theorem 95
Tight measure 2
Time-homogeneous (ii), 20, 64; lags 235
Tonelli's theorem 183
Total variation measure 3; norm 77
Trajectories (ii)
Trajectory field (ii), 111
Transition probabilities 19
Uniform Lipschitz constant 33
Uniformly continuous (iv); Lipschitz 32
Uniformly tight 155
Uniqueness (iii), 33
Unstable stochastic ODE 206; subspace 165
Vague topology 3
Vector fields 118
Velocity process 224
Version (iii); Borel measurable 15; discontinuous 145; nonlinear measurable 148; smooth (iii)
Weak continuity of linear operators 192; property (W1) 79; property (W2) 82
Weak convergence in Banach space 11; in C_b 77
Weak derivative 78; generator (iv); sequential compactness 11
Weak* topology 3
Weak (or narrow) topology 3
Weak topology on Banach space 11; on C_b 77
Weakly closed 78
Weakly compact map 11; complete 12; continuous extensions 79; continuous semigroup 20; dense (iv); measurable 4
White noise (v)
Wiener process 22; measure 23