LONDON MATHEMATICAL SOCIETY STUDENT TEXTS
Managing Editor: Professor E.B. Davies, Department of Mathematics, King's College, Strand, London WC2R 2LS, England

1 Introduction to combinators and lambda-calculus, J.R. HINDLEY & J.P. SELDIN
2 Building models by games, WILFRID HODGES
3 Local fields, J.W.S. CASSELS
4 An introduction to twistor theory, S.A. HUGGETT & K.P. TOD
5 Introduction to general relativity, L. HUGHSTON & K.P. TOD
6 Lectures on stochastic analysis: diffusion theory, DANIEL W. STROOCK
London Mathematical Society Student Texts. 6
Lectures on Stochastic Analysis: Diffusion Theory DANIEL W. STROOCK Massachusetts Institute of Technology
The right of the University of Cambridge to print and sell all manner of books was granted by Henry VIII in 1534. The University has printed and published continuously since 1584.
CAMBRIDGE UNIVERSITY PRESS Cambridge
London New York New Rochelle Melbourne Sydney
CAMBRIDGE UNIVERSITY PRESS Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, Sao Paulo, Delhi
Cambridge University Press The Edinburgh Building, Cambridge CB2 8RU, UK Published in the United States of America by Cambridge University Press, New York www.cambridge.org Information on this title: www.cambridge.org/9780521333665
© Cambridge University Press 1987
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 1987 Re-issued in this digitally printed version 2008
A catalogue record for this publication is available from the British Library
Library of Congress Cataloguing in Publication data
Stroock, Daniel W.
Lectures on stochastic analysis
(London Mathematical Society Student Texts; 6)
Includes index.
1. Diffusion processes. I. Title. II. Series.
QA274.75.S85 1987 519.2 86-20782
ISBN 978-0-521-33366-5 hardback
ISBN 978-0-521-33645-1 paperback
Contents

Introduction vii

1 Stochastic processes and measures on function space
1.1 Conditional probabilities and transition probability functions 1
1.2 The weak topology 4
1.3 Constructing measures on C([0,∞);ℝ^N) 12
1.4 Wiener measure, some elementary properties 15

2 Diffusions and martingales
2.1 A brief introduction to classical diffusion theory 19
2.2 The elements of martingale theory 27
2.3 Stochastic integrals, Ito's formula and semi-martingales 49

3 The martingale problem formulation of diffusion theory
3.1 Formulation and some basic facts 73
3.2 The martingale problem and stochastic integral equations 87
3.3 Localization 101
3.4 The Cameron-Martin-Girsanov transformation 106
3.5 The martingale problem when a is continuous and positive 112

Appendix 120
Index 127
Introduction
These notes grew out of lectures which I gave during the fall semester of 1985 at M.I.T.
My purpose has been to
provide a reasonably self-contained introduction to some stochastic analytic techniques which can be used in the study of certain analytic problems, and my method has been to concentrate on a particularly rich example rather than to attempt a general overview.
The example which I have chosen
is the study of second order partial differential operators of parabolic type.
This example has the advantage that it
leads very naturally to the analysis of measures on function space and the introduction of powerful probabilistic tools like martingales.
At the same time, it highlights the basic
virtue of probabilistic analysis: the direct role of intuition in the formulation and solution of problems.
The material which is covered has all been derived from my book [S.&V.] (Multidimensional Diffusion Processes, Grundlehren #233, Springer-Verlag, 1979) with S.R.S. Varadhan.
However, the presentation here is quite different.
In the first place, the emphasis there was on generality and detail; here it is on conceptual clarity.
Secondly, at the
time when we wrote [S.&V.], we were not aware of the ease
with which the modern theory of martingales and stochastic integration can be presented.
As a result, our development
of that material was a kind of hybrid between the classical ideas of K. Ito and J.L. Doob and the modern theory based on the ideas of P.A. Meyer, H. Kunita, and S. Watanabe.
In
these notes the modern theory is presented; and the result is,
I believe, not only more general but also more
understandable. In Chapter I, I give a quick review of a few of the
important facts about probability measures on Polish spaces: the existence of regular conditional probability distributions and the theory of weak convergence.
The
chapter ends with the introduction of Wiener measure and a brief discussion of some of the basic elementary properties of Brownian motion.
Chapter II starts with an introduction to diffusion theory via the classical route of transition probability functions coming from the fundamental solution of parabolic equations.
At the end of the first section, an attempt is
made to bring out the analogy between diffusions and the theory of integral curves of a vector field.
In this way I
have tried to motivate the formulation (made precise in Chapter III) of diffusion theory in terms of martingales,
and, at the same time, to indicate the central position which martingales play in stochastic analysis.
The rest of Chapter
II is devoted to the elements of martingale theory and the development of stochastic integration theory.
(The presentation here profited considerably from the incorporation of some ideas which I learned in the lectures given by K. Ito at the opening session of the I.M.A. in the fall of 1985.)

In Chapter III, I formulate the martingale problem and
derive some of the basic facts about its solutions.
The
chapter ends with a proof that the martingale problem corresponding to a strictly elliptic operator with bounded continuous coefficients is well-posed.
This proof turns on
an elementary fact about singular integral operators, and a derivation of this fact is given in the appendix at the end of the chapter.
I. Stochastic Processes and Measures on Function Space:
1. Conditional Probabilities and Transition Probability Functions:
We begin by recalling the notion of conditional expectation. Namely, given a probability space (E,ℱ,P) and a sub-σ-algebra ℱ′, the conditional expectation E^P[X|ℱ′] of a function X ∈ L²(P) is that ℱ′-measurable element of L²(P) such that

∫_A X(ξ)P(dξ) = ∫_A E^P[X|ℱ′](ξ)P(dξ), A ∈ ℱ′.  (1.1)

Clearly E^P[X|ℱ′] exists: it is nothing more or less than the projection of X onto the subspace of L²(P) consisting of ℱ′-measurable P-square integrable functions. Moreover, E^P[X|ℱ′] ≥ 0 (a.s.,P) if X ≥ 0 (a.s.,P). Hence, if X is any non-negative ℱ-measurable function, then one can use the monotone convergence theorem to construct a non-negative ℱ′-measurable E^P[X|ℱ′] for which (1.1) holds; and clearly, up to a P-null set, there is only one such function. In this way, one sees that for any ℱ-measurable X which is either non-negative or in L¹(P) there exists a P-almost-surely unique ℱ′-measurable E^P[X|ℱ′] satisfying (1.1).
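The defining relation (1.1) can be checked by direct computation on a finite probability space. The following sketch is an illustration only, not part of the text's development: the four-point space, the probabilities, the values of X, and the generating partition of ℱ′ are all arbitrary choices.

```python
# Conditional expectation on a finite probability space E = {0,1,2,3}.
# F' is generated by the partition {{0,1},{2,3}}; on each block,
# E^P[X|F'] is the P-weighted average of X over that block.
p = [0.1, 0.2, 0.3, 0.4]          # probabilities P({i})
X = [1.0, 5.0, 2.0, 10.0]         # a function X on E
partition = [[0, 1], [2, 3]]      # generators of the sub-sigma-algebra F'

cond_exp = [0.0] * 4
for block in partition:
    mass = sum(p[i] for i in block)
    avg = sum(X[i] * p[i] for i in block) / mass
    for i in block:
        cond_exp[i] = avg         # E^P[X|F'] is constant on each block

# Verify (1.1): integrals of X and of E^P[X|F'] agree over every
# generating set A of F' (and hence over every A in F').
for block in partition:
    lhs = sum(X[i] * p[i] for i in block)
    rhs = sum(cond_exp[i] * p[i] for i in block)
    assert abs(lhs - rhs) < 1e-12
```

Note that the projection interpretation is visible here: `cond_exp` is the best block-constant approximation of X in the weighted L² sense.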
Because X ↦ E^P[X|ℱ′] is linear and preserves non-negativity, one might hope that for each ξ ∈ E there is a P_ξ ∈ M₁(E) (the space of probability measures on (E,ℱ)) such that E^P[X|ℱ′](ξ) = ∫X(η)P_ξ(dη). Unfortunately, this hope is not fulfilled in general. However, it is fulfilled when one imposes certain topological conditions on (E,ℱ). Our first theorem addresses
this question.

(1.2) Theorem: Suppose that Ω is a Polish space (i.e. Ω is a topological space which admits a complete separable metricization), and that 𝒜 is a sub-σ-algebra of ℬ_Ω (the Borel field over Ω). Given P ∈ M₁(Ω), there is an 𝒜-measurable map ω ↦ P_ω ∈ M₁(Ω) such that P(A∩B) = ∫_A P_ω(B)P(dω) for all A ∈ 𝒜 and B ∈ ℬ_Ω. Moreover, ω ↦ P_ω is uniquely determined up to a P-null set in 𝒜. Finally, if 𝒜 is countably generated, then ω ↦ P_ω can be chosen so that P_ω(A) = χ_A(ω) for all ω ∈ Ω and A ∈ 𝒜.

Proof: Assume for the present that Ω is compact; the general case is left for later (cf. Exercise (2.2) below). Choose {φ_n: n ≥ 0} ⊆ C(Ω) to be a linearly independent set of functions whose span is dense in C(Ω), and assume that φ₀ ≡ 1. For each n ≥ 0, let ψ_n be a bounded version of E^P[φ_n|𝒜], and choose ψ₀ ≡ 1. Next, let Λ be the set of ω's such that there is an n ≥ 0 and a₀,…,a_n ∈ ℚ (the rationals) such that Σ_{m=0}^n a_m φ_m ≥ 0 but Σ_{m=0}^n a_m ψ_m(ω) < 0, and check that Λ is an 𝒜-measurable P-null set. For n ≥ 0 and a₀,…,a_n ∈ ℚ, define

Λ_ω(Σ_{m=0}^n a_m φ_m) = Σ_{m=0}^n a_m ψ_m(ω) if ω ∉ Λ, and Λ_ω(Σ_{m=0}^n a_m φ_m) = E^P[Σ_{m=0}^n a_m φ_m] otherwise.

Check that, for each ω ∈ Ω, Λ_ω determines a unique non-negative linear functional on C(Ω) and that Λ_ω(1) = 1. Further, check that ω ↦ Λ_ω(φ) is 𝒜-measurable for each φ ∈ C(Ω). Finally, let P_ω be the measure on Ω associated with Λ_ω by the Riesz representation theorem, and check that ω ↦ P_ω satisfies the required conditions.

The uniqueness assertion is easy. Moreover, since P_ω(A) = χ_A(ω) (a.s.,P) for each A ∈ 𝒜, it is clear that, when 𝒜 is countably generated, ω ↦ P_ω can be chosen so that this equality holds for all ω ∈ Ω. Q.E.D.
Referring to the set-up described in Theorem (1.2), the map ω ↦ P_ω is called a conditional probability distribution of P given 𝒜 (abbreviated by c.p.d. of P|𝒜). If ω ↦ P_ω has the additional property that P_ω(A) = χ_A(ω) for all ω ∈ Ω and A ∈ 𝒜, then ω ↦ P_ω is called a regular c.p.d. of P|𝒜 (abbreviated by r.c.p.d. of P|𝒜).

(1.3) Remark: The Polish space which will be the center of most of our attention in what follows is the space Ω = C([0,∞);ℝ^N) of continuous paths from [0,∞) into ℝ^N with the topology of uniform convergence on compact time intervals. Letting x(t,ω) = ω(t) denote the position of ω ∈ Ω at time t ≥ 0, set ℳ_t = σ(x(s): 0 ≤ s ≤ t).
and so P(Γ) ≡ P(Γ∩Ω′), where P(Ω′) = 1, determines an element of M₁(Ω) with the required property. The uniqueness of P is obvious. Q.E.D.
(2.2) Exercise: Using the preceding, carry out the proof of Theorem (1.2) when Ω is not compact.

Given a Polish space Ω, the weak topology on M₁(Ω) is the topology generated by sets of the form

{ν: |ν(φ) − μ(φ)| < ε}

for μ ∈ M₁(Ω), φ ∈ C_b(Ω), and ε > 0. Thus, the weak topology on M₁(Ω) is precisely the relative topology which M₁(Ω) inherits from the weak* topology on C_b(Ω)*.

(2.3) Exercise: Let {ω_k} be a countable dense subset of Ω. Show that the set of convex combinations of the point masses δ_{ω_k} with non-negative rational coefficients is dense in M₁(Ω). In particular, conclude that M₁(Ω) is separable.
(2.4) Lemma: Given a net {μ_α} in M₁(Ω), the following are equivalent:
i) μ_α → μ;
ii) μ_α(φ) → μ(φ) for every φ ∈ U(Ω;ρ), where ρ is a metric on Ω and U(Ω;ρ) denotes the bounded ρ-uniformly continuous functions on Ω;
iii) lim sup_α μ_α(F) ≤ μ(F) for every closed F in Ω;
iv) lim inf_α μ_α(G) ≥ μ(G) for every open G in Ω;
v) lim_α μ_α(Γ) = μ(Γ) for every Γ ∈ ℬ_Ω with μ(∂Γ) = 0.

Proof: Obviously i) implies ii), and iii) implies iv) implies v). To prove that ii) implies iii), set

φ_ε(ω) = ρ(ω,(F^(ε))ᶜ)/[ρ(ω,F) + ρ(ω,(F^(ε))ᶜ)],

where F^(ε) is the set of ω's whose ρ-distance from F is less than ε. Then φ_ε ∈ U(Ω;ρ) and χ_F ≤ φ_ε ≤ χ_{F^(ε)}; hence lim sup_α μ_α(F) ≤ lim_α μ_α(φ_ε) = μ(φ_ε) ≤ μ(F^(ε)), and iii) follows upon letting ε ↓ 0.
(2.6) Theorem: A subset Γ of M₁(Ω) is relatively compact if and only if for every ε > 0 there is a K ⊂⊂ Ω such that μ(K) ≥ 1 − ε for every μ ∈ Γ.

Proof: First suppose that Γ ⊂⊂ M₁(Ω). Given ε > 0 and n ≥ 1, choose for each μ ∈ Γ a K_n(μ) ⊂⊂ Ω so that μ(K_n(μ)) ≥ 1 − ε/2ⁿ, and set G_n(μ) = {ν: ν(K_n(μ)^(1/n)) > 1 − ε/2ⁿ}, where distances are taken with respect to a complete metric on Ω. Next, choose μ_{n,1},…,μ_{n,N_n} ∈ Γ so that Γ ⊆ ∪_{k=1}^{N_n} G_n(μ_{n,k}), and set K equal to the intersection over n ≥ 1 of the closures of ∪_{k=1}^{N_n} K_n(μ_{n,k})^(1/n). Clearly K ⊂⊂ Ω and μ(K) ≥ 1 − ε for every μ ∈ Γ.

To prove the opposite implication, think of M₁(Ω) as a subset of the unit ball in C_b(Ω)*. Since the unit ball in C_b(Ω)* is compact in the weak* topology, it suffices for us to check that every weak* limit Λ of μ's from Γ comes from an element of M₁(Ω). But Λ(φ) ≥ 1 − ε for all φ ∈ C_b(Ω) satisfying φ ≥ χ_K, and so Theorem (2.1) applies to Λ. Q.E.D.
F-
(2.7) Example: Let n = C([O,m);E), where (E,p) is a
Polish space and we give 0 the topology of uniform convergence on finite intervals.
Then, r C M1(12) is
relatively compact if and only if for each T > 0 and e > 0 satisfying lim b(T) _
there exist K CC E and
T10 0, such that: sup P({tw: x(t,w) E K, PET
t E [0,T], and
p(x(t,w),x(s,(w)) < 6 (It-s I). s,t E [O,T]}) >
1
- e-
8
In particular, if p-bounded subsets of E are relatively compact, then it suffices that: lim R
sup P({w: p(x,x(O,(j)) < R and PET
p(x(t,w),x(s,w)) S 60t-sI), s,t E [0,T]}) 2 1 - e for some reference point x e E.
The following basic real-variable result was discovered by Garsia, Rodemich, and Rumsey.

(2.8) Lemma (Garsia et al.): Let p and Ψ be strictly increasing continuous functions on [0,∞) satisfying p(0) = Ψ(0) = 0 and lim_{t→∞} Ψ(t) = ∞. For given T > 0 and φ ∈ C([0,T];ℝ^N), suppose that:

∫₀^T ∫₀^T Ψ(|φ(t) − φ(s)|/p(|t − s|)) ds dt ≤ B < ∞.

Then, for all 0 ≤ s ≤ t ≤ T:

|φ(t) − φ(s)| ≤ 8 ∫₀^{t−s} Ψ⁻¹(4B/u²) p(du).
Proof: Define

I(t) = ∫₀^T Ψ(|φ(t) − φ(s)|/p(|t − s|)) ds, t ∈ [0,T].

Since ∫₀^T I(t) dt ≤ B, there is a t₀ ∈ (0,T) such that I(t₀) ≤ B/T. Next, choose t₀ > d₀ > t₁ > d₁ > … > t_n > d_n > … as follows. Given t_{n−1}, define d_{n−1} by p(d_{n−1}) = (1/2)p(t_{n−1}), and choose t_n ∈ (0,d_{n−1}) so that I(t_n) ≤ 2B/d_{n−1} and Ψ(|φ(t_n) − φ(t_{n−1})|/p(|t_n − t_{n−1}|)) ≤ 2I(t_{n−1})/d_{n−1}. Such a t_n exists because each of the specified conditions can fail on a set of at most measure d_{n−1}/2.

Clearly: 2p(d_{n+1}) = p(t_{n+1}) < p(d_n); thus t_n ↓ 0 as n → ∞. Also, p(t_n − t_{n+1}) ≤ p(t_n) = 2p(d_n) = 4(p(d_n) − (1/2)p(d_n)) ≤ 4(p(d_n) − p(d_{n+1})). Hence, with d₋₁ ≡ T:

|φ(t_{n+1}) − φ(t_n)| ≤ Ψ⁻¹(2I(t_n)/d_n) p(t_n − t_{n+1}) ≤ Ψ⁻¹(4B/d_{n−1}d_n) · 4(p(d_n) − p(d_{n+1})) ≤ 4 ∫_{d_{n+1}}^{d_n} Ψ⁻¹(4B/u²) p(du),

and so |φ(t₀) − φ(0)| ≤ 4 ∫₀^T Ψ⁻¹(4B/u²) p(du).
Next, suppose that f is a measurable function on B(r)×E, where B(r) denotes the ball of radius r in ℝ^d, and that for some q ≥ 1, α > 0, and A < ∞:

E^P[‖f(y) − f(x)‖^q] ≤ A|y − x|^{d+α}, x,y ∈ B(r).  (2.13)

Then, for all λ > 0,

P(sup_{x≠y ∈ B(r)} ‖f(y) − f(x)‖/|y − x|^β ≥ λ) ≤ AB/λ^q,  (2.14)

where β = α/2q and B < ∞ depends only on d, q, r, and α.
Proof: Let p = 2d + α/2. Then:

∫_{B(r)} ∫_{B(r)} E^P[(‖f(y) − f(x)‖/|y − x|^{p/q})^q] dx dy ≤ AB′, where B′ = ∫_{B(r)} ∫_{B(r)} |y − x|^{−(d+α/2)} dx dy < ∞.

Next, set

Y(ω) = ∫_{B(r)} ∫_{B(r)} [‖f(y,ω) − f(x,ω)‖/|y − x|^{p/q}]^q dx dy.

Then, by Fubini's theorem, E^P[Y] ≤ AB′, and so: P(Y ≥ λ^q) ≤ AB′/λ^q, λ > 0. In addition, by (2.10):

‖f(y,ω) − f(x,ω)‖ ≤ 8 ∫₀^{|y−x|} (4^{d+1} Y(ω)/u^{2d})^{1/q} d(u^{p/q}) ≤ C Y(ω)^{1/q} |y − x|^β. Q.E.D.
(2.15) Corollary: Let Ω = C([0,∞);ℝ^N) and suppose that Γ ⊆ M₁(Ω) has the properties that:

lim_{R→∞} sup_{P∈Γ} P(|x(0)| ≥ R) = 0

and that, for each T > 0 and some q ≥ 1 and α > 0:

sup_{P∈Γ} sup_{0≤s<t≤T} E^P[|x(t) − x(s)|^q]/(t − s)^{1+α} < ∞.

Then Γ is relatively compact in M₁(Ω).
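For Brownian motion the moment condition of this corollary holds with q = 4 and α = 1, since in one dimension E[|x(t) − x(s)|⁴] = 3(t − s)². The following seeded Monte Carlo sketch is an illustration only; the time increment, sample size, and tolerance are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.3                                 # an arbitrary time increment t - s
z = rng.standard_normal(200_000)
incr = np.sqrt(dt) * z                   # x(t) - x(s) ~ N(0, dt) in one dimension
fourth_moment = np.mean(incr ** 4)       # should be close to 3 * dt**2
ratio = fourth_moment / dt ** (1 + 1)    # E|x(t)-x(s)|^4 / (t-s)^(1+alpha), alpha = 1
```

The ratio stays near 3 uniformly in dt, which is exactly the boundedness the corollary requires.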
(3.1) Exercise: Check that ℳ = σ(∪_{t≥0} ℳ_t). In particular, conclude that if P,Q ∈ M₁(Ω) satisfy

P(x(t₀) ∈ Γ₀,…,x(t_n) ∈ Γ_n) = Q(x(t₀) ∈ Γ₀,…,x(t_n) ∈ Γ_n)

for all n ≥ 0, 0 ≤ t₀ < … < t_n, and Γ₀,…,Γ_n ∈ ℬ_{ℝ^N}, then P = Q.

Now suppose that, for each n ≥ 0 and 0 ≤ t₀ < … < t_n, we are given P_{t₀,…,t_n} ∈ M₁((ℝ^N)^{n+1}) consistently, and that for some q ≥ 1 and α > 0:

sup ∫ |y − x|^q P_{s,t}(dx×dy)/(t − s)^{1+α} < ∞.

Then there exists a unique P ∈ M₁(Ω) such that P∘(x(t₀),…,x(t_n))⁻¹ = P_{t₀,…,t_n} for all n ≥ 0 and 0 ≤ t₀ < … < t_n. (Throughout, P∘Φ⁻¹(Γ) ≡ P(Φ⁻¹(Γ)).)
Proof: The uniqueness is immediate from Exercise (3.1). To prove existence, define for m ≥ 0 the map Φ_m: (ℝ^N)^{4^m+1} → Ω so that x(t,Φ_m(x₀,…,x_{4^m})) = x_k + 2^m(t − k/2^m)(x_{k+1} − x_k) if k/2^m ≤ t < (k+1)/2^m and 0 ≤ k < 4^m, and x(t,Φ_m(x₀,…,x_{4^m})) = x_{4^m} if t ≥ 2^m. Next, set P_m = P_{t₀,…,t_{n_m}}∘Φ_m⁻¹, where n_m = 4^m and t_k = k/2^m. Then, by Corollary (2.15), {P_m: m ≥ 0} is relatively compact in M₁(Ω). Moreover, if P is any limit of {P_m: m ≥ 0}, then

E^P[φ₀(x(t₀))⋯φ_n(x(t_n))] = ∫ φ₀(x₀)⋯φ_n(x_n) P_{t₀,…,t_n}(dx₀×…×dx_n)  (3.7)

for all n ≥ 0 and dyadic 0 ≤ t₀ < … < t_n. Moreover, the family associated with any initial distribution satisfies (3.6).
4. Wiener Measure, Some Elementary Properties:

We continue with the notation used in the preceding section. The classic example of a measure on Ω is the one constructed by N. Wiener. Namely, set P(s,x;t,dy) = g(t−s,y−x)dy, where

g(t,y) = (2πt)^{−N/2} exp(−|y|²/2t)  (4.1)

is the (standard) Gauss (or Weierstrass) kernel. It is an easy computation to check that:

∫ exp[Σ_{j=1}^N θ_j y_j] g(t,y) dy = exp[(t/2) Σ_{j=1}^N θ_j²]  (4.2)

for any t > 0 and θ₁,…,θ_N ∈ ℂ; and from (4.2) one can easily show that P(s,x;t,dy) is a transition probability function which satisfies

∫ |y − x|^q P(s,x;t,dy) = C_N(q)(t − s)^{q/2}  (4.3)

for each q ∈ [1,∞). In particular, (3.9) holds with q = 4 and α = 1. The measure P ∈ M₁(Ω) corresponding to an initial distribution μ₀ and this P(s,x;t,·) is called the (N-dimensional) Wiener measure with initial distribution μ₀ and is denoted by 𝒲_{μ₀}. In particular, when μ₀ = δ_x, we use 𝒲_x in place of 𝒲_{δ_x} and refer to 𝒲_x as the (N-dimensional) Wiener measure starting at x; and when x = 0, we will use 𝒲 (or, when dimension is emphasized, 𝒲^{(N)}) instead of 𝒲₀ and will call 𝒲 the (N-dimensional) Wiener measure. In this
In this
connection, we introduce here the notion of an N-dimensional Wiener process.
Namely, given a probability space (E,ℱ,P), we will say that β: [0,∞)×E → ℝ^N is an (N-dimensional) Wiener process under P if β is measurable, t ↦ β(t) is P-almost surely continuous, and P∘β⁻¹ = 𝒲^{(N)}.

(4.4) Exercise: Identifying C([0,∞);ℝ^N) with C([0,∞);ℝ)^N, show that 𝒲^{(N)} = (𝒲^{(1)})^N. In addition, show that x(·) is a Wiener process under 𝒲 and that 𝒲_x = 𝒲∘T_x⁻¹, where T_x: Ω → Ω is given by x(t,T_x(ω)) = x + x(t,ω). Finally, for a given s ≥ 0, let ω ↦ 𝒲_ω^s be the r.c.p.d. of 𝒲|ℳ_s. Show that, for 𝒲-almost all ω, x(s+·) − x(s) is a Wiener process under 𝒲_ω^s, and use this to conclude that 𝒲_ω^s∘θ_s⁻¹ = 𝒲_{x(s,ω)} (a.s.,𝒲), where θ_s: Ω → Ω is the time shift map given by x(t,θ_s(ω)) = x(s+t,ω).
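Identity (4.2) can be checked numerically in one dimension: for real θ, ∫e^{θy} g(t,y) dy = e^{tθ²/2}. The sketch below is illustrative only; the values θ = 0.7 and t = 1.3 and the quadrature grid are arbitrary choices.

```python
import numpy as np

def gauss_kernel(t, y):
    """The one-dimensional Gauss (Weierstrass) kernel g(t, y) of (4.1)."""
    return np.exp(-y**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

theta, t = 0.7, 1.3
y = np.linspace(-15.0, 15.0, 6001)
dy = y[1] - y[0]
integrand = np.exp(theta * y) * gauss_kernel(t, y)
lhs = (integrand[:-1] + integrand[1:]).sum() * dy / 2.0   # trapezoid rule for (4.2), left side
rhs = np.exp(t * theta**2 / 2.0)                          # right side of (4.2)
```

Because the integrand and all its derivatives vanish at the grid ends, the trapezoid rule here is extremely accurate.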
Thus far we have discussed Wiener measure from the Markovian point of view (i.e. in terms of transition probability functions). An equally important way to approach this subject is from the Gaussian standpoint. From the Gaussian standpoint, 𝒲 is characterized by the equation:

E^𝒲[exp(i Σ_{k=1}^n (θ_k,x(t_k))_{ℝ^N})] = exp(−(1/2) Σ_{k,ℓ=1}^n (t_k∧t_ℓ)(θ_k,θ_ℓ)_{ℝ^N})  (4.5)

for all n ≥ 1, t₁,…,t_n ∈ [0,∞), and θ₁,…,θ_n ∈ ℝ^N.
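Differentiating (4.5) twice shows that, under 𝒲 with N = 1, E^𝒲[x(s)x(t)] = s∧t. The seeded simulation sketch below is an illustration; the times s = 0.5, t = 1.2, the sample size, and the tolerance are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
s, t = 0.5, 1.2
n_paths = 100_000
xs = np.sqrt(s) * rng.standard_normal(n_paths)           # x(s) ~ N(0, s)
xt = xs + np.sqrt(t - s) * rng.standard_normal(n_paths)  # add an independent increment
cov_est = np.mean(xs * xt)                               # estimates s ∧ t = 0.5
```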
(4.6) Exercise: Check that (4.5) holds and that it characterizes 𝒲. Next, define Φ_λ: Ω → Ω by x(t,Φ_λ(ω)) = λ^{1/2}x(t/λ,ω) for λ > 0; and, using (4.5), show that 𝒲 = 𝒲∘Φ_λ⁻¹. This invariance property of 𝒲 is often called the Brownian scaling property.

In order to describe the time inversion property of Wiener processes, one must first check that 𝒲(lim_{t→∞} x(t)/t = 0) = 1. To this end, note that:

𝒲(sup_{t≥m} |x(t)|/t ≥ 2ε) ≤ Σ_{n=m}^∞ 𝒲(sup_{n≤t≤n+1} |x(t)| ≥ nε)

and that, by Brownian scaling:

𝒲(sup_{n≤t≤n+1} |x(t)| ≥ nε) ≤ 𝒲(sup_{0≤t≤2} |x(t)| ≥ n^{1/2}ε).

Now combine (4.3) with (2.14) to conclude that 𝒲(sup_{n≤t≤n+1} |x(t)| ≥ nε) ≤ C/n²ε⁴, and therefore that 𝒲(lim_{t→∞} x(t)/t = 0) = 1. The Brownian time inversion property can now be stated as follows. Define β(0) = 0 and, for t > 0, set β(t) = tx(1/t). Using the preceding and (4.5), check that β is a Wiener process under 𝒲.
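The scaling property can be seen at the level of covariances: replacing x by Φ_λ turns the covariance s∧t into λ·((s/λ)∧(t/λ)), which is again s∧t, so (4.5) is unchanged. A small sketch verifying this arithmetic over a grid (the λ, s, t values are arbitrary):

```python
# Under the scaling map, x(t) becomes sqrt(lam) * x(t/lam), so the
# covariance s ∧ t becomes lam * min(s/lam, t/lam); these must agree.
for lam in (0.25, 1.0, 4.0, 9.0):
    for s in (0.1, 0.5, 1.0, 2.0):
        for t in (0.3, 1.0, 1.7):
            scaled = lam * min(s / lam, t / lam)
            assert abs(scaled - min(s, t)) < 1e-12
check = True
```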
We close this chapter with a justly famous result due to Wiener. In the next chapter we will derive this same result from a much more sophisticated viewpoint.

(4.6) Theorem: 𝒲-almost no ω ∈ Ω is Lipschitz continuous at even one t ≥ 0.

Proof: In view of Exercise (4.4), it suffices to treat the case when N = 1 and to show that 𝒲-almost no ω is Lipschitz continuous at any t ∈ [0,1). But if ω were Lipschitz continuous at t ∈ [0,1), then there would exist ℓ,m ∈ ℤ⁺ such that |x((k+1)/n) − x(k/n)| would be less than ℓ/n for all n ≥ m and three consecutive k's between 0 and n+2. Hence, it suffices to show that the sets

B(ℓ,m) = ∩_{n=m}^∞ ∪_{k=1}^n ∩_{j=0}^2 A(ℓ,n,k+j), ℓ,m ∈ ℤ⁺,

where A(ℓ,n,k) ≡ {|x((k+1)/n) − x(k/n)| ≤ ℓ/n}, have 𝒲-measure 0. But, by (4.1) and Brownian scaling:

𝒲(B(ℓ,m)) ≤ lim_{n→∞} n𝒲(|x(1/n)| ≤ ℓ/n)³ = lim_{n→∞} n𝒲(|x(1)| ≤ ℓ/n^{1/2})³ ≤ lim_{n→∞} n(∫_{−ℓ/n^{1/2}}^{ℓ/n^{1/2}} g(1,y) dy)³ = 0. Q.E.D.
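The final limit in this proof can be evaluated numerically: with P(|x(1)| ≤ a) = erf(a/√2), the quantity n𝒲(|x(1)| ≤ ℓ/n^{1/2})³ decays like n^{−1/2}. A sketch (the value ℓ = 2 is an arbitrary choice):

```python
import math

def bound(n, ell=2.0):
    """n * W(|x(1)| <= ell / sqrt(n))**3, using P(|N(0,1)| <= a) = erf(a / sqrt(2))."""
    a = ell / math.sqrt(n)
    return n * math.erf(a / math.sqrt(2.0)) ** 3

vals = [bound(10 ** k) for k in range(2, 7)]   # n = 1e2, ..., 1e6: decreasing toward 0
```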
(4.7) Remark: P. Lévy obtained a far sharper version of the preceding; namely, he showed that:

𝒲(lim_{δ↓0} sup_{0≤s<t≤1, t−s≤δ} |x(t) − x(s)|/(2δ log(1/δ))^{1/2} = 1) = 1.

… for all T > 0 and all f ∈ C_b^{1,2}([0,T]×ℝ^N):

∫ f(T,y)P(s,x;T,dy) − f(s,x) = ∫_s^T dt ∫ (∂_t + L_t)f(t,y)P(s,x;t,dy), (s,x) ∈ [0,T]×ℝ^N.  (1.3)

Moreover, if T > 0 and φ ∈ C₀^∞(ℝ^N), then (s,x) ∈ [0,T]×ℝ^N ↦ u_T^φ(s,x) ≡ ∫ φ(y)P(s,x;T,dy) is an element of C_b^{1,2}([0,T]×ℝ^N).

(1.4) Remark: Notice that when L_t = (1/2)Δ (i.e. a ≡ I and b ≡ 0), P(s,x;t,dy) = g(t−s,y−x)dy, where g is the Gauss kernel given in (1.4.1).
Throughout the rest of this section we will be working with the situation described in Theorem (1.2). We first observe that when φ ∈ C₀^∞(ℝ^N), u_T^φ is the unique u ∈ C_b^{1,2}([0,T]×ℝ^N) such that

(∂_s + L_s)u = 0 in [0,T)×ℝ^N and lim_{s↑T} u(s,·) = φ.  (1.5)

The uniqueness follows from (1.3) upon taking f = u. To prove that u = u_T^φ satisfies (1.5), note that

∂_s u(s,x) = lim_{h↓0} (u(s+h,x) − u(s,x))/h
= lim_{h↓0} [u(s+h,x) − ∫ u(s+h,y)P(s,x;s+h,dy)]/h
= −lim_{h↓0} (1/h) ∫_s^{s+h} dt ∫ [L_t u(s+h,·)](y)P(s,x;t,dy)
= −L_s u(s,x),

where we have used the Chapman-Kolmogorov equation ((1.3.4)), followed by (1.3).
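The Chapman-Kolmogorov equation invoked here can be checked numerically for the Gauss kernel: ∫g(t₁,y−x)g(t₂,z−y)dy = g(t₁+t₂,z−x). The sketch below is an illustration; the time steps, endpoints, and quadrature grid are arbitrary choices.

```python
import numpy as np

def g(t, y):
    """One-dimensional Gauss kernel."""
    return np.exp(-y**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

t1, t2 = 0.3, 0.7
x, z = 0.0, 0.5
y = np.linspace(-12.0, 12.0, 4001)
dy = y[1] - y[0]
integrand = g(t1, y - x) * g(t2, z - y)
conv = (integrand[:-1] + integrand[1:]).sum() * dy / 2.0   # trapezoid rule
exact = g(t1 + t2, z - x)                                  # Chapman-Kolmogorov prediction
```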
We next prove an important estimate for the tail distribution of the measure P(s,x;T,·).

(1.6) Lemma: Let A = sup_{t,x} ‖a(t,x)‖_op and B = sup_{t,x} |b(t,x)|. Then for all 0 ≤ s < T and R > N^{1/2}B(T−s):

P(s,x;T,B(x,R)ᶜ) ≤ 2N exp[−(R − N^{1/2}B(T−s))²/2NA(T−s)].  (1.7)

In particular, for each T > 0 and q ∈ [1,∞), there is a C(T,q) < ∞, depending only on N, A, and B, such that

∫ |y − x|^q P(s,x;t,dy) ≤ C(T,q)(t − s)^{q/2}, 0 ≤ s < t ≤ T.  (1.8)

Proof: Given R > 0 and x ∈ ℝ^N,

P(s,x;T,B(x,R)ᶜ) ≤ Σ_{i=1}^N P(s,x;T,{y: |(e_i,y−x)_{ℝ^N}| ≥ R/N^{1/2}}) ≤ 2N max_{θ∈S^{N−1}} P(s,x;T,{y: (θ,y−x)_{ℝ^N} ≥ R/N^{1/2}}),

and, by (1.9),

P(s,x;T,{y: (θ,y−x)_{ℝ^N} ≥ R/N^{1/2}}) ≤ e^{−λR/N^{1/2}} exp[λ²A(T−s)/2 + λB(T−s)]

for all θ ∈ S^{N−1} and λ > 0. Taking λ = (R − N^{1/2}B(T−s))/N^{1/2}A(T−s), we arrive at (1.7). Q.E.D.
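In the special case a ≡ I, b ≡ 0, N = 1 (so A = 1, B = 0, and P(s,x;T,·) is Gaussian with variance T−s), the bound (1.7) reads erfc(R/√(2(T−s))) ≤ 2exp(−R²/2(T−s)). A sketch checking this for a few values (the values of the variance and of R are arbitrary):

```python
import math

def tail(R, tau):
    """Exact Gaussian tail P(|y - x| > R) for variance tau."""
    return math.erfc(R / math.sqrt(2.0 * tau))

def bound(R, tau):
    """Right-hand side of (1.7) with N = 1, A = 1, B = 0."""
    return 2.0 * math.exp(-R**2 / (2.0 * tau))

ok = all(tail(R, tau) <= bound(R, tau)
         for tau in (0.1, 1.0, 5.0)
         for R in (0.5, 1.0, 2.0, 4.0))
```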
In view of (1.8), it is now clear from Exercise (1.3.8) that for each s ≥ 0 and each initial distribution μ₀ ∈ M₁(ℝ^N) there is a unique P_{s,μ₀} ∈ M₁(Ω) such that

P_{s,μ₀}(x(t₀) ∈ Γ₀,…,x(t_n) ∈ Γ_n) = ∫_{Γ₀} μ₀(dx₀) ∫_{Γ₁} P(s+t₀,x₀;s+t₁,dx₁) ⋯ ∫_{Γ_n} P(s+t_{n−1},x_{n−1};s+t_n,dx_n)  (1.10)

for all n ≥ 0, 0 = t₀ < … < t_n, and Γ₀,…,Γ_n ∈ ℬ_{ℝ^N}. We will use the notation P_{s,x} in place of P_{s,δ_x}.
(1.11) Theorem: The map (s,x) ∈ [0,∞)×ℝ^N ↦ P_{s,x} ∈ M₁(Ω) is continuous and, for each μ₀ ∈ M₁(ℝ^N), P_{s,μ₀} = ∫ P_{s,x} μ₀(dx). Moreover, P_{s,x} is the one and only P ∈ M₁(Ω) which satisfies:

P(x(0) = x) = 1 and P(x(t₂) ∈ Γ|ℳ_{t₁}) = P(s+t₁,x(t₁);s+t₂,Γ) (a.s.,P)  (1.12)

for all 0 ≤ t₁ < t₂ and Γ ∈ ℬ_{ℝ^N}. Finally, if t ≥ 0 and ω ↦ (P_{s,x})_ω^t is a r.c.p.d. of P_{s,x}|ℳ_t, then (P_{s,x})_ω^t∘θ_t⁻¹ = P_{s+t,x(t,ω)} for P_{s,x}-almost every ω.
Proof: First observe that, by the last part of Theorem (1.2), (s,x) ∈ [0,T]×ℝ^N ↦ ∫ φ(y)P(s,x;T,dy) is bounded and continuous for all φ ∈ C₀^∞(ℝ^N). Combining this with (1.7), one sees that this continues to be true for all φ ∈ C_b(ℝ^N). Hence, by (1.10), for all n ≥ 1, 0 < t₁ < … < t_n, and φ₁,…,φ_n ∈ C_b(ℝ^N),

E^{P_{s,x}}[φ₁(x(t₁))⋯φ_n(x(t_n))]

is a bounded continuous function of (s,x) ∈ [0,∞)×ℝ^N. Now suppose that (s_k,x_k) → (s,x) in [0,∞)×ℝ^N and observe that, by (1.8) and (1.2.15), the sequence {P_{s_k,x_k}} is relatively compact in M₁(Ω). Moreover, if {P_{s_{k′},x_{k′}}} is a convergent subsequence and P is its limit, then:

E^P[φ₁(x(t₁))⋯φ_n(x(t_n))] = lim_{k′→∞} E^{P_{s_{k′},x_{k′}}}[φ₁(x(t₁))⋯φ_n(x(t_n))] = E^{P_{s,x}}[φ₁(x(t₁))⋯φ_n(x(t_n))]

for all n ≥ 1, 0 < t₁ < … < t_n, and φ₁,…,φ_n ∈ C_b(ℝ^N). Hence, P = P_{s,x}, and so we conclude that P_{s_k,x_k} → P_{s,x}. The fact that P_{s,μ₀} = ∫ P_{s,x} μ₀(dx) is elementary now that we know that (s,x) ↦ P_{s,x} is measurable.

Our next step is to prove the final assertion concerning (P_{s,x})_ω^t. When t = 0, there is nothing to do. Assume that t > 0. Given m,n ∈ ℤ⁺, 0 < σ₁ < … < σ_m < t, 0 < τ₁ < … < τ_n, and Δ₁,…,Δ_m,Γ₁,…,Γ_n ∈ ℬ_{ℝ^N}, set A = {x(σ₁) ∈ Δ₁,…,x(σ_m) ∈ Δ_m} and B = {x(τ₁) ∈ Γ₁,…,x(τ_n) ∈ Γ_n}. Then:

∫_A P_{s+t,x(t,ω)}(B) P_{s,x}(dω)
= ∫_{Δ₁} P(s,x;s+σ₁,dx₁) ⋯ ∫_{Δ_m} P(s+σ_{m−1},x_{m−1};s+σ_m,dx_m) × ∫_{ℝ^N} P(s+σ_m,x_m;s+t,dy₀) × ∫_{Γ₁} P(s+t,y₀;s+t+τ₁,dy₁) ⋯ ∫_{Γ_n} P(s+t+τ_{n−1},y_{n−1};s+t+τ_n,dy_n)
= P_{s,x}(x(σ₁) ∈ Δ₁,…,x(σ_m) ∈ Δ_m, x(t+τ₁) ∈ Γ₁,…,x(t+τ_n) ∈ Γ_n)
= ∫_A (P_{s,x})_ω^t∘θ_t⁻¹(B) P_{s,x}(dω).

Hence, for all A ∈ ℳ_t and B ∈ ℳ:

∫_A (P_{s,x})_ω^t∘θ_t⁻¹(B) P_{s,x}(dω) = ∫_A P_{s+t,x(t,ω)}(B) P_{s,x}(dω).

Therefore, for each B ∈ ℳ, (P_{s,x})_ω^t∘θ_t⁻¹(B) = P_{s+t,x(t,ω)}(B) (a.s.,P_{s,x}). Since ℳ is countably generated, this, in turn, implies that (P_{s,x})_ω^t∘θ_t⁻¹ = P_{s+t,x(t,ω)} (a.s.,P_{s,x}).

Finally, we must show that P_{s,x} is characterized by (1.12). That P_{s,x} satisfies (1.12) is a special case of the result proved in the preceding paragraph. On the other hand, if P ∈ M₁(Ω) satisfies (1.12), then one can easily work by induction on n ≥ 0 to prove that P satisfies (1.10) with μ₀ = δ_x. Q.E.D.
(1.13) Corollary: For each (s,x) ∈ [0,∞)×ℝ^N, P_{s,x} is the unique P ∈ M₁(Ω) which satisfies P(x(0) = x) = 1 and:

E^P[φ(x(t₂)) − φ(x(t₁))|ℳ_{t₁}] = E^P[∫_{t₁}^{t₂} [L_{s+t}φ](x(t)) dt | ℳ_{t₁}] (a.s.,P)

for all 0 ≤ t₁ < t₂ and φ ∈ C₀^∞(ℝ^N).
… let U(a,b;T) denote the number of times that t ↦ X(t,ξ) upcrosses [a,b] during [0,T).  (2.17)

Then:

E^P[U(a,b;T)] ≤ E^P[(X(T) − a)⁺]/(b − a), T ∈ (0,∞).

In particular, for P-almost all ξ ∈ E, t ↦ X(t,ξ) has a left limit in [−∞,∞] at each t ∈ (0,∞). In addition, if sup_{T>0} E^P[X(T)⁺] < ∞ (sup_{T>0} E^P[|X(T)|] < ∞), then lim_{t→∞} X(t) exists in [−∞,∞) ((−∞,∞)) (a.s.,P).

Proof: In view of Exercise (2.15), it suffices to prove that (2.17) holds with U(a,b;T) replaced by U_m(a,b;T) (cf. the last part of (2.15)).
Given m ≥ 0, set X_m(t) = X([2^m t]/2^m) and τ₀ = 0, and define σ_n and τ_n inductively for n ≥ 1 by:

σ_n = (inf{t ≥ τ_{n−1}: X_m(t) ≤ a})∧T and τ_n = (inf{t ≥ σ_n: X_m(t) ≥ b})∧T.

Clearly the σ_n's and τ_n's are stopping times which are bounded by T, and U_m(a,b;T) = max{n ≥ 1: X_m(τ_n) ≥ b}. …

For R > 0, set T_R = inf{t > 0: X(t) > R} and define X_R(t) = X(t∧T_R). Then T_R is a stopping time and X_R(t) ≤ R, t ≥ 0, (a.s.,P). Hence, (X_R(t),ℱ_t,P) is a sub-martingale and E^P[X_R(T)⁺] ≤ R for all T > 0. In particular, lim_{t→∞} X(t) exists (in [−∞,∞)) (a.s.,P) on {T_R = ∞}. Since this is true for every R > 0, we now have the desired conclusion in the sub-martingale case. The martingale case follows from this, the observation that E^P[|X_R(T)|] = 2E^P[X_R(T)⁺] − E^P[X_R(0)], and Fatou's lemma. Q.E.D.
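The count U_m(a,b;T) can be computed for a discrete path by exactly the scan that the stopping times σ_n, τ_n describe: wait for the path to reach level a or below, then for it to reach level b or above, and count each completed pair as one upcrossing. A sketch (the sample path is a made-up illustration):

```python
def count_upcrossings(path, a, b):
    """Count completed upcrossings of [a, b]: passages from <= a to >= b."""
    count = 0
    below = False                 # True once the path has been <= a
    for x in path:
        if not below:
            if x <= a:
                below = True      # sigma_n: the path has dropped to level a
        elif x >= b:
            count += 1            # tau_n: the path has climbed back to level b
            below = False
    return count

demo_path = [0.0, 2.0, -1.0, 3.0, 0.5, 2.5]
demo_count = count_upcrossings(demo_path, 0.0, 2.0)   # two completed upcrossings
```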
(2.19) Exercise: Prove each of the following statements.
i) (X(t),ℱ_t,P) is a uniformly P-integrable martingale if and only if X(∞) ≡ lim_{t→∞} X(t) exists in L¹(P), in which case X(t) = E^P[X(∞)|ℱ_t] (a.s.,P) and X(T) = E^P[X(∞)|ℱ_T] (a.s.,P) for each stopping time T.
ii) If q ∈ (1,∞) and (X(t),ℱ_t,P) is a martingale, then (X(t),ℱ_t,P) is L^q-bounded (i.e. sup_{t≥0} E^P[|X(t)|^q] < ∞) if and only if X(∞) ≡ lim_{t→∞} X(t) exists in L^q(P), in which case X(t) = E^P[X(∞)|ℱ_t] (a.s.,P) and X(T) = E^P[X(∞)|ℱ_T] (a.s.,P) for each stopping time T.
iii) Suppose that X: [0,∞)×E → ℝ is a right continuous progressively measurable function and that X(t) ∈ L¹(P) for each t in a dense subset D of [0,∞). …

Let A: [0,∞)×E → ℝ be a right-continuous progressively measurable function which is P-almost surely continuous and of local bounded variation (i.e. for each T > 0, the total variation |A|(T,ξ) of A(·,ξ) on [0,T] is finite for P-almost every ξ ∈ E). Then, assuming that E^P[sup_{0≤t≤T} |X(t)|(|A|(T) + |A(0)|)] < ∞ for all T > 0, (X(t)A(t) − ∫₀ᵗ X(s)A(ds), ℱ_t, P) is a martingale.
Proof: Let 0 ≤ s < t and Γ ∈ ℱ_s be given. Then:

E^P[X(t)A(t) − X(s)A(s), Γ] = lim_{n→∞} E^P[Σ_{k=0}^{n−1} (X(u_{n,k+1})A(u_{n,k+1}) − X(u_{n,k})A(u_{n,k})), Γ] = lim_{n→∞} E^P[Σ_{k=0}^{n−1} X(u_{n,k+1})(A(u_{n,k+1}) − A(u_{n,k})), Γ],

where u_{n,k} = s + k(t − s)/n. Since u ↦ X(u) is right continuous and A is P-almost surely continuous and of local bounded variation,

Σ_{k=0}^{n−1} X(u_{n,k+1})(A(u_{n,k+1}) − A(u_{n,k})) → ∫_s^t X(u)A(du) (a.s.,P);

and our integrability assumption allows us to conclude that this convergence takes place in L¹(P). Q.E.D.
(2.23) Theorem: Let (X(t),ℱ_t,P) be a P-almost surely continuous martingale and define ζ = sup{t ≥ 0: |X|(t) < ∞}, where |X|(t) denotes the total variation of X on [0,t]. Then, P-almost surely, X(t∧ζ) = X(0), t ≥ 0. In particular, if P(X(t) = X(s)) = 0 for all 0 ≤ s < t, then t ↦ X(t) is P-almost surely of unbounded variation on every non-degenerate interval.

Proof: Without loss in generality, assume that X(0) ≡ 0. For R > 0, define ζ_R = sup{t ≥ 0: |X|(t) < R}. Then ζ_R is a stopping time for each R > 0 and ζ_R ↑ ζ as R → ∞. Moreover, by Lemma (2.22) with X and A replaced by X(·∧ζ_R):

(X(t∧ζ_R)² − ∫₀^{t∧ζ_R} X(s)X(ds), ℱ_t, P) is a martingale; and therefore

E^P[X(t∧ζ_R)²] = E^P[∫₀^{t∧ζ_R} X(s)X(ds)].

On the other hand, since X(·∧ζ_R) is P-almost surely continuous and of local bounded variation, X(t∧ζ_R)² = 2∫₀^{t∧ζ_R} X(s)X(ds) (a.s.,P), and therefore E^P[X(t∧ζ_R)²] = 2E^P[∫₀^{t∧ζ_R} X(s)X(ds)]. Hence, E^P[X(t∧ζ_R)²] = 0 for all t ≥ 0, and so X(t∧ζ_R) = 0, t ≥ 0, (a.s.,P). Clearly this leads immediately to the conclusion that X(t∧ζ) = 0, t ≥ 0, (a.s.,P).

To prove the last assertion, it suffices to check that, for each 0 ≤ s < t, the process X^s(·) ≡ X(·∨s) − X(s) is of infinite variation on [s,t] (a.s.,P). If ζ^s = sup{u ≥ 0: |X^s|(u) < ∞}, then P(|X^s|(t) < ∞) ≤ P(ζ^s > t); and, by the preceding with X^s replacing X, P(ζ^s > t) ≤ P(X(t) = X(s)) = 0. Q.E.D.
(2.24) Corollary: Let X: [0,∞)×E → ℝ be a right continuous progressively measurable function. Then there is, up to a P-null set, at most one right continuous progressively measurable A: [0,∞)×E → ℝ such that: A(0) = 0, t ↦ A(t) is P-almost surely continuous and of local bounded variation, and, in addition, (X(t) − A(t),ℱ_t,P) is a martingale.

Proof: Suppose that there were two, A and A′. Then (A(t) − A′(t),ℱ_t,P) would be a P-almost surely continuous martingale which is P-almost surely of local bounded variation. Hence, by Theorem (2.23), we would have A(t) − A′(t) = A(0) − A′(0) = 0, t ≥ 0, (a.s.,P). Q.E.D.
Before proving the existence part of our special case of the Doob-Meyer decomposition theorem, we mention a result which addresses an extremely pedantic issue. Namely, for technical reasons (e.g. countability considerations), it is often better not to complete the σ-algebras ℱ_t with respect to P. At the same time, it is convenient to have the processes under consideration right continuous for every ξ ∈ E, not just P-almost every one. In order to make sure that we can make our processes everywhere right continuous and, at the same time, progressively measurable with respect to possibly incomplete σ-algebras ℱ_t, we will sometimes make reference to the following lemma, whose proof may be found in 4.3.3 of [S.&V.]. On the other hand, since in most cases there is either no harm in completing the σ-algebras or the asserted conclusion is clear from other considerations, we will not bother with the proof here.

(2.25) Lemma: Let {X_n: n ≥ 1} be a sequence of right continuous (P-almost surely continuous) progressively measurable functions with values in a Banach space. If

lim_{m→∞} sup_{n≥m} P(sup_{0≤t≤T} ‖X_n(t) − X_m(t)‖ ≥ ε) = 0

for every T > 0 and ε > 0, then there is a P-almost surely unique right-continuous (P-almost surely continuous) progressively measurable function X such that lim_{n→∞} P(sup_{0≤t≤T} ‖X_n(t) − X(t)‖ ≥ ε) = 0
for all T > 0 and ε > 0.

(2.26) Theorem (Doob-Meyer): Let (X(t),ℱ_t,P) be a P-almost surely continuous real-valued L²(P)-martingale. Then there is a P-almost surely unique right continuous progressively measurable A: [0,∞)×E → [0,∞) such that: A(0) = 0, t ↦ A(t) is non-decreasing and P-almost surely continuous, and (X(t)² − A(t),ℱ_t,P) is a martingale.

Proof: The uniqueness is clearly a consequence of Corollary (2.24). In proving existence, we assume, without loss in generality, that X(0) ≡ 0. Define T_k ≡ k for k ≥ 0; and, given n ≥ 1, define τ₀ ≡ 0 and, for ℓ ≥ 0, τ_{ℓ+1} = (inf{t ≥ τ_ℓ: …})∧T_k if T_{k−1} ≤ τ_ℓ < T_k. …

In addition, show that, for a Wiener process β under P and each T > 0:

Σ_{k=0}^{2ⁿ−1} (β((k+1)T/2ⁿ) − β(kT/2ⁿ))² → T (a.s.,P) as n → ∞.
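The dyadic sums just described can be simulated: for a Brownian path on [0,T] the sum of squared increments concentrates at T as the mesh is refined. A seeded sketch (the horizon, mesh size, and tolerance are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
T, n = 1.0, 16                    # 2**16 dyadic increments of [0, T]
m = 2 ** n
incs = rng.standard_normal(m) * np.sqrt(T / m)   # beta((k+1)T/2^n) - beta(kT/2^n)
qv = np.sum(incs ** 2)            # quadratic variation sum; concentrates at T
```

For the Wiener process the limit is the deterministic function t, so <β>(t) = t in the notation introduced next.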
Let Mart² (= Mart²({ℱ_t},P)) denote the space of all real-valued P-almost surely continuous L²(P)-martingales (X(t),ℱ_t,P). Clearly Mart² is a linear space. Given X ∈ Mart², we will use ⟨X⟩ to denote the associated process A constructed in Theorem (2.26). In addition, given X,Y ∈ Mart², define ⟨X,Y⟩ by:

⟨X,Y⟩ = (1/4)[⟨X+Y⟩ − ⟨X−Y⟩].

Clearly, ⟨X,Y⟩ is a right-continuous progressively measurable function which is not only of local bounded variation, with |⟨X,Y⟩|(T) ∈ L¹(P) for all T > 0, but also P-almost surely continuous.

(2.29) Exercise: Given a stopping time T and an X ∈ Mart², define X^T(t) = X(t∧T) and X_T(t) = X(t) − X^T(t), t ≥ 0. Show that X^T and X_T are elements of Mart² and that ⟨X^T⟩(t) = ⟨X⟩(t∧T) and ⟨X_T⟩(t) = ⟨X⟩(t) − ⟨X⟩(t∧T), t ≥ 0, (a.s.,P).
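The polarization formula defining ⟨X,Y⟩ is the same identity that recovers a covariance from variances: Cov(X,Y) = (1/4)[Var(X+Y) − Var(X−Y)]. A finite-sample sketch (the data are arbitrary):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 6.0])
Y = np.array([1.0, 3.0, 2.0, 5.0, 4.0])

# Polarization: 1/4 [ Var(X+Y) - Var(X-Y) ] recovers Cov(X, Y) exactly.
polarized = 0.25 * (np.var(X + Y) - np.var(X - Y))
direct = np.mean((X - X.mean()) * (Y - Y.mean()))
```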
(2.30) Theorem (Kunita & Watanabe): Given X,Y ∈ Mart², ⟨X,Y⟩ is the P-almost surely unique right continuous progressively measurable function which has local bounded variation, is P-almost surely continuous, and has the properties that ⟨X,Y⟩(0) ≡ 0 and (X(t)Y(t) − ⟨X,Y⟩(t),ℱ_t,P) is a martingale. In particular, ⟨X⟩ = ⟨X,X⟩ (a.s.,P) for all X ∈ Mart²; and, for all X,Y,Z ∈ Mart² and a,b ∈ ℝ, ⟨aX+bY,Z⟩ = a⟨X,Z⟩ + b⟨Y,Z⟩ (a.s.,P). Finally, for all X,Y ∈ Mart²:

|⟨X,Y⟩|(Γ) ≤ ⟨X⟩(Γ)^{1/2}⟨Y⟩(Γ)^{1/2}, Γ ∈ ℬ_{[0,∞)}, (a.s.,P);
⟨X+Y⟩(Γ)^{1/2} ≤ ⟨X⟩(Γ)^{1/2} + ⟨Y⟩(Γ)^{1/2}, Γ ∈ ℬ_{[0,∞)}, (a.s.,P); and
|⟨X,Y⟩|(dt) ≤ (⟨X⟩(dt) + ⟨Y⟩(dt))/2 (a.s.,P).

Proof: To prove the first assertion, simply note that XY = (1/4)[(X+Y)² − (X−Y)²], and apply Corollary (2.24). The equality ⟨X⟩ = ⟨X,X⟩ as well as the linearity assertion follow easily from uniqueness. In order to prove the rest of the theorem, it suffices to show that |⟨X,Y⟩|(Γ) ≤ ⟨X⟩(Γ)^{1/2}⟨Y⟩(Γ)^{1/2}, Γ ∈ ℬ_{[0,∞)}, (a.s.,P); and to do this we need only check that, for each 0 ≤ s < t,

|⟨X,Y⟩(t) − ⟨X,Y⟩(s)| ≤ (⟨X⟩(t) − ⟨X⟩(s))^{1/2}(⟨Y⟩(t) − ⟨Y⟩(s))^{1/2} (a.s.,P).

Furthermore, by replacing X and Y with X_s and Y_s, respectively (cf. Exercise (2.29)), we see that it is enough to prove that |⟨X,Y⟩(t)| ≤ ⟨X⟩(t)^{1/2}⟨Y⟩(t)^{1/2} (a.s.,P) for each t ≥ 0. But, by the linearity property,

0 ≤ ⟨λX ± Y/λ⟩(t) = λ²⟨X⟩(t) ± 2⟨X,Y⟩(t) + ⟨Y⟩(t)/λ², λ > 0, (a.s.,P).

Hence the desired inequality follows by the same argument as one uses to prove the ordinary Schwarz inequality. Q.E.D.
(2.31) Exercise: Given X,Y ∈ Mart_c²({F_t},P) and an {F_t}-stopping time T, show that <X^T,Y>(·) = <X,Y>(·∧T). Next, set F_t^X = σ(X(s): 0≤s≤t) and F_t^Y = σ(Y(s): 0≤s≤t). Show that X,Y ∈ Mart_c²({F_t^X×F_t^Y},P) and that, up to a P-null set, <X,Y> defined relative to ({F_t^X×F_t^Y},P) coincides with <X,Y> defined relative to ({F_t},P). Conclude that, if for some T > 0, F_T^X and F_T^Y are P-independent, then <X,Y>(t) = 0, 0 ≤ t ≤ T, (a.s.,P).
3. Stochastic Integrals, Itô's Formula, and Semi-martingales:
We continue with the notation with which we were working in section 2.
Given a right continuous, non-decreasing, P-almost surely continuous, progressively measurable function A: [0,∞)×E→[0,∞), denote by L²_loc(A,P) (= L²_loc({F_t},A,P)) the space of progressively measurable α: [0,∞)×E→ℝ¹ such that
E^P[∫_0^T α(t)²A(dt)] < ∞ for all T > 0.
Clearly, L²_loc(A,P) admits a natural metric with respect to which it becomes a Fréchet space. Given X ∈ Mart_c² and α ∈ L²_loc(<X>,P), note that there is at most one I such that:
(3.1) i) I(0) ≡ 0 and I ∈ Mart_c²; ii) <I,Y> = α<X,Y> (a.s.,P) for all Y ∈ Mart_c².
(Given a measure μ and a measurable function α, αμ denotes the measure ν such that dν = α dμ.)
Indeed, if there were two, say I and I', then we would have <I−I',I−I'> ≡ 0 and so, by Theorem (2.30), I = I' (a.s.,P). When it exists, I is denoted by ∫_0^· α(s)dX(s), and it satisfies the isometry
E^P[(∫_0^T α(s)dX(s))²] = E^P[∫_0^T α(s)²<X>(ds)], T ≥ 0.
Clearly, if both ∫_0^· α(s)dX(s) and ∫_0^· β(s)dX(s) exist, then ∫_0^· [aα(s)+bβ(s)]dX(s) exists and is equal to a∫_0^· α(s)dX(s) + b∫_0^· β(s)dX(s) (a.s.,P) for all a,b ∈ ℝ¹. Thus, we will have completed our demonstration that ∫_0^· α(s)dX(s) exists for all α ∈ L²_loc(<X>,P) once we have proved the following approximation result.
(3.4) Lemma: Let A: [0,∞)×E→[0,∞) be a non-decreasing, P-almost surely continuous, progressively measurable function with A(0) ≡ 0. Given α ∈ L²_loc(A,P), there is a sequence {α_n} ⊆ L²_loc(A,P) of bounded, P-almost surely continuous functions which tend to α in L²_loc(A,P).
Proof: Since the space of bounded elements of L²_loc(A,P) is obviously dense in L²_loc(A,P), we will assume that α is bounded.
We first handle the special case when A(t) = t for all t ≥ 0. To this end, choose ρ ∈ C_0^∞((0,1)) so that ∫ρ(t)dt = 1, and extend α to ℝ¹×E by setting α(t) = 0 for t < 0. Next, define
α_n(t) = n∫α(t−s)ρ(ns)ds for t ≥ 0 and n ≥ 1.
Then it is easy to check that {α_n} will serve.
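The mollification α_n(t) = n∫α(t−s)ρ(ns)ds just used can be sketched discretely as follows (a grid-based illustration of ours; the bump ρ and the test integrand are our own choices): smoothing a bounded, discontinuous α against n·ρ(n·) yields continuous approximants, bounded by sup|α|, whose L² distance to α shrinks as n grows.

```python
import numpy as np

# Discrete version of alpha_n(t) = n * int alpha(t - s) rho(n s) ds:
# rho is a smooth bump supported in (0,1) with integral 1, and alpha is
# extended by 0 to t < 0 (the convolution below is causal, zero-padded).
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
alpha = (t > 1.0).astype(float)          # bounded but discontinuous integrand

def mollify(a, n):
    s = np.arange(dt, 1.0 / n, dt)       # grid on the support (0, 1/n)
    rho = np.exp(-1.0 / (n * s * (1.0 - n * s)))  # bump profile, scaled to (0,1)
    rho /= rho.sum() * dt                # discrete normalization: sum rho*dt = 1
    return np.convolve(a, rho * dt, mode="full")[: len(a)]

err10 = float(np.sqrt(np.sum((mollify(alpha, 10) - alpha) ** 2) * dt))
err100 = float(np.sqrt(np.sum((mollify(alpha, 100) - alpha) ** 2) * dt))
```

The error decays like the square root of the support width 1/n for this jump discontinuity.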
To handle the general case, first note that it suffices for us to show that for each T > 0 and ε > 0 there exists a bounded P-almost surely continuous α' ∈ L²_loc(A,P) such that
E^P[∫_0^T (α(t)−α'(t))²A(dt)] < ε².
Given T and ε, choose M > 1 so that
E^P[∫_0^T α(t)²A(dt), A(T) > M−1] < (ε/2)²
and η ∈ C^∞(ℝ¹) so that χ_[0,M−1] ≤ η ≤ χ_[0,M]. Set B(t) = ∫_0^t η(A(s))²A(ds) + t, t ≥ 0, and T(t) = B^{−1}(t). Then {T(t): t ≥ 0} is a non-decreasing family of bounded stopping times. Set G_t = F_{T(t)} and β(t) = α(T(t))η(A(T(t))). Then β is a bounded {G_t}-progressively measurable function; and so, by the preceding, we can find a bounded continuous progressively measurable β' such that
E^P[∫_0^{T+M} (β(t)−β'(t))²dt] < (ε/2)².
Finally, define α'(t) = β'(B(t))η(A(t)). Then α' is a bounded P-almost surely continuous element of L²_loc(A,P), and:
E^P[∫_0^T (α(t)−α'(t))²A(dt)]^{1/2} ≤ ε/2 + E^P[∫_0^T (α(t)−β'(B(t)))²B(dt)]^{1/2}
≤ ε/2 + E^P[∫_0^{T+M} (β(t)−β'(t))²dt]^{1/2} < ε.
Q.E.D.
As a consequence of the preceding, we now know that ∫_0^· α(s)dX(s) exists for all X ∈ Mart_c² and α ∈ L²_loc(<X>,P).
(3.5) Exercise: Let X ∈ Mart_c² be given.
i) Given stopping times σ ≤ τ and α ∈ L²_loc(<X>,P), show that χ_[σ,τ)(t)α(t) ∈ L²_loc(<X>,P) and that:
∫_0^t χ_[σ,τ)(u)α(u)dX(u) = ∫_0^{t∧τ} α(u)dX(u) − ∫_0^{t∧σ} α(u)dX(u) (a.s.,P).
ii) Given β ∈ L²_loc(<X>,P) and α ∈ L²_loc(β²<X>,P), show that:
∫_0^t α(s)d(∫_0^s β(u)dX(u)) = ∫_0^t α(s)β(s)dX(s) (a.s.,P).
Our next project is the derivation of the renowned Itô formula. (Our presentation again follows that of Kunita and Watanabe.) Namely, let X = (X¹,…,X^M) ∈ (Mart_c²)^M and let Y: [0,∞)×E→ℝ^N be a P-almost surely continuous progressively measurable function of local bounded variation such that |Y|(T) = (Σ_{j=1}^N |Y^j|(T)²)^{1/2} ∈ L¹(P) for each T > 0. Given f ∈ C_b^{2,1}(ℝ^M×ℝ^N), Itô's formula is the statement that:
(3.6) f(Z(T)) − f(Z(0)) = Σ_{i=1}^M ∫_0^T ∂_{x_i}f(Z(t))dX^i(t) + Σ_{j=1}^N ∫_0^T ∂_{y_j}f(Z(t))Y^j(dt)
+ 1/2 Σ_{i,i'=1}^M ∫_0^T ∂_{x_i}∂_{x_{i'}}f(Z(t))<X^i,X^{i'}>(dt) (a.s.,P),
where Z ≡ (X,Y).
It is clear that, since (3.6) is just an identification statement, we may assume that t↦Z(t,ξ) and t↦⟪X,X⟫(t,ξ) ≡ ((<X^i,X^{i'}>(t,ξ)))_{i,i'=1}^M are continuous for all ξ ∈ E. In addition, it suffices to prove (3.6) when f ∈ C_b^∞(ℝ^{M+N}). Thus, we will proceed under these assumptions.
Given n ≥ 1, define τ_k^n, k ≥ 0, so that τ_0^n = 0 and
τ_{k+1}^n = [inf{t ≥ τ_k^n: Σ_i(<X^i>(t)−<X^i>(τ_k^n)) ∨ |Z(t)−Z(τ_k^n)| > 1/n}] ∧ (τ_k^n + 1/n) ∧ T.
Then, for each T > 0 and ξ ∈ E, τ_k^n(ξ) = T for all but a finite number of k's. Hence,
f(Z(T)) − f(Z(0)) = Σ_{k=0}^∞ (f(Z_{k+1}^n) − f(Z_k^n)), where Z_k^n = (X_k^n,Y_k^n) = Z(τ_k^n).
Clearly:
f(Z_{k+1}^n) − f(Z_k^n) = [f(X_{k+1}^n,Y_k^n) − f(X_k^n,Y_k^n)] + [f(X_{k+1}^n,Y_{k+1}^n) − f(X_{k+1}^n,Y_k^n)]
= Σ_{i=1}^M ∂_{x_i}f(Z_k^n)Δ_k^nX^i + 1/2 Σ_{i,i'=1}^M ∂_{x_i}∂_{x_{i'}}f(Z_k^n)Δ_k^n<X^i,X^{i'}>
+ Σ_{j=1}^N ∫_{τ_k^n}^{τ_{k+1}^n} ∂_{y_j}f(X_{k+1}^n,Y(t))Y^j(dt) + R_k^n,
where Δ_k^nξ ≡ ξ(τ_{k+1}^n) − ξ(τ_k^n) and
R_k^n ≡ 1/2 Σ_{i,i'=1}^M (∂_{x_i}∂_{x_{i'}}f(Z̄_k^n) − ∂_{x_i}∂_{x_{i'}}f(Z_k^n))Δ_k^nX^iΔ_k^nX^{i'}
+ 1/2 Σ_{i,i'=1}^M ∂_{x_i}∂_{x_{i'}}f(Z_k^n)(Δ_k^nX^iΔ_k^nX^{i'} − Δ_k^n<X^i,X^{i'}>),
with Z̄_k^n a point on the line joining (X_{k+1}^n,Y_k^n) to (X_k^n,Y_k^n).
By exercise (3.3),
Σ_{k=0}^∞ ∂_{x_i}f(Z_k^n)Δ_k^nX^i = ∫_0^T ∂_{x_i}f(Z^n(s))dX^i(s),
where Z^n(s) = Z_k^n for s ∈ [τ_k^n,τ_{k+1}^n) and Z^n(s) = Z(T) for s ≥ T. Since Z^n(s)→Z(s) uniformly for s ∈ [0,T], we conclude that
Σ_{k=0}^∞ ∂_{x_i}f(Z_k^n)Δ_k^nX^i → ∫_0^T ∂_{x_i}f(Z(s))dX^i(s)
in L²(P). Also, from standard integration theory,
Σ_{k=0}^∞ ∂_{x_i}∂_{x_{i'}}f(Z_k^n)Δ_k^n<X^i,X^{i'}> → ∫_0^T ∂_{x_i}∂_{x_{i'}}f(Z(s))<X^i,X^{i'}>(ds)
and
Σ_{k=0}^∞ ∫_{τ_k^n}^{τ_{k+1}^n} ∂_{y_j}f(X_{k+1}^n,Y(t))Y^j(dt) → ∫_0^T ∂_{y_j}f(Z(s))Y^j(ds)
in L¹(P). It therefore remains only to check that Σ_k R_k^n → 0 in P-measure.
First observe that
|(∂_{x_i}∂_{x_{i'}}f(Z̄_k^n) − ∂_{x_i}∂_{x_{i'}}f(Z_k^n))Δ_k^nX^iΔ_k^nX^{i'}| ≤ C[(Δ_k^nX^i)² + (Δ_k^nX^{i'})²]/n,
and therefore that
E^P[|Σ_k (∂_{x_i}∂_{x_{i'}}f(Z̄_k^n) − ∂_{x_i}∂_{x_{i'}}f(Z_k^n))Δ_k^nX^iΔ_k^nX^{i'}|] ≤ 2C E^P[|X(T)−X(0)|²]/n → 0.
At the same time:
E^P[(Σ_k ∂_{x_i}∂_{x_{i'}}f(Z_k^n)(Δ_k^nX^iΔ_k^nX^{i'} − Δ_k^n<X^i,X^{i'}>))²]
= Σ_k E^P[(∂_{x_i}∂_{x_{i'}}f(Z_k^n))²(Δ_k^nX^iΔ_k^nX^{i'} − Δ_k^n<X^i,X^{i'}>)²]
≤ C Σ_k E^P[(Δ_k^nX^iΔ_k^nX^{i'} − Δ_k^n<X^i,X^{i'}>)²]
≤ C' Σ_k E^P[(Δ_k^nX^i)⁴ + (Δ_k^nX^{i'})⁴ + (Δ_k^n<X^i>)² + (Δ_k^n<X^{i'}>)²]
≤ C'' Σ_k E^P[|Δ_k^nX|²]/n = C'' E^P[|X(T)−X(0)|²]/n → 0.
Combining these, we now see that (3.6) holds.
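The identification just established can be watched on a computer. In this sketch of ours (not part of the text) the left-endpoint sums of the derivation are applied to f(x) = x² along a simulated Brownian path: the discrete identity B(T)² = 2Σ_k B_k Δ_k B + Σ_k (Δ_k B)² is exact, and the second sum plays the role of <B>(T) = T in (3.6).

```python
import numpy as np

# Discrete-time check of Ito's formula for f(x) = x^2: the Riemann-Ito sum
# uses the LEFT endpoint, mirroring the sums over the times tau_k above.
rng = np.random.default_rng(1)
T, N = 1.0, 200_000
dB = rng.normal(0.0, np.sqrt(T / N), size=N)
B = np.concatenate([[0.0], np.cumsum(dB)])

ito_sum = float(np.sum(B[:-1] * dB))    # left-endpoint stochastic sum
qv = float(np.sum(dB ** 2))             # discrete quadratic variation ~ T
lhs = float(B[-1] ** 2)                 # f(B_T) - f(B_0) with f(x) = x^2
rhs = 2.0 * ito_sum + qv                # Ito's formula with the correction term
```

The two sides agree to rounding error, while the naive chain rule (dropping `qv`) would be off by roughly T.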
The applications of Itô's formula are innumerable. One particularly beautiful one is the following derivation, due to Kunita and Watanabe, of a theorem proved originally by Lévy.
(3.7) Theorem (Lévy): Let β ∈ (Mart_c²)^N and assume that ⟪β,β⟫(t) = tI, t ≥ 0 (i.e. <β^i,β^j>(t) = tδ^{i,j}). Then (β(t)−β(0),σ(β(s): 0≤s≤t),P) is an N-dimensional Wiener process.
Proof: We assume, without loss of generality, that β(0) ≡ 0. What we must show is that P∘β^{−1} = W; and, by Corollary (1.13), this comes down to showing that
(φ(β(t)) − 1/2∫_0^t Δφ(β(s))ds, σ(β(s): 0≤s≤t), P)
is a martingale for every φ ∈ C_0^∞(ℝ^N). Clearly this will follow if we show that (φ(β(t)) − 1/2∫_0^t Δφ(β(s))ds, F_t, P) is a martingale. But, by Itô's formula:
φ(β(t)) − φ(β(0)) − 1/2∫_0^t Δφ(β(s))ds = Σ_{i=1}^N ∫_0^t ∂_{x_i}φ(β(s))dβ^i(s),
and so the proof is complete. Q.E.D.
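Lévy's theorem can also be illustrated numerically (our own sketch, not the proof): rotating the increments of a 2-dimensional Brownian motion by a path-dependent angle produces a continuous martingale M with ⟪M,M⟫(t) = tI, so by the theorem M must again be a Brownian motion. The moments below check that M(T) is indeed standard Gaussian with independent components.

```python
import numpy as np

# Rotate each Brownian increment by an angle depending on the past of the
# path; the resulting martingale has identity bracket, hence is Brownian.
rng = np.random.default_rng(2)
T, N, paths = 1.0, 500, 20_000
dt = T / N
M_T = np.zeros((paths, 2))
for _ in range(N):
    theta = np.arctan2(M_T[:, 1], M_T[:, 0])   # path-dependent rotation angle
    c, s = np.cos(theta), np.sin(theta)
    dB = rng.normal(0.0, np.sqrt(dt), size=(paths, 2))
    M_T[:, 0] += c * dB[:, 0] - s * dB[:, 1]
    M_T[:, 1] += s * dB[:, 0] + c * dB[:, 1]

var1 = float(np.var(M_T[:, 0]))                     # ~ T = 1
kurt = float(np.mean(M_T[:, 0] ** 4) / var1 ** 2)   # ~ 3 for a Gaussian
cross = float(np.mean(M_T[:, 0] * M_T[:, 1]))       # ~ 0
```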
Given a right continuous, P-almost surely continuous, {F_t}-progressively measurable function β: [0,∞)×E→ℝ^N, we will say that (β(t),F_t,P) is an N-dimensional Brownian motion if β ∈ (Mart_c²({F_t},P))^N, β(0) = 0, and ⟪β,β⟫(t) = tI, t ≥ 0, (a.s.,P).
(3.8) Exercise:
i) Let β: [0,∞)×E→ℝ^N be a right continuous, P-almost surely continuous, progressively measurable function with β(0) = 0. Show that (β(t),F_t,P) is an N-dimensional Brownian motion if and only if
P(β(t) ∈ Γ | F_s) = ∫_Γ g(t−s,y−β(s))dy, 0 ≤ s < t, Γ ∈ B(ℝ^N),
where g(t,y) denotes the standard Gauss kernel.
ii) ⋯ <X>(T) = ∫_0^T a(s+t,x(t))dt, T ≥ 0, (a.s.,P).
Although the class Mart_c² has many pleasing properties, it is not invariant under changes of coordinates. (That is, f∘X will seldom be an element of Mart_c² simply because X is, even if f ∈ C_b^∞(ℝ¹).) There are two reasons for this, the first of which is the question of integrability. To remove this first problem, we introduce the class Mart_c^loc (= Mart_c^loc({F_t},P)) of P-almost surely continuous local martingales. Namely, we say that X ∈ Mart_c^loc if X: [0,∞)×E→ℝ¹ is a right continuous, P-almost surely continuous function for which there exists a non-decreasing sequence of stopping times σ_n with the properties that σ_n→∞ (a.s.,P) and (X^{σ_n}(t),F_t,P) is a bounded martingale for each n. It is easy to check that Mart_c^loc is a linear space. Moreover, given X ∈ Mart_c^loc, there is a P-almost surely unique non-decreasing, P-almost surely continuous, progressively measurable function <X> such that <X>(0) = 0 and X² − <X> ∈ Mart_c^loc. The uniqueness is an easy consequence of Corollary (2.24) (cf. part ii) of exercise (3.9) below). To prove existence, simply take <X>(t) = sup_n <X^{σ_n}>(t), t ≥ 0. Finally, given X,Y ∈ Mart_c^loc,
<X,Y> ≡ 1/4(<X+Y> − <X−Y>)
is the P-almost surely unique progressively measurable function of local bounded variation which is P-almost surely continuous and satisfies <X,Y>(0) = 0 and XY − <X,Y> ∈ Mart_c^loc.
(3.9) Exercise:
i) Let X: [0,∞)×E→ℝ¹ be a right continuous P-almost surely continuous progressively measurable function. Show that (X(t),F_t,P) is a martingale if and only if X ∈ Mart_c^loc and there is a non-decreasing sequence of stopping times τ_n such that τ_n→∞ (a.s.,P) and {X(t∧τ_n): n ≥ 1} is uniformly P-integrable (e.g. sup_n E^P[X(t∧τ_n)²] < ∞) for each t ≥ 0.
ii) Show that if X ∈ Mart_c^loc and ζ ≡ sup{t ≥ 0: <X>(t) = 0}, then X(t∧ζ) = X(0), t ≥ 0, (a.s.,P).
iii) Given X ∈ Mart_c^loc and a progressively measurable α with ∫_0^T α(t)²<X>(dt) < ∞ (a.s.,P) for all T ≥ 0, show that there exists an element ∫_0^· α(s)dX(s) of Mart_c^loc such that <∫_0^· α(s)dX(s),Y> = α<X,Y> for all Y ∈ Mart_c^loc, and that, up to a P-null set, there is only one such element of Mart_c^loc. The quantity ∫_0^· α(s)dX(s) is again called the (Itô) stochastic integral of α with respect to X.
iv) Suppose that X ∈ (Mart_c^loc)^M and that Y: [0,∞)×E→ℝ^N is a right-continuous P-almost surely continuous progressively measurable function of local bounded variation. Set Z = (X,Y) and let f ∈ C^{2,1}(ℝ^M×ℝ^N) be given. Show that all the quantities in (3.6) are still well-defined and that (3.6) continues to hold. We will continue to refer to this extension of (3.6) as Itô's formula.
(3.10) Lemma: Let X ∈ Mart_c^loc and let σ ≤ τ be stopping times such that <X>(τ) − <X>(σ) ≤ A for some A < ∞. Then,
(3.11) P(sup_{σ≤t≤τ}|X(t)−X(σ)| ≥ R) ≤ 2exp(−R²/2A).
In particular, there exists for each q ∈ (0,∞) a universal C_q < ∞ such that
(3.12) E^P[sup_{σ≤t≤τ}|X(t)−X(σ)|^q] ≤ C_q A^{q/2}.
Proof: By replacing X with X_σ = X − X^σ, we see that it suffices to treat the case when X(0) ≡ 0, σ ≡ 0 and τ ≡ ∞. For n ≥ 1, define ζ_n = inf{t ≥ 0: |X(t)| ≥ n}, set X_n = X^{ζ_n}, and let Y_λ = exp(λX_n − (λ²/2)<X_n>). Then, by Itô's formula, Y_λ = 1 + λ∫_0^· Y_λ(s)dX_n(s) ∈ Mart_c². Hence, by Doob's inequality:
P(sup_{0≤t≤T}X_n(t) ≥ R) ≤ P(sup_{0≤t≤T}Y_λ(t) ≥ exp(λR−λ²A/2)) ≤ exp(−λR+λ²A/2)
for all T > 0 and λ > 0. After minimizing the right hand side with respect to λ > 0, letting n and T tend to ∞, and then repeating the argument with −λ replacing λ, we obtain the required estimate. Clearly, (3.12) is an immediate consequence of (3.11). Q.E.D.
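For X a standard Brownian motion on [0,1] (so that A = 1 works in the lemma), (3.11) reads P(sup_{t≤1}|B(t)| ≥ R) ≤ 2e^{−R²/2}. A quick Monte Carlo sanity check (our own sketch; the time grid slightly undercounts the continuous supremum):

```python
import numpy as np

# Estimate P( sup_{t<=1} |B_t| >= R ) on a time grid and compare with the
# Gaussian bound 2*exp(-R^2/2) from (3.11) (here <X>(1) = 1, so A = 1).
rng = np.random.default_rng(3)
N, paths, R = 400, 10_000, 2.0
dB = rng.normal(0.0, np.sqrt(1.0 / N), size=(paths, N))
sup_abs = np.max(np.abs(np.cumsum(dB, axis=1)), axis=1)
p_hat = float(np.mean(sup_abs >= R))
bound = float(2.0 * np.exp(-R ** 2 / 2.0))   # ~ 0.27 for R = 2
```

The empirical probability (about a third of the bound here) confirms that (3.11), though not sharp, has the right Gaussian order.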
(3.13) Exercise:
i) Suppose that X ∈ (Mart_c^loc)^M and that τ is a stopping time for which Σ_{i=1}^M <X^i>(τ) ≤ A (a.s.,P) for some A < ∞. Let α: [0,∞)×E→ℝ^M be a progressively measurable function with the property that
|α(t)| ≤ B·exp[sup_{0≤s≤t}|X(s)−X(0)|^γ] (a.s.,P)
for each t ≥ 0 and some γ ∈ [0,2) and B < ∞. Show that (Σ_{i=1}^M ∫_0^{t∧τ} α^i(s)dX^i(s),F_t,P) is a martingale which is L^q(P)-bounded for every q ∈ [1,∞). In particular, show that if Σ_{i=1}^M <X^i>(T) is P-almost surely bounded for each T > 0 and if f ∈ C^{2,1}(ℝ^M×ℝ^N) satisfies an estimate of the same sort, then the martingale term in Itô's formula for f(Z(t)) is again a martingale.
ii) Let Y: [0,∞)×E→ℝ¹ be a measurable function such that t↦Y(t,ξ) is right-continuous for every ξ ∈ E, and assume that for each T > 0 a moment estimate of the form E^P[|Y(t)−Y(s)|⁴] ≤ C(t−s)², 0 ≤ s < t ≤ T, holds. Show that, for any β ∈ (0,1] and T ∈ ℤ⁺, sup_{0≤s<t≤T}|Y(t)−Y(s)|/(t−s)^β is controlled accordingly.
iii) Let X ∈ Mart_c², set T(t) = inf{s ≥ 0: <X>(s) > t}, and define Z: [0,∞)×E→ℝ¹ so that:
Z(t) = X(T(t)) − X(0) if T(t) < ∞, and Z(t) = X(∞) − X(0) if T(t) = ∞,
where X(∞) ≡ lim_{s→∞}X(s) when this limit exists and ≡ X(0) otherwise. Show that: <X>(s) is a {G_t}-stopping time for each s ≥ 0 (with G_t ≡ F_{T(t)}), Z(0) = 0 (a.s.,P), (Z(t),G_t,P) is a martingale, and E^P[|Z(t)−Z(s)|⁴] ≤ C₄(t−s)², 0 ≤ s < t < ∞ (where C₄ is the constant in (3.12)).
We already know (cf. (3.12)) that E^P[(X−X(0))*(T)^q] ≤ C_q‖<X>(T)‖_{L^∞(P)}^{q/2}. At least when q ∈ [2,∞), we are now going to prove a refinement of this result. The inequalities which we have in mind are referred to as Burkholder's inequality; however, the proof which we are about to give is due to A. Garsia and takes full advantage of the fact that we are dealing with continuous martingales.
(3.17) Theorem (Burkholder's Inequality): For each q ∈ [2,∞), all X ∈ Mart_c^loc, and all stopping times T:
(3.18) a_q‖<X>(T)^{1/2}‖_{L^q(P)} ≤ ‖(X−X(0))*(T)‖_{L^q(P)} ≤ A_q‖<X>(T)^{1/2}‖_{L^q(P)},
where a_q = (2q)^{−1/2} and A_q = [(q')^q·q(q−1)/2]^{1/2} (1/q' ≡ 1 − 1/q).
Proof: First note that it suffices to prove (3.18) when X(0) = 0 and T, X(T), and <X>(T) are all bounded. Second, by replacing X with X^T if necessary, we can reduce to the case when T ≡ T < ∞ is deterministic and X and <X> are bounded. Hence, we will prove (3.18) under these conditions. In particular, this means that X ∈ Mart_c².
To prove the right hand side, apply Itô's formula to write:
|X(T)|^q = q∫_0^T sgn(X(t))|X(t)|^{q−1}dX(t) + (q(q−1)/2)∫_0^T |X(t)|^{q−2}<X>(dt).
Then, by (2.11):
(1/q')^q E^P[X*(T)^q] ≤ E^P[|X(T)|^q] = (q(q−1)/2)E^P[∫_0^T |X(t)|^{q−2}<X>(dt)]
≤ (q(q−1)/2)E^P[X*(T)^{q−2}<X>(T)] ≤ (q(q−1)/2)E^P[X*(T)^q]^{1−2/q}E^P[<X>(T)^{q/2}]^{2/q},
from which the right hand side of (3.18) is immediate.
To prove the left hand side of (3.18), set Y(T) = ∫_0^T <X>(t)^{(q−2)/4}dX(t) and note that, by Itô's formula:
X(T)<X>(T)^{(q−2)/4} = Y(T) + ∫_0^T X(t)d[<X>(t)^{(q−2)/4}].
Hence:
|Y(T)| ≤ 2X*(T)<X>(T)^{(q−2)/4}.
At the same time:
<Y>(T) = ∫_0^T <X>(t)^{(q−2)/2}<X>(dt) = (2/q)<X>(T)^{q/2}.
Thus:
E^P[<X>(T)^{q/2}] = (q/2)E^P[<Y>(T)] = (q/2)E^P[Y(T)²] ≤ 2qE^P[X*(T)²<X>(T)^{(q−2)/2}]
≤ 2qE^P[X*(T)^q]^{2/q}E^P[<X>(T)^{q/2}]^{1−2/q}.
Q.E.D.
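A numerical illustration of (3.18) (our own sketch; A_q is computed from the constant stated in the theorem, which is reconstructed here and should be treated as such): for Brownian motion on [0,1] one has <X>(1) ≡ 1, so with q = 4 the inequality pins ‖X*(1)‖_{L⁴} between a₄ = 8^{−1/2} and A₄.

```python
import numpy as np

# Monte Carlo for || sup_{t<=1} |B_t| ||_{L^4}; for Brownian motion the
# bracket <X>(1) = 1 identically, so (3.18) reduces to a_q <= l4 <= A_q.
rng = np.random.default_rng(4)
q = 4.0
N, paths = 500, 10_000
dB = rng.normal(0.0, np.sqrt(1.0 / N), size=(paths, N))
x_star = np.max(np.abs(np.cumsum(dB, axis=1)), axis=1)   # X*(1)
l4 = float(np.mean(x_star ** q) ** (1.0 / q))

a_q = (2.0 * q) ** -0.5                      # lower constant, (2q)^{-1/2}
qp = q / (q - 1.0)                           # conjugate exponent q'
A_q = (qp ** q * q * (q - 1.0) / 2.0) ** 0.5  # upper constant (as reconstructed)
```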
Remark: It turns out that (3.18) actually holds for all q ∈ (0,∞) with appropriate choices of a_q and A_q. When q ∈ (1,2], this is again a result due to D. Burkholder; for q ∈ (0,1], it was first proved by D. Burkholder and R. Gundy using a quite intricate argument. However, for continuous martingales, A. Garsia showed that the proof for q ∈ (0,2] can again be greatly simplified by clever application of Itô's formula (cf. Theorem 3.1 in Stochastic Differential Equations and Diffusion Processes by N. Ikeda and S. Watanabe, North Holland, 1981).
Before returning to our main line of development, we will take up a particularly beautiful application of Itô's formula to the study of Brownian paths.
(3.19) Theorem: Let (β(t),F_t,P) be a one-dimensional Brownian motion and assume that the F_t's are P-complete. Then there exists a P-almost surely unique function ℓ: [0,∞)×ℝ¹×E→[0,∞) such that:
i) For each x ∈ ℝ¹, (t,ξ)↦ℓ(t,x,ξ) is progressively measurable; for each ξ ∈ E, (t,x)↦ℓ(t,x,ξ) is continuous; and, for each (x,ξ) ∈ ℝ¹×E, ℓ(0,x,ξ) = 0 and t↦ℓ(t,x,ξ) is non-decreasing.
ii) For all bounded measurable φ: ℝ¹→ℝ¹:
(3.20) ∫φ(y)ℓ(t,y)dy = 1/2∫_0^t φ(β(s))ds, t ≥ 0, (a.s.,P).
Moreover, for each y ∈ ℝ¹:
(3.21) ℓ(t,y) = (β(t)−y)∨0 − (β(0)−y)∨0 − ∫_0^t χ_[y,∞)(β(s))dβ(s), t ≥ 0, (a.s.,P).
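Formulas (3.20) and (3.21) can be compared along a single simulated path (a discretized sketch of ours; all tolerances are loose because both sides are only approximated): the halved occupation-time density of {s ≤ t: |β(s)−y| < ε}, as in (3.20), should match the Tanaka-type right-hand side of (3.21).

```python
import numpy as np

# Occupation-time approximation of the local time l(t, y) of (3.20) versus
# the discretized right-hand side of (3.21), along one Brownian path.
rng = np.random.default_rng(5)
T, N, y, eps = 1.0, 1_000_000, 0.1, 0.01
dt = T / N
dB = rng.normal(0.0, np.sqrt(dt), N)
B = np.concatenate([[0.0], np.cumsum(dB)])

# (3.20) with phi ~ (1/(2 eps)) * indicator(|. - y| < eps):
occ = 0.5 * float(np.sum(np.abs(B[:-1] - y) < eps)) * dt / (2.0 * eps)
# (3.21): (B_t - y)^+ - (B_0 - y)^+ - int chi_[y,inf)(B) dB, left-endpoint sum
tanaka = (max(B[-1] - y, 0.0) - max(B[0] - y, 0.0)
          - float(np.sum((B[:-1] >= y) * dB)))
```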
Proof: Clearly i) and ii) uniquely determine ℓ. To see how one might proceed to construct ℓ, note that (3.20) can be interpreted as the statement that "ℓ(t,y) = 1/2∫_0^t δ(β(s)−y)ds", where δ is the Dirac δ-function. This interpretation explains the origin of (3.21). Indeed, δ(·−y) is the second distributional derivative of x↦(x−y)∨0, and χ_[y,∞) is the first. Hence, (3.21) is precisely the expression for ℓ predicted by Itô's formula. In order to justify this line of reasoning, it will be necessary to prove that there is a version of the right hand side of (3.21) which has the properties demanded by i).
To begin with, for fixed y, let t↦ℓ(t,y) be the right hand side of (3.21). We will first check that t↦ℓ(t,y) is P-almost surely non-decreasing. To this end, choose ρ ∈ C_0^∞(ℝ¹)⁺ having integral 1, and define f_n(x) = n∫ρ(n(x−ζ))(ζ∨y)dζ for n ≥ 1. Then, by Itô's formula:
f_n(β(t)) − f_n(β(0)) − ∫_0^t f_n'(β(s))dβ(s) = 1/2∫_0^t f_n''(β(s))ds (a.s.,P).
Because f_n'' ≥ 0, we conclude that the left hand side of the preceding is P-almost surely non-decreasing as a function of t. In addition, an easy calculation shows that the left hand side tends, P-almost surely, to ℓ(·,y) uniformly on finite intervals. Thus t↦ℓ(t,y) is P-almost surely non-decreasing.
We next show that, for each y, ℓ(·,y) can be modified on a set of P-measure 0 in such a way that the modified function is continuous with respect to (t,y). Using (1.2.12) in the same way as was suggested in the hint for part ii) of (3.14), one sees that this reduces to checking a suitable moment estimate.
⋯ and if Z' is a second element of S.Mart_c with associated parts X' and Y', we use <Z,Z'> to denote <X,X'>. Also, if α: [0,∞)×E→ℝ¹ is a progressively measurable function satisfying
∫_0^T α(t)²<Z,Z>(dt) ∨ ∫_0^T |α(t)||Y|(dt) < ∞, T > 0, (a.s.,P),
we define
(3.24) ∫_0^· α(s)dZ(s)
to be ∫_0^· α(s)dX(s) + ∫_0^· α(s)Y(ds).
Notice that in this notation, Itô's formula for P-almost surely continuous semi-martingales becomes
(3.25) f(Z(t)) − f(Z(0)) = Σ_{i=1}^N ∫_0^t ∂_{z_i}f(Z(s))dZ^i(s) + 1/2 Σ_{i,j=1}^N ∫_0^t ∂_{z_i}∂_{z_j}f(Z(s))<Z^i,Z^j>(ds)
for Z ∈ (S.Mart_c)^N and f ∈ C²(ℝ^N).
(3.26) Exercise: Let Z ∈ (S.Mart_c)^N and f ∈ C²(ℝ^N). Show that, for any Y ∈ S.Mart_c,
<f∘Z,Y> = Σ_{i=1}^N [(∂_{z_i}f)∘Z]·<Z^i,Y> (a.s.,P).
We conclude this section with a brief discussion of the Stratonovich integral as interpreted by Itô. Namely, given X,Y ∈ S.Mart_c, define the Stratonovich integral ∫_0^· X(s)∘dY(s) of X with respect to Y (the "∘" in front of the dY(s) is put there to emphasize that this is not an Itô integral) to be the element of S.Mart_c given by
∫_0^· X(s)dY(s) + 1/2<X,Y>(·).
Although the Stratonovich integral appears to be little more than a strange exercise in notation, it turns out to be a very useful device. The origin of all its virtues is contained in the form which Itô's formula takes when Stratonovich integrals are used. Namely, from (3.25) and (3.26), we see that Itô's formula becomes the fundamental theorem of calculus:
(3.27) f(Z(t)) − f(Z(0)) = Σ_{i=1}^N ∫_0^t ∂_{z_i}f(Z(s))∘dZ^i(s)
for all Z ∈ (S.Mart_c)^N and f ∈ C³(ℝ^N). The major drawback to the Stratonovich integral is that it requires that the integrand be a semimartingale (this is the reason why we restricted f to lie in C³(ℝ^N)). However, in some circumstances, Itô has shown how even this drawback can be overcome.
(3.28) Exercise: Given X,Y,Z ∈ S.Mart_c, show that:
(3.29) ∫_0^t X(s)∘d[∫_0^s Y(u)∘dZ(u)] = ∫_0^t (XY)(s)∘dZ(s).
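The difference between the two integrals is already visible in the simplest example ∫B dB (a numerical sketch of ours): the midpoint (Stratonovich) sums obey the ordinary chain rule, while the left-endpoint (Itô) sums pick up the correction ½<B>(T), exactly as in the definition above.

```python
import numpy as np

# Compare left-endpoint (Ito) and midpoint (Stratonovich) sums of B against
# dB along one simulated Brownian path; discretely,
#   strat = ito + (1/2) * sum (dB)^2   and   strat = B(T)^2 / 2   exactly.
rng = np.random.default_rng(6)
T, N = 1.0, 500_000
dB = rng.normal(0.0, np.sqrt(T / N), N)
B = np.concatenate([[0.0], np.cumsum(dB)])

ito = float(np.sum(B[:-1] * dB))                     # left endpoint
strat = float(np.sum(0.5 * (B[:-1] + B[1:]) * dB))   # midpoint rule
qv = float(np.sum(dB ** 2))                          # ~ <B>(T) = T
```

So the Stratonovich integral equals B(T)²/2 (ordinary calculus), while the Itô integral equals B(T)²/2 − <B>(T)/2.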
III. THE MARTINGALE PROBLEM FORMULATION OF DIFFUSION THEORY:
1. Formulation and Some Basic Facts:
Recall the notation Ω, M, and {M_t} introduced at the beginning of section I.3. Given bounded measurable functions a: [0,∞)×ℝ^N→S⁺(ℝ^N) (cf. the second paragraph of (II.1)) and b: [0,∞)×ℝ^N→ℝ^N, define
L_t = 1/2 Σ_{i,j=1}^N a^{ij}(t,x)∂_{x_i}∂_{x_j} + Σ_{i=1}^N b^i(t,x)∂_{x_i}.
Motivated by the results in Corollary (II.1.13), we now pose the martingale problem for {L_t}. Namely, we say that P ∈ M₁(Ω) solves the martingale problem for {L_t} starting from (s,x) ∈ [0,∞)×ℝ^N, and write P ∈ M.P.((s,x);{L_t}), if:
(M.P.) i) P(x(0)=x) = 1, and ii) (φ(x(t)) − ∫_0^t [L_{s+u}φ](x(u))du, M_t, P) is a martingale for every φ ∈ C_0^∞(ℝ^N).
Given φ ∈ C²(ℝ^N), set
X_{s,φ}(t) = φ(x(t)) − ∫_0^t [L_{s+u}φ](x(u))du.
If P ∈ M.P.((s,x);{L_t}), then X_{s,φ} ∈ Mart_c({M_t},P) for every φ ∈ C_0^∞(ℝ^N). To compute <X_{s,φ}>, note that:
(1.1) X_{s,φ}(t)² = φ(x(t))² − 2φ(x(t))∫_0^t [L_{s+u}φ](x(u))du + [∫_0^t [L_{s+u}φ](x(u))du]²
= X_{s,φ²}(t) + ∫_0^t [L_{s+u}φ²](x(u))du − 2X_{s,φ}(t)∫_0^t [L_{s+u}φ](x(u))du − [∫_0^t [L_{s+u}φ](x(u))du]².
Applying Lemma (II.2.22) (or Itô's formula), we see that:
(X_{s,φ}(t)² − ∫_0^t {[L_{s+u}φ²](x(u)) − 2[φL_{s+u}φ](x(u))}du, M_t, P)
is a martingale. Noting that [L_{s+u}φ²](y) − 2[φL_{s+u}φ](y) = (∇φ,a∇φ)(s+u,y), we conclude that:
(1.2) <X_{s,φ}>(t) = ∫_0^t (∇φ,a∇φ)(s+u,x(u))du, (a.s.,P).
Remark: As an immediate consequence of (1.2), we see that when a ≡ 0, X_{s,φ}(t) = φ(x), t ≥ 0, (a.s.,P) for each φ ∈ C_0^∞(ℝ^N). From this it is clear that:
x(t,ω) = x + ∫_0^t b(s+u,x(u,ω))du, t ≥ 0,
for P-almost every ω ∈ Ω. In other words, when a ≡ 0 and P ∈ M.P.((s,x);{L_t}), P-almost every ω ∈ Ω is an integral curve of the time dependent vector field b(s+·,·) starting from x. This is, of course, precisely what we would expect on the basis of (II.1.16) and (II.1.17).
Again suppose that P ∈ M.P.((s,x);{L_t}). Given φ ∈ C²(ℝ^N), note that the quantity X_{s,φ} in (1.1) is an element of Mart_c^loc({M_t},P). Indeed, by an easy approximation procedure, it is clear that X_{s,φ} ∈ Mart_c({M_t},P) when φ ∈ C_b²(ℝ^N). For general φ ∈ C²(ℝ^N), set σ_n = inf{t ≥ 0: |x(t)| ≥ n} and choose φ_n ∈ C_0^∞(ℝ^N) so that φ_n(y) = φ(y) for |y| ≤ n+1, n ≥ 1. Then X_{s,φ}^{σ_n} = X_{s,φ_n}^{σ_n} and X_{s,φ_n} ∈ Mart_c²({M_t},P). At the same time, it is clear that (1.2) continues to hold for all φ ∈ C²(ℝ^N). In particular, if:
(1.3) x̄(t) = x(t) − ∫_0^t b(s+u,x(u))du, t ≥ 0,
then x̄ ∈ (Mart_c^loc({M_t},P))^N and
(1.4) <x̄^i,x̄^j>(t) = ∫_0^t a^{ij}(s+u,x(u))du, t ≥ 0, (a.s.,P).
Using (1.4) and applying Lemma (II.3.10) (cf. the proof of (II.1.6) as well), we now see that:
(1.5) P(sup_{s≤t≤T}|x(t)−x(s)| ≥ R) ≤ ⋯,
a Gaussian-type tail estimate in terms of the bounds on a and b.
⋯ E^W[Δ_n(T)²] ≤ K(T)ⁿ/n!, where K(T) = (8 + 2T)C(T)²; and so:
Σ_{n=0}^∞ E^W[sup_{0≤t≤T}|θ_{n+1}(t) − θ_n(t)|²]^{1/2} < ∞,
from which it follows that {θ_n(·)} converges, uniformly on [0,T], (a.s.,W).
Finally, suppose that a Brownian motion (β(t),G_t,Q) on (E,G,Q) and a right continuous, {G_t}-progressively measurable solution X to (2.7) are given. Without loss in generality, we assume that β(·,ξ) is continuous for all ξ ∈ E. Set Y = ⋯. Then, as a consequence of ii) in exercise (2.4) and the fact that Q∘β^{−1} = W:
Y(T) = ∫_0^T σ(t,Y(t))dβ(t) + ∫_0^T b(t,Y(t))dt, T ≥ 0, (a.s.,Q).
Hence, proceeding in precisely the same way as we did above, we arrive at:
sup_{0≤t≤T} E^Q[|X(t) − Y(t)|²] ≤ K(T)∫_0^T E^Q[sup_{0≤s≤t}|X(s) − Y(s)|²]dt, T ≥ 0,
and so, by Gronwall's inequality, X = Y (a.s.,Q).
Suppose that a(ξ) ≥ εI and ‖a(η) − a(ξ)‖_{H.S.} ≤ C|η − ξ| for some ε > 0 and C < ∞ and for all ξ,η ∈ ℝ¹. Then
(2.14) ‖a^{1/2}(η) − a^{1/2}(ξ)‖_{H.S.} ≤ C|η − ξ|/(2ε^{1/2})
for all ξ,η ∈ ℝ¹. Finally, suppose that ξ ∈ ℝ¹ ↦ a(ξ) ∈ S⁺(ℝ^N) is a twice continuously differentiable map and that ‖a''(ξ)‖_op ≤ C < ∞ for all ξ ∈ ℝ¹. Then
(2.15) ‖a^{1/2}(η) − a^{1/2}(ξ)‖_{H.S.} ≤ N(C/2)^{1/2}|η − ξ|
for all ξ,η ∈ ℝ¹.
Proof: Equation (2.13) is a simple application of the spectral theorem, as is the equation a^{1/2} = lim_{ε↓0}(a+εI)^{1/2}. From these, it is clear that a ∈ S⁺(ℝ^N)↦a^{1/2} has the asserted regularity properties and that a ∈ S⁺(ℝ^N)↦a^{1/2} is measurable. In proving (2.14), we assume, without loss in generality, that ξ↦a(ξ) is continuously differentiable and that ‖a'(ξ)‖_{H.S.} ≤ C for all ξ ∈ ℝ¹, and we will show that, for each ξ ∈ ℝ¹:
(2.16) ‖(a^{1/2})'(ξ)‖_{H.S.} ≤ (1/2ε^{1/2})‖a'(ξ)‖_{H.S.}.
In proving (2.16), we may and will assume that a(ξ) is diagonal at ξ. Noting that a(η) = a^{1/2}(η)a^{1/2}(η), we see that:
(2.17) ((a^{1/2})'(ξ))_{ij} = (a'(ξ))_{ij}/(a_{ii}(ξ)^{1/2} + a_{jj}(ξ)^{1/2}),
and clearly (2.16) follows from this, since a_{ii}(ξ)^{1/2} + a_{jj}(ξ)^{1/2} ≥ 2ε^{1/2}. Finally, to prove (2.15), we will show that if a(ξ) > 0, then ‖(a^{1/2})'(ξ)‖_{H.S.} ≤ N(C/2)^{1/2}.
⋯ σ_{n+1}(ω) ≡ inf{t ≥ σ_n(ω): (t,x(t,ω)) ∉ U_{ℓ_n(ω)}}, where ℓ_n(ω) ≡ ℓ(σ_n(ω),x(σ_n(ω),ω)). Finally, define Q_0 accordingly; and
Q_{n+1} ≡ [Q_n ⊗_{σ_n} P_n]∘(x(·∧σ_{n+1}))^{−1},
where P_n ≡ P^{ℓ_n(ω)}_{σ_n(ω),x(σ_n(ω),ω)} if σ_n(ω) < ∞.
(3.3) Lemma: For each n ≥ 0 and all φ ∈ C_0^∞(ℝ^N):
(φ(x(t)) − ∫_0^t [L_u^n φ](x(u))du, M_t, Q_n)
is a martingale, where L_t^n ≡ χ_[0,σ_n)(t)L_t, t ≥ 0.
Proof: We work by induction on n ≥ 0. Clearly there is nothing to prove when n = 0. Now assume the assertion for n, and note that, by Theorem (1.18), we will know that it holds for n+1 as soon as we check the corresponding property for each ω ∈ {σ_n < ∞}. ⋯ Thus, {Q_n} has precisely one limit Q; and clearly Q↾M_{σ_n} = Q_n↾M_{σ_n} for each n ≥ 0. Finally, we see from Lemma (3.3) that
E^Q[φ(x(t∧σ_n)) − φ(x(s∧σ_n)) − ∫_{s∧σ_n}^{t∧σ_n}[L_uφ](x(u))du, Γ] = 0
for all φ ∈ C_0^∞(ℝ^N), 0 ≤ s < t, and Γ ∈ M_s; and therefore, upon letting n→∞, Q ∈ M.P.((0,0);{L_t}).
(3.5) Theorem: Let a: [0,∞)×ℝ^N→S⁺(ℝ^N) and b: [0,∞)×ℝ^N→ℝ^N be bounded measurable functions and define t↦L_t accordingly. If the martingale problem for {L_t} is locally well-posed, then it is in fact well-posed.
Proof: Clearly, it is sufficient for us to prove that M.P.((0,0);{L_t}) contains precisely one element. Since we already know that the Q constructed in Lemma (3.4) is one element of M.P.((0,0);{L_t}), it remains to show that it is the only one. To this end, suppose that P is a second one. Then, by Lemma (3.1), P↾M_{σ_0} = Q↾M_{σ_0}. Next, assume that P↾M_{σ_n} = Q↾M_{σ_n}, and let ω↦P_ω be a r.c.p.d. of P|M_{σ_n}. Then, for P-almost every ω ∈ {σ_n < ∞}, the time τ_ω(ω') ≡ inf{t ≥ 0: (t,x(t,ω')) ∉ U_{ℓ_n(ω)}} satisfies σ_{n+1}(ω') = σ_n(ω) + τ_ω(Θ_{σ_n(ω)}ω') for P_ω-almost every ω'. (Recall that Θ_t: Ω→Ω is the time shift map.) Combining these, we see that
P_ω↾M_{σ_{n+1}} = [δ_ω ⊗_{σ_n(ω)} P^{ℓ_n(ω)}_{σ_n(ω),x(σ_n(ω),ω)}]↾M_{σ_{n+1}}
for P-almost every ω ∈ {σ_n < ∞}, and hence that P↾M_{σ_{n+1}} = Q↾M_{σ_{n+1}}. ⋯
Further, assume that the second derivatives of a and the first derivatives of b at x = 0 are uniformly bounded for t in compact intervals. Then, the martingale problem for the associated {L_t} is well-posed and the corresponding family {P_{s,x}: (s,x) ∈ [0,∞)×ℝ^N} is continuous.
4. The Cameron-Martin-Girsanov Transformation:
It is clear on analytic grounds that if the coefficient matrix a is strictly positive definite then the first order part of the operator L_t is a lower order perturbation away from its principal part
(4.1) L_t^0 = 1/2 Σ_{i,j=1}^N a^{ij}(t,y)∂_{y_i}∂_{y_j}.
Hence, one should suspect that, in this case, the martingale problems corresponding to {L_t^0} and {L_t} are closely related. In this section we will confirm this suspicion. Namely, we are going to show that when a is uniformly positive definite, then, at least over finite time intervals, P's in M.P.((s,x);{L_t}) differ from P's in M.P.((s,x);{L_t^0}) by a quite explicit Radon-Nikodym factor.
(4.2) Lemma: Let (R(t),M_t,P) be a non-negative martingale with R(0) ≡ 1. Then there is a unique Q ∈ M₁(Ω) such that Q↾M_T = R(T)·P↾M_T for each T ≥ 0.
Proof: The uniqueness assertion is obvious. To prove the existence, define Q_n = R(n)·P↾M_n for n ≥ 0. Then Q_{n+1}↾M_n = Q_n↾M_n; from which it is clear that {Q_n: n ≥ 0} is relatively compact in M₁(Ω). In addition, one sees that any limit of {Q_n: n ≥ 0} must have the required property. Q.E.D.
(4.3) Lemma: Let (R(t),M_t,P) be a P-almost surely continuous strictly positive martingale satisfying R(0) ≡ 1. Define Q ∈ M₁(Ω) accordingly as in Lemma (4.2) and set ℛ = log R. Then (1/R(t),M_t,Q) is a Q-almost surely continuous strictly positive martingale, and P↾M_T = (1/R(T))·Q↾M_T for all T ≥ 0. Moreover, ℛ ∈ S.Mart_c({M_t},P),
(4.4) ℛ(T) = ∫_0^T (1/R(t))dR(t) − 1/2∫_0^T (1/R(t))²<R,R>(dt)
(a.s.,P) for T ≥ 0; and X ∈ Mart_c^loc({M_t},P) if and only if X − <X,ℛ> ∈ Mart_c^loc({M_t},Q). In particular, S.Mart_c({M_t},P) = S.Mart_c({M_t},Q). Finally, if X,Y ∈ S.Mart_c({M_t},P), then, up to a P,Q-null set, <X,Y> is the same whether it is computed relative to ({M_t},P) or to ({M_t},Q). In particular, given X ∈ S.Mart_c({M_t},P) and an {M_t}-progressively measurable α: [0,∞)×Ω→ℝ¹ satisfying ∫_0^T α(t)²<X,X>(dt) < ∞ (a.s.,P) for all T > 0, the quantity ∫_0^· α dX is, up to a P,Q-null set, the same whether it is computed relative to ({M_t},P) or to ({M_t},Q).
Proof: The first assertion requiring comment is that (4.4) holds, from which it is immediate that ℛ ∈ S.Mart_c({M_t},P). But applying Itô's formula to log(R(t)+ε) for ε > 0 and then letting ε↓0, we obtain (4.4) in the limit.
In proving that X ∈ Mart_c^loc({M_t},P) implies that X_ℛ ≡ X − <X,ℛ> ∈ Mart_c^loc({M_t},Q), we may and will assume that R, 1/R, and X are all bounded. Note that, by (4.4), <X,ℛ>(dt) = (1/R(t))<X,R>(dt); and that, by the product rule,
R(T)<X,ℛ>(T) = ∫_0^T <X,ℛ>(t)dR(t) + ∫_0^T R(t)<X,ℛ>(dt) = ∫_0^T <X,ℛ>(t)dR(t) + <X,R>(T),
so that (R(t)<X,ℛ>(t) − <X,R>(t), M_t, P) is a martingale. Since (R(t)X(t) − <X,R>(t), M_t, P) is also a martingale, we find, for 0 ≤ t₁ < t₂ and A ∈ M_{t₁}:
E^Q[X_ℛ(t₂),A] = E^P[R(t₂)X(t₂) − <X,R>(t₂),A] − E^P[R(t₂)<X,ℛ>(t₂) − <X,R>(t₂),A]
= E^P[R(t₁)X(t₁) − <X,R>(t₁),A] − E^P[R(t₁)<X,ℛ>(t₁) − <X,R>(t₁),A] = E^Q[X_ℛ(t₁),A].
We have now shown that X_ℛ ∈ Mart_c^loc({M_t},Q) whenever X ∈ Mart_c^loc({M_t},P), and therefore that S.Mart_c({M_t},P) ⊆ S.Mart_c({M_t},Q). Because the roles of P and Q are symmetric, we will have proved the opposite inclusion as soon as we show that <X,Y> is the same under ({M_t},P) and ({M_t},Q) for all X,Y ∈ S.Mart_c({M_t},P).
Finally, let X,Y ∈ Mart_c^loc({M_t},P). To see that <X,Y> is the same for ({M_t},P) and ({M_t},Q), we must show that X_ℛY_ℛ − <X,Y>_P ∈ Mart_c^loc({M_t},Q) (where the subscript P is used to emphasize that <X,Y> has been computed relative to ({M_t},P)). However, by Itô's formula:
X_ℛY_ℛ(T) = X_ℛY_ℛ(0) + ∫_0^T X_ℛ(t)dY_ℛ(t) + ∫_0^T Y_ℛ(t)dX_ℛ(t) + <X_ℛ,Y_ℛ>_P(T), T ≥ 0, (a.s.,P),
and <X_ℛ,Y_ℛ>_P = <X,Y>_P. Thus, it remains to check that ∫X_ℛdY_ℛ and ∫Y_ℛdX_ℛ are elements of Mart_c^loc({M_t},Q). But:
∫_0^· X_ℛdY_ℛ = ∫_0^· X_ℛdY − ∫_0^· X_ℛd<Y,ℛ> = [∫_0^· X_ℛdY] − <∫_0^· X_ℛdY, ℛ> ∈ Mart_c^loc({M_t},Q),
and, by symmetry, the same is true of ∫_0^· Y_ℛdX_ℛ. Q.E.D.
(4.5) Exercise: A more constructive proof that <X,Y> is the same under P and Q can be based on the observation that <X>(T) can be expressed in terms of the quadratic variation of X over [0,T] (cf. exercises (II.2.28) and (II.3.14)).
(4.6) Theorem (Cameron-Martin-Girsanov): Let a: [0,∞)×ℝ^N→S⁺(ℝ^N) and b,c: [0,∞)×ℝ^N→ℝ^N be bounded measurable functions and let t↦L_t and t↦L̂_t be the operators associated with a and b and with a and b+ac, respectively. Then Q ∈ M.P.((s,x);{L̂_t}) if and only if there is a P ∈ M.P.((s,x);{L_t}) such that Q↾M_T = R(T)·P↾M_T, T ≥ 0, where
(4.7) R(T) = exp[∫_0^T c(s+t,x(t))·dx̄(t) − 1/2∫_0^T (c,ac)_{ℝ^N}(s+t,x(t))dt]
with x̄(T) ≡ x(T) − ∫_0^T b(s+t,x(t))dt, T ≥ 0. In particular, for each (s,x) ∈ [0,∞)×ℝ^N, there is a one-to-one correspondence between M.P.((s,x);{L_t}) and M.P.((s,x);{L̂_t}).
Proof: Suppose that P ∈ M.P.((s,x);{L_t}) and define R by (4.7). By part ii) in exercise (II.3.13), (R(t),M_t,P) is a martingale; and, clearly, R(0) = 1 and R is P-almost surely positive. Thus, by Lemmas (4.2) and (4.3), there is a unique Q ∈ M₁(Ω) such that Q↾M_T = R(T)·P↾M_T, T ≥ 0. Moreover, X ∈ Mart_c^loc({M_t},P) if and only if X − <X,ℛ> ∈ Mart_c^loc({M_t},Q), where ℛ = log R. In particular, since
<X,ℛ>(dt) = Σ_{i=1}^N c_i(s+t,x(t))<x̄^i,X>(dt),
if φ ∈ C_0^∞(ℝ^N), then
<φ(x(·)),ℛ>(dt) = Σ_{i,j=1}^N (c_i a^{ij} ∂_{x_j}φ)(s+t,x(t))dt = Σ_{j=1}^N ((ac)^j ∂_{x_j}φ)(s+t,x(t))dt,
and so (φ(x(t)) − ∫_0^t [L̂_{s+u}φ](x(u))du, M_t, Q) is a martingale. In other words, Q ∈ M.P.((s,x);{L̂_t}).
Conversely, suppose that Q ∈ M.P.((s,x);{L̂_t}) and define R̂ as in (4.7) relative to Q; that is:
R̂(T) = exp[−∫_0^T c(s+t,x(t))·dx̂(t) − 1/2∫_0^T (c,ac)(s+t,x(t))dt],
where x̂(T) ≡ x(T) − ∫_0^T (b+ac)(s+t,x(t))dt. Hence, by the preceding paragraph (applied with b+ac and −c in place of b and c), we see that there is a unique P ∈ M.P.((s,x);{L_t}) such that P↾M_T = R̂(T)·Q↾M_T, T ≥ 0. Since stochastic integrals are the same whether they are defined relative to P or Q, we now see that Q↾M_T = R(T)·P↾M_T, T ≥ 0, where R is now defined relative to P. Q.E.D.
(4.8) Corollary: Let a: [0,∞)×ℝ^N→S⁺(ℝ^N) and b: [0,∞)×ℝ^N→ℝ^N be bounded measurable functions and assume that a is uniformly positive definite on compact subsets of [0,∞)×ℝ^N. Define t↦L_t^0 as in (4.1) and let t↦L_t be the operator associated with a and b. Then, the martingale problem for {L_t^0} is well-posed if and only if the martingale problem for {L_t} is well-posed.
Proof: In view of Theorem (3.5), we may and will assume that a is uniformly positive definite on the whole of [0,∞)×ℝ^N. But we can then take c = a^{−1}b and apply Theorem (4.6). Q.E.D.
5. The Martingale Problem when a is Continuous and Positive:
Let a: ℝ^N→S⁺(ℝ^N) be a bounded continuous function satisfying a(x) > 0 for each x ∈ ℝ^N, and let b: [0,∞)×ℝ^N→ℝ^N be a bounded measurable function. Our goal in this section is to prove that the martingale problem associated with a and b is well-posed.
In view of Corollary (4.8), we may and will assume that b ≡ 0, in which case existence presents no problem. Moreover, because of Theorem (3.5), we may and will assume in addition that
(5.1) ‖a(x) − I‖_{H.S.} ≤ ε, x ∈ ℝ^N,
where ε > 0 is as small as we like. Set L = 1/2 Σ_{i,j=1}^N a^{ij}(y)∂_{y_i}∂_{y_j}. What we are going to do is show that when the ε in (5.1) is sufficiently small then, for each λ > 0, there is a map S_λ from the Schwartz space 𝒮(ℝ^N) into C_b(ℝ^N) such that
(5.2) E^P[∫_0^∞ e^{−λt}f(x(t))dt] = S_λf(x)
for all f ∈ 𝒮(ℝ^N), whenever P ∈ M.P.(x;L). Once we have proved (5.2), the argument is easy. Namely, if P and Q are elements of M.P.(x;L), then (5.2) allows us to say that
E^Q[∫_0^∞ e^{−λt}f(x(t))dt] = E^P[∫_0^∞ e^{−λt}f(x(t))dt]
for all λ > 0 and f ∈ 𝒮(ℝ^N). But, by the uniqueness of the Laplace transform, this means that P∘x(t)^{−1} = Q∘x(t)^{−1} for all t ≥ 0; and so, by Corollary (1.15), P = Q. Hence, everything reduces to proving (5.2).
(5.3) Lemma: Set γ_t = g(t,·), where g(t,y) denotes the standard Gauss kernel on ℝ^N; and, for λ > 0, define

R_λf = ∫_0^∞ e^{−λt} γ_t*f dt

for f ∈ 𝒮(ℝ^N) ("*" denotes convolution). Then R_λ maps 𝒮(ℝ^N) into itself and (λI − ½Δ)∘R_λ = R_λ∘(λI − ½Δ) = I on 𝒮(ℝ^N). Moreover, if p ∈ (N/2,∞), then there is an A = A(λ,p) ∈ (0,∞) such that

(5.4) ‖R_λf‖_{C_b(ℝ^N)} ≤ A‖f‖_{L^p(ℝ^N)}, f ∈ 𝒮(ℝ^N).

Finally, for every p ∈ (1,∞) there is a C = C(p) ∈ (0,∞) (i.e. independent of λ > 0) such that

(5.5) ‖(Σ_{i,j=1}^N (∂_{y_i}∂_{y_j}R_λf)²)^{1/2}‖_{L^p(ℝ^N)} ≤ C‖f‖_{L^p(ℝ^N)}

for all f ∈ 𝒮(ℝ^N).

Proof: Use 𝔉f to denote the Fourier transform of f. Then it is an easy computation to show that 𝔉R_λf(ξ) = (λ + |ξ|²/2)^{−1}𝔉f(ξ). From this it is clear that R_λ maps 𝒮(ℝ^N) into itself and that R_λ is the inverse on 𝒮(ℝ^N) of (λI − ½Δ). To prove the estimate (5.4), note that

‖γ_t*f‖_{C_b(ℝ^N)} ≤ ‖γ_t‖_{L^q(ℝ^N)}‖f‖_{L^p(ℝ^N)},

where 1/q = 1 − 1/p, and that ‖γ_t‖_{L^q(ℝ^N)} ≤ B_N t^{−N/(2p)} for some B_N ∈ (0,∞). Thus, if p ∈ (N/2,∞), then

‖R_λf‖_{C_b(ℝ^N)} ≤ B_N (∫_0^∞ e^{−λt} t^{−N/(2p)} dt) ‖f‖_{L^p(ℝ^N)} = A‖f‖_{L^p(ℝ^N)},

where A ∈ (0,∞). The estimate (5.5) is considerably more sophisticated.
What it comes down to is the proof that for each p ∈ (1,∞) there is a K = K(p) ∈ (0,∞) such that

(5.6) ‖(Σ_{i,j=1}^N (∂_{y_i}∂_{y_j}f)²)^{1/2}‖_{L^p(ℝ^N)} ≤ K‖½Δf‖_{L^p(ℝ^N)}

for f ∈ 𝒮(ℝ^N). Indeed, suppose that (5.6) holds. Then, since ½ΔR_λ = λR_λ − I, we would have

‖(Σ_{i,j=1}^N (∂_{y_i}∂_{y_j}R_λf)²)^{1/2}‖_{L^p(ℝ^N)} ≤ K‖½ΔR_λf‖_{L^p(ℝ^N)} ≤ K‖f‖_{L^p(ℝ^N)} + K‖λR_λf‖_{L^p(ℝ^N)} ≤ 2K‖f‖_{L^p(ℝ^N)},

since ‖γ_t‖_{L^1(ℝ^N)} = 1 and so ‖λR_λf‖_{L^p(ℝ^N)} ≤ ‖f‖_{L^p(ℝ^N)}. Except when p = 2, (5.6) has no elementary proof and depends on the theory of singular integral operators. Rather than spend time here developing the relevant theory, we will defer the proof to the appendix which follows this section. Q.E.D.
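The multiplier identity behind the Lemma can be seen concretely in one dimension: apply (λ + |ξ|²/2)^{−1} on the Fourier side to get u = R_λf, and then check with an independent finite-difference Laplacian that λu − ½u″ recovers f. A numerical sketch (the grid, box size, and test function are arbitrary choices, not from the text):

```python
import numpy as np

n, L = 2048, 40.0                         # grid size and box length (arbitrary choices)
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)  # Fourier variables

lam = 1.5
f = np.exp(-x**2)                         # a convenient test function in S(R)

# u = R_lambda f, computed with the Fourier multiplier (lam + |xi|^2/2)^(-1).
u = np.fft.ifft(np.fft.fft(f) / (lam + xi**2 / 2)).real

# Apply (lam I - (1/2) d^2/dx^2) with a central second difference and compare to f.
u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
residual = np.max(np.abs(lam * u - 0.5 * u_xx - f))
```

The residual is of size O(dx²) from the finite-difference error alone, which confirms that the multiplier really does invert λI − ½Δ rather than merely undoing the FFT.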
Choose and fix a p ∈ (N/2,∞) and take the ε in (5.1) to lie in the interval (0,1/(2C(p))), where C(p) is the constant C in (5.5). We can now define the operator S_λ. Namely, set D_λ = (L − ½Δ)R_λ. Then, for f ∈ 𝒮(ℝ^N),

‖D_λf‖_{L^p(ℝ^N)} = ‖½ Σ_{i,j=1}^N (a−I)_{ij} ∂_{y_i}∂_{y_j}R_λf‖_{L^p(ℝ^N)} ≤ (ε/2)‖(Σ_{i,j=1}^N (∂_{y_i}∂_{y_j}R_λf)²)^{1/2}‖_{L^p(ℝ^N)} ≤ ½‖f‖_{L^p(ℝ^N)}.

Hence, D_λ admits a unique extension as a continuous operator on L^p(ℝ^N) with bound not exceeding 1/2. Using D_λ again to denote this extension, we see that I − D_λ admits a continuous inverse K_λ with bound not larger than 2. We now define S_λ = R_λ∘K_λ. Note that if K_λf ∈ 𝒮(ℝ^N), then

(λI − L)S_λf = (λI − ½Δ)R_λK_λf − D_λK_λf = (I − D_λ)K_λf = f.

Thus, if 𝔇_λ = {f ∈ L^p(ℝ^N): K_λf ∈ 𝒮(ℝ^N)}, then we have that:

(5.7) (λI − L)S_λf = f, f ∈ 𝔇_λ.

In particular, we see that 𝔇_λ ⊆ C_b(ℝ^N). Moreover, since 𝒮(ℝ^N) is dense in L^p(ℝ^N) and K_λ is invertible, it is clear that 𝔇_λ is also dense in L^p(ℝ^N). We will now show that (5.2)
We will now show that (5.2)
holds for all P E M.P.(x;L) and f E 9A.
Indeed, if f e 9X,
then an easy application of Ito's formula in conjunction with (5.7) shows that pt0
(e-AtSXf(x(t))+J
At, P)
Thus, (5.2) follows
is a martingale for every P E M.P.(x;L).
by letting taw in
r rt
11
EP[e-AtSNf(x(t))] - Sxf(x) = EPii e-"sf(x(s))d,]. ` 0
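The step from ‖D_λ‖ ≤ 1/2 to the bound of 2 on K_λ = (I − D_λ)^{−1} is just the Neumann series K_λ = Σ_{n≥0} D_λ^n, whose norm is at most Σ 2^{−n} = 2. A finite-dimensional illustration (the matrix is a random stand-in for D_λ, not the actual operator):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
D = 0.5 * M / np.linalg.norm(M, 2)   # scale so the operator norm of D is exactly 1/2
I = np.eye(n)

K = np.linalg.inv(I - D)             # stand-in for K_lambda = (I - D_lambda)^(-1)
bound = np.linalg.norm(K, 2)

# Partial sums of the Neumann series I + D + D^2 + ... converge to K geometrically.
S = np.zeros((n, n))
P = np.eye(n)
for _ in range(80):
    S += P
    P = P @ D
series_err = np.linalg.norm(S - K, 2)
```

Since each extra term shrinks by a factor of at least 2, eighty terms already reproduce the inverse to machine precision.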
At first sight, we appear to be very close to a proof of (5.2) for all f ∈ 𝒮(ℝ^N). However, after a little reflection, one will realize that we still have quite a long way to go. Indeed, although we now know that (5.2) holds for all f ∈ 𝔇_λ and that 𝔇_λ is a dense subset of L^p(ℝ^N), we must still go through a limit procedure before we can assert that (5.2) holds for all f ∈ 𝒮(ℝ^N). The right hand side of (5.2) presents no problems at this point. In fact, (5.4) says that f ∈ L^p(ℝ^N) ↦ R_λf(x) is a continuous map for each x ∈ ℝ^N, and therefore S_λ has the same property. On the other hand, we know nothing about the behavior of the left hand side of (5.2) under convergence in L^p(ℝ^N). Thus, in order to complete our program we have still to prove an a priori estimate which says that for each P ∈ M.P.(x;L) there is a B ∈ (0,∞) such that

(5.8) |E^P[∫_0^∞ e^{−λt}f(x(t)) dt]| ≤ B‖f‖_{L^p(ℝ^N)}, f ∈ 𝒮(ℝ^N).

To prove (5.8), let P ∈ M.P.(x;L) be given. Then, by Theorem (2.6), there is an N-dimensional Brownian motion (β(t), 𝓐_t, P) such that

x(T) = x + ∫_0^T σ(x(t)) dβ(t), T ≥ 0

(a.s., P), where σ ≡ a^{1/2}. Set σ_n(t,ω) = σ(x(([nt]/n) ∧ n, ω)) and

x_n(T) = x + ∫_0^T σ_n(t) dβ(t), T ≥ 0.

Note that, for each T ≥ 0,
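The approximants x_n freeze the diffusion coefficient on the intervals [k/n, (k+1)/n); in simulation terms this is the piecewise-constant (Euler) scheme. A minimal sketch of the construction (all numerical choices here are illustrative; when σ is constant the scheme reproduces x + σβ(T) exactly, which gives a check on the distribution):

```python
import numpy as np

def euler_paths(sigma, x0, T, n, n_paths, rng):
    """x_n(T) = x0 + int_0^T sigma_n(t) dbeta(t), where sigma_n freezes sigma
    at the left endpoint of each interval [k/n, (k+1)/n)."""
    dt = 1.0 / n
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(int(round(T * n))):
        dB = rng.standard_normal(n_paths) * np.sqrt(dt)  # Brownian increments
        x += sigma(x) * dB
    return x

rng = np.random.default_rng(1)
# For constant sigma the scheme is exact: x_n(T) = x0 + sigma * beta(T),
# so the empirical mean and variance should be near 0 and sigma^2 * T = 4.
out = euler_paths(lambda x: 2.0, x0=0.0, T=1.0, n=8, n_paths=200_000, rng=rng)
mean, var = out.mean(), out.var()
```

For genuinely state-dependent σ the same loop gives the x_n of the text, and letting n↑∞ recovers x.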
∫_{|x|≤1} (x_j/|x|^{N+1})(φ(x) − φ(0)) dx + ∫_{|x|>1} (x_j/|x|^{N+1})φ(x) dx

and

lim_{δ↓0} ∫_{δ<|x|≤1} (x_j/|x|^{N+1})(φ(x) − φ(0)) dx

exists. For ε > 0, set r^{(ε)}(x) = χ_{(ε,∞)}(|x|)r_j(x) and define 𝓡^{(ε)}f = f*r^{(ε)}, f ∈ 𝒮(ℝ^N). In view of Lemma (A.2), what we must show is that for each p ∈ (1,∞) there is a C_p < ∞ such that

(A.3) sup_{ε>0} ‖𝓡^{(ε)}f‖_{L^p(ℝ^N)} ≤ C_p‖f‖_{L^p(ℝ^N)}, f ∈ 𝒮(ℝ^N).
To this end, note that

𝓡^{(ε)}f(x) = c_N ∫_{S^{N−1}} ω_j dω ∫_ε^∞ f(x − rω) dr/r = (c_N/2) ∫_{S^{N−1}} ω_j dω ∫_{|r|>ε} f(x − rω) dr/r.

Next, choose a measurable mapping ω ∈ S^{N−1} ↦ U_ω ∈ O(N) so that U_ω e_1 = ω for all ω; and, given f ∈ 𝒮(ℝ^N), set f_ω(y) = f(U_ω y). Then:

𝓡^{(ε)}f(x) = (πc_N/2) ∫_{S^{N−1}} ω_j [Ψ^{(ε)}f_ω](U_ω^{−1}x) dω,

where

Ψ^{(ε)}g(x) ≡ (1/π) ∫_{|r|>ε} g(x − re_1) dr/r, x ∈ ℝ^N, g ∈ 𝒮(ℝ^N).

In particular, set h^{(ε)}(x) = (1/π)χ_{(ε,∞)}(|x|)/x, x ∈ ℝ¹, and suppose that we show that

(A.4) sup_{ε>0} ‖ψ*h^{(ε)}‖_{L^p(ℝ¹)} ≤ K_p‖ψ‖_{L^p(ℝ¹)}, ψ ∈ 𝒮(ℝ¹),

for each p ∈ (1,∞) and some K_p < ∞. Then we would have that

sup_{ε>0} ‖Ψ^{(ε)}f_ω‖_{L^p(ℝ^N)} ≤ K_p‖f_ω‖_{L^p(ℝ^N)} = K_p‖f‖_{L^p(ℝ^N)}

for all ω ∈ S^{N−1}; and so, from the preceding, we could conclude that (A.3) holds with C_p = π|c_N|ω_{N−1}K_p/2, where ω_{N−1} denotes the surface area of S^{N−1}. In other words, everything reduces to proving (A.4). (Note that the preceding reduction allows us to obtain (A.3) for arbitrary N ∈ ℤ⁺ from the case when N = 1.)
For reasons which will become apparent in a moment, it is better to replace the kernel h^{(ε)} with

h_ε(x) = (1/π) x/(x² + ε²).

Noting that

π‖h^{(ε)} − h_ε‖_{L^1(ℝ¹)} = 2∫_0^ε x/(x²+ε²) dx + 2∫_ε^∞ ε²/(x(x²+ε²)) dx = 2∫_0^1 x/(x²+1) dx + 2∫_1^∞ 1/(x(x²+1)) dx,

we see that (A.4) will follow as soon as we show that

(A.5) sup_{ε>0} ‖ψ*h_ε‖_{L^p(ℝ¹)} ≤ K_p‖ψ‖_{L^p(ℝ¹)}, ψ ∈ 𝒮(ℝ¹),

for each p ∈ (1,∞) and some K_p < ∞. In addition, because

∫φ(x)(ψ*h_ε)(x) dx = −∫ψ(y)(φ*h_ε)(y) dy, φ,ψ ∈ 𝒮(ℝ¹),

an easy duality argument allows us to restrict our attention to p ∈ [2,∞); and therefore we will do so.
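The scaling x ↦ εx behind the kernel comparison can be checked numerically: the L¹ distance between h^{(ε)} and h_ε is the same for every ε, and the two integrals evaluate to log 2 each, so π‖h^{(ε)} − h_ε‖_{L¹} = 2 log 2. A quadrature check (the sample values of ε are arbitrary):

```python
import numpy as np
from math import log
from scipy.integrate import quad

def l1_dist_times_pi(eps):
    """pi * ||h^(eps) - h_eps||_{L^1(R)}: the truncated kernel differs from the
    smoothed one by h_eps itself on |x| <= eps, and by eps^2/(x(x^2+eps^2)) on |x| > eps."""
    inner = quad(lambda x: x / (x**2 + eps**2), 0, eps)[0]
    outer = quad(lambda x: eps**2 / (x * (x**2 + eps**2)), eps, np.inf)[0]
    return 2 * inner + 2 * outer

values = [l1_dist_times_pi(eps) for eps in (0.01, 1.0, 100.0)]
```

The ε-independence is exactly what lets a bound for the smoothed kernels h_ε transfer to the truncated kernels h^{(ε)} at the cost of a fixed L¹ convolution error.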
Set p_y(x) = (1/π) y/(x²+y²) for (x,y) ∈ ℝ²₊ ≡ {(x,y) ∈ ℝ²: y > 0}. Given ψ ∈ C_0^∞(ℝ¹;ℝ¹) (we have emphasized here that ψ is real-valued), define u_ψ(x,y) = ψ*p_y(x) and v_ψ(x,y) = ψ*h_y(x).

(A.6) Lemma: Referring to the preceding, u_ψ and v_ψ are conjugate harmonic functions on ℝ²₊ (i.e. they satisfy the Cauchy-Riemann equations). Moreover, there is a C = C_ψ < ∞ such that |u_ψ(x,y)|∨|v_ψ(x,y)| ≤ C/(x²+y²)^{1/2} for all (x,y) ∈ ℝ²₊.
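That u_ψ and v_ψ are conjugate harmonic comes from the kernels themselves: p_y(x) + i h_y(x) = i/(π(x+iy)) is analytic in the upper half-plane, so the pair (p, h) satisfies the Cauchy-Riemann equations, and convolution in x preserves them. A finite-difference check on the kernels (the sample points are arbitrary):

```python
import numpy as np

def p(x, y):
    """Poisson kernel p_y(x) = (1/pi) y/(x^2+y^2)."""
    return y / (np.pi * (x**2 + y**2))

def h(x, y):
    """Conjugate kernel h_y(x) = (1/pi) x/(x^2+y^2)."""
    return x / (np.pi * (x**2 + y**2))

d = 1e-6
max_cr_err = 0.0
for (x, y) in [(0.3, 0.7), (-1.2, 2.0), (5.0, 0.4)]:
    px = (p(x + d, y) - p(x - d, y)) / (2 * d)
    py = (p(x, y + d) - p(x, y - d)) / (2 * d)
    hx = (h(x + d, y) - h(x - d, y)) / (2 * d)
    hy = (h(x, y + d) - h(x, y - d)) / (2 * d)
    # Cauchy-Riemann for p + i h as an analytic function of x + i y:
    max_cr_err = max(max_cr_err, abs(px - hy), abs(py + hx))
```

Both partial-derivative identities hold to the accuracy of the central differences at every test point.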
Hence, by Burkholder's inequality (cf. (3.18)), we see that for each p ∈ [2,∞) there is a C_p < ∞ such that

‖N_ε(T_ε)‖_{L^p(𝒲_{x,y})} ≤ C_p‖M_ε(T_ε)‖_{L^p(𝒲_{x,y})} ≤ (p/(p−1))C_p‖M_0(T_0)‖_{L^p(𝒲_{x,y})} ≡ K_p‖M_0(T_0)‖_{L^p(𝒲_{x,y})},

where we have used Doob's inequality in order to get the last relation. Since y(T_ε) = ε (a.s., 𝒲_{x,y}), we conclude from the preceding that

E^{𝒲_{x,y}}[|v_ψ(x(T_ε),ε) − v_ψ(x,y)|^p]^{1/p} ≤ K_p‖M_0(T_0)‖_{L^p(𝒲_{x,y})},

and so

E^{𝒲_{x,y}}[|v_ψ(x(T_ε),ε)|^p]^{1/p} ≤ |v_ψ(x,y)| + K_p‖M_0(T_0)‖_{L^p(𝒲_{x,y})}

for all (x,y) ∈ ℝ²₊ and 0 < ε ≤ y. Noting that the distribution of x(T_ε) under 𝒲_{x,y} is the same as that of x + x(T_ε) under 𝒲_{0,y}, raising the preceding to the power p, and integrating with respect to x ∈ ℝ¹, we obtain:

‖v_ψ(·,ε)‖^p_{L^p(ℝ¹)} ≤ 2^{p−1}K_p^p‖ψ‖^p_{L^p(ℝ¹)} + 2^{p−1}K_p^p‖v_ψ(·,y)‖^p_{L^p(ℝ¹)}

for all 0 < ε ≤ y. But, by the estimate on u_ψ and v_ψ in Lemma (A.6), it is clear that ‖v_ψ(·,y)‖_{L^p(ℝ¹)} → 0 as y↑∞. In other words, we have now proved (A.5), with 2K_p replacing K_p, so long as ψ ∈ C_0^∞(ℝ¹). Obviously, the same result for all ψ ∈ 𝒮(ℝ¹) follows immediately from this; and so we are done.
INDEX

A
𝓐, p.12
𝓐_t, p.12

B
Brownian scaling, p.16
Burkholder's inequality, p.64

C
Cameron-Martin-Girsanov theorem, p.109
Chapman-Kolmogorov equation, p.12
conditional expectation E^P[X|𝓕], p.1
conditional probability distribution (c.p.d.) and regular c.p.d. (r.c.p.d.), p.3
consistent family of distributions, p.12

D
determining set, p.80
Doob's inequality, p.29
Doob's stopping time theorem, p.33
Doob's upcrossing inequality, p.34
Doob-Meyer theorem, pp.39,43

G
Gauss kernel, p.15

I
initial distribution, p.12
Itô's exponential of X, p.61
Itô's formula, pp.53,71,72
Itô's stochastic integral
  for X ∈ Mart, p.49
  for X ∈ Mart_c^loc, p.59
  for Z ∈ S.Mart_c, pp.70,71
  for X ∈ (Mart²)^d, pp.88,89

K
Kolmogorov's criterion, p.10

L
Lévy's theorem, p.56
local martingale, p.58
  Mart^loc, p.58
  ⟨X⟩ and ⟨X,Y⟩, p.58
local time, p.69
L^q-bounded, p.36
L^q-martingale, p.29

M
M₁(E), p.1
martingale, p.28
  Mart, p.46
  ⟨X⟩ and ⟨X,Y⟩, p.46
  X^T and X_T, p.46
martingale problem, p.73
  M.P.((s,x);{L_t}), p.73
  M.P.(x;L), p.77
  well-posed, p.85
  locally well-posed, p.101

P
P⊗_T Q, p.81
Polish space, p.2
progressively measurable function, p.27
  (a.s.) continuous, p.28
  right continuous, p.27
  locally bounded variation, pp.40,70
Prokhorov-Varadarajan theorem, p.7

S
semi-martingale, p.70
  S.Mart_c, p.70
stochastic integral equation, p.87
stopping time, p.31
  𝓐_T, p.31
Stratonovitch integral, p.71
sub-martingale, p.28

T
Tanaka's formula, p.69
time inversion, p.16
time shift θ_t, p.16
transition probability function, p.12

W
weak topology on M₁(E), p.5