THEORY OF STOCHASTIC DIFFERENTIAL EQUATIONS WITH JUMPS AND APPLICATIONS
MATHEMATICAL AND ANALYTICAL TECHNIQUES WITH APPLICATIONS TO ENGINEERING
MATHEMATICAL AND ANALYTICAL TECHNIQUES WITH APPLICATIONS TO ENGINEERING Alan Jeffrey, Consulting Editor
Published:
Inverse Problems, by A. G. Ramm
Singular Perturbation Theory, by R. S. Johnson
Methods for Constructing Exact Solutions of Partial Differential Equations with Applications, by S. V. Meleshko
Stochastic Differential Equations with Applications, by R. Situ

Forthcoming:
The Fast Solution of Boundary Integral Equations, by S. Rjasanow and O. Steinbach
RONG SITU
Springer
Library of Congress Cataloging-in-Publication Data Theory of Stochastic Differential Equations with Jumps and Applications: Mathematical and Analytical Techniques with Applications to Engineering By Rong Situ ISBN-10: 0-387-25083-2 e-ISBN-10: 0-387-25175-8 ISBN-13: 978-0387-25083-0 e-ISBN-13: 978-0387-25175-2
Printed on acid-free paper.
© 2005 Springer Science+Business Media, Inc.
All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, Inc., 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed in the United States of America. 9 8 7 6 5 4 3 2 1
SPIN 11399278
Contents
Preface  xi
Acknowledgement  xvii
Abbreviations and Some Explanations  xix
I  Stochastic Differential Equations with Jumps in R^d  1

1  Martingale Theory and the Stochastic Integral for Point Processes  3
1.1  Concept of a Martingale  3
1.2  Stopping Times. Predictable Process  5
1.3  Martingales with Discrete Time  8
1.4  Uniform Integrability and Martingales  12
1.5  Martingales with Continuous Time  17
1.6  Doob-Meyer Decomposition Theorem  19
1.7  Poisson Random Measure and Its Existence  26
1.8  Poisson Point Process and Its Existence  28
1.9  Stochastic Integral for Point Process. Square Integrable Martingales  32

2  Brownian Motion, Stochastic Integral and Ito's Formula  39
2.1  Brownian Motion and Its Nowhere Differentiability  39
2.2  Spaces C^0 and L^2  43
2.3  Ito's Integrals on L^2  44
2.4  Ito's Integrals on L^{2,loc}  47
2.5  Stochastic Integrals with respect to Martingales  49
2.6  Ito's Formula for Continuous Semi-Martingales  54
2.7  Ito's Formula for Semi-Martingales with Jumps  58
2.8  Ito's Formula for d-dimensional Semi-Martingales. Integration by Parts  62
2.9  Independence of BM and Poisson Point Processes  64
2.10  Some Examples  65
2.11  Strong Markov Property of BM and Poisson Point Processes  67
2.12  Martingale Representation Theorem  68

3  Stochastic Differential Equations  75
3.1  Strong Solutions to SDE with Jumps  75
3.1.1  Notation  75
3.1.2  A Priori Estimate and Uniqueness of Solutions  76
3.1.3  Existence of Solutions for the Lipschitzian Case  79
3.2  Exponential Solutions to Linear SDE with Jumps  84
3.3  Girsanov Transformation and Weak Solutions of SDE with Jumps  86
3.4  Examples of Weak Solutions  99

4  Some Useful Tools in Stochastic Differential Equations  103
4.1  Yamada-Watanabe Type Theorem  103
4.2  Tanaka Type Formula and Some Applications  109
4.2.1  Localization Technique  109
4.2.2  Tanaka Type Formula in d-Dimensional Space  110
4.2.3  Applications to Pathwise Uniqueness and Convergence of Solutions  112
4.2.4  Tanaka Type Formula in 1-Dimensional Space  116
4.2.5  Tanaka Type Formula in the Component Form  121
4.2.6  Pathwise Uniqueness of Solutions  122
4.3  Local Time and Occupation Density Formula  124
4.4  Krylov Estimation  129
4.4.1  The Case for 1-Dimensional Space  129
4.4.2  The Case for d-Dimensional Space  130
4.4.3  Applications to Convergence of Solutions to SDE with Jumps  133

5  Stochastic Differential Equations with Non-Lipschitzian Coefficients  139
5.1  Strong Solutions. Continuous Coefficients with ρ-Conditions  140
5.2  The Skorohod Weak Convergence Technique  145
5.3  Weak Solutions. Continuous Coefficients  147
5.4  Existence of Strong Solutions and Applications to ODE  153
5.5  Weak Solutions. Measurable Coefficient Case  153
II  Applications  161

6  How to Use the Stochastic Calculus to Solve SDE  163
6.1  The Foundation of Applications: Ito's Formula and Girsanov's Theorem  163
6.2  More Useful Examples  167

7  Linear and Non-linear Filtering  169
7.1  Solutions of SDE with Functional Coefficients and Girsanov Theorems  169
7.2  Martingale Representation Theorems (Functional Coefficient Case)  177
7.3  Non-linear Filtering Equation  180
7.4  Optimal Linear Filtering  191
7.5  Continuous Linear Filtering. Kalman-Bucy Equation  194
7.6  Kalman-Bucy Equation in the Multi-Dimensional Case  196
7.7  More General Continuous Linear Filtering  197
7.8  Zakai Equation  201
7.9  Examples on Linear Filtering  203

8  Option Pricing in a Financial Market and BSDE  205
8.1  Introduction  205
8.2  A More Detailed Derivation of the BSDE for Option Pricing  208
8.3  Existence of Solutions with Bounded Stopping Times  209
8.3.1  The General Model and its Explanation  209
8.3.2  A Priori Estimate and Uniqueness of a Solution  213
8.3.3  Existence of Solutions for the Lipschitzian Case  215
8.4  Explanation of the Solution of BSDE to Option Pricing  219
8.4.1  Continuous Case  219
8.4.2  Discontinuous Case  220
8.5  Black-Scholes Formula for Option Pricing. Two Approaches  223
8.6  Black-Scholes Formula for Markets with Jumps  229
8.7  More General Wealth Processes and BSDEs  234
8.8  Existence of Solutions for the Non-Lipschitzian Case  236
8.9  Convergence of Solutions  239
8.10  Explanation of Solutions of BSDEs to Financial Markets  241
8.11  Comparison Theorem for BSDE with Jumps  243
8.12  Explanation of the Comparison Theorem. Arbitrage-Free Market  250
8.13  Solutions for Unbounded (Terminal) Stopping Times  254
8.14  Minimal Solution for BSDE with Discontinuous Drift  258
8.15  Existence of Non-Lipschitzian Optimal Control. BSDE Case  262
8.16  Existence of Discontinuous Optimal Control. BSDEs in R^1  267
8.17  Application to PDE. Feynman-Kac Formula  271

9  Optimal Consumption by H-J-B Equation and Lagrange Method  277
9.1  Optimal Consumption  277
9.2  Optimization for a Financial Market with Jumps by the Lagrange Method  279
9.2.1  Introduction  280
9.2.2  Models  280
9.2.3  Main Theorem and Proof  282
9.2.4  Applications  286
9.2.5  Concluding Remarks  290

10  Comparison Theorem and Stochastic Pathwise Control  291
10.1  Comparison for Solutions of Stochastic Differential Equations  292
10.1.1  1-Dimensional Space Case  292
10.1.2  Component Comparison in d-Dimensional Space  293
10.1.3  Applications to Existence of Strong Solutions. Weaker Conditions  294
10.2  Weak and Pathwise Uniqueness for 1-Dimensional SDE with Jumps  298
10.3  Strong Solutions for 1-Dimensional SDE with Jumps  300
10.3.1  Non-Degenerate Case  300
10.3.2  Degenerate and Partially-Degenerate Case  303
10.4  Stochastic Pathwise Bang-Bang Control for a Non-linear System  312
10.4.1  Non-Degenerate Case  312
10.4.2  Partially-Degenerate Case  316
10.5  Bang-Bang Control for d-Dimensional Non-linear Systems  319
10.5.1  Non-Degenerate Case  319
10.5.2  Partially-Degenerate Case  322

11  Stochastic Population Control and Reflecting SDE  329
11.1  Introduction  330
11.2  Notation  332
11.3  Skorohod's Problem and its Solutions  335
11.4  Moment Estimates and Uniqueness of Solutions to RSDE  342
11.5  Solutions for RSDE with Jumps and with Continuous Coefficients  345
11.6  Solutions for RSDE with Jumps and with Discontinuous Coefficients  349
11.7  Solutions to Population SDE and Their Properties  352
11.8  Comparison of Solutions and Stochastic Population Control  363
11.9  Calculation of Solutions to Population RSDE  372

12  Maximum Principle for Stochastic Systems with Jumps  377
12.1  Introduction  377
12.2  Basic Assumption and Notation  378
12.3  Maximum Principle and Adjoint Equation as BSDE with Jumps  379
12.4  A Simple Example  380
12.5  Intuitive Thinking on the Maximum Principle  381
12.6  Some Lemmas  383
12.7  Proof of Theorem 354  386

A  A Short Review on Basic Probability Theory  389
A.1  Probability Space, Random Variable and Mathematical Expectation  389
A.2  Gaussian Vectors and Poisson Random Variables  392
A.3  Conditional Mathematical Expectation and its Properties  395
A.4  Random Processes and the Kolmogorov Theorem  397

B  Space D and Skorohod's Metric  401

C  Monotone Class Theorems. Convergence of Random Processes  407
C.1  Monotone Class Theorems  407
C.2  Convergence of Random Variables  409
C.3  Convergence of Random Processes and Stochastic Integrals  411

References  415

Index  431
Preface
Stochastic differential equations (SDEs) were first initiated and developed by K. Ito (1942). Today they have become a very powerful tool, applied in mathematics, physics, chemistry, biology, medical science, and almost all sciences. Let us explain why we need SDEs, and how the contents of this book have been arranged.

In nature, physics, society, engineering and so on we always meet two kinds of functions with respect to time: one is deterministic, and the other is random. For example, in a financial market we deposit money $\eta_t$ in a bank. This can be seen as our having bought $\eta_t^0$ units of a bond, where the bond's price $P_t^0$ satisfies the following ordinary differential equation:
$dP_t^0 = P_t^0 r_t dt, \quad P_0^0 = 1, \quad t \in [0,T],$
where $r_t$ is the rate of the bond, and the money that we deposit in the bank is $\eta_t = \eta_t^0 P_t^0 = \eta_t^0 \exp[\int_0^t r_s ds]$. Obviously, $P_t^0 = \exp[\int_0^t r_s ds]$ is usually non-random, since the rate $r_t$ is usually deterministic. However, if we want to buy some stocks from the market, each stock's price is random. For simplicity let us assume that in the financial market there is only one stock, and its price is $P_t^1$. Obviously, it will satisfy a differential equation as follows:
$dP_t^1 = P_t^1(b_t dt + d(\text{a stochastic perturbation})), \quad P_0^1 = p_0^1, \quad t \in [0,T],$
where all of the above processes are 1-dimensional. Here the stochastic perturbation is very important, because it influences the price of the stock, which will cause us to earn or lose money if we buy the stock. One important problem arises naturally: how can we model this stochastic perturbation? Can we make calculations to get the solution for the stock's price $P_t^1$, as we do in the case of the bond's price $P_t^0$? The answer is positive; usually a
continuous stochastic perturbation will be modeled by a stochastic integral $\int_0^t a_s dw_s$, where $w_t, t \ge 0$, is the so-called Brownian motion process (BM), or the Wiener process. The 1-dimensional BM $w_t, t \ge 0$, has the following nice properties:
1) (Independent increment property). It has independent increments; that is, for any $0 \le t_1 < \cdots < t_n$ the system $\{w_0, w_{t_1} - w_0, w_{t_2} - w_{t_1}, \ldots, w_{t_n} - w_{t_{n-1}}\}$ is an independent system. In other words, increments occurring over disjoint time intervals occur independently.
2) (Normal distribution property). Each increment is normally distributed. That is, for any $0 \le s < t$ the increment $w_t - w_s$ on this time interval is a normal random variable with mean $m$ and variance $\sigma^2(t-s)$. We write this as $w_t - w_s \sim N(m, \sigma^2(t-s))$.
3) (Stationary distribution property). The probability distribution of each increment depends only on the length of the time interval, not on the starting point of the interval. That is, the $m$ and $\sigma^2$ appearing in property 2) are constants.
4) (Continuous trajectory property). Its trajectory is continuous; that is, BM $w_t, t \ge 0$, is continuous in $t$.
Since, intuitively, the simplest, or most basic, continuous stochastic perturbation will have the above four properties, modeling the general continuous stochastic perturbation by a stochastic integral with respect to this basic BM $w_t, t \ge 0$, is quite natural. However, the 1-dimensional BM also has a strange property: even though it is continuous in $t$, it is nowhere differentiable in $t$. So we cannot define the stochastic integral $\int_0^t a_s(\omega) dw_s(\omega)$ pathwise for each given $\omega$. That is why K. Ito (1942) invented a completely new way to define this stochastic integral. Our first task in this book is to introduce the Ito stochastic integral and discuss its properties for later applications.
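Properties 1)-3) are easy to inspect numerically. The following sketch (Python; the grid, time step and sample count are arbitrary illustrative choices, and the BM is taken with $m = 0$, $\sigma^2 = 1$; none of this is from the text) simulates many discretized BM paths and examines the increments:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 20000, 100, 0.01

# Brownian increments: independent N(0, dt) variables; a path is their cumulative sum.
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
w = np.cumsum(dw, axis=1)

# Properties 2)-3): w_t - w_s ~ N(0, t - s); here t - s = 50 * dt = 0.5.
incr = w[:, 99] - w[:, 49]
print(incr.mean(), incr.var())         # near 0 and near 0.5

# Property 1): increments over disjoint intervals are (empirically) uncorrelated.
incr2 = w[:, 49]                       # the increment w_{0.5} - w_0
print(np.corrcoef(incr, incr2)[0, 1])  # near 0
```

The sample mean and variance of $w_t - w_s$ come out near $0$ and $t - s$, and increments over disjoint intervals are nearly uncorrelated, exactly as properties 1)-3) predict.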
After we have understood the stochastic integral $\int_0^t a_s(\omega) dw_s(\omega)$ we can study the following general stochastic differential equation (SDE):
$x_t = x_0 + \int_0^t b(s, x_s) ds + \int_0^t \sigma(s, x_s) dw_s, \quad t \ge 0;$
or, equivalently, we write
$dx_t = b(t, x_t) dt + \sigma(t, x_t) dw_t, \quad t \ge 0. \quad (1)$
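Equation (1) also suggests the simplest numerical scheme: replace $ds$ by a small step and $dw_s$ by an $N(0, \Delta t)$ increment. The sketch below (Python; the scheme shown is the standard Euler-Maruyama discretization, and the coefficients, step size and example are arbitrary illustrative choices, not from the text) makes this concrete:

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, n_steps, rng):
    """Approximate x_t = x_0 + int_0^t b(s, x_s) ds + int_0^t sigma(s, x_s) dw_s on [0, T]."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    t = 0.0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # increment w_{t+dt} - w_t ~ N(0, dt)
        x[k + 1] = x[k] + b(t, x[k]) * dt + sigma(t, x[k]) * dw
        t += dt
    return x

rng = np.random.default_rng(0)
# Illustrative choice: b(t, x) = -x, sigma(t, x) = 0.5 (a mean-reverting diffusion).
path = euler_maruyama(lambda t, x: -x, lambda t, x: 0.5, 1.0, 5.0, 1000, rng)
print(path[0], path[-1])  # starts at 1.0; drifts back toward 0 up to noise
```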
Returning to the stock's price equation, we naturally consider it as the following SDE:
$dP_t^1 = P_t^1(b_t dt + \sigma_t dw_t), \quad P_0^1 = p_0^1, \quad t \in [0,T]. \quad (2)$
Comparing this to the solution for $P_t^0$, one naturally asks: could the solution of this SDE be $P_t^1 = p_0^1 \exp[\int_0^t b_s ds + \int_0^t \sigma_s dw_s]$? To check this guess, obviously, if we have a differential rule to perform differentiation on $p_0^1 \exp x_t$, where $x_t = \int_0^t b_s ds + \int_0^t \sigma_s dw_s$, then we can make the check. More generally, if we have an $f(x) \in C^2(\mathbb{R})$ and $dx_t = b_t dt + \sigma_t dw_t$, can we have
$df(x_t) = f'(x_t) dx_t = f'(x_t)(b_t dt + \sigma_t dw_t)?$
If, as in the deterministic case, this differential rule held true, then we would immediately see that $P_t^1 = p_0^1 \exp[\int_0^t b_s ds + \int_0^t \sigma_s dw_s]$ satisfies (2). Unfortunately, such a differential rule is not true. K. Ito (1942) found that it should obey another differential rule, the so-called Ito formula:
$df(x_t) = f'(x_t) dx_t + \tfrac{1}{2} f''(x_t) \sigma_t^2 dt.$
By this rule one easily checks that $P_t^1 = p_0^1 \exp[\int_0^t b_s ds + \int_0^t \sigma_s dw_s - \tfrac{1}{2}\int_0^t |\sigma_s|^2 ds]$ satisfies (2).

Now, why do we need to study the SDE (1) and its solutions carefully?
1) For the ordinary differential equation $dx_t = b(t, x_t) dt$, $t \ge 0$, if $b(t, x)$ is only bounded and jointly continuous, then even though a solution exists, it is not necessarily unique. However, for the SDE (1) in the 1-dimensional case, if $b(t, x)$ and $\sigma(t, x)$ are only bounded and jointly Borel-measurable, and $|\sigma(t, x)|^{-1}$ is also bounded and $\sigma(t, x)$ is Lipschitz continuous in $x$, then (1) will have a unique strong solution. (Here "strong" means that $x_t$ is $\mathfrak{F}_t^w$-measurable, i.e. measurable with respect to the $\sigma$-field generated by the BM up to time $t$.) This means that adding a non-degenerate stochastic perturbation term to the differential equation can even improve the nice properties of the solution.
2) The stochastic perturbation term has an important practical meaning in some cases and cannot be discarded. For example, in the investment problem and the option pricing problem in a financial market, the investment portfolio actually appears as the coefficient of the stochastic integral in an SDE, where the stochastic integral acts like a stochastic perturbation term.
3) The solutions of SDEs and backward SDEs can help us to explain the solutions of some deterministic partial differential equations (PDEs) with integral terms (the Feynman-Kac formula), and even to guess and find the solution of a PDE; for example, the solution of the PDE for the price of an option can be obtained from the solution of a BSDE: the Black-Scholes formula.
4) And more.
So we have many reasons to study SDE theory and its applications more deeply and carefully. That is why we have a chapter that discusses useful tools for SDEs, and a chapter on the solutions of SDEs with non-Lipschitzian coefficients. These are Chapters 4 and 5. The above concerns the first part of our book, which presents the theory and general background of SDEs.
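The necessity of the correction term $-\frac{1}{2}\int_0^t |\sigma_s|^2 ds$ can also be seen numerically. The sketch below (Python; the constants $b$, $\sigma$, $T$ and the sample count are arbitrary illustrative choices, not from the text) uses the fact that for SDE (2) the stochastic integral has mean zero, so $E P_T^1 = p_0^1 e^{bT}$; only the Ito-corrected exponential has this mean, while the naive guess overshoots:

```python
import numpy as np

rng = np.random.default_rng(1)
b, sigma, P0, T = 0.1, 0.4, 1.0, 1.0
wT = rng.normal(0.0, np.sqrt(T), size=200000)  # w_T ~ N(0, T)

ito = P0 * np.exp((b - 0.5 * sigma**2) * T + sigma * wT)  # Ito solution of (2)
naive = P0 * np.exp(b * T + sigma * wT)                   # guess without the correction

# Taking expectations in (2) kills the dw-term, so E P_T = P0 * e^{bT};
# the naive formula instead has mean P0 * e^{(b + sigma^2/2) T}.
print(ito.mean(), P0 * np.exp(b * T))  # both near each other
print(naive.mean())                    # visibly larger
```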
The second part of our book is about the applications. We first provide a short chapter to help the reader take a quick look at how to use stochastic analysis (the theory in the first part) to solve an SDE. Then we discuss the estimation problem for a signal process, the so-called filtering problem, where the general linear and non-linear filtering problems for continuous SDE systems and SDE systems with jumps, the Kalman-Bucy filtering equation for continuous systems, the Zakai equation for non-linear filtering, etc. are considered. Since research on mathematical finance, and in particular on the option pricing problem for the financial market, has now become very popular, we also provide a chapter that discusses the option pricing problem and backward SDEs, where the famous Black-Scholes formulas for a market with or without jumps are derived using a probability and a PDE approach, respectively, and the arbitrage-free market is also discussed. The interesting point here is that we treat the mathematical finance problems by means of the backward stochastic differential equation (BSDE) technique, which has become very powerful for treating many problems in the financial market, in mathematics, and in other sciences. Since deterministic population control has proved to be important and efficient, and stochastic population control is more realistic, we also provide a chapter that develops the stochastic population control problem by using the reflecting SDE approach, where the existence, the comparison and the calculation of the population solution and the optimal stochastic population control are established. Besides, the stochastic Lagrange method for stochastic optimal control, non-linear pathwise stochastic optimal control, and the Maximum Principle (that is, the necessary conditions for a stochastic optimal control) are also formulated and developed in specific chapters.
For the convenience of the reader, three appendixes are also provided, giving a short review of basic probability theory, the space D and Skorohod's metric, and the monotone class theorems and the convergence of random processes.

We suggest that the reader study the book as follows. For readers who are mainly interested in applications, the following approach may be considered: Appendix A → Chapter 1 → Chapter 2 → Chapter 3 → Chapter 6 → any chapter in the second part, "Applications", except Chapter 10, returning at any time to the related sections in Chapters 4 and 5, or Appendixes B and C, when necessary. However, to read Chapter 10, knowledge of Chapters 4 and 5 and Appendixes B and C is necessary.
Acknowledgement
The author would like to express his sincere thanks to Professor Alan Jeffrey for kindly recommending publication of this book, for his interest in the book from the very beginning to the very end, and for offering many valuable and important suggestions. Without his help this book would not have been possible.
Abbreviations and Some Explanations
All important statements and results, such as definitions, lemmas, propositions, theorems, corollaries, remarks and examples, are numbered in sequential order throughout the whole book, so it is easy to find where they are located. For example, Lemma 22 follows Definition 21, and Theorem 394 comes just after Remark 393, etc. However, equations are numbered independently in each chapter and each appendix. For example, (3.25) means equation 25 in Chapter 3, and (C.4) means equation 4 in Appendix C.

The following abbreviations are frequently used in this book.
a.e.  almost everywhere.
a.s.  almost surely.
BM  Brownian motion.
BSDE  backward stochastic differential equation.
FSDE  forward stochastic differential equation.
H-J-B equation  Hamilton-Jacobi-Bellman equation.
IDE  integro-differential equation.
ODE  ordinary differential equation.
PDE  partial differential equation.
RCLL  right continuous with left limits.
SDE  stochastic differential equation.
P-a.s.  almost surely with respect to the probability P.
$a^+ = \max\{a, 0\}$.
$a^- = \max\{-a, 0\}$.
$a \vee b = \max\{a, b\}$.
$a \wedge b = \min\{a, b\}$.
$\mu \ll \nu$: the measure $\mu$ is absolutely continuous with respect to $\nu$; that is, for any measurable set $A$, $\nu(A) = 0$ implies that $\mu(A) = 0$.
$\xi_n \to \xi$, a.s.: $\xi_n$ converges to $\xi$ almost surely; that is, $\xi_n(\omega) \to \xi(\omega)$ for all $\omega$ except at the points $\omega \in \Lambda$, where $P(\Lambda) = 0$.
$\xi_n \to \xi$, in $P$: $\xi_n$ converges to $\xi$ in probability; that is, $\forall \varepsilon > 0$, $\lim_{n \to \infty} P(\omega : |\xi_n(\omega) - \xi(\omega)| > \varepsilon) = 0$.
$\#\{\cdot\}$: the number of elements counted in the set $\{\cdot\}$.
$\sigma(x_s, s \le t)$: the smallest $\sigma$-field which makes all $x_s, s \le t$, measurable.
$E[\xi|\eta] = E[\xi|\sigma(\eta)]$: the conditional expectation of $\xi$ given $\sigma(\eta)$.
The book also uses a list of symbols, each given with the pages on which it is introduced; for example, an entry "$\mathfrak{F}$, 4, 387" means that the notation can be found on page 4 and page 387.
Part I

Stochastic Differential Equations with Jumps in R^d
Martingale Theory and the Stochastic Integral for Point Processes
A stochastic integral is a kind of integral quite different from the usual deterministic integral. However, its theory has broad and important applications in science, in mathematics itself, in economics and finance, and elsewhere. A stochastic integral can be completely characterized by martingale theory. In this chapter we discuss elementary martingale theory, which forms the foundation of stochastic analysis and the stochastic integral. As a first step we also introduce the stochastic integral with respect to a point process.
1.1 Concept of a Martingale

In some sense the martingale concept can be explained by a fair game. Let us interpret it as follows. In a game, suppose that a person at the present time $s$ has wealth $x_s$ for the game, and at the future time $t$ he will have the wealth $x_t$. The expected money for this person at the future time $t$ is naturally expressed as $E[x_t|\mathfrak{F}_s]$, where $E[\cdot]$ means the expectation of $\cdot$, $\mathfrak{F}_s$ means the information up to time $s$, which is known by the gambler, and $E[\cdot|\mathfrak{F}_s]$ is the conditional expectation of $\cdot$ given $\mathfrak{F}_s$. Obviously, if the game is fair, then it should be that
$E[x_t|\mathfrak{F}_s] = x_s, \quad \forall t \ge s.$
This is exactly the definition of a martingale for a random process $x_t, t \ge 0$. Let us make it more explicit for later development.
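The fair-game identity can be watched empirically on the simplest fair game, a symmetric $\pm 1$ coin-toss walk (a Python sketch; the walk and the horizon are arbitrary illustrative choices, not from the text): the average fortune stays flat in time, while the average absolute fortune, which loses the fair-game property, grows.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 50000, 30

# Fortune in a fair game: x_n = sum of independent +/-1 bets.
steps = rng.choice([-1, 1], size=(n_paths, n_steps))
x = np.cumsum(steps, axis=1)

mean_x = x.mean(axis=0)            # stays near 0 for every n, as for a fair game
mean_abs = np.abs(x).mean(axis=0)  # grows with n: |x_n| is no longer "fair"

print(np.abs(mean_x).max())        # small
print(mean_abs[0], mean_abs[-1])   # 1.0 versus a noticeably larger value
```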
Let $(\Omega, \mathfrak{F}, P)$ be a probability space and $\{\mathfrak{F}_t\}_{t \ge 0}$ an information family (in mathematics we call it a $\sigma$-algebra family or a $\sigma$-field family; see Appendix A), which satisfies the so-called "usual conditions":
(i) $\mathfrak{F}_s \subset \mathfrak{F}_t$, as $0 \le s \le t$;
(ii) $\mathfrak{F}_t = \mathfrak{F}_{t+} := \cap_{h > 0} \mathfrak{F}_{t+h}$.
Here condition (i) means that the information increases with time, and condition (ii) that the information is right continuous, or, say, $\mathfrak{F}_{t+h} \downarrow \mathfrak{F}_t$ as $h \downarrow 0$. In this case we call $\{\mathfrak{F}_t\}_{t \ge 0}$ a $\sigma$-field filtration.

Definition 1. A real random process $\{x_t\}_{t \ge 0}$ is called a martingale (supermartingale, submartingale) with respect to $\{\mathfrak{F}_t\}_{t \ge 0}$, or $\{x_t, \mathfrak{F}_t\}_{t \ge 0}$ is a martingale (supermartingale, submartingale), if
(i) $x_t$ is integrable for each $t \ge 0$; that is, $E|x_t| < \infty, \forall t \ge 0$;
(ii) $x_t$ is $\mathfrak{F}_t$-adapted; that is, for each $t \ge 0$, $x_t$ is $\mathfrak{F}_t$-measurable;
(iii) $E[x_t|\mathfrak{F}_s] = x_s$ (respectively, $\le x_s$, $\ge x_s$), $\forall t \ge s$.

Example 2. If $\{x_t\}_{t \ge 0}$ has independent increments, each increment is integrable with non-negative expectation, and moreover $x_0$ is also integrable, then $\{x_t\}_{t \ge 0}$ is a submartingale with respect to $\{\mathfrak{F}_t^x\}_{t \ge 0}$, where $\mathfrak{F}_t^x = \sigma(x_s, s \le t)$, which is the $\sigma$-field generated by $\{x_s, s \le t\}$ (that is, the smallest $\sigma$-field which makes all $x_s, s \le t$, measurable), made complete. In fact, by the independence and non-negative expectation of the increments,
$0 \le E(x_t - x_s) = E[(x_t - x_s)|\mathfrak{F}_s^x], \quad \forall t \ge s.$
Hence the conclusion is reached.

Example 3. If $\{x_t\}_{t \ge 0}$ is a submartingale, let $y_t := x_t \vee 0 = \max(x_t, 0)$; then $\{y_t\}_{t \ge 0}$ is still a submartingale. In fact, since $f(x) = x \vee 0$ is a convex function, by Jensen's inequality for the conditional expectation,
$E[x_t \vee 0|\mathfrak{F}_s] \ge E[x_t|\mathfrak{F}_s] \vee E[0|\mathfrak{F}_s] \ge x_s \vee 0, \quad \forall t \ge s.$
So the conclusion is true.

Example 4. If $\{x_t\}_{t \ge 0}$ is a martingale, then $\{|x_t|\}_{t \ge 0}$ is a submartingale. In fact, by Jensen's inequality,
$E[|x_t|\,|\mathfrak{F}_s] \ge |E[x_t|\mathfrak{F}_s]| = |x_s|, \quad \forall t \ge s.$
Thus the conclusion is deduced.

Martingales, submartingales and supermartingales have many important and useful properties, which make them powerful tools for dealing with many theoretical and practical problems in science, finance and elsewhere. Among them, the martingale inequalities, the limit theorems, and the
Doob-Meyer decomposition theorem for submartingales and supermartingales are most helpful and are frequently encountered in Stochastic Analysis and its Applications, and in this book. So we will discuss them in this chapter. However, to show them clearly we need to introduce the concept called a stopping time, which will be important for us later. We proceed to the next section.
1.2 Stopping Times. Predictable Process

Definition 5. A random variable $\tau(\omega) \in [0, \infty]$ is called an $\mathfrak{F}_t$-stopping time, or simply a stopping time, if for any $(\infty >)\, t \ge 0$, $\{\tau(\omega) \le t\} \in \mathfrak{F}_t$.

The intuitive interpretation of a stopping time is as follows. If a gambler has the right to stop his gamble at any time $\tau(\omega)$, he would of course like to choose the best time to stop. Suppose he stops his game before time $t$, i.e. he makes $\tau(\omega) \le t$; then the maximum information he can draw on for his decision is only the information up to $t$, i.e. $\{\tau(\omega) \le t\} \in \mathfrak{F}_t$. The trivial example of a stopping time is $\tau(\omega) = t, \forall \omega \in \Omega$; that is to say, any constant time $t$ is actually a stopping time. For a discrete random variable $\tau(\omega) \in \{0, 1, 2, \ldots, \infty\}$ the definition can be reduced to the following: $\tau(\omega)$ is a stopping time if for any $n \in \{0, 1, 2, \ldots\}$, $\{\tau(\omega) = n\} \in \mathfrak{F}_n$, since $\{\tau(\omega) = n\} = \{\tau(\omega) \le n\} - \{\tau(\omega) \le n-1\}$ and $\{\tau(\omega) \le n\} = \cup_{k=0}^{n} \{\tau(\omega) = k\}$. The following examples of stopping times will be useful later.
Let $\{x_n\}_{n \ge 0}$ be a submartingale. Then for every $N$ and $\lambda > 0$,
$\lambda P(\max_{0 \le n \le N} x_n \ge \lambda) \le E[x_N 1_{\{\max_{0 \le n \le N} x_n \ge \lambda\}}] \le E x_N^+.$
Now, for real numbers $a < b$, let
$\tau_1 = \min\{n \ge 0 : x_n \le a\}, \quad \tau_2 = \min\{n > \tau_1 : x_n \ge b\},$
and, inductively,
$\tau_{2k+1} = \min\{n > \tau_{2k} : x_n \le a\},$
$\tau_{2k+2} = \min\{n > \tau_{2k+1} : x_n \ge b\},$
where we recall that $\min \emptyset = +\infty$. Then $\{\tau_k\}$ is an increasing sequence of stopping times. In fact, $\forall k \ge 0$,
$\{\tau_1 = k\} = \{x_0 > a, x_1 > a, \ldots, x_{k-1} > a, x_k \le a\} \in \mathfrak{F}_k;$
$\{\tau_2 = k\} = \cup_{j=1}^{k-1} \{\tau_1 = j, \tau_2 = k\} = \cup_{j=1}^{k-1} \{\tau_1 = j, x_{j+1} < b, \ldots, x_{k-1} < b, x_k \ge b\} \in \mathfrak{F}_k;$
$\{\tau_3 = k\} = \cup_{j=1}^{k-1} \{\tau_2 = j, x_{j+1} > a, \ldots, x_{k-1} > a, x_k \le a\} \in \mathfrak{F}_k.$
Hence $\tau_1, \tau_2$ and $\tau_3$ are stopping times. The proofs for the rest are similar. Now set
$U_a^b[x(\cdot), N](\omega) = \max\{k \ge 1 : \tau_{2k}(\omega) \le N\},$
$D_a^b[x(\cdot), N](\omega) = \max\{k \ge 1 : \tau_{2k-1}(\omega) \le N\}.$
Obviously the first is the number of upcrossings of the interval $[a, b]$ by $\{x_n\}_{n=0}^N$, and the second is the number of downcrossings of $[a, b]$ by $\{x_n\}_{n=0}^N$.

Applying Theorem 18, we arrive at the results. $\blacksquare$

Theorem 18 and Corollary 19 are the classical crossing theorems on martingales. One can derive some other crossing results which are very useful in mathematical finance. Here we apply some of them to derive the important limit theorem on martingales.
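For a concrete finite path, the upcrossing number $U_a^b[x(\cdot), N]$ can be computed by scanning the path and alternating between the two hitting conditions that define $\tau_1, \tau_2, \ldots$ (a Python sketch; the function name and the sample path are illustrative choices, not from the text):

```python
def upcrossings(xs, a, b):
    """Count completed upcrossings of [a, b] by the finite path xs,
    mirroring the alternating stopping times: hit (-inf, a], then [b, inf), ..."""
    assert a < b
    count, looking_for_low = 0, True
    for x in xs:
        if looking_for_low:
            if x <= a:          # an odd-indexed time: the path has dropped to a or below
                looking_for_low = False
        else:
            if x >= b:          # an even-indexed time: a full upcrossing is completed
                count += 1
                looking_for_low = True
    return count

path = [3, 0, 1, 4, 2, -1, 5, 0, 2, 6]
print(upcrossings(path, a=0, b=4))  # → 3
```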
Theorem 20. If $\{x_n, \mathfrak{F}_n\}_{n \ge 0}$ is a submartingale such that there exists a subsequence $\{n_k\}$ of $\{n\}$ with
$\sup_k E x_{n_k}^+ < \infty, \quad (1.1)$
then $x_\infty = \lim_{n \to \infty} x_n$ exists a.s., and $x_\infty$ is integrable. In particular, if $x_n \le 0, \forall n$, then condition (1.1) is obviously satisfied, and in this case $\forall n$, $E[x_\infty|\mathfrak{F}_n] \ge x_n$, a.s.
$\alpha_i \ge 0$, $i = 1, 2, \ldots, m$.
If this is true, then $p$ is a Poisson random measure with the intensity measure $\lambda$. Let us simply show the case for $m = 2$. Note that by the independence,
$E \exp(-\sum_{j=1}^{2} \alpha_j p(B_j)) = \prod_{k=1}^{\infty} E \exp(-\alpha_1 \sum_{i=1}^{\infty} 1_{U_k \cap B_1}(\xi_i^k) 1_{\{i \le p_k\}} - \alpha_2 \sum_{i=1}^{\infty} 1_{U_k \cap B_2}(\xi_i^k) 1_{\{i \le p_k\}}) =: \prod_k J_k.$
However, by the complete probability formula one can derive that
$J_k = 1 \cdot P(p_k = 0) + \sum_{n=1}^{\infty} E[\exp(-\alpha_1 \sum_{i=1}^{n} 1_{U_k \cap B_1}(\xi_i^k) - \alpha_2 \sum_{i=1}^{n} 1_{U_k \cap B_2}(\xi_i^k))] \, \ldots$
1.8 Poisson Point Process and Its Existence

Now let us introduce the concept of random point processes. Assume that $(Z, \mathfrak{B}_Z)$ is a measurable space. Suppose that $D_p \subset (0, \infty)$ is a countable set; then a mapping $p : D_p \to Z$ is called a point function (valued) on $Z$. Endow $(0, \infty) \times Z$ with the product $\sigma$-field $\mathfrak{B}((0, \infty)) \times \mathfrak{B}_Z$, and define a counting measure through $p$ as follows:
$N_p((0, t] \times U) = \#\{s \in D_p : s \le t, p(s) \in U\}, \quad \forall t > 0, U \in \mathfrak{B}_Z,$
where $\#\{\cdot\}$ means the number of elements counted in the set $\{\cdot\}$. Now let us consider a function of two variables $p(t, \omega)$ such that for each $\omega \in \Omega$, $p(\cdot, \omega)$ is a point function on $Z$, i.e. $p(\cdot, \omega) : D_{p(\cdot, \omega)} \to Z$, where $D_{p(\cdot, \omega)} \subset (0, \infty)$ is a countable set. Naturally, its counting measure is defined by
$N_p((0, t] \times U, \omega) = N_{p(\omega)}((0, t] \times U) = \#\{s \in D_{p(\cdot, \omega)} : s \le t, p(s, \omega) \in U\}, \quad \forall t > 0, U \in \mathfrak{B}_Z,$
and we introduce the following definition.
Definition 51. 1) If $N_p((0, t] \times U, \omega)$ is a random measure on $(\mathfrak{B}((0, \infty)) \times \mathfrak{B}_Z) \times \Omega$, then $p$ is called a (random) point process.
2) If $N_p((0, t] \times U, \omega)$ is a Poisson random measure on $(\mathfrak{B}((0, \infty)) \times \mathfrak{B}_Z) \times \Omega$, then $p$ is called a Poisson point process.
3) For a Poisson point process $p$, if its intensity measure $n_p(dt\,dx) = E(N_p(dt\,dx))$ satisfies
$n_p(dt\,dx) = \pi(dx)\,dt,$
where $\pi(dx)$ is some measure on $(Z, \mathfrak{B}_Z)$, then $p$ is called a stationary Poisson point process, and $\pi(dx)$ is called the characteristic measure of $p$.

Recall that for a Poisson random measure on $\mathfrak{B}_Z \times \Omega$, (1.10) is a sufficient and necessary condition. Now we consider the Poisson random measure defined on $(\mathfrak{B}((0, \infty)) \times \mathfrak{B}_Z) \times \Omega$. Then it is easily seen that $p$ is a stationary Poisson point process with the characteristic measure $dt\,\pi(dx)$ if and only if $\forall t > s \ge 0$, disjoint $\{U_i\}_{i=1}^{m} \subset \mathfrak{B}_Z$ and $\lambda_i > 0$, $P$-a.s.
$E[\exp(-\sum_{i=1}^{m} \lambda_i N_p((s, t] \times U_i))\,|\,\mathfrak{F}_s^p] = \exp[(t-s)\sum_{i=1}^{m} \pi(U_i)(e^{-\lambda_i} - 1)],$
where $\mathfrak{F}_s^p = \sigma[N_p((0, s'] \times U); s' \le s, U \in \mathfrak{B}_Z]$. Now let us use this fact to show the existence of a stationary Poisson point process. First, we show a lemma on the Poisson process.
Lemma 52. If $\{N_t\}_{t \ge 0}$ is a Poisson process with intensity $t\mu$, i.e. $\{N_t\}_{t \ge 0}$ is a random process such that $P(N(t) = k) = e^{-\mu t}\frac{(\mu t)^k}{k!}$, $EN(t) = t\mu$, and it has stationary independent increments, then
1) $E[e^{-\lambda(N_t - N_s)}|\mathfrak{F}_s^N] = e^{(t-s)\mu(e^{-\lambda} - 1)}, \quad \forall t \ge s \ge 0, \lambda > 0;$
2) $E[e^{-\lambda(N_{t+\sigma} - N_\sigma)}|\mathfrak{F}_\sigma^N] = e^{t\mu(e^{-\lambda} - 1)}, \quad \forall t > 0, \lambda > 0,$
and for every bounded $\mathfrak{F}_t^N$-stopping time $\sigma$.

Proof. 1): Since $\{N_t\}_{t \ge 0}$ has independent increments,
$E[e^{-\lambda(N_t - N_s)}|\mathfrak{F}_s^N] = E[e^{-\lambda(N_t - N_s)}] = e^{(t-s)\mu(e^{-\lambda} - 1)}.$
The last equality follows from elementary probability theory.
2): Let us make a standard approximation of $\sigma$; i.e., for the bounded stopping time $\sigma$ with $0 \le \sigma \le T$, where $T$ is a constant, let $\sigma_n = \frac{k}{2^n}T$ on $\{\frac{k-1}{2^n}T \le \sigma < \frac{k}{2^n}T\}$, $k = 1, 2, \ldots$. Then for each $n$, $\sigma_n$ is a bounded discrete stopping time. Moreover, as $n \uparrow \infty$, $\sigma_n \downarrow \sigma$. By 1), $\forall B \in \mathfrak{F}_\sigma^N$,
$\int_{\{\sigma_n = \frac{k}{2^n}T\} \cap B} e^{-\lambda(N_{t+\sigma_n} - N_{\sigma_n})} dP = \int_{\{\sigma_n = \frac{k}{2^n}T\} \cap B} e^{t\mu(e^{-\lambda} - 1)} dP.$
So
$\int_B e^{-\lambda(N_{t+\sigma_n} - N_{\sigma_n})} dP = \int_B e^{t\mu(e^{-\lambda} - 1)} dP.$
Letting $n \to \infty$ one obtains that $\forall B \in \mathfrak{F}_\sigma^N$,
$\int_B e^{-\lambda(N_{t+\sigma} - N_\sigma)} dP = \int_B e^{t\mu(e^{-\lambda} - 1)} dP.$
Thus 2) is proved. $\blacksquare$
By means of this lemma one immediately obtains the following lemma.
Lemma 53 If $\{N(t)\}_{t\ge 0}$ is a Poisson process with intensity $p$, denote $\tau_1=\inf\{t\ge 0:N(t)=1\},\ldots$, and $\tau_k=\inf\{t-\tau_{k-1}>0:N(t)-N(\tau_{k-1})=1\}$; then
1) $\{\tau_k\}_{k=1}^\infty$ is an independent family of random variables, and $P(\tau_k>t)=e^{-pt}$, $\forall k=1,2,\ldots$;
2) $P(\sum_{j=1}^{k-1}\tau_j\le t<\sum_{j=1}^{k}\tau_j)=e^{-pt}\frac{(pt)^{k-1}}{(k-1)!}$.

Proof. 1): In fact, $P(\tau_1>t)=P(N(t)-N(0)=0)=e^{-pt}$. Hence by Lemma 52
$P(\tau_k>t)=P(N(t+\tau_{k-1})-N(\tau_{k-1})=0)=P(N(t)-N(0)=0)=e^{-pt},\ \forall k=1,2,\ldots.$
Since $\{N(t)\}_{t\ge 0}$ has independent increments, $\forall t_1,\forall t_2,\ldots,\forall t_n$,
$P(\bigcap_{k=1}^n\{\tau_k>t_k\})=P(\bigcap_{k=1}^n\{N(t_k+\tau_{k-1})-N(\tau_{k-1})=0\})=\prod_{k=1}^n P(N(t_k)-N(0)=0)=\prod_{k=1}^n P(\tau_k>t_k).$
Hence $\{\tau_k\}_{k=1}^\infty$ is an independent family.
2): $P(\tau_1\le t<\tau_1+\tau_2)=P(N(t)=1)=e^{-pt}pt$. In the same way,
$P(\sum_{j=1}^{k-1}\tau_j\le t<\sum_{j=1}^{k}\tau_j)=P(N(t)=k-1)=e^{-pt}\frac{(pt)^{k-1}}{(k-1)!}.\ \blacksquare$
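The interarrival description in Lemma 53 is easy to exercise numerically. The following sketch (an illustration only, not part of the text's argument; all names and parameter values are illustrative assumptions, standard library only) rebuilds $N(t)$ from i.i.d. $\mathrm{Exp}(p)$ interarrival times and compares the empirical law of $N(t)$ with the Poisson weights $e^{-pt}(pt)^k/k!$:

```python
# Monte Carlo check of Lemma 53: N(t) built from exponential interarrivals
# tau_k ~ Exp(p) should satisfy P(N(t) = k) = e^{-pt}(pt)^k / k!.
import math
import random

def simulate_counts(p, t, n_paths, rng):
    """Empirical frequencies of N(t) = k over n_paths simulated paths."""
    counts = {}
    for _ in range(n_paths):
        s, k = 0.0, 0
        while True:
            s += rng.expovariate(p)   # next interarrival time tau_{k+1}
            if s > t:
                break
            k += 1
        counts[k] = counts.get(k, 0) + 1
    return {k: c / n_paths for k, c in counts.items()}

rng = random.Random(0)
p, t = 2.0, 1.5                       # illustrative intensity and horizon
freq = simulate_counts(p, t, 20000, rng)
for k in range(5):
    weight = math.exp(-p * t) * (p * t) ** k / math.factorial(k)
    print(k, round(freq.get(k, 0.0), 3), round(weight, 3))
```

The two printed columns agree to within Monte Carlo error, which is exactly statement 2) of the lemma.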
Now we are in a position to show the existence of a Poisson point process.

Theorem 54 Given a $\sigma$-finite measure $\pi$ on $(Z,\mathfrak{B}_Z)$ there exists a stationary Poisson point process on $Z$ with the characteristic measure $\pi$.

Proof. Since $\pi$ is a $\sigma$-finite measure on $(Z,\mathfrak{B}_Z)$, there exists a disjoint $\{U_k\}_{k=1}^\infty\subset\mathfrak{B}_Z$ such that $\pi(U_k)<\infty$, and $Z=\bigcup_{k=1}^\infty U_k$. Let us construct probability spaces and the random variables defined on them as follows:
(i) On a probability space $(\Omega_0,\mathfrak{F}_0,P_0)$, $\xi_i^k$ is a $U_k$-valued random variable defined on it such that $P(\xi_i^k\in dx)=\pi(dx)/\pi(U_k)$, $\forall k,i=1,2,\ldots$, and $\{\xi_i^k,\ \forall k,i=1,2,\ldots\}$ is an independent random variable system.
(ii) For each $k=1,2,\ldots$, on a probability space $(\Omega_k,\mathfrak{F}_k,P_k)$, $\{N_t^k\}_{t\ge 0}$ is a Poisson random process with intensity $t\pi(U_k)$ defined on it; set, $\forall i=1,2,\ldots$,
$\tau_i^k=\inf\{t-\tau_{i-1}^k>0:N^k(t)-N^k(\tau_{i-1}^k)=1\};$
then by Lemma 53 $\{\tau_i^k\}_{i=1}^\infty$ is an independent variable system such that $P(\tau_i^k>t)=\exp[-t\pi(U_k)]$, for $t\ge 0$.
Now let $\Omega=\times_{k=0}^\infty\Omega_k$, $\mathfrak{F}=\times_{k=0}^\infty\mathfrak{F}_k$, $P=\times_{k=0}^\infty P_k$. Then $\xi_i^k,N_t^k$ are naturally extended to be defined on the probability space $(\Omega,\mathfrak{F},P)$, i.e. for $\omega=(\omega_0,\omega_1,\ldots,\omega_k,\ldots)\in\Omega$ let $\xi_i^k(\omega)=\xi_i^k(\omega_0)$, $N_t^k(\omega)=N_t^k(\omega_k)$. Then $\{\xi_i^k,\tau_i^k,\ \forall k,i=1,2,\ldots\}$ is an independent random variable system on $(\Omega,\mathfrak{F},P)$.
Moreover, $\{N_t^k,\xi_i^k,\ \forall k,i=1,2,\ldots\}$ is an independent system of random maps. Now set
$D_p=\bigcup_{k=1}^\infty\{\tau_1^k,\ \tau_1^k+\tau_2^k,\ \ldots,\ \tau_1^k+\tau_2^k+\cdots+\tau_m^k,\ \ldots\}.$
1.8 Poisson Point Process and Its Existence
Define $p(\tau_1^k+\tau_2^k+\cdots+\tau_m^k)=\xi_m^k$, $\forall k$, $m=1,2,\ldots$, and introduce a counting measure by $p$ as follows:
$N_p((s,t]\times(U_k\cap B))=\#\{r\in D_p:r\in(s,t],\ p(r)\in U_k\cap B\}.$
Then we have that
$N_p((s,t]\times(U_k\cap B))=\sum_{m=1}^\infty I_{U_k\cap B}(\xi_m^k)I_{(s,t]}(\sum_{i=1}^m\tau_i^k).$
Note that if $\omega\in\Omega$ is such that $s<\sum_{i=1}^m\tau_i^k(\omega)\le t$, then there exists an $\bar r\in(s,t]$ (actually, $\bar r=\sum_{i=1}^m\tau_i^k(\omega)$) such that $N^k(\bar r,\omega)=m$, and $N^k(u,\omega)=m-1$, $\forall u<\bar r$. Conversely, if there exists an $\bar r\in(s,t]$ such that $N^k(\bar r,\omega)=m$, and $N^k(u,\omega)=m-1$, $\forall u<\bar r$, then $\bar r=\sum_{i=1}^m\tau_i^k(\omega)\in(s,t]$, because $\{\sum_{i=1}^m\tau_i^k(\omega)\}_{m=1}^\infty$ is just the set of all jump times which have happened for the Poisson process $N^k(u,\omega)$. Thus by (1.12) and by the independence one has that
$N_p((s,t]\times B)=\sum_{k=1}^\infty\sum_{m=1}^\infty I_{U_k\cap B}(\xi_m^k)I_{(s,t]}(\sum_{i=1}^m\tau_i^k).$
Now the proof of (1.11) for this $N_p((s,t]\times B)$ can be completed by the complete probability formula, as in the proof of Theorem 50. $\blacksquare$
A special case is the following corollary.
Corollary 55 Given a finite measure $\pi$ on $(Z,\mathfrak{B}_Z)$ there exists a finite stationary Poisson point process on $Z$ with the characteristic measure $\pi$. (Here, a finite measure $\pi$ means that $\pi(Z)<\infty$; and a finite point process $p$ means that the counting measure $N_p((0,t],Z)$ generated by $p$ is always finite, $\forall 0\le t<\infty$.)
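For a finite characteristic measure the construction behind Theorem 54 and Corollary 55 collapses to a single marked Poisson process: jump times arrive at rate $\pi(Z)$ and each mark is drawn from $\pi/\pi(Z)$. A hedged simulation sketch (the measure $\pi$ and all names are illustrative assumptions, not from the text):

```python
# Sketch of a finite stationary Poisson point process: exponential waiting
# times at rate pi(Z), marks drawn from pi / pi(Z). Then E N_p((0,t] x B)
# should equal t * pi(B).
import random

def sample_point_process(pi, t_max, rng):
    """List of (time, mark) pairs of the point process on (0, t_max]."""
    total = sum(pi.values())
    marks = list(pi)
    weights = [pi[z] / total for z in pi]
    points, s = [], 0.0
    while True:
        s += rng.expovariate(total)       # waiting time ~ Exp(pi(Z))
        if s > t_max:
            return points
        points.append((s, rng.choices(marks, weights)[0]))

rng = random.Random(1)
pi = {"a": 0.5, "b": 1.5}                 # an illustrative finite measure
t_max, n_paths = 2.0, 5000
mean_a = sum(
    sum(1 for (_, z) in sample_point_process(pi, t_max, rng) if z == "a")
    for _ in range(n_paths)
) / n_paths
print(round(mean_a, 2))                   # close to t_max * pi({"a"}) = 1.0
```

The empirical mean count of marks in $\{a\}$ approximates $t\,\pi(\{a\})$, the intensity-measure identity used repeatedly below.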
$<\infty$, $\forall t>0,\forall n$, and $Z=\bigcup_{n=1}^\infty U_n$. From now on we only discuss $\mathfrak{F}_t$-adapted and $\sigma$-finite point processes $p$. Denote
$\Gamma_p=\{U\in\mathfrak{B}_Z:EN_p(t,U)<\infty,\ \forall t>0\}.$
Obviously, for any $U\in\Gamma_p$, $N_p(t,U)$ is a submartingale and is of class (DL), since it is non-negative and increasing in $t$. Hence by Doob--Meyer's decomposition theorem (Theorem 42) there exist a unique $\mathfrak{F}_t$-adapted martingale $\tilde N_p(t,U)$ and a unique $\mathfrak{F}_t$-adapted natural increasing process $\hat N_p(t,U)$ such that
$N_p(t,U)=\tilde N_p(t,U)+\hat N_p(t,U).$   (1.13)
Notice that the equality only holds true $P$-a.s. for the given $U$. Hence $\hat N_p(t,U)$ may not be a measure in $U\in\mathfrak{B}_Z$, a.s. Moreover, it also may not be continuous in $t$. However, in most practical cases we need $\hat N_p(t,U)$ to have such properties.

Definition 57 A point process $p$ is said to be of class (QL) (meaning Quasi Left-continuous) if in the D--M decomposition expression (1.13)
(i) $\hat N_p(t,U)$ is continuous in $t$ for any $U\in\Gamma_p$;
(ii) $\hat N_p(t,U)$ is a $\sigma$-finite measure on $(Z,\mathfrak{B}_Z)$ for any given $t\ge 0$, $P$-a.s.
1.9 Stochastic Integral for Point Process. Square Integrable Martingales
We will call $\hat N_p(t,U)$ the compensator of $N_p(t,U)$ (or of $p$). We now introduce the following definition for the Poisson point process.

Definition 58 A point process $p$ is called an $\mathfrak{F}_t$-Poisson point process if it is a Poisson point process, $\mathfrak{F}_t$-adapted, and $\sigma$-finite, such that $N_p(t+h,U)-N_p(t,U)$ is independent of $\mathfrak{F}_t$ for each $h>0$ and each $U\in\Gamma_p$.

Notice that $\mathfrak{F}_t^p=\sigma[N_p((0,s]\times U);\,s\le t,\,U\in\mathfrak{B}_Z]\subset\mathfrak{F}_t$, and in general these may not equal each other. This is why we have to assume that for an $\mathfrak{F}_t$-Poisson point process $N_p(t+h,U)-N_p(t,U)$ is independent of $\mathfrak{F}_t$. From now on we only discuss point processes which belong to class (QL). By definition one can see that the following proposition holds true.

Proposition 59 A point process $p$ is a stationary $\mathfrak{F}_t$-Poisson point process of class (QL) if and only if its compensator has the form
$\hat N_p(t,U)=t\pi(U),\ \forall t>0,\ U\in\Gamma_p,$
where $\pi(\cdot)$ is a $\sigma$-finite measure on $\mathfrak{B}_Z$.

In fact, the "only if" part of the Proposition can be seen from the definition: if $p$ is a stationary $\mathfrak{F}_t$-Poisson point process, then its counting measure $N_p(t,U)$ is a Poisson random measure with the intensity measure $EN_p(t,U)=t\pi(U)$, where $\pi(\cdot)$ is a $\sigma$-finite measure on $\mathfrak{B}_Z$. From this, one sees that $\forall t\ge 0,\forall h>0$,
$E[(N_p(t+h,U)-(t+h)\pi(U))-(N_p(t,U)-t\pi(U))|\mathfrak{F}_t]=E[N_p(h,U)-h\pi(U)]=0.$
Hence $\{N_p(t,U)-t\pi(U)\}_{t\ge 0}$ is an $\mathfrak{F}_t$-martingale, i.e.
$N_p(t,U)-t\pi(U)=M_t,$
where $M_t$ is an $\mathfrak{F}_t$-martingale. However, by the uniqueness of the decomposition of the submartingale $N_p(t,U)$ (Theorem 42) one should have $M_t=\tilde N_p(t,U)$, and $\hat N_p(t,U)=t\pi(U)$. The "if" part of the above Proposition will be proved in the next chapter by using Ito's formula.
Now let us discuss the integral with respect to the point process. In the simple case it can be defined by the Lebesgue--Stieltjes integral. First, we have the following lemma.
Lemma 60 If $U\in\Gamma_p$ and $f(t,\omega)$ is a bounded $\mathfrak{F}_t$-predictable process, then
$\int_0^{t+}f(s)dN_p(s,U)-\int_0^{t}f(s)d\hat N_p(s,U)=\int_0^{t+}f(s)d\tilde N_p(s,U)$
is an $\mathfrak{F}_t$-martingale.

Proof. Assume first that $f(t,\omega)$ is a left-continuous bounded $\mathfrak{F}_t$-adapted process. Let
$f_n(s)=f(0)I_{\{s=0\}}(s)+\sum_{k=0}^\infty f(\tfrac{k}{2^n})I_{(\frac{k}{2^n},\frac{k+1}{2^n}]}(s).$
Then by definition one easily sees that
$\int_0^{t+}f_n(s)d\tilde N_p(s,U)=\sum_{k=0}^\infty f(\tfrac{k}{2^n})[\tilde N_p(\tfrac{k+1}{2^n}\wedge t,U)-\tilde N_p(\tfrac{k}{2^n}\wedge t,U)].$
So the left hand side is obviously an $\mathfrak{F}_t$-martingale. Now since $f(t)$ is left-continuous and bounded, applying Lebesgue's dominated convergence theorem and using
$E[\int_s^{t+}f_n(u)dN_p(u,U)|\mathfrak{F}_s]=E[\int_s^{t}f_n(u)d\hat N_p(u,U)|\mathfrak{F}_s],\ \forall s\le t,$
one obtains that, as $n\to\infty$,
$E[\int_s^{t+}f(u)dN_p(u,U)|\mathfrak{F}_s]=E[\int_s^{t}f(u)d\hat N_p(u,U)|\mathfrak{F}_s],\ \forall s\le t.$
Now by the Monotone-Class Theorem (Theorem 392) it is easily seen that the conclusion also holds true for all bounded $\mathfrak{F}_t$-predictable processes. $\blacksquare$
The integral defined in the above lemma motivates us to define the stochastic integrals with respect to the counting measure and the martingale measure generated by a point process of the class (QL), for some class of stochastic processes as the integrands, through the Lebesgue--Stieltjes integral. First, let us generalize the notion of predictable processes to functions $f(t,z,\omega)$ with three variables.
Definition 61 1) By $\mathcal{P}$ we denote the smallest $\sigma$-field on $[0,\infty)\times Z\times\Omega$ which makes all $g$ with the following properties $\mathcal{P}/\mathfrak{B}(R^1)$-measurable:
(i) for each $t>0$, $Z\times\Omega\ni(z,\omega)\to g(t,z,\omega)\in R^1$ is $\mathfrak{B}_Z\times\mathfrak{F}_t$-measurable;
(ii) for each $(z,\omega)$, $g(t,z,\omega)$ is left-continuous in $t$.
2) If a real function $g(t,z,\omega)$ is $\mathcal{P}/\mathfrak{B}(R^1)$-measurable, then we call it $\mathfrak{F}_t$-predictable, and denote $g\in\mathcal{P}$.

Now for any given $\mathfrak{F}_t$-point process of the class (QL) let us introduce four classes of random processes as follows:
$\mathbf{F}_p=\{f(t,z,\omega):f$ is $\mathfrak{F}_t$-predictable such that $\forall t>0$, $\int_0^{t+}\int_Z|f(s,z,\omega)|N_p(ds,dz)<\infty$, a.s.$\}$,
$\mathbf{F}_p^1=\{f(t,z,\omega):f$ is $\mathfrak{F}_t$-predictable such that $\forall t>0$, $E\int_0^{t+}\int_Z|f(s,z,\omega)|\hat N_p(ds,dz)<\infty\}$,
$\mathbf{F}_p^2=\{f(t,z,\omega):f$ is $\mathfrak{F}_t$-predictable such that $\forall t>0$, $E\int_0^{t+}\int_Z|f(s,z,\omega)|^2\hat N_p(ds,dz)<\infty\}$,
$\mathbf{F}_p^{2,loc}=\{f(t,z,\omega):f$ is $\mathfrak{F}_t$-predictable such that $\exists\sigma_n\uparrow\infty$, a.s., each $\sigma_n$ is a stopping time, and $I_{[0,\sigma_n]}(t)f(t,z,\omega)\in\mathbf{F}_p^2$, $\forall n=1,2,\ldots\}$.
It is natural that we define the stochastic integral for $f\in\mathbf{F}_p$ with respect to the counting measure by
$\int_0^{t+}\int_Z f(s,z,\omega)N_p(ds,dz)=\sum_{s\le t,\,s\in D_p(\omega)}f(s,p(s,\omega),\omega),$
since the last series converges absolutely for a.s. $\omega\in\Omega$. Note that
$E\int_0^{t+}\int_Z|f(s,z,\omega)|N_p(ds,dz)=E\int_0^{t}\int_Z|f(s,z,\omega)|\hat N_p(ds,dz).$
Actually, the above equality holds for $f$ being an $\mathfrak{F}_t$-simple process. Applying the monotone class theorem (Theorem 391) one easily sees that it is also true for $f$ being an $\mathfrak{F}_t$-predictable process. So $\mathbf{F}_p^1\subset\mathbf{F}_p$, and for $f\in\mathbf{F}_p^1$ we can define the stochastic integral with respect to the martingale measure by
$\int_0^{t+}\int_Z f(s,z,\omega)\tilde N_p(ds,dz)=\int_0^{t+}\int_Z f(s,z,\omega)N_p(ds,dz)-\int_0^{t}\int_Z f(s,z,\omega)\hat N_p(ds,dz).$   (1.14)
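The jump-sum form of the counting-measure integral, and its compensated version, can be illustrated by simulation. In the sketch below (a hypothetical two-point mark space and integrand, standard library only; everything here is an illustrative assumption), the difference between the jump sum and the compensator integral averages to approximately zero, as the martingale property predicts:

```python
# Sketch: jump-sum integral sum_{s<=t} f(s, p(s)) for a stationary Poisson
# point process minus the compensator integral int_0^t sum_z f(s,z) pi({z}) ds;
# the difference should be a mean-zero (martingale) quantity.
import random

def jump_sum(f, pi, t, rng):
    """One sample of the counting-measure integral over (0, t]."""
    total = sum(pi.values())
    marks = list(pi)
    weights = [pi[z] / total for z in pi]
    s, acc = 0.0, 0.0
    while True:
        s += rng.expovariate(total)
        if s > t:
            return acc
        acc += f(s, rng.choices(marks, weights)[0])

rng = random.Random(2)
pi = {1: 1.0, 2: 2.0}                 # illustrative characteristic measure
f = lambda s, z: s * z                # an illustrative predictable integrand
t = 1.0
# compensator integral int_0^1 (f(s,1)*1 + f(s,2)*2) ds = int_0^1 5s ds = 2.5,
# approximated on a grid:
n = 1000
comp = sum(sum(f(i * t / n, z) * pi[z] for z in pi) * (t / n) for i in range(n))
mean = sum(jump_sum(f, pi, t, rng) - comp for _ in range(4000)) / 4000
print(round(comp, 2), round(mean, 3))
```

The near-zero sample mean is the discrete shadow of the compensated integral being a martingale.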
As in the proof of Lemma 60 one can show that the integral in (1.14) is a martingale. In fact, it is true for $f$ being an $\mathfrak{F}_t$-simple process. Applying the monotone class theorem (Theorem 391) again one easily sees that it also holds true for $f$ being an $\mathfrak{F}_t$-predictable process. Thus for $f\in\mathbf{F}_p^1$ the stochastic integral with respect to $\tilde N_p(ds,dz)$ can be defined by (1.14), and it is an $\mathfrak{F}_t$-martingale. However, for $f\in\mathbf{F}_p^2$ we cannot define the stochastic integral by (1.14), since each term on the right hand side of (1.14) may not have meaning in this case. So we have to define it through a limit. Let us introduce the following notation:
$\mathcal{M}^2=\{\{m_t\}_{t\ge 0}:\{m_t\}_{t\ge 0}$ is a square integrable martingale, i.e. $\{m_t\}_{t\ge 0}$ is a martingale, and for each $t\ge 0$, $E|m_t|^2<\infty$; moreover, $m_0=0\}$;
$\mathcal{M}^{2,loc}=\{\{m_t\}_{t\ge 0}:\{m_t\}_{t\ge 0}$ is a locally square integrable martingale, i.e. $\exists\sigma_n\uparrow\infty$, each $\sigma_n$ is a stopping time, such that for each $n$, $\{m_{t\wedge\sigma_n}\}_{t\ge 0}\in\mathcal{M}^2\}$;
$\mathcal{M}_T^2=\{\{m_t\}_{t\in[0,T]}:\{m_t\}_{t\ge 0}\in\mathcal{M}^2\}$;
$\mathcal{M}_T^{2,loc}=\{\{m_t\}_{t\in[0,T]}:\{m_t\}_{t\ge 0}\in\mathcal{M}^{2,loc}\}$.
For each $\{m_t\}_{t\ge 0}\in\mathcal{M}^2$, by Jensen's inequality $\{|m_t|^2\}_{t\ge 0}$ is a non-negative submartingale. So it is of class (DL). In fact, for any constant $a>0$ and any stopping time $\sigma\le a$ one has that $E|m_\sigma|^2\le E|m_a|^2$; so $\{|m_\sigma|^2:\sigma$ a stopping time, $\sigma\le a\}$ is uniformly integrable. Now by the D--M decomposition theorem $|m_t|^2$ has a unique decomposition, and we denote it by
$|m_t|^2=$ a martingale $+\langle m\rangle_t,$
i.e. $\langle m\rangle_t$ is the natural increasing process for the decomposition of the submartingale $|m_t|^2$. Usually, $\langle m\rangle_t$ is called the characteristic process of $m_t$. Let us show the following lemma.
Lemma 62 If $f\in\mathbf{F}_p^1\cap\mathbf{F}_p^2$, then
$\{\int_0^{t+}\int_Z f(s,z,\omega)\tilde N_p(ds,dz)\}_{t\ge 0}\in\mathcal{M}^2,$
and
$\langle\int_0^{\cdot+}\int_Z f(s,z,\omega)\tilde N_p(ds,dz)\rangle_t=\int_0^{t}\int_Z|f(s,z,\omega)|^2\hat N_p(ds,dz).$

Proof. Let us consider a special case first. Assume that $f(s,z,\omega)=I_U(z)$, $U\in\Gamma_p$. We are going to show that
$\langle\tilde N_p(\cdot,U)\rangle_t=\hat N_p(t,U).$
In fact, let, $\forall m=1,2,\ldots$,
$\sigma_m=\inf\{t\ge 0:|\tilde N_p(t,U)|>m,$ or $\hat N_p(t,U)>m\}.$
Then $\sigma_m$ is a stopping time, since $\forall t>0$,
$\{\sigma_m>t\}=\{|\tilde N_p(s,U)|\le m$ and $\hat N_p(s,U)\le m,\ \forall s\le t\}\in\mathfrak{F}_t.$
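The special case treated first in this proof, $f=I_U$, can be checked by Monte Carlo: the compensated count $\tilde N_p(t,U)=N_p(t,U)-t\pi(U)$ should have second moment equal to $\hat N_p(t,U)=t\pi(U)$. A sketch under illustrative parameters (all names are assumptions for the demonstration):

```python
# Monte Carlo check of <N~_p(.,U)>_t = t*pi(U): for N_p(t,U) ~ Poisson(rate*t),
# E[(N_p(t,U) - rate*t)^2] should equal rate*t.
import random

rng = random.Random(3)
rate, t, n_paths = 1.7, 2.0, 20000    # illustrative pi(U) and horizon
second_moment = 0.0
for _ in range(n_paths):
    s, n = 0.0, 0
    while True:                        # count exponential waits up to time t
        s += rng.expovariate(rate)
        if s > t:
            break
        n += 1
    second_moment += (n - rate * t) ** 2
second_moment /= n_paths
print(round(second_moment, 2))         # close to rate * t = 3.4
```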
For $t>0$, $x\in R^d$ let $p(t,x)=(2\pi t)^{-d/2}\exp[-|x|^2/2t]$. Then we have the following proposition.
Proposition 65 If $\{x_t\}_{t\ge 0}$ is a continuous $d$-dimensional random process, the following statements are equivalent:
(i) $\{x_t\}_{t\ge 0}$ is a BM with some initial law $\mu$;
(ii) $\forall 0=t_0<t_1<\cdots<t_m$, and $\Gamma_i\in\mathfrak{B}(R^d)$, $i=0,1,2,\ldots,m$,
$P(x_{t_i}\in\Gamma_i,\ i=0,1,\ldots,m)=\int_{\Gamma_0}\mu(dy_0)\int_{\Gamma_1}p(t_1-t_0,y_1-y_0)dy_1\cdots\int_{\Gamma_m}p(t_m-t_{m-1},y_m-y_{m-1})dy_m;$
(iii) $E[\exp(i\langle\lambda,x_t-x_s\rangle)|\mathfrak{F}_s^x]=\exp[-(t-s)|\lambda|^2/2]$, a.s., $\forall\lambda\in R^d$, $0\le s<t$.

Proof. (iii)$\Longrightarrow$(i): $\forall 0=t_0<t_1<\cdots<t_m$, $\forall\lambda_k\in R^d$,
$E[\exp(i\langle\lambda_0,x_{t_0}\rangle+i\sum_{k=1}^m\langle\lambda_k,x_{t_k}-x_{t_{k-1}}\rangle)]=E[e^{i\langle\lambda_0,x_0\rangle}E(\exp(i\sum_{k=1}^m\langle\lambda_k,x_{t_k}-x_{t_{k-1}}\rangle)|\mathfrak{F}_{t_0}^x)]$
$=E[e^{i\langle\lambda_0,x_0\rangle}E(e^{i\langle\lambda_1,x_{t_1}-x_{t_0}\rangle}E[e^{i\langle\lambda_2,x_{t_2}-x_{t_1}\rangle}\cdots E(e^{i\langle\lambda_m,x_{t_m}-x_{t_{m-1}}\rangle}|\mathfrak{F}_{t_{m-1}}^x)\cdots|\mathfrak{F}_{t_1}^x]|\mathfrak{F}_{t_0}^x)]$
2.1 Brownian Motion and Its Nowhere Differentiability
$=(Ee^{i\langle\lambda_0,x_0\rangle})\cdot\prod_{j=1}^m\exp[-(t_j-t_{j-1})|\lambda_j|^2/2].$
Thus the increments $x_{t_0},x_{t_1}-x_{t_0},x_{t_2}-x_{t_1},\ldots,x_{t_m}-x_{t_{m-1}}$ are independent, and the increment $x_{t_j}-x_{t_{j-1}}$ is normally distributed with mean $0$ and with the variance matrix such that all elements on the diagonal are equal to $t_j-t_{j-1}$, and all other elements equal $0$. Thus $\{x_t\}_{t\ge 0}$ is a ($d$-dimensional) BM with some initial law.
(i)$\Longrightarrow$(iii): If $\{x_t\}_{t\ge 0}$ is a BM, then by the independent increment property, and because its increments are normally distributed, one finds that
$E[\exp(i\langle\lambda,x_t-x_s\rangle)|\mathfrak{F}_s^x]=E[\exp(i\langle\lambda,x_t-x_s\rangle)]=\exp[-(t-s)|\lambda|^2/2].$
Now let us set:
$W^d=$ the set of all continuous $d$-dimensional functions $w(t)$ defined for $t\ge 0$;
$\mathfrak{B}(W^d)=$ the smallest $\sigma$-field including all Borel cylinder sets in $W^d$, where a Borel cylinder set means a set $B\subset W^d$ of the form
$B=\{w:(w(t_1),\ldots,w(t_n))\in A\},$
for some finite sequence $0\le t_1<t_2<\cdots<t_n$ and $A\in\mathfrak{B}(R^{nd})$.
From the above one sees that a given Brownian motion $\{x_t\}_{t\ge 0}$ will lead to the generation of a probability measure $P$ defined on $\mathfrak{B}(W^d)$ such that its measure of the Borel cylinder set is given by (ii) in Proposition 65. Such a probability measure is called a Wiener measure with the initial measure (or, say, the initial law) $\mu$. Conversely, if we have a Wiener measure $P$ with initial measure $\mu$ on $\mathfrak{B}(W^d)$, let $(\Omega,\mathfrak{F},P)=(W^d,\mathfrak{B}(W^d),P)$, $x_t(w)=w(t)$, $\forall t\ge 0$, $w\in W^d$; then we obtain a BM $\{x_t\}_{t\ge 0}$ defined on the probability space $(\Omega,\mathfrak{F},P)$. So the BM is in one to one correspondence with the Wiener measure. Now a natural question arises: does the BM, that is, the Wiener measure, exist? The existence of the Brownian motion is established by the following theorem.
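The finite-dimensional description (ii) also tells us how to sample a BM: increments over disjoint intervals are independent centered Gaussians with variance equal to the interval length. A small sketch checking the variance and the vanishing correlation of disjoint increments (all parameters illustrative):

```python
# Sketch of Proposition 65 (ii): BM increments over disjoint intervals are
# independent N(0, length). Check Var(x_{0.5}-x_0) = 0.5 and the zero
# correlation between (x_{0.5}-x_0) and (x_{1.5}-x_{0.5}).
import random

rng = random.Random(4)
n_paths = 20000
cov, var1 = 0.0, 0.0
for _ in range(n_paths):
    inc1 = rng.gauss(0.0, 0.5 ** 0.5)   # x_{0.5} - x_0, variance 0.5
    inc2 = rng.gauss(0.0, 1.0 ** 0.5)   # x_{1.5} - x_{0.5}, variance 1.0
    cov += inc1 * inc2
    var1 += inc1 * inc1
print(round(var1 / n_paths, 2), round(cov / n_paths, 3))
```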
Theorem 69 If $\alpha>\frac12$, then $P$-a.s. the BM $\{x_t\}_{t\ge 0}$ is not H\"older-continuous with index $\alpha$ at each $t\ge 0$.

Definition 70 We say that $\{x_t\}_{t\ge 0}$ is H\"older-continuous with index $\alpha$ at $t_0\ge 0$, if $\forall\varepsilon>0$, $\exists\delta>0$ such that as $|t-t_0|<\delta$, $|x_t-x_{t_0}|<\varepsilon|t-t_0|^\alpha$.

Theorem 69 actually tells us that the trajectory of BM is not Lipschitz continuous at each point $t$, so it is nowhere differentiable for $t\ge 0$. Hence it is also not of finite variation on any finite interval of $t$, since each finite variation function of $t$ should be almost everywhere differentiable in $t$. Thus we arrive at the following corollary.

Corollary 71 1) The trajectory of a BM is nowhere differentiable for $t\ge 0$, $P$-a.s.
2) The trajectory of a BM is not of finite variation on any finite interval of $t$, $P$-a.s.

Now let us show Theorem 69.
Proof. Take a positive integer $N$ such that $N(\alpha-\frac12)>1$. For any positive integer $T>0$ denote
$A_n^\varepsilon=\{\omega:$ there exists an $s\in[0,T]$ such that $\forall t\in[0,T]$, $|t-s|<\tfrac{N}{n}\Longrightarrow|x_t(\omega)-x_s(\omega)|<\varepsilon|t-s|^\alpha\}.$
Obviously, $A_n^\varepsilon\uparrow$, as $n\uparrow$. Let $A^\varepsilon=\bigcup_{n=1}^\infty A_n^\varepsilon$. If one can show that $\forall\varepsilon>0$, $P(A_n^\varepsilon)=0$, then one finds that the conclusion of Theorem 69 holds true on the interval $[0,T]$. Now set
2.2 Spaces $L_0$ and $\mathcal{L}^2$
$Z_k=\max_{1\le i\le N}|x_{\frac{(k+i)T}{n}}-x_{\frac{(k+i-1)T}{n}}|,\ k=0,1,\ldots,n;$
$B_n^\varepsilon=\{\omega:\exists k$ such that $Z_k(\omega)\le 2\varepsilon(N/n)^\alpha\}.$
Let us show that $A_n^\varepsilon\subset B_n^\varepsilon$. In fact, if $\omega\in A_n^\varepsilon$, then $\exists s\in[0,T]$ such that $|x_t(\omega)-x_s(\omega)|<\varepsilon|t-s|^\alpha$, whenever $|t-s|<N/n$. ...

... the trajectory of a BM is nowhere differentiable on $t\ge 0$, $P$-a.s. This means that $\exists\Lambda\in\mathfrak{F}$ such that $P(\Lambda)=0$ and, as $\omega_0\notin\Lambda$, the function $w_t(\omega_0)$ cannot be differentiated at any $t\ge 0$. This means that we cannot simply define the stochastic integral $\int_0^t f(t,\omega)dw_t(\omega)$ for each $\omega\notin\Lambda$ in terms of the usual integral. That is why Ito had to invent a new way to define this completely different integral, which now is known as Ito's integral [50], [51].
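The obstruction described above is visible numerically: a BM increment over a step $h$ has typical size $\sqrt h$, so difference quotients grow like $h^{-1/2}$. A sketch (step sizes and sample counts are illustrative):

```python
# Illustration of nowhere differentiability: the mean difference quotient
# E|w_{t+h} - w_t| / h = sqrt(2/(pi*h)) blows up as h -> 0.
import random

rng = random.Random(5)
n_paths = 5000
for h in [0.1, 0.01, 0.001]:
    mean_quot = sum(abs(rng.gauss(0.0, h ** 0.5)) / h
                    for _ in range(n_paths)) / n_paths
    print(h, round(mean_quot, 2))      # grows roughly like h**-0.5
```

Tenfold smaller steps multiply the quotient by about $\sqrt{10}$, so no finite derivative can emerge in the limit.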
Lemma 73 For each $f\in\mathcal{L}^2$ let
$\|f\|_2=\sum_{n=1}^\infty\frac{1}{2^n}(\|f\|_{2,n}\wedge 1).$
Then 1) $\|\cdot\|_2$ is a metric, and $\mathcal{L}^2$ is complete under this metric, if we make the identification $f=f'$, $\forall f,f'\in\mathcal{L}^2$, as $\|f-f'\|_{2,n}=0$, $\forall n$.
2) $L_0$ is dense in $\mathcal{L}^2$ with respect to the metric $\|\cdot\|_2$.

Proof. 1) is obvious. We only need to show 2). Suppose that $f=\{f_t\}_{t\ge 0}$ is a bounded left-continuous $\mathfrak{F}_t$-adapted process. Let
$f_n(0,\omega)=f(0,\omega),\quad f_n(t,\omega)=f(\tfrac{k}{2^n},\omega),$ as $t\in(\tfrac{k}{2^n},\tfrac{k+1}{2^n}],\ k=0,1,\ldots.$
Then $f_n\in L_0$ and $\|f_n-f\|_2\to 0$, as $n\to\infty$, by the left-continuity of $f$ and Lebesgue's dominated convergence theorem. Now collect all $f\in\mathcal{L}^2$ to form a family $\mathcal{H}$ such that each $f\in\mathcal{H}$ can be approximated by some $\{f_n\}_{n=1}^\infty\subset L_0$ under the metric $\|\cdot\|_2$. Then $\mathcal{H}$ contains all bounded left-continuous $\mathfrak{F}_t$-adapted processes, and obviously it is closed under non-negative increasing limits with respect to the metric $\|\cdot\|_2$. So, by Theorem 392, $\mathcal{H}$ contains all bounded $\mathfrak{F}_t$-predictable processes. However, for each bounded $f\in\mathcal{L}^2$ one can let $f_n(t,\omega)=n\int_{t-1/n}^{t}f(s,\omega)ds$ to form a bounded sequence such that each $f_n$ is a bounded $\mathfrak{F}_t$-adapted continuous process, and by real function theory, for each $\omega\in\Omega$, $f_n(t,\omega)\to f(t,\omega)$ for a.e. $t$. Hence one sees that for each $k$, as $n\to\infty$,
$\|f_n-f\|_{2,k}^2=E\int_0^k|f_n(t,\omega)-f(t,\omega)|^2dt\to 0,$
by Lebesgue's dominated convergence theorem. Hence $\mathcal{H}$ contains all bounded $f\in\mathcal{L}^2$. Finally, for any $f\in\mathcal{L}^2$ let $f_n(t,\omega)=f(t,\omega)I_{|f(t,\omega)|\le n}$. Then $|f_n|\le n$, and for each $k$, as $n\to\infty$, $\|f_n-f\|_{2,k}\to 0$. So we have shown that $\mathcal{H}\supset\mathcal{L}^2$. $\blacksquare$
2.3 Ito's Integrals on $\mathcal{L}^2$

First, we will define the Ito integral for $L_0$. Suppose that an $\mathfrak{F}_t$-Brownian motion $\{w_t\}_{t\ge 0}$ (Wiener process) is given on $(\Omega,\mathfrak{F},P)$.
Firstly, it is easily seen that the stochastic integral also has an expression which is actually a finite sum for each $0\le t<\infty$:
$I(f)(t)=\sum_{i=0}^\infty f_i(w(t_{i+1}\wedge t)-w(t_i\wedge t));$
moreover, $I(f)(t)$ is continuous in $t$. Secondly, it has the following property.

Proposition 75 1) $I(f)(0)=0$, a.s., and for any $\alpha,\beta\in R$, $f,g\in L_0$,
$I(\alpha f+\beta g)=\alpha I(f)+\beta I(g).$
2) $\{I(f)(t)\}_{t\ge 0}$ is an $\mathfrak{F}_t$-martingale.
Proof. 1) is obvious. 2): Since the BM $w(t)$ is a square integrable martingale, $\forall s\le t$,
$E[f_i(w(t_{i+1}\wedge t)-w(t_i\wedge t))|\mathfrak{F}_s]=f_i(w(t_{i+1}\wedge s)-w(t_i\wedge s)).$
So $E[I(f)(t)|\mathfrak{F}_s]=I(f)(s)$. This means that $\{I(f)(t)\}_{t\ge 0}$ is an $\mathfrak{F}_t$-martingale. Note that
$E[|I(f)(t)|^2]=E\int_0^t|f(s,\omega)|^2ds.$   (2.3)
Hence $I(f)(t)$ is uniquely determined by $f$ and is independent of the particular choice of $\{t_i\}$. $\blacksquare$
Now suppose that $f\in\mathcal{L}^2$. Since $\{w_t\}_{t\ge 0}$ is nowhere differentiable, we cannot define $\int_0^t f(u,\omega)dw_u(\omega)$ pathwise for each or a.s. $\omega\in\Omega$. However, by (2.3) one sees that for $f\in L_0$, and for each given $T>0$, $\{f_t\}_{t\in[0,T]}$ has a norm $\|f\|_{2,T}^2=E\int_0^T f^2(u,\omega)du=E[I(f)(T)^2]$. So $\{f(t)\}_{t\in[0,T]}\in\mathcal{L}_T^2$ is in 1 to 1 correspondence with
$\{I(f)(t)\}_{t\in[0,T]}\in\mathcal{M}_T^{2,c}$, and both with the same norm. This motivates us to define $\{I(f)(t)\}_{t\in[0,T]}\in\mathcal{M}_T^{2,c}$ for $f\in\mathcal{L}^2$ through the limit of $\{I(f_n)(t)\}_{t\in[0,T]}\in\mathcal{M}_T^{2,c}$, where $\{f_n\}\subset L_0$, if $\|f_n-f\|_{2,T}\to 0$.
That is exactly what Ito's integral defines. Let us make it more precise. For any $f\in\mathcal{L}^2$, since $L_0$ is dense in $\mathcal{L}^2$ with metric $\|\cdot\|_2$ (Lemma 73), $\exists f_n\in L_0$ such that $\|f_n-f\|_2\to 0$, as $n\to\infty$. So $\|I(f_n)-I(f_m)\|_2=\|f_n-f_m\|_2\to 0$, as $n,m\to\infty$. This means that $\{I(f_n)\}_{n=1}^\infty\subset\mathcal{M}^{2,c}$ is a Cauchy sequence. However, $\mathcal{M}^{2,c}$ is complete under the metric $\|\cdot\|_2$ (Lemma 63). Therefore there exists a unique limit, denoted by $I(f)$, belonging to $\mathcal{M}^{2,c}$. Let us show that $I(f)$ is uniquely determined by $f$ and is independent of the particular choice of $\{f_n\}_{n=1}^\infty\subset L_0$.
2. Brownian Motion, Stochastic Integral and Ito's Formula
In fact, let there be two sequences $\{f_n\}_{n=1}^\infty,\{f_n'\}_{n=1}^\infty\subset L_0$ such that both
$\|f_n-f\|_2\to 0,\quad\|f_n'-f\|_2\to 0,$ as $n\to\infty$.
Construct a new sequence $\{g_n\}_{n=1}^\infty\subset L_0$ such that $g_{2n}=f_n$, $g_{2n+1}=f_n'$. Then one still has $\|g_n-f\|_2\to 0$, and $\|I(g_n)-I(g_m)\|_2=\|g_n-g_m\|_2\to 0$, as $n,m\to\infty$. So the limits should satisfy
$\lim_{n\to\infty}I(f_n)=\lim_{n\to\infty}I(f_n')=\lim_{n\to\infty}I(g_n)=I(f).$
Definition 76 $I(f)\in\mathcal{M}^{2,c}$ defined above is called the stochastic integral or the Ito integral of $f\in\mathcal{L}^2$ with respect to a BM $\{w(t)\}_{t\ge 0}$, and it is denoted by $I(f)(t)=\int_0^t f(s)dw(s)=\int_0^t f(s,\omega)dw(s,\omega)$.
Beware of the fact that the integral is not defined pathwise. So, actually,
$I(f)(t)(\omega_0)=(\int_0^t f(s)dw(s))(\omega_0)=(\int_0^t f(s,\omega)dw(s,\omega))(\omega_0),\ P\text{-a.s.}\ \omega_0.$
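For a step integrand the Ito integral of Definition 76 is a finite sum, and the isometry (2.3) can be checked by simulation. In the sketch below (all names and parameters are illustrative assumptions) the coefficient $f_i$ depends only on the past value $w_{t_i}$, as $\mathfrak{F}_{t_i}$-measurability requires:

```python
# Sketch of the L_0 construction: I(f)(T) = sum_i f_i (w_{t_{i+1}} - w_{t_i})
# with f_i measurable w.r.t. the past; check E[I(f)(T)] = 0 and the isometry
# E[I(f)(T)^2] = E int_0^T f^2 ds.
import random

def ito_step_integral(f_of_past, grid, rng):
    """One sample of (I(f)(T), int_0^T f^2 ds) with f_i = f_of_past(w_{t_i})."""
    w, integral, quad = 0.0, 0.0, 0.0
    for t0, t1 in zip(grid, grid[1:]):
        fi = f_of_past(w)                      # uses only information up to t_i
        dw = rng.gauss(0.0, (t1 - t0) ** 0.5)  # independent BM increment
        integral += fi * dw
        quad += fi * fi * (t1 - t0)
        w += dw
    return integral, quad

rng = random.Random(6)
grid = [i / 10 for i in range(11)]             # T = 1, ten steps
n_paths = 20000
m1 = m2 = q = 0.0
for _ in range(n_paths):
    i_f, qv = ito_step_integral(lambda w: 1.0 if w > 0 else -1.0, grid, rng)
    m1 += i_f
    m2 += i_f * i_f
    q += qv
print(round(m1 / n_paths, 3), round(m2 / n_paths, 2), round(q / n_paths, 2))
```

The sample mean is near zero (martingale property) and the second moment matches $E\int_0^T f^2\,ds$ (the isometry).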
Proposition 77 (i) All conclusions in Proposition 75 still hold for $\forall f\in\mathcal{L}^2$.
(ii) More generally, if $\tau\ge\sigma$ are both stopping times, then $\forall t>0$, $\forall f\in\mathcal{L}^2$,
$E[(I(f)(t\wedge\tau)-I(f)(t\wedge\sigma))|\mathfrak{F}_\sigma]=0,$ a.s.,
$E[(I(f)(t\wedge\tau)-I(f)(t\wedge\sigma))^2|\mathfrak{F}_\sigma]=E[\int_{t\wedge\sigma}^{t\wedge\tau}f^2(u,\omega)du|\mathfrak{F}_\sigma],$ a.s.
(iii) Furthermore, $\forall f,g\in\mathcal{L}^2$, $\forall t>s\ge 0$, and for all stopping times $\tau\ge\sigma$,
$E[(I(f)(t)-I(f)(s))(I(g)(t)-I(g)(s))|\mathfrak{F}_s]=E[\int_s^t(f\cdot g)(u,\omega)du|\mathfrak{F}_s],$
$E[(I(f)(t\wedge\tau)-I(f)(t\wedge\sigma))(I(g)(t\wedge\tau)-I(g)(t\wedge\sigma))|\mathfrak{F}_\sigma]=E[\int_{t\wedge\sigma}^{t\wedge\tau}(f\cdot g)(u,\omega)du|\mathfrak{F}_\sigma],$ a.s.
(iv) For any stopping time $\sigma$, $\forall f\in\mathcal{L}^2$,
$I(f)(t\wedge\sigma)=I(f')(t),\ \forall t\ge 0,$
where $f'(t,\omega)=I_{t\le\sigma(\omega)}f(t,\omega)$.

Proof. (i) is true for $\forall f\in L_0$, and so is true for $\forall f\in\mathcal{L}^2$ through the limits.
(ii): Since $\{I(f)(t)\}_{t\ge 0}$ is an $\mathfrak{F}_t$-martingale, by Doob's stopping time theorem the first conclusion of (ii) holds true. On the other hand, by (i), for each $t>s\ge 0$, $\forall f\in\mathcal{L}^2$,
$E[I(f)(t)^2-I(f)(s)^2|\mathfrak{F}_s]=E[(I(f)(t)-I(f)(s))^2|\mathfrak{F}_s]=E[\int_s^t f^2(u,\omega)du|\mathfrak{F}_s].$   (2.4)
Thus $\{I(f)(t)^2-\int_0^t f^2(u,\omega)du\}_{t\ge 0}$ is also an $\mathfrak{F}_t$-martingale, and by Doob's stopping time theorem, (2.4) still holds true when $t$ and $s$ are substituted by the stopping times $t\wedge\tau$ and $t\wedge\sigma$, respectively. So (ii) is proved.
(iii): The first conclusion is true for $f,g\in L_0$, and so is for $f,g\in\mathcal{L}^2$ through the limits. The second conclusion follows from the first one through Doob's stopping time theorem, as in the proof of (ii). Finally, let us establish (iv).
2.4 Ito's Integrals on $\mathcal{L}^{2,loc}$
We still show it first for $f\in L_0$. This can be done by evaluation using the standard discretization of the approximation to the stopping time $\sigma$, followed by taking the limit. In fact, suppose that
$f(t,\omega)=\varphi_0(\omega)I_{\{t=0\}}(t)+\sum_{i=0}^\infty\varphi_i(\omega)I_{(t_i,t_{i+1}]}(t).$
Introduce $\{s_l\}_{l=0}^\infty$, which is the refinement of the subdivisions $\{t_i\}_{i=0}^\infty$, and re-express $f$ on this partition as
$f(t,\omega)=\varphi_0(\omega)I_{\{t=0\}}(t)+\sum_{l=0}^\infty\varphi_l'(\omega)I_{(s_l,s_{l+1}]}(t),$
where $\varphi_l'(\omega)=\varphi_j(\omega)$, as $t_j<s_l\le t_{j+1}$; and make a standard discretization of the approximation to the stopping time $\sigma$ as follows: let $\sigma_n(\omega)=s_{l+1}^n$, if $\sigma(\omega)\in(s_l^n,s_{l+1}^n]$. Then, as in the proof of Theorem 35, for each $n$, $\sigma_n$ is a discrete $\mathfrak{F}_t$-stopping time only valued in $\{s_l^n\}_{l=0}^\infty$, and $\sigma_n\downarrow\sigma$, as $n\uparrow\infty$. So, if we let $f_n'(s,\omega)=I_{s\le\sigma_n(\omega)}f(s,\omega)$, then $f_n'\in L_0$. In fact, $\exists\{s_l^n\}_{l=0}^\infty$ such that
$f_n'(s,\omega)=\varphi_0(\omega)I_{\{s=0\}}(s)+\sum_{l=0}^\infty\varphi_l^{n}(\omega)I_{(s_l^n,s_{l+1}^n]}(s),$
where $\varphi_l^{n}(\omega)=\varphi_l'(\omega)$, if $s\in(s_l^n,s_{l+1}^n]$ and $s\le\sigma_n(\omega)$; $\varphi_l^{n}(\omega)=0$, if $s\in(s_l^n,s_{l+1}^n]$ and $s>\sigma_n(\omega)$. Obviously, $\varphi_l^{n}(\omega)\in\mathfrak{F}_{s_l^n}$, since $\{\sigma_n(\omega)\le s_l^n\}\in\mathfrak{F}_{s_l^n}$ and $\varphi_l'(\omega)\in\mathfrak{F}_{s_l^n}$. Now by evaluation we are going to show that
1) $I(f_n')(t)=I(f)(t\wedge\sigma_n)$;
2) $\|I(f_n')-I(f')\|_{2,t}\to 0$, as $n\to\infty$.
If these results can be established, then (iv) is proved for $f\in L_0$. However, as $n\to\infty$,
$\|I(f_n')-I(f')\|_{2,t}^2=\|f_n'-f'\|_{2,t}^2=E\int_0^t f^2(s,\omega)I_{(\sigma(\omega),\sigma_n(\omega)]}(s)ds\to 0.$
Thus, 2) is proved. Notice that if $s\in(s_l^n,s_{l+1}^n]$, then $s\le\sigma_n(\omega)$ if and only if $\sigma(\omega)>s_l^n$. Then, after establishing the 1 to 1 correspondence between the space $\mathcal{L}^2$ and $\mathcal{M}^{2,c}$ with the same metric, for each $f\in\mathcal{L}^2$ we can take a sequence $\{f_n\}_{n=1}^\infty\subset L_0$ which tends to $f$ in $\mathcal{L}^2$. So the corresponding sequence of integrals $\{\int_0^t f_n(s,\omega)dw_s\}_{n=1}^\infty\subset\mathcal{M}^{2,c}$ will also tend to a limit in $\mathcal{M}^{2,c}$, which we denote by $\int_0^t f(s,\omega)dw_s$, and define it to be the stochastic integral for $f$. Note that for a BM $\{w_t\}_{t\ge 0}$ we have that $w_t^2=$ a martingale $+\,t$, $\forall t\ge 0$, and we establish a one to one correspondence as follows: for each $T>0$,
$\{f(t)\}_{t\in[0,T]}\in\mathcal{L}_T^2\ \longleftrightarrow\ \{\int_0^t f(s,\omega)dw_s\}_{t\in[0,T]}\in\mathcal{M}_T^{2,c},$
with the same norm $\|f\|_{2,T}=[E\int_0^T f^2(s,\omega)ds]^{1/2}$. Now for $\{M_t\}_{t\ge 0}\in\mathcal{M}^2$ we want to do the same thing. So first we need a D--M decomposition for its square. For simplicity we discuss the 1-dimensional processes.
Proposition 81 1) If $\{M_t\}_{t\ge 0}\in\mathcal{M}^2$, then $\{M_t^2\}_{t\ge 0}$ has a unique D--M decomposition as follows:
$M_t^2=$ a martingale $+\langle M\rangle_t,$
where $\langle M\rangle_t$ is a natural (predictable) integrable increasing process, and it is called the (predictable) characteristic process for $M_t$.
2) If $\{M_t\}_{t\ge 0},\{N_t\}_{t\ge 0}\in\mathcal{M}^2$, then $\{M_tN_t\}_{t\ge 0}$ has a unique decomposition (it may still be called the D--M decomposition) as follows:
$M_tN_t=$ a martingale $+\langle M,N\rangle_t,$
where $\langle M,N\rangle_t$ is a natural (predictable) integrable finite variational process, i.e. it is the difference of two natural (predictable) integrable increasing processes, and it is called the cross (predictable) characteristic process (or (predictable) quadratic variational $\mathfrak{F}_t$-adapted process) for $M_t$ and $N_t$.
3) If $\{M_t\}_{t\ge 0},\{N_t\}_{t\ge 0}\in\mathcal{M}^{2,loc}$, then (i) $\exists\sigma_n\uparrow\infty$, where $\sigma_n$ is a stopping time for each $n$, such that $\{M_{t\wedge\sigma_n}\}_{t\ge 0},\{N_{t\wedge\sigma_n}\}_{t\ge 0}\in\mathcal{M}^2$ for each $n$; (ii) there exists a unique predictable process $\{\langle M,N\rangle_t\}_{t\ge 0}$ such that $\langle M,N\rangle_{t\wedge\sigma_n}=\langle M^{\sigma_n},N^{\sigma_n}\rangle_t$, $\forall n$ and $\forall t\ge 0$, where we write $M_t^{\sigma_n}=M_{t\wedge\sigma_n}$ and $N_t^{\sigma_n}=N_{t\wedge\sigma_n}$.

Proof. 1): By Jensen's inequality $\{M_t^2\}_{t\ge 0}$ is a submartingale. Since it is also non-negative, it is of class (DL). So by the D--M decomposition theorem we arrive at 1).
2): Note that in this case $\{M_1(t)\}_{t\ge 0},\{M_2(t)\}_{t\ge 0}\in\mathcal{M}^2$, where $M_1(t)=\frac{M_t+N_t}{2}$, $M_2(t)=\frac{M_t-N_t}{2}$. Hence by the D--M decomposition theorem
$M_1(t)^2=$ a martingale $+A_1(t),\quad M_2(t)^2=$ a martingale $+A_2(t),$
where $A_1(t)$ and $A_2(t)$ both are natural (predictable) integrable increasing processes. So
$M_tN_t=M_1(t)^2-M_2(t)^2=$ a martingale $+A_1(t)-A_2(t).$
Let us show the uniqueness of this decomposition. In fact, if there are two D--M decompositions for $M_tN_t$:
$M_tN_t=\tilde M_t+A_t,\quad M_tN_t=\tilde M_t'+A_t',$
where $\tilde M_t$ and $\tilde M_t'$ are martingales, and $A_t=A_{1t}-A_{2t}$, $A_t'=A_{1t}'-A_{2t}'$ such that all $A_{1t},A_{2t},A_{1t}',A_{2t}'$ are natural (predictable) integrable increasing processes, then
$\tilde M_t-\tilde M_t'=A_{1t}'+A_{2t}-A_{1t}-A_{2t}'.$
However, by the D--M decomposition theorem we must have $\tilde M_t-\tilde M_t'=0$ and $A_{1t}+A_{2t}'=A_{1t}'+A_{2t}$, since $A_{1t}'+A_{2t}$ is also a submartingale of class (DL). So $\tilde M_t=\tilde M_t'$, $A_t=A_t'$.
3): (i) is true by the definition of $\mathcal{M}^{2,loc}$. (ii) We only need to show that the equality in (ii) is well defined. In fact, if $m>n$, then $\langle M^{\sigma_n},N^{\sigma_n}\rangle_t=\langle M^{\sigma_m},N^{\sigma_m}\rangle_{t\wedge\sigma_n}$. So it is true. $\blacksquare$
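For the BM itself the characteristic process of Proposition 81 is explicit: $\langle w\rangle_t=t$, and the sum of squared increments over a fine grid approximates it pathwise. A sketch (grid and horizon are illustrative):

```python
# Sketch of <w>_t = t: for one BM path, the sum of squared grid increments
# sum (w_{t_{i+1}} - w_{t_i})^2 concentrates around t as the mesh shrinks.
import random

rng = random.Random(7)
t, n_steps = 2.0, 20000
dt = t / n_steps
qv = sum(rng.gauss(0.0, dt ** 0.5) ** 2 for _ in range(n_steps))
print(round(qv, 2))                    # close to t = 2.0
```

This single-path concentration is why $w_t^2-t$ is a martingale while $w_t^2$ is not.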
For the continuity of $\langle M,N\rangle_t$ we have the following proposition.

Proposition 82 Any one of the following conditions makes $\langle M,N\rangle_t$ continuous in $t$:
(i) $\{\mathfrak{F}_t\}_{t\ge 0}$ is continuous in time, i.e. if $\sigma_n\uparrow\sigma$ and they are all stopping times, then $\mathfrak{F}_\sigma=\bigvee_n\mathfrak{F}_{\sigma_n}$;
(ii) $M,N\in\mathcal{M}^{2,c}$.

Proof. (i): In this case by Levi's theorem $M_{t\wedge\sigma_n}=E[M_t|\mathfrak{F}_{t\wedge\sigma_n}]\to E[M_t|\mathfrak{F}_{t\wedge\sigma}]=M_{t\wedge\sigma}$, as $\sigma_n\uparrow\sigma$. However, since $0\le M_{t\wedge\sigma_n}^2\le E[M_t^2|\mathfrak{F}_{t\wedge\sigma_n}]$, by Corollary 27 $\{M_{t\wedge\sigma_n}^2\}_{n=1}^\infty$ is uniformly integrable. Hence $\lim_{n\to\infty}EM_{t\wedge\sigma_n}^2=EM_{t\wedge\sigma}^2$, i.e. $\{M_t^2\}_{t\ge 0}$ is regular. Similarly, $\{N_t^2\}_{t\ge 0}$ is regular. Moreover, by the same token, one also has that $M_{t\wedge\sigma_n}+N_{t\wedge\sigma_n}\to M_{t\wedge\sigma}+N_{t\wedge\sigma}$, as $\sigma_n\uparrow\sigma$. Hence $\{(M_t+N_t)^2\}_{t\ge 0}$ is also regular. However, one easily sees that
$\langle M+N\rangle_t=\langle M\rangle_t+\langle N\rangle_t+2\langle M,N\rangle_t.$
Thus the continuity of $\langle M,N\rangle_t$ follows from the continuity of the other three, because they are all regular. (ii) is similarly proved. $\blacksquare$
For stochastic integrals with respect to the martingale $\{M_t\}_{t\ge 0}\in\mathcal{M}^2$, we need to introduce the space of integrand processes, as in the case with respect to the BM $\{w_t\}_{t\ge 0}\in\mathcal{M}^{2,c}$.
Definition 83 1) Write
$\mathcal{L}_M^2=\{\{f(t,\omega)\}_{t\ge 0}:$ it is $\mathfrak{F}_t$-predictable such that $\forall T>0$, $(\|f\|_{2,T}^M)^2=E\int_0^T f^2(t,\omega)d\langle M\rangle_t<\infty\}.$
2.5 Stochastic Integrals with respect to Martingales
For $f=\{f(t,\omega)\}_{t\ge 0}\in\mathcal{L}_M^2$ set
$\|f\|_2^M=\sum_{n=1}^\infty\frac{1}{2^n}(\|f\|_{2,n}^M\wedge 1).$
2) $\mathcal{L}_M^{2,loc}=\{\{f(t,\omega)\}_{t\ge 0}:$ it is $\mathfrak{F}_t$-predictable such that $\exists\sigma_n\uparrow\infty$, where $\sigma_n$ is an $\mathfrak{F}_t$-stopping time for each $n$, and $E\int_0^{T\wedge\sigma_n}f^2(t,\omega)d\langle M\rangle_t<\infty$, $\forall T>0,\forall n\}$.
3) $L_0$ is defined the same as in Definition 72.

Note that if $E\int_0^{T\wedge\sigma_n}f^2(t,\omega)d\langle M\rangle_t<\infty$, $\forall T>0,\forall n$, then $P$-a.s. $\int_0^{T\wedge\sigma_n\wedge N}f^2(t,\omega)d\langle M\rangle_t<\infty$, $\forall n,\forall N=1,2,\ldots$. Therefore, $\int_0^T f^2(t,\omega)d\langle M\rangle_t<\infty$, $\forall T>0$, $P$-a.s. In general, the converse is not necessarily true. However, if $\langle M\rangle_t$ is continuous in $t$, then the converse is also true. Reasoning almost completely in the same way as in Lemma 73, one arrives at the following lemma.
Lemma 84 $L_0$ is dense in $\mathcal{L}_M^2$ with respect to the metric $\|\cdot\|_2^M$.
Now we can define the stochastic integral $\int_0^t f(s,\omega)dM_s$ with respect to $\{M_t\}_{t\ge 0}$, first for $f\in L_0$, then for $f\in\mathcal{L}_M^2$, and finally for $f\in\mathcal{L}_M^{2,loc}$, in completely the same way as when defining $\int_0^t f(s,\omega)dw_s$. However, we would like to define it in another way, even if it is more abstract and different, because it is then faster and easier to show all of its properties.

Definition 85 For $M\in\mathcal{M}^{2,loc}$ and $f\in\mathcal{L}_M^{2,loc}$ (or $M\in\mathcal{M}^2$ and $f\in\mathcal{L}_M^2$), if $X=\{x_t\}_{t\ge 0}\in\mathcal{M}^{2,loc}$ ($X=\{x_t\}_{t\ge 0}\in\mathcal{M}^2$) satisfies
$\langle X,N\rangle_t=\int_0^t f(s,\omega)d\langle M,N\rangle_s,$   (2.5)
$\forall N\in\mathcal{M}^{2,loc}$ ($N\in\mathcal{M}^2$), $\forall t\ge 0$, then set $x_t=I_M(f)(t)$, and call it the stochastic integral of $f$ with respect to the martingale $M$.

In the rest of this section we always assume that $M\in\mathcal{M}^{2,loc}$. First let us show the uniqueness of $X\in\mathcal{M}^{2,loc}$ in Definition 85. In fact, if there is another $X'\in\mathcal{M}^{2,loc}$ such that (2.5) holds, then $\langle X-X',N\rangle=0$, $\forall N\in\mathcal{M}^{2,loc}$. Hence by taking $N=X-X'$ one finds that $X=X'$. Secondly, we need to show that such a definition is equivalent to the usual one, which was explained before this definition.

Proposition 86 If $f\in\mathcal{L}_M^2$ is a stochastic step function, i.e. $f\in\mathcal{L}_M^2$, and $\exists\sigma_n\uparrow\infty$, $\sigma_0=0$, where $\sigma_n$ is an $\mathfrak{F}_t$-stopping time for each $n$, such that
$f(t,\omega)=f_0(\omega)I_{\{t=0\}}(t)+\sum_{n=0}^\infty f_n(\omega)I_{(\sigma_n,\sigma_{n+1}]}(t),$ where $f_n\in\mathfrak{F}_{\sigma_n}$, then
$I_M(f)(t)=\sum_{n=0}^\infty f_n(\omega)(M_{t\wedge\sigma_{n+1}}-M_{t\wedge\sigma_n}).$
Proof. In fact, $\forall N\in\mathcal{M}^{2,loc}$,
$\langle I_M(f),N\rangle_t=\langle\sum_{n=0}^\infty f_n(\omega)[M^{\sigma_{n+1}}-M^{\sigma_n}],N\rangle_t=\sum_{n=0}^\infty f_n(\omega)(\langle M^{\sigma_{n+1}},N\rangle_t-\langle M^{\sigma_n},N\rangle_t)$
$=\sum_{n=0}^\infty f_n(\omega)(\langle M^{\sigma_{n+1}},N^{\sigma_{n+1}}\rangle_t-\langle M^{\sigma_n},N^{\sigma_n}\rangle_t)=\int_0^t f(s,\omega)d\langle M,N\rangle_s.\ \blacksquare$

By this proposition, if $f\in\mathcal{L}_M^2$, $f(t)=f_0(\omega)I_{\{t=0\}}(t)+\sum_{n=0}^\infty f_n(\omega)I_{(t_n,t_{n+1}]}(t)$, where $0=t_0<t_1<\cdots<t_n\to\infty$, $f_n(\omega)\in\mathfrak{F}_{t_n}$, and all $f_n$ are bounded, then
$I_M(f)(t)=\sum_{n=0}^\infty f_n(\omega)(M_{t\wedge t_{n+1}}-M_{t\wedge t_n}).$
That is just the usual way to define the stochastic integral $\int_0^t f(s,\omega)dM_s\in\mathcal{M}^2$ for $f\in L_0$. Following this usual way, for $f\in\mathcal{L}_M^2$ one can take a sequence $\{f_n\}\subset L_0$ such that $\|f_n-f\|_2^M\to 0$. So there exists a limit $\int_0^t f(s,\omega)dM_s\in\mathcal{M}^2$ such that $\|\int_0^\cdot f(s,\omega)dM_s-\int_0^\cdot f_n(s,\omega)dM_s\|_2^M\to 0$. Therefore the stochastic integral $\int_0^t f(s,\omega)dM_s\in\mathcal{M}^2$ is also defined for $f\in\mathcal{L}_M^2$. Let us show that it is equal to the stochastic integral $I_M(f)(t)$ defined in Definition 85. To show this we need to use the Kunita--Watanabe inequality (see the lemma below). In fact, by this inequality, $\forall N\in\mathcal{M}^2$,
$|\langle I_M(f_n)-I_M(f),N\rangle_t|\le\int_0^t|(f_n-f)(s,\omega)|\,d\,|\langle M,N\rangle|_s\le\Big(\int_0^t|(f_n-f)(s,\omega)|^2d\langle M\rangle_s\Big)^{1/2}(\langle N\rangle_t)^{1/2}.$
Hence $E|\langle I_M(f_n)-I_M(f),N\rangle_t|\le(E\langle N\rangle_t)^{1/2}\|f_n-f\|_{2,t}^M\to 0$. On the other hand, one also has that
$\langle\int_0^\cdot f(s,\omega)dM_s-\int_0^\cdot f_n(s,\omega)dM_s,N\rangle_t=\langle\int_0^\cdot(f-f_n)(s,\omega)dM_s,N\rangle_t=\int_0^t(f-f_n)(s,\omega)d\langle M,N\rangle_s.$
So in the same way we can show that the left hand side of the above tends to zero as $n\to\infty$. Since by Proposition 86 $\langle I_M(f_n),N\rangle_t=\langle\int_0^\cdot f_n(s,\omega)dM_s,N\rangle_t$, we have $\langle I_M(f),N\rangle_t=\langle\int_0^\cdot f(s,\omega)dM_s,N\rangle_t$, $\forall N\in\mathcal{M}^2$. Therefore $I_M(f)(t)=\int_0^t f(s,\omega)dM_s$, $\forall f\in\mathcal{L}_M^2$. Furthermore, we can also easily obtain the same equality for all $f\in\mathcal{L}_M^{2,loc}$.
Lemma 87 (Kunita--Watanabe's inequality). If $M,N\in\mathcal{M}^2$, $f\in\mathcal{L}_M^2$, $g\in\mathcal{L}_N^2$, then
$\int_0^t|f(s,\omega)g(s,\omega)|\,d\,|\langle M,N\rangle|_s\le\Big(\int_0^t f^2(s,\omega)d\langle M\rangle_s\Big)^{1/2}\Big(\int_0^t g^2(s,\omega)d\langle N\rangle_s\Big)^{1/2},$ a.s.

Proof. First, we see that $\forall\lambda\in R$, ...

$x_t^n=x_0+M_t+A_t+\int_0^{t+}\int_Z g_n(s,z,\omega)\tilde N_p(ds,dz)+\int_0^{t+}\int_Z f_n(s,z,\omega)N_p(ds,dz),$
where $f_n(s,z,\omega)=f(s,z,\omega)I_{U_n}(z)$, and $g_n$ is similarly defined. For the pure jump term (the last term) the counting measure only counts the number of the points $p(s)=z$ falling in $U_n$. Since
$EN_p((0,t],U_n)=E(\#\{0<s\le t:p(s)\in U_n\})<\infty,$
where $\#$ means the number of points counted in the set $\{\cdot\}$, which we introduced in Chapter 1, so $\#\{0<s\le t:p(s)\in U_n\}<\infty$, a.s. That is to say, for a.s. $\omega$, there are only finitely many points $s\in(0,t]$ such that $p(s,\omega)\in U_n$. So we can denote these points by $0<\sigma_1(\omega)<\cdots<\sigma_m(\omega)<\cdots$. Let us show that for each $m$, $\sigma_m$ is a stopping time. In fact, $\{\sigma_m(\omega)\le t\}=\{N_p((0,t],U_n)\ge m\}\in\mathfrak{F}_t$. For convenience we also write $\sigma_0=0$. Then
$x_t^n=x_0+M_t+A_t-\int_0^{t}\int_Z g_n(s,z,\omega)\hat N_p(ds,dz)+\sum_{\sigma_m\le t}(f_n+g_n)(\sigma_m(\omega),p(\sigma_m(\omega),\omega),\omega),$
$F(x_t^n)-F(x_0^n)=\sum_m[F(x_{\sigma_m\wedge t}^n)-F(x_{\sigma_m\wedge t-}^n)]+\sum_m[F(x_{\sigma_m\wedge t-}^n)-F(x_{\sigma_{m-1}\wedge t}^n)]=I_1(t)+I_2(t).$
Note that $F(x_{\sigma_m\wedge t-}^n)=F(x_{\sigma_m-}^n)$, as $\sigma_m\le t$; and $F(x_{\sigma_m\wedge t-}^n)=F(x_t^n)$, as $\sigma_{m-1}<t<\sigma_m$. Since, as $s\in(\sigma_{m-1}\wedge t,\sigma_m\wedge t)$, $x_s^n$ has no jumps, we can apply Ito's formula for the continuous semi-martingale to obtain
$F(x_{\sigma_m\wedge t-}^n)-F(x_{\sigma_{m-1}\wedge t}^n)=\int_{\sigma_{m-1}\wedge t}^{\sigma_m\wedge t}F'(x_s^n)dA_s+\int_{\sigma_{m-1}\wedge t}^{\sigma_m\wedge t}F'(x_s^n)dM_s+\cdots$

$\int_0^{t+}\int_Z g_n(s,z,\omega)\tilde N_p(ds,dz)\rightrightarrows\int_0^{t+}\int_Z g(s,z,\omega)\tilde N_p(ds,dz),$
on any $[0,T]$, where "$\rightrightarrows$" means uniform convergence. Similarly,
$\int_0^{t+}\int_Z f_n(s,z,\omega)N_p(ds,dz)\rightrightarrows\int_0^{t+}\int_Z f(s,z,\omega)N_p(ds,dz),$
on any $[0,T]$. Hence $x_{t-}^n$ and $x_t^n$ converge uniformly to $x_{t-}$ and $x_t$ on any $[0,T]$, respectively. From this, letting $n\to\infty$, one easily obtains that (2.15) holds true.
2.7 Ito's Formula for Semi-Martingales with Jumps
For example, let us show that as n
-+
61
oo,
+
where hn(s, z, W) = F(x: gn(s, z, w)) - F(x:) h(s, z, w) is similarly defined; and
- F1(x:)gn(s, z, w), and
(2.18) where Xn(s, z, w) = F(x:- +gn(s, z, w)) - F ( x E ) , and x(s, z, w) is also similarly defined. In fact, Vs, z,and a.s. w, hn(s, z, w) -+ h(s, z, w). Moreover, since F E C t , Ihn(s, z, w)l i SUP% IF1'(x)l Ids, z,w)12 i z o I ~ ( s , ~ , w E3 ); I.~ Hence applying Lebesgue's dominated convergence theorem, one obtains (2.17). . , Notice that
I_2^n = E ∫_0^t ∫_Z |λ_n(s,z,w) − λ(s,z,w)|² N̂_p(ds,dz), and
|λ_n(s,z,w)|² ≤ sup_x |F'(x)|² |g(s,z,w)|².
So (2.18) is also similarly obtained. Thus (2.15) is true under the assumption that g ∈ F_p². However, for g ∈ F_p^{2,loc}, ∃σ_n ↑ ∞, where σ_n is a stopping time for each n such that I_{t≤σ_n} g(t,z,w) ∈ F_p². Hence one easily sees that (2.15) is true with t substituted by t∧σ_n, ∀t, ∀n. Letting n ↑ ∞, one derives (2.15), ∀t ≥ 0.

2) Now assume that F(x) ∈ C²(R). First let us show that each term on the right hand side of (2.16) has meaning. In fact, since {x_t}_{t≥0} is a right continuous with left limit (RCLL) process, {x_{t−}}_{t≥0} is a locally bounded process, i.e. ∃σ_n' ↑ ∞, where σ_n' is a stopping time for each n such that {x_{(t∧σ_n')−}}_{t≥0} is bounded, for each n. Indeed, let σ_n' = inf{t ≥ 0 : |x_t| > n}. Then |x_{(t∧σ_n')−}| ≤ n, ∀t ≥ 0. So ∫_0^{t∧σ_n'} F'(x_s) dM_s = ∫_0^{t∧σ_n'} F'(x_{s−}) dM_s makes sense, and so also does ∫_0^t F'(x_{s−}) dM_s. On the other hand, since g ∈ F_p^{2,loc},
∫_0^{t∧σ_n} ∫_Z |g(s,z,w)|² N_p(ds,dz) < ∞, a.s.,
because by 1)
E ∫_0^{t∧σ_n} ∫_Z |g(s,z,w)|² N_p(ds,dz) = E ∫_0^{t∧σ_n} ∫_Z |g(s,z,w)|² N̂_p(ds,dz) < ∞, ∀n.
Notice that for each w ∈ Ω, {x_s(w)}_{s∈[0,t]} is RCLL, so it is bounded; that is, ∃k_0(w) such that |x_s(w)| ≤ k_0(w), ∀s ∈ [0,t]. Hence ∃k̃_0(w) such that |F'(x)| + |F''(x)| ≤ k̃_0(w), as x ∈ [−k_0(w), k_0(w)]. Thus
Σ_{s∈D_p, s≤t} |F(x_{s−} + g(s,p(s),w)) − F(x_{s−}) − F'(x_{s−}) g(s,p(s),w)| ≤ k̃_0(w) Σ_{s∈D_p, s≤t} |g(s,p(s),w)|² < ∞.
Similarly, by
Σ_{s∈D_p, s≤t} |f(s,p(s),w)| = ∫_0^{t+} ∫_Z |f(s,z,w)| N_p(ds,dz) < ∞,
one has that
Σ_{s∈D_p, s≤t} |F(x_{s−} + f(s,p(s),w)) − F(x_{s−})| ≤ k̃_0(w) Σ_{s∈D_p, s≤t} |f(s,p(s),w)| < ∞.
Therefore one easily deduces that the right hand side of (2.16) makes sense, and so also does (2.15). Now let W_N(x) ∈ C^∞(R) be such that W_N(x) = 1, as |x| ≤ N; W_N(x) = 0, as |x| > N+2; and |W_N'(x)| ≤ 1, |W_N''(x)| ≤ 1, ∀x ∈ R, where W_N' and W_N'' are the first and second derivatives, respectively. Set F_N(x) = F(x) W_N(x). Then |F_N(x)| ≤ |F(x)|, |F_N'(x)| ≤ |F'(x)| + |F(x)|, |F_N''(x)| ≤ |F''(x)| + 2|F'(x)| + |F(x)|. Moreover, F_N(x) = F_N'(x) = F_N''(x) = 0, as |x| > N+2. Hence for each N, F_N(x) ∈ C_b²(R). Thus by 1) Ito's formula (2.16) holds true for F_N. Letting N ↑ ∞, by Lebesgue's dominated convergence theorem one easily obtains that (2.16) still holds for F. ∎
Let λ > 0, λ_i > 0, i = 1,...,m; and U_i ∈ B_Z, i = 1,2,...,m, be such that N_p([0,t], U_i) < ∞, ∀i, and U_i ∩ U_j = ∅, as i ≠ j. Let F(x, y¹,...,y^m) = e^{iλx} e^{−Σ_{i=1}^m λ_i y^i}. Apply Ito's formula to the (1+m)-dimensional semi-martingale (x_t, y_t¹,...,y_t^m), where y_t^i = N_p((0,t], U_i) = ∫_0^{t+} ∫_Z I_{U_i}(z) N_p(dt,dz), i = 1,...,m, and denote N_t = (N_p((0,t],U_1),...,N_p((0,t],U_m)); then
ΔF = F(x_t, y_t¹,...,y_t^m) − F(x_s, y_s¹,...,y_s^m)
= ∫_s^t iλ e^{iλx_u} e^{−Σ_{i=1}^m λ_i y_u^i} dM_u − (λ²/2) ∫_s^t e^{iλx_u} e^{−Σ_{i=1}^m λ_i y_u^i} du
+ ∫_s^{t+} ∫_Z [F(x_u, N_{u−} + I_{U_i}(z)) − F(x_u, N_{u−})] Ñ_p(du,dz)
+ ∫_s^{t+} ∫_Z [F(x_u, N_{u−} + I_{U_i}(z)) − F(x_u, N_{u−})] N̂_p(du,dz).
However,
F(x_u, N_{u−} + I_{U_i}(z)) − F(x_u, N_{u−}) = e^{iλx_u} e^{−Σ_{i=1}^m λ_i N_p((0,u],U_i)} (e^{−Σ_{i=1}^m λ_i I_{U_i}(z)} − 1).
Therefore, ∀A ∈ F_s, multiplying all terms in the expression of ΔF by e^{−iλx_s} e^{Σ_{i=1}^m λ_i N_p((0,s],U_i)} I_A and taking the expectation, one finds that
E I_A e^{iλ(x_t−x_s)} e^{−Σ_{i=1}^m λ_i N_p((s,t],U_i)} − P(A)
= −(λ²/2) ∫_s^t E I_A V^s(u) du + ∫_s^t E I_A V^s(u) Σ_{i=1}^m (e^{−λ_i} − 1) N̂_p(du, U_i),
where V^s(u) = e^{iλ(x_u−x_s)} e^{−Σ_{i=1}^m λ_i N_p((s,u],U_i)}. Solving the corresponding ordinary differential equation, one obtains
E I_A e^{iλ(x_t−x_s)} e^{−Σ_{i=1}^m λ_i N_p((s,t],U_i)} = P(A) e^{−(λ²/2)(t−s)} e^{Σ_{i=1}^m (e^{−λ_i}−1) N̂_p((s,t],U_i)}.
The proof is complete. ∎
The above theorem is easily generalized to the d-dimensional case.
Theorem 98 Assume that {x_t}_{t≥0} is a d-dimensional F_t-semimartingale, where x_t = (x_t¹,...,x_t^d), and p_i, i = 1,2,...,n, are F_t-point processes of class (QL) on state spaces Z_i, i = 1,2,...,n, respectively. If
1) ⟨M^i, M^j⟩_t = δ_{ij} t; i,j = 1,2,...,d,
2) the compensator N̂_{p_i}(dt,dz) of p_i is a non-random σ-finite measure on [0,∞) × Z_i, i = 1,2,...,n; and the domains D_{p_i}(w), i = 1,2,...,n, are mutually disjoint, a.s.,
then {x_t}_{t≥0} is a d-dimensional F_t-BM, and p_i (i = 1,2,...,n) is an F_t-Poisson point process, and they are mutually independent.

Proof. The only thing we need to do is to combine all point processes p_i, i = 1,2,...,n, into one new point process p on a new space Z. Then we can prove (2.21) in exactly the same way as for x_t and p, only with λ ∈ R substituted by λ = (λ¹,...,λ^d) ∈ R^d, where we denote the inner product of two d-dimensional vectors λ and x by λ·x = Σ_{i=1}^d λ^i x^i. For this we introduce a space Z such that Z = ∪_{i=1}^n H_i, where H_i, i = 1,...,n, are disjoint and for each i, H_i is in one-to-one correspondence with Z_i. For simplicity let us identify H_i and Z_i, so we can set Z = ∪_{i=1}^n Z_i. Note that B_Z = ∪_{i=1}^n B_{Z_i}, and all B_{Z_i}, i = 1,2,...,n, are mutually disjoint. Now let D_p = ∪_{i=1}^n D_{p_i}, and set p(t) = p_i(t), as t ∈ D_{p_i}. Then we have a point process p on Z, and it is easy to see that p is a point process of the class (QL). Moreover, N̂_p(dt,dz) = Σ_{i=1}^n I_{Z_i}(z) N̂_{p_i}(dt,dz). Obviously, N̂_p(dt,dz) is a non-random σ-finite measure on [0,∞) × Z. Hence by the proof of Theorem 97 {x_t}_{t≥0} is a d-dimensional F_t-BM, and p is an F_t-Poisson point process, and they are independent. Note that for each i, p_i is also an F_t-Poisson point process by Theorem 97. However, we still need to prove that all p_i, i = 1,2,...,n, are mutually independent. In fact, if A_i ∈ B_{Z_i}, i = 1,...,n, then they are mutually disjoint. Moreover, {N_{p_i}((0,t_i] × A_i)}_{i=1}^n = {N_p((0,t_i] × A_i)}_{i=1}^n is an independent system of random variables, since p is a Poisson point process. The proof is complete. ∎
2.10 Some Examples

In Calculus, if x(t) = f(y(t)) = e^{y(t)}, f ∈ C¹, and y(t) is non-random, then
dx(t) = e^{y(t)} dy(t).
However, by Ito's formula we will see that some extra terms occur in the expression for dx(t) when y(t,w) is a random process.
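As a numerical sanity check (our own illustration, not from the book), the extra Ito term is visible in expectations: for x(t) = e^{w_t}, Ito's formula gives dx = (1/2)x dt + x dw, so E e^{w_t} = e^{t/2} rather than 1. A short Monte Carlo sketch in Python (sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
t = 1.0
n_paths = 200_000

# Sample w_t ~ N(0, t) directly and average exp(w_t).
w_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
mc_mean = float(np.exp(w_t).mean())

# Ito's formula predicts E exp(w_t) = exp(t/2), not exp(0) = 1.
predicted = float(np.exp(t / 2))
```

The Monte Carlo average matches e^{1/2} ≈ 1.6487, confirming the drift correction term.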
Example 99 Assume that {w_t}_{t≥0} is a d₁-dimensional BM, and Ñ_t is a stationary 1-dimensional centralized Poisson process with the compensator λt; that is, Ñ_t = N_t − λt, where λ > 0 is a constant, and N_t is a Poisson process such that EN_t = λt. Suppose that b(t,w) is an F_t-adapted R^{1⊗d₁}-valued process and c(t,w) is an F_t-predictable R¹-valued process such that for each T < ∞, E ∫_0^T [|b(t,w)|² + |c(t,w)|²] dt < ∞.

Similarly, one sees that
⟨w^{*i}, w^{*j}⟩_t = δ_{ij}(t + σ − σ) = δ_{ij} t
holds. Hence, by Theorem 98, {w_t^σ}_{t≥0} = {w_{t+σ} − w_σ}_{t≥0} is a standard BM independent of F_0^σ = F_σ. Moreover, {x_t^σ}_{t≥0} = {x_{t+σ}}_{t≥0} = {w_t^σ + x_σ}_{t≥0} is a d-dimensional F_t^σ = F_{t+σ}-BM with the initial random variable x_σ ∈ F_0^σ. The proof is complete. ∎

Theorem 101 If p is a stationary F_t-Poisson point process on some space Z with the characteristic measure π(dz), and σ is an {F_t}_{t≥0}-stopping time with P(σ < ∞) = 1, then p* = {p*(t)}_{t∈D_{p*}} = {p(t+σ)} is a stationary {F_{t+σ}}_{t≥0}-Poisson point process with the same characteristic measure π(dz).
Proof. The proof is similar to that of Theorem 100. Let σ_n be the bounded stopping time defined in Theorem 100. By Doob's stopping time theorem {N_p((σ_n, t+σ_n] × U) − tπ(U)}_{t≥0} is an {F_{t+σ_n}}_{t≥0}-martingale for each n, where U ∈ B_Z is such that EN_p((0,t] × U) < ∞, ∀t. Furthermore, by Levi's theorem and taking the limit as in the proof of Theorem 100, one easily sees that {N_p((σ, t+σ] × U) − tπ(U)}_{t≥0} is an {F_{t+σ}}_{t≥0}-martingale. So the proof is complete. ∎

By the previous two theorems one immediately sees that a standard BM and a stationary Poisson point process are both stationary strong Markov processes. That is to say, if {w_t}_{t≥0} is a d-dimensional F_t-standard BM, then for any F_t-stopping time σ with P(σ < ∞) = 1, it satisfies, ∀A ∈ B(R^d), ∀t > 0,
P(w_{t+σ} ∈ A | F_σ) = P(w_t ∈ A | F_0), a.s.
In fact, by Theorem 100 this is equivalent to
P(w_t^σ ∈ A | F_0^σ) = P(w_t ∈ A | F_0), i.e. P(w_t^σ ∈ A) = P(w_t ∈ A),
where {w_t^σ}_{t≥0} is an F_t^σ-BM. The last equality is obviously true. It is natural to define the strong Markov property of a stationary F_t-point process p as follows: if p satisfies, ∀t > 0, ∀k = 1,2,...,
P(N_p((0, t+σ] × U) = k | F_σ) = P(N_p((0,t] × U) = k | F_0), a.s.,
then p is called a strong Markov F_t-point process. Since the proof is the same, we omit it. Thus we arrive at the following corollary.

Corollary 102 The F_t-standard BM and F_t-stationary Poisson point processes are both stationary strong Markov processes.
2.12 Martingale Representation Theorem

In this section we are going to show that any square integrable F_t^{w,p}-martingale can be represented as a sum of an integral with respect to the BM {w_t}_{t≥0} and an integral with respect to the martingale measure Ñ_p(dt,dz) generated by the Poisson point process p. Here F_t^{w,p} is the smallest σ-field which makes all w_s, s ≤ t, and all Ñ_p((0,s],U), s ≤ t, U ∈ B_Z, measurable. Such a representation theorem is very useful in the mathematical financial market and in filtering problems. More precisely, we have the following theorem.

Theorem 103 (Martingale representation). Let m(t) be a square integrable R^d-valued F_t^{w,k}-martingale, where F_t^{w,k} is the σ-algebra generated (and completed) by {w_s, k_s, s ≤ t}, {w_t}_{t≥0} is a d₁-dimensional BM, and {k_t}_{t≥0} is a stationary d₂-dimensional Poisson point process of the class (QL) such that the components {k_t¹}_{t≥0},...,{k_t^{d₂}}_{t≥0} have disjoint domains and disjoint ranges. Then there exists a unique pair (q, p) ∈ L²_F(R^{d⊗d₁}) × F²_p(R^{d⊗d₂}) such that
m(t) = m(0) + ∫_0^t q_s dw_s + ∫_0^t ∫_Z p_s(z) Ñ_k(ds,dz).
Here we write
L²_F(R^{d⊗d₁}) = {f(t,w) : f(t,w) is F_t-adapted, R^{d⊗d₁}-valued, and such that E ∫_0^T |f(t,w)|² dt < ∞, for any T < ∞},
F²_p(R^{d⊗d₂}) = {f(t,z,w) : f(t,z,w) is R^{d⊗d₂}-valued, F_t-predictable, and such that E ∫_0^T ∫_Z |f(t,z,w)|² π(dz) dt < ∞, ∀T < ∞}.

The following two lemmas are useful and interesting.

Lemma 104 F_{t+}^{w,k} = F_t^{w,k}, ∀t ≥ 0.
Proof. Let
H_n(t_1,t_2,...,t_n; f_1,f_2,...,f_n) = H_{n−1}(t_1,t_2,...,t_{n−1}; f_1,f_2,...,f_{n−2}, f_{n−1} H_{t_n−t_{n−1}} f_n),
H_1(t; f)(x) = H_t(f)(x) = ∫_{R^d} p(t, x−y) f(y) dy, ∀f ∈ C_0(R^d),
p(t,x) = (2πt)^{−d/2} exp[−|x|²/2t],
and
H̃_n(t_1,t_2,...,t_n; g_1,g_2,...,g_n) = H̃_{n−1}(t_1,t_2,...,t_{n−1}; g_1,g_2,...,g_{n−2}, g_{n−1} H̃_{t_n−t_{n−1}} g_n), H̃_1(t; g) = H̃_t(g),
(H̃_t(g))(m) = Σ_{n=0}^∞ g(n+m) e^{−tπ(U_j)} (tπ(U_j))^n / n! = Σ_{n=m}^∞ g(n) e^{−tπ(U_j)} (tπ(U_j))^{n−m} / (n−m)!,
where 0 = t_0 < t_1 < t_2 < ... < t_n, f_1,f_2,...,f_n ∈ C_0(R^d), U_i ∩ U_j = ∅, i ≠ j, π(U_i) < ∞, i = 1,...,m; ∪_{i=1}^m U_i = Z, and C_0(R^d) = {f : f is a continuous function defined on R^d such that lim_{|x|→∞} |f(x)| = 0}. Hence, if t_{k−1} ≤ t < t_k, the conditional expectation E[f_1(w(t_1)) ... f_n(w(t_n)) Π_{i=1}^n g_i(N_k((0,t_i], U_j)) | F_t^{w,k}] can be evaluated through the operators H and H̃, by the stationary Markov property and the independence of the BM and the Poisson process.

Suppose, to derive a contradiction, that
P(N_k((0,σ], U_j) − Y > 0) > 0,  (2.22)
fails in the sense that Y = lim_{n'→∞} N_k((0,σ_{n'}], U_j) differs from N_k((0,σ], U_j) with positive probability, and set Γ = {w : lim_{n'→∞} (N_k((0,σ], U_j) − N_k((0,σ_{n'}], U_j)) > 0}. Thus by Fatou's lemma
0 < E (N_k((0,σ], U_j) − Y) I_Γ = lim_{n'→∞} E[N_k((0,σ], U_j) − N_k((0,σ_{n'}], U_j)] I_Γ = π(U_j) lim_{n'→∞} E[σ − σ_{n'}] I_Γ = 0.
This is a contradiction. Therefore (2.22) holds. Now it is easily shown that, P-a.s.,
lim_{n'→∞} E[Π_{i=1}^n f_i(w(t_i)) Π_{i=1}^n g_i(N_k((0,t_i], U_j)) | F^{w,k}_{σ_{n'}}] = E[Π_{i=1}^n f_i(w(t_i)) Π_{i=1}^n g_i(N_k((0,t_i], U_j)) | F^{w,k}_σ].
For this, for any given w ∈ Ω, let us show a special case: σ(w) = t_n, σ_{n'}(w) < σ(w), σ_{n'}(w) ↑ σ(w), as n' ↑ ∞. All the other cases are similar or even much easier to handle. For simplicity let us omit the w. For n' large enough, we have t_{n−1} < σ_{n'} < σ = t_n, and
E[Π_{i=1}^n f_i(w(t_i)) Π_{i=1}^n g_i(N_k((0,t_i], U_j)) | F^{w,k}_{σ_{n'}}]
= Π_{i=1}^{n−1} f_i(w(t_i)) Π_{i=1}^{n−1} g_i(N_k((0,t_i], U_j)) · (H_{σ−σ_{n'}}(f_n))(w(σ_{n'})) · (H̃_{σ−σ_{n'}}(g_n))(N_k((0,σ_{n'}], U_j)).
However, as n' → ∞, (H_{σ−σ_{n'}}(f_n))(w(σ_{n'})) → f_n(w(σ)). In fact, one notes that
(2π(t − σ_{n'}))^{−d/2} ∫_{R^d} e^{−|x−y|²/2(t−σ_{n'})} f(y) dy = I(|x−y| < δ) + I(|x−y| ≥ δ) = I_1 + I_2.
However, by the continuity of f at the point x, one has that for arbitrary given ε > 0 there exists a δ > 0 such that for |y − x| < δ, |f(y) − f(x)| < ε/3. Hence, as |y − x| < δ, ∀n',
|I_1 − (2π(t − σ_{n'}))^{−d/2} ∫_{|x−y|<δ} e^{−|x−y|²/2(t−σ_{n'})} f(x) dy| < ε/3.
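The heat-semigroup convergence used above, H_t f → f pointwise as t ↓ 0 for f continuous and vanishing at infinity, is easy to check numerically. The sketch below is our own illustration (the choice of f, grid, and evaluation point are arbitrary); it approximates the Gaussian convolution by a Riemann sum:

```python
import numpy as np

def heat_semigroup(f, x, t, grid):
    # (H_t f)(x) = (2*pi*t)^(-1/2) * integral of exp(-|x-y|^2/(2t)) f(y) dy, d = 1
    dy = grid[1] - grid[0]
    kernel = np.exp(-(x - grid) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return float(np.sum(kernel * f(grid)) * dy)

f = lambda y: np.exp(-y ** 2)           # continuous, vanishes at infinity
grid = np.linspace(-30.0, 30.0, 60001)  # wide grid, fine spacing
x0 = 0.7

# |(H_t f)(x0) - f(x0)| should decrease as t decreases to 0
errors = [abs(heat_semigroup(f, x0, t, grid) - float(f(x0))) for t in (1.0, 0.1, 0.01)]
```

For a Gaussian f the convolution is known in closed form, and the errors shrink roughly linearly in t, consistent with the semigroup property.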
From Definition 112 it is seen that for discussing the solution of (3.1) we always need to assume that the coefficients satisfy the following assumption:
(A)₁ b and σ : [0,∞) × R^d × Ω → R^d, c : [0,∞) × R^d × Z × Ω → R^d are jointly measurable and F_t-adapted, where, furthermore, c is F_t-predictable.
Moreover, to simplify the discussion of (3.1), we will also suppose that all Ñ_{k_i}(ds,dz), 1 ≤ i ≤ d₂, have no common jump time; i.e. we always make the following assumption:
(A)₂ N_{k_i}({t}, U) N_{k_j}({t}, U) = 0, as i ≠ j, for all U ∈ B(Z) such that π(U) < ∞.
Now for the uniqueness of solutions to (3.1) we have the following definition.

If 0 ≤ y_t ≤ ∫_0^t ρ(y_s) ds, ∀t ≥ 0, where ρ(u) > 0, as u > 0; and ∫_{0+} du/ρ(u) = ∞, then y_t = 0, ∀t ≥ 0.

Proof. Let z_t = ∫_0^t ρ(y_s) ds (≥ y_t ≥ 0). Obviously, one only needs to show that ∀t ≥ 0, z_t = 0. Indeed, z_t is absolutely continuous, increasing, and a.e. dz_t/dt = ρ(y_t) ≤ ρ(z_t). Set t_0 = sup{t ≥ 0 : z_s = 0, ∀s ∈ [0,t]}. If t_0 < ∞, then z_t > 0, as t > t_0. Hence by assumption and from (3.5), for any δ̄ > 0,
∞ = ∫_{(0, z_{t_0+δ̄}]} du/ρ(u) = ∫_{(t_0, t_0+δ̄]} dz_t/ρ(z_t) ≤ ∫_{(t_0, t_0+δ̄]} dt ≤ δ̄.
This is a contradiction. Therefore t_0 = ∞. ∎
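The divergence condition ∫_{0+} du/ρ(u) = ∞ is exactly what fails in classical non-uniqueness examples. As our own illustration (not from the book), take ρ(u) = u^{2/3}, for which ∫_{0+} u^{−2/3} du < ∞: the ODE y' = |y|^{2/3}, y(0) = 0 then has the two distinct solutions y ≡ 0 and y(t) = (t/3)³, which the sketch below checks on a grid:

```python
# Two distinct solutions of y' = |y|**(2/3), y(0) = 0: uniqueness fails
# because rho(u) = u**(2/3) violates the Osgood condition
# int_{0+} du/rho(u) = infinity.
def rhs(y):
    return abs(y) ** (2.0 / 3.0)

ts = [i * 0.001 for i in range(1, 3001)]        # grid on (0, 3]
y_trivial = [0.0 for _ in ts]                   # y(t) = 0
y_branch = [(t / 3.0) ** 3 for t in ts]         # y(t) = (t/3)**3

# residual |y'(t) - rhs(y(t))|, using the exact derivatives
res_trivial = max(abs(0.0 - rhs(v)) for v in y_trivial)
res_branch = max(abs((t / 3.0) ** 2 - rhs((t / 3.0) ** 3)) for t in ts)
gap = y_branch[-1] - y_trivial[-1]              # the two solutions differ
```

Both residuals vanish, yet the two solutions separate, so the comparison-function hypothesis of the lemma cannot be dropped.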
3.1.3 Existence of Solutions for the Lipschitzian Case

In this section we are going to discuss the existence and uniqueness of the solution to SDE (3.1). First, we introduce a notation which will be used later:
L²_F(R^d) = {f(t,w) : f(t,w) is F_t-adapted, R^d-valued, and such that E ∫_0^T |f(t,w)|² dt < ∞, for each T < ∞}.
Theorem 117 Assume that:
1° b and σ : [0,∞) × R^d × Ω → R^d, c : [0,∞) × R^d × Z × Ω → R^d are jointly measurable and F_t-adapted, where, furthermore, c is F_t-predictable, and P-a.s.
|b(t,x,w)| ≤ c(t)(1 + |x|),
|σ(t,x,w)|² + ∫_Z |c(t,x,z,w)|² π(dz) ≤ c(t)(1 + |x|²),
where c(t) is non-negative and non-random such that ∫_0^T c(t) dt < ∞, for each T < ∞;
2° |b(t,x₁,w) − b(t,x₂,w)| ≤ c(t)|x₁ − x₂|,
|σ(t,x₁,w) − σ(t,x₂,w)|² + ∫_Z |c(t,x₁,z,w) − c(t,x₂,z,w)|² π(dz) ≤ c(t)|x₁ − x₂|²,
where c(t) satisfies the same conditions as in 1°;
3° x₀ ∈ F_0, E|x₀|² < ∞.
Then (3.1) has a pathwise unique strong solution.
where k₀ ≥ 1 is a fixed constant, and we have applied Lemma 118 below. Note that 0 ≤ A(t) is increasing, and k₀' > 0 is a constant depending on ∫_0^T c(s) ds only. Thus, writing u(s) = E|x_s|², after appropriately choosing γ and b₀ to make max(k₀² γ^{−1}, k₀² (b₀ − γ)^{−1}) < 1, by the contraction mapping principle one finds that there exists a unique solution x̄ ∈ L²_F(R^d) satisfying (3.1). Let us show the following result: there exists a version {x_t}_{t≥0} of {x̄_t}_{t≥0}, that is, for each t ∈ [0,T], P(x_t ≠ x̄_t) = 0, such that {x_t}_{t≥0} is RCLL (right continuous and with left limit) and {x_t}_{t≥0} is a solution of (3.1). In fact, write
x_t = x₀ + ∫_0^t b(s,x̄_s,w) ds + ∫_0^t σ(s,x̄_s,w) dw_s + ∫_0^t ∫_Z c(s,x̄_{s−},z,w) Ñ_k(ds,dz), t ∈ [0,T];
x̄_t = x₀ + ∫_0^t b(s,x̄_s,w) ds + ∫_0^t σ(s,x̄_s,w) dw_s + ∫_0^t ∫_Z c(s,x̄_{s−},z,w) Ñ_k(ds,dz), t ∈ [0,T].
Then E ∫_0^T |x_t − x̄_t|² dt = 0. So there exists a set Λ₁ × Λ₂ ∈ B([0,T]) × F_T such that E ∫_0^T I_{Λ₁×Λ₂}(t,w) dt = 0, and x_t(w) = x̄_t(w), as (t,w) ∉ Λ₁ × Λ₂. Hence, for each t ∈ [0,T],
E ∫_0^t ∫_Z |c(s,x̄_{s−},z) − c(s,x_{s−},z)|² π(dz) ds = 0,
and similarly for the other coefficients. So the above fact holds true. Now, by Lemmas 114 and 115 the solution is also pathwise unique and such that E(sup_{t≤T} |x_t|²) ≤ k_T < ∞, for each T < ∞, where k_T is a constant depending on T and ∫_0^T c(s) ds only. So we have shown that there is a unique solution {x_t}_{t∈[0,T]} for each given T < ∞. By the uniqueness of the solution we immediately obtain a solution {x_t}_{t≥0}, which is also unique. When all coefficients are F_t^{w,Ñ_k}-adapted, etc., then by construction one easily sees that {x_t}_{t≥0} is also F_t^{w,Ñ_k}-adapted, i.e. it is a strong solution. ∎
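Under Lipschitz and linear-growth conditions of this type, the solution can also be approximated numerically. The following Euler-Maruyama sketch is our own illustration (not the book's construction): for the linear test equation dx_t = −x_t dt + σ dw_t + γ dÑ_t with compensated Poisson noise Ñ_t = N_t − λt, taking expectations kills both martingale terms, so Ex_t = x₀ e^{−t}, which the simulation should reproduce:

```python
import numpy as np

rng = np.random.default_rng(1)
x0, sigma, gamma, lam = 1.0, 0.3, 0.5, 2.0
T, n_steps, n_paths = 1.0, 400, 20_000
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increment
    dN = rng.poisson(lam * dt, n_paths)          # Poisson increment
    # Euler step with the compensated jump increment gamma*(dN - lam*dt)
    x = x + (-x) * dt + sigma * dw + gamma * (dN - lam * dt)

mean_hat = float(x.mean())
mean_exact = x0 * float(np.exp(-T))   # Ex_t = x0 * exp(-t)
```

The sample mean agrees with e^{−1} up to Monte Carlo and time-discretization error.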
Lemma 118 (Gronwall's inequality). If 0 ≤ y_t ≤ γv_t + ∫_0^t c(s) y_s ds, ∀t ≥ 0, where γ > 0 is a constant and c(s) ≥ 0, then ∀t ≥ 0,
y_t ≤ γv_t + γ ∫_0^t exp(∫_s^t c(r) dr) c(s) v_s ds.

Proof. Let z_t = ∫_0^t c(s) y_s ds. Then z_t' = c(t) y_t ≤ c(t)(γv_t + z_t), so that (z_t e^{−∫_0^t c(r)dr})' ≤ γ c(t) v_t e^{−∫_0^t c(r)dr}, and integrating,
z_t ≤ γ ∫_0^t exp(∫_s^t c(r) dr) c(s) v_s ds. Hence
y_t ≤ γv_t + z_t ≤ γv_t + γ ∫_0^t exp(∫_s^t c(r) dr) c(s) v_s ds. ∎

The above Theorem 117 is easily generalized to the locally Lipschitzian case.
Theorem 119 If the condition 2° in Theorem 117 is weakened to:
2°' for each N = 1,2,... there exists a non-random function c_N(t) such that, as |x₁| and |x₂| ≤ N,
|b(t,x₁,w) − b(t,x₂,w)| ≤ c_N(t)|x₁ − x₂|,
|σ(t,x₁,w) − σ(t,x₂,w)|² + ∫_Z |c(t,x₁,z,w) − c(t,x₂,z,w)|² π(dz) ≤ c_N(t)|x₁ − x₂|²,
where c_N(t) ≥ 0 satisfies the condition that ∫_0^T c_N(t) dt < ∞ for each T < ∞; and all other conditions remain true, then the conclusion of Theorem 117 still holds.

Proof. Let
b_N(t,x,w) = b(t,x,w), as |x| ≤ N; b_N(t,x,w) = b(t, Nx/|x|, w), as |x| > N.
σ_N(t,x,w) and c_N(t,x,z,w) are similarly defined. Then by Theorem 117 there exists a unique F_t-solution x_t^N ∈ S²_F(R^d) solving the following SDE:
x_t^N = x₀ + ∫_0^t b_N(s,x_s^N,w) ds + ∫_0^t σ_N(s,x_s^N,w) dw_s + ∫_0^t ∫_Z c_N(s,x_{s−}^N,z,w) Ñ_k(ds,dz).
Set τ_N^m = inf{t ≥ 0 : |x_t^N| > m}. Then we have that
x_t^N = x₀ + ∫_0^{t∧τ_N^N} b(s,x_s^N,w) ds + ∫_0^{t∧τ_N^N} σ(s,x_s^N,w) dw_s + ∫_0^{t∧τ_N^N} ∫_Z c(s,x_{s−}^N,z,w) Ñ_k(ds,dz), as t ≤ τ_N^N.
By the uniqueness theorem x_t^{N+m} = x_t^N, as t ∈ [0, τ_N^N]. Hence τ_N^N ↑, as N ↑. Let us show that
lim_{N→∞} τ_N^N = ∞, P-a.s.  (3.6)
In fact, if this is not true, then there exists a T₀ < ∞ such that P(Λ) > 0, where Λ = {τ ≤ T₀} and τ = lim_{N→∞} τ_N^N. Hence
E lim_{N→∞} |x^N_{τ_N^N ∧ T₀}|² I_Λ ≥ lim_{N→∞} N² P(Λ) = ∞.
On the other hand, by Fatou's lemma and by the a priori estimate,
E lim_{N→∞} |x^N_{τ_N^N ∧ T₀}|² I_Λ ≤ lim_{N→∞} E|x^N_{τ_N^N ∧ T₀}|² ≤ sup_N E sup_{t≤T₀} |x_t^N|² ≤ k_{T₀} < ∞.
This is a contradiction. Therefore (3.6) holds. Now let x_t = x_t^N, as t ∈ [0, τ_N^N); then by the uniqueness it is well defined. Moreover, since τ_N^N ↑ ∞, as N ↑ ∞, {x_t}_{t≥0} is a unique solution of (3.1). ∎
Theorem 120 If the condition 3° in Theorem 117 is weakened to:
3°' there is a given F_t^{w,k}-adapted process ξ_t such that E|ξ_t|² < ∞; and assume that b(t,x,w) and σ(t,x,w) are F_t^{w,k}-adapted and c(t,x,z,w) is F_t^{w,k}-predictable;
then there exists a pathwise unique strong solution x_t, t ≥ 0, satisfying the following SDE:
x_t = ξ_t + ∫_0^t b(s,x_s,w) ds + ∫_0^t σ(s,x_s,w) dw_s + ∫_0^t ∫_Z c(s,x_{s−},z,w) Ñ_k(ds,dz).

The proof of Theorem 120 can be completed in exactly the same way as that of Theorem 117. Now let us give an example to show that the condition on c(t) in Theorem 117 cannot be weakened.

Example 121 (The condition ∫_0^T c(t) dt < ∞ cannot be weakened.) Consider the following BSDE in 1-dimensional space, with a coefficient of order s^{−α}, where x_t, w_t, and k_t are all 1-dimensional, and U ∈ B_Z is such that π(U) < ∞. Obviously, if α < 1, then by Theorem 117 it has a unique solution {x_t}_{t≤T}. However, if α ≥ 1 (in this case ∫_0^T s^{−α} ds = ∞), then it has no solution. Otherwise, for the solution {x_t}_{t≥0} one would have that Ex_t = ∞, ∀t > 0, as α ≥ 1. This is a contradiction.
3.2 Exponential Solutions to Linear SDE with Jumps

In Calculus it is well known that a 1-dimensional linear ordinary differential equation (ODE)
dx_t = a_t x_t dt, x₀ = c₀; t ≥ 0,
has a unique solution x_t = c₀ e^{∫_0^t a_s ds}, t ≥ 0. However, for a 1-dimensional simple linear SDE
dx̃_t = a_t x̃_t dw_t, x̃₀ = c₀; t ≥ 0,
even if {w_t}_{t≥0} is a 1-dimensional BM (Brownian Motion process), writing x_t = c₀ e^{∫_0^t a_s dw_s}, t ≥ 0, x_t does not satisfy the above linear SDE. Actually, it satisfies the following SDE:
dx_t = (1/2) a_t² x_t dt + a_t x_t dw_t, x₀ = c₀; t ≥ 0.
In fact, let f(y) = e^y and y_t = ∫_0^t a_s dw_s. Then by Ito's formula
dx_t = df(y_t) = f'(y_t) dy_t + (1/2) f''(y_t) d⟨y⟩_t = c₀ a_t e^{y_t} dw_t + (1/2) c₀ e^{y_t} a_t² dt = (1/2) a_t² x_t dt + a_t x_t dw_t.
So x_t satisfies another linear SDE. What, then, is the solution of the original SDE? In this section we will discuss this kind of problem.
First, we can use Ito's formula to verify that
x_t = exp[∫_0^t θ_s¹ · dw_s − (1/2) ∫_0^t |θ_s¹|² ds]  (3.8)
solves the following SDE
dx_t = x_t θ_t¹ · dw_t, x₀ = 1,  (3.9)
where w_t is a d-dimensional BM, and we assume that θ_t¹ is a d-dimensional F_t-adapted process such that
∫_0^t |θ_s¹|² ds < ∞, ∀t ≥ 0, P-a.s.
In fact, f(x) = e^x ∈ C²(R¹). So if we let
x̃_t = ∫_0^t θ_s¹ · dw_s − (1/2) ∫_0^t |θ_s¹|² ds,
then by Ito's formula we have that
e^{x̃_t} = 1 + ∫_0^t e^{x̃_s} θ_s¹ · dw_s − (1/2) ∫_0^t e^{x̃_s} |θ_s¹|² ds + (1/2) ∫_0^t e^{x̃_s} |θ_s¹|² ds = 1 + ∫_0^t e^{x̃_s} θ_s¹ · dw_s.
Therefore (3.9) holds. Furthermore, let us use Ito's formula to verify that the analogous exponential (3.11), built from the jump term, solves the following SDE
dz_t² = z_{t−}² ∫_Z θ_t²(z) Ñ_k(dt,dz), z₀² = 1,  (3.12)
where for simplicity we assume that Ñ_k(dt,dz) is a 1-dimensional Poisson martingale measure (for the n-dimensional case the discussion is similar), and θ_t² is a 1-dimensional F_t-predictable process.

For this we first show that the right hand side of (3.11) makes sense. In fact, by assumption ∃σ_n ↑ ∞, where σ_n is a stopping time for each n such that I_{t≤σ_n} θ_t² ∈ F²_{Ñ_k}(R¹). So ∫_0^{t∧σ_n} ∫_Z θ_s² Ñ_k(ds,dz) makes sense. On the other hand, since ∫_0^t ∫_Z θ_s² · Ñ_k(ds,dz) is RCLL in t, it has only a finite number of discontinuous points in t for each w. So, as the product in (3.11) is a finite product, it also makes sense. Therefore z²_{t∧σ_n} has meaning for each n, and so also does z_t². Now let
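For a constant jump coefficient the product formula collapses to something directly checkable. As our own sketch (constant θ, a single jump size, intensity λ — none of these specializations are from the book): z_t = (1+θ)^{N_t} e^{−θλt} solves dz_t = z_{t−} θ dÑ_t with Ñ_t = N_t − λt, and is therefore a mean-one martingale, E z_t = 1:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, lam, t = 0.7, 3.0, 1.0
n_paths = 400_000

# N_t ~ Poisson(lam * t);  z_t = (1 + theta)^{N_t} * exp(-theta * lam * t)
N = rng.poisson(lam * t, n_paths)
z = (1.0 + theta) ** N * np.exp(-theta * lam * t)
mean_z = float(z.mean())
```

The sample mean of z_t is 1 up to Monte Carlo error, as the martingale property requires.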
By the formula for integration by parts (Theorem 96),
d(x_t y_t) = x_{t−} dy_t + y_{t−} dx_t + d[x,y]_t, where [x,y]_t = ⟨x^c, y^c⟩_t + Σ_{0<s≤t} Δx_s Δy_s,
and x_t^c is the continuous martingale part of x_t. However, we easily see that [x,y]_t = 0, since y_t^c and Δy_t are zero. Hence
x_t y_t − x₀ y₀ = ∫_0^t x_{s−} dy_s + ∫_0^t y_{s−} dx_s.

3.3 Girsanov Transformation and Weak Solutions of SDE with Jumps

where c₁(t) ≥ 0 is non-random and ∫_0^T c₁(t) dt < ∞, for each T < ∞. Write
z_t(σ^{−1}b₀) = exp[∫_0^t (σ^{−1}b₀)(s,x_s) · dw_s − (1/2) ∫_0^t |(σ^{−1}b₀)(s,x_s)|² ds],
dP̃_t = z_t dP, w̃_t = −∫_0^t (σ^{−1}b₀)(s,x_s) ds + w_t.
Then P̃_t is a probability measure, and for each 0 ≤ T < ∞, w̃_t, t ∈ [0,T], is a BM under the new probability measure P̃_T, and Ñ_k((0,t],dz), t ∈ [0,T], is a Poisson martingale measure under the new probability measure P̃_T with the same compensator π(dz)t. Notice that here the space Ω and the σ-filtration {F_t}_{t≥0} do not change. Furthermore, for each T < ∞,
(Ω, F, {F_t}_{t∈[0,T]}, P̃_T, {x_t}_{t∈[0,T]}, {w̃_t}_{t∈[0,T]}, {Ñ_k((0,t],dz)}_{t∈[0,T], dz∈B_Z})
is a weak solution of the following SDE: P̃_T-a.s., ∀t ∈ [0,T],
dx_t = (b₀(t,x_t) + b(t,x_t)) dt + σ(t,x_t) dw̃_t + ∫_Z c(t,x_{t−},z) Ñ_k(dt,dz),
with the initial value x₀ = x₀, where x₀ ∈ R^d is a constant vector. Furthermore, if (Ω,F) is a standard measurable space, then a probability P̃ exists such that P̃|_{F_T} = P̃_T, ∀T < ∞. Thus the result is generalized to all t ≥ 0. Or, simply speaking, {x_t}_{t≥0} is a weak solution of the above SDE, ∀t ≥ 0, P̃-a.s.

Proof. Let
b_N(t,x) = b₀(t,x), as |x| ≤ N; b_N(t,x) = 0, otherwise.
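The simplest instance of this Girsanov transformation (our own sketch: constant drift θ, no jumps, one fixed horizon T — these simplifications are not from the book) can be checked by Monte Carlo: with density z_T = exp(θ w_T − (1/2)θ²T), one has E^P[z_T f(w_T)] = E[f(w_T + θT)] for bounded f, i.e. under dP̃ = z_T dP the Brownian path acquires the drift θ:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, T = 0.8, 1.0
n_paths = 400_000

w = rng.normal(0.0, np.sqrt(T), n_paths)
z = np.exp(theta * w - 0.5 * theta ** 2 * T)   # Girsanov density on F_T

f = lambda x: np.cos(x)                        # a bounded test function
lhs = float((z * f(w)).mean())                 # E^P[ z_T f(w_T) ]
rhs = float(f(w + theta * T).mean())           # E[ f(w_T + theta*T) ]
```

Both estimators agree with the exact value cos(θT) e^{−T/2} of E cos(N(θT, T)) up to sampling error.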
Then by Lemma 123, dP̃_T^N = z_T(σ^{−1}b_N) dP is a probability measure. Applying Theorem 124 for any given 0 ≤ T < ∞, the claim follows.

4.1 Yamada-Watanabe Type Theorem

For any t > 0 and A ∈ B_t(D^d × D^d), there exists a conditional probability Q_t^{(w₃,w₄)}(A) such that (w₃,w₄) ∈ W₀^{d₁} × D^d → Q_t^{(w₃,w₄)}(A) is B_t(W₀^{d₁} × D^d)^{P^{w,c}}-measurable, and for any C ∈ B_t(W₀^{d₁} × D^d),
P_t¹(A × C) = ∫_C Q_t^{(w₃,w₄)}(A) P^w(dw₃) P^c(dw₄).
Now let
C = {(w₃,w₄) ∈ W₀^{d₁} × D^d : ρ_t(w₃,w₄) ∈ A₁, θ_t(w₃,w₄) ∈ A₂}, A₁, A₂ ∈ B(W₀^{d₁} × D^d),
where θ_t is defined by θ_t(w₃,w₄)(s) = (w₃(t+s) − w₃(t), w₄(t+s) − w₄(t)), s ≥ 0, and ρ_t is defined by ρ_t(w₃,w₄)(s) = (w₃(t∧s), w₄(t∧s)). Since θ_t(w₃,w₄) is independent of B_t(W₀^{d₁} × D^d) with respect to P^{w,c}, we have
∫_C Q^{(w₃,w₄)}(A) P^w(dw₃) P^c(dw₄)
= ∫_{ρ_t(w₃,w₄)∈A₁} Q^{(w₃,w₄)}(A) P^w(dw₃) P^c(dw₄) · P^{w,c}(θ_t(w₃,w₄) ∈ A₂)
= P_t¹(A × {ρ_t(w₃,w₄) ∈ A₁}) P^{w,c}(θ_t(w₃,w₄) ∈ A₂)
= P¹(x¹(·) ∈ A, ρ_t(w,ζ) ∈ A₁) P¹(θ_t(w,ζ) ∈ A₂)
= P¹(x¹(·) ∈ A, ρ_t(w,ζ) ∈ A₁, θ_t(w,ζ) ∈ A₂)
= P¹(x¹(·) ∈ A, (w,ζ) ∈ C) = P_t¹(A × C),
where we have used the result that {x¹(·) ∈ A, ρ_t(w,ζ) ∈ A₁} ∈ F_t; moreover, θ_t(w,ζ) and F_t are independent. Hence it is easily shown that
Q_1^{(w₃,w₄)}(A) = Q_{1t}^{(w₃,w₄)}(A), a.a. (w₃,w₄) (P^{w,c}).
(Here "a.a." means "almost all".) Fact A is proved.

Fact B. w₃(t) is a d₁-dimensional B_t-BM on (Ω̃, F̃, Q), and p(dt,dz) is a Poisson counting measure with the same compensator π(dz)dt on (Ω̃, F̃, Q), where p((0,t],U) = Σ_{0<s≤t} I_{0≠Δw₄(s)∈U}, for t > 0, U ∈ B(Z). Indeed, by using Fact A we have that for λ ∈ R^{d₁}, A_i ∈ B_s(D^d), i = 1,2,4; A₃ ∈ B_s(W₀^{d₁}), 0 ≤ s ≤ t,
E^Q[e^{i(λ, w₃(t)−w₃(s))} I_{A₁×A₂×A₃×A₄}]
= ∫_{A₃×A₄} e^{i(λ, w₃(t)−w₃(s))} Q_1^{(w₃,w₄)}(A₁) Q_2^{(w₃,w₄)}(A₂) P^w(dw₃) P^c(dw₄)
= e^{−(|λ|²/2)(t−s)} ∫_{A₃×A₄} Q_1^{(w₃,w₄)}(A₁) Q_2^{(w₃,w₄)}(A₂) P^w(dw₃) P^c(dw₄)
= e^{−(|λ|²/2)(t−s)} Q(A₁ × A₂ × A₃ × A₄).
Therefore the first conclusion of Fact B is true. Now for t > s ≥ 0, disjoint U₁,...,U_m ∈ B(Z) such that p((0,t],U_i) < ∞, ∀i = 1,...,m, and λ_i > 0, i = 1,...,m; A_i ∈ B_s(D^d), i = 1,2,4; A₃ ∈ B_s(W₀^{d₁}), 0 ≤ s ≤ t,
E^Q[exp[−Σ_{i=1}^m λ_i p((s,t],U_i)] I_{A₁×A₂×A₃×A₄}]
= ∫_{A₃×A₄} exp[−Σ_{i=1}^m λ_i p((s,t],U_i)] Q_1^{(w₃,w₄)}(A₁) Q_2^{(w₃,w₄)}(A₂) P^w(dw₃) P^c(dw₄)
= exp[(t−s) Σ_{i=1}^m (e^{−λ_i} − 1) π(U_i)] ∫_{A₃×A₄} Q_1^{(w₃,w₄)}(A₁) Q_2^{(w₃,w₄)}(A₂) P^w(dw₃) P^c(dw₄)
= exp[(t−s) Σ_{i=1}^m (e^{−λ_i} − 1) π(U_i)] Q(A₁ × A₂ × A₃ × A₄).
Therefore the second conclusion of Fact B is also true.

Now let us return to the proof of Theorem 4.1. From Fact B it is not difficult to show that (w₁, w₃, q(·,·)) and (w₂, w₃, q(·,·)) are solutions of (4.1) on the same space (Ω̃, F̃, {F̃_t}, Q), where q(dt,dz) = p(dt,dz) − π(dz)dt is a Poisson martingale measure with the compensator π(dz)dt. Hence 2) of Theorem 4.1 is proved. Now assume that the condition in 1) holds. Then the pathwise uniqueness implies that w₁ = w₂, Q-a.s. This implies that
Q_1^{(w₃,w₄)} × Q_2^{(w₃,w₄)}(w₁ = w₂) = 1, P^{w,c}-a.s.
Now it is easy to see that there exists a function (w₃,w₄) ∈ W₀^{d₁} × D^d → F_x(w₃,w₄) ∈ D^d × D^d such that
Q_1^{(w₃,w₄)} = Q_2^{(w₃,w₄)} = δ_{F_x(w₃,w₄)}, P^{w,c}-a.s.
By Fact A this function is B_t(W₀^{d₁} × D^d)^{P^{w,c}} / B_t(D^d × D^d)-measurable. Clearly, F_x(w₃,w₄) is uniquely determined up to P^{w,c}-measure 0. Now by the assumption of 1), (4.1) has a weak solution. Denote it by (x̃_t, w̃_t, q̃(dt,dz)), where q̃(dt,dz) is a Poisson martingale measure with the compensator π(dz)dt. By the above, one considers F_x(w̃_s, ζ_s, s ≤ t), where ζ_t = ∫_0^{t+} ∫_Z z q̃(ds,dz).
4.2 Tanaka Type Formula and Some Applications

|x_t¹ − x_t²| = |x₀¹ − x₀²| + ∫_0^t sgn(x_{s−}¹ − x_{s−}²) · d(x_s¹ − x_s²)
+ (1/2) Σ_{i,j,k} ∫_0^t I_{(x_s¹ ≠ x_s²)} (δ_{ij}/|x_s¹ − x_s²| − (x_s^{1,i} − x_s^{2,i})(x_s^{1,j} − x_s^{2,j})/|x_s¹ − x_s²|³)
· (σ_{ik}(s,x_s¹,w) − σ_{ik}(s,x_s²,w))(σ_{jk}(s,x_s¹,w) − σ_{jk}(s,x_s²,w)) d⟨M^k⟩_s
+ ∫_0^{t+} ∫_Z [|x_{s−}¹ − x_{s−}² + c¹(s,x_{s−}¹,z,w) − c²(s,x_{s−}²,z,w)| − |x_{s−}¹ − x_{s−}²|
− sgn(x_{s−}¹ − x_{s−}²) · (c¹(s,x_{s−}¹,z,w) − c²(s,x_{s−}²,z,w))] N_k(ds,dz),
where
sgn x = (x₁/|x|, ..., x_d/|x|), as x ≠ 0; sgn x = 0, as x = 0.
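In one dimension the formula above reduces to the classical Tanaka identity |w_t| = ∫_0^t sgn(w_s) dw_s + L_t, where the correction term L_t is the local time at 0. A discretized Monte Carlo sketch (our own illustration, with an arbitrary step count and seed) checks that the correction has mean E L_t = E|w_t| = √(2t/π), since the stochastic integral has mean zero:

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_steps, n_paths = 1.0, 1000, 10_000
dt = T / n_steps

w = np.zeros(n_paths)
stoch_int = np.zeros(n_paths)   # Euler discretization of int_0^T sgn(w_s) dw_s
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    stoch_int += np.sign(w) * dw
    w += dw

local_time = np.abs(w) - stoch_int          # Tanaka: L_T = |w_T| - int sgn dw
mean_L = float(local_time.mean())
mean_abs = float(np.abs(w).mean())
target = float(np.sqrt(2 * T / np.pi))      # E|w_T| = sqrt(2T/pi)
```

Both sample means agree with √(2/π) ≈ 0.798, so the nonnegative correction term is genuinely present.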
Proof. To show the formula for t ≥ 0 we only need to prove that it holds on t ∈ [0,T] for each given T < ∞. For simplicity, in the following, for an arbitrarily given T < ∞, let us write k_N(t) and ρ_N(u) for k_N^T(t) and ρ_N^T(u), respectively. Write the equality which we want to prove as
|x_t¹ − x_t²| = |x₀¹ − x₀²| + Σ_i I_t^i.
Let τ_N = inf{t ≥ 0 : |x_t¹| + |x_t²| > N}. Now for any fixed N take 1 ≥ a_n ↓ 0 such that ∫_{a_n}^{a_{n−1}} ρ_N^{−1}(u) du = n, and for each n take a continuous function f_n(u) such that
f_n(u) = 0, as 0 ≤ u ≤ a_n, or u ≥ a_{n−1}; f_n(u) = any number less than 2(nρ_N(u))^{−1}, otherwise;
with ∫ f_n(u) du = 1. Set, for x ∈ R^d,
φ_n(x) = ∫_0^{|x|} dv ∫_0^v f_n(u) du, n = 1,2,....
Then φ_n ∈ C²(R^d), and
(∂/∂x_i) φ_n(x) = (x_i/|x|) ∫_0^{|x|} f_n(u) du, as x ≠ 0; = 0, as x = 0,
so that |∇φ_n(x)| ≤ 1 and φ_n(x) ↑ |x|, as n ↑ ∞. Applying Ito's formula to φ_n(x_t¹ − x_t²), letting n → ∞, and using condition 2°, one obtains an estimate of the form
E|x¹_{t∧τ_N} − x²_{t∧τ_N}| ≤ E|x₀¹ − x₀²| + E ∫_0^{t∧τ_N} k_N(s) ρ_N(|x¹_{s∧τ_N} − x²_{s∧τ_N}|) ds + ....
From this it is easily found that, P-a.s., ∀t ≥ 0, the claimed formula holds. ∎

Lemma 144 If ρ_i(u) is strictly increasing, concave, and continuous in u ≥ 0 with ρ_i(0) = 0 and such that ∫_{0+} du/ρ_i(u) = ∞, i = 1,2, then ρ₁(u) + ρ₂(u) is still strictly increasing, concave, and continuous in u ≥ 0 with ρ₁(0) + ρ₂(0) = 0, and satisfies
∫_{0+} du/(ρ₁(u) + ρ₂(u)) = ∞.  (4.8)
Proof. One only needs to prove (4.8); the rest of the conclusions in Lemma 144 are obviously true. Let ū = inf{u > 0 : ρ₁(u) = ρ₂(u)}.
1) Suppose that ū > 0. Since ρ₁(u) − ρ₂(u) is a continuous function, it must be true that ρ₁(u) − ρ₂(u) > 0, for all u ∈ (0,ū), or ρ₁(u) − ρ₂(u) < 0, for all u ∈ (0,ū). Thus, in the first case,
∫_{0+} du/(ρ₁(u) + ρ₂(u)) ≥ ∫_{0+} du/(2ρ₁(u)) = ∞.
For the second case the proof is just the same.
2) Assume that ū = 0. Again by the continuity of ρ₁(u) − ρ₂(u), it should be that ρ₁(u) − ρ₂(u) > 0, for all u > 0, or ρ₁(u) − ρ₂(u) < 0, for all u > 0. The proof can still be completed as in 1). ∎

We can also use Theorem 142 to discuss the convergence of solutions to (4.7). Suppose that x_t^n, n = 0,1,2,..., satisfy the following SDEs, respectively. We also denote x_t⁰ = x_t, b⁰ = b.
Then
lim_{n→∞} E[exp(−Σ_{k=1}^m ∫_0^t F(s,w) dA_s^k − 2 ∫_0^t ∫_Z G(s,z,w) π(dz) ds) |x_t^n − x_t|] = 0.  (4.10)

Proof. For arbitrarily given 0 ≤ T < ∞ let us show that (4.10) holds as t ∈ [0,T]. Applying Theorem 142 we find
|x^n_{t∧τ_N} − x_{t∧τ_N}| = |x₀^n − x₀| + ∫_0^{t∧τ_N} sgn(x_s^n − x_s) · (b^n(s,x_s^n,w) − b(s,x_s,w)) dA_s
+ ∫_0^{t∧τ_N} ∫_Z [|x_{s−}^n − x_{s−} + c^n(s,x_{s−}^n,z,w) − c(s,x_{s−},z,w)| − |x_{s−}^n − x_{s−}|] Ñ_k(ds,dz)
+ ∫_0^{t∧τ_N} ∫_Z [|x_s^n − x_s + c^n(s,x_s^n,z,w) − c(s,x_s,z,w)| − |x_s^n − x_s|
− sgn(x_s^n − x_s)(c^n(s,x_s^n,z,w) − c(s,x_s,z,w))] π(dz) ds,
where τ_N = inf{t ∈ [0,T] : ∫_0^t g(s,w) d(⟨M⟩_s + s) > N}. Write X_t^n = x_t^n − x_t, σ_1^n(t,w) = σ(t,x_t^n,w) − σ(t,x_t,w), c_1^n(t,z,w) = c^n(t,x_{t−}^n,z,w) − c(t,x_{t−},z,w). Then by the above,
|X^n_{t∧τ_N}| ≤ |X₀^n| + ∫_0^{t∧τ_N} F(s,w) |X_s^n| dA_s + ∫_0^{t∧τ_N} F^n(s,w) dA_s + ∫_0^{t∧τ_N} ∫_Z G^n(s,z,w) π(dz) ds + 2 ∫_0^{t∧τ_N} ∫_Z G(s,z,w) |X_s^n| π(dz) ds + N_{t∧τ_N},
where
N_t = ∫_0^t sgn(X_s^n) · σ_1^n(s,w) dM_s + ∫_0^{t+} ∫_Z [|X_{s−}^n + c_1^n(s,z,w)| − |X_{s−}^n|] Ñ_k(ds,dz).
Applying Lemma 146 below we find
|X^n_{t∧τ_r}| exp(−∫_0^{t∧τ_r} F(s,w) dA_s − 2 ∫_0^{t∧τ_r} ∫_Z G(s,z,w) π(dz) ds)
≤ |X₀^n| + ∫_0^{t∧τ_r} exp(−∫_0^s F(u,w) dA_u − 2 ∫_0^s ∫_Z G(u,z,w) π(dz) du) (F^n(s,w) dA_s + 2 ∫_Z G^n(s,z,w) π(dz) ds + dN_s).
Hence
E H_{t∧τ_r} |X^n_{t∧τ_r}| ≤ E|X₀^n| + E ∫_0^{t∧τ_r} H_s (F^n(s,w) dA_s + 2 ∫_Z G^n(s,z,w) π(dz) ds)
≤ E|X₀^n| + E ∫_0^t H_s F^n(s,w) dA_s + 2 E ∫_0^t ∫_Z H_s G^n(s,z,w) π(dz) ds,
where H_s = exp(−∫_0^s F(u,w) dA_u − 2 ∫_0^s ∫_Z G(u,z,w) π(dz) du) ≤ 1. Note that by the assumption, for each w, when r is large enough, τ_r(w) = ∞. Therefore, by the above result, letting r → ∞, by Fatou's lemma we have
E H_t |X_t^n| ≤ E|X₀^n| + E ∫_0^t H_s F^n(s,w) dA_s + 2 E ∫_0^t ∫_Z H_s G^n(s,z,w) π(dz) ds.
Now letting n → ∞, we obtain lim_{n→∞} E H_t |X_t^n| = 0. Notice that the condition 2° here is weaker than the usual Lipschitzian condition, because now b(t,x,w) can be discontinuous. For example, b(t,x) = A₀ − A₁ sgn(x), where A₀, A₁ are constants.
Lemma 146 (Stochastic Gronwall's inequality). Assume that V_t, N_t are RCLL (right continuous with left limit) processes, where N_t is a semimartingale, and assume that B_t is a continuous increasing process. If
V_t ≤ N_t + ∫_0^t V_s dB_s, ∀t ≥ 0; B₀ = 0,
then
e^{−B_t} V_t ≤ N₀ + ∫_0^t e^{−B_s} dN_s, ∀t ≥ 0.

Proof. Notice that by Ito's formula d(N_t B_t) = B_t dN_t + N_t dB_t, d_s(N_s (B_t − B_s)²/2) = [(B_t − B_s)²/2] dN_s − (B_t − B_s) N_s dB_s, etc. Hence by assumption it can be seen that
V_t ≤ N_t + ∫_0^t V_s dB_s ≤ N_t + ∫_0^t N_s dB_s + ∫_0^t dB_s ∫_0^s V_u dB_u
= N₀ + N₀ B_t + ∫_0^t (B_t − B_s) dN_s + ∫_0^t dN_s + ∫_0^t V_s (B_t − B_s) dB_s
≤ ... ≤ N₀ (Σ_{k=0}^n (B_t)^k/k!) + ∫_0^t (Σ_{k=0}^n (B_t − B_s)^k/k!) dN_s + ∫_0^t V_s ((B_t − B_s)^n/n!) dB_s.
Since V_t is RCLL, it must be bounded locally (its bound can depend on w). Hence
∫_0^t V_s ((B_t − B_s)^n/n!) dB_s ≤ k₀(w) B_t^{n+1}/(n+1)! → 0, as n → ∞.
Therefore V_t ≤ N₀ e^{B_t} + ∫_0^t e^{B_t − B_s} dN_s, ∀t ≥ 0. ∎

Conditions 3°, 4° in Theorem 143 can be non-Lipschitzian conditions. Let us give an example as follows.
Example 147 Let b_i(t,x) = −k₀ x_i^{1/3} + k₁ x_i + k₂ |x_i| + k₃ |x|. Then b(t,x) satisfies the condition 3°.

In fact, as x, y ≠ 0, since u ↦ u^{1/3} is increasing,
⟨x − y, −k₀(x^{1/3} − y^{1/3})⟩ = −k₀ Σ_{i=1}^d (x_i − y_i)(x_i^{1/3} − y_i^{1/3}) ≤ 0,
and the remaining terms are estimated by a constant multiple of |x − y|², where we have applied Schwarz's inequality to get that Σ_{i=1}^d x_i y_i ≤ |x||y|. Note also that the case x ≠ 0, y = 0 is treated in the same way. Hence b(t,x) satisfies the condition 3°.
4.2.4 Tanaka Type Formula in 1-Dimensional Space
Now consider SDE (4.7) in 1-dimensional space, i = 1,2,
where d = 1, but A_t = (A_{1t}, ..., A_{mt}) and M_t = (M_{1t}, ..., M_{rt}) are still m-dimensional and r-dimensional, respectively, such that ⟨M_i, M_j⟩_t = 0, as i ≠ j; so b ∈ R^{1⊗m}, and σ ∈ R^{1⊗r}. In this case we can prove that the Tanaka type formula is simpler and the conditions to be satisfied are also weaker. Actually, we have
Theorem 148 (Tanaka type formula) Assume that
1° |b_i(t,x,w)|² + |σ(t,x,w)|² + ∫_Z |c_i(t,x,z,w)|² π(dz) ≤ g(|x|), i = 1,2,
where g(u) is a continuous non-random function on u ≥ 0;
2° |σ(t,x,w) - σ(t,y,w)|² ≤ k_N(t) ρ_N(|x - y|), as |x|, |y| ≤ N, t ∈ [0,T];
where 0 ≤ k_N(t), Σ_{k=1}^r E ∫_0^T k_N(t) d⟨M_k⟩_t < ∞, for each T < ∞; 0 ≤ ρ_N(u) is non-random, strictly increasing in u > 0 with ρ_N(0) = 0, and ∫_{0+} du/ρ_N(u) = ∞;
in 1° of Theorem 148 can also be dropped. For this, let us first prove the following theorem.
Proof. 2) is deduced from 1). In fact, |x| = x⁺ + x⁻. So
| |x̃_{s-} + c̃(s,z,w)| - |x̃_{s-}| - sgn(x̃_{s-}) c̃(s,z,w) |
≤ |(x̃_{s-} + c̃(s,z,w))⁺ - x̃_{s-}⁺ - I_{(x̃_{s-}>0)} c̃(s,z,w)|
+ |(x̃_{s-} + c̃(s,z,w))⁻ - x̃_{s-}⁻ + I_{(x̃_{s-}≤0)} c̃(s,z,w)|.
Thus 1) implies 2). Let us show 1). Notice that x⁻ = (-x)⁺. So we only need to show the first conclusion in 1). Take 0 < a_n ↓ 0 and continuous functions 0 ≤ g_n(x) such that g_n vanishes outside (a_{n+1}, a_n) and ∫_{a_{n+1}}^{a_n} g_n(x) dx = 1; set h_n(x) = ∫_0^x dy ∫_0^y g_n(u) du, so that h_n''(x) = g_n(x). Hence as n ↑ ∞, h_n(x) ↑ x⁺, h_n'(x) → I_{(x>0)}, h_n ∈ C²(R), |h_n'(x)| ≤ 1. Applying Ito's formula, we find that the corresponding correction term tends to 0. From this it follows easily that the limit is also an increasing process. Notice that
0 ≤ Δx̃_t⁺ = x̃_t⁺ - x̃_{t-}⁺, ∀t ≥ 0, where we write Δx̃_t = x̃_t - x̃_{t-}. Hence, ∀t ≥ 0,
0 ≤ Δx̃_t⁺ = I_{(x̃_{t-}>0)} Δx̃_t + (Δx̃_t⁺ - I_{(x̃_{t-}>0)} Δx̃_t),
where we have used the fact that
I_{(x̃_{t-}>0)} Δx̃_t⁺ = I_{(x̃_{t-}>0)} [x̃_t⁺ - x̃_{t-}], and I_{(x̃_{t-}≤0)} Δx̃_t⁺ = I_{(x̃_{t-}≤0)} x̃_t⁺.
Therefore
Σ_{0<s≤t} |Δx̃_s⁺ - I_{(x̃_{s-}>0)} Δx̃_s| = Σ_{0<s≤t} [I_{(x̃_{s-}>0)} (x̃_s)⁻ + I_{(x̃_{s-}≤0)} (x̃_s)⁺]
= ∫_0^t ∫_Z [I_{(x̃_{s-}>0)} (x̃_{s-} + c̃(s,z,w))⁻ + I_{(x̃_{s-}≤0)} (x̃_{s-} + c̃(s,z,w))⁺] N_k(ds,dz)
= ∫_0^t ∫_Z [(x̃_{s-} + c̃(s,z,w))⁺ - (x̃_{s-})⁺ - I_{(x̃_{s-}>0)} c̃(s,z,w)] N_k(ds,dz).
On the other hand, by the definition (4.14) of x̃_t one sees that
the left side ≤ |x̃_t⁺ - x̃_0⁺| + Σ_{p=1}^m ∫_0^t |b̃_p(s,w)| dA_p(s) + Σ_{k=1}^r |∫_0^t I_{(x̃_{s-}>0)} σ̃_k(s,w) dM_k(s)|.
Remark 151 1) By the proof of Theorem 150 one sees that
Σ_{0<s≤t} [I_{(x̃_{s-}>a)} (x̃_s - a)⁻ + I_{(x̃_{s-}≤a)} (x̃_s - a)⁺] ≤ |(x̃_t - a)⁺ - (x̃_0 - a)⁺| + |∫_0^t I_{(x̃_{s-}>a)} dx̃_s| < ∞, P-a.s.,
holds for a general real semimartingale x̃_t = x̃_0 + Ã_t + M̃_t, where Ã_t ≥ 0 with Ã_0 = 0 is a real continuous increasing process which is locally integrable, i.e. EÃ_T < ∞ for each T < ∞. 2) For two solutions x¹, x² of (4.7),
(x_t¹ - x_t²)⁺ = (x_0¹ - x_0²)⁺ + ∫_0^t I_{(x_{s-}¹ - x_{s-}²)>0} d(x_s¹ - x_s²)
+ ∫_0^t ∫_Z [(x_{s-}¹ - x_{s-}² + c¹(s,x_{s-}¹,z,w) - c²(s,x_{s-}²,z,w))⁺ - (x_{s-}¹ - x_{s-}²)⁺
- I_{(x_{s-}¹ - x_{s-}²)>0} (c¹(s,x_{s-}¹,z,w) - c²(s,x_{s-}²,z,w))] N_k(ds,dz),

4.2 Tanaka Type Formula and Some Applications
121

(x_t¹ - x_t²)⁻ = (x_0¹ - x_0²)⁻ - ∫_0^t I_{(x_{s-}¹ - x_{s-}²)≤0} d(x_s¹ - x_s²)
+ ∫_0^t ∫_Z [(x_{s-}¹ - x_{s-}² + c¹(s,x_{s-}¹,z,w) - c²(s,x_{s-}²,z,w))⁻ - (x_{s-}¹ - x_{s-}²)⁻
+ I_{(x_{s-}¹ - x_{s-}²)≤0} (c¹(s,x_{s-}¹,z,w) - c²(s,x_{s-}²,z,w))] N_k(ds,dz),
and
|x_t¹ - x_t²| = |x_0¹ - x_0²| + ∫_0^t sgn(x_{s-}¹ - x_{s-}²) d(x_s¹ - x_s²)
+ ∫_0^t ∫_Z [|x_{s-}¹ - x_{s-}² + c¹(s,x_{s-}¹,z,w) - c²(s,x_{s-}²,z,w)| - |x_{s-}¹ - x_{s-}²|
- sgn(x_{s-}¹ - x_{s-}²)(c¹(s,x_{s-}¹,z,w) - c²(s,x_{s-}²,z,w))] N_k(ds,dz).
Moreover, the last terms of the above three formulas are absolutely convergent, P-a.s.
4.2.5 Tanaka Type Formula in the Component Form
The 1-dimensional result and Remark 151 motivate us to consider a Tanaka formula for the components (x_{it}¹ - x_{it}²) of (x_t¹ - x_t²), when x_t¹ and x_t² are solutions of the n-dimensional SDE with jumps (4.7) under some other conditions. For this we write (4.7) in its component form: for i = 1,2,...,d
Thus we can apply Theorem 152 to obtain the following theorem.
(ii) x ≥ y ⟹ x + c(t,x,z,w) ≥ y + c(t,y,z,w).
Then the pathwise uniqueness holds for SDE (4.16).
Proof. Assume that x_t^i, i = 1,2, are two solutions of (4.16). Write
x̃_t = x_t¹ - x_t², τ_N = inf{t > 0 : |x_t¹| + |x_t²| > N}.
Then by the Tanaka type formula (Theorem 145)
E |x̃_{t∧τ_N}| ≤ E ∫_0^{t∧τ_N} k_N(s) ρ_N(|x̃_s|) ds + E J_{t∧τ_N},
where
J_t = ∫_0^t ∫_Z [|x_{s-}¹ - x_{s-}² + c(s,x_{s-}¹,z,w) - c(s,x_{s-}²,z,w)| - |x_{s-}¹ - x_{s-}²|
- sgn(x_{s-}¹ - x_{s-}²)(c(s,x_{s-}¹,z,w) - c(s,x_{s-}²,z,w))] N_k(ds,dz).
In the case that the condition (ii) of 4° holds, J_t = 0. So we have
E |x̃_{t∧τ_N}| ≤ ∫_0^t k_N(s) ρ_N(E |x̃_{s∧τ_N}|) ds, t ∈ [0,T].
Thus E |x̃_{t∧τ_N}| = 0, t ∈ [0,T]. From this one easily derives that x̃_t = 0, t ∈ [0,T]. Since T < ∞ is arbitrarily given, x̃_t = 0, ∀t ≥ 0. In the case that the condition (i) of 4° holds, then by the same Tanaka type formula
E |x̃_{t∧τ_N}| ≤ E ∫_0^{t∧τ_N} k_N(s) ρ_N(|x̃_s|) ds + 2E ∫_0^{t∧τ_N} ∫_Z |c(s,x_s¹,z,w) - c(s,x_s²,z,w)| π(dz) ds
≤ 3E ∫_0^{t∧τ_N} k_N(s) ρ_N(|x̃_s|) ds, as t ∈ [0,T].
The conclusion now follows. ∎
Comparing the uniqueness theorem here and Theorem 115, one finds that the condition on σ here is weaker than there: here σ can be merely Hölder continuous with index 1/2. However, the condition on c is different. In any case, if the SDE is without jumps, that is c = 0, then we have obtained genuinely weaker conditions for the pathwise uniqueness of solutions. For an SDE with jumps in d-dimensional space we can also obtain a uniqueness theorem described by conditions on the components of the coefficients, which are different from Theorem 143 and Theorem 115. For simplicity consider the following SDE in d-dimensional space
Theorem 155 Assume that
1° E|x_0|² < ∞, and
2° ⟨x, b(t,x,w)⟩ ≤ c(t)(1 + |x|²),
|σ(t,x,w)|² + ∫_Z |c(t,x,z,w)|² π(dz) ≤ c(t)(1 + |x|²),
where 0 ≤ c(t) is non-random, such that for any 0 < T < ∞, C_T = ∫_0^T c(t) dt < ∞,
and if |b(t,w)|² + |σ(t,w)|² + ∫_Z |c(t,z,w)|² π(dz) ≤ k_0,
where 0 ≤ k_0 and δ_0 are constants, and w_t, t ≥ 0, is a standard Brownian motion process, then for any f, which is a Borel measurable function valued in R,
E ∫_0^T |f(x_s)| ds ≤ k_T ||f||_{L_d(R^d)},
where 0 ≤ k_T is a constant depending on k_0, δ_0 and T only.
Proof. If 0 ≤ f = f(t,x) : R_+ × R^d → R, then for any ε > 0 there exists a smooth function u^ε(t,x) : R × R^d → [0,∞) such that
1) Σ_{i,j=1}^d h_i h_j (∂²/∂x_i ∂x_j) u^ε ≤ λ u^ε, ∀h ∈ R^d, |h| ≤ 1,
2) u^ε ≤ k(p,λ,d,ε) ||e^{-λs(d+1)/p} f(s,y)||_{L_p(R^{d+1})},
where k(p,λ,d,ε) = e^{ελ(d+1)/p} λ^{d/p} (V_d d!)^{1/p} ε^{d/2p} (λ(d+1))^{(1/p)-1}, V_d is the volume of the d-dimensional unit sphere, and
||g(s,x)||_{L_p(R^{d+1})} = (∫_{R^1×R^d} |g(s,x)|^p ds dx)^{1/p} = ||g(s,y)||_{p,(s,y)},
3) |grad_x u^ε| ≤ λ^{1/2} u^ε,
4) for all non-negative definite symmetric matrices A = (a_{i,j})_{d×d},
Σ_{i,j=1}^d a_{i,j} (∂²u^ε/∂x_i ∂x_j) + (∂/∂t) u^ε - λ(tr.A + 1) u^ε ≤ -(det A)^{1/(d+1)} f_ε,
where f_ε is the smoothing of f, i.e.
f_ε = f * w_ε = ∫_0^∞ ds ∫_{R^d} f̄(t - εs, x - εy) w(s,y) dy,
f̄(t,x) = f(t,x), as t ≥ 0; = 0, otherwise,
and w(t,x) is a smooth function such that w(t,·) : R¹ × R^d → [0,∞), w(t,x) = 0, as (t,x) ∉ [-1,1] × [-1,1]^d, and ∫_0^∞ dt ∫_{R^d} w(t,x) dx = 1.
By using the above lemma we can prove the following
Theorem 165 (Krylov type estimate). Assume that
σ = σ(t,x) : [0,T] × R^d → R^{d⊗d},
c = c(t,x,z) : [0,T] × R^d × Z → R^d
are jointly measurable and satisfy the conditions (4.25),
where k_T = ∫_0^T c_1(t) dt < ∞, for each T < ∞. Moreover, assume that E|x_0| ≤ k_T, for each T < ∞. Write
φ_t = ∫_0^t tr.A(s,x_s) ds, A(s,x) = (1/2) σσ*(s,x).
Then for all 0 ≤ f, which is a bounded Borel measurable function defined on [0,T] × R^d, and p ≥ d+1,
E ∫_0^T e^{-φ_s} (det A(s,x_s))^{1/(d+1)} f(s,x_s) ds ≤ k ||f||_{p,[0,T]×R^d},
where we write ||f||_{p,[0,T]×R^d} = (∫_0^T ∫_{R^d} |f(t,x)|^p dx dt)^{1/p}, and k ≥ 0 is a constant depending only on p, k_T, d and T.
Furthermore, in the case that σ is uniformly non-degenerate or locally uniformly non-degenerate we have the following corollaries:
1) If, in addition, for each N = 1,2,... there exists a δ_N > 0 such that for all λ ∈ R^d,
⟨A(t,x)λ, λ⟩ ≥ |λ|² δ_N, as |x| ≤ N,
where ⟨·,·⟩ is the inner product in R^d (that is, σ is locally uniformly non-degenerate), and letting τ_N = inf{t ∈ [0,T] : |x_t| > N}, then
E ∫_0^{τ_N} f(s,x_s) ds ≤ k ||f||_{p,[0,T]×[-N,N]^d},
where k ≥ 0 is a constant only depending on p, k_0, d, δ_N and T.
2) If, in addition, there exists a δ_0 > 0 such that for all λ ∈ R^d,
⟨A(t,x)λ, λ⟩ ≥ |λ|² δ_0,
(that is, σ is uniformly non-degenerate), then
E ∫_0^T f(s,x_s) ds ≤ k(p, k_T, d, δ_0, T) ||f||_{p,[0,T]×R^d},
where k ≥ 0 is a constant only depending on p, k_T, d, δ_0 and T.
Proof. Let f(t,x) = 0, as (t,x) ∈ (T,∞) × R^d.
According to Lemma 164, for f ≥ 0 there exists a u^ε satisfying 1)-4) in Lemma 164. Applying Ito's formula to u^ε(t,x_t) e^{-λ(φ_t + t)} on [0,T] we find
E u^ε(T,x_T) e^{-λ(φ_T + T)} - E u^ε(0,x_0)
= E ∫_0^T e^{-λ(φ_s + s)} grad_x u^ε(s,x_s) σ(s,x_s) dw_s
+ E ∫_0^T e^{-λ(φ_s + s)} ((∂/∂s) u^ε + Σ_{i,j=1}^d A_{i,j} (∂²u^ε/∂x_i ∂x_j) - λ(tr.A + 1) u^ε) ds
+ E Σ_{0<s≤T} e^{-λ(φ_s + s)} (u^ε(s,x_s) - u^ε(s,x_{s-}) - grad_x u^ε(s,x_{s-}) · Δx_s)
= Σ_{i=1}^3 I_i^ε.
Now we set

4.4 Krylov Estimation
133

θ = k(p,λ,d,ε) ||e^{-λs(d+1)/p} f(s,y)||_{p,[0,T]×R^d}.
By 2) in Lemma 164, E u^ε(0,x_0) ≤ θ, and by 3) and 2) in Lemma 164, I_1^ε ≤ k_0 λ θ. On the other hand, notice that
||e^{-λr(d+1)/p} f(r + s, y)||_{p,(r,y)∈[0,T]×R^d} ≤ e^{λs(d+1)/p} ||e^{-λr(d+1)/p} f(r,y)||_{p,(r,y)∈[0,T]×R^d}.
Hence by 1) in Lemma 164 and the condition (4.25) we have
I_3^ε ≤ (λ/2) E Σ_{0<s≤T} e^{-λ(φ_s + s)} Σ_{i,j=1}^d (∂²u^ε(s, x_{s-} + θ̄ Δx_s)/∂x_i ∂x_j) Δx_i(s) Δx_j(s) ≤ ···
Hence by 1) in Lemma 164 and the condition (4.25) we have I$ $ ;E Co,ss, e-'(q~+~) -(dZu"(s,2,~AX~)/~X~~X~)AX~(S)AX~(S) % x ~ ~ ' ~ ( ~ +C' )OI 0 , 5 ~= '0. Furthermore, ~ for arbitrary given Z > 0 there exists a f i such that as n fi, J T < ~ Z. ~ Thus, for each s, as n -+ m, 2 JZ I C ( ~ ~ X ! , ~ , W ) -c(s,xs,z,w)I I ~ ~ ; ~ ~ k ~ I l ~ ~0 , lin~P.k ~ ~ ( d z ) Hence, Lebesgue's dominated convergence theorem applies, and (5.8) holds. Thus (5.6) follows. By the same token one easily shows that as n -+ m, J; bn(s, x:, w)ds -+ b(s,x,, w)ds, in P ; and t an(s,xy,w)dws -+ So a ( s , x,, w)dw,, in P. Therefore, ( x t ) is a solution of (5.1). The pathwise uniqueness follows from Lemma 115.
>
Jot
5.2 The Skorohod Weak Convergence Technique
To discuss the existence of a weak solution of an SDE under some weak conditions, the following Skorohod weak convergence technique is very useful, and we will use it frequently in this chapter. Let us establish a lemma that is central to the discussion of the existence of a weak solution to an SDE. In the rest of this chapter let us assume that Z = R^d - {0}, and ∫_Z |z|²/(1 + |z|²) π(dz) < ∞.
Lemma 173 Suppose that
|b^n(t,x,w)| ≤ c_1(t)(1 + |x|), as n ≥ N_0;
|σ^n(t,x,w)|² + ∫_Z |c^n(t,x,z,w)|² π(dz) ≤ c_1(t)(1 + |x|²),
where N_0 > 0 is a constant. Assume that for each n = 1,2,..., x_t^n is the solution of the following SDE:
x_t^n = x_0 + ∫_0^t b^n(s,x_s^n) ds + ∫_0^t σ^n(s,x_s^n) dw_s + ∫_0^t ∫_Z c^n(s,x_{s-}^n,z) q(ds,dz),
where we denote by q(dt,dz) = Ñ_k(dt,dz) the Poisson martingale measure with the compensator π(dz)dt, such that q(dt,dz) = p(dt,dz) - π(dz)dt and p(dt,dz) = N_k(dt,dz). Then the following fact holds (we may call it "the result of SDE from the Skorohod weak convergence technique"): there exists a probability space (Ω̃, F̃, P̃) (actually, Ω̃ = [0,1], F̃ = B([0,1])) and a sequence of RCLL processes (x̃_t^n, w̃_t^n, ζ̃_t^n), n = 0,1,2,..., defined on it such that (x̃_t^n, w̃_t^n, ζ̃_t^n), n = 1,2,..., have the same finite-dimensional probability distributions as those of (x_t^n, w_t, ζ_t), n = 1,2,..., where
ζ_t = ∫_0^t ∫_{|z|≤1} z q(ds,dz) + ∫_0^t ∫_{|z|>1} z p(ds,dz),
and, as n → ∞, ∀t ≥ 0, η̃_t^n → η̃_t^0, in probability, where η̃_t^n = (x̃_t^n, w̃_t^n, ζ̃_t^n), n = 0,1,2,.... Write
p̃^n(dt,dz) = Σ_{s>0} I_{(0 ≠ Δζ̃_s^n ∈ dz)} δ_s(dt), q̃^n(dt,dz) = p̃^n(dt,dz) - π(dz)dt, n = 0,1,2,....
Moreover, w̃_t^n and w̃_t^0 are BMs on the probability space (Ω̃, F̃, P̃), and q̃^n(dt,dz) and q̃^0(dt,dz) are Poisson martingale measures with the same compensator π(dz)dt. Furthermore, (x̃_t^n) satisfies the following SDE with w̃_t^n and q̃^n(dt,dz) on (Ω̃, F̃, P̃):
x̃_t^n = x_0 + ∫_0^t b^n(s,x̃_s^n) ds + ∫_0^t σ^n(s,x̃_s^n) dw̃_s^n + ∫_0^t ∫_Z c^n(s,x̃_{s-}^n,z) q̃^n(ds,dz).
Proof. By the properties of b^n, σ^n, and c^n, applying Lemma 114 one immediately finds that sup_n E(sup_{t≤T} |x_t^n|²) < ∞. Hence P(sup_{t≤T} |x_t^n| < ∞) = 1. In particular, lim_{N→∞} sup_n P(sup_{t≤T} |x_t^n| > N) = 0. Now for arbitrary ε > 0,
P(|ζ_t - ζ_s| > ε) ≤ P(|∫_s^t ∫_{|z|≤1} z Ñ_k(du,dz)| > ε/2) + P(|∫_s^t ∫_{|z|>1} z N_k(du,dz)| > ε/2) = J_1 + J_2.
It is evident that as |t - s| ≤ h → 0,
J_1 ≤ (2/ε)² ∫_s^t ∫_{|z|≤1} |z|² π(dz) du ≤ k_0 (2/ε)² |t - s| → 0.
Notice that N_k(dt,dz) is a Poisson random measure with the compensator π(dz)dt; hence as |t - s| ≤ h → 0,
J_2 ≤ P(N_k((s,t], |z| > 1) > 0) = 1 - exp(-π(|z| > 1)(t - s)) ≤ 1 - exp(-π(|z| > 1) h) → 0,
where π(|z| > 1) = ∫_{|z|>1} π(dz) ≤ 2 ∫_{|z|>1} (|z|²/(1 + |z|²)) π(dz) < ∞. Hence ζ_t satisfies (5.9), that is,
lim_{N→∞} sup_{t≤T} P(|ζ_t| > N) = 0, and
lim_{h↓0} sup_{t_1,t_2 ≤ T, |t_1 - t_2| ≤ h} P(|ζ_{t_1} - ζ_{t_2}| > ε) = 0.
Since E|w_t - w_s|² = |t - s|, one also easily shows that (5.9) holds for w_t.
Hence Skorohod's theorem (Theorem 398) applies to {x_t^n, ζ_t, w_t} and the conclusion follows by Lemma 399. ∎
Remark 174 By this lemma one sees that "the result of SDE from the Skorohod weak convergence technique" holds, and if we can prove that, in probability P̃, as n → ∞,
∫_0^t (b^n(s,x̃_s^n) - b(s,x̃_s^0)) ds → 0,
∫_0^t σ^n(s,x̃_s^n) dw̃_s^n - ∫_0^t σ(s,x̃_s^0) dw̃_s^0 → 0,
∫_0^t ∫_Z c^n(s,x̃_{s-}^n,z) q̃^n(ds,dz) - ∫_0^t ∫_Z c(s,x̃_{s-}^0,z) q̃^0(ds,dz) → 0, (5.10)
then (Ω̃, F̃, P̃; {F̃_t}_{t≥0}, x̃_t^0, w̃_t^0, q̃^0(dt,dz)) is a weak solution of (5.11) in the next section.
5.3 Weak Solutions. Continuous Coefficients
The technique used in proving Theorem 170 motivates us to obtain an existence theorem for weak solutions of SDEs with jumps and with a σ which can be degenerate. Consider the following SDE with non-random coefficients: ∀t ≥ 0,
x_t = x_0 + ∫_0^t b(s,x_s) ds + ∫_0^t σ(s,x_s) dw_s + ∫_0^t ∫_Z c(s,x_{s-},z) q(ds,dz). (5.11)

5. Stochastic Differential Equations with Non-Lipschitzian Coefficients
148

Theorem 175 Assume that
1° b = b(t,x) : [0,∞) × R^d → R^d, σ = σ(t,x) : [0,∞) × R^d → R^{d⊗d},
c = c(t,x,z) : [0,∞) × R^d × Z → R^d
are jointly Borel measurable such that P-a.s.
∫_Z |c(t,x,z)|² π(dz) ≤ c_1(t)(1 + |x|²),
where c_1(t) is non-negative and such that for each T < ∞, ∫_0^T c_1(t) dt < ∞;
2° |b(t,x)|² + |σ(t,x)|² ≤ c_1(t)(1 + |x|²), where c_1(t) has the same property as in 1°;
3° b(t,x) is continuous in x, and σ(t,x) is jointly continuous in (t,x); and
lim_{h,h'→0} ∫_Z |c(t + h', x + h, z) - c(t,x,z)|² π(dz) = 0;
4° Z = R^d - {0}, ∫_Z |z|²/(1 + |z|²) π(dz) < ∞.
Then for any given constant x_0 ∈ R^d, (5.11) has a weak solution.
Proof. By Lemma 172 we can smooth out b, σ and c with respect to x only, to get b^n, σ^n, and c^n, respectively. Then we have a pathwise unique strong solution x_t^n satisfying an SDE similar to (5.2), but here all coefficients b^n, σ^n, and c^n do not depend directly on w. Now applying Lemma 173, "the result of SDE from the Skorohod weak convergence technique" holds. So we only need to show that (5.10) in Remark 174 holds. However, since ∀t ≥ 0, x̃_t^n → x̃_t^0, in probability P̃, as n → ∞, as in the proof of Theorem 170 one finds that (5.4) and (5.5) hold. So by Remark 397 in the Appendix we may assume that all {x̃_t^n, t ∈ [0,T]}, n = 0,1,2,..., are uniformly bounded, that is, |x̃_t^n| ≤ k_0, ∀t ∈ [0,T], ∀n = 0,1,2,..., in all following discussions of the convergence in probability. Now, for an arbitrarily given δ > 0, decompose the third expression in (5.10) as I^n ≤ Σ_{i=1}^m I_i^n, where 0 < t_1 < ··· < t_m < ··· < T is a division of [0,T]. Since by conditions 1°-3°, lim_{m→∞} I_m = 0, one can choose a large enough m such that I_m < ε̄/6. On the other hand, by the conclusion 1) of Lemma 400 in Appendix C, for these given m, δ there exists an Ñ such that as n ≥ Ñ, the corresponding term is < ε̄/6. So we have proved that the third limit in (5.10) holds. The proofs of the remaining results are similar and even simpler. Thus x̃_t^0 is a weak solution. ∎
For the case where the coefficient b can grow faster than linearly we can establish the following theorem.
Theorem 176 Assume that conditions 1°, 3° and 4° in Theorem 175 hold, and assume that
5° ⟨x, b(t,x)⟩ ≤ c_1(t)(1 + |x|² Π_{k=1}^{n_0} g_k(x)),
|σ(t,x)|² ≤ c_1(t)(1 + |x|² Π_{k=1}^{n_0} g_k(x)),
where c_1(t) ≥ 0 has the same property as that in the condition 1° of Theorem 175, and g_k(x) is such that
g_k(x) = 1 + ln(1 + ln(1 + ··· + ln(1 + |x|²)···)), with the logarithm iterated k times
(n_0 is some natural number). Then for any given constant x_0 ∈ R^d, (5.11) has a weak solution on t ≥ 0.
Proof. For each n = 1,2,... introduce a real smooth function W^n(x), x ∈ R^d, such that 0 ≤ W^n(x) ≤ 1 and W^n(x) = 1, as |x| ≤ n; W^n(x) = 0, as |x| ≥ n + 1. Write
b^n(t,x) = b(t,x) W^n(x), σ^n(t,x) = σ(t,x) W^n(x).
Then by Theorem 175 for each n there exists a weak solution x_t^n with a BM w_t^n and a Poisson martingale measure Ñ_k^n(dt,dz), which has the same compensator π(dz)dt, defined on some probability space (Ω^n, F^n, P^n) such that P^n-a.s. ∀t ≥ 0,
x_t^n = x_0 + ∫_0^t b^n(s,x_s^n) ds + ∫_0^t σ^n(s,x_s^n) dw_s^n + ∫_0^t ∫_Z c(s,x_{s-}^n,z) Ñ_k^n(ds,dz).
Construct a space Ω̄^n = D × W^0 × D, where D and W^0 are the totality of all RCLL real functions and of all real continuous functions f(t) with f(0) = 0, defined on [0,∞), respectively. Map (x^n(·,w), w^n(·,w), ζ^n(·,w)) into Ω̄^n, where
ζ_t^n = ∫_0^t ∫_{|z|≤1} z Ñ_k^n(ds,dz) + ∫_0^t ∫_{|z|>1} z N_k^n(ds,dz),
and show that lim_{N→∞} sup_n P(sup_{t≤T} |ζ_t^n| > N) = 0. In fact,
P(sup_{t≤T} |ζ_t^n| > N) = P(sup_{k=1,2,...} |ζ_{r_k}^n| > N)
≤ P(sup_{k=1,2,...} |ζ_{r_k}^n - ζ̃_{r_k}^n| > N/2) + P(sup_{k=1,2,...} |ζ̃_{r_k}^n| > N/2) = I_1 + I_2,
where {r_k}_{k=1}^∞ is the set of all rational numbers in [0,T]. However, for an arbitrarily given ε̄ > 0 and for each r_k we may take an n_k large enough such
z. -
that ~ ( 1 %- 2I;: > I1 ) < &, k = 1,2. ... . Hence II 5 EEL& = On the other hand, by (5.13) there exists a 5 such that as N 2 N, I? < i. Therefore, limN,, P(sup, N) = 0 holds true. Now let us prove the second limit in (5.10). Notice that by Remark 397 in the Appendix and from (5.13) and the result just proved we may assume that IETl 5 ko,Vt E [O,T],Vn= 0,1,2,... . Now for any given E > 0 ~ ( I J ,on(+ " 5:)diZ; - J,' O(S,z:)diZ:I > E)
I ($)2
EG I a n ( ~ , z ?-) o(s, %)I2 I
+P@ I = I;
~
+IT.
~2 ) d +~
s;~ I
~ z : ~ ~ ~ ~ I ~ ~ < ~ ~ ~ s ~~
~~2)dw:l (~ > ~ ~5 ) , ~
~
~
~
>
Notice that for any E > 0 as n ICg, F(lon(s,z:) - u ( s , 2 ) l 2 ~ ~ Z : ~ < k o ~ ~ Z> ~~ 7)
~
> &) ~
I8)~5~F(Iz? ~ < ~-~ > 7) 0. So, by Lebesgue's dominated convergence theorem as n -+ oo, I? -+ 0. Now notice that a(t, x) is jointly continuous, so if we write um(t,x) as its smooth functions, then 2 limm,, Iom(t,x) - u(t, x)] = 0, Vt, x; and Iurn(t'x) - um(s,Y)I 5 km[It - sI IX - YIIi where k, 0 is a constant depending only on rn. Observe that
I2 < 2 ( z ) E~ '~(,J+'J = IZ
+
I ~ ( s2, ) - om(s, %)I2 I ~ E ! ~ < ~ ~ ~ s
~ ~ Z ~ ~ < kz:)d~? ~ f f m( sJ;~
+ Igln.
2)d~:l > 5)
So for any given Z > 0 by Lebesgue's dominated convergence theorem we can choose a large enough m such that I,nl < Z/2. Then applying Lemma 401 in Appendix C we can have limn,, = 0. Thus we obtain that limn,, I2 = 0, and the second limit in (5.10) is established. The proof for the remaining results are similar.
(
~
5.4 Existence of Strong Solutions and Applications to ODE
153
Applying the above results and using the Yamada-Watanabe type theorem (Theorem 137), we immediately obtain the following theorems on the existence of a pathwise unique strong solution to SDE (5.15).
Theorem 177 Under the assumptions of Theorem 176 if, in addition, the following condition for the pathwise uniqueness holds:
(PWU1) for each N = 1,2,..., and each T < ∞,
2⟨x_1 - x_2, b(t,x_1) - b(t,x_2)⟩ + |σ(t,x_1) - σ(t,x_2)|² + ∫_Z |c(t,x_1,z) - c(t,x_2,z)|² π(dz) ≤ k_N^T(t) ρ_N^T(|x_1 - x_2|²),
as |x_i| ≤ N, i = 1,2, t ∈ [0,T]; where k_N^T(t) ≥ 0 is such that ∫_0^T k_N^T(t) dt < ∞; and ρ_N^T(u) ≥ 0, as u > 0, is strictly increasing, continuous and concave such that ∫_{0+} du/ρ_N^T(u) = ∞;
then (5.11) has a pathwise unique strong solution.
≥ 0, for each T < ∞ and N = 1,2,..., as n → ∞, J_{d+1}(u) → 0 [71], [28].
Furthermore, one easily sees that for each n = 1,2,..., as x, x' ∈ R^d,
|σ^n(t,x) - σ^n(t,x')| ≤ k_n k_0 |x - x'|.
Thus σ^n, n = 1,2,..., satisfy 1)-4). In the same way one can construct b^n(t,x), n = 1,2,..., such that they satisfy 1), 3) and 4). However, for the smoothing of c, to meet our purpose we need more discussion. First we take a sequence ε_n ↓ 0. Set c^{ε_n} = c I_{(ε_n < |z| ≤ ε_n^{-1})}, and
c^{mn}(t,x,z) = I_{(ε_n < |z| ≤ ε_n^{-1})} c̄^m(t,x,z),
where
c̄^m(t,x,z) = ∫_{R^1 × R^d × Z} c^{ε_n}(t - m^{-1} t̄, x - m^{-1} x̄, z - m^{-1} z̄) J(t̄, x̄, z̄) dt̄ dx̄ dz̄,
and where we define c(t,x,z) = 0, as t < 0. That is, c̄^m(t,x,z) is the smoothing of c(t,x,z) on Λ_n = [0,∞) × R^d × {ε_n ≤ |z| ≤ ε_n^{-1}}. Then
∫_Z |c^{mn}|² π(dz) ≤ 2k_0, as m is large enough. Now for each ε_n, by assumption,
k_0 ≥ ∫_{(ε_n < |z| ≤ ε_n^{-1})} |c|² π(dz), and ∫_Z |c - c^{mn}|² π(dz) can be made small.
With these approximations, the first expression in (5.10) is bounded by terms of the form
||I_{(|x̃_s^n| ≤ k_0)}||-type estimates over [0,T] × S_N
+ (2/ε̄) E ∫_0^T |(b^{n_0} - b)(s, x̃_s^0)| I_{(|x̃_s^0| ≤ k_0)} ds
5.5 Weak Solutions. Measurable Coefficient Case
157
= I_1^{n,n_0} + I_2^{n,n_0} + I_3^{n_0}.
Obviously, by 4) in Lemma 181 and by the Krylov type estimate (Theorem 165) there exists an Ñ such that as n ≥ Ñ, n_0 ≥ Ñ,
I_1^{n,n_0} + I_3^{n_0} ≤ 2k̄ ||b^n - b^{n_0}||_{d+1,[0,T]×S_N} < 2ε̄/4.
Now for each n_0 ≥ Ñ, by (5.9) and by 3) in Lemma 181, as n → ∞, ∀t ∈ [0,T], I_2^{n,n_0} → 0. Thus the first limit in (5.10) is proved. Now notice that for each n_0 = 1,2,...,
P̃(|∫_0^t ∫_Z c^n(s,x̃_{s-}^n,z) q̃^n(ds,dz) - ∫_0^t ∫_Z c(s,x̃_{s-}^0,z) q̃^0(ds,dz)| > ε̄)
≤ (4/ε̄)² E ∫_0^t ∫_Z |(c^n - c^{n_0})(s,x̃_s^n,z)|² I_{(|x̃_s^n| ≤ k_0)} π(dz) ds
+ P̃(|∫_0^t ∫_Z c^{n_0}(s,x̃_{s-}^n,z) q̃^n(ds,dz) - ∫_0^t ∫_Z c^{n_0}(s,x̃_{s-}^0,z) q̃^0(ds,dz)| > ε̄/2)
+ (4/ε̄)² E ∫_0^t ∫_Z |(c^{n_0} - c)(s,x̃_s^0,z)|² I_{(|x̃_s^0| ≤ k_0)} π(dz) ds
= Ī_1^{n,n_0} + Ī_2^{n,n_0} + Ī_3^{n_0}.
For an arbitrarily given ε̄ > 0, as above (by using the Krylov estimate) one can show that there exists a large enough Ñ such that for any fixed n_0 ≥ Ñ, as n ≥ Ñ,
Ī_1^{n,n_0} + Ī_3^{n_0} < ε̄/2.
On the other hand, by using Lemma 400 in the Appendix one also finds that as n → ∞, ∀n_0,
Ī_2^{n,n_0} → 0. (5.18)
Notice that ∀s ≥ 0, x̃_s^n → x̃_s^0, in probability, as n → ∞; hence by Lebesgue's dominated convergence theorem the remaining terms also tend to 0. Now the proof can be completed in the same way as that of the third limit in (5.10) by using Lemma 401. So the second limit in (5.10) is established. Thus we have proved that {x̃_t^0}_{t≥0} satisfies the following SDE on the probability space (Ω̃, F̃, P̃) for any T < ∞, as t ∈ [0,T]:
x̃_t^0 = x_0 + ∫_0^t b(s,x̃_s^0) ds + ∫_0^t σ(s,x̃_s^0) dw̃_s^0 + ∫_0^t ∫_Z c(s,x̃_{s-}^0,z) q̃^0(ds,dz),
where w̃_t^0 and q̃^0(dt,dz) are a BM and a Poisson martingale measure with the compensator π(dz)dt, respectively. ∎
By using Theorem 180 and the Girsanov type theorem we can obtain the existence of a weak solution to an SDE with jumps under much weaker conditions.
Theorem 182 Assume that b, σ and c are Borel measurable functions such that
1° Z = R^d - {0}, and π(dz) = dz/|z|^{d+1};
2° |σ(t,x)|² + ∫_Z |c(t,x,z)|² π(dz) ≤ k_0, where k_0 > 0 is a constant;
3° there exists a δ_0 > 0 such that for all λ ∈ R^d, ⟨σ(t,x)λ, λ⟩ ≥ |λ|² δ_0;
4° ⟨x, b(t,x)⟩ ≤ c_1(t)(1 + |x|² Π_{k=1}^{n_0} g_k(x)), where c_1(t) and g_k(x) have the same properties as in Theorem 176; furthermore, b(t,x) is locally bounded in x, that is, for each r > 0, as |x| ≤ r, |b(t,x)| ≤ k_r, where k_r > 0 is a constant only depending on r.
Then there exists a weak solution for (5.15).
Proof. The proof involves a combination of the results of Theorem 180 and Theorem 133. In fact, by Theorem 180 there exists a weak solution for the following SDE with jumps: ∀t ≥ 0,
x_t = x_0 + ∫_0^t σ(s,x_s) dw_s + ∫_0^t ∫_Z c(s,x_{s-},z) Ñ_k(ds,dz).
Notice that by the Skorohod theorem (Theorem 398 in the Appendix) we know that the above SDE holds on a probability space (Ω, F, P), where Ω = [0,1], F = B([0,1]), and P is Lebesgue measure on [0,1]. Such an (Ω, F) is a standard measurable space, so applying Theorem 133 the conclusion follows for all t ≥ 0. ∎
In the above theorem we assumed that σ is bounded. Now we relax the coefficient σ to be less than linear growth (so it can be unbounded). In this case we have to assume that σ and c are jointly continuous.
Theorem 183 Assume that conditions 1°, 3° and 4° in the previous theorem hold, and assume that
5° ∫_Z |c(t,x,z)|² π(dz) ≤ c_1(t);
6° σ(t,x) is jointly continuous in (t,x), and lim_{h,h'→0} ∫_Z |c(t + h', x + h, z) - c(t,x,z)|² π(dz) = 0;
7° there exists a δ_0 > 0 such that |σ(t,x)| ≥ δ_0, and |σ(t,x)|² ≤ c_1(t)(1 + |x|²).
Then for any given constant x_0 ∈ R^d, (5.15) has a weak solution on t ≥ 0.
Example 184 If
b(t,x) = -x |x|^{2n_1} Π_{k=1}^{n_0} g_k(x),
where n_1 is any natural number, and g_k(x) is defined in 5° of Theorem 176, then b(t,x) satisfies all conditions in Theorem 183. However, b(t,x) grows very much faster than linearly in x.
Now let us prove Theorem 183.
Proof. The proof can be completed by applying Theorem 175 and the Girsanov type theorem (Theorem 133). Since it is completely similar to the proof of the previous theorem, we do not repeat it. ∎
Finally, applying the above results and the Yamada-Watanabe type theorem (Theorem 137), we immediately obtain the following theorem on the existence of a pathwise unique strong solution to SDE (5.15).
Theorem 185 Under the assumptions of Theorem 183 if, in addition, the (PWU1) condition in Theorem 177 holds, then (5.15) has a pathwise unique strong solution.
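A drift like the one in Example 184 can be explored numerically. The sketch below (my own illustration, taking n_1 = 1, dropping the g_k factors and using a unit diffusion coefficient, all of which are assumptions) simulates dx_t = -x_t|x_t|² dt + dw_t by the Euler scheme; the strongly inward-pointing drift keeps the paths bounded even though b grows super-linearly:

```python
import numpy as np

# Euler simulation of dx_t = -x_t*|x_t|^2 dt + dw_t (super-linear drift,
# as in Example 184 with n_1 = 1 and the iterated-log factors dropped).
rng = np.random.default_rng(4)
T, n = 10.0, 100_000
dt = T / n
x = 2.0
max_abs = abs(x)
for _ in range(n):
    x += -x * x**2 * dt + rng.normal(0.0, np.sqrt(dt))  # cubic pull to 0
    max_abs = max(max_abs, abs(x))
print(max_abs)  # stays moderate over the whole path; no blow-up
```

Note that the drift points toward the origin (⟨x, b(x)⟩ = -|x|⁴ ≤ 0), which is exactly the one-sided growth condition 4° at work: growth of |b| alone does not cause explosion when the drift is inward.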
Part II
Applications
How to Use the Stochastic Calculus to Solve SDE
To help the reader who wants to learn quickly how to use the stochastic calculus to solve stochastic differential equations (SDEs) we offer a short introductory chapter. Some instructive examples are also presented. Actually, some of the material only represents special cases of the general results in the first part of the book. However, these simpler cases can help explain the ideas more directly and clearly, and so may help the reader master the main ideas. A reader who is already familiar with Ito's formula, Girsanov's theorem and their applications may skip this chapter.
6.1 The Foundation of Applications: Ito's Formula and Girsanov's Theorem
In solving ordinary differential equations (ODEs) the following technique is frequently used: if we can guess the form of a solution, we use differentiation to check whether the guess is a true solution, or make some changes to make it true. Such an idea can also be applied to the solution of an SDE. However, the rule for differentiation in the stochastic calculus is different, and for this we need the Ito differential rule, that is, the Ito formula. Another frequently used technique is that if a transformation can be made to simplify the ODE, then this is always done first to make finding the solution easier. Such a technique is also applied to SDEs. However, using a transformation in an SDE is much harder than in the case of an ODE. The
problem is that, after the transformation, is the new differential equation still an SDE that we can understand; that is, does the new stochastic integral still make sense? To answer this question we need the Girsanov transformation, or rather, a Girsanov type theorem. As for the existence and uniqueness of solutions of an SDE, we need a related theorem, and that is the theory in the first part of the book. However, in this case the discussion of the theory also needs the help of Ito's formula. Let us look at the following examples.
Example 186 Find the solution of the following 1-dimensional SDE
dx_t = ax_t dt + bx_t dw_t, x_0 = c, t ≥ 0,
where a, b, c are constants, and w_t, t ≥ 0, is a 1-dimensional BM.
If b = 0, then by the usual differential rule one easily checks that x_t = ce^{at} satisfies the ODE dx_t = ax_t dt, x_0 = c, t ≥ 0. However, setting x_t = ce^{at+bw_t}, by Ito's differential rule (Ito's formula) this does not satisfy the above SDE, because it only satisfies the following SDE:
dx_t = c de^{at+bw_t} = c df(at + bw_t)
= c f'(at + bw_t) d(at + bw_t) + (1/2) c f''(at + bw_t) d⟨bw⟩_t
= ce^{at+bw_t}(a dt + b dw_t) + (1/2) b² ce^{at+bw_t} dt
= x_t(a dt + b dw_t) + (1/2) b² x_t dt = ax_t dt + bx_t dw_t + (1/2) b² x_t dt.
That is, x_t = ce^{at+bw_t} only satisfies the SDE
dx_t = ax_t dt + bx_t dw_t + (1/2) b² x_t dt, x_0 = c, t ≥ 0.
Here we have applied the following Ito formula, which is a special case of Theorem 93 in the first part of the book, with y_t = at + bw_t and f(y) = e^y. (Recall that by the notation before Lemma 62, ⟨M⟩_t = ⟨M,M⟩_t is the characteristic process of the locally square integrable martingale M_t, such that ⟨M⟩_t comes from the Doob-Meyer decomposition: M_t² = a local martingale + ⟨M⟩_t. Now for a BM we have that ⟨bw⟩_t = ⟨bw, bw⟩_t = b²⟨w,w⟩_t = b²⟨w⟩_t = b²t.)
Theorem 187 (Ito's formula). Suppose that y_t = y_0 + A_t + M_t, where y_0 ∈ F_0, {A_t}_{t≥0} is a continuous finite variation (F_t-adapted) process with A_0 = 0, and {M_t}_{t≥0} ∈ M^{2,loc,c}. If f(x) ∈ C²(R), then
f(y_t) = f(y_0) + ∫_0^t f'(y_s) dA_s + ∫_0^t f'(y_s) dM_s + (1/2) ∫_0^t f''(y_s) d⟨M⟩_s. (6.1)
Or, we can write it in the differential form:
df(y_t) = f'(y_t) dA_t + f'(y_t) dM_t + (1/2) f''(y_t) d⟨M⟩_t.
Recall that the differential form is only a form; its exact meaning is that the integral equality holds.
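As an aside, the role of the bracket ⟨M⟩_t can be checked by Monte Carlo (my own numerical sketch, not from the book): for M_t = ∫_0^t s dw_s one has ⟨M⟩_T = ∫_0^T s² ds = T³/3, and since M_t² - ⟨M⟩_t is a martingale starting at 0, E M_T² = T³/3:

```python
import numpy as np

# Monte Carlo check that E[M_T^2] = <M>_T = T^3/3 for M_t = int_0^t s dw_s.
rng = np.random.default_rng(3)
T, n, m = 1.0, 1_000, 200_000
dt = T / n
s = np.linspace(0.0, T, n + 1)[:-1]          # left endpoints of the grid
dw = rng.normal(0.0, np.sqrt(dt), (m, n))    # Brownian increments, m paths
M_T = (s * dw).sum(axis=1)                   # discretized stochastic integral
print(np.mean(M_T**2), T**3 / 3)             # both ≈ 1/3
```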
Remark 188 To use the Ito formula one should know how to calculate the characteristic process ⟨M⟩_t of the given locally square integrable martingale M_t. For the convenience of the reader we recall the following fact here: if M_t = ∫_0^t σ(s,w) dw_s, t ∈ [0,T], where w_t, t ∈ [0,T], is a BM and E ∫_0^T |σ(s,w)|² ds < ∞, then ⟨M⟩_t = ∫_0^t |σ(s,w)|² ds, ∀t ∈ [0,T].
Now we can solve Example 186 by making a small change to the guessed form as follows:
Solution 189 Set x_t = ce^{at + bw_t - (1/2)b²t}. Write y_t = at + bw_t - (1/2)b²t, and f(y) = e^y. Then by (6.1)
dx_t = c df(y_t) = c f'(y_t) dy_t + (1/2) c f''(y_t) d⟨bw⟩_t
= ce^{y_t}(a dt + b dw_t - (1/2)b² dt) + (1/2) b² ce^{y_t} dt
= ce^{y_t}(a dt + b dw_t) = ax_t dt + bx_t dw_t.
So x_t = ce^{at + bw_t - (1/2)b²t} is a solution of Example 186. Moreover, it is the unique solution, because the coefficients of the SDE are Lipschitz continuous and less than linear growth (Theorem 117).
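This closed-form solution can also be verified numerically (an illustrative sketch with assumed parameter values, not part of the book): on the same simulated Brownian path, an Euler-Maruyama approximation of dx_t = ax_t dt + bx_t dw_t should approach c·exp((a - b²/2)t + b w_t):

```python
import numpy as np

# Compare the Euler-Maruyama scheme for dx = a*x dt + b*x dw against the
# closed-form solution x_t = c*exp((a - b^2/2)t + b*w_t) on one path.
rng = np.random.default_rng(0)
a, b, c, T, n = 0.5, 0.3, 1.0, 1.0, 100_000
dt = T / n
dw = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments
w = np.cumsum(dw)

x_euler = c
for k in range(n):                     # Euler-Maruyama step
    x_euler += a * x_euler * dt + b * x_euler * dw[k]

x_exact = c * np.exp((a - 0.5 * b**2) * T + b * w[-1])
print(abs(x_euler - x_exact) / x_exact)  # small, and shrinks as n grows
```

The candidate x_t = ce^{at+bw_t} without the -b²t/2 correction would fail this comparison by a factor of roughly e^{b²T/2}, which is the numerical counterpart of the Ito correction term computed above.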
Example 190 Find the solution of the following 1-dimensional SDE
dx_t = ax_t dt + f(t, x_t) dt + b dw_t, x_0 = c, t ∈ [0,T],
(6.2)
where b > 0, a, c are constants, and w_t, t ≥ 0, is a 1-dimensional BM, in the case that
1) f(t,x) is bounded and jointly measurable;
2) f(t,x) is bounded and satisfies the Lipschitz condition, that is, there exists a constant k_0 > 0 such that
|f(t,x)| ≤ k_0, ∀t ∈ [0,T], ∀x ∈ R¹,
|f(t,x) - f(t,y)| ≤ k_0 |x - y|, ∀t ∈ [0,T], ∀x,y ∈ R¹.
In this example, in case 1), without the help of Girsanov's theorem we cannot even know that (6.2) has a weak solution. Here, a "weak solution" means that the solution x_t, t ∈ [0,T], together with a BM w_t, t ∈ [0,T], on some probability space (Ω, F, {F_t}_{t∈[0,T]}, P) satisfies (6.2), but F_t ≠ F_t^w. So x_t is F_t-adapted, but not necessarily F_t^w-adapted. (See Definition 127.) Intuitively we see that (6.2) equals
dx_t = ax_t dt + f(t,x_t) dt + b dw_t = ax_t dt + b(b^{-1} f(t,x_t) dt + dw_t) = ax_t dt + b dw̃_t,
where we write dw̃_t = dw_t + b^{-1} f(t,x_t) dt. However, the existence of a solution x_t of (6.2) is not yet known. So we should solve (6.2) in a different way. First we solve the simpler SDE dx_t = ax_t dt + b dw_t for a given BM w_t to find a solution x_t. Secondly, we let dw̃_t = dw_t - b^{-1} f(t,x_t) dt. If there is a theorem (we call it a transformation theorem) to guarantee that such a w̃_t is still a BM, but under some new probability measure P̃_T, then we can arrive at P̃_T-a.s.
dx_t = ax_t dt + f(t,x_t) dt + b dw̃_t, x_0 = c, t ∈ [0,T].
That is, we have that x_t, t ∈ [0,T], together with the BM w̃_t, t ∈ [0,T], on the probability space (Ω, F, {F_t}_{t∈[0,T]}, P̃_T), satisfies (6.2). So (6.2) has a weak solution. Fortunately, such a useful transformation theorem exists: it is the so-called Girsanov type theorem, which can be stated as follows and is a special case of Theorem 124 in the first part of this book.
Theorem 191. If, on a given probability space (Ω, ℱ, {ℱ_t}_{t∈[0,T]}, P), a 1-dimensional process θ_t is ℱ_t-adapted and such that |θ_t|² ≤ c(t), where c(t) ≥ 0 is non-random such that ∫₀^T c(t) dt < ∞, then, defining
dP̃_T = exp[∫₀^T θ_s dw_s − (1/2)∫₀^T |θ_s|² ds] dP,
where w_t, t ∈ [0,T], is a BM on this probability space, P̃_T is a new probability measure, and
w̃_t = w_t − ∫₀^t θ_s ds, t ∈ [0,T],
is a new BM under the probability P̃_T.
Now let us use this theorem to solve SDE (6.2) by this approach.

Solution 192. For a given probability space (Ω, ℱ, {ℱ_t}_{t∈[0,T]}, P) and a given BM w_t, t ∈ [0,T], defined on it, we can solve the simpler SDE dx_t = a x_t dt + b dw_t, x_0 = c, t ∈ [0,T], to get a unique ℱ_t^w-adapted solution: P-a.s.,
x_t = e^{at}c + b∫₀^t e^{a(t−s)} dw_s, ∀t ∈ [0,T].
(In fact, by Ito's formula one easily checks that it satisfies the simpler SDE, and by Theorem 117 it is the unique ℱ_t^w-adapted solution.) Now, applying the above Girsanov-type theorem (Theorem 191), w̃_t = w_t − ∫₀^t b⁻¹f(s,x_s) ds, t ∈ [0,T], is a new BM under the new probability measure P̃_T, where
dP̃_T = exp[∫₀^T b⁻¹f(s,x_s) dw_s − (1/2)∫₀^T |b⁻¹f(s,x_s)|² ds] dP.
So we have that, P̃_T-a.s.,
dx_t = a x_t dt + f(t,x_t) dt + b dw̃_t, x_0 = c, t ∈ [0,T],  (6.3)
where x_t = e^{at}c + b∫₀^t e^{a(t−s)} dw_s. So x_t ∈ ℱ_t^w ⊂ ℱ_t, but x_t ∉ ℱ_t^{w̃}, and (x_t, w̃_t)_{t∈[0,T]} is only a weak solution of (6.2) in case 1).
Next we discuss case 2). In this case the pathwise uniqueness of solutions of (6.3) holds. Hence one can apply the Yamada-Watanabe theorem (Theorem 137) to get the result that (x_t, w̃_t)_{t∈[0,T]} is actually a strong solution of SDE (6.3); that is, x_t ∈ ℱ_t^{w̃} (x_t is ℱ_t^{w̃}-adapted). So there
exists a Baire function F such that x_t = F(w̃_s, s ≤ t). Therefore, letting x̃_t = F(w_s, s ≤ t), the pair (x̃_t, w_t)_{t∈[0,T]} satisfies (6.2) on the original probability space (Ω, ℱ, {ℱ_t}_{t∈[0,T]}, P) with the original BM w_t, t ∈ [0,T]; and (x̃_t)_{t∈[0,T]} is the pathwise unique strong solution, that is, it is unique and x̃_t ∈ ℱ_t^w.
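The explicit solution x_t = e^{at}c + b∫₀^t e^{a(t−s)} dw_s of the simpler SDE can also be checked numerically. The sketch below (illustrative parameters, not from the book) verifies that a Monte Carlo discretization of the formula reproduces the moments E x_t = c e^{at} and Var x_t = b²(e^{2at} − 1)/(2a) implied by dx_t = a x_t dt + b dw_t:

```python
import numpy as np

# Sketch (illustrative, not from the book): Monte Carlo check of the
# moments of x_T = e^{aT} c + b * int_0^T e^{a(T-s)} dw_s.
rng = np.random.default_rng(1)
a, b, c, T = -0.7, 0.4, 2.0, 1.0
n_steps, n_paths = 2000, 40_000
dt = T / n_steps

# Left-endpoint Ito sum for the convolution stochastic integral
stoch_int = np.zeros(n_paths)
for k in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    stoch_int += np.exp(a * (T - k * dt)) * dw
x_T = np.exp(a * T) * c + b * stoch_int

mean_mc, var_mc = x_T.mean(), x_T.var()
mean_th = c * np.exp(a * T)
var_th = b**2 * (np.exp(2 * a * T) - 1.0) / (2.0 * a)
print(mean_mc, mean_th)   # close
print(var_mc, var_th)     # close
```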
6.2 More Useful Examples
In a later chapter we will meet a stock price SDE as follows:
dP_t¹ = P_t¹[r_t dt + σ_t dw̃_t], P_0¹ = P_0¹, ∀t ∈ [0,T], P̃-a.s.,
where for simplicity we assume that all of the processes that occur here are real-valued, and w̃_t, t ∈ [0,T], is a BM under the probability measure P̃.

Example 193. Under the assumption that r_t, σ_t are non-random and ∫₀^T [|r_t| + |σ_t|²] dt < ∞, the solution of the stock price SDE is: P̃-a.s., ∀t ∈ [0,T],
P_t¹ = P_0¹ exp[∫₀^t r_s ds + ∫₀^t σ_s dw̃_s − (1/2)∫₀^t |σ_s|² ds].

Proof. Write y_t = ∫₀^t r_s ds + ∫₀^t σ_s dw̃_s − (1/2)∫₀^t |σ_s|² ds, and f(y) = e^y. Applying Ito's formula (Theorem 187) to P_t¹ = P_0¹e^{y_t} we find that, P̃-a.s.,
dP_t¹ = P_0¹e^{y_t} dy_t + (1/2)P_0¹e^{y_t}|σ_t|² dt
= P_0¹e^{y_t}[r_t dt + σ_t dw̃_t − (1/2)|σ_t|² dt] + (1/2)P_0¹e^{y_t}|σ_t|² dt
= P_t¹[r_t dt + σ_t dw̃_t].
So P_t¹ = P_0¹e^{y_t} solves the stock price SDE. Moreover, by Theorem 117 it is the unique strong solution of the stock price SDE. ∎

In a later chapter we will also meet a wealth process SDE as follows:
dx_t = r_t x_t dt + π_t σ_t dw̃_t, x_0 = x_0, P̃-a.s.,
where for simplicity we assume that all of the processes that occur here are real-valued, and w̃_t, t ∈ [0,T], is a BM under the probability measure P̃.
Example 194. Under the assumption that r_t is non-random, |σ_t| ≤ k₀, ∫₀^T |r_t| dt < ∞, and E∫₀^T |π_tσ_t|² dt < ∞, the solution of the wealth process SDE is: P̃-a.s., ∀t ∈ [0,T],
x_t = exp[∫₀^t r_s ds]x_0 + ∫₀^t exp[∫_s^t r_u du] π_sσ_s dw̃_s.

Proof. Applying Ito's formula (Theorem 187) to the above x_t we find that, P̃-a.s.,
dx_t = x_0 d(exp[∫₀^t r_s ds]) + d(∫₀^t exp[∫_s^t r_u du] π_sσ_s dw̃_s)
= r_t x_0 exp[∫₀^t r_s ds] dt + π_tσ_t dw̃_t + r_t dt ∫₀^t exp[∫_s^t r_u du] π_sσ_s dw̃_s
= r_t [exp[∫₀^t r_s ds]x_0 + ∫₀^t exp[∫_s^t r_u du] π_sσ_s dw̃_s] dt + π_tσ_t dw̃_t
= r_t x_t dt + π_tσ_t dw̃_t,
where we have used the following result:
d(∫₀^t exp[∫_s^t r_u du] π_sσ_s dw̃_s) = π_tσ_t dw̃_t + r_t dt ∫₀^t exp[∫_s^t r_u du] π_sσ_s dw̃_s.
Indeed, let y_t = ∫₀^t exp[−∫₀^s r_u du] π_sσ_s dw̃_s. Then by Ito's formula
d(∫₀^t exp[∫_s^t r_u du] π_sσ_s dw̃_s) = d(e^{∫₀^t r_u du} y_t) = r_t e^{∫₀^t r_u du} y_t dt + e^{∫₀^t r_u du} dy_t
= r_t dt ∫₀^t exp[∫_s^t r_u du] π_sσ_s dw̃_s + π_tσ_t dw̃_t.
So the result is true. Thus
x_t = exp[∫₀^t r_s ds]x_0 + ∫₀^t exp[∫_s^t r_u du] π_sσ_s dw̃_s
solves the wealth process SDE. Moreover, by Theorem 117 it is the unique strong solution of the wealth process SDE. ∎
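As a numerical illustration of Example 194 (a sketch with made-up deterministic r_t and π_tσ_t, not from the book), one can verify pathwise that the explicit formula and an Euler scheme for dx_t = r_t x_t dt + π_tσ_t dw̃_t agree along a common Brownian path:

```python
import numpy as np

# Sketch: verify that x_T from the explicit formula
#   x_t = exp(int_0^t r ds) x_0 + int_0^t exp(int_s^t r du) pi_s sigma_s dw_s
# matches an Euler scheme for dx = r x dt + pi*sigma dw on one common path.
# r and pi*sigma below are illustrative deterministic functions (assumptions).
rng = np.random.default_rng(2)
T, n, x0 = 1.0, 50_000, 1.0
dt = T / n
t = np.linspace(0.0, T, n + 1)
r = 0.03 + 0.01 * np.sin(t)        # rate function (assumed)
ps = 0.2 * np.cos(t)               # pi_t * sigma_t (assumed)
dw = rng.normal(0.0, np.sqrt(dt), n)

# Explicit formula: exp(int_s^t r du) = e^{R_t} e^{-R_s}
R = np.concatenate(([0.0], np.cumsum(r[:-1] * dt)))     # R_t = int_0^t r ds
x_formula = np.exp(R[-1]) * (x0 + np.sum(np.exp(-R[:-1]) * ps[:-1] * dw))

# Euler scheme on the same Brownian increments
x = x0
for k in range(n):
    x += r[k] * x * dt + ps[k] * dw[k]

print(abs(x - x_formula))  # small discretization error
```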
7. Linear and Non-linear Filtering
In many cases the signal process which we want to examine cannot be observed directly, and we can only examine a different, observable process which is related to it. This poses the question of how to estimate the true signal process by using the information obtained from the observable process. Such an estimate of the present signal, obtained by using the information from an observable process up to the present time, is called a filter. In this chapter we will discuss the filtering equation for both the non-linear case and the linear case. When the linear case is without jumps, we will derive and solve the continuous linear filtering equation (the famous Kalman-Bucy equation); for the non-linear case we will derive the Zakai equation and discuss its solutions.
7.1 Solutions of SDE with Functional Coefficients and Girsanov Theorems
In discussing filtering problems we need to consider an SDE with functional coefficients. Let us begin by introducing some notation. Let
D = D([0,∞); R^d) = the totality of RCLL maps from [0,∞) to R^d, with the Skorohod metric (see Lemma 388 in the Appendix),
D_T = D([0,T]; R^d),
C = C([0,∞); R^d) = the totality of continuous maps from [0,∞) to R^d,
C_T = C([0,T]; R^d),
𝔅(D) = the topological σ-field on D,
(that is, the σ-field generated by the totality of open sets), and 𝔅_t(D) = 𝔅(D_t). Consider the following SDE with jumps and functional coefficients in d-dimensional space: ∀t ≥ 0,
τ_n(x) = inf{t ≥ 0: ∫₀^t |(σ⁻¹b)(s,x(·))|² ds > n}. Then by Theorem 198 there exists a pathwise unique strong solution x_t^n of the following SDE: ∀t ∈ [0,T],
x_t^n = x_0 + ∫₀^t 1_{s≤τ_n(x^n)} b(s,x^n(·)) ds + ∫₀^t σ(s,x^n(·)) dw_s,
where k₀ > 0 is a constant depending on T only; 2° σ⁻¹ exists such that ‖σ⁻¹‖ ≤ k₀, and
‖σ(t,x) − σ(t,y)‖² + |c(t,x) − c(t,y)|² ≤ k₀‖x − y‖_t², for all x, y ∈ D, t ≥ 0.
Now assume that x_t satisfies (7.6). If I_t is a RCLL R^{d₁}-valued square integrable ℱ_t^x-martingale, where ℱ_t^x = σ(x_s, s ≤ t), then there exist f(t,ω): [0,∞)×Ω → R^{d₁⊗d} and g(t,ω): [0,∞)×Ω → R^{d₁}, which are ℱ_t^x-adapted and ℱ_t^x-predictable, respectively, representing I_t for any t ≥ 0.
Since by Theorem 197 the solution of (7.8) is a (pathwise unique) strong solution, ℱ_t^x ⊂ ℱ_t^{w,N} for all t ≥ 0. On the other hand, by (7.8),
N_t = Σ_{0<s≤t} c(s−)⁻¹ Δx_s
is ℱ_t^x-adapted, and so is Ñ_t = N_t − Λ_t. However, Ñ_t is obviously ℱ_t^{w,N}-adapted. Hence ℱ_t^x = ℱ_t^{w,N}.
Let us show that ξ_t·(z̃_t^x)⁻¹ is an ℱ_t^{w,N}-adapted locally square integrable martingale under the probability P̃. Indeed, for all A ∈ ℱ_s^{w,N}, s ≤ t,
∫_A ξ_t·(z̃_t^x)⁻¹ dP̃ = ∫_A ξ_t dP = ∫_A ξ_s dP = ∫_A ξ_s·(z̃_s^x)⁻¹ dP̃.
This shows that {ξ_t·(z̃_t^x)⁻¹, ℱ_t^{w,N}}_{t≥0} is a P̃-martingale. On the other hand,
sup_{t≤T} E_{P̃}|ξ_t·(z̃_t^x)⁻¹| = sup_{t≤T} E|ξ_t| ≤ E|ξ_T| < ∞,
since ξ_t is an (ℱ_t^{w,N}, P)-martingale, and we have applied the Jensen inequality and the fact that E(z̃_t^x | ℱ_t^{w,N}) = 1 for all 0 ≤ t ≤ T. Moreover, let
σ_N = inf{t ∈ [0,T]: |(z̃_t^x)⁻¹||ξ_t| > N}.
Then σ_N ↑ ∞ as N ↑ ∞, since (z̃_t^x)⁻¹ is a continuous process and ξ_t is a RCLL (right continuous with left limit) process. Now, since ξ_t is a square integrable (ℱ_t^{w,N}, P)-martingale, we also have
sup_{t≤T} E_{P̃}|ξ_{t∧σ_N}·(z̃_{t∧σ_N}^x)⁻¹|² = sup_{t≤T} E[|ξ_{t∧σ_N}|²·(z̃_{t∧σ_N}^x)⁻¹] ≤ N sup_{t≤T} E|ξ_{t∧σ_N}| < ∞.
This shows that ξ_t·(z̃_t^x)⁻¹ is locally square integrable.
P(∫₀^T |A_s(ω)| ds < ∞) = 1, P(∫₀^T |B_s(ω)|² ds < ∞) = 1, P(∫₀^T |C_s(t)|² ds < ∞) = 1;
then 1) is also true, by taking the limit through a sequence of stopping times σ_n ↑ ∞. 2): Without losing any generality we may assume that the stronger assumption (7.15) holds.

Lemma. Suppose x ∈ M² and {w_t, ℱ_t} is a BM. Then there exists an ℱ_t-adapted process {a_t(ω)}_{t≥0} with ∫₀^t |a_s(ω)| ds < ∞ such that, P-a.s.,
⟨x, w⟩_t = ∫₀^t a_s(ω) ds, ∀t ≥ 0.
Naturally, we write d⟨x,w⟩_t/dt = a_t(ω).
Proof. Fix an arbitrary T < ∞. For each S×A ∈ 𝔅([0,T])×ℱ_T let Q(S×A) = E[1_A ∫_S d⟨x,w⟩_s]. Then Q is absolutely continuous with respect to dt×dP, and the Radon-Nikodym derivative gives the desired a_t(ω).
∫₀^T [|a(t)| + |A(t)| + |b(t)|² + |b₁(t)|² + |B(t)|² + |c(t)|² + |c₁(t)|² + |C(t)|²] dt < ∞.
By Theorem 117, (7.28) has a pathwise unique strong solution
θ_t = e^{∫₀^t a(u)du}[θ_0 + ∫₀^t e^{−∫₀^s a(u)du}(b₁(s)dw_s¹ + b(s)dw_s + c₁(s)dÑ_s¹ + c(s)dÑ_s)].
Applying Corollary 212 one immediately finds that
However, (7.30) is not very convenient, because there is another term π_s(θ²) in the equation. Notice that
π_s(θ²) = E[θ_s²|ℱ_s^ξ] = E[(θ_s − π_s(θ) + π_s(θ))²|ℱ_s^ξ]
= E[(θ_s − π_s(θ))²|ℱ_s^ξ] + 2π_s(θ)E[θ_s − π_s(θ)|ℱ_s^ξ] + π_s(θ)²
= E[(θ_s − π_s(θ))²|ℱ_s^ξ] + π_s(θ)².
So if we write γ_s = E[(θ_s − π_s(θ))²|ℱ_s^ξ], then (7.30) can be rewritten as (7.31). Notice that γ_t = π_t(θ²) − π_t(θ)². Applying Ito's formula to (θ_t)² and to π_t(θ)², one obtains (7.32) and (7.33), respectively. Now, applying the non-linear filtering equation (7.26) to (θ_t)², one has (7.34), whose martingale part is
∫₀^t [2b(s)π_s(θ) + B⁻¹(s)(A(s)π_s(θ³) − A(s)π_s(θ²)π_s(θ))] dw̄_s + ∫₀^t (2c(s)π_{s−}(θ) + c(s)²) dÑ̄_s.  (7.34)
Therefore, subtracting (7.33) from (7.34), one obtains (7.35).
7.4 Optimal Linear Filtering
Equations (7.31) and (7.35) are the equations for the (optimal) filter and for the conditional mean square error of the filter, respectively. The interesting thing is that there are no jumps in (7.35). Thus we arrive at the following theorem.

Theorem 214. Suppose that a 1-dimensional signal process {θ_t, ℱ_t}_{t≥0} and a 1-dimensional observable process {ξ_t, ℱ_t}_{t≥0} are given by (7.28) and (7.13). Assume that |B⁻¹(t)| ≤ k₀, |C⁻¹(t)| ≤ k₀, and
∫₀^T [|a(t)| + |A(t)| + |b(t)|² + |b₁(t)|² + |B(t)|² + |c(t)|² + |c₁(t)|² + |C(t)|²] dt < ∞.
Write π_t(θ) = E[θ_t|ℱ_t^ξ], γ_t = E[(θ_t − π_t(θ))²|ℱ_t^ξ]. Then they satisfy the following filtering equation and equation for the conditional mean square error, where {w̄_t, ℱ_t^ξ}_{t≥0} is a BM, and Ñ̄_t is still a centralized Poisson process such that Ñ̄_t = N̄_t − Λ_t (where N̄_t is a Poisson process with the intensity EN̄_t = Λ_t), both under the original probability P, and such that, P-a.s.,
dξ_t = A(t)π_t(θ) dt + B(t) dw̄_t + C(t) dÑ̄_t.

The interesting thing here is that even though the filter π_t(θ) is RCLL (right continuous with left limits), that is, it can be discontinuous, the conditional mean square error γ_t of the filter is continuous, and its SDE is without jump terms.
7.5 Continuous Linear Filtering. Kalman-Bucy Equation
In the case that b(t) = c(t) = c₁(t) = C(t) = 0, the signal process and the observable process (7.28) and (7.29) become
dθ_t = a(t)θ_t dt + b₁(t) dw_t¹, dξ_t = A(t)θ_t dt + B(t) dw_t,  (7.38)
where {w_t¹, ℱ_t}_{t≥0} and {w_t, ℱ_t}_{t≥0} are two independent BMs and (θ_0, ξ_0) is ℱ_0-measurable. Moreover, the filtering equation and the conditional mean square error equation in Theorem 214 become (7.39) and (7.40), where dw̄_t = B⁻¹(t)(dξ_t − A(t)π_t(θ) dt), and {w̄_t, ℱ_t^ξ}_{t≥0} is a BM. Recall that γ_t = E[(θ_t − π_t(θ))²|ℱ_t^ξ]. We have the following theorem.
Equations (7.39) and (7.41) are called the Kalman-Bucyfiltering equations. Equation (7.41) is also a Riccati equation. To establish this theorem we need the following lemma.
Lemma 216 If then
(E, 6 ) are jointly E(W3 =
Guassian (both can be multi-dimensional),
+ DecD,f,(E - BE),
(7.42)
c o v ( 6 , 6 l l ) ~ E [ (-e E(elE))(e- E(6lE))*IEl = Dee - DeCD&D&, (7.43) where Dee = E[(6- E6)(J- E l ) * ] ,and Dee, DEEare similarly defined and, moreover, we write D& = D&', i f DEE> 0; and D& = 0 , otherwise.
7.5 Continuous Linear Filtering. Kalman-Bucy Equation
195
+
Proof. Notice that if Q = (6 - EO) C(J - EJ), then C =-D~~D& ===+ Eq(J - EJ)* = 0. In fact, if DEE > 0, then D& = DG'. Thus the above statements are obvious. If DEE= 0, then J = EJ. In this case any constant C will make EQ(J- EJ)* = 0. For C = - D ~ D &one has that {Q, J) is an independent system, since Q and J are jointly Gaussian and not linearly correlated. So,
0 = E(Q) = E(v1J) = E(0lJ) - Eo - DqD&(J - EJ). Thus (7.42) is obtained. On the other hand, substracting (7.42) from 0, one finds that Q = 0 - E(0IJ). By the independence of {Q,J) E[(o - E(oIJ))(o - E(oIJ))*IEl = EQV*. Using (7.44) one has that as Dee > 0 EGQ*= Dee DeeDG' DEED;F1 D& - DoSD:c1 D& - Dee DG' D& = Dee - DeeD:;' D ; ~ . In the case Dee = 0, so then J = E J , D& = 0, and EQQ* = Doe = Dee - DsED&D&. The proof is complete. w Now we are in a position to prove Theorem 215. Proof. Let us show that 7, = E[(Ot - T ~ ( O ) ) ~For ] . this let us make a subdivision on [O, t] by 0, -e2~Asds+~~eS~2;i,dr[b(~)2+~-2(~)~(~)2y~]d T t = El&l = yo
yt
-?;I
+
+
+
+
+
7.6 Kalman-Bucy Equation in Multi-Dimensional Case
For the continuous linear filtering problem in the multi-dimensional case we have the following corresponding theorem.
Theorem 218. Suppose that a k-dimensional signal process {θ_t, ℱ_t}_{t≥0} and an l-dimensional observable process {ξ_t, ℱ_t}_{t≥0} are given by the following SDE:
dθ_t = a(t)θ_t dt + b₁(t) dw_t¹, dξ_t = A(t)θ_t dt + B(t) dw_t,
where {w_t¹, ℱ_t}_{t≥0} and {w_t, ℱ_t}_{t≥0} are two independent BMs, the first one k-dimensional and the second one l-dimensional; moreover, (θ_t, ξ_t)_{t≥0} is a jointly Gaussian process, (θ_0, ξ_0) is ℱ_0-measurable, and a(t), b₁(t) ∈ R^{k⊗k}, A(t) ∈ R^{l⊗k}, B(t) ∈ R^{l⊗l}. Then π_t(θ) = E[θ_t|ℱ_t^ξ] and γ_t = E[(θ_t − π_t(θ))(θ_t − π_t(θ))*] satisfy the corresponding multi-dimensional Kalman-Bucy filtering equation and matrix Riccati equation,
where dw̄_t = B⁻¹(t)(dξ_t − A(t)π_t(θ) dt), and {w̄_t, ℱ_t^ξ}_{t≥0} is a P-BM. Furthermore, if γ_0 > 0, that is, if it is positive definite, then γ_t > 0, ∀t ≥ 0.
rt
7.7 More General Continuous Linear Filtering In this section we will consider the filtering problem on a more general continuous partially observed system:
where {w:, zt),>o and { ~ : , z ~ ) , > are~ k-dimensional and I-dimensional and BMs, respectiveiy, and they areindependent; moreover, {Ot, {[t,zt}t20 are k-dimensional and I-dimensional random processes, respectively. Naturally, we assume that aO(t)E Rk@l;al(t), bl(t) E RkBk;a2(t),b2(t) E Rk@'; AO(t)E R I @ ~ ; A1 (t), B1(t) E R1@k;A2(t),B2(t) E R1@', 0 1 2 A0 A1 ~ 2 . g-lh(t)ldt Applying Theorem 218 to the partially observed SDE system (7.46) and (7.47) we obtain the following lemma. -
u E[(mt - T&)(TBt - mu)*lzu]= E[J: DL' BsB,*(D,l)*d~lzs] = E[J: D ; ~ D , D , * ( D ; ~ ) * ~ s=~t~-, ]u. Applying Theorem 97 we find that { ~ t , 5 t } is ~ ,a ~BM. By definition Jot Ds&, = Jot Bsdw,. The proof is complete. Finally, we can deduce the filtering equation for the original partially observed SDE system (7.45). Let us write out the final result as follows.
+
k),
+ +
+ +
+
Theorem 222 Suppose that a k+l-dimensional partially observed processes { O t , ~ t } t 2 0and { t t , 5 t ) t 2 0 are given by the SDE (7.45), and suppose that the assumption made i n the beginning of this section holds. Moreover, asko,V t 0 , sume that ( B o B)-' (t) exists and is bounded ( B 0 B)-l(t)l and (&, tt)tlo is a jointly Gaussian process, (00,to)is 30-measurable. ] , = E[(Ot - ~ t ( e ) ) then ~ ] , they are the unique Write nt(0) = ~ [ 0 t 1 3 !yt solutions of the following filtering equation and equation for mean square error.. [aO(s) a' (s)wS( 0 ) a2(s)t,)ds nt ( 0 ) = no(0) Jot [(b0 B ) ( s ) T,A1* ( s ) ]( B 0 ~ ) - l ( s ) .[dts - ( A O ( s ) A1(s)ns(0) A2(s)t,)ds], ;jit = yo Jot[al(s)T, Tsal*(s) b 0 b(s)]ds - ~ o t [ (obB ) ( s ) T s A 1 * ( s ) ] ( oB B)-'(s)[(b o B ) ( s ) T,A1*(s)]*ds. hrthermore, if To > 0, that is, i f it is positive definite, then Tt > 0,Vt 2 0.
I
+
Proof. Notice that π_t(θ) = m_t and that the two mean square errors coincide, γ̄_t = γ_t. Hence, using Lemma 220, part 2) of Lemma 221, and (7.48)-(7.49), we easily derive the final result. ∎
7.8 Zakai Equation In this section we are going to derive the Zakai equation for some concrete partially observed system. Suppose that we are given a signal process xt satisfying the following SDE :
where wl, t 1 0, is a d-dimensional BM, fikt(ds,dz) is a Poisson martingale measure with the compensator rf(dz)dssuch that Nkr (ds,dz) = Nk,(ds,dz) - r f(dz)ds, where r f ( d z )is a a-finite measure on the measurable space (Z,!B(Z)), Nkt(ds,dz) is the counting measure generated by the Poisson point process Ic; and xo is supposed to be independent of and suppose that the observation tt is given as follows:
-
~r"~',
where At is assumed to be a bounded function, B , C satisfy the same conditions stated in Theorem 206, and fit is a centralized Poisson process like that given in the same Theorem. Write ( w f i , w j )= Jot p y d ~1, 5 i ,j 5 d .
Ig), etc.
For the random process At(%(.))write r t ( A ) = E(At(x(.)) Then we have
Theorem 223 . Suppose f E C;([O,OO); R1). Then by Ito's formula f ( x t ) = f ( 5 0 ) L f ( x ,S ) ~ + S ~f (xa)' 4 3 , X ( . ) ) ~ W ; ji jz L ( ~ ) ~s-)fikt(ds, (x, dz), where for each x(.) E D([o,oo), R ~ ) , L f ( x ,s ) = b(s,x(.)) . ( x s )+ i t r . ( a ( ~ , x ( . ) ) * ( ax~sf) l ~ x 2 x(.))) )~(~, J Z ( f ( x s - + c ( s - , x ( . ) , ~ )) f (xs-) - ~f (xs) . C ( S - , X ( . ) , Z ) ) ~ ' ( ~ Z ) , L ( l ) f ( x , s - ) = f(xs- +c(s-,x(.),z)) - f(xs-). Furthermore, if (w:,w ~= ) &ds. ~
+
+ Ji
Ji
vf
+
Ji
7. Linear and Non-linear Filtering
202
then
.(ns(fA*) - n . ( f ) n , ( A * ) ) ~ ; ~ ( e ) * ] f i ~ 11
+
-4
where Dt dt
= d[yd,@ ] t , and
~,= d J:
JZ
/d
n s - ( ~ d ) d G s , (7.53)
L(')f ( x ,s - ) f i k ~(ds,dz).
Theorem 223 is a direct corollary o f Theorem 206. One only needs t o see f ( x t ) as the signal process ht in (7.12), then applying Theorem 206, the conclusion is derived immediately. Corollary 224 Assume that i n (7.51) and (7.52) the signal and observation noise are independent, and the jump noise in the signal and observation have no common jump time, i.e. p = 0, Ayt . A N t = 0; and P [ x t 5 x
-
/g],
sE,
(which is the conditional distribution of xt under given and {xt 5 x ) is the set { x : 5 x l , . . . ,xf 5 x d } ) has a density at,r ) = d P [ x L5 x 1$]/dx, which satisfies suitable differential hypothesis, and x ( x t ) = At(x(.)) only depends on x t , and it is a bounded function. Then one has the following Zakai equation satisfied by the conditional density p^(t,x)
where L* is the adjoint operator of L. Proof. T o show Corollary 224 on_e only needs-to notice that nt (2) = J,, 3 (x)p^(t,x)dx = (A*p^(t,.)) = ( A * ,ph), and t o apply integration by parts. Indeed, by assumption p = 0 , A y t . A N t = 0, hence (7.53) becomes Vf E C z ([o,co);R' ) (where C: ([o,co);R') is the totality o f functions f : [0,co) -t R 1 , with continuous derivatives up t o the second order and with compact support), n t ( f )= n o ( f ) r S ( L f)ds + $ ( n S ( f ~* n) s ( f ) ~ s ( A * ) ) B ~ l ( J ) * f i s . Or, V f E Ci([O,co);R1) ( f , dtp^(t, = dt(f,p^(t,.)) = ( f ,~ * p ^ .))dt ( t , ( f ,gt, .)(A* - n t ( ~ * ) ) ~ , - l * f i t . Hence (7.54) now follows. T h e advantage o f Zakai's equation is that it is a linear partial stochastic differential equation (PSDE) for g t , x ) , and usually a linear PSDE is much easier t o handle. As soon as the solution p^ is obtained, then the non-linear filter r t ( z )= ~ ( x t l g=) JRd xp^(t,x)dx is also obtained.
-
+ Jot
+
7.9 Examples on Linear Filtering
In many cases, or in an ideal case, we consider the original signal system to be non-random; that is, the coefficients of the signal dynamics are non-random. However, the initial value of the signal process may be random. For example, the initial value of the population of fish in a large pond is in fact random. Moreover, since the signal process itself usually cannot be observed directly, we can only estimate it and understand it through an observable process that is related to it. Obviously, the observed results will usually be disturbed by many stochastic perturbations. So the appropriate assumption is that we have a pair of a signal process θ_t and an observable process ξ_t as follows, where for simplicity we consider both in 1-dimensional space:
dθ_t(ω) = a(t)θ_t(ω) dt, θ_0(ω) = θ_0(ω); dξ_t = A(t)θ_t dt + B(t) dw_t, ξ_0 = ξ_0,  (7.55)
where all coefficients a(t), A(t) and B(t) are non-random. By Theorem 215 we have that
where π_t(θ) = E[θ_t|ℱ_t^ξ] is the estimate of θ_t based on the information given by the observation ξ_s, s ≤ t, and γ_t = E[(θ_t − π_t(θ))²] is the mean square error of the estimate, with γ_0 = E[(θ_0 − π_0(θ))²]. Again by Theorem 215 it is already known that (7.56) has a unique solution (π_t(θ), γ_t), ∀t ≥ 0. Here we are interested in how to get explicit formulas for the solution. First, from the practical point of view, let us replace {w̄_t} by {ξ_t} via the formula dξ_t = A(t)π_t(θ) dt + B(t) dw̄_t, because our observation is {ξ_t}. Thus we get the following filtering SDE system:
π_t(θ) = π_0(θ) + ∫₀^t (a(s) − B⁻²(s)A²(s)γ_s)π_s(θ) ds + ∫₀^t B⁻²(s)A(s)γ_s dξ_s,
γ_t = γ_0 + ∫₀^t 2a(s)γ_s ds − ∫₀^t B⁻²(s)A(s)²γ_s² ds.  (7.57)
Obviously, if we can find a formula for γ_t solving the second ordinary differential equation (the so-called Riccati equation) in (7.57), then the estimate, or say the filter π_t(θ), can also be obtained from the following formula:
π_t(θ) = e^{∫₀^t (a(s)−B⁻²(s)A(s)²γ_s) ds}[π_0(θ) + ∫₀^t e^{−∫₀^s (a(u)−B⁻²(u)A(u)²γ_u) du} B⁻²(s)A(s)γ_s dξ_s].  (7.58)
(In fact, one can use the Ito formula to check that π_t(θ) defined above satisfies the first SDE in (7.57).) Fortunately, the solution of the Riccati equation does have an explicit formula if we make the further assumptions that a(s) = a₀, A(s) = A₀, B(s) = B₀ > 0, and γ_0 = E[(θ_0 − π_0(θ))²] > 0 are all constants. In this case one easily checks that the following γ_t satisfies the second (Riccati) ODE in (7.57):
γ_t = γ_0 e^{2a₀t} / (1 + γ_0 (A₀²/B₀²)(e^{2a₀t} − 1)/(2a₀)).  (7.59)
These formulas tell us that if we know "the mean square error of the initial estimate" γ_0, which is larger than zero, then the estimate π_t(θ) = E[θ_t|ℱ_t^ξ] by observation, and γ_t, the mean square error of the estimate, can be calculated by formulas (7.58) and (7.59), respectively. One naturally asks what happens when γ_0 = 0. In this case π_0(θ) = θ_0, and one finds that γ_t = 0, ∀t ≥ 0, is the unique solution of the Riccati equation, that is, of the second equation of (7.57). So π_t(θ) satisfies
π_t(θ) = θ_0 + ∫₀^t a(s)π_s(θ) ds,
that is, the same equation as the signal equation. So one immediately gets the solution formulated by π_t(θ) = θ_t = θ_0 e^{∫₀^t a(s)ds}. This means that the estimate is exactly equal to the signal process. This is quite reasonable: since the initial value can be observed exactly, one can directly use the known signal dynamics to get the explicit signal process. However, one should notice that if the signal process satisfies an SDE (not an ODE!)
dθ_t = a(t)θ_t dt + b₁(t) dw_t¹, θ_0 = θ_0,
and the observable process ξ_t still satisfies the second SDE in (7.55), then by Theorem 215 γ_t will satisfy a more complicated Riccati equation:
γ_t = γ_0 + ∫₀^t [2a(s)γ_s + b₁(s)²] ds − ∫₀^t B⁻²(s)A(s)²γ_s² ds.
In this case, even if γ_0 = 0, we still cannot get γ_t ≡ 0. So if one asks in which cases of continuous linear filtering problems we can get explicit formulas for the filter, by the above discussion one sees that this completely depends on how many explicit formulas we have for solutions of the Riccati equations.
Option Pricing in a Financial Market and BSDE
In this chapter we will discuss option pricing in the financial market and how this problem will draw us to study the backward stochastic differential equations (BSDEs) and how the problem can be solved by a BSDE. Furthermore, we will also use the partial differentail equation (PDE) technique to solve the option pricing problem and to establish the famous Black-Scholes formula.
8.1 Introduction 1. Hedging contingent claim, option pricing and BSDE In a financial market there are two kinds of securities: one kind is without risk. We call it a bond and if, for example, you deposit your money in a bank, you will get a bond that will pay some interest at an agreed rate. It is natural to assume that the bond price equation is:
where r ( t ) is the rate function. Another kind of security in the financial market is with risk. We call it a stock. Since in the market there can be many stocks, say, for example, N different kinds of stocks, and they will usually be disturbed by some stochastic perturbations, for simplicity we assume that the stochastic perturbations are continuous, so it is also natural to assume that the stochastic differential equations for the prices
206
8. Option Pricing in a Financial Market and BSDE
of stocks are:
where wt = (w:, w:, . . - ,w:)* is a standard d-dimensional Brownian Motion process, and A* means the transpose of A. Now suppose that a small investor who wants his money (or say his wealth) from the market at the future time T when it reaches X. (Notice that X is not necessary a constant, for example, X = co clP;, where q,and cl are non-negative constants, and P; is the price for the first stock at the furture time T, because the investor has confidence that the first stock can help him to earn money). How much money xt should he invest in the market, and how could he choose the right investment portfolio at time t? Suppose the right portfolio (n!, nf . , . ,n?) exists, where n! is the money invested in the bond, and IT: is the money invested in the ith stock. Then he should have
+
where 77: is the number of bond units bought by the investor, and is the amount of units for the ith stock. We call xt the wealth process for this investor in the market. Now let us derive intuitively the stochastic differential equation (SDE) for the wealth process as follows: Suppose the portfolio is self-financed, i.e. in a short time dt the investor doesn't put in or withdraw any money from the market. He only lets the money xt change in the market due to the market own performance, i.e. self-finance produces dxt = @dPf 77idP;. Now substituting (8.1) and (8.2) into the above equation, after a simple calculation we arrive at the following backward SDE (BSDE), where the wealth process xt and the portfolio .rrt = (T: ... ,T?) (actually it is the risk part of the portfolio) should satisfy:
+ EL,
dxt = rtxtdt
+ nt(bt - rtl)dt + ntatdwt,
XT = X,
t E [O,TI,
(8.4)
where I=(1,. . . , I ) * is an N-dimensional constant vector. In a financial market if we let X be a contingent claim, then the solution (xt, nt) of the BSDE (8.4) actually tells us the following fact: At any time t, let us invest a total amount of money xt, and dividing it into two parts: One part of the mone is for the non-risky investment; that is we invest the money n: = xt ~f into the bond. The other part of the money is for the risky investment nt = (n: . . . ,n?); that is we invest the money T: into the the i-th stock, i = 1,2,. . . ,N. Then, eventually, at the terminal time T our total money xt, t = T, will arrive at the contingent
8.1 Introduction
< < <
7.
i JkT
i
E[lutli + lqs12ds + SI,, Sz IpS(4l24dz)dsl < k& e ( 3 c l ( s ) 4 c ~ ( s ) ~ ) ~ l u ~ l ~ d s , where c1(s)ds. k&= E 1x1'+ 2 By Gronwall's inequality one easily finds that
+
+
Lemma 230 Under the assumption in Lemma 229, i f ("1 - 2 2 ) ' (b(t,Xl,Qlr P ~ W) b(t,x2)92,P2,~)) c i ( t ) p ( l x~ x212) ~ 2 ( t 1x1 ) - 2 2 1 (191 - 921 ((pi- ~ 2 1 1 ) 1 , where ci(t),i = 1,2, satisfy the same condition as that in Lemma 229, and p(u) 2 0, as u 2 0 is non-random, increasing, continuous and concave such that So+ dulp(u>= w then the solution of (8.8) is unique.
T, and pl(u) = p(u) u.Hence by Lemma 235 below P - a s . Zt =O,WE [O,r]. Unlike the SDE theory, an interesting thing is that for BSDE we have the following result: If the terminal variable X is bounded, then the solution is always bounded.
Lemma 231 Assume that loin Lemma 229 holds, T a constant ko 2 0 such that 1x1< ko, P - a.s. If ( s t , qt,pt) is a solution of (8.8) , then P - a.s. lxtl < No,vt 't [ o , ~, W ] E 0, where No 2 0 is a constant depending on cl(t)dt, only.
< To,and there exists
fl( ~ ~ ( dtt ) and ) ~ ko
Proof. Similar to (8.12) one obtains lxt12 E ~ ~ [ I X ~ J,' I ~ ,, lqSl2 d~ J;, J~I P ~ ( ZT)(I~~Z ) ~ S I [ ~ 51x1~ t + C1(S)dS]eS,TO(3~ds)+4~z(s)z)ds, where we write ESt [.] = E [. IZt]. Hence the conclusion now follows.
0 is a constant, and c(s) 2 0, then
8. Option Pricing in a Financial Market and BSDE
218 ?/t
< yvt + y
Proof. Let 0
exp (J: c(r)dr)c(s)vSds.
< zt = J?
c(s)y,ds, gt = yt - yvt - zt
+ +
< 0. T h e n
-zt = c(t)yt = c(t) (gt yvt zt) . From this one gets To zt = St exp (J: c(r)dr)c ( s ) (g, yv,) ds < exp (S,8 c(r)dr)c(s)yv,ds. Hence ~t < yvt zt Q yvt y exp (J: c(r)dr)c(s)v,ds. Now let us give some examples t o show that the conditions on c l ( t ) and c2(t) in Theorem 234 cannot be weakened.
+
~7
+ ~7
+
.
cl(t)dt < co cannot be weakened). ConExample 236 . (Condition sider the following BSDE 0 t T , T T T Xt =1 Is#~s-"xsds - St qsdws SZ ~ s ( z ) ~ k ( dz). ds, Obviously, i f a < 1, then by Theorem 234 it has a unique solution ( x t ,0 , O ) , where xt is non-random. However, if a 2 1, it has no solution. Otherwise, for the solution (xt,qt,pt) one has EX: = 1 + J,T I , # ~ S - ~ E X=~eJtTIa#os-"ds,~i ~S = I , 2,. . . ,d. Hence E x ; = m , V i = 1,2,... ,d, as a 2 1, which is a contradiction.
<
0, b, a are all constants, and g(y) = ( y K)+,y 2 0. Let u(t,y) = BS(t,T,y, K , a , r ) , as 0 t < T ,y 2 0; and u(t,y ) = g(y), as t = T ,y 0; where BS(t,T,y,K,a,r) = yN(p+(T - t , ~ )-)K ~ - ' ( ~ - ~ ) N ( ~ t-,(yT )), and p*(t, y ) = -&[log j$ t(r f )I, N ( y ) = J_Y_e - ~ ' / ~ d z . Then u(t,y) E C1r2([0,T)x (0, co)) and satisfies (8.21). Furthermore, u(t,P i ) is the price of the option, and $ ( t , P,')P,' is the portfolio such that (u(t,P,'), e ( t ,Pi)P,') can duplicate the money (P: - K)+ which the option owner can earn at the future time T . Such an explicit formula ~ ( P,') t , = BS(t,T ,P i , K , a , r ) for the option price is called the BlackScholes formula.
+
f
-&
Proof. It is a rutine matter to show that u(t,y ) E C1,2([0, T )x ( 0 , ~ ) ) and that it satisfies (8.21).The reader can verify those results for u(t,y ) by himself.Now let us check the terminal condition u(T,y ) = ( y - K)+,Vy > 0 also holds true. In fact,
Obviously, as t 1 T,O 5 If 5 K ( l - e-T(T-t))-, 0. Observe that in the expression for p+(T-t, y), if y > K then as t T,p+(T-t, y ) -,co; on the other hand, if y 5 K, then as t 1T ,p+(T-t, y) -, -co. Therefore,as t 1T , I: -+ ( y -K)+. The terminal condition holds for u(t,y ) . Finally, let us show that the proof of Lemma 241 goes through. In fact, notice that by definition u(t,O) = 0. So &(t,O) = 0. Now applying Ito's formula to u(t,P,') on t E [O,T],where P,' satisfies (8.20),one finds that FT - a.s.W E [O,T], Bu u(t,P,') = u(T,P$) - J,T Iz(s, P i ) + $(s, p,1)p,1rS las121p:l2 @ ] ~ p : & - :J e(s,p:)p:aSd"', = g(P$) r,u(s, ~:)lp:+ods- :J $(s, P:)P:~,~G,
++
JT
=g ( ~ $ ) :J rsU(s, p:)dS - :J %(s, P:)P:c,~~?,. So the proof of Lemma 241 still goes through. The Black-Scholes formula is easily evaluated pratically, because in Statistics books there are tables for the values of N ( x ) . Furthermore, the Black-Scholes formula can also be converted into a computer program, so the option price can be found immediately from the computer, if it is provided with the necessary data.
8.6 Black-Scholes Formula for Markets with Jumps
229
8.6 Black-Scholes Formula for Markets with Jumps Since financial markets with jumps are more realistic, in this section we will briefly discuss how to obtain a Black-Scholes formula for option pricing in such markets. Recall that from Theorem 238 we have considered a self-financed market. There is a bond, whose price satisfies the bond price equation:
where r(t) is the rate function, and there are N different kinds of stocks, with their prices satisfying the following general stochastic differential equations:
where w_t = (w_t^1, w_t^2, …, w_t^d)^T is a standard d-dimensional Brownian motion process, Ñ_t = (Ñ_t^1, …, Ñ_t^m)^T is an m-dimensional centralized Poisson process, i.e. all components are independent and any two components have no common jumps, such that dÑ_t^k = dN_t^k − dt, and σ = (σ^{ik}) ∈ R^{N⊗d}, ρ = (ρ^{ik}) ∈ R^{N⊗m}. In such a market, if X ∈ F_T = F_T^{w,Ñ} is a contingent claim, and (x_t, π_t), t ∈ [0,T], can hedge this contingent claim, then they should satisfy:
where 1 = (1, …, 1)^T is an N-dimensional constant vector, x_t is called the price of the contingent claim X (it is also called a wealth process in many cases), and π_t = (π_t^1, …, π_t^N) is called a portfolio (actually, it is the risky part of the portfolio). According to Theorem 238, under the condition (A1), (8.28) has a unique F_t^{w,Ñ}-adapted solution (x_t, π_t). Now let [q_t, p_t] = π_t[σ_t, ρ_t] = π_t σ̃_t, where q_t = (q_t^1, …, q_t^d), p_t = (p_t^1, …, p_t^m). If σ̃_t^{−1} = [σ_t, ρ_t]^{−1} exists (for simplicity, to ensure this, let us assume that N = d + m), then (8.28) can be rewritten as
8. Option Pricing in a Financial Market and BSDE
where we set
with θ_t^1 ∈ R^{d⊗1}, θ_t^2 ∈ R^{m⊗1}. Then the existence of the solution (x_t, q_t, p_t) for BSDE (8.29), or, more precisely, the existence of a solution (x_t, π_t) for BSDE (8.28), will help the investor to make his wealth x_T arrive at X at the future time T (or, say, x_T can duplicate the contingent claim X), if at time t, by using the portfolio π_t, he invests the money x_t. Moreover, if one calls X the value of some contingent claim, then the existence of x_t also shows that one can give the contingent claim a price x_t at time t. We can give another existence theorem (the sufficient conditions are a little different from Theorem 238) for BSDE (8.29) and BSDE (8.28) as follows (its proof is similar to that of Theorem 238):
Theorem 243 [164] Suppose that b, r, σ, ρ are all bounded F_t^{w,Ñ}-predictable processes and σ̃_t^{−1} exists and is also bounded, and assume that N = d + m, X ∈ F_T^{w,Ñ} and E|X|² < ∞. Then (8.29) has a unique solution (x_t, q_t, p_t) ∈ L_F^2(R^1) × L_F^2(R^{1⊗d}) × F_π^2(R^{1⊗m}), and there also exist a unique wealth process x_t ∈ L_F^2(R^1) and a unique portfolio process π_t ∈ L_F^2(R^N) satisfying (8.28), where
π_t^i = Σ_{j=1}^d q_t^j (σ̃_t^{−1})_{ji} + Σ_{j=1}^m p_t^j (σ̃_t^{−1})_{(j+d)i}, i = 1, …, N;
that is, π_t = [q_t, p_t] σ̃_t^{−1}.
Recall that if the contingent claim X can be duplicated by (x_t, π_t) (i.e., putting such (x_t, π_t) into the market, when t evolves to time T, then x_T = X), then we say that (x_t, π_t) hedges the contingent claim X. So we see that under the conditions of Theorem 243 any contingent claim can be hedged. Such a market can be called a complete market. Now we will use the Girsanov type theorem to simplify the equations (8.27) and (8.28). First, we know that (8.27) can also be rewritten simply as
dP_t = P_{t−}[r_t dt + (b_t − r_t 1)dt + σ_t dw_t + ρ_t dÑ_t], t ∈ [0,T],   (8.31)
P_0 = P_0,
where P_t ∈ R^N, r_t ∈ R^1, σ_t ∈ R^{N⊗d}, ρ_t ∈ R^{N⊗m}. So we can establish the following theorem by using the Girsanov type transformation.
Theorem 244 Under the assumptions of the above theorem, if, in addition, θ_t^{2,i} > −1, ∀i = 1, …, m, then
1) the stock price SDE can be rewritten as: P̃_T-a.s.
dP_t = P_{t−}[r_t dt + σ_t dw̃_t + ρ_t dÑ̃_t],
P_0 = P_0, 0 ≤ t ≤ T;
and 2) the SDE for hedging the contingent claim X can also be rewritten as: P̃_T-a.s.
dx_t = r_t x_t dt + π_t σ_t dw̃_t + π_t ρ_t dÑ̃_t, x_T = X, 0 ≤ t ≤ T.   (8.33)
where we write
p̃_{t,T} = exp[∫_t^T (θ_s^1, dw_s) − ½∫_t^T |θ_s^1|² ds − ∫_t^T (θ_s^2, 1) ds] · … .
Moreover, u(t,y) ∈ C^{1,2}([0,T) × (0,∞)), and u(t,y) satisfies (8.37) with g(y) = (y − K)^+ and k = 1; i, j = 1. So, after a discussion similar to the one in the previous section, one sees that
is the price of the option for the first stock. Notice that in (8.41) the price of the option for stock 1 actually depends on both stocks, because θ does so. (Actually, θ depends on σ̃^{−1}; by (8.30), θ is solved by using the coefficients of both stocks.) Hence, if there exists only one stock with both a continuous and a jump perturbation, then usually we cannot have such a formula, because in this case θ usually does not exist. Formula (8.41) is a generalization of the Black–Scholes formula for option pricing. In the case that ρ = 0 and N = 1, i.e. the stock is without any jump perturbation, and we only consider one stock in the market, formula (8.41) reduces to
x_t = E^{P̃}((P_T^1 − K)^+ | F_t) = BS(t, T, P_t^1, K, σ, r).   (8.42)
That is the standard Black–Scholes formula which we found in the previous section. So the option pricing formula can also be evaluated by solving a PDE, even in a self-financed market with jumps, but under the assumption that 2 = N = d + m = 1 + 1 and all coefficients are constants. That is, we arrive at the following theorem.
Theorem 245 Assume that r ≥ 0, b, σ are all constants, d = m = 1, N = 2, and g(y) = (y − K)^+, y ≥ 0. Let u(t,y) be defined by (8.39), 0 ≤ t < T, y ≥ 0. Then u(t,y) ∈ C^{1,2}([0,T) × (0,∞)) and satisfies (8.37), with u(T,y) = (y − K)^+, ∀y ≥ 0. Furthermore, u(t,P_t^1) is the price of the option, and (∂u/∂y)(t,P_t^1)P_t^1 is the portfolio, such that (u(t,P_t^1), (∂u/∂y)(t,P_t^1)P_t^1) can duplicate the money (P_T^1 − K)^+, which the option owner can earn at the future time T. Such an explicit formula u(t,P_t^1) (defined by (8.41)) for the option price is called a generalized Black–Scholes formula.
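When the restrictive assumptions of the theorem fail and no PDE solution is available, the risk-neutral expectation that defines the price can still be estimated by simulation. A rough Monte Carlo sketch for one jump-diffusion stock; the constant coefficients, single jump size and jump intensity below are our illustrative assumptions, not the text's model:

```python
import math, random

def mc_jump_call(P0, K, r, sigma, jump, lam, T, n_paths=100_000, seed=1):
    """Estimate e^{-rT} E(P_T - K)^+ for
    P_T = P0 exp((r - sigma^2/2)T + sigma W_T) (1+jump)^{N_T} e^{-lam*jump*T},
    a jump-diffusion whose discounted price is a martingale (jump > -1)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T - lam * jump * T
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # sample N_T ~ Poisson(lam*T) by inversion of the cdf
        n, p, u = 0, math.exp(-lam * T), rng.random()
        c = p
        while u > c:
            n += 1
            p *= lam * T / n
            c += p
        PT = P0 * math.exp(drift + sigma * math.sqrt(T) * z) * (1.0 + jump) ** n
        total += max(PT - K, 0.0)
    return math.exp(-r * T) * total / n_paths
```

With jump = 0 or lam = 0 this reduces to a Monte Carlo estimate of the Black–Scholes price.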
8.7 More General Wealth Processes and BSDEs

In a financial market we sometimes meet more general wealth processes. Suppose that during the time interval [0,T] the investor also consumes money dC_t ≥ 0 in a small time dt, with C_0 = 0 and C_T < ∞, and borrows money from the bank with the interest rate R_t at time t. Then his wealth process x_t, developed in the financial market, should satisfy the following SDE:
dx_t = r_t x_t dt − dC_t − k_0(R_t − r_t)(x_t − Σ_{i=1}^N π_t^i)^− dt + π_t[b_t + δ_t − r_t 1] dt + π_t σ_t dw_t + π_t ρ_t dÑ_t, 0 ≤ t ≤ T,   (8.43)
where σ̃_t = [σ_t, ρ_t] is the N × (d + m) volatility matrix process, δ_t = (δ_t^1, …, δ_t^N)^T is a dividend rate process, k_0 ≥ 0 is a constant, and we write a^− = max(−a, 0). (8.43) is a general model of the financial market, which includes many interesting cases. For example, if k_0 = 0, then we get the model in [8]; and if we let k_0 = 1 and ρ_t^i ≡ 0, 1 ≤ i ≤ m (i.e. no jumps), we get a model discussed in [221],[33]. Suppose now the investor expects that his wealth will arrive at X at the future time T; then his present wealth (or, say, his present investment) x_t should satisfy the following BSDE derived from (8.43):
≤ k(|x − x̄| + |q − q̄| + ‖p − p̄‖). Hence by Theorem 234 there exists a unique solution (x_t^n, q_t^n, p_t^n) of the following BSDE (denote t′ = t ∧ τ below):
… ≤ k_0[|x − x̄| + |q − q̄|], where k_0 ≥ 0 is a constant, and C̃_t^i(z,ω) satisfies the condition
(B)_2 |C̃_t^i(z,ω)| ≤ k_0|C_t^{2,i}(z)| + 1, i = 1, …, d_2; C_t^2(z) = (C_t^{2,1}(z), …, C_t^{2,d_2}(z)) ∈ F_π^2(R^{1⊗d_2});
then the conclusion of Theorem 251 still holds.
0, ∀t ≥ 0. Hence we can construct an example as follows. Consider the BSDE (with all random processes in R^1)
with b(p) = (1 + ε_0)∫_Z I_U(z)p(z)π(dz).
Obviously, (0, 0, 0) solves the above BSDE with X = 0, and
(x_t, q_t, p_t) = (−N_k([0,t],U) + ε_0(T_0 − t)π(U), 0, −I_{z∈U})
solves the above BSDE with X = −N_k([0,T_0],U) ≤ 0. In fact, by Ñ_k([t,T_0],U) = N_k([t,T_0],U) − (T_0 − t)π(U) one easily checks this result. However,
C̃_t^1(z,ω) = (1 + ε_0)I_U(z) > 1, as z ∈ U,
and P(x_t > 0) ≥ P(N_k([0,t],U) = 0) > 0, ∀t ∈ [0,T_0). A counterexample for C̃_t^1(z,ω) = −(1 + ε_0) < −1 can also be similarly constructed. Now we will give an example to show that the condition in 2° of Theorem 251 for b with respect to p cannot be weakened to the usual Lipschitz condition.
Example 254 Assume that π(U) = 1/2, π(V) = 1, U ⊂ V. Consider the BSDE (8.55) with b(p) = ∫_Z I_V(z)p(z)π(dz). Obviously, (0, 0, 0) solves the above BSDE with X = 0, and (x_t, q_t, p_t) = (−N_k([0,t],U), 0, −I_{z∈U}) solves the above BSDE with X = −N_k([0,T_0],U) ≤ 0. Moreover, here C_t^1(z,ω) = I_V(z) and P(x_t ≤ 0) = 1, ∀t ∈ [0,T_0]. Hence the comparison theorem holds. Now if we consider the BSDE (8.55) with
b(p) = (∫_Z I_V(z)|p(z)|² π(dz))^{1/2},
then the coefficient b satisfies the condition (8.56)
with C_t(z) = I_V(z), ‖p‖ = (∫_Z |p(z)|² π(dz))^{1/2}, π(V) = 1, U ⊂ V. (Notice that the condition (8.56) is weaker than |b(p_1) − b(p_2)| ≤
∫_Z |C_t(z)| |p_1(z) − p_2(z)| π(dz).) Obviously, (0, 0, 0) still solves the above BSDE with X = 0, and
(x_t, q_t, p_t) = (−N_k([0,t],U) + [√(π(U)) − π(U)](T_0 − t), 0, −I_{z∈U})
solves the above BSDE with X = −N_k([0,T_0],U) ≤ 0. However,
P(x_t > 0) ≥ P(N_k([0,t],U) = 0) > 0, ∀t ∈ [0,T_0),
and so the comparison theorem fails.
Now, let us establish Theorem 251 by using Theorem 252.
Proof. Notice that (x̄_t, q̄_t, p̄_t) = (x_t^1 − x_t^2, q_t^1 − q_t^2, p_t^1 − p_t^2) satisfies the following BSDE:
C t ( z , w ) = G t ( w ) Icit(z,w)l ~ n @ i t ( z ) )IQt(w)l ,
< 1 , i = 1, ...,4;
1 a: = ( ~ l t...,, afi,t), 6 = (?lt,.. . ,Gdlt), Ct ( z ,W ) = (Clt( 2 ,w ) , . . . ,Cdzt( z ,w ) ) , etc. Obviously, condition (ii) of 2'' in Theorem 252 is satisfied. Hence Theorem 252 applies. One has P - a.s. x t = X: - x i 0,Vt E [O,T]. The proof using Theorem 252 to derive Theorem 251 is complete. w Before we establish Theorem 252, some preparation is necessary. Introduce another condition for C t ( z ) : (C)1 Cit(z) -1, i = 1,. . .,d2; ~ ( 2=)( c l t ( z ) ,. . .,c d z t ( z ) )E F ; ( R ~ @ ~ ~ ) , A
If, moreover, P(X^1 > 0) > 0 and X^1 ≥ 0, then P-a.s. x_t^1 > 0, ∀t ∈ [0,T], ∀ω ∈ {ω : X^1(ω) > 0}.
Proof. We only need to show the last conclusion. Suppose that P(X^1 > 0) > 0. Applying Girsanov's theorem, we find that (8.63) can be rewritten as: P̃_T-a.s., ∀0 ≤ t ≤ T,
x_t^1 = X^1 + ∫_t^T r_s x_s^1 ds + C_T^1 − C_t^1 − ∫_t^T q_s^1 dw̃_s − ∫_t^T ∫_Z p_s^1 dÑ̃_s,
where
w̃_t = w_t + ∫_0^t θ_s^1 ds, Ñ̃_t = Ñ_t + ∫_0^t θ_s^2 ds
are a new BM and a new centralized (not necessarily stationary) Poisson process under the new probability measure P̃_T, respectively, where
dP̃_T = exp[−∫_0^T (θ_s^1, dw_s) − ½∫_0^T |θ_s^1|² ds − ∫_0^T (θ_s^2, 1) ds] · Π_{0<s≤T}(1 + (θ_s^2, ΔN_s)) dP,
and Ñ̃_t = N_t − ∫_0^t (1 − θ_s^2) ds, P̃_T-a.s. Hence, applying Itô's formula to x_t^1 e^{−∫_0^t r_s ds}, one sees that, P̃_T-a.s.,
x_t^1 e^{−∫_0^t r_s ds} = X^1 e^{−∫_0^T r_s ds} + ∫_t^T e^{−∫_0^s r_u du} dC_s^1 − ∫_t^T e^{−∫_0^s r_u du} q_s^1 dw̃_s − ∫_t^T ∫_Z e^{−∫_0^s r_u du} p_s^1 dÑ̃_s.
Now, taking the conditional expectation E^{P̃_T}(·|F_t), one finds that, P̃_T-a.s., ∀t ∈ [0,T],
x_t^1 e^{−∫_0^t r_s ds} = E^{P̃_T}(X^1 e^{−∫_0^T r_s ds} + ∫_t^T e^{−∫_0^s r_u du} dC_s^1 | F_t)
≥ E^{P̃_T}(X^1 e^{−∫_0^T r_s ds} | F_t) ≥ E^{P̃_T}(X^1 e^{−∫_0^T r_s ds} I_{X^1>0} | F_t).
Now let A = {ω : X^1(ω) > 0}, and P^A = P(·|A). Then P^A is still a probability measure. Replacing P by P^A, and arguing as above, one finds that, P̃_T^A-a.s.,
x_t^1 e^{−∫_0^t r_s ds} ≥ E^{P̃_T^A}(X^1 e^{−∫_0^T r_s ds} I_{X^1>0} | F_t) > 0.
Since P̃_T^A ∼ P^A, we get P^A-a.s., ∀t ∈ [0,T], x_t^1 > 0. That is, P-a.s., ∀t ∈ [0,T], ∀ω ∈ A, x_t^1 > 0. ∎
This theorem has a nice physical meaning. Actually, it tells us that in a financial market the following facts are true.
1) If a small investor wants his wealth in the market at the future time T to reach a higher level, and his consumption rule does not change, then he must invest more money now; that is,
X^1 ≥ X^2, C_t^1 = C_t^2, ∀t ∈ [0,T] ⟹ x_t^1 ≥ x_t^2, ∀t ∈ [0,T].
2) If a small investor wants to consume more during the time interval [0,T], and his terminal target does not change, then he also must invest more money now; that is,
C_t^1 ≥ C_t^2, ∀t ∈ [0,T], X^1 = X^2 ⟹ x_t^1 ≥ x_t^2, ∀t ∈ [0,T].
3) If someone wants his money in the market in the future to be non-negative, and he also wants to consume money from the market, then he must invest now.
Naturally, if a zero investment in a market can produce a positive gain with a positive chance (that is, with a positive probability), then such a market has an arbitrage chance, and we will call it an arbitrage market. Otherwise, we will call the market an arbitrage-free market.
8.12 Explanation of Comparison Theorem. Arbitrage-Free Market
To explain the concept of "arbitrage" more clearly, let us simplify the market. Suppose that in the market there is one bond, whose price P_t^0 satisfies the equation, as before:
dP_t^0 = P_t^0 r_t dt, P_0^0 = 1.
There are N stocks, and as before they satisfy the following SDE:
dP_t^i = P_{t−}^i[b_t^i dt + Σ_{k=1}^d σ_t^{ik} dw_t^k + Σ_{k=1}^m ρ_t^{ik} dÑ_t^k], P_0^i > 0, i = 1, …, N.
Applying Itô's formula, one can verify that the solution of the bond price equation is
P_t^0 = e^{∫_0^t r_s ds} > 0, as t ≥ 0,
and the solutions of the stock price equations are
P_t^i = P_0^i exp[∫_0^t b_s^i ds + Σ_{k=1}^d ∫_0^t σ_s^{ik} dw_s^k − ½Σ_{k=1}^d ∫_0^t |σ_s^{ik}|² ds] · Π_{k=1}^m Π_{0<s≤t}(1 + ρ_s^{ik} ΔN_s^k) e^{−∫_0^t ρ_s^{ik} ds} > 0, i = 1, 2, …, N,
provided that ρ_t^{ik} > −1 and P_0^i > 0. Obviously, the wealth process of a small investor in the market, under the conditions ρ_t^{ik} > −1 and P_0^i ≥ 0, is
x_t = Σ_{i=0}^N η_t^i P_t^i = Σ_{i=0}^N π_t^i ≥ 0,
where η_t^0 is the number of bond units bought by the investor, η_t^i the number of units of the i-th stock, and π_t^i = η_t^i P_t^i. Now suppose that the market is self-financed; then the wealth process x_t should satisfy
dx_t = Σ_{i=0}^N η_t^i dP_t^i.
This deduces that
dx_t = r_t x_t dt + π_t(b_t − r_t 1)dt + π_t σ_t dw_t + π_t ρ_t dÑ_t.   (8.64)
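The exponential form of the stock price solution can be checked numerically: an Euler scheme for dP_t = P_{t−}(b dt + σ dw_t + ρ dÑ_t), with one stock, one Brownian motion and one Poisson process (dÑ_t = dN_t − dt, constant coefficients — our illustrative choice), should track the closed form P_t = P_0 exp[(b − σ²/2 − ρ)t + σw_t](1 + ρ)^{N_t} built from the same noise:

```python
import math, random

def simulate(b=0.1, sigma=0.2, rho=0.5, P0=1.0, T=1.0, n=4000, seed=7):
    """Euler path of dP = P_-(b dt + sigma dw + rho dN~), dN~ = dN - dt,
    compared with the closed-form exponential solution on the same noise."""
    rng = random.Random(seed)
    dt = T / n
    P_euler, W, N = P0, 0.0, 0
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))
        dN = 1 if rng.random() < dt else 0  # unit-intensity Poisson increment
        P_euler += P_euler * (b * dt + sigma * dW + rho * (dN - dt))
        W += dW
        N += dN
    P_exact = P0 * math.exp((b - sigma ** 2 / 2 - rho) * T + sigma * W) * (1 + rho) ** N
    return P_euler, P_exact
```

Both quantities stay strictly positive, in line with the requirement ρ > −1 above.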
Now let us give the explicit definition of an arbitrage market and an arbitrage-free market.
Definition 258 Suppose that in a market the wealth process is described by (8.64), and the conditions stated above are satisfied. Thus, if for x_0 = 0, P-a.s., there exists a portfolio π_t, t ∈ [0,T], such that the wealth process x_t developed by SDE (8.64) reaches X at the future time T, and one of the following results holds: (i) X ≥ 0 and P(X > 0) > 0; (ii) X ≤ 0 and P(X < 0) > 0; then π_t, t ∈ [0,T], is called an arbitrage portfolio, and the market is called an arbitrage market. Otherwise, the market is called an arbitrage-free market.
In an arbitrage-free market, starting from zero investment we cannot have a gain with a positive chance; that is, we cannot have X ≥ 0 and P(X > 0) > 0; and we also cannot have a loss with a positive chance; that is, X ≤ 0 and P(X < 0) > 0. Otherwise, somebody could use such a chance to make money. Hence, applying Theorem 250 and the above theorem, we immediately obtain the following result.
Theorem 259 If for a financial market all of the assumptions in Theorem 257 hold and k_0 = 0, ρ_t^{ik} > −1, ∀i, k, t, then this market is a complete market (that is, any contingent claim in this market can be duplicated), and moreover, it is an arbitrage-free market.
Notice that x_t^1 e^{−∫_0^t r_s ds} = x_t^1/P_t^0. So we may call x_t^1 e^{−∫_0^t r_s ds} a discounted (w.r.t. the bond) wealth process. By the proof of the arbitrage-free property, or, say, the proof of the last conclusion of Theorem 257, one immediately sees that the following theorem is true.

Theorem 260 For a self-financed market, if there exists an equivalent probability measure P̃_T (that is, P̃_T is a probability measure and P̃_T ∼ P) such that under this measure P̃_T the discounted wealth process becomes a martingale, then this self-financed market is an arbitrage-free market.

In fact, in this case x_t^1 e^{−∫_0^t r_s ds} = E^{P̃_T}(X^1 e^{−∫_0^T r_s ds} | F_t), so the proof of the last conclusion of Theorem 257 still goes through.
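Theorem 260's criterion is easy to illustrate numerically: with drift equal to the rate r, the mean of the discounted price stays at its initial value; with a different drift it does not. A toy Monte Carlo check (pure diffusion with constant coefficients — our simplifying assumptions, not the text's model):

```python
import math, random

def discounted_mean(drift, r=0.03, sigma=0.2, P0=100.0, T=1.0, n=200_000, seed=3):
    """Estimate E[e^{-rT} P_T] for geometric Brownian motion with a given drift."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        PT = P0 * math.exp((drift - sigma ** 2 / 2) * T + sigma * math.sqrt(T) * z)
        acc += math.exp(-r * T) * PT
    return acc / n

m_rn = discounted_mean(drift=0.03)  # drift = r: discounted price is a martingale
m_rw = discounted_mean(drift=0.10)  # drift != r: discounted mean drifts away from P0
```

The first estimate stays near P_0 = 100, the second near 100·e^{(0.10−0.03)T} ≈ 107.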
8.13 Solutions for Unbounded (Terminal) Stopping Times

It is quite possible that in a financial market some investor will stop all his activities at some random time, according to his rule for investment. However, it is not known beforehand when he will stop. Moreover, sometimes it is also unreasonable to assume that he will stop before some fixed time; that is, the terminal stopping time is not necessarily bounded. So we need to consider the BSDE with an unbounded stopping time as the terminal time. Now let us discuss BSDE (8.8) with an unbounded stopping time τ ∈ [0,∞]. In this case we still use Definition 226 as the definition of the solution to (8.8). Moreover, one easily sees that Lemma 229, Lemma 231, and Lemma 230 still hold, if we substitute ∞ for T_0 in their proofs. However, the proof of Lemma 233 needs a little more discussion. Let us rewrite it as follows:
Lemma 261 If τ is an F_t-stopping time, X is F_τ-measurable and R^d-valued, and f(t,ω) is F_t-adapted and R^d-valued such that
E|X|² < ∞, E|∫_0^τ |f(s,ω)| ds|² < ∞,
then (8.8) has a unique solution.
Proof. Let x_t = E^{F_t}(X + ∫_{t∧τ}^τ f(s) ds), ∀t ≥ 0. Then, in the same way as before, one sees that this makes sense. Similarly, M_t = E^{F_t}(X + ∫_0^τ f(s) ds), t ≥ 0, is a square integrable martingale. Hence by the martingale representation theorem (see Theorem 103 in Chapter 2) there exists a unique
where
L_F^{2,loc}(R^{d⊗d_1}) = { f(t,ω) : f(t,ω) is F_t-adapted, R^{d⊗d_1}-valued, and E∫_0^{T∧τ} |f(t,ω)|² dt < ∞, ∀T < ∞ },
etc. Let us show that
E∫_0^τ |q_t|² dt < ∞ and E∫_0^τ ∫_Z |p_t(z)|² π(dz) dt < ∞.
In fact, by assumption one finds that E|M_τ|² < ∞ and E|M_0|² < ∞. So by (8.65) one sees that
E∫_0^{T∧τ} |q_t|² dt + E∫_0^{T∧τ} ∫_Z |p_t(z)|² π(dz) dt = E|M_{T∧τ} − M_0|² ≤ 2[E|M_τ|² + E|M_0|²] < ∞.
Introduce the Banach space B = L_F^2(0,T;R^d) × L_F^2(0,T;R^{d⊗d_1}) × F_π^2(0,T;R^{d⊗d_2}), endowed with a weighted norm of the form sup_{t≥0} e^{−bA(t)} E‖·‖², where b ≥ 0 is a constant and A(t) = ∫_t^∞ (c_1(s) + c_2(s)²) ds, and
L_F^2(0,T;R^{d⊗d_1}) = { f(t,ω) : f(t,ω) is F_t-adapted, R^{d⊗d_1}-valued, and E∫_0^T |f(t,ω)|² dt < ∞, ∀T < ∞ };
then the proof can be completed as that of Theorem 234. ∎
Similarly, if one takes T_0 = ∞ and changes the interval [0,T_0] to [0,∞), then Theorem 246 is also true. Furthermore, we also have examples showing that the condition ∫_0^∞ c_1(t)dt + ∫_0^∞ c_2(t)² dt < ∞ cannot be dropped.
… ≤ 4(c_1(s) + c_2(s)²)(ρ_1(|x_s^n − x_s|²) + 2^{−1}(|q_s^n − q_s|² + ‖p_s^n − p_s‖²)),
where ρ_1(u) = ρ(u) + u. However, one can show that
lim_{n→∞} E∫_0^τ I_s^{2,n} ds = 0.
Indeed, by the Schwarz inequality and Lemma 229 for unbounded stopping times,
E∫_0^τ I_s^{2,n} ds ≤ 2(E sup_{t≥0} |x_{t∧τ}^n − x_{t∧τ}|⁴)^{1/2} · (…)^{1/2}.
Theorem 266 All assumptions are the same as in Theorem 249 for solutions to BSDEs with bounded stopping times, but now τ ∈ [0,∞] and T = T_0 = ∞. Then the conclusion of Theorem 264 still holds.
Proof. Notice that now we have
However, now one finds, by Lebesgue's dominated convergence theorem, that
E∫_0^τ I_s^{1,n} ds ≤ 2(E sup_{t≥0} |x_{t∧τ}^n − x_{t∧τ}|²)^{1/2} · (…) → 0.
The proof is now completed as above. ∎
The results on BSDEs whose terminal time is an unbounded stopping time can also be interpreted in the financial market. One only needs to replace the terminal time T by the stopping time τ and to replace X ∈ F_T by X ∈ F_τ; then a similar explanation can still be made. We leave this as an exercise for the reader.
8.14 Minimal Solution for BSDE with Discontinuous Drift

Applying Theorem 251 we can derive a new existence theorem for solutions to BSDE (8.8) with a discontinuous coefficient in R^1 (i.e. we assume that d = 1 in (8.8)), where for simplicity we also assume that τ is a bounded stopping time: τ ≤ T_0 < ∞.
Theorem 267 Consider (8.8). Assume that
1° b(t,x,q,p,ω) is jointly continuous in (x,q,p) ∈ (R^1 × R^{1⊗d_1} × L²_{π(·)}(R^{1⊗d_2})) \ G, where G ⊂ R^1 × R^{1⊗d_1} × L²_{π(·)}(R^{1⊗d_2}) is a Borel measurable set such that (x,q,p) ∈ G ⟹ |q| > 0, and m_1G_1 = 0, where G_1 = {x : (x,q,p) ∈ G}, and m_1 is the Lebesgue measure in R^1; moreover, b(t,x,q,p,ω) is a separable process with respect to (x,q), i.e. there exists a countable set {(x_i,q_i)}_{i=1}^∞ such that for any Borel set A ⊂ R^1 and any open rectangle B ⊂ R^1 × R^{1⊗d_1} the ω-sets {ω : b(t,x,q,p,ω) ∈ A, (x,q) ∈ B} and {ω : b(t,x_i,q_i,p,ω) ∈ A, (x_i,q_i) ∈ B, ∀i} only differ by a zero-probability ω-set;
2° |b(t,x,q,p,ω)| ≤ c_1(t)(1 + |x|) + c_2(t)(|q| + ‖p‖),
|b(t,x,q,p_1,ω) − b(t,x,q,p_2,ω)| ≤ ∫_Z |C_t(z,ω)| |p_1(z) − p_2(z)| π(dz),
where c_1(t), c_2(t) ≥ 0 are non-random such that ∫_0^{T_0}(c_1(t) + c_2(t)²)dt < ∞, and C_t(z,ω) satisfies the condition (B) in 2° of Theorem 251.
Then (8.8) has a minimal solution. (A minimal solution x_t is one such that for any solution y_t it holds true that P-a.s. y_t ≥ x_t, ∀t ∈ [0,τ].)
Furthermore, if b satisfies
3° (x_1 − x_2)(b(t,x_1,q_1,p_1,ω) − b(t,x_2,q_2,p_2,ω)) ≤ c_1(t)ρ(|x_1 − x_2|²) + c_2(t)|x_1 − x_2|(|q_1 − q_2| + ‖p_1 − p_2‖),
where c_1(t), c_2(t) ≥ 0 have the same properties as those in 2°, and ρ(u) ≥ 0, as u ≥ 0, is non-random, increasing, continuous and concave such that ∫_{0+} du/ρ(u) = ∞,
then the solution of (8.8) is unique.

Remark 268 In the case that π(·) = 0 (a zero measure) and X ∈ F_T^w, (8.8) is reduced to the following continuous BSDE:
x_t = X + ∫_{t∧τ}^τ b(s,x_s,q_s,ω)ds − ∫_{t∧τ}^τ q_s dw_s, 0 ≤ t,
where τ is an F_t-stopping time such that 0 ≤ τ ≤ T_0, and T_0 is a fixed number.
The reader can easily obtain the existence of solutions to the continuous BSDE with discontinuous coefficient b from Theorem 267. Before proving Theorem 267 let us give some examples.
… + k_1 q^T + ∫_Z C_t(z)p(z)π(dz),
where k_1 = (k_1^1, …, k_1^{d_1}); k_0, k_0′, k_0″ ≥ 0, k_1^i, 1 ≤ i ≤ d_1, 0 < δ < 1 are all constants, and C_t(z) satisfies the condition (B). Then conditions 1°–2° in Theorem 267 are satisfied. Hence (8.8) has a minimal solution. However, b_0^δ is discontinuous at (x,q,p), where x = 0, q ≠ 0, and p is arbitrarily given. In particular, for 0 < α < 1, b_0^δ is also non-Lipschitz continuous in q. However, if we let α = 1 and b(t,x,q,p,ω) = −b_0^δ(t,x,q,p,ω), then 3° in Theorem 267 is also satisfied. Hence (8.8) has a unique solution.

Example 270 Assume that Σ_{k=1}^∞ a_k < ∞, where a_k > 0. Let {r_k}_{k=1}^∞ be the totality of rational numbers in R^1, and f(x) = Σ_{n:r_n<x} a_n.
Theorem 274 Define u^0(x) by (8.73), and for any u ∈ U, where U is defined by (8.72), let
J(u) = E(½|x_{T_0}^u|² + ½∫_0^{T_0}(|q_s^u|² + ∫_Z |p_s^u|² π(dz))ds + ∫_0^{T_0} |x_s^u|^{1+β} ds),
where (x_t^u, q_t^u, p_t^u) is the unique solution of (8.71) for u ∈ U. Then 1) u^0 ∈ U, 2) J(u) ≥ J(u^0), for all u ∈ U.

The above target functional J(u) can be explained as an energy functional. We can also discuss another kind of optimal control problem, which can be explained in the financial market. Still consider the BSDE system (8.71), but this time with x_t^u, u(s,x_s^u,q_s^u,p_s^u) ∈ R^1, q_s^u ∈ R^{1⊗d_1}, p_s^u(z) ∈ R^{1⊗d_2}, and consider the admissible control set
U = { u(t,x,q,p) : u(t,x,q,p) is jointly measurable, such that (8.71) has at least one solution (x_t,q_t,p_t), and |u(t,x,q,p)| ≤ |x|^β },   (8.74)
where 0 < β < 1 is a given fixed constant. Denote the target functional by
J(u) = sup{ E(∫_0^{T_0} |x_s^u|^{1+β} ds − |x_{T_0}^u|² − ½∫_0^{T_0}(|q_s^u|² + ∫_Z |p_s^u|² π(dz))ds) }.
… there exists a δ > 0 such that ⟨a(t,x)λ, λ⟩ ≥ δ|λ|², ∀λ ∈ R^d, where a = σσ* = (a_{ij})_{i,j=1}^d, and ⟨·,·⟩ is the inner product in R^d;
C.3 c(t,x,z) : [0,T] × R^d × Z → R^d is bounded, measurable, and …
and by (9.9), for arbitrary …,   (9.12)
Suppose that (u^0(·), x^0(·), λ^0(·)) is a maximum point of L defined by (9.9); then from (9.11)–(9.13) one finds that
x ≥ y ⟹ x + c(t,x,z,ω) ≥ y + c(t,y,z,ω). If x_0^1 ≥ x_0^2, then P-a.s. x_t^1 ≥ x_t^2, ∀t ≥ 0, where x_t^i, i = 1,2, satisfy (10.1), respectively.
Proof. Define τ_N = inf{t > 0 : |x_t^1| + |x_t^2| > N}. By the Tanaka type formula (Theorem 152), for any given N, T > 0 we have
E(x_{t∧τ_N}^2 − x_{t∧τ_N}^1)^+ ≤ E∫_0^{t∧τ_N} I_{x_s^2>x_s^1}(β^2(s,ω) − β^1(s,ω))dA_s + E∫_0^{t∧τ_N} I_{x_s^2>x_s^1}(b^2(s,x_s^2,ω) − b^1(s,x_s^1,ω))dA_s
≤ E∫_0^{t∧τ_N} k_{N,T}(s)ρ_{N,T}((x_s^2 − x_s^1)^+)dA_s, ∀t ∈ [0,T].
Introduce the time change τ_{N,T}(·) generated by Ã_t = ∫_0^t k_{N,T}(s)d(A_s + s), ∀t ≥ 0, and set
y_t = (x_t^2 − x_t^1)^+, as t ∈ [0,T]; y_t = 0, otherwise.
10.1 Comparison for Solutions of Stochastic Differential Equations
Then we have that, ∀t ≥ 0,
E|y(τ_{t∧τ_N})| ≤ ∫_0^t ρ_{N,T}(E|y(τ_{s∧τ_N})|)ds.
Hence, P-a.s. y(t) = 0, ∀t ≥ 0, i.e. P-a.s. x_t^2 ≤ x_t^1, ∀t ∈ [0,T]. Since T is arbitrary, the desired result is obtained. ∎
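The mechanism behind the proof is easy to see numerically: drive two Euler schemes with the same Brownian increments, one drift dominating the other, and the pathwise ordering is preserved. A toy sketch with coefficients of our own choosing (b^1 ≥ b^2 and a common Lipschitz diffusion):

```python
import math, random

def compare_paths(x1_0=1.0, x2_0=0.5, T=1.0, n=2000, seed=11):
    """Euler paths of dx = b_i(x)dt + sigma(x)dw driven by the SAME Brownian
    motion, with b1 >= b2 and x1_0 >= x2_0; comparison predicts x1_t >= x2_t."""
    rng = random.Random(seed)
    b1 = lambda x: 1.0 + math.sin(x) ** 2   # b1(x) >= b2(x) for every x
    b2 = lambda x: math.sin(x) ** 2
    sigma = lambda x: 0.3 * x               # common Lipschitz diffusion
    dt = T / n
    x1, x2 = x1_0, x2_0
    ordered = True
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))  # shared noise increment
        x1 += b1(x1) * dt + sigma(x1) * dw
        x2 += b2(x2) * dt + sigma(x2) * dw
        ordered = ordered and (x1 >= x2)
    return ordered, x1, x2
```

Sharing the noise is essential: with independent Brownian motions only a comparison in law, not a pathwise one, could be expected.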
10.1.2 Component Comparison in d-Dimensional Space

Motivated by the 1-dimensional case, we can discuss the comparison of the components of two d-dimensional solutions of two d-dimensional SDEs. Assume that x_t^i, i = 1,2, satisfy the following d-dimensional SDEs, for j = 1,2,…,d,
where A_t, M_t and Ñ_k(dt,dz) have the same meaning as in (4.4). We have the following comparison theorem for the components of the solutions of these SDEs.
10. Comparison Theorem and Stochastic Pathwise Control
Proof. Using the notation in the proof of the previous theorem and applying Theorem 153, one finds that for any given N, T > 0,
E(x_{j,t∧τ_N}^2 − x_{j,t∧τ_N}^1)^+ ≤ Σ_{p=1}^m E∫_0^{t∧τ_N} I_{x_{js}^2>x_{js}^1}(β_{jp}^2(s,ω) − β_{jp}^1(s,ω))dA_{ps}
+ Σ_{p=1}^m E∫_0^{t∧τ_N} I_{x_{js}^2>x_{js}^1}(b_{jp}^2(s,x_s^2,ω) − b_{jp}^1(s,x_s^1,ω))dA_{ps}
≤ Σ_{p=1}^m E∫_0^{t∧τ_N} k_{N,T}(s)ρ_{N,T}((x_{js}^2 − x_{js}^1)^+)dA_{ps}, ∀t ∈ [0,T].
Introduce the time changes τ_j(·) generated by Ã_t = ∫_0^t k_{N,T}(s)dΣ_{p=1}^m(A_{ps} + s), ∀t ≥ 0, and set
y_{jt} = (x_{jt}^2 − x_{jt}^1)^+, as t ∈ [0,T); y_{jt} = 0, otherwise.
Then we have that, ∀t ≥ 0, ∀j = 1,2,…,d,
E|y_j(τ_{j,t∧τ_N})| ≤ ∫_0^t ρ_{N,T}(E|y_j(τ_{j,s∧τ_N})|)ds.
Hence, as in the proof of the previous theorem, we have that P-a.s., ∀j = 1,2,…,d, x_{jt}^1 ≥ x_{jt}^2, ∀t ∈ [0,T). Since T is arbitrary, the desired result is obtained. ∎
As an application we give the following example.
Example 297 Consider the following SDE with jumps:
x_{jt} = x_{j0} + ∫_0^t b_j(s,x_{js},ω)ds + Σ_k ∫_0^t σ_{jk}(s,ω)dw_k(s) + ∫_0^t ∫_Z c_j(s,x_{js−},z,ω)Ñ_k(ds,dz), j = 1,2,…,d.
Suppose that
sgn(x_j − y_j)·(b_j(t,x_j,ω) − b_j(t,y_j,ω)) ≤ k_T^N(t)ρ_T^N(|x_j − y_j|), as |x_j|, |y_j| ≤ N, ∀t ∈ [0,T],
x_j ≥ y_j ⟹ x_j + c_j(t,x_j,z,ω) ≥ y_j + c_j(t,y_j,z,ω),
where k_T^N(t) and ρ_T^N(u) satisfy the same conditions as in Theorem 296. If x_t^i, i = 1,2, are the two solutions corresponding to the given initial conditions x_0^i, i = 1,2, respectively, then
x_{j0}^1 ≥ x_{j0}^2, j = 1,2,…,d ⟹ x_{jt}^1 ≥ x_{jt}^2, ∀t ≥ 0, j = 1,2,…,d.
Notice that in the above example the σ_{jk}(s,ω) do not depend on x.
10.1.3 Applications to Existence of Strong Solutions. Weaker Conditions.

For a 1-dimensional SDE one can use the comparison theorem (Theorem 295) to derive the existence of a strong solution under, in some sense, weaker conditions. Let us consider the following SDE with jumps in 1-dimensional space:
x_t = x_0 + ∫_0^t b(s,x_s)ds + ∫_0^t σ(s,x_s)dw_s + ∫_0^t ∫_Z c(s,x_{s−},z)Ñ_k(ds,dz), ∀t ≥ 0.
First we establish a theorem on the existence of a pathwise unique strong solution by combining three results: the existence of a weak solution (through Theorem 175), the validity of the pathwise uniqueness of solutions (through Theorem 170), and then the existence of a pathwise unique strong solution (through Theorem 137).
Theorem 298 Assume that
1° b = b(t,x) : [0,∞) × R^d → R^d, σ = σ(t,x) : [0,∞) × R^d → R^{d⊗d}, c = c(t,x,z) : [0,∞) × R^d × Z → R^d are Borel measurable processes such that P-a.s.
|b(t,x)| ≤ c_1(t)(1 + |x|),
|σ(t,x)|² + ∫_Z |c(t,x,z)|² π(dz) ≤ c_1(t)(1 + |x|²),
where c_1(t) is non-negative and non-random such that for each T < ∞, ∫_0^T c_1(t)dt < ∞;
2° b(t,x) and σ(t,x) are continuous in x; and lim_{h→0} ∫_Z |c(t,x+h,z) − c(t,x,z)|² π(dz) = 0;
3° for each N = 1,2,…, and each T < ∞,
|σ(t,x_1) − σ(t,x_2)|² ≤ k_T^N(t)ρ_T^N(|x_1 − x_2|),
x ≥ y ⟹ x + c(t,x,z) ≥ y + c(t,y,z), as |x_i| ≤ N, i = 1,2, t ∈ [0,T];
where ∫_0^T k_T^N(t)dt < ∞; and ρ_T^N(u) ≥ 0, as u ≥ 0, is non-random, strictly increasing, continuous and concave such that ∫_{0+} du/ρ_T^N(u) = ∞;
4° Z = R^d − {0}, and ∫_{R^d−{0}} |z|²/(1+|z|²) π(dz) < ∞,
and for each N = 1,2,…, T < ∞, there exist k_T^N(t) and ρ_T^N(u) ≥ 0 such that
sgn(x − y)·(b(t,x) − b(t,y)) ≤ k_T^N(t)ρ_T^N(|x − y|), as |x|, |y| ≤ N, t ∈ [0,T],
where N, T > 0 are arbitrarily given, and 0 ≤ k_T^N(t) is non-random such that ∫_0^T k_T^N(t)dt < ∞; 0 ≤ ρ_T^N(u) is non-random, strictly increasing in u ≥ 0 with ρ_T^N(0) = 0, and ∫_{0+} du/ρ_T^N(u) = ∞;
4° x ≥ y ⟹ x + c(t,x,z) ≥ y + c(t,y,z);
5° xb(t,x) ≤ c_1(t)(1 + |x|² g_{k_0}(x)), where g_{k_0}(x) = 1 + ln(1 + ln(1 + ⋯ ln(1 + |x|²)⋯)) (k_0 iterated logarithms).
… > 0. Now let us show that the weak uniqueness of weak solutions for (10.7) holds on t ∈ [0,T]. Suppose that x_t^i, i = 1,2, are two weak solutions of SDE (10.7), defined on probability spaces (Ω^i, F^i, (F_t^i)_{t≥0}, P^i), i = 1,2, respectively. That is, x_t^i, i = 1,2, satisfy that P^i-a.s., ∀t ≥ 0,
x_t^i = x_0 + ∫_0^t b(s,x_s^i)ds + ∫_0^t σ(s,x_s^i)dw_s^i + ∫_0^t ∫_Z c(s,x_{s−}^i,z)N_{k^i}(ds,dz),
where w_t^i and Ñ_{k^i}(ds,dz) are the BM and the Poisson martingale measure defined on the probability space (Ω^i, F^i, (F_t^i)_{t≥0}, P^i), respectively, such that
Ñ_{k^i}(ds,dz) = N_{k^i}(ds,dz) − π(dz)ds;
that is, the Poisson martingale measures still have the same compensator π(dz)ds. Now, for any given T < ∞, as i = 1,2, set
Z_t^i = exp[−∫_0^t (σ^{−1}b)(s,x_s^i) dw_s^i − ½∫_0^t |(σ^{−1}b)(s,x_s^i)|² ds], dP̃^i = Z_T^i dP^i,
w̃_t^i = ∫_0^t (σ^{−1}b)(s,x_s^i) ds + w_t^i.
Then one finds that, P̃_T^i-a.s., ∀t ∈ [0,T],
x_t^i = x_0 + ∫_0^t σ(s,x_s^i) dw̃_s^i + ∫_0^t ∫_Z c(s,x_{s−}^i,z) N_{k^i}(ds,dz),   (10.8)
where w̃_t^i, t ∈ [0,T], is a BM under the new probability measure P̃_T^i, and N_{k^i}((0,t],dz), t ∈ [0,T], is a Poisson martingale measure with the same compensator π(dz)t, also under the new probability measure P̃_T^i. However, by Theorem 154 the pathwise uniqueness holds for SDE (10.8). So, applying Corollary 140, one finds that the weak uniqueness holds for SDE (10.8). Furthermore, take any non-negative B_D × B_{W_0} × B_D-measurable function f : (D × W_0 × D, B_D × B_{W_0} × B_D) → (R^1, B_{R^1}), where B_D, B_{W_0}, and B_{R^1} are the Borel σ-fields in D, W_0 and R^1, respectively. Moreover, D and W_0 are the totality of all RCLL real functions, and of all real continuous functions f(t) with f(0) = 0, defined on [0,T], respectively. By Corollary 140 one finds that
∫_{Ω^1} f(x^1(·), ζ^1(·)) dP̃_T^1 = ∫_{Ω^2} f(x^2(·), ζ^2(·)) dP̃_T^2, where Ñ_{k^i}(dt,dz) = N_{k^i}(dt,dz) − π(dz)dt. In particular, take
f̃(x^i(·), ζ^i(·)) = f(x^i(·)) exp[∫_0^T (σ^{−1}b)(s,x_s^i) dw̃_s^i − ½∫_0^T |(σ^{−1}b)(s,x_s^i)|² ds],
where the stochastic integral is rewritten as a functional of (x^i(·), ζ^i(·)) by means of σ^{−1}(s,x_{s−}^i)c(s,x_{s−}^i,z)N_{k^i}(ds,dz). One sees that
∫_{Ω^1} f̃(x^1(·), ζ^1(·)) dP̃_T^1 = ∫_{Ω^2} f̃(x^2(·), ζ^2(·)) dP̃_T^2.
This is equivalent to saying that ∫_{Ω^1} f(x^1(·))(Z_T^1)^{−1} dP̃_T^1 = ∫_{Ω^2} f(x^2(·))(Z_T^2)^{−1} dP̃_T^2. Thus,
∫_{Ω^1} f(x^1(·)) dP^1 = ∫_{Ω^2} f(x^2(·)) dP^2.
That is, the weak uniqueness of solutions to (10.7) holds.
Let us give an example for Theorem 305 on the existence of a pathwise unique strong solution to a 1-dimensional SDE with jumps, with coefficients such that b(t,x) is discontinuous and grows very much faster than
≥ 0 is a constant depending only on r;
3° b(t,x) is continuous in x, σ(t,x) is jointly continuous in (t,x); and lim_{h,h′→0} ∫_Z |c(t+h′, x+h, z) − c(t,x,z)|² π(dz) = 0;
4° |σ(t,x,ω) − σ(t,y,ω)|² ≤ k_T^N(t)ρ_T^N(|x − y|), as |x|, |y| ≤ N, t ∈ [0,T]; where 0 ≤ k_T^N(t) is non-random such that ∫_0^T k_T^N(t)dt < ∞, for each given T < ∞; 0 ≤ ρ_T^N(u) is non-random, strictly increasing in u ≥ 0 with ρ_T^N(0) = 0, and ∫_{0+} du/ρ_T^N(u) = ∞;
5° sgn(x − y)·(b(t,x) − b(t,y)) ≤ k_T^N(t)ρ_T^N(|x − y|), as |x|, |y| ≤ N, ∀x, y ∈ R^1, ∀t ∈ [0,T]; where k_T^N(t) and ρ_T^N(u) have the same properties as those in 4°; besides, ρ_T^N(u) is concave;
6° one of the following conditions is satisfied:
(i) ∫_Z |c(t,x,z)| π(dz) ≤ c(t)g(|x|),
∫_Z |c(t,x,z) − c(t,y,z)| π(dz) ≤ k_T^N(t)ρ_T^N(|x − y|), as |x|, |y| ≤ N, ∀x, y ∈ R^1, ∀t ∈ [0,T];
where c(t) and g(u) satisfy the following properties: ∫_0^T c(t)dt < ∞, for each T < ∞; g(|x|) is locally bounded, that is, |g(|x|)| ≤ k_r, as |x| ≤ r, for each r < ∞; moreover, k_T^N(t) and ρ_T^N(u) have the same properties as those in 4°; besides, ρ_T^N(u) is concave;
(ii) x ≥ y ⟹ x + c(t,x,z) ≥ y + c(t,y,z).
Then, for any given constant x_0 ∈ R, (10.7) has a pathwise unique strong solution on t ≥ 0.
→ 0, as n → ∞. On the other hand, |x_t^n|² ≤ |x_t^1|² + |ξ_t|², and by Lemma 114
E sup_{t≤T}[|x_t^1|² + |ξ_t|²] ≤ k_T < ∞.
Hence by Lebesgue's dominated convergence theorem one finds that
lim_{n→∞} E∫_0^T |x_s^n − x_s|² ds = 0.
From this one can take a subsequence {n_k} of {n}, denoted by {n} again, such that dt × dP-a.e. x_t^n → x_t in R^d, and
E∫_0^T |x_s^n − x_s|² ds < 1/2^n, n = 1,2,….
Hence E∫_0^T sup_n |x_t^n|² dt < ∞. Now notice that
x_t^n = x_0 + A_t^n + ∫_0^t σ_n(s,x_s^n,ω)dw_s + ∫_0^t ∫_Z c_n(s,x_{s−}^n,z,ω)Ñ_k(ds,dz),
where A_t^n = ∫_0^t b_n(s,x_s^n,ω)ds. Hence there exists a limit (if necessary, take a subsequence)
A_t = lim_{n→∞} A_t^n,
10.3 Strong Solutions for 1-Dimensional SDE with Jumps
and we have
x_t = x₀ + A_t + ∫_0^t σ(s, x_s, ω) dw_s + ∫_0^t ∫_Z c(s, x_{s−}, z, ω) Ñ_k(ds, dz),
since σ is continuous in x, c satisfies condition 3°, and they are of less than linear growth. Notice that
|b_n(s, x^n_s, ω)| ≤ c₁(s)[1 + sup_n (|x^n_s|)].
Hence one finds that A^n_t is a finite variational process, and so also is A_t. Let us show that
A_t = ∫_0^t b(s, x_s, ω) ds.    (10.9)
Indeed, from m¹G = 0, applying the occupation density formula for the local time of a semi-martingale (Lemma 159) one finds that
0 = ∫_G L^a_t(x) da = ∫_0^t I_{(x_s ∈ G)} |σ(s, x_s, ω)|² ds.
Since by assumption |σ(s, x_s, ω)|² > 0, as x_s ∈ G, hence m¹({s ∈ [0,t] : x_s ∈ G}) = 0. Therefore, by conclusion 4) of Lemma 308,
∫_0^t |b(s, x_s, ω) − b_n(s, x^n_s, ω)| ds = ∫_0^t I_{(x_s ∉ G)} |b(s, x_s, ω) − b_n(s, x^n_s, ω)| ds → 0.
Thus (10.9) is derived. Now suppose that x'_t is another strong solution of (10.7). By Lemma 308, b_n(t, x, ω) ≤ b(t, x, ω). Hence, applying again Theorem 295, one has that x^n_t ≤ x'_t, ∀n. Letting n → ∞, it is obtained that x_t ≤ x'_t. Therefore x_t is a minimal solution of (10.7). In addition, if 6° holds, then by Lemma 115 the solution is also pathwise unique. ∎
Furthermore, in the case that b, σ and c are non-random, the condition on σ can also be weakened. Actually, we have the following theorem.
Theorem 310 Assume that
1° b = b(t,x) : [0,∞) × R → R, σ = σ(t,x) : [0,∞) × R → R^{1⊗d}, c = c(t,x,z) : [0,∞) × R × Z → R are jointly Borel measurable processes such that
|b(t,x)| ≤ c₁(t)(1 + |x|), |σ(t,x)|² + ∫_Z |c(t,x,z)|² π(dz) ≤ c₁(t)(1 + |x|²),
where c₁(t) is non-negative and non-random such that for each T < ∞, ∫_0^T c₁(t) dt < ∞;
2° b(t,x) is continuous in x ∈ R¹\G, where G ⊂ R¹ is a Borel measurable set such that x ∈ G ⟹ σ(t,x) > 0, ∀t ≥ 0, and m¹G = 0, where m¹ is the Lebesgue measure in R¹;
3° σ(t,x) is jointly continuous in (t,x), and lim_{h,h'→0} ∫_Z |c(t+h', x+h, z) − c(t,x,z)|² π(dz) = 0;
4° for each N = 1,2,⋯, and each T < ∞,
|σ(t,x₁,ω) − σ(t,x₂,ω)|² ≤ c_T^N(t) ρ_T^N(|x₁ − x₂|),
10. Comparison Theorem and Stochastic Pathwise Control
as |xᵢ| ≤ N, i = 1,2, t ∈ [0,T]; where ∫_0^T c_T^N(t) dt < ∞, and ρ_T^N(u) ≥ 0, as u ≥ 0, is non-random, strictly increasing, continuous and concave such that ∫_{0+} du/ρ_T^N(u) = ∞;
5° x ≥ y ⟹ x + c(t,x,z,ω) ≥ y + c(t,y,z,ω);
6° Z = R¹ − {0}, ∫_Z π(dz) < ∞.
Then (10.7) has a minimal strong solution. Furthermore, if, in addition, the following condition also holds:
7° sgn(x − y)·(b(t,x) − b(t,y)) ≤ k_T^N(t) ρ_T^N(|x − y|), as |x|, |y| ≤ N, ∀x, y ∈ R¹, ∀t ∈ [0,T]; where k_T^N(t) and ρ_T^N(u) have the same property as that in 4°;
then (10.7) has a pathwise unique strong solution.
Proof. Let b_n(t,x) = inf_y [b(t,y) + (n ∨ c₁(t))|y − x|], n ≥ 1. Then b_n is Lipschitz continuous in x. By Theorem 170 the SDE
x^n_t = x₀ + ∫_0^t b_n(s, x^n_s) ds + ∫_0^t σ(s, x^n_s) dw_s + ∫_0^t ∫_Z c(s, x^n_{s−}, z) q(ds, dz)    (10.10)
has a weak solution x^n_t. Notice that by conditions 4° and 5° the following Tanaka formula holds:
|x^{n,1}_t − x^{n,2}_t| = ∫_0^t sgn(x^{n,1}_s − x^{n,2}_s) d(x^{n,1}_s − x^{n,2}_s),
if {x^{n,i}_t}_{t≥0}, i = 1,2, are two solutions of (10.10). This means that the pathwise uniqueness of solutions to (10.10) holds, because b_n is Lipschitz continuous in x. Hence by the Yamada-Watanabe type theorem (Theorem 137) (10.10) has a pathwise unique strong solution x^n_t. Now the remaining proof follows in exactly the same way as in Theorem 309. ∎
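The Lipschitz approximation b_n(t,x) = inf_y [b(t,y) + (n ∨ c₁(t))|y − x|] used in this proof is an inf-convolution, and it can be computed explicitly on a grid. The following sketch (with a hypothetical discontinuous b, and the minimum taken over a finite grid rather than all of R) illustrates its two key properties: b_n ≤ b and n-Lipschitz continuity.

```python
def lipschitz_approx(b, n, grid):
    """Discretized inf-convolution b_n(x) = min_y [b(y) + n*|y - x|],
    with y restricted to a finite grid (a sketch of the proof's construction)."""
    return [min(b(y) + n * abs(y - x) for y in grid) for x in grid]

grid = [i / 100.0 for i in range(-200, 201)]   # grid on [-2, 2], step 0.01
b = lambda x: 0.0 if x < 0 else 1.0            # drift discontinuous at x = 0
bn = lipschitz_approx(b, n=10, grid=grid)
```

Away from the discontinuity, b_n already agrees with b for moderate n, while near x = 0 it interpolates with slope at most n; letting n → ∞ recovers b at every continuity point.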
Example 311 Set σ(t,x) = (I_{t≠0} t^{−α₀} + δ₀)[k₀ + k₁x], c(t,x,z) = I_{t≠0} t^{−α₁}(k̄₀ + c̄₁x) c₀(z), where αᵢ < 1, δ₀ > 0, k̄ᵢ, kᵢ, i = 0,1, k₁ ≥ 0 all are constants, k₀ > 0, and ∫_Z |c₀(z)|² π(dz) < ∞. Then σ(t,x) > 0 at the discontinuous points x of b(t,x). (However, returning from the SDE to an ODE, one needs to put σ(t,x) = 0.) Furthermore, we can also generalize the result to the case where b does not necessarily have less than linear growth.
Theorem 312 Assume that
1° b = b(t,x) : [0,∞) × R → R, σ = σ(t,x) : [0,∞) × R → R^{1⊗d}, c = c(t,x,z) : [0,∞) × R × Z → R are jointly Borel measurable processes such that
|b(t,x)| ≤ c₁(t)(1 + |x| Π_{k=1}^{k₀} ĝ_k(x)), |σ(t,x)|² ≤ c₁(t)(1 + |x|²), ∫_Z |c(t,x,z)|² π(dz) ≤ c₁(t),
where ĝ_k(x) = ln(1 + ln(1 + ⋯ ln(1 + |x|^{n₀}) ⋯ )) (the logarithm iterated k times; n₀ is some natural number), k₀ > 0 is a constant, and c₁(t) ≥ 0 is such that for each T < ∞, ∫_0^T c₁(t) dt < ∞;
2° b(t,x) is continuous in x ∈ R¹\F, where F ⊂ R¹ is a compact set such that m¹F = 0, where m¹ is the Lebesgue measure in R¹; moreover, there exist a δ₀ > 0 and an open set G₁ ⊃ F such that x ∈ G₁ ⟹ σ(t,x) ≥ δ₀ > 0, ∀t ≥ 0;
3° σ(t,x) is jointly continuous in (t,x), and lim_{h,h'→0} ∫_Z |c(t+h', x+h, z) − c(t,x,z)|² π(dz) = 0;
4° for each N = 1,2,⋯, and each T < ∞,
|σ(t,x₁,ω) − σ(t,x₂,ω)|² ≤ c_T^N(t) ρ_T^N(|x₁ − x₂|),
as |xᵢ| ≤ N, i = 1,2, t ∈ [0,T]; where ∫_0^T c_T^N(t) dt < ∞, and ρ_T^N(u) ≥ 0, as u ≥ 0, is non-random, strictly increasing, continuous and concave such that ∫_{0+} du/ρ_T^N(u) = ∞;
5° x ≥ y ⟹ x + c(t,x,z,ω) ≥ y + c(t,y,z,ω);
6° Z = R¹ − {0}, ∫_Z π(dz) < ∞.
Then (10.7) has a weak solution. Furthermore, if, in addition, the following condition also holds:
7° sgn(x − y)·(b(t,x) − b(t,y)) ≤ k_T^N(t) ρ_T^N(|x − y|), as |x|, |y| ≤ N, ∀x, y ∈ R¹, ∀t ∈ [0,T]; where k_T^N(t) and ρ_T^N(u) have the same property as that in 4°;
then (10.7) has a pathwise unique strong solution.
Proof. Let b_n(t,x) = b(t,x) W_n(x), where W_n(x) is a smooth function such that 0 ≤ W_n(x) ≤ 1, W_n(x) = 1, as |x| ≤ n; and W_n(x) = 0, as |x| ≥ n+1. Then by the previous theorem (Theorem 310), for each n there exists a pathwise unique strong solution x^n_t satisfying the following SDE: ∀t ≥ 0,
x^n_t = x₀ + ∫_0^t b_n(s, x^n_s) ds + ∫_0^t σ(s, x^n_s) dw_s + ∫_0^t ∫_Z c(s, x^n_{s−}, z) Ñ_k(ds, dz).
Now, in the same way as in Theorem 176, one finds that "the result of SDE from the Skorohod weak convergence technique" holds. In particular, we have that (x̃^n_t) satisfies the following SDE with w̃^n_t and q̃(dt,dz) on (Ω̃, F̃, P̃), where (x̃^n_t, w̃^n_t, q̃^n(dt,dz)) come from "the result of SDE from the Skorohod weak convergence technique", etc.:
Also in exactly the same way one can prove that, as n → ∞,
∫_0^t σ(s, x̃^n_s) dw̃^n_s → ∫_0^t σ(s, x̃⁰_s) dw̃⁰_s, in probability P̃,
and
∫_0^t ∫_Z c(s, x̃^n_{s−}, z) q̃(ds, dz) → ∫_0^t ∫_Z c(s, x̃⁰_{s−}, z) q̃(ds, dz), in probability P̃.
Write
x̃^n_t = x₀ + Ã^n_t + ∫_0^t σ(s, x̃^n_s) dw̃^n_s + ∫_0^t ∫_Z c(s, x̃^n_{s−}, z) q̃(ds, dz),    (10.11)
where Ã^n_t = ∫_0^t b_n(s, x̃^n_s) ds. Hence there exists a limit (in probability) Ã_t = lim_{n→∞} Ã^n_t.
Write τ_N = inf{t ≥ 0 : |x̃⁰_t| > N}. So it only remains for us to prove that
|∫_0^{t∧τ_N} (b_n(s, x̃^n_s) − b(s, x̃⁰_s)) ds| → 0, in probability P̃.
If this can be done, then letting n → ∞ in (10.11) we can show that x̃⁰_t satisfies: P̃-a.s.,
x̃⁰_{t∧τ_N} = x₀ + ∫_0^{t∧τ_N} b(s, x̃⁰_s) ds + ∫_0^{t∧τ_N} σ(s, x̃⁰_s) dw̃⁰_s + ∫_0^{t∧τ_N} ∫_Z c(s, x̃⁰_{s−}, z) q̃(ds, dz), ∀t ≥ 0.
Notice that τ_N ↑ ∞, as N ↑ ∞. We obtain that x̃⁰_t is a weak solution of (10.7), as t ≥ 0. Now observe that for any ε > 0,
P̃(|∫_0^{t∧τ_N} (b_n(s, x̃^n_s) − b(s, x̃⁰_s)) ds| > ε)
≤ P̃(sup_{s≤T} |x̃^n_s| + sup_{s≤T} |x̃⁰_s| > Ñ)
+ (3/ε) E^{P̃} ∫_0^{T∧τ_N} I_{x̃⁰_s ∈ G₂} k_Ñ I_{sup_{s≤T}|x̃^n_s| + sup_{s≤T}|x̃⁰_s| ≤ Ñ} ds
+ (3/ε) E^{P̃} ∫_0^{T∧τ_N} I_{x̃⁰_s ∉ G₂} |b_n(s, x̃^n_s) − b(s, x̃⁰_s)| I_{sup_{s≤T}|x̃^n_s| + sup_{s≤T}|x̃⁰_s| ≤ Ñ} ds
= I₁^Ñ + I₂^Ñ + I₃^{Ñ,n},
where k_Ñ = 2(1 + Ñ Π_{k=1}^{k₀} ĝ_k(Ñ)), and F ⊂ G₂ ⊂ G₁, G₂ a bounded open set. Obviously, for arbitrary ε̃ > 0 one can take a large enough Ñ
such that I₁^Ñ < ε̃/3 (see the proof of Theorem 176). Notice that by the occupation density formula of local time (Lemma 159),
I₂^Ñ ≤ (2 k_Ñ)/(3 ε δ₀²) E ∫_{G₂} L^a_{t∧τ_N−}(x̃⁰) da,
and
E|Ã_{t∧τ_N−}| ≤ E|x̃⁰_{t∧τ_N−}| + E|x₀| + E|∫_0^{t∧τ_N−} σ(s, x̃⁰_s) dw̃⁰_s| + E|∫_0^{t∧τ_N−} ∫_Z c(s, x̃⁰_{s−}, z) q̃(ds, dz)| < ∞,
so I₂^Ñ can also be made less than ε̃/3, by choosing G₂ ⊃ F with small enough Lebesgue measure. Finally, for any η > 0,
P̃(|b(s, x̃^n_s) − b(s, x̃⁰_s)| I_{x̃⁰_s ∈ F_η^Ñ} > ε) ≤ P̃(|x̃^n_s − x̃⁰_s| > η),
where F_η^Ñ = G₂^c ∩ {x ∈ R¹ : |x| ≤ Ñ}. Hence, for each given s, as n → ∞, P̃(|x̃^n_s − x̃⁰_s| > η) → 0. So, applying Lebesgue's dominated convergence theorem, one finds that as n → ∞, I₃^{Ñ,n} → 0. Therefore, we have proved that as n → ∞,
|∫_0^{t∧τ_N} (b_n(s, x̃^n_s) − b(s, x̃⁰_s)) ds| → 0, in probability P̃.
So x̃⁰_t is a weak solution of (10.7). Finally, if 7° holds, then by the Tanaka type formula the pathwise uniqueness of solutions to (10.7) also holds. Hence by the Yamada-Watanabe theorem (Theorem 137) (10.7) has a pathwise unique strong solution. ∎
10.4 Stochastic Pathwise Bang-Bang Control for a Non-linear System

10.4.1 Non-Degenerate Case

Now let us formulate an existence theorem of the optimal Bang-Bang control for a very non-linear stochastic system with jumps. Let
J(u) = E ∫_0^T |x^u_t|^{k₁} dt,
where x^u_t is the pathwise unique strong solution of the 1-dimensional SDE with jumps (10.13), and the admissible control u = u(t, x_t) is such that u(t,x) is jointly measurable, |u(t,x)| ≤ 1, ∀t ∈ [0,T], x ∈ R¹, and it makes (10.13) have a pathwise unique strong solution. Denote the admissible control set by U. Our object is to find an optimal control u⁰ ∈ U such that J(u⁰) = min_{u∈U} J(u). We have the following theorem.
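Before the theorem, the Bang-Bang principle J(u⁰) ≤ J(u) can be seen numerically on a toy version of this problem. The code below Monte-Carlo-estimates J(u) = E ∫_0^T x_t² dt for the simple controlled diffusion dx = u(x) dt + 0.2 dw (a drastically simplified stand-in for (10.13); the coefficients, cost exponent and horizon are assumptions for illustration only), comparing u = −sgn(x) with the uncontrolled u = 0.

```python
import math
import random

def cost(control, x0=1.0, T=1.0, dt=0.01, n_paths=500, seed=1):
    """Monte Carlo estimate of J(u) = E int_0^T x_t^2 dt for
    dx = u(x) dt + 0.2 dw (toy system; Euler discretization)."""
    rng = random.Random(seed)
    steps = int(T / dt)
    total = 0.0
    for _ in range(n_paths):
        x, acc = x0, 0.0
        for _ in range(steps):
            acc += x * x * dt
            x += control(x) * dt + 0.2 * rng.gauss(0.0, math.sqrt(dt))
        total += acc
    return total / n_paths

sgn = lambda x: (x > 0) - (x < 0)
J_bang = cost(lambda x: -sgn(x))   # Bang-Bang feedback u = -sgn(x)
J_zero = cost(lambda x: 0.0)       # no control
```

With these parameters the Bang-Bang control drives the state toward 0 at maximal speed, so its cost is markedly smaller; this mirrors, but of course does not prove, the optimality statement below.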
Theorem 313 Assume that
1° A^i_t, i = 0,1,2,3,4, C^i_t, and B_t all are non-random real continuous functions of t; N₀ ≥ 0 is a constant; k₁, k₂ and k₃ are any even natural numbers; and D_t(z) ≥ 0 is jointly measurable, non-random such that
∫_Z |D_t(z)|² π(dz) < ∞, ∫_Z |D_{t+h}(z) − D_t(z)|² π(dz) → 0, as h → 0;
moreover, Z = R − {0}, ∫_Z π(dz) < ∞;
2° A^i_t ≥ 0, i = 2,3,4; C¹_t ≥ 0; C⁰_t ≥ δ₀ > 0, where δ₀ is a constant; B_t > C⁰_t C¹_t;
3° g₂(t,u) is jointly continuous in (t,u) ∈ [0,∞) × R such that g₂(t,0) = 0, g₂(t,u) ≥ 0, as u ≥ 0; x ≥ y ≥ 0 ⟹ g₂(t,x) ≥ g₂(t,y);
4° g₁(t,|x|²) is jointly continuous in (t,x) such that g₁(t,|x|²) ≤ k₀(1 + Π_{k=1}^{k₀} ĝ_k(x)),
where ĝ_k(x) = ln(1 + ln(1 + ⋯ ln(1 + |x|^{n₀}) ⋯ )) (the logarithm iterated k times; n₀ is some natural number), and ∂g₁(t,|x|²)/∂x exists such that it is uniformly locally bounded in x; that is, for each r < ∞, |∂g₁(t,|x|²)/∂x| ≤ k_r, as |x| ≤ r, where k_r ≥ 0 is a constant only depending on r.
Then an optimal stochastic control exists, which is admissible and Bang-Bang; that is, there exists an admissible control u⁰ ∈ U such that
J(u⁰) = min_{u∈U} J(u), u⁰_t = −sgn x⁰_t,
where the optimal trajectory x⁰_t is the pathwise unique strong solution of (10.13) with u_t = −sgn x. That is, the SDE for the optimal trajectory is: ∀t ∈ [0,T],
Proof. Writing out the coefficients of (10.14), then
x b(t,x) ≤ A_t x² g₁(t,|x|²) + A⁰_t x² − B_t x·sgn(x) ≤ k₀ + |A_t| |x|²(1 + Π_{k=1}^{k₀} ĝ_k(x)),
σ(t,x) ≥ δ₀ > 0, ∫_Z |c(t,x,z)|² π(dz) < ∞,
x ≥ y ⟹ x + c(t,x,z) ≥ y + c(t,y,z).
Hence Theorem 305 applies, and (10.14) has a pathwise unique strong solution. Secondly, let us show that J(u⁰) ≤ J(u), ∀u ∈ U. In fact, for any u ∈ U let x^u_t be the pathwise unique strong solution of (10.13). Write
w̃^u_t = ∫_0^t [sgn(x^u_s) I_{x^u_s ≠ 0} + I_{x^u_s = 0}] dw_s = ∫_0^t p(x^u_s) dw_s.
Then E[(w̃^u_t)² − (w̃^u_s)² | F_s] = t − s. Hence (w̃^u_t)_{t≥0} is still a BM by Theorem 97. Moreover, one easily sees that p(x)⁻¹ exists, and p(x)⁻¹ = 1, as x = 0; p(x)⁻¹ = sgn(x), as x ≠ 0. Hence x p(x)⁻¹ = |x|, and we can write dw_t = p(x^u_t)⁻¹ dw̃^u_t. So x^u_t will satisfy (10.13) with p(x^u_t)⁻¹ dw̃^u_t substituting dw_t. Similarly, x⁰_t will satisfy (10.14) with p(x⁰_t)⁻¹ dw̃⁰_t substituting dw_t. Applying Lemma 314 we find a probability space (Ω̃, F̃, (F̃_t)_{t≥0}, P̃)
and four 1-dimensional F̃_t-adapted random processes (x̃^u_t, x̃⁰_t, w̃_t, ζ̃_t) such that the finite-dimensional probability distributions of (x̃^u_t, w̃_t, ζ̃_t) and (x̃⁰_t, w̃_t, ζ̃_t) coincide with those of (x^u_t, w̃^u_t, ζ^u_t) and (x⁰_t, w̃_t, ζ_t), respectively. Moreover, w̃_t is a 1-dimensional F̃_t-BM under the probability P̃, and if we write
p̃((0,t], U) = Σ_{0<s≤t} I_{Δζ̃_s ∈ U}, for t > 0, U ∈ B(Z), q̃(dt, dz) = p̃(dt, dz) − π(dz) dt,
then q̃ is a Poisson martingale measure with compensator π(dz) dt, and the comparison of x̃^u_t with x̃⁰_t yields J(u⁰) ≤ J(u). ∎
Theorem 315 Assume that B_t > C⁰_t C¹_t, that 0 ≤ α < 1 is a constant, and that g₁(t,u) is jointly continuous in (t,u) ∈ [0,∞) × R such that |g₁(t,u)| ≤ k₀, g₁(t,0) = 0; g₁(t,u) ≥ 0, as u ≥ 0; x ≥ y ≥ 0 ⟹ g₁(t,x) ≥ g₁(t,y). Then an optimal stochastic control exists, which is admissible, feedback and Bang-Bang; that is, there exists an admissible control u⁰ ∈ U such that
J(u⁰) = min_{u∈U} J(u), u⁰_t = −sgn x⁰_t,
where x⁰_t is the pathwise unique strong solution of (10.15) with u_t = −sgn x.
Proof. First let us prove that (10.15) with u_t = −sgn x has a pathwise unique strong solution, so that u⁰_t = −sgn x⁰_t ∈ U. In fact, if we write
b(t,x) = A⁰_t x − A¹_t x^α g₁(t,|x|) − A²_t x I_{x ≠ 0} − B_t sgn(x), σ(t,x) = C⁰_t + C¹_t |x|, c(t,x,z) = D_t(z) x,
then
x b(t,x) ≤ A⁰_t x² − B_t x·sgn(x) ≤ |A⁰_t| x², |σ(t,0)|² ≥ δ₀ > 0, ∫_Z |c(t,x,z)|² π(dz) ≤ x² ∫_Z |D_t(z)|² π(dz) < ∞,
x ≥ y ⟹ x + c(t,x,z) ≥ y + c(t,y,z),
and b(t,x) is discontinuous at x = 0. However, |σ(t,x)|² ≥ (C⁰_t)² > 0, as x = 0. Moreover,
2(x₁ − x₂)(b(t,x₁) − b(t,x₂)) ≤ |A⁰_t| |x₁ − x₂|², |σ(t,x₁) − σ(t,x₂)|² ≤ |C¹_t|² |x₁ − x₂|², x ≥ y ⟹ x + c(t,x,z) ≥ y + c(t,y,z).
Hence Theorem 310 applies, and (10.15) with u_t = −sgn x has a pathwise unique strong solution. Now the proof of the remaining part is similar to that of Theorem 313. ∎
Notice that in the above theorem the coefficient b(t,x) has a discontinuity point x = 0, and at this point |σ(t,0)|² > 0. That is, σ(t,x) is non-degenerate at the discontinuous points of b(t,x), and the Lebesgue measure of the set of all discontinuous points is zero. Now let us consider another partially degenerate stochastic system, where its coefficient b(t,x) can be of greater than linear growth: ∀t ∈ [0,T],
Theorem 316 Assume that
1° A⁰_t, C⁰_t, C¹_t; A¹_t, A²_t and B_t all are non-random real continuous functions of t; and D_t(z) ≥ 0 is jointly measurable, non-random such that
∫_Z |D_t(z)|⁴ π(dz) < ∞, ∫_Z |D_{t+h}(z) − D_t(z)|² π(dz) → 0, as h → 0;
moreover, Z = R − {0}, ∫_Z π(dz) < ∞;
2° A¹_t, A²_t ≥ 0, C⁰_t (C¹_t)⁻¹ < 0, B_t > C⁰_t C¹_t, B_t > 0, and 0 ≤ α < 1 is a constant;
3° g₁(t,u) is jointly continuous in (t,u) ∈ [0,∞) × R such that |g₁(t,u)| ≤ k₀, g₁(t,0) = 0; g₁(t,u) ≥ 0, as u ≥ 0; x ≥ y ≥ 0 ⟹ g₁(t,x) ≥ g₁(t,y);
4° g₂(t,|x|²) is jointly continuous in (t,x) such that
|σ(t,x) − σ(t,y)|² ≤ k̄ |x − y|², x ≥ y ⟹ x + c(t,x,z) ≥ y + c(t,y,z).
Hence Theorem 312 applies, and (10.16) with u_t = −sgn x has a pathwise unique strong solution. Now the proof of the remaining part is similar to that of Theorem 313. ∎
k₀ > 0 is a constant, and C⁰_t ≥ δ₀ I_{d×d}, that is, C⁰_t − δ₀ I_{d×d} ≥ 0, where I_{d×d} is a d × d unit matrix; moreover, B_t > C⁰_t C¹_t, that is, the d × d matrix B_t − C⁰_t C¹_t is positive definite;
3° g₂(t,u) is real, jointly continuous in (t,u) ∈ [0,∞) × R^d such that ∂g₂(t,u)/∂u exists such that it is uniformly locally bounded in u; that is, for each r < ∞, |∂g₂(t,u)/∂u| ≤ k_r, as |u| ≤ r, where k_r ≥ 0 is a constant depending only on r; moreover, g₂(t,0) = 0, g₂(t,u) ≥ 0, as u ≥ 0; x ≥ y ≥ 0 ⟹ g₂(t,x) ≥ g₂(t,y);
4° g₁(t,|x|²) is real, jointly continuous in (t,x) such that g₁(t,|x|²) ≤ k₀(1 + Π_{k=1}^{k₀} ĝ_k(x)), where ĝ_k(x) is defined in 4° of Theorem 316.
Then an optimal stochastic control exists which is admissible and Bang-Bang; that is, there exists an admissible control u⁰ ∈ U such that
J(u⁰) = min_{u∈U} J(u), u⁰_t = −(x⁰_t/|x⁰_t|) I_{|x⁰_t| ≠ 0},
where the optimal trajectory x⁰_t is the pathwise unique strong solution of the SDE (10.17) with u_t = −(x/|x|) I_{|x| ≠ 0}.
Proof. First let us prove that (10.17) with u_t = −(x/|x|) I_{|x| ≠ 0} has a pathwise unique strong solution, so that u⁰_t = −(x⁰_t/|x⁰_t|) I_{|x⁰_t| ≠ 0} ∈ U. In fact, if we write
b(t,x) = A⁰_t x + A¹_t x g₁(t,|x|²) − A²_t x g₂(t,|x|) − A³_t x |x|^{2N₀} − B_t (x/|x|) I_{|x| ≠ 0}, σ(t,x) = C⁰_t + C¹_t |x|, c(t,x,z) = D_t(z)(x I_{|x| ≤ √N₀} + √N₀ I_{|x| > √N₀}),
then
(x, b(t,x)) ≤ A⁰_t |x|² − B_t |x| + k₀ |A¹_t| |x|²(1 + Π_{k=1}^{k₀} ĝ_k(x)), |σ(t,x)|² ≥ δ₀² > 0, ∫_Z |c(t,x,z)|² π(dz) ≤ N₀ ∫_Z |D_t(z)|² π(dz) < ∞,
and b(t,x) is discontinuous at x = 0. Moreover,
2(x − y, b(t,x) − b(t,y)) ≤ |A⁰_t| |x − y|² + k_{N,T} |x − y|², as |x|, |y| ≤ N.
For the general d-dimensional system (10.18), assume moreover that there exist a constant δ₀ > 0 and an open set G₁ ⊃ F such that x ∈ G₁ ⟹ |σ(t,x)| ≥ δ₀ > 0, ∀t ≥ 0. Then for any given constant x₀ ∈ R^d, (10.18) has a weak solution on t ≥ 0. Furthermore, if, in addition, the following condition (for the pathwise uniqueness) holds:
10.5 Bang-Bang Control for d-Dimensional Non-linear Systems
(PWU1) for each N = 1,2,⋯, and each T < ∞,
2((x₁ − x₂), (b(t,x₁) − b(t,x₂))) + ‖σ(t,x₁) − σ(t,x₂)‖² + ∫_Z |c(t,x₁,z) − c(t,x₂,z)|² π(dz) ≤ c_T^N(t) ρ_T^N(|x₁ − x₂|²),
as |xᵢ| ≤ N, i = 1,2, t ∈ [0,T]; where c_T^N(t) ≥ 0 is such that ∫_0^T c_T^N(t) dt < ∞; and ρ_T^N(u) ≥ 0, as u ≥ 0, is strictly increasing, continuous and concave such that ∫_{0+} du/ρ_T^N(u) = ∞;
then (10.18) has a pathwise unique strong solution.
Proof. Let us smooth out b(t,x) and σ(t,x) only with respect to x to get b_n(t,x) and σ_n(t,x), respectively (see Lemma 172 and its proof). Then by Theorem 175, for each n there exists a weak solution x^n_t with a BM w^n_t and a Poisson martingale measure Ñ_{kⁿ}(dt,dz), which has the same compensator π(dz)dt, defined on some probability space (Ωⁿ, Fⁿ, {F^n_t}, Pⁿ) such that Pⁿ-a.s. ∀t ≥ 0,
x^n_t = x₀ + ∫_0^t b_n(s, x^n_s) ds + ∫_0^t σ_n(s, x^n_s) dw^n_s + ∫_0^t ∫_Z c(s, x^n_{s−}, z) Ñ_{kⁿ}(ds, dz).
As in the proof of Theorem 176, one finds that "the result of SDE from the Skorohod weak convergence technique" holds. In particular, we have that (x̃^n_t) satisfies the following SDE with w̃^n_t and q̃(dt,dz) on (Ω̃, F̃, P̃), where (x̃^n_t, w̃^n_t, q̃(dt,dz)) come from "the result of SDE from the Skorohod weak convergence technique", etc.:
Write τ_N = inf{t ≥ 0 : |x̃⁰_t| > N}. Let us show that for each N, as n → ∞,
1) |∫_0^{t∧τ_N} (b_n(s, x̃^n_s) − b(s, x̃⁰_s)) ds| → 0, in probability P̃.
For this write Ã^n_{t∧τ_N} = ∫_0^{t∧τ_N} b_n(s, x̃^n_s) ds. Then there exists a limit (in probability) Ã_{t∧τ_N} = lim_{n→∞} Ã^n_{t∧τ_N}, and we find that
P̃(|∫_0^{t∧τ_N} (b_n(s, x̃^n_s) − b(s, x̃⁰_s)) ds| > ε)
≤ P̃(sup_{s≤T} |x̃^n_s| + sup_{s≤T} |x̃⁰_s| > Ñ)
+ (3/ε) E^{P̃} ∫_0^{T∧τ_N} I_{x̃⁰_s ∈ Ḡ₃} |b_n(s, x̃^n_s) − b(s, x̃⁰_s)| I_{sup_{s≤T}|x̃^n_s| + sup_{s≤T}|x̃⁰_s| ≤ Ñ} ds
+ (3/ε) E^{P̃} ∫_0^{T∧τ_N} I_{x̃⁰_s ∉ Ḡ₃} |b_n(s, x̃^n_s) − b(s, x̃⁰_s)| I_{sup_{s≤T}|x̃^n_s| + sup_{s≤T}|x̃⁰_s| ≤ Ñ} ds
= I₁^Ñ + I₂^{Ñ,n} + I₃^{Ñ,n},
where F ⊂ Ḡ₃ ⊂ G₂ ⊂ G₁, G₂ and G₃ are open sets, and Ḡ₃ is the closure of G₃. Obviously, for arbitrary ε̃ > 0 one can take Ñ large enough such that I₁^Ñ < ε̃/3. However, the term I₂^{Ñ,n} requires more discussion. Notice that y_t = x̃⁰_{t∧τ_N}, ∀t ≥ 0, satisfies the SDE
y_t = x₀ + Ã_{t∧τ_N} + ∫_0^{t∧τ_N} σ(s, y_s) I_{|y_s| ≤ N} dw̃⁰_s + ∫_0^{t∧τ_N} ∫_Z c(s, y_{s−}, z) I_{|y_{s−}| ≤ N} q̃(ds, dz),
where |σ(s,y)| ≥ δ₀ > 0 for y ∈ G₁, and F has Lebesgue measure zero; so, as in Theorem 176, the time y spends in Ḡ₃ can be made small, and I₂^{Ñ,n} < ε̃/3, uniformly in n, for a suitable choice of G₃ ⊃ F. Finally, for any η > 0,
P̃(|b(s, x̃^n_s) − b(s, x̃⁰_s)| I_{x̃⁰_s ∈ F_η^Ñ} > ε) ≤ P̃(|x̃^n_s − x̃⁰_s| > η),
where F_η^Ñ = G₃^c ∩ {x ∈ R^d : |x| ≤ Ñ}. Therefore, for each given s, as n → ∞, P̃(|b(s, x̃^n_s) − b(s, x̃⁰_s)| I_{x̃⁰_s ∈ F_η^Ñ} > ε) ≤ P̃(|x̃^n_s − x̃⁰_s| > η) → 0. So, applying Lebesgue's dominated convergence theorem, one finds that as n → ∞, I₃^{Ñ,n} → 0. Therefore, we have proved 1). However, since the conditions on σ and c are the same as those in Theorem 176, the other terms in (10.18) also have similar limits to those in Theorem 176. So x̃⁰_t is a weak solution of (10.18). Finally, if (PWU1) holds, then by the Tanaka type formula the pathwise uniqueness of solutions to (10.18) also holds. Hence by the Yamada-Watanabe theorem (Theorem 137) (10.18) has a pathwise unique strong solution. ∎
Now let us discuss the stochastic Bang-Bang control for d-dimensional SDEs with jumps and with partially degenerate coefficients. Consider the following partially degenerate stochastic system in d-dimensional space, where its coefficient b(t,x) can have greater than linear growth: ∀t ∈ [0,T],
Theorem 319 Assume that
1° A^i_t, C^i_t, i = 0,1; A²_t, A³_t and B_t all are non-random d × d continuous matrices of t; and D_t(z) ≥ 0 is a jointly measurable, non-random function such that ∫_Z |D_t(z)|² π(dz) < ∞.
Under the remaining conditions, parallel to those of Theorem 317, an admissible Bang-Bang optimal control again exists for this partially degenerate d-dimensional system.
For the reflecting problems of the next chapter, the boundary condition required is: there exists a c₀ > 0 such that e·n = (e, n) ≥ c₀, ∀n ∈ ∪_{x∈∂Θ} N_x. It is obvious that Θ = R^d_+ = {x = (x¹, ⋯, x^d) ∈ R^d : xⁱ > 0, 1 ≤ i ≤ d}
11.2 Notation
obviously satisfies this condition with e = (1/√d, ⋯, 1/√d) and c₀ = 1/√d. Condition |φ|_t = ∫_0^t I_{∂Θ}(x_s) d|φ|_s in (11.3) obviously means that |φ|_s increases only when x_s ∈ ∂Θ, and the condition φ_t = ∫_0^t n(s) d|φ|_s means that the reflection dφ_t happens along the direction of the inner normal vector n(t), so we may call such a reflection a normal reflection. The geometrical meaning of N_x is that it is the set of all inner normal vectors at x, when x ∈ ∂Θ. Obviously, when x ∈ Θ, by definition N_x = ∅ (the empty set). Actually, in the case that x ∈ Θ no reflection is needed, so we do not need an inner normal vector.
Remark 320 1° In the case Θ = R^d one can take φ_t ≡ 0.
2° One easily sees that in (11.3) conditions 3): |φ|_t = ∫_0^t I_{∂Θ}(x_s) d|φ|_s, and 4): φ_t = ∫_0^t n(s) d|φ|_s, are equivalent to
3'): for all f : Θ̄ → R^d, bounded and continuous such that f|_{∂Θ} = 0, and for all t ≥ 0,
∫_0^t f(x_s)·dφ_s = 0;
and 4'): for all y(·) ∈ D^d([0,∞), Θ̄), where D^d([0,∞), Θ̄) is the totality of RCLL functions f : [0,∞) → Θ̄, ∫_0^t (y_s − x_s)·dφ_s is increasing, as t is increasing.
Let us show 2° in Remark 320. In fact, 3) and 4) can be rewritten as
d|φ|_t = I_{∂Θ}(x_t) d|φ|_t, or 0 = I_Θ(x_t) d|φ|_t, and dφ_t = n(t) d|φ|_t.
So if they are true, then ∀f ∈ C_b(Θ̄; R^d) with f|_{∂Θ} = 0, where C_b(Θ̄; R^d) is the totality of bounded continuous functions f : Θ̄ → R^d,
f¹(x_t) n¹(t) d|φ|_t = 0, ⋯, f^d(x_t) n^d(t) d|φ|_t = 0, ∀t ≥ 0;
that is, f(x_t)·dφ_t = 0. So 3') is true. Furthermore, since Θ is a convex domain, and dφ_t = n(t) d|φ|_t,
(y_t − x_t)·dφ_t = (y_t − x_t)·n(t) d|φ|_t ≥ 0, ∀y(·) ∈ D^d([0,∞), Θ̄).
Hence 4') is true. Conversely, suppose that 3') and 4') are true. Since Θ is a convex domain, and (y_t − x_t)·dφ_t ≥ 0, ∀y(·) ∈ D^d([0,∞), Θ̄), dφ_t = n(t) d|φ|_t. That is, 4) is true. Furthermore, ∀f ∈ C_b(Θ̄; R^d) with f|_{∂Θ} = 0, by f(x_t)·dφ_t = 0,
f(x_t)·n(t) d|φ|_t = f(x_t)·dφ_t = 0.
From this one has that I_Θ(x_t) d|φ|_t = 0, i.e. |φ|_t = ∫_0^t I_{∂Θ}(x_s) d|φ|_s. So 3) holds true. The proof of 2° in Remark 320 is complete.
11. Stochastic Population Control and Reflecting SDE
Obviously, according to (11.3), the population SDE discussed in the introduction should be presented as follows:
dx_t = (A(t) x_t + B(t) x_t β_t) dt + σ(t, x_t) dw_t + ∫_Z c(t, x_{t−}, z) Ñ_k(dt, dz) + dφ_t, t ≥ 0, x₀ = x, x_t ∈ R̄^d_+, for all t ≥ 0,
where R^d_+ = {x = (x¹, ⋯, x^d) ∈ R^d : xⁱ > 0, 1 ≤ i ≤ d}. Note that Θ = R^d_+ satisfies the condition in (H1): there exists an e ∈ R^d with |e| = 1 and a constant c₀ > 0 such that (e, n) ≥ c₀, ∀n ∈ ∪_{x∈∂Θ} N_x.
The geometric meaning of the uniform inner normal vector positive projection condition is that all inner normal vectors on the boundary have a uniform positive projection: there exists a vector e ∈ R^d with |e| = 1 and a positive constant c₀ > 0 such that every inner normal vector n at the boundary satisfies n·e ≥ c₀ > 0. Let us explain why we need the condition that Θ is convex and satisfies the uniform inner normal vector positive projection condition for the solution of Skorohod's problem. Notice that Remark 320 still holds true for (11.5). In particular, by means of the convexity of Θ one has that if (x_t, φ_t) is a solution of (11.5), then for all y(·) ∈ D^d([0,∞), Θ̄), ∫_0^t (y_s − x_s)·dφ_s is increasing, as t is increasing; or equivalently, for all y(·) ∈ D^d([0,∞), Θ̄), (y_t − x_t)·dφ_t ≥ 0.
Such a property will be used many times in the discussion of the solutions. For example, we can use it to prove the uniqueness of solutions to (11.5): suppose that (x_t, φ_t) and (x'_t, φ'_t) are two solutions of (11.5) with given inputs (y₀, y_t), t ≥ 0, and (y'₀, y'_t), t ≥ 0, respectively. We want to prove that (x_t, φ_t) = (x'_t, φ'_t), ∀t ≥ 0, when y₀ = y'₀ and y_t = y'_t, ∀t ≥ 0. Naturally, we discuss the difference |x_t − x'_t|². By means of the elementary inequality |a + b|² ≤ (|a| + |b|)² ≤ 2[|a|² + |b|²], ∀a, b ∈ R^d, we have
|x_t − x'_t|² ≤ 2(|y_t − y'_t|² + |φ_t − φ'_t|²).
Notice that, as φ_t and φ'_t are of finite variation, we can use Ito's formula to find
|φ_t − φ'_t|² = 2∫_0^t (φ_{s−} − φ'_{s−})·d(φ_s − φ'_s) + Σ_{0<s≤t} |Δφ_s − Δφ'_s|² = 2∫_0^t (φ_s − φ'_s)·d(φ_s − φ'_s) − Σ_{0<s≤t} |Δφ_s − Δφ'_s|² ≤ 2∫_0^t (x_s − x'_s)·d(φ_s − φ'_s) ≤ 0, when y = y',
where the last two steps use φ − φ' = (x − x') − (y − y') and property 4') applied to both solutions. Hence φ_t = φ'_t and x_t = x'_t, ∀t ≥ 0.
This argument also gives continuity of the solution map: if Y^n → Y uniformly on [0,T], the corresponding solutions satisfy, for any T > 0,
lim_{n→∞} sup_{t≤T} |X^n_t − X_t| = 0, lim_{n→∞} sup_{t≤T} |φ^n_t − φ_t| = 0.
Indeed, notice that Y ∈ D, so Y_t, t ∈ [0,T], is bounded for any given T < ∞. By assumption we also have sup_n sup_{t≤T} |Y^n_t| ≤ c_T < ∞, where c_T is a constant depending only on T. So by (11.7), sup_{t≤T} |φ^n|_t ≤ k_{c₀} sup_{t≤T} |Y^n_t|, where the constant k_{c₀} > 0 depends only on c₀. Therefore sup_{s≤t} |X_s(Y)| ≤ (k_{c₀} + 1) sup_{s≤t} |Y_s|.
Proof. First, for any given (Y, Y₀) ∈ D × Θ̄, by Lemma 381 one can take step functions Y^n ∈ D such that sup_{s≤T} |Y_s − Y^n_s| → 0, as n → ∞, for any 0 ≤ T < ∞. Now by Lemma 326, (11.5) has a unique solution (X^n, φ^n) with (Y^n, Y^n₀) for each n. By Lemma 325, |φ^n|_T ≤ 2k₀ sup_{s≤T} |Y^n_s|. Hence sup_n |φ^n|_T ≤ 2k₀ (sup_{s≤T} |Y_s| + sup_n sup_{s≤T} |Y_s − Y^n_s|) < ∞.
(H2): x + c(t, x, z, ω) ∈ Θ̄, for all t ≥ 0, z ∈ Z, ω ∈ Ω and x ∈ Θ̄. Obviously, by assumption (H2) it is always true that x_{t−} + ∫_Z c(t, x_{t−}, z, ω) N_k({t}, dz) ∈ Θ̄. Now, under assumptions (H1) and (H2) the RSDE (11.3) can be rewritten as follows:
dx_t = b(t, x_t, ω) dt + σ(t, x_t, ω) dw_t + ∫_Z c(t, x_{t−}, z, ω) Ñ_k(dt, dz) + dφ_t, t ≥ 0, x₀ = x ∈ Θ̄, x_t ∈ Θ̄, ∀t ≥ 0;
φ_t is a continuous R^d-valued F_t-adapted process with finite variation |φ|_t on each finite interval [0,t] such that φ₀ = 0, and
|φ|_t = ∫_0^t I_{∂Θ}(x_s) d|φ|_s, φ_t = ∫_0^t n(s) d|φ|_s, n(t) ∈ N_{x_t}, as x_t ∈ ∂Θ.    (11.10)
The difference between (11.3) and (11.10) is that in (11.3) φ_t is RCLL, but here in (11.10) we can require that φ_t is continuous by means of assumption (H2). For the solutions of (11.10) we have the following a priori estimate.
Theorem 330 If (x_t, φ_t) is a solution of (11.10) with E|x₀|² < ∞, and
|b(t,x,ω)|² + ‖σ(t,x,ω)‖² + ∫_Z |c(t,x,z,ω)|² π(dz) ≤ c₁(t)(1 + |x|²),
where c₁(t) ≥ 0 is non-random such that for each 0 ≤ T < ∞, ∫_0^T c₁(t) dt < ∞, then for any 0 ≤ T < ∞,
E sup_{0≤t≤T} |x_t|² + E|φ|²_T ≤ k_T,
where 0 ≤ k_T is a constant depending only on ∫_0^T c₁(t) dt, c₀ (appearing in Assumption (H1)), E|x₀|² and T.
Proof. By Ito's formula,
|x_t|² = |x₀|² + 2∫_0^t x_s·b(s, x_s, ω) ds + 2∫_0^t x_s·σ(s, x_s, ω) dw_s + ∫_0^t ‖σ(s, x_s, ω)‖² ds + 2∫_0^t x_s·dφ_s + ∫_0^t ∫_Z (2 x_{s−}·c(s, x_{s−}, z, ω) + |c(s, x_{s−}, z, ω)|²) Ñ_k(ds, dz) + ∫_0^t ∫_Z |c(s, x_s, z, ω)|² π(dz) ds.    (11.11)
Since φ_t is continuous in t, and x_t is RCLL in t, it has at most countably many discontinuity points in t. So by 4') in Remark 320 one finds that
ζ_{s,t} = 2∫_s^t (x_r − x_s)·dφ_r ≤ 0.
Again by the continuity of φ, one finds that as 0 ≤ s ≤ t ≤ T, E ζ_{s,t} ≤ 0. (Localizing by stopping times τ_N, N = 1,2,⋯, if necessary), from (11.11) it is easily seen that
E sup_{0≤t≤T} |x_t − x₀|² ≤ k_T.
Hence
E sup_{0≤t≤T} |x_t|² ≤ 2E sup_{0≤t≤T} |x_t − x₀|² + 2E|x₀|² ≤ k_T.
Now by Lemma 325,
where y_t = x₀ + ∫_0^t b(r, x_r, ω) dr + ∫_0^t σ(r, x_r, ω) dw_r + ∫_0^t ∫_Z c(r, x_{r−}, z, ω) Ñ_k(dr, dz). Hence E|φ|²_T ≤ k_T. Again by (11.11) one finds that as 0 ≤ s ≤ t ≤ T,
E sup_{s≤r≤t} |x_r − x_s|² ≤ k_T ∫_s^t c₁(u) du.
Applying (11.12) again one also has that
E(|φ|_t − |φ|_s)² ≤ k_T ∫_s^t c₁(u) du, as 0 ≤ s ≤ t ≤ T.
The proof is complete. ∎
For the uniqueness of solutions to (11.3) we have the following theorem.
Theorem 331 Assume that (xⁱ_t, φⁱ_t), i = 1,2, satisfy (11.3) with the same initial value and with the same BM and the same Poisson martingale measure on the same probability space, and assume that the conditions in Theorem 330 hold. Moreover, assume that for each T < ∞ and N = 1,2,⋯,
2(x₁ − x₂)·(b(t,x₁,ω) − b(t,x₂,ω)) + ‖σ(t,x₁,ω) − σ(t,x₂,ω)‖² + ∫_Z |c(t,x₁,z,ω) − c(t,x₂,z,ω)|² π(dz) ≤ k_{N,T}(t) ρ_{N,T}(|x₁ − x₂|²),
as t ∈ [0,T], x₁, x₂ ∈ R^d and |x₁|, |x₂| ≤ N; where k_{N,T}(t) ≥ 0 is non-random such that ∫_0^T k_{N,T}(s) ds < ∞, and ρ_{N,T}(u) ≥ 0 is strictly increasing, continuous and concave in u ≥ 0 with ∫_{0+} du/ρ_{N,T}(u) = ∞. Then the pathwise uniqueness holds:
P(x¹_t = x²_t, φ¹_t = φ²_t, ∀t ≥ 0) = 1.
Proof. For arbitrary given T < ∞ write τ_N = inf{t ∈ [0,T] : |x¹_t| + |x²_t| > N}. By Ito's formula one finds that as t ≤ T,
E|x¹_{t∧τ_N} − x²_{t∧τ_N}|² ≤ k₀ E ∫_0^{t∧τ_N} k_{N,T}(s) ρ_{N,T}(|x¹_s − x²_s|²) ds.
Hence one easily sees that E|x¹_t − x²_t|² = 0, ∀t ≥ 0, and P-a.s. x¹_t = x²_t, ∀t ≥ 0. Now, again by Ito's formula and after an application of the martingale inequality, one finds that P(sup_{t∈[0,T]} |x¹_t − x²_t| = 0) = 1. From this and (11.3), and again by means of the martingale inequality, it is also seen that P(sup_{t∈[0,T]} |φ¹_t − φ²_t| = 0) = 1. ∎

11.5 Solutions for RSDE with Jumps and with Continuous Coefficients

For RSDE (11.10) we discuss the case of non-random coefficients. Recall that we always make the assumptions (H1) and (H2) from the previous section; that is, we always assume that:
(H1): Θ is a convex domain, and there exist a constant c₀ > 0 and a vector e ∈ R^d, |e| = 1, such that e·n ≥ c₀ > 0, for all n ∈ ∪_{x∈∂Θ} N_x.
(H2): x + c(t, x, z, ω) ∈ Θ̄, for all t ≥ 0, z ∈ Z, ω ∈ Ω and x ∈ Θ̄.
We have the following theorem.
Theorem 332 Assume that for t ≥ 0, x ∈ Θ̄, z ∈ Z:
1° b(t,x), σ(t,x), c(t,x,z) are jointly measurable such that there exists a constant k₀ ≥ 0 with
|b|² + ‖σ‖² + ∫_Z |c|² π(dz) ≤ k₀;
2° b(t,x), σ(t,x) are jointly continuous, and as |x − y| → 0, |t − s| → 0,
∫_Z |c(t,x,z) − c(s,y,z)|² π(dz) → 0;
3° π(dz) = dz/|z|^{d+1}, (Z = R^d − {0}).
Then (11.10) has a weak solution.
Proof. We will use the step coefficient approximation technique to show this theorem. Let h_n(0) = 0, h_n(t) = (k−1)2⁻ⁿ, as (k−1)2⁻ⁿ < t ≤ k2⁻ⁿ. Then by Theorem 328 there exists a unique solution (x^n_t, φ^n_t) satisfying
x^n_t = x₀ + ∫_0^t bⁿ(s, x^n_s) ds + ∫_0^t σⁿ(s, x^n_s) dw_s + ∫_0^t ∫_Z cⁿ(s, x^n_{s−}, z) q(ds, dz) + φ^n_t,    (11.13)
and the other statements in (11.3) hold for (x^n_t, φ^n_t), where for simplification of the notation in this proof we write q(ds, dz) = Ñ_k(ds, dz), p(ds, dz) = N_k(ds, dz), and bⁿ(s, x_s) = b(h_n(s), x_{h_n(s)}), etc.
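The dyadic time grid used here can be written down directly. The helper below (an illustrative sketch) computes h_n(t) = (k−1)2⁻ⁿ for (k−1)2⁻ⁿ < t ≤ k2⁻ⁿ, the left endpoint at which the step coefficients bⁿ, σⁿ, cⁿ are frozen.

```python
import math

def h_n(t, n):
    """Dyadic step function of the proof: h_n(0) = 0 and
    h_n(t) = (k - 1) * 2**-n whenever (k - 1) * 2**-n < t <= k * 2**-n."""
    if t == 0:
        return 0.0
    k = math.ceil(t * 2 ** n)      # smallest integer k with t <= k * 2**-n
    return (k - 1) * 2.0 ** -n
```

As n → ∞, h_n(t) ↑ t, which is what drives the convergence of the step-coefficient approximations x^n.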
In fact, once x^n_t is obtained for 0 ≤ t ≤ k2⁻ⁿ, then (x^n_t, φ^n_t) is uniquely determined as the solution of the Skorohod problem:
x^n_t = x^n_{k2⁻ⁿ} + b(k2⁻ⁿ, x^n_{k2⁻ⁿ})(t − k2⁻ⁿ) + σ(k2⁻ⁿ, x^n_{k2⁻ⁿ})(w(t) − w(k2⁻ⁿ)) + ∫_Z c(k2⁻ⁿ, x^n_{k2⁻ⁿ−}, z) q((k2⁻ⁿ, t], dz) + φ^n_t,    (11.14)
and the other statements in (11.3) hold for (x^n_t, φ^n_t) on k2⁻ⁿ ≤ t ≤ (k+1)2⁻ⁿ.
Notice that in (11.13) the reflection process φ^n_t is RCLL, but not necessarily continuous. Now we show that for arbitrary ε > 0 and T > 0,
lim_{C→∞} sup_n sup_{t≤T} P(|φⁿ|_t > C) = 0,    (11.15)
lim_{h→0} sup_n sup_{t,s≤T, |t−s|≤h} P(|φⁿ_t − φⁿ_s| > ε) = 0.    (11.16)
+
{
+ Ji
+ 5:
So
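The density z_t is the usual Girsanov exponential martingale; its defining property E z_T = 1 (so that P̃ is a probability measure) can be checked by simulation. The sketch below uses a constant θ for simplicity — an assumption for illustration, not the state-dependent θ_t = σ⁻¹(t,x_t) b(t,x_t) of the proof.

```python
import math
import random

def girsanov_density(theta, dt, dws):
    """Discretized z_T = exp( sum_k theta * dw_k - 0.5 * theta**2 * T )
    for constant theta over a horizon T = dt * len(dws)."""
    T = dt * len(dws)
    return math.exp(theta * sum(dws) - 0.5 * theta * theta * T)

rng = random.Random(42)
dt, n_steps, theta, n_paths = 0.01, 100, 0.8, 20000
mean = sum(
    girsanov_density(theta, dt, [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n_steps)])
    for _ in range(n_paths)
) / n_paths
```

For constant theta, z_T = exp(theta·W_T − theta²T/2) has mean exactly 1, so the Monte Carlo average `mean` should land close to 1.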
11.6 Solutions for RSDE with Jumps and with Discontinuous Coefficients
defined on Then by Theorem 124 there exists a probability measure (Q,5)such that P IzT = &,for each 0 < T < co;moreover, t 1) w; = w t - J o e s d s , O ~ t , is a-BM under probability P; 2) Nk(dt,dz) = Nk(dt,dz) - a(dz)dt is still a Poisson random martingale measure with the same compensator n(dz)dt under the probability P. Therefore we find that (xi,4,) satisfies the following RSDE xt = xo b(s,xs)ds u(s,xs)dwi & C ( S , X S - , z)Kk.(ds,dz) +&, and the other statements in (11.3) hold for (xt,&). P - a.s. Now assume that conditions 4" and 5" also hold. Then by the same reasoning as in Theorem 334 we find that (11.10) has a pathwise unique strong solution. Case 2. (General case). In the case that lo' is satisfied, let bN(t,x ) = b(t,x ) , as 1x1 < N ; and bN(t,x ) = b(t,N x l lxl), as 1x1 > N ; u N is similarly defined. Then by the case 1 there exists a pathwise unique where C@ is continuous, satisfying (11.10) with strong solution (x:, the coefficientsbN,u N and c. Set now T N = inf { t 2 0 : Ix,NI > N ) , xt =x?, 4t as 0 2 t < 7-N. By the pathwise uniqueness the above definition is well posed, and ~ATN b(s,xS)ds JotArN U ( S , xs)dwS X ~ A T N = 20 + JO JZ C ( S , xs-, z)fik(ds,dz) &.,rN, for a11 t 2 0. Applying Ito's formula to g(xt) = g,+l(xt) we have that as 0 5 t T ~ATNI 0 ~ ( x ~ A T N=) ~ ( x o+)SO g ( x s ) b ( ~ , x s ) d ~ gl(xs)u(s,xs)dws JZ gl(xs-)c(s,xs- z ) ~ ( kd ~d , ~ ) Jz[g(xS- +c(s,xs-, z ) )- - ~ ( x s - -gl(xs-)c(s, ) X S - , z)]Nk(ds,dz) +2 JotATNgl(xs)- d$, = Ti, IjTN - a.s. where 2nox; x ' % I sm+~ ( % ) = l - I L 1 9 i 1(x)+. . . a gl(x) = grad g(x) = (=g(x), . . . , and we write go(x)= 1. Notice that ~ATNI so s ( 2 s ) .d4s 2no 2 s = SotATN(l-ILig i l ( 4 s ) )-&&$pT(xi(s)- xi(O))d4i( s )
{
+ Sot
+ Sot
+ Sot
&),
=@rl
+ +
+ JiATN
< + sotATN + &,
EL + CL of^^ (n;=,
x₀ = x = (x¹, …, x^d), xⁱ ≥ 0, 1 ≤ i ≤ d; φ_t is an R^d-valued ℱ_t-adapted continuous process with finite variation |φ|_t on any finite interval such that φ₀ = 0, and
|φ|_t = ∫_0^t I_{x_s ∈ ∂R̄^d_+} d|φ|_s, φ_t = ∫_0^t n(s) d|φ|_s,
n(t) ∈ 𝒩_{x_t}, as x_t ∈ ∂R̄^d_+.
(11.22) Now we easily see that if β_t, the specific fertility rate of females, is bounded, and the coefficients b and σ satisfy the conditions in Theorem 335, then (11.22) has a unique solution. That is, if in (11.22) we denote by x_t^i the size of the population with ages between [i, i+1), and write x_t = (x_t^1, …, x_t^d), where d is the largest age of the people, then for any given specific fertility rate of females β_t we can find the population size vector x_t with x_t^i ≥ 0, ∀i = 1, 2, …, d. More precisely, we have the following theorem.
Theorem 337 Assume that for t ≥ 0, x ∈ R̄^d_+, z ∈ Z:
1° A(t) and B(t) are bounded and non-random; β_t(x), σ(t,x), and c(t,x,z) are also non-random and jointly measurable, and there exists a constant k₀ > 0 such that
|β_t(x)|² + ∫_Z |c|² π(dz) ≤ k₀, ‖σ‖² ≤ k₀(1 + |x|²);
2° β_t(x) and σ(t,x) are jointly continuous, and as |x − y| → 0, |t − s| → 0,
∫_Z |c(t,x,z) − c(s,y,z)|² π(dz) → 0;
3° x + c(t,x,z) ∈ R̄^d_+, ∀t ≥ 0, x ∈ R̄^d_+, z ∈ Z;
4° |β_t(x) − β_t(y)|² + ‖σ(t,x) − σ(t,y)‖² + ∫_Z |c(t,x,z) − c(t,y,z)|² π(dz) ≤ k_{N,T}(t) ρ_N(|x − y|²), as |x|, |y| ≤ N, t ∈ [0,T],
|φ_t^n − φ_t^0|² ≤ ρ_T(|x_t^n − x_t^0|²) + |∫_0^t (b^n(s,x_s^n) − b^0(s,x_s^0))ds|² + |∫_0^t (σ^n(s,x_s^n) − σ^0(s,x_s^0))dw_s|² + |∫_0^t ∫_Z (c^n(s,x_{s−}^n,z) − c^0(s,x_{s−}^0,z))Ñ_k(ds,dz)|².
We also notice that, as n → ∞,
|∫_0^t (A^n(s) − A^0(s)) x_s^n ds|² ≤ sup_{s≤t} |x_s^n|² · (∫_0^t |A^n(s) − A^0(s)| ds)² → 0, etc.
Hence by the martingale inequality, as in the above, one also easily finds that as n → ∞,
E sup_{s≤t} |φ_s^n − φ_s^0|² → 0.
Theorem 338 actually tells us the following facts:
1) For a given stochastic population dynamics and a given specific fertility rate of females, if an initial population size vector x_0^n is "close to" a known initial population size vector x_0^0 "in the mean square", then the population size vector x_t^n corresponding to x_0^n will be "uniformly close to" the population size vector x_t^0 corresponding to x_0^0 on any given interval [0,T] "in the mean square"; that is, for a given A(t), B(t), σ(t,x), c(t,x,z) and β_t,
E|x_0^n − x_0^0|² → 0 ⟹ E sup_{s≤T} |x_s^n − x_s^0|² → 0.
2) For a given stochastic population dynamics and a given initial population size vector, if a specific fertility rate of females β_t^n is "close to" a known specific fertility rate of females β_t^0 "in L¹([0,T])", then the population size vector x_t^n corresponding to β_t^n will be "uniformly close to" the population size vector x_t^0 corresponding to β_t^0 on any given interval [0,T] "in the mean square"; that is, for a given A(t), B(t), σ(t,x), c(t,x,z) and x₀,
∫_0^T |β^n(t) − β^0(t)| dt → 0 ⟹ E sup_{s≤T} |x_s^n − x_s^0|² → 0.
3) If the population dynamics can only be "approximately calculated in some sense", then for a given initial population size vector, and a given specific fertility rate of females, the corresponding population size vector will
11. Stochastic Population Control and Reflecting SDE
also be "obtained approximately, uniformly on [0,T] in the mean square"; that is, for a given x₀ and β_t,
∫_0^T [|A^n(t) − A^0(t)| + |B^n(t) − B^0(t)|] dt + ∫_0^T sup_x ‖σ^n(s,x) − σ^0(s,x)‖² ds + ∫_0^T sup_x ∫_Z |c^n(s,x,z) − c^0(s,x,z)|² π(dz) ds → 0
⟹ E sup_{s≤T} |x_s^n − x_s^0|² → 0.
Furthermore, we have the following approximate error estimate:
Corollary 339 Assume that all conditions in Theorem 338 hold, and ∀n = 0, 1, 2, …,
σ^n(t,x) = σ(t,x), c^n(t,x,z) = c(t,x,z), f(z) = I_Z(z), π(U) = 1,
where σ(t,x) and c(t,x,z) are defined as before, together with the properties there; the constant k₀ comes from condition 1° in Theorem 338, and k_T comes from Theorem 330. Actually, here we can get that
E sup_{s≤T} |x_s^n|² ≤ k_T,
Note that the convergence in probability P(sup_{s≤t} |x_s^n − x_s^0| > ε) → 0 is weaker than the convergence in mean square E sup_{s≤t} |x_s^n − x_s^0|² → 0. In fact,
P(sup_{s≤t} |x_s^n − x_s^0| > ε) ≤ ε^{-2} E sup_{s≤t} |x_s^n − x_s^0|².
Proof. Notice that now b^n(t, x_t^n) = A^n(t)x_t^n + B^n(t)x_t^n β_t^n(x_t^n). Set
τ_N = inf{t : |x_t^0| > N}, τ^{n,ε} = inf{t : |x_t^n − x_t^0| > ε}, τ_T^N(n) = T ∧ τ_N ∧ τ^{n,ε}.
Then by Ito's formula, as in the proof of Theorem 338, we have, for all t ≥ 0,
lim_{n→∞} E|x^n_{t∧τ_T^N(n)} − x^0_{t∧τ_T^N(n)}|² = 0.
E|x^n_{τ_T^N(n)} − x^0_{τ_T^N(n)}|² ≥ ε² P(τ^{n,ε} ≤ τ_N ∧ t).
Hence lim_{n→∞} P(τ^{n,ε} ≤ τ_N ∧ t) = 0. We have that
P(sup_{s≤τ_N∧t} |x_s^n − x_s^0| > 2ε) ≤ P(τ^{n,ε} ≤ τ_N ∧ t) → 0, as n → ∞.
Notice that
P(sup_{s≤t} |x_s^n − x_s^0| > ε) ≤ P(sup_{s≤τ_N∧t} |x_s^n − x_s^0| > ε) + P(τ_N ≤ t) + P(sup_{s≤t} |∫_0^s (b^n(r,x_r^n) − b^0(r,x_r^0))dr| > ε) + P(sup_{s≤t} |∫_0^s (σ^n(r,x_r^n) − σ^0(r,x_r^0))dw_r| > ε) + P(sup_{s≤t} |∫_0^s ∫_Z (c^n(r,x_{r−}^n,z) − c^0(r,x_{r−}^0,z))Ñ_k(dr,dz)| > ε) = ∑_i I_i^n.
However, it is already known that as n → ∞, I_1^n → 0. Let us show that as n → ∞, I_4^n → 0. In fact, for any δ > 0, by the martingale inequality,
I_4^n ≤ P(sup_{0≤s≤t} |x_s^n − x_s^0| > δ) + 4ε^{-2} E(∫_0^t k(s) ρ(sup_{r≤s} |x_r^n − x_r^0|²) ds · I_{sup_{r≤s} |x_r^n − x_r^0| ≤ δ}) → 0.
From this, one easily finds that the remaining I_i^n → 0 as n → ∞. Therefore, we arrive at the second conclusion. ∎
2. The Stability Property
Now we are going to discuss the stability of solutions to (11.22). We have the following stability property for the solutions of the population dynamical system.
Corollary 343 Assume that the conditions of Theorem 342 hold. Then
E|x_t|² ≤ E|x₀|² e^{−δ₁t}, for all t ≥ 0.
That is, the population dynamics of (11.22) is exponentially stable in the mean square under the above assumptions. In particular, for a given constant a > 0 we have that
t ≥ δ₁^{-1} ln(E|x₀|²/a) ⟹ E|x_t|² ≤ a.
This corollary actually tells us that if the stochastic perturbation is not
too large (that is, ∑_{i=1}^d |c^{(i)}|² + ‖Ξ‖² is very small), and the forward death rate is greater than a positive constant (that is, δ₀ > 0), then the population dynamics (11.22) can be exponentially stable "in the mean square" if the fertility rate of females is small enough. Furthermore, if we have a target
11.8 Comparison of Solutions and Stochastic Population Control
a > 0, we can find out when the population size vector "in the mean square" can be less than this target a.
Proof. By calculation,
2x · b(t,x) = −2∑_{i=1}^d (1 + η_i)x_i² + 2∑_{i=1}^{d−1} x_i x_{i+1} + 2(∑_{i=r₁}^{r₂} b_i x_i x_1)β_t,
‖σ(t,x)‖² ≤ (∑_{i=1}^d |c^{(i)}|² + ‖Ξ‖²)|x|²,
where δ₀ = min(½ + η₁, η₂, …, η_{d−1}, ½ + η_d). Thus as
δ₁ = 2δ₀ − ∑_{i=1}^d |c^{(i)}|² − ‖Ξ‖² − 2b̄_M max(r₂ − r₁, 1) > 0,
the population dynamics (11.22) is exponentially stable in the mean square, i.e. by 3) of Theorem 342,
E|x_t|² ≤ E|x₀|² e^{−δ₁t}, for all t ≥ 0.
Finally, noticing that
E|x₀|² e^{−δ₁t} ≤ a ⟺ t ≥ δ₁^{-1} ln(E|x₀|²/a),
we arrive at the final conclusion. ∎
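The mean-square exponential stability asserted here can be probed numerically. The sketch below is a toy illustration, not the book's model: it simulates the scalar linear SDE dx = −δ₀ x dt + c x dw by Euler–Maruyama, for which the decay rate 2δ₀ − c² of E|x_t|² is known in closed form, and compares a Monte Carlo estimate of E|x_T|² with the exact value; δ₀, c, T and all numerical parameters are arbitrary choices.

```python
import math, random

def second_moment(delta=1.0, c=0.5, x0=1.0, T=2.0, n_steps=200, n_paths=5000, seed=1):
    """Euler-Maruyama Monte Carlo estimate of E|x_T|^2 for dx = -delta*x dt + c*x dw."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += -delta * x * dt + c * x * rng.gauss(0.0, 1.0) * math.sqrt(dt)
        total += x * x
    return total / n_paths

est = second_moment()
exact = math.exp(-(2 * 1.0 - 0.5 ** 2) * 2.0)   # E|x_0|^2 * e^{-(2*delta - c^2)*T}
assert abs(est - exact) < 0.05 * exact + 0.01   # Monte Carlo + discretization slack
```

Shrinking c (the perturbation) or enlarging δ₀ (the death-rate floor) visibly speeds up the decay, in line with the corollary.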
(x_t^{1,i} − x_t^{2,i})⁺ = (x_0^{1,i} − x_0^{2,i})⁺ + ∫_0^t I_{x^1_i(s)>x^2_i(s)} (b_i(s, x^1(s)) − b_i(s, x^2(s))) ds + ∫_0^t I_{x^1_i(s)>x^2_i(s)} ∑_k (σ_{ik}(s, x^1(s)) − σ_{ik}(s, x^2(s))) dw^k(s) + ∫_0^t ∫_Z I_{x^1_i(s−)>x^2_i(s−)} (c_i(s, x^1(s−), z) − c_i(s, x^2(s−), z)) Ñ_k(ds,dz) + ∫_0^t I_{x^1_i(s)>x^2_i(s)} (dφ^1_i(s) − dφ^2_i(s)).
Remark 345 An example in which the above assumptions 2° and 3° are satisfied is
σ_{ik}(t,x) = c_{ik}^0 + c_{ik} x^i, c_i(t,x,z) = c̄_i^0 + c̄_i x^i,
where all c_{ik}^0, c_{ik}, c̄_i^0, c̄_i, i, k = 1, 2, …, d, are constants, and c̄_i^0, c̄_i ≥ 0, i = 1, 2, …, d.
Now let us establish the theorem.
Proof. By Theorem 153 one finds that the first and second formulas are true. However, by the definition of the solution one finds that
∫_0^t I_{x^1_i(s)>x^2_i(s)} dφ^1_i(s) = 0,
because dφ^1_i(s) > 0 only when x^1_i(s) = 0, and then x^2_i(s) ≥ 0 makes the indicator vanish. Hence the last formula follows. ∎
Now let us derive some comparison theorems for solutions to the stochastic population dynamics (11.22). For convenience let us write it out again as follows:
dx_t = (A(t)x_t + B(t)x_t β_t)dt + σ(t, x_t)dw_t + ∫_Z c(t, x_{t−}, z)Ñ_k(dt, dz) + dφ_t,
x₀ = x, x_t ∈ R̄^d_+, t ≥ 0,
and the other statements hold for (x_t, φ_t), (11.25)
where x_t^i is the size of the population with age between [i, i+1), and x_t = (x_t^1, …, x_t^d); β_t is the specific fertility rate of females,
b_i(t) = (1 − p₀₀(t))k_i(t)h_i(t) ≠ 0, i = r₁, …, r₂; 1 ≤ r₁ < r₂ ≤ d,
p₀₀(t) is the death rate of babies, η_i(t) is the forward death rate by ages, k_i(t) and h_i(t) are the corresponding sex rate and fertility models, respectively, and d = r_m is the largest age of people. Because of the physical meaning, we may assume that all η_i(t), b_j(t), β_t, i = 1, 2, …, d; j = r₁, …, r₂, are nonnegative and bounded. To consider the comparison of solutions to (11.25), first we write out all its component equations as follows:
dx_t^1 = [−(1 + η_t^1)x_t^1 + ∑_{j=r₁}^{r₂} b_t^j x_t^j β_t]dt + ∑_k σ_{1k}(t, x_t)dw_t^k + ∫_Z c^1(t, x_{t−}, z)Ñ_k(dt, dz) + dφ_t^1,
dx_t^i = [−(1 + η_t^i)x_t^i + x_t^{i−1}]dt + ∑_k σ_{ik}(t, x_t)dw_t^k + ∫_Z c^i(t, x_{t−}, z)Ñ_k(dt, dz) + dφ_t^i, i = 2, …, d.
(11.28)
Now assume that conditions 2° and 3° in Theorem 344 hold, all coefficients η_t^i, b_t^j, β_t in the above population dynamics are bounded and nonnegative, and |σ(t,x)|², ∫_Z |c(t,x,z)|² π(dz) have less than linear growth in |x|²; that is,
0 ≤ η_t^i, b_t^j, β_t ≤ k₀, ∀i = 1, 2, …, d; j = r₁, …, r₂;
|σ(t,x)|² + ∫_Z |c(t,x,z)|² π(dz) ≤ k₀(1 + |x|²),
where k₀ ≥ 0 is a constant. We also assume that there is another solution (x̄_t, φ̄_t) satisfying the same stochastic population dynamics (11.25) but with another initial value x̄₀ and another fertility rate of females β̄_t. Then by the Tanaka type formula (Theorem 344) one finds that
(x_t^1 − x̄_t^1)⁺ ≤ (x_0^1 − x̄_0^1)⁺ + ∫_0^t I_{x_s^1>x̄_s^1} [−(1 + η_s^1)(x_s^1 − x̄_s^1) + ∑_{j=r₁}^{r₂} b_s^j (x_s^j β_s − x̄_s^j β̄_s)] ds + M_t^1
≤ (x_0^1 − x̄_0^1)⁺ + ∫_0^t k₀ ∑_{j=r₁}^{r₂} [(x_s^j − x̄_s^j)⁺ + x̄_s^j (β_s − β̄_s)] ds + M_t^1,
where M_t^1 is a martingale. Similarly,
(x_t^i − x̄_t^i)⁺ ≤ (x_0^i − x̄_0^i)⁺ + ∫_0^t [−(1 + η_s^i)(x_s^i − x̄_s^i)⁺ + (x_s^{i−1} − x̄_s^{i−1})⁺] ds + M_t^i, i = 2, …, d,
where M_t^i, i = 2, …, d, are martingales. Furthermore, now assume that
1° x_0^i ≤ x̄_0^i, ∀i = 1, …, d; 2° β_t ≤ β̄_t, ∀t ≥ 0.
Then
0 ≤ y_t = E ∑_{i=1}^d (x_t^i − x̄_t^i)⁺ ≤ ∫_0^t k₀' y_s ds.
Hence y_t = 0, ∀t ≥ 0. This implies that P-a.s. x_t^i ≤ x̄_t^i, ∀t ≥ 0, ∀i = 1, …, d. Thus we arrive at the following theorem.
Theorem 346 Assume that
1° 0 ≤ η_t^i, b_t^j, β_t ≤ k₀, ∀i = 1, 2, …, d; j = r₁, …, r₂; |σ(t,x)|² + ∫_Z |c(t,x,z)|² π(dz) ≤ k₀(1 + |x|²);
2° the same as 2° in Theorem 344;
3° the same as 3° in Theorem 344.
If (x_t, φ_t) and (x̄_t, φ̄_t) are solutions of the stochastic population dynamics (11.25) with the initial value x₀ and fertility rate of females β_t, and with x̄₀, β̄_t, respectively, then
x_0^i ≥ x̄_0^i, ∀i = 1, …, d; and β_t ≥ β̄_t, ∀t ≥ 0
implies that P-a.s. x_t^i ≥ x̄_t^i, ∀t ≥ 0, ∀i = 1, …, d.
The comparison theorem (Theorem 346) actually tells us the following facts:
1) In a stochastic population dynamics, if the initial size of the population takes a larger value, then the size of the population will also take larger values forever, as time evolves; that is, with the same η_t^i, b_t^j, β_t:
x_0^i ≥ x̄_0^i, ∀i = 1, …, d ⟹ x_t^i ≥ x̄_t^i, ∀t ≥ 0, ∀i = 1, …, d.
2) In a stochastic population dynamics, if the fertility rate of females always takes larger values, then the size of the population will also take larger values forever, as time evolves; that is, with the same η_t^i, b_t^j, x₀:
β_t ≥ β̄_t, ∀t ≥ 0 ⟹ x_t^i ≥ x̄_t^i, ∀t ≥ 0, ∀i = 1, …, d.
Furthermore, the proof of Theorem 346 also motivates the following more general theorem. Consider two more general RSDEs as in (11.24); more precisely, consider the following two RSDEs: for i = 1, 2,
dx_t^{(i)} = b^{(i)}(t, x_t^{(i)})dt + σ(t, x_t^{(i)})dw_t + ∫_Z c(t, x_{t−}^{(i)}, z)Ñ_k(dt, dz) + dφ_t^{(i)},
x_0^{(i)} = x^{(i)} ∈ R̄^d_+, x_t^{(i)} ∈ R̄^d_+, t ≥ 0,
and the other statements in (11.4) also hold for (x_t^{(i)}, φ_t^{(i)}). (11.29)
We have the following comparison theorem.
Theorem 347 Assume that all conditions 1°–3° in Theorem 344 hold, and assume that one of the b^{(i)}(t,x), i = 1, 2, say b^{(1)}(t,x), satisfies the following condition:
4° I_{x^i>y^i}(b^{(1)i}(t,x) − b^{(1)i}(t,y)) ≤ ∑_{k=1}^d c_{N,T}^{i,k}(t) ρ_{N,T}^{i,k}((x^k − y^k)⁺), as t ∈ [0,T], |x|, |y| ≤ N, for each T < ∞ and N = 1, 2, …; ∀i = 1, 2, …, d; ∀x = (x^1, …, x^d), y = (y^1, …, y^d) ∈ R̄^d_+;
where b^{(1)}(t,x) = (b^{(1)1}(t,x), …, b^{(1)d}(t,x)); the c_{N,T}^{i,k}(t) ≥ 0, ∀i, k, are such that ∫_0^T c_{N,T}^{i,k}(t)dt < ∞; and the ρ_{N,T}^{i,k}(u) ≥ 0, ∀i, k, ∀u ≥ 0, are strictly increasing, continuous and concave such that ρ_{N,T}^{i,k}(0) = 0 and ∫_{0+} du/ρ_{N,T}^{i,k}(u) = ∞.
If (x_t^{(1)}, φ_t^{(1)}) and (x_t^{(2)}, φ_t^{(2)}) are solutions of (11.29) with the initial values x_0^{(1)} and x_0^{(2)}, respectively, then
x_0^{(1)i} ≤ x_0^{(2)i}, ∀i = 1, …, d; and b^{(1)i}(t,x) ≤ b^{(2)i}(t,x), ∀i = 1, …, d; ∀t ≥ 0, ∀x ∈ R̄^d_+
implies that P-a.s. x_t^{(1)i} ≤ x_t^{(2)i}, ∀t ≥ 0, ∀i = 1, …, d.
Proof. By the Tanaka type formula, as in the proof above, the stochastic integral terms are martingales and
−∫_0^t I_{x_s^{(1)i}>x_s^{(2)i}} dφ_s^{(2)i} ≤ 0,
so that, by condition 4°,
E(x_t^{(1)i} − x_t^{(2)i})⁺ ≤ E ∫_0^t I_{x_s^{(1)i}>x_s^{(2)i}} (b^{(1)i}(s, x_s^{(1)}) − b^{(1)i}(s, x_s^{(2)})) ds ≤ E ∫_0^t ∑_{k=1}^d c_{N,T}^{i,k}(s) ρ_{N,T}^{i,k}((x_s^{(1)k} − x_s^{(2)k})⁺) ds.
Now let
τ_N = inf{t ≥ 0 : |x_t^{(1)}| + |x_t^{(2)}| > N}.
Then one easily finds that, ∀t ≥ 0,
y_t = E ∑_{i=1}^d (x_{t∧τ_N}^{(1)i} − x_{t∧τ_N}^{(2)i})⁺ ≤ ∫_0^t c(s) ρ(y_s) ds.
Hence y_t = 0, ∀t ≥ 0, and since lim_{N→∞} τ_N = ∞, these imply that P-a.s.
x_t^{(1)i} ≤ x_t^{(2)i}, ∀t ≥ 0, ∀i = 1, …, d. ∎
Analyzing the component equations of the stochastic population dynamics (11.28), one finds that the fertility rate of females β_t only influences the population aged in [r₁, r₂] (see the first term in (11.28)). This is reasonable, because only females with age in some interval can have babies. However, one also may think that, for a better control of the population, β_t itself may also depend on the population size x_t. Is it still true that in such a case the comparison result still holds? The following theorem gives a partial answer to this question.
Theorem 348 Assume that σ and c satisfy all the conditions 1°–3° in Theorem 346; moreover, assume that
4° β_t = β_t(x) satisfies the following condition: ∀x = (x^1, …, x^d) ∈ R̄^d_+,
β_t(x) = β_t^0 + ∑_{i=1}^d c_t^i (x^i I_{x^i ≤ k_0^i} + k_0^i I_{x^i > k_0^i}),
where 0 ≤ β_t^0, c_t^1, …, c_t^d ≤ k₀, they do not depend on x, and k₀ is a constant; besides, k_0^1, …, k_0^d ≥ 0 are also constants. Now let, ∀x = (x^1, …, x^d) ∈ R̄^d_+,
β̄_t(x) = β̄_t^0 + ∑_{i=1}^d c_t^i (x^i I_{x^i ≤ k_0^i} + k_0^i I_{x^i > k_0^i}).
Then the conclusion of Theorem 346 still holds; that is, if (x_t, φ_t) and (x̄_t, φ̄_t) are solutions of (11.25) with x₀, β_t and x̄₀, β̄_t, respectively, then
x_0^i ≤ x̄_0^i, ∀i = 1, …, d; and β_t(x) ≤ β̄_t(x), ∀t ≥ 0, ∀x ∈ R̄^d_+
implies that P-a.s. x_t^i ≤ x̄_t^i, ∀t ≥ 0, ∀i = 1, …, d.
Remark 349 Theorem 348 implies Theorem 346. In fact, let c_t^i = 0, i = 1, 2, …, d. Then from Theorem 348 we obtain Theorem 346.
Now let us establish the above theorem.
Proof. Let
b^{(1)}(t,x) = A(t)x + B(t)x β_t(x), b^{(2)}(t,x) = A(t)x + B(t)x β̄_t(x),
where A(t) and B(t) are defined by (11.26) and (11.27), respectively. Then by assumption
b^{(1)i}(t,x) ≤ b^{(2)i}(t,x), ∀i = 1, …, d; ∀t ≥ 0, ∀x ∈ R̄^d_+.
So if we want to apply Theorem 347, we only need to check that condition 4° in Theorem 347 holds. In fact,
I_{x^1>y^1}(b^{(1)1}(t,x) − b^{(1)1}(t,y)) = I_{x^1>y^1}[−(1 + η_t^1)(x^1 − y^1) + ∑_{k=r₁}^{r₂} b_t^k (x^k β_t(x) − y^k β_t(y))]
≤ I_{x^1>y^1} ∑_{k=r₁}^{r₂} b_t^k (x^k β_t(x) − y^k β_t(y))
= I_{x^1>y^1} ∑_{k=r₁}^{r₂} [b_t^k (x^k − y^k) β_t(x) + b_t^k y^k (β_t(x) − β_t(y))]
≤ k₀² ∑_{k=r₁}^{r₂} (x^k − y^k)⁺ + I_{x^1>y^1} ∑_{k=r₁}^{r₂} b_t^k y^k ∑_{i=1}^d c_t^i [(x^i − y^i) ∨ 0]
≤ k₀² ∑_{k=r₁}^{r₂} (x^k − y^k)⁺ + (r₂ − r₁) k₀² N ∑_{i=1}^d (x^i − y^i)⁺ ≤ k_N ∑_{k=1}^d (x^k − y^k)⁺,
as |x|, |y| ≤ N; x = (x^1, …, x^d), y = (y^1, …, y^d) ∈ R̄^d_+. More easily, ∀i = 2, …, d,
I_{x^i>y^i}(b^{(1)i}(t,x) − b^{(1)i}(t,y)) = I_{x^i>y^i}[−(1 + η_t^i)(x^i − y^i) + (x^{i−1} − y^{i−1})] ≤ (x^{i−1} − y^{i−1})⁺.
Therefore, condition 4° in Theorem 347 holds, and Theorem 347 applies. ∎
Furthermore, assume that
σ(t,x) = ∑_{i=1}^d c^{(i)} x^i + Ξ, π(U) = 1,
where k₀ ≥ 0 is a constant, and c^{(i)}, i = 1, …, d, and Ξ are constant d×d matrices; moreover, let
δ₀ = min_{t≥0}(½ + η₁(t), η₂(t), …, η_{d−1}(t), ½ + η_d(t)),
b̄_M = max_{t≥0}(b_{r₁}(t), …, b_{r₂}(t)),
where d = r_m is the largest age of the population. Then as
δ₁ = 2δ₀ − ∑_{i=1}^d |c^{(i)}|² − ‖Ξ‖² − 2b̄_M(r₂ − r₁) > 0,
where x_t^{β^0} is the so-called optimal trajectory corresponding to the optimal control β^0. This theorem actually tells us the following facts:
1) If a target functional monotonically depends on the population size, for example the energy, the consumption, and so on, spent by the population, then to minimize this target functional we should control the fertility rate of females to take a value as small as possible.
2) It is also possible to make the optimal population size exponentially stable in mean square, if we can take the fertility rate of females small enough and the stochastic perturbation is not too large.
To show the truth of this theorem one only needs to apply the existence theorem, the comparison theorem, and the stability theorem of solutions to the stochastic population dynamics (see Theorem 337, Theorem 348 and Corollary 343). We leave this to the reader.
3. Some Explanations and Conclusions on Population RSDE
Finally, by using the results obtained here, the following conclusions can be drawn: the RSDE (11.22) is a suitable model for the stochastic population control system. In fact, by this model one sees that
x_{t+1}^1 − x_t^1 ≈ −(1 + η_t^1)x_t^1 + ∑_{i=r₁}^{r₂} (1 − p₀₀(t))k_i(t)h_i(t)x_t^i β_t + (a stochastic perturbation between [t, t+1]) + φ_{t+1}^1 − φ_t^1,
x_{t+1}^i − x_t^i ≈ −(1 + η_t^i)x_t^i + x_t^{i−1} + (a stochastic perturbation between [t, t+1]) + φ_{t+1}^i − φ_t^i, ∀i = 2, …, r_m.
The second expression tells us intuitively that when i ≥ 2, x_t^i, the size of the population with ages between [i, i+1), will have an increment x_{t+1}^i − x_t^i when the time increases from t to t+1. This increment is contributed by two terms when we do not consider the stochastic perturbation (in the deterministic case φ_t = 0). A positive term is due to people with ages in [i−1, i) at time t, who will be of ages [i, i+1) when time goes to t+1; that is, the term "+x_t^{i−1}". A negative term is due to the fact that people may die, with a death rate caused by a disease or some other cause, when the time evolves from t to t+1; moreover, the people with ages in [i, i+1) at time t will also arrive at ages [i+1, i+2) when time goes to t+1. So from time t to t+1 the size of the population with ages in [i, i+1) will lose in all the amount (1 + η_t^i)x_t^i; that is the term "−(1 + η_t^i)x_t^i". However, for the first expression above, the only difference is that x_t^1 is the size of the population with ages between [1, 2). So when time goes to t+1 the positive contribution term can only be the number of babies born during the time interval [t, t+1], since women who can have babies can only be in some age interval [r₁, r₂], and a baby, if born, may die with a death rate p₀₀(t). Furthermore, the number of babies born and living also depends on the size of the population x_t^i, the fertility rate of females β_t, the sex model k_i(t), and the fertility model h_i(t) at time t and age i ∈ [r₁, r₂]. That is, the positive contribution term from time t to time t+1 is "∑_{i=r₁}^{r₂} (1 − p₀₀(t))k_i(t)h_i(t)x_t^i β_t". Now if the system is disturbed by a continuous and a jump type stochastic perturbation, then, as we said in the "Introduction" section, we need a reflecting SDE to describe the population dynamics so as to keep all population sizes non-negative, that is, x_t^i ≥ 0, ∀i = 1, 2, …, r_m. So we need the stochastic population control system (11.22).
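The bookkeeping just described can be sketched as a deterministic skeleton of the recursion (no stochastic term). All parameter values below are hypothetical, and clipping at 0 crudely plays the role of the reflection term dφ that keeps every age class nonnegative.

```python
d = 5                        # number of age classes; d = r_m, the largest age
eta = [0.05] * d             # forward death rates eta^i (hypothetical)
r1, r2 = 2, 4                # fertile age classes [r1, r2] (1-based)
p00, beta = 0.1, 0.8         # death rate of babies, fertility rate of females
k = [0.49] * d               # sex-rate model k_i
h = [1.0] * d                # fertility model h_i

def step(x):
    """One year: births into class 1, aging x^{i-1} -> x^i, loss (1+eta^i)x^i."""
    births = sum((1 - p00) * k[i] * h[i] * x[i] * beta for i in range(r1 - 1, r2))
    new = [x[0] - (1 + eta[0]) * x[0] + births]
    for i in range(1, d):
        new.append(x[i] - (1 + eta[i]) * x[i] + x[i - 1])
    return [max(v, 0.0) for v in new]   # crude stand-in for the reflection

x = [100.0] * d
for _ in range(50):
    x = step(x)
assert all(v >= 0.0 for v in x)         # every age class stays nonnegative
```

In the book's model the clipping is replaced by the genuine reflection process φ_t, which is what makes (11.22) an RSDE rather than an ordinary SDE.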
In this model we have shown the following facts:
1) The size of the population depends continuously, in some sense, on the initial size of the population, the fertility rate of females, and the coefficients in the stochastic population dynamics system (Theorems 338 and 340). Moreover, the resulting error can be calculated in some cases (Corollary 339).
2) If the initial size of the population or the fertility rate of females takes a larger value, then so does the size of the population forever, as time evolves (Theorem 348).
3) If the stochastic perturbation is not large, and the forward death rate is greater than zero, then it is possible to take a fertility rate of females that makes the system exponentially stable in some sense, and the time when the population size can, in some sense, be less than a given level can also be calculated (Corollary 343).
4) If a payoff value functional (or say, a target functional) depends monotonically on the size of the population, then the payoff value takes its smallest value exactly when the fertility rate of females does (Theorem 350).
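Fact 2), the monotone dependence on the fertility rate, can be checked on a toy deterministic skeleton of (11.25). The parameters below are hypothetical, and the excess death rate is set to zero so that the one-step map has nonnegative coefficients and is therefore order-preserving:

```python
# Same dynamics, same initial data; a pointwise larger fertility rate should
# give a componentwise larger population vector for all time.
d, r1, r2, p00 = 4, 2, 3, 0.1
k_sex = 0.49                             # sex-rate model k_i (assumed constant)

def evolve(beta, steps=40):
    x = [50.0] * d
    for _ in range(steps):
        births = beta * sum((1 - p00) * k_sex * x[i] for i in range(r1 - 1, r2))
        x = [births] + x[:-1]            # newborns enter class 1, others age
    return x

lo, hi = evolve(0.5), evolve(0.9)
assert all(a <= b + 1e-9 for a, b in zip(lo, hi))
```

This is only a finite-dimensional shadow of Theorem 348; the theorem itself handles the diffusion, jump and reflection terms as well.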
11.9 Calculation of Solutions to Population RSDE
Now let us discuss how to calculate, or say, construct a practical solution of the stochastic population dynamics. For simplicity we will discuss the population RSDE without jumps. Suppose we have already got stochastic population dynamics as in (11.25) with c(t,x,z) = 0. To calculate its solution practically, first let us see what will happen if (x_t, φ_t) is a solution of (11.25) with c(t,x,z) = 0. For this purpose we now give a lemma for a more general RSDE.
Lemma 351 If (x_t, φ_t) satisfies
dx(t) = dy(t) + dφ(t),
x₀ = x ∈ R̄^d_+, x_t ∈ R̄^d_+, t ≥ 0,
and the other statements in (11.4) also hold for (x_t, φ_t), (11.30)
where y(t) is a given continuous process with y(0) = x, then φ_i(t) = sup_{0≤s≤t}((−y_i(s)) ∨ 0), i = 1, 2, …, d.
Proof. If (x_t, φ_t) solves (11.30), set
φ̄_i(t) = sup_{0≤s≤t}((−y_i(s)) ∨ 0), x̄_i(t) = y_i(t) + φ̄_i(t);
then it is obvious that 0 ≤ φ̄_i(t) is increasing and continuous, φ̄_i(0) = 0, and x̄_i(t) ≥ 0, for all i = 1, 2, …, d. Moreover,
φ̄_i(t) = ∫_0^t I_{x̄_i(s)=0} dφ̄_i(s), φ̄(t) = ∫_0^t n̄(s) d|φ̄|(s),
where n̄(t) = (n̄_1(t), …, n̄_d(t)) ∈ 𝒩_{x̄(t)}, as x̄(t) ∈ ∂R̄^d_+, and φ̄(t) = (φ̄_1(t), …, φ̄_d(t)), x̄(t) = (x̄_1(t), …, x̄_d(t)). So (x̄(t), φ̄(t)) solves
dx̄(t) = dy(t) + dφ̄(t),
x̄₀ = x ∈ R̄^d_+, x̄_t ∈ R̄^d_+, t ≥ 0,
and the other statements in (11.4) hold for (x̄(t), φ̄(t)). In particular, (x̄_i(t), φ̄_i(t)) satisfies
i.e. it is a solution of the Skorohod problem (11.31) in 1-dimensional space. Since (x_i(t), φ_i(t)) also satisfies (11.31), by the uniqueness of the solution to the Skorohod problem we have that, ∀i = 1, 2, …, d,
(x_i(t), φ_i(t)) = (x̄_i(t), φ̄_i(t)). ∎
This lemma motivates us to use Picard's iteration technique to calculate the unknown solution (x_i(t), φ_i(t)), i = 1, 2, …, d, of the RSDE (11.30). In fact, we may let
y_i^1(t) = x_i(0) + ∫_0^t b_i(s, x(0)) ds + ∫_0^t σ_{ik}(s, x(0)) dw^k(s), i = 1, 2, …, d.
By this y_i^1(t) we can get
φ_i^1(t) = sup_{0≤s≤t}((−y_i^1(s)) ∨ 0), x_i^1(t) = y_i^1(t) + φ_i^1(t).
By the proof of the previous lemma (x_i^1(t), φ_i^1(t)) satisfies a Skorohod problem similar to (11.31) but with the given y_i^1(t) and the same initial value x_i(0). By using induction we may construct
y_i^n(t) = x_i(0) + ∫_0^t b_i(s, x^{n−1}(s)) ds + ∫_0^t σ_{ik}(s, x^{n−1}(s)) dw^k(s),
φ_i^n(t) = sup_{0≤s≤t}((−y_i^n(s)) ∨ 0),
x_i^n(t) = y_i^n(t) + φ_i^n(t),
(11.32)
and we see that (x_i^n(t), φ_i^n(t)) still satisfies a Skorohod problem similar to (11.31) but with the given y_i^n(t) and the same initial value x_i(0). Thus, after some calculation, we have a sequence (x^n(t), φ^n(t)), n = 1, 2, …, which we may call the approximate solutions of (11.30). Can we show that they actually "converge" to the solution of (11.30)? The following theorem answers this question.
Theorem 352 Assume that for all t ≥ 0, ω ∈ Ω, x ∈ R̄^d_+,
1° |b(t,x)|² + ‖σ(t,x)‖² ≤ k₀(1 + |x|²), where k₀ ≥ 0 is a constant,
2° |b(t,x) − b(t,y)|² + ‖σ(t,x) − σ(t,y)‖² ≤ k₀ |x − y|².
Then we have the following conclusions:
1) (11.30) has a pathwise unique strong solution (x_t, φ_t).
2) For i = 1, 2, …, d the sequence (x_i^n(t), φ_i^n(t)), n = 1, 2, …, constructed by (11.32) satisfies
lim_{n→∞} [E sup_{t≤T} |x^n(t) − x(t)|² + E sup_{t≤T} |φ^n(t) − φ(t)|²] = 0.
3) ∀η > 0, let N_η be such that ∑_{n=N_η}^∞ C_T n⁴ (1 + 2²k₀T)^{n−1}/(n − 1)! < η, where C_T is a constant depending only on k₀, T and E|x₀|²; and ∀ε > 0, let N_ε be such that 1/N_ε² < ε. Then as n ≥ max(N_η, N_ε),
In this theorem, conclusions 1) and 2) mean that a uniform limit x_t exists, which is just the unique solution of the RSDE (11.30), and conclusion 3) tells us that, with probability greater than (1 − η), the error of the uniform approximation is less than a given number ε > 0 if n is large enough. So the approximate solutions and the approximate error can all be calculated.
Proof. 1) is true by Theorem 335. Now let us show 2). By assumption, for n = 1, 2, …,
x_t^n = x₀ + ∫_0^t b(s, x_s^{n−1}, ω) ds + ∫_0^t σ(s, x_s^{n−1}, ω) dw_s + φ_t^n.
Applying Ito's formula, one finds that
|x_t^n − x_t^{n−1}|² ≤ 2∫_0^t (x_s^n − x_s^{n−1}) · (b(s, x_s^{n−1}) − b(s, x_s^{n−2})) ds + 2∫_0^t (x_s^n − x_s^{n−1}) · (σ(s, x_s^{n−1}) − σ(s, x_s^{n−2})) dw_s + ∫_0^t ‖σ(s, x_s^{n−1}) − σ(s, x_s^{n−2})‖² ds.
It follows by the martingale inequality that, as 0 ≤ t ≤ T,
u_t^n = E sup_{s≤t} |x_s^n − x_s^{n−1}|² ≤ k̄_T ∫_0^t u_s^{n−1} ds ≤ C_t (k̄_T t)^{n−1}/(n − 1)!,
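The one-dimensional Skorohod reflection used throughout Section 11.9, φ(t) = sup_{0≤s≤t}((−y(s)) ∨ 0), x(t) = y(t) + φ(t), together with the Picard scheme (11.32), can be sketched on a grid. The sketch below is a drift-only toy (no dw term, no jumps); the drift b(x) = −2x − 1 and all step sizes are illustrative choices, not from the text:

```python
def reflect(y):
    """Skorohod map on a grid: phi(t) = sup_{s<=t} max(-y(s), 0), x = y + phi."""
    phi, x, running = [], [], 0.0
    for v in y:
        running = max(running, -v, 0.0)
        phi.append(running)
        x.append(v + running)
    return x, phi

# Picard iteration (11.32) for dx = b(x)dt + dphi, x0 = 0.5, with b(x) = -2x - 1,
# which pushes the path to the boundary so the reflection actually acts.
T, n = 2.0, 2000
dt = T / n
b = lambda x: -2.0 * x - 1.0
x_prev = [0.5] * (n + 1)                  # x^0: the constant initial guess
for _ in range(30):                       # build y^n from x^{n-1}, then reflect
    y = [0.5]
    for k in range(n):
        y.append(y[-1] + b(x_prev[k]) * dt)
    x_prev, phi = reflect(y)

assert all(v >= -1e-12 for v in x_prev)   # reflected path stays nonnegative
assert phi[-1] > 0.0                      # the reflection term is active
```

The drift pushes the unreflected path below zero, so φ is genuinely active, and the iterates stabilize because b is Lipschitz, mirroring conclusion 2) of Theorem 352.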
u_t, where P_t satisfies (12.9). So we have that such a u_t satisfies (12.8). However, solving (12.9) one finds that
P_t = ∫_t^T e^{−A(t−s)} ds.
From these results one finds that, for such a u_t, (12.7) has a unique solution. Since, by the approximation to the infimum (see the Lemma below), one easily finds that there must exist an optimal control u(·) ∈ U_ad such that J(u(·)) = inf_{v(·)∈U_ad} J(v(·)), the u_t obtained above must be an optimal control.
Lemma 355 There exists an optimal control u(·) ∈ U_ad such that J(u(·)) = inf_{v(·)∈U_ad} J(v(·)).
Proof. Take v_n ∈ U_ad such that J(v_n(·)) ≤ inf_{v(·)∈U_ad} J(v(·)) + 1/n, for n = 1, 2, …. Since E∫_0^T |v_n(t)|² dt ≤ k₀ < ∞, there exists a subsequence of {v_n}, denoted again by {v_n}, and there also exists a u ∈ U_ad such that v_n → u weakly in L_ℱ²(R¹). Hence there also exists a subsequence, denoted again by {v_n}, such that its arithmetic means converge strongly to u in L_ℱ²(R¹); that is, (v_1 + ⋯ + v_n)/n → u in L_ℱ²(R¹). However, one easily sees that (v_1 + ⋯ + v_n)/n ∈ U_ad and
J((v_1 + ⋯ + v_n)/n) ≤ (J(v_1) + ⋯ + J(v_n))/n ≤ inf_{v(·)∈U_ad} J(v(·)) + (1/n)∑_{k=1}^n 1/k.
Hence, by letting n → ∞, one easily obtains that J(u(·)) = inf_{v(·)∈U_ad} J(v(·)). ∎
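The key compactness step in this proof (a bounded minimizing sequence converges weakly, and by Mazur's theorem suitable arithmetic means converge strongly) can be illustrated in L²(0,1) with the Rademacher-type functions v_n(t) = sign(sin(2ⁿπt)): they converge weakly to 0 while ‖v_n‖ = 1 (so there is no strong convergence), yet the norm of their arithmetic mean is 1/√N because the v_n are orthogonal. The grid discretization below is only a sketch:

```python
import math

M = 2 ** 12                                   # grid points on (0,1)

def rademacher(n):
    # sign(sin(2^n * pi * t)) sampled at the midpoints of the grid
    return [1.0 if math.sin(2 ** n * math.pi * (k + 0.5) / M) > 0 else -1.0
            for k in range(M)]

def l2_norm(f):
    return math.sqrt(sum(v * v for v in f) / M)

vs = [rademacher(n) for n in range(1, 9)]     # v_1..v_8 (2^8 divides M)
mean = [sum(col) / len(vs) for col in zip(*vs)]

assert all(abs(l2_norm(v) - 1.0) < 1e-9 for v in vs)         # no strong limit
assert abs(l2_norm(mean) - 1.0 / math.sqrt(len(vs))) < 1e-6  # ||mean|| = 1/sqrt(N)
```

On the grid the v_n are exactly orthogonal (each constant block of v_n contains a whole number of sign periods of v_m, m > n), so the 1/√N decay is exact up to rounding.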
12.5 Intuitive thinking on the Maximum Principle
In Calculus, if a real-valued function of d variables f(x), x ∈ R^d, attains its minimum at the point x₀, then we have f(x) ≥ f(x₀), ∀x ∈ R^d; or equivalently, f(x₀ + y) ≥ f(x₀), ∀y ∈ R^d. To explain clearly the intuitive derivation of a Maximum Principle for a functional, let us simplify the notation. Consider a non-random functional J(v) = ∫_0^T l(x_t^v, v_t) dt + h(x_T^v), where all functions are real-valued, and x_t is a solution of a real ODE
12. Maximum Principle for Stochastic Systems with Jumps
for v(·) ∈ U, and where v_t takes values in a non-empty set U ⊂ R¹. If J(v) attains its minimum at v = u ∈ U, then obviously we will have J(v) ≥ J(u), ∀v ∈ U; or, equivalently,
∫_0^T (l(x_t^v, v_t) − l(x_t^u, u_t)) dt + h(x_T^v) − h(x_T^u) ≥ 0, ∀v ∈ U.
By Taylor's formula we will have, approximately, that ∀v ∈ U,
∫_0^T [l_x(x_t^u, u_t)(x_t^v − x_t^u) + l(x_t^u, v_t) − l(x_t^u, u_t)] dt + h_x(x_T^u)(x_T^v − x_T^u) + o(sup_{t∈[0,T]} |x_t^v − x_t^u|) ≥ 0, (12.11)
+
+
+
dx;'" = b,(xy, vt) . x:"dt
+
+ (b(xy, vt) - b(xy, ut))dt,
= 0.
(12.12)
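For a deterministic toy one can check that the variational equation just introduced really tracks the spike variation to first order. Take b(x,u) = −x + u (so b_x = −1), with u ≡ 0 spiked to v = 1 on [t₀, t₀+ε); since this b is linear in x, the difference x_T^{u^ε} − x_T^u and the solution of the variational equation coincide up to rounding on a common Euler grid. All concrete choices (b, T, t₀, ε) are illustrative, not from the text:

```python
def solve(b, x0, T=1.0, n=4000):
    """Forward Euler for dx/dt = b(t, x)."""
    dt = T / n
    x, xs = x0, [x0]
    for k in range(n):
        x += b(k * dt, x) * dt
        xs.append(x)
    return xs

t0, eps = 0.3, 0.05
u  = lambda t: 0.0
ue = lambda t: 1.0 if t0 <= t < t0 + eps else 0.0   # spike variation

xs  = solve(lambda t, x: -x + u(t), 1.0)            # optimal trajectory
xse = solve(lambda t, x: -x + ue(t), 1.0)           # spiked trajectory

# First order variational equation: dx1 = b_x x1 dt + (b(x,u_eps)-b(x,u)) dt,
# x1(0) = 0; here b_x = -1 and the control difference is ue(t) - u(t).
n, dt = 4000, 1.0 / 4000
x1 = 0.0
for k in range(n):
    t = k * dt
    x1 += (-1.0 * x1 + (ue(t) - u(t))) * dt

true_diff = xse[-1] - xs[-1]
assert abs(true_diff - x1) < 1e-9      # exact for linear b, up to rounding
```

For nonlinear b the agreement would only be to o(ε), which is exactly the content of the expansion above.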
2) By using integration by parts we have
∫_0^T p_t dx_t^{1,ε} + ∫_0^T (dp_t) x_t^{1,ε} = ∫_0^T d(p_t x_t^{1,ε}),
and we can introduce a function p_t such that
∫_0^T l_x(x_t^u, u_t) x_t^{1,ε} dt + h_x(x_T^u) x_T^{1,ε} = −∫_0^T x_t^{1,ε} dp_t + h_x(x_T^u) x_T^{1,ε} − ∫_0^T x_t^{1,ε} b_x(x_t^u, u_t) p_t dt = ∫_0^T p_t (b(x_t^u, u_t^ε) − b(x_t^u, u_t)) dt,
if p_t solves the adjoint equation −dp_t = (b_x(x_t^u, u_t) p_t + l_x(x_t^u, u_t))dt, p_T = h_x(x_T^u).
This means that (12.11) can be rewritten as
0 ≤ ∫_0^T [l(x_s^u, u_s^ε) − l(x_s^u, u_s) + p_s (b(x_s^u, u_s^ε) − b(x_s^u, u_s))] ds + o(ε).
By the definition of u^ε one then obtains the necessary condition by dividing by ε and letting ε ↓ 0. Now, for any ε > 0, set
u^ε(t) = v, as t ∈ [t₀, t₀ + ε),
u^ε(t) = u(t), otherwise,
and
where
+ +
1 E l e $bxx(s,xSluS)xs9 5.' (bx(s, xs, u:) - bx(s, xs, us)) x:,' lll: = p x x ( s , xs, %)xs'l e X,'l E (ax(s, x s , u 3 - ax(%xs, us)) x p , x ~ ( z )= $cxx(s,x,, z ) x ~ , ~ x ~ ~ ~ , where we will write fX:xiyiy', for f = b, 0, c, 1, h. fxxyy = By Theorem 117 it is known that (12.16) has a unique strong solution and hence again, by Theorem 117, (12.17) also has a unique strong solution. We have the following lemmas.
v:
=
c$=~
Lemma 356 1° For p = 1, 2, 3, 4,
E sup_{t≤T} |x_t|^{2p} ≤ k_T,
where k_T = k₀ T^p (1 + sup_{t∈[0,T]} |u_t|^{2p}), and k₀ ≥ 0 is a constant;
Remark 357 In [184] there are some errors: in (2.10) there, the estimate, for p > 1,
E|∫_{I_ρ} ∫_Z g₀(s,z) Ñ_k(ds,dz)|^{2p} ≤ c_p |I_ρ|^{p−1} E ∫_{I_ρ} (∫_Z |g₀(s,z)|² π(dz))^p ds
is used to get an estimate similar to 2°. However, the above inequality is not correct. To see this, if we take I_ρ = [s,t] and p = 3, then by the Kolmogorov continuous trajectory version theorem ∫_0^t ∫_Z g₀(s,z) Ñ_k(ds,dz) would have a continuous (in t) version. This is obviously incorrect. Hence the proof in [184] that the general Maximum Principle still holds for stochastic systems with jumps whose jump coefficients also depend on controls is, at the very least, incomplete. Actually, by the Burkholder–Davis–Gundy inequality one can only get that, for p > 1,
E|∫_0^t ∫_Z g₀(s,z) Ñ_k(ds,dz)|^{2p} ≤ k_p E|∫_0^t ∫_Z |g₀(s,z)|² N_k(ds,dz)|^p,
but not that
E|∫_0^t ∫_Z g₀(s,z) Ñ_k(ds,dz)|^{2p} ≤ k_p E|∫_0^t ∫_Z |g₀(s,z)|² π(dz) ds|^p.
Now let us show Lemma 356.
Proof. 1° can be proved by Ito's formula and Gronwall's inequality. Let us use Gronwall's inequality to show 2° by 1°. In fact, by assumption and by (12.16),
y_t^ε = E sup_{s≤t} |x_s^{1,ε}|^{2p} ≤ 5^{2p} [k̄ ∫_0^t y_s^ε ds + E(∫_0^t |b(s, x_s, u_s^ε) − b(s, x_s, u_s)| ds)^{2p} + E|∫_0^t (σ(s, x_s, u_s^ε) − σ(s, x_s, u_s)) dw_s|^{2p}] = 5^{2p} [I_1^ε + I_2^ε + I_3^ε].
However,
I_3^ε ≤ k_p E(∫_{t₀}^{t₀+ε} ‖σ(s, x_s, u_s^ε) − σ(s, x_s, u_s)‖² ds)^p ≤ 2^p k₀^p k_p ε^p.
A similar estimate holds for I_2^ε. Hence by Gronwall's inequality we obtain the first estimate in 2°. 3° is proved by the first estimate of 2°. Finally, by (12.17) and Gronwall's inequality one easily derives the second estimate of 2°. The proof is complete. ∎
12.6 Some Lemmas
Lemma 358 E sup_{t≤T} |x_t^ε − x_t − x_t^{1,ε} − x_t^{2,ε}|² = o(ε²).
Proof. Set y_t^ε = x_t^ε − x_t − x_t^{1,ε} − x_t^{2,ε}. Then
y_t^ε = ∫_0^t [b(s, x_s^ε, u_s^ε) − b(s, x_s + x_s^{1,ε} + x_s^{2,ε}, u_s^ε)] ds + ∫_0^t [σ(s, x_s^ε, u_s^ε) − σ(s, x_s + x_s^{1,ε} + x_s^{2,ε}, u_s^ε)] dw_s + ∫_0^t ∫_Z [c(s, x_{s−}^ε, z) − c(s, x_{s−} + x_{s−}^{1,ε} + x_{s−}^{2,ε}, z)] Ñ_k(ds,dz) + ∫_0^t R^ε(b)(s) ds + ∫_0^t R^ε(σ)(s) dw_s + ∫_0^t ∫_Z R^ε(c)(s,z) Ñ_k(ds,dz),
where, by Taylor's formula,
R^ε(b)(s) = (b_x(s, x_s, u_s^ε) − b_x(s, x_s, u_s)) x_s^{2,ε} + ½ b_xx(s, x_s, u_s)(2 x_s^{1,ε} x_s^{2,ε} + x_s^{2,ε} x_s^{2,ε}) + ½ (b_xx(s, x_s, u_s^ε) − b_xx(s, x_s, u_s))(x_s^{1,ε} + x_s^{2,ε})(x_s^{1,ε} + x_s^{2,ε}) + ∫_0^1 (1 − λ)[b_xx(s, x_s + λ(x_s^{1,ε} + x_s^{2,ε}), u_s^ε) − b_xx(s, x_s, u_s^ε)] dλ · (x_s^{1,ε} + x_s^{2,ε})(x_s^{1,ε} + x_s^{2,ε}),
and R^ε(σ)(s), R^ε(c)(s,z) are similarly defined. Now by applying Ito's formula, Gronwall's inequality and Lemma 356, one obtains the stated conclusion. ∎
Lemma 359 0 ≤ E ∫_0^T [l(t, x_t, u_t^ε) − l(t, x_t, u_t) + l_x(t, x_t, u_t)(x_t^{1,ε} + x_t^{2,ε}) + ½ (x_t^{1,ε})* l_xx(t, x_t, u_t) x_t^{1,ε}] dt + E[h_x(x_T)(x_T^{1,ε} + x_T^{2,ε})] + ½ E[(x_T^{1,ε})* h_xx(x_T) x_T^{1,ε}] + o(ε).

Proof. Since (x_t, u_t) is optimal, we have

0 ≤ E[∫_0^T l(t, x_t^ε, u_t^ε) dt + h(x_T^ε)] − E[∫_0^T l(t, x_t, u_t) dt + h(x_T)]
= E ∫_0^T (l(t, x_t + x_t^{1,ε} + x_t^{2,ε}, u_t^ε) − l(t, x_t, u_t)) dt + E[h(x_T + x_T^{1,ε} + x_T^{2,ε}) − h(x_T)] + o(ε).

Expanding l and h by Taylor's formula up to second order and collecting all terms of order o(ε), the right-hand side equals

E ∫_0^T [l(t, x_t, u_t^ε) − l(t, x_t, u_t) + l_x(t, x_t, u_t)(x_t^{1,ε} + x_t^{2,ε}) + ½ (x_t^{1,ε})* l_xx(t, x_t, u_t) x_t^{1,ε}] dt
+ E[h_x(x_T)(x_T^{1,ε} + x_T^{2,ε})] + ½ E[(x_T^{1,ε})* h_xx(x_T) x_T^{1,ε}] + o(ε).

Thus the conclusion now follows from Lemma 356. ∎
12.7 Proof of Theorem 354

To derive the Maximum Principle from Lemma 359 we need to introduce an adjoint process p_t such that it makes

E ∫_0^T l_x(t, x_t, u_t) x_t^{1,ε} dt + E h_x(x(T)) x_T^{1,ε} = E ∫_0^T f(t, u_t, v_t, x_t, p_t) dt.

For this it is necessary to notice that by Itô's formula (or, say, using integration by parts)

−E ∫_0^T (dp_t) x_t^{1,ε} = E[∫_0^T p_t dx_t^{1,ε} + ∫_0^T dp_t dx_t^{1,ε} − ∫_0^T d(p_t x_t^{1,ε})].

So if we introduce (p_t, Q_t, R_t) as the solution of (12.4), then by (12.16) we have

E ∫_0^T l_x(t, x_t, u_t) x_t^{1,ε} dt + E h_x(x(T)) x_T^{1,ε} = −E ∫_0^T (x_t^{1,ε})* dp_t + ⋯

Now let

X_t^ε = x_t^{1,ε}(x_t^{1,ε})*,

that is, the d × d matrix with entries x_t^{1,ε,i} x_t^{1,ε,j}, i, j = 1, …, d. Then we have
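The integration-by-parts identity d(p_t x_t) = p_t dx_t + x_t dp_t + dp_t dx_t invoked above has an exact discrete analogue, Δ(p x)_k = p_k Δx_k + x_k Δp_k + Δp_k Δx_k, which the following sketch (two illustrative random-walk paths, not the book's processes) verifies by telescoping:

```python
import random

random.seed(1)
N = 1000
# two hypothetical discrete paths p_k, x_k (Gaussian random walks)
p = [0.5]
x = [-0.3]
for _ in range(N):
    p.append(p[-1] + random.gauss(0, 0.1))
    x.append(x[-1] + random.gauss(0, 0.1))

dp = [p[k + 1] - p[k] for k in range(N)]
dx = [x[k + 1] - x[k] for k in range(N)]

# telescoping discrete product rule:
# p_N x_N - p_0 x_0 = sum of (p_k dx_k + x_k dp_k + dp_k dx_k)
lhs = p[-1] * x[-1] - p[0] * x[0]
rhs = sum(p[k] * dx[k] + x[k] * dp[k] + dp[k] * dx[k] for k in range(N))
assert abs(lhs - rhs) < 1e-9
```

The cross term Σ Δp Δx is the discrete counterpart of the quadratic covariation term dp_t dx_t, which is precisely why the adjoint-process computation above cannot drop it.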
Appendix A A Short Review on Basic Probability Theory
A.1 Probability Space, Random Variable and Mathematical Expectation

In real analysis a real-valued function f(x) defined on R^1 is called a Borel-measurable function if, for any c ∈ R^1, {x ∈ R^1 : f(x) ≤ c} is a Borel-measurable set. If we denote the totality of Borel-measurable sets in R^1 by ℬ(R^1), and call (R^1, ℬ(R^1)) a measurable space, then we may write {x ∈ R^1 : f(x) ≤ c} ∈ ℬ(R^1), ∀c ∈ R^1, for a Borel-measurable function f, and say that f is a measurable function defined on the measurable space (R^1, ℬ(R^1)). The definition of a Lebesgue-measurable function is given in similar fashion. Furthermore, in real analysis the Lebesgue measure m is also introduced and used to construct a measure space (R^1, ℬ(R^1), m), so that one can define the Lebesgue integral of a Lebesgue-measurable function under very general conditions and obtain its many important properties and applications.

Such concepts and ideas can be applied to probability theory. For this we first need to introduce a probability space (Ω, ℱ, P).

Definition 360 (Ω, ℱ, P) is called a probability space if Ω is a set, ℱ is a family of subsets of Ω which is a σ-field, and P is a probability measure.

Definition 361 ℱ is called a σ-field (σ-algebra) if it satisfies the following conditions:
1) Ω ∈ ℱ,
2) A ∈ ℱ ⟹ A^c = Ω − A ∈ ℱ,
3) A_i ∈ ℱ, i = 1, 2, … ⟹ ∪_{i=1}^∞ A_i ∈ ℱ.
It is easily seen that a σ-field is closed under the operations ∪ and ∩ performed countably many times.
Definition 362 A set function P defined on a σ-field ℱ is called a probability measure, or simply a probability, if
1) P(A) ≥ 0, ∀A ∈ ℱ,
2) P(∅) = 0, where ∅ is the empty set,
3) A_i ∈ ℱ, A_i ∩ A_j = ∅, ∀i ≠ j; i, j = 1, 2, … ⟹ P(∪_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i),
4) P(Ω) = 1.

Usually, a set function P with properties 1)–3) is called a measure. Now we can introduce the concept of a random variable.
Definition 363 A real-valued function ξ defined on a probability space (Ω, ℱ, P) is called a random variable if, for any c ∈ R^1, {ω ∈ Ω : ξ(ω) ≤ c} ∈ ℱ.

So, actually, a random variable is a measurable function defined on a probability space. Recall that in real analysis, if two Lebesgue-measurable functions f and g are equal except on a set with zero Lebesgue measure, then we consider them to be the same function and show this by writing f = g (m), a.e. Such a concept is also introduced into probability theory.
Definition 364 If ξ and η are two random variables on the same probability space (Ω, ℱ, P) such that they are equal except on a set with zero probability, then we consider them to be the same random variable, and show this by writing ξ(ω) = η(ω), P-a.s., or simply ξ = η, a.s. Here "a.s." means "almost surely".

Similarly to real analysis we may also define the integral of a random variable as follows (recall that a random variable is a measurable function defined on a probability measure space). If ξ ≥ 0 is a random variable defined on a probability space (Ω, ℱ, P), we write

Eξ = ∫_Ω ξ(ω)P(dω) = ∫_Ω ξ(ω)dP(ω) = ∫_Ω ξ dP = lim_{n→∞} Σ_{k=0}^∞ (k/2^n) P({ω : ξ(ω) ∈ [k/2^n, (k+1)/2^n)}).

For a general random variable ξ = ξ^+ − ξ^− we write

Eξ = Eξ^+ − Eξ^−,

if the right side makes sense.
Definition 365 Eξ defined above is called the mathematical expectation (or, simply, the expectation) of ξ, and D(ξ) = E|ξ − Eξ|^2 is called the variance of ξ.
By definition Eξ may not exist for a general random variable ξ. However, if Eξ^+ < ∞ or Eξ^− < ∞, then Eξ exists. By definition one also sees that Eξ is simply the mean value of the random variable ξ. So to call Eξ the expectation of ξ is quite natural. (Sometimes we also call Eξ the mean of ξ.) Obviously, D(ξ) describes the mean square error, or difference, between ξ and its expectation Eξ.

In real analysis the Fourier transform is a powerful tool for simplifying and treating many kinds of mathematical problems. Such an idea can also be introduced into probability theory, leading to the so-called characteristic function of a random variable ξ.

Definition 366 If ξ is a random variable, then φ(t), t ∈ R^1, defined by φ(t) = E e^{itξ}, where i is the imaginary unit, is called the characteristic function of ξ.

The characteristic function is a powerful tool for simplifying and treating many probability problems. Here we can use it to distinguish different specific useful random variables.
Example 367 ξ is called a Normal (distributed) random variable, or a Gaussian random variable, if and only if its characteristic function is φ(t) = e^{imt − σ^2 t^2 / 2}. In this case Eξ = m, E|ξ − Eξ|^2 = σ^2. That is, m and σ^2 are the expectation and variance of ξ, respectively.

One can show that if ξ is such a random variable that ∀x ∈ R^1,

P(ξ ≤ x) = (1/(√(2π) σ)) ∫_{−∞}^x e^{−(y−m)^2/(2σ^2)} dy,

then φ(t) = E e^{itξ} = e^{imt − σ^2 t^2 / 2}. So ξ is a Normal random variable. Moreover, by the definition of expectation and by the definition of the derivative one can show that

Eξ = ∫_Ω ξ dP = ∫_{−∞}^∞ x f(x) dx, and (d/dx) P(ξ ≤ x) = f(x),

where f(x) = (1/(√(2π) σ)) e^{−(x−m)^2/(2σ^2)}. Naturally, we call f(x) the probability density function of the Normal random variable ξ. So this (probability) density function also completely describes a Normal random variable.

Furthermore, we also call the function defined by F(x) = P(ξ ≤ x), ∀x ∈ R^1, the (probability) distribution function of the random variable ξ. Obviously, any (probability) distribution function F(x) has the following properties:
1) F(−∞) = lim_{x→−∞} F(x) = 0, F(∞) = lim_{x→∞} F(x) = 1;
2) F(x) is an increasing and right continuous function on x ∈ R^1.
Conversely, one can show [40],[78] that if F(x) is a function satisfying the above two properties, then there exists a probability space (Ω, ℱ, P) and a random variable ξ defined on it such that F(x) = P(ξ ≤ x), ∀x ∈ R^1.
F(x) must be the distribution function of some random variable ξ. So the distribution function completely describes a random variable.

Now let us discuss the n-dimensional case. For a random vector ξ = (ξ_1, …, ξ_n), where each component ξ_i, i = 1, …, n, is a real-valued random variable, we may also introduce an n-variable characteristic function φ(t) as follows:

Definition 368 φ(t) = φ(t_1, …, t_n) = E e^{i(t,ξ)} = E e^{i Σ_{k=1}^n t_k ξ_k} is called the characteristic function of ξ.
Now we can use the characteristic function to define the independence of two random variables.

Definition 369 We say that two real random variables ξ and η are independent if they satisfy

φ_{ξ,η}(t_1, t_2) = φ_ξ(t_1) φ_η(t_2), ∀t = (t_1, t_2) ∈ R^2,

where φ_{ξ,η}(t_1, t_2) = E e^{i(t_1 ξ + t_2 η)}, φ_ξ(t_1) = E e^{i t_1 ξ}, etc.; or, equivalently,

∀A, B ∈ ℬ(R^1), P(ξ ∈ A, η ∈ B) = P(ξ ∈ A) P(η ∈ B).

In similar fashion we can define the independence of three random variables ξ, η and ζ. If n random variables ξ_1, …, ξ_n are independent, then we say that {ξ_k}_{k=1}^n is an independent system. The intuitive meaning of independence of two random variables is that they take their values independently in probability. One easily sees that if {ξ_k}_{k=1}^n is an independent system, then each pair of its random variables is independent, i.e. all (ξ_i, ξ_j), i ≠ j; i, j = 1, …, n, form independent systems. One should pay attention to the fact that the converse statement is not necessarily true [126]. However, if {ξ_k}_{k=1}^n is a Gaussian system, then it certainly holds true; see the next section. This is why a Gaussian system is quite useful and easy to treat in practical cases: it is much easier to check that each pair of random variables forms an independent system than to check the whole n-dimensional system, and the amount of calculation is considerably reduced. (Calculations involving 2 × 2 matrices are much easier than calculations involving an n × n matrix.)
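The warning that pairwise independence does not imply mutual independence can be checked exhaustively on the classic counterexample ξ_3 = ξ_1 ξ_2 with fair ±1 coins ξ_1, ξ_2 (a standard example, not taken from the book):

```python
from itertools import product
from fractions import Fraction

# four equally likely outcomes of (xi1, xi2); xi3 = xi1 * xi2
outcomes = [(a, b, a * b) for a, b in product((-1, 1), repeat=2)]
p = Fraction(1, 4)

def prob(event):
    """Exact probability of an event over the four outcomes."""
    return sum(p for w in outcomes if event(w))

# every pair (xi_i, xi_j) is independent
for i, j in ((0, 1), (0, 2), (1, 2)):
    for x, y in product((-1, 1), repeat=2):
        joint = prob(lambda w: w[i] == x and w[j] == y)
        assert joint == prob(lambda w: w[i] == x) * prob(lambda w: w[j] == y)

# but the triple is not mutually independent:
joint3 = prob(lambda w: w[0] == 1 and w[1] == 1 and w[2] == -1)
assert joint3 == 0                      # the event is impossible
assert joint3 != Fraction(1, 8)         # yet the product of marginals is 1/8
```

Each pair factorizes exactly, yet the joint law of the triple does not — the phenomenon the text attributes to [126].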
A.2 Gaussian Vectors and Poisson Random Variables

Naturally, we generalize the concept of a real Normal random variable to an n-dimensional Normal random variable (also called a Gaussian vector) as follows:

Definition 370 An n-dimensional random variable ξ = (ξ_1, …, ξ_n) is called a Gaussian vector if its characteristic function is
φ(t_1, …, t_n) = exp(i Σ_{k=1}^n t_k m_k − ½ Σ_{j,k=1}^n R_{jk} t_j t_k).

… λ > 0 is its intensity, such that Eξ = λ. One easily sees that for the above Poisson random variable its probability distribution is

P(ξ = n) = e^{−λ} λ^n / n!, n = 0, 1, 2, …,

and Eξ = λ, D(ξ) = E(|ξ − Eξ|^2) = λ.
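That the distribution P(ξ = n) = e^{−λ} λ^n / n! indeed has mean and variance both equal to λ can be confirmed numerically (an illustrative sketch; λ = 3.5 and the truncation point are arbitrary choices, not from the book):

```python
import math

lam = 3.5
N = 200   # truncation point; the tail mass beyond N is negligible for lam = 3.5

# pmf via the recursion p_n = p_{n-1} * lam / n (avoids huge factorials)
pmf = [math.exp(-lam)]
for n in range(1, N):
    pmf.append(pmf[-1] * lam / n)

total = sum(pmf)
mean = sum(n * pn for n, pn in enumerate(pmf))
var = sum((n - mean) ** 2 * pn for n, pn in enumerate(pmf))

assert abs(total - 1.0) < 1e-12   # probabilities sum to 1
assert abs(mean - lam) < 1e-9     # E(xi) = lam
assert abs(var - lam) < 1e-9      # D(xi) = lam
```

The coincidence of mean and variance is the numerical fingerprint of a Poisson law, just as (m, σ²) fully parametrize the Normal law above.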
A.3 Conditional Mathematical Expectation and its Properties

Up to now we have reviewed the elementary theory of probability needed in this book. Now we will enter a higher level of probability theory. The first important concept is the conditional mathematical expectation of ξ under a given σ-algebra 𝒢, where 𝒢 ⊂ ℱ.
Definition 375 Suppose that 𝒢 ⊂ ℱ is a σ-algebra and ξ is a random variable. If there exists a 𝒢-measurable function, denoted by E[ξ|𝒢], such that

∀A ∈ 𝒢, ∫_A E[ξ|𝒢] dP = ∫_A ξ dP,

then E[ξ|𝒢] is called the conditional expectation of ξ given 𝒢.
If one takes 𝒢 = {Ω, A, A^c, ∅}, where A ∈ ℱ is any fixed set with P(A) > 0, then one easily sees that

E[ξ|𝒢](ω) = (1/P(A)) ∫_A ξ dP, as ω ∈ A; and E[ξ|𝒢](ω) = (1/P(A^c)) ∫_{A^c} ξ dP, as ω ∈ A^c.

(To see this, notice that E[ξ|𝒢] is 𝒢-measurable. So one should have E[ξ|𝒢](ω) = c_1 I_A(ω) + c_2 I_{A^c}(ω), where c_1 and c_2 are constants depending only on the given ξ. On the other hand, by c_1 P(A) = ∫_A E[ξ|𝒢] dP = ∫_A ξ dP one sees that c_1 = (1/P(A)) ∫_A ξ dP. Similarly, c_2 = (1/P(A^c)) ∫_{A^c} ξ dP. From this the conclusion follows.)

This tells us that if one takes the conditional expectation of ξ given that A has happened (that is, ω ∈ A), then one should consider (A, A ∩ ℱ, P(·)/P(A)) as a new probability space and take the expectation of ξ in this new probability space, which equals ∫_A ξ dP / P(A).

The intuitive meaning of the conditional expectation E[ξ|𝒢] is the expectation of ξ taken under the given information 𝒢. Notice that by definition the conditional expectation is usually a random variable, unlike the expectation, which is a deterministic value. This can also be explained by intuition. For example, a person at the present time s has wealth x_s from a game, and at the future time t he will have wealth x_t from this game. The expected money for this person at the future time t is naturally expressed as E[x_t|ℱ_s], where ℱ_s means the information up to time s,
and E[x_t|ℱ_s] is a random variable which is known by the gambler, for any t > s. The important thing is that the conditional expectation has many useful properties which we will use a lot in this book, so we must discuss it.

Existence and Properties of the Conditional Expectation

Now let us use the Radon–Nikodym theorem to show the existence of the conditional expectation. Assume that ξ is a random variable such that E|ξ| < ∞, and 𝒢 ⊂ ℱ is a σ-algebra. Write μ(A) = ∫_A ξ dP,
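The defining property ∫_A E[ξ|𝒢] dP = ∫_A ξ dP, and the partition picture discussed above, can be checked exactly on a finite sample space (a hypothetical toy example, not from the book):

```python
from fractions import Fraction

# hypothetical toy space: omega in {0,...,5}, uniform; xi(omega) = omega^2
omega = range(6)
P = {w: Fraction(1, 6) for w in omega}
xi = {w: w * w for w in omega}

partition = [{0, 1, 2}, {3, 4, 5}]   # generates the sigma-field G

def cond_exp(w):
    """E[xi|G](omega): the xi-average over the partition cell containing omega."""
    cell = next(A for A in partition if w in A)
    p_cell = sum(P[v] for v in cell)
    return sum(xi[v] * P[v] for v in cell) / p_cell

# defining property: integral of E[xi|G] over A equals integral of xi over A
for A in partition:
    assert sum(cond_exp(v) * P[v] for v in A) == sum(xi[v] * P[v] for v in A)

# tower property: E[E[xi|G]] = E xi
assert sum(cond_exp(w) * P[w] for w in omega) == sum(xi[w] * P[w] for w in omega)
```

On each cell the conditional expectation is constant — exactly the c_1 I_A + c_2 I_{A^c} form derived above, here with two cells of probability 1/2 each.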
Proof. In fact, since X is RCLL, there exists a point t_1 > 0 such that sup_{s∈[0,t_1)} |X_s − X_0| < ε/2, so sup_{s,t∈[0,t_1)} |X_s − X_t| ≤ ε; take the maximum such point t_1. Again, starting from t_1, there exists a maximum point t_2 > t_1 such that sup_{s,t∈[t_1,t_2)} |X_s − X_t| ≤ ε. By induction there exists a maximum point t_n > t_{n−1} such that sup_{s,t∈[t_{n−1},t_n)} |X_s − X_t| ≤ ε. Now such a procedure should stop after finitely many steps. Otherwise, let τ = sup_n t_n. As X_{τ−} exists, one easily finds a t'_n > t_n such that sup_{s,t∈[t_n,t'_n)} |X_s − X_t| ≤ ε. This is a contradiction with the definition of t_n. Obviously, we should have τ = 1. So we have proved that there exist finitely many points 0 = t_0 < t_1 < ⋯ < t_{n_ε} = 1 such that sup_{s,t∈[t_{i−1},t_i)} |X_s − X_t| ≤ ε, i = 1, …, n_ε. ∎
Corollary 382 For any given ε > 0, there exist only finitely many points 0 = t_0 < t_1 < ⋯ < t_{n_ε} = 1 such that |ΔX_{t_i}| > ε, i = 1, 2, …, n_ε. Thus X has at most countably many discontinuity points.

In fact, by Corollary 382, taking ε = 1, one sees that

sup_{t∈[0,1]} |X_t| ≤ max_{1≤i≤n_1} (|X_{t_{i−1}}| + sup_{t∈[t_{i−1},t_i)} |X_t − X_{t_{i−1}}|) ≤ n_1 + Σ_{i=1}^{n_1} |X_{t_{i−1}}| < ∞.
Appendix B. Space D and Skorohod's Metric
Furthermore, by Corollary 382, for any given ε > 0 it can only happen at the finitely many points 0 = t_0 < t_1 < ⋯ < t_{n_ε} = 1 that |ΔX_{t_i}| > ε, i = 1, 2, …, n_ε. Finally,

{|ΔX_t| > 0} = {t ∈ [0,1] : |ΔX_t| > 0} = ∪_{n=1}^∞ {|ΔX_t| > 1/n}.

So X has at most countably many discontinuity points. Thus facts 1) and 2) hold true. These two facts mean that any RCLL function defined on [0,1] is bounded; for any given ε > 0 it has only finitely many jump points where the jump exceeds ε, and so it can have at most countably many discontinuity points. Now let us show the following lemma.
Lemma 383 d(X, Y) defined by (B.1) is a metric on D([0,1]).
Proof. First, take λ(t) = t, ∀t ∈ [0,1]. Then 0 ≤ d(X, Y) ≤ sup_{t∈[0,1]} |X_t − Y_t| < ∞, because X − Y ∈ D([0,1]), and any function in D([0,1]) is bounded. Secondly, let us show that d(X, Y) satisfies the three properties of a metric. 1) Notice that λ ∈ Λ ⟺ λ^{−1} ∈ Λ. For each s_0 ∈ [0,1] write t_0 = λ(s_0). Then s_0 − λ(s_0) = λ^{−1}(t_0) − t_0, and X_{s_0} − Y_{λ(s_0)} = X_{λ^{−1}(t_0)} − Y_{t_0}. So sup …

… ⊃ C, and ℱ_1 is a λ-system, so ℱ_1 = λ(C). Set ℱ_2 = {B ∈ λ(C) : ∀A ∈ ℱ_1, B ∩ A ∈ λ(C)}. Again ℱ_2 ⊃ C, and ℱ_2 is a λ-system, so ℱ_2 = λ(C). This means that λ(C) is also a π-system. So it is a σ-field, i.e. λ(C) ⊃ σ(C). However, a σ-field is also a λ-system; therefore λ(C) ⊂ σ(C).

2): Notice that the intersection of all monotone systems containing C is M(C). Similarly as in 1) we find that it is a π-system. Now set ℱ_3 = {A ∈ M(C) : A^c ∈ M(C)}. Since C is a field, C ⊂ ℱ_3. Moreover, ℱ_3 is obviously a monotone system, so ℱ_3 = M(C). This means that M(C) is also a field; since it is a π-system and closed under the complement operation, it is a σ-field, i.e. M(C) ⊃ σ(C). However, a σ-field is also a monotone system; therefore M(C) ⊂ σ(C). ∎

The following theorem is the monotone class theorem for a function family.
Theorem 391 Assume that ℋ is a linear space of real functions defined on Ω, and C is a π-system of subsets of Ω. If
(i) 1 ∈ ℋ;
(ii) f_n ∈ ℋ, 0 ≤ f_n ↑ f, f finite (resp. bounded) ⟹ f ∈ ℋ;
(iii) A ∈ C ⟹ I_A ∈ ℋ;
then ℋ contains all σ(C)-measurable real (resp. bounded) functions defined on Ω.

Proof. Let ℱ = {A ⊂ Ω : I_A ∈ ℋ}. Then by assumption one sees that ℱ is a λ-system, and ℱ ⊃ C. Hence σ(C) = λ(C) ⊂ ℱ. Now for any σ(C)-measurable real (resp. bounded) function ξ let

ξ_n = Σ_{i=0}^{n 2^n} (i/2^n) I_{ξ^{−1}([i/2^n, (i+1)/2^n))}.

Then ξ_n ∈ ℋ, since {ξ ∈ [i/2^n, (i+1)/2^n)} ∈ σ(C) ⊂ ℱ, and ℋ is a linear space. Obviously, 0 ≤ ξ_n ↑ ξ^+ = ξ ∨ 0. Hence ξ^+ ∈ ℋ. Similarly, one argues that ξ^− ∈ ℋ. Therefore ξ = ξ^+ − ξ^− ∈ ℋ. ∎

Now we formulate the following frequently used monotone class theorem:
Theorem 392 If ℋ is a linear space of real finite (resp. bounded) measurable processes satisfying
(i) 0 ≤ f_n ∈ ℋ, f_n ↑ f, f finite (resp. bounded) ⟹ f ∈ ℋ;
(ii) ℋ contains all ℱ_t-left continuous processes;
then ℋ contains all real (resp. bounded) ℱ_t-predictable processes. (For the definition of a predictable process, etc., see Section 2 of Chapter 1 in this book.)
Remark 393 If in condition (ii) we substitute the left continuous processes by the right continuous processes, then in the conclusion the
predictable processes will also be substituted by the optional processes. The proof for this new case is the same.

Now let us prove Theorem 392.

Proof. Notice that f is ℱ_t-predictable means that f ∈ 𝒫, where 𝒫 = σ(C), and

C = {∩_{i=1}^k f_i^{−1}((a_i, b_i)) ⊂ [0, ∞) × Ω : ∀k, ∀a_i < b_i, ∀f_i ℱ_t-left continuous processes}.

Obviously, C is a π-system. By Theorem 391 one only needs to show that A ∈ C ⟹ I_A ∈ ℋ. In fact, there exists a sequence of bounded continuous functions 0 ≤ φ_i^n defined on R such that φ_i^n(x) ↑ I_{(a_i,b_i)}(x), as n ↑ ∞. Hence, as n ↑ ∞,

∏_{i=1}^k φ_i^n(f_i(t, ω)) ↑ ∏_{i=1}^k I_{(a_i,b_i)}(f_i(t, ω)) = I_{∩_{i=1}^k f_i^{−1}((a_i,b_i))}(t, ω).

Since the left-hand side is ℱ_t-left continuous, it belongs to ℋ. Now by assumption (i), I_{∩_{i=1}^k f_i^{−1}((a_i,b_i))} ∈ ℋ. The proof for predictable processes is complete. ∎
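The set-level fact underlying these proofs — that the λ-system generated by a π-system C coincides with σ(C) — can be illustrated by brute force on a small finite Ω (a hypothetical example; `close_lambda` and `close_sigma` are ad hoc helper names, not the book's notation):

```python
OMEGA = frozenset(range(4))

def close_lambda(sets):
    """Smallest lambda-system containing `sets`: contains OMEGA and is
    closed under complements and (finite) disjoint unions."""
    lam = set(sets) | {OMEGA}
    changed = True
    while changed:
        changed = False
        for A in list(lam):
            if (OMEGA - A) not in lam:
                lam.add(OMEGA - A); changed = True
        for A in list(lam):
            for B in list(lam):
                if not (A & B) and (A | B) not in lam:
                    lam.add(A | B); changed = True
    return lam

def close_sigma(sets):
    """Smallest sigma-field containing `sets` (finite OMEGA: close under
    complement, union and intersection)."""
    sig = set(sets) | {OMEGA, frozenset()}
    changed = True
    while changed:
        changed = False
        for A in list(sig):
            for B in list(sig):
                for C in (OMEGA - A, A | B, A & B):
                    if C not in sig:
                        sig.add(C); changed = True
    return sig

# a pi-system: closed under intersections
C = {frozenset({0}), frozenset({0, 1}), frozenset()}
assert close_lambda(C) == close_sigma(C)   # the pi-lambda theorem in miniature
```

Here both closures produce the eight-set σ-field with atoms {0}, {1}, {2,3}; removing the π-system hypothesis (e.g. starting from two non-nested, non-disjoint sets) breaks the equality, which is why the π-system assumption appears in Theorems 390–392.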
C.2 Convergence of Random Variables

Now let us discuss the convergence of random variables. Recall that in real analysis we have several different concepts for the convergence of measurable functions: a.e. convergence (almost everywhere convergence), convergence in measure, and weak convergence. Since random variables are measurable functions defined on a probability space, all such convergence concepts can be introduced into probability theory.

Definition 394 A sequence of real random variables ξ_n defined on a probability space is said to converge to a real random variable ξ almost surely (a.s.), and we will write

ξ_n → ξ, a.s., (C.1)

if P(ω : lim_{n→∞} |ξ_n(ω) − ξ(ω)| = 0) = 1; ξ_n is said to converge to ξ in probability P, and we will write

ξ_n →_P ξ, (C.2)

if for every ε > 0, lim_{n→∞} P(ω : |ξ_n(ω) − ξ(ω)| > ε) = 0; and ξ_n is said to converge weakly to ξ, and we will write

ξ_n ⇒ ξ, (C.3)
Appendix C. Monotone Class Theorems. Convergence of Random Processes
if for arbitrary f ∈ C_b(R), where C_b(R) is the totality of all real bounded continuous functions defined on R, ∫ f(ξ_n(ω)) dP → ∫ f(ξ(ω)) dP, as n → ∞.

Naturally, we have the following relations between these types of convergence.
Proposition 395 Assume that ξ_n, n = 1, 2, …, and ξ are real random variables. Then (C.1) ⟹ (C.2) ⟹ (C.3).

Proof. (C.1) ⟹ (C.2): Let A = ∩_{m=1}^∞ ∪_{N=1}^∞ ∩_{n≥N} {|ξ_n − ξ| < 1/m}. If (C.1) holds, then P(A^c) = 0, where A^c is the complement of A. So, for each m,

lim_{N→∞} P({|ξ_N − ξ| ≥ 1/m}) ≤ lim_{N→∞} P(∪_{n≥N} {|ξ_n − ξ| ≥ 1/m}) = P(∩_{N=1}^∞ ∪_{n≥N} {|ξ_n − ξ| ≥ 1/m}) ≤ P(A^c) = 0.

(C.2) ⟹ (C.3): For any f ∈ C_b(R) which is uniformly continuous, and any δ > 0,

|∫ f(ξ_n(ω)) dP − ∫ f(ξ(ω)) dP| ≤ E|f(ξ_n) − f(ξ)| ≤ E[|f(ξ_n) − f(ξ)| I_{|ξ_n−ξ|≤δ}] + E[|f(ξ_n) − f(ξ)| I_{|ξ_n−ξ|>δ}]
≤ sup_{|x−y|≤δ} |f(x) − f(y)| + 2 sup_x |f(x)| P(|ξ_n − ξ| > δ).

The first term can be made arbitrarily small by taking δ small, and for each fixed δ the second term tends to 0 as n → ∞ by (C.2); the general case f ∈ C_b(R) follows by a standard approximation argument. ∎

Theorem 398 Let {ξ_t^n}_{t≥0}, n = 1, 2, …, be a sequence of d-dimensional RCLL random processes satisfying the following two conditions: for each T ≥ 0, ε > 0,

lim_{N→∞} sup_n sup_{t≤T} P{|ξ_t^n| > N} = 0,
lim_{h↓0} sup_n sup_{t_1,t_2≤T, |t_1−t_2|≤h} P{|ξ_{t_1}^n − ξ_{t_2}^n| > ε} = 0.

Then there exist a subsequence {n_k} of {n}, a probability space (Ω̃, ℱ̃, P̃) (actually, Ω̃ = [0,1], ℱ̃ = ℬ([0,1])), and d-dimensional RCLL random processes {ξ̃_t^k}_{t≥0}, k = 1, 2, …, and {ξ̃_t}_{t≥0} defined on it such that
1) all finite-dimensional distributions of {ξ̃_t^k}_{t≥0} coincide with the finite-dimensional distributions of {ξ_t^{n_k}}_{t≥0}, k = 1, 2, …;
2) ξ̃_t^k → ξ̃_t in probability P̃, as k → +∞, ∀t ≥ 0.
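The implications in Proposition 395 are strict: the standard "typewriter" sequence of indicator functions on Ω = [0,1) with Lebesgue measure converges to 0 in probability, yet at no point ω does ξ_n(ω) converge. A sketch (a standard counterexample, not taken from the book):

```python
def interval(n):
    """Dyadic interval I_n = [j/2^k, (j+1)/2^k), where n = 2^k + j, 0 <= j < 2^k."""
    k = n.bit_length() - 1
    j = n - 2**k
    return (j / 2**k, (j + 1) / 2**k)

def prob_exceeds(n, eps=0.5):
    """P(|xi_n| > eps) = Lebesgue length of I_n, for 0 < eps < 1."""
    a, b = interval(n)
    return b - a

# convergence in probability: P(|xi_n| > eps) = 2^{-k} -> 0 as n -> infinity
assert prob_exceeds(2**10) == 2.0**-10

# no a.s. convergence: each omega is covered exactly once per dyadic
# generation, so xi_n(omega) = 1 for infinitely many n
omega = 0.3
for k in range(1, 12):
    hits = [n for n in range(2**k, 2**(k + 1))
            if interval(n)[0] <= omega < interval(n)[1]]
    assert len(hits) == 1
```

The intervals I_n sweep across [0,1) again and again while shrinking, so the exceptional set where convergence fails is all of Ω even though each individual probability P(|ξ_n| > ε) tends to zero.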
Here we give the following lemma, and quote two more lemmas that will be needed for the discussion in the book. To save space we will not give the proofs of the last two lemmas. (For their proofs see [194].)

Lemma 399 Let {x_t^n, w_t^n, ζ_t^n}_{t≥0}, n = 1, 2, …, be a sequence of d × r × d-dimensional RCLL random processes satisfying the two conditions stated in Theorem 398.
1) If, in addition, {w_t^n}_{t≥0} are BMs, n = 1, 2, …, then the {w̃_t^k}_{t≥0} obtained from Theorem 398 also are BMs on (Ω̃, ℱ̃, P̃), k = 1, 2, …, and so also is the limit {w̃_t}_{t≥0}.
2) If, in addition, {ζ_t^n}_{t≥0}, n = 1, 2, …, are random processes with independent increments, then the {ζ̃_t^k}_{t≥0} obtained from Theorem 398 also are random processes with independent increments on (Ω̃, ℱ̃, P̃), k = 1, 2, …, and so also is the limit {ζ̃_t}_{t≥0}. Write

p^n(dt, dz) = Σ_{s∈dt} I_{Δζ_s^n≠0} I_{Δζ_s^n∈dz},
p̃^k(dt, dz) = Σ_{s∈dt} I_{Δζ̃_s^k≠0} I_{Δζ̃_s^k∈dz},
p̃(dt, dz) = Σ_{s∈dt} I_{Δζ̃_s≠0} I_{Δζ̃_s∈dz}.

If, besides, all p^n(dt, dz) are Poisson random measures with the same compensator dt dz/|z|^{d+1} such that ∀n = 1, 2, …

q^n(dt, dz) = p^n(dt, dz) − dt dz/|z|^{d+1},

where q^n(dt, dz), n = 1, 2, …, are Poisson martingale measures, then p̃^k(dt, dz), k = 1, 2, …, also are Poisson random measures with the same compensator on (Ω̃, ℱ̃, P̃) such that ∀k = 1, 2, …

q̃^k(dt, dz) = p̃^k(dt, dz) − dt dz/|z|^{d+1},

where q̃^k(dt, dz) are Poisson martingale measures on (Ω̃, ℱ̃, P̃); and the same conclusions hold true for the limits p̃(dt, dz) and q̃(dt, dz).
C.3 Convergence of Random Processes and Stochastic Integrals
Proof. 1). Write ℱ_t^n = σ(x_s^n, w_s^n, ζ_s^n; s ≤ t), ℱ̃_t^k = σ(x̃_s^k, w̃_s^k, ζ̃_s^k; s ≤ t), and ℱ̃_t = σ(x̃_s, w̃_s, ζ̃_s; s ≤ t). Since by assumption, for each n, {w_t^n}_{t≥0} is a BM, w_{t_2}^n − w_{t_1}^n, t ≤ t_1 < t_2, is independent of ℱ_t^n, and w_t^n − w_s^n, 0 ≤ s < t, is Gaussian N(0, t − s). However, {w̃_t^k}_{t≥0} has the same finite-