Lecture Notes in Mathematics 1858
Editors: J.-M. Morel, Cachan; F. Takens, Groningen; B. Teissier, Paris
Alexander S. Cherny · Hans-Jürgen Engelbert
Singular Stochastic Differential Equations
Authors

Alexander S. Cherny
Department of Probability Theory
Faculty of Mechanics and Mathematics
Moscow State University
Leninskie Gory
119992 Moscow, Russia
e-mail: [email protected]

Hans-Jürgen Engelbert
Institut für Stochastik
Fakultät für Mathematik und Informatik
Friedrich-Schiller-Universität Jena
Ernst-Abbe-Platz 1-4
07743 Jena, Germany
e-mail: [email protected]

Library of Congress Control Number: 2004115716
Mathematics Subject Classification (2000): 60-02, 60G17, 60H10, 60J25, 60J60

ISSN 0075-8434
ISBN 3-540-24007-1 Springer Berlin Heidelberg New York
DOI: 10.1007/b104187

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
http://www.springeronline.com

© Springer-Verlag Berlin Heidelberg 2005
Printed in Germany

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: Camera-ready TEX output by the authors
Printed on acid-free paper
Preface
We consider one-dimensional homogeneous stochastic differential equations of the form

(∗)  dX_t = b(X_t)dt + σ(X_t)dB_t,  X_0 = x_0,

where b and σ are supposed to be measurable functions and σ ≠ 0. There is a rich theory studying the existence and the uniqueness of solutions of these (and more general) stochastic differential equations. For equations of the form (∗), one of the best sufficient conditions is that the function (1 + |b|)/σ² should be locally integrable on the real line. However, both in theory and in practice one often comes across equations that do not satisfy this condition. The use of such equations is necessary, in particular, if we want a solution to be positive. In this monograph, these equations are called singular stochastic differential equations. A typical example of such an equation is the stochastic differential equation for a geometric Brownian motion.

A point d ∈ R, at which the function (1 + |b|)/σ² is not locally integrable, is called in this monograph a singular point. We explain why these points are indeed "singular". For the isolated singular points, we perform a complete qualitative classification. According to this classification, an isolated singular point can have one of 48 possible types. The type of a point is easily computed through the coefficients b and σ. The classification allows one to find out whether a solution can leave an isolated singular point, whether it can reach this point, whether it can be extended after having reached this point, and so on.

It turns out that the isolated singular points of 44 types do not disturb the uniqueness of a solution, and only the isolated singular points of the remaining 4 types disturb uniqueness. These points are called here the branch points. There exists a large number of "bad" solutions (for instance, non-Markov solutions) in the neighbourhood of a branch point. Discovering the branch points is one of the most interesting consequences of the constructed classification.
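To make the typical example concrete: for a geometric Brownian motion, b(x) = μx and σ(x) = σ̄x with constants μ and σ̄ > 0, so (1 + |b|)/σ² behaves like a constant times 1/x² near zero and fails to be locally integrable there, i.e., zero is a singular point. The following sketch (an illustration only; the parameter values and all names are arbitrary choices, not from the monograph) simulates the closed-form solution and checks that a solution started at x_0 > 0 stays positive:

```python
import math, random

def gbm_path(x0, mu, sigma, t_max, n_steps, rng):
    # Geometric Brownian motion dX_t = mu X_t dt + sigma X_t dB_t,
    # simulated exactly via X_t = x0 exp((mu - sigma^2/2) t + sigma B_t).
    dt = t_max / n_steps
    b_t, path = 0.0, [x0]
    for k in range(1, n_steps + 1):
        b_t += rng.gauss(0.0, math.sqrt(dt))  # increment of the Brownian path
        path.append(x0 * math.exp((mu - 0.5 * sigma ** 2) * k * dt + sigma * b_t))
    return path

rng = random.Random(0)
path = gbm_path(x0=1.0, mu=0.05, sigma=0.3, t_max=1.0, n_steps=1000, rng=rng)
assert all(x > 0.0 for x in path)  # positivity: the solution never reaches 0
```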
The monograph also includes an overview of the basic definitions and facts related to stochastic differential equations (different types of existence and uniqueness, martingale problems, solutions up to a random time, etc.) as well as a number of important examples.

We gratefully acknowledge financial support by the DAAD and by the European Community's Human Potential Programme under contract HPRN-CT-2002-00281.

Moscow, Jena, October 2004
Alexander Cherny, Hans-Jürgen Engelbert
Table of Contents

Introduction . . . 1

1 Stochastic Differential Equations . . . 5
  1.1 General Definitions . . . 5
  1.2 Sufficient Conditions for Existence and Uniqueness . . . 9
  1.3 Ten Important Examples . . . 12
  1.4 Martingale Problems . . . 19
  1.5 Solutions up to a Random Time . . . 23

2 One-Sided Classification of Isolated Singular Points . . . 27
  2.1 Isolated Singular Points: The Definition . . . 27
  2.2 Isolated Singular Points: Examples . . . 32
  2.3 One-Sided Classification: The Results . . . 34
  2.4 One-Sided Classification: Informal Description . . . 38
  2.5 One-Sided Classification: The Proofs . . . 42

3 Two-Sided Classification of Isolated Singular Points . . . 65
  3.1 Two-Sided Classification: The Results . . . 65
  3.2 Two-Sided Classification: Informal Description . . . 66
  3.3 Two-Sided Classification: The Proofs . . . 69
  3.4 The Branch Points: Non-Markov Solutions . . . 73
  3.5 The Branch Points: Strong Markov Solutions . . . 75

4 Classification at Infinity and Global Solutions . . . 81
  4.1 Classification at Infinity: The Results . . . 81
  4.2 Classification at Infinity: Informal Description . . . 82
  4.3 Classification at Infinity: The Proofs . . . 85
  4.4 Global Solutions: The Results . . . 86
  4.5 Global Solutions: The Proofs . . . 88

5 Several Special Cases . . . 93
  5.1 Power Equations: Types of Zero . . . 93
  5.2 Power Equations: Types of Infinity . . . 97
  5.3 Equations with a Constant-Sign Drift: Types of Zero . . . 99
  5.4 Equations with a Constant-Sign Drift: Types of Infinity . . . 102

Appendix A: Some Known Facts . . . 105
  A.1 Local Times . . . 105
  A.2 Random Time-Changes . . . 107
  A.3 Bessel Processes . . . 108
  A.4 Strong Markov Families . . . 110
  A.5 Other Facts . . . 111

Appendix B: Some Auxiliary Lemmas . . . 113
  B.1 Stopping Times . . . 113
  B.2 Measures and Solutions . . . 114
  B.3 Other Lemmas . . . 116

References . . . 119
Index of Notation . . . 123
Index of Terms . . . 127
Theoretical Problems Related to the Internet
(Extended Abstract)

Christos H. Papadimitriou*
University of California, Berkeley, USA
[email protected]
http://www.cs.berkeley.edu/~christos/
The Internet has arguably superseded the computer as the most complex cohesive artifact (if you can call it that), and is, quite predictably, the source of a new generation of foundational problems for Theoretical Computer Science. These new theoretical challenges emanate from several novel aspects of the Internet: (a) Its unprecedented size, diversity, and availability as an information repository; (b) its novel nature as a computer system that intertwines a multitude of economic interests in varying degrees of competition; and (c) its history as a shared resource architecture that emerged in a remarkably ad hoc yet gloriously successful manner. In this talk I will survey some recent research (done in collaboration with Joan Feigenbaum, Dick Karp, Elias Koutsoupias, and Scott Shenker, see [1,2]) on problems of the two latter kinds. See [3,5] for examples of theoretical work along the first axis. To a large extent, the Internet owes its remarkable ascent to the surprising empirical success of TCP’s congestion control protocol [4]. A flow between two points strives to determine the correct transmission rate by trying higher and higher rates, in a linearly increasing function. As links shared by several flows reach their capacity, one or more packets may be lost, and, in response, the corresponding flow halves its transmission rate. The fantastic empirical success of this crude mechanism raises important, and at times theoretically interesting, questions related to novel search algorithms (the process clearly contains a novel variant of binary search), on-line algorithms (when the true capacity that a flow can use changes in an unknown and, in a worst case, adversarial fashion), and algorithmic game theory (why is it that flows seem to be unwilling, or unmotivated, to move away from TCP?). In a situation in which the same piece of information is multicast to millions of individuals, it is of interest to determine the price that each recipient must pay in return. 
The information in question is of different value to each individual, and this value is known only to this individual. The participants must engage in a mechanism, that is to say, a distributed protocol whereby they determine the individuals who will actually receive the information, and the price each will pay. Economists over the past forty years have studied such situations arising in auctions, welfare economics, and pricing of public goods, and have come up with clever mechanisms — solutions that satisfy various sets of desiderata
* Research partially supported by the National Science Foundation.
D.-Z. Du et al. (Eds.): COCOON 2000, LNCS 1858, pp. 1–2, 2000. © Springer-Verlag Berlin Heidelberg 2000
(“axioms”) — or ingenious proofs that satisfying other sets of desiderata is impossible. In the context of the Internet, however, an important computational desideratum emerges: the protocol should be efficient with respect to the total number of messages sent. It turns out that this novel condition changes the situation dramatically, and carves out a new set of pricing mechanisms that are appropriate and feasible.
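Returning to the congestion-control mechanism described earlier: its additive-increase / multiplicative-decrease dynamics can be rendered as a toy simulation (a sketch only, not from the talk; the function and parameter names are our own choices):

```python
def aimd_rates(capacity, increase, n_steps, start=1.0):
    # Toy additive-increase / multiplicative-decrease dynamics: the rate
    # grows linearly until it exceeds the link capacity (a packet loss),
    # and is then halved, echoing TCP's congestion-control behaviour.
    rate, history = start, []
    for _ in range(n_steps):
        rate += increase      # additive increase: linear probing for capacity
        if rate > capacity:   # a loss event once the link is saturated
            rate /= 2.0       # multiplicative decrease
        history.append(rate)
    return history

hist = aimd_rates(capacity=10.0, increase=1.0, n_steps=50)
assert all(r <= 10.0 for r in hist)  # the recorded rate never exceeds capacity
```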
References

1. Joan Feigenbaum, Christos H. Papadimitriou, Scott Shenker, "Sharing the Cost of Multicast Transmissions," Proc. 2000 STOC.
2. Richard M. Karp, Elias Koutsoupias, Christos H. Papadimitriou, Scott Shenker, "Optimization Problems in Congestion Control," manuscript, April 2000.
3. Jon Kleinberg, "Authoritative Sources in a Hyperlinked Environment," Proc. ACM-SIAM Symposium on Discrete Algorithms, 1998.
4. Van Jacobson, "Congestion Avoidance and Control," ACM SIGCOMM Proceedings, pp. 314–329, 1988.
5. Christos H. Papadimitriou, Prabhakar Raghavan, Hisao Tamaki, Santosh Vempala, "Latent Semantic Indexing: A Probabilistic Analysis," Proc. 1998 PODS; to appear in the special issue of JCSS.
1 Stochastic Differential Equations
In this chapter, we consider general multidimensional SDEs of the form (1.1) given below. In Section 1.1, we give the standard definitions of various types of existence and uniqueness of solutions, as well as some general theorems that show the relationships between these properties. Section 1.2 contains some classical sufficient conditions for the various types of existence and uniqueness. In Section 1.3, we present several important examples that illustrate various combinations of existence and uniqueness of solutions. Most of these examples (but not all) are well known. We also determine all the possible combinations of existence and uniqueness. Section 1.4 includes the definition of a martingale problem. We also recall the relationship between martingale problems and SDEs. In Section 1.5, we define a solution up to a random time.
1.1 General Definitions

Here we will consider a general type of SDEs, i.e., multidimensional SDEs with coefficients that depend on the past. These are the equations of the form

  dX^i_t = b^i_t(X)dt + Σ_{j=1}^m σ^{ij}_t(X)dB^j_t,  X_0 = x_0  (i = 1, . . . , n),  (1.1)

where n ∈ N, m ∈ N, x_0 ∈ R^n, and

  b : C(R_+, R^n) × R_+ → R^n,  σ : C(R_+, R^n) × R_+ → R^{n×m}

are predictable functionals. (The definition of a predictable process can be found, for example, in [27, Ch. I, §2a] or [38, Ch. IV, §5].)

Remark. We fix a starting point x_0 together with b and σ. In our terminology, SDEs with the same b and σ and with different starting points are different SDEs.
A.S. Cherny and H.-J. Engelbert: LNM 1858, pp. 5–25, 2005. © Springer-Verlag Berlin Heidelberg 2005
Definition 1.1. (i) A solution of (1.1) is a pair (Z, B) of adapted processes on a filtered probability space (Ω, G, (G_t)_{t≥0}, Q) such that
(a) B is an m-dimensional (G_t)-Brownian motion, i.e., B is an m-dimensional Brownian motion started at zero and is a (G_t, Q)-martingale;
(b) for any t ≥ 0,

  ∫_0^t ( Σ_{i=1}^n |b^i_s(Z)| + Σ_{i=1}^n Σ_{j=1}^m (σ^{ij}_s(Z))² ) ds < ∞  Q-a.s.;

(c) for any t ≥ 0, i = 1, . . . , n,

  Z^i_t = x^i_0 + ∫_0^t b^i_s(Z) ds + Σ_{j=1}^m ∫_0^t σ^{ij}_s(Z) dB^j_s  Q-a.s.

(ii) There is weak existence for (1.1) if there exists a solution of (1.1) on some filtered probability space.

Definition 1.2. (i) A solution (Z, B) is called a strong solution if Z is F̄^B_t-adapted, where F̄^B_t is the σ-field generated by σ(B_s; s ≤ t) and by the subsets of the Q-null sets from σ(B_s; s ≥ 0).
(ii) There is strong existence for (1.1) if there exists a strong solution of (1.1) on some filtered probability space.

Remark. Solutions in the sense of Definition 1.1 are sometimes called weak solutions. Here we call them simply solutions. However, the existence of a solution is denoted by the term weak existence in order to stress the difference between weak existence and strong existence (i.e., the existence of a strong solution).

Definition 1.3. There is uniqueness in law for (1.1) if, for any solutions (Z, B) and (Z̃, B̃) (that may be defined on different filtered probability spaces), one has Law(Z_t; t ≥ 0) = Law(Z̃_t; t ≥ 0).

Definition 1.4. There is pathwise uniqueness for (1.1) if, for any solutions (Z, B) and (Z̃, B) (that are defined on the same filtered probability space), one has Q{∀t ≥ 0, Z_t = Z̃_t} = 1.

Remark. If there exists no solution of (1.1), then there is both uniqueness in law and pathwise uniqueness.

The following four statements clarify the relationships between these properties.

Proposition 1.5. Let (Z, B) be a strong solution of (1.1).
(i) There exists a measurable map
  Ψ : (C(R_+, R^m), B) → (C(R_+, R^n), B)

(here B denotes the Borel σ-field) such that the process Ψ(B) is F̄^B_t-adapted and Z = Ψ(B) Q-a.s.
(ii) If B̃ is an m-dimensional (G̃_t)-Brownian motion on a filtered probability space (Ω̃, G̃, (G̃_t), Q̃) and Z̃ := Ψ(B̃), then (Z̃, B̃) is a strong solution of (1.1).
For the proof, see, for example, [5].

Now we state a well-known result of Yamada and Watanabe.

Proposition 1.6 (Yamada, Watanabe). Suppose that pathwise uniqueness holds for (1.1). Then:
(i) uniqueness in law holds for (1.1);
(ii) there exists a measurable map

  Ψ : (C(R_+, R^m), B) → (C(R_+, R^n), B)

such that the process Ψ(B) is F̄^B_t-adapted and, for any solution (Z, B) of (1.1), we have Z = Ψ(B) Q-a.s.
For the proof, see [48] or [38, Ch. IX, Th. 1.7].

The following result complements the theorem of Yamada and Watanabe.

Proposition 1.7. Suppose that uniqueness in law holds for (1.1) and there exists a strong solution. Then pathwise uniqueness holds for (1.1).

This theorem was proved by Engelbert [10] under some additional assumptions. It was proved with no additional assumptions by Cherny [7].

The crucial fact needed to prove Proposition 1.7 is the following result. It shows that uniqueness in law implies a seemingly stronger property.

Proposition 1.8. Suppose that uniqueness in law holds for (1.1). Then, for any solutions (Z, B) and (Z̃, B̃) (that may be defined on different filtered probability spaces), one has Law(Z_t, B_t; t ≥ 0) = Law(Z̃_t, B̃_t; t ≥ 0).
For the proof, see [7].

The situation with solutions of SDEs can now be described as follows. It may happen that there exists no solution of (1.1) on any filtered probability space (see Examples 1.16, 1.17). It may also happen that on some filtered probability space there exists a solution (or there are even several solutions with the same Brownian motion), while on some other filtered probability space with a Brownian motion there exists no solution (see Examples 1.18, 1.19, 1.20, and 1.24).
[Figure 1.1 shows four implication diagrams between the properties "weak existence", "strong existence", "uniqueness in law", and "pathwise uniqueness", the last one marking the "best possible situation".]

Fig. 1.1. The relationship between various types of existence and uniqueness. The top diagrams show obvious implications and the implications given by the Yamada–Watanabe theorem. The centre diagram shows an obvious implication and the implication given by Proposition 1.7. The bottom diagram illustrates the Yamada–Watanabe theorem and Proposition 1.7 in terms of the "best possible situation".
If there exists a strong solution of (1.1) on some filtered probability space, then there exists a strong solution on any other filtered probability space with a Brownian motion (see Proposition 1.5). However, it may happen in this case that there are several solutions with the same Brownian motion (see Examples 1.21–1.23). If pathwise uniqueness holds for (1.1) and there exists a solution on some filtered probability space, then on any other filtered probability space with a Brownian motion there exists exactly one solution, and this solution is strong (see the Yamada–Watanabe theorem). This is the best possible situation. Thus, the Yamada–Watanabe theorem shows that pathwise uniqueness together with weak existence guarantees that the situation is the best possible. Proposition 1.7 shows that uniqueness in law together with strong existence guarantees that the situation is the best possible.
1.2 Sufficient Conditions for Existence and Uniqueness

The statements given in this section are related to SDEs for which b_t(X) = b(t, X_t) and σ_t(X) = σ(t, X_t), where b : R_+ × R^n → R^n and σ : R_+ × R^n → R^{n×m} are measurable functions.

We begin with sufficient conditions for strong existence and pathwise uniqueness. The first result of this type was obtained by Itô.

Proposition 1.9 (Itô). Suppose that, for a SDE

  dX^i_t = b^i(t, X_t)dt + Σ_{j=1}^m σ^{ij}(t, X_t)dB^j_t,  X_0 = x_0  (i = 1, . . . , n),

there exists a constant C > 0 such that

  ‖b(t, x) − b(t, y)‖ + ‖σ(t, x) − σ(t, y)‖ ≤ C‖x − y‖,  t ≥ 0, x, y ∈ R^n,
  ‖b(t, x)‖ + ‖σ(t, x)‖ ≤ C(1 + ‖x‖),  t ≥ 0, x ∈ R^n,

where

  ‖b(t, x)‖ := ( Σ_{i=1}^n (b^i(t, x))² )^{1/2},  ‖σ(t, x)‖ := ( Σ_{i=1}^n Σ_{j=1}^m (σ^{ij}(t, x))² )^{1/2}.

Then strong existence and pathwise uniqueness hold.
For the proof, see [25], [29, Ch. 5, Th. 2.9], or [36, Th. 5.2.1].
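For orientation, under the Lipschitz and linear-growth conditions of Proposition 1.9 the classical Euler–Maruyama scheme is known to converge strongly to the unique strong solution. A minimal sketch (our own illustration, not from the text; the Ornstein–Uhlenbeck coefficients b(t, x) = −x, σ ≡ 1 are a sample choice satisfying both conditions with C = 1):

```python
import math, random

def euler_maruyama(b, sigma, x0, t_max, n_steps, rng):
    # Euler-Maruyama scheme for dX_t = b(t, X_t) dt + sigma(t, X_t) dB_t:
    # each step adds the drift over dt and a Gaussian diffusion increment.
    dt = t_max / n_steps
    x, t, path = x0, 0.0, [x0]
    for _ in range(n_steps):
        dB = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment over [t, t + dt]
        x = x + b(t, x) * dt + sigma(t, x) * dB
        t += dt
        path.append(x)
    return path

# Ornstein-Uhlenbeck coefficients: Lipschitz and linear growth hold with C = 1.
rng = random.Random(1)
path = euler_maruyama(lambda t, x: -x, lambda t, x: 1.0,
                      x0=2.0, t_max=1.0, n_steps=1000, rng=rng)
```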
Proposition 1.10 (Zvonkin). Suppose that, for a one-dimensional SDE

  dX_t = b(t, X_t)dt + σ(t, X_t)dB_t,  X_0 = x_0,

the coefficient b is measurable and bounded, the coefficient σ is continuous and bounded, and there exist constants C > 0, ε > 0 such that

  |σ(t, x) − σ(t, y)| ≤ C√|x − y|,  t ≥ 0, x, y ∈ R,
  |σ(t, x)| ≥ ε,  t ≥ 0, x ∈ R.

Then strong existence and pathwise uniqueness hold.
For the proof, see [49].

For homogeneous SDEs, there exists a stronger result.

Proposition 1.11 (Engelbert, Schmidt). Suppose that, for a one-dimensional SDE

  dX_t = b(X_t)dt + σ(X_t)dB_t,  X_0 = x_0,

σ ≠ 0 at each point, b/σ² ∈ L¹_loc(R), and there exists a constant C > 0 such that

  |σ(x) − σ(y)| ≤ C√|x − y|,  x, y ∈ R,
  |b(x)| + |σ(x)| ≤ C(1 + |x|),  x ∈ R.

Then strong existence and pathwise uniqueness hold.
For the proof, see [15, Th. 5.53].

The following proposition guarantees only pathwise uniqueness. Its main difference from Proposition 1.10 is that the diffusion coefficient here need not be bounded away from zero.

Proposition 1.12 (Yamada, Watanabe). Suppose that, for a one-dimensional SDE

  dX_t = b(t, X_t)dt + σ(t, X_t)dB_t,  X_0 = x_0,

there exist a constant C > 0 and a strictly increasing function h : R_+ → R_+ with ∫_{0+} h^{−2}(x)dx = +∞ such that

  |b(t, x) − b(t, y)| ≤ C|x − y|,  t ≥ 0, x, y ∈ R,
  |σ(t, x) − σ(t, y)| ≤ h(|x − y|),  t ≥ 0, x, y ∈ R.

Then pathwise uniqueness holds.
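For instance, the integral condition on h admits h(x) = √x, i.e., Hölder continuity of order 1/2 of σ, since ∫_{0+} h^{−2}(x)dx = ∫_{0+} dx/x diverges. A small numerical check of this divergence (our own illustration, not from the text):

```python
# h(x) = sqrt(x) gives h(x)^{-2} = 1/x, whose integral over [eps, 1]
# equals log(1/eps) and therefore has no finite limit as eps -> 0+.
def integral_h_minus_2(eps, n=100000):
    # crude left-endpoint Riemann sum of 1/x on [eps, 1]
    step = (1.0 - eps) / n
    return sum(step / (eps + k * step) for k in range(n))

vals = [integral_h_minus_2(10.0 ** -k) for k in (1, 2, 3, 4)]
# the values grow without bound (roughly like log(1/eps))
assert all(vals[i] < vals[i + 1] for i in range(len(vals) - 1))
```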
For the proof, see [29, Ch. 5, Prop. 2.13], [38, Ch. IX, Th. 3.5], or [39, Ch. V, Th. 40.1].

We now turn to results related to weak existence and uniqueness in law. The first of these results guarantees only weak existence; it is almost, but not completely, covered by the further results. Namely, here the diffusion matrix σ need not be elliptic (it might even be not a square matrix).

Proposition 1.13 (Skorokhod). Suppose that, for a SDE

  dX^i_t = b^i(t, X_t)dt + Σ_{j=1}^m σ^{ij}(t, X_t)dB^j_t,  X^i_0 = x^i_0  (i = 1, . . . , n),

the coefficients b and σ are continuous and bounded. Then weak existence holds.
For the proof, see [42] or [39, Ch. V, Th. 23.5].

Remark. The conditions of Proposition 1.13 guarantee neither strong existence (see Example 1.19) nor uniqueness in law (see Example 1.22).

In the next result, the conditions on b and σ are essentially relaxed as compared with the previous proposition.

Proposition 1.14 (Stroock, Varadhan). Suppose that, for a SDE

  dX^i_t = b^i(t, X_t)dt + Σ_{j=1}^n σ^{ij}(t, X_t)dB^j_t,  X_0 = x_0  (i = 1, . . . , n),

the coefficient b is measurable and bounded, the coefficient σ is continuous and bounded, and, for any t ≥ 0, x ∈ R^n, there exists a constant ε(t, x) > 0 such that

  ‖σ(t, x)λ‖ ≥ ε(t, x)‖λ‖,  λ ∈ R^n.

Then weak existence and uniqueness in law hold.
For the proof, see [44, Th. 4.2, 5.6].

In the next result, the diffusion coefficient σ need not be continuous. However, the statement deals with homogeneous SDEs only.

Proposition 1.15 (Krylov). Suppose that, for a SDE

  dX^i_t = b^i(X_t)dt + Σ_{j=1}^n σ^{ij}(X_t)dB^j_t,  X_0 = x_0  (i = 1, . . . , n),

the coefficient b is measurable and bounded, the coefficient σ is measurable and bounded, and there exists a constant ε > 0 such that

  ‖σ(x)λ‖ ≥ ε‖λ‖,  x ∈ R^n, λ ∈ R^n.

Then weak existence holds. If moreover n ≤ 2, then uniqueness in law holds.
For the proof, see [32].

Remark. In the case n > 2, the conditions of Proposition 1.15 do not guarantee uniqueness in law (see Example 1.24).
1.3 Ten Important Examples

In the examples given below, we will use characteristic diagrams to illustrate the statement of each example. The first entry of the diagram corresponds to weak existence; the second to strong existence; the third to uniqueness in law; the fourth to pathwise uniqueness. Thus, the statement "for the SDE . . . , we have [+ − + −]" should be read as follows: "for the SDE . . . , there exists a solution, there exists no strong solution, uniqueness in law holds, and pathwise uniqueness does not hold".

We begin with examples of SDEs with no solution.

Example 1.16 (no solution). For the SDE

  dX_t = − sgn X_t dt,  X_0 = 0,  (1.2)

where

  sgn x = 1 if x > 0,  −1 if x ≤ 0,  (1.3)

we have [− − + +].

Proof. Suppose that there exists a solution (Z, B). Then almost all paths of Z satisfy the integral equation

  f(t) = −∫_0^t sgn f(s) ds,  t ≥ 0.  (1.4)

Let f be a solution of this equation. Assume that there exist a > 0, t > 0 such that f(t) = a. Set v = inf{t ≥ 0 : f(t) = a}, u = sup{t ≤ v : f(t) = 0}. Using (1.4), we get a = f(v) − f(u) = −(v − u). The obtained contradiction shows that f ≤ 0. In a similar way we prove that f ≥ 0. Thus, f ≡ 0, but then it is not a solution of (1.4). As a result, (1.4), and hence (1.2), has no solution. □

The next example is a SDE with the same characteristic diagram and with σ ≡ 1.

Example 1.17 (no solution). For the SDE

  dX_t = −(1/(2X_t)) I(X_t ≠ 0) dt + dB_t,  X_0 = 0,  (1.5)

we have [− − + +].
Proof. Suppose that (Z, B) is a solution of (1.5). Then

  Z_t = −∫_0^t (1/(2Z_s)) I(Z_s ≠ 0) ds + B_t,  t ≥ 0.

By Itô's formula,

  Z_t² = −∫_0^t 2Z_s (1/(2Z_s)) I(Z_s ≠ 0) ds + ∫_0^t 2Z_s dB_s + t
       = ∫_0^t I(Z_s = 0) ds + ∫_0^t 2Z_s dB_s,  t ≥ 0.

The process Z is a continuous semimartingale with ⟨Z⟩_t = t. Hence, by the occupation times formula,

  ∫_0^t I(Z_s = 0) ds = ∫_R I(x = 0) L^x_t(Z) dx = 0,  t ≥ 0,

where L^x_t(Z) denotes the local time of the process Z (see Definition A.2). As a result, Z² is a positive local martingale and consequently a supermartingale. Since Z² ≥ 0 and Z_0² = 0, we conclude that Z² = 0 a.s. But then (Z, B) is not a solution of (1.5). □

Now we turn to the examples of SDEs that possess a solution, but no strong solution.

Example 1.18 (no strong solution; Tanaka). For the SDE

  dX_t = sgn X_t dB_t,  X_0 = 0  (1.6)

(for the precise definition of sgn, see (1.3)), we have [+ − + −].

Proof. Let W be a Brownian motion on (Ω, G, Q). We set

  Z_t = W_t,  B_t = ∫_0^t sgn W_s dW_s,  t ≥ 0

and take G_t = F^W_t. Obviously, (Z, B) is a solution of (1.6) on (Ω, G, (G_t), Q).

If (Z, B) is a solution of (1.6) on a filtered probability space (Ω, G, (G_t), Q), then Z is a continuous (G_t, Q)-local martingale with ⟨Z⟩_t = t. It follows from P. Lévy's characterization theorem that Z is a Brownian motion. This implies uniqueness in law.

If (Z, B) is a solution of (1.6), then

  B_t = ∫_0^t sgn Z_s dZ_s,  t ≥ 0.

This implies that F^B_t = F^{|Z|}_t (see [38, Ch. VI, Cor. 2.2]). Hence, there exists no strong solution.

If (Z, B) is a solution of (1.6), then (−Z, B) is also a solution. Thus, there is no pathwise uniqueness. □
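The sign-flip non-uniqueness in Tanaka's example can be checked on a discrete path (our own sketch; we start the path away from zero so that the convention sgn 0 = −1 of (1.3) plays no role): with dB = sgn(Z) dZ, the very same increments dB are recovered from −Z, so Z and −Z are driven by one and the same B:

```python
import math, random

def sgn(x):
    # the convention of (1.3): sgn 0 = -1
    return 1.0 if x > 0 else -1.0

rng = random.Random(2)
dt = 1e-3
z = 0.5                                      # start away from 0 (arbitrary choice)
dB_from_z, dB_from_minus_z = [], []
for _ in range(1000):
    dz = rng.gauss(0.0, math.sqrt(dt))       # increment of the candidate solution Z
    dB_from_z.append(sgn(z) * dz)            # dB = sgn(Z) dZ
    dB_from_minus_z.append(sgn(-z) * (-dz))  # the same dB recovered from -Z
    z += dz

# both candidate solutions produce identical driving increments
assert dB_from_z == dB_from_minus_z
```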
The next example is a SDE with the same characteristic diagram, b = 0, and a continuous σ.

Example 1.19 (no strong solution; Barlow). There exists a continuous bounded function σ : R → (0, ∞) such that, for the SDE

  dX_t = σ(X_t)dB_t,  X_0 = x_0,

we have [+ − + −].
For the proof, see [2].

The next example is a SDE with the same characteristic diagram and with σ ≡ 1. The drift coefficient in this example depends on the past.

Example 1.20 (no strong solution; Tsirelson). There exists a bounded predictable functional b : C(R_+) × R_+ → R such that, for the SDE

  dX_t = b_t(X)dt + dB_t,  X_0 = x_0,

we have [+ − + −].
For the proof, see [46], [23, Ch. IV, Ex. 4.1], or [38, Ch. IX, Prop. 3.6].

Remark. Let B be a Brownian motion on (Ω, G, Q). Set G_t = F̄^B_t. Then the SDEs of Examples 1.18–1.20 have no solution on (Ω, G, (G_t), Q) with the Brownian motion B. Indeed, if (Z, B) is a solution, then Z is (G_t)-adapted, which means that (Z, B) is a strong solution.

We now turn to examples of SDEs for which there is no uniqueness in law.

Example 1.21 (no uniqueness in law). For the SDE

  dX_t = I(X_t ≠ 0)dB_t,  X_0 = 0,  (1.7)

we have [+ + − −].

Proof. It is sufficient to note that (B, B) and (0, B) are solutions of (1.7) on (Ω, G, (G_t), Q) whenever B is a (G_t)-Brownian motion. □

Remark. Let B be a Brownian motion on (Ω, G, Q) and η be a random variable that is independent of B with P{η = 1} = P{η = −1} = 1/2. Consider

  Z_t(ω) = B_t(ω) if η(ω) = 1,  0 if η(ω) = −1

and take G_t = F^Z_t. Then (Z, B) is a solution of (1.7) on (Ω, G, (G_t), Q) that is not strong. Indeed, for each t > 0, η is not F̄^B_t-measurable. Since the sets {η = −1} and {Z_t = 0} are indistinguishable, Z_t is not F̄^B_t-measurable.
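The two solutions in the proof of Example 1.21 can be mimicked on a discrete path (our own sketch, not from the text): the candidate Z = 0 makes every increment I(Z ≠ 0)ΔB vanish, while Z = B picks up the full increment at every step where B ≠ 0, i.e., at all steps but the very first one (where B_0 = 0):

```python
import math, random

rng = random.Random(3)
dt = 1e-3
b = 0.0                      # the simulated Brownian path, started at zero
zero_sum = 0.0               # integral sum for the candidate solution Z = 0
b_sum = 0.0                  # integral sum for the candidate solution Z = B
first_db = None
for _ in range(1000):
    db = rng.gauss(0.0, math.sqrt(dt))
    if first_db is None:
        first_db = db
    zero_sum += (1.0 if 0.0 != 0.0 else 0.0) * db   # indicator is 0: Z = 0 stays at 0
    b_sum += (1.0 if b != 0.0 else 0.0) * db        # indicator is 1 except where b = 0
    b += db

assert zero_sum == 0.0
# the only increment lost is the first one, where b was exactly 0
assert abs((b_sum + first_db) - b) < 1e-9
```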
The next example is a SDE with the same characteristic diagram, b = 0, and a continuous σ.

Example 1.22 (no uniqueness in law; Girsanov). Let 0 < α < 1/2. Then, for the SDE

  dX_t = |X_t|^α dB_t,  X_0 = 0,  (1.8)

we have [+ + − −].

Proof. Let W be a Brownian motion started at zero on (Ω, G, Q) and

  A_t = ∫_0^t |W_s|^{−2α} ds,  t ≥ 0,
  τ_t = inf{s ≥ 0 : A_s > t},  Z_t = W_{τ_t},  t ≥ 0.

The occupation times formula and Proposition A.6 (ii) ensure that A is a.s. continuous and finite. It follows from Proposition A.9 that A_t → ∞ a.s. as t → ∞. Hence, τ is a.s. finite, continuous, and strictly increasing. By Proposition A.16, Z is a continuous (F^W_{τ_t})-local martingale with ⟨Z⟩_t = τ_t. Using Proposition A.18, we can write

  τ_t = ∫_0^{τ_t} ds = ∫_0^{τ_t} |W_s|^{2α} dA_s = ∫_0^{A_{τ_t}} |W_{τ_s}|^{2α} ds = ∫_0^t |Z_s|^{2α} ds,  t ≥ 0.

(We have A_{τ_t} = t due to the continuity of A and the property A_t → ∞ a.s.) Hence, the process

  B_t = ∫_0^t |Z_s|^{−α} dZ_s,  t ≥ 0

is a continuous (F^W_{τ_t})-local martingale with ⟨B⟩_t = t. According to P. Lévy's characterization theorem, B is a (F^W_{τ_t})-Brownian motion. Thus, (Z, B) is a solution of (1.8). Now, all the desired statements follow from the fact that (0, B) is another solution of (1.8). □

The next example is a SDE with the same characteristic diagram and with σ ≡ 1.

Example 1.23 (no uniqueness in law; SDE for a Bessel process). For the SDE

  dX_t = ((δ − 1)/(2X_t)) I(X_t ≠ 0) dt + dB_t,  X_0 = 0  (1.9)

with δ > 1, we have [+ + − −].
Proof. It follows from Proposition A.21 that there exists a solution (Z, B) of (1.9) such that Z is positive. By Itô's formula,

  Z_t² = ∫_0^t (δ − 1) I(Z_s ≠ 0) ds + 2∫_0^t Z_s dB_s + t
       = δt − ∫_0^t (δ − 1) I(Z_s = 0) ds + 2∫_0^t √(Z_s²) dB_s,  t ≥ 0.

By the occupation times formula,

  ∫_0^t I(Z_s = 0) ds = ∫_0^t I(Z_s = 0) d⟨Z⟩_s = ∫_R I(x = 0) L^x_t(Z) dx = 0,  t ≥ 0.

Hence, the pair (Z², B) is a solution of the SDE

  dX_t = δ dt + 2√|X_t| dB_t,  X_0 = 0.

Propositions 1.6 and 1.12 combined together show that Z² is F̄^B_t-adapted. As Z is positive, Z is also F̄^B_t-adapted, which means that (Z, B) is a strong solution. By Proposition 1.5 (i), there exists a measurable map Ψ : C(R_+) → C(R_+) such that the process Ψ(B) is F̄^B_t-adapted and Z = Ψ(B) a.s. For any t ≥ 0, we have

  Ψ_t(B) = ∫_0^t ((δ − 1)/(2Ψ_s(B))) I(Ψ_s(B) ≠ 0) ds + B_t  a.s.

The process B̃ = −B is a Brownian motion. Hence, for any t ≥ 0,

  −Ψ_t(−B̃) = ∫_0^t ((δ − 1)/(−2Ψ_s(−B̃))) I(−Ψ_s(−B̃) ≠ 0) ds + B̃_t  a.s.

Consequently, the pair (Z̃, B̃), where Z̃ = −Ψ(−B̃), is a (strong) solution of (1.9). Obviously, Z is positive, while Z̃ is negative. Hence, Z and Z̃ have a.s. different paths and different laws. This implies that there is no uniqueness in law and no pathwise uniqueness for (1.9). □

Remark. More information on SDE (1.9) can be found in [5]. In particular, it is proved in [5] that this equation possesses solutions that are not strong. Moreover, it is shown that, for the SDE

  dX_t = ((δ − 1)/(2X_t)) I(X_t ≠ 0) dt + dB_t,  X_0 = x_0  (1.10)

(here the starting point x_0 is arbitrary) with 1 < δ < 2, we have [+ + − −]; for SDE (1.10) with δ ≥ 2, x_0 = 0, we have [+ + + +]. The SDE for a Bessel process is also considered in Sections 2.2 and 3.4.
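As a toy illustration of the squared equation appearing in the proof, here is an Euler sketch of dX_t = δ dt + 2√|X_t| dB_t (our own illustration, not from the text; the absolute value guards against the discrete scheme stepping slightly below zero, which the continuous solution never does):

```python
import math, random

def squared_bessel_path(delta, t_max, n_steps, rng):
    # Euler scheme for dX_t = delta dt + 2 sqrt(|X_t|) dB_t, X_0 = 0,
    # the equation satisfied by Z^2 in the proof of Example 1.23.
    dt = t_max / n_steps
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        dB = rng.gauss(0.0, math.sqrt(dt))
        x = x + delta * dt + 2.0 * math.sqrt(abs(x)) * dB
        path.append(x)
    return path

rng = random.Random(4)
path = squared_bessel_path(delta=2.0, t_max=1.0, n_steps=1000, rng=rng)
# the first step is deterministic: x moves from 0 to delta * dt = 0.002
assert abs(path[1] - 0.002) < 1e-12
```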
The following rather surprising example has multidimensional nature. Example 1.24 (no uniqueness in law; Nadirashvili). Let n ≥ 3. There exists a function σ : Rn → Rn×n such that ε λ ≤ σ(x)λ ≤ C λ ,
x ∈ Rn , λ ∈ R n
with some constants C > 0, ε > 0 and, for the SDE dXti =
n
σ ij (Xt )dBtj ,
X0 = x0
(i = 1, . . . , n),
j=1
we have +
−− .
For the proof, see [35] or [40].

We finally present one more example. Its characteristic diagram is different from all the diagrams that appeared so far.

Example 1.25 (no strong solution and no uniqueness). For the SDE

dXt = σ(t, Xt) dBt, X0 = 0 (1.11)

with

σ(t, x) = sgn x if t ≤ 1, σ(t, x) = I(x ≠ 1) sgn x if t > 1

(for the precise definition of sgn, see (1.3)), we have + − − −.

Proof. If W is a Brownian motion, then the pair

Zt = Wt, Bt = ∫_0^t sgn Ws dWs, t ≥ 0 (1.12)

is a solution of (1.11). Let (Z, B) be the solution given by (1.12). Set

τ = inf{t ≥ 1 : Zt = 1}, Z̃t = Zt∧τ.

Then (Z̃, B) is another solution. Thus, there is no uniqueness in law and no pathwise uniqueness.

If (Z, B) is a solution of (1.11), then

Zt = ∫_0^t sgn Zs dBs, t ≤ 1.

The arguments used in the proof of Example 1.18 show that (Z, B) is not a strong solution.
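The Tanaka-type construction (1.12) lends itself to a quick numerical illustration (ours; the convention sgn 0 = −1 below is an assumption standing in for the precise definition in (1.3), which is not reproduced here). Discretising W and Bt = ∫_0^t sgn Ws dWs, the quadratic variation of B is approximately t, consistent with B being a Brownian motion, while dZ = sgn(Z) dB holds identically with Z = W because (sgn Ws)² = 1.

```python
import math
import random

random.seed(1)
T, n = 2.0, 20_000
dt = T / n

# sgn with sgn 0 = -1 -- an assumed convention standing in for (1.3)
sgn = lambda x: 1.0 if x > 0 else -1.0

w, b, qv = 0.0, 0.0, 0.0
for _ in range(n):
    dW = random.gauss(0.0, math.sqrt(dt))
    b += sgn(w) * dW           # B_t = int_0^t sgn(W_s) dW_s
    qv += (sgn(w) * dW) ** 2   # quadratic variation of B
    w += dW

# qv is close to T, consistent with B being a Brownian motion; with Z = W,
# the discrete identity sgn(Z_s) dB_s = sgn(W_s)**2 dW_s = dW_s shows that
# dZ = sgn(Z) dB holds pathwise.
print(round(qv, 2))  # roughly T = 2.0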
Table 1.1. Possible and impossible combinations of existence and uniqueness. As an example, the combination “+ − + −” in line 11 corresponds to a SDE, for which there exists a solution, there exists no strong solution, there is uniqueness in law, and there is no pathwise uniqueness. The table shows that such a SDE is provided by each of Examples 1.18–1.20.

Weak      Strong     Uniqueness   Pathwise
existence existence  in law       uniqueness    Possible/Impossible
−         −          −            −             impossible, obviously
−         −          −            +             impossible, obviously
−         −          +            −             impossible, obviously
−         −          +            +             possible, Examples 1.16, 1.17
−         +          −            −             impossible, obviously
−         +          −            +             impossible, obviously
−         +          +            −             impossible, obviously
−         +          +            +             impossible, obviously
+         −          −            −             possible, Example 1.25
+         −          −            +             impossible, Proposition 1.6
+         −          +            −             possible, Examples 1.18–1.20
+         −          +            +             impossible, Proposition 1.6
+         +          −            −             possible, Examples 1.21–1.23
+         +          −            +             impossible, Proposition 1.6
+         +          +            −             impossible, Proposition 1.7
+         +          +            +             possible, obviously
Remark. The SDE

dXt = I(Xt ≠ 1) sgn Xt dBt, X0 = 0

is a homogeneous SDE with the same characteristic diagram as in Example 1.25. However, it is more difficult to prove that this equation has no strong solution.

Let us mention one of the applications of the results given above. For SDE (1.1), each of the following properties: weak existence, strong existence, uniqueness in law, pathwise uniqueness
may hold or may not hold. Thus, there are 16 (= 2⁴) feasible combinations. Some of these combinations are impossible (for instance, if there is pathwise uniqueness, then there must be uniqueness in law). For each of these combinations, Propositions 1.6, 1.7 and Examples 1.16–1.25 allow one either to provide an example of a corresponding SDE or to prove that this combination is impossible. It turns out that there are only 5 possible combinations (see Table 1.1).
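The bookkeeping behind this count can be reproduced mechanically. The sketch below is our gloss: it encodes Proposition 1.6 as "weak existence plus pathwise uniqueness imply strong existence and uniqueness in law" and Proposition 1.7 as "weak and strong existence plus uniqueness in law imply pathwise uniqueness", and then enumerates the 16 combinations.

```python
from itertools import product

def feasible(weak, strong, law, pathwise):
    if strong and not weak:
        return False            # a strong solution is in particular a solution
    if not weak:
        return law and pathwise # no solution: both uniqueness properties hold vacuously
    if pathwise and not (strong and law):
        return False            # Proposition 1.6 (our reading)
    if strong and law and not pathwise:
        return False            # Proposition 1.7 (our reading)
    return True

rows = [c for c in product([False, True], repeat=4) if feasible(*c)]
print(len(rows))  # 5, matching Table 1.1
```

The five surviving rows are exactly the rows marked "possible" in Table 1.1.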
1.4 Martingale Problems

Let n ∈ N, x0 ∈ Rⁿ and

b : C(R+, Rⁿ) × R+ → Rⁿ, a : C(R+, Rⁿ) × R+ → Rⁿˣⁿ

be predictable functionals. Suppose moreover that, for any t ≥ 0 and ω ∈ C(R+, Rⁿ), the matrix at(ω) is positive semidefinite.

Throughout this section, X = (Xt; t ≥ 0) will denote the coordinate process on C(R+, Rⁿ), i.e., the process defined by

Xt : C(R+, Rⁿ) ∋ ω ↦ ω(t) ∈ Rⁿ.

By (Ft) we will denote the canonical filtration on C(R+, Rⁿ), i.e., Ft = σ(Xs; s ≤ t), and F will stand for the σ-field ⋁_{t≥0} Ft = σ(Xs; s ≥ 0). Note that F coincides with the Borel σ-field B(C(R+, Rⁿ)).

Definition 1.26. A solution of the martingale problem (x0, b, a) is a measure P on B(C(R+, Rⁿ)) such that

(a) P{X0 = x0} = 1;

(b) for any t ≥ 0,

∫_0^t ( Σ_{i=1}^n |b^i_s(X)| + Σ_{i=1}^n a^{ii}_s(X) ) ds < ∞ P-a.s.;

(c) for any i = 1, …, n, the process

M^i_t = X^i_t − ∫_0^t b^i_s(X) ds, t ≥ 0, (1.13)

is a (Ft, P)-local martingale;

(d) for any i, j = 1, …, n, the process

M^i_t M^j_t − ∫_0^t a^{ij}_s(X) ds, t ≥ 0,

is a (Ft, P)-local martingale.
Let us now consider SDE (1.1) and set

at(ω) = σt(ω)σt*(ω), t ≥ 0, ω ∈ C(R+, Rⁿ),

where σ* denotes the transpose of the matrix σ. Then the martingale problem (x0, b, a) is called a martingale problem corresponding to SDE (1.1). The relationship between (1.1) and this martingale problem becomes clear from the following statement.

Theorem 1.27. (i) Let (Z, B) be a solution of (1.1). Then the measure P = Law(Zt; t ≥ 0) is a solution of the martingale problem (x0, b, a).

(ii) Let P be a solution of the martingale problem (x0, b, a). Then there exist a filtered probability space (Ω, G, (Gt), Q) and a pair of processes (Z, B) on this space such that (Z, B) is a solution of (1.1) and Law(Zt; t ≥ 0) = P.

Proof. (i) Conditions (a), (b) of Definition 1.26 are obviously satisfied. Let us check condition (c). Set

Nt = Zt − ∫_0^t bs(Z) ds, t ≥ 0.

(We use here the vector form of notation.) For m ∈ N, we consider the stopping time Sm(N) = inf{t ≥ 0 : ‖Nt‖ ≥ m}. Since N is a (Gt, Q)-local martingale, the stopped process N^{Sm(N)} is a (Gt, Q)-martingale. Hence, for any 0 ≤ s < t and C ∈ Fs, we have

E_Q[ (N^{Sm(N)}_t − N^{Sm(N)}_s) I(Z ∈ C) ] = 0.

Therefore,

E_P[ (M^{Sm(M)}_t − M^{Sm(M)}_s) I(X ∈ C) ] = 0,

where M is given by (1.13) and Sm(M) = inf{t ≥ 0 : ‖Mt‖ ≥ m}. This proves that M ∈ M^c_loc(Ft, P). Condition (d) of Definition 1.26 is verified in a similar way.

(ii) (Cf. [39, Ch. V, Th. 20.1].) Let Ω¹ = C(R+, Rⁿ), G¹_t = Ft, G¹ = F, Q¹ = P. Choose a filtered probability space (Ω², G², (G²_t), Q²) with an m-dimensional (G²_t)-Brownian motion W and set

Ω = Ω¹ × Ω², G = G¹ × G², Gt = G¹_t × G²_t, Q = Q¹ × Q².

We extend the processes b, σ, a from Ω¹ to Ω and the process W from Ω² to Ω in the obvious way. For any t ≥ 0, ω ∈ Ω, the matrix σt(ω) corresponds to a linear operator Rᵐ → Rⁿ. Let φt(ω) be the m × m-matrix of the operator of orthogonal projection onto (ker σt(ω))^⊥, where ker σt(ω) denotes the kernel of σt(ω); let ψt(ω) be the m × m-matrix of the operator of orthogonal projection onto ker σt(ω). Then φ = (φt; t ≥ 0) and ψ = (ψt; t ≥ 0) are predictable R^{m×m}-valued processes. For any t ≥ 0, ω ∈ Ω, the restriction of the operator σt(ω)
to (ker σt(ω))^⊥ is a bijection from (ker σt(ω))^⊥ ⊆ Rᵐ onto Im σt(ω) ⊆ Rⁿ, where Im σt(ω) denotes the image of σt(ω). Let us define the operator χt(ω) : Rⁿ → Rᵐ as follows: χt(ω) maps Im σt(ω) onto (ker σt(ω))^⊥ as the inverse of σt(ω); χt(ω) vanishes on (Im σt(ω))^⊥. Obviously, χ = (χt; t ≥ 0) is a predictable R^{m×n}-valued process. We have χt(ω)σt(ω) = φt(ω). Define the process Z as Zt(ω¹, ω²) = ω¹(t) and the process M as

Mt = Zt − ∫_0^t bs ds, t ≥ 0.

Let us set

Bt = ∫_0^t χs dMs + ∫_0^t ψs dWs, t ≥ 0.

(We use here the vector form of notation.) For any i, j = 1, …, m, we have

⟨B^i, B^j⟩_t = ∫_0^t Σ_{k,l=1}^n χ^{ik}_s a^{kl}_s χ^{jl}_s ds + ∫_0^t Σ_{k=1}^m ψ^{ik}_s ψ^{jk}_s ds
= ∫_0^t (χs σs σs* χs*)^{ij} ds + ∫_0^t (ψs ψs*)^{ij} ds
= ∫_0^t (φs φs*)^{ij} ds + ∫_0^t (ψs ψs*)^{ij} ds
= ∫_0^t ((φs + ψs)(φs* + ψs*))^{ij} ds
= ∫_0^t δ^{ij} ds = δ^{ij} t, t ≥ 0.
By the multidimensional version of P. Lévy's characterization theorem (see [38, Ch. IV, Th. 3.6]), we deduce that B is an m-dimensional (Gt)-Brownian motion. Set ρt(ω) = σt(ω)χt(ω). Let us consider the process

Nt = ∫_0^t σs dBs = ∫_0^t σs χs dMs + ∫_0^t σs ψs dWs = ∫_0^t ρs dMs, t ≥ 0.

Then, for any i = 1, …, n, we have

⟨N^i⟩_t = ∫_0^t (ρs as ρs*)^{ii} ds = ∫_0^t (σs χs σs σs* χs* σs*)^{ii} ds = ∫_0^t (σs σs*)^{ii} ds = ∫_0^t a^{ii}_s ds = ⟨M^i⟩_t, t ≥ 0. (1.14)

(We have used the obvious equality σs χs σs = σs.) Furthermore,
⟨N^i, M^i⟩_t = ∫_0^t (ρs as)^{ii} ds = ∫_0^t (σs χs σs σs*)^{ii} ds = ∫_0^t (σs σs*)^{ii} ds = ∫_0^t a^{ii}_s ds = ⟨M^i⟩_t, t ≥ 0. (1.15)
Comparing (1.14) with (1.15), we deduce that ⟨N^i − M^i⟩ = 0. Hence, M = x0 + N. As a result, the pair (Z, B) is a solution of (1.1).

In this monograph, we will investigate only weak solutions and uniqueness in law for SDE (1). It will be more convenient for us to consider a solution of (1) as a solution of the corresponding martingale problem rather than to treat it in the sense of Definition 1.1. The reason is that in this case a solution is a single object and not a pair of processes as in Definition 1.1. This approach is justified by Theorem 1.27. Thus, from here on, we will always deal with the following definition, which is a reformulation of Definition 1.26 for the case of the SDEs having the form (1).

Definition 1.28. A solution of SDE (1) is a measure P on B(C(R+)) such that

(a) P{X0 = x0} = 1;

(b) for any t ≥ 0,

∫_0^t (|b(Xs)| + σ²(Xs)) ds < ∞ P-a.s.;

(c) the process

Mt = Xt − ∫_0^t b(Xs) ds, t ≥ 0, (1.16)

is a (Ft, P)-local martingale;

(d) the process

Mt² − ∫_0^t σ²(Xs) ds, t ≥ 0,

is a (Ft, P)-local martingale.

Remark. If one accepts Definition 1.28, then the existence and uniqueness of a solution are defined in an obvious way. It follows from Theorem 1.27 that the existence of a solution in the sense of Definition 1.28 is equivalent to weak existence (Definition 1.1); the uniqueness of a solution in the sense of Definition 1.28 is equivalent to uniqueness in law (Definition 1.3).

Definition 1.29. (i) A solution P of (1) is positive if P{∀t ≥ 0, Xt ≥ 0} = 1.

(ii) A solution P of (1) is strictly positive if P{∀t ≥ 0, Xt > 0} = 1.

The negative and strictly negative solutions are defined in a similar way.
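As an aside, the matrix identities underlying the proof of Theorem 1.27 (ii) (χσ = φ, σχσ = σ, and φφ* + ψψ* = identity) can be sanity-checked numerically. In the sketch below (ours, not from the text), χ is realised as the Moore–Penrose pseudoinverse, which inverts σ on Im σ and vanishes on (Im σ)^⊥, exactly as in the construction above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 5
sigma = rng.standard_normal((n, m))   # a linear operator R^m -> R^n

chi = np.linalg.pinv(sigma)           # inverts sigma on Im(sigma),
                                      # vanishes on (Im sigma)^perp
phi = chi @ sigma                     # projection onto (ker sigma)^perp
psi = np.eye(m) - phi                 # projection onto ker(sigma)

assert np.allclose(phi @ phi, phi) and np.allclose(phi, phi.T)  # projection
assert np.allclose(sigma @ chi @ sigma, sigma)                  # sigma chi sigma = sigma
assert np.allclose(phi @ phi.T + psi @ psi.T, np.eye(m))        # phi phi* + psi psi* = I
```

The last identity is the one that makes the Lévy-characterization computation for ⟨B^i, B^j⟩ close.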
1.5 Solutions up to a Random Time

There are several reasons why we consider solutions up to a random time. First, a solution may explode. Second, a solution may not be extended after it reaches some level. Third, we can guarantee in some cases that a solution exists up to the first time it leaves some interval, but we cannot guarantee the existence of a solution after that time (see Chapter 2).

In order to define a solution up to a random time, we replace the space C(R+) of continuous functions by the space C̄(R+) defined below. We need this space to consider exploding solutions. Let π be an isolated point added to the real line.

Definition 1.30. The space C̄(R+) consists of the functions f : R+ → R ∪ {π} with the following property: there exists a time ξ(f) ∈ [0, ∞] such that f is continuous on [0, ξ(f)) and f = π on [ξ(f), ∞). The time ξ(f) is called the killing time of f.

Throughout this section, X = (Xt; t ≥ 0) will denote the coordinate process on C̄(R+), i.e.,

Xt : C̄(R+) ∋ ω ↦ ω(t) ∈ R ∪ {π},

(Ft) will denote the canonical filtration on C̄(R+), i.e., Ft = σ(Xs; s ≤ t), and F will stand for the σ-field ⋁_{t≥0} Ft = σ(Xs; s ≥ 0).

Remark. There exists a metric on C̄(R+) with the following properties.

(a) It turns C̄(R+) into a Polish space.

(b) The convergence fn → f in this metric is equivalent to:

ξ(fn) → ξ(f) as n → ∞;
∀t < ξ(f), sup_{s≤t} |fn(s) − f(s)| → 0 as n → ∞.

(In particular, C(R+) is a closed subspace in this metric.)

(c) The Borel σ-field on C̄(R+) with respect to this metric coincides with σ(Xt; t ≥ 0).

In what follows, we will need two different notions: a solution up to S and a solution up to S−.

Definition 1.31. Let S be a stopping time on C̄(R+). A solution of (1) up to S (or a solution defined up to S) is a measure P on FS such that

(a) P{∀t ≤ S, Xt ≠ π} = 1;

(b) P{X0 = x0} = 1;

(c) for any t ≥ 0,

∫_0^{t∧S} (|b(Xs)| + σ²(Xs)) ds < ∞ P-a.s.;
(d) the process

Mt = X_{t∧S} − ∫_0^{t∧S} b(Xs) ds, t ≥ 0, (1.17)

is a (Ft, P)-local martingale;

(e) the process

Mt² − ∫_0^{t∧S} σ²(Xs) ds, t ≥ 0,

is a (Ft, P)-local martingale.

In the following, we will often say that (P, S) is a solution of (1).

Remarks. (i) The measure P is defined on FS and not on F since otherwise it would not be unique.

(ii) In the usual definition of a local martingale, the probability measure is defined on F. Here P is defined on a smaller σ-field FS. However, in view of the equality M^S = M, the knowledge of P only on FS is sufficient to verify the inclusion M ∈ M^c_loc(Ft, P) that arises in (d). In other words, if P̃ and P̂ are probability measures on F such that P̃|_{FS} = P̂|_{FS} = P, then M ∈ M^c_loc(Ft, P̃) if and only if M ∈ M^c_loc(Ft, P̂) (so we can write simply M ∈ M^c_loc(Ft, P)). In order to prove this statement, note that the inclusion M ∈ M^c_loc(Ft, P̃) means that there exists a sequence of stopping times (Sn) such that

(a) Sn ≤ Sn+1;
(b) Sn ≤ S;
(c) for any t ≥ 0, P̃{t ∧ Sn = t ∧ S} → 1 as n → ∞ (note that {t ∧ Sn = t ∧ S} ∈ FS);
(d) for any s ≤ t, C ∈ Fs, and n ∈ N,

E_P[ (M_{t∧Sn} − M_{s∧Sn}) I(C) ] = 0.

This expression makes sense since the random variable (M_{t∧Sn} − M_{s∧Sn}) I(C) = (M_{t∧Sn} − M_{s∧Sn}) I(C ∩ {Sn > s}) is FS-measurable. Similarly, in order to verify conditions (a), (b), (c), and (e), it is sufficient to know the values of P only on FS.

Definition 1.32. (i) A solution (P, S) is positive if P{∀t ≤ S, Xt ≥ 0} = 1.

(ii) A solution (P, S) is strictly positive if P{∀t ≤ S, Xt > 0} = 1.

The negative and strictly negative solutions are defined in a similar way.

Recall that a function S : C̄(R+) → [0, ∞] is called a predictable stopping time if there exists a sequence (Sn) of (Ft)-stopping times such that

(a) Sn ≤ Sn+1;
(b) Sn ≤ S and Sn < S on the set {S > 0};
(c) limn Sn = S.

In the following, we will call (Sn) a predicting sequence for S.
Definition 1.33. Let S be a predictable stopping time on C̄(R+) with a predicting sequence (Sn). A solution of (1) up to S− (or a solution defined up to S−) is a measure P on FS− such that, for any n ∈ N, the restriction of P to FSn is a solution up to Sn.

In the following, we will often say that (P, S−) is a solution of (1).

Remarks. (i) Obviously, this definition does not depend on the choice of a predicting sequence for S.

(ii) Definition 1.33 implies that P{∀t < S, Xt ≠ π} = 1.

(iii) When dealing with solutions up to S, one may use the space C(R+). The space C̄(R+) is essential only for solutions up to S−.

In this monograph, we will use the following terminology: a solution in the sense of Definition 1.28 will be called a global solution, while a solution in the sense of Definition 1.31 or Definition 1.33 will be called a local solution. The next statement clarifies the relationship between these two notions.

Theorem 1.34. (i) Suppose that (P, S) is a solution of (1) in the sense of Definition 1.31 and S = ∞ P-a.s. Then P admits a unique extension P̃ to F. We have P̃{ξ = ∞} = 1. Let Q be the measure on C(R+) defined as the restriction of P̃ to C(R+). Then Q is a solution of (1) in the sense of Definition 1.28.

(ii) Let Q be a solution of (1) in the sense of Definition 1.28. Let P be the measure on C̄(R+) defined as P(A) = Q(A ∩ {ξ = ∞}). Then (P, ∞) is a solution of (1) in the sense of Definition 1.31.

Proof. (i) The existence and the uniqueness of P̃ follow from Lemma B.5. The latter part of (i) as well as statement (ii) are obvious.
2 One-Sided Classification of Isolated Singular Points
In this chapter, we consider SDEs of the form (1). Section 2.1 deals with the following question: Which points should be called singular for SDE (1)? This section contains the definition of a singular point as well as the reasoning that these points are indeed "singular". Several natural examples of SDEs with isolated singular points are given in Section 2.2. These examples illustrate how a solution may behave in the neighbourhood of such a point. In Section 2.3, we investigate the behaviour of a solution of (1) in the right-hand neighbourhood of an isolated singular point. We present a complete qualitative classification of different types of behaviour. Section 2.4 contains an informal description of the constructed classification. The statements that are formulated in Section 2.3 are proved in Section 2.5.

Throughout this chapter, we assume that σ(x) ≠ 0 for all x ∈ R.
2.1 Isolated Singular Points: The Definition

In this section, except for Proposition 2.2 and Theorem 2.8, we will deal with global solutions, i.e., solutions in the sense of Definition 1.28. Throughout the section, except for Proposition 2.2 and Theorem 2.8, X denotes the coordinate process on C(R+) and (Ft) stands for the canonical filtration on C(R+).

Definition 2.1. (i) A measurable function f : R → R is locally integrable at a point d ∈ R if there exists δ > 0 such that

∫_{d−δ}^{d+δ} |f(x)| dx < ∞.

We will use the notation: f ∈ L¹_loc(d).

(ii) A measurable function f is locally integrable on a set D ⊆ R if f is locally integrable at each point d ∈ D. We will use the notation: f ∈ L¹_loc(D).
A.S. Cherny and H.-J. Engelbert: LNM 1858, pp. 27–64, 2005. c Springer-Verlag Berlin Heidelberg 2005
Proposition 2.2 (Engelbert, Schmidt). Suppose that, for SDE (1),

(1 + |b|)/σ² ∈ L¹_loc(R). (2.1)

Then there exists a unique solution of (1) defined up to S−, where S = sup_n inf{t ≥ 0 : |Xt| = n} and X denotes the coordinate process on C̄(R+).

For the proof, see [15].

Remark. Under the conditions of Proposition 2.2, there need not exist a global solution because the solution may explode within a finite time. Theorem 4.5 shows which conditions should be added to (2.1) in order to guarantee the existence of a global solution.

In Chapter 2, we prove the following local analog of Proposition 2.2 (see Theorem 2.11). If the function (1 + |b|)/σ² is locally integrable at a point d, then there exists a unique solution of (1) "in the neighbourhood of d". Therefore, it is reasonable to call such a point "regular" for SDE (1).

Definition 2.3. (i) A point d ∈ R is called a singular point for SDE (1) if

(1 + |b|)/σ² ∉ L¹_loc(d).

A point that is not singular will be called regular.

(ii) A point d ∈ R is called an isolated singular point for (1) if d is singular and there exists a deleted neighbourhood of d that consists of regular points.

The next 5 statements are intended to show that the singular points in the sense of Definition 2.3 are indeed "singular".

Proposition 2.4. Suppose that |b|/σ² ∈ L¹_loc(R) and 1/σ² ∉ L¹_loc(d). Then there exists no solution of (1) with X0 = d.

For the proof, see [15, Th. 4.35].

Theorem 2.5. Let I ⊆ R be an open interval. Suppose that |b|/σ² ∉ L¹_loc(x) for any x ∈ I. Then, for any x0 ∈ I, there exists no solution of (1).

Proof. (Cf. also [11].) Suppose that P is a solution. By the occupation times formula and by the definition of a solution, we have

∫_0^t |b(Xs)| ds = ∫_0^t (|b(Xs)|/σ²(Xs)) d⟨X⟩s = ∫_R (|b(x)|/σ²(x)) L^x_t(X) dx < ∞ P-a.s. (2.2)

As L^y_t(X) is right-continuous in y (see Proposition A.6 (i)), we deduce that

P{∀t ≥ 0, ∀x ∈ I, L^x_t(X) = 0} = 1.
Thus, for the stopping time S = 1 ∧ inf{t ≥ 0 : Xt ∉ I}, one has

S = ∫_0^S 1 ds = ∫_0^S σ^{−2}(Xs) d⟨X⟩s = ∫_R σ^{−2}(x) L^x_S(X) dx = ∫_I σ^{−2}(x) L^x_S(X) dx = 0 P-a.s.

(Here we used the fact that L^x_S(X) = 0 for x ∉ I; see Proposition A.5.) This leads to a contradiction since S > 0.
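The distinction drawn by Definition 2.3 can also be probed numerically. The sketch below (ours; the truncation scheme is an illustration only) approximates the right-hand part of the integral of (1 + |b|)/σ² over (ε, 1/2): for the geometric Brownian motion coefficients b(x) = µx, σ(x) = x, the truncated integrals blow up like 1/ε, so 0 is a singular point, while for b = 0, σ = 1 they stabilise near 1/2, so 0 is regular.

```python
def trunc_integral(f, d, eps, delta=0.5, n=20_000):
    """Trapezoidal approximation of the integral of |f| over (d+eps, d+delta)
    on a log-spaced grid accumulating toward d (a rough sketch)."""
    xs = [d + eps * (delta / eps) ** (i / n) for i in range(n + 1)]
    return sum(0.5 * (abs(f(xs[i])) + abs(f(xs[i + 1]))) * (xs[i + 1] - xs[i])
               for i in range(n))

mu = 1.0
f_gbm = lambda x: (1 + abs(mu * x)) / x**2   # (1+|b|)/sigma^2 for b = mu*x, sigma = x
f_bm = lambda x: 1.0                         # b = 0, sigma = 1

for eps in (1e-2, 1e-4, 1e-6):
    print(eps, trunc_integral(f_gbm, 0.0, eps), trunc_integral(f_bm, 0.0, eps))
# the first column of values grows roughly like 1/eps (0 is singular for
# the geometric Brownian motion), the second stabilises near 0.5
```

Such a numerical test can of course only suggest divergence; the actual classification is done analytically.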
Remark. The above statement shows that a solution cannot enter an open interval that consists of singular points.

Theorem 2.6. Suppose that d is a singular point for (1) and P is a solution of (1). Then, for any t ≥ 0, we have

L^d_t(X) = L^{d−}_t(X) = 0 P-a.s.

Proof. Since d is a singular point, we have

∀ε > 0, ∫_d^{d+ε} (1 + |b(x)|)/σ²(x) dx = ∞ (2.3)

or

∀ε > 0, ∫_{d−ε}^d (1 + |b(x)|)/σ²(x) dx = ∞. (2.4)

If (2.3) is satisfied, then (2.2), together with the right-continuity of L^y_t(X) in y, ensures that, for any t ≥ 0, L^d_t(X) = 0 P-a.s. If (2.4) is satisfied, then, for any t ≥ 0, L^{d−}_t(X) = 0 P-a.s. We set

Bt = ∫_0^t (1/σ(Xs)) dMs, t ≥ 0,

where M is given by (1.16). Then

∫_0^t I(Xs = d) dXs = ∫_0^t I(Xs = d) b(Xs) ds + ∫_0^t I(Xs = d) σ(Xs) dBs = ∫_0^t I(Xs = d) b(Xs) ds + Nt, t ≥ 0,

where N ∈ M^c_loc(Ft, P). By the occupation times formula,

∫_0^t I(Xs = d) b(Xs) ds = ∫_0^t (I(Xs = d) b(Xs)/σ²(Xs)) d⟨X⟩s = ∫_R (I(x = d) b(x)/σ²(x)) L^x_t(X) dx = 0 P-a.s.
Similarly,

⟨N⟩_t = ∫_0^t I(Xs = d) σ²(Xs) ds = 0 P-a.s.

Therefore,

∫_0^t I(Xs = d) dXs = 0, t ≥ 0

and, by equality (A.1), for any t ≥ 0, we have

L^d_t(X) = L^{d−}_t(X) P-a.s. (2.5)

We have already proved that L^d_t(X) = 0 or L^{d−}_t(X) = 0. This, together with (2.5), leads to the desired statement.

Theorem 2.7. Let d be a regular point for (1) and P be a solution of (1). Suppose moreover that P{Td < ∞} > 0, where Td = inf{t ≥ 0 : Xt = d}. Then, for any t ≥ 0, on the set {t > Td} we have

L^d_t(X) > 0, L^{d−}_t(X) > 0 P-a.s.

This statement is proved in Section 2.5.

Theorems 2.6 and 2.7 may be informally described as follows. Singular points for (1) are those and only those points at which the local time of a solution vanishes.

Consider now SDE (1) with x0 = 0. If the conditions of Proposition 2.2 are satisfied, then the behaviour of a solution is regular in the following sense:

– there exists a solution up to S−;
– it is unique;
– it has alternating signs, i.e.,

P{∀ε > 0 ∃t < ε : Xt > 0} = 1, P{∀ε > 0 ∃t < ε : Xt < 0} = 1

(these properties follow from the construction of a solution; see [15] for details). The theorem below, which follows from the results of Chapter 4, shows that at least one of the above 3 conditions fails to hold if zero is an isolated singular point.

Theorem 2.8. Suppose that

(1 + |b|)/σ² ∈ L¹_loc(R \ {0}), (1 + |b|)/σ² ∉ L¹_loc(0),

and x0 = 0. Set S = sup_n inf{t ≥ 0 : |Xt| = n}, where X is the coordinate process on C̄(R+). Then there are only 4 possibilities:

1. There exists no solution up to S−.
2. There exists a unique solution up to S−, and it is positive.
3. There exists a unique solution up to S−, and it is negative.
4. There exist a positive solution as well as a negative solution up to S−. (In this case alternating solutions may also exist.)
[Figure 2.1 (five graphs) appears here.]

Fig. 2.1. Qualitative difference between the regular points and the singular points. The top graph shows the "typical" behaviour of a solution in the neighbourhood of a regular point. The other 4 graphs illustrate 4 possible types of behaviour of a solution in the neighbourhood of a singular point. As an example, the crossed-out arrow in the bottom left-hand graph indicates that there is no positive solution. The sign "?" in the bottom right-hand graph indicates that an alternating solution may exist or may not exist.
2.2 Isolated Singular Points: Examples

A SDE with an isolated singular point is provided by Example 1.17. For this SDE, there is no solution. Another SDE with an isolated singular point is the SDE for a Bessel process, which has been considered as Example 1.23. Here we will study it for all starting points x0. As in the previous section, we deal here with global solutions.

Example 2.9 (SDE for a Bessel process). Let us consider the SDE

dXt = ((δ − 1)/(2Xt)) I(Xt ≠ 0) dt + dBt, X0 = x0 (2.6)

with δ > 1.

(i) If x0 > 0 and δ ≥ 2, then this equation has a unique solution. It is strictly positive.

(ii) If x0 = 0 or 1 < δ < 2, then this equation possesses different solutions. For any solution P, we have P{∃t ≥ 0 : Xt = 0} = 1.

Proof. (i) Let P be the distribution of a δ-dimensional Bessel process started at x0. Proposition A.21 (combined with Theorem 1.27) shows that P is a solution of SDE (2.6). Suppose that there exists another solution P̃. Set

Q = Law(Xt²; t ≥ 0 | P), Q̃ = Law(Xt²; t ≥ 0 | P̃).

By Itô's formula, both Q and Q̃ are solutions of the SDE

dXt = δ dt + 2√|Xt| dBt, X0 = x0² (2.7)

(see the proof of Example 1.23). Propositions 1.6 and 1.12 combined together show that weak uniqueness holds for this equation, i.e., Q = Q̃. Hence,

Law(|Xt|; t ≥ 0 | P̃) = Law(|Xt|; t ≥ 0 | P). (2.8)

Proposition A.20 (i) guarantees that P{∀t ≥ 0, Xt > 0} = 1. This, together with (2.8), implies that P̃{∀t ≥ 0, Xt ≠ 0} = 1. Since the paths of X are continuous and P̃{X0 = x0 > 0} = 1, we get P̃{∀t ≥ 0, Xt > 0} = 1. Using (2.8) once again, we obtain P̃ = P.

(ii) We first suppose that x0 = 0. Let P be defined as above and P̃ be the image of P under the map C(R+) ∋ ω ↦ −ω ∈ C(R+). It is easy to verify that P̃ is also a solution of (2.6). The solutions P and P̃ are different since

P{∀t ≥ 0, Xt ≥ 0} = 1, P̃{∀t ≥ 0, Xt ≤ 0} = 1.
33
(Moreover, for any α ∈ (0, 1), the measure Pα = αP + (1 − α)P is also a solution.) Suppose now that x0 > 0. Let P denote the distribution of a δ-dimensional Bessel process started at x0 . Since 1 < δ < 2, we have: T0 < ∞ P-a.s. (see Proposition A.20 (ii)). Let P be the image of P under the map C(R+ ) ω −→ ω ∈ C(R+ ),
ω(t) if t ≤ T0 (ω), ω (t) = −ω(t) if t > T0 (ω). Then P is also a solution of (2.6). Thus, for any x0 , SDE (2.6) has different solutions. Now, let P be an arbitrary solution of (2.6). Let us prove that P{∃t ≥ 0 : Xt = 0} = 1. For x0 = 0, this is clear. Assume now that x0 > 0, so that 1 < δ < 2. The measure Q = Law(Xt2 ; t ≥ 0 | P) is a solution of (2.7). As there is weak uniqueness for (2.7), Q is the distribution of the square of a δ-dimensional Bessel process started at x20 . By Proposition A.20 (ii), Q{∃t > 0 : Xt = 0} = 1, which yields P{∃t > 0 : Xt = 0} = 1. Example 2.10 (SDE for a geometric Brownian motion). Let us consider the SDE dXt = µXt dt + (Xt + ηI(Xt = 0))dBt ,
X0 = x0
(2.9)
with µ ∈ R, η = 0. (i) If x0 > 0, then there exists a unique solution P of this equation. It is strictly positive. If µ > 1/2, then P{limt→∞ Xt = ∞} = 1; if µ < 1/2, then P{limt→∞ Xt = 0} = 1. (ii) If x0 = 0, then there exists no solution. Remark. The term ηI(Xt = 0) is added in order to guarantee that σ = 0 at each point. The choice of η = 0 does not influence the properties of (2.9) as seen from the reasoning given below. Proof of Example 2.10. If P is a solution of (2.9), then, for any t ≥ 0, 0
t
I(Xs = 0)ds =
t
0
I(Xs = 0) 1 d Xs = 2 2 σ (Xs ) η
R
I(x = 0)Lxt (X)dx = 0
P-a.s.
Hence, P is also a solution of the SDE dXt = µXt dt + Xt dBt ,
X0 = x0 .
(2.10)
Propositions 1.6 and 1.9 combined together show that there is uniqueness in law for (2.10). Applying Itˆ o’s formula and Theorem 1.27, we deduce that the solution of (2.10) is given by
Q = Law(x0 e^{Bt + (µ − 1/2)t}; t ≥ 0),

where B is a Brownian motion started at zero. Obviously, if x0 ≠ 0, then Q is also a solution of (2.9); if x0 = 0, then Q is not a solution of (2.9). The properties of the solution in the case x0 > 0 follow from the strong law of large numbers for the Brownian motion: Bt/t → 0 a.s. as t → ∞.
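The explicit solution of (2.10) can be checked against a direct discretisation. The sketch below (ours, not from the text) runs one Euler–Maruyama path of dX = µX dt + X dB and compares it with x0·exp(Bt + (µ − 1/2)t) built from the same Brownian increments; the two agree up to discretisation error.

```python
import math
import random

random.seed(0)
mu, x0, T, n = 0.2, 1.0, 1.0, 100_000
dt = T / n

x, bt = x0, 0.0
for _ in range(n):
    dB = random.gauss(0.0, math.sqrt(dt))
    x += mu * x * dt + x * dB   # Euler step for dX = mu X dt + X dB
    bt += dB

explicit = x0 * math.exp(bt + (mu - 0.5) * T)   # the closed-form solution
print(abs(x - explicit) / explicit)  # small discretisation error
```

The relative discrepancy shrinks as the step size decreases, as expected for the Euler scheme.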
The situations in Examples 1.17, 2.9, and 2.10 may informally be described as follows.

In Example 1.17, the drift is negative on the positive half-line and is positive on the negative half-line. Moreover, the drift is so strong near zero that it does not allow a solution to leave zero. On the other hand, the measure concentrated on {X ≡ 0} is not a solution.

In Example 2.9, the drift is positive on the positive half-line and is negative on the negative half-line. Moreover, the drift is so strong near zero that it guarantees the existence of both a positive solution and a negative solution started at zero. If 1 < δ < 2, then a solution started at a point x0 ≠ 0 reaches zero a.s. After the time it reaches zero, it can go in the positive direction as well as in the negative direction. This yields different solutions. If δ ≥ 2, then a solution started at x0 ≠ 0 cannot reach zero. As this "bad" point is not reached, a solution is unique.

In Example 2.10, the drift and the diffusion coefficients are so small near zero that a solution started at x0 ≠ 0 cannot reach zero. A solution with X0 = 0 cannot leave zero, but the measure concentrated on {X ≡ 0} is not a solution.

The above examples show that a slight modification of a parameter in a SDE may essentially influence the properties of solutions.
2.3 One-Sided Classification: The Results

Throughout this section, we assume that zero is an isolated singular point. Then there exists a > 0 such that

(1 + |b|)/σ² ∈ L¹_loc((0, a]). (2.11)

Note that the integral

∫_0^a (1 + |b(x)|)/σ²(x) dx

may converge. In this case the corresponding integral should diverge in the left-hand neighbourhood of zero. We will use the functions
ρ(x) = exp( −∫_a^x (2b(y)/σ²(y)) dy ), x ∈ (0, a], (2.12)

s(x) = ∫_0^x ρ(y) dy if ∫_0^a ρ(y) dy < ∞,
s(x) = −∫_x^a ρ(y) dy if ∫_0^a ρ(y) dy = ∞. (2.13)

The function s is a version of the indefinite integral ∫ ρ(y) dy. If ∫_0^a ρ(y) dy < ∞, we choose the version that vanishes at zero; otherwise, we choose the version that vanishes at a.

We will use the notation

Ta = inf{t ≥ 0 : Xt = a}, T̄a = sup_n inf{t ≥ 0 : |Xt − a| ≤ 1/n},
Ta,c = Ta ∧ Tc, T̄a,c = T̄a ∧ T̄c

with the usual convention inf ∅ = +∞. Here a, c ∈ R and X is the coordinate process on C̄(R+). Note that T̄a may not be equal to Ta since the coordinate process may be killed just before it reaches a. We will also consider stochastic intervals

[[S, T]] = {(ω, t) ∈ Ω × R+ : S(ω) ≤ t ≤ T(ω)},
]]S, T]] = {(ω, t) ∈ Ω × R+ : S(ω) < t ≤ T(ω)},
[[S, T[[ = {(ω, t) ∈ Ω × R+ : S(ω) ≤ t < T(ω)},
]]S, T[[ = {(ω, t) ∈ Ω × R+ : S(ω) < t < T(ω)},

where S, T are stopping times (not necessarily S ≤ T).

Theorem 2.11. Suppose that

∫_0^a (1 + |b(x)|)/σ²(x) dx < ∞.

If x0 ∈ [0, a], then there exists a unique solution P defined up to T0,a. We have E_P T0,a < ∞ and P{X_{T0,a} = 0} > 0.

If the conditions of Theorem 2.11 are satisfied, we will say that zero has right type 0.

Theorem 2.12. Suppose that

∫_0^a (1 + |b(x)|)/(ρ(x)σ²(x)) dx < ∞, ∫_0^a ρ(x) dx < ∞, ∫_0^a (|b(x)|/σ²(x)) dx = ∞.

If x0 ∈ [0, a], then there exists a positive solution P defined up to Ta, and such a solution is unique. We have E_P Ta < ∞ and P{∃t ≤ Ta : Xt = 0} > 0.
If the conditions of Theorem 2.12 are satisfied, we will say that zero has right type 2.

Theorem 2.13. Suppose that

∫_0^a (1 + |b(x)|)/(ρ(x)σ²(x)) dx = ∞, ∫_0^a ρ(x) dx < ∞, ∫_0^a ((1 + |b(x)|)/(ρ(x)σ²(x))) s(x) dx < ∞.

(i) For any solution (P, S), we have X ≤ 0 on [[T0, S]] P-a.s. (i.e., for any t ≥ 0, we have Xt ≤ 0 P-a.s. on {T0 ≤ t ≤ S}).

(ii) If x0 ∈ [0, a], then there exists a unique solution P defined up to T0,a. We have E_P T0,a < ∞ and P{X_{T0,a} = 0} > 0.

If the conditions of Theorem 2.13 are satisfied, we will say that zero has right type 1.

Remark. Statement (i) implies that if x0 ≤ 0, then any solution (P, S) is negative.

Theorem 2.14. Suppose that

∫_0^a ((1 + |b(x)|)/(ρ(x)σ²(x))) s(x) dx = ∞, ∫_0^a ρ(x) dx < ∞, ∫_0^a (s(x)/(ρ(x)σ²(x))) dx < ∞.

(i) If x0 > 0, then any solution (P, S) is strictly positive.

(ii) If x0 ≤ 0, then any solution (P, S) is negative.

(iii) If x0 ∈ (0, a), then there exists a unique solution P defined up to T̄0,a−. We have E_P T̄0,a < ∞ and P{lim_{t↑T̄0,a} Xt = 0} > 0.

If the conditions of Theorem 2.14 are satisfied, we will say that zero has right type 6.

Remarks. (i) The solution P is defined up to T̄0,a− (and not up to T0,a−) since the stopping time T0,a is not predictable. The reason is that the coordinate process may be killed just before it reaches 0 or a. On the other hand, T̄0,a is obviously predictable.

(ii) Under assumption (2.11), for x0 ∈ (0, a), there always exists a unique solution up to T̄0,a− (not only for type 6).

Theorem 2.15. Suppose that

∫_0^a ρ(x) dx < ∞, ∫_0^a (s(x)/(ρ(x)σ²(x))) dx = ∞.

(i) If x0 > 0, then any solution (P, S) is strictly positive.

(ii) If x0 ≤ 0, then any solution (P, S) is negative.

(iii) If x0 ∈ (0, a], then there exists a unique solution P defined up to Ta. We have P{Ta = ∞} > 0 and lim_{t→∞} Xt = 0 P-a.s. on {Ta = ∞}.
If the conditions of Theorem 2.15 are satisfied, we will say that zero has right type 4.

Theorem 2.16. Suppose that

∫_0^a ρ(x) dx = ∞, ∫_0^a ((1 + |b(x)|)/(ρ(x)σ²(x))) |s(x)| dx < ∞.

(i) If x0 > 0, then any solution (P, S) is strictly positive.

(ii) If x0 ∈ (0, a], then there exists a unique solution P defined up to Ta. We have E_P Ta < ∞.

(iii) If x0 = 0, then there exists a positive solution P defined up to Ta, and such a solution is unique. We have E_P Ta < ∞ and X > 0 on ]]0, Ta]] P-a.s.

If the conditions of Theorem 2.16 are satisfied, we will say that zero has right type 3.

Theorem 2.17. Suppose that

∫_0^a ρ(x) dx = ∞, ∫_0^a ((1 + |b(x)|)/(ρ(x)σ²(x))) |s(x)| dx = ∞.

(i) If x0 > 0, then any solution (P, S) is strictly positive.

(ii) If x0 ≤ 0, then any solution (P, S) is negative.

(iii) If x0 ∈ (0, a], then there exists a unique solution P defined up to Ta, and Ta < ∞ P-a.s.

If the conditions of Theorem 2.17 are satisfied, we will say that zero has right type 5.

Figure 2.2 represents the one-sided classification of the isolated singular points. Note that the integrability conditions given in Figure 2.2 do not have the same form as in Theorems 2.11–2.17. Nevertheless, they are equivalent. For example,

∫_0^a (1 + |b(x)|)/σ²(x) dx < ∞

if and only if

∫_0^a ρ(x) dx < ∞, ∫_0^a (1 + |b(x)|)/(ρ(x)σ²(x)) dx < ∞, ∫_0^a (|b(x)|/σ²(x)) dx < ∞.

(In this case zero has right type 0.)
38
2 One-Sided Classification of Isolated Singular Points
2.4 One-Sided Classification: Informal Description

Let us now informally describe how a solution behaves in the right-hand neighbourhood of an isolated singular point for each of types 0, . . . , 6.

If zero has right type 0, then, for any x0 ∈ [0, a], there exists a unique solution defined up to T0,a. This solution reaches zero with strictly positive probability. An example of an SDE, for which zero has right type 0, is provided by the equation dXt = dBt, X0 = x0.

If zero has right type 1, then, for any x0 ∈ [0, a], there exists a unique solution defined up to T0,a. This solution reaches zero with strictly positive probability. Any solution started at zero (it may be defined up to another stopping time) is negative. In other words, a solution may leave zero only in the negative direction. The SDE
$$dX_t = -\frac{1}{2X_t}\,I(X_t \ne 0)\,dt + dB_t, \qquad X_0 = x_0$$
provides an example of an SDE, for which zero has right type 1 (this follows from Theorem 5.1). In this example, after having reached zero the solution can be continued neither in the negative nor in the positive direction; compare with Example 1.17.

If zero has right type 2, then, for any x0 ∈ [0, a], there exists a unique positive solution defined up to Ta. This solution reaches zero with strictly positive probability and is reflected at this point. There may exist other solutions up to Ta. These solutions take negative values (see Chapter 3). The SDE for a δ-dimensional Bessel process
$$dX_t = \frac{\delta-1}{2X_t}\,I(X_t \ne 0)\,dt + dB_t, \qquad X_0 = x_0$$
with 1 < δ < 2 is an example of an SDE, for which zero has right type 2 (this follows from Theorem 5.1).

If zero has right type 3, then, for any x0 ∈ (0, a], there exists a unique solution defined up to Ta. This solution never reaches zero. There exists a unique positive solution started at zero and defined up to Ta. There may exist other solutions started at zero and defined up to Ta. These solutions take negative values (see Chapter 3). For the Bessel SDE above with δ ≥ 2, zero has right type 3 (this follows from Theorem 5.1).

If zero has right type 4, then, for any x0 ∈ (0, a), there exists a unique solution defined up to Ta. This solution never reaches zero. With strictly positive probability it tends to zero as t → ∞. Thus, with strictly positive
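For the Bessel drift the classification integrals can also be verified directly, instead of invoking Theorem 5.1. A symbolic sketch (our own check, not the book's argument): we take σ = 1, a = 1, and use ρ(x) = x^{1−δ}, which is exp(−∫ 2b/σ²) for this drift up to a multiplicative constant; the constant does not affect convergence.

```python
import sympy as sp

x = sp.symbols("x", positive=True)

def classification_integrals(delta):
    """Convergence tests at 0 for b(x) = (delta-1)/(2x), sigma = 1 on (0, 1]."""
    b = (delta - 1) / (2 * x)
    rho = x ** (1 - delta)                       # scale density, up to a constant
    I1 = sp.integrate(rho, (x, 0, 1))            # int_0^a rho(x) dx
    I2 = sp.integrate((1 + b) / rho, (x, 0, 1))  # int_0^a (1+|b|)/(rho sigma^2) dx
    return I1, I2

# 1 < delta < 2: both integrals finite, in line with right type 2.
print(classification_integrals(sp.Rational(3, 2)))  # (2, 7/6)
# delta >= 2: the first integral is infinite, so type 2 is excluded
# (zero then has right type 3 by Theorem 2.16).
print(classification_integrals(3))  # (oo, 5/6)
```

The same two integrals distinguish the reflecting case 1 < δ < 2 from the non-attainable case δ ≥ 2 discussed above.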
Let R be the measure on $C(\mathbb{R}_+)\times C(\mathbb{R}_+)$ defined as $R(d\omega_1,d\omega_2)=\mathsf{P}(d\omega_1)\,\mathsf{P}_{\omega_1(u)}(d\omega_2)$ and let Q be the image of R under the map $C(\mathbb{R}_+)\times C(\mathbb{R}_+)\ni(\omega_1,\omega_2)\longmapsto\omega\in C(\mathbb{R}_+)$,
$$\omega(t)=\begin{cases}\omega_1(t)&\text{if }t<u,\\ \omega_2(t-u)&\text{if }t\ge u,\ \omega_1(u)=\omega_2(0),\\ \omega_1(u)&\text{if }t\ge u,\ \omega_1(u)\ne\omega_2(0).\end{cases}$$
Obviously, $\mathsf{Q}\{X_0=x_0\}=1$ and, for any $t\ge 0$,
$$\int_0^{t\wedge T_{0,a}}\bigl(|b(X_s)|+\sigma^2(X_s)\bigr)\,ds<\infty\quad\text{Q-a.s.}$$
Clearly, the process
$$K_t=X_{t\wedge u}-x_0-\int_0^{t\wedge u\wedge T_{0,a}}b(X_s)\,ds,\qquad t\ge 0,$$
is a continuous $(\mathcal{F}_t,\mathsf{Q})$-local martingale. Consider the process
$$N_t=X_{t\vee u}-X_u-\int_{u\wedge T_{0,a}}^{(t\vee u)\wedge T_{0,a}}b(X_s)\,ds,\qquad t\ge 0,$$
and the stopping times $\tau_m=\inf\{t\ge u:|N_t|\ge m\}$. Set
$$X_t^1=X_{t\wedge u},\qquad X_t^2=X_{t+u},\qquad t\ge 0.$$
For any $u\le s\le t$, $C^1\in\mathcal{F}_u$, $C^2\in\mathcal{F}_{s-u}$, and $m\in\mathbb{N}$, we have
2.5 One-Sided Classification: The Proofs
47
$$\mathsf{E}_{\mathsf{Q}}\bigl[(N_t^{\tau_m}-N_s^{\tau_m})\,I(X^1\in C^1,\,X^2\in C^2)\bigr]=\int_{C(\mathbb{R}_+)}\int_{C(\mathbb{R}_+)}\bigl(N_{t\wedge\tau_m}(\omega_2)-N_{s\wedge\tau_m}(\omega_2)\bigr)\,I(X^1(\omega_1)\in C^1)\,I(X^2(\omega_2)\in C^2)\,\mathsf{P}(d\omega_1)\,\mathsf{P}_{\omega_1(u)}(d\omega_2)=0.$$
It follows from Proposition A.36 that, for any $C\in\mathcal{F}_u$, we have $\mathsf{E}_{\mathsf{Q}}\bigl[(N_t^{\tau_m}-N_s^{\tau_m})\,I(X\in C)\bigr]=0$. Combining this with the martingale property of K, we conclude that the process
$$M_t=X_t-x_0-\int_0^{t\wedge T_{0,a}}b(X_s)\,ds,\qquad t\ge 0,$$
is a continuous $(\mathcal{F}_t,\mathsf{Q})$-local martingale. In a similar way we prove that the process
$$M_t^2-\int_0^{t\wedge T_{0,a}}\sigma^2(X_s)\,ds,\qquad t\ge 0,$$
is a continuous $(\mathcal{F}_t,\mathsf{Q})$-local martingale. Set $Y=s(X)$. By the Itô–Tanaka formula, $Y\in\mathcal{M}^c_{loc}(\mathcal{F}_t,\mathsf{Q})$ and
$$\langle Y\rangle_t=\int_0^{t\wedge T_{0,\alpha}(Y)}\kappa^2(Y_s)\,ds,\qquad t\ge 0,$$
where κ is defined in (2.17). Let us consider
$$A_t=\begin{cases}\int_0^t\kappa^2(Y_s)\,ds&\text{if }t<T_{0,\alpha}(Y),\\ \infty&\text{if }t\ge T_{0,\alpha}(Y),\end{cases}\qquad \tau_t=\inf\{s\ge 0:A_s>t\},\qquad V_t=Y_{\tau_t},\quad t\ge 0.$$
It follows from the construction of $\mathsf{P}_x$ that, for any $x\in[0,a]$, $\mathsf{P}_x\{T_{0,a}<\infty\}=1$. Hence, $\mathsf{Q}\{T_{0,a}<\infty\}=1$. Now we can apply the same arguments as in the proof of existence to show that $A_{T_{0,\alpha}(Y)-}=T_{0,\alpha}(V)$ Q-a.s., where $\alpha=s(a)$, and
$$\tau_t=\int_0^{t\wedge T_{0,\alpha}(V)}\kappa^{-2}(V_s)\,ds,\qquad t\ge 0.\tag{2.26}$$
By Proposition A.16,
$$\langle V\rangle_t=\langle Y\rangle_{\tau_t}=t\wedge A_{T_{0,\alpha}(Y)-}=t\wedge T_{0,\alpha}(V),\qquad t\ge 0.$$
Using the same method as in the proof of Theorem 1.27 (ii), we construct a Brownian motion W (defined, possibly, on an enlarged probability space) such that $W=V$ on $[\![0,T_{0,\alpha}(W)]\!]$. Then (2.26) can be rewritten as
48
2 One-Sided Classification of Isolated Singular Points
$$\tau_t=\int_0^{t\wedge T_{0,\alpha}(W)}\kappa^{-2}(W_s)\,ds,\qquad t\ge 0.$$
Furthermore, $A_t=\inf\{s\ge 0:\tau_s>t\}$ and $Y_t=V_{A_t}=W_{A_t}$, $t\ge 0$. As a result, the measure Q is determined uniquely (i.e., it does not depend on the choice of a solution P). Since $u>0$ was taken arbitrarily, we conclude that the measure $\widetilde{\mathsf{P}}$ is determined uniquely. But $\mathsf{P}=\widetilde{\mathsf{P}}|_{\mathcal{F}_{T_{0,a}}}$ (see Lemma B.3). This completes the proof of uniqueness.
The inequality $\mathsf{E}_{\mathsf{P}}T_{0,a}<\infty$ follows from (2.23). The property $\mathsf{P}\{X_{T_{0,a}}=0\}>0$ is clear from the construction of the solution. Indeed, for the process Y defined by (2.20), we have $\mathsf{P}\{Y_{T_{0,\alpha}(Y)}=0\}=\mathsf{P}\{B_{T_{0,\alpha}(B)}=0\}>0$.
The proof is completed. □

Proof of Theorem 2.12. Existence. Consider the functions
$$\overline{\rho}(x)=\rho(|x|),\quad x\in[-a,a],\qquad \overline{s}(x)=s(|x|)\operatorname{sgn}x,\quad x\in[-a,a].$$
If we apply the same procedure as in the proof of the existence part of Theorem 2.11 with the interval $[0,a]$ replaced by the interval $[-a,a]$, the function ρ replaced by $\overline{\rho}$, the function s replaced by $\overline{s}$, and the function κ replaced by the function $\overline{\kappa}(y)=\overline{\rho}(\overline{s}^{\,-1}(y))\,\overline{\sigma}(\overline{s}^{\,-1}(y))$, then we obtain a measure $\overline{\mathsf{P}}$ on $\mathcal{F}_{T_{-a,a}}$ that is a solution of the SDE
$$dX_t=\overline{b}(X_t)\,dt+\overline{\sigma}(X_t)\,dB_t,\qquad X_0=x_0$$
up to $T_{-a,a}$. Here
$$\overline{b}(x)=b(|x|)\operatorname{sgn}x,\qquad \overline{\sigma}(x)=\sigma(|x|),\qquad x\in[-a,a].$$
In particular, (2.21) is replaced by
$$\int_{-a}^{a}\overline{\kappa}^{\,-2}(y)\,dy=\int_{-a}^{a}\frac{dx}{\overline{\rho}(x)\,\overline{\sigma}^2(x)}=2\int_0^a\frac{dx}{\rho(x)\,\sigma^2(x)}<\infty,$$
and one should apply Lemma 2.18 when checking the analogue of (2.23). The arguments used in the proof of Theorem 2.6 show that $L_t^0(X)=0$ $\overline{\mathsf{P}}$-a.s. The map $C(\mathbb{R}_+)\ni\omega\longmapsto|\omega|\in C(\mathbb{R}_+)$ is $\mathcal{F}_{T_{-a,a}}|\mathcal{F}_{T_a}$-measurable. Hence, we can define a measure P on $\mathcal{F}_{T_a}$ as the image of $\overline{\mathsf{P}}$ under this map. Using the Itô–Tanaka formula and the equality $L_t^0(X)=0$ $\overline{\mathsf{P}}$-a.s., we prove that P is a solution of (1) up to $T_a$. Obviously, P is a positive solution.
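The existence parts of these proofs all rest on the same time-change of a Brownian motion: run B, set $A_t=\int_0^t\kappa^{-2}(B_s)\,ds$, invert it by $\tau_t=\inf\{s:A_s>t\}$, and put $Y_t=B_{\tau_t}$. A grid-based numerical sketch of this mechanism (illustrative only — `kappa` is any user-supplied positive function, and the discrete inverse below stands in for the measure-theoretic construction):

```python
import numpy as np

def time_changed_bm(kappa, y0, T=1.0, n=100_000, seed=0):
    """Return Y_t = B_{tau_t} on a uniform grid of [0, A_T], where
    A_t = int_0^t kappa(B_s)^{-2} ds and tau is the right inverse of A."""
    rng = np.random.default_rng(seed)
    dt = T / n
    steps = rng.normal(0.0, np.sqrt(dt), n)
    B = y0 + np.concatenate([[0.0], np.cumsum(steps)])
    A = np.concatenate([[0.0], np.cumsum(dt / kappa(B[:-1]) ** 2)])
    t_grid = np.linspace(0.0, A[-1], n + 1)
    tau = np.searchsorted(A, t_grid)          # discrete inverse of A
    return B[np.minimum(tau, n)]

# Sanity check: kappa = 1 gives A_t = t, so Y is the Brownian path itself.
Y = time_changed_bm(lambda y: np.ones_like(y), y0=0.0)
```

The time-changed path Y then has quadratic variation $\langle Y\rangle_t=\int_0^t\kappa^2(Y_s)\,ds$, matching the displays in the proofs.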
2.5 One-Sided Classification: The Proofs
49
Uniqueness. Let P be an arbitrary solution defined up to $T_a$. Set $\widetilde{\mathsf{P}}=\mathsf{P}\circ\Phi_{T_a}^{-1}$, where Φ is defined by (B.1). Fix $u>0$. Using the same method as in the proof of the uniqueness part of Theorem 2.11, we construct a measure Q on $C(\mathbb{R}_+)$ such that $\mathsf{Q}|_{\mathcal{F}_u}=\widetilde{\mathsf{P}}|_{\mathcal{F}_u}$, $\mathsf{Q}\{X_0=x_0\}=1$,
$$\int_0^{t\wedge T_a}\bigl(|b(X_s)|+\sigma^2(X_s)\bigr)\,ds<\infty\quad\text{Q-a.s.}$$
for any $t\ge 0$, and the processes
$$M_t=X_t-x_0-\int_0^{t\wedge T_a}b(X_s)\,ds,\qquad M_t^2-\int_0^{t\wedge T_a}\sigma^2(X_s)\,ds,\qquad t\ge 0,$$
are continuous $(\mathcal{F}_t,\mathsf{Q})$-local martingales. Moreover, $\mathsf{Q}\{T_a<\infty\}=1$.
Let us choose $\delta\in(0,a)$ and set $\Delta=s(\delta)$, $Y=s(X)\vee\Delta$. By the Itô–Tanaka formula applied to the function $x\mapsto s(x)\vee\Delta$, we have
$$Y_t=Y_0+\int_0^t I(X_s>\delta)\,\rho(X_s)\,dM_s+\frac12\,\rho(\delta)\,L_t^{\delta}(X),\qquad t\ge 0.\tag{2.27}$$
Applying the Itô–Tanaka formula to the function $y\mapsto y\vee\Delta$, we get
$$Y_t=Y_t\vee\Delta=Y_0+\int_0^t I(Y_s>\Delta)\,I(X_s>\delta)\,\rho(X_s)\,dM_s+\frac12\,\rho(\delta)\int_0^t I(Y_s>\Delta)\,dL_s^{\delta}(X)+\frac12\,L_t^{\Delta}(Y)$$
$$=Y_0+\int_0^t I(Y_s>\Delta)\,\rho(s^{-1}(Y_s))\,dM_s+\frac12\,L_t^{\Delta}(Y)\tag{2.28}$$
$$=Y_0+N_t+\frac12\,L_t^{\Delta}(Y).$$
(In the third equality we applied Proposition A.5.) Let us consider
$$D_t=\begin{cases}\int_0^t I(Y_s>\Delta)\,ds&\text{if }t<T_\alpha(Y),\\ \infty&\text{if }t\ge T_\alpha(Y),\end{cases}\qquad \varphi_t=\inf\{s\ge 0:D_s>t\},$$
$$U_t=Y_{\varphi_t}=U_0+N_{\varphi_t}+\frac12\,L_{\varphi_t}^{\Delta}(Y),\qquad t\ge 0,$$
where $\alpha=s(a)$. It follows from Proposition A.15 that N is τ-continuous. Proposition A.16 shows that the process $K_t=N_{\varphi_t}$ is a $(\mathcal{F}^+_{\varphi_t},\mathsf{Q})$-local martingale. On the set $\{t<D_{T_\alpha(Y)-}\}$ we have
$$\langle K\rangle_t=\langle N\rangle_{\varphi_t}=\int_0^{\varphi_t}\kappa^2(Y_s)\,I(Y_s>\Delta)\,ds=\int_0^{\varphi_t}\kappa^2(Y_s)\,dD_s=\int_0^t\kappa^2(U_s)\,ds,\qquad t\ge 0.$$
The processes K and $\langle K\rangle$ are continuous and are constant after $D_{T_\alpha(Y)-}$. Moreover, $D_{T_\alpha(Y)-}\le T_\alpha(Y)<\infty$ Q-a.s. Similarly to (2.22), we verify that $D_{T_\alpha(Y)-}=T_\alpha(U)$ Q-a.s. As a result,
$$\langle K\rangle_t=\int_0^{t\wedge T_\alpha(U)}\kappa^2(U_s)\,ds,\qquad t\ge 0.$$
Obviously, $U=U\vee\Delta$. Applying the Itô–Tanaka formula to the function $x\mapsto x\vee\Delta$, we get
$$U_t=U_0+\int_0^t I(U_s>\Delta)\,dK_s+\frac12\int_0^t I(U_s>\Delta)\,dL_{\varphi_s}^{\Delta}(Y)+\frac12\,L_t^{\Delta}(U),\qquad t\ge 0.$$
By Propositions A.5 and A.18,
$$\int_0^t I(U_s>\Delta)\,dL_{\varphi_s}^{\Delta}(Y)=\int_{\varphi_0}^{\varphi_t}I(Y_s>\Delta)\,dL_s^{\Delta}(Y)=0,\qquad t\ge 0.$$
It follows from the uniqueness of the semimartingale decomposition of U that
$$\int_0^t I(U_s>\Delta)\,dK_s=K_t,\qquad t\ge 0.$$
As a result,
$$U_t=U_0+K_t+\frac12\,L_t^{\Delta}(U),\qquad t\ge 0.\tag{2.29}$$
Let us consider
$$A_t=\begin{cases}\int_0^t\kappa^2(U_s)\,ds&\text{if }t<T_\alpha(U),\\ \infty&\text{if }t\ge T_\alpha(U),\end{cases}\qquad \tau_t=\inf\{s\ge 0:A_s>t\},$$
$$V_t=U_{\tau_t}=V_0+K_{\tau_t}+\frac12\,L_{\tau_t}^{\Delta}(U),\qquad t\ge 0.$$
Arguing as above, we deduce that $A_{T_\alpha(U)-}=T_\alpha(V)$ Q-a.s. The process $J_t=K_{\tau_t}$ is a $(\mathcal{G}_{\tau_t},\mathsf{Q})$-local martingale, where $\mathcal{G}_t=\mathcal{F}^+_{\varphi_t}$, and $\langle J\rangle_t=t\wedge T_\alpha(V)$, $t\ge 0$. The following equality is obtained similarly as (2.29):
$$V_t-V_0=J_t+\frac12\,L_t^{\Delta}(V),\qquad t\ge 0.$$
There exists a Brownian motion W (defined, possibly, on an enlarged probability space) such that J coincides with W on $[\![0,T_\alpha(V)]\!]$. Note that $V\ge\Delta$ and V is stopped at the first time it reaches α. Propositions A.32 and A.33 taken together show that the process V is a Brownian motion started at $V_0=s(x_0)\vee\Delta$, reflected at ∆, and stopped at the first time it reaches α. Using the same arguments as in the proof of uniqueness in Theorem 2.11, we conclude that the measure $R^\delta=\operatorname{Law}(U_t;\,t\ge 0\mid\mathsf{Q})$ is determined uniquely (i.e., it does not depend on the choice of a solution P). The superscript δ here indicates that U depends on δ. We can write
$$\int_0^{t\wedge T_a}I(X_s=0)\,ds=\int_0^{t\wedge T_a}\frac{I(X_s=0)}{\sigma^2(X_s)}\,d\langle X\rangle_s=\int_{\mathbb{R}}\frac{I(x=0)}{\sigma^2(x)}\,L_{t\wedge T_a}^x(X)\,dx=0\quad\text{Q-a.s.}\tag{2.30}$$
Combining this equality with the property $\mathsf{Q}\{\forall t\ge 0,\ X_t\ge 0\}=1$, we conclude that the measures $R^\delta$ converge weakly to the measure $R=\operatorname{Law}(s(X_t);\,t\ge 0\mid\mathsf{Q})$ as δ↓0. As a result, the measure Q is determined uniquely. The proof of uniqueness is now completed as in Theorem 2.11.
The inequality $\mathsf{E}_{\mathsf{P}}T_a<\infty$ follows from (2.23). The property $\mathsf{P}\{\exists t\le T_a:X_t=0\}>0$ follows from the construction of the solution. □

Proof of Theorem 2.13. (i) Suppose that there exists a solution (P, S) such that $\mathsf{P}\{\sup_{t\in[T_0,S]}X_t\ge c\}>0$ for some $c>0$. We can assume that $c\le a$, S is bounded, and $S\le T_c^0$, where $T_c^0:=\inf\{t\ge T_0:X_t=c\}$. Otherwise, we can choose a smaller c and consider $S\wedge T_c^0\wedge m$ instead of S, where m is a sufficiently large number. Set $\widetilde{\mathsf{P}}=\mathsf{P}\circ\Phi_S^{-1}$ (Φ is defined by (B.1)) and
$$X'_t=\int_0^t I(s\ge T_0)\,dX_s,\qquad t\ge 0.$$
Take $\delta\in(0,c)$ and set $\Delta=s(\delta)$, $Y=s(\delta\vee X')$. Computations similar to (2.27) and (2.28) show that
$$Y_t=\Delta+N_t+\frac12\,L_t^{\Delta}(Y),\qquad t\ge 0,$$
where $N\in\mathcal{M}^c_{loc}(\mathcal{F}_t,\widetilde{\mathsf{P}})$ and
$$\langle N\rangle_t=\int_0^t I(T_0\le s\le S)\,I(Y_s>\Delta)\,\kappa^2(Y_s)\,ds,\qquad t\ge 0$$
(κ is defined in (2.17)).
Consider
$$A_t=\begin{cases}\langle N\rangle_t&\text{if }t<S,\\ \infty&\text{if }t\ge S,\end{cases}\qquad \tau_t=\inf\{s\ge 0:A_s>t\},$$
$$U_t=Y_{\tau_t}=\Delta+N_{\tau_t}+\frac12\,L_{\tau_t}^{\Delta}(Y),\qquad t\ge 0.$$
It follows from Proposition A.15 that N is τ-continuous. Moreover, τ is bounded since $\tau\le S$. By Proposition A.16, the process $V_t=N_{\tau_t}$ is a $(\mathcal{F}^+_{\tau_t},\widetilde{\mathsf{P}})$-local martingale with $\langle V\rangle_t=t\wedge\eta$, where $\eta=A_{S-}$. The following equality is proved similarly as (2.29):
$$U_t=\Delta+V_t+\frac12\,L_t^{\Delta}(U),\qquad t\ge 0.$$
Propositions A.32 and A.33 taken together show that $U-\Delta$ is the modulus of a Brownian motion started at zero and stopped at time η. (Note that η is a $(\mathcal{F}^+_{\tau_t})$-stopping time.) We have
$$\widetilde{\mathsf{P}}\{\eta=T_\gamma(U)\}=\mathsf{P}\Bigl\{\sup_{t\in[T_0,S]}X_t\ge c\Bigr\}>0,\tag{2.31}$$
where $\gamma=s(c)$. Consider the function
$$g(y)=\frac{1+|b(s^{-1}(y))|}{\kappa^2(y)}\,I(0<y<\gamma).$$
The conditions of the theorem guarantee that, for any $\varepsilon>0$,
$$\int_0^{\varepsilon}g(y)\,dy=\int_0^{s^{-1}(\varepsilon)}\frac{1+|b(x)|}{\rho(x)\sigma^2(x)}\,dx=\infty.$$
Combining this property with Propositions A.6 (ii) and A.8, we get
$$\int_0^{T_\gamma(\Delta+|W|)}g(\Delta+|W_s|)\,ds=\int_0^{T_{\gamma-\Delta}(|W|)}g(\Delta+|W_s|)\,ds=\int_0^{\gamma-\Delta}L_{T_{\gamma-\Delta}(|W|)}^y(|W|)\,g(\Delta+y)\,dy\ge\int_0^{\gamma-\Delta}L_{T_{\gamma-\Delta}(|W|)}^y(W)\,g(\Delta+y)\,dy\xrightarrow[\Delta\downarrow 0]{}\infty\quad\text{P-a.s.},$$
where W is a Brownian motion started at zero. Consequently, for any $\lambda>0$, there exists $\Delta\in(0,\gamma)$ such that
$$\widetilde{\mathsf{P}}\Bigl\{\int_0^{T_\gamma(U)}g(U_s)\,ds>\lambda\Bigr\}>1-\frac12\,\widetilde{\mathsf{P}}\{\eta=T_\gamma(U)\}.\tag{2.32}$$
On the other hand, for any $\Delta\in(0,\gamma)$, we have
$$\int_0^{\eta}g(U_s)\,ds=\int_0^{A_{S-}}\frac{1+|b(s^{-1}(U_s))|}{\kappa^2(U_s)}\,ds=\int_{\tau_0}^{S-}\frac{1+|b(s^{-1}(Y_s))|}{\kappa^2(Y_s)}\,dA_s=\int_0^S I(s\ge T_0)\,I(X_s>\delta)\,(1+|b(X_s)|)\,ds\le\int_0^S(1+|b(X_s)|)\,ds<\infty\quad\text{$\widetilde{\mathsf{P}}$-a.s.}$$
We arrive at a contradiction with (2.31) and (2.32) since λ can be chosen arbitrarily large.

(ii) Existence. We define the process Y by (2.20), where B is a $(\mathcal{G}_t)$-Brownian motion started at $s(x_0)$ on a filtered probability space $(\Omega,\mathcal{G},(\mathcal{G}_t),\mathsf{Q})$. Inequality (2.23) remains valid (take Lemma 2.18 into account). Set $Z=s^{-1}(Y)$. The Itô–Tanaka formula yields that, for any $n\in\mathbb{N}$, the process
$$N_t^{(n)}=Z_{t\wedge T_{1/n,a}(Z)}-x_0-\int_0^{t\wedge T_{1/n,a}(Z)}b(Z_s)\,ds,\qquad t\ge 0,$$
is a continuous $(\mathcal{G}_{\tau_t},\mathsf{Q})$-local martingale. Consider the process
$$N_t=Z_t-x_0-\int_0^{t\wedge T_{0,a}(Z)}b(Z_s)\,ds,\qquad t\ge 0.$$
Note that $Z^{T_{0,a}(Z)}=Z$ Q-a.s. It follows from (2.23) that $N^{(n)}\xrightarrow[n\to\infty]{}N$ Q-u.p. By Lemma B.11, $N\in\mathcal{M}^c_{loc}(\mathcal{G}_{\tau_t},\mathsf{Q})$. In a similar way we prove that
$$\langle N\rangle_t=\int_0^{t\wedge T_{0,a}(Z)}\sigma^2(Z_s)\,ds,\qquad t\ge 0.$$
The proof of existence is now completed as in Theorem 2.11.

Uniqueness. Let P be the solution up to $T_{0,a}$ constructed above. Suppose that there exists another solution P′ up to $T_{0,a}$. If $x_0=0$, then the statement is trivial. Therefore, we can assume that $x_0>0$. It follows from Theorem 2.11 that, for each $n>1/x_0$, $\mathsf{P}'|_{\mathcal{F}_{T_{1/n,a}}}=\mathsf{P}|_{\mathcal{F}_{T_{1/n,a}}}$. Note that $T_{1/n,a}(\omega)\le T_{0,a}(\omega)$ for P, P′-a.e. ω, but not for all ω since ω(0) may not be equal to $x_0$. Therefore, we use here the convention from Lemma B.5. Applying Lemma B.6, we get $\mathsf{P}'=\mathsf{P}$.
The inequality $\mathsf{E}_{\mathsf{P}}T_{0,a}<\infty$ follows from (2.23). The property $\mathsf{P}\{T_{0,a}<\infty\text{ and }X_{T_{0,a}}=0\}>0$ follows from the construction of the solution. □
Proof of Theorem 2.14. (i) Suppose that there exists a solution (P, S) such that $\mathsf{P}\{S\ge T_0\}>0$. We can assume that $S\le T_0$ and S is bounded (otherwise, we can take $S\wedge T_0\wedge m$ instead of S, where m is a sufficiently large number). Set $\widetilde{\mathsf{P}}=\mathsf{P}\circ\Phi_S^{-1}$ (Φ is defined by (B.1)). We choose $\delta\in(0,x_0\wedge a)$ and set $\Delta=s(\delta)$. Let us consider the process $Y=0\vee s(X)\wedge\Delta$ and the stopping times $S_n=S\wedge T_{1/n}(Y)$, $n\in\mathbb{N}$. The Itô–Tanaka formula yields that, for any $n\in\mathbb{N}$,
$$Y_{t\wedge S_n}=\Delta+N_t^{(n)}-\frac12\,\rho(\delta)\,L_t^{\delta}(X),\qquad t\ge 0$$
(we applied the equality $L_t^0(X)=0$; it is proved similarly as Theorem 2.6), where $N^{(n)}\in\mathcal{M}^c_{loc}(\mathcal{F}_t,\widetilde{\mathsf{P}})$ and
$$\langle N^{(n)}\rangle_t=\int_0^{t\wedge S_n}I(Y_s<\Delta)\,\kappa^2(Y_s)\,ds,\qquad t\ge 0$$
(κ is given by (2.17)). Using the same method as in (2.27), (2.28), we show that
$$Y_{t\wedge S_n}=\Delta+N_t^{(n)}-\frac12\,L_t^{\Delta-}(Y),\qquad t\ge 0,$$
where $L_t^{\Delta-}(Y)=\lim_{\varepsilon\downarrow 0}L_t^{\Delta-\varepsilon}(Y)$ (see Proposition A.6 (i)). Consider the process
$$N_t=Y_t-\Delta+\frac12\,L_t^{\Delta-}(Y),\qquad t\ge 0.$$
Then $N^{(n)}=N^{S_n}$. Obviously, $S_n\xrightarrow[n\to\infty]{}S$ $\widetilde{\mathsf{P}}$-a.s. Thus, $N^{(n)}\xrightarrow[n\to\infty]{}N$ $\widetilde{\mathsf{P}}$-u.p. By Lemma B.11, $N\in\mathcal{M}^c_{loc}(\mathcal{F}_t,\widetilde{\mathsf{P}})$ and $\langle N^{(n)}\rangle\xrightarrow[n\to\infty]{}\langle N\rangle$ $\widetilde{\mathsf{P}}$-u.p. Hence,
$$\langle N\rangle_t=\int_0^{t\wedge S}I(Y_s<\Delta)\,\kappa^2(Y_s)\,ds,\qquad t\ge 0.$$
Consider
$$A_t=\begin{cases}\int_0^t I(Y_s<\Delta)\,\kappa^2(Y_s)\,ds&\text{if }t<S,\\ \infty&\text{if }t\ge S,\end{cases}\qquad \tau_t=\inf\{s\ge 0:A_s>t\},$$
$$U_t=Y_{\tau_t}=\Delta+N_{\tau_t}-\frac12\,L_{\tau_t}^{\Delta-}(Y),\qquad t\ge 0.$$
The following equality is proved similarly as (2.29):
$$U_t-\Delta=V_t-\frac12\,L_t^{\Delta-}(U),\qquad t\ge 0,$$
where $V_t=N_{\tau_t}$. The process V is a standard linear Brownian motion stopped at the time $\eta=A_{S-}$. By Propositions A.32 and A.33 taken together, the process $\Delta-U$ is the modulus of a Brownian motion started at zero and stopped at time η. Consider the function
$$g(y)=\frac{1+|b(s^{-1}(y))|}{\kappa^2(y)}\,I(0<y<\Delta).$$
The conditions of the theorem ensure that, for any $\varepsilon>0$,
$$\int_0^{\varepsilon}y\,g(y)\,dy=\int_0^{s^{-1}(\varepsilon)}\frac{1+|b(x)|}{\rho(x)\sigma^2(x)}\,s(x)\,dx=\infty.$$
Corollary A.24 yields
$$\int_0^{T_0(\Delta-|W|)}g(\Delta-|W_s|)\,ds=\infty\quad\text{a.s.},$$
where W is a Brownian motion started at zero. Therefore, $\int_0^{\eta}g(U_s)\,ds$ is $\widetilde{\mathsf{P}}$-a.s. infinite on the set
$$\{\eta=T_0(U)\}=\{\inf_{t\ge 0}U_t=0\}=\{\inf_{t\le S}X_t=0\},$$
and this set has strictly positive $\widetilde{\mathsf{P}}$-probability. On the other hand,
$$\int_0^{\eta}g(U_s)\,ds=\int_0^{A_{S-}}\frac{1+|b(s^{-1}(U_s))|}{\kappa^2(U_s)}\,ds=\int_{\tau_0}^{S-}\frac{1+|b(s^{-1}(Y_s))|}{\kappa^2(Y_s)}\,dA_s=\int_0^S I(s\le S)\,I(X_s<\delta)\,(1+|b(X_s)|)\,ds\le\int_0^S(1+|b(X_s)|)\,ds<\infty\quad\text{$\widetilde{\mathsf{P}}$-a.s.}$$
This leads to a contradiction.

(ii) The proof is the same as in Theorem 2.13 (i).

(iii) Existence. We define the process Y by (2.20). Set $Z=s^{-1}(Y)$, $\widetilde{\mathsf{P}}=\operatorname{Law}(Z_t;\,t\ge 0)$, $\mathsf{P}=\widetilde{\mathsf{P}}|_{\mathcal{F}_{\overline{T}_{0,a}-}}$. The arguments used in the proof of Theorem 2.11 show that, for any $n>2/a$, the measure $\widetilde{\mathsf{P}}|_{\mathcal{F}_{T_{1/n,a-1/n}}}$ is a solution up to $T_{1/n,a-1/n}$. The stopping times
$$S_n=\inf\{t\ge 0:|X_t|\le 1/n\}\wedge\inf\{t\ge 0:|X_t-a|\le 1/n\}$$
form a predicting sequence for $\overline{T}_{0,a}$. Obviously, for each $n>2/a\vee 1/x_0$, $\mathsf{P}|_{\mathcal{F}_{S_n}}$ is a solution up to $S_n$. Hence, P is a solution up to $\overline{T}_{0,a}-$.
Uniqueness. Let P be the solution up to $\overline{T}_{0,a}-$ constructed above. Suppose that there exists another solution P′ up to $\overline{T}_{0,a}-$. For each $n\in\mathbb{N}$, the restrictions $\mathsf{P}_n=\mathsf{P}|_{\mathcal{F}_{S_n}}$ and $\mathsf{P}'_n=\mathsf{P}'|_{\mathcal{F}_{S_n}}$ are solutions up to $S_n$ (we use here the convention from Lemma B.5). By Lemma B.5, each of the measures $\mathsf{P}_n$, $\mathsf{P}'_n$ admits a unique extension to $T_{1/n,a-1/n}$. Obviously, these extensions are solutions up to $T_{1/n,a-1/n}$, and, by Theorem 2.11, they coincide. Hence, $\mathsf{P}'_n=\mathsf{P}_n$. Now, choose $t\ge 0$ and $A\in\mathcal{F}_t$. We have
$$\mathsf{P}'(A\cap\{\overline{T}_{0,a}>t\})=\lim_{n\to\infty}\mathsf{P}'(A\cap\{S_n>t\})=\lim_{n\to\infty}\mathsf{P}'_n(A\cap\{S_n>t\})=\lim_{n\to\infty}\mathsf{P}_n(A\cap\{S_n>t\})=\lim_{n\to\infty}\mathsf{P}(A\cap\{S_n>t\})=\mathsf{P}(A\cap\{\overline{T}_{0,a}>t\}).$$
Applying Proposition A.36, we get $\mathsf{P}'=\mathsf{P}$. In order to prove the inequality $\mathsf{E}_{\mathsf{P}}\overline{T}_{0,a}<\infty$, note that, for any $n>1/x_0$,
$$\mathsf{E}_{\mathsf{P}}T_{1/n,a}\le 2\int_0^a\frac{s(x)}{\rho(x)\sigma^2(x)}\,dx<\infty$$
(see (2.23)). The property $\mathsf{P}\{\lim_{t\uparrow\overline{T}_{0,a}}X_t=0\}>0$ follows from the construction of the solution. □

Proof of Theorem 2.15. (i) The proof is the same as in Theorem 2.14 (i).
(ii) The proof is the same as in Theorem 2.13 (i).
(iii) Existence. We define the process Y by (2.20), where B is a $(\mathcal{G}_t)$-Brownian motion started at $s(x_0)$ on a filtered probability space $(\Omega,\mathcal{G},(\mathcal{G}_t),\mathsf{Q})$. It follows from Corollary A.24 that $A_{T_{0,\alpha}(B)-}$ is Q-a.s. infinite on the set $\{T_0(B)<T_\alpha(B)\}$. Hence, Y is Q-a.s. strictly positive. Moreover,
$$\lim_{t\to\infty}Y_t=0\quad\text{Q-a.s. on }\{T_0(B)<T_\alpha(B)\}.\tag{2.33}$$
Let us set $Z=s^{-1}(Y)$, $\widetilde{\mathsf{P}}=\operatorname{Law}(Z_t;\,t\ge 0)$, $\mathsf{P}=\widetilde{\mathsf{P}}|_{\mathcal{F}_{T_a}}$. The estimates used in (2.23) show that, for any $c\in(0,x_0)$,
$$\mathsf{E}_{\mathsf{Q}}\int_0^{T_{a,c}(Z)}\bigl(1+|b(Z_t)|+\sigma^2(Z_t)\bigr)\,dt<\infty.$$
Letting $c\to 0$, we get, for any $t\ge 0$,
$$\int_0^{t\wedge T_a(Z)}\bigl(1+|b(Z_s)|+\sigma^2(Z_s)\bigr)\,ds<\infty\quad\text{Q-a.s.}$$
The proof of existence is now completed in the same way as in Theorem 2.11.

Uniqueness. Let P be the solution up to $T_a$ constructed above. Suppose that there exists another solution P′ up to $T_a$. It follows from Theorem 2.11
that, for any $n>1/x_0$, $\mathsf{P}'|_{\mathcal{F}_{T_{1/n,a}}}=\mathsf{P}|_{\mathcal{F}_{T_{1/n,a}}}$. Since the solution $(\mathsf{P},T_a)$ is strictly positive, $\mathsf{P}\{T_{1/n,a}=T_a\}\xrightarrow[n\to\infty]{}1$. Hence, $\mathsf{P}'\{T_{1/n,a}=T_a\}\xrightarrow[n\to\infty]{}1$. Applying Lemma B.6, we deduce that $\mathsf{P}'=\mathsf{P}$.
The properties $\mathsf{P}\{T_a=\infty\}>0$ and $\lim_{t\to\infty}X_t=0$ P-a.s. on $\{T_a=\infty\}$ follow from (2.33). □

Proof of Theorem 2.16. (i) Suppose that there exists a solution (P, S) such that $\mathsf{P}\{T_0<\infty,\ T_0\le S\}>0$. Then there exists $d\in(0,a)$ such that
$$\mathsf{P}\Bigl\{T_0<\infty,\ T_0\le S,\text{ and }\sup_{t\in[T_d,T_0]}X_t<a\Bigr\}=\theta>0.\tag{2.34}$$
We choose $\delta\in(0,d)$ and consider
$$X'_t=d+\int_0^t I(s>T_d)\,dX_s,\qquad Y_t=s\bigl(X'_{t\wedge T_{\delta,a}(X')}\bigr),\qquad t\ge 0.$$
The Itô–Tanaka formula shows that Y is a $(\mathcal{F}_t,\widetilde{\mathsf{P}})$-local martingale, where $\widetilde{\mathsf{P}}=\mathsf{P}\circ\Phi_S^{-1}$ (Φ is defined by (B.1)). Moreover, Y is bounded, and therefore, there exists $Y_\infty=\lim_{t\to\infty}Y_t$. We have
$$s(d)=Y_0=\mathsf{E}_{\widetilde{\mathsf{P}}}Y_\infty=\mathsf{E}_{\widetilde{\mathsf{P}}}\bigl[Y_\infty I\bigl(T_{s(\delta)}(Y)=\infty\bigr)\bigr]+\mathsf{E}_{\widetilde{\mathsf{P}}}\bigl[Y_\infty I\bigl(T_{s(\delta)}(Y)<\infty\bigr)\bigr]\le\theta\,s(\delta),$$
where θ is defined in (2.34). (In the above inequality we took into account that $Y\le 0$.) Since $s(\delta)\xrightarrow[\delta\downarrow 0]{}-\infty$, we arrive at a contradiction.

(ii) Existence. Let us consider
$$A_t=\begin{cases}\int_0^t\kappa^{-2}(B_s)\,ds&\text{if }t<T_0(B),\\ \infty&\text{if }t\ge T_0(B),\end{cases}\tag{2.35}$$
$$\tau_t=\inf\{s\ge 0:A_s>t\},\tag{2.36}$$
$$Y_t=B_{\tau_t},\qquad t\ge 0,\tag{2.37}$$
where B is a Brownian motion started at $s(x_0)$ on a filtered probability space $(\Omega,\mathcal{G},(\mathcal{G}_t),\mathsf{Q})$ and κ is defined in (2.17). Note that $s(x_0)\le s(a)=0$. Using the same arguments as in the proof of the existence part of Theorem 2.11, we show that $A_{T_0(B)-}=T_0(Y)$ Q-a.s. For $Z=s^{-1}(Y)$, we have
$$\mathsf{E}_{\mathsf{Q}}\int_0^{T_a(Z)}\bigl(1+|b(Z_t)|+\sigma^2(Z_t)\bigr)\,dt=\mathsf{E}_{\mathsf{Q}}\int_0^{T_0(Y)}\bigl(1+|b(s^{-1}(Y_t))|+\sigma^2(s^{-1}(Y_t))\bigr)\,dt$$
$$=\mathsf{E}_{\mathsf{Q}}\int_0^{T_0(B)}\frac{1+|b(s^{-1}(B_t))|+\sigma^2(s^{-1}(B_t))}{\kappa^2(B_t)}\,dt=\mathsf{E}_{\mathsf{Q}}\int_{-\infty}^0\frac{1+|b(s^{-1}(y))|+\sigma^2(s^{-1}(y))}{\kappa^2(y)}\,L_{T_0(B)}^y(B)\,dy\tag{2.38}$$
$$\le 2\int_{-\infty}^0\frac{1+|b(s^{-1}(y))|+\sigma^2(s^{-1}(y))}{\kappa^2(y)}\,|y|\,dy=2\int_0^a\frac{1+|b(x)|+\sigma^2(x)}{\rho(x)\sigma^2(x)}\,|s(x)|\,dx$$
(the inequality here follows from Proposition A.10). In view of Lemma 2.18, this expression is finite. The proof of existence is now completed as in Theorem 2.11.

Uniqueness. The uniqueness of a solution is proved in the same way as in Theorem 2.15 (iii). The property $\mathsf{E}_{\mathsf{P}}T_a<\infty$ follows from (2.38).

(iii) Existence. The same estimates as in (2.38) show that, for any $0<x<c\le a$,
$$\mathsf{E}_{\mathsf{P}_x}\int_0^{T_c}\bigl(1+|b(X_s)|+\sigma^2(X_s)\bigr)\,ds\le 2\int_0^c\frac{1+|b(u)|+\sigma^2(u)}{\rho(u)\sigma^2(u)}\,|s(u)|\,du,$$
where $\mathsf{P}_x$ is the solution with $X_0=x$ defined up to $T_a$. The finiteness of the integral
$$\int_0^a\frac{1+|b(u)|+\sigma^2(u)}{\rho(u)\sigma^2(u)}\,|s(u)|\,du$$
(see Lemma 2.18) ensures that there exists a sequence of strictly positive numbers $a=a_0>a_1>\dots$ such that $a_n\downarrow 0$ and
$$\sum_{n=1}^\infty\mathsf{E}_{\mathsf{P}_n}\int_0^{T_{a_{n-1}}}\bigl(1+|b(X_s)|+\sigma^2(X_s)\bigr)\,ds<\infty,\tag{2.39}$$
where $\mathsf{P}_n$ is the solution with $X_0=a_n$ defined up to $T_{a_{n-1}}$. We set
$$\mathsf{Q}_n=\operatorname{Law}\bigl(X_t^{T_{a_{n-1}}}-a_n;\,t\ge 0\mid\mathsf{P}_n\bigr).$$
Then $\mathsf{Q}_n$ are probability measures on $C_0(\mathbb{R}_+)$, where $C_0(\mathbb{R}_+)$ is the space of continuous functions $\mathbb{R}_+\to\mathbb{R}$ vanishing at zero. Let us consider
$$\Omega=C_0(\mathbb{R}_+)\times C_0(\mathbb{R}_+)\times\dots,\qquad \mathcal{G}=\mathcal{B}(C_0(\mathbb{R}_+))\times\mathcal{B}(C_0(\mathbb{R}_+))\times\dots,\qquad \mathsf{Q}=\mathsf{Q}_1\times\mathsf{Q}_2\times\dots$$
and let $Y^{(n)}$ denote the coordinate process on the n-th copy of $C_0(\mathbb{R}_+)$. We consider each $Y^{(n)}$ as a process on Ω. Set $\eta_n=T_{a_{n-1}}(a_n+Y^{(n)})$. It follows from (2.39) that
$$\mathsf{E}_{\mathsf{Q}}\sum_{n=1}^\infty\eta_n=\sum_{n=1}^\infty\mathsf{E}_{\mathsf{Q}_n}\eta_n=\sum_{n=1}^\infty\mathsf{E}_{\mathsf{P}_n}T_{a_{n-1}}<\infty,$$
and hence, $\sum_{n=1}^\infty\eta_n<\infty$ Q-a.s. Now we consider
$$\tau_n=\sum_{k=n+1}^\infty\eta_k,\qquad n=0,1,\dots.$$
Let us define the process $(Z_t;\,t\ge 0)$ by
$$Z_t=\begin{cases}0&\text{if }t=0,\\ a_n+Y_{t-\tau_n}^{(n)}&\text{if }\tau_n\le t<\tau_{n-1},\\ a&\text{if }t\ge\tau_0.\end{cases}$$
Obviously, Z is Q-a.s. continuous on $(0,\infty)$. Furthermore, on each interval $]\!]0,\tau_n[\![$ we have $Z\le a_n$ Q-a.s. Thus, Z is Q-a.s. continuous on $[0,\infty)$.
Set $\widetilde{\mathsf{P}}=\operatorname{Law}(Z_t;\,t\ge 0\mid\mathsf{Q})$, $\mathsf{P}=\widetilde{\mathsf{P}}|_{\mathcal{F}_{T_a}}$. Let us prove that $(\mathsf{P},T_a)$ is a solution with $X_0=0$. Conditions (a) and (b) of Definition 1.31 are obviously satisfied. Condition (c) follows from the equalities
$$\mathsf{E}_{\mathsf{P}}\int_0^{T_a}\bigl(1+|b(X_s)|+\sigma^2(X_s)\bigr)\,ds=\mathsf{E}_{\mathsf{Q}}\int_0^{\tau_0}\bigl(1+|b(Z_s)|+\sigma^2(Z_s)\bigr)\,ds=\sum_{n=1}^\infty\mathsf{E}_{\mathsf{Q}}\int_{\tau_n}^{\tau_{n-1}}\bigl(1+|b(Z_s)|+\sigma^2(Z_s)\bigr)\,ds=\sum_{n=1}^\infty\mathsf{E}_{\mathsf{P}_n}\int_0^{T_{a_{n-1}}}\bigl(1+|b(X_s)|+\sigma^2(X_s)\bigr)\,ds$$
and inequality (2.39). Let us verify conditions (d) and (e). For $n\in\mathbb{N}$, set $U^{(n)}=Z^{\tau_n}$ and define the processes $V^{(n)}$ recursively by
$$V^{(1)}=G(Y^{(1)},0,\eta_1),\quad V^{(2)}=G(Y^{(2)},V^{(1)},\eta_2),\quad V^{(3)}=G(Y^{(3)},V^{(2)},\eta_3),\ \dots,$$
where G is the gluing function (see Definition B.8). Using Lemma B.9, one can verify that, for each $n\in\mathbb{N}$, the process
$$N_t^{(n)}=V_t^{(n)}-\int_0^{t\wedge(\eta_1+\dots+\eta_n)}b\bigl(a_n+V_s^{(n)}\bigr)\,ds,\qquad t\ge 0,$$
is a $(\mathcal{F}_t^{V^{(n)}},\mathsf{Q})$-local martingale. Observe that $\tau_n$ is a $(\mathcal{F}_t^{U^{(n)}})$-stopping time since $\tau_n=T_{a_n}(U^{(n)})$. Moreover, $G(U^{(n)},V^{(n)},\tau_n)=Z$. By Lemma B.9, $G(0,N^{(n)},\tau_n)\in\mathcal{M}^c_{loc}(\mathcal{F}_t^Z,\mathsf{Q})$. Obviously, $G(0,N^{(n)},\tau_n)=K^{(n)}$, where
$$K_t^{(n)}=\begin{cases}0&\text{if }t<\tau_n,\\ Z_t-a_n-\int_{\tau_n}^{t\wedge\tau_0}b(Z_s)\,ds&\text{if }t\ge\tau_n.\end{cases}$$
It follows from (2.39) and the continuity of Z that $K^{(n)}\xrightarrow[n\to\infty]{}K$ u.p., where
$$K_t=Z_t-\int_0^{t\wedge\tau_0}b(Z_s)\,ds,\qquad t\ge 0.$$
Due to Lemma B.11, $K\in\mathcal{M}^c_{loc}(\mathcal{F}_t^Z,\mathsf{Q})$. This means that the process
$$M_t=X_t-\int_0^{t\wedge T_a}b(X_s)\,ds,\qquad t\ge 0,$$
is a $(\mathcal{F}_t,\mathsf{P})$-local martingale. In a similar way we check that
$$\langle M\rangle_t=\int_0^{t\wedge T_a}\sigma^2(X_s)\,ds,\qquad t\ge 0.$$
As a result, P is a solution up to $T_a$.

Uniqueness. Let P be a positive solution defined up to $T_a$. Set $\widetilde{\mathsf{P}}=\mathsf{P}\circ\Phi_{T_a}^{-1}$ (Φ is defined by (B.1)) and, for each $x\in(0,a]$, consider the measures $\mathsf{Q}_x=\widetilde{\mathsf{P}}(\,\cdot\mid T_x<\infty)$, $\mathsf{R}_x=\mathsf{Q}_x\circ\Theta_{T_x}^{-1}$, where Θ is defined by (A.4). The same arguments as those used in the proof of Lemma B.7 show that $\mathsf{R}_x|_{\mathcal{F}_{T_a}}$ is a solution of (1) with $X_0=x$ defined up to $T_a$. Therefore, $\mathsf{R}_x|_{\mathcal{F}_{T_a}}=\mathsf{P}_x$, where $\mathsf{P}_x$ is the unique solution with $X_0=x$ defined up to $T_a$. On the other hand, $X^{T_a}=X$ $\mathsf{R}_x$-a.s. Hence,
$$\mathsf{R}_x=\mathsf{R}_x\circ\Phi_{T_a}^{-1}=(\mathsf{R}_x|_{\mathcal{F}_{T_a}})\circ\Phi_{T_a}^{-1}=\mathsf{P}_x\circ\Phi_{T_a}^{-1},$$
which proves that the measures $\mathsf{R}_x$ are determined uniquely. Similarly to (2.30), we prove that
$$\int_0^{T_a}I(X_s=0)\,ds=0\quad\text{P-a.s.}\tag{2.40}$$
Since P is a positive solution, this equality shows that $T_x\xrightarrow[x\downarrow 0]{}0$ $\widetilde{\mathsf{P}}$-a.s. Hence, the measures $\mathsf{R}_x$ converge weakly to $\widetilde{\mathsf{P}}$ as $x\downarrow 0$. Now it follows from (2.40) that $\widetilde{\mathsf{P}}$ (and hence, P) is unique. □

Proof of Theorem 2.17. (i) The proof is the same as that of Theorem 2.16 (i).
(ii) Suppose that there exists a solution (P, S) such that $\mathsf{P}\{\sup_{t\in[0,S]}X_t>c\}>0$ for some $c>0$. We can assume that S is bounded. Set $\widetilde{\mathsf{P}}=\mathsf{P}\circ\Phi_S^{-1}$ (Φ is defined by (B.1)). Choose $\delta\in(0,c)$ and consider the process
$$X'_t=\delta+\int_0^t I(s\ge T_\delta)\,dX_s,\qquad t\ge 0.$$
Then the arguments used in the proof of Theorem 2.16 (i) show that $\widetilde{\mathsf{P}}\{\forall t\ge 0,\ X'_t>0\}=1$. Set $Y=s(X')$, $\Delta=s(\delta)$. According to the Itô–Tanaka formula and the occupation times formula, $Y\in\mathcal{M}^c_{loc}(\mathcal{F}_t,\widetilde{\mathsf{P}})$ and
$$\langle Y\rangle_t=\int_0^t I(T_\delta\le s\le S)\,\kappa^2(Y_s)\,ds,\qquad t\ge 0$$
(κ is defined in (2.17)). Consider
$$A_t=\begin{cases}\langle Y\rangle_t&\text{if }t<S,\\ \infty&\text{if }t\ge S,\end{cases}\qquad \tau_t=\inf\{s\ge 0:A_s>t\},\qquad V_t=Y_{\tau_t},\quad t\ge 0.$$
The process $V-\Delta$ is a Brownian motion started at zero and stopped at the time $\eta=A_{S-}$. We have $\widetilde{\mathsf{P}}\{\eta>T_\gamma(V)\}=\mathsf{P}\{\sup_{t\in[0,S]}X_t>c\}>0$ (here $\gamma=s(c)$). Let us consider the function
$$g(y)=\frac{1+|b(s^{-1}(y))|}{\kappa^2(y)}\,I(y<\gamma).$$
The conditions of the theorem guarantee that
$$\forall\lambda<\gamma,\qquad \int_{-\infty}^{\lambda}|y-\gamma|\,g(y)\,dy=\infty.\tag{2.41}$$
Let $(W_t;\,t\ge 0)$ be a two-dimensional Brownian motion started at zero. The set
$$D=\Bigl\{\omega:\forall\lambda<\gamma,\ \int_{-\infty}^{\lambda}|W_{\gamma-y}(\omega)|^2\,g(y)\,dy=\infty\Bigr\}$$
belongs to the tail σ-field $\mathcal{X}=\bigcap_{t>0}\sigma(W_s;\,s\ge t)$. In view of Blumenthal's zero–one law and the time-inversion property of W, $\mathsf{P}(D)$ equals 0 or 1. Combining (2.41) with Proposition A.34, we conclude that $\mathsf{P}(D)=1$. Using Proposition A.10 (i), we get
$$\int_0^{T_\gamma(\Delta+B)}g(\Delta+B_s)\,ds\ge\int_{\Delta}^{\gamma}L_{T_{\gamma-\Delta}(B)}^{y-\Delta}(B)\,g(y)\,dy\stackrel{\text{law}}{=}\int_{\Delta}^{\gamma}|W_{\gamma-y}|^2\,g(y)\,dy\xrightarrow[\Delta\to-\infty]{\text{P-a.s.}}\int_{-\infty}^{\gamma}|W_{\gamma-y}|^2\,g(y)\,dy=\infty,$$
where B is a Brownian motion started at zero. The proof is now completed in the same way as the proof of Theorem 2.13 (i).

(iii) Existence. Define Y by (2.37), where B is a $(\mathcal{G}_t)$-Brownian motion started at $s(x_0)$ on a filtered probability space $(\Omega,\mathcal{G},(\mathcal{G}_t),\mathsf{Q})$ and κ is defined in (2.17). Set $Z=s^{-1}(Y)$. Arguing in the same way as in the proof of the existence part of Theorem 2.11, we check that $A_{T_0(B)-}=T_0(Y)$ Q-a.s. The estimates used in (2.38) show that, for any $c\in(0,x_0)$,
$$\int_0^{T_{a,c}(Z)}\bigl(1+|b(Z_t)|+\sigma^2(Z_t)\bigr)\,dt<\infty\quad\text{Q-a.s.}\tag{2.42}$$
Furthermore, $T_a(Z)=T_0(Y)<\infty$ Q-a.s. Letting $c\to 0$ in (2.42), we get
$$\int_0^{T_a(Z)}\bigl(1+|b(Z_t)|+\sigma^2(Z_t)\bigr)\,dt<\infty\quad\text{Q-a.s.}\tag{2.43}$$
The proof of existence is now completed in the same way as in Theorem 2.11.

Uniqueness. The uniqueness of a solution is proved in the same way as in Theorem 2.15 (iii). The property $T_a<\infty$ P-a.s. follows from (2.43). □

Proof of Theorem 2.7. As d is a regular point, there exist constants $d_1<d<d_2$ such that
$$\frac{1+|b|}{\sigma^2}\in L^1_{loc}([d_1,d_2]).$$
We will employ the notation used in the proof of Theorem 2.11. Without loss of generality, we can assume that $d_1=0$, $d_2=a$.
Suppose first that $x_0\ne d$. Set $\widetilde{\mathsf{P}}=\operatorname{Law}\bigl(X_t^{T_{0,a}};\,t\ge 0\mid\mathsf{P}\bigr)$. Then $\widetilde{\mathsf{P}}|_{\mathcal{F}_{T_{0,a}}}$ is a solution up to $T_{0,a}$. By Theorem 2.11, this measure is unique. Consequently,
$\widetilde{\mathsf{P}}=\operatorname{Law}(Z_t;\,t\ge 0)$, where $Z=s^{-1}(Y)$ and Y is defined by (2.20). By the Itô–Tanaka formula and Proposition A.17,
$$(Y_t-s(x_0))^+=(B_{\tau_t}-s(x_0))^+=\int_0^{\tau_t}I(B_s>s(x_0))\,dB_s+\frac12\,L_{\tau_t}^{s(x_0)}(B)=\int_0^t I(B_{\tau_s}>s(x_0))\,dB_{\tau_s}+\frac12\,L_{\tau_t}^{s(x_0)}(B)=\int_0^t I(Y_s>s(x_0))\,dY_s+\frac12\,L_{\tau_t}^{s(x_0)}(B),\qquad t\ge 0.$$
On the other hand,
$$(Y_t-s(x_0))^+=\int_0^t I(Y_s>s(x_0))\,dY_s+\frac12\,L_t^{s(x_0)}(Y),\qquad t\ge 0,$$
and therefore, $L_t^{s(x_0)}(Y)=L_{\tau_t}^{s(x_0)}(B)$. Applying the Itô–Tanaka formula to the function $y\mapsto s^{-1}(y\vee s(x_0))$ and keeping (2.24), (2.25) in mind, we get
$$s^{-1}(Y_t\vee s(x_0))=x_0+\int_0^t\frac{1}{\rho(s^{-1}(Y_s))}\,I(Y_s>s(x_0))\,dY_s+\frac12\int_0^t\frac{2b(s^{-1}(Y_s))}{\kappa^2(Y_s)}\,I(Y_s>s(x_0))\,d\langle Y\rangle_s+\frac{1}{2\rho(x_0)}\,L_t^{s(x_0)}(Y)$$
$$=x_0+\int_0^t I(Z_s>x_0)\,dZ_s+\frac{1}{2\rho(x_0)}\,L_t^{s(x_0)}(Y),\qquad t\ge 0,\tag{2.44}$$
where $Z=s^{-1}(Y)$. Applying now the Itô–Tanaka formula to the function $x\mapsto x\vee x_0$, we get
$$s^{-1}(Y_t\vee s(x_0))=Z_t\vee x_0=x_0+\int_0^t I(Z_s>x_0)\,dZ_s+\frac12\,L_t^{x_0}(Z),\qquad t\ge 0.$$
Comparing this with (2.44), we deduce that
$$L_t^{x_0}(Z)=\frac{1}{\rho(x_0)}\,L_t^{s(x_0)}(Y)=\frac{1}{\rho(x_0)}\,L_{\tau_t}^{s(x_0)}(B),\qquad t\ge 0.$$
For any $t>0$, $\tau_t>0$ a.s. It follows from Proposition A.8 that, for any $t>0$, $L_t^{s(x_0)}(B)>0$ a.s. Hence, for any $t>0$, $L_t^{x_0}(Z)>0$ a.s., which means that $L_t^{x_0}(X^{T_{0,a}})>0$ P-a.s. Using the obvious equality $L_t^{x_0}(X^{T_{0,a}})=(L_t^{x_0}(X))^{T_{0,a}}$, we get: for any $t>0$, $L_t^{x_0}(X)>0$ P-a.s. Taking into account (2.5), we obtain the desired statement.
Suppose now that $x_0=d$. Set $\mathsf{Q}=\mathsf{P}(\,\cdot\mid T_d<\infty)$, $\mathsf{R}=\mathsf{Q}\circ\Theta_{T_d}^{-1}$, where Θ is defined by (B.2). By Lemma B.7, R is a solution of (1) with $X_0=d$. The
statement proved above (for the case $x_0\ne d$), together with Corollary A.7, yields that, for any $t>0$,
$$\lim_{\varepsilon\downarrow 0}\frac{1}{\varepsilon}\int_0^t I(d\le X_s<d+\varepsilon)\,\sigma^2(X_s)\,ds>0\quad\text{R-a.s.}$$
Hence,
$$\lim_{\varepsilon\downarrow 0}\frac{1}{\varepsilon}\int_{T_d}^{t+T_d}I(d\le X_s<d+\varepsilon)\,\sigma^2(X_s)\,ds>0\quad\text{Q-a.s.}$$
This means that
$$\lim_{\varepsilon\downarrow 0}\frac{1}{\varepsilon}\int_{T_d}^{t+T_d}I(d\le X_s<d+\varepsilon)\,\sigma^2(X_s)\,ds>0\quad\text{P-a.s. on }\{T_d<\infty\}.$$
Consequently,
$$\lim_{\varepsilon\downarrow 0}\frac{1}{\varepsilon}\int_0^{t+T_d}I(d\le X_s<d+\varepsilon)\,\sigma^2(X_s)\,ds>0\quad\text{P-a.s. on }\{T_d<\infty\}.$$
Applying once more Corollary A.7, we get the desired statement. □
3 Two-Sided Classification of Isolated Singular Points
In this chapter, we investigate the behaviour of a solution of (1) in the two-sided neighbourhood of an isolated singular point. Many properties related to the “two-sided” behaviour follow from the results of Section 2.3. However, there are some properties that involve both the right type and the left type of a point. The corresponding statements are formulated in Section 3.1. Section 3.2 contains an informal description of the behaviour of a solution for various types of isolated singular points. The statements formulated in Section 3.1 are proved in Section 3.3.

The results of Section 3.1 show that the isolated singular points of only 4 types can disturb uniqueness. These points are called here the branch points. Disturbing uniqueness, the branch points give rise to a variety of “bad” solutions. In particular, one can easily construct a non-Markov solution in the neighbourhood of a branch point. This is the topic of Section 3.4. However, it turns out that all the strong Markov solutions in the neighbourhood of a branch point admit a simple description. It is given in Section 3.5.

Throughout this chapter, we assume that σ(x) ≠ 0 for all x ∈ R.
3.1 Two-Sided Classification: The Results

Definition 3.1. An isolated singular point has type (i, j) if it has left type i and right type j. (The left type of an isolated singular point is defined similarly as the right type.)

Suppose that zero is an isolated singular point. Then there exist numbers a < 0 < c such that
$$\frac{1+|b|}{\sigma^2}\in L^1_{loc}\bigl([a,c]\setminus\{0\}\bigr).\tag{3.1}$$
If zero has right type 0, then
$$\int_0^c\frac{1+|b(x)|}{\sigma^2(x)}\,dx<\infty.$$
If the right type of zero is one of 1, . . . , 6, then, for any ε > 0,
$$\int_0^{\varepsilon}\frac{1+|b(x)|}{\sigma^2(x)}\,dx=\infty.$$

A.S. Cherny and H.-J. Engelbert: LNM 1858, pp. 65–79, 2005.
© Springer-Verlag Berlin Heidelberg 2005
(This can easily be seen from Figure 2.2.) If zero had type (0, 0), then we would have (1 + |b|)/σ² ∈ L¹loc(0), and hence, zero would not be a singular point. For the other 48 possibilities, (1 + |b|)/σ² ∉ L¹loc(0). As a result, an isolated singular point can have one of 48 possible types.

Theorem 3.2. Suppose that zero has type (i, j) with i = 0, 1, 4, 5, 6, j = 0, 1, 4, 5, 6 (we exclude the case i = j = 0). Then, for any solution (P, S), we have S ≤ T0 P-a.s.

Theorem 3.3. Suppose that zero has type (i, j) with i = 0, 1, j = 2, 3.
(i) If (P, S) is a solution, then X ≥ 0 on [[T0, S]] P-a.s.
(ii) If x0 ∈ [a, c], then there exists a unique solution P defined up to Ta,c.

Theorem 3.4. Suppose that zero has type (2, 2). Then, for any x0 ∈ (a, c), there exist different solutions defined up to Ta,c.

Theorem 3.5. Suppose that zero has type (2, 3).
(i) If (P, S) is a solution, then X > 0 on ]]T0+, S]] P-a.s., where T0+ = inf{t ≥ 0 : Xt > 0}.
(ii) If x0 ∈ (a, 0], then there exist different solutions defined up to Ta,c.
(iii) If x0 ∈ (0, c], then there exists a unique solution defined up to Ta,c, and it is strictly positive.

Theorem 3.6. Suppose that zero has type (3, 3).
(i) If x0 ∈ [a, 0), then there exists a unique solution defined up to Ta,c, and it is strictly negative.
(ii) If x0 ∈ (0, c], then there exists a unique solution defined up to Ta,c, and it is strictly positive.
(iii) If x0 = 0, then there exist different solutions defined up to Ta,c. They can be described as follows. If P is a solution defined up to Ta,c, then there exists λ ∈ [0, 1] such that P = λP− + (1 − λ)P+, where P− is the unique negative solution up to Ta,c and P+ is the unique positive solution up to Ta,c.
We do not formulate here the statements related to types (i, j) with i = 4, 5, 6, j = 2, 3 because, for these types, the behaviour of a solution in the neighbourhood of the corresponding point is clear from the one-sided classification, and there are no new effects brought by the two-sided combination of types.
3.2 Two-Sided Classification: Informal Description

If zero has type $(i,j)$ with $i=0,1,4,5,6$, $j=0,1,4,5,6$ (the case $i=j=0$ is excluded), then after the time a solution has reached zero (if this time is
Fig. 3.1. Behaviour of solutions for various types of the isolated singular points. The graphs show simulated paths of solutions with different starting points. The top graph illustrates Theorem 3.2; it corresponds to the case where zero has type (1,1), and the signs "A" indicate that a solution cannot be extended after it has reached zero. The centre graph illustrates Theorem 3.3; it corresponds to the case where zero has type (0,2). The bottom graph illustrates the situation where zero has type $(i,j)$ with $i=4,5,6$, $j=2,3$; it corresponds to the case where zero has type (5,3).
Fig. 3.2. Behaviour of solutions for various types of the branch points, i.e., points of type $(i,j)$ with $i=2,3$, $j=2,3$. The graphs show simulated paths of "branching" solutions. The top graph shows 4 different solutions started at zero for the case where zero has type (2,2). The centre graph and the bottom graph are constructed in a similar way for the cases where zero has types (2,3) and (3,3), respectively.
finite), it cannot go further either in the positive direction or in the negative direction. Therefore, a solution can be defined at most up to $T_0$.

If zero has type $(i,j)$ with $i=0,1$, $j=2,3$, then a solution started at a strictly negative point can reach zero. After this time, the solution cannot go in the negative direction, but it can go in the positive direction. Thus, there exists a unique solution "in the neighbourhood of zero", and it passes through zero in the positive direction. In other words, zero is then a "right shunt".

If zero has type $(i,j)$ with $i=4,5,6$, $j=2,3$, then the behaviour of a solution in the neighbourhood of zero is clear from the one-sided classification. Namely, a solution started at a positive point can reach zero if $j=2$, but cannot enter the strictly negative half-line; a solution started at a strictly negative point cannot reach zero.

If zero has type $(i,j)$ with $i=2,3$, $j=2,3$, then there exist (at least locally) both a positive solution and a negative solution started at zero. If zero has type $(2,2)$, then a solution started at any point in a sufficiently small neighbourhood of zero can reach zero with strictly positive probability. After the time it has reached zero, it may go in the positive direction or in the negative direction. Thus, for any starting point in the neighbourhood of zero, there exist different (local) solutions. If zero has type $(2,3)$, then, by the same reasoning, there exist different (local) solutions for all the starting points in the left-hand neighbourhood of zero. However, a solution started at a strictly positive point cannot reach zero, and therefore, the presence of this "bad" point does not disturb the uniqueness of a solution with $x_0>0$. If zero has type $(3,3)$, then solutions started outside zero never reach zero, and therefore, the presence of this "bad" point does not disturb the uniqueness of a solution with $x_0\neq0$.
Of course, this does not mean that, for $x_0\neq0$, there exists a unique solution, since there may exist points of type $(2,2)$, $(2,3)$, $(3,2)$, or $(3,3)$ other than zero.
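To make the notion concrete, here is a quick check (an illustration, not a computation carried out in this chapter) that zero is an isolated singular point for the geometric Brownian motion equation $dX_t=\mu X_t\,dt+\bar\sigma X_t\,dB_t$ with constants $\mu\in\mathbb R$, $\bar\sigma>0$:

```latex
% b(x) = \mu x, \ \sigma(x) = \bar\sigma x:
\frac{1+|b(x)|}{\sigma^2(x)}
  = \frac{1+|\mu x|}{\bar\sigma^2 x^2}
  \ge \frac{1}{\bar\sigma^2 x^2},
\qquad
\int_{-\varepsilon}^{\varepsilon} \frac{dx}{\bar\sigma^2 x^2} = \infty
\quad \text{for every } \varepsilon > 0 .
```

Thus $(1+|b|)/\sigma^2\notin L^1_{\mathrm{loc}}(0)$, while the quotient is locally integrable on $\mathbb R\setminus\{0\}$; hence zero is an isolated singular point, and (3.1) holds for suitable $a<0<c$.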
3.3 Two-Sided Classification: The Proofs

Proof of Theorem 3.3. (i) We will prove this statement for types $(i,j)$ with $i=0,1$, $j=1,\dots,6$. If $i=1$, then the statement follows from Theorem 2.13 (i). Now, suppose that $i=0$. Let $(P,S)$ be a solution. Set $\widetilde P=P\circ\Phi_S^{-1}$ ($\Phi$ is defined by (B.1)) and consider the functions
$$\rho_-(x)=\exp\Bigl(-\int_a^x\frac{2b(y)}{\sigma^2(y)}\,dy\Bigr),\qquad x\in[a,0],\tag{3.2}$$
$$s_-(x)=-\int_x^0\rho_-(y)\,dy,\qquad x\in[a,0],\tag{3.3}$$
$$f(x)=\begin{cases}k_1(x-a)+s_-(a)&\text{if }x<a,\\ s_-(x)&\text{if }a\le x\le0,\\ k_2x&\text{if }x>0,\end{cases}\tag{3.4}$$
where the constants $k_1$ and $k_2$ are chosen in such a way that $f$ is differentiable at the points $a$ and $0$ (such constants exist since zero has left type 0). By the Itô–Tanaka formula together with the occupation times formula, the process $Y=f(X)$ is an $(\mathcal F_t,\widetilde P)$-semimartingale with the decomposition
$$Y_t=Y_0+\int_0^{t\wedge S}\bigl[k_1I(Y_s<s_-(a))\,b(f^{-1}(Y_s))+k_2I(Y_s>0)\,b(f^{-1}(Y_s))\bigr]\,ds+N_t=Y_0+\int_0^{t\wedge S}\varphi(Y_s)\,ds+N_t,\qquad t\ge0,$$
where $N\in\mathcal M^c_{\mathrm{loc}}(\mathcal F_t,\widetilde P)$ and
$$\langle N\rangle_t=\int_0^{t\wedge S}\bigl(f'(f^{-1}(Y_s))\bigr)^2\sigma^2(f^{-1}(Y_s))\,ds=\int_0^{t\wedge S}\psi^2(Y_s)\,ds,\qquad t\ge0.$$
As the right type of zero is one of $1,\dots,6$, we have, for any $\varepsilon>0$,
$$\int_0^{\varepsilon}\frac{1+|\varphi(y)|}{\psi^2(y)}\,dy=\infty.$$
Combining this with the arguments used in the proof of Theorem 2.5, we deduce that $L_t^0(Y)=0$ $\widetilde P$-a.s. By the Itô–Tanaka formula,
$$Y_t^-=Y_0^-+\int_0^tI(Y_s\le0)\,dY_s=Y_0^-+\int_0^{t\wedge S}I(Y_s\le0)\varphi(Y_s)\,ds+\int_0^tI(Y_s\le0)\,dN_s,$$
where we use the notation $x^-=x\wedge0$. Choose $\Delta\in(f(a),0)$ and consider $T=\inf\{t\ge T_0(Y): Y_t=\Delta\}$. The function $\varphi$ is zero on $(s_-(a),0)$. Therefore, the process
$$Z_t=\int_0^tI(T_0(Y)\le s<T)\,dY_s^-,\qquad t\ge0$$
is an $(\mathcal F_t,\widetilde P)$-local martingale. As $Z$ is negative and $Z_0=0$, we deduce that $Z=0$ $\widetilde P$-a.s. Consequently, after the time $T_0(Y)$, the process $Y$ reaches no level $\Delta<0$. This leads to the desired statement.
(ii) Existence. If $x_0\in[0,c]$, then, by Theorems 2.12 and 2.16, there exists a solution up to $T_c$. Obviously, its restriction to $\mathcal F_{T_{a,c}}$ is a solution up to $T_{a,c}$.

Suppose now that $x_0\in[a,0)$. Then there exists a solution $Q$ defined up to $T_{a,0}$. Let $R_0$ be the positive solution with $X_0=0$ defined up to $T_c$. Set $\widetilde Q=Q\circ\Phi_{T_{a,0}}^{-1}$, $\widetilde R_0=R_0\circ\Phi_{T_c}^{-1}$. We will consider $\widetilde Q$ as a measure on $C(\mathbb R_+)$ and $\widetilde R_0$ as a measure on $C_0(\mathbb R_+)$. Let $\widetilde P$ be the image of $\widetilde Q\times\widetilde R_0$ under the map
$$C(\mathbb R_+)\times C_0(\mathbb R_+)\ni(\omega_1,\omega_2)\longmapsto G(\omega_1,\omega_2,T_0(\omega_1))\in C(\mathbb R_+),$$
where $G$ is the gluing function. Using Lemma B.9, one can verify that the measure $P=\widetilde P|\mathcal F_{T_{a,c}}$ is a solution up to $T_{a,c}$.

Uniqueness. If $x_0\in[0,c]$, then uniqueness follows from statement (i) of this theorem and the results of Section 2.3.

Suppose now that $x_0\in[a,0)$. Let $P$ be a solution defined up to $T_{a,c}$. Set $\widetilde P=P\circ\Phi_{T_{a,c}}^{-1}$. Let $P_0$ denote the (unique) solution with $X_0=0$ defined up to $T_{a,c}$. Set $\widetilde P_0=P_0\circ\Phi_{T_{a,c}}^{-1}$. We will consider
$$\Omega=C(\mathbb R_+)\times C(\mathbb R_+),\qquad\mathcal G=\mathcal F\times\mathcal F,\qquad\mathcal G_t=\mathcal F_t\times\mathcal F_t,\qquad Q=\widetilde P\times\widetilde P_0.$$
A generic point $\omega$ of $\Omega$ has the form $(\omega_1,\omega_2)$. Let us define the processes
$$Y_t(\omega)=\omega_1(t\wedge T_{a,0}(\omega_1)),\qquad t\ge0,$$
$$Z_t(\omega)=\begin{cases}\omega_1(t+T_0(\omega_1))&\text{if }T_0(\omega_1)<\infty,\\ \omega_2(t)&\text{if }T_0(\omega_1)=\infty.\end{cases}$$
Set $\mathcal H=\sigma(Y_t;\ t\ge0)$. Let $(Q_\omega)_{\omega\in\Omega}$ denote a version of the $Q$-conditional distribution of $(Z_t;\ t\ge0)$ with respect to $\mathcal H$. We will now prove that, for $Q$-a.e. $\omega$, $Q_\omega|\mathcal F_{T_{a,c}}$ is a solution of (1) with $X_0=0$ defined up to $T_{a,c}$.

Conditions (a), (b) of Definition 1.31 are obviously satisfied. Furthermore, for any $t\ge0$,
$$\int_0^{t\wedge T_{a,c}(Z)}\bigl(|b(Z_s)|+\sigma^2(Z_s)\bigr)\,ds<\infty\quad Q\text{-a.s.}$$
Hence, for $Q$-a.e. $\omega$, we have
$$\forall t\ge0,\quad\int_0^{t\wedge T_{a,c}}\bigl(|b(X_s)|+\sigma^2(X_s)\bigr)\,ds<\infty\quad Q_\omega\text{-a.s.}$$
Thus, condition (c) of Definition 1.31 is satisfied for $Q$-a.e. $\omega$. Let us now verify that, for $Q$-a.e. $\omega$, the measure $Q_\omega|\mathcal F_{T_{a,c}}$ satisfies condition (d) of Definition 1.31. Consider the process
$$N_t=Z_t-\int_0^{t\wedge T_{a,c}(Z)}b(Z_s)\,ds,\qquad t\ge0$$
and the stopping times $S_m(N)=\inf\{t\ge0:|N_t|\ge m\}$. For any $s\le t$, $A\in\mathcal F_s$, and $B\in\mathcal H$, we have
$$E_Q\bigl[\bigl(N_t^{S_m(N)}-N_s^{S_m(N)}\bigr)\,I(\{Z\in A\}\cap B)\bigr]=0.$$
(This follows from the construction of $Q$ and the optional stopping theorem.) Hence, for $Q$-a.e. $\omega$,
$$E_{Q_\omega}\bigl[\bigl(M_t^{S_m(M)}-M_s^{S_m(M)}\bigr)\,I(X\in A)\bigr]=0,$$
where
$$M_t=X_t-\int_0^{t\wedge T_{a,c}}b(X_s)\,ds,\qquad t\ge0$$
and $S_m(M)=\inf\{t\ge0:|M_t|\ge m\}$. As a result, for $Q$-a.e. $\omega$, $M\in\mathcal M^c_{\mathrm{loc}}(\mathcal F_t,Q_\omega)$. In a similar way, we prove that, for $Q$-a.e. $\omega$, $Q_\omega|\mathcal F_{T_{a,c}}$ satisfies condition (e) of Definition 1.31.

Using now Theorems 2.11, 2.13, we deduce that, for $Q$-a.e. $\omega$, $Q_\omega|\mathcal F_{T_{a,c}}=P_0$. This implies that $Z^{T_{a,c}(Z)}$ is independent of $\mathcal H$. Since $Z=Z^{T_{a,c}(Z)}$ $Q$-a.s., we conclude that $Z$ is independent of $\mathcal H$. The obtained results show that the law of the first coordinate under $Q$ coincides with the "glued" measure constructed in the proof of existence. This means that $\widetilde P$ is determined uniquely. Hence, the measure $P=\widetilde P|\mathcal F_{T_{a,c}}$ is unique. □

Proof of Theorem 3.2. It follows from the proof of Theorem 3.3 (i) and the results of Section 2.3 that, for any solution $(P,S)$, $X=0$ on $[[T_0,S]]$ $P$-a.s. On the other hand, the quadratic variation
$$\langle X\rangle_t=\int_0^{t\wedge S}\sigma^2(X_s)\,ds,\qquad t\ge0$$
is strictly increasing on $[[0,S]]$ since $\sigma^2>0$. This leads to the desired statement. □

Proof of Theorem 3.4. Without loss of generality, we suppose that $x_0\in(a,0]$. It follows from Theorem 2.12 that there exists a negative solution $P$ defined up to $T_{a,c}$. There also exists a positive solution $P_0$ with $X_0=0$ defined up to $T_{a,c}$. Set $\widetilde P=P\circ\Phi_{T_{a,c}}^{-1}$, $\widetilde P_0=P_0\circ\Phi_{T_{a,c}}^{-1}$ ($\Phi$ is defined by (B.1)) and let $\widetilde P'$ be the image of $\widetilde P\times\widetilde P_0$ under the map
$$C(\mathbb R_+)\times C_0(\mathbb R_+)\ni(\omega_1,\omega_2)\longmapsto G(\omega_1,\omega_2,T_0(\omega_1))\in C(\mathbb R_+),$$
where $G$ is the gluing function. Using Lemma B.9, one can verify that the measure $P'=\widetilde P'|\mathcal F_{T_{a,c}}$ is a solution of (1) up to $T_{a,c}$. Moreover, $P'$ is not negative, and hence, $P'\neq P$. □
Proof of Theorem 3.5. (i) Let $(P,S)$ be a solution. For $0\le\alpha<\beta<c$, we set $T_\alpha^\beta=\inf\{t\ge T_\beta: X_t=\alpha\}$. Suppose that there exists $\beta\in(0,c)$ such that $P\{T_0^\beta<\infty,\ T_0^\beta\le S\}>0$. Then there exists $\alpha\in(0,\beta)$ such that
$$P\Bigl\{T_0^\beta<\infty,\ T_0^\beta\le S,\ \sup_{t\in[T_\alpha^\beta,\,T_0^\beta]}X_t\le c\Bigr\}>0.$$
Using the same arguments as in the proof of Theorem 2.16 (i), we arrive at a contradiction. Thus, for any $\beta\in(0,c)$, $P\{T_0^\beta<\infty,\ T_0^\beta\le S\}=0$. This leads to the desired statement.
(ii) This statement is proved in the same way as Theorem 3.4.
(iii) This statement follows from Theorem 2.16 (ii). □
Proof of Theorem 3.6. (i), (ii) These statements follow from Theorem 2.16 (ii).
(iii) Let $P$ be a solution up to $T_{a,c}$. The proof of Theorem 3.5 (i) shows that
$$P\{\exists t\in(T_0^+,T_{a,c}]: X_t\le0\}=0,\qquad P\{\exists t\in(T_0^-,T_{a,c}]: X_t\ge0\}=0,$$
where $T_0^-=\inf\{t\ge0: X_t<0\}$. Furthermore, $T_0^+\wedge T_0^-=0$ $P$-a.s. since $\sigma\neq0$. Thus, $P(A_+\cup A_-)=1$, where
$$A_+=\{\omega\in C(\mathbb R_+):\omega(0)=0\text{ and }\omega>0\text{ on }(0,T_{a,c}(\omega)]\},$$
$$A_-=\{\omega\in C(\mathbb R_+):\omega(0)=0\text{ and }\omega<0\text{ on }(0,T_{a,c}(\omega)]\}.$$
Suppose that $P$ coincides neither with $P_-$ nor with $P_+$. Then $P(A_+)>0$ and the conditional probability $P(\,\cdot\mid A_+)$ is a solution up to $T_{a,c}$ (note that $A_+\in\bigcap_{\varepsilon>0}\mathcal F_\varepsilon$). Moreover, this solution is positive, and thus, it coincides with $P_+$. In a similar way, we prove that $P(\,\cdot\mid A_-)=P_-$. Thus, $P=\lambda P_-+(1-\lambda)P_+$ with $\lambda=P(A_-)$. □
3.4 The Branch Points: Non-Markov Solutions

Definition 3.7. A branch point is an isolated singular point that has one of the types $(2,2)$, $(2,3)$, $(3,2)$, or $(3,3)$.

The branch points can be characterized by the following statement.

Lemma 3.8. Suppose that zero is an isolated singular point. Then it is a branch point if and only if there exist both a positive solution and a negative solution with $X_0=0$ defined up to $T_{a,c}$. (Here $a$ and $c$ are taken from (3.1).)
The statement follows from the results of Section 2.3.

In what follows, $X$ denotes the canonical process on $C(\mathbb R_+)$ and $(\mathcal F_t)$ stands for the canonical filtration on $C(\mathbb R_+)$. For $s\ge0$, the shift operator $\Theta_s: C(\mathbb R_+)\to C(\mathbb R_+)$ is defined by
$$(\Theta_s\omega)(t)=\omega(s+t),\qquad t\ge0.$$
Definition 3.9. A measure $P$ on $\mathcal F$ has the Markov property if, for any $t\ge0$ and any positive $\mathcal F$-measurable function $\Psi$,
$$E_P[\Psi\circ\Theta_t\mid\mathcal F_t]=E_P[\Psi\circ\Theta_t\mid\sigma(X_t)].$$
If SDE (1) possesses a unique global solution for any $x_0\in\mathbb R$, then these solutions are Markov (see [45, Th. 6.2] or [28, Th. 18.11]; see also [15, Cor. 4.38]). The example below shows that the presence of branch points may give rise to non-Markov solutions.

Example 3.10 (non-Markov solution; SDE for a Bessel process). Let us consider the SDE
$$dX_t=\frac{\delta-1}{2X_t}\,I(X_t\neq0)\,dt+dB_t,\qquad X_0=x_0\tag{3.5}$$
with $1<\delta<2$. Take $x_0>0$ and let $P$ be the positive solution of (3.5) (this is the distribution of a $\delta$-dimensional Bessel process started at $x_0$). Consider the map $C(\mathbb R_+)\ni\omega\longmapsto\overline\omega\in C(\mathbb R_+)$ defined by
$$\overline\omega(t)=\begin{cases}\omega(t)&\text{if }t\le T_0(\omega),\\ \omega(t)&\text{if }t>T_0(\omega)\text{ and }\omega(T_0(\omega)/2)>1,\\ -\omega(t)&\text{if }t>T_0(\omega)\text{ and }\omega(T_0(\omega)/2)\le1.\end{cases}$$
Then the image $\overline P$ of $P$ under this map is a non-Markov solution of (3.5). The proof is straightforward.

Remarks. (i) Similarly to Example 3.10, one can construct non-Markov local solutions for an arbitrary SDE that possesses an isolated singular point of type $(2,2)$, $(2,3)$, or $(3,2)$. The points of type $(3,3)$ do not lead to non-Markov solutions (this follows from Theorem 3.6).
(ii) Let us mention another way to construct non-Markov solutions of one-dimensional homogeneous SDEs. Consider the equation
$$dX_t=|X_t|^\alpha\,dB_t,\qquad X_0=0\tag{3.6}$$
with $0<\alpha<1/2$. This example was first considered by Girsanov [22]. SDE (3.6) possesses different solutions (see Example 1.22). In order to construct a non-Markov solution of (3.6), we start a solution at $x_0\neq0$. When it first reaches zero, we hold it at zero for a time period that depends on the past of the path and then restart it from zero in a nontrivial way. This way of constructing non-Markov solutions is well known (see [14], [15], or [26, p. 79]). Actually, the same procedure can be performed with the SDE
$$dX_t=I(X_t\neq0)\,dB_t,\qquad X_0=0.\tag{3.7}$$
It is important for both examples (3.6) and (3.7) that σ vanishes at zero. On the other hand, in Example 3.10, σ ≡ 1.
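Paths like the simulated ones shown in Figures 3.1–3.3 can be reproduced, at a purely illustrative level, by a naive Euler scheme. The sketch below is not part of the monograph; the function name `euler_path` and the zero-guard in the Bessel drift of SDE (3.5) are ad hoc choices made only to keep the discretization well defined.

```python
import math
import random

def euler_path(b, sigma, x0, dt, n, rng=None):
    """Naive Euler scheme for dX_t = b(X_t) dt + sigma(X_t) dB_t.

    Returns the simulated path [X_0, X_{dt}, ..., X_{n*dt}].
    """
    rng = rng or random.Random(0)
    x = [x0]
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))
        x.append(x[-1] + b(x[-1]) * dt + sigma(x[-1]) * dB)
    return x

# Drift of the Bessel SDE (3.5) with 1 < delta < 2; the indicator
# I(x != 0) from (3.5) avoids division by zero.
delta = 1.5
b = lambda x: (delta - 1) / (2 * x) if x != 0 else 0.0
sigma = lambda x: 1.0

path = euler_path(b, sigma, x0=1.0, dt=1e-3, n=1000)

# Sanity check with the noise switched off: the drift alone pushes a
# positive starting point upwards.
drift_only = euler_path(b, lambda x: 0.0, x0=1.0, dt=1e-3, n=10)
```

Such a simulation only illustrates typical behaviour; near a singular point the Euler scheme is unreliable, which is precisely where the analytical classification of Chapters 2 and 3 is needed.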
3.5 The Branch Points: Strong Markov Solutions

Throughout this section, we assume that (3.1) is true.

Let us first consider the case where zero has type (3,3). Then we know the structure of all the solutions in the neighbourhood of zero. Indeed, it follows from Theorem 3.6 that if $(P_x,T_{a,c})$ is a solution with $X_0=x$, then there exists $\lambda\in[0,1]$ such that
$$P_x=P_x^\lambda=\begin{cases}P_x^-&\text{if }x\in[a,0),\\ \lambda P_0^-+(1-\lambda)P_0^+&\text{if }x=0,\\ P_x^+&\text{if }x\in(0,c].\end{cases}\tag{3.8}$$
Here $P_x^-$ is the unique negative solution defined up to $T_{a,c}$; $P_x^+$ is the unique positive solution defined up to $T_{a,c}$. Set $\widetilde P_x^\lambda=P_x^\lambda\circ\Phi_{T_{a,c}}^{-1}$ ($\Phi$ is defined by (B.1)). If $\lambda$ equals 0 or 1, then $(\widetilde P_x^\lambda)_{x\in[a,c]}$ is a strong Markov family (see Definition A.25). For $\lambda\in(0,1)$, this family does not have the strong Markov property. In order to check this, consider the $(\mathcal F_{t+})$-stopping time $T_0^+=\inf\{t\ge0: X_t>0\}$ and the function $\Psi(\omega)=I(\forall t\ge0,\ \omega(t)\ge0)$. Thus, we arrive at the following theorem.
Theorem 3.11. Suppose that zero has type (3,3). For each $x\in[a,c]$, let $(P_x,T_{a,c})$ be a solution with $X_0=x$. Set $\widetilde P_x=P_x\circ\Phi_{T_{a,c}}^{-1}$ and suppose that the family $(\widetilde P_x)_{x\in[a,c]}$ has the strong Markov property. Then either $P_x=P_x^0$ for any $x\in[a,c]$ or $P_x=P_x^1$ for any $x\in[a,c]$, where $P_x^0$ and $P_x^1$ are given by (3.8).

Let us now consider the case where zero has type (2,3). For $x\in[a,0]$, there exists a unique negative solution $P_x^-$ defined up to $T_{a,c}$; for $x\in[0,c]$, there exists a unique positive solution $P_x^+$ defined up to $T_{a,c}$. We set
$$P_x^0=\begin{cases}P_x^-&\text{if }x\in[a,0],\\ P_x^+&\text{if }x\in(0,c].\end{cases}\tag{3.9}$$
For $x\in[a,0)$, we define $\widetilde P_x^1$ as the image of $\widetilde P_x^-\times\widetilde P_0^+$ under the map
$$C(\mathbb R_+)\times C_0(\mathbb R_+)\ni(\omega_1,\omega_2)\longmapsto G(\omega_1,\omega_2,T_0(\omega_1))\in C(\mathbb R_+),$$
where $G$ is the gluing function, $\widetilde P_x^-=P_x^-\circ\Phi_{T_{a,c}}^{-1}$, $\widetilde P_x^+=P_x^+\circ\Phi_{T_{a,c}}^{-1}$. We set
$$P_x^1=\begin{cases}\widetilde P_x^1|\mathcal F_{T_{a,c}}&\text{if }x\in[a,0),\\ P_x^+&\text{if }x\in[0,c].\end{cases}\tag{3.10}$$
Using Lemma B.9, one can verify that, for any $x\in[a,c]$, $P_x^0$ and $P_x^1$ are solutions with $X_0=x$ defined up to $T_{a,c}$. Moreover, one can check that both families $(\widetilde P_x^0)_{x\in[a,c]}$ and $(\widetilde P_x^1)_{x\in[a,c]}$, where $\widetilde P_x^\lambda=P_x^\lambda\circ\Phi_{T_{a,c}}^{-1}$, $\lambda=0,1$, have the strong Markov property. However, we will not prove this statement, but would rather prove the converse statement.

Theorem 3.12. Suppose that zero has type (2,3). For each $x\in[a,c]$, let $(P_x,T_{a,c})$ be a solution with $X_0=x$. Set $\widetilde P_x=P_x\circ\Phi_{T_{a,c}}^{-1}$ and suppose that the family $(\widetilde P_x)_{x\in[a,c]}$ has the strong Markov property. Then either $P_x=P_x^0$ for any $x\in[a,c]$ or $P_x=P_x^1$ for any $x\in[a,c]$, where $P_x^0$ and $P_x^1$ are given by (3.9) and (3.10), respectively.

Proof. Suppose that $\widetilde P_0\{T_0^+<\infty\}>0$. By the strong Markov property, we have
$$E_{\widetilde P_0}\bigl[I(T_0^+<\infty)\,I(\forall t>T_0^+,\ X_t>0)\bigr]=\widetilde P_0\{T_0^+<\infty\}\,\widetilde P_0\{\forall t>0,\ X_t>0\}.$$
According to Theorem 3.5 (ii), the left-hand side of this equality can be rewritten as $\widetilde P_0\{T_0^+<\infty\}$. Thus, $\widetilde P_0\{\forall t>0,\ X_t>0\}=1$, which means that $T_0^+=0$ $\widetilde P_0$-a.s. Using the same arguments as in the proof of Theorem 3.3 (ii), we conclude that $P_x=P_x^1$ for any $x\in[a,c]$.

Now, suppose that $\widetilde P_0\{T_0^+<\infty\}=0$. This means that the solutions $P_x$ with $x\in[a,0]$ are negative. Consequently, $P_x=P_x^-=P_x^0$ for $x\in[a,0]$. If $x\in(0,c]$, then, by Theorem 2.16 (ii), $P_x=P_x^+=P_x^0$. This means that $P_x=P_x^0$ for any $x\in[a,c]$. □

We will finally consider the case where zero has type (2,2). Let $\rho_+$, $s_+$ denote the functions defined in (2.12), (2.13). Let $\rho_-$, $s_-$ be the functions defined in (3.2), (3.3). For $\lambda\in(0,1)$, we set
$$s^\lambda(x)=\begin{cases}\lambda s_-(x)&\text{if }x\in[a,0],\\ (1-\lambda)s_+(x)&\text{if }x\in[0,c],\end{cases}$$
$$m^\lambda(dx)=\frac{I(a<x<0)}{\lambda\rho_-(x)\sigma^2(x)}\,dx+\frac{I(0<x<c)}{(1-\lambda)\rho_+(x)\sigma^2(x)}\,dx+\Delta_a(dx)+\Delta_c(dx),$$
where $\Delta_a$ and $\Delta_c$ denote the infinite masses at the points $a$ and $c$, respectively.

Take $x\in[a,c]$, $\lambda\in[0,1]$. Let $B$ be a Brownian motion started at $s^\lambda(x)$ and
$$A_t=\int_{s^\lambda([a,c])}L_t^y(B)\,m^\lambda\circ(s^\lambda)^{-1}(dy),$$
$$\tau_t=\inf\{s\ge0: A_s>t\},$$
$$\widetilde P_x^\lambda=\mathrm{Law}\bigl((s^\lambda)^{-1}(B_{\tau_t});\ t\ge0\bigr),$$
$$P_x^\lambda=\widetilde P_x^\lambda|\mathcal F_{T_{a,c}}.$$
We define $P_x^1$ similarly to (3.10); $P_x^0$ is defined in an analogous way.

The measure $P_x^\lambda$ may informally be described as follows. We start a solution at the point $x$, and after each time it reaches zero, it goes in the positive direction with probability $\lambda$ and in the negative direction with probability $1-\lambda$. This can be put on a firm basis using the excursion theory.

The same arguments as in the proof of Theorem 2.12 allow one to verify that, for any $\lambda\in[0,1]$ and $x\in[a,c]$, $(P_x^\lambda,T_{a,c})$ is a solution with $X_0=x$. Moreover, one can check that, for any $\lambda\in[0,1]$, the family $(\widetilde P_x^\lambda)_{x\in[a,c]}$, where $\widetilde P_x^\lambda=P_x^\lambda\circ\Phi_{T_{a,c}}^{-1}$, has the strong Markov property. However, we will not prove these statements, but would rather prove the converse statement.

Theorem 3.13. Suppose that zero has type (2,2). For each $x\in[a,c]$, let $(P_x,T_{a,c})$ be a solution with $X_0=x$. Set $\widetilde P_x=P_x\circ\Phi_{T_{a,c}}^{-1}$ and suppose that the family $(\widetilde P_x)_{x\in[a,c]}$ has the strong Markov property. Then there exists $\lambda\in[0,1]$ such that $P_x=P_x^\lambda$ for any $x\in[a,c]$.
0 -a.s. Then, by the strong Markov propProof. Suppose first that T0− = ∞ P erty, X ≥ 0 on [[T0 , Ta,c ]] Px -a.s. for any x ∈ [a, c]. Using the same arguments as in the proof of Theorem 3.3 (ii), we conclude that Px = P1x for any x ∈ [a, c]. 0 -a.s., then Px = P0 for any Similarly, we deduce that if T0+ = ∞ P x x ∈ [a, c]. 0 {T0+ < ∞} > 0. We 0 {T0− < ∞} > 0, P Now, suppose that P will prove that the family (Px )x∈[a,c] is regular in this case (see Definition A.26). Condition (a) of Definition A.26 is obviously satisfied, and we should check only (b). We will verify this condition for x ∈ (0, c), y = a. Let P+ x denote the positive solution with X0 = x defined up to Ta,c . In view of Theorem 2.11, Px |FTδ,c = P+ x |FTδ,c for any δ ∈ (0, c). Consequently, + Px |FT0,c = P+ x |FT0,c . It follows from Theorem 2.12 that Px {T0 < Tc } > 0. Therefore, Px {T0 < ∞} > 0. Since P0 {T0− < ∞} > 0, there exists d ∈ [a, 0) 0 {Td < ∞} > 0. Furthermore, P d {Ta < ∞} > 0. Using the strong such that P x {Ty < ∞} > 0. Thus, the Markov property at times T0 and Td , we get P x )x∈[a,c] is regular. family (P x )x∈[a,c] Let s and m denote the scale function and the speed measure of (P (see Definitions A.28, A.30). We can assume that s(0) = 0. For x ∈ [0, c],
define $\widetilde Q_x$ as the image of $\widetilde P_x$ under the map $\omega\mapsto\omega^{T_{0,c}(\omega)}$. Then $(\widetilde Q_x)_{x\in[0,c]}$ is a regular strong Markov family whose scale function is given by the restriction of $s$ to $[0,c]$ and whose speed measure is the restriction of $m$ to $(0,c)$. On the other hand, $\widetilde P_x|\mathcal F_{T_{0,c}}$ is a solution up to $T_{0,c}$, and hence, $\widetilde P_x|\mathcal F_{T_{0,c}}=\widetilde P_x^+$. Since $X=X^{T_{0,c}}$ $\widetilde Q_x$-a.s., we get $\widetilde Q_x=\widetilde P_x^+$. The measure $\widetilde P_x^+$ is obtained from the Wiener measure by a time-change and a space transformation (see the proof of Theorem 2.11). The explicit form of the time-change and the space transformation allows us to conclude (see [28, Th. 20.9]) that $(\widetilde P_x^+)_{x\in[0,c]}$ is a regular strong Markov family with the scale function $s_+(x)$ and the speed measure
$$m_+(dx)=\frac{I(0<x<c)}{\rho_+(x)\sigma^2(x)}\,dx.$$
The scale function is defined up to an affine transformation (see Proposition A.27). Therefore, there exists $\lambda_+>0$ such that
$$s(x)=\lambda_+s_+(x),\qquad x\in[0,c],$$
$$m|_{(0,c)}(dx)=\frac{I(0<x<c)}{\lambda_+\rho_+(x)\sigma^2(x)}\,dx.$$
Similarly, we prove that there exists $\lambda_->0$ such that
$$s(x)=\lambda_-s_-(x),\qquad x\in[a,0],$$
$$m|_{(a,0)}(dx)=\frac{I(a<x<0)}{\lambda_-\rho_-(x)\sigma^2(x)}\,dx.$$
We have
$$\int_0^{T_{a,c}}I(X_s=0)\,ds=\int_0^{T_{a,c}}\frac{I(X_s=0)}{\sigma^2(0)}\,d\langle X\rangle_s=\int_{\mathbb R}\frac{I(x=0)}{\sigma^2(0)}\,L_{T_{a,c}}^x(X)\,dx=0\quad\widetilde P_0\text{-a.s.}$$
Consequently, $m\{0\}=0$. We arrive at the equalities
$$s(x)=(\lambda_++\lambda_-)\,s^\lambda(x),\qquad m(dx)=\frac{1}{\lambda_++\lambda_-}\,m^\lambda(dx),$$
where $\lambda=\frac{\lambda_-}{\lambda_++\lambda_-}$. Thus, $(\widetilde P_x)_{x\in[a,c]}$ is a regular family whose scale function and speed measure can be chosen equal to $s^\lambda$ and $m^\lambda$. It follows from Proposition A.31 that $P_x=P_x^\lambda$ for any $x\in[a,c]$. □
Fig. 3.3. Behaviour of strong Markov solutions for various types of the branch points. The graphs show simulated paths of solutions with different starting points. The top graph corresponds to the case where zero has type (2,2); it represents a path of the solution $P^\lambda$ with $\lambda=0.7$. The centre graph and the bottom graph correspond to the cases where zero has types (2,3) and (3,3), respectively; these graphs represent paths of the solution $P^1$.
4 Classification at Infinity and Global Solutions
A classification similar to that given in Chapter 2 can be performed at $+\infty$. This is the topic of Sections 4.1–4.3. The results of Chapters 2, 3 apply to local solutions, i.e., solutions up to a random time. In Sections 4.4, 4.5, we study the existence and the uniqueness of a global solution, i.e., a solution in the sense of Definition 1.28. This is done for the SDEs that have no more than one singular point. Throughout this chapter, we assume that $\sigma(x)\neq0$ for all $x\in\mathbb R$.
4.1 Classification at Infinity: The Results

Throughout this section, we assume that
$$\frac{1+|b|}{\sigma^2}\in L^1_{\mathrm{loc}}([a,\infty))\tag{4.1}$$
for some $a\in\mathbb R$. We will use the functions
$$\rho(x)=\exp\Bigl(-\int_a^x\frac{2b(y)}{\sigma^2(y)}\,dy\Bigr),\qquad x\in[a,\infty),\tag{4.2}$$
$$s(x)=-\int_x^\infty\rho(y)\,dy,\qquad x\in[a,\infty)\tag{4.3}$$
and the notation
$$\overline T_\infty=\lim_{n\to\infty}T_n,$$
$$\overline T_{a,\infty}=\overline T_a\wedge\overline T_\infty.$$

Theorem 4.1. Suppose that
$$\int_a^\infty\rho(x)\,dx=\infty.$$
If x0 ∈ [a, ∞), then there exists a unique solution P defined up to Ta . We have Ta < ∞ P-a.s.
If the conditions of Theorem 4.1 are satisfied, we will say that $+\infty$ has type A.

Theorem 4.2. Suppose that
$$\int_a^\infty\rho(x)\,dx<\infty,\qquad\int_a^\infty\frac{|s(x)|}{\rho(x)\sigma^2(x)}\,dx=\infty.$$
If $x_0\in[a,\infty)$, then there exists a unique solution $P$ defined up to $T_a$. If moreover $x_0>a$, then $P\{T_a=\infty\}>0$ and $\lim_{t\to\infty}X_t=+\infty$ $P$-a.s. on $\{T_a=\infty\}$.

If the conditions of Theorem 4.2 are satisfied, we will say that $+\infty$ has type B.

Theorem 4.3. Suppose that
$$\int_a^\infty\rho(x)\,dx<\infty,\qquad\int_a^\infty\frac{|s(x)|}{\rho(x)\sigma^2(x)}\,dx<\infty.$$
If $x_0\in(a,\infty)$, then there exists a unique solution $P$ defined up to $\overline T_{a,\infty}-$. We have $P\{\overline T_\infty<\infty\}>0$. (In other words, the solution explodes into $+\infty$ with strictly positive probability.)

If the conditions of Theorem 4.3 are satisfied, we will say that $+\infty$ has type C.

As a consequence of the above results, we obtain Feller's criterion for explosions (see [16], [29, Ch. 5, Th. 5.29], or [34, § 3.6]).

Corollary 4.4. Suppose that $x_0\in(a,\infty)$ and $P$ is a solution defined up to $\overline T_{a,\infty}-$. Then it explodes into $+\infty$ with strictly positive probability (i.e., $P\{\overline T_\infty<\infty\}>0$) if and only if
$$\int_a^\infty\rho(x)\,dx<\infty,\qquad\int_a^\infty\frac{|s(x)|}{\rho(x)\sigma^2(x)}\,dx<\infty.$$
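Two computations, added here for illustration (they are not carried out in the text), show how the criteria apply to concrete equations: a Brownian motion with constant positive drift falls under Theorem 4.2, while a strongly superlinear drift falls under Theorem 4.3.

```latex
% Type B: dX_t = dt + dB_t (b \equiv 1, \sigma \equiv 1):
\rho(x) = e^{-2(x-a)}, \qquad
\int_a^\infty \rho(x)\,dx = \tfrac12 < \infty, \qquad
s(x) = -\tfrac12\, e^{-2(x-a)},

\frac{|s(x)|}{\rho(x)\sigma^2(x)} = \tfrac12, \qquad
\int_a^\infty \tfrac12\,dx = \infty
\quad\Longrightarrow\quad +\infty \ \text{has type B}.

% Type C: dX_t = X_t^2\,dt + dB_t (b(x) = x^2, \sigma \equiv 1):
\rho(x) = e^{-\frac23(x^3 - a^3)}, \qquad
\int_a^\infty \rho(x)\,dx < \infty, \qquad
\frac{|s(x)|}{\rho(x)} = \frac{\int_x^\infty \rho(y)\,dy}{\rho(x)}
  \sim \frac{1}{2x^2} \quad (x \to \infty),

\int_a^\infty \frac{|s(x)|}{\rho(x)\sigma^2(x)}\,dx < \infty
\quad\Longrightarrow\quad +\infty \ \text{has type C}.
```

The asymptotics in the second computation follows from L'Hôpital's rule applied to $\int_x^\infty\rho(y)\,dy\,/\,\rho(x)$; by Corollary 4.4, the second equation explodes into $+\infty$ with strictly positive probability.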
4.2 Classification at Infinity: Informal Description

If $+\infty$ has type A, then a solution cannot explode into $+\infty$. Moreover, a solution is recurrent in the following sense: if there are no singular points between the starting point $x_0$ and a point $a<x_0$, then the solution reaches the level $a$ a.s. An example of an SDE, for which $+\infty$ has type A, is provided by the equation $dX_t=dB_t$, $X_0=x_0$.

If $+\infty$ has type B, then there is no explosion into $+\infty$ and a solution tends to $+\infty$ with strictly positive probability. In other words, a solution is transient.
Fig. 4.1. Classification of the types of $+\infty$. Under the standing assumptions $\sigma\neq0$ and $(1+|b|)/\sigma^2\in L^1_{\mathrm{loc}}([a,\infty))$, the type is determined as follows: if $\int_a^\infty\rho(x)\,dx=\infty$, then $+\infty$ has type A; if $\int_a^\infty\rho(x)\,dx<\infty$ and $\int_a^\infty\frac{|s(x)|}{\rho(x)\sigma^2(x)}\,dx=\infty$, then type B; if both integrals are finite, then type C.

An example of an SDE, for which $+\infty$ has type B, follows from Theorem 5.5.
Fig. 4.2. Behaviour of solutions for various types of infinity. The graphs show simulated paths of solutions.
4.3 Classification at Infinity: The Proofs

Proof of Theorem 4.1. Existence. Consider the function
$$r(x)=\int_a^x\rho(y)\,dy,\qquad x\in[a,\infty).$$
Let $B$ be a $(\mathcal G_t)$-Brownian motion started at $r(x_0)$ on a filtered probability space $(\Omega,\mathcal G,(\mathcal G_t),Q)$. Let us consider
$$\kappa(y)=\rho(r^{-1}(y))\,\sigma(r^{-1}(y)),\qquad y\in[0,\infty),$$
$$A_t=\begin{cases}\int_0^t\kappa^{-2}(B_s)\,ds&\text{if }t<T_0(B),\\ \infty&\text{if }t\ge T_0(B),\end{cases}$$
$$\tau_t=\inf\{s\ge0: A_s>t\},\qquad Y_t=B_{\tau_t},\quad t\ge0.$$
Arguing in the same way as in the proof of Theorem 2.11, we check that $A_{T_0(B)-}=T_0(Y)<\infty$ $Q$-a.s. Set $Z=r^{-1}(Y)$. The estimates used in (2.23) show that, for any $c>x_0$,
$$E_Q\int_0^{T_{a,c}(Z)}\bigl(1+|b(Z_t)|+\sigma^2(Z_t)\bigr)\,dt<\infty.\tag{4.4}$$
Furthermore, $T_a(Z)=T_0(Y)<\infty$ $Q$-a.s. Letting $c\to+\infty$ in (4.4), we get
$$\int_0^{T_a(Z)}\bigl(1+|b(Z_t)|+\sigma^2(Z_t)\bigr)\,dt<\infty\quad Q\text{-a.s.}\tag{4.5}$$
The proof of existence is now completed in the same way as in Theorem 2.11.

Uniqueness. Uniqueness follows from Lemma B.6 applied to the stopping times $T_{a,n}$. The property $T_a<\infty$ $P$-a.s. is a consequence of (4.5). □

Proof of Theorem 4.2. Existence. Let $B$ be a $(\mathcal G_t)$-Brownian motion started at $s(x_0)$ on a filtered probability space $(\Omega,\mathcal G,(\mathcal G_t),Q)$. Let us consider
$$\kappa(y)=\rho(s^{-1}(y))\,\sigma(s^{-1}(y)),\qquad y\in[\alpha,0),$$
$$A_t=\begin{cases}\int_0^t\kappa^{-2}(B_s)\,ds&\text{if }t<T_{\alpha,0}(B),\\ \infty&\text{if }t\ge T_{\alpha,0}(B),\end{cases}$$
$$\tau_t=\inf\{s\ge0: A_s>t\},\qquad Y_t=B_{\tau_t},\quad t\ge0,$$
where $\alpha=s(a)$. With the same arguments as in the proof of Theorem 2.11, we check that $A_{T_{\alpha,0}(B)-}=T_{\alpha,0}(Y)$ $Q$-a.s. Furthermore, for any $\varepsilon>0$,
$$\int_{-\varepsilon}^0\frac{|y|}{\kappa^2(y)}\,dy=\int_{s^{-1}(-\varepsilon)}^\infty\frac{|s(x)|}{\rho(x)\sigma^2(x)}\,dx=\infty.$$
By Corollary A.24, $A_{T_{\alpha,0}(B)-}$ is $Q$-a.s. infinite on the set $\{T_0(B)<T_\alpha(B)\}$. Hence, $T_0(Y)=\infty$ $Q$-a.s. Thus, the process $Z=s^{-1}(Y)$ is correctly defined. The arguments used in (2.23) show that, for any $c>x_0$,
$$E_Q\int_0^{T_{a,c}(Z)}\bigl(1+|b(Z_t)|+\sigma^2(Z_t)\bigr)\,dt<\infty.$$
By letting $c\to+\infty$, we get, for any $t\ge0$,
$$\int_0^{t\wedge T_a(Z)}\bigl(1+|b(Z_s)|+\sigma^2(Z_s)\bigr)\,ds<\infty\quad Q\text{-a.s.}$$
The proof of existence is now completed in the same way as in Theorem 2.11.

Uniqueness. The uniqueness of a solution follows from Lemma B.6 applied to the stopping times $T_{a,n}$.

The properties $P\{T_a=\infty\}>0$ and $\lim_{t\to\infty}X_t=+\infty$ $P$-a.s. on $\{T_a=\infty\}$ follow from the facts that $Q\{T_0(B)<T_\alpha(B)\}>0$ and that, on the set $\{T_0(B)<T_\alpha(B)\}$, we have $Y_t\to0$ $Q$-a.s. as $t\to\infty$. □

Proof of Theorem 4.3. The proof is similar to the proof of Theorem 2.14. □
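The time-change construction used in the proofs above can be mimicked numerically. The sketch below is not from the monograph; the names `time_changed_bm` and `_ZeroNoise` are ad hoc, and the discrete inversion of the additive functional $A$ is only a first-order approximation of $\tau_t=\inf\{s\ge0: A_s>t\}$.

```python
import math
import random

def time_changed_bm(kappa, b0, dt, n, rng):
    """Discretize Y_t = B_{tau_t}, where tau is the right inverse of
    A_t = int_0^t kappa(B_s)^{-2} ds, as in the proofs of Theorems
    4.1 and 4.2 (without the stopping at hitting times).
    """
    B, A = [b0], [0.0]
    for _ in range(n):
        B.append(B[-1] + rng.gauss(0.0, math.sqrt(dt)))
        A.append(A[-1] + dt / kappa(B[-2]) ** 2)

    def Y(t):
        # tau_t is approximated by the first grid point k with A_k > t.
        for k, a in enumerate(A):
            if a > t:
                return B[k]
        return B[-1]  # beyond the simulated horizon

    return Y

# Sanity check: with kappa identically 1 the clock is not changed at
# all (A_t = t), so a constant path stays constant under the change.
class _ZeroNoise:
    def gauss(self, mu, sd):
        return 0.0

Y = time_changed_bm(lambda y: 1.0, b0=2.0, dt=0.1, n=10, rng=_ZeroNoise())
```

With a genuine random number generator in place of `_ZeroNoise`, taking $\kappa$ small near a boundary makes $A$ blow up there, which is exactly the mechanism by which $T_0(Y)=\infty$ arises in the proof of Theorem 4.2.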
4.4 Global Solutions: The Results

Throughout this section, we consider global solutions, i.e., solutions in the sense of Definition 1.28.

Theorem 4.5. Suppose that SDE (1) has no singular points, i.e.,
$$\frac{1+|b|}{\sigma^2}\in L^1_{\mathrm{loc}}(\mathbb R).$$
(i) If $-\infty$ and $+\infty$ have types A or B, then there exists a unique solution $P$. For any point $a\in\mathbb R$, we have $P\{T_a<\infty\}>0$.
(ii) If $-\infty$ or $+\infty$ has type C, then there exists no solution.

Theorem 4.6. Suppose that zero is the unique singular point for (1). Let $x_0>0$.
(i) If $+\infty$ has type C, then there exists no solution.
(ii) If zero has type (i, j) with i = 0, 1, 4, 5, 6, j = 0, 1 (we exclude the case i = j = 0), then there exists no solution. (iii) If zero has type (i, j) with i = 2, 3, j = 0, 1, −∞ has type A or B, and +∞ has type A or B, then there exists a unique solution P. We have P{T0 < ∞} > 0 and X ≤ 0 on [[T0 , ∞[[ P-a.s. (iv) If zero has type (i, j) with i = 2, 3, j = 0, 1 and −∞ has type C, then there exists no solution. (v) If zero has type (i, j) with i = 0, 1, 4, 5, 6, j = 2 and +∞ has type A or B, then there exists a unique solution P. It is positive and P{T0 < ∞} > 0. (vi) If zero has type (i, j) with i = 2, 3, j = 2, −∞ has type A or B, and +∞ has type A or B, then there exist different solutions. (vii) If zero has type (i, j) with i = 2, 3, j = 2, −∞ has type C, and +∞ has type A or B, then there exists a unique solution P. It is positive and P{T0 < ∞} > 0. (viii) If zero has type (i, j) with j = 3, 4, 5 and +∞ has type A or B, then there exists a unique solution. It is strictly positive. (ix) If zero has type (i, j) with j = 6, then there exists no solution. Theorem 4.7. Suppose that zero is the unique singular point for (1). Let x0 = 0. (i) If zero has type (i, j) with i = 0, 1, 4, 5, 6, j = 0, 1, 4, 5, 6 (we exclude the case i = j = 0), then there exists no solution. (ii) If zero has type (i, j) with i = 0, 1, 4, 5, 6, j = 2, 3 and +∞ has type A or B, then there exists a unique solution. It is positive. (iii) If zero has type (i, j) with i = 0, 1, 4, 5, 6, j = 2, 3 and +∞ has type C, then there exists no solution. (iv) If zero has type (i, j) with i = 2, 3, j = 0, 1, 4, 5, 6 and −∞ has type A or B, then there exists a unique solution. It is negative. (v) If zero has type (i, j) with i = 2, 3, j = 0, 1, 4, 5, 6 and −∞ has type C, then there exists no solution. (vi) If zero has type (i, j) with i = 2, 3, j = 2, 3, −∞ has type A or B, and +∞ has type A or B, then there exist different solutions. 
(vii) If zero has type (i, j) with i = 2, 3, j = 2, 3, −∞ has type A or B, and +∞ has type C, then there exists a unique solution. It is negative. (viii) If zero has type (i, j) with i = 2, 3, j = 2, 3, −∞ has type C, and +∞ has type A or B, then there exists a unique solution. It is positive. (ix) If zero has type (i, j) with i = 2, 3, j = 2, 3, −∞ has type C, and +∞ has type C, then there exists no solution. Remark. Theorems 4.6, 4.7 reveal an interesting effect. It may happen that a branch point does not disturb the uniqueness of a global solution. (As we have seen in Chapter 3, a branch point always disturbs the uniqueness of a local solution.) The explanation of this effect is as follows. Suppose, for example, that zero is a branch point, −∞ has type C, +∞ has type A or B, and x0 ≥ 0. If a solution becomes strictly negative with strictly positive
Table 4.1. Existence and uniqueness in the case with no singular points. A blank entry means that there are no restrictions on the corresponding type; for example, line 2 corresponds to the situation where $-\infty$ has type C and there are no restrictions on the type of $+\infty$. The table shows that in this case there exists no solution because there is an explosion into $-\infty$.

Type of −∞ | Type of +∞ | Existence | Uniqueness | Comments
A, B       | A, B       | +         | +          | solution can reach any point
C          |            | −         | +          | explosion into −∞
           | C          | −         | +          | explosion into +∞
probability, then it explodes into −∞ with strictly positive probability, and hence, it is not a global solution. Thus, any global solution should be positive. But there exists a unique positive solution.
4.5 Global Solutions: The Proofs

Proof of Theorem 4.5. (i) This statement is proved similarly to Theorems 4.1, 4.2.
(ii) Without loss of generality, we may assume that $+\infty$ has type C. Suppose that there exists a solution $P$. Fix $a<x_0$. Let $Q$ be the solution defined up to $\overline T_{a,\infty}-$ (it is provided by Theorem 4.3). Then $Q\{\overline T_\infty<\infty\}>0$. Hence, there exist $t>0$ and $c>a$ such that $Q\{\overline T_\infty<t\wedge T_c\}=\theta>0$. Then, for any $n>c$, we have $Q\{T_n<t\wedge T_c\}\ge\theta$. The set $\{T_n<t\wedge T_c\}$ belongs to $\mathcal F_{T_{c,n}}$, and $Q|\mathcal F_{T_{c,n}}$ is a solution up to $T_{c,n}$. It follows from the uniqueness of a solution that, for any $n>c$, $P\{T_n<t\wedge T_c\}\ge\theta$. But this is a contradiction. □

Proof of Theorem 4.6. (i) The proof is similar to the proof of Theorem 4.5 (ii).
(ii) Suppose that there exists a solution $P$. Fix $a>x_0$. Then $P|\mathcal F_{T_{0,a}}$ is a solution up to $T_{0,a}$. It follows from the results of Section 2.3 that $P\{T_{0,a}<\infty\text{ and }X_{T_{0,a}}=0\}>0$. Hence, $P\{T_0<\infty\}>0$. But this contradicts Theorem 3.2.
(iii) Existence. The results of Section 2.3 ensure that there exists a solution $R_0$ with $X_0=0$ defined up to $T_{-1}$. Employing similar arguments as in the proofs of Theorems 2.12 and 2.16 (ii), we construct a solution $R_{-1}$ with $X_0=-1$ defined up to $\infty$. Let $\overline R_{-1}$ be the image of $R_{-1}$ under the map $\omega\mapsto\omega+1$; we will consider $\overline R_{-1}$ as a measure on $C_0(\mathbb R_+)$. Set $\widetilde R_0=R_0\circ\Phi_{T_{-1}}^{-1}$ ($\Phi$ is defined by (B.1)). Let $Q_0$ be the image of $\widetilde R_0\times\overline R_{-1}$ under the map
Table 4.2. Existence and uniqueness in the case where zero is the unique singular point. The starting point is greater than zero.

Right type of zero | Left type of zero | Type of −∞ | Type of +∞ | Existence | Uniqueness | Comments
0,1,4,5,6 | any       | any | C   | − | + | explosion into +∞
0,1       | 0,1,4,5,6 | any | A,B | − | + | killing at zero
0,1       | 2,3       | A,B | A,B | + | + | passing through zero
0,1       | 2,3       | C   | A,B | − | + | explosion into −∞
2         | 0,1,4,5,6 | any | A,B | + | + | reflection at zero
2         | 2,3       | A,B | A,B | + | − | branching at zero
2         | 2,3       | C   | A,B | + | + | reflection at zero
3,4,5     | any       | any | A,B | + | + | solution is strictly positive
6         | any       | any | A,B | − | + | killing at zero
Table 4.3. Existence and uniqueness in the case where zero is the unique singular point. The starting point is equal to zero.

Right type of zero | Left type of zero | Type of −∞ | Type of +∞ | Existence | Uniqueness | Comments
0,1,4,5,6 | 0,1,4,5,6 | any | any | − | + | killing at zero
2,3       | 0,1,4,5,6 | any | A,B | + | + | solution is positive
2,3       | 0,1,4,5,6 | any | C   | − | + | explosion into +∞
0,1,4,5,6 | 2,3       | A,B | any | + | + | solution is negative
0,1,4,5,6 | 2,3       | C   | any | − | + | explosion into −∞
2,3       | 2,3       | A,B | A,B | + | − | branching at zero
2,3       | 2,3       | A,B | C   | + | + | solution is negative
2,3       | 2,3       | C   | A,B | + | + | solution is positive
2,3       | 2,3       | C   | C   | − | + | explosion into −∞ or +∞
C(R+) × C0(R+) ∋ (ω1, ω2) → G(ω1, ω2, T−1(ω1)) ∈ C(R+).

Using Lemma B.9, one can verify that Q0 is a solution of (1) with X0 = 0. Arguing in the same way as in the proofs of Theorems 4.1, 4.2, we deduce that there exists a solution Q with X0 = x0 defined up to T0. For this solution, Q{T0 < ∞} > 0. Set Q̃ = Q ◦ Φ_{T0}^{−1} (Φ is defined by (B.1)). Let P be the image of Q̃ × Q0 under the map

C(R+) × C0(R+) ∋ (ω1, ω2) → G(ω1, ω2, T0(ω1)) ∈ C(R+).

Using Lemma B.9, one can verify that P is a solution of (1).

Uniqueness. The uniqueness of a solution follows from Theorem 3.3 (ii) and Lemma B.6 applied to the stopping times T−n,n. The properties P{T0 < ∞} > 0 and X ≤ 0 on [[T0, ∞[[ P-a.s. follow from the construction of the solution.

(iv) Suppose that there exists a solution P. For any a > x0, P|F_{T0,a} is a solution up to T0,a. Applying the results of Section 2.3, we conclude that P{T0 < ∞} > 0. Set P̄ = P(· | T0 < ∞), P0 = P̄ ◦ Θ_{T0}^{−1}, where Θ is defined by (B.2). By Lemma B.7, P0 is a solution of (1) with X0 = 0. Thus, P0|F_{T−1,1} is a solution with X0 = 0 up to T−1,1. Applying Theorem 3.3 (i), we deduce that X ≤ 0 on [[0, T−1]] P0-a.s. Moreover, P0{∀t ≥ 0, Xt = 0} = 0 (see the proof of Theorem 3.2). Therefore, there exists c < 0 such that P0{Tc < ∞} > 0. Consider P̄0 = P0(· | Tc < ∞), Pc = P̄0 ◦ Θ_{Tc}^{−1}. By Lemma B.7, Pc is a solution of (1) with X0 = c. But this contradicts point (i) of this theorem.

(v) Existence. Using similar arguments as in the proof of Theorem 2.12, we conclude that there exists a positive solution P.

Uniqueness. Suppose that there exists another solution P′. Then, for any n > x0, P′|F_{Tn} is a solution up to Tn. It follows from the results of Section 2.3 that P′|F_{Tn} is positive. Due to Theorem 2.12, P′|F_{Tn} = P|F_{Tn}. Lemma B.6 yields that P′ = P. The property P{T0 < ∞} > 0 follows from Theorem 2.12.

(vi) Similar arguments as in the proof of Theorem 2.12 allow us to deduce that there exists a positive solution P.
Arguing in the same way as in the proof of point (iii) above, we construct a solution P′ such that P′{T0 < ∞} > 0 and X ≤ 0 on [[T0, ∞[[ P′-a.s. Moreover, P′{∀t ≥ T0, Xt = 0} = 0 (see the proof of Theorem 3.2). Hence, P′ is not positive, and therefore, P and P′ are two different solutions.

(vii) Existence. Using similar arguments as in the proof of Theorem 2.12, we deduce that there exists a positive solution P.

Uniqueness. Suppose that there exists another solution P′. Assume first that it is not positive. Then there exists c < 0 such that P′{Tc < ∞} > 0. Set P̄′ = P′(· | Tc < ∞), Pc = P̄′ ◦ Θ_{Tc}^{−1}. By Lemma B.7, Pc is a solution of (1) with X0 = c. But this contradicts point (i) of this theorem.
Assume now that P′ is positive. By Theorem 2.12, for any n > x0, P′|F_{Tn} = P|F_{Tn}. Lemma B.6 yields that P′ = P. The property P{T0 < ∞} > 0 follows from Theorem 2.12.
(viii) Existence. Using the same arguments as in the proofs of Theorems 2.15–2.17, we deduce that there exists a strictly positive solution P.

Uniqueness. Suppose that there exists another solution P′. It follows from the results of Section 2.3 that, for any n > x0, P′|F_{Tn} = P|F_{Tn}. By Lemma B.6, P′ = P.

(ix) This statement follows from Theorem 2.14.
□
Proof of Theorem 4.7. (i) This statement follows from Theorem 3.2.

(ii) This statement is proved in the same way as Theorem 4.6 (v).

(iii) Suppose that there exists a solution P. It follows from the results of Section 2.3 that P is positive. Moreover, P{∀t ≥ 0, Xt = 0} = 0 (see the proof of Theorem 3.2). Hence, there exists a > 0 such that P{Ta < ∞} > 0. Set P̄ = P(· | Ta < ∞), Pa = P̄ ◦ Θ_{Ta}^{−1}. By Lemma B.7, Pa is a solution of (1) with X0 = a. But this contradicts Theorem 4.6 (i).

(vi) Using similar arguments as in Section 2.5, one can construct both a positive solution and a negative solution.

(vii) This statement is proved in the same way as Theorem 4.6 (vii).

(ix) The proof of this statement is similar to the proof of point (iii).
□
5 Several Special Cases
In Section 5.1, we consider SDEs, for which the coefficients b and σ are power functions in the right-hand neighbourhood of zero or are equivalent to power functions as x ↓ 0. For these SDEs, we propose a simple procedure to determine the right type of zero. Section 5.2 contains similar results for the types of infinity. In Section 5.3, we investigate which types of isolated singular points are possible if the drift coefficient is positive or negative. Section 5.4 contains similar results for the types of infinity.
5.1 Power Equations: Types of Zero

Theorem 5.1. Suppose that there exists a > 0 such that

b(x) = µx^α,  σ(x) = νx^β,  x ∈ (0, a],   (5.1)

where µ, ν, α, β ∈ R and ν ≠ 0. Set λ = µ/ν², γ = α − 2β. Then the right type of zero for (1) is given by Figure 5.1.

Proof. If µ = 0, then the equation is investigated in a trivial way. Suppose that µ ≠ 0. The function ρ that appears in (2.12) is equal, up to multiplication by a constant, to

ρ̃(x) = exp(−(2λ/(γ+1)) x^{γ+1})  if γ ≠ −1,
ρ̃(x) = x^{−2λ}                   if γ = −1.

Hence, the function s that appears in (2.13) coincides, up to multiplication by a constant, with the following function s̃:
A.S. Cherny and H.-J. Engelbert: LNM 1858, pp. 93–103, 2005. c Springer-Verlag Berlin Heidelberg 2005
[Figure 5.1 shows three (λ, β)-diagrams, one for each of the cases γ > −1, γ = −1, and γ < −1, with regions marked by the right types 0–6.]

Fig. 5.1. The one-sided classification for power equations. Here λ = µ/ν², γ = α − 2β, where α, β, µ, and ν are given by (5.1). One should first calculate γ and select the corresponding graph (out of the three graphs shown). Then one should plot the point (λ, β) in this graph and find the part of the graph the point lies in. The number i marked in this part indicates that zero has right type i. As an example, if γ < −1, λ > 0, and β ≥ (1 − γ)/2, then zero has right type 5.
s̃(x) = ∫_0^x exp(−(2λ/(γ+1)) y^{γ+1}) dy   if γ < −1, λ < 0,
s̃(x) = ∫_x^a exp(−(2λ/(γ+1)) y^{γ+1}) dy   if γ < −1, λ > 0,
s̃(x) = x^{−2λ+1}                           if γ = −1, λ < 1/2,
s̃(x) = ln x − ln a                         if γ = −1, λ = 1/2,
s̃(x) = −x^{−2λ+1} + a^{−2λ+1}              if γ = −1, λ > 1/2,
s̃(x) = ∫_0^x exp(−(2λ/(γ+1)) y^{γ+1}) dy   if γ > −1.
The integrability conditions in the formulations of Theorems 2.11–2.17 remain unchanged when ρ and s are replaced by ρ̃ and s̃.

Suppose that γ > −1. In this case ρ̃(x) → 1 and s̃(x)/x → 1 as x ↓ 0. Now, it is easy to verify that, for β < 1/2, zero has right type 0; for 1/2 ≤ β < 1, zero has right type 1; for β ≥ 1, zero has right type 4. In a similar way, one investigates the cases γ = −1 and γ < −1. The only nontrivial problem is to find the asymptotic behaviour (as x ↓ 0) of the function |s̃(x)|/ρ̃(x) for γ < −1, µ ≠ 0. This is done with the help of the following lemma.

Lemma 5.2. Let γ < −1 and µ ≠ 0. Then there exist constants 0 < c1 < c2 and δ > 0 such that

c1 x^{−γ} ≤ |s̃(x)|/ρ̃(x) ≤ c2 x^{−γ},  x ∈ (0, δ).
Proof. We will give the proof only for the case where µ < 0 (the case µ > 0 is considered similarly). If µ < 0, then ∫_0^a ρ̃(x) dx < ∞ and s̃(x) ≥ 0 on (0, a]. It follows from (2.12) that ρ̃′(x) = −2λ x^γ ρ̃(x) for x ∈ (0, a), and therefore,

ρ̃(x) = −2λ ∫_0^x y^γ ρ̃(y) dy = 2|λ| ∫_0^x y^γ ρ̃(y) dy,  x ∈ (0, a).   (5.2)

There exists δ′ ∈ (0, a) such that

(2λ/(γ+1)) (x/2)^{γ+1} > (2λ/(γ+1)) x^{γ+1} + ln 2^{−γ},  x ∈ (0, δ′).

Then ρ̃(x/2) < 2^γ ρ̃(x) for x ∈ (0, δ′), and therefore,

∫_0^{x/2} y^γ ρ̃(y) dy = 2^{−γ−1} ∫_0^x y^γ ρ̃(y/2) dy < (1/2) ∫_0^x y^γ ρ̃(y) dy,  x ∈ (0, δ′).

Consequently,

∫_{x/2}^x y^γ ρ̃(y) dy > (1/2) ∫_0^x y^γ ρ̃(y) dy,  x ∈ (0, δ′),

and, using (5.2), we get

2|λ| ∫_{x/2}^x y^γ ρ̃(y) dy < ρ̃(x) < 4|λ| ∫_{x/2}^x y^γ ρ̃(y) dy,  x ∈ (0, δ′).

Thus, there exist constants 0 < c1 < c2 such that

c1 x^γ ∫_{x/2}^x ρ̃(y) dy < ρ̃(x) < c2 x^γ ∫_{x/2}^x ρ̃(y) dy,  x ∈ (0, δ′).   (5.3)

In a similar way we prove that there exist constants 0 < c1′ < c2′ and δ″ > 0 such that

c1′ ∫_{x/2}^x ρ̃(y) dy < s̃(x) < c2′ ∫_{x/2}^x ρ̃(y) dy,  x ∈ (0, δ″).   (5.4)

Combining (5.3) and (5.4), we get the desired statement. □
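The two-sided bound of Lemma 5.2 can be sanity-checked numerically. The sketch below is not part of the proof: it takes the concrete case γ = −2 and λ = −1, so that ρ̃(x) = exp(−2/x) and s̃(x) = ∫_0^x ρ̃, and compares s̃(x)/ρ̃(x) with x^{−γ} = x² by a midpoint Riemann sum.

```python
import math

def ratio(x, steps=200_000):
    """Approximate s~(x) / rho~(x) for rho~(y) = exp(-2/y), i.e. the
    case gamma = -2, lambda = -1, where s~(x) = int_0^x rho~(y) dy."""
    dy = x / steps
    s = sum(math.exp(-2.0 / (dy * (i + 0.5))) for i in range(steps)) * dy
    return s / math.exp(-2.0 / x)

# Lemma 5.2 predicts ratio(x) comparable to x**(-gamma) = x**2
# (asymptotically ratio(x) ~ x**2 / 2 here):
for x in (0.05, 0.1):
    r = ratio(x)
    assert 0.25 * x**2 <= r <= x**2, r
```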
Theorem 5.3. Suppose that

b(x)/(µx^α) → 1  and  σ(x)/(νx^β) → 1  as x ↓ 0,

where µ, ν, α, β are such that µ ≠ 0, ν ≠ 0, and α − 2β ≠ −1. Set λ = µ/ν², γ = α − 2β. Then the right type of zero for (1) is given by Figure 5.1.

This statement is proved similarly to Theorem 5.1. The following example shows that the condition α − 2β ≠ −1 in the above theorem is essential.

Example 5.4. If

b(x) = 1/(2x),  σ(x) = 1,  x ∈ (0, a]

for some a > 0, then zero has right type 3. If

b(x) = 1/(2x) + 1/(x ln x),  σ(x) = 1,  x ∈ (0, a]

for some a ∈ (0, 1), then zero has right type 2.

Proof. The first statement follows from Theorem 5.1. In order to prove the second statement, it is sufficient to note that

∫_x^a (2b(y)/σ²(y)) dy = −ln x − 2 ln |ln x| + const,  x ∈ (0, a].

Thus, the function ρ coincides on (0, a] with the function ρ̃(x) = 1/(x ln² x) up to multiplication by a constant. This implies that the function s coincides on (0, a] with the function s̃(x) = −1/ln x up to multiplication by a constant. The proof is now completed in a trivial way. □
5.2 Power Equations: Types of Infinity

Theorem 5.5. Suppose that there exists a > 0 such that

b(x) = µx^α,  σ(x) = νx^β,  x ∈ [a, ∞),   (5.5)

where µ, ν, α, β ∈ R and ν ≠ 0. Set λ = µ/ν², γ = α − 2β. Then the type of +∞ for (1) is given by Figure 5.2.

Proof. If µ = 0, then the equation is investigated in a trivial way. Suppose that µ ≠ 0. The function ρ that appears in (4.2) is equal, up to multiplication by a constant, to

ρ̃(x) = exp(−(2λ/(γ+1)) x^{γ+1})  if γ ≠ −1,
ρ̃(x) = x^{−2λ}                   if γ = −1.

Hence, the function s that appears in (4.3) coincides, up to multiplication by a constant, with

s̃(x) = ∫_x^∞ exp(−(2λ/(γ+1)) y^{γ+1}) dy   if γ ≠ −1,
s̃(x) = x^{−2λ+1}                           if γ = −1, λ > 1/2,
s̃(x) = ∞                                   if γ = −1, λ ≤ 1/2.

The integrability conditions in the formulations of Theorems 4.1–4.3 remain unchanged when ρ and s are replaced by ρ̃ and s̃.

Suppose that γ < −1. In this case ρ̃(x) → 1 as x → ∞. It is easy to verify that +∞ has type A. In a similar way, one investigates the cases γ = −1 and γ > −1. The only nontrivial problem is to find the asymptotic behaviour (as x → ∞) of the function |s̃(x)|/ρ̃(x) for γ > −1, µ > 0. This is done with the help of the following lemma.

Lemma 5.6. Let γ > −1 and µ > 0. Then there exist constants 0 < c1 < c2 and δ > 1 such that

c1 x^{−γ} ≤ |s̃(x)|/ρ̃(x) ≤ c2 x^{−γ},  x ∈ (δ, ∞).

The proof is similar to the proof of Lemma 5.2.

Theorem 5.7. Suppose that

b(x)/(µx^α) → 1  and  σ(x)/(νx^β) → 1  as x → ∞,

where µ, ν, α, β are such that µ ≠ 0, ν ≠ 0, and α − 2β ≠ −1. Set λ = µ/ν², γ = α − 2β. Then the type of +∞ for (1) is given by Figure 5.2.
[Figure 5.2 shows three (λ, β)-diagrams, one for each of the cases γ > −1, γ = −1, and γ < −1, with regions marked by the types A, B, C.]

Fig. 5.2. The classification at infinity for power equations. Here λ = µ/ν², γ = α − 2β, where α, β, µ, and ν are given by (5.5). One should first calculate γ and select the corresponding graph (out of the three graphs shown). Then one should plot the point (λ, β) in this graph and find the part of the graph the point lies in. The letter Λ marked in this part indicates that +∞ has type Λ. As an example, if γ > −1, λ > 0, and β > (1 − γ)/2, then +∞ has type C.
This statement is proved similarly to Theorem 5.5. The following example shows that the condition α − 2β ≠ −1 in the above theorem is essential.

Example 5.8. If

b(x) = x/2,  σ(x) = x,  x ∈ (a, ∞)

for some a > 0, then +∞ has type A. If

b(x) = x/2 + x/(2 ln x),  σ(x) = x,  x ∈ (a, ∞)

for some a > 1, then +∞ has type B.

The proof is similar to the proof of Example 5.4.
5.3 Equations with a Constant-Sign Drift: Types of Zero

Throughout this section, we assume that (2.11) is true.

Theorem 5.9. Let σ² = 1 and b ≥ 0 on (0, a]. Then zero can only have right type 0, 2, or 3.

Proof. The condition b ≥ 0 means that ρ decreases on (0, a]. Suppose that

∫_0^a ρ(x) dx < ∞.   (5.6)

Since 1/ρ is bounded on (0, a], we have

∫_0^a 1/(ρ(x)σ²(x)) dx = ∫_0^a 1/ρ(x) dx < ∞.

Furthermore,

∫_0^a 2|b(x)|/(ρ(x)σ²(x)) dx = −∫_0^a ρ′(x)/ρ²(x) dx = ∫_{ρ(a)}^{ρ(0)} (1/y²) dy < ∞.

So, if (5.6) holds, then zero has right type 0 or 2. Suppose now that (5.6) is violated. Then

|s(x)| = ∫_x^a ρ(y) dy ≤ aρ(x),  x ∈ (0, a].

Consequently,

∫_0^a |s(x)|/(ρ(x)σ²(x)) dx = ∫_0^a |s(x)|/ρ(x) dx < ∞.   (5.7)
Combining (5.7) with (2.16), we arrive at

∫_0^a b(x)|s(x)|/(ρ(x)σ²(x)) dx < ∞.

Thus, zero has right type 3. □
Theorem 5.10. Let σ² = 1 and b ≤ 0 on (0, a]. Then zero can only have right type 0 or 1.

Proof. The condition b ≤ 0 means that ρ increases on (0, a]. Hence, (5.6) holds true. Suppose that ρ(0+) > 0. Then, by (2.12),

∫_0^a |b(x)|/σ²(x) dx < ∞.

Thus, zero has right type 0. Suppose now that ρ(0+) = 0. Then

∫_0^a 2|b(x)|/(ρ(x)σ²(x)) dx = −∫_0^a 2b(x)/(ρ(x)σ²(x)) dx = ∫_0^a ρ′(x)/ρ²(x) dx = ∫_0^{ρ(a)} (1/y²) dy = ∞.

Furthermore,

s(x) = ∫_0^x ρ(y) dy ≤ xρ(x),  x ∈ (0, a],   (5.8)

and consequently,

∫_0^a s(x)/(ρ(x)σ²(x)) dx = ∫_0^a s(x)/ρ(x) dx < ∞.

Moreover, by (5.8) and (2.16),

∫_0^a |b(x)|s(x)/(ρ(x)σ²(x)) dx < ∞.   (5.9)

Thus, zero has right type 1. □
Theorem 5.11. Let b ≥ 0 on (0, a]. Then zero can only have right type 0, 1, 2, 3, 4, or 5.

Proof. Assume that zero has right type 6. Then (5.6) holds. In view of (5.8), s(x)/ρ(x) → 0 as x ↓ 0. Using (2.16), we get

∫_0^a b(x)|s(x)|/(ρ(x)σ²(x)) dx < ∞.

This means that zero cannot have right type 6. □
Theorem 5.12. Let b ≤ 0 on (0, a]. Then zero can only have right type 0, 1, or 4.

Proof. The condition b ≤ 0 implies (5.6). Consequently, the right type of zero is one of 0, 1, 2, 4, 6. Inequalities (5.8), (5.9) hold true, and hence, zero cannot have right type 6. It is also clear that the conditions

∫_0^a (1 + |b(x)|)/(ρ(x)σ²(x)) dx < ∞   and   ∫_0^a |b(x)|/σ²(x) dx = ∞

cannot hold simultaneously (note that ρ is bounded on [0, a]). Hence, zero cannot have right type 2. □

Remark. Theorems 5.9–5.12 cannot be strengthened. Indeed, Figure 5.1 shows that all the right types allowed by Theorems 5.9–5.12 are realized.
Table 5.1. Possible right types in the case of a constant-sign drift

      | σ² = 1  | arbitrary σ
b ≥ 0 | 0, 2, 3 | 0, 1, 2, 3, 4, 5
b ≤ 0 | 0, 1    | 0, 1, 4
It follows from Theorems 5.9–5.12 that type 6 can be realized only if the drift coefficient has alternating signs. The example below shows that this type is realized.

Example 5.13. Let ρ be an absolutely continuous function on (0, a] such that

1 ≤ ρ(x) ≤ 2,  x ∈ (0, a],   ∫_0^a x|ρ′(x)| dx = ∞.

Take

σ(x) = 1,  b(x) = −ρ′(x)/(2ρ(x)).

Then zero has right type 6. The proof is straightforward.
Remark. Suppose that zero has right type 6 and x0 > 0. Let P denote the solution of (1) up to T0,a−. Obviously, there exists a unique measure P̃ on F_{T0,a} such that P̃|F_{T0,a−} = P and X is stopped P̃-a.s. when it reaches 0 or a. However, the measure P̃|F_{T0,a} is not a solution defined up to T0,a. The reason is that

P̃{ ∫_0^{T0,a} |b(Xs)| ds = ∞ } > 0

(see the proof of Theorem 2.14 (i)). In particular, the integral

∫_0^{T0,a} b(Xs) ds   (5.10)

cannot be defined in the usual Lebesgue–Stieltjes sense. On the other hand, there exists lim_{t↑T0,a} Xt P̃-a.s. It would be interesting to investigate whether there exists

lim_{t↑T0,a} ∫_0^t b(Xs) ds.   (5.11)

For example, if σ = 1, then, in view of the equality

Xt = x0 + ∫_0^{t∧T0,a} b(Xs) ds + Mt,  t ≥ 0,

limit (5.11) exists P̃-a.s. (Note that Xt tends P̃-a.s. to a limit as t ↑ T0,a and Mt tends P̃-a.s. to a limit as t ↑ T0,a.) If limit (5.11) exists P̃-a.s. (or in P̃-probability), then it can be taken as the principal value of (5.10). In this case we could modify Definition 1.31 and say that P̃|F_{T0,a} is a solution (in the generalized sense) defined up to T0,a. Thus, if we accept this generalized definition, then types 6 and 1 are merged together. The investigation of the existence of limit (5.11) may lead to the study of the principal values of integral functionals of diffusion processes (see [6], [47] for results concerning principal values of integral functionals of Brownian motion).
5.4 Equations with a Constant-Sign Drift: Types of Infinity This section is included for the sake of completeness. However, it is totally trivial. Throughout this section, we assume that (4.1) is true.
It is seen from the results of Section 5.2 that, under the conditions b ≥ 0 and σ² = 1, +∞ can have all types A, B, C. Therefore, we formulate the statement related only to the case where b ≤ 0.

Theorem 5.14. Let b ≤ 0. Then +∞ has type A.

The proof is straightforward.

Table 5.2. Possible types of +∞ in the case of a constant-sign drift

      | σ² = 1  | arbitrary σ
b ≥ 0 | A, B, C | A, B, C
b ≤ 0 | A       | A
Approximation Algorithms for Independent Sets in Map Graphs Zhi-Zhong Chen Department of Mathematical Sciences, Tokyo Denki University, Hatoyama, Saitama 350-0394, Japan.
[email protected]

Abstract. This paper presents polynomial-time approximation algorithms for the problem of computing a maximum independent set in a given map graph G with or without weights on its vertices. If G is given together with a map, then a ratio of 1 + δ can be achieved in O(n²) time for any given constant δ > 0, no matter whether each vertex of G is given a weight or not. In case G is given without a map, a ratio of 4 can be achieved in O(n⁷) time if no vertex is given a weight, while a ratio of O(log n) can be achieved in O(n⁷ log n) time otherwise. Behind the design of our algorithms are several fundamental results for map graphs; these results can be used to design good approximation algorithms for coloring and vertex cover in map graphs, and may find applications to other problems on map graphs as well.
1 Introduction
An independent set in a graph G = (V, E) is a subset I of V such that no two vertices of I are connected by an edge in E. If each vertex of G is associated with a weight, the weight of an independent set I in G is the total weight of vertices in I. Given a vertex-weighted graph G = (V, E), the maximum independent set (MIS) problem requires the computation of an independent set of maximum weight in G. The special case where each vertex of G has weight 1 is called the unweighted maximum independent set (UMIS) problem. The MIS problem is a fundamental problem in many areas; unfortunately, even the UMIS problem is widely known to be NP-hard. Since the MIS problem is NP-hard, we are interested in designing approximation algorithms, i.e. polynomial-time algorithms that find an independent set of large but not necessarily maximum weight in a given graph. An approximation algorithm A achieves a ratio of ρ if for every vertex-weighted graph G, the independent set I found by A on input G has weight at least OPT(G)/ρ, where OPT(G) is the maximum weight of an independent set in G. Håstad [6] gave strong evidence that no approximation algorithm for the UMIS problem achieves a ratio of n^{1−ε} for any ε > 0, where n is the number of vertices in the input graph. Due to this, a lot of work has been devoted to designing approximation algorithms for the MIS problem restricted to certain
D.-Z. Du et al. (Eds.): COCOON 2000, LNCS 1858, pp. 105–114, 2000. © Springer-Verlag Berlin Heidelberg 2000
classes of graphs (such as planar graphs, bounded-degree graphs, etc). This paper considers the MIS problem restricted to map graphs, which were introduced by Chen et al. [5] recently. Intuitively speaking, a map is a plane graph whose faces are classified into nations and lakes. Traditional planarity says that two nations on a map M are adjacent if and only if they share at least a borderline. Motivated by topological inference, Chen et al. [5] considered a modification of traditional planarity, which says that two nations on M are adjacent if and only if they share at least a border-point. Consider the simple graph G whose vertices are the nations on M and whose edges are all {f1, f2} such that f1 and f2 are nations sharing at least a border-point. G is called a map graph [5,10]; if for some integer k, no point in M is shared by more than k nations, G is called a k-map graph [5]. The UMIS problem restricted to map graphs is NP-hard, because planar graphs are exactly 3-map graphs [5] and the UMIS problem restricted to planar graphs is NP-hard. It is known [1,7] that the MIS problem restricted to planar graphs has a polynomial-time approximation scheme (PTAS). Recall that a PTAS for a maximization (respectively, minimization) problem Π is a polynomial-time algorithm A such that given an instance I of Π and an error parameter ε > 0, A computes a solution S of I in time polynomial in the size of I such that the value of S is at least (1 − ε) · OPT(I) (respectively, at most (1 + ε) · OPT(I)), where OPT(I) is the optimal value of a solution of I. An interesting question is to ask whether the MIS problem restricted to map graphs also has a PTAS. Regarding this question, we show that if the input graph is given together with a map, then the MIS problem restricted to map graphs has a practical PTAS. Combining this result and the Thorup algorithm [10] for constructing a map from a given map graph, we obtain a PTAS for the MIS problem restricted to map graphs.
Unfortunately, the Thorup algorithm is extremely complicated and the exponent of its polynomial time bound is about 120. So, it is natural to assume that no map is given as part of the input and no map construction is allowed in approximation algorithms. Under this assumption, we first show that the MIS problem restricted to k-map graphs for any fixed integer k has a PTAS. This PTAS runs in linear time, but is impractical because it uses the impractical Bodlaender algorithm [2] for tree-decomposition. Using this PTAS, we then obtain an O(n⁷)-time approximation algorithm for the UMIS problem restricted to general map graphs that achieves a ratio of 4. We also give two relatively more practical approximation algorithms. One of them is for the MIS problem restricted to general map graphs, runs in O(n⁷ log n) time, and achieves a ratio of O(log n). The other is for the UMIS problem restricted to general map graphs, runs in O(n⁷) time, and achieves a ratio of 5. Behind the design of our algorithms are several fundamental results for map graphs; these results can be used to obtain the following results. First, there is a PTAS for the minimum weighted vertex cover (MWVC) problem restricted to map graphs. Second, a map graph G with n vertices and m edges can be colored in O(n²m) time using at most 2 · χ(G) colors, where χ(G) is the chromatic number of G. Third, for all optimization problems Π such that Π restricted
to planar graphs has a PTAS as shown in [1], Π restricted to k-map graphs has a PTAS as well for any fixed integer k. These results do not involve map constructions. They may find applications to other problems on map graphs as well. The rest of this paper is organized as follows. Section 2 gives precise definitions and states known results that are needed later on. Section 3 presents a PTAS for the MIS problem where the input graph is given together with a map. Section 4 describes two PTASs for the MIS problem restricted to k-map graphs. Section 5 presents nontrivial approximation algorithms for the MIS and UMIS problems restricted to map graphs. Section 6 asks several open questions. See http://rnc2.r.dendai.ac.jp/~chen/papers/mapis.ps.gz for a full version of this paper.
2 Preliminaries
Throughout this paper, a graph may have multiple edges but no loops, while a simple graph has neither multiple edges nor loops. Let G = (V, E) be a graph. The subgraph of G induced by a subset U of V is denoted by G[U ]. An independent set of G is a subset U of V such that G[U ] has no edge. The independence number of G, denoted by α(G), is the maximum number of vertices in an independent set of G. Let v ∈ V . The degree of v in G is the number of edges incident to v in G; v is an isolated vertex if its degree is 0. The neighborhood of v in G, denoted by NG (v), is the set of vertices adjacent to v in G. Note that |NG (v)| bounds the degree of v in G from below. For a subset U of V , let NG (U ) = ∪v∈U NG (v). Let E be a plane graph. Let f be a face of E. The boundary of f is the graph (Vf , Ef ), where Vf and Ef are the set of vertices and edges surrounding f , respectively. The size of f , denoted by |f |, is |Vf |. We call f a cycle face if its boundary is a simple cycle. A map M is a pair (E, ϕ), where (1) E is a plane graph (P, EE ) and each connected component of E is biconnected, and (2) ϕ is a function that maps each inner face f of E to 0 or 1 in such a way that whenever ϕ(f ) = 1, f is a cycle face. A face f of E is called a nation on M if f is an inner face with ϕ(f ) = 1, while called a lake on M otherwise. For an integer k, a k-point in M is a p ∈ P that appears on the boundaries of exactly k nations on M; a k + -point in M is a p ∈ P that appears on the boundaries of more than k nations on M. M is a k-map if there is no k + -point in M. Two nations are adjacent if their boundaries share a p ∈ P. The graph of M is the simple graph G = (V, EG ) where V consists of the nations on M and EG consists of all {f1 , f2 } such that f1 and f2 are adjacent. We call G a map graph, while call G a k-map graph if M is a k-map. For clarity, we call the elements of P points, while call the elements of V vertices. 
A nation f on M is isolated if f is an isolated vertex of G. The half square of a bipartite graph H = (X, Y ; EH ) is the simple graph G = (X, E) where E = {{x1 , x2 } | there is a y ∈ Y with {x1 , y} ∈ EH and {x2 , y} ∈ EH }.
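The half-square operation just defined can be sketched directly (a sketch; the adjacency-list encoding of the bipartite graph and the function name are ours):

```python
def half_square(x_nodes, y_nodes, edges):
    """Half square of a bipartite graph H = (X, Y; E_H): the simple
    graph on X joining x1, x2 whenever some y in Y is adjacent to
    both (per the definition above). Edges are given as (x, y) pairs;
    the result is a set of sorted vertex pairs."""
    adj = {y: set() for y in y_nodes}
    for x, y in edges:
        adj[y].add(x)
    sq = set()
    for y in y_nodes:
        xs = sorted(adj[y])
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                sq.add((xs[i], xs[j]))
    return sq

# Two nations sharing a border point 'p' become adjacent:
assert half_square([1, 2, 3], ['p'], [(1, 'p'), (2, 'p')]) == {(1, 2)}
```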
108
Zhi-Zhong Chen
Lemma 1. [5] Every map graph G = (V, E) is the half square of a bipartite planar graph H = (V, P; EH ) with |P| ≤ 3|V | − 6. For a map M = (E, ϕ) and a face f in E, we say that f touches a point p if p appears on the boundary of f .
3 A PTAS for the Case with a Given Map
Let G = (V, E) be the input map graph given with a map M = ((P, EE), ϕ) and a nonnegative weight for each nation on M. Let ε be the given error parameter. Let H be the bipartite planar graph (V, P; EH) where EH = {{v, p} | v ∈ V, p ∈ P, and p is a point on the boundary of nation v}. Obviously, H can be constructed in linear time from M. Moreover, G is the half square of H. Indeed, Lemma 1 is proved by further removing all redundant points from H, where a point p ∈ P is redundant if for every pair {u, v} of neighbors of p in H, there is a point q ∈ P − {p} with {u, v} ⊆ NH(q). Our PTAS is a nontrivial application of the Baker approach. Let h = ⌈1/ε⌉. We may assume that G and hence H are connected. Starting at an arbitrary vertex r ∈ V, we perform a breadth-first search (BFS) on H to obtain a BFS tree TH. For each vertex v in H, we define the level number ℓ(v) of v to be the number of edges on the path from r to v in TH. Note that only r has level number 0. For each i ∈ {0, ..., h − 1}, let Hi be the subgraph of H obtained from H by deleting all vertices v with ℓ(v) ≡ 2i (mod 2h). Obviously, each v ∈ V appears in exactly h − 1 of H0 through Hh−1. Moreover, Hi is clearly a (2h − 1)-outerplanar graph. For each i ∈ {0, ..., h − 1}, let Gi be the half square of Hi. The crux is that each Gi is an induced subgraph of G, because all p ∈ P appear in Hi. Our strategy is to compute an independent set Ii of maximum weight in Gi for each i ∈ {0, ..., h − 1}. Since each vertex of G appears in all but one of G0 through Gh−1, the independent set of maximum weight among I0, ..., Ih−1 has weight at least (1 − 1/h) · OPT(G) ≥ (1 − ε) · OPT(G). Recall that OPT(G) is the maximum weight of an independent set in G. So, it remains to show how to compute a maximum weight independent set Ii in Gi for each i ∈ {0, ..., h − 1}. To do this, we first recall the notion of treewidth.
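The level-deletion step above can be sketched as follows (a sketch under our own encoding: `adj` is an adjacency dict of the connected bipartite graph H, `v_set` marks the V-vertices, and the function name is ours). Since H is bipartite with the BFS root in V, every V-vertex has an even level, so each one is deleted in exactly one of the h pieces:

```python
from collections import deque

def baker_pieces(adj, root, v_set, h):
    """BFS from root; for i = 0..h-1, drop the V-vertices whose BFS
    level is congruent to 2i (mod 2h). Returns the h vertex subsets.
    Assumes the graph is connected, as in the text."""
    level = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in level:
                level[w] = level[u] + 1
                queue.append(w)
    pieces = []
    for i in range(h):
        kept = {u for u in adj
                if not (u in v_set and level[u] % (2 * h) == 2 * i)}
        pieces.append(kept)
    return pieces
```

On a small example every V-vertex survives in exactly h − 1 pieces, while the point vertices are never deleted.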
A tree-decomposition of a graph K = (VK , EK ) is a pair ({Xj | j ∈ J}, T ), where {Xj | j ∈ J} is a family of subsets of VK and T is a tree with vertex-set J such that the following hold: – ∪j∈J Xj = VK . – For every edge {v, w} ∈ EK , there is a j ∈ J with {v, w} ⊆ Xj . – For all j1 , j2 , j3 ∈ J, if j2 lies on the path from j1 to j3 in T , then Xj1 ∩Xj3 ⊆ Xj2 . Each Xj is called the bag of vertex j. The treewidth of a tree-decomposition ({Xj | j ∈ J}, T ) is max{|Xj | − 1 | j ∈ J}. The treewidth of K is the minimum treewidth of a tree-decomposition of K, taken over all possible treedecompositions of K.
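The three conditions can be checked mechanically. Below is a validator sketch (our encoding: bags as a dict from tree-node to vertex set, the tree as an edge list); for the third condition it uses the standard equivalent form that, for each vertex v, the tree nodes whose bags contain v induce a connected subtree:

```python
def is_tree_decomposition(vertices, edges, bags, tree_edges):
    # 1. every vertex of K is covered by some bag
    if set(vertices) != set().union(*bags.values()):
        return False
    # 2. every edge of K is contained in some bag
    for v, w in edges:
        if not any(v in b and w in b for b in bags.values()):
            return False
    # 3. for each vertex, the bags containing it form a subtree
    #    (connectivity check on the induced subgraph of the tree)
    for v in vertices:
        nodes = {j for j, b in bags.items() if v in b}
        seen = {next(iter(nodes))}
        frontier = list(seen)
        while frontier:
            j = frontier.pop()
            for a, b in tree_edges:
                for x, y in ((a, b), (b, a)):
                    if x == j and y in nodes and y not in seen:
                        seen.add(y)
                        frontier.append(y)
        if seen != nodes:
            return False
    return True

# Path a-b-c with bags {a,b}, {b,c}: a decomposition of treewidth 1.
assert is_tree_decomposition(['a', 'b', 'c'], [('a', 'b'), ('b', 'c')],
                             {1: {'a', 'b'}, 2: {'b', 'c'}}, [(1, 2)])
```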
Approximation Algorithms for Independent Sets in Map Graphs
109
It is widely known that the MIS problem restricted to graphs with bounded treewidth can be solved in linear time. However, the treewidths of our graphs G0, ..., Gh−1 may not be bounded. Our strategy is to resort to H0, ..., Hh−1 instead, and do a dynamic programming on them. The result is the following:

Theorem 1. There is a PTAS for the special case of the MIS problem restricted to map graphs where the input graph is given together with a map M = (E, ϕ). It runs in O(h|E|² + 2^{12h−8} h² |E|) time, where h = ⌈1/ε⌉ and ε is the given error parameter.
4 PTASs for k-Map Graphs
We use an extension of the Baker approach, suggested in [4]. The only difference between the extension and the Baker approach is that instead of decomposing the given graph G into k′-outerplanar subgraphs, we decompose G into subgraphs of bounded treewidth such that each vertex of G appears in all but one of these subgraphs. The next lemma shows that such a decomposition is possible for a k-map graph.

Lemma 2. Given a k-map graph G and an integer h ≥ 2, it takes O(h|G|) time to compute a list L of h induced subgraphs of G such that (1) each subgraph in L is of treewidth ≤ k(6h − 4) and (2) each vertex of G appears in exactly h − 1 of the subgraphs in L.

By the proof of Lemma 2, for all optimization problems Π such that Π restricted to planar graphs has a PTAS as shown in [1], we can emulate Baker [1] to show that Π restricted to k-map graphs has a PTAS as well for any fixed integer k. In particular, we have:

Theorem 2. For any fixed integer k, there is a PTAS for the MIS problem restricted to k-map graphs. It runs in O(2^{32(6kh−4k)³} hn) time, where n is the size of the given k-map graph, h = ⌈1/ε⌉, and ε is the given error parameter.

The PTAS in Theorem 2 is impractical. We next show a different PTAS for the special case of the UMIS problem where the input graph is given together with a k-map. This PTAS is based on the Lipton–Tarjan approach. Although this PTAS is not better than the one in Section 3, the tools used in its design seem to be fundamental. Indeed, the tools will be used in the next section, and we believe that they can be used in other applications. A map M = (E, ϕ) is well-formed if (1) the degree of each point in E is at least 3, (2) no edge of E is shared by two lakes, (3) no lake touches a point whose degree in E is 4 or more, and (4) the boundary of each lake contains at least 4 edges. The next lemma shows that a map can easily be made well-formed.

Lemma 3.
Given a map M = (E, ϕ) with at least three nations and without isolated nations, it takes O(|E|) time to obtain a well-formed map M0 such that (1) the graph of M0 is the same as that of M and (2) for every integer k ≥ 2, M is a k-map if and only if so is M0 .
110
Zhi-Zhong Chen
Let M = (E, ϕ) be a well-formed map. Then, each point on the boundary of a lake on M is a 2-point in M and has degree 3 in E; moreover, for all integers i ≥ 3, each i-point in M has degree i in E. Using these facts, we can prove the following lemma, which improves a result in [5] significantly.

Lemma 4. Let k ≥ 3 and G be a k-map graph with n ≥ 3 vertices and m edges. Then, m ≤ kn − 2k.

From Lemma 4, we can prove the following corollary.

Corollary 1. The following hold:
1. Given an integer k and a k-map graph G, it takes O(|G|) time to color G with at most 2k colors. Consequently, given a k-map M = (E, ϕ), it takes O(k|E|) time to color the graph of M with at most 2k colors.
2. Given a map graph G = (V, E), we can color G in O(|V|² · |E|) time using at most 2 · χ(G) colors, where χ(G) is the chromatic number of G.

Indeed, it has been known for a long time that k-map graphs are 2k-colorable [9], but Corollary 1 has the advantage of being algorithmic. It is also known that k-map graphs with k ≥ 8 can be colored with at most 2k − 3 colors [3], but the proof does not yield an efficient algorithm.

Let G be a connected graph with nonnegative weights on its vertices. The weight of a subgraph of G is the total weight of the vertices in the subgraph. A separator of G is a subgraph H of G such that deleting the vertices of H from G leaves a graph whose connected components each have weight ≤ 2W/3, where W is the weight of G. The size of a separator H is the number of vertices in H. The following lemma shows that k-map graphs have small separators.

Lemma 5. Given a k-map M = (E, ϕ) on which each nation has an associated nonnegative weight and at least two nations exist, it takes O(|E|) time to find a separator H of the graph G of M such that the size of H is at most 2√(2k(n − 1)).

Using Lemma 5, we can emulate Lipton and Tarjan [8] to prove the following theorem:

Theorem 3.
For any fixed integer k, there is a PTAS for the special case of the UMIS problem restricted to k-map graphs where the input graph is given together with a k-map M = (E, ϕ). It runs in O(|E| log |E| + 2^(288h²k³) · n) time, where n is the number of nations on M, h = ⌈1/ε⌉, and ε is the given error parameter.
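Part 1 of Corollary 1 rests on Lemma 4: since m ≤ kn − 2k, every subgraph of G has a vertex of degree at most 2k − 1, so peeling off a minimum-degree vertex yields an elimination order and greedy coloring in reverse order never needs more than 2k colors. The following Python sketch shows this argument; it is not the paper's implementation and omits the bucketing needed for the stated O(|G|) bound.

```python
def degeneracy_color(adj, k):
    """Color a k-map graph with at most 2k colors.

    By Lemma 4, m <= kn - 2k, so every subgraph has a vertex of degree
    <= 2k - 1.  Repeatedly removing a minimum-degree vertex yields an
    elimination order; greedy coloring in reverse order then sees at most
    2k - 1 already-colored neighbors at each step."""
    peel = {v: set(ns) for v, ns in adj.items()}  # working copy
    order = []
    while peel:
        v = min(peel, key=lambda u: len(peel[u]))  # minimum-degree vertex
        order.append(v)
        for u in peel[v]:
            peel[u].discard(v)
        del peel[v]
    color = {}
    for v in reversed(order):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(2 * k) if c not in used)
    return color
```

On any graph satisfying the degeneracy bound, the returned coloring is proper and uses colors 0, …, 2k − 1 only.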
5 Approximation Algorithms for Map Graphs
This section presents four approximation algorithms. The first three are for the UMIS problem restricted to map graphs. The fourth is for the MIS problem restricted to map graphs. The input to the first algorithm is a map instead of a map graph; this algorithm is given here for the purpose of helping the reader
Approximation Algorithms for Independent Sets in Map Graphs
111
understand the other three algorithms. Also, the second algorithm is relatively more practical than the third.

For a well-formed map M = (E, ϕ), let G(M) be the plane graph obtained from the dual plane graph E* of E by deleting all vertices that are lakes on M. Note that corresponding to each 2+-point p in E, there is a unique face Fp in G(M) such that the vertices on the boundary of Fp are exactly the nations on M that touch p; the boundary of Fp is a simple cycle. We call Fp a point-face of G(M). Those faces of G(M) that are not point-faces are called lake-faces.

Consider the following algorithm, called MapIS1:

Input: A map M = (E, ϕ).
1. If at most two nations are on M, then return a maximum weight independent set of the graph of M.
2. If there is an isolated nation on M, then find such a nation f; otherwise, perform the following substeps:
   2.1. Modify M to a well-formed map, without altering the graph of M.
   2.2. Construct G(M).
   2.3. Find a minimum-degree vertex f in G(M).
3. Construct a new map M′ = (E, ϕ′), where for all faces f1 of E, if f1 = f or f1 is a neighbor of f in the graph of M, then ϕ′(f1) = 0, while ϕ′(f1) = ϕ(f1) otherwise.
4. Recursively call MapIS1 on input M′, to compute an independent set J in the graph of M′.
5. Return {f} ∪ J.

Theorem 4. MapIS1 achieves a ratio of 5 and runs in O(|E|²) time.

Proof. (Sketch) The number of edges in G(M) is at most 3n − 6. So, the degree of f in G(M) is at most 5. We can then claim that the neighborhood of f in G can be divided into at most five cliques.

Consider the following algorithm, called MapIS2:

Input: A map graph G = (V, E).
1. If G has at most two vertices, then return a maximum weight independent set of G.
2. Find a vertex f in G such that α(Gf) ≤ 5, where Gf = G[NG(f)].
3. Remove f and its neighbors from G.
4. Recursively call MapIS2 on input G, to compute an independent set J.
5. Return {f} ∪ J.

Theorem 5. MapIS2 achieves a ratio of 5, and runs in O(|V| · Σ_{f∈V} |NG(f)|^6) time.
Proof. By the proof of Theorem 4, Step 2 of MapIS2 can always be done.
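A toy Python rendition of the greedy loop in MapIS2 may help; it is not the paper's code. The brute-force search for an independent set of size 6 in each neighborhood is what produces the Σ_{f∈V} |NG(f)|^6 factor, and the sketch assumes the input really is a map graph, so a vertex f with α(Gf) ≤ 5 always exists (unweighted version).

```python
from itertools import combinations

def has_independent_set(adj, vertices, size):
    """Brute-force test for an independent set of the given size among
    'vertices'; trying all size-6 subsets of a neighborhood is the source
    of the |N_G(f)|^6 factor in Theorem 5."""
    for cand in combinations(vertices, size):
        s = set(cand)
        if all(adj[u].isdisjoint(s - {u}) for u in s):
            return True
    return False

def map_is2(adj):
    """Greedy loop in the spirit of MapIS2 (unweighted): pick a vertex f
    whose neighborhood contains no independent set of size 6 (such f exists
    in any map graph by the proof of Theorem 4), keep f, delete its closed
    neighborhood, and repeat.  Each deleted closed neighborhood meets an
    optimal solution in at most 5 vertices, giving the ratio of 5."""
    adj = {v: set(ns) for v, ns in adj.items()}
    result = set()
    while adj:
        f = next(v for v in adj
                 if not has_independent_set(adj, list(adj[v]), 6))
        result.add(f)
        dead = adj[f] | {f}
        adj = {v: adj[v] - dead for v in adj if v not in dead}
    return result
```

On a path of five vertices the loop keeps the endpoints and the middle vertex, which here happens to be optimal.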
Consider the following algorithm, called MapIS3:

Input: A map graph G = (V, E), an integer k ≥ 4, and a positive real number ε less than 1.
1. If G has at most two vertices, then return a maximum weight independent set of G.
2. Find a vertex f in G such that α(Gf) is minimized among all vertices f of G, where Gf = G[NG(f)].
3. If α(Gf) ≤ 4, then perform the following substeps:
   3.1. Remove f and its neighbors from G.
   3.2. Recursively call MapIS3 on input G to compute an independent set J.
   3.3. Return {f} ∪ J.
4. Repeat the following substeps until G has no clique of size ≥ k + 1 (and hence is a k-map graph):
   4.1. Find a maximal clique C in G such that C contains at least k + 1 vertices.
   4.2. Remove all vertices of C from G.
5. Color G with at most 2k colors (cf. Corollary 1); let J1 be the maximum set of vertices with the same color.
6. Compute an independent set J2 in G with |J2| ≥ (1 − ε)α(G).
7. Return the bigger one among J1 and J2.

To analyze MapIS3, we need the following lemma:

Lemma 6. [5] Let G be the graph of a map M = (E, ϕ). Let C be a clique of G containing at least five vertices. Then, at least one of the following holds:
– C is a pizza, i.e., there is a point p in E such that all f ∈ C touch p.
– C is a pizza-with-crust, i.e., there are a point p in E and an f1 ∈ C such that (1) all f ∈ C − {f1} touch p and (2) for each f ∈ C − {f1}, some point q ≠ p in E is touched by both f1 and f.
– C is a hamantasch, i.e., there are three points in E such that each f ∈ C touches at least two of the points and each point is touched by at least two nations.

Theorem 6. If we fix k = 28 and ε = 0.25, then MapIS3 achieves a ratio of 4 and runs in O(|V| · Σ_{f∈V} |NG(f)|^6 + |V|³ · |E|) time.

Proof. Let G0 be the input graph. We may assume that G0 is connected. By the proof of Theorem 5, it suffices to prove that if α(Gf) ≥ 5, then J1 found in Step 5 or J2 found in Step 6 has size ≥ α(G0)/4. So, suppose that α(Gf) ≥ 5.
Let n be the number of vertices in G0. Fix a well-formed map M of G0. For each integer i > k, let hi be the number of cliques C in G0 such that (1) C contains exactly i vertices and (2) C is found in Substep 4.1 by the algorithm. We claim that the number x of edges in G(M) is bounded from above by 3n − 6 − Σ_{i=k+1}^∞ hi(i − 4). To see this claim, let F1, . . . , Ft be the point-faces in G(M) whose boundaries each contain at least three vertices. For
each j ∈ {1, . . . , t}, fix a vertex fj on the boundary of Fj. We can triangulate each face Fj by adding exactly |Fj| − 3 new edges incident to fj; call these new edges imaginary edges incident to fj. Thus, x ≤ 3n − 6 − Σ_{j=1}^t (|Fj| − 3). Let C be a clique of G0 found in Substep 4.1. We say that an imaginary edge {fj, f′} with j ∈ {1, . . . , t} is contributed by C if f′ is contained in C. Let i be the number of vertices in C. To prove the claim, it remains to prove that the number of imaginary edges contributed by C is at least i − 4. By Lemma 6, only the following three cases can occur:
– Case 1: C is a pizza. Then, there is a point p in M such that all nations f′ in C touch p on M. Let Fj be the point-face in G(M) that corresponds to p. No matter whether fj is in C or not, at least i − 3 imaginary edges incident to fj are contributed by C.
– Case 2: C is a pizza-with-crust. Then, there is a point p in M such that |C| − 1 nations in C touch p on M. Similarly to Case 1, at least (i − 1) − 3 imaginary edges are contributed by C.
– Case 3: C is a hamantasch. Then, there are three points p1 through p3 in M such that each nation f′ ∈ C touches at least two of the three points on M and each point is touched by at least two nations in C. Since i ≥ k + 1 ≥ 5, at most one of p1 through p3 is a 2-point in M. Moreover, if one of p1 through p3 is a 2-point, then C is actually a pizza-with-crust and Case 2 applies. So, we may assume that p1 through p3 are 2+-points in M. Let Fj1 be the point-face in G(M) that corresponds to p1, and a1 be the number of nations in C that touch p1 on M. Define Fj2, a2, Fj3, and a3 similarly. Then, the number of imaginary edges incident to fj1 (respectively, fj2, or fj3) contributed by C is at least a1 − 3 (respectively, a2 − 3, or a3 − 3). Thus, the number of imaginary edges contributed by C is at least (a1 + a2 + a3) − 9 ≥ 2i − 9 ≥ i − 4.
By the claim, x ≤ 3n − 6 − Σ_{i=k+1}^∞ hi(i − 4). The degree of f in G(M) is at most ⌊2x/n⌋. On the other hand, since α(Gf) ≥ 5, the degree of f in G(M) is at least 5. Thus, 2 Σ_{i=k+1}^∞ hi(i − 4) ≤ n. In turn, Σ_{i=k+1}^∞ i·hi ≤ n(k + 1)/(2(k − 3)) and Σ_{i=k+1}^∞ hi ≤ n/(2(k − 3)).

Let J be the bigger one among J1 and J2. By Step 6, |J| ≥ (1 − ε)(α(G0) − Σ_{i=k+1}^∞ hi). By Step 5 and Corollary 1, |J| ≥ (1/(2k)) · (n − Σ_{i=k+1}^∞ i·hi). Thus, |J|/α(G0) ≥ (1 − ε)(1 − (1/(2(k − 3))) · n/α(G0)) and |J|/α(G0) ≥ ((k − 7)/(4k(k − 3))) · n/α(G0). As the value of n/α(G0) decreases, the first lower bound on |J|/α(G0) increases while the second lower bound on |J|/α(G0) decreases. The worst case happens when n/α(G0) = 4k(k − 3)(1 − ε)/(3k − 2εk − 7). Therefore, |J|/α(G0) ≥ (1 − ε) · (k − 7)/(3k − 2εk − 7). By fixing ε = 1/4 and k = 28, we have |J|/α(G0) ≥ 1/4.
Finally, the proof of Theorem 4 leads to an approximation algorithm for the MIS problem restricted to map graphs that achieves a logarithmic ratio.
Theorem 7. Given a vertex-weighted map graph G = (V, E), we can find an independent set of G whose weight is at least (log(7/6)/(6 log |V|)) · OPT(G), in O((Σ_{f∈V} |NG(f)|^7 + |G|) · log |V|) time.
6 Concluding Remarks
The impracticality of the PTAS in Theorem 2, and hence that of MapIS3, mainly results from the impracticality of the Bodlaender algorithm [2] for tree-decomposition. A more practical algorithm for tree-decomposition would make our algorithms more practical. Is there a practical PTAS for the MIS or UMIS problem restricted to map graphs? It would also be nice if one could design approximation algorithms for these problems that achieve better ratios than ours. Can MapIS1 and MapIS2 be implemented to run faster than as claimed? Finally, we point out that Lemma 2 can be used to design a PTAS for the minimum weighted vertex cover (MWVC) problem restricted to map graphs.

Theorem 8. There is a PTAS for the MWVC problem restricted to map graphs. It runs in O(n²) time.
Acknowledgments: I thank Professors Seinosuke Toda and Xin He for helpful discussions, and a referee for helpful comments on presentation.

References

1. B.S. Baker. Approximation algorithms for NP-complete problems on planar graphs. J. ACM 41 (1994) 153-180.
2. H.L. Bodlaender. A linear time algorithm for finding tree-decompositions of small treewidth. SIAM J. Comput. 25 (1996) 1305-1317.
3. O.V. Borodin. Cyclic coloring of plane graphs. Discrete Math. 100 (1992) 281-289.
4. Z.-Z. Chen. Efficient approximation schemes for maximization problems on K3,3-free or K5-free graphs. J. Algorithms 26 (1998) 166-187.
5. Z.-Z. Chen, M. Grigni, and C.H. Papadimitriou. Planar map graphs. STOC'98, 514-523. See http://rnc2.r.dendai.ac.jp/~chen/papers/mg2.ps.gz for a full version.
6. J. Håstad. Clique is hard to approximate within n^(1−ε). STOC'96, 627-636.
7. R.J. Lipton and R.E. Tarjan. A separator theorem for planar graphs. SIAM J. Appl. Math. 36 (1979) 177-189.
8. R.J. Lipton and R.E. Tarjan. Applications of a planar separator theorem. SIAM J. Computing 9 (1980) 615-627.
9. O. Ore and M.D. Plummer. Cyclic coloration of plane graphs. Recent Progress in Combinatorics, pp. 287-293, Academic Press, 1969.
10. M. Thorup. Map graphs in polynomial time. FOCS'98, 396-405.
References
1. Assing, S., Senf, T. (1991): On stochastic differential equations without drift. Stochastics and Stochastics Reports, 36, No. 1, 21–39
2. Barlow, M.T. (1982): One-dimensional stochastic differential equations with no strong solution. Journal of the London Mathematical Society, 26, 335–347
3. Borodin, A.N., Salminen, P. (2002): Handbook of Brownian Motion – Facts and Formulae. 2nd Ed. Birkhäuser
4. Cherny, A.S. (2000): Convergence of some integrals associated with Bessel processes. Theory of Probability and Its Applications, 45, No. 2, 251–267
5. Cherny, A.S. (2000): On the strong and weak solutions of stochastic differential equations governing Bessel processes. Stochastics and Stochastics Reports, 70, No. 3–4, 213–219
6. Cherny, A.S. (2001): Principal values of the integral functionals of Brownian motion: existence, continuity and an extension of Itô's formula. Lecture Notes in Mathematics, 1755, 348–370
7. Cherny, A.S. (2001): On the uniqueness in law and the pathwise uniqueness for stochastic differential equations. Theory of Probability and Its Applications, 46, No. 3, 483–497
8. Csörgő, M., Horváth, L., Shao, Q.-M. (1993): Convergence of integrals of uniform empirical and quantile processes. Stochastic Processes and Their Applications, 45, No. 2, 283–294
9. Dynkin, E.B. (1961): Theory of Markov processes. Prentice-Hall
10. Engelbert, H.-J. (1991): On the theorem of T. Yamada and S. Watanabe. Stochastics and Stochastics Reports, 36, 205–216
11. Engelbert, H.-J. (2000): Existence and non-existence of solutions of one-dimensional stochastic equations. Probability and Mathematical Statistics, 20, 343–358
12. Engelbert, H.-J., Schmidt, W. (1981): On the behaviour of certain functionals of the Wiener process and applications to stochastic differential equations. In: Stochastic Differential Systems, Lecture Notes in Control and Information Sciences, 36, 47–55. Springer
13. Engelbert, H.-J., Schmidt, W.
(1985): On one-dimensional stochastic differential equations with generalized drift. In: Stochastic Differential Systems, Lecture Notes in Control and Information Sciences, 69, 143–155. Springer
14. Engelbert, H.-J., Schmidt, W. (1985): On solutions of one-dimensional stochastic differential equations without drift. Probability Theory and Related Fields, 68, 287–314
15. Engelbert, H.-J., Schmidt, W. (1989, 1991): Strong Markov continuous local martingales and solutions of one-dimensional stochastic differential equations, I, II, III. Math. Nachr., 143, 167–184; 144, 241–281; 151, 149–197
16. Feller, W. (1952): The parabolic differential equations and the associated semigroups of transformations. Ann. Math., 55, 468–519
17. Feller, W. (1954): Diffusion processes in one dimension. Trans. Amer. Math. Soc., 77, 1–31
18. Gikhman, I.I. (1947): On a method of constructing random processes (in Russian). Dokl. Acad. Nauk SSSR, 58, 961–964
19. Gikhman, I.I. (1950, 1951): On the theory of differential equations for random processes, I, II (in Russian). Ukr. Math. J., 2, No. 4, 37–63; 3, No. 3, 317–339
20. Gikhman, I.I., Skorokhod, A.V. (1972): Stochastic Differential Equations. Springer
21. Girsanov, I.V. (1960): On transforming a certain class of stochastic processes by absolutely continuous substitution of measures. Theory of Probability and Its Applications, 5, No. 3, 285–301
22. Girsanov, I.V. (1962): An example of non-uniqueness of a solution to the stochastic equation of K. Itô. Theory of Probability and Its Applications, 7, No. 3, 325–331
23. Ikeda, N., Watanabe, S. (1989): Stochastic differential equations and diffusion processes. 2nd Ed. North-Holland
24. Itô, K. (1946): On a stochastic integral equation. Proc. Imp. Acad. Tokyo, 22, 32–35
25. Itô, K. (1951): On stochastic differential equations. Memoirs of the American Mathematical Society, 4, 1–51
26. Itô, K., McKean, H.P. (1974): Diffusion processes and their sample paths. 2nd Ed. Springer
27. Jacod, J., Shiryaev, A.N. (2003): Limit theorems for random processes. 2nd Ed. Springer
28. Kallenberg, O. (2002): Foundations of modern probability. 2nd Ed. Springer
29. Karatzas, I., Shreve, S.E. (1991): Brownian motion and stochastic calculus. 2nd Ed. Springer
30. Kolmogoroff, A.N. (1931): Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Math. Annalen, 104, 415–458
31. Krylov, N.V. (1966): On quasi-diffusion processes. Theory of Probability and Its Applications, 11, No. 3, 424–443
32. Krylov, N.V. (1969): On Itô's stochastic differential equations.
Theory of Probability and Its Applications, 14, No. 2, 330–336
33. Liptser, R.S., Shiryaev, A.N. (2001): Statistics of random processes. 2nd Ed. Springer
34. McKean, H.P. (1969): Stochastic integrals. Academic Press, New York
35. Nadirashvili, N.S. (1997): Nonuniqueness in the martingale problem and the Dirichlet problem for uniformly elliptic operators. Ann. Scuola Norm. Sup. Pisa Cl. Sci., 24, 537–550
36. Øksendal, B. (2003): Stochastic differential equations. 6th Ed. Springer
37. Pitman, J.W., Yor, M. (1986): Some divergent integrals of Brownian motion. In: Analytic and Geometric Stochastics (ed. D. Kendall). Supplement to Adv. Appl. Probab., 18, 109–116
38. Revuz, D., Yor, M. (1999): Continuous martingales and Brownian motion. 3rd Ed. Springer
39. Rogers, L.C.G., Williams, D. (2000): Diffusions, Markov processes, and martingales. 2nd Ed. Cambridge University Press
40. Safonov, M.V. (1999): Nonuniqueness for second-order elliptic equations with measurable coefficients. SIAM J. Math. Anal., 30, No. 4, 879–895
41. Shiryaev, A.N. (1978): Optimal stopping rules. Springer
42. Skorokhod, A.V. (1965): Studies in the theory of random processes. Addison-Wesley, Reading, MA
43. Stroock, D.W. (1993): Probability theory: an analytic view. Cambridge University Press
44. Stroock, D.W., Varadhan, S.R.S. (1969): Diffusion processes with continuous coefficients, I, II. Communications in Pure and Applied Mathematics, 22, 345–400; 479–530
45. Stroock, D.W., Varadhan, S.R.S. (1979): Multidimensional diffusion processes. Springer
46. Tsirelson, B. (1975): An example of a stochastic differential equation with no strong solution. Theory of Probability and Its Applications, 20, No. 2, 427–430
47. Yamada, T. (1996): Principal values of Brownian local times and their related topics. In: Itô's stochastic calculus and probability theory, 413–422. Springer
48. Yamada, T., Watanabe, S. (1971): On the uniqueness of solutions of stochastic differential equations. J. Math. Kyoto Univ., 11, 155–167
49. Zvonkin, A.K. (1974): A transformation of the phase space of a process that removes the drift. Math. USSR Sbornik, 93, No. 1, 129–149
50. Zvonkin, A.K., Krylov, N.V. (1975): On the strong solutions of stochastic differential equations. In: Proceedings of the school-seminar on random processes (Druskininkai, 1974), part 2, 9–88
Index of Notation
C(R+ )
the space of continuous functions R+ → R
C0 (R+ )
the space of continuous functions R+ → R vanishing at zero
C̄(R+)
the space of continuous functions R+ → R ∪ {π} killed at some time, 23
C(R+ , Rn )
the space of continuous functions R+ → Rn
Ft
σ(Xs; s ≤ t), the canonical filtration on C(R+) or C̄(R+)
Ft+
∩_{ε>0} Ft+ε, the right modification of the filtration (Ft)
FtZ
σ(Zs ; s ≤ t), the natural filtration of the process Z
F̄tZ
the completed natural filtration of the process Z, 6
FS
the collection of the sets A ∈ F such that, for any t ≥ 0, A ∩ {S ≤ t} ∈ Ft
FS−
the σ-field generated by the sets A ∩ {S > t}, where t ≥ 0 and A ∈ Ft
F
∨_{t≥0} Ft
G|A
{B ∩ A : B ∈ G}, the restriction of the σ-field G to the set A ∈ G
L1loc (d)
functions, locally integrable at the point d, 27
L1loc (D)
functions, locally integrable on the set D, 27
Lat (Z)
the local time of the process Z spent at the point a up to time t, 105
Law(Zt ; t ≥ 0)
the distribution of the process Z
Law(Zt; t ≥ 0 | Q)
the distribution of the process Z under the measure Q
M^c_loc(Gt, Q)
the space of continuous (Gt , Q)-local martingales
P|G
the restriction of the measure P to the σ-field G
P ◦ ϕ⁻¹
the image of the measure P under the map ϕ
R+
[0, ∞), the positive half-line
[[S, T ]]
{(ω, t) ∈ Ω × R+ : S(ω) ≤ t ≤ T (ω)}
]]S, T ]]
{(ω, t) ∈ Ω × R+ : S(ω) < t ≤ T (ω)}
[[S, T [[
{(ω, t) ∈ Ω × R+ : S(ω) ≤ t < T (ω)}
]]S, T [[
{(ω, t) ∈ Ω × R+ : S(ω) < t < T (ω)}
Ta (Z)
inf{t ≥ 0 : Zt = a}
Ta,b (Z)
Ta (Z) ∧ Tb (Z)
Ta
inf{t ≥ 0 : Xt = a}, where X is the coordinate process on C(R+) or C̄(R+)
Ta,b
Ta ∧ Tb
T̄a
sup_n inf{t ≥ 0 : |Xt − a| ≤ 1/n}, where X is the coordinate process on C̄(R+)
T̄a,b
T̄a ∧ T̄b
T∞
lim_{n→∞} Tn
T̄a,∞
T̄a ∧ T∞
T0+
inf{t ≥ 0 : Xt > 0}, where X is the coordinate process on C(R+) or C̄(R+)
T0−
inf{t ≥ 0 : Xt < 0}, where X is the coordinate process on C(R+) or C̄(R+)
Var Z
the variation process of the finite-variation process Z
X
the coordinate process on C(R+) or C̄(R+), 19, 23
x⁺
x ∨ 0
x⁻
x ∧ 0
x∨y
max{x, y}
x∧y
min{x, y}
xn ↓ 0
xn → 0 and xn > 0
xn ↑ 0
xn → 0 and xn < 0
⟨Y, Z⟩
the quadratic covariation of the continuous semimartingales Y and Z
⟨Z⟩
the quadratic variation of the continuous semimartingale Z
ZS−
limt↑S Zt, the left limit of the càdlàg process Z at time S
ZS
the process Z stopped at time S: ZtS = Zt∧S
B(E)
the Borel σ-field on the space E
ΘS
the shift operator on C(R+) or C̄(R+), 110, 115
ξ
inf{t ≥ 0 : Xt = π}, the killing time of the coordinate process on C̄(R+), 23
σ∗
the transpose of the matrix σ
ΦS
the stopping operator on C(R+) or C̄(R+), 113
ϕ⁻¹
the inverse of the function ϕ
‖ · ‖
the Euclidean norm in Rn
−u.p.→
convergence of processes in probability uniformly on compact intervals, 117
Index of Terms
(Gt)-Brownian motion, 6
τ-continuous process, 107
d-system, 112
Itô–Tanaka formula, 105
Bessel process, 108
Martingale problem, 19
– corresponding to a SDE, 20
Monotone class, 112
Canonical filtration, 19, 23
Change of variables
– in Lebesgue–Stieltjes integrals, 108
– in stochastic integrals, 107
Characteristic diagram of a SDE, 12
Condition of
– Engelbert–Schmidt, 10, 28
– Itô, 9
– Krylov, 11
– Skorokhod, 11
– Stroock–Varadhan, 11
– Yamada–Watanabe, 10
– Zvonkin, 10
Coordinate process, 19, 23
Example of
– Barlow, 14
– Girsanov, 15
– Nadirashvili, 17
– Tanaka, 13
– Tsirelson, 14
Existence of a solution of a SDE, 22
– strong, 6
– weak, 6
Explosion, 82
Feller's criterion for explosions, 82
Function
– locally integrable at a point, 27
– locally integrable on a set, 27
Galmarino's test, 114
Gluing function, 116
Local time, 105
Occupation times formula, 106
Point
– branch, 73
– regular, 28
– singular, 28
– – isolated, 28
Predictable stopping time, 24
Predicting sequence, 24
Property
– Markov, 74
– regularity, 110
– strong Markov, 110
Scale function, 110
SDE, 1
– singular, 2
Shift operator, 110, 115
Skorokhod's lemma, 111
Solution of a martingale problem, 19
Solution of a SDE, 6, 22
– global, 25
– local, 25
– positive, 22
– strictly positive, 22
– strong, 6
– up to S, 23
– – positive, 24
– – strictly positive, 24
– up to S−, 25
Speed measure, 111
Stopping operator, 113
Time-change, 107
Type
– (i, j), 65
– left, 65
– of infinity, 82
– – entrance, 42
– – exit, 42
– right, 35–37
Uniqueness of a solution of a SDE, 22
– in law, 6
– pathwise, 6
Yamada–Watanabe theorem, 7