Large Deviations for Stochastic Processes Jin Feng Thomas G. Kurtz Department of Mathematics and Statistics, University of Massachusetts, Amherst, MA 01003-4515 Email address:
[email protected] Departments of Mathematics and Statistics, University of Wisconsin at Madison, 480 Lincoln Drive, Madison, WI 53706-1388 Email address:
[email protected] 2000 Mathematics Subject Classification. Primary 60F10, 47H20; Secondary 60J05, 60J25, 60J35, 49L25 Key words and phrases. large deviations, exponential tightness, approximation of nonlinear semigroups, Skorohod topology, Markov processes, random evolutions, occupation measures, weakly interacting particles, viscosity solutions, comparison principle, mass transport techniques
Work supported in part by NSF Grants DMS 9804816 and DMS 9971571. Abstract. General functional large deviation results for cadlag processes under the Skorohod topology are developed. For Markov processes, criteria for large deviation results are given in terms of convergence of infinitesimal generators. Following an introduction and overview, the material is presented in three parts. • Part 1 gives necessary and sufficient conditions for exponential tightness that are analogous to conditions for tightness in the theory of weak convergence. Analogs of the Prohorov theorem given by Puhalskii, O'Brien and Vervaat, and de Acosta then imply the large deviation principle, at least along subsequences. Representations of the rate function in terms of rate functions for the finite dimensional distributions are extended to include situations in which the rate function may be finite for sample paths with discontinuities. • Part 2 focuses on Markov processes in metric spaces. For a sequence of such processes, convergence of Fleming's logarithmically transformed nonlinear semigroups is shown to imply the large deviation principle in a manner analogous to the use of convergence of linear semigroups in weak convergence. In particular cases, this convergence can be verified using the theory of nonlinear contraction semigroups. The theory of viscosity solutions of nonlinear equations is used to generalize earlier results on semigroup convergence, enabling the approach to cover a wide variety of situations. The key requirement is that a comparison principle holds. Control methods are used to give representations of the rate functions. • Part 3 discusses methods for verifying the comparison principle and applies the general theory to obtain a variety of new and known results on large deviations for Markov processes.
Applications include Freidlin-Wentzell theory for nearly deterministic processes, random walks and Markov chains, Donsker-Varadhan theory for occupation measures, random evolutions and averaging problems for stochastic systems, rescaled time-space lattice equations, and weakly interacting particle systems. The latter results include new comparison principles for a class of Hamilton-Jacobi equations in Hilbert spaces and spaces of probability measures.
July 29, 2005
Contents

Preface

Notation

Introduction

Chapter 1. Introduction
  1.1. Basic methodology
  1.2. The basic setting for Markov processes
  1.3. Related approaches
  1.4. Examples
  1.5. An outline of the major obstacles

Chapter 2. An overview
  2.1. Basic setup
  2.2. Compact state spaces
  2.3. General state spaces

Part 1. The general theory of large deviations

Chapter 3. Large deviations and exponential tightness
  3.1. Basic definitions and results
  3.2. Identifying a rate function
  3.3. Rate functions in product spaces

Chapter 4. Large deviations for stochastic processes
  4.1. Exponential tightness for processes
  4.2. Large deviations under changes of time-scale
  4.3. Compactification
  4.4. Large deviations in the compact uniform topology
  4.5. Exponential tightness for solutions of martingale problems
  4.6. Verifying compact containment
  4.7. Finite dimensional determination of the process rate function

Part 2. Large deviations for Markov processes and semigroup convergence

Chapter 5. Large deviations for Markov processes and nonlinear semigroup convergence
  5.1. Convergence of sequences of operator semigroups
  5.2. Applications to large deviations

Chapter 6. Large deviations and nonlinear semigroup convergence using viscosity solutions
  6.1. Viscosity solutions, definition and convergence
  6.2. Large deviations using viscosity semigroup convergence

Chapter 7. Extensions of viscosity solution methods
  7.1. Viscosity solutions, definition and convergence
  7.2. Large deviation applications
  7.3. Convergence using projected operators

Chapter 8. The Nisio semigroup and a control representation of the rate function
  8.1. Formulation of the control problem
  8.2. The Nisio semigroup
  8.3. Control representation of the rate function
  8.4. Properties of the control semigroup V
  8.5. Verification of semigroup representation
  8.6. Verifying the assumptions

Part 3. Examples of large deviations and the comparison principle

Chapter 9. The comparison principle
  9.1. General estimates
  9.2. General conditions in R^d
  9.3. Bounded smooth domains in R^d with (possibly oblique) reflection
  9.4. Conditions for infinite dimensional state space

Chapter 10. Nearly deterministic processes in R^d
  10.1. Processes with independent increments
  10.2. Random walks
  10.3. Markov processes
  10.4. Nearly deterministic Markov chains
  10.5. Diffusion processes with reflecting boundaries

Chapter 11. Random evolutions
  11.1. Discrete time, law of large numbers scaling
  11.2. Continuous time, law of large numbers scaling
  11.3. Continuous time, central limit scaling
  11.4. Discrete time, central limit scaling
  11.5. Diffusions with periodic coefficients
  11.6. Systems with small diffusion and averaging

Chapter 12. Occupation measures
  12.1. Occupation measures of a Markov process - Discrete time
  12.2. Occupation measures of a Markov process - Continuous time

Chapter 13. Stochastic equations in infinite dimensions
  13.1. Stochastic reaction-diffusion equations on a rescaled lattice
  13.2. Stochastic Cahn-Hilliard equations on rescaled lattice
  13.3. Weakly interacting stochastic particles

Appendix

Appendix A. Operators and convergence in function spaces
  A.1. Semicontinuity
  A.2. General notions of convergence
  A.3. Dissipativity of operators

Appendix B. Variational constants, rate of growth and spectral theory for the semigroup of positive linear operators
  B.1. Relationship to the spectral theory of positive linear operators
  B.2. Relationship to some variational constants

Appendix C. Spectral properties for discrete and continuous Laplacians
  C.1. The case of d = 1
  C.2. The case of d > 1
  C.3. E = L^2(O) ∩ {ρ : ∫ ρ dx = 0}
  C.4. Other useful approximations

Appendix D. Results from mass transport theory
  D.1. Distributional derivatives
  D.2. Convex functions
  D.3. The p-Wasserstein metric space
  D.4. The Monge-Kantorovich problem
  D.5. Weighted Sobolev spaces H^1_µ(R^d) and H^{-1}_µ(R^d)
  D.6. Fisher information and its properties
  D.7. Mass transport inequalities
  D.8. Miscellaneous

Bibliography
Preface

This work began as a research paper intended to show how the convergence of nonlinear semigroups associated with a sequence of Markov processes implies the large deviation principle for the sequence. We expected the result to be of little utility for specific applications, since classical convergence results for nonlinear semigroups involve hypotheses that are very difficult to verify, at least using classical methods. We should have recognized at the beginning that the modern theory of viscosity solutions provides the tools needed to overcome the classical difficulties. Once we did recognize that convergence of the nonlinear semigroups could be verified, the method evolved into a unified treatment of large deviation results for Markov processes, and the research "paper" steadily grew into the current volume. There are many approaches to large deviations for Markov processes, but this book focuses on just one. Our general title reflects both the presentation in Part 1 of the theory of large deviations based on the large deviation analogue of the compactness theory for weak convergence, material that is the foundation of several of the approaches, and the generality of the semigroup methods for Markov processes. The goal of Part 2 is to develop an approach for proving large deviations, in the context of metric-space-valued Markov processes, using convergence of generators in much the same spirit as for weak convergence (e.g. Ethier and Kurtz [36]). This approach complements the usual method that relies on asymptotic estimates obtained through Girsanov transformations. The usefulness of the method is best illustrated through examples, and Part 3 contains a range of concrete examples.
We would like to thank Alex de Acosta, Paul Dupuis, Richard Ellis, Wendell Fleming, Jorge Garcia, Markos Katsoulakis, Jim Kuelbs, Peter Ney, Anatolii Puhalskii and Takis Souganidis for a number of helpful conversations, and Peter Ney and Jim Kuelbs for organizing a long-term seminar at the University of Wisconsin-Madison on large deviations that provided much information and insight. In particular, the authors' first introduction to the close relationship between the theory of large deviations and that of weak convergence came through a series of lectures that Alex de Acosta presented in that seminar.
Notation

(1) (E, r). A complete, separable metric space.
(2) B(E). The σ-algebra of all Borel subsets of E.
(3) A ⊂ M(E) × M(E). An operator identified with its graph as a subset of M(E) × M(E).
(4) B(E). The space of bounded, Borel measurable functions. Endowed with the norm ‖f‖ = sup_{x∈E} |f(x)|, (B(E), ‖·‖) is a Banach space.
(5) B_loc(E). The space of locally bounded, Borel measurable functions, that is, functions in M(E) that are bounded on each compact set.
(6) B_ε(x) = {y ∈ E : r(x, y) < ε}. The ball of radius ε > 0 and center x ∈ E.
(7) buc-convergence, buc-approximable, buc-closure, buc-closed and buc-dense, and buc-lim. See Definition A.6.
(8) C(E). The space of continuous functions on E.
(9) C_b(E) = C(E) ∩ B(E). The space of bounded continuous functions.
(10) C(E, R̄). The collection of functions that are continuous as mappings from E into R̄ with the natural topology on R̄.
(11) C_c(E). For E locally compact, the functions that are continuous and have compact support.
(12) C^k(O), for O ⊂ R^d open and k = 1, 2, ..., ∞. The space of functions whose derivatives up to kth order are continuous in O.
(13) C_c^k(O) = C^k(O) ∩ C_c(O).
(14) C^{k,α}(O), for O ⊂ R^d open, k = 1, 2, ..., and α ∈ (0, 1]. The space of functions f ∈ C^k(O) satisfying

    ‖f‖_{k,α} = sup_{0≤|β|≤k} sup_O |∂^β f/∂x^β| + sup_{|β|=k} [∂^β f/∂x^β]_α < ∞,

    where

    [f]_α = sup_{x≠y} |f(x) − f(y)|/|x − y|^α,  α ∈ (0, 1].

(15) C_loc^{k,α}(O). The space of functions f ∈ C(O) such that f|_D ∈ C^{k,α}(D) for every bounded open subset D ⊂ O.
(16) C_E[0, ∞). The space of E-valued, continuous functions on [0, ∞).
(17) Ĉ(U), for U locally compact. The space of continuous functions vanishing at infinity.
(18) D(A) = {f : ∃(f, g) ∈ A}. The domain of an operator A.
(19) D^+(A) = {f ∈ D(A) : f > 0}.
(20) D^{++}(A) = {f ∈ D(A) : inf_{y∈E} f(y) > 0}.
(21) D_E[0, ∞). The space of E-valued, cadlag (right continuous with left limits) functions on [0, ∞) with the Skorohod topology, unless another topology is specified. (See Ethier and Kurtz [36], Chapter 3.)
(22) D(O), for O ⊂ R^d open. The space C_c^∞(O) with the topology of the space of Schwartz test functions. (See Appendix D.1.)
(23) D′(O). The space of continuous linear functionals on D(O), that is, the space of Schwartz distributions.
(24) g* (respectively g_*). The upper semicontinuous (respectively lower semicontinuous) regularization of a function g on a metric space (E, r). The definition is given by (6.2) (respectively (6.3)).
(25) lim sup_{n→∞} G_n and lim inf_{n→∞} G_n for a sequence of sets G_n. See Definition 2.4 in Section 2.3.
(26) H^k_ρ(R^d), for ρ ∈ P(R^d). A weighted Sobolev space. See Appendix D.5.
(27) K(E) ⊂ C_b(E). The collection of nonnegative, bounded, continuous functions.
(28) K_0(E) ⊂ K(E). The collection of strictly positive, bounded, continuous functions.
(29) K_1(E) ⊂ K_0(E). The collection of bounded continuous functions satisfying inf_{x∈E} f(x) > 0.
(30) M(E). The R-valued, Borel measurable functions on E.
(31) M^u(E). The space of f ∈ M(E) that are bounded above.
(32) M^l(E). The space of f ∈ M(E) that are bounded below.
(33) M(E, R̄). The space of Borel measurable functions with values in R̄ and f(x) ∈ R for at least one x ∈ E.
(34) M_E[0, ∞). The space of E-valued measurable functions on [0, ∞).
(35) M^{d×d}. The space of d × d matrices.
(36) M^u(E, R̄) ⊂ M(E, R̄) (respectively, C^u(E, R̄) ⊂ C(E, R̄)). The collection of Borel measurable (respectively, continuous) functions that are bounded above (that is, f ∈ M^u(E, R̄) implies sup_{x∈E} f(x) < ∞).
(37) M^l(E, R̄) ⊂ M(E, R̄) (respectively, C^l(E, R̄) ⊂ C(E, R̄)). The collection of Borel measurable (respectively, continuous) functions that are bounded below.
(38) M(E). The space of (positive) Borel measures on E.
(39) M_f(E). The space of finite (positive) Borel measures on E.
(40) M_m(U), U a metric space. The collection of µ ∈ M(U × [0, ∞)) satisfying µ(U × [0, t]) = t for all t ≥ 0.
(41) M_m^T(U) (T > 0). The collection of µ ∈ M(U × [0, T]) satisfying µ(U × [0, t]) = t for all 0 ≤ t ≤ T.
(42) P(E) ⊂ M_f(E). The space of probability measures on E.
(43) R̄ = [−∞, ∞].
(44) R(A) = {g : ∃(f, g) ∈ A}. The range of an operator A.
(45) T#ρ = γ. γ ∈ P(E) is the push-forward (Definition D.1) of ρ ∈ P(E) by the map T.
Introduction
CHAPTER 1
Introduction

The theory of large deviations is concerned with the asymptotic estimation of probabilities of rare events. In its basic form, the theory considers the limit of normalizations of log P(A_n) for a sequence of events with asymptotically vanishing probability. To be precise, for a sequence of random variables {X_n} with values in a metric space (S, d), we are interested in the large deviation principle as formulated by Varadhan [118].

Definition 1.1. [Large Deviation Principle] {X_n} satisfies a large deviation principle (LDP) if there exists a lower semicontinuous function I : S → [0, ∞] such that for each open set A,

(1.1)  lim inf_{n→∞} (1/n) log P{X_n ∈ A} ≥ − inf_{x∈A} I(x),

and for each closed set B,

(1.2)  lim sup_{n→∞} (1/n) log P{X_n ∈ B} ≤ − inf_{x∈B} I(x).

I is called the rate function for the large deviation principle. A rate function is good if for each a ∈ [0, ∞), {x : I(x) ≤ a} is compact.

Beginning with the work of Cramér [16] and including the fundamental work on large deviations for stochastic processes by Freidlin and Wentzell [52] and Donsker and Varadhan [33], much of the analysis has been based on change of measure techniques. In this approach, a reference measure is identified under which the events of interest have high probability, and the probability of the event under the original measure is estimated in terms of the Radon-Nikodym derivative relating the two measures. More recently, Puhalskii [97], O'Brien and Vervaat [91], de Acosta [24] and others have developed an approach to large deviations analogous to the Prohorov compactness approach to weak convergence of probability measures.

Definition 1.2. {X_n} converges in distribution to X (that is, the distributions P{X_n ∈ ·} converge weakly to P{X ∈ ·}) if and only if lim_{n→∞} E[f(X_n)] = E[f(X)] for each f ∈ C_b(S).

The analogy between the two theories becomes much clearer if we recall the following equivalent formulation of convergence in distribution.

Proposition 1.3. {X_n} converges in distribution to X if and only if for each open set A,

  lim inf_{n→∞} P{X_n ∈ A} ≥ P{X ∈ A},
or equivalently, for each closed set B,

  lim sup_{n→∞} P{X_n ∈ B} ≤ P{X ∈ B}.
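As a concrete instance of Definition 1.1 (an illustration, not taken from the text), consider the sample means of i.i.d. standard Gaussian random variables, for which Cramér's theorem gives the rate function I(x) = x²/2. For A = [a, ∞) with a > 0, the bounds (1.1) and (1.2) then force (1/n) log P{X̄_n ≥ a} → −a²/2. The following standard-library sketch (the helper name `rate_estimate` is ours) checks this numerically:

```python
import math

def rate_estimate(n, a=1.0):
    """(1/n) log P{ (X_1 + ... + X_n)/n >= a } for i.i.d. standard normals.

    The sample mean is N(0, 1/n), so the probability equals
    P{Z >= a*sqrt(n)} for a standard normal Z, computed via erfc.
    """
    x = a * math.sqrt(n)
    # Standard normal survival function: erfc(x / sqrt(2)) / 2.
    return math.log(math.erfc(x / math.sqrt(2)) / 2.0) / n

# The LDP predicts convergence to -I(a) with I(a) = a**2 / 2 = 0.5.
for n in (10, 100, 400):
    print(n, rate_estimate(n))
```

The printed values increase toward −1/2, with the gap of order (log n)/n contributed by the polynomial prefactor in the Gaussian tail, which the exponential normalization cannot see.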
Our main theme is the development of this approach to large deviation theory as it applies to sequences of cadlag stochastic processes. The proof of weak convergence results typically involves verification of relative compactness or tightness for the sequence and the unique characterization of the possible limit distribution. The analogous approach to large deviations involves verification of exponential tightness (Definition 3.2) and unique characterization of the possible rate function. In Part 1, we present results on exponential tightness and give Puhalskii's analogue of the Prohorov compactness theorem. We also give complete exponential tightness analogues of standard tightness criteria for cadlag stochastic processes and show that the rate function for a sequence of processes is determined by the rate functions for their finite dimensional distributions, just as the limit distribution for a weakly convergent sequence of processes is determined by the limits of the finite dimensional distributions. In Part 2, we focus on Markov processes and give general large deviation results based on the convergence of corresponding nonlinear semigroups. Again we have an analogy with the use of the convergence of linear semigroups in proofs of weak convergence of Markov processes. The success of this approach depends heavily on the idea of a viscosity solution of a nonlinear equation. We also exploit control theoretic representations of the limiting semigroups to obtain useful representations of the rate function. In Part 3, we demonstrate the effectiveness of these methods in a wide range of large deviation results for Markov processes, including classical Freidlin-Wentzell theory, random evolutions, and infinite dimensional diffusions.

1.1. Basic methodology

1.1.1. Analogy with the theory of weak convergence.
In both the weak convergence and large deviation settings, proofs consist of first verifying a compactness condition and then showing that there is only one possible limit. For weak convergence, the first step is usually accomplished by verifying tightness which, by Prohorov's theorem, implies relative compactness. The corresponding condition for large deviations is exponential tightness (Definition 3.2). Puhalskii [97] (and in more general settings, O'Brien and Vervaat [91] and de Acosta [24]) has shown that exponential tightness implies the existence of a subsequence along which the large deviation principle holds. (See Theorem 3.7.) For stochastic processes, the second step of these arguments can be accomplished by verifying weak convergence (or the large deviation principle) for the finite dimensional distributions and showing that the limiting finite dimensional distributions (finite dimensional rate functions) determine a unique limiting distribution (rate function) for the process distributions. We extend this analogous development in a number of ways. For the Skorohod topology on the space of cadlag sample paths in a complete, separable metric space, we give a complete exponential tightness analogue of the tightness conditions of Kurtz [71] and Aldous [2] (Theorem 4.1). We then extend the characterization of the rate function in terms of the finite dimensional rate functions (Puhalskii [97], Theorem 4.5, and de Acosta [2], Lemma 3.2) to allow the rate function to be finite on paths with discontinuities (Section 4.7). Finally, we apply these results to Markov
processes, using the asymptotic behavior of generators and semigroups associated with the Markov processes to verify exponential tightness for the processes (Section 4.5) and the large deviation principle for the finite dimensional distributions (Chapters 5, 6, 7). These arguments are again analogous to the use of convergence of generators and semigroups to verify tightness and convergence of finite dimensional distributions in the weak convergence setting.

1.1.2. Nonlinear semigroups and viscosity methods. Results of Varadhan and Bryc (Proposition 3.8) relate large deviations for sequences of random variables to the asymptotic behavior of functionals of the form (1/n) log E[e^{nf(X_n)}]. For Markov processes,

(1.3)  V_n(t)f(x) = (1/n) log E[ e^{nf(X_n(t))} | X_n(0) = x ]
defines a nonlinear semigroup, and large deviations for sequences of Markov processes can be studied using the asymptotic behavior of the corresponding sequence of nonlinear semigroups. Viscosity methods for nonlinear equations play a central role. Fleming and others (cf. [44, 45, 40, 48, 46]) have used this approach to prove large deviation results for Xn at single time points and exit times. We develop these ideas further, showing how convergence of the semigroups and their generators Hn can be used to obtain both exponential tightness and the large deviation principle for the finite dimensional distributions. These results then imply the pathwise large deviation principle. 1.1.3. Control theory. The limiting semigroup usually admits a variational form known as the Nisio semigroup in control theory. Connections between control problems and large deviation results were first made by Fleming [44, 45] and developed further by Sheu [108, 109]. Dupuis and Ellis [35] systematically develop these connections showing that, in many situations, one can represent a large class of functionals of the processes as the minimal cost functions of stochastic control problems and then verify convergence of the functionals to the minimal cost functions of limiting deterministic control problems. This convergence can then be used to obtain the desired large deviation result. Variational representations for the sequence of functionals can be difficult to obtain, and in the present work, we first verify convergence of the semigroups by methods that only require convergence of the corresponding generators and conditions on the limiting generator. Working only with the sequence of generators frequently provides conditions that are easier to verify than conditions that give a convergent sequence of variational representations. 
Variational representations for the limit are still important, however, as they provide methods for obtaining simple representations of the large deviation rate function. In Chapter 8, we discuss methods for obtaining such representations. In particular, we formulate a set of conditions at the infinitesimal level, so that by verifying a variational representation for the limit generator, we obtain the variational structure of the semigroup that gives the large deviation rate function. A generator convergence approach directly based on the sequence of control problems and Girsanov transformations was discussed in Feng [41] in a more restrictive setting. The present work avoids explicit use of a Girsanov transformation, and it greatly generalizes [41] making the approach much more applicable.
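Although V_n(t) in (1.3) is nonlinear, it is still a semigroup: V_n(t + s)f = V_n(t)V_n(s)f, by the Markov property. The following sketch (a hypothetical two-state discrete-time chain with transition matrix P, not from the text) verifies this numerically for the discrete-time analogue V_n(k)f(x) = (1/n) log E[e^{nf(X_k)} | X_0 = x]:

```python
import math

# Transition matrix of a hypothetical two-state Markov chain (rows sum to 1).
P = [[0.9, 0.1],
     [0.2, 0.8]]

def apply_P(P, h):
    """(P h)(x) = E[h(X_1) | X_0 = x]."""
    return [sum(P[x][y] * h[y] for y in range(len(h))) for x in range(len(P))]

def V(k, f, n):
    """Discrete-time analogue of (1.3): V(k)f(x) = (1/n) log E[e^{n f(X_k)} | X_0 = x]."""
    h = [math.exp(n * fx) for fx in f]
    for _ in range(k):
        h = apply_P(P, h)
    return [math.log(hx) / n for hx in h]

f = [0.3, -0.5]
n = 4
lhs = V(5, f, n)           # V(3 + 2)f
rhs = V(3, V(2, f, n), n)  # V(3)V(2)f
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

The semigroup property is exact here because exponentiating and taking logarithms back conjugates V(k) to the linear operator P^k acting on e^{nf}, which is precisely the logarithmic transform exploited throughout Part 2.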
1.2. The basic setting for Markov processes

1.2.1. Notation. Throughout, (E, r) will be a complete, separable metric space, M(E) will denote the space of real-valued, Borel measurable functions on E, B(E) ⊂ M(E), the space of bounded, Borel measurable functions, C_b(E) ⊂ B(E), the space of bounded continuous functions, and P(E), the space of probability measures on E. We identify an operator A with its graph and, for example, write A ⊂ C_b(E) × C_b(E) if the domain D(A) and range R(A) are contained in C_b(E). An operator can be multivalued and nonlinear. The space of E-valued, cadlag functions on [0, ∞) with the Skorohod topology will be denoted by D_E[0, ∞). Define q(x, y) = 1 ∧ r(x, y), and note that q is a metric on E that is equivalent to r. The following metric gives the Skorohod topology. (See Ethier and Kurtz [36], Chapter 3.) Let Λ_0 be the collection of strictly increasing functions mapping [0, ∞) onto [0, ∞) and satisfying

(1.4)  γ(λ) ≡ sup_{0≤s<t} | log (λ(t) − λ(s))/(t − s) | < ∞,

and for x, y ∈ D_E[0, ∞), define

d(x, y) = inf_{λ∈Λ_0} [ γ(λ) ∨ ∫_0^∞ e^{−u} sup_{t≥0} q( x(t ∧ u), y(λ(t) ∧ u) ) du ].

For a Markov chain Y_0, Y_1, ... with values in E and transition operator T and for ε > 0, define X(t) = Y_{[t/ε]}. Then, setting F_t^X = σ(X(s) : s ≤ t), for g ∈ B(E),

g(X(t)) − g(X(0)) − Σ_{k=0}^{[t/ε]−1} ( Tg(Y_k) − g(Y_k) )
  = g(X(t)) − g(X(0)) − ∫_0^{ε[t/ε]} ε^{−1}(T − I)g(X(s)) ds

is an {F_t^X}-martingale, and for f ∈ B(E),

exp{ f(X(t)) − f(X(0)) − ∫_0^{ε[t/ε]} ε^{−1} log( e^{−f} T e^f )(X(s)) ds }

is an {F_t^X}-martingale.
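The expectation-one property of the exponential martingale above can be checked by brute force for a finite chain: in discrete time, M_k = exp{ f(Y_k) − f(Y_0) − Σ_{j<k} log(e^{−f}Te^f)(Y_j) } satisfies E[M_{k+1} | F_k] = M_k, since conditioning on Y_k produces exactly the normalizing factor (e^{−f}Te^f)(Y_k). The following sketch (a hypothetical three-state transition matrix T and function f, neither from the text) enumerates all paths and verifies E[M_k] = 1:

```python
import math
from itertools import product

# A hypothetical three-state chain and test function, for illustration only.
T = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
f = [0.0, 1.0, -0.7]

def log_efTef(x):
    """log(e^{-f} T e^{f})(x) = log E[ e^{f(Y_{k+1}) - f(Y_k)} | Y_k = x ]."""
    return math.log(sum(T[x][y] * math.exp(f[y] - f[x]) for y in range(3)))

def expected_M(k, x0):
    """E[M_k | Y_0 = x0], computed by exact enumeration of all length-k paths."""
    total = 0.0
    for path in product(range(3), repeat=k):
        states = (x0,) + path
        prob = 1.0
        for a, b in zip(states, states[1:]):
            prob *= T[a][b]
        log_M = f[states[-1]] - f[x0] - sum(log_efTef(s) for s in states[:-1])
        total += prob * math.exp(log_M)
    return total

# The exponential martingale has expectation one for every horizon and start.
for k in (1, 2, 3):
    for x0 in range(3):
        assert abs(expected_M(k, x0) - 1.0) < 1e-12
```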
For a sequence of Markov chains with transition operators T_n and constants ε_n > 0, we define X_n(t) = Y^n_{[t/ε_n]},

(1.11)  A_n g = ε_n^{−1}(T_n − I)g

and

(1.12)  H_n f = (1/(nε_n)) log( e^{−nf} T_n e^{nf} ) = (1/(nε_n)) log( e^{−nf}(T_n − I)e^{nf} + 1 ),
so that

g(X_n(t)) − g(X_n(0)) − ∫_0^{ε_n[t/ε_n]} A_n g(X_n(s)) ds

and

exp{ nf(X_n(t)) − nf(X_n(0)) − ∫_0^{ε_n[t/ε_n]} n H_n f(X_n(s)) ds }

are martingales. In Chapter 3, we give conditions for exponential tightness that are typically easy to check in the Markov process setting using the convergence (actually, only the boundedness) of H_n. In particular, sup_n ‖H_n f‖ < ∞ for a sufficiently large class of f implies exponential tightness, at least if the state space is compact. In Section 4.7, we give conditions under which exponential tightness and the large deviation principle for finite dimensional distributions imply the large deviation principle in the Skorohod topology. In Chapters 5, 6 and 7, we give conditions under which convergence of the sequence of semigroups {V_n} implies the large deviation principle for the finite dimensional distributions and conditions under which convergence of {H_n} implies convergence of {V_n}. Consequently, convergence of {H_n} is an essential ingredient in the results.

1.3. Related approaches

There are several approaches in the literature that are closely related to the methods we develop here. We have already mentioned the control theoretic methods developed in detail by Dupuis and Ellis [35] that play a key role in the rate function representation results of Chapter 8. Two other approaches share important characteristics with our methods.

1.3.1. Exponential martingale method of de Acosta. de Acosta [26, 27] develops a general approach to large deviation results that, in the case of Markov processes, exploits the exponential martingale (1.10). Let λ be a measure on [0, 1], and assume that e^{af} ∈ D(A_n) for all a ∈ R. Then the fact that (1.10) is a martingale implies that

exp{ ∫_0^1 f(X_n(s)) λ(ds) − λ[0, 1] f(X_n(0)) − ∫_0^1 H_n( λ(s, 1]f )(X_n(s)) ds }

has expectation 1. Setting

Φ_n(x, λ, f) = λ[0, 1] f(x(0)) + ∫_0^1 H_n( λ(s, 1]f )(x(s)) ds,

convergence of H_n implies (at least formally) that

(1/n) Φ_n(x, nλ, f) → Φ(x, λ, f) = λ[0, 1] f(x(0)) + ∫_0^1 H( λ(s, 1]f )(x(s)) ds.

Comparing this convergence to the conditions of Theorems 2.1 and 3.1 of [26] indicates a close relationship between the methods developed here and those developed by de Acosta, at least at the formal computational level.
1.3.2. The maxingale method of Puhalskii. Assume that the semigroups V_n converge to V. Then V is nonlinear in the sense that, in general, V(t)(af + bg) ≠ aV(t)f + bV(t)g. However, defining

a ⊕ b = max{a, b},   a ⊙ b = a + b,

and using (⊕, ⊙) as operations on R in place of the usual (+, ×), V becomes "linear":

V(t)( (a ⊙ f) ⊕ (b ⊙ g) ) = (a ⊙ V(t)f) ⊕ (b ⊙ V(t)g),

and defining

a ⊕_n b = (1/n) log( e^{na} + e^{nb} ),   a ⊙_n b = a + b,

V_n is linear under (⊕_n, ⊙_n):

V_n(t)( (a ⊙_n f) ⊕_n (b ⊙_n g) ) = (a ⊙_n V_n(t)f) ⊕_n (b ⊙_n V_n(t)g).

Furthermore,

lim_{n→∞} a ⊕_n b = a ⊕ b,   a ⊙_n b = a ⊙ b.

The change of algebra (semiring, to be precise) on R produces a linear structure for V (or the V_n). Results analogous to those of linear analysis, such as the Riesz representation theorem, hold and can be used to study the properties of V. The counterpart of a measure is an idempotent measure. (See Puhalskii [100].) Taking this viewpoint and mimicking the martingale problem approach in weak convergence theory for stochastic processes, Puhalskii [99, 100] defines a maxingale problem and develops large deviation theory for semimartingales. The main result of [100], Theorem 5.4.1, can be stated roughly as follows: let {X_n} be a sequence of Markov processes and H_n be the operators so that

M^f_{s,n}(t) = exp{ nf(X_n(t)) − nf(X_n(s)) − ∫_s^t n H_n f(X_n(r)) dr }

is a mean one martingale for each f ∈ D(H_n), each s ≥ 0 and t ≥ s, as in (1.10). Assuming that H = lim_{n→∞} H_n in an appropriate sense, under additional conditions on H and the convergence, {X_n} is exponentially tight and along any subsequence of {X_n}, there exists a further subsequence such that the large deviation principle holds with some rate function I(x). Then Π(A) = sup_{x∈A} e^{−I(x)} defines an idempotent probability measure on the trajectory space of the processes. (See Chapter 1 of [100]. Note that an idempotent measure is not a measure in the usual sense.) Π then solves a maxingale problem for H:

exp{ f(x(t)) − f(x(0)) − ∫_0^t Hf(x(s)) ds }

is a maxingale under Π(dx) for every f ∈ D(H) in the sense that

(1.13)  [ sup_{x: x(r)=z(r), 0≤r≤s} exp{ f(x(t)) − f(x(s)) − ∫_s^t Hf(x(r)) dr − I(x(·)) } ] / [ sup_{x: x(r)=z(r), 0≤r≤s} exp{ −I(x(·)) } ] = 1,

for every 0 ≤ s ≤ t and every trajectory z. If the maxingale problem for H has a unique idempotent probability measure Π_0 as solution, then Π_0 = Π and the large deviation principle holds for {X_n} with rate function I.
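The convergence ⊕_n → ⊕ underlying this linearization is elementary: (1/n) log(e^{na} + e^{nb}) = max(a, b) + (1/n) log(1 + e^{−n|a−b|}), and the correction term vanishes as n → ∞. A minimal numerical sketch (illustrative only; the overflow-safe form mirrors the identity just stated):

```python
import math

def oplus_n(a, b, n):
    """a ⊕_n b = (1/n) log(e^{na} + e^{nb}), computed in an overflow-safe way."""
    m = max(a, b)
    return m + math.log(math.exp(n * (a - m)) + math.exp(n * (b - m))) / n

a, b = 0.3, 1.1
for n in (1, 10, 100, 1000):
    print(n, oplus_n(a, b, n))  # decreases toward max(a, b) = 1.1

assert abs(oplus_n(a, b, 1000) - max(a, b)) < 1e-3
```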
1.4. Examples

As explained in Section 1.2, an essential ingredient of our results is the convergence of {H_n}. As the following examples show, at least formally, the basic calculations for verifying this convergence may be quite simple. The calculations given here are frequently heuristic; however, we give rigorous results for these and other examples in Part 3.

Example 1.4. [Freidlin-Wentzell theory - I] The best-known example of large deviations for Markov processes comes from the work of Freidlin and Wentzell on diffusions with small diffusion coefficient. (See Chapters 3 and 4 in [52].) Consider a sequence of d-dimensional diffusion processes with X_n satisfying the Itô equation

X_n(t) = x + (1/√n) ∫_0^t σ(X_n(s)) dW(s) + ∫_0^t b(X_n(s)) ds.

Let a(x) = σ(x)·σ^T(x). Then the (linear) generator is

A_n g(x) = (1/(2n)) Σ_ij a_ij(x) ∂_i∂_j g(x) + Σ_i b_i(x) ∂_i g(x),

where we can take D(A_n) to be the collection of functions of the form c + f where c ∈ R and f ∈ C_c²(R^d), the space of twice continuously differentiable functions with compact support in R^d. Then

(1.14)  e^{−f} A_n e^f(x) = (1/(2n)) Σ_ij a_ij(x) ∂_i∂_j f(x) + (1/(2n)) Σ_ij a_ij(x) ∂_i f(x) ∂_j f(x) + Σ_i b_i(x) ∂_i f(x)

and

H_n f(x) = (1/(2n)) Σ_ij a_ij(x) ∂_i∂_j f(x) + (1/2) Σ_ij a_ij(x) ∂_i f(x) ∂_j f(x) + Σ_i b_i(x) ∂_i f(x).

Consequently, Hf = lim_{n→∞} H_n f is

(1.15)  Hf(x) = (1/2) (∇f(x))^T · a(x) · ∇f(x) + b(x) · ∇f(x).

We can identify the rate function I in a simple form by finding a variational representation of H. First, we introduce a pair of functions on R^d × R^d:

(1.16)  H(x, p) ≡ (1/2) p^T · a(x) · p + b(x) · p = (1/2) |σ^T(x) · p|² + b(x) · p,
        L(x, q) ≡ sup_{p∈R^d} { p · q − H(x, p) }.

H(x, p) is convex in p, and H and L are dual Fenchel-Legendre transforms. In particular,

H(x, p) = sup_{q∈R^d} { p · q − L(x, q) }.

Therefore

Hf(x) = H(x, ∇f(x)) = 𝐇f(x) ≡ sup_{u∈R^d} { Af(x, u) − L(x, u) },

where Af(x, u) = u · ∇f(x), for f ∈ C_c²(R^d). Applying Corollary 8.28 or Corollary 8.29, the rate function for {X_n} is

I(x) = I_0(x(0)) + inf_{ {u : (x,u)∈J} } ∫_0^∞ L(x(s), u(s)) ds,

where I_0 is the rate function for {X_n(0)} and J is the collection of solutions of the functional equation

f(x(t)) − f(x(0)) = ∫_0^t Af(x(s), u(s)) ds,  ∀f ∈ C_c²(R^d).

In the present setting, this equation reduces to ẋ(t) = u(t), so

(1.17)  I(x) = I_0(x(0)) + ∫_0^∞ L(x(s), ẋ(s)) ds  if x is absolutely continuous,
        I(x) = ∞  otherwise.

One can use other variational representations of the operator H and arrive at different expressions for the rate function I. For example, if we choose

Af(x, u) = u · (σ^T(x) ∇f(x)) + b(x) · ∇f(x),  f ∈ C_c²(R^d),   L(x, u) = (1/2) |u|²,

and define

𝐇f(x) = sup_{u∈R^d} { Af(x, u) − L(x, u) },

then 𝐇 = H and the rate function can be expressed as

I(x) = I_0(x(0)) + inf{ (1/2) ∫_0^∞ |u(s)|² ds : u ∈ L²[0, ∞) and x(t) = x(0) + ∫_0^t b(x(s)) ds + ∫_0^t σ(x(s)) u(s) ds }.
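The Fenchel-Legendre dual of the quadratic H in (1.16) can be computed in closed form; in one dimension with constant coefficients, L(q) = (q − b)²/(2a), attained at p = (q − b)/a. The following sketch (hypothetical constant values a = 2, b = 1, not from the text) checks the closed form against a direct grid maximization of p·q − H(p):

```python
import math

# One-dimensional instance with hypothetical constant coefficients a = 2, b = 1.
a, b = 2.0, 1.0

def H(p):
    """H(p) = (1/2) a p^2 + b p, the quadratic Hamiltonian of (1.16) in d = 1."""
    return 0.5 * a * p * p + b * p

def L_numeric(q, pmax=50.0, steps=200001):
    """sup_p [p*q - H(p)] over a fine grid; the sup is attained at p = (q - b)/a."""
    best = -math.inf
    for i in range(steps):
        p = -pmax + 2 * pmax * i / (steps - 1)
        best = max(best, p * q - H(p))
    return best

def L_closed(q):
    """Closed-form Fenchel-Legendre dual of the quadratic H: L(q) = (q - b)^2 / (2a)."""
    return (q - b) ** 2 / (2 * a)

for q in (-3.0, 0.0, 1.0, 4.0):
    assert abs(L_numeric(q) - L_closed(q)) < 1e-4
```

Note that L(b) = 0: the zero-cost velocity is the drift b, which is exactly why the rate function (1.17) vanishes only on solutions of the noiseless equation ẋ = b(x).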
Example 1.5. [Freidlin-Wentzell theory - II] Wentzell [127, 128] has studied the jump process analogue of the small diffusion problem. Again, let E = R^d, and consider, for example,

(1.18)  A_n g(x) = n ∫_{R^d} ( g(x + (1/n)z) − g(x) ) η(x, dz),

where for each x ∈ R^d, η(x, ·) is a measure on B(R^d) and for each B ∈ B(R^d), η(·, B) is a Borel measurable function. (For the moment, ignore the problem of defining an appropriate domain.) Then

e^{−f} A_n e^f(x) = n ∫_{R^d} ( e^{f(x + (1/n)z) − f(x)} − 1 ) η(x, dz)

and

H_n f(x) = ∫_{R^d} ( e^{n(f(x + (1/n)z) − f(x))} − 1 ) η(x, dz).

Assuming ∫_{R^d} e^{α·z} η(x, dz) < ∞ for each α ∈ R^d, Hf = lim_{n→∞} H_n f is given by

Hf(x) = ∫_{R^d} ( e^{∇f(x)·z} − 1 ) η(x, dz).
12
1. INTRODUCTION
More generally, one can consider generators of the form
\[ A_n g(x) = n\int_{\mathbb R^d} \big(g(x+\tfrac1n z) - g(x) - \tfrac1n z\cdot\nabla g(x)\big)\,\eta(x,dz) + \frac1{2n}\sum_{ij} a_{ij}(x)\,\partial_i\partial_j g(x) + b(x)\cdot\nabla g(x). \]
Then
\[ \widetilde H_n f(x) = n\int_{\mathbb R^d} \big(e^{f(x+\frac1n z)-f(x)} - 1 - \tfrac1n z\cdot\nabla f(x)\big)\,\eta(x,dz) + b(x)\cdot\nabla f(x) + \frac1{2n}\sum_{ij} a_{ij}(x)\,\partial_i f(x)\,\partial_j f(x) + \frac1{2n}\sum_{ij} a_{ij}(x)\,\partial_i\partial_j f(x) \]
and
\[ H_n f(x) = \int_{\mathbb R^d} \big(e^{n(f(x+\frac1n z)-f(x))} - 1 - z\cdot\nabla f(x)\big)\,\eta(x,dz) + b(x)\cdot\nabla f(x) + \frac12\sum_{ij} a_{ij}(x)\,\partial_i f(x)\,\partial_j f(x) + \frac1{2n}\sum_{ij} a_{ij}(x)\,\partial_i\partial_j f(x). \]
Now, assuming $\int_{\mathbb R^d}(e^{\alpha\cdot z} - 1 - \alpha\cdot z)\,\eta(x,dz) < \infty$, for each $\alpha\in\mathbb R^d$, $Hf = \lim_{n\to\infty} H_n f$ is given by
\[ Hf(x) = \int_{\mathbb R^d} \big(e^{\nabla f(x)\cdot z} - 1 - z\cdot\nabla f(x)\big)\,\eta(x,dz) + \frac12\sum_{ij} a_{ij}(x)\,\partial_i f(x)\,\partial_j f(x) + b(x)\cdot\nabla f(x). \]
A variational representation of $H$ can be constructed as in Example 1.4. Define
\[ H(x,p) = \int_{\mathbb R^d} \big(e^{p\cdot z} - 1 - z\cdot p\big)\,\eta(x,dz) + \frac12|\sigma^T(x)p|^2 + b(x)\cdot p \]
and
\[ L(x,q) = \sup_{p\in\mathbb R^d}\{p\cdot q - H(x,p)\}. \]
Then
\[ Hf(x) = H(x,\nabla f(x)) = \mathbf{H}f(x) \equiv \sup_{u\in\mathbb R^d}\{Af(x,u) - L(x,u)\}, \]
where $Af(x,u) = u\cdot\nabla f(x)$, $f\in C_c^2(\mathbb R^d)$,
and the rate function is given by (1.17).

Example 1.6. [Partial sum processes] Functional versions of the classical large deviation results of Cramér [16] have been considered by a number of authors, including Borovkov [11] and Mogulskii [87]. Let $\xi_1,\xi_2,\dots$ be independent and identically distributed $\mathbb R^d$-valued random variables with distribution $\nu$. Define
\[ X_n(t) = \beta_n^{-1}\sum_{k=1}^{[t/\epsilon_n]} \xi_k. \]
Then, as in (1.11),
\[ A_n f(x) = \epsilon_n^{-1}\Big(\int_{\mathbb R^d} f(x+\beta_n^{-1}z)\,\nu(dz) - f(x)\Big) \]
and
\[ H_n f(x) = \frac1{n\epsilon_n}\log\int_{\mathbb R^d} e^{n(f(x+\beta_n^{-1}z)-f(x))}\,\nu(dz). \]
If $\int_{\mathbb R^d} e^{\alpha|z|}\,\nu(dz) < \infty$ for each $\alpha > 0$, $\epsilon_n = n^{-1}$, and $\beta_n = n$, then $H_n f\to Hf$ given by
\[ Hf(x) = \log\int_{\mathbb R^d} e^{\nabla f(x)\cdot z}\,\nu(dz). \]
If $\int_{\mathbb R^d} z\,\nu(dz) = 0$, $\int_{\mathbb R^d} e^{\alpha|z|}\,\nu(dz) < \infty$ for some $\alpha > 0$, $\epsilon_n\beta_n^2 = n$, and $\epsilon_n n\to 0$, then $H_n f\to Hf$ given by
\[ Hf(x) = \frac12\int_{\mathbb R^d} (\nabla f(x)\cdot z)^2\,\nu(dz). \]
Once again, a variational representation of $H$ and the rate function $I$ can be constructed using Fenchel--Legendre transforms. Define
\[ H(p) = \frac12\int_{\mathbb R^d} (z\cdot p)^2\,\nu(dz) \]
and
\[ L(q) = \sup_{p\in\mathbb R^d}\{p\cdot q - H(p)\}. \]
Then
\[ I(x) = \begin{cases} \int_0^\infty L(\dot x(s))\,ds & \text{if $x$ is absolutely continuous} \\ \infty & \text{otherwise.} \end{cases} \]
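As a concrete instance of the Fenchel--Legendre transforms above, take $\nu$ uniform on $\{-1,+1\}$ (an assumed example, not from the text). The log-moment generating function is $H(p) = \log\cosh p$, and its transform has the closed form $L(q) = \frac{1+q}2\log(1+q) + \frac{1-q}2\log(1-q)$ for $|q|<1$. The sketch below compares this closed form with a direct numerical supremum over $p$.

```python
import numpy as np

# Cramer rate function for nu uniform on {-1, +1} (illustrative choice):
# H(p) = log cosh(p), and the Legendre transform has the closed form
# L(q) = ((1+q)/2) log(1+q) + ((1-q)/2) log(1-q) for |q| < 1.

def H(p):
    return np.log(np.cosh(p))

def L_closed(q):
    return 0.5 * (1 + q) * np.log(1 + q) + 0.5 * (1 - q) * np.log(1 - q)

p = np.linspace(-20.0, 20.0, 200001)       # dense grid for the sup over p
for q in [0.0, 0.3, 0.7]:
    L_num = np.max(p * q - H(p))
    assert abs(L_num - L_closed(q)) < 1e-6
```

The maximizer is $p^* = \operatorname{arctanh} q$, so a bounded grid suffices for $|q|$ bounded away from 1.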
Example 1.7. [Lévy processes] If $\eta$ in Example 1.5 does not depend on $x$, then the process is a Lévy process, that is, a process with stationary, independent increments. Lynch and Sethuraman [82] and Mogulskii [88] consider the real-valued case. de Acosta [25] considers Banach space-valued processes. In de Acosta's setting, let $E$ be a Banach space and $E^*$ its dual. Recall that for $f\in C^1(E)$, the gradient $\nabla f(x) = \nabla_x f(x)$ is the unique element in $E^*$ satisfying
\[ f(x+z) - f(x) = \langle z,\nabla f(x)\rangle + o(\|z\|), \qquad \forall z\in E. \]
Let $F\subset E^*$ separate points, and let $\mathcal D(A)$ be the collection of functions of the form $f(x) = g(\langle x,\xi_1\rangle,\dots,\langle x,\xi_m\rangle)$, where $\xi_1,\dots,\xi_m\in F$ and $g\in C_c^2(\mathbb R^m)$. Then
\[ \nabla f(x) = \sum_{k=1}^m \partial_k g(\langle x,\xi_1\rangle,\dots,\langle x,\xi_m\rangle)\,\xi_k \in E^*. \]
Assuming $\int_E 1\wedge\langle z,\xi\rangle^2\,\eta(dz) < \infty$, for every $\xi\in F$, we have
\[ \begin{aligned} A_n g(x) ={}& n\int_E \Big( g(\langle x+\tfrac1n z,\xi_1\rangle,\dots,\langle x+\tfrac1n z,\xi_m\rangle) - g(\langle x,\xi_1\rangle,\dots,\langle x,\xi_m\rangle) \\ &\qquad - \frac1n I_{\{\|z\|\le 1\}}\sum_{k=1}^m \langle z,\xi_k\rangle\,\partial_k g(\langle x,\xi_1\rangle,\dots,\langle x,\xi_m\rangle)\Big)\,\eta(dz) + \sum_{k=1}^m \langle b,\xi_k\rangle\,\partial_k g(\langle x,\xi_1\rangle,\dots,\langle x,\xi_m\rangle) \\ ={}& n\int_E \Big( f(x+\tfrac1n z) - f(x) - \frac1n I_{\{\|z\|\le 1\}}\langle z,\nabla f(x)\rangle\Big)\,\eta(dz) + \langle b,\nabla f(x)\rangle, \end{aligned} \]
\[ \widetilde H_n f(x) = n\int_E \Big(e^{f(x+\frac1n z)-f(x)} - 1 - \frac1n I_{\{\|z\|\le 1\}}\langle z,\nabla f(x)\rangle\Big)\,\eta(dz) + \langle b,\nabla f(x)\rangle, \]
and
\[ H_n f(x) = \int_E \Big(e^{n(f(x+\frac1n z)-f(x))} - 1 - I_{\{\|z\|\le 1\}}\langle z,\nabla f(x)\rangle\Big)\,\eta(dz) + \langle b,\nabla f(x)\rangle. \]
Now, assuming $\int_E (e^{\langle\alpha,z\rangle} - 1 - I_{\{\|z\|\le 1\}}\langle\alpha,z\rangle)\,\eta(dz) < \infty$, for each $\alpha\in E^*$, $Hf = \lim_{n\to\infty} H_n f$ is given by
\[ Hf(x) = \int_E \big(e^{\langle z,\nabla f(x)\rangle} - 1 - I_{\{\|z\|\le 1\}}\langle z,\nabla f(x)\rangle\big)\,\eta(dz) + \langle b,\nabla f(x)\rangle. \]
Define
\[ H(p) = \int_E \big(e^{\langle z,p\rangle} - 1 - I_{\{\|z\|\le 1\}}\langle z,p\rangle\big)\,\eta(dz) + \langle b,p\rangle, \qquad p\in E^*, \]
and
\[ L(q) = \sup_{p\in E^*}\{\langle q,p\rangle - H(p)\}, \qquad q\in E. \]
$H$ is a convex function and $L$ is its Fenchel--Legendre transform, and hence
\[ H(p) = \sup_{q\in E}\{\langle q,p\rangle - L(q)\}. \]
Consequently
\[ Hf(x) = \mathbf{H}f(x) \equiv \sup_{u\in E}\{Af(x,u) - L(u)\}, \]
where $Af(x,u) = \langle u,\nabla f(x)\rangle$, and the rate function is again given by
\[ I(x) = \begin{cases} I_0(x(0)) + \int_0^\infty L(\dot x(s))\,ds & \text{if $x$ is absolutely continuous} \\ \infty & \text{otherwise.} \end{cases} \]

Example 1.8. [Random evolutions -- I] Let $B$ be the generator for a Markov process $Y$ with state space $E_0$, and let $Y_n(t) = Y(nt)$. Let $X_n$ satisfy $\dot X_n(t) = F(X_n(t),Y_n(t))$, where $F:\mathbb R^d\times E_0\to\mathbb R^d$. If $Y$ is ergodic with stationary distribution $\pi$, then
\[ \lim_{n\to\infty} X_n = X, \]
where $X$ satisfies $\dot X = \overline F(X)$, with $\overline F(x) = \int F(x,y)\,\pi(dy)$. Averaging results of this type date back at least to Hasminskii [66]. Corresponding large deviation theorems have been considered by Freidlin [49, 50]. (See Freidlin and Wentzell [52], Chapter 7.) For simplicity, assume $B$ is a bounded operator
\[ Bg(y) = \lambda(y)\int (g(z) - g(y))\,\mu(y,dz). \tag{1.19} \]
Then
\[ A_n g(x,y) = F(x,y)\cdot\nabla_x g(x,y) + n\lambda(y)\int (g(x,z) - g(x,y))\,\mu(y,dz), \]
\[ \widetilde H_n f(x,y) = F(x,y)\cdot\nabla_x f(x,y) + n\lambda(y)\int \big(e^{f(x,z)-f(x,y)} - 1\big)\,\mu(y,dz), \]
and
\[ H_n f(x,y) = F(x,y)\cdot\nabla_x f(x,y) + \lambda(y)\int \big(e^{n(f(x,z)-f(x,y))} - 1\big)\,\mu(y,dz). \]
Let $f_n(x,y) = f(x) + \frac1n h(x,y)$. Then
\[ \lim_{n\to\infty} H_n f_n(x,y) = F(x,y)\cdot\nabla_x f(x) + \lambda(y)\int \big(e^{h(x,z)-h(x,y)} - 1\big)\,\mu(y,dz). \]
Consequently, $H$ is multivalued. One approach to dealing with this limit is to select $h$ so that the limit is independent of $y$, that is, to find functions $h(x,y)$ and $g(x)$ such that
\[ F(x,y)\cdot\nabla_x f(x) + \lambda(y)\int \big(e^{h(x,z)-h(x,y)} - 1\big)\,\mu(y,dz) = g(x). \tag{1.20} \]
Multiply both sides of the equation by $e^{h(x,y)}$, and fix $x$. One needs to solve an equation of the form
\[ \alpha(y)h(y) + \lambda(y)\int (h(z) - h(y))\,\mu(y,dz) = \gamma h(y), \tag{1.21} \]
where $h$ plays the role of $e^{h(x,y)}$ and hence must be positive. Of course, (1.21) is just an eigenvalue problem for the operator
\[ Q = \alpha I + B. \tag{1.22} \]
If $E_0$ is finite, that is, $Y$ is a finite Markov chain, and $B$ is irreducible, then the Perron--Frobenius theorem implies the existence of a unique (up to multiplication by a constant) strictly positive $h$ which is the eigenfunction corresponding to the largest eigenvalue of $Q$. Note that in (1.20), it is only the ratio $h(z)/h(y)$ that is relevant, so the arbitrary constant cancels. For linear semigroups, this approach to convergence of generators was introduced in Kurtz [71] and has been developed extensively by Papanicolaou and Varadhan [93] and Kushner (for example, [79]). For nonlinear semigroups, Kurtz [72] and Evans [37] have used similar arguments. If we use $\gamma(\alpha(\cdot))$ to denote the largest eigenvalue of the $Q$ in (1.22), then we arrive at a single-valued limit operator
\[ Hf(x) = \gamma(F(x,\cdot)\cdot\nabla f(x)). \]
Define
\[ H(x,p) = \gamma(F(x,\cdot)\cdot p) \]
and
\[ L(x,q) = \sup_{p\in\mathbb R^d}\{p\cdot q - H(x,p)\}, \]
for $x,p,q\in\mathbb R^d$. There are various probabilistic representations for the principal eigenvalue $\gamma$. (See Appendix B.) It follows from these representations that $H$ is convex in $p$, so
\[ Hf(x) = H(x,\nabla f(x)) = \mathbf{H}f(x) \equiv \sup_{u\in\mathbb R^d}\{Af(x,u) - L(x,u)\}, \]
where $Af(x,u) = u\cdot\nabla f(x)$, for $f\in C_c^1(\mathbb R^d)$. By the results in Chapter 8, (1.17) gives a representation of the rate function. Another representation of the rate function can be of special interest. Donsker and Varadhan [32] generalize the classical Rayleigh--Ritz formula and obtain the following variational representation of the principal eigenvalue of $Q$ when $B$ is a general operator satisfying the maximum principle:
\[ \gamma(\alpha) = \sup_{\mu\in\mathcal P(E_0)}\Big\{\int_{E_0}\alpha\,d\mu - I_B(\mu)\Big\}, \]
where
\[ I_B(\mu) = -\inf_{f\in\mathcal D(B),\,\inf_y f(y)>0} \int_{E_0} \frac{Bf}{f}\,d\mu. \tag{1.23} \]
Therefore, setting
\[ Af(x,\mu) = \int_{E_0} F(x,y)\,\mu(dy)\cdot\nabla f(x), \]
$H$ satisfies
\[ Hf(x) = \mathbf{H}f(x) \equiv \sup_{\mu\in\mathcal P(E_0)}\{Af(x,\mu) - I_B(\mu)\}. \]
By the methods in Chapter 8, the rate function can also be represented as
\[ I(x) = I_0(x(0)) + \inf\Big\{\int_0^\infty I_B(\mu(s))\,ds : \dot x(t) = \int_{E_0} F(x(t),y)\,\mu(t,dy),\ \mu(t)\in\mathcal P(E_0),\ t\ge 0\Big\}, \]
if $x$ is absolutely continuous, and $I(x) = \infty$ otherwise.

Example 1.9. [Random evolutions -- II] Let $B$ be as in (1.19), let $Y$ be the corresponding Markov process, and let $Y_n(t) = Y(n^3 t)$. Let $X_n$ satisfy $\dot X_n(t) = nF(X_n(t),Y_n(t))$. Suppose that $Y$ is ergodic with stationary distribution $\pi$ and that
\[ \int_{E_0} F(x,y)\,\pi(dy) = 0. \tag{1.24} \]
Then
\[ A_n g(x,y) = nF(x,y)\cdot\nabla_x g(x,y) + n^3\lambda(y)\int (g(x,z) - g(x,y))\,\mu(y,dz), \]
\[ \widetilde H_n f(x,y) = nF(x,y)\cdot\nabla_x f(x,y) + n^3\lambda(y)\int \big(e^{f(x,z)-f(x,y)} - 1\big)\,\mu(y,dz), \]
and
\[ H_n f(x,y) = nF(x,y)\cdot\nabla_x f(x,y) + n^2\lambda(y)\int \big(e^{n(f(x,z)-f(x,y))} - 1\big)\,\mu(y,dz). \]
Let $f_n(x,y) = f(x) + \frac1{n^2}h_1(x,y) + \frac1{n^3}h_2(x,y)$, and assume that
\[ Bh_1(x,\cdot)(y) = -F(x,y)\cdot\nabla_x f(x). \]
If $E_0$ is finite and $B$ is irreducible, then (1.24) ensures the existence of $h_1$. Note that $h_1(x,y)$ will be of the form $\alpha(x,y)\cdot\nabla_x f(x)$, where $\alpha$ is a vector function satisfying $B\alpha(x,\cdot)(y) = -F(x,y)$. It follows that
\[ \lim_{n\to\infty} H_n f_n(x,y) = \frac{\lambda(y)}2\int_{E_0} (h_1(x,z) - h_1(x,y))^2\,\mu(y,dz) + \lambda(y)\int_{E_0} (h_2(x,z) - h_2(x,y))\,\mu(y,dz), \]
and again we have a multivalued limit. As before, one approach is to select $h_2$ so that the limit is independent of $y$. If $E_0$ is finite, (1.24) ensures that $h_2$ exists. In any case, if $h_2$ exists, the limit will be
\[ Hf(x) = \int_{E_0}\int_{E_0} \frac{\lambda(y)}2\big((\alpha(x,z) - \alpha(x,y))\cdot\nabla_x f(x)\big)^2\,\mu(y,dz)\,\pi(dy). \]
Note that $H$ is of the same form as $H$ in Example 1.4, indicating that this "random evolution" behaves like the "small diffusion" process. If $Y_n(t) = Y(n^3 t)$ is replaced by $Y_n(t) = Y(n^2 t)$, then $X_n$ will converge in distribution to a diffusion. (See, for example, Ethier and Kurtz [36], Chapter 12.) The results in Example 1.4 give variational representations of the rate function.

Example 1.10. [Periodic diffusions] Baldi [6] has considered large deviations for models of the following type. Let $\sigma$ be a periodic $d\times d$-matrix-valued function (for each $1\le i\le d$, there is a period $p_i > 0$ such that $\sigma(y) = \sigma(y + p_i e_i)$ for all $y\in\mathbb R^d$), and let $X_n$ satisfy the Itô equation
\[ dX_n(t) = \frac1{\sqrt n}\,\sigma(\alpha_n X_n(t))\,dW(t), \]
where $\alpha_n > 0$ and $\lim_{n\to\infty} n^{-1}\alpha_n = \infty$. Let $a = \sigma\sigma^T$. Then
\[ A_n f(x) = \frac1{2n}\sum_{ij} a_{ij}(\alpha_n x)\,\frac{\partial^2}{\partial x_i\partial x_j} f(x), \]
\[ \widetilde H_n f(x) = \frac1{2n}\sum_{ij} a_{ij}(\alpha_n x)\,\partial_i\partial_j f(x) + \frac1{2n}\sum_{ij} a_{ij}(\alpha_n x)\,\partial_i f(x)\,\partial_j f(x), \]
and
\[ H_n f(x) = \frac1{2n}\sum_{ij} a_{ij}(\alpha_n x)\,\partial_i\partial_j f(x) + \frac12\sum_{ij} a_{ij}(\alpha_n x)\,\partial_i f(x)\,\partial_j f(x). \]
Let $f_n(x) = f(x) + \epsilon_n h(x,\alpha_n x)$, where $\epsilon_n = n\alpha_n^{-2}$. Noting that, by assumption, $\epsilon_n\alpha_n = n\alpha_n^{-1}\to 0$, if we select $h$ with the same periods in $y$ as the $a_{ij}$ so that
\[ \frac12\sum_{ij} a_{ij}(y)\Big(\frac{\partial^2}{\partial y_i\partial y_j} h(x,y) + \partial_i f(x)\,\partial_j f(x)\Big) = g(x), \tag{1.25} \]
for some $g$ independent of $y$, then
\[ \lim_{n\to\infty} H_n f_n(x) = g(x). \]
For each $x$, the desired $h(x,\cdot)$ is the solution of the linear partial differential equation
\[ \frac12\sum_{ij} a_{ij}(y)\,\frac{\partial^2}{\partial y_i\partial y_j} h(x,y) = g(x) - \frac12\sum_{i,j} a_{ij}(y)\,\partial_i f(x)\,\partial_j f(x) \tag{1.26} \]
on $[0,p_1]\times\cdots\times[0,p_d]$ with periodic boundary conditions, extended periodically to all of $\mathbb R^d$. For $h$ to exist, we must have
\[ g(x) = \frac12\sum_{ij} \overline a_{ij}\,\partial_i f(x)\,\partial_j f(x), \tag{1.27} \]
where $\overline a_{ij}$ is the average of $a_{ij}$ with respect to the stationary distribution for the diffusion on $[0,p_1]\times\cdots\times[0,p_d]$ whose generator is
\[ A_0 f(y) = \frac12\sum_{i,j} a_{ij}(y)\,\frac{\partial^2}{\partial y_i\partial y_j} f(y), \]
with periodic boundary conditions. To see that (1.27) must hold, simply integrate both sides of (1.26) against the stationary distribution. In particular,
\[ h(x,y) = \frac12\sum_{ij} h_{ij}(y)\,\partial_i f(x)\,\partial_j f(x), \]
where $h_{ij}$ satisfies $A_0 h_{ij}(y) = \overline a_{ij} - a_{ij}(y)$. If $A_0$ is uniformly elliptic and $f$ is $C^3$, then, with sufficient regularity on the $a_{ij}$, $h\in C^2$ will exist. By (1.27), the limit $Hf$ is a special case of the operator $H$ in Example 1.4. Consequently, we can identify the rate function as in that example.

Example 1.11. [Donsker--Varadhan theory for occupation measures] Large deviation theory for occupation measures (see Donsker and Varadhan [33]) is closely related to Example 1.8. Let $Y$ be a Markov process with generator $B$ and state space $E_0$, and for $n = 1,2,\dots$, define
\[ \Gamma_n(C,t) = \frac1n\int_0^{nt} I_C(Y(s))\,ds. \]
Then $Z_n$ defined by $Z_n(t) = (Y(nt),\Gamma_n(\cdot,t))$ is a Markov process with state space $E = E_0\times\mathcal M_F(E_0)$, where $\mathcal M_F(E_0)$ is the space of finite measures on $E_0$. Let $h\in C_b(E_0\times\mathbb R^m)$ be differentiable in the real variables and satisfy $h(\cdot,x)\in\mathcal D(B)$ for $x\in\mathbb R^m$. For $\alpha_i\in C_b(E_0)$, $i = 1,\dots,m$, let
\[ g(y,z) = h(y,\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle), \qquad (y,z)\in E_0\times\mathcal M_F(E_0), \]
where $\langle\alpha,z\rangle = \int_{E_0}\alpha\,dz$. The generator for $Z_n$ satisfies
\[ A_n g(y,z) = nBh(y,\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle) + \sum_{i=1}^m \alpha_i(y)\,\partial_i h(y,\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle), \]
where $\partial_i h(y,x_1,\dots,x_m) = \frac{\partial}{\partial x_i} h(y,x_1,\dots,x_m)$. For definiteness, let $B$ be a diffusion operator
\[ Bg(y) = \frac12\sum_{i,j} a_{ij}(y)\,\frac{\partial^2}{\partial y_i\partial y_j} g(y) + \sum_i b_i(y)\,\frac{\partial}{\partial y_i} g(y). \]
Then for $f(y,z) = h(y,\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle)$,
\[ \widetilde H_n f(y,z) = A_n f(y,z) + \frac n2\sum_{i,j} a_{ij}(y)\,\frac{\partial}{\partial y_i} f(y,z)\,\frac{\partial}{\partial y_j} f(y,z) \]
and
\[ H_n f(y,z) = A_n f(y,z) + \frac{n^2}2\sum_{i,j} a_{ij}(y)\,\frac{\partial}{\partial y_i} f(y,z)\,\frac{\partial}{\partial y_j} f(y,z). \]
If we let $f(z) = h(\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle)$ and
\[ f_n(y,z) = h(\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle) + \frac1n h_0(y,\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle), \]
then
\[ \begin{aligned} \lim_{n\to\infty} H_n f_n(y,z) ={}& \sum_{i=1}^m \alpha_i(y)\,\partial_i h(\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle) + Bh_0(y,\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle) \\ &+ \frac12\sum_{i,j} a_{ij}(y)\,\frac{\partial}{\partial y_i} h_0(y,\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle)\,\frac{\partial}{\partial y_j} h_0(y,\langle\alpha_1,z\rangle,\dots,\langle\alpha_m,z\rangle). \end{aligned} \tag{1.28} \]
As in Example 1.8, we have a multivalued limit, and the rate function representation is also similar to the representation in Example 1.8.

Example 1.12. [Stochastic reaction-diffusion equations] We consider a stochastic reaction-diffusion equation on a rescaled lattice, which is a Ginzburg--Landau model of nonconservative type. Let $F\in C^2(\mathbb R)$, $\sup_r F''(r) < \infty$, and $O = [0,1)^d$ with periodic boundary. We discretize $O$ into
\[ \Lambda_m \equiv \{0,\tfrac1m,\dots,\tfrac km,\dots,1-\tfrac1m\}^d \subset O, \]
also with periodic boundary conditions (that is, $0 = 1$). Let $m = m(n)$ depend on $n$ so that $\lim_{n\to\infty} m(n) = \infty$. When there is no possibility of confusion, we simply write $m$ in place of $m(n)$. For $\rho\in\mathbb R^{\Lambda_m}$, define
\[ \begin{aligned} \nabla_m^i\rho(x) &= \frac m2\big(\rho(x + \tfrac1m e_i) - \rho(x - \tfrac1m e_i)\big); \\ \nabla_m\rho(x) &= (\nabla_m^1\rho(x),\dots,\nabla_m^d\rho(x)); \\ \Delta_m\rho(x) &= \nabla_m\cdot\nabla_m\rho(x) = \Big(\frac m2\Big)^2\sum_{k=1}^d \big(\rho(x + \tfrac2m e_k) - 2\rho(x) + \rho(x - \tfrac2m e_k)\big), \end{aligned} \tag{1.29} \]
and for a vector-valued function $\xi(x) = (\xi_1(x),\dots,\xi_d(x))$, define
\[ \operatorname{div}_m\xi(x) = \nabla_m\cdot\xi(x) = \sum_{k=1}^d \nabla_m^k\xi_k(x). \]
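The discrete operators in (1.29) are straightforward to realize with periodic shifts. The following one-dimensional sketch (an illustration, with an arbitrary random $\rho$) confirms that $\Delta_m$ is exactly the composition $\nabla_m\cdot\nabla_m$.

```python
import numpy as np

# Discrete periodic gradient and Laplacian from (1.29) in dimension d = 1,
# using np.roll for the periodic boundary.  grad_m is the centered
# difference (m/2)(rho(x + 1/m) - rho(x - 1/m)); lap_m is its composition
# with itself, with step 2/m and factor (m/2)^2.
m = 64
rho = np.random.default_rng(0).standard_normal(m)   # arbitrary rho on Lambda_m

def grad_m(r):
    # (m/2) * (r(x + 1/m) - r(x - 1/m)), periodic
    return 0.5 * m * (np.roll(r, -1) - np.roll(r, 1))

def lap_m(r):
    # (m/2)^2 * (r(x + 2/m) - 2 r(x) + r(x - 2/m)), periodic
    return (0.5 * m) ** 2 * (np.roll(r, -2) - 2 * r + np.roll(r, 2))

assert np.allclose(grad_m(grad_m(rho)), lap_m(rho))
```

Note that, because the centered difference skips the midpoint, $\Delta_m$ couples sites two lattice steps apart.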
We consider a finite system of stochastic differential equations
\[ dY_n(t,x) = (\Delta_m Y_n)(t,x)\,dt - F'(Y_n(t,x))\,dt + \frac{m^{d/2}}{\sqrt n}\,dB(t,x), \tag{1.30} \]
where $x\in\Lambda_m$ and $\{B(t,x) : x\in\Lambda_m\}$ are independent standard Brownian motions. Let $E_n = \mathbb R^{\Lambda_m}$. For $p\in C^\infty(O)$ and $\rho\in E_n$, define
\[ \langle\rho,p\rangle_m \equiv \sum_{x\in\Lambda_m} p(x)\rho(x)\,m^{-d}, \]
and
\[ \langle\rho,p\rangle \equiv \int_O p(x)\rho(x)\,dx, \qquad \rho\in L^2(O). \]
For $\gamma_1,\dots,\gamma_l\in C^\infty(O)$ and $\varphi\in C_c^\infty(\mathbb R^l)$, let $\vec\gamma = (\gamma_1,\dots,\gamma_l)$ and
\[ f_n(\rho) = \varphi(\langle\rho,\gamma_1\rangle_m,\dots,\langle\rho,\gamma_l\rangle_m) = \varphi(\langle\rho,\vec\gamma\rangle_m), \qquad f(\rho) = \varphi(\langle\rho,\vec\gamma\rangle), \tag{1.31} \]
and
\[ A_n f_n(\rho) = \sum_{i=1}^l \partial_i\varphi(\langle\rho,\vec\gamma\rangle_m)\sum_{x\in\Lambda_m}\gamma_i(x)\big(\Delta_m\rho(x) - F'(\rho(x))\big)\,m^{-d} + \frac1{2n}\sum_{i,j=1}^l \partial^2_{ij}\varphi(\langle\rho,\vec\gamma\rangle_m)\sum_{x\in\Lambda_m}\gamma_i(x)\gamma_j(x)\,m^{-d}. \]
Then, by Itô's formula, $Y_n$ is an $E_n$-valued solution to the martingale problem for $A_n$ and
\[ \begin{aligned} H_n f_n(\rho) \equiv{}& \frac1n e^{-nf_n}A_n e^{nf_n}(\rho) \\ ={}& \sum_{i=1}^l \partial_i\varphi(\langle\rho,\vec\gamma\rangle_m)\sum_{x\in\Lambda_m}\gamma_i(x)\big(\Delta_m\rho(x) - F'(\rho(x))\big)\,m^{-d} \\ &+ \frac12\sum_{i,j=1}^l \partial_i\varphi(\langle\rho,\vec\gamma\rangle_m)\,\partial_j\varphi(\langle\rho,\vec\gamma\rangle_m)\sum_{x\in\Lambda_m}\gamma_i(x)\gamma_j(x)\,m^{-d} \\ &+ \frac1{2n}\sum_{i,j=1}^l \partial^2_{ij}\varphi(\langle\rho,\vec\gamma\rangle_m)\sum_{x\in\Lambda_m}\gamma_i(x)\gamma_j(x)\,m^{-d}. \end{aligned} \]
Define $\eta_n : E_n\to E = L^2(O)$ by
\[ \eta_n(\rho_n)(x_1,\dots,x_d) \equiv \sum_{i_1,\dots,i_d=0}^{m-1} \rho_n\Big(\frac{i_1}m,\dots,\frac{i_d}m\Big)\prod_{j=1}^d 1_{[i_j/m,(i_j+1)/m)}(x_j). \tag{1.32} \]
For $\rho_n\in E_n$ and $\rho\in E$ satisfying $\|\eta_n(\rho_n) - \rho\|_{L^2(O)}\to 0$, we have
\[ \begin{aligned} H_n f_n(\rho_n)\to Hf(\rho) \equiv{}& \sum_{i=1}^l \partial_i\varphi(\langle\rho,\vec\gamma\rangle)\big(\langle\rho,\Delta\gamma_i\rangle - \langle F'(\rho),\gamma_i\rangle\big) + \frac12\sum_{i,j=1}^l \partial_i\varphi(\langle\rho,\vec\gamma\rangle)\,\partial_j\varphi(\langle\rho,\vec\gamma\rangle)\int_O \gamma_i(x)\gamma_j(x)\,dx \\ ={}& \Big\langle\rho,\Delta\frac{\delta f}{\delta\rho}\Big\rangle - \Big\langle F'(\rho),\frac{\delta f}{\delta\rho}\Big\rangle + \frac12\Big\|\frac{\delta f}{\delta\rho}\Big\|^2_{L^2(O)} = H\Big(\rho,\frac{\delta f}{\delta\rho}\Big), \end{aligned} \tag{1.33} \]
where
\[ H(\rho,p) = \langle\Delta\rho - F'(\rho),p\rangle + \frac12\|p\|^2_{L^2(O)}, \qquad p\in C^\infty(O), \]
and
\[ \frac{\delta f}{\delta\rho} = \sum_{i=1}^l \partial_i\varphi(\langle\rho,\vec\gamma\rangle)\,\gamma_i. \tag{1.34} \]
By analogy with Example 1.4, we expect $\{\eta_n(Y_n)\}$ to satisfy a large deviation principle in $L^2(O)$. Let
\[ L(\rho,u) = \sup_{p\in C^\infty(O)}\{\langle u,p\rangle - H(\rho,p)\}. \]
The rate function is analogous to that of (1.17) and should be given by
\[ I(\rho) = I_0(\rho(0)) + \int_0^\infty L(\rho(s),\dot\rho(s))\,ds. \]
A rigorous proof of this statement is given in Chapter 13, Theorem 13.7. The equation for $Y_n$ is a discretized version of the stochastic partial differential equation given in weak form by
\[ \begin{aligned} \int_O \varphi(x)X_n(t,x)\,dx ={}& \int_O \varphi(x)X_n(0,x)\,dx + \int_{O\times[0,t]} \Delta\varphi(x)X_n(s,x)\,dx\,ds \\ &- \int_{O\times[0,t]} \varphi(x)F'(X_n(s,x))\,dx\,ds + \frac1{\sqrt n}\int_{O\times[0,t]} \varphi(x)\,W(dx\times ds), \end{aligned} \tag{1.35} \]
where $W(dx\times ds)$ is a space-time Gaussian white noise. Unfortunately, (1.35) does not have a function-valued solution when $d\ge 2$. To study the large deviation problem in $L^2(O)$, as described above, even for the discretized equation (1.30), requires that $m$ go to infinity slowly enough with $n$. (See Theorem 13.7.) Equation (1.35) is also known as a stochastically perturbed Allen--Cahn equation. It has been used in material science to study the time evolution of material density when the total mass is not conserved. More generally, in statistical physics, when $F$ is nonconvex, the equation is used to model phase transition (Spohn [113]). When $d = 1$, (1.35) is well-defined and Sowers [112] gives the large deviation principle.
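The lattice system (1.30) can be sampled directly. The sketch below runs Euler--Maruyama in $d = 1$ with the convex choice $F(r) = r^2/2$ (an assumed example satisfying $\sup_r F'' < \infty$; the parameter values are illustrative). The time step must resolve the stiffness of $\Delta_m$, whose spectrum is of order $m^2$.

```python
import numpy as np

# Euler-Maruyama sketch of the lattice reaction-diffusion SDE (1.30) in
# d = 1.  F(r) = r^2 / 2 (so F'(r) = r) is an assumed choice satisfying the
# standing condition sup F'' < infinity; m, n, dt are illustrative.
rng = np.random.default_rng(1)
m, n = 32, 200             # lattice size, large deviation parameter
dt, steps = 1e-4, 2000     # explicit scheme: need dt * m^2 well below 2

def lap_m(r):              # (m/2)^2 (r(x+2/m) - 2 r(x) + r(x-2/m)), periodic
    return (0.5 * m) ** 2 * (np.roll(r, -2) - 2 * r + np.roll(r, 2))

Y = rng.standard_normal(m)                      # arbitrary initial data
noise_scale = m ** 0.5 / np.sqrt(n)             # m^{d/2} / sqrt(n), d = 1
for _ in range(steps):
    dB = np.sqrt(dt) * rng.standard_normal(m)   # independent increments
    Y = Y + (lap_m(Y) - Y) * dt + noise_scale * dB

assert np.all(np.isfinite(Y))
```

For large $n$ the noise term is small and the trajectory hugs the deterministic reaction-diffusion flow, which is exactly the regime in which the rate function above measures the cost of deviations.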
Example 1.13. [Stochastic Cahn--Hilliard equations] Spohn [113] introduces a type of Ginzburg--Landau model with a conserved quantity motivated by the study of phase transitions. Let $O$ and $F$ be as in Example 1.12. Formally, the equation can be written
\[ \partial_t X_n(t,x) = \nabla\cdot\nabla\big(-\Delta X_n(t,x) + F'(X_n(t,x))\big) + \frac1{\sqrt n}\,\nabla\cdot\partial_t W(t,x), \tag{1.36} \]
or in a more easily interpretable, but still formal, weak form
\[ \begin{aligned} \int_O \varphi(x)X_n(t,x)\,dx ={}& \int_O \varphi(x)X_n(0,x)\,dx - \int_{O\times[0,t]} \Delta^2\varphi(x)X_n(s,x)\,dx\,ds \\ &+ \int_{O\times[0,t]} \Delta\varphi(x)F'(X_n(s,x))\,dx\,ds + \frac1{\sqrt n}\sum_{k=1}^d\int_{O\times[0,t]} \partial_{x_k}\varphi(x)\,W_k(dx\times ds), \end{aligned} \tag{1.37} \]
where the $W_k$ are independent Gaussian white noises and $W = (W_1,\dots,W_d)$. Without the stochastic term, (1.36) is known as the Cahn--Hilliard equation and originally arose in material science, where the solution gives the evolution of a material density. In these applications, (1.36) is derived as a phenomenological model using rough and heuristic arguments. Frequently, only asymptotic properties are of practical interest. Indeed, as in the previous example, (1.36) does not have a function-valued solution. Therefore, for the large deviation problem here, we consider a discretized version. Let $\nabla_m$ and $\Delta_m$ be defined according to (1.29), and consider the system
\[ dY_n(t,x) = \operatorname{div}_m\Big(\nabla_m\big(-\Delta_m Y_n(t,x) + F'(Y_n(t,x))\big)\,dt + \frac{m^{d/2}}{\sqrt n}\,dB(t,x)\Big), \qquad x\in\Lambda_m, \tag{1.38} \]
where
\[ B(t,x) = (B_1(t,x),\dots,B_d(t,x)), \qquad x\in\Lambda_m, \]
with the $B_j(\cdot,x)$ independent, real-valued standard Brownian motions. Note that the solution satisfies
\[ \sum_{x\in\Lambda_m} Y_n(t,x) = \sum_{x\in\Lambda_m} Y_n(0,x), \qquad t\ge 0, \]
so we may as well choose the state spaces
\[ E_n \equiv \Big\{\rho\in\mathbb R^{\Lambda_m} : \sum_{x\in\Lambda_m}\rho(x) = 0\Big\}, \qquad E \equiv \Big\{\rho\in L^2(O) : \int_O \rho(x)\,dx = 0\Big\}. \tag{1.39} \]
The map $\eta_n$ in (1.32) embeds $E_n$ into $E$. For functions of the form (1.31), by Itô's formula,
\[ A_n f(\rho) = \sum_{i=1}^l \partial_i\varphi(\langle\rho,\vec\gamma\rangle_m)\,\langle\gamma_i,\Delta_m(-\Delta_m\rho + F'(\rho))\rangle_m + \frac1{2n}\sum_{i,j=1}^l \partial^2_{ij}\varphi(\langle\rho,\vec\gamma\rangle_m)\sum_{z\in\Lambda_m}\nabla_m\gamma_i(z)\cdot\nabla_m\gamma_j(z)\,m^{-d}. \tag{1.40} \]
Therefore
\[ \begin{aligned} H_n f(\rho) \equiv{}& \frac1n e^{-nf}A_n e^{nf}(\rho) \\ ={}& \sum_{i=1}^l \partial_i\varphi(\langle\rho,\vec\gamma\rangle_m)\,\langle\gamma_i,\Delta_m(-\Delta_m\rho + F'(\rho))\rangle_m \\ &+ \frac12\sum_{i,j=1}^l \partial_i\varphi(\langle\rho,\vec\gamma\rangle_m)\,\partial_j\varphi(\langle\rho,\vec\gamma\rangle_m)\Big(\sum_{x\in\Lambda_m}\nabla_m\gamma_i(x)\cdot\nabla_m\gamma_j(x)\,m^{-d}\Big) \\ &+ \frac1{2n}\sum_{i,j=1}^l \partial^2_{ij}\varphi(\langle\rho,\vec\gamma\rangle_m)\sum_{z\in\Lambda_m}\nabla_m\gamma_i(z)\cdot\nabla_m\gamma_j(z)\,m^{-d}. \end{aligned} \]
As in (1.34), let
\[ \frac{\delta f}{\delta\rho} = \sum_{i=1}^l \partial_i\varphi(\langle\rho,\vec\gamma\rangle)\,\gamma_i. \]
Then for every $\rho_n\in E_n$ satisfying $\lim_{n\to\infty}\|\eta_n(\rho_n) - \rho\|^2_{L^2(O)} = 0$,
\[ \lim_{n\to\infty} H_n f(\rho_n) = Hf(\rho) = H\Big(\rho,\frac{\delta f}{\delta\rho}\Big), \tag{1.41} \]
where
\[ H(\rho,p) = \langle\Delta(-\Delta\rho + F'(\rho)),p\rangle + \frac12\|\nabla p\|^2_{L^2(O)}, \qquad \rho\in L^2(O),\ p\in C^\infty(O). \tag{1.42} \]
Following the method of rate function identification in the finite dimensional Example 1.4, we expect that the rate function can be identified using the same arguments as in Example 1.12. The proof of this result is given in Chapter 13. See Theorem 13.13. Bertini, Landim, and Olla [10] proved a large deviation principle for a variant of (1.38) in which the discrete version of $\nabla W$ is replaced by a discrete version of $\Delta W$, where $W$ is a scalar Brownian sheet. The current form of (1.38) was treated using a slightly different technique in Feng [42].

Example 1.14. [Weakly interacting stochastic particles] For $n = 1,2,\dots$, we consider the finite system of stochastic differential equations
\[ dX_{n,i}(t) = -\nabla\Psi(X_{n,i}(t))\,dt - \frac1n\sum_{j=1}^n \nabla\Phi(X_{n,i}(t) - X_{n,j}(t))\,dt + dW_i(t), \tag{1.43} \]
where $W_i(t)$, $i = 1,2,\dots,n$, are independent, $\mathbb R^d$-valued, standard Brownian motions. Define the empirical measure-valued processes
\[ \rho_n(t) = \frac1n\sum_{k=1}^n \delta_{X_{n,k}(t)}. \tag{1.44} \]
Then under appropriate growth conditions on $\Psi$ and $\Phi$, a law of large numbers (the McKean--Vlasov limit) holds. Specifically, $\rho_n$ converges to a probability measure-valued solution of
\[ \frac{\partial}{\partial t}\rho = \frac12\Delta\rho + \nabla\cdot(\rho\nabla\Psi) + \nabla\cdot(\rho\nabla(\rho*\Phi)), \tag{1.45} \]
where $\rho*\Phi(x) = \int_{\mathbb R^d}\Phi(x-y)\,\rho(dy)$. We consider the corresponding large deviation principle. When $\Phi(z) = \theta|z|^2/2$, $\theta > 0$, (1.43) is known as the ferromagnetic Curie--Weiss model
\[ dX_{n,k}(t) = -\nabla\Psi(X_{n,k}(t))\,dt - \frac\theta n\sum_{j=1}^n (X_{n,k}(t) - X_{n,j}(t))\,dt + dW_k(t). \tag{1.46} \]
The large deviation problem for this model, as well as for a larger class of models, has been considered by Dawson and Gärtner in a series of publications. See [23] and [22] and the references therein. In connection with gas kinetics in statistical mechanics, the system (1.43) with a general semiconvex $\Phi$ gives a microscopic statistical foundation for certain deterministic models of the evolution of a spatially homogeneous gas in a granular media. In the infinite system limit, $n\to\infty$, the law of large numbers gives a nonlinear partial differential equation modeling the evolution. See Section 9.6 of Villani [126] for a discussion of these models. Our methods applied to this problem seem more involved than those of Dawson and Gärtner [23]; however, our interest is not only in the large deviation problem, but also in the well-posedness of an associated nonlinear equation (1.50) it induces. This nonlinear equation is a special case of a Hamilton--Jacobi equation in the space of probability measures and is closely related to the Hamilton--Jacobi equations in Banach spaces studied by Crandall and Lions [21, 20] and Tataru [116]. The previous work, however, requires the state space to be a subset of a Banach space satisfying the Radon--Nikodym property. We note that even though the space of probability measures with Lebesgue density is a bounded subset of $L^1(\mathbb R^d)$ (i.e. $\int_{\mathbb R^d}\rho(x)\,dx = 1$), $L^1(\mathbb R^d)$ does not satisfy the Radon--Nikodym property. (See, for example, [31], page 31.) Assume that $\Phi,\Psi\in C^2(\mathbb R^d)$ and that $\nabla\Phi(z)$ has subquadratic growth as $|z|\to\infty$. More conditions on $\Phi,\Psi$ will be imposed later. To simplify, we only consider the case where $\Phi$ is an even function (i.e. $\Phi(x) = \Phi(-x)$).
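In the Curie--Weiss case (1.46), the interaction drift depends on the empirical measure only through its mean, which makes (1.43) easy to simulate. The sketch below takes $d = 1$ and the assumed confining potential $\Psi(x) = x^2/2$ (an illustrative choice, not from the text) and checks that the empirical mean stays near the McKean--Vlasov equilibrium $0$ for large $n$.

```python
import numpy as np

# Simulation sketch of the weakly interacting system (1.43) in d = 1 with
# the Curie-Weiss interaction Phi(z) = theta z^2 / 2 and the assumed
# potential Psi(x) = x^2 / 2.  The interaction drift
#   -(1/n) sum_j Phi'(X_i - X_j) = -theta (X_i - mean(X))
# depends on the empirical measure only through its mean.
rng = np.random.default_rng(2)
n, theta, dt, steps = 500, 1.0, 1e-3, 5000

X = rng.standard_normal(n)
for _ in range(steps):
    drift = -X - theta * (X - X.mean())    # -Psi'(X_i) - theta (X_i - mean)
    X = X + drift * dt + np.sqrt(dt) * rng.standard_normal(n)

# with a convex Psi the empirical mean fluctuates at scale n^{-1/2} around 0
assert abs(X.mean()) < 0.5
```

Large deviations of $\rho_n$ from this law-of-large-numbers behavior are exactly what the rate function constructed below quantifies.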
Let $d$ be the order-2 Wasserstein metric on $E$ (i.e. (D.14) with $p = 2$). Then $(E,d)$ is a complete separable metric space and $\rho_n\to\rho_0$ in $(E,d)$ if and only if $\rho_n\Rightarrow\rho_0$, in the sense of weak convergence of probability measures, and $\int_{\mathbb R^d}|x|^2\,d\rho_n\to\int_{\mathbb R^d}|x|^2\,d\rho_0$ (Lemma D.17). Define
\[ E_n \equiv \Big\{\rho(dx) \equiv \frac1n\sum_{k=1}^n \delta_{x_k}(dx),\ x_k\in\mathbb R^d,\ k = 1,\dots,n\Big\} \subset E. \]
Let $r_n$ be the restriction of $d$ to $E_n$. For fixed $n$, the corresponding topology is just the topology of weak convergence. For each $n$, $\rho_n(\cdot)$ in (1.44) is a probability-measure-valued Markov process. We calculate its generator next. Define
\[ B(\rho)p(x) = \frac12\Delta p(x) - \nabla(\Psi + \Phi*\rho)(x)\cdot\nabla p(x), \qquad \forall p\in C^2(\mathbb R^d),\ \rho\in\mathcal P(\mathbb R^d). \]
Let $p_1,\dots,p_l\in C_c^2(\mathbb R^d)$, and define
\[ p = (p_1,\dots,p_l), \qquad \langle p,\rho\rangle = \Big(\int_{\mathbb R^d} p_1\,d\rho,\dots,\int_{\mathbb R^d} p_l\,d\rho\Big). \]
Then for
\[ f(\rho) = \varphi(\langle p,\rho\rangle), \qquad \varphi\in C^2(\mathbb R^l), \tag{1.47} \]
\[ A_n f(\rho) = \sum_{i=1}^l \partial_i\varphi(\langle p,\rho\rangle)\,\langle\rho,B(\rho)p_i\rangle + \frac1{2n}\sum_{i,j=1}^l \partial^2_{ij}\varphi(\langle p,\rho\rangle)\,\langle\rho,(\nabla p_i)^T\nabla p_j\rangle. \]
Therefore
\[ \begin{aligned} H_n f(\rho) \equiv{}& \frac1n e^{-nf}A_n e^{nf}(\rho) \\ ={}& \sum_{i=1}^l \partial_i\varphi(\langle p,\rho\rangle)\,\langle\rho,B(\rho)p_i\rangle + \frac12\sum_{i,j=1}^l \partial_i\varphi(\langle p,\rho\rangle)\,\partial_j\varphi(\langle p,\rho\rangle)\,\langle\rho,(\nabla p_i)^T\nabla p_j\rangle \\ &+ \frac1{2n}\sum_{i,j=1}^l \partial^2_{ij}\varphi(\langle p,\rho\rangle)\,\langle\rho,(\nabla p_i)^T\nabla p_j\rangle. \end{aligned} \tag{1.48} \]
Setting
\[ \frac{\delta f}{\delta\rho} = \sum_{i=1}^l \partial_i\varphi(\langle p,\rho\rangle)\,p_i, \]
for $\rho_n\in E_n$, $\rho\in E$ satisfying $d(\rho_n,\rho)\to 0$,
\[ \lim_{n\to\infty} H_n f(\rho_n) = Hf(\rho) = H\Big(\rho,\frac{\delta f}{\delta\rho}\Big), \]
where
\[ H(\rho,p) = \langle\rho,B(\rho)p\rangle + \frac12\int |\nabla p|^2\,d\rho = \langle B^*(\rho)\rho,p\rangle + \frac12\int |\nabla p|^2\,d\rho, \tag{1.49} \]
and for each $\gamma\in E$, $B^*(\gamma) : \rho\in\mathcal P(\mathbb R^d)\to\mathcal D'(\mathbb R^d)$ is defined by
\[ B^*(\gamma)\rho = \frac12\Delta\rho + \nabla\cdot(\rho\nabla(\Psi + \gamma*\Phi)). \]
Large deviation results for this example are discussed in Chapter 13 (Theorem 13.37). The key here is to establish a uniqueness result for weak solutions of
\[ (I - \alpha H)f = h, \qquad \alpha > 0, \tag{1.50} \]
for sufficiently many $h\in C_b(E)$. This is achieved in two steps in Example 9.35 and in Section 13.3.4. See Theorem 13.32.
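The principal eigenvalue $\gamma(\alpha)$ that produced the single-valued limit operator in Example 1.8 is directly computable when $E_0$ is finite. The sketch below uses an arbitrary irreducible three-state generator $B = \lambda(\mu - I)$ (an assumed example) and checks that $\gamma(0) = 0$ and that the Perron--Frobenius eigenfunction of $Q = \alpha I + B$ can be chosen strictly positive, as required in (1.21).

```python
import numpy as np

# Principal eigenvalue gamma(alpha) of Q = diag(alpha) + B from Example 1.8,
# for an assumed irreducible 3-state generator B = lambda(y) (mu - I).
lam = np.array([1.0, 2.0, 0.5])
mu = np.array([[0.0, 0.5, 0.5],
               [0.3, 0.0, 0.7],
               [0.6, 0.4, 0.0]])          # jump distributions mu(y, dz)
B = lam[:, None] * (mu - np.eye(3))       # rows of B sum to 0

def gamma(alpha):
    w, v = np.linalg.eig(np.diag(alpha) + B)
    i = np.argmax(w.real)                 # Perron root has maximal real part
    return w[i].real, v[:, i].real

g0, _ = gamma(np.zeros(3))                # alpha = 0: Q = B, so gamma = 0
g1, h1 = gamma(np.array([0.2, -0.1, 0.4]))
assert abs(g0) < 1e-9
# the Perron eigenfunction is one-signed; normalize it to be positive
assert np.all(h1 / h1[np.argmax(np.abs(h1))] > 0)
```

Since only the ratios $h(z)/h(y)$ enter (1.20), the normalization of the eigenvector is irrelevant, matching the remark after (1.22).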
1.5. An outline of the major obstacles

From the discussion in the previous sections, we see that one of our main assumptions should be the convergence of $\{H_n\}$ to a limit operator $H$ in the sense that for each $f\in\mathcal D(H)$, there exist $f_n\in\mathcal D(H_n)$ such that
\[ f_n\to f, \qquad H_n f_n\to Hf, \tag{1.51} \]
where the type of convergence may depend on the particular problem. To rigorously formulate a large deviation program using generator convergence and nonlinear semigroup theory requires some work, but it is straightforward (Chapter 5) provided
we assume that the limit operator H satisfies a range condition. Specifically, we need existence of solutions of (1.52)
(I − αH)f = h,
for all sufficiently small α > 0 and a large enough collection of functions h. Unfortunately, there are very few examples for which this condition can actually be verified in the classical sense, that is, with solutions f satisfying the differentiability requirements assumed in the examples above. We overcome this difficulty by using the weak (viscosity) solution theory introduced by Crandall and Lions [19]. The range condition is replaced by the requirement that a comparison principle (Definitions 6.4 and 7.3) holds for (1.52). Assuming that (1.53)
(I − αHn )fn = hn ,
for a sequence $h_n$ converging to $h$, we adapt a technique of Barles and Perthame [7, 8] to prove that the comparison principle implies the convergence of $f_n$. This technique does not require a priori estimates on the regularity of $f$. There is a well-developed, powerful theory for verifying the comparison principle, at least when the state space is a subset of $\mathbb R^d$ or a compact metric space (Chapter 6). Indeed, the approach has been applied in ad hoc ways to various large deviation examples (e.g. [48], [40]) for quite some time. When the state space is not compact, however, major obstacles arise. First, we need a good definition of viscosity solution in this context. The definition should be weak enough to allow extension of the Barles--Perthame arguments to noncompact metric spaces, yet strong enough so that the comparison principle can be proved for interesting examples. Our choice here is a definition (Definition 7.1) that preserves a nonlinear version of the maximum principle for $H$. This definition is different from existing definitions [21], [116], [20] in the literature on partial differential equations, where structural information about the equation is usually built into the definition. Second, the Barles and Perthame limiting procedure may not always work when $E$ is not compact. This procedure is compatible with a variant of uniform convergence over compacts, but without other information, the usual estimates needed to derive convergence of solutions of (1.53) require the convergence in (1.51) to be uniform over the whole space. In general, uniformity will not hold for a sufficiently large collection of $f$. The uniformity requirement can be relaxed if we can verify the exponential compact containment condition (Condition 2.8). In applications, this condition can frequently be verified by a stochastic Lyapunov function technique (Section 4.6). The general versions of our large deviation theorems are given in Chapter 7. A short version is summarized in Chapter 2.
Further generalizations of these results are also given. For instance, we discuss situations where certain functionals of the Markov processes satisfy a large deviation principle, while the full processes may not. We also discuss the use of a general notion of convergence of test functions and operators to handle processes with multiple scales. In most of the examples, the most difficult technical argument comes in verifying the comparison principle (Definitions 6.4 and 7.3). Verification is an analytic issue and often gives the impression of being rather involved and disconnected from the probabilistic large deviation problems. The disconnect, however, is less complete than it appears. Specific probabilistic structures can give insight into the solution of the analytic
problem. For instance, the large deviation theory for the interacting particle system in Example 1.14 and the stochastic equations in Examples 1.12 and 1.13 leads to Hamilton--Jacobi operators, (1.49), (1.33), (1.42), that describe the evolution of optimally controlled partial differential equations. The corresponding comparison theories in the literature ([21], [116], [20]) are technical and limited. Moreover, these theories do not apply to Example 1.14. We solve the problem by devising simple, new comparison techniques exploiting the special structure in these problems. As a result, we not only arrive at large deviation principles, but also obtain simpler proofs of analytic viscosity solution results.
CHAPTER 2
An overview

The purpose of this chapter is to provide a rough "road map" for reading the general theory that follows. We collect versions of the main results regarding large deviations for Markov processes and describe some motivation for our approach. Proofs of these results will be given in later chapters. The results collected here may not be the sharpest or the fullest versions in the paper. Indeed, for most of the results, further generalizations are given later. For example, the majority of the material in Chapters 3 and 4 applies to general processes which may be non-Markovian; Chapter 5 contains a pure semigroup formulation (as opposed to the viscosity solution approach given here) of the large deviation results; Chapters 6 and 7 have a number of new results regarding viscosity solutions and their convergence, many of which may have application in areas besides large deviations.

A typical application requires the following steps:

(1) Verify convergence of the sequence of operators $H_n$ and derive the limit operator $H$. Many examples of this convergence are given in the Introduction. In general, convergence will be in the extended limit or graph sense. (See Definition A.12.) In some examples, the limit is described in terms of a pair of operators, $(H_\dagger,H_\ddagger)$, where, roughly, $H_\dagger$ is the lim sup of $\{H_n\}$ and $H_\ddagger$ is the lim inf.

(2) Verify exponential tightness. The convergence of $H_n$ (in fact, boundedness of $\{H_n f_n\}$ for an appropriate collection of sequences $\{f_n\}$) typically gives exponential tightness, provided one can verify the exponential compact containment condition (4.10). Lyapunov function arguments can be used to verify this condition (Section 4.6). Alternatively, one can avoid verifying the compact containment condition by compactifying the state space and verifying the large deviation principle in the compactified space.
If the rate function is infinite for every path that hits a point added in the compactification, then the large deviation principle holds for the original space (Theorem 4.11).

(3) Verify the comparison principle for the limiting operator $H$ (or the pair $(H_\dagger,H_\ddagger)$). The comparison principle asserts a strong form of uniqueness for the equation $(I - \alpha H)f = h$. If the comparison principle holds for a sufficiently large class of functions $h$, then one can conclude that the nonlinear semigroups $\{V_n\}$ converge. Convergence of the semigroups implies the large deviation principle for the finite dimensional distributions of the sequence of processes, which, by exponential tightness, then gives the large deviation principle in $D_E[0,\infty)$. The rate function is characterized by the limiting semigroup. Chapter 9 discusses concrete techniques for verifying the comparison principle in a number of situations. These techniques are general, yet practical enough to cover every example in the Introduction.
(4) Construct a variational representation for H. Typically we can identify the limiting semigroup as the Nisio semigroup for a control problem. The control problem then gives an alternative and more explicit representation of the rate function (Chapter 8).
In Chapters 10 to 13, we apply these methods to the examples in the Introduction.

2.1. Basic setup

Let (En, rn), n = 1, 2, . . ., and (E, r) be complete, separable metric spaces.
a) (Continuous-time Markov processes) Let An ⊂ B(En) × B(En), and assume the following: for each n = 1, 2, . . ., existence and uniqueness hold for the D_{En}[0, ∞)-martingale problem for (An, µ) for each initial distribution µ ∈ P(En). Letting P^n_y ∈ P(D_{En}[0, ∞)) denote the distribution of the solution of the martingale problem for An starting from y ∈ En, the mapping y → P^n_y is Borel measurable, taking the weak topology on P(D_{En}[0, ∞)). For each n, Yn is a solution of the martingale problem for An.
b) (Discrete-time processes) Let {Ỹn(k), k = 0, 1, . . .} be a time-homogeneous Markov chain with state space En and transition operator Tn on B(En):

$$T_n f(y) = E[f(\tilde Y_n(k+1)) \mid \tilde Y_n(k) = y], \qquad f \in B(E_n).$$

Let εn > 0, and define Yn(t) ≡ Ỹn([t/εn]). Then

$$E[f(Y_n(t)) \mid Y_n(0) = y] = T_n^{[t/\varepsilon_n]} f(y), \qquad f \in B(E_n).$$
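As a toy numerical instance of the discrete-time setup (an illustration added here, not from the text; the choices En = n⁻¹Z, ηn the inclusion map, εn = 1/n, and the drift parameter b are assumptions made for the demonstration):

```python
import random

def simulate_Yn(n, T=1.0, b=0.5, seed=0):
    """Toy instance of the discrete-time setup: E_n = n^{-1} Z embedded in
    E = R by the inclusion eta_n, epsilon_n = 1/n, and Ytilde_n a random
    walk with steps +-1/n taken with probabilities (1+b)/2 and (1-b)/2.
    Returns X_n(T) = eta_n(Ytilde_n([T / epsilon_n]))."""
    rng = random.Random(seed)
    steps = int(T * n)          # [T / epsilon_n] with epsilon_n = 1/n
    y = 0.0
    p_up = (1.0 + b) / 2.0
    for _ in range(steps):
        y += 1.0 / n if rng.random() < p_up else -1.0 / n
    return y                    # eta_n is the inclusion E_n -> R

# By the law of large numbers X_n(1) concentrates near b; the large
# deviation principle quantifies how unlikely excursions away from b are.
print(simulate_Yn(10_000))
```
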
In either case, we suppose ηn : En → E is Borel measurable and assume Xn ≡ ηn(Yn) ∈ DE[0, ∞). We are interested in establishing the large deviation principle for Xn. For f ∈ M(E), we set ηn f = f ◦ ηn.
Remark 2.1. In many applications, it suffices to take En = E and ηn(y) = y. More generally, however, En may be a discrete set that is asymptotically dense in E, or a higher-dimensional space in which the large deviation principle is only verified for certain components.

2.2. Compact state spaces

We first assume that E is compact, which simplifies a number of technical issues. This assumption is not as restrictive as it may first appear. Many results in R^d can be obtained by first verifying the large deviation principle in the one-point compactification E = R^d ∪ {∞}. The notion of a viscosity solution for a nonlinear equation is central to the results that follow.
Definition 2.2 (Viscosity Solution). Let E be a compact metric space, and H ⊂ C(E) × B(E). Fix h ∈ C(E) and α > 0. Let f ∈ B(E) and define g = α⁻¹(f − h), that is, f − αg = h. Then
a) f is a viscosity subsolution of (I − αH)f = h if and only if f is upper semicontinuous and for each (f0, g0) ∈ H such that sup_x (f − f0)(x) = ‖f − f0‖, there exists x0 ∈ E satisfying

(2.1) $$(f - f_0)(x_0) = \|f - f_0\|$$

and

(2.2) $$\alpha^{-1}(f(x_0) - h(x_0)) = g(x_0) \le (g_0)^*(x_0).$$

b) f is a viscosity supersolution of (I − αH)f = h if and only if f is lower semicontinuous and for each (f0, g0) ∈ H such that sup_x (f0 − f)(x) = ‖f0 − f‖, there exists x0 ∈ E satisfying

(2.3) $$(f_0 - f)(x_0) = \|f_0 - f\|$$

and

(2.4) $$\alpha^{-1}(f(x_0) - h(x_0)) = g(x_0) \ge (g_0)_*(x_0).$$
The comparison principle holds for h if, for each subsolution $\underline f$ and each supersolution $\overline f$, $\underline f \le \overline f$.

Theorem 2.3. Suppose E is compact.
Step 1: (Convergence of Hn) In the continuous-time case, let

$$H_n f \equiv \frac1n e^{-nf} A_n e^{nf}, \qquad e^{nf} \in \mathcal D(A_n).$$

In the discrete-time case, let

$$H_n f \equiv \frac1{n\varepsilon_n} \log e^{-nf} T_n e^{nf} = \frac1{n\varepsilon_n} \log\big(1 + e^{-nf}(T_n - I)e^{nf}\big).$$

Suppose H ⊂ C(E) × B(E) with D(H) dense in C(E), and H ⊂ ex-lim_{n→∞} Hn in the sense of Definition A.12; that is, for each (f, g) ∈ H, there exists (fn, gn) ∈ Hn such that

$$\lim_{n\to\infty} \big(\|\eta_n f - f_n\| + \|\eta_n g - g_n\|\big) = 0.$$

Step 2: (Exponential tightness) Under the convergence assumptions on {Hn}, exponential tightness of {Xn} holds.
Step 3: (The comparison principle) Let α0 > 0, and assume that for each 0 < α < α0, there exists a subset Dα ⊂ C(E) such that Dα is dense in C(E) and that, for each h ∈ Dα, the comparison principle (Definition 6.4) holds for viscosity sub- and supersolutions of

(2.5) $$(I - \alpha H)f = h.$$

Then a viscosity solution Rα h exists for each h ∈ Dα, and Rα extends continuously to all of C(E). In addition, suppose that {Xn(0)} satisfies a large deviation principle with a good rate function I0. Then
a) {Xn} satisfies the large deviation principle with a good rate function I.
b) The limit

$$V(t)h \equiv \lim_{m\to\infty} R_{t/m}^m h$$

exists and defines a strongly continuous operator semigroup on C(E), and

(2.6) $$I(x) = \sup_{\{t_1,\dots,t_k\}\subset \Delta_x^c}\ \sup_{f_1,\dots,f_k\in C_b(E)} \Big[ I_0(x(0)) + \sum_{i=1}^k \big( f_i(x(t_i)) - V(t_i - t_{i-1})f_i(x(t_{i-1})) \big) \Big]$$

for x ∈ DE[0, ∞), where Δx denotes the set of discontinuities of x. This theorem is proved in Chapter 6.
Consider the classical Freidlin and Wentzell diffusion problem (Example 1.4). Then

$$A_n g(x) = \frac1{2n}\sum_{ij} a_{ij}(x)\,\partial_i\partial_j g(x) + \sum_i b_i(x)\,\partial_i g(x),$$

and

$$H_n f(x) = \frac1{2n}\sum_{ij} a_{ij}(x)\,\partial_i\partial_j f(x) + \frac12\sum_{ij} a_{ij}(x)\,\partial_i f(x)\,\partial_j f(x) + \sum_i b_i(x)\,\partial_i f(x).$$

Assuming a and b are bounded on bounded subsets, the convergence Hf = lim_{n→∞} Hn f for

$$Hf(x) = \frac12 (\nabla f(x))^T a(x)\,\nabla f(x) + b(x)\cdot\nabla f(x)$$

is immediate for all f ∈ Dd = {f ∈ C(E) : f|_{R^d} − f(∞) ∈ C_c²(R^d)}. (Taking E = R^d ∪ {∞}, Hn f(∞) = Hf(∞) = 0.) Exponential tightness then holds in DE[0, ∞). Assuming the large deviation principle holds for {Xn(0)}, the conditions of Theorem 2.3 are then satisfied for this example, provided we can verify the comparison principle. Note that H can be written Hf(x) = H(x, ∇f(x)). Chapter 9 gives a detailed discussion of conditions under which the comparison principle holds for H of this form. In particular, if a and b are bounded and continuous and a(x) is positive definite for all x, then Lemmas 9.15 and 9.16 give the desired result.
The conclusion of Theorem 2.3 is less than satisfactory, since {V(t)} is only determined implicitly and the rate function is expressed in terms of {V(t)}. For our example, however, we can write

$$Hf(x) = \sup_u \big( u\cdot\sigma^T(x)\nabla f(x) + b(x)\cdot\nabla f(x) - |u|^2/2 \big),$$

where σσᵀ = a. This representation suggests that V is the Nisio semigroup for a control problem. In particular, we should have

$$V(t)f(x_0) = \sup\Big( f(x(t)) - \frac12\int_0^t |u(s)|^2\,ds \Big),$$

where the supremum is over x and u satisfying

(2.7) $$x(t) = x(0) + \int_0^t \sigma(x(s))u(s)\,ds + \int_0^t b(x(s))\,ds, \qquad x(0) = x_0.$$
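The form of Hn f in the diffusion example comes from a direct computation, recorded here as a reading aid (a routine verification, not displayed in the original text): since ∂i e^{nf} = n e^{nf} ∂i f and ∂i∂j e^{nf} = e^{nf}(n ∂i∂j f + n² ∂i f ∂j f),

```latex
\frac{1}{n}e^{-nf}A_n e^{nf}
  = \frac{1}{2n}\sum_{ij}a_{ij}\,\partial_i\partial_j f
    + \frac12\sum_{ij}a_{ij}\,\partial_i f\,\partial_j f
    + \sum_i b_i\,\partial_i f ,
```

and letting n → ∞ kills the second-order term, leaving Hf(x) = ½(∇f(x))ᵀ a(x)∇f(x) + b(x)·∇f(x).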
It then follows, at least under boundedness and continuity assumptions on σ and b, that

$$I(x) = \inf\Big\{ I_0(x(0)) + \frac12\int_0^\infty |u(s)|^2\,ds \Big\},$$

where the infimum is over all u such that (x, u) satisfies (2.7). More generally, we look for a linear operator A ⊂ C(E) × C(E × U), where U is another complete, separable metric space, and a lower semicontinuous L : E × U → [0, ∞) such that

$$Hf(x) = \sup_{u\in U}\big( Af(x,u) - L(x,u) \big).$$

The corresponding control problem may use relaxed controls, that is, measures λ on U × [0, ∞) with λ(U × [0, t]) = t. The control problem requires

(2.8) $$f(x(t)) = f(x(0)) + \int_{U\times[0,t]} Af(x(s), u)\,\lambda(du\times ds)$$

for all f ∈ D(A), and V is given by

$$V(t)f(x_0) = \sup_{(x,\lambda)}\Big( f(x(t)) - \int_{U\times[0,t]} L(x(s), u)\,\lambda(du\times ds) \Big),$$

where the supremum is over (x, λ) satisfying (2.8) with x(0) = x0. Some additional constraint on λ is also possible. The rate function then satisfies

$$I(x) = \inf\Big\{ I_0(x(0)) + \int_{U\times[0,\infty)} L(x(s), u)\,\lambda(du\times ds) \Big\}.$$
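For the diffusion example, the variational form of H can be checked by completing the square (a one-line verification added here for convenience): for fixed x,

```latex
u\cdot\sigma^T(x)\nabla f(x) + b(x)\cdot\nabla f(x) - \tfrac12|u|^2
  = -\tfrac12\big|u-\sigma^T(x)\nabla f(x)\big|^2
    + \tfrac12(\nabla f(x))^T a(x)\,\nabla f(x) + b(x)\cdot\nabla f(x),
```

so the supremum over u is attained at u = σᵀ(x)∇f(x) and equals Hf(x). In the notation of the general framework, this corresponds to Af(x, u) = (σ(x)u + b(x))·∇f(x) and L(x, u) = |u|²/2.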
See Theorem 8.14.

2.3. General state spaces

We now allow E to be an arbitrary complete, separable metric space. In this more general setting, it is often difficult to verify uniform convergence of Hn fn, and, consequently, weaker notions of convergence are useful. One such notion is convergence of bounded sequences of functions, uniformly on compact sets, which we refer to as buc-convergence (see Definition A.6) and denote buc-lim. With reference to Examples 1.8–1.11, we also see that the natural limit of Hn may have a range in functions defined on a larger space than E (call it E′). Consequently, we have mappings ηn : En → E, η′n : En → E′, and γ : E′ → E which are consistent in the sense that ηn = γ ◦ η′n. We also work with a notion of convergence which is essentially buc-convergence for our sequence of spaces {En}. Recall the definitions of set convergence.

Definition 2.4. Define

$$\liminf_{n\to\infty} G_n = \{x \in E : \exists\, x_n \in G_n \text{ such that } \lim_{n\to\infty} x_n = x\}$$

and

$$\limsup_{n\to\infty} G_n = \{x \in E : \exists\, n_1 < n_2 < \cdots \text{ and } x_{n_k} \in G_{n_k} \text{ such that } \lim_{k\to\infty} x_{n_k} = x\}.$$

If G ≡ lim sup_n Gn = lim inf_n Gn, then we say Gn converges to G and write G = lim_{n→∞} Gn. The following notion of convergence will be central to our main results.
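A small numerical illustration of Definition 2.4 (added here, not from the text; the sequence Gn = {(−1)ⁿ + 1/n} is an assumed example). For this sequence the full-sequence limit inferior is empty, while the limit superior is {−1, 1}:

```python
def kuratowski(G, candidates, N=10_000, tol=1e-2):
    """Numerically classify candidate points against the set sequence G(n).
    A point belongs to liminf if dist(x, G(n)) -> 0 along the FULL sequence,
    and to limsup if it does along SOME subsequence.  G(n) returns a list."""
    liminf, limsup = set(), set()
    for x in candidates:
        dists = [min(abs(x - g) for g in G(n)) for n in range(1, N + 1)]
        tail = dists[N // 2:]          # behaviour for large n
        if max(tail) < tol:            # distances small along the whole tail
            liminf.add(x)
        if min(tail) < tol:            # distances small along a subsequence
            limsup.add(x)
    return liminf, limsup

# G(n) = {(-1)^n + 1/n} oscillates between neighborhoods of -1 and 1.
lo, hi = kuratowski(lambda n: [(-1) ** n + 1.0 / n], [-1.0, 0.0, 1.0])
print(lo, hi)   # liminf is empty; limsup contains -1 and 1
```
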
Definition 2.5. Let Q be an index set. For n = 1, 2, . . . and q ∈ Q, let K^q_n ⊂ En satisfy the following:
(1) For q1, q2 ∈ Q, there exists q3 ∈ Q such that K^{q1}_n ∪ K^{q2}_n ⊂ K^{q3}_n.
(2) For each x ∈ E, there exists q ∈ Q such that x ∈ lim inf_{n→∞} ηn(K^q_n).
(3) For each q ∈ Q, there exist a compact K̂^q ⊂ E such that

$$\lim_{n\to\infty} \sup_{y\in K^q_n}\ \inf_{x\in \widehat K^q} r(\eta_n(y), x) = 0,$$

and a compact K̂′^q ⊂ E′ such that

(2.9) $$\lim_{n\to\infty} \sup_{y\in K^q_n}\ \inf_{x\in \widehat K'^q} r'(\eta'_n(y), x) = 0.$$

(4) For each compact K ⊂ E, there exists q ∈ Q such that K ⊂ K^q ≡ lim inf_{n→∞} ηn(K^q_n).
For fn ∈ B(En) and f ∈ B(E), define f = LIM fn if and only if sup_n ‖fn‖ < ∞ and, for each q ∈ Q,

$$\lim_{n\to\infty} \sup_{y\in K^q_n} |f_n(y) - \eta_n f(y)| = 0,$$

where ηn f = f ◦ ηn.
If the requirements in the above definition hold, the compact sets K̂^q and K̂′^q can always be chosen to be K̂^q = lim sup_{n→∞} ηn(K^q_n) and K̂′^q = lim sup_{n→∞} η′n(K^q_n). Note that the definition of LIM given here is a special case of the abstract definition given in Section A.6. (See Condition A.13.) We give two useful examples of LIM.

Example 2.6. If En = E, Q is the class of compact sets in E, and K^K_n = K for every n = 1, 2, . . . and K ∈ Q, then LIM is just buc-convergence.

Example 2.7. With reference to Examples 1.12 and 1.13, let ηn be given by (1.32). For m = m(n), let E = E′ = L²(O), and let Q be the class of compact sets in E. For each ρ ∈ E, define its projection into En = R^{|Λm|} by

(2.10) $$(\pi_n\rho)(x) = m^d \int_{y_1=i_1/m}^{(i_1+1)/m}\!\!\cdots\int_{y_d=i_d/m}^{(i_d+1)/m} \rho(y_1,\dots,y_d)\,dy_d\cdots dy_1, \qquad x = \Big(\frac{i_1}{m},\dots,\frac{i_d}{m}\Big) \in \Lambda_m.$$

Then ηn(πn(ρ)) ∈ E, and assuming m(n) → ∞, for every compact set K ⊂ L²(O),

(2.11) $$\lim_{n\to\infty} \sup_{\rho\in K} \|\eta_n(\pi_n(\rho)) - \rho\|_{L^2(O)} = 0.$$

If, for each K ∈ Q, we define K^K_n = πn(K) ≡ {πn(ρ) : ρ ∈ K ⊂ E} ⊂ En = R^{|Λ_{m(n)}|}, then all the requirements for {K^K_n : K ∈ Q} in Definition 2.5 are satisfied. In particular, for each K ∈ Q, by (2.11) and the above definition of K^K_n,

$$\lim_{n\to\infty} \sup_{\gamma_n\in K^K_n}\ \inf_{\gamma\in K} \|\eta_n(\gamma_n) - \gamma\|_{L^2(O)} \le \lim_{n\to\infty} \sup_{\rho\in K} \|\eta_n(\pi_n(\rho)) - \rho\|_{L^2(O)} = 0.$$

Therefore, we can choose K̂^K = K to satisfy the third requirement in Definition 2.5. Finally, since

$$\sup_{\gamma\in K^K_n} |f_n(\gamma) - \eta_n f(\gamma)| = \sup_{\rho\in K} |f_n(\pi_n(\rho)) - f(\eta_n(\pi_n(\rho)))|,$$

if f ∈ C(E), f = LIM fn is implied by the following two conditions:
(1) sup_n ‖fn‖ < ∞,
(2) for γn ∈ R^{|Λ_{m(n)}|} satisfying ηn(γn) → ρ in ‖·‖_{L²(O)}, lim_{n→∞} fn(γn) = f(ρ).

Dropping compactness and generalizing the notion of convergence requires extra work, and part of the extra work comes in verifying the following condition.

Condition 2.8. For each q ∈ Q, T > 0, and a > 0, there exists q̂(q, a, T) ∈ Q such that

(2.12) $$\limsup_{n\to\infty} \sup_{y\in K^q_n} \frac1n \log P\{Y_n(t) \notin K^{\hat q(q,a,T)}_n \text{ for some } 0 \le t \le T \mid Y_n(0) = y\} \le -a.$$

The Lyapunov function techniques discussed in Section 4.6 are frequently useful for verifying this condition.

Example 2.9. We illustrate the Lyapunov function approach for Example 1.12. Definitions and properties of the projection πm : E → R^{|Λm|} ≡ En and of LIM convergence are discussed in Example 2.7. In this context, Condition 2.8 becomes: for each compact K0 ⊂ L²(O), T > 0, and a > 0, there exists another compact K1 ⊂ L²(O) such that

(2.13) $$\limsup_{n\to\infty} \sup_{\rho_0\in\pi_m(K_0)} \frac1n \log P(\exists\, t,\ 0 \le t \le T, \text{ such that } Y_n(t) \notin \pi_m(K_1) \mid Y_n(0) = \rho_0) \le -a.$$

Example 1.12 is motivated by a physical context. We can associate with the model (1.30) a free energy functional of the form

(2.14) $$E_m(\rho) \equiv \frac12 \sum_{x\in\Lambda_m} |\nabla_m \rho(x)|^2\, m^{-d} + \sum_{x\in\Lambda_m} F(\rho(x))\, m^{-d}, \qquad \rho\in E_n \equiv \mathbb R^{|\Lambda_m|},$$

where ∇m ρ(x) is the vector with components

$$\nabla_m^k \rho(x) = \frac m2 \big(\rho(x + m^{-1}e_k) - \rho(x - m^{-1}e_k)\big).$$

Em is the desired Lyapunov function. Define

(2.15) $$H_n E_m(\rho) \equiv \frac1n e^{-nE_m} A_n e^{nE_m}(\rho) = \langle \Delta_m\rho - F'(\rho),\ -\Delta_m\rho + F'(\rho)\rangle_m + \frac12\|\Delta_m\rho - F'(\rho)\|^2_{L^2(\Lambda_m)} + \frac{m^d}{2n}\sum_{x\in\Lambda_m}\big( d\,m^{2-d} + F''(\rho(x))\,m^{-d}\big)$$
$$= -\frac12\|\Delta_m\rho - F'(\rho)\|^2_{L^2(\Lambda_m)} + \frac1{2n}\,d\,m^{2+d} + \frac{m^d}{2n}\sum_{x\in\Lambda_m} F''(\rho(x))\,m^{-d}.$$

Then, by Itô's formula,

$$\exp\Big\{ nE_m(Y_n(t)) - nE_m(Y_n(0)) - \int_0^t nH_nE_m(Y_n(s))\,ds \Big\}$$

is a positive local martingale (hence a supermartingale), and we can apply Lemma 4.20 to obtain (2.13).
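The way the supermartingale is used can be sketched as follows (a standard maximal-inequality argument, spelled out here for the reader; the constants are those appearing in the bound below). Writing Mn(t) for the exponential above, E[Mn(0)] = 1, so for Em(Yn(0)) ≤ C0,

```latex
P\big(\exists\, t\in[0,T]:\ E_m(Y_n(t))\ge C \mid Y_n(0)\big)
 \;\le\; P\Big(\sup_{t\le T} M_n(t) \ge e^{\,n(C-C_0) - nT\sup_{\rho}H_nE_m(\rho)}\Big)
 \;\le\; e^{-n(C-C_0) + nT\sup_{\rho}H_nE_m(\rho)},
```

by the maximal inequality for nonnegative supermartingales.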
If m^{2+d} = O(n), then

(2.16) $$\sup_n\ \sup_{\rho\in\mathbb R^{|\Lambda_m|}} H_nE_m(\rho) < \infty.$$

Let C > C0 > 0 and ρ0 ∈ R^{|Λm|} be such that Em(ρ0) ≤ C0 < ∞. By the finiteness of Λm, {ρ ∈ R^{|Λm|} : Em(ρ) < C} is open in En. By Lemma 4.20,

$$P(\exists\, t\in[0,T],\ E_m(Y_n(t)) \ge C \mid Y_n(0) = \rho_0) \le e^{-n(C-C_0) + nT \sup_n\sup_{\rho} H_nE_m(\rho)}.$$

If

(2.17) $$F(r) \ge c_1 + c_2 r^2 \quad\text{for some } c_1\in\mathbb R,\ c_2 > 0,$$

then

$$K_C \equiv \mathrm{cl}\big( \cup_n \{\eta_n(\rho) : \rho\in E_n,\ E_m(\rho) \le C < \infty\} \big)$$

is a compact subset of E = L²(O). Therefore, for every a, T, C0 > 0, by selecting C large enough, we can find a compact set K_{a,T,C0} ⊂ L²(O) such that

(2.18) $$\sup_{\{\rho\in\mathbb R^{|\Lambda_m|} : E_m(\rho)\le C_0\}} P(\exists\, t,\ 0\le t\le T,\ \eta_n(Y_n(t)) \notin K_{a,T,C_0} \mid Y_n(0) = \rho) \le e^{-na}.$$

Next, we extend (2.18) to (2.13) by relaxing the requirements on the initial data. For this purpose, we need to make use of a stability result regarding equation (1.30), which is proved in Lemma 13.4: for T > 0, there exists CT > 0 such that if Yn and Zn are solutions of (1.30),

(2.19) $$\sup_{0\le t\le T} \|Y_n(t) - Z_n(t)\|_{L^2(\Lambda_m)} \le C_T \|Y_n(0) - Z_n(0)\|_{L^2(\Lambda_m)} \quad\text{a.s.}$$

Noting that the collection of ρ ∈ L²(O) such that

(2.20) $$\sup_n E_m(\pi_m(\rho)) < \infty$$

is dense in L²(O), for compact K0 ⊂ L²(O) and ε > 0, there exist ρ0^(1), . . . , ρ0^(N) such that K0 ⊂ ∪_{k=1}^N B(ρ0^(k), C_T^{-1}ε) and

(2.21) $$C_0 \equiv \max_{1\le k\le N}\ \sup_m E_m(\pi_m(\rho_0^{(k)})) < \infty.$$

Therefore, by (2.18) and (2.19), there exists a compact K_{a,T,C0} such that

$$\sup_{\rho\in\pi_m(K_0)} P(\exists\, t,\ 0\le t\le T,\ \eta_n(Y_n(t)) \notin K^\varepsilon_{a,T,C_0} \mid Y_n(0) = \rho) \le \sup_{\{\rho\in\mathbb R^{|\Lambda_m|}: E_m(\rho)\le C_0\}} P(\exists\, t,\ 0\le t\le T,\ \eta_n(Y_n(t)) \notin K_{a,T,C_0} \mid Y_n(0) = \rho) \le e^{-na},$$

where

$$K^\varepsilon_{a,T,C_0} = \Big\{\rho\in E : \inf_{\gamma\in K_{a,T,C_0}} \|\rho - \gamma\|_{L^2(O)} < \varepsilon\Big\}.$$

For a sequence εl → 0, let

$$K_1 = \mathrm{cl}\, \bigcap_l K^{\varepsilon_l}_{a+l,T,C_0^l}.$$

Then K1 is compact, and since ρ = πm(ηn(ρ)) for every ρ ∈ En, {ηn(Yn(t)) ∈ K1} ⊂ {Yn(t) ∈ πm(K1)}, and (2.13) holds.
For most ϕ, (1.33) will be unbounded and the convergence of Hn fn will not be uniform over En. Examples of this type lead us to introduce a pair of limiting
operators: H†, corresponding (roughly) to the limit when {Hn fn} is bounded above, and H‡, corresponding to {Hn fn} being bounded below. (See the Convergence Condition 7.11.)
We have the following counterpart of Theorem 2.3. Note that since we are no longer assured that the extrema in Definition 2.2 are achieved, more general definitions of viscosity solution and the comparison principle are also required. (See Definitions 7.1 and 7.3.)

Theorem 2.10. Let (En, rn), (E′, r′), (E, r), ηn, η′n, and γ be as above, and let LIM be given by Definition 2.5.
Step 1: (Convergence of Hn) In the continuous-time case, let

$$H_n f \equiv \frac1n e^{-nf} A_n e^{nf}, \qquad e^{nf}\in\mathcal D(A_n).$$

In the discrete-time case, let

$$H_n f \equiv \frac1{n\varepsilon_n}\log e^{-nf}T_n e^{nf} = \frac1{n\varepsilon_n}\log\big(1 + e^{-nf}(T_n - I)e^{nf}\big).$$

Let H† ⊂ C^l(E, R) × M^u(E′, R) and H‡ ⊂ C^u(E, R) × M^l(E′, R), and assume that {Hn} satisfies the Convergence Condition 7.11.
Step 2: (Exponential tightness) Suppose D(H†) ∩ D(H‡) ∩ Cb(E) is buc-dense in Cb(E) and Condition 2.8 is satisfied. Then {Xn} is exponentially tight.
Step 3: (The comparison principle) Let α0 > 0, and assume that for each 0 < α < α0, there exists a subset Dα ⊂ Cb(E) such that Cb(E) is the buc-closure (Definition A.6) of Dα and that, for each h ∈ Dα, the comparison principle holds for viscosity subsolutions of (I − αH†)f = h and supersolutions of (I − αH‡)f = h.
Then a viscosity solution Rα h exists for each h ∈ Dα, and Rα extends continuously (in terms of buc-convergence) to all of Cb(E). In addition, we suppose that {Xn(0)} satisfies a large deviation principle with a good rate function I0. Then
a) {Xn} satisfies the large deviation principle with a good rate function I.
b) The limit

$$V(t)h \equiv \lim_{m\to\infty} R^m_{t/m} h$$

exists and defines an operator semigroup on Cb(E), and the rate function is given by (2.6).

The above result is a special case of Theorem 7.18. The assumption that lim sup_{n→∞} η′n(K^q_n) is compact in E′ can be replaced by weaker conditions. See Theorem 7.24. Theorem A.8 gives conditions under which Cb(E) is the buc-closure of a subset D. Finally, the control representation method for simplifying the rate function (Theorem 8.14 and Corollary 8.28) continues to apply for noncompact E.
Part 1
The general theory of large deviations
CHAPTER 3
Large deviations and exponential tightness

We discuss basic properties of the large deviation principle for metric space-valued random variables. Section 3.1 collects well-known definitions and general properties; Section 3.2 develops rate function identification using a class of functions that isolates points; Section 3.3 derives a form of the rate function in product space that will be very useful later in the stochastic processes setting.

3.1. Basic definitions and results

Let (S, d) be a metric space. For n = 1, 2, . . ., let Xn be an S-valued random variable and let Pn denote its distribution on the Borel subsets of S, that is, Pn(A) = P{Xn ∈ A}. Recall that {Xn} satisfies the large deviation principle (Definition 1.1) with rate function I if for each open set A,

(3.1) $$-\inf_{x\in A} I(x) \le \liminf_{n\to\infty} \frac1n \log P\{X_n\in A\},$$

and for each closed set B,

(3.2) $$\limsup_{n\to\infty} \frac1n \log P\{X_n\in B\} \le -\inf_{x\in B} I(x).$$

Remark 3.1. a) If I satisfies (3.1) and (3.2), then the lower semicontinuous regularization of I,

$$I_*(x) = \lim_{\varepsilon\to 0}\ \inf_{y\in B_\varepsilon(x)} I(y),$$

also satisfies these inequalities. We will require I to be lower semicontinuous, so I = I∗.
b) Requiring I to be lower semicontinuous, the large deviation principle with rate function I implies

(3.3) $$-I(x) = \lim_{\varepsilon\to 0}\liminf_{n\to\infty} \frac1n \log P\{X_n\in B_\varepsilon(x)\} = \lim_{\varepsilon\to 0}\limsup_{n\to\infty} \frac1n \log P\{X_n\in B_\varepsilon(x)\},$$

and it follows that I is uniquely determined.
c) If (3.3) holds for all x ∈ S, then (3.1) holds for all open sets A and (3.2) holds for all compact sets B. This statement (that is, the large deviation principle with "closed" replaced by "compact") is called the weak large deviation principle. (O'Brien and Vervaat [91] say vague rather than weak.)
d) The large deviation principle is equivalent to the assertion that for each A ∈ B(S),

$$-\inf_{x\in A^\circ} I(x) \le \liminf_{n\to\infty}\frac1n\log P\{X_n\in A\} \le \limsup_{n\to\infty}\frac1n\log P\{X_n\in A\} \le -\inf_{x\in \bar A} I(x),$$

where A° is the interior of A and Ā is the closure.
e) Lower semicontinuity of I is equivalent to {x : I(x) ≤ c} being closed for each c ∈ R. If {x : I(x) ≤ c} is compact for each c ∈ R, we say that I is good.

In weak convergence theory, a sequence {Pn} is tight if for each ε > 0 there exists a compact subset K ⊂ S such that inf_n Pn(K) ≥ 1 − ε. The analogous concept in large deviation theory is exponential tightness.

Definition 3.2. (Exponential Tightness) A sequence of probability measures {Pn} on S is said to be exponentially tight if for each a > 0, there exists a compact set Ka ⊂ S such that

$$\limsup_{n\to\infty} \frac1n \log P_n(K_a^c) \le -a.$$

A sequence {Xn} of S-valued random variables is exponentially tight if the corresponding sequence of distributions is exponentially tight.
de Acosta [25] (see the proof of Lemma 4.1) and O'Brien [89] (Theorem 3.3) have observed the following equivalence. For η > 0 and K ⊂ S, let K^η = {x : inf_{y∈K} d(x, y) < η}.

Lemma 3.3. For each n = 1, 2, . . ., let Pn be a tight probability measure on S. (Recall that if S is complete and separable, then every probability measure is tight.) Then {Pn} is exponentially tight if and only if for each a > 0 and η > 0 there exists a compact K_{a,η} ⊂ S such that

$$\limsup_{n\to\infty} \frac1n \log P_n((K^\eta_{a,\eta})^c) \le -a.$$

Proof. Since for each n, Pn is tight, without loss of generality we can assume

$$\sup_n \frac1n \log P_n((K^\eta_{a,\eta})^c) \le -a,$$

and hence P_n((K^η_{a,η})^c) ≤ e^{−na}. Let K̂a be the closure of ∩_k K^{k^{-1}}_{a+k,k^{-1}}. Then K̂a is compact (it is complete and totally bounded) and

$$P_n(\widehat K_a^c) \le \sum_{k=1}^\infty e^{-na-nk} \le \frac{e^{-na}}{e-1},$$

so

$$\limsup_{n\to\infty} \frac1n \log P_n(\widehat K_a^c) \le -a. \qquad\square$$
Example 3.4. We illustrate the use of Lemma 3.3 with the following lemma on sums of Banach space-valued random variables. (See de Acosta [25].) Let E be a Banach space, and let ξ1, ξ2, . . . be independent and identically distributed E-valued random variables satisfying

$$E[e^{\alpha\|\xi_k\|}] < \infty \quad\text{for all } \alpha > 0.$$

Define

$$X_n = \frac1n \sum_{k=1}^n \xi_k.$$
We claim that {Xn} is exponentially tight. To check the claim, let {xi} be dense in E, and for each m, let

$$K_m = \Big\{ \sum_{i=1}^m q_i x_i : q_i \ge 0,\ \sum_{i=1}^m q_i \le 1 \Big\}.$$

Clearly, Km is compact. For a, ε > 0, let α > a/ε and select m so that

(3.4) $$\alpha\varepsilon - \log E\big[ e^{\alpha\|\xi_k\|\, I_{\{\xi_k\notin \cup_{i=1}^m B_\varepsilon(x_i)\}}} \big] > a.$$

Define γk = ‖ξk‖ I_{{ξk ∉ ∪_{i=1}^m Bε(xi)}}. For i = 1, . . . , m, let Di = Bε(xi) ∩ (∩_{j=1}^{i-1} Bε(xj)^c). (Note that the Di are disjoint and ∪_{i=1}^m Di = ∪_{i=1}^m Bε(xi).) Finally, note that

$$\widehat X_n = \frac1n \sum_{i=1}^m \sum_{k=1}^n I_{\{\xi_k\in D_i\}}\, x_i \in K_m$$

and

(3.5) $$X_n - \widehat X_n = \frac1n \sum_{i=1}^m \sum_{k=1}^n I_{\{\xi_k\in D_i\}} (\xi_k - x_i) + \frac1n \sum_{k=1}^n I_{\{\xi_k\notin \cup_{i=1}^m B_\varepsilon(x_i)\}}\, \xi_k.$$

By (3.5),

$$\|X_n - \widehat X_n\| \le \varepsilon + \frac1n \sum_{k=1}^n \gamma_k,$$

and by (3.4),

$$P\Big\{ \frac1n \sum_{k=1}^n \gamma_k > \varepsilon \Big\} \le \frac{E[e^{\alpha\sum_{k=1}^n \gamma_k}]}{e^{\alpha n\varepsilon}} \le e^{-na}.$$

Consequently,

$$\limsup_{n\to\infty} \frac1n \log P\{X_n \notin K_m^{2\varepsilon}\} \le -a,$$

and since ε and a are arbitrary, Lemma 3.3 gives the exponential tightness of {Xn}.
One consequence of exponential tightness is that, for an exponentially tight sequence, the large deviation principle holds if and only if the weak large deviation principle holds. In addition, if {Pn} is exponentially tight and the large deviation lower bound (3.1) holds, then the rate function I is good. (See, for example, Dembo and Zeitouni [29], Lemma 1.2.18.) Lemma 3.5 gives the converse.

Lemma 3.5. For each n = 1, 2, . . ., let Pn be a tight probability measure on S. If {Pn} satisfies the large deviation principle with good rate function I, then {Pn} is exponentially tight.

Proof. Let a > 0 and Ka = {x : I(x) ≤ a}. Then for η > 0,

$$\limsup_{n\to\infty} \frac1n \log P_n((K_a^\eta)^c) \le -\inf_{x\in (K_a^\eta)^c} I(x) \le -a,$$

and exponential tightness follows by Lemma 3.3. □
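For a concrete feel for the rates in Example 3.4 and Lemma 3.5 (a numerical illustration added here, not part of the text): for sample means of i.i.d. standard Gaussians, the good rate function is I(x) = x²/2, and the Gaussian tail is available in closed form, so the convergence of −(1/n) log P{Xn ≥ a} to I(a) can be checked directly:

```python
import math

def ldp_rate_estimate(n, a=1.0):
    """X_n, the mean of n i.i.d. N(0,1) variables, is N(0, 1/n), so
    P{X_n >= a} = (1/2) * erfc(a * sqrt(n/2)) exactly.
    Return -(1/n) log P{X_n >= a}, which should approach I(a) = a^2/2."""
    p = 0.5 * math.erfc(a * math.sqrt(n / 2.0))
    return -math.log(p) / n

# For a = 1 the limit is I(1) = 0.5; the finite-n values decrease toward it.
for n in (10, 100, 1000):
    print(n, ldp_rate_estimate(n))
```

(We stop at n = 1000 because `math.erfc` underflows to zero for much larger arguments.)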
Another useful observation is that if S is a product space, then a sequence of tight probability measures {Pn } ⊂ P(S) is exponentially tight if and only if the marginal distributions are exponentially tight.
Lemma 3.6. Let (S1, d1), (S2, d2), . . . be metric spaces, and let S = S1 × S2 × · · · with (for example) d(x, y) = Σ_k 2^{−k} (1 ∧ dk(xk, yk)). For n = 1, 2, . . ., let Pn be a tight probability measure on S. Then {Pn} is exponentially tight if and only if each marginal sequence {P^k_n} ⊂ P(Sk) is exponentially tight.

Proof. Suppose {Pn} is exponentially tight. Then for a > 0 there exists compact Ka such that

$$\limsup_{n\to\infty} \frac1n \log P_n(K_a^c) \le -a.$$

Without loss of generality, we can assume that Ka is of the form Ka = K¹a × K²a × · · ·, where each K^k_a is compact in Sk. It follows immediately that

$$\limsup_{n\to\infty} \frac1n \log P^k_n((K^k_a)^c) \le -a.$$

Conversely, suppose that for each k, {P^k_n} is exponentially tight. Then for a > 0 there exist compact sets K^k_a such that P^k_n((K^k_a)^c) ≤ e^{−nak}. Setting Ka = K¹a × K²a × · · ·, we have

$$P_n(K_a^c) \le \sum_{k=1}^\infty e^{-nak} = \frac{e^{-na}}{1 - e^{-na}},$$

and hence

$$\limsup_{n\to\infty} \frac1n \log P_n(K_a^c) \le -a. \qquad\square$$

Exponential tightness plays the same role in large deviation theory as tightness does in weak convergence theory (Chapter 3, Ethier and Kurtz [36]). The following analogue of the Prohorov compactness theorem is essentially Theorem P of Puhalskii [97]. More general results have been given by O'Brien and Vervaat [91] and de Acosta (see Theorem 2.1 of [24]).

Theorem 3.7. Let (S, d) be a metric space and {Pn} a sequence of tight probability measures on the Borel σ-algebra of S. Suppose that {Pn} is exponentially tight. Then there exists a subsequence {nk} along which the large deviation principle holds with a good rate function.

Proof. Select compact Kk, k = 1, 2, . . ., so that

$$\limsup_{n\to\infty} \frac1n \log P_n(K_k^c) \le -k.$$

By the compactness of the Kk, there is a countable set {xi} ⊂ ∪_k Kk that is dense in ∪_k Kk. Let 1 ≥ ε1 > ε2 > · · · > 0 satisfy lim_{j→∞} εj = 0. Then we can select a subsequence {nl} such that

$$-I_j(x_i) = \lim_{l\to\infty} \frac1{n_l} \log P_{n_l}(B_{\varepsilon_j}(x_i))$$

exists for all i and j. Define I0(xi) = lim_{j→∞} Ij(xi). For x ∈ S0 ≡ cl ∪_k Kk, define

$$I(x) = \lim_{\varepsilon\to 0}\ \inf_{x_i\in B_\varepsilon(x)} I_0(x_i),$$
and for x ∈ S − S0, define I(x) = ∞. We claim that the large deviation principle holds for the subsequence {Pnl} with rate function I. If x ∈ S − S0, then the definition of Kk implies

$$\lim_{\varepsilon\to 0}\limsup_{l\to\infty} \frac1{n_l} \log P\{X_{n_l}\in B_\varepsilon(x)\} = -\infty,$$

giving (3.3). Consider x ∈ S0. Let εj + d(x, xi) < ε. Then

$$\liminf_{l\to\infty} \frac1{n_l} \log P\{X_{n_l}\in B_\varepsilon(x)\} \ge -I_j(x_i),$$

and hence

$$\lim_{\varepsilon\to 0}\liminf_{l\to\infty} \frac1{n_l} \log P\{X_{n_l}\in B_\varepsilon(x)\} \ge -I(x).$$

Similarly, if ε > 0 and d(x, xi) + ε < εj, then

$$\limsup_{l\to\infty} \frac1{n_l} \log P\{X_{n_l}\in B_\varepsilon(x)\} \le -I_j(x_i),$$

and hence, since we can select (x_{i_k}, j_k) so that x_{i_k} → x, jk → ∞, and I_{j_k}(x_{i_k}) → I(x),

$$\lim_{\varepsilon\to 0}\limsup_{l\to\infty} \frac1{n_l} \log P\{X_{n_l}\in B_\varepsilon(x)\} \le -I(x).$$

It follows that (3.3) holds for all x ∈ S. But (3.3) implies the weak large deviation principle, and exponential tightness then gives the full large deviation principle. □

The following moment characterization of the large deviation principle, due to Varadhan and Bryc, is central to our consideration of large deviation theorems for Markov processes.

Proposition 3.8. Let {Xn} be a sequence of S-valued random variables.
a) (Varadhan Lemma) Suppose that {Xn} satisfies the large deviation principle with a good rate function I. Then for each f ∈ Cb(S),

(3.6) $$\lim_{n\to\infty} \frac1n \log E[e^{nf(X_n)}] = \sup_{x\in S}\{f(x) - I(x)\}.$$

b) (Bryc formula) Suppose that the sequence {Xn} is exponentially tight and that the limit

(3.7) $$\Lambda(f) = \lim_{n\to\infty} \frac1n \log E[e^{nf(X_n)}]$$

exists for each f ∈ Cb(S). Then {Xn} satisfies the large deviation principle with good rate function

(3.8) $$I(x) = \sup_{f\in C_b(S)} \{f(x) - \Lambda(f)\}.$$

Remark 3.9. a) We will refer to Λ as the rate transform for {Xn}. b) See Theorems 4.3.1 and 4.4.2 in Dembo and Zeitouni [29].
Proof. Let x ∈ S, and for ε > 0, let δε > 0 satisfy Bε(x) ⊂ {y : f(y) > f(x) − δε}. By the continuity of f, we can assume that lim_{ε→0} δε = 0. Then

$$P\{X_n\in B_\varepsilon(x)\} \le e^{n(\delta_\varepsilon - f(x))} E[e^{nf(X_n)}],$$

so

(3.9) $$\lim_{\varepsilon\to 0}\liminf_{n\to\infty} \frac1n \log P\{X_n\in B_\varepsilon(x)\} \le -f(x) + \liminf_{n\to\infty}\frac1n \log E[e^{nf(X_n)}]$$

and

(3.10) $$\lim_{\varepsilon\to 0}\limsup_{n\to\infty} \frac1n \log P\{X_n\in B_\varepsilon(x)\} \le -f(x) + \limsup_{n\to\infty}\frac1n \log E[e^{nf(X_n)}].$$

For x ∈ S and ε > 0, let f_{ε,x} ∈ Cb(S) satisfy f_{ε,x}(y) ≤ f_{ε,x}(x) − r 1_{B_ε(x)^c}(y) for all y ∈ S. Then

$$e^{-nr} + P\{X_n\in B_\varepsilon(x)\} \ge e^{-nf_{\varepsilon,x}(x)} E[e^{nf_{\varepsilon,x}(X_n)}],$$

so

$$\limsup_{n\to\infty} \frac1n \log P\{X_n\in B_\varepsilon(x)\} \ge -f_{\varepsilon,x}(x) + \limsup_{n\to\infty}\frac1n \log E[e^{nf_{\varepsilon,x}(X_n)}]$$

and

(3.11) $$\liminf_{n\to\infty} \frac1n \log P\{X_n\in B_\varepsilon(x)\} \ge -f_{\varepsilon,x}(x) + \liminf_{n\to\infty}\frac1n \log E[e^{nf_{\varepsilon,x}(X_n)}].$$

If (3.7) holds for each f ∈ Cb(S), then by (3.10) and (3.11), (3.3) holds with

$$I(x) = \sup_{f\in C_b(S)} \{f(x) - \Lambda(f)\},$$

implying the weak large deviation principle. Assuming exponential tightness, we have Part (b).
If (3.3) holds, then by (3.9),

$$\sup_{x\in S}(f(x) - I(x)) \le \liminf_{n\to\infty} \frac1n \log E[e^{nf(X_n)}].$$

If I is good, {Xn} is exponentially tight and there exists a compact K such that

$$\limsup_{n\to\infty} \frac1n \log E[e^{nf(X_n)}] = \limsup_{n\to\infty} \frac1n \log E[e^{nf(X_n)} 1_K(X_n)].$$

For each ε > 0, there exists a finite collection x1, . . . , xm ∈ K such that K ⊂ ∪_{i=1}^m Bε(xi) and f(y) ≤ max{ε + f(xi) : y ∈ Bε(xi)}. Consequently,

$$\limsup_{n\to\infty}\frac1n \log E[e^{nf(X_n)} 1_K(X_n)] \le \limsup_{n\to\infty}\frac1n \log E\Big[\sum_{i=1}^m e^{n(\varepsilon + f(x_i))} 1_{B_\varepsilon(x_i)}(X_n)\Big] \le \max_{1\le i\le m}\Big( f(x_i) + \varepsilon + \limsup_{n\to\infty}\frac1n \log P\{X_n\in B_\varepsilon(x_i)\} \Big),$$

and hence for each ε > 0, there exists xε ∈ K such that

$$\limsup_{n\to\infty}\frac1n \log E[e^{nf(X_n)} 1_K(X_n)] \le f(x_\varepsilon) + \varepsilon + \limsup_{n\to\infty}\frac1n \log P\{X_n\in B_\varepsilon(x_\varepsilon)\}.$$

Since K is compact, we can select a subsequence along which xε → x0 ∈ K as ε → 0. If d(xε, x0) + ε < δ, then

$$f(x_\varepsilon) + \varepsilon + \limsup_{n\to\infty}\frac1n \log P\{X_n\in B_\varepsilon(x_\varepsilon)\} \le f(x_\varepsilon) - f(x_0) + \varepsilon + f(x_0) + \limsup_{n\to\infty}\frac1n \log P\{X_n\in B_\delta(x_0)\},$$

and it follows that

$$\limsup_{n\to\infty}\frac1n \log E[e^{nf(X_n)} 1_K(X_n)] \le f(x_0) - I(x_0),$$

giving Part (a). □
Corollary 3.10. Suppose that the sequence {Xn} is exponentially tight and that the limit

(3.12) $$\Lambda(f) = \lim_{n\to\infty} \frac1n \log E[e^{nf(X_n)}]$$

exists for each f ∈ Cb(S). Then

$$I(x) = -\inf_{f\in C_b(S),\, f(x)=0} \Lambda(f).$$

Proof. The result follows from (3.8) and the fact that Λ(f + c) = Λ(f) + c for any constant c. □

In weak convergence theory, the continuous mapping theorem plays a key role. The analogous theorem for large deviation theory is the contraction principle (Dembo and Zeitouni [29], Theorem 4.2.1). The following result of Puhalskii ([97], Theorem 2.2) extends the usual statement of the contraction principle to discontinuous mappings in exactly the same way that the continuous mapping theorem extends to discontinuous functions. Theorem 4.2.23 of [29] and Garcia [53] give different approaches to the extension of the contraction principle to discontinuous functions.

Lemma 3.11. For n = 1, 2, . . ., let Xn be an S-valued random variable with a tight distribution. Suppose that {Xn} satisfies a large deviation principle with good rate function I. Let (S′, d′) be a metric space, and suppose F : S → S′ is measurable and that F is continuous at x for each x ∈ S with I(x) < ∞. Define Yn = F ◦ Xn. Then {Yn} satisfies a large deviation principle with good rate function

$$I'(y) = \inf\{I(x) : F(x) = y\}.$$

In particular, if y is not in the range of F, then I′(y) = ∞.

Proof. Let Km = {x : I(x) ≤ m}. Then F is continuous at each point in Km and hence F(Km) is compact in S′. Exponential tightness for {Yn} follows since for each η > 0, there exists ε > 0 such that x ∈ Km^ε implies F(x) ∈ F(Km)^η, and hence {Yn ∉ F(Km)^η} ⊂ {Xn ∉ Km^ε} and

$$\limsup_{n\to\infty} \frac1n \log P\{Y_n\notin F(K_m)^\eta\} \le \limsup_{n\to\infty} \frac1n \log P\{X_n\notin K_m^\varepsilon\} \le -\inf_{x\in (K_m^\varepsilon)^c} I(x) \le -m.$$
Let I′ denote the rate function for {Yn} (or a subsequence). Suppose F(x) = y. If I(x) < ∞, then for each ε > 0, there exists δ > 0 such that F(Bδ(x)) ⊂ Bε(y). Consequently,

$$I'(y) = -\lim_{\varepsilon\to 0}\liminf_{n\to\infty} \frac1n \log P\{Y_n\in B_\varepsilon(y)\} \le -\lim_{\delta\to 0}\liminf_{n\to\infty} \frac1n \log P\{X_n\in B_\delta(x)\} = I(x),$$

and hence I′(y) ≤ inf{I(x) : F(x) = y}. If I′(y) ≤ m < ∞, then

$$I'(y) = -\lim_{\varepsilon\to 0}\liminf_{n\to\infty} \frac1n \log P\{Y_n\in B_\varepsilon(y)\} \ge \lim_{\varepsilon\to 0}\ \inf_{x\in \mathrm{cl}\,F^{-1}(B_\varepsilon(y))} I(x),$$

and there exist xε ∈ cl F⁻¹(Bε(y)) such that lim_{ε→0} I(xε) ≤ I′(y) ≤ m. Since {x : I(x) ≤ m} is compact, we may assume that xε → x0. But I(x0) ≤ lim_{ε→0} I(xε) ≤ I′(y). Since I(x0) < ∞, F is continuous at x0, and since xε ∈ cl F⁻¹(Bε(y)), there exist x′ε ∈ F⁻¹(Bε(y)) such that x′ε → x0, and hence F(x0) = lim_{ε→0} F(x′ε) = y. Consequently, I′(y) ≥ inf{I(x) : F(x) = y}.
Note that K′m = {y : I′(y) ≤ m} = {F(x) : I(x) ≤ m}. Since F is continuous on {x : I(x) ≤ m}, K′m is compact. □

Lemma 3.12. Let (S, d) and (Ŝ, d̂) be metric spaces. Suppose that S ⊂ Ŝ, B(S) ⊂ B(Ŝ), and if {xn} ⊂ S converges to x ∈ S under d, then it also converges under d̂. Suppose {Xn} satisfies a large deviation principle in (Ŝ, d̂) with rate function Î, P{Xn ∈ S} = 1, and {Xn} is exponentially tight in (S, d). Then {Xn} satisfies a large deviation principle in S with rate function given by the restriction of Î to S.

Proof. Let I be the rate function for the large deviation principle for {Xn} (or a subsequence) in (S, d). Since the injection map F : S → Ŝ is continuous, we must have Î(x) = I(x) for x ∈ S and Î(x) = ∞ for x ∈ Ŝ − S. □

The following lemma, which is essentially Theorem 4.2.13 of Dembo and Zeitouni [29], is the exponential analogue of a weak convergence result known in the statistical literature as Slutsky's theorem. (See, for example, Ethier and Kurtz [36], Corollary 3.3.3.)

Lemma 3.13. For n = 1, 2, . . ., let (Xn, Yn) be an S × S-valued random variable. Suppose that for each η > 0,

(3.13) $$\limsup_{n\to\infty} \frac1n \log P\{d(X_n, Y_n) > \eta\} = -\infty.$$

Then the following hold:
a) {Xn} is exponentially tight if and only if {Yn} is exponentially tight.
b) If {Xn} (or, equivalently, {Yn}) is exponentially tight, then for each f ∈ Cb(S),

(3.14) $$\limsup_{n\to\infty} \frac1n \log E[e^{nf(X_n)}] = \limsup_{n\to\infty} \frac1n \log E[e^{nf(Y_n)}]$$
and (3.15)
lim inf n→∞
1 1 log E[enf (Xn ) ] = lim inf log E[enf (Yn ) ], n→∞ n n
and hence, the large deviation principle holds for {Xn } if and only if it holds for {Yn }. If the large deviation principle holds, {Xn } and {Yn } have the same rate function. Proof. Part (a) is an immediate consequence of Lemma 3.3. As in the proof of Proposition 3.17, the collection of f for which (3.14) (and, similarly, (3.15)) holds is bucclosed. (See Appendix A.2.) Consequently, it suffices to consider f ∈ Cu (S), the space of bounded, uniformly continuous functions. Let ωf () = sup{f (x) − f (y) : d(x, y) ≤ } be the modulus of continuity for f (which, by uniform continuity, satisfies lim→0 ωf () = 0), and let K be a compact subset of S satisfying lim supn→∞ n1 log P {Xn ∈ K c } < −2kf k. Then, by (3.13), lim supn→∞ n1 log P {d(Xn , Yn ) > η} < −2kf k, so for η > 0, lim sup n→∞
1 log E[enf (Xn ) ] n
1 log E[I{d(Xn ,Yn )≤η} enf (Xn ) ] n→∞ n 1 ≤ lim sup log E[I{d(Xn ,Yn )≤η} en(f (Yn )+ωf (η)) ] n→∞ n 1 = ωf (η) + lim sup log E[enf (Yn ) ] n = lim sup
and similarly lim sup n→∞
1 1 log E[enf (Xn ) ] ≥ −ωf (η) + lim sup log E[enf (Yn ) ]. n n→∞ n
Letting η → 0, (3.14)follows. The proof of (3.15) is similar.
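The modulus-of-continuity bound that drives the proof is easy to check numerically. A minimal sketch, assuming the hypothetical choice f = sin (bounded, uniformly continuous, Lipschitz constant 1, so ω_f(η) ≤ η), evaluated on a finite grid:

```python
import math

# Sketch of the modulus of continuity omega_f used in the proof above, for
# the hypothetical choice f = sin: omega_f(eta) <= eta, so the error term
# omega_f(eta) in the Laplace-functional bounds vanishes as eta -> 0.
def omega_f(f, eta, grid):
    return max(abs(f(s) - f(t)) for s in grid for t in grid if abs(s - t) <= eta)

grid = [i / 50.0 for i in range(-200, 201)]
for eta in [0.5, 0.1, 0.02]:
    assert omega_f(math.sin, eta, grid) <= eta
```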
The following variation of Lemma 3.13 is slightly more complicated to state, but may be more natural to apply in certain situations.

Lemma 3.14. For each n, let Xn be an S-valued random variable with a tight distribution. Suppose that for each η > 0 and a > 0, there exists a sequence {Xn^{η,a}} of S-valued random variables such that for each n, Xn^{η,a} has a tight distribution, {Xn^{η,a}} satisfies the large deviation principle with good rate function I^{η,a}, and

lim sup_{n→∞} (1/n) log P{d(Xn, Xn^{η,a}) > η} ≤ −a.

Then {Xn} satisfies the large deviation principle with good rate function I satisfying

(3.16) I(x) = lim_{δ→0} lim inf_{η→0} lim inf_{a→∞} inf_{y∈Bδ(x)} I^{η,a}(y) = lim_{δ→0} lim sup_{η→0} lim sup_{a→∞} inf_{y∈Bδ(x)} I^{η,a}(y).

Proof. By Lemma 3.5, {Xn^{η,a}} is exponentially tight, and Lemma 3.3 then gives exponential tightness for {Xn}. By Theorem 3.7, we may as well assume that {Xn} satisfies the large deviation principle, and show that the rate function is uniquely determined by (3.16). Observe that for ε > 0,

(1/n) log P{Xn ∈ Bε(x)} ≤ (1/n) log(P{Xn^{η,a} ∈ B_{ε+η}(x)} + P{d(Xn, Xn^{η,a}) > η}).
Consequently, by (3.3),

−I(x) ≤ (−a) ∨ (− inf_{y∈Bδ(x)} I^{η,a}(y))

for any δ > η, and hence

(3.17) I(x) ≥ lim_{δ→0} lim sup_{η→0} lim sup_{a→∞} inf_{y∈Bδ(x)} I^{η,a}(y).

Similarly,

(1/n) log P{Xn^{η,a} ∈ Bδ(x)} ≤ (1/n) log(P{Xn ∈ B̄_{δ+η}(x)} + P{d(Xn, Xn^{η,a}) > η}),

and hence

− inf_{y∈Bδ(x)} I^{η,a}(y) ≤ (−a) ∨ (− inf_{y∈B̄_{δ+η}(x)} I(y))

and

(3.18) I(x) = lim_{δ→0} lim_{η→0} inf_{y∈B̄_{δ+η}(x)} I(y) ≤ lim_{δ→0} lim inf_{η→0} lim inf_{a→∞} inf_{y∈Bδ(x)} I^{η,a}(y).

Together, (3.17) and (3.18) give (3.16). □
3.2. Identifying a rate function

To prove a large deviation principle, it is, in general, enough to verify (3.7) for a class of functions that is much smaller than Cb(S). Let Γ = {f : (3.7) holds}.

Definition 3.15. A collection of functions D ⊂ Cb(S) will be called rate function determining if, whenever D ⊂ Γ for an exponentially tight sequence, the sequence satisfies the large deviation principle with good rate function

(3.19) I(x) = sup_{f∈D} {f(x) − Λ(f)}.
The following observations are useful in identifying rate function determining classes.

Lemma 3.16. a) For positive sequences {an} and {bn},

lim_{n→∞} |(1/n) log(an + bn) − (1/n) log(max{an, bn})| = 0.

b) Let f ∈ Cb(S), and let {Fn} be a sequence of events such that

lim sup_{n→∞} (1/n) log P(Fn^c) < −2‖f‖.

Then

lim_{n→∞} |(1/n) log E[e^{nf(Xn)}] − (1/n) log E[I_{Fn} e^{nf(Xn)}]| = 0.

Proof. Part (a) is immediate. For part (b), note that

lim inf_{n→∞} (1/n) log E[e^{nf(Xn)}] ≥ inf_x f(x) ≥ −‖f‖

and

lim sup_{n→∞} (1/n) log E[I_{Fn^c} e^{nf(Xn)}] < −‖f‖,

so the conclusion follows from part (a). □

Proposition 3.17. Let {Xn} be exponentially tight. Then
a) Γ = {f : (3.7) holds} is buc-closed.

b) If Cb(S) is the buc-closure of D, then D is rate function determining.

Proof. Suppose f is buc-approximable by functions in Γ. Let C_{f,Γ} be as in Definition A.6. Let K be compact and satisfy lim sup_{n→∞} (1/n) log P{Xn ∈ K^c} ≤ −3C_{f,Γ}. Then, for ε > 0, there exists g ∈ Γ such that ‖g‖ ≤ C_{f,Γ} and sup_{x∈K} |f(x) − g(x)| ≤ ε. Observe that

lim sup_{n→∞} (1/n) log E[I_{K^c}(Xn) e^{ng(Xn)}] ≤ sup_x g(x) − 3C_{f,Γ} ≤ −2C_{f,Γ},

and hence that

lim_{n→∞} (1/n) log E[e^{ng(Xn)}]
 = lim_{n→∞} (1/n) log(E[I_K(Xn) e^{ng(Xn)}] + E[I_{K^c}(Xn) e^{ng(Xn)}])
 = lim_{n→∞} (1/n) log E[I_K(Xn) e^{ng(Xn)}].

Since f(x) ≤ (g(x) + ε) I_K(x) + I_{K^c}(x)‖f‖,

lim sup_{n→∞} (1/n) log E[e^{nf(Xn)}]
 ≤ lim sup_{n→∞} (1/n) log E[I_K(Xn) e^{n(g(Xn)+ε)} + I_{K^c}(Xn) e^{n‖f‖}]
 ≤ max{ lim sup_{n→∞} (1/n) log E[I_K(Xn) e^{n(g(Xn)+ε)}], ‖f‖ + lim sup_{n→∞} (1/n) log P{Xn ∈ K^c} }
 ≤ max{Λ(g) + ε, −2C_{f,Γ}} = Λ(g) + ε.

Similarly,

lim inf_{n→∞} (1/n) log E[e^{nf(Xn)}]
 ≥ lim inf_{n→∞} (1/n) log E[I_K(Xn) e^{n(g(Xn)−ε)} + I_{K^c}(Xn) e^{−n‖f‖}]
 ≥ max{ lim inf_{n→∞} (1/n) log E[I_K(Xn) e^{n(g(Xn)−ε)}], −‖f‖ + lim inf_{n→∞} (1/n) log P{Xn ∈ K^c} }
 ≥ max{Λ(g) − ε, −3C_{f,Γ}} = Λ(g) − ε.

The existence of Λ(f) and part (a) follow.

For part (b), let Γ_{x,c} = {f : f(x) − Λ(f) ≤ c}. We claim that Γ_{x,c} is buc-closed. Suppose that f is buc-approximable by Γ_{x,c}. Let C_{f,Γ_{x,c}} be as in Definition A.6, and let K ⊂ S be compact such that x ∈ K and lim sup_{n→∞} (1/n) log P{Xn ∈ K^c} ≤ −3C_{f,Γ_{x,c}}. As in the proof of part (a), for ε > 0, there exists g ∈ Γ_{x,c} such that |f(x) − g(x)| ≤ ε and Λ(g) − ε ≤ Λ(f) ≤ Λ(g) + ε. Consequently, f(x) − Λ(f) ≤ c + 2ε, and since ε is arbitrary, f ∈ Γ_{x,c}.

Let D ⊂ Cb(S) and suppose that Λ(f) exists for f ∈ D. Let D̄ denote the buc-closure of D. Then by part (a), Λ(f) exists for f ∈ D̄. Set c = sup_{f∈D}(f(x) − Λ(f)). Since Γ_{x,c} is buc-closed, we have c = sup_{f∈D̄}(f(x) − Λ(f)), and since in part (b), D̄ = Cb(S), the result follows. □
Definition 3.18. A collection of functions D ⊂ Cb(S) isolates points in S if, for each x ∈ S, each ε > 0, and each compact K ⊂ S, there exists f ∈ D satisfying

|f(x)| < ε,  sup_{y∈K} f(y) ≤ 0,  and  sup_{y∈K∩Bε^c(x)} f(y) < −(1/ε).

If f satisfies these conditions, we will say that f isolates x relative to ε and K. A collection of functions D ⊂ Cb(S) is bounded above if sup_{f∈D} sup_y f(y) < ∞.

The following lemma is immediate.

Lemma 3.19. Let C(c, S) = {f ∈ Cb(S) : sup_y f(y) ≤ c}. If D ⊂ C(c, S) is dense in C(c, S) in the compact uniform topology, then D is bounded above and isolates points.

Proposition 3.20. If D is bounded above and isolates points, then D is rate function determining.

Proof. Suppose {Xn} is exponentially tight, and let Γ = {f ∈ Cb(S) : (3.7) holds}. It is sufficient to show that for any subsequence along which the large deviation principle holds, the corresponding rate function is given by (3.19). Consequently, we may as well assume Γ = Cb(S). Note that Λ(f) is a monotone function of f. Let c > 0 satisfy sup_{f∈D} sup_y f(y) ≤ c. Fix f0 ∈ Cb(S) satisfying f0(x) = 0, and let δ > 0. Let a > c + Λ(f0), and let Ka be a compact set satisfying

lim sup_{n→∞} (1/n) log P{Xn ∉ Ka} ≤ −a.

Let 0 < ε < δ be such that sup_{y∈Bε(x)} |f0(y)| ≤ δ and inf_{y∈Ka} f0(y) ≥ −ε^{−1}. By the assumptions on D, there exists f ∈ D satisfying f ≤ c I_{Ka^c}, f I_{Ka} ≤ f0 I_{Ka} + δ, and |f(x)| < ε. It follows that

e^{nf} ≤ e^{nc} I_{Ka^c} + e^{n(f0+δ)},

and hence Λ(f) ≤ max(−(a − c), Λ(f0) + δ) = Λ(f0) + δ. Consequently, f(x) − Λ(f) ≥ −Λ(f0) − 2δ, and (3.19) follows. □

Corollary 3.21. Fix x ∈ S, and suppose {fm} ⊂ D satisfies the following:

a) sup_m sup_y fm(y) ≤ c < ∞.

b) For each compact K and each ε > 0,

lim sup_{m→∞} sup_{y∈K} fm(y) ≤ 0  and  lim_{m→∞} sup_{y∈K∩Bε^c(x)} fm(y) = −∞.

c) lim_{m→∞} fm(x) = 0.

Then I(x) = − lim_{m→∞} Λ(fm).
Proof. Let f0 ∈ Cb(S) with f0(x) = 0. The assumptions on the sequence {fm} imply that for each compact K ⊂ S and δ > 0 there exists m0 such that for m > m0,

e^{nfm} ≤ e^{nc} I_{K^c} + e^{n(f0+δ)},

and hence

lim inf_{m→∞} (fm(x) − Λ(fm)) ≥ −Λ(f0) − δ.

Since fm(x) → 0, it follows that I(x) = − lim_{m→∞} Λ(fm). □
3.3. Rate functions in product spaces

We now consider the construction of rate function determining classes for product spaces. Recall (Lemma 3.6) that a sequence {Xn} with values in a product space is exponentially tight if and only if each of its marginal distributions is exponentially tight.

Lemma 3.22. Let (S1, d1), ..., (Sm, dm) be complete, separable metric spaces, and let S = S1 × ··· × Sm and d(x, y) = Σ_{k=1}^m dk(xk, yk). Suppose that for each k, Dk ⊂ Cb(Sk) is bounded above and isolates points in Sk. Then D = {f1 + ··· + fm : fk ∈ Dk} is bounded above and isolates points in S.

The proof of Lemma 3.22 is similar to the proof of the following lemma.

Lemma 3.23. Let (S1, d1), (S2, d2), ... be complete, separable metric spaces, and let S = S1 × S2 × ··· and d(x, y) = Σ_k 2^{−k}(dk(xk, yk) ∧ 1). Suppose that for each k, Dk ⊂ Cb(Sk) is bounded above by 1 and isolates points in Sk. Then D = {m^{−1}(f1 + ··· + fm) : m ≥ 1, fk ∈ Dk} is bounded above by 1 and isolates points in S.

Proof. Let x ∈ S, ε > 0, and compact K ⊂ S. Select m > 1 such that 2^{−m} < ε/2. For k = 1, ..., m, there exist compact Kk ⊂ Sk such that K ⊂ K1 × ··· × Km × S_{m+1} × ···. For each k = 1, ..., m, let fk ∈ Dk isolate xk relative to ε/m and Kk. Then

(1/m) Σ_{k=1}^m fk ≤ (1/m) Σ_{k=1}^m I_{Kk^c} ≤ I_{K^c}

and

|(1/m) Σ_{k=1}^m fk(xk)| ≤ ε.

If y ∈ K ∩ Bε^c(x), then there exists k ≤ m such that yk ∈ Kk ∩ B_{ε/2}^c(xk). Consequently, we have

sup_{y∈K∩Bε^c(x)} (1/m) Σ_{k=1}^m fk(yk) ≤ sup_{y∈K∩Bε^c(x)} min_{1≤k≤m} (1/m) fk(yk) ≤ −(1/ε),

and f = m^{−1}(f1 + ··· + fm) isolates x relative to ε and K. □
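The truncation step in the proof — choosing m so that the tail of the product metric is below ε/2 — can be sketched numerically. The coordinate metrics below are hypothetical choices for illustration, not from the text:

```python
# Sketch of the product metric d(x, y) = sum_k 2^{-k} (d_k(x_k, y_k) ∧ 1) of
# Lemma 3.23, with hypothetical coordinate metrics d_k(x_k, y_k) = |x_k - y_k|.
# The tail beyond coordinate m is at most sum_{k>m} 2^{-k} = 2^{-m}, which is
# how m is selected in the proof.
def product_metric(x, y, terms):
    return sum(2.0 ** -k * min(abs(x(k) - y(k)), 1.0) for k in range(1, terms + 1))

x = lambda k: 0.0
y = lambda k: 1.0 / k
m = 10
d_full = product_metric(x, y, 60)   # effectively the full sum
d_head = product_metric(x, y, m)    # first m coordinates only
assert 0.0 <= d_full - d_head <= 2.0 ** -m
```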
The following variation of Lemma 3.23 gives a rate function determining class with somewhat simpler structure.
Lemma 3.24. Let (S1, d1), (S2, d2), ... be complete, separable metric spaces, and let S = S1 × S2 × ··· and d(x, y) = Σ_k 2^{−k}(dk(xk, yk) ∧ 1). Suppose that for each k, Dk ⊂ Cb(Sk) is bounded above and isolates points in Sk. Then D = {f1 + ··· + fm : m ≥ 1, fk ∈ Dk} is a rate function determining class for S.

Proof. Suppose {Xn} is exponentially tight in S and the large deviation principle holds with rate function I. Letting Xn^k denote the kth component of Xn, the large deviation principle holds for {(Xn^1, ..., Xn^m)}. Let Im denote the corresponding rate function. By Lemma 3.22 and Proposition 3.20,

Im(x1, ..., xm) = sup_{f1∈D1,...,fm∈Dm} (f1(x1) + ··· + fm(xm) − Λ(f1 + ··· + fm)),

where

Λ(f1 + ··· + fm) = lim_{n→∞} (1/n) log E[e^{n(f1(Xn^1)+···+fm(Xn^m))}].

It follows easily that I(x) ≥ Im(x1, ..., xm), and in fact, by the contraction principle, Lemma 3.11,

Im(x1, ..., xm) = inf{I(y) : y1 = x1, ..., ym = xm}.

Consequently, by the lower semicontinuity of I, it follows that

lim_{m→∞} Im(x1, ..., xm) = I(x),

and hence

I(x) = sup_{f∈D} {f(x) − Λ(f)}. □
The following result is particularly useful in considering Markov processes.

Proposition 3.25. Suppose {(Xn, Yn)} is exponentially tight in the product space (S1 × S2, d1 + d2). Let μn ∈ P(S1 × S2) be the distribution of (Xn, Yn), and let μn(dx × dy) = ηn(dy|x) μn^1(dx); that is, μn^1 is the S1-marginal of μn and ηn gives the conditional distribution of Yn given Xn. Suppose that for each f ∈ Cb(S2),

(3.20) Λ2(f|x) = lim_{n→∞} (1/n) log ∫_{S2} e^{nf(y)} ηn(dy|x)

exists, that the convergence is uniform for x in compact subsets of S1, and that Λ2(f|x) is a continuous function of x. For x ∈ S1 and y ∈ S2, define

I2(y|x) = sup_{f∈Cb(S2)} (f(y) − Λ2(f|x)).

If {Xn} satisfies the large deviation principle with good rate function I1, then {(Xn, Yn)} satisfies the large deviation principle with good rate function

(3.21) I(x, y) = I1(x) + I2(y|x).

Remark 3.26. The uniformity of the convergence in (3.20) can be replaced by the assumption that there exists a sequence Kn ⊂ S1 such that

lim_{n→∞} sup_{x∈Kn} |Λ2(f|x) − (1/n) log ∫_{S2} e^{nf(y)} ηn(dy|x)| = 0

and

lim_{n→∞} (1/n) log μn^1(Kn^c) = −∞.
Proof. By the assumption of exponential tightness, it is enough to show that for any subsequence along which the large deviation principle holds, the rate function is given by (3.21). Consequently, we may as well assume that the large deviation principle holds for the original sequence. Observe that by the assumptions on the convergence in (3.20), for f(x, y) = f1(x) + f2(y), we have

Λ(f) = Λ1(f1 + Λ2(f2|·)).

We apply Corollary 3.21 taking, for definiteness,

f_{m1,m2}(x, y) = f_{m1}^1(x) + f_{m2}^2(y) = −(m1^2 d1(x0, x)) ∧ m1 − (m2^2 d2(y0, y)) ∧ m2.

Recall that Λ(f + c) = Λ(f) + c, and observe that, by the continuity of Λ2(f_{m2}^2|x) as a function of x,

f̂_{m1}^1 = f_{m1}^1 + Λ2(f_{m2}^2|·) − Λ2(f_{m2}^2|x0)

satisfies the conditions of Corollary 3.21 with S = S1 and x = x0. Then

I(x0, y0) = − lim_{m1,m2→∞} Λ(f_{m1,m2})
 = − lim_{m2→∞} lim_{m1→∞} [Λ1(f_{m1}^1 + Λ2(f_{m2}^2|·) − Λ2(f_{m2}^2|x0)) + Λ2(f_{m2}^2|x0)]
 = I1(x0) + I2(y0|x0),

giving the desired result. □
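The specific family used above can be examined concretely. The sketch below takes S = R, d(x, y) = |x − y|, and x0 = 0 (hypothetical choices for illustration) and checks the three conditions of Corollary 3.21:

```python
# The family f_m(x) = -(m^2 d(x0, x)) ∧ m from the proof, on S = R with
# d(x, y) = |x - y| and x0 = 0 (illustrative choices): f_m(x0) = 0, f_m <= 0
# everywhere, and off any ball B_eps(x0) the supremum equals -m -> -infinity
# once m >= 1/eps, as Corollary 3.21 requires.
def f_m(m, x, x0=0.0):
    return -min(m * m * abs(x - x0), m)

eps = 0.1
for m in [20, 50]:                                            # m >= 1/eps
    assert f_m(m, 0.0) == 0.0                                 # condition (c)
    assert all(f_m(m, x) <= 0.0 for x in [-1.0, 0.05, 2.0])   # bounded above
    assert max(f_m(m, x) for x in [eps, 0.5, 3.0]) == -m      # condition (b)
```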
CHAPTER 4
Large deviations for stochastic processes

We focus on stochastic processes with cadlag trajectories. Throughout, (E, r) will be a complete, separable metric space, DE[0,∞) will be the space of cadlag, E-valued functions with the Skorohod (J1) topology, and CE[0,∞) will be the space of continuous, E-valued functions. Theorems 4.1 and 4.4 in Section 4.1 characterize exponential tightness for these processes. These results are direct analogs of standard tightness theorems in the theory of weak convergence. The effects of changes of timescale on large deviation results are discussed in Section 4.2.

Exponential tightness is usually easier to verify if the state space is compact. Frequently, results can be obtained by first compactifying the state space, verifying the large deviation principle in the compact space, and then inferring the large deviation principle in the original space from the result in the compact space. Section 4.3 gives results supporting this approach.

Restricted to CE[0,∞), the Skorohod topology is just the compact uniform topology. If the sequence of processes is asymptotically continuous in a strong enough sense, then we see in Section 4.4 that the large deviation principle in the Skorohod topology implies the large deviation principle in the compact uniform topology.

The main exponential tightness results are developed further in Sections 4.5 and 4.6 in the context of martingale problems. Section 4.7 gives the large deviation analog of the fact that tightness in the Skorohod topology plus convergence of the finite dimensional distributions implies weak convergence of the processes. Here, exponential tightness plus the large deviation principle for the finite dimensional distributions implies the large deviation principle for the processes. The rate function is identified as the appropriate supremum of the finite dimensional rate functions.
The results can be viewed as a variation of the projective limit method (Dawson and Gärtner [22], de Acosta [24]) adapted to the Skorohod topology.

4.1. Exponential tightness for processes

We now consider exponential tightness for a sequence of processes {Xn} in DE[0,∞). Let Xn be adapted to a right-continuous, complete filtration {Ft^n}. Let S^n(T) be the collection of all {Ft^n}-stopping times bounded by T, and let S0^n(T) ⊂ S^n(T) be the subcollection of discrete stopping times. Recall that each τ ∈ S^n(T) can be approximated by a decreasing sequence in S0^n(T).

The following theorem is the analogue for exponential tightness of Ethier and Kurtz [36], Theorem 3.8.6. The weak convergence version of condition (b) is due to Kurtz [71], and the weak convergence version of condition (c) is due to Aldous [2]. For δ > 0 and T > 0, define the modulus of continuity in DE[0,∞) by

w′(x, δ, T) = inf_{{ti}} max_i sup_{s,t∈[ti−1,ti)} r(x(s), x(t)),
where the infimum is over {ti} satisfying 0 = t0 < t1 < ··· < tm−1 < T ≤ tm and min_{1≤i≤m}(ti − ti−1) > δ. See Ethier and Kurtz [36], Section 3.6, for a discussion of the properties of w′. We define q(x, y) = 1 ∧ r(x, y). Note that q is a metric equivalent to r.

Theorem 4.1. Let T0 be a dense subset of [0,∞). Suppose that for each t ∈ T0, {Xn(t)} is exponentially tight. Then the following are equivalent.

a) {Xn} is exponentially tight in DE[0,∞).

b) For each T > 0, there exist β > 0 and random variables γn(δ, λ, T), δ, λ > 0, satisfying

(4.1) E[e^{nλ q^β(Xn(t+u),Xn(t)) ∧ q^β(Xn(t),Xn(t−v))} | Ft^n] ≤ E[e^{γn(δ,λ,T)} | Ft^n]

for 0 ≤ t ≤ T, 0 ≤ u ≤ δ, and 0 ≤ v ≤ t ∧ δ, such that for each λ > 0,

(4.2) lim_{δ→0} lim sup_{n→∞} (1/n) log E[e^{γn(δ,λ,T)}] = 0

and

(4.3) lim_{δ→0} lim sup_{n→∞} (1/n) log E[e^{nλ q^β(Xn(δ),Xn(0))}] = 0.

c) Condition (4.3) holds, and for each T > 0, there exists β > 0 such that for each λ > 0,

Cn(δ, λ, T) ≡ sup_{τ∈S0^n(T)} E[ sup_{u≤δ, v≤δ∧τ} e^{nλ q^β(Xn(τ+u),Xn(τ)) ∧ q^β(Xn(τ),Xn(τ−v))} ]

satisfies

(4.4) lim_{δ→0} lim sup_{n→∞} (1/n) log Cn(δ, λ, T) = 0.

d) For each ε > 0 and T > 0,

(4.5) lim_{δ→0} lim sup_{n→∞} (1/n) log P{w′(Xn, δ, T) > ε} = −∞.

Remark 4.2. Note that (4.1) is implied by the simpler inequality

(4.6) E[e^{nλ q^β(Xn(t+u),Xn(t))} | Ft^n] ≤ E[e^{γn(δ,λ,T)} | Ft^n].

Puhalskii [97], Theorem 4.4 (see also Puhalskii [100], Theorem 3.2.3), gives a condition similar to (c) that would correspond to taking the supremum over u inside the expectation in the definition of Cn(δ, λ, T) (a condition that is substantially more difficult to verify). Under Puhalskii's condition, it is, in fact, unnecessary to consider stopping times; that is, the supremum over τ ∈ S0^n(T) can be replaced by the supremum over 0 ≤ t ≤ T, and C-exponential tightness (Definition 4.12) follows without additional assumptions. Theorem 4.2 of Puhalskii [97] (Theorem 3.2.1 of [100]) gives the equivalence of (a) and (d).

Note also that the minimum in the exponent can be replaced by the product. In particular, if 0 ≤ a, b ≤ 1 and β > 0, then a^β b^β ≤ a^β ∧ b^β ≤ a^{β/2} b^{β/2}.

Before proving Theorem 4.1, we need the following analogue of Ethier and Kurtz [36], Lemma 3.8.4. We suppress the n to simplify the notation and let a_β = 2^{(β−1)∨0}, so that q^β(x, y) ≤ a_β (q^β(x, z) + q^β(z, y)).
Lemma 4.3. Fix β > 0. For δ, λ, T > 0, define

C(δ, λ, T) ≡ sup_{τ∈S0(T)} sup_{u≤δ} E[ sup_{v≤δ∧τ} e^{λ q^β(X(τ+u),X(τ)) q^β(X(τ),X(τ−v))} ].

Let τ ∈ S(T), and let τ̂ be a stopping time with τ̂ ≥ τ a.s. Then

E[ sup_{v≤δ∧τ} e^{λ q^β(X(τ̂∧(τ+δ)),X(τ)) q^β(X(τ),X(τ−v))} ] ≤ C(3δ, 24a_β^4 λ, T + 2δ).

Proof. Let τ ∈ S(T + δ), and let M_τ(δ) denote the collection of F_τ-measurable random variables U satisfying 0 ≤ U ≤ δ. Note that τ + U ∈ S(T + 2δ). It follows from Ethier and Kurtz [36], (3.8.23), and the Schwarz and Jensen inequalities that for τ ∈ S(T + δ) and U ∈ M_τ(δ),

(4.7) E[ sup_{v≤δ∧τ} e^{λ q^β(X(τ+U),X(τ)) q^β(X(τ),X(τ−v))} ]
 ≤ ( C(2δ, 2a_β λ, T + δ) C(3δ, 8a_β^2 λ, T + 2δ) )^{1/2}
 ≤ C(3δ, 8a_β^2 λ, T + 2δ).

(Note that C(δ, λ, T) is nondecreasing in all three variables.) For τ ∈ S(T), observe that

q^β(X(τ̂∧(τ+δ)), X(τ)) q^β(X(τ), X(τ−v))
 ≤ a_β q^β(X(τ+δ), X(τ)) q^β(X(τ), X(τ−v))
 + a_β^2 q^β(X(τ+δ), X(τ̂∧(τ+δ))) q^β(X(τ̂∧(τ+δ)), X(τ))
 + a_β^2 q^β(X(τ+δ), X(τ̂∧(τ+δ))) q^β(X(τ̂∧(τ+δ)), X(τ−v)).

Observing that U = τ + δ − τ̂∧(τ+δ) is F_{τ̂∧(τ+δ)}-measurable, so that (4.7) can be applied, Hölder's inequality and (4.7) give

E[ sup_{v≤δ∧τ} e^{λ q^β(X(τ̂∧(τ+δ)),X(τ)) q^β(X(τ),X(τ−v))} ]
 ≤ ( C(3δ, 24a_β^3 λ, T + δ) C(3δ, 24a_β^4 λ, T + 2δ)^2 )^{1/3}
 ≤ C(3δ, 24a_β^4 λ, T + 2δ). □

Proof of Theorem 4.1. (a implies b): Fix β > 0. For compact K ⊂ DE[0,∞), define

γK(δ, T) = sup_{x∈K} sup_{0≤t≤T} sup_{0≤u≤δ, 0≤v≤δ∧t} q^β(x(t+u), x(t)) ∧ q^β(x(t), x(t−v)),

and note that lim_{δ→0} γK(δ, T) = 0. By the assumption of exponential tightness, for each λ > 0, there is a compact Kλ satisfying

lim sup_{n→∞} (1/n) log P{Xn ∉ Kλ} ≤ −2λ.

Then, setting γn(δ, λ, T) = nλ(γ_{Kλ}(δ, T) + I_{{Xn∉Kλ}}), (4.1) is trivially satisfied, and noting that

E[e^{nλ I_{{Xn∉Kλ}}}] ≤ 1 + e^{nλ} P{Xn ∉ Kλ} → 1,

we have

lim_{n→∞} (1/n) log E[e^{γn(δ,λ,T)}] = λ γ_{Kλ}(δ, T),

and (4.2) follows. A similar argument gives (4.3).

(b implies c): Note that if (4.1) holds for 0 ≤ t ≤ T, then the inequality also holds with t replaced by τ ∈ S0^n(T), and hence (4.2) implies (4.4).

(c implies d): We follow the development for weak convergence in Section 3.8 of Ethier and Kurtz [36]; however, the details are actually simpler in the present setting. Let 0 < ε ≤ 1. Suppressing the index n for the moment, let τ0 = σ0 = 0 and, for k = 1, 2, ..., define

τk = inf{t > τ_{k−1} : r(X(t), X(τ_{k−1})) ≥ ε}, if τ_{k−1} < ∞,

and

σk = sup{t ≤ τk : r(X(t), X(τk)) ∨ r(X(t−), X(τk)) ≥ ε}, if τk < ∞.

Set σk = ∞ if τk = ∞. It follows (see Section 3.8 of Ethier and Kurtz [36]) that min{τ_{k+1} − σk : τk < T + δ/2} > δ implies w′(X, δ/2, T) ≤ 2ε, so
P{w′(X, δ/2, T) > 2ε} ≤ Σ_{k=1}^{[T/δ]+2} P{τ_{k+1} − σk ≤ δ, τk < T + δ/2}.
Consequently, it is sufficient to show that

(4.8) lim_{δ→0} lim sup_{n→∞} max_k (1/n) log P{τ_{k+1} − σk ≤ δ, τk < T + δ/2} = −∞.

Let τ be any stopping time with τ ≤ T + δ/2, and define τ+ = inf{t > τ : r(X(t), X(τ)) ≥ ε} and τ− = sup{t ≤ τ : r(X(t), X(τ)) ∨ r(X(t−), X(τ)) ≥ ε}. Then for 0 < ε ≤ 1 and any λ > 0,

(4.9) P{τ+ − τ− ≤ δ} ≤ e^{−nλε^{2β}} E[ sup_{v≤δ∧τ} e^{nλ q^β(Xn(τ+∧(τ+δ)),Xn(τ)) q^β(Xn(τ),Xn(τ−v))} ].

By Lemma 4.3, the right side of (4.9) is bounded by

e^{−nλε^{2β}} Cn(3δ, 24a_β^4 λ, T + 3δ).

By (4.4), it follows that

lim_{δ→0} lim sup_{n→∞} sup_{τ∈S(T+δ/2)} (1/n) log P{τ+ − τ− ≤ δ} ≤ −λε^{2β}.

Since λ is arbitrary and, for τ = τk ∧ (T + δ/2),

P{τ_{k+1} − σk ≤ δ, τk < T + δ/2} ≤ P{τ+ − τ− ≤ δ},

(4.8) follows.

(d implies a): Let t1, t2, ... be some ordering of T0. For each λ > 0 and k = 1, 2, ..., there exist δk > 0 and compact Γk ⊂ E such that

P{w′(Xn, δk, k) > k^{−1}} ≤ e^{−nλk} and P{Xn(tk) ∉ Γk} ≤ e^{−nλk},
for n = 1, 2, .... Let Hk^1 = {x : w′(x, δk, k) ≤ k^{−1}} and Hk^2 = {x : x(tk) ∈ Γk}. Let Kλ be the closure of ∩_{k=1}^∞ (Hk^1 ∩ Hk^2). Then Kλ is compact and

P{Xn ∉ Kλ} ≤ 2 Σ_{k=1}^∞ e^{−nλk} ≤ 2e^{−nλ}/(1 − e^{−nλ}),

so lim sup_{n→∞} (1/n) log P{Xn ∉ Kλ} ≤ −λ. □
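The geometric series bound closing the proof can be verified directly; the values of n and λ below are arbitrary illustrative choices:

```python
import math

# Check of the closing bound: 2 * sum_{k>=1} e^{-n*lam*k} equals the
# geometric series value 2 e^{-n*lam} / (1 - e^{-n*lam}).
def tail_sum(n, lam, terms=200):
    return 2.0 * sum(math.exp(-n * lam * k) for k in range(1, terms + 1))

n, lam = 3, 0.5
closed_form = 2.0 * math.exp(-n * lam) / (1.0 - math.exp(-n * lam))
assert abs(tail_sum(n, lam) - closed_form) < 1e-12
```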
The next theorem, from Schied [106] (see Theorem A.1 in [30]), reduces the verification of the exponential tightness of {Xn} to that of {f(Xn)} for real-valued functions f. The weak convergence analogue is in Kurtz [73]. (See Ethier and Kurtz [36], Theorem 3.9.1. Jakubowski [62] gives simpler conditions on the collection of functions and a simpler proof.)

Theorem 4.4. A sequence {Xn} is exponentially tight in DE[0,∞) if and only if

a) for each T > 0 and a > 0, there exists a compact Ka,T ⊂ E such that

(4.10) lim sup_{n→∞} (1/n) log P(∃ t ≤ T such that Xn(t) ∉ Ka,T) ≤ −a;

b) there exists a family of functions F ⊂ C(E) that is closed under addition and separates points in E such that for each f ∈ F, {f(Xn)} is exponentially tight in DR[0,∞).

Remark 4.5. We will refer to Condition (a) as the exponential compact containment condition.

Proof. Necessity of the two conditions follows from the definition of exponential tightness and continuity of the mapping x ∈ DE[0,∞) → f∘x ∈ DR[0,∞) for f ∈ C(E). Exponential tightness of {f∘Xn} for all f ∈ F implies, by Lemma 3.6, that for f1, f2, ... ∈ F,

{Zn} ≡ {(f1∘Xn, f2∘Xn, ...)}

is exponentially tight in DR[0,∞) × DR[0,∞) × ···. Since F is closed under addition, exponential tightness in D_{R^∞}[0,∞) then follows. (See Ethier and Kurtz [36], Problem 3.11.22.)

For compact K ⊂ E, let {f_i^K} separate points in K. Then

qK(x, y) = Σ_{i=1}^∞ 2^{−i} (1 ∧ |f_i^K(x) − f_i^K(y)|)

is a metric equivalent to r on K. In particular, there exists a nondecreasing function ρK : [0,∞) → [0,∞) with lim_{u→0} ρK(u) = 0 such that r(x, y) ≤ ρK(qK(x, y)), x, y ∈ K. Let Zn^K = (f1^K∘Xn, f2^K∘Xn, ...), and let the metric on R^∞ be given by r_{R^∞}(u, v) = Σ_{i=1}^∞ 2^{−i} (1 ∧ |ui − vi|). It follows that

{w′(Xn, δ, T) > ε} ⊂ {Xn(t) ∉ K, some t ≤ T} ∪ {Xn(t) ∈ K, t ≤ T, ρK(w′(Zn^K, δ, T)) > ε},

and the exponential tightness of {Zn^K} then implies

lim_{δ→0} lim sup_{n→∞} (1/n) log P{w′(Xn, δ, T) > ε} ≤ lim sup_{n→∞} (1/n) log P{Xn(t) ∉ K, some t ≤ T}.

Since K is an arbitrary compact set, (4.10) then implies (4.5). □
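The construction of qK from a point-separating family can be sketched concretely; K = [0, 1] and f_i(x) = x^i are hypothetical choices (f_1 alone already separates points of this K):

```python
# Sketch of q_K(x, y) = sum_i 2^{-i} (1 ∧ |f_i(x) - f_i(y)|) from the proof,
# with the hypothetical separating family f_i(x) = x**i on K = [0, 1]: q_K
# vanishes exactly on the diagonal and satisfies the triangle inequality,
# so it is a metric on K.
def q_K(x, y, terms=40):
    return sum(2.0 ** -i * min(abs(x ** i - y ** i), 1.0) for i in range(1, terms + 1))

pts = [0.0, 0.3, 0.7, 1.0]
for a in pts:
    for b in pts:
        assert (q_K(a, b) == 0.0) == (a == b)                   # separates points
        for c in pts:
            # small tolerance allows for float rounding
            assert q_K(a, b) <= q_K(a, c) + q_K(c, b) + 1e-9    # triangle inequality
```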
The following localization result may simplify certain arguments, particularly for Markov processes, where if generators A and A^K satisfy Af(x) = A^K f(x), x ∈ K, f ∈ D(A) = D(A^K), then solutions of the corresponding martingale problems should agree in distribution up to the first time they leave K. (See Lemma 4.5.16 of Ethier and Kurtz [36].)

Lemma 4.6. Suppose {Xn} satisfies the exponential compact containment condition, and for a > 0, T > 0, let Ka,T satisfy (4.10). For x ∈ DE[0,∞), let τa,T(x) = inf{t : x(t) ∉ Ka,T}. Suppose that for each a > 0, T > 0, and n = 1, 2, ..., Xn^{a,T}(· ∧ τa,T(Xn^{a,T})) has the same distribution as Xn(· ∧ τa,T(Xn)), and that for each a > 0 and T > 0, {Xn^{a,T}} satisfies the large deviation principle with good rate function I^{a,T}. Then {Xn} satisfies the large deviation principle with good rate function given by

I(x) = lim_{δ→0} lim inf_{a,T→∞} inf_{y∈Bδ(x)} I^{a,T}(y) = lim_{δ→0} lim sup_{a,T→∞} inf_{y∈Bδ(x)} I^{a,T}(y).
Proof. Without loss of generality, we can assume that Xn(· ∧ τa,T(Xn)) = Xn^{a,T}(· ∧ τa,T(Xn^{a,T})). (See Lemma 5.15 of Ethier and Kurtz [36].) Setting Xn^{η,a} = Xn^{a,T} for η = e^{−T}, the lemma follows by Lemma 3.14. □

4.2. Large deviations under changes of timescale

Continuous time large deviation problems are frequently approached by discrete time approximations. The following lemma demonstrates the generality of this approach (cf. Dupuis and Ellis [35], Lemma 10.3.4).

Lemma 4.7. For Xn in DE[0,∞) and εn > 0, define Yn(t) = Xn([t/εn]εn). Then the following hold.

a) If {Xn} is exponentially tight, then {Yn} is exponentially tight.

b) If εn → 0 and {Xn} is exponentially tight, then for each f ∈ Cb(DE[0,∞)),

(4.11) lim sup_{n→∞} (1/n) log E[e^{nf(Xn)}] = lim sup_{n→∞} (1/n) log E[e^{nf(Yn)}]

and

(4.12) lim inf_{n→∞} (1/n) log E[e^{nf(Xn)}] = lim inf_{n→∞} (1/n) log E[e^{nf(Yn)}],

and hence the large deviation principle holds for {Xn} if and only if it holds for {Yn}; if it holds, {Xn} and {Yn} have the same rate function.

Proof. Clearly, if {Xn} satisfies the exponential compact containment condition, then so does {Yn}. By the necessity of part (b) of Theorem 4.1, there exist γn(δ, λ, T) satisfying (4.1). Consequently, {Yn} will satisfy (4.1) with γn(δ, λ, T) replaced by

γ̂n(δ, λ, T) = { 0, δ < εn; γn(2δ, λ, T), δ ≥ εn }.

Since γ̂n(δ, λ, T) clearly satisfies (4.2), exponential tightness for {Yn} follows.

Part (b) will follow from Lemma 3.13 provided we verify (3.13). But, defining x_ε by x_ε(t) = x([t/ε]ε), if K ⊂ DE[0,∞) is compact, then for the metric given by (1.5),

lim_{ε→0} sup_{x∈K} d(x, x_ε) = 0,

and (3.13) follows. (See Lemma 4.8 below.) □
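The key fact in the proof — that the grid-frozen path x_ε stays uniformly close to x — is easy to check for smooth paths. The Lipschitz example below is a hypothetical illustration (the lemma itself requires no smoothness):

```python
import math

# Sketch of the discretization x_eps(t) = x([t/eps] eps) from the proof of
# Lemma 4.7, for a hypothetical Lipschitz path x(t) = sin(t) + t with
# Lipschitz constant L = 2: sup_{t<=T} |x(t) - x_eps(t)| <= L * eps.
def discretize(x, eps):
    return lambda t: x(math.floor(t / eps) * eps)

x = lambda t: math.sin(t) + t
for eps in [0.1, 0.01]:
    x_eps = discretize(x, eps)
    grid = [i / 1000.0 for i in range(5000)]   # t in [0, 5)
    assert max(abs(x(t) - x_eps(t)) for t in grid) <= 2 * eps
```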
Lemma 4.7 can be generalized to other time changes. The following property of the Skorohod topology provides the basis for these extensions.

Lemma 4.8. Let ζ be a cadlag, nonnegative, nondecreasing function on [0,∞), and let η > 0 and T > 0 satisfy η ≥ sup_{t≤T} |ζ(t) − t|. Then for δ > 2η and the metric d given by (1.5),

(4.13) d(x, x∘ζ) ≤ log(δ/(δ − 2η)) ∨ (w′(x, δ, T + η) + η(T + η)/δ + e^{−T}).

Proof. Fix ε > 0. Let 0 = t0 < t1 < ··· < tm−1 < T + η ≤ tm satisfy

max_i sup_{s,t∈[ti−1,ti)} r(x(s), x(t)) ≤ w′(x, δ, T + η) + ε.

For each i, define τi = inf{t : ζ(t) ≥ ti}, and for 0 ≤ t ≤ tm, let λ(t) be the linear interpolation of the points (ti, τi). For t > tm, let λ′(t) = 1. Since |ti − τi| ≤ η, δ/(δ + 2η) ≤ λ′(t) ≤ δ/(δ − 2η), and γ(λ) ≤ log(δ/(δ − 2η)). For t ∨ λ(t) ≤ T + η, r(x∘ζ∘λ(t), x(t)) ≤ w′(x, δ, T + η) + ε, and it follows that

∫_0^T e^{−u} sup_{t≥0} r(x∘ζ(λ(t) ∧ u), x(t ∧ u)) du ≤ w′(x, δ, T + η) + ε + 2mη.

Note that 2mη bounds the Lebesgue measure of the set of u for which there exist t and ti satisfying

(4.14) λ(t) ∧ u < ti ≤ t ∧ u  or  t ∧ u < ti ≤ λ(t) ∧ u,

since either of the inequalities in (4.14) implies |u − ti| ≤ η. Since m ≤ (T + η)/δ and ε > 0 is arbitrary, (4.13) follows. □

Lemma 4.9. For each n, let Xn be a process in DE[0,∞), let Λn be a nonnegative, nondecreasing process independent of Xn, and define Yn(t) = Xn(Λn(t)). Suppose that for each t > 0 and η > 0,

(4.15) lim_{n→∞} (1/n) log P{sup_{s≤t} |s − Λn(s)| > η} = −∞.

a) If {Xn} is exponentially tight, then {Yn} is exponentially tight.

b) If {Xn} is exponentially tight, then for each f ∈ Cb(DE[0,∞)),

(4.16) lim sup_{n→∞} (1/n) log E[e^{nf(Xn)}] = lim sup_{n→∞} (1/n) log E[e^{nf(Yn)}]

and

(4.17) lim inf_{n→∞} (1/n) log E[e^{nf(Xn)}] = lim inf_{n→∞} (1/n) log E[e^{nf(Yn)}],

and hence the large deviation principle holds for {Xn} if and only if it holds for {Yn}; if it holds, {Xn} and {Yn} have the same rate function.

Remark 4.10. An obvious question is what happens to {Yn} if Λn does not converge as in (4.15). Russell [103] gives a systematic discussion of this question.

Proof. Lemma 4.8 implies (3.13), and the result follows by Lemma 3.13. □
4.3. Compactification

The requirement in Theorem 4.1 that {Xn(t)} be exponentially tight for each t ∈ T0 and the stronger requirement in Condition (a) of Theorem 4.4 are, of course, trivially satisfied if E is compact. As in the weak convergence setting, these conditions can sometimes be finessed in the noncompact case by working with some compactification Ê of E. The following result gives conditions under which the large deviation principle in D_Ê[0,∞) can be used to obtain the large deviation principle in DE[0,∞). The result also covers situations where the topology on Ê is weaker than the topology on E. For example, Ê could be the space of probability measures on a metric space with the weak topology, while E is the space of probability measures on the same metric space having a density with respect to some fixed reference measure ν, with the metric on E given by the L1(ν)-norm.

Theorem 4.11. Suppose (Ê, r̂) and (E, r) are metric spaces, E ⊂ Ê, B(E) ⊂ B(Ê), and if {zn} ⊂ E converges to z ∈ E under r, then it also converges under r̂ (that is, r generates a stronger topology on E than r̂). Let {Pn} be a sequence of probability measures on D_{(Ê,r̂)}[0,∞) such that for each n, Pn(D_{(E,r)}[0,∞)) = 1. (We include the metric in the notation for clarity. Note that D_{(E,r)}[0,∞) ⊂ D_{(E,r̂)}[0,∞), but the two spaces need not be equal.) Let

Γ̂ = D_{(Ê,r̂)}[0,∞) − D_{(E,r)}[0,∞).

Suppose that the large deviation principle for {Pn} holds in D_Ê[0,∞) with good rate function Î.

a) If the exponential compact containment condition holds in (E, r), then Î(x) = ∞ for all x ∈ Γ̂, and the large deviation principle holds for {Pn} on D_{(E,r)}[0,∞).

b) Suppose that the topology on E generated by r̂ is the same as the topology generated by r, so

Γ̂ = {x ∈ D_Ê[0,∞) : ∃ t ≥ 0 such that x(t) or x(t−) ∈ Ê − E}.

If Î(x) = ∞ for each x ∈ Γ̂, then the large deviation principle holds for {Pn} on DE[0,∞) with rate function I given by I(x) = Î(x) for x ∈ DE[0,∞). In particular, the exponential compact containment condition holds in E.

Proof. Note that if K is compact in (E, r), then it is also compact in (Ê, r̂) (but not conversely), and the topology on K given by r is the same as the topology on K given by r̂. In particular, D_{(K,r)}[0,∞) = D_{(K,r̂)}[0,∞). If 𝒦 ⊂ D_{(Ê,r̂)}[0,∞) is compact, then 𝒦_K = 𝒦 ∩ D_{(K,r)}[0,∞) is a compact subset of D_{(E,r)}[0,∞). (Apply the characterization of Skorohod convergence given in [36], Proposition 3.6.5, and the equivalence of r and r̂ on K.)

Under the conditions of Part (a), for a, η > 0 and e^{−T} < η, let K = Ka,T be a compact subset of E (under r) satisfying (4.10), and let 𝒦 be a compact subset of D_{(Ê,r̂)}[0,∞) satisfying

lim sup_{n→∞} (1/n) log P{Xn ∉ 𝒦} ≤ −a.
Let 𝒦_K = 𝒦 ∩ D_{(K,r)}[0,∞). Since 𝒦_K^ε ⊃ 𝒦 ∩ (D_{(K,r)}[0,∞))^ε,

lim sup_{n→∞} (1/n) log P{Xn ∉ 𝒦_K^ε}
 ≤ max{ lim sup_{n→∞} (1/n) log P{Xn ∉ 𝒦}, lim sup_{n→∞} (1/n) log P{Xn ∉ (D_{(K,r)}[0,∞))^ε} }
 ≤ −a,

where the "fattening" is with respect to the metric (1.5) given by r. It follows that {Xn} is exponentially tight, and Part (a) follows by Lemma 3.12.

For Part (b), suppose that Î(x) = ∞ for each x ∈ Γ̂. If A is an open subset of DE[0,∞), then by the equivalence of the topologies, there exists an open subset Â of D_Ê[0,∞) such that A = Â ∩ DE[0,∞). Consequently,

lim inf_{n→∞} (1/n) log Pn(A)
 = lim inf_{n→∞} (1/n) log Pn(Â ∩ DE[0,∞))
 = lim inf_{n→∞} (1/n) log Pn(Â)
 ≥ − inf_{x∈Â} Î(x)
 = − inf_{x∈A} Î(x),

where the last equality follows from the fact that x ∈ Â − A implies Î(x) = ∞. Similarly, if B is a closed subset of DE[0,∞), then there exists a closed subset B̂ of D_Ê[0,∞) such that B = B̂ ∩ DE[0,∞). Consequently,

lim sup_{n→∞} (1/n) log Pn(B)
 = lim sup_{n→∞} (1/n) log Pn(B̂ ∩ DE[0,∞))
 = lim sup_{n→∞} (1/n) log Pn(B̂)
 ≤ − inf_{x∈B̂} Î(x)
 = − inf_{x∈B} Î(x),

and Part (b) follows. □
4.4. Large deviations in the compact uniform topology

Definition 4.12 (C-exponential tightness). A sequence of stochastic processes {Xn} that is exponentially tight in DE[0,∞) is C-exponentially tight if for each η > 0 and T > 0,

(4.18) lim sup_{n→∞} (1/n) log P(sup_{s≤T} r(Xn(s), Xn(s−)) ≥ η) = −∞.

The weak convergence analogue of this condition is simply that

sup_{s≤T} r(Xn(s), Xn(s−)) ⇒ 0

for each T > 0, which implies that any weak limit point of {Xn} must be continuous. The definition of C-exponential tightness given here is not the same as that given
66
4. LDP FOR PROCESS
in Puhalskii [98] (Definition 3.2.2 of [100]); however, the following theorem shows that the two definitions are equivalent. Theorem 4.13. An exponentially tight sequence {Xn } in DE [0, ∞) is Cexponentially tight if and only if each rate function I that gives the large deviation principle for a subsequence {Xn(k) }, satisfies I(x) = ∞ for each x ∈ DE [0, ∞) such that x ∈ / CE [0, ∞). Proof. Let Aη,T = {x ∈ DE [0, ∞) : sups≤T r(x(s), x(s−)) ≥ η} and Bη,T = {x ∈ DE [0, ∞) : sups η}. Then Aη,T is closed and Bη,T is open. If {Xn } is exponentially tight and (4.18) holds, then for any subsequence satisfying the large deviation principle with a rate function I (see Theorem 3.7), we have 1 − inf I(x) ≤ lim inf log P (Xn(k) ∈ Bη,T ) x∈Bη,T k→∞ n(k) 1 log P (Xn(k) ∈ Aη,T ) = −∞. ≤ lim sup k→∞ n(k) Therefore, I(x) = ∞ for each x ∈ Bη,T . By the arbitrariness of η and T , I(x) = ∞ for every x ∈ DE [0, ∞) − CE [0, ∞). Now, assume that {Xn } is exponentially tight but that (4.18) does not hold. Then there exists a subsequence n(k) such that 1 (4.19) lim inf log P (Xn(k) ∈ Aη,T ) > −∞, k→∞ n(k) and by exponential tightness a subsubsequence along which the large deviation principle holds with a rate function I. It follows that 1 − inf I(x) ≥ lim inf log P (Xn(k) ∈ Aη,T ) > −∞, x∈Aη,T k→∞ n(k) that is, there exists x ∈ Aη,T ⊂ DE [0, ∞) − CE [0, ∞) such that I(x) < ∞.
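To make the definition concrete, here is a minimal illustration (our example, not from the text) comparing the normalized log-probabilities in (4.18) for two jump processes: the law-of-large-numbers scaled Poisson process $X_n(t)=n^{-1}N(nt)$, all of whose jumps have size $1/n$, and a unit-rate Poisson process treated as a constant sequence, whose jumps all have size $1$.

```python
import math

def rate_scaled_poisson(n, eta, T):
    """(1/n) log P(sup_{s<=T} jump of X_n >= eta) for X_n(t) = N(nt)/n.

    Every jump of X_n has size exactly 1/n, so for n > 1/eta the event is
    empty; otherwise a jump of size >= eta occurs iff any jump occurs by T.
    """
    if 1.0 / n < eta:
        return -math.inf
    return (1.0 / n) * math.log(1.0 - math.exp(-n * T))

def rate_unit_jumps(n, T):
    """Same quantity for a unit-rate Poisson process (jumps of size 1);
    the probability 1 - exp(-T) does not depend on n."""
    return (1.0 / n) * math.log(1.0 - math.exp(-T))

# C-exponentially tight: the quantity in (4.18) is already -infinity.
assert rate_scaled_poisson(100, eta=0.5, T=1.0) == -math.inf
# Not C-exponentially tight: (1/n) log P -> 0 rather than -infinity.
assert abs(rate_unit_jumps(10**6, T=1.0)) < 1e-5
```

The first sequence satisfies (4.18) trivially (the probability is exactly zero once $n>1/\eta$), while for the second the normalized log-probability tends to $0$, so (4.18) fails even though the sequence is exponentially tight.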
With reference to (1.5), for $x,y\in D_E[0,\infty)$, let
\[
d_u(x,y)=\int_0^{\infty}e^{-u}\sup_{t\le u}q(x(t),y(t))\,du.
\]
Then $d_u$ is a metric corresponding to the compact uniform topology. Let $\mathcal S_E$ be the Borel $\sigma$-algebra of $(D_E[0,\infty),d)$. Note that $f_x(y)=d_u(x,y)$ defines an $\mathcal S_E$-measurable function, and hence $B^u_{\epsilon}(x)=\{y\in D_E[0,\infty):d_u(x,y)<\epsilon\}$ and $\bar B^u_{\epsilon}(x)=\{y\in D_E[0,\infty):d_u(x,y)\le\epsilon\}$ are in $\mathcal S_E$. On $C_E[0,\infty)$, $d_u$ and $d$ are equivalent metrics, and in the theory of weak convergence, if $X_n\Rightarrow X$ in the Skorohod topology and $X$ is continuous, then $X_n\Rightarrow X$ in the compact uniform topology in the sense that
\[
\lim_{n\to\infty}E[f(X_n)]=E[f(X)]
\]
for every bounded function $f$ that is $d_u$-continuous and $\mathcal S_E$-measurable. C-exponential tightness is the large deviation analogue of the limit being continuous in weak convergence.

Theorem 4.14. Suppose that $\{P_n\}$ is a sequence of probability measures on $D_E[0,\infty)$ that is C-exponentially tight and that the large deviation principle holds with rate function $I$ (in the Skorohod topology). Then the large deviation principle holds in the compact uniform topology with the same rate function.
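Before turning to the proof, a small numerical aside (our illustration, not from the text) showing how $d_u$ and the Skorohod metric genuinely differ off $C_E[0,\infty)$: for the indicator paths $x=1_{[1,\infty)}$ and $y=1_{[1+\delta,\infty)}$, the running supremum $\sup_{t\le u}q(x(t),y(t))$ is $0$ for $u<1$ and $1$ for $u\ge1$, so $d_u(x,y)=\int_1^{\infty}e^{-u}du=e^{-1}$ for every $\delta>0$, even though the Skorohod distance tends to $0$ as $\delta\to0$. A direct numerical evaluation of the defining integral:

```python
import math
import numpy as np

# Evaluate d_u(x, y) = int_0^inf e^{-u} sup_{t<=u} q(x(t), y(t)) du
# for x = 1_{[1,inf)} and y = 1_{[1+delta,inf)}, with q = r ^ 1.
delta = 0.25
ts = np.linspace(0.0, 50.0, 500001)            # common t/u grid, step 1e-4
x = (ts >= 1.0).astype(float)
y = (ts >= 1.0 + delta).astype(float)
q = np.minimum(np.abs(x - y), 1.0)
running_sup = np.maximum.accumulate(q)          # sup_{t<=u} q(x(t), y(t))
integrand = np.exp(-ts) * running_sup
# trapezoid rule (tail beyond u = 50 is below e^{-50}, negligible)
d_u = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts)))

assert abs(d_u - math.exp(-1.0)) < 1e-3         # e^{-1}, independent of delta
```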
Proof. The result is an immediate consequence of the contraction principle, Lemma 3.11. Take $S=(D_E[0,\infty),d)$, $S'=(D_E[0,\infty),d_u)$, and $F(x)=x$. Then $F$ is continuous at each point in $C_E[0,\infty)$, and by Theorem 4.13, $I(x)=\infty$ if $x\notin C_E[0,\infty)$. □

4.5. Exponential tightness for solutions of martingale problems

Definition 4.15. Let $A$ be the generator of a Markov process, and let $H_{\dagger}$ be the collection of $(f,g)\in M(E)\times M(E)$ such that for every solution $X$ of the martingale problem for $A$,
(4.20)
\[
Z_{f,g}(t)=e^{f(X(t))-f(X(0))-\int_0^tg(X(s))ds}
\]
is a cadlag supermartingale, or, in the discrete time case with $X(t)=Y_{[t/\epsilon]}$,
(4.21)
\[
Z_{f,g}(t)=e^{f(X(t))-f(X(0))-\int_0^{\epsilon[t/\epsilon]}g(X(s))ds}
\]
is a supermartingale.

Of course, $H$ defined in (1.7) is a subset of $H_{\dagger}$, and, for example, if $A$ gives a diffusion and $L$ is the second order differential operator such that $Af=Lf$ for $f\in\mathcal D(A)$, then for $f\in C^2$ and $h=e^{-f}Le^{f}$, (4.20) is a local martingale by Itô's formula. Since every nonnegative local martingale is a supermartingale, $(f,e^{-f}Le^{f})\in H_{\dagger}$. More generally, if $f_k\in\mathcal D(H)$, $\inf_k\inf_xf_k(x)>-\infty$, $\sup_k\sup_xHf_k(x)<\infty$, and for each $x\in E$, $(f_k(x),Hf_k(x))\to(f(x),h(x))$, then (4.20) is a supermartingale.

For $n=1,2,\ldots$, let $E_n$ be a metric space and let $\eta_n:E_n\to E$ be a Borel measurable mapping into a complete, separable metric space $E$. Let $A_n\subset B(E_n)\times B(E_n)$ and
\[
H_n=\{(f,\tfrac1ne^{-nf}g):f\in B(E_n),\ (e^{nf},g)\in A_n\}.
\]
Let $H_{\dagger,n}$ be the collection of $(f,g)\in M(E_n)\times M(E_n)$ such that
\[
e^{nf(Y_n(t))-nf(Y_n(0))-\int_0^tng(Y_n(s))ds}
\]
is a supermartingale for every solution $Y_n$ of the martingale problem for $A_n$. Of course, $H_n\subset H_{\dagger,n}$.

Similarly, in discrete time, let $T_n$ be a transition operator on $B(E_n)$, $\epsilon_n>0$, and
\[
H_nf=\frac1{n\epsilon_n}\log e^{-nf}T_ne^{nf},\quad f\in B(E_n).
\]
Then $H_{\dagger,n}$ is the collection of $(f,g)\in B(E_n)\times B(E_n)$ such that
\[
e^{nf(Y_n(t))-nf(Y_n(0))-\int_0^{\epsilon_n[t/\epsilon_n]}ng(Y_n(s))ds}
\]
is a supermartingale for every Markov chain $Y_n$ with time step $\epsilon_n$ satisfying $E[f(Y_n(t))|Y_n(0)=y]=T_n^{[t/\epsilon_n]}f(y)$. In other words,
\[
e^{-n\epsilon_ng(x)}T_ne^{nf}(x)\le e^{nf(x)},\quad x\in E_n.
\]

As above, $q=r\wedge1$.

Definition 4.16. $D\subset C_b(E)$ approximates the metric $q$ if for each compact $K\subset E$ and $z\in K$, there exist $f_n\in D$ such that
\[
\lim_{n\to\infty}\sup_{x\in K}|f_n(x)-q(x,z)|=0.
\]
We want to give conditions implying exponential tightness for the sequence $X_n=\eta_n(Y_n)$. For $f\in B(E)$, define $\eta_nf=f\circ\eta_n$. We assume that $X_n$ is cadlag.

Corollary 4.17. Let $Y_n$, $X_n$, $H_n$, and $H_{\dagger,n}$ be as above in either the continuous or discrete time case, and in the discrete time case, assume that $\epsilon_n\to0$. Let $F\subset C_b(E)$ and $S\subset\mathbb R$. Suppose that

a) $\{X_n\}$ satisfies the exponential compact containment condition.
b) Either $F$ is closed under addition and separates points in $E$ and $S=\mathbb R$, or $F$ approximates the metric $q$ and $S=(0,\infty)$.
c) For each $\lambda\in S$ and $f\in F$, there exists $\{(f_n,g_n)\}$ such that $(\lambda f_n,g_n)\in H_{\dagger,n}$, $\sup_n\|f_n\|<\infty$, and, for each compact $K\subset E$,
(4.22)
\[
\lim_{n\to\infty}\sup_{x\in\eta_n^{-1}(K)}|f_n(x)-\eta_nf(x)|=0
\]
and
(4.23)
\[
\sup_n\sup_{x\in\eta_n^{-1}(K)}g_n(x)=C_{\lambda}(f,K)<\infty.
\]

Then $\{X_n\}$ is exponentially tight.

Remark 4.18. In many of the examples mentioned in the Introduction, $E_n=E$, $\eta_n$ is the identity map, and it suffices to let $f_n=f$. In others, including Examples 1.9 and 1.12, the more general condition is needed.

Proof. First, assume that $F$ is closed under addition and separates points in $E$ and $S=\mathbb R$. We consider the continuous time case; the proof for the discrete time case is essentially the same. Since for $\lambda>0$,
\[
E[e^{n\lambda|f(X_n(t+s))-f(X_n(t))|}|\mathcal F^n_t]\le E[e^{n\lambda(f(X_n(t+s))-f(X_n(t)))}|\mathcal F^n_t]+E[e^{-n\lambda(f(X_n(t+s))-f(X_n(t)))}|\mathcal F^n_t],
\]
by Theorems 4.1 and 4.4, it is enough to show that for each $\lambda\in\mathbb R$, $T>0$, and $n$, there exists $\gamma_n(s,\lambda,T)$, increasing in $s$ and satisfying
\[
E[\exp\{n\lambda f(X_n(t+s))-n\lambda f(X_n(t))\}|\mathcal F^n_t]\le E[\exp\{\gamma_n(s,\lambda,T)\}|\mathcal F^n_t]
\]
for $0\le t\le T$, such that
\[
\lim_{s\to0}\limsup_{n\to\infty}\frac1n\log E[e^{\gamma_n(s,\lambda,T)}]=0.
\]
For compact $K\subset E$, let $\Gamma_n(K,T)=\{X_n(t)\in K,\ t\le T\}=\{Y_n(t)\in\eta_n^{-1}(K),\ t\le T\}$. If $f\in F$ and $(\lambda f_n,g_n)\in H_{\dagger,n}$ satisfies Condition (c), then
(4.24)
\[
E[I_{\Gamma_n(K,T)}e^{n\lambda f_n(Y_n(t+s))-n\lambda f_n(Y_n(t))}|\mathcal F^n_t]
\le e^{nC_{\lambda}(f,K)s}E[e^{n\lambda f_n(Y_n(t+s))-n\lambda f_n(Y_n(t))-\int_0^sng_n(Y_n(t+u))du}|\mathcal F^n_t]
\le e^{nC_{\lambda}(f,K)s}.
\]
By assumption, there exists a compact set $K=K(\lambda,f,T)\subset E$ such that
\[
\limsup_{n\to\infty}\frac1n\log P(\exists t\le T,\ X_n(t)\notin K)\le-(1+2\lambda\|f\|).
\]
Therefore, for $t,s\ge0$, $t+s\le T$,
\begin{align*}
E[e^{n\lambda f(X_n(t+s))-n\lambda f(X_n(t))}|\mathcal F^n_t]
&\le E[e^{n\lambda f(X_n(t+s))-n\lambda f(X_n(t))}I_{\Gamma_n(K,T)}|\mathcal F^n_t]
+E[e^{n\lambda f(X_n(t+s))-n\lambda f(X_n(t))}I_{\Gamma_n(K,T)^c}|\mathcal F^n_t]\\
&\le e^{2n\lambda\sup_{y\in\eta_n^{-1}(K)}|\eta_nf(y)-f_n(y)|}E[I_{\Gamma_n(K,T)}e^{n\lambda f_n(Y_n(t+s))-n\lambda f_n(Y_n(t))}|\mathcal F^n_t]
+e^{2n\lambda\|f\|}P\{\Gamma_n(K,T)^c|\mathcal F^n_t\}\\
&\le e^{2n\lambda\sup_{y\in\eta_n^{-1}(K)}|f_n(y)-\eta_nf(y)|+nC_{\lambda}(f,K)s}+e^{2n\lambda\|f\|}P\{\Gamma_n(K,T)^c|\mathcal F^n_t\}.
\end{align*}
Taking
\[
\gamma_n(s,\lambda,T)=\log\{e^{2n\lambda\sup_{y\in\eta_n^{-1}(K)}|f_n(y)-\eta_nf(y)|+nC_{\lambda}(f,K)s}+e^{2n\lambda\|f\|}I_{\Gamma_n(K,T)^c}\},
\]
then
\[
\limsup_{n\to\infty}\frac1n\log E[e^{\gamma_n(s,\lambda,T)}]
=\max\{C_{\lambda}(f,K)s,\ 2\lambda\|f\|+\limsup_{n\to\infty}\frac1n\log P\{\Gamma_n(K,T)^c\}\}
=C_{\lambda}(f,K)s.
\]
Hence, by Theorems 4.1 and 4.4, the conclusion follows.

Now assume that $F$ approximates the metric $q$ and $S=(0,\infty)$. For $\lambda\in S$, select compact $K\subset E$ so that
\[
\limsup_{n\to\infty}\frac1n\log P(\exists t\le T,\ X_n(t)\notin K)\le-(1+\lambda).
\]
For each $\epsilon>0$, there exist $N_{\epsilon}>0$, $\{\xi_1,\ldots,\xi_{N_{\epsilon}}\}\subset E$, and $\{f_1^{\epsilon},\ldots,f_{N_{\epsilon}}^{\epsilon}\}\subset F$ such that $\min_{k=1,\ldots,N_{\epsilon}}q(x,\xi_k)<\epsilon$ for all $x\in K$, and $\max_{1\le k\le N_{\epsilon}}\sup_{z\in K}|q(z,\xi_k)-f_k^{\epsilon}(z)|\le\epsilon$. For $y\in K$, there exists $\xi_k\in\{\xi_1,\ldots,\xi_{N_{\epsilon}}\}$ such that $q(y,\xi_k)<\epsilon$. Then, for $x\in K$,
\[
q(x,y)\le q(x,\xi_k)+q(y,\xi_k)\le q(x,\xi_k)+\epsilon\le q(x,\xi_k)-q(y,\xi_k)+2\epsilon,
\]
and $q(x,y)\le f_k^{\epsilon}(x)-f_k^{\epsilon}(y)+4\epsilon$. It follows that
\[
e^{n\lambda q(x,y)}\le e^{4n\lambda\epsilon}\sum_{k=1}^{N_{\epsilon}}e^{n\lambda f_k^{\epsilon}(x)-n\lambda f_k^{\epsilon}(y)}.
\]
By (4.22) and (4.23), there exist $(\lambda f^{\epsilon}_{n,k},g^{\epsilon}_{n,k})\in H_{\dagger,n}$ such that for $n$ sufficiently large,
\[
\max_k\sup_{y\in\eta_n^{-1}(K)}|f^{\epsilon}_{n,k}(y)-\eta_nf_k^{\epsilon}(y)|\le\epsilon
\]
and $\max_kC_{\lambda}(f_k^{\epsilon},K)<\infty$, where $C_{\lambda}(f_k^{\epsilon},K)\equiv\sup_n\sup_{y\in\eta_n^{-1}(K)}g^{\epsilon}_{n,k}(y)$, and hence,
\begin{align*}
I_{\Gamma_n(K,T)}e^{n\lambda q(X_n(t+s),X_n(t))}
&\le I_{\Gamma_n(K,T)}e^{6n\lambda\epsilon}\sum_{k=1}^{N_{\epsilon}}e^{n\lambda f^{\epsilon}_{n,k}(Y_n(t+s))-n\lambda f^{\epsilon}_{n,k}(Y_n(t))}\\
&\le I_{\Gamma_n(K,T)}e^{6n\lambda\epsilon+sn\vee_kC_{\lambda}(f_k^{\epsilon},K)}\sum_{k=1}^{N_{\epsilon}}e^{n\lambda f^{\epsilon}_{n,k}(Y_n(t+s))-n\lambda f^{\epsilon}_{n,k}(Y_n(t))-\int_t^{t+s}ng^{\epsilon}_{n,k}(Y_n(r))dr}\\
&\le e^{6n\lambda\epsilon+sn\vee_kC_{\lambda}(f_k^{\epsilon},K)}\sum_{k=1}^{N_{\epsilon}}e^{n\lambda f^{\epsilon}_{n,k}(Y_n(t+s))-n\lambda f^{\epsilon}_{n,k}(Y_n(t))-\int_t^{t+s}ng^{\epsilon}_{n,k}(Y_n(r))dr}.
\end{align*}
Therefore,
\begin{align*}
E[e^{n\lambda q(X_n(t+s),X_n(t))}|\mathcal F^n_t]
&\le E[I_{\Gamma_n(K,T)}e^{n\lambda q(X_n(t+s),X_n(t))}|\mathcal F^n_t]
+E[I_{\Gamma_n(K,T)^c}e^{n\lambda q(X_n(t+s),X_n(t))}|\mathcal F^n_t]\\
&\le e^{6n\lambda\epsilon+sn\vee_{k=1}^{N_{\epsilon}}C_{\lambda}(f_k^{\epsilon},K)}+e^{n\lambda}P\{\Gamma_n(K,T)^c|\mathcal F^n_t\}.
\end{align*}
For $\delta>0$, let $\epsilon_0(\delta)=\inf\{\epsilon>0:\vee_{k=1}^{N_{\epsilon}}C_{\lambda}(f_k^{\epsilon},K)\le\delta^{-1/2}\}$. Note that $\epsilon_0(\delta)\to0$ as $\delta\to0$, and select $\epsilon(\delta)$ such that $\epsilon(\delta)<\epsilon_0(\delta)+\delta$ and $\vee_{k=1}^{N_{\epsilon(\delta)}}C_{\lambda}(f_k^{\epsilon(\delta)},K)\le\delta^{-1/2}$. Finally, define
\[
\gamma_n(\delta,\lambda,T)=\log\{e^{6\epsilon(\delta)n\lambda+\delta n\vee_{k=1}^{N_{\epsilon(\delta)}}C_{\lambda}(f_k^{\epsilon(\delta)},K)}+e^{n\lambda}I_{\Gamma_n(K,T)^c}\},
\]
and observe that
\begin{align*}
\limsup_{n\to\infty}\frac1n\log E[e^{\gamma_n(\delta,\lambda,T)}]
&=\max\{6\epsilon(\delta)\lambda+\delta\vee_{k=1}^{N_{\epsilon(\delta)}}C_{\lambda}(f_k^{\epsilon(\delta)},K),\ \lambda+\limsup_{n\to\infty}\frac1n\log P(\Gamma_n(K,T)^c)\}\\
&=6\epsilon(\delta)\lambda+\delta\vee_{k=1}^{N_{\epsilon(\delta)}}C_{\lambda}(f_k^{\epsilon(\delta)},K).
\end{align*}
Since the right side goes to zero as $\delta\to0$, the conclusion follows by Theorem 4.1. □

The following technical modification of Corollary 4.17 is occasionally useful. The proof is essentially the same.

Corollary 4.19. Let $Y_n$, $X_n$, $H_n$, and $H_{\dagger,n}$ be as above in either the continuous or discrete time case, and in the discrete time case, assume that $\epsilon_n\to0$. Let $F\subset C_b(E)$ and $S\subset\mathbb R$. Let $Q$ be an index set, and for $n=1,2,\ldots$ and $q\in Q$, let $K^q_n$ be a measurable subset of $E_n$. Suppose that

a) For each $q\in Q$, $\cup_n\eta_n(K^q_n)$ is relatively compact in $E$.
b) For each $a>0$ and $T>0$, there exists $q(a,T)\in Q$ such that
(4.25)
\[
\limsup_{n\to\infty}\frac1n\log P\{\exists t\le T\ni Y_n(t)\notin K^{q(a,T)}_n\}\le-a.
\]
(In particular, {Xn } satisfies the exponential compact containment condition.)
c) Either $F$ is closed under addition and separates points in $E$ and $S=\mathbb R$, or $F$ approximates the metric $q$ and $S=(0,\infty)$.
d) For each $\lambda\in S$ and $f\in F$, there exists $\{(f_n,g_n)\}$ such that $(\lambda f_n,g_n)\in H_{\dagger,n}$, $\sup_n\|f_n\|<\infty$, and, for each $q\in Q$,
(4.26)
\[
\lim_{n\to\infty}\sup_{x\in K^q_n}|f_n(x)-\eta_nf(x)|=0
\]
and
(4.27)
\[
\sup_n\sup_{x\in K^q_n}g_n(x)=C_{\lambda}(f,K)<\infty.
\]

Then $\{X_n\}$ is exponentially tight.

4.6. Verifying compact containment

The usual approach to verifying a property like the exponential compact containment condition (Condition (a) of Theorem 4.4 and Corollary 4.17) is to use a Lyapunov function technique.

Lemma 4.20. Let $K$ be compact, and let $G\supset K$ be open. Let $A$ be the generator for a Markov process, and let $H_{\dagger}$ be given by Definition 4.15. Let $(f,g)\in H_{\dagger}$, and define
\[
\beta\equiv\inf_{x\in G^c}f(x)-\sup_{x\in K}f(x),\qquad\gamma\equiv\sup_{x\in G}g(x)\vee0.
\]
Let $X$ be a solution of the martingale problem for $A$, and suppose that $Z_{f,g}$ defined in (4.20) is right continuous. Then for $T>0$,
(4.28)
\[
P\{X(0)\in K,\ X(t)\notin G\text{ some }t\le T\}\le P\{X(0)\in K\}e^{-\beta+T\gamma}.
\]
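As a numerical sanity check on (4.28) (our illustration; the choice of standard Brownian motion and of the particular Lyapunov function is ours, not from the text), take $X$ a standard Brownian motion and $f(x)=\log(1+x^2)$. By Itô's formula, $Z_{f,g}$ with $g=\frac12f''+\frac12(f')^2=1/(1+x^2)$ is a nonnegative local martingale, hence a supermartingale, so $(f,g)\in H_{\dagger}$. With $K=\{0\}$ and $G=(-d,d)$, $\beta=\log(1+d^2)$ and $\gamma=1$. A crude Euler simulation confirms that the exit probability stays below the bound:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# f(x) = log(1 + x^2):  g = f''/2 + (f')^2/2 = 1/(1 + x^2), so gamma = 1,
# and with K = {0}, G = (-d, d):  beta = log(1 + d^2) - f(0) = log(1 + d^2).
d, T = 4.0, 0.5
beta = math.log(1.0 + d * d)
gamma = 1.0
bound = math.exp(-beta + T * gamma)        # right side of (4.28), X(0) = 0

# Euler paths of standard Brownian motion started at 0
n_paths, n_steps = 5000, 500
dt = T / n_steps
incr = math.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(incr, axis=1)
p_exit = float(np.mean(np.abs(paths).max(axis=1) >= d))

assert p_exit <= bound    # true exit prob ~ 1e-8 here; bound ~ 0.097
```

The bound is far from sharp in this regime (the true exit probability is astronomically small), but (4.28) is an off-the-shelf estimate requiring only the supermartingale property.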
Remark 4.21. Suppose, as in Corollary 4.17, $Y$ takes values in $E_0$, $\eta:E_0\to E$, $X=\eta(Y)$, and $(f,g)\in H_{\dagger}\subset M(E_0)\times M(E_0)$. If $K\subset E$ is compact, $G\supset K$ is open, and
\[
\beta\equiv\inf_{y\in\eta^{-1}(G)^c}f(y)-\sup_{y\in\eta^{-1}(K)}f(y),\qquad\gamma\equiv\sup_{y\in\eta^{-1}(G)}g(y),
\]
then for $T>0$,
(4.29)
\[
P\{X(0)\in K,\ X(t)\notin G\text{ some }t\le T\}\le P\{X(0)\in K\}e^{-\beta+T\gamma}.
\]

Proof. Let $\tau=I_{\{X(0)\in K\}}\inf\{t:X(t)\notin G\}$. Then, since $Z_{f,g}$, given in (4.20), is a right continuous supermartingale, by the optional sampling theorem,
\[
P\{X(0)\in K,\ X(t)\notin G\text{ some }t\le T\}e^{\beta-T\gamma}+P\{X(0)\notin K\}\le E[Z_{f,g}(\tau\wedge T)]\le1,
\]
and (4.28) follows. □
The following lemma is an immediate consequence of the previous lemma.

Lemma 4.22. Let $X_n$ be a solution of the martingale problem for $A_n$. Let $K$ be compact, and let $G\supset K$ be open. Let $(f_n,g_n)\in H_{\dagger,n}=\{(f,g):(nf,ng)\in H_{\dagger}\}$, and assume that the corresponding supermartingale is right continuous. Define
\[
\beta(K,G)\equiv\liminf_{n\to\infty}\Bigl(\inf_{x\in G^c}f_n(x)-\sup_{x\in K}f_n(x)\Bigr),\qquad
\gamma(G)\equiv\limsup_{n\to\infty}\sup_{x\in G}g_n(x).
\]
Then
\[
\limsup_{n\to\infty}\frac1n\log P\{X_n(t)\notin G\text{ some }t\le T\}
\le\max\{-\beta(K,G)+T\gamma(G),\ \limsup_{n\to\infty}\frac1n\log P\{X_n(0)\notin K\}\}.
\]
If $E$ is locally compact, Lemma 4.22 can be applied to open $G$ with compact closure to verify the exponential compact containment condition.

Example 4.23. Suppose in Example 1.5 there exists $C>0$ such that
(4.30)
\[
\sum_ix_ib_i(x)\le C(1+|x|^2),\qquad a_{ij}(x)\le C(1+|x|^2),
\]
and there exists $\alpha>0$ such that
(4.31)
\[
C(\eta,\alpha)\equiv\sup_{x\in\mathbb R^d}\int_{\mathbb R^d}\Bigl(\frac{|z|}{1+|x|}\Bigr)^2e^{\alpha\frac{|z|}{1+|x|}}\eta(x,dz)<\infty.
\]
Then for $\delta>0$ satisfying $6\delta<\alpha$, let $f_n(x)=f(x)=\delta\log(1+|x|^2)$ and
\[
g_n(x)=\frac1{2n}\sum_{ij}a_{ij}(x)\partial_i\partial_jf(x)+\frac12\sum_{ij}a_{ij}(x)\partial_if(x)\partial_jf(x)+\sum_ib_i(x)\partial_if(x)
+\int_{\mathbb R^d}\Bigl(e^{n\delta\log\bigl(1+\frac{n^{-1}2x\cdot z+n^{-2}|z|^2}{1+|x|^2}\bigr)}-1-\frac{2\delta x\cdot z}{1+|x|^2}\Bigr)\eta(x,dz).
\]
The derivative terms are uniformly bounded in $x$ and $n$ by a constant depending on $C$ in (4.30) and $\delta$. The integral term satisfies
\begin{align*}
\Bigl|\int_{\mathbb R^d}\Bigl(e^{n\delta\log\bigl(1+\frac{n^{-1}2x\cdot z+n^{-2}|z|^2}{1+|x|^2}\bigr)}-1-\frac{2\delta x\cdot z}{1+|x|^2}\Bigr)\eta(x,dz)\Bigr|
&\le\int_{\mathbb R^d}\Bigl|e^{n\delta\log\bigl(1+\frac{n^{-1}2x\cdot z+n^{-2}|z|^2}{1+|x|^2}\bigr)}-e^{\frac{2\delta x\cdot z}{1+|x|^2}}\Bigr|\eta(x,dz)\\
&\quad+\int_{\mathbb R^d}\Bigl|e^{\frac{2\delta x\cdot z}{1+|x|^2}}-1-\frac{2\delta x\cdot z}{1+|x|^2}\Bigr|\eta(x,dz).
\end{align*}
The second term on the right is bounded by $2\delta^2C(\eta,4\delta)<\infty$. Setting $\Gamma_{n,x}=\{z:|z|>n(1+|x|)\}$, the first term on the right is bounded by
\begin{align*}
&\int_{\{z:|z|\le n(1+|x|)\}}\frac{\delta|z|^2}{n(1+|x|^2)}e^{\delta\frac{2x\cdot z+n^{-1}|z|^2}{1+|x|^2}}\eta(x,dz)
+\int_{\{z:|z|>n(1+|x|)\}}\Bigl(1+\frac{n^{-1}2x\cdot z+n^{-2}|z|^2}{1+|x|^2}\Bigr)^{n\delta}\eta(x,dz)\\
&\quad\le\frac{2\delta}nC(\eta,6\delta)+\int_{\Gamma_{n,x}}\Bigl(1+\frac4n\frac{|z|}{1+|x|}+\frac1{n^2}\Bigl(\frac{|z|}{1+|x|}\Bigr)^2\Bigr)^{n\delta}\eta(x,dz)\\
&\quad\le\frac{2\delta}nC(\eta,6\delta)+\int_{\Gamma_{n,x}}e^{4\delta\frac{|z|}{1+|x|}}\eta(x,dz)\\
&\quad\le\frac{2\delta}nC(\eta,6\delta)+\frac1{n^2}C(\eta,4\delta),
\end{align*}
where $C(\eta,4\delta)\le C(\eta,6\delta)<\infty$.
It follows that $(f_n,g_n)\in H_{\dagger,n}$ and for any bounded $G$,
\[
\gamma(G)\le\limsup_{n\to\infty}\sup_{x\in\mathbb R^d}g_n(x)<\infty.
\]
Suppose $K=\{x:|x|\le c\}$. For $\beta>0$, let $d$ satisfy $\delta\log(1+d^2)=\beta+\delta\log(1+c^2)$. Then for $G=\{x:|x|<d\}$, the conditions of Lemma 4.22 are satisfied. Consequently, if $\{X_n(0)\}$ is exponentially tight, the exponential compact containment condition holds for $\{X_n\}$. Furthermore, if (4.31) holds for all $\alpha>0$, Conditions (b) and (c) of Corollary 4.17 are easy to check, and hence $\{X_n\}$ is exponentially tight.

If $E$ is not locally compact, then, with Lemma 3.3 in mind, it may be useful to work with a sequence of open sets $\{G_k\}$.

Lemma 4.24. Let $X_n$ be a solution of the martingale problem for $A_n$. Let $K\subset E$ be compact, and for $k=1,2,\ldots$, let $G_k$ be open and contained in a finite union of open balls of radius $k^{-1}$. Let $\hat K$ be the closure of $\cap_kG_k$. (Note that $\hat K$ will be complete and totally bounded, hence compact.) Assume that $K\subset\hat K$. Let $(f_{k,n},g_{k,n})\in H_{\dagger,n}$, and assume that the corresponding supermartingale is right continuous. Define
\[
\beta_{k,n}\equiv\inf_{x\in G_k^c}f_{k,n}(x)-\sup_{x\in K}f_{k,n}(x),\qquad
\gamma_{k,n}\equiv\sup_{x\in G_k}g_{k,n}(x).
\]
If $\gamma=\sup_{k,n}\gamma_{k,n}<\infty$ and $\beta_{k,n}\ge\beta+\frac2n\log k$, then
\[
\limsup_{n\to\infty}\frac1n\log P\{X_n(t)\notin\hat K\text{ some }t\le T\}
\le\max\{-\beta+\gamma T,\ \limsup_{n\to\infty}\frac1n\log P\{X_n(0)\notin K\}\}.
\]

Proof. By Lemma 4.20,
\[
P\{X_n(0)\in K,\ X_n(t)\notin G_k\text{ some }t\le T\}\le P\{X_n(0)\in K\}e^{-n\beta_{k,n}+nT\gamma_{k,n}},
\]
and hence
(4.32)
\[
\limsup_{n\to\infty}\frac1n\log P\{X_n(t)\notin\hat K\text{ some }t\le T\}
\le\max\Bigl\{\limsup_{n\to\infty}\frac1n\log\sum_ke^{-n\beta_{k,n}+n\gamma_{k,n}T},\ \limsup_{n\to\infty}\frac1n\log P\{X_n(0)\notin K\}\Bigr\}.
\]
Under the assumptions on $\gamma$ and $\beta$, the right side of (4.32) is bounded by
\[
\max\{-\beta+\gamma T,\ \limsup_{n\to\infty}\frac1n\log P\{X_n(0)\notin K\}\}.
\]
□
4.7. Finite dimensional determination of the process rate function

The following lemma shows that finite dimensional rate functions give lower bounds for process rate functions.
Lemma 4.25. Let $\{X_n\}$ satisfy the large deviation principle with rate function $I$. Let $x\in D_E[0,\infty)$, and let $\Delta_x$ denote the set of discontinuities of $x$. If $t_1,\ldots,t_m\in\Delta_x^c$ and the large deviation principle holds for $\{(X_n(t_1),\ldots,X_n(t_m))\}$ with rate function $I_{t_1,\ldots,t_m}$, then
(4.33)
\[
I(x)\ge I_{t_1,\ldots,t_m}(x(t_1),\ldots,x(t_m)).
\]
Consequently, if for each finite subset $\{t_1,\ldots,t_m\}\subset T_0$, the large deviation principle holds for $\{(X_n(t_1),\ldots,X_n(t_m))\}$ with rate function $I_{t_1,\ldots,t_m}$, then
(4.34)
\[
I(x)\ge\sup_{\{t_i\}\subset T_0\cap\Delta_x^c}I_{t_1,\ldots,t_m}(x(t_1),\ldots,x(t_m)).
\]

Proof. For $\epsilon>0$ and $t_1,\ldots,t_m\in\Delta_x^c$, there exists $\epsilon'>0$ such that
\[
B_{\epsilon'}(x)\subset\{y:(y(t_1),\ldots,y(t_m))\in B^m_{\epsilon}((x(t_1),\ldots,x(t_m)))\},
\]
where $B^m_{\epsilon}$ denotes a ball in $E^m$. Consequently, (4.33) follows from (3.3). □

The assumption that $t_1,\ldots,t_m\in\Delta_x^c$ can be dropped in (4.33) if the right side is replaced by the minimum of $I_{t_1,\ldots,t_m}(y_1,\ldots,y_m)$ over the $2^m$ choices where $y_i$ is $x(t_i)$ or $x(t_i-)$. This minimum is in fact needed unless other conditions are introduced. For example, let $P\{X_n=I_{[1+1/n,\infty)}\}=1$. Then for $x=I_{[1,\infty)}$, $I(x)=0$, but $I_1(1)=\infty$.

Our next goal is to estimate $I$ from above in terms of finite dimensional rate functions. Let $y\in D_E[0,\infty)$. Suppose that $0\le t_1<t_2<\cdots$ satisfies $\lim_{i\to\infty}t_i=\infty$, and define $Py$ by $Py(t)=y(t_1)$, $0\le t<t_2$, and for $t\ge t_2$, $Py(t)=y(t_i)$, $t_i\le t<t_{i+1}$. Note that if we set $\zeta(t)=t_1$ for $0\le t<t_2$ and $\zeta(t)=t_i$ for $t_i\le t<t_{i+1}$, $i=2,3,\ldots$, then $Py(t)=y\circ\zeta(t)$.

Lemma 4.26. Let $T_0$ be dense in $[0,\infty)$. Let $x\in D_E[0,\infty)$. For $\epsilon>0$ and compact $K\subset D_E[0,\infty)$, there exist $t_1,\ldots,t_m\in T_0$ and $\epsilon'>0$ such that
(4.35)
\[
B_{\epsilon}(x)\supset\{y:(y(t_1),\ldots,y(t_m))\in B^m_{\epsilon'}((x(t_1),\ldots,x(t_m)))\}\cap K.
\]

Proof. Without loss of generality, we can assume that there exist compact $\Gamma_T\subset E$ and a function $h(\delta,T)>0$, defined for $T,\delta>0$ and satisfying $\lim_{\delta\to0}h(\delta,T)=0$, such that
\[
K=\{y:y(t)\in\Gamma_T,\ t\le T,\ w'(y,\delta,T+\delta)\le h(\delta,T)\}.
\]
Let $2\eta<\delta$, and let $0\le t_1<t_2<\cdots$ satisfy $t_i\in T_0$, $t_i\to\infty$, $t_2\le\eta$, and $t_{i+1}-t_i\le\eta$. Select $m$ so that $t_m\ge T+\delta$. If $y$ is in the set on the right of (4.35), then by Lemma 4.8,
\begin{align*}
d(x,y)&\le d(x,Px)+d(Px,Py)+d(Py,y)\\
&\le d(x,Px)+\epsilon'+\Bigl(w'(y,\delta,T+\eta)+\frac{\eta(T+\eta)}{\delta}+e^{-T}\Bigr)\vee\log\frac{\delta}{\delta-2\eta}\\
&\le d(x,Px)+\epsilon'+\Bigl(h(\delta,T)+\frac{\eta(T+\eta)}{\delta}+e^{-T}\Bigr)\vee\log\frac{\delta}{\delta-2\eta}.
\end{align*}
Select $\epsilon'<\epsilon/5$, $T$ so that $e^{-T}\le\epsilon'$, and $\delta$ so that $h(\delta,T)\le\epsilon'$. Then select $\eta$ so that $d(x,Px)\le\epsilon'$, $\eta(T+\eta)/\delta\le\epsilon'$, and $\log(\delta/(\delta-2\eta))\le3\epsilon'$. Then $d(x,y)<\epsilon$ for all $y$ in the set on the right of (4.35). □
Lemma 4.27. Let $\{X_n\}$ be exponentially tight in $D_E[0,\infty)$, and assume that the large deviation principle holds with rate function $I$ (otherwise select a subsequence). Let $T_0$ be dense in $[0,\infty)$, and suppose that for each finite subset $\{t_1,\ldots,t_m\}\subset T_0$, the large deviation principle holds for $\{(X_n(t_1),\ldots,X_n(t_m))\}$ with rate function $I_{t_1,\ldots,t_m}$. Then either $\liminf_{n\to\infty}\frac1n\log P\{X_n\in B_{\epsilon}(x)\}=-\infty$ for each $\epsilon>0$ and (by exponential tightness)
(4.36)
\[
\sup_{\{t_i\}\subset T_0}I_{t_1,\ldots,t_m}(x(t_1),\ldots,x(t_m))=\infty=I(x),
\]
or
(4.37)
\[
\liminf_{n\to\infty}\frac1n\log P\{X_n\in B_{\epsilon}(x)\}\ge-I_{t_1,\ldots,t_m}(x(t_1),\ldots,x(t_m))
\]
and
(4.38)
\[
I(x)\le\sup_{\{t_i\}\subset T_0}I_{t_1,\ldots,t_m}(x(t_1),\ldots,x(t_m)).
\]

Proof. Lemma 4.26 implies that for each $\epsilon>0$ and compact set $K\subset D_E[0,\infty)$, there exist $t_1,\ldots,t_m\in T_0$ and $\epsilon'>0$ such that
\begin{align*}
\liminf_{n\to\infty}\frac1n\log P\{X_n\in B_{\epsilon}(x)\}\vee\liminf_{n\to\infty}\frac1n\log P\{X_n\in K^c\}
&=\liminf_{n\to\infty}\frac1n\log P(\{X_n\in B_{\epsilon}(x)\}\cup\{X_n\in K^c\})\\
&\ge\liminf_{n\to\infty}\frac1n\log P\{(X_n(t_1),\ldots,X_n(t_m))\in B^m_{\epsilon'}((x(t_1),\ldots,x(t_m)))\}\\
&\ge-I_{t_1,\ldots,t_m}(x(t_1),\ldots,x(t_m)).
\end{align*}
Since for each $a>0$ there exists a compact $K_a$ such that
\[
\limsup_{n\to\infty}\frac1n\log P\{X_n\in K_a^c\}\le-a,
\]
the lemma follows. □

Lemmas 4.25 and 4.27 give the following theorem.

Theorem 4.28. Assume that $\{X_n\}$ is exponentially tight in $D_E[0,\infty)$ and that for each $0\le t_1<t_2<\cdots<t_m$, $\{(X_n(t_1),\ldots,X_n(t_m))\}$ satisfies the large deviation principle in $E^m$ with rate function $I_{t_1,\ldots,t_m}$. Then $\{X_n\}$ satisfies the large deviation principle in $D_E[0,\infty)$ with good rate function
(4.39)
\[
I(x)=\sup_{\{t_i\}\subset\Delta_x^c}I_{t_1,\ldots,t_m}(x(t_1),\ldots,x(t_m)).
\]

Proof. Let $I$ be the rate function corresponding to a subsequence of $\{X_n\}$ for which the large deviation principle holds. Exponential tightness implies $I$ will be good. Taking $T_0=\Delta_x^c$, Lemmas 4.25 and 4.27 imply that $I$ satisfies (4.39), which in turn implies that there is only one possible rate function, and hence the large deviation principle holds for the full sequence $\{X_n\}$ with rate function given by (4.39). □

The following corollary is a consequence of Lemma 3.23.
Corollary 4.29. Suppose that $D\subset C_b(E)$ is bounded above and isolates points. (See Definition 3.18.) Assume that $\{X_n\}$ is exponentially tight in $D_E[0,\infty)$ and that for each $0\le t_1\le\cdots\le t_m$ and $f_1,\ldots,f_m\in D$,
\[
\Lambda(t_1,\ldots,t_m,f_1,\ldots,f_m)=\lim_{n\to\infty}\frac1n\log E[e^{n(f_1(X_n(t_1))+\cdots+f_m(X_n(t_m)))}]
\]
exists. Then $\{X_n\}$ satisfies the large deviation principle in $D_E[0,\infty)$ with good rate function
\[
I(x)=\sup_m\sup_{\{t_1,\ldots,t_m\}\subset\Delta_x^c}\sup_{f_1,\ldots,f_m\in D}\{f_1(x(t_1))+\cdots+f_m(x(t_m))-\Lambda(t_1,\ldots,t_m,f_1,\ldots,f_m)\}.
\]

If C-exponential tightness holds, then Theorem 4.28 can be simplified. The next theorem is essentially Theorem 4.5 of Puhalskii [97] (Theorem 3.2.8 of [100]). He only considers $\mathbb R^d$-valued processes, but his proof extends easily to the metric space-valued case. See also de Acosta [24] for generalizations to projective systems.

Theorem 4.30. Assume that $\{X_n\}$ is C-exponentially tight in $D_E[0,\infty)$. Let $T_0$ be a dense subset of $[0,\infty)$, and suppose that for each $0\le t_1\le\cdots\le t_m$ with $t_i\in T_0$, $\{(X_n(t_1),\ldots,X_n(t_m))\}$ satisfies the large deviation principle in $E^m$ with rate function $I_{t_1,\ldots,t_m}$. Then $\{X_n\}$ satisfies the large deviation principle in $D_E[0,\infty)$ with good rate function
(4.40)
\[
I(x)=\sup_{\{t_i\}\subset T_0}I_{t_1,\ldots,t_m}(x(t_1),\ldots,x(t_m)).
\]

Proof. By C-exponential tightness, if $x\notin C_E[0,\infty)$, then $I(x)=\infty$ and (4.40) follows by (4.36). Otherwise, (4.40) follows by (4.34) and (4.38). □
Part 2
Large deviations for Markov processes and semigroup convergence
CHAPTER 5
Large deviations for Markov processes and nonlinear semigroup convergence

The main result of this chapter, Theorem 5.15, shows that convergence of Fleming's logarithmically transformed nonlinear semigroups implies the large deviation principle for Markov processes in a metric space. The convergence of the semigroups immediately implies the large deviation principle for the one dimensional distributions by Bryc's lemma, Proposition 3.8. The semigroup property then gives the large deviation principle for the finite dimensional distributions, which, after verification of exponential tightness, implies the pathwise large deviation principle by Theorem 4.28.

In most applications, we need infinitesimal criteria for the semigroup convergence, and Corollaries 5.19, 5.20, 5.21, and 5.22 are derived for this purpose. Their proofs depend on a resolvent estimate (Lemma 5.9) and a semigroup estimate (Lemma 5.12). These estimates play an important role in Chapter 7 as well.

The semigroup convergence results in this chapter are based on the Crandall-Liggett theory of nonlinear semigroups. Application of these theorems depends on verification of a range condition (Condition 5.1), which is typically difficult or impossible to carry out directly. This difficulty is overcome by the introduction of viscosity solution methods in Chapters 6 and 7.

Theorem 5.15 gives the rate function in terms of the limiting nonlinear semigroup. Since the semigroup is usually not known explicitly, this representation may be neither intuitive nor computationally useful. This difficulty is overcome by the introduction of the control representation of the semigroup (and hence, of the rate function) in Chapter 8.

5.1. Convergence of sequences of operator semigroups

Let $B$ be a Banach space. A (possibly nonlinear) operator $H\subset B\times B$ is dissipative if for each $\alpha>0$ and $(f_1,g_1),(f_2,g_2)\in H$,
\[
\|f_1-f_2-\alpha(g_1-g_2)\|\ge\|f_1-f_2\|.
\]
$\overline H$ will denote the closure of $H$ as a graph in $B\times B$, $\mathcal D(H)=\{f:\exists(f,g)\in H\}$, and $\mathcal R(H)=\{g:\exists(f,g)\in H\}$. If $H$ is dissipative, then $\overline H$ is dissipative, and for $\alpha>0$, $\mathcal R(I-\alpha\overline H)=\overline{\mathcal R(I-\alpha H)}$ and $J_{\alpha}\equiv(I-\alpha\overline H)^{-1}$ is a contraction operator, that is, for $f_1,f_2\in\mathcal R(I-\alpha\overline H)$,
\[
\|J_{\alpha}f_1-J_{\alpha}f_2\|\le\|f_1-f_2\|.
\]
See Miyadera [86] for general background information on dissipative operators. The following condition will be referred to as the range condition on $H$:

Condition 5.1. There exists $\alpha_0>0$ such that $\overline{\mathcal D(H)}\subset\mathcal R(I-\alpha H)$ for all $0<\alpha<\alpha_0$.

A dissipative operator $H$ is called m-dissipative if $\mathcal R(I-\alpha H)=B$ for all $\alpha>0$. The following is Lemma 2.13 of Miyadera [86].

Lemma 5.2. Let $H$ be dissipative. If $\mathcal R(I-\alpha_0H)=B$ for some $\alpha_0>0$, then $\mathcal R(I-\alpha H)=B$ for all $\alpha>0$, that is, $H$ is m-dissipative.

The primary consequence of the range condition for a dissipative operator is the generation theorem of Crandall and Liggett [18].

Proposition 5.3 (Crandall-Liggett Theorem). Let $H$ be dissipative and satisfy the range condition 5.1. Then for each $f\in\overline{\mathcal D(H)}$,
\[
S(t)f=\lim_{m\to\infty}\Bigl(I-\frac tmH\Bigr)^{-m}f=\lim_{m\to\infty}\Bigl(I-\frac1mH\Bigr)^{-[mt]}f
\]
exists and defines a contraction semigroup on $\overline{\mathcal D(H)}$.

Lemma 5.4. Let $V$ be a contraction operator on $B$, that is, $V:B\to B$ and $\|Vf-Vg\|\le\|f-g\|$ for $f,g\in B$. For $\epsilon>0$, define $H_{\epsilon}f=\epsilon^{-1}(Vf-f)$. Then $H_{\epsilon}$ is dissipative and satisfies the range condition 5.1. If $\{S_{\epsilon}(t)\}$ is the semigroup corresponding to $H_{\epsilon}$ given by Proposition 5.3, then for $f\in B$,
\[
\|S_{\epsilon}(t)f-V^{[t/\epsilon]}f\|\le(\epsilon+\sqrt{\epsilon t}\,)\|H_{\epsilon}f\|.
\]

Proof. See Miyadera [86], Lemma 3.16. □
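A linear sanity check of the Crandall-Liggett formula (our illustration, not from the text): for a bounded linear generator, the iterated resolvent $(I-\frac tmH)^{-m}f$ is the backward Euler approximation of the matrix exponential semigroup, and its convergence can be observed directly.

```python
import numpy as np

# Two-state Markov generator; dissipative in the sup norm.
A = np.array([[-1.0, 1.0],
              [1.0, -1.0]])
f = np.array([1.0, 0.0])
t = 1.0

# Exact semigroup: e^{tA} has eigenvalues 1 and e^{-2t}.
exact = np.array([(1 + np.exp(-2 * t)) / 2, (1 - np.exp(-2 * t)) / 2])

I = np.eye(2)
errs = []
for m in [10, 100, 1000]:
    resolvent = np.linalg.inv(I - (t / m) * A)   # (I - (t/m) A)^{-1}
    approx = f.copy()
    for _ in range(m):                            # apply it m times
        approx = resolvent @ approx
    errs.append(float(np.max(np.abs(approx - exact))))

assert errs[-1] < 1e-3        # (I - (t/m)A)^{-m} f -> e^{tA} f
assert errs[0] > errs[-1]     # error decreases as m grows
```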
With the examples in the Introduction in mind, we state a convergence theorem for nonlinear semigroups. Note that the estimate in Lemma 5.4 implies the corresponding result for a sequence of discrete time semigroups for which the time step $\epsilon_n\to0$.

Proposition 5.5. Let $E_n$, $E$ be arbitrary metric spaces, and let $\eta_n:E_n\to E$ be Borel measurable mappings. Define the bounded linear operator $\eta_n:B(E)\to B(E_n)$ by $\eta_nf=f\circ\eta_n$. Let $H_n\subset B(E_n)\times B(E_n)$ and $H\subset B(E)\times B(E)$ be dissipative, and suppose that each satisfies the range condition 5.1 with the same $\alpha_0>0$. Let $\{S_n(t)\}$ denote the semigroup generated by $H_n$ and $\{S(t)\}$ the semigroup generated by $H$.

a) Suppose that
\[
H\subset\mathrm{ex\text{-}}\lim_{n\to\infty}H_n
\]
in the sense that for each $(f,g)\in H$, there exists $(f_n,g_n)\in H_n$ satisfying $\|\eta_nf-f_n\|+\|\eta_ng-g_n\|\to0$. Then, for $f\in\overline{\mathcal D(H)}$ and $f_n\in\overline{\mathcal D(H_n)}$ satisfying $\|f_n-\eta_nf\|\to0$ and $T>0$,
\[
\lim_{n\to\infty}\sup_{t\le T}\|\eta_nS(t)f-S_n(t)f_n\|=0.
\]
b) Let LIM satisfy Condition A.13 for $J_n\subset B(E_n)$ and $J\subset B(E)$, and let $H_n\subset J_n\times J_n$. Suppose that $H\subset\mathrm{ex\text{-}LIM}\,H_n$ and that for $f_n,g_n\in J_n$ satisfying $\mathrm{LIM}f_n=\mathrm{LIM}g_n=f_0$, we have
\[
\mathrm{LIM}(S_n(t)f_n-S_n(t)g_n)=0,\quad t>0,
\]
and
\[
\mathrm{LIM}((I-\alpha H_n)^{-1}f_n-(I-\alpha H_n)^{-1}g_n)=0,\quad0<\alpha<\alpha_0.
\]
Then, for $f\in\overline{\mathcal D(H)}$ and $f_n\in\overline{\mathcal D(H_n)}$ satisfying $f=\mathrm{LIM}f_n$, and $t_n\to t$,
\[
S(t)f=\mathrm{LIM}\,S_n(t_n)f_n.
\]

Remark 5.6. This result extends the linear semigroup convergence theory of Trotter [117] and Kurtz [69, 70] to the setting of the Crandall-Liggett theorem. As noted earlier, convergence of linear semigroups can be used to prove weak convergence results in much the same way as we will use convergence of nonlinear semigroups to prove large deviation theorems. Application to diffusion approximations was one of the motivations for Trotter's original work.

Proof. These results follow by directly applying Theorem 3.2 in Kurtz [72]. For Part (a), take
\[
L=\{\langle f,\{f_n\}\rangle:f\in B(E),\ f_n\in B(E_n),\ \sup_n\|f_n\|<\infty\}
\]
with norm $\|\langle f,\{f_n\}\rangle\|=\|f\|\vee\sup_n\|f_n\|$. Define a linear mapping $P\subset L\times B(E)$ by
\[
P=\{(\langle f,\{f_n\}\rangle,f)\in L\times B(E):\lim_{n\to\infty}\|\eta_nf-f_n\|=0\}.
\]
It is easy to see that $\mathcal D(P)$ is closed and $P$ is a continuous linear mapping. Define $\mathbf H\subset L\times L$ by
\[
\mathbf H=\{(\langle f,\{f_n\}\rangle,\langle g,\{g_n\}\rangle)\in L\times L:(f,g)\in H,\ (f_n,g_n)\in H_n\}.
\]
Then $\mathbf H$ is dissipative on $L$ and satisfies the range condition, hence generates a semigroup $\{T(t)\}$. For $\langle f,\{f_n\}\rangle\in\overline{\mathcal D(\mathbf H)}$, $T(t)\langle f,\{f_n\}\rangle=\langle S(t)f,\{S_n(t)f_n\}\rangle$. For $0<\alpha<\alpha_0$, let $J_{\alpha}=(I-\alpha\mathbf H)^{-1}$ on $C=\{\langle f,\{f_n\}\rangle:f\in\overline{\mathcal D(H)},\ f_n\in\overline{\mathcal D(H_n)}\}$. Note that $C$ is closed and that for each $\langle f,\{f_n\}\rangle\in C$,
\[
J_{\alpha}\langle f,\{f_n\}\rangle=(I-\alpha\mathbf H)^{-1}\langle f,\{f_n\}\rangle=\langle(I-\alpha H)^{-1}f,\{(I-\alpha H_n)^{-1}f_n\}\rangle.
\]
For any $\langle f,\{f_n\}\rangle,\langle f,\{g_n\}\rangle\in C\cap\mathcal D(P)$,
\[
\|(I-\alpha H_n)^{-1}f_n-(I-\alpha H_n)^{-1}g_n\|\le\|f_n-g_n\|\le\|f_n-\eta_nf\|+\|g_n-\eta_nf\|\to0.
\]
Therefore $P(J_{\alpha}\langle f,\{f_n\}\rangle-J_{\alpha}\langle f,\{g_n\}\rangle)=0$.
5. CLASSICAL SEMIGROUP APPROACH
The conditions of Lemma 3.1 in Kurtz [72], with B = Jα , follow by straightforward calculations. It is also easy to verify that the conditions hold for B = T (t). Therefore by (3.4) of Theorem 3.2 of Kurtz [72], lim kηn S(t)f − Sn (t)fn k = 0,
n→∞
for each f ∈ D(H) and fn ∈ D(Hn ) satisfying limn→∞ kηn f − fn k = 0. The fact that the convergence is uniform on bounded time intervals follows from the strong continuity of the semigroup {T (t)}. (Note that the definition of A in Theorem 3.2 of Kurtz [72], which corresponds to H here, can be replaced by the assumption that A ⊂ {(P x, P y) : (x, y) ∈ H ∩ D(P ) × D(P )}.) Part (b) follows through the same arguments by taking L = {hf, {fn }i : f ∈ J, fn ∈ Jn , sup kfn k < ∞}, n
and P = {(hf, {fn }i, f ) ∈ L × J : f = LIMfn }. 5.2. Applications to large deviations Lemma 5.7. Let E be a complete separable metric space and B(E) be the space of bounded measurable functions. Define a bounded linear operator A : B(E) → B(E) by Z (5.1) Af (x) = λ(x) (f (y) − f (x))q(x, dy), E
where 0 ≤ λ(x) ≤ λ ≡ supx λ(x) < ∞ and q is a transition function on E. (For each x ∈ E, q(x, ·) ∈ P(E), and for each B ∈ B(E), q(·, B) ∈ B(E).) Define Hf = e−f Aef ; Ag f = e−g A(f eg ) − (e−g f )Aeg ; Lg = Ag g − Hg; HK f (x) = sup {Ag f (x) − Lg(x)}. kgk≤K
Then Hf (x) = sup {Ag f (x) − Lg(x)},
(5.2)
g∈B(E)
and HK f = Hf for kf k ≤ K. Furthermore, HK is Lipschitz on B(E) with Lipschitz constant MK = 2λe2K and is mdissipative on B(E). Finally, k(I − αHK )−1 hk ≤ khk, H is mdissipative on B(E), and the sup in (5.2) is attained when g = f . Proof. For f ∈ B(E), let xn ∈ E satisfy limn→∞ f (xn ) = supy f (y). Then Z (5.3) lim sup Af (xn ) = lim sup λ(xn ) (f (y) − f (xn ))q(xn , dy) n→∞
n→∞
E
≤ lim sup λ(xn )(sup f (y) − f (xn )) = 0, n→∞
y
5.2. APPLICATIONS TO LARGE DEVIATIONS
83
so A is strongly dissipative. (See Definition A.16 and Lemma A.18.) For f, g ∈ B(E) and (fixed) x ∈ E, let F (y) = eg(y)−g(x) {(f (y) − f (x)) − (g(y) − g(x))} + eg(y)−g(x) − ef (y)−f (x) . By the convexity of the exponential function, F (y) ≤ 0. Note also that Z g −g(x) A f (x) = e λ(x) eg(y) (f (y) − f (x))q(x, dy). E
Since F (x) = 0 = supy F (y), by (5.3), Z 0 ≥ AF (x) = λ(x) eg(y)−g(x) (f (y) − f (x))q(x, dy) E Z −λ(x) eg(y)−g(x) (g(y) − g(x))q(x, dy) E Z +λ(x) (eg(y)−g(x) − ef (y)−f (x) )q(x, dy) E
= Ag f (x) − Ag g(x) + Hg(x) − Hf (x) = Ag f (x) − Lg(x) − Hf (x), and hence, Hf (x) ≥ Ag f (x) − Lg(x). But by the definition of L, Hf (x) = Af f (x) − Lf (x), and (5.2) follows. By this representation of H, it is also easy to see that HK f = Hf whenever kf k ≤ K. To prove HK is Lipschitz, let f1 , f2 ∈ B(E). For each x ∈ E, there exists kgn k ≤ K such that HK f1 (x) = sup {Ag f1 (x) − Lg(x)} ≤ Agn f1 (x) − Lgn (x) + kgk≤K
1 , n
and hence (HK f1 − HK f2 )(x) ≤ Agn f1 (x) − Lgn (x) +
1 − Agn f2 (x) + Lgn (x) n
1 ≤ Agn (f1 − f2 )(x) + n Z 1 = λ(x) egn (y)−gn (x) {(f1 − f2 )(y) − (f1 − f2 )(x)}q(x, dy) + n E 1 2K ≤ 2e sup λ(x)kf1 − f2 k + . n x It follows that kHK f1 − HK f2 k ≤ 2e2K sup λ(x)kf1 − f2 k, x 2K
so HK is Lipschitz with constant MK = 2e supx λ(x). To see that HK is strongly dissipative, we apply Lemma A.18. Let xn ∈ E be such that (f1 −f2 )(xn ) ≥ supy (f1 −f2 )(y)−1/n. Then (f1 −f2 )(y)−(f1 −f2 )(xn ) ≤
84
5. CLASSICAL SEMIGROUP APPROACH
1/n. It follows that there exist gn with kgn k ≤ K such that Z 1 (HK f1 − HK f2 )(xn ) ≤ λ(xn ) egn (y)−gn (xn ) {(f1 − f2 )(y) − (f1 − f2 )(xn )}q(x, dy) + n E 1 1 2K ≤ 2e sup λ(x) + . n n x Therefore lim supn→∞ (HK f1 − HK f2 )(xn ) ≤ 0, and HK is strongly dissipative. Let h ∈ B(E). For 0 < α < 1/MK = αK , the mapping PK f = h + αHK f is a strict contraction on B(E). Hence, there exists a unique solution f ∈ B(E) of f = PK f , which implies (I − αHK )f = h, that is, R(I − αHK ) = B(E) for 0 < α < αK . Since HK is dissipative, by Lemma 5.2, HK is mdissipative. By the mdissipativity of HK , for each α > 0 and h, h0 ∈ B(E), k(I − αHK )−1 h − (I − αHK )−1 h0 k ≤ kh − h0 k. Since HK 0 = 0, k(I − αHK )−1 hk ≤ khk. Consequently, if khk ≤ K and f = (I − αHK )−1 h, then Hf = HK f and f − αHf = h. It follows that H is mdissipative on B(E). The fact that the sup in (5.2) is attained when g = f can be verified directly through the definition of Hf and Lf . Let {T (t)} be the semigroup given by Z T (t)f (x) = f (y)P (t, x, dy), E
where P (t, x, Γ) is the transition function for a Markov process that has cadlag paths. The full generator for {T (t)} is the collection A of (f, g) ∈ B(E) × B(E) satisfying Z t T (t)f = f + T (s)gds, t ≥ 0. 0
(See Ethier and Kurtz [36], Section 1.5.) The full generator satisfies R(I − A) = B(E) for > 0 with Z ∞ −1 (I − A)−1 f = −1 e− t T (t)f dt. 0
Note that (I − A)−1 is given by a transition function Z ∞ −1 q (x, Γ) = −1 e− t P (t, x, Γ)dt. 0
Define the Yosida approximation of A by 1 A = ((I − A)−1 − I) = A(I − A)−1 . Then A is a bounded, linear dissipative operator on R(I − A) = B(E) of the form (5.1).
5.2. APPLICATIONS TO LARGE DEVIATIONS
85
We apply the semigroup convergence result in Part (b) of Proposition 5.5 using the definition of LIM given in Definition 2.5 and assuming Condition 2.8. Note that Condition 2.8 can be restated as follows: For each q ∈ Q, T > 0, and a > 0, there exist qb(q, a, T ) and C(q, a, T ) such that (5.4)
sup P {Yn (t) ∈ / Knqb(q,a,T ) for some 0 ≤ t ≤ T Yn (0) = y} ≤ C(q, a, T )e−na .
q y∈Kn
A natural setting for this notion of limit is one in which En = E 0 is independent of n and ηn ≡ η : E 0 → E, Q is the collection of compact subsets of E 0 and KnK = K for K ∈ Q. Another example would be to take En to be the ddimensional lattice with mesh n1 and ηn : En → Rd to be the natural embedding. Then one could take KnK = En ∩ K for compact K ⊂ Rd . Lemma 5.8. Let En , E, ηn be defined as in Proposition 5.5. For each n, let An ⊂ B(En ) × B(En ) be of the form (5.1), that is Z (f (y) − f (x))qn (x, dy).
An f (x) = λn (x) E
Let Yn be a solution of the martingale problem for An with distribution Pn ∈ P(DEn [0, ∞)). Define Xn (t) = ηn (Yn (t)), and let Hn f = n−1 e−nf An enf . Suppose that Condition 2.8 is satisfied for {Yn }. For each T > 0, define Qn,T ∈ P(DEn [0, T ]) by (5.5)
dQn,T = exp{n(fn (Yn (T )) − fn (Yn (0)) − dPn
Z
T
Hn fn (Yn (s))ds)}. 0
Then, under Qn,T , {Yn (t), 0 ≤ t ≤ T } is a solution of the martingale problem for Afnn given by (5.6)
Afnn h(x)
= e−nfn (x) An (henfn )(x) − e−nfn (x) h(x)An enfn (x) Z = λn (x) en(fn (y)−fn (x)) (h(y) − h(x))qn (x, dy), En
and if supn (kfn k + kHn fn k) < ∞, Yn with distribution Qn,T on DEn [0, T ] satisfies Condition 2.8. Proof. Qn,T ∈ P(DEn [0, T ]) since Z Mn (t) ≡ exp{n(fn (Yn (t)) − fn (Yn (0)) −
t
Hn fn (Yn (s))ds)} 0
is a mean one martingale under Pn . The fact that under Qn,T , Yn is a solution of the martingale problem for Afnn can be checked by applying Theorem III.20 of Protter [96].
86
5. CLASSICAL SEMIGROUP APPROACH
For q, qb ∈ Q, 1 log sup Qn,T (Yn (t) ∈ / Knqb, q n y∈Kn
for some 0 ≤ t ≤ T Yn (0) = y)
=
dQn,T 1 log sup E Pn [ I(Yn (t) ∈ / Knqb, q n dPn y∈Kn
≤
1 / Knqb, log e2nkfn k−nT inf y∈En Hn fn (y) sup Pn (Yn (t) ∈ q n y∈Kn
for some 0 ≤ t ≤ T )Yn (0) = y] for some 0 ≤ t ≤ T )Yn (0) = y)
≤ 2kfn k − T inf Hn fn (y) y∈En
1 + sup log Pn (Yn (t) ∈ / Knqb, n y∈Knq
for some 0 ≤ t ≤ T )Yn (0) = y).
By (2.12), the conclusion follows.
Lemma 5.9. Under the Conditions of Lemma 5.8, let hin ∈ B(En ) and fni = (I − αHn )−1 hin ∈ B(En ),
i = 1, 2.
Then for q ∈ Q, T > 0, and a > 0, there exist qb = qb(q, a, T, supn kh1n k, supn kh2n k) ∈ Q and C = C(q, a, T, supn kh1n k, supn kh2n k) such that sup fn1 (y) − fn2 (y)
(5.7)
−1
≤ (e−α
q y∈Kn
T
+ Ce−na )kh1n − h2n k +
sup
q b(q,a,T ) y∈Kn
h1n (y) − h2n (y),
and hence LIMh1n = LIMh2n = h
(5.8) implies
LIM(fn1 − fn2 ) = 0.
(5.9)
Remark 5.10. The fni exist by Lemma 5.7. Proof. Define Agn f = e−ng An f eng −e−ng f An eng (see (5.6)) and Ln f = Afn f − Hn f . By Lemma 5.7 (appropriately modified to take into account the factor of n), Hn f (y) =
sup {Agn f (y) − Ln g(y)} = Afn f (y) − Ln f (y) g∈B(En )
holds for each f ∈ B(En ). Therefore hin (y) = fni − αHn fni ≤ fni − α(Agn fni (y) − Ln g(y)),
y ∈ En , g ∈ B(En ),
and the equality is attained when g = fni , i = 1, 2. Consequently, fni (y) =
fi
sup (I − αAgn )−1 (hin − αLn g)(y) = (I − αAnn )−1 (hin − αLn fni )(y), g∈B(En )
5.2. APPLICATIONS TO LARGE DEVIATIONS
87
and (fn1 − fn2 )(y)
(5.10)
sup (I − αAgn )−1 (h1n − αLn g)(y)
=
g∈B(En )
sup (I − αAgn )−1 (h2n − αLn g)(y)
−
g∈B(En ) f1
≤ (I − αAnn )−1 (h1n − h2n )(y) Z ∞ 1 −1 = E Qn [ α−1 e−α t h1n (Yn (t)) − h2n (Yn (t)) dtYn (0) = y], 0
where for
Q1n
is the distribution under which Yn is a solution of the martingale problem
f1 Ann .
By the dissipativity of Hn and the fact that (0, 0) ∈ Hn , kfn1 k ≤ kh1n k and 1 1 − inf Hn fn1 (y) = sup(h1n (y) − fn1 (y)) ≤ sup 2kh1n k < ∞. y α y α n Consequently, by Lemma 5.8, there exist qb1 = qb1 (q, a, T, supn kh1n k) and C1 = C1 (q, a, T, supn kh1n k) satisfying (5.4) for Yn with distribution Q1n . Then, setting Gn (b q , T ) = {Yn (t) ∈ / Knqb for some 0 ≤ t ≤ T }
(5.11) sup (fn1 − fn2 )(y) q y∈Kn
∞
Z 1 ≤ sup E Qn [ q y∈Kn
−1
α−1 e−α
t
h1n (Yn (t)) − h2n (Yn (t)) dtYn (0) = y]
0
! ≤
−α−1 T
e
+ sup q y∈Kn
+ sup
q b
Q1n (Gn (b q , T )Yn (0) (h1n (y)
−
= y) (0 ∨ sup (h1n (y) − h2n (y)) y∈En
h2n (y))
y∈Kn1 −1
≤ (e−α
T
+ C1 e−na )0 ∨ sup (h1n (y) − h2n (y)) y∈En
+ sup
q b
(h1n (y)
− h2n (y)).
y∈Kn1
There exist C2 (q, a, T, supn kh2n k) and qb2 (q, a, T, supn kh2n k), so that a similar inequality holds with the roles of f1 and f2 reversed. Consequently (5.7) holds with C = C1 (q, a, T, supn kh1n k) ∨ C2 (q, a, T, supn kh2n k) and qb satisfying
b
q (q,a,T,supn kh1n k)
Kn1
b
q (q,a,T,supn kh2n k)
∪ Kn2
⊂ Knqb.
Let En , E, ηn be defined as in Proposition 5.5. For each n, let An ⊂ B(En ) × B(En ) be a full generator corresponding to a Markov transition semigroup {Tn (t)}. (In particular, An is a closed, linear dissipative operator satisfying R(I − αAn ) = B(En ) for some (hence all) α > 0.) Define Vn (t)f = n1 log Tn (t)enf and Hn f = 1 −nf An enf for enf ∈ D(An ), or more precisely, since An can be multivalued, ne Hn = {(f,
1 −nf e g) : (enf , g) ∈ An }. n
88
5. CLASSICAL SEMIGROUP APPROACH
Let n > 0, and define 1 Ann = ((I − n An )−1 − I) = An (I − n An )−1 . n n Then An is a bounded, linear dissipative operator on R(I − n An ) = B(En ) of n the form (5.1), Tnn (t) = etAn defines a strongly continuous, linear contraction semigroup on B(En ), and Vnn (t)f = n1 log Tnn (t)enf defines a strongly continuous, nonlinear contraction semigroup on B(En ). Define 1 Hnn f = e−nf Ann enf , n for f ∈ B(En ), and note that Vnn (t) − I f t→0+ t
Hnn f = lim for f ∈ B(En ).
Lemma 5.11. Let Yn be a solution of the martingale problem for An , let N be a unit Poisson process, and let {∆1 , ∆2 , . . .} be independent, unit exponential random variables. Assume that the Yn , N , and the ∆k are independent. Define N (t/n )
X
λn (t) = n
∆k
k=1
and Ynn (t) = Yn (λn (t)). is a solution of the martingale problem for Ann , and if (fn , gn ) ∈ Hn ,
Then Ynn
sup Vn (t)fn (y) − Vnn (t)fn (y)
y∈En
≤ e2nkfn k kgn kE[λn (t) − t] √ ≤ 2n te2nkfn k kgn k.
(5.12)
Proof. Since (enfn , enfn gn ) ∈ An , nfn (Yn (t))
E[e
nfn (y)
Yn (0) = y] = e
Z t + nE[ enfn (Yn (r)) gn (Yn (r))drYn (0) = y] 0
and n (t))
E[enfn (Yn
Z Yn (0) = y] = enfn (y) + nE[
λn (t)
enfn (Yn (r)) gn (Yn (r))drYn (0) = y].
0
Consequently, sup Tn (t)enfn (y) − Tnn (t)enfn (y)
y∈En
Z ≤ sup nE[ y∈En
t∨λn (t)
enfn (Yn (r)) gn (Yn (r))dr]
t∧λn (t)
≤ nenkfn k kgn kE[λn (t) − t], and since  log a − log b ≤ b − a/a ∧ b and Tn (t)enfn and Tnn (t)enfn are both bounded below by e−nkfn k , sup Vn (t)fn (y) − Vnn (t)fn (y) ≤ e2nkfn k kgn kE[λn (t) − t].
y∈En
5.2. APPLICATIONS TO LARGE DEVIATIONS
89
The second inequality follows from the fact that E[(λn (t) − t)2 ] = 2n t Lemma 5.12. Suppose Yn satisfies Condition 2.8, and define 1 log E P [exp{nhin (Yn (t))}Yn (0) = y], i = 1, 2. n Then for q ∈ Q, t > 0, and a > 0, there exist qb(q, a, t) ∈ Q and C(q, a, t) such that Vn (t)hin (y) =
1
(5.13)sup Vn (t)h1n (y) − Vn (t)h2n (y)
≤
q y∈Kn
2
1 C(q, a, t)e−n(a−khn k−khn k) n 1 − C(q, a, t)e−na +
sup
q b(q,a,T ) y∈Kn
h1n (y) − h2n (y),
and hence LIMh1n = LIMh2n = h. implies LIM(Vn (t)h1n − Vn (t)h2n ) = 0.
(5.14)
Proof. Since Vn (t)(h + c) = Vn (t)h + c, without loss of generality, we can assume that h1n and h2n are nonnegative. Note that if a, b, c, and d are positive and a + b ≥ c + d, then Z a Z a+b a+b a b 1 1 log ≤ dx + dx ≤ log + . c+d x c a c x a Let q ∈ Q. By (5.4), there exists C(q, a, t) > 0 and qb(q, a, t) ∈ Q such that, for n satisfying e−na C(q, a, t) < 1, sup Vn (t)h1n (y) − Vn (t)h2n (y)
q y∈Kn
≤ sup  q y∈Kn
1 1 log E P [I{Yn (t))∈K qb(q,a,t) } enhn (Yn (t)) Yn (0) = y] n n 2 1 log E P [I{Yn (t)∈K qb(q,a,t) } enhn (Yn (t)) Yn (0) = y] n n 1 2 qb(q,a,t) 1 en(khn k+khn k) P {Yn (t) ∈ / Kn Yn (0) = y} + sup q b (q,a,t) q n y∈Kn P {Yn (t) ∈ Kn Yn (0) = y}
−
1
≤ sup h1n (y) − h2n (y) + q b y∈Kn
2
1 C(q, a, t)e−n(a−khn k−khn k) , n 1 − C(q, a, t)e−na
giving (5.13). Taking a > kh1n k + kh2n k, (5.14) follows.
Lemma 5.13. Let Hn and Hnn be as above with nn → 0. Let H ⊂ B(E) × B(E) be dissipative and satisfy the range condition 5.1. Then H generates a strongly continuous contraction semigroup on D(H) given by V (t)f = lim (I − m→∞
t H)−m f. m
90
5. CLASSICAL SEMIGROUP APPROACH
a) Suppose that H ⊂ ex lim Hn n→∞
in the sense that for each (f, g) ∈ H, there exist (fn , gn ) ∈ Hn satisfying kfn − ηn f k + kgn − ηn gk → 0.
(5.15) Then
H ⊂ ex lim Hnn n→∞
and for every f ∈ D(H) and fn ∈ B(En ) satisfying kηn f − fn k → 0, lim kηn V (t)f − Vnn (t)fn k = 0
(5.16)
n→∞
and lim kηn V (t)f − Vn (t)fn k = 0.
(5.17)
n→∞
b) Suppose {Yn } satisfies Condition 2.8. Let LIM be given by Definition 2.5, and suppose that H ⊂ exLIMHn . Then H ⊂ exLIMHnn and for every f ∈ D(H) and fn ∈ B(En ) satisfying f = LIMfn , V (t)f = LIMVnn (t)fn ,
(5.18) and (5.19)
V (t)f = LIMVn (t)fn .
Remark 5.14. Ordinarily, the dissipativity of H will follow from the fact that it is the limit (norm convergence) of dissipative operators, that is, if for (fi , gi ) ∈ H, i = 1, 2 and ≥ 0, limn→∞ kηn (f1 − f2 − (g1 − g2 ))k = kf1 − f2 − (g1 − g2 )k then the dissipativity of H will follow from the dissipativity of Hn and the convergence in (5.15). Proof. The operator Ann is just the Yosida approximation of An , and by Lemma 1.2.4, Ethier and Kurtz [36], Ann is a bounded, linear, dissipative operator and {Tnn (t)} forms a strongly continuous semigroup with Tnn (t)f − f , t→0+ t for f ∈ D(Ann ) = B(En ). The convergence in (5.20) implies that
(5.20)
Ann f = lim
Vnn (t)f − f , t→0+ t
Hnn f = lim
for f ∈ B(En ). By Lemma 5.7, Hnn is mdissipative, and hence t (5.21) Vnn (t)f = lim (I − Hnn )−m f. m→∞ m (See Miyadera [86], Corollary 3.21.) For Part (a), we show that (5.15) implies the existence of (fbn , gbn ) ∈ Hnn satisfying kfbn − ηn f k + kb gn − ηn gk → 0.
5.2. APPLICATIONS TO LARGE DEVIATIONS
91
For (f, g) ∈ H and (fn , gn ) ∈ Hn satisfying (5.15), for n sufficiently large, enfn (1 − nn gn ) > 0 and we can define fbn by
b
enfn = enfn (1 − nn gn ). If An is single valued, then enfn (1 − nn gn ) = (I − n An )enfn and
b
enfn = (1 − n An )enfn . Recall that (fn , gn ) ∈ Hn if and only if (enfn , enfn gn ) ∈ An , so
b
gbn = Hnn fbn = en(fn −fn ) gn , and since
b
enfn −nfn = 1 − nn gn → 1, we have kfn −fbn k → 0 and kb gn −ηn gk → 0. Hence H ⊂ ex limn→∞ Hnn . Therefore, by Proposition 5.5, (5.16) holds. Since n can be any sequence going to zero sufficiently fast, we can assume that limn→∞ n enc = 0 for all c > 0. Then the limit in (5.17) follows by (5.12) and the fact that Vn is a contraction. Similar arguments give Part (b). If (fn , gn ) ∈ Hn satisfy (5.22)
f = LIMfn ,
g = LIMgn ,
then there exist (fbn , gbn ) ∈ Hnn satisfying f = LIMfbn ,
g = LIMb gn .
exLIMHnn .
Hence H ⊂ By Proposition 5.5 and Lemmas 5.9 and 5.12, (5.18) holds. 0 Let f ∈ D(H) and (f, g) ∈ H. There exist (fn , gn ) ∈ Hn satisfying (5.22). Then sup ηn V (t)f 0 (y) − Vn (t)ηn f 0 (y) ≤ 2kf 0 − f k + sup ηn V (t)f (y) − Vnn (t)fn (y)
q y∈Kn
q y∈Kn
+ sup Vnn (t)fn (y) − Vn (t)fn (y) q y∈Kn
+ sup Vn (t)fn (y) − Vn (t)ηn f (y). q y∈Kn
Again, assuming that limn→∞ n enc = 0 for all c > 0, the second term on the right converges to zero by (5.18), the third term by (5.12), and the last term by Lemma 5.12. Consequently, lim sup sup ηn V (t)f 0 (y) − Vn (t)ηn f 0 (y) ≤ 2kf 0 − f k, q n→∞ y∈Kn
and since f ∈ D(H) is arbitrary, the left side converges to zero. Finally, (5.19) follows by Lemma 5.12. Theorem 5.15. Let (En , rn ), n = 1, 2, . . . and (E, r) be complete and separable metric spaces. Let ηn : En → E be Borel measurable, and define ηn : B(E) → B(En ) by ηn f = f ◦ ηn . Assume that one of the following holds:
92
5. CLASSICAL SEMIGROUP APPROACH
a) (Continuoustime case.) For each n = 1, 2, . . ., let An ⊂ C(En ) × B(En ), and assume that existence and uniqueness holds for the DE [0, ∞)martingale problem for (An , µ) for each initial distribution µ ∈ P(E). For y ∈ En , let Pyn ∈ P(DEn [0, ∞)) be the distribution of the solution of the martingale problem for An starting from y, and assume that the mapping y → Pyn is Borel measurable taking the weak topology on P(DEn [0, ∞)) (cf. Theorem 4.4.6, Ethier and Kurtz [36]). Let Yn be a solution of the martingale problem for An . b) (Discretetime case.) For each n = 1, 2, . . ., let Tn be a transition operator on B(En ) for a Markov chain and let n > 0. Let Yn be a discretetime Markov chain with timestep n and transition operator Tn , that is, E[f (Yn (t))Yn (0) = y] = Tn[t/n ] f (y), and suppose n → 0. Define Xn = ηn ◦ Yn , and define {Vn (t)} on B(En ) by (5.23)
Vn (t)f (y) =
1 log E[enf (Yn (t)) Yn (0) = y]. n
Let D ⊂ Cb (E) be closed under addition, and suppose that there exists an operator semigroup {V (t)} on D such that kηn V (t)f − Vn (t)fn k → 0,
(5.24)
whenever f ∈ D, fn ∈ B(En ) and kηn f − fn k → 0. In the discretetime case, assume also that kηn V (t)f − Vn (t + n )fn k → 0.
(5.25)
Let µn = P {Xn (0) ∈ ·}. Suppose that {µn } satisfies a large deviation principle with good rate function I0 on E, which is equivalent (see Proposition 3.8) to the existence of the rate transform Z 1 Λ0 (f ) = lim log enf dµn , n→∞ n E for each f ∈ Cb (E), and I0 (x0 ) =
sup {f (x0 ) − Λ0 (f )},
x0 ∈ E.
f ∈Cb (E)
Then a) For each 0 ≤ t1 < · · · < tk and f1 , . . . , fk ∈ D, n 1 log E P [enf1 (Xn (t1 ))+...+nfk (Xn (tk )) ] n = Λ0 (V (t1 )(f1 + V (t2 − t1 )(f2 + . . . + V (tk − tk−1 )fk ) . . .)).
lim
n→∞
b) Let 0 ≤ t1 < . . . < tk , and assume that (5.26)
{(Xn (t1 ), . . . , Xn (tk ))} is exponentially tight in E k . If D contains a set that is bounded above and isolates points, then (5.26) satisfies the large deviation principle with rate
5.2. APPLICATIONS TO LARGE DEVIATIONS
93
function (5.27) It1 ,...,tk (x1 , . . . , xk ) = sup {f1 (x1 ) + . . . + fk (xk ) f1 ,...,fk ∈D
−Λ0 (V (t1 )(f1 + V (t2 − t1 )(f2 + . . . V (tk − tk−1 )fk ) . . .))}. c) If {Xn } is exponentially tight in DE [0, ∞) and D contains a set that is bounded above and isolates points, then {Xn } satisfies the large deviation principle in DE [0, ∞) with rate function (5.28)
I(x) =
sup It1 ,...,tk (x(t1 ), . . . , x(tk )).
{ti }⊂∆cx
d) If {Xn } is Cexponentially tight and D contains a set that is bounded above and isolates points, then {Xn } satisfies the large deviation principle with rate function (5.29)
I(x(·)) = sup It1 ,...,tk (x(t1 ), . . . , x(tk )). t1 ,...,tk
Remark 5.16. Let
{{Knq }
: q ∈ Q} be as in Definition 2.5. If for each t ≥ 0,
inf lim sup
q∈Q n→∞
1 log P {Yn (t) ∈ / Knq } = −∞, n
then (5.24) can be replaced by the assumption: If f ∈ D and fn ∈ B(En ) satisfies f = LIMfn , then V (t)f = LIMVn (t)fn . In the discretetime case, also replace (5.25) by V (t)f = LIMVn (t + n )fn . Proof. We consider the continuoustime case. The proof for the discrete time case is similar. Note that since D is closed under addition and V (t) : D → D, if f1 , f2 ∈ D, then f1 + V (t)f2 ∈ D. By Ethier and Kurtz [36], Theorem 4.4.2, Yn is a Markov process. It follows that for f1 , . . . , fk ∈ D, n
E P [enf1 (Xn (t1 ))+...+nfk (Xn (tk )) ] n = E[E[enηn f1 (Yn (t1 ))+...+nηn fk (Yn (tk )) FtYk−1 ]]
= E[enηn f1 (Yn (t1 ))+...+nVn (tk −tk−1 )ηn fk (Yn (tk−1 )) ] Z = enVn (t1 )(ηn f1 +Vn (t2 −t1 )(ηn f2 +...+Vn (tk−1 −tk−2 )(ηn fk−1 +Vn (tk −tk−1 )ηn fk )...) (y)Pn Yn−1 (0)(dy) Z ∼ enηn V (t1 )(f1 +V (t2 −t1 )(f2 +...+V (tk −tk−1 )fk )...) (y)Pn Yn−1 (0)(dy), and hence n 1 log E P [enf1 (Xn (t1 ))+...+nfk (Xn (tk )) ] n = Λ0 (V (t1 )(f1 + V (t2 − t1 )(f2 + . . . + V (tk − tk−1 )fk ) . . .)).
lim
n→∞
Part (b) follows by Lemma 3.22 and Proposition 3.20; Parts (c) and (d) follow by Theorems 4.28 and 4.30.
94
5. CLASSICAL SEMIGROUP APPROACH
Corollary 5.17. Let En , E, ηn , Yn , Xn , Vn , D, V , and I0 be as in Theorem 5.15. Suppose that D contains a set that is bounded above and isolates points and that limn→∞ ηn (En ) = E. Define It (yx) = sup (f (y) − V (t)f (x)).
(5.30)
f ∈D
Then, setting t0 = 0 ≤ t1 < · · · < tk , It1 ,...,tk (x1 , . . . , xk ) = inf (I0 (x0 ) +
k X
x0 ∈E
Iti −ti−1 (xi xi−1 ))
i=1
and (5.31)
I(x) =
sup (I0 (x(0)) + {ti }⊂∆cx
k X
Iti −ti−1 (x(ti )x(ti−1 ))).
i=1
If in addition {Xn } is Cexponentially tight (cf. Theorem 4.13), then I satisfies (5.32)
I(x) =
sup (I0 (x(0)) + t1 0, u(0, x) = f0 (x). ∂t To set the ideas, we assume (E, r) is compact throughout this section. (These results also cover many locally compact situations. See the discussion on compactifying the state space in Section 4.3 and Theorem 4.11.) In Chapter 7, we remove the compactness assumption (at the cost of greater technicality) and generalize the results of this section to processes in complete, separable metric spaces.
For g ∈ B(E), g ∗ will denote the upper semicontinuous regularization of g, that is, (6.2)
g ∗ (x) = lim sup g(y), →0 y∈B (x)
and g∗ will denote the lower semicontinuous regularization, (6.3)
g∗ (x) = lim
inf
→0 y∈B (x) 97
g(y).
98
6. VISCOSITY SOLUTION METHOD
6.1. Viscosity solutions, definition and convergence Definition 6.1 (Viscosity Solution). Let E be a compact metric space, and H ⊂ C(E) × B(E). Fix h ∈ C(E) and α > 0. Let f ∈ B(E) and define g = α−1 (f − h), that is, f − αg = h. Then a) f is a viscosity subsolution of (6.1) if and only if f is upper semicontinuous and for each (f0 , g0 ) ∈ H such that supx (f −f0 )(x) = kf −f0 k, there exists x0 ∈ E satisfying (f − f0 )(x0 ) = kf − f0 k
(6.4) and
α−1 (f (x0 ) − h(x0 )) = g(x0 ) ≤ (g0 )∗ (x0 ).
(6.5)
b) f is a viscosity supersolution of (6.1) if and only if f is lower semicontinuous and for each (f0 , g0 ) ∈ H such that supx (f0 − f )(x) = kf0 − f k, there exists x0 ∈ E satisfying (f0 − f )(x0 ) = kf − f0 k
(6.6) and (6.7)
α−1 (f (x0 ) − h(x0 )) = g(x0 ) ≥ (g0 )∗ (x0 ).
A function f ∈ C(E) is said to be a viscosity solution of (6.1) if it is both a subsolution and a supersolution. If in the definition of subsolution (6.5) holds for every x0 satisfying (6.4), f will be called a strong subsolution, and similarly, if in the definition of supersolution (6.7) holds for every x0 satisfying (6.6), f will be called a strong supersolution. Remark 6.2. The basic goal of the definition of a viscosity solution is to extend H so that the desired solution is included in an extended domain while at the same time preserving the dissipativity of the operator. Continuous viscosity solutions were introduced in Crandall and Lions [19] to study certain nonlinear partial differential equations. The concept was later extended to include discontinuous sub super solutions. See Barles and Perthame [8], Barles and Souganidis [9], Ishii [60], Ishii and Lions [61] and Fleming and Soner [47]. The last reference focuses on control theory. In addition to the existence issues, these articles offer a rich theory of comparison results (see the next definition). Definition 6.1 is an adaptation of this concept to general spaces. When E ⊂ Rd is compact, H is local, and D(H) is dense in C(E), the regular and strong definitions are equivalent and also equivalent to that, say, in Barles and Souganidis [9] or Ishii and Lions [61] (take F (x, u, Hu) = u(x) − α(Hu)(x) − h(x)). The relationship between strong and ordinary viscosity solutions is similar to the relationship between strong and ordinary dissipative operators (see Appendix A.3). The operators H that arise as limits of the type discussed in the Introduction have the property that (f, g) ∈ H and c ∈ R implies (f + c, g) ∈ H. The following lemma simplifies the definition of viscosity solution. Lemma 6.3. Suppose (f, g) ∈ H and c ∈ R implies (f + c, g) ∈ H. Then an upper semicontinuous function f is a viscosity subsolution of (6.1) if and only if for each (f0 , g0 ) ∈ H, there exists x0 ∈ E such that (6.8)
f (x0 ) − f0 (x0 ) = sup(f (x) − f0 (x0 )) x
6.1. VISCOSITY SOLUTIONS, DEFINITION AND CONVERGENCE
99
and α−1 (f (x0 ) − h(x0 )) ≡ g(x0 ) ≤ (g0 )∗ (x0 ).
(6.9)
Similarly, a lower semicontinuous function f is a viscosity supersolution of (6.1) if and only if for each (f0 , g0 ) ∈ H, there exists x0 ∈ E such that f0 (x0 ) − f (x0 ) = sup(f0 (x) − f (x0 ))
(6.10)
x
and α−1 (f (x0 ) − h(x0 )) ≡ g(x0 ) ≥ (g0 )∗ (x0 ).
(6.11)
Proof. Let f be a subsolution and (f0 , g0 ) ∈ H. Then there exists a constant c such that sup(f (x) − f0 (x) + c) = kf − f0 + ck, x
and hence, by the definition of subsolution, there exists x0 ∈ E such that (6.12)
f (x0 ) − f0 (x0 ) + c = sup(f (x) − f0 (x) + c) = kf − f0 + ck x
and (6.9) holds. Of course, (6.12) implies (6.8). The proof of the second part is similar. Definition 6.4 (Comparison Principle). We say that (I − αH)f = h satisfies a comparison principle, if f a viscosity subsolution and f a viscosity supersolution implies f ≤ f on E. We will see that the comparison principle can be used to replace the range condition in results analogous to Proposition 5.5 and Theorem 5.15. The following lemma shows that the comparison principle generalizes the range condition. Lemma 6.5. Let E be a compact metric space and H ⊂ C(E) × B(E), and suppose that (f, g) ∈ H and c ∈ R implies (f + c, g) ∈ H. Then for each h ∈ C(E) ∩ R(I − αH), the comparison principle holds for f − αHf = h. Proof. Suppose f is a viscosity subsolution, (fn , gn ) ∈ H, and kfn − αgn − hk → 0. Set hn = fn − αgn . By Lemma 6.3, there exist xn satisfying f (xn ) − fn (xn ) = supx (f (x) − fn (x)) and sup(f (x) − fn (x))
= f (xn ) − fn (xn )
x
≤ h(xn ) + αgn∗ (xn ) − fn (xn ) = h(xn ) − (hn )∗ (xn ) → 0. Similarly, if f is a supersolution of f − αHf = h, lim inf inf (f (x) − fn (x)) ≥ 0, n→∞
x
and it follows that f ≤ f .
Let H ⊂ B(E) × B(E). The closure of H is (6.13) H = {(f, g) ∈ B(E) × B(E) : ∃{(fn , gn )} ⊂ H, kfn − f k + kgn − gk → 0}.
100
6. VISCOSITY SOLUTION METHOD
b ⊂ B(E) × B(E) is a viscosity extension of H if H b ⊃ H and Definition 6.6. H for each h ∈ C(E), f is a viscosity subsolution (supersolution) of (I − αH)f = h b = h. In if and only if f is a viscosity subsolution (supersolution) for (I − αH)f particular, the comparison principle holds for (I − αH)f = h if and only if the b =h comparison principle holds for (I − αH)f It is easy to check that H is a viscosity extension of H. Example 6.7. Let E be the one point compactification of Rd (we will write E = Rd ∪ {∞}). Let the operator H be given by (1.15) for f ∈ Dd = {f ∈ C(E) : f Rd − f (∞) ∈ Cc2 (Rd )}, and let H(x, p) be given by (1.16). Suppose a(x) ≤ C(1 + x2 ) and b(x) ≤ C(1 + x), and let b = {(f, H(·, ∇f (·))) : f ∈ C 1 , f (∞) ≡ lim f (x) exists , lim x∇f (x) = 0}. H x→∞
x→∞
b is a viscosity extension of H. Then H b For simplicity, assume that f (∞) = 0. Let {ζn } ⊂ Cc2 satisfy Let f ∈ D(H). ζn (x) ≡ 1 for x ≤R n, 0 ≤ ζn ≤ 1, and limn→∞ supx x∇ζn (x) = 0, and let ρ ∈ Cc2 satisfy ρ ≥ 0 and Rd ρ(x)dx = 1. Then fn defined by Z fn (x) = ζn (x) f (x − y)nd ρ(ny)dy Rd
is in D(H), kf − fn k → 0 (since f is uniformly continuous and vanishes at ∞), and Z Z ∇fn (x) = ζn (x) ∇f (x − y)nd ρ(ny)dy + f (x − y)nd ρ(ny)dy∇ζn (x) Rd
Rd
satisfies lim sup x∇f (x) − ∇fn (x) = 0.
n→∞ x
By the growth conditions on a and b, b k = 0. lim kHfn − Hf
n→∞
b ⊂ H. Suppose xn → x 6= ∞. Then In particular, H lim sup Hfn (xn ) = lim sup H(xn , ∇fn (xn )) = lim sup H(xn , ∇f (xn )) ≤ H ∗ f (x), n→∞
n→∞
n→∞
and similarly for lim inf. If xn  → ∞, then Hfn (xn ) → 0 = H ∗ f (∞) = H∗ f (∞). b ⊂ C(E) × B(E). Suppose that Lemma 6.8. Suppose E is compact. Let H ⊂ H b for each (f, g) ∈ H there exists {(fn , gn )} ⊂ H such that limn→∞ kfn − f k = 0 and for xn , x ∈ E such that xn → x, (6.14)
g∗ (x) ≤ lim inf gn (xn ) ≤ lim sup gn (xn ) ≤ g ∗ (x). n→∞
n→∞
b is a viscosity Suppose that (f, g) ∈ H implies (f + c, g) ∈ H for all c ∈ R. Then H extension of H. b = h, then f is Proof. Clearly, if f is a viscosity subsolution for (I − αH)f a viscosity subsolution for (I − αH)f = h. Suppose f is a viscosity subsolution b There exist (fn , gn ) ∈ H such that for (I − αH)f = h, and let (f0 , g0 ) ∈ H.
6.1. VISCOSITY SOLUTIONS, DEFINITION AND CONVERGENCE
101
kf0 − fn k → 0 and (6.14) holds with g replaced by g0 . Since fn is continuous and f is upper semicontinuous, there exist xn such that f (xn ) − fn (xn ) = sup(f (x) − fn (x)) x −1
gn∗ (xn ).
and α (f (xn ) − h(xn )) ≤ By the compactness of E, there exists x0 ∈ E and a subsequence of {xn } along which xn → x0 . By the upper semicontinuity of f, f (x0 )−f0 (x0 ) ≥ lim f (xn )−fn (xn ) = lim sup(f (x)−fn (x)) = sup(f (x)−f0 (x)). n→∞
n→∞ x
x
Hence the first inequality must be equality and, in particular, f (xn ) → f (x0 ). Consequently, α−1 (f (x0 ) − h(x0 )) = lim α−1 (f (xn ) − h(xn )) ≤ lim sup gn∗ (xn ) ≤ g0∗ (x0 ), n→∞
n→∞
b verifying that f is a subsolution for (I − αH)f = h. The proof is similar for supersolutions. Lemma 6.9. Let (En , rn ) be general metric spaces and (E, r) be a compact metric space. Let ηn : En → E be Borel measurable, and let E = limn→∞ ηn (En ) in the sense that for each x ∈ E, there exist zn ∈ En such that x = limn→∞ ηn (zn ). Define ηn : B(E) → B(En ) by ηn f = f ◦ ηn . For each n, we assume Hn ⊂ B(En )×B(En ) is a dissipative operator such that (f, g) ∈ Hn and c ∈ R implies (f + c, g) ∈ Hn . Let H ⊂ C(E) × B(E). Assume that for each (f0 , g0 ) ∈ H, there exist (fn,0 , gn,0 ) ∈ Hn such that kfn,0 − ηn f0 k → 0
(6.15)
and for each x ∈ E and each sequence zn ∈ En satisfying ηn (zn ) → x, (g0 )∗ (x) ≤ lim inf gn,0 (zn ) ≤ lim sup gn,0 (zn ) ≤ (g0 )∗ (x).
(6.16)
n→∞
n→∞
Let α > 0 and h ∈ C(E), and assume that the comparison principle holds for (I − αH)f = h.
(6.17)
Let (fn , gn ) ∈ Hn , and define fn − αgn = hn . If khn − ηn hk → 0, then there exists a unique viscosity solution f of (6.17), f ∈ C(E), and lim kfn − ηn f k = 0.
n→∞
If (f0 , g0 ) ∈ H, then kf − f0 k ≤ kh − (f0 − αg0 )k.
(6.18)
Remark 6.10. Note that kηn g0 − gn,0 k → 0 implies (6.16). Proof. First, we claim supn kfn k < ∞. Let {(fn,0 , gn,0 )} be any sequence satisfying (6.15) and (6.16). By the dissipativity of Hn , kfn − fn,0 k ≤ khn − (fn,0 − αgn,0 )k.
(6.19) Therefore lim sup kfn k n→∞
≤ 2 lim sup kfn,0 k + lim sup khn k + α lim sup kgn,0 k n→∞
n→∞
n→∞
≤ 2kf0 k + khk + α max{k(g0 )∗ k, k(g0 )∗ k} ≤ 2kf0 k + khk + αkg0 k.
102
6. VISCOSITY SOLUTION METHOD
Now define f , f ∈ B(E) by f (x) = lim sup{fn (z) : n ≥ k, r(x, ηn (z)) ≤ k→∞
1 }, k
and
1 }. k Then f is bounded and uppersemicontinuous, and f is bounded and lowersemicontinuous. We will prove that they are respectively sub and supersolutions of (6.17). We only consider f . The argument for f is similar. Let (f0 , g0 ) ∈ H. By assumption, there exists (fn,0 , gn,0 ) ∈ Hn such that (6.15) and (6.16) hold. By the dissipativity of Hn and Lemma 6.3, for each n, there exist znm ∈ En such that limm→∞ (fn − fn,0 )(znm ) = supz (fn − fn,0 )(z), and limm→∞ (gn −gn,0 )(znm ) ≤ 0. (See Lemma A.19.) We can therefore select m(n) such m(n) m(n) that (fn − fn,0 )(zn ) ≥ supz (fn − fn,0 )(z) − 1/n, and (gn − gn,0 )(zn ) ≤ 1/n. m(n) Define xn = ηn (zn ). Note that by the definition of f , the compactness of E, and (6.15), f (x) = lim inf{fn (z) : n ≥ k, r(x, ηn (z)) ≤ k→∞
sup(f − f0 )(x) = lim sup sup(fn − fn,0 )(z) = lim sup(fn − fn,0 )(znm(n) ). x
n→∞
z
n→∞
Selecting a subsequence along which the last lim sup becomes a limit and xn converges to a point x0 , by the definition of f , (6.15), and the continuity of f0 , (6.20)
(f − f0 )(x0 ) ≥ lim sup(fn − fn,0 )(znm(n) ) n→∞
=
sup(f − f0 )(x). x m(n)
From the above, along the selected subsequence, we also have fn (zn f (x0 ). Define g by f − αg = h. Then
) →
gn (znm(n) ) → g(x0 ). Hence g(x0 ) = lim gn (znm(n) ) ≤ (g0 )∗ (x0 ), n→∞
and it follows that f is an upper semicontinuous subsolution of (6.17). Finally, by the comparison principle, f ≡ f = f ∈ C(E), and f is a viscosity solution of (6.17). To see that kηn f − fn k → 0, we show that for each sequence zn ∈ En , fn (zn ) − f (ηn (zn )) → 0.
(6.21)
Since E is compact, we may assume ηn (zn ) → x e for some x e ∈ E (otherwise take a subsequence). For each k > 0, for n sufficiently large, 1 fn (zn ) ≤ sup{fn (z) : n ≥ k, r(e x, ηn (z)) < }. k Therefore lim sup fn (zn ) ≤ f (e x) = f (e x) = lim f (ηn (zn )). n→∞
n→∞
Similarly one can prove lim inf fn (zn ) ≥ f (e x) = f (e x) = lim f (ηn (zn )), n→∞
n→∞
6.1. VISCOSITY SOLUTIONS, DEFINITION AND CONVERGENCE
103
and (6.21) follows. To verify (6.18), let (fn,0 , gn,0 ) be as in (6.19). Note that the left side of (6.19) converges to kf − f0 k. Consequently, either there exist zn such that lim sup(hn (zn ) − fn,0 (zn ) + αgn,0 (zn ) ≥ kf − f0 k n→∞
or lim inf (hn (zn ) − fn,0 (zn ) + αgn,0 (zn ) ≤ −kf − f0 k. n→∞
Select a subsequence along which ηn (zn ) → x. Then by (6.15) and (6.16), the assumption on hn , and the continuity of h, either h(x) − f0 (x) + α(g0 )∗ (x) ≥ kf − f0 k or h(x) − f0 (x) + α(g0 )∗ (x) ≤ −kf − f0 k. In either case, (6.18) follows.
The original ideas in the above proof were due to Barles and Perthame [7] and Barles and Souganidis [9]. See also Fleming and Soner [47]. The condition E = limn→∞ ηn (En ) can be removed if we assume the existence of a strong viscosity solution of equation (6.17). Example 6.11. Verification of the comparison principle is frequently very technical. To at least show that such a condition is plausible, we verify it in the simplest possible case, Hf = 12 ∇f 2 , the limit operator for Brownian motion. We continue with E = Rd ∪ {∞}. By the kind of closure argument illustrated in Example 6.7, we can take D(H) = {f ∈ C 1 : f (∞) ≡ lim f (x) exists and x→∞
lim ∇f (x) = 0}.
x→∞
For h ∈ C(E), let f be a subsolution and f a supersolution of (I − αH)f = h. It is easy to check that every subsolution is a strong subsolution and every supersolution is a strong supersolution. (See Lemma 9.9.) First, suppose that supx (f (x) − f (x)) > (f (∞) − f (∞)). Consider Φn (x, y) = f (x) − f (y) − n
x − y2 , 1 + x − y2
extended so that Φn is upper semicontinuous on E × E. In particular, Φn (∞, ∞) = f (∞)−f (∞), Φn (∞, y) = f (∞)−f (y)−n, y ∈ Rd , and Φn (x, ∞) = f (x)−f (∞)−n, x ∈ Rd . Since Φn is upper semicontinuous, there exist xn , yn such that (6.22)
Φn (xn , yn ) = sup Φn (x, y) x,y 2
x−yn  n and xn , yn ∈ Rd . Setting f0n (x) = f (yn ) + n 1+x−y 2 , f0 ∈ D(H) and xn satisfies n
f (xn ) − f0n (xn ) = supx (f (x) − f0n (x)). Consequently, by (6.9), α−1 (f (xn ) − h) ≤ Hf0n (xn ) = n2
4xn − yn 2 , (1 + xn − yn 2 )4
2
xn −y and setting f1n (y) = f (xn ) − n 1+x 2 , a similar argument gives n −y
α−1 (f (yn ) − h) ≥ Hf1n (yn ) = n2
4xn − yn 2 . (1 + xn − yn 2 )4
104
6. VISCOSITY SOLUTION METHOD
It follows that α−1 (f (xn ) − f (yn )) ≤ 0. 2
xn −yn  A small amount of analysis (Lemma 9.2) gives limn→∞ n 1+x 2 = 0 and n −yn 
(6.23)
0 ≥ lim (f (xn ) − f (yn )) = lim Φn (xn , yn ) ≥ sup(f (x) − f (x)). n→∞
n→∞
x
We still need to consider the case f (∞) − f (∞) = sup(f (x) − f (x)). x
Now we take Φn (x, y) = f (x) − f (y) − n(
1 1 + ), 1 + x2 1 + y2
and again take xn and yn satisfying (6.22). Now either or both xn and yn may be ∞. As before α−1 (f (xn ) − h) ≤ Hf0n (xn ) = n2
(6.24)
4xn 2 , (1 + xn 2 )4
where the right side is zero if xn = ∞. Similarly, α−1 (f (yn ) − h) ≥ Hf1n (yn ) = n2
(6.25)
4yn 2 . (1 + yn 2 )4
Lemma 9.2 implies lim n(
n→∞
1 1 + ) = 0, 2 1 + xn  1 + yn 2
which in turn implies the right sides of (6.24) and (6.25) converge to zero. Consequently, we have lim sup(f (xn ) − f (yn ) ≤ 0, n→∞
which as in (6.23) implies f (∞) − f (∞) ≤ 0.
Lemma 6.12. Let $E_n$, $E$, $\eta_n$, $H_n$, and $H$ satisfy the assumptions of Lemma 6.9. Suppose that, for $i = 1, 2$, there exist $(f_n^i, g_n^i)\in H_n$ and $h_i\in C(E)$ such that $h_n^i\equiv f_n^i - \alpha g_n^i$ satisfies $\|h_n^i - \eta_n h_i\|\to 0$ and the comparison principle holds for
$$(6.26)\qquad (I - \alpha H)f_i = h_i.$$
Then for $i = 1, 2$ there exists a unique viscosity solution $f_i\in C(E)$ of (6.26), and $\|f_1 - f_2\|\le\|h_1 - h_2\|$.

Proof. The existence and uniqueness of viscosity solutions $f_i\in C(E)$ of (6.26) follows by Lemma 6.9 and the assumption that $\|\eta_n h_i - h_n^i\|\to 0$. It also follows that $\|\eta_n f_i - f_n^i\|\to 0$. Since $H_n$ is dissipative, $\|f_n^1 - f_n^2\|\le\|h_n^1 - h_n^2\|$. Noting $h_i, f_i\in C(E)$,
$$\|f_1 - f_2\| = \lim_{n\to\infty}\|\eta_n(f_1 - f_2)\| \le \lim_{n\to\infty}\|f_n^1 - f_n^2\| \le \lim_{n\to\infty}\|h_n^1 - h_n^2\| \le \|h_1 - h_2\|. \qquad\square$$
6.1. VISCOSITY SOLUTIONS, DEFINITION AND CONVERGENCE
Theorem 6.13. Let $(E_n, r_n)$ be general metric spaces, $(E, r)$ be a compact metric space, $\eta_n : E_n\to E$ be Borel measurable, and $E = \lim_{n\to\infty}\eta_n(E_n)$ (cf. Lemma 6.9). Define $\eta_n : B(E)\to B(E_n)$ by $\eta_n f = f\circ\eta_n$. For each $n = 1, 2, \ldots$, assume $H_n\subset B(E_n)\times B(E_n)$ is dissipative and satisfies the range condition 5.1 for $0 < \alpha < \alpha_0$ with $\alpha_0$ independent of $n$. Assume that $(f, g)\in H_n$ and $c\in\mathbb R$ implies $(f + c, g)\in H_n$. Let $H\subset C(E)\times B(E)$. Suppose that for each $(f_0, g_0)\in H$, there exist $(f_{n,0}, g_{n,0})\in H_n$ such that $\|f_{n,0} - \eta_n f_0\|\to 0$ and, for each $x\in E$ and each sequence $z_n\in E_n$ satisfying $\eta_n(z_n)\to x$,
$$(6.27)\qquad (g_0)_*(x) \le \liminf_{n\to\infty} g_{n,0}(z_n) \le \limsup_{n\to\infty} g_{n,0}(z_n) \le (g_0)^*(x).$$
Assume that for each $0 < \alpha < \alpha_0$,
$$C(E)\subset\lim_{n\to\infty}\mathcal R(I - \alpha H_n),$$
in the sense that for each $h\in C(E)$ there exist $h_n\in\mathcal R(I - \alpha H_n)$ such that $\|\eta_n h - h_n\|\to 0$, and that there exists a dense subset $D_\alpha\subset C(E)$ such that for each $h\in D_\alpha$, the comparison principle holds for
$$(6.28)\qquad (I - \alpha H)f = h.$$
Then
a) For each $h\in D_\alpha$, there exists a unique viscosity solution of (6.28), which we will denote by $f = R_\alpha h$.
b) $R_\alpha$ is a contraction and hence extends to all of $C(E)$.
c) The operator
$$(6.29)\qquad \widehat H = \bigcup_\alpha\Big\{\Big(R_\alpha h,\ \frac{R_\alpha h - h}{\alpha}\Big) : h\in C(E)\Big\}$$
is dissipative, $D(\widehat H)\supset D(H)$, and $\widehat H$ satisfies the range condition 5.1.
d) $\widehat H$ defined in (6.29) generates a strongly continuous contraction semigroup $\{V(t)\}$ on $\overline{D(\widehat H)}$ given by
$$(6.30)\qquad V(t)f = \lim_{n\to\infty} R_{t/n}^n f.$$
e) Letting $\{V_n(t)\}$ denote the semigroup on $\overline{D(H_n)}$ generated by $H_n$, for each $f\in D(\widehat H)$ and $f_n\in D(H_n)$ satisfying $\|f_n - \eta_n f\|\to 0$,
$$\lim_{n\to\infty}\|V_n(t)f_n - \eta_n V(t)f\| = 0.$$

Proof. For $0 < \alpha < \alpha_0$ and $h\in D_\alpha$, since $h\in\lim_{n\to\infty}\mathcal R(I - \alpha H_n)$, by Lemma 6.9 there exists a unique viscosity solution $f\in C(E)$ of
$$(6.31)\qquad (I - \alpha H)f = h$$
(giving Part (a)), and there exist $(f_n, g_n)\in H_n$ such that
$$(6.32)\qquad \|f_n - \eta_n f\| + \|(f_n - \alpha g_n) - \eta_n h\| \to 0.$$
By Lemma 6.12,
$$(6.33)\qquad \|R_\alpha h_1 - R_\alpha h_2\| \le \|h_1 - h_2\|$$
for $h_1, h_2\in D_\alpha$, and $R_\alpha$ extends to all of $C(E)$, giving Part (b).
If $(f_n^i, g_n^i)$, $f^i$, and $h^i$, $i = 1, 2$, are as in (6.32) (perhaps for different values of $\alpha$), then $\|f_n^1 - f_n^2\|\to\|f^1 - f^2\|$ and
$$\lim_{n\to\infty}\|(f_n^1 - \alpha g_n^1) - (f_n^2 - \alpha g_n^2)\| = \lim_{n\to\infty}\Big\|\Big(f_n^1 - \alpha\frac{f_n^1 - \eta_n h^1}{\alpha_1}\Big) - \Big(f_n^2 - \alpha\frac{f_n^2 - \eta_n h^2}{\alpha_2}\Big)\Big\| = \Big\|\Big(f^1 - \alpha\frac{f^1 - h^1}{\alpha_1}\Big) - \Big(f^2 - \alpha\frac{f^2 - h^2}{\alpha_2}\Big)\Big\|,$$
so the dissipativity of $\widehat H$ follows from the dissipativity of $H_n$. For $(f_0, g_0)\in H$, (6.18) implies
$$\|R_\alpha h - f_0\| \le \|h - f_0 + \alpha g_0\|.$$
Taking $h = f_0$, we see that $\lim_{\alpha\to 0} R_\alpha f_0 = f_0$, and hence $D(\widehat H)\supset D(H)$. By Lemma 6.9, $D(\widehat H)\subset C(E)$. For $0 < \alpha < \alpha_0$, $\mathcal R(I - \alpha\widehat H)\supset D_\alpha$ and hence $\overline{\mathcal R(I - \alpha\widehat H)} = C(E)\supset D(\widehat H)$. Part (d) follows by Proposition 5.3. Part (e) then follows by Proposition 5.5. $\square$

6.2. Large deviations using viscosity semigroup convergence

Theorem 6.14. Let $(E_n, r_n)$ be complete, separable metric spaces and $(E, r)$ be a compact metric space. Let $\eta_n : E_n\to E$ be Borel measurable, and define $\eta_n : B(E)\to B(E_n)$ by $\eta_n f = f\circ\eta_n$. Assume that $E = \lim_{n\to\infty}\eta_n(E_n)$. Assume that one of the following holds:
a) (Continuous-time case.) For each $n = 1, 2, \ldots$, $A_n\subset B(E_n)\times B(E_n)$, and existence and uniqueness holds for the $D_{E_n}[0,\infty)$-martingale problem for $(A_n,\mu)$ for each initial distribution $\mu\in\mathcal P(E_n)$. Letting $P_y^n\in\mathcal P(D_{E_n}[0,\infty))$ denote the distribution of the solution of the martingale problem for $A_n$ starting from $y\in E_n$, the mapping $y\to P_y^n$ is Borel measurable, taking the weak topology on $\mathcal P(D_{E_n}[0,\infty))$ (cf. Theorem 4.4.6, Ethier and Kurtz [36]). $Y_n$ is a solution of the martingale problem for $A_n$, and
$$H_n f = \frac1n e^{-nf} A_n e^{nf},\qquad e^{nf}\in D(A_n).$$
b) (Discrete-time case.) For each $n = 1, 2, \ldots$, $T_n$ is a transition operator on $B(E_n)$ for a Markov chain, $\epsilon_n > 0$, and $Y_n$ is a discrete-time Markov chain with time step $\epsilon_n$ and transition operator $T_n$, that is,
$$E[f(Y_n(t))\mid Y_n(0) = y] = T_n^{[t/\epsilon_n]} f(y),$$
and
$$H_n f = \frac{1}{n\epsilon_n}\log e^{-nf} T_n e^{nf},\qquad f\in B(E_n).$$
The sequence $\epsilon_n\to 0$.
Define $\{V_n(t)\}$ on $B(E_n)$ by
$$V_n(t)f(y) = \frac1n\log E[e^{nf(Y_n(t))}\mid Y_n(0) = y].$$
Let $H\subset C(E)\times B(E)$ with $D(H)$ dense in $C(E)$. Suppose that for each $(f, g)\in H$, there exist $(f_n, g_n)\in H_n$ such that
$$\|\eta_n f - f_n\| \to 0,$$
$\sup_n\|g_n\| < \infty$, and for each $x\in E$ and sequence $z_n\in E_n$ satisfying $\eta_n(z_n)\to x$,
$$(6.34)\qquad (g)_*(x) \le \liminf_{n\to\infty} g_n(z_n) \le \limsup_{n\to\infty} g_n(z_n) \le (g)^*(x).$$
Fix $\alpha_0 > 0$. Suppose that for each $0 < \alpha < \alpha_0$, there exists a dense subset $D_\alpha\subset C(E)$ such that for each $h\in D_\alpha$, the comparison principle holds for
$$(6.35)\qquad (I - \alpha H)f = h.$$
Define $X_n = \eta_n(Y_n)$. Suppose that $\{X_n(0)\}$ satisfies a large deviation principle with a good rate function. Then
a) $\widehat H$ defined in (6.29) generates a semigroup on $C(E)$ as in (6.30).
b) In the continuous-time case,
$$\lim_{n\to\infty}\|\eta_n V(t)f - V_n(t)f_n\| = 0,$$
whenever $f\in C(E)$, $f_n\in B(E_n)$, and $\|\eta_n f - f_n\|\to 0$.
c) In the discrete-time case,
$$\lim_{n\to\infty}\big(\|\eta_n V(t)f - V_n(t)f_n\| + \|\eta_n V(t)f - V_n(\epsilon_n + t)f_n\|\big) = 0,$$
whenever $f\in C(E)$, $f_n\in B(E_n)$, and $\|\eta_n f - f_n\|\to 0$.
d) $\{X_n\}$ is exponentially tight and satisfies the large deviation principle with rate function $I$ given by (5.30) and (5.31) with $D = C(E)$ and $\{V(t)\}$ as in Theorem 6.13.

Proof. For the continuous-time case, follow the proof of Corollary 5.19. Define $\widehat A_n$ and $\widehat H_n$. Then $\widehat H_n$ satisfies the range condition 5.1 and generates a semigroup $\{\widehat V_n(t)\}$. Also, recall that by Lemma 5.7, $\mathcal R(I - \alpha\widehat H_n) = B(E_n)$ for all $\alpha > 0$. Therefore $D_\alpha\subset\mathcal R(I - \alpha\widehat H_n)$. By Theorem 6.13, $\|\widehat V_n(t)f_n - \eta_n V(t)f\|\to 0$ whenever $f_n\in B(E_n)$, $f\in C(E)$, and $\|\eta_n f - f_n\|\to 0$. The remainder of the proof is the same as for Corollary 5.19.
The discrete-time case follows by direct application of Theorem 6.13, Lemma 5.4, and Theorem 5.15. $\square$
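As a concrete instance of the convergence $H_n\to H$ assumed above (a standard small-noise illustration, consistent with the limit operator of Example 6.11 but not taken from this section), consider $Y_n(t) = x + n^{-1/2}B(t)$ for a $d$-dimensional Brownian motion $B$, so that $A_n = \frac{1}{2n}\Delta$ on smooth functions:

```latex
% Logarithmic transform of the small-noise Brownian generator A_n = (1/(2n)) Delta.
% Using  Delta e^{nf} = e^{nf} ( n \Delta f + n^2 |\nabla f|^2 ),
H_n f
  = \frac{1}{n}\, e^{-nf} \, \frac{1}{2n}\,\Delta e^{nf}
  = \frac{1}{2n}\,\Delta f + \frac{1}{2}\,|\nabla f|^2
  \;\longrightarrow\; Hf = \frac{1}{2}\,|\nabla f|^2
  \qquad (n\to\infty),
```

for $f\in C^2$ with bounded first and second derivatives, which is the Freidlin–Wentzell limit operator verified in Example 6.11.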
CHAPTER 7
Extensions of viscosity solution methods

We extend the results of Chapter 6 in several directions. First, we allow the state space to be a noncompact metric space. Since results in locally compact spaces can frequently be obtained by considering the one-point compactification of the space, the real goal of this extension is to be able to treat problems in Banach spaces or other infinite-dimensional settings. Second, we relax the boundedness assumptions on the domain and range of $H$. At first glance, this extension may appear odd, but it allows greater flexibility in the choice of test functions needed in the verification of the comparison principle. Third, we consider large deviations for processes obtained as projections or other transformations of Markov processes, where the state space of the transformed process has lower dimension than that of the original. Examples 1.8, 1.9, and 1.11 describe settings where these results are useful. In each of these examples, the natural limiting operator $H$ satisfies $H\subset C(E)\times M(E')$, where $E'$ is another metric space, and there is a natural mapping $\gamma : E'\to E$. In the cited examples, $E' = E\times E_0$ for some metric space $E_0$, and $\gamma$ is just the obvious projection. Theorem 7.24 gives the main result for this setting.

We highlight two technical points in this chapter. First, as mentioned at the beginning of Section 2.3, when $E$ is noncompact, we need to work with notions of convergence of functions in $B(E)$ that are weaker than uniform convergence. As in Corollary 5.22, the corresponding semigroup convergence, in particular the computations that give the large deviation principle for the finite-dimensional distributions, requires the a priori estimates on the resolvent and semigroup given in Lemmas 5.9 and 5.12. Second, we need to extend the concept of viscosity solution (Definition 7.1), since we are no longer assured that suprema are actually achieved.
Throughout this chapter, we assume that $(E, r)$ and $(E', r')$ are complete, separable metric spaces, $\gamma : E'\to E$ is continuous, and $E = \gamma(E')$, that is, the mapping $\gamma$ is onto. For $f\in M(E)$, we define $\gamma f = f\circ\gamma\in M(E')$.

7.1. Viscosity solutions, definition and convergence

Recall that $\overline{\mathbb R} = [-\infty,\infty]$, that $M(E,\overline{\mathbb R})$ denotes the collection of Borel measurable functions $f$ with values in $\overline{\mathbb R}$ such that $f(x)\in\mathbb R$ for at least one $x\in E$, and that $C(E,\overline{\mathbb R})\subset M(E,\overline{\mathbb R})$ denotes the collection of functions that are continuous as mappings from $E$ into $\overline{\mathbb R}$ with the natural topology on $\overline{\mathbb R}$. $M^u(E,\overline{\mathbb R})\subset M(E,\overline{\mathbb R})$ and $C^u(E,\overline{\mathbb R})\subset C(E,\overline{\mathbb R})$ denote the corresponding collections of functions that are bounded above (that is, $f\in M^u(E,\overline{\mathbb R})$ implies $\sup_{x\in E} f(x) < \infty$), and $M^l(E,\overline{\mathbb R})\subset M(E,\overline{\mathbb R})$ and $C^l(E,\overline{\mathbb R})\subset C(E,\overline{\mathbb R})$ denote the collections of functions that are bounded below.
Let $H\subset M(E,\overline{\mathbb R})\times M(E',\overline{\mathbb R})$, $h\in C_b(E)$, and $\alpha > 0$. Consider the equation
$$(7.1)\qquad (I - \alpha H)f = h.$$
Since all the examples in which we are interested have the property that $(f,g)\in H$ implies $(f+c,g)\in H$ for all $c\in\mathbb R$, we generalize the definition implied by Lemma 6.3 rather than Definition 6.1. (Note that for $c\in\mathbb R$, we take $\infty + c = \infty$ and $-\infty + c = -\infty$.)

Definition 7.1 (Viscosity Solution). Let $E$, $E'$, $H$, $h$, $\alpha$ be as above. Let $f\in B(E)$, and define $g = \alpha^{-1}(f - h)$, that is, $f - \alpha g = h$. Then
a) $f$ is a viscosity subsolution of (7.1) if and only if $f$ is upper semicontinuous and for each $(f_0, g_0)\in H$ such that $\sup_x(f - f_0)(x) < \infty$ (that is, $f_0$ is bounded below), there exist $\{y_n\}\subset E'$ satisfying
$$(7.2)\qquad \lim_{n\to\infty}(\gamma f - \gamma f_0)(y_n) = \sup_x(f - f_0)(x)$$
and
$$(7.3)\qquad \limsup_{n\to\infty}\big(\alpha^{-1}(\gamma f - \gamma h) - (g_0)^*\big)(y_n) \le 0.$$
b) $f$ is a viscosity supersolution of (7.1) if and only if $f$ is lower semicontinuous and for each $(f_0, g_0)\in H$ such that $\sup_x(f_0 - f)(x) < \infty$ (that is, $f_0$ is bounded above), there exist $\{y_n\}\subset E'$ satisfying
$$(7.4)\qquad \lim_{n\to\infty}(\gamma f_0 - \gamma f)(y_n) = \sup_x(f_0 - f)(x)$$
and
$$(7.5)\qquad \liminf_{n\to\infty}\big(\alpha^{-1}(\gamma f - \gamma h) - (g_0)_*\big)(y_n) \ge 0.$$
A function $f\in C_b(E)$ is a viscosity solution of (7.1) if it is both a subsolution and a supersolution.
$f$ will be called a strong subsolution if for every $\{y_n\}\subset E'$ satisfying (7.2), there exists a sequence $\{\hat y_n\}\subset E'$ satisfying (7.2) and (7.3) and $\liminf_{n\to\infty} r'(y_n,\hat y_n) = 0$. Similarly, $f$ will be called a strong supersolution if for every $\{y_n\}\subset E'$ satisfying (7.4), there exists a sequence $\{\hat y_n\}\subset E'$ satisfying (7.4) and (7.5) and $\liminf_{n\to\infty} r'(y_n,\hat y_n) = 0$.

Remark 7.2. a) If $E$ is compact, $E = E'$, $\gamma(x) = x$, $H\subset C(E)\times B(E)$, and $(f,g)\in H$ implies $(f+c,g)\in H$, $c\in\mathbb R$, then the conditions of Definition 7.1 and Definition 6.1 are equivalent. If the conditions of Definition 6.1 are satisfied, then take $y_n = x_0$ to see that the conditions of Definition 7.1 are satisfied. Conversely, if $\{y_n\}$ satisfies (7.2) and (7.3) and $E = E'$ is compact, we can assume that $y_n\to x_0$. It follows from (7.2) and the upper semicontinuity of $f$ that $\lim_{n\to\infty} f(y_n) = f(x_0)$. Consequently, (7.3) implies $\alpha^{-1}(f(x_0) - h(x_0))\le (g_0)^*(x_0)$. The definitions of strong sub- and supersolution are also equivalent.
b) If $E$ is compact and $E\ne E'$, then $f$ is a viscosity subsolution if and only if for each $(f_0, g_0)\in H$ such that $\sup_x(f - f_0)(x) < \infty$, there exist $x_0\in E$ and $\{y_n\}\subset E'$ satisfying $\lim_{n\to\infty}\gamma(y_n) = x_0$, $\lim_{n\to\infty}\gamma f(y_n) = f(x_0)$,
$$(7.6)\qquad f(x_0) - f_0(x_0) = \sup_x(f - f_0)(x)$$
and
$$(7.7)\qquad \alpha^{-1}(f(x_0) - h(x_0)) \le \liminf_{n\to\infty}(g_0)^*(y_n).$$
$f$ is a strong subsolution if for each $x_0$ satisfying (7.6), there exists a sequence $y_n$ satisfying $\lim_{n\to\infty}\gamma(y_n) = x_0$ and (7.7).

We also generalize the definitions of the comparison principle and a viscosity extension.

Definition 7.3 (Comparison Principle). Let $H_\dagger, H_\ddagger\subset M(E,\overline{\mathbb R})\times M(E',\overline{\mathbb R})$, $h\in C_b(E)$, and $\alpha > 0$. Subsolutions of
$$(7.8)\qquad (I - \alpha H_\dagger)f = h$$
and supersolutions of
$$(7.9)\qquad (I - \alpha H_\ddagger)f = h$$
satisfy a comparison principle if for every viscosity subsolution $\overline f$ of (7.8) and every viscosity supersolution $\underline f$ of (7.9) we have $\overline f\le\underline f$ on $E$. When $H_\dagger = H_\ddagger\equiv H$, we say that $(I - \alpha H)f = h$ satisfies a comparison principle.

We have the following generalization of Lemma 6.5.

Lemma 7.4. Let $H_\dagger = H_\ddagger = H$. If $h\in C_b(E)$ and there exist $(f_n, g_n)\in H\cap(C_b(E)\times B(E'))$ satisfying $\|\gamma f_n - \alpha g_n - \gamma h\|\to 0$, then the comparison principle holds for $f - \alpha Hf = h$.

Proof. Suppose $\overline f$ is a viscosity subsolution. Set $h_n = \gamma f_n - \alpha g_n$. For $\epsilon_n > 0$, $\epsilon_n\to 0$, there exist $y_n\in E'$ satisfying $\gamma\overline f(y_n) - \gamma f_n(y_n)\ge\sup_x(\overline f(x) - f_n(x)) - \epsilon_n$ and
$$\sup_x(\overline f(x) - f_n(x)) \le \gamma\overline f(y_n) - \gamma f_n(y_n) + \epsilon_n \le \gamma h(y_n) + \alpha g_n^*(y_n) - \gamma f_n(y_n) + 2\epsilon_n = \gamma h(y_n) - (h_n)_*(y_n) + 2\epsilon_n \to 0.$$
Similarly, if $\underline f$ is a supersolution of $f - \alpha Hf = h$,
$$\liminf_{n\to\infty}\inf_x(\underline f(x) - f_n(x)) \ge 0,$$
and it follows that $\overline f\le\underline f$. $\square$
Definition 7.5. $\widehat H\subset M(E,\overline{\mathbb R})\times M(E',\overline{\mathbb R})$ is a viscosity subextension of $H\subset M(E,\overline{\mathbb R})\times M(E',\overline{\mathbb R})$ if $\widehat H\supset H$ and, for each $h\in C_b(E)$ and $\alpha > 0$, $f$ is a viscosity subsolution of $(I - \alpha H)f = h$ if and only if $f$ is a viscosity subsolution of $(I - \alpha\widehat H)f = h$.
$\widehat H\subset M(E,\overline{\mathbb R})\times M(E',\overline{\mathbb R})$ is a viscosity superextension of $H\subset M(E,\overline{\mathbb R})\times M(E',\overline{\mathbb R})$ if $\widehat H\supset H$ and, for each $h\in C_b(E)$ and $\alpha > 0$, $f$ is a viscosity supersolution of $(I - \alpha H)f = h$ if and only if $f$ is a viscosity supersolution of $(I - \alpha\widehat H)f = h$.
$\widehat H$ is a viscosity extension of $H$ if it is both a viscosity subextension and a viscosity superextension.
Lemma 7.6. Let $H_\dagger\subset\widehat H_\dagger\subset M^l(E,\overline{\mathbb R})\times M(E',\overline{\mathbb R})$. Suppose that for each $(f, g)\in\widehat H_\dagger$, there exists $\{(f_n, g_n)\}\subset H_\dagger$ such that for each $c, d\in\mathbb R$,
$$\lim_{n\to\infty}\|f_n\wedge c - f\wedge c\| = 0$$
and
$$(7.10)\qquad \limsup_{n\to\infty}\sup_{z\in\{\gamma f_n\vee\gamma f\le c\}}\big(g_n^*(z)\vee d - g^*(z)\vee d\big) \le 0.$$
Then $\widehat H_\dagger$ is a viscosity subextension of $H_\dagger$.
Let $H_\ddagger\subset\widehat H_\ddagger\subset M^u(E,\overline{\mathbb R})\times M(E',\overline{\mathbb R})$. Suppose that for each $(f, g)\in\widehat H_\ddagger$, there exists $\{(f_n, g_n)\}\subset H_\ddagger$ such that for each $c, d\in\mathbb R$,
$$\lim_{n\to\infty}\|f_n\vee c - f\vee c\| = 0$$
and
$$(7.11)\qquad \liminf_{n\to\infty}\inf_{z\in\{\gamma f_n\wedge\gamma f\ge c\}}\big((g_n)_*(z)\wedge d - g_*(z)\wedge d\big) \ge 0.$$
Then $\widehat H_\ddagger$ is a viscosity superextension of $H_\ddagger$.

Proof. Clearly, if $f$ is a viscosity subsolution for $(I - \alpha\widehat H_\dagger)f = h$, then $f$ is a viscosity subsolution for $(I - \alpha H_\dagger)f = h$. Suppose $f$ is a viscosity subsolution for $(I - \alpha H_\dagger)f = h$, and let $(f_0, g_0)\in\widehat H_\dagger$. There exist $(f_n, g_n)\in H_\dagger$ such that for each $c, d\in\mathbb R$, $\|f_0\wedge c - f_n\wedge c\|\to 0$ and (7.10) holds with $g$ replaced by $g_0$. Let $\epsilon_n > 0$ and $\epsilon_n\to 0$. There exist $z_n\in E'$ such that
$$\gamma f(z_n) - \gamma f_n(z_n) \ge \sup_x(f(x) - f_n(x)) - \epsilon_n$$
and
$$\alpha^{-1}(\gamma f(z_n) - \gamma h(z_n)) - g_n^*(z_n) \le \epsilon_n.$$
Since
$$\gamma f_n(z_n) \le \gamma f(z_n) + \epsilon_n - \inf_x(f_n(x) - f(x)),\qquad \limsup_{n\to\infty}\gamma f_n(z_n) \le 2\|f\| - \inf_x f_0(x) < \infty,$$
we have
$$\lim_{n\to\infty}(\gamma f(z_n) - \gamma f_0(z_n)) = \lim_{n\to\infty}(\gamma f(z_n) - \gamma f_n(z_n)) = \lim_{n\to\infty}\sup_x(f(x) - f_n(x)) = \sup_x(f(x) - f_0(x)).$$
There exists $c < \infty$ such that $\gamma f_n(z_n)\vee\gamma f_0(z_n) < c$ for all $n$. Let $d < -\alpha^{-1}(\|f\| + \|h\|)$. Then for $n$ sufficiently large, $g_n^*(z_n) > d$ and (by (7.10)) $g_0^*(z_n) > d$. Consequently,
$$\limsup_{n\to\infty}\big(\alpha^{-1}(\gamma f(z_n) - \gamma h(z_n)) - g_0^*(z_n)\big) \le \limsup_{n\to\infty}\big(\alpha^{-1}(\gamma f(z_n) - \gamma h(z_n)) - g_n^*(z_n) + (g_n^*(z_n) - g_0^*(z_n))\big) \le \limsup_{n\to\infty}\big(\alpha^{-1}(\gamma f(z_n) - \gamma h(z_n)) - g_n^*(z_n)\big) \le 0,$$
verifying that $f$ is a subsolution for $(I - \alpha\widehat H_\dagger)f = h$. The proof is similar for supersolutions. $\square$

Lemma 7.6 determines a closure operation that preserves sub- and supersolutions. The next lemma gives another.

Lemma 7.7. Let $(f_0, g_0)\in M^l(E,\overline{\mathbb R})\times M(E,\overline{\mathbb R})$. Suppose that there exists $\{(f_n, g_n)\}\subset H_\dagger$ satisfying the following:
a) $f_n$ is lower semicontinuous.
b) $f_1\le f_2\le\cdots$ and $f_n\to f_0$ pointwise.
c) If $\{z_n\}\subset E'$ satisfies $\sup_n\gamma f_n(z_n) < \infty$ and $\inf_n g_n^*(z_n) > -\infty$, then $\{\gamma(z_n)\}$ is relatively compact, and if $\{\gamma(z_{n_k})\}$ converges to $x_0$,
$$(7.12)\qquad \limsup_{k\to\infty} g_{n_k}^*(z_{n_k}) \le g_0^*(x_0).$$
Then $\widehat H_\dagger = H_\dagger\cup\{(f_0,\gamma g_0)\}$ is a viscosity subextension of $H_\dagger$. The analogous result holds for superextensions.

Proof. Suppose $f$ is a viscosity subsolution for $(I - \alpha H_\dagger)f = h$. Let $(f_n, g_n)\in H_\dagger$ satisfy (a) through (c), and let $\epsilon_n > 0$ and $\epsilon_n\to 0$. There exist $z_n\in E'$ such that
$$\gamma f(z_n) - \gamma f_n(z_n) \ge \sup_x(f(x) - f_n(x)) - \epsilon_n$$
and
$$\alpha^{-1}(\gamma f(z_n) - \gamma h(z_n)) - g_n^*(z_n) \le \epsilon_n.$$
Note that
$$\gamma f_n(z_n) \le 2\|f\| - \inf_y f_0(y) + \epsilon_n$$
and
$$g_n^*(z_n) \ge -\alpha^{-1}(\|f\| + \|h\|) - \epsilon_n.$$
It follows that $\{\gamma(z_n)\}$ is relatively compact, and by Proposition 2.42 of Attouch [5], if $x_0$ is a limit point of $\{\gamma(z_n)\}$, then
$$(7.13)\qquad \lim_{n\to\infty}(\gamma f(z_n) - \gamma f_n(z_n)) = f(x_0) - f_0(x_0) = \sup_x(f(x) - f_0(x)).$$
Since $f_n$ is lower semicontinuous, Theorem 2.40 of Attouch [5] gives
$$(7.14)\qquad f_0(x_0) = \lim_{\delta\to 0+}\lim_{n\to\infty}\inf_{r(x,x_0)\le\delta} f_n(x) = \sup_{\delta>0}\sup_n\inf_{r(x,x_0)\le\delta} f_n(x).$$
For simplicity, assume that $\gamma(z_n)\to x_0$. (Otherwise, consider a subsequence.) Then (7.13), (7.14), and (7.12) imply
$$f(x_0) - f_0(x_0) - h(x_0) = \limsup_{n\to\infty}(\gamma f(z_n) - \gamma f_n(z_n) - \gamma h(z_n)) \le \limsup_{n\to\infty}(\alpha g_n^*(z_n) + \alpha\epsilon_n - \gamma f_n(z_n)) \le \lim_{\delta\to 0+}\limsup_{n\to\infty}\Big(\alpha g_n^*(z_n) - \inf_{r(x,x_0)\le\delta} f_n(x)\Big) \le \alpha g_0^*(x_0) - f_0(x_0).$$
Consequently, (7.2) and (7.3) hold for any $y_n = y\in\gamma^{-1}(x_0)$. $\square$
Next, we generalize Lemma 6.9 to the present setting. We will need the following simple technical lemma.

Lemma 7.8. Suppose $f, g\in M(E,\overline{\mathbb R})$.
a) If $\sup_x f(x)\le\sup_x(f(x) - \epsilon g(x)) < \infty$ for all $0 < \epsilon < \epsilon_0$, then there exist $x_n$ such that $\lim_{n\to\infty} f(x_n) = \sup_x f(x)$ and $\limsup_{n\to\infty} g(x_n)\le 0$.
b) If $\inf_x f(x)\ge\inf_x(f(x) - \epsilon g(x)) > -\infty$ for all $0 < \epsilon < \epsilon_0$, then there exist $x_n$ such that $\lim_{n\to\infty} f(x_n) = \inf_x f(x)$ and $\liminf_{n\to\infty} g(x_n)\ge 0$.

Proof. We prove Part (a); the proof of (b) is similar. Note that $\inf_x g(x) > -\infty$. Let $\epsilon_n\to 0+$, and let $x_n$ satisfy
$$\sup_x(f(x) - \epsilon_n g(x)) \le f(x_n) - \epsilon_n g(x_n) + \epsilon_n^2.$$
Then
$$\sup_x f(x) + \epsilon_n g(x_n) - \epsilon_n^2 \le f(x_n) \le f(x_n) - \epsilon_n g(x_n) + \epsilon_n^2,$$
which implies
$$\limsup_{n\to\infty} g(x_n) \le \lim_{n\to\infty}\epsilon_n = 0$$
and
$$\lim_{n\to\infty} f(x_n) = \sup_x f(x). \qquad\square$$
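A toy instance of Part (a) may help fix ideas (our illustration, not from the original text): take $E = \mathbb R$, $f(x) = -x^2$, and $g(x) = x$.

```latex
% E = R, f(x) = -x^2, g(x) = x.  Then sup_x f = 0 and
% sup_x ( f(x) - \epsilon g(x) ) = \epsilon^2/4 < \infty for every \epsilon > 0,
% so the hypothesis of Part (a) holds.  The maximizers
% x_n = -\epsilon_n/2 of f - \epsilon_n g satisfy
\lim_{n\to\infty} f(x_n) = \lim_{n\to\infty}\Big(-\tfrac{\epsilon_n^2}{4}\Big) = 0 = \sup_x f(x),
\qquad
\limsup_{n\to\infty} g(x_n) = \limsup_{n\to\infty}\Big(-\tfrac{\epsilon_n}{2}\Big) = 0 \le 0 ,
```

exhibiting the sequence whose existence the lemma asserts.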
We recall the notions of $\limsup_{n\to\infty} G_n$ and $\liminf_{n\to\infty} G_n$ for a sequence of subsets $G_n$ of a topological space in Definition 2.4. In particular, we note that both limits are closed sets by definition. We require the following conditions on the approximating state spaces (cf. Definition 2.5).

Condition 7.9. $(E_n, r_n)$, $n = 1, 2, \ldots$, $(E', r')$, and $(E, r)$ are complete, separable metric spaces, $\eta'_n : E_n\to E'$ is Borel measurable, $\gamma : E'\to E$ is continuous and onto, and $\eta_n = \gamma\circ\eta'_n$. For $n = 1, 2, \ldots$ and $q\in Q$, $K_n^q\subset E_n$. The $K_n^q$ satisfy the following:
a) For $q_1, q_2\in Q$, there exists $q_3\in Q$ such that $K_n^{q_1}\cup K_n^{q_2}\subset K_n^{q_3}$.
b) For each $x\in E$, there exist $q\in Q$ and $z_n\in K_n^q$ such that $\lim_{n\to\infty}\eta_n(z_n) = x$.
c) For each $q\in Q$, there exists a compact $\widehat K^q\subset E$ such that
$$\lim_{n\to\infty}\sup_{y\in K_n^q}\inf_{x\in\widehat K^q} r(\eta_n(y), x) = 0,$$
and a compact $\widehat K'^q\subset E'$ such that
$$(7.15)\qquad \lim_{n\to\infty}\sup_{y\in K_n^q}\inf_{x\in\widehat K'^q} r'(\eta'_n(y), x) = 0.$$
d) For each compact $K\subset E$, there exists $q\in Q$ such that
$$(7.16)\qquad K\subset K^q\equiv\liminf_{n\to\infty}\eta_n(K_n^q).$$
(Note that by (c), $K^q\subset\widehat K^q$, so $K^q$ is compact.)

Remark 7.10. Note that if Condition 7.9(c) holds, then it holds with $\widehat K^q = \limsup_{n\to\infty}\eta_n(K_n^q)$ and $\widehat K'^q = \limsup_{n\to\infty}\eta'_n(K_n^q)$.
Define $\eta_n : M(E,\overline{\mathbb R})\to M(E_n,\overline{\mathbb R})$ by $\eta_n f = f\circ\eta_n$, and define $\gamma f$ and $\eta'_n f$ similarly. Following Definition 2.5, for $f_n\in B(E_n)$ and $f\in B(E)$, $\mathrm{LIM}\, f_n = f$ if $\sup_n\|f_n\| < \infty$ and for each $q\in Q$,
$$\lim_{n\to\infty}\sup_{z\in K_n^q}|f_n(z) - \eta_n f(z)| = 0.$$
Somewhat abusing notation, for $g_n\in B(E_n)$ and $g\in B(E')$, we write $\mathrm{LIM}\, g_n = g$ if $\sup_n\|g_n\| < \infty$ and for each $q\in Q$,
$$\lim_{n\to\infty}\sup_{z\in K_n^q}|g_n(z) - \eta'_n g(z)| = 0.$$
We assume that $A_n$ is the full generator for a Markov process $Y_n$ with values in $E_n$ and that
$$(7.17)\qquad H_n = \Big\{\Big(f, \frac1n e^{-nf} g\Big) : (e^{nf}, g)\in A_n\Big\},$$
or, in the discrete-time case, that $T_n$ is a transition operator, $\epsilon_n > 0$, and
$$H_n f = \frac{1}{n\epsilon_n}\log e^{-nf} T_n e^{nf},\qquad f\in B(E_n).$$
We will assume the following convergence condition.

Condition 7.11 (Convergence Condition). Let $H_\dagger\subset C^l(E,\overline{\mathbb R})\times M^u(E',\overline{\mathbb R})$ and $H_\ddagger\subset C^u(E,\overline{\mathbb R})\times M^l(E',\overline{\mathbb R})$, and let $\upsilon_n\to\infty$. For each $(f_0, g_0)\in H_\dagger$, assume that there exist $(f_{n,0}, g_{n,0})\in H_n$ such that
$$(7.18)\qquad \mathrm{LIM}\, f_{n,0}\wedge c = f_0\wedge c,\ \text{for each}\ c\in\mathbb R,$$
$$(7.19)\qquad \sup_n\frac{1}{\upsilon_n}\log\|g_{n,0}\| < \infty\quad\text{and}\quad\sup_{n,\,y\in E_n} g_{n,0}(y) < \infty,$$
and that for each $q\in Q$ and each sequence $z_n\in K_n^q$ satisfying $\eta'_n(z_n)\to y$ and $\lim_{n\to\infty} f_{n,0}(z_n) = \gamma f_0(y) < \infty$,
$$(7.20)\qquad \limsup_{n\to\infty} g_{n,0}(z_n) \le (g_0)^*(y).$$
For each $(f_1, g_1)\in H_\ddagger$, assume that there exist $(f_{n,1}, g_{n,1})\in H_n$ such that
$$(7.21)\qquad \mathrm{LIM}\, f_{n,1}\vee c = f_1\vee c,\ \text{for each}\ c\in\mathbb R,$$
$$(7.22)\qquad \sup_n\frac{1}{\upsilon_n}\log\|g_{n,1}\| < \infty\quad\text{and}\quad\inf_{n,\,y\in E_n} g_{n,1}(y) > -\infty,$$
and that for each $q\in Q$ and each sequence $z_n\in K_n^q$ satisfying $\eta'_n(z_n)\to y$ and $\lim_{n\to\infty} f_{n,1}(z_n) = \gamma f_1(y) > -\infty$,
$$(7.23)\qquad \liminf_{n\to\infty} g_{n,1}(z_n) \ge (g_1)_*(y).$$
By definition, Hn ⊂ B(En ) × B(En ). Lyapunov function arguments (for example, Lemma 4.20) frequently require working with unbounded functions, and it may be useful to allow unbounded functions in the convergence condition. In the
continuous time case, as in Section 4.5, define H†,n to be the collection of pairs (f, g) ∈ M l (En , R) × M u (En , R) such that
R t ng(Y
n Zf,g (t) = en(f (Yn (t))−f (Yn (0))−
(7.24)
0
n (s))ds
is a supermartingale whenever f (Yn (0)) < ∞ almost surely, and let H‡,n be the collection of (f, g) ∈ M u (En , R) × M l (En , R) such that (7.24) is a submartingale whenever f (Yn (0)) > −∞ almost surely. We have the following lemma. Lemma 7.12. For n = 1, 2, . . ., let An be the full generator for a Markov process and let Hn be given by (7.17). Let (fn , gn ) ∈ H†,n , (f0 , g0 ) ∈ C l (E, R)×M u (E 0 , R), and for each > 0, let σn = inf{t : r0 (η 0 (Yn (t)), η 0 (Yn (0))) > }. Suppose that inf n,y∈En fn (y) > −∞, supn,y∈En gn (y) < ∞, and that there exist cn → ∞ and δn > 0 satisfying 1 (ncn − log(nδn )) = 0 (7.25) lim nδn = 0, lim n→∞ n→∞ υn such that for each q ∈ Q, lim sup fn (x) ∧ cn − ηn f0 (x) ∧ cn  = 0,
(7.26)
n→∞ x∈K q
n
lim sup sup E[en(fn (Yn (s))∧cn −fn (x)∧cn ) ] − 1 = 0,
(7.27)
n→∞ x∈K q s≤δn n
and for each sequence zn ∈ Knq satisfying ηn0 (zn ) → y and limn→∞ fn (zn ) = γf0 (y) < ∞, lim sup gn (zn ) ≤ (g0 )∗ (y).
(7.28)
n→∞
In addition, assume that for each 0 < < 1 lim encn sup P {σn ≤ δn Yn (0) = x} = 0.
(7.29)
n→∞
q x∈Kn
Then there exist (fn,0 , gn,0 ) ∈ Hn satisfying (7.18), (7.19), and (7.20). The analogous result holds for (fn , gn ) ∈ H‡,n , (f1 , g1 ) ∈ C u (E, R) × M l (E, R), giving a sequence satisfying (7.21), (7.22), and (7.23). Proof. Define fn,0 to satisfy Z δn 1 enfn,0 (x) = E[enfn (Yn (s))∧cn Yn (0) = x]ds, δn 0 so gn,0 (x) = Hn fn,0 (x) =
1 −nfn,0 (x) 1 E[enfn (Yn (δn ))∧cn Yn (0) = x] − enfn (x)∧cn e An enfn,0 (x) = . R δn n n E[enfn (Yn (s))∧cn Yn (0) = x]ds 0
Note that bf = inf n,y∈En fn,0 (y) ≥ inf n,y∈En fn (y) > −∞, fn,0 ≤ cn , and kgn,0 k ≤ 1 n(cn −cn ∧bf ) , and setting C g = supn,y∈En gn (y), the supermartingale property nδn e implies sup gn,0 (y) ≤ sup n,y∈En
n,y∈En
enδn C g − 1 nδn inf s≤δn E[en(fn (Yn (s))∧cn −fn (y)∧cn ) Yn (0) = y]
and (7.19) holds by (7.25) and (7.27).
,
7.1. VISCOSITY SOLUTIONS, DEFINITION AND CONVERGENCE
117
For each q ∈ Q, Z δn 1 n(fn,0 (x)−fn (x)∧cn ) − 1 ≤ sup e sup E[en(fn (Yn (s))∧cn −fn (x)∧cn ) Yn (0) = x] − 1 ds → 0, q q δ n 0 x∈Kn x∈Kn so by (7.26) and (7.27) we have lim sup fn,0 (x) − ηn f0 (x) ∧ cn  = 0,
n→∞ x∈K q
n
and (7.18) follows. Now suppose zn ∈ Knq , ηn0 (zn ) → y, and limn→∞ fn (zn ) = γf0 (y) < ∞. Let Cn, (gn ) = sup{gn (y) : r0 (η 0 (y), η 0 (zn )) ≤ }. Then E[en(fn (Yn (δn ))∧cn −fn (zn )∧cn ) Yn (0) = zn ]
R δn
n(fn (Yn (δn ))∧cn −fn (zn )∧cn )−
≤ E[e
∧δn σn
ngn (Yn (s))ds+n(δn −δn ∧σn )C g
n(fn (Yn (δn ∧σn ))∧cn −fn (zn )∧cn )
+E[e
1{fn (Yn (δn ∧σn )≥cn } Yn (0) = zn ]
n(fn (Yn (δn ∧σn ))∧cn −fn (zn )∧cn )+n(δn −δn ∧σn )C g
≤ E[e
n(fn (Yn (δn ∧σn ))∧cn −fn (zn )∧cn )
≤ E[e
1{fn (Yn (δn ∧σn ) 0 with n → 0. Suppose (fn , gn ) ∈ M l (En , R) × M u (En , R) satisfy Tn enfn ≤ enfn +n ngn , allowing ∞ ≤ ∞, and (f0 , g0 ) ∈ C l (E, R)×M u (E 0 , R). Suppose that inf n,y∈En fn (y) > −∞, supn,y∈En gn (y) < ∞, and that there exist cn → ∞ satisfying (7.30)
lim
n→∞
1 (cn − log n ) = 0 υn
such that for each q ∈ Q, (7.31)
lim sup fn (x) ∧ cn − ηn f0 (x) ∧ cn  = 0,
n→∞ x∈K q
n
and for each sequence zn ∈ Knq satisfying ηn0 (zn ) → y and limn→∞ fn (zn ) = γf0 (y) < ∞, (7.32)
lim sup gn (zn ) ≤ (g0 )∗ (y). n→∞
Then setting fn,0 = fn ∧ cn and 1 log e−nfn,0 Tn enfn,0 , nn (7.18), (7.19), and (7.20) are satisfied. The analogous result holds for (fn , gn ) satisfying gn,0 =
Tn enfn ≥ enfn +n ngn and (f1 , g1 ) ∈ C u (E, R) × M l (E, R), giving a sequence satisfying (7.21), (7.22), and (7.23). Proof. The convergence in (7.18) follows from the lower bound on the fn and (7.31). We have 1 kgn,0 k ≤ cn − inf fn (y)), y∈En n so the first part of (7.19) holds. If fn (y) > cn , then gn,0 (y) ≤ 0, and if fn (y) ≤ cn , then gn,0 (y) ≤ gn (y), so the second part of (7.19) holds and (7.20) is given by (7.32).
Lemma 7.14. Let An be a bounded generator, that is, An is of the form Z (7.33) An f (x) = λn (x) (f (z) − f (x))qn (x, dz), En
with λn bounded, and let Hn be given by (7.17). Let Condition 7.9 hold and {Yn } satisfy Condition 2.8. Let H† ⊂ C l (E, R) × M u (E 0 , R) and H‡ ⊂ C u (E, R) × M l (E 0 , R), and assume that the Convergence Condition 7.11 holds. Let α > 0 and h ∈ Cb (E), and assume that the comparison principle holds for subsolutions of (7.34)
(I − αH† )f = h,
7.1. VISCOSITY SOLUTIONS, DEFINITION AND CONVERGENCE
119
and supersolutions of (I − αH‡ )f = h.
(7.35)
Let (fn , gn ) ∈ Hn , and define fn − αgn = hn . If h = LIMhn , then there exists a unique f ∈ B(E) that is a viscosity subsolution of (7.34) and a supersolution of (7.35), f ∈ Cb (E), and f = LIMfn . For (f0 , g0 ) ∈ H† , sup (f − f0 )(x) ≤ sup (γh − (γf0 − αg0 ))(y),
(7.36)
y∈E 0
x∈E
and for (f1 , g1 ) ∈ H‡ , inf (f − f1 )(x) ≥ inf 0 (γh − (γf1 − αg1 ))(y).
(7.37)
x∈E
y∈E
Remark 7.15. LIMgn,0 ∨c = g0 ∨c for all c ∈ R implies (7.20) and LIMgn,1 ∧c = g1 ∧ c implies (7.23). Proof. Note that kfn k ≤ khn k. By assumption, for each x ∈ E, there exists q ∈ Q and zn ∈ Knq such that limn→∞ ηn (zn ) = x. Let f3 (x) = sup{lim sup fn (zn ) : zn ∈ Knq , lim ηn (zn ) = x, q ∈ Q}, n→∞
n→∞
and f4 (x) = inf{lim inf fn (zn ) : zn ∈ Knq , lim ηn (zn ) = x, q ∈ Q}, n→∞
n→∞
and define f , f ∈ B(E) by f = f3∗ and f = (f4 )∗ . Then f is bounded and uppersemicontinuous, and f is bounded and lowersemicontinuous. We claim that they are, respectively, a subsolution of (7.34) and a supersolution of (7.35). We only prove this assertion for f . The argument for f is similar. Let (f0 , g0 ) ∈ H† . By assumption, there exists (fn,0 , gn,0 ) ∈ Hn such that n and (7.18), (7.19) and (7.20) hold. Let 0 < < α, and set hn = fn − fn −h α h0,n = f0,n − g0,n . Note that khn k ≤ khn k. As in the proof of Lemma 5.9, fn − f0,n ≤ (I − Afnn )−1 (hn − hn,0 ), and by Condition 2.8, for each q ∈ Q and δ > 0, there exists qb ∈ Q depending on q, δ, supn khn k, and inf n,x∈En f0,n (x) and supn,x∈En g0,n (x) such that for any sequence yn ∈ Knq there is a sequence zn ∈ Knqb satisfying (7.38) lim sup(fn (yn ) − f0,n (yn )) n→∞
≤ lim sup(I − Afnn )−1 (hn − h0,n )(yn ) n→∞
≤ δ + lim sup sup (hn (y) − h0,n (y)) n→∞ y∈K qb n
= δ + lim sup((1 − α−1 )fn (zn ) + g0,n (zn ) + α−1 hn (zn ) − f0,n (zn )), n→∞
By Condition 7.9(c), there exists a subsequence along which the lim sup on the 0 b 0qb ⊂ E 0 . Consequently, we can right is achieved and ηn(k) (zn(k) ) converges to z0 ∈ K 0 assume that ηn (zn ) converges to z0 . If the left side of (7.38) is greater than −∞, we
120
7. EXTENSION OF VISCOSITY SOLUTION METHODS
must have f0 (z0 ) < ∞ and hence lim supn→∞ f0,n (zn ) = limn→∞ f0,n (zn ) = f0 (z0 ), and by the continuity of h and f0 , the definition of f , and (7.20), we have lim sup(fn (yn ) − f0,n (yn ))
(7.39)
n→∞
≤ δ + ((1 − α−1 )γf (z0 ) + g0∗ (z0 ) + α−1 γh(z0 ) − γf0 (z0 )). Since δ is arbitrary, by the definition of f3 and f , we have sup (f (x) − f0 (x))
sup (γf (y) − γf0 (y))
=
y∈E 0
x∈E
≤
(7.40)
sup y∈E 0
γf (y) − γf0 (y) − (
γf (y) − γh(y) − g0∗ (y)) . α
By Lemma 7.8, there exist yk ∈ E 0 such that sup (γf (y) − γf0 (y)) = lim (γf (yk ) − γf0 (yk )) k→∞
y∈E 0
and γf (yk ) − γh(yk ) − g0∗ (yk )) ≤ 0. α Consequently, f is a subsolution of (7.34). By the comparison principle and the construction of f and f , f = f ≡ f ∈ Cb (E). To see that f = LIMfn , it is enough to show that for each q ∈ Q and each sequence yn ∈ Knq , lim fn (yn ) − f (ηn (yn )) = 0. lim sup( k→∞
n→∞
Since f is continuous, by Condition 7.9(c), it is enough to show that if ηn (yn ) → x0 , then fn (yn ) → f (x0 ). By definition, f (x0 ) ≤ f4 (x0 ) ≤ lim inf fn (yn ) ≤ lim sup fn (yn ) ≤ f3 (x0 ) ≤ f (x0 ). n→∞
n→∞
Since f = f , we have f = LIMfn . Finally, (7.36) follows from (7.40) by sending → α, and (7.37) can be proved similarly. We generalize Lemma 6.12 and Theorem 6.13 next. Lemma 7.16. Let An be a bounded generator. Recalling that in this case R(I − αHn ) = B(En ), define (n) Rα f = (I − αHn )−1 f,
f ∈ B(En ), α > 0.
Let Condition 7.9 hold and {Yn } satisfy Condition 2.8. Let H† and H‡ be as in Lemma 7.14. For α > 0, let Dα ⊂ Cb (E). Assume that for each h ∈ Dα , the comparison principle holds for subsolutions of (I − αH† )f = h,
(7.41) and supersolutions of
(I − αH‡ )f = h.
(7.42) Define
(n) D(Rα ) ≡ {h ∈ Cb (E) : LIMRα ηn h exists and is continuous}, (n)
and Rα h = LIMRα ηn h for h ∈ D(Rα ).
7.1. VISCOSITY SOLUTIONS, DEFINITION AND CONVERGENCE
121
Then a) Dα ⊂ D(Rα ), and for each h ∈ Dα , Rα h is the unique function in Cb (E) that is both a viscosity subsolution of (7.41) and a supersolution of (7.42). b ⊂ E, b) For K ⊂ E compact, δ > 0, and α, there exists another compact K depending on K, δ, and α such that for h1 , h2 ∈ D(Rα ), sup Rα h1 (x) − Rα h2 (x) ≤ δkh1 − h2 k + sup h1 (x) − h2 (x)
(7.43)
b
x∈K
x∈K
and hence kRα h1 − Rα h2 k ≤ kh1 − h2 k.
(7.44)
c) D(Rα ) is bucclosed. d) For 0 < α, µ < α0 , if g ∈ D(Rµ ), then Rµ g − αµ−1 (Rµ g − g) ∈ D(Rα ) and Rµ g = Rα (Rµ g − αµ−1 (Rµ g − g)),
(7.45) and hence
kRα Rµ g − Rµ gk ≤ αµ−1 kRµ g − gk. e) Let α0 > 0. Suppose, in addition, that for 0 < α < α0 , Cb (E) is the bucclosure of Dα (so D(Rα ) = Cb (E)). Then for 0 < µ, λ, α < α0 and h1 , h2 ∈ Cb (E), (7.46)
kRλ h1 − Rµ h2 k ≤ kRλ h1 − Rµ h2 − α(
R µ h2 − h2 R λ h1 − h1 − )k. λ µ
f) For h ∈ D(Rα ), (f0 , g0 ) ∈ H† , and (f1 , g1 ) ∈ H‡ , sup (Rα h − f0 )(x) ≤ sup (γh − γf0 + αg0 )(y)
(7.47)
y∈E 0
x∈E
and inf (Rα h − f1 )(x) ≥ inf 0 (γh − γf1 + αg1 )(y),
(7.48)
x∈E
y∈E
and hence, if h ∈ D(H† ) ∩ D(H‡ ) ∩α 0, there exists qb ∈ Q, depending on q, δ, and α such that lim sup sup (ηn Rα h1 (z) − ηn Rα h2 (z)) q n→∞ z∈Kn
= lim sup sup ((I − αHn )−1 ηn h1 (z) − (I − αHn )−1 ηn h2 (z)) q n→∞ z∈Kn
≤ δkh1 − h2 k + lim sup sup (ηn h1 (z) − ηn h2 (z)). n→∞ z∈K qb n
By the continuity of hi and Rα hi , (7.50)
sup (Rα h1 (z) − Rα h2 (z) ≤ δkh1 − h2 k + sup h1 (x) − h2 (x), z∈K q
b
x∈K qb
122
7. EXTENSION OF VISCOSITY SOLUTION METHODS
b qb = lim supn→∞ ηn (Knqb). Since the roles where K q = lim inf n→∞ ηn (Knq ) and K b qb is compact by Condition 7.9(c), (7.43) of h1 and h2 can be interchanged and K follows. Suppose h is bucapproximable by D(Rα ). (See Section A.2.) Then there exists a constant Ch and h,K ∈ D(Rα ), > 0, K ⊂ E compact, such that kh,K k ≤ Ch and supx∈K h(x) − h,K (x) ≤ . By (7.43), for each K ⊂ E compact and δ > 0, b ⊂ E depending on K, δ, and α such that there exists another compact K (7.51)
sup Rα h1 ,K1 (x) − Rα h2 ,K2 (x) ≤ 2δCh + sup (h1 ,K1 − h2 ,K2 )(x).
b
x∈K
x∈K
b and K2 ⊃ K, b the right side is bounded by 2δCh + 1 + 2 . It Taking K1 ⊃ K follows that there exists a bounded function f such that for each compact K ⊂ E b ⊂ E as above, and K sup f (x) − Rα h,K2 (x) ≤ 2δCh + sup (h − h,K2 )(x),
(7.52)
b
x∈K
x∈K
b the right side is bounded by 2δCh +. The continuity of Rα h,K where for K2 ⊃ K, 2 implies the continuity of f . By (5.7), for every a, T > 0 and q ∈ Q, there exists qb = qb(q, a, T ) ∈ Q, (7.53)
(n) (n) sup Rα ηn h(y) − Rα ηn h,K2 (y)
q y∈Kn
−1
≤ (e−α
+ C(q, a, T, Ch )e−na )kh − h,K2 k + sup ηn h(y) − ηn h,K2 (y),
T
q b(q,a,T )
y∈Kn
and hence (7.54)
(n) sup Rα ηn h(y) − ηn f (y)
q y∈Kn
−1
≤ (e−α T + C(q, a, T, Ch )e−na )kh − h,K2 k + sup ηn h(y) − ηn h,K2 (y) q b(q,a,T )
y∈Kn
+ sup ηn f (y) − ηn Rα h,K2 (y) q y∈Kn
(n) + sup Rα ηn h,K2 (y) − ηn Rα h,K2 (y). q y∈Kn
Consequently, (7.55)
(n) lim sup sup Rα ηn h(y) − ηn f (y) q n→∞ y∈Kn
−1
≤ 2Ch e−α
T
+
sup
x∈K qb(q,a,T )
h(x) − h,K2 (x)
+ sup f (x) − Rα h,K2 (x), x∈K q
and (7.52) and the fact that T is arbitrary imply the left side is zero. It follows that h ∈ D(Rα ) and Rα h = f . By Lemma 2.11 of Miyadera [86], (n)
(n) Rµ(n) g = Rα (Rµ(n) g − α
Rµ g − g ), µ
7.1. VISCOSITY SOLUTIONS, DEFINITION AND CONVERGENCE
123
for g ∈ B(En ), α > 0, µ > 0. Part (d) follows by Lemma 5.9. Considering Part (e), by Part (c), D(Rα ) = Cb (E), for all α < α0 . Let h1 , h2 ∈ Cb (E). As in (5.7), for each q ∈ Q and δ > 0, there exists qb ∈ Q such that (n)
lim sup sup (Rµ(n) ηn h1 (y) − Rλ ηn h2 (y)) n→∞
q y∈Kn
= lim sup sup q n→∞ y∈Kn
(n)
(n) Rα (Rµ(n) ηn h1 − α
Rµ ηn h1 − ηn h1 ) µ
(n) Rλ ηn h2 − ηn h2 ) (y) λ (n) Rµ ηn h1 − ηn h1 ) ≤ δ + lim sup sup (Rµ(n) ηn h1 − α µ n→∞ y∈K qb n (n)
(n) −Rα (Rλ ηn h2 − α
(n)
−(Rλ ηn h2 − α
(n) Rλ ηn h2 − ηn h2 ) (y). λ
Consequently, for any x0 ∈ E, (Rµ h1 − Rλ h2 )(x0 ) R µ h1 − h1 R λ h2 − h2 ≤ δ + sup (Rµ h1 − α ) − (Rλ h2 − α ) (x), µ λ x giving (7.46). (n) Finally, (7.47) can be obtained by replacing fn in (7.38) by Rα ηn h and f by Rα h and then following the proof of (7.36). We now derive the main semigroup convergence theorem in this section. For r > 0, set Br = {h ∈ Cb (E) : khk ≤ r}. Theorem 7.17. For n = 1, 2, . . ., let An be the full generator (not necessarily bounded) of a Markov process Yn , and let Hn be given by (7.17). Let Condition 7.9 hold and {Yn } satisfy Condition 2.8. Let H† ⊂ C l (E, R) × M u (E 0 , R) and H‡ ⊂ C u (E, R) × M l (E 0 , R), and assume that the Convergence Condition 7.11 holds. Let α0 > 0, and assume that for each 0 < α < α0 , there exists Dα ⊂ Cb (E) such that for each r > 0, Br is the bucclosure of Dα ∩ Br (see Theorem A.8), and for each h ∈ Dα , the comparison principle holds for subsolutions of (7.56)
(I − αH† )f = h
and supersolutions of (7.57)
(I − αH‡ )f = h.
Then a) For each h ∈ Dα , 0 < α < α0 , there exists a unique function in Cb (E), denoted Rα h, that is a viscosity subsolution of (7.56) and a supersolution of (7.57). b) Rα has a unique extension to Cb (E) that gives a continuous mapping of Br into Cb (E) under the buctopology for each r > 0. (We will continue to denote the extension by Rα ). In particular, (7.58)
Rα f = buc lim Rα fm m→∞
124
7. EXTENSION OF VISCOSITY SOLUTION METHODS
whenever buc limm→∞ fm = f . c) The operator (7.59)
b = ∪0 0 and n enυn → 0, where {υn } is the sequence in Condition 7.11. (Note also the role of n in (5.12).) Define Ann f = An (I − n An )−1 f, Hnn f
f ∈ B(En ),
1 −nf n nf An e . ne
and = Then, as in the proof of Lemma 5.13, if (fn , gn ) ∈ Hn and limn→∞ nn kgn k = 0, for n sufficiently large, enfn (1 − nn gn ) > 0 and defining fbn so that
b
enfn = enfn (1 − nn gn ),
(7.62) we have (7.63)
b
gbn = Hnn fbn = en(fn −fn ) gn =
gn . 1 − nn gn
It follows that lim kfn − fbn k = lim kgn − gbn k = 0.
n→∞
n→∞
Suppose (fn,0 , gn,0 ) satisfies (7.18), (7.19) and (7.20). Define fbn,0 and gbn,0 as in (7.62) and (7.63) with (fn , gn ) replaced by (fn,0 , gn,0 ). Then {(fbn,0 , gbn,0 )} satisfies (7.18), (7.19) and (7.20). Similarly, for (fn,1 , gn,1 ) satisfying (7.21), (7.22) and (7.23), there exist fbn,1 and gbn,1 = Hnn fbn,1 satisfying (7.21), (7.22) and (7.23). Hnn then satisfies the conditions of Lemma 7.16, and Part (a) follows. For each r > 0, by (7.43), the operator that arises as the limit of (I − αHnn )−1 is a continuous mapping of Br into Cb (E) that agrees with the Rα of Part (a) on bα be an extension of Rα to Cb (E) that is continuous on the dense set Dα . Let R b bα h}. Then by the continuity of Rα and R bα Br . Let D = {h ∈ Cb (E) : Rα h = R
7.2. LARGE DEVIATION APPLICATIONS
125
b ∩ Br must be closed, and hence, by Lemma A.7, D b is bucclosed. Since on Br , D bα on Dα , it follows that Rα = R bα on Cb (E), so Part (b) follows. Rα = R b for each 0 < α < α0 , D(H) b ⊂ Cb (E) = R(I − αH). b By the definition of H, b b Hence H satisfies the range condition 5.1. The dissipativity of H follows from Part (e) of Lemma 7.16. The first statement in Part (d) follows from Proposition 5.3. The second statement follows by (7.49). Considering Part (e), Part (c) of Lemma 7.16 implies that b ⊂ exLIMHnn . H b Now, applying Lemmas 5.9, 5.12 and Part (b) of Proposition 5.5, for f ∈ D(H) and fn ∈ B(En ) satisfying LIMfn = f and tn → t ≥ 0, we have LIMVnn (tn )fn = V (t)f.
(7.64) If we define
D(V (t)) = {f ∈ Cb (E) : LIMVnn (tn )ηn f exists and is continuous}, then D(V (t)) is bucclosed by essentially the same proof as for Part (c) of Lemma 7.16, using (5.13) in place of (5.7). By (5.13) (7.65)
sup V (t)h1 (y) − V (t)h2 (y) ≤ y∈K
b
sup
h1 (y) − h2 (y),
y∈K(K,kh1 k∨kh2 k,t)
and it follows that V (t) is buccontinuous on D(V (t)) ∩ Br . Then V (t)f agrees b with (7.60) for f ∈ D(H). Let Vb (t) be an extension of V (t) b to D that is D(H)
b = {f ∈ D : Vb (t)f = V (t)f }. Then buccontinuous on D ∩ Br for each r > 0. Let D b b is bucclosed. Hence D = D. b D ∩ Br is bucclosed for each r, so by Lemma A.7, D Part (f) follows by Part (e) and Lemma 5.11. 7.2. Large deviation applications Following the same arguments as in Theorem 6.14, we have the following large deviation theorem. Theorem 7.18. Let Condition 7.9 hold and {Yn } satisfy Condition 2.8. Assume that one of the following holds: a) (Continuoustime case.) For each n = 1, 2, . . ., An ⊂ B(En ) × B(En ) and existence and uniqueness hold for the DEn [0, ∞)martingale problem for (An , µ) for each initial distribution µ ∈ P(En ). Letting Pyn ∈ P(DEn [0, ∞)) denote the distribution of the solution of the martingale problem for An starting from y ∈ En , the mapping y → Pyn is Borel measurable taking the weak topology on P(DEn [0, ∞)) (cf. Theorem 4.4.6, Ethier and Kurtz [36]). Yn is a solution of the martingale problem for An , and 1 Hn f = e−nf An enf , enf ∈ D(An ), n or if An is multivalued, 1 Hn = {(f, e−nf g) : (f, g) ∈ An }. n
126
7. EXTENSION OF VISCOSITY SOLUTION METHODS
b) (Discretetime case.) For each n = 1, 2, . . ., Tn is a transition operator on B(En ) for a Markov chain, n > 0, Yn is a discretetime Markov chain with timestep n and transition operator Tn , that is, E[f (Yn (t))Yn (0) = y] = Tn[t/n ] f (y), and 1 log e−nf Tn enf , nn The sequence n → 0. Define {Vn (t)} on B(En ) by Hn f =
Vn (t)f (y) =
f ∈ B(En ).
1 log E[enf (Yn (t)) Yn (0) = y]. n
Let H† ⊂ C l (E, R) × M u (E 0 , R) and H‡ ⊂ C u (E, R) × M l (E 0 , R), and assume that the Convergence Condition 7.11 holds. Let α0 > 0, and assume that for each 0 < α < α0 , there exists a subset Dα ⊂ Cb (E) such that Cb (E) is the bucclosure of Dα and that for each h ∈ Dα , the comparison principle holds for subsolutions of (I − αH† )f = h, and supersolutions of (I − αH‡ )f = h. Let F ⊂ Cb (E) and S ⊂ R satisfy Condition (c) of Corollary 4.19, and assume that for f ∈ F and λ ∈ S, λf ∈ D(H† ). b Assume that b be defined in (7.59), and let D be the bucclosure of D(H). Let H D is closed under addition and contains a set that is bounded above and isolates points. (If ∩0 0, Qn satisfies Z T dQn n n = exp{nfn (Yn (T )) − nfn (Yn (0)) − nHnn fn (Ynn (s))ds} dP DEn [0,T ] 0 ≤ exp{2nkfn k + nT kHnn fn k} ≤ exp{2n(1 + α−1 T )kh0 k}, where the last inequality follows from (7.69). By the exponential tightness of {ηn (Ynn ) : n = 1, 2, . . .} under P , for each T0 > 0 and q ∈ Q, there exists a compact Kq,T0 ⊂ DE [0, ∞) and C > 0 such that P {ηn (Ynn ) 6∈ Kq,T0 Ynn (0) = y} ≤ Ce−n−2n(1+T0 )kh0 k ,
y ∈ Knq .
Consequently Qn DEn [0,αT0 ] {ηn (Ynn ) 6∈ Kq,T0 Ynn (0) = y} ≤ Ce−n ,
y ∈ Knq .
Setting Z Gα,T0 (y) =
αT0
−1
α−1 e−α
s
E Qn [h0 (ηn (Ynn (s))) − h0 (ηn (Ynn (0)))Ynn (0) = y]ds
0
and using the above estimate, for y ∈ Knq , Z αT0 −1 Gα,T0 (y) ≤ sup α−1 e−α s (h0 (x(s)) − h0 (x(0)))ds x∈Kq,T0
0
+2kh0 kQn DEn [0,αT0 ] {ηn (Ynn ) 6∈ Kq,T0 Ynn (0) = y} Z ≤
sup x∈Kq,T0
T0
e−r (h0 (x(αr)) − h0 (x(0)))dr + 2Ckh0 ke−n ,
0
implying lim sup lim sup sup Gα,T0 (y) ≤ 0. α→0+
Moreover, Z
∞
−1
n→∞ y∈K
α−1 e−α s E Qn [h0 (ηn (Ynn (s))) − h0 (ηn (Ynn (0)))Ynn (0) = y]ds αT0 Z ∞ −1 ≤ α−1 e−α s ds2kh0 k = 2e−T0 kh0 k. αT0
7.3. CONVERGENCE USING PROJECTED OPERATORS
129
Combining the above estimates, lim sup lim sup sup (I − αHnn )−1 ηn h0 (y) − ηn h0 (y) ≤ 2e−T0 kh0 k, α→0+
q n→∞ y∈Kn
and by (7.67) and (7.68), we have buc − lim Rα h0 = h0 . α→0
Lemma 7.20. Under the assumptions of Theorem 7.18, for f0 ∈ D(H† )∩Cb (E), buc − limα→0 Rα f0 = f0 . Consequently, if D(H† ) ∩ Cb (E) is bucdense in Cb (E), then D = Cb (E). Proof. Let (f0 , g0 ) ∈ H† . For 0 < α < α0 , Dα is bucdense in Cb (E), and since D(Rα ) ⊃ Dα is bucclosed, D(Rα ) = Cb (E) and f0 ∈ D(Rα ). Consequently, taking h = f0 in (7.47), sup (Rα f0 − f0 )(x) ≤ α sup g0 (y).
(7.70)
y∈E 0
x∈E
The proof of the corresponding lower bound is the same as in Lemma 7.19.
7.3. Convergence using projected operators In a number of examples, it is not possible to select {Knq } so that Condition 2.8 holds and Condition 7.9 holds for {ηn0 (Knq )}. We can drop this requirement at the cost of relaxing the requirements on the solutions (and, hence, making the verification of the comparison principle more difficult). Define π ∗ , π∗ : M (E 0 , R) → M (E, R) by π ∗ g(x) = lim
sup
→0 γ(y)∈B (x)
g(y)
π∗ g(x) = lim
inf
→0 γ(y)∈B (x)
g(y).
For H ⊂ C(E, R) × M (E 0 , R), define π ∗ H = {(f, π ∗ g) : (f, g) ∈ H} ⊂ C(E, R) × M (E, R) and π∗ H = {(f, π∗ g) : (f, g) ∈ H}. Let h ∈ Cb (E). If f is a viscosity subsolution of (7.1), then f is a viscosity subsolution of (I − απ ∗ H)f = h, and similarly, if f is a viscosity supersolution of (7.1), then f is a viscosity supersolution of (I − απ∗ H)f = h. In particular, if the comparison principle holds for subsolutions of π ∗ H† and supersolutions of π∗ H‡ , then it holds for subsolutions of H† and supersolutions of H‡ . The convergence condition then becomes Condition 7.21. (Convergence Condition) Let H† ⊂ C l (E, R) × M u (E 0 , R) and H‡ ⊂ C u (E, R) × M l (E 0 , R), and let υn → ∞. For each (f0 , g0 ) ∈ H† , assume that there exist (fn,0 , gn,0 ) ∈ Hn such that LIMfn,0 ∧ c = f0 ∧ c, for each c ∈ R,
(7.71) (7.72)
sup n
1 log kgn,0 k < ∞ and υn
sup gn,0 (y) < ∞, n,y∈En
130
7. EXTENSION OF VISCOSITY SOLUTION METHODS
and that for each q ∈ Q and each sequence zn ∈ Knq satisfying ηn (zn ) → x lim sup gn,0 (zn ) ≤ π ∗ g0 (x).
(7.73)
n→∞
For each (f1 , g1 ) ∈ H‡ , assume that there exist (fn,1 , gn,1 ) ∈ Hn such that LIMfn,1 ∨ c = f1 ∨ c, for each c ∈ R,
(7.74) (7.75)
sup n
1 log kgn,1 k < ∞ and υn
inf gn,1 (y) > −∞,
n,y∈E 0
and that for each q ∈ Q and each sequence zn ∈ Knq satisfying ηn (zn ) → yx lim inf gn,1 (zn ) ≥ π∗ g1 (x).
(7.76)
n→∞
We have the following variation of Lemma 7.14 Lemma 7.22. Let An be a bounded generator, that is, An is of the form Z (7.77) An f (x) = λn (x) (f (z) − f (x))qn (x, dz), En
with λn bounded, and let Hn be given by (7.17). Let Condition 7.9 hold without the requirement (7.15), and let {Yn } satisfy Condition 2.8. Let H† ⊂ C l (E, R) × M u (E 0 , R) and H‡ ⊂ C u (E, R) × M l (E 0 , R), and assume that the Convergence Condition 7.21 holds. Let α > 0 and h ∈ Cb (E), and assume that the comparison principle holds for subsolutions of (I − απ ∗ H† )f = h,
(7.78) and supersolutions of
(I − απ∗ H‡ )f = h.
(7.79)
Let (fn , gn ) ∈ Hn , and define fn − αgn = hn . If h = LIMhn , then there exists a unique f ∈ B(E) that is a viscosity subsolution of (7.78) and a supersolution of (7.79), f ∈ Cb (E), and f = LIMfn . For (f0 , g0 ) ∈ H† , (7.80)
sup (f − f0 )(x) ≤ sup (h − (f0 − απ ∗ g0 ))(x), x∈E
x∈E
and for (f1 , g1 ) ∈ H‡ , (7.81)
inf (f − f1 )(x) ≥ inf (h − (f1 − απ∗ g1 ))(x).
x∈E
x∈E
Remark 7.23. LIMgn,0 ∨c = g0 ∨c for each c ∈ R implies (7.73), and LIMgn,1 ∧ c = g1 ∧ c for each c ∈ R implies (7.76). Proof. Define f and f as in the proof of Lemma 7.14. We claim that they are, respectively, a subsolution of (7.78) and a supersolution of (7.79). We only prove this assertion for f . The argument for f is similar. Let (f0 , g0 ) ∈ H† . By assumption, there exists (fn,0 , gn,0 ) ∈ Hn such that (7.71), (7.72) and (7.73) hold. Then, following the proof of Lemma 7.14, for any
7.3. CONVERGENCE USING PROJECTED OPERATORS
131
sequence yn ∈ Knq there is a subsequence zn ∈ Knqb satisfying ηn (zn ) → x0 , such that (7.82) lim sup(fn (yn ) − f0,n (yn )) n→∞
≤ lim sup(I − Afnn )−1 (hn − h0,n )(yn ) n→∞
≤ δ + lim sup sup (hn (y) − h0,n (y)) n→∞ y∈K qb n
= δ + lim sup((1 − α−1 )fn (zn ) + g0,n (zn ) + α−1 hn (zn ) − f0,n (zn )) n→∞
≤ δ + ((1 − α−1 )f (x0 ) + π ∗ g0 (x0 ) + α−1 h(x0 ) − f0 (x0 )), and hence, f (x) − h(x) − π ∗ g0 (x)) . sup (f (x) − f0 (x)) ≤ sup f (x) − f0 (x) − ( α x∈E x∈E By Lemma 7.8, there exist xk ∈ E such that sup (f (x) − f0 (x)) = lim (f (xk ) − f0 (xk )) k→∞
x∈E
and lim sup( k→∞
f (xk ) − h(xk ) − π ∗ g0 (xk )) ≤ 0. α
Consequently, f is a subsolution of (7.78). The remainder of the proof is that same as for Lemma 7.14.
Similar modifications for the remainder of the results of Section 7.1 hold. Consequently, we have the following theorem. Theorem 7.24. Let Condition 7.9 hold without the requirement that (7.15) hold, and let {Yn } satisfy Condition 2.8. Assume that one of the following holds: a) (Continuoustime case.) For each n = 1, 2, . . ., An ⊂ B(En ) × B(En ) and existence and uniqueness hold for the DEn [0, ∞)martingale problem for (An , µ) for each initial distribution µ ∈ P(En ). Letting Pyn ∈ P(DEn [0, ∞)) denote the distribution of the solution of the martingale problem for An starting from y ∈ En , the mapping y → Pyn is Borel measurable taking the weak topology on P(DEn [0, ∞)) (cf. Theorem 4.4.6, Ethier and Kurtz [36]). Yn is a solution of the martingale problem for An , and 1 −nf e An enf , n or if An is multivalued, Hn f =
enf ∈ D(An ),
1 −nf e g) : (f, g) ∈ An }. n b) (Discretetime case.) For each n = 1, 2, . . ., Tn is a transition operator on B(En ) for a Markov chain, n > 0, Yn is a discretetime Markov chain with timestep n and transition operator Tn , that is, Hn = {(f,
E[f (Yn (t))Yn (0) = y] = Tn[t/n ] f (y),
132
7. EXTENSION OF VISCOSITY SOLUTION METHODS
and
1 log e−nf Tn enf , f ∈ B(En ). nn The sequence n → 0. Define {Vn (t)} on B(En ) by 1 Vn (t)f (y) = log E[enf (Yn (t)) Yn (0) = y]. n Let H† ⊂ C l (E, R) × M u (E 0 , R) and H‡ ⊂ C u (E, R) × M l (E 0 , R), and assume that the Convergence Condition 7.21 holds. Let α0 > 0, and assume that for each 0 < α < α0 , there exists a subset Dα ⊂ Cb (E) such that Cb (E) is the bucclosure of Dα and that for each h ∈ Dα , the comparison principle holds for subsolutions of Hn f =
(I − π ∗ H† )f = h, and supersolutions of (I − απ∗ H‡ )f = h. b Assume that b be defined in (7.59), and let D be the bucclosure of D(H). Let H D is closed under addition and contains a set that is bounded above and isolates points. (If ∩0 0. The pair (x, λ) ∈ DE [0, ∞) × Mm (U ) satisfies the relaxed control equation for A, if and only if Z Af (x(s), u)λ(du × ds) < ∞, U ×[0,t]
and Z (8.1)
f (x(t)) − f (x(0)) −
Af (x(s), u)λ(du × ds) = 0, U ×[0,t] 133
134
8. THE VARIATIONAL FORM OF RATE FUNCTION
for each f ∈ D(A) and t ≥ 0. We denote the collection of pairs satisfying the above identity by J , and for x0 ∈ E, set Jx0 = {(x, λ) ∈ J : x(0) = x0 }. For T > 0, let J T denote the collection of pairs in DE [0, T ] × MTm (U ) satisfying (8.1) for 0 ≤ t ≤ T . Set Jb = {x ∈ DE [0, ∞) : (x, λ) ∈ J , for some λ ∈ Mm (U )}, and define Jbx0 and JbT similarly. Example 8.2. Let E = U = Rd , and let σ(x) : Rd → Md×d and b(x) : Rd → R be bounded and continuous. Define d
Af (x, u) = u · (σ T (x)∇f (x)) + b(x)∇f (x),
f ∈ Cc∞ (Rd ),
and suppose (x, λ) satisfies (8.1). Then Z u · σ T (x(s)) + b(x(s)) λ(du × ds), x(t) − x(0) = U ×[0,t]
R
provided U ×[0,t] uλ(du × ds) < ∞. Since λ ∈ Mm (U ), we can write λ(du × ds) = R λs (du)ds for some P(Rd )valued function s → λs . If we let u(s) = U uλs (du), then x(t) ˙ = u(t) · σ T (x(t)) + b(x(t)). Example 8.3. Let (E, r) = (L2 (O), k · k = k · kL2 (O) ), where O = [0, 1)d with periodic boundary; (U, q) = (H−1 (O), k·k−1 ), where H−1 is the completion of L2 (O) under the norm kρk−1 = hρ, (I − ∆)−1 ρi (see Section 13.1); and let F ∈ C 2 (R) have bounded second derivative. Let D(A) consist of functions of the form (8.2)
f (ρ) = ϕ(kρ − γk2−1 ) + c,
ϕ ∈ Cc∞ (R), γ ∈ E, c ∈ R.
We define gradf (ρ) as the unique element in L2 (O) such that lim t−1 (f (ρ + tq) − f (ρ)) = hgradf (ρ), qi,
t→0
∀q ∈ L2 (O).
Thus, setting B = (I − ∆)−1 , for f ∈ D(A), 0
gradf (ρ) = 2ϕ (kρ − γk2−1 )B(ρ − γ) ∈ H2 (O), and we define Af (ρ, u)
= h∆ρ − F 0 (ρ) + u, gradf (ρ)i 0
=
2ϕ (kρ − γk2−1 )h∆ρ − F 0 (ρ) + u, B(ρ − γ)i
=
2ϕ (kρ − γk2−1 )h(B − I)ρ − BF 0 (ρ) + Bu, ρ − γi.
0
The last equality follows from the fact that ∆B = B − I. Note that Af (ρ, u) is continuous on E × U . For (ρ, λ) ∈ J , Z kρ(t)−γk2−1 −kρ(0)−γk2−1 = 2h∆ρ(s)−F 0 (ρ(s))+u, B(ρ(s)−γ)iλ(du×ds) U ×[0,t]
for γ ∈ L2 (O). Subtracting this identity with γ = 0 and dividing by −2 gives Z (8.3) hρ(t), Bγi − hρ(0), Bγi = h∆ρ(s) − F 0 (ρ(s)) + u, Bγiλ(du × ds). U ×[0,t]
8.1. FORMULATION OF THE CONTROL PROBLEM
If the control satisfies Z (8.4)
kukL2 (O) λ(du × ds) < ∞,
135
t ≥ 0,
U ×[0,t]
R then writing λ(du × ds) = λs (du)ds and defining u(s) = U uλs (du) ∈ L2 (O), we see that ρ is a weak solution of ∂ ρ = ∆ρ − F 0 (ρ) + u. ∂t Under (8.4), for ρ(0) ∈ E, classical theory guarantees that there is a unique solution Rt to this partial differential equation satisfying 0 kρ(s)kds < ∞, t ≥ 0. In some examples, it is necessary to restrict the controls that are allowed depending on the location of the state. Definition 8.4 (Admissible controls). Let Γ ⊂ E × U be closed, and for z ∈ E define Γz = {u : (z, u) ∈ Γ}. The assumption that Γ is closed is equivalent to the assumption that the mapping z → Γz is upper semicontinuous in the sense that lim supy→z Γy ⊂ Γz , where lim sup Γy = ∩>0 cl ∪y∈B (z) Γy , y→z
clD denoting the closure of the set D. Define Z (8.5) J Γ = {(x, λ) ∈ J : IΓ (x(s), u)λ(du × ds) = t, t ≥ 0}. U ×[0,t]
Γz is the set of admissible controls for state z. JxΓ0 , J Γ,t , JbΓ , etc. are defined similarly to Jx0 , J t , Jb, etc. The restriction in (8.5) is equivalent to the requirement that λs (Γx(s) ) = 1 for almost every s. Remark 8.5. The measure λ is known as a relaxed control. For each x ∈ Jb, there could exist more than one λ satisfying the above control equation. A set D ⊂ Cb (E) separates points if for any x 6= y ∈ E, there exists f ∈ D such that f (x) 6=R f (y). A set D ⊂ Cb (E) is convergence determining if for µn , µ ∈ P(E), R f dµn → f dµ for all f ∈ D implies µn converges to µ in the weak topology on on P(E). For example, the collection of bounded, uniformly continuous functions Cbu (E) is convergence determining as is Cc∞ (Rd ) in the case E = Rd . It is easy to see that if D(A) separates points in E, x ∈ Jb, and {x(s) : s ≤ t} is relatively compact for each t > 0, then x ∈ CE [0, ∞), and that if D(A) is convergence determining, every x ∈ Jb is continuous. Let H be one of the operators arising as a limit as in the examples of Chapter 1, and suppose for the moment that H is singlevalued. Frequently, we can find an operator A as in Definition 8.1 and a lower semicontinuous function L on E × U such that Hf = Hf , where (8.6)
Hf ≡ sup (Af (x, u) − L(x, u)), u∈U
or if there is an admissibility condition, (8.7)
Hf ≡ sup (Af (x, u) − L(x, u)). u∈Γx
136
8. THE VARIATIONAL FORM OF RATE FUNCTION
In fact, this representation of H holds quite generally (see Section 8.6). We note that, by definition, H is single valued and D(H) = D(A). In general, H may be multivalued, and the relationship between H and H may be H ⊂ H, or may be expressed implicitly through the resolvent by the requirement that for a large class of h ∈ Cb (E) and α > 0, each viscosity solution of (I − αH)f = h is also a viscosity solution of (I − αH)f = h. More generally, in the extended setup of Chapter 7, we may have two limit operators H† and H‡ instead of H. In this case, we may find that H‡ f ≤ Hf ≤ H† f,
(8.8)
f ∈ D(A),
as in Section 10.5, or the opposite inequalities, H† f ≤ Hf ≤ H‡ f,
(8.9)
f ∈ D(A),
as in Chapter 11. Example 8.6. For Example 8.2, if we let L(x, u) = 12 u2 , then 1 sup (Af (x, u) − L(x, u)) = b(x) · ∇f (x) + σ T (x)∇f (x)2 = Hf (x), 2 u∈Rd where the H coincides with (1.15). Similarly, for Example 8.3, if we let L(ρ, u) = 1/2kuk2L2 (O) , then sup (Af (ρ, u) − L(ρ, u)) = h∆ρ − F 0 (ρ), u∈U
δf 1 δf i + k k2L2 (O) = Hf (ρ), δρ 2 δρ
where H coincides with an extended version of (1.33). (See Section 13.1.) Let V(t) be the Nisio semigroup corresponding to the optimal control problem with dynamics determined by (8.1) and “reward” function −L, that is, Z (8.10) V(t)g(x0 ) = sup {g(x(t)) − L(x(s), u)λ(du × ds)}, (x,λ)∈JxΓ,t 0
[0,t]×U
for each x0 ∈ E. (The supremum of an empty set is defined to be −∞.) Under the conditions discussed in the previous paragraph, we expect that V (t) = V(t) and that we will be able to simplify the form of the rate function I using a variational structure induced by A and L. In the development that follows, we will apply results that were originally stated for the cost minimization problem Z L(x(s), u)λ(du × ds)}, (8.11) S(t)g(x0 ) = inf {g(x(t)) + (x,λ)∈JxΓ,t 0
[0,t]×U
(where inf over the empty set is ∞). Note that S(t)g = −V(t)(−g), so results on S translate immediately into results on V. Definition 8.7 (Tightness Function). Let U be a complete, separable, metric space. A measurable function Φ : U → (−∞, ∞] is a tightness function if inf u∈U Φ(u) > −∞ and for each M ∈ R, {u : Φ(u) ≤ M } is relatively compact in U.
8.1. FORMULATION OF THE CONTROL PROBLEM
137
If U is locally compact, then tightness functions are easy to construct. Take b ). Of course, if U = Rd , then Φ(u) = uβ Φ = ψ −1 for any strictly positive ψ ∈ C(U for some β > 0 is a tightness function. If U is a Hilbert space, {vi } is a complete, orthonormal basis, and {ai } satisfies ai > 0 and ai → ∞, then X Φ(u) = ai hvi , ui2 i
is a tightness function. More generally, if U is a Banach space, then one can construct tightness functions of the form X Φ(u) = ai hvi , uiβ , i
where ai > 0, β > 0, and vi ∈ U ∗ . Tightness functions are introduced as a means of expressing compactness conditions on spaces of measures. In particular, we have the following lemma. Lemma 8.8. Let S be a complete separable metric space, and suppose Φ is a tightness function on S. Then for each M < ∞, Z QM = {µ ∈ P(S) : Φ(x)µ(dx) ≤ M } S
is relatively compact in the weak topology. Proof. Adding a constant to Φ if necessary, we can assume Φ ≥ 0. For 0 < N < ∞, let KN be the closure of {x : Φ(x) ≤ N }. Then KN is compact and x ∈ (KN )c implies Φ(x) > N . If µ ∈ QM , then Z Z Z c N µ(KN )≤ Φdµ + Φdµ ≤ Φdµ ≤ M. KN
c KN
Consequently M , N and, since N is arbitrary, it follows that QM is tight and, therefore, is relatively compact by Prohorov’s theorem. c µ(KN )≤
Much of our development will be based on the following set of conditions. Condition 8.9. (1) A ⊂ Cb (E) × C(E × U ) is singlevalued and D(A) separates points. (2) Γ ⊂ E × U is closed, and for each x0 ∈ E, there exists (x, λ) ∈ J Γ such that x(0) = x0 . (3) L(x, u) : E × U → [0, ∞] is a lower semicontinuous function, and for each c ∈ [0, ∞) and compact K ⊂ E, {(x, u) ∈ Γ : L(x, u) ≤ c} ∩ (K × U ) is relatively compact. (4) For each compact K ⊂ E, T > 0, and 0 ≤ M < ∞, there exists a compact b ≡ K(K, b K T, M ) ⊂ E such that (x, λ) ∈ J Γ , x(0) ∈ K, and Z L(x(s), u)λ(du × ds) ≤ M U ×[0,T ]
138
8. THE VARIATIONAL FORM OF RATE FUNCTION
imply b x(t) ∈ K,
0 ≤ t ≤ T.
(5) For each f ∈ D(A) and compact set K ⊂ E, there exists a right continuous, nondecreasing function ψf,K : [0, ∞) → [0, ∞) such that Af (x, u) ≤ ψf,K (L(x, u)),
∀(x, u) ∈ Γ ∩ (K × U ),
and lim r−1 ψf,K (r) = 0.
r→∞
Condition 8.10. For each x0 ∈ E, there exists (x, λ) ∈ J Γ such that x(0) = x0 and Z L(x(s), u)λ(du × ds) = 0.
(8.12) U ×[0,∞)
We will need one additional assumption regarding existence of solutions of the control problem. Let H† ⊂ D(A) × M u (E) and H‡ ⊂ D(A) × M l (E) satisfy H‡ f (x) ≤ Hf (x), ∀f ∈ D(H‡ ),
(8.13)
Hf (x) ≤ H† f (x), ∀f ∈ D(H† ).
We note that for (x, λ) ∈ J Γ and f ∈ D(H† ), Z t Z (8.14) (H† f )(x(s))ds ≥ (Af (x(s), u) − L(x(s), u))λ(ds × du), 0
t≥0
U ×[0,t]
and we will occasionally need to assume a similar inequality for H‡ . Condition 8.11. For each x0 ∈ E and each f ∈ D(H‡ ), there exists (x, λ) ∈ such that (8.15) Z t2 Z (H‡ f )(x(s))ds ≤ (Af (x(s), u) − L(x(s), u))λ(ds × du), 0 ≤ t1 < t2 . JxΓ0
U ×(t1 ,t2 ]
t1
Remark 8.12. a) Conditions 8.9.1 and 8.9.4 imply that each x ∈ Jb having finite cost on bounded time intervals is continuous. b) Under Condition 8.9.2, each (x, λ) ∈ J Γ,t can be extended to a solution in Γ J , so the supremum in (8.10) can be taken over (x, λ) ∈ JxΓ0 . c) Without loss of generality, we can assume that r−1 ψf,K (r) is nonincreasing. (Otherwise, replace ψ by ψef,K (r) = r sups≥r s−1 ψf,K (s).) d) Let V be the limit semigroup in Chapter 5 (or Chapters 6 or 7), then V (t)1 ≡ 1. Consequently, if (8.10) holds for A and L satisfying Condition 8.9 and V = V , then by Proposition 8.13, for each x0 ∈ E there exists (x, λ) ∈ J Γ such that x(0) = x0 and (8.12) holds. Of course, Condition 8.10 implies the second part of Condition 8.9.2. e) In the important special case H‡ ≡ H ≡ H† , (8.15) becomes Z t Z (8.16) (Hf )(x(s))ds = (Af (x(s), u) − L(x(s), u))λ(ds × du), t ≥ 0. 0
U ×[0,t]
Ordinarily, H‡ 1 = H† 1 = 0, and Condition 8.11 implies Condition 8.10.
8.2. THE NISIO SEMIGROUP
139
8.2. The Nisio semigroup The following semigroup property is a consequence of the dynamic programming principle. Proposition 8.13. Let Condition 8.9 be satisfied. Then a) For each M > 0, compact K ⊂ E, and T > 0, Z T Γ,T KM,K = {(x, λ) ∈ J : L(x(s), u)λ(du × ds) ≤ M, x(0) ∈ K} U ×[0,T ]
is a compact set in CE [0, T ] × MTm (U ). b) {V(t)} defined in (8.10) forms a nonlinear contraction semigroup on the space of upper semicontinuous functions which are bounded above. T Proof. For n = 1, 2, . . ., let (xn , λn ) ∈ KM,K . We must show that a subT sequence converges to an element of KM,K . Without loss of generality, we may b ⊂ E assume that xn (0) → x0 . By Condition 8.9.4, there exists a compact K b 0 ≤ t ≤ T , n = 1, 2, . . .. Define the occupation measure on such that xn (t) ∈ K, E × U × [0, T ] by Z µn (C × [0, t]) = IC (xn (s), u)λn (du × ds). U ×[0,t]
b × U ) × [0, T ]. By Condition 8.9.3, with Then µn is really concentrated on (Γ ∩ K b {µn } is relatively compact in MT (E × U × [0, T ]), so we may as well K = K, m assume that µn → µ. This convergence implies that for each c > 0, Z Z cAf (x, u) cAf (x, u) µn (dx×du×ds) = µ(dx×du×ds), lim n→∞ E×U ×[0,t] c ∨ Af (x, u) E×U ×[0,t] c ∨ Af (x, u) and Condition 8.9.5 implies Z Z cAf (x, u) µn (dx × du × ds) − Af (x, u)µn (dx × du × ds) E×U ×[0,t] c ∨ Af (x, u) E×U ×[0,t] Z ≤ I{Af (x,u)>c} Af (x, u)µn (dx × du × ds) E×U ×[0,t] Z ≤ I{ψf,K b (L(x, u))µn (dx × du × ds) c (L(x,u))≥c} ψf,K E×U ×[0,t]
≤
sup r≥ψ −1c (c)
ψf,Kb (r) r
M,
f,K
where b = inf{r : ψf,Kb (r) ≥ c}. A similar inequality holds with µn replaced by µ. Consequently, Z Z lim Af (xn (s), u)λn (du × ds) = lim Af (x, u)µn (dx × du × ds) n→∞ U ×[0,t] n→∞ E×U ×[0,t] Z = Af (x, u)µ(dx × du × ds), −1 ψf, (c) K
E×U ×[0,t]
for each f ∈ D(A) and each 0 ≤ t ≤ T .
140
8. THE VARIATIONAL FORM OF RATE FUNCTION
b 0 ≤ t ≤ T , n = 1, 2, . . ., in any time interval [a, b] ⊂ [0, T ], Since xn (t) ∈ K, there exists a subsequence along which tn converges to t ∈ [a, b] and xn (tn ) conb ⊂ E. Since D(A) separates points, the value of x(t) is verges to a value x(t) ∈ K uniquely determined by the fact that Z (8.17) f (x(t)) = f (x0 ) + Af (x, u)µ(dx × du × ds), E×U ×[0,t]
for all f ∈ D(A), and it follows that for any sequence sn → t, xn (sn ) → x(t). By this argument, we can construct x(t) ∈ E for t in a countable dense subset J ⊂ [0, T ]. The mapping x : J → E is continuous and hence extends to a continuous mapping x : [0, T ] → E satisfying (8.17) for all t ∈ [0, T ] and all f ∈ D(A). Consequently, for any t ∈ [0, T ] and any sequence sn → t, xn (sn ) → x(t) and it follows that lim sup r(xn (s), x(s)) = 0,
n→∞ s≤T
and in turn, that µ must be of the form µ(dx × du × ds) = δx(s) λs (du)ds. Therefore, there exists λ ∈ MTm (U ) such that (x, λ) satisfies (8.1). The semigroup property follows as in Section 2 of Kurtz [75]. (Results for semigroups of the form {S(t)} translate immediately into results for {V(t)}.) In [75], the state and control spaces are assumed to be locally compact, but the extension to complete, separable metric spaces, under Condition 8.9.4, is straightforward. Also, [75] does not consider admissible controls, but J Γ is a closed subset of J , so the compactness result does not change. 8.3. Control representation of the rate function The primary result of this section is the representation of the rate function given by the following theorem. Theorem 8.14. Suppose (E, r) and (U, q) are complete, separable, metric spaces. Let A : D(A) ⊂ Cb (E) → C(E × U ) and lower semicontinuous L(x, u) ≥ 0 satisfy Conditions 8.9 and 8.10. Suppose that the conditions of Theorem 5.15 hold, and that V is the limit semigroup in that theorem. Define V according to (8.10), and suppose that V (t) = V(t) on D ⊂ Cb (E), where D is the domain of V . Assume that D contains a set that is bounded above and isolates points. Then, for x ∈ DE [0, ∞), Z (8.18) I(x) = I0 (x(0)) + inf { L(x(s), u)λ(du × ds)}, Γ λ:(x,λ)∈Jx(0)
U ×[0,∞)
where inf ∅ = ∞. Remark 8.15. a) Note that x is fixed and that there may be more than one λ for which Γ (x, λ) ∈ Jx(0) . b) If we restrict our attention to a bounded time interval [0, T ], then by Condition 8.10, for x ∈ DE [0, T ] I T (x)
= =
inf{I(z) : z ∈ DE [0, ∞), z(s) = x(s), s ≤ T } Z inf{ L(y(s), u)λ(du × ds) : (y, λ) ∈ J Γ,T , y = x}. U ×[0,T ]
8.4. PROPERTIES OF THE CONTROL SEMIGROUP V
141
Proof. For f ∈ D and x0 ∈ E, Z V (t)f (x0 )
= V(t)f (x0 ) =
{f (x(t)) −
sup (x,λ)∈JxΓ0
L(x(s), u)λ(du × ds)} U ×[0,t]
Z = −
inf
(x,λ)∈JxΓ
0
{
L(x(s), u)λ(du × ds) − f (x(t))}. U ×[0,t]
With reference to Corollary 5.17, let {fm } ⊂ D satisfy the conditions of Corollary 3.21 with S = E and x replaced by x1 . Then It (x1 x0 ) = sup (f (x1 ) − V(t)f (x0 )) f ∈D
Z = sup
inf
f ∈D (x,λ)∈JxΓ0
L(x(s), u)λ(du × ds)}
{f (x1 ) − f (x(t)) + U ×[0,t]
Z = lim
inf
m→∞ (x,λ)∈JxΓ
{fm (x1 ) − fm (x(t)) +
L(x(s), u)λ(du × ds)}. U ×[0,t]
0
Either It (x1 x0 ) = ∞ or thereR exists a minimizing sequence (xm , λm ) such that xm (t) → x1 and lim supm→∞ U ×[0,t] L(xm (s), u)λm (du × ds) < ∞. In the latter case, by Proposition 8.13, there exists a convergent subsequence whose limit (x∞ , λ∞ ) must satisfy x∞ (0) = x0 , x∞ (t) = x1 , and Z It (x1 x0 ) = L(x∞ (s), u)λ∞ (du × ds) U ×[0,t] Z = inf{ L(x(s), u)λ(du × ds) : (x, λ) ∈ J Γ , x(0) = x0 , x(t) = x1 }. U ×[0,t]
Therefore, as in Corollary 5.17, (8.18) follows.
Next, we derive infinitesimal conditions under which V = V. 8.4. Properties of the control semigroup V We study some properties of V under Conditions 8.9 and 8.10. Lemma 8.16. Suppose that Conditions 8.9 and 8.10 hold. Then for each f ∈ Cb (E) and each x0 ∈ E, inf f (x) ≤ V(t)f (x0 ) ≤ sup f (x),
(8.19)
x
x
and V(t)f (x0 ) is a continuous function of t. Proof. For (x, λ) satisfying (8.12), V(t)f (x0 ) ≥ f (x(t)), giving the first inequality. The second inequality in (8.19) follows from the fact that L ≥ 0. To verify the continuity, let (xt , λt ) ∈ J Γ satisfy xt (0) = x0 , Z V(t)f (x0 ) = f (xt (t)) − L(xt (s), u)λt (du × ds), U ×[0,t]
and Z U ×(t,∞)
L(xt (s), u)λt (du × ds) = 0.
142
8. THE VARIATIONAL FORM OF RATE FUNCTION
Then for $r > t$,
\[
\mathbf{V}(r)f(x_0) \ge f(x_t(r)) - \int_{U\times[0,t]} L(x_t(s),u)\,\lambda_t(du\times ds) = f(x_t(r)) - f(x_t(t)) + \mathbf{V}(t)f(x_0).
\]
For $r < t$, the same inequality follows by the nonnegativity of $L$. It follows that
\[
\liminf_{r\to t} \mathbf{V}(r)f(x_0) \ge \mathbf{V}(t)f(x_0).
\]
Note that
\[
\int_{U\times[0,\infty)} L(x_t(s),u)\,\lambda_t(du\times ds) \le 2\|f\|,
\]
so by Proposition 8.13, $\{(x_t,\lambda_t) : 0 \le t \le T\}$, with $(x_t,\lambda_t)$ restricted to $[0,T]$, is relatively compact in $C_E[0,T] \times \mathcal{M}^T_m$. The upper semicontinuity of the mapping
\[
(8.20)\qquad (x,\lambda,t) \to f(x(t)) - \int_{U\times[0,t]} L(x(s),u)\,\lambda(du\times ds)
\]
implies that
\[
(8.21)\qquad \limsup_{r\to t} \mathbf{V}(r)f(x_0) \le \mathbf{V}(t)f(x_0),
\]
giving the desired continuity. □
Let $h$ be measurable and bounded on compact sets, and let $\alpha > 0$. Define $R_\alpha h$ by
\[
(8.22)\qquad R_\alpha h(x_0) = \limsup_{t\to\infty}\ \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \Big\{ \int_{U\times[0,t]} e^{-\alpha^{-1}s}\Big( \frac{h(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds) \Big\}
\]
\[
= \limsup_{t\to\infty}\ \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \Big\{ \int_0^t \alpha^{-1}e^{-\alpha^{-1}s} h(x(s))\,ds - \int_0^\infty \alpha^{-1}e^{-\alpha^{-1}s} \int_{U\times[0,s\wedge t]} L(x(r),u)\,\lambda(du\times dr)\,ds \Big\},
\]
where the second equality follows by
\[
\int_{U\times[0,t]} e^{-\alpha^{-1}s} L(x(s),u)\,\lambda(du\times ds)
= \int_{U\times[0,t]} \int_s^\infty \alpha^{-1}e^{-\alpha^{-1}r}\,dr\; L(x(s),u)\,\lambda(du\times ds)
= \int_0^\infty \alpha^{-1}e^{-\alpha^{-1}r} \int_{U\times[0,r\wedge t]} L(x(s),u)\,\lambda(du\times ds)\,dr,
\]
with the interchange justified by the fact that the integrand is nonnegative.

Lemma 8.17. Suppose $h \in B(E)$ is upper semicontinuous. Then under Condition 8.9, $R_\alpha h$ is an upper semicontinuous function on $E$. Under Conditions 8.9 and 8.10, $\|R_\alpha h\| \le \|h\|$ and
\[
(8.23)\qquad R_\alpha h(x_0) = \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \Big\{ \int_{U\times[0,\infty)} e^{-\alpha^{-1}s}\Big( \frac{h(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds) \Big\}
= \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \Big\{ \int_0^\infty \alpha^{-1}e^{-\alpha^{-1}s}\Big( h(x(s)) - \int_{U\times[0,s]} L(x(r),u)\,\lambda(du\times dr) \Big)\,ds \Big\}.
\]

Proof. The boundedness of $h$ and the nonnegativity of $L$ imply that $R_\alpha h(x_0)$ is the decreasing limit of
\[
R^t_\alpha h(x_0) = \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \Big\{ e^{-\alpha^{-1}t}\|h\| + \int_0^t \alpha^{-1}e^{-\alpha^{-1}s} h(x(s))\,ds - \int_0^\infty \alpha^{-1}e^{-\alpha^{-1}s} \int_{U\times[0,s\wedge t]} L(x(r),u)\,\lambda(du\times dr)\,ds \Big\}.
\]
Upper semicontinuity of $R^t_\alpha h$ follows as in the proof of Proposition 8.13, so $R_\alpha h$ is upper semicontinuous, since it is the decreasing limit of a sequence of upper semicontinuous functions. Under Condition 8.10, the right side of (8.23) differs from $R^t_\alpha h(x_0)$ by less than $2e^{-\alpha^{-1}t}\|h\|$, giving the desired equality. □

Lemma 8.18. Suppose Conditions 8.9 and 8.10 hold, and let $\{\mathbf{V}(t)\}$ be defined by (8.10). Then for each $f \in C_b(E)$ and each $x_0 \in E$,
\[
\mathbf{V}(t)f(x_0) = \lim_{n\to\infty} R^{[nt]}_{n^{-1}} f(x_0).
\]
Proof. Note that
\[
\int_0^\infty \alpha^{-1}e^{-\alpha^{-1}s}\Big( h(x(s)) - \int_{U\times[0,s]} L(x(r),u)\,\lambda(du\times dr) \Big)\,ds
= E\Big[ h(x(\alpha\Delta)) - \int_{U\times[0,\alpha\Delta]} L(x(r),u)\,\lambda(du\times dr) \Big],
\]
where $\Delta$ is a unit exponential random variable, so
\[
R_\alpha h(x_0) = \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} E\Big[ h(x(\alpha\Delta)) - \int_{U\times[0,\alpha\Delta]} L(x(r),u)\,\lambda(du\times dr) \Big].
\]
Fix $f \in C_b(E)$, $x_0 \in E$, and $t > 0$. It follows that
\[
(8.24)\qquad R^{[nt]}_{n^{-1}} f(x_0) \ge \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} E\Big[ f\Big(x\Big(\frac{\Delta_1+\cdots+\Delta_{[nt]}}{n}\Big)\Big) - \int_{U\times[0,\frac{\Delta_1+\cdots+\Delta_{[nt]}}{n}]} L(x(r),u)\,\lambda(du\times dr) \Big]
\]
and
\[
(8.25)\qquad R^{[nt]}_{n^{-1}} f(x_0) \le E\Big[ \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \Big( f\Big(x\Big(\frac{\Delta_1+\cdots+\Delta_{[nt]}}{n}\Big)\Big) - \int_{U\times[0,\frac{\Delta_1+\cdots+\Delta_{[nt]}}{n}]} L(x(r),u)\,\lambda(du\times dr) \Big) \Big],
\]
where $\Delta_1, \Delta_2, \ldots$ are iid unit exponential random variables. Let $(x_t,\lambda_t) \in J^{\Gamma}$ be as in the proof of Lemma 8.16. Then by (8.24) we have
\[
R^{[nt]}_{n^{-1}} f(x_0) \ge E\Big[ f\Big(x_t\Big(\frac{\Delta_1+\cdots+\Delta_{[nt]}}{n}\Big)\Big) - \int_{U\times[0,\frac{\Delta_1+\cdots+\Delta_{[nt]}}{n}]} L(x_t(r),u)\,\lambda_t(du\times dr) \Big],
\]
and hence, by the law of large numbers and the definition of $(x_t,\lambda_t)$,
\[
\liminf_{n\to\infty} R^{[nt]}_{n^{-1}} f(x_0) \ge f(x_t(t)) - \int_{U\times[0,t]} L(x_t(s),u)\,\lambda_t(du\times ds) = \mathbf{V}(t)f(x_0).
\]
By (8.25), we have
\[
R^{[nt]}_{n^{-1}} f(x_0) \le E\Big[ \mathbf{V}\Big(\frac{\Delta_1+\cdots+\Delta_{[nt]}}{n}\Big)f(x_0) \Big],
\]
and it follows by the law of large numbers and Lemma 8.16 that
\[
\limsup_{n\to\infty} R^{[nt]}_{n^{-1}} f(x_0) \le \mathbf{V}(t)f(x_0). \qquad\Box
\]
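Lemma 8.18 is a nonlinear analogue of the classical resolvent (backward Euler) approximation of a semigroup. For a scalar linear generator $Af = af$ (a hypothetical stand-in, not the control setting of the lemma), $R^{[nt]}_{n^{-1}}$ reduces to $(1 - a/n)^{-[nt]} \to e^{at}$:

```python
import math

# Linear sanity check of the resolvent-iteration formula in Lemma 8.18:
# for the scalar generator A f = a*f (a hypothetical linear stand-in),
# the resolvent is R_alpha = (1 - alpha*a)^(-1), and applying R_{1/n}
# [nt] times gives (1 - a/n)^(-[nt]) -> e^{a t} as n -> infinity.
def resolvent_iterate(a, t, n):
    steps = int(n * t)            # [nt] applications of R_{1/n}
    return (1.0 - a / n) ** (-steps)

a, t = 0.7, 2.0
approx = resolvent_iterate(a, t, n=10**6)
exact = math.exp(a * t)
assert abs(approx - exact) < 1e-3
```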
Lemma 8.19. Let $H_\ddagger$ and $H_\dagger$ satisfy (8.13), and suppose Conditions 8.9 and 8.11 are satisfied. Then for each $x_0 \in E$, each $\alpha > 0$, and each $f \in D(H_\dagger)$,
\[
R_\alpha(I - \alpha H_\dagger)f(x_0) = \limsup_{t\to\infty}\ \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \int_{U\times[0,t]} e^{-\alpha^{-1}s}\Big( \frac{f(x(s)) - \alpha H_\dagger f(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds) \le f(x_0),
\]
and for each $f \in D(H_\ddagger)$,
\[
R_\alpha(I - \alpha H_\ddagger)f(x_0) = \limsup_{t\to\infty}\ \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \int_{U\times[0,t]} e^{-\alpha^{-1}s}\Big( \frac{f(x(s)) - \alpha H_\ddagger f(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds) \ge f(x_0).
\]

Proof. Let $(x,\lambda) \in J^{\Gamma}_{x_0}$ and $f \in D(H_\dagger)$. Note that $H_\dagger f(x) + L(x,u) \ge Hf(x) + L(x,u) \ge Af(x,u)$, so, allowing $-\infty \le -\infty$,
\[
\int_{U\times[0,t]} e^{-\alpha^{-1}s}\Big( \frac{f(x(s)) - \alpha H_\dagger f(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds)
\le \int_{U\times[0,t]} e^{-\alpha^{-1}s}\Big( \frac{f(x(s)) - \alpha Hf(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds)
\]
\[
\le \int_{U\times[0,t]} \alpha^{-1}e^{-\alpha^{-1}s}\big( f(x(s)) - \alpha Af(x(s),u) \big)\,\lambda(du\times ds)
\]
\[
= \int_0^t \alpha^{-1}e^{-\alpha^{-1}s} f(x(s))\,ds - \int_0^\infty \alpha^{-1}e^{-\alpha^{-1}s} \int_{U\times[0,s\wedge t]} Af(x(r),u)\,\lambda(du\times dr)\,ds
= f(x_0) - e^{-\alpha^{-1}t} f(x(t)),
\]
where the first equality comes from integration by parts and the second from the fact that $(x,\lambda) \in J^{\Gamma}_{x_0}$. Letting $t \to \infty$, the first inequality of the lemma follows.

Now let $f \in D(H_\ddagger)$, and suppose that $(\hat{x},\hat{\lambda}) \in J^{\Gamma}_{x_0}$ satisfies (8.15). Then
\[
\sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \int_{U\times[0,t]} e^{-\alpha^{-1}s}\Big( \frac{f(x(s)) - \alpha H_\ddagger f(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds)
\]
\[
\ge \int_{U\times[0,t]} e^{-\alpha^{-1}s}\Big( \frac{f(\hat{x}(s)) - \alpha H_\ddagger f(\hat{x}(s))}{\alpha} - L(\hat{x}(s),u) \Big)\,\hat{\lambda}(du\times ds)
\]
\[
\ge \int_{U\times[0,t]} \alpha^{-1}e^{-\alpha^{-1}s}\big( f(\hat{x}(s)) - \alpha Af(\hat{x}(s),u) \big)\,\hat{\lambda}(du\times ds)
\]
\[
= \int_0^t \alpha^{-1}e^{-\alpha^{-1}s} f(\hat{x}(s))\,ds - \int_0^\infty \alpha^{-1}e^{-\alpha^{-1}s} \int_{U\times[0,s\wedge t]} Af(\hat{x}(r),u)\,\hat{\lambda}(du\times dr)\,ds
= f(x_0) - e^{-\alpha^{-1}t} f(\hat{x}(t)),
\]
proving the lemma. □
Lemma 8.20. Under Conditions 8.9 and 8.10, for each $h \in B(E)$ and $\alpha, \beta > 0$,
\[
R_\alpha h = R_\beta\Big( R_\alpha h - \beta\,\frac{R_\alpha h - h}{\alpha} \Big).
\]
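For a linear operator, Lemma 8.20 is the classical resolvent identity for $R_\alpha = (I - \alpha A)^{-1}$. A quick numerical sanity check with an assumed matrix generator (purely illustrative; the lemma itself concerns the nonlinear control resolvent):

```python
import numpy as np

# Linear sanity check of Lemma 8.20: with R_a = (I - a*A)^{-1} for a matrix A,
# R_a h = R_b( R_a h - b*(R_a h - h)/a ) for all a, b > 0.
rng = np.random.default_rng(0)
A = -np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # a hypothetical generator
h = rng.standard_normal(3)

def R(alpha, v):
    # resolvent (I - alpha*A)^{-1} applied to v
    return np.linalg.solve(np.eye(3) - alpha * A, v)

a, b = 0.5, 0.2
lhs = R(a, h)
rhs = R(b, lhs - b * (lhs - h) / a)
assert np.allclose(lhs, rhs)
```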
Proof. By Condition 8.10, in (8.22) we can take the supremum over solutions that satisfy $\int_{U\times[T,\infty)} L(x(s),u)\,\lambda(du\times ds) = 0$ for some $T > 0$. In particular, we can work with solutions that satisfy
\[
\int_{U\times[0,\infty)} e^{-\alpha^{-1}s}\Big| \frac{h(x(s))}{\alpha} - L(x(s),u) \Big|\,\lambda(du\times ds) < \infty
\]
for all $\alpha > 0$.
By definition,
\[
R_\beta\Big( R_\alpha h - \beta\,\frac{R_\alpha h - h}{\alpha} \Big)(x_0)
= \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \Big\{ \int_{U\times[0,\infty)} e^{-\beta^{-1}s}\Big( \frac{1}{\beta}\Big( \frac{\beta}{\alpha}h + \frac{\alpha-\beta}{\alpha}R_\alpha h \Big)(x(s)) - L(x(s),u) \Big)\,\lambda(du\times ds) \Big\}
\]
\[
= \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \Big\{ \int_{U\times[0,\infty)} e^{-\beta^{-1}s}\Big( \frac{h(x(s))}{\alpha} - L(x(s),u)
+ \frac{\alpha-\beta}{\alpha\beta}\ \sup_{(\hat{x},\hat{\lambda})\in J^{\Gamma}_{x(s)}} \int_{U\times[0,\infty)} e^{-\alpha^{-1}r}\Big( \frac{h(\hat{x}(r))}{\alpha} - L(\hat{x}(r),u)\Big)\hat{\lambda}(du\times dr) \Big)\,\lambda(du\times ds) \Big\}
\]
\[
\ge \int_{U\times[0,\infty)} e^{-\beta^{-1}s}\Big( \frac{h(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds)
+ \frac{\alpha-\beta}{\alpha\beta} \int_0^\infty e^{-\beta^{-1}s} \int_{U\times[0,\infty)} e^{-\alpha^{-1}r}\Big( \frac{h(x(s+r))}{\alpha} - L(x(s+r),u)\Big)\,\lambda(du\times dr)\,ds
\]
\[
\ge \int_{U\times[0,\infty)} e^{-\beta^{-1}s}\Big( \frac{h(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds)
+ \frac{\alpha-\beta}{\alpha\beta} \int_0^\infty e^{s(\frac{1}{\alpha}-\frac{1}{\beta})} \int_{U\times[s,\infty)} e^{-\alpha^{-1}r}\Big( \frac{h(x(r))}{\alpha} - L(x(r),u)\Big)\,\lambda(du\times dr)\,ds
\]
\[
= \int_{U\times[0,\infty)} e^{-\alpha^{-1}s}\Big( \frac{h(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds)
\]
for any $(x,\lambda) \in J^{\Gamma}_{x_0}$ for which the integrals are all finite. (The last equality follows from integration by parts.) Therefore
\[
R_\beta\Big( R_\alpha h - \beta\,\frac{R_\alpha h - h}{\alpha} \Big)(x_0) \ge R_\alpha h(x_0).
\]
Similarly, for each $\epsilon > 0$ and $s > 0$, there exist $(x_\epsilon,\lambda_\epsilon) \in J^{\Gamma}_{x_0}$ and $(\hat{x}_{\epsilon,s},\hat{\lambda}_{\epsilon,s}) \in J^{\Gamma}_{x_\epsilon(s)}$ such that
\[
(8.26)\qquad R_\beta\Big( R_\alpha h - \beta\,\frac{R_\alpha h - h}{\alpha} \Big)(x_0)
\le \epsilon + \int_{U\times[0,\infty)} e^{-\beta^{-1}s}\Big( \frac{h(x_\epsilon(s))}{\alpha} - L(x_\epsilon(s),u) \Big)\,\lambda_\epsilon(du\times ds)
\]
\[
+ \frac{\alpha-\beta}{\alpha\beta} \int_0^\infty e^{-\beta^{-1}s} \int_{U\times[0,\infty)} e^{-\alpha^{-1}r}\Big( \frac{h(\hat{x}_{\epsilon,s}(r))}{\alpha} - L(\hat{x}_{\epsilon,s}(r),u) \Big)\,\hat{\lambda}_{\epsilon,s}(du\times dr)\,ds.
\]
For $s \ge 0$, define $(\tilde{x}_{\epsilon,s},\tilde{\lambda}_{\epsilon,s}) \in J^{\Gamma}_{x_0}$ by
\[
\tilde{x}_{\epsilon,s}(r) = \begin{cases} x_\epsilon(r), & 0 \le r \le s, \\ \hat{x}_{\epsilon,s}(r-s), & s < r, \end{cases}
\]
with $\tilde{\lambda}_{\epsilon,s}$ the corresponding concatenation of $\lambda_\epsilon$ on $U\times[0,s]$ and the shift of $\hat{\lambda}_{\epsilon,s}$. Estimating the right side of (8.26) by the supremum over such concatenated pairs and arguing as above shows that it is bounded by $\epsilon + R_\alpha h(x_0)$; since $\epsilon > 0$ is arbitrary, the reverse inequality follows, proving the lemma. □

Lemma 8.21. Suppose Conditions 8.9 and 8.10 hold.
a) For $h_1$ bounded above and $h_2$ bounded below,
\[
(8.29)\qquad \sup_x \big( R_\alpha h_1(x) - R_\alpha h_2(x) \big) \le \sup_x \big( h_1(x) - h_2(x) \big);
\]
in particular, $R_\alpha$ is a contraction on $B(E)$.
b) For each $\delta > 0$, $r > 0$, and compact $K \subset E$, there exists compact $K'(\delta,r,K) \subset E$, nondecreasing in $\delta^{-1}$, $r$, and $K$, such that
\[
(8.30)\qquad \sup_{x\in K} \big( R_\alpha h_1(x) - R_\alpha h_2(x) \big) \le \delta + \sup_{y\in K'(\delta,r,K)} \big( h_1(y) - h_2(y) \big)
\]
for any $h_1, h_2 \in B(E)$ satisfying $\|h_1\| \vee \|h_2\| \le r$.
Proof. Let $h_1$ be bounded from above and $h_2$ bounded from below. Letting $h_1^k = h_1 \vee (-k)$ and $h_2^k = h_2 \wedge k$, by the definition of $R_\alpha$ in (8.22),
\[
(R_\alpha h_1^k - R_\alpha h_2^k)(x_0) \le \limsup_{t\to\infty}\ \sup_{(x,\lambda)\in J^{\Gamma}_{x_0}} \int_0^t \alpha^{-1}e^{-\alpha^{-1}s}\big( h_1^k(x(s)) - h_2^k(x(s)) \big)\,ds
\le \sup_x \big( h_1^k(x) - h_2^k(x) \big).
\]
Letting $k \to \infty$, (8.29) follows.

For $\delta, r, K$ as in Part (b), choose $T$ large enough so that $4re^{-T/\alpha} < \delta$. Let $M = (\delta + 2r)e^{T/\alpha}$, and select the compact set $\hat{K} = \hat{K}(K,T,M)$ in Condition 8.9.4. Let $h_1, h_2 \in B(E)$ be such that $\|h_1\| \vee \|h_2\| \le r$. Assume that there exists $x_0 = x_0(\delta,K,h_1,h_2) \in K$ satisfying
\[
\sup_{x\in K} \big( R_\alpha h_1(x) - R_\alpha h_2(x) \big) \le \frac{\delta}{4} + R_\alpha h_1(x_0) - R_\alpha h_2(x_0).
\]
(Otherwise, interchange the roles of $h_1$ and $h_2$.) By the definition of $R_\alpha$, there exists $(x,\lambda) \in J^{\Gamma}_{x_0}$ such that
\[
(8.31)\qquad R_\alpha h_1(x_0) \le \frac{\delta}{4} + \int_{U\times[0,\infty)} e^{-\alpha^{-1}s}\Big( \frac{h_1(x(s))}{\alpha} - L(x(s),u) \Big)\,\lambda(du\times ds),
\]
and hence
\[
\sup_{x\in K} \big( R_\alpha h_1(x) - R_\alpha h_2(x) \big)
\le \frac{\delta}{2} + \int_0^\infty \alpha^{-1}e^{-\alpha^{-1}s}(h_1 - h_2)(x(s))\,ds
\]
\[
\le \frac{\delta}{2} + \int_0^T \alpha^{-1}e^{-\alpha^{-1}s}(h_1 - h_2)(x(s))\,ds + (\|h_1\| + \|h_2\|)e^{-\alpha^{-1}T}
\le \frac{\delta}{2} + \int_0^T \alpha^{-1}e^{-\alpha^{-1}s}(h_1 - h_2)(x(s))\,ds + \frac{\delta}{2}.
\]
By (8.31), we also have
\[
\int_{U\times[0,T]} L(x(s),u)\,\lambda(du\times ds)
\le e^{\alpha^{-1}T} \int_{U\times[0,T]} e^{-\alpha^{-1}s} L(x(s),u)\,\lambda(du\times ds)
\]
\[
\le e^{\alpha^{-1}T}\Big( \frac{\delta}{4} + \int_0^\infty \alpha^{-1}e^{-\alpha^{-1}s} h_1(x(s))\,ds - R_\alpha h_1(x_0) \Big)
\le e^{\alpha^{-1}T}(\delta + 2\|h_1\|)
\le e^{\alpha^{-1}T}(\delta + 2r) = M,
\]
so
\[
x(t) \in \hat{K}, \qquad t \in [0,T],
\]
and
\[
\sup_{x\in K} \big( R_\alpha h_1(x) - R_\alpha h_2(x) \big) \le \delta + \sup_{y\in\hat{K}} \big( h_1(y) - h_2(y) \big). \qquad\Box
\]
Using arguments essentially identical to those in Lemma 8.21, we can also prove the following property for $\mathbf{V}$:
8.5. VERIFICATION OF SEMIGROUP REPRESENTATION
149
Lemma 8.22. Under Conditions 8.9 and 8.10, $\mathbf{V}$ is a contraction on $B(E)$:
\[
\|\mathbf{V}(t)h_1 - \mathbf{V}(t)h_2\| \le \|h_1 - h_2\|, \qquad h_1, h_2 \in B(E).
\]
For each $\delta > 0$, $r > 0$, and compact $K \subset E$, there exists compact $K'(\delta,r,K) \subset E$ that is nondecreasing in $\delta^{-1}$, $r$, and $K$ such that
\[
(8.32)\qquad \sup_{x\in K} \big( \mathbf{V}(t)h_1(x) - \mathbf{V}(t)h_2(x) \big) \le \delta + \sup_{y\in K'} \big( h_1(y) - h_2(y) \big)
\]
for every $h_1, h_2 \in B(E)$ satisfying $\|h_1\| \vee \|h_2\| \le r$.

8.5. Verification of semigroup representation

Lemma 8.19 suggests that $(I - \alpha H)^{-1}h = R_\alpha h$. The following theorems demonstrate that this assertion is indeed valid in a certain sense. Consequently, Lemma 8.18 implies that $H$ is related to $\mathbf{V}$ in the same sense that the $H$ (or $\hat{H}$) of Chapters 5, 6, and 7 is related to $V$, and we can formulate conditions in terms of these operators that imply $V = \mathbf{V}$. The control problem then gives a more explicit representation of the rate function.

Theorem 8.23. Let $(E,r)$, $(U,q)$ be complete, separable metric spaces. Suppose that $A \subset C_b(E) \times C(E\times U)$ and $L : E\times U \to [0,\infty]$ satisfy Conditions 8.9, 8.10, and 8.11 with
\[
H_\dagger f(x) = H_\ddagger f(x) \equiv Hf(x) = \sup_{u\in\Gamma_x} \big( Af(x,u) - L(x,u) \big).
\]
Suppose $H \subset C_b(E) \times B(E)$ is dissipative and satisfies the range condition 5.1. Then for each $0 < \alpha < \alpha_0$ ($\alpha_0$ being the constant of the range condition 5.1),
\[
R_\alpha h = (I - \alpha H)^{-1}h \in D(H) \subset C_b(E), \qquad \forall h \in \overline{D(H)},
\]
and $\mathbf{V}$ is the semigroup generated by $H$ on $\overline{D(H)}$, that is,
\[
\mathbf{V}(t)f(x_0) \equiv \sup_{(x,\lambda)\in J^{\Gamma,t}_{x_0}} \Big\{ f(x(t)) - \int_{[0,t]\times U} L(x(s),u)\,\lambda(du\times ds) \Big\}
= \lim_{n\to\infty} \Big(I - \frac{t}{n}H\Big)^{-n} f(x_0)
\]
for each $x_0 \in E$.

Remark 8.24. If $H' \subset C_b(E)\times B(E)$ is a dissipative operator satisfying $H \subset H'$ and $D(H) = D(H')$, and $H$ satisfies the conditions of Theorem 8.23, then $H'$ also satisfies the range condition 5.1 and
\[
\mathbf{V}(t)f = \lim_{n\to\infty} \Big(I - \frac{t}{n}H'\Big)^{-n} f
\]
for each $f \in \overline{D(H)}$.

Proof. Since the range condition $D(H) \subset \overline{\mathcal{R}(I - \alpha H)}$ is satisfied, for each $h \in \overline{D(H)}$, there exist $h_n \in \mathcal{R}(I - \alpha H)$ such that $\|h_n - h\| \to 0$. Setting $f_n = (I - \alpha H)^{-1}h_n \in D(H) \subset C(E)$,
\[
\|f_n - f_m\| \le \|h_n - h_m\|,
\]
and, consequently, there exists $f \in \overline{D(H)} \subset C_b(E)$ such that $\|f_n - f\| \to 0$. Let $g = (f - h)/\alpha \in C_b(E)$. Then $g \in Hf$. By Lemma 8.19,
\[
f_n(x_0) = R_\alpha(I - \alpha H)f_n(x_0).
\]
Letting $n \to \infty$, we have $f(x_0) = R_\alpha(I - \alpha H)f(x_0)$. Noting $f = (I - \alpha H)^{-1}h$,
\[
(I - \alpha H)^{-1}h = R_\alpha h.
\]
By Lemma 8.18 and Proposition 5.3, the conclusion follows. □
Combining Corollaries 5.19 and 5.20 with Theorems 8.14 and 8.23 (including its remark), we arrive at the following:

Corollary 8.25 (Continuous-time case). Assume that the conditions of Corollary 5.19 and Theorem 8.23 are satisfied, and that $H$ of Corollary 5.19 and $\mathbf{H}$ of Theorem 8.23 satisfy $H \subset \mathbf{H}$ and $D(H) = D(\mathbf{H})$. Then the sequence of stochastic processes $\{X_n\}$ is exponentially tight and satisfies the large deviation principle with good rate function $I$ given by (8.18).

Corollary 8.26 (Discrete-time case). Assume that the conditions of Corollary 5.20 and Theorem 8.23 are satisfied, and that $H$ of Corollary 5.20 and $\mathbf{H}$ of Theorem 8.23 satisfy $H \subset \mathbf{H}$ and $D(H) = D(\mathbf{H})$. Then the sequence of stochastic processes $\{X_n\}$ is exponentially tight and satisfies the large deviation principle with good rate function $I$ given by (8.18).

As observed before, the range condition 5.1 is usually hard to verify. Next, we derive a viscosity solution version of Theorem 8.23 and Corollaries 8.25 and 8.26.

Theorem 8.27. Let $(E,r)$ and $(U,q)$ be complete, separable metric spaces. Suppose that $A \subset C_b(E) \times C(E\times U)$ and $L : E\times U \to [0,\infty]$ satisfy Conditions 8.9, 8.10, and 8.11. Define
\[
\mathbf{H}f(x) = \sup_{u\in\Gamma_x} \big( Af(x,u) - L(x,u) \big)
\]
with $D(\mathbf{H}) = D(A)$. Let $\mathbf{H}_\dagger \subset D(A) \times M^u(E)$ and $\mathbf{H}_\ddagger \subset D(A) \times M^l(E)$, and assume
\[
(8.33)\qquad \mathbf{H}_\ddagger f(x) \le \mathbf{H}f(x),\ \forall f \in D(\mathbf{H}_\ddagger), \qquad \mathbf{H}f(x) \le \mathbf{H}_\dagger f(x),\ \forall f \in D(\mathbf{H}_\dagger).
\]
Suppose that for each $0 < \alpha \le \alpha_0$, there exists $D_\alpha \subset C_b(E)$ such that $C_b(E)$ is the buc-closure of $D_\alpha$ and that for each $h \in D_\alpha$, the comparison principle holds for subsolutions of
\[
(8.34)\qquad (I - \alpha\mathbf{H}_\dagger)f = h
\]
and supersolutions of
\[
(8.35)\qquad (I - \alpha\mathbf{H}_\ddagger)f = h.
\]
Then
a) For each $h \in D_\alpha$, $R_\alpha h$ given by (8.22) is continuous and is the unique function that is both a subsolution of (8.34) and a supersolution of (8.35).
b) $R_\alpha : C_b(E) \to C_b(E)$.
c) The operator
\[
\hat{H} = \bigcup_{0<\alpha<\alpha_0} \big\{ \big(R_\alpha h,\ \alpha^{-1}(R_\alpha h - h)\big) : h \in C_b(E) \big\}
\]
is dissipative and satisfies the range condition 5.1, and
\[
(8.36)\qquad \mathbf{V}(t)f = \lim_{n\to\infty} \Big(I - \frac{t}{n}\hat{H}\Big)^{-n} f, \qquad f \in C_b(E).
\]

Proof. Let $h \in D_\alpha$ and set $f = R_\alpha h$. To see that $f^*$ is a viscosity subsolution of (8.34), let $f_0 \in D(\mathbf{H}_\dagger)$. By Lemma 8.20, the inequality $R_\beta(f_0 - \beta\mathbf{H}_\dagger f_0) \le f_0$ of Lemma 8.19, and (8.29), for $0 < \beta < \alpha$,
\[
\sup_y (f - f_0)(y) \le \sup_y \big( R_\beta(R_\alpha h - \beta\alpha^{-1}(R_\alpha h - h))(y) - R_\beta(f_0 - \beta\mathbf{H}_\dagger f_0)(y) \big)
\le \sup_y \big( R_\alpha h(y) - \beta\alpha^{-1}(R_\alpha h - h)(y) - f_0(y) + \beta\mathbf{H}_\dagger f_0(y) \big),
\]
and hence
\[
\sup_y (f - f_0)(y) \le \sup_y \Big( f - f_0 - \beta\Big( \frac{f - h}{\alpha} - \mathbf{H}_\dagger f_0 \Big) \Big)(y).
\]
By Lemma 7.8, there exist $x_n \in E$ such that
\[
\limsup_{n\to\infty} (f - f_0)(x_n) = \sup_y (f - f_0)(y) \quad\text{and}\quad \limsup_{n\to\infty} \Big( \frac{f - h}{\alpha} - \mathbf{H}_\dagger f_0 \Big)(x_n) \le 0.
\]
Therefore, $f$ is a subsolution of (8.34). Similarly, to see that $f_*$ is a viscosity supersolution of (8.35), for $f_0 \in D(\mathbf{H}_\ddagger)$ we have
\[
\inf_y (f - f_0)(y) = \inf_y (R_\alpha h - f_0)(y)
\ge \inf_y \big( R_\beta(R_\alpha h - \beta\alpha^{-1}(R_\alpha h - h))(y) - R_\beta(f_0 - \beta\mathbf{H}_\ddagger f_0)(y) \big)
\]
\[
\ge \inf_y \big( R_\alpha h(y) - \beta\alpha^{-1}(R_\alpha h - h)(y) - f_0(y) + \beta\mathbf{H}_\ddagger f_0(y) \big).
\]
Hence, for $\beta < \alpha$ (so that the expression on the right is $(1 - \beta\alpha^{-1})R_\alpha h + \beta\mathbf{H}_\ddagger f_0$ plus a continuous function),
\[
\inf_y (f_* - f_0)(y) \ge \inf_y \Big( f_* - f_0 - \beta\Big( \frac{f_* - h}{\alpha} - (\mathbf{H}_\ddagger f_0)_* \Big) \Big)(y),
\]
and by Lemma 7.8, there exist $x_n \in E$ such that
\[
\liminf_{n\to\infty} (f_* - f_0)(x_n) = \inf_y (f_* - f_0)(y) \quad\text{and}\quad \liminf_{n\to\infty} \Big( \frac{f_* - h}{\alpha} - (\mathbf{H}_\ddagger f_0)_* \Big)(x_n) \ge 0.
\]
By the comparison principle, for each $h \in D_\alpha$, $f = R_\alpha h \in C_b(E)$ and is the unique function that is a subsolution of (8.34) and a supersolution of (8.35), giving Part (a).

For Part (b), let $C_\alpha \equiv \{f \in C_b(E) : R_\alpha f \in C_b(E)\}$. Then $D_\alpha \subset C_\alpha$, and by assumption, $D_\alpha$ is buc-dense in $C_b(E)$. Therefore, if $C_\alpha$ is closed under the buc-topology, then $C_\alpha = C_b(E)$. Suppose $f$ is buc-approximable by $C_\alpha$, that is, there exists a constant $C_{f,C_\alpha}$ such that for any $\epsilon > 0$ and any compact set $\hat{K} \subset E$, there is $f_{\epsilon,\hat{K}} \in C_\alpha$ such that
\[
\|f_{\epsilon,\hat{K}}\| \le C_{f,C_\alpha}, \qquad \sup_{x\in\hat{K}} |f_{\epsilon,\hat{K}}(x) - f(x)| \le \epsilon.
\]
For $\delta > 0$ and compact $K \subset E$, by Lemma 8.21, there exists $K' \equiv K'(\delta, C_{f,C_\alpha}, K) \subset E$ such that
\[
\sup_{x\in K} \big( R_\alpha f(x) - R_\alpha g(x) \big) \le \delta + \sup_{x\in K'} \big( f(x) - g(x) \big)
\]
for every $\|g\| \le C_{f,C_\alpha}$. In particular, taking $\hat{K} = K'$ and $g = f_{\epsilon,K'}$, we have
\[
\sup_{x\in K} \big( R_\alpha f(x) - R_\alpha f_{\epsilon,K'}(x) \big) \le \delta + \sup_{x\in K'} \big( f(x) - f_{\epsilon,K'}(x) \big) \le \delta + \epsilon.
\]
Since $f_{\epsilon,K'} \in D_\alpha$, $R_\alpha f_{\epsilon,K'}$ is continuous, and hence $R_\alpha f \in C_b(E)$.

For Part (c), the range condition
\[
D(\hat{H}) \subset \mathcal{R}(I - \alpha\hat{H}) = C_b(E), \qquad 0 < \alpha < \alpha_0,
\]
follows directly from the definition of $\hat{H}$. To see that $\hat{H}$ is dissipative, let
\[
\Big( R_\lambda h_1,\ \frac{R_\lambda h_1 - h_1}{\lambda} \Big),\ \Big( R_\mu h_2,\ \frac{R_\mu h_2 - h_2}{\mu} \Big) \in \hat{H},
\]
where $0 < \lambda, \mu < \alpha_0$ and $h_1, h_2 \in C_b(E)$. Let $0 < \epsilon < \min\{\lambda,\mu\}$. By Lemma 8.20,
\[
R_\lambda h_1 = R_\epsilon\Big( R_\lambda h_1 - \epsilon\,\frac{R_\lambda h_1 - h_1}{\lambda} \Big), \qquad
R_\mu h_2 = R_\epsilon\Big( R_\mu h_2 - \epsilon\,\frac{R_\mu h_2 - h_2}{\mu} \Big).
\]
By the contractivity of $R_\epsilon$,
\[
(8.38)\qquad \|R_\lambda h_1 - R_\mu h_2\| \le \Big\| R_\lambda h_1 - R_\mu h_2 - \epsilon\Big( \frac{R_\lambda h_1 - h_1}{\lambda} - \frac{R_\mu h_2 - h_2}{\mu} \Big) \Big\|.
\]
Since $(\|f - \alpha g\| - \|f\|)\alpha^{-1}$ is an increasing function of $\alpha$, (8.38) holds for all $\epsilon > 0$, and the dissipativity of $\hat{H}$ follows. Finally, (8.36) follows by Proposition 5.3 and Lemma 8.18. □

The following two corollaries play key roles in the representation of rate functions in later examples.

Corollary 8.28. Assume that the conditions of Theorems 7.18 and 8.27 are satisfied with the same $D_\alpha$, $\alpha > 0$. Let $H_\dagger$ and $H_\ddagger$ be as in Theorem 7.18 and $\mathbf{H}_\dagger$ and $\mathbf{H}_\ddagger$ as in Theorem 8.27. Suppose that one of the following holds:
a) For each $h \in D_\alpha$, every viscosity subsolution of $(I - \alpha H_\dagger)f = h$ is also a viscosity subsolution of $(I - \alpha\mathbf{H}_\dagger)f = h$, and every viscosity supersolution of $(I - \alpha H_\ddagger)f = h$ is also a viscosity supersolution of $(I - \alpha\mathbf{H}_\ddagger)f = h$.
8.6. VERIFYING THE ASSUMPTIONS
153
b) For each $h \in D_\alpha$, every viscosity subsolution of $(I - \alpha\mathbf{H}_\dagger)f = h$ is also a viscosity subsolution of $(I - \alpha H_\dagger)f = h$, and every viscosity supersolution of $(I - \alpha\mathbf{H}_\ddagger)f = h$ is also a viscosity supersolution of $(I - \alpha H_\ddagger)f = h$.
Then $V$ of Theorem 7.18 and $\mathbf{V}$ given by (8.10) satisfy $Vf = \mathbf{V}f$, $f \in D$, and the rate function for $\{X_n\}$ satisfies (8.18).

Proof. We follow the notation of Theorems 7.18 and 8.27, writing $\mathbf{R}_\alpha$ for the operator given by (8.22) and $R_\alpha$ for the resolvent of Theorem 7.18. By assumption, $R_\alpha h = \mathbf{R}_\alpha h$ for $h \in D_\alpha$. Let $C_\alpha = \{h \in C_b(E) : R_\alpha h = \mathbf{R}_\alpha h\}$. Then $C_\alpha$ is buc-closed by (8.30), (7.43), and Lemma A.11. Consequently, $R_\alpha = \mathbf{R}_\alpha$ on $C_b(E)$, and the corresponding operators $\hat{H}$ coincide. It follows that $\mathbf{V}(t) = V(t)$ on $D(\hat{H})$, but again $\{f \in C_b(E) : V(t)f = \mathbf{V}(t)f\}$ is buc-closed, so $\mathbf{V}(t) = V(t)$ on $D$. The last statement follows by Theorem 8.14. □

Corollary 8.29 (Compact state space). Assume that the conditions of Theorem 6.14 and Theorem 8.27 are satisfied with the same $D_\alpha$, $\alpha > 0$. Let $H$ be as in Theorem 6.14 and $\mathbf{H}_\dagger = \mathbf{H}_\ddagger = \mathbf{H}$ in Theorem 8.27. Suppose that one of the following holds:
a) For each $h \in D_\alpha$, every viscosity solution of $(I - \alpha H)f = h$ is also a viscosity solution of $(I - \alpha\mathbf{H})f = h$.
b) For each $h \in D_\alpha$, every viscosity solution of $(I - \alpha\mathbf{H})f = h$ is also a viscosity solution of $(I - \alpha H)f = h$.
Then $V$ of Theorem 6.14 and $\mathbf{V}$ given by (8.10) satisfy $Vf = \mathbf{V}f$, $f \in C(E)$, and the rate function for $\{X_n\}$ satisfies (8.18).

8.6. Verifying the assumptions

Typically, the variational representation (8.7) holds and Conditions 8.10 and 8.11 are satisfied. We describe some techniques for verifying these assumptions in this section. The discussion here is frequently heuristic; however, in the sections that follow, we will rigorously exploit these techniques in concrete examples.

8.6.1. Variational representation of H.

8.6.1.1. Construction using operator duality. Fleming [44], [45] and Sheu [109] observed that representation (8.7) holds quite generally. See also Feng [41] for a summary of sample-path-level large deviation results under the generalities discussed here. These papers present ways of constructing $A$ and $L$ based on the exponential martingale problem for $H_n$ and discuss the probabilistic relationship to the Girsanov change of measure. We quote some relevant results from these papers without proof.

Let $E$ be a compact metric space, and let $A \subset C(E)\times C(E)$ be the generator for a martingale problem. Assume that for each $x \in E$, there exists a solution of the martingale problem for $A$ with sample paths in $D_E[0,\infty)$ and $X(0) = x$. (Note that $A$ corresponds to the $A_n$ that determine the $Y_n$, not the $A$ in Definition 8.1.) Suppose that $D \equiv D(A)$ is an algebra and that $f \in D(A)$ implies $e^f \in D(A)$. For simplicity, we assume $A$ is single-valued. Define a new set of transformed operators
under the above conditions. For $f, g \in D$,
\[
Hf = e^{-f}Ae^f, \qquad
A^g f = e^{-g}A(fe^g) - (e^{-g}f)Ae^g, \qquad
Lg = A^g g - Hg.
\]
The following natural duality between $H$ and $L$ was first observed by Fleming for diffusion processes and extended by Sheu [108] to more general Markov processes:
\[
(8.39)\qquad Hf(x) = \sup_{g\in D} \{ A^g f(x) - Lg(x) \}, \qquad
Lg(x) = \sup_{f\in D} \{ A^g f(x) - Hf(x) \},
\]
with both suprema attained when $f = g$. Note that this duality relationship is obtained for the $H$ that appears in the exponential martingale problem. We are interested in a corresponding duality relationship for the rescaled limit $H$ of a sequence $\{H_n\}$, where $H_n$ is of the form
\[
H_n f = \frac{1}{n} e^{-nf} A_n e^{nf}.
\]
Define
\[
A^g_n f = e^{-ng}A_n(fe^{ng}) - (e^{-ng}f)A_n e^{ng}, \qquad
L_n g = A^g_n g - H_n g.
\]
Then
\[
(8.40)\qquad H_n f(x) = \sup_{g\in D} \{ A^g_n f(x) - L_n g(x) \}, \qquad
L_n g(x) = \sup_{f\in D} \{ A^g_n f(x) - H_n f(x) \}.
\]
Let $H = \lim_{n\to\infty} H_n$. Feng [41] proved that (8.39) holds for the limiting operators under the assumption that the $A_n$ are local operators. The locality seems nonessential. In applications, one can typically compute $L_n$ and $A^g_n$ and then directly verify that the desired duality holds in the limit. The representation of $Hf$ given by (8.39) is of the form (8.6) with $U = D$ and $L(x,u) = A^u u(x) - Hu(x)$. One may view the duality formula (8.39) as an infinitesimal version of the usual proof of large deviation theorems using the Girsanov change of measure.

To use $D$, a collection of functions, as a control space would be awkward. Feng [41] also discusses an embedding technique which may allow the replacement of $D$ by a more reasonable control space. Let $U$ be a complete separable metric space, and let $B$ map $D$ to the space of $U$-valued, measurable functions (i.e., for each $g \in D$, $u = Bg$ is a measurable, $U$-valued function). Assume that an operator $A : D \to C(E\times U)$ and a function $L : E\times U \to [0,\infty)$ exist such that
\[
Af(x, Bg(x)) = A^g f(x), \qquad f, g \in D,
\]
and
\[
L(x, Bg(x)) = Lg(x), \qquad g \in D.
\]
Let $\Gamma$ be the closure of $\{(x, Bg(x)) : g \in D\}$. Then we should have
\[
(8.41)\qquad Hf(x) = \sup_{u\in\Gamma_x} \big( Af(x,u) - L(x,u) \big).
\]
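As a worked instance of the duality (8.39) and the embedding leading to (8.41), take $Af = \frac12 f''$, the Brownian-motion generator (the standard diffusion illustration; the computation is included here only as an example). Then
\[
Hf = e^{-f}Ae^f = \tfrac12 f'' + \tfrac12 (f')^2, \qquad
A^g f = \tfrac12 f'' + g'f', \qquad
Lg = A^g g - Hg = \tfrac12 (g')^2,
\]
and
\[
\sup_{g\in D}\{A^g f - Lg\} = \tfrac12 f'' + \sup_{g}\big\{ g'f' - \tfrac12 (g')^2 \big\} = \tfrac12 f'' + \tfrac12 (f')^2 = Hf,
\]
with the supremum attained at $g = f$, as asserted after (8.39). The embedding applies with $U = \mathbb{R}$, $Bg = g'$, $Af(x,u) = \tfrac12 f''(x) + uf'(x)$, and $L(x,u) = \tfrac12 u^2$.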
8.6.1.2. Construction using the Fenchel–Legendre transform. When $E$ is a convex subset of a topological vector space, the limiting operator $H$ typically can be rewritten as $H(x,\nabla f(x))$, where $H(x,p)$ is convex in the second variable and $\nabla f$ is the gradient of $f$. In such cases, an alternative way of constructing a variational representation for the operator $H$ is available through the Fenchel–Legendre transform. Let $E^*$ be the dual of $E$. For each $q \in E^*$, define
\[
(8.42)\qquad L(x,q) = \sup_{p\in E} \{ \langle p,q\rangle - H(x,p) \}.
\]
Then with mild regularity on $H(x,\cdot)$, we also have
\[
H(x,p) = \sup_{q\in E^*} \{ \langle p,q\rangle - L(x,q) \},
\]
which implies
\[
Hf(x) = H(x,\nabla f(x)) = \sup_{q\in E^*} \{ \langle\nabla f(x), q\rangle - L(x,q) \},
\]
giving the desired representation. The transform approach for constructing the rate function was employed by Freidlin and Wentzell and is currently popular in the literature. In general, the representation of $H$ obtained through this approach differs from that of the operator duality approach. The alternative approach may be particularly useful when $L$ defined by (8.42) is only determined implicitly, while the $L$ constructed using the operator duality has an explicit representation. The price paid for this explicit representation may be to complicate the control dynamics; however, the control dynamics are still given explicitly.

8.6.2. Verifying Condition 8.10. Constructing the variational representation through operator duality and through the convex transformation leads to different approaches for the verification of Conditions 8.10 and 8.11.

8.6.2.1. The operator duality approach. Verifying Condition 8.10 is basically equivalent to finding a $g \in D$ such that $Lg(x) = 0$, for example, $g \equiv 0$. Note that, taking $g \equiv 0$,
\[
A^0 = \lim_{n\to\infty} A_n.
\]
Typically, $A^0$ will be a generator corresponding to a deterministic flow, and for each $x_0 \in E$, there will exist $x \in C_E[0,\infty)$ such that $x(0) = x_0$ and
\[
f(x(t)) - f(x_0) - \int_0^t A^0 f(x(s))\,ds = 0, \qquad t \ge 0,\ f \in D.
\]
Defining $\lambda(du\times ds) = \delta_{\{Bg(x(s))\}}(du)\times ds$ (with $g \equiv 0$), we then have
\[
f(x(t)) - f(x(0)) - \int_{U\times[0,t]} Af(x(s),u)\,\lambda(du\times ds) = 0.
\]
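For illustration, consider a small-noise diffusion limit with assumed coefficients $b$ and $\sigma$ (a standard example, not one treated at this point in the text), for which $H(x,p) = b(x)p + \frac12\sigma^2(x)p^2$. The Fenchel–Legendre transform (8.42) gives
\[
L(x,q) = \sup_p\{pq - H(x,p)\} = \frac{(q-b(x))^2}{2\sigma^2(x)},
\]
so $L(x,q(x)) = 0$ exactly for $q(x) = b(x) = (\partial_p H)(x,0)$, and the zero-cost trajectories required by Condition 8.10 are the solutions of the deterministic flow $\dot{x} = b(x)$, which exist whenever $b$ is, for example, Lipschitz.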
8.6.2.2. The Fenchel–Legendre transform approach. For each $x \in E$, $H(x,p)$ and $L(x,q)$ are typically convex functions in $p$ and $q$, respectively, and $H(x,0) = 0$. In order to have a $q(x)$ such that $L(x,q(x)) = 0$, it is enough to prove that the maximum of $\langle p, q(x)\rangle - H(x,p)$, as a function of $p$, is attained at $0$. Under certain regularity conditions, through the duality results in convex analysis, this is equivalent to $q(x) = (\partial_p H)(x,0)$. Therefore, we can define $q(x)$ as above for the given $H$ and seek the smoothness of $q(x)$ in order to have existence of $x(t)$ satisfying
\[
f(x(t)) - f(x(0)) - \int_0^t \langle q(x(s)), \nabla f(x(s))\rangle\,ds = 0, \qquad \forall f \in D,\ x(0) \in E.
\]
If such an $x$ exists, define $\lambda(du\times ds) = \delta_{\{q(x(s))\}}(du)\times ds$.

8.6.3. Verifying Condition 8.11.

8.6.3.1. The operator duality approach. Formally, at least, $Hf(x) = A^f f(x) - Lf(x) = Af(x, Bf(x)) - L(x, Bf(x))$. For each $f$, we expect $A^f$ to be the generator of a flow, and hence, for each $x_0 \in E$ there should exist $x \in C_E[0,\infty)$ with $x(0) = x_0$ such that
\[
g(x(t)) - g(x_0) - \int_0^t A^f g(x(s))\,ds = 0, \qquad t \ge 0,\ g \in D,
\]
and hence, for $g = f$ and $\lambda(du\times ds) = \delta_{\{Bf(x(s))\}}(du)\times ds$, we have (8.15).

8.6.3.2. The Fenchel–Legendre transform approach. We seek a function $q_f(x)$ such that
\[
H(x,\nabla f(x)) = \langle q_f(x), \nabla f(x)\rangle - L(x, q_f(x)),
\]
where $q_f(x)$ is smooth enough so that for each $x_0 \in E$ there exists $x$ satisfying $\dot{x}(t) = q_f(x(t))$. By a convexity argument, we should have $q_f(x) \in (\partial_p H)(x,\nabla f(x))$. Again, define $\lambda(du\times ds) = \delta_{\{q_f(x(s))\}}(du)\times ds$.

8.6.3.3. A general approach. Frequently, the variational representation of $H$ enjoys stronger regularity than is expressed in Condition 8.9. Under these stronger conditions, Condition 8.11 always holds.

Condition 8.30.
(1) $E$ is compact, $\Gamma = E\times U$, $A \subset C(E)\times C(E\times U)$, $D(A)$ is an algebra, and $\overline{D(A)} = C(E)$. For each $u \in U$, $A_u f \equiv Af(\cdot,u)$ satisfies the positive maximum principle and
\[
Af^2(x,u) = 2f(x)Af(x,u), \qquad \forall f \in D(A).
\]
(2) $L : E\times U \to [0,\infty]$ is lower semicontinuous, and there exists a tightness function $\Phi$ on $U$ such that $\Phi(u) \le L(x,u)$ for $(x,u) \in E\times U$.
(3) For each $f \in D(A)$, $Hf \in C(E)$.
(4) For each $f \in D(A)$, there exists a right continuous, nondecreasing function $\psi_f : [0,\infty) \to [0,\infty)$ such that $Af(x,u) \le \psi_f(\Phi(u))$ for each $(x,u) \in E\times U$, and $\lim_{r\to\infty} r^{-1}\psi_f(r) = 0$.

We give an estimate on $H$ under Conditions 8.30 and 8.10.

Lemma 8.31. Suppose that Conditions 8.30 and 8.10 hold, and $H$ is defined as in (8.7). Then for each $f \in D(A)$, there exists $u_f : E \to U$ such that $u_f(x) \in \Gamma_x$ and
\[
(8.43)\qquad Hf(x) = Af(x, u_f(x)) - L(x, u_f(x)),
\]
\[
(8.44)\qquad -\psi_f(0) \le Hf(x) \le c_f \equiv \sup_r \big( \psi_f(r) - r \big) < \infty,
\]
and, for every $u_f(x)$ satisfying (8.43),
\[
(8.45)\qquad L(x, u_f(x)) \le d_f,
\]
where $d_f$ is defined by
\[
d_f \equiv \sup\{d : \psi_f(d) - d \ge -\psi_f(0)\} < \infty.
\]

Proof. By Condition 8.10, the nonnegativity of $L$, and the fact that each level set of $L$ is compact, for each $x \in E$ there exists $u \in \Gamma_x$ such that $L(x,u) = 0$. Hence, by Condition 8.30.4, $Af(x,u) \ge -\psi_f(0)$, giving the left inequality in (8.44). Also by Condition 8.30.4, we have
\[
(8.46)\qquad Hf(x) \le \sup_{u\in\Gamma_x} \big( \psi_f(L(x,u)) - L(x,u) \big),
\]
giving the right inequality. Finally, let $u_n \in \Gamma_x$ satisfy
\[
\lim_{n\to\infty} \big( Af(x,u_n) - L(x,u_n) \big) = Hf(x).
\]
Then (8.44) and Condition 8.30.4 imply
\[
-\psi_f(0) \le \liminf_{n\to\infty} \big( \psi_f(L(x,u_n)) - L(x,u_n) \big),
\]
and hence $\limsup_n L(x,u_n) \le d_f$. By Condition 8.30.2, we can assume that $u_n \to u_f(x) \in \Gamma_x$, and by the continuity of $Af$ and the lower semicontinuity of $L$, we must have (8.43) and (8.45). Indeed, by (8.44) and Condition 8.30.4, for any $u_f(x)$ satisfying (8.43), (8.45) holds. □

Lemma 8.32. Suppose that Conditions 8.30 and 8.10 are satisfied with $H_\ddagger = H$. Then Conditions 8.9 and 8.11 hold.

Proof. Condition 8.9.2 is implied by the definition of $\Gamma$ and Condition 8.10. The other parts of Condition 8.9 are immediate consequences of Condition 8.30. It remains to verify Condition 8.11. To simplify notation, we establish (8.15) with $t = 1$. By Lemma 8.31, for each $x_0 \in E$ and $f \in D(A)$, there exists $u_0 \in U$ such that
\[
Hf(x_0) = Af(x_0,u_0) - L(x_0,u_0).
\]
Since $E$ is compact and $A_{u_0}$ satisfies the positive maximum principle, by Theorem 5.4 in Chapter 4 of Ethier and Kurtz [36], there exists $x_n$ (possibly random) such that $x_n(0) = x_0$ and for each $g \in D(A)$,
\[
M^g(t) = g(x_n(t)) - g(x_n(0)) - \int_0^t Ag(x_n(s),u_0)\,ds
\]
is a martingale on $0 \le t \le \frac{1}{n}$. Since $A_{u_0}g^2 = 2gA_{u_0}g$, the quadratic variation of $M^g$ is zero, and hence $M^g(t) = 0$. Define $u_n(s) = u_0$ for $0 \le s \le 1/n$, so that
\[
g(x_n(t)) - g(x_n(0)) - \int_0^t Ag(x_n(s),u_n(s))\,ds = 0
\]
for $g \in D(A)$ and $0 \le t \le 1/n$. For $x_n(\frac{1}{n}) \in E$, there exists $u_n(\frac{1}{n}) \in U$ such that
\[
Hf\big(x_n(\tfrac{1}{n})\big) = Af\big(x_n(\tfrac{1}{n}), u_n(\tfrac{1}{n})\big) - L\big(x_n(\tfrac{1}{n}), u_n(\tfrac{1}{n})\big).
\]
Define $u_n(t) = u_n(\frac{1}{n})$ for $\frac{1}{n} < t \le \frac{2}{n}$, and continue the construction on successive intervals of length $\frac{1}{n}$ to obtain $x_n$ on $[0,1]$ with control $u_n$; set $\lambda_n(du\times ds) = \delta_{\{u_n(s)\}}(du)\times ds$ and $\hat{x}_n(s) = x_n([ns]/n)$. For $\epsilon > 0$,
\[
g(x_n(t+\epsilon)) - g(x_n(t)) = \int_{U\times[t,t+\epsilon]} Ag(x_n(s),u)\,\lambda_n(du\times ds)
\le \int_{U\times[t,t+\epsilon]} \psi_g(L(x_n(s),u))\,\lambda_n(du\times ds)
\]
\[
\le \epsilon\,\psi_g(\epsilon^{-1}) + \sup_{r\ge\epsilon^{-1}}\frac{\psi_g(r)}{r} \int_{U\times[0,1]} L(x_n(s),u)\,\lambda_n(du\times ds) \equiv \omega_g(\epsilon),
\]
and $\lim_{\epsilon\to 0}\omega_g(\epsilon) = 0$. Since $D(A)$ is dense in $C(E)$, $\{x_n\}$ is relatively compact in $C_E[0,1]$. Consequently, there exists a modulus $\omega$ such that
\[
\sup_n r(x_n(t+\epsilon), x_n(t)) \le \omega(\epsilon),
\]
and it follows that
\[
\sup_n r(\hat{x}_n(t+\epsilon), \hat{x}_n(t)) \le \omega\Big(\frac{[n(t+\epsilon)]}{n} - \frac{[nt]}{n}\Big) \le \omega(\epsilon).
\]
Therefore $\{\hat{x}_n\}$ is relatively compact as well (in the Skorohod topology). Select a subsequence along which $(\hat{x}_n, x_n, \lambda_n)$ converges to $(x, x, \lambda)$. Define occupation measures on $E\times U\times[0,1]$ by
\[
\mu_n(C\times[0,t]) = \int_{U\times[0,t]} I_C(x_n(s),u)\,\lambda_n(du\times ds).
\]
Since $(x_n,\lambda_n) \to (x,\lambda)$, $\mu_n \to \mu$ in the weak topology and
\[
\mu(C\times[0,t]) = \int_{U\times[0,t]} I_C(x(s),u)\,\lambda(du\times ds).
\]
Since
\[
Ag(x_n(s),u_n(s)) \le \psi_g(\Phi(u_n(s))) \le \psi_g(L(\hat{x}_n(s),u_n(s))), \qquad \forall g \in D(A),
\]
we have
\[
\sup_n \int_{E\times U\times[0,1]} \psi_g^{-1}(Ag(x,u))\,\mu_n(dx\times du\times dt)
= \sup_n \int_0^1 \psi_g^{-1}(Ag(x_n(s),u_n(s)))\,ds \le d_g < \infty.
\]
Therefore, by uniform integrability,
\[
\lim_{n\to\infty} \int_{U\times[0,t]} Ag(x_n(s),u)\,\lambda_n(du\times ds)
= \lim_{n\to\infty} \int_{E\times U\times[0,t]} Ag(x,u)\,\mu_n(dx\times du\times ds)
= \int_{U\times[0,t]} Ag(x,u)\,\lambda(du\times ds)
\]
for each $g \in D(A)$ and $0 \le t \le 1$. Therefore
\[
g(x(t)) - g(x_0) - \int_{U\times[0,t]} Ag(x(s),u)\,\lambda(du\times ds) = 0,
\]
that is, $(x,\lambda) \in J_{x_0}$. In addition, by (8.47) and the upper semicontinuity of $Af - L$,
\[
\int_0^1 Hf(x(s))\,ds = \lim_{n\to\infty} \int_0^1 Hf(\hat{x}_n(s))\,ds
= \lim_{n\to\infty} \int_{U\times[0,1]} (Af - L)(\hat{x}_n(s),u)\,\lambda_n(du\times ds)
\le \int_{U\times[0,1]} (Af - L)(x(s),u)\,\lambda(du\times ds).
\]
Noting that $Hf(x) \ge (Af - L)(x,u)$, we conclude
\[
\int_0^1 Hf(x(s))\,ds = \int_{U\times[0,1]} (Af - L)(x(s),u)\,\lambda(du\times ds),
\]
and $(x,\lambda)$ satisfies (8.15). □
Part 3
Examples of large deviations and the comparison principle
CHAPTER 9
The comparison principle

At the beginning of Chapter 2, we outlined a four-step program for proving large deviation results. The technically most difficult step in the program is usually the verification of the comparison principle. For $h_1, h_2 \in B(E)$, we consider subsolutions of
\[
(9.1)\qquad (I - \alpha H_\dagger)f = h_1
\]
and supersolutions of
\[
(9.2)\qquad (I - \alpha H_\ddagger)f = h_2.
\]
For $h_1 = h_2 = h$, the comparison principle states that if $\overline{f}$ is a viscosity subsolution of (9.1) and $\underline{f}$ is a viscosity supersolution of (9.2), then $\overline{f} \le \underline{f}$. Consequently, to verify the comparison principle, we must be able to bound $\overline{f}$ from above and $\underline{f}$ from below.

The techniques in this chapter are mainly extensions of those in Crandall, Ishii and Lions [17]. However, these generalizations are nontrivial. The most significant improvement is the treatment of equations with infinite-dimensional state space. The method here does not rely on Ekeland's perturbed optimization principle or its variants; see Section 9.4. In the past, Ekeland's principle has been a basic point of departure for Hamilton–Jacobi theory in infinite dimensions; see the theory developed by Crandall and Lions [21], [20] and Tataru [116]. This work applies in a restricted class of Banach spaces for equations with sufficiently regular coefficients. These requirements exclude equations arising for large deviations of interacting particle systems, such as Example 1.14. The method in Section 9.4 allows us to treat a large class of problems of this type. It is based on some techniques in Feng and Katsoulakis [43].

Throughout this chapter, we assume that $H_\dagger \subset M^l(E,\mathbb{R}) \times M^u(E',\mathbb{R})$, $H_\ddagger \subset M^u(E,\mathbb{R}) \times M^l(E',\mathbb{R})$, and $H = H_\dagger \cap H_\ddagger$. It follows that $H \subset B(E)\times B(E)$. Note that if the comparison principle holds for $H$, then it holds for the pair $(H_\dagger, H_\ddagger)$. We assume that $(f,g) \in H_\dagger$ and $c \in \mathbb{R}$ imply that $(f+c,g) \in H_\dagger$, and similarly for $H_\ddagger$. Recall that by Remark 7.2, if $E = E'$ and $E$ is compact, then Definition 7.1 is equivalent to Definition 6.1.

9.1. General estimates

The definitions of subsolution and supersolution suggest an approach to the estimates needed on $\overline{f}$ and $\underline{f}$. Suppose that $E$ is compact. If $\overline{f}$ is a subsolution of (9.1) and $(f_0,g_0) \in H_\dagger$, then there exists $x_0 \in E$ such that
\[
(9.3)\qquad \overline{f}(x_0) - f_0(x_0) = \sup_x \big( \overline{f}(x) - f_0(x) \big)
\]
164
9. THE COMPARISON PRINCIPLE
and f (x0 ) ≤ α(g0 )∗ (x0 ) + h1 (x0 ),
(9.4)
and if f is a supersolution of (9.2) and (f1 , g1 ) ∈ H‡ , then there exists x1 ∈ E such that f1 (x1 ) − f (x1 ) = sup f1 (x) − f (x),
(9.5)
x
and f (x1 ) ≥ α(g1 )∗ (x1 ) + h2 (x1 ).
(9.6)
If x1 = x0 and (9.4) and (9.6) both hold, we have (9.7)
f (x0 ) − f (x0 ) ≤ α((g0 )∗ (x0 ) − (g1 )∗ (x0 )) + h1 (x0 ) − h2 (x0 ).
If in addition f (x0 ) − f (x0 ) = sup(f (x) − f (x))
(9.8)
x
and the right side of (9.7) is less than or equal to zero, we have f ≤ f . In other words, for x0 satisfying (9.8), we would like to find (f0 , g0 ) ∈ H† , (f1 , g1 ) ∈ H‡ satisfying (9.3), (9.4), (9.5), and (9.6) (with x1 = x0 ) such that (g0 )∗ (x0 ) − (g1 )∗ (x0 ) ≤ 0. Then supx (f (x) − f (x)) ≤ supx (h1 (x) − h2 (x)), so if h1 = h2 = h, f ≤ f . In the absence of knowledge of the regularity of f and f , however, selecting f0 and f1 so that (9.3) and (9.5) hold with x0 = x1 is likely to be impossible. The following lemma formalizes an approximate approach to the same end. For a subsolution f and (f0 , g0 ) ∈ H† , let V (f , f0 , g0 ) ⊂ E be the collection of x0 satisfying (9.3) and (9.4). For a supersolution f and (f1 , g1 ) ∈ H‡ , let V (f , f1 , g1 ) be the collection of x1 satisfying (9.5) and (9.6). The lemma does not assume that E is compact; however, it does assume that the appropriate extrema are achieved which may be difficult to verify for noncompact E. Lemma 9.1. Let f be a viscosity subsolution of (9.1) and f a viscosity supersolution of (9.2). Suppose that there exist {(f0m , g0m )} ⊂ H† , {(f1m , g1m )} ⊂ H‡ , m m m m m xm 0 ∈ V (f , f0 , g0 ), and x1 ∈ V (f , f1 , g1 ) such that m lim f (xm 0 ) − f (x1 ) = sup f (x) − f (x)
m→∞
x
lim
m→∞
h1 (xm 0 )
− h2 (xm 1 )=0
and m m lim sup(g0m )∗ (xm 0 ) − (g1 )∗ (x1 ) ≤ 0. m→∞
Then f ≤ f . Proof. Note that (9.4) and (9.6) imply (9.9)
m m ∗ m m m m m f (xm 0 ) − f (x1 ) ≤ α((g0 ) (x0 ) − (g1 )∗ (x1 )) + h1 (x0 ) − h2 (x1 ),
and hence passing to the limit, we have sup(f (x) − f (x)) ≤ 0. x
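The hypotheses of Lemma 9.1 are typically produced by a doubling-of-variables argument. The following schematic (a heuristic sketch only, with a penalty function $\Psi\ge 0$ that vanishes exactly on the diagonal) indicates the idea that Lemmas 9.2 and 9.3 below make precise:

```latex
% Heuristic: penalize the doubled variables. For \mu large, set
M_\mu=\sup_{x,y\in E}\bigl(\overline f(x)-\underline f(y)-\mu\Psi(x,y)\bigr),
% and let (x_\mu,y_\mu) nearly attain the supremum. Then x_\mu is a
% near-maximizer of \overline f-f_0 for the test function
f_0=\mu\Psi(\cdot,y_\mu),
% y_\mu is a near-maximizer of f_1-\underline f for
f_1=-\mu\Psi(x_\mu,\cdot),
% and \mu\Psi(x_\mu,y_\mu)\to 0 forces x_\mu and y_\mu together, so that
\overline f(x_\mu)-\underline f(y_\mu)\to\sup_x\bigl(\overline f(x)-\underline f(x)\bigr).
```

Lemma 9.2 controls the limiting behavior of such near-maximizers; Lemma 9.3 adds the perturbation $\delta V$ needed when $E$ is not compact.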
The following purely analytic lemma is useful in constructing the sequences required in the previous lemma. It is a straightforward adaptation to metric spaces of Proposition 3.7 in Crandall, Ishii and Lions [17].

Lemma 9.2. Let $E$ be a metric space. Suppose that $\Psi:E\times E\to[0,\infty]$ is lower semicontinuous, $\Psi\ge 0$, and for each $\mu>0$, $\Phi_\mu:E\times E\to[-\infty,\infty)$ is upper semicontinuous. Suppose that there exists an upper semicontinuous $\Phi:E\times E\to[-\infty,\infty)$, $\Phi\not\equiv-\infty$, such that
\[
\lim_{\mu\to\infty}\sup_{x,y\in E}\bigl|\Phi(x,y)\vee C-\Phi_\mu(x,y)\vee C\bigr|=0,\qquad\forall C\in\mathbb R.
\]
Define
\[
M_\mu=\sup_{(x,y)\in E\times E}\bigl(\Phi_\mu(x,y)-\mu\Psi(x,y)\bigr).
\]
Suppose that $M_\mu<\infty$ for some $\mu>0$ and that $M_\infty\equiv\lim_{\mu\to\infty}M_\mu>-\infty$. Select $(x_\mu,y_\mu)\in E\times E$ so that
\[
\lim_{\mu\to\infty}\bigl(M_\mu-(\Phi_\mu(x_\mu,y_\mu)-\mu\Psi(x_\mu,y_\mu))\bigr)=0.
\]
Then the following hold:
a) $\lim_{\mu\to\infty}\mu\Psi(x_\mu,y_\mu)=0$.
b) $\lim_{\mu\to\infty}\Phi_\mu(x_\mu,y_\mu)=\lim_{\mu\to\infty}M_\mu\equiv M_\infty\ge\sup_{\{(x,y):\Psi(x,y)=0\}}\Phi(x,y)$.
c) If $(\hat x,\hat y)$ is a limit point of $(x_\mu,y_\mu)$ as $\mu\to\infty$, then $\Psi(\hat x,\hat y)=0$ and
\[
\lim_{\mu\to\infty}M_\mu=\lim_{\mu\to\infty}\Phi_\mu(x_\mu,y_\mu)=\Phi(\hat x,\hat y)=\sup_{\{(x,y):\Psi(x,y)=0\}}\Phi(x,y). \tag{9.10}
\]

Proof. Note that the hypotheses will still hold if we replace $\Phi_\mu$ by $\Phi$, and that the conclusions with $\Phi$ in place of $\Phi_\mu$ imply the original conclusions. Consequently, we assume that $\Phi_\mu=\Phi$. Then $M_\mu$ is decreasing, and for $\mu>\mu_0$,
\[
\limsup_{\mu\to\infty}(\mu-\mu_0)\Psi(x_\mu,y_\mu)\le M_{\mu_0}-M_\infty. \tag{9.11}
\]
Since the right side of (9.11) can be made arbitrarily small, Part (a) follows, which in turn implies Part (b). The fact that $\Psi(\hat x,\hat y)=0$ follows from the lower semicontinuity of $\Psi$. The first equality in (9.10) follows from Part (a), and the second and third from the upper semicontinuity of $\Phi$ and the definition of $(x_\mu,y_\mu)$. □

In the setting of Chapter 6, the natural domain of $H$ will usually be closed under affine transformations, that is, if $f\in\mathcal D(H)$, then for constants $a,b$, $af+b\in\mathcal D(H)$ and $H(af+b)=H(af)$. More precisely, we will say that $H$ is closed under affine transformations if $f\in\mathcal D(H)$ implies that $af\in\mathcal D(H)$, and $(af,g)\in H$ and $b\in\mathbb R$ imply that $(af+b,g)\in H$. In the setting of Chapter 7, $\mathcal D(H_\dagger)$ and $\mathcal D(H_\ddagger)$ will usually be closed under multiplication by positive constants. (As noted, we can always assume that the domains are closed under addition of a constant.) In order to simplify the notation, in the next lemma, for an operator $H$, we write $H^*f(x)=(Hf)^*(x)$ and $H_*f(x)=(Hf)_*(x)$.

Lemma 9.3. Suppose that $\mathcal D(H_\dagger)$ and $\mathcal D(H_\ddagger)$ are closed under addition of arbitrary constants with $H_\dagger(f+c)=H_\dagger f$, $H_\ddagger(f+c)=H_\ddagger f$. Let $h_1,h_2\in C_b(E)$ and $\alpha>0$, and let $\overline f$ be a viscosity subsolution of (9.1) and $\underline f$ a viscosity supersolution of (9.2). Suppose that $V:E\to[0,\infty]$ is lower semicontinuous, $V\ge 0$, and $V\not\equiv\infty$, and that $\Psi$ is lower semicontinuous on $E\times E$, $\Psi\ge 0$, $\Psi(x,y)>0$ when $x\ne y$, and
$\{x\in E:\Psi(x,x)=0\}$ is nonempty. Suppose that there exist $\mu_0>0$, $\delta_0>0$, and $\epsilon_0>0$ such that for $\mu>\mu_0$, $0<\delta<\delta_0$, $|\lambda-1|\le\epsilon_0$, and each $x\in E$, $-\mu\Psi(x,\cdot)-\delta V\in\mathcal D(H_\ddagger)$ and $\lambda^{-1}\mu\Psi(\cdot,x)+\delta V\in\mathcal D(H_\dagger)$. Let $|\lambda_m-1|\le\epsilon_0$, $\mu_m>\mu_0$, $\lim_{m\to\infty}\lambda_m=\lambda>0$, and $\lim_{m\to\infty}\mu_m=\infty$.

Then for $\delta$ sufficiently small, there exist $x_0^m$, $x_1^m$, $\hat x_0^m$, and $\hat x_1^m$ such that
\[
\lambda_m\overline f(x_0^m)-\underline f(x_1^m)-\mu_m\Psi(x_0^m,x_1^m)-\lambda_m\delta V(x_0^m)-\delta V(x_1^m) \tag{9.12}
\]
\[
\qquad\ge\sup_{x,y\in E}\bigl(\lambda_m\overline f(x)-\underline f(y)-\mu_m\Psi(x,y)-\lambda_m\delta V(x)-\delta V(y)\bigr)-\frac1{2m},
\]
\[
\lambda_m\overline f(\hat x_0^m)-\underline f(x_1^m)-\mu_m\Psi(\hat x_0^m,x_1^m)-\lambda_m\delta V(\hat x_0^m)-\delta V(x_1^m) \tag{9.13}
\]
\[
\qquad\ge\sup_{x,y\in E}\bigl(\lambda_m\overline f(x)-\underline f(y)-\mu_m\Psi(x,y)-\lambda_m\delta V(x)-\delta V(y)\bigr)-\frac1m,
\]
\[
\lambda_m\overline f(x_0^m)-\underline f(\hat x_1^m)-\mu_m\Psi(x_0^m,\hat x_1^m)-\lambda_m\delta V(x_0^m)-\delta V(\hat x_1^m) \tag{9.14}
\]
\[
\qquad\ge\sup_{x,y\in E}\bigl(\lambda_m\overline f(x)-\underline f(y)-\mu_m\Psi(x,y)-\lambda_m\delta V(x)-\delta V(y)\bigr)-\frac1m,
\]
\[
\sup_{\{x:\Psi(x,x)=0\}}\bigl(\lambda\overline f(x)-\underline f(x)-(1+\lambda)\delta V(x)\bigr) \tag{9.15}
\]
\[
\qquad\le\liminf_{m\to\infty}\alpha\Bigl(\lambda_m(H_\dagger)^*\bigl(\lambda_m^{-1}\mu_m\Psi(\cdot,x_1^m)+\delta V\bigr)(\hat x_0^m)-(H_\ddagger)_*\bigl(-\mu_m\Psi(x_0^m,\cdot)-\delta V\bigr)(\hat x_1^m)\Bigr)+\limsup_{m\to\infty}\bigl(\lambda h_1(\hat x_0^m)-h_2(\hat x_1^m)\bigr),
\]
\[
\limsup_{m\to\infty}(H_\ddagger)_*\bigl(-\mu_m\Psi(x_0^m,\cdot)-\delta V\bigr)(\hat x_1^m)\le\alpha^{-1}\bigl(\|\underline f\|+\|h_2\|\bigr)<\infty, \tag{9.16}
\]
\[
\liminf_{m\to\infty}(H_\dagger)^*\bigl(\lambda_m^{-1}\mu_m\Psi(\cdot,x_1^m)+\delta V\bigr)(\hat x_0^m)\ge-\alpha^{-1}\bigl(\|\overline f\|+\|h_1\|\bigr)>-\infty, \tag{9.17}
\]
and
\[
\lim_{m\to\infty}\mu_m\bigl(\Psi(x_0^m,x_1^m)+\Psi(\hat x_0^m,x_1^m)+\Psi(x_0^m,\hat x_1^m)\bigr)=0. \tag{9.18}
\]
If the level sets $\{(x,y):\mu\Psi(x,y)+\lambda\delta V(x)+\delta V(y)\le c\}$ are relatively compact for some $\mu>0$ (and hence for all $\mu>0$), then the sequence $\{(x_0^m,x_1^m,\hat x_0^m,\hat x_1^m)\}$ is relatively compact in $E\times E\times E\times E$, any limit point will be of the form $(x,x,x,x)$ with $\Psi(x,x)=0$, and the right side of (9.15) is bounded by
\[
\liminf_{m\to\infty}\alpha\Bigl(\lambda_m(H_\dagger)^*\bigl(\lambda_m^{-1}\mu_m\Psi(\cdot,x_1^m)+\delta V\bigr)(\hat x_0^m)-(H_\ddagger)_*\bigl(-\mu_m\Psi(x_0^m,\cdot)-\delta V\bigr)(\hat x_1^m)\Bigr)+\sup_{\{x:\Psi(x,x)=0\}}\bigl(\lambda h_1(x)-h_2(x)\bigr). \tag{9.19}
\]
Suppose that $x_0^m$ and $x_1^m$ satisfy
\[
\lambda_m\overline f(x_0^m)-\underline f(x_1^m)-\mu_m\Psi(x_0^m,x_1^m)-\lambda_m\delta V(x_0^m)-\delta V(x_1^m) \tag{9.20}
\]
\[
\qquad=\sup_{x,y\in E}\bigl(\lambda_m\overline f(x)-\underline f(y)-\mu_m\Psi(x,y)-\lambda_m\delta V(x)-\delta V(y)\bigr).
\]
(If the level sets of $\mu_m\Psi(x,y)+\lambda_m\delta V(x)+\delta V(y)$ are relatively compact, then $x_0^m,x_1^m$ will exist by the semicontinuity assumptions.) If $\overline f$ is a strong subsolution and $\underline f$ is a strong supersolution, then $\hat x_0^m$ and $\hat x_1^m$ can be chosen so that $\hat x_0^m=x_0^m$ and $\hat x_1^m=x_1^m$.

Remark 9.4. The lemma will be applied in situations in which the first expression in (9.19) is bounded by a function $\omega(\lambda)$ satisfying $\lim_{\lambda\to1}\omega(\lambda)=0$. Observe that $V(x)$ may be infinite, so this estimate only implies
\[
\sup_{\{x:\Psi(x,x)=0,\,V(x)<\infty\}}\bigl(\overline f(x)-\underline f(x)\bigr)\le\sup_{\{x:\Psi(x,x)=0\}}\bigl(h_1(x)-h_2(x)\bigr). \tag{9.21}
\]

Condition 9.5. There exist $\mu_0,\delta_0,\epsilon_0>0$ such that for each $x_0,y_0\in E$ satisfying $V(x_0)+V(y_0)<\infty$, $\mu>\mu_0$, $0<\delta<\delta_0$, and $|\lambda-1|\le\epsilon_0$, the following hold:
(1) There exists a lower semicontinuous $g_0$ such that $\mu\Psi(\cdot,y_0)+\delta V(\cdot)+g_0\in\mathcal D(H_\dagger)$, $g_0(x_0)=0$, and $g_0(x)>0$ for $x\ne x_0$, and
\[
H_\dagger^*\bigl(\lambda^{-1}\mu\Psi(\cdot,y_0)+\delta V(\cdot)+g_0\bigr)(x_0)\le H_\dagger^*\bigl(\lambda^{-1}\mu\Psi(\cdot,y_0)+\delta V(\cdot)\bigr)(x_0).
\]
(2) There exists a lower semicontinuous $g_1$ such that $-\mu\Psi(x_0,\cdot)-\delta V(\cdot)-g_1\in\mathcal D(H_\ddagger)$, $g_1(y_0)=0$, and $g_1(y)>0$ for $y\ne y_0$, and
\[
(H_\ddagger)_*\bigl(-\mu\Psi(x_0,\cdot)-\delta V(\cdot)-g_1\bigr)(y_0)\ge(H_\ddagger)_*\bigl(-\mu\Psi(x_0,\cdot)-\delta V(\cdot)\bigr)(y_0).
\]

Lemma 9.6. Let Condition 9.5 hold in Lemma 9.3. Suppose that $x_0^m$ and $x_1^m$ satisfy (9.20). Then $\hat x_0^m$ and $\hat x_1^m$ can be chosen so that $\hat x_0^m=x_0^m$ and $\hat x_1^m=x_1^m$.
Proof. Let (9.20) be satisfied. Then there exists $g_0$ satisfying Condition 9.5 with $x_0$ replaced by $x_0^m$ such that for $x\ne x_0^m$,
\[
\lambda_m\overline f(x_0^m)-\bigl(\underline f(x_1^m)+\mu_m\Psi(\cdot,x_1^m)+\lambda_m\delta V(\cdot)+g_0+\delta V(x_1^m)\bigr)(x_0^m)
\]
\[
\qquad>\lambda_m\overline f(x)-\bigl(\underline f(x_1^m)+\mu_m\Psi(\cdot,x_1^m)+\lambda_m\delta V(\cdot)+g_0+\delta V(x_1^m)\bigr)(x).
\]
Since $x_0^m$ is the unique maximum,
\[
\overline f(x_0^m)\le\alpha H_\dagger^*\bigl(\lambda_m^{-1}\mu_m\Psi(\cdot,x_1^m)+\delta V+g_0\bigr)(x_0^m)+h_1(x_0^m)\le\alpha H_\dagger^*\bigl(\lambda_m^{-1}\mu_m\Psi(\cdot,x_1^m)+\delta V\bigr)(x_0^m)+h_1(x_0^m),
\]
that is, we may take $\hat x_0^m=x_0^m$. Similarly, one can select $\hat x_1^m=x_1^m$. □
A special case of Lemma 9.3 is of particular interest.

Lemma 9.7. In addition to the assumptions of Lemma 9.3, let $K=\{x:\Psi(x,x)=0\}$ be compact and assume that for each open $U\supset K$,
\[
\inf_{(x,y)\in(U\times U)^c}\Psi(x,y)>0.
\]
Then, for $x_0^m$, $x_1^m$, $\hat x_0^m$, and $\hat x_1^m$ as in Lemma 9.3,
\[
\sup_{x\in K}\bigl(\lambda\overline f(x)-\underline f(x)-(1+\lambda)\delta V(x)\bigr)\le\sup_{x\in K}\bigl(\lambda h_1(x)-h_2(x)\bigr) \tag{9.28}
\]
\[
\qquad+\liminf_{m\to\infty}\alpha\Bigl(\lambda_m(H_\dagger)^*\bigl(\lambda_m^{-1}\mu_m\Psi(\cdot,x_1^m)+\delta V(\cdot)\bigr)(\hat x_0^m)-(H_\ddagger)_*\bigl(-\mu_m\Psi(x_0^m,\cdot)-\delta V(\cdot)\bigr)(\hat x_1^m)\Bigr).
\]
If one of the four sequences $\{x_0^m\}$, $\{\hat x_0^m\}$, $\{x_1^m\}$, or $\{\hat x_1^m\}$ converges, all four converge to the same point.

Proof. Since $\lim_{m\to\infty}\mu_m\Psi(x_0^m,x_1^m)=0$, by the assumptions on $\Psi$, $\{x_0^m\}$ and $\{x_1^m\}$ are both relatively compact. Similarly, $\{\hat x_0^m\}$ and $\{\hat x_1^m\}$ are relatively compact. Therefore (9.28) follows from (9.15) and the continuity of $h_1$ and $h_2$. □
As (9.15) and (9.28) suggest, we can verify the comparison principle by estimating $\lambda H_\dagger(\lambda^{-1}f_1)(x)-H_\ddagger f_2(y)$ for some $\lambda^{-1}f_1\in\mathcal D(H_\dagger)$, $f_2\in\mathcal D(H_\ddagger)$, where $x$ and $y$ are close to each other. In the remainder of this section, we discuss concrete techniques for estimating this difference for a variety of operators $H_\dagger$ and $H_\ddagger$ with special structures.

The first technique we discuss deals with a simple but general situation where $H_\dagger=H_\ddagger=H$ and $H$ admits a variational representation (9.29) of the form considered in Chapter 8. Note that in the following lemma, the role of $\lambda$ above is played by $(1+\epsilon)$.

Lemma 9.8. Let $(E,r)$ and $(U,q)$ be metric spaces, $\Gamma\subset E\times U$, and $\Gamma_x=\{u:(x,u)\in\Gamma\}$. Suppose $H\subset C_b(E)\times B(E)$ satisfies
\[
Hf(x)=\sup_{u\in\Gamma_x}\{Af(x,u)-L(x,u)\}, \tag{9.29}
\]
where $A$ is linear and $L$ is a function defined on $\Gamma$. Let $f_1,f_2\in\mathcal D(H)$, $x,y\in E$, $C>0$, and $\epsilon>0$ be given. Suppose that for each $u\in\Gamma_x$, there exists $v\in\Gamma_y$ such that
\[
Af_1(x,u)-Af_2(y,v)\le\frac\epsilon2\bigl(C+L(x,u)\bigr),\qquad L(y,v)-L(x,u)\le\frac\epsilon2\bigl(C+L(x,u)\bigr). \tag{9.30}
\]
Then
\[
(1+\epsilon)H\Bigl(\frac{f_1}{1+\epsilon}\Bigr)(x)-Hf_2(y)\le\epsilon C.
\]
Proof. There exists $u_n\in\Gamma_x$ such that
\[
(1+\epsilon)H\Bigl(\frac{f_1}{1+\epsilon}\Bigr)(x)\le Af_1(x,u_n)-(1+\epsilon)L(x,u_n)+\frac1n.
\]
For each $n$, choose $v_n\in\Gamma_y$ satisfying (9.30) for $u=u_n$. Then
\[
(1+\epsilon)H\Bigl(\frac{f_1}{1+\epsilon}\Bigr)(x)-Hf_2(y)\le\frac1n+Af_1(x,u_n)-Af_2(y,v_n)+L(y,v_n)-(1+\epsilon)L(x,u_n)
\]
\[
\qquad\le\frac1n+\frac\epsilon2\bigl(C+L(x,u_n)\bigr)+\frac\epsilon2\bigl(C+L(x,u_n)\bigr)+L(x,u_n)-(1+\epsilon)L(x,u_n)=\frac1n+\epsilon C.
\]
Sending $n\to\infty$, the result follows. □
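For orientation, the representation (9.29) can be made explicit in the quadratic case that reappears in Section 9.3 (a standard computation; the choice $U=\mathbb R^d$ and the control form of $A$ and $L$ below are our illustration, not taken from this section):

```latex
% For H(x,p)=\tfrac12|\sigma(x)^Tp|^2+b(x)\cdot p, take U=\mathbb R^d,
% \Gamma=E\times U, Af(x,u)=(b(x)+\sigma(x)u)\cdot\nabla f(x), L(x,u)=\tfrac12|u|^2.
% Completing the square,
(\sigma(x)u)\cdot p-\tfrac12|u|^2
  =-\tfrac12\bigl|u-\sigma(x)^Tp\bigr|^2+\tfrac12\bigl|\sigma(x)^Tp\bigr|^2,
% so the supremum over u is attained at u=\sigma(x)^Tp and
\sup_{u\in\mathbb R^d}\bigl\{(b(x)+\sigma(x)u)\cdot p-\tfrac12|u|^2\bigr\}
  =b(x)\cdot p+\tfrac12\bigl|\sigma(x)^Tp\bigr|^2.
```

With this representation and $v=u$, checking (9.30) for Lipschitz $\sigma$ and $b$ reduces to elementary estimates; compare Lemma 9.25, where the same Hamiltonian is handled directly.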
9.2. General conditions in $\mathbb R^d$

Application of Lemma 9.7 is simpler if every viscosity subsolution (supersolution) is a strong subsolution (supersolution). If the natural state space is $\mathbb R^d$, we can frequently replace $\mathbb R^d$ by the one-point compactification $E=\mathbb R^d\cup\{\infty\}$. For many examples,
\[
\mathcal D(H)\subset\{f\in C(E):\nabla f\text{ is continuous, }\nabla f(\infty)\equiv\lim_{|x|\to\infty}\nabla f(x)=0\},
\]
and the operator $H$ can be written as
\[
Hf(x)=H(x,\nabla f(x)) \tag{9.31}
\]
for some function $H(x,p)$ (or similarly for $H_\dagger$ and $H_\ddagger$). For the function $H(x,p)$, $H^*$ denotes the upper semicontinuous regularization and $H_*$ denotes the lower semicontinuous regularization, that is,
\[
H^*(x,p)=\lim_{\epsilon\to0}\ \sup_{(y,q)\in B_\epsilon(x,p)}H(y,q),
\]
with $\sup$ replaced by $\inf$ in the definition of $H_*$. For the operator $H$,
\[
H^*f(x)=\lim_{\epsilon\to0}\sup_{y\in B_\epsilon(x)}H(y,\nabla f(y)).
\]
In general, $H^*f(x)$ need not equal $H^*(x,\nabla f(x))$ (we always have $H^*f(x)\le H^*(x,\nabla f(x))$); however, if $H$ satisfies
\[
\lim_{\delta\to0}\sup_{q\in B_\delta(p)}\sup_{x\in\Gamma}\bigl|H(x,q)-H(x,p)\bigr|=0
\]
for each compact $\Gamma\subset\mathbb R^d$, then for $x\in\mathbb R^d$,
\[
H^*(x,p)=\lim_{\epsilon\to0}\sup_{y\in B_\epsilon(x)}H(y,p).
\]
Consequently, we will assume that
\[
(H_\dagger f)^*(x)=H_\dagger^*(x,\nabla f(x)),\qquad f\in\mathcal D(H_\dagger), \tag{9.32}
\]
and, similarly, that
\[
(H_\ddagger f)_*(x)=(H_\ddagger)_*(x,\nabla f(x)),\qquad f\in\mathcal D(H_\ddagger). \tag{9.33}
\]
For operators having this kind of local form, one typically has equivalence of viscosity and strong viscosity solutions.

Lemma 9.9. Let $E=\mathbb R^d\cup\{\infty\}$. Let $H(x,p)$ be measurable,
\[
\mathcal D(H_\dagger)\subset\{f\in C(E):\nabla f\text{ is continuous, }\nabla f(\infty)\equiv\lim_{|x|\to\infty}\nabla f(x)=0\},
\]
and $H_\dagger f(x)=H^*(x,\nabla f(x))$. Assume that $\mathcal D(H_\dagger)$ is closed under addition and that for each $x_0\in E$, there exists $f\in\mathcal D(H_\dagger)$ such that $f(x_0)=0$ and $f(x)>0$ for $x\ne x_0$. Let $h\in C(E)$. If $\overline f$ is a viscosity subsolution of $(I-\alpha H_\dagger)f=h$, then $\overline f$ is a strong subsolution.

Similarly, let $\mathcal D(H_\ddagger)\subset\{f\in C(E):\nabla f\text{ is continuous, }\nabla f(\infty)\equiv\lim_{|x|\to\infty}\nabla f(x)=0\}$ and $H_\ddagger f(x)=H_*(x,\nabla f(x))$. Assume that $\mathcal D(H_\ddagger)$ is closed under addition and that for each $x_0\in E$, there exists $f\in\mathcal D(H_\ddagger)$ such that $f(x_0)=0$ and $f(x)<0$ for $x\ne x_0$. Let $h\in C(E)$. If $\underline f$ is a viscosity supersolution of $(I-\alpha H_\ddagger)f=h$, then $\underline f$ is a strong supersolution.

Proof. Since $E$ is compact, it is sufficient to show that if $f_0\in\mathcal D(H_\dagger)$ and
\[
\overline f(x_0)-f_0(x_0)=\sup_x\bigl(\overline f(x)-f_0(x)\bigr),
\]
then
\[
\alpha^{-1}\bigl(\overline f(x_0)-h(x_0)\bigr)\le H^*(x_0,\nabla f_0(x_0)). \tag{9.34}
\]
Let $f_1\in\mathcal D(H_\dagger)$, $f_1(x_0)=0$, and $f_1(x)>0$ for $x\ne x_0$. Then $f_0+f_1\in\mathcal D(H_\dagger)$ and $x_0$ is the unique point in $E$ such that
\[
\overline f(x_0)-f_0(x_0)-f_1(x_0)=\sup_x\bigl(\overline f(x)-f_0(x)-f_1(x)\bigr).
\]
Consequently,
\[
\alpha^{-1}\bigl(\overline f(x_0)-h(x_0)\bigr)\le H_\dagger^*\bigl(x_0,\nabla f_0(x_0)+\nabla f_1(x_0)\bigr)=H_\dagger^*\bigl(x_0,\nabla f_0(x_0)\bigr). \tag{9.35}
\]
The proof of the supersolution result is similar. □
We will prove the comparison principle under the following conditions on $H_\dagger$ and $H_\ddagger$.

Condition 9.10. $H_\dagger$ satisfies (9.32), $H_\ddagger$ satisfies (9.33), and the following hold:
(1) For each compact $\Gamma\subset\mathbb R^d$, there exist $\mu_m\to\infty$ and $\omega:(0,\infty)\to[0,\infty]$ with
\[
\lim_{\epsilon\to0}\ \inf_{|\lambda-1|\le\epsilon}\omega(\lambda)=0 \tag{9.36}
\]
such that for each $\lambda>0$ and each sequence $\{(x_m,y_m)\}\subset\Gamma\times\Gamma$ satisfying $\mu_m|x_m-y_m|^2\to0$,
\[
\sup_m(H_\ddagger)_*\bigl(y_m,\mu_m(x_m-y_m)\bigr)<\infty \tag{9.37}
\]
and
\[
\inf_m H_\dagger^*\Bigl(x_m,\frac{\mu_m(x_m-y_m)}\lambda\Bigr)>-\infty, \tag{9.38}
\]
we have
\[
\liminf_{m\to\infty}\Bigl[\lambda H_\dagger^*\Bigl(x_m,\frac{\mu_m(x_m-y_m)}\lambda\Bigr)-(H_\ddagger)_*\bigl(y_m,\mu_m(x_m-y_m)\bigr)\Bigr]\le\omega(\lambda). \tag{9.39}
\]
(2) If $|x_m|\to\infty$ and $|x_m||p_m|\to0$, then
\[
\limsup_{m\to\infty}H_\dagger(x_m,p_m)\le0\quad\text{and}\quad\liminf_{m\to\infty}H_\ddagger(x_m,p_m)\ge0.
\]

Occasionally, we will use a stronger version of Condition 9.10.1.

Condition 9.11. If $\lambda>1$, $\mu_m\to\infty$, $\sup_m(|x_m|+|y_m|)<\infty$, and $\mu_m|x_m-y_m|^2\to0$, then
\[
\liminf_{m\to\infty}\Bigl[\lambda H_\dagger^*\Bigl(x_m,\frac{\mu_m(x_m-y_m)}\lambda\Bigr)-(H_\ddagger)_*\bigl(y_m,\mu_m(x_m-y_m)\bigr)\Bigr]\le0. \tag{9.40}
\]
The following lemmas are immediate.
Lemma 9.12. If Condition 9.11 holds, then Condition 9.10.1 holds with
\[
\omega(\lambda)=\begin{cases}0&\lambda>1\\ \infty&\lambda\le1.\end{cases}
\]

Lemma 9.13. Suppose $(H_\dagger^{(1)},H_\ddagger^{(1)}),\dots,(H_\dagger^{(k)},H_\ddagger^{(k)})$ satisfy Condition 9.11 and $a_1,\dots,a_k\ge0$. Then $(H_\dagger,H_\ddagger)=\bigl(a_1H_\dagger^{(1)}+\cdots+a_kH_\dagger^{(k)},\,a_1H_\ddagger^{(1)}+\cdots+a_kH_\ddagger^{(k)}\bigr)$ satisfies Condition 9.11.

Remark 9.14. Although Condition 9.10 has no obvious intuitive interpretation, we will see that it covers many examples. In particular, it gives the classical results of Freidlin-Wentzell theory (Freidlin and Wentzell [52]) and many generalizations. Freidlin and Wentzell placed regularity conditions on the Fenchel-Legendre transform $L$ of $H$ ($=H_\dagger=H_\ddagger$) (see Remark 10.16), whereas Condition 9.10 places regularity conditions on $H$ (or $H_\dagger$ and $H_\ddagger$). For a given problem, regularity of $H$ is typically easier to check. A number of sufficient conditions will be given in the lemmas below.

Lemma 9.15. Let $E=\mathbb R^d\cup\{\infty\}$ and $\mathcal D(H_\dagger)=\mathcal D(H_\ddagger)=\mathcal D_d$ given by
\[
\mathcal D_d=\{f\in C(E):f|_{\mathbb R^d}-f(\infty)\in C_c^2(\mathbb R^d)\}. \tag{9.41}
\]
Define $\nabla f(\infty)=0$, and assume that $H_\dagger$ and $H_\ddagger$ have the form
\[
H_\dagger f(x)=H_\dagger(x,\nabla f(x)),\qquad H_\ddagger f(x)=H_\ddagger(x,\nabla f(x)), \tag{9.42}
\]
where $H_\dagger(x,p)$ and $H_\ddagger(x,p)$ satisfy Condition 9.10. Then for $h_1=h_2=h\in C(E)$ and $\alpha>0$, the comparison principle holds for subsolutions of (9.1) and supersolutions of (9.2).

Proof. By Lemma 7.6, under Condition 9.10.2, we can extend the domains to contain the subspace of functions that are continuously differentiable on $\mathbb R^d$ and satisfy
\[
\lim_{|x|\to\infty}|x||\nabla f(x)|=0. \tag{9.43}
\]
By Lemma 9.9, every subsolution of (9.1) is a strong subsolution and every supersolution of (9.2) is a strong supersolution, so we can apply Lemma 9.7 with $\hat x_i^m=x_i^m$. Let $\mu_m=m$, $\lambda=1$,
\[
\Psi(x,y)=\frac1{1+|x|^3}+\frac1{1+|y|^3},
\]
and $K=\{\infty\}$. Then by (9.28),
\[
\overline f(\infty)-\underline f(\infty)\le\liminf_{m\to\infty}\alpha\Bigl(H_\dagger^*\Bigl(x_0^m,-m\frac{3x_0^m|x_0^m|}{(1+|x_0^m|^3)^2}\Bigr)-(H_\ddagger)_*\Bigl(x_1^m,m\frac{3x_1^m|x_1^m|}{(1+|x_1^m|^3)^2}\Bigr)\Bigr),
\]
and since $\lim_{m\to\infty}m\Psi(x_0^m,x_1^m)=0$, the right side is less than or equal to zero by Condition 9.10.2.

Suppose $\delta\equiv\sup_x(\overline f(x)-\underline f(x))>0$. By upper semicontinuity, there exist $\kappa>0$ and $\epsilon>0$ such that
\[
\sup_{|\lambda-1|\le\epsilon}\ \sup_{|x|\ge\kappa}\bigl(\lambda\overline f(x)-\underline f(x)\bigr)\le\frac\delta3
\]
and
\[
\inf_{|\lambda-1|\le\epsilon}\ \sup_x\bigl(\lambda\overline f(x)-\underline f(x)\bigr)\ge\frac{2\delta}3.
\]
Let $\Gamma=\{x:|x|\le2\kappa\}$, and let $\mu_m$ and $\omega$ be as in Condition 9.10.1. Let $\varphi_i:[0,\infty)\to[0,\infty)$, $i=1,2$, be nondecreasing and continuously differentiable, with $\varphi_1(r)=\frac12r$ for $0\le r\le1$, $\varphi_1(r)=1$ for $r>2$, $\varphi_2(r)=0$ for $0\le r\le2\kappa^2$, $\varphi_2(r)>0$ for $r>2\kappa^2$, and $\varphi_2(r)=1$ for $r>4\kappa^2$. Now apply Lemma 9.7 with
\[
\Psi(x,y)=\begin{cases}\varphi_1(|x-y|^2)+\varphi_2(|x|^2)+\varphi_2(|y|^2)&|x|,|y|<\infty\\ 2+\varphi_2(|x|^2)&|x|<\infty,\ y=\infty\\ 2+\varphi_2(|y|^2)&|y|<\infty,\ x=\infty\\ 2&x=y=\infty\end{cases}
\]
and $V\equiv0$. Combining (9.28) with (9.36) and letting $\lambda\to1$ then contradicts $\delta>0$. □

The following lemmas give sufficient conditions for Condition 9.10 to hold. The first treats $H$ ($H_\dagger$ and $H_\ddagger$) that are nondegenerate in a sense that will be clear from specific examples.
Lemma 9.16. Suppose $H_\dagger$ is upper semicontinuous and $H_\ddagger$ is lower semicontinuous on $\mathbb R^d\times\mathbb R^d$ and $H_\dagger(x,p)\le H_\ddagger(x,p)$, $x,p\in\mathbb R^d$. Suppose that for each compact $\Gamma\subset\mathbb R^d$,
\[
\lim_{|p|\to\infty}\inf_{x\in\Gamma}H_\ddagger(x,p)=\infty. \tag{9.45}
\]
Then Condition 9.10.1 holds with
\[
\omega(\lambda)=\begin{cases}0&\lambda=1\\ \infty&\lambda\ne1.\end{cases}
\]
If $H_\ddagger$ is convex in $p$ and for each $x,p\in\mathbb R^d$,
\[
\lim_{r\to\infty}H_\ddagger(x,rp)=\infty, \tag{9.46}
\]
then (9.45) holds.

Proof. Suppose that $\lambda=1$, $\mu_m\to\infty$, $x_m,y_m\in\Gamma$, $\mu_m|x_m-y_m|^2\to0$, and
\[
\sup_m(H_\ddagger)_*\bigl(y_m,\mu_m(x_m-y_m)\bigr)<\infty.
\]
By (9.45), $\sup_m\mu_m|x_m-y_m|<\infty$. Select a subsequence along which $(x_m,y_m,\mu_m(x_m-y_m))$ converges to some $(x_0,x_0,p_0)$. Then by the semicontinuity assumptions,
\[
\liminf_{m\to\infty}\bigl[H_\dagger^*\bigl(x_m,\mu_m(x_m-y_m)\bigr)-(H_\ddagger)_*\bigl(y_m,\mu_m(x_m-y_m)\bigr)\bigr]\le H_\dagger^*(x_0,p_0)-(H_\ddagger)_*(x_0,p_0)\le0,
\]
and (9.39) holds with $\omega(1)=0$.

To see that (9.46) implies (9.45), it is sufficient to consider sequences $|p_n|\to\infty$ and $x_n\to x\in\Gamma$. Without loss of generality, we can assume that $\theta_n=p_n/|p_n|\to\theta$. By lower semicontinuity,
\[
\liminf_{n\to\infty}\bigl(H_\ddagger(x_n,r\theta_n)-H_\ddagger(x_n,0)\bigr)\ge H_\ddagger(x,r\theta)-H_\ddagger^*(x,0),
\]
and, since for $n$ sufficiently large $|p_n|>r$, by convexity,
\[
\liminf_{n\to\infty}\frac{H_\ddagger(x_n,p_n)-H_\ddagger(x_n,0)}{|p_n|}\ge\liminf_{n\to\infty}\frac{H_\ddagger(x_n,r\theta_n)-H_\ddagger(x_n,0)}r\ge\frac{H_\ddagger(x,r\theta)-H_\ddagger^*(x,0)}r.
\]
For $r$ sufficiently large, the right side is positive, so $H_\ddagger(x_n,p_n)\to\infty$. □
The next lemmas are applicable under Lipschitz assumptions on the generator.

Lemma 9.17. Suppose there exists a continuous function $K:\mathbb R\to[0,\infty)$ such that
\[
H_\dagger(x,p)-H_\ddagger(y,p)\le K\bigl(H_\ddagger(y,p)\bigr)(1+|p|)|x-y|. \tag{9.47}
\]
Then Condition 9.10.1 holds with
\[
\omega(\lambda)=\begin{cases}0&\lambda=1\\ \infty&\lambda\ne1.\end{cases}
\]
Proof. For $\mu_m$, $x_m$, and $y_m$ as in Condition 9.10.1, set $p_m=\mu_m(x_m-y_m)$. Replacing $x$, $y$, and $p$ in (9.47) by $x_m$, $y_m$, and $p_m$, the right side goes to zero and the condition follows. □
The conditions of Lemma 9.17 are easy to state but do not appear to cover a variety of examples for which some kind of Lipschitz continuity holds. The next set of conditions is more generally applicable. For simplicity, we state the results for $H_\dagger=H_\ddagger=H$.

Condition 9.18. $H$ is continuous and
\[
H(x,p)=\int_V\beta(x,v)\,\varphi\bigl(c(x,v)\cdot p;v\bigr)\,\gamma(dv),
\]
where
(1) $\gamma$ is a $\sigma$-finite measure on a measurable space $(V,\mathcal V)$.
(2) $\varphi:\mathbb R^m\times V\to\mathbb R$ is measurable, and $\varphi(q;v)$ is convex in $q$ for each fixed $v$.
(3) $c:\mathbb R^d\times V\to\mathbb M^{m\times d}$ (the $m\times d$ matrices) is measurable.
(4) There exists $\Phi:\mathbb R^+\times V\to\mathbb R^+$ such that for each $v\in V$, $\Phi(0;v)=0$, $\Phi(\cdot;v)$ is nondecreasing, and
\[
\varphi(q;v)\le\Phi(|q|;v),\qquad(q,v)\in\mathbb R^m\times V.
\]
(5) For each $r>0$, there exists $\delta_r>0$ such that for $\delta<\delta_r$,
\[
\kappa_r(\delta)\equiv\sup\Bigl\{\int_V\beta(x,v)\,\Phi\Bigl(\delta\frac{|c(x,v)-c(y,v)|}{|x-y|};v\Bigr)\gamma(dv):|x|,|y|<r,\ x\ne y\Bigr\}<\infty.
\]

Lemma 9.19. Suppose that Condition 9.18 holds with $\beta\equiv1$. Then Condition 9.11 holds.

Proof. Let $\lambda>1$. By convexity,
\[
\varphi\Bigl(\frac1\lambda c(x,v)\cdot p;v\Bigr)=\varphi\Bigl(\frac{\lambda-1}\lambda\,\frac{(c(x,v)-c(y,v))\cdot p}{\lambda-1}+\frac1\lambda c(y,v)\cdot p;v\Bigr)\le\Bigl(1-\frac1\lambda\Bigr)\varphi\Bigl(\frac{(c(x,v)-c(y,v))\cdot p}{\lambda-1};v\Bigr)+\frac1\lambda\varphi\bigl(c(y,v)\cdot p;v\bigr).
\]
Hence,
\[
\lambda\varphi\Bigl(\frac1\lambda c(x,v)\cdot p;v\Bigr)-\varphi\bigl(c(y,v)\cdot p;v\bigr)\le(\lambda-1)\varphi\Bigl(\frac{(c(x,v)-c(y,v))\cdot p}{\lambda-1};v\Bigr)\le(\lambda-1)\Phi\Bigl(\frac1{\lambda-1}\frac{|c(x,v)-c(y,v)|}{|x-y|}|p||x-y|;v\Bigr).
\]
Integrating with respect to $\gamma$ and taking appropriate limits in $x$ and $y$ to obtain $H^*$ and $H_*$, this inequality implies
\[
\lambda H^*\Bigl(x,\frac1\lambda p\Bigr)-H_*(y,p)\le(\lambda-1)\kappa_r\bigl((\lambda-1)^{-1}|p||x-y|\bigr)
\]
for $x,y,p$ satisfying $|x|,|y|<r$ and $(\lambda-1)^{-1}|p||x-y|<\delta_r$. Note that by the dominated convergence theorem, $\lim_{\delta\to0}\kappa_r(\delta)=0$.

Let $0<r<\infty$, $|x_m|,|y_m|<r$, $\mu_m\to\infty$, and $\mu_m|x_m-y_m|^2\to0$. For $\lambda>1$, there exists $m_0$ such that $m>m_0$ implies
\[
0<\frac1{\lambda-1}\mu_m|x_m-y_m|^2<\delta_r.
\]
Consequently,
\[
\liminf_{m\to\infty}\Bigl[\lambda H^*\Bigl(x_m,\frac{\mu_m(x_m-y_m)}\lambda\Bigr)-H_*\bigl(y_m,\mu_m(x_m-y_m)\bigr)\Bigr]\le\lim_{m\to\infty}(\lambda-1)\kappa_r\bigl((\lambda-1)^{-1}\mu_m|x_m-y_m|^2\bigr)=0.\quad\Box
\]

Lemma 9.20. Suppose that $H_0$ satisfies Condition 9.18 with $\varphi\ge0$ and $\beta$ such that
\[
\beta(x,v)-\beta(y,v)\le K(x,y)\beta(y,v),
\]
where $K$ is continuous and $K(x,x)=0$ for all $x\in\mathbb R^d$. Then $H_0$ satisfies Condition 9.10.1 with
\[
\omega(\lambda)=\begin{cases}0&\lambda>1\\ \infty&\lambda\le1.\end{cases}
\]
Let $b:\mathbb R^d\to\mathbb R^d$ be Lipschitz and define $H(x,p)=H_0(x,p)+b(x)\cdot p$. If $K$ satisfies
\[
K(x,y)\le K|x-y| \tag{9.48}
\]
for some $K<\infty$, then $H$ satisfies Condition 9.10.1 with the same $\omega$.

Proof. As in the previous lemma, by convexity,
\[
\varphi\Bigl(\frac1\lambda c(x,v)\cdot p;v\Bigr)\le\Bigl(1-\frac1\lambda\Bigr)\varphi\Bigl(\frac{(c(x,v)-c(y,v))\cdot p}{\lambda-1};v\Bigr)+\frac1\lambda\varphi\bigl(c(y,v)\cdot p;v\bigr).
\]
Hence,
\[
\lambda\beta(x,v)\varphi\Bigl(\frac1\lambda c(x,v)\cdot p;v\Bigr)-\beta(y,v)\varphi\bigl(c(y,v)\cdot p;v\bigr)\le(\lambda-1)\beta(x,v)\varphi\Bigl(\frac{(c(x,v)-c(y,v))\cdot p}{\lambda-1};v\Bigr)+\bigl(\beta(x,v)-\beta(y,v)\bigr)\varphi\bigl(c(y,v)\cdot p;v\bigr)
\]
\[
\qquad\le(\lambda-1)\beta(x,v)\Phi\Bigl(\frac1{\lambda-1}\frac{|c(x,v)-c(y,v)|}{|x-y|}|p||x-y|;v\Bigr)+K(x,y)\beta(y,v)\varphi\bigl(c(y,v)\cdot p;v\bigr).
\]
As before, we have
\[
\lambda(H_0)^*\Bigl(x,\frac1\lambda p\Bigr)-(H_0)_*(y,p)\le(\lambda-1)\kappa_r\bigl((\lambda-1)^{-1}|p||x-y|\bigr)+K(x,y)(H_0)_*(y,p),
\]
which implies
\[
\lambda H^*\Bigl(x,\frac1\lambda p\Bigr)-H_*(y,p)\le(\lambda-1)\kappa_r\bigl((\lambda-1)^{-1}|p||x-y|\bigr)+K(x,y)H_*(y,p)+\bigl(b(x)-b(y)-K(x,y)b(y)\bigr)\cdot p. \tag{9.49}
\]
Let $0<r<\infty$, $|x_m|,|y_m|<r$, $\mu_m\to\infty$, $\mu_m|x_m-y_m|^2\to0$, and
\[
\sup_m H_*\bigl(y_m,\mu_m(x_m-y_m)\bigr)<\infty.
\]
If $b=0$, or if $b$ is Lipschitz and $K$ satisfies (9.48), the last two terms on the right of (9.49), with $x$ and $y$ replaced by $x_m$ and $y_m$, go to zero. For $\lambda>1$, there exists $m_0$ such that $m>m_0$ implies
\[
0<\frac1{\lambda-1}\mu_m|x_m-y_m|^2<\delta_r.
\]
Consequently,
\[
\liminf_{m\to\infty}\Bigl[\lambda H^*\Bigl(x_m,\frac{\mu_m(x_m-y_m)}\lambda\Bigr)-H_*\bigl(y_m,\mu_m(x_m-y_m)\bigr)\Bigr]
\]
\[
\qquad\le\liminf_{m\to\infty}\Bigl[(\lambda-1)\kappa_r\bigl((\lambda-1)^{-1}\mu_m|x_m-y_m|^2\bigr)+K(x_m,y_m)H_*\bigl(y_m,\mu_m(x_m-y_m)\bigr)+\bigl(b(x_m)-b(y_m)-K(x_m,y_m)b(y_m)\bigr)\cdot\mu_m(x_m-y_m)\Bigr]=0,
\]
and the lemma follows. □
9.3. Bounded smooth domains in $\mathbb R^d$ with (possibly oblique) reflection

Let $\varphi\in C^2(\mathbb R^d)$ and suppose that $E\equiv\{x:\varphi(x)\ge0\}$ is compact, $E$ is the closure of $E^o=\{x:\varphi(x)>0\}$, and for $x\in\partial E=\{x:\varphi(x)=0\}$, $\nabla\varphi(x)\ne0$. We assume that $\nu:\mathbb R^d\to\mathbb R^d$, $|\nu(x)|=1$, and
\[
\kappa_0\equiv\inf_{x\in\partial E}\nu(x)\cdot\nabla\varphi(x)>0. \tag{9.50}
\]
With reference to (9.1) and (9.2) and the reflecting diffusion example in Section 10.5, we let $\widetilde H,H_b:\mathbb R^d\times\mathbb R^d\to\mathbb R$ be measurable with $H_b(x,p)\ge0$. For $f\in C^1(E)$, define $H_\dagger f(x)=H_\dagger(x,\nabla f(x))$ and $H_\ddagger f(x)=H_\ddagger(x,\nabla f(x))$, where
\[
H_\dagger(x,p)=\begin{cases}\widetilde H(x,p)+H_b(x,p)\bigl(0\vee\nu(x)\cdot p\bigr)&x\in\partial E\\ \widetilde H(x,p)&x\in E^o\end{cases}
\]
and
\[
H_\ddagger(x,p)=\begin{cases}\widetilde H(x,p)+H_b(x,p)\bigl(0\wedge\nu(x)\cdot p\bigr)&x\in\partial E\\ \widetilde H(x,p)&x\in E^o.\end{cases}
\]
Let $n(x)=\nabla\varphi(x)/|\nabla\varphi(x)|$. Then for $x\in\partial E$, $n(x)$ is the unit interior normal at $x$, and $n$ is $C^1$ in a neighborhood of $\partial E$. The smoothness of the boundary also implies that there exists a constant $C_1>0$ such that
\[
n(x)\cdot(x-y)<C_1|x-y|^2,\qquad\forall x\in\partial E,\ y\in E.
\]
Our analysis is an adaptation of Lions and Sznitman [81] and Lions [80]. In particular, the following lemma is essentially Lemma 4.1 of [81].

Lemma 9.21. Let $\varphi$ and $E$ be as above, and let $\nu$ be the restriction to $\partial E$ of a $C^1$ function (which we also denote by $\nu$). Then there exists a $d\times d$ symmetric-matrix-valued function $m(x)=((m_{ij}(x)))$ such that $m$ is positive definite and $C^1$, and $m(x)\nu(x)=n(x)$, $x\in\partial E$.
Proof. Let $\rho:[0,\infty)\to[0,\infty)$ be smooth, $\rho(0)=1$, and $\rho(r)=0$, $r\ge1$. Define the matrix $p(x)$ by
\[
p(x)=I-\rho(\varphi(x)/\delta)\,(\nu(x)\cdot n(x))^{-1}\,\nu(x)n^T(x),
\]
for $\delta>0$ small enough that $\nu$ and $n$ are differentiable and $\nu(x)\cdot n(x)$ is bounded away from zero on $\{x:0\le\varphi(x)\le\delta\}$. Note that for $x\in\partial E$, $p(x)\nu(x)=0$. Then
\[
m(x)=p^T(x)p(x)+\rho(\varphi(x)/\delta)\,(\nu(x)\cdot n(x))^{-1}\,n(x)n^T(x)
\]
satisfies the requirements of the lemma. □
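The two properties claimed for $m$ can be checked directly from the construction (a routine verification of the formulas above):

```latex
% On \partial E, \rho(\varphi(x)/\delta)=\rho(0)=1 and p(x)\nu(x)=0, hence
m(x)\nu(x)=p^T(x)\bigl(p(x)\nu(x)\bigr)
  +(\nu(x)\cdot n(x))^{-1}n(x)\bigl(n^T(x)\nu(x)\bigr)=0+n(x),
% since n^T(x)\nu(x)=\nu(x)\cdot n(x). Positive definiteness: for any w,
w^Tm(x)w=|p(x)w|^2+\rho(\varphi(x)/\delta)\,(\nu(x)\cdot n(x))^{-1}(n(x)\cdot w)^2\ge 0,
% with equality forcing p(x)w=0 and, where \rho>0, n(x)\cdot w=0; but then
p(x)w=w-\rho(\varphi(x)/\delta)(\nu(x)\cdot n(x))^{-1}(n(x)\cdot w)\,\nu(x)=w,
% so w=0 (and where \rho=0, p(x)=I, giving w=0 directly).
```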
Let $\overline f$ be a subsolution of (9.1) and $\underline f$ a supersolution of (9.2) with $h_1=h_2=h\in C(E)$. Define
\[
\overline f_\epsilon=\overline f+\epsilon\varphi,\qquad\underline f_\epsilon=\underline f-\epsilon\varphi,
\]
and let
\[
H_{\dagger,\epsilon}(x,p)=H_\dagger\bigl(x,p-\epsilon\nabla\varphi(x)\bigr),\qquad H_{\ddagger,\epsilon}(x,p)=H_\ddagger\bigl(x,p+\epsilon\nabla\varphi(x)\bigr),
\]
and
\[
H_{\dagger,\epsilon}f(x)\equiv H_{\dagger,\epsilon}(x,\nabla f(x)),\qquad H_{\ddagger,\epsilon}f(x)\equiv H_{\ddagger,\epsilon}(x,\nabla f(x)).
\]
Then the following lemma is immediate from the definitions.

Lemma 9.22. $\overline f_\epsilon$ is a subsolution of
\[
(I-\alpha H_{\dagger,\epsilon})f=h+\epsilon\varphi, \tag{9.51}
\]
and $\underline f_\epsilon$ is a supersolution of
\[
(I-\alpha H_{\ddagger,\epsilon})f=h-\epsilon\varphi. \tag{9.52}
\]

We will compare $\overline f_\epsilon$ and $\underline f_\epsilon$ by applying (9.28) of Lemma 9.7. Instead of $|x-y|^2$, we take
\[
\Psi(x,y)=(x-y)^T\,\frac{m(x)+m(y)}2\,(x-y). \tag{9.53}
\]
Define
\[
\nabla_1\Psi(x,y)\equiv\nabla_x\Psi(x,y),\qquad\nabla_2\Psi(x,y)\equiv\nabla_y\Psi(x,y).
\]
Note that in the following lemma the various sequences depend on $\epsilon$, but to simplify notation we do not specifically indicate this fact.

Lemma 9.23. Suppose $h\in C(E)$, $\alpha>0$, and $\epsilon>0$. Then for $\lambda_m\to\lambda>0$ and $0<\mu_m\to\infty$, there exist $x_0^m,x_1^m\in E$ such that
\[
\sup_{x\in E}\bigl(\lambda\overline f(x)-\underline f(x)\bigr)\le|\lambda-1|\,\|h\|+\epsilon(1+\lambda)\|\varphi\| \tag{9.54}
\]
\[
\qquad+\alpha\liminf_{m\to\infty}\Bigl\{\lambda_m\widetilde H^*\Bigl(x_0^m,\frac{p_m}{\lambda_m}-\epsilon\nabla\varphi(x_0^m)\Bigr)-\widetilde H_*\bigl(x_1^m,p_m+q_m+\epsilon\nabla\varphi(x_1^m)\bigr)\Bigr\},
\]
where
\[
p_m=\mu_m\nabla_1\Psi(x_0^m,x_1^m)
\]
and
\[
q_m=-\mu_m\bigl(\nabla_2\Psi(x_0^m,x_1^m)+\nabla_1\Psi(x_0^m,x_1^m)\bigr)=-\frac{\mu_m}2\Bigl((x_0^m-x_1^m)^T\bigl(\partial_1m(x_0^m)+\partial_1m(x_1^m)\bigr)(x_0^m-x_1^m),\dots,(x_0^m-x_1^m)^T\bigl(\partial_dm(x_0^m)+\partial_dm(x_1^m)\bigr)(x_0^m-x_1^m)\Bigr).
\]
In addition, $\lim_{m\to\infty}\mu_m\Psi(x_0^m,x_1^m)=0$, implying $\mu_m|x_0^m-x_1^m|^2\to0$ and $|q_m|\to0$.

Proof. As in Lemma 9.9, any viscosity subsolution is a strong subsolution and any viscosity supersolution is a strong supersolution. Applying Lemma 9.7, there exist $x_0^m,x_1^m\in E$ such that
\[
\sup_{x\in E}\bigl(\lambda\overline f(x)-\underline f(x)\bigr)\le|\lambda-1|\,\|h\|+\epsilon(1+\lambda)\|\varphi\| \tag{9.55}
\]
\[
\qquad+\alpha\liminf_{m\to\infty}\Bigl\{\lambda_mH_{\dagger,\epsilon}^*\Bigl(x_0^m,\frac{\mu_m}{\lambda_m}\nabla_1\Psi(x_0^m,x_1^m)\Bigr)-(H_{\ddagger,\epsilon})_*\bigl(x_1^m,-\mu_m\nabla_2\Psi(x_0^m,x_1^m)\bigr)\Bigr\}.
\]
Noting that
\[
\nabla_x\Psi(x,y)=\bigl(m(x)+m(y)\bigr)(x-y)+\frac12\Bigl((x-y)^T\partial_1m(x)(x-y),\dots,(x-y)^T\partial_dm(x)(x-y)\Bigr)
\]
and setting $\nu(x)=(\nu_1(x),\dots,\nu_d(x))^T$, there is a constant $C_2$ such that
\[
\nu(x)\cdot\nabla_x\Psi(x,y)=\nu(x)^T\bigl(m(x)+m(y)\bigr)(x-y)+\frac12\sum_{k=1}^d\nu_k(x)(x-y)^T\partial_km(x)(x-y)
\]
\[
\qquad\le2n(x)\cdot(x-y)+|m(y)-m(x)|\,|x-y|+\frac12\sum_{k=1}^d|\partial_km(x)|\,|x-y|^2\le C_2|x-y|^2.
\]
By Lemma 9.2, $\lim_{m\to\infty}\mu_m\Psi(x_0^m,x_1^m)=0$, so $\mu_m|x_0^m-x_1^m|^2\to0$. Therefore, when $x_0^m\in\partial E$, for $m$ sufficiently large we have
\[
\nu(x_0^m)\cdot\Bigl(\frac{\mu_m}{\lambda_m}\nabla_1\Psi(x_0^m,x_1^m)-\epsilon\nabla\varphi(x_0^m)\Bigr)\le\frac{C_2}{\lambda_m}\mu_m|x_0^m-x_1^m|^2-\epsilon\kappa_0<0.
\]
Consequently,
\[
H_{\dagger,\epsilon}^*\Bigl(x_0^m,\frac{\mu_m}{\lambda_m}\nabla_1\Psi(x_0^m,x_1^m)\Bigr)\le\widetilde H^*\Bigl(x_0^m,\frac{\mu_m}{\lambda_m}\nabla_1\Psi(x_0^m,x_1^m)-\epsilon\nabla\varphi(x_0^m)\Bigr).
\]
Similarly,
\[
(H_{\ddagger,\epsilon})_*\bigl(x_1^m,-\mu_m\nabla_2\Psi(x_0^m,x_1^m)\bigr)\ge\widetilde H_*\bigl(x_1^m,-\mu_m\nabla_2\Psi(x_0^m,x_1^m)+\epsilon\nabla\varphi(x_1^m)\bigr).
\]
The conclusion follows from these inequalities and (9.55). □
The following lemma extends the comparison principle for nondegenerate equations, Lemma 9.16, to equations with reflecting boundary conditions.
Lemma 9.24. Let $h_1=h_2=h\in C(E)$ and $\alpha>0$. Suppose that $\widetilde H$ is continuous and satisfies the nondegeneracy condition (9.45). Then the comparison principle holds for subsolutions of (9.1) and supersolutions of (9.2).

Proof. Let $\lambda_m\equiv1$ in (9.54), and let $\{x_0^m\}$ and $\{x_1^m\}$ be as in Lemma 9.23. Then (9.16) gives
\[
\limsup_{m\to\infty}\widetilde H\bigl(x_1^m,p_m+q_m+\epsilon\nabla\varphi(x_1^m)\bigr)\le\limsup_{m\to\infty}(H_{\ddagger,\epsilon})_*\bigl(x_1^m,p_m+q_m\bigr)<\alpha^{-1}\bigl(\|\underline f\|+\|h\|\bigr),
\]
implying $\sup_m|p_m+q_m|<\infty$, with a bound uniform in $0<\epsilon<1$. Since $|q_m|\to0$, $|x_0^m-x_1^m|\to0$, and $\widetilde H$ is continuous, taking a subsequence if necessary we have $\lim_{m\to\infty}x_0^m=\lim_{m\to\infty}x_1^m=x_\epsilon$ and $p_m\to p_\epsilon$ satisfying
\[
\sup_{x\in E}\bigl(\overline f(x)-\underline f(x)\bigr)\le2\epsilon\|\varphi\|+\alpha\bigl(\widetilde H(x_\epsilon,p_\epsilon-\epsilon\nabla\varphi(x_\epsilon))-\widetilde H(x_\epsilon,p_\epsilon+\epsilon\nabla\varphi(x_\epsilon))\bigr).
\]
Noting that $p_\epsilon$ is uniformly bounded in $\epsilon$ and letting $\epsilon\to0$, we have the result. □

In specific cases, the nondegeneracy assumption in the above lemma can be replaced by Lipschitz continuity of the coefficients. As an example, we consider
\[
\widetilde H(x,p)=\frac12|\sigma(x)^Tp|^2+b(x)\cdot p,
\]
where $\sigma(x)$ is a $d\times d$ matrix and $b(x)\in\mathbb R^d$.

Lemma 9.25. Suppose $h_1=h_2=h\in C(E)$ and $\alpha>0$. Let $\sigma$, $b$ be Lipschitz continuous. Then the comparison principle holds for subsolutions of (9.1) and supersolutions of (9.2).

Proof. Let $\lambda>1$, $x,y\in E$, $p,q,r,s\in\mathbb R^d$, and $|r|,|s|\le\delta$. Then
\[
\lambda\widetilde H\Bigl(x,\frac p\lambda-r\Bigr)-\widetilde H(y,p+q+s) \tag{9.56}
\]
\[
\le\frac{\lambda^{-1}}2\bigl|\bigl(\sigma(x)-\sigma(y)+\sigma(y)\bigr)^T(p-\lambda r)\bigr|^2-\frac12\bigl|\sigma^T(y)(p+q+s)\bigr|^2+|b(x)-b(y)||p|+\sup_{z\in E}|b(z)|\bigl(|q|+(1+\lambda)\delta\bigr)
\]
\[
\le(2\lambda)^{-1}\Bigl(L_\sigma^2|x-y|^2|p|^2+\lambda^2L_\sigma^2|x-y|^2\delta^2+2\lambda L_\sigma^2|x-y|^2|p|\delta+|\sigma^T(y)p|^2+\lambda^2\sup_z|\sigma(z)|^2\delta^2
\]
\[
\qquad+2\lambda|\sigma^T(y)p|\sup_z|\sigma^T(z)|\delta+2L_\sigma|x-y||p|\,|\sigma^T(y)p|\Bigr)-\frac12|\sigma^T(y)p|^2+|\sigma^T(y)p|\,|\sigma^T(y)(q+s)|
\]
\[
\qquad+L_b|x-y||p|+\sup_{z\in E}|b(z)|\bigl(|q|+(1+\lambda)\delta\bigr),
\]
where $L_\sigma,L_b$ are the Lipschitz constants for $\sigma$ and $b$. We next use the above inequality to estimate the right side of (9.54) with $\lambda_m=\lambda$. Note that for each
$\epsilon>0$, $|x_0^m-x_1^m|\to0$, $|q_m|\to0$, and $|x_0^m-x_1^m||p_m|\to0$, so we have
\[
\sup_{x\in E}\bigl(\lambda\overline f(x)-\underline f(x)\bigr)-|\lambda-1|\,\|h\|
\]
\[
\le\alpha\limsup_{\epsilon\to0+}\limsup_{m\to\infty}\Bigl[\frac1{2\lambda}|\sigma^T(x_1^m)p_m|^2+\epsilon|\sigma^T(x_1^m)p_m|\sup_z|\sigma^T(z)|\sup_z|\nabla\varphi(z)|+\frac{L_\sigma}\lambda|x_0^m-x_1^m||p_m|\,|\sigma^T(x_1^m)p_m|
\]
\[
\qquad-\frac12|\sigma^T(x_1^m)p_m|^2+|\sigma^T(x_1^m)p_m|\,\bigl|\sigma^T(x_1^m)(q_m+\epsilon\nabla\varphi(x_1^m))\bigr|\Bigr]
\]
\[
=\alpha\limsup_{\epsilon\to0+}\limsup_{m\to\infty}\Bigl[-\frac12\frac{\lambda-1}\lambda|\sigma^T(x_1^m)p_m|^2+|\sigma^T(x_1^m)p_m|\Bigl\{\epsilon\sup_z|\sigma(z)|\sup_z|\nabla\varphi(z)|+\frac{L_\sigma}\lambda|x_0^m-x_1^m||p_m|+\bigl|\sigma^T(x_1^m)(q_m+\epsilon\nabla\varphi(x_1^m))\bigr|\Bigr\}\Bigr]
\]
\[
\le\alpha\frac\lambda{2(\lambda-1)}\limsup_{\epsilon\to0+}\limsup_{m\to\infty}\Bigl(\epsilon\sup_z|\sigma(z)|\sup_z|\nabla\varphi(z)|+\frac{L_\sigma}\lambda|x_0^m-x_1^m||p_m|+\bigl|\sigma^T(x_1^m)(q_m+\epsilon\nabla\varphi(x_1^m))\bigr|\Bigr)^2=0.
\]
The last inequality follows from the fact that
\[
-au^2+2bu\le\frac{b^2}a,\qquad a>0.
\]
Sending $\lambda\to1$, we obtain the result. □
9.4. Conditions for infinite dimensional state space

In Examples 1.12, 1.13 and 1.14, the limit operators are first order differential operators in infinite dimensions. Next, we develop estimates useful for applying Lemma 9.3 to these situations. In the examples, $E$ is either a subspace of some function space or a space of probability measures. Therefore, we use $\rho,\gamma$ to denote typical elements of $E$ and let $x$ denote an element of the set on which the functions are defined. More motivation for the examples considered here can be found in Feng and Katsoulakis [43]. A distance function on $E$ is a metric defined on $E\times E$ that is not necessarily equivalent to the original metric $r$. "Convergence" always refers to convergence in the original metric $r$ unless otherwise noted.

Condition 9.26.
(1) There exists a nonnegative, lower semicontinuous function $E\in M(E)$, $E\not\equiv\infty$, that has compact level sets, a lower semicontinuous distance function $d(\rho,\gamma)$, and $a_0,b_0>0$ such that for $\rho_0,\gamma_0\in E$ satisfying $E(\rho_0)+E(\gamma_0)<\infty$, $a>a_0$, and $0<b<b_0$,
\[
a\,d^2(\cdot,\gamma_0)+bE(\cdot)\in\mathcal D(H_\dagger) \tag{9.57}
\]
and
\[
-a\,d^2(\rho_0,\cdot)-bE(\cdot)\in\mathcal D(H_\ddagger). \tag{9.58}
\]
(2) For $0<\kappa<b_0$, there exist $\omega_\kappa:[0,\infty)\to[0,\infty]$ satisfying
\[
\liminf_{\kappa\to0+}\liminf_{r\to0+}\omega_\kappa(r)=0, \tag{9.59}
\]
and for $m(1-\kappa)>a_0$ and $\rho,\gamma\in E$,
\[
\frac1{1-\kappa}\Bigl(H_\dagger\bigl(m(1-\kappa)d^2(\cdot,\gamma)+\kappa E\bigr)\Bigr)^*(\rho)-\frac1{1+\kappa}\Bigl(H_\ddagger\bigl(-m(1+\kappa)d^2(\rho,\cdot)-\kappa E(\cdot)\bigr)\Bigr)_*(\gamma)\le\omega_\kappa\bigl(m\,d^2(\rho,\gamma)\bigr).
\]

Condition 9.27.
(1) $\overline f$ and $\underline f$ are $d$-continuous in the sense that $\lim_{n\to\infty}d(\rho_n,\rho_0)=0$ implies
\[
\lim_{n\to\infty}\overline f(\rho_n)=\overline f(\rho_0)\quad\text{and}\quad\lim_{n\to\infty}\underline f(\rho_n)=\underline f(\rho_0).
\]
(2) For each $\rho\in E$, there exist $\rho_n\in E$ with $E(\rho_n)<\infty$ such that $\lim_{n\to\infty}d(\rho_n,\rho)=0$; that is, $\{\rho:E(\rho)<\infty\}$ is dense in $E$ in the topology determined by $d$.

Theorem 9.28. Assume that Conditions 9.26 and 9.5 hold with $V=E$. For $\alpha>0$ and $h_1,h_2\in C_b(E)$, let $\overline f$ be a subsolution of (9.1) and $\underline f$ be a supersolution of (9.2). Then
\[
\sup_{\rho\in\{\rho:E(\rho)<\infty\}}\bigl(\overline f(\rho)-\underline f(\rho)\bigr)\le\sup_\rho\bigl(h_1(\rho)-h_2(\rho)\bigr). \tag{9.60}
\]
($\rho>0$, $\int_O\rho(dx)=1$ in Example 1.14).
For consistency of notation in these examples, we only use $H_\dagger$, $H_\ddagger$ to denote limit operators arising in the application of Theorem 7.17. The comparison principles we prove in the rest of this section, however, are not for these $H_\dagger$, $H_\ddagger$. $\widehat H_\dagger$ and $\widehat H_\ddagger$ are modifications of $H_\dagger$ and $H_\ddagger$ for which the comparison principle is easier to prove. Chapter 13 will discuss the large deviation principle for these examples, and we will show that the comparison principle for $\widehat H_\dagger$ and $\widehat H_\ddagger$ implies the comparison principle for the corresponding $H_\dagger$ and $H_\ddagger$.
Example 9.29. (Reaction-diffusion equation, Example 1.12.) Let $E=H^0(O)=L^2(O)$, where $O=[0,1)^d$ with periodic boundary. For $\beta=1,2,\dots$, let $H^\beta(O)$ be the completion of $C^\infty(O)$ under the norm $\|\rho\|_\beta^2=\langle(I-\Delta)^\beta\rho,\rho\rangle$ and $H^{-\beta}(O)$ be the completion of $H^0(O)$ under the norm $\|\rho\|_{-\beta}^2=\langle(I-\Delta)^{-\beta}\rho,\rho\rangle$. The inner product $\langle\rho,\gamma\rangle$ extends consistently to $\rho\in H^{-\beta}(O)$ and $\gamma\in H^\beta(O)$, and $|\langle\rho,\gamma\rangle|\le\|\rho\|_{-\beta}\|\gamma\|_\beta$. For each $\rho\in E$, we take
\[
\langle u,v\rangle_\rho=\langle u,v\rangle,\qquad u\in H^{-\beta}(O),\ v\in H^\beta(O).
\]
The intuition is that at each point $\rho\in E$, we take $L^2(O)$ as the tangent space and allow the possibility that the gradient only exists in "smooth directions." Let $\|\cdot\|=\|\cdot\|_0=\|\cdot\|_{L^2(O)}$.

Definition 9.30 (Gradient). Let $f:E\to\mathbb R$ and $\rho\in E$. Then, if it exists, $\operatorname{grad}f(\rho)$ is the unique element in $\cup_\beta H^{-\beta}(O)$ such that for each $p\in C^\infty(O)$,
\[
\lim_{t\to0}\frac1t\bigl(f(\rho+tp)-f(\rho)\bigr)=\langle\operatorname{grad}f(\rho),p\rangle.
\]
With this definition of $\operatorname{grad}$, $H$ given by (1.33) is of the form
\[
Hf(\rho)=\langle\Delta\rho-F'(\rho),\operatorname{grad}f(\rho)\rangle+\frac12\|\operatorname{grad}f(\rho)\|^2. \tag{9.63}
\]
Define
\[
E(\rho)=\int_O\Bigl\{\frac12|\nabla\rho(x)|^2+F(\rho(x))\Bigr\}dx. \tag{9.64}
\]
Then $\operatorname{grad}E(\rho)=-\Delta\rho+F'(\rho)$ for all $\rho$ with $E(\rho)<\infty$, and we see that (9.63) has the form (9.62). Let $\{e_1,e_2,\dots\}\subset C^\infty(O)$ be the orthonormal basis of eigenfunctions for $-\Delta$ in $L^2(O)$ (see Appendix C). Define
(9.65) G(ρ, γ) = Σ_{k=1}^∞ 2^{−k} ⟨ρ − γ, e_k⟩².
Then, fixing γ0,
grad G(ρ, γ0) = Σ_{k=1}^∞ 2^{−k+1} ⟨ρ − γ0, e_k⟩ e_k,
and ‖grad G(ρ, γ0)‖ < ∞.
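The series defining G truncates well numerically. A minimal sketch, assuming the standard Fourier eigenbasis of −∆ on [0, 1) with periodic boundary; the grid, truncation level K, and sample densities are illustrative choices, not from the text:

```python
import numpy as np

# Truncated version of G(rho, gamma) = sum_k 2^{-k} <rho - gamma, e_k>^2,
# using an orthonormal trigonometric eigenbasis of -Laplacian on [0, 1).
x = np.linspace(0.0, 1.0, 2000, endpoint=False)
dx = x[1] - x[0]

def basis(k):
    if k == 1:
        return np.ones_like(x)
    j = k // 2
    return (np.sqrt(2.0) * np.cos(2*np.pi*j*x) if k % 2 == 0
            else np.sqrt(2.0) * np.sin(2*np.pi*j*x))

def G(rho, gamma, K=40):
    diff = rho - gamma
    return sum(2.0**(-k) * (np.sum(diff * basis(k)) * dx)**2
               for k in range(1, K + 1))

rho   = 1.0 + 0.3*np.cos(2*np.pi*x)   # illustrative densities
gamma = 1.0 + 0.1*np.sin(4*np.pi*x)
print(G(rho, rho))                    # 0.0 on the diagonal
print(G(rho, gamma) > 0.0)            # True
```

Note that G(ρ, γ) = G(γ, ρ), since each coefficient enters squared; this is why G can serve as the symmetric "gauge" family in the comparison arguments below.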
184
9. THE COMPARISON PRINCIPLE
Let d(ρ, γ) = ‖ρ − γ‖, a, c ≥ 0, 0 < b < 1, and C ∈ R. We extend the definition of H in (9.63), according to (9.62), to test functions of the form
(9.66) f0(ρ) = a d²(ρ, γ0) + b E(ρ) + c G(ρ, ρ0) + C
for subsolutions, and to test functions of the form
f1(γ) = −a d²(ρ0, γ) − b E(γ) − c G(γ, γ0) + C
for supersolutions. We define
(9.67) Ĥ† f0(ρ) = 2a(1 − b)⟨∆ρ − F′(ρ), ρ − γ0⟩ + 2a²‖ρ − γ0‖² + ((1/2)b² − b)‖∆ρ − F′(ρ)‖² + c(1 − b)⟨grad G(ρ, ρ0), ∆ρ − F′(ρ)⟩ + 2^{−1}c²‖grad G(ρ, ρ0)‖² + 2ac⟨grad G(ρ, ρ0), ρ − γ0⟩
if ∆ρ − F′(ρ) ∈ L2(O), and Ĥ† f0(ρ) = −∞ otherwise, and
(9.68) Ĥ‡ f1(γ) = −2a(1 + b)⟨∆γ − F′(γ), γ − ρ0⟩ + 2a²‖γ − ρ0‖² + ((1/2)b² + b)‖∆γ − F′(γ)‖² − c(1 + b)⟨grad G(γ, γ0), ∆γ − F′(γ)⟩ + 2^{−1}c²‖grad G(γ, γ0)‖² + 2ac⟨grad G(γ, γ0), γ − ρ0⟩
if ∆γ − F′(γ) ∈ L2(O), and Ĥ‡ f1(γ) = ∞ otherwise.
The "free energy" E introduces an important higher order term ‖∆ρ − F′(ρ)‖² that ensures Ĥ† f0 is upper semicontinuous and Ĥ‡ f1 is lower semicontinuous.

Theorem 9.31. Let h1, h2 ∈ Cb(E) and α > 0. Suppose f̄ is a subsolution of (I − αĤ†)f = h1 and f is a supersolution of (I − αĤ‡)f = h2. Then
(9.69) sup_{ρ: E(ρ)<∞} (f̄(ρ) − f(ρ)) ≤ sup_ρ (h1(ρ) − h2(ρ)).
If, in addition, f̄ and f are continuous, then
(9.70) sup_{ρ∈E} (f̄(ρ) − f(ρ)) ≤ sup_ρ (h1(ρ) − h2(ρ)).

Proof. We apply Theorem 9.28. For m > 0 and 0 < κ < 1, we have
(9.71) ( (1 − κ)^{−1} Ĥ†(m(1 − κ)d²(·, γ) + κE) )*(ρ) − ( (1 + κ)^{−1} Ĥ‡(−m(1 + κ)d²(ρ, ·) − κE(·)) )_*(γ)
= 2m⟨(∆ρ − F′(ρ)) − (∆γ − F′(γ)), ρ − γ⟩ − 2κm⟨(∆ρ − F′(ρ)) + (∆γ − F′(γ)), ρ − γ⟩ − 4κm²‖γ − ρ‖²
  − (κ(2 − κ)/(2(1 − κ)))‖∆ρ − F′(ρ)‖² − (κ(2 + κ)/(2(1 + κ)))‖∆γ − F′(γ)‖²
≤ −2m⟨F′(ρ) − F′(γ), ρ − γ⟩ − (3κ/8)‖∆ρ − F′(ρ)‖² − (3κ/8)‖∆γ − F′(γ)‖²
≤ 2mL_F ‖ρ − γ‖²,
where the first inequality follows from the inequalities ⟨∆(ρ − γ), ρ − γ⟩ ≤ 0 and
−κm⟨q, ρ − γ⟩ − 2κm²‖ρ − γ‖² ≤ (κ/8)‖q‖²
(expand ‖√2 m(ρ − γ) + 2^{−3/2}q‖²), and the second inequality from ‖F′(ρ) − F′(γ)‖ ≤ L_F‖ρ − γ‖. Condition 9.26 follows. Finally, since {ρ : E(ρ) < ∞} is dense in E, if f̄ and f are continuous, (9.69) implies (9.70).

Example 9.32. (Discrete Cahn-Hilliard equation, Example 1.13.) In this example, Σ_{x∈Λm} Y_n(t, x) is constant in time. Without loss of generality, we can assume that this conserved quantity is zero. The limiting state space then becomes
E = H̄0(O) = { ρ ∈ L2(O) : ∫_O ρ(x) dx = 0 }.
For β = 1, 2, ..., define H̄β(O) to be the completion of H̄0(O) ∩ C∞(O) under the norm ‖ρ‖²_β = (−1)^β⟨∆^β ρ, ρ⟩_0. Note that
⟨ρ, γ⟩_0 = ⟨ρ, γ⟩ = ∫_O ρ(x)γ(x) dx,
but we use the subscript to emphasize the differences among the inner products. The inverse Laplacian is well-defined on E, and we define H̄−β(O) to be the completion of E under the norm given by ‖ρ‖²_{−β} = (−1)^β⟨ρ, ∆^{−β}ρ⟩_0, that is,
(9.72) ⟨ρ, γ⟩_{−β} = (−1)^β⟨∆^{−β}ρ, γ⟩_0 = (−1)^β⟨ρ, ∆^{−β}γ⟩_0
if ρ, γ ∈ H̄0(O). The norm on H̄−1(O) can also be represented as
‖u‖²_{−1} = sup_{p∈C∞(O)} { 2∫_O up dx − ∫_O |∇p|² dx }.
The additional fact that we will need regarding these spaces is that for ρ, γ ∈ H̄0(O),
|⟨ρ, γ⟩_0| ≤ ‖ρ‖_{−β}‖γ‖_β,
where the right side is infinite if γ ∉ H̄β(O). We view E as a manifold for which the tangent space at ρ ∈ E is given by H̄−1(O) with ⟨u, v⟩_ρ = ⟨u, v⟩_{−1}.
We will use the following definition of gradient:

Definition 9.33 (Gradient). Let f : E → R and ρ ∈ E. Then, if it exists, grad f(ρ) is the unique element in ∪_β H̄−β(O) such that for each p ∈ C∞(O)
lim_{t→0} (1/t)( f(ρ − t∆p) − f(ρ) ) = ⟨grad f(ρ), p⟩.

Then for f(ρ) = ‖ρ − γ‖², grad f(ρ) = −2∆(ρ − γ); for f(ρ) = d²(ρ, γ) ≡ ‖ρ − γ‖²_{−1}, grad f(ρ) = 2(ρ − γ) and ⟨grad d²(ρ, γ), grad d²(ρ, γ)⟩_{−1} = 4‖ρ − γ‖²_{−1}; for E given by (9.64) and ρ satisfying E(ρ) < ∞,
(9.73) grad E(ρ) = −∇·(∇(−∆ρ + F′(ρ))) = ∆²ρ − ∆F′(ρ)
and ⟨grad E(ρ), grad E(ρ)⟩_{−1} = ‖∆(∆ρ − F′(ρ))‖²_{−1}; and for f of the form (1.31),
grad f(ρ) = −∇·( ∇(δf/δρ) ) = −∆(δf/δρ).
Consequently, H in (1.42) can be written in the form (9.62):
(9.74) Hf(ρ) = ⟨−grad E(ρ), grad f(ρ)⟩_{−1} + (1/2)‖grad f(ρ)‖²_{−1} = ⟨∆ρ − F′(ρ), grad f(ρ)⟩ + (1/2)‖grad f(ρ)‖²_{−1},
where the second expression follows by (9.72).
We next introduce Ĥ† and Ĥ‡, which can be viewed as extending H to a new class of test functions. Take d(ρ, γ) = ‖ρ − γ‖_{−1} in Condition 9.26, and m > 0, 0 < κ < 1. Let
(9.75) f0(ρ) = m(1 − κ)‖ρ − γ‖²_{−1} + κE(ρ)
and
(9.76) f1(γ) = −m(1 + κ)‖ρ − γ‖²_{−1} − κE(γ).
If ρ, γ satisfy E(ρ) < ∞ and E(γ) < ∞,
grad f0(ρ) = 2(1 − κ)m(ρ − γ) + κ∆(∆ρ − F′(ρ)),
grad f1(γ) = −2(1 + κ)m(γ − ρ) − κ∆(∆γ − F′(γ)).
Therefore, we define
(9.77) Ĥ† f0(ρ) = −2m(1 − κ)²⟨∆(∆ρ − F′(ρ)), ρ − γ⟩_{−1} + 2(1 − κ)²m²‖ρ − γ‖²_{−1} − (κ − κ²/2)‖∆(∆ρ − F′(ρ))‖²_{−1}
if ‖∆(∆ρ − F′(ρ))‖²_{−1} < ∞, and Ĥ† f0(ρ) = −∞ otherwise. We also define
(9.78) Ĥ‡ f1(γ) = −2m(1 + κ)²⟨∆(∆γ − F′(γ)), ρ − γ⟩_{−1} + 2(1 + κ)²m²‖ρ − γ‖²_{−1} + (κ + κ²/2)‖∆(∆γ − F′(γ))‖²_{−1}
if ‖∆(∆γ − F′(γ))‖²_{−1} < ∞, and define Ĥ‡ f1(γ) = ∞ otherwise. Because of the higher order perturbation term, for any 0 < κ ≤ 1, Ĥ† f0 is upper semicontinuous and Ĥ‡ f1 is lower semicontinuous in E. As in the previous example, we can introduce the G in (9.65) into the test functions f0, f1, and modify the definitions of Ĥ† f0 and Ĥ‡ f1 correspondingly.

Theorem 9.34. Let h1, h2 ∈ Cb(E) and α > 0. Suppose f̄ is a subsolution of (I − αĤ†)f = h1 and f is a supersolution of (I − αĤ‡)f = h2. Then
(9.79)
sup_{ρ: E(ρ)<∞} (f̄(ρ) − f(ρ)) ≤ sup_ρ (h1(ρ) − h2(ρ)).
If, in addition, f̄ and f are continuous, then
(9.80) sup_{ρ∈E} (f̄(ρ) − f(ρ)) ≤ sup_ρ (h1(ρ) − h2(ρ)).

Proof. We apply Theorem 9.28. For m > 0 and 0 < κ < 1, we have
(9.81) ( (1 − κ)^{−1} Ĥ†(m(1 − κ)d²(·, γ) + κE) )*(ρ) − ( (1 + κ)^{−1} Ĥ‡(−m(1 + κ)d²(ρ, ·) − κE(·)) )_*(γ)
= −2m⟨∆(∆ρ − F′(ρ)) − ∆(∆γ − F′(γ)), ρ − γ⟩_{−1} + 2κm⟨∆(∆ρ − F′(ρ)) + ∆(∆γ − F′(γ)), ρ − γ⟩_{−1} − 4κm²‖γ − ρ‖²_{−1}
  − (κ(2 − κ)/(2(1 − κ)))‖∆²ρ − ∆F′(ρ)‖²_{−1} − (κ(2 + κ)/(2(1 + κ)))‖∆²γ − ∆F′(γ)‖²_{−1}
≤ 2m( ⟨∆(F′(ρ) − F′(γ)), ρ − γ⟩_{−1} − ‖∆(ρ − γ)‖²_{−1} ) − (3κ/8)‖∆²ρ − ∆F′(ρ)‖²_{−1} − (3κ/8)‖∆²γ − ∆F′(γ)‖²_{−1}
≤ 2m( L_F‖ρ − γ‖²_0 − ‖ρ − γ‖²_1 )
≤ 2m( L_F‖ρ − γ‖_{−1}‖ρ − γ‖_1 − ‖ρ − γ‖²_1 )
≤ (1/2)L²_F m‖ρ − γ‖²_{−1},
where the first inequality follows from the inequality
κm⟨q, ρ − γ⟩_{−1} − 2κm²‖ρ − γ‖²_{−1} ≤ (κ/8)‖q‖²_{−1}
(expand ‖√2 m(ρ − γ) − 2^{−3/2}q‖²_{−1}), and the second inequality from
|⟨∆(F′(ρ) − F′(γ)), ρ − γ⟩_{−1}| = |⟨F′(ρ) − F′(γ), ρ − γ⟩_0| ≤ ‖F′(ρ) − F′(γ)‖_0‖ρ − γ‖_0 ≤ L_F‖ρ − γ‖²_0.
Finally, since {ρ : E(ρ) < ∞} is dense in E, under the continuity assumption, (9.79) implies (9.80).

Example 9.35. (Weakly Interacting Diffusions, Example 1.14)
Let E = P2(Rd) be the set of all Borel probability measures on Rd with finite second moment. Let r = d be the 2-Wasserstein metric on E given in Definition D.13 by
(9.82) d²(ρ, γ) = inf_{π∈Π(ρ,γ)} ∫ |x − y|² π(dx, dy),
where
Π(ρ, γ) = { π ∈ P(Rd × Rd) : π(dx × Rd) = ρ(dx), π(Rd × dy) = γ(dy) }.
The infimum in (9.82) is achieved, and we let Π^opt(ρ, γ) denote the subset of π ∈ Π(ρ, γ) such that d²(ρ, γ) = ∫ |x − y|² π(dx, dy). (E, r) is complete and separable (Lemma D.17). Throughout this example, with a slight abuse of notation, we write ρ(dx) = ρ(x)dx if the Lebesgue density exists.
Following Otto [92], we formally view E as a manifold. With reference to (9.62), the form of H in (1.49) suggests a form for the inner product on the tangent space at ρ0 ∈ E and the definition of the gradient of a function on E (Definition 9.36). Each p ∈ Cc∞(Rd) determines a "smooth" direction in E in the following way. Let z_p(t, x) satisfy ż_p = ∇p(z_p) with z_p(0, x) = x, and for ρ0 ∈ E, define ρ^p(t) by
(9.83) ⟨ρ^p(t), g⟩ = ∫_{Rd} g(x) ρ^p(t, dx) = ∫_{Rd} g(z_p(t, x)) ρ0(dx),  ∀g ∈ B(Rd);
that is, ρ^p is a weak solution of
(9.84) ∂_t ρ^p + ∇·(ρ^p ∇p) = 0,  ρ^p(0) = ρ0.
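For intuition about (9.82), the 2-Wasserstein distance between empirical measures on the line can be computed exactly: in one dimension the monotone (sorted) coupling is optimal, a standard fact. A minimal sketch; the sample values below are illustrative:

```python
import numpy as np
from itertools import permutations

def w2_empirical(xs, ys):
    """2-Wasserstein distance between two empirical measures on R with
    equally many atoms; the sorted pairing realizes the optimal coupling."""
    xs, ys = np.sort(xs), np.sort(ys)
    return np.sqrt(np.mean((xs - ys)**2))

# Translating a measure by c gives W2 = |c|.
xs = np.array([0.0, 1.0, 2.0])
print(w2_empirical(xs, xs + 3.0))              # 3.0

# Brute force over all couplings (permutation matrices are the extreme
# points of Pi(rho, gamma) here) agrees with the sorted pairing.
ys = np.array([0.5, -1.0, 4.0])
brute = min(np.sqrt(np.mean((xs - np.array(p))**2))
            for p in permutations(ys))
print(abs(brute - w2_empirical(xs, ys)) < 1e-12)   # True
```

The brute-force check works because the objective is linear in π, so the infimum over Π(ρ, γ) is attained at an extreme point, i.e. a permutation.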
Definition 9.36 (Gradient). Let f : E → R and ρ0 ∈ E. The gradient of f at ρ0, grad f(ρ0), exists if and only if it can be identified as the unique element in the space of Schwartz distributions D′(Rd) (see Section D.1) such that for each p ∈ Cc∞(Rd) and ρ^p(t) satisfying (9.84),
lim_{t→0} ( f(ρ^p(t)) − f(ρ0) )/t = ⟨grad f(ρ0), p⟩.
For example, let p1, ..., pl ∈ Cc²(Rd), and define
p = (p1, ..., pl),  ⟨p, ρ⟩ = ( ∫_{Rd} p1 dρ, ..., ∫_{Rd} pl dρ ).
Then for
(9.85) f(ρ) = φ(⟨p, ρ⟩),  φ ∈ C²(R^l),
⟨grad f(ρ0), p⟩ = Σ_{i=1}^l ∂_i φ(⟨p, ρ0⟩) ∫_{Rd} ∇p_i(x) · ∇p(x) ρ0(dx) = ∫_{Rd} ∇(δf/δρ)(x) · ∇p(x) ρ0(dx) = ⟨δf/δρ, p⟩_{1,ρ0},
where, as before,
δf/δρ = Σ_{i=1}^l ∂_i φ(⟨p, ρ⟩) p_i.
With this definition of gradient, we consider the tangent space at a given point ρ ∈ E to be the weighted Sobolev space
H^{−1}_ρ(Rd) ≡ { u ∈ D′(Rd) : ‖u‖²_{−1,ρ} ≡ sup_{p∈Cc∞(Rd)} ( 2⟨u, p⟩ − ∫_{Rd} |∇p|² dρ ) < ∞ }.
Appendix D.5 discusses the properties of this space. H^{−1}_ρ(Rd) is a Hilbert space, and we denote its inner product by ⟨·, ·⟩_{−1,ρ}. Note that for f given by (9.85), we have
(9.86) ‖grad f(ρ)‖²_{−1,ρ} = sup_{p∈Cc∞(Rd)} ∫_{Rd} ( 2∇(δf/δρ)(x) − ∇p(x) ) · ∇p(x) ρ(dx) = ∫_{Rd} |∇(δf/δρ)|² dρ.
Next, we introduce a number of functionals on E and identify their gradients.

Lemma 9.37. For γ, ρ ∈ E, let f(ρ) = d²(ρ, γ). Then for ρ(t) satisfying (9.84),
(9.87) lim_{t→0} ( f(ρ(t)) − f(ρ0) )/t = 2 inf_{π∈Π^opt(ρ0,γ)} ∫ ∇p(x) · (x − y) π(dx, dy).

Remark 9.38. If ρ0 is absolutely continuous, then π ∈ Π^opt(ρ0, γ) is unique and satisfies π_{ρ0,γ}(dx, dy) = ρ0(x)dx δ_{T_{ρ0,γ}(x)}(dy), where T_{ρ0,γ} is defined in (D.29). In this case, the right side of (9.87) is linear in p and determines an element of D′(Rd), and
(9.88) ⟨grad f(ρ0), p⟩ = 2∫ ∇p(x) · (x − T_{ρ0,γ}(x)) ρ0(x) dx.
See Theorem D.25.

Proof. Since x → z_p(t, x) is invertible,
t^{−1}( d²(ρ(t), γ) − d²(ρ0, γ) ) = t^{−1}( inf_{π∈Π(ρ0,γ)} ∫ |z_p(t, x) − y|² π(dx, dy) − inf_{π∈Π(ρ0,γ)} ∫ |x − y|² π(dx, dy) )
≥ t^{−1} ∫ |z_p(t, x) − x|² ρ0(dx) + 2 ∫ t^{−1}(z_p(t, x) − x) · (x − y) π_t(dx, dy),
where π_t is an element of Π(ρ0, γ) that achieves the first infimum. Since all limit points of π_t as t → 0 lie in Π^opt(ρ0, γ), we have
(9.89) liminf_{t→0} t^{−1}( d²(ρ(t), γ) − d²(ρ0, γ) ) ≥ 2 inf_{π∈Π^opt(ρ0,γ)} ∫ ∇p(x) · (x − y) π(dx, dy).
If π0 is the element of Π^opt(ρ0, γ) that achieves the infimum in (9.89), then
t^{−1}( d²(ρ(t), γ) − d²(ρ0, γ) ) ≤ t^{−1} ∫ |z_p(t, x) − x|² ρ0(dx) + 2 ∫ t^{−1}(z_p(t, x) − x) · (x − y) π0(dx, dy),
and it follows that equality holds in (9.89).
Throughout this section, we assume Ψ, Φ ∈ C²(Rd) are semiconvex in the sense that there exist constants λΨ and λΦ such that
(9.90) (∇Ψ(x) − ∇Ψ(y)) · (x − y) ≥ λΨ|x − y|²,
       (∇Φ(x) − ∇Φ(y)) · (x − y) ≥ λΦ|x − y|².
We also assume
(9.91) lim_{|x|→∞} Ψ(x)/|x|² = ∞
and inf_z Φ(z) > −∞. The main large deviation example in Dawson and Gärtner [22] is the Curie-Weiss model with double well potential, where Ψ(x) = a|x|⁴ − b|x|², a > 0, b ∈ R, which satisfies these conditions. Setting
(9.92) μ∞(dx) ≡ Z^{−1} e^{−2Ψ(x)} dx,
we define the relative entropy with respect to μ∞ by
(9.93) R(ρ‖μ∞) = ∫_{Rd} (dρ/dμ∞) log(dρ/dμ∞) dμ∞
and the free energy functional by
(9.94) E(ρ) = (1/2)R(ρ‖μ∞) + (1/2)∫_{Rd×Rd} Φ(x − y) ρ(dx)ρ(dy).
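For the Curie-Weiss potential, the semiconvexity constant in (9.90) can be taken to be λΨ = −2b in one dimension, since Ψ′(x) + 2bx = 4ax³ is nondecreasing. A quick numerical check; the values of a, b and the sample points are illustrative:

```python
import numpy as np

a, b = 1.0, 3.0
def dPsi(x):                      # Psi(x) = a x^4 - b x^2
    return 4.0*a*x**3 - 2.0*b*x

rng = np.random.default_rng(0)
xs = rng.uniform(-5.0, 5.0, 1000)
ys = rng.uniform(-5.0, 5.0, 1000)

lam = -2.0*b                      # claimed semiconvexity constant
gap = (dPsi(xs) - dPsi(ys))*(xs - ys) - lam*(xs - ys)**2
print(gap.min() >= -1e-9)         # True: (9.90) holds with lambda_Psi = -2b
```

The gap equals 4a(x³ − y³)(x − y), which is nonnegative since x³ is increasing; the small tolerance absorbs floating-point rounding.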
By a variational representation due to Donsker and Varadhan (e.g., Lemma 1.4.3 of [35]),
(9.95) R(ρ‖μ∞) = sup_{p∈B(Rd)} { ∫ p dρ − log ∫ e^p dμ∞ } = sup_{p∈Cc∞(Rd)} { ∫ p dρ − log ∫ e^p dμ∞ }.
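The variational representation (9.95) can be checked directly in a finite setting: for probability vectors the supremum is attained at p = log(dρ/dμ), where the objective equals the relative entropy. A minimal sketch; the specific vectors are illustrative:

```python
import numpy as np

rho = np.array([0.5, 0.3, 0.2])          # illustrative probability vectors
mu  = np.array([0.2, 0.5, 0.3])

def entropy(rho, mu):                    # R(rho || mu)
    return float(np.sum(rho * np.log(rho / mu)))

def dv_objective(p, rho, mu):            # <p, rho> - log <e^p, mu>
    return float(np.dot(p, rho) - np.log(np.dot(np.exp(p), mu)))

# The supremum over p is attained at p* = log(rho/mu) ...
p_star = np.log(rho / mu)
print(abs(dv_objective(p_star, rho, mu) - entropy(rho, mu)) < 1e-12)  # True

# ... and every other p gives a value no larger (Jensen's inequality).
rng = np.random.default_rng(1)
vals = [dv_objective(rng.normal(size=3), rho, mu) for _ in range(200)]
print(max(vals) <= entropy(rho, mu) + 1e-12)                          # True
```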
This representation and the assumption that Φ is bounded below ensure that E is lower semicontinuous in the weak topology on P(Rd) and hence also in the 2-Wasserstein topology on P2(Rd). Note that if ρ is not absolutely continuous with respect to μ∞, E(ρ) = ∞. Taking p ≡ 0 in (9.95), we see that R(ρ‖μ∞) ≥ 0 and hence E(ρ) ≥ inf_z Φ(z). For ρ ∈ E such that ρ(dx) = ρ(x)dx, we can write
(9.96) E(ρ) = (1/2) log Z + (1/2) ∫_{Rd} ρ(x) log ρ(x) dx + ∫_{Rd} Ψ(x)ρ(x) dx + (1/2) ∫_{Rd×Rd} Φ(x − y)ρ(x)ρ(y) dx dy.
Note that by Jensen's inequality,
(9.97) ∫_{Rd} ρ(x) log ρ(x) dx = ∫_{Rd} ρ(x) log( ρ(x)(e^{−|x|²})^{−1} ) dx − ∫_{Rd} |x|²ρ(x) dx
≥ −log ∫_{Rd} e^{−|x|²} dx − ∫_{Rd} |x|²ρ(x) dx > −∞.
By (9.91) and (9.97), the sum of the second and third terms in (9.96) is bounded below, as are the other terms.
Let p, ρ0, and ρ(t) be defined as in (9.83), and assume that E(ρ0) < ∞. Following calculations in [63],
∫ ρ(t, x) log ρ(t, x) dx = ∫ ρ0(x) log( ρ0(x) / det ∇z_p(t, x) ) dx,
so
(∂/∂t) ∫ ρ(t, x) log ρ(t, x) dx |_{t=0} = −∫ ρ0(x) ∆p(x) dx.
Furthermore,
(∂/∂t) ∫ Ψ(x) ρ(t, dx) = (∂/∂t) ∫ Ψ(z_p(t, x)) ρ0(dx) = ∫ ∇Ψ(z_p(t, x)) · ∇p(z_p(t, x)) ρ0(dx)
and
(∂/∂t) ∫ Φ(x − y) ρ(t, dx)ρ(t, dy) = (∂/∂t) ∫ Φ(z_p(t, x) − z_p(t, y)) ρ0(dx)ρ0(dy)
= ∫ ∇Φ(z_p(t, x) − z_p(t, y)) · ( ∇p(z_p(t, x)) − ∇p(z_p(t, y)) ) ρ0(dx)ρ0(dy)
= 2∫ ∇Φ(z_p(t, x) − z_p(t, y)) · ∇p(z_p(t, x)) ρ0(dx)ρ0(dy),
where the last equality follows from the symmetry of Φ. Consequently,
(9.98) ⟨grad E(ρ0), p⟩ = ⟨ρ0, ∇Ψ · ∇p + (∇Φ(·) ∗ ρ0) · ∇p − (1/2)∆p⟩ = −⟨(1/2)∆ρ0 + ∇·(ρ0(∇Ψ + ∇Φ ∗ ρ0)), p⟩,
so grad E(ρ0) = −( (1/2)∆ρ0 + ∇·(ρ0(∇Ψ + ∇Φ ∗ ρ0)) ) in D′(Rd), and
(9.99) ‖grad E(ρ0)‖_{−1,ρ0} = ‖(1/2)∆ρ0 + ∇·(ρ0(∇Ψ + ∇Φ ∗ ρ0))‖_{−1,ρ0}.
By Theorem D.45,
(9.100) ‖grad E(ρ0)‖²_{−1,ρ0} = (1/4) ∫_{Rd} |∇ρ0(x) + ρ0(x)∇(2Ψ + 2ρ0 ∗ Φ)(x)|² / ρ0(x) dx
whenever ρ0 has a Lebesgue density with ∇ρ0 ∈ L¹_loc(Rd), and ‖grad E(ρ0)‖_{−1,ρ0} = ∞ otherwise.
For f of the form (9.85) and ρ satisfying E(ρ) < ∞ and ∇ρ ∈ L¹_loc(Rd), by (9.86) and (9.98),
(1/2)( ‖grad(f(ρ) − E(ρ))‖²_{−1,ρ} − ‖grad E(ρ)‖²_{−1,ρ} )
= (1/2)( ∫ |∇(δf/δρ) − ∇ρ/(2ρ) − (∇Ψ + ∇Φ(·) ∗ ρ)|² dρ − ∫ |∇ρ/(2ρ) + (∇Ψ + ∇Φ(·) ∗ ρ)|² dρ )
= −∫ ∇(δf/δρ) · ( ∇ρ/(2ρ) + (∇Ψ + ∇Φ(·) ∗ ρ) ) dρ + (1/2) ∫ |∇(δf/δρ)|² dρ,
so H in (1.49) can be written as
(9.101) Hf(ρ) = ⟨−grad E(ρ), grad f(ρ)⟩_{−1,ρ} + (1/2)‖grad f(ρ)‖²_{−1,ρ},
and (9.62) holds. As in the previous two examples, this observation motivates us to extend H by including test functions of the forms (9.57) and (9.58), where the d in (9.57) and (9.58) is the 2-Wasserstein metric.
With Condition 9.26 in mind, we assume conditions that imply the level sets of E are compact.

Lemma 9.39. Assuming (9.91), for each C ∈ R, {ρ ∈ E : E(ρ) ≤ C} is compact in (E, r).

Proof. There exists G ≥ 0 such that lim_{r→∞} r^{−1}G(r) = ∞ and ∫ e^{G(|z|²)−2Ψ(z)} dz < ∞, and for each M > 0, (9.95) implies
∫ G(|x|²) ∧ M dρ ≤ R(ρ‖μ∞) + log ∫ e^{G(|x|²)∧M} dμ∞
(take p = G(|x|²) ∧ M), and hence
sup_{ρ∈E: E(ρ)≤C} ∫ G(|x|²) dρ < ∞.
Therefore, {ρ ∈ E : E(ρ) ≤ C} is relatively compact in (E, r).
We will express our extended operators in terms of the following Fisher information functionals: for each ρ ∈ P(Rd), let
(9.102) I(ρ) = ∫_{Rd} |∇ρ(x) + 2ρ(x)∇Ψ(x)|² / ρ(x) dx when ρ(dx) = ρ(x)dx and ∇ρ ∈ L¹_loc(Rd), and I(ρ) = +∞ otherwise,
and
(9.103) I#(ρ) ≡ ∫_{Rd} |∇ρ(x) + ρ(x)∇(2Ψ + 2ρ ∗ Φ)(x)|² / ρ(x) dx when ρ(dx) = ρ(x)dx and ∇ρ ∈ L¹_loc(Rd), and I#(ρ) = +∞ otherwise,
where we follow the convention that 0/0 = 0. By Lemma D.44,
I(ρ) = ∫_{Rd} |∇(dρ/dμ∞)|² / (dρ/dμ∞) dμ∞ when ρ(dx) = ρ(x)dx and ∇(dρ/dμ∞) ∈ L¹_loc(Rd), and I(ρ) = +∞ otherwise.
For γ ∈ E fixed, m > 0, and 0 < κ < 1, let
f0(ρ) = m(1 − κ)(1/2)d²(ρ, γ) + κE(ρ)
and
f1(γ) = −m(1 + κ)(1/2)d²(ρ, γ) − κE(γ).
Applying (9.101), we define
(9.107) Ĥ† f0(ρ) ≡ −m(1 − κ)² ∫_{Rd} ( (1/2)(∇ρ/ρ) + ∇Ψ + ∇(Φ ∗ ρ) ) · ∇p_{ρ,γ} dρ + (1 − κ)²(m²/2)d²(ρ, γ) − (κ − κ²/2)(1/4)I#(ρ)
if I#(ρ) < ∞, and Ĥ† f0(ρ) ≡ −∞ otherwise; and
(9.108) Ĥ‡ f1(γ) ≡ −m(1 + κ)² ∫_{Rd} ( (1/2)(∇γ/γ) + ∇Ψ + ∇(Φ ∗ γ) ) · ∇p_{γ,ρ} dγ + (1 + κ)²(m²/2)d²(ρ, γ) + (κ + κ²/2)(1/4)I#(γ)
if I#(γ) < ∞, and Ĥ‡ f1(γ) ≡ +∞ otherwise,
where p_{ρ,γ} is defined in (9.106) and p_{γ,ρ} is defined similarly with the roles of γ and ρ interchanged.
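As a sanity check on the Fisher information (9.102), I vanishes at the stationary density μ∞ ∝ e^{−2Ψ}, since there ∇ρ = −2ρ∇Ψ, while any tilted density gives a strictly positive value. A one-dimensional discretization with Ψ(x) = x²/2; the grid and cutoff are illustrative choices:

```python
import numpy as np

x  = np.linspace(-6.0, 6.0, 4001)
dx = x[1] - x[0]

Psi  = 0.5 * x**2                      # illustrative potential
dPsi = x
rho  = np.exp(-2.0 * Psi)
rho /= rho.sum() * dx                  # mu_infty = Z^{-1} e^{-2 Psi}

def fisher(r):
    """Discretized I(r) = int (r' + 2 r Psi')^2 / r dx."""
    dr = np.gradient(r, dx)
    return float(((dr + 2.0 * r * dPsi) ** 2 / r).sum() * dx)

print(fisher(rho) < 1e-4)              # True: I vanishes at mu_infty

# A tilted density e^{-2 Psi - x/2} has rho' + 2 rho Psi' = -(1/2) rho,
# so I = 1/4 after normalization.
rho2  = np.exp(-2.0 * Psi - 0.5 * x)
rho2 /= rho2.sum() * dx
print(abs(fisher(rho2) - 0.25) < 0.01) # True
```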
If E(ρ) < ∞ and I#(ρ) < ∞,
Ĥ† f0(ρ) = (1/2)( ‖grad( m(1 − κ)(1/2)d²(ρ, γ) − (1 − κ)E(ρ) )‖²_{−1,ρ} − ‖grad E(ρ)‖²_{−1,ρ} )
= (1/2)(1 − κ)²‖grad( E(ρ) − m(1/2)d²(ρ, γ) )‖²_{−1,ρ} − (1/2)‖grad E(ρ)‖²_{−1,ρ}
= −(1 − κ)²(m/2)⟨grad E(ρ), grad d²(ρ, γ)⟩_{−1,ρ} + (1 − κ)²(m²/2)d²(ρ, γ) − (1/2)(2κ − κ²)‖grad E(ρ)‖²_{−1,ρ}
≤ (1 − κ)(1/2)( ‖grad( E(ρ) − m(1/2)d²(ρ, γ) )‖²_{−1,ρ} − ‖grad E(ρ)‖²_{−1,ρ} ) − (κ/2)‖grad E(ρ)‖²_{−1,ρ},
and we have
Ĥ† f0(ρ) ≤ (1 − κ)m⟨−grad E(ρ), grad_ρ(1/2)d²(ρ, γ)⟩_{−1,ρ} + (m²/2)d²(ρ, γ) − (κ/8)I#(ρ).
Similarly, if E(γ) < ∞ and I#(γ) < ∞,
Ĥ‡ f1(γ) ≥ (1 + κ)m⟨grad E(γ), grad_γ(1/2)d²(ρ, γ)⟩_{−1,γ} + (m²/2)d²(ρ, γ) + (κ/8)I#(γ).
Consequently, by (D.78),
(9.109) (1 − κ)^{−1}Ĥ† f0(ρ) − (1 + κ)^{−1}Ĥ‡ f1(γ) ≤ m(|λΨ| + 2|λΦ|)d²(ρ, γ),
where λΨ and λΦ are the constants in (9.90).
The small perturbations κE introduce the I# term in Ĥ† f0 and Ĥ‡ f1, which ensures the appropriate semicontinuity of Ĥ† f0 and Ĥ‡ f1.

Lemma 9.40. Ĥ† f0 in (9.107) is upper semicontinuous, and Ĥ‡ f1 in (9.108) is lower semicontinuous.

Proof. The conclusion follows by Lemma D.48 of Appendix D.
Finally, to verify Condition 9.5, let {p_k : k = 1, 2, ...} ⊂ Cc∞(Rd) be buc-dense in Cb(Rd). Define
(9.110) G(ρ, γ) = Σ_{k=1}^∞ ( 1 / (2^k(1 + m_k²)) ) ⟨ρ − γ, p_k⟩²,
where
m_k = sup_x ( |p_k(x)| + |∆p_k(x)| + |∇p_k(x)| ).
Then, as in the previous examples, we can further enlarge the domains of Ĥ† and Ĥ‡ to include test functions
(9.111) f0(ρ) = a d²(ρ, γ0) + b E(ρ) + c G(ρ, ρ0) + C
and
(9.112) f1(γ) = −a d²(ρ0, γ) − b E(γ) − c G(γ, γ0) + C,
with a, c > 0, 0 < b < 1, ρ0, γ0 ∈ E and C ∈ R. Ĥ† f0 and Ĥ‡ f1 are still well defined and are respectively upper and lower semicontinuous.
Theorem 9.41. Suppose that h1, h2 ∈ Cb(E), that f̄ ∈ Cb(E) is a subsolution of (I − αĤ†)f = h1, and that f ∈ Cb(E) is a supersolution of (I − αĤ‡)f = h2. Then
sup_{ρ∈E} (f̄(ρ) − f(ρ)) ≤ sup_{ρ∈E} (h1(ρ) − h2(ρ)).
Proof. We apply Theorem 9.28. As in the previous two examples, G provides the family of functions needed to verify Condition 9.5. (9.59) holds trivially if I#(ρ) + I#(γ) = ∞, since it then reduces to −∞ ≤ ω_κ(m d²(ρ, γ)); otherwise, it follows from (9.109). By Theorem 9.28, the conclusion follows.
CHAPTER 10
Nearly deterministic processes in Rd

The fundamental problem for large deviations for processes introduced by Freidlin and Wentzell (see [52]) and, in particular cases, by Varadhan [118], considers processes with generators of the form
(10.1) A_n f(x) = n ∫_{Rd} ( f(x + n^{−1}z) − f(x) − n^{−1}z · ∇f(x) ) η(x, dz) + b(x) · ∇f(x) + (1/(2n)) Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x).
For large values of n, the corresponding Markov process is essentially a solution of the ordinary differential equation Ẋ = b(X), and the corresponding large deviation theory is concerned with the probabilities of sample paths significantly different from the solution of this equation.
We begin by considering processes having stationary, independent increments, that is, when the coefficients in (10.1) do not depend on x. The spatial homogeneity makes verifying the comparison principle simple. (The function H in Lemma 9.15 does not depend on x.) In Section 10.1, Theorem 10.1 generalizes a result on Rd-valued Lévy processes by Lynch and Sethuraman [82]; Section 10.1.6 discusses another type of scaling of Lévy processes considered by Mogulskii [88]; and Section 10.1.7 derives the results for Lévy processes in separable Banach spaces due to de Acosta [25]. In Section 10.2, we treat the discrete-time analogues of the results in Section 10.1. Results of this type have been given by Mogulskii [87] and Borovkov [11]. In Section 10.3, we consider results for more general Markov processes with generators of the form (10.1), obtaining the results of Freidlin and Wentzell [52] as well as various generalizations that have been given by a number of authors. In particular, Theorem 10.2.6 of Dupuis and Ellis [35] follows from Lemmas 10.5, 10.21 and Remark 10.16, and Theorem 5.6.7 of Dembo and Zeitouni [29] and Theorem 3.1 of de Acosta [26] follow from Lemmas 10.10, 10.21 and Remark 10.11.
Discrete-time analogs of these results have been considered by Gulinsky and Veretennikov [57], Dupuis and Ellis [35] (Chapter 6), and de Acosta [27]. Results of this type are discussed in Section 10.4. Finally, in Section 10.5, we consider processes with reflection, extending results of Anderson and Orey [4] and Doss and Priouret [34].

10.1. Processes with independent increments

We begin by considering processes with stationary, independent increments in Rd. For this class of problems, the comparison principle is trivial, since H does not depend on x and Condition 9.10 is immediate.
198
10. SMALL PERTURBATION PROBLEMS
As a first example, we consider Schilder's theorem, that is, the large deviation principle for
X_n = (1/√n) W,
where W is standard Brownian motion. Note that it would be distributionally equivalent to define X_n by X_n(t) = (1/n)W(nt). We have
A_n f = (1/(2n)) ∆f
and
H_n f = (1/(2n)) ∆f + (1/2)|∇f|².
It follows that Hf = (1/2)|∇f|². Letting E = Rd ∪ {∞}, exponential tightness is immediate by Corollary 4.17. As noted above, the comparison principle follows by Lemma 9.15, since H(x, p) = (1/2)|p|² does not depend on x. Theorem 6.14 then gives that the large deviation principle holds for {X_n}. By a simple calculation,
Hf(x) = H(x, ∇f(x)) = H̄f(x) ≡ sup_u ( u · ∇f(x) − (1/2)|u|² ),
and any solution of
f(x(t)) = f(x(0)) + ∫_0^t ∫_{Rd} u · ∇f(x(s)) λ(du × ds) = f(x(0)) + ∫_0^t ( ∫_{Rd} u λ_s(du) ) · ∇f(x(s)) ds
satisfies
ẋ(t) = ∫_{Rd} u λ_t(du),  a.e.
Consequently, if x is absolutely continuous,
I(x) = inf{ ∫_0^∞ (1/2) ∫_{Rd} |u|² λ_s(du) ds : ∫_{Rd} u λ_s(du) = ẋ(s), s ≥ 0 } = ∫_0^∞ (1/2)|ẋ(s)|² ds,
where the last equality follows by Jensen's inequality, that is,
∫_{Rd} |u|² λ_s(du) ≥ | ∫_{Rd} u λ_s(du) |²,
and I(x) = ∞ otherwise.
We now consider more general Lévy processes with generators of the form
(10.2) Ag(x) = ∫_{Rd} ( g(x + z) − g(x) − z · ∇g(x) ) η(dz) + (1/2) Σ_{i,j} a_{ij} ∂_i∂_j g(x) + b · ∇g(x).
If X corresponds to A, then X_n defined by X_n(t) = (1/n)X(nt) has generator
(10.3) A_n g(x) = n ∫_{Rd} ( g(x + n^{−1}z) − g(x) − n^{−1}z · ∇g(x) ) η(dz) + (1/(2n)) Σ_{i,j} a_{ij} ∂_i∂_j g(x) + b · ∇g(x).
10.1. PROCESSES WITH INDEPENDENT INCREMENTS
199
10.1.1. Convergence of H_n. H_n is given by
H_n f(x) = ∫_{Rd} ( e^{n(f(x + n^{−1}z) − f(x))} − 1 − z · ∇f(x) ) η(dz) + (1/(2n)) Σ_{i,j} a_{ij} ∂_i∂_j f(x) + (1/2) Σ_{i,j} a_{ij} ∂_i f(x)∂_j f(x) + b · ∇f(x),
and, assuming
(10.4) ∫_{Rd} ( e^{α·z} − 1 − α · z ) η(dz) < ∞
for all α ∈ Rd, for f ∈ D_d defined as in (9.41) to be
(10.5) D_d = { f ∈ C(E) : f|_{Rd} − f(∞) ∈ Cc²(Rd) },
Hf = lim_{n→∞} H_n f is given by
Hf(x) = ∫_{Rd} ( e^{∇f(x)·z} − 1 − z · ∇f(x) ) η(dz) + (1/2) Σ_{i,j} a_{ij} ∂_i f(x)∂_j f(x) + b · ∇f(x) = H(∇f(x)),
where
H(p) = ∫_{Rd} ( e^{p·z} − 1 − z · p ) η(dz) + (1/2) p^T a p + b · p.
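To make the Fenchel-Legendre duality used in Section 10.1.5 concrete, take η = δ_1 (unit-rate Poisson jumps), a = 0, b = 0, so that H(p) = e^p − 1 − p; its transform L(q) = sup_p(pq − H(p)) has the closed form (1 + q)log(1 + q) − q for q > −1. A numerical sketch; the grid bounds are an illustrative choice:

```python
import numpy as np

def H(p):                     # H(p) = e^p - 1 - p  (eta = delta_1, a = 0, b = 0)
    return np.exp(p) - 1.0 - p

def L_numeric(q):             # Legendre transform sup_p (p q - H(p)) on a grid
    p = np.linspace(-10.0, 10.0, 400001)
    return float(np.max(p * q - H(p)))

def L_exact(q):               # closed form, valid for q > -1
    return (1.0 + q) * np.log(1.0 + q) - q

for q in [0.25, 1.0, 3.0]:
    print(abs(L_numeric(q) - L_exact(q)) < 1e-6)   # True for each q
```

The supremum is attained at p* = log(1 + q), which lies well inside the grid for these values of q.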
10.1.2. Exponential tightness. The exponential compact containment condition follows by Example 4.23, and exponential tightness follows by Corollary 4.17.

10.1.3. The comparison principle. As noted above, the comparison principle follows by Lemma 9.15, since H(p) does not depend on x.

10.1.4. The large deviation theorem.

Theorem 10.1. Let A_n be given by (10.3) for g ∈ D_d. Suppose that X_n is a solution of the martingale problem for A_n and that the large deviation principle holds for X_n(0) in Rd with a good rate function I_0 and rate transform Λ_0. Then
a) For each α > 0 and f ∈ C(E), there exists a unique R_α f ∈ C(E) such that (I − αH)R_α f = f in the viscosity sense, and
V(t)f = lim_{n→∞} (R_{t/n})^n f
defines a semigroup V(t) : C(E) → C(E).
b) The large deviation principle holds for {X_n} in D_E[0, ∞) with good rate function
I(x) = sup_{{t_i}⊂∆^c_x} I_{t_1,...,t_k}( x(t_1), ..., x(t_k) ),
where
I_{t_1,...,t_k}(x_1, ..., x_k) = sup_{f_1,...,f_k∈D_d} { f_1(x_1) + ... + f_k(x_k) − Λ_0( V(t_1)(f_1 + V(t_2 − t_1)(f_2 + ... V(t_k − t_{k−1})f_k) ...) ) }.
c) If x ∈ D_E[0, ∞) − C_{Rd}[0, ∞), then I(x) = ∞.
Remark 10.2. Particular cases of this theorem were considered by Borovkov [11] and Lynch and Sethuraman [82]. Mogulskii [88] gives a more general version, including results under the assumption that (10.4) only holds for α in a ball containing the origin.

Proof. Since the comparison principle holds for (I − αH)f = h, parts (a) and (b) follow by Theorem 6.14. The compact containment condition implies that if x ∈ D_E[0, ∞) − D_{Rd}[0, ∞), then I(x) = ∞. (See Example 4.23.) Since η determines the Poisson rate of jumps of a particular size, it follows that
P{ sup_{s≤T} |X_n(s) − X_n(s−)| ≥ ε } = 1 − e^{−nT η{z: |z| ≥ nε}} ≤ nT η{z : |z| ≥ nε} ≤ nT e^{−knε} ∫_{{z: |z| ≥ nε}} e^{k|z|} η(dz),
and hence
limsup_{n→∞} (1/n) log P{ sup_{s≤T} |X_n(s) − X_n(s−)| ≥ ε } ≤ −kε.
Since k is arbitrary, the left side is −∞, and by Theorem 4.13, x ∈ D_{Rd}[0, ∞) − C_{Rd}[0, ∞) implies I(x) = ∞.

10.1.5. Variational representation of the rate function. Since H is convex, setting
L(q) = sup_{p∈Rd} ( p · q − H(p) ),
we have
H(p) = sup_{q∈Rd} ( p · q − L(q) ).
(See [104], Chapter 12.) It follows that
Hf(x) = H(∇f(x)) = H̄f(x) ≡ sup_{q∈Rd} ( q · ∇f(x) − L(q) ),
and hence by Theorem 8.14, if x is absolutely continuous,
I(x) = I_0(x(0)) + inf{ ∫_0^∞ ∫_{Rd} L(u) λ_s(du) ds : ∫_{Rd} u λ_s(du) = ẋ(s) } = I_0(x(0)) + ∫_0^∞ L(ẋ(s)) ds,
where the last equality follows by the convexity of L; I(x) = ∞ otherwise.

10.1.6. Other scalings. Mogulskii [88] also considers different scalings of X. In our notation, let
X_n(t) = (1/√(nβ_n)) ( X(β_n t) − β_n b t ),
where n/β_n → 0. A_n then becomes
(10.6) A_n g(x) = β_n ∫_{Rd} ( g(x + z/√(nβ_n)) − g(x) − (1/√(nβ_n)) z · ∇g(x) ) η(dz) + (1/(2n)) Σ_{i,j} a_{ij} ∂_i∂_j g(x),
giving
H_n f(x) = (β_n/n) ∫_{Rd} ( e^{n(f(x + z/√(nβ_n)) − f(x))} − 1 − √(n/β_n) z · ∇f(x) ) η(dz) + (1/(2n)) Σ_{i,j} a_{ij} ∂_i∂_j f(x) + (1/2) Σ_{i,j} a_{ij} ∂_i f(x)∂_j f(x)
= β_n ∫_{Rd} n( f(x + z/√(nβ_n)) − f(x) )² × [ e^{n(f(x + z/√(nβ_n)) − f(x))} − 1 − n(f(x + z/√(nβ_n)) − f(x)) ] / [ n(f(x + z/√(nβ_n)) − f(x)) ]² η(dz)
+ √(β_n/n) ∫_{Rd} ( √(nβ_n)( f(x + z/√(nβ_n)) − f(x) ) − z · ∇f(x) ) η(dz)
+ (1/(2n)) Σ_{i,j} a_{ij} ∂_i∂_j f(x) + (1/2) Σ_{i,j} a_{ij} ∂_i f(x)∂_j f(x).
If
(10.7) ∫_{Rd} |z|² e^{α|z|} η(dz) < ∞, some α > 0,
then H_n converges to H given by
Hf(x) = (1/2) ∫_{Rd} ( z · ∇f(x) )² η(dz) + (1/2) Σ_{i,j} a_{ij} ∂_i f(x)∂_j f(x).
Setting
â_{ij} = ∫_{Rd} z_i z_j η(dz) + a_{ij},
we have
Hf(x) = (1/2) Σ_{i,j} â_{ij} ∂_i f(x)∂_j f(x),
which is the same limiting operator as would be obtained if we took η = 0 and replaced a_{ij} by â_{ij} in the original generator. In other words, speeding up time by more than n allows the central limit theorem to "beat" the large deviation principle, so that the large deviation behavior is the same as for a Brownian motion.
The moment condition (10.7) can be weakened under more restrictive assumptions about the behavior of β_n. Assuming, at a minimum, that ∫_{Rd} |z|² η(dz) < ∞ and observing that the exponent is bounded on the set |z| ≤ √(β_n/n), to obtain the desired limit, it is enough to show
lim_{n→∞} (β_n/n) ∫_{{|z| > √(β_n/n)}} e^{n(f(x + z/√(nβ_n)) − f(x))} η(dz) = 0.
Consequently, it is sufficient to select β_n so that for each K > 0
(10.8) lim_{n→∞} (β_n/n) ∫_{{|z| > √(β_n/n)}} e^{K( n ∧ |z|√(n/β_n) )} η(dz) = 0.
For example (cf. [87]), if
∫_{Rd} |z|² e^{ε|z|^γ} η(dz) < ∞
for some ε > 0 and 0 < γ ≤ 1, then it is sufficient to have
(10.9) lim_{n→∞} n^{2/γ} / (n³β_n) = 0.
Since we are already assuming n/β_n → 0, (10.9) places no additional restrictions on β_n unless γ < 1/2. To see that (10.9) implies (10.8), note that the expression in the limit is bounded by
∫_{{|z| > √(β_n/n)}} |z|² e^{K( n ∧ |z|√(n/β_n) )} η(dz) = ∫_{{|z| > √(β_n/n)}} |z|² e^{ |z|^γ K( (n/|z|^γ) ∧ (|z|^{1−γ}√(n/β_n)) ) } η(dz),
and
(n/|z|^γ) ∧ ( |z|^{1−γ}√(n/β_n) ) ≤ n / (√(nβ_n))^γ,
which tends to zero by (10.9).
The compact containment condition follows by applying Lemma 4.22 with
f_n(x) = log( (1 + |x|²) / (1 + δ|x|²) )
for δ > 0 sufficiently small.
If â is nonsingular, then L(q) = (1/2) q^T â^{−1} q, and assuming for simplicity that X_n(0) = 0,
I(x) = (1/2) ∫_0^∞ ẋ(s)^T â^{−1} ẋ(s) ds
for absolutely continuous x, and I(x) = ∞ otherwise. If â has rank m < d, then there exists a d × m matrix σ such that â = σσ^T. Let W be a standard R^m-valued Brownian motion. Then the rate function for {X_n} is the same as the rate function for Y_n = (1/√n)σW. Consequently, by Schilder's theorem and the contraction principle,
I(x) = inf{ (1/2) ∫_0^∞ |u(t)|² dt : x(t) = σ ∫_0^t u(s) ds, t ≥ 0 },
where inf ∅ = ∞.

10.1.7. Extension to infinite dimensions. de Acosta [25] considers the analog of Theorem 10.1 for Banach space-valued processes. Let E be a separable Banach space and E* its dual, and let Ê ⊂ E* be a separable subspace of E* such that for each z ∈ E, ‖z‖ = sup_{ξ∈Ê} ⟨z, ξ⟩/‖ξ‖. Let η be a σ-finite measure on E satisfying
∫_E ‖z‖² e^{α‖z‖} η(dz) < ∞
for each α > 0. Let Ψ be a mean zero, Gaussian, E-valued random variable, and for ξ1, ξ2 ∈ E*, define a(ξ1, ξ2) = E[⟨Ψ, ξ1⟩⟨Ψ, ξ2⟩]. Let D(A) be the collection of functions of the form
(10.10) f(x) = g(⟨x, ξ1⟩, ..., ⟨x, ξm⟩),
for ξ1, ..., ξm ∈ Ê and g ∈ D_m = { g : g − c ∈ Cc²(R^m) for some c ∈ R }, m = 1, 2, .... To simplify notation, we write ⟨x, ξ⟩ = (⟨x, ξ1⟩, ..., ⟨x, ξm⟩) and a(ξ) = (( a(ξi, ξj) )). Let X be the E-valued process with generator
Af(x) = (1/2) Σ_{i,j} a(ξi, ξj) ∂_i∂_j g(⟨x, ξ⟩) + Σ_i ⟨b, ξi⟩ ∂_i g(⟨x, ξ⟩) + ∫_E ( g(⟨x + z, ξ⟩) − g(⟨x, ξ⟩) − Σ_i ⟨z, ξi⟩ ∂_i g(⟨x, ξ⟩) ) η(dz),
and define X_n(t) = (1/n)X(nt). Since X(nt) − X(0) can be represented as a sum of n independent copies of X(t) − X(0), for each t > 0, exponential tightness for {X_n(t)} follows as in Example 3.4. Since X has stationary, independent increments, for λ > 0, we have
E[ e^{nλ(‖X_n(t+u)−X_n(t)‖∧1)} | F^n_t ] = E[ e^{nλ(‖X_n(u)−X_n(0)‖∧1)} ] = E[ e^{λ(‖X(nu)−X(0)‖∧n)} ] ≤ E[ e^{λ(‖X(u)−X(0)‖∧1)} ]^n.
Consequently, in Theorem 4.1, we can take
γ_n(δ, λ, T) = n sup_{0≤u≤δ} log E[ e^{λ(‖X(u)−X(0)‖∧1)} ].
Since
lim_{δ→0} limsup_{n→∞} (1/n) γ_n(δ, λ, T) = lim_{δ→0} log sup_{0≤u≤δ} E[ e^{λ(‖X(u)−X(0)‖∧1)} ] = 0,
exponential tightness for {X_n} follows.
Assume, for the moment, that X_n(0) = 0. Then x_0 + X_n is the process with generator A_n and initial value x_0. By the finite-dimensional results, for ξ1, ..., ξm ∈ E*, the large deviation principle holds for ⟨X_n, ξ⟩. Let V^ξ denote the corresponding semigroup and I^ξ denote the corresponding rate function. It follows that for f(x) = g(⟨x, ξ⟩) and x^n_0 → x_0,
V(t)f(x_0) = lim_{n→∞} V_n(t)f(x^n_0) = lim_{n→∞} (1/n) log E[ e^{nf(x^n_0 + X_n(t))} ] = lim_{n→∞} (1/n) log E[ e^{ng(⟨x^n_0, ξ⟩ + ⟨X_n(t), ξ⟩)} ] = V^ξ(t)g(⟨x_0, ξ⟩)
exists, and hence the convergence of V_n(t)f to V(t)f will be uniform on compact sets. Suppose that X_n(0) satisfies the large deviation principle with rate function I_0 and rate transform Λ_0. Observing that D(A) contains a subset that is bounded above and isolates points, by Theorem 5.15, {X_n} satisfies the large deviation
principle with
(10.11) I_{t_1,...,t_k}(x_1, ..., x_k) = sup_{f_1,...,f_k∈D(A)} { f_1(x_1) + ... + f_k(x_k) − Λ_0( V(t_1)(f_1 + V(t_2 − t_1)(f_2 + ... V(t_k − t_{k−1})f_k) ...) ) }
= sup_m sup_{ξ∈Ê^m} sup_{g_1,...,g_k∈D_m} { g_1(⟨x_1, ξ⟩) + ... + g_k(⟨x_k, ξ⟩) − Λ^ξ_0( V^ξ(t_1)(g_1 + V^ξ(t_2 − t_1)(g_2 + ... V^ξ(t_k − t_{k−1})g_k) ...) ) }
= sup_m sup_{ξ∈Ê^m} I^ξ_{t_1,...,t_k}( ⟨x_1, ξ⟩, ..., ⟨x_k, ξ⟩ ).
It follows that
I(x) = sup_m sup_{ξ∈Ê^m} I^ξ(⟨x, ξ⟩)
= sup_m sup_{ξ∈Ê^m} ( I^ξ_0(⟨x(0), ξ⟩) + ∫_0^∞ L^ξ( (d/ds)⟨x(s), ξ⟩ ) ds )
= sup_m sup_{ξ∈Ê^m} ( I^ξ_0(⟨x(0), ξ⟩) + ∫_0^∞ sup_{p∈R^m} ( p · (d/ds)⟨x(s), ξ⟩ − H^ξ(p) ) ds )
= lim_{m→∞} sup_{ξ∈Ê^m} ( I^ξ_0(⟨x(0), ξ⟩) + ∫_0^∞ sup_{ζ∈S(ξ)} ( (d/ds)⟨x(s), ζ⟩ − H(ζ) ) ds )
= lim_{m→∞} ( I^{ξ^m}_0(⟨x(0), ξ^m⟩) + ∫_0^∞ sup_{ζ∈S(ξ^m)} ( (d/ds)⟨x(s), ζ⟩ − H(ζ) ) ds )
= I_0(x(0)) + ∫_0^∞ sup_{ζ∈Ê} ( (d/ds)⟨x(s), ζ⟩ − H(ζ) ) ds,
where for ξ ∈ Ê^m and p ∈ R^m, S(ξ) denotes the linear subspace of Ê spanned by ξ1, ..., ξm,
H^ξ(p) = ∫_E ( e^{p·⟨z,ξ⟩} − 1 − ⟨z, ξ⟩ · p ) η(dz) + (1/2) p^T a(ξ) p + ⟨b, ξ⟩ · p,
for ζ ∈ Ê,
H(ζ) = ∫_E ( e^{⟨z,ζ⟩} − 1 − ⟨z, ζ⟩ ) η(dz) + (1/2) a(ζ, ζ) + ⟨b, ζ⟩,
and ξ^m = (ξ1, ..., ξm) for some sequence ξ1, ξ2, ... that is dense in Ê. Note that H^ξ(p) = H( Σ_{i=1}^m p_i ξi ).

10.2. Random walks

The discrete-time analogues of the results in Section 10.1 are results on random walks considered by Borovkov [11] and Mogulskii [87]. Again, the comparison principle is trivial, since H does not depend on x and Condition 9.10 is immediate.
Let ξ1, ξ2, ... be independent and identically distributed Rd-valued random variables with distribution ν. Define
X_n(t) = β_n^{−1} Σ_{k=1}^{[t/ε_n]} ξ_k.
10.3. MARKOV PROCESSES
205
Then, Z
An f (x) =
−1 n (
f (x + βn−1 z)ν(dz) − f (x))
Rd
and Hn f (x) =
1 log nn
Z
−1
en(f (x+βn
z)−f (x))
ν(dz).
Rd
10.2.1. Convergence of Hn . If βn = n, then Hn f → Hf given by
R Rd
Z Hf (x) = log
eα·z ν(dz) < ∞, α ∈ Rd , n = n−1 , and e∇f (x)·z ν(dz).
Rd
R R If Rd zν(dz) = 0, Rd eαz ν(dz) < ∞ for some α > 0, n βn2 = n, and n n → 0, then Hn f → Hf given by Z 1 (∇f (x) · z)2 ν(dz). Hf (x) = 2 Rd The moment condition can be relaxed as in Section 10.1.6. The large deviation theorem and rate function representation are essentially the same as in Section 10.1. 10.3. Markov processes d
Let E = R^d ∪ {∞} denote the one-point compactification of R^d. Let a = σσ^T, and for f ∈ D_d given by (9.41), define

(10.12)  A_n f(x) = n ∫_{R^d} ( f(x + n^{-1}z) − f(x) − n^{-1}z·∇f(x) ) η(x,dz) + b(x)·∇f(x) + (1/(2n)) Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x),

for x ∈ R^d, and A_n f(∞) = 0. We will require σ, b, and η to satisfy the following condition.

Condition 10.3. σ : R^d → M^{d×d} and b : R^d → R^d are locally bounded and measurable, and ψ : R^d → [1,∞) and η : R^d → M(R^d), the space of σ-finite measures on R^d, satisfy

(10.13)  sup_x ∫_{R^d} ( |z|/ψ(x) )² e^{α|z|/ψ(x)} η(x,dz) < ∞,

for each α > 0.

If a is bounded and continuous, b is bounded, a(x) is positive definite for each x ∈ R^d, and for each Γ ∈ B(R^d) the mapping

x → ∫_Γ |z|² (1 + |z|²)^{-1} η(x,dz)

is bounded and continuous, then for any initial distribution ν ∈ P(R^d), existence and uniqueness hold for the martingale problem for (A_n, ν) by Theorem 4.3 of Stroock [114] (Theorem 8.3.3 of Ethier and Kurtz [36]). Existence and uniqueness
can also be obtained by verifying existence and uniqueness for a stochastic equation of the form

(10.14)  X_n(t) = X_n(0) + (1/√n) ∫_0^t σ(X_n(s)) dW(s) + ∫_0^t b(X_n(s)) ds + (1/n) ∫_{[0,∞)×U×[0,t]} I_{[0, nβ(X_n(s−),u)]}(v) g(X_n(s−),u) ξ̃(dv×du×ds),

where (U, 𝒰) is a measurable space, ξ is a Poisson random measure on [0,∞)×U×[0,∞) with mean measure m×γ×m (m denotes Lebesgue measure on [0,∞)), γ is a σ-finite measure on U, and ξ̃ = ξ − m×γ×m. X_n given by (10.14) is a solution of the martingale problem for A_n with

η(x,Γ) = ∫_U β(x,u) I_Γ(g(x,u)) γ(du).
Conditions for uniqueness are given in Graham [58], Theorem 1.2, and Kurtz and Protter [76], Theorem 8.3.

More generally, we will simply assume that A_n has an extension Â_n that is the full generator for a well-posed martingale problem, that X_n is a solution of the martingale problem for Â_n, and that the large deviation principle holds for X_n(0) with a good rate function I_0. If A_n ⊂ C(E)×C(E), then such an extension exists by Krylov's theorem (see Theorems 4.5.11 and 4.5.19 of [36]), and the distribution of the solution, P^n_x ∈ P(D_E[0,∞)), is a measurable function of the initial position x.

10.3.1. Convergence of H_n. The operator H_n f = n^{-1} e^{−nf} A_n e^{nf} is given by

H_n f(x) = ∫_{R^d} ( e^{n(f(x + n^{-1}z) − f(x))} − 1 − z·∇f(x) ) η(x,dz) + (1/(2n)) Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x) + ½ Σ_{i,j} a_{ij}(x) ∂_i f(x) ∂_j f(x) + b(x)·∇f(x),

and if Condition 10.3 holds with ψ(x) = 1 + |x|, then for each f ∈ D_d, H_n f converges, uniformly in x, to Hf given by

Hf(x) = ∫_{R^d} ( e^{∇f(x)·z} − 1 − z·∇f(x) ) η(x,dz) + ½ Σ_{i,j} a_{ij}(x) ∂_i f(x) ∂_j f(x) + b(x)·∇f(x).
Uniform convergence on compact subsets of R^d follows easily from the exponential moment conditions. To see that the convergence is uniform on all of R^d, suppose the support of f − f(∞) is contained in B_K(0) for some K > 0, and assume that |x| > 2K + 1. Then

f(x + n^{-1}z) − f(x) ≤ 2n^{-1}|z| ‖f‖ / (1 + |x|),

since the left side is zero unless |x + n^{-1}z| < K, and hence n^{-1}|z| > |x| − K ≥ ½(|x| + 1). It follows that

(10.15)  H_n f(x) ≤ ∫_{ |z|/(1+|x|) > n/2 } ( e^{2‖f‖|z|/(1+|x|)} − 1 ) η(x,dz).

Taking ψ(x) = 1 + |x| in Condition 10.3, the right side of (10.15) converges to zero uniformly on R^d, so the left side converges uniformly on {x : |x| > 2K + 1}. Defining H : R^d × R^d → R by

(10.16)  H(x,p) = ½ |σ^T(x)p|² + b(x)·p + ∫_{R^d} ( e^{p·z} − 1 − p·z ) η(x,dz),

we have

(10.17)  Hf(x) = H(x, ∇f(x)).
Note that if H is continuous, then b is continuous, but that there is an interplay between σ and η.

10.3.2. Exponential tightness. The compact containment condition can be verified as in Example 4.23. Alternatively, we can consider the process in the one-point compactification of R^d, E = R^d ∪ {∞}. In either case, exponential tightness for {X_n} follows by Corollary 4.17. Since Theorem 6.14 is proved under a compactness assumption on E, we follow the latter alternative with E = R^d ∪ {∞}. We still have Hf(x) = H(x, ∇f(x)), where ∇f(∞) is understood to be 0.

10.3.3. The comparison principle. By Lemma 6.8, we can consider H̄, the closure of H, rather than H. Under Condition 10.3, if for some K > 0, |σ(x)| + |b(x)| ≤ Kψ(x), then D(H̄) contains all functions that are continuously differentiable on R^d and satisfy lim_{x→∞} f(x) = f(∞) and lim_{x→∞} ψ(x)∇f(x) = 0.

The next lemmas verify Condition 9.10 for most of the standard large deviation theorems for R^d-valued Markov processes. We say that σ and b have at most linear growth if there exist constants K₁ and K₂ such that

(10.18)  |σ(x)| + |b(x)| ≤ K₁ + K₂|x|.
Lemma 10.4. If σ and b have at most linear growth and there exists δ > 0 such that

(10.19)  sup_{|p|≤δ} sup_x ∫_{R^d} ( e^{p·z/(1+|x|)} − 1 − p·z/(1+|x|) ) η(x,dz) < ∞,

then Condition 9.10.2 holds.

Lemma 10.5. Suppose that H given by (10.16) is continuous and that for each x ∈ R^d and each p ∈ R^d with p ≠ 0, either

(10.20)  p^T a(x) p > 0  or  η(x, {z : p·z > 0}) > 0.

Then (9.45) holds.

Lemma 10.6. Let h ∈ C_b(E) and α > 0. Suppose that σ and b have at most linear growth, that (10.19) holds for some δ > 0, that H is continuous, and that (9.45) holds. Then the comparison principle holds for (I − αH)f = h. If Conditions 10.2.2 and 10.2.4 of Dupuis and Ellis [35] hold, then the conditions of the lemma hold.

Proof. Condition 9.10.1 follows by Lemma 9.16. Condition 9.10.2 holds by Lemma 10.4, which in turn implies that D(H̄) contains all continuously differentiable f satisfying (9.43). The comparison principle then follows by Lemma 9.15.

The next lemmas verify Condition 9.11, which implies Condition 9.10.1 by Lemma 9.12. Condition 9.11 is preferable to Condition 9.10.1, since the collection of H satisfying Condition 9.11 is closed under positive linear combinations.

Lemma 10.7. Let (U, 𝒰) be a measurable space, and let σ : R^d × U → R^d be locally Lipschitz in the sense that for each r > 0 there exists C_r such that

|σ(x,u) − σ(y,u)| ≤ C_r |x − y|,  |x|, |y| ≤ r, u ∈ U.

Suppose ν is a finite measure on U, ∫_U |σ(x,u)|² ν(du) < ∞ for x ∈ R^d, and

H(x,p) = ½ ∫_U ( σ(x,u)·p )² ν(du).

Then H satisfies Condition 9.11.

Remark 10.8. Note that this lemma covers the case H(x,p) = ½|σ^T(x)p|² for Lipschitz σ : R^d → M^{d×d}.

Proof. Fix r > 0. Let λ > 1, µ > 0, and define δ = µ|x − y|². Then

λH(x, µ(x−y)/λ) − H(y, µ(x−y))
 = (µ²/(2λ)) ∫_U ( σ(x,u)·(x−y) )² ν(du) − (µ²/2) ∫_U ( σ(y,u)·(x−y) )² ν(du)
 = (µ²/(2λ)) ∫_U ( (σ(x,u) − σ(y,u))·(x−y) )² ν(du)
   + (µ²/λ) ∫_U ( σ(y,u)·(x−y) ) ( (σ(x,u) − σ(y,u))·(x−y) ) ν(du)
   − (1 − λ^{-1}) (µ²/2) ∫_U ( σ(y,u)·(x−y) )² ν(du)
 ≤ 2^{-1}λ^{-1}C_r²δ² ν(U) + λ^{-1}δC_r µ ∫_U |σ(y,u)·(x−y)| ν(du) − 2^{-1}(1 − λ^{-1}) µ² ∫_U ( σ(y,u)·(x−y) )² ν(du)
 ≤ 2^{-1}λ^{-1}C_r²δ² ν(U) + ( (δC_r)² / (2λ(λ−1)) ) ν(U),

where the last inequality follows from the fact that

au − ½bu² ≤ a²/(2b),  b > 0.

Consequently, if sup_m( |x_m| + |y_m| ) ≤ r and δ_m ≡ µ_m|x_m − y_m|² → 0, then (9.40) follows.

Lemma 10.9. Let b : R^d → R^d be locally Lipschitz, and define H(x,p) = b(x)·p. Then H satisfies Condition 9.11.

Proof. Fix r > 0, and let C satisfy

|b(x) − b(y)| ≤ C|x − y|,  |x|, |y| ≤ r.

Let λ > 1, µ > 0. Then

λH(x, µ(x−y)/λ) − H(y, µ(x−y)) = b(x)·µ(x−y) − b(y)·µ(x−y) = µ( b(x) − b(y) )·(x−y) ≤ Cµ|x − y|²,

and the lemma follows.
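The inequality established in the proof above can be exercised numerically. The sketch below assumes d = 1 and b(x) = sin x, which is globally Lipschitz with constant C = 1 (an illustrative choice), and checks λH(x, µ(x−y)/λ) − H(y, µ(x−y)) ≤ Cµ|x−y|² over random inputs:

```python
import numpy as np

def check_condition(b, C, rng, trials=1000):
    """Check lam*H(x, mu*(x-y)/lam) - H(y, mu*(x-y)) <= C*mu*(x-y)**2
    for H(x, p) = b(x)*p in d = 1, over random x, y, lam > 1, mu > 0."""
    H = lambda x, p: b(x) * p
    ok = True
    for _ in range(trials):
        x, y = rng.uniform(-3.0, 3.0, 2)
        lam = rng.uniform(1.001, 5.0)
        mu = rng.uniform(0.0, 10.0)
        lhs = lam * H(x, mu * (x - y) / lam) - H(y, mu * (x - y))
        ok &= lhs <= C * mu * (x - y) ** 2 + 1e-9   # tolerance for roundoff
    return bool(ok)

rng = np.random.default_rng(2)
all_ok = check_condition(np.sin, C=1.0, rng=rng)
```

For this H the bound is exact algebra, λH(x, µ(x−y)/λ) − H(y, µ(x−y)) = µ(b(x)−b(y))(x−y), so the check succeeds for every sample.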
Lemma 10.10. Suppose that σ, b, and η satisfy Condition 10.3 with ψ(x) = 1 + |x| and that H is given by (10.16). Suppose that σ and b are Lipschitz and that η is of the form

(10.21)  η(x,Γ) = ∫_V I_Γ(g(x,v)) γ(dv),

where g : R^d × V → R^d. Let Φ(u) = u²e^u, and assume that for each r > 0 there exists δ_r > 0 such that

(10.22)  sup_{ |x|,|y|≤r, x≠y } ∫_V Φ( δ_r |g(x,v) − g(y,v)| / |x − y| ) γ(dv) < ∞.

Then H satisfies Condition 9.11, Condition 9.10, and the comparison principle.

Remark 10.11. With η = 0, large deviation results for Lipschitz σ and b are given by Theorem 5.6.7 of Dembo and Zeitouni [29]. de Acosta [26] gives results for Lipschitz σ and b and η of the form (10.21). The condition in (10.22) is essentially Condition 3.5 of his result.

Proof. Let

H_0(x,p) = ∫_{R^d} ( e^{p·z} − 1 − p·z ) η(x,dz) = ∫_V ( e^{g(x,v)·p} − 1 − g(x,v)·p ) γ(dv).

Then H_0 satisfies Condition 9.18 with β ≡ 1 and hence, by Lemma 9.19, Condition 9.11. By Lemmas 10.7 and 10.9 and Lemma 9.13, H satisfies Condition 9.11, and (9.39) follows with

ω(λ) = 0 for λ > 1,  ω(λ) = ∞ for λ ≤ 1.

By the Lipschitz continuity of σ and b, σ and b have at most linear growth. Condition 10.3 with ψ(x) = 1 + |x| implies (10.19). Consequently, Condition 9.10.2 follows by Lemma 10.4. The lemma then follows by Lemma 9.15.
Lemma 10.12. Suppose that σ, b, and η satisfy Condition 10.3 with ψ(x) = 1 + |x| and that H is given by (10.16). Suppose that σ and b are Lipschitz and that η is of the form

(10.23)  η(x,Γ) = ∫_U β(x,u) I_Γ(g(x,u)) γ(du),

where g : R^d × U → R^d and β ≥ 0. Assume that there exists K such that

|β(x,u) − β(y,u)| ≤ K|x − y| β(y,u).

Let Φ(z) = z²e^z, and assume that for each r > 0 there exists δ_r > 0 such that

(10.24)  sup_{ |x|,|y|≤r, x≠y } ∫_U β(x,u) Φ( δ_r |g(x,u) − g(y,u)| / |x − y| ) γ(du) < ∞.

Then H satisfies Condition 9.10 and the comparison principle.

Remark 10.13. This lemma covers processes with jump terms of the form

n ∫_U β(x,u) ( f(x + n^{-1}g(x,u)) − f(x) − n^{-1}g(x,u)·∇f(x) ) γ(du).

For example, taking U = R^d and g(x,u) = u, we have

n ∫_{R^d} β(x,u) ( f(x + n^{-1}u) − f(x) − n^{-1}u·∇f(x) ) γ(du).

Proof. Let

H_0(x,p) = ½|σ(x)^T p|² + ∫_{R^d} ( e^{p·z} − 1 − p·z ) η(x,dz)
         = ½|σ(x)^T p|² + ∫_U β(x,u) ( e^{g(x,u)·p} − 1 − g(x,u)·p ) γ(du).

Noting that

|σ(x)^T p|² = Σ_{i=1}^d ( σ_i(x)·p )²,

where σ_i(x) is the ith column of σ(x), we can take V = U ∪ {1,…,d} and extend γ to a measure on V by setting γ({i}) = 1, extend β by defining β(x,i) = 1, and extend g by defining g(x,i) = σ_i(x). Then H_0 satisfies the conditions of Lemma 9.20. Consequently, H satisfies Condition 9.10.1 with

ω(λ) = 0 for λ > 1,  ω(λ) = ∞ for λ ≤ 1.

The remainder of the proof is the same as for Lemma 10.10.
Occasionally, the convex dual L is of simpler form than H. The following lemma gives conditions on L that can be useful in verifying the comparison principle.

Lemma 10.14. Suppose L : R^d × R^d → [0,∞) and

(10.25)  H(x,p) = sup_{q∈R^d} { p·q − L(x,q) },  x, p ∈ R^d.

Then Condition 9.10.1 holds if L satisfies the following:
Condition 10.15. For each compact set Γ ⊂ R^d and ε > 0, there exist δ > 0 and C > 0 such that if x, y ∈ Γ satisfy |x − y| < δ and q ∈ R^d, then there exists q̂ ∈ R^d such that

(10.26)  L(y,q̂) − L(x,q) ≤ (ε/2)(1 + L(x,q)),  |q̂ − q| ≤ C|x − y|(1 + L(x,q)).

If, in addition, Condition 9.10.2 holds, then the comparison principle holds.

Remark 10.16. Condition 10.15 is Condition 10.2.5 of Dupuis and Ellis [35]. This condition is implied by

lim_{δ→0} sup { |L(y,q) − L(x,q)| / (1 + L(x,q)) : x, y ∈ R^d, |x − y| < δ, q ∈ R^d } = 0.

10.3.4. The large deviation theorem.

Theorem 10.17. Suppose that Condition 10.3 holds with ψ(x) = 1 + |x|, that σ and b have at most linear growth and (10.19) holds, that Condition 9.10.1 holds for H given by (10.16), and that the large deviation principle holds for {X_n(0)} in R^d with good rate function I_0. Then

a) For each α > 0 and f ∈ C(E), there exists a unique R_α f ∈ C(E) such that (I − αH)R_α f = f in the viscosity sense, and

V(t)f = lim_n R^n_{t/n} f

defines a semigroup V(t) : C(E) → C(E).

b) The large deviation principle holds for {X_n} in D_E[0,∞) with good rate function

I(x) = sup_{ {t_i} ⊂ Δ^c_x } I_{t_1,…,t_m}( x(t_1),…,x(t_m) ),

where

I_{t_1,…,t_m}(x_1,…,x_m) = inf_{x_0} { I_0(x_0) + sup_{f_1,…,f_m∈D_d} { f_1(x_1) + ⋯ + f_m(x_m) − V(t_1)( f_1 + V(t_2 − t_1)( f_2 + ⋯ V(t_m − t_{m−1}) f_m ) ⋯ )(x_0) } }.

c) If x ∈ D_E[0,∞) − D_{R^d}[0,∞), then I(x) = ∞.

Remark 10.18. a) Condition 9.10.2 holds by Lemma 10.4. b) The critical assumption is that Condition 9.10.1 holds. The results of Section 10.3.3 identify a number of settings in which this condition is valid. In particular, the condition holds under any of the following conditions:

(1) H is continuous and nondegenerate in the sense of (10.20). For example, if σ(x) is nonsingular for each x, then the nondegeneracy condition holds. See Lemma 10.5. These conditions, combined with the variational conditions covered by Lemma 10.14, extend Theorem 10.2.6 of Dupuis and Ellis [35].
(2) σ and b are Lipschitz and η = 0. See Lemmas 10.7 and 10.9, and recall that Condition 9.11 implies Condition 9.10.1. These conditions give Theorem 5.6.7 of Dembo and Zeitouni [29].
(3) σ and b are Lipschitz and η satisfies the conditions of Lemma 10.10 or Lemma 10.12. Under the conditions of Lemma 10.10, the theorem extends Theorem 3.1 of de Acosta [26].

Proof. Let H_n f = n^{-1} e^{−nf} A_n e^{nf}, and let H be defined as in (10.17). Let D_d be given by (9.41). Then lim_n ‖H_n f − Hf‖ = 0 for each f ∈ D_d. The comparison principle for (I − αH)f = h is given by Lemma 9.15, and parts (a) and (b) follow by Theorem 6.14. Part (c) follows by the compact containment condition. (See Example 4.23.)

10.3.5. Variational representation of the rate function.

Lemma 10.19. Assume Condition 10.3 holds. Then for each x ∈ R^d, H(x,·) is convex and finite (hence continuous) on R^d. Define L(x,·) as the Fenchel–Legendre transform of H(x,·), that is,

(10.27)  L(x,q) = sup_{p∈R^d} { p·q − H(x,p) },  x, q ∈ R^d.

Then L(x,q) is convex and lower semicontinuous in q ∈ R^d, and

(10.28)  H(x,p) = sup_{q∈R^d} { p·q − L(x,q) }.

Remark 10.20. In the diffusion case, H(x,p) = ½|σ(x)^T p|² + b(x)·p. If a(x) = σ(x)σ(x)^T is invertible for all x,

L(x,q) = ½ (q − b(x))^T a(x)^{-1} (q − b(x)).

Proof. The convexity and finiteness of H(x,·) follow by direct calculation. The Fenchel–Moreau theorem in convex analysis gives (10.28). The convexity and lower semicontinuity of L(x,·) are basic properties of the transform (10.27). (See [104], Chapter 12.)
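The closed form in Remark 10.20 can be verified against a direct numerical Legendre transform. The sketch below works in d = 1 with illustrative constants a = 2 and b = −1/2, approximating the supremum in (10.27) over a fine grid of p:

```python
import numpy as np

def L_numeric(q, a, b, pgrid):
    # L(x, q) = sup_p { p*q - H(x, p) } with H(x, p) = a*p**2/2 + b*p (d = 1)
    return np.max(pgrid * q - (0.5 * a * pgrid**2 + b * pgrid))

a, b = 2.0, -0.5                         # illustrative: a = sigma^2 > 0, drift b
pgrid = np.linspace(-50.0, 50.0, 200_001)
qs = [-3.0, 0.0, 1.7]
numeric = [L_numeric(q, a, b, pgrid) for q in qs]
closed = [(q - b) ** 2 / (2 * a) for q in qs]   # Remark 10.20 in d = 1
```

The grid maximum agrees with (q − b)²/(2a) to within the grid resolution, since the maximizer p = (q − b)/a lies well inside the grid for these q.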
Note that if we define

L_ψ(x,q) = sup_{p∈R^d} ( p·q − H(x, p/ψ(x)) ),

then L_ψ(x,q) = L(x, qψ(x)) and

H(x,p) = sup_{q∈R^d} ( ψ(x)p·q − L_ψ(x,q) ).
Lemma 10.21 (Variational representation of H). Assume that Condition 10.3 holds, that ψ and H are continuous, and that there exists K > 0 such that |σ(x)| + |b(x)| ≤ Kψ(x), x ∈ R^d. Let U = R^d. Define

Af(x,q) = ψ(x)q·∇f(x),  q ∈ U, f ∈ D_d,

where ∇f(∞) is defined to be 0. Extend the definition of L_ψ by

(10.29)  L_ψ(∞,q) = lim inf_{x→∞, q′→q} L_ψ(x,q′).

Then L_ψ ≥ 0 is lower semicontinuous on E × U,

(10.30)  Hf(x) = H̄f(x) ≡ sup_{q∈U} ( (Af − L_ψ)(x,q) ),  x ∈ E, f ∈ D_d,

and Conditions 8.9, 8.10, and 8.11 are satisfied.

Proof. L_ψ(x,q) ≥ 0 follows from H(x,0) = 0. By the continuity of H, L_ψ as extended in (10.29) is lower semicontinuous. The representation (10.30) follows from (10.28).

Next consider Condition 8.9. A ⊂ C_b(E)×C(E×U) and D̄_d = C_b(E) are immediate, giving Condition 8.9.1; Γ = E × U and, for each x_0 ∈ E, x(t) ≡ x_0 and λ(dq×ds) = δ_{0}(dq)×ds define a pair (x,λ) ∈ J^Γ, giving Condition 8.9.2. By Condition 10.3,

H̄(c) ≡ sup_{|p|≤c} sup_{x∈R^d} H(x, p/ψ(x)) < ∞,

and it follows that lim_{|q|→∞} inf_x L_ψ(x,q) = ∞, implying Condition 8.9.3. Condition 8.9.4 is immediate from the compactness of E.

To obtain the estimate in Condition 8.9.5, note that for each c > 0,

L_ψ(x,q)/|q| ≥ sup_{|p|=c} ( p·q/|q| − H̄(c)/|q| ) = c − H̄(c)/|q|.

It follows that

(10.31)  lim_{N→∞} inf_{x∈R^d} inf_{|q|=N} L_ψ(x,q)/|q| = ∞.

Define

ϕ(s) = s · inf_{x∈R^d} inf_{|q|≥s} L_ψ(x,q)/|q|,  s ≥ 0.

Then ϕ is a strictly increasing function. For each f ∈ D_d, there exists 0 < C_f < ∞ such that |Af(x,q)| = |ψ(x)q·∇f(x)| ≤ C_f|q|. By (10.31), r^{-1}ϕ(r) → +∞ as r → ∞. Let ψ_f(r) = C_f ϕ^{-1}(r). Then r^{-1}ψ_f(r) → 0, and since

ϕ( C_f^{-1}|Af(x,q)| ) ≤ ϕ(|q|) ≤ L_ψ(x,q),

we have |Af(x,q)| ≤ ψ_f( L_ψ(x,q) ). Since r^{-1}ϕ(r) → +∞ as r → +∞, ϕ is a tightness function, and Condition 8.9.5 follows.
To obtain Condition 8.10, observe that for x ∈ R^d,

(10.32)  b(x)/ψ(x) = (∂/∂p) H(x, p/ψ(x)) |_{p=0} = (∂₂H)(x,0)/ψ(x).

Since b is continuous, for each x_0 ∈ R^d there exists a (local) solution of the differential equation ẋ = b(x) satisfying x(0) = x_0. Note that x can be constructed so that either sup_{s≤t}|x(s)| < ∞ for all t or there exists t_0 such that lim_{t→t_0}|x(t)| = ∞; in the latter case, define x(t) = ∞ for t ≥ t_0. Observe that p·b(x)/ψ(x) − H(x, p/ψ(x)) attains its supremum in p ∈ R^d at p = 0, and hence

L_ψ(x, b(x)/ψ(x)) = 0·b(x)/ψ(x) − H(x,0) = 0.

Note also that since b/ψ is bounded and L_ψ is lower semicontinuous, there exists q_0 ∈ U such that L_ψ(∞, q_0) = 0. Let λ(du×ds) = δ_{b(x(s))/ψ(x(s))}(du)×ds for s < t_0 and λ(du×ds) = δ_{q_0}(du)×ds for s ≥ t_0. Then (x,λ) satisfies (8.12). For x_0 = ∞, let x(t) ≡ ∞ and λ(du×ds) = δ_{q_0}(du)×ds. Then (x,λ) satisfies (8.12).

Finally, consider Condition 8.11. For each f ∈ D_d, let

q_f(x) = (∂₂H)(x, ∇f(x)),  x ∈ R^d.

Then q_f is continuous, and there exists a solution of ẋ(t) = q_f(x(t)), x(0) = x_0, on an interval [0, t_0) such that either t_0 = ∞ or lim_{t→t_0}|x(t)| = ∞. By the convex duality between L_ψ(x,·) and H(x, ·/ψ(x)), the finiteness and continuity of H(x, p/ψ(x)), and Theorem 23.5 in Rockafellar [104], q_f(x) = (∂₂H)(x, ∇f(x)) implies that ψ(x)q·∇f(x) − L_ψ(x,q) attains its supremum in q ∈ R^d at q = q_f(x)/ψ(x). Therefore

Hf(x) = q_f(x)·∇f(x) − L(x, q_f(x)).

Taking λ(du×ds) = δ_{q_f(x(s))/ψ(x(s))}(du)×ds for s < t_0 and λ(du×ds) = δ_{q_0}(du)×ds for s ≥ t_0, (x,λ) ∈ J^Γ_{x_0} and satisfies (8.15). For x_0 = ∞, let x(t) ≡ ∞ and λ(du×ds) = δ_{q_0}(du)×ds. Then (x,λ) ∈ J^Γ_{x_0} and, since Hf(∞) = 0, (8.15) holds as well.

We have the following variation of the classical Freidlin–Wentzell theorem. By Lemmas 10.5, 10.21 and Remark 10.16, the result extends Theorem 10.2.6 of [35]; by Lemmas 10.10 and 10.21, it extends Theorem 3.1 of [26].

Theorem 10.22. Assume that the conditions of Theorem 10.17 hold and that H is continuous. Then the large deviation principle holds for {X_n} in D_{R^d}[0,∞) with good rate function

I(x) = I_0(x(0)) + ∫_0^∞ L(x(s), ẋ(s)) ds  if x is absolutely continuous,  I(x) = +∞ otherwise.

Proof. By Theorem 10.17, the large deviation principle holds for {X_n} in D_E[0,∞) with some good rate function I. By Lemma 10.21, the conditions of Corollary 8.29 hold, and the rate function satisfies

I(x) = I_0(x(0)) + inf{ ∫_{U×[0,∞)} L(x(s), ψ(x(s))u) λ(du×ds) : (x,λ) ∈ J },

where (x,λ) ∈ J if

f(x(t)) − f(x(0)) = ∫_{U×[0,t]} ψ(x(s))u·∇f(x(s)) λ(du×ds),  f ∈ D_d.
For each (x,λ) ∈ J, there exists a P(U)-valued process µ(t) such that λ(du×dt) = µ_t(du)×dt. If ∫_{U×[0,∞)} L(x(s), ψ(x(s))u) λ(du×ds) < ∞, then by (10.31), ∫ |u| λ(du×ds) < ∞, u(t) = ∫_U u µ_t(du) is integrable in t, and

x(t) = x(0) + ∫_0^t ψ(x(s)) u(s) ds.

Consequently, x is absolutely continuous with ẋ(t) = ψ(x(t))u(t). Since L(x,q) = sup_{p∈R^d}{ p·q − H(x,p) } is convex in q,

∫_0^∞ L(x(s), ψ(x(s))u(s)) ds ≤ ∫_0^∞ ∫_U L(x(s), ψ(x(s))u) µ_s(du) ds = ∫_{U×[0,∞)} L(x(s), ψ(x(s))u) λ(du×ds),

and it follows that

I(x) = I_0(x(0)) + ∫_0^∞ L(x(s), ẋ(s)) ds.
The following result is essentially Theorem 5.6.7 of Dembo and Zeitouni [29]. It provides an alternative representation of the rate function for diffusion processes.

Lemma 10.23 (Variational representation of H). Let σ and b be bounded and continuous and η = 0. Define U = R^d,

Af(x,u) = u·( σ^T(x)∇f(x) ) + b(x)·∇f(x),

and L(x,u) = |u|²/2 for (x,u) ∈ E × U. Then

Hf(x) = H̄f(x) ≡ sup_{u∈U} ( (Af − L)(x,u) ),

and Conditions 8.9, 8.10, and 8.11 are satisfied.

Proof. Verification of Condition 8.9 is straightforward. Condition 8.10 follows by taking u ≡ 0: for x_0 ∈ R^d, a solution of ẋ = b(x), x(0) = x_0, exists by the continuity of b, and for x_0 = ∞, take x(t) ≡ ∞. Finally, consider Condition 8.11. For x_0 ∈ R^d and f ∈ D_d, let u(x) = σ^T(x)∇f(x), and solve

ẋ(t) = σ(x(t))σ^T(x(t))∇f(x(t)) + b(x(t)),  x(0) = x_0.

Again, the solution exists by the continuity of σ and b. Since Hf(x) = Af(x, u(x)) − L(x, u(x)), taking λ(du×ds) = δ_{u(x(s))}(du)×ds, Condition 8.11 is verified. If x_0 = ∞, take λ(du×ds) = δ_0(du)×ds and x(t) ≡ ∞.

Theorem 10.24. Assume that the conditions of Theorem 10.17 and Lemma 10.23 hold. Then the large deviation principle holds for {X_n} with good rate function

I(x) = I_0(x(0)) + inf{ ½ ∫_0^∞ |u(s)|² ds : u ∈ L²[0,∞) satisfies x(t) = x(0) + ∫_0^t b(x(s)) ds + ∫_0^t σ^T(x(s)) u(s) ds }.
Proof. Condition 9.10 holds by Lemma 10.10. By Theorem 10.17, the large deviation principle in D_E[0,∞) holds for {X_n} with a good rate function. By Lemma 10.23 and Corollary 8.29,

I(x) = I_0(x(0)) + inf{ ∫_{R^d×[0,∞)} (|u|²/2) λ(du×ds) : (x,λ) ∈ J },

where (x,λ) ∈ J means that

f(x(t)) − f(x(0)) − ∫_{R^d×[0,t]} [ u·( σ^T(x(s))∇f(x(s)) ) + b(x(s))·∇f(x(s)) ] λ(du×ds) = 0.

Decompose λ(du×ds) = µ_s(du)×ds as in the previous example, and define u(t) = ∫_U u µ_t(du). Then

x(t) − x(0) = ∫_0^t σ^T(x(s)) u(s) ds + ∫_0^t b(x(s)) ds,

and the result follows as in Theorem 10.22.
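For an absolutely continuous path, the control in Theorem 10.24 can be recovered pointwise and the action integral evaluated numerically. The sketch below works in d = 1 with illustrative coefficients σ ≡ 1, b(x) = −x, and I_0 ≡ 0, solving ẋ = b(x) + σ(x)u for u along a given path:

```python
import numpy as np

def action(path, dt, sigma, b):
    """Approximate inf (1/2) int |u|^2 ds for xdot = b(x) + sigma(x) u,
    recovering u(s) = (xdot(s) - b(x(s))) / sigma(x(s)) by finite differences."""
    x = np.asarray(path)
    xdot = np.gradient(x, dt)          # second-order interior differences
    u = (xdot - b(x)) / sigma(x)
    return 0.5 * np.sum(u**2) * dt

t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
b = lambda x: -x
sigma = lambda x: np.ones_like(x)

zero_cost = action(np.exp(-t), dt, sigma, b)   # x = e^{-t} solves xdot = b(x)
move_cost = action(t.copy(), dt, sigma, b)     # x(t) = t forces u(s) = 1 + s
```

The ODE path ẋ = b(x) carries (numerically) zero cost, while the straight path x(t) = t costs ½∫₀¹(1+s)² ds = 7/6.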
10.4. Nearly deterministic Markov chains

Let ξ_1, ξ_2, … be independent and identically distributed U-valued random variables with distribution ν. Consider the Markov chain in R^d satisfying

(10.33)  X_n(t + ε_n) = X_n(t) + ε_n F( X_n(t), ξ_{[t/ε_n]+1} ).

Then

T_n f(x) = ∫_U f( x + ε_n F(x,u) ) ν(du)

and

H_n f(x) = (1/(nε_n)) log ∫_U e^{n( f(x + ε_n F(x,u)) − f(x) )} ν(du).

We consider the case nε_n → 1, although results can also be obtained under the assumption nε_n → 0. (See Section 11.1.)

10.4.1. Convergence of H_n. Let ψ(x) = 1 + |x| and assume that for each c > 0,

(10.34)  sup_x ∫_U e^{c|F(x,u)|/ψ(x)} ν(du) < ∞.

Then for f ∈ D_d, H_n f converges uniformly to

Hf(x) = log ∫_U e^{F(x,u)·∇f(x)} ν(du),

by an estimate similar to (10.15). We define

H(x,p) = log ∫_U e^{F(x,u)·p} ν(du).

10.4.2. Exponential tightness. Taking E = R^d ∪ {∞}, exponential tightness follows by Corollary 4.17.
10.4.3. The comparison principle.

Lemma 10.25. If (10.34) holds, then Condition 9.10.2 holds.

Proof. Let x_m, p_m ∈ R^d be such that |x_m| → ∞ and |x_m||p_m| → 0. Then

| E[ e^{F(x_m,ξ)·p_m} ] − 1 | ≤ E[ e^{ (1+|x_m|)|p_m| |F(x_m,ξ)|/ψ(x_m) } ] − 1 → 0

by the uniform integrability implied by (10.34), and hence

lim_{m→∞} H(x_m, p_m) = log E[ lim_{m→∞} e^{F(x_m,ξ)·p_m} ] = 0.

Lemma 10.26. Let h ∈ C_b(E) and α > 0. Suppose H(x,p) is continuous in (x,p), (10.34) holds for all c > 0, and (9.45) holds. Then the comparison principle holds for (I − αH)f = h.

Proof. Lemma 9.16 gives Condition 9.10.1, and Lemma 10.25 gives Condition 9.10.2, so the comparison principle follows by Lemma 9.15 (taking H_† = H_‡ = H).

We also have the comparison principle under a Lipschitz assumption.

Condition 10.27. For each r > 0, there exists δ_r > 0 such that for 0 < δ < δ_r,

(10.35)  κ_r(δ) ≡ sup{ E[ exp{ δ |F(x,ξ) − F(y,ξ)| / |x − y| } ] : x ≠ y, |x| + |y| ≤ r } < ∞.

Lemma 10.28. Let h ∈ C_b(E) and α > 0. Suppose H(x,p) is continuous in (x,p), (10.34) holds for all c > 0, and Condition 10.27 holds. Then the comparison principle holds for (I − αH)f = h.

Proof. Let λ > 1, and let ξ have distribution ν. By the Hölder inequality,

E[ e^{(1/λ)F(x,ξ)·p} ] = E[ e^{ (1/λ)(F(x,ξ)−F(y,ξ))·p + (1/λ)F(y,ξ)·p } ] ≤ E^{(λ−1)/λ}[ e^{ (F(x,ξ)−F(y,ξ))·p/(λ−1) } ] E^{1/λ}[ e^{F(y,ξ)·p} ].

That is,

λH(x, p/λ) − H(y,p) ≤ (λ−1) log E[ e^{ (F(x,ξ)−F(y,ξ))·p/(λ−1) } ]
  ≤ (λ−1) log E[ e^{ (1/(λ−1)) ( |F(x,ξ)−F(y,ξ)| / |x−y| ) |p||x−y| } ].

Therefore, by (10.35), Condition 9.10.1 is satisfied. Again, by Lemma 10.25, Condition 9.10.2 holds, so the comparison principle holds by Lemma 9.15.

10.4.4. The large deviation theorem.

Theorem 10.29. Let X_n be given by (10.33), and suppose that the large deviation principle holds for {X_n(0)} in R^d with a good rate function I_0. Assume that (10.34) holds with ψ(x) = 1 + |x|, that H(x,p) is continuous, and that either Condition 10.27 or (9.45) holds. Then

a) For each α > 0 and f ∈ C(E), there exists a unique R_α f ∈ C(E) such that (I − αH)R_α f = f in the viscosity sense, and

V(t)f = lim_n R^n_{t/n} f

defines a semigroup V(t) : C(E) → C(E).

b) The large deviation principle holds for {X_n} in D_E[0,∞) with good rate function

I(x) = sup_{ {t_i} ⊂ Δ^c_x } I_{t_1,…,t_m}( x(t_1),…,x(t_m) ),

where

I_{t_1,…,t_m}(x_1,…,x_m) = inf_{x_0} { I_0(x_0) + sup_{f_1,…,f_m∈D_d} { f_1(x_1) + ⋯ + f_m(x_m) − V(t_1)( f_1 + V(t_2 − t_1)( f_2 + ⋯ V(t_m − t_{m−1}) f_m ) ⋯ )(x_0) } }.

c) If x ∈ D_E[0,∞) − D_{R^d}[0,∞), then I(x) = ∞.

Proof. The comparison principle for (I − αH)f = h is given by Lemma 10.26 or Lemma 10.28, and parts (a) and (b) follow by Theorem 6.14. The exponential compact containment condition can be verified by an argument similar to Example 4.23, and part (c) follows by Theorem 4.11(a).

10.4.5. Variational representation of the rate function. As in Theorem 10.22, the rate function can be written

I(x) = I_0(x(0)) + ∫_0^∞ L(x(s), ẋ(s)) ds  if x is absolutely continuous,  I(x) = ∞ otherwise,

where

L(x,q) = sup_{p∈R^d} { p·q − H(x,p) },  x, q ∈ R^d.
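A minimal simulation of the recursion (10.33) illustrates the "nearly deterministic" behavior. The sketch assumes U = R, F(x,u) = −x + u, and ν standard normal (illustrative choices), so the chain tracks the ODE ẋ = ∫ F(x,u) ν(du) = −x, and H(x,p) = log ∫ e^{F(x,u)p} ν(du) = −xp + p²/2 is explicit:

```python
import numpy as np

def simulate(n, x0=1.0, T=1.0, rng=None):
    """X_n(t + eps) = X_n(t) + eps * F(X_n(t), xi), eps = 1/n, F(x, u) = -x + u."""
    rng = np.random.default_rng(rng)
    eps = 1.0 / n
    x = x0
    for _ in range(int(T * n)):
        x = x + eps * (-x + rng.standard_normal())
    return x

# As n -> infinity, X_n(1) concentrates at the ODE value x0 * e^{-1};
# fluctuations are of order n^{-1/2}, and larger deviations decay like e^{-n I}.
x_large = simulate(100_000, rng=3)
```

The run with n = 100000 lands within a few thousandths of e^{-1} ≈ 0.3679, consistent with the law-of-large-numbers limit.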
10.5. Diffusion processes with reflecting boundaries

We next consider reflecting diffusions in a smooth domain. Specifically, let ϕ ∈ C²(R^d), and suppose that E ≡ {x : ϕ(x) ≥ 0} is compact, E is the closure of E° = {x : ϕ(x) > 0}, and ∇ϕ(x) ≠ 0 for x ∈ ∂E = {x : ϕ(x) = 0}. Let X_n satisfy

(10.36)  X_n(t) = X_n(0) + (1/√n) ∫_0^t σ(X_n(s)) dW(s) + ∫_0^t b(X_n(s)) ds + ∫_0^t ν(X_n(s)) dλ_n(s),

where X_n(t) ∈ E for all t a.s. and λ_n is nondecreasing and increases only when X_n(t) ∈ ∂E. We assume that σ, b, and ν are bounded and continuous and that

(10.37)  κ_0 = inf_{x∈∂E} ν(x)·∇ϕ(x) > 0.

If we define Ã_n by

Ã_n f(x) = b(x)·∇f(x) + (1/(2n)) Σ_{i,j} a_{ij}(x) ∂_i∂_j f(x)
and Bf(x) = ν(x)·∇f(x), then Itô's formula gives

(10.38)  f(X_n(t)) − f(X_n(0)) − ∫_0^t Ã_n f(X_n(s)) ds − ∫_0^t Bf(X_n(s)) dλ_n(s) = (1/√n) ∫_0^t ∇f(X_n(s))·σ(X_n(s)) dW(s),

so the left side of (10.38) is a martingale. If we define A_n to be the restriction of Ã_n to D_{E,ν} = {f ∈ C²(E) : Bf(x) = 0, x ∈ ∂E}, then any solution of the stochastic differential equation will be a solution of the martingale problem for A_n. Alternatively, we can formulate a martingale problem in terms of Ã_n and B. In either case, under the continuity assumptions on σ, b, and ν, and (10.37), we can apply Krylov's selection theorem (Theorem 4.5.19 of [36]) to obtain a Markov process with full generator Â_n extending Ã_n restricted to D_{E,ν}, and the process can be constructed so that it is a weak solution of (10.36). In general, A_n (and the corresponding H_n) with domain D_{E,ν} may not be large enough either to characterize the process (hence the application of the selection theorem) or to obtain a limiting H satisfying the comparison principle; however, if {T_n(t)} denotes the semigroup given by the selection theorem, any function of the form

(10.39)  h_n = (1/ε_n) ∫_0^{ε_n} T_n(t) h dt

will be in the domain of the full generator Â_n, and

(10.40)  Â_n h_n = ( T_n(ε_n)h − h ) / ε_n.
10.5.1. Convergence of H_n. With (10.39) and (10.40) in mind, for f ∈ C²(E), define f_n by

e^{nf_n} = (1/ε_n) ∫_0^{ε_n} T_n(t) e^{nf} dt.

Then, letting E_x denote the expectation under the assumption that X_n(0) = x, and setting H̃_n f = (1/n) e^{−nf} Ã_n e^{nf} for f ∈ C²(E), we have

A_n e^{nf_n}(x) = ε_n^{-1} ( E_x[ e^{nf(X_n(ε_n))} ] − e^{nf(x)} )
  = ε_n^{-1} n E_x[ ∫_0^{ε_n} e^{nf(X_n(s))} H̃_n f(X_n(s)) ds + ∫_0^{ε_n} e^{nf(X_n(s))} Bf(X_n(s)) dλ_n(s) ].

It follows that H_n f_n(x) is given by

E_x[ ∫_0^{ε_n} e^{n(f(X_n(s))−f(x))} H̃_n f(X_n(s)) ds + ∫_0^{ε_n} e^{n(f(X_n(s))−f(x))} Bf(X_n(s)) dλ_n(s) ] / E_x[ ∫_0^{ε_n} e^{n(f(X_n(s))−f(x))} ds ]
  = E_x[ ∫_0^1 e^{n(f(X_n(ε_n s))−f(x))} H̃_n f(X_n(ε_n s)) ds + ∫_0^1 e^{n(f(X_n(ε_n s))−f(x))} Bf(X_n(ε_n s)) dλ̂_n(s) ] / E_x[ ∫_0^1 e^{n(f(X_n(ε_n s))−f(x))} ds ],
where λ̂_n(t) = ε_n^{-1} λ_n(ε_n t). Taking ε_n = n^{-1} and defining U_n(t) = n( X_n(n^{-1}t) − x ), we have

(10.41)  U_n(t) = ∫_0^t σ( x + n^{-1}U_n(s) ) dW_n(s) + ∫_0^t b( x + n^{-1}U_n(s) ) ds + ∫_0^t ν( x + n^{-1}U_n(s) ) dλ̂_n(s),

where W_n(t) = √n W(n^{-1}t) is a standard Brownian motion. To estimate λ̂_n, note that

Z_n(t) ≡ nϕ( X_n(n^{-1}t) ) = nϕ(X_n(0)) + ∫_0^t Ã_nϕ( X_n(n^{-1}s) ) ds + ∫_0^t ∇ϕ( X_n(n^{-1}s) )·σ( X_n(n^{-1}s) ) dW_n(s) + ∫_0^t ν( X_n(n^{-1}s) )·∇ϕ( X_n(n^{-1}s) ) dλ̂_n(s),

where Z_n ≥ 0 and, by (10.37), the last term is nondecreasing and increases only when Z_n is zero. Let

R_n(t) = ∫_0^t Ã_nϕ( X_n(n^{-1}s) ) ds + ∫_0^t ∇ϕ( X_n(n^{-1}s) )·σ( X_n(n^{-1}s) ) dW_n(s).

From properties of the Skorohod reflection map (see, for example, Theorem 6.1 of [13]),

κ_0 λ̂_n(t) ≤ ∫_0^t ν( X_n(n^{-1}s) )·∇ϕ( X_n(n^{-1}s) ) dλ̂_n(s) = 0 ∨ sup_{r≤t} { −nϕ(X_n(0)) − R_n(r) } ≤ sup_{r≤t} ( −R_n(r) ).

Noting that there exist constants C and δ, independent of x, such that

P{ sup_{r≤1} ( −R_n(r) ) ≥ a } ≤ C e^{−δa²},

it is easy to check that lim_{n→∞} ‖f_n − f‖ = 0 and sup_n ‖H_n f_n‖ < ∞. If x_n → x and nϕ(x_n) → ∞, then H_n f_n(x_n) → H̃(x, ∇f(x)). If x_n → x and nϕ(x_n) → z, then Z_n ⇒ Z satisfying

Z(t) = z + b(x)·∇ϕ(x) t + ∇ϕ(x)·σ(x) W(t) + ν(x)·∇ϕ(x) λ̂_{x,z}(t),

where

λ̂_{x,z}(t) = ( 0 ∨ sup_{r≤t} { −z − R(r) } ) / ( ν(x)·∇ϕ(x) ),  R(t) = b(x)·∇ϕ(x) t + ∇ϕ(x)·σ(x) W(t),

and U_n ⇒ U_{x,z} satisfying

U_{x,z}(t) = σ(x)W(t) + b(x)t + ν(x)λ̂_{x,z}(t).

Consequently,

lim_{n→∞} H_n f_n(x_n) = H̃(x, ∇f(x)) + Bf(x) · E[ ∫_0^1 e^{∇f(x)·U_{x,z}(s)} dλ̂_{x,z}(s) ] / E[ ∫_0^1 e^{∇f(x)·U_{x,z}(s)} ds ].
Defining

H_b(x,p) = sup_{z≥0} E[ ∫_0^1 e^{p·U_{x,z}(s)} dλ̂_{x,z}(s) ] / E[ ∫_0^1 e^{p·U_{x,z}(s)} ds ],

H̃(x,p) = ½|σ^T(x)p|² + b(x)·p,

H_†(x,p) = H̃(x,p) + H_b(x,p)( 0 ∨ ν(x)·p ) for x ∈ ∂E,  H_†(x,p) = H̃(x,p) for x ∈ E°,

and

H_‡(x,p) = H̃(x,p) + H_b(x,p)( 0 ∧ ν(x)·p ) for x ∈ ∂E,  H_‡(x,p) = H̃(x,p) for x ∈ E°,

and setting H_† f(x) = H_†(x, ∇f(x)) and H_‡ f(x) = H_‡(x, ∇f(x)), for any sequence x_n → x,

lim sup_{n→∞} H_n f_n(x_n) ≤ H_† f(x)  and  lim inf_{n→∞} H_n f_n(x_n) ≥ H_‡ f(x).
10.5.2. Exponential tightness. Since we are assuming E is compact and sup_n ‖H_n f_n‖ < ∞ for the sequences {f_n} defined above, exponential tightness follows by Corollary 4.17.

10.5.3. The comparison principle.

Lemma 10.30. Suppose h ∈ C(E) and α > 0. If σ and b are continuous and

(10.42)  inf_{ x∈E, |ξ|=1 } |σ(x)ξ| > 0,

or if σ and b are Lipschitz, then the comparison principle holds for subsolutions of (I − αH_†)f = h and supersolutions of (I − αH_‡)f = h.

Proof. If σ and b are continuous and (10.42) holds, then the result follows by Lemma 9.24. If σ and b are Lipschitz, then the result follows by Lemma 9.25.

10.5.4. The large deviation theorem. Having verified the convergence of H_n and the corresponding comparison principle, we have the following large deviation theorem.

Theorem 10.31. Let E ⊂ R^d be defined as above, let ν be C¹ on E and satisfy (10.37), and let σ and b satisfy the conditions of Lemma 10.30. If {X_n(0)} satisfies a large deviation principle with good rate function I_0, then {X_n} satisfies a large deviation principle with good rate function defined as in Theorem 10.17.
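The reflected dynamics (10.36) can be simulated with an Euler scheme that applies the Skorohod correction at each step. The sketch below assumes the simplest geometry, E = {x ≥ 0} ⊂ R with ϕ(x) = x and ν ≡ 1, and constant illustrative coefficients:

```python
import numpy as np

def reflected_path(n, T=1.0, x0=0.5, sigma=1.0, b=0.0, rng=None):
    """Euler scheme for dX = b dt + (sigma/sqrt(n)) dW + d(lambda_n) on [0, inf),
    where lambda_n is the minimal nondecreasing process keeping X >= 0."""
    rng = np.random.default_rng(rng)
    m = 1000
    dt = T / m
    x = np.empty(m + 1)
    x[0] = x0
    lam = 0.0
    for k in range(m):
        step = b * dt + (sigma / np.sqrt(n)) * np.sqrt(dt) * rng.standard_normal()
        y = x[k] + step
        push = max(0.0, -y)       # Skorohod correction: push back to the boundary
        lam += push               # lambda_n increases only when the path hits 0
        x[k + 1] = y + push
    return x, lam

path, boundary_push = reflected_path(n=100, rng=4)
```

By construction the path never leaves E, and the accumulated `boundary_push` is the discrete analogue of λ_n(T).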
10.5.5. Variational representation of the rate function. As in the unconstrained case, there are two possible choices for the control problem. Assuming that (10.42) holds and that a and b are continuous, a(x) = σ(x)σ(x)^T is invertible for all x, and setting u = (q,r) ∈ R^d × R, we can take D(A) = C¹(E),

Af(x,u) = ( q + rν(x) )·∇f(x),

L(x,u) = ½ (q − b(x))^T a(x)^{-1} (q − b(x)) + ½ ( r + ( ∇ϕ(x)·q / ∇ϕ(x)·ν(x) ) ∧ 0 )²,

and

Γ = { (x,q,0) : x ∈ E, q ∈ R^d } ∪ { (x,q,r) : x ∈ ∂E, q ∈ R^d, r ≥ 0 }.

Solutions of the control problem satisfy

x(t) = x(0) + ∫_0^t ( q(s) + r(s)ν(x(s)) ) ds,

and since r(t) = 0 unless x(t) ∈ ∂E, for almost every t with r(t) > 0,

∇ϕ(x(t))·ẋ(t) = ∇ϕ(x(t))·( q(t) + r(t)ν(x(t)) ) = 0.

It follows that

(10.43)  r(t) = − ∇ϕ(x(t))·q(t) / ∇ϕ(x(t))·ν(x(t)) = − ( ∇ϕ(x(t))·q(t) / ∇ϕ(x(t))·ν(x(t)) ∧ 0 ),

for almost every t in {t : ϕ(x(t)) = 0}. Consequently, the boundary control adds nothing to the cost of the solution.

Lemma 10.32. Let H̄ be given by (8.7). Then for x ∈ E°, H̄f(x) = H̃(x, ∇f(x)). If x ∈ ∂E and ν(x)·∇f(x) ≤ 0, then

(10.44)  H̃(x, ∇f(x)) − ( ∇ϕ(x)·( a(x)∇f(x) + b(x) ) / ∇ϕ(x)·ν(x) ∧ 0 ) ν(x)·∇f(x) ≤ H̄f(x) ≤ H̃(x, ∇f(x)).

If ν(x)·∇f(x) > 0, then

(10.45)  H̄f(x) = sup_q { q·∇f(x) + ( ½ ν(x)·∇f(x) − ∇ϕ(x)·q / ∇ϕ(x)·ν(x) ∧ 0 ) ν(x)·∇f(x) − ½ (q − b(x))^T a(x)^{-1} (q − b(x)) },

and hence

(10.46)  H̃(x, ∇f(x)) ≤ H̄f(x) ≤ H̃(x, ∇f(x)) + G_b(x, ∇f(x)) ν(x)·∇f(x),

where

G_b(x,p) = ½ ν(x)·p − ( ∇ϕ(x)·q̂(x) / ∇ϕ(x)·ν(x) ) ∧ 0,

for q̂(x) = a(x)( p − (ν(x)·p)( ∇ϕ(x)·ν(x) )^{-1} ∇ϕ(x) ) + b(x).

Proof. The first inequality in (10.44) is obtained by taking

q = a(x)∇f(x) + b(x),  r = − ( ∇ϕ(x)·q / ∇ϕ(x)·ν(x) ) ∧ 0.

The second is immediate. The identity in (10.45) follows by maximizing over r first.
The first inequality in (10.46) follows by taking q = a(x)∇f(x) + b(x) in (10.45). To see that the second inequality holds, let G_0(x,q) denote the expression inside the supremum on the right of (10.45). If ∇ϕ(x)·q ≥ 0, then

G_0(x,q) ≤ (1/2)(ν(x)·∇f(x))^2 + H̃(x,∇f(x)).

If ∇ϕ(x)·q < 0, then G_0(x,q) is less than or equal to

sup_q ( q·∇f(x) + ((1/2)ν(x)·∇f(x) − (∇ϕ(x)·q)/(∇ϕ(x)·ν(x)))ν(x)·∇f(x) − (1/2)(q − b(x))^T a(x)^{−1}(q − b(x)) )
≤ H̃(x,∇f(x)) + ((1/2)ν(x)·∇f(x) − (∇ϕ(x)·q̂(x))/(∇ϕ(x)·ν(x)))ν(x)·∇f(x),

since the supremum is achieved at q̂(x). Combining the two inequalities, we obtain (10.46).

Let Ĥ_† and Ĥ_‡ be the same as H_† and H_‡ but with H_b(x,p) replaced by

H_b(x,p) ∨ G_b(x,p) ∨ ((∇ϕ(x)·(a(x)p + b(x)))/(∇ϕ(x)·ν(x)) ∧ 0).

Then Ĥ_‡f(x) ≤ Hf(x) ≤ Ĥ_†f(x) and Ĥ_‡f(x) ≤ H_‡f(x) ≤ H_†f(x) ≤ Ĥ_†f(x), and it follows that every subsolution of (I − αH)f = h and every subsolution of (I − αH_†)f = h is a subsolution of (I − αĤ_†)f = h, and every supersolution of (I − αH)f = h and every supersolution of (I − αH_‡)f = h is a supersolution of (I − αĤ_‡)f = h. Lemma 10.30 still holds for Ĥ_† and Ĥ_‡, and hence H satisfies the comparison principle under the same conditions on the coefficients. Condition (a) of Corollary 8.28 is also satisfied, as is Condition 8.9. Since Ĥ_‡1 = 0, Condition 8.10 will follow immediately once we verify Condition 8.11. Fixing f ∈ D(A), consider the equation

x(t) = x(0) + ∫_0^t (a(x(s))∇f(x(s)) + b(x(s)) + r(s)ν(x(s))) ds,

that is, q(t) = a(x(t))∇f(x(t)) + b(x(t)) and, by (10.43),

(10.47)  r(t) = −(∇ϕ(x(t))·(a(x(t))∇f(x(t)) + b(x(t))))/(∇ϕ(x(t))·ν(x(t))) ∧ 0.

Existence follows by approximation and compactness from the continuity of the functions involved. Then, with reference to (8.15),

∫_{U×[t_1,t_2]} (Af(x(s),u) − L(x(s),u)) λ(ds×du)
= ∫_{t_1}^{t_2} ( (1/2)|σ(x(s))^T∇f(x(s))|^2 + b(x(s))·∇f(x(s)) + r(s)ν(x(s))·∇f(x(s)) ) ds
≥ ∫_{t_1}^{t_2} Ĥ_‡f(x(s)) ds,
10. SMALL PERTURBATION PROBLEMS
where the inequality follows by (10.47) and the definition of Ĥ_‡. Consequently, by Corollary 8.28,

I(x) = I_0(x(0)) + inf_{λ:(x,λ)∈J^Γ_{x(0)}} ∫_{U×[0,∞)} L(x(s),u) λ(du×ds).
Since any solution of the control problem will satisfy

x(t) = x(0) + ∫_0^t (q(s) + r(s)ν(x(s))) ds,

we must have ẋ(t) = q(t) + r(t)ν(x(t)), with r(t) ≠ 0 only when ϕ(x(t)) = 0, that is, x(t) ∈ ∂E. If x(t) ∈ ∂E, we can select q ∈ R^d and r ≥ 0 in any way that satisfies ẋ(t) = q(t) + r(t)ν(x(t)). Note that if x is absolutely continuous with values in E, then ∇ϕ(x(t))·ẋ(t) = 0 for almost every t in {t : ϕ(x(t)) = 0}, and hence, for almost every such t,

(1/2)( r(t) + (∇ϕ(x(t))·(ẋ(t) − r(t)ν(x(t))))/(∇ϕ(x(t))·ν(x(t))) ∧ 0 )^2 = 0.

Defining

L̂(x,z) = L(x,z) for x ∈ E°, and L̂(x,z) = inf_{r≥0} L(x, z − rν(x)) for x ∈ ∂E,

we have the following theorem.

Theorem 10.33. Assume that a and b are continuous and that (10.42) holds. Then the large deviation principle holds for {X_n} in D_E[0,∞) with good rate function

I(x) = I_0(x(0)) + ∫_0^∞ L̂(x(s), ẋ(s)) ds if x is absolutely continuous with values in E, and I(x) = +∞ otherwise.
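To get a concrete feel for the rate function in Theorem 10.33, the following sketch discretizes the action integral in a toy setting (all model choices are illustrative, not from the text): the half-space E = {x ∈ R² : x₂ ≥ 0} with a = I, b = 0, ϕ(x) = x₂ and ν = e₂. In the interior L̂(x,z) = |z|²/2, and on the boundary the infimum over r ≥ 0 removes the component of z pushing into the wall, reflecting the fact that the boundary control adds nothing to the cost.

```python
import numpy as np

# Discretized action for Theorem 10.33 in a toy setting (illustrative choices):
# E = {x in R^2 : x_2 >= 0}, a = I, b = 0, phi(x) = x_2, nu = e_2.
# Interior: Lhat(x, z) = |z|^2 / 2.
# Boundary: Lhat(x, z) = inf_{r >= 0} |z - r e_2|^2 / 2 (reflection is free,
# cf. the discussion around (10.43)).

NU = np.array([0.0, 1.0])

def Lhat(x, z):
    if x[1] > 1e-9:                     # interior point
        return 0.5 * z @ z
    r = max(z @ NU, 0.0)                # optimal r >= 0 on the boundary
    v = z - r * NU
    return 0.5 * v @ v

def action(path, dt):
    # I(x) - I0(x(0)) ~ sum_i Lhat(x_i, (x_{i+1} - x_i)/dt) * dt
    out = 0.0
    for i in range(len(path) - 1):
        z = (path[i + 1] - path[i]) / dt
        out += Lhat(path[i], z) * dt
    return out

dt = 0.01
t = np.arange(0, 1 + dt, dt)
# A path sliding along the wall: only tangential motion is charged.
along_wall = np.stack([t, np.zeros_like(t)], axis=1)
off_wall = np.stack([t, t], axis=1)     # leaves the boundary immediately

print(action(along_wall, dt))           # ~ 0.5 (unit tangential speed, time 1)
print(action(off_wall, dt))             # ~ 1.0 (speed sqrt(2) in the interior)
```

The along-the-wall path costs the same as a free path with the same tangential speed, which is the content of "the boundary control adds nothing to the cost."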
If σ and b are Lipschitz, we can take

Af(x,u) = q·(σ^T(x)∇f(x)) + (b(x) + rν(x))·∇f(x)

and

L(x,u) = (1/2)|q|^2 + (1/2)( r + (∇ϕ(x)·(σ(x)q + b(x)))/(∇ϕ(x)·ν(x)) ∧ 0 )^2.

Again, by essentially the same argument as before, we have

Ĥ_‡f(x) ≤ Hf(x) ≤ Ĥ_†f(x)

with Ĥ_‡ and Ĥ_† defined as above. Every solution of the control problem is of the form

x(t) = x(0) + ∫_0^t σ(x(s))q(s) ds + ∫_0^t (b(x(s)) + r(s)ν(x(s))) ds.

Since r(t) = 0 unless x(t) ∈ ∂E, for almost every t with r(t) > 0,

∇ϕ(x(t))·ẋ(t) = ∇ϕ(x(t))·(σ(x(t))q(t) + b(x(t)) + r(t)ν(x(t))) = 0,

and hence

(10.48)  r(t) = −(∇ϕ(x(t))·(σ(x(t))q(t) + b(x(t))))/(∇ϕ(x(t))·ν(x(t))) ∧ 0.
Consequently,

(10.49)  x(t) = x(0) + ∫_0^t (b(x(s)) + 1_{∂E}(x(s)) r(s)ν(x(s))) ds + ∫_0^t σ(x(s))q(s) ds,

with r given by (10.48).

Theorem 10.34. Assume that σ and b are Lipschitz. Then the large deviation principle holds for {X_n} with good rate function

I(x) = I_0(x(0)) + inf{ (1/2)∫_0^∞ |q(s)|^2 ds : q ∈ L^2[0,∞) satisfies (10.48) and (10.49) }.
CHAPTER 11

Random evolutions

In this chapter, we consider stochastic models with multiple time scales. The models include the random evolutions given in Examples 1.8 and 1.9, a model with small diffusion and periodic coefficients (Example 1.10), and a system with rapidly varying coefficients and small diffusion. The examples considered here are not exhaustive of the multiscale schemes having interesting large deviation behavior, and the results we give may not be the most general possible. Our main interest in this chapter is to give a relatively straightforward presentation for a variety of models exhibiting averaging behavior.

Let (E_0, r_0) be a complete, separable metric space, and let Y_n be a sequence of E_0-valued processes. Y_n models a randomly varying environment in which a second process evolves. We take the second process X_n to be an R^d-valued process satisfying

(11.1)  X_n(t) = x_n + ∫_0^t F_n(X_n(s), Y_n(s)) ds,

where F_n : R^d × E_0 → R^d and x_n ∈ R^d. Different scalings of Y_n and F_n give different large deviation behavior for X_n. We consider two cases. In the first, F_n(x,y) = F(x,y) and Y_n(t) = Y(nt), where Y is an ergodic, E_0-valued Markov process with stationary distribution π. In this case, if x_n → x_0 and F satisfies appropriate regularity and growth conditions, X_n converges to a solution of

X(t) = x_0 + ∫_0^t F̄(X(s)) ds,

where F̄(x) = ∫_{E_0} F(x,y) π(dy). Since {X_n} essentially satisfies a law of large numbers, we will refer to this case as the law of large numbers scaling. In the second case, F_n(x,y) = β_n F(x,y), ∫_{E_0} F(x,y)π(dy) = 0, and Y_n(t) = Y(nβ_n^2 t), for β_n → ∞. We will see that the large deviation behavior of {X_n} is analogous to that of a small diffusion problem and that, as in the examples of Section 10.1.6, the central limit theorem dominates the large deviation behavior. We will refer to this scaling as the central limit scaling.

We also consider discrete time analogs of these models. Let Y_n, n = 1, 2, ..., be E_0-valued Markov chains with transition operator P_n ≡ P and time step ε_n > 0, that is, E[f(Y_n(t)) | Y_n(0)] = P^{[t/ε_n]} f(Y_n(0)), and let X_n satisfy

(11.2)  X_n(t + ε_n) = X_n(t) + ε_n F_n(X_n(t), Y_n(t)),   X_n(0) = x_n.
11. RANDOM EVOLUTIONS
It follows that

X_n(t) = x_n + ∫_0^{[t/ε_n]ε_n} F_n(X_n(s), Y_n(s)) ds.

As in continuous time, we have two scalings. In the law of large numbers scaling, F_n(x,y) = F(x,y) and nε_n → 1. In the central limit scaling, F_n(x,y) = (nε_n)^{−1/2} F(x,y), where ∫_{E_0} F(x,y)π(dy) = 0, and nε_n → 0.

Weak convergence results for models of this type date back to Khas'minskii [66] (see Pinsky [94] for a recent review). Large deviation results for the law of large numbers scaling have been studied by Freidlin and Wentzell [52], Chapter 7. In the case of the law of large numbers scaling, as well as in the example of systems of diffusions with averaging, the main condition ((11.16), (11.56) and (11.104)) is formulated through an abstract inequality. This inequality is closely related to the principal eigenvalue problem for the driving Markov process, as discussed in Donsker and Varadhan [32] and Kurtz [74], among others. In Appendix B, we discuss the verification of this inequality in detail. In summary, this condition is more general than Condition F of Freidlin–Wentzell [52] (Theorem B.12) and Conditions U and Ũ of Deuschel and Stroock [30] (Lemma B.9); the condition can be verified through the methods of Kontoyiannis and Meyn [67], [68], using geometric ergodicity of the Markov process (see Remarks 11.7 and 11.31). When the driving process is a nondegenerate multidimensional diffusion, the minimax theorems of Pinsky [95], which are a consequence of the Harnack inequality, can be applied. In the case of central limit scaling random evolutions, the conditions in the main theorems are compatible with those appearing in the weak convergence context. The methods developed for random evolutions apply to diffusions with periodic coefficients, and in Section 11.5 we consider an example of Baldi [6]. The methods also apply to models with small diffusion and averaging. Theorem 11.65 extends results of Freidlin and Wentzell [52] and Veretennikov [125].
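The law of large numbers scaling is easy to see in simulation. The following minimal sketch iterates the discrete-time recursion (11.2) with F_n = F and ε_n = 1/n; the chain, the drift F, and the starting point are all illustrative choices, not from the text. For large n the random evolution tracks the averaged ODE dX/dt = F̄(X).

```python
import numpy as np

# Discrete-time random evolution (11.2), law-of-large-numbers scaling:
#   X_n(t + eps_n) = X_n(t) + eps_n * F(X_n(t), Y_n(t)),  eps_n = 1/n,
# driven by a two-state Markov chain with transition matrix P.
# P, F and x0 are illustrative, not from the text.

rng = np.random.default_rng(0)

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])            # transition matrix on E0 = {0, 1}
# stationary distribution: pi = (3/7, 4/7), so the averaged drift is
# Fbar(x) = (3/7)(-x + 1.0) + (4/7)(-x - 0.5) = -x + 1/7.

def F(x, y):
    return (-x + 1.0) if y == 0 else (-x - 0.5)

def simulate(n, T, x0=0.0):
    eps = 1.0 / n
    steps = int(T / eps)
    x, y = x0, 0
    path = [x]
    for _ in range(steps):
        x = x + eps * F(x, y)
        y = rng.choice(2, p=P[y])     # one chain step per time step eps
        path.append(x)
    return np.array(path)

path = simulate(n=2000, T=10.0)
print(path[-1])                       # near the averaged equilibrium 1/7
```

The averaged ODE has the stable equilibrium 1/7 ≈ 0.143, and the simulated path ends in its neighborhood, with fluctuations of the order predicted by the central limit scaling.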
Throughout this section, K(E_0) ⊂ C_b(E_0) is the set of nonnegative, bounded continuous functions on E_0, K_0(E_0) ⊂ K(E_0) is the set of strictly positive, bounded continuous functions, and K_1(E_0) ⊂ K_0(E_0) is the set of bounded continuous functions satisfying inf_{y∈E_0} f(y) > 0. For a generic operator A ⊂ M(E_0) × M(E_0), we denote D_+(A) = D(A) ∩ K_0(E_0) and D_{++}(A) = D(A) ∩ K_1(E_0). If y ∈ E_0 and K ⊂ E_0, then r_0(y,K) = inf_{z∈K} r_0(y,z).

11.1. Discrete time, law of large numbers scaling

Let ε_n > 0 and let {Y(k) : k = 0, 1, 2, ...} be an E_0-valued Markov chain with transition operator P on B(E_0),

Pf(y) = ∫_{E_0} f(z) P(y,dz),

where P(y,dz) is a transition probability on E_0. We consider the discrete time random evolution defined by (11.2) with F_n(x,y) = F(x,y) and

Y_n(t) = Y([t/ε_n]),   n = 1, 2, ....

We assume the following:
11.1. DISCRETE TIME, LAW OF LARGE NUMBERS SCALING
Condition 11.1.
(1) lim_{n→∞} nε_n = 1.
(2) P is Feller in the sense that P : C_b(E_0) → C_b(E_0).
(3) F satisfies

(11.3)  sup_{y∈E_0} |F(0,y)| < ∞

and

(11.4)  |F(x_1,y) − F(x_2,y)| ≤ K|x_1 − x_2|,   x_i ∈ R^d, y ∈ E_0,

for some K > 0.
(4) There is an index set Q and a family of subsets of E_0, {K̃_n^q ⊂ E_0 : q ∈ Q}, such that for q_1, q_2 ∈ Q, there exists q_3 ∈ Q with K̃_n^{q_1} ∪ K̃_n^{q_2} ⊂ K̃_n^{q_3}, and such that for each y ∈ E_0, there exists q ∈ Q such that lim_{n→∞} r_0(y, K̃_n^q) = 0. Moreover, for each q ∈ Q, T > 0, and a > 0, there exists q̂(q,a,T) ∈ Q satisfying

(11.5)  lim sup_{n→∞} sup_{y∈K̃_n^q} (1/n) log P(Y(t) ∉ K̃_n^{q̂(q,a,T)}, some t ≤ ε_n^{−1}T | Y(0) = y) ≤ −a.

(5) There exist an upper semicontinuous function ψ on E_0, {ϕ_n} ⊂ K_1(E_0), and q_0 ∈ Q such that ψ is bounded above, {y ∈ E_0 : ψ(y) ≥ c} is compact for each c ∈ R, 0 < inf_{y∈K̃_n^{q_0}} ϕ_n(y) < 2 inf_{y∈E_0} ϕ_n(y), inf_{n, y∈E_0} ϕ_n(y) > 0, lim_{n→∞} (1/n) log ‖ϕ_n‖ = 0, and for each q ∈ Q,

(11.6)  lim sup_{n→∞} sup_{y∈K̃_n^q} ( log(Pϕ_n(y)/ϕ_n(y)) − ψ(y) ) ≤ 0.
If E_0 is compact, we can take Q to be an arbitrary set, K̃_n^q = E_0, ϕ_n = 1, and ψ = 0. Then Conditions 11.1.4 and 11.1.5 are always satisfied. See Example 11.24 for the verification of these conditions for a noncompact E_0. Condition 11.1.5 is essentially a Lyapunov condition for the chain. One consequence of the Feller property and the Lyapunov condition is that the chain has a stationary distribution.

Lemma 11.2. Assume that Condition 11.1 holds. Then there exists a stationary distribution π for P satisfying ∫_{E_0} ψ dπ ≥ 0.

Proof. Let y_n ∈ K̃_n^{q_0} satisfy ϕ_n(y_n) ≤ 2 inf_{y∈E_0} ϕ_n(y), and let Y_n be a Markov chain corresponding to P with time step 1 and Y_n(0) = y_n. By Condition 11.1.4, there is a q̃ ∈ Q such that P{Y_n(k) ∈ K̃_n^{q̃}, 0 ≤ k < n} → 1. Define

ν_n(A) ≡ E[(1/n) Σ_{k=0}^{n−1} 1_{{Y_n(k)∈A}}]

and

ν̃_n(A) ≡ E[(1/n) Σ_{k=0}^{n−1} 1_{{Y_n(k)∈A}} | Y_n(k) ∈ K̃_n^{q̃}, 0 ≤ k < n].
Then 1
P ϕn ϕn (Yn (n)) − Pn−1 k=0 log ϕn (Yn (k)) ] e n→∞ ϕn (yn ) Pn−1 1 ≥ lim sup E[e− k=0 ψ(Yn (k)) 1{Yn (k)∈Knqe ,0≤k 0, define ( G2,f,g,κ (x, y) =
F(x,y)·∇f(x) + (1+κ) log(Pg(y)/g(y)) − κψ(y) for y ∈ E_0, and ∞ for y ∈ Ê_0 − E_0.
By Lemma 11.3,

G_{1,f,g,κ}(x,y) = lim_{δ→0+} sup_{z∈B_δ(y)∩E_0} ( F(x,z)·∇f(x) + (1−κ) log(Pg(z)/g(z)) + κψ(z) )

and

G_{2,f,g,κ}(x,y) = lim_{δ→0+} inf_{z∈B_δ(y)∩E_0} ( F(x,z)·∇f(x) + (1+κ) log(Pg(z)/g(z)) − κψ(z) ).
For κ = 0, we define G_{1,f,g,0} = G_{f,g}, extended to be upper semicontinuous on E′, and G_{2,f,g,0} = G_{f,g}, extended to be lower semicontinuous on E′. We define H_† ⊂ C_b(E) × M^u(E′,R) and H_‡ ⊂ C_b(E) × M^l(E′,R) by

(11.8)  H_† = {(f, G_{1,f,g,κ}) : f ∈ D_d, g ∈ K_1(E_0), 0 ≤ κ < 1},
(11.9)  H_‡ = {(f, G_{2,f,g,κ}) : f ∈ D_d, g ∈ K_1(E_0), κ ≥ 0}.
Next we verify the Convergence Condition 7.11. For f ∈ D_d, g ∈ K_1(E_0), ϕ_n ∈ K_1(E_0) as in Condition 11.1.5, and 0 ≤ κ < 1, define

f_{n,κ}(x,y) = f(x) + (1−κ)(1/n) log g(y) + κ(1/n) log ϕ_n(y)

and

h_{n,κ}(x,y) = (1/ε_n)(f(x + ε_n F(x,y)) − f(x)) + ((1−κ)/(nε_n)) log(Pg(y)/g(y)) + (κ/(nε_n)) log(Pϕ_n(y)/ϕ_n(y)).

Then, by the Hölder inequality,

e^{−n f_{n,κ}(x,y)} T_n e^{n f_{n,κ}}(x,y) = e^{n(f(x+ε_n F(x,y))−f(x))} ∫_{E_0} (g(z)/g(y))^{1−κ} (ϕ_n(z)/ϕ_n(y))^{κ} P(y,dz)
≤ e^{n(f(x+ε_n F(x,y))−f(x))} (Pg(y)/g(y))^{1−κ} (Pϕ_n(y)/ϕ_n(y))^{κ}
= exp{nε_n h_{n,κ}(x,y)},
so H_n f_{n,κ} ≤ h_{n,κ}. Let υ_n = n. Then, noting that by Condition 11.1.5 sup_n ‖f_{n,κ}‖ < ∞, (7.18) and (7.19) follow. By (11.6), for (x_n,y_n) ∈ K_n^q = E × K̃_n^q satisfying (x_n,y_n) → (x,y) ∈ E′,

lim sup_{n→∞} h_{n,κ}(x_n,y_n) ≤ lim sup_{n→∞} ( F(x,y_n)·∇f(x) + (1−κ) log(Pg(y_n)/g(y_n)) + κψ(y_n) ) ≤ G_{1,f,g,κ}(x,y),

and (7.20) holds. Similarly, define

f̃_{n,κ}(x,y) = f(x) + (1+κ)(1/n) log g(y) − κ(1/n) log ϕ_n(y)

and

h̃_{n,κ}(x,y) = (1/ε_n)(f(x + ε_n F(x,y)) − f(x)) + ((1+κ)/(nε_n)) log(Pg(y)/g(y)) − (κ/(nε_n)) log(Pϕ_n(y)/ϕ_n(y)).

By the Hölder inequality,

Pg = P( (g^{1+κ}ϕ_n^{−κ})^{(1+κ)^{−1}} ϕ_n^{κ(1+κ)^{−1}} ) ≤ (P(g^{1+κ}ϕ_n^{−κ}))^{(1+κ)^{−1}} (Pϕ_n)^{κ(1+κ)^{−1}},

or equivalently,

P(g^{1+κ}ϕ_n^{−κ}) ≥ (Pg)^{1+κ} (Pϕ_n)^{−κ}.

Then

T_n e^{n f̃_{n,κ}}(x,y) ≥ exp{n f̃_{n,κ}(x,y) + nε_n h̃_{n,κ}(x,y)},

and (7.21)–(7.23) follow as above.

11.1.2. Exponential tightness. Since E is compact, the convergence of H_n to H given in (11.7) implies exponential tightness for {X_n} by Corollary 4.17.

11.1.3. The comparison principle. Let α > 0 and h ∈ C_b(E). By a slight modification of Lemma 9.9, every subsolution f of (I − αH_†)f = h is a strong subsolution (see Remark 7.2), so if f_0 ∈ D_d and

f(x_0) − f_0(x_0) = sup_x (f(x) − f_0(x)),

then for each 0 ≤ κ < 1 and g ∈ K_1(E_0), there exist {y_n} ⊂ E_0 and {x_n} ⊂ E such that x_n → x_0 and

α^{−1}(f(x_0) − h(x_0)) ≤ lim sup_{n→∞} ( F(x_n,y_n)·∇f_0(x_n) + (1−κ) log(Pg(y_n)/g(y_n)) + κψ(y_n) )
≤ lim sup_{n→∞} ( F(x_0,y_n)·∇f_0(x_0) + (1−κ) log(Pg(y_n)/g(y_n)) + κψ(y_n) ),

and hence

(11.10)  α^{−1}(f(x_0) − h(x_0)) ≤ inf_{κ∈(0,1)} inf_{g∈K_1(E_0)} sup_{y∈E_0} ( F(x_0,y)·∇f_0(x_0) + (1−κ) log(Pg(y)/g(y)) + κψ(y) ).

Similarly, if f is a supersolution of (I − αH_‡)f = h, f_0 ∈ D_d, and

f_0(x_0) − f(x_0) = sup_x (f_0(x) − f(x)),

then

α^{−1}(f(x_0) − h(x_0)) ≥ sup_{κ>0} sup_{g∈K_1(E_0)} inf_{y∈E_0} ( F(x_0,y)·∇f_0(x_0) + (1+κ) log(Pg(y)/g(y)) − κψ(y) ).

Define

(11.11)  H_1(x,p) = inf_{0<κ<1} inf_{g∈K_1(E_0)} sup_{y∈E_0} ( F(x,y)·p + (1−κ) log(Pg(y)/g(y)) + κψ(y) ),
(11.12)  H_2(x,p) = sup_{κ>0} sup_{g∈K_1(E_0)} inf_{y∈E_0} ( F(x,y)·p + (1+κ) log(Pg(y)/g(y)) − κψ(y) ),

and

(11.13)  H_i f(x) = H_i(x, ∇f(x)),   f ∈ D_d, i = 1, 2.

Then

(11.14)  every subsolution of (I − αH_†)f = h is a subsolution of (I − αH_1)f = h,

and similarly,

(11.15)  every supersolution of (I − αH_‡)f = h is a supersolution of (I − αH_2)f = h.

The requirement that

(11.16)  H_1(x,p) ≤ H_2(x,p),   x, p ∈ R^d,

will be the critical hypothesis in the large deviation theorem. This inequality is closely related to hypotheses used by a number of authors. These connections are discussed more fully in Appendix B. Here we indicate several situations in which (11.16) can be verified.

Lemma 11.4. If for each x ∈ E and p ∈ R^d, there exists g_{x,p} ∈ K_1(E_0) such that

(11.17)  λ_{x,p} ≡ F(x,y)·p + log(Pg_{x,p}(y)/g_{x,p}(y))
does not depend upon y, then (11.16) holds.

Remark 11.5. If the {Y(k)} are independent with distribution ν, then we can take g_{x,p}(y) = e^{F(x,y)·p}, and (11.17) becomes log ∫_{E_0} e^{F(x,z)·p} ν(dz). More generally, the desired g_{x,p} is an eigenfunction of the operator e^{F(x,·)·p}P satisfying

e^{F(x,y)·p} Pg_{x,p}(y) = e^{λ_{x,p}} g_{x,p}(y).

Consequently, if E_0 is finite and P is the transition matrix for an irreducible Markov chain, then e^{F(x,·)·p}P is an irreducible, nonnegative matrix, and the Perron–Frobenius theorem ensures the existence of g_{x,p} and λ_{x,p}. Generalizations of the Perron–Frobenius theorem are discussed in Section B.1. In particular, Theorem B.5 gives conditions for existence.

Proof. Note that for 0 < κ ≤ 1,

H_1(x,p) ≤ sup_{y∈E_0} ( F(x,y)·p + (1−κ) log(Pg_{x,p}(y)/g_{x,p}(y)) + κψ(y) )
= λ_{x,p} + κ sup_y ( ψ(y) − log(Pg_{x,p}(y)/g_{x,p}(y)) )
and

H_2(x,p) ≥ inf_{y∈E_0} ( F(x,y)·p + (1+κ) log(Pg_{x,p}(y)/g_{x,p}(y)) − κψ(y) )
= λ_{x,p} + κ inf_y ( log(Pg_{x,p}(y)/g_{x,p}(y)) − ψ(y) ).

Recalling that ψ is bounded above, letting κ → 0 gives the inequality.
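For a finite E_0, the eigenfunction in Remark 11.5 can be computed directly, and the fact that (11.17) does not depend on y is easy to check numerically. The chain P and the values V(y) = F(x,y)·p below are illustrative choices, not data from the text.

```python
import numpy as np

# Finite-E0 illustration of Lemma 11.4 / Remark 11.5: g_{x,p} is the Perron
# eigenvector of the tilted operator e^{F(x,.).p} P, and then
#   F(x,y).p + log(P g(y) / g(y))
# takes the same value for every y.  P and V are illustrative.

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])     # irreducible transition matrix on 3 states
V = np.array([0.8, -0.2, 0.5])      # V(y) = F(x,y).p for a fixed (x,p)

T = np.diag(np.exp(V)) @ P          # tilted matrix (Tg)(y) = e^{V(y)} (Pg)(y)

# Perron-Frobenius: the eigenvalue of largest real part is real and simple,
# with a strictly positive eigenvector (up to sign).
w, U = np.linalg.eig(T)
i = np.argmax(w.real)
g = np.abs(U[:, i].real)            # positive Perron eigenvector

lam = V + np.log(P @ g) - np.log(g) # candidate for (11.17), one entry per y
print(lam)                          # all entries equal: log of the Perron root
```

Since Tg = ρg with ρ the Perron root, each entry of `lam` equals log ρ, which is the constant λ_{x,p} of (11.17).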
Lemma 11.6. Let T_{x,p}f(y) = e^{F(x,y)·p} Pf(y). Suppose that there exist λ_{x,p} ∈ R and g_{x,p} ∈ K_1(E_0) such that for each c ∈ R,

(11.18)  lim_{n→∞} sup_{y:ψ(y)≥c} | T_{x,p}^{n+1}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y) − λ_{x,p} | = 0.

Then (11.16) holds.

Remark 11.7. Under Harris recurrence conditions, Kontoyiannis and Meyn [67], Theorem 4.1, and [68], Theorem 3.1, give results that imply (11.18). Take ψ in (11.6) to be PV/V for the V in [67], and multiply the numerator and denominator in (11.18) by e^{−nΛ}, where Λ is the constant in the exponent in (50) of [67].

Proof. Let K_{x,p} = sup_y |F(x,y)·p|, c_{x,p} = inf_y g_{x,p}(y), and C_{x,p} = sup_y g_{x,p}(y), and note that

(c_{x,p}/C_{x,p}) e^{−K_{x,p}} ≤ T_{x,p}^{n+1}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y) ≤ (C_{x,p}/c_{x,p}) e^{K_{x,p}}

and

(c_{x,p}/C_{x,p}) e^{−2K_{x,p}} ≤ P T_{x,p}^{n}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y) = e^{−F(x,y)·p} T_{x,p}^{n+1}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y) ≤ (C_{x,p}/c_{x,p}) e^{2K_{x,p}}.

For κ, ε > 0 and c ∈ R, there exists n such that

H_1(x,p) ≤ sup_{y∈E_0} ( F(x,y)·p + (1−κ) log(P T_{x,p}^{n}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y)) + κψ(y) )
= sup_{y∈E_0} ( log( e^{F(x,y)·p} P T_{x,p}^{n}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y) ) − κ log(P T_{x,p}^{n}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y)) + κψ(y) )
= sup_{y∈E_0} ( log(T_{x,p}^{n+1}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y)) − κ log(P T_{x,p}^{n}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y)) + κψ(y) )
≤ [ log(λ_{x,p} + ε) + κ sup_{y:ψ(y)≥c} ( ψ(y) − log(P T_{x,p}^{n}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y)) ) ] ∨ [ (1+2κ)(K_{x,p} + log(C_{x,p}/c_{x,p})) + κc ]

and

H_2(x,p) ≥ inf_{y∈E_0} ( F(x,y)·p + (1+κ) log(P T_{x,p}^{n}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y)) − κψ(y) )
≥ [ log(λ_{x,p} − ε) + κ inf_{y:ψ(y)≥c} ( log(P T_{x,p}^{n}g_{x,p}(y)/T_{x,p}^{n}g_{x,p}(y)) − ψ(y) ) ] ∧ [ −(1+2κ)(K_{x,p} + log(C_{x,p}/c_{x,p})) − κc ].
By selecting ε and κ sufficiently small and c sufficiently negative (that is, c < 0 and |c| sufficiently large), we see that the desired inequality must hold.

Lemma 11.8. If E_0 is compact and for each x, p ∈ R^d there exists a constant c(x,p) such that

(11.19)  lim_{n→∞} (1/n) log E[ e^{Σ_{k=0}^{n−1} p·F(x,Y(k))} | Y(0) = y ] = c(x,p)

uniformly in y, then

(11.20)  c(x,p) = sup_{µ∈P(E_0)} ( ∫_{E_0} F(x,y)·p µ(dy) + inf_{g∈K_1(E_0)} ∫_{E_0} log(Pg/g) dµ ),

and (11.16) holds with

(11.21)  H_1(x,p) ≤ c(x,p) ≤ H_2(x,p).
Remark 11.9. The convergence in (11.19) is essentially Condition F of [52]. For more general situations where E0 may be noncompact, methods for verifying (11.21) are discussed in Appendix B. Proof. The lemma is a special case of Theorem B.20.
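For a finite E_0, the limit (11.19) can be computed two ways and compared: directly from matrix powers of the tilted kernel, and as the logarithm of its Perron root (the eigenvalue route of Remark 11.5). The data below are illustrative choices, not from the text.

```python
import numpy as np

# Finite-state sketch of (11.19): for E0 = {0,...,m-1}, the limit
#   c(x,p) = lim (1/n) log E[exp(sum_{k<n} p.F(x,Y(k))) | Y(0) = y]
# is the log of the Perron root of the tilted matrix T = diag(e^{p.F(x,.)}) P.
# P and V are illustrative.

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])     # irreducible transition matrix
V = np.array([0.8, -0.2, 0.5])      # V(y) = p . F(x,y) for fixed (x,p)

T = np.diag(np.exp(V)) @ P          # tilted kernel: T f(y) = e^{V(y)} P f(y)

# Spectral route: log of the Perron root.
c_direct = np.log(max(np.linalg.eigvals(T).real))

# Finite-n route: E_y[exp(sum_{k<n} V(Y_k))] = (T^n 1)(y), so
# (1/n) log max_y (T^n 1)(y) approximates c(x,p).
n = 400
c_approx = np.log(np.max(np.linalg.matrix_power(T, n) @ np.ones(3))) / n

print(c_direct, c_approx)           # the two values nearly agree
```

The error of the finite-n approximation is O(1/n), and the bound (11.63)-style inequality min V ≤ c(x,p) ≤ max V also holds here, since the row sums of T lie between e^{min V} and e^{max V}.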
Lemma 11.10. Assume that (11.3) and (11.4) hold, that H_1 and H_2 are given by (11.11) and (11.12), and that (11.16) is satisfied. Then for h ∈ C_b(E) and α > 0, the comparison principle holds for subsolutions of (I − αH_1)f = h and supersolutions of (I − αH_2)f = h.

Proof. We verify Condition 9.10. We have

H_1(x_1,p) − H_1(x_2,p) = inf_{κ∈(0,1)} inf_{g∈K_1(E_0)} sup_{y∈E_0} ( F(x_1,y)·p + (1−κ) log(Pg(y)/g(y)) + κψ(y) )
 − inf_{κ∈(0,1)} inf_{g∈K_1(E_0)} sup_{y∈E_0} ( F(x_2,y)·p + (1−κ) log(Pg(y)/g(y)) + κψ(y) )
≤ inf_{κ∈(0,1)} inf_{g∈K_1(E_0)} sup_{y∈E_0} ( F(x_2,y)·p + K|x_1 − x_2||p| + (1−κ) log(Pg(y)/g(y)) + κψ(y) )
 − inf_{κ∈(0,1)} inf_{g∈K_1(E_0)} sup_{y∈E_0} ( F(x_2,y)·p + (1−κ) log(Pg(y)/g(y)) + κψ(y) )
= K|x_1 − x_2||p|.
=
inf
inf
sup ((1 − κ) log
κ∈(0,1) g∈K1 (E0 ) y∈E0
m→∞
≤
inf
sup ((1 − κ) log
κ∈(0,1) y∈E0
=
P 1(y) + κψ(y)) 1(y)
inf (κ sup ψ(y)) = 0, κ∈(0,1)
y∈E0
P g(y) + κψ(y)) g(y)
236
11. RANDOM EVOLUTIONS
and similarly, lim inf H2 (xm , pm ) m→∞
= ≥ ≥
sup
sup
inf ((1 + κ) log
κ∈(0,1) g∈K1 (E0 ) y∈E0
sup
P g(y) − κψ(y)) g(y)
inf (−κψ(y))
κ∈(0,1) y∈E0
− inf κ sup ψ(y) = 0. κ∈(0,1)
y∈E0
The comparison principle follows by Lemma 9.15.
Since every subsolution of (I − αH_†)f = h is a subsolution of (I − αH_1)f = h and every supersolution of (I − αH_‡)f = h is a supersolution of (I − αH_2)f = h, we have the following immediate consequence of Lemma 11.10.

Lemma 11.11. Assume that (11.3), (11.4) and (11.16) hold. Then for h ∈ C_b(E) and α > 0, the comparison principle holds for subsolutions of (I − αH_†)f = h and supersolutions of (I − αH_‡)f = h.

11.1.4. Variational representation of H. The following lemma gives the desired control representation.

Lemma 11.12. Let Condition 11.1 hold, and let H_1 be given by (11.11). Then

(11.22)  H_1(x,p) = H(x,p) ≡ sup_{ν∈P(E_0)} ( ∫_{E_0} F(x,y)·p dν − I_P(ν) ),

where

I_P(ν) = −( inf_{g∈K_1(E_0)} ∫_{E_0} log(Pg/g) dν ∧ ∫_{E_0} ψ dν ).

Remark 11.13. Under Condition 11.1.5, typically

∫ ψ dν ≥ lim sup_{n→∞} ∫_{E_0} log(Pϕ_n/ϕ_n) dν,

so that

I_P(ν) = − inf_{g∈K_1(E_0)} ∫_{E_0} log(Pg/g) dν.
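The functional −inf_g ∫ log(Pg/g) dν in Remark 11.13 is the classical Donsker–Varadhan rate for occupation measures, and for a finite chain the infimum over g = e^u is a smooth convex minimization that plain gradient descent handles. The chain and test measures below are illustrative choices, not from the text.

```python
import numpy as np

# Donsker-Varadhan rate for a finite chain:
#   I_P(nu) = -inf_{g>0} sum_y nu(y) log(Pg(y)/g(y)).
# Writing g = e^u, J(u) = nu . log(P e^u) - nu . u is convex in u, so a
# simple gradient descent recovers the infimum.  P and nu are illustrative.

P = np.array([[0.6, 0.4],
              [0.3, 0.7]])
pi = np.array([3/7, 4/7])            # stationary distribution of P

def I_P(nu, steps=50000, lr=0.05):
    u = np.zeros(len(nu))
    for _ in range(steps):
        w = P @ np.exp(u)
        # gradient of J(u) = nu . log(P e^u) - nu . u
        grad = np.exp(u) * (P.T @ (nu / w)) - nu
        u -= lr * grad
        u -= u.mean()                # g is only defined up to a constant factor
    return -(nu @ np.log(P @ np.exp(u)) - nu @ u)

print(I_P(pi))                       # ~ 0: the stationary law costs nothing
print(I_P(np.array([0.9, 0.1])))     # > 0: atypical occupation is penalized
```

At ν = π the optimal g is constant and the rate is zero, in line with Lemma 11.2; measures far from π carry a strictly positive cost.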
Proof. Note that Z
H1 (x, p)
P g(y) + κψ(y))ν(dy) g(y) E0 Z Z Z Pg ≥ inf sup inf ( F (x, ·) · pdν + (1 − κ) log dν ∧ ψdν 0rn e nq . Consequently, taking ψ = Lϕ1 /ϕ1 , we have and hence {y : log ϕ1 (y) ≤ rn } ⊃ K B0 ϕn (y) = ψ(y), ϕn (y)
for y ∈ K̃_n^q and n ≥ n_q. Note that ϕ_n^β ∈ D(B_0) for all β ∈ R. Consequently, (11.37) holds, completing the verification of Condition 11.21.4. If Y is the Ornstein–Uhlenbeck process with

Bg(y) = B_0 g(y) = (1/2)Δg(y) − αy·∇g(y),   g ∈ D_d,

for some α > 0, then we can take ϕ_1(y) = exp{√(1 + |y|^2)} and ϕ_2(y) = exp{δ|y|^2}, for δ < α.
11.2.1. Convergence of H_n. Let H_n f = (1/n) e^{−nf} A_n e^{nf}. For f_n(x,y) = f(x) + (1/n) log g(y), f ∈ D_d and g ∈ D^{++}(B),

H_n f_n(x,y) = (1/n) e^{−nf(x)} g^{−1}(y) ( nF(x,y)·(∇_x f(x)) e^{nf(x)} g(y) + n e^{nf(x)} Bg(y) )
(11.42)  = F(x,y)·∇_x f(x) + Bg(y)/g(y) ≡ G_{f,g}(x,y),

so (f, G_{f,g}) ∈ H.

Lemma 11.26. Let g_1, g_2 ∈ D^{++}(B) and 0 < κ < 1, and define

g_h = h^{−1} ∫_0^h S(s) g_1^{κ} g_2^{1−κ} ds.
11.2. CONTINUOUS TIME, LAW OF LARGE NUMBERS SCALING
Then

(11.43)  Bg_h/g_h ≤ (g_1^{κ} g_2^{1−κ}/g_h) h^{−1} ( (S(h)g_1/g_1)^{κ} (S(h)g_2/g_2)^{1−κ} − 1 ),

and as h → 0, the buc-limit of the right side is

(11.44)  κ Bg_1/g_1 + (1−κ) Bg_2/g_2.

For κ > 0, let

g̃_h = h^{−1} ∫_0^h S(s) g_1^{1+κ} g_2^{−κ} ds.

Then

(11.45)  Bg̃_h/g̃_h ≥ (g_1^{1+κ} g_2^{−κ}/g̃_h) h^{−1} ( (S(h)g_1/g_1)^{1+κ} (S(h)g_2/g_2)^{−κ} − 1 ),

and the buc-limit of the right side is

(1+κ) Bg_1/g_1 − κ Bg_2/g_2.

Proof. Following the Hölder inequality arguments of the previous section,

Bg_h/g_h = h^{−1}(S(h)g_1^{κ}g_2^{1−κ} − g_1^{κ}g_2^{1−κ})/g_h ≤ (g_1^{κ}g_2^{1−κ}/g_h) h^{−1} ( (S(h)g_1/g_1)^{κ} (S(h)g_2/g_2)^{1−κ} − 1 ),

giving (11.43). For the second part, observe that

S(h)g_1 = S(h)( (g_1^{1+κ}g_2^{−κ})^{(1+κ)^{−1}} g_2^{κ(1+κ)^{−1}} ) ≤ ( S(h)(g_1^{1+κ}g_2^{−κ}) )^{(1+κ)^{−1}} ( S(h)g_2 )^{κ(1+κ)^{−1}},

so that

(S(h)g_1/g_1)^{1+κ} (S(h)g_2/g_2)^{−κ} ≤ S(h)(g_1^{1+κ}g_2^{−κ})/(g_1^{1+κ}g_2^{−κ}),

giving (11.45).
We apply Lemma 11.26 to obtain sufficiently rich limit operators H_† and H_‡. Let (Ê_0, r̂_0) be the compactification of (E_0, r_0) obtained in the discrete-time case (Section 11.1.1), and let E′ = E × Ê_0 and K_n^q = E × K̃_n^q, for q ∈ Q. Define η_n : E_n → E′ and γ : E′ → E as in Section 11.1.1. Condition 11.21.3 implies that Condition 2.8 holds. In addition, by the compactness of E′, Condition 7.9 is satisfied. Again define LIM convergence according to Definition 2.5. Let ψ satisfy Condition 11.21.4, f ∈ D_d, and g ∈ D^{++}(B). For 0 < κ < 1, define

G_{1,f,g,κ}(x,y) = F(x,y)·∇f(x) + (1−κ) Bg(y)/g(y) + κψ(y) for y ∈ E_0, and −∞ for y ∈ Ê_0 − E_0,

and for κ > 0, define

G_{2,f,g,κ}(x,y) = F(x,y)·∇f(x) + (1+κ) Bg(y)/g(y) − κψ(y) for y ∈ E_0, and ∞ for y ∈ Ê_0 − E_0.
As in the discrete-time case, by Lemma 11.3,

G_{1,f,g,κ}(x,y) = lim_{δ→0+} sup_{z∈B_δ(y)∩E_0} ( F(x,z)·∇f(x) + (1−κ) Bg(z)/g(z) + κψ(z) )

and

G_{2,f,g,κ}(x,y) = lim_{δ→0+} inf_{z∈B_δ(y)∩E_0} ( F(x,z)·∇f(x) + (1+κ) Bg(z)/g(z) − κψ(z) ).
H†
= {(f, G1,f,g,κ ) : f ∈ Dd , g ∈ D++ (B0 ), κ ∈ [0, 1)},
(11.47)
H‡
= {(f, G2,f,g,κ ) : f ∈ Dd , g ∈ D++ (B0 ), κ ≥ 0}.
We verify the Convergence Condition 7.11 with υn = n(kϕn k ∨ 1). For κ = 0, the convergence follows from (11.42). For g ∈ D++ (B0 ) and κ > 0, let Z hn gn,κ = h−1 S(s)g 1−κ ϕκn ds n 0
and for f ∈ Dd , let (11.48)
fn,κ (x, y) = f (x) +
1 log gn,κ (y). n
Then by (11.43) Hn fn,κ (x, y) (11.49)
= F (x, y) · ∇x f (x) +
Bgn,κ (y) gn,κ (y)
≤ F (x, y) · ∇x f (x) g 1−κ ϕκn −1 + hn gn,κ
S(hn )g(y) g(y)
1−κ
S(hn )ϕn (y) ϕn (y)
κ
! −1
Since g and ϕn are in D(B0 ) and (11.37) holds, hn can be selected so that (11.50) ! 1−κ κ S(hn )g S(hn )ϕn Bg(y) Bϕn (y) −1 sup hn − 1 − (1 − κ) +κ → 0, g ϕn g(y) ϕn (y) y (11.51)
lim
sup
n→∞ {y:ψ(y)≥c}
1−κ κ g ϕn gn,κ − 1 = 0,
and (11.52) By Condition 11.21.4,
lim
sup kS(t)ϕκn − ϕκn k = 0.
n→∞ 0≤t≤hn 1 n
log kϕn k → 0, and hence
lim
sup
n→∞ (x,y)∈E
n
fn,κ (x, y) − f (x) = 0,
11.2. CONTINUOUS TIME, LAW OF LARGE NUMBERS SCALING
247
verifying (7.18). The bounds in (7.19) follow by the definition of υn , (11.51), (11.35), and the fact that there exist c1 and c2 depending on g such that c1
ϕκn g 1−κ ϕκn ϕκn ≤ ≤ c R R 2 h h n n gn,κ h−1 S(s)ϕκn ds h−1 S(s)ϕκn ds n n 0 0
and the ratio on the left and right converges uniformly to 1. e nq satisfy (xn , yn ) → (x, y) ∈ E 0 = For q ∈ Q, let (xn , yn ) ∈ Knq = E0 × K b0 . Then by (11.36) and the properties of ψ, for any subsequence satisfying E×E ψ(yn ) → −∞, we have Hn fn,κ (xn , yn ) → −∞, and for any subsequence satisfying ψ(yn ) ≥ c > −∞, (11.50) and (11.51) imply lim sup Hn fn,κ (xn , yn ) ≤ lim sup G1,f,g,κ (x, yn ) ≤ G1,f,g,κ (x, y), n→∞
n→∞
verifying (7.20). Similarly, we can verify the convergence to H‡ . 11.2.2. Exponential tightness. Exponential tightness for {Xn } follows by the convergence of Hn and Corollary 4.17. 11.2.3. The comparison principle. Let α > 0, h ∈ Cb (E). We define Bg(y) (11.53) H1 (x, p) = inf inf sup F (x, y) · p + (1 − κ) + κψ(y) , g(y) κ∈(0,1) g∈D ++ (B) y∈E0 (11.54)
H2 (x, p) = sup
sup
inf
κ>0 g∈D ++ (B) y∈E0
F (x, y) · p + (1 + κ)
Bg(y) − κψ(y) , g(y)
and Hi f (x) = Hi (x, ∇f (x)),
(11.55) If g ∈ D
++
(B), then for > 0, Z −1 g = S(t)gdt ∈ D++ (B0 ),
f ∈ Dd .
−1
Z
B0 g =
0
S(t)Bgdt, 0
and assuming (11.33), B0 g Bg . = g g The properties of ψ then assure that taking the inf over g ∈ D++ (B) in the definition of H1 gives the same result as taking the inf over g ∈ D++ (B0 ) and similarly with the sup in the definition of H2 . buc − lim
→0
Lemma 11.27. Assume that (11.3) and (11.4) hold and that H1 and H2 are given by (11.53) and (11.54). Then the comparison principle holds for H1 and H2 . Proof. The proof is similar to that of Lemma 11.10.
By the same argument as in Section 11.1.3, if (11.56)
H1 (x, p) ≤ H2 (x, p),
x, p ∈ Rd ,
and the comparison principle holds for (I − αH1 )f = h or (I − αH2 )f = h, then the comparison principle holds for subsolutions of (I − αH† )f = h and supersolutions of (I − αH‡ )f = h. We have conditions that imply (11.56) similar to the conditions giving (11.16) in Section 11.1.3.
Lemma 11.28. If for each x ∈ E and p ∈ Rd , there exists gx,p ∈ D++ (B) such that λx,p ≡ F (x, y) · p +
(11.57)
Bgx,p (y) gx,p (y)
does not depend upon y, then (11.56) holds. Remark 11.29. The desired gx,p is an eigenfunction of the operator B +F (x, ·)· p satisfying Bgx,p (y) + F (x, ·) · pgx,p (y) = λx,p gx,p (y). Consequently, if E0 is finite and B is the intensity matrix for an irreducible Markov chain, then the PerronFrobenius theorem ensures the existence of gx,p and λx,p . Generalizations of the PerronFrobenius theorem are discussed in Section B.1. In particular, Theorem B.3 gives conditions for existence. Proof. The proof is similar to that of Lemma 11.4. Lemma 11.30. Let
R t F (x,Y (s))·pds
Tx,p (t)f (y) = E[e
0
f (Y (t))Y (0) = y].
Suppose that there exist δ > 0, λx,p (s) ∈ R, 0 ≤ s ≤ δ, and gx,p ∈ K1 (E0 ) such that for each c ∈ R, Tx,p (t + s)gx,p (y) (11.58) lim sup − λx,p (s) = 0. t→∞ Tx,p (t)gx,p (y) {y:ψ(y)≥c}
Then (11.56) holds. Remark 11.31. Again, under Harris recurrence conditions, Kontoyiannis and Meyn [67], Theorem 4.1, and [68], Theorem 1.2, give results that imply (11.58). Proof. Let δ,t gx,p =
1 δ
Z
t+δ
Tx,p (s)gx,p . t
Then δ,t δ,t Bgx,p + F (x, y) · pgx,p = δ −1 (Tx,p (t + δ)gx,p − Tx,p (t)gx,p ),
and the remainder of the proof is similar to the proof of Lemma 11.6.
n Lemma 11.32. Suppose that for each x, p ∈ Rd , there exist {gx,p } ⊂ D++ (B) n and {λx,p } such that n Bgx,p (y) < ∞, (11.59) sup n gx,p (y) y,n
sup λnx,p  < ∞,
(11.60)
n
and for each c ∈ R, (11.61)
n Bgx,p (y) n lim sup − λx,p = 0. F (x, y) · p + n n→∞ {y:ψ(y)≥c} gx,p (y)
Then (11.56) holds.
Proof. Since we can select a subsequence along which {λ^n_{x,p}} converges, without loss of generality we can assume that λ^n_{x,p} = λ_{x,p} is independent of n. Select a_{x,p}, b_{x,p} ∈ R so that

a_{x,p} ≤ Bg^n_{x,p}/g^n_{x,p} ≤ b_{x,p}.

For ε > 0 and c ∈ R, there exists n_c such that n ≥ n_c implies

H_1(x,p) ≤ sup_{y∈E_0} ( F(x,y)·p + (1−κ) Bg^n_{x,p}(y)/g^n_{x,p}(y) + κψ(y) )
≤ [ λ_{x,p} + ε + κ sup_{y:ψ(y)≥c} (ψ(y) − a_{x,p}) ] ∨ [ sup_{y∈E_0} F(x,y)·p + (1−κ)b_{x,p} + κc ]

and

H_2(x,p) ≥ inf_{y∈E_0} ( F(x,y)·p + (1+κ) Bg^n_{x,p}(y)/g^n_{x,p}(y) − κψ(y) )
≥ [ λ_{x,p} − ε − κ sup_{y:ψ(y)≥c} (ψ(y) − a_{x,p}) ] ∧ [ inf_{y∈E_0} F(x,y)·p + (1+κ)a_{x,p} − κc ].
Selecting κ and ε sufficiently small and c < 0 with |c| sufficiently large, we see that H_1(x,p) ≤ λ_{x,p} ≤ H_2(x,p).

The conditions of Lemma 11.32 hold for a large class of nondegenerate diffusion processes.

Example 11.33. For a domain Γ ⊂ R^m and 0 < α < 1, define the Hölder norm

‖h‖_{α,Γ} = sup_{y∈Γ} |h(y)| + sup_{x,y∈Γ, x≠y} |h(y) − h(x)|/|y − x|^α.

Let

Lg(y) = (1/2) Σ_{i,j=1}^m a_{ij}(y) ∂²g(y)/∂y_i∂y_j + Σ_{i=1}^m b_i(y) ∂g(y)/∂y_i,   g ∈ C²(R^m).

Suppose that for each bounded domain Γ and each x ∈ R^d, there exist ε_Γ > 0 and Λ_{Γ,x} such that

Σ_{i,j} a_{ij}(y) ξ_i ξ_j ≥ ε_Γ |ξ|²,   y ∈ Γ, ξ ∈ R^m,

and ‖a_{ij}‖_{α,Γ}, ‖b_i‖_{α,Γ}, ‖F_i(x,·)‖_{α,Γ} ≤ Λ_{Γ,x} (cf. Assumption H, page 85, Pinsky [95]). Let Y be the diffusion corresponding to L; Y exists and is unique up to the first time Y leaves a bounded domain. These conditions ensure that for each smooth, bounded, open domain Γ ⊂ R^m and x, p ∈ R^d, there exist λ_{Γ,x,p} ∈ R and ϕ_{Γ,x,p} ∈ C²(Γ) such that ϕ_{Γ,x,p} > 0 on Γ, ϕ_{Γ,x,p} = 0 on ∂Γ, and

F(x,y)·p ϕ_{Γ,x,p}(y) + Lϕ_{Γ,x,p}(y) = λ_{Γ,x,p} ϕ_{Γ,x,p}(y),   y ∈ Γ.
(See Theorem 5.5, page 94, Pinsky [95].) For y ∉ Γ, set ϕ_{Γ,x,p}(y) = 0. Setting V_{x,p}(y) = F(x,y)·p and τ_Γ = inf{t : Y(t) ∉ Γ},

(11.62)  E[ e^{∫_0^t V_{x,p}(Y(s))ds} ϕ_{Γ,x,p}(Y(t)) 1_{{τ_Γ>t}} | Y(0) = y ] = e^{λ_{Γ,x,p}t} ϕ_{Γ,x,p}(y).

In particular,

(11.63)  inf_y V_{x,p}(y) ≤ λ_{Γ,x,p} ≤ sup_y V_{x,p}(y).

Assume that 0 ∈ Γ; we can normalize ϕ_{Γ,x,p} so that ϕ_{Γ,x,p}(0) = 1. Then the Harnack inequality (Theorem 0.1, page 124, Pinsky [95]) implies that for each r > 0, there exist 0 < d_{1,x,p,r} < d_{2,x,p,r} such that for all Γ ⊃ B_r(0),

d_{1,x,p,r} ≤ ϕ_{Γ,x,p}(y) ≤ d_{2,x,p,r},   |y| ≤ r.
Setting Bf = Lf for f ∈ D_l, we assume that the martingale problem for B is well-posed and that the corresponding semigroup {S(t)} satisfies

(11.64)  S(t) : Ĉ(R^l) → Ĉ(R^l),

where Ĉ(R^l) is the space of continuous functions vanishing at infinity. Note that (11.64) implies

(11.65)  lim_{|y|→∞} P{Y(t) ∈ K | Y(0) = y} = 0,

for each t ≥ 0 and each compact K ⊂ R^l. Conversely, if S(t) : C_b(R^l) → C_b(R^l) and (11.65) holds, then (11.64) holds. To verify (11.65), it is enough to show the existence of f ∈ Ĉ(R^l) ∩ C²(R^l) such that f(y) > 0, y ∈ R^l, and

(11.66)  c ≡ sup_y Lf(y)/f(y) < ∞.

Then

f(Y(t)) e^{−∫_0^t (Lf(Y(s))/f(Y(s))) ds}

is a local martingale and

P{Y(t) ∈ K | Y(0) = y} ≤ ( 1/inf_{z∈K} f(z) ) e^{ct} f(y).
In particular, if |b(y)| ≤ c_1 + c_2|y| and |a(y)| ≤ c_1 + c_2|y|², then (11.66) holds for f(y) = (1 + |y|²)^{−1}. We assume that there exist ϕ_1 and ϕ_2 satisfying Condition 11.25, and define K̃_n^q = {y : ϕ_2(y) ≤ e^{qn}}. Define ϕ_0 = √ϕ_1, and note that

Lϕ_0/ϕ_0 ≤ Lϕ_1/(2ϕ_1),

so

d_j = sup_y Lϕ_j(y)/ϕ_j(y),   j = 0, 1,

are finite. Let µ_{x,p} satisfy

µ_{x,p} > ‖V_{x,p}‖ + sup_Γ |λ_{Γ,x,p}| + d_0 + (1/2)d_1.

For simplicity, let E_y[Z] denote E[Z | Y(0) = y].

Lemma 11.34. There exists a constant C_{x,p} > 0 independent of Γ such that ϕ_{Γ,x,p}(y) ≤ C_{x,p} ϕ_0(y).
Proof. Let r > 0 satisfy

sup_{|y|≥r} Lϕ_0(y)/ϕ_0(y) < −µ_{x,p},

and let γ_r = inf{t > 0 : |Y(t)| ≤ r}. Let Γ ⊃ B_r(0). Since

(11.67)  ϕ_{Γ,x,p}(Y(t∧τ_Γ)) e^{∫_0^{t∧τ_Γ}(V_{x,p}(Y(s))−λ_{Γ,x,p})ds}

is a martingale, for y ∈ Γ we have

ϕ_{Γ,x,p}(y) = E_y[ ϕ_{Γ,x,p}(Y(γ_r∧τ_Γ)) e^{∫_0^{γ_r∧τ_Γ}(V_{x,p}(Y(s))−λ_{Γ,x,p})ds} ]
≤ ( d_{2,x,p,r}/inf_{|y|≤r} ϕ_0(y) ) E_y[ ϕ_0(Y(γ_r∧τ_Γ)) e^{(γ_r∧τ_Γ)(‖V_{x,p}‖−λ_{Γ,x,p})} ].

Since

ϕ_0(y) = E_y[ ϕ_0(Y(γ_r∧τ_Γ)) e^{−∫_0^{γ_r∧τ_Γ}(Lϕ_0(Y(s))/ϕ_0(Y(s)))ds} ] ≥ E_y[ ϕ_0(Y(γ_r∧τ_Γ)) e^{(γ_r∧τ_Γ)µ_{x,p}} ],

the inequality follows with

C_{x,p} = d_{2,x,p,r}/inf_{|y|≤r} ϕ_0(y).
For n = 1, 2, . . ., let Γn = {y : y < n}, and let ρn ∈ Cb (Rm ) satisfy 0 ≤ ρn (y) ≤ 1, ρn (y) = 1 on Γcn+1 and ρn (y) = 0 on Γn . Define λnx,p = λΓn ,x,p and Z ∞ R t n gx,p (y) = Ey [ e 0 (Vx,p (Y (s))−µx,p )ds (ϕΓn ,x,p (Y (t)) + n ρn (Y (t)))dt], 0
where n > 0 is to be determined. Note that n (y) Bgx,p ϕΓ ,x,p (y) + n ρn (y) = µx,p − n (11.68) Vx,p (y) + n n (y) gx,p (y) gx,p and by (11.62) and the strong Markov property Z ∞ R t n gx,p (y) = Ey [ e 0 (Vx,p (Y (s))−µx,p )ds ϕΓn ,x,p (Y (t))1{τΓn >t} dt] 0 Z ∞ R t +Ey [ e 0 (Vx,p (Y (s))−µx,p )ds (ϕΓn ,x,p (Y (t)) + n ρn (Y (t)))1{τΓn ≤t} dt] 0
= =
R τΓn 1 n ϕΓn ,x,p (y) + Ey [e 0 (Vx,p (Y (s))−µx,p )ds gx,p (Y (τΓn ))] n µx,p − λx,p Z ∞ R t 1 ϕ (y) + E [ e 0 (Vx,p (Y (s))−µx,p )ds ϕΓn ,x,p (Y (t))1{τΓn ≤t} dt] Γ ,x,p y µx,p − λnx,p n 0 Z ∞ R t +n Ey [ e 0 (Vx,p (Y (s))−µx,p )ds ρn (Y (t)))dt]. 0
For cx,p > 0 satisfying −cx,p ≤ inf y (Vx,p (y) − µx,p ) Z ∞ R t Ey [ e 0 (Vx,p (Y (s))−µx,p )ds ρn (Y (t)))dt] 0 Z ∞ ≥ e−cx,p t S(t)ρn (y)dt, 0
252
11. RANDOM EVOLUTIONS
and since $\lim_{|y|\to\infty}S(t)\rho_n(y)=1$,
\[
K_n=\Big\{y:\int_0^\infty e^{-c_{x,p}t}S(t)\rho_n(y)\,dt\le\frac{1}{2c_{x,p}}\Big\}
\]
is compact. Let
\[
\epsilon_n=n^{-1}\wedge\inf_{y\in K_n}E_y\Big[\int_0^\infty e^{\int_0^t(V_{x,p}(Y(s))-\mu_{x,p})ds}\varphi_{\Gamma_n,x,p}(Y(t))1_{\{\tau_{\Gamma_n}\le t\}}\,dt\Big],
\]
and note that $\epsilon_n>0$. Since $\rho_n>0$ only when $\varphi_{\Gamma_n,x,p}=0$, it follows from (11.68) and the definition of $\epsilon_n$ and $c_{x,p}$ that
\[
\Big|V_{x,p}(y)+\frac{Bg^n_{x,p}(y)}{g^n_{x,p}(y)}\Big|\le|\mu_{x,p}|+\frac{\varphi_{\Gamma_n,x,p}(y)+\epsilon_n\rho_n(y)}{g^n_{x,p}(y)}\le|\mu_{x,p}|+(\mu_{x,p}-\lambda^n_{x,p})\vee1\vee(2c_{x,p}),
\]
which implies (11.59). The bound (11.60) follows from (11.63). To see that (11.61) is satisfied, it is enough to show that for each compact $K$,
\[
(11.69)\qquad \lim_{n\to\infty}\sup_{y\in K}E_y\Big[\int_0^\infty e^{\int_0^t(V_{x,p}(Y(s))-\mu_{x,p})ds}\varphi_{\Gamma_n,x,p}(Y(t))1_{\{\tau_{\Gamma_n}\le t\}}\,dt\Big]=0.
\]
By Lemma 11.34 and the fact that
\[
\varphi_j(Y(t))\,e^{-\int_0^t\frac{L\varphi_j(Y(s))}{\varphi_j(Y(s))}\,ds}
\]
is a supermartingale,
\[
E_y\Big[\int_0^\infty e^{\int_0^t(V_{x,p}(Y(s))-\mu_{x,p})ds}\varphi_{\Gamma_n,x,p}(Y(t))1_{\{\tau_{\Gamma_n}\le t\}}\,dt\Big]
\le C_{x,p}E_y\Big[e^{-\tau_{\Gamma_n}(\mu_{x,p}-\|V_{x,p}\|)}\int_0^\infty e^{-t(\mu_{x,p}-\|V_{x,p}\|)}\varphi_0(Y(\tau_{\Gamma_n}+t))\,dt\Big]
\]
\[
\le C_{x,p}E_y\Big[e^{-\tau_{\Gamma_n}(\mu_{x,p}-\|V_{x,p}\|)}\varphi_0(Y(\tau_{\Gamma_n}))\Big]\int_0^\infty e^{-t(\mu_{x,p}-\|V_{x,p}\|-d_0)}\,dt
\]
\[
\le\frac{C_{x,p}}{\mu_{x,p}-\|V_{x,p}\|-d_0}\,E_y\Big[e^{-2\tau_{\Gamma_n}(\mu_{x,p}-\|V_{x,p}\|)+\int_0^{\tau_{\Gamma_n}}\frac{L\varphi_1(Y(s))}{\varphi_1(Y(s))}ds}\Big]^{1/2}E_y\Big[\varphi_1(Y(\tau_{\Gamma_n}))\,e^{-\int_0^{\tau_{\Gamma_n}}\frac{L\varphi_1(Y(s))}{\varphi_1(Y(s))}ds}\Big]^{1/2}
\]
\[
\le\frac{C_{x,p}\,\varphi_1(y)^{1/2}}{\mu_{x,p}-\|V_{x,p}\|-d_0}\,E_y\Big[e^{-2\tau_{\Gamma_n}(\mu_{x,p}-\|V_{x,p}\|-\frac12 d_1)}\Big]^{1/2}.
\]
Since $\tau_{\Gamma_n}\to\infty$, (11.69) holds.

11.2.4. Variational representation of H. As in Section 11.1.4, the following lemma gives the desired control representation.

Lemma 11.35. Let Condition 11.21 hold, and let $H_1$ be given by (11.53). Then
\[
(11.70)\qquad H_1(x,p)=H(x,p)\equiv\sup_{\nu\in\mathcal{P}(E_0)}\Big(\int_{E_0}F(x,y)\cdot p\,\nu(dy)-I_B(\nu)\Big),
\]
where
\[
I_B(\nu)=\Big(-\inf_{g\in D^{++}(B)}\int_{E_0}\frac{Bg}{g}\,d\nu\Big)\wedge\int_{E_0}\psi\,d\nu.
\]
Remark 11.36. Under Condition 11.21.4, typically
\[
\int_{E_0}\psi\,d\nu\ge\limsup_{n\to\infty}\int_{E_0}\frac{B\varphi_n}{\varphi_n}\,d\nu,
\]
so that
\[
I_B(\nu)=-\inf_{g\in D^{++}(B)}\int_{E_0}\frac{Bg}{g}\,d\nu.
\]
Proof. Note that
\[
H_1(x,p)\ \cdots\ \int\big(\cdots+Bg(y)+\kappa\psi(y)\big)\,\nu(dy)\ \cdots
\]
For $(x_n,y_n)\to(x,y)\in E\times E_0$,
\[
H_nf_n(x_n,y_n)\to G_{f,g}(x,y)+\epsilon\psi(y)=G_{1,f,g,\epsilon}(x,y).
\]
If $E_0$ is noncompact, then for $y\in\hat E_0-E_0$ and $E_n=E\times E_0\ni(x_n,y_n)\to(x,y)\in E\times\hat E_0$, by (11.79),
\[
\limsup_{n\to\infty}H_nf_n(x_n,y_n)\le\limsup_{n\to\infty}\big(G_{f,g}(x,y_n)+\epsilon\psi(y_n)\big)\le G_{1,f,g,\epsilon}(x,y).
\]
Consequently, we set
\[
(11.81)\qquad H_\dagger=\{(f,G_{1,f,g,\epsilon}):f\in\mathcal{D}_d,\ g\in D(B),\ \epsilon\ge0\}.
\]
Similarly,
\[
(11.82)\qquad H_\ddagger=\{(f,G_{2,f,g,\epsilon}):f\in\mathcal{D}_d,\ g\in D(B),\ \epsilon\ge0\}.
\]
As in Section 11.2, every subsolution $f$ of $(I-\alpha H_\dagger)f=v$ is a strong subsolution, so if $f_0\in\mathcal{D}_d$ and $f(x_0)-f_0(x_0)=\sup_x(f(x)-f_0(x))$, then
\[
(11.83)\qquad \alpha^{-1}(f(x_0)-v(x_0))\le\inf_{g\in D(B)}\sup_{y\in E_0}\big((F(x_0,y)\cdot\nabla f_0(x_0))(h(x_0,y)\cdot\nabla f_0(x_0))+Bg(y)+\epsilon\psi(y)\big).
\]
Define $h_0(y)=(F(x_0,y)\cdot\nabla f_0(x_0))(h(x_0,y)\cdot\nabla f_0(x_0))$ for $y\in E_0$, and let $g_\lambda=\int_0^\infty e^{-\lambda t}S(t)h_0\,dt$. Then $Bg_\lambda=\lambda g_\lambda-h_0$, so
\[
\alpha^{-1}(f(x_0)-v(x_0))\le\sup_{y\in E_0}\big(\lambda g_\lambda(y)+\epsilon\psi(y)\big).
\]
Since $\|\lambda g_\lambda\|\le\|h_0\|$, for $\epsilon>0$, the supremum will be realized on the set $\{\psi>-2\epsilon^{-1}\|h_0\|\}$. Letting $\lambda\to0$, by Condition 11.42.2, we have
\[
\alpha^{-1}(f(x_0)-v(x_0))\le\sup_{y\in E_0}\big(H_0(x_0,\nabla f_0(x_0))+\epsilon\psi(y)\big),
\]
where
\[
H_0(x,p)=\int_{E_0}(F(x,y)\cdot p)(h(x,y)\cdot p)\,\pi(dy),
\]
and letting $\epsilon\to0$,
\[
\alpha^{-1}(f(x_0)-v(x_0))\le H_0(x_0,\nabla f_0(x_0)).
\]
Similarly, if $f$ is a supersolution of $(I-\alpha H_\ddagger)f=v$, $f_0\in\mathcal{D}_d$, and $f_0(x_0)-f(x_0)=\sup_x(f_0(x)-f(x))$, then
\[
\alpha^{-1}(f(x_0)-v(x_0))\ge H_0(x_0,\nabla f_0(x_0)).
\]
Consequently, every subsolution of $(I-\alpha H_\dagger)f=v$ is a subsolution of $(I-\alpha H_0)f=v$, every supersolution of $(I-\alpha H_\ddagger)f=v$ is a supersolution of $(I-\alpha H_0)f=v$, and if the comparison principle holds for $(I-\alpha H_0)f=v$, then the comparison principle holds for subsolutions of $(I-\alpha H_\dagger)f=v$ and supersolutions of $(I-\alpha H_\ddagger)f=v$.

11.3.2. Exponential tightness. Exponential tightness for $\{X_n\}$ follows by the convergence of $H_n$ and Corollary 4.17.

11.3.3. The comparison principle. Define
\[
(11.84)\qquad a_{ij}(x)=\int_{E_0}\frac12\big(h_i(x,y)F^{(j)}(x,y)+h_j(x,y)F^{(i)}(x,y)\big)\,\pi(dy),
\]
where $F(x,y)=(F^{(1)}(x,y),\ldots,F^{(d)}(x,y))$ and $h(x,y)=(h_1(x,y),\ldots,h_d(x,y))$. Then $a(x)$ is nonnegative definite and continuous, and $H_0(x,p)=p^Ta(x)p$, which is a special case of the $H$ treated in Section 10.3. Under Condition 11.42, $a$ is continuous and satisfies $|a(x)|\le K_1+K_2|x|^2$ for some $K_1$ and $K_2$, so $H_0$ satisfies Condition 9.10.2.
11.3. CONTINUOUS TIME, CENTRAL LIMIT SCALING
261
The operator $H_0$ is closely related to a (pre-)Dirichlet form. For each $f,g\in D(B)$, define
\[
\mathcal{E}(f,g)=-\int_{E_0}(fBg+gBf)\,d\pi,
\]
and write $\mathcal{E}(f)\equiv\mathcal{E}(f,f)$. Then, under Condition 11.42,
\[
H_0(x,p)=\frac12\mathcal{E}(p\cdot h(x,\cdot)).
\]
We derive some properties of $\mathcal{E}(f,g)$. Let $Y$ be a stationary solution of the martingale problem for $B$, that is, the solution with initial distribution $\pi$. For each $f\in D(B)$, let
\[
M_f(t)=f(Y(t))-f(Y(0))-\int_0^tBf(Y(s))\,ds.
\]
Lemma 11.44. Suppose $f,g\in D(B)$. Then
a) $E[M_f^2(t)]=\mathcal{E}(f)t$;
b) $E[M_f(t)M_g(t)]=\mathcal{E}(f,g)t$;
c) $\mathcal{E}(f,g)\le\sqrt{\mathcal{E}(f)\mathcal{E}(g)}$.

Proof. A small amount of algebra and It\^o's formula give
\[
M_f^2(t)=(f(Y(t))-f(Y(0)))^2-2M_f(t)\int_0^tBf(Y(s))\,ds-\Big(\int_0^tBf(Y(s))\,ds\Big)^2
\]
\[
=(f(Y(t))-f(Y(0)))^2-2\int_0^tM_f(s)Bf(Y(s))\,ds-2\int_0^t\!\!\int_0^rBf(Y(s))\,ds\,dM_f(r)-\Big(\int_0^tBf(Y(s))\,ds\Big)^2
\]
\[
=(f(Y(t))-f(Y(0)))^2-2\int_0^tf(Y(s))Bf(Y(s))\,ds+2\int_0^t\Big(f(Y(0))+\int_0^sBf(Y(r))\,dr\Big)Bf(Y(s))\,ds
-2\int_0^tBf(Y(s))\int_0^sBf(Y(r))\,dr\,ds-2\int_0^t\!\!\int_0^rBf(Y(s))\,ds\,dM_f(r)
\]
\[
=f^2(Y(t))-f^2(Y(0))-2f(Y(0))\Big(f(Y(t))-f(Y(0))-\int_0^tBf(Y(s))\,ds\Big)-2\int_0^tf(Y(s))Bf(Y(s))\,ds-2\int_0^t\!\!\int_0^rBf(Y(s))\,ds\,dM_f(r).
\]
Therefore, by stationarity,
\[
E[M_f^2(t)]=-2\int_0^tE[f(Y(s))Bf(Y(s))]\,ds=\mathcal{E}(f)t.
\]
It follows that
\[
4E[M_f(t)M_g(t)]=E[M_{f+g}^2(t)]-E[M_{f-g}^2(t)]=(\mathcal{E}(f+g)-\mathcal{E}(f-g))t=4\mathcal{E}(f,g)t
\]
and
\[
\mathcal{E}(f,g)=E[M_f(1)M_g(1)]\le\sqrt{E[M_f^2(1)]E[M_g^2(1)]}=\sqrt{\mathcal{E}(f)\mathcal{E}(g)}.
\]
Lemma 11.45. Suppose $\pi$ is the stationary distribution of $B$. Then $\mathcal{E}(\cdot)$ is convex in the sense that for each $f,g\in D(B)$ and $0\le\alpha\le1$,
\[
\mathcal{E}(\alpha f+(1-\alpha)g)\le\alpha\mathcal{E}(f)+(1-\alpha)\mathcal{E}(g).
\]
Proof. By Lemma 11.44,
\[
\mathcal{E}(\alpha f+(1-\alpha)g)=\alpha^2\mathcal{E}(f)+(1-\alpha)^2\mathcal{E}(g)+2\mathcal{E}(\alpha f,(1-\alpha)g)
\le\alpha^2\mathcal{E}(f)+(1-\alpha)^2\mathcal{E}(g)+2\sqrt{\mathcal{E}(\alpha f)\mathcal{E}((1-\alpha)g)}
=\big(\alpha\sqrt{\mathcal{E}(f)}+(1-\alpha)\sqrt{\mathcal{E}(g)}\big)^2\le\alpha\mathcal{E}(f)+(1-\alpha)\mathcal{E}(g),
\]
where the last inequality follows by Jensen's inequality.

Lemma 11.46. Under Condition 11.42, the comparison principle holds for $H_0$.

Proof. The proof is a slight variation of Lemma 9.19. Let $\Gamma\subset\mathbb{R}^d$ be compact and convex. By convexity, for $\lambda>1$ and $x,y\in\Gamma$,
\[
\mathcal{E}\Big(\frac{p\cdot h(x,\cdot)}{\lambda}\Big)=\mathcal{E}\Big(\frac{\lambda-1}{\lambda}\,\frac{p\cdot(h(x,\cdot)-h(y,\cdot))}{\lambda-1}+\frac1\lambda\,p\cdot h(y,\cdot)\Big)
\le\Big(1-\frac1\lambda\Big)\mathcal{E}\Big(\frac{p\cdot(h(x,\cdot)-h(y,\cdot))}{\lambda-1}\Big)+\frac1\lambda\mathcal{E}(p\cdot h(y,\cdot)).
\]
Therefore,
\[
\lambda H_0\Big(x,\frac p\lambda\Big)-H_0(y,p)=\frac\lambda2\mathcal{E}\Big(\frac{p\cdot h(x,\cdot)}{\lambda}\Big)-\frac12\mathcal{E}(p\cdot h(y,\cdot))
\le\frac{\lambda-1}{2}\mathcal{E}\Big(\frac{p\cdot(h(x,\cdot)-h(y,\cdot))}{\lambda-1}\Big)
=\frac{1}{2(\lambda-1)}\mathcal{E}\big(p\cdot(h(x,\cdot)-h(y,\cdot))\big)
\]
\[
=\frac{1}{\lambda-1}\int p\cdot(h(x,z)-h(y,z))\,\big(p\cdot(F(x,z)-F(y,z))\big)\,\pi(dz)
\le\frac{C_\Gamma K}{\lambda-1}\,|p|^2|x-y|^2,
\]
where Condition 11.42 was used to derive the last inequality. Therefore, Condition 9.11 is satisfied. By Lemma 9.12 and Lemma 9.15, the comparison principle holds for $H_0$.

11.3.4. The large deviation theorem. Take $U=\mathbb{R}^d$, and define $L:E\times U\to[0,\infty]$ by
\[
L(x,q)=\sup_{p\in\mathbb{R}^d}\{p\cdot q-p^Ta(x)p\},\qquad x\in\mathbb{R}^d,
\]
and $L(\infty,q)=0$.
11.4. DISCRETE TIME, CENTRAL LIMIT SCALING
263
Theorem 11.47. Suppose that Condition 11.42 is satisfied and that the large deviation principle holds for {Xn (0)} with good rate function I0 . Then {Xn } satisfies the large deviation principle in CRd [0, ∞) with a good rate function R∞ I0 (x(0)) + 0 L(x(s), x(s))ds ˙ x absolutely continuous I(x) = ∞ otherwise. Proof. The large deviation principle holds by Theorem 7.24. The rate function representation of I is the same as in Theorem 10.22. 11.4. Discrete time, central limit scaling As in Section 11.1, for n = 1, 2, . . ., let Y be an E0 valued, discretetime Markov chain with transition operator P and Yn (t) = Y ([ tn ]). Let E = Rd ∪ {∞}. We take Fn (x, y) = (nn )−1/2 F (x, y) in (11.2), with nn → 0, and assume Condition 11.48. e nq ⊂ E0 : q ∈ Q}, (1) There is an index set Q and a family of subsets of E0 , {K q e nq3 ; and e nq2 ⊂ K e n1 ∪ K such that for q1 , q2 ∈ Q, there exists q3 ∈ Q with K e nq ) = 0. for each y ∈ E0 , there exists q ∈ Q such that limn→∞ r0 (y, K Moreover, for each q ∈ Q, T > 0 and a > 0, there exists a qb(q, a, T ) ∈ Q satisfying 1 e nqb(q,a,T ) , some t ≤ −1 log P {Y (t) 6∈ K n T Y (0) = y} ≤ −a. q n e y∈Kn
(11.85) lim sup sup n→∞
(2) There exists ψ ∈ M u (E0 , R) which is upper semicontinuous and {y ∈ E0 : ψ(y) ≥ c} is compact for each c ∈ R. There exist ϕn ∈ B(E0 ) such that √ limn→∞ nn kϕn k = 0, supn supy∈E0 (P − I)ϕn (y) < ∞, and lim sup sup (P − I)ϕn (y) ∨ c − ψ(y) ∨ c ≤ 0, c ∈ R, q > 0,
e
n→∞ y∈K q n
and Y is ergodic with stationary distribution π, satisfying Z ∞ X k k lim sup (1 − ρ) ρ P f (y) − f dπ = 0, f ∈ Cb (E0 ), c ∈ R.
ρ→1 y∈{ψ≥c}
k=0
E0
(3) F satisfies sup F (0, y) < ∞,
F (x1 , y) − F (x2 , y) ≤ Kx1 − x2 ,
y
for some constant K, and Z F (x, y)π(dy) = 0,
x ∈ Rd .
E0
(4) There exists h : Rd × E0 → Rd such that (P − I)h(x, y) = −F (x, y), h(·, ·) is continuous, and for each compact Γ ⊂ Rd , Z h(x1 , y) − h(x2 , y)π(dy) < CΓ x − y, x1 , x2 ∈ Γ.
264
11. RANDOM EVOLUTIONS
Let En ≡ E × E0 . The transition operator for (Xn , Yn ) is Z p Tn f (x, y) = f (x + n /nF (x, y), z))P (y, dz), f ∈ B(En ), E0
√ and for fn (x, y) = f (x) + n1 log(1 + nn h(x, y) · ∇f (x) + nn g(y)), f ∈ Dd and g ∈ B(E0 ), we have 1 log e−nfn Tn enf (x, y) Hn fn (x, y) = nn p 1 = (f (x + n /nF (x, y)) − f (x)) n √ 1 + nn Tn (h · ∇f )(x, y) + nn P g(y) 1 + log √ nn 1 + nn h(x, y) · ∇f (x) + nn g(y) 1 F (x, y)) · ∇f (x) ≈ √ nn (nn )−1/2 (Tn (h · ∇f )(x, y) − h(x, y) · ∇f (x)) + (P − I)g(y) √ 1 + nn h(x, y) · ∇f (x) + nn g(y) 1 − (Tn (h · ∇f )(x, y) − h(x, y) · ∇f (x))2 2 1 ≈ F (x, y) · ∇f (x)h(x, y) · ∇f (x) − (F (x, y) · ∇f (x))2 + (P − I)g(y) 2 1 (h(x, y) · ∇f (x))2 − (P h(x, y)) · ∇f (x))2 + (P − I)g(y), = 2 where the last equality follows from the fact that (P − I)h = −F . e nq and Following the continuous time case in Section 11.3, we take Knq = E × K 0 b b E = E × E0 , where (E0 , rb0 ) is the compactification of (E0 , r0 ) in Section 11.1.1. Let ϕn , ψ be given by Condition 11.48.2. For each f ∈ Dd , g ∈ B(E0 ), Let ϕn , ψ be given by Condition 11.42,2. For f ∈ Dd , g ∈ D(B), and ≥ 0, setting 1 Gf,g (x, y) = (h(x, y) · ∇f (x))2 − (P h(x, y) · ∇f (x))2 + (P − I)g(y), 2 define Gf,g (x, y) + ψ(y) y ∈ E0 G1,f,g, (x, y) = b0 − E0 −∞ y∈E and Gf,g (x, y) − ψ(y) y ∈ E0 G2,f,g, (x, y) = b0 − E0 . ∞ y∈E Again, by Lemma 11.3, +
G1,f,g, (x, y) = lim
sup
G1,f,g, (x, z)
G2,f,g, (x, y) = lim
inf
G2,f,g, (x, z).
δ→0 z∈Bδ (y)∩E0
and δ→0 z∈Bδ (y)∩E0
Then we can verify the Convergence Condition 7.11 for convergence to limit operators H† H‡
= {(f, G1,f,g, ) : f ∈ Dd , g ∈ D(B), ≥ 0} = {(f, G2,f,g, ) : f ∈ Dd , g ∈ D(B), ≥ 0}.
11.5. DIFFUSIONS WITH PERIODIC COEFFICIENTS
265
Setting Z H0 f (x) = E0
1 (h(x, y) · ∇f (x))2 − (P h(x, y)) · ∇f (x))2 π(dy) 2
and Z E(f, g) =
(f g − P f P g)dπ, E0
the remainder of the analysis is similar to the continuous time case. In particular, observe that if {Y (k)} is a stationary Markov chain with transition function P and initial distribution π, then E[(f (Y (k)) − P f (Y (k − 1)))(g(Y (k)) − P g(Y (k − 1)))] = E(f, g). Define a by (11.84), and let L(x, q) = sup {p · q − pT a(x)p},
x ∈ Rd .
p∈Rd
Theorem 11.49. Suppose that Condition 11.48 is satisfied and that the large deviation principle holds for {Xn (0)} with good rate function I0 . Then {Xn } satisfies the large deviation principle in CRd [0, ∞) with a good rate function R∞ I0 (x(0)) + 0 L(x(s), x(s))ds ˙ x absolutely continuous I(x) = ∞ otherwise. 11.5. Diffusions with periodic coefficients In this section we consider a large deviation result of Baldi [6] on diffusions with periodic coefficients. The technicalities needed to obtain the limiting H are essentially the same as in Section 11.3. The comparison principle is immediate since H is spatially homogeneous. Our conditions are slightly weaker than Baldi’s, requiring only nondegeneracy and continuity of the diffusion coefficients. We also include a drift term. Freidlin and Sowers [51] consider a similar problem. The scaling treated here corresponds to δ/ → 0 in their notation. Let σ be a d × d matrixvalued function and b be a Rd valued function that are continuous and periodic in the sense that for each 1 ≤ i ≤ d, there is a period pi > 0 such that σ(y) = σ(y + pi ei ) and b(y) = b(y + pi ei ) for all y ∈ Rd . Define a = σσ T , and assume that a(y) is positive definite for all y. Let (11.86)
1 dXn (t) = √ σ(αn Xn (t))dW (t) + b(αn Xn (t))dt, n
where αn > 0 and limn→∞ n−1 αn = ∞. 11.5.1. Convergence of Hn . Define E0 ≡ [0, p1 ) × · · · × [0, pd ), topologized as a torus so that it is compact, and let En = Rd , E = Rd ∪ {∞}, E 0 = E × E0 , ηn0 (x) = (x, y) where yi = αn xi mod pi , and ηn (x) = x. As before, Dd = {f : f Rd − f (∞) ∈ Cc2 (Rd )}. Xn solves the martingale problem given by An f (x) =
X ∂2 ∂ 1 X aij (αn x) f (x) + bi (αn x) f (x), 2n ij ∂xi ∂xj ∂x i i
f ∈ Dd ,
266
11. RANDOM EVOLUTIONS
and Hn f (x) =
X 1 X 1X aij (αn x)∂i ∂j f (x)+ bi (αn x)∂i f (x)+ aij (αn x)∂i f (x)∂j f (x) 2n ij 2 ij i
for f ∈ Dd . Define Bg(y) =
1X ∂2 aij (y) g(y), 2 ij ∂yi ∂yj
for all g ∈ D(B) ⊂ C(E0 ) that extend periodically to a function g ∈ C 2 (Rd ). The continuity and positive definiteness of a imply that the martingale problem for B is well posed, and the compactness of E0 and nondegeneracy imply that the corresponding process is ergodic. In particular, if {S(t)} is the corresponding semigroup, then Condition 11.42.2 holds with ϕ ≡ 1 and ψ ≡ 0. That is, there exists a π ∈ P(E0 ), Z ∞ Z lim sup λ e−λt S(t)f (y) − f dπ = 0, f ∈ Cb (E0 ). λ→0+ y∈E0
0
E0
For each f ∈ Dd and g ∈ D(B), let fn (x) = f (x) + n g(αn x), where n = nαn−2 . Then ∂2 1 X ∂2 Hn fn (x) = aij (αn x) f (x) + +n αn2 g(αn x) 2n ij ∂xi ∂xj ∂yi ∂yj ∂ X ∂ 1 ∂ ∂ aij (αn x) f (x) + n αn g(αn x) f (x) + n αn g(αn x) + 2 ij ∂xi ∂yi ∂xj ∂yj ∂ X ∂ + bi (αn x) f (x) + n αn g(αn x) . ∂xi ∂yi i Define Gf,g (x, y) =
1 ∇f (x)T a(y)∇f (x) + b(y) · ∇f (x) + Bg(y). 2
Then lim sup Hn fn (x) − Gf,g (x, αn x) = 0,
n→∞ x
and we define H = {(f, Gf,g ) : f ∈ Dd , g ∈ D(B)}. Then, as before, if f is a subsolution of (I − αH)f = h, it is a strong subsolution, and if f0 ∈ Dd and f (x0 ) − f0 (x0 ) = sup(f (x) − f0 (x)), x
then 1 sup( ∇f (x)T a(y)∇f (x) + b(y) · ∇f (x) + Bg(y)). 2 g∈D(B) y R∞ 1 T Define h0 (y) = 2 ∇f (x) a(y)∇f (x)+b(y)·∇f (x) and gλ = 0 e−λt S(t)h0 dt. Then Bgλ = λgλ − h0 , so α−1 (f (x0 ) − v(x0 )) ≤
inf
α−1 (f (x0 ) − v(x0 )) ≤ lim sup λgλ (y) = H0 (∇f (x0 )), λ→∞
where H0 (p) =
y
1 T p ap + b · p 2
11.6. SYSTEMS WITH SMALL DIFFUSION AND AVERAGING
267
R R for aij = E0 aij dπ, bi = E0 bi dπ. The corresponding inequality holds for supersolutions and since H0 does not depend on x, the comparison principle follows. As in Section 10.1.5, the next result follows from Theorems 7.18 and 8.28. Theorem 11.50. Let Xn satisfy (11.86) where b and σ are continuous and periodic and σ(y) is nonsingular for all y. Suppose {Xn (0)} satisfies a large deviation principle on Rd with good rate function I0 . Then {Xn } satisfies a large deviation principle with good rate function Z ∞ x(s) ˙ − b2 I0 (x(0)) + 21 ds x absolutely continuous (11.87) I(x) = . a 0 ∞ otherwise 11.6. Systems with small diffusion and averaging Veretennikov [125] considers a system of the following form: Z t Z t 1 σ X (Xn (s), Yn (s))dW (s) Xn (t) = Xn (0) + F (Xn (s), Yn (s))ds + √ n 0 0 Z t Z t √ Yn (t) = Yn (0) + nb(Xn (s), Yn (s))ds + n σ Y (Xn (s), Yn (s))dW (s), 0
0
where W is a mdimensional standard Brownian motion. In [125], σ Y is independent of x. Theorem 7.9.2 of Freidlin and Wentzell [52] considers the case σ X = 0, σ Y independent of x, and Yn confined to a compact manifold. Guillin [56] considers related models. We assume that F : Rd × Rl → Rd , b : Rd × Rl → Rl , σ X : Rd × Rl → M d×m , and σ Y : Rd × Rl → M l×m are locally bounded and that the solution of the system exists and is unique so that (Xn , Yn ) is a Rd+l valued Markov process with full generator An . For f sufficiently smooth and satisfying appropriate growth conditions, 1 X X An f (x, y) = aij (x, y)∂xi ∂xj f (x, y) + F (x, y) · ∇x f (x, y) 2n X + aXY ij (x, y)∂xi ∂yj f (x, y) X n + aYij (x, y)∂yi ∂yj f (x, y) + nb(x, y) · ∇y f (x, y), 2 where aX = σ X (σ X )T , aXY = σ X (σ Y )T , and aY = σ Y (σ Y )T , and 1 −nf (x,y) Hn f (x, y) = e An enf (x, y) n 1 X X = aij (x, y)∂xi ∂xj f (x, y) + F (x, y) · ∇x f (x, y) 2n X + aXY ij (x, y)∂xi ∂yj f (x, y) X 1 +n aYij (x, y)∂yi ∂yj f (x, y) + nb(x, y) · ∇y f (x, y) 2 1X X + aij (x, y)∂xi f (x, y)∂xj f (x, y) 2 X +n aXY ij (x, y)∂xi f (x, y)∂yj f (x, y) n2 X Y + aij (x, y)∂yi f (x, y)∂yj f (x, y). 2
268
11. RANDOM EVOLUTIONS
As before, Dd = {f ∈ Cb (Rd ∪ {∞}) : f Rd − f (∞) ∈ Cc2 (Rd )}, and we define Dd++ = {f ∈ Dd : inf y∈Rd f (y) > 0}. For each g ∈ C 2 (Rl ), x ∈ Rd , and p ∈ Rd , define (11.88) X 1X Y aij (x, y)∂yi ∂yj g(y)+b(x, y)·∇y g(y)+ aXY y ∈ Rl . Bx,p g(y) = ij (x, y)pi ∂yj g(y), 2 i,j i,j For f ∈ Cc2 (Rl ) and g = ef , Bx,p g(y) g(y) (11.89)
= e−f (y) Bx,p ef (y) =
l 1 1 X Y a (x, y)∂yi ∂yj f (y) + (σ Y (x, y))T · ∇y f (y)2 2 i,j=1 ij 2
+b(x, y) · ∇y f (y) + pT σ X (x, y)(σ Y (x, y))T ∇y f (y). We assume that Condition 11.51. (1) F , b, σ X , and σ Y are continuous on Rd × Rl , and there exist c1 , c2 such that sup (F (x, y) + σ X (x, y)) ≤ c1 + c2 x. y∈Rl
(2) There exists ϕ1 , ϕ2 ∈ C 2 (Rd ) satisfying (a) ϕi ≥ 1, i = 1, 2 (b) limy→∞ ϕi (y) = ∞, i = 1, 2 (c) For each k > 0 (11.90)
lim
sup
y→∞ x,p≤k
Bx,p ϕi (y) = −∞, ϕi (y)
i = 1, 2
(d) log ϕ1 (y) =0 y→∞ log ϕ2 (y)
(11.91)
lim
(e) There exists b0 > 0 such that for i = 1, 2, (11.92)
sup y∈Rl
Bx,p ϕi (y) < b0 (1 + p(1 + x)) ϕi (y)
e q = {y : Lemma 11.52. Suppose that Condition 11.51 holds. For q > 0, let K n qn ϕ2 (y) ≤ e }. Then for q > 0, k > 0, T > 0, and a > 0, there exist qb(q, k, a, T ) > 0 and b k(q, k, a, T ) > 0 for which lim sup
sup
1 e qb(q,k,a,T ) or Xn (t) > b log P (Yn (t) ∈ /K k(q, k, a, T ), n n e
n→∞ x≤k,y∈K q n
(11.93)
some t ≤ T Xn (0) = x, Yn (0) = y)
Proof. The lemma follows by Lemma 4.20 taking f (x, y) = log(1 + x2 ).
1 n
≤ −a.
log ϕ2 (y) +
11.6. SYSTEMS WITH SMALL DIFFUSION AND AVERAGING
269
e nq = {y : ϕ2 (y) ≤ eqn }. Lemma 11.53. Let Condition 11.51.2 hold, and define K Then {y ∈ Rl :
(11.94)
sup Bx,p ϕ1 (y)/ϕ1 (y) ≥ c} x,p≤k
is compact for each c ∈ R, and for each q > 0, 1 (11.95) lim log sup ϕ1 (y) = 0. n→∞ n e nq y∈K Proof. The compactness of (11.94) follows from from (11.90), and (11.95) follows from (11.91) as in Example 11.24. The following lemma gives a simple criterion for Condition 11.51.2. Lemma 11.54. Suppose there exist δ2 > δ1 > 0 and b1 such that Pl Y T XY (x, y)y i=1 aii (x, y) + 2b(x, y) · y + 2p a ≤ b1 (1 + p(1 + x)) 2 1−δ 2 (1 + y ) y T aY (x, y)y ≤ b1 (1 + p(1 + x)) (1 + y2 )1−δ2 and for each k, Pl (11.96)
lim sup
i=1
y→∞ x≤k
aYii (x, y) + 2b(x, y) · y + 2kaXY (x, y)y = −∞ (1 + y2 )1−δ1
Then Condition 11.51.2 is satisfied with ϕ1 (y) = exp{(1 + y2 )δ1 } and ϕ2 (y) = exp{(1 + y2 )δ2 }. Proof. For δ = δ1 , δ2 , P δ( j aYjj (x, y) + 2b(x, y) · y + 2pT aXY (x, y)y) 2(δ − (1 − δ)(1 + y2 )−δ )y T aY (x, y)y Bx,p ϕi (y) = + ϕi (y) (1 + y2 )1−δ (1 + y2 )2−2δ and noting that (1 + y2 )1−δ2 ≤ (1 + y2 )1−δ1 , the lemma follows. qP 2 Remark 11.55. Let a denote the matrix norm ij aij .
Assume that
Y
inf x,y a (x, y) > 0 and that for each k > 0, (11.97)
lim
sup
y→∞ x,p≤k,x
y (bT (x, y) + pT aXY (x, y)) y
aY (x, y)
= −∞.
Then (11.96) is satisfied with δ1 = 1/2. This condition is a slightly generalized version of (2) in [125]. 11.6.1. Convergence of Hn . For f ∈ Dd and h ∈ Dd+l , define fn (x, y) = f (x) + n−1 h(x, y). Then the pointwise limit of Hn fn is 1X X Hf (x, y) = F (x, y) · ∇x f (x) + aij (x, y)∂xi f (x)∂xj f (x) 2 1X Y + a (x, y)∂yi ∂yj h(x, y) + b(x, y) · ∇y h(x, y) 2X ij + aXY ij (x, y)∂xi f (x)∂yj h(x, y) X 1 + aYij (x, y)∂yi h(x, y)∂yj h(x, y). 2
270
11. RANDOM EVOLUTIONS
We must, however, verify the Convergence Condition 7.11. Let En = Rd × Rl and E0 = Rl ∪ {∞}, the one point compactification of Rl , and take E 0 = Rd × E0 . e nq = {y : ϕ2 (y) ≤ eqn } and Assume Condition 11.51. For q > 0, define K e q }. Let η 0 be the natural injection of En into E 0 , and Knq = {(x, y) : x ≤ q, y ∈ K n let γ(x, y) = x. As in Section 11.1.1, Conditions 2.8 and 7.9 are satisfied, and we can define LIM convergence according to Definition 2.5. e nq , for For n > 2, define ϕn ∈ Dl++ as in (11.41) so that ϕn (y) = ϕ1 (y), y ∈ K all n sufficiently large and 1 lim log kϕn k = 0. n→∞ n Let f ∈ Dd and g ∈ Dl++ , and for κ ∈ (0, 1), define gn,κ (y) = g(y)1−κ ϕn (y) κ and 1 fn (x, y) = f (x) + log gn,κ (y). n Hn fn (x, y) becomes 1 X X Hn fn (x, y) = a (x, y)∂xi ∂xj f (x) + F (x, y) · ∇x f (x) 2n i,j ij +
=
1X Y a (x, y)∂yi ∂yj log gn,κ (y) 2 i,j ij
+b(x, y) · ∇y log gn,κ (y) 1 + ∇x f (x)T aX (x, y)∇x f (x) 2 +∇x f (x)T aXY (x, y)∇y log gn,κ (y) 1 + ∇y log gn,κ (y)T aY (x, y)∇y log gn,κ (y) 2 1 X X a (x, y)∂xi ∂xj f (x) + F (x, y) · ∇x f (x) 2n i,j ij Bx,∇f (x) gn,κ (y) 1 . + ∇x f (x)T aX (x, y)∇x f (x) + 2 gn,κ (y)
For fixed x and p, Bx,p is the generator for a Markov process, and applying Lemma 11.26, we have Bx,∇f (x) g(y) 1 Hn fn (x, y) ≤ F (x, y) · ∇x f (x) + ∇x f (x)T aX (x, y)∇x f (x) + (1 − κ) 2 g(y) X Bx,∇f (x) ϕn (y) 1 +κ + aX ij (x, y)∂xi ∂xj f (x). ϕn (y) 2n Similarly, if we take gen,κ (x, y) = g(y)1+κ ϕn (y)−κ , κ > 0, and 1 fen (x, y) = f (x) + log gen,κ (y), n then Bx,∇f (x) g(y) 1 Hn fen (x, y) ≥ F (x, y) · ∇x f (x) + ∇x f (x)T aX (x, y)∇x f (x) + (1 + κ) 2 g(y) X Bx,∇f (x) ϕn (y) 1 −κ + aX ij (x, y)∂xi ∂xj f (x). ϕn (y) 2n
11.6. SYSTEMS WITH SMALL DIFFUSION AND AVERAGING B
271
ϕ (y)
1 e Let H(x, y, p) = F (x, y) · p + 12 pT aX (x, y)p and ψx,p (y) = x,p ϕ1 (y) . For 0 < κ < 1, define ( B f (x) g(y) e H(x, y, ∇x f (x)) + (1 − κ) x,∇xg(y) + κψx,∇x f (x) (y), y ∈ Rl G1,f,g,κ (x, y) = −∞, y = ∞,
and for κ > 0, define ( B f (x) g(y) e H(x, y, ∇x f (x)) + (1 + κ) x,∇xg(y) − κψx,∇x f (x) (y), G2,f,g,κ (x, y) = ∞,
y ∈ Rl y = ∞.
Set H† = {(f, G1,f,g,κ ) : f ∈ Dd , g ∈ Dl++ , κ ∈ [0, 1)} and H‡ = {(f, G2,f,g,κ ) : f ∈ Dd , g ∈ Dl++ , κ ∈ [0, 1)}. As in Section 11.1.1, the Convergence Condition 7.11 holds. 11.6.2. Exponential tightness. The exponential compact containment condition follows by Condition 11.51.2. Exponential tightness for {Xn } follows by the convergence of Hn and Corollary 4.17. 11.6.3. The comparison principle. Let (11.98) 1 Bx,p g(y) H1 (x, p) = inf inf++ sup (F (x, y)·p+ pT aX (x, y)p+(1−κ) +κψx,p (y)), 2 g(y) κ∈(0,1) g∈D y∈Rl l (11.99) H2 (x, p) = sup sup κ>0
g∈Dl++
1 Bx,p g(y) inf (F (x, y) · p + pT aX (x, y)p + (1 + κ) − κψx,p (y)), 2 g(y)
y∈Rl
and (11.100)
Hi f (x) = Hi (x, ∇f (x)),
f ∈ Dd .
As in the previous examples, the critical hypothesis will be that (11.101)
H1 (x, p) ≤ H2 (x, p).
Note that the results of Section 11.2 and Appendix B can be applied to the operator Bx,p for fixed x and p to verify (11.101). In particular, we have the following: Lemma 11.56. Assume that for each x, p ∈ Rd , Bx,p satisfies the H¨ older conditions of Example 11.33 and that Condition 11.51 holds. Then (11.101) holds. Proof. The computations in Example 11.33 verify the conditions of the Lemma 11.32. For nondegenerate aX , the comparison principle is an immediate consequence of Lemma 9.16. Lemma 11.57. Let h ∈ Cb (Rd ) and α > 0. Suppose that Condition 11.51 and (11.101) hold. If for each k > 0, there exists k > 0 such that ck ≡
sup x≤k,y∈Rl
F (x, y) < ∞
272
11. RANDOM EVOLUTIONS
and inf
x≤k,y∈Rl
pT aX (x, y)p ≥ k p2 ,
p ∈ Rd ,
then the comparison principle holds for subsolutions of (I − αH1 )f = h and supersolutions of (I − αH2 )f = h. Proof. For g ∈ Dl++ and 0 < κ < 1, the continuity of the coefficients and the compactness of (11.94) implies 1 Bx,p g(y) hg,κ (x, p) ≡ sup (F (x, y) · p + pT aX (x, y)p + (1 − κ) + κψx,p (y)) 2 g(y) l y∈R is upper semicontinuous. Since the infimum of a collection of upper semicontinuous functions is upper semicontinuous, H1 is upper semicontinuous. Similarly, H2 is lower semicontinuous. Let κ > 0 and g = 1. Then 1 H2 (x, p) ≥ inf (F (x, y) · p + pT aX (x, y)p − κψx,p (y)) l 2 y∈R 1 ≥ −ck p + k p2 − κ sup ψx,p (y), 2 y for x ≤ k, and hence lim
inf H2 (x, p) = ∞.
p→∞ x≤k
Consequently, Lemma 9.16 implies that Condition 9.10.1 holds. Suppose xm  → ∞ and xm pm  → 0. Then 1 H1 (xm , pm ) ≤ inf sup (F (xm , y) · pm + pTm aX (xm , y)pm + κψxm ,pm (y)) 0 0. Then for x, z ∈ Γ and λ > 1, Z Z 1 1 X T 2 σ (x, y) p µ(dy) − σ X (z, y)T p2 ν(dy) 2λ 2 Z 1 1 = (σ X (x, y)T − σ X (z, y)T + σ X (z, y))p2 − σ X (z, y)T p2 µ(dy) 2λ 2 Z 1 λ−1 X ≤ ((LΓ x − zp)2 + 2LΓ x − zpσ X (x, y)T p) − σ (z, y)T p2 µ(dy) 2λ 2λ 4λ2 + λ − 1 2 ≤ L x − z2 p2 . 2λ(λ − 1) Γ Verification of (11.108) is more difficult. If Bx,p generates a reversible process with respect to a stationary distribution, then usually I(µx, p) can be computed through its associated Dirichlet form (e.g. Theorem 7.44 of Stroock [115]). Such an explicitly parameterized representation (in x, p) of I may be helpful in verifying (11.108). Sometimes, (11.108) can be verified directly. One set of conditions is given in the following lemma. Lemma 11.60. Assume that (1) σ Y (x, y) = σ Y (y) is independent of x. (2) aY is uniformly nondegenerate aY (y) = σ Y (y)σ Y (y)T ≥ 0 I,
(11.110)
some 0 > 0.
X
(3) σ (x, y) is locally Lipschitz in x uniformly in y sup x+z≤N,x6=z;y∈Rl
σ X (x, y) − σ X (z, y) < ∞, x − z
∀N > 0;
(4) b(x, y) is locally Lipschitz in x, uniformly in y sup x+z≤N,x6=z;y∈Rl
(b(x, y) − b(z, y)) < ∞, x − z
N > 0;
(5) pT σ X (σ Y )T = 0 for all p ∈ Rd . Then Condition 11.58 is satisfied with ωΓ,λ (x − z, x − zp) = LΓ
λ x − z2 . λ−1
Remark 11.61. If pT σ X (σ Y )T = 0 for all p ∈ Rd , then Bx,p = Bx and I(µx, p) = I(µx) do not depend on p. Proof. Let Γ ⊂ Rd Rbe compact, λ > 1, x, z ∈ Γ, and p ∈ Rd . By definition, I(µz, p) = supg∈Cc2 (Rl ) − e−g Bz,p eg dµ. For each µ ∈ P(Rl ), > 0, there exists a g0 ∈ Cc2 (Rd ) depending on µ, z, p, such that Z I(µz) ≤ − e−g0 (y) Bz eg0 (y)µ(dy).
11.6. SYSTEMS WITH SMALL DIFFUSION AND AVERAGING
275
Hence I(µz) − λI(µx) Z ≤ − e−g0 (y) Bz eg0 (y)µ(dy) + λ ≤+λ
Z inf
g∈Cc2 (Rl )
e−g(y) Bx eg (y)µ(dy)
Z X l 1 1 aYij (y)∂yi ∂yj (λ−1 g0 )(y) + (σ Y (y))T · ∇y (λ−1 g0 )(y)2 2 i,j=1 2 +b(x, y) · ∇y (λ−1 g0 )(y) µ(dy)
Z X l 1 1 − aY (y)∂yi ∂yj g0 (y) + (σ Y (y))T · ∇y g0 (y)2 2 i,j=1 ij 2 +b(z, y) · ∇y g0 (y) µ(dy) Z 11−λ Y σ (y)T ∇g0 (y)2 µ(dy) =+ (b(x, y) − b(z, y))∇g0 (y) + 2 λ Z λ−1 ≤+ b(x, y) − b(z, y)∇g0 (y) − 0 ∇g0 (y)2 µ(dy) 2λ Z 2 λb(x, y) − b(z, y) µ(dy), ≤+ 20 (λ − 1) where 0 > 0 is given by (11.110). Note that the integral in the last expression does not depend on , so by the Lipschitz continuity of b, there exists LΓ ≥ 0 such that for x, z ∈ Γ and λ > 1, I(µz) − λI(µx) ≤ LΓ
(11.111)
λ x − z2 λ−1
and (11.108) holds.
Condition 11.62. For each x, p ∈ Rd , there exists πx,p ∈ P(Rl ) such that I(πx,p x, p) = 0. Under condition Conditions 11.51.2, Bx,p has a stationary distribution πx,p and I(πx,p x, p) = 0. (See Lemmas 11.23, 11.2, and 11.38.) Lemma 11.63. Assume that Conditions 11.51.1, 11.58 and 11.62 hold, and (11.104) is satisfied. Let α > 0 and h ∈ Cb (E). Then the comparison principle holds for (I − αH)f = h, and hence, the comparison principle holds between subsolutions of (I − αH† )f = h and supersolutions of (I − αH‡ )f = h. Proof. Let x, p ∈ Rd , µ ∈ P(Rl ), and denote Z F (x, µ) =
F (x, y) · pµ(dy),
X
a (x, µ) =
Z
aX (x, y)µ(dy).
276
11. RANDOM EVOLUTIONS
Let λ > 1. Then p λH(x, ) − H(z, p) λ =
sup
inf
µ∈P(Rl )
ν∈P(Rl )
F (x, µ) · p +
1 T X p p a (x, µ)p − λI(µx, ) 2λ λ
1 −F (z, ν) · p − pT aX (z, ν)p + I(νz, p) 2 1 1 T X p a (x, µ)p − pT aX (z, ν)p ≤ sup (F (x, µ) − F (z, ν)) · p + 2λ 2 µ∈P(Rl ) p −λI(µx, ) + I(νz, p) λ for any ν ∈ P(Rl ). For compact Γ ⊂ Rd and x, z ∈ Γ, Condition 11.58 gives p (11.112) λH(x, ) − H(z, p) ≤ 2ωΓ,λ (z − x, z − xp). λ Hence Condition 9.10.1 is satisfied. Noting that I(µx, p) ≥ 0, Conditions 11.51.1 and 11.62 imply that Condition 9.10.2 is satisfied. By Lemma 9.15, the comparison principle holds for H. 11.6.4. Variational representation. If I(µx, p) does not depend on p, then Z Z 1 T F (x, y) · pµ(dy) + p ( aX (x, y)µ(dy))p − IB (µx) , H(x, p) = sup 2 µ∈P(Rl ) Rl Rl and H(x, p) is convex in p. This is the case when aXY = 0, or slightly more generally, when aXY · q = 0 for every q ∈ Rd . Frequently, H(x, p) is also continuous in (x, p). This is the case, for example, when Condition 11.58 is satisfied (hence (11.112) holds) and when I(µx, p) is pindependent. If H(x, p) is both continuous in (x, p) ∈ Rd × Rd and is convex in p, the large deviation rate function admits a simple variational representations. Let ϕ(x) = 1 + x. We define p Lϕ (x, q) = sup (p · q − H(x, )), x ∈ Rd . ϕ(x) d p∈R If H(x, p) is convex in p, then H(x, p) = sup (ϕ(x)p · q − Lϕ (x, q)),
x ∈ Rd .
q∈Rd
As in Lemma 10.21, we have the following. Lemma 11.64. Suppose that aXY · q = 0 for every q ∈ Rd (hence I(µx, p) = I(µx) and Bx,p = Bx do not depend on p) and that H(x, p) is continuous in (x, p) ∈ Rd ×Rd . Assume that for each x ∈ Rd , there exists a stationary distribution R πx for Bx and that F (x, y)πx (dy) is Lipschitz continuous in x. Let U = Rd and Af (x, q) = ϕ(x)q∇f (x),
q ∈ U, f ∈ Dd ,
where ∇f (∞) = 0, and extend the definition of Lϕ by Lϕ (∞, q) =
lim inf
x→∞,q 0 →q
Lϕ (x, q 0 ).
11.6. SYSTEMS WITH SMALL DIFFUSION AND AVERAGING
277
Then Lϕ (x, q) ≥ 0 is lower semicontinuous on E × U , Hf (x) = sup (Af (x, u) − Lϕ (x, u)),
x ∈ E, f ∈ Dd ,
q∈Rd
and Conditions 8.9 and 8.10 are satisfied. Furthermore, Condition 8.11 is satisfied with H† = H‡ = H. Proof. As in Lemma 10.21, Condition 8.9 is satisfied. In particular, lim
inf
inf
N →∞ x∈Rd q=N
Lϕ (x, q) = ∞, q
Condition 8.30 holds. We note that Z H(x, p) ≥ F (x, y) · pπx (dy) R and that Lϕ (x, q) ≥ 0. For q(x) = F (x, y)πx (dy), we must have Z p Lϕ (x, q(x)) = sup (p · F (x, y)πx (dy) − H(x, ) = 0. ϕ(x) d p∈R Hence Condition 8.10 holds with Z x(t) = x0 +
t
F (x(s), y)πx(s) (dy) 0
if x0 ∈ Rd , and x(t) = ∞ if x0 = ∞. By Lemma 8.32, Conditions 8.9 and 8.11 hold.
If σ X = 0, then (11.102) simplifies to Z (11.113) H(x, p) = sup { F (x, y) · pµ(dy) − I(µx)}, µ∈P(Rl )
and a modification of Lemma 11.15 gives a representation of the form Z ∞ I(x) = inf{I0 (x(0)) + I(µs x(s))ds : 0 Z tZ (11.114) x(t) = x(0) + F (x(s), y)µs (dy)ds}; 0
E0
however, because of the xdependency in I(µx), the conditions are more complicated, and we do not pursue this representation further. 11.6.5. The large deviation theorem. We have the following large deviation theorem. Theorem 11.65. Suppose that Conditions 11.51, 11.58, and 11.62 hold, and inequality (11.104) is satisfied. Then (1) {Xn } satisfies the large deviation principle in CRd [0, ∞) with a good rate function. (2) Suppose that the conditions of Lemma 11.64 are satisfied. Then for L given by L(x, p) = sup (p · q − H(x, p)), p∈Rd
278
11. RANDOM EVOLUTIONS
we have R∞ I0 (x(0)) + 0 L(x(s), x(s))ds ˙ (11.115) I(x) = ∞
for x absolutely continuous otherwise.
CHAPTER 12
Occupation measures Large deviations for occupation measures, such as Example 1.11, have been studied by many authors beginning with the fundamental work of Donsker and Varadhan [33]. The books [52, 57, 29, 30, 35] discuss these results at length. In this chapter, we derive large deviation results for occupation measures (both continuous and discretetime) as corollaries of the random evolution results in Chapter 11. Theorem 12.4 gives the main discrete time result and Theorem 12.7 the main continuous time result. The critical assumptions are again the inequalities (12.10) and (12.19). Conditions implying these inequalities and their relationship to earlier work are discussed in Chapter 11 and Appendix B.
Let $E_0$ be a complete, separable metric space, and let $M_f(E_0)$ be the space of finite Borel measures on $E_0$. Define $E = M_f(E_0)$. We take the weak topology on $E$, so that $\mu_n\to\mu$ if and only if $\int_{E_0} f\,d\mu_n\to\int_{E_0} f\,d\mu$ for every $f\in C_b(E_0)$, and set $E_n = E_0\times E$. In the continuous-time case, let $Y$, $\{S(t)\}$, and $B$ be as in Section 11.2. Define $Y_n(t) = Y(nt)$ and
(12.1)
$$Z_n(C,t) = \frac1n\int_0^{nt} I_C(Y(s))\,ds = \int_0^t I_C(Y_n(s))\,ds.$$
Then $X_n$ defined by $X_n(t) = (Y_n(t), Z_n(\cdot,t))$ is a Markov process with state space $E_n$. We can consider $Z_n$ as a random variable with values in
$$\mathcal{L}(E_0) = \{z\in M(E_0\times[0,\infty)) : z(E_0\times[0,t]) = t,\ t\geq 0\}.$$
(Topologize $\mathcal{L}(E_0)$ by weak convergence on bounded time intervals, that is, $z_n\to z$ if and only if
$$\int_{E_0\times[0,t]} f(u,s)\,z_n(du\times ds)\to\int_{E_0\times[0,t]} f(u,s)\,z(du\times ds)$$
for all $f\in C_b(E_0\times[0,\infty))$ and $t\geq 0$.) Let $M_{\mathcal{P}(E_0)}[0,\infty)$ be the space of measurable, $\mathcal{P}(E_0)$-valued functions on $[0,\infty)$. If $z\in\mathcal{L}(E_0)$, then there exists $\mu\in M_{\mathcal{P}(E_0)}[0,\infty)$ such that
(12.2)
$$z(C\times[0,t]) = \int_0^t\mu(C,s)\,ds,\qquad C\in\mathcal{B}(E_0),$$
and we will write $\dot z(t) = \mu(t)$. Conversely, if $\mu\in M_{\mathcal{P}(E_0)}[0,\infty)$, then (12.2) defines an element of $\mathcal{L}(E_0)$. We will write $z(t) = \int_0^t\mu(s)\,ds$.
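The normalized occupation measure is easy to visualize on a finite state space. The following self-contained sketch (not from the text; the two-state chain, its rates, and all helper names are our own illustrative choices) simulates a continuous-time Markov chain and computes $Z_n(C,t) = \frac1n\int_0^{nt} I_C(Y(s))\,ds$, checking the defining property $z(E_0\times[0,t]) = t$ of elements of $\mathcal{L}(E_0)$.

```python
import random

def simulate_ctmc(rates, y0, t_max, rng):
    """Jump path of a finite-state CTMC with generator matrix `rates`.

    Returns a list of (jump_time, state) pairs starting at (0.0, y0).
    """
    t, y = 0.0, y0
    path = [(0.0, y0)]
    while True:
        t += rng.expovariate(-rates[y][y])      # exponential holding time
        if t >= t_max:
            return path
        r = rng.uniform(0.0, -rates[y][y])      # next state by off-diagonal rates
        for j, q in enumerate(rates[y]):
            if j == y:
                continue
            if r < q:
                y = j
                break
            r -= q
        path.append((t, y))

def occupation_times(path, T):
    """Time spent in each state on [0, T], read off the jump path."""
    occ = {}
    for i, (ti, yi) in enumerate(path):
        t_next = min(path[i + 1][0] if i + 1 < len(path) else T, T)
        occ[yi] = occ.get(yi, 0.0) + (t_next - ti)
    return occ

# Z_n(C, t) = (1/n) * (time Y spends in C during [0, nt])
n, t = 40, 2.0
rng = random.Random(1)
path = simulate_ctmc([[-1.0, 1.0], [2.0, -2.0]], 0, n * t, rng)
Zn = {s: tau / n for s, tau in occupation_times(path, n * t).items()}
```

By construction $\sum_C Z_n(C,t) = t$ for every sample path, which is exactly the constraint defining $\mathcal{L}(E_0)$; as $n\to\infty$, $Z_n(\cdot,t)/t$ concentrates on the stationary distribution, and the large deviations of that convergence are the subject of this chapter.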
In the discrete-time case, let $\widetilde Y = \{\widetilde Y(k), k\geq 0\}$ be an $E_0$-valued Markov chain with one-step transition probability $P(x,dy)$, and define
(12.3)
$$Z_n(C,t) = \frac1n\sum_{k=0}^{[nt]-1} I_C(\widetilde Y(k)).$$
Setting $Y_n(t) = \widetilde Y([nt])$, we have
$$Z_n(C,t) = \int_0^{[nt]/n} I_C(Y_n(s))\,ds.$$
It follows that $X_n(t) = (Y_n(t), Z_n(\cdot,t))$ is a Markov chain with time-step $\frac1n$. As in the continuous-time case, we can consider $Z_n$ as a random variable with values in $\mathcal{L}_n(E_0) = \{z\in M(E_0\times[0,\infty)) : z(E_0\times[0,t]) = [nt]/n,\ t\geq 0\}$. In both cases, we assume $\widetilde Y$ and $Y$ are Feller processes, and we are interested in large deviations for $\{Z_n\}$.

12.1. Occupation measures of a Markov process - Discrete time

Define
$$Pf(x) = \int_{E_0} f(y)\,P(x,dy)$$
for each $f\in B(E_0)$. We assume the Feller property $P: C_b(E_0)\to C_b(E_0)$.

12.1.1. Convergence of $H_n$. We modify the definition of $Z_n$ to allow for a nonzero initial value, so
$$Z_n(C,t) = Z_n(C,0) + \frac1n\sum_{k=0}^{[nt]-1} I_C(\widetilde Y(k)).$$
Let $T_n$ be the transition operator for the chain $X_n = (Y_n,Z_n)$ with step-size $\frac1n$. Then
$$E[f(Y_n(t),Z_n(t))\mid Y_n(0)=y, Z_n(0)=z] = T_n^{[nt]}f(y,z),\qquad f\in B(E_n).$$
Let $\beta_i\in B(E_0)$, $i=1,\ldots,d$. To simplify notation, for $\beta = (\beta_1,\ldots,\beta_d)$, we define
$$\langle\beta,z\rangle = (\langle\beta_1,z\rangle,\ldots,\langle\beta_d,z\rangle),\qquad z\in M_f(E_0).$$
If $f(y,z) = f_0(y)f_1(\langle\beta,z\rangle)$ for $f_0\in B(E_0)$ and $f_1\in C(\mathbb{R}^d\cup\{\infty\})$, then
$$T_nf(y,z) = f_1\big(\langle\beta,z\rangle + \tfrac1n\beta(y)\big)Pf_0(y),$$
and if
$$f_n(y,z) = f_1(\langle\beta,z\rangle) + \frac1n f_0(y),$$
then
(12.4)
$$H_nf_n(y,z) \equiv \log(e^{-nf_n}T_ne^{nf_n})(y,z) = \log\Big(e^{nf_1(\langle\beta,z\rangle+\frac1n\beta(y)) - nf_1(\langle\beta,z\rangle)}\,e^{-f_0(y)}Pe^{f_0}(y)\Big).$$
Assume that $f_1\in\mathcal{D}^d$, and let $g = e^{f_0}$. Then $f_n\to f$ given by
(12.5)
$$f(z) = f_1(\langle\beta,z\rangle)$$
and $H_nf_n\to h$ given by
(12.6)
$$h(y,z) = \nabla f_1(\langle\beta,z\rangle)\cdot\beta(y) + \log\frac{Pg(y)}{g(y)},\qquad (y,z)\in E_0\times E.$$
In particular,
(12.7)
$$\sup_{y\in E_0,\,z\in E} H_nf_n(y,z) < \infty.$$
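The convergence $H_nf_n\to h$ can be checked numerically on a finite state space. The sketch below (our own illustration; the three-state kernel, $\beta$, $f_0$, and $f_1$ are arbitrary choices, with $f_1=\sin$ standing in for an element of $\mathcal{D}^1$) evaluates (12.4) directly and compares it with the limit (12.6); the error decays like $1/n$.

```python
import math

P = [[0.5, 0.3, 0.2], [0.1, 0.6, 0.3], [0.4, 0.4, 0.2]]  # transition matrix on {0,1,2}
beta = [1.0, -0.5, 2.0]
f0 = [0.2, -0.1, 0.3]
f1, df1 = math.sin, math.cos
g = [math.exp(v) for v in f0]                             # g = e^{f0}
Pg = [sum(P[y][j] * g[j] for j in range(3)) for y in range(3)]

def Hn_fn(n, y, s):
    """(12.4) evaluated with s = <beta, z>."""
    return n * (f1(s + beta[y] / n) - f1(s)) + math.log(Pg[y] / g[y])

def h(y, s):
    """The limit operator (12.6)."""
    return df1(s) * beta[y] + math.log(Pg[y] / g[y])

s = 0.7
err = lambda n: max(abs(Hn_fn(n, y, s) - h(y, s)) for y in range(3))
```

The log-moment-generating term $\log(Pg/g)$ is exact at every $n$ here; only the difference quotient of $f_1$ contributes to the error, which is why the $O(1/n)$ rate is visible immediately.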
12.1.2. Exponential tightness.

Lemma 12.1. Let $\varphi$ be a Borel measurable function on $E_0$ satisfying
$$\rho \equiv \inf_{y\in E_0}\varphi(y) > 0,\qquad \alpha_K \equiv \sup_{y\in K}\varphi(y) < \infty$$
for each compact $K\subset E_0$, and
$$K_a = \Big\{y\in E_0 : \frac{\varphi(y)}{P\varphi(y)} \leq a\Big\}$$
compact for each $0<a<\infty$. If $\{Y_n(0)\}$ is exponentially tight, then $\{Z_n\}$ satisfies the exponential compact containment condition. Similarly, if Conditions 11.1.4 and 11.1.5 hold and for each $a>0$, there exists $q\in Q$ such that
(12.8)
$$\limsup_{n\to\infty}\frac1n\log P\{Y_n(0)\notin K\cap\widehat K_n^q\} \leq -a,$$
then $\{Z_n\}$ satisfies the exponential compact containment condition.

Proof. Let $\varphi_m = \varphi\wedge m$ and $\psi_m = \log\frac{P\varphi_m}{\varphi_m}$. Then
$$\exp\{-n\langle\psi_m, Z_n(t)\rangle + \log\varphi_m(Y_n(t)) - \log\varphi_m(Y_n(0))\}$$
is a martingale. Let $m\to\infty$. Then by Fatou's lemma,
$$U_n(t) = \exp\Big\{-n\Big\langle\log\frac{P\varphi}{\varphi}, Z_n(t)\Big\rangle + \log\varphi(Y_n(t)) - \log\varphi(Y_n(0))\Big\}$$
is a cadlag supermartingale. Consequently, for each $T>0$ and $L>0$,
$$P\Big\{\inf_{0\leq t\leq T}\Big\langle\log\frac{P\varphi}{\varphi}, Z_n(t)\Big\rangle \leq -L\Big\} \leq P\Big\{\sup_{0\leq t\leq T} U_n(t) \geq e^{nL+\log\rho-\log\alpha_K}\Big\} + P\{Y_n(0)\notin K\} \leq e^{-nL-\log\rho+\log\alpha_K} + P\{Y_n(0)\notin K\}.$$
Since $\log\frac{P\varphi}{\varphi}$ is bounded above and $\{y : \log\frac{P\varphi(y)}{\varphi(y)} \geq -L\}$ is compact for each $0<L<\infty$,
$$K_L \equiv \Big\{z : \Big\langle\log\frac{P\varphi}{\varphi}, z\Big\rangle \geq -L\Big\}$$
is compact in $M_f(E_0)$, and we have
$$\limsup_{n\to\infty}\frac1n\log P\{\exists\, t\leq T \ni Z_n(t)\notin K_L\} \leq \max\Big\{-L,\ \limsup_{n\to\infty}\frac1n\log P\{Y_n(0)\notin K\}\Big\}.$$
The exponential compact containment condition follows by the fact that $L$ is arbitrary and $\{Y_n(0)\}$ is exponentially tight. The second statement follows by a similar argument. (See Lemma 12.5.)
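The martingale underlying the proof can be verified by exhaustive enumeration on a two-point space: with $Z_n(0)=0$ and $\psi = \log\frac{P\varphi}{\varphi}$, the exponential above is the telescoping product $\prod_{k<K}\varphi(\widetilde Y(k+1))/P\varphi(\widetilde Y(k))$, whose expectation is identically one. The sketch below (our own toy kernel and $\varphi$) checks this by summing over all paths.

```python
import itertools

P = [[0.7, 0.3], [0.4, 0.6]]
phi = [1.0, 2.5]
Pphi = [sum(P[x][y] * phi[y] for y in range(2)) for x in range(2)]

def mean_exponential(x0, steps):
    """E[ prod_{k<steps} phi(Y_{k+1}) / P phi(Y_k) | Y_0 = x0 ], by path enumeration."""
    total = 0.0
    for path in itertools.product(range(2), repeat=steps):
        weight, mart, x = 1.0, 1.0, x0
        for y in path:
            weight *= P[x][y]              # path probability
            mart *= phi[y] / Pphi[x]       # martingale increment
            x = y
        total += weight * mart
    return total
```

The identity $E[\varphi(\widetilde Y(k+1))\mid \widetilde Y(k)] = P\varphi(\widetilde Y(k))$ makes each factor conditionally mean one, which is all the tower property needs.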
The collection $D(H)\subset C_b(E)$ of functions $f$ given by (12.5) is closed under addition and separates points. By (12.7), the exponential tightness of $\{Z_n\}$ follows from Corollary 4.17.
12.1.3. Convergence of nonlinear semigroups. In this section, we suppose that $Y$ satisfies Conditions 11.1.2, 11.1.4 and 11.1.5 in Section 11.1. Let $X_n^\beta(t) = \langle\beta, Z_n(t)\rangle$. Then $(X_n^\beta, Y_n)$ is a special case of the random evolution models considered in that section. Since $(X_n^\beta, Y_n)$ is a Markov process,
$$U_n^\beta(t)h(x,y) = \frac1n\log E[e^{nh(X_n^\beta(t),Y_n(t))}\mid X_n^\beta(0)=x, Y_n(0)=y]$$
defines a semigroup on $B((\mathbb{R}^d\cup\{\infty\})\times E_0)$, and letting
$$V_n(t)f(y,z) = \frac1n\log E[e^{nf(Y_n(t),Z_n(t))}\mid Y_n(0)=y, Z_n(0)=z],$$
for $f_\beta(y,z) = h(\langle\beta,z\rangle,y)$,
(12.9)
$$V_n(t)f_\beta(y,z) = U_n^\beta(t)h(\langle\beta,z\rangle,y).$$
Let $\psi$ be given by Condition 11.1.5. Define
$$H_1^\beta(p) = \inf_{\kappa\in[0,1)}\inf_{g\in\mathcal{K}_1(E_0)}\sup_{y\in E_0}\Big(\beta(y)\cdot p + (1-\kappa)\log\frac{Pg(y)}{g(y)} + \kappa\psi(y)\Big),\qquad p\in\mathbb{R}^d,$$
and
$$H_2^\beta(p) = \sup_{\kappa\in[0,1)}\sup_{g\in\mathcal{K}_1(E_0)}\inf_{y\in E_0}\Big(\beta(y)\cdot p + \frac1{1-\kappa}\log\frac{Pg(y)}{g(y)} - \frac{\kappa}{1-\kappa}\psi(y)\Big),\qquad p\in\mathbb{R}^d.$$
Suppose, as in (11.16),
(12.10)
$$H_1^\beta \leq H_2^\beta.$$
Then by Theorem 11.20, there exists a semigroup $U^\beta(t)$ on $C(\mathbb{R}^d\cup\{\infty\})$ such that
(12.11)
$$\lim_{n\to\infty}\Big[\sup_{(z,y)\in E\times\widehat K_n^q}\big|U_n^\beta(t)f_n(\langle\beta,z\rangle,y) - U^\beta(t)f(\langle\beta,z\rangle)\big| + \sup_{(z,y)\in E\times\widehat K_n^q}\big|U_n^\beta(t+n^{-1})f_n(\langle\beta,z\rangle,y) - U^\beta(t)f(\langle\beta,z\rangle)\big|\Big] = 0,\qquad q\in Q,$$
for $\{\widehat K_n^q, q\in Q\}$ in Condition 11.1.4, where $f_n\in B((\mathbb{R}^d\cup\{\infty\})\times E_0)$ and $f\in C(\mathbb{R}^d\cup\{\infty\})$ satisfy
$$\lim_{n\to\infty}\sup_{x\in\mathbb{R}^d\cup\{\infty\},\,y\in\widehat K_n^q}|f_n(x,y)-f(x)| = 0,\qquad\forall q\in Q.$$
By (12.9), it follows that for $f_\beta(z) = h(\langle\beta,z\rangle)$, $h\in C(\mathbb{R}^d\cup\{\infty\})$,
$$V(t)f_\beta(z) = \lim_{n\to\infty} V_n(t)f_\beta(z) = \lim_{n\to\infty} U_n^\beta(t)h(\langle\beta,z\rangle) = U^\beta(t)h(\langle\beta,z\rangle).$$
Let $D(E) = \{h(\langle\beta,\cdot\rangle) : h\in\mathcal{D}^d,\ \beta\in C(E_0)^d,\ d=1,2,\ldots\}$. Clearly, $V$ extends to a contraction semigroup on $\overline{D(E)}$. By the contraction property of $V_n(t)$ and the convergence in (12.11), for $g\in\overline{D(E)}$ and $g_n\in B(E_0\times E)$ satisfying $g = \mathrm{LIM}\,g_n$,
$$V(t)g = \mathrm{LIM}\,V_n(t)g_n = \mathrm{LIM}\,V_n(t+\tfrac1n)g_n,$$
where LIM is given by Definition 2.5 with $K_n^q = (\mathbb{R}^d\cup\{\infty\})\times\widehat K_n^q$.
12.1.4. Variational representation. As in Section 11.1, define
(12.12)
$$H^\beta(p) = \sup_{\mu\in\mathcal{P}(E_0)}\big(\langle\beta,\mu\rangle\cdot p - I_P(\mu)\big)$$
where
$$I_P(\mu) = -\inf_{g\in\mathcal{K}_1(E_0)}\int_{E_0}\log\frac{Pg}{g}\,d\mu \wedge \int_{E_0}\psi\,d\mu.$$
Then under Conditions 11.1.2, 11.1.4 and 11.1.5,
(12.13)
$$H_1^\beta = H^\beta.$$

Lemma 12.2. If Conditions 11.1.2, 11.1.4 and 11.1.5 hold and (12.10) holds for each $\beta\in C_b(E_0)^d$, $d=1,2,\ldots$, then $\Gamma_P^M$ given by (11.23) is compact for each $M>0$ and for $h\in C(\mathbb{R}^d\cup\{\infty\})$,
$$U^\beta(t)h(x) = \sup\Big\{h(x(t)) - \int_0^t I_P(\mu(s))\,ds : x(t) = x + \int_0^t\langle\beta,\mu(s)\rangle\,ds\Big\},$$
where the supremum is over $\mu\in M_{\mathcal{P}(E_0)}[0,\infty)$.

Proof. Since $H^\beta$ depends only on $p$, the comparison principle follows by Lemma 9.15. The representation follows from Lemma 11.15 and Theorem 8.27, taking $H_\dagger = H_\ddagger = H^\beta$.

Lemma 12.3. Let $h\in C(\mathbb{R}^d\cup\{\infty\})$, $\beta\in C_b(E_0)^d$ and $f_\beta(z) = h(\langle\beta,z\rangle)$. Then $f_\beta\in\overline{D(E)}$, and under the conditions of Lemma 12.2,
$$V(t)f_\beta(z_0) = U^\beta(t)h(\langle\beta,z_0\rangle) = \sup\Big\{f_\beta(z(t)) - \int_0^t I_P(\mu(s))\,ds : z(t) = z_0 + \int_0^t\mu(s)\,ds\Big\},$$
where the supremum is over $\mu\in M_{\mathcal{P}(E_0)}[0,\infty)$.

Proof. By the variational representation of $U^\beta(t)$ in Lemma 12.2 and the definition of $V(t)$,
$$V(t)f_\beta(z) = U^\beta(t)h(\langle\beta,z\rangle) = \sup\Big\{h(x(t)) - \int_0^t I_P(\mu(s))\,ds : x(t) = \langle\beta,z\rangle + \int_0^t\langle\beta,\mu(s)\rangle\,ds\Big\} = \sup\Big\{f_\beta(z(t)) - \int_0^t I_P(\mu(s))\,ds : z(t) = z + \int_0^t\mu(s)\,ds\Big\}.$$
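For a concrete feel for $I_P$, consider a chain whose rows of $P$ are identical, so successive states are i.i.d. and the occupation-measure large deviation principle reduces to Sanov's theorem. The sketch below (our own numerical illustration; the two-point space and the grid search over $g$ are ad hoc) evaluates $I_P(\mu) = -\inf_{g>0}\int\log\frac{Pg}{g}\,d\mu$ by brute force and compares it with the relative entropy of $\mu$ with respect to the uniform distribution.

```python
import math

P = [[0.5, 0.5], [0.5, 0.5]]     # identical rows: i.i.d. uniform steps on {0, 1}

def I_P(mu, grid=4000):
    """-inf_g sum_x mu(x) log(Pg(x)/g(x)); by homogeneity take g = (1, e^s)."""
    best = float("inf")
    for i in range(grid + 1):
        s = -8.0 + 16.0 * i / grid
        g = (1.0, math.exp(s))
        pg = [P[x][0] * g[0] + P[x][1] * g[1] for x in range(2)]
        best = min(best, sum(mu[x] * math.log(pg[x] / g[x]) for x in range(2)))
    return -best

mu = (0.9, 0.1)
rate = I_P(mu)
rel_entropy = math.log(2) + sum(m * math.log(m) for m in mu)
```

At the stationary (here uniform) distribution the rate vanishes, consistent with the law of large numbers for $Z_n(\cdot,t)/t$; away from it the rate is strictly positive.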
12.1.5. The large deviation theorem.

Theorem 12.4. Suppose that Conditions 11.1.2, 11.1.4 and 11.1.5 hold and that $\{Y_n(0)\}$ satisfies (12.8). Let $Z_n$ be defined by (12.3). Let $C\subset C_b(E_0)$ be separating, and suppose that $H_1^\beta\leq H_2^\beta$ for $\beta\in C^d$ and $d=1,2,\ldots$. Then $\{Z_n\}$ satisfies a large deviation principle in $C_E[0,\infty)$ with good rate function $I$ given by
(12.14)
$$I(z) = \begin{cases}\int_0^\infty I_P(\dot z(s))\,ds & \text{for } z\in\mathcal{L}(E_0),\\ \infty & \text{otherwise.}\end{cases}$$
Proof. Theorem 5.15 and Remark 5.16 give the large deviation principle for $\{Z_n\}$ in $D_E[0,\infty)$ with a good rate function $I$. By Lemma 12.3 and Theorem 8.14, $I$ can be represented as (12.14).

12.2. Occupation measures of a Markov process - Continuous time

Let $Y$, $\{S(t)\}$, and $B$ be as in Section 11.2. Define $Y_n(t) = Y(nt)$ and
(12.15)
$$Z_n(C,t) = Z_n(C,0) + \frac1n\int_0^{nt} I_C(Y(s))\,ds = Z_n(C,0) + \int_0^t I_C(Y_n(s))\,ds.$$
Then $X_n$ defined by $X_n(t) = (Y_n(t), Z_n(\cdot,t))$ is a Markov process with state space $E_n = E_0\times M_f(E_0)$. Let $f_0\in\mathcal{D}(B)$, $f_1\in\mathcal{D}^d$, and $\beta_i\in C_b(E_0)$, $i=1,\ldots,d$. Define
$$f(y,z) = f_0(y)f_1(\langle\beta,z\rangle),\qquad (y,z)\in E_n,$$
and
$$A_nf(y,z) = nf_1(\langle\beta,z\rangle)Bf_0(y) + f_0(y)\beta(y)\cdot\nabla f_1(\langle\beta,z\rangle).$$
Then $A_n$ has a linear extension that is the full generator of $X_n$. Note that for fixed $\beta$, $(Y_n,\langle\beta,Z_n\rangle)$ is a special case of the model considered in Section 11.2 with $F(x,y) = \beta(y)$.

12.2.1. Convergence of $H_n$. Convergence follows by essentially the same computation as in Section 11.2. If $f_1\in\mathcal{D}^d$, $\beta_1,\ldots,\beta_d\in C_b(E_0)$, $e^{nf_0}\in\mathcal{D}(B)$, and $f(y,z) = f_0(y) + f_1(\langle\beta,z\rangle)$, then
$$H_nf = \frac1n e^{-nf}A_ne^{nf}$$
is given by
$$H_nf(y,z) = \beta(y)\cdot\nabla f_1(\langle\beta,z\rangle) + e^{-nf_0(y)}Be^{nf_0}(y).$$
In particular, if $f_1\in\mathcal{D}^d$, $e^{f_0}\in\mathcal{D}(B)$, and
$$f_n(y,z) = f_1(\langle\beta,z\rangle) + \frac1n f_0(y),$$
then
(12.16)
$$H_nf_n(y,z) = \beta(y)\cdot\nabla f_1(\langle\beta,z\rangle) + e^{-f_0}Be^{f_0}(y).$$
Let $g = e^{f_0}$. Then $f_n\to f$ given by
(12.17)
$$f(z) = f_1(\langle\beta,z\rangle),$$
and $H_nf_n\to h$ given by
(12.18)
$$h(z,y) = \nabla f_1(\langle\beta,z\rangle)\cdot\beta(y) + \frac{Bg(y)}{g(y)}.$$
Let $H$ consist of all pairs $(f,h)$ given by (12.17) and (12.18). Then, by definition, $H\subset \text{ex-}\lim_n H_n$.
12.2.2. Exponential tightness.

Lemma 12.5. Suppose Conditions 11.34.3 and 11.34.4 hold. If for each $a>0$, there exist a compact $K\subset E_0$ and $q\in Q$ such that
$$\limsup_{n\to\infty}\frac1n\log P\{Y_n(0)\notin K\cap\widehat K_n^q\} \leq -a,$$
then $\{Z_n\}$ satisfies the exponential compact containment condition.

Proof. For each $n$,
$$U_n(t) = \frac{\varphi_n(Y_n(t))}{\varphi_n(Y_n(0))}\exp\Big\{-n\int_0^t\frac{B\varphi_n(Y_n(s))}{\varphi_n(Y_n(s))}\,ds\Big\}$$
is a martingale. Let $\rho = \inf_{n,\,y\in E_0}\varphi_n(y)$. Then, for each $T>0$ and $L>0$,
$$P\Big\{\inf_{0\leq t\leq T}\Big\langle\frac{B\varphi_n}{\varphi_n}, Z_n(t)\Big\rangle \leq -L\Big\} \leq P\Big\{\sup_{0\leq t\leq T}U_n(t) \geq e^{nL+\log\rho-\log\|\varphi_n\|}\Big\} \leq e^{-nL-\log\rho+\log\|\varphi_n\|}.$$
Since $\psi$ is bounded above and $\{y:\psi(y)\geq c\}$ is compact for each $c\in\mathbb{R}$, for each $L>0$,
$$K_L \equiv \{z : \langle\psi,z\rangle \geq -L\}$$
is compact in $M_f(E_0)$. By Condition 11.34.3, there exists $\widehat q$ such that
$$\limsup_{n\to\infty}\frac1n\log P\{\exists\, t\leq T\ni Z_n(t)\notin K_L\} \leq \max\Big\{-L,\ \limsup_{n\to\infty}\frac1n\log P\Big\{\inf_{0\leq t\leq T}\Big\langle\frac{B\varphi_n}{\varphi_n}, Z_n(t)\Big\rangle \leq -L + T\sup_{y\in\widehat K_n^{\widehat q}}\Big(\frac{B\varphi_n(y)}{\varphi_n(y)} - \psi(y)\Big)\Big\}\Big\} \leq -L.$$
The exponential compact containment condition follows by the fact that $L$ is arbitrary. The exponential tightness of $\{Z_n\}$ follows from the convergence of $H_n$ and Corollary 4.17.

12.2.3. Convergence of nonlinear semigroups. For $\beta\in C_b(E_0)^d$, let $X_n^\beta(t) = \langle\beta, Z_n(t)\rangle$. Since $(X_n^\beta, Y_n)$ is a Markov process,
$$U_n^\beta(t)h(x,y) = \frac1n\log E[e^{nh(X_n^\beta(t),Y_n(t))}\mid X_n^\beta(0)=x, Y_n(0)=y]$$
defines a semigroup on $B((\mathbb{R}^d\cup\{\infty\})\times E_0)$, and letting
$$V_n(t)f(y,z) = \frac1n\log E[e^{nf(Y_n(t),Z_n(t))}\mid Y_n(0)=y, Z_n(0)=z],$$
for $f_\beta(y,z) = h(\langle\beta,z\rangle,y)$,
$$V_n(t)f_\beta(y,z) = U_n^\beta(t)h(\langle\beta,z\rangle,y).$$
We suppose throughout that $Y$ satisfies Conditions 11.21.1, 11.21.3 and 11.21.4. In particular, let $\psi$ be given by Condition 11.21.4. Define
$$H_1^\beta(p) = \inf_{\kappa\in[0,1)}\inf_{g\in\mathcal{D}^{++}(B)}\sup_y\Big(\beta(y)\cdot p + (1-\kappa)\frac{Bg(y)}{g(y)} + \kappa\psi(y)\Big),\qquad p\in\mathbb{R}^d,$$
and
$$H_2^\beta(p) = \sup_{\kappa\in[0,1)}\sup_{g\in\mathcal{D}^{++}(B)}\inf_y\Big(\beta(y)\cdot p + \frac1{1-\kappa}\frac{Bg(y)}{g(y)} - \frac{\kappa}{1-\kappa}\psi(y)\Big),\qquad p\in\mathbb{R}^d.$$
Suppose
(12.19)
$$H_1^\beta(p) \leq H_2^\beta(p)$$
for all $p\in\mathbb{R}^d$. Then by Theorem 11.41, there exists a semigroup $U^\beta(t)$ on $C(\mathbb{R}^d\cup\{\infty\})$ such that
(12.20)
$$\lim_{n\to\infty}\sup_{(z,y)\in E\times\widehat K_n^q}\big|U_n^\beta(t)f_n(\langle\beta,z\rangle,y) - U^\beta(t)f(\langle\beta,z\rangle)\big| = 0,$$
whenever $f_n\in B((\mathbb{R}^d\cup\{\infty\})\times E_0)$ and $f\in C(\mathbb{R}^d\cup\{\infty\})$ satisfy
$$\lim_{n\to\infty}\sup_{x\in\mathbb{R}^d\cup\{\infty\},\,y\in\widehat K_n^q}|f_n(x,y)-f(x)| = 0,\qquad\forall q\in Q.$$
For $f_\beta(z) = h(\langle\beta,z\rangle)$, $h\in C(\mathbb{R}^d\cup\{\infty\})$, define $V(t)f_\beta(z) = U^\beta(t)h(\langle\beta,z\rangle)$, and as before, let $D(E) = \{h(\langle\beta,\cdot\rangle) : h\in\mathcal{D}^d,\ \beta\in C(E_0)^d,\ d=1,2,\ldots\}$. Clearly, $V$ extends to a contraction semigroup on $\overline{D(E)}$. By the contraction property of $V_n(t)$ and the convergence in (12.20), for $g\in\overline{D(E)}$ and $g_n\in B(E_0\times E)$ satisfying $g = \mathrm{LIM}\,g_n$, we have $V(t)g = \mathrm{LIM}\,V_n(t)g_n$, where the LIM is defined in Definition 2.5.

12.2.4. Variational representation. As in Section 11.2, define
(12.21)
$$H^\beta(p) = \sup_{\mu\in\mathcal{P}(E_0)}\big(\langle\beta,\mu\rangle\cdot p - I_B(\mu)\big),$$
where
$$I_B(\mu) = -\inf_{g\in\mathcal{D}^{++}(B)}\int_{E_0}\frac{Bg}{g}\,d\mu \wedge \int_{E_0}\psi\,d\mu.$$
Then
(12.22)
$$H_1^\beta = H^\beta.$$
The same arguments as in Section 12.1.4 lead to the following conclusion.

Lemma 12.6. Let $h\in\mathcal{D}^d$, $\beta\in C_b(E_0)^d$ and $f_\beta(z) = h(\langle\beta,z\rangle)$. If (12.19) holds for all $\beta\in C_b(E_0)^d$, $d=1,2,\ldots$, $\Gamma_B^M$ is compact for all $M>0$, and there is a stationary distribution for $B$, then
$$V(t)f_\beta(z_0) = U^\beta(t)h(\langle\beta,z_0\rangle) = \sup\Big\{f_\beta(z(t)) - \int_0^t I_B(\mu(s))\,ds : z(t) = z_0 + \int_0^t\mu(s)\,ds\Big\},$$
where the supremum is over $\mu\in M_{\mathcal{P}(E_0)}[0,\infty)$.
12.2.5. The large deviation theorem. As in Theorem 12.4, we have the following result.

Theorem 12.7. Suppose that Conditions 11.21.1, 11.21.3 and 11.21.4 hold and that $\{Y_n(0)\}$ satisfies the conditions of Lemma 12.5. Let $Z_n$ be defined by (12.15) with $Z_n(0) = 0$. Let $C\subset C_b(E_0)$ be separating, and suppose that $H_1^\beta\leq H_2^\beta$ for $\beta\in C^d$ and $d=1,2,\ldots$. Then $\{Z_n\}$ satisfies a large deviation principle in $C_E[0,\infty)$ with good rate function given by
(12.23)
$$I(z) = \begin{cases}\int_0^\infty I_B(\dot z(s))\,ds & \text{for } z\in\mathcal{L}(E_0),\\ \infty & \text{otherwise.}\end{cases}$$
CHAPTER 13

Stochastic equations in infinite dimensions

In this chapter, we prove large deviation theorems for Examples 1.12, 1.13 and 1.14. Since the state space $E$ is either a function space or the space of probability measures, we use $\rho$, $\gamma$ or $\tau$ to denote a typical element in $E$ and let $x$ denote an element in the space on which the functions or measures are defined. As discussed in the beginning of Chapter 9, in addition to providing a proof of the large deviation principle, the results in this chapter also imply existence and uniqueness for a number of first order Hamilton-Jacobi equations in infinite dimensions. The methods we use here are new both for large deviations and for partial differential equations. In particular, the probability-measure-valued example in Section 13.3 cannot be treated using existing results in partial differential equations. Our method for this example relies on careful estimates obtained using mass transport techniques (Appendix D).

13.1. Stochastic reaction-diffusion equations on a rescaled lattice

Let $\Lambda_m\subset O\equiv[0,1)^d$ be the periodic lattice with mesh size $m^{-1}$ in each direction; $m = m(n)\to\infty$ as $n\to\infty$. Setting $E_n = \mathbb{R}^{\Lambda_m}$ and $E = L^2(O)$, the map $\eta_n\equiv\widehat\eta_m : E_n\to E$ is the piecewise constant interpolation map defined by (C.2). We define
$$\langle p,q\rangle_m = \frac1{m^d}\sum_{x\in\Lambda_m}p(x)q(x),\qquad \|p\|_m^2 \equiv \|p\|_{L^2(\Lambda_m)}^2 = \langle p,p\rangle_m,$$
and similarly
$$\langle p,q\rangle = \int_O p(x)q(x)\,dx,\qquad \|p\|^2 \equiv \|p\|_{L^2(O)}^2 \equiv \langle p,p\rangle.$$
Let $\pi_m: L^2(O)\to\mathbb{R}^{\Lambda_m}$ be the projection defined by (C.1). To simplify notation, for $p,q\in L^2(O)$, we write $\langle p,q\rangle_m = \langle\pi_mp,\pi_mq\rangle_m = \langle\widehat\eta_m\pi_mp,\widehat\eta_m\pi_mq\rangle$. Let $\{B(\cdot,x) : x\in\Lambda_m\}$ be independent, standard Brownian motions, and let $Y_n(t,x)$ satisfy
(13.1)
$$dY_n(t,x) = (\Delta_mY_n(t,x) - F'(Y_n(t,x)))\,dt + \frac{m^{d/2}}{\sqrt n}\,dB(t,x),\qquad x\in\Lambda_m,$$
with $\Delta_m$ the discrete Laplacian defined in Appendix C. We study the large deviation behavior of $\{\eta_n(Y_n)\}$ as a sequence of $L^2(O)$-valued Markov processes. We assume that $F''$ is bounded so that $F'$ is Lipschitz, that there exist $c_1\in\mathbb{R}$, $c_2>0$ such that $F(r)\geq c_1 + c_2r^2$, and that $m^{2+d}/n\to c_0\geq 0$. We take $Q$ to be the collection of compact subsets of $L^2(O)$ and define $K_n^q = \pi_m(q)$, $q\in Q$.
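To make (13.1) concrete, here is a minimal Euler-Maruyama discretization in dimension $d=1$ (our own sketch: the potential $F(r) = r^2/2$, the step size, and all names are illustrative assumptions, chosen so that $F''$ is bounded as required).

```python
import math, random

def discrete_laplacian(rho, m):
    """Periodic discrete Laplacian on the lattice of mesh 1/m (d = 1)."""
    return [m * m * (rho[(i + 1) % m] - 2.0 * rho[i] + rho[i - 1]) for i in range(m)]

def em_step(rho, dt, m, n, dF, rng):
    """One Euler-Maruyama step of dY = (Delta_m Y - F'(Y)) dt + (m^{d/2}/sqrt(n)) dB."""
    lap = discrete_laplacian(rho, m)
    sig = math.sqrt(m / n)                       # m^{d/2}/sqrt(n) with d = 1
    return [rho[i] + (lap[i] - dF(rho[i])) * dt + sig * rng.gauss(0.0, math.sqrt(dt))
            for i in range(m)]

m, n, dt = 16, 10**4, 1e-4
rng = random.Random(0)
rho = [math.sin(2.0 * math.pi * i / m) for i in range(m)]   # Y_n(0)
norm0 = sum(r * r for r in rho) / m                          # ||Y_n(0)||_m^2
for _ in range(100):
    rho = em_step(rho, dt, m, n, lambda r: r, rng)           # F(r) = r^2/2, F'(r) = r
norm_t = sum(r * r for r in rho) / m
```

For large $n$ the path tracks the deterministic reaction-diffusion equation, whose lowest lattice mode decays at rate about $4m^2\sin^2(\pi/m) + 1$; the large deviation theory quantifies the exponentially small probability of excursions away from this behavior.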
13.1.1. Convergence of $H_n$. The convergence in (1.33) does not, in general, satisfy Condition 7.11, so care needs to be taken in the selection of the domains $D(H_\dagger)$ and $D(H_\ddagger)$ and the approximating sequences $\{f_n\}$ for each element in the domains. To compute $A_n$ and $H_n$, define $D_m(\rho) = \Delta_m\rho - F'(\rho)$, $\rho\in L^2(\Lambda_m)$, and note that
$$Z_{n,p}(t) = \langle Y_n(t),p\rangle_m = \langle Y_n(0),p\rangle_m + \int_0^t\langle D_m(Y_n(s)),p\rangle_m\,ds + \frac{m^{d/2}}{\sqrt n}\langle B(t),p\rangle_m$$
and that the covariation for two such processes is
$$[Z_{n,p_1},Z_{n,p_2}]_t = \frac1n\langle p_1,p_2\rangle_m\,t.$$
Similarly,
$$R_n(t) = \langle Y_n(t),Y_n(t)\rangle_m = \langle Y_n(0),Y_n(0)\rangle_m + \int_0^t\Big(2\langle D_m(Y_n(s)),Y_n(s)\rangle_m + \frac{m^d}{n}\Big)ds + \frac{2}{\sqrt{m^dn}}\sum_{x\in\Lambda_m}\int_0^t Y_n(s,x)\,dB(s,x).$$
More generally, let $\{e_{k,m} : k = (k_1,\ldots,k_d)\}$ be the orthonormal basis of eigenfunctions for $-\Delta_m$ (see Appendix C), $\gamma\in\mathbb{R}^{\Lambda_m}$, and let $\{a_k\}\subset\mathbb{R}$. Then
$$R_{n,\gamma}^{\{a_k\}}(t) = \sum_k a_k\langle Y_n(t)-\gamma, e_{k,m}\rangle_m^2 = \sum_k a_k\langle Y_n(0)-\gamma, e_{k,m}\rangle_m^2 + \frac{m^{d/2}}{\sqrt n}\sum_k a_k\int_0^t 2\langle Y_n(s)-\gamma, e_{k,m}\rangle_m\,d\langle B(s),e_{k,m}\rangle_m + \int_0^t\sum_k a_k\Big(2\langle Y_n(s)-\gamma, e_{k,m}\rangle_m\langle D_m(Y_n(s)),e_{k,m}\rangle_m + \frac1n\Big)ds.$$
Consequently,
$$[R_{n,\gamma}^{\{a_k\}},R_{n,\gamma}^{\{a_k\}}]_t = \frac4n\int_0^t\sum_k a_k^2\langle Y_n(s)-\gamma, e_{k,m}\rangle_m^2\,ds$$
and
$$[R_{n,\gamma_1}^{\{a_k\}},R_{n,\gamma_2}^{\{b_k\}}]_t = \frac4n\int_0^t\sum_k a_kb_k\langle Y_n(s)-\gamma_1, e_{k,m}\rangle_m\langle Y_n(s)-\gamma_2, e_{k,m}\rangle_m\,ds.$$
Let $\{\lambda_{k,m}\}$ be the eigenvalues for $-\Delta_m$, so that $-\Delta_m e_{k,m} = \lambda_{k,m}e_{k,m}$. Define $B_m = (I-\Delta_m)^{-1}$ and $\|\rho\|_{-1,m}^2 = \langle B_m\rho,\rho\rangle_m$. Then $B_me_{k,m} = (1+\lambda_{k,m})^{-1}e_{k,m}$ and for every $\rho,\gamma\in L^2(\Lambda_m) = \mathbb{R}^{\Lambda_m}$,
$$\|\rho-\gamma\|_{-1,m}^2 = \sum_k(1+\lambda_{k,m})^{-1}\langle\rho-\gamma, e_{k,m}\rangle_m^2,$$
and
$$\sum_k(1+\lambda_{k,m})^{-1}\langle\rho-\gamma, e_{k,m}\rangle_m\langle D_m(\rho),e_{k,m}\rangle_m = \langle B_m(\rho-\gamma),D_m(\rho)\rangle_m.$$
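These identities are easy to test numerically. The following sketch (our own, for $d=1$ and a small lattice; it uses Euclidean-orthonormal eigenvectors, which changes nothing in the identities being checked) diagonalizes $-\Delta_m$ and verifies that $B_m = (I-\Delta_m)^{-1}$ has eigenvalues $(1+\lambda_{k,m})^{-1}$ and that $\|\rho\|_{-1,m}^2$ agrees with its eigen-expansion.

```python
import numpy as np

m = 8
# -Delta_m on the periodic one-dimensional lattice with mesh 1/m
L = np.zeros((m, m))
for i in range(m):
    L[i, i] = 2.0 * m * m
    L[i, (i + 1) % m] -= m * m
    L[i, (i - 1) % m] -= m * m

lam, V = np.linalg.eigh(L)                 # -Delta_m e_k = lambda_k e_k
Bm = np.linalg.inv(np.eye(m) + L)          # B_m = (I - Delta_m)^{-1}

rho = np.random.default_rng(0).normal(size=m)
coords = V.T @ rho                         # coordinates in the eigenbasis
# <B_m rho, rho>_m with the normalized inner product <p,q>_m = m^{-d} sum p q
norm_direct = float(rho @ Bm @ rho) / m
norm_eigen = float(sum(c * c / (1.0 + l) for c, l in zip(coords, lam))) / m
```

Since the circulant $-\Delta_m$ is symmetric positive semidefinite, `eigh` returns real eigenvalues starting from $0$ (the constant mode), and $B_m$ is positive definite with spectrum in $(0,1]$.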
Let $\varphi\in C_b^2(\mathbb{R}^{l+1})$ and $\gamma_0,\gamma_1,\ldots,\gamma_l\in L^2(O)$. We consider
(13.2)
$$f_n(\rho) = \varphi(\|\rho-\pi_m\gamma_0\|_m^2, \|\rho-\pi_m\gamma_1\|_{-1,m}^2,\ldots,\|\rho-\pi_m\gamma_l\|_{-1,m}^2).$$
For simplicity of notation, we write
(13.3)
$$\vec\gamma = (\gamma_1,\ldots,\gamma_l),\qquad \|\rho-\pi_m\vec\gamma\|_{-1,m}^2 = (\|\rho-\pi_m\gamma_1\|_{-1,m}^2,\ldots,\|\rho-\pi_m\gamma_l\|_{-1,m}^2).$$
Then
$$f_n(\rho) = \varphi(\|\rho-\pi_m\gamma_0\|_m^2, \|\rho-\pi_m\vec\gamma\|_{-1,m}^2).$$
Define $\kappa_m = \sum_k(1+\lambda_{k,m})^{-1}$, and index the $l+1$ coordinates of $\mathbb{R}^{l+1}$ by $0,1,\ldots,l$. By Ito's formula, the generator of $Y_n$ applied to $f_n$ is
$$A_nf_n(\rho) = \sum_{i=1}^l\partial_i\varphi(\cdots)\Big(2\langle B_m(\rho-\gamma_i),D_m(\rho)\rangle_m + \frac{\kappa_m}{n}\Big) + \partial_0\varphi(\cdots)\Big(2\langle\rho-\gamma_0,D_m(\rho)\rangle_m + \frac{m^d}{n}\Big) + \frac2n\sum_{i,j=1}^l\partial_i\partial_j\varphi(\cdots)\langle B_m(\rho-\gamma_i),B_m(\rho-\gamma_j)\rangle_m + \frac2n\sum_{i=1}^l\partial_0\partial_i\varphi(\cdots)\langle\rho-\gamma_0,B_m(\rho-\gamma_i)\rangle_m + \frac2n\partial_0^2\varphi(\cdots)\|\rho-\gamma_0\|_m^2,$$
where $(\cdots) = (\|\rho-\gamma_0\|_m^2,\|\rho-\vec\gamma\|_{-1,m}^2)$ (writing $\gamma_i$ for $\pi_m\gamma_i$),
and
$$H_nf_n(\rho) = \sum_{i=1}^l\partial_i\varphi(\cdots)\Big(2\langle B_m(\rho-\gamma_i),D_m(\rho)\rangle_m + \frac{\kappa_m}{n}\Big) + \partial_0\varphi(\cdots)\Big(\frac{m^d}{n} + 2\langle\rho-\gamma_0,D_m(\rho)\rangle_m\Big) + 2\sum_{i,j=1}^l\partial_i\varphi\,\partial_j\varphi(\cdots)\langle B_m(\rho-\gamma_i),B_m(\rho-\gamma_j)\rangle_m + 2\sum_{i=1}^l\partial_0\varphi\,\partial_i\varphi(\cdots)\langle\rho-\gamma_0,B_m(\rho-\gamma_i)\rangle_m + 2(\partial_0\varphi)^2(\cdots)\|\rho-\gamma_0\|_m^2 + \frac2n\sum_{i,j=1}^l\partial_i\partial_j\varphi(\cdots)\langle B_m(\rho-\gamma_i),B_m(\rho-\gamma_j)\rangle_m + \frac2n\sum_{i=1}^l\partial_0\partial_i\varphi(\cdots)\langle\rho-\gamma_0,B_m(\rho-\gamma_i)\rangle_m + \frac2n\partial_0^2\varphi(\cdots)\|\rho-\gamma_0\|_m^2,$$
where again $(\cdots) = (\|\rho-\gamma_0\|_m^2,\|\rho-\vec\gamma\|_{-1,m}^2)$.

Let $B = (I-\Delta)^{-1}$. Following the notation of Example 9.29, for $\beta = 1,2,\ldots$, let $H^\beta(O)$ be the completion of $C^\infty(O)$ under the norm $\|\rho\|_\beta^2 = \langle(I-\Delta)^\beta\rho,\rho\rangle$ and $H^{-\beta}(O)$ be the completion of $H^0(O)$ under the norm $\|\rho\|_{-\beta}^2 = \langle(I-\Delta)^{-\beta}\rho,\rho\rangle$. The inner product $\langle\rho,\gamma\rangle$ extends consistently to $\rho\in H^{-\beta}(O)$ and $\gamma\in H^\beta(O)$, and $|\langle\rho,\gamma\rangle|\leq\|\rho\|_{-\beta}\|\gamma\|_\beta$. The following inequality is useful in later calculations throughout this section: for each $\rho,\gamma\in L^2(O)$, $\Delta\rho\in H^{-2}(O)$ and $B(\rho-\gamma)\in H^2(O)$. Noting that $\Delta B = B-I$,
(13.4)
$$\langle B(\rho-\gamma),\Delta\rho-F'(\rho)\rangle = \|\rho\|_{-1}^2 - \|\rho\|^2 - \langle(B-I)\gamma,\rho\rangle - \langle B(\rho-\gamma),F'(\rho)\rangle \leq \|\rho\|_{-1}^2 - \|\rho\|^2 + c_{\gamma,F}(1+\|\rho\|_{-1})\|\rho\|,$$
where $c_{\gamma,F}$ is a constant depending only on $\gamma$ and $F$. Furthermore, if $\rho\in H^1(O)$, then $\Delta\rho\in H^{-1}(O)$ and
$$\langle\rho-\gamma,\Delta\rho-F'(\rho)\rangle = -\|\nabla\rho\|^2 - \langle\Delta\gamma,\rho\rangle - \langle\rho-\gamma,F'(\rho)\rangle.$$
Similarly, for $\rho,\gamma\in L^2(\Lambda_m)$,
(13.5)
$$\langle B_m(\rho-\gamma),\Delta_m\rho-F'(\rho)\rangle_m = \|\rho\|_{-1,m}^2 - \|\rho\|_m^2 - \langle(B_m-I)\gamma,\rho\rangle_m - \langle B_m(\rho-\gamma),F'(\rho)\rangle_m \leq \|\rho\|_{-1,m}^2 - \|\rho\|_m^2 + c_{\gamma,F,m}(1+\|\rho\|_{-1,m})\|\rho\|_m.$$
Lemma 13.1. Let $D(H_\dagger)$ be the collection of functions of the form
(13.6)
$$f(\rho) = \varphi(\|\rho-\gamma_0\|^2, \|\rho-\gamma_1\|_{-1}^2,\ldots,\|\rho-\gamma_l\|_{-1}^2) = \varphi(\|\rho-\gamma_0\|^2, \|\rho-\vec\gamma\|_{-1}^2),$$
where $l = 0,1,2,\ldots$, $\gamma_0\in C_b^2(O)$, $\gamma_1,\ldots,\gamma_l\in L^2(O)$, $\varphi\in C_b^2(\mathbb{R}^{l+1})$ satisfies $\inf_{r\in\mathbb{R}^{l+1}}\varphi(r) > -\infty$, and for $i = 0,1,\ldots,l$, $\partial_i\varphi\geq 0$ and there exists $d_i$ such that $\partial_i\varphi(r_0,r_1,\ldots,r_l) = 0$ if $r_i\geq d_i$. Define $H_\dagger f$ by
$$H_\dagger f(\rho) = 2\sum_{i=1}^l\partial_i\varphi(\cdots)\langle B(\rho-\gamma_i),D(\rho)\rangle + 2\partial_0\varphi(\cdots)\big(-\|\nabla\rho\|^2 - \langle\rho,F'(\rho)\rangle - \langle\gamma_0,D(\rho)\rangle\big) + 2\sum_{i,j=1}^l\partial_i\varphi\,\partial_j\varphi(\cdots)\langle B(\rho-\gamma_i),B(\rho-\gamma_j)\rangle + 2\sum_{i=1}^l\partial_0\varphi\,\partial_i\varphi(\cdots)\langle\rho-\gamma_0,B(\rho-\gamma_i)\rangle + 2(\partial_0\varphi)^2(\cdots)\|\rho-\gamma_0\|^2$$
$$= 2\sum_{i=1}^l\partial_i\varphi(\cdots)\langle B(\rho-\gamma_i),D(\rho)\rangle + 2\partial_0\varphi(\cdots)\big(-\|\nabla\rho\|^2 - \langle\rho,F'(\rho)\rangle - \langle\gamma_0,D(\rho)\rangle\big) + \frac12\Big\|2\partial_0\varphi(\cdots)(\rho-\gamma_0) + 2\sum_{i=1}^l\partial_i\varphi(\cdots)B(\rho-\gamma_i)\Big\|^2,$$
where $(\cdots) = (\|\rho-\gamma_0\|^2,\|\rho-\vec\gamma\|_{-1}^2)$, $D(\rho) = \Delta\rho - F'(\rho)$, and, noting that $\|\nabla\rho\|^2$ may be infinite, we take $0\cdot\infty = 0$. Let $D(H_\ddagger) = \{f : -f\in D(H_\dagger)\}$, and define $H_\ddagger$ by the same formula as $H_\dagger$. Then Condition 7.11 is satisfied with $\upsilon_n = n$.

Remark 13.2. By the assumptions on $\varphi$ and (13.4), $\sup_{\rho\in E}H_\dagger f(\rho) < \infty$, and $H_\dagger f\in M^u(E,\mathbb{R})$ is upper semicontinuous. Similarly, $H_\ddagger f\in M^l(E,\mathbb{R})$ is lower semicontinuous.

Proof. Note that $\langle\rho,\pi_mp\rangle_m = \langle\widehat\eta_m\rho,p\rangle$ for $p\in L^2(O)$. Let
(13.7)
$$f_n(\rho) = \varphi(\|\rho-\pi_m\gamma_0\|_m^2, \|\rho-\pi_m\vec\gamma\|_{-1,m}^2),$$
where $\varphi$ satisfies the conditions in the definition of $D(H_\dagger)$. If $\|\rho_n-\pi_m\rho\|\to 0$ as $n\to\infty$, then $f_n(\rho_n)\to f(\rho) = \varphi(\|\rho-\gamma_0\|^2, \|\rho-\gamma_1\|_{-1}^2,\ldots,\|\rho-\gamma_l\|_{-1}^2)$. Since $K_n^q = \pi_m(q)$, where $q$ is a compact subset of $L^2(O)$, (7.18) follows. Since $H_nf_n(\rho) = 0$ if $\|\rho\|_{-1,m}^2$ is large enough and (13.5) holds,
(13.8)
$$\sup_{n,\,\rho\in E_n}H_nf_n(\rho) < \infty,$$
giving the second part of (7.19). To obtain the first part, note that $\langle-\Delta_m\rho,\rho\rangle_m\leq dm^2\|\rho\|_m^2$, so that
$$\partial_0\varphi(\|\rho-\pi_m\gamma_0\|_m^2,\|\rho-\pi_m\vec\gamma\|_{-1,m}^2)\,\langle\rho-\gamma_0,\Delta_m\rho-F'(\rho)\rangle_m \leq \partial_0\varphi(\|\rho-\pi_m\gamma_0\|_m^2,\|\rho-\pi_m\vec\gamma\|_{-1,m}^2)\big(c_0 + c_1(1+dm^2\|\rho\|_m^2)\big) \leq c_2(1+dm^2),$$
for constants $c_0,c_1,c_2>0$, and
$$\|\rho\|_m^2 = \langle(I-\Delta_m)B_m\rho,\rho\rangle_m = \langle B_m\rho,\rho\rangle_m + \langle(-\Delta_m)B_m^{1/2}\rho,B_m^{1/2}\rho\rangle_m \leq (1+dm^2)\|\rho\|_{-1,m}^2,$$
so that
$$\partial_i\varphi(\|\rho-\pi_m\gamma_0\|_m^2,\|\rho-\pi_m\vec\gamma\|_{-1,m}^2)\,\langle B_m(\rho-\gamma_i),\Delta_m\rho-F'(\rho)\rangle_m \leq \partial_i\varphi(\cdots)\,c_3(1+\|\rho\|_m^2) \leq \partial_i\varphi(\cdots)\,c_3(2+dm^2\|\rho\|_{-1,m}^2) \leq c_4(1+dm^2)$$
for some constants $c_3,c_4>0$. It follows that
$$\sup_{\rho\in E_n}H_nf_n(\rho) \leq c_5(1+dm^2)$$
for some $c_5>0$, and since $m^{2+d}/n\to c_0\geq 0$, the first part of (7.19) holds with $\upsilon_n = n$. For $\rho_n\in E_n = \mathbb{R}^{\Lambda_m}$ satisfying $\eta_n(\rho_n)\to\rho$, by Lemma C.1, $\liminf_{n\to\infty}\langle\rho_n,-\Delta_m\rho_n\rangle_m\geq\|\nabla\rho\|^2$. The nonnegativity of $\partial_0\varphi$ and Lemma C.2 then imply
(13.9)
$$\limsup_{n\to\infty}H_nf_n(\rho_n)\leq H_\dagger f(\rho),$$
which gives (7.20).
Let f ∈ M (E, R). Define gradf (ρ) according to Definition 9.30, that is, we identify gradf (ρ) as the unique element in ∪β=1,2,... H−β (O) satisfying lim
t→0
f (ρ + tq) − f (ρ) = hgradf (ρ), qi, t
∀q ∈ C ∞ (O),
if it exists. If f (ρ) = hρ, pi, then gradf (ρ) = p, and if f (ρ) = kCρk2 for some bounded operator C, gradf (ρ) = 2Cρ. More generally, for the f in (13.6), (13.10)
grad f (ρ)
=
2∂0 ϕ0 (kρ − γ0 k2 , kρ − ~γ k2−1 )(ρ − γ0 ) +2
l X
∂i ϕ(kρ − γ0 k2 , kρ − ~γ k2−1 )B(ρ − γ).
i=1
Since h∆ρ, ρi = −h∇ρ, ∇ρi = −k∇ρk2 , we have 1 H† f (ρ) = h∆ρ − F 0 (ρ), gradf (ρ)i + kgradf (ρ)k2 . 2 In particular, if ∂0 ϕ ≡ 0, gradf (ρ) ∈ H2 (O), for each ρ ∈ L2 (O), and H† f ∈ C(E). (13.11)
13.1. STOCHASTIC REACTIONDIFFUSION EQUATIONS ON A RESCALED LATTICE 295
Let E : E → R be defined according to (9.64): Z 1 E(ρ) = ( ∇ρ(x)2 + F (ρ(x)))dx, O 2 Then gradE(ρ) = −∆ρ + F 0 (ρ),
∀ρ ∈ H1 (O),
and H† is of the form (9.62): 1 H† f (ρ) = h−gradE(ρ), gradf (ρ)i + kgradf (ρ)k2 . 2 Similarly, H‡ admits the same representations. In the proof of the comparison principle, a special class of test functions (13.12)
b = {f (ρ) = αkρ − γk2 + d : α, d ∈ R, γ ∈ L2 (O)} D −1
b+ ⊂ D b will denote the subclass with α > 0 and D b− the will play a major role. D subclass with α < 0. Since they are unbounded, these functions do not belong to the domains of H† or H‡ ; however, we can apply Lemma 7.6 to obtain appropriate extensions. b by To extend H† , we approximate f ∈ D fn (ρ) = αϕn (kρ − γk2−1 ) + d
(13.13)
where ϕn ∈ Cb2 (R) is nondecreasing, ϕn (r) = r,
0 ≤ r ≤ n,
ϕn (r) = n + 1,
r ≥ n + 2.
By definition, fn ∈ D(H† ) and H† fn (ρ)
=
2ϕ0n (kρ − γk2−1 )(hρ − γ, Bρ − ρi − hB(ρ − γ), F 0 (ρ)i)
2 1 + 2ϕ0n (kρ − γk2−1 )B(ρ − γ) . 2
Lemma 13.3. Let e†) = D b+ ∪ D(H† ), D(H
e‡) = D b− ∪ D(H‡ ). D(H
Define e † f (ρ) = h∆ρ − F 0 (ρ), gradf (ρ)i + 1 kgradf (ρ)k2 , f ∈ D(H e † ), H 2 e ‡ similarly. and define H Then for h ∈ Cb (E), any subsolution of f − αH† f = h is a subsolution of e † f = h and any supersolution of f − αH‡ f = h is a supersolution of f − αH e ‡ f = h. f − αH Proof. Let f (ρ) = αkρ−γk2−1 +d and let fn be given by (13.13). If kρ−γk2−1 < e † f (ρ), and hence for each c ∈ R, n, fn (ρ) = f (ρ) and H† fn (ρ) = H lim sup fn (ρ) ∧ c − f (ρ) ∧ c = 0
n→∞ ρ∈E
and (7.10) holds. Lemma 7.6 then gives the conclusion for subsolutions. The supersolution case is similar.
296
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
13.1.2. Exponential tightness. We apply Corollary 4.17. The exponential compact containment condition is verified in Example 2.9 using the following lemma. Lemma 13.4. Suppose that Yn and Zn both satisfy (13.1), but with different initial values: dYn (t, x)
=
md/2 (∆m Yn (t, x) − F 0 (Yn (t, x)))dt + √ dB(t, x) n
dZn (t, x)
=
md/2 (∆m Zn (t, x) − F 0 (Zn (t, x)))dt + √ dB(t, x). n
Then for each T > 0, there exists a constant C0 = C0 (T, F ) > 0 such that sup kYn (t) − Zn (t)kL2 (Λm ) ≤ C0 kYn (0) − Zn (0)kL2 (Λm )
a.s.
0≤t≤T
Proof. Observing that d(Yn (t, x) − Zn (t, x)) = ∆m (Yn (t, x) − Zn (t, x))dt − F 0 (Yn (t, x)) − F 0 (Zn (t, x)) dt, we have 1 1 kYn (t) − Zn (t)k2L2 (Λm ) − kYn (0) − Zn (0)k2L2 (Λm ) 2 2 Z t = h∆m (Yn (s) − Zn (s)), Yn (s) − Zn (s)im − hF 0 (Yn (s)) − F 0 (Zn (s)), Yn (s) − Zn (s)im ds 0 Z t ≤ LF 0 kYn (s) − Z(s)k2L2 (Λm ) ds, 0
where LF 0 is the Lipschitz constant for F 0 . The conclusion follows by Gronwall’s inequality. With reference to Corollary 4.17, functions of the form (13.6) are in D(H† ), and, consequently, D(H† ) approximates the metric q(ρ1 , ρ2 ) = kρ1 −ρ2 k in the sense of Definition 4.16. Since λ > 0 and f ∈ D(H† ) implies λf ∈ D(H† ), Conditions (b) and (c) of Corollary 4.17 follow from the convergence of Hn verified above. The corollary gives the desired exponential tightness. 13.1.3. The comparison principle. Let D0 ⊂ Cb (E) consist of functions of the form (13.14)
h(ρ) = ϕ(hρ, q1 i, . . . , hρ, qm i),
where ϕ ∈ C ∞ (Rm ) ∩ Cb (Rm ) is Lipschitz, qk ∈ C ∞ (O), and m = 1, 2, . . .. Then D0 is an algebra which separates points in E and vanishes nowhere. Therefore, by Theorem A.8, D0 is bucdense in Cb (E). Let α > 0 and h ∈ D0 . Suppose that f is a viscosity subsolution to (I − αH† )f = h and f is a viscosity supersolution to (I − αH‡ )f = h. Our strategy is to show that slightly perturbed versions of f and f satisfy Condition 9.27 and that the perturbed solutions solve the nonlinear equations given by b † and H b ‡ in Example 9.29 in an approximate sense. We then apply Theorem 9.31. H
13.1. STOCHASTIC REACTIONDIFFUSION EQUATIONS ON A RESCALED LATTICE 297
Each h ∈ D0 is uniformly continuous with respect to k · k−1 . Let ωh denote the modulus of continuity, that is, h(ρ) − h(γ) ≤ ωh (kρ − γk−1 ). Let d(ρ, γ) = kρ − γk. For δ > 0, define f δ (ρ)
= λ0,δ sup {f (γ) −
f δ (γ)
= λ1,δ inf {f (ρ) +
γ∈E
ρ∈E
kρ − γk2−1 }, 2δ kρ − γk2−1 }; 2δ
where λ0,δ = (1 +
(1 + LF )2 δ −1 (1 + LF )2 δ ), and λ1,δ = (1 − ) . 2 2
By Theorem 2.64 on page 228 of Attouch [5], f δ and f δ are continuous with respect to k · k−1 , hence also with respect to k · k. Let q h0,δ = λ0,δ h + ωh (2 δkf k) , and q h1,δ = λ1,δ h − ωh (2 δkf k) ; b†, H b ‡ be defined according to (9.67) and (9.68). and let H Lemma 13.5. f δ ∈ Cb (E) is a viscosity subsolution of b † )f = h0,δ (I − αH and f δ ∈ Cb (E) is a viscosity supersolution of b ‡ )f = h1,δ . (I − αH Proof. As before, we only prove the subsolution case. The supersolution case is similar. Let f0 be defined according to (9.66) with E defined by (9.64). Since f δ is bounded above and f0 has compact level sets under the L2 norm, there exists ρ0 ∈ H1 (O) such that (f δ − f0 )(ρ0 ) = sup
ρ,γ∈E
λ0,δ (f (γ) −
kρ − γk2−1 ) − f0 (ρ) . 2δ
By the viscosity subsolution property of f and by Lemma 13.3, there exist k → 0 and γk ∈ E such that (13.15) kρ0 − γk k2−1 kρ − γk2−1 λ0,δ (f (γk )− )−f0 (ρ0 ) +k ≥ sup λ0,δ (f (γ)− )−f0 (ρ) , 2δ 2δ ρ,γ∈E and 2
(13.16)
e † kρ0 − ·k−1 )(γk ) α−1 (f − h)(γk ) ≤ k + (H 2δ B(γk − ρ0 ) 1 B(γk − ρ0 ) 2 0 = k + h∆γk − F (γk ), i+ k k . δ 2 δ
298
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
It follows from (13.15) that supk kρ0 − γk k−1 < ∞. Since h∆γk − F 0 (γk ), B(γk − ρ0 )i − h∆ρ0 − F 0 (ρ0 ), B(γk − ρ0 )i = −kγk − ρ0 k2 + hγk − ρ0 , B(γk − ρ0 )i − hF 0 (γk ) − F 0 (ρ0 ), B(γk − ρ0 )i ≤ −kγk − ρ0 k2 + kγk − ρ0 kkB(γk − ρ0 )k(1 + LF ), where LF = supr F 0 (r). (13.16) implies B(γk − ρ0 ) 1 B(γk − ρ0 ) 2 i+ k k δ 2 δ kγk − ρ0 k2 kγk − ρ0 k kB(γk − ρ0 )k(1 + LF ) − + . δ δ 1/2 δ 1/2
α−1 (f − h)(γk ) ≤ k + h∆ρ0 − F 0 (ρ0 ),
By the boundedness of f and h and by the estimate sup h∆ρ0 , Bγk i = sup hρ0 , (B − I)γk i ≤ sup(kρ0 k + kρ0 k1 )kγk k−1 < ∞, k
k
k
we have supk kρ0 − γk k < ∞, implying the weak sequential relative compactness of γk in L2 (O). Let assume γ0 be a limit point. By (13.15), λ0,δ f (γk ) + k ≥ f δ (ρ0 ), and since −x2 + ax ≤ a2 /4, (13.17) α−1 (λ−1 0,δ f δ (ρ0 ) − h(γ0 )) ≤ lim inf α−1 (f − h)(γk ) k→∞
B(γk − ρ0 ) 1 (1 + LF )2 δ B(γk − ρ0 ) 2 i + (1 + )k k k→∞ δ 2 2 δ B(γ0 − ρ0 ) 1 (1 + LF )2 δ B(γ0 − ρ0 ) 2 = h∆ρ0 − F 0 (ρ0 ), i + (1 + )k k . δ 2 2 δ Since B is a selfadjoint compact operator on L2 (O), limk→∞ kBγk − Bγ0 k = 0, and lim kγk − γ0 k2−1 = lim hB(γk − γ0 ), γk − γ0 i = 0. ≤ lim
h∆ρ0 − F 0 (ρ0 ),
k→∞
k→∞
From (13.15), kρ − γk k2−1 kγk − ρ0 k2−1 − f0 (ρ0 ) + k ≥ −λ0,δ − f0 (ρ), 2δ 2δ In the limit as k → ∞, −λ0,δ
ρ ∈ L2 (O).
kγ0 − ρ0 k2−1 kρ − γ0 k2−1 − f0 (ρ0 ) ≥ −λ0,δ − f0 (ρ), ρ ∈ L2 (O). 2δ 2δ Let ρ = ρ0 + tp, where p ∈ C ∞ (O), and apply Definition 9.30. Since p is arbitrary, −λ0,δ
gradf0 (ρ0 ) = −λ0,δ
B(ρ0 − γ0 ) ∈ H2 (O). δ
By (13.15), we also have kρ0 − γ0 k2−1 ≤ 4δkf k, and (13.17) becomes q α−1 λ−1 f (ρ ) − h(ρ ) − ω (2 kf kδ) 0 h 0,δ δ 0 ≤ h∆ρ0 − F 0 (ρ0 ),
gradf0 (ρ0 ) 1 (1 + LF )2 δ gradf0 (ρ0 ) 2 i + (1 + )k kL2 (O) , λ0,δ 2 2 λ0,δ
13.1. STOCHASTIC REACTIONDIFFUSION EQUATIONS ON A RESCALED LATTICE 299
and hence 1 h∆ρ0 − F 0 (ρ0 ), gradf0 (ρ0 )i + kgradf0 (ρ0 )k2L2 (O) 2 b † f0 (ρ0 ), = H
α−1 (f δ (ρ0 ) − h0,δ (ρ0 )) ≤
giving the result.
Theorem 13.6. Let h be given by (13.14 and α > 0. Suppose f is a subsolution of (I − αH† )f = h and f is a supersolution of (I − αH‡ )f = h. Then f ≤ f. Proof. By Lemma 13.5 and Theorem 9.31, λ0,δ f (ρ) − λ1,δ f (ρ) ≤ f δ (ρ) − f δ (ρ) ≤ sup (h0,δ (γ) − h1,δ (γ)), γ∈E
and letting δ → 0+ gives the desired inequality.
13.1.4. Variational representation. Since 1 1 kgrad f (ρ)k2 = sup (hu, grad f (ρ)i − kuk2 ), 2 2 2 u∈L (O) we have H† f (ρ) =
1 sup {h∆ρ − F 0 (ρ) + u, grad f (ρ)i − kuk2 }. 2 u∈L2 (O)
An identical representation holds for H‡ as well. These representations suggest that the results of Chapter 8 apply; however, to verify the conditions, we need to choose the operators and the control space carefully. Let D† = {f (ρ) = ϕ(kρ − γk2−1 ) : γ ∈ E, ϕ ∈ Cb2 (R), ϕ0 ≥ 0, ∃dϕ , ϕ0 (r) = 0, r ≥ dϕ }, and D‡ = {f (ρ) = ϕ(kρ − γk2−1 ) : γ ∈ E, ϕ ∈ Cb2 (R), ϕ0 ≤ 0, ∃dϕ , ϕ0 (r) = 0, r ≥ dϕ } Then D† and D‡ separate points in E. Note that for ρ ∈ E = L2 (O) and f ∈ D† ∪ D‡ , gradf (ρ) ∈ H2 (O). Let D(A) be the linear span of D† ∪ D‡ , and for ρ ∈ E and u ∈ U ≡ H−1 (O), define Af (ρ, u) = h∆ρ − F 0 (ρ) + u, grad f (ρ)i and 1 L(ρ, u) = kuk2 . 2 Then (13.18)
Hf (ρ) = sup {Af (ρ, u) − L(ρ, u)}. u∈U
Define the restricted operators H† and H‡ by letting D(H†) = D† with H† f = H† f, and D(H‡) = D‡ with H‡ f = H‡ f. Then (8.33) is satisfied. We now verify the other conditions required in Corollary 8.28. Let (ρ, λ) ∈ J and
(13.19) ∫_{U×[0,∞)} L(ρ(s), u) λ(du × ds) < ∞.
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
Write λ(du × ds) = λs(du)ds, and define u(s) = ∫_U u λs(du) ∈ L²(O) in the Bochner sense. By Example 8.3, the control equation (8.1) is just the weak form of the controlled partial differential equation
 ∂ρ/∂t = ∆ρ − F′(ρ) + u.
In particular, for λ satisfying (13.19), the equation uniquely determines the solution ρ. Except for Condition 8.9.4, Conditions 8.9 and 8.10 can be directly verified. For ρ0 ∈ L²(O) and f ∈ D(H), let ρ be the solution to
(13.20) ∂ρ/∂t = ∆ρ − F′(ρ) + grad f(ρ).
Since gradf(ρ) is Lipschitz in ρ in the L²(O) norm, existence and uniqueness of the solution follow from classical semilinear equation theory. Taking u(t) = gradf(ρ(t)), (ρ, u) satisfies (13.20), and Condition 8.11 is verified:
 ∫_0^t Hf(ρ(s))ds = ∫_0^t { Af(ρ(s), u(s)) − L(ρ(s), u(s)) } ds, t ≥ 0.
Verification of Condition 8.9.4 consists of three steps. Let u in (13.20) satisfy
 C0 ≡ ∫_0^∞ ∫_O |u(t, x)|² dx dt < ∞.
If E(ρ(0)) < ∞, standard a priori estimates give
 E(ρ(t)) − E(ρ(0)) ≤ ∫_0^t ( −‖∆ρ − F′(ρ)‖² + ‖u‖‖∆ρ − F′(ρ)‖ ) ds ≤ (1/4) ∫_0^∞ ∫_O |u(s, x)|² dx ds,
implying that ρ(t) is in the compact set
 K(ρ(0), C0) = {γ : E(γ) ≤ E(ρ(0)) + C0/4}.
If ρ, γ are solutions of (13.20) with identical control u, then
 ‖ρ(t) − γ(t)‖ ≤ C2 ‖ρ(0) − γ(0)‖, 0 ≤ t ≤ T,
where C2 = C2(T, F) depends only on T > 0 and F and is independent of u. This inequality follows by Gronwall's inequality from the estimate
 (1/2)‖ρ(t) − γ(t)‖² − (1/2)‖ρ(0) − γ(0)‖² ≤ ∫_0^t ( −‖∇(ρ(s) − γ(s))‖² + ‖F′(ρ(s)) − F′(γ(s))‖‖ρ − γ‖ ) ds ≤ L_F ∫_0^t ‖ρ(s) − γ(s)‖² ds,
where L_F is the Lipschitz constant for F′. Finally, for any compact K0 ⊂ L²(O) and ε > 0, there exist N = N(ε, K0) and γ1, . . . , γN ∈ L²(O) with E(γk) < ∞ such that
 K0 ⊂ ∪_{k=1}^N B(γk; C2^{−1}ε).
Let γk(t) be the solution of (13.20) with γk(0) = γk. For any solution ρ(t) of (13.20) with ρ(0) ∈ K0, there exists γi satisfying ‖ρ(0) − γi(0)‖ < C2^{−1}ε, and hence
 sup_{0≤t≤T} ‖ρ(t) − γi(t)‖ < ε.
Consequently,
 ρ(t) ∈ ( ∪_{i=1}^N K(γi(0), C0) )^ε, 0 ≤ t ≤ T,
where A^ε is the ε-fattening of a set A. Since ε > 0 is arbitrary, there exists compact K(K0, C0) such that ρ(0) ∈ K0 implies ρ(t) ∈ K(K0, C0), 0 ≤ t ≤ T. Finally, for ρ satisfying (13.20), by the convexity in u of L(ρ, u) = (1/2)‖u‖²,
 inf_λ { ∫_{U×[0,∞)} L(ρ(s), u) λ(du × ds) : (ρ, λ) ∈ J } = ∫_0^∞ (1/2)‖u(s)‖² ds = (1/2) ∫_0^∞ ∫_O |∂ρ/∂t − ∆ρ + F′(ρ)|² dx dt.
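The Lipschitz-stability estimate ‖ρ(t) − γ(t)‖ ≤ C2 ‖ρ(0) − γ(0)‖ used in the covering argument above can be illustrated numerically. The sketch below is an assumption-laden toy, not the book's setting: a 1-d periodic grid, the hypothetical nonlinearity F′(r) = sin r (so L_F = 1), and an arbitrary fixed control u shared by both solutions. It checks the Gronwall bound ‖ρ(T) − γ(T)‖ ≤ e^{L_F T} ‖ρ(0) − γ(0)‖:

```python
import numpy as np

# Explicit Euler for d(rho)/dt = Lap(rho) - F'(rho) + u on a periodic 1-d grid
m, dt, T = 64, 1e-4, 0.2            # dt satisfies the stability bound dt <= 1/(2 m^2)
x = np.linspace(0, 1, m, endpoint=False)
lap = lambda v: (np.roll(v, 1) - 2 * v + np.roll(v, -1)) * m**2
u = np.sin(2 * np.pi * x)           # fixed control, identical for both solutions

def step(v):
    return v + dt * (lap(v) - np.sin(v) + u)   # F'(r) = sin(r), Lipschitz with L_F = 1

rho = np.cos(2 * np.pi * x)
gam = np.cos(2 * np.pi * x) + 0.1 * np.sin(4 * np.pi * x)
d0 = np.linalg.norm(rho - gam) / np.sqrt(m)
L_F = 1.0
for _ in range(int(T / dt)):
    rho, gam = step(rho), step(gam)
dT = np.linalg.norm(rho - gam) / np.sqrt(m)

# Gronwall bound from the estimate above (diffusion only helps, so this holds with room)
assert dT <= np.exp(L_F * T) * d0 + 1e-8
```

In practice the dissipative term −‖∇(ρ − γ)‖² makes the observed ratio dT/d0 much smaller than e^{L_F T}; the Gronwall constant is a worst-case bound.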
13.1.5. The large deviation theorem.
Theorem 13.7. Let Yn, n = 1, 2, . . . satisfy (13.1). Assume that n^{−1}m^{2+d} → c0 ≥ 0, F ∈ C²(R) with F″ bounded, and F(r) ≥ c1 + c2 r² with c2 > 0. Suppose ηn(Yn(0)) satisfies a large deviation principle with good rate function I0. Then {ηn(Yn)} satisfies a large deviation principle in C_{L²(O)}[0, ∞) with good rate function
 I(ρ) = I0(ρ(0)) + (1/2) ∫_0^∞ ∫_O |∂ρ/∂t − ∆ρ + F′(ρ)|² dx dt.

13.2. Stochastic Cahn-Hilliard equations on rescaled lattice
We now consider the large deviation problem in Example 1.13. As before, let Λm ⊂ O = [0, 1)^d be the periodic lattice with mesh size m^{−1} in each direction. We assume that m is odd so that ∆m has only one zero eigenvalue. Let E, En be defined according to (1.39):
 En = {ρ ∈ R^{Λm} : Σ_{x∈Λm} ρ(x) = 0}, E = {ρ ∈ L²(O) : ∫_O ρ(x)dx = 0}.
The metric on E will be the metric given by the L² norm, r(ρ, γ) = ‖ρ − γ‖. Define ⟨p, q⟩_m, ‖ρ‖_m, ηn = η̂m : En → E, and πm : E → En as in Section 13.1. Let Q in Definition 2.5 be the collection of compact subsets of E, and define K_n^q = πm^{−1}(q), q ∈ Q. Let B = (−∆)^{−1}. Since ρ̄ ≡ ∫_O ρ(x)dx = 0 for every ρ ∈ E,
 Bρ = ∫_0^∞ S(t)ρ dt = ∫_0^∞ S(t)(ρ − ρ̄) dt,
where {S(t)} is the semigroup corresponding to ∆ with periodic boundary conditions. The continuity properties of {S(t)} imply that B is a bounded, compact operator on E. The Poincaré inequality gives
(13.21) ‖ρ‖² = ∫_O |ρ(x) − ρ̄|² dx ≤ C‖∇ρ‖².
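Concretely, on a discrete periodic lattice the operator B = (−∆)^{−1} restricted to mean-zero functions coincides with the Moore-Penrose pseudo-inverse of the discrete Laplacian, and the discrete analogue of the Poincaré inequality (13.21) holds with C = 1/λ1, where λ1 is the smallest nonzero eigenvalue of −∆. A small numerical check (an illustrative 1-d version; the grid size is an arbitrary choice, not from the text):

```python
import numpy as np

m = 32
# Discrete periodic Laplacian with mesh 1/m: (D v)_j = m^2 (v_{j+1} - 2 v_j + v_{j-1})
D = (-2 * np.eye(m) + np.roll(np.eye(m), 1, axis=1) + np.roll(np.eye(m), -1, axis=1)) * m**2
B = np.linalg.pinv(-D)                      # pseudo-inverse: inverts -D on mean-zero vectors

j = np.arange(m)
rho = np.sin(2 * np.pi * j / m) + np.cos(6 * np.pi * j / m)
rho -= rho.mean()                           # project onto E (mean zero)

assert np.allclose(-D @ (B @ rho), rho, atol=1e-8)   # (-Delta) B rho = rho on E
assert abs((B @ rho).mean()) < 1e-10                 # B maps into the mean-zero space

# Discrete Poincaré inequality: ||rho||^2 <= ||grad rho||^2 / lambda_1
grad = lambda v: (np.roll(v, -1) - v) * m
lam = np.linalg.eigvalsh(-D)                # ascending; lam[0] ~ 0 (constants)
lam1 = lam[1]                               # smallest nonzero eigenvalue
assert rho @ rho <= (grad(rho) @ grad(rho)) / lam1 + 1e-8
```

The identity v·(−D)v = ‖grad v‖² for the forward-difference gradient is what makes the eigenvalue λ1 the sharp Poincaré constant here.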
Define H^β(O), β = . . . , −1, 0, 1, . . ., as in Example 9.32, and recall that the L² inner product can be extended so that
 ⟨ρ, γ⟩ ≤ ‖ρ‖_{−β} ‖γ‖_β, ρ ∈ H^{−β}(O), γ ∈ H^β(O).
Note that B is not the same as in Section 13.1, and to emphasize the difference, we use H^β(O) instead of the previous notation H_β(O), β = . . . , −1, 0, 1, . . .. Define the discrete partial derivative in the ith direction ∇_m^i and the discrete Laplacian ∆m as in Appendix C. Let {Bi(t, x) : x ∈ Λm, i = 1, . . . , d} be a collection of independent standard Brownian motions, and let Yn(t, x) satisfy
(13.22) dYn(t, x) = ∆m(−∆m Yn(t, x) + F′(Yn(t, x)))dt + (m^{d/2}/√n) Σ_{i=1}^d ∇_m^i dBi(t, x), x ∈ Λm.
We study the large deviation behavior of {ηn(Yn)} as a sequence of E-valued Markov processes (Theorem 13.13). As in the previous example, we assume that F″ is bounded so that F′ is Lipschitz, and denote the Lipschitz constant by L_F. We also assume the existence of c1 ∈ R, c2 > 0 such that F(r) ≥ c1 + c2 r², and that m^{4+d}/n → c0.
13.2.1. Convergence of Hn. As in the stochastic reaction-diffusion example in Section 13.1, the test functions defining (1.41) are not adequate for verifying the convergence Condition 7.11. Therefore, we first identify a suitable class of test functions fn that will verify that condition. Let {(λ_{k,m}, e_{k,m}) : k = (k1, . . . , kd), 0 ≤ kj ≤ m − 1} be the eigenvalue-eigenfunction system for −∆m on E, that is, −∆m e_{k,m} = λ_{k,m} e_{k,m}, where {e_{k,m}} forms an orthonormal basis for E. (See Appendix C.) For p ∈ R^{Λm}, we note that
 Z_{n,p}(t) = ⟨Yn(t), p⟩_m = ⟨Yn(0), p⟩_m + ∫_0^t ⟨∆m(−∆m Yn(s) + F′(Yn(s))), p⟩_m ds + (m^{d/2}/√n) Σ_{i=1}^d ⟨∇_m^i Bi(t), p⟩_m.
=
X
ak hYn (0) − γ, ek,m i2m
k
Z t d X md/2 X 2hYn (s) − γ, ek,m im d h∇im Bi (s), ek,m im + √ ak n 0 i=1 k Z tX 1 ak 2hYn (s) − γ, ek,m im h−∆m Dm (ρ), ek,m im + λk,m ds. + n 0 k
The last equality follows from
 [⟨Yn − γ, e_{k,m}⟩_m, ⟨Yn − γ, e_{l,m}⟩_m]_t = (m^d/n) Σ_{i=1}^d [⟨Bi, (−∇_m^i)e_{k,m}⟩_m, ⟨Bi, (−∇_m^i)e_{l,m}⟩_m]_t
 = (1/n) Σ_{i=1}^d ⟨∇_m^i e_{k,m}, ∇_m^i e_{l,m}⟩_m t
 = (1/n) ⟨−∆m e_{k,m}, e_{l,m}⟩_m t = (1/n) δ_{kl} λ_{k,m} t.
Consequently, we also have
 [R^{{ak}}_{n,γ1}, R^{{bk}}_{n,γ2}]_t = (4/n) ∫_0^t Σ_k ak bk λ_{k,m} ⟨Yn(s) − γ1, e_{k,m}⟩_m ⟨Yn(s) − γ2, e_{k,m}⟩_m ds.
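The identity ⟨−∆m e_{k,m}, e_{l,m}⟩_m = δ_{kl} λ_{k,m} underlying these bracket computations is just the spectral decomposition of the symmetric matrix −∆m. A quick numerical confirmation (an illustrative 1-d discrete Laplacian; the lattice size is an arbitrary choice, not from the text):

```python
import numpy as np

m = 16
# Symmetric 1-d periodic discrete Laplacian, mesh 1/m
D = (-2 * np.eye(m) + np.roll(np.eye(m), 1, axis=1) + np.roll(np.eye(m), -1, axis=1)) * m**2
lam, e = np.linalg.eigh(-D)     # columns of e: orthonormal eigenvectors of -Delta_m

# <-Delta_m e_k, e_l> = delta_{kl} * lambda_k  (diagonal Gram matrix)
G = e.T @ (-D) @ e
assert np.allclose(G, np.diag(lam), atol=1e-6)
assert np.allclose(e.T @ e, np.eye(m), atol=1e-10)   # orthonormal basis
```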
Let Bm = (−∆m)^{−1}, so Bm e_{k,m} = λ_{k,m}^{−1} e_{k,m}, and define
 ‖ρ‖²_{−2,m} = ‖Bm ρ‖²_m = Σ_k λ_{k,m}^{−2} ⟨ρ, e_{k,m}⟩²_m, ρ ∈ Em.
As in Section 13.1, let ϕ ∈ C²_b(R^{l+1}) and γi ∈ En, i = 0, 1, . . . , l, and define fn(ρ) = ϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}). Then by Itô's formula, the generator of Yn applied to fn is
 An fn(ρ) = 2 Σ_{i=1}^l ∂iϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ( ⟨B²m(ρ − γi), −∆m Dm(ρ)⟩_m + (1/n) Σ_k λ_{k,m}^{−1} )
  + 2∂0ϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ( ⟨ρ − γ0, −∆m Dm(ρ)⟩_m + (1/n) Σ_k λ_{k,m} )
  + (2/n) Σ_{i,j=1}^l ∂i∂jϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ⟨−∆m B²m(ρ − γi), B²m(ρ − γj)⟩_m
  + (2/n) Σ_{i=1}^l ∂0∂iϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ⟨ρ − γ0, Bm(ρ − γi)⟩_m
  + (2/n) ∂0²ϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ⟨−∆m(ρ − γ0), ρ − γ0⟩_m,
and
 Hn fn(ρ) = 2 Σ_{i=1}^l ∂iϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ( ⟨B²m(ρ − γi), −∆m Dm(ρ)⟩_m + (1/n) Σ_k λ_{k,m}^{−1} )
  + 2∂0ϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ( ⟨ρ − γ0, −∆m Dm(ρ)⟩_m + (1/n) Σ_k λ_{k,m} )
  + 2 Σ_{i,j=1}^l ∂iϕ ∂jϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ⟨−∆m B²m(ρ − γi), B²m(ρ − γj)⟩_m
  + 2 Σ_{i=1}^l ∂0ϕ ∂iϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ⟨ρ − γ0, −∆m B²m(ρ − γi)⟩_m
  + 2 (∂0ϕ)²(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ⟨(−∆m)(ρ − γ0), ρ − γ0⟩_m
  + (2/n) Σ_{i,j=1}^l ∂i∂jϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ⟨−∆m B²m(ρ − γi), B²m(ρ − γj)⟩_m
  + (2/n) Σ_{i=1}^l ∂0∂iϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ⟨ρ − γ0, −∆m B²m(ρ − γi)⟩_m
  + (2/n) ∂0²ϕ(‖ρ − γ0‖²_m, ‖ρ − ~γ‖²_{−2,m}) ⟨−∆m(ρ − γ0), ρ − γ0⟩_m.
Let γ0 ∈ C⁴(O). Then for ρ ∈ H²(O),
 ⟨ρ − γ0, ∆(−∆ρ + F′(ρ))⟩ = −‖∆ρ‖² + ⟨∆ρ, F′(ρ)⟩ + ⟨∆²γ0, ρ⟩ − ⟨∆γ0, F′(ρ)⟩.
Extend this expression to all ρ ∈ E by noting that if ρn ∈ H²(O) and ‖ρn − ρ‖ → 0 for ρ ∉ H²(O), then
 lim sup_{n→∞} ⟨ρn − γ0, ∆(−∆ρn + F′(ρn))⟩ = −∞.
It follows that ⟨ρ − γ0, ∆(−∆ρ + F′(ρ))⟩ is upper semicontinuous in ρ ∈ E and is bounded above. Similarly, since ‖∇ρ‖ is dominated by ‖∆ρ‖,
(13.23) lim sup_{n→∞} ( ⟨ρn − γ0, ∆(−∆ρn + F′(ρn))⟩ + c‖∇(ρn − γ0)‖² ) = −∞
for ρn → ρ ∉ H²(O) and c ∈ R. Following an argument similar to the proof of Lemma 13.1, we have
Lemma 13.8. Let D(H†) be the collection of functions of the form
(13.24)
f (ρ) = ϕ(kρ − γ0 k2 , kρ − ~γ k2−2 ),
where γ0 ∈ E ∩ Cb4 (O), γ1 , . . . , γl ∈ E, l = 1, 2, . . ., ϕ ∈ Cb2 (Rl+1 ), inf r∈Rl+1 ϕ(r) > −∞, and for i = 0, 1, . . . , l, ∂i ϕ ≥ 0 and there exists di such that ∂i ϕ(r0 , r1 , . . . , rl ) =
0 if ri ≥ di. Define
 H† f(ρ) = 2 Σ_{i=1}^l ∂iϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2}) ⟨B²(ρ − γi), ∆(−∆ρ + F′(ρ))⟩
  + 2∂0ϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2}) ⟨ρ − γ0, ∆(−∆ρ + F′(ρ))⟩
  + (1/2) ‖∇( 2 Σ_{i=1}^l ∂iϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2}) B²(ρ − γi) + 2∂0ϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2})(ρ − γ0) )‖²
 = 2 Σ_{i=1}^l ∂iϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2}) ⟨B(ρ − γi), ∆ρ − F′(ρ)⟩
  + 2∂0ϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2}) ⟨ρ − γ0, ∆(−∆ρ + F′(ρ))⟩
  + (1/2) ‖∇( 2 Σ_{i=1}^l ∂iϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2}) B²(ρ − γi) + 2∂0ϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2})(ρ − γ0) )‖²,
for ρ ∈ H²(O), and extend the definition to all ρ ∈ E using (13.23). Let D(H‡) = {f : −f ∈ D(H†)}, and define H‡ by the same formula as H†. Then Condition 7.11 is satisfied with υn = n.
Remark 13.9. By the assumptions on ϕ and (13.23), sup_{ρ∈E} H† f(ρ) < ∞, and H† f ∈ M^u(E, R) is upper semicontinuous. Similarly, H‡ f ∈ M^l(E, R) and is lower semicontinuous.
Proof. The proof follows that of Lemma 13.1; however, we need some additional estimates. Note that lim_{n→∞} n^{−1} Σ_k λ_{k,m}^{−1} = 0, and since we are assuming m^{4+d}/n → c0, m^{2+d} n^{−1} → 0 and
 lim_{n→∞} n^{−1} Σ_k λ_{k,m} = 0.
In addition, to control the growth rate of sup_{ρ∈En} Hn fn(ρ), note that
 ‖∆m ρ‖²_m = ‖∇m · (∇m ρ)‖²_m ≤ d m² ‖∇m ρ‖²_m ≤ d² m⁴ ‖ρ‖²_m, ρ ∈ En. □
For f : E → R, we use the notion of gradient given in Definition 9.33. For f of the form (13.24),
 gradf(ρ) = −2∆( ∂0ϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2})(ρ − γ0) + Σ_{i=1}^l ∂iϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2}) B²(ρ − γi) )
 = −2∂0ϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2}) ∆(ρ − γ0) + 2 Σ_{i=1}^l ∂iϕ(‖ρ − γ0‖², ‖ρ − ~γ‖²_{−2}) B(ρ − γi) ∈ H^{−2}(O).
Define E : E → R as in (9.64). Then by (9.73), for ρ satisfying E(ρ) < ∞, gradE(ρ) = ∆(∆ρ − F′(ρ)). Let
 ⟨p, q⟩_{−1} = ⟨Bp, q⟩, p, q ∈ H^{−1}(O).
Noting that ‖∇p‖² = ⟨−∆p, p⟩ = ‖∆p‖²_{−1} and that
 ⟨∆(−∆ρ + F′(ρ)), gradf(ρ)⟩_{−1} = ⟨∆ρ − F′(ρ), gradf(ρ)⟩,
for ρ ∈ H²(O),
 H† f(ρ) = ⟨∆(−∆ρ + F′(ρ)), gradf(ρ)⟩_{−1} + (1/2)‖gradf(ρ)‖²_{−1} = −⟨gradE(ρ), gradf(ρ)⟩_{−1} + (1/2)‖gradf(ρ)‖²_{−1}.
A similar representation holds for H‡. We next extend the definition of H†, H‡ to include test functions of the form
 f(ρ) = α‖ρ − γ‖²_{−2} + c = α‖B(ρ − γ)‖² + c, α, c ∈ R, γ ∈ E.
For α > 0, define
(13.25) H̃† f(ρ) = ⟨∆(−∆ρ + F′(ρ)), gradf(ρ)⟩_{−1} + (1/2)‖gradf(ρ)‖²_{−1} = ⟨∆(−∆ρ + F′(ρ)), 2αB²(ρ − γ)⟩ + (1/2)‖∇(2αB²(ρ − γ))‖².
Then H̃† f is bounded above and continuous. Similarly, for α < 0, define
 H̃‡ f(ρ) = ⟨∆(−∆ρ + F′(ρ)), 2αB²(ρ − γ)⟩ + (1/2)‖∇(2αB²(ρ − γ))‖².
Then H̃‡ f is bounded below and continuous.
We have the following analogue of Lemma 13.3; the proof is the same.
Lemma 13.10. For h ∈ Cb(E), any subsolution of f − αH† f = h is a subsolution of f − αH̃† f = h, and any supersolution of f − αH‡ f = h is a supersolution of f − αH̃‡ f = h.
13.2.2. Exponential tightness. Exponential tightness follows by Corollary 4.19 provided we can verify (4.25). The verification is similar to Example 2.9; we only highlight the differences. Let Em be defined by (2.14). We first establish an analogue of (2.15). By Itô's formula,
 Hn En(ρ) = ⟨∆m(−∆m ρ + F′(ρ)), −∆m ρ + F′(ρ)⟩_m + (1/2)‖∇m(−∆m ρ + F′(ρ))‖²_m + (1/2n) Σ_{k=1}^d Σ_{z∈Λm} m^{−d}( m^{d+4} F″(ρ(z))/2 + m^{d+2}/2 )
 ≤ −(1/2)‖∇m(−∆m ρ + F′(ρ))‖²_m + (1/2n) Σ_{k=1}^d Σ_{z∈Λm} m^{−d}( m^{d+4} F″(ρ(z))/2 + m^{d+2}/2 ).
Since m^{4+d}/n → c0,
(13.26) sup_n sup_{ρ∈En} Hn En(ρ) < ∞.
To obtain the analog of Lemma 13.4 for solutions Yn, Zn of (13.22), observe that
 (d/ds)(1/2)‖Yn(s) − Zn(s)‖²_m
  = ⟨∆m(−∆m Yn(s) + F′(Yn(s))) − ∆m(−∆m Zn(s) + F′(Zn(s))), Yn(s) − Zn(s)⟩_m
  = −‖∆m(Yn(s) − Zn(s))‖²_m + ⟨F′(Yn(s)) − F′(Zn(s)), ∆m(Yn(s) − Zn(s))⟩_m
  ≤ (L_F²/4)‖Yn(s) − Zn(s)‖²_m,
and Gronwall's inequality gives
 ‖Yn(t) − Zn(t)‖²_m ≤ ‖Yn(0) − Zn(0)‖²_m e^{L_F² t/2}.
The rest of the argument follows as in Example 2.9. Note that we do not need (2.17) here, as the Poincaré inequality (13.21) and its discrete version hold for elements of E and of En. Consequently, if sup_n Em(ρn) < ∞, then sup_n ‖∇m ρn‖_m < ∞ and {ηn(ρn)} is relatively compact in E.
13.2.3. The comparison principle. Let D0 be defined according to (13.14). Suppose α > 0 and h ∈ D0. Let f̄ be a viscosity subsolution of (I − αH†)f = h and f̲ a viscosity supersolution. As in Section 13.1.3, D0 is buc-dense in Cb(E), and each h ∈ D0 is uniformly continuous with respect to ‖·‖_{−2}. Let ω_h be a modulus of continuity. The proof of the comparison principle is similar to the previous example. Let
 f̄^δ(ρ) = λ_{0,δ} sup_{γ∈E} { f̄(γ) − ‖B(ρ − γ)‖²/(2δ) },
 f̲_δ(γ) = λ_{1,δ} inf_{ρ∈E} { f̲(ρ) + ‖B(ρ − γ)‖²/(2δ) },
where
 λ_{0,δ} = 1 + δC L_F⁴/4, λ_{1,δ} = (1 − δC L_F⁴/4)^{−1},
and C (depending only on the dimension d) is the constant in the Poincaré inequality
 ∫_O | ρ(x) − ∫_O ρ(z)dz |² dx ≤ C‖∇ρ‖².
Let Ĥ†, Ĥ‡ be given by (9.77) and (9.78).
Lemma 13.11. f̄^δ is a continuous viscosity subsolution of
 (I − αĤ†)f = h_{0,δ},
and f̲_δ is a continuous viscosity supersolution of
 (I − αĤ‡)f = h_{1,δ},
where
 h_{0,δ} = λ_{0,δ}( h + ω_h(2√(δ sup_{ρ∈E} f̄(ρ))) )
and
 h_{1,δ} = λ_{1,δ}( h − ω_h(2√(δ sup_{ρ∈E} f̲(ρ))) ).
Proof. The general method is similar to the proof of Lemma 13.5; the major difference lies in the technical estimates, in which the second-order differential operators are replaced by fourth-order ones.
Let f0 ∈ D(Ĥ†) be of the form (9.75). Then f̄^δ − f0 attains its supremum at some point ρ0 ∈ H¹(O). By Lemma 13.10 and the definition of viscosity subsolution, there exist εk → 0 and γk ∈ E such that
(13.27) λ_{0,δ}( f̄(γk) − ‖B(ρ0 − γk)‖²/(2δ) ) − f0(ρ0) + εk ≥ sup_{ρ,γ∈E} ( λ_{0,δ}( f̄(γ) − ‖B(ρ − γ)‖²/(2δ) ) − f0(ρ) ) = sup_{ρ∈E} ( f̄^δ(ρ) − f0(ρ) )
and
(13.28) α^{−1}(f̄ − h)(γk) ≤ εk + ( H̃† ‖B(ρ0 − ·)‖²/(2δ) )(γk) = εk + (1/δ)⟨∆(−∆γk + F′(γk)), B²(γk − ρ0)⟩ + (1/2)‖∇B²(γk − ρ0)/δ‖².
From (13.27), sup_k ‖B(ρ0 − γk)‖ < ∞. Since f̄ and h are bounded, as in the proof of Lemma 13.5, (13.28) implies that sup_k ‖γk‖ < ∞. Therefore, there exists γ0 ∈ L²(O) that is a weak sequential limit point of {γk}. Without loss of generality, we can assume that {γk} converges weakly to γ0, and in particular, lim_{k→∞} Bγk = Bγ0. Note that
 (1/δ)⟨∆(−∆γk + F′(γk)), B²(γk − ρ0)⟩ − (1/δ)⟨∆(−∆ρ0 + F′(ρ0)), B²(γk − ρ0)⟩
  = −(1/δ)‖γk − ρ0‖² + (1/δ)⟨F′(γk) − F′(ρ0), B(γk − ρ0)⟩
  ≤ −(1/(2δ))‖γk − ρ0‖² + (L_F²/(2δ))‖B(γk − ρ0)‖²
  ≤ −(1/(2δ))‖γk − ρ0‖² + (L_F²/2)‖γk − ρ0‖ ‖B²(γk − ρ0)/δ‖
  ≤ (δ L_F⁴/8)‖B²(γk − ρ0)/δ‖² ≤ (δ C L_F⁴/8)‖∇B²(γk − ρ0)/δ‖²,
where the last step follows from the Poincaré inequality (13.21). Therefore, from (13.27) and (13.28), we conclude
(13.29) α^{−1}( λ_{0,δ}^{−1} f̄^δ(ρ0) − h(γ0) ) ≤ lim inf_{k→∞} α^{−1}(f̄ − h)(γk)
  ≤ lim_{k→∞} ( (1/δ)⟨∆(−∆ρ0 + F′(ρ0)), B²(γk − ρ0)⟩ + (1/2)(1 + δ C L_F⁴/4)‖∇B²(γk − ρ0)/δ‖² )
  = ⟨∆(−∆ρ0 + F′(ρ0)), B²(γ0 − ρ0)/δ⟩ + (λ_{0,δ}/2)‖∇B²(γ0 − ρ0)/δ‖².
From (13.27),
 −λ_{0,δ} ‖B(ρ0 − γk)‖²/(2δ) − f0(ρ0) + εk ≥ −λ_{0,δ} ‖B(ρ − γk)‖²/(2δ) − f0(ρ), ρ ∈ E,
and letting k → ∞,
(13.30) f0(ρ) − f0(ρ0) ≥ (λ_{0,δ}/(2δ))( ‖B(ρ0 − γ0)‖² − ‖B(ρ − γ0)‖² ).
Let p ∈ C^∞(O) and ρ(t) = ρ0 − t∆p. Replacing ρ in (13.30) by ρ(t) and differentiating at t = 0, we have
 ∫_O gradf0(ρ0) p dx ≥ λ_{0,δ} ∫_O (B(γ0 − ρ0)/δ) p dx.
Since p is arbitrary,
(13.31) gradf0(ρ0) = λ_{0,δ} B(ρ0 − γ0)/δ ∈ H²(O).
Taking ρ = ρ0 and γ = γ0 on the right side of (13.27), we have
kB(ρ0 − γk )k2 , k→∞ 2δ
lim inf (f (γk ) − f (ρ0 )) ≥ lim k→∞
so 4δ sup f (ρ) ≥ kB(ρ0 − γ0 )k2 .
(13.32)
ρ∈E
By (13.31), (13.29) becomes
 α^{−1} f̄^δ(ρ0) − λ_{0,δ} h(γ0) ≤ ⟨∆(−∆ρ0 + F′(ρ0)), gradf0(ρ0)⟩_{−1} + (1/2)‖gradf0(ρ0)‖²_{−1},
and since
 λ_{0,δ} h(γ0) ≤ λ_{0,δ} h(ρ0) + λ_{0,δ} ω_h(‖B(ρ0 − γ0)‖) ≤ h_{0,δ}(ρ0),
the conclusion follows. □
Theorem 13.12. Let h be given by (13.14) and α > 0. Suppose f̄ is a subsolution of (I − αH†)f = h and f̲ is a supersolution of (I − αH‡)f = h. Then f̄ ≤ f̲.
Proof. By Lemma 13.11 and Theorem 9.34,
 sup_{ρ∈E} ( f̄^δ(ρ) − f̲_δ(ρ) ) ≤ sup_{ρ∈E} ( h_{0,δ}(ρ) − h_{1,δ}(ρ) ).
Sending δ → 0+, we conclude. □
13.2.4. Variational representation. Recall that ⟨p, q⟩_{−1} = ⟨Bp, q⟩ for p, q ∈ H^{−1}(O). If gradf(ρ) ∈ H^{−1}(O), then
 (1/2)‖gradf(ρ)‖²_{−1} = sup_{u∈H^{−1}(O)} ( ⟨gradf(ρ), u⟩_{−1} − (1/2)‖u‖²_{−1} ).
Therefore,
 H† f(ρ) = sup_{u∈H^{−1}(O)} ( ⟨∆(−∆ρ + F′(ρ)) + u, gradf(ρ)⟩_{−1} − (1/2)‖u‖²_{−1} ),
and a representation of the same form holds for H‡ f. As in Section 13.1, we need to exercise care in defining H, H† and H‡.
Let
 D† = {f(ρ) = ϕ(‖ρ − γ‖²_{−2}) : γ ∈ E, ϕ ∈ C²_b(R), ϕ′ ≥ 0, ∃dϕ such that ϕ′(r) = 0 if r ≥ dϕ},
and
 D‡ = {f(ρ) = ϕ(‖ρ − γ‖²_{−2}) : γ ∈ E, ϕ ∈ C²_b(R), ϕ′ ≤ 0, ∃dϕ such that ϕ′(r) = 0 if r ≥ dϕ}.
Then D†, D‡ ⊂ Cb(E) separate points, and for f ∈ D† ∪ D‡,
 gradf(ρ) = 2ϕ′(‖ρ − γ‖²_{−2})(−∆)B²(ρ − γ) = 2ϕ′(‖ρ − γ‖²_{−2}) B(ρ − γ) ∈ H²(O).
Let D(A) be the linear span of D† ∪ D‡, and for ρ ∈ E and u ∈ U ≡ H^{−2}(O), define
 Af(ρ, u) = ⟨∆(−∆ρ + F′(ρ)) + u, gradf(ρ)⟩_{−1} = ⟨∆ρ − F′(ρ) + u, gradf(ρ)⟩
and
 L(ρ, u) = (1/2)‖u‖²_{−1}.
Then Af is continuous on E × U, {u ∈ U : L(ρ, u) ≤ c} is compact in U for each c < ∞, and
 Hf(ρ) = sup_{u∈U} { Af(ρ, u) − L(ρ, u) } = sup_{u∈H^{−1}(O)} { Af(ρ, u) − L(ρ, u) }.
As in the previous example, define H† and H‡ by letting D(H†) = D† with H† f = H† f, and D(H‡) = D‡ with H‡ f = H‡ f. It follows that (8.33) is satisfied, H† ⊂ D(A) × M^u(E), and H‡ ⊂ D(A) × M^l(E). Let (ρ, λ) ∈ J with
(13.33) ∫_{U×[0,∞)} L(ρ(s), u) λ(du × ds) < ∞.
Writing λ(du × ds) = λs(du)ds, we define u(s) = ∫_U u λs(du) in the Bochner sense. Then
 f(ρ(t)) − f(ρ(0)) = ∫_{U×[0,t]} Af(ρ(s), u) λ(du × ds) = ∫_0^t Af(ρ(s), u(s)) ds.
As in Example 8.3, it follows that any solution of the control equation is a weak solution of the controlled partial differential equation
(13.34) ∂ρ/∂t = ∆(−∆ρ + F′(ρ)) + u,
which, by the Lipschitz condition on F′, uniquely determines a solution for any control satisfying (13.33). Verification of the regularity conditions in Chapter 8 follows as in Section 13.1.4. First, for ρ0 ∈ L²(O) and f ∈ D(H), let ρ be the weak solution of
 ∂ρ/∂t = ∆(−∆ρ + F′(ρ)) + gradf(ρ),
which again exists and is uniquely determined since ρ → F′(ρ) + gradf(ρ) is Lipschitz on H⁰(O). Taking u(t) = gradf(ρ(t)), (ρ, u) satisfies (13.34) and Condition 8.11:
 ∫_0^t Hf(ρ(s))ds = ∫_0^t { Af(ρ(s), u(s)) − L(ρ(s), u(s)) } ds, t ≥ 0.
To verify Condition 8.9.4, let the free energy functional E be defined by (9.64). If E(ρ(0)) < ∞, standard estimates give
 E(ρ(t)) − E(ρ(0)) ≤ ∫_0^t ( −‖∇(−∆ρ + F′(ρ))‖² + ‖u‖_{−1}‖∇(−∆ρ + F′(ρ))‖ ) ds ≤ (1/4) ∫_0^∞ ‖u(s)‖²_{−1} ds < ∞,
so if
 Cu ≡ ∫_0^∞ ‖u(t)‖²_{−1} dt < ∞,
then ρ takes values in the compact set {ρ : E(ρ) ≤ Cu}. If ρ, γ are solutions of (13.34) with the same control process u, then
 (1/2)‖ρ(t) − γ(t)‖² − (1/2)‖ρ(0) − γ(0)‖² ≤ ∫_0^t ( −‖∆(ρ − γ)‖²_{L²(O)} + ‖F′(ρ) − F′(γ)‖‖∆(ρ − γ)‖ ) ds ≤ L_F² ∫_0^t ‖ρ(s) − γ(s)‖² ds,
where L_F is the Lipschitz constant for F′, and Gronwall's inequality gives
 ‖ρ(t) − γ(t)‖² ≤ e^{2L_F² t} ‖ρ(0) − γ(0)‖².
Then Condition 8.9.4 follows as in Section 13.1.4. Finally, for ρ satisfying (13.34), the convexity of L(ρ, u) in u implies
 inf_λ { ∫_{U×[0,∞)} L(ρ(s), u) λ(du × ds) : (ρ, λ) ∈ J } = ∫_0^∞ (1/2)‖u(s)‖²_{−1} ds = (1/2) ∫_0^∞ ‖∂ρ/∂t − ∆(−∆ρ + F′(ρ))‖²_{−1} dt.
13.2.5. The large deviation theorem.
Theorem 13.13. Let Yn, n = 1, 2, . . . satisfy (13.22). Assume that F ∈ C²(R) with F″ bounded and that m^{4+d}/n → c0. Suppose that {ηn(Yn(0))} satisfies a large deviation principle with good rate function I0 in (E, ‖·‖). Then {ηn(Yn) : n = 1, 2, . . .} satisfies a large deviation principle in C_E[0, ∞) with good rate function
 I(ρ) = I0(ρ(0)) + (1/2) ∫_0^∞ ‖∂ρ/∂t − ∆(−∆ρ + F′(ρ))‖²_{−1} dt.

13.3. Weakly interacting stochastic particles
We now consider large deviations for the weakly interacting diffusions in Example 1.14. Our goal here is not only to prove the large deviation principle, first established by Dawson and Gärtner [22] using a different argument, but also to show that the comparison principle for equation (1.50) holds under reasonable assumptions. In this context, the probabilistic large deviation problem is
naturally linked with variational problems originating from controlled partial differential equations in the space of probability measures. Throughout, we assume the following conditions on Φ and Ψ.
Condition 13.14.
(1) Φ, Ψ ∈ C²(R^d).
(2) inf_z Φ(z) > −∞.
(3) Ψ, Φ are semiconvex in the sense that there exist λΨ, λΦ ∈ R such that
(13.35) (∇Ψ(x) − ∇Ψ(y)) · (x − y) ≥ λΨ |x − y|²
and similarly for Φ.
(4) Φ is even (i.e., Φ(x) = Φ(−x)).
(5) |∇Φ(x)| ≤ C0 for some C0 > 0.
(6)
(13.36) lim_{|x|→∞} Ψ(x)/|x|² = ∞.
Remark 13.15. The requirement that Φ be even is common in physical applications, where Φ is a pairwise interaction potential depending only on the particle distance. It is possible to relax Condition 13.14 to allow subquadratic growth of |∇Φ| by using a superexponential approximation argument (Lemma 3.14).
We will need the following estimate on ∇Ψ.
Lemma 13.16. If Condition 13.14 holds, then
(13.37) lim_{|x|→∞} ∇Ψ(x) · x / |x|² = ∞.
Proof. By (13.35),
 ∇Ψ(x) · x = Ψ(x) − Ψ(0) + ∫_0^1 (∇Ψ(x) − ∇Ψ(sx)) · x ds ≥ Ψ(x) − Ψ(0) + (1/2)λΨ |x|².
Dividing by |x|², the lemma follows by (13.36). □
We consider the finite system of stochastic equations
(13.38) dX_{n,i}(t) = ( −∇Ψ(X_{n,i}(t)) − (1/n) Σ_{j=1}^n ∇Φ(X_{n,i}(t) − X_{n,j}(t)) ) dt + dWi(t), i = 1, . . . , n,
where {Wi : i = 1, . . . , n} are independent R^d-valued Brownian motions. We define the empirical-measure-valued processes
 ρn(t, dx) = (1/n) Σ_{k=1}^n δ_{X_{n,k}(t)}(dx).
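For readers who want to experiment, a minimal Euler-Maruyama sketch of the system (13.38) follows. The potentials are hypothetical choices consistent with Condition 13.14 (Ψ(x) = |x|⁴/4 is superquadratic, and Φ(x) = √(1 + |x|²) is even with bounded gradient); the particle number, dimension, and step size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, dt, steps = 200, 2, 1e-3, 500

# Hypothetical potentials satisfying Condition 13.14
grad_Psi = lambda x: (x * x).sum(-1, keepdims=True) * x            # Psi = |x|^4/4
grad_Phi = lambda x: x / np.sqrt(1.0 + (x * x).sum(-1, keepdims=True))  # Phi = sqrt(1+|x|^2)

X = rng.normal(size=(n, d))
for _ in range(steps):
    # interaction drift: -(1/n) sum_j grad_Phi(X_i - X_j); the i = j term vanishes
    diff = X[:, None, :] - X[None, :, :]
    inter = grad_Phi(diff).mean(axis=1)
    X = X - (grad_Psi(X) + inter) * dt + np.sqrt(dt) * rng.normal(size=(n, d))

# <|x|^2, rho_n(T)>: finite second moment, so the empirical measure lies in P_2(R^d)
rho_second_moment = (X * X).sum(axis=1).mean()
assert X.shape == (n, d)
assert np.isfinite(rho_second_moment)
```

The superquadratic confinement Ψ keeps the empirical second moment bounded, which is exactly the role the condition (13.36) plays in the exponential compact containment argument below.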
Note that for ϕ ∈ C²(R^d),
(13.39) ⟨ϕ, ρn(t)⟩ = ⟨ϕ, ρn(0)⟩ + ∫_0^t ⟨(1/2)∆ϕ − ∇Ψ · ∇ϕ, ρn(s)⟩ ds − ∫_0^t ⟨ ∫ ∇Φ(· − x) · ∇ϕ(·) ρn(s, dx), ρn(s)⟩ ds + (1/n) Σ_{i=1}^n ∫_0^t ∇ϕ(X_{n,i}(s)) · dWi(s)
 = ⟨ϕ, ρn(0)⟩ + ∫_0^t ⟨B(ρn(s))ϕ, ρn(s)⟩ ds + (1/n) Σ_{i=1}^n ∫_0^t ∇ϕ(X_{n,i}(s)) · dWi(s),
where
(13.40) B(ρ)p = (1/2)∆p − ∇(Ψ + Φ ∗ ρ) · ∇p, ρ ∈ E, p ∈ C²(R^d).
As in the other examples, we first derive a limit operator H and then prove the comparison principle for a nonlinear equation given by H (Theorem 13.32). Finally, we show that the result implies the large deviation principle. Let
 En = {ρ(dx) ≡ (1/n) Σ_{k=1}^n δ_{x_k}(dx) : x_k ∈ R^d, k = 1, . . . , n}
and
 E = P₂(R^d) ≡ {ρ ∈ P(R^d) : ∫_{R^d} |x|² dρ < ∞},
and let ηn : En → E be the identity map. Let r and rn be given by the 2-Wasserstein metric (Definition D.13). Then (E, r) and (En, rn) are complete, separable metric spaces. We now specify {K_n^q : q ∈ Q} determining LIM convergence by Definition 2.5. Let
 Q = {q = (ϕ, L) : L ∈ (0, ∞), ϕ : [0, ∞) → [0, ∞), ϕ increasing, lim_{r→∞} ϕ(r)/r = ∞},
and define
(13.41) K_n^q = {ρ ∈ En : ∫_{R^d} ϕ(|x|²) ρ(dx) ≤ L}, q = (ϕ, L) ∈ Q.
This definition is motivated by the characterization of relatively compact sets in E (Lemma D.17 in Appendix D): K is relatively compact if and only if there exists q = (ϕ, L) ∈ Q such that
 sup_{ρ∈K} ∫_{R^d} ϕ(|x|²) ρ(dx) ≤ L.
In particular, K_n^q is compact, as is
(13.42) K^q ≡ {ρ ∈ E : ∫_{R^d} ϕ(|x|²) dρ ≤ L},
and lim_{n→∞} K_n^q = K^q. The conditions of Definition 2.5 are satisfied with K_q = K̂^q.
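The compactness of K^q reflects a de la Vallée Poussin-type tail estimate: if ∫ ϕ(|x|²) dρ ≤ L and ϕ(s)/s is nondecreasing, then ∫_{|x|²≥r} |x|² dρ ≤ L r/ϕ(r), which tends to 0 uniformly over K^q as r → ∞. A numerical check with the hypothetical choice ϕ(s) = s² and random samples standing in for ρ (both choices illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
phi = lambda s: s * s            # phi(s) = s^2: increasing, phi(s)/s -> infinity

x2 = rng.exponential(scale=1.0, size=100000)   # samples of |x|^2 under a stand-in rho
L = phi(x2).mean()                             # empirical phi-moment bound

for r in [1.0, 4.0, 9.0, 16.0]:
    tail = np.where(x2 >= r, x2, 0.0).mean()   # second moment outside {|x|^2 < r}
    # pointwise: s >= r implies s <= phi(s) * r / phi(r), so the bound is exact, not statistical
    assert tail <= L * r / phi(r) + 1e-12
```

Since the inequality s ≤ ϕ(s)·r/ϕ(r) holds pointwise for s ≥ r, the assertion is deterministic; the uniform decay of the tail second moment over K^q is what yields tightness in the 2-Wasserstein topology.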
Let fn ∈ B(En) and f ∈ Cb(E). Since every convergent sequence is in some K^q, f = LIM fn if and only if (1) sup_n ‖fn‖ < ∞, and (2) lim_{n→∞} fn(ρn) = f(ρ) whenever ρn ∈ En, ρ ∈ E satisfy r(ρn, ρ) → 0.
13.3.1. Convergence of Hn to H. Define
(13.43) D(H) = D0 ≡ {f(ρ) = ϕ(⟨p, ρ⟩) : ϕ ∈ C²(R^l), p = (p1, . . . , pl), pi − Ci ∈ C_c^∞(R^d), Ci ∈ R, i = 1, . . . , l; l = 1, 2, . . .} ⊂ Cb(E),
and for f ∈ D(H), define
(13.44) δf/δρ = Σ_{i=1}^l ∂iϕ(⟨p, ρ⟩) pi
and
(13.45) Hf(ρ) = ⟨ρ, B(ρ) δf/δρ⟩ + (1/2) ∫_{R^d} |∇ δf/δρ|² dρ.
Since hp, ρi ≤ supx p(x), Hf ∈ Cb (E). The convergence arguments provided in Example 1.14 show that {Hn } satisfies the requirements of Condition 7.11 with H† = H‡ = H. 13.3.2. Exponential tightness. D(H) is an algebra and separates points in E. Furthermore, for each ρ ∈ E, there exists a f ∈ D(H) such that f (ρ) 6= 0. By Theorem A.8, the bucclosure for D(H) is Cb (E). Consequently, Corollary 4.17 gives exponential tightness once we verify the exponential compact containment condition. Lemma 13.17. Assume Condition 13.14. Let K ⊂ E be compact. Then there exists γ : [0, ∞) → [0, ∞) such that γ 0R, γ 00 ≥ 0, γ 00 nonincreasing, and limr→∞ r−1 γ(r) = ∞ such that LK ≡ supρ∈K γ(x2 )ρ(dx) < ∞ and ϕK (x) ≡ γ(x2 ) satisfies (13.46)
CK ≡
sup x∈Rd ,ρ∈E
1 B(ρ)ϕK (x) + ∇ϕK (x)2 < ∞. 2
Proof. For ϕK (x) = γ(x2 ), B(ρ)ϕK (x) = 2γ 00 (x2 )x2 + γ 0 (x2 )(d − 2∇Ψ(x) · x − 2
Z ∇Φ(x − y)ρ(dy) · x)
and ∇ϕK (x)2 = 4x2 γ 0 (x2 )2 . By (13.37), γ 0 increases slowly enough γ 0 (x2 )x2 = 0, x→∞ ∇Ψ(x) · x lim
0 and (13.46) follows. Modifying γ so that still more slowly if necessary, R γ increases we can select γ so that LK ≡ supρ∈K γ(x2 )ρ(dx) < ∞.
Let K be compact, and let ϕK and LK be given by Lemma 13.17. Since ϕK has superquadratic growth, by Lemma D.17, for each M > 0, KM = {ρ ∈ E : hϕK , ρi ≤ M }
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
315
is a compact set in (E, r). Suppose that ρn (0) ∈ K. Let τn,M = inf{t > 0 : hϕK , ρn (t)i > M } and V (ρ) = log(1 + hϕK , ρi). Then by Itˆ o’s formula, (13.39), and the strong Markov property, n Zn (t) = exp nV (ρn (t ∧ τn,M )) − nV (ρn (0)) Z t∧τn,M o hB(ρ(s))ϕK , ρn (s)i 1 1 h∇ϕK 2 , ρ(s)i −n ( )ds + (1 − ) 1 + hϕK , ρn (s)i n 2 (1 + hϕK , ρn (s)i)2 0 is a mean one martingale. By (13.46), Z t∧τn,M hB(ρn (s))ϕK , ρn (s)i 1 1 h∇ϕK 2 , ρn (s)i ( + (1 − ) )ds ≤ CK (t ∧ τn,M ). 1 + hϕK , ρn (s)i n 2 (1 + hϕK , ρn (s)i)2 0 Therefore, as in Lemmas 4.20 and 4.22, for each T > 0, n log
P (∃t, t ∈ [0, T ], ρn (t) 6∈ KM ρn (0) ∈ K)e
1+M 1+LK
−nCK T
≤ E[Z(T )] = 1.
Summarizing the above, we arrive at the following. Lemma 13.18. For each a, T > 0, there exists M > 0 such that for all n = 1, 2, . . . (13.47)
sup
P (∃t, 0 ≤ t ≤ T, ρn (t) 6∈ KM ρn (0) = ρ0 ) ≤ e−na .
ρ0 ∈K∩En
13.3.3. Viscosity extension of H. We obtain the comparison principle by extending the operator H in a number of ways, so that Theorem 9.41 applies. The extensions are involved because we only identified the limiting H for a small class of test functions D(H). Starting with a small class of test functions gives a stronger result that has additional applications. The free energy functional E is defined by (9.94). In addition, we define the Fisher information I and I # by (9.102) and (9.105) respectively. The gradient for a function on E is defined in Definition 9.36. We will make extensive use of the weighted Sobolev space Hρ−1 (Rd ), whose definition and properties are discussed in Appendix D.5. The argument parallels the previous two examples; however, mass transport inequalities (Appendix D.7) will be used in place of the Sobolev inequalities. The mass transport techniques used here are summarized in Appendix D. For the purpose of proving the comparison principle, it is enough to work with subsolutions that are upper semicontinuous taking the weak topology on E and supersolutions that are lower semicontinuous taking the weak topology on E. To see that is the case, let rb be a metric on P(Rd ) giving the topology of weak convergence. (Note that (E, rb) is not complete.) Define fb(ρ) = lim sup{f (γ) : γ ∈ E, rb(ρ, γ) < }, fb(ρ) = lim inf{f (γ) : γ ∈ E, rb(ρ, γ) < }. →0+
→0+
Then for ρn , ρ0 ∈ E and rb(ρn , ρ0 ) → 0 (that is ρn ⇒ ρ0 ), lim sup fb(ρn ) ≤ fb(ρ0 ), n→∞
lim inf fb(ρn ) ≥ fb(ρ0 ). n→∞
316
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
Lemma 13.19. Let h ∈ D0 (as defined in (13.43)) and α > 0, and consider the equation f − αHf = h.
(13.48)
Let f be a viscosity subsolution of (13.48) and f a viscosity supersolution. Then fb is a viscosity subsolution of (13.48) and fb is a viscosity supersolution. Proof. Let f0 ∈ D(H). Since f is a subsolution, there exists ρn ∈ E such that lim (f (ρn ) − f0 (ρn )) = sup (f (ρ) − f0 (ρ)) n→∞
ρ∈E
and lim sup(f (ρn ) − h(ρn ) − αHf0 (ρn )) ≤ 0. n→∞
By definition, fb ≥ f , and clearly sup (f (ρ) − f0 (ρ)) = sup (fb(ρ) − f0 (ρ)). ρ∈E
ρ∈E
Consequently lim (fb(ρn ) − f0 (ρn )) ≥ lim(f (ρn ) − f0 (ρn )) = sup (f (ρ) − f0 (ρ))
n→∞
n
=
ρ∈E
sup (fb(ρ) − f0 (ρ)) ≥ lim (fb(ρn ) − f0 (ρn )) ρ∈E
n→∞
and limn fb(ρn ) = limn f (ρn ). Therefore lim inf (fb(ρn ) − h(ρn ) − αHf0 (ρn )) ≤ 0. n→∞
The proof for f is similar.
Remark 13.20. Because f ≤ fb and f ≥ fb, to prove the comparison principle f ≤ f , it is enough to show fb ≤ fb. With Lemma 13.19 in mind, we have the following variation of Lemma 7.7. Lemma 13.21. Let (E, r) = (P2 (Rd ), d2 ), H† ⊂ M l (E, R) × M (E, R), and h ∈ D0 (D0 defined in (13.43)), and let rb be a metric on P(Rd ) giving the topology of weak convergence. Let f be a subsolution of f − αH† f = h such that f is upper semicontinuous on (E, rb). Let (f0 , g0 ) ∈ M l (E, R) × M (E, R). Suppose that there exists {(fn , gn )} ⊂ H† satisfying the following: a) fn is lower semicontinuous on (E, rb). b) f1 ≤ f2 ≤ · · · and fn → f0 pointwise. c) If {ρn } ⊂ E satisfies supn fn (ρn ) < ∞ and inf n gn∗ (ρn ) > −∞, then {ρn } is relatively compact in (E, rb), and if a subsequence {ρnk } converges to ρ0 ∈ E and limn→∞ fnk (ρnk ) = f0 (ρ0 ), then (13.49)
lim sup gn∗ (ρnk ) ≤ g0∗ (ρ0 ). n→∞
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
317
b † = H† ∪ {(f0 , g0 )}, f is a subsolution of f − αH b † f = h. In Then, setting H addition, there exists ρ0 ∈ E such that f (ρ0 ) − f0 (ρ0 ) = sup (f (ρ) − f0 (ρ))
(13.50)
ρ∈E
and α−1 (f (ρ0 ) − h(ρ0 )) ≤ g0∗ (ρ0 ).
(13.51)
Let f be a supersolution of f −αH‡ f = h such that f is lower semicontinuous on (E, rb). Let (f0 , g0 ) ∈ M u (E, R) × M (E, R). Suppose that there exists {(fn , gn )} ⊂ H‡ satisfying the following: a) fn is upper semicontinuous on (E, rb). b) f1 ≥ f2 ≥ · · · and fn → f0 pointwise. c) If {ρn } ⊂ E satisfies inf n fn (ρn ) > −∞ and supn (gn )∗ (ρn ) < ∞, then {ρn } is relatively compact in (E, rb), and if a subsequence {ρnk } converges to ρ0 ∈ E and limn→∞ fnk (ρnk ) = f0 (ρ0 ), then lim inf (gn )∗ (ρnk ) ≥ (g0 )∗ (ρ0 ).
(13.52)
n→∞
b ‡ f = h. In b ‡ = H‡ ∪ {(f0 , g0 )}, f is a subsolution of f − αH Then, setting H addition, there exists ρ0 ∈ E such that f0 (ρ0 ) − f (ρ0 ) = sup (f0 (ρ) − f (ρ0 )
(13.53)
ρ∈E
and α−1 (f (ρ0 ) − h(ρ0 )) ≥ (g0 )∗ (ρ0 ).
(13.54)
Remark 13.22. Let ζ ∈ C 2 (R) be nondecreasing and satisfy ζ(u) = u, u ≤ 1, ζ ≤R 0, and ζ(u) = 2, u ≥ 3. Define ζn (x) = nζ(n−1 x2 ). If {ρn } satisfies supn Rd ζn (x)ρn (dx) < ∞, then {ρn } is relatively compact in (P(Rd ), rb). Since by Fatou’s lemma, any limit point will be in E = P2 (Rd ), {ρn } is relatively compact in (E, rb). 00
Proof. Let (fn , gn ) ∈ H† satisfy (a) through (c), and let n > 0 and n → 0. There exist ρn ∈ E such that f (ρn ) − fn (ρn ) ≥ sup(f (ρ) − fn (ρ)) − n ρ
and α−1 (f (ρn ) − h(ρn )) − gn∗ (ρn ) ≤ n . Note that fn (ρn ) ≤ 2kf k − inf f0 (ρ) + n ρ
and gn∗ (ρn ) ≥ −α−1 (kf k + khk) − n . It follows that {ρn } is relatively compact in (E, rb), and since f − fn is a decreasing sequence of upper semicontinuous functions, by Lemma A.4, if ρ0 ∈ E is a rblimit point of {ρn }, then (13.55)
lim f (ρn ) − fn (ρn ) = f (ρ0 ) − f0 (ρ0 ) = sup(f (ρ) − f0 (ρ)).
n→∞
ρ
For simplicity, assume that ρn ⇒ ρ0 . (Otherwise, consider a subsequence.) Since fn is lower semicontinuous in (E, rb), f0 is also, and the upper semicontinuity
318
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
of f and the lower semicontinuity of f0 imply that limn→∞ f (ρn ) = f (ρ0 ) and limn→∞ fn (ρn ) = f0 (ρ0 ). In addition, (13.56)
f0 (ρ0 ) = lim lim
inf
b
δ→0+ n→∞ r (ρ,ρ0 )≤δ
fn (ρ) = sup sup δ>0
b
inf
n r (ρ,ρ0 )≤δ
fn (ρ).
Then (13.55), (13.56), and (13.49) imply f (ρ0 ) − f0 (ρ0 ) − h(ρ0 ) = lim sup(f (ρn ) − fn (ρn ) − h(ρn )) ≤ lim sup(αgn∗ (ρn ) + αn − fn (ρn )) n→∞
≤ lim lim sup(α(gn∗ (ρn ) − ≤
δ→0+ n→∞ αg0∗ (ρ0 ) − f0 (ρ0 ).
b
inf
r (ρ,ρ0 )≤δ
fn (ρ))
Consequently, (13.50) and (13.51) hold for ρ0 and (7.2) and (7.3) hold for yn ≡ ρ0 . The proof for supersolutions is similar. Lemma 13.23. Suppose ϕ ∈ C 1 (R∞ ) (R∞ with the usual product topology), inf y ϕ(y) > −∞, ∂i ϕ ≥ 0, ∂1 ϕ > 0, limr1 →∞ ϕ(r1 , r2 , . . .) = ∞ for each r2 , r3 , . . . ∈ R, and K ≡ sup
(13.57)
∞ X
y∈R∞ i=1
∂i ϕ(y) < ∞.
For k = 1, 2, . . ., let pk ∈ C 2 (Rd ). Suppose that there exist a0 , b0 , a1 , b1 > 0 such that for each k, a1 x2 − b1 ≤ pk (x) ≤ a0 + b0 x2 and (13.58)
sup B(ρ)pk (x) + ρ
K ∇pk (x)2 ≤ a0 + b0 x2 . 2
Define f0 (ρ) = ϕ(hp, ρi) = ϕ(hp1 , ρi, hp2 , ρi, . . .) and g0 (ρ) = hρ,
∞ X
∂i ϕ(hp, ρi)B(ρ)pi i +
i=1
1 2
Z  Rd
∞ X
∂i ϕ(hp, ρi)∇pi 2 dρ.
i=1
Let H be defined by (13.45) and H† ⊃ H. Let f be a subsolution of f − αH† f = h b † = H† ∪ {(f0 , g0 )}, such that f is upper semicontinuous on (E, rb). Then, setting H b f is a subsolution of f − αH† f = h and there exists ρ0 ∈ E such that α−1 (f (ρ0 ) − h(ρ0 )) ≤ g0∗ (ρ0 ).
f (ρ0 ) − f0 (ρ0 ) = sup (f (ρ) − f0 (ρ)), ρ∈E
The analogous result for supersolutions is obtained by replacing ϕ by −ϕ, but keeping the same conditions on the pi . Proof. For k > n, let pnk (x) = rk = inf x pk (x), and for k ≤ n, let pnk = nζ(n−1 pk ). Define fn (ρ) = ϕ(hpn , ρi) = ϕ(hpn1 , ρi, hpn2 , ρi, . . .) and gn (ρ) = hρ,
n X i=1
n
∂i ϕ(hp
, ρi)B(ρ)pni i
1 + 2
Z  Rd
n X i=1
∂i ϕ(hpn , ρi)∇pni 2 dρ.
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
319
Then (fn , gn ) ∈ H ⊂ H† and Conditions (a) and (b) of Lemma 13.21 are satisfied. To see that (c) is satisfied, note that sup fn (ρn ) ≥ sup ϕ(hpn1 , ρn i, r2 , r3 , . . .). n
n
Consequently, supn fn (ρn ) < ∞ implies supn hpn1 , ρn i < ∞, so by Remark 13.22, {ρn } is relatively compact in (E, rb). For simplicity, assume that ρn → ρ0 in (E, rb). (Otherwise, consider a subsequence.) By Fatou’s lemma, lim inf n→∞ hpni , ρn i ≥ hpi , ρ0 i, and hence, if fn (ρn ) → f0 (ρ0 ), by the monotonicity and continuity of ϕ, f0 (ρ0 )
=
lim fn (ρn )
n→∞
n n lim ϕ( inf hpm 1 , ρm i, hp2 , ρ0 i ∧ inf hp2 , ρn i, hp3 , ρ0 i ∧ inf hp3 , ρn i, . . .)
≥
n→∞
≥ lim inf n→∞
m≥n m≥n n ϕ(hp1 , ρn i, hp2 , ρ0 i, hp3 , ρ0 i, . . .)
m≥n
≥ f0 (ρ0 ). It follows that hpn1 , ρn i → hp1 , ρ0 i, which in turn implies Z
Z g(x)(1 + ζn (x))ρn (dx) →
Rd
g(x)(1 + x2 )ρ0 (dx),
g ∈ Cb (Rd ).
Rd
Consequently, observing that for i ≤ n, B(ρ)pni = ζ 0 (n−1 pi )B(ρ)pi +
1 00 −1 ζ (n pi )∇pi 2 ≤ ζ 0 (n−1 pi )B(ρ)pi 2n
and Z gn (ρn ) = Rd
n X
n
∂i ϕ(hp
i=1
, ρn i)B(ρn )pni
n 1 X ∂i ϕ(hpn , ρn i)∇pni 2 +  2 i=1
! dρn ,
by (13.57) and (13.58), the integrand is bounded by n
n X
1 X ∂i ϕ(hpn , ρn i)B(ρn )pni (x) +  ∂i ϕ(hpn , ρn i)∇pni (x)2 2 i=1 i=1 ≤
n X i=1
∂i ϕ(hpn , ρn i)(B(ρn )pni (x) +
K ∇pni (x)2 ) 2
≤ Kζ 0 (n−1 (a1 x2 − b1 ))(a0 + b0 x2 ) 3n + b1 ≤ Ka0 + Kb0 (x2 ∧ ) a1 ≤ K0 (1 + ζn (x)), and Fatou’s lemma gives lim supn→∞ gn (ρn ) ≤ g0 (ρ0 ). To obtain the result for supersolutions, one needs a lower bound on the integrand in order to apply Fatou’s lemma. But that actually requires weaker conditions
320
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
on the pi . −
n X
n
1 X ∂i ϕ(hpn , ρn i)B(ρn )pni (x) +  ∂i ϕ(hpn , ρn i)∇pni (x)2 2 i=1 i=1 ≥−
n X
∂i ϕ(hpn , ρn i)ζ 0 (n−1 pi )B(ρn )pi (x)
i=1
≥ −Kζ 0 (n−1 (a1 x2 − b1 ))(a0 + b0 x2 ) 3n + b1 ) ≥ −Ka0 + Kb0 (x2 ∧ a1 ≥ −K0 (1 + ζn (x)) The following lemma gives the existence of useful {pi } satisfying the conditions of Lemma 13.23. Lemma 13.24. Suppose Condition 13.14 holds, and let ψ0 (x) = 1 + x2 . Then there exists C1 > 0 such that 1 sup B(ρ)ψ0 (x) + ∇ψ0 (x)2 = d − 2∇(Ψ + Φ ∗ ρ)(x) · x + 2x2 ≤ C1 (1 + x2 ). 2 ρ Proof. The semiconvexity condition implies −∇Ψ(x) · x ≤ −∇Ψ(0) · x − λΨ x2 , and that observation and the boundedness of ∇Φ give the lemma. Let a > 0, αk ≥ 0,
P∞
k=1
αk = 1, and for n = 1, 2, . . ., define ∞
(13.59)
ϕn (y) =
X a αk exp{2n yk }. log 2n k=1
Then αi exp{2n yi } ∂i ϕn (y) = a P∞ , n k=1 αk exp{2 yk } so
P
i
∂i ϕ(y) = a and (13.57) is satisfied with K = a. By Jensen’s inequality, (
∞ X
αk exp{2n yk })2 ≤
k=1
∞ X
αk exp{2n+1 yk },
k=1
and it follows that ϕ1 ≤ ϕ2 ≤ · · · . Finally, note that lim ϕn (y) = a sup yk .
n→∞
k
Lemma 13.25. Let θ > 0, ψ0 (x) = 1 + x2 , and p1 , . . . , pm ∈ C 2 (Rd ), and suppose that the conditions of Lemma 13.23 are satisfied with pi replaced by pi +θψ0 . Define f0 (ρ) = aθhψ0 , ρi + a sup hpk , ρi, 1≤k≤m
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
321
and Z n a2 g0 (ρ) = aθhρ, B(ρ)ψ0 i + max ahρ, B(ρ)pi i + θ∇ψ0 + ∇pi 2 dρ : 2 Rd o hρ, pi i = max hρ, pk i . 1≤k≤m
Let H be defined by (13.45), and H† ⊃ H. Let f be a subsolution of f −αH† f = b † = H† ∪{(f0 , g0 )}, h such that f is upper semicontinuous on (E, rb). Then, setting H b † f = h and there exists ρ0 ∈ E such that f is a subsolution of f − αH f (ρ0 ) − f0 (ρ0 ) = sup (f (ρ) − f0 (ρ)), ρ∈E
α−1 (f (ρ0 ) − h(ρ0 )) ≤ g0∗ (ρ0 ).
The analogous result holds for supersolutions with a replaced by −a and g0 given by g0 (ρ) = −aθhρ, B(ρ)ψ0 i + min
n
Z X a2 θ∇ψ0 + ∇ βi pi 2 dρ : − ahρ, B(ρ) βi pi i + 2 Rd o X X βi ≥ 0, βi = 1, βi hρ, pi i = max hρ, pk i . X
1≤k≤m
Proof. First consider the case for subsolutions. Let ϕn be given by (13.59) with ci = 0 for all i and αi = 0, i > m. Define fn (ρ) = ϕn (hθψ0 + p1 , ρi, . . . , hθψ0 + pm , ρi) and gn (ρ) = aθhρ, B(ρ)ψ0 i +
m X
∂i ϕn (hp, ρi)hρ, B(ρ)pi i
i=1
+
1 2
Z  Rd
m X
∂i ϕn (hp, ρi)(θ∇ψ0 + ∇pi )2 dρ,
i=1
and note that gn (ρ) ≤ aθhρ, B(ρ)ψ0 i +
m X
∂i ϕn (hp, ρi)(hρ, B(ρ)pi i +
i=1
a 2
Z
θ∇ψ0 + ∇pi 2 dρ)
Rd
By Lemma 13.23, we can assume that (fn , gn ) ∈ H† . Noting that limn→∞ fn (ρ) = f0 (ρ), if ρn → ρ in (E, rb) and fn (ρn ) → f0 (ρ), then hψ0 , ρn i → hψ0 , ρi. It follows that ρn → ρ in (E, r) and consequently that hpi , ρn i → hpi , ρi. If hpi , ρi < maxhpk , ρi, ∂i ϕn (hp, ρn i) → 0 and lim supn→∞ gn (ρn ) ≤ g0 (ρ). Consequently, the lemma follows by Lemma 13.21. Lemma 13.26. Let {pi } ⊂ C 2 (Rd ), and assume that for each i, there exist ai , bi > 0 such that −ai ≤ pi (x) ≤ ai + bi x2 and (13.60)
a sup B(ρ)pi (x) + ∇pi (x)2 ≤ ai + bi x2 . 2 ρ
Let a, θ > 0, and define f0 (ρ) = aθhψ0 , ρi + a sup(hpk , ρi), k
322
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
where ψ0 (x) = 1 + x2 . Setting (13.61)
h(ρ, p) = ahρ, B(ρ)pi +
a2 2
Z
θ∇ψ0 + ∇p2 dρ,
Rd
define = aθhρ, B(ρ)ψ0 i
g0 (ρ)
+ lim lim sup
sup
η→0+ m→∞ r(ρ0 ,ρ)≤η
n o max h(ρ0 , pi ) : hρ0 , pi i = max hρ0 , pk i . 1≤k≤m
Let H be defined by (13.45), and H† ⊃ H. Let f be a subsolution of f −αH† f = b † = H† ∪{(f0 , g0 )}, h such that f is upper semicontinuous on (E, rb). Then, setting H b f is a subsolution of f − αH† f = h and there exists ρ0 ∈ E such that α−1 (f (ρ0 ) − h(ρ0 )) ≤ g0∗ (ρ0 ).
f (ρ0 ) − f0 (ρ0 ) = sup (f (ρ) − f0 (ρ)), ρ∈E
The analogous result holds for supersolutions with a replaced by −a and g0 given by g0 (ρ) = −aθhρ, B(ρ)ψ0 i + lim lim inf
inf 0
η→0+ m→∞ r(ρ ,ρ)≤η
βi ≥ 0,
m X
βi = 1,
m X
i=1
m n X min h(ρ0 , βi pi ) : i=1
o βi hρ0 , pi i = max hρ0 , pk i .
i=1
1≤k≤m
Proof. For the subsolution case, define fn (ρ) = aθhψ0 , ρi + a sup (hpk , ρi) 1≤k≤n
and Z n a2 θ∇ψ0 + ∇pi 2 dρ : gn (ρ) = aθhρ, B(ρ)ψ0 i + max ahρ, B(ρ)pi i + 2 Rd o i ≤ n, hρ, pi i = max hρ, pk i . 1≤k≤n
By Lemma 13.25, we can assume that (fn , gn ) ∈ H† . Noting that limn→∞ fn (ρ) = f0 (ρ), if ρn → ρ in (E, rb) and fn (ρn ) → f0 (ρ), then hψ0 , ρn i → hψ0 , ρi. It follows that ρn → ρ in (E, r) and, by the definition of g0 , lim supn→∞ gn (ρn ) ≤ g0 (ρ). Consequently, the lemma follows by Lemma 13.21. The proof for supersolutions is similar. Corollary 13.27. For ψ0 (x) = 1 + x2 , let Cψ0 = {p ∈ C(Rd ) : p/ψ0 ∈ Cb (Rd )} be the Banach space of continuous functions with norm kpkψ0 = supx p(x)/ψ0 (x). Let Γ ⊂ Cψ0 ∩ C 2 (Rd ) be a compact, convex subset of Cψ0 such that for each p ∈ Γ, inf x p(x) > −∞ and there exist ap , bp > 0, such that a sup B(ρ)p(x) + ∇p(x)2 ≤ ap + bp x2 . 2 ρ Let a, θ > 0. Define (13.62)
fΓ (ρ) = aθhψ0 , ρi + a suphp, ρi + b p∈Γ
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
323
and gΓ (ρ)
= aθhρ, B(ρ)ψ0 i n o + lim sup h(ρ0 , p) : r(ρ0 , ρ) ≤ η, p ∈ Γ, hρ0 , pi ≥ sup hρ0 , p0 i − η . η→0+
p0 ∈Γ
Let H be defined by (13.45), and H† ⊃ H. Let f be a subsolution of f −αH† f = b † = H† ∪{(fΓ , gΓ )}, h such that f is upper semicontinuous on (E, rb). Then, setting H b f is a subsolution of f − αH† f = h and there exists ρΓ ∈ E such that α−1 (f (ρΓ ) − h(ρΓ )) ≤ gΓ∗ (ρΓ ).
f (ρΓ ) − fΓ (ρΓ ) = sup (f (ρ) − fΓ (ρ)), ρ∈E
The analogous result holds for supersolutions with a replaced by −a and gΓ given by gΓ (ρ)
= −aθhρ, B(ρ)ψ0 i n o + lim inf h(ρ0 , p) : r(ρ0 , ρ) ≤ η, p ∈ Γ, hρ0 , pi ≥ sup hρ0 , p0 i − η . η→0+
p0 ∈Γ
Proof. Let {pi } be dense in Γ, and note that {pi } satisfies the conditions of Lemma 13.26. Defining (f0 , g0 ) as in Lemma 13.26, the density of {pi } ensures that fΓ = f0 and that for each ρ and η > 0, for m sufficiently large, max1≤k≤m hρ0 , pk i > supp0 ∈Γ hρ0 , p0 i−η for all ρ0 satisfying r(ρ0 , ρ) ≤ η. That observation implies gΓ (ρ) ≥ g0 (ρ) (gΓ (ρ) ≤ g0 (ρ) in the supersolution case) and the corollary follows. Let c ∈ Cb (Rd × Rd ) satisfy c(x, y) ≥ 0, and assume that c is uniformly continuous. Define Fc (ρ, γ) as in (D.22), Z Fc (ρ, γ) = inf{ cdπ : π ∈ Π(ρ, γ)}, and let J be the mollifier defined in (D.4). In particular, J (x) = −d J(−1 x), R d ∞ where J ∈ Cc (R ) is nonnegative and Rd J(x)dx = 1, and Z J ∗ ρ(dx) = J (x − y)ρ(dy)dx. Rd
Note that if Z is an Rd valued random variable with density function J, then Z has density J . We will also write Z J ∗ p(x) = J (x − y)p(y)dy Rd
and note that hp, J ∗ ρi = hJ ∗ p, ρi. b γc be defined in (D.24), and note that by Corollary D.21 Let Γ Fc (ρ, γ) = sup hp, ρi.
b
p∈Γγ c
b γc satisfy kpk∞ ≤ kck∞ and have a common By Remark D.19, the functions in Γ b γc is compact in Cb (Rd ) in the topology of modulus of continuity. Consequently Γ uniform convergence on compact sets and is compact in Cψ0 in the norm topology. b γc } will have the same compactness properties. In addition, Γγc, = {J ∗ p : p ∈ Γ
324
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
Corollary 13.28. For a0 , k0 > 0, let c ∈ Cb (Rd × Rd ) satisfy c(x, y) = a0 for x + y ≥ k0 , and let Γγc, be defined as above. Define (13.63)
γ fc, (ρ) = θhψ0 , ρi +
1 1 sup hp, ρi = θhψ0 , ρi + Fc (J ∗ ρ, γ). γ 2δ p∈Γc, 2δ
Setting (13.64)
hδ,θ (ρ, p) =
Z
1 1 hρ, B(ρ)pi + 2 2δ 8δ
2δθ∇ψ0 + ∇p2 dρ,
Rd
b γc such that there exists pγ,ρ ∈ Γ γ fc, (ρ) = θhψ0 , ρi +
1 γ 1 hp , J ∗ ρi = θhψ0 , ρi + Fc (J ∗ ρ, γ) 2δ ,ρ 2δ
and γ gc, (ρ)
= θhρ, B(ρ)ψ0 i n o + lim sup hδ,θ (ρ0 , p) : r(ρ0 , ρ) ≤ η, hρ0 , pi ≥ sup hρ0 , p0 i − η η→0+
p0 ∈Γγ c,
= θhρ, B(ρ)ψ0 i + hδ,θ (ρ, J ∗ pγ,ρ ) Let H be defined by (13.45), and H† ⊃ H. Let f be a subsolution of f − b† = αH† f = h such that f is upper semicontinuous on (E, rb). Then, setting H γ γ b † f = h and there exists ρ0 ∈ E such H† ∪ {(fc, , gc, )}, f is a subsolution of f − αH that γ ∗ ) (ρ0 ). α−1 (f (ρ0 ) − h(ρ0 )) ≤ (gc,
γ γ f (ρ0 ) − fc, (ρ0 ) = sup (f (ρ) − fc, (ρ)), ρ∈E
The analogous result holds for supersolutions with γ fc, (ρ) = −θhψ0 , ρi −
1 γ 1 hp,ρ , J ∗ ρi = −θhψ0 , ρi − Fc (J ∗ ρ, γ) 2δ 2δ
and γ gc, (ρ)
= −θhρ, B(ρ)ψ0 i n o + lim inf hδ,θ (ρ0 , p) : r(ρ0 , ρ) ≤ η, hρ0 , pi ≥ sup hρ0 , p0 i − η η→0+
p0 ∈Γγ c,
= −θhρ, B(ρ)ψ0 i + h−δ,θ (ρ, J ∗
pγ,ρ )
where h−δ,θ (ρ, p) = −
1 1 hρ, B(ρ)pi + 2 2δ 8δ
Z
2δθ∇ψ0 + ∇p2 dρ
Rd
Proof. Taking a = 1/(2δ) and replacing θ by 2δθ in the definition of fΓ in b γc , and the (13.62), the Corollary follows by Corollary 13.27, the compactness of Γ fact that J ∗ p(x) is constant for x ≥ k0 + . For θ > 0, let (13.65)
f0 (ρ)
= θhψ0 , ρi + sup hp, ρi p∈Γγ c0 ,
= θhψ0 , ρi + θhψ0 , γi +
1 2 d (J ∗ ρ, γ). 2δ
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
325
We again apply Lemma 13.21 to extend the operator to functions of this form. For ζ defined as in Remark 13.22, and n = 1, 2, . . ., let cn (x, y) = nζ(n−1 x − y2 + n−1 2δθψ0 (y)) and c0 (x, y) = x − y2 + 2δθψ0 (y). Then cn satisfies the conditions of Corollary 13.28, and setting fn = fcγn , , limn→∞ fn = f0 . Since f1 ≤ f2 ≤ · · · , Conditions (a) and (b) of the first part of Lemma 13.21 are satisfied. It remains to identify g0 satisfying Condition (c). Suppose rb(ρn , ρ) → 0, fn (ρn ) → f0 (ρ), and inf gn∗ (ρn ) > −∞.
(13.66)
n
By Fatou’s lemma lim inf Fcn (J ∗ ρn , γ) ≥ Fc0 (J ∗ ρ, γ), n→∞
lim inf hψ0 , ρn i ≥ hψ0 , ρi, n→∞
so convergence of fn (ρn ) to f0 (ρ) implies convergence of both sequences. In particular, hψ0 , ρn i → hψ0 , ρi which in turn implies r(ρn , ρ) → 0. Lemma 13.29. Assume that γ has compact support Kγ . Let ρn be as above, b ∈ Rd , and let pn = pγ,ρn . Then for x, x (13.67)
pn (x) − pn (b x) ≤ sup (x + b x + 2y)x − x b, y∈Kγ
and {pn } is bounded in Cψ0 and relatively compact in the topology of uniform convergence on compact sets. Any limit point p,ρ of pn satisfies 1 f0 (ρ) = θhψ0 , ρi + hp,ρ , J ∗ ρi (13.68) 2δ 1 = θhψ0 , ρi + θhψ0 , γi + d2 (J ∗ ρ, γ) 2δ and (13.69)
∇p,ρ (x) ≤ 2x + sup 2y,
a.e.
y∈Kγ
Proof. The inequality (13.67) followsR as in (D.21). Since the right side of (13.67) is independent of n and pn (x) ≤ Rd c0 (x, y)γ(dy), for each compact set K ⊂ Rd , Z sup sup pn (x) < c0 (b x, y)γ(dy) + sup sup (x + b x + 2y)x − x b < ∞ n x∈K
x∈K y∈Kγ
Rd
and either lim inf sup pn (x) = −∞ n→∞ x∈K
or (13.70) lim inf inf pn (x) > lim inf sup pn (x) − sup sup (x + b x + 2y)x − x b > −∞. n→∞ x∈K
n→∞ x∈K
b
x,x∈K y∈Kγ
But hpn , J ∗ ρn i → 2δθhψ0 , γi + d2 (J ∗ ρ, γ) > −∞, so (13.70) must hold. Consequently, pn (x) ≤ sup pn (0) + sup (x + 2y)x, n
y∈Kγ
326
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
and the second assertion of the lemma follows. (13.68) follows by the dominated convergence theorem, and (13.69) follows from (13.67). Since {pn } bounded in Cψ0 implies {∆J ∗ pn } is bounded in Cψ0 and the estimate on ∇pn  implies {∇J ∗ pn 2 } is bounded in Cψ0 , pn converging to p,ρ implies that all the terms in Z 1 1 2δθ∇ψ0 + ∇J ∗ pn 2 dρn gn (ρn ) = θhρn , B(ρn )ψ0 i + hρn , B(ρn )J ∗ pn i + 2 2δ 8δ Rd converge to the corresponding terms in θhρ, B(ρ)ψ0 i +
1 1 hρ, B(ρ)J ∗ p,ρ i + 2 2δ 8δ
Z
2δθ∇ψ0 + ∇J ∗ p,ρ 2 dρ
Rd
1 hρn , ∇Ψ · ∇J ∗ pn i and −θhρn , ∇Ψ · ∇ψ0 i. The with the possible exceptions of − 2δ limit in (13.37) and Fatou’s lemma imply that
lim sup(−θhρn , ∇Ψ · ∇ψ0 i) ≤ −θhρ, ∇Ψ · ∇ψ0 i. n→∞
Let Xn have distribution ρn and let Z be independent of Xn and have distribution with density function J. Then Xn = Xn + Z has distribution J ∗ ρn . By Theorem D.20 and Corollary D.21, there exists a distribution πn ∈ Π(J ∗ ρn , γ) and qn such that pn (x) + qn (y) ≤ cn (x, y) with equality holding almost surely πn . It follows that ∇pn (x) = ∇x cn (x, y) = ζ 0 (n−1 x − y2 + n−1 δθψ0 (y))2(x − y) a.s. πn . There exists a probability space with random variables (Xn , Yn , Zn ) such that Xn and Zn are independent, Xn has distribution ρn , Zn has density J, and the joint distribution of (Xn + Zn , Yn ) is πn . We then have −hρn , ∇Ψ · ∇J ∗ pn i = −E[∇Ψ(Xn ) · ∇pn (Xn + Zn )] = E[2ζ 0 (n−1 Xn + Zn − Yn 2 + n−1 δθψ0 (Yn ))∇Ψ(Xn ) · (Yn − Xn − Zn )] As in the proof of Lemma 13.16, the semiconvexity of Ψ implies λΨ x − y2 . 2 Consequently, setting Vn = ζ 0 (n−1 Xn + Z − Yn 2 + n−1 δθψ0 (Yn )) ∇Ψ(x) · (y − x) ≤ Ψ(y) − Ψ(x) −
(13.71)
−hρn , ∇Ψ · ∇J ∗ pn i
λΨ Xn + Zn − Yn 2 )]. 2 By (13.66), it follows that supn E[Ψ(Xn )] = supn hρn , Ψi < ∞. By (13.36) and the compactness of the support of Yn and Zn , there exists a constant a > 0 such that the random variable on the right is bounded above by aψ0 (Xn ). Let (X, Y, Z) be the limit in distribution of a subsequence of {(Xn , Yn , Zn )}. Then Y has distribution γ, Z has density J, X + Z has distribution J ∗ ρ, and ≤ 2E[Vn (Ψ(Yn − Zn ) − Ψ(Xn ) −
E[Xn + Zn − Yn 2 ] = Fcn (J ∗ ρn , γ) → d2 (J ∗ ρ, γ) = E[X + Z − Y 2 ]. Fatou’s lemma then implies lim sup(−hρn , ∇Ψ · ∇J ∗ pn i) ≤ 2E[Ψ(Y − Z)] − 2hρ, Ψi − λΨ d2 (J ∗ ρ, γ). n→∞
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
327
Letting π denote the joint distribution of (X + Z, Y ), p,ρ is locally Lipschitz and as in Lemma D.23 ∇p,ρ (x) = 2(x − y) a.s. π . Since Xn and Zn are independent, X and Z are independent, and we have Z 2δθ∇ψ0 + ∇J ∗ p,ρ 2 dρ = E[4δθX + E[∇p,ρ (X + Z)X]2 ] Rd
= E[4δθX + 2E[X + Z − Y X]2 ] = E[4δθX + 2X − 2E[Y X]2 ] p E[X + Z − Y 2 ] ≤ 4 2 p +4 E[2δθX − Z + (Y − E[Y X])2 ] p = 4(d(J ∗ ρ, γ) + E[R(X, Y, Z, , δ, θ)2 ])2 , where we define R(X, Y, Z, , δ, θ) = 2δθX − Z + (Y − E[Y X]). Similarly, Z p 2δθ∇ψ0 + ∇J ∗ p,ρ 2 dρ ≥ 4(d(J ∗ ρ, γ) − E[R(X, Y, Z, , δ, θ)2 ])2 . Rd
Summarizing these estimates and applying Lemma 13.21, we have the following. Lemma 13.30. For θ, δ > 0, let f (ρ) = θhψ0 , ρi + θhψ0 , γi +
1 2 d (J ∗ ρ, γ) 2δ
and g (ρ)
= θhρ, B(ρ)ψ0 i 1 1 hρ, ∆J ∗ p,ρ i − hρ, ∇Φ ∗ ρ · J ∗ ∇p,ρ i + 2δ 2 1 λΨ 2 + E[Ψ(Yρ − Zρ )] − hρ, Ψi − d (J ∗ ρ, γ) δ 2 q 1 + 2 (d(J ∗ ρ, γ) + E[R(Xρ , Yρ , Zρ , , δ, θ)2 ])2 , 2δ
where (Xρ , Yρ , Zρ ) is the limit in distribution of (Xn , Yn , Zn ) (possibly along a subsequence). Let H be defined by (13.45), and H† ⊃ H. Let f be a subsolution of f −αH† f = b † = H† ∪{(f , g )}, h such that f is upper semicontinuous on (E, rb). Then, setting H b † f = h and there exists ρ ∈ E such that f is a subsolution of f − αH f (ρ ) − f (ρ ) = sup (f (ρ) − f (ρ)), ρ∈E
α−1 (f (ρ ) − h(ρ )) ≤ g∗ (ρ ).
The analogous result holds for supersolutions with f (ρ) = −θhψ0 , ρi − θhψ0 , γi −
1 2 d (J ∗ ρ, γ) 2δ
328
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
and g (ρ)
= −θhρ, B(ρ)ψ0 i 1 1 − hρ, ∆J ∗ p,ρ i − hρ, ∇Φ ∗ ρ · J ∗ ∇p,ρ i 2δ 2 λΨ 2 1 − E[Ψ(Yρ − Zρ )] − hρ, Ψi − d (J ∗ ρ, γ) δ 2 q 1 + 2 (d(J ∗ ρ, γ) − E[R(Xρ , Yρ , Zρ , , δ, θ)2 ])2 . 2δ
R Set S(ρ) = Rd ρ(x) log ρ(x)dx, if ρ has a Lebesgue density and the integral exists. Otherwise, set S(ρ) = +∞. The DonskerVaradhan variational principle (e.g., Lemma 1.4.3 of [35]) implies Z Z S(ρ) = sup{ (log f (x))ρ(dx) : f ∈ Cb (Rd ), f ≥ 0, f (x)dx = 1}, Rd
Rd −x2
so S(ρ) is lower semicontinuous, and taking f (x) = ce , Z Z Z 2 (13.72) ρ(x) log ρ(x)dx ≥ − log e−x dx − Rd
Rd
x2 ρ(x)dx.
Rd
If γ has a Lebesgue density and S(γ) < ∞, then by Lemma D.53, 1 hρ, ∆J ∗ p,ρ i ≤ S(γ) − S(J ∗ ρ). 2 As in (13.71) −hρ, ∇Φ ∗ ρ · J ∗ ∇p,ρ i = −E[∇Φ(Xρ − x)ρ(dx) · ∇p,ρ (Xρ + Zρ )] Z = 2E[ ∇Φ(Xρ − x)ρ(dx) · (Yρ − Xρ − Zρ )] d ZR ≤ 2E[ Φ(Yρ − Zρ − x)ρ(dx)] Rd Z −2 Φ(b x − x)ρ(dx)ρ(db x) − λΦ d2 (J ∗ ρ, γ) Rd ×Rd
Recalling the definition of E and (9.96) E(ρ) = where e0 =
1 2
R Rd
1 1 S(ρ) + hΨ, ρi + hΦ ∗ ρ, ρi + e0 , 2 2
e−2Ψ(x) dx, we have
1 g (ρ) ≤ θhρ, B(ρ)ψ0 i + (S(γ) − S(J ∗ ρ)) 2δ Z 1 (13.73) + E[ Φ(Yρ − Zρ − x)ρ(dx)] δ d ZR 1 − Φ(b x − x)ρ(dx)ρ(db x) − λΦ d2 (J ∗ ρ, γ) 2 Rd ×Rd λΨ 2 1 d (J ∗ ρ, γ) + E[Ψ(Yρ − Zρ )] − hρ, Ψi − δ 2 q 1 + 2 (d(J ∗ ρ, γ) + E[R(Xρ , Yρ , Zρ , , δ, θ)2 ])2 , 2δ
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
329
Lemma 13.31. Suppose S(γ) < ∞. Let f0 (ρ) = θhψ0 , ρi + θhψ0 , γi +
1 2 d (ρ, γ) 2δ
and g0 (ρ)
= θhρ, B(ρ)ψ0 i 1 + S(γ) − S(ρ) + 2(hΦ ∗ ρ, γi − hΦ ∗ ρ, ρi) 2δ λΨ + λ Φ 2 1 + hγ, Ψi − hρ, Ψi − d (ρ, γ) δ 2 p 1 + 2 (d(ρ, γ) + 2δθ hψ0 − 1, ρi)2 2δ = θhρ, B(ρ)ψ0 i 1 1 1 + E(γ) − E(ρ) + (hΦ ∗ ρ, γi − hΦ ∗ γ, γi − hΦ ∗ ρ, ρi) δ 2 2 p 1 λΨ + λ Φ 2 d (ρ, γ) + 2 (d(ρ, γ) + 2δθ hψ0 − 1, ρi)2 , − 2δ 2δ
Let H be defined by (13.45), and H† ⊃ H. Let f be a subsolution of f −αH† f = b † = H† ∪{(f0 , g0 )}, h such that f is upper semicontinuous on (E, rb). Then, setting H b † f = h and there exists ρ0 ∈ E such that f is a subsolution of f − H f (ρ0 ) − f0 (ρ0 ) = sup (f (ρ) − f0 (ρ)), ρ∈E
α−1 (f (ρ0 ) − h(ρ0 )) ≤ g0∗ (ρ0 ).
The conclusion also holds with θ = 0. The analogous result holds for supersolutions with f0 (ρ) = −θhψ0 , ρi − θhψ0 , γi −
1 2 d (ρ, γ) 2δ
and g0 (ρ) =
θhρ, B(ρ)ψ0 i 1 1 1 − E(γ) − E(ρ) + (hΦ ∗ ρ, γi − hΦ ∗ γ, γi − hΦ ∗ ρ, ρi) δ 2 2 p λ Ψ + λΦ 2 1 + d (ρ, γ) + 2 (d(ρ, γ) − 2δθ hψ0 − 1, ρi)2 . 2δ 2δ
Proof. Denote the right side of (13.73) by gb (ρ). Then the conclusion of Lemma 13.30 holds with g replaced by gb . Let ρ satisfy f (ρ ) − f (ρ ) = sup (f (ρ) − f (ρ)), ρ∈E
α−1 (f (ρ ) − h(ρ )) ≤ gb∗ (ρ ).
Then sup f (ρ ) < ∞ and inf gb∗ (ρ ) ≥ −α−1 (kf k+khk). It follows that sup hΨ, ρ i < ∞ and sup S(J ∗ ρ ) < ∞. Consequently, {ρ } is relatively compact in (E, r). Assume that r(ρ , ρ0 ) → 0 (otherwise, select a convergent subsequence). Then S(ρ0 ) < ∞ and ρ0 has a Lebesgue density. Selecting a convergent subsequence, if necessary, (Xρ , Yρ ) ⇒ (X0 , Y0 ) and E[X0 − Y0 2 ] = d2 (ρ, γ). The convergence
330
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
(Xρ , Yρ ) ⇒ (X0 , Y0 ) implies lim sup E[(Yρ − E[Yρ Xρ ])2 ] ≤ →0
=
sup
lim E[(Yρ − g(Xρ ))2 ]
sup
E[(Y0 − g(X0 ))2 ]
g∈Cb (Rd ) →0
g∈Cb
(Rd )
= E[(Y0 − E[Y0 X0 ])2 ], and Theorem D.25 implies the right side is zero. The upper semicontinuity of f implies f (ρ0 )−f0 (ρ0 ) = supρ∈E (f (ρ)−f0 (ρ)) and f (ρ ) → f (ρ0 ). The convergence of ρ implies r(J ∗ ρ , ρ0 ) → 0, and the continuity and semicontinuity properties of the terms in gb imply lim sup→0 gb (ρ ) ≤ g0 (ρ0 ). It follows that α−1 (f (ρ0 ) − h(ρ0 )) ≤ gb0∗ (ρ0 ) giving the lemma. For θ > 0, rename (f0 , g0 , ρ0 ), (fθ , gθ , ρθ ). The above argument now applies as θ → 0. The proof for supersolutions is similar. 13.3.4. The comparison principle. Theorem 13.32. Let H be defined by (13.45), and let h ∈ D0 and α > 0. Suppose that f is a subsolution and f is a supersolution to (I − αH)f = h.
(13.74) Then f ≤ f .
Remark 13.33. By Remark 13.20, we can assume, without loss of generality, that f is upper semicontinuous on (E, rb) and f is lower semicontinuous on (E, rb). By Theorem 9.41, Theorem 13.32 is a consequence of the following lemma. Let ωh denote the modulus of continuity of h with respect to d, that is, h(ρ) − h(γ) ≤ ωh (d(ρ, γ)),
ρ, γ ∈ E.
b † and H b ‡ be defined as in (9.107) and (9.108). For δ > 0, Lemma 13.34. Let H define 1 f δ (ρ) = λ0,δ sup {f (γ) − d2 (ρ, γ)}, 2δ γ∈E 1 f δ (γ) = λ1,δ inf {f (ρ) + d2 (ρ, γ)}, ρ∈E 2δ where λ0,δ = (1 + 2δ(λΨ  + 2λΦ )), and
λ1,δ = (1 − 2δ(λΨ  + 2λΦ ))−1 ,
r h0,δ = λ0,δ h + ωh (2 δ sup f (ρ)) ρ∈E
and
r h1,δ = λ1,δ h − ωh (2 δ sup f (ρ)) . ρ∈E
Then f δ is a continuous subsolution of (13.75)
b † )f = h0,δ (I − αH
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
331
and f δ is a continuous supersolutions of b ‡ )f = h1,δ . (I − αH
(13.76)
Proof. We prove the subsolution case. The proof of the supersolution case is similar. Since f δ (ρ) − f δ (ρ0 ) λ0,δ ≤ sup{d2 (ρ0 , γ) − d2 (ρ, γ) : γ ∈ E, d2 (ρ, γ) ≤ 2δ(sup f (γ 0 ) − f (ρ))}, 2δ γ0 f δ is continuous. Let f0 be of the form (9.111): f0 (ρ) = ad2 (ρ, τ0 ) + bE(ρ) + cG(ρ, σ0 ),
(13.77)
where a, c > 0, 0 < b < 1, τ0 , σ0 ∈ E, E is defined in (9.94), and G in (9.110). Since E has compact level set in (E, r) (Lemma 9.39) and since f δ − f0 is upper semicontinuous, there exists ρ0 ∈ E such that 1 (13.78) (f δ − f0 )(ρ0 ) = sup λ0,δ (f (γ) − d2 (ρ, γ)) − f0 (ρ) . 2δ ρ,γ∈E Note that ρ0 must satisfy f0 (ρ0 ) < ∞ and E(ρ0 ) < ∞ and have a Lebesgue density. Since f is a viscosity subsolution for (13.74), by Lemma 13.31, there exists γ0 ∈ E satisfying (13.79) 1 1 λ0,δ (f (γ0 ) − d2 (ρ0 , γ0 )) − f0 (ρ0 ) = sup λ0,δ (f (γ) − d2 (ρ, γ)) − f0 (ρ) 2δ 2δ ρ,γ∈E and f −h 1 1 (γ0 ) ≤ H0 ( d2 (ρ0 , ·) + f0 (ρ0 ))(γ0 ) α 2δ λ0,δ 1 1 1 E(ρ0 ) − E(γ0 ) + (hΦ ∗ γ0 , ρ0 i − hΦ ∗ γ0 , γ0 i − hΦ ∗ ρ0 , ρ0 i) (13.80) = δ 2 2 λΨ + 2λΦ 2 1 − d (ρ0 , γ0 ) + 2 d(ρ0 , γ0 ). 2δ 2δ 1 1 ≤ (E(ρ0 ) − E(γ0 )) + 2 (1 − δ(λΨ + 2λΦ ))d(ρ0 , γ0 ). δ 2δ To complete the proof, we must show f δ (ρ0 )
= λ0,δ sup {f (γ) − γ∈E
1 2 d (ρ0 , γ)} 2δ
1 = λ0,δ (f (γ0 ) − d2 (ρ0 , γ0 )) 2δ r b † f0 (ρ0 ), ≤ λ0,δ h(ρ0 ) + ωh (2 δ sup f (ρ)) + αH ρ∈E
which is the consequence of the next two lemmas. Lemma 13.35. Let ρ0 satisfy (13.78). Then I # (ρ0 ) < ∞.
332
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
Proof. Noting that E(ρ0 ) < ∞, from (D.31), (D.32), and the definition of f0 in (13.77), (13.81) gradf0 (ρ0 )
= b gradE(ρ0 ) + a gradρ0 d2 (ρ0 , τ0 ) + c gradρ0 G(ρ0 , σ0 ) 1 = −b( ∆ρ0 + ∇ · (ρ0 ∇(Ψ + ρ0 ∗ Φ))) 2 ∞ X hek , ρ0 − σ0 i −2c∇ · (ρ0 ∇( ek )) 2k (1 + m2k ) k=1
−2a∇ · (ρ0 ∇pρ0 ,τ0 ), where pρ0 ,τ0 is as in Theorem D.25. With reference to Appendix D.5 for the definition and properties of Hρ−1 (Rd ), 1 kgradρ0 d2 (ρ0 , τ0 )k2−1,ρ0 = d2 (ρ0 , τ0 ) < ∞, 2 by (D.46), and kgradρ0 G(ρ0 , σ0 )k−1ρ0 < ∞, so kgradf0 (ρ0 )k2−1,ρ0
= a2 kgradρ0 d2 (ρ0 , τ0 )k2−1,ρ0 + b2 kgradE(ρ0 )k2−1,ρ0 +c2 kgradρ0 G(ρ0 , σ0 )k2−1,ρ0 +2abhgradρ0 d2 (ρ0 , τ0 ), gradE(ρ0 )i−1,ρ0
(13.82)
+2achgradρ0 d2 (ρ0 , τ0 ), gradρ0 G(ρ0 , σ0 )i−1,ρ0 +2bchgradρ0 d2 (ρ0 , τ0 ), gradρ0 G(ρ0 , σ0 )i−1,ρ0 is finite if and only if kgradE(ρ0 )k−1,ρ0 < ∞. Since (13.79) implies that (13.83)
f0 (ρ) − f0 (ρ0 ) ≥ −λ0,δ
1 2 1 d (ρ, γ0 ) + λ0,δ d2 (ρ0 , γ0 ), 2δ 2δ
by Lemma D.55 and (D.31), (13.84)
gradf0 (ρ0 ) = gradρ=ρ0
−λ0,δ 2 ∇ · (ρ0 ∇pρ0 ,γ0 ) d (ρ, γ0 ) = λ0,δ . 2δ δ
By (D.46), (13.85) kgradf0 (ρ0 )k2−1,ρ0 = kgradρ=ρ0
λ20,δ λ0,δ 2 d (ρ, γ0 )k2−1,ρ0 = 2 d2 (ρ0 , γ0 ) < ∞. 2δ δ
Consequently, by (13.82), (13.86)
I # (ρ0 ) = 4kgradE(ρ0 )k2−1,ρ0 < ∞.
Next, using I # (ρ0 ) < ∞ and the mass transport inequality (D.75), we connect b † f0 (ρ0 ). the right side of (13.80) with H Lemma 13.36. 1 −hgradE(ρ0 ), gradf0 (ρ0 )i−1,ρ0 + kgradf0 (ρ0 )k2−1,ρ0 2 b † f0 (ρ0 ). = H
λ0,δ α−1 (f − h)(γ0 ) ≤ (13.87)
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
333
Proof. By (D.75), α−1 (f − h)(γ0 ) ≤
1 δ
Z ∇pρ0 ,γ0 · ( Rd
+
1 ∇ρ0 + ∇Ψ + ∇(Φ ∗ ρ0 ))dρ0 2 ρ0
λ0,δ 2 d (ρ0 , γ0 ), 2δ 2
where λ0,δ = 1 + 2δ(λΨ  + 2λΦ ). Note that by (13.84) and Lemma D.46, Z λ0,δ 1 ∇ρ0 (x) ∇p0 (x) · ( + ∇Ψ + ∇(Φ ∗ ρ0 ))ρ0 (x)dx δ 2 ρ0 (x) Rd = h−gradE(ρ0 ), gradf0 (ρ0 )i−1,ρ0 . Therefore, applying (13.85), the lemma follows.
Proof. (Lemma 13.34) By (13.79), (f δ − f0 )(ρ0 )
1 2 d (ρ0 , γ0 )) − f0 (ρ0 ) 2δ 1 = sup (λ0,δ (f (γ) − d2 (ρ, γ)) − f0 (ρ)) 2δ ρ,γ∈E = λ0,δ (f (γ0 ) −
=
sup (f δ (ρ) − f0 (ρ)),
ρ∈E
which implies that λ0,δ 2 d (ρ0 , γ0 ) − λ0,δ h(γ0 ) 2δ = f δ (ρ0 ) − λ0,δ h(γ0 ).
λ0,δ (f − h)(γ0 ) ≥ λ0,δ f (γ0 ) −
In addition, (13.79) implies d2 (ρ0 , γ0 ) ≤ 2δ(f (γ0 ) − f (ρ0 )) ≤ 4δ sup f (ρ), ρ∈E
which further implies that r h(ρ0 ) − h(γ0 ) ≤ ωh (2 δ sup f (ρ)). ρ∈E
Therefore, from (13.87), we have b † f0 (ρ0 ) ≥ α−1 (f δ (ρ0 ) − λ0,δ h(γ0 )) H = α−1 f δ (ρ0 ) − λ0,δ (h(ρ0 ) − h(γ0 ) + h(ρ0 )) ≥ α−1 (f δ − h0,δ )(ρ0 ), and we conclude that f δ is a subsolution to (13.75).
Proof. (Theorem 13.32) By Lemma 13.34 and Theorem 9.41, sup f δ (ρ) − f δ (ρ) ≤ sup h0,δ (ρ) − h1,δ (ρ) . ρ∈E
Letting δ → 0+, we conclude f ≤ f .
ρ∈E
334
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
13.3.5. Variational representation. Let ρ ∈ P(Rd ) and p ∈ Cc∞ (Rd ). Then Z Z Z 1 1 ∇p(x)2 ρ(dx) = sup { u(x) · ∇p(x)ρ(dx) − u(x)2 ρ(dx)}) 2 Rd 2 2 d d d u∈Lρ (R ) R R Z Z 1 (13.88) u(x)2 ρ(dx)}, = sup { u(x) · ∇p(x)ρ(dx) − 2 Rd Rd u∈L2ρ,∇ (Rd ) where L2ρ,∇ (Rd ) is the closure of {∇p : p ∈ Cc∞ (Rd )} in L2ρ (Rd ). The second R R equality = Rd Pρ,∇ u(x)·∇p(x)ρ(dx) R follows2 from theRfact that Rd u(x)·∇p(x)ρ(dx) and Rd u(x) ρ(dx) ≥ Rd Pρ,∇ u(x)2 ρ(dx), where Pρ,∇ : L2ρ (Rd ) → L2ρ,∇ (Rd ) is the projection onto L2ρ,∇ (Rd ). This identity suggests a variational form for H in which the controls are Rd valued functions; however, the compactness conditions of Chapter 8 are difficult to obtain using that approach. R Instead, let the control space U be the subspace of µ ∈ P(Rd × Rd ) such that (1 + x2 + z)µ(dx × dz) < ∞, with the topology in which µn → µ if and only if Z Z f (x, z)(1 + x2 + z)µn (dx × dz) → f (x, z)(1 + x2 + z)µ(dx × dz) for all f ∈ Cb (Rd × Rd ). Let the set Γ that determines the admissible controls be the collection of (ρ, µ) ∈ E × U such that ρ(dx) = µ(dx × Rd ). For (ρ, µ) ∈ Γ, we write µ(dx × dz) = µ(dzx)ρ(dx) and define Z Z Z 2 (13.89) L(ρ, µ) ≡ z µ(dx × dz) ≥  zµ(dzx)2 ρ(dx), Rd ×Rd
Rd
Rd
where the inequality follows by Jensen’s inequality. Consequently, by (13.88) and the inequality in (13.89), we have the variational representation for the H in (13.45): Z δf Hf (ρ) = sup { (B(ρ) + z · ∇) (x)µ(dx × dz) − L(ρ, µ)} δρ µ∈Γρ Rd ×Rd Z Z δf u(x)2 ρ(dx)} = sup { (B(ρ) + u(x) · ∇) (x)ρ(dx) − δρ u∈L2ρ (Rd ) Rd Rd Z Z δf = sup { (B(ρ) + u(x) · ∇) (x)ρ(dx) − u(x)2 ρ(dx)}, δρ Rd Rd u∈L2ρ,∇ (Rd ) where f ∈ D(H), δf /δρ is defined by (13.44), and the second expression is obtained from the first by setting µ(dx × dz) = δu(x) (dz)ρ(dx). Then, for f ∈ D(H) and (ρ, µ) ∈ Γ, define Z δf (13.90) Af (ρ, µ) = (B(ρ) + z · ∇) (x)µ(dx × dz), δρ Rd ×Rd so (13.91)
Hf (ρ) = sup (Af (ρ, µ) − L(ρ, u)). µ∈Γρ
The control equation becomes Z t δf f (ρ(t)) = f (ρ(0)) + (B(ρ(s)) + uµ (s, x) · ∇) (x)ρ(s, dx)ds, δρ 0
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
R where uµ (s, x) = zµ(dzx, s) and we require Z t δf (B(ρ(s)) + uµ (s, x) · ∇) (x)ρ(s, dx)ds < ∞, δρ 0
335
f ∈ D(H).
The control equation is equivalent to requiring Z t (13.92) hq, ρ(t)i = hq, ρ(0)i + hB(ρ(s))q + uµ (s, ·) · ∇q, ρ(s)ids,
q ∈ Cc∞ (Rd ).
0
The continuity condition, Condition 8.9.(1), follows immediately from the definition of the topology on U . Setting µ(dx × dz) = ρ(dx)δ0 (dz), the control problem reduces to (1.45) and Condition 8.9.(2) follows, as does Condition 8.10. Let K ⊂ E be compact and set ΓK,c = {(ρ, µ) ∈ Γ : L(ρ, µ) ≤ c} ∩ (K × U ). Then {µ(·×Rd ) : (ρ, µ) ∈ ΓK,c } = K is compact in E by assumption and {µ(Rd ×·) : (ρ, R µ) ∈ KK,c } is compact R in the topology determined by the requirement that (1 + z)f (z)νn (dz) → (1 + z)f (z)ν(dz), f ∈ Cb (Rd ). Compactness of ΓK,c then follows by a minor modification of Prohorov’s theorem. By thepSchwartz inequality, Condition 8.9.(5) follows with ψf,K (r) of the form Cf,K (1 + r), for some constant Cf,K . For f (ρ) = ϕ(hp, ρi) ∈ D(H), the supremum in (13.91) is achieved by taking Pl µf,ρ (dx × dz) = ρ(dx)δuf,ρ (x) (dz) with uf,ρ (x) = i=1 ∂i ϕ(hp, ρi)∇pi (x). Taking H = H† = H‡ = H, Condition 8.11 follows from the existence of a solution of Z t (13.93) hq, ρ(t)i = hq, ρ(0)i + hB(ρ(s))q + uf,ρ · ∇q, ρ(s)ids, q ∈ Cc∞ (Rd ). 0
A solution can be constructed as the limit of a system Z dXn,i (t) = (−∇Ψ(Xn,i (t)) − ∇Φ(Xn,i (t) − y)ρn (t, dy) y∈Rd
+
k X
∂i ϕ(hp, ρn i)∇pi (Xn,i (t)))dt + dWi (t),
i=1
where n
ρn (t) =
1X δX (t) n i=1 n,i
and {Xn,i (0)} are independent with distribution ρ(0). This system is another example of a McKeanVlasov system. Typically, these systems are considered under Lipschitz assumptions (see, for example, M´el´eard [85]) that are not assumed here; however, the semiconvexity assumption on Ψ gives growth estimates on the Xn,i which in turn ensure the relative compactness of the system. Any limit point will give a solution of Z dXi (t) = (−∇Ψ(Xi (t)) − ∇Φ(Xi (t) − y)ρ(t, dy) y∈Rd
+
k X i=1
∂i ϕ(hp, ρi)∇pi (Xi (t)))dt + dWi (t),
336
13. STOCHASTIC EQUATIONS IN INFINITE DIMENSIONS
Pk where ρ(t) = limk→∞ k1 i=1 δXi (t) satisfies (13.93). Systems of this form have been considered by Kurtz and Protter [76] and Kurtz and Xiong [78]. Finally, to verify Condition 8.9.(4), note that K ⊂ E is compact if and only if {Zρ 2 : ρ ∈ K} is uniformly integrable, where Zρ is a random variable with distribution ρ. Let (ρ, µ) be a solution of (13.92). Then by Corollary 1.12 of Kurtz and Λ(t))} with values in Stockbridge [77], there exists R a stochastic process {(X(t), R Rd × P(Rd ) such that E[ Rd h(X(t), z)Λ(s, dz)] = h(x, z)µ(s, dx × dz), h ∈ R Cb (Rd × Rd ), and setting U (t) = zΛ(s, dz), Z t q(X(t)) − q(X(0)) − (B(ρ(s))q(X(s)) + U (s) · ∇q(X(s)))ds 0
is an {F_t^{X,Λ}}-martingale for each q ∈ C_c^∞(R^d). It follows that there exists a Brownian motion W so that

X(t) = X(0) + ∫_0^t ( −∇Ψ(X(s)) − ∫_{y∈R^d} ∇Φ(X(s) − y) ρ(s,dy) + U(s) ) ds + W(t).
Let K ⊂ E be compact. Then the corresponding uniform integrability implies that there exists a convex, C² function γ : [0,∞) → [0,∞) such that lim_{r→∞} r^{-1}γ(r) = ∞ and sup_{ρ∈K} ∫ γ(|x|²) ρ(dx) < ∞. Without loss of generality, we can assume that γ'' ≥ 0 is nonincreasing, γ(0) = γ'(0) = 0, and, recalling Lemma 13.16, that γ' grows slowly enough so that

(13.94)  lim_{|x|→∞} γ'(|x|²)|x|² / (∇Ψ(x)·x) = lim_{|x|→∞} γ'(|x|²) / (∇Ψ(x)·x/|x|²) = 0.
Setting τ_c = inf{t : |X(t)| ≥ c} and applying Itô's formula, we have

E[γ(|X(t∧τ_c)|²)] = E[γ(|X(0)|²)] + E[ ∫_0^{t∧τ_c} ( 2γ'(|X(s)|²) X(s)·(−∇Ψ(X(s)) − ρ∗∇Φ(X(s)) + U(s)) + 2γ''(|X(s)|²)|X(s)|² + γ'(|X(s)|²) ) ds ]
 ≤ E[γ(|X(0)|²)] + E[ ∫_0^{t∧τ_c} ( |U(s)|² + C₁ + C₂(1 + γ(|X(s)|²)) ) ds ],
where

C₁ = sup_x ( γ'(|x|²)²|x|² − 2γ'(|x|²) x·∇Ψ(x) )

and

C₂ = sup_{x,ρ} ( 2γ''(|x|²)|x|² + γ'(|x|²) − 2γ'(|x|²) x·ρ∗∇Φ(x) ) / (1 + γ(|x|²)).

Note that C₁ < ∞ by (13.94) and C₂ < ∞ by the fact that γ''(r)r ≤ γ'(r) and γ'(r)r ≤ 2γ(r). Gronwall's inequality gives

E[γ(|X(t∧τ_c)|²)] ≤ ( E[γ(|X(0)|²)] + C₁ + C₂ + E[∫_0^t |U(s)|² ds] ) e^{C₂t}.
Letting c → ∞, we have

∫ γ(|x|²) ρ(t,dx) ≤ ( ∫ γ(|x|²) ρ(0,dx) + C₁ + C₂ + ∫_0^t L(ρ(s), µ(s)) ds ) e^{C₂t},
13.3. WEAKLY INTERACTING STOCHASTIC PARTICLES
and Condition 8.9.(4) follows with

K̂(K, T, M) = { ρ : ∫ γ(|x|²) ρ(dx) ≤ ( sup_{ρ₀∈K} ∫ γ(|x|²) ρ₀(dx) + C₁ + C₂ + M ) e^{C₂T} }.
The control equation (13.92) is the weak (in the Schwartz distributional sense) formulation of the controlled partial differential equation

(13.95)  ∂_t ρ = ½Δρ + ∇·(ρ∇(Ψ + ρ∗Φ)) − ∇·(ρ u_µ).

By (8.18) and (13.89),

I(ρ) = I₀(ρ(0)) + inf{ ½∫_0^∞ ∫ |u_µ(s,x)|² ρ(s,dx) ds : (ρ, u_µ) satisfies (13.92) }
 = I₀(ρ(0)) + inf{ ½∫_0^∞ ‖u_µ(s)‖²_{0,ρ(s)} ds : (ρ, u_µ) satisfies (13.92) }
 = I₀(ρ(0)) + inf{ ½∫_0^∞ ‖P_{ρ(s),∇} u_µ(s)‖²_{0,ρ(s)} ds : (ρ, u_µ) satisfies (13.92) }.

Let q ∈ C_c^∞((0,∞)×R^d), and write q(s) = q(s,·) ∈ C_c^∞(R^d). Then for any solution of (13.92),

∫_0^∞ ⟨B(ρ(t))q(t) + ∂_t q(t), ρ(t)⟩ dt = −∫_0^∞ ⟨u_µ(t,·)·∇q(t), ρ(t)⟩ dt.

Taking the supremum over all q ∈ C_c^∞((0,∞)×R^d),

½∫_0^∞ ‖P_{ρ(s),∇} u_µ(s)‖²_{0,ρ(s)} ds
 = sup_q { −∫_0^∞ ⟨u_µ(t,·)·∇q(t), ρ(t)⟩ dt − ½∫_0^∞ ∫_{R^d} |∇q(t,x)|² ρ(t,dx) dt }
 = sup_q { ∫_0^∞ ⟨B(ρ(t))q(t) + ∂_t q(t), ρ(t)⟩ dt − ½∫_0^∞ ∫_{R^d} |∇q(t,x)|² ρ(t,dx) dt }
 = ½∫_0^∞ ‖∂_t ρ − ½Δρ − ∇·(ρ∇(Ψ + ρ∗Φ))‖²_{−1,ρ(t)} dt.

13.3.6. The large deviation theorem.

Theorem 13.37. Let Ψ, Φ satisfy Conditions 13.14, and let (E, r) = (P₂(R^d), d₂), where d₂ is the 2-Wasserstein metric on the space P₂(R^d) of probability measures with finite second moments (see Definition D.13). Let ρ_n, n = 1, 2, ..., be the empirical measures (1.44) of the X_{n,k} satisfying (1.43). Assume that {ρ_n(0)}, as a sequence of E-valued random variables, satisfies the large deviation principle with good rate function I₀. Then {ρ_n(·) : n = 1, 2, ...} satisfies a large deviation principle in C_E[0,∞) with good rate function

I(ρ) = I₀(ρ(0)) + ½∫_0^∞ ‖∂_t ρ − ½Δρ − ∇·(ρ∇(Ψ + ρ∗Φ))‖²_{−1,ρ(t)} dt.
Appendix
APPENDIX A
Operators and convergence in function spaces

A.1. Semicontinuity

Definition A.1. f : E → [−∞,∞] is lower semicontinuous if for each x ∈ E, f(x) ≤ lim inf_{y→x} f(y); f : E → [−∞,∞] is upper semicontinuous if for each x ∈ E, f(x) ≥ lim sup_{y→x} f(y).

The proofs of the following standard lemmas are left as exercises.

Lemma A.2. For f : E → [−∞,∞], define f*(x) = lim_{ε→0} sup_{y∈B_ε(x)} f(y). Then f* is an upper semicontinuous function called the upper semicontinuous regularization of f. Similarly, f_*(x) = lim_{ε→0} inf_{y∈B_ε(x)} f(y) is a lower semicontinuous function called the lower semicontinuous regularization of f. If f is upper semicontinuous, then f* = f, and if f is lower semicontinuous, f_* = f.

Lemma A.3. Suppose f₁ ≤ f₂ ≤ ··· are lower semicontinuous. Then the pointwise limit f of {f_n} is lower semicontinuous. Suppose f₁ ≥ f₂ ≥ ··· are upper semicontinuous. Then the pointwise limit f of {f_n} is upper semicontinuous.

Lemma A.4. Let {f_n} be a decreasing sequence of upper semicontinuous functions with limit f. Suppose that x_n ∈ E and ε_n > 0 satisfy sup_{x∈E} f_n(x) ≤ f_n(x_n) + ε_n, ε_n → 0, and x_n → x₀. Then

lim_{n→∞} f_n(x_n) = f(x₀) = sup_{x∈E} f(x).

The analogous result holds for increasing sequences of lower semicontinuous functions.

Remark A.5. See Proposition 2.42 of Attouch [5] for much more general results of this type.

Proof. Let a = lim_{ε→0} lim_{n→∞} sup_{y∈B_ε(x₀)} f_n(y). Then

a = lim_{n→∞} f_n(x_n) = lim_{n→∞} sup_{y∈E} f_n(y),

and since

f_n(x_n) + ε_n ≥ f_n(x₀) ≥ lim sup_{m→∞} f_n(x_m) ≥ lim sup_{m→∞} f_m(x_m) = a,

it follows that

lim_{n→∞} f_n(x_n) = f(x₀) = a = sup_{x∈E} f(x).
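The regularizations in Lemma A.2 can be visualized on a grid, where the shrinking balls B_ε(x) become finite windows. A discrete, illustrative sketch — the grid and window radius are hypothetical stand-ins for the ε-limit, not from the text:

```python
import numpy as np

def usc_regularization(values, radius):
    """Discrete analog of f*(x) = lim_{eps->0} sup_{y in B_eps(x)} f(y):
    a running maximum of grid values over a window of the given radius."""
    n = len(values)
    return np.array([max(values[max(0, i - radius):min(n, i + radius + 1)])
                     for i in range(n)])

# the indicator of an open half-line sampled on a grid: the upper
# semicontinuous regularization also takes the value 1 at the boundary
f = np.array([0, 0, 0, 1, 1, 1])
print(usc_regularization(f, 1).tolist())  # -> [0, 0, 1, 1, 1, 1]
```

Replacing `max` by `min` gives the discrete analog of the lower semicontinuous regularization f_*.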
A.2. General notions of convergence

Definition A.6 (buc-convergence).
a) A sequence {f_n} of functions on E converges boundedly and uniformly on compacts to f if and only if sup_n ‖f_n‖ < ∞ and for each compact K ⊂ E,

lim_{n→∞} sup_{x∈K} |f_n(x) − f(x)| = 0.

We will denote this convergence by f = buc-lim_{n→∞} f_n.
b) Let E be a metric space, and for n = 1, 2, ... let E_n be a metric space and η_n : E_n → E be Borel measurable. Define η_n : B(E) → B(E_n) by η_n f = f∘η_n. A sequence {f_n} with f_n ∈ B(E_n) buc-converges to f ∈ B(E) if and only if sup_n ‖f_n‖ < ∞ and for each compact K ⊂ E,

lim_{n→∞} sup_{y∈η_n^{-1}(K)} |f_n(y) − η_n f(y)| = 0.

We write f = buc-lim_{n→∞} f_n.
c) A function f ∈ C_b(E) is buc-approximable by a set B ⊂ C_b(E) if there exists a constant C_{f,B} such that for every ε > 0 and every compact K ⊂ E, there exists g_{ε,K} ∈ B such that ‖g_{ε,K}‖ ≤ C_{f,B} and sup_{x∈K} |f(x) − g_{ε,K}(x)| ≤ ε.
d) A set B ⊂ C_b(E) is buc-closed if B contains every f ∈ C_b(E) that is buc-approximable by B.
e) C is the buc-closure of B if C is the smallest buc-closed set containing B. B is buc-dense in C if C is the buc-closure of B.

Note that buc-closure is a legitimate notion of closure (that is, if {B_α} is a collection of buc-closed sets, then ∩_α B_α is buc-closed, and a finite union of buc-closed sets is buc-closed). We will refer to the corresponding topology as the buc topology. The buc topology is closely related to the strict topology introduced by Buck [12] for E locally compact and extended to more general E by a variety of authors [14, 55, 59]. (In fact, we have not ruled out the possibility that the two topologies are equivalent. Wheeler [129] gives a survey of a collection of related topologies.)

For compact K ⊂ E, let p_K be the seminorm given by p_K(f) = sup_{x∈K} |f(x)|. The collection of seminorms {p_K : K ⊂ E compact} generates the compact uniform topology on C(E). For a sequence of compact sets {K_n} and a sequence {a_n} ⊂ (0,∞) satisfying lim_{n→∞} a_n = 0, let p_{{(K_n,a_n)}}(f) = sup_n a_n p_{K_n}(f). The strict topology is generated by the collection of seminorms of this form. Clearly, any strictly closed set is buc-closed (that is, the topology corresponding to buc-closure is finer than the strict topology). Furthermore, a sequence is buc-convergent if and only if it is strictly convergent (see Cooper [14]), and, restricted to bounded sets, both the topology corresponding to buc-closure and the strict topology coincide with the compact uniform topology.

For r > 0, let B_r = {f ∈ C_b(E) : ‖f‖ ≤ r}.

Lemma A.7. Let D ⊂ C_b(E). Suppose that for each r > 0, D ∩ B_r is buc-closed. Then D is buc-closed.

Proof. Suppose f is buc-approximable by D. Then there exists a constant r > 0 such that for every ε > 0 and every compact K ⊂ E, there exists g_{ε,K} ∈ D such
that ‖g_{ε,K}‖ ≤ r and sup_{x∈K} |f(x) − g_{ε,K}(x)| ≤ ε. But then f is buc-approximable by D ∩ B_r and hence f ∈ D ∩ B_r, so f ∈ D and D is buc-closed.

A set D ⊂ C_b(E) is called an algebra if and only if for all f₁, f₂ ∈ D and a, b ∈ R, we have af₁ + bf₂ ∈ D and f₁f₂ ∈ D.

Theorem A.8. Let D ⊂ C_b(E) be an algebra that separates points and vanishes nowhere (that is, for each x ∈ E, there exists f ∈ D such that f(x) ≠ 0). Then C_b(E) is the buc-closure of D, and for each r > 0, B_r is the buc-closure of D ∩ B_r.

Proof. If buc-closure is the same as strict closure, then this theorem is a special case of Theorem 3.1 of Giles [55]. If not, then the current theorem is a consequence of the earlier result. To see that, first note that the norm closure of D contains {g e^{−εg²} : g ∈ D} for each ε > 0. Consequently, letting D^{st} denote the strict closure of D, the buc-closure of D contains all functions of the form {g e^{−εg²} : g ∈ D^{st}}. But by Giles's theorem, D^{st} = C_b(E). Letting ε → 0, g e^{−εg²} converges uniformly to g, so the buc-closure of D contains C_b(E).

Let φ_r(u) = (−r)∨(r∧u). Since for g ∈ D and a₀, ..., a_m ∈ R, Σ_{k=0}^{m} a_k g^k ∈ D, and any continuous φ : R → R can be approximated uniformly on bounded intervals by polynomials, it follows that the buc-closure D̄_r of D_r = D ∩ B_r contains φ_r(g) for every g ∈ D. Since g → φ_r(g) is continuous, it in turn follows that D̄_r contains φ_r(g) for every g ∈ C_b(E). Consequently, D̄_r = B_r.

Corollary A.9. If E is separable, then there exists a countable set D₀ ⊂ C_b(E) such that C_b(E) is the buc-closure of D₀.

Remark A.10. Note that for a < b ∈ R, {f ∈ C_b(E) : a ≤ f ≤ b} is the buc-closure of {a∨(f∧b) : f ∈ D₀}.

Proof. Let {x_j} be dense in E, and let D₀ be the collection of functions of the form

f(y) = Σ_{k=1}^{m} a_k Π_{j∈J_k} e^{−d(x_j,y)},

where the a_k are rational and J_k ⊂ {1, 2, ...} is finite. Then D₀ is countable, linear, and closed under multiplication. The norm closure D of D₀ is an algebra that separates points and vanishes nowhere. Consequently, C_b(E) is the buc-closure of D and hence of D₀.

Lemma A.11. Let Q : D ⊂ C_b(E) → C_b(E). Suppose that for each δ > 0, r > 0, and compact K ⊂ E, there exist C₀(r) > 0, C₁(δ,r) > 0 and compact K'(δ,r,K) ⊂ E that are nondecreasing in δ^{-1}, r, and K, such that

(A.1)  sup_{x∈K} |Qh₁(x) − Qh₂(x)| ≤ δC₀(r) + C₁(δ,r) sup_{x∈K'(δ,r,K)} |h₁(x) − h₂(x)|

for all h₁, h₂ ∈ D satisfying ‖h₁‖∨‖h₂‖ ≤ r. Let B_r = {h ∈ C_b(E) : ‖h‖ ≤ r} and D_r = D ∩ B_r. Then for each r > 0, there exists a unique extension Q̂_r of Q_r ≡ Q|_{D_r} to the buc-closure D̄_r of D_r such that the mapping Q̂_r : D̄_r → C_b(E) is continuous in the buc topology and (A.1) is satisfied. If r₁ < r₂, then Q̂_{r₂} is an extension of Q̂_{r₁}.
Proof. Let L₁ ⊂ B_r × C_b(E) be the collection of (f,g) such that

(A.2)  sup_{x∈K} |Qh(x) − g(x)| ≤ δC₀(r) + C₁(δ,r) sup_{x∈K'(δ,r,K)} |h(x) − f(x)|,

for all h ∈ D_r. Then L₁ is closed in the product buc-topology on B_r × C_b(E) and L₁ ⊃ Q_r. Let L₂ ⊂ B_r × C_b(E) be the collection of (f̂,ĝ) such that

(A.3)  sup_{x∈K} |ĝ(x) − g(x)| ≤ δC₀(r) + C₁(δ,r) sup_{x∈K'(δ,r,K)} |f̂(x) − f(x)|,

for all (f,g) ∈ L₁. Then L₂ is also closed, and L₂ ⊂ L₁, since Q_r ⊂ L₁, and Q_r ⊂ L₂ by the definition of L₁. If (f,g₁), (f,g₂) ∈ L₂, then g₁ = g₂, since for each compact K, sup_{x∈K} |g₁(x) − g₂(x)| ≤ δC₀(r) for all δ > 0. Consequently, L₂ is a (single-valued) closed mapping that extends Q_r. Setting Q̂_r = {(f,g) ∈ L₂ : f ∈ D̄_r}, we claim that D(Q̂_r) is a buc-closed subset of B_r. Note that if h ∈ D_r and f ∈ D(Q̂_r), then (A.2) implies

(A.4)  ‖Q̂_r f‖ ≤ ‖Q_r h‖ + δC₀(r) + 2rC₁(δ,r),

that is, the range of Q̂_r is uniformly bounded.
Suppose that f is buc-approximable by D(Q̂_r). Let δ_n > 0 decrease to zero, and let f_n^K ∈ D(Q̂_r) satisfy

sup_{x∈K'(δ_n,r,K)} |f(x) − f_n^K(x)| ≤ δ_n / C₁(δ_n,r).

By assumption, for m > n, C₁(δ_n,r) ≤ C₁(δ_m,r) and K'(δ_n,r,K) ⊂ K'(δ_m,r,K). Consequently, for m > n,

sup_{x∈K} |Q̂_r f_n^K(x) − Q̂_r f_m^K(x)| ≤ δ_nC₀(r) + C₁(δ_n,r) sup_{x∈K'(δ_n,r,K)} |f_n^K(x) − f_m^K(x)| ≤ (C₀(r) + 2)δ_n.

It follows that {Q̂_r f_n^K} is Cauchy in C(K), and hence there exists g_K ∈ C(K) such that lim_{n→∞} sup_{x∈K} |Q̂_r f_n^K(x) − g_K(x)| = 0. Furthermore, if K₁ ⊂ K₂, then g_{K₁}(x) = g_{K₂}(x) for x ∈ K₁, so there is a unique g defined on E such that g(x) = g_K(x) for x ∈ K. Since g_K is continuous on K and bounded by the right side of (A.4), g ∈ C_b(E). It follows that (f,g) is in the buc-closure of Q̂_r and hence in Q̂_r, so D(Q̂_r) = D̄_r.

In several of the examples discussed in the Introduction, the generators H_n are defined for functions on a different state space than the limiting operator H. The following definition makes precise the type of convergence we will need in these settings.

Definition A.12. Let E be a metric space, and for n = 1, 2, ... let E_n be a metric space and η_n : E_n → E be Borel measurable. Define η_n : B(E) → B(E_n) by η_n f = f∘η_n. Let H_n ⊂ B(E_n) × B(E_n). Then the extended limit of {H_n} is the collection of (f,g) ∈ B(E) × B(E) such that there exist (f_n,g_n) ∈ H_n satisfying

lim_{n→∞} ( ‖f_n − η_n f‖ + ‖g_n − η_n g‖ ) = 0.
We will denote this convergence by ex-lim_{n→∞} H_n.
(f,g) ∈ B(E) × B(E) is in the extended buc-limit of {H_n} if there exist (f_n,g_n) ∈ H_n satisfying sup_n (‖f_n‖ + ‖g_n‖) < ∞ and, for each compact K ⊂ E,

lim_{n→∞} sup_{y∈η_n^{-1}(K)} ( |f_n(y) − η_n f(y)| + |g_n(y) − η_n g(y)| ) = 0.

We will denote this convergence by ex-buc-lim_{n→∞} H_n.
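To make Definition A.6(a) concrete: with the hypothetical choice f_n(x) = x²/(x²+n) on E = R, sup_n ‖f_n‖ ≤ 1 and sup_{x∈K} |f_n(x)| → 0 for every compact K, while sup_{x∈R} |f_n(x)| = 1 for all n, so f_n → 0 boundedly and uniformly on compacts but not uniformly. A quick numerical check (not from the text):

```python
import numpy as np

def f(n, x):
    return x**2 / (x**2 + n)

K = np.linspace(-10.0, 10.0, 2001)          # a compact set K = [-10, 10]
sup_on_K = [float(np.max(f(n, K))) for n in (1, 10, 100, 1000)]
print([round(s, 3) for s in sup_on_K])      # shrinks toward 0 on K
print(f(1000, 1e6) > 0.99)                  # sup over all of R stays 1; -> True
```

The same computation, run on larger and larger compact sets, illustrates why the buc topology is strictly weaker than uniform convergence on noncompact E.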
It is, in fact, useful to consider even more general notions of convergence of f_n ∈ J_n ⊂ B(E_n) to a function f ∈ J ⊂ B(E). We denote such a notion by LIM, and we assume that LIM satisfies the following conditions (cf. [36], Section 1.6):

Condition A.13.
(1) LIM f_n = f and LIM g_n = g imply LIM(af_n + bg_n) = a LIM f_n + b LIM g_n, a, b ∈ R.
(2) LIM f_n^{(k)} = f^{(k)}, k = 1, 2, ..., and lim_{k→∞} sup_n ‖f_n^{(k)} − f_n‖ ∨ ‖f^{(k)} − f‖ = 0 imply LIM f_n = f.
(3) There exists K > 0 such that for each f ∈ J there exist f_n ∈ J_n satisfying sup_n ‖f_n‖ ≤ K‖f‖ and LIM f_n = f.

The following examples satisfy Condition A.13.

Example A.14.
a) (buc-convergence) LIM f_n = f if and only if sup_n ‖f_n‖ < ∞ and for each compact K ⊂ E,

lim_{n→∞} sup_{y∈η_n^{-1}(K)} |f_n(y) − η_n f(y)| = 0.

b) (Q-limit) Let Q be an index set. For each q ∈ Q and n = 1, 2, ..., let K_n^q ⊂ E_n. Assume that for q₁, q₂ ∈ Q, there exists q₃ ∈ Q such that K_n^{q₁} ∪ K_n^{q₂} ⊂ K_n^{q₃}, and that ∪_{q∈Q} ∩_m cl ∪_{n≥m} η_n(K_n^q) = E. LIM f_n = f if and only if sup_n ‖f_n‖ < ∞ and for each q ∈ Q,

lim_{n→∞} sup_{y∈K_n^q} |f_n(y) − η_n f(y)| = 0.

Let L denote the Banach space (Π_n J_n) × J with ‖({f_n}, f)‖ = sup_n ‖f_n‖ ∨ ‖f‖, and define L₀ = {({f_n}, f) ∈ L : LIM f_n = f}. Note that LIM determines a bounded linear operator from L₀ to J. If H_n ⊂ J_n × J_n, we define

ex-LIM H_n = {(f,g) : ∃(f_n,g_n) ∈ H_n such that LIM f_n = f, LIM g_n = g}.
A.3. Dissipativity of operators

Let B be a general Banach space. An operator H ⊂ B × B is dissipative if for (f_i,g_i) ∈ H, i = 1,2,

‖(f₁ − αg₁) − (f₂ − αg₂)‖ ≥ ‖f₁ − f₂‖, for all α > 0.

In Hilbert space, dissipativity is equivalent to the requirement that

(A.5)  ⟨f₁ − f₂, g₁ − g₂⟩ ≤ 0,

where ⟨·,·⟩ is the usual inner product. The following brackets give a type of generalization of the Hilbert inner product to Banach spaces:

[f,g]₊ ≡ inf_{ε>0} (‖f + εg‖ − ‖f‖)/ε,  [f,g]₋ ≡ sup_{ε<0} (‖f + εg‖ − ‖f‖)/ε.

Lemma A.17. An operator H ⊂ B(E) × B(E) is dissipative if and only if for (f_i,g_i) ∈ H, i = 1,2, there exist x_n such that lim_{n→∞} |f₁(x_n) − f₂(x_n)| = ‖f₁ − f₂‖ > 0 and lim sup_{n→∞} (g₁(x_n) − g₂(x_n)) sgn(f₁(x_n) − f₂(x_n)) ≤ 0.

Lemma A.18. An operator H ⊂ B(E) × B(E) is strongly dissipative if and only if for (f_i,g_i) ∈ H, i = 1,2, lim_{n→∞} (f₁(x_n) − f₂(x_n)) = ‖f₁ − f₂‖ > 0 implies lim sup_{n→∞} (g₁(x_n) − g₂(x_n)) ≤ 0.
Lemma A.19. Let H ⊂ B(E) × B(E) be dissipative, and assume that for f ∈ D(H) and K ∈ R, f + K ∈ D(H) and H(f + K) = Hf. Then for (f_i,g_i) ∈ H, i = 1,2, there exist {x_n} ⊂ E such that lim_{n→∞} (f₁(x_n) − f₂(x_n)) = sup_x (f₁(x) − f₂(x)) and lim sup_{n→∞} (g₁(x_n) − g₂(x_n)) ≤ 0, and there exist z_n such that lim_{n→∞} (f₁(z_n) − f₂(z_n)) = inf_x (f₁(x) − f₂(x)) and lim inf_{n→∞} (g₁(z_n) − g₂(z_n)) ≥ 0.

Corollary A.20. Let A ⊂ B(E) × B(E) be linear and strongly dissipative with (1,0) ∈ A. Let (f,g) ∈ A and {x_n} ⊂ E. If lim_{n→∞} f(x_n) = sup_x f(x), then lim sup_{n→∞} g(x_n) ≤ 0, and if lim_{n→∞} f(x_n) = inf_x f(x), then lim inf_{n→∞} g(x_n) ≥ 0.
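Corollary A.20 reflects the positive maximum principle. For a finite state space, a Markov generator is a matrix Q with nonnegative off-diagonal entries and zero row sums, and f ↦ Qf is then dissipative in the sup norm: ‖f − αQf‖ ≥ ‖f‖ for all α > 0. A randomized numerical check of this inequality (an illustrative sketch, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_generator(n):
    """Random Markov generator: nonnegative off-diagonal, zero row sums."""
    Q = rng.random((n, n))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

Q = random_generator(6)
ok = True
for _ in range(1000):
    f = rng.normal(size=6)
    alpha = 10.0 * float(rng.random())
    # sup-norm dissipativity: ||f - alpha*Qf|| >= ||f||
    ok &= np.max(np.abs(f - alpha * (Q @ f))) >= np.max(np.abs(f)) - 1e-12
print(bool(ok))  # -> True
```

The inequality holds because at an index where |f| attains its maximum, (Qf) has the opposite sign (or vanishes), which is exactly the finite-state form of the positive maximum principle.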
APPENDIX B
Variational constants, rate of growth and spectral theory for the semigroup of positive linear operators

Let (E,r) be a complete, separable metric space, M_f(E) the space of finite (positive) Borel measures on E, P(E) ⊂ M_f(E) the space of probability measures on E, K(E) ⊂ C_b(E) the collection of nonnegative, bounded, continuous functions, K₀(E) ⊂ K(E) the collection of strictly positive, bounded, continuous functions, and K₁(E) ⊂ K₀(E) the collection of bounded continuous functions satisfying inf_{x∈E} f(x) > 0. For a general operator A ⊂ M(E) × M(E), we denote D(A) = {f ∈ M(E) : ∃ g ∈ M(E) such that (f,g) ∈ A}, D⁺(A) = D(A) ∩ K₀(E), and D⁺⁺(A) = D(A) ∩ K₁(E).
We are interested in finding explicit conditions under which inequalities such as (11.16) and (11.56) hold. These and the related inequalities of Chapter 11 are implied by the following conditions.

(1) Let P(y,dz) be a transition function for a discrete time Markov chain Y on E, and define Pg(y) = ∫_E g(z)P(y,dz) for g ∈ C_b(E). Let ψ be upper semicontinuous on E and bounded above, and assume that {y : ψ(y) ≥ c} is compact for each c ∈ R. For V ∈ C_b(E),

(B.1)  inf_{g∈K₁(E)} sup_{y∈E} ( V(y) + log(Pg(y)/g(y)) ) ≤ sup_{g∈K₁(E)} inf_{y∈E} ( V(y) + log(Pg(y)/g(y)) )

or

(B.2)  inf_{κ>0} inf_{g∈K₁(E)} sup_{y∈E} ( V(y) + (1−κ) log(Pg(y)/g(y)) + κψ(y) ) ≤ sup_{κ>0} sup_{g∈K₁(E)} inf_{y∈E} ( V(y) + (1+κ) log(Pg(y)/g(y)) − κψ(y) ).
(2) Let B be the infinitesimal generator of a continuous time Markov process Y in the sense of (11.31), and let ψ be as above. For V ∈ C_b(E),

(B.3)  inf_{g∈D⁺⁺(B)} sup_{y∈E} ( V(y) + Bg(y)/g(y) ) ≤ sup_{g∈D⁺⁺(B)} inf_{y∈E} ( V(y) + Bg(y)/g(y) )

or

(B.4)  inf_{κ>0} inf_{g∈D⁺⁺(B)} sup_{y∈E} ( V(y) + (1−κ) Bg(y)/g(y) + κψ(y) ) ≤ sup_{κ>0} sup_{g∈D⁺⁺(B)} inf_{y∈E} ( V(y) + (1+κ) Bg(y)/g(y) − κψ(y) ).
B. GENERALIZED PRINCIPAL EIGENVALUES
There are a number of results in the literature giving these inequalities. A variant of Condition F in Freidlin–Wentzell [52] (Condition B.7 for continuous time and Condition B.15 for discrete time) implies (B.2) and (B.4) (Theorems B.12 and B.20). This condition can be verified using a condition of Deuschel and Stroock [30] on the transition probability of the process Y (Conditions B.8 and B.16 and Lemmas B.9 and B.17).

B.1. Relationship to the spectral theory of positive linear operators

Note that (B.1) holds if there exists g ∈ K₁(E) such that V(y) + log(Pg(y)/g(y)) is constant. Exponentiating, suppose that there exist λ_V ∈ R and g₀ ∈ K₁(E) such that

(B.5)  e^V P g₀ = e^{λ_V} g₀.

Then

inf_{g∈K₀(E)} sup_y ( V(y) + log(Pg(y)/g(y)) ) ≤ λ_V ≤ sup_{g∈K₀(E)} inf_y ( V(y) + log(Pg(y)/g(y)) ).
Of course, (B.5) says that e^{λ_V} is a positive eigenvalue for the operator e^V P with a positive eigenfunction g₀. In particular, if E is finite and P is the transition operator for an irreducible Markov chain, then the existence of g₀ and e^{λ_V} follows by the Perron–Frobenius theorem. Similarly, in the continuous time case (B.3), the analogous condition is that there exist λ_V ∈ R and g₀ ∈ D⁺⁺(B) such that

(B.6)  (B + V) g₀ = λ_V g₀.

Then

inf_{g∈D⁺⁺(B)} sup_{y∈E} ( V(y) + Bg(y)/g(y) ) ≤ λ_V ≤ sup_{g∈D⁺⁺(B)} inf_{y∈E} ( V(y) + Bg(y)/g(y) ).

Again, if E is finite, the Perron–Frobenius theorem gives the desired result. Extensions of Perron–Frobenius theory for eigenvalue problems of the forms (B.5) and (B.6) exist in the literature. Such extensions are usually proved under some type of irreducibility and compactness hypotheses on P or the resolvent of B.

B.1.1. Continuous time case. Throughout this section, we assume that Y is an E-valued, cadlag Markov process with semigroup S(t)f(y) = E[f(Y(t)) | Y(0) = y] satisfying S(t) : C_b(E) → C_b(E) and weak infinitesimal generator B. Denote P(t,x,dy) = P(Y(t) ∈ dy | Y(0) = x), and for V ∈ C_b(E) define

T(t)f(y) = E[f(Y(t)) e^{∫_0^t V(Y(s))ds} | Y(0) = y],  f ∈ C_b(E).
The weak infinitesimal generator for {T(t)} is Af = Bf + Vf, f ∈ D(B).

Condition B.1.
(1) There exists a measurable function ρ(t,x,y) : [0,∞) × E × E → [0,∞) such that for each compact K ⊂ E,

lim_{ε→0+} sup_{x∈K, r(x,y)≤ε} ρ(t,x,y) = 0,

and

sup_{A∈B(E)} |P(t,x,A) − P(t,y,A)| ≤ ρ(t,x,y),
for each t > 0 and x, y ∈ E.
(2) P is irreducible in the sense that for each y ∈ E and open A ⊂ E, there exists t₀ > 0 such that P(Y(t₀) ∈ A | Y(0) = y) > 0.

Lemma B.2. For each t > 0, there exists ω_t(x,y) such that |T(t)f(x) − T(t)f(y)| ≤ ‖f‖ ω_t(x,y) and

lim_{ε→0} sup_{x∈K, r(x,y)≤ε} ω_t(x,y) = 0.

Proof. Let E_x denote the expectation under the assumption that Y(0) = x, and assume that v ≤ V ≤ v̄. Then for 0 < δ ≤ t and f ≥ 0,

e^{vδ} E_x[T(t−δ)f(Y(δ))] ≤ T(t)f(x) = E_x[e^{∫_0^t V(Y(s))ds} f(Y(t))] ≤ e^{v̄δ} E_x[T(t−δ)f(Y(δ))],

and hence

|T(t)f(x) − T(t)f(y)| ≤ (e^{v̄δ} − e^{vδ}) e^{v̄(t−δ)} ‖f‖ + e^{v̄δ} |E_x[T(t−δ)f(Y(δ))] − E_y[T(t−δ)f(Y(δ))]|
 ≤ (e^{v̄δ} − e^{vδ}) e^{v̄(t−δ)} ‖f‖ + e^{v̄δ} e^{v̄(t−δ)} ‖f‖ ρ(δ,x,y).

It follows that we can take

ω_t(x,y) = inf_{0≤δ≤t} [ (e^{v̄δ} − e^{vδ}) e^{v̄(t−δ)} + e^{v̄δ} e^{v̄(t−δ)} ρ(δ,x,y) ].
Theorem B.3. Suppose that Condition B.1 is satisfied, that E is compact, and that V ∈ C(E). Let

λ_V = lim_{t→∞} sup_{y∈E} (1/t) log T(t)1(y).

Then there exists g₀ ∈ D⁺⁺(B) such that (B.6) holds. Furthermore, the pair (λ_V, g₀) satisfying (B.6) is unique up to multiplication of g₀ by a positive constant.

Proof. First assume

(B.7)  sup_y V(y) = v̄ = −β₁ < 0.

Then

(B.8)  Sg ≡ −∫_0^∞ T(t)g dt

is a well-defined operator on C(E). Since

|Sg(y)| = | ∫_0^∞ E[e^{∫_0^t V(Y(s))ds} g(Y(t)) | Y(0) = y] dt | ≤ ‖g‖ ∫_0^∞ E[e^{∫_0^t V(Y(s))ds} | Y(0) = y] dt ≤ ‖g‖ β₁^{-1},

‖S‖ ≤ β₁^{-1}. For each g ∈ D(B),

g = T(t)g − ∫_0^t T(s)(B+V)g ds
and lim_{t→∞} T(t)g = 0. It follows that, for g ∈ D(B),

S(B+V)g = g,

and therefore (B+V)^{-1} = S. We can assume that ρ in Condition B.1.(1) is bounded by 1 (otherwise, replace ρ by ρ∧1), and hence ω_t(x,y) ≤ e^{v̄t} = e^{−β₁t}. Then

|Sf(x) − Sf(y)| ≤ ‖f‖ ∫_0^∞ ω_t(x,y) dt,

and it follows that S = (B+V)^{-1} is compact. By Condition B.1.(2) and the representation

−(B+V)^{-1} g(y) = ∫_0^∞ E[e^{∫_0^t V(Y(s))ds} g(Y(t)) | Y(0) = y] dt,

−(B+V)^{-1} is a strictly positive operator in the sense that g ∈ C(E), g ≥ 0, and g not identically zero implies −(B+V)^{-1}g(y) > 0 for all y ∈ E. Setting β₀ = −inf_y V(y), the spectral radius satisfies

r((B+V)^{-1}) ≡ lim_{n→∞} ‖(B+V)^{-n}‖^{1/n} ≥ lim_{n→∞} ( sup_y |(B+V)^{-n}1(y)| )^{1/n} ≥ (1/β₀^n)^{1/n} = 1/β₀,

and by Corollary 1 of Karlin [64], r((B+V)^{-1}) is an eigenvalue for −(B+V)^{-1} with nonnegative eigenfunction g₀. The strict positivity of −(B+V)^{-1} implies that the eigenfunction is strictly positive, and (B.6) follows with λ_V = −1/r((B+V)^{-1}).
In general, without assuming sup_y V(y) < 0, let c > ‖V‖. Then −(B+V−c)^{-1} has a unique (up to constant multiples) strictly positive eigenfunction g₀, and setting λ_{V−c} = −1/r((B+V−c)^{-1}), we have (B+V−c)g₀ = λ_{V−c} g₀, so with λ_V = λ_{V−c} + c, (B+V)g₀ = λ_V g₀.
By (B.6), T(t)g₀ = e^{tλ_V} g₀. Since 0 < inf_y g₀(y) ≤ sup_y g₀(y) < ∞,

(inf_y g₀(y)) T(t)1 ≤ T(t)g₀ ≤ (sup_y g₀(y)) T(t)1,

and

λ_V = lim_{t→∞} sup_{y∈E} (1/t) log T(t)1(y).
B.1.2. Discrete time case. Let {Y(0), Y(1), ...} be an E-valued, discrete time, Feller Markov chain with transition probability P(x,dy).

Condition B.4.
(1) There exists a function ρ : E × E → [0,∞) such that for each compact K ⊂ E,

lim_{ε→0} sup_{x∈K, r(x,y)≤ε} ρ(x,y) = 0

and

sup_{A∈B(E)} |P(x,A) − P(y,A)| ≤ ρ(x,y).

(2) P is irreducible in the sense that for each y ∈ E and open A ⊂ E, there exists n₀ > 0 such that P(Y(n₀) ∈ A | Y(0) = y) > 0.

Let Tf(y) = e^{V(y)} Pf(y).

Theorem B.5. Suppose that Condition B.4 is satisfied, that E is compact, and that V ∈ C(E). Let

(B.9)  λ_V = lim_{n→∞} (1/n) log sup_{y∈E} T^n 1(y).

Then there exists g₀ ∈ K₁(E) such that (B.5) holds, and (λ_V, g₀) is unique up to multiplication of g₀ by a positive constant.

Proof. By uniform continuity of e^V on E and Condition B.4, T is a compact operator on C(E). Let ρ > 0 satisfy ρ‖T‖ < 1, and define S_ρ = Σ_{k=0}^{∞} ρ^k T^k = (I − ρT)^{-1}. Then S_ρ is a strictly positive, compact operator. As in the proof of Theorem B.3, S_ρ has a positive eigenvalue λ_ρ and corresponding strictly positive eigenfunction g₀. Consequently, (I − ρT)^{-1} g₀ = λ_ρ g₀ and

T g₀ = ((λ_ρ − 1)/(ρλ_ρ)) g₀ ≡ e^{λ_V} g₀.

As before, 0 < C₁ ≤ g₀ ≤ C₂, and C₁‖T^n‖ ≤ T^n g₀ = e^{nλ_V} g₀ ≤ C₂‖T^n‖. Therefore

λ_V = lim_{n→∞} (1/n) log ‖T^n‖.
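For finite E the objects in Theorems B.3 and B.5 are all computable: T = diag(e^V)P, e^{λ_V} is its Perron–Frobenius eigenvalue, and the positive eigenvector g₀ makes V(y) + log(Pg₀(y)/g₀(y)) constant, collapsing both sides of (B.1) to λ_V. An illustrative sketch with a hypothetical 3-state chain (not from the text):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.1, 0.5]])       # an irreducible transition matrix
V = np.array([0.3, -0.1, 0.2])
T = np.diag(np.exp(V)) @ P            # T f = e^V P f

eigvals, eigvecs = np.linalg.eig(T)
k = int(np.argmax(eigvals.real))      # Perron-Frobenius eigenvalue is the largest
lam_V = float(np.log(eigvals.real[k]))
g0 = np.abs(eigvecs[:, k].real)       # strictly positive eigenfunction

# V(y) + log(P g0(y) / g0(y)) is constant and equals lambda_V, as in (B.5)
print(bool(np.allclose(V + np.log(P @ g0) - np.log(g0), lam_V)))  # -> True
```

Since min V ≤ (1/n) log T^n 1 ≤ max V pointwise, λ_V necessarily lies between the extremes of V, which is a quick sanity check on the computation.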
B.2. Relationship to some variational constants

As we have already noted, the validity of (B.4) and (B.2) is closely related to the eigenvalues of the operators involved and to variational representations of these eigenvalues. In this section, we develop this relationship further. A key tool is Sion's theorem.

Theorem B.6. Let M and N be convex subsets of possibly different topological vector spaces. Suppose that f : M × N → R satisfies the following conditions:
a) For each c ∈ R and ν ∈ N, {µ ∈ M : f(µ,ν) ≥ c} is closed and convex.
b) For each c ∈ R and µ ∈ M, {ν ∈ N : f(µ,ν) ≤ c} is closed and convex.
If either N or M is compact, then

inf_{ν∈N} sup_{µ∈M} f(µ,ν) = sup_{µ∈M} inf_{ν∈N} f(µ,ν).

Proof. This result is Corollary 3.3 of Sion [111].
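For a bilinear payoff f(µ,ν) = µᵀAν on probability simplices, the level-set conditions of Theorem B.6 hold automatically and the conclusion is the classical minimax identity. A brute-force numerical check on fine grids over two 1-simplices (an illustrative sketch with a hypothetical 2×2 matrix; not from the text):

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [-2.0, 3.0]])           # hypothetical payoff matrix, no pure saddle
p = np.linspace(0.0, 1.0, 2001)       # mu = (p, 1-p) and nu = (q, 1-q)

# F[i, j] = f(mu_i, nu_j) = mu_i^T A nu_j, bilinear in (mu, nu)
row = np.stack([A[0, 0] * p + A[0, 1] * (1 - p),
                A[1, 0] * p + A[1, 1] * (1 - p)])   # row[k, j] = (A nu_j)_k
F = p[:, None] * row[0] + (1 - p)[:, None] * row[1]

inf_sup = F.max(axis=0).min()         # inf_nu sup_mu f
sup_inf = F.min(axis=1).max()         # sup_mu inf_nu f
print(abs(inf_sup - sup_inf) < 5e-3)  # -> True (both approximate the value 1/7)
```

The gap inf sup − sup inf is always nonnegative; here it shrinks with the grid spacing, consistent with the exact equality asserted by the theorem.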
B.2.1. Continuous time case. Assume that S(t) : C_b(E) → C_b(E) and V ∈ C_b(E). Define

(B.10)  T(t)f(y) = E[f(Y(t)) e^{∫_0^t V(Y(s))ds} | Y(0) = y].

Then {T(t)} is a semigroup with weak generator A = B + V, where B is the generator in the sense of (11.31) and V denotes multiplication by V. Note that (1,V) ∈ A. Positivity implies ‖T(t)‖ = ‖T(t)1‖ = sup_y T(t)1(y) and

(B.11)  ‖T(t)‖ = sup_y E[e^{∫_0^t V(Y(s))ds} | Y(0) = y].
For ν ∈ P(E) and g satisfying ∫_E g⁺ dν < ∞ or ∫_E g⁻ dν < ∞ (so ∫_E g dν is unambiguously defined, although possibly ±∞), define

⟨g,ν⟩ = ∫_E g dν,

and let ψ be as in (B.4). We will develop results based upon the following hypotheses.

Condition B.7. There exists a constant c_V such that

(B.12)  lim sup_{t→∞} (1/t) log E_ν[e^{∫_0^t V(Y(s))ds}] = c_V
for each ν ∈ P(E) satisfying ⟨ψ,ν⟩ > −∞.

Condition B.7 is a variant of Condition F of Freidlin and Wentzell (Chapter 7, (4.1), page 234 of [52]). It can be verified (Lemma B.9) under the following condition on the transition probability of Y. This criterion is an adaptation of Condition Ũ in Deuschel and Stroock [30].

Condition B.8. For ν₁, ν₂ ∈ P(E) satisfying ⟨ν_i,ψ⟩ > −∞, there exist t₀ > 0, ρ₁, ρ₂ ∈ P((0,t₀]) and 1 ≤ M < ∞ such that

(B.13)  ∫_{(0,t₀]} ∫_E P(t,x,A) ν₁(dx) ρ₁(dt) ≤ M ∫_{(0,t₀]} ∫_E P(t,y,A) ν₂(dy) ρ₂(dt),

for each A ∈ B(E).

Lemma B.9. Condition B.8 implies Condition B.7.
Proof. Let ν₁, ν₂ ∈ P(E). Then

E_{ν₁}[e^{∫_0^t V(Y(r))dr}]
 ≤ e^{2t₀‖V‖} ∫_{(0,t₀]} E_{ν₁}[e^{∫_s^{s+t} V(Y(r))dr}] ρ₁(ds)
 = e^{2t₀‖V‖} ∫_{(0,t₀]} ∫_{x∈E} ∫_{z∈E} E[e^{∫_0^t V(Y(u))du} | Y(0) = z] P(s,x,dz) ν₁(dx) ρ₁(ds)
 ≤ e^{2t₀‖V‖} M ∫_{(0,t₀]} ∫_{y∈E} ∫_{z∈E} E[e^{∫_0^t V(Y(u))du} | Y(0) = z] P(s,y,dz) ν₂(dy) ρ₂(ds)
 = e^{2t₀‖V‖} M ∫_{(0,t₀]} E_{ν₂}[e^{∫_s^{t+s} V(Y(r))dr}] ρ₂(ds)
 ≤ e^{2t₀‖V‖} M ∫_{(0,t₀]} E_{ν₂}[e^{∫_0^t V(Y(r))dr}] e^{2t₀‖V‖} ρ₂(ds)
 = M e^{4t₀‖V‖} E_{ν₂}[e^{∫_0^t V(Y(r))dr}].

Therefore

(B.14)  (1/t) log E_{ν₁}[e^{∫_0^t V(Y(r))dr}] ≤ (1/t) log E_{ν₂}[e^{∫_0^t V(Y(r))dr}] + (log M + 4t₀‖V‖)/t,

implying that lim sup_{t→∞} (1/t) log E_ν[e^{∫_0^t V(Y(r))dr}] is independent of ν.
We now identify the relation between c_V and (B.4). Define

c*_V = inf_{ν∈P(E), ⟨ψ,ν⟩>−∞} lim sup_{t→∞} (1/t) log E_ν[e^{∫_0^t V(Y(s))ds}],
c**_V = sup_{ν∈P(E), ⟨ψ,ν⟩>−∞} lim sup_{t→∞} (1/t) log E_ν[e^{∫_0^t V(Y(s))ds}].
Condition B.7 is just the statement that c*_V = c**_V.

Lemma B.10. Assume that S(t) : C_b(E) → C_b(E), V ∈ C_b(E), and K_c = {y : ψ(y) ≥ −c} is compact for each c ∈ R. Then

(B.15)  sup_{κ>0} sup_{g∈D⁺⁺(B)} inf_{y∈E} ( V(y) + (1+κ) Bg(y)/g(y) − κψ(y) ) ≥ c*_V.
Proof. Denote the left side of the inequality by H₂V. For λ < c*_V, let

R_λ^t g = ∫_0^t e^{−λs} T(s)g ds,

and let Γ be the collection of functions of the form

g_γ = ∫_0^∞ R_λ^t 1 γ(dt),  γ ∈ P([0,∞)),

such that g_γ ∈ C_b(E). The dominated convergence theorem implies that if g_γ is continuous, then so is ∫_0^∞ R_λ^t V γ(dt), and it follows that

(V + B)g_γ = ∫_0^∞ R_λ^t V γ(dt).
Note that ‖(V+B)g_γ‖ ≤ ‖V‖‖g_γ‖ and

| (V+B)g_γ / g_γ | = | ∫_0^∞ R_λ^t V γ(dt) | / ∫_0^∞ R_λ^t 1 γ(dt) ≤ ‖V‖.

We topologize Γ so that g_{γ_n} → g_γ if and only if ‖g_{γ_n} − g_γ‖ + ‖Bg_{γ_n} − Bg_γ‖ → 0 and γ_n ⇒ γ. Then, setting 𝒦_c = {ν ∈ P(E) : ν(K_c) = 1},

H₂V ≥ lim_{κ→0+} lim_{c→∞} sup_{g_γ∈Γ} inf_{y∈K_c} ( V(y) + (1+κ) Bg_γ(y)/g_γ(y) − κψ(y) )
 = lim_{κ→0+} lim_{c→∞} sup_{g_γ∈Γ} inf_{ν∈𝒦_c} ⟨−κ(V+ψ)g_γ + (1+κ)(V+B)g_γ, ν⟩ / ⟨g_γ, ν⟩
 = lim_{κ→0+} inf_{ν∈∪_c 𝒦_c} sup_{g_γ∈Γ} ⟨−κ(V+ψ)g_γ + (1+κ)(V+B)g_γ, ν⟩ / ⟨g_γ, ν⟩
 ≥ inf_{ν∈P(E), ⟨ψ,ν⟩>−∞} sup_{g_γ∈Γ} ⟨(V+B)g_γ, ν⟩ / ⟨g_γ, ν⟩.

The first inequality follows from the fact that Γ ⊂ D⁺⁺(B) and the inequality

V(y) + (1+κ) Bg_γ(y)/g_γ(y) − κψ(y) ≥ −(1+2κ)‖V‖ − κψ(y).

Since K_c is compact and 𝒦_c and Γ are convex, the interchange of the inf and sup is justified by Sion's theorem, Theorem B.6, taking N = 𝒦_c and M = Γ. Note that

(B.16)  (λ − (B+V)) R_λ^t 1 = 1 − e^{−λt} T(t)1,

so that

(B.17)  ⟨(V+B)R_λ^t 1, ν⟩ / ⟨R_λ^t 1, ν⟩ = λ − (1 − ⟨e^{−λt}T(t)1, ν⟩) / ⟨R_λ^t 1, ν⟩,
and that for ν satisfying ⟨ψ,ν⟩ > −∞, lim sup_{t→∞} e^{−λt} ⟨T(t)1, ν⟩ = ∞. For ε > 0, there exists ν_ε with ⟨ψ,ν_ε⟩ > −∞ such that

H₂V ≥ sup_{g_γ∈Γ} ⟨(V+B)g_γ, ν_ε⟩ / ⟨g_γ, ν_ε⟩ − ε
 ≥ λ + lim sup_{t→∞} ( ⟨e^{−λt}T(t)1, ν_ε⟩ − 1 ) / ⟨R_λ^t 1, ν_ε⟩ − ε
 ≥ λ − ε.

Since ε > 0 and λ < c*_V are arbitrary, (B.15) follows.

Lemma B.11. For each ν ∈ P(E) satisfying ⟨ψ,ν⟩ > −∞,

(B.18)  inf_{g∈D⁺⁺(B)} ∫_E ( V + Bg/g ) dν ≤ c**_V.
Proof. By (B.16),

inf_{g∈D⁺⁺(B)} ∫_E ( (B+V)g / g ) dν ≤ λ − ∫_E ( (1 − e^{−λt}T(t)1) / R_λ^t 1 ) dν.
Let λ > c**_V and t > 1. Then R_λ^t 1 ≥ ∫_0^1 e^{−(λ − inf_y V(y))s} ds ≡ δ₀ > 0, and

inf_{g∈D⁺⁺(B)} ∫_E ( (B+V)g / g ) dν ≤ λ + δ₀^{-1} e^{−λt} E_ν[e^{∫_0^t V(Y(r))dr}].

Assuming ⟨ψ,ν⟩ > −∞ and letting t → ∞, the right side converges to λ, giving (B.18).
Putting these lemmas together, we have the following.

Theorem B.12. Assume that S(t) : C_b(E) → C_b(E), V ∈ C_b(E), and K_c = {y : ψ(y) ≥ −c} is compact for each c ∈ R. If Condition B.7 holds, then (B.4) holds.

Proof. As in Lemma 11.35, the left side of (B.4) equals the left side of (B.18). Since Condition B.7 is just the assertion that c*_V = c**_V, (B.4) follows by (B.15) and (B.18).

Lemma B.10 also gives the following.

Theorem B.13. Assume that S(t) : C_b(E) → C_b(E), V ∈ C_b(E), and K_c = {y : ψ(y) ≥ −c} is compact for each c ∈ R. Define I_B as in Lemma 11.35, that is,

I_B(ν) = − inf_{g∈D⁺⁺(B)} ∫_E (Bg/g) dν ∧ ∫_E ψ dν.

If for each y ∈ E such that ψ(y) > −∞,

(B.19)  lim inf_{t→∞} (1/t) log E_y[e^{∫_0^t V(Y(s))ds}] ≥ sup_{µ∈P(E)} ( ⟨V,µ⟩ − I_B(µ) ),

then (B.4) holds.

Proof. As in Lemma 11.35, the right side of (B.19) equals the left side of (B.4). Noting that

(1/t) log E_ν[e^{∫_0^t V(Y(s))ds}] ≥ ∫_E (1/t) log E_y[e^{∫_0^t V(Y(s))ds}] ν(dy),

Fatou's lemma implies

c*_V ≥ lim inf_{t→∞} (1/t) log E_ν[e^{∫_0^t V(Y(s))ds}] ≥ sup_{µ∈P(E)} ( ⟨V,µ⟩ − I_B(µ) ).

Consequently, (B.4) follows by (B.15).
Example B.14. Consider the Ornstein–Uhlenbeck process in R^d satisfying

(B.20)  Y(t) = Y(0) + W(t) − α ∫_0^t Y(s) ds,  α > 0.

Then

Bf(y) = ½Δf(y) − αy·∇f(y),  f ∈ D_d.

Define

µ_∞(dy) = (α/π)^{d/2} e^{−α|y|²} dy.
Then µ_∞ is the stationary distribution for Y, and Y is reversible. We can therefore define a symmetric Dirichlet form

E(f,g) = −∫ f Bg dµ_∞ = ½∫ ∇f·∇g dµ_∞,  ∇f, ∇g ∈ L²(µ_∞), f, g ∈ L²(µ_∞),
where ∇f, ∇g are given in the Schwartz distribution sense. By Theorem 7.44 on page 152 of Stroock [115],

I_B(µ) = E( √(dµ/dµ_∞), √(dµ/dµ_∞) ).

Indeed, this representation can also be directly verified by a computation similar to that of Lemma D.44. Let Γ ⊂ P(R^d) be the collection of measures satisfying

(B.21)  dµ/dµ_∞ = e^{2g} / ∫ e^{2g} dµ_∞

for some g ∈ C_c²(R^d). As in Lemma D.44, by the lower semicontinuity of I_B and by convexity, each µ ∈ P(R^d) satisfying I_B(µ) < ∞ can be approximated by a sequence {µ_n} ⊂ Γ in the sense that lim_{n→∞} I(µ_n) = I(µ) and lim_{n→∞} µ_n = µ in the weak topology. Therefore, (B.19) is implied by

(B.22)  lim inf_{t→∞} (1/t) log E_y[e^{∫_0^t V(Y(s))ds}] ≥ ∫ V dµ − E( √(dµ/dµ_∞), √(dµ/dµ_∞) )

for all µ satisfying (B.21). We now introduce a class of perturbations of B given by

B^g = ½Δ − αy·∇ + ∇g·∇,  g ∈ C_c²(R^d).

Using stochastic calculus, we know that

dY(t) = ( ∇g(Y(t)) − αY(t) ) dt + dW(t)

defines a process Y which solves the martingale problem for B^g. Let P^g, P ∈ P(C_{R^d}[0,∞)) denote the probability measures solving the martingale problems for B^g and B, and let {F_t} be the natural filtration on C_{R^d}[0,∞). Then, through the Girsanov transformation,
R t −g g dP g Ft = eg(Y (t))−g(Y (0))− 0 e Be (Y (s))ds . dP Finally, Y has a unique stationary distribution under P g as well, which is given by dµg = R For every y ∈ E and g ∈ D,
e2g e2g dµ∞
dµ∞ .
Rt 1 log EyP [e 0 V (Y (s))ds ] t→∞ t Z t Z t g 1 ≥ lim EyP [ V (Y (s))ds − (g(Y (t)) − g(Y (0)) − e−g Beg (Y (s))ds)] t→∞ t 0 0 Z Z g = V dµ + e−g Beg dµg s s Z g dµ dµg = V dµg − E( , ) dµ∞ dµ∞ Z = V dµg − I(µg ),
lim inf
B.2. RELATIONSHIP TO SOME VARIATIONAL CONSTANTS
359
where the first inequality follows by Jensen’s inequality and the concavity of the log and the first equality follows from the ergodicity of the process Y under P g . By the arbitrariness of g, (B.22) follows. Consequently by Theorem B.13, (B.4) holds for the OrnsteinUhlenbeck process. (Of course, (B.4) also follows by Lemma 11.32 and the calculations in Example 11.33. B.2.2. Discrete time case. Let {Y (0), Y (1), . . .} be an R Evalued Markov chain with transition probability P (x, dy). Define P f (x) = f (y)P (x, dy), and assume P : Cb (E) → Cb (E).
(B.23)
Let P (l) (x, A) denote the l step transition probability of Y , and let V ∈ Cb (E). Define Z V (y) f (z)P (y, dz). T f (y) = e
Pn
Then
T n f (y) = E[f (Y (n))e
k=0
V (Y (k))
Y (0) = y].
We verify (B.2) under discrete time versions of Conditions B.7 and B.8. Condition B.15. There exists a constant cV such that Pn 1 (B.24) lim sup log Eν [e k=0 V (Y (k)) ] = cV , n→∞ n for each ν ∈ P(E) satisfying hψ, νi > −∞. The following is a variant of Condition (U) on page 100 in Deuschel and Stroock [30] Condition B.16. For ν1 , ν2 ∈ P(E) satisfying hνi , ψi > −∞, there exist positive integers l, N, M with 1 ≤ l ≤ N and M ≥ 1 such that Z N Z M X (l) (B.25) P (x, A)ν1 (dx) ≤ P (m) (y, A)ν2 (dy), A ∈ B(E). N m=1 As in Lemma B.9, we have Lemma B.17. Condition B.16 implies Condition B.15. We now identify the relation between cV and (B.2). Define c∗V c∗∗ V
Pn 1 log Eν [e k=0 V (Y (k)) ] ν∈P(E),hψ,νi>−∞ n→∞ n Pn 1 = sup lim sup log Eν [e k=1 V (Y (k)) ] ν∈P(E),hψ,νi>−∞ t→∞ n
=
inf
lim sup
Condition B.15 is just the statement that c∗V = c∗∗ V . Lemma B.18. Assume that P : Cb (E) → Cb (E), V ∈ Cb (E), and Kc = {y : ψ(y) ≥ −c} is compact for each c ∈ R. Then (B.26)
sup
sup
inf (V (y) + (1 + κ) log
k>0 g∈K1 (E) y∈E
P g(y) − κψ(y)) ≥ c∗V g(y)
360
B. GENERALIZED PRINCIPAL EIGENVALUES
Proof. Denote the left side of the inequality by H2V . For λ < c∗V , let Rλm g =
m X
e−λk T k g,
k=0
and let Γ be the collection of functions of the form gγ =
∞ X
Rλm 1γm ,
γm ≥ 0,
m=0
∞ X
γm = 1,
m=0
such that gγ ∈ Cb (E). Note that T Rλm g = Rλm T g = eλ (Rλm g + e−λ(m+1) T m+1 g − g)
(B.27) and
P∞ Rλm eV γm log T gγ = log Pm=0 ∞ m ≤ kV k. gγ m=0 Rλ 1γm The dominated convergence theorem implies that if gγ is continuous, then so is T gγ =
∞ X
Rλm eV γm .
m=0 (n)
We topologize Γ so that gγ (n) → gγ if and only if kgγ (n) − gγ k → 0 and γm ⇒ γm , m = 0, 1, 2, . . .. Then, as before, setting Kc = {ν ∈ P(E) : ν(Kc ) = 1}. H2V
≥
lim lim sup inf (1 + κ) log
e−κ(1+κ)
−1
(V (y)+ψ(y))
T gγ (y)
gγ (y)
κ→0+ c→∞ gγ ∈Γ ,y∈Kc
−κ
he−κ(1+κ) (V +ψ) T gγ , νi = lim lim sup inf (1 + κ) log κ→0+ c→∞ gγ ∈Γ ν∈Kc hgγ , νi −1
he−κ(1+κ) (V +ψ) T gγ , νi = lim inf sup (1 + κ) log κ→∞ ν∈∪c Kc gγ ∈Γ hgγ , νi ≥
inf
sup log
ν∈P(E),hψ,νi>−∞ gγ ∈Γ
hT gγ , νi . hgγ , νi
The first inequality follows from the fact that Γ ⊂ K1 (E) and the inequality (1 + κ) log
e−κ(1+κ)
−1
(V (y)+ψ(y))
gγ (y)
T gγ (y)
≥ −(1 + 2κ)kV k − κψ(y).
Since Kc is compact and Kc and Γ are convex, the interchange of the inf and sup is justified by Sion’s theorem, Theorem B.6. Take N = Kc and M = Γ. Note that (B.28)
hT Rλm 1, νi he−λ(m+1) T m+1 1, νi − 1 = eλ (1 + ), m hRλ 1, νi hRλm 1, νi
B.2. RELATIONSHIP TO SOME VARIATIONAL CONSTANTS
361
and that for ν satisfying hψ, νi > −∞, lim supm→∞ e−λm hT m 1, νi = ∞. For > 0, there exists ν with hψ, ν i > −∞ such that H2V
≥
sup log gγ ∈Γ
hT gγ , ν i − hgγ , ν i
≥ λ + lim sup log(1 + m→∞
he−λ(m+1) T m+1 1, ν i − 1 )− hRλm 1, ν i
≥ λ − . Since > 0 and λ < c∗V are arbitrary, (B.26) follows.
Lemma B.19. For V ∈ Cb (E), (B.29)
sup
hV + log
inf
ν∈P(E),hψ,νi>−∞ g∈K1 (E)
Pg , νi ≤ c∗∗ V . g
Proof. By (B.27) and the concavity of log, inf
hlog
g∈K1 (E)
e−λ(m+1) T m+1 1 − 1 Tg , νi ≤ λ + log(1 + h , νi). g Rλm 1
−λm m m T 1, νi = 0, so Let λ > c∗∗ V . Then Rλ 1 ≥ 1 and limm→∞ he
inf
hlog
g∈K1 (E)
Tg , νi ≤ λ, g
giving (B.29).
Putting these lemmas together, we have the following. Theorem B.20. Assume that P : Cb (E) → Cb (E), V ∈ Cb (E), and Kc = {y : ψ(y) ≥ −c} is compact for each c ∈ R. If Condition B.15 is satisfied, then (B.2) holds. Proof. As in Lemma 11.12, the left side of (B.2) equals the left side of (B.29). Since Condition B.15 is just the assertion that c∗V = c∗∗ V , (B.2) follows by (B.26) and (B.29). Lemma B.18 also gives the following. Theorem B.21. Let Z IP (ν) = −
inf g∈K1 (E)
log E
Pg dν ∧ g
Z ψdν. E
Suppose that for each y ∈ E satisfying ψ(y) > −∞, (B.30)
lim inf n→∞
Pn−1 1 log Ey [e k=0 V (Y (k)) ] ≥ sup (hV, νi − IP (ν)). n ν∈P(E)
Then (B.2) holds. Proof. The proof is essentially the same as the proof of Theorem B.13.
Dupuis and Ellis [35] (Part (a) of Proposition 8.6.1) show that (B.30) holds under the following condition. Condition B.22.
362
B. GENERALIZED PRINCIPAL EIGENVALUES
(1) There exist positive integers m0 , n0 such that for all x, y ∈ E, ∞ ∞ X X 1 (i) 1 (j) P (x, dz) 1 For d > 1, {ek1 ,··· ,kd (x1 , · · · , xd ) = αk1 (x1 ) · · · αkd (xd ) : k1 , · · · , kd = 0, 1, 2, · · · } forms a complete, orthonormal system for L2 (O), and ek1 ,...,kd is an eigenfunction for −∆ with eigenvalue λk1 ,··· ,kd = λk1 + · · · + λkd . A similar relation holds for ∆m with {ek1 ,··· ,kd ;m (x1 , · · · , xd ) = αk1 ,m (x1 ) · · · αkd ,m (xd )} and {λk1 ,...,kd ;m = λk1 ,m + · · · + λkd ,m : kj = 0, 1, 2, . . . , m − 1; j = 1, . . . , d}.
C.3. E = L2 (O) ∩ {ρ :
R ρdx = 0}
365
The following lemma gives a sense in which the orthonormal systems for L2 (Λm ) converge to the orthonormal system for L2 (O). Lemma C.1. Let ρm ∈ L2 (Λm ) satisfy lim kρm − πm ρkm = 0.
m→∞
Then lim hρm , ek1 ,...,kd ;m im = hρ, ek1 ,...,kd i,
m→∞
and hence lim inf hρm , −∆m ρm im m→∞
=
lim inf h∇m ρm , ∇m ρm im
=
lim inf
m→∞
m→∞
m−1 X k1 =0,...,kd =0
∞ X
≥
λk1 ,...,kd ;m hρm , ek1 ,...,kd ;m i2m
λk1 ,...,kd hρ, ek1 ,...,kd i2
k1 =0,...,kd =0
≡
h∇ρ, ∇ρi.
Let Bm = (I − ∆m )−1 on L2 (Λm ), and B = (I − ∆)−1 on L2 (O). Lemma C.2. Let ρm ∈ L2 (Λm ) and ρ ∈ L2 (O) satisfy lim kb ηm ρm − ρk = 0,
m→∞
or equivalently, limm→∞ kρm − πm ρkm = 0. Then for s = 1, 2, . . ., s s lim kb ηm Bm ρm − B s ρk = lim kBm ρm − πm B s ρk = 0.
m→∞
m→∞
Proof. By induction, it is enough to check convergence for s = 1, and since B and Bm are bounded operators with norm 1, that lim kBm πm ρ − πm Bρkm = 0
m→∞
holds for a dense set of ρ ∈ L2 (O). Consequently, it is enough to check convergence for ρ = αj , but this convergence is immediate from kBm πm αj −πm Bαj km ≤ kBm (πm αj −αj,m )k+k(1+λj,m )−1 αj,m −(1+λj )−1 πm αj k. C.3. E = L2 (O) ∩ {ρ :
R
ρdx = 0}
R In Example 1.13, because P of the restriction to functions satisfying ρdx = 0, and in the discrete case x∈Λm ρ(x) = 0, E and Em in (1.39) become Em = span{ek1 ,...,kd ;m : (k1 , . . . , kd ) 6= (0, . . . , 0)} and E = span{ek1 ,...,kd : (k1 , . . . , kd ) 6= (0, . . . , 0)}. In this context, if we let B = (−∆)−1 and, for m odd, Bm = (−∆m )−1 , then analogs of the results of the previous section hold. In particular, the conclusion of Lemma C.2 still holds with L2 (O) replaced by E and the limit taken through m odd.
366
C. SPECTRAL PROPERTIES FOR DISCRETE AND CONTINUOUS LAPLACIANS
C.4. Other useful approximations Let ρ ∈ L2 (O). It follows from Jensen’s inequality that kπm ρkm ≤ kρkL2 (O) . Moreover, since ∇m πm ρ = πm ∇m ρ, k∇m πm ρkm = kπm ∇m ρkm ≤ k∇m ρkL2 (O) ≤ k∇ρkL2 (O) , where the last inequality follows from by Lemma 7.23 of Gilbarg and Trudinger [54]. Let F ∈ C 2 (R) and supr F 00 (r) < ∞ so that there exist c0 , c1 > 0 such that F (r) ≤ c0 + c1 r2 . As in (2.14), define a discrete free energy function Em by X 1 X (C.5) Em (ρ) ≡ ∇m ρ(x)2 m−d + F (ρ(x))m−d , ρ ∈ Em ≡ RΛm  . 2 x∈Λm
x∈Λm
Then (C.6)
Em (πm ρ) ≤ c0 + c1 kρk2L2 (O) + k∇ρk2L2 (O) ,
ρ ∈ L2 (O).
APPENDIX D
Results from mass transport theory In Example 9.35 of Section 9.4 and in Section 13.3, we proved the comparison principle (Theorems 9.41 and 13.32) for a HamiltonJacobi equation with probabilitymeasurevalued state space. A prominent feature of this equation is that the control problem associated with it is a massconserving flow determined by ρ˙ + ∇ · (ρv) = 0, where v is a vectorvalued function modeling the velocity field and ∇· denotes the divergence. In other words, letting zv (t, x) denote the solution of z˙v (t, x) = v(zv (t, x), t) with zv (0, x) = x, ρ(t) is the probability measure determined by the identity Z Z f (x)ρ(t, dx) = f (zv (t, x))ρ0 (dx), Rd
Rd
where ρ(0) = ρ0 is the initial mass distribution. This observation naturally leads us to the consideration of mass transport techniques which we summarize in this appendix. Much of this material is taken from Villani [126]. Throughout this appendix, we use the convention that 0/0 = 0. We assume that Ψ, Φ ∈ C 2 (Rd ), that Φ is even, and that there exist λΨ , λΦ ∈ R (not necessarily nonnegative) such that (D.1)
(∇Ψ(x) − ∇Ψ(y)) · (x − y) ≥ λΨ x − y2
and similarly for Φ. We use ρ or γ or σ to denote typical elements in P(Rd ). The support supp(ρ) of ρ ∈ P(Rd ) is the closed set defined by (D.2)
supp(ρ) = {x ∈ Rd : ρ(U ) > 0 for each neighborhood U of x}.
As before, if ρ ∈ P(Rd ) has a Lebesgue density, we use ρ(x) to denote its density and ρ(dx) to denote the measure. That is, dρ = ρ(dx) = ρ(x)dx. Definition D.1 (Push forward of a probability measure). Let T : Rd → Rd be a measurable map and ρ0 ∈ P(Rd ). The image measure of ρ0 by T , denoted as (T #ρ0 )(A) ≡ ρ0 (T −1 (A)) for all Borel measurable A ⊂ Rd , is called the pushforward measure of ρ0 by T . D.1. Distributional derivatives We will use the notion of weak derivatives in the (Schwartz) space of distributions. Let O ⊂ Rd be open and let Cc∞ (O) be the space of infinitely differentiable functions on O whose support is contained in a compact subset of O. For α = (α1 , . . . , αd ), αi a nonnegative integer, let Dα f = ∂xα11 · · · ∂xαdd f . For each compact K ⊂ O, let Cc∞ (K) = {ϕ ∈ Cc∞ (O) : supp(ϕ) ⊂ K} topologized by the Frechet topology determined by the seminorms kDα ϕk∞ = supx Dα ϕ(x) for all nonnegative, integervalued vectors α. Let D(O) be the set Cc∞ (O) with the topology U such that U ∈ U if and only if U ∩Cc∞ (K) is open in Cc∞ (K) for each compact 367
368
D. MASS TRANSPORT
K ⊂ O. Then D0 (O) is defined to be the space of continuous linear functionals on D(O). (See, for example, Reed and Simon [102], Section V.4.) Lploc (O) will denote the collection of measurable functions f on O such that Z f (x)p dx < ∞, ∀ compact K ⊂ O, K
1/p R . Note that if f ∈ L1loc (O), then topologized byR the seminorms K f (x)p dx 0 ϕ → hf, ϕi ≡ O ϕf dx is in D (O). Let ν be a signed measure on O. Then for each multiindex α, Dα ν is the l ∈ D0 (O) such that Z l(ϕ) = (−1)α Dα ϕdν, ∀ϕ ∈ Cc∞ (O). O
We will write hDα ν, ϕi = (−1)α
Z
Dα ϕdν,
∀ϕ ∈ Cc∞ (O).
O
Dα ν is (can be identified as) a signed measure, if there exists a signed measure µ such that Z Z α α (−1) D ϕdν = ϕdµ, ∀ϕ ∈ Cc∞ (O), O
O
and we write Dα ν = µ in D0 (O). We say Dα ν ∈ Lploc (O) if there exists a ξ ∈ Lploc (O), such that Z Z α α (−1) D ϕdν = ϕ(x)ξ(x)dx, ∀ϕ ∈ Cc∞ (O) O
O
in which case we write Dα ν = ξ in D0 (O). If ν(dx) = f (x)dx, f ∈ L1loc (O), we write Dα f = Dα ν. Let J(x) = C exp{(x2 − 1)−1 }, if x < 1; = 0, if x ≥ 1, R where the constant C > 0 is selected so that Rd J(x)dx = 1, and define J (z) = −d J(−1 z). For f ∈ L1loc (Rd ), define Z J ∗ f (x) = J (x − y)f (y)dy
(D.3)
Rd
and if ν is a finite signed measure, Z J ∗ ν(dx) ≡
(D.4)
J (x − y)ν(dy)dx. Rd
Note that J ∗ ν ∈ C ∞ (Rd ). Let JO ∗ ν denote the restriction of J ∗ ν to O. Lemma D.2. Let ν be a finite signed measure on O and α a multiindex. Then lim→∞ Dα JO ∗ ν = Dα ν in the sense that lim hDα JO ∗ ν, ϕi = hDα ν, ϕi,
→0
ϕ ∈ D(O).
If Dα ν = ξ ∈ L1loc (Rd ), then (D.5)
α
Z
D J ∗ ν(x) =
J (x − y)ξ(y)dy. Rd
D.1. DISTRIBUTIONAL DERIVATIVES
369
For > 0, let O = {x ∈ O : supy∈Oc x − y > . If Dα ν = ξ ∈ L1loc (O), then Z α O (D.6) D J ∗ ν(x) = J (x − y)ξ(y)dy in D0 (O ), O α
JO ∗ν, ϕi
Proof. For ϕ ∈ D(O), hD = hDα J ∗ν, ϕi, and for > 0 sufficiently small, J ∗ ϕ ∈ D(O). Integrating by parts, hDα J ∗ ν, ϕi =
(−1)α hJ ∗ ν, Dα ϕi
= (−1)α hν, Dα J ∗ ϕi = hDα ν, J ∗ ϕi → hDα ν, ϕi, where the convergence follows from the fact that J ∗ ϕ converges to ϕ in D(O). The identity (D.5) follows from the fact that for ϕ ∈ Cc∞ (Rd ), J ∗ ϕ ∈ Cc∞ (Rd ) and integration by parts. See, for example, Evans [38], page 250. Similarly, if ϕ ∈ D(O ), then J ∗ ϕ ∈ D(O), so hDα JO ∗ ν, ϕi = (−1)α hν, Dα J ∗ ϕi = hξ, J ∗ ϕi = hJ ∗ ξ, ϕi Lemma D.3. If f ∈
L1loc (O),
then for each compact K ⊂ O,
Z J ∗ f − f dx = 0.
lim
→0
K
Proof. If f is continuous, the result follows by uniform continuity. The general statement follows from the fact that the continuous functions are dense in L1 (K). Lemma D.4. If f ∈ L1loc (O), ∂xi f = ξ ∈ L1loc (O), and G : R → R has a bounded continuous derivative, then (D.7)
∂xi G(f ) = G0 (f )ξ
in D0 (O).
If g ∈ C 1 (Rd ), then (D.8)
∂xi (f g) = gξ + f ∂xi g.
Proof. The ordinary chain rule and (D.6) gives ∂xi G(J ∗ f ) = G0 (J ∗ f )J ∗ ξ in D0 (O ), and (D.7) follows by Lemma D.3. The proof of (D.8) is similar.
Definition D.5. For k ≥ 0, H k (O) is the collection of f ∈ L2 (O) such that D f ∈ L2 (O) for all α, α ≤ k, and 1/2 X Z Dα f 2 dx . kf kH k (O) = α
α≤k
O
In particular, H 0 (O) = L2 (O). Recall that for each k, H k (O) is a Hilbert space. We will need the following result regarding the relationship between H 1 (O) and H 0 (O).
370
D. MASS TRANSPORT
Theorem D.6. Suppose that O is bounded and the boundary ∂O is C 1 . Then bounded subsets of H 1 (O) are relatively compact in H 0 (O). Proof. The result is a special case of the RellichKondrachov Compactness Theorem. See Adams [1], Theorem 6.2. Lemma D.7. Let f ≥ 0, f ∈ L1 (O), and ∇f ∈ L1loc (O). If Z ∇f 2 dx < ∞, (D.9) f O √ then f ∈ H 1 (O), and p ∇f (D.10) ∇ f= √ . 2 f √ Conversely, if f ≥ 0 and f ∈ H 1 (O), then f ∈ L1 (O), ∇f ∈ L1 (O), and (D.10) holds. Remark D.8. Note that (D.9) implies ∇f ∈ L1 (O), since sZ Z Z ∇f 2 f dx dx < ∞. ∇f dx ≤ f O O O Proof. Let J be defined as in (D.3), ζ0 (x) = 1[−2,2] (x), and ζ = J ∗ ζ0
(D.11) Cc∞ (Rd ), 2
d
Then ζ ∈ ζ : R → [0, 1], ζ(x) = 1, x ≤ 1, ζ(x) = 0, x ≥ 3, and supx ∇ζ(x) /ζ(x) < ∞. Define ζn (x) = ζ(n−1 x). Setting fn = ζn f , ∇fn = ζn ∇f + f n−1 ∇ζ(n−1 x), and
Z Z ∇fn 2 2 ∇ζ(n−1 x)2 ∇f 2 dx + 2 f (x) dz < ∞. dx ≤ 2 fn f n O ζ(n−1 x) O O √ √ Since Gδ (r) = r + δ − δ has a bounded, continuous derivative for r ≥ 0, Z
∇fn ∇Gδ (fn ) = √ , 2 fn + δ and by the monotone convergence theorem, Z Z ∇fn 2 2 lim ∇Gδ (fn ) dx → dx, δ→0 O fn O and in turn, by the dominated convergence theorem, ∇fn lim k∇Gδ (fn ) − √ kL2 (O) = 0. δ→0 2 fn √ √ √ √ Since r + δ − δ ≤ r, Gδ (fn ) →√ fn in L2 (O) by the dominated √ √convergence theorem. Consequently, Gδ (fn ) → fn in H 1 (O). Finally, fn → f in L2 (O) by the dominated convergence theorem, and writing √ p ∇fn ∇f ∇f f ∇ζ(n−1 ·) √ − √ = ( ζn − 1) √ + √ , 2 fn 2 f 2 f n ζn the first term on the right goes to zero in L2 (O) by the definition of ζn and the dominated convergence theorem, and the L2 (O) norm of the second term on the
D.2. CONVEX FUNCTIONS
371
p √ √ right is bounded by n−1 supx ∇ζ(x)/ ζ(x). It follows that fn → f in H 1 (O) and (D.10) holds. √ Conversely, suppose f ≥ 0 and f ∈ H 1 (O). For > 0, let f = f e−f . Setting √ 2 G (r) = r2 e−r , f = G ( f ) and p p p ∇f = 2 f e−f ∇ f − 2f 3/2 e−f ∇ f . √ √ √ √ The second term is dominated by 4 f ∇ f ,√and hence converges to 2 f ∇ f in √ L1 (O) as → 0. It follows that ∇f = 2 f ∇ f giving (D.10). D.2. Convex functions We recall a few facts about convex functions that will be used below without further comment. The standard reference on convex functions is [104]. Let ϕ denote a convex function on Rd , and define Dom(ϕ) = {x ∈ Rd : ϕ(x) < ∞}. If ϕ is proper (i.e. not identically +∞), it is continuous and locally Lipschitz on the interior of Dom(ϕ), int(Dom(ϕ)). Theorem D.9. Let ϕ be a convex function, and assume that Dom(ϕ) has none is welldefined empty interior O = int(Dom(ϕ)). Then the classical gradient ∇ϕ almost everywhere in O and agrees with the distributional gradient e ∈ L∞ (O). ∇ϕ = ∇ϕ loc
More precisely, ϕ is twice differentiable almost everywhere in O in the sense that 2 such that for almost every x ∈ O, there is a nonnegative definite d × d matrix DA 2 e ϕ(x + y) = ϕ(x) + ∇ϕ(x) · y + y T · DA ϕ(x) · y + o(y2 ),
y ∈ Rd .
Proof. Since ϕ is locally Lipschitz in O, Rademacher’s theorem gives existence of the gradient. See Evans and Gariepy [39], Section 3.1.2. The second order properties are given by Aleksandrov’s theorem. See Theorem 5 in Section 5.8.3 of Evans [38]. Furthermore, the distributional Hessian D2 ϕ is a matrixvalued mea2 ϕ is just the absolutely continuous part of D2 ϕ. (See the comments sure, and DA on page 58 of [126]). For an arbitrary function ψ, let ψ ∗ denote the LegendreFenchel transform ψ ∗ (x) ≡ sup{x · y − ψ(y)}. y
Then ψ ∗ is a lower semicontinuous, convex function. (Do not confuse this notation with the notation for the upper semicontinuous regularization of a function.) Lemma D.10. A proper, convex function ϕ is lower semicontinuous if and only if ϕ = ϕ∗∗ . Proof. See Theorem 12.2 of [104].
Lemma D.11. If ϕ is a lower semicontinuous, convex function, then ϕ(x) = x · ∇ϕ(x) − ϕ∗ (∇ϕ(x)), for almost every x ∈ int(Dom(ϕ)). Lemma D.12. Let ϕ be a lower semicontinuous, convex function, and let ψ = ϕ∗ . For r > 0, define ϕr (x) = sup (x · y − ψ(y)). y≤r d
Then ϕr is finite on R , ϕr ≤ ϕ, ∇ϕr  ≤ r, and ϕr (x) = ϕ(x) and ∇ϕr (x) = ∇ϕ(x) for almost every x such that ∇ϕ(x) ≤ r.
372
D. MASS TRANSPORT
Proof. The lemma follows from Theorems 23.5 and 25.1 or [104].
D.3. The pWasserstein metric space For p ≥ 1, let Pp (Rd ) = {ρ ∈ P(Rd ) :
(D.12)
Z
xp ρ(dx) < ∞}.
Rd
For ρ, γ ∈ P(Rd ), define (D.13)
Π(ρ, γ) ≡ {π ∈ P(Rd × Rd ) : π(· × Rd ) = ρ, π(Rd × ·) = γ}.
Definition D.13. The pWasserstein metric on the space Pp (Rd ) is Z 1/p (D.14) dp (ρ, γ) = inf{ x − yp π(dx, dy) : π ∈ Π(ρ, γ)} . Rd ×Rd
for ρ, γ ∈ Pp (Rd ). Lemma D.14. For ρ, γ ∈ P(Rd ), Π(ρ, γ) is compact in the weak topology on P(Rd × Rd ) and the infimum in the definition of dp is achieved. If dp (γn , γ0 ) → 0, dp (ρn , ρ0 ) → 0 and πn ∈ Π(ρn , γn ) satisfies Z 1/p p dp (ρn , γn ) = x − y πn (dx, dy) , Rd ×Rd
then {πn } is relatively compact in the weak topology and any limit point π0 of the sequence {πn } satisfies Z 1/p (D.15) x − yp π0 (dx, dy) = dp (ρ0 , γ0 ). Rd ×Rd
In particular, if there is a unique π0 satisfying (D.15), then πn converges weakly to π0 . Proof. If π ∈ Π(ρ, γ), then π((K × K)c ) ≤ ρ(K c ) + γ(K c ) and compactness of Π(ρ, γ) in the weak topology follows. The lower semicontinuity of the integral in the definition (D.14) ensures that the infimum is achieved. Similarly, tightness for {πn } follows from the tightness of {ρn } and {γn } and Z 1/p Z 1/p x − yp π0 (dx, dy) ≤ lim inf x − yp πn (dx, dy) Rd ×Rd
n→∞
Rd ×Rd
= dp (ρ0 , γ0 ), so (D.15) follows.
(Pp (Rd ), dp ) is a complete, separable metric space for p ≥ 1 (Theorem 7.3 of [126]). The case p = 2 will be of special interest to us, and we will simply use d = d2 to denote the 2Wasserstein metric.
D.3. THE pWASSERSTEIN METRIC SPACE
373
Lemma D.15. For ρ, γ ∈ P2 (Rd ), there exists π ∈ Π(ρ, γ) such that Z (D.16) d2 (ρ, γ) = x − y2 π(dx, dy). Rd ×Rd
If ρ or γ is absolutely continuous with respect to Lebesgue measure, then π satisfying (D.16) is unique. Remark D.16. We denote the collection of π ∈ Π(ρ, γ) satisfying (D.16) by Πopt (ρ, γ). Proof. R As noted above, Π(ρ, γ) is compact in the weak topology and the mapping π → x − y2 π(dx, dy) is lower semicontinuous. Consequently, the infimum is achieved. The uniqueness is given in Theorem D.25. We have the following characterization of convergence and relative compactness in (Pp (Rd ), dp ). Lemma D.17. For p ≥ 1, (Pp (Rd ), dp ) is a complete, separable metric space. For ρn , ρ ∈ Pp (Rd ), the following are equivalent: (1) limn→∞ dp (ρn , ρ) = 0. (2) For all ϕ ∈ C(Rd ) satisfying ϕ(x) ≤ C(1 + xp ) for some C > 0, Z Z ϕdρn = ϕdρ. lim n→∞
Rd
Rd
(3) ρn ⇒ ρ in the sense of weak convergence of probability measures and Z Z lim xp ρn (dx) = xp ρ(dx). n→∞
Rd
Rd
(4) ρn ⇒ ρ in the sense of weak convergence of probability measures and Z lim sup xp ρn (dx) = 0. N →∞ n
x>N
Let K ⊂ Pp (Rd ). The following are equivalent: (1) K is relatively compact. (2) K is puniformly integrable in the sense that Z lim sup xp ρ(dx) = 0. N →∞ ρ∈K
x>N
(3) There exists ϕ : [0, ∞) → [0, ∞) satisfying ϕ(0) = 0, ϕ convex and increasing, and ϕ superlinear in the sense that ϕ(x) = ∞, lim x→∞ x such that Z sup ϕ(xp )ρ(dx) < ∞. ρ∈K
Rd
Proof. The results follow easily from standard results on weak convergence and uniform integrability. The convergence results can be found in Villani [126], Theorem 7.12. The completeness and separability of the space, as well as the relative compactness results can be found in Proposition 7.4.2. of Ambrosio, Gigli and Savar´e [3]. The use of the superlinear function ϕ follows from Theorem 22 and its proof on page 24 of Dellacherie [28].
374
D. MASS TRANSPORT
Note that for probability measures on Rd , puniform integrability (p ≥ 1) implies tightness. D.4. The MongeKantorovich problem Many of the results on the minimization problem defining the pWasserstein metric (D.14) can be extended to bivariate functions c more general than x − yp . We collect a number of results that will be useful in the main text. Let c : Rd × Rd → [0, ∞] be lower semicontinuous. Following Section 3.3 of Rachev and R¨ uschendorf [101] and Section 2.4 of Villani [126], the usual concept of concavity can be generalized as follows. Definition D.18 (ctransform, cconcavity). a) For p : Rd → R, the ctransform pc : Rd → R is defined by pc (y) = inf (c(x, y) − p(x)) x∈Rd
with the convention that the difference is +∞ when c(x, y) = +∞ and p(x) = +∞. Similarly, q c (x) = inf (c(x, y) − q(y)). y∈Rd
b) A function q is cconcave if q = pc for some p; equivalently, q is cconcave if there exists some {(xi , ti )}i∈I ⊂ Rd × R such that q(y) = inf (c(xi , y) + ti ), i∈I
∀y ∈ Rd .
It follows that pcc ≡ (pc )c ≥ p and equality holds if and only if p is cconcave (Exercise 2.35 of [126]). If c is continuous, then any cconcave function is upper semicontinuous. Taking c(x, y) = x − y2 /2, let ϕ(x) =
x2 − p(x), 2
ψ(y) =
y2 − q(x). 2
Then p(x) + q(y) ≤ c(x, y) if and only if ϕ(x) + ψ(y) ≥ x · y. In particular, p = q c if and only if ϕ = ψ ∗ , where ψ ∗ is the LegendreFenchel transform ψ ∗ (x) ≡ sup{x · y − ψ(y)}. y
A function p : Rd → R is proper if there exists x ∈ Rd such that p(x) ∈ R. Let ρ, γ ∈ P(Rd ), and define (D.17) (D.18) (D.19)
= {(p, q) : p, q are proper and p = q c , q = pc }, = {(p, q) ∈ Φc : inf q(y) ≤ 0 ≤ sup q(y)}, y y Z Φc,0 (γ) = {(p, q) ∈ Φc : q ∈ L1 (dγ), q(y)γ(dy) = 0}, Φc Φc,0
Rd
(D.20)
Φc (ρ, γ)
=
1
1
Φc ∩ (L (dρ) × L (dγ)),
D.4. THE MONGEKANTOROVICH PROBLEM
375
Remark D.19. Let c be uniformly continuous with modulus of continuity ωc , that is, c(b x, yb) − c(x, y) ≤ ωc (b x − x + b y − y). If (p, q) ∈ Φc , then p(x) − p(b x) ≤ ωc (b x − x) and q(y) − q(b y ) ≤ ωc (y − yb). For example, (D.21) p(x)−p(b x) = inf (c(x, y)−q(y))−inf (c(b x, y)−q(y)) ≤ sup(c(x, y)−c(b x, y)). y
y
y
If c is bounded, then − sup q(y) ≤ p(x) ≤ kck∞ − sup q(y), y
y
and hence q(y) = inf (c(x, y) − p(x)) ≥ sup q(y 0 ) − kck∞ x
y0
and similarly for p. Consequently, if (p, q) ∈ Φc,0 , then kpk∞ , kqk∞ ≤ kck∞ . If c is bounded and uniformly continuous, then by Ascoli’s Theorem (e.g., Theorem 17 in Chapter 7 of Kelley [65]), {p : (p, q) ∈ Φc,0 (γ)} and {p : (p, q) ∈ Φc,0 } are compact subsets of Cb (Rd ) under the topology of uniform convergence over compact subsets of Rd . If in addition there exist k0 and c0 such that c(x, y) = c0 for x + y ≥ k0 , then (p, q) ∈ Φc implies p(x) = c0 − supy q(y) for x ≥ k0 , and Φc,0 is compact in Cb (Rd ) under the uniform topology. For example, take c(x, y) = (x − y2 + θy2 ) ∧ M,
θ, M > 0.
d
For ρ, γ ∈ P(R ), define (D.22)
Z Fc (ρ, γ) = inf{
c(x, y)π(dx, dy) : π ∈ Π(ρ, γ)}
Rd ×Rd
Theorem D.20. [Kantorovich duality] RLet c : Rd × Rd → [0, ∞] be lower semicontinuous, and let ρ, γ ∈ P(Rd ) satisfy Rd ×Rd c(x, y)ρ(dx)γ(dy) < ∞. Then a) There exist π0 ∈ Π(ρ, γ) and (p0 , q0 ) ∈ Φc,0 (γ), such that Z (D.23) Fc (ρ, γ) = c(x, y)π0 (dx, dy) Rd ×Rd Z = p0 (x)ρ(dx) Rd Z = sup{ pdρ : (p, q) ∈ Φc,0 (γ)} d ZR Z = sup{ pdρ + qdγ : (p, q) ∈ Φc (ρ, γ)}, Rd
Rd
b) π0 ∈ Π(ρ, γ) satisfies (D.23) if and only if p0 (x) + q0 (y) = c(x, y),
(x, y) − π0 − a.e.
Proof. Existence of π0 follows from the compactness of Π(ρ, γ) and the lower R semicontinuity of the mapping π → Rd ×Rd c(x, y)π(dx, dy), and Part (a) follows from Theorem 7.2.4 of [3]. Part (b) follows from the fact that p0 (x) + q0 (y) ≤ c(x, y).
376
D. MASS TRANSPORT
Corollary D.21. Let Kγ ⊂ Rd denote the support of γ, and define (D.24)
b γc = {b Γ p(·) = inf c(·, y) − q(y) : (p, q) ∈ Φc,0 (γ)}. y∈Kγ
Then Z (D.25)
Fc (ρ, γ) = sup{ Rd
b γc }, pdρ : p ∈ Γ
and for π0 and (p0 , q0 ) satisfying (D.23), the supremum in (D.25) is achieved at pb0 (·) = inf y∈Kγ c(·, y) − q0 (y) and pb0 (x) + q0 (y) = c(x, y)
a.s. π0 .
bγ Remark D.22. If c is continuous and Kγ is compact, then, as in (D.21), Γ c γ b is an equicontinuous collection of functions, and the functions in Γc are bounded R R b γ : d pdρ ≥ α} above by Rd c(x, y)γ(dy). It follows that for each α ∈ R, {p ∈ Γ c R is relatively compact in the topology of uniform convergence on compact sets. Proof. For (p, q) ∈ Φc,0 (γ), p(x) + q(y) ≤ pb(x) + q(y) ≤ c(x, y) a.s. π0 . Consequently, Z Fc (ρ, γ) = sup{ pdρ : (p, q) ∈ Φc,0 (γ)} d ZR b c (γ)} ≤ sup{ pbdρ : pb ∈ Γ Rd Z ≤ c(x, y)π0 (dx × dy). Rd ×Rd
We define (D.26)
Φopt c (ρ, γ) = {(p0 , q0 ) ∈ Φc (ρ, γ) : (p0 , q0 ) satisfies (D.23)}
and (D.27)
Πopt c (ρ, γ) = {π0 ∈ Π(ρ, γ) : π0 satisfies (D.23)}.
Lemma D.23. Let c be continuously differentiable with bounded gradient, and let opt ρ have a Lebesgue density. Suppose that π0 ∈ Πopt c (ρ, γ) and (p0 , q0 ) ∈ Φc (ρ, γ). Then ∇p0 exists Lebesgue almost everywhere (hence almost everywhere with respect to ρ) and (D.28)
∇p0 (x) = ∇x c(x, y),
π0 − a.e.
d
Consequently, there exist R valued random variables X, Y such that (X, Y ) has joint distribution π0 , and ∇p0 (X) = ∇x c(X, Y ),
almost surely.
Remark D.24. This lemma is essentially Part b) of Remark 3.3.14 of Rachev and R¨ uschendorf [101]. Proof. By part (b) of Theorem D.20, p0 (x) + q0 (y) = c(x, y) a.s. π0 . By Remark D.19, p0 is Lipschitz and hence ∇p0 exists Lebesgue almost everywhere. Since c(x, y) − p0 (x) − q0 (y) ≥ 0, for each (x, y) such that p0 (x) + q0 (y) = c(x, y) and ∇p0 (x) exists, we have ∇x (c(x, y) − p0 (x) − q0 (y)) = 0 and (D.28) follows.
D.4. THE MONGEKANTOROVICH PROBLEM
1 2 x
377
Results from convex analysis can be used to extend Lemma D.23 to c(x, y) = − y2 .
Theorem D.25. [Brenier’s optimal transport map] Let ρ, γ ∈ P2 (Rd ), c(x, y) = − y2 , and πρ,γ ≡ π0 and (pρ,γ , qρ,γ ) ≡ (p0 , q0 ) ∈ Φc,0 (γ) satisfy (D.23). Suppose ρ(dx) = ρ(x)dx has a Lebesgue density, and let O be the interior of the convex hull of the support of ρ. Then ∇pρ,γ ∈ D0 (O) exists and agrees with the classical gradient almost everywhere in O. The optimal πρ,γ ≡ π0 in (D.23) is unique and is given by 1 2 x
(D.29)
πρ,γ (dx, dy) = ρ(dx)δTρ,γ (x) (dy),
where Tρ,γ (x) = x − ∇pρ,γ (x). In particular, γ is the push forward of ρ by Tρ,γ and Z Z (D.30) ∇pρ,γ (x)2 ρ(dx) = x − y2 πρ,γ (dx, dy) = d2 (ρ, γ). Rd
Rd ×Rd
Remark D.26. Note that while ∇pρ,γ is only defined in O, ρ(O) = 1, so the left side of (D.30) is welldefined. More generally, taking ρ∇pρ,γ to be zero off the support of ρ, the finiteness of (D.30) implies ρ∇pρ,γ ∈ L1loc (Rd ) and ∇ · (ρ∇pρ,γ ) ∈ D0 (Rd ). Proof. For uniqueness of πρ,γ , see Brenier’s theorem, Theorem 2.12 of [126]. Let ϕρ,γ (x) = supy (x · y − 12 y2 + qρ,γ (y)). Then 1 2 x − ϕρ,γ (x), 2 and hence ∇pρ,γ exists Lebesgue almost everywhere in int(Dom(ϕ ρ,γ )) (see TheR orem D.9) and agrees with the classical gradient. Since Rd pρ,γ dρ = d(ρ, γ), int(Dom(ϕ)) must contain O. Consequently, (D.28) holds giving ∇pρ,γ (x) = (x−y) a. s. πρ,γ . pρ,γ (x) =
Remark D.27. Following [126], we call Tρ,γ in the above theorem Brenier’s optimal transport map . In Definition 9.36, we defined a gradient for a function on P2 (Rd ). We can now identify this quantity for two useful functions. Theorem D.28. Suppose that ρ, γ ∈ P2 (Rd ) both have Lebesgue densities. Let E be defined according to (9.94), and let d be the 2Wasserstein metric. Then in the sense of Definition 9.36, (D.31)
gradρ d2 (ρ, γ) = −2∇ · (ρ∇pρ,γ ),
where pρ,γ is as in Theorem D.25 and ρ∇pρ,γ is defined as in Remark D.26. If in addition, E(ρ) < ∞, then 1 (D.32) grad E(ρ) = − ∆ρ + ∇ · (ρ∇(Ψ + ρ ∗ Φ)) in D0 (Rd ). 2 Proof. The conclusion follows from Theorem 8.13 of [126] and some computations following [63]. Let ρ, γ ∈ P2 (Rd ) have Lebesgue densities, and let p ∈ Cc∞ (Rd ). Define ρp (t) as in (9.83), that is, ρp is a weak solution of ρ˙ p + ∇ · (ρp ∇p) = 0,
ρp (0) = ρ.
378
D. MASS TRANSPORT
By Theorem 8.13 of [126], Z d 2 p d (ρ (t), γ) = 2∇pρ,γ (x) · ∇p(x)ρ(dx), dt t=0 Rd and (D.31) follows by Definition 9.36. Similarly, (D.32) follows from (37) and (38) of [63] and the following computation. Let ρ ∈ P2 (Rd ), p ∈ Cc∞ (Rd ) and E(ρ) < ∞. Then, with reference to (9.83), Z Z Φ(x − y)ρp (t, dx)ρp (t, dy) = Φ(zp (t, x) − zp (t, y))ρ0 (dx)ρ0 (dy), Rd ×Rd
Rd ×Rd
and d dt
Z Rd ×Rd
Φ(x − y)ρ(t, dx)ρ(t, dy)
Z ∇Φ(x − y)
= Rd ×Rd
t=0
d (zp (t, x) − zp (t, y)) ρ0 (dx)ρ0 (dy) dt t=0
Z ∇(ρ0 ∗ Φ)∇pdρ0 .
=2
D.5. Weighted Sobolev spaces Hµ1 (Rd ) and Hµ−1 (Rd ) For µ ∈ P(Rd ), define kpk21,µ ≡
Z
∇p2 dµ,
Rd
∀p ∈ Cc∞ (Rd ),
and let Hµ1 (Rd ) ≡ the completion of Cc∞ (Rd ) under k · k1,µ . For u ∈ D0 (Rd ), define kuk2−1,µ ≡
(D.33)
sup p∈Cc∞ (Rd )
(2hu, pi − kpk21,µ ) =
hu, pi2 2 , p∈Cc∞ (Rd ) kpk1,µ sup
and let Hµ−1 (Rd ) ≡ {equivalence classes of u ∈ D0 (Rd ) : kuk2−1,µ < ∞}. Noting that we
see that Hµ−1 (Rd ) gives a Let L2µ (Rd ) be the class
hu, pi ≤ kuk−1,µ kpk1,µ , representation of the dual for Hµ1 (Rd ). of Rd valued functions which are componentwise in
L2 (µ), and denote kξk20,µ ≡
Z
ξ2 dµ,
Rd (Hµ1 (Rd ),
ξ ∈ L2µ (Rd ).
Direct verification shows that k·k1,µ ), (L2µ (Rd ), k·k0,µ ) and (Hµ−1 (Rd ), k · k−1,µ ) are Hilbert spaces, and their inner products can be represented using the polarization identity: 1 (D.34) hu, vik,µ = (ku + vk2k,µ − ku − vk2k,µ ). 4 Geometrically, these spaces can be viewed as equivalent model spaces for the tangent bundle to P2 (Rd ). Below, we examine the relationships among the three spaces.
1 −1 D.5. WEIGHTED SOBOLEV SPACES Hµ (Rd ) AND Hµ (Rd )
379
Let p, q ∈ H^1_µ(R^d). For each q, l_q(p) ≡ ⟨p,q⟩_{1,µ} is a linear functional in p satisfying

(D.35)  |⟨p,q⟩_{1,µ}| ≤ ‖q‖_{1,µ}‖p‖_{1,µ} = ‖q‖_{1,µ}‖∇p‖_{L²_µ(R^d)},  ∀p ∈ C_c^∞(R^d).

We can use this functional to extend the notion of ∇q from q ∈ C_c^∞(R^d) to all q ∈ H^1_µ(R^d). We denote the extension by ∇̂q. In general, the extension may depend strongly on µ, and we need to clarify its relationship with the distributional derivative ∇q, if ∇q exists.

Let L = {∇p : p ∈ C_c^∞(R^d)} ⊂ (L²_µ(R^d), ‖·‖_{0,µ}), and let L²_{µ,∇}(R^d) denote the closure of L in L²_µ(R^d). Since p ↔ ∇p is a one-to-one correspondence between C_c^∞(R^d) and L, and ‖p‖_{1,µ} = ‖∇p‖_{0,µ}, we see that H^1_µ(R^d) is isomorphic to L²_{µ,∇}(R^d). By the Riesz representation theorem, there exists a unique ξ_q ∈ L²_{µ,∇}(R^d) such that

(D.36)  l_q(p) ≡ ⟨p,q⟩_{1,µ} = ∫_{R^d} ξ_q·∇p dµ,  ∀p ∈ C_c^∞(R^d).

We define ∇̂q = ξ_q. Note that ‖q‖²_{1,µ} = ‖∇̂q‖²_{0,µ}, and more generally that

  ⟨p,q⟩_{1,µ} = ∫_{R^d} ∇̂p·∇̂q dµ ≡ ⟨∇̂p, ∇̂q⟩_{0,µ}.
Lemma D.29. Suppose ξ ∈ L²_µ(R^d). Then ξ ∈ L²_{µ,∇}(R^d) if and only if

  ‖ξ + η‖_{L²_µ(R^d)} ≥ ‖ξ‖_{L²_µ(R^d)},  ∀η ∈ L²_µ(R^d) with ∇·(µη) = 0,

or equivalently,

(D.37)  ⟨ξ,η⟩_{0,µ} = ∫_{R^d} ξ·η dµ = 0,  ∀η ∈ L²_µ(R^d) with ∇·(µη) = 0.

Proof. If ξ ∈ L²_{µ,∇}(R^d), then by definition there exist p_n ∈ C_c^∞(R^d) such that ‖ξ − ∇p_n‖_{L²_µ(R^d)} → 0. Suppose ∇·(µη) = 0. Then ⟨∇p_n, η⟩_{0,µ} = 0, and hence ⟨ξ,η⟩_{0,µ} = 0. Conversely, for η ∈ L²_µ(R^d), ∇·(µη) = 0 implies

  η ∈ ({∇p : p ∈ C_c^∞(R^d)})^⊥ = (L²_{µ,∇}(R^d))^⊥.

Therefore (D.37) implies ξ ∈ ((L²_{µ,∇}(R^d))^⊥)^⊥ = L²_{µ,∇}(R^d). □
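A discretized version of Lemma D.29 can be checked with linear algebra. In the sketch below (an illustration with our own notation, not from the text), the columns of a matrix D span the "gradient" subspace, W = diag(µ) gives the weighted inner product, and the divergence-free condition ∇·(µη) = 0 becomes DᵀWη = 0:

```python
import numpy as np

# Discrete analog of Lemma D.29: <a,b>_W = a.W b plays the role of <.,.>_{0,mu}.
rng = np.random.default_rng(1)
n, k = 8, 3
D = rng.normal(size=(n, k))                  # columns span the gradient subspace
W = np.diag(rng.uniform(0.5, 2.0, size=n))   # positive weights mu

xi = rng.normal(size=n)
# W-orthogonal projection of xi onto range(D): weighted normal equations
coef = np.linalg.solve(D.T @ W @ D, D.T @ W @ xi)
xi1 = D @ coef                               # projection (analog of xi_1 in (D.38))
xi2 = xi - xi1                               # W-orthogonal complement part

# xi2 is "divergence free", i.e. W-orthogonal to every gradient (analog of (D.37))
assert np.allclose(D.T @ W @ xi2, 0)

# minimal-norm characterization: ||xi1 + eta||_W >= ||xi1||_W whenever D^T W eta = 0
for _ in range(100):
    eta = rng.normal(size=n)
    eta -= D @ np.linalg.solve(D.T @ W @ D, D.T @ W @ eta)  # project out gradients
    assert (xi1 + eta) @ W @ (xi1 + eta) >= xi1 @ W @ xi1 - 1e-12
```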
Let ξ ∈ L²_µ(R^d), and write

(D.38)  ξ = ξ_1 + ξ_2,

where ξ_1 is the projection of ξ onto L²_{µ,∇}(R^d) and ξ_2 is the projection of ξ onto (L²_{µ,∇}(R^d))^⊥. Note that −∇·(µξ) exists as a Schwartz distribution, and

  ⟨−∇·(µξ), p⟩ = ⟨ξ_1, ∇p⟩_{0,µ}
for each p ∈ C_c^∞(R^d). Since

(D.39)  ‖−∇·(µξ)‖²_{-1,µ} = sup_{p∈C_c^∞(R^d)} (2∫_{R^d} ∇p·ξ_1 dµ − ∫_{R^d} |∇p|² dµ) = ‖ξ_1‖²_{0,µ} ≤ ‖ξ‖²_{0,µ},

we have −∇·(µξ) ∈ H^{-1}_µ(R^d).

Let µ ∈ P(R^d). If q ∈ C_c^∞(R^d) ⊂ H^1_µ(R^d), then the classical gradient satisfies ∇q ∈ L²_{µ,∇}(R^d), and by the defining relation (D.36), ∇̂q = ∇q. More generally, we have the following.

Lemma D.30. Suppose q ∈ C¹(R^d) and ∫_{R^d} |∇q|² dµ < ∞. Then q ∈ H^1_µ(R^d) and ∇̂q = ∇q.

Proof. Let ζ ∈ C_c^∞(R^d) satisfy 0 ≤ ζ ≤ 1 and ζ(x) = 1 for |x| ≤ 1, and define ζ_n(x) = ζ(n^{-1}x). Let β ∈ C^∞(R) be nondecreasing and satisfy β(r) = r for |r| ≤ 1 and |β(r)| = 2 for |r| ≥ 3. Define

(D.40)  β_m(r) = mβ(m^{-1}r).
Let J_ε be the mollifier defined in (D.3). Define

(D.41)  q_{n,m,ε} = ζ_n J_ε ∗ (β_m ∘ q) ∈ C_c^∞(R^d).

Then

  ∇q_{n,m,ε} = n^{-1} J_ε ∗ (β_m ∘ q) ∇ζ(n^{-1}·) + ζ_n J_ε ∗ (β′_m ∘ q ∇q)

and

  ∫_{R^d} |∇q_{n,m,ε} − ∇q|² dµ ≤ 3n^{-2} ∫_{R^d} |J_ε ∗ (β_m ∘ q) ∇ζ(n^{-1}·)|² dµ + 3∫_{R^d} |ζ_n − 1|² |∇q|² dµ + 3∫_{R^d} |ζ_n|² |∇q − J_ε ∗ (β′_m ∘ q ∇q)|² dµ.

First let ε → 0, eliminating the mollification; then let n → ∞, taking the first two terms to zero and the last term to

  3∫_{R^d} |1 − β′_m ∘ q|² |∇q|² dµ.

Finally, this term goes to zero as m → ∞, and the lemma follows. □
Lemma D.31. Let µ ∈ P(R^d) have a Lebesgue density. Let q ∈ L¹_loc(R^d), and assume that the gradient of q in the distributional sense satisfies ∇q ∈ L¹_loc(R^d) and ∫_{R^d} |∇q|² dµ < ∞. Then q ∈ H^1_µ(R^d) and ∇̂q = ∇q. In particular,

  ‖q‖²_{1,µ} = ∫_{R^d} |∇q|² dµ.

Proof. Note that by (D.5), the distributional gradient ∇q satisfies ∇(J_ε ∗ q) = J_ε ∗ ∇q, and the claim follows by the same approximation argument as in Lemma D.30. □

Lemma D.32. Let β_m be as defined in (D.40), and let q : R^d → R and v : R^d → R^d be measurable. Suppose ∇β_m(q) = β′_m(q)v for each m = 1, 2, .... If ∫_{R^d} |v|² dµ < ∞, then q ∈ H^1_µ(R^d) with ‖q‖²_{1,µ} = ∫_{R^d} |v|² dµ.
Proof. Let q_{n,m,ε} be as in (D.41). Then

  ∇q_{n,m,ε} = n^{-1} J_ε ∗ (β_m ∘ q) ∇ζ(n^{-1}·) + ζ_n J_ε ∗ (β′_m ∘ q v),

and the lemma follows as in the proof of Lemma D.30. □
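The identity ∇(J_ε ∗ q) = J_ε ∗ ∇q invoked in the proof of Lemma D.31 is easy to confirm numerically. The sketch below (our own discretization; the grid, kernel width, and test function are arbitrary choices) compares mollify-then-differentiate with differentiate-then-mollify away from the grid boundary:

```python
import numpy as np

# Discrete check that differentiation commutes with mollification.
dx = 1e-3
x = np.arange(-4, 4, dx)
q = np.tanh(x) * np.exp(-x**2 / 4)           # a smooth test function

eps = 0.05
kernel = np.exp(-(np.arange(-5 * eps, 5 * eps, dx) / eps) ** 2)
kernel /= kernel.sum() * dx                  # normalized mollifier on the grid

smooth_then_diff = np.gradient(np.convolve(q, kernel, mode="same") * dx, dx)
diff_then_smooth = np.convolve(np.gradient(q, dx), kernel, mode="same") * dx

interior = slice(1000, -1000)                # avoid boundary effects of "same" mode
assert np.max(np.abs(smooth_then_diff[interior] - diff_then_smooth[interior])) < 1e-6
```

Both operations are linear and shift-invariant, so they commute exactly in the interior; only floating-point rounding remains.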
Lemma D.33. Let µ ∈ P(R^d) have a Lebesgue density. Let ϕ be a lower semicontinuous, convex function with µ(int(Dom(ϕ))) = 1. Assume that ∫_{R^d} |∇ϕ|² dµ < ∞. Then ϕ ∈ H^1_µ(R^d) and ∇̂ϕ = ∇ϕ.

Proof. Let ϕ_r be defined as in Lemma D.12. Then ϕ_r ∈ L^∞_loc(R^d), and since |∇ϕ_r(x)| ≤ |∇ϕ(x)|, ϕ_r satisfies the conditions of Lemma D.31. Consequently, ϕ_r ∈ H^1_µ(R^d), and since |∇ϕ(x)| < r implies ∇ϕ(x) = ∇ϕ_r(x), the dominated convergence theorem gives

  lim_{r→∞} ∫_{R^d} |∇ϕ − ∇ϕ_r|² dµ = 0,

giving the desired result. □

Assuming ρ, γ ∈ P_2(R^d) and ρ(dx) = ρ(x)dx, let p_{ρ,γ} be as in Theorem D.25. Since p_{ρ,γ} is the difference of two convex functions, by Lemma D.33, p_{ρ,γ} ∈ H^1_ρ(R^d), ∇̂p_{ρ,γ} = ∇p_{ρ,γ} ∈ L²_{ρ,∇}(R^d), and

(D.42)  ‖p_{ρ,γ}‖²_{1,ρ} = ‖∇p_{ρ,γ}‖²_{0,ρ} = ∫_{R^d} |∇p_{ρ,γ}|² dρ = d²(ρ,γ).
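In one dimension the objects in (D.42) are completely explicit for Gaussian measures, so the identity ∫|∇p_{ρ,γ}|² dρ = d²(ρ,γ) can be verified directly. The following sketch (our own example; the monotone map T below is the classical optimal transport map between one-dimensional Gaussians) checks it by quadrature:

```python
import numpy as np

# rho = N(m1, s1^2), gamma = N(m2, s2^2): the monotone optimal map is
# T(x) = m2 + (s2/s1)(x - m1), and grad p(x) = x - T(x), so
# int |grad p|^2 drho = (m1 - m2)^2 + (s1 - s2)^2 = d^2(rho, gamma).
m1, s1, m2, s2 = 0.3, 1.0, -0.7, 1.5

dx = 1e-3
x = np.arange(m1 - 10 * s1, m1 + 10 * s1, dx)
rho = np.exp(-(x - m1) ** 2 / (2 * s1 ** 2)) / (s1 * np.sqrt(2 * np.pi))
T = m2 + (s2 / s1) * (x - m1)

lhs = np.sum((x - T) ** 2 * rho) * dx       # int |grad p_{rho,gamma}|^2 drho
rhs = (m1 - m2) ** 2 + (s1 - s2) ** 2       # squared Wasserstein distance
assert abs(lhs - rhs) < 1e-6
```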
Since H^{-1}_µ(R^d) gives the dual of H^1_µ(R^d) and H^1_µ(R^d) is a Hilbert space, there must be a natural isomorphism between the two spaces.

Lemma D.34.
(1) For each u ∈ H^{-1}_µ(R^d), there exists a unique p_u ∈ H^1_µ(R^d) such that

(D.43)  u = −∇·(µ∇̂p_u)

in the sense that

  ∫_{R^d} ∇p·∇̂p_u dµ = ⟨u, p⟩,  ∀p ∈ C_c^∞(R^d).

(2) Let p, q ∈ H^1_µ(R^d), and define

(D.44)  u = −∇·(µ∇̂p),  v = −∇·(µ∇̂q).

Then

(D.45)  ⟨u, v⟩_{-1,µ} = ⟨p, q⟩_{1,µ} = ∫_{R^d} ∇̂p·∇̂q dµ = ⟨u, q⟩ = ⟨v, p⟩.

Proof. For u ∈ H^{-1}_µ(R^d), the map p ↦ ⟨u, p⟩, p ∈ C_c^∞(R^d), extends to a unique continuous linear functional on H^1_µ(R^d). Consequently, there must exist p_u ∈ H^1_µ(R^d) such that

  ⟨u, p⟩ = ⟨p_u, p⟩_{1,µ} = ∫_{R^d} ∇p·∇̂p_u dµ,  p ∈ C_c^∞(R^d).

It follows that

  ‖u‖²_{-1,µ} = ‖p_u‖²_{1,µ} = ∫_{R^d} |∇̂p_u|² dµ,

and (D.34) then implies (D.45). □
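Lemma D.34 also has a transparent discrete counterpart: on a weighted graph the operator p ↦ DᵀWDp plays the role of −∇·(µ∇p). The sketch below (our own construction, not from the text) solves the discrete version of (D.43) for p_u and checks the identities in (D.45):

```python
import numpy as np

# Discrete analog of Lemma D.34 on a path graph with n nodes.
rng = np.random.default_rng(2)
n = 20
Dm = np.zeros((n - 1, n))                        # forward-difference ("gradient") matrix
for i in range(n - 1):
    Dm[i, i], Dm[i, i + 1] = -1.0, 1.0
W = np.diag(rng.uniform(0.5, 2.0, size=n - 1))   # positive edge weights mu
Lap = Dm.T @ W @ Dm                              # weighted graph Laplacian

u = rng.normal(size=n)
u -= u.mean()                                    # <u, 1> = 0, so u lies in range(Lap)
p_u = np.linalg.lstsq(Lap, u, rcond=None)[0]     # Lap kills constants, so use lstsq

assert np.allclose(Lap @ p_u, u)                 # discrete (D.43)
h1_sq = (Dm @ p_u) @ W @ (Dm @ p_u)              # ||p_u||_{1,mu}^2
assert np.isclose(h1_sq, u @ p_u)                # = <u, p_u> = ||u||_{-1,mu}^2
```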
By Theorem D.28, if ρ, γ ∈ P_2(R^d) have Lebesgue densities, the gradient (in the sense of Definition 9.36) satisfies

  grad_ρ d²(ρ, γ) = −2∇·(ρ∇p_{ρ,γ}).

By Lemma D.33 and (D.45),

(D.46)  ‖grad_ρ d²(ρ, γ)‖²_{-1,ρ} = 4∫_{R^d} |∇p_{ρ,γ}|² dρ = 4d²(ρ, γ).

This identity is the mass transport version of the Euclidean space identity

  |∇_x |x − y|²|² = 4|x − y|²,  x, y ∈ R^d.
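The Euclidean identity can be confirmed in a few lines, both algebraically (∇_x|x − y|² = 2(x − y)) and by finite differences; the points below are arbitrary choices of ours:

```python
# |grad_x |x - y|^2|^2 = 4 |x - y|^2, since grad_x |x - y|^2 = 2(x - y).
x = [1.0, -2.0, 0.5]
y = [0.3, 0.7, -1.1]

def sqdist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

grad = [2 * (xi - yi) for xi, yi in zip(x, y)]
lhs = sum(g * g for g in grad)
rhs = 4 * sqdist(x, y)
assert abs(lhs - rhs) < 1e-12

# finite-difference confirmation of the gradient itself (exact for a quadratic)
h = 1e-6
for i in range(3):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    assert abs((sqdist(xp, y) - sqdist(xm, y)) / (2 * h) - grad[i]) < 1e-6
```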
We use it in a critical way to prove the comparison principle for the nonlinear equation in Example 9.35.

D.6. Fisher information and its properties

Let µ_∞ be defined as in (9.92): µ_∞(dx) = Z^{-1}e^{-2Ψ(x)}dx. The Fisher information for ρ ∈ P(R^d) is

  I(ρ) ≡ ∫_{R^d} |∇(dρ/dµ_∞)|²/(dρ/dµ_∞) dµ_∞,  when ρ(dx) = ρ(x)dx and ∇(dρ/dµ_∞) ∈ L¹_loc(R^d),

and I(ρ) ≡ ∞ otherwise. In the above definition, we use the convention 0/0 = 0 and a/0 = ∞ for a ∈ (0,∞]. The requirement that ∇(dρ/dµ_∞) ∈ L¹_loc(R^d) only ensures that the distributional gradient is given by a function; the integral defining I(ρ) may still be infinite.

Note that if ρ(dx) = ρ(x)dx, then (dρ/dµ_∞)(x) = Zρ(x)e^{2Ψ(x)}. Consequently, by (D.8),

(D.47)  ∇(ρe^{2Ψ}) = e^{2Ψ}∇ρ + 2e^{2Ψ}ρ∇Ψ ∈ D′(R^d),

and the requirement ∇(ρe^{2Ψ}) ∈ L¹_loc(R^d) is equivalent to ∇ρ ∈ L¹_loc(R^d). Therefore, we also have

  I(ρ) ≡ ∫_{R^d} |∇ρ(x) + 2ρ(x)∇Ψ(x)|²/ρ(x) dx,  when ρ(dx) = ρ(x)dx and ∇ρ ∈ L¹_loc(R^d),

and I(ρ) ≡ ∞ otherwise. Let

  I_1(ρ) ≡ 4∫_{R^d} |∇√(dρ/dµ_∞)|² dµ_∞,  when ρ(dx) = ρ(x)dx and ∇√(dρ/dµ_∞) ∈ L¹_loc(R^d),

and I_1(ρ) ≡ ∞ otherwise. Note that by Lemma D.7, I_1(ρ) = I(ρ). As in (D.47),

  ∇(√(ρ(x)) e^{Ψ(x)}) = e^{Ψ(x)}(∇√(ρ(x)) + √(ρ(x))∇Ψ(x)),

so ∇(√ρ e^Ψ) ∈ L¹_loc(R^d) is equivalent to ∇√ρ ∈ L¹_loc(R^d), and

  I_1(ρ) ≡ 4∫_{R^d} |∇√(ρ(x)) + √(ρ(x))∇Ψ(x)|² dx,  when ρ(dx) = ρ(x)dx and ∇√ρ ∈ L¹_loc(R^d),

and I_1(ρ) ≡ ∞ otherwise.

The Fisher information as defined here is related to the functional I_B defined in Section 11.2 and Appendix B. Let Y satisfy

  dY(t) = −2∇Ψ(Y(t))dt + √2 dW(t),
where W is a standard Brownian motion in R^d. Then µ_∞ is the stationary distribution for Y. Let B be the weak infinitesimal generator (in the sense of (11.31)) of the process Y, and let B_0 be the restriction of B to D_d, that is,

  B_0ϕ(x) = ∆ϕ(x) − 2∇Ψ(x)·∇ϕ(x),  ϕ ∈ D_d.
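For the quadratic potential Ψ(x) = x²/2 (our own illustrative choice, not a standing assumption of the text), µ_∞ = N(0, 1/2), Y is an Ornstein–Uhlenbeck process with an explicit Gaussian transition kernel, and for ρ = N(m, 1/2) the Fisher information evaluates in closed form to I(ρ) = 4m². The sketch below checks both this value and the stationarity of µ_∞ by quadrature:

```python
import numpy as np

# Psi(x) = x^2/2, so mu_inf(dx) = e^{-x^2} dx / sqrt(pi) = N(0, 1/2).
dx = 1e-3
x = np.arange(-8, 8, dx)
mu_inf = np.exp(-x**2) / np.sqrt(np.pi)

m = 0.7
rho = np.exp(-(x - m) ** 2) / np.sqrt(np.pi)   # rho = N(m, 1/2)

# I(rho) = int |rho' + 2 rho Psi'|^2 / rho dx; here rho' + 2 rho x = 2 m rho,
# so the integral equals 4 m^2.
drho = np.gradient(rho, dx)
I = np.sum((drho + 2 * rho * x) ** 2 / rho) * dx
assert abs(I - 4 * m ** 2) < 1e-3

# Stationarity of mu_inf: Y(t, x) ~ N(x e^{-2t}, (1 - e^{-4t})/2), so for h = cos,
# T(t)h(x) = exp(-var/2) cos(x e^{-2t}) and int T(t)h dmu_inf = int h dmu_inf.
t = 0.3
var = (1 - np.exp(-4 * t)) / 2
Tth = np.exp(-var / 2) * np.cos(x * np.exp(-2 * t))
assert abs(np.sum(Tth * mu_inf) * dx - np.sum(np.cos(x) * mu_inf) * dx) < 1e-9
```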
With reference to Appendix B, define

  B^{++} = {(ϕ, Bϕ) : ϕ ∈ D(B), inf_y ϕ(y) > 0}

and

(D.48)  I_B(ρ) = −inf_{ϕ∈D^{++}(B)} ∫ (Bϕ/ϕ) dρ = −inf_{(ϕ,ψ)∈B^{++}} ∫ (ψ/ϕ) dρ,

and similarly,

  B_0^{++} = {(ϕ, B_0ϕ) : ϕ ∈ D_d, inf_y ϕ(y) > 0}

and

(D.49)  I_{B_0}(ρ) = −inf_{ϕ∈D_d, inf_y ϕ(y)>0} ∫ (B_0ϕ/ϕ) dρ = −inf_{(ϕ,ψ)∈B_0^{++}} ∫ (ψ/ϕ) dρ.

It follows that I_B and I_{B_0} are convex and lower semicontinuous in the topology of weak convergence, and I_{B_0} ≤ I_B. In fact, we claim

(D.50)  I_B = I_{B_0}.
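The infima in (D.48) and (D.49) are typically probed with exponential test functions ϕ = e^ψ, for which B_0ϕ/ϕ = ∆ψ + |∇ψ|² − 2∇Ψ·∇ψ; this is also how I_2 is obtained later in this section. A one-dimensional finite-difference check of the identity, with our own smooth choices of ψ and Ψ:

```python
import math

# L f = f'' - 2 Psi' f'; for phi = e^psi, L(phi)/phi = psi'' + (psi')^2 - 2 Psi' psi'.
psi = lambda x: math.sin(x) * math.exp(-x * x / 8)
Psi = lambda x: x * x / 2 + math.cos(x) / 4   # sample smooth potential

h = 1e-4
d1 = lambda f, x: (f(x + h) - f(x - h)) / (2 * h)
d2 = lambda f, x: (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)
phi = lambda x: math.exp(psi(x))

for x in (-1.3, 0.0, 0.4, 2.1):
    lhs = (d2(phi, x) - 2 * d1(Psi, x) * d1(phi, x)) / phi(x)
    rhs = d2(psi, x) + d1(psi, x) ** 2 - 2 * d1(Psi, x) * d1(psi, x)
    assert abs(lhs - rhs) < 1e-5
```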
For C ⊂ B(E) × B(E) such that (f, g) ∈ C implies inf_{y∈E} f(y) > 0, define the strictly positive closure of C, spcl(C), to be the smallest subset C′ ⊂ B(E) × B(E) such that {(f_n, g_n)} ⊂ C, inf_{n,y} f_n(y) > 0, sup_{n,y} |g_n(y)| < ∞, f(x) = lim f_n(x), g(x) = lim g_n(x), x ∈ E, implies (f, g) ∈ C′. A simple application of Fatou's lemma implies that if (f, g) ∈ spcl(B_0^{++}), then

  I_{B_0}(ρ) ≥ −∫ (g/f) dρ.

Consequently, to prove (D.50), it is enough to prove that B^{++} ⊂ spcl(B_0^{++}).

Let {T(t) : t ≥ 0} be the semigroup on C_b(R^d) generated by B, and let

  Lf(x) = ∆f − 2∇Ψ·∇f,  f ∈ C²(R^d),

so that B_0 is the restriction of L to D_d. We will also need the formal adjoint

  L*ϕ = ∆ϕ + 2∇·(ϕ∇Ψ),  ϕ ∈ C²(R^d).

Lemma D.35. Let h ∈ C_b(R^d), and define u(t) = T(t)h, t ≥ 0. Then u is a weak (Schwartz distributional) solution of ∂_t u = Lu in the sense that for each 0 ≤ s < t,

(D.51)  ∫_{R^d} u(t,x)ϕ(t,x)dx = ∫_{R^d} u(s,x)ϕ(s,x)dx + ∫_{(s,t]×R^d} u(r,x)(∂_r ϕ(r,x) + L*ϕ(r,x))dx dr

for all ϕ ∈ C_c^∞([s,t] × R^d), and, for λ > 0, f = ∫_0^∞ e^{-λt}T(t)h dt is a weak solution of λf − Lf = h in the sense that

(D.52)  ∫_{R^d} f(λϕ − L*ϕ)dx = ∫_{R^d} hϕ dx,  ϕ ∈ C_c^∞(R^d).
Proof. Let Ψ_k = J_{k^{-1}} ∗ (Ψ ∧ k). Then Ψ_k is bounded and C^∞, and all of its derivatives have compact support. Let

(D.53)  Y_k(t,x) = x + √2 W(t) − ∫_0^t 2∇Ψ_k(Y_k(s,x))ds,

and define u_k(t,x) = E[h(Y_k(t,x))]. The differentiability of Y_k as a function of x (see, for example, Theorem 40, page 254 of Protter [96]) and simple moment estimates for the derivatives imply that u_k is infinitely differentiable in x and satisfies

  u_k(t,x) = h(x) + ∫_0^t L_k u_k(r,x)dr,

where L_k f = ∆f − 2∇Ψ_k·∇f. Integration by parts then gives

  ∫_{R^d} u_k(t,x)ϕ(t,x)dx = ∫_{R^d} u_k(s,x)ϕ(s,x)dx + ∫_{(s,t]×R^d} u_k(r,x)(∂_r ϕ(r,x) + L_k^*ϕ(r,x))dx dr.

As k → ∞, Y_k(·,x) ⇒ Y(·,x), so u_k(t,x) → u(t,x) boundedly and pointwise. Since ∇Ψ_k converges to ∇Ψ uniformly on compact sets and ϕ has compact support, (D.51) follows.

To see that (D.52) holds, let ⟨ϕ,g⟩ = ∫_{R^d} ϕg dx, and observe that

  ⟨λϕ − L*ϕ, f⟩ = ⟨λϕ, f⟩ − ∫_0^∞ e^{-λt}⟨L*ϕ, u(t)⟩dt
               = ⟨λϕ, f⟩ − ∫_0^∞ λe^{-λt} ∫_0^t ⟨L*ϕ, u(r)⟩dr dt
               = ⟨λϕ, f⟩ − ∫_0^∞ λe^{-λt}(⟨ϕ, u(t)⟩ − ⟨ϕ, h⟩)dt
               = ⟨ϕ, h⟩. □

Under our basic assumptions on Ψ, we have the following regularity result for T(t)h and for f in Lemma D.35.

Lemma D.36. Let Ψ ∈ C²(R^d), and suppose that h ∈ C_b(R^d) ∩ C¹(R^d) and ∇h ∈ C_b(R^d). Then ∇T(t)h ∈ C_b(R^d) for t > 0, and for λ > |λ_Ψ|,

(D.54)  ∇∫_0^∞ e^{-λt}T(t)h dt = ∫_0^∞ e^{-λt}∇T(t)h dt ∈ C_b(R^d).

Proof. Let Y(t,x) satisfy

(D.55)  Y(t,x) = x + √2 W(t) − ∫_0^t 2∇Ψ(Y(s,x))ds.

Then, as with Y_k, Y is continuously differentiable in x, with D_{ki}(t,x) ≡ (∂/∂x_k)Y_i(t,x) satisfying

  D_{ki}(t,x) = δ_{ki} − 2∑_{j=1}^d ∫_0^t ∂²_{ij}Ψ(Y(s,x))D_{kj}(s,x)ds.
Let D_k = (D_{k1}, ..., D_{kd}). Then by (D.1),

  |D_k(t)|² ≤ 1 − 2λ_Ψ ∫_0^t |D_k(s)|² ds.

Therefore, for each T > 0,

  sup_{0≤t≤T} |D_k(t)|² ≤ e^{2|λ_Ψ|T}.

Hence

  (∂/∂x_k) T(t)h(x) = E[∇h(Y(t,x))·D_k(t,x)] ∈ C_b(R^d).

Noting that ‖∇T(t)h‖_∞ ≤ ‖∇h‖_∞ e^{|λ_Ψ|t}, we see that (D.54) holds for λ > |λ_Ψ|. □

There are a variety of notions of weak solution employed in the literature on partial differential equations. For elliptic equations like λf − Lf = h, Gilbarg and Trudinger [54], Chapter 8, require

(D.56)  ∫_{R^d} (∇ϕ·∇f + 2ϕ∇Ψ·∇f + λϕf)dx = ∫_{R^d} ϕh dx,  ϕ ∈ C_c^1(R^d).

Under the conditions of Lemma D.36, (D.52) implies (D.56) by first integrating by parts and then extending to C_c^1(R^d) by approximating ϕ ∈ C_c^1(R^d) by J_ε ∗ ϕ.

Lemma D.37. Suppose Ψ ∈ C^∞(R^d), λ > |λ_Ψ|, h ∈ C_b(R^d) ∩ C^∞(R^d), and ∇h ∈ C_b(R^d). Then f = ∫_0^∞ e^{-λt}T(t)h dt ∈ C^∞(R^d).

Proof. For any bounded open subset O ⊂ R^d, the restriction of f to O is in W^{1,2}(O) ≡ H¹(O) and is a weak solution of λf − Lf = h in O. Consequently, the lemma follows by Corollary 8.11 of Gilbarg and Trudinger [54]. □
Lemma D.38. Let λ > 0 and f_n ∈ D(B), n = 1, 2, ..., and define h_n = λf_n − Bf_n. Suppose that h_0 = buc-lim_{n→∞} h_n. Then

  buc-lim_{n→∞} f_n = f_0 ≡ ∫_0^∞ e^{-λt}T(t)h_0 dt.

Proof. Since

  f_n(x) = ∫_0^∞ e^{-λt}E[h_n(Y(t,x))]dt,

we have sup_n ‖f_n‖_∞ ≤ sup_n λ^{-1}‖h_n‖_∞. If x_n → x, then the continuity of x ↦ Y(t,x) and the bounded convergence theorem imply

  f_n(x_n) → ∫_0^∞ e^{-λt}E[h_0(Y(t,x))]dt,

which implies uniform convergence on compact sets. □

Lemma D.39. Suppose Ψ ∈ C^∞(R^d). For each f ∈ D(B) with inf_y f(y) > 0, there exist f_n ∈ C^∞(R^d) ∩ D(B) with ∇f_n ∈ C_b(R^d) such that

  f = buc-lim_{n→∞} f_n,  Bf = buc-lim_{n→∞} Lf_n,

and inf_n inf_y f_n(y) > 0.
Proof. For f ∈ D(B) with a ≡ inf_y f(y) > 0, let λ > |λ_Ψ| ∨ (‖Bf‖_∞/a), and define h = λf − Bf ∈ C_b(R^d). Let ζ ∈ C_c^∞(R^d) be as in (D.11), and define ζ_n(x) = ζ(n^{-1}x). Let

  h_n = ζ_n J_{n^{-1}} ∗ h + (1 − ζ_n) inf_y h(y).

Then inf_y h_n(y) ≥ inf_y h(y) > 0 and

(D.57)  h = buc-lim_{n→∞} h_n.

By Lemma D.37, f_n ≡ ∫_0^∞ e^{-λt}T(t)h_n dt ∈ C^∞(R^d) ∩ D(B). Applying Lemma D.38,

  f = buc-lim_{n→∞} f_n,  Bf = λf − h = buc-lim_{n→∞} (λf_n − h_n) = buc-lim_{n→∞} Lf_n.

Finally, note that

  f_n ≥ inf_y h_n(y)/λ ≥ inf_y h(y)/λ ≥ (λa − ‖Bf‖_∞)/λ > 0. □
The following lemma is an immediate consequence of Lemma D.39.

Lemma D.40. Suppose Ψ ∈ C^∞(R^d). Then

(D.58)  I_B(ρ) = −inf{ ∫_{R^d} (Lf/f) dρ : f ∈ C^∞(R^d) ∩ D(B), ∇f ∈ C_b(R^d), inf_y f(y) > 0 }.

Lemma D.41. Suppose Ψ ∈ C^∞(R^d). Then I_B = I_{B_0}.

Proof. Let f ∈ C^∞(R^d) ∩ D(B), ∇f ∈ C_b(R^d), and a ≡ inf_y f(y) > 0. Define

  f_n(y) = ζ_n(y)f(y) + (1 − ζ_n(y)) sup_z f(z) ∈ D_d.

It follows that inf_y f_n(y) ≥ inf_y f(y) > 0. Since

  ∇ζ_n(x) = n^{-1}ζ′(n^{-1}|x|) x/|x|,  ∆ζ_n(x) = n^{-2}ζ″(n^{-1}|x|) + n^{-1}ζ′(n^{-1}|x|)(d−1)/|x|,

we have

  Lf_n(x) = ζ_n(x)Lf(x) + f(x)Lζ_n(x) + 2∇f(x)·∇ζ_n(x) − sup_z f(z) Lζ_n(x)
         = ζ_n(x)Bf(x) + (f(x) − sup_z f(z))(n^{-2}ζ″(n^{-1}|x|) + n^{-1}ζ′(n^{-1}|x|)(d−1)/|x| − 2n^{-1}ζ′(n^{-1}|x|)(x/|x|)·∇Ψ(x)) + 2n^{-1}ζ′(n^{-1}|x|)(∇f(x)·x)/|x|.

Since

  lim_{|x|→∞} x·∇Ψ(x)/|x|² = ∞

and ζ′ ≤ 0, for n sufficiently large,

  (f − sup_z f(z))(−2n^{-1}ζ′(n^{-1}|x|)(x/|x|)·∇Ψ(x)) ≤ 0.

Therefore,

(D.59)  limsup_{n→∞} sup_{y∈R^d} Lf_n(y) ≤ ‖Bf‖_∞,
and for each compact K, lim_{n→∞} sup_{y∈K} |Lf_n(y) − Bf(y)| = 0. Consequently, B^{++} ⊂ spcl(B_0^{++}), and the lemma follows. □

We now extend the result to Ψ ∈ C².

Lemma D.42. Let Ψ ∈ C¹(R^d). Then for each ε > 0, there exists Ψ_ε ∈ C^∞(R^d) such that

  sup_{x∈R^d} |∇Ψ(x) − ∇Ψ_ε(x)| < ε.

Proof. Let {ψ_n ∈ C^∞(R^d) : n = 0, 1, ...} be a smooth partition of unity, 1 = ∑_{n=0}^∞ ψ_n(x), satisfying 0 ≤ ψ_n ≤ 1 and supp(ψ_n) ⊂ {x ∈ R^d : n − 1 < |x| < n + 1}. Then Ψ(x) = ∑_{n=0}^∞ ψ_n(x)Ψ(x). Define

  Ψ_ε(x) = ∑_{n=0}^∞ J_{ε_n} ∗ (ψ_n Ψ)(x) ∈ C^∞(R^d),

where 0 < ε_n < 1 is chosen so that

(D.60)  ∫_{R^d} J_{ε_n}(y)|∇(ψ_n Ψ)(x − y) − ∇(ψ_n Ψ)(x)|dy < ε/4.

Note that supp(J_{ε_n} ∗ (ψ_n Ψ)) ⊂ {n − 2 < |x| < n + 2}, so for each x ∈ R^d, at most four of the terms in the sum are nonzero. Consequently,

  |∇Ψ_ε(x) − ∇Ψ(x)| ≤ ∑_n ∫_{R^d} J_{ε_n}(y)|∇(ψ_n Ψ)(x − y) − ∇(ψ_n Ψ)(x)|dy < ε. □
Lemma D.43. For Ψ satisfying Condition 13.14, I_B = I_{B_0}.

Proof. We write I_{B,Ψ} and I_{B_0,Ψ} to emphasize the dependence on Ψ. Let Ψ_ε be as in Lemma D.42. Then

  I_{B_0,Ψ}(ρ) = sup_{ϕ∈C_c^∞(R^d)} (2⟨−∆ϕ + 2∇Ψ·∇ϕ, ρ⟩ − ∫_{R^d}|∇ϕ|²dρ)
    ≥ lim_{ε→0+} sup_{ϕ∈C_c^∞(R^d)} (2⟨−∆ϕ + 2∇Ψ_ε·∇ϕ, ρ⟩ − ∫_{R^d}|∇ϕ|²dρ − 4ε√(∫_{R^d}|∇ϕ|²dρ))
    ≥ lim_{ε→0+} sup_{ϕ∈C_c^∞(R^d)} (2⟨−∆ϕ + 2∇Ψ_ε·∇ϕ, ρ⟩ − (1+ε)∫_{R^d}|∇ϕ|²dρ − 4ε)
    = lim_{ε→0+} ((1+ε)^{-1} sup_{ϕ_ε=(1+ε)ϕ, ϕ∈C_c^∞(R^d)} (2⟨−∆ϕ_ε + 2∇Ψ_ε·∇ϕ_ε, ρ⟩ − ∫_{R^d}|∇ϕ_ε|²dρ) − 4ε)
    = lim_{ε→0+} I_{B_0,Ψ_ε}(ρ).

The proof of the opposite inequality is similar, so lim_{ε→0+} I_{B_0,Ψ_ε} = I_{B_0,Ψ}. Since I_{B_0,Ψ_ε} = I_{B,Ψ_ε} by Lemma D.41, we have I_{1,Ψ_ε} = 4I_{B,Ψ_ε} = 4I_{B_0,Ψ_ε}. But by the definition of I_1,

  (√(I_{1,Ψ}(ρ)) − 2ε)² ≤ I_{1,Ψ_ε}(ρ) ≤ (√(I_{1,Ψ}(ρ)) + 2ε)²,

so lim_{ε→0+} I_{1,Ψ_ε}(ρ) = I_{1,Ψ}(ρ). Therefore

  I_{B,Ψ}(ρ) = lim_{ε→0+} I_{B,Ψ_ε}(ρ) = lim_{ε→0+} I_{B_0,Ψ_ε}(ρ) = I_{B_0,Ψ}(ρ). □
If we take ϕ = e^ψ in the definition of I_{B_0}, we have

  I_2(ρ) ≡ 4I_{B_0}(ρ) = −4 inf_{ψ∈C_c^∞(R^d)} ⟨B_0ψ + |∇ψ|², ρ⟩
        = 4 sup_{ψ∈C_c^∞(R^d)} ⟨−∆ψ + 2∇Ψ·∇ψ − |∇ψ|², ρ⟩
        = sup_{p∈C_c^∞(R^d)} (2⟨−∆p + 2∇Ψ·∇p, ρ⟩ − ⟨|∇p|², ρ⟩)
        = ‖∆ρ + 2∇·(ρ∇Ψ)‖²_{-1,ρ},

where p = 2ψ. The last identity follows from the definition of ‖·‖_{-1,ρ} in (D.33). By definition, I_2 ≥ 0, and I_2 is convex and lower semicontinuous in the topology of weak convergence. Since µ_∞ is the stationary distribution for B, I_2(µ_∞) = 0.

We also define

  I_3(ρ) = 2 sup_{ξ∈C_c^∞(R^d)} ⟨−∇·ξ + 2ξ·∇Ψ − (1/2)|ξ|², ρ⟩,

where ξ is an R^d-valued function. Taking ξ = ∇p, we see that I_3 ≥ I_2. In fact, we will show equality for all four expressions.

Lemma D.44. For ρ ∈ P(R^d), I(ρ) = I_1(ρ) = I_2(ρ) = I_3(ρ).

Proof. As already noted, I(ρ) = I_1(ρ) by Lemma D.7. By Theorem 7.44 of Stroock [115],

(D.61)  I_1(ρ) = 4I_B(ρ) = 4I_{B_0}(ρ) = I_2(ρ),
and by Theorem 7.25 of [115], I_1(ρ) < ∞ implies ρ

0 is selected so that ρ_n is a probability density (at least for n sufficiently large) and lim_{n→∞} m_n = ∞. Then ρ_n is bounded and has compact support, and lim_{n→∞} E(ρ_n) = E(ρ). If, in addition, ∇ρ ∈ L¹_loc(R^d), then for m_n → ∞ sufficiently rapidly,

(D.68)  lim_{n→∞} I(ρ_n) = I(ρ).
Proof. Since c_n → 1, the monotone convergence theorem implies

  lim_{n→∞} (1/2)∫_{R^d×R^d} Φ(x − y)ρ_n(dx)ρ_n(dy) = (1/2)∫_{R^d×R^d} Φ(x − y)ρ(dx)ρ(dy).

It follows from (13.72) that ∫ ρ log ρ dx < ∞, {ρ

0), then

(D.83)  R(ρ‖µ_∞) ≤ (1/(2λ_Ψ)) I(ρ).
Proof. (D.80) follows by applying the Cauchy-Schwarz inequality to the right side of (D.75). Similarly, (D.83) follows from (D.82). □

A variant of (D.75) holds even without the assumption I^#(ρ) < ∞. The next proof is a modification of that in Otto [92], where ρ is required to be smooth and have compact support. See also Theorem 5.15 of Villani [126].

Lemma D.53. Suppose that ρ, γ ∈ P_2(R^d) have Lebesgue densities and that

  ∫ γ(y) log γ(y)dy < ∞.

Recalling that p_{ρ,γ} is the difference of two convex functions, let D²_A p_{ρ,γ} denote the Hessian in the Aleksandrov sense, and ∆_A p_{ρ,γ} ≡ tr D²_A p_{ρ,γ} the corresponding Laplacian. Then

(D.84)  ∫_{R^d} γ(y) log γ(y)dy − ∫_{R^d} ρ(x) log ρ(x)dx ≥ ∫_{R^d} ∆_A p_{ρ,γ}(x)ρ(x)dx.

Noting that ∆_A p_{ρ,γ} ≤ d, the right side is well-defined.

Remark D.54. Again, ∆_A p_{ρ,γ} may only be defined on O, the interior of the convex hull of the support of ρ; however, ρ(O) = 1, so the right side of (D.84) is well-defined. See McCann [84] for regularity results regarding D²_A p_{ρ,γ}.
Proof. Let

  φ_t(x) ≡ |x|²/2 − t p_{ρ,γ}(x),  π_t(dx,dy) ≡ ρ(dx)δ_{∇φ_t(x)}(dy).

Then by Proposition 1.3 of [84], ρ_t(dy) ≡ π_t(R^d, dy), 0 ≤ t ≤ 1, defines a measure in P(R^d) with ρ_0 = ρ, ρ_1 = γ. Furthermore, ρ_t(dy) = ρ_t(y)dy has a Lebesgue density.

Let U(r) = r log r for r ≥ 0. By the change of variables formula in Theorem 4.4 of [84], D²_A φ_t(x) is defined, finite, and invertible for x almost everywhere with respect to ρ(x)dx. Moreover,

(D.85)  S(ρ_t) ≡ ∫ U(ρ_t(y))dy = ∫_{R^d} U(ρ(x)/det[D²_A φ_t(x)]) det[D²_A φ_t(x)]dx = ∫_{R^d} ρ(x)(log ρ(x) − log det[D²_A φ_t(x)])dx,

where we allow the possibility ∞ = ∞.

Let T(x) = D²_A p_{ρ,γ}(x). Then T(x) ≤ I_{d×d} (that is, I_{d×d} − T(x) is nonnegative definite) and D²_A φ_t(x) = I_{d×d} − tT(x). By Lemma 5.21 of [126],

  (det[D²_A φ_t(x)])^{1/d} = (det[I_{d×d} − tT(x)])^{1/d} is concave in t ∈ [0,1].

That is, for x almost everywhere with respect to ρ(x)dx, −log(det[D²_A φ_t(x)]) is finite and is convex in t ∈ [0,1]. Consequently,

  h_t(x) ≡ −(1/t) log(det[D²_A φ_t(x)]) = −(1/t)(log(det[D²_A φ_t(x)]) − log(det[D²_A φ_0(x)]))

is nondecreasing in t, and

  −(1/t) log(det[D²_A φ_t(x)]) = h_t(x) ≤ h_1(x) = −log(det[D²_A φ_1(x)]).

Since

  lim_{t→0+} h_t(x) = −(d/dt)|_{t=0} log det[I_{d×d} − tD²_A p_{ρ,γ}(x)] = tr D²_A p_{ρ,γ}(x) = ∆_A p_{ρ,γ}(x),

for 0 < t ≤ 1,

  ∆_A p_{ρ,γ}(x) ≤ h_t(x) ≤ h_1(x),  x-a.e. ρ(dx).

The conclusion (D.84) of the lemma trivially holds if the right side is −∞. Therefore, we may assume that

(D.86)  −∞ < ∫_{R^d} ∆_A p_{ρ,γ}(x)ρ(x)dx ≤ ∫_{R^d} h_t dρ = −∫_{R^d} (1/t) log det[D²_A φ_t(x)] ρ(x)dx.

By (D.85),

  S(ρ_t) = S(ρ) − ∫_{R^d} ρ(x) log det[D²_A φ_t(x)]dx.

Taking t = 1 and noting S(ρ_1) = S(γ) < ∞ by assumption, we have S(ρ) < ∞, and by (D.86),

  S(γ) − S(ρ) = ∫_{R^d} h_1 dρ ≥ ∫_{R^d} ∆_A p_{ρ,γ}(x)ρ(x)dx. □
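The determinant inequality from Lemma 5.21 of [126] that drives the proof, the concavity of t ↦ det(I − tT)^{1/d} for symmetric 0 ≤ T ≤ I, can be spot-checked on random matrices. A sketch (the instance below is randomly generated, not from the text):

```python
import numpy as np

# Concavity of t -> det(I - tT)^{1/d} on [0,1] for symmetric 0 <= T <= I,
# equivalently convexity of t -> -log det(I - tT).
rng = np.random.default_rng(3)
d = 4
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
T = Q @ np.diag(rng.uniform(0.0, 0.95, size=d)) @ Q.T   # eigenvalues in [0, 0.95]

g = lambda t: np.linalg.det(np.eye(d) - t * T) ** (1.0 / d)
ts = np.linspace(0.0, 1.0, 101)
vals = np.array([g(t) for t in ts])

# midpoint concavity on the uniform grid: g((a+b)/2) >= (g(a)+g(b))/2
assert np.all(vals[1:-1] >= (vals[:-2] + vals[2:]) / 2 - 1e-12)
```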
D.8. Miscellaneous

Let f, g ∈ C¹(R). Then it is easy to verify by elementary calculus that f(x) − f(x_0) ≤ g(x) − g(x_0) for all x implies f′(x_0) = g′(x_0). We give the mass transport version of this result next.

Lemma D.55. Let f and g be lower semicontinuous functions on (P_2(R^d), d). Let ρ_0 ∈ P_2(R^d). Suppose that both grad f(ρ_0) and grad g(ρ_0) exist as elements of D′(R^d), and that

  f(ρ) − f(ρ_0) ≤ g(ρ) − g(ρ_0),  ∀ρ ∈ P_2(R^d).

Then grad f(ρ_0) = grad g(ρ_0).

Proof. Let p ∈ C_c^∞(R^d), and define {ρ(t) : −∞ < t < ∞} by

  ρ̇ + ∇·(ρ∇p) = 0,  ρ(0) = ρ_0.

Then

  ⟨grad f(ρ_0), p⟩ = lim_{t→0+} (1/t)(f(ρ(t)) − f(ρ(0))) ≤ lim_{t→0+} (1/t)(g(ρ(t)) − g(ρ(0))) = ⟨grad g(ρ_0), p⟩.

By the arbitrariness of p (the same argument applies with p replaced by −p), the conclusion follows. □

Bibliography

1. Adams, Robert A. Sobolev Spaces. Pure and Applied Mathematics, Vol. 65. Academic Press, New York-London, 1975. D.1
2. Aldous, David. Stopping times and tightness. Ann. Probab. 6 (1978), 335-340. 1.1.1, 4.1
3. Ambrosio, Luigi, Gigli, Nicola and Savaré, Giuseppe. Gradient Flows in Metric Spaces and in the Space of Probability Measures. Lectures in Mathematics ETH Zürich. Birkhäuser Verlag, Basel, 2005. D.3, D.4
4. Anderson, Robert F. and Orey, Steven. Small random perturbation of dynamical systems with reflecting boundary. Nagoya Math. J. 60 (1976), 189-216. 10
5. Attouch, Hédy. Variational Convergence for Functions and Operators. Pitman Advanced Publishing Program, Boston, London, 1984. 7.1, 7.1, 13.1.3, A.5
6. Baldi, Paolo. Large deviations for diffusion processes with homogenization and applications. Ann. Probab. 19 (1991), 509-524. 1.10, 11, 11.5
7. Barles, G. and Perthame, B. Exit time problems in optimal control and vanishing viscosity method. SIAM J. Control Optim. 26 (1988), 1133-1148. 1.5, 6.1
8. Barles, G. and Perthame, B. Comparison results in Dirichlet type first-order Hamilton-Jacobi equations. Appl. Math. Optim. 21 (1990), 21-44. 1.5, 6.2
9. Barles, G. and Souganidis, Panagiotis E. Convergence of approximation schemes for fully nonlinear second order equations. Asymptotic Analysis 4 (1991), 271-283. 6.2, 6.1
10. Bertini, L., Landim, C. and Olla, S. Derivation of Cahn-Hilliard equations from Ginzburg-Landau models. J. Stat. Phys. 88 (1997), 365-381. 1.13
11. Borovkov, Alexandr A. Boundary-value problems for random walks and large deviations in function spaces. Theory Probab. Appl. 12 (1967), 575-595. 1.6, 10, 10.2, 10.2
12. Buck, R. Creighton. Bounded continuous functions on a locally compact space. Michigan Math. J. 5 (1958), 95-104. A.2
13. Chen, Hong and Yao, David D. Fundamentals of Queueing Networks. Springer, New York, 2001. 10.5.1
14. Cooper, J. B. The strict topology and spaces with mixed topologies. Proc. Amer. Math. Soc. 30 (1971), 583-592. A.2
15. Cordero-Erausquin, Dario, Gangbo, Wilfrid and Houdré, Christian. Inequalities for generalized entropy and optimal transportation. Recent Advances in the Theory and Applications of Mass Transport, 73-94, Contemp. Math., 353, American Mathematical Society, Providence, RI, 2004. D.7
16. Cramér, H. Sur un nouveau théorème-limite de la théorie des probabilités. Acta. Sci. et Ind. 736 (1938), 5-23. 1, 1.6
17. Crandall, Michael G., Ishii, Hitoshi and Lions, Pierre-Louis. User's guide to viscosity solutions of second order partial differential equations. Bull. Amer. Math. Soc. (N.S.) 27 (1992), 1-67. 9, 9.1
18. Crandall, Michael G. and Liggett, Thomas M. Generation of semigroups of nonlinear transformations on general Banach spaces. Amer. J. Math. 93 (1971), 265-298. 5.1
19. Crandall, Michael G. and Lions, Pierre-Louis. Viscosity solutions of Hamilton-Jacobi equations. Trans. Amer. Math. Soc. 277 (1983), 1-42. 1.5, 6.2
20. Crandall, Michael G. and Lions, Pierre-Louis. Hamilton-Jacobi equations in infinite dimensions, Part VI: Nonlinear A and Tataru's method refined. Evolution Equations, Control Theory and Biomathematics (Han sur Lesse, 1991), 51-89, Lecture Notes in Pure and Applied Math. 155, Dekker, New York, 1994. 1.14, 1.5, 9
21. Crandall, Michael G. and Lions, Pierre-Louis. Hamilton-Jacobi equations in infinite dimensions. Part I: J. Funct. Anal. 62 (1985), 379-396; Part II: J. Funct. Anal. 65 (1986), 368-405; Part III: J. Funct. Anal. 68 (1986), 214-247; Part IV: J. Funct. Anal. 90 (1990), 237-283; Part V: J. Funct. Anal. 97 (1991), 417-465; Part VII: J. Funct. Anal. 125 (1994), 111-148. 1.14, 1.5, 9
22. Dawson, Donald A. and Gärtner, Jürgen. Large deviations from the McKean-Vlasov limit for weakly interacting diffusions. Stochastics 20 (1987), 247-308. 1.14, 4, 9.35, 13.3
23. Dawson, Donald A. and Gärtner, Jürgen. Large deviations, free energy functional and quasi-potential for a mean field model of interacting diffusions. Memoirs of the American Mathematical Society, Vol. 78, No. 398, March 1989. 1.14
24. de Acosta, Alejandro. Exponential tightness and projective systems in large deviation theory. Festschrift for Lucien Le Cam, 143-156, Springer, New York, 1997. 1, 1.1.1, 3.1, 4, 4.7
25. de Acosta, Alejandro. Large deviations for vector-valued Lévy processes. Stochastic Process. Appl. 51 (1994), 75-115. 1.7, 3.1, 3.4, 10, 10.1.7, 2
26. de Acosta, Alejandro. A general non-convex large deviation result with applications to stochastic equations. Probab. Theory Relat. Fields 118 (2000), 483-521. 1.3.1, 10, 10.11, 3, 10.3.5
27. de Acosta, Alejandro. A general non-convex large deviation result II. Ann. Probab. (to appear). 1.3.1, 10
28. Dellacherie, Claude and Meyer, Paul-André. Probabilities and Potential. North-Holland Mathematics Studies, 29 (translation of Probabilités et Potentiel). North-Holland Publishing, New York, 1978. D.3
29. Dembo, Amir and Zeitouni, Ofer. Large Deviations Techniques and Applications. Jones and Bartlett Publishers, Boston, 1993. 3.1, 3.9, 3.1, 3.1, 10, 10.11, 10.3.5, 12
30. Deuschel, Jean-Dominique and Stroock, Daniel W. Large Deviations. Academic Press, Boston, 1989. 4.1, 11, 12, B, B.2.1, B.2.2
31. Diestel, J. and Uhl, J. J., Jr. The Radon-Nikodym theorem for Banach space valued measures. Rocky Mountain J. Math. 6 (1976), 1-46. 1.14
32. Donsker, Monroe D. and Varadhan, S. R. S. On a variational formula for the principal eigenvalue for operators with maximum principle. Proc. Nat. Acad. Sci. USA 72 (1975), no. 3, 780-783. 1.8, 11
33. Donsker, Monroe D. and Varadhan, S. R. S. Asymptotic evaluation of Markov process expectations for large time, I, II, III. Comm. Pure Appl. Math. 27 (1975), 1-47; 28 (1975), 279-301; 29 (1976), 389-461. 1, 1.11, 12
34. Doss, Halim and Priouret, Pierre. Petites perturbations de systèmes dynamiques avec réflexion. Seminar on Probability, XVII, 353-370, Lecture Notes in Math., 986, Springer, Berlin, 1983. 10
35. Dupuis, Paul and Ellis, Richard S. A Weak Convergence Approach to the Theory of Large Deviations. Wiley, New York, 1997. 1.1.3, 1.3, 4.2, 9.35, 10, 10.6, 10.16, 1, 10.3.5, 12, 13.3.3, B.2.2
36. Ethier, Stewart N. and Kurtz, Thomas G. Markov Processes. John Wiley and Sons, New York, 1986. (document), 21, 1.2.1, 1.9, 3.1, 3.1, 4.1, 4.1, 4.1, 4.1, 4.1, 4.1, 4.1, 4.3, 5.2, 5.2, 5.15, 5.2, 6.14, 7.18, 7.24, 8.6.3.3, 10.3, 10.3, 10.5, 11.2, A.2
37. Evans, Lawrence C. The perturbed test function method for viscosity solutions of nonlinear PDE. Proc. Roy. Soc. Edinburgh Sect. A 111 (1989), 359-375. 1.8
38. Evans, Lawrence C. Partial Differential Equations. Graduate Studies in Mathematics, Vol. 19, American Mathematical Society, Providence, RI, 1998. D.1, D.2
39. Evans, Lawrence C. and Gariepy, Ronald F. Measure Theory and Fine Properties of Functions. Studies in Advanced Mathematics, CRC Press, London, 1992. D.2
40. Evans, Lawrence C. and Ishii, Hitoshi. A PDE approach to some asymptotic problems concerning random differential equations with small noise intensities. Ann. Inst. H. Poincaré Anal. Non Linéaire 2 (1985), 1-20. 1.1.2, 1.5
41. Feng, Jin. Martingale problems for large deviations of Markov processes. Stochastic Process. Appl. 81 (1999), no. 2, 165-216. 1.1.3, 6, 8.6.1.1, 8.6.1.1
42. Feng, Jin. Large deviation for a stochastic Cahn-Hilliard equation. Methods Funct. Anal. Topology 9 (2003), no. 4, 333-356. 1.13
43. Feng, Jin and Katsoulakis, Markos. A Hamilton-Jacobi theory for controlled gradient flows in infinite dimensions. Submitted 2003. 9, 9.4
44. Fleming, Wendell H. Exit probabilities and optimal stochastic control. Appl. Math. Optim. 4 (1978), 329-346. 1.1.2, 1.1.3, 8.6.1.1
45. Fleming, Wendell H. A stochastic control approach to some large deviation problems. Recent Mathematical Methods in Dynamic Programming (Rome, 1984), 52-66, Lecture Notes in Math. 1119, Springer, Berlin-New York, 1985. 1.1.2, 1.1.3, 1.2.2, 8.6.1.1
46. Fleming, Wendell H. and Soner, H. Mete. Asymptotic expansions for Markov processes with Lévy generators. Appl. Math. Optim. 19 (1989), 203-223. 1.1.2
47. Fleming, Wendell H. and Soner, H. Mete. Controlled Markov Processes and Viscosity Solutions. Springer-Verlag, New York, 1991. 6.2, 6.1
48. Fleming, Wendell H. and Souganidis, Panagiotis E. PDE-viscosity solution approach to some problems of large deviations. Ann. Scuola Norm. Sup. Pisa Cl. Sci. 13 (1986), 171-192. 1.1.2, 1.5
49. Freidlin, Mark I. Fluctuations in dynamical systems with averaging. Soviet Math. Dokl. 17 (1976), 104-108. 1.8
50. Freidlin, Mark I. The averaging principle and theorems on large deviations. Russian Math. Surveys 33 (1978), 117-176. 1.8
51. Freidlin, Mark I. and Sowers, Richard B. A comparison of homogenization and large deviations, with applications to wavefront propagation. Stochastic Process. Appl. 82 (1999), 23-52. 11.5
52. Freidlin, Mark I. and Wentzell, Alexander D. Random Perturbations of Dynamical Systems. Second Edition, Springer-Verlag, New York, 1998. 1, 1.4, 1.8, 9.14, 10, 10, 10.16, 11, 11.9, 11.6, 12, B, B.2.1
53. Garcia, Jorge. An extension of the contraction principle. Journal of Theoretical Probability 17 (2004), no. 2, 403-434. 3.1
54. Gilbarg, David and Trudinger, Neil S. Elliptic Partial Differential Equations of Second Order. Classics in Mathematics, Springer-Verlag, New York, 1998 edition. C.4, D.6, D.6
55. Giles, Robin. A generalization of the strict topology. Trans. Amer. Math. Soc. 161 (1971), 467-474. A.2, A.2
56. Guillin, A. Averaging principle of SDE with small diffusion: moderate deviations. Ann. Probab. 31 (2003), 413-443. 11.6
57. Gulinsky, O. V. and Veretennikov, A. Yu. Large Deviations for Discrete-Time Processes with Averaging. VSP, Utrecht, 1993. 10, 12
58. Graham, Carl. McKean-Vlasov Itô-Skorohod equations and nonlinear diffusions with discrete jump sets. Stochastic Process. Appl. 40 (1992), 69-82. 10.3
59. Hoffmann-Jørgensen, J. A generalization of the strict topology. Math. Scand. 30 (1972), 313-323. A.2
60. Ishii, Hitoshi. On uniqueness and existence of viscosity solutions of fully nonlinear second-order elliptic PDE's. Comm. Pure Appl. Math. 42 (1989), 14-45. 6.2
61. Ishii, Hitoshi and Lions, Pierre-Louis. Viscosity solutions of fully nonlinear second-order elliptic partial differential equations. J. Differential Equations 83 (1990), 26-78. 6.2
62. Jakubowski, Adam. On the Skorohod topology. Ann. Inst. H. Poincaré B 22 (1986), 263-285. 4.1
63. Jordan, Richard, Kinderlehrer, David and Otto, Felix. The variational formulation of the Fokker-Planck equation. SIAM J. Math. Anal. 29 (1998), no. 1, 1-17. 9.35, D.4
64. Karlin, Samuel. Positive operators. J. Math. Mech. 8 (1959), 907-937. B.1.1
65. Kelley, John. General Topology. Springer, New York, 1975 (reprint of the 1955 ed. published by Van Nostrand). D.19
66. Khas'minskii, R. Z. A limit theorem for the solutions of differential equations with random right-hand sides. Theory Probab. Appl. 11 (1966), 390-406. 1.8, 11
67. Kontoyiannis, Yannis and Meyn, Sean P. Spectral theory and limit theorems for geometrically ergodic Markov processes. Ann. Appl. Probab. 13 (2003), no. 1, 304-362. 11, 11.7, 11.31
68. Kontoyiannis, Yannis and Meyn, Sean P. Large deviations asymptotics and the spectral theory of multiplicatively regular Markov processes. Preprint, 2003. 11, 11.7, 11.31
69. Kurtz, Thomas G. Extensions of Trotter's operator semigroup approximation theorems. J. Functional Analysis 3 (1969), 354-375. 5.6
70. Kurtz, Thomas G. A general theorem on the convergence of operator semigroups. Trans. Amer. Math. Soc. 148 (1970), 23-32. 5.6
71. Kurtz, Thomas G. A limit theorem for perturbed operator semigroups with applications to random evolutions. J. Funct. Anal. 12 (1973), 55-67. 1.1.1, 1.8, 4.1
402
BIBLIOGRAPHY
72. Kurtz, Thomas G. Convergence of sequences of semigroups of nonlinear operators with an application to gas kinetics. Trans. Amer. Math. Soc. 186 (1973), 259–272. 1.8, 5.1
73. Kurtz, Thomas G. Semigroups of conditioned shifts and approximation of Markov processes. Ann. Probab. 3 (1975), 618–642. 4.1
74. Kurtz, Thomas G. A variational formula for the growth rate of a positive operator semigroup. SIAM J. Math. Anal. 10 (1979), 112–117. 11
75. Kurtz, Thomas G. Martingale problems for controlled processes. Springer Lecture Notes in Control and Information Sciences 91, 75–90, 1987. 8.2
76. Kurtz, Thomas G. and Protter, Philip. Weak convergence of stochastic integrals and differential equations II: Infinite dimensional case. Probabilistic Models for Nonlinear Partial Differential Equations (Montecatini Terme, 1995), 197–285. Lecture Notes in Math. 1627, Springer, Berlin, 1996. 10.3, 13.3.5
77. Kurtz, Thomas G. and Stockbridge, Richard H. Stationary solutions and forward equations for controlled and singular martingale problems. Electron. J. Probab. 6 (2001), no. 15, 52 pp. 13.3.5
78. Kurtz, Thomas G. and Xiong, Jie. Particle representations for a class of nonlinear SPDEs. Stochastic Process. Appl. 83 (1999), no. 1, 103–126. 13.3.5
79. Kushner, Harold J. Approximation and Weak Convergence Methods for Random Processes. MIT Press, Cambridge, MA, 1984. 1.8
80. Lions, Pierre-Louis. Neumann type boundary conditions for Hamilton-Jacobi equations. Duke Math. J. 52 (1985), 793–820. 9.3
81. Lions, Pierre-Louis and Sznitman, Alain-Sol. Stochastic differential equations with reflecting boundary conditions. Comm. Pure Appl. Math. 37 (1984), 511–537. 9.3
82. Lynch, James and Sethuraman, Jayaram. Large deviations for processes with independent increments. Ann. Probab. 15 (1987), 610–627. 1.7, 10, 10.2
83. McCann, Robert J. Existence and uniqueness of monotone measure-preserving maps. Duke Math. J. 80 (1995), no. 2, 309–323.
84. McCann, Robert J. A convexity principle for interacting gases. Advances in Mathematics 128 (1997), 153–179. D.54, D.7
85. Méléard, Sylvie. Asymptotic behaviour of some interacting particle systems; McKean-Vlasov and Boltzmann models. Probabilistic Models for Nonlinear Partial Differential Equations (Montecatini Terme, 1995), 42–95, Lecture Notes in Math. 1627, Springer, Berlin, 1996. 13.3.5
86. Miyadera, Isao. Nonlinear Semigroups. Translations of Mathematical Monographs, 109. AMS, 1991. 5.1, 5.1, 5.1, 5.2, 7.1, A.3
87. Mogulskii, A. A. Large deviations for trajectories of multidimensional random walks. Theory Probab. Appl. 21 (1976), 300–315. 1.6, 10, 10.1.6, 10.2
88. Mogulskii, A. A. Large deviations for processes with independent increments. Ann. Probab. 21 (1993), 202–215. 1.7, 10, 10.2, 10.1.6
89. O'Brien, George L. Sequences of capacities with connections to large-deviation theory. J. Theoret. Probab. 9 (1996), 19–35. 3.1
90. O'Brien, George L. and Vervaat, Wim. Capacities, large deviations and log-log laws. Stable Processes and Related Topics (Ithaca, NY, 1990), 43–83, Progr. Probab., 25, Birkhäuser Boston, Boston, MA, 1991.
91. O'Brien, George L. and Vervaat, Wim. Compactness in the theory of large deviations. Stochastic Process. Appl. 57 (1995), 1–10. 1, 1.1.1, 3.1, 3.1
92. Otto, Felix. The geometry of dissipative evolution equations: the porous medium equation. Comm. Partial Differential Equations 26 (2001), no. 1-2, 101–174. 9.35, D.7
93. Papanicolaou, George C. and Varadhan, S. R. S. A limit theorem with strong mixing in Banach space and two applications to stochastic differential equations. Comm. Pure Appl. Math. 26 (1973), 497–524. 1.8
94. Pinsky, Mark A. Lectures on Random Evolution. World Scientific Publishing Co., Inc., River Edge, NJ, 1991. 11
95. Pinsky, Ross G. Positive Harmonic Functions and Diffusions. Cambridge Studies in Advanced Mathematics, Vol. 45. Cambridge University Press, 1995. 11, 11.33, 11.33
96. Protter, Philip. Stochastic Integration and Differential Equations. A New Approach. Applications of Mathematics, 21. Springer-Verlag, Berlin, 1990. 5.2, D.6
97. Puhalskii, Anatolii. On functional principle of large deviations. New Trends in Probability and Statistics, Vol. 1 (Bakuriani, 1990), 198–219, VSP, Utrecht, 1991. 1, 1.1.1, 3.1, 3.1, 4.2, 4.7
98. Puhalskii, Anatolii. The method of stochastic exponentials for large deviations. Stochastic Process. Appl. 54 (1994), 45–70. 4.4
99. Puhalskii, Anatolii. Large deviations of semimartingales: a maxingale problem approach. I. Limits as solutions to a maxingale problem. Stochastics Stochastics Rep. 61 (1997), 141–243. II. Uniqueness for the maxingale problem. Stochastics Stochastics Rep. 68 (1999), 65–143. 1.3.2
100. Puhalskii, Anatolii. Large Deviations and Idempotent Probability. Chapman & Hall/CRC, New York, 2001. 1.3.2, 4.2, 4.4, 4.7
101. Rachev, Svetlozar T. and Rüschendorf, Ludger. Mass Transportation Problems, Vol. I: Theory. Springer-Verlag, New York, 1998. D.4, D.24
102. Reed, Michael and Simon, Barry. Methods of Modern Mathematical Physics. I. Functional Analysis. Second edition. Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York, 1980. D.1
103. Russell, Raymond. The large deviations of random time-changes. PhD thesis, University of Dublin, 1997. 4.10
104. Rockafellar, R. Tyrrell. Convex Analysis. Princeton University Press, Princeton, NJ, 1970. 10.1.5, 10.3.5, 10.3.5, D.2, D.2, D.2
105. Sato, K. On the generators of non-negative contraction semigroups in Banach lattices. J. Math. Soc. Japan 20 (1968), no. 3, 423–436. A.3
106. Schied, Alexander. Criteria for exponential tightness in path spaces. Unpublished preprint, 1995. 4.1
107. Schilder, Michael. Some asymptotic formulae for Wiener integrals. Trans. Amer. Math. Soc. 125 (1966), 63–85.
108. Sheu, Shuenn-Jyi. Stochastic control and exit probabilities of jump processes. SIAM J. Control Optim. 23 (1985), 306–328. 1.1.3, 8.6.1.1
109. Sheu, Shuenn-Jyi. Stochastic control and exit probabilities of jump processes. Springer Lecture Notes in Control and Information Sciences 91, 75–90, 1987. 1.1.3, 8.6.1.1
110. Sinestrari, Eugenio. Accretive differential operators. Bollettino U.M.I. 13-B (5), 1976, 19–31. A.3
111. Sion, Maurice. On general minimax theorems. Pacific J. Math. 8 (1958), 171–176. 11.1.4, 11.2.4, B.2
112. Sowers, Richard. Large deviations for a reaction-diffusion equation with non-Gaussian perturbations. Ann. Probab. 20 (1992), 504–537. 1.12
113. Spohn, H. Large Scale Dynamics of Interacting Particles. Texts and Monographs in Physics, Springer, 1991. 1.12, 1.13
114. Stroock, Daniel W. Diffusion processes associated with Lévy generators. Z. Wahrsch. Verw. Gebiete 32 (1975), 209–244. 10.3
115. Stroock, Daniel W. An Introduction to the Theory of Large Deviations. Springer-Verlag, Berlin, 1984. 11.6.3, B.14, D.6, D.6
116. Tataru, Daniel. Viscosity solutions of Hamilton-Jacobi equations with unbounded nonlinear terms. Journal of Mathematical Analysis and Applications 163 (1992), 345–392. 1.14, 1.5, 9
117. Trotter, H. F. Approximation of semigroups of operators. Pacific J. Math. 8 (1958), 887–919. 5.6
118. Varadhan, S. R. S. Asymptotic probabilities and differential equations. Comm. Pure Appl. Math. 19 (1966), 261–286. 1, 10
119. Veretennikov, A. Yu. On large deviations in averaging principle for stochastic differential equations with periodic coefficients. I. Probability Theory and Mathematical Statistics, Vol. II (Vilnius, 1989), 542–551, "Mokslas", Vilnius, 1990.
120. Veretennikov, A. Yu. On large deviations in the averaging principle for stochastic differential equations with periodic coefficients. II. Math. USSR-Izv. 39 (1992), no. 1, 677–701.
121. Veretennikov, A. Yu. On large deviations in the averaging principle for stochastic difference equations on a torus. Proc. Steklov Inst. Math. 202 (1994), 27–33.
122. Veretennikov, A. Yu. On large deviations in averaging principle for systems of stochastic differential equations with unbounded coefficients. Probability Theory and Mathematical Statistics (Vilnius, 1993), 735–742, TEV, Vilnius, 1994.
123. Veretennikov, A. Yu. On large deviations in the averaging principle for Markov processes (compact case, discrete time). Probability Theory and Mathematical Statistics (St. Petersburg, 1993), 235–240, Gordon and Breach, Amsterdam, 1996.
124. Veretennikov, A. Yu. On large deviations for stochastic differential equations with small diffusion and averaging. Theory Probab. Appl. 43 (1998), 335–337.
125. Veretennikov, A. Yu. On large deviations for SDEs with small diffusion and averaging. Stochastic Process. Appl. 89 (2000), 69–79. 11, 11.6, 11.55
126. Villani, Cédric. Topics in Optimal Transportation. Graduate Studies in Mathematics, Vol. 58. American Mathematical Society, Providence, RI, 2003. 1.14, D, D.2, D.3, D.3, D.4, D.4, D.4, D.27, D.4, D.7, D.7, D.7
127. Wentzell, Alexander D. Rough limit theorems on large deviations for Markov stochastic processes. I, II, III. Theory Probab. Appl. 21 (1976), 227–242; 24 (1979), 675–692; 27 (1982), 215–234. 1.5
128. Wentzell, Alexander D. Limit Theorems on Large Deviations for Markov Stochastic Processes. Kluwer, Dordrecht, 1990. 1.5
129. Wheeler, Robert F. A survey of Baire measures and strict topologies. Exposition. Math. 1 (1983), 97–190. A.2