Asymptotic Analysis of Random Walks

This book focuses on the asymptotic behaviour of the probabilities of large deviations of the trajectories of random walks with 'heavy-tailed' (in particular, regularly varying, sub- and semiexponential) jump distributions. Large deviation probabilities are of great interest in numerous applied areas, typical examples being ruin probabilities in risk theory, error probabilities in mathematical statistics and buffer-overflow probabilities in queueing theory. The classical large deviation theory, developed for distributions decaying exponentially fast (or even faster) at infinity, mostly uses analytical methods. If the fast decay condition fails, which is the case in many important applied problems, then direct probabilistic methods usually prove to be efficient. This monograph presents a unified and systematic exposition of large deviation theory for heavy-tailed random walks. Most of the results presented in the book appear in a monograph for the first time. Many of them were obtained by the authors.

Professor Alexander Borovkov works at the Sobolev Institute of Mathematics in Novosibirsk.

Professor Konstantin Borovkov is a staff member in the Department of Mathematics and Statistics at the University of Melbourne.
ENCYCLOPEDIA OF MATHEMATICS AND ITS APPLICATIONS

All the titles listed below can be obtained from good booksellers or from Cambridge University Press. For a complete series listing visit http://www.cambridge.org/uk/series/sSeries.asp?code=EOM

60 J. Krajicek Bounded Arithmetic, Propositional Logic, and Complexity Theory
61 H. Groemer Geometric Applications of Fourier Series and Spherical Harmonics
62 H.O. Fattorini Infinite Dimensional Optimization and Control Theory
63 A.C. Thompson Minkowski Geometry
64 R.B. Bapat and T.E.S. Raghavan Nonnegative Matrices with Applications
65 K. Engel Sperner Theory
66 D. Cvetkovic, P. Rowlinson and S. Simic Eigenspaces of Graphs
67 F. Bergeron, G. Labelle and P. Leroux Combinatorial Species and Tree-Like Structures
68 R. Goodman and N. Wallach Representations and Invariants of the Classical Groups
69 T. Beth, D. Jungnickel and H. Lenz Design Theory I, 2nd edn
70 A. Pietsch and J. Wenzel Orthonormal Systems for Banach Space Geometry
71 G.E. Andrews, R. Askey and R. Roy Special Functions
72 R. Ticciati Quantum Field Theory for Mathematicians
73 M. Stern Semimodular Lattices
74 I. Lasiecka and R. Triggiani Control Theory for Partial Differential Equations I
75 I. Lasiecka and R. Triggiani Control Theory for Partial Differential Equations II
76 A.A. Ivanov Geometry of Sporadic Groups I
77 A. Schinzel Polynomials with Special Regard to Reducibility
78 H. Lenz, T. Beth and D. Jungnickel Design Theory II, 2nd edn
79 T. Palmer Banach Algebras and the General Theory of *-Algebras II
80 O. Stormark Lie's Structural Approach to PDE Systems
81 C.F. Dunkl and Y. Xu Orthogonal Polynomials of Several Variables
82 J.P. Mayberry The Foundations of Mathematics in the Theory of Sets
83 C. Foias, O. Manley, R. Rosa and R. Temam Navier–Stokes Equations and Turbulence
84 B. Polster and G. Steinke Geometries on Surfaces
85 R.B. Paris and D. Kaminski Asymptotics and Mellin–Barnes Integrals
86 R. McEliece The Theory of Information and Coding, 2nd edn
87 B. Magurn Algebraic Introduction to K-Theory
88 T. Mora Solving Polynomial Equation Systems I
89 K. Bichteler Stochastic Integration with Jumps
90 M. Lothaire Algebraic Combinatorics on Words
91 A.A. Ivanov and S.V. Shpectorov Geometry of Sporadic Groups II
92 P. McMullen and E. Schulte Abstract Regular Polytopes
93 G. Gierz et al. Continuous Lattices and Domains
94 S. Finch Mathematical Constants
95 Y. Jabri The Mountain Pass Theorem
96 G. Gasper and M. Rahman Basic Hypergeometric Series, 2nd edn
97 M.C. Pedicchio and W. Tholen (eds.) Categorical Foundations
98 M.E.H. Ismail Classical and Quantum Orthogonal Polynomials in One Variable
99 T. Mora Solving Polynomial Equation Systems II
100 E. Olivieri and M. Eulália Vares Large Deviations and Metastability
101 A. Kushner, V. Lychagin and V. Rubtsov Contact Geometry and Nonlinear Differential Equations
102 L.W. Beineke, R.J. Wilson and P.J. Cameron (eds.) Topics in Algebraic Graph Theory
103 O. Staffans Well-Posed Linear Systems
104 J.M. Lewis, S. Lakshmivarahan and S. Dhall Dynamic Data Assimilation
105 M. Lothaire Applied Combinatorics on Words
106 A. Markoe Analytic Tomography
107 P.A. Martin Multiple Scattering
108 R.A. Brualdi Combinatorial Matrix Classes
110 M.-J. Lai and L.L. Schumaker Spline Functions on Triangulations
111 R.T. Curtis Symmetric Generation of Groups
112 H. Salzmann, T. Grundhöfer, H. Hähl and R. Löwen The Classical Fields
113 S. Peszat and J. Zabczyk Stochastic Partial Differential Equations with Lévy Noise
114 J. Beck Combinatorial Games
Asymptotic Analysis of Random Walks
Heavy-Tailed Distributions

A.A. BOROVKOV
K.A. BOROVKOV

Translated by O.B. BOROVKOVA
CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi

Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org

© A.A. Borovkov and K.A. Borovkov 2008
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2008

Printed in the United States of America

A catalogue record for this publication is available from the British Library

ISBN 978-0-521-88117-3 hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
Contents

Notation
Introduction

1 Preliminaries
  1.1 Regularly varying functions and their main properties
  1.2 Subexponential distributions
  1.3 Locally subexponential distributions
  1.4 Asymptotic properties of 'functions of distributions'
  1.5 The convergence of distributions of sums of random variables with regularly varying tails to stable laws
  1.6 Functional limit theorems

2 Random walks with jumps having no finite first moment
  2.1 Introduction. The main approach to bounding from above the distribution tails of the maxima of sums of random variables
  2.2 Upper bounds for the distribution of the maximum of sums when $\alpha \le 1$ and the left tail is arbitrary
  2.3 Upper bounds for the distribution of the sum of random variables when the left tail dominates the right tail
  2.4 Upper bounds for the distribution of the maximum of sums when the left tail is substantially heavier than the right tail
  2.5 Lower bounds for the distributions of the sums. Finiteness criteria for the maximum of the sums
  2.6 The asymptotic behaviour of the probabilities $\mathbf P(S_n \ge x)$
  2.7 The asymptotic behaviour of the probabilities $\mathbf P(\overline S_n \ge x)$

3 Random walks with jumps having finite mean and infinite variance
  3.1 Upper bounds for the distribution of $\overline S_n$
  3.2 Upper bounds for the distribution of $\overline S_n(a)$, $a > 0$
  3.3 Lower bounds for the distribution of $S_n$
  3.4 Asymptotics of $\mathbf P(S_n \ge x)$ and its refinements
  3.5 Asymptotics of $\mathbf P(\overline S_n \ge x)$ and its refinements
  3.6 The asymptotics of $\mathbf P(S(a) \ge x)$ with refinements and the general boundary problem
  3.7 Integro-local theorems on large deviations of $S_n$ for index $-\alpha$, $\alpha \in (0, 2)$
  3.8 Uniform relative convergence to a stable law
  3.9 Analogues of the law of the iterated logarithm in the case of infinite variance

4 Random walks with jumps having finite variance
  4.1 Upper bounds for the distribution of $\overline S_n$
  4.2 Upper bounds for the distribution of $\overline S_n(a)$, $a > 0$
  4.3 Lower bounds for the distributions of $S_n$ and $\overline S_n(a)$
  4.4 Asymptotics of $\mathbf P(S_n \ge x)$ and its refinements
  4.5 Asymptotics of $\mathbf P(\overline S_n \ge x)$ and its refinements
  4.6 Asymptotics of $\mathbf P(S(a) \ge x)$ and its refinements. The general boundary problem
  4.7 Integro-local theorems for the sums $S_n$
  4.8 Extension of results on the asymptotics of $\mathbf P(S_n \ge x)$ and $\mathbf P(\overline S_n \ge x)$ to wider classes of jump distributions
  4.9 The distribution of the trajectory $\{S_k\}$ given that $S_n \ge x$ or $\overline S_n \ge x$

5 Random walks with semiexponential jump distributions
  5.1 Introduction
  5.2 Bounds for the distributions of $S_n$ and $\overline S_n$, and their consequences
  5.3 Bounds for the distribution of $\overline S_n(a)$
  5.4 Large deviations of the sums $S_n$
  5.5 Large deviations of the maxima $\overline S_n$
  5.6 Large deviations of $\overline S_n(a)$ when $a > 0$
  5.7 Large deviations of $\overline S_n(-a)$ when $a > 0$
  5.8 Integro-local and integral theorems on the whole real line
  5.9 Additivity (subexponentiality) zones for various distribution classes

6 Large deviations on the boundary of and outside the Cramér zone for random walks with jump distributions decaying exponentially fast
  6.1 Introduction. The main method of studying large deviations when Cramér's condition holds. Applicability bounds
  6.2 Integro-local theorems for sums $S_n$ of r.v.'s with distributions from the class ER when the function $V(t)$ is of index from the interval $(-1, -3)$
  6.3 Integro-local theorems for the sums $S_n$ when the Cramér transform for the summands has a finite variance at the right boundary point
  6.4 The conditional distribution of the trajectory $\{S_k\}$ given $S_n \in \Delta[x)$
  6.5 Asymptotics of the probability of the crossing of a remote boundary by the random walk

7 Asymptotic properties of functions of regularly varying and semiexponential distributions. Asymptotics of the distributions of stopped sums and their maxima. An alternative approach to studying the asymptotics of $\mathbf P(S_n \ge x)$
  7.1 Functions of regularly varying distributions
  7.2 Functions of semiexponential distributions
  7.3 Functions of distributions interpreted as the distributions of stopped sums. Asymptotics for the maxima of stopped sums
  7.4 Sums stopped at an arbitrary Markov time
  7.5 An alternative approach to studying the asymptotics of $\mathbf P(S_n \ge x)$ for sub- and semiexponential distributions of the summands
  7.6 A Poissonian representation for the supremum $S$ and the time when it was attained

8 On the asymptotics of the first hitting times
  8.1 Introduction
  8.2 A fixed level $x$
  8.3 A growing level $x$

9 Integro-local and integral large deviation theorems for sums of random vectors
  9.1 Introduction
  9.2 Integro-local large deviation theorems for sums of independent random vectors with regularly varying distributions
  9.3 Integral theorems

10 Large deviations in trajectory space
  10.1 Introduction
  10.2 One-sided large deviations in trajectory space
  10.3 The general case

11 Large deviations of sums of random variables of two types
  11.1 The formulation of the problem for sums of random variables of two types
  11.2 Asymptotics of $P(m, n, x)$ related to the class of regularly varying distributions
  11.3 Asymptotics of $P(m, n, x)$ related to semiexponential distributions

12 Random walks with non-identically distributed jumps in the triangular array scheme in the case of infinite second moment. Transient phenomena
  12.1 Upper and lower bounds for the distributions of $\overline S_n$ and $S_n$
  12.2 Asymptotics of the crossing of an arbitrary remote boundary
  12.3 Asymptotics of the probability of the crossing of an arbitrary remote boundary on an unbounded time interval. Bounds for the first crossing time
  12.4 Convergence in the triangular array scheme of random walks with non-identically distributed jumps to stable processes
  12.5 Transient phenomena

13 Random walks with non-identically distributed jumps in the triangular array scheme in the case of finite variances
  13.1 Upper and lower bounds for the distributions of $\overline S_n$ and $S_n$
  13.2 Asymptotics of the probability of the crossing of an arbitrary remote boundary
  13.3 The invariance principle. Transient phenomena

14 Random walks with dependent jumps
  14.1 The classes of random walks with dependent jumps that admit asymptotic analysis
  14.2 Martingales on countable Markov chains. The main results of the asymptotic analysis when the jump variances can be infinite
  14.3 Martingales on countable Markov chains. The main results of the asymptotic analysis in the case of finite variances
  14.4 Arbitrary random walks on countable Markov chains

15 Extension of the results of Chapters 2–5 to continuous-time random processes with independent increments
  15.1 Introduction
  15.2 The first approach, based on using the closeness of the trajectories of processes in discrete and continuous time
  15.3 The construction of a full analogue of the asymptotic analysis from Chapters 2–5 for random processes with independent increments

16 Extension of the results of Chapters 3 and 4 to generalized renewal processes
  16.1 Introduction
  16.2 Large deviation probabilities for $S(T)$ and $\overline S(T)$
  16.3 Asymptotic expansions
  16.4 The crossing of arbitrary boundaries
  16.5 The case of linear boundaries

Bibliographic notes
References
Index
Notation

This list includes only the notation used systematically throughout the book.

Random variables and events

$\xi_1, \xi_2, \ldots$ are independent random variables (r.v.'s), assumed to be identically distributed in Chapters 1–11 and 16 (in which case $\xi_j \stackrel{d}{=} \xi$)
$\xi(a) = \xi - a$
$\xi^{\langle y \rangle}$ is the r.v. $\xi$ 'truncated' at the level $y$: $\mathbf P(\xi^{\langle y \rangle} < t) = \mathbf P(\xi < t)/\mathbf P(\xi < y)$, $t \le y$
$\xi^{(\lambda)}$ is an r.v. with distribution $\mathbf P(\xi^{(\lambda)} \in dt) = (e^{\lambda t}/\varphi(\lambda))\,\mathbf P(\xi \in dt)$ (the Cramér transform of $\xi$)
$\overline\xi_n = \max_{k \le n} \xi_k$
$S_n = \sum_{j=1}^{n} \xi_j$
$\overline S_n = \max_{k \le n} S_k$
$S = \sup_{k \ge 0} S_k$
$\underline S_n = \min_{k \le n} S_k$
$\max_{k \le n} |S_k| \equiv \max\{\overline S_n, |\underline S_n|\}$ is the maximum modulus of the partial sums
$S_n(a) = \sum_{i=1}^{n} \xi_i(a) \equiv S_n - an$
$\overline S_n(a) = \max_{k \le n} S_k(a) = \max_{k \le n}(S_k - ak)$
$S(a) = \sup_{k \ge 0} S_k(a) = \sup_{k \ge 0}(S_k - ak)$
$S_n^{\langle y \rangle} = \sum_{j=1}^{n} \xi_j^{\langle y \rangle}$
$S_n^{(\lambda)} = \sum_{j=1}^{n} \xi_j^{(\lambda)}$
$\tau_1, \tau_2, \ldots$ are independent identically distributed r.v.'s ($\tau_j > 0$ in Chapter 16), $\tau \stackrel{d}{=} \tau_j$
$t_k = \sum_{j=1}^{k} \tau_j$
$G_n$ is one of the events $\{S_n \ge x\}$, $\{\overline S_n \ge x\}$ or $\{\max_{k \le n}(S_k - g(k)) \ge 0\}$; in Chapters 15 and 16, $G_T = \{\sup_{t \le T}(S(t) - g(t)) \ge 0\}$
$\mathbf 1(A)$ is the indicator of the event $A$
$\stackrel{d}{=}$, $\stackrel{d}{\le}$, $\stackrel{d}{\ge}$ are equality and inequalities between r.v.'s in distribution
$\Rightarrow$ is used to denote convergence of r.v.'s in distribution

Distributions and their characteristics

The notation ζ ⊂= G means that the r.v. ζ has the distribution G
The notation ζₙ ⊂=⇒ G means that the distributions of the r.v.'s ζₙ converge weakly to the distribution G (as n → ∞)
$F_j$ is the distribution of $\xi_j$ ($F_j = F$ in Chapters 1–11, 16)
$F_+(t) = F([t, \infty)) \equiv \mathbf P(\xi \ge t)$, $F_{j,+}(t) = F_j([t, \infty))$
$F_-(t) = F((-\infty, -t)) \equiv \mathbf P(\xi < -t)$, $F_{j,-}(t) = F_j((-\infty, -t))$
$F(t) = F_-(t) + F_+(t)$, $F_j(t) = F_{j,-}(t) + F_{j,+}(t)$
$F_I(t) = \int_0^t F(u)\,du$, $F^I(t) = \int_t^\infty F(u)\,du$
$V(t)$, $W(t)$, $U(t)$ are regularly varying functions (r.v.f.'s) (in Chapters 1–4): $V(t) = t^{-\alpha}L(t)$, $\alpha > 0$; $W(t) = t^{-\beta}L_W(t)$, $\beta > 0$; $U(t) = t^{-\gamma}L_U(t)$, $\gamma > 0$
$L(t)$, $L_W(t)$, $L_U(t)$, $L_Y(t)$ are slowly varying functions (s.v.f.'s) corresponding to $V$, $W$, $U$, $Y$
$V(t) = e^{-l(t)}$, $l(t) = t^\alpha L(t)$, $\alpha \in (0, 1)$, $L(t)$ an s.v.f. (in Chapter 5); $l(t) = t^\alpha L(t)$ is the exponent of a semiexponential distribution
$\widehat V(t) = \max\{V(t), W(t)\}$
$F_\tau$ is the distribution of $\tau$
$G$ is the distribution of $\zeta$
$\alpha$, $\beta$ are the exponents of the right and left regularly varying distribution tails of $\xi$ respectively, or those of their regularly varying majorants (or minorants)
$\widehat\alpha = \max\{\alpha, \beta\}$, $\check\alpha = \min\{\alpha, \beta\}$
$d = \operatorname{Var}\xi = \mathbf E(\xi - \mathbf E\xi)^2$
$f(\lambda) = \mathbf E e^{i\lambda\xi}$ is the characteristic function (ch.f.) of $\xi$
$g(\lambda) = \mathbf E e^{i\lambda\zeta}$ is the ch.f. of $\zeta$
$\varphi(\lambda) = \mathbf E e^{\lambda\xi}$ is the moment generating function of $\xi$
$(\alpha, \rho)$ are the parameters of the limiting stable law
$F_{\alpha,\rho}$ is the (standard) stable distribution with parameters $(\alpha, \rho)$
$F_{\alpha,\rho,+}(t) = F_{\alpha,\rho}([t, \infty))$, $F_{\alpha,\rho,-}(t) = F_{\alpha,\rho}((-\infty, -t))$, $t > 0$
$F_{\alpha,\rho}(t) = F_{\alpha,\rho,+}(t) + F_{\alpha,\rho,-}(t)$, $t > 0$
$\Phi$ is the standard normal distribution, $\Phi(t)$ the standard normal distribution function

Conditions on distributions

$[\,\cdot\,, =] \Leftrightarrow F_+(t) = V(t)$, $t > 0$
$[\,\cdot\,, <] \Leftrightarrow F_+(t) \le V(t)$, $t > 0$
$[\,\cdot\,, >] \Leftrightarrow F_+(t) \ge V(t)$, $t > 0$
$[=, \,\cdot\,] \Leftrightarrow F_-(t) = W(t)$, $t > 0$
$[<, \,\cdot\,] \Leftrightarrow F_-(t) \le W(t)$, $t > 0$
$[>, \,\cdot\,] \Leftrightarrow F_-(t) \ge W(t)$, $t > 0$
$[=, =] \Leftrightarrow F_+(t) = V(t)$, $F_-(t) = W(t)$, $t > 0$
$[<, >] \Leftrightarrow F_+(t) \ge V(t)$, $F_-(t) \le W(t)$, $t > 0$
$[\mathbf R_{\alpha,\rho}]$ means that $F(t) = t^{-\alpha}L_F(t)$, $\alpha \in (0, 2]$, where $L_F(t)$ is an s.v.f. and there exists the limit
$$\lim_{t\to\infty} \frac{F_+(t)}{F(t)} =: \rho_+ = \frac{1}{2}(\rho + 1) \in [0, 1]$$
$[\mathbf D_{(h,q)}]$, $h \in (0, 2]$, are conditions on the smoothness of $F(t)$ at infinity; see § 3.4
$[\mathbf D_{(k,q)}]$, $k = 1, 2, \ldots$, are generalized conditions of the differentiability of $F(t)$ at infinity; see § 4.4

Introduction

… This is done for the sake of completeness of exposition, and also to ascertain that, in a number of cases, studying 'light-tailed' random walks can be reduced to the respective problems for heavy-tailed distributions considered in Chapters 2–5. In § 6.1 we describe the main method for studying large deviation probabilities when Cramér's condition holds (the method is based on the Cramér transform and the integro-local Gnedenko–Stone–Shepp theorems) and also ascertain its applicability bounds. In §§ 6.2 and 6.3 we study integro-local theorems for sums of r.v.'s with light tails of the form
$$\mathbf P(\xi \ge t) = e^{-\lambda_+ t}\, V(t), \qquad 0 < \lambda_+ < \infty,$$
where $V(t)$ is a function regularly varying as $t \to \infty$. In a number of cases the methods presented in § 6.1 do not work for such distributions, but one can achieve success using the results of Chapters 3 and 4. In § 6.2 we consider the case when the index of the function $V(t)$ belongs to the interval $(-1, -3)$ (in this case one uses the results of Chapter 3); in § 6.3, we take the index of $V(t)$ to be less than $-3$ (in this case one needs the results of Chapter 4). In § 6.4 we consider large deviations in more general boundary problems. However, here the exposition has to be restricted to several special types of boundary $\{g(k)\}$, as the nature of the boundary-crossing probabilities turns out to be quite complicated and sensitive to the particular form of the boundary.

Chapters 7–16 are devoted to some more specialized aspects of the theory of random walks and also to some generalizations of the results of Chapters 2–5 and their extensions to continuous-time processes. In Chapter 7 we continue the study of functions of subexponential distributions that we began in § 1.4. Now, for narrower classes of regularly varying and semiexponential distributions, we obtain wider conditions enabling one to find the desired asymptotics (§§ 7.1 and 7.2). In § 7.3 we apply the obtained results to study the asymptotics of the distributions of stopped sums and their maxima, i.e. the asymptotics of $\mathbf P(S_\tau \ge x)$ and $\mathbf P(\overline S_\tau \ge x)$,
where the r.v. $\tau$ is either independent of $\{S_k\}$ or is a stopping time for that sequence (§§ 7.3 and 7.4). In § 7.5 we discuss an alternative approach (to that presented in Chapters 3–5) to studying the asymptotics of $\mathbf P(\overline S_\infty \ge x)$ in the case of subexponential distributions of the summands $\xi_k$ with $\mathbf E\xi_k < 0$. The approach is based on factorization identities and the results of § 1.3. Here we also obtain integro-local theorems and asymptotic expansions in the integral theorem under minimal conditions (in Chapters 3–5 the conditions were excessive, as there we also included the case $n < \infty$).
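The leading asymptotics in these chapters come from the 'single big jump' principle: for subexponential jump distributions, $\mathbf P(S_n \ge x) \sim n F_+(x)$ as $x \to \infty$ for fixed $n$. This is easy to probe numerically. A minimal Monte Carlo sketch (not from the book; the Pareto tail $\mathbf P(\xi \ge t) = t^{-\alpha}$ with $\alpha = 1.5$ and all numerical values are illustrative assumptions):

```python
import random

random.seed(1)

ALPHA = 1.5      # tail index: P(xi >= t) = t^(-ALPHA) for t >= 1 (Pareto)
N = 10           # number of summands
X = 500.0        # deviation level
TRIALS = 200_000

def pareto():
    # Inverse-transform sampling: if U ~ Uniform(0,1), then U^(-1/ALPHA)
    # has tail P(xi >= t) = t^(-ALPHA) for t >= 1.
    return random.random() ** (-1.0 / ALPHA)

hits = sum(sum(pareto() for _ in range(N)) >= X for _ in range(TRIALS))
empirical = hits / TRIALS
predicted = N * X ** (-ALPHA)   # n * F_+(x): the one-big-jump asymptotics

print(f"empirical P(S_n >= x) ~ {empirical:.6f}")
print(f"n * F_+(x)            = {predicted:.6f}")
```

The empirical frequency should sit close to (and slightly above) $n F_+(x)$, the excess reflecting the lower-order terms that the refinements in Chapters 3–5 quantify.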
Chapter 8 is devoted to a systematic study of the asymptotics of the first hitting time distribution, i.e. of the probabilities $\mathbf P(\eta_+(x) \ge n)$ as $n \to \infty$, where $\eta_+(x) := \min\{k \ge 1: S_k \ge x\}$, and also of similar problems for $\eta_-(x) := \min\{k \ge 1: S_k \le -x\}$. We classify the results according to the following three main criteria: (1) the value of $x$, distinguishing between the three cases $x = 0$, $x > 0$ fixed and $x \to \infty$; (2) the drift direction (the value of the expectation $\mathbf E\xi$, if it exists); (3) the properties of the distribution of $\xi$. In § 8.2 we consider the case of a fixed level $x$ (usually $x = 0$) and different combinations of criteria (2) and (3). In § 8.3 we study the case when $x \to \infty$ together with $n$, again with different combinations of criteria (2) and (3).

In Chapter 9 the results of Chapters 3 and 4 are extended to the multivariate case. Our attention is given mainly to integro-local theorems, i.e. to studying the asymptotics of $\mathbf P(S_n \in \Delta[x))$, where $S_n = \sum_{j=1}^{n} \xi_j$ is the sum of $d$-dimensional i.i.d. random vectors and $\Delta[x) := \{y \in \mathbb R^d: x_i \le y_i < x_i + \Delta,\ i = 1, \ldots, d\}$ is a cube with edge length $\Delta$ and a vertex at the point $x = (x_1, \ldots, x_d)$. The reason is that in the multivariate case, the language and approach of integro-local theorems prove to be the most natural. Integral theorems are more difficult to prove directly and can easily be derived from the corresponding integro-local theorems. Another difficulty arising when one is studying the probabilities of large deviations of the sums $S_n$ of 'heavy-tailed' random vectors $\xi_k$ consists in defining and classifying the very concept of a heavy-tailed multivariate distribution. In § 9.1, examples are given in which the main contribution to the probability for $S_n$ to hit a remote cube $\Delta[x)$ comes not from trajectories with one large jump (as in the univariate case) but from those with exactly $k$ large jumps, where $k$ can be any integer between 1 and $d > 1$.
In § 9.2 we concentrate on the 'most regular' jump distributions and establish integro-local theorems for them, both when $\mathbf E|\xi|^2 = \infty$ and when $\mathbf E|\xi|^2 < \infty$; § 9.3 is devoted to integral theorems, which can be obtained using integro-local theorems as well as in a direct way. In the latter case, one has to impose conditions on the asymptotics of the probability that the remote set under consideration will be reached by one large jump.

We then return to univariate random walks. In Chapter 10 such walks are considered as processes, and we study there the probability of large deviations of such processes in their trajectory spaces. In other words, we study the asymptotics of $\mathbf P(S_n(\cdot) \in xA)$,
where $S_n(t) = S_{\lfloor nt \rfloor}$, $t \in [0, 1]$, and $A$ is a measurable set in the space $D(0, 1)$ of functions without discontinuities of the second type (it is supposed that the set $A$ is bounded away from zero). Under certain conditions on the structure of the set $A$ the desired asymptotics are found for regularly varying jump distributions, both in the case of 'one-sided' sets (§ 10.2) and in the general case (§ 10.3). Here we use the results of Chapter 3 when $\mathbf E\xi^2 = \infty$ and the results of Chapter 4 when $\mathbf E\xi^2 < \infty$.

Chapters 11–14 are devoted to extending the results of Chapters 3 and 4 to random walks of a more general nature, when the jumps $\xi_i$ are independent but not identically distributed. In Chapter 11 we consider the simplest problem of this kind, the large deviation probabilities of sums of r.v.'s of two different types. In § 11.1 we discuss a motivation for the problem and give examples. As before, we let $S_n := \sum_{i=1}^{n} \xi_i$ and, moreover, $T_m := \sum_{i=1}^{m} \tau_i$, where the r.v.'s $\tau_i$ are independent of each other and also of $\{\xi_k\}$ and are identically distributed. We are interested in the asymptotics of the probabilities $P(m, n, x) := \mathbf P(T_m + S_n \ge x)$ as $x \to \infty$. In § 11.2 we study the asymptotics of $P(m, n, x)$ for the case of regularly varying distributions and in § 11.3 for the case of semiexponential distributions.

In Chapters 12 and 13 we consider random walks with arbitrary non-identically distributed jumps $\xi_j$ in the triangular array scheme, both in the case of an infinite second moment (Chapter 12 contains extensions of the results of Chapter 3) and in the case of a finite second moment (Chapter 13 is a generalization of Chapter 4). The order of exposition in Chapters 12 and 13 is roughly the same as in Chapters 3 and 4. In §§ 12.1 and 13.1 we obtain upper and lower bounds for $\mathbf P(\overline S_n \ge x)$ and $\mathbf P(S_n \ge x)$ respectively. The asymptotics of the probability of the crossing of an arbitrary remote boundary are found in §§ 12.2, 12.3 and 13.2.
Here we also obtain bounds, uniform in $a$, for the probabilities $\mathbf P(\overline S_n(a) \ge x)$ and the distributions of the first crossing time of the level $x \to \infty$. In § 12.4 we establish theorems on the convergence of random walks to random processes. On the basis of these results, in § 12.5 we study transient phenomena in the problem on the asymptotics of the distribution of $\overline S_n(a)$ as $n \to \infty$, $a \to 0$. Similar results for random walks with jumps $\xi_i$ having a finite second moment are established in § 13.3.

The results of Chapters 12 and 13 enable us in Chapter 14 to extend the main assertions of these chapters to the case of dependent jumps. In § 14.1 we give a description of the classes of random walks that admit an asymptotic analysis in the spirit of Chapters 12 and 13. These classes include:

(1) martingales with a common majorant of the jump distributions;
(2) martingales defined on denumerable Markov chains;
(3) martingales defined on arbitrary Markov chains;
(4) arbitrary random walks defined on arbitrary Markov chains.
For arbitrary Markov chains one can obtain essentially the same results as for denumerable ones, but the exposition becomes much more technical. For this reason, and also because in case (1) one can obtain (and in a rather simple way) only bounds for the distributions of interest, we will restrict ourselves in Chapter 14 to considering martingales and arbitrary random walks defined on denumerable Markov chains. In § 14.2 we obtain upper and lower bounds for, and also the asymptotics of, the probabilities $\mathbf P(S_n \ge x)$ and $\mathbf P(\overline S_n \ge x)$ for such walks in the case where the jumps in the walk can have infinite variance. The case of finite variance is considered in § 14.3. In § 14.4 we study arbitrary random walks defined on denumerable Markov chains.

Chapters 15 and 16 are devoted to extending the results of Chapters 2–5 to continuous-time processes. Chapter 15 contains such extensions to processes $\{S(t)\}$ with independent increments. Two approaches are considered. The first is presented in § 15.2. It is based on using the closeness of the trajectories of the processes with independent increments to random polygons with vertices at the points $(k\Delta, S(k\Delta))$ for a small fixed $\Delta$, where the $S(k\Delta)$ are clearly the sums of i.i.d. r.v.'s that we studied in Chapters 2–5. The second approach is presented in § 15.3. It consists of applying the same philosophy, based on singling out one large jump (now in the process $\{S(t)\}$), as that employed in Chapters 2–5. Using this approach, we can extend to the processes $\{S(t)\}$ all the results of Chapters 3 and 4, including those for asymptotic expansions. The first approach (that in § 15.2) only allows one to extend the first-order asymptotics results.

Chapter 16 is devoted to the generalized renewal processes
$$S(t) := S_{\nu(t)} + qt, \qquad t \ge 0,$$
where $q$ is a linear drift coefficient,
$$\nu(t) := \sum_{k=1}^{\infty} \mathbf 1(t_k \le t) = \min\{k \ge 1: t_k > t\} - 1,$$
$t_k := \tau_1 + \cdots + \tau_k$, and the r.v.'s $\tau_j$ are independent of each other and of $\{\xi_k\}$ and are identically distributed with finite mean $a_\tau := \mathbf E\tau_1$. It is assumed that the distribution tails $\mathbf P(\xi \ge t) = V(t)$ of the r.v.'s $\xi$ (and in some cases also the distribution tails $\mathbf P(\tau \ge t) = V_\tau(t)$ of the r.v.'s $\tau$) are regularly varying functions or are dominated by such. In § 16.2 we study the probabilities of large deviations of the r.v.'s $S(T)$ and $\overline S(T) := \max_{t \le T} S(t)$ under the assumption that the mean trend in the process is equal to zero, $\mathbf E\xi + q\mathbf E\tau = 0$. Here substantial contributions to the probabilities $\mathbf P(S(T) \ge x)$ and $\mathbf P(\overline S(T) \ge x)$ can come not only from large jumps $\xi_j$ but also from large renewal intervals $\tau_j$ (especially when $q > 0$). Accordingly, in some deviation zones, to the natural (and expected) quantity $H(T)V(x)$ (where $H(T) := \mathbf E\nu(T)$) giving the asymptotics of $\mathbf P(S(T) \ge x)$ and $\mathbf P(\overline S(T) \ge x)$, we
may need to add, say, values of the form $a_\tau^{-1}(T - x/q)\,V_\tau(x/q)$, which can dominate when $V_\tau(t) \gg V(t)$. The asymptotics of the probabilities $\mathbf P(S(T) \ge x)$ and $\mathbf P(\overline S(T) \ge x)$ are studied in § 16.2 in a rather exhaustive way: for values of $q$ of both signs, for different relations between $V(t)$ and $V_\tau(t)$ or between $x$ and $T$, and for all the large deviation zones. In § 16.3 we obtain asymptotic expansions for $\mathbf P(S(T) \ge x)$ under additional assumptions on the smoothness of the tails $V(t)$ and $V_\tau(t)$. The asymptotics of the probability $\mathbf P(\sup_{t \le T}(S(t) - g(t)) \ge 0)$ of the crossing of an arbitrary remote boundary $g(t)$ by the process $\{S(t)\}$ are studied in § 16.4. The case of a linear boundary $g(t)$ is considered in greater detail in § 16.5.

Let us briefly list the main special features of the present book.

1. The traditional range of problems on limit theorems for the sums $S_n$ is considerably extended in the book: we include the so-called boundary problems relating to the crossing of given boundaries by the trajectory of the random walk. In particular, this applies to problems, of widespread application, on the probabilities of large deviations of the maxima $\overline S_n = \max_{k \le n} S_k$ of sums of random variables.

2. The book is the first monograph in which the study of the above-mentioned wide range of problems is carried out in a comprehensive and systematic way and, as a rule, under minimal adequate conditions. It should fill a number of previously existing gaps.

3. In the book, for the first time in a monograph, asymptotic expansions (the asymptotics of second and higher orders) under rather general conditions, close to minimal, are studied for the above-mentioned range of problems. (Asymptotic expansions for $\mathbf P(S_n \ge x)$ were also studied in [276], but for a narrow class of distributions.)

4. Along with classical random walks, a comprehensive asymptotic analysis is carried out for generalized renewal processes.

5. For the first time in a monograph, multivariate large deviation problems for jump distributions regularly varying at infinity are touched upon.

6. For the first time, complete results on large deviations for random walks with non-identically distributed jumps in the triangular array scheme are obtained. Transient phenomena are studied for such walks with jumps having an infinite variance.

One may also note that the following are included:

• integro-local theorems for the sums $S_n$;
• a study of the structure of the classes of semiexponential and subexponential distributions;
• analogues of the law of the iterated logarithm for random walks with infinite jump variance;
• a derivation of the asymptotics of $\mathbf P(S_n \ge x)$ and $\mathbf P(\overline S_n \ge x)$ for random walks with dependent jumps, defined on Markov chains.

The authors are grateful to the ARC Centre of Excellence for Mathematics and Statistics of Complex Systems, the Russian Foundation for Basic Research (grant 05-01-00810) and the international association INTAS (grant 03-51-5018) for their much appreciated support during the writing of the book. The authors are also grateful to S.G. Foss for discussions of some aspects of the book. The writing of this monograph would have been a much harder task were it not for the constant technical support of T.V. Belyaeva, to whom the authors express their sincere gratitude.
For the reader's attention

We use := for 'is defined by', 'iff' for 'if and only if', and □ for the end of a proof. Parts of the exposition that are, from our viewpoint, of secondary interest are typeset in a small font.

A.A. Borovkov, K.A. Borovkov
1 Preliminaries
1.1 Regularly varying functions and their main properties Regularly (and, in particular, slowly) varying functions play an important role in the subsequent exposition. In this section we will present those basic properties of the above-mentioned functions that will be used in what follows. We will often assume that the domain of the functions under consideration includes the right half-axis (0, ∞) where the functions are measurable and locally integrable.
1.1.1 General properties

Definition 1.1.1. A positive (Lebesgue) measurable function L(t) is said to be a slowly varying function (s.v.f.) as t → ∞ if, for any fixed v > 0,

L(vt)/L(t) → 1  as  t → ∞.    (1.1.1)
A function V(t) is said to be a regularly varying (of index −α ∈ R) function (r.v.f.) as t → ∞ if it can be represented as

V(t) = t^{−α} L(t),    (1.1.2)

where L(t) is an s.v.f. as t → ∞. The definition of an s.v.f. (r.v.f.) as t ↓ 0 is quite similar. In what follows, the term s.v.f. (r.v.f.) will always refer, unless otherwise stipulated, to a function which is slowly (regularly) varying at infinity. One can easily see that, similarly to (1.1.1), the convergence

V(vt)/V(t) → v^{−α}  as  t → ∞    (1.1.3)
for any fixed v > 0 is a characteristic property of regularly varying functions. Thus, an s.v.f. is an r.v.f. of index zero. Note that r.v.f.'s admit a definition that, at first glance, appears to be more general
than (1.1.3). One can define them as measurable functions such that, for all v > 0 from a set of positive Lebesgue measure, there exists the limit

lim_{t→∞} V(vt)/V(t) =: g(v).    (1.1.4)
In this case, one necessarily has g(v) ≡ v^{−α} for some α ∈ R and, moreover, (1.1.3) holds for all v > 0 (see e.g. p. 17 of [32]). The fact that the power function appears in the limit becomes natural from the obvious relation

g(v_1 v_2) = lim_{t→∞} [V(v_1 v_2 t)/V(v_2 t)] × [V(v_2 t)/V(t)] = g(v_1) g(v_2),
which is equivalent to the Cauchy functional equation for h(u) := ln g(e^u): h(u_1 + u_2) = h(u_1) + h(u_2). It is well known that, in 'non-pathological' cases, this equation can only have a linear solution of the form h(u) = cu, which means that g(v) = v^c.
The following functions are typical representatives of the class of s.v.f.'s: the logarithmic function and its powers ln^γ t, γ ∈ R, linear combinations thereof, multiple logarithms, functions with the property that L(t) → L = const ≠ 0 as t → ∞ etc. An example of an oscillating bounded s.v.f. is provided by

L_0(t) = 2 + sin(ln ln t),  t > 1.    (1.1.5)
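The defining property (1.1.1) is easy to probe numerically. The following sketch (our own illustration, not part of the original text) checks that the oscillating function L_0 from (1.1.5) indeed satisfies L_0(vt)/L_0(t) → 1 for v = 10, albeit with very slow convergence:

```python
import math

def L0(t):
    # Oscillating bounded s.v.f. from (1.1.5): L0(t) = 2 + sin(ln ln t), t > 1
    return 2.0 + math.sin(math.log(math.log(t)))

# L0(vt)/L0(t) -> 1 as t -> infinity for each fixed v (here v = 10);
# convergence is governed by the double logarithm, so it is very slow
for t in (1e8, 1e20, 1e40):
    print(t, L0(10 * t) / L0(t))
```

At t = 10^8 the ratio still deviates from 1 by about 5%, while at t = 10^40 the deviation is below 1%, illustrating why double-logarithmic s.v.f.'s require astronomically large arguments before the limit is visible.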
We will need the following two fundamental properties of s.v.f.'s.

Theorem 1.1.2 (Uniform convergence theorem). If L(t) is an s.v.f. as t → ∞, then the convergence (1.1.1) holds uniformly in v on any interval [v_1, v_2] with 0 < v_1 < v_2 < ∞.

It follows from the assertion of the theorem that the uniform convergence (1.1.1) on an interval [1/M, M] will also take place in the case when, as t → ∞, the quantity M = M(t) increases to infinity slowly enough.

Theorem 1.1.3 (Integral representation). A positive function L(t) is an s.v.f. as t → ∞ iff for some t_0 > 0 one has

L(t) = c(t) exp{ ∫_{t_0}^t ε(u)/u du },  t ≥ t_0,    (1.1.6)

where c(t) and ε(t) are measurable functions, with c(t) → c ∈ (0, ∞) and ε(t) → 0 as t → ∞.

For example, for the function L(t) = ln t the representation (1.1.6) holds with c(t) = 1, t_0 = e and ε(t) = (ln t)^{−1}.

Proof of Theorem 1.1.2. Put

h(x) := ln L(e^x).    (1.1.7)
Then the property (1.1.1) of s.v.f.'s is equivalent to the following: for any u ∈ R, one has the convergence

h(x + u) − h(x) → 0    (1.1.8)

as x → ∞. To prove the theorem, we have to show that this convergence is uniform in u ∈ [u_1, u_2] for any fixed u_i ∈ R. To do this, it suffices to verify that the convergence (1.1.8) is uniform on the interval [0, 1]. Indeed, from the obvious inequality

|h(x + u′ + u″) − h(x)| ≤ |h(x + u′ + u″) − h(x + u′)| + |h(x + u′) − h(x)|    (1.1.9)

we have

|h(x + u) − h(x)| ≤ (u_2 − u_1 + 1) sup_{y∈[0,1]} |h(x + y) − h(x)|,  u ∈ [u_1, u_2].

For a given ε ∈ (0, 1) and any x > 0 put

I_x := [x, x + 2],  I_x^* := {u ∈ I_x : |h(u) − h(x)| ≥ ε/2},  I_{0,x}^* := {u ∈ I_0 : |h(x + u) − h(x)| ≥ ε/2}.

It is clear that the sets I_x^* and I_{0,x}^* are measurable and differ from each other only by a translation by x, so that μ(I_x^*) = μ(I_{0,x}^*), where μ is the Lebesgue measure. By virtue of (1.1.8), the indicator function of the set I_{0,x}^* converges to zero at any point u ∈ I_0 as x → ∞. Therefore, by the dominated convergence theorem, the integral of this function, which is equal to μ(I_{0,x}^*), tends to 0, so that for large enough x_0 one has μ(I_x^*) < ε/2 when x ≥ x_0. Further, for s ∈ [0, 1] the interval I_x ∩ I_{x+s} = [x + s, x + 2] has length 2 − s ≥ 1, so that when x ≥ x_0 the set

(I_x ∩ I_{x+s}) \ (I_x^* ∪ I_{x+s}^*)
has measure at least 1 − ε > 0 and is therefore non-empty. Let y be a point from this set. Then

|h(x + s) − h(x)| ≤ |h(x + s) − h(y)| + |h(y) − h(x)| < ε/2 + ε/2 = ε

for x ≥ x_0, which proves the required uniformity on [0, 1] and hence on any other fixed interval as well. The theorem is proved.

Proof of Theorem 1.1.3. That the right-hand side of (1.1.6) is an s.v.f. is almost obvious: for any fixed positive v ≠ 1,

L(vt)/L(t) = [c(vt)/c(t)] exp{ ∫_t^{vt} ε(u)/u du },    (1.1.10)

where, as t → ∞, one has c(vt)/c(t) → c/c = 1 and

∫_t^{vt} ε(u)/u du = o( ∫_t^{vt} du/u ) = o(ln v) = o(1).    (1.1.11)
Now we prove that any s.v.f. admits the representation (1.1.6). In terms of the function (1.1.7), the required representation will be equivalent (after the change of variable t = e^x) to the relation

h(x) = d(x) + ∫_{x_0}^x δ(y) dy,    (1.1.12)
where d(x) = ln c(e^x) → d ∈ R and δ(x) = ε(e^x) → 0 as x → ∞, x_0 = ln t_0. Therefore it suffices to establish the representation (1.1.12) for the function h(x). First of all note that h(x) (like L(t)) is a locally bounded function. Indeed, by Theorem 1.1.2, for a large enough x_0 and all x ≥ x_0,

sup_{0≤y≤1} |h(x + y) − h(x)| < 1.

Hence for any x > x_0 we have by virtue of (1.1.9) the bound |h(x) − h(x_0)| ≤ x − x_0 + 1. Further, the local boundedness and measurability of the function h mean that it is locally integrable on [x_0, ∞) and therefore can be represented for x ≥ x_0 as

h(x) = ∫_{x_0}^{x_0+1} h(y) dy + ∫_0^1 (h(x) − h(x + y)) dy + ∫_{x_0}^x (h(y + 1) − h(y)) dy.    (1.1.13)
The first integral in (1.1.13) is a constant that we will denote by d. The second tends to zero as x → ∞ owing to Theorem 1.1.2, so that

d(x) := d + ∫_0^1 (h(x) − h(x + y)) dy → d,  x → ∞.

As to the third integral in (1.1.13), by the definition of an s.v.f., for its integrand one has δ(y) := h(y + 1) − h(y) → 0 as y → ∞, which completes the proof of the representation (1.1.12).
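As a sanity check of Theorem 1.1.3, one can evaluate the right-hand side of (1.1.6) numerically for the example given after the theorem: L(t) = ln t with c(t) = 1, t_0 = e and ε(u) = (ln u)^{−1}. The following sketch is our own illustration; the trapezoidal quadrature on a log-spaced grid and the step count are arbitrary choices:

```python
import math

def eps(u):
    # epsilon(u) = 1/ln u, the epsilon-function in (1.1.6) for L(t) = ln t
    return 1.0 / math.log(u)

def L_via_representation(t, n=20000):
    # Trapezoidal approximation of exp( int_{t0}^t eps(u)/u du ) with t0 = e, c(t) = 1.
    # The integral equals ln ln t exactly, so the result should be ln t.
    t0 = math.e
    r = (t / t0) ** (1.0 / n)          # log-spaced grid from t0 to t
    us = [t0 * r ** i for i in range(n + 1)]
    integral = 0.0
    for a, b in zip(us, us[1:]):
        fa, fb = eps(a) / a, eps(b) / b
        integral += 0.5 * (fa + fb) * (b - a)
    return math.exp(integral)

print(L_via_representation(1e6), math.log(1e6))
```

The two printed values agree to several decimal places, confirming that the integral representation reproduces L(t) = ln t.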
1.1.2 The main asymptotic properties

In this subsection we will obtain several corollaries from Theorems 1.1.2 and 1.1.3.

Theorem 1.1.4.
(i) If L_1 and L_2 are s.v.f.'s then L_1 + L_2, L_1 L_2, L_1^b and L(t) := L_1(at + b), where a ≥ 0 and b ∈ R, are also s.v.f.'s.
(ii) If L is an s.v.f. then for any δ > 0 there exists a t_δ > 0 such that

t^{−δ} ≤ L(t) ≤ t^δ  for all  t ≥ t_δ.    (1.1.14)

In other words, L(t) = t^{o(1)} as t → ∞.
(iii) If L is an s.v.f. then for any δ > 0 and v_0 > 1 there exists a t_δ > 0 such that for all v ≥ v_0 and t ≥ t_δ,

v^{−δ} ≤ L(vt)/L(t) ≤ v^δ.    (1.1.15)
(iv) (Karamata's theorem) If α > 1 then, for the r.v.f. V in (1.1.2), one has

V^I(t) := ∫_t^∞ V(u) du ∼ tV(t)/(α − 1)  as  t → ∞.    (1.1.16)

If α < 1 then

V_I(t) := ∫_0^t V(u) du ∼ tV(t)/(1 − α)  as  t → ∞.    (1.1.17)

If α = 1 then one has the equalities

V_I(t) = tV(t)L_1(t)    (1.1.18)

and

V^I(t) = tV(t)L_2(t)  if  ∫_0^∞ V(u) du < ∞,    (1.1.19)

where the L_i(t) → ∞ as t → ∞, i = 1, 2, are s.v.f.'s.
(v) For an r.v.f. V of index −α < 0 put σ(t) := V^{(−1)}(1/t) = inf{u : V(u) < 1/t}. Then σ(t) is an r.v.f. of index 1/α:

σ(t) = t^{1/α} L_1(t),    (1.1.20)

where L_1 is an s.v.f. If the function L has the property

L(tL^{1/α}(t)) ∼ L(t)  as  t → ∞,    (1.1.21)

then

L_1(t) ∼ L^{1/α}(t^{1/α}).    (1.1.22)
Similar assertions hold for functions that are slowly or regularly varying as t decreases to zero. Note that from Theorem 1.1.2 and the inequality (1.1.15) we also obtain the
following property of s.v.f.'s: for any δ > 0 there exists a t_δ > 0 such that for all t and v satisfying the inequalities t ≥ t_δ, vt ≥ t_δ one has

(1 − δ) min{v^δ, v^{−δ}} ≤ L(vt)/L(t) ≤ (1 + δ) max{v^δ, v^{−δ}}.    (1.1.23)
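Karamata's relation (1.1.16) is easy to check on a concrete r.v.f. In the sketch below (our illustration; the choice α = 2, L(t) = ln t is arbitrary) the integral V^I(t) = (ln t + 1)/t is available in closed form by integration by parts, and its ratio to tV(t)/(α − 1) = ln t/t equals 1 + 1/ln t, converging to 1 only logarithmically:

```python
import math

# V(t) = t^{-alpha} L(t) with alpha = 2 and L(t) = ln t; by (1.1.16),
# V^I(t) = int_t^infty V(u) du  ~  t V(t)/(alpha - 1) = ln t / t as t -> infinity.
# Integration by parts gives V^I(t) = (ln t + 1)/t exactly, so the ratio is 1 + 1/ln t.
alpha = 2.0
for t in (1e2, 1e4, 1e8):
    VI_exact = (math.log(t) + 1.0) / t
    karamata = t * (t ** -alpha * math.log(t)) / (alpha - 1.0)
    print(t, VI_exact / karamata)
```

The printed ratios decrease monotonically towards 1, in accordance with (1.1.16).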
Proof. Assertion (i) is evident (just observe that, to prove the last part of (i), one needs Theorem 1.1.2).
(ii) This property follows immediately from the representation (1.1.6) and the bound

∫_{t_0}^t ε(u)/u du = ∫_{t_0}^{ln t} ε(u)/u du + ∫_{ln t}^t ε(u)/u du = O(ln(ln t / t_0)) + o(ln(t / ln t)) = o(ln t)
as t → ∞.
(iii) To prove this property, we notice that, in relation to the expression on the right-hand side of (1.1.10), for any fixed δ > 0 and v_0 > 1 and all sufficiently large t one has

v^{−δ/2} ≤ v_0^{−δ/2} ≤ c(vt)/c(t) ≤ v_0^{δ/2} ≤ v^{δ/2},  v ≥ v_0,

and

| ∫_t^{vt} ε(u)/u du | ≤ (δ/2) ln v
(by virtue of (1.1.11)). From this (1.1.15) follows.
(iv) Owing to the uniform convergence theorem, one can choose an M = M(t) → ∞ as t → ∞ such that the convergence in (1.1.1) will be uniform in v ∈ [1, M]. Changing variables by putting u = vt, we obtain

V^I(t) = t^{−α+1} L(t) ∫_1^∞ v^{−α} [L(vt)/L(t)] dv = t^{−α+1} L(t) ( ∫_1^M + ∫_M^∞ ).

If α > 1 then, as t → ∞,

∫_1^M v^{−α} [L(vt)/L(t)] dv ∼ ∫_1^M v^{−α} dv → 1/(α − 1),
whereas, due to property (iii), one has for δ = (α − 1)/2 the relation

∫_M^∞ v^{−α} [L(vt)/L(t)] dv ≤ ∫_M^∞ v^{−(α+1)/2} dv → 0  as  M → ∞.

Further, for v > 1,

V^I(t) = V^I(vt) + ∫_t^{vt} V(u) du,

where the integral clearly does not exceed (v − 1)L(t)(1 + o(1)). Owing to (1.1.26) this implies that V^I(vt)/V^I(t) → 1 as t → ∞, which completes the proof of (1.1.19). That (1.1.18) is true in the subcase when (1.1.25) holds is almost obvious, since

V_I(t) = tV(t)L_1(t) = L(t)L_1(t) = ∫_0^t V(u) du → ∫_0^∞ V(u) du,

so that, firstly, L_1 is an s.v.f. by virtue of property (i) and, secondly, L_1(t) → ∞ since L(t) → 0 owing to (1.1.26).
Now let α = 1 and ∫_0^∞ V(u) du = ∞. Then, if M = M(t) → ∞ sufficiently slowly, one obtains by the uniform convergence theorem a result similar to (1.1.26) (see also (1.1.24)):

V_I(t) = ∫_0^1 v^{−1} L(vt) dv ≥ ∫_{1/M}^1 v^{−1} L(vt) dv ∼ L(t) ln M ≫ L(t).
Therefore L_1(t) := V_I(t)/L(t) → ∞ as t → ∞. Further, also by an argument similar to the previous exposition, for v ∈ (0, 1) one has

V_I(t) = V_I(vt) + ∫_{vt}^t V(u) du,

where the last integral does not exceed (1 − v)L(t)(1 + o(1)) ≪ V_I(t), so that V_I(t) (and also, by property (i), L_1(t)) is an s.v.f. This completes the proof of property (iv).
(v) Clearly, by the uniform convergence theorem the quantity σ = σ(t) is a solution to the 'asymptotic equation'

V(σ) ∼ 1/t  as  t → ∞    (1.1.27)
(where the symbol ∼ can be replaced by the equality sign provided that the function V is continuous and monotonically decreasing). Representing σ in the form σ = t^{1/α} L_1, L_1 = L_1(t), we obtain an equivalent relation

L_1^{−α} L(t^{1/α} L_1) ∼ 1,    (1.1.28)

and it is obvious that

t^{1/α} L_1 → ∞  as  t → ∞.    (1.1.29)

Fix an arbitrary v > 0. Substituting vt for t in (1.1.28) and for brevity putting L_2 = L_2(t) := L_1(vt), we get the relation

L_2^{−α} L(t^{1/α} L_2) ∼ 1,    (1.1.30)

since L(v^{1/α} t^{1/α} L_2) ∼ L(t^{1/α} L_2) owing to (1.1.29) (with L_1 replaced by L_2). Now we will show by contradiction that (1.1.28)–(1.1.30) imply that L_1 ∼ L_2 as t → ∞, where the latter clearly means that L_1 is an s.v.f. Indeed, the contrary assumption means that there exist a v_0 > 1 and a sequence t_n → ∞ such that

u_n := L_2(t_n)/L_1(t_n) > v_0,  n = 1, 2, . . .    (1.1.31)
(the possible alternative case can be considered in exactly the same way). Evidently, t_n^* := t_n^{1/α} L_1(t_n) → ∞ by virtue of (1.1.29), so that from (1.1.28), (1.1.29) and property (iii) with δ = α/2 we obtain that

1 ∼ [L_2^{−α}(t_n) L(t_n^{1/α} L_2(t_n))] / [L_1^{−α}(t_n) L(t_n^{1/α} L_1(t_n))] = u_n^{−α} L(u_n t_n^*)/L(t_n^*) ≤ u_n^{−α} · u_n^{α/2} = u_n^{−α/2} < v_0^{−α/2} < 1.
We get a contradiction. Note that the above argument proves the uniqueness (up to asymptotic equivalence) of the solution to the equation (1.1.27). Finally, the relation (1.1.22) can be proved by directly verifying (1.1.27) for σ := t^{1/α} L^{1/α}(t^{1/α}): using (1.1.21), one has

V(σ) = σ^{−α} L(σ) = L(t^{1/α} L^{1/α}(t^{1/α})) / (t L(t^{1/α})) ∼ L(t^{1/α}) / (t L(t^{1/α})) = 1/t.

The desired assertion now follows owing to the above-mentioned uniqueness of the solution to the asymptotic equation (1.1.27). Theorem 1.1.4 is proved.
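The inverse-function assertion (v) can also be illustrated numerically. The sketch below is our own; the bisection bracket [10, 10^30] and the choice V(u) = u^{−2} ln u (α = 2, L = ln) are arbitrary. It computes σ(t) = inf{u : V(u) < 1/t} directly and compares it with the prediction t^{1/α} L^{1/α}(t^{1/α}) from (1.1.20)–(1.1.22); as with all logarithmic corrections, the agreement improves only slowly in t:

```python
import math

def V(u):
    # r.v.f. of index -alpha with alpha = 2: V(u) = u^{-2} ln u (decreasing for u > e^{1/2})
    return math.log(u) / u ** 2

def sigma(t):
    # sigma(t) = inf{u : V(u) < 1/t}, found by bisection on the log scale
    lo, hi = 10.0, 1e30
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if V(mid) < 1.0 / t:
            hi = mid
        else:
            lo = mid
    return hi

t = 1e10
# Prediction (1.1.22): sigma(t) ~ t^{1/alpha} L^{1/alpha}(t^{1/alpha}) with alpha = 2
pred = math.sqrt(t) * math.sqrt(math.log(math.sqrt(t)))
print(sigma(t) / pred)
```

At t = 10^10 the ratio is within a few per cent of 1, while V(σ(t)) · t reproduces the asymptotic equation (1.1.27) essentially exactly.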
1.1.3 The asymptotic properties of the transforms of r.v.f.'s (an Abelian type theorem)

For an r.v.f. V(t), its Laplace transform

ψ(λ) := ∫_0^∞ e^{−λt} V(t) dt < ∞

is defined for any λ > 0. The following asymptotic relations hold true for the transform.

Theorem 1.1.5. Let V(t) be an r.v.f. (i.e. it has the form (1.1.2)).
(i) If α ∈ [0, 1) then

ψ(λ) ∼ Γ(1 − α) V(1/λ)/λ  as  λ ↓ 0.    (1.1.32)

(ii) If α = 1 and ∫_0^∞ V(t) dt = ∞ then

ψ(λ) ∼ V_I(1/λ)  as  λ ↓ 0,    (1.1.33)

where V_I(t) = ∫_0^t V(u) du → ∞ is an s.v.f. and, moreover, V_I(t) ≫ L(t) as t → ∞.
(iii) In any case, ψ(λ) ↑ V_I(∞) = ∫_0^∞ V(t) dt ≤ ∞ as λ ↓ 0.

Rewriting the relation (1.1.32), one obtains

V(t) ∼ ψ(1/t) / (t Γ(1 − α))  as  t → ∞.
Relations of this type will also hold true in the case when, instead of the regularity of the function V , we require its monotonicity and then assume that ψ(λ) is an
r.v.f. as λ ↓ 0. Assertions of this kind are referred to as Tauberian theorems. In the present book, we will not be using such theorems, so we will not dwell on them here.

Proof of Theorem 1.1.5. (i) For any fixed ε > 0 we have

ψ(λ) = ∫_0^{ε/λ} + ∫_{ε/λ}^∞,    (1.1.34)

where owing to (1.1.17) one has the following relation for the first integral in the case α < 1:

∫_0^{ε/λ} e^{−λt} V(t) dt ≤ ∫_0^{ε/λ} V(t) dt ∼ εV(ε/λ)/(λ(1 − α))  as  λ ↓ 0.    (1.1.35)
Making the change of variables λt = u, one can rewrite the second integral in (1.1.34) as follows:

∫_{ε/λ}^∞ = (V(1/λ)/λ) ∫_ε^∞ e^{−u} u^{−α} [L(u/λ)/L(1/λ)] du = (V(1/λ)/λ) ( ∫_ε^2 + ∫_2^∞ ).    (1.1.36)
Here, as λ ↓ 0, each of the two integrals on the right-hand side converges to the respective integral of e^{−u} u^{−α}: for the former, this follows from the uniform convergence theorem (the convergence L(u/λ)/L(1/λ) → 1 holds uniformly in u ∈ [ε, 2]), whereas for the latter it is a consequence of (1.1.1) and the dominated convergence theorem (since, owing to Theorem 1.1.4(iii), for all sufficiently small λ one has L(u/λ)/L(1/λ) < u for u ≥ 2). Therefore

∫_{ε/λ}^∞ ∼ (V(1/λ)/λ) ∫_ε^∞ u^{−α} e^{−u} du.    (1.1.37)
Now observe that, as λ ↓ 0,

[εV(ε/λ)/λ] / [V(1/λ)/λ] = ε^{1−α} L(ε/λ)/L(1/λ) → ε^{1−α}.

Since ε > 0 can be chosen arbitrarily small, this relation together with (1.1.35) and (1.1.37) completes the proof of (1.1.32).
(ii) Integrating by parts and again making the change of variables λt = u, we
obtain for α = 1 and M > 1 that

ψ(λ) = ∫_0^∞ e^{−λt} dV_I(t) = −∫_0^∞ V_I(t) de^{−λt} = ∫_0^∞ V_I(u/λ) e^{−u} du = ∫_0^{1/M} + ∫_{1/M}^M + ∫_M^∞.    (1.1.38)
By Theorem 1.1.4(iv), V_I(t) ≫ L(t) is an s.v.f. as t → ∞ and hence, for M = M(λ) → ∞ sufficiently slowly as λ ↓ 0, by the uniform convergence theorem the middle integral on the final right-hand side of (1.1.38) is

∫_{1/M}^M V_I(u/λ) e^{−u} du = V_I(1/λ) ∫_{1/M}^M [V_I(u/λ)/V_I(1/λ)] e^{−u} du ∼ V_I(1/λ) ∫_{1/M}^M e^{−u} du ∼ V_I(1/λ).

The other two integrals are negligibly small: since V_I(t) is an increasing function, the first does not exceed V_I(1/(λM))/M = o(V_I(1/λ)) and, for the second, by Theorem 1.1.4(iii) we have

∫_M^∞ V_I(u/λ) e^{−u} du = V_I(1/λ) ∫_M^∞ [V_I(u/λ)/V_I(1/λ)] e^{−u} du ≤ V_I(1/λ) ∫_M^∞ u e^{−u} du = o(V_I(1/λ)).

Hence (ii) is proved. Assertion (iii) is obvious.
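Assertion (i) of Theorem 1.1.5 can be checked numerically. The sketch below is our own illustration; the choice V(t) = (1 + t)^{−1/2} (an r.v.f. of index −1/2, bounded at 0 so that ψ(λ) is finite), the grid size and the truncation point T are all arbitrary. It approximates ψ(λ) by a trapezoidal rule after the substitution u = λt and compares it with Γ(1 − α)V(1/λ)/λ from (1.1.32):

```python
import math

alpha = 0.5

def V(t):
    # r.v.f. of index -1/2, regularized at 0 so the Laplace transform converges
    return (1.0 + t) ** -alpha

def psi(lam, n=200000, T=40.0):
    # psi(lam) = int_0^infty exp(-lam t) V(t) dt; after u = lam t the integrand
    # is exp(-u) V(u/lam) / lam, integrated here by the trapezoidal rule on [0, T]
    h = T / n
    total = 0.5 * (V(0.0) + math.exp(-T) * V(T / lam))
    for i in range(1, n):
        u = i * h
        total += math.exp(-u) * V(u / lam)
    return total * h / lam

lam = 1e-3
# Theorem 1.1.5(i): psi(lam) ~ Gamma(1 - alpha) V(1/lam) / lam as lam -> 0
r = psi(lam) / (math.gamma(1.0 - alpha) * V(1.0 / lam) / lam)
print(r)
```

The ratio is already within a few per cent of 1 at λ = 10^{−3} and moves closer to 1 as λ decreases, as the Abelian theorem predicts.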
1.1.4 The subexponential property

An important property of r.v.f.'s is that their regularity character is preserved under convolution. We will confine ourselves here to considering the case of probability distributions whose tails are r.v.f.'s. Let ξ, ξ_1, ξ_2, . . . be independent identically distributed (i.i.d.) random variables (r.v.'s) with distribution F, and let the right tail of this distribution,

F_+(t) := F([t, ∞)) = P(ξ ≥ t),  t ∈ R,

be an r.v.f. (as t → ∞) of the form (1.1.2): F_+(t) ≡ V(t) = t^{−α} L(t). We will denote the class of all such distributions with a fixed α ≥ 0 by R(α), and the class of all distributions with regularly varying right tails by R := ∪_{α≥0} R(α). It turns out that in this case, as x → ∞,

P(ξ_1 + ξ_2 ≥ x) = V^{2*}(x) := V * V(x) = −∫_{−∞}^∞ V(x − t) dV(t) ∼ 2V(x) = 2P(ξ ≥ x),    (1.1.39)
and, more generally, for any fixed n > 1,

V^{n*}(x) := P(ξ_1 + · · · + ξ_n ≥ x) ∼ nV(x)  as  x → ∞.    (1.1.40)
In order to prove (1.1.39), introduce the events A = {ξ_1 + ξ_2 ≥ x} and B_i = {ξ_i < x/2}, i = 1, 2. Clearly

P(A) = P(AB_1) + P(AB_2) − P(AB_1B_2) + P(A B̄_1 B̄_2),

where P(AB_1B_2) = 0, P(A B̄_1 B̄_2) = P(B̄_1 B̄_2) = V^2(x/2) (here and in what follows B̄ denotes the complement of B) and

P(AB_1) = P(AB_2) = ∫_{−∞}^{x/2} V(x − t) F(dt).

Therefore

V^{2*}(x) = 2 ∫_{−∞}^{x/2} V(x − t) F(dt) + V^2(x/2).    (1.1.41)
(We could have obtained the same result by integrating by parts the convolution in (1.1.39).) It remains to observe that V^2(x/2) = o(V(x)) and

∫_{−∞}^{x/2} V(x − t) F(dt) = ∫_{−∞}^{−M} + ∫_{−M}^{M} + ∫_{M}^{x/2},    (1.1.42)

where, as can be easily seen, for any M = M(x) → ∞ as x → ∞ such that M = o(x) one has

∫_{−M}^{M} ∼ V(x)  and  ∫_{−∞}^{−M} + ∫_{M}^{x/2} = o(V(x)),
which proves (1.1.39). One can establish (1.1.40) in a similar way (we will prove this relation for a more general case in Theorem 1.2.12(iii) below). The same assertions turn out also to be true for the so-called semiexponential distributions, i.e. distributions whose right tails have the form

F_+(t) = e^{−t^α L(t)},  α ∈ (0, 1),    (1.1.43)
where L(t) is an s.v.f. as t → ∞ satisfying a certain smoothness condition (see Definition 1.2.22 below, p. 29). Considerable attention will be paid in this book to extending the asymptotic relation (1.1.40) to the case when n grows together with x, and also to refining this relation for distributions with both regularly varying and semiexponential tails.
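The tail-additivity relation (1.1.39) can be observed directly in simulation. The following Monte Carlo sketch is our own illustration; the Pareto index, the threshold x and the sample size are arbitrary choices. It compares P(ξ_1 + ξ_2 ≥ x) with 2V(x) for the Pareto tail V(t) = t^{−2}, t ≥ 1; at a moderate threshold the ratio is still noticeably above 1, reflecting the o(1) correction in (1.1.39):

```python
import random

random.seed(12345)
ALPHA = 2.0

def pareto():
    # Pareto jump with tail V(t) = P(xi >= t) = t^{-ALPHA} for t >= 1 (inverse transform)
    return random.random() ** (-1.0 / ALPHA)

x, n = 30.0, 1_000_000
hits = sum(1 for _ in range(n) if pareto() + pareto() >= x)
ratio = (hits / n) / (2.0 * x ** -ALPHA)   # estimate of P(S_2 >= x) / (2 V(x))
print(ratio)
```

With the fixed seed the estimated ratio lands close to 1 (slightly above it, since the relation is only asymptotic in x), in agreement with (1.1.39).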
Such distributions are the main representatives of the class of so-called subexponential distributions, of which the characteristic property is given by the relation (1.1.39). In the next section we will consider the main properties of these distributions.
1.2 Subexponential distributions

1.2.1 The main properties of subexponential distributions

Before giving any formal definitions, we will briefly describe the relationships between the classes of distributions that we are going to introduce and explain why we pay them different amounts of attention in different contexts. As we have already noted in the Introduction, the main objects of study in this book are random walks in which the jump distributions have tails that are either regularly varying at infinity or semiexponential. In both cases, such distributions are typical representatives of the class of so-called subexponential distributions. The characteristic property of subexponential distributions is the asymptotic tail additivity of the convolutions of the original distributions, i.e. the fact that for the sums S_n = ξ_1 + · · · + ξ_n of (any) fixed numbers n of i.i.d. r.v.'s ξ_i one has

P(S_n ≥ x) ∼ nP(ξ_1 ≥ x)  as  x → ∞    (1.2.1)
(cf. (1.1.40)). Below we will establish a number of sufficient and necessary conditions that, to some extent, characterize the class S of subexponential distributions. These conditions show that the class S is the result of a considerable extension of the union of the class R of distributions with regularly varying tails and the class Se of distributions with semiexponential tails (see Definition 1.2.22 below, p. 29) which is obtained, roughly speaking, through the addition of functions that 'oscillate' between the functions from R or between those from Se. For such 'oscillating' functions the limit theorems to be obtained in Chapters 2–4 will, as a rule, be valid in much narrower zones than for elements of R or Se (see § 4.8). However, these functions usually do not appear in applications, where the assumption of their presence would look rather artificial. Therefore in what follows we will confine ourselves mostly to considering distributions from the classes R and Se. We will devote less attention to other distributions from the class S. Nevertheless, a number of properties of random walks will be established for the whole broad class S. The difference between the class S, on the one hand, and the classes R and Se, on the other, is that the above-mentioned tail-additivity property (1.2.1) extends much further (in terms of the number n of summands in the sum of random variables) for distributions from R and Se than for arbitrary distributions from S. More precisely, for the classes R and Se, the relation (1.2.1) remains true also in the case when n grows rather fast with x (for more detail, see § 5.9). This allows one to advance much further in studying the asymptotic properties of the distributions of the r.v.'s S_n, S̄_n = max_{k≤n} S_k etc. for the classes R and Se, whereas
the zone in which it is natural to do this within the class S is quite narrow (see below). At the same time, in subsequent chapters we will study distributions from the classes R and Se separately, the reason being that the technical aspects and even the formulations of results for these distribution classes are different in many respects. However, there are problems for which it is natural to conduct studies within the wider class of subexponential distributions. As an example of such a problem, we could mention here that of the asymptotic behaviour of the probability P(S ≥ x) as x → ∞ in the case Eξ_i < 0, where S = sup_{k≥0} S_k (see § 7.5). Now we will give a formal definition of the class of subexponential distributions and discuss their basic properties and also the relations between this class and other important distribution classes to be considered in the present book. Let ζ ∈ R be an r.v. with distribution G: G(B) = P(ζ ∈ B) for any Borel set B (recall that in this case we write ζ ⊂= G). By G(t) we denote the complement distribution function corresponding to the distribution of the r.v. ζ:

G(t) := P(ζ ≥ t),  t ∈ R.
Similarly, to the distribution G_i there corresponds the function G_i(t), and so on. The function G(t) is also referred to as the (right) tail of the distribution G, but normally this term is used only when t > 0. Note that throughout this book, except in §§ 1.2–1.4, for the right distribution tails we will use notation of the form G_+(t) (this should not lead to any confusion). The convolution of the tails G_1(t) and G_2(t) is the function

G_1 * G_2(t) := −∫ G_1(t − y) dG_2(y) = ∫ G_1(t − y) G_2(dy) = P(Z_2 ≥ t),

where Z_2 = ζ_1 + ζ_2 is the sum of independent r.v.'s ζ_i ⊂= G_i, i = 1, 2. Clearly G_1 * G_2(t) = G_2 * G_1(t). By G^{2*}(t) = G * G(t) we denote the convolution of the tail G(t) with itself, and put G^{(n+1)*}(t) = G * G^{n*}(t), n ≥ 2. Similarly, the convolution G_1 * G_2 of two distributions G_1 and G_2 is the measure G_1 * G_2(B) = ∫ G_1(B − t) G_2(dt), and so on.

Definition 1.2.1. A probability distribution G on [0, ∞) belongs to the class S_+ of subexponential distributions on the positive half-line if

G^{2*}(t) ∼ 2G(t)  as  t → ∞.    (1.2.2)

A probability distribution G on R belongs to the class S of subexponential distributions if the distribution G_+ of the positive part ζ^+ := max{0, ζ} of the r.v. ζ ⊂= G belongs to S_+. An r.v. is said to be subexponential if its distribution is subexponential.
Remark 1.2.2. As clearly we always have

(G_+)^{2*}(t) = P(ζ_1^+ + ζ_2^+ ≥ t) ≥ P({ζ_1^+ ≥ t} ∪ {ζ_2^+ ≥ t}) = P(ζ_1 ≥ t) + P(ζ_2 ≥ t) − P(ζ_1 ≥ t, ζ_2 ≥ t) = 2G(t) − G^2(t) = 2G_+(t)(1 + o(1))

as t → ∞, subexponentiality is equivalent to the following property:

lim sup_{t→∞} (G_+)^{2*}(t)/G_+(t) ≤ 2.    (1.2.3)

Observe also that, since the relation (1.2.2) only makes sense when G(t) > 0 for all t ∈ R, any subexponential distribution has an unbounded (from the right) support.

Note that in the literature the notation S is normally used for the class of subexponential distributions on [0, ∞), whereas the general case is either ignored or just mentioned in passing as a possible extension obtained as above. This situation can be explained, on the one hand, by the fact that subexponentiality is, by definition, a property of right distribution tails only and, on the other, by a historical tradition: the class of subexponential distributions was originally introduced in the context of the theory of branching processes and then used mostly for modelling insurance claim sizes and service times in queueing theory, where the respective r.v.'s are always positive. In the context of general random walks, such a restrictive assumption is no longer natural. Moreover, such an approach (i.e. one confined to the class of positive r.v.'s) may well cause confusion, especially when the concept of subexponentiality is encountered for the first time. Thus, it is by no means obvious from the definition that the tail additivity property (1.2.2) holds for subexponential r.v.'s that can assume both negative and positive values; this fact requires a non-trivial proof and will be seen to be a consequence of Theorems 1.2.8 and 1.2.4(vi) (see Theorem 1.2.12(iii) below). Moreover, the fact that (1.2.2) holds for the distribution G of a signed r.v. ζ itself (and not for the distribution G_+ of the r.v. ζ^+) does not imply that G ∈ S (an example illustrating this observation is given in Remark 1.2.10 on p. 19). Therefore, to avoid vagueness and ambiguities, from the very beginning we will be considering the general case of distributions given on the whole real line.

As we have already seen in § 1.1.4, the class R of distributions with regularly varying right tails is a subset of the class S (see (1.1.39)).
Below we will see that in a similar way the so-called semiexponential distributions also belong to the class S (Theorem 1.2.21). One of the main properties of subexponential distributions G is that the corresponding functions G(t) are asymptotically locally constant in the following sense.
Definition 1.2.3. A function G(t) > 0 is said to be (asymptotically) locally constant (l.c.) if, for any fixed v,

G(t + v)/G(t) → 1  as  t → ∞.    (1.2.4)
In the literature, distributions with l.c. tails are often referred to as 'long-tailed distributions'; it would appear, however, that the term 'locally constant' better reflects the meaning of the concept. The class of all distributions G with l.c. tails G(t) will be denoted by L. For future reference, we will present the main properties of l.c. functions in the form of a separate theorem.

Theorem 1.2.4.
(i) For an l.c. function G(t) the convergence (1.2.4) is uniform in v on any fixed bounded interval.
(ii) A function G(t) is l.c. iff, for some t_0 > 0, it admits the following representation:

G(t) = c(t) exp{ ∫_{t_0}^t ε(u) du },  t ≥ t_0,    (1.2.5)

where the functions c(t) and ε(t) are measurable, with c(t) → c ∈ (0, ∞) and ε(t) → 0 as t → ∞.
(iii) If G_1(t) and G_2(t) are l.c. then G_1(t) + G_2(t), G_1(t)G_2(t), G_1^b(t) and G(t) := G_1(at + b), where a ≥ 0, b ∈ R, are also l.c. functions.
(iv) If G(t) is an l.c. function then, for any ε > 0, e^{εt} G(t) → ∞ as t → ∞. In other words, any l.c. function G(t) can be represented as

G(t) = e^{−l(t)},  l(t) = o(t) as t → ∞.    (1.2.6)

(v) Let

G^I(t) := ∫_t^∞ G(u) du < ∞

and let at least one of the two following conditions be met: (a) G(t) is an l.c. function, (b) G^I(t) is an l.c. function and G(t) is monotone. Then

G(t) = o(G^I(t))  as  t → ∞.    (1.2.7)

(vi) If G ∈ L then G^{2*}(t) ∼ (G_+)^{2*}(t) as t → ∞.
Remark 1.2.5. It follows from the assertion of part (i) of the theorem that the uniform convergence in (1.2.4) on the interval [−M, M] will also hold in the case when M = M(t) increases unboundedly (together with t) slowly enough.

Remark 1.2.6. The coinage of the term 'subexponential distribution' was apparently due mostly to the fact that the tail of such a distribution decays, as t → ∞, more slowly than any exponential function e^{−εt} with ε > 0. According to Theorem 1.2.4(iv), this is actually a property of a much wider class L of distributions with l.c. tails (the relationship between the classes S and L is discussed in more detail below, in Theorems 1.2.8, 1.2.17 and 1.2.25).

Proof of Theorem 1.2.4. (i)–(iii) It is evident from Definitions 1.1.1 and 1.2.3 that G(t) is an l.c. function iff L(t) := G(ln t) is an s.v.f. Therefore the assertion of part (i) immediately follows from Theorem 1.1.2 (the uniform convergence theorem for s.v.f.'s), whereas those of parts (ii) and (iii) follow from Theorems 1.1.3 and 1.1.4(i) respectively. The assertion of part (iv) follows from the integral representation (1.2.5).
(v) If condition (a) is met then, for any M > 0 and all sufficiently large t,

G^I(t) > ∫_t^{t+M} G(u) du > (1/2) M G(t).

Since M is arbitrary, G^I(t) ≫ G(t). Further, if condition (b) is satisfied then

G(t)/G^I(t) ≤ (1/G^I(t)) ∫_{t−1}^t G(u) du = G^I(t − 1)/G^I(t) − 1 → 0

as t → ∞.
(vi) Let ζ_1 and ζ_2 be independent copies of the r.v. ζ and let Z_2 := ζ_1 + ζ_2, Z_2^{(+)} := ζ_1^+ + ζ_2^+. Clearly ζ_i ≤ ζ_i^+, so that

G^{2*}(t) = P(Z_2 ≥ t) ≤ P(Z_2^{(+)} ≥ t) = (G_+)^{2*}(t).    (1.2.8)

However, for any M > 0,

G^{2*}(t) ≥ P(Z_2 ≥ t, ζ_1 > 0, ζ_2 > 0) + Σ_{i=1}^{2} P(Z_2 ≥ t, ζ_i ∈ [−M, 0]),

where the first term on the right-hand side equals P(Z_2^{(+)} ≥ t, ζ_1^+ > 0, ζ_2^+ > 0), and the last two can be estimated as follows: since G ∈ L, for any ε > 0 and for all sufficiently large M and t,

P(Z_2 ≥ t, ζ_1 ∈ [−M, 0]) ≥ P(ζ_2 ≥ t + M, ζ_1 ∈ [−M, 0]) = G(t + M)[P(ζ_1 ≤ 0) − P(ζ_1 < −M)] ≥ (1 − ε)G(t) P(ζ_1^+ = 0) = (1 − ε) P(Z_2^{(+)} ≥ t, ζ_1^+ = 0).

Thus we obtain for G^{2*}(t) the lower bound

G^{2*}(t) ≥ P(Z_2^{(+)} ≥ t, ζ_1^+ > 0, ζ_2^+ > 0) + (1 − ε) Σ_{i=1}^{2} P(Z_2^{(+)} ≥ t, ζ_i^+ = 0) ≥ (1 − ε)P(Z_2^{(+)} ≥ t) = (1 − ε)(G_+)^{2*}(t).

Thereby part (vi) is proved, as ε is arbitrarily small.

Later we will also need the following modification of the concept of an l.c. function.

Definition 1.2.7. Let ψ(t) ≥ 1 be a fixed non-decreasing function. A function G(t) > 0 is said to be ψ-asymptotically locally constant (ψ-l.c.) if, for any fixed v,

G(t + vψ(t))/G(t) → 1  as  t → ∞.    (1.2.9)

Clearly, any l.c. function will be ψ-l.c. for ψ ≡ 1 and, moreover, it will also be ψ-l.c. for a suitable (i.e. sufficiently slowly growing) function ψ(t) → ∞. In what follows, we will be using only monotone ψ-l.c. functions. For them the convergence in (1.2.9), as well as the convergence in (1.2.4), will obviously be uniform in v on any fixed bounded interval. In this case any ψ-l.c. function is l.c., so that the class of l.c. functions is at its broadest and includes all ψ-l.c. functions. Observe that any r.v.f. is ψ-l.c. for any function ψ(t) = o(t). For example, the function G(t) = e^{−ct^α}, α ∈ (0, 1) (a Weibull distribution tail), is ψ-l.c. for ψ(t) = o(t^{1−α}). However, the exponential function G(t) = e^{−ct} is not ψ-l.c. for any ψ. Accordingly, any ψ-l.c. function decays (or grows) more slowly than the exponential function (cf. Theorem 1.2.4(iv)).

Now we return to our discussion of subexponential distributions. First we consider the relationship between the classes S and L.

Theorem 1.2.8. One has S ⊂ L, and therefore all the assertions of Theorem 1.2.4 hold true for subexponential distributions. This inclusion is strict: not every distribution from the class L is subexponential.

Remark 1.2.9. Some sufficient conditions for a distribution G ∈ L to belong to the class S are given below, in Theorems 1.2.17 and 1.2.25.
Remark 1.2.10. In the case when a distribution G is not concentrated on [0, ∞), the tail additivity condition (1.2.2) alone will be insufficient for the function G(t) to be l.c. (and hence for ensuring the ‘subexponential decay’ of the distribution tail, cf. Remark 1.2.6). It is this fact that explains the necessity of defining subexponentiality in the general case in terms of the condition (1.2.2) on the distribution G+ of the r.v. ζ + . As we will see below (Corollary 1.2.16), the subexponentiality of a distribution G on R is actually equivalent to the combination of the conditions (1.2.2) (on the distribution G itself) and G ∈ L. The following example shows that for r.v.’s assuming values of both signs, the condition (1.2.2), generally speaking, does not imply subexponential behaviour for G(t). Example 1.2.11. Let μ > 0 be fixed and let the right tail of a distribution G have the form G(t) = e−μt V (t),
(1.2.10)
where V(t) is an r.v.f. converging to zero as t → ∞ and such that

g(μ) := ∫_{−∞}^{∞} e^{μy} G(dy) < ∞.

We have (cf. (1.1.41), (1.1.42))

G^{2∗}(t) = 2 ∫_{−∞}^{t/2} G(t − y) G(dy) + G²(t/2),

where

∫_{−∞}^{t/2} G(t − y) G(dy) = e^{−μt} ∫_{−∞}^{t/2} e^{μy} V(t − y) G(dy) = e^{−μt} ( ∫_{−∞}^{−M} + ∫_{−M}^{M} + ∫_{M}^{t/2} ).

One can easily see that, for M = M(t) → ∞ slowly enough as t → ∞, we get

∫_{−M}^{M} e^{μy} V(t − y) G(dy) ∼ g(μ) V(t),

whereas

e^{−μt} ( ∫_{−∞}^{−M} + ∫_{M}^{t/2} ) = o(G(t)),

G²(t/2) = e^{−μt} V²(t/2) ≤ c e^{−μt} V²(t) = o(G(t)).

Thus we obtain

G^{2∗}(t) ∼ 2g(μ) e^{−μt} V(t) = 2g(μ) G(t), (1.2.11)
Preliminaries
and it is clear that one can always find a distribution G (with a negative mean) such that g(μ) = 1. Then the relation (1.2.2) given in the definition of subexponentiality will hold true, although G(t) decays exponentially fast and therefore is not an l.c. function. Nevertheless, observe that the class of distributions that satisfy just the relation (1.2.2) is an extension of the class S, distributions from the former class possessing many properties of distributions from S.
Proof of Theorem 1.2.8. First we will prove that S ⊂ L. Since the definitions of both classes are given in terms of the right distribution tails, one can assume without loss of generality that G ∈ S+ (or just consider from the very beginning the distribution G+). For independent (non-negative) r.v.’s ζ_i ⊂= G we have, for t > 0,

G^{2∗}(t) = P(ζ1 + ζ2 ≥ t) = P(ζ1 ≥ t) + P(ζ1 + ζ2 ≥ t, ζ1 < t)
= G(t) + ∫_0^t G(t − y) G(dy). (1.2.12)

Since G(t) is a non-increasing function and G(0) = 1, we obtain for t > v > 0 that

G^{2∗}(t)/G(t) = 1 + ∫_0^v [G(t − y)/G(t)] G(dy) + ∫_v^t [G(t − y)/G(t)] G(dy)
≥ 1 + (1 − G(v)) + [G(t − v)/G(t)] (G(v) − G(t)).

Hence, for large enough t (such that G(v) − G(t) > 0),

1 ≤ G(t − v)/G(t) ≤ [1/(G(v) − G(t))] [ G^{2∗}(t)/G(t) − 2 + G(v) ].

Since G ∈ S+, the final right-hand side tends to G(v)/G(v) = 1 as t → ∞, and therefore G ∈ L. To complete the proof of the theorem, it suffices to give an example of a distribution G ∈ L \ S. Since in order to achieve this we will have to refer to a necessary condition for subexponentiality given below in Theorem 1.2.17, it seems natural to present such a construction after this theorem, which we do in Example 1.2.18 (see p. 27). One can also find examples of distributions from L that are not subexponential in [109, 229]. The next theorem states several important properties of subexponential distributions.
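Before stating it, the defining relation G^{2∗}(t) ∼ 2G(t) can be sanity-checked by a small Monte Carlo experiment (our sketch, not part of the book; the Pareto index, the point t and the sample size are arbitrary choices):

```python
import random
random.seed(1)

# For the Pareto tail G(t) = 1/t (t >= 1) the ratio G^{2*}(t)/G(t) is close
# to 2, which is the defining property of subexponentiality.
def pareto():
    return 1.0 / (1.0 - random.random())   # inverse-transform sample, tail 1/t

t, n = 200.0, 200_000
hits = sum(pareto() + pareto() >= t for _ in range(n))
ratio = (hits / n) / (1.0 / t)             # estimates G^{2*}(t)/G(t)
print(ratio)                               # close to 2, up to simulation error

# For the exponential distribution the analogous ratio is explicit and grows:
# P(zeta1 + zeta2 >= t) = (1 + t) e^{-t}, so G^{2*}(t)/G(t) = 1 + t -> infinity.
```

The exponential case shows how badly the tail-additivity property fails for light tails.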
Theorem 1.2.12. Let G ∈ S. Then the following assertions hold true.
(i) If Gi(t)/G(t) → ci as t → ∞, ci ≥ 0, i = 1, 2, c1 + c2 > 0, then G1 ∗ G2(t) ∼ G1(t) + G2(t) ∼ (c1 + c2)G(t).
(ii) If G0(t) ∼ cG(t) as t → ∞, c > 0, then G0 ∈ S.
(iii) For any fixed n ≥ 2,

G^{n∗}(t) ∼ nG(t) as t → ∞. (1.2.13)

(iv) For any ε > 0 there exists a value b = b(ε) < ∞ such that, for all n ≥ 2 and all t, one has G^{n∗}(t) ≤ b(1 + ε)^n G(t).
Remark 1.2.13. It is clear that the asymptotic relation G1(t) ∼ G2(t) as t → ∞ defines an equivalence relation on the set of distributions on R. Theorem 1.2.12(ii) means that the class S is closed with respect to that equivalence. One can easily see that in each equivalence subclass of S under this relation there is always a distribution with an arbitrarily smooth tail G(t). Indeed, let p(t) be an infinitely many times differentiable probability density on R vanishing outside [0, 1]; for instance, one can take p(x) = c exp{−1/[x(1 − x)]}, x ∈ (0, 1); p(x) = 0, x ∉ (0, 1). Now we will ‘smooth’ the function l(t) := − ln G(t), G ∈ S, by putting

l0(t) := ∫ p(t − u) l(u) du,  G0(t) := e^{−l0(t)}. (1.2.14)

It is evident that G0(t) is infinitely many times differentiable and, since l(t) is non-decreasing and we actually integrate only over [t − 1, t], one has l(t − 1) ≤ l0(t) ≤ l(t) and hence, by Theorem 1.2.8,

1 ≤ G0(t)/G(t) ≤ G(t − 1)/G(t) → 1 as t → ∞.

Therefore G0 is equivalent to the original G. A simpler smoothing procedure that leads to a less smooth asymptotically equivalent tail consists in replacing l(t) by its linear interpolation with nodes at the points (k, l(k)), k being an integer. Thus, up to an additive term o(1), the function l(t) = − ln G(t), G ∈ S, can always be assumed arbitrarily smooth. The aforesaid is clearly applicable to the class L as well: this class is also closed with respect to the above equivalence, and in each of its equivalence subclasses there are arbitrarily smooth representatives. Remark 1.2.14. Note that from Theorem 1.2.12(ii), (iii) it follows immediately that if G ∈ S then also G^{n∗} ∈ S, n = 2, 3, . . . Moreover, if we denote by
G^{n∨} the distribution of the maximum of i.i.d. r.v.’s ζ1, . . . , ζn ⊂= G then, from the obvious relation

G^{n∨}(t) = 1 − (1 − G(t))^n ∼ nG(t) as t → ∞ (1.2.15)
and Theorem 1.2.12(ii), one obtains that G^{n∨} also belongs to S. The relations (1.2.15) and (1.2.13) show that, in the case of a subexponential G, the tail G^{n∗}(t) of the distribution of the sum of a fixed number n of i.i.d. r.v.’s ζ_i ⊂= G is asymptotically equivalent (as t → ∞) to the tail G^{n∨}(t) of the maximum of these r.v.’s. This means that ‘large’ values of this sum are mainly due to the presence of a single ‘large’ summand ζ_i in it. One can easily see that this property is characteristic of subexponentiality.
Remark 1.2.15. Observe also that the converse of the assertion stated at the beginning of Remark 1.2.14 is true as well: if G^{n∗} ∈ S for some n ≥ 2 then G ∈ S ([112]; see also Proposition A3.18 of [113]). That G^{n∨} ∈ S implies that G ∈ S follows in an obvious way from (1.2.15) and Theorem 1.2.12(ii).
Proof of Theorem 1.2.12. (i) First assume that c1 c2 > 0 and that both distributions Gi are concentrated on [0, ∞). Fix an arbitrary ε > 0 and choose values M < N large enough that Gi(M) < ε, i = 1, 2, G(M) < ε and such that for t > N one has 1 − ε
0. If, say, c1 = 0, c2 > 0 then one can introduce the distribution G̃1 := (G1 + G)/2, for which clearly G̃1(t)/G(t) → c̃1 = 1/2, and therefore, by virtue of the assertion that we have already proved,

1/2 + c2 ∼ G̃1 ∗ G2(t)/G(t) = [G1 ∗ G2(t) + G ∗ G2(t)]/(2G(t)) = G1 ∗ G2(t)/(2G(t)) + [(1 + c2)/2](1 + o(1))

as t → ∞, so that G1 ∗ G2(t)/G(t) → c2 = c1 + c2.
(ii) Let G0^+ be the distribution of the r.v. ζ0^+, where ζ0 ⊂= G0. Since G0^+(t) = G0(t) for t > 0, it follows immediately from part (i) with G1 = G2 = G0^+ that (G0^+)^{2∗}(t) ∼ 2G0(t), i.e. G0 ∈ S.
(iii) If G ∈ S then, by Theorems 1.2.4(vi) and 1.2.8, one has, as t → ∞, G^{2∗}(t) ∼ (G^+)^{2∗}(t) ∼ 2G(t). The relation (1.2.13) follows straight away from part (i) using an induction argument.
(iv) One has G^{n∗}(t) ≤ G_+^{n∗}(t), n ≥ 1 (cf. (1.2.8)). Hence it is clear that it suffices to consider the case G ∈ S+. Put

αn := sup_{t≥0} G^{n∗}(t)/G(t).

Analogously to (1.2.12), for n ≥ 2 one has

G^{n∗}(t) = G(t) + ∫_0^t G^{(n−1)∗}(t − y) G(dy),
and therefore, for any M > 0,

αn ≤ 1 + sup_{0≤t≤M} ∫_0^t [G^{(n−1)∗}(t − y)/G(t)] G(dy) + sup_{t>M} ∫_0^t [G^{(n−1)∗}(t − y)/G(t − y)] [G(t − y)/G(t)] G(dy)
≤ 1 + 1/G(M) + α_{n−1} sup_{t>M} [G^{2∗}(t) − G(t)]/G(t).
Since G ∈ S, for any ε > 0 there exists an M = M(ε) such that

sup_{t>M} [G^{2∗}(t) − G(t)]/G(t) < 1 + ε,

and hence

αn ≤ b0 + α_{n−1}(1 + ε),  b0 := 1 + 1/G(M),  α1 = 1.

From here one obtains recursively

αn ≤ b0 + b0(1 + ε) + α_{n−2}(1 + ε)² ≤ · · · ≤ b0 ∑_{j=0}^{n−1} (1 + ε)^j ≤ (b0/ε)(1 + ε)^n.

The theorem is proved.
1.2.2 Sufficient conditions for subexponentiality
Now we will turn to a discussion of sufficient conditions for a given distribution G to belong to the class of subexponential distributions. As we already know, S ⊂ L (Theorem 1.2.8) and therefore the easily verified condition G ∈ L is, quite naturally, always present in conditions sufficient for G ∈ S. First we will present a simple assertion that clarifies the subexponentiality condition for signed r.v.’s; it follows from Theorems 1.2.8, 1.2.12(iii) and 1.2.4(vi). Corollary 1.2.16. A distribution G belongs to S iff G ∈ L and G^{2∗}(t) ∼ 2G(t) as t → ∞. The next theorem basically paraphrases the laconic definition of subexponentiality in terms of the relative smallness of the probabilities of certain simple events for a pair of independent r.v.’s with a common distribution G. Theorem 1.2.17. (i) Let G ∈ S. Then, for any fixed p ∈ (0, 1), as t → ∞,

G(pt) G((1 − p)t) = o(G(t)), (1.2.20)

and for any M = M(t) → ∞ such that M ≤ pt ≤ t − M one has

∫_M^{pt} G(t − y) G(dy) = o(G(t)). (1.2.21)
(ii) Conversely, let G ∈ L and, for some p ∈ (0, 1), let relation (1.2.20) hold. Moreover, suppose that for some M = M(t) → ∞ such that

c0 := lim sup_{t→∞} G(t − M)/G(t) < ∞ (1.2.22)

… if b > 0 is fixed then G(bt) ∼ c_b G(t) as t → ∞ (where c_b = b^{−α} when G(t) is of the form (1.1.2)). Following [42], we will now introduce the following class of functions.
Definition 1.2.20. A function G(t) is said to be an upper-power function if it is l.c. and, for any b > 1, there exists a c(b) > 0 such that

G(bt) > c(b) G(t), t > 0, (1.2.26)
where c(b) is bounded away from zero on any interval (1, b1), b1 < ∞. In other words, for any p ∈ (0, 1) one has G(pt) < c_p G(t) for some c_p < ∞, where c_p is bounded on any interval (p1, 1), 0 < p1 < 1. One can easily see that if the function G(t) is non-increasing and the condition (1.2.26) is satisfied for some b > 1 then this condition will also be met for any b > 1, and c(b) will be bounded away from zero on any interval (1, b1), b1 < ∞. In the literature, the property (1.2.26) is often referred to as dominated variation (see e.g. § 1.4 of [113]). It is clear that the class of upper-power functions is broader than the class of r.v.f.’s. In particular, it contains functions obtained by multiplying r.v.f.’s by ‘slowly oscillating’ factors m(t) that are bounded away from zero. It turns out that such functions still belong to S and, moreover, applying the above operation (under the additional condition that the function m(t) is bounded) to the tails of subexponential distributions does not take one outside S. Theorem 1.2.21. (i) If G(t) is an upper-power function then G ∈ S. (ii) Let the tail of a distribution G0 be of the form G0(t) = m(t)G(t), where G ∈ S and m(t) is an l.c. function such that, for all t,

0 < m1 ≤ m(t) ≤ m2 < ∞. (1.2.27)
Then G0 ∈ S. Proof. (i) This assertion follows from Corollary 1.2.19 and the above remarks. (ii) It suffices to verify the conditions of Theorem 1.2.17(ii) for p = 1/2. Denote by M = M (t) → ∞ (as t → ∞) a function for which the convergence G(t + v)/G(t) → 1 is uniform in v ∈ [−M, M ] (see Remark 1.2.5 on p. 17). Clearly, G0 ∈ L (cf. Theorem 1.1.4(i)). Therefore it remains only to verify (1.2.20) and (1.2.21) for G0 (t), p = 1/2 and the chosen M = M (t). By virtue
of (1.2.27) and the relation (1.2.20) for G ∈ S (which holds owing to Theorem 1.2.17(i)),

G0²(t/2) ≤ m2² G²(t/2) = o(G(t)) = o(G0(t)),

so that (1.2.20) also holds for G0(t). Further, quite similarly to the bounds (1.2.18), (1.2.19) from the proof of Theorem 1.2.12(i), one derives that

∫_M^{t/2} G0(t − y) G0(dy) = O( ∫_M^{t/2} G(t − y) G(dy) ) + o(G(t)) = o(G(t));
the last equality holds owing to the fact that G ∈ S and to the relation (1.2.21) for G(t) (which holds by Theorem 1.2.17(i)). Since G(t) = O(G0 (t)) by condition (1.2.27), the above bound immediately implies the relation (1.2.21) for G0 (t). The theorem is proved.
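A hypothetical numerical instance of Theorem 1.2.21(ii) (ours, not from the book): take a regularly varying tail multiplied by the ‘slowly oscillating’ factor m(t) = 2 + sin(ln t), which is l.c. and bounded between m1 = 1 and m2 = 3.

```python
import math

m  = lambda t: 2.0 + math.sin(math.log(t))
G  = lambda t: t ** -2.0                   # regularly varying tail, index -2
G0 = lambda t: m(t) * G(t)                 # tail of the form in (1.2.27)

# m is l.c.: m(t + v)/m(t) -> 1 for fixed v, since ln(t + v) - ln(t) -> 0.
print([round(m(10.0 ** k + 5.0) / m(10.0 ** k), 6) for k in (3, 6, 9)])

# G0 is upper-power: with b = 2, G0(2t)/G0(t) = (m(2t)/m(t))/4 stays within
# [m1/(4 m2), m2/(4 m1)] = [1/12, 3/4], i.e. bounded away from zero.
ratios = [G0(2 * t) / G0(t) for t in (10.0 ** k for k in range(1, 12))]
print(min(ratios), max(ratios))
```

The oscillating factor destroys regular variation of G0 but not the dominated-variation property (1.2.26), in line with the theorem.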
1.2.3 Further sufficient conditions for distributions to belong to S. Relationship to the class of semiexponential distributions
Theorem 1.2.17 essentially gives necessary and sufficient conditions for a distribution to belong to S. So far, using the theorem we have only established the sufficient conditions from Theorem 1.2.21, which, as we will see from what follows, are quite narrow. To construct broader (in a certain sense) sufficient conditions, we will introduce the class of so-called semiexponential distributions. This is a rather wide class (in particular, it includes, along with the class R, the lognormal distribution, the Weibull distribution G(t) = e^{−t^α}, 0 < α < 1, and many others). Definition 1.2.22. A distribution G belongs to the class Se of semiexponential distributions if
(i) G(t) = e^{−l(t)}, where, for some s.v.f. L(t), the following representation holds true:

l(t) = t^α L(t), α ∈ [0, 1]; L(t) → 0 as t → ∞ when α = 1; (1.2.28)

(ii) as t → ∞, for Δ = Δ(t) = o(t) one has

l(t + Δ) − l(t) = (α + o(1)) (Δ/t) l(t) + o(1). (1.2.29)
We will denote the subclass of the distributions G ∈ Se for which the index of the function l(t) is equal to α by Se(α), α ∈ [0, 1]. Condition (ii) could be rewritten in the following equivalent form: for any fixed δ > 0, as t → ∞,

l(t + Δ) − l(t) ∼ αΔl(t)/t for α > 0,
l(t + Δ) − l(t) = o(Δl(t)/t) for α = 0,  if (Δ/t) l(t) > δ; (1.2.30)

l(t + Δ) − l(t) = o(1) for α ≥ 0, if (Δ/t) l(t) → 0. (1.2.31)

As we will see below, in the proof of Theorem 1.2.36, it is this property that, in a sense, plays the central role in determining whether G ∈ Se. The first property in (1.2.28) (i.e. that the representation l(t) = t^α L(t), where L is an s.v.f., holds) is, in many aspects, a consequence of (1.2.29). This observation will be important when we compare the classes S and Se. Remark 1.2.23. It is seen from Definition 1.2.22 that if G ∈ Se(α) for some α ∈ [0, 1] then the distribution G1 with tail G1(t) ∼ G(t) (or, which is the same, with l1(t) := − ln G1(t) = l(t) + o(1)) as t → ∞ also belongs to the subclass Se(α). Thus, like the class S (cf. Remark 1.2.13 on p. 21), each subclass Se(α) is closed with respect to the relation of asymptotic equivalence and can be partitioned in a similar way into equivalence subclasses. The last assertion is evidently applicable to the whole class Se as well. Remark 1.2.24. From the above it follows that the definition of the class Se can be rewritten in the following way: G ∈ Se if G(t) = e^{−l(t)+o(1)}, where l(t) admits the representation (1.2.28), and for all Δ = o(t) one has

l(t + Δ) − l(t) ∼ αΔl(t)/t for α > 0,
l(t + Δ) − l(t) = o(Δl(t)/t) for α = 0. (1.2.32)
In this definition, the function l(t) (and therefore the function L(t)) can, without loss of generality, be assumed to be differentiable, since it can be ‘smoothed’ without leaving a given equivalence subclass (as in Remark 1.2.13). If L(t) is differentiable and L′(t) = o(L(t)/t) as t → ∞, then the relation (1.2.32) for all Δ = o(t) follows immediately from the representation (1.2.28). Now we will turn to the relationship between the class Se (and its subclasses Se(α)) and the distribution classes that we have already considered. First we will show that

R ⊂ Se(0), but R ≠ Se(0). (1.2.33)
That is, the class R of distributions with tails regularly varying at infinity proves to be strictly smaller than Se(0). For this and other reasons also (in particular, the importance of the class R in applications), in the consequent exposition we will single out this class and consider it separately from Se.
We now return to (1.2.33). The first relation is next to obvious. Indeed, for G ∈ R one has G(t) = t^{−β} L_G(t), β ≥ 0, where L_G(t) is an s.v.f., so that l(t) = − ln G(t) = β ln t − ln L_G(t), and hence, as one can easily verify, conditions (1.2.28) and (1.2.29) are met for α = 0. To establish the second relation in (1.2.33), we will construct an example of a distribution G ∈ Se(0) \ R. Put l(t) := L0(t) ln t, where L0(t) is the s.v.f. from (1.1.5), which oscillates between the values 1 and 3. Clearly G(t) = e^{−l(t)} is not an r.v.f., whereas l(t) is an s.v.f. by Theorem 1.1.4(i), and it is not hard to verify that condition (1.2.29) holds with α = 0, so that G ∈ Se(0). Now we will present a number of further assertions characterizing the class S and its connections with Se and its subclasses Se(α). Theorem 1.2.25. Let G ∈ L and let the function l(t) = − ln G(t) possess the property

lim sup_{t→∞, u→1} [ l(t + u z(t)) − l(t) ] < 1 for z(t) := t/l(t). (1.2.34)

Then G ∈ S. Remark 1.2.26. The property (1.2.34) can be rewritten in the following form: if Δ = Δ(t) ∼ z(t) as t → ∞ then, for all sufficiently large t, one has

l(t + Δ) − l(t) ≤ (α + o(1)) (Δ/t) l(t) + q (1.2.35)
32
Preliminaries
Corollary 1.2.28. (i) If, for some c > 0 and α ∈ [0, 1) and all Δ ∈ [c, z(t)], the function l(t) = − ln G(t) satisfies the inequality l(t + Δ) − l(t) (α + o(1))
Δ l(t) + o(1) as t
t→∞
(1.2.36)
then G ∈ S. (ii) For α ∈ [0, 1) one has Se(α) ⊂ S. Proof of Corollary 1.2.28. (i) One can easily see that inequality (1.2.36) ensures that the conditions of Theorem 1.2.25 are met. Indeed, (1.2.36) implies (1.2.35) and therefore also (1.2.34) by virtue of Remark 1.2.26. It remains to demonstrate that G ∈ L. To this end, note that from (1.2.36) with Δ = c it follows that l(t+c)−l(t) = O(l(t)/t)+o(1), and hence we just have to show that l(t) = o(t). This relation follows immediately from (1.2.36) and the next lemma. Lemma 1.2.29. If, for some γ0 < 1 and t0 < ∞, l(t + z(t)) − l(t) γ0
for
t t0
(1.2.37)
then l(t) = o(t) as t → ∞ (in other words, z(t) → ∞). Observe that (1.2.34) implies (1.2.37). Proof. Consider the increasing sequence s0 := t0 , sn+1 := sn + z(sn ), Owing to (1.2.37) and the obvious inequality "
x1 + x2 x1 x2 min , y1 + y2 y1 y2
n = 0, 1, 2, . . .
for
(1.2.38)
y1 , y2 > 0
we obtain z(sn+1 ) =
sn + z(sn ) sn + z(sn ) min{z(sn ), z(sn )/γ0 } l(sn + z(sn )) l(sn ) + γ0
= z(sn ) z(sn−1 ) · · · z(s0 ) = z(t0 ) > 0.
(1.2.39)
Hence sn → ∞ as n → ∞, and there exists a (possibly infinite) limit z∞ := lim z(sn ) z(t0 ) > 0. n→∞
Since for t ∈ [sn , sn+1 ] z(t) =
t sn sn z(sn ) ∼ z(sn ), = l(t) l(sn+1 ) l(sn ) + γ0 1 + γ0 /l(sn )
to complete the proof of the lemma we just have to show that z∞ = ∞. Assume the contrary, that z∞ < ∞, so that z(sn ) = (1 − ε1 (n))z∞ ,
ε1 (n) → 0 as n → ∞.
(1.2.40)
33
1.2 Subexponential distributions This implies that there exists an infinite subsequence {n } such that l(sn +1 ) − l(sn ) (1 − ε2 (n ))
z(sn ) , z∞
ε2 (n ) → 0.
(1.2.41)
Indeed, if this were not the case then there would exist a δ > 0 and an n0 < ∞ such that, for all n n0 , l(sn+1 ) − l(sn )
z(sn ) z∞ + δ
and therefore l(sn ) = l(sn0 ) +
n
l(sk ) − l(sk−1 )
k=n0 +1
l(sn0 ) +
1 z∞ + δ
n
z(sk−1 ) = l(sn0 ) +
k=n0 +1
sn − sn0 , z∞ + δ
whence lim inf n→∞ z(sn ) z∞ + δ, which would contradict the definition of z∞ . It remains to notice that from (1.2.40) and (1.2.41) one has l(sn + z(sn )) − l(sn ) (1 − ε1 (n ))(1 − ε2 (n )) → 1
as
n → ∞.
This contradicts (1.2.37), therefore z∞ = ∞. The lemma is proved. We return to the proof of Corollary 1.2.28. (ii) Since z(t) = o(t) when α < 1, the property (1.2.29) of the elements of Se(α) with α < 1 clearly implies (1.2.36). Corollary 1.2.28 is proved. The proof of Theorem 1.2.25 will be preceded by the following lemma. Lemma 1.2.30. If condition (1.2.37) is satisfied for the function l(t) then there exist t0 < ∞, γ < 1 and a continuous piecewise differentiable function h(t) such that, for all t t0 , |h(t) − l(t)| γ, γh(t) , t γ t h(t) , h(u) u
(1.2.42)
h (t)
(1.2.43) t0 u t.
(1.2.44)
In particular, h(t) h(t0 )(t/t0 )γ . Proof. Suppose that (1.2.37) holds. Since l(t) → ∞, we can assume without loss of generality that γ0 + 1 γ0 γ := . 1 − 1/l(t0 ) 2
34
Preliminaries
Using the sequence (1.2.38) with s0 = t0 , denote by h(t) the continuous piecewise linear function with nodes at the points (sn , l(sn )), n 0: h(t) := l(sn ) +
l(sn+1 ) − l(sn ) (t − sn ), z(sn )
t ∈ [sn , sn+1 ].
Since sn → ∞ as n → ∞ by virtue of (1.2.39), we have thus defined the function h(t) on the entire half-line [t0 , ∞) (its definition on the left of the point t0 is inessential for our purposes). It is evident that, owing to (1.2.37), on the interval [sn , sn+1 ] one has the inequalities l(sn ) l(t) l(sn+1 ) l(sn ) + γ0 , l(sn ) h(t) l(sn+1 ) l(sn ) + γ0 ,
(1.2.45)
which proves (1.2.42) since γ0 < γ < 1. Further, to obtain (1.2.43) note that for t ∈ [sn , sn+1 ] we have sn t − z(sn ) and therefore γ0 γ0 l(sn ) l(sn+1 ) − l(sn ) γ0 h(t) = z(sn ) z(sn ) sn t − z(sn ) γ0 h(t) h(t) γ0 = 1 − z(sn )/t t 1 − z(sn )/sn t h(t) γh(t) γ0 , = 1 − 1/l(sn ) t t
h (t) =
since l(sn ) l(t0 ). Finally, (1.2.44) is almost obvious from (1.2.43): integrating the latter inequality on the interval [u, t], we get ln
t h(t) γ ln , h(u) u
(1.2.46)
which proves (1.2.44). The lemma is proved. Proof of Theorem 1.2.25. We will make use of Theorem 1.2.17(ii) with p = 1/2 and M = z(t). Recall that, under the conditions of the theorem, by virtue of Remark 1.2.27 one has z(t) ∼ z( ! t ) for ! t := t − z(t). Therefore, by (1.2.34), G(t − M ) = exp{l(t) − l( ! t )} = exp{l( ! t + z(t)) − l( ! t )} < e G(t) for all sufficiently large t, and so condition (1.2.22) of Theorem 1.2.17(ii) is satisfied. Further, for the function h from Lemma 1.2.30 one has |l(t)−h(t)| < γ, t t0 . Therefore, if we establish relations (1.2.20), (1.2.21) for the distribution Gh with Gh (t) = e−h(t) then they will be valid for the original distribution G as well. So to simplify the exposition one could assume from the very beginning that the
35
1.2 Subexponential distributions
function l(t) has the properties (1.2.43), (1.2.44). From these properties it follows that for any v ∈ [0, 1/2] and t t0 one has l(t) − l(vt) − l((1 − v)t) [1 − vγ − (1 − v)γ ]l(t).
(1.2.47)
Next, use the inequality 1 − (1 − v)γ (2γ − 1)v γ ,
v ∈ [0, 1/2],
of which the left-hand (right-hand) side is a concave (convex) function on the indicated interval, the values of these functions coinciding with each other at the end points v = 0 and v = 1/2 of the interval. Together with (1.2.47) the inequality implies that l(t) − l(vt) − l((1 − v)t) −cv γ l(t),
c = 2 − 2γ > 0.
(1.2.48)
From this we immediately obtain (1.2.23), so that the condition (1.2.20) of Theorem 1.2.17 is satisfied (for any p ∈ (0, 1)). As we have already noted, under the conditions of Theorem 1.2.25 there exist t0 < ∞ and γ0 < 1 such that (1.2.37) holds true. As z(t) → ∞ by Lemma 1.2.29 (or simply by virtue of the assumption G ∈ L and Theorem 1.2.4(iv)), one has z(t) t0 for all sufficiently large t. Denote by {sn ; n 0} the sequence constructed according to (1.2.38) with initial value s0 = z(t). Further, inequality (1.2.48) implies that t/2 t/2 G(t − y) G(dy) = G(t) exp{l(t) − l(y) − l(t − y)} dl(y) M
z(t)
t/2 G(t)
exp{−c(y/t)γ l(y)} dl(y) =: G(t)I(t).
z(t)
(1.2.49) Put N := min{n 1 : sn t/2} and represent I(t) as a sum of integrals over the intervals [sn , sn+1 ). Observe that, according to (1.2.37), the increments of the function l(t) on each of these intervals do not exceed γ. Therefore, using the inequalities sn s0 + nz(s0 ) (n + 1)z(t), sn+1 sn+1 l(sn+1 ) − l(sn ) − + 1 1, z(sn+1 ) z(sn ) which are valid owing to (1.2.38), (1.2.39) and our choice of s0 , we obtain from
36
Preliminaries
the dominated convergence theorem that I(t)
N −1
sn+1
exp{−c(y/t)γ l(t)} dl(y)
n=0 s n ∞
N −1
exp{−c(sn /t)γ l(t)}
n=0
exp{−c(n + 1)γ l1−γ (t)} → 0
as t → ∞.
n=0
Together with (1.2.49) this implies that condition (1.2.21) of Theorem 1.2.17 is satisfied. Thus, all the conditions of Theorem 1.2.17(ii) are met, and therefore G ∈ S. The theorem is proved. We will present one more important consequence of Theorem 1.2.25. First recall that, by virtue of Remark 1.2.13 (p. 21), for distributions G ∈ S one can always assume that the function l(t) = − ln G(t) (up to an additive o(1) correction term, t → ∞) is differentiable arbitrarily many times. A similar assertion holds for G ∈ Se as (cf. Remark 1.2.24 on p. 30). Theorem 1.2.31. Let G(t) = e−l(t)+o(1) , where l(t) is continuous and piecewise differentiable and lim sup z(t)l (t) < 1,
z(t) = t/l(t).
(1.2.50)
t→∞
Then the conditions of Theorem 1.2.25 are satisfied, so that G ∈ S. Proof. The condition (1.2.50) clearly means that, for some γ < 1 and t0 < ∞, one has l (t) γl(t)/t for t t0 . As shown in Lemma 1.2.30, this relation implies (1.2.46), so that ln
t+Δ l(t + Δ) γ ln , l(t) t
Hence, for Δ = o(t), t → ∞, one has Δ l(t + Δ) 1+ l(t) t
t t0 , Δ > 0.
γ
=1+
Δ (γ + o(1)). t
From here the relation (1.2.36) is immediate. It remains to use Corollary 1.2.28. Theorem 1.2.31, in its turn, implies the following. Corollary 1.2.32. For G ∈ S it suffices that lim sup t [ln(− ln G(t))] < 1.
(1.2.51)
t→∞
Next we will give one more consequence of the above results. It concerns conditions for G ∈ S that are, in a sense, close to the necessary ones and are not related to the differentiability properties of the function l(t) = − ln G(t). The condition to be presented will be expressed simply in terms of some restrictions
37
1.2 Subexponential distributions
on the asymptotic behaviour of the increments l(t + Δ) − l(t) for some function Δ = Δ(t) such that 0 < c Δ = o(z(t)) as t → ∞. First note that, for an l(t) = t−α L(t) with a ‘regular’ s.v.f. L(t), the derivative l (t) is close to αl(t)/t = α/z(t) (cf. Theorem 1.1.4(iv)), so that in this situation 1/z(t) characterizes the decay rate of the function l (t) as t → ∞. Theorem 1.2.31 shows that a sufficient condition for G ∈ S is that the ratio of l (t) and 1/z(t) is bounded away (from below) from unity as t → ∞. In the general case, we will consider the ratio r(t, Δ) := z(t)(l(t + Δ) − l(t))/Δ of the difference analogue (l(t + Δ) − l(t))/Δ of the derivative of l(t) and the function 1/z(t). Theorem 1.2.33. (i) Let there exist a function Δ = Δ(t) such that 0 < c Δ(t) = o(z(t)) as
t→∞
(1.2.52)
α+ (G, Δ) := lim sup r(t, Δ(t)) < 1.
(1.2.53)
and t→∞
Then G ∈ S. (ii) Conversely, if G ∈ S then, for any function Δ = Δ(t) c > 0, we have α− (G, Δ) := lim inf r(t, Δ(t)) 1. t→∞
(1.2.54)
Remark 1.2.34. Observe that (1.2.52) contains the condition that z(t) → ∞ as t → ∞ or, equivalently, that l(t) = o(t). The proof of part (ii) of the theorem shows that in this case one always has α− (G, Δ) 1. To prove the theorem we will need the following analogue of Lemma 1.2.30. Lemma 1.2.35. If the conditions of Theorem 1.2.33(i) are satisfied for l(t) then there exist t0 < ∞, γ < 1 and a continuous piecewise differentiable function h(t) such that |h(t) − l(t)| = o(1) as h (t)
γh(t) t
for all
t → ∞, t t0 .
(1.2.55) (1.2.56)
Proof. The proof of the lemma essentially repeats that of Lemma 1.2.30 but with z(t) replaced by Δ(t) when constructing the sequence (1.2.38): now we put sn+1 := sn + Δ(sn ). The value s0 is chosen to be such that r(t, Δ(t)) γ < 1 for all t s0 (its existence is ensured by condition (1.2.53)). It is clear that sn → ∞ as n → ∞ by virtue of (1.2.52).
38
Preliminaries
Denote by h(t) a continuous piecewise linear function with nodes (sn , l(sn )), n 0. Then clearly l(sn+1 ) − l(sn )
γΔ(sn ) =: rn = o(1) as z(sn )
n → ∞.
(1.2.57)
Since both l(t) and h(t) are non-decreasing functions we find, cf. (1.2.45), that the deviation of these functions from each other on the interval [sn , sn+1 ] also does not exceed rn . This proves (1.2.55). Further, from (1.2.57) and (1.2.55) we obtain that, for any given ε > 0 and for all sufficiently large n, one has h (t) =
γ γ(1 + ε)h(t) l(sn+1 ) − l(sn ) , Δ(sn ) z(sn ) t
t ∈ [sn , sn+1 ].
Increasing, if necessary, the values of the previously chosen quantities s0 and γ < 1, we arrive at (1.2.56). The lemma is proved. Proof of Theorem 1.2.33. (i) This assertion is an obvious consequence of Theorem 1.2.31 and Lemma 1.2.35. (ii) Assume that α− (G, Δ) > 1 for some function Δ(t) c > 0. Then, evidently, sn := sn−1 + Δ(sn−1 ) → ∞ as n → ∞ and, for a sufficiently large s0 , one has r(t, Δ(t)) 1 for all t s0 . Therefore, for all n 0 we have z(sn+1 ) =
sn + Δ(sn ) sn + Δ(sn ) = z(sn ). l(sn + Δ(sn )) l(sn ) + Δ(sn )/z(sn )
Thus the sequence {z(sn )} is non-increasing, so that l(sn ) sn /z(s0 ), n 0. Since sn → ∞, the above relation implies that l(t) = o(t), which is inconsistent with G ∈ S by Theorems 1.2.8 and 1.2.4(iv). The theorem is proved. It follows from the above assertions that the ‘regular’ part of the class S, i.e. the part consisting of the elements for which the upper and lower limits α± in (1.2.53) and (1.2.54) coincide with each other, is nothing other than the class Se of semiexponential distributions. More precisely, denote by S(α) ⊂ S the subclass of distributions G ∈ S for which there exists a function Δ = Δ(t) satisfying (1.2.52) and such that α+ (G, Δ) = α− (G, Δ) = α ∈ [0, 1]: "
there is a Δ such that (1.2.52) holds . (1.2.58) S(α) := G ∈ S : and α+ (G, Δ) = α− (G, Δ) = α Recall that Se(α) denotes the subclass of semiexponential distributions with index α ∈ [0, 1] (see Definition 1.2.22 on p. 29). Theorem 1.2.36. For α ∈ [0, 1), S(α) = Se(α). For α = 1 one has S(1) ⊂ Se(1).
For distributions G ∈ Se(1), to ensure that G ∈ S(1) one needs, generally speaking, additional conditions on the s.v.f. L(t) → 0 in the representation (1.2.28). For more detail, see Remark 1.2.37 below.
Proof. As we observed in Remark 1.2.23 (p. 30), the subclasses Se(α) are closed with respect to asymptotic equivalence: if G1 ∈ Se(α) and G1(t) ∼ G(t) (or, equivalently, l1(t) := − ln G1(t) = l(t) + o(1)) as t → ∞ then G ∈ Se(α). Hence, to prove the inclusion

S(α) ⊂ Se(α), α ∈ [0, 1], (1.2.59)
it suffices to show that, for any G ∈ S(α), there is an asymptotically equivalent distribution G1 ∈ Se(α). Assume that G ∈ S(α). To construct the required function G1(t) we will employ the linear interpolation h(t) from the proof of Lemma 1.2.35 (it is obtained using the function Δ(t) from (1.2.58)) and put l1(t) := h(t), G1(t) := e^{−l1(t)}. The same argument as in Lemma 1.2.35, together with the condition α+(G, Δ) = α−(G, Δ) = α, shows that there exists the limit

lim_{t→∞} t [ln l1(t)]′ = α.

Therefore, [ln l1(t)]′ = α/t + ε(t)/t, where ε(t) → 0 as t → ∞, so that

ln l1(t) = α ln t + ∫_1^t [ε(u)/u] du,

and hence

l1(t) = t^α L(t), where L(t) = exp{ ∫_1^t [ε(u)/u] du }. (1.2.60)
By virtue of Theorem 1.1.3, the last representation means that L(t) is an s.v.f. Since G ∈ S, by Theorem 1.2.12(ii) we have G1 ∈ S and therefore l1(t) = o(t) (see Theorems 1.1.2(iv) and 1.1.3), so that L(t) = o(1) when α = 1. Further, it is clear that

∫_t^{t+Δ} [ε(u)/u] du = o(Δ/t) as Δ = o(t),

so that from (1.2.60) we obtain

l1(t + Δ) = l1(t) (1 + Δ/t)^α e^{o(Δ/t)}.

Therefore

l1(t + Δ) − l1(t) = (α + o(1)) (Δ/t) l1(t).

Thus the conditions of Definition 1.2.22 are satisfied for the function G1(t). So
we have proved that G1 ∈ Se(α) (and hence that G ∈ Se(α)), which establishes (1.2.59).
Now let G ∈ Se(α), α ∈ [0, 1). To prove that G ∈ S(α) it suffices, by virtue of Theorem 1.2.33(i), to show that there is a function Δ = Δ(t) satisfying (1.2.52), for which there exists the limit α+(G, Δ) = α−(G, Δ) = α (note that we consider only the case α < 1). But the existence of such a function Δ(t) directly follows from the relation (1.2.29) in Definition 1.2.22. The relation implies that there exists a function ε = ε(t) that converges to zero slowly enough as t → ∞ and has the property that, for Δ(t) := ε(t) t/l(t) = o(z(t)), one has Δ(t) ≥ c > 0 and

l(t + Δ(t)) − l(t) = (α + o(1)) (Δ(t)/t) l(t).

The last relation means that r(t, Δ(t)) → α as t → ∞. The theorem is proved.
Remark 1.2.37. In the boundary case where α+(G, Δ) = 1 in (1.2.53), both G ∈ S and G ∉ S are possible. Before discussing the conditions for a distribution G ∈ Se(1) to belong to the class of subexponential distributions, we will give an example of a distribution G ∈ Se(1) \ S.
Example 1.2.38. Set l(t) := tL(t), where

L(t) := k^{−1} for t ∈ [2^{2k}, 2^{2k+1}],  L(t) := (log2 t − k − 1)^{−1} for t ∈ [2^{2k+1}, 2^{2k+2}],  k = 1, 2, . . .
Clearly, L(t) ∼ 2/ log2 t, L (t) = 0 for t ∈ (22k , 22k+1 ) and L (t) = −
log2 e t(log2 t − k − 1)2
for t ∈ (22k+1 , 22k+2 ),
so that in any case L (t) = o(L(t)/t). Hence for Δ = Δ(t) = o(t) one has L(t + Δ) ∼ L(t) and l(t + Δ) − l(t) = ΔL(t + Δ) + t(L(t + Δ) − L(t)) = (1 + o(1))ΔL(t) + O(tL (t)Δ) = (1 + o(1))ΔL(t).
(1.2.61)
Thus the conditions for G ∈ Se(1) from Definition 1.2.22 are satisfied. However, for t = 22k+1 , 2l(t/2) − l(t) = t[L(t/2) − L(t)] = 22k+1 [L(22k ) − L(22k+1 )] = 0, so that condition (1.2.20) (or equivalently (1.2.23)) for p = 1/2 is not satisfied and therefore G ∈ S by Theorem 1.2.17(i).
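The vanishing of 2l(t/2) − l(t) along the subsequence t = 2^{2k+1} is easy to confirm numerically. The following sketch (not part of the original text; the index k = 4 used below is an arbitrary choice) implements the piecewise function L(t) of Example 1.2.38:

```python
import math

# Numerical check of Example 1.2.38: for t = 2^(2k+1) one gets
# 2*l(t/2) - l(t) = 0, so condition (1.2.23) fails along this subsequence.
def L(t):
    lg = math.log2(t)
    k = int(lg // 2)              # t lies in [2^(2k), 2^(2k+2))
    if lg <= 2 * k + 1:           # first half: L(t) = 1/k
        return 1.0 / k
    return 1.0 / (lg - k - 1)     # second half: L(t) = 1/(log2 t - k - 1)

def l(t):
    return t * L(t)

t = 2.0 ** 9                      # t = 2^(2k+1) with k = 4
print(2 * l(t / 2) - l(t))        # 0.0
```

The same cancellation occurs for every k, since L(2^{2k}) = L(2^{2k+1}) = 1/k by construction.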
That a distribution G ∈ Se(1) belongs to the class S can only be proved under the additional assumption that the derivative L′(t) is regularly varying. Namely, the following assertion holds true.

Theorem 1.2.39. Assume that G ∈ Se(1) and that the function L(t) in the representation l(t) = − ln G(t) = tL(t) is differentiable for t > t_0 for large enough t_0. If

ε(t) := −tL′(t) is an s.v.f.   (1.2.62)

at infinity and, for some γ ∈ (0, 1),

γ ε(t) L^{(−1)}(L(t)/γ) + ln ε(t) − ln L(t) → ∞,   (1.2.63)

where L^{(−1)}(u) = inf{t : L(t) < u} is the inverse to the function L(t), then G ∈ S.

Example 1.2.40. Consider a distribution G with l(t) = tL(t), where L(t) = ln^{−β} t, β > 0, for t > e. Clearly, in this case ε(t) = β ln^{−β−1} t and L^{(−1)}(u) = exp{u^{−1/β}}, so that for γ = 1/2 the left-hand side of (1.2.63) is equal to

2^{−1} β t^{2^{−1/β}} ln^{−β−1} t + ln β − ln ln t → ∞  as t → ∞.
Thus, the conditions of Theorem 1.2.39 are satisfied and therefore G ∈ S for all¹ β > 0.

Proof of Theorem 1.2.39. We will make use of Theorem 1.2.17(ii) with p = 1/2. First note that, for t > t_0,

L(t) = ∫_t^∞ ε(u)/u du,   (1.2.64)

so that ε(t) = o(L(t)) by Theorem 1.1.4(iv) (for the case α = 1). Therefore l′(t) = L(t) − ε(t) ∼ L(t), and hence, for M = o(t),

l(t) − l(t − M) ∼ ∫_{t−M}^t L(u) du ∼ M L(t).

Since L(t) → 0 as t → ∞, we obtain immediately that G ∈ L (choosing M := c), and also that condition (1.2.22) is satisfied for M := 1/L(t) → ∞ (note that 1/L(t) = o(t) by Theorem 1.1.4(ii)). Further, again by Theorem 1.1.4 (part (iv) in the case α = 0 and part (ii)) we have that, as t → ∞,

2l(t/2) − l(t) = t[L(t/2) − L(t)] = t ∫_{t/2}^t ε(u)/u du ≥ ∫_{t/2}^t ε(u) du ∼ tε(t) − (t/2)ε(t/2) ∼ (t/2)ε(t) → ∞,

since ε(t) is an s.v.f. Thus condition (1.2.23) is also met.
¹ It was claimed in [266] that, in the case when G(t) = exp{−t ln^{−β} t}, ‘we can similarly show that G ∈ S’ iff β > 1 (a remark at the bottom of p. 1001). This assertion appears to be imprecise.
It remains to verify (1.2.21), namely that the following expression tends to zero:

∫_M^{t/2} [G(t − y)/G(t)] G(dy) = ∫_M^{t/2} exp{l(t) − l(t − y) − l(y)} dl(y)
  = ∫_M^{t/2} exp{ t[L(t) − L(t − y)] − l(y)[1 − L(t − y)/L(y)] } dl(y)
  = ∫_M^N + ∫_N^{t/2} =: I_1 + I_2,

where we put N = N(t) := L^{(−1)}(L(t)/γ), so that N → ∞ and N = o(t) as t → ∞. Observe that because ε(u) > 0 (as an s.v.f.), by virtue of (1.2.64) the function L(t) will decrease monotonically and continuously for t > t_0 and therefore L^{(−1)}(v) will be a unique solution to the equation L(L^{(−1)}(v)) = v, so that one has L(N) = L(t)/γ. Without restricting the generality, one can assume that M > t_0 and therefore the function L(y) decreases for y ≥ M. This implies that for all sufficiently large t one has L(y) ≥ L(N) = L(t)/γ for y ∈ [M, N] and, since L(t − y) ∼ L(t) as t → ∞ when y ∈ [M, N], for c := (1 − γ)/2 > 0 we obtain

I_1 ≤ ∫_M^N exp{ −l(y)[1 − L(t − y)/L(y)] } dl(y) ≤ ∫_M^N e^{−cl(y)} dl(y) …

… for some δ > 0 we also have γ(1 + δ)^γ/q < 1, i.e. the conditions of Theorem 1.2.31 are satisfied and so G ∈ S. Observe also that the fluctuations of the function l(t) around l_0(t) = qt^γ are unbounded:

lim sup_{t→∞} (l(t) − l_0(t)) = lim sup_{t→∞} (l_0(t) − l(t)) = ∞.
This corresponds to unbounded ‘relative’ fluctuations of the function G(t) around the tail G_0(t) of the semiexponential distribution G_0. This example also shows that functions from S could be constructed in a similar way from ‘pieces’ of functions from the classes Se(α) with different values of α.

Remark 1.2.42. Definition 1.2.1 and the assertions of statements 1.2.4–1.2.36 could be extended in an obvious way to the case of finite measures G with G(R) = g ≠ 1. For them, the subexponentiality property can be stated as

G^{2∗}(t)/G(t) → 2g  as t → ∞.

A few of the assertions in statements 1.2.4–1.2.21 will need minor obvious amendment. Thus, the assertion of Theorem 1.2.12(iii) will take the form

G^{n∗}(t)/G(t) → ng^{n−1},
whereas that of part (iv) of the same theorem will become

G^{n∗}(t)/G(t) ≤ bg^{n−1}(1 + ε)^n.

To prove these assertions, one has to introduce the subexponential distribution G̃ = g^{−1}G and make use of statements 1.2.4–1.2.36.
1.3 Locally subexponential distributions

Along with subexponentiality one could also consider a more subtle property, that of local subexponentiality. To simplify the exposition we will confine ourselves here to dealing with distributions on the positive half-axis. The transition to the general case could be made in exactly the same way as in § 1.2.
1.3.1 Arithmetic distributions

We will start with the simpler discrete case, where one considers distributions (or measures) G = {g_k; k ≥ 0} on the set of integers. By the convolution of the sequences {g_k} and {f_k} we mean the sequence

(g ∗ f)_k := Σ_{j=0}^k g_j f_{k−j}.

We will denote the convolution of {g_k} with itself by {g_k^{2∗}} and, furthermore, put g_k^{(n+1)∗} := (g ∗ g^{n∗})_k, n ≥ 2. Clearly g_k^{n∗} = G^{n∗}({k}).

Definition 1.3.1. A sequence {g_k ≥ 0; k ≥ 0}, Σ_{k=0}^∞ g_k = 1, is said to be subexponential if

lim_{k→∞} g_{k+1}/g_k = 1,   (1.3.1)

lim_{k→∞} g_k^{2∗}/g_k = 2.   (1.3.2)

One can easily see that a regularly varying sequence

g_k = k^{−α−1} L(k),  α > 1,

where L(t) is an s.v.f., will belong (after proper normalization) to the class of subexponential sequences, just as a semiexponential sequence g_k = e^{−k^α L(k)}, α ∈ (0, 1), does. Without normalizing, on the right-hand side of (1.3.2) one should have 2g, where g = Σ_{k=0}^∞ g_k. For subexponential sequences there exist analogues of statements 1.2.4–1.2.21 (with the difference that the property (1.3.1) is now part of the definition and therefore requires no proof). We will restrict our attention to those assertions whose proofs require substantial changes compared with the respective arguments in § 1.2. These assertions are analogues of parts (iii) and (iv) of Theorem 1.2.12.
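The defining ratios (1.3.1), (1.3.2), and the n-fold analogue established below, are easy to probe numerically. The following sketch (not part of the original text; the truncation level K and the exponent α are arbitrary illustration choices) does this for a normalized regularly varying sequence:

```python
# Numerical check of (1.3.1) and (1.3.2), plus the 3-fold convolution ratio,
# for g_k proportional to (k+1)^(-alpha-1).  K and alpha are arbitrary.
K, alpha = 1500, 1.5
g = [(k + 1.0) ** (-alpha - 1) for k in range(K)]
s = sum(g)
g = [x / s for x in g]                 # normalize so that sum of g_k is ~ 1

def conv(a, b):
    """Convolution (a*b)_k = sum_{j<=k} a_j b_{k-j}, for k < K."""
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(len(a))]

g2 = conv(g, g)                        # {g_k^{2*}}
g3 = conv(g, g2)                       # {g_k^{3*}}
k = K - 1
print(g[k] / g[k - 1])                 # close to 1, condition (1.3.1)
print(g2[k] / g[k], g3[k] / g[k])      # close to 2 and 3 respectively
```

The third printed ratio anticipates Theorem 1.3.2(i) below, where g_k^{n∗}/g_k → n is proved for every fixed n.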
Theorem 1.3.2. Let {g_k; k ≥ 0} be a subexponential sequence. Then:

(i) for any fixed n ≥ 2,

lim_{k→∞} g_k^{n∗}/g_k = n;   (1.3.3)

(ii) for any ε > 0 there exists an M < ∞ such that for all n ≥ 1 and k ≥ M,

g_k^{n∗}/g_k < (1/ε)(1 + ε)^{n+1}.

Proof. Similarly to the argument in the proof of Theorem 1.2.12(iii), we will make use of induction. Assume that (1.3.3) holds true. Then, for M ≤ k,

g_k^{(n+1)∗}/g_k = Σ_{j=0}^k g_j g_{k−j}^{n∗}/g_k = Σ_{j=0}^{k−M} + Σ_{j=k−M+1}^k.   (1.3.4)
The first sum on the final right-hand side can be rewritten as follows:

Σ_{j=0}^{k−M} = Σ_{j=0}^{k−M} g_j (g_{k−j}^{n∗}/g_{k−j}) (g_{k−j}/g_k),

where the ratio g_{k−j}^{n∗}/g_{k−j} can be made arbitrarily close to n for all j ≤ k − M by choosing M large enough, whereas

Σ_{j=0}^{k−M} g_j g_{k−j}/g_k = Σ_{j=0}^k − Σ_{j=k−M+1}^k → 1 + G(M)  as k → ∞.   (1.3.5)
The latter relation follows from the fact that, by virtue of (1.3.2) and (1.3.1), for any fixed M one has, as k → ∞,

Σ_{j=0}^k g_j g_{k−j}/g_k = g_k^{2∗}/g_k → 2  and  Σ_{j=k−M+1}^k g_j g_{k−j}/g_k ∼ Σ_{j=k−M+1}^k g_{k−j} → 1 − G(M).   (1.3.6)

Since G(M) = Σ_{j=M}^∞ g_j can be made arbitrarily small by choosing M large enough, the first sum on the final right-hand side of (1.3.4) can be made arbitrarily close to n for all sufficiently large k. The second sum on the final right-hand side of (1.3.4) satisfies the relation

Σ_{j=k−M+1}^k g_j g_{k−j}^{n∗}/g_k ∼ Σ_{j=0}^{M−1} g_j^{n∗}   (1.3.7)

as k → ∞ and, therefore, can be made arbitrarily close to 1 by choosing M large enough, since Σ_{j=0}^∞ g_j^{n∗} = 1. As the left-hand side of (1.3.4) does not depend on M, we have proved that g_k^{(n+1)∗}/g_k → n + 1 as k → ∞.
(ii) Put α_n := sup_{k≥M} g_k^{n∗}/g_k.
Then

α_{n+1} = sup_{k≥M} Σ_{j=0}^k g_j g_{k−j}^{n∗}/g_k ≤ sup_{k≥M} Σ_{j=0}^{k−M} g_j (g_{k−j}^{n∗}/g_{k−j}) (g_{k−j}/g_k) + sup_{k≥M} Σ_{j=k−M+1}^k g_j g_{k−j}^{n∗}/g_k.   (1.3.8)

One can easily see from (1.3.5) that, for large enough M, the first supremum in the second line of (1.3.8) does not exceed

α_n sup_{k≥M} Σ_{j=0}^{k−M} g_j g_{k−j}/g_k ≤ α_n (1 + ε),

while the second does not exceed 1 + ε owing to (1.3.7). Thus we have obtained

α_{n+1} ≤ α_n (1 + ε) + (1 + ε)  for n ≥ 1.

Since α_1 = 1, it follows that α_n ≤ ((1 + ε)^{n+1} − 1)/ε. The theorem is proved.

Theorem 1.2.17 carries over to the case of sequences in an obvious way. Nor is it difficult to obtain an analogue of Theorem 1.2.21. An arithmetic distribution G = {g_k} with the properties (1.3.1), (1.3.2) could be called locally subexponential.
1.3.2 Non-lattice locally subexponential distributions

There are two ways of extending the concept of local subexponentiality to the case of non-lattice distributions. The first one suggests considering distributions having densities. A density h(t) = G(dt)/dt = −G′(t) will be called subexponential if, for any fixed v, as t → ∞,

h(t + v)/h(t) → 1,   h^{2∗}(t) := ∫_0^t h(u)h(t − u) du ∼ 2h(t).   (1.3.9)

By Theorem 1.2.4(i) the convergence in the first relation in (1.3.9) is uniform in v ∈ [0, 1]. For distributions with subexponential densities a complete analogue of Theorem 1.3.2 holds true, the proof of which basically coincides with the argument used in the discrete case. The second way of extending the concept of local subexponentiality does not
require the existence of a density. For a Δ > 0, denote by Δ[t) the half-open interval Δ[t) := [t, t + Δ) of length Δ, and let Δ_1[t) := [t, t + 1).

Definition 1.3.3. A distribution G on [0, ∞) is said to be locally subexponential (belonging to the class Sloc) if, for any fixed Δ > 0,

lim_{t→∞} G(Δ[t))/G(Δ_1[t)) = Δ,   (1.3.10)

lim_{t→∞} G^{2∗}(Δ[t))/G(Δ[t)) = 2.   (1.3.11)

It is not difficult to verify that if, for any fixed Δ > 0 as t → ∞,

G(Δ[t)) ∼ Δ t^{−α−1} L(t),  α > 1,   (1.3.12)

where L(t) is an s.v.f., then G ∈ Sloc (this will also follow from Theorem 1.3.6 below). Evidently, the relation (1.3.12) always holds in the case when

G(t) = ∫_t^∞ u^{−α−1} L(u) du.

It is also clear that if G has a subexponential density then G ∈ Sloc. If, for an arithmetic distribution G = {g_k; k ≥ 0}, the sequence {g_k} is subexponential, then we will also write G ∈ Sloc, although the property (1.3.10) will in this case hold only for integer-valued Δ ≥ 1 while (1.3.11) will be true for any Δ ≥ 1. In what follows, in the case of arithmetic distributions G we will always assume integer-valued Δ ≥ 1 in (1.3.10), (1.3.11). The analogues of the main assertions about the properties of distributions from the class Sloc that we established in the discrete case have the following form in the non-lattice case.

Theorem 1.3.4. Let G ∈ Sloc. Then:

(i) for any fixed n ≥ 2 and Δ ≥ 1,

lim_{t→∞} G^{n∗}(Δ[t))/G(Δ[t)) = n;   (1.3.13)

(ii) for any ε > 0 there exist an M = M(ε) and a b = b(ε) such that, for all n ≥ 2, t ≥ M and any fixed Δ ≥ 1,

G^{n∗}(Δ[t))/G(Δ[t)) ≤ b(1 + ε)^n.
Proof. The proof of the theorem follows the same line of reasoning as that of Theorem 1.3.2.

(i) Here we will again use induction. Suppose that (1.3.13) is correct. Then, for M ∈ (0, t),

G^{(n+1)∗}(Δ[t))/G(Δ[t)) = ∫_0^{t+Δ} G^{n∗}(Δ[t − u))/G(Δ[t)) G(du) = ∫_0^{t−M} + ∫_{t−M}^{t+Δ}.   (1.3.14)

In the first integral,

∫_0^{t−M} [G^{n∗}(Δ[t − u))/G(Δ[t − u))] [G(Δ[t − u))/G(Δ[t))] G(du),   (1.3.15)

the ratio G^{n∗}(Δ[t − u))/G(Δ[t − u)) can, by the induction hypothesis, be made arbitrarily close to n for all u ≤ t − M by choosing M large enough. Further,

∫_0^{t−M} G(Δ[t − u))/G(Δ[t)) G(du) = G^{2∗}(Δ[t))/G(Δ[t)) − ∫_{t−M}^{t+Δ}.   (1.3.16)

To estimate the integral

∫_{t−M}^{t+Δ} G(Δ[t − u))/G(Δ[t)) G(du)   (1.3.17)
we need the following lemma.

Lemma 1.3.5. Let the relation (1.3.10) hold for the distribution G, and let Q(v) be a bounded non-increasing function on R. Then, for any fixed Δ, M > 0,

lim_{t→∞} ∫_{t−M}^{t+Δ} Q(t − u) G(du)/G(Δ[t)) = (1/Δ) ∫_{−Δ}^M Q(v) dv.   (1.3.18)

Proof of Lemma 1.3.5. Consider a finite partition of [t − M, t + Δ) into K half-open intervals Δ_1, …, Δ_K of the form

Δ_k = [t − M + (k − 1)δ, t − M + kδ),  k = 1, …, K,  δ = (M + Δ)/K.

Then

Σ_{k=1}^K Q(M − δ(k − 1)) G(Δ_k)/G(Δ[t)) ≤ ∫_{t−M}^{t+Δ} Q(t − u) G(du)/G(Δ[t)) ≤ Σ_{k=1}^K Q(M − δk) G(Δ_k)/G(Δ[t)).   (1.3.19)
But G(Δ_k)/G(Δ[t)) → δ/Δ as t → ∞ by virtue of (1.3.10), so the sum on the left-hand side of (1.3.19) will converge, as t → ∞, to

(δ/Δ) Σ_{k=1}^K Q(M − δ(k − 1)).

A similar relation holds for the sum in the second line of (1.3.19). Now each of these sums can be made, by choosing a large enough K, arbitrarily close to

(1/Δ) ∫_{−Δ}^M Q(v) dv.

Since the integral on the first right-hand side of (1.3.19) does not depend on K, the lemma is proved.

Now we return to the proof of Theorem 1.3.4. Applying Lemma 1.3.5 to the functions Q(t) = G(t) and Q(t) = G(t + Δ), we obtain that the integral (1.3.17) converges as t → ∞ to

(1/Δ) ( ∫_{−Δ}^M G(v) dv − ∫_0^{M+Δ} G(v) dv ) = (1/Δ) ( ∫_{−Δ}^0 G(v) dv − ∫_M^{M+Δ} G(v) dv ) = 1 − (1/Δ) ∫_M^{M+Δ} G(v) dv = 1 − r(M),

where r(M) ≤ G(M). From this it follows that, by choosing M large enough, one can make the integral (1.3.17) arbitrarily close to 1 as t → ∞, and therefore the integral on the left-hand side of (1.3.16) will, by virtue of (1.3.11), also be arbitrarily close to 1 while the first integral on the second right-hand side of (1.3.14) will be arbitrarily close to n. Again using Lemma 1.3.5, we obtain in a similar way that for the last integral in (1.3.14) one has

lim_{t→∞} ∫_{t−M}^{t+Δ} G^{n∗}(Δ[t − u))/G(Δ[t)) G(du) = 1 − r_n(M),

where r_n(M) ≤ G^{n∗}(M) → 0 as M → ∞. Hence, for large t, the last integral in (1.3.14) can be made, by choosing large enough M, arbitrarily close to 1. Thus the left-hand side of (1.3.14) converges to n + 1 as t → ∞.

(ii) The proof of this assertion repeats that of Theorem 1.3.2(ii), with the same modifications related to Lemma 1.3.5 as were made in the proof of part (i). The theorem is proved.

A sufficient condition for a distribution to be locally subexponential is contained in the following assertion.
Theorem 1.3.6. Let G ∈ S and, for each fixed Δ > 0, as t → ∞,

G(Δ[t)) ∼ ΔG(t)v(t),   (1.3.20)

where v(t) → 0 is an upper-power function.¹ Then G ∈ Sloc.

Remark 1.3.7. If G ∈ R and the function G(t) is ‘differentiable at infinity’, i.e. for each fixed Δ > 0, as t → ∞,

G(t) − G(t + Δ) ∼ αΔ t^{−α−1} L(t) = αΔ G(t)/t,

then the conditions of Theorem 1.3.6 are clearly met. If G ∈ Se and the relation (1.2.29) from Definition 1.2.22 holds for each fixed Δ and without the additive term o(1) (i.e. l(t + Δ) − l(t) ∼ Δv(t), v(t) = αl(t)/t), then

G(t) − G(t + Δ) = e^{−l(t)} (1 − e^{l(t)−l(t+Δ)}) = e^{−l(t)} (1 − e^{−Δv(t)(1+o(1))}) ∼ G(t)Δv(t),

so that again the conditions of Theorem 1.3.6 are satisfied.

Proof of Theorem 1.3.6. That the relation (1.3.10) holds true is obvious from condition (1.3.20). Further, let r.v.’s ζ_i ⊂= G, i = 1, 2, be independent, and let Z_2 = ζ_1 + ζ_2. Then

G^{2∗}(Δ[t)) = P(Z_2 ∈ Δ[t)) = 2 ∫_0^{t/2} G(Δ[t − y)) G(dy) + q,

where by virtue of (1.3.20), (1.2.20) and the fact that v(t) is an upper-power function one has

q = P(ζ_1 ≥ t/2, ζ_2 ≥ t/2, Z_2 < t + Δ) ≤ G^2(Δ[t/2)) ≤ cv^2(t)G^2(t/2) = o(v(t)G(t)).

If M = M(t) → ∞ slowly enough as t → ∞ then, since v(t) and G(t) are l.c. functions, we obtain from (1.3.20) that

∫_0^M G(Δ[t − y)) G(dy) ∼ Δ ∫_0^M v(t − y)G(t − y) G(dy) ∼ Δv(t)G(t).

Moreover, by (1.2.21),

∫_M^{t/2} G(Δ[t − y)) G(dy) < cΔv(t) ∫_M^{t/2} G(t − y) G(dy) = o(v(t)G(t)).

¹ See Definition 1.2.20 on p. 28.
Hence

G^{2∗}(Δ[t)) ∼ 2Δv(t)G(t) ∼ 2G(Δ[t)),

which immediately implies (1.3.11). The theorem is proved.

Subexponential and locally subexponential distributions will be used in §§ 4.8, 7.5, 7.6 and 8.2.
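For a pure Pareto tail G(t) = t^{−α} the first relation of Remark 1.3.7 can be checked directly. The following sketch is an illustrative addition (the parameter values are arbitrary choices, not from the original text):

```python
# Check of G(t) - G(t + Delta) ~ alpha * Delta * G(t) / t  (Remark 1.3.7)
# for the pure Pareto tail G(t) = t^(-alpha).
alpha, Delta, t = 1.5, 2.0, 1.0e4            # arbitrary illustration values
exact = t ** -alpha - (t + Delta) ** -alpha  # G(Delta[t)) = G(t) - G(t+Delta)
approx = alpha * Delta * t ** -alpha / t     # alpha * Delta * G(t) / t
print(exact / approx)                        # close to 1 when t >> Delta
```

The ratio deviates from 1 by a term of order Δ/t, so increasing t tightens the agreement.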
1.4 Asymptotic properties of ‘functions of distributions’

As before, let G denote the distribution of a r.v. ζ, so that G(B) = P(ζ ∈ B) for any Borel set B, and let G denote the tail of this distribution: G(t) = G([t, ∞)). Let g(λ) := Ee^{iλζ} be the characteristic function (ch.f.) of the distribution G, and let A(w) be a function of the complex variable w. In a number of problems (see e.g. § 7.5), the form of a desired distribution can be obtained in terms of certain transforms of that distribution (e.g. its ch.f.) that have the form A(g(λ)). The question is, what can one say about the asymptotics of the tails of the desired distribution given that the asymptotics of G(t) are known? It is the distribution that corresponds to the ch.f. A(g(λ)) (or to some other transform) to which we refer in the heading of this section. The next theorem answers, to some extent, the question posed.

Theorem 1.4.1. Let the distribution G of the r.v. ζ be subexponential, and let a function A(w) be analytic in the disk |w| ≤ 1. Then there exists a finite measure A such that the function A(g(λ)) admits a representation of the form

A(g(λ)) = ∫ e^{iλx} A(dx),  Im λ = 0,   (1.4.1)

where A(t) := A([t, ∞)) ∼ A′(1)G(t) as t → ∞.

Proof. Since the domain of analyticity is always open, the function A(w) will be analytic in the region |w| ≤ 1 + δ for some δ > 0 and the following expansion will hold true:

A(w) = Σ_{k=0}^∞ A_k w^k,  where |A_k| < c(1 + δ)^{−k}.

Hence the measure A := Σ_{k=0}^∞ A_k G^{k∗} is finite, with ∫|A(dx)| ≤ Σ_{k=0}^∞ |A_k|. Moreover, the function

A(g(λ)) = Σ_{k=0}^∞ A_k g^k(λ)
Preliminaries
is the Fourier transform of the measure A. This proves (1.4.1). Further, ∞ A(t) = Ak Gk∗ (t), k=0
where, due to the subexponentiality of G, for each k 1 one has Gk∗ (t) →k G(t)
as
t → ∞.
Moreover, by Theorem 1.2.12(iv), for any ε > 0 Gk∗ (t) b(1 + ε)k . G(t) Choosing ε = δ/2, we obtain from the dominated convergence theorem that, as t → ∞, ∞ ∞ Gk∗ (t) A(t) Ak Ak = A (1). = → G(t) G(t) k=0
k=0
The theorem is proved. One could also find the proof of (a somewhat more general form of) Theorem 1.4.1 of [84]. For locally subexponential distributions, we have the following analogue of Theorem 1.4.1. As before, let Δ[t) = [t, t + Δ) be a half-open interval of length Δ > 0. Theorem 1.4.2. Let G ∈ Sloc and a function A(w) be analytic in the unit disk |w| 1. Then there exists a finite measure A such that A(g(λ)) = eiλx A(dx) and, for any fixed Δ, as t → ∞, A(Δ[t)) ∼ A (1)G(Δ[t)).
(1.4.2)
Proof. The proof of the theorem is quite similar to that of Theorem 1.4.1. As before, the measure A has the form A=
∞
Ak Gk∗ ,
k=0
where for the coefficients Ak in the expansion of the function A(w) we have the inequalities Ak c(1 + δ)k for some δ > 0. Then one has to make use of Theorem 1.3.4, which states that Gk∗ (Δ[t)) →k G(Δ[t))
as k → ∞
1.4 Asymptotic properties of ‘functions of distributions’
53
and Gk∗ (Δ[t)) < b(1 + ε)k G(Δ[t))
for
k 2, t M.
Setting ε = δ/2, we can use the dominated convergence theorem to obtain (1.4.2). The theorem is proved. One also has complete analogues of Theorem 1.4.2 in the case when the distribution G has a subexponential density and also in the case when the distribution G = {gk ; k 0} is discrete and the sequence {gk } is subexponential. Thus, in the arithmetic case, we have Theorem 1.4.3. If the sequence {gk } is subexponential and a function A(w) is analytic in the unit disk |w| 1 then there exists a finite discrete measure ∞ ∞ A = {ak ; k 0} such that A(g(w)) = k=0 ak wk , where g(w) = k=0 gk wk and ak ∼ A (1)gk as k → ∞. Proof. The proof of the theorem repeats that of Theorem 1.4.2. One just has to use Theorem 1.3.2 instead of Theorem 1.3.4. There arises the following natural question: is it possible to relax the assumption on the analyticity of A(w) in the above Theorems 1.4.1–1.4.3 ? We will concentrate on Theorems 1.4.1 and 1.4.3. They are the simplest in a series of results that enable one to find, given the asymptotics of G(t), the asymptotics of A(t) for the ‘preimage’ of the function A(g(λ)), where the class of functions A can be broader than that in Theorems 1.4.1 and 1.4.3 (see e.g. [83, 35, 42]). We will present here without proof some ‘local theorems’ for arithmetic distributions. These results are quite useful in the asymptotic analysis (see e.g. § 8.2). Theorem 1.4.4. Let {gk = G({k}); k 0} be a probability distribution on the ∞ set of integers, the sequence {gk } be subexponential, g(w) = k=0 gn wn and A(w) be a function analytic on the range of values g(y) on the disk |y| 1. Then there exists a finite measure A = {ak ; k 0} such that A(g(w)) =
∞
ak w k
(1.4.3)
k=0
and, as k → ∞, ak ∼ A (1)gk .
(1.4.4)
If A(w) =
∞ k=0
Ak wk ,
∞
|Ak | < ∞,
k=0
then the measure A can be identified with the measure
Ak Gk∗ .
(1.4.5)
54
Preliminaries For the proof of the theorem, see [83].
It is not hard to see that using a change of variables one can extend the assertion of the theorem to the case when, instead of the assumption on subexponentiality of the sequence {gk }, one has 1 gk+1 = , k→∞ gk r lim
gk2∗ = 2g(r), k→∞ gk
r > 1,
lim
and the function A(w) is analytic on the set of values g(y), |y| r. In [83] one can also find similar assertions for densities and discrete distributions on the whole axis, under the assumption that conditions (1.3.1) and (1.3.2) hold as k → ±∞. Then the assertion (1.4.4) also holds as k → ±∞. In the case when gk ∼ ck−α−1 as k → ∞, one can consider, instead of functions A(w) that are analytic on the set of values g(y), |y| 1, functions of the form (1.4.5) as well, where ∞
|k|r |Ak | < ∞,
r > α + 1.
k=0
In this case, the assertion (1.4.4) remains valid (see Theorem 6 of [83]). Next we will present an analogue of Theorem 1.4.4 for an extension of the class of regularly varying functions. Put d(bk ) := bk − bk+1 . Theorem 1.4.5. (i) In the notation of Theorem 1.4.4, let the following conditions be satisfied: ∞ ∞ 1 |d(kgk )| < ∞; (a) n n=1 k=n (b) there exist an s.v.f. L(t), α > 0, and a constant c < ∞ such that gk+1 L(k)k −α−1 |gk | cL(k)k −α−1 , → 1 as k → ∞. gk Further, let A(w) be a function analytic on the range of values g(y), |y| 1. Then (1.4.3), (1.4.4) hold true. (ii) Each of the following conditions is sufficient for condition (a) to hold: (a1 ) kgk is non-increasing for k k0 , k0 < ∞; (a2 ) for some ε > 0, one has gk ck−2 (ln k)−2−ε , k 0. Proof. For the proof of first part of the theorem see [35] or [42] (Appendix 3). The assertion of the second part of the theorem is obvious, since if (a1 ) holds then ∞ |d(kgk )| = ngn , |d(kgk )| = kgk − (k + 1)gk+1 , and condition (a) turns into the relation
∞
k=n
n=1 gn
< ∞, which is always true.
1.4 Asymptotic properties of ‘functions of distributions’
55
If (a2 ) is true then ∞
|d(kgk )|
k=n
∞
kgk < c
k=n
∞
k −1 (ln k)−2−ε ∼ c1 (ln n)−1−ε ,
k=n
and condition (a) will be satisfied since
∞ n=1
n−1 (ln n)−1−ε < ∞.
Theorem 1.4.5, just as Theorem 1.4.4, can be extended to the case of distributions on the whole real line (see [35, 40]). Note that the assertion of Theorems 1.4.4 and 1.4.5 on the representation (1.4.3) for the function A(g(w)) is a special case of the well-known Wiener–Levy theorem on the rings (Banach algebras) B of the generating functions of absolutely convergent series, which states the following. Let g ∈ B (i.e. g(w) = gk w k , |gk | < ∞) and let A(w) be a function analytic in a simply connected domain D, situated, generally speaking, on a multi-sheeted Riemann surface. If one can consider the range of values {g(w); |w| 1} as a set lying in D then A(g) also belongs to B, i.e. one has a representation of the form A(g(w)) =
∞
ak w k ,
k=1
∞
|ak | < ∞.
k=1
There arises a natural question: to what extent will the assertions of Theorems 1.4.2 and 1.4.3 remain true in the case of signed measures? For example, when studying the asymptotics of the distribution of the first passage time (see [58]), there arises a problem concerning an analogue of Theorem 1.4.4 in the case when the terms in the sequence {gk } can assume both positive and negative values. We will present here an analogue of Theorem 1.4.3, restricting ourselves to considering regularly varying sequences. In what follows, the relation ak ∼ cbk with c = 0 will be understood as ak = o(bk ) as k → ∞. Let L be a s.v.f., 0 < |c| < ∞, and −α
gn ∼ cn
L(n),
α > 1;
g n := |gn |,
g(w) :=
∞
g k wk .
(1.4.6)
k=0
Theorem 1.4.6. Suppose that {gk ; k 0} is a real-valued sequence of the form (1.4.6) and that a function A(w) is analytic in the disk |w| g(1). Then A(g(w)) can be represented as the series A(g(w)) =
∞ k=0
ak w k ,
∞ k=0
and, as k → ∞, ak ∼ A (g(1))gk .
|ak | < ∞,
56
Preliminaries
Proof. As before, let {gkn∗ } denote the nth convolution of the sequence {gk } with itself: (n+1)∗
=
gk
k
n 1,
gjn∗ gk−j ,
j=0
so that k gkn∗ wk = g n (w). It is not hard to see that gk2∗ ∼ 2g(1)gk and then, by induction, that gkn∗ ∼ ng n−1 (1)gk
as
k→∞
(1.4.7)
n∗ } is the for any fixed n. Further, it is evident that |gkn∗ | < g n∗ k where {g k n∗ k nth convolution of the sequence {g k := |gk |} with itself, so that gk w = g n (w). Clearly, the sequence {g k /g(1)} specifies a subexponential distribution and hence, by virtue of Theorem 1.3.2(ii), for any ε > 0 and all large enough k and n one has n
n−1 g n∗ (1) (1 + ε/2) . k gk g
(1.4.8)
Furthermore, from the relation A(g(w)) =
An g n (w)
we obtain ak =
∞
An gkn∗ =
n=0
+
nN
,
(1.4.9)
n>N
where, for each fixed N , one has from (1.4.7) that 1 → An ng n−1 (1) = A (g(1)) + rN gk nN
as
k→∞
(1.4.10)
nN
with rN → 0 as N → ∞. For the last sum in (1.4.9) we find by virtue of (1.4.8) that |An gkn∗ | |An | g k g n−1 (1)(1 + ε)n/2 . n>N
n>N
n>N
Since |An | c1 [g(1)(1 + ε)]−n for a suitable ε > 0, we see that the sequence |An |g n (1)(1 + ε)n/2 decays exponentially fast as n → ∞. Therefore c2 gk (1 + ε)−N/2 . (1.4.11) n>N
Comparing the relations (1.4.9)–(1.4.11) and noting that N can be chosen arbitrarily large, we obtain the assertion of the theorem.
1.5 The convergence of distributions of sums of random variables with regularly varying tails to stable laws
As is well known, in the case Eξ² < ∞ one has the central limit theorem, which states that the distributions of the normalized sums S_n = Σ_{i=1}^n ξ_i of independent r.v.’s ξ_i =d ξ converge to the normal law as n → ∞. If Eξ² = ∞ then the situation noticeably changes. In this case, the convergence of the distributions of the appropriately normalized sums S_n to a limiting law will only take place for r.v.’s with regularly varying distribution tails. From the proof of the central limit theorem by the method of characteristic functions, it is seen that the nature of the limiting distribution for S_n is defined by the behaviour of the ch.f. f(λ) := Ee^{iλξ}, λ ∈ R, of ξ in the vicinity of zero. If Eξ = 0 and Eξ² = d < ∞ then, as n → ∞,

f(μ/√n) = 1 + f′(0)μ/√n + f″(0)μ²/(2n) + o(1/n) = 1 − dμ²/(2n) + o(1/n).   (1.5.1)

It is this relation that defines the asymptotic behaviour of the ch.f. f^n(μ/√n) of S_n/√n, which leads to the limiting normal law. In the case Eξ² = ∞ (so that f″(0) does not exist) we will use the same method, but, in order to obtain the ‘right’ asymptotics of f(μ/b(n)) under a suitable scaling b(n), we will have to impose regular variation conditions on the ‘two-sided’ tails

F(t) := F((−∞, −t)) + F([t, ∞)) = P(ξ ∉ [−t, t)),  t > 0.
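Relation (1.5.1) and the resulting normal limit for f^n(μ/√n) are easy to check numerically in the simplest case ξ = ±1 with probability 1/2, where f(λ) = cos λ and d = 1 (an illustrative sketch, not from the original text; the values of μ and n are arbitrary):

```python
import math

# For xi = +-1 with probability 1/2: f(lambda) = cos(lambda), d = E xi^2 = 1,
# so f(mu/sqrt(n))^n should approach exp(-mu^2/2) as n grows.
mu, n = 1.3, 10_000
val = math.cos(mu / math.sqrt(n)) ** n     # ch.f. of S_n / sqrt(n) at mu
print(val, math.exp(-mu ** 2 / 2))         # the two values nearly coincide
```

The error here is of order μ⁴/n, coming from the o(1/n) term in (1.5.1).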
As before, the functions

F_+(t) := F([t, ∞)) = P(ξ ≥ t),  F_−(t) := F((−∞, −t)) = P(ξ < −t)

will be referred to as the right and the left tails of the distribution of ξ, respectively. Assume that the following condition holds for some α ∈ (0, 2] and ρ ∈ [−1, 1]:

[R_{α,ρ}] The two-sided tail F(t) = F_−(t) + F_+(t) is an r.v.f. at infinity, i.e. it has a representation of the form

F(t) = t^{−α} L_F(t),  α ∈ (0, 2],   (1.5.2)

where L_F(t) is an s.v.f.; in addition there exists the limit

lim_{t→∞} F_+(t)/F(t) =: ρ_+ = (ρ + 1)/2 ∈ [0, 1].   (1.5.3)

If ρ_+ > 0 then clearly the right tail F_+(t) (just like F(t)) is an r.v.f., i.e. it admits a representation of the form

F_+(t) = V(t) := t^{−α} L(t),  α ∈ (0, 2],  L(t) ∼ ρ_+ L_F(t)
(following § 1.1, we use the symbol V to denote an r.v.f.). If ρ_+ = 0 then the right tail F_+(t) = o(F(t)) need not be assumed to be regularly varying. It follows from (1.5.3) that there also exists the limit

lim_{t→∞} F_−(t)/F(t) =: ρ_− = 1 − ρ_+.

If ρ_− > 0 then, similarly, the left tail F_−(t) admits a representation of the form

F_−(t) = W(t) := t^{−α} L_W(t),  α ∈ (0, 2],  L_W(t) ∼ ρ_− L_F(t).

If ρ_− = 0 then the left tail F_−(t) = o(F(t)) is not assumed to be regularly varying. The parameters ρ_± are connected to the parameter ρ from condition [R_{α,ρ}] by the relations ρ = ρ_+ − ρ_− = 2ρ_+ − 1. Evidently, for α < 2 one has Eξ² = ∞, so that the representation (1.5.1) ceases to hold, and the central limit theorem is inapplicable. In what follows, in situations where Eξ exists and is finite we will always assume, without loss of generality, that Eξ = 0.

Since F(t) is non-increasing, the (generalized) inverse function F^{(−1)}(u), understood as F^{(−1)}(u) := inf{t > 0 : F(t) < u}, always exists. If F(t) is strictly monotone and continuous then b = F^{(−1)}(u) is the unique solution of the equation F(b) = u, u ∈ (0, 1). Put

ζ_n := S_n/b(n),

where the scaling factor b(n) is defined in the case α < 2 by

b(n) := F^{(−1)}(1/n).   (1.5.4)

It is obvious that in the case ρ_+ > 0 the scaling factor b(n) is connected to the function σ(n) = V^{(−1)}(1/n) introduced in Theorem 1.1.4(v) by σ(ρ_+^{−1} n) ∼ b(n). For α = 2 we put

b(n) := Y^{(−1)}(1/n),   (1.5.5)
where

Y(t) := 2t^{−2} ∫_0^t yF(y) dy = 2t^{−2} ( ∫_0^t yV(y) dy + ∫_0^t yW(y) dy ) ∼ t^{−2} E[ξ²; −t ≤ ξ < t] =: t^{−2} L_Y(t)   (1.5.6)

and L_Y is an s.v.f. (see Theorem 1.1.4(iv)). From Theorem 1.1.4(v) it follows also that if (1.5.2) holds then

b(n) = n^{1/α} L_b(n),  α ≤ 2,

where L_b is an s.v.f.

Recall the notation V_I(t) = ∫_0^t V(y) dy, V^I(t) = ∫_t^∞ V(y) dy.

Theorem 1.5.1. Let condition [R_{α,ρ}] be satisfied. Then the following assertions hold true.

(i) For α ∈ (0, 2), α ≠ 1, and the scaling factor (1.5.4), we have

ζ_n ⇒ ζ^{(α,ρ)}  as n → ∞,   (1.5.7)
where the distribution Fα,ρ of the r.v. ζ (α,ρ) depends only on the parameters α and ρ and has a ch.f. f(α,ρ) (λ) given by f(α,ρ) (λ) = Eeiλζ
(α,ρ)
= exp{|λ|α B(α, ρ, φ)},
(1.5.8)
where φ = sign λ,
απ απ B(α, ρ, φ) = Γ(1 − α) iρφ sin − cos 2 2
(1.5.9)
and for α ∈ (1, 2) we put Γ(1 − α) = Γ(2 − α)/(1 − α). (ii) When α = 1, for the sequence ζn with scaling factor (1.5.4) to converge to a limiting law the former, generally speaking, needs to be centred. More precisely, we have ζn − An ⇒ ζ (1,ρ) where An :=
as
n → ∞,
$ n # VI (b(n)) − WI (b(n)) − ρC, b(n)
(1.5.10)
(1.5.11)
C ≈ 0.5772 is the Euler constant and "
(1,ρ) π|λ| f(1,ρ) (λ) = Eeiλζ − iρλ ln |λ| . (1.5.12) = exp − 2 # $ If n VI (b(n)) − WI (b(n)) = o(b(n)), then ρ = 0 and one can put An = 0.
60
Preliminaries If Eξ = 0, then An =
$ n # I W (b(n)) − V I (b(n)) − ρC. b(n)
If Eξ = 0, ρ = 0, then ρAn → −∞ as n → ∞. (iii) For α = 2 and the scaling factor (1.5.5), ζn ⇒ ζ (2,ρ) = ζ
as
n → ∞,
f(2,ρ) (λ) = Eeiλζ = e−λ
2
/2
,
so that ζ has the standard normal distribution that is independent of ρ. Remark 1.5.2. It is not difficult to verify (cf. Lemma 2.2.1 of [286]) that in the ‘extreme’ cases ρ = ±1 the ch.f.’s (1.5.8), (1.5.12) of stable distributions with α < 2 admit the following simpler representations: f(α,1) (λ) = exp −Γ(1 − α)(−iλ)α , α ∈ (0, 2), α = 1, f(α,−1) (λ) = f(α,1) (−λ), α 2. f(1,1) (λ) = exp (−iλ) ln(−iλ) ; Remark 1.5.3. From the representation (1.5.11) for the centring sequence {An } in the case α = 1 it follows that if there exists Eξ = 0 then the boundedness of the sequence implies that ρ = 0. The converse assertion, that in the case Eξ = 0 the relation ρ = 0 implies the boundedness of {An }, is false. Indeed, let ξ be an r.v. with Eξ = 0 such that for t t0 > 0 one has % & 1 1 , W (t) = V (t) 1 + , L2 (t) := ln ln t. V (t) = L2 (t) 2t ln2 t Then ρ = 0, F (t) ∼ t−1 ln−2 t, b(n) ∼ n ln−2 n and V I (t) =
1 , 2 ln t
W I (t) = V I (t) +
1 + o(1) , L2 (t) ln(t)
so that W I (t) − V I (t) ∼
1 . L2 (t) ln t
Therefore An =
(1 + o(1)) ln2 n ln n − ρC ∼ → ∞ as L2 (b(n)) ln b(n) ln ln n
n → ∞.
Remark 1.5.4. The last assertion of the theorem shows that the limiting distribution can be normal even in the case when ξ has infinite variance.

Remark 1.5.5. If α < 2 then from the properties of s.v.f.’s (Theorem 1.1.4(iv)) we have that, as t → ∞,

∫₀ᵗ y F(y) dy = ∫₀ᵗ y^{1−α} L_F(y) dy ∼ (1/(2 − α)) t^{2−α} L_F(t) = (1/(2 − α)) t² F(t).

1.5 Convergence of distributions of sums of r.v.’s to stable laws

Hence for α < 2 one has

Y(t) ∼ 2(2 − α)⁻¹ F(t),  Y^(−1)(1/n) ∼ F^(−1)((2 − α)/(2n)) ∼ (2/(2 − α))^{1/α} F^(−1)(1/n)

(cf. (1.5.4)). However, when α = 2 and d := Eξ² < ∞, we have

Y(t) ∼ t⁻² d,  b(n) = Y^(−1)(1/n) ∼ √(nd).

Thus the scaling (1.5.5) is ‘transitional’ between the scaling (1.5.4) (up to a constant factor (2/(2 − α))^{1/α}) and the standard scaling √(nd) of the central limit theorem in the case Eξ² < ∞. This also means that the scaling (1.5.5) is ‘universal’ and can be used for all α ≤ 2 (as it is in many texts on probability theory). However, as we will see later on, for α < 2 the scaling (1.5.4) is simpler and easier to deal with, and this is why it will be used in the present exposition.

We will present here a proof of Theorem 1.5.1 that essentially uses the explicit form of the scaling sequence b(n) and thereby helps to establish a direct connection between the zones of ‘normal’ deviations (as in Theorem 1.5.1) and large deviations (as in Chapters 2 and 3).

Recall that Fα,ρ denotes the distribution of ζ^(α,ρ). The parameter α assumes values from the half-interval (0, 2] and the parameter ρ = ρ+ − ρ− can assume any value from the closed interval [−1, 1]. The role of the parameters α and ρ will be clarified later, at the end of this section.

It follows from Theorem 1.5.1 that each law Fα,ρ, 0 < α ≤ 2, −1 ≤ ρ ≤ 1, is limiting for the distributions of suitably normalized sums of i.i.d. r.v.’s. The law of large numbers implies that the degenerate distribution Ia concentrated at some point a is also a limiting one. The totality of all these distributions will be denoted by S₀. Further, it is not hard to see that if F ∈ S₀ then a distribution obtained from F by scale and shift transformations, i.e. a distribution F{a,b} given, for some fixed b > 0 and a, by the relation

F{a,b}(B) := F((B − a)/b), where (B − a)/b = {u ∈ R : ub + a ∈ B},

is also limiting (for the distributions of (Sn − an)/bn as n → ∞, with suitable {an} and {bn}). It turns out that the class S of all distributions obtained by such an extension of S₀ includes all the limiting laws for sums of i.i.d. r.v.’s.

Another characterization of the class S of limiting distributions is possible.

Definition 1.5.6. A distribution F is called stable if, for any a1, a2 and for any b1 > 0 and b2 > 0, there exist a and b > 0 such that

F{a1,b1} ∗ F{a2,b2} = F{a,b}.
This definition implies that the convolution of a stable distribution F with itself produces the same distribution F, up to scale and shift transformations (or, equivalently, for independent r.v.’s ξi ⊂= F one has (ξ1 + ξ2 − a)/b ⊂= F for suitable a and b > 0). In terms of ch.f.’s, stability is stated as follows: for any b1 > 0, b2 > 0 there exist a and b > 0 such that

f(λb1) f(λb2) = e^{iλa} f(λb),  λ ∈ R.  (1.5.13)
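The stability relation (1.5.13) can be checked directly for the simplest non-normal stable law, the standard Cauchy distribution, whose ch.f. is e^{−|λ|} (the case α = 1, ρ = 0, up to a scale factor). The following sketch (an illustration added here, not from the book; NumPy is assumed to be available) verifies (1.5.13) numerically with a = 0 and b = b1 + b2:

```python
import numpy as np

def cauchy_cf(lam):
    # Characteristic function of the standard Cauchy law: E e^{i*lam*xi} = exp(-|lam|).
    return np.exp(-np.abs(lam))

lam = np.linspace(-5.0, 5.0, 1001)
b1, b2 = 0.7, 2.3

# Stability (1.5.13): f(lam*b1) f(lam*b2) = e^{i*lam*a} f(lam*b),
# which holds here with a = 0 and b = b1 + b2 (for alpha = 1 the scales add linearly).
lhs = cauchy_cf(lam * b1) * cauchy_cf(lam * b2)
rhs = cauchy_cf(lam * (b1 + b2))
assert np.allclose(lhs, rhs)
```

For a general α-stable law the same computation would give b = (b1^α + b2^α)^{1/α}, which is why α is often called the stability index.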
The class of all stable laws will be denoted by S_S. The remarkable fact is that the class S of all limiting laws coincides with the class S_S of all stable laws.

If, under a suitable scaling,

ζn ⇒ ζ^(α,ρ) as n → ∞,

then one says that the distribution F of the summands ξ belongs to the domain of attraction of the stable law Fα,ρ. Theorem 1.5.1 means that if F satisfies condition [Rα,ρ] then F belongs to the domain of attraction of the distribution Fα,ρ. One can show that the converse is also true (see e.g. § 5, Chapter XVII of [122]): if F belongs to the domain of attraction of the law Fα,ρ for an α < 2 then condition [Rα,ρ] is satisfied.

Proof of Theorem 1.5.1. We follow the same path as when proving the central limit theorem using the relation (1.5.1). We will study the asymptotic properties of the ch.f. f(λ) = E e^{iλξ} in the vicinity of zero (more precisely, the asymptotics of f(μ/b(n)) − 1 → 0 as b(n) → ∞) and show that, under condition [Rα,ρ], for any μ ∈ R one has

n [f(μ/b(n)) − 1] → ln f^(α,ρ)(μ)  (1.5.14)

(or some modification of this relation; see (1.5.51) below). From this it will follow that, for ζn = Sn/b(n),

f_ζn(μ) → f^(α,ρ)(μ) as n → ∞.  (1.5.15)

Indeed, f_ζn(μ) = [f(μ/b(n))]^n. Since f(λ) → 1 as λ → 0, we have

ln f_ζn(μ) = n ln f(μ/b(n)) = n ln[1 + (f(μ/b(n)) − 1)] = n [f(μ/b(n)) − 1] + Rn,
where |Rn| ≤ n |f(μ/b(n)) − 1|² for all sufficiently large n and hence Rn → 0, owing to (1.5.14). From this we see that (1.5.14) implies (1.5.15). Thus, first we will study the asymptotics of f(λ) as λ → 0 and then establish (1.5.14).

(i) Let α ∈ (0, 1). One has

f(λ) = −∫₀^∞ e^{iλt} dV(t) − ∫₀^∞ e^{−iλt} dW(t).  (1.5.16)
Consider the first integral:

−∫₀^∞ e^{iλt} dV(t) = V(0) + iλ ∫₀^∞ e^{iλt} V(t) dt,  (1.5.17)

where, after the change of variables |λ|t = y, |λ| = 1/m, we obtain

I+(λ) := iλ ∫₀^∞ e^{iλt} V(t) dt = iφ ∫₀^∞ e^{iφy} V(my) dy  (1.5.18)

with φ = sign λ (the trivial case λ = 0 is excluded throughout the argument). Assume for the present that ρ+ > 0. Then V(t) is an r.v.f. at infinity and, for each y, owing to the properties of s.v.f.’s one has

V(my) ∼ y^{−α} V(m) as |λ| → 0.

So it is natural to expect that, as |λ| → 0,

I+(λ) ∼ iφ V(m) ∫₀^∞ e^{iφy} y^{−α} dy = iφ V(m) A(α, φ),  (1.5.19)

where

A(α, φ) := ∫₀^∞ e^{iφy} y^{−α} dy.  (1.5.20)
Let us assume that the relation (1.5.19) holds and, similarly, that

−∫₀^∞ e^{−iλt} dW(t) = W(0) + I−(λ),  (1.5.21)

where

I−(λ) := −iλ ∫₀^∞ e^{−iλt} W(t) dt ∼ −iφ W(m) ∫₀^∞ e^{−iφy} y^{−α} dy = −iφ W(m) A(α, −φ).  (1.5.22)

Since V(0) + W(0) = 1, the relations (1.5.16)–(1.5.22) mean that, as λ → 0,

f(λ) = 1 + F(m) iφ [ρ+ A(α, φ) − ρ− A(α, −φ)](1 + o(1)).  (1.5.23)
One can find a closed-form expression for the integral A(α, φ). Observe that the contour integral, taken along the (closed) boundary of the positive quadrant of the complex plane, of the function e^{iz} z^{−α}, which is analytic in the quadrant, is equal to zero. From this it is not hard to obtain that

A(α, φ) = Γ(1 − α) e^{iφ(1−α)π/2},  0 < α < 1.  (1.5.24)

(Note also that (1.5.20) is a table integral; its value (1.5.24) can be found in handbooks; see e.g. integrals 3.761.4 and 3.761.9 of [134].) Thus, one has in (1.5.23) that

iφ [ρ+ A(α, φ) − ρ− A(α, −φ)]
 = iφ Γ(1 − α) [ρ+ cos((1 − α)π/2) + iφ ρ+ sin((1 − α)π/2) − ρ− cos((1 − α)π/2) + iφ ρ− sin((1 − α)π/2)]
 = Γ(1 − α) [iφ (ρ+ − ρ−) cos((1 − α)π/2) − sin((1 − α)π/2)]
 = Γ(1 − α) [iφρ sin(απ/2) − cos(απ/2)] = B(α, ρ, φ),

where B(α, ρ, φ) is defined in (1.5.9). Therefore, as λ → 0,

f(λ) − 1 = F(m) B(α, ρ, φ)(1 + o(1)).  (1.5.25)

Setting λ := μ/b(n) (so that m = b(n)/|μ|), where b(n) is defined in (1.5.4), and taking into account that F(b(n)) ∼ 1/n, we get

n [f(μ/b(n)) − 1] = n F(b(n)/|μ|) B(α, ρ, φ)(1 + o(1)) ∼ |μ|^α B(α, ρ, φ).  (1.5.26)
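The closed-form expression (1.5.24) is easy to check numerically. The sketch below (an illustration added here, not part of the original text; SciPy is assumed) splits A(α, φ) at y = 1 into an ordinary integral with an integrable endpoint singularity and a Fourier-type tail integral, and compares the result with Γ(1 − α)e^{iφ(1−α)π/2} for φ = 1:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def A(alpha):
    # A(alpha, phi=+1) = \int_0^infty e^{iy} y^{-alpha} dy, 0 < alpha < 1.
    # Split at y = 1: a regular integral with an integrable singularity at 0,
    # plus a Fourier-type integral over (1, infty) handled by QUADPACK's QAWF.
    re = quad(lambda y: np.cos(y) * y**(-alpha), 0, 1)[0] \
         + quad(lambda y: y**(-alpha), 1, np.inf, weight='cos', wvar=1.0)[0]
    im = quad(lambda y: np.sin(y) * y**(-alpha), 0, 1)[0] \
         + quad(lambda y: y**(-alpha), 1, np.inf, weight='sin', wvar=1.0)[0]
    return re + 1j * im

for alpha in (0.3, 0.5, 0.8):
    closed_form = gamma(1 - alpha) * np.exp(1j * (1 - alpha) * np.pi / 2)
    assert abs(A(alpha) - closed_form) < 1e-6
```

For α = 1/2, for instance, both real and imaginary parts equal √(π/2), a classical Fresnel-type value.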
We have established the validity of (1.5.14), and hence that of assertion (i) of the theorem, in the case α < 1, ρ+ > 0.

If ρ+ = 0 (respectively ρ− = 0) then the above argument remains valid provided that we replace the term V(m) (respectively W(m)) with zero or o(W(m)) (respectively o(V(m))). This follows from the fact that F+(t) (respectively F−(t)) admits in this case a regularly varying majorant V*(t) = o(W(t)) (respectively W*(t) = o(V(t))). Similar remarks apply to what follows as well.

Thus the theorem is proved in the case α < 1, provided that we justify the
asymptotic equivalence in (1.5.19). To do that, it suffices to verify that the integrals

∫₀^ε e^{iφy} V(my) dy and ∫_M^∞ e^{iφy} V(my) dy  (1.5.27)

can be made arbitrarily small compared with V(m) by choosing ε and M appropriately. Note beforehand that, by virtue of Theorem 1.1.4(iii) (see (1.1.23)), for any δ > 0 there exists a t_δ > 0 such that for all v ≤ 1, vt ≥ t_δ one has

V(vt)/V(t) ≤ (1 + δ) v^{−α−δ}.

So, for δ < 1 − α and t > t_δ,

∫₀ᵗ V(u) du ≤ t_δ + ∫_{t_δ}^t V(u) du = t_δ + t V(t) ∫_{t_δ/t}^1 [V(vt)/V(t)] dv
 ≤ t_δ + t V(t)(1 + δ) ∫₀¹ v^{−α−δ} dv = t_δ + t V(t)(1 + δ)/(1 − α − δ) ≤ c t V(t),  (1.5.28)

because t V(t) → ∞ as t → ∞. From this we obtain that

|∫₀^ε e^{iφy} V(my) dy| ≤ (1/m) ∫₀^{εm} V(u) du ≤ cε V(εm) ∼ cε^{1−α} V(m).

Since ε^{1−α} → 0 as ε → 0, the required assertion concerning the first integral in (1.5.27) is proved. The second integral in (1.5.27) is equal to

∫_M^∞ e^{iφy} V(my) dy = [(1/(iφ)) e^{iφy} V(my)]_M^∞ − (1/(iφ)) ∫_M^∞ e^{iφy} dV(my)
 = −(1/(iφ)) e^{iφM} V(mM) − (1/(iφ)) ∫_{mM}^∞ e^{iφu/m} dV(u),

so that its absolute value does not exceed

2 V(mM) ∼ 2 M^{−α} V(m).  (1.5.29)

Therefore, by choosing a suitable M, the value of the second integral in (1.5.27) can also be made arbitrarily small compared with V(m). The relation (1.5.19) (and, together with it, the assertion of the theorem in the case α < 1) is proved.

Now let α ∈ (1, 2); hence there exists a finite expectation Eξ that, according
to our convention, will be assumed to be equal to zero. In this case,

f(λ) − 1 = φ ∫₀^{|λ|} f′(φu) du,  φ = sign λ,  (1.5.30)

and we have to find the asymptotic behaviour of

f′(λ) = −i ∫₀^∞ t e^{iλt} dV(t) + i ∫₀^∞ t e^{−iλt} dW(t) =: I+^(1)(λ) + I−^(1)(λ)  (1.5.31)

as λ → 0. Since t dV(t) = d(tV(t)) − V(t) dt, integrating by parts yields

I+^(1)(λ) = −i ∫₀^∞ t e^{iλt} dV(t) = −i ∫₀^∞ e^{iλt} d(tV(t)) + i ∫₀^∞ e^{iλt} V(t) dt
 = −λ ∫₀^∞ tV(t) e^{iλt} dt + i V^I(0) − λ ∫₀^∞ V^I(t) e^{iλt} dt
 = i V^I(0) − λ ∫₀^∞ Ṽ(t) e^{iλt} dt,  (1.5.32)

where, by Theorem 1.1.4(iv), both the functions

V^I(t) := ∫ₜ^∞ V(u) du ∼ tV(t)/(α − 1) as t → ∞,  V^I(0) < ∞,

and

Ṽ(t) := tV(t) + V^I(t) ∼ α tV(t)/(α − 1)

are regularly varying. Letting, as before, m = 1/|λ|, m → ∞ (cf. (1.5.18), (1.5.19)), we obtain

−λ ∫₀^∞ Ṽ(t) e^{iλt} dt = −φ ∫₀^∞ Ṽ(my) e^{iφy} dy ∼ −φ Ṽ(m) ∫₀^∞ y^{−α+1} e^{iφy} dy = −(α V(m)/(λ(α − 1))) A(α − 1, φ),

and hence

I+^(1)(λ) = i V^I(0) − (α ρ+ F(m)/(λ(α − 1))) A(α − 1, φ)(1 + o(1)),  (1.5.33)

where the function A(α, φ) defined in (1.5.20) is given by (1.5.24).
Similarly,

I−^(1)(λ) = i ∫₀^∞ t e^{−iλt} dW(t) = −λ ∫₀^∞ tW(t) e^{−iλt} dt − i W^I(0) − λ ∫₀^∞ W^I(t) e^{−iλt} dt
 = −i W^I(0) − λ ∫₀^∞ W̃(t) e^{−iλt} dt,

where

W^I(t) := ∫ₜ^∞ W(u) du,  W̃(t) := tW(t) + W^I(t) ∼ α tW(t)/(α − 1)

and

−λ ∫₀^∞ W̃(t) e^{−iλt} dt ∼ −(α W(m)/(λ(α − 1))) A(α − 1, −φ).

Therefore

I−^(1)(λ) = −i W^I(0) − (α ρ− F(m)/(λ(α − 1))) A(α − 1, −φ)(1 + o(1)).

Hence, by virtue of (1.5.31), (1.5.33) and the equality V^I(0) − W^I(0) = Eξ = 0, one has

f′(λ) = −(α F(m)/(λ(α − 1))) [ρ+ A(α − 1, φ) + ρ− A(α − 1, −φ)](1 + o(1)).

Now let us return to the relation (1.5.30). Since

∫₀^{|λ|} u⁻¹ F(u⁻¹) du ∼ α⁻¹ F(|λ|⁻¹) = α⁻¹ F(m)

(see Theorem 1.1.4(iv)), we obtain, again using (1.5.24) and an argument like that employed in the proof for the case α < 1, that

f(λ) − 1 = −(1/(α − 1)) F(m) [ρ+ A(α − 1, φ) + ρ− A(α − 1, −φ)](1 + o(1))
 = −(Γ(2 − α)/(α − 1)) F(m) [ρ+ (cos((2 − α)π/2) + iφ sin((2 − α)π/2)) + ρ− (cos((2 − α)π/2) − iφ sin((2 − α)π/2))](1 + o(1))
 = (Γ(2 − α)/(α − 1)) F(m) [cos(απ/2) − iφρ sin(απ/2)](1 + o(1))
 = F(m) B(α, ρ, φ)(1 + o(1)).  (1.5.34)
We arrive once more at the relation (1.5.25) which, by virtue of (1.5.26), implies the assertion of the theorem in the case α ∈ (1, 2).

(ii) The calculations in the case α = 1 are somewhat more intricate. We will again follow the relation (1.5.16), according to which

f(λ) = 1 + I+(λ) + I−(λ).  (1.5.35)

Rewrite the relation (1.5.18) for I+(λ) as

I+(λ) = iφ ∫₀^∞ e^{iφy} V(my) dy = iφ ∫₀^∞ V(my) cos y dy − ∫₀^∞ V(my) sin y dy.  (1.5.36)

Here the first integral on the right-hand side can be represented as the sum of two integrals,

∫₀¹ V(my) dy + ∫₀^∞ g(y) V(my) dy,  (1.5.37)

where

g(y) = cos y − 1 if y ≤ 1,  g(y) = cos y if y > 1.  (1.5.38)

Note (see e.g. integral 3.782 of [134]) that

−∫₀^∞ g(y) y⁻¹ dy = C ≈ 0.5772  (1.5.39)

is the Euler constant. Since V(ym)/V(m) → y⁻¹ as m → ∞, in a similar way
69
to before we obtain for the second integral in (1.5.37) the relation ∞ g(y)V (my) dy ∼ −CV (m).
(1.5.40)
0
Now consider the first integral in (1.5.37), 1 V (my) dy = m 0
−1
m
V (u) du = m−1 VI (m),
(1.5.41)
0
where t VI (t) =
V (u) du
(1.5.42)
0
can easily be seen to be an s.v.f. in the case α = 1 (see Theorem 1.1.4(iv)) and, moreover, if E|ξ| = ∞ then VI (t) → ∞ as t → ∞, whereas if E|ξ| < ∞ then VI (t) → VI (∞) < ∞. Thus, for the first term on the right-hand side of (1.5.36) we have Im I+ (λ) = φ(−CV (m) + m−1 VI (m)) + o(V (m)).
(1.5.43)
Next we will clarify the character of the dependence of VI (vt) on v when t → ∞. For any fixed v > 0, vt VI (vt) = VI (t) +
v V (u) du = VI (t) + tV (t) 1
t
V (yt) dy. V (t)
By Theorem 1.1.3 one has v 1
V (yt) dy ∼ V (t)
v 1
dy = ln v, y
so that VI (vt) = VI (t) + (1 + o(1)) tV (t) ln v =: AV (v, t) + tV (t) ln v,
(1.5.44)
where clearly AV (v, t) = VI (t) + o(tV (t)) as
t → ∞;
(1.5.45)
evidently VI (t) tV (t) by Theorem 1.1.4(iv). Hence, for λ = μ/b(n) (so that m = b(n)/|μ| and therefore V (m) ∼ ρ+ |μ|/n) we obtain from (1.5.43), (1.5.44) (in which one has to put t = b(n), v = 1/|μ|)
70
Preliminaries
that the following representation is valid as n → ∞: % & ρ+ μ ρ+ μ μ Im I+ (λ) = −C + AV (|μ|−1 , b(n)) − ln |μ| + o(n−1 ) n b(n) n ρ μ μ + = AV (|μ|−1 , b(n)) − (C + ln |μ|) + o(n−1 ). (1.5.46) b(n) n For the second term on the right-hand side of (1.5.36) we have ∞ Re I+ (λ) = −
∞ V (my) sin y dy ∼ −V (m)
0
y −1 sin y dy.
0
Because sin y ∼ y as y → 0, the last integral converges. Since Γ(γ) ∼ 1/γ as γ → 0, the value of this integral can be found to be (see (1.5.20) and (1.5.24)) π γπ = . (1.5.47) lim Γ(γ) sin γ→0 2 2 Thus, for λ = μ/b(n), π|μ| + o(n−1 ). (1.5.48) 2n In a similar way we can find an asymptotic representation for the integral I− (λ) (see (1.5.16)–(1.5.22)): Re I+ (λ) = −
∞ I− (λ) = −iφ
W (my)e−iφy dy
0
∞ = −iφ
∞ W (my) cos y dy −
0
W (my) sin y dy.
(1.5.49)
0
Comparing this with (1.5.36) and the subsequent computation of I+ (λ), we can immediately write that, for λ = μ/b(n) (cf. (1.5.46), (1.5.48)), 1 −μAW (|μ|−1 , b(n)) ρ− μ Im I− (λ) = − + (C + ln |μ|) + o , b(n) n n (1.5.50) π|μ|ρ− −1 Re I− (λ) = − + o(n ). 2n Thus we obtain from (1.5.35), (1.5.46), (1.5.48) and (1.5.50) that π|μ| iρμ μ −1=− − (C + ln |μ|) f b(n) n n $ iμ # + AV (|μ|−1 , b(n)) − AW (|μ|−1 , b(n)) + o(n−1 ). b(n) From (1.5.45) it follows that the second to last term here is equal to $ iμ # VI (b(n)) − WI (b(n)) + o(n−1 ), b(n)
1.5 Convergence of distributions of sums of r.v.’s to stable laws so that finally μ f b(n)
−1=−
where An =
π|μ| iρμ An − ln |μ| + iμ + o(n−1 ), 2n n n
71
(1.5.51)
$ n # VI (b(n)) − WI (b(n)) − ρC. b(n)
Therefore, cf. (1.5.14), (1.5.15), we obtain μ fζn −An (μ) = exp −iμAn f n b(n)
&" % μ = exp −iμAn + n ln 1 + f −1 b(n)
% & μ μ = exp −iμAn + n f − 1 + nO f b(n) b(n)
2 " . − 1
When α = 1 the functions VI and WI are slowly varying, by Theorem 1.1.4(iv), so by virtue of (1.5.51) we get 2 A2 1 μ − 1 c + n n f b(n) n n $ 1 1 # c1 + VI (b(n))2 + WI (b(n))2 → 0. n b(n) Since clearly μ −iμAn + n f b(n) we have
−1
→−
π|μ| − iρμ ln |μ|, 2
" π|μ| − iρμ ln |μ| , fζn −An (μ) → exp − 2
and so the relation (1.5.10) is proved. The assertions about the centring sequence {An } that follow (1.5.10) are obvious when one takes into account (1.5.3) and Theorem 1.1.4(iv). (iii) It remains to consider the case α = 2. We will follow the representations (1.5.30)–(1.5.32), according to which we have to find the asymptotics (as m = 1/|λ| → ∞) of (1)
(1)
f (λ) = I+ (λ) + I− (λ),
(1.5.52)
where (1) I+ (λ)
∞ = iV (0) − λ I
V! (t) e
iλt
0
∞ dt = iV (0) − φ I
0
V! (my) eiφy dy (1.5.53)
72
Preliminaries
and, by Theorem 1.1.4(iv), ∞ V (t) =
V! (t) = tV (t) + V I (t) ∼ 2tV (t) (1.5.54)
V (y) dy ∼ tV (t),
I
t
as t → ∞. Further, ∞
V! (my) eiφy dy =
0
∞
V! (my) cos y dy + φ
0
∞
V! (my) sin y dy.
(1.5.55)
0
Here the second integral on the right-hand side of (1.5.55) is asymptotically equivalent (as m → ∞, see (1.5.47)) to V! (m)
∞
y −1 sin y dy =
0
π ! V (m). 2
The first integral on the right-hand side of (1.5.55) equals 1
V! (my) dy +
0
∞
g(y)V! (my) dy,
0
where the function g(y) was defined in (1.5.38) and 1 0
V!I (t) :=
t
1 V! (my) dy = m
m 0
1 ! VI (m); V! (u) du = m
V! (u) du is an s.v.f. according to (1.5.54). Since
0
t 0
t2 V (t) 1 − uV (u) du = 2 2
t
t
u2 dV (u),
0
t V I (u) du = tV I (t) +
0
uV (u) du, 0
V I (t) ∼ tV (t), we obtain V!I (t) =
t
2
t
(uV (u) + V (u)) du = tV (t) + t V (t) − I
I
0
u2 dV (u)
0
t =− 0
u2 dV (u) + O(t2 V (t)),
(1.5.56)
1.5 Convergence of distributions of sums of r.v.’s to stable laws
73
where the last term is negligibly small because, owing to Theorem 1.1.4(iv), t
uV (u) du t2 V (t).
0
It is also clear that, as t → ∞,
# $ V!I (t) → V!I (∞) = E ξ 2 ; ξ > 0 ∈ (0, ∞].
As a result we obtain, cf. (1.5.40), that (1)
iπ ! V (m) − λV!I (m) + φC V! (m) + o(V! (m)) 2 = iV I (0) − λV!I (m)(1 + o(1)),
I+ (λ) = iV I (0) −
since V!I (t) tV! (t). In the same manner we find that 'I (m)(1 + o(1)), I− (λ) = −iW I (0) − λW (1)
'I is an s.v.f. and is obtained from the function W in the same way as V!I where W was from V . Since V I (0) = W I (0), we see now that the relation (1.5.52) leads to # $ 'I (m) (1 + o(1)). f (λ) = −λ V!I (m) + W Hence from (1.5.30) we get the representation 1/m
f(λ) − 1 = φ
1/m
# $ 'I (1/u) du u V!I (1/u) + W
f (φu) du = − 0
0
$ # $ 1 # 'I (m) ∼ − 1 E ξ 2 ; −m ξ < m ∼ − 2 V!I (m) + W 2 2m 2m 'I . Turning now to the definition of owing to (1.5.56) and a similar relation for W −2 the function Y (t) = t LY (t) in (1.5.6) and putting b(n) := Y (−1) (1/n), we obtain n n(f(λ) − 1) ∼ − Y 2
b(n) |μ|
λ := μ/b(n),
∼−
nμ2 μ2 Y (b(n)) → − . 2 2
The theorem is proved. With regard to the role of the parameters α and ρ we will note the following. The parameter α characterizes the decay rate of the functions Fα,ρ,− (t) := Fα,ρ ((−∞, −t))
and
Fα,ρ,+ (t) := Fα,ρ ([t, ∞))
74
Preliminaries
as t → ∞. The following assertion follows from the results of Chapters 2 and 3: If condition [Rα,ρ ] is met, α = 1, α < 2, and ρ+ > 0 then Sn P v ∼ nV (vb(n)) as v → ∞. b(n) Therefore, if v → ∞ slowly enough then, owing to the properties of s.v.f.’s, Sn P v ∼ v −α nV (b(n)) ∼ ρ+ v −α nF (b(n)) ∼ ρ+ v −α . b(n) However, by virtue of Theorem 1.5.1, if v → ∞ slowly enough then Sn v ∼ Fα,ρ,+ (v). P b(n)
(1.5.57)
From this it follows that, for ρ+ > 0, Fα,ρ,+ (v) ∼ ρ+ v −α
as v → ∞.
(1.5.58)
One can easily obtain a similar relation for the left tails as well: for ρ− > 0, Fα,ρ,− (v) ∼ ρ− v −α
as
v → ∞.
(1.5.59)
Note that, for ξ ⊂ = Fα,ρ , the asymptotic relation (1.5.57) turns into an exact equality if in it one replaces b(n) by bn := n1/α : Sn P v = Fα,ρ,+ (v). (1.5.60) bn $n # This follows from the observation that f(α,ρ) (λ/bn ) coincides with f(α,ρ) (λ) (see (1.5.8)) and therefore the distribution of the scaled sum Sn /bn coincides with that of ξ. The parameter ρ assuming values from the closed interval [−1, 1] is a measure of asymmetry of the distribution Fα,ρ . If, for instance, ρ = 1 (ρ− = 0) then for α < 1 the distribution Fα,1 will be concentrated on the right half-axis. This is seen from the fact that in this case the distribution Fα,1 could be considered as the limiting law for the scaled sums of i.i.d. r.v.’s ξk 0 (with F− (0) = 0). Since the distributions of such sums will all be concentrated on the right halfaxis, the limiting law must also have this property. Similarly, for ρ = −1, α < 1, the distribution Fα,−1 is concentrated on the left half-axis. In the case ρ = 0 (ρ+ = ρ− = 1/2) the ch.f. of the distribution Fα,0 will be real-valued, and the distribution Fα,0 itself will be symmetric. As we saw, the ch.f.’s f(α,ρ) (λ) of the stable laws Fα,ρ admit closed-form expressions. It is obvious that they all are absolutely integrable on R, and the same applies to the functions λk f(α,ρ) (λ), k 1. Therefore, all stable distributions have densities that are infinitely many times differentiable (see e.g. Chapter XV of [122]). As to the explicit formulae for these densities, they are only known for a few laws. To them belong, first of all, the normal law F2,ρ and the Cauchy distribution F1,0 , with density 2/(π 2 + 4t2 ), −∞ < t < ∞.
75
1.6 Functional limit theorems
An explicit form for another stable distribution can be obtained from a closedform expression for the distribution of the maximum of the Wiener process. This is the distribution F1/2,1 with parameters 1/2, 1 and density (up to a scale transform, cf. (1.5.58)) 1 √ e−1/2t , t>0 2π t3/2 (this is the density of the first passage time of level 1 by the standard Wiener process; see e.g. § 2, Chapter 18 of [49]).
1.6 Functional limit theorems 1.6.1 Preliminary remarks In this section, we will be interested in obtaining conditions ensuring the converk gence of random processes ζn (t) generated by the partial sums Sk = j=1 ξj of i.i.d. r.v.’s ξj . For example, we could consider ζn (t) :=
Snt , b(n)
t ∈ [0, 1],
(1.6.1)
where · denotes the integral part and b(n) is a scaling factor. To state the respective assertions, we will need two functional spaces, the space C = C(0, 1) of continuous functions g(t) on [0, 1] and the space D = D(0, 1) of functions g(t), t ∈ [0, 1], without discontinuities of the second kind, g(+0) = g(0), g(1 − 0) = g(1), which can be assumed, for definiteness, to be rightcontinuous: g(t + 0) = g(t), t ∈ [0, 1). We will suppose that the spaces C and D are endowed with the respective metrics ρC (g1 , g2 ) := sup |g1 (t) − g2 (t)| t∈[0,1]
and
) ( ρD (g1 , g2 ) := inf sup |g1 (t) − g2 (λ(t))| + sup |t − λ(t)| , λ
t
t
where the infimum is taken over all continuous monotone functions λ(t) such that λ(0) = 0, λ(1) = 1 (the Skorokhod metric). Consider further the measurable functional spaces H, BH , where H can be either C or D and BH is the σ-algebra generated by cylindric sets or the Borel σ-algebra generated by the metric ρH (for H = C or D these two σ-algebras coincide with each other; see e.g. §§ 1.3 and 3.14 of [28]). Denote by FH the class of functionals f on D possessing the following properties: (1) f is BD -measurable; (2) f is continuous in the metric ρH at points belonging to H. Now consider a sequence of processes {ζn (·); n 1} defined on D, BD .
76
Preliminaries
We will say that the processes ζn (·) H-converge as n → ∞ to a process ζ(·) given in H, BH if, for any functional f ∈ FH , one has f (ζn ) ⇒ f (ζ) (the symbol ⇒ denotes, as usual, weak convergence of the respective distributions). If H = C and P(ζn (·) ∈ C) = 1,
n 1,
(1.6.2)
then the C-convergence turns into the conventional weak convergence of distributions in the space C, BC . The requirement (1.6.2), however, does not need to be satisfied – as it does, for example, for the process defined in (1.6.1). In this case, to meet condition (1.6.2) one should construct a continuous polygon instead of (1.6.1). If H = D then the D-convergence is the weak convergence of distributions in D, BD . Since the class FC is wider than FD , the C-convergence is clearly stronger than the D-convergence.
1.6.2 Invariance principle Let the r.v.’s ξj have zero mean and a finite variance Eξj2 = d < ∞. Denote by w(·) the standard Wiener process, i.e. a process with homogeneous independent increments such that for t, u 0 the increment w(t + u) − w(t) has the normal distribution with parameters (0, u).√Let, as before, ζn (·) be a random polygon, say, of the form (1.6.1) with b(n) = nd. Theorem 1.6.1. Under the above assumptions, the sequence of the processes ζn (·) defined in (1.6.1) C-converges to the process w(·) as n → ∞. In the case when one takes the ζn (·) to be continuous polygons, this theorem on the weak convergence in C, FC of the distributions of ζn to that of w is known as the invariance principle. Since the functional f (g) := maxt∈[0,1] g(t) is ρC -continuous and also ρD -continuous, Theorem 1.6.1 implies, in particular, the following assertion on the distribution of S n := maxkn Sk (for simplicity we put d = 1): for any x, as n → ∞, −1/2 P(n S n x) = P(f (ζn ) x) → P max w(t) x t∈[0,1]
= 2P(w(1) x) = 2(1 − Φ(x)),
(1.6.3)
where Φ(x) is the standard normal distribution function. One can similarly obtain a closed-form expression for the limiting distribution for n−1/2 maxkn |Sk |. It will follow from these relations and the bounds for P(S n x) to be obtained in Chapter 4 that, in the case when Eξ 2 < ∞ and E[max{0, ξ}]α < ∞ for some
77
1.6 Functional limit theorems
α > 2, along with the convergence of distributions (1.6.3) as n → ∞ one also has the convergence of the moments v Sn E √ → E(w(1))v (1.6.4) n for any v < α, where w(t) := maxut w(u). Similar relations hold for the √ maxima maxkn |Sk / n| when E|ξ|α < ∞, α > 2. The assertion (1.6.4) can be strengthened somewhat. Let F+ (t) = P(ξ t) V (t), where V (t) is an r.v.f, and let g(t) be a continuous increasing function on [0, ∞) such that − g(t) dV (t) < ∞. Then, as n → ∞,
Eg
Sn √ n
→ Eg(w(1)).
(1.6.5)
One can also obtain from Theorem 1.6.1 the following fact extending (1.6.3). Let h+ (t) > h− (t) be two functions from D. Then, under the assumptions we have made, Sk k k < √ < h+ ; 1kn P h− n n n − P h− (t) < w(t) < h+ (t); t ∈ [0, 1] → 0
as
n→∞
(1.6.6)
(Kolmogorov’s theorem [170, 172]). When addressing the problem of convergence rates in the invariance principle, the question how to measure these immediately arises. A natural way here is to study the decay rate (in n) of the Prokhorov distance [230, 74] between the distributions Pζn and Pw of the processes ζn (·) and w(·) respectively in C(0, 1). The Prokhorov distance between the distributions P and Q in C, BC is defined as follows: ρ(P, Q) := inf{ε : ρ(P, Q, ε) ε}, where ρ(P, Q, ε) := sup |P(B) − Q(B ε )| B∈BC ε
and B is the ε-neighbourhood of B: ( ) B ε := g ∈ C : inf ρC (g, h) < ε . h∈B
The following assertion holds true for the distance ρn := ρ(Pζn , Pw ) (see also the survey and bibliography of [74]).
78
Preliminaries
Theorem 1.6.2. Let Eξ = 0, Eξ 2 = 1. (i) If E|ξ|α < ∞ for some α > 2 then, as n → ∞,
ρn = o n−(α−2)/[2(α+1)] . β
(ii) If Eeλ|ξ| < ∞ for some 0 < β 1 and λ > 0 then, as n → ∞,
ρn = o n−1/2 (ln n)1/β . The rate of the convergence Pζn (B) → Pw (B) for sets B of a special form (for instance, for the sets appearing in the boundary problems (1.6.6)) admits a sharper bound (see [202, 244] and also the survey [44]): Theorem 1.6.3. If Eξ = 0, Eξ 2 = 1, E|ξ|3 < ∞ and the functions h± (t) satisfy the Lipschitz condition that, for an h < ∞, |h± (t + u) − h± (t)| h u,
0 t t + u 1,
then the absolute value of the difference on the left-hand side of (1.6.6) does not √ exceed ch E|ξ|3 / n. Recall that, under the conditions of Theorem 1.6.3 on the distribution of the r.v. ξ, the Berry–Esseen theorem states that cE|ξ|3 Sn sup P √ < x − Φ(x) < √ , n n x where c is an absolute constant (see e.g. § 8.5 of [49]).
1.6.3 A functional limit theorem on convergence to stable laws Now consider the case where Eξ 2 = ∞ and the distribution of ξ satisfies the regularity conditions [Rα,ρ ] of § 1.5. Put b(n) = F (−1) (1/n) when α < 2 (see (1.5.4)); when α = 2 the function b(n) is defined by the relation (1.5.5). When E|ξ| < ∞ we assume that Eξ = 0. Then, by virtue of Theorem 1.5.1, in the case α ∈ (0, 2], α = 1, we have the following convergence for ζn = Sn /b(n) as n → ∞: ζn ⇒ ζ (α,ρ) , where ζ (α,ρ) is an r.v. following the stable distribution Fα,ρ with ch.f. (1.5.8). In the case α = 1 one needs, generally speaking, centring as well: ζn − An ⇒ ζ (1,ρ) (see (1.5.10), (1.5.11)). Now consider in D, BD a right-continuous process ζ (α,ρ) (·) with homogeneous independent increments such that ζ (α,ρ) (1) has the distribution Fα,ρ . As before, let ζn (·) be defined by (1.6.1).
1.6 Functional limit theorems
79
Theorem 1.6.4. Under condition [Rα,ρ ] (i.e. the condition of Theorem 1.5.1) with α = 1 the processes ζn (·) D-converge to the process ζ (α,ρ) (·). Similarly to the preceding subsection, Theorem 1.6.4 implies the convergence of the distributions of S n /b(n) and maxkn |Sk |/b(n) to those of the maxima (α,ρ)
ζ (1) := maxt1 ζ (α,ρ) (t) and maxt1 |ζ (α,ρ) (t)| respectively. The rela√ tion (1.6.6) holds, too, if in it one replaces n and w(t) by b(n) and ζ(t) respectively. Also, it follows from the bounds of Chapter 3 for P(S n x) that for v < α, as n → ∞, v (α,ρ) v Sn E →E ζ (1) . b(n)
v A similar relation holds true for the moments E max |Sk |/b(n) as well. One kn
also has an assertion similar to (1.6.5). The proof of Theorem 1.6.4 in the more general case of non-identically distributed r.v.’s ξi in the triangular array scheme is presented in Chapter 12.
1.6.4 The law of the iterated logarithm The invariance principle is closely related to the following result, which characterizes the a.s. magnitude of oscillations in a random walk with zero drift. Theorem 1.6.5. Let Eξ = 0, d = Eξ 2 ∞. Then √ Sn P lim sup √ = d n→∞ 2n ln ln n
= 1.
Proof. In the case d < ∞ a proof can be found in e.g. [49, 122] and in the case d = ∞ in [261]. When the scaled sums Sn of i.i.d. r.v.’s converge to a non-normal stable law, by the ‘law of the iterated logarithm’ one means the following assertion, which is of a somewhat different character. Theorem 1.6.6. If, as n → ∞, one has ζn ⇒ ζ (α,ρ) , α < 2, then P lim sup |ζn |1/ln ln n = e1/α = 1. n→∞
In Chapter 3 we will present some extensions of this result.
2 Random walks with jumps having no finite first moment
2.1 Introduction. The main approach to bounding from above the distribution tails of the maxima of sums of random variables 2.1.1 Introduction Let ξ, ξ1 , ξ2 , . . . be i.i.d. r.v.’s with a common distribution F. Put S0 := 0 and consider the r.v.’s Sn :=
n
ξj ,
S n (a) := max(Sk − ak), a ∈ R, kn
j=1
S n := S n (0),
and the events Bj (v) := {ξj < y + vg(j)},
B(v) :=
n *
Bj (v),
v 0,
(2.1.1)
j=1
where the choice of the function g will depend on the distribution F. One of our main goals will be to obtain bounds for probabilities of the form P(Sn x),
P(S n (a) x)
and P(S n (a) x; B(v))
(2.1.2)
as x → ∞. Note that the probabilities P(S n (a) x; B(v)) will play an important role when we come to find the exact asymptotics of P(S n (a) x). As for the distribution of ξj , it will be assumed in Chapters 2–4 that its tails F− (t) = F((−∞, −t]) = P(ξj < −t), F+ (t) = F([t, ∞)) = P(ξj t),
t > 0,
are majorated or minorated by r.v.f.’s (at infinity). Majorants (or minorants) for the right tails F+ (t) will be denoted by V (t) and for the left tails F− (t) by W (t). In addition, in the case where V (t) (W (t)) is an r.v.f., we will be using for the respective index and s.v.f. the notation α and L(t) (β and LW (t) respectively): V (t) = t−α L(t), W (t) = t
−β
LW (t), 80
α > 0,
(2.1.3)
β > 0.
(2.1.4)
2.1 Introduction. Main approach to bounding distribution tails
81
Without loss of generality, we will assume that the functions V and W are monotone, with V (0) 1 and W (0) 1. In what follows, we will often use the following conditions on the asymptotic behaviour of the distribution tails under consideration: [ · , 0,
[ · , >] F+ (t) V (t), t > 0, [ 0, [>, · ]
F− (t) W (t), t > 0,
where V (t) and W (t) are of the forms (2.1.3) and (2.1.4) respectively. When studying the exact asymptotic behaviour of the probabilities P(Sn x) and P(S n (a) x), we will also be using the condition of regular variation of the tails: [ · , =]
F+ (t) = V (t), t > 0,
which is the intersection of the conditions [ · , ] (with a common function V (t)), so that one could write: [ · , =] = [ · , ]. Recall that we agreed to denote the class of distributions satisfying condition [ · , =] by R (see p. 11). In a similar way, for a condition of the form F− (t) = W (t), t > 0, we will use the notation [=, · ] and will be considering the following intersections of conditions already introduced: [=, =] = [ · , =] ∩ [=, · ], [ 1). y
(2.1.11)
In concluding this section we will note that, in the cases where α < 1 or β < 1 (under conditions [ · , , · ] respectively), the bounds for the probabilities (2.1.2) could substantially differ from one another, depending on the relative ‘thicknesses’ of the left and right distribution tails of ξ. In this connection, we will single out the following two possibilities: (1) α < 1, the tail F− (t), t > 0, is arbitrary; (2) β < 1, the tail F− (t) is substantially ‘heavier’ than F+ (t), t > 0. The bounds in the first case are essentially bounds for the sums Sn when the r.v.’s ξj are non-negative (F− (0) = 0). 2.2 Upper bounds for the distribution of the maximum of sums when α 1 and the left tail is arbitrary As we have already noted, the considerations in this and many subsequent sections will be based on bounds for the probability P = P(S n x; B), (2.1.7), the main tool used for their derivation being the basic inequality (2.1.8).
Theorem 2.2.1. Let condition [ · , <] with α < 1 be satisfied. Then, for any y > 0, one has the following inequality for the probability P defined in (2.1.7):

P ⩽ cΠ(y)^r,  r = x/y,  Π(y) = nV(y).  (2.2.1)
The constant c in (2.2.1) can be replaced by the expression (e/r)^r + ε(Π(y)), where ε(·) is a bounded function such that ε(v) ↓ 0 as v ↓ 0. The notation ε(v) (with or without indices) will be used in what follows for functions converging to zero (either as v ↓ 0 or as v → ∞, depending on the circumstances).

Remark 2.2.2. Observe that the bound (2.2.1) is of a universal character: up to minor modifications, it is applicable to all the types of random walks with 'heavy tails' that are discussed in detail in the present monograph (namely, for those with F ∈ R or F ∈ Se). The bound is rather sharp and admits the following graphical interpretation. From the subsequent exposition in Chapters 2–5 it will be seen that, for these classes of random walks, the main contribution to the probability of a large deviation S n ⩾ x (or Sn ⩾ x) comes from trajectories having a single large jump of the order of magnitude of x, so that the asymptotics of P(S n ⩾ x) have the form

P(S n ⩾ x) ∼ P(⋃_{j=1}^n {ξj ⩾ x}) ∼ nV(x).

If, however, none of the jumps ξj exceeds y = x/r, r > 1 (as is the case for our event {S n ⩾ x; B}), then reaching the level x by one jump is impossible: to cross the level, there need to be at least r 'large' jumps (assume for simplicity that r > 1 is an integer). Since the probability of a jump of a size comparable with y is of order V(y), the probability of having r jumps of that size among n independent summands will be of order (nV(y))^r. This is, up to a constant factor, the right-hand side of the bound (2.2.1) for the probability P(S n ⩾ x, all jumps < y).

For the case α = 1 we will state a separate assertion. Here, along with V(x) and s = x/σ(n), we will also need some additional characteristics. Recall that, according to Theorem 1.1.4(iv), when α = 1,

L1(x) := (1/(xV(x))) ∫_0^x V(u) du = V_I(x)/(xV(x)) → ∞ as x → ∞

is an s.v.f. For δ > 0 put

Π^(δ)(y) := Π(y) L1^{1+δ}(y) ∼ Π(y) L1^{1+δ}(x).  (2.2.2)

It is not hard to see that, under broad conditions, L1(σ(n)) ∼ L1(n).
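The 'single large jump' picture behind Remark 2.2.2 can be tried out numerically. The sketch below is an illustration with assumed parameter values (not taken from the book): jumps are Pareto with tail V(t) = t^{−α}, α < 1, and we compare the empirical frequency of {Sn ⩾ x} with the prediction nV(x), recording how often the exceedance is produced by one dominant jump.

```python
import random

# Illustration (assumed example): jumps with tail P(xi >= t) = V(t) = t**(-alpha)
# for t >= 1 (Pareto, alpha < 1: no finite mean).  We expect
# P(S_n >= x) to be close to n*V(x), with one jump doing most of the work.
alpha, n, x = 0.7, 20, 10_000.0
rng = random.Random(42)

def pareto_jump():
    # inverse-transform sampling: U**(-1/alpha) has tail t**(-alpha), t >= 1
    return rng.random() ** (-1.0 / alpha)

trials = 20_000
exceed = dominated = 0
for _ in range(trials):
    jumps = [pareto_jump() for _ in range(n)]
    if sum(jumps) >= x:
        exceed += 1
        if max(jumps) >= x / 2:  # a single jump alone does half the work
            dominated += 1

estimate = exceed / trials
prediction = n * x ** (-alpha)   # n * V(x)
print(estimate, prediction, dominated / exceed)
```

With these (assumed) values the empirical probability stays within a modest factor of nV(x), and the overwhelming majority of exceedance trajectories contain one jump of size at least x/2, in line with the heuristic.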
Random walks with jumps having no finite first moment
Theorem 2.2.3. Let condition [ · , <] with α = 1 be satisfied, and suppose that, for some fixed δ > 0 and c1 < ∞, one has Π^(δ)(y) ⩽ c1 or Π^(δ)(x) ⩽ c1. For the last inequality to hold, it suffices that s = s(x, n) > L1^{1+γ}(σ(n)) for a fixed γ > 0. Then the bound of Theorem 2.2.1 remains true, and the constant c in (2.2.1) can be replaced by (e/r)^r + ε(Π^(δ)(y)), where ε(·) is a bounded function such that ε(v) ↓ 0 as v ↓ 0.

The above-stated bounds enable one to obtain the next important result.

Corollary 2.2.4. Let condition [ · , <], α ⩽ 1, be satisfied. Then, for any δ > 0,

sup_{x: Π^(δ) ⩽ v} P(S n ⩾ x)/Π ⩽ 1 + ε(v).  (2.2.4)

From this corollary it follows, in particular, that in the case α ⩽ 1, for any δ > 0 one has

lim sup_{n→∞} sup_{x > n^{1/α+δ}} P(S n ⩾ x)/Π ⩽ 1.  (2.2.5)

Proof of Corollary 2.2.4. By (2.1.6), condition [ · , <],
(2.2.21)
The relation (2.2.21) is clearly equivalent to the inequality Π^(δ)(y) < c1. If we have Π^(δ) < c1 then Π^(δ)(y) ∼ r^α Π^(δ) < c1 r^α, and the sufficiency of the condition Π^(δ) < c1 for (2.2.20) is also proved.

Now let s > L1^{1/(α−δ)}(σ(n)). Then, for any δ′ ∈ (0, δ/4),

Π ⩽ nV(sσ(n)) < s^{−α+δ′} < s^{−δ′} L1(σ(n))^{(−α+2δ′)/(α−δ)} < L1(x)^{(−α+2δ′)/(α−δ)} < L1^{−1−ε}  (2.2.22)

for some ε > 0. This means that the inequality Π^(δ) < 1 holds true. The theorem is proved.

Remark 2.2.5. If the function L(t) is differentiable, L′(t) = o(L(t)/t) and F+(t) = V(t) ≡ t^{−α}L(t), then the bound for I3 in (2.2.11) can be improved:

I3 ⩽ α1 V(y)/(μy)

for any α1 > α and all large enough y. This makes it possible to refine the assertion of Theorem 2.2.1 and obtain the bound

P ⩽ c [Π(y)/|ln Π(y)|]^r.
2.3 Upper bounds for the distribution of the sum of random variables when the left tail dominates the right tail

To obtain the desired estimates for the large deviation probabilities, we will need bounds for the probabilities of 'small deviations' of the sums of negative r.v.'s. In this section, it will be convenient for us to mean by σW(n) any fixed function such that σW(n) ∼ W^(−1)(1/n), where W^(−1) is the function inverse to W(t) = t^{−β} LW(t), LW being an s.v.f. (in particular, one could put σW(n) := W^(−1)(1/n)). If, for instance, W(t) ∼ ct^{−β}, then one could take σW(n) = (cn)^{1/β}. As before, by the symbols c, C (with or without indices) we will be denoting constants that are not necessarily the same when they appear in different formulae.

Theorem 2.3.1. Let the r.v.'s ξj ⩽ 0 satisfy condition [>, · ] with β < 1. Then there exist c > 0, C < ∞ and δ > 0 such that, for u ⩾ CσW^{β−1}(n) and all large enough n, one has

P(Sn/σW(n) ⩾ −u) ⩽ e^{−cu^{−δ}}.  (2.3.1)

If LW(n) is a non-increasing function or LW(n) → const as n → ∞, then one can put δ = β/(1−β). In the general case, one can choose any δ < β/(1−β).

Remark 2.3.2. Let LW(t) ≡ 1 and C ⩽ 1, for example. Then, for u = σW^{β−1}(n) = n^{−(1−β)/β}, we have from (2.3.1) that

P(Sn/σW(n) ⩾ −u) ≡ P(Sn > −n) ⩽ e^{−cn}.

Now observe that for any u > 0, including the case u < CσW^{β−1}(n), to obtain for P(Sn/σW(n) > −u) a bound that would be better than e^{−cn} is, generally speaking, impossible. Indeed, if p0 = P(ξj = 0) > 0 then P(Sn = 0) = p0^n, so that for all u > 0

P(Sn/σW(n) ⩾ −u) ⩾ p0^n.
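The scaling function σW(n) is explicit in the simplest case W(t) = ct^{−β}. The following sketch (an assumed example, not from the book) just inverts W at level 1/n and checks the closed form σW(n) = (cn)^{1/β}.

```python
import math

# Assumed example: W(t) = c * t**(-beta).  Its inverse at level 1/n solves
# W(t) = 1/n, i.e. sigma_W(n) = (c*n)**(1/beta).
def W(t, beta, c=1.0):
    return c * t ** (-beta)

def sigma_W(n, beta, c=1.0):
    return (c * n) ** (1.0 / beta)

beta, c, n = 0.5, 2.0, 10_000
s = sigma_W(n, beta, c)
# sanity check: W(sigma_W(n)) should equal 1/n
print(s, W(s, beta, c), 1.0 / n)
```

For β = 1/2 the scale σW(n) grows like n², i.e. much faster than the scale n^{1/α} associated with the right tail when β < α, which is exactly the regime exploited later in § 2.6.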
Proof of Theorem 2.3.1. For λ ⩾ 0 and any z > 0 one has

φ^n(λ) ≡ ∫_{−∞}^0 e^{λt} P(Sn ∈ dt) ⩾ e^{−λz} P(Sn ⩾ −z),

so that

P(Sn ⩾ −z) ⩽ e^{λz + n ln φ(λ)}.  (2.3.2)

Further, clearly,

φ(λ) = ∫_{−∞}^0 e^{λu} P(ξ ∈ du) = 1 − λ ∫_{−∞}^0 e^{λu} P(ξ < u) du.

Since P(ξ < −t) ⩾ W(t) = t^{−β} LW(t), we get

φ(λ) ⩽ 1 − λ ∫_0^∞ e^{−λt} W(t) dt = 1 − ∫_0^∞ e^{−v} W(v/λ) dv,

where, as λ → 0,

∫_0^∞ e^{−v} W(v/λ) dv = W(1/λ) (∫_0^∞ e^{−v} v^{−β} dv)(1 + o(1)) = W(1/λ) Γ(1−β)(1 + o(1))

(see Theorem 1.1.5), Γ(·) being the gamma function. Thus φ(λ) ⩽ 1 − Γ(1−β)W(1/λ)(1 + o(1)), and therefore ln φ(λ) ⩽ −Γ(1−β)W(1/λ)(1 + o(1)). Put Γ := Γ(1−β) and take

z = uσW(n),  λ = (βΓ/u)^{1/(1−β)} σW^{−1}(n),

so that λ → 0 as u/σW^{β−1}(n) → ∞, and one has

λz + n ln φ(λ) ⩽ [u(βΓ/u)^{1/(1−β)} − nΓW((u/βΓ)^{1/(1−β)} σW(n))](1 + o(1)).

First assume that LW(n) is non-increasing or, alternatively, that LW(n) → const as n → ∞. Then, for u/βΓ ⩽ 1,

W((u/βΓ)^{1/(1−β)} σW(n)) ⩾ (u/βΓ)^{β/(β−1)} n^{−1}(1 + o(1)).  (2.3.3)

From here it follows that

λz + n ln φ(λ) ⩽ −(1−β) β^{β/(1−β)} Γ^{1/(1−β)} u^{β/(β−1)} (1 + o(1)),  (2.3.4)

and hence, by virtue of (2.3.2),

P(Sn/σW(n) ⩾ −u) ⩽ e^{−cu^{−δ}(1+o(1))},  δ = β/(1−β).  (2.3.5)

Now choose C sufficiently large that, for u ⩾ CσW^{β−1}(n), one has 1 + o(1) ⩾ 1/2 in (2.3.5) for all large enough n. Then (2.3.5) implies that the bound (2.3.1) holds with

c = (1/2)(1−β) β^{β/(1−β)} Γ^{1/(1−β)},  δ = β/(1−β).

In the general case, by the properties of s.v.f.'s (Theorem 1.1.4), for any ε > 0 and all large enough μσW(n) and n, one has

LW(μσW(n)) ⩾ μ^ε LW(σW(n)),  W(μσW(n)) ⩾ μ^{−β+ε} n^{−1}.
Putting μ := (u/βΓ)^{1/(1−β)} and repeating the above argument (see (2.3.3), (2.3.4)), we again get (2.3.5), but with δ = (β − ε)/(1 − β) < β/(1 − β). The theorem is proved.

Taking into account that, by virtue of (1.5.59),

Fβ,−1,−(t) ∼ t^{−β} as t → ∞,

we obtain the following assertion (see also (1.5.60)).

Corollary 2.3.3. Let F = Fβ,−1 be the stable distribution with parameters β and ρ = −1. One can assume without loss of generality that W(t) ∼ t^{−β}. Then, putting bn := n^{1/β}, we obtain that bn ∼ σW(n) = n^{1/β} and all the distributions of the r.v.'s Sn/bn coincide with F, and therefore, for u > 0,

P(ξ ⩾ −u) ⩽ e^{−cu^{−δ}},  δ = β/(1−β).  (2.3.6)

From (2.3.6) it follows that, for any k > 0,

E|ξ|^{−k} < ∞.  (2.3.7)
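The exponential Chebyshev step (2.3.2) can be tested numerically. The sketch below is a toy setting of my own choosing (not the book's): ξ = −U^{−1/β} with U uniform, so that P(ξ < −t) = t^{−β} for t ⩾ 1; it evaluates φ(λ) = E e^{λξ} by a crude midpoint rule, uses the λ from the proof, and compares the bound e^{λz + n ln φ(λ)} with a Monte Carlo estimate of P(Sn ⩾ −z).

```python
import math, random

# Toy numerical check (assumed setting) of the exponential Chebyshev bound
# (2.3.2): P(S_n >= -z) <= exp(lam*z + n*ln(phi(lam))).
# Here xi = -U**(-1/beta), U ~ Uniform(0,1), so P(xi < -t) = t**(-beta), t >= 1.
beta, n, u = 0.5, 50, 2.0
sigma_w = n ** (1.0 / beta)                 # sigma_W(n) for W(t) = t**(-beta)
z = u * sigma_w
gamma = math.gamma(1.0 - beta)              # Gamma(1 - beta), as in the proof
lam = (beta * gamma / u) ** (1.0 / (1.0 - beta)) / sigma_w

def phi(l, steps=200_000):
    # phi(l) = E exp(l*xi), midpoint rule in the uniform variable
    return sum(math.exp(-l * ((k + 0.5) / steps) ** (-1.0 / beta))
               for k in range(steps)) / steps

bound = math.exp(lam * z + n * math.log(phi(lam)))

rng = random.Random(7)
trials = 5_000
hits = sum(1 for _ in range(trials)
           if sum(-rng.random() ** (-1.0 / beta) for _ in range(n)) >= -z)
print(hits / trials, bound)
```

In this run the Chebyshev bound is nontrivial (below 1) and sits comfortably above the simulated frequency, as it must.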
Remark 2.3.4. The exact asymptotics of P(ξ ⩾ −u) as u → 0 for ξ ⊂= Fβ,−1 were found in Theorem 2.5.3 of [286]. Comparison with that result shows that the bound (2.3.6) is asymptotically tight (up to the value of the constant c).

Now we will obtain bounds for P(Sn ⩾ −z) in the case of arbitrary r.v.'s ξj ≶ 0 satisfying conditions [>, · ] and [ · , <].

P(Sn > 0) ⩽ c1 nV(σW(n)) + exp{−c(σ(n)/σW(n))^{−δ}}.  (2.5.12)

Here, by virtue of Theorem 1.1.4(iii), for any fixed ε > 0 and all large enough n,

V(σW(n)) = V(σ(n) · σW(n)/σ(n)) ⩾ (σW(n)/σ(n))^{−α−ε} V(σ(n)) ∼ (1/n)(σW(n)/σ(n))^{−α−ε},

and therefore

nV(σW(n)) ⩾ (σW(n)/σ(n))^{−α−ε}.

At the same time, for any k > 0, exp{−ct^{−δ}} ⩽ t^k as t ↓ 0. Hence

nV(σW(n)) ⩾ exp{−c(σ(n)/σW(n))^{−δ}},

and so by (2.5.12)

P(Sn > 0) ⩽ cnV(σW(n)).  (2.5.13)
This, together with (2.5.11) and Lemma 2.5.7, proves the first assertion of the theorem in the case α < 1.

If α ⩾ 1 and β < 1, then the integral (2.5.7) converges. In that case, the r.v.'s Sn/cσW(n) converge in distribution (as n → ∞) to the stable law Fβ,ρ with parameters β and ρ > −1, so that Fβ,ρ,+(0) =: q > 0. Therefore

P(Sn ⩾ 0) ⩾ P(S̃n ⩾ 0) → q > 0.

From here and (2.5.11) it follows that S = ∞ a.s.

Now suppose that the first inequality in (2.5.14) holds true. We will make use of the following lower bound for P(Sn ⩾ 0), which is derived from Theorems 2.5.1 and 2.5.2 (with x = 0, y = uσW(n)):

P(Sn ⩾ 0) ⩾ nF+(y)[1 − Qn−1(u) − ((n−1)/2) F+(y)],  (2.5.15)

where

Qn−1(u) = P(Sn−1/σW(n) < −u).

Construct the r.v.'s ξ̃j := min{ξj, 0} ⩽ ξj and put S̃n := Σ_{j=1}^n ξ̃j. Then evidently

Qn(u) ⩽ P(S̃n/σW(n) < −u) → Fβ,−1,−(−u),

where Fβ,−1 is the stable distribution with parameters (β, −1). Further,

((n−1)/2) F+(y) ⩽ (c1 n/2) W(uσW(n)) ∼ (c1/2) u^{−β}

as n → ∞. Hence, for large enough fixed u, we will have

Qn−1(u) + ((n−1)/2) F+(y) ⩽ 1/2,

so that by virtue of (2.5.15)

P(Sn ⩾ 0) ⩾ (n/2) V(uσW(n)) ∼ (n/2) u^{−α} V(σW(n)).

This means that if the series in (2.5.10) diverges (see Lemma 2.5.7) then the series in (2.5.11) also diverges and S = ∞ a.s.
(iii) The third assertion of Theorem 2.5.4 follows from the first two. Indeed, if the integral in (2.5.7) converges then V(t) = o(W(t)) as t → ∞ (the limit in (2.5.9) is equal to zero), and it follows from assertion (i) that S < ∞ a.s. Now let the integral (2.5.7) diverge. Then it follows from assertion (ii) that S = ∞ a.s. (owing to (2.5.9), one of the alternatives in (2.5.14) is always true). The theorem is proved.

2.6 The asymptotic behaviour of the probabilities P(Sn ⩾ x)

As in §§ 2.2–2.4, we will distinguish here between the following two cases:

(1) α < 1, the tail F−(t) does not dominate F+(t), t > 0;
(2) β < 1, the tail F−(t) is 'much heavier' than F+(t), t > 0.

First we consider the former possibility.

Theorem 2.6.1. Let condition [Rα,ρ], α < 1, ρ > −1, be satisfied. Then, for x = sσ(n),

sup_{s⩾t} |P(Sn ⩾ x)/(nV(x)) − 1| ⩽ ε(t) ↓ 0,
sup_{s⩾t} |P(S n ⩾ x)/(nV(x)) − 1| ⩽ ε(t) ↓ 0

as t → ∞.

Proof. The relations follow immediately from Corollaries 2.2.4 and 2.5.3.

In § 3.7 we will present integro-local theorems on the asymptotic behaviour of the probability P(Sn ∈ [x, x+Δ)), Δ > 0, when W(t) ⩽ cV(t).

Now consider the latter possibility (case (2)). Here (and also in many other considerations in the sequel) we will be using the following standard approach, which is related to the bounds for the distributions of sums of truncated r.v.'s that we obtained in §§ 2.2 and 2.4. In agreement with our previous notation, we put

Bj = {ξj < y},  B = ⋂_{j=1}^n Bj,

and let Gn := {Sn ⩾ x}. Then P(Gn) = P(Gn B) + P(Gn B̄), where

Σ_{j=1}^n P(Gn B̄j) ⩾ P(Gn B̄) ⩾ Σ_{j=1}^n P(Gn B̄j) − Σ_{i<j⩽n} P(Gn B̄i B̄j).

Since P(Gn B̄i B̄j) ⩽ P(B̄1)² = F+²(y), we have

P(Gn) = P(Gn B) + Σ_{j=1}^n P(Gn B̄j) + O((nF+(y))²).  (2.6.1)
Our first task is to estimate the probability P(Gn B). The bounds for it that we obtained in §§ 2.1–2.4 prove to be insufficient for our purposes in this section. We will need additional bounds; unfortunately, to derive them, one has to assume that β < α. Bounds in the case α = β are apparently much harder to obtain.

Theorem 2.6.2. Let condition [>, <] with β < α be satisfied. Then the following assertions hold true.

(i) For y = n^{1/γ}, γ > β, any ε > 0 such that θ := 1 − β/γ − ε > 0 and all large enough n, one has

P(Gn B) ⩽ n^{−θxn^{−1/γ}} e^{−n^θ}.  (2.6.2)

(ii) For y = x/r, any fixed r ⩾ 1, ε > 0 and all small enough values of nV(x), one has

P(Gn B) ⩽ [nV(y)]^r e^{−ny^{−β−ε}}.  (2.6.3)

Remark 2.6.3. Note that, for y = n^{1/γ} and all large enough n, it follows from the inequality (2.6.2) that

P(Gn B) ⩽ e^{−n^θ},  0 < θ < 1 − α/γ,  (2.6.4)

and this bound cannot be substantially sharpened when |x| ⩽ n^v, v ∈ [1/γ, 1/β), v < 1 − β/γ + 1/γ. However, if y = x/r and nV(x) → 0 fast enough, then (2.6.3) turns into the inequality

P(Gn B) ⩽ [nV(y)]^r;  (2.6.5)

this also cannot be improved.

Corollary 2.6.4. Let condition [>, <] with β < α be satisfied. Then the following assertions hold true.

(i) If γ > β and |x| ⩽ n^v, v ∈ [1/γ, 1/β), then

P(Sn ⩾ x) ⩽ nV(n^{1/γ}) + O(e^{−n^θ}),  0 < θ < 1 − β/γ.  (2.6.6)

(ii) If nV(x) → 0 then

P(Sn ⩾ x) ⩽ nV(x)(1 + o(1)).  (2.6.7)
Proof. The proof of Corollary 2.6.4 is next to obvious. One has to use the inequality

P(Gn) ⩽ P(B̄) + P(Gn B) ⩽ nV(y) + P(Gn B).

Then the bound (2.6.6) follows from (2.6.2), and the inequality (2.6.7) from (2.6.5), provided that we put r := 1 + |ln nV(x)|^{−1/2} ∼ 1.

Observe that, for x = 0, the bound (2.6.6) is weaker than the bound obtained in Theorem 2.3.5. It is also clear that, for γ < α, the zones of the deviations x in (2.6.6) and in (2.6.7) do overlap.

Proof of Theorem 2.6.2. Following the standard argument from (2.1.8), we obtain

P(Gn B) ⩽ e^{−μx} R^n(μ, y),  (2.6.8)

where

R(μ, y) = ∫_{−∞}^y e^{μt} F(dt).

Further, following (2.4.5), (2.4.6) and taking into account that V(t) = o(W(t)) as t → ∞, we obtain that, as μ → 0, μy → ∞,

R(μ, y) ⩽ 1 − ΓW(1/μ)(1 + o(1)) + V(y)e^{μy}(1 + o(1)),  (2.6.9)

where Γ = Γ(1−β). Hence

P(Gn B) ⩽ exp{−μx − nΓW(1/μ)(1+o(1)) + nV(y)e^{μy}(1+o(1))}.  (2.6.10)

(i) First put

y := n^{1/γ},  μ := (θ/y) ln n,  θ ∈ (0, (α−β)/γ)

for some γ > β. Then, for the terms on the right-hand side of (2.6.10), we will have

nΓW(1/μ) ∼ nΓθ^β W(y/ln n) ∼ n^{1−β/γ} L1(n),  (2.6.11)
nV(y)e^{μy} = n^{1−α/γ+θ} L2(n),  (2.6.12)

where L1, L2 are s.v.f.'s and 1 − α/γ + θ < 1 − α/γ + (α−β)/γ = 1 − β/γ. Hence the term (2.6.12) will be negligibly small compared with (2.6.11). This means that, for any fixed ε > 0 and all large enough n,

ln P(Gn B) ⩽ −θxn^{−1/γ} ln n − n^{1−β/γ−ε}.

This proves (2.6.2).

(ii) Now put

y := x/r,  r ⩾ 1,  μ := −(1/y) ln nV(y),
where we assume that nV(x) → 0 as x → ∞. Then, for the terms in (2.6.10), we have

nV(y)e^{μy} = 1,  nΓW(1/μ) = nΓW(y|ln nV(y)|^{−1}) > ny^{−β−ε}

for any fixed ε > 0 and all large enough y. This means that, under the above conditions,

ln P(Gn B) ⩽ r ln nV(y) − ny^{−β−ε},

which is equivalent to (2.6.3). The theorem is proved.

Now we will formulate the main assertion of the present section.

Theorem 2.6.5. Let condition [=, =] with β < α < 1 be satisfied. Then, for x ⩾ −n^{1/γ}, γ ∈ (β, α), max{x, n} → ∞, one has

P(Sn ⩾ x) = nEV(x − ζσW(n))(1 + o(1)),  (2.6.13)

where ζ ⩽ 0 is an r.v. following the stable distribution Fβ,−1 with parameters (β, −1) (i.e. the limiting law for Sn/σW(n) as n → ∞). That is,

P(Sn ⩾ x) ∼ nV(σW(n)) E(−ζ)^{−α}   if x = o(σW(n)), n → ∞,
P(Sn ⩾ x) ∼ nV(σW(n)) E(b−ζ)^{−α}  if x ∼ bσW(n), 0 < b < ∞, n → ∞,  (2.6.14)
P(Sn ⩾ x) ∼ nV(x)                  if x ≫ σW(n).

Remark 2.6.6. Note that in the case when |x| = o(σW(n)) the asymptotics of P(Sn ⩾ x) do not depend on x. As was shown in [197], the assertion of the theorem remains true in the case α = β, V(t) = o(W(t)). For x ≫ σW(n) the assertion follows from the bounds of §§ 2.2 and 2.5.

From Theorem 2.6.5 one can obtain corollaries describing the asymptotic behaviour of the renewal function

H(t) = Σ_{n=1}^∞ P(Sn ⩾ t)  (2.6.15)

or, equivalently, of the mean time spent by the trajectory {Sn} above the level t.

Corollary 2.6.7. Let condition [=, =] with β < α < 1 be met, and let t be an arbitrary fixed number. Then the following assertions hold true.

(i) H(t) < ∞ iff

Σ_{n=1}^∞ nV(σW(n)) < ∞.  (2.6.16)
(ii) If (2.6.16) is true then, as x → ∞,

H(−x) ∼ E(−ζ)^{−β} / W(x).

From part (i) of the corollary it follows, in particular, that these implications hold true:

{α > 2β} ⟹ {H(t) < ∞} ⟹ {α ⩾ 2β}.

In the case when ξ ⩽ 0, the following, more general, assertion was proved in [118]. Let β < 1 and

F−,I(t) := ∫_0^t F−(u) du.

Then the relations

F−,I(t) ∼ t^{1−β} LW(t)/(1−β)  as t → ∞,  (2.6.17)

where LW is an s.v.f., and

H(−x) ∼ [Γ(β+1)Γ(2−β)]^{−1} (x^{−β} LW(x))^{−1}  as x → ∞  (2.6.18)

are equivalent. If, instead of (2.6.17), the stronger condition that F−(t) = W(t) ∼ t^{−β} LW(t) is satisfied, then (2.6.18) and Theorem 2.6.2 imply that

H(−x) ∼ [Γ(β+1)Γ(1−β)]^{−1} / W(x)  as x → ∞

and

E(−ζ)^{−β} = [Γ(β+1)Γ(1−β)]^{−1}.

Moreover, for ξ ⩽ 0, [118] also contains a local renewal theorem that describes the asymptotic behaviour of H(−x−Δ) − H(−x), for an arbitrary fixed Δ > 0, as x → ∞ in the case when F−(t) = W(t), 1/2 < β < 1:

H(−x−Δ) − H(−x) ∼ [Γ(β)Γ(1−β)]^{−1} Δ/(xW(x)).

In the case when β ∈ (0, 1/2], a similar relation was obtained, but only in the form of a bound for lim sup_{x→∞} [H(−x−Δ) − H(−x)] xW(x).

In the lattice case, these results were extended in [128, 279] to the class of r.v.'s ξ that can assume values of both signs.
Proof of Corollary 2.6.7. (i) The first assertion of the corollary follows in an obvious way from the first relation in (2.6.14).

(ii) To prove the second assertion, put nv := v/W(x) and, for fixed ε > 0 and N < ∞, partition the sum H(−x) (see (2.6.15)) into three separate sums:

H(−x) = Σ_{n ⩽ n_ε} + Σ_{n_ε < n ⩽ n_N} + Σ_{n > n_N}.  (2.6.19)

Since x_{k+1} = o(σW(nk)) as k → ∞, we obtain (replacing, for convenience, the index k−1 by k) that

nk V(σW(nk) − 2x_{k+1}) ∼ nk V(σW(nk)) ⩽ cnk W(σW(nk))(ln nk)^{−γ} ∼ c(ln nk)^{−γ} ∼ c(k ln A)^{−γ}.

Now consider the second term on the right-hand side of (3.9.8). It is not hard to see that, for any fixed ε1 > 0 and all large enough n,

σ(n) = V^(−1)(1/n) = inf{t : V(t) ⩽ 1/n} ⩽ inf{t : W(t)(ln t)^{−γ} ⩽ 1/n} < σW(n)(ln n)^{−γ/β+ε1}.

Therefore

((2x_{k+1} + σ(nk))/σW(nk))^{−δ} ⩾ [c1((ln nk)^{−ε} + (ln nk)^{−γ/β+ε1})]^{−δ} ⩾ [c2(k^{−ε} + k^{−γ/β+ε1})]^{−δ} ⩾ c3 k^v,

with v := min{εδ, (γ/β − ε1)δ} > 0, provided that ε1 < γ/β. Hence the second summand in (3.9.8) decays no more slowly than e^{−c3 k^v}. From this it follows that Pk,1 ⩽ ck^{−γ} and Σ_k Pk,1 < ∞.

Now consider the terms Pk,2. By virtue of Theorem 2.4.1 (see (2.4.2)), for any ε2 > 0,

Pk,2 ⩽ c1 mk V(xk) ⩽ c2 nk V(xk) = c2 nk V(σW(nk)(ln nk)^{−ε}) ⩽ c2 (ln nk)^{αε+ε2} nk W(σW(nk))(ln σW(nk))^{−γ} ∼ c3 k^{αε−γ+ε2}.

Next we observe that it suffices to prove (3.9.7) for any ε ∈ (0, (γ−1)/3α): if it holds for any ε from this interval, then it will certainly hold for ε ⩾ (γ−1)/3α. Now, if ε ∈ (0, (γ−1)/3α) then, for ε2 < (γ−1)/3,

αε − γ + ε2 ⩽ (γ−1)/3 − γ + ε2 ⩽ −1 − (γ−1)/3 < −1.

Hence Σ_k Pk,2 < ∞. The theorem is proved.

If, in the assumptions of Theorem 3.9.3, we were to require in addition that condition [Rα,ρ] (with ρ = −1) is met, then the distribution of Sn/σW(n) would converge to the respective stable law and so, for any fixed ε > 0, infinitely many events An = {Sn ⩾ −εσW(n)} would occur. Therefore, if there exists an upper
Random walks with finite mean and infinite variance
boundary {ψ(n)} for {Sn} then, by Theorem 3.9.3, it will have to be of the form ψ(n) = −σW(n)ψ1(n), where ψ1(n) → 0 and |ln ψ1(n)| = o(ln ln n) as n → ∞.

Now consider the problem on the upper boundary of the random walk in the case Eξ = 0. For any fixed v > 0 and all large enough n,

P(Ck) ⩽ P(S nk ⩾ xk) < c1 nk V(xnk) < c2 nk W(vσW(nk) ln nk) ⩽ c3 nk W(σW(nk))(ln nk)^{−β+ε} ⩽ c4 k^{−β+ε}.

Hence, for ε < (β−1)/2, one has

P(Ck) ⩽ c4 k^{−1−(β−1)/2},  Σ_k P(Ck) < ∞.

That (3.9.10) and (3.9.9) are equivalent to one another follows from the fact that, in the case Eξ = 0, one has lim sup_{n→∞} Sn = ∞ (see e.g. Chapter 10 of [49]). The theorem is proved.

Remark 3.9.5. It can be seen from the proof of Theorem 3.9.4 that the factor ln n multiplying σW(n) in (3.9.9) appears there not from the upper bound for P(Ck) but rather from the condition (3.1.6) for the applicability of Theorem 3.1.1. This
3.9 Analogues of the law of the iterated logarithm
observation, together with a comparison with Theorem 3.9.1 (the conditions of Theorems 3.9.1 and 3.9.4 have, for β > 1, a non-empty intersection), leads to a natural conjecture that the above-mentioned factor ln n could be replaced by a slower growing function (say, by (ln n)γ with γ < 1/β). Remark 3.9.6. It is obvious that Theorems 3.9.1–3.9.4 also enable one to construct bounds for lower boundaries for {Sk } (by considering the reflected random walk {−Sk }).
4 Random walks with jumps having finite variance
In this chapter, we will assume that

Eξ = 0,  d := Eξ² < ∞.

As before, the main objects of study will be the probabilities of large deviations of Sn, S n, S n (a) and also the general boundary problem on the asymptotics of

P(max_{k⩽n}(Sk − g(k)) ⩾ 0),

where the boundary g(k) is such that min_{k⩽n} g(k) =: x → ∞.
4.1 Upper bounds for the distribution of S n

On the one hand, by the central limit theorem, as n → ∞,

P(Sn ⩾ x) ∼ 1 − Φ(x/√(nd))

uniformly in x ∈ (0, Nn√n), where Nn → ∞ slowly enough. On the other hand, as we saw in § 1.1.4, when condition [ · , =], V ∈ R, is met and x → ∞, one has

P(Sn ⩾ x) ∼ nV(x) = nP(ξ ⩾ x)  (4.1.1)

for any fixed n (and hence also for an n growing sufficiently slowly). These two asymptotics 'interlock' with each other in the following way. If, in addition to the above-mentioned conditions, it is also assumed that E(ξ²; |ξ| > t) = o(1/ln t) as t → ∞ and that x > √n, then

P(Sn ⩾ x) ∼ 1 − Φ(x/√(nd)) + nV(x),  n → ∞  (4.1.2)
(Corollary 7 of [237]; see also [206]¹). The representation (4.1.2) will also be discussed below, in § 4.7.2 of the present chapter.

In what follows, we will need a value σ(n) that characterizes the range of deviations of Sn where the asymptotics of P(Sn ⩾ x) change from the 'normal' asymptotics, 1 − Φ(x/√(nd)), to the asymptotics nV(x) describing P(Sn ⩾ x) for large enough x. We will denote this value, as before, by σ(n). It is defined as the deviation for which the asymptotics e^{−x²/2nd(1+o(1))} and nV(x) 'almost coincide'. It is easily seen that one can put

σ(n) = √((α−2)nd ln n),  (4.1.3)

σ(n) being the principal part of the solution for x of the equation

−x²/2nd = ln nV(x) = ln n − α ln x + o(ln x).

Recall that, under the assumptions of the preceding chapters, the deviations x at which the approximation by a stable law was replaced by an approximation by the quantity nV(x) were of the form F^(−1)(1/n), which had a power factor n^{1/α}, α ⩽ 2.

Remark 4.1.1. To avoid confusion over the definition of σ(n) for n = 1 (we do not exclude the value n = 1), we just put σ(1) := 1.

It will be assumed throughout the present section that

d = 1,  x > √n,

so that nV(x) → 0 as x → ∞. As before, we will use the notation

Bj = Bj(0) = {ξj < y},  B = ⋂_{j=1}^n Bj

(see (2.1.1)) with y = x/r, where r ⩾ 1 is fixed.

Theorem 4.1.2. Let the conditions [ · , <] with α > 2, Eξ = 0 and Eξ² = 1 be satisfied. Then the following assertions hold true.

(i) For any fixed h > 1, s0 > 0, for x = sσ(n), s ⩾ s0 and all small enough Π = nV(x), one has

P ≡ P(S n ⩾ x; B) ⩽ e^r (Π(y)/r)^{r−θ},  (4.1.4)
¹ In [206], the representation (4.1.2) was given under the additional assumption that E|ξ|^{2+δ} < ∞, δ > 0 (Theorem 1.9), with a reference to [194]. The latter paper in fact dealt only with the case x ⩾ n^{1/2} ln n (when (4.1.2) turns into (4.1.1)), but without additional moment conditions. According to [191], the result presented in [206] was obtained in A.V. Nagaev's Dr. Sci. thesis (On large deviations for sums of independent random variables, Institute of Mathematics of the Academy of Sciences of the UzSSR, Tashkent, 1970). In [225] the representation (4.1.2) was obtained under the assumption that F(t) = O(t^{−α}) as t → ∞.
where

Π(y) = nV(y),  θ = (hr²/4s²)(1 + b ln s/ln n),  b = 2α/(α−2).  (4.1.5)
(ii) For any fixed h > 1, τ > 0, for x = sσ(n), s² < (h−τ)/2 and all large enough n, we have

P ⩽ e^{−x²/2nh}.  (4.1.6)
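The crossover value σ(n) in (4.1.3) can be checked numerically. The sketch below uses an assumed pure-power tail V(x) = x^{−α} (an illustration, not the book's setting) and solves e^{−x²/2nd} = nV(x) by bisection for the upper crossing point, comparing it with √((α−2)nd ln n); the two agree up to the lower-order (logarithmic) corrections that the definition deliberately ignores.

```python
import math

# Assumed illustrative tail: V(x) = x**(-alpha).  Solve
#   exp(-x**2 / (2*n*d)) = n * V(x)
# for the upper crossing point by bisection and compare with
# sigma(n) = sqrt((alpha - 2) * n * d * ln n).
alpha, d, n = 4.0, 1.0, 10 ** 6

def f(x):
    # log-ratio of the two asymptotics; zero at a crossover point
    return -x * x / (2 * n * d) - math.log(n * x ** (-alpha))

# bracket the upper root: at lo the normal term still dominates (f > 0),
# at hi it has long lost (f < 0); values chosen for this n
lo, hi = 1_000.0, 100_000.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid

crossover = 0.5 * (lo + hi)
sigma = math.sqrt((alpha - 2) * n * d * math.log(n))
print(crossover, sigma, crossover / sigma)
```

For these values the exact crossing point exceeds σ(n) by a factor of order (ln ln n / ln n) corrections, which is consistent with σ(n) being only the principal part of the solution.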
Corollary 4.1.3. (i) If x = sσ(n), s → ∞, then, for any ε > 0 and all small enough Π = nV(x),

P ⩽ Π^{r−ε}.  (4.1.7)

(ii) If s² > c ln n then

P ⩽ c1 Π^r.  (4.1.8)

(iii) If s is fixed and r = 2s²/h ⩾ 1 then, as n → ∞,

P ⩽ cΠ^{s²/h+o(1)}.

In particular, for s² > 2h and all large enough n,

P ⩽ cΠ².  (4.1.9)
Corollary 4.1.4. (i) If s → ∞ then, for any δ > 0 and all small enough nV(x),

P(S n ⩾ x) ⩽ nV(x)(1 + δ).  (4.1.10)

(ii) If s² ⩾ h + τ for a fixed τ > 0 then, for all small enough nV(x),

P(S n ⩾ x) ⩽ cnV(x).  (4.1.11)

(iii) For any fixed h > 1, τ > 0, for s² < (h−τ)/2 and all large enough n,

P(S n ⩾ x) ⩽ e^{−x²/2nh}.  (4.1.12)

Remark 4.1.5. It is not hard to verify that, as in Corollaries 2.2.4 and 3.1.2, there exists a function ε(t) ↓ 0 as t ↑ ∞ such that, along with (4.1.10), one has

sup_{x: s⩾t} P(S n ⩾ x)/(nV(x)) ⩽ 1 + ε(t),  x = sσ(n).
Proof of Corollary 4.1.3. Since θ → 0 as s → ∞, assertion (i) follows in an obvious way from (4.1.4). We now prove the second assertion. Since y = sσ(n)/r, for any δ > 0 one has

T := r/Π(y) = r/(nV(y)) < c1 s^{α+δ} n^{(α+δ)/2−1}.
From this we obtain

θ ln T ⩽ (hr²/4s²)(1 + b ln s/ln n)[ln c1 + (α+δ) ln s + ((α+δ)/2 − 1) ln n].

Clearly, for s² > c ln n, the right-hand side of this inequality is bounded. Together with (4.1.4), this proves (4.1.8).

If s ≡ x/σ(n) is fixed and nV(x) → 0, then necessarily n → ∞. Therefore,

r − θ = r − (hr²/4s²)(1 + b ln s/ln n) = ψ(r, s) + o(1),

where the function

ψ(r, s) := r − hr²/4s²

achieves at the point r0 = 2s²/h its maximum in r, which is equal to

ψ(r0, s) = s²/h.  (4.1.13)

Hence

r0 − θ ⩾ s²/h + o(1).  (4.1.14)

In particular, for s² > 2h and all large enough n, we obtain

r0 − θ > 2.  (4.1.15)
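The maximization in (4.1.13) is elementary calculus: ψ(r, s) = r − hr²/4s² is a downward parabola in r, with vertex at r0 = 2s²/h and maximum value s²/h. A quick check with assumed sample values of h and s:

```python
# psi(r, s) = r - h*r**2/(4*s**2) is maximized in r at r0 = 2*s**2/h,
# with maximum value s**2/h (vertex of a downward parabola).
# h and s below are illustrative values, not from the book.
h, s = 1.5, 3.0

def psi(r):
    return r - h * r * r / (4 * s * s)

r0 = 2 * s * s / h
print(r0, psi(r0), s * s / h)
```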
This establishes (4.1.9). Corollary 4.1.3 is proved.

Proof of Corollary 4.1.4. The proof, like the parallel arguments from Chapters 2 and 3 (cf. Corollaries 2.2.4 and 3.1.2), is based on the inequality (2.1.6):

P(S n ⩾ x) ⩽ nV(y) + P.  (4.1.16)

Assertion (i) follows from (4.1.7) by setting r := 1 + 2ε and using standard arguments from calculus. We now prove (ii). If s → ∞, the assertion follows from (i). If s is bounded then, setting r := r0 and assuming without loss of generality that s² > h + τ (for a suitable h > 1 and a τ somewhat smaller than that in condition (ii)), we obtain from (4.1.4), (4.1.13) and (4.1.14) that

P(S n ⩾ x) ⩽ nV(y) + c(nV(y))^{1+τ/2} ⩽ 2nV(x/r0) ∼ 2r0^α nV(x).

Assertion (iii) follows from inequalities (4.1.6) and (4.1.16); these imply that

P(S n ⩾ x) ⩽ nV(x) + e^{−x²/2nh},  (4.1.17)

where, for s² < (h−τ)/2 and n → ∞,

e^{−x²/2nh} > exp{−((h−τ)(α−2)/4h) ln n} > n^{−(α−2)/4} ⩾ nV(√n) ⩾ nV(x)  (4.1.18)
M (ε)
I1 :=
e
μt
F(dt) =
−∞
1 + μt + −∞
with 0 θ(t)/t 1. Now
M (ε) −∞
μ2 t2 μθ(t) F(dt) e 2
(4.1.19)
F(dt) 1,
M (ε)
∞
t F(dt) = − −∞
t F(dt) 0
(4.1.20)
t2 F(dt) eε =: h.
(4.1.21)
M (ε)
and M (ε)
2 μθ(t)
t e
M (ε)
F(dt) e
−∞
ε −∞
Therefore I1 1 +
μ2 h . 2
(4.1.22)
Next we will bound y y μt ε I2 := − e dF+ (t) V (M (ε))e + μ V (t)eμt dt. M (ε)
(4.1.23)
M (ε)
First consider, for M (ε) < M < y, the integral M I2,1 := μ M (ε)
2α V (t)e dt = V (v/μ)ev dv. μt
ε
As μ → 0, V (v/μ)ev ∼ V (1/μ)g(v), where the function g(v) := v −α ev is convex on (0, ∞). Hence 2α − ε V (1/μ) g(ε) + g(2α) (1 + o(1)) cV (1/μ). I2,1 2
(4.1.24)
(4.1.25)
4.1 Upper bounds for the distribution of S n
187
The integral y I2,2 := μ
V (t)eμt dt M
can be dealt with in the same way as I3 in (2.2.11)–(2.2.15), which yields I2,2 V (y)eμy (1 + ε(λ)),
λ := μy,
(4.1.26)
ε(λ) ↓ 0 as λ ↑ ∞. Summing up (4.1.22)–(4.1.25), we obtain R(μ, y) 1 + μ2 h/2 + cV (1/μ) + V (y)eμy (1 + ε(λ)), and so
R (μ, y) exp n
nμ2 h + cnV 2
1 μ
" + nV (y)e (1 + ε(λ)) . μy
(4.1.27)
(4.1.28)
Now we will choose μ :=
1 ln T, y
T :=
r , nV (y)
so that λ = ln T (cf. (2.2.18)). Then (4.1.28) will become "
2 1 nμ h + cnV + r(1 + ε(λ)) , Rn (μ, y) exp 2 μ
(4.1.29)
where, as before, by Theorem 1.1.4(iii) for any δ > 0, y 1 y nV ∼ nV ∼ cnV μ ln T | ln nV (y)| cnV (y)| ln nV (y)|α+δ → 0
(4.1.30)
since nV (y) → 0. Therefore from (2.1.8) we obtain (cf. (2.2.19)) nh ln P −r ln T + r + 2 ln2 T + ε1 (T ) 2y nh = −r + 2 ln T ln T + r + ε1 (T ), (4.1.31) 2y (α − 2)n ln n and where ε1 (T ) ↓ 0 as T ↑ ∞. For x = sσ(n), σ(n) = nV (x) → 0, we have ln T = − ln nV (x) + O(1) α = − ln n + α ln s + ln n + O ln ln n + | ln L(sσ(n))| 2 α−2 ln s = ln n 1 + b (1 + o(1)), (4.1.32) 2 ln n
188
Random walks with jumps having finite variance
where b = 2α/(α − 2) (the term o(1) appears in the last equality owing to our assumption that n + s → ∞). Hence hr2 nh ln s ln T = 2 1 + b (1 + o(1)), 2 2y 4s ln n so that, by virtue of (4.1.31), %
h r 2 ln P r − r − 2 4s
ln s 1+b ln n
& ln T
for any h > h > 1 and all small enough nV (x). This proves the first assertion of Theorem 4.1.2. √ (ii) Since we are assuming everywhere that x > n, we have s = x/σ(n) > 1/ (α − 2) ln n. Hence it suffices to prove (4.1.6) for values of s that satisfy n−γ < s2
0. This corresponds to the following range of values for x2 : cn1−γ ln n < x2
1/2h, whereas the last two terms on the right-hand side are negligibly small as n → ∞. Indeed, owing to the second inequality in (4.1.33), 8 nh n cnV → 0 as n → ∞. nV x ln n Further, using the first inequality in (4.1.33), we find that
nV (x) n(2−α)/2+γ ,
4.1 Upper bounds for the distribution of S n
189
where the choice of γ is at our disposal. Moreover, by virtue of (4.1.33), x2 α − 2 τ (α − 2) 1 (h − τ )(α − 2) ln n = − ln n. nh 2h 2 2h Hence, for γ < τ (α − 2)/2h, 2
nV (x)ex
/nh
n−τ (α−2)/2h+γ → 0
as n → ∞. Thus x2 + o(1), 2nh where the term o(1) in the last relation can be removed by slightly changing the value of h > 1. (Formally, we have proved that, for h > 1 and all large enough n, the inequality (4.1.6) holds with h on its right-hand side replaced by h > h, where one could take, for instance, h = h + (h − 1)/2. Since the value h > 1 can also be made arbitrarily close to 1, by choosing a suitable h, the assertion that we have obtained is equivalent to that in the formulation of Theorem 4.1.2.) This proves (4.1.6). The theorem is proved. ln P −
Comparing the assertions of Theorem 4.1.2 and Corollaries 4.1.3, 4.1.4, we see that, roughly speaking, the range of possible values of s splits into two subranges: s < 1/2 and s > 1/2. In each subrange one can obtain quite satisfactory and, in a certain sense, unimprovable bounds for the probabilities P and P(S n x). Now consider the ‘threshold’ case α = 2, Eξ 2 < ∞. In this situation, finding the asymptotics of the solution σ(n) (for x) of the equation x2 = ln n + ln V (x) ≡ ln n − α ln x + ln L(x) (4.1.35) 2n is rather difficult, because it depends now on the s.v.f. L. Because of this, to describe the probabilities of deviations that are close to normal (i.e. they lie in the region where the bound (4.1.6) is valid), we will need two conditions. The first one assumes straight away 2 that the function e−x /nh dominates nV (x), i.e. it is supposed that x is such that −
2
nV (x)ex The second condition stipulates that
„
/nh
1 V (t) = O 2 t ln t
→ 0.
(4.1.36)
« as
t → ∞,
(4.1.37)
which, in essence, leads to no loss of generality in the threshold case α = 2, Eξ 2 < ∞. In the present context, we will consider, for the case α = 2, deviations of the form √ x = s n ln n, where the parameter s will clearly have a meaning somewhat different from that it previously had. As before, x r= , Π(y) = nV (y), Π = nV (x). y
190
Random walks with jumps having finite variance
√ Theorem 4.1.6. Let the conditions [ · , 0 and all s s0 , one has „ «r+θ « „ ln s Π(y) , θ = o(s−2 ) + O 2 P er r s ln n
(4.1.38)
as nV (x) → 0. (ii) In addition, let conditions (4.1.36), (4.1.37) be met. Then, for any fixed h > 1 and all large enough n, one has (4.1.6). Corollary 4.1.7. (i) If s s0 , nV (x) → 0 then P cΠr+o(1) .
(4.1.39)
P cΠr .
(4.1.40)
P(S n x) nV (x)(1 + o(1)).
(4.1.41)
2
(ii) If s > c1 ln n then Corollary 4.1.8. (i) If s s0 , nV (x) → 0 then
(ii) Let conditions (4.1.36), (4.1.37) be satisfied. Then, for any fixed h > 1 and all large enough n, one has (4.1.12).

Proof of Theorem 4.1.6. (i) For s ≥ s₀, the reasoning in the proof of the first part of Theorem 4.1.2 remains unchanged, up to relations (4.1.28), (4.1.29). However, in contrast with (4.1.32), for x = s√(n ln n) we now have

ln T ≡ ln r − ln nV(y) = −ln r − ln nV(x) + o(1),

where

ln nV(s√(n ln n)) = −2 ln s − ln ln n + ln L(s√(n ln n)) = −2 ln s + o(ln n) + o(ln s).   (4.1.42)

Hence ln T = 2 ln s − ln r + o(ln n) + o(ln s), and therefore

(nh/(2y²)) ln T = (hr²/(2s² ln n))[2 ln s − ln r + o(ln n) + o(ln s)] = o(s^{−2}) + O(ln s/(s² ln n)),

so that (4.1.31) becomes

ln P ≤ r − [r + o(s^{−2}) + O(ln s/(s² ln n))] ln T + o(1).

The first assertion of the theorem is proved.

(ii) Now we will prove the second assertion in the case when conditions (4.1.36), (4.1.37) are met. Again letting μ := x/nh, we obtain the relation (4.1.34), in which, as in Theorem 4.1.2, we need to bound the last two terms. From condition (4.1.36) it necessarily follows that s → 0 as nV(x) → 0 and so, by virtue of (4.1.37),

nV(nh/x) = o(nV(√(n/ln n))) = o(1)
as n → ∞. That the second of the terms to be bounded in (4.1.34) converges to zero follows directly from (4.1.36). Theorem 4.1.6 is proved.

Proof. The proofs of Corollaries 4.1.7 and 4.1.8 are also similar to the previous proofs. Their assertions follow from Theorem 4.1.6 and the fact that if nV(x) → 0 and s ≥ s₀ then n → ∞. This implies that θ = o(1). Moreover, owing to (4.1.42), θ ln Π = O(θ ln T) = o(1), provided that s² > c ln n. The relations (4.1.39), (4.1.40) are proved. To obtain inequality (4.1.41), one should use relations (4.1.16) and (4.1.39), in the latter setting r := 1 + ε, where ε tends to zero so slowly that

P ≤ cΠ^{1+ε/2},  Π^{ε/2} → 0.

Corollaries 4.1.7 and 4.1.8 are proved.

In conclusion, we state a consequence of Theorems 4.1.2 and 4.1.6 that is concerned with bounds for P(Ŝ_n ≥ x), where

Ŝ_n := max_{k≤n} |S_k| = max{S̄_n, |S̲_n|},  S̲_n := min_{k≤n} S_k.

Let, as before, V̂(t) := max{V(t), W(t)}, α̂ := max{α, β}.

Corollary 4.1.9. Let the conditions [<, <], α̂ > 2, Eξ = 0, Eξ² < ∞ be satisfied. Then, for all small enough nV̂(x),

P(Ŝ_n ≥ x) ≤ cnV̂(x).

This assertion follows from Corollary 4.1.4(ii) and Corollary 4.1.8(i), applied to S̄_n and S̲_n.
4.2 Upper bounds for the distribution of S̄_n(a), a > 0

This section does not differ much from § 3.2. As in the latter, we will put

B(v) = ⋂_{j=1}^{n} B_j(v),  B_j(v) = {ξ_j < y + vaj},  v > 0,

η(x) = min{k : S_k − ak ≥ x}.

We will not exclude the case a → 0 as x → ∞ and will consider here the triangular array scheme, assuming that the uniformity condition [U] from § 3.2 (see p. 137) is satisfied.

Theorem 4.2.1. (i) Assume that the conditions [ · , <], α > 2, Eξ = 0, Eξ² = d < ∞ and [U] are satisfied. Then, for v ≤ 1/4r, a ∈ (0, a₀) for an arbitrary fixed a₀ < ∞, all n and all x such that x → ∞, x > c|ln a|/a, we have

P(S̄_n(a) ≥ x; B(v)) ≤ c₁[mV(x)]^{r∗},   (4.2.1)

P(S̄_n(a) ≥ x) ≤ c₁ mV(x),   (4.2.2)

where the constants c, c₁ are defined below in the proof of the theorem and

m := min{n, x/a},  r∗ := r/(2(1 + vr)),  r ≡ x/y > 5/2.

For any bounded or sufficiently slow-growing values of t, we have

P(∞ > η(x) ≥ xt/a) ≤ (c₂ xV(x)/a) t^{1−α}.   (4.2.3)
If, together with x, t tends to infinity at an arbitrary rate then the inequality (4.2.3) will remain true provided that one replaces in it the exponent 1 − α by 1 − α + ε with an arbitrary fixed ε > 0.

(ii) Let the conditions of part (i) of the theorem be met, with the following amendments for the range of x and n: now we assume that

x ≤ c|ln a|/a,  n ≥ n₁ = x/a.

Then, for any fixed h > 1 and all large enough n₁,

P(S̄_n(a) ≥ x) ≤ c e^{−xa/(2hd)}   (4.2.5)

and

P(∞ > η(x) ≥ xt/a) ≤ c₁ e^{−γxat},   (4.2.6)

where the constants c, c₁, γ are defined below in the proof of the theorem. In particular, for xa ≥ c₂ > 0,

P(∞ > η(x) ≥ xt/a) ≤ c₁ e^{−γ̃t},  γ̃ = γc₂.

The theorem implies the following result.

Corollary 4.2.2. Let the conditions [ · , <], α > 2, Eξ = 0, Eξ² < ∞ and [U] be satisfied. Then there exist constants c and γ > 0 such that, for any x and n ≥ n₁ := x/a,

P(S̄_n(a) ≥ x) ≤ c max{e^{−γxa}, (x/a)V(x)}.

Proof of Theorem 4.2.1. The proof of part (i) basically repeats the argument used to prove Theorem 3.2.1. One just has to observe that, to prove (4.2.1), one should now use Theorems 4.1.2 and 4.1.6 for n ≤ c₀x/a with some c₀ < ∞. First note that if the value s ≡ x/√((α − 2)n ln n) in the conditions of Theorem 4.1.2 is such that θ ≤ r/2 (for the definition of θ see (4.1.5)) then all the
bounds in the proof of Theorem 3.2.1 will remain valid provided that in them we replace r_k and r′_k by r_k/2 and r′_k/2 respectively. Hence the inequality (4.2.1) will hold for r∗ = r₀/2 (cf. the assertion of Theorem 3.2.1, p. 138). It remains to discover for which x and a, in the case n ≤ c₀x/a, one has θ ≤ r/2 or, equivalently,

s² ≡ c₁ x²/(n ln n) > c₂,   (4.2.7)

where c₁ and c₂ are determined by the values of α and r. The inequality (4.2.7) will hold if

x > c|ln a|/a  or  a > (c ln x)/x.   (4.2.8)

Indeed, in this case,

n ln n ≤ (c₀x/a) ln(c₀x/a)(1 + o(1)) ≤ (c₀x²/(c ln x)) ln(x²/(c ln x))(1 + o(1)) ≤ (2c₀x²/c)(1 + o(1)).

From this it follows that s² ≥ c₁c/(2c₀) ≥ c₂ for c ≥ 2c₂c₀/c₁, and so (4.2.7) holds true. This proves (4.2.1).

Similarly, to prove (4.2.2) we use Corollaries 4.1.4 and 4.1.8, in which one requires that the condition x²/(n ln n) > c₃, with c₃ an explicitly known constant, is met. We are again in a situation where we have to determine under what conditions the relation (4.2.7) holds. The only difference is in the values of the constants. Otherwise, the proof of Theorem 3.2.1(i) remains unchanged.

(ii) Now we prove the second assertion. Using an argument similar to that involving (4.2.7), (4.2.8), one can verify that the inequality s² < (h − τ)/2 (see Theorem 4.1.2(ii)) will hold for n ≥ n₁ provided that x < c|ln a|/a. We will again follow the argument in the proof of Theorem 3.2.1 but will obtain different bounds, since we will now be using Theorem 4.1.2(ii), see (4.1.6). For example, instead of (3.2.5) we will obtain that, for a fixed k, n_k = n₁2^{k−1} and n₁ = x/a, the inequality

p_k = P(η(x) ∈ (n_k, n_{k+1}]; B(v)) ≤ exp{−x²2^{2(k−2)}/(2n_k hd)} + exp{−x²(1 + 2^{k−2})²/(2n_k hd)} ≤ 2 exp{−xa2^{k−2}/(4hd)}   (4.2.9)
will hold if the pairs (n_k, x·2^{k−2}) and (n_k, x(1 + 2^{k−2})) (see (3.2.5)), together with the pair (n₁, x), still satisfy the conditions of Theorem 4.1.2(ii). As k grows, starting from some point these conditions will no longer be met, and then one will have to make use of the assertion of Theorem 4.1.2(i). However, when summing the p_k, the main contribution to the sum will be due to the first few summands, since, for n = n₁ = x/a,

exp{−x²/(2nhd)} = exp{−xa/(2hd)} ≥ (x/a)V(x)

for x < c|ln a|/a and a suitable c. It also follows from these relations that

P(B̄(v)) ≤ Σ_{j=1}^{n} V(y + avj) ≤ (cx/a)V(x) = o(e^{−xa/(2hd)}).

The above argument implies (4.2.5). If x = o(|ln a|/a) then, for each fixed k, the bound (4.2.9) will be valid, and the right-hand side of (4.2.9) will dominate xV(x)/a. Therefore, by virtue of (4.2.9),

P(∞ > η(x) ≥ n₁2^{k−1}) ≤ c exp{−xa2^{k−2}/(4h)}.

This implies (4.2.6). The theorem is proved.

Note that analogues of Corollary 3.6.6 and Theorem 3.6.7 (under the conditions [ · , <], v ≤ 1/2) remain valid in the present set-up. As can be seen from the proof, one can make a more precise statement regarding the admissible growth rate for t in (4.2.6). If, for instance, x = v/a, a → 0, v = const then one can take t < c|ln a| with a suitable c.
4.3 Lower bounds for the distributions of S_n and S̄_n(a)

First we consider lower bounds for P(S_n ≥ x).
Theorem 4.3.1. Let Eξ = 0, Eξ² = d < ∞. Then, for y = x + u√((n − 1)d), u > 0,

P(S_n ≥ x) ≥ nF₊(y)[1 − u^{−2} − ((n − 1)/2)F₊(y)].   (4.3.1)

Proof. The assertion will follow from Theorem 2.5.1 if we put K(n) := √(nd) and make use of the Chebyshev inequality, which implies that

Q_n(u) ≡ P(S_n/K(n) < −u) ≤ u^{−2}.
The following results are obvious consequences of Theorem 4.3.1.

Corollary 4.3.2. (i) If x → ∞, x ≫ √n then, as u → ∞,

P(S_n ≥ x) ≥ nF₊(y)(1 + o(1)).
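The first-order relation P(S_n ≥ x) ≈ nF₊(x) can be probed by simulation. A rough Monte Carlo sketch with an illustrative centred Pareto jump distribution (α = 2.5) of our choosing; for x well above √(n ln n) the ratio of the empirical probability to nF₊(x) should be of order 1:

```python
import random

random.seed(12345)
ALPHA = 2.5
MU = ALPHA / (ALPHA - 1.0)      # mean of a Pareto(ALPHA) variable on [1, inf)

def jump():
    # xi = eta - E(eta), where eta is Pareto: P(eta > u) = u**(-ALPHA), u >= 1
    return random.random() ** (-1.0 / ALPHA) - MU

def tail(y):
    # F_+(y) = P(xi >= y) = (y + MU)**(-ALPHA), valid for y >= 1 - MU
    return (y + MU) ** (-ALPHA)

n, x, trials = 50, 40.0, 40_000
hits = sum(1 for _ in range(trials) if sum(jump() for _ in range(n)) >= x)
print(round((hits / trials) / (n * tail(x)), 2))  # of order 1
```

The agreement reflects the 'single big jump' mechanism behind both the upper and the lower bounds of this section.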
(ii) If, moreover, condition [ · , >] is satisfied then

P(S_n ≥ x) ≥ nV(x)(1 + o(1)).

Thus, a lower bound for P(S_n ≥ x) with right-hand side nF₊(y) holds under weaker conditions on x than the upper bounds (the latter require x ≥ √(n ln n)) and in the absence of a regular domination condition of the form [ · , <]. Now put

Z_γ(x, t) := Σ_{j=1}^{n} F₊(x + aj + (j − 1)^{1/γ} t b),  b > 0.

Clearly, Z_γ(x, t) → 0 as x → ∞.

Theorem 4.3.3. For all n, x and t > 0,

P(S̄_n(a) ≥ x) ≥ Z_γ(x, t)[1 − (1 + ε(t))t^{−γ} − Z_γ(x, t)],   (4.3.3)

where ε(t) ≡ 0 for γ = 2, and ε(t) → 0 as t → ∞ for γ < 2. Let

I_γ(x, t) := ∫_1^{n+1} F₊(x + au + (u − 1)^{1/γ} t b) du ≤ Z_γ(x, t).
Given that (4.3.3) holds, it is not hard to find values t₀ > 1, z₀ = z₀(t₀) > 0 such that, for Z_γ(x, t) < z₀, t > t₀,

P(S̄_n(a) ≥ x) ≥ I_γ(x, t)[1 − (1 + ε(t))t^{−γ} − I_γ(x, t)].   (4.3.4)

Indeed, observe that for t > t₀ > 1 and c := sup_t (1 + ε(t)), the function g(z) := z(1 − ct^{−γ} − z) increases monotonically on [0, z₀], where z₀ = z₀(t₀) > 0. Hence, for Z_γ(x, t) < z₀, the right-hand side of (4.3.3) exceeds I_γ(x, t)[1 − (1 + ε(t))t^{−γ} − I_γ(x, t)]. It is not difficult to give explicit values for t₀, z₀. For instance, when γ = 2, one can put t₀ := 2, z₀ := 3/8.

Corollary 4.3.4. Assume that the conditions (4.3.2) and [ · , >] are satisfied. …

≥ P(S_{j−1} ≥ −(j − 1)^{1/γ} t b; B̄_j)
= F₊(x + aj + (j − 1)^{1/γ} t b)[1 − P(S_{j−1} < −(j − 1)^{1/γ} t b)].   (4.3.8)

If γ = 2 then, by the Chebyshev inequality, P(S_{j−1} < −(j − 1)^{1/2} t b) ≤ t^{−2}. If γ < 2 then condition […] …

4.4 Asymptotics of P(S_n ≥ x) and its refinements

… for s > c > 1 provided that n → ∞. Note also that the relation

lim_{n→∞} P(S_n ≥ x)/(nV(x)) = 1

for s > c > 1 (i.e. for x > cσ(n), c > 1) also follows from the uniform representation (4.1.2). It appears that a similar representation

P(S̄_n ≥ x) ∼ 2[1 − Φ(x/√(nd))] + nV(x),  x > √n, n → ∞,   (4.4.3)

holds for S̄_n (with the same consequences for P(S̄_n ≥ x)/(nV(x))); it was established in [225] under the additional condition that F(t) = O(t^{−α}) as t → ∞.

To obtain more precise results, one needs additional smoothness conditions on the function V(t). First we will formulate 'first-order' and 'second-order' smoothness conditions. It will be assumed in the remaining part of this chapter that condition [ · , =] is satisfied. As in § 3.4, let q(t) be a function vanishing as t → ∞ (for instance, an r.v.f. of index −γ, γ ≥ 0). The conditions below are similar to conditions [D(h,q)] from § 3.4 (see p. 144).

[D(1,q)]
The following representation is valid:

V(t(1 + Δ)) − V(t) = V(t)[−L₁(t)Δ(1 + ε(Δ, t)) + o(q(t))],

where L₁(t) = α + o(1) and ε(Δ, t) → 0 as |Δ| → 0, t → ∞.

As we observed in § 3.4, the distribution tail of the first positive sum in a random walk will always satisfy condition [D(1,q)] with q(t) = t^{−1}, provided that condition [ · , =] is met (see § 7.5). If the function L(t) in the representation V(t) = t^{−α}L(t) is continuously differentiable for all t ≥ t₀, starting from some t₀ > 0, and if

L′(t) = o(L(t)/t)  as  t → ∞,
then condition [D(1,0)] is satisfied. Indeed, in this case,

V′(t) = −(V(t)/t) L₁(t),  L₁(t) = α − L′(t)t/L(t) = α + o(1)

as t → ∞. Integrating V′(u) from t to t + Δt yields

V(t + Δt) − V(t) = V′(t)Δt(1 + ε(Δ, t)) = −V(t)L₁(t)Δ(1 + ε(Δ, t)),   (4.4.4)

where ε(Δ, t) → 0 as |Δ| → 0, t → ∞, and hence condition [D(1,q)] is met for q(t) ≡ 0.

The next, stronger, smoothness condition [D(2,q)] has a similar form:

[D(2,q)] For some t₀ > 0, the function V(t) is continuously differentiable for t ≥ t₀,

L₁(t) := −V′(t)t/V(t) = α − tL′(t)/L(t)

is an s.v.f. and, as t → ∞,

V(t(1 + Δ)) − V(t) = V(t)[−L₁(t)Δ + L₂(t)(Δ²/2)(1 + ε(Δ, t)) + o(q(t))],
where L₂(t) = α(α + 1) + o(1) and ε(Δ, t) → 0 as |Δ| → 0, t → ∞.

It is evident that if

V″(t) = α(α + 1)(V(t)/t²)(1 + o(1))   (4.4.5)

exists then tL′(t)/L(t) → 0 as t → ∞, and so condition [D(2,q)] is satisfied for q(t) ≡ 0. The reason is that in this case

V(t + Δt) − V(t) = ∫_t^{t+Δt} [V′(t) + ∫_t^v V″(u) du] dv
= V′(t)Δt + V″(t)(Δ²t²/2)(1 + o(1))
= V(t)[−L₁(t)Δ + (α(α + 1)/2)Δ²(1 + ε(Δ, t))],   (4.4.6)

where ε(Δ, t) → 0 as |Δ| → 0, t → ∞, and, by virtue of (4.4.5),

L₁(t) = −V′(t)t/V(t) = α + o(1).
Clearly, condition [D(1,q) ] (like condition [D(2,q) ]) is not a differentiability condition in the usual sense, as it specifies the values of the increments V (t(1+Δ))− V (t) only for Δ > q(t). In the case q(t) = 0 this condition could be referred to as ‘differentiability at infinity’. Now we will formulate a general smoothness condition [D(k,q) ], k 1.
[D(k,q)] For some t₀ > 0, the function V(t) is k − 1 times continuously differentiable for t ≥ t₀, and

V(t(1 + Δ)) − V(t) = V(t)[Σ_{j=1}^{k−1} (−1)^j L_j(t) Δ^j/j! + (−1)^k L_k(t)(Δ^k/k!)(1 + ε(Δ, t)) + o(q(t))],   (4.4.7)

where ε(Δ, t) → 0 as |Δ| → 0, t → ∞, and |ε(Δ, t)| < c for t ≥ t₀ and |Δ| < 1 − δ, where δ > 0 is a fixed arbitrary number. The s.v.f.'s L_j are defined, for j ≤ k − 1, by the equalities

L_j(t) = (−1)^j (t^j/V(t)) d^jV(t)/dt^j.   (4.4.8)

The functions L_j with j ≤ k have the property that, as t → ∞,

L_j(t) = α(α + 1)···(α + j − 1)(1 + o(1)).   (4.4.9)

It can be seen that the expansion in (4.4.7) differs from an expansion for the power function t^{−α} only by the presence of slowly varying factors that are asymptotically equivalent to 1. However, replacing these factors by unity is, of course, impossible, since that could lead to the introduction of errors whose orders of magnitude would exceed those of the subsequent terms of the expansion.

Similarly to the above discussion, we can observe that if the kth derivative

V^{(k)}(t) ≡ d^kV(t)/dt^k = α(α + 1)···(α + k − 1)(V(t)/t^k)(1 + o(1))   (4.4.10)

exists then condition [D(k,0)] is met. Also, it is not hard to see that condition [D(k,0)] implies [D(j,0)] for j < k. One could give the following sufficient conditions for (4.4.10): if the kth derivative V^{(k)}(t) exists for t ≥ t₀ and is monotone on that half-line then (4.4.10) holds and so [D(k,0)] is satisfied. Indeed, in this case, for some t₁ ≥ t₀ the function V^{(k)}(t) will have the same sign on the whole interval (t₁, ∞), and therefore the (k − 1)th derivative V^{(k−1)}(t) will be monotone on this interval. Continuing this kind of reasoning, we conclude that all the derivatives V^{(k−2)}(t), …, V′(t) will be monotone 'at infinity'. It is known, however (see e.g. Theorem 1.7.2 of [32]), that the monotonicity of V′(t) and regular variation of V(t) imply that

V′(t) ∼ −αt^{−α−1}L(t),  t → ∞.

From this and the monotonicity of V″(t) it follows, in turn, that

V″(t) ∼ (−1)²α(α + 1)t^{−α−2}L(t),  t → ∞,
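For the pure power function V(t) = t^(−α) the coefficients (4.4.8) reduce exactly to α(α+1)···(α+j−1), which is easy to confirm numerically. A small sketch using central finite differences (the step size and test point are arbitrary choices of ours):

```python
def d(f, t, h=1e-4):
    # central finite difference approximation to f'(t)
    return (f(t + h) - f(t - h)) / (2.0 * h)

alpha = 3.0
V = lambda t: t ** (-alpha)
t = 50.0
L1 = -t * d(V, t) / V(t)                      # (4.4.8), j = 1: should be alpha
L2 = t * t * d(lambda u: d(V, u), t) / V(t)   # (4.4.8), j = 2: alpha*(alpha + 1)
print(round(L1, 4), round(L2, 3))             # prints 3.0 12.0
```

With a slowly varying factor L(t) present, the same computation returns α + o(1) and α(α+1) + o(1), illustrating (4.4.9).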
and so on, up to the relation (4.4.10).

Remark 4.4.3. (i) As in § 3.4, one can also consider, along with [D(k,q)], conditions [D(k,O(q))], that differ from [D(k,q)] in that in them the remainder term o(q(t)) in (4.4.7) is replaced by O(q(t)).
(ii) As in § 3.4, one could also consider conditions [D(h,q)] with a fractional value of h and a remainder term of order |Δ|^h.

Comments on how the assertions in the forthcoming theorems would change if we switched to the conditions mentioned in Remark 4.4.3 will be presented after the respective theorems.

Now we state the main assertion of this section. Recall that we are assuming everywhere that Eξ = 0, d = Eξ² < ∞ and that V̂(t) = max{V(t), W(t)}.

Theorem 4.4.4. Let the following conditions be satisfied: [ · , =], α > 2, and, for some k ≥ 1, [D(k,q)] holds and E|ξ|^k < ∞ (the latter is only required when k > 2). Then, as x → ∞, we have

P(S_n ≥ x) = nV(x)[1 + Σ_{j=2}^{k} (L_j(x)/(j!x^j)) ES_{n−1}^j + o(n^{k/2}x^{−k}) + o(q(x))]   (4.4.11)

uniformly in n ≤ cx²/ln x for some c > 0.

Since

ES²_{n−1} = (n − 1)d,  ES³_{n−1} = (n − 1)Eξ³,  ES⁴_{n−1} = 3(n − 1)(n − 2)d² + O(n),

and

L₂(x) = α(α + 1) + o(1),  L₄(x) = 4!·(α+3 choose 4) + o(1) = α(α + 1)(α + 2)(α + 3) + o(1),
we obtain the following.

Corollary 4.4.5. Let the conditions of Theorem 4.4.4 be met. Then the following assertions hold as x → ∞.

(i) If k = 2 then

P(S_n ≥ x) = nV(x)[1 + (α(α + 1)(n − 1)d/(2x²))(1 + o(1))].   (4.4.12)

(ii) If k = 4 then

P(S_n ≥ x) = nV(x)[1 + (n − 1)L₂(x)d/(2x²) − (n − 1)L₃(x)Eξ³/(3!x³) + 3(α+3 choose 4)((n − 1)(n − 2)d²/x⁴)(1 + o(1))].
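To get a feel for when the k = 2 correction in (4.4.12) matters, one can tabulate the factor 1 + α(α+1)(n−1)d/(2x²) for a few pairs (n, x); the values α = 3, d = 1 are illustrative choices of ours:

```python
def factor(n, x, alpha=3.0, d=1.0):
    # second-order factor in (4.4.12)
    return 1.0 + alpha * (alpha + 1.0) * (n - 1) * d / (2.0 * x * x)

for n, x in [(100, 50), (100, 100), (10_000, 1_000)]:
    print(n, x, round(factor(n, x), 4))
# the correction is sizeable near the edge of the zone n ~ x^2 / ln x
# and fades as x grows relative to sqrt(n)
```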
Both relations hold uniformly in n ≤ cx²/ln x for some c > 0.

Remark 4.4.6. In the lattice case, when ξ is an integer-valued r.v. and the greatest common divisor of its possible values is equal to 1, it is not hard to construct difference analogues of conditions [D(k,q)] and then to obtain a complete analogue of Theorem 4.4.4 for integer-valued x.

Remark 4.4.7. For the special case where the principal part of V(t) is a linear combination of negative powers of t, a number of refinements of the relation P(S_n ≥ x) ∼ nV(x) were derived in [276] under some additional restrictive assumptions and conditions, which are sometimes irrelevant. It will be seen easily from the proofs below that all the main assertions of the present section could be transferred, in a standard way, to the case where V(t) is a linear combination of terms of the form t^{−α(i)}L^{(i)}(t) satisfying the respective smoothness condition. See also Remark 3.4.2.

Remark 4.4.8. The terms ES^j_{n−1} in (4.4.11) are clearly polynomials in √n of orders not exceeding j. Observe, however, that the asymptotic representation (4.4.11) is, generally speaking, not a 'pure form' asymptotic expansion in powers of (√n/x), which is the case in (4.4.12), since the L_j(x) could be quite complicated s.v.f.'s. If L(t) ≡ L = const for t ≥ t₀ then

L_j(x)/j! = (α+j−1 choose j),

and (4.4.11) becomes an asymptotic expansion in powers of (√n/x).

Proof of Theorem 4.4.4. Let G_n := {S_n ≥ x} and, as before, put

B_j = {ξ_j < y},  B = ⋂_{j=1}^{n} B_j,  r = x/y > 2.

Then, similarly to § 3.4, by virtue of Corollary 4.1.3 we will again have representations of the form (3.4.12), (3.4.13), which yield, for n ≤ cx²/ln x and a suitable c, that

P(G_n) = Σ_{j=1}^{n} P(G_n B̄_j) + O((nV(x))²).   (4.4.13)
So the main problem consists in finding asymptotic representations for

P(G_n B̄_j) = P(G_n B̄_n) = P(S_{n−1} + ξ_n ≥ x, ξ_n ≥ y)
= P(S_{n−1} ≥ x − y, ξ_n ≥ y) + P(S_{n−1} < x − y, S_{n−1} + ξ_n ≥ x).   (4.4.14)
Here, owing to Corollary 4.1.4,

P(S_{n−1} ≥ x − y, ξ_n ≥ y) = P(S_{n−1} ≥ x − y) P(ξ_n ≥ y) ≤ cnV²(x).   (4.4.15)

Further, for y = δx, δ = 1/r < 1/2,

P(ξ_n ≥ x − S_{n−1}, S_{n−1} < x − y) = E[V(x − S_{n−1}); S_{n−1} < (1 − δ)x] = E₁ + E₂,   (4.4.16)

where

E₁ := E[V(x − S_{n−1}); S_{n−1} ≤ −(1 − δ)x],
E₂ := E[V(x − S_{n−1}); |S_{n−1}| < (1 − δ)x].

First, assume that condition [D(k,0)] holds. Setting t := x and Δ := −S_{n−1}/x in (4.4.7), we obtain

E₂ = V(x) E[1 + Σ_{j=1}^{k} (L_j(x)/j!)(S_{n−1}/x)^j + (L_k(x)/k!)(S_{n−1}/x)^k ε(S_{n−1}/x, x); S_{n−1}/x < 1 − δ].   (4.4.17)

Next we will find out by how much the truncated moments of S_n on the right-hand side of (4.4.17) differ from the complete moments.

Lemma 4.4.9. Let Eξ = 0, Eξ² = 1 and E|ξ|^b < ∞ for some b ≥ 2. Then the sequence {(S̄_n/√n)^b; n ≥ 1} is uniformly integrable.

Proof. First note that {|S_n/√n|^b; n ≥ 1} is uniformly integrable. This follows from the fact that, under the condition E|ξ|^b < ∞, one has convergence of moments in the central limit theorem: as n → ∞,

E|S_n/√n|^b → ∫|t|^b dΦ(t),

where Φ(t) is the standard normal distribution function (see e.g. § 5.10, Chapter IV of [224]). To complete the proof, it suffices to use Kolmogorov's inequality:

P(S̄_n/√n ≥ t) ≤ 2 P(S_n/√n ≥ t − √2)

(see e.g. (3.13) in Chapter III of [224] or § 2, Chapter 10 of [49]).
By Lemma 4.4.9, for j ≤ k,

E[|S_n|^j; |S_n| ≥ (1 − δ)x] ≤ (n^{k/2}/((1 − δ)x)^{k−j}) E[(|S_n|/√n)^k; |S_n|/√n ≥ (1 − δ)x/√n] = o(n^{k/2}x^{j−k}),

and therefore replacing the truncated moments by the complete moments in the relation (4.4.17) would introduce an error of order o(n^{k/2}x^{−k}). Further, since, as x → ∞, we have

|ε(Δ, x)| ≤ c for |Δ| < 1 − δ,  ε(Δ, x) → 0 for |Δ| → 0,

we obtain, again owing to uniform integrability, that

E[(S_{n−1}/x)^k ε(S_{n−1}/x, x); S_{n−1}/x < 1 − δ] = o(n^{k/2}x^{−k}).

Therefore (4.4.17) yields

E₂ = V(x)[1 + Σ_{j=1}^{k} (L_j(x)/(j!x^j)) ES^j_{n−1}] + o(n^{k/2}x^{−k}).   (4.4.18)
It remains to bound the integral E₁ in (4.4.16). Applying Corollary 4.1.4 (or Corollary 4.1.8) to bound the probabilities of negative deviations of S_{n−1}, we have

E₁ ≤ V(x) P(S_{n−1} ≤ −(1 − δ)x) ≤ cV(x)nW(x).   (4.4.19)
Summarizing, we obtain from (4.4.13)–(4.4.19) the assertion of the theorem under condition [D(k,0)]. If q(t) ≢ 0 then, as in § 3.4, the term

o(q(x)) P(S_{n−1}/x < 1 − δ) = o(q(x))

is added to the right-hand side of (4.4.17), owing to condition [D(k,q)], and this term will pass, unchanged, through all the calculations to the representation (4.4.11). The theorem is proved.

The proof of the assertion in Remark 4.4.2 is a simplified version of that of Theorem 4.4.4. We leave it to the reader. All the remarks from § 3.4 on how the assertion of Theorem 4.4.4 changes when one replaces condition [D(k,q)] by [D(k,O(q))] remain valid in the present section. One can also obtain a modification of the assertion of the theorem in the case where condition [D(h,q)] holds for a fractional value of h (see Remark 4.4.3).
4.5 Asymptotics of P(S̄_n ≥ x) and its refinements
As was the case for P(S_n ≥ x), the first-order asymptotics for P(S̄_n ≥ x) follow directly from Corollaries 4.1.4 and 4.3.2 and are contained in Theorem 4.4.1. Now we will present refinements of that theorem. As in the previous section, we will need the smoothness conditions [D(k,q)].

Theorem 4.5.1. Let the following conditions be satisfied: [ · , =], α > 2, and, for some k ≥ 1, [D(k,q)] holds and E|ξ|^k < ∞ (the latter is only required when k > 2). Then, as x → ∞,

P(S̄_n ≥ x) = nV(x)[1 + (L₁(x)/(nx)) Σ_{j=1}^{n−1} ES̄_j + (1/n) Σ_{i=2}^{k} (L_i(x)/(i!x^i)) Σ_{j=1}^{n−1} (ES̄^i_{n−j} + Σ_{l=2}^{i} (i choose l) ES^l_{j−1} ES̄^{i−l}_{n−j}) + o(n^{k/2}x^{−k}) + o(q(x))]   (4.5.1)

uniformly in n ≤ cx²/ln x for some c > 0.

Remarks 4.4.3 and 4.4.7, which relate to Theorem 4.4.4, remain valid for the above theorem as well. To obtain simpler asymptotic representations from this theorem, note that by Kolmogorov's inequality (or by virtue of Corollary 4.1.4) the r.v.'s n^{−1/2}S̄_n are uniformly integrable. Therefore it follows from the invariance principle that

n^{−1/2}S̄_n ⇒ √d w̄(1),  E n^{−1/2}S̄_n → √d E w̄(1) = √(2d/π)

as n → ∞, where d = Eξ², {w(t); t ≥ 0} is the standard Wiener process, w̄(t) := max_{u≤t} w(u), P(w̄(1) > t) = 2(1 − Φ(t)), and Φ is the standard normal distribution function. Since, moreover, L₁(x) → α, we obtain the following.

Corollary 4.5.2. If conditions [ · , =], α > 2, and [D(1,q)] are satisfied then, as n → ∞,

P(S̄_n ≥ x) = nV(x)[1 + (2^{3/2}α/3)(√(nd)/(√π x))(1 + o(1)) + o(q(x))]   (4.5.2)

uniformly in x from the zone n ≤ cx²/ln x.

One could compute higher-order terms of the asymptotic expansion (4.5.1) in a similar way. Observe also that one could easily derive from Theorem 4.5.1 asymptotic expansions for the joint distribution of S_n and S̄_n. Namely, for any fixed u > 1, on the one hand, we clearly have

P(S̄_n ≥ x, S_n < ux) = P(S̄_n ≥ x) − P(S_n ≥ ux).
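The constant E w̄(1) = √(2/π) that drives the coefficient in (4.5.2) is easy to check by simulating E S̄_n/√(nd) for a light-tailed walk; a sketch with standard normal steps (the horizon and sample size are arbitrary choices of ours):

```python
import random, math

random.seed(7)

def mean_running_max(n, trials):
    """Monte Carlo estimate of E max_{k<=n} S_k for a standard normal walk."""
    total = 0.0
    for _ in range(trials):
        s = m = 0.0
        for _ in range(n):
            s += random.gauss(0.0, 1.0)
            if s > m:
                m = s
        total += m
    return total / trials

n = 400
est = mean_running_max(n, 2000) / math.sqrt(n)
print(round(est, 2))  # near sqrt(2/pi) ~ 0.80, with a small negative finite-n bias
```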
On the other hand, for a fixed u ∈ (0, 1),

P(S̄_n < x, S_n ≥ ux) = P(S_n ≥ ux) − P(S̄_n ≥ x) + P(S̄_n ≥ x, S_n < ux),   (4.5.3)

where, under condition [=, =], the last term is negligibly small (the order of magnitude being n²V(x)W(x)), since the event {S̄_n ≥ x, S_n < ux} requires, roughly speaking, two large jumps (in opposite directions) in the random walk. More precisely, letting η(x) := min{k : S_k ≥ x}, we have

P(S̄_n ≥ x, S_n < ux) ≤ Σ_{k=1}^{n} P(η(x) = k) P(S_{n−k} < −(1 − u)x)
≤ c₁ nW((1 − u)x) P(S̄_n ≥ x) ≤ c₂ n²W(x)V(x).

Under conditions [=, =] and [D(k,q)] on the tails V(t) and W(t), the required asymptotic expansions follow immediately from the above relation and (4.5.3).

Proof of Theorem 4.5.1. According to the above remarks (see Remark 3.4.8 and the end of the proof of Theorem 4.4.4), without loss of generality we can assume from the very beginning that condition [D(k,0)] is met and then just add a remainder term o(q(x)) to the final result. Our proof will repeat, in many aspects, the arguments in the proofs of Theorems 4.4.4 and 3.5.1. Let G_n := {S̄_n ≥ x}. As in (4.4.13) and (3.5.5), for n ≤ cx²/ln x with a suitable c and r = x/y > 2 we have

P(G_n) = Σ_{j=1}^{n} P(G_n B̄_j) + O((nV(x))²).   (4.5.4)
Furthermore, as in (3.5.6) and (3.5.9), for y = δx, δ = 1/r < 1/2 and

S̄^{(j)}_{n−j} = max_{0≤m≤n−j} (S_{m+j} − S_j),

we have

P(G_n B̄_j) = P(S̄_n ≥ x, ξ_j ≥ y) = P(S̄_n ≥ x, ξ_j ≥ y, S̄_{j−1} < x) + O(jV²(x))
= P(S_j + S̄^{(j)}_{n−j} ≥ x, ξ_j ≥ y) + O(jV²(x)).

As before, set

Z_{j,n} := S_{j−1} + S̄^{(j)}_{n−j}.   (4.5.5)
Then

P(S_j + S̄^{(j)}_{n−j} ≥ x, ξ_j ≥ y) = P(ξ_j + Z_{j,n} ≥ x, ξ_j ≥ y)
= P(ξ_j + Z_{j,n} ≥ x, ξ_j ≥ δx; Z_{j,n} < (1 − δ)x) + P(ξ_j + Z_{j,n} ≥ x, ξ_j ≥ δx, Z_{j,n} ≥ (1 − δ)x).

The event {ξ_j ≥ δx} is clearly redundant in the last term. Moreover, noting that ξ_j and Z_{j,n} are independent and that Z_{j,n} is stochastically dominated by S̄_{n−1}, we see that the last term does not exceed

P(ξ_j ≥ δx) P(S̄_{n−1} ≥ (1 − δ)x) = O(nV²(x)).

Therefore

P(G_n B̄_j) = P(ξ_j + Z_{j,n} ≥ x, Z_{j,n} < (1 − δ)x) + O(nV²(x))
= E[V(x − Z_{j,n}); Z_{j,n} < (1 − δ)x] + O(nV²(x)),

so that

P(G_n) = Σ_{j=1}^{n} E[V(x − Z_{j,n}); Z_{j,n} < (1 − δ)x] + O((nV(x))²).   (4.5.6)

Using reasoning similar to that in previous sections (cf. (3.5.12), (3.5.13) and (4.4.16)), we have

E[V(x − Z_{j,n}); Z_{j,n} < (1 − δ)x] = E_{j,1} + E_{j,2},

where

E_{j,1} := E[V(x − Z_{j,n}); Z_{j,n} ≤ −(1 − δ)x] = O(nV(x)W(x)).

Now consider

E_{j,2} := E[V(x − Z_{j,n}); |Z_{j,n}| < (1 − δ)x]

and use the stochastic inequalities S_{j−1} ≤ Z_{j,n} ≤ S̄_{n−1} and condition [D(k,0)] with t = x, Δ = −Z_{j,n}/x. We obtain

E_{j,2} = V(x) E[1 + Σ_{i=1}^{k} (L_i(x)/i!)(Z_{j,n}/x)^i + (L_k(x)/k!)(Z_{j,n}/x)^k ε(Z_{j,n}/x, x); Z_{j,n} < …] …

4.6 Asymptotics of P(S(a) ≥ x) and the general boundary problem

… a > 0 and set, as before,

S̄_n(a) = max_{j≤n} (S_j − aj),
assuming that Eξ = 0, Eξ² = d < ∞.

Theorem 4.6.1. Let the condition [ · , <] with α > 2 be met. Then the following assertions hold true.

(i) Uniformly in n = 1, 2, …, as x → ∞,

P(S̄_n(a) ≥ x) = Σ_{j=1}^{n} V(x + ja)(1 + o(1)).

(ii) If, for some k ≥ 1, the conditions [D(k,q)] and E|ξ|^k < ∞ hold (the latter is only required when k > 2), and q(t) is an r.v.f. then, as x → ∞,

P(S̄_n(a) ≥ x) = Σ_{j=1}^{n} V(x + aj)[1 + Σ_{i=1}^{k} (L_i(x + aj)/(i!(x + aj)^i)) × (ES̄^i_{n−j}(a) + Σ_{l=2}^{i} (i choose l) ES^l_{j−1} ES̄^{i−l}_{n−j}(a))]
+ O(mV̂(x)(mV̂(x) + xV(x))) + o(xV(x)q(x)),   (4.6.1)

where m = min{n, x}.

(iii) If we additionally require in part (ii) that E(ξ⁺)^{k+1} < ∞ then we have ES^k(a) < ∞, where S(a) := S̄_∞(a), and the representation (4.6.1) holds for n = ∞. In this case,

P(S(a) ≥ x) = Σ_{j=1}^{∞} V(x + aj)[1 + Σ_{i=1}^{k} (L_i(x + aj)/(i!(x + aj)^i)) × (ES^i(a) + Σ_{l=2}^{i} (i choose l) ES^{i−l}(a) ES^l_{j−1})]
+ O(x²V(x)V̂(x)) + o(xV(x)q(x)).   (4.6.2)

Remark 4.6.2. As will be seen in the proof of the theorem, the assertion of part (ii) is obtained under the assumption that α < k + 1. If E(ξ⁺)^{k+1} < ∞ then this can be improved with the help of Lemma 4.6.6 (see below).

Remarks 4.4.3 and 4.4.7, which relate to Theorem 4.4.4, remain valid in the present subsection.
Corollary 4.6.3. If the conditions [ · , <], α > 2, [D(2,0)] and E(ξ⁺)³ < ∞ are satisfied then

P(S(a) ≥ x) = Σ_{j=1}^{∞} V(x + aj)[1 + (L₁(x + aj)/(x + aj)) ES(a) + (L₂(x + aj)d/(2(x + aj)²))(ES²(a) + (j − 1)d)] + O(x²V(x)V̂(x))
= Σ_{j=1}^{∞} V(x + aj)[1 + (α/(x + aj)) ES(a) + α(α + 1)jd/(2(x + aj)²)] + o(V(x)).   (4.6.3)
Observing that, owing to [D(2,0)] and the properties of r.v.f.'s, one has

Σ_{j=1}^{∞} V(x + aj) = (1/a)∫_x^∞ V(u) du − V(x)/2 + o(V(x)),

Σ_{j=1}^{∞} V(x + aj)/(x + aj) ∼ (1/a)∫_x^∞ (V(u)/u) du ∼ V(x)/(aα),

Σ_{j=1}^{∞} jV(x + aj)/(x + aj)² ∼ (1/a²)∫_0^∞ (V(x + u)u/(x + u)²) du
= (1/a²)[∫_x^∞ (V(t)/t) dt − x∫_x^∞ (V(t)/t²) dt] ∼ (V(x)/a²)[1/α − 1/(α + 1)] = V(x)/(a²α(α + 1))

as x → ∞, we obtain from (4.6.3) the following 'integral' representation:

P(S(a) ≥ x) = (1/a)∫_x^∞ V(u) du + [d/(2a²) + ES(a)/a − 1/2] V(x) + o(V(x)).   (4.6.4)
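The representation (4.6.4) has as its first-order consequence the classical relation P(S(a) ≥ x) ≈ a^{−1}∫_x^∞ V(u) du, which can be probed by simulation. A rough sketch with a centred Pareto jump (α = 2.5) and drift a = 1, both illustrative choices of ours; the finite horizon is a crude truncation, but the neglected mass is small at this x:

```python
import random

random.seed(2024)
ALPHA, A = 2.5, 1.0
MU = ALPHA / (ALPHA - 1.0)   # mean of the underlying Pareto variable

def sup_drifted_walk(horizon):
    # sup_j (S_j - A*j) over a finite horizon, S_j a centred Pareto walk
    s = m = 0.0
    for _ in range(horizon):
        s += random.random() ** (-1.0 / ALPHA) - MU - A
        if s > m:
            m = s
    return m

x, trials = 30.0, 10_000
hits = sum(1 for _ in range(trials) if sup_drifted_walk(500) >= x)
integral = (x + MU) ** (1.0 - ALPHA) / (ALPHA - 1.0)   # int_x^inf (u+MU)^(-ALPHA) du
print(round((hits / trials) / (integral / A), 2))      # of order 1
```

The second-order terms in (4.6.4) are exactly what such a crude experiment cannot resolve, which is why the smoothness conditions are needed for the refinement.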
It is of interest to note that the functions L₁(x), L₂(x) appearing in condition [D(2,0)], as well as the third moment of ξ⁺, are not present in the representation (4.6.4). That the conditions [D(2,0)] and E(ξ⁺)³ < ∞ are not needed for (4.6.4) will be confirmed in Theorem 7.5.8, where the assertion (4.6.4) is obtained without the above-mentioned conditions, see p. 361. (One should note that, in the notation of Chapter 7, the function V(x) = F₊(x) in Theorem 7.5.8 is the tail of the r.v. ξ − a, so that the expression a^{−1}∫_x^∞ F₊(u) du from § 7.5 is the same as

(1/a)∫_x^∞ V(u + a) du = (1/a)∫_x^∞ V(u) du − V(x) + o(V(x))
in our current notation. This is why the term −V(x)/2 in (4.6.4) is replaced by +V(x)/2 in Theorem 7.5.8.) The above means that the methods used to prove Theorem 4.6.1 are not quite adequate in the case n = ∞, i.e. when we are studying the asymptotics of P(S(a) ≥ x).

Proof of Theorem 4.6.1. (i) The proof of the theorem will consist of several stages, which we will formulate as lemmata. The first is an analogue of Lemma 3.6.9. Set

m := min{x, n},  G_n := {S̄_n(a) ≥ x},  B_j(v) := {ξ_j < y + vj},  B(v) := ⋂_{j=1}^{n} B_j(v).
Lemma 4.6.4. Let the condition [ · , <] with α > 2 hold. Then, for

v ≤ min{a/(2r), (r − 2)/(2r)},  r > 2,

and for all n one has, as x → ∞,

P(S̄_n(a) ≥ x) = Σ_{j=1}^{n} P(G_n B̄_j(v)) + O((mV(x))²).   (4.6.5)
Proof. The proof of the lemma repeats that of Lemma 3.6.9. The only difference is that now, instead of using Theorem 3.2.1, one should use Theorem 4.2.1.

The subsequent proof of Theorem 4.6.1 involves arguments quite similar to those in the proofs of Theorems 3.6.1 and 4.5.1. As in § 3.2, put

S̄^{(j)}_{n−j}(a) := max{0, ξ_{j+1} − a, …, S_n − S_j − (n − j)a},  Z_{j,n}(a) := S_{j−1} + S̄^{(j)}_{n−j}(a).

First we will obtain representations for the summands in (4.6.5).

Lemma 4.6.5. Assume that condition [ · , <] is satisfied. … For z > c√(j ln j), we find from Corollary 4.1.4 and Theorem 4.2.1 that

P(Z_{j,n}(a) ≥ z) ≤ P(S_{j−1} ≥ z/2) + P(S̄_{n−j}(a) ≥ z/2) ≤ c₁[jV(z) + min{n, z}V(z)] ≤ c₂[j + min{n, z}]V(z).   (4.6.9)

Further, since

|Z_{j,n}(a)| ≤ |S_{j−1}| + S̄^{(j)}_{n−j}(a),

we have, for z > c√(j ln j),

P(|Z_{j,n}(a)| ≥ z) ≤ P(|S_{j−1}| ≥ z/2) + P(S̄_n(a) ≥ z/2) ≤ c₂[jV̂(z) + min{n, z}V(z)].   (4.6.10)
(4.6.11)
Next we obtain a representation for

P(G_n B̄_j(v)) = P(S̄_n(a) ≥ x, ξ_j ≥ y + jv) = P(S̄_n(a) ≥ x, ξ_j ≥ y + jv, S̄_{j−1}(a) < x) + ρ_{n,j,x},   (4.6.12)

where, by Theorem 4.2.1,

ρ_{n,j,x} ≤ cV(y + jv) min{j, x}V(x).   (4.6.13)

By virtue of (4.6.7), we have, similarly to § 3.6 (cf. (3.6.18)), that

P(S̄_n(a) ≥ x, ξ_j ≥ y + jv, S̄_{j−1}(a) < x)
= P(ξ_j + Z_{j,n}(a) ≥ x + aj, ξ_j ≥ y + jv) + O(min{j, x}V(x)V(y + jv))
= P(ξ_j + Z_{j,n}(a) ≥ x + aj, ξ_j ≥ y + jv, Z_{j,n}(a) < x + aj − y − jv)
+ O(min{j, x}V(x)V(x + aj)) + O(min{n, x + aj}V²(x + aj)).

Here we have used the fact that, for the chosen values of r and v, one always has

c₁(x + aj) ≤ x − y + j(a − v) ≤ c₂(x + aj),  c₃(x + aj) ≤ y + jv ≤ c₄(x + aj).

As in § 3.6, this yields (4.6.6). Lemma 4.6.5 is proved.

As before, we will split the expectation in (4.6.6) into two parts:

E_{j,1} := E[V(x + aj − Z_{j,n}(a)); Z_{j,n}(a) ≤ −x(j)],
E_{j,2} := E[V(x + aj − Z_{j,n}(a)); |Z_{j,n}(a)| < x(j)],

where x(j) = δ(x + aj) and

E_{j,1} ≤ c₁V(x + aj) P(S_{j−1} ≤ −x(j)) ≤ cjV(x + aj)W(x + aj).   (4.6.14)
Owing to (4.6.8), as x → ∞, P(|Z_{j,n}(a)| ≥ (x + aj)ε) → 0 uniformly in j for any fixed ε > 0. This means that E_{j,2} = V(x + aj)(1 + o(1)). By virtue of (4.6.5) and (4.6.12)–(4.6.14), this proves the first part of the theorem.

(ii) Now let condition [D(k,q)] be satisfied and q(t) be an r.v.f. Then we have the following expansion, similar to (4.4.17):

E_{j,2} = V(x + aj) E[1 + Σ_{i=1}^{k} (L_i(x + aj)/i!)(Z_{j,n}(a)/(x + aj))^i + (L_k(x)/k!)(Z_{j,n}(a)/(x + aj))^k ε(Z_{j,n}(a)/(x + aj), x + aj); Z_{j,n}(a) < …] …

For z > n it does not exceed
c₃[n^{k+1}V(n) + n∫_n^{2x(j)} z^{k−1}V(z) dz] ≤ c₄ n^{k+1}V(n).

Thus the integral is bounded from above by c m_j^{k+1}V(m_j). This proves the second inequality in (4.6.16). Since the function ε(z/(x + aj), x + aj) is bounded and tends to zero when z = o(x + aj), and since in the subsequent argument the upper integration limit x(j) = δ(x + aj) can be chosen with an arbitrarily small fixed δ > 0, the relation (4.6.17) is also proved.

It remains to consider T^i_{j,n}. It follows from the inequality (4.6.10) that
% x(j) x(j) i i i + (x + aj) P |Sj−1 | Tj,n c E |Sj−1 |; |Sj−1 | 2 2 % & x(j) i + E S n−j (a); S n−j (a) 2 " x(j) i + (x + aj) P S n−j (a) , (4.6.19) 2 where, by virtue of Corollary 4.1.4 and Theorem 4.2.1(i), & % x(j) i < c1 j(x + aj)i V (x + aj), E |Sj−1 |; |Sj−1 | 2 & % x(j) i < c1 min{n, x + aj}(x + aj)i V (x + aj). E S n−j (a); S n−j (a) 2 This, together with (4.6.19), implies (4.6.18). The lemma is proved. Now we return to (4.6.5) and start gathering up the bounds that we have obtained. First we consider the sum of the remainder terms that appear in (4.6.6). It
is not hard to see that

∑_{j=1}^n V(x + aj) [ min{j, x} V(x) + min{n, x + aj} V(x + aj) ] ≤ c m² V²(x),
so that

P(S̄n(a) ≥ x) = ∑_{j=1}^n (Ej,1 + Ej,2) + O(m² V²(x)),

where, by virtue of (4.6.14),

∑_{j=1}^n Ej,1 < c m² V(x) W(x).
To compute ∑_{j=1}^n Ej,2, we have to use (4.6.15) and Lemma 4.6.6, which implies that

∑_{j=1}^n (V(x + aj)/(x + aj)^i) T^i_{j,n} ≤ c ∑_{j=1}^n [ j V²(x + aj) + m_j V²(x + aj) ]
  ≤ c1 [ m² V²(x) + m x V²(x) ].
Thus, for α < k + 1,

P(S̄n(a) ≥ x) = ∑_{j=1}^n V(x + aj) [ 1 + ∑_{i=1}^k (Li(x + aj)/(i!(x + aj)^i)) E Z^i_{j,n}(a) ]
  + o( ∑_{j=1}^n (V(x + aj)/(x + aj)^k) [ j^{k/2} + m_j^{k+1} V(m_j) ] )
  + O(m² V²(x)) + O(m x V²(x)) + o(x V(x) q(x)),

where

∑_{j=1}^n (V(x + aj)/(x + aj)^k) j^{k/2} ≤ c m^{k/2+1} x^{−k} V(x),

∑_{j=1}^n (V(x + aj)/(x + aj)^k) m_j^{k+1} V(m_j) ≤ c m^{k+2} V(m) x^{−k} V(x)
  = c (m^{k+1} V(m)/(x^{k+1} V(x))) m x V²(x) ≤ c m x V²(x),

since the power factor of the function t^{k+1}V(t) has the exponent k + 1 − α > 0, while m ≤ x. The second assertion of Theorem 4.6.1 is proved. In the case when E(ξ⁺)^{k+1} < ∞, the bounds could be improved owing to Lemma 4.6.6.

(iii) The third assertion of the theorem is an obvious consequence of the second. The theorem is proved.
Random walks with jumps having finite variance
Proof of Corollary 4.6.3. The first equality in (4.6.3) is obvious from (4.6.2). To prove the second, it suffices to observe that

L1(x) ∼ α,  L2(x) ∼ α(α + 1),  x² V(x) = o(1).
4.6.2 The general boundary problem

Now we will turn to the general boundary problem on the asymptotic behaviour of the probability

P(max_{k≤n}(Sk − g(k)) ≥ 0),

where {g(k)} is a boundary from the class Gx,n, for which min_{k≤n} g(k) = cx, c > 0 (cf. p. 155). The following analogue of Theorem 3.6.4 holds true.

Theorem 4.6.7. Assume that the conditions [ 0.

Proof. The proof of Theorem 4.6.7 basically repeats the argument proving Theorem 3.6.4(i), with a few obvious changes since now Eξ² < ∞. So we will omit it.

We also have a complete analogue of Corollary 3.6.6 for the case under consideration and, in particular, an assertion on the asymptotics of the probabilities P(S̄n(−a) − an ≥ x) for a > 0, where S̄n(−a) := max_{k≤n}(Sk + ak).
Under condition [D(k,q)], one could carry out a more complete analysis of the distribution of S̄n(−a) − an. We will confine ourselves to the case when condition [D(2,0)] is met.

Theorem 4.6.8. Let conditions [ · , =], α > 2, and [D(2,0)] be satisfied. Then, for a fixed a > 0, as x → ∞,

P(S̄n(−a) − an ≥ x) = nV(x) [ 1 + (L1(x)/(xn)) ∑_{j=1}^n Eζj
  + (α(α − 1)n/(2x²))(1 + o(1)) + O(n^{3/2} x^{−3}) ],
where ζj := −min_{k≤j}(Sk + ak) and the remainder terms are uniform in the zone n ≤ cx²/ln x for some c > 0. If n ≥ N → ∞, then the above representation takes the form

nV(x) [ 1 + (αEζ/x)(1 + o(1)) + (α(α − 1)n/(2x²))(1 + o(1)) + O(n^{3/2} x^{−3}) ],

where ζ := ζ∞, Eζ < ∞, and the remainder terms are uniform in the zone n ∈ [N, cx²/ln x], c > 0.

Proof. The proof of Theorem 4.6.8 follows the scheme of the proofs of Theorems 4.4.4, 4.5.1 and 4.6.1 and uses the relations

Sn ≤ S̄n(−a) − an ≤ S̄n,  S̄n(−a) − an = Sn + ζ̃n,

where ζ̃n =d ζn. A detailed scheme of the proof is given in § 5.7. We will not present it here to avoid repetition.

It is seen from the theorem that the first terms in the expansions for the distributions of Sn and S̄n(−a) − an coincide when x = o(n).

4.7 Integro-local theorems for the sums Sn

4.7.1 Integro-local theorems on the large deviations of Sn

In this section, as in § 3.7, we will study the asymptotic behaviour of the probabilities P(Sn ∈ Δ[x)), where Δ[x) := [x, x + Δ), x → ∞, Δ = o(x). For remarks on the formulation of this problem, see § 3.7. Integro-local theorems will be used in Chapters 6–8. In the multivariate case, integro-local theorems provide the most natural form for assertions on the asymptotics of large deviation probabilities (see Chapter 9). We will need condition [D(1,q)] in the form (3.7.2):

[D(1,q)] As t → ∞, for Δ ∈ [Δ1, Δ2], Δ1 ≤ Δ2 = o(t), one has

V(t) − V(t + Δ) = V1(t) [ Δ(1 + o(1)) + o(q(t)) ],  V1(t) = αV(t)/t,  (4.7.1)

where the term o(q(t)) does not depend on Δ, and q(t) = t^{−γq} Lq(t), γq ≥ −1, is an r.v.f. The remainder term o(1) is assumed here to be uniform in Δ in the same sense as in (3.7.2).

If, in the important special case where Δ = Δ1 = Δ2 = const (Δ is an arbitrary fixed number), one has V(t) − V(t + Δ) = V1(t)(Δ + o(1)) as t → ∞, then clearly condition [D(1,q)] will hold with q(t) ≡ 1 (here the assumption on the uniformity of o(1) disappears).
In the lattice case, when the lattice span is equal to 1, we assume that Δ ≥ 1 and that Δ and t in (4.7.1) are integer-valued. For remarks on the form (4.7.1) of condition [D(1,q)], which is somewhat different from that used in § 4.4, see § 3.7 (the differences are only related to the convenience of exposition). Condition (4.7.1) will always hold with q(t) ≡ 0 provided that L(t) is differentiable and L′(t) = o(L(t)/t). Then one can identify V1(t) with −V′(t).

Theorem 4.7.1. Let Eξ = 0, Eξ² < ∞ and let conditions [ · , =] with α > 2 and [D(1,q)] be satisfied. Then, for x ≥ N√(n ln n), Δ ≥ cq(x), as N → ∞,

P(Sn ∈ Δ[x)) = ΔnV1(x)(1 + o(1)),  V1(x) = αV(x)/x,  (4.7.2)

where the remainder term o(1) in (4.7.2) is uniform in x, n and Δ such that x ≥ N√(n ln n), max{x^{−γ0}, q(x)} ≤ Δ ≤ x εN for any fixed γ0 ≥ −1 and any fixed function εN ↓ 0 as N ↑ ∞.

By the uniformity of o(1) in (4.7.2) we understand the existence of a function δ(N) ↓ 0 as N ↑ ∞ (depending on εN) such that the term o(1) in (4.7.2) could be replaced by a function δ(x, n, Δ) with |δ(x, n, Δ)| ≤ δ(N). In the lattice case, the assertion (4.7.2) remains valid for integer-valued x and Δ ≥ max{1, cq(x)}.

All the remarks following Theorem 3.7.1 remain valid here. For Δ = const, x ≥ n^{1/(b−1)}, where b is such that E|ξ|^b < ∞, the assertions of Theorem 4.7.1 can be derived from Theorem 4.4.4.

Now we will require that, in addition, the following condition be satisfied:

[D]
For some t0 > 0 and all t ≥ t0, the function V(t) (equivalently, L(t)) is differentiable, and, as t → ∞,

V′(t) ∼ −αV(t)/t,  L′(t) = o(L(t)/t).

Then the next analogue of Theorem 3.7.2 will hold true.

Theorem 4.7.2. Let the conditions [ · , =], α > 2, [D] and x ≫ √(n ln n) be met. Then the distribution of Sn can be represented as a sum of two measures:

P(Sn ∈ ·) = Pn,1(·) + Pn,2(·),

where the measure Pn,1 has, for any fixed r and all large enough x, the property Pn,1[x, ∞) ≤ [nV(x)]^r. The measure Pn,2 has density

dPn,2(x)/dx = −nV′(x)(1 + o(1)),

where the remainder o(1) is uniform in x and n such that √(n ln n)/x < εx for any fixed function εx → 0 as x → ∞.

Proof. The proof of Theorem 4.7.1 follows the scheme of the proof of Theorem 3.7.1. For y < x, set

Gn := {Sn ∈ Δ[x)},
Bj := {ξj < y},  B := ⋂_{j=1}^n Bj.  (4.7.3)

Then

P(Gn) = P(Gn B) + P(Gn B̄),  (4.7.4)

where

∑_{j=1}^n P(Gn B̄j) ≥ P(Gn B̄) ≥ ∑_{j=1}^n P(Gn B̄j) − ∑_{i<j≤n} P(Gn B̄i B̄j).  (4.7.5)
As in the proof of Theorem 3.7.1, we will split the argument into three stages: bounding P(Gn B), bounding P(Gn B̄i B̄j), i ≠ j, and evaluating P(Gn B̄j).

(1) Bounding P(Gn B). We will use the crude inequality

P(Gn B) ≤ P(Sn ≥ x; B)  (4.7.6)

and Theorem 4.1.2. Owing to the latter, for x = ry with a fixed r > 2, any δ > 0 and x ≥ N√(n ln n), N → ∞, one has

P(Sn ≥ x; B) ≤ (nV(y))^{r−δ}  (4.7.7)

(see Corollary 4.1.3). Now choose r such that

(nV(x))^{r−δ} ≤ nΔV1(x)  (4.7.8)

for x ≥ √n, Δ ≥ cq(x). Setting n = x² and comparing the powers of x on the right- and left-hand sides of (4.7.8), we obtain, taking into account the condition Δ ≥ x^{−γ0}, that for (4.7.8) to hold it suffices that r is chosen in such a way that (2 − α)(r − δ) < 1 − α − γ0. For α > 2, this is equivalent to

r > (α − 1 + γ0)/(α − 2).

For such an r, owing to (4.7.6)–(4.7.8) we will have

P(Gn B) = o(nΔV1(x)).  (4.7.9)

(2) Bounding P(Gn B̄i B̄j) is done in the same way as in the proof of Theorem 3.7.1 and results in the inequality (3.7.16).
(3) Evaluating P(Gn B̄j). This is based, as in the proof of Theorem 3.7.1, on the representation (3.7.17), which yields

P(Gn B̄n) ≤ ΔE[V1(x − Sn−1); Sn−1 < (1 − δ)x + Δ] = ΔV1(x)(1 + o(1)).  (4.7.10)

The last relation holds for x ≫ √n since, due to the Chebyshev inequality, one has E[V1(x − Sn−1); |Sn−1| ≤ M√n] ∼ V1(x) as M → ∞ and M√n = o(x). Moreover, the following obvious bounds are valid:

E[V1(x − Sn−1); Sn−1 ∈ (M√n, (1 − δ)x + Δ)] = o(V1(x))

and

E[V1(x − Sn−1); Sn−1 ∈ (−∞, −M√n)] = o(V1(x))

as M → ∞. Similarly, by virtue of (3.7.17) one finds that

P(Gn B̄n) ≥ ∫_{−∞}^{(1−δ)x} P(Sn−1 ∈ dz) P(ξ ∈ Δ[x − z)) ∼ ΔV1(x).  (4.7.11)

From (4.7.10) and (4.7.11) we obtain P(Gn B̄n) = ΔV1(x)(1 + o(1)). Together with (4.7.4), (4.7.9) and (3.7.16), this gives P(Gn) = ΔnV1(x)(1 + o(1)). The required uniformity of the term o(1) follows in an obvious way from the above argument. The theorem is proved.

Remark 4.7.3. To bound P(Gn B) (see stage (1) of the proof of Theorem 4.7.1) one could use more refined approaches instead of the crude inequality (4.7.6), which would lead to stronger results. Note that P(Gn B) = P(B) P(Gn | B), where

P(Gn | B) = P(Sn^y ∈ Δ[x)),  Sn^y = ∑_{j=1}^n ξj^y,

and the ξj^y are ‘truncated’ (at the level y) r.v.’s, following the distribution

P(ξ^y < t) = P(ξ < t)/P(ξ < y),  t ≤ y,

so that (cf. § 2.1)

ϕy(λ) := E e^{λξ^y} = R(λ, y)/P(ξ < y),  R(λ, y) = E(e^{λξ}; ξ < y).

Since the r.v. ξ^y is bounded from above, we have ϕy(λ) < ∞ for λ > 0, so
that one can perform a Cramér transform on the distribution of ξ^y and introduce r.v.’s ξ̂j with the distribution

P(ξ̂ ∈ dt) = e^{λt} P(ξ^y ∈ dt)/ϕy(λ).  (4.7.12)

Then (see e.g. § 8, Chapter 8 of [49] or § 6.1 of the present text)

P(Sn^y ∈ dt) = e^{−λt} ϕy^n(λ) P(Ŝn ∈ dt),

where Ŝn = ∑_{j=1}^n ξ̂j. Since P(B) = (P(ξ < y))^n, we have

P(Sn^y ∈ Δ[x)) ≤ e^{−λx} ϕy^n(λ) P(Ŝn ∈ Δ[x))

and, therefore,

P(Gn B) ≤ e^{−λx} R^n(λ, y) P(Ŝn ∈ Δ[x)).  (4.7.13)

Now the product e^{−λx} R^n(λ, y) in (4.7.13) is exactly the quantity which is normally used to bound P(Sn^y ≥ x) and which was the main object of our studies in § 4.1. In particular, we established there that, for

λ = (1/y) ln(r/(nV(y))),  x → ∞,  (4.7.14)

and any δ > 0, one has

e^{−λx} R^n(λ, y) ≤ [nV(x)]^{r−δ},

so that

P(Gn B) ≤ [nV(x)]^{r−δ} P(Ŝn ∈ Δ[x)).  (4.7.15)

This inequality is more precise than (4.7.6), (4.7.7), because of the presence of the factor P(Ŝn ∈ Δ[x)) on the right-hand side of (4.7.15). Indeed, for this factor we have the following bounds, owing to the well-known results for the concentration function: for all n ≥ 1,

P(Ŝn ∈ Δ[x)) ≤ (c/√n)(Δ + 1) for all Δ,  P(Ŝn ∈ Δ[x)) ≤ (c/√n) Q(Δ) for Δ ≤ 1,

where Q(Δ) := sup_t P(ξ̂ ∈ Δ[t)) ≤ 1 is the concentration function of the r.v. ξ̂ (see e.g. Lemma 9 and Theorem 11 in Chapter 3 of [224]). This means that, for any δ > 0, Δ > 0, as x → ∞ we have

P(Gn B) ≤ [nV(x)]^{r−δ} (Δ + 1)/√n.  (4.7.16)
(4.7.16)
It is not hard to see that if ξ has a bounded density V1 (x) and V1 (x) = O(V (x)) as x → ∞ then the Cram´er transform (4.7.12) for the value of λ given in (4.7.14)
222
Random walks with jumps having finite variance
will result in a distribution of ξ that also possesses a bounded density, and therefore Q(Δ) cΔ. In this case we will also have # $−r+δ Δ √ P(Gn B) nV (x) n
(4.7.17)
for any Δ 1. The inequality (4.7.16) enables one to obtain the bound (4.7.9) for smaller values of r, and the inequality (4.7.17) enables one to obtain (4.7.9) for any Δ 1 (not necessarily for Δ q(x)). Proof. The proof of Theorem 4.7.2 is similar to that of Theorem 3.7.2 and hence will be omitted. Remark 4.7.4. If n → ∞ and the order of magnitude of the deviations is fixed as follows: x ∼ A(n) := n1/γ LA (n), γ < 2, where LA is an s.v.f., then an analogue of Remark 3.7.4 is valid under the conditions of the present chapter. This remark concerns the possibility of an alternative approach to obtaining integrolocal theorems when condition [D(1,q) ] is substantially relaxed and replaced by √ condition (3.7.20) for x ∼ A(n), Δn = εn n, εn = o(1) as n → ∞. The latter is equivalent to the condition V (x) − V (x + Δ) = ΔV1 (x)(1 + o(1)) for Δ = xγ/2 LΔ (x), where LΔ is an s.v.f., x → ∞. The scheme of the proof of the integro-local theorem remains the same as in Remark 3.7.4. One just lets √ b(n) := nd, takes f to be the standard normal density and uses the Gnedenko– Stone–Shepp theorem [130, 254, 258, 259]. In conclusion note that, under the conditions of this section, one could also obtain asymptotic expansions in the integro-local theorem, as in § 4.4.
4.7.2 Integro-local theorems valid on the whole real line

An integral theorem for the sums Sn of r.v.’s with regularly varying distributions, which is valid on the whole real line, is given by (4.1.2). Now we will present a similar integro-local representation, i.e. a representation for P(Sn ∈ Δ[x)), that will be valid on the whole half-line x ≥ 0. The complexity of the proofs of the respective results, which were obtained in the recent paper [192], is somewhat beyond the level of the present monograph. So we will present these results without proofs; the latter can be found in [192]. As before, we will consider two distribution types, non-lattice and arithmetic. For these distributions we will need an additional smoothness condition [D1], which is a simplified version of condition [D(1,1)]; see (4.7.1):
[D1]
In the non-lattice case, for any fixed Δ > 0, as t → ∞,

V(t) − V(t + Δ) ∼ ΔαV(t)/t.

In the arithmetic case, for integer-valued k → ∞,

V(k) − V(k + 1) ∼ αV(k)/k.
As we have already pointed out, the function V1(t) := αV(t)/t in condition [D1] plays the role of the derivative of −V(t), and is asymptotically equivalent to the latter when the derivative exists and behaves regularly enough at infinity. We will also need the condition that

E(|ξ|²; ξ < −t) = o(1/ln t)  as t → ∞,  (4.7.18)

which controls the decay rate of the left tail F−(t) as t → ∞. First we consider the non-lattice case, where we can assume without loss of generality that Eξ = 0, d = Var ξ = 1.

Theorem 4.7.5. Let ξ be a non-lattice r.v.,

Eξ = 0,  Var ξ = 1,  (4.7.19)

and let the conditions [ · , =] with α > 2, [D1] and (4.7.18) be satisfied. Then, for any fixed Δ > 0,

P(Sn ∈ Δ[x)) ∼ (Δ/√(2πn)) e^{−x²/2n} + Δnα V(x)/x  (4.7.20)

uniformly in x ≥ √n. In particular, for any fixed ε > 0,

P(Sn ∈ Δ[x)) ∼ (Δ/√(2πn)) e^{−x²/2n}  if √n ≤ x ≤ (1 − ε)√((α − 2)n ln n),
P(Sn ∈ Δ[x)) ∼ Δnα V(x)/x  if x ≥ (1 + ε)√((α − 2)n ln n).  (4.7.21)

If x ≤ (1 − ε)√((α − 2)n ln n) then condition [D1] is redundant.
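The dichotomy in (4.7.21) can be seen numerically. The sketch below assumes the illustrative tail V(x) = x^{−α} with α = 3 and unit variance (so L ≡ 1); it only compares the two terms of (4.7.20) on either side of the crossover scale √((α−2)n ln n).

```python
import numpy as np

# Illustration of (4.7.20)-(4.7.21); alpha, n, Delta are assumptions for this sketch.
alpha, n, Delta = 3.0, 10_000, 1.0

def gauss_term(x):
    # local CLT part: Delta / sqrt(2 pi n) * exp(-x^2 / (2n)), Var(xi) = 1 assumed
    return Delta / np.sqrt(2 * np.pi * n) * np.exp(-x**2 / (2 * n))

def heavy_term(x):
    # single-big-jump part: Delta * n * alpha * V(x) / x with V(x) = x**(-alpha)
    return Delta * n * alpha * x**(-alpha) / x

x_star = np.sqrt((alpha - 2) * n * np.log(n))   # crossover scale in (4.7.21)
g1, h1 = gauss_term(0.5 * x_star), heavy_term(0.5 * x_star)
g2, h2 = gauss_term(1.5 * x_star), heavy_term(1.5 * x_star)
# below the crossover the normal term dominates, above it the tail term does
assert g1 > 10 * h1 and h2 > 2 * g2
```

At finite n the crossover is of course not sharp, which is why (4.7.21) keeps the (1 ± ε) margins.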
Now consider the arithmetic case. Here condition (4.7.19) does restrict generality, and we are dealing with arbitrary values of Eξ and d = Var ξ.

Theorem 4.7.6. Let ξ be an arithmetic r.v., and let the conditions [ · , =] with α > 2, [D1] and (4.7.18) be satisfied. Then

P(Sn − nEξ = x) ∼ (1/√(2πnd)) e^{−x²/2dn} + nα V(x)/x

uniformly in x ≥ √n. In particular, for any fixed ε > 0,

P(Sn − nEξ = x) ∼ (1/√(2πnd)) e^{−x²/2nd}  if x ≤ (1 − ε)√(d(α − 2)n ln n),
P(Sn − nEξ = x) ∼ nα V(x)/x  if x ≥ (1 + ε)√(d(α − 2)n ln n).

If x ≤ (1 − ε)√(d(α − 2)n ln n) then condition [D1] is redundant.

Note that, generally speaking, Theorems 4.7.5 and 4.7.6 do not imply the integral theorem (4.1.2), because in that theorem condition [D1] was not assumed. It is clear that Theorems 4.7.5 and 4.7.6 could be combined into one assertion about the asymptotics of P(Sn − nEξ ∈ Δ[x)), without the assumption (4.7.19), where Δ is an arbitrary fixed positive number in the non-lattice case and Δ = 1 in the arithmetic case. Observe also that the assertions of Theorems 4.7.5 and 4.7.6 for deviations x ≫ √(n ln n) and x = O(√n) have already been proved in Theorem 4.7.1 and the Gnedenko–Stone–Shepp theorem (see [130, 258, 254]) respectively. For a proof for the remaining zone √n ≪ x = O(√(n ln n)), see [192].

4.8 Extension of results on the asymptotics of P(Sn ≥ x) and P(S̄n ≥ x) to wider classes of jump distributions

The main assumption under which we established in Chapters 3 and 4 the asymptotics

P(Sn ≥ x) ∼ nP(ξ ≥ x),  P(S̄n ≥ x) ∼ nP(ξ ≥ x)  (4.8.1)

in the respective deviation zones was that the distribution F of the summands ξi is regularly varying at infinity, i.e. that condition [ · , =] is satisfied. Now we will show that the asymptotics (4.8.1) remain valid for a wider class of distributions as well, but possibly for narrower deviation zones. First assume that Eξ² < ∞. Recall that the definitions of ψ-locally constant (ψ-l.c.) and upper-power functions can be found in Chapter 1 (see pp. 18, 28).

Theorem 4.8.1. Let the following conditions be satisfied:

(1) [ · , <] with α > 2, and Eξ² < ∞;
(2) the function F+(t) is both upper-power and ψ-l.c.

Let, moreover, x → ∞ and

n ≤ x²/(2c ln x),  n < c1 ψ²(x),  nV²(x) = o(F+(x))

for some c > α − 2, c1 < ∞. Then (4.8.1) holds true.
ct^{−α−ε} for some ε < α − 2. Indeed, in that case, for n < x² and any ε′ ∈ (0, α − 2 − ε), by Theorem 1.1.4(i) we have

nV²(x) = n x^{−2α} L²(x) = o(x^{−2α+2+ε′}) = o(F+(x)),

since −2α + 2 + ε′ < −α − ε.

Remark 4.8.4. It is easily seen that Theorem 4.8.1, together with Theorem 4.8.6 below, which covers the case Eξ² = ∞, includes the following as special cases:

(a) situations when the distribution F is of ‘extended regular variation’, i.e. when, for some 0 < α1 ≤ α2 < ∞ and any b > 1,

b^{−α2} ≤ lim inf_{x→∞} F+(bx)/F+(x) ≤ lim sup_{x→∞} F+(bx)/F+(x) ≤ b^{−α1};  (4.8.3)
(b) the somewhat more general case of distributions of ‘intermediate regular variation’ (introduced in [89] and sometimes also referred to as distributions with ‘consistently varying tails’), i.e. distributions with the property

lim_{b↓1} lim inf_{x→∞} F+(bx)/F+(x) = 1.  (4.8.4)
Under the assumption that the r.v. ξ = ξ − Eξ was formed by centring a non-negative r.v. ξ 0, the first of the asymptotic relations (4.8.1) was obtained in these (a) and (b) in [90] and [210], respectively. Note that in [210] condition [ · , 1 but the large deviations theorem was established there only in the zone x δn, where δ > 0 is fixed. Remark 4.8.5. Since a ψ-l.c. function F+ with ψ(t) = t is an s.v.f., one can assume in condition (2) of Theorem 4.8.1 that necessarily ψ(t) = o(t). Proof of Theorem 4.8.1. First consider P(Sn x). As condition [ · , cM n → 0 as M → ∞, the right-hand side of (4.8.10) is o(F+ (x)). If we now consider, instead of (4.8.9), (4.8.10), the expression ⎛ ⎞ x−u−M x−u ψ(x) ⎜ ⎟ + ⎝ ⎠ P(ξ ∈ dv) P(S n−k x − u − v) δx
x−u−M ψ(x)
for any fixed u, |u| < δx/2 then all the computations will remain valid (with obvious minor changes, which, however, do not affect the final bound o(F+ (x))). This means that the bound o(F+ (x)) remains true for the integrals in (4.8.7), (4.8.8) as well. Comparing (4.8.6), (4.8.7) with the derived bounds, we obtain that P(S n x) = nF+ (x)(1 + o(1)). The theorem is proved. The case Eξ 2 = ∞, E|ξ| < ∞, Eξ = 0 is dealt with in exactly the same way. Theorem 4.8.6. Let the following conditions be satisfied: (1) [ 0, P η(x) = i, χ(x) zx| S n x ∼ P η(x) = i, χ(x) zx| Sn x ∼
(1 + z)^{−α}/n.  (4.9.1)
The conditional distribution of the process {x−1 Snt ; t ∈ [0, 1]} given Sn x (or S n x), converges weakly in the Skorokhod space D(0, 1), as n → ∞, to the distribution of the process {ζα E(t); t ∈ [0, 1]}. Here the r.v. ζα and the process {E(t)} are independent of each other and P(ζα y) = y −α , y 1. Analogous assertions will hold true under the conditions of Chapter 3 when α ∈ (1, 2), x σ(n). Proof. Put Gi−1 := S i−1 < x ,
Yi−1 := V (x + zx − Si−1 ) 1 Gi−1 √ and let ε = ε(x) → 0 as x → ∞ be such that εx n ln n. Then, for i n, P η(x) = i, χ(x) zx = P Gi−1 ; Si−1 + ξi x + zx = EYi−1 = E1 + E2 + E3 + E4 , where # E1 : = E Yi−1 ; # E2 : = E Yi−1 ; # E3 : = E Yi−1 ; # E4 : = E Yi−1 ;
$ |Si−1 | < εx ∼ V (x + zx), $ Si−1 −εx = o V (x) , $ Si−1 ∈ [εx, x/2] = o V (x) , $ Si−1 ∈ (x/2, x) V (zx)P Si−1 > x/2 cnV (x)V (zx) = o V (x) .
Thus we obtain that, for i n, P η(x) = i, χ(x) zx ∼ V (x + zx) ∼ (1 + z)−α V (x).
(4.9.2)
Moreover, because

P(η(x) = i, χ(x) ≥ zx, Sn < x) ≤ P(η(x) = i, χ(x) ≥ zx) P(Sn−i < −zx)

and the last factor tends to zero, we also have

P(η(x) = i, χ(x) ≥ zx, Sn ≥ x) ∼ (1 + z)^{−α} V(x).  (4.9.3)
Since

P(S̄n ≥ x) ∼ P(Sn ≥ x) ∼ nV(x)  (4.9.4)

by Theorem 4.4.1, the relations (4.9.2) and (4.9.3) immediately imply (4.9.1). Furthermore, it is evident that the probabilities

P( max_{j≤i} |Sj−1| < εx, ξi ≥ x, max_{j≤n−i} |Sj^{(i)}| < εx )
have the same asymptotics as (4.9.2). From this and (4.9.4) one can easily derive the convergence of the conditional distributions of {x^{−1}S_{nt}; t ∈ [0, 1]} in D(0, 1): it suffices to prove the convergence of the finite-dimensional distributions (which is obvious) and verify the compactness (tightness) conditions. The latter can be done in a standard way (see e.g. Chapter 3 of [28]), and we will leave this to the reader.

The following assertion could be obtained in a similar way, using the integro-local theorems of § 4.7. As before, let ξ̄n = max_{k≤n} ξk.

Theorem 4.9.2. Let the conditions [ · , =], α > 2, Eξ² < ∞, [D(1,q)] with q(t) ≡ 1 and x ≫ √(n ln n) be satisfied, and let Δ > 0 be an arbitrary fixed number. Then, as n → ∞,

P(η(x) = i | S̄n ∈ Δ[x)) ∼ 1/n.

The conditional distribution of the process {x^{−1}S_{nt}; t ∈ [0, 1]} given the event Sn ∈ Δ[x) converges weakly in D(0, 1), as n → ∞, to the distribution of the process {E(t); t ∈ [0, 1]}.

The limiting distribution for the conditional laws of (ξ̄n − x)/√n given the event Sn ∈ Δ[x) coincides with the limiting distribution for −Sn/√n (which is clearly normal).

The last assertion follows in an obvious way from the relation Si−1 + ξ̄n + Sn − Si ∈ Δ[x), which clearly holds on the event {ωn = i} ∩ {Sn ∈ Δ[x)}, where we put ωn := min{k ≥ 1: ξk = ξ̄n}, and from the fact that the limiting distributions for

(Sn−1 + δn)/√n,  |δn| ≤ Δ,  and  Sn/√n

are the same.

Along with the assertions of Theorems 4.9.1 and 4.9.2, providing first-order approximations to the conditional distributions of {x^{−1}S_{nt}}, one can also obtain a second-order approximation. What happens here is that, roughly speaking, under the condition Sn ∈ Δ[x) the processes {S_{nt}} approach (in distribution) the process

(x − w(1)√n) E(t) + √n w(t),
4.9 Distribution of {Sk} given that Sn ≥ x or S̄n ≥ x
where w(t) is the standard Wiener process (we assume that Eξ² = 1), which is independent of E(t). We will state the above claim in a more precise form. Let

Eωn(t) := 0 for t ≤ ωn,  and  Eωn(t) := 1 for t > ωn.
t ∈ [0, 1],
given Sn ∈ Δ[x), converges weakly in D(0, 1) to the distribution of the process {w(t) − w(1)E(t)}, where the processes {w(t)} and {E(t)} are independent of each other. Proof. To prove the convergence of the finite-dimensional distributions we will ∗ follow the arguments in §§ 4.4 and 4.7. Consider the trajectory {Snt } that is obtained from {Snt } by replacing in the latter its maximum jump ξ n (which will be unique with probability tending to 1) by zero, so that ξ n ∈ x − Sn∗ + Δ[0) (given Sn ∈ Δ[x)). It can be seen from the proofs of the theorems of §§ 4.4 ∗ and 4.7 that, for each i, the conditional distribution of the trajectory {Snt } given ξi x/r ‘almost coincides’ with the distribution of the sequence of cumulative sums of independent r.v.’s (distributed as ξ), one of which (the ith one) is replaced by zero. As in §§ 4.4 and 4.7, we can represent the principal part of the probability P(Gn ) of the desired event Gn as the sum nj=1 P(Gn B j ), and then it is not hard to derive from the above observations that, for each t, the sequence 1 1 √ Snt − ξ n Eωn (t) = √ Snt − (x − Sn∗ + δn )Eωn (t) , n n
|δn | Δ,
given Sn ∈ Δ[x), will converge in distribution as n → ∞ to the same r.v. as the √ ∗ sequence Snt / n. In other words, the ‘conditional process’ 1 √ Snt − xEωn (t) n will converge to the same limiting process as 1 ∗ √ Snt − Sn∗ Eωn (t) , n i.e. to the process {w(t) − w(1)E(t)}. Compactness (tightness) conditions are verified in the standard way. One could also obtain second-order approximations to the conditional distributions of the process {Snt } given Sn x. In the case Eξ 2 = ∞ it is not difficult to obtain, using the same arguments,
complete analogues of Theorems 4.9.1–4.9.3 under condition [ 0, as n → ∞,

P(η(x) = i | Sn ∈ Δ[x)) ∼ 1/n.

The conditional distribution of the process {x^{−1}S_{nt}; t ∈ [0, 1]} given the event Sn ∈ Δ[x) converges weakly in D(0, 1), as n → ∞, to the distribution of the process {E(t); t ∈ [0, 1]}. If, moreover, condition [Rα,ρ] holds, then the limit of the conditional laws of (ξ̄n − x)/b(n), b(n) = F^{(−1)}(1/n), given Sn ∈ Δ[x) coincides with the limiting distribution of −Sn/b(n) (i.e. with the stable distribution Fα,−ρ; for simplicity we assume that α ≠ 1). In the statement of the analogue of Theorem 4.9.3 in the case Eξ² = ∞, one should replace the Wiener process {w(t)} by the stable process {ζ^{(α,ρ)}(t)} and the scaling sequence √n by b(n).
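A small Monte Carlo experiment illustrates the ‘one big jump’ picture behind Theorem 4.9.1 and the asymptotics (4.9.4). The jump law below (centred Lomax with tail index α = 3, so V(t) = (t + 1.5)^{−3}) and all parameters are illustrative assumptions; with them n V(x) approximates P(Sn ≥ x), and on the event {Sn ≥ x} the maximal jump typically carries most of the excess.

```python
import numpy as np

rng = np.random.default_rng(0)
n, x, N = 20, 20.0, 400_000

# numpy's pareto() draws Lomax (Pareto II): P(X > t) = (1 + t)**(-3), mean = 1/2
xi = rng.pareto(3.0, size=(N, n)) - 0.5   # centred jumps, Exi = 0
S = xi.sum(axis=1)
hit = S >= x

V = lambda t: (t + 1.5) ** (-3.0)         # right tail of the centred jump
ratio = hit.mean() / (n * V(x))           # empirical P(S_n >= x) over nV(x)
big_jump_frac = (xi[hit].max(axis=1) > x / 2).mean()

assert 0.3 < ratio < 3.0                  # P(S_n >= x) ~ n V(x), crude check
assert big_jump_frac > 0.6                # one jump usually dominates
```

The bounds are deliberately loose: x here is only moderately large, so the first-order asymptotics hold within a constant factor only.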
5 Random walks with semiexponential jump distributions
5.1 Introduction

In Chapters 5 and 6 we will be studying random walks with jumps for which the distributions decay at infinity in a regular manner but faster than any power function. For such walks also, one can obtain a rather complete description of the asymptotics of the large deviation probabilities using methods close to those developed in Chapters 2–4. Two distribution classes will be considered. This chapter is devoted to studying random walks with semiexponential jump distributions, which were introduced in Definition 1.2.22 (p. 29). In Chapter 6, however, we will study exponentially decaying distributions of the form

P(ξ ≥ t) = V0(t) e^{−λ₊t},  λ₊ > 0,

where V0(t) is an r.v.f. such that ∫_1^∞ t V0(t) dt < ∞. In this case, the left derivative ϕ′(λ₊) of the function ϕ(λ) = Ee^{λξ} is finite, and so the existing analytic methods for studying the asymptotics of, say, the probabilities P(Sn ≥ x) and P(S̄n ≥ x) will not work for deviations x such that x/n > ϕ′(λ₊)/ϕ(λ₊) (see [37]). This means that one has to look for some ‘mixed’ approaches that would use the results of Chapters 2–4 (for more detail, see Chapter 6).

So, the present chapter deals with large deviation problems for the class Se of semiexponential distributions, which have the form

P(ξ ≥ t) = V(t) = e^{−l(t)},  (5.1.1)

where

(1) the function l(t) admits the representation

l(t) = t^α L(t),  0 < α < 1,  L(t) is an s.v.f. at infinity;  (5.1.2)

(2) as t → ∞, for Δ = o(t) one has

l(t + Δ) − l(t) = (αΔ l(t)/t)(1 + o(1)) + o(1).  (5.1.3)
In other words, (2) means that

l(t + Δ) − l(t) ∼ αΔ l(t)/t  (5.1.4)

if Δ = o(t) and Δ l(t)/t > ε for a fixed ε > 0, and that

l(t + Δ) − l(t) = o(1)  (5.1.5)

if Δ l(t)/t → 0.

Along with the notation F ∈ Se, which indicates that the distribution F is semiexponential, we will sometimes use the equivalent notation V ∈ Se (or F+ ∈ Se), where V (or F+) is the right tail of F. This is quite natural, as conditions (5.1.3)–(5.1.5) refer to the right distribution tails only. Note that, in the present chapter, we will exclude the extreme cases α = 0 and α = 1 (cf. Definition 1.2.22).

Some properties of semiexponential distributions were studied in § 1.2. Recall that if the function L(t) is differentiable and L′(t) = o(L(t)/t) as t → ∞ then l′(t) ∼ αl(t)/t, the property (5.1.3) always holds and (5.1.4) is true for all Δ = o(t), so that the second remainder term o(1) in (5.1.3) is absent.

If the distribution of ζ satisfies Cramér’s condition, ϕ(λ) = Ee^{λζ} < ∞ for some λ > 0, then, under rather broad conditions, the distribution of ξ = ζ² will be semiexponential. Assume, for example, that, in the representation

P(ζ ≥ t) = e^{−λ₊t + h(t)},  λ₊ = sup{λ: ϕ(λ) < ∞} > 0,  (5.1.6)

the function h(t) = o(t) is differentiable for t ≥ t0 > 0 and h′(t) → 0 as t → ∞. Then

P(ξ ≥ t) = e^{−λ₊√t + h(√t)},

so that the function l(t) = λ₊√t − h(√t) has the property that

l(t + Δ) − l(t) = (λ₊Δ/(2√t))(1 + o(1))

as t → ∞, Δ = o(t), and hence the relations (5.1.2), (5.1.3) hold for α = 1/2. It is obvious that the same could also be said about the distribution of the sum χ² := ζ1² + · · · + ζk², where the r.v.’s ζj =d ζ are independent.

In this chapter, by conditions [ · , =] and [ · , 0. Beyond the deviation zone x ≤ σ1(n) = n^{1/(2−α)} L1(n) one can identify two further zones, σ1(n) < x ≤ σ2(n) and x > σ2(n), in which the asymptotics of P(Sn ≥ x) will be different. In the deviation zone x ≥ σ2(n) = n^{1/(2−2α)} L2(n) (the asymptotics of σ2(n) will be made more precise later on), the ‘maximum jump principle’ is valid, i.e. the principal contribution to large deviations of Sn comes from ξ̄n = max_{j≤n} ξj, so that

P(Sn ≥ x) ∼ P(ξ̄n ≥ x) = nV(x)(1 + o(1)).  (5.1.16)

The asymptotic representation (5.1.16) for x ≫ n^{1/(2−2α)} was obtained in [238, 196, 227, 51]. Concerning results on the large deviations of Sn for distributions
satisfying (5.1.10), see also [206, 191] (these references also contain more complete bibliographies). The asymptotics of P(S̄n ≥ x) in the cases when they coincide with those for P(Sn ≥ x) were established in [227].

In [238], theorems on the asymptotics of P(Sn ≥ x) valid on the whole real line were considered. In particular, that paper gives the form of P(Sn ≥ x) in the intermediate zone x ∈ (σ1(n), σ2(n)), but it does this under conditions whose meaning is quite difficult to comprehend. The intermediate deviation zone σ1(n) < x < σ2(n) was also considered in [195]. That paper deals with the rather special case when the distribution F has density

f(t) ∼ e^{−|t|^α}  as |t| → ∞;

the asymptotics of P(Sn ≥ x) were found there for α > 1/2 in the form of recursive relations, whence one cannot extract, in the general case, the asymptotics of P(Sn ≥ x) in explicit form. Asymptotic representations for P(S̄n ≥ x) in the intermediate deviation zone σ1(n) < x < σ2(n), and also in the zone x ≥ σ2(n), were studied in [52].

The following asymptotic representation for P(S̄n(a) ≥ x), a > 0, was established in [178] (see also [275]): for all so-called strongly subexponential distributions V(t), as x → ∞, for all n one has

P(S̄n(a) ≥ x) ∼ (1/a) ∫_x^{x+an} V(u) du.  (5.1.17)
D.A. Korshunov has communicated to us that the sufficient conditions from [178] for a distribution to be strongly subexponential will be satisfied for laws from the class Se. The content of the present chapter is, in many aspects, similar to that of Chapters 2–4. We will be using the same approaches as in the above chapters but in a modified form. Owing to more severe technical difficulties, the class of problems will be narrowed: we will not deal with the probability that a trajectory will cross a given arbitrary boundary, although in principle there are no obstacles to deriving this. In § 5.8 we present integro-local and integral theorems for Sn that are valid on the whole real line. They cover all deviation zones including boundary regions. In particular, they improve the integral theorems for Sn established in the previous sections and the above-mentioned literature. The complexity of the technique used to prove these results, which were presented in the recent paper [73], is, however, somewhat beyond the level of the other material in the present monograph. This is why the respective theorems are presented in § 5.8 without proofs; the latter can be found in [73].
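The right-hand side of (5.1.17) can be evaluated numerically for a concrete tail from the class Se. The sketch below assumes V(u) = e^{−√u}, for which ∫_x^∞ e^{−√u} du = 2e^{−√x}(√x + 1) in closed form; the values of a, x and n are illustrative.

```python
import numpy as np

# Evaluate (1/a) * int_x^{x+an} V(u) du for the assumed tail V(u) = exp(-sqrt(u)).
a, x, n = 1.0, 400.0, 100

V = lambda u: np.exp(-np.sqrt(u))
# closed-form tail integral: int_z^infty e^{-sqrt u} du = 2 e^{-sqrt z}(sqrt z + 1)
T = lambda z: 2.0 * np.exp(-np.sqrt(z)) * (np.sqrt(z) + 1.0)

u = np.linspace(x, x + a * n, 200_001)
# trapezoidal rule for (1/a) * int_x^{x+an} V(u) du
numeric = float(np.sum((V(u[1:]) + V(u[:-1])) * np.diff(u)) / 2.0 / a)
closed = (T(x) - T(x + a * n)) / a
rel_err = abs(numeric - closed) / closed
assert rel_err < 1e-6

# as n grows, the right-hand side of (5.1.17) increases to (1/a) int_x^infty V(u) du
assert closed < T(x) / a
```

For fixed x the integral saturates quickly in n, reflecting that the ruin-type probability P(S̄n(a) ≥ x) stabilizes once n ≫ x/a.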
Random walks with semiexponential jump distributions
5.2 Bounds for the distributions of S_n and S̄_n, and their consequences

5.2.1 Upper bounds for P(S̄_n ≥ x)

Similarly to Chapter 4, introduce a function σ1 = σ1(n) that characterizes the zone of deviations x where the 'normal' asymptotics e^{−x²/2n} and the 'maximum jump' asymptotics nV(x) will be approximately the same. More precisely, we will define such deviations x as solutions to the equation

    x²/(2n) = − ln nV(x).    (5.2.1)
As we are only interested in the asymptotics of σ1(n), this is the same as the solution to the equation

    x²/(2n) = − ln V(x) = l(x).

It will be somewhat more convenient for us to consider instead the equation

    x² = n l(x),    (5.2.2)

whose solution differs from that of the original equation by a bounded factor. It is not hard to see that, under the conditions of the present chapter, the function σ1 = σ1(n) will have the form

    σ1(n) = n^{1/(2−α)} L1(n),    (5.2.3)
where L1(n) is an s.v.f. (see Theorem 1.1.4(v); note that in § 5.1 σ1(n) denoted a formally different but quite close quantity; see (5.4.13) on p. 253). The deviation zone x ≤ σ1(n) will be referred to as the Cramér zone; the zone σ1(n) < x ≤ σ2(n), where σ2(n) is to be defined below, will be called intermediate; and the zone x > σ2(n) will be called the zone of validity of the maximum jump principle (the asymptotics of P(S̄_n ≥ x) in this zone coincide with those of P(ξ̄_n ≥ x) ∼ nV(x)). Put

    w1(t) := −t^{−2} ln V(t) = t^{−2} l(t) = t^{α−2} L(t).    (5.2.4)
One can assume, without loss of generality, that the function w1(t) is decreasing. Then the equation (5.2.2) can be rewritten as w1(x) = 1/n, and σ1(n) is simply the value of w1^{(−1)}, the function inverse to w1, at the point 1/n:

    σ1(n) = w1^{(−1)}(1/n).    (5.2.5)

It is not difficult to see that, if L satisfies the condition

    L(t L^{1/(2−α)}(t)) ∼ L(t)  as t → ∞,    (5.2.6)

then w1^{(−1)}(u) has the form

    w1^{(−1)}(u) ∼ u^{1/(α−2)} L^{1/(2−α)}(u^{1/(α−2)}),    (5.2.7)
so that L1(n) ∼ L^{1/(2−α)}(n^{1/(2−α)}) (see Theorem 1.1.4(v)). Note that although condition (5.2.6) is quite broad, it is not always satisfied. For example, it fails for the s.v.f. L(t) = exp{ln t / ln ln t}. Since the boundary σ1(n) of the Cramér deviation zone depends on n, the zones could be equivalently characterized by the inequalities

    w1(x) ≥ 1/n  in the Cramér zone,   w1(x) < 1/n  in the intermediate zone.
Thus, deviations could be characterized both by the quantity

    s1 := x/σ1(n)

(cf. Chapter 4; s1 ≤ 1 for the Cramér zone) and by the quantity

    π1 := π1(x) = n w1(x) = n x^{α−2} L(x)    (5.2.8)

(π1 ≥ 1 for the Cramér zone); observe that, for a fixed s1,

    π1(x) = n w1(s1 σ1(n)) ∼ s1^{α−2} n w1(σ1(n)) ∼ s1^{α−2}  as n → ∞.

In some cases it will be more convenient for us to use the characteristic π1 (in which the argument x will often be omitted; when the argument of π1(·) is different from x, it will always be included). As before, let

    Bj = {ξj < y},   B = ⋂_{j=1}^{n} Bj,   P = P(S̄_n ≥ x; B).
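For a concrete tail the boundary σ1(n) from (5.2.2)–(5.2.3) can be found numerically. A small sketch, assuming l(x) = x^α with L ≡ 1 (so that (5.2.3) gives σ1(n) = n^{1/(2−α)} exactly, and π1(σ1(n)) = 1):

```python
alpha = 0.5

def sigma1(n, lo=1e-9, hi=1e12, iters=200):
    # bisection for the root of x**2 == n * l(x), l(x) = x**alpha, cf. (5.2.2)
    f = lambda x: x * x - n * x ** alpha
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 10_000
s1 = sigma1(n)
print(s1, n ** (1.0 / (2.0 - alpha)))   # both ~ 464.16
print(n * s1 ** (alpha - 2.0))          # pi1(sigma1(n)) ~ 1
```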
Theorem 5.2.1. Let condition [ · , <] with V ∈ Se be satisfied. Then, for any fixed h > 1, all n and all sufficiently large y,

    P ≤ c [nV(y)]^{r − π1(y)h/2},   r = x/y,    (5.2.9)

where π1 is defined in (5.2.8). If, for fixed h > 1 and ε > 0, one has π1 h ≥ 1 + ε then, for y = x and all large enough n,

    P ≤ e^{−x²/2nh}.    (5.2.10)

If the deviation y is characterized by the relation y = s σ1(n) for a fixed s > 0, then (5.2.9) holds true with π1(y) replaced by s^{α−2}(1 + o(1)). If y = x and s^{2−α} < h, then the relation (5.2.10) is true.

Now we will give a few consequences of Theorem 5.2.1. Along with the function w1(t) (see (5.2.4)), we introduce the function

    w2(t) := w1(t) l(t) = t^{−2} l²(t) = t^{2α−2} L²(t),    (5.2.11)
which will be assumed, like w1(t), to be monotonically decreasing, so that the inverse function w2^{(−1)}(·) is defined for it. Set

    σ2(n) := w2^{(−1)}(1/n) = n^{1/(2−2α)} L2(n),    (5.2.12)

where L2 is an s.v.f. that can be found explicitly under the additional assumption (5.2.6) (as was the case for L1). Further, let r0 be the minimum solution to the equation

    1 = r − (π1 h/2) r^{2−α},

which always exists when π1 h < 2^{α−1}. Obviously,

    r0 − 1 ∼ π1 h/2  as π1 → 0.

Here and in what follows, h > 1 is, as before, an arbitrary fixed number, which can be chosen arbitrarily close to 1.
Corollary 5.2.2.
(i) If π1 h < 2^{α−1} then

    P(S̄_n ≥ x) ≤ c n V(x/r0).    (5.2.13)

If π1 l(x) ≤ c or, equivalently, x ≥ c2 σ2(n), then

    P(S̄_n ≥ x) ≤ c1 n V(x).    (5.2.14)

(ii) If π1 h ≥ 2^{α−1} then

    P(S̄_n ≥ x) < c n V(x)^{1/(2π1 h)}.    (5.2.15)

(iii) Let h > 1, ε > 0 be any fixed numbers. If π1 h ≥ 1 + ε then, for all large enough n, one has

    P(S̄_n ≥ x) ≤ e^{−x²/2nh} = V(x)^{1/(2π1 h)}.    (5.2.16)
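The optimal value r0 of Corollary 5.2.2(i) is the minimum solution of 1 = r − (π1 h/2) r^{2−α} and can be computed by fixed-point iteration; a sketch with hypothetical values of α, h and π1:

```python
alpha, h, pi1 = 0.5, 1.01, 0.05

def r0(pi1, iters=100):
    # iterate r -> 1 + (pi1*h/2) * r**(2 - alpha); a contraction for small pi1*h
    r = 1.0
    for _ in range(iters):
        r = 1.0 + 0.5 * pi1 * h * r ** (2.0 - alpha)
    return r

r = r0(pi1)
print(r - 1.0, 0.5 * pi1 * h)   # r0 - 1 ~ pi1*h/2 as pi1 -> 0
```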
Remark 5.2.3. As in Remark 4.1.5 (see also Corollaries 2.2.4 and 3.1.2), it is not difficult to verify that there exists a function ε(t) ↓ 0 as t ↑ ∞ such that, for x = s2 σ2(n),

    sup_{x: s2 ≥ t} P(S̄_n ≥ x)/(n V(x)) ≤ 1 + ε(t).
Proof of Theorem 5.2.1. The scheme of the proof remains the same as before. The main tool is again the basic inequality (2.1.8), in which one has to bound the integral R(μ, y). The integral I1 (see (4.1.19)) from the representation R(μ, y) = I1 + I2 admits the same upper bound as that obtained in the proof of Theorem 4.1.2 with M(ε) = ε/μ. Set e^ε = h. Then (see (4.1.22))

    I1 ≤ 1 + μ²h/2.    (5.2.17)
An upper bound for

    I2 = ∫_{M(ε)}^{y} e^{μt} F(dt) ≤ V(M(ε)) h + μ ∫_{M(ε)}^{y} V(t) e^{μt} dt    (5.2.18)

(see (4.1.23)) will now be somewhat different. We will represent the last term as the sum

    μ ∫_{M(ε)}^{M} e^{f(t)} dt + μ ∫_{M}^{y} e^{f(t)} dt =: I_{2,1} + I_{2,2},   f(t) := −l(t) + μt,    (5.2.19)

where the quantity M will be chosen below, and study the properties of the function f. First assume for simplicity that the function L from the representation (5.1.2) is differentiable and, moreover, that

    L′(t) = o(L(t)/t),   l′(t) is decreasing.    (5.2.20)

In this case,

    l′(t) = (α l(t)/t) (1 + L′(t)t/(αL(t))) ∼ α l(t)/t  as t → ∞.
Then the minimum of f(t) is attained at t0 = λ(μ), where λ(·) = (l′)^{(−1)}(·) is the function inverse to l′(·), so that

    l′(λ(μ)) ≡ μ,   λ(μ) = μ^{1/(α−1)} L∗(μ),   L∗(μ) an s.v.f. as μ → 0.

Put

    μ := v l′(y),    (5.2.21)

where v > 1 is to be chosen later (more precisely, it will be convenient for us to choose μ, thereby specifying v as well). It is clear that λ(μ) < y for v > 1. Observe that, for v ≈ 1/α > 1, the value f(y) = −l(y) + v l′(y) y ≈ l(y)(vα − 1) can be made small, and so e^{f(y)} becomes comparable with 1. Note also that

    y ≡ λ(μ/v) ∼ v^{1/(1−α)} λ(μ),   v > 1.    (5.2.22)

In the following argument we will keep in mind that v ≥ 1 + ε, ε > 0. Let M := γλ(μ), where γ is some point from the interval (1, α^{1/(α−1)}) (for example, its midpoint). Then, on the one hand, for t ≥ M we have

    f′(t) ≥ f′(M) ∼ −γ^{α−1} l′(λ(μ)) + μ ∼ μ(1 − γ^{α−1}) = cμ,   c > 0.    (5.2.23)
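For l(t) = t^α the quantities above are explicit: l′(t) = α t^{α−1} and λ(μ) = (μ/α)^{1/(α−1)}. A quick numerical check (hypothetical values of α and μ) that f(t) = −l(t) + μt is indeed minimal at λ(μ):

```python
alpha, mu = 0.5, 0.01

lam = (mu / alpha) ** (1.0 / (alpha - 1.0))   # lambda(mu): solves l'(t) = mu
f = lambda t: -t ** alpha + mu * t

# among a few multiples of lam, the minimum of f sits at s = 1.0
vals = [(f(lam * s), s) for s in (0.5, 0.9, 1.0, 1.1, 2.0)]
print(min(vals)[1])   # -> 1.0
```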
On the other hand, setting for brevity λ := λ(μ), one has

    f(M) ∼ −l(γλ) + μγλ ∼ −l′(γλ)γλ/α + μγλ ∼ (1 − γ^{α−1}/α) μγλ,    (5.2.24)

where γ^{α−1} > α. Since, by Theorem 1.1.4(ii),

    l(M(ε)) = l(ε/μ) > μ^{−α+δ},   μλ(μ) > μ^{α/(α−1)+δ}

for any δ > 0 and all small enough μ, we see that the quantity I_{2,1} from (5.2.19) can be bounded, owing to the above-mentioned properties of the function f, as follows:

    I_{2,1} ≤ μM e^{−μ^{−α+δ}} = μγλ(μ) e^{−μ^{−α+δ}} = o(μ²)    (5.2.25)
as μ → 0. It is obvious that the term V(M(ε)) h in (5.2.18) admits a bound of the same kind. To evaluate I_{2,2} in the case when y > M (i.e. when v^{1/(1−α)} > γ), we will use the inequality (5.2.23), which means that when computing the integral it suffices to consider its part over a neighbourhood of the point y only. For t = y − u, u = o(y), we have, owing to (5.2.21), that

    f(t) − f(y) = l(y) − l(y − u) − μu = l′(y) u (1 + o(1)) − μu ∼ μu (1/v − 1).

So, for U = o(y), U ≫ μ^{−1},

    μ ∫_{y−U}^{y} e^{f(t)} dt ∼ μ e^{f(y)} ∫_0^{U} e^{μu(1/v−1)} du ≤ e^{f(y)} v/(v − 1).

The integral ∫_{M}^{y−U} can be evaluated in a similar way, with the result o(e^{f(y)}). Therefore,

    I_{2,2} ≤ (v/(v − 1)) e^{f(y)} (1 + o(1)).    (5.2.26)
Now we can complete the bounding of the integral R(μ, y) in the basic inequality (2.1.8). Collecting together (5.2.17), (5.2.18), (5.2.25) and (5.2.26), we obtain

    R(μ, y) ≤ 1 + (μ²h/2)(1 + o(1)) + (v/(v − 1)) e^{f(y)} (1 + o(1)),

and hence

    R^n(μ, y) ≤ exp{ (nhμ²/2)(1 + o(1)) + (vn/(v − 1)) e^{f(y)} (1 + o(1)) }.    (5.2.27)
Put

    μ := (1/y) ln T,   T := r(1 − α)/(nV(y)) ≡ c/(nV(y)),

and observe that (5.2.9) becomes trivial for π1(y) = n w1(y) > 2r/h (the right-hand side of this inequality will then be unboundedly increasing). For deviations y such that π1(y) ≤ 2r/h, one has n ≤ c1 y^{2−α+ε} for any ε > 0, and therefore

    μ = (1/y) ln T ∼ −(1/y) ln nV(y) ∼ l(y)/y ∼ l′(y)/α.

This means that we have v ≈ 1/α > 1 in (5.2.21) and that all the assumptions made about μ and v hold true. Further, as in § 4.1, we find that

    ln P ≤ −μx + (nhμ²/2)(1 + o(1)) + (nV(y)/(1 − α)) e^{μy} (1 + o(1)),    (5.2.28)

where, by virtue of our choice of μ, we have

    (nV(y)/(1 − α)) e^{μy} = r,   −μx + nhμ²/2 = (−r + ρ) ln T,
    ρ := (nh/(2y²)) ln T = −(nh/(2y²)) (ln nV(y) − ln c),   c = r(1 − α).

Here (see (5.2.4))

    −ln V(y)/y² = l(y) y^{−2} = w1(y) = π1(y)/n.    (5.2.29)

Therefore, assuming for simplicity that c ≤ n, we see that ρ ≤ π1(y)h/2 and hence

    P ≤ c1 [nV(y)]^{r − π1(y)h/2}    (5.2.30)

(if c > n then one should add o(1) to the exponent in (5.2.30) and then remove that summand by slightly increasing h). This proves the first part of the theorem.

Now we will consider the Cramér deviation zone, where π1 = n w1(x) can be large. Here we put

    y := x,   μ := x/(nh),

so that

    μ = x w1(x)/(π1 h) = l(x)/(x π1 h) ∼ l′(x)/(α π1 h),   v ∼ 1/(α π1 h)

(see (5.2.21)), and the condition v > 1, which we assumed after (5.2.21), could fail for large π1. If v > γ^{1−α} (or, equivalently, x = y > M; see (5.2.22)) then all the bounds for R(μ, y) obtained above will still hold, and one will again obtain (5.2.27). If, however, v ≤ γ^{1−α} then I_{2,2} will disappear from the above considerations, and likewise the last term in the exponent on the right-hand side of (5.2.27) will disappear too. In this case, we will immediately obtain the second assertion of the theorem.
Thus it only remains to consider the case (απ1 h)^{−1} > γ^{1−α} > 1, and for this case to bound the last term in the exponent in (5.2.27). For μ = y/(nh) = x/(nh), the logarithm of that term is equal to

    H = μy + ln nV(y) + O(1) = (x²/n)(1/h − w1(y) n) + ln n + O(1) = (x²/n)(1/h − π1) + ln n + O(1).

If π1 h ≥ 1 + ε and x² ≫ n ln n then H → −∞ as n → ∞. Now observe that if n^{1/2} < x < n^{1/2+ε} for a small enough ε > 0 then

    π1 = n w1(x) → ∞,   (x²/n) π1 = x² w1(x) ≫ ln n,

and hence again H → −∞. Therefore, the last summand on the right-hand side of (5.2.28) is negligibly small, and

    ln P ≤ −(x²/(2nh))(1 + o(1)) + o(1);

the last term, o(1), could be removed by slightly increasing the value of h. The theorem is proved under the assumption (5.2.20).

If (5.2.20) does not hold then one should use the representation (5.1.3), which implies that the increments l(t + Δ) − l(t) (and it was the behaviour of these increments that was important in the above analysis) behave in exactly the same way as under the assumption (5.2.20), up to error terms o(1) that change neither the qualitative conclusions nor the bounds for the integrals. The value l′(y) used in the above considerations should be replaced by l^{(1)}(y) := α l(y)/y (cf. (5.1.3)). Then all the assertions in (5.2.21)–(5.2.30) will remain true. The theorem is proved.
(5.2.31)
Our aim is to choose y (or r = x/y) as close to the optimal value as possible. Observe that, for r comparable with 1, one has, as x → ∞, π1 (y) = π1 (x/r) ∼ r 2−α π1 ,
l(y) ∼ r −α l(x). √ Moreover, recall that we are considering deviations x > n, which corresponds to the situation ln n < 2 ln x l(x). Therefore the factors n will play a secondary role in (5.2.31), and so we can only consider the behaviour of the factors V (y) (to the respective powers). The logarithm of the second term on the right-hand side of (5.2.31) has the form (recall that π1 = π1 (x)) # $r−π1 (y)h/2 hπ1 2−2α r (1 + o(1)), (5.2.32) = −l(x) r1−α − ln V (y) 2
where the right-hand side attains its minimum value,

    −l(x) (1/(2π1 h)) (1 + o(1)),

in the vicinity of the point r̄ := (π1 h)^{1/(α−1)}. For r = r̄,

    ln V(y) = −l(x)(π1 h)^{−α/(α−1)} (1 + o(1)).    (5.2.33)

Therefore, if r̄^{−α} = (π1 h)^{−α/(α−1)} ≥ (2π1 h)^{−1} (or, equivalently, π1 h ≥ 2^{α−1}) then the logarithm of the second term on the right-hand side of (5.2.31) will be at least as large as (5.2.33), and we could choose r0 = r̄ as the desired optimal value of r. Moreover, since the power of n in this term is equal to (2π1 h)^{−1} ≤ 2^{−α} < 1, we obtain from (5.2.31) that

    P(S̄_n ≥ x) ≤ c n V(x)^{(1+o(1))/(2π1 h)},

where the factor 1 + o(1) could be replaced by 1 on slightly changing the value of h. This proves (5.2.15).

If π1 h < 2^{α−1} then one should take r0 to be the value for which both terms on the right-hand side of (5.2.31) are roughly equal, i.e. r0 is chosen as the minimum solution to the equation

    1 = r − (π1 h/2) r^{2−α},

so that

    r0 = 1 + π1 h/2 + (2 − α)(π1 h/2)² + ···

and r0 − 1 ∼ π1 h/2 as π1 → 0. In this case, P(S̄_n ≥ x) ≤ c n V(x/r0). This proves (5.2.13).

The inequality (5.2.14) is a consequence of (5.2.13). Indeed, setting 1/r0 = 1 − θ, we obtain that θ = ½ π1 h (1 + o(1)) as π1 → 0, and

    V(x/r0) = exp{−l(x(1 − θ))} = exp{−l(x) + αθ l(x)(1 + o(1))}
            = exp{−l(x) + (α/2) π1 h l(x)(1 + o(1))}.    (5.2.34)

This implies (5.2.14). Finally, the assertion (5.2.16) follows from the first inequality in (5.2.31) with y = x, the bound (5.2.10) and the fact that, for π1 h ≥ 1 + ε and x > √n, one has

    exp{−x²/(2nh)} = exp{−l(x)/(2π1 h)} ≥ exp{−l(x)/(2 + 2ε)} = V(x) exp{l(x)(1 + 2ε)/(2 + 2ε)} ≥ nV(x).

The corollary is proved.
Remark 5.2.4. Unfortunately, Theorem 5.2.1 does not enable one to obtain inequalities of the form P(S̄_n ≥ x) ≤ nV(x)(1 + o(1)) for x ≫ σ2(n) (π1 l(x) → 0 as x → ∞). Indeed, we will have the asymptotic equivalence V(y) ∼ V(x) in the main inequality (5.2.31) only if r ≡ x/y is of the form r = 1 + θ, where θ l(x) → 0 as x → ∞ (cf. (5.2.34)). But, for such a θ, the second term on the second right-hand side of (5.2.31) will be asymptotically equivalent to c nV(x), so that the whole of this right-hand side will be asymptotically equivalent to (1 + c) nV(x), c > 0. Bounds of that form have already been obtained in (5.2.14), under the weaker assumption π1 l(x) < c.
5.2.2 Lower bounds for P(S_n ≥ x)

Lower bounds for P(S_n ≥ x) will follow from Theorem 4.3.1, the assertion of which is not related to the condition of regular variation of the tails P(ξ ≥ t). In particular, the theorem implies that, for y = x + u√n, as u → ∞,

    P(S_n ≥ x) ≥ n F+(y)(1 + o(1)).

If condition [ · , =] with V ∈ Se is met then

    V(y) = V(x + u√n) = e^{−l(x + u√n)},

where

    l(x + u√n) − l(x) = (αu√n l(x)/x)(1 + o(1)) + o(1) = o(1),

provided that n ≪ x²/l²(x) = 1/w2(x) or, equivalently, x ≫ σ2(n), and u → ∞ slowly enough. Therefore, in the specified range of x-values, one has V(y) ∼ V(x). We have proved the following assertion.

Corollary 5.2.5. Let condition [ · , =] with V ∈ Se be satisfied. Then there exists a function ε(t) ↓ 0 as t ↑ ∞ such that, for x = s2 σ2(n),

    P(S_n ≥ x)/(n V(x)) ≥ 1 − ε(s2).

In view of the absence of a similar opposite inequality (see Remark 5.2.4), one cannot derive from here the exact asymptotics

    P(S_n ≥ x) ∼ nV(x),   P(S̄_n ≥ x) ∼ nV(x)

in the zone x ≫ σ2(n). These asymptotics will be obtained below, in Theorems 5.4.1 and 5.5.1.
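The single-jump asymptotics P(S_n ≥ x) ≈ nV(x) can also be probed by crude simulation. A sketch with hypothetical parameters: if E is a standard exponential r.v. then ξ = E² − 2 is centred and has exactly the semiexponential tail P(ξ ≥ t) = e^{−√(t+2)}:

```python
import math, random

random.seed(1)
n, x, N = 5, 60.0, 200_000

def xi():
    e = random.expovariate(1.0)
    return e * e - 2.0            # E[xi] = 0, P(xi >= t) = exp(-sqrt(t + 2))

hits = sum(1 for _ in range(N) if sum(xi() for _ in range(n)) >= x)
mc = hits / N
single_jump = n * math.exp(-math.sqrt(x + 2.0))   # n * P(xi_1 >= x)
print(mc / single_jump)
```

At such a moderate x the ratio is still noticeably above 1 (collective contributions of several jumps are not yet negligible); it approaches 1 only deep in the zone x ≫ σ2(n) described above.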
5.3 Bounds for the distribution of S̄_n(a)

We will begin with upper bounds. As in Chapters 3 and 4, the main elements of these bounds are inequalities for

    P(a, v) := P(S̄_n(a) ≥ x; B(v)),

where

    B(v) = ⋂_{j=1}^{n} Bj(v),   Bj(v) = {ξj < y + vj}.

Put

    z := z(x) = x/(α l(x)) = o(x).    (5.3.1)

Note that the value z(x) is an increment of x such that, for a fixed t, one has l(x + zt) − l(x) = t(1 + o(1)) or, equivalently,

    V(x + zt) ∼ e^{−t} V(x).    (5.3.2)

In situations where the argument of the function z(·) is different from x, it will be indicated explicitly.

Theorem 5.3.1. Suppose that condition [ · , <] with V ∈ Se is satisfied and a > 0, v > 0, ε > 0 are fixed. Then, for y ≥ εx,

    P(a, v) ≤ c min{z^{r+1}(y), n^r} V^r(y),   r = x/y.    (5.3.3)

Note that the bound (5.3.3) is not unimprovable; the function z^{r+1}(y) could be replaced by z^r(y). This is a result of the use of the crude inequalities (5.3.8) in the proof of the theorem. Deriving an exact bound requires additional effort. However, the inequality (5.3.3) will prove to be sufficient for finding the exact asymptotics of P(S̄_n(a) ≥ x).

Owing to the above-mentioned deficiency of the inequality (5.3.3), one cannot derive from it the following assertion, the proof of which will be constructed in a different way.

Theorem 5.3.2. Let condition [ · , <] with V ∈ Se be satisfied, and let a > 0 be a fixed number. Then

    P(S̄_n(a) ≥ x) ≤ c m V(x),   m := min{z(x), n}.    (5.3.4)
To prove Theorem 5.3.1, we will need an auxiliary assertion. Set

    S(l, r) := Σ_{j=1}^{n} j^l V^r(y + vj).    (5.3.5)

Lemma 5.3.3. If V ∈ Se then

    S(l, r) ≤ c Γ(l + 1) min{ A^{l+1}, n^{l+1}/Γ(l + 2) } V^r(y),    (5.3.6)
where A = z(y)/(rv) and the constant c can be chosen arbitrarily close to 1 for large enough n.

Proof. Clearly S(l, r) ≤ c I(l, r), where

    I(l, r) := ∫_0^{n} t^l V^r(y + vt) dt = (1/v^{l+1}) ∫_0^{nv} u^l V^r(y + u) du.

For u ≤ nv = o(y) we have

    V^r(y + u) = V^r(y) exp{−(ur/z(y))(1 + o(1))}.

Since

    ∫_0^{A} t^l e^{−t} dt ≤ min{ Γ(l + 1), A^{l+1}/(l + 1) },

we obtain

    ∫_0^{nv} u^l V^r(y + u) du ≤ V^r(y) (z(y)/r)^{l+1} ∫_0^{nvr/z(y)} t^l e^{−t(1+o(1))} dt
        ≤ c V^r(y) (z(y)/r)^{l+1} min{ Γ(l + 1), (nvr/z(y))^{l+1}/(l + 1) }.

This bound, proving (5.3.6), will obviously remain valid for arbitrary nv as well. The lemma is proved.
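A numerical sanity check of the bound (5.3.6) for V(t) = e^{−√t} (hypothetical parameter choice; the constant c is taken as 1.05, in line with the lemma's "arbitrarily close to 1"):

```python
import math

alpha = 0.5
V = lambda t: math.exp(-t ** alpha)
z = lambda y: y / (alpha * y ** alpha)      # z(y) = y / (alpha * l(y))

def S(l, r, y, v, n):
    # S(l, r) = sum_{j<=n} j**l * V(y + v*j)**r, cf. (5.3.5)
    return sum(j ** l * V(y + v * j) ** r for j in range(1, n + 1))

l, r, y, v, n = 1, 2.0, 900.0, 1.0, 50
A = z(y) / (r * v)
bound = math.gamma(l + 1) * min(A ** (l + 1), n ** (l + 1) / math.gamma(l + 2)) * V(y) ** r
print(S(l, r, y, v, n) <= 1.05 * bound)   # -> True
```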
y1 := y + vn y + vz(y) ∼ y,
and therefore x h + o(1) π1 (y)h x − − y1 2 y(1 + vz(y)/y) 2αy z(y) + O(1/y) = r + O(1/l(y)). r − rv y Hence, by Theorem 5.2.1, n * P (a, v) P S n x; {ξj < y1 + vn} #
j=1
c nV (y1 )
$x/y1 −π1 (y)h/2
# $r c1 nV (y1 ) .
Now let n be arbitrary. First we bound the probability n * {ξj < y + vn} . P Sn − an x; B(v) P Sn x + an; j=1
1 , αy
We will use Theorem 5.2.1, there taking x to be x1 = x + an and y and r to be y1 = y + vn and r1 = x1 /y1 respectively, so that r1 r
x + an x + a(1 − δ)n
for
v
a(1 − δ) . r
(5.3.7)
By virtue of Theorem 5.2.1, # $r1 −hπ1 (y1 )/2 P(Sn − an x; B(v)) c nV (y1 ) , where π1 (y1 ) = nw1 (y1 ) = ny1α−2 L(y1 ) = o min{1, n/x} for y εx and x → ∞. However, r1 r 1 + f (n/x) , where, owing to (5.3.7), f (t) :=
atδ 1 + at −1= c min{1, t}. 1 + at(1 − δ) 1 + at(1 − δ)
Hence, for all large enough x, # $r hπ1 (y1 ) r, P(Sn − an x; B(v)) c nV (y + vn) . 2 This leads to the bounds n P(Sj − aj x; B(v)) P (a, v) r1 −
j=1 n
c
# $r+1 r j r V r (y + vj) c1 min{z(y), n} V (y).
(5.3.8)
j=1
The last inequality uses Lemma 5.3.3. Theorem 5.3.1 is proved. Proof of Theorem 5.3.2. For n z, the assertion of the theorem follows from Corollary 5.2.2. Indeed, in this case l(x) → 0, αx and therefore the conditions of the second assertion of Corollary 5.2.2(i) are satisfied. Hence π1 l(x) zl2 (x)x−2 ∼
P(S n (a) x) P(S n x) cnV (x). For n z, we will make use of Corollary 7.5.4 below (see also [275]), which states that ∞ 1 V (x + t) dt (1 + o(1)). P(S(a) x) = a 0
Therefore (see the proof of Lemma 5.3.3) P(S n (a) x) P(S(a) x) czV (x).
The theorem is proved. The assertion of Theorem 5.3.2 can also be obtained as a consequence of the results of [178], where the relation (5.1.17) was established for so-called strongly subexponential distributions. Sufficient conditions for a distribution to belong to the class of strongly subexponential distributions will be met for any V ∈ Se. As in § 5.2, lower bounds will follow from those of § 4.3.
5.4 Large deviations of the sums Sn In this section we will obtain the first-order approximation for P(Sn x) together with a more detailed description of the asymptotics of this probability.
5.4.1 Preliminary considerations We will introduce an additional smoothness condition on the r.v.f. l(t) = tα L(t) from (5.1.1), which will be needed for deriving asymptotic expansions. This condition is similar to condition [D(2,q) ] from § 4.4. [D] The s.v.f. L(t) is continuously differentiable for t t0 and some t0 > 0, and, as t → ∞, one has L (t) = o(L(t)/t) and, for Δ = o(t), l(t + Δ) − l(t) = l (t)Δ +
α(α − 1) l(t) 2 Δ 1 + o(1) + o q(t) , 2 2 t
(5.4.1)
where q(t) 1 is a non-increasing non-negative function. Condition [D] with q(t) ≡ 0 will be met provided there exists l (t) = α(α − 1)
l(t) 1 + o(1) . 2 t
(5.4.2)
In this case, t+Δ
v
l (t) +
l(t + Δ) − l(t) = t
l (u)du dv
t
α(α − 1) l(t) 2 = l (t)Δ + 1 + o(1) . Δ 2 t2
(5.4.3)
If a function l1(t) has a second derivative with the property (5.4.2) and if

    l(t) = l1(t) + o(q(t)),    (5.4.4)

then condition (5.4.1) will be satisfied, but with l(t) replaced on its right-hand side by l1(t) and with the function q(t) from (5.4.4). In fact, it is this more general form of condition [D] that is actually required for refining the asymptotics of P(S_n ≥ x). However, the form of condition [D] presented above appears preferable, as it simplifies the exposition.
Condition [D] with q(t) ≡ 0 (see (5.4.2)) could be referred to as second-order differentiability at infinity. In the lattice (arithmetic) case, t and Δ are assumed to be integer-valued, whereas condition [D] takes the following form:

[D] The s.v.f. L(t) is such that L(t + 1) − L(t) = o(L(t)/t) as t → ∞ and, for Δ = o(t), Δ ≥ 2, one has

    l(t + Δ) − l(t) = [l(t + 1) − l(t)] Δ + (α(α − 1)/2)(l(t)/t²) Δ² (1 + o(1)) + o(q(t)),

where q(t) ≤ 1 is a non-increasing non-negative function.

All sufficiently smooth r.v.f.'s l(t) = t^α L(t) will satisfy (5.4.2), and hence condition [D] as well. For example, for L(t) = (ln t)^γ we have

    L′(t) = (γ/t)(ln t)^{γ−1} = o(L(t)/t),   l″(t) = α(α − 1) t^{α−2}(ln t)^γ (1 + o(1)).

Now consider conditions of another type. They will ensure that approximations of the form (5.1.11), (5.1.15) hold true. Had the relations (5.1.9) or (5.1.10) been satisfied, one would not need these new conditions. But the property V ∈ Se (see (5.1.1)–(5.1.3)) does not imply (5.1.9), and even less does it imply the excessive (concerning left tails) condition (5.1.10). There is little doubt, however, that (5.1.11) holds (possibly in a somewhat smaller deviation zone; see below for more details) when only conditions (5.1.1)–(5.1.3) are satisfied. Since such a result, to the best of our knowledge, has not been obtained yet, we can assume, along with (5.1.1)–(5.1.3), only what we need, namely, that the Cramér approximation (5.1.11) holds true. (For an approximation of the form (5.1.16), we use the term maximum jump approximation.)

In order to state the main assertions, we will discuss in some detail the characteristics of the Cramér deviation zone, the so-called intermediate zone and the zone where the maximum jump approximation holds true (the 'extreme zone'). First consider the boundary of the Cramér deviation zone.
Recall that we defined it as the value x = σ1 = σ1(n) for which the logarithmic Cramér approximation

    ln P(S_n ≥ x) ∼ −x²/(2n)    (5.4.5)

is of the same order of magnitude as the logarithmic maximum jump approximation
    ln P(S_n ≥ x) ∼ ln nV(x).

In other words, we consider the equation

    x²/(2n) = − ln nV(x),

or, equivalently from the viewpoint of the asymptotics of σ1(n), the equation

    x²/(2n) = l(x).
(5.4.6)
It will, however, be convenient for us to amend this equation further for x = σ1, by removing the coefficient 1/2 from its left-hand side, which results in (5.2.2). Using the function w1(t) = t^{−2} l(t), which we introduced in (5.2.4), we obtain a solution to the equation (5.2.2) in the form
σ1 (n) = w1
(1/n),
(5.4.7)
(−1)
where w1^{(−1)} is the function inverse to w1. For simplicity, we assume that the functions l(t) and w1(t) are continuous and monotone for t ≥ t0 and some t0 > 0, so that one could write σ1(n) = w1^{(−1)}(1/n) for n > 1/w1(t0) (the transition to the general case merely complicates the notation somewhat). Clearly, the solution to (5.4.6) is equal to σ1(2n); it differs from σ1(n) only by a constant factor, which is close to 2^{1/(2−α)} (see also (5.4.9) below). If the function L(t) has the property

    L(t L^{1/(2−α)}(t)) ∼ L(t)  as t → ∞    (5.4.8)

(for example, all the powers (ln t)^γ, γ > 0, and all functions varying even more slowly will possess this property) then it is not hard to see that we will have (5.2.7) or, equivalently,

    σ1(n) = n^{1/(2−α)} L1(n),   L1(n) ∼ L^{1/(2−α)}(n^{1/(2−α)}).    (5.4.9)

We will assign deviations x = s1 σ1(n) with s1 ≤ 1 to the Cramér deviation zone. When s1 → ∞, we will assign them either to the intermediate or to the extreme zone. One could equivalently characterize deviations using the quantity π1 = π1(x) = n w1(x).
(5.4.10)
It is obvious that if s1 = x/σ1(n) is fixed then

    π1(x) = n w1(s1 σ1(n)) ∼ s1^{α−2},
(5.4.11)
so that the deviations x belong to the Cramér zone if π1(x) > 1.

Now observe that, under the conditions of the present chapter, for any fixed ε > 0 one has 0 < t* < εx for all large enough x, so that t* = o(x). Indeed, on the one hand, g2′(t) < 0 for t ≤ 0. On the other hand, for large enough x,

    g2′(εx) = −l′(x(1 − ε)) + εx/n = −α x^{α−1}(1 − ε)^{α−1} L(x)(1 + o(1)) + εx/n
            = (x/n)[ε − α(1 − ε)^{α−1} π1 (1 + o(1))] > 0    (5.4.21)

by virtue of (5.4.20), which implies that t* = o(x). In what follows, along with l(x), an important role will be played by the function

    z = z(x) := 1/l′(x) ∼ x/(α l(x)) = x^{1−α}/(α L(x)).    (5.4.22)

In terms of z, the property (5.4.20) can be rewritten as n ≪ xz.
(5.4.23)
To find an approximation to t*, observe that

    (d/dt) l(x − t) |_{t = t*} = −l′(x)(1 + o(1)),

and hence

    g2′(t*) = 0 = −l′(x)(1 + o(1)) +
t∗ . n
From here we find that

    t* = n l′(x)(1 + o(1)) =
n (1 + o(1)) = o(n). z
(5.4.24)
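The approximation t* = n l′(x)(1 + o(1)) in (5.4.24) can be checked by direct minimisation of g2(t) = l(x − t) + t²/(2n); a sketch for l(t) = t^α with hypothetical values of x and n:

```python
alpha = 0.5
x, n = 1.0e6, 2000.0

def g2(t):
    # g2(t) = l(x - t) + t^2 / (2n) with l(t) = t**alpha
    return (x - t) ** alpha + t * t / (2.0 * n)

ts = [0.001 * i for i in range(1, 500_000)]   # grid search on (0, 500)
tstar = min(ts, key=g2)
print(tstar, n * alpha * x ** (alpha - 1.0))  # t* vs n * l'(x) = 1.0
```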
Now note that if t = o(n) then for the function Λκ from (5.1.12) we have nΛκ
t n
t2 = (1 + o(1)), 2n
Λκ
t n
=
t (1 + o(1)). n
(5.4.25)
Therefore, all the above discussion (and, in particular, the relation (5.4.24)) will remain true if, in the definition (5.4.19) of g2(t), we substitute nΛκ(t/n) for the function t²/2n, i.e. if by t* we understand the point where the function

    gκ(t) = gκ(t, x, n) := l(x − t) + n Λκ(t/n)
t n
(5.4.26)
attains its minimum value. Put

    M = M(x, n) := min_t gκ(t, x, n).
(5.4.27)
Clearly M ≤ l(x) and, moreover, owing to (5.4.24) and (5.4.25), one has

    M = l(x − t*) + n Λκ(t*/n)
t∗ n
= l(x) −
n (l (x))2 (1 + o(1)) 2
    = l(x) − (α²/2)(l(x)/x)² n (1 + o(1)) = l(x) − (α²/2) n w2(x)(1 + o(1))
    = l(x) [1 − (α²/2) n w1(x)(1 + o(1))].    (5.4.28)

Hence, if π2 = n w2(x) → 0 (i.e. the deviations belong to the maximum jump approximation zone) then

    M = l(x) + o(1).
(5.4.29)
If the deviations are in the intermediate zone (i.e. π1 = nw1 (x) → 0) then M = l(x)(1 + o(1)).
(5.4.30)
If condition [D] holds with q(t) ≡ 0 (or (5.4.2) holds true) and κ = 2, then we can find more precise expressions for t* and M. In this case, for t = o(x) and π1(x) = n w1(x) → 0, we have

    g2′(t) = −l′(x − t) +
$ t t # = −l (x) + 1 + α(α − 1)π1 (x)(1 + o(1)) . n n
So the solution t* to the equation g2′(t) = 0 will have the form

    t* = n l′(x) [1 − α(α − 1) π1(x)(1 + o(1))].
From this we find that

    M = l(x − t*) + (t*)²/(2n)
      = l(x) − t* l′(x) + (α(α − 1)/2)(l(x)/x²)(t*)²(1 + o(1)) + (t*)²/(2n)
      = l(x) − n(l′(x))² [1 − α(α − 1) π1(x)(1 + o(1))]
        + (n(l′(x))²/2)(1 + o(1)) + (α(α − 1)/2) π1(x) n(l′(x))² (1 + o(1)).

Putting, for brevity,

    π2*(x) := n (l′(x))² ∼ α² π2(x),
one obtains

    M = l(x) − π2*(x)/2 + (α(α − 1)/2) π1(x) π2*(x)(1 + o(1))
      = l(x) − π2*(x)/2 + (α³(α − 1)/2) π1(x) π2(x)(1 + o(1)).    (5.4.31)

The case κ = 3 can be considered in a similar way, but the resulting expressions will be somewhat different.
5.4.2 Limit theorems for large deviations of S_n

Now we can state the main assertion.

Theorem 5.4.1. Let the conditions (5.1.7) and [ · , =] with V ∈ Se be satisfied. Then the following assertions hold true.

(i) If condition [D] holds with q(t) ≡ 1 then in the intermediate and extreme deviation zones one has

    P(S_n ≥ x) = n e^{−M} (1 + ε(x, n)),
(5.4.32)
where ε(x, n) = o(1) uniformly (see Remark 5.4.4 below) in the range of values x, n such that

    n → ∞,   s1 = x/σ1(n) → ∞.

The functions M = M(x, n), σ1(n) and σ2(n) are defined and described in (5.4.27)–(5.4.31), (5.4.7), (5.4.17). The condition s1 → ∞ can be replaced by π1 → 0.

(ii) Let n → ∞ and s2 = x/σ2(n) → ∞ (or, equivalently, π2 = n w2(x) → 0). Then, uniformly in x and n from that range,

    P(S_n ≥ x) = nV(x)(1 + o(1)).
(5.4.33)
(iii) If condition [D] holds and s2 = x/σ2(n) → ∞ (or, equivalently, n = o(z²), where z = 1/l′(x) ∼ x/(α l(x))), then

    P(S_n ≥ x) = n V(x) [1 + ((n − 1)/(2z²) + (α − 1)(n − 1)/(2xz))(1 + o(1))
                 + O((√n/z)³) + o(q(x))],    (5.4.34)

where the remainder terms are uniform in the range of x and n specified in part (ii).
2
π2 (1+o(1))/2
,
and hence in the case π2 → ∞ (i.e in the intermediate deviation zone) we have e−M V (x), so that the asymptotics (5.4.32) and nV (x) are quite different. It follows from (5.4.30), however, that the ‘crude asymptotics’ (i.e. the asymptotics of ln P(Sn x)) will coincide in the intermediate and extreme zones (when π1 → 0, see (5.4.28), (5.4.30)); then ln P(Sn x) = (1 + o(1)) ln nV (x) = (1 + o(1)) ln V (x). Remark 5.4.6. When b > 3 one can obtain more complete asymptotic expansions in part (ii) of the theorem. Using Theorem 5.4.1, one can find, for example, the asymptotics of the distribution tail of χ2n := ζ12 + · · · + ζn2 , where the i.i.d. r.v.’s ζi satisfy Cram´er’s condition (see (5.1.6)). To prove the theorem, we will need the following auxiliary assertion. Lemma 5.4.7. Let the conditions (5.1.7) and [ · , =] with V ∈ Se be satisfied,
and let an r.v. ξ be independent of S_n. Then, for n → ∞ and x ≤ σ+,

    P(ξ + S_n ≥ x, σ− ≤ S_n < σ+) = ((1 + o(1))/√(2πn)) ×
σ+
V (x − t)e−nΛκ (t/n) dt,
σ−
(5.4.35)

where σ± were defined in (5.4.12), (5.4.15).

Proof. First note that the function

    Qn(x) := [1 − Φ(x/√n)] e^{−nΛκ(x/n) + x²/2n}

on the right-hand side of (5.1.11) is differentiable and that

    qn(x) := −Qn′(x) =
1 e−nΛκ (x/n) (1 + o(1)) 2πn
as n → ∞, uniformly in [σ−, σ+]. Moreover, if points v1 < v2 from [σ−, σ+] are such that the integral ∫_{v1}^{v2} qn(t) dt
is comparable with Qn (v1 ) (i.e. has the same order of magnitude) then we obtain from (5.1.11) that
v2
P Sn ∈ [v1 , v2 ) =
qn (t) dt (1 + o(1)). v1
For a given Δ > 0, partition the segment [σ− , σ+ ] into semi-open intervals of the form Δk := [uk−1 , uk ), where the uk are defined as solutions to the equations 1 Qn (uk ) = e−Δk , 2 uk 1 Δk Φ √ = e , 2 n
k 0, k < 0,
and assume for simplicity that N+ := −
1 ln 2Qn (σ+ ), Δ
N− :=
1 σ− ln 2Φ √ Δ n
are integers. Since, for k > 0, the integral

    ∫_{Δk} qn(t) dt = Qn(u_{k−1}) − Qn(u_k) = Qn(u_{k−1})(1 − e^{−Δ}),   0 < k ≤ N+,
is comparable with Qn (uk−1 ) it follows from the previous argument that P(Sn ∈ Δk ) = qn (t) dt (1 + o(1)). (5.4.36) Δk
It is clear that the representation (5.4.36) remains valid for negative k > N− as well. Further, note that for large k one has |Δk | := uk − uk−1 = o(uk ). This follows √ from the fact that, for uk n, √ u2 1 ln 2Qn (uk ) ∼ k , uk ∼ 2nΔk. Δ 2nΔ As |Δk | = o(uk ), we obtain (slightly decreasing the value of σ+ if needed) that (5.4.36) holds for the ‘boundary’ interval ΔN+ +1 = [uN+ , uN+ +1 ) as well. The same remark applies to ΔN− . Now we can proceed to prove (5.4.35). Rewrite the probability on the left-hand side of (5.4.35) as k=−
σ+ V (x − t)P(Sn ∈ dt) =
p :=
N+
V (x − t)P(Sn ∈ dt)
k=N− +1Δ
σ−
k
N+
V (x − uk ) P(Sn ∈ Δk )
k=N− +1
N+
=
V (x − uk )
k=N− +1
qn (v) dv (1 + o(1)),
Δk
where thelast equality holds by (5.4.36). By the definition of the quantities uk , the integrals Δk over the intervals Δk of the function qn (v) will have the following properties: = eΔ for k 1, = , = e−Δ for k < 0. Δk
Δk+1
Δ0
Δ1
Δk
Δk+1
Therefore, continuing the above chain of inequalities, we obtain p eΔ
N+
V (x − uk )
k=N− +1
eΔ
N+ +1
qn (v) dv (1 + o(1))
Δk+1
V (x − v)qn (v) dv (1 + o(1))
k=N− +2Δ
k
uN+ +1
eΔ
V (x − t)qn (t) dt (1 + o(1)). uN−
260
Random walks with semiexponential jump distributions σ +|ΔN +1 | + On the right-hand side we have the integral σ−+ V (x−t)qn (t) dt, where, as we observed above, |ΔN+ +1 | = o(uN+ ) = o(σ+ ). The asymptotics of exactly the same integral but for limits σ− and σ+ , will be studied below (see (5.4.49)– (5.4.54)). From these computations it will follow in an obvious way that for deviations x σ+ the asymptotics in question are determined by an integral over a subinterval of (σ− , σ+ ) that is ‘far’ from the endpoints σ± and so will not depend on small relative variations in the upper integration limit. This means that, as n → ∞ and x σ+ , σ+ +|ΔN+ |
σ+ ∼
σ−
and
σ+
Δ
p e (1 + o(1))
σ−
.
σ−
In exactly the same way one finds that −Δ
pe
σ+ (1 + o(1))
.
σ−
Since Δ > 0 is arbitrary and p does not depend on Δ, it follows that σ+ p = (1 + o(1))
.
σ−
The lemma is proved. Proof of Theorem 5.4.1. (i) Let Gn := {Sn x},
Bj := {ξj < y},
B=
n *
Bj .
j=1
Then P(Sn x) = P(Gn B) + P(Gn B), where P = P(Gn B) was bounded in Theorem 5.2.1: # $r−π1 (y)h/2 x , r = , π1 (y) = nw1 (y). P c nV (y) y
(5.4.37)
(5.4.38)
Since by assumption π1 = π1 (x) → 0, we see that for y = δx, where δ ∈ (0, 1) is fixed, one has π1 (y) → 0 and V (y)r−π1 (y)h/2 = V (y)r+o(1) = exp −l(δx)(1/δ + o(1)) = exp −l(x)(δ α−1 + o(1)) V (x)1+γ (5.4.39) √ for any fixed γ δ α−1 − 1 > 0 and all large enough x. For x n the same bound will clearly hold for P as well.
5.4 Large deviations of the sums Sn
261
For the second term on the right-hand side of (5.4.37), we have n
P(Gn B j ) P(Gn B)
j=1
n
P(Gn B j ) −
j=1
n
P(Gn B j ) −
j=1
so that for y = δx, δ ∈ (0, 1), x P(Gn B) =
n
P(Gn B i B j )
i<jn
$2 n(n − 1) # P(ξ1 y) , 2
(5.4.40)
√ n the following holds:
P(Gn B j ) + O (nV (y))2 .
j=1
Therefore P(Gn ) =
n
P(Gn B j ) + O V 1+γ (x)
(5.4.41)
j=1
for some γ ∈ (0, min{1, γ }). Thus the main problem is now to evaluate the terms P(Gn B j ) = P(Gn B n ) = P(Sn−1 + ξn x, ξn y) = P(ξn y, Sn−1 x − y) + P(Sn−1 < x − y, Sn−1 + ξn x).
(5.4.42)
Here the first term in the sum is equal to P(1) := V (y)P(Sn−1 x − y). From Corollary 5.2.2(i) we obtain that, for y = δx,
1 (5.4.43) P(1) cnV (δx)V x(1 − δ) 1 − π1 (x(1 − δ))h . 2 The following evaluations are insensitive to the asymptotics of L(x), so, for simplicity, we will put for the present L(x) ≡ 1. Then we obtain from (5.4.43) that
,
-α " 1 α P(1) cn exp −(δx) − x(1 − δ) 1 − π1 (x(1 − δ))h 2 α α α = cn exp −x [δ + (1 − δ) + O(π1 )] (5.4.44) = cn exp −xα [1 + γ(δ) + O(π1 )] , where the function γ(δ) := δ α + (1 − δ)α − 1 is concave on [0, 1] and symmetric with respect to the point δ = 1/2, γ(0) = 0, γ(1/2) = 21−α − 1 and γ (0) = ∞. Hence γ(δ) > 0 for δ ∈ (0, 1), and, for any δ ∈ (0, 1/2), one has γ(δ) > 2δ(21−α − 1). In the general case, for an arbitrary s.v.f. L, we will have l(x) instead of xα on the right-hand side of (5.4.44), and the quantity γ(δ) will acquire a factor (1 + o(1)).
262
Random walks with semiexponential jump distributions
As a result, the right-hand side of (5.4.44) admits an upper bound cn(V (x))1+γ , √ γ > 2γ(δ)/3, which yields for x n nP(1) (V (x))1+γ ,
γ>
γ(δ) > 0. 2
(5.4.45)
Now consider the second term on the right-hand side of (5.4.42): P(2) : = P(Sn−1 < x − y, Sn−1 + ξn x) $ # (5.4.46) = E V (x − Sn−1 ); Sn−1 < x − y = E1 + E2 + E3 , √ where, for σ− = −c n ln n (see (5.4.15) and also (5.4.12)), $ # E1 := E V (x − Sn−1 ); Sn−1 < σ− , # $ E2 := E V (x − Sn−1 ); σ− < Sn−1 σ+ , (5.4.47) # $ E3 := E V (x − Sn−1 ); σ+ < Sn−1 x(1 − δ) . √ We will begin by bounding E1 . Since |σ− | n, by the central limit theorem we have P(Sn < σ− ) → 0 and therefore E1 V (x − σ− ) P(Sn < σ− ) = o(V (x)).
(5.4.48)
Next consider E2 . By Lemma 5.4.7 (to simplify the notation, we will replace n−1 by n; this will change nothing in the asymptotics of E2 ), 1 + o(1) E2 = √ 2πn
σ+
−nΛκ (t/n)
V (x − t) e
1 + o(1) dt = √ 2πn
σ−
σ+
e−gκ (t) dt. (5.4.49)
σ−
We have already discussed the properties of the function gκ (t) = l(x − t) + nΛκ (t/n) (see pp. 254–256). For t = o(x), t = o(n) we have, by virtue of condition [D], that l(x − t) = l(x − t∗ ) − (t − t∗ )l (x − t∗ ) +
α(α − 1) l(x − t∗ ) (t − t∗ )2 (1 + o(1)) + o(1). 2 (x − t∗ )2
Here the last term, o(1), on the right-hand side could be omitted (i.e. one can assume that [D] holds with q ≡ 0), since eo(1) ∼ 1 and hence that term does not affect the first-order asymptotics dealt with in part (i) of the theorem. Further, (t∗ )2 t∗ (t − t∗ ) (t − t∗ )2 t2 = + + . 2n 2n n 2n It follows that, when π1 → 0 (i.e. when x−2 l(x) = o(1/n)), gκ (t) = gκ (t∗ ) +
(t − t∗ )2 (1 + o(1)). n
(5.4.50)
5.4 Large deviations of the sums Sn
263 √ Now we show that the minimum point t∗ , together with its n-neighbourhood, lies inside the integration interval [σ− , σ+ ]. If s1 → ∞ so slowly that L(s1 σ1 (n)) ∼ L(σ1 (n)) then, by virtue of (5.4.24), t∗ ∼ αn
l(x) = αnxw1 (x) = αns1 σ1 (n)w1 (s1 σ1 (n)) x ∼ αsα−1 σ1 (n) = o(σ1 (n)). 1
(5.4.51)
If s1 → ∞ at a faster rate then it is even more obvious that t∗ = o(σ1 (n))
(5.4.52)
holds. Moreover, since σ+ ∼ σ1 we have t∗ = o(σ+ (n)). As
(5.4.53)
√ n = o(σ+ (n)), along with (5.4.53) the following also holds: √ t∗ + c n σ+ .
√ Finally, it is evident that t∗ > 0 and n |σ− |. The above, together with (5.4.49) and (5.4.50), enables one to conclude that ∗
E2 = e−gκ (t ) (1 + o(1)) = e−M (1 + o(1)). It remains to bound the quantity # $ E3 = E V (x − Sn−1 ); σ+ Sn−1 < x(1 − δ) x(1−δ)
V (x − u) dP(Sn−1 u)
=− σ+
x(1−δ) = − V (x − u) P(Sn−1 u) σ+
x(1−δ)
P(Sn−1 > u)V (x − u) l (x − u) du.
+ σ+
(5.4.54)
264
Random walks with semiexponential jump distributions
By virtue of Corollary 5.2.2 and the fact that l (x) → 0 as x → ∞, we have E3 V (x − σ+ ) P(Sn−1 σ+ ) π1 (u)h V (x − u)l (x − u)cnV u 1 − 2
x(1−δ)
+ σ+
du
π1 (σ+ )h cnV (x − σ+ )V σ+ 1 − 2 ⎡ x(1−δ) π1 (u)h ⎢ + cn ⎣ V (x − u)V u 1 − 2
⎤ ⎥ du⎦ o(1),
(5.4.55)
σ+
where π1 (u) π1 (σ+ ) ∼ π1 (σ1 ) = 1. We have already estimated products of the form π1 (u)h V (x − u)V u 1 − , 2 which are present in (5.4.55). This was done in (5.4.43) and (5.4.44), but for the case where u is comparable with x whereas in (5.4.55) one could have u = o(x). In the latter case, for h 4/3 and u σ+ , π1 (u) h V (x − u)V u 1 − 2
" π1 (u) h = exp −l(x − u) − l u 1 − 2
" αul(x) h exp −l(x) + (1 + o(1)) − l u 1 − (1 + o(1)) x 2
" αul(x) exp −l(x) + (1 + o(1)) − 3−α l(u)(1 + o(1)) , x where l(x)/x = o(l(u)/u) for u = o(x). Therefore, for such a u → ∞ one has π1 (u) h V (x − u)V u 1 − V (x)V (u)γ1 2 for some fixed γ1 > 0. From here and (5.4.55) it follows that, for large enough n, E3 V (x)V (σ1 )γ
(5.4.56)
for some γ ∈ (0, 1). Collecting together the relations (5.4.48), (5.4.54) and (5.4.56) and noticing that V (x) e−M , we obtain P(2) = e−M (1 + o(1)). This, together with (5.4.41) and (5.4.45), proves the first assertion of the theorem. (ii) In this case, the bound for P(1) remains true, and the only difference will
5.4 Large deviations of the sums Sn
265
be in how we evaluate P(2) . Instead of (5.4.46), (5.4.47), one now needs to use another partition of the integration range Sn−1 < x − y. To simplify the exposition, we first assume that the function l(t) is differentiable and that l (t) ∼ αl(t)/t,
t → ∞.
(5.4.57)
Further, for z = 1/l (x) ∼ x/αl(x), set P(2) = E1 + E2 + E3 , where # $ E1 := E V (x − Sn−1 ); Sn−1 < −z , # $ E2 := E V (x − Sn−1 ); |Sn−1 | z , # $ E3 := E V (x − Sn−1 ); z < Sn−1 < x(1 − δ) . The quantity E1 can be bounded using Chebyshev’s inequality:
E1 V (x + z)P(Sn−1 < −z) = V (x) o nb/2 z −b .
(5.4.58)
(5.4.59)
Evaluating E2 is quite trivial: since V (x + v) = V (x) (1 + o(1)) for v = o(z), √ 0 < c1 < V (x+v)/V (x) < c2 < ∞ for |v| < z and z n by the assumptions of the theorem, we have from the central limit theorem that E2 = V (x)(1 + o(1)).
(5.4.60)
In order to bound E3 we need to distinguish between the following two cases: z σ1 and z > σ1 . In the second case, by virtue of Corollary 5.2.2(i) we find, cf. (5.4.55), that # $ E3 = E V (x − Sn−1 ); z Sn < x(1 − δ) V (x − z) P(Sn−1 z) π1 (u)h V (x − u)l (x − u)cnV u 1 − 2
x(1−δ)
+
du,
(5.4.61)
z
where π1 (u)h/2 h/2 for u z σ1 . Repeating the argument that follows (5.4.55), we find that, for z u x(1 − δ), π1 (u)h V (x − u)V u 1 − V (x)V (y)γ1 , γ1 > 0, 2 and therefore E3 V (x)V (z)γ ,
0 < γ < 1.
If z σ1 then we split the integral E3 into two parts, # $ E3,1 := E V (x − Sn−1 ); z Sn−1 < σ1
(5.4.62)
266 and
Random walks with semiexponential jump distributions # $ E3,2 := E V (x − Sn−1 ); σ1 Sn−1 < x(1 − δ) .
(5.4.63)
The integral E3,2 coincides with E3 in the preceding considerations and therefore admits an upper bound V (x)V (σ1 )γ V (x)V (z)γ , γ > 0. For E3,1 we obtain in a way similar to that used in our previous analysis, that E3,1
# $ = E V (x − Sn−1 ); Sn−1 ∈ [z, σ1 ) = −
σ1 V (x − u) dP(Sn−1 u) z
σ1 V (x − z) P(Sn−1 z) +
P(Sn−1 u)V (x − u)l (x − u) du
z
V (x − z)e−z
2
σ1 /2nh
+
2
V (x − u)l (x − u)e−u
/2nh
du,
(5.4.64)
z
where the last inequality follows from Corollary 5.2.2(ii). For u σ1 = o(x) we have α l(x) u l(x − u) = l(x) − 1 + o(1) . x Observe that, for u z ∼ x/αl(x) and π2 (x) → 0, one has 2 l(x) nl(x) cn = cπ2 (x) → 0. xu x Hence for u z and large enough x we obtain the inequality 2
V (x − u)eu so that E3,1 V (x)e−z
2
/2nh
V (x)e−u
2
/3nh
,
/3nh
and therefore 2 E3 V (x) V γ (z) + e−z /3nh .
(5.4.65)
Since z 2 /n ∼ 1/α2 π2 (x) → ∞, collecting up the above bounds leads to the relation (5.4.33). In the general case, when (5.4.57) does not necessarily hold, one should put z := x/αl(x) and then replace the integrals containing l (x − u) du by sums of integrals with respect to du l(x − u) over intervals of length Δ0 z for a small fixed Δ0 > 0; on these intervals the increments of the function l behave in the same way as when (5.4.57) holds true (see (5.1.3)). (iii) It remains to establish the asymptotic expansion (5.4.34). To this end, one needs to do a more detailed analysis of the integral E2 . All the other bounds remain the same. √ Recall that z n in the deviation zone under consideration. We have # $ (5.4.66) E2 = E e−l(x−Sn−1 ) ; |Sn−1 | z ,
5.4 Large deviations of the sums Sn
267
where, for S = o(x), owing to condition [D] (see (5.4.1)) we have l(x − S) = l(x) − l (x)S +
α(α − 1) l(x) 2 S (1 + o(1)) + o q(x) 2 2 x
or, equivalently, l(x) − l(x − S) =
S (1 − α) S 2 + (1 + o(1)) + o q(x) . z 2 xz
(5.4.67)
Clearly, for |S| z, |l(x − S) − l(x)| c < ∞, and the Taylor expansion of the function el(x)−l(x−S) in the powers of the difference (l(x) − l(x − S)) yields 3 S2 |S| S (1 − α) S 2 , el(x)−l(x−S) = 1 + + (1 + o(1)) + o q(x) + 2 + O z 2 xz 2z z3 (5.4.68) where the remainders o(1) and O(·) are uniform in |S| z. Substituting the sum Sn−1 for S in (5.4.68) and using (5.4.67) and the fact that b = 1/(1 − α) + 2 3 and therefore E|ξ|3 < ∞ and E|Sn |3 = O(n3/2 ), we obtain
% 2 (1 − α) Sn−1 Sn−1 + (1 + o(1)) E2 = V (x) E 1 + z 2 xz & " 2 Sn−1 3/2 −3 + o q(x) + ; |Sn−1 | z + O(n z ) . 2z 2 (5.4.69) Next note that, for k b,
% & # $ |Sn−1 |b ; |S | > z Tk := E |Sn−1 |k ; |Sn−1 | > z = E n−1 |Sn−1 |b−k z k−b E|Sn−1 |b = O z k−b nb/2 .
(5.4.70)
2 Returning to (5.4.69) and observing that ESn−1 = 0, ESn−1 = n − 1 and −b 3/2 −3 nz = o(n z ), we have % 3/2 & n − 1 (1 − α) (n − 1) n + E2 = V (x) 1 + . (1 + o(1)) + o q(x) + O 2 2z 2 xz z3
Now taking into account the bounds (5.4.59) and (5.4.65) for E1 and E3 respectively, we obtain (5.4.34). The uniformity of the bounds claimed in the statement of Theorem 5.4.1 can be verified in an obvious way since, for all the terms o(1) and O(·), one can give explicit bounds in the form of functions of x and n. The theorem is proved.
268
Random walks with semiexponential jump distributions 5.5 Large deviations of the maxima S n
As noted, the asymptotics of P(S n x) in the zone of moderately large deviations x σ+ (n) ∼ σ1 (n) (σ1 is defined in (5.4.7)) were studied in [2], where it was established that under condition (5.1.10) one has the representation (5.1.15). In the present section, as in § 4, we will deal with deviations x σ1 (n). We observe that condition (5.1.10) is somewhat excessive for (5.1.15) (cf. the discussion in the previous section), and so we will simply assume that property [CA] (see p. 253) is satisfied. Recall that the expected result here is that condition (5.1.9) (or [ · , 2α. This means that the second term in the exponent in (5.6.32) will dominate, and so the value of (5.6.32), for large enough x, will not exceed 2
V (xj ) e−zj /3jh . Thus, for j xθ we have an inequality similar to (5.4.65):
2 γ > 0. E3,j V (xj ) V γ (zj ) + e−zj /3jh ,
(5.6.33)
If j > xθ then for E3,j,1 we will use the obvious bound E3,j,1 V (xj − σ1 (j)).
(5.6.34)
To estimate E3,j,2 , one should use the second inequality in Lemma 5.6.6, which implies that, for v σ1 (j), P(Zj,n (a) v) Pj (v) V ((1 − Δ)v),
Δ = o(1) as π1 (j) → 0.
Using this inequality, one can bound E3,j,2 in exactly the same way as in our estimation of the integral E3 in (5.4.55) and (5.4.56), which yields the bound E3,j,2 V (xj )V γ (σ1 (j)) for some γ > 0. Therefore, by virtue of (5.6.34) we can use the crude inequality E3,j V (xj − σ1 (j)).
(5.6.35)
Now we evaluate E2,1 in (5.6.23). Again making use of expansions of the form (5.4.3), (5.4.67), (5.4.68), we obtain, cf. (5.4.69): % 2 (a) Zj,n (a) Zj,n + E2,j = V (xj )E 1 + zj 2zj2 & 2 (1 − α)Zj,n (a) + (1 + o(1)) + o q(xj ) ; |Zj,n (a)| zj 2xj zj % EZj,n (a) = V (xj ) 1 + zj & 2 EZj,n (a) (1 − α) zj 1 + (5.6.36) + (1 + o(1)) + R n,j , 2zj2 xj
286
Random walks with semiexponential jump distributions
where (0) (1) (2) (3) |Rn,j | Rj + Rj + Rj + Rj + o q(xj ) , (k)
Rj
(3)
Rj
$ c # E |Zj,n (a)|k ; |Zj,n (a)| > zj , zjk $ # zj−3 E |Zj,n (a)|3 ; |Zj,n (a)| zj .
k = 0, 1, 2,
(5.6.37)
Since the r.v. ζ has finite moments of all orders we see that, owing to the inequality (5.6.24), the moments of Zj,n (a) over the set {|Zj,n (a)| > zj } admit the same bounds as the moments of Sj over the set {|Sj | > zj }. Hence one can bound the (k) quantities Rj , k = 0, 1, 2, in exactly the same way as Tk in (5.4.70): (k)
Rj
j 3/2 zj−3 ,
k = 0, 1, 2;
(5.6.38)
for k = 3 we have the same bound, (3)
zj−3 E|Zj,n (a)|3 cj 3/2 zj−3 . (5.6.39) Hence the total remainder term in the sum j E2,j , by virtue of (5.6.36)–(5.6.39) and Lemma 5.6.4 (it is obvious that (5.6.5) and the first relation in (5.6.9) remain true for fractional k 0 as well), will not exceed 3 n 4 n n j 3/2 V (xj )|Rn,j | c V (xj ) + o q(xj )V (xj ) zj3 j=1 j=1 j=1 % 2 & m m5/2 c1 V (x) + 3 + o mq(x) z3 z % 5/2 & m + o mq(x) , (5.6.40) c2 V (x) z3 Rj
m = min{z, n}. To obtain the last two relations in (5.6.40), we used the monotonicity of q(t) and the inequalities n
q(xj )V (xj ) q(x)
j=1
n
V (xj ) cmq(x)V (x).
j=1
Since, furthermore, 2 2 EZj,n (a) = ES n−j (a), EZj,n (a) = j − 1 + ES n−j (a), n the sum j=1 E2,j has exactly the same form as the right-hand side of (5.6.2). To complete the proof of the theorem, it suffices to show that the contributions of all the remaining terms in the representation for P(Gn ), which we obtain from (5.6.18), (5.6.19), (5.6.21) and (5.6.23), will be ‘absorbed’ by the remainder terms on the right-hand side of (5.6.2). For the sum nj=1 P1,j , this follows from (5.6.20). That the remainder term O(V 1+γ (x)) from (5.6.21) is negligible is obvious. Further, comparison of the
5.7 Large deviations of S n (−a) when a > 0
287
bounds (5.6.26) and (5.6.38) shows that the total contribution of the terms E1,j does not exceed the right-hand side of the inequality (5.6.40), which bounds n the total remainder term for j=1 E2,j . Similarly, a comparison of the bounds (5.6.33) and (5.6.38) shows that the same assertion is true for the total contribution from the terms E3,j , j mθ := min{n, xθ }. Finally, the contribution of the terms E3,j with j > mθ is bounded, by virtue of (5.6.35) and Lemma 5.6.4, by the sum E3,j V (x + aj − σ1 (j)) V (x + aj/2) j>mθ
j>mθ
j>mθ
czV (x + ax /2) V (x) e θ
−xψ
ψ > 0.
,
Here the last inequality follows from the relation % & αaxθ−1 axθ = l(x) 1 + (1 + o(1)) , l x+ 2 2 where, since θ > 1 − α, one has αaxθ−1 l(x) xγ , 2
γ > 0.
Theorem 5.6.1 is proved. Observe that the monotonicity of q(t), assumed in condition [D], was used only in (5.6.40). We have not needed it in the theorems of §§ 5.4 and 5.5. 5.7 Large deviations of S n (−a) when a > 0 In this section, we will study the asymptotics of the probability P(S n (−a) − an x) as
x → ∞,
(5.7.1)
where S n (−a) = max(Sk + ak), kn
a > 0,
Eξ = 0.
Possible approaches to solving this problem were mentioned in § 3.6. One approach is based on the observation that for a > 0 the r.v. S n (−a) will, in a certain sense, ‘almost coincide’ with Sn + an. More precisely, Sn S n (−a) − an = Sn + ζn , where, for ξi (−a) = ξi + a and Sn (−a) = Sn + an, the r.v. ζn := max 0, −ξn (−a), −ξn (−a) − ξn−1 (−a), . . . , −Sn (−a) has the same distribution as max(−Sk (−a)) = − min Sk (−a) kn
kn
(5.7.2)
(5.7.3)
288
Random walks with semiexponential jump distributions
and is dominated in distribution by the proper r.v. ζ := sup(−Sk (−a)) = − inf (Sk + ak); k0
k0
(5.7.4)
moreover, Eζ b−1 < ∞
if
E|ξj |b < ∞.
(5.7.5)
However, evaluating the large deviation probabilities in question on the basis of (5.7.2) is difficult and inconvenient for at least two reasons: (1) the r.v.’s ζn and Sn are dependent; (2) bounds for probabilities of large deviations of ζn require additional conditions on the left distribution tails of ξi , and these conditions prove to be superfluous for the problems we consider here. Another approach uses the cruder inequalities Sn S n (−a) − an S n .
(5.7.6)
These inequalities and results already derived give the ‘correct’ asymptotics of (5.7.1) in the zone x σ2 (n) and an ‘almost correct’ asymptotics (up to a factor 2) in the intermediate zone σ1 (n) x σ2 (n). In this section we will use the same approach as in §§ 5.2–5.4. We will assume that condition [D] is satisfied, and, for the intermediate deviation zone, the following condition also (cf. conditions [CA], [CA]). [CA ∗] The uniform approximation given by the right-hand side of the relation (5.1.11) holds for P(S n (−a) − an x) in the zone 0 < x < σ+ (n) = σ1 (n) (1 − ε(n)), where ε(n) → 0 as n → ∞. It was established in [2] that condition [CA ∗] will hold provided that (5.1.10) is satisfied. As before, it is likely that, under conditions (5.1.1), (5.1.2) and for large enough b > 2 (see (5.1.7)), condition [CA ∗] will be redundant. Following the same exposition path as in §§ 5.2–5.4, it will be noticed that, owing to inequality (5.7.2), the exposition undergoes no serious changes. So we will present below only a sketch of the proof of the following main assertion. Theorem 5.7.1. Let the conditions (5.1.7) and [ · , =] with V ∈ Se be satisfied. Then the following statements hold true. (i) The assertions of Theorem 5.4.1(i), (ii) remain true if one substitutes in them S n (−a) − an for Sn and condition [CA ∗] for [CA].
5.7 Large deviations of S n (−a) when a > 0
289
(ii) Let condition [D] hold and s2 = x/σ2 (n) → ∞ (i.e. π2 = nw2 (x) → 0). Then, uniformly in x and n from that range, P(S n (−a) − an x) % n n−1 1 Eζj + = nV (x) 1 + zn j=1 2z 2
3/2 √ n n (1 − α)(n − 1) +O (1 + o(1)) + o + 2 2xz z z3 % Eζ n (1 − α)n = nV (x) 1 + + 2+ (1 + o(1)) z 2z 2xz 3/2 & n −1 + o(z +O + q(x)) , z3
& + o(q(x))
(5.7.7)
where ζj and ζ were defined in (5.7.3), (5.7.4). Here one also has analogues of Remarks 5.4.3–5.4.6, which followed Theorem 5.4.1. Comparison with Theorem 5.4.1 expansions shows that the asymptotic for the probabilities P(Sn x) and P S n (−an) − an x differ even in the first term, provided that z n (x n1/(1−α) ). Proof. The argument proving the first part of the theorem does not deviate much from the corresponding argument in § 5.4. As in the previous situation, the main contribution to the asymptotics of P(Gn ), Gn := {S n (−a) − an x}, comes from the summands (cf. (5.4.41), (5.4.42), (5.5.7))
P Gn B j {S j−1 (−a) − a(j − 1) < x − y} , B j = {ξj y}, which, up to the respective higher-order correction terms (owing to (5.7.2), (5.7.6), all the bounds we used before remain valid), could be written as (j) P Sj−1 + ξj + S n−j (−a) − a(n − j) x,
(j) Sj−1 + S n−j (−a) − a(n − j) < x − y # $ = P ξj + Zj,n x, Zj,n < x − y = E V (x − Zj,n ); Zj,n < x − y , (5.7.8) where (j)
Zj,n := Sj−1 + S n−j (−a) − a(n − j) (j)
d
and the r.v. S n−j (−a) = S n−j (−a) is independent of ξ1 , . . . , ξj . By an argument similar to that in § 5.5 (see Lemma 5.5.4), we can verify that, in the deviation zone x < δσ+ , one has for Zj,n the same approximations, of the form (5.1.11), as those that hold for Sn (all the evaluations remain true, owing to (5.7.6)). Therefore the computation of the asymptotics of P(Gn ) leads to the same result as in Theorem 5.4.1. This proves the first assertion of the theorem.
290
Random walks with semiexponential jump distributions
When deriving asymptotic expansions for probability (5.7.1), there will be some changes compared with the proof of Theorem 5.4.1. We will have to use the representation ∗ Zj,n = Sn−1 + ζn−j , d
d
∗ where the ζn−j = ζn−j are defined, in the appropriate way, by the last n − j summands in the sum Sn−1 . Further, one should use, as before, the decompositions (5.4.67) and (5.4.68). We obtain, cf. (5.4.69), (5.5.24), (5.6.36), % n 1 P(Gn ) = nV (x) 1 + Eζn−j zn j=1 & n 1 (1 − α)z 2 (1 + o(1)) + Rn (x) , + 2 EZj,n 1 + 2z n x j=1
(5.7.9) where |Rn (x)| cn3/2 z −3 . Next, we use the relations ∗ ) = Eζn−j → Eζ EZj,n = E(Sn−1 + ζn−j
as
n − j → ∞;
2 ∗ 2 = (n − 1) + 2ESn−1 ζn−j + Eζn−j . EZj,n √ √ ∗ ∗ = o( n) because Sn−1 / n and ζn−j are asymptotically inHere ESn−1 ζn−j dependent and, for all n,
Eζn2 Eζ 2 < ∞ owing to (5.7.5) since b 3. Therefore
√ n .
2 =n+o EZj,n
(5.7.10)
(5.7.11)
Substituting this in (5.7.9), we obtain the desired assertion. The theorem is proved.
5.8 Integro-local and integral theorems on the whole real line In this section, we present integro-local theorems on the asymptotic behaviour of P(Sn ∈ Δ[x)) and integral theorems on the asymptotics of P(Sn x), x → ∞, that are valid on the whole real line, i.e. in all deviation zones, including the boundary zones. In particular, these results improve the integral theorems for Sn , both those obtained in the preceding sections and also obtained in the literature cited in § 5.1. The complexity of the proofs of the above-mentioned results, which were obtained in the recent paper [73], renders them somewhat beyond the level of the present monograph. So we will present the theorems below without proofs; these can be found in [73].
5.8 Integro-local and integral theorems on the whole line
291
When studying the asymptotics of P(Sn x) in § 5.4, we found that, in the case s1 = x/σ1 (n) → ∞, an important role is played by the function t M (x, n) = min M (x, t, n), M (x, t, n) = l(x − t) + nΛκ 0tx n (see Theorem 5.4.1 and (5.4.27), (5.1.12)), whose properties were studied in § 5.4.1. To study the asymptotics of P(Sn x) in the transitional zone x = s1 σ1 (n) for a fixed s1 ∈ (0, ∞), we will need the function Mδ (x, n) :=
min
0t(1−δ)x
M (x, t, n)
with
δ = δα :=
1−α , 2−α
so that M (x, n) = M0 (x, n). It turns out, however, that, as s1 → ∞ the asymptotic properties of Mδ and M0 coincide and therefore do not depend on δ. So, by function M in the discussion in § 5.4 one can understand the function Mδ , and everywhere in what follows we can assume that δ = δα and denote the function Mδ with δ = δα by M . This will lead to no confusion. The asymptotics of M (x, n) in the zone x = s1 σ1 (n) with a fixed s1 are ‘transitional’ from the asymptotics of l(x) to that of x2 /2n; these asymptotics correspond respectively to the cases s1 → ∞ and s1 → 0. In this connection, it is of interest to consider the ratio limits (recall that δ = δα ) G1 (s1 ) := lim
n→∞
M (x, n) l(x)
and
G2 (s1 ) := lim
n→∞
2nM (x, n) , (5.8.1) x2
√ where x = s1 σ1 (n). Since for t n one has t t2 Λκ n∼ , n 2n
(5.8.2)
we obtain for x = s1 σ1 (n) and n → ∞ that, putting t := px for p ∈ [0, 1 − δ] and using (5.2.3) and the relation l(σ1 (n)) ∼ σ12 (n)/n, one has p 2 x2 (1 − p)α l(x) + M (x, n) ∼ min 0p1−δ 2n % & p2 s21 σ12 (n) α = l(x) min (1 − p) + 0p1−δ 2nl(s1 σ1 (n)) p2 s2−α 1 (1 − p)α + ∼ l(x) min 0p1−δ 2 and so M (x, n) ∼
x2 min 2sα−2 (1 − p)α + p2 . 1 2n 0p1−δ
From here it follows that the limits in (5.8.1) are equal to the values of Gi (s) =
min
0p1−δ
Hi (s, p),
i = 1, 2,
(5.8.3)
292
Random walks with semiexponential jump distributions
at s = s1 , where H1 (s, p) := (1 − p)α +
1 2 2−α , p s 2
H2 (s, p) := 2sα−2 (1 − p)α + p2 .
Now we will study the properties of the functions Gi (s). Denote by p(s) the maximum value p ∈ [0, 1 − δ], at which the minimum in (5.8.3) is attained, so that Gi (s) = Hi (s, p(s)) for i = 1, 2 (clearly, p(s) does not depend on i). The quantity s0 :=
2−α (2 − 2α)(1−α)/(2−α)
(5.8.4)
will play an important role in what follows. Lemma 5.8.1. If Var ξ = 1 then the functions G1 (s), G2 (s) and p(s) have the following properties. (i) The functions G1 (s) and G2 (s) are related to each other by G1 (s) =
s2−α G2 (s); 2
the function G2 (s) is decreasing, while G1 (s) is increasing for s > 0. Moreover, G2 (s0 ) = 1,
G2 (s) → ∞ as s → 0,
G1 (s) ↑ 1 as s → ∞. (5.8.5) (ii) The function p(s) is continuous and positive for s 0 and decreasing for s s0 , and α , 2−α
α as s → ∞. s2−α √ If we waived the condition d ≡ Var ξ = 1 then, for t n, instead of (5.8.2) we would have t t2 Λκ n∼ , n 2nd p(s0 ) =
p(s) ∼
and instead of (5.8.1) we would obtain, for x = s1 d1/(2−α) σ1 (n) and any fixed s1 > 0, the relation M (x, n) ∼ G1 (s1 )l(x) ∼
x2 G2 (s1 ). 2nd
Equally obvious changes consequent on replacing Var ξ = 1 by Var ξ = d could be made in what follows as well. So, from here on, we will assume in the present section, without loss of generality, that Var ξ = 1 unless stipulated otherwise. Now we will state the main result on crude (logarithmic) asymptotics in the integral theorem setup.
5.8 Integro-local and integral theorems on the whole line √ Theorem 5.8.2. Let F ∈ Se. Then, for x n, x = s1 σ1 (n), ⎧ √ x2 ⎪ ⎨ − if x n, s1 s0 , 2n ln P(Sn x) ∼ ⎪ ⎩ −G1 (s1 )l(x) if s1 s0 ,
293
(5.8.6)
where G1 (s1 )l(x) ∼ G2 (s1 )x2 /2n for each fixed s1 > 0. It follows from Theorem 5.8.2 that the ‘point’ x = s0 σ1 (n) separates the Cram´er deviation zone from the ‘non-Cram´er’ zone. One can also show that there exists a common representation for the right-hand side of (5.8.6) in the form −M0 (x, n), since ⎧ 2 √ x ⎪ ⎨ if x n, s1 s0 , 2n M0 (x, n) ∼ ⎪ ⎩ G1 (s1 )l(x) if s1 s0 , i.e. for −M0 (x, n) one has the same representation as for the left-hand side of (5.8.6). Next we will state an ‘exact’ integral theorem. Put 1
−1/2 for s s0 , c1 (s) := H2 (s, p(s)) 2 where s0 is defined in (5.8.4) and H2 (s, p) is the second derivative of the function H2 (s, p) with respect to p, and let c1 (s) := c1 (s0 )
for
s s0 .
By Lemma 5.8.1 the function c1 (s) is continuous and positive for all s 0, and c1 (s) → 1 (H2 (s, p(s)) → 2) as s → ∞. Theorem 5.8.3. Let F ∈ Se and E|ξ|κ < ∞, where : ⎧ 9 1 1 ⎪ ⎪ +2 if is a non-integer, ⎨ 1−α 1−α κ= ⎪ 1 1 ⎪ ⎩ +1 if is an integer. 1−α 1−α √ Then, uniformly in x = s1 σ1 (n) → ∞, x n, we have √ 0 P(Sn x) ∼ Φ −x/ n e−Λκ (x/n)n + nc1 (s1 )e−M (x,n) , where
for x
√
(5.8.7)
√ √ 0 n Φ −x/ n e−Λκ (x/n)n ∼ √ e−Λκ (x/n)n x 2π
n and c1 (s1 ) → 1 as s1 → ∞. In particular, for any fixed ε > 0, ⎧ √ √ 0 if x n, s1 s0 − ε, ⎨ Φ −x/ n e−Λκ (x/n)n P(Sn x) ∼ ⎩ nc1 (s1 ) e−M (x,n) if s1 s0 + ε.
294
Random walks with semiexponential jump distributions
In the ‘extreme’ deviation zone x = s2 σ2 (n) → ∞, s2 c = const > 0, one has P(Sn x) ∼ nc2 (s2 )e−l(x) , where
α2 c2 (s) := exp 2s2−2α
(5.8.8)
" →1
as
s → ∞.
Theorem 5.8.3 is an analogue of the results of [238], which hold on the whole √ real line (the zone of normal deviations x < n, n → ∞, is covered by the central limit theorem), but it has a simpler form. It is also an analogue of the √ representation (4.1.2), which is uniform over the range x n and holds for regularly varying tails F+ (t). Now we will turn to integro-local theorems. Here we will need the following additional condition. [D1 ]
For non-lattice distributions, for any fixed Δ > 0, as t → ∞,
l(t) ; t for arithmetic distributions, for integer-values k → ∞, l(t + Δ) − l(t) ∼ Δα
l(k + 1) − l(k) ∼ α
l(k) . k
The function αl(t)/t in condition [D1 ] plays the role of the derivative l (t) and is asymptotically equivalent to it, provided that the latter exists and is sufficiently regular. Observe that, in the zone of deviations that are close to normal, this condition will not be required (see Theorems 5.8.4 and 5.8.5 below) while in the large deviation zone it could be relaxed (cf. § 4.8). For x = s1 σ1 (n), put m(x) := c(m) (s1 )α
l(x) , x
where c(m) (s1 ) := (1 − p(s1 ))α−1 for s1 s0 , and c(m) (s1 ) := c(m) (s0 ) for s1 s0 . By Lemma 5.8.1(ii), l(x) x as s1 → ∞. Furthermore, we can show that, for any fixed Δ > 0, in the zone s1 s0 one has p(s1 ) → 0,
c(m) (s1 ) → 1,
m(x) ∼ α
M (x + Δ, n) − M (x, n) ∼ Δm(x), so that the function m(x) plays the role of the derivative of M (x, n) with respect to x. First we will state an integro-local theorem in the non-lattice case.
295
5.8 Integro-local and integral theorems on the whole line
Theorem 5.8.4. Assume that F ∈ Se and that the conditions E|ξ|κ < ∞ and [D1 ] are met, where κ is defined in Theorem 5.8.3. Then, for any fixed Δ > 0, √ uniformly in x = s1 σ1 (n) → ∞, x n, Δ e−Λκ (x/n)n + Δnm(x)c1 (s1 ) e−M (x,n) , P Sn ∈ Δ[x) ∼ √ 2πn
(5.8.9)
where one has m(x)c1 (s1 ) ∼ αl(x)/x as s1 → ∞. In particular, for any fixed ε > 0, ⎧ √ ⎪ −Λκ (x/n)n ⎪ Δ if x n, s1 s0 − ε, ⎨ √2πn e P Sn ∈ Δ[x) ∼ ⎪ ⎪ ⎩ Δnm(x)c1 (s1 ) e−M (x,n) if s1 s0 + ε. In the ‘extreme’ deviation zone x = s2 σ2 (n) → ∞, s2 c = const > 0, we have l(x) P(Sn ∈ Δ[x)) ∼ Δnα c2 (s2 )e−l(x) , (5.8.10) x where the ci (si ), i = 1, 2, are defined in Theorem 5.8.3. If x = O(σ2 (n)) then condition [D1 ] is superfluous. Now consider the arithmetic case. We cannot assume here without losing generality that a = Eξ = 0, d = Var ξ = 1. Hence the following theorem is stated for arbitrary a and d. Theorem 5.8.5. Suppose that F ∈ Se and that the conditions E|ξ|κ < ∞ and [D1 ] are satisfied, where κ is defined in Theorem 5.8.3. Then, uniformly √ in integer-valued x = s1 b2/(2−α) σ1 (n) → ∞, x n, the following holds: 1 P Sn − an = x ∼ √ e−Λκ (x/n)n + nm(x)c1 (s1 )e−M (x,n) , 2πnd where m(x)c1 (s1 ) ∼ αl(x)/x as s1 → ∞. In particular, for any fixed ε > 0, ⎧ √ 1 ⎪ −Λκ (x/n)n ⎪√ if x n, s1 s0 − ε, ⎨ 2πnd e P Sn − an = x ∼ ⎪ ⎪ if s1 s0 + ε. ⎩ nm(x)c1 (s1 ) e−M (x,n) In the ‘extreme’ deviation zone x = s2 σ2 (n) → ∞, s2 c = const > 0, for integer-valued x one has l(x) P Sn − an = x ∼ nα c2 (s2 )e−l(x) , x
(5.8.11)
where ci (si ), i = 1, 2, are defined in Theorem 5.8.3. If x = O(σ2 (n)) then condition [D1 ] is superfluous. Note that, in the deviation zone x σ2 (n) Theorem 5.8.3 will, generally speaking, not follow from Theorems 5.8.4 and 5.8.5 (indeed, Theorem 5.8.3 does
296
Random walks with semiexponential jump distributions
not contain condition [D1 ]). Theorems 5.8.4 and 5.8.5 are analogues of Theorems 4.7.5 and 4.7.6. Further, observe that the case when n does not grow is not excluded in Theorems 5.8.2–5.8.5. In this case s2 := x/σ2 (n) → ∞, c2 (s2 ) → 1 as x → ∞, and the assertions (5.8.8), (5.8.10) and (5.8.11), with c2 (s2 ) replaced on their righthand sides by 1, can also be obtained from the known properties of subexponential and locally subexponential distributions (see e.g. §§ 1.2 and 1.3).
5.9 Additivity (subexponentiality) zones for various distribution classes

We have seen that, under the conditions of Chapters 2–5, the following subexponentiality property holds for the sums Sn of i.i.d. r.v.’s ξ₁, …, ξn, Eξ = 0. For any fixed n, as x → ∞,
$$ P(S_n \ge x) \sim nV(x), \qquad V(x) = P(\xi \ge x). \qquad (5.9.1) $$
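The additivity relation (5.9.1) is easy to probe numerically. The sketch below is our own illustration, not a construction from the book: it discretizes a Pareto-type jump distribution (tail index −1.5, hence subexponential) and compares the convolution tails P(S₂ ≥ x) and P(S₃ ≥ x) with 2V(x) and 3V(x).

```python
# Numerical check of the tail-additivity property P(S_n >= x) ~ n V(x)
# for a heavy-tailed (Pareto-type, hence subexponential) jump distribution.
# Illustration only: a discretized tail p_k proportional to (k+1)^(-2.5),
# not an example taken from the book.
import numpy as np

N = 6000                       # truncation point of the discrete support 0..N
k = np.arange(N + 1)
p = (k + 1.0) ** -2.5          # regularly varying weights, tail index ~ -1.5
p /= p.sum()                   # probability mass function of one jump xi

p2 = np.convolve(p, p)         # distribution of S_2 = xi_1 + xi_2
p3 = np.convolve(p2, p)        # distribution of S_3

x = 300                        # deviation level, far in the tail
V = p[x:].sum()                # V(x) = P(xi >= x)
r2 = p2[x:].sum() / (2 * V)    # should be close to 1
r3 = p3[x:].sum() / (3 * V)    # should be close to 1
print(round(r2, 3), round(r3, 3))
```

Both ratios come out close to 1, with a relative error of order Eξ/x, in line with the “one big jump” heuristic behind (5.9.1).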
This property could also be referred to as tail additivity with respect to the addition of random variables. This term is more justified in the case of non-identically distributed independent summands ξ₁, …, ξn. In this case, instead of (5.9.1) one has (see Chapters 12 and 13)
$$ P(S_n \ge x) \sim \sum_{j=1}^{n} V_j(x), $$
where Vⱼ(x) = P(ξⱼ ≥ x).
It follows from the results of Chapters 2–5 that the relation (5.9.1) remains valid also for all n ≤ n(x), where n(x) → ∞ as x → ∞. If n(x) is an ‘unimprovable’ value with this property then it could be called the boundary of the additivity (subexponentiality) zone.
When considering the whole class S of subexponential distributions, one can only say about the boundary n(x) of the additivity zone for a distribution F ∈ S that it ought to be a sufficiently slowly growing function of x (that depends on F). On the one hand, the more regular the behaviour of F at infinity, the greater the boundary n(x). On the other hand, there exists a trivial upper bound
$$ n(x) < \frac{1}{V(x)}, \qquad (5.9.2) $$
dictated by the fact that nV(x) approximates a probability and therefore one must have nV(x) ≤ 1. It will be seen in what follows that, in a number of cases, this trivial bound is essentially attained.
For the distribution classes R(α) and Se(α), which we considered in Chapters 1–5 (see pp. 11, 29), the boundaries of the additivity zones can be found explicitly. It turns out that we always have
$$ n(x) = x^{\gamma} L_n(x), \qquad (5.9.3) $$
where Lₙ(x) is an s.v.f. and γ > 0. More precisely, the following assertions hold true.
(i) If F ∈ R(α) and condition [ … ] … then the scaling sequence F^{(−1)}(1/n) from § 1.5 will differ from V^{(−1)}(1/n) just by a constant factor; if n ≫ 1/V(x) then P(Sn ≥ x) ∼ F_{α,ρ,+}(0).) Thus, for F ∈ R(α) with α ∈ (0, 2), in the representation (5.9.3) one has
$$ \gamma = \gamma(F) = \alpha, \qquad L_n(x) = \frac{\varepsilon(x)}{L(x)}. $$
(ii) If F ∈ R(α) for α > 2 and Eξ² < ∞ then, from the discussion in Chapter 4,
$$ n(x) \ge \frac{x^2}{c \ln x} \qquad (5.9.5) $$
for any c > α − 2. If n assumes values in the vicinity of x²/((α − 2) ln x) then for the asymptotics of P(Sn ≥ x) we will have a ‘mixed’ approximation, given by the sum of nV(x) and 1 − Φ(x/√(nd)), where Φ is the standard normal distribution function and d = Eξ₁² (see (4.1.2)). For n > x²/(c₁ ln x), c₁ < α − 2 (in particular, for n ∼ cx²), we will have the normal approximation. If n ≫ x² then P(Sn ≥ x) → 1/2. Hence, in the case F ∈ R(α), α > 2, in the representation (5.9.3) one can let
$$ \gamma = \gamma(F) = 2, \qquad L_n(x) = \frac{1 + \varepsilon(x)}{(\alpha - 2)\ln x}, \qquad (5.9.6) $$
where ε(x) → 0 as x → ∞.
(iii) If F ∈ Se(α), α ∈ (0, 1) then, by virtue of the results of the present chapter (see Theorem 5.4.1(ii)),
$$ n(x) = \frac{x^{2-2\alpha}\,\varepsilon(x)}{L^2(x)}, $$
where ε(x) is an arbitrary s.v.f. vanishing at infinity and L(x) is the s.v.f. from the representation
$$ P(\xi \ge t) = e^{-l(t)}, \qquad l(t) = t^{\alpha}L(t), \qquad \alpha \in (0, 1). $$
If n is comparable with or greater than x^{2−2α}L^{−2}(x) but satisfies the relation n ≪ x^{2−α}/L(x) then there is another approximation for P(Sn ≥ x) (see Theorem 5.4.1(i)). For n ≫ x^{2−α}/L(x), we have first the ‘Cramér approximation’ and then the normal one. If n ≫ x² then P(Sn ≥ x) → 1/2. Thus, for F ∈ Se(α) in (5.9.3) we have
$$ \gamma = \gamma(F) = 2 - 2\alpha, \qquad L_n(x) = \frac{\varepsilon(x)}{L^2(x)}. $$
Summarizing the above, we can now plot the dependence of the main parameter γ = γ(F), which characterizes the additivity boundary n(x), on the parameter α that specifies the classes R(α) and Se(α) containing the distribution F (see Fig. 5.1).

[Fig. 5.1: two panels plotting γ(F) against α, one for F ∈ R(α) and one for F ∈ Se(α).]

Fig. 5.1. The plots show the values of γ in the representation (5.9.3) for which one or another approximation holds: 1, additivity (subexponentiality) zone; 2, approximation by a stable law; 3, normal approximation; 4, Cramér’s approximation; 5, intermediate approximation.
For exponentially decaying distributions the tail additivity property, generally speaking, does not hold. For example, if P(ξ ≥ t) ∼ ce^{−μt} as t → ∞, μ > 0, then P(S₂ ≥ t) ∼ c₁te^{−μt} ≫ 2P(ξ ≥ t). For distributions with a negative mean, however, the additivity property becomes possible. As was shown in § 1.2 (Example 1.2.11 on p. 19), if P(ξ ≥ t) = e^{−μt}V(t), where V(t) is an integrable r.v.f., then ϕ(μ) = Ee^{μξ} < ∞ and P(S₂ ≥ t) ∼ 2ϕ(μ)P(ξ ≥ t). If the distribution F is such that ϕ(μ) = 1 (which is only possible in the case ϕ′(0) = Eξ < 0) then it will have the additivity (subexponentiality) property. Using Cramér transforms of the distributions of the r.v.’s ξ and Sn, the problem
on the asymptotics of P(Sn ≥ x) can be reduced to the respective problem for sums of r.v.’s with distributions from R (for this reduction, we do not need the equality ϕ(μ) = 1; see § 6.2).
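The relation P(S₂ ≥ t) ∼ 2ϕ(μ)P(ξ ≥ t) quoted above from Example 1.2.11 can be checked by direct convolution. The distribution below is a toy example of our own choosing (μ = 0.5 and V(t) of index −3), used only to make the factor 2ϕ(μ) visible.

```python
# Numerical check of P(S_2 >= t) ~ 2*phi(mu)*P(xi >= t) for a distribution
# with tail P(xi >= t) = e^{-mu t} V(t), V an integrable r.v.f.
# Toy example (not from the book): p_k proportional to e^{-mu k}(k+1)^{-3}.
import numpy as np

mu = 0.5
N = 1200
k = np.arange(N + 1)
p = np.exp(-mu * k) * (k + 1.0) ** -3.0
p /= p.sum()                        # law of one jump xi on 0..N

phi = (p * np.exp(mu * k)).sum()    # phi(mu) = E e^{mu xi}, finite here
p2 = np.convolve(p, p)              # distribution of S_2 = xi_1 + xi_2

t = 200
ratio = p2[t:].sum() / (2 * phi * p[t:].sum())
print(round(ratio, 3))              # close to 1
```

Unlike the purely heavy-tailed case, the limiting constant is 2ϕ(μ) rather than 2: the exponential factor makes moderate values of one summand contribute to the tail of the sum.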
6 Large deviations on the boundary of and outside the Cramér zone for random walks with jump distributions decaying exponentially fast
6.1 Introduction. The main method of studying large deviations when Cramér’s condition holds. Applicability bounds

In this chapter, in contrast with the rest of the book, we will assume that the distribution of ξ satisfies Cramér’s condition, i.e. ϕ(λ) := Ee^{λξ} < ∞ for some λ > 0. In this case, methods for studying the probabilities of large deviations of Sn are quite well developed and go back to Cramér’s paper [95] (see also [233, 16, 219, 259, 120, 49] etc.). Somewhat later, these methods were extended in [37, 44, 69, 70] to enable the study of S̄n and also the solution of a number of other problems related to the crossing of given boundaries by the trajectory of a random walk.
The basis of the modern approach to studying the probabilities of large deviations of Sn, including integro-local theorems, consists of the following two main elements:
(1) the Cramér transform and the reduction of the problem to integro-local theorems in the normal deviation zone;
(2) the Gnedenko–Stone–Shepp integro-local theorems [130, 254, 258, 259] on the distribution of Sn.
We will briefly describe the essence of the approach. As before, let Δ[x) := [x, x + Δ) be a half-open interval of length Δ > 0 with left endpoint x. We will study the asymptotics of P(Sn ∈ Δ[x)) as n → ∞. (Note that earlier we assumed that x → ∞ but often made no such assumption with regard to n. Now the unbounded growth of x will follow from that of n.)
We say that an r.v. ξ^{(λ)} follows the Cramér transform of the distribution (or the
conjugate distribution¹) of ξ if
$$ P(ξ^{(λ)} ∈ dt) = \frac{e^{λt}\,P(ξ ∈ dt)}{ϕ(λ)}, \qquad (6.1.1) $$
so that
$$ Ee^{μξ^{(λ)}} = \frac{ϕ(λ + μ)}{ϕ(λ)}. $$
The Cramér transform of the distribution of Sn has the form
$$ \frac{e^{λt}\,P(S_n ∈ dt)}{ϕ^n(λ)}. \qquad (6.1.2) $$
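As a quick numerical sanity check of the tilting identities behind (6.1.1)–(6.1.2) (a sketch with an arbitrary four-point distribution of our own choosing, not an example from the book), one can verify that tilting by λ turns the moment generating function into ϕ(λ + μ)/ϕ(λ), and that the tilted mean equals (ln ϕ(λ))′.

```python
# Sanity check of the Cramer transform (exponential tilting), eq. (6.1.1):
# the values and weights below are arbitrary, chosen only for illustration.
import numpy as np

vals = np.array([-1.0, 0.0, 2.0, 3.5])
probs = np.array([0.3, 0.4, 0.2, 0.1])

def phi(l):                          # phi(l) = E e^{l*xi}
    return (probs * np.exp(l * vals)).sum()

lam, mu = 0.7, 0.4
tilted = probs * np.exp(lam * vals) / phi(lam)   # law of xi^{(lam)}

lhs = (tilted * np.exp(mu * vals)).sum()         # E e^{mu * xi^{(lam)}}
rhs = phi(lam + mu) / phi(lam)
assert abs(lhs - rhs) < 1e-12

# mean of the tilted law equals phi'(lam)/phi(lam) = (ln phi)'(lam)
mean_tilted = (tilted * vals).sum()
h = 1e-6
num_deriv = (np.log(phi(lam + h)) - np.log(phi(lam - h))) / (2 * h)
assert abs(mean_tilted - num_deriv) < 1e-6
print("tilting identities verified")
```

The second assertion is exactly the fact, used repeatedly below, that the tilted mean Eξ^{(λ)} sweeps through the interval (θ₋, θ₊) as λ varies.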
It is clear that the moment generating function of this distribution is equal to (ϕ(λ + μ)/ϕ(λ))^n. So we obtain the following remarkable fact: the transform (6.1.2) corresponds to the distribution of the sum
$$ S_n^{(λ)} := ξ_1^{(λ)} + \cdots + ξ_n^{(λ)} $$
of i.i.d. r.v.’s ξ_1^{(λ)}, …, ξ_n^{(λ)} following the same distribution as ξ^{(λ)}. From this it follows that
$$ P(S_n ∈ dt) = ϕ^n(λ)\,e^{−λt}\,P(S_n^{(λ)} ∈ dt), $$
and therefore
$$ P(S_n ∈ Δ[x)) = ϕ^n(λ)\,e^{−λx}\int_x^{x+Δ} e^{(x−t)λ}\,P(S_n^{(λ)} ∈ dt) = ϕ^n(λ)\,e^{−λx}\int_0^{Δ} e^{−λu}\,P(S_n^{(λ)} − x ∈ du). \qquad (6.1.3) $$
Since e^{−λu} → 1 uniformly in u ∈ [0, Δ] as Δ → 0, we have, for such Δ’s, that
$$ P(S_n ∈ Δ[x)) ∼ ϕ^n(λ)\,e^{−λx}\,P(S_n^{(λ)} − x ∈ Δ[0)); \qquad (6.1.4) $$
hence, knowing the asymptotics of P(S_n^{(λ)} − x ∈ Δ[0)) would mean that we also know the desired asymptotics of P(S_n ∈ Δ[x)).
Now note that Eξ^{(λ)} = ϕ′(λ)/ϕ(λ) = (ln ϕ(λ))′. Clearly, Eξ^{(λ)} is an increasing function of λ because (ln ϕ(λ))″ > 0. This function increases from the value
$$ θ_− := \inf\Bigl\{\frac{ϕ′(λ)}{ϕ(λ)} : λ ∈ (λ_−, λ_+)\Bigr\} $$
to the value
$$ θ_+ := \sup\Bigl\{\frac{ϕ′(λ)}{ϕ(λ)} : λ ∈ (λ_−, λ_+)\Bigr\}, $$
¹ In actuarial and finance-related literature, this distribution is often referred to as the Esscher transformed or exponentially tilted distribution; see e.g. p. 31 of [113].
where
$$ λ_− := \inf\{λ : ϕ(λ) < ∞\}, \qquad λ_+ := \sup\{λ : ϕ(λ) < ∞\}. \qquad (6.1.5) $$
Therefore, if θ := x/n ∈ (θ_−, θ_+) then we can choose a λ = λ(θ) such that
$$ Eξ^{(λ)} = \frac{ϕ′(λ)}{ϕ(λ)} = θ. \qquad (6.1.6) $$
In this case we have ES_n^{(λ)} − x = nθ − x = 0, which means that the probability P(S_n^{(λ)} − θn ∈ Δ[0)) on the right-hand side of (6.1.4) will refer to the normal deviation zone. Observe also that
$$ Eξ = ϕ′(0) ∈ [θ_−, θ_+], \qquad λ(θ_±) = λ_±. $$
Now consider the lattice case, for which the distribution of ξ is concentrated on the set {a + kh, k = …, −1, 0, 1, …} and h is the maximum value possessing this property. Without loss of generality, we can put h = 1. Moreover, when studying the distribution of Sn, we can set a = 0 (sometimes it is more convenient to set Eξ = 0 but in such a case, generally speaking, a ≠ 0). When h = 1 and a = 0, the distribution of ξ is said to be arithmetic. In the arithmetic case, analogues of the relations (6.1.3), (6.1.4) have a somewhat simpler form: for integer-valued x,
$$ P(S_n = x) = ϕ^n(λ)\,e^{−λx}\,P(S_n^{(λ)} − θn = 0), \qquad (6.1.7) $$
where, as before, E(ξ^{(λ)} − θ) = 0 for λ = λ(θ), θ ∈ (θ_−, θ_+).
Thus, to study the asymptotics of P(S_n ∈ Δ[x)), we should turn to integro-local theorems for the sums Zn := ζ₁ + ⋯ + ζn of i.i.d. r.v.’s ζ₁, ζ₂, … in the normal deviation zone in the non-lattice case and to local theorems for these sums in the lattice case. Let F̃ be a stable distribution with a density f.
In the lattice case, we have Gnedenko’s theorem [130], as follows.

Theorem 6.1.1. Assume that ζ has a lattice distribution with span h = 1 and an arbitrary a. The relation
$$ \lim_{n→∞}\sup_k \Bigl| b_n P(Z_n = na + k) − f\Bigl(\frac{na + k − a_n}{b_n}\Bigr) \Bigr| = 0 \qquad (6.1.8) $$
holds, for suitable an, bn, iff the distribution of ζ belongs to the domain of attraction of the stable law F̃. Moreover, here the sequence (an, bn) is the same as that ensuring convergence of the distribution of (Zn − an)/bn to the law F̃.
Proof. The proof of Theorem 6.1.1 can be found in §§ 49, 50 of [130]; see also Theorem 4.2.1 of [152] and Theorem 8.4.1 of [32].
In the non-lattice case, the following Stone–Shepp integro-local theorem holds true.

Theorem 6.1.2. Suppose that ζ is a non-lattice r.v. Then a necessary and sufficient condition for the existence of an, bn such that
$$ \lim_{n→∞}\ \sup_{Δ∈[Δ_1,Δ_2]}\ \sup_x\ \Bigl| b_n P(Z_n ∈ Δ[x)) − Δf\Bigl(\frac{x − a_n}{b_n}\Bigr) \Bigr| = 0 \qquad (6.1.9) $$
for any fixed 0 < Δ₁ < Δ₂ < ∞ is that the distribution of ζ belongs to the domain of attraction of the stable distribution F̃. Moreover, here the sequence (an, bn) is the same as that ensuring convergence of the distribution of (Zn − an)/bn to the law F̃.

Proof. The proof of this assertion can be found in [254, 258, 259, 120, 32].

Remark 6.1.3. The assertions of Theorems 6.1.1 and 6.1.2 can be presented in a unified form, using instead of (6.1.8) the common assertion (6.1.9), where in the lattice case the variable x assumes values an + k, k = …, −1, 0, 1, …, and Δ ≥ 1 is integer-valued.

Remark 6.1.4. If the Cramér condition on the characteristic function,
[C] p := lim sup_{|t|→∞} |g(t)| < 1, where g(t) := Ee^{itζ},
is satisfied then the assertion of Theorem 6.1.2 can be made stronger: the relation (6.1.9) will hold uniformly in Δ ∈ [e^{−p₁n}, Δ₂] for any fixed p₁ < p and Δ₂ < ∞ (see [259]).

Note that we want to apply Theorems 6.1.1 and 6.1.2 in the representations (6.1.4), (6.1.7) to the sums S_n^{(λ)} − θn with λ = λ(θ), where the value θ = x/n and hence that of λ(θ) will depend, generally speaking, on n (as, for instance, is the case when x = θ₀n + bn^α, α ∈ (0, 1), θ₀ = const). This means that we will have to deal with the triangular array scheme and so will need versions of Theorems 6.1.1 and 6.1.2 that are uniform in λ. Such versions were established in [72], and in part in [259].
In the general situation, conditions for the uniformity are rather cumbersome but in the special case when ζ = ξ^{(λ(θ))} − θ, θ ∈ [θ̂_−, θ̂_+] and θ_− < θ̂_− < θ̂_+ < θ_+, one needs no additional conditions and Theorems 6.1.1 and 6.1.2 take more concrete forms. In particular, we have the following result.

Theorem 6.1.5. If the r.v. ξ is non-lattice and ζ = ξ^{(λ(θ))} − θ then uniformly in θ = x/n ∈ [θ̂_−, θ̂_+] one has the relation (6.1.9), where
$$ f(t) = \frac{1}{\sqrt{2π}}\,e^{−t^2/2} $$
is the density of the standard normal distribution, an = 0 and b²ₙ = nd(θ) with d(θ) := Var ξ^{(λ(θ))}. In particular,
$$ \lim_{n→∞}\ \sup_{Δ∈[Δ_1,Δ_2]} \Bigl| \sqrt{nd(θ)}\,P\bigl(S_n^{(λ(θ))} − θn ∈ Δ[0)\bigr) − \frac{Δ}{\sqrt{2π}} \Bigr| = 0. \qquad (6.1.10) $$
One should also note that the Cramér transform (6.1.1) converts the distribution of ξ into distributions that are absolutely continuous with respect to the original one and also with respect to each other.
If θ_− > −∞ and ϕ″(λ(θ_−)) < ∞ (Var ξ^{(λ(θ_−))} < ∞) then the uniformity in θ = x/n in Theorems 6.1.1 and 6.1.2 will hold on the interval [θ_−, θ̂_+]. Similarly, for θ_+ < ∞ and ϕ″(λ(θ_+)) < ∞ one will have uniformity in θ ∈ [θ̂_−, θ_+]. Thus, under the conditions of Theorem 6.1.5, we have
$$ P\bigl(S_n^{(λ(θ))} − θn ∈ Δ[0)\bigr) ∼ \frac{Δ}{\sqrt{2πnd(θ)}}, \qquad n → ∞, \qquad (6.1.11) $$
uniformly in Δ ∈ [Δ₁, Δ₂]. If the distribution of ξ satisfies condition [C] then the interval [Δ₁, Δ₂] can be replaced by [e^{−p₁n}, Δ₂] for some p₁ > 0. The assertion of Theorem 6.1.5 follows from the uniform versions of Theorems 6.1.1 and 6.1.2 in an obvious way.
If the conditions that ϕ″(λ(θ_±)) are finite do not hold then, for θ = x/n ↑ θ_+ (θ ↓ θ_− > −∞), one will need versions of Theorems 6.1.1 and 6.1.2 with a non-normal stable density f (see § 6.2). Corresponding changes will have to be made in assertions (6.1.10), (6.1.11) as well.
Further, note that assertions (6.1.9), (6.1.11), being true in the non-lattice case for any fixed Δ, will also hold for Δ = Δn → 0 slowly enough. Therefore, returning to (6.1.4), we obtain from Theorem 6.1.5 the following statement. Again let [θ̂_−, θ̂_+] be any interval located inside (θ_−, θ_+).
(6.1.12)
In the lattice case, for values of x of the form an + k, one has 1 P(Sn = x) ∼ ϕn (λ) e−λx . 2πnd(θ)
(6.1.13)
The uniformity in θ could be extended to the entire intervals [θ− , θ + ] and [θ− , θ+ ] if θ− > −∞, ϕ λ− < ∞ and θ+ < ∞, ϕ λ+ < ∞ respectively. The case θ+ < ∞, ϕ λ+ = ∞ and θ ∼ θ+ will be considered in § 6.2. Note that the product ϕn λ(θ) e−λ(θ)x in (6.1.12) and (6.1.13) can be written as $n # ϕ λ(θ) e−λ(θ)θ = e−nΛ(θ) ,
305
6.1 Introduction. The main method under Cram´er’s condition where
Λ(θ) = λ(θ)θ − ln ϕ λ(θ)
is the large deviation rate function (a.k.a. the deviation function or Legendre transform of ln ϕ(λ)) given by Λ(θ) := sup λθ − ln ϕ(λ) . λ
In this form, thedeviation function appears in a natural way when finding the asymptotics of P Sn ∈ Δ[x) using an alternative approach employing the inversion formula and the saddle-point method. The properties of the deviation function as a probabilistic characteristic are known well enough (see e.g. [38, 49, 67]). In particular, it is known that the function Λ(θ) is non-negative, lower semicontinuous, convex and analytic on the interval (θ− , θ+ ) and that Λ(Eξ) = 0,
Λ (θ) = λ(θ) > 0
for
θ > Eξ.
In terms of the function Λ(θ), the relation (6.1.12) takes the form Δe−nΛ(θ) P Sn ∈ Δ[x) ∼ . 2πnd(θ) From here (or from (6.1.12)) one can easily obtain an integral theorem as well. Since for θ < θ+ , v/n → 0 one has v v v = Λ(θ) + λ(θ) + o , Λ θ+ n n n we see that, for any fixed v, as n → ∞, Δe−nΛ(θ)−vλ(θ) . P Sn ∈ Δ[x + v) ∼ 2πnd(θ)
(6.1.14)
This implies the following. Corollary 6.1.7. Let θ = x/n > Eξ. Then in the non-lattice case, uniformly in θ = x/n ∈ [θ− , θ + ], for any fixed Δ ∞ one has e−nΛ(θ) P Sn ∈ Δ[x) ∼ 2πnd(θ)
Δ
e−vλ(θ) dv =
0
(1 − e−λ(θ)Δ )e−nΛ(θ) . λ(θ) 2πnd(θ) (6.1.15)
In particular, for Δ = ∞, P(Sn x) ∼
e−nΛ(θ) . λ(θ) 2πnd(θ)
(6.1.16)
In the lattice case, we similarly obtain from (6.1.13) that, for values of x of the form an + k, one has P(Sn x) ∼
e−nΛ(θ) . (1 − e−λ(θ) ) 2πnd(θ)
(6.1.17)
306
Random walks with exponentially decaying distributions
The general integro-local theorem (6.1.15), (6.1.13) was obtained, in a somewhat different form, in [259]. The asymptotic relations (6.1.16), (6.1.17) show, in particular, the degree of precision of the following crude bound, which is obtained directly from Chebyshev’s inequality. For x Eξ 0, P(Sn x) inf e−λx EeλSn = inf e−λx+n ln ϕ(λ) = e−nΛ(θ) . λ>0
λ>0
(6.1.18)
We emphasize that, when studying one-sided deviations, one of the main conditions in the basic Theorem 6.1.6 is the relation x θ + < θ+ . n
θ=
This inequality, describing the so-called Cram´er zone of right-sided deviations, sets applicability bounds for methods based on the Cram´er transform and also on all the other analytic methods leading to asymptotics of the form (6.1.12)– (6.1.17). If θ+ = ∞ then there are no such bounds. If θ+ < ∞, λ+ = ∞ then necessarily P(ξ θ+ ) = 1, i.e. the r.v. ξ is bounded. This case is scarcely interesting from the large deviations viewpoint, and we will not consider it here. There remains the little-studied (for deviations x > θ+ n) possibility 0 < λ+ < ∞,
θ+ =
ϕ (λ+ ) < ∞, ϕ(λ+ )
(6.1.19)
where necessarily one has ϕ(λ+ ) < ∞, ϕ (λ+ ) < ∞. It follows from these relations that the function V (t) from the representation P(ξ t) = e−λ+ t V (t),
λ+ > 0,
(6.1.20)
has the property that tV (t) is integrable: ∞ tV (t) dt < ∞.
(6.1.21)
0
Indeed, the condition ϕ (λ+ ) < ∞ implies that E ξeλ+ ξ ; ξ > 0) < ∞, and therefore, as t → ∞, V (t) = eλ+ t P(ξ t) t−1 E ξeλ+ ξ ; ξ t = o(t−1 ).
(6.1.22)
307
6.1 Introduction. The main method under Cram´er’s condition From (6.1.22) and the last relation, integrating by parts, we obtain ∞ > E ξeλ+ ξ ; ξ > 0) > E ξeλ+ ξ ; 0 < ξ < N ) N =−
λ+ t
te
d e
−λ+ t
N
V (t) = λ+
0
N tV (t) dt −
0
N λ+
t dV (t) 0
∞ tV (t) dt − N V (N ) → λ+
0
tV (t) dt
as N → ∞,
0
which establishes (6.1.21). Moreover, if 1/V (t) is an upper-power function (see p. 28) then 2c V (t) t
t
4c V (u) du 2 t
t/2
t
uV (u)du = o(t−2 )
t/2
owing to (6.1.21). It is not difficult to construct an example showing that without additional assumptions on the behaviour of V (t) the bound V (t) = o(t−1 ) is essentially unimprovable. It turns out that one can find the asymptotics of the probabilities of large deviations x > θ+ n in the case (6.1.19), (6.1.20) only under the additional assumption of regular variation of the function V ; this is quite similar to what we saw in Chapters 2–5 and in contrast with the ‘Cram´er theory’ presented at the beginning of the present section. Moreover, the asymptotics of P(Sn x) are determined in this case not only by the function V (t) but also by the values of λ+ , ϕ(λ+ ) and ϕ (λ+ ), which depend on the entire distribution of ξ. (We did not have this phenomenon in the situations considered in Chapters 2–5; in particular, in the Cram´er deviation zone there was no dependence on the asymptotics of the nonexponential factor V (t)). Thus, in this sense, we will be dealing with ‘transitional’ asymptotics, whose place is between the asymptotics of Chapters 2–5 and the asymptotics (6.1.15)–(6.1.17) in the Cram´er deviation zone. In what follows, we will consider the class ER of distributions (or, equivalently, functions of the form (6.1.20)) with the property that the function V (t) in (6.1.20) belongs to the class R of regularly varying functions. The class ER will be referred to as the class of regularly varying exponentially decaying distributions. We will distinguish between two subclasses of ER, for which one has, respectively, ∞ t2 V (t) dt = ∞ (6.1.23) 0
and
∞ 0
t2 V (t) dt < ∞.
(6.1.24)
308
Random walks with exponentially decaying distributions
For studying these one will have to use the results of Chapter 3 and Chapter 4, respectively. Functions V ∈ R of indices from the intervals (−2, −3) correspond to the subclass (6.1.23), (6.1.21) whereas functions of indices less than −3 correspond to the subclass (6.1.24). One can also obtain results, similar to (but, unfortunately, technically more complicated than) those to be derived below in §§ 6.2 and 6.3, for the class ESe of distributions with the property that the function V in (6.1.20) is from the class Se of semiexponential functions. In this case one should make use of integro-local theorems. As mentioned in the Introduction, a companion volume to the present text will be devoted to a more detailed study of random walks with distributions fast decaying at infinity.
6.2 Integro-local theorems for sums Sn of r.v.’s with distributions from the class ER when the function V (t) is of index from the interval (−1, −3) In the previous section, we used integro-local theorems in the normal deviation zone and the Cram´er transform to obtain theorems of the same type in the whole Cram´er deviation zone. Now we will turn to integro-local theorems on the boundary and outside the Cram´er zone, where the approaches presented in § 6.1 do not work. In this section, we will consider the possibility (6.1.23), i.e. the case when the distribution of ξ has the form P(ξ t) = e−λ+ t V (t),
λ+ > 0,
(6.2.1)
where V (t) = t−α−1 L(t),
(6.2.2)
α ∈ (0, 2) and L is an s.v.f. (For reasons that will become clear later on it is convenient in the section to denote the index of the r.v.f. V (t) by −α − 1.) Formally, the case α ∈ (0, 1) is not related to the group of distributions with λ+ < ∞ and θ+ < ∞, which was specified by (6.1.19), since, for such indices, ∞
∞ tV (t) dt =
1
t−α V (t) dt = ∞
1
and therefore ϕ (λ+ ) = ∞, θ+ = ϕ (λ+ )/ϕ(λ+ ) = ∞. This means that we can use the approaches discussed in § 6.1 to study the probabilities P(Sn x) with x ∼ cn for any fixed c. It turns out, however, that the approaches presented in § 6.2.2 below (first and foremost, this refers to the use of the Cram´er transform at the fixed point λ+ ) enable one to study the asymptotics of P(Sn ∈ Δ[x)) when x n also (such deviations x could be called super-large; under Cram´er’s
6.2 Integro-local theorems: the index of V is from (−1, −3) 309 √ condition, along with normal deviations O( n ), one usually distinguishes be√ tween moderately large deviations, when n x = o(n), and the ‘usual’ large deviations x ∼ cn). 6.2.1 Large deviations on the boundary of the Cram´er zone: case α ∈ (1, 2) Now we return to the case α ∈ (1, 2) and put y := x − nθ+ .
(6.2.3)
We will assume in this subsection that y = o(n) as n → ∞ (in fact, the range of the values of y to be considered will be somewhat narrower). This means that θ = x/n remains in the vicinity of the point θ+ , i.e. next to the right-hand boundary of the Cram´er zone (θ− , θ+ ). Applying the Cram´er transform (6.1.1) at the fixed point λ+ , we obtain that, as Δ → 0 (cf. (6.1.4)), (6.2.4) P Sn ∈ Δ[x) ∼ ϕn (λ+ )e−λ+ x P Sn(λ+ ) − θ+ n ∈ Δ[y) . Now we will find out what kind of distribution will describe the jumps ζ := ξ (λ+ ) − θ+ .
(6.2.5)
We have λ+ V (t + θ+ ) dt − dV (t + θ+ ) eλ+ (t+θ+ ) P(ξ ∈ dt + θ+ ) = . ϕ(λ+ ) ϕ(λ+ ) (6.2.6) From this and Theorem 1.1.4, one derives that P(ζ ∈ dt) =
λ+ tV (t) (1 + o(1)) αϕ(λ+ ) λ+ t−α L(t) = (1 + o(1)) = t−α Lζ (t), αϕ(λ+ )
Vζ (t) := P(ζ t) =
(6.2.7)
where Lζ (t) ∼
λ+ L(t) αϕ(λ+ )
is clearly also an s.v.f. Let ζ1 , ζ2 , . . . be independent copies of the r.v. ζ. Since Eζ = Eξ (λ+ ) − θ+ = 0 we obtain, by virtue of the results of § 1.5, that, as n → ∞, the distribution of the scaled sums Zn /bn , where n 1/α λ+ (−1) ζi , bn = Vζ (1/n) ∼ V (−1) (1/n), (6.2.8) Zn = αϕ(λ ) + i=1
310
Random walks with exponentially decaying distributions
will converge to the stable law Fα,1 with parameters (α, 1) (the left distribution tail of ζ decays exponentially fast). Hence, owing to Theorem 6.1.2 one (−1) has (6.1.9), where an = 0, bn = Vζ (1/n) and f = fα,1 is the density of the stable law Fα,1 . This means that if Δ = Δn → 0 slowly enough then y Δ Δ +o . P Zn ∈ Δ[y) = fα,1 bn bn bn In other words, when |y| = O(bn ) (i.e. for n > c/Vζ (y) or when nVζ (y) decays slowly enough) one has y Δ P Zn ∈ Δ[y) ∼ . fα,1 bn bn Returning to (6.2.4) and noting that ϕn (λ+ )e−λ+ x = e−nΛ(θ+ )−λ+ y , we obtain the following assertion. Theorem 6.2.1. Assume that conditions (6.2.1), (6.2.2) with α ∈ (1, 2) are satisfied. Then, in the non-lattice case, for y = x − θ+ n = o(n) and for Δ = Δn → 0 slowly enough as n → ∞, one has the representation % & Δe−nΛ(θ+ )−λ+ y y P Sn ∈ Δ[x) = fα,1 + o(1) , (6.2.9) bn bn where the bn are defined in (6.2.8). In the arithmetic case, for integer-valued x, % e−nΛ(θ+ )−λ+ y y P(Sn = x) = fα,1 bn bn
& + o(1) .
(6.2.10)
In both cases the remainder terms are uniform in x (in y) in the zone where nV_ζ(y) ≥ ε_n, ε_n → 0 slowly enough as n → ∞.

Remark 6.2.2. The function Λ(θ) is linear for θ ≥ θ_+:
$$ Λ(θ) = Λ(θ_+) + (θ − θ_+)λ_+, \qquad θ ≥ θ_+. \qquad (6.2.11) $$
Hence for y > 0 the argument of the exponent in (6.2.9), (6.2.10) can also be written in the form −nΛ(θ).

Corollary 6.2.3. In the non-lattice case, under conditions (6.2.1), (6.2.2), for any sequence ε_n → 0 slowly enough as n → ∞, one has
$$ P(S_n ∈ Δ[x)) = \frac{e^{−nΛ(θ_+)−λ_+y}}{λ_+b_n}\bigl(1 − e^{−λ_+Δ}\bigr)\Bigl[f_{α,1}\Bigl(\frac{y}{b_n}\Bigr) + o(1)\Bigr] $$
uniformly in y, |y| < ε_n n and Δ ≥ ε_n.
Proof. The assertion of the corollary follows from Theorem 6.2.1 owing to the continuity of the function fα,1 and the fact that, as n → ∞, Δ e
−λ+ v
fα,1
0
% y+v y 1 − e−λ+ Δ dv = fα,1 bn λ+ bn
& + o(1)
uniformly in y and Δ. Thus, Theorem 6.2.1 and the asymptotics of the prob Corollary 6.2.3 describe (−1) abilities P Sn ∈ Δ[x) when |y| = O Vζ (1/n) . We will need another (−1)
approach when y Vζ
(1/n).
6.2.2 Large deviations and super-large deviations outside the Cramér zone: the case α ∈ (0, 2)

In this subsection, we will assume that x and n are such that
$$ y = x − nθ_+ → ∞, \qquad y ≫ σ(n) := V_ζ^{(−1)}(1/n), $$
where the tail Vζ (t) of the r.v. ζ = ξ (λ+ ) − θ+ is given in (6.2.7). Observe that the left tail of ζ decays at infinity exponentially fast, so that condition [ θ+ , Eξ (λ+ ) = θ+ and λ(θ) = λ+ ,
Λ(θ) = θλ+ − ln ϕ(λ+ ) = Λ(θ+ ) + λ+ y/n.
Theorem 6.2.4. Let conditions (6.2.1), (6.2.2) with α ∈ (0, 2) be met. Then, in the non-lattice case, for Δ = Δ_y → 0 slowly enough as y → ∞, we have the representation
$$ P(S_n ∈ Δ[x)) = Δ\,e^{−nΛ(θ)}\,\frac{nλ_+V(y)}{ϕ(λ_+)}\,(1 + o(1)) $$
(6.2.12)
as N → ∞, where the remainder term o(1) is uniform in y and n such that y max{N, n1/γ } for some fixed γ < α. In the arithmetic case, the representation (6.2.12) holds for Δ = 1 and integervalued x → ∞, provided that we replace the factor λ+ on the right-hand side of (6.2.12) by 1 − e−λ+ . Remark 6.2.5. Note that in this theorem, in contrast with Theorem 6.1.6 and Corollary 6.1.7, generally speaking, we did not assume that n → ∞.
312
Random walks with exponentially decaying distributions
Proof of Theorem 6.2.4. It follows from (6.2.6) that, for any Δ = o(t), P ξ (λ+ ) − θ+ ∈ Δ[t) 3 t+θ + +Δ =ϕ
−1
−1
=ϕ
(λ+ ) V (t + θ+ ) − V (t + θ+ + Δ) + λ+
4
V (u) du t+θ+
#
$ (λ+ ) V (t + θ+ ) − V (t + θ+ + Δ) + λ+ ΔV (t)(1 + o(1)) .
Since for Δ = o(t) one has V (t + θ+ ) − V (t + θ+ + Δ) = o V (t) , we obtain that the distribution of ζ = ξ (λ+ ) − θ+ has the property that λ+ V (t) # $ P ζ ∈ Δ[t) = (1 + o(1))Δ + o(1) . ϕ(λ+ )
(6.2.13)
In other words, the tail Vζ (t) := P(ζ t) (see (6.2.7)) satisfies condition [D(1,q) ] from § 3.7 in the form (3.7.2): # $ Vζ (t) − Vζ (t + Δ) = V1 (t) (1 + o(1))Δ + o q(t) with q(t) ≡ 1,
V1 (t) =
λ+ V (t) , ϕ(λ+ )
Vζ (t) ∼
tλ+ V (t) . αϕ(λ+ )
(6.2.14)
Hence we can use Theorem 3.7.1. Applying the theorem with γ0 = 0, we obtain that, for any Δ c, Δ = o(y), one has nλ+ V (y) (1 + o(1))Δ P Sn(λ+ ) − θ+ n ∈ Δ[y) = ϕ(λ+ )
(6.2.15)
as N → ∞, where the remainder term o(1) is uniform in y, n and Δ satisfying the inequalities y max{N, n1/γ } for some fixed γ < α and c Δ yεN for an arbitrary function εN ↓ 0 as N ↑ ∞. Since c is arbitrary, the relation (6.2.15) will still be true for Δ = Δy → 0 slowly enough. It only remains to use (6.2.4). The demonstration in the lattice case is quite similar. The theorem is proved. Corollary 6.2.6. Assume that (6.2.1), (6.2.2) hold true. In the non-lattice case, for any Δ Δy , where Δy → 0 slowly enough as y → ∞, one has e−nΛ(θ) nV (y) P Sn ∈ Δ[x) = 1 − e−λ+ Δ (1 + o(1)). ϕ(λ+ )
(6.2.16)
In particular, when Δ = ∞ we obtain P(Sn x) =
e−nΛ(θ) nV (y) (1 + o(1)) ϕ(λ+ )
(6.2.17)
6.2 Integro-local theorems: the index of V is from (−1, −3)
313
as N → ∞, where the remainders o(1) are uniform in y max{N, n1/γ } for any fixed γ < α, Δ Δy . In the arithmetic case, the relation (6.2.17) holds for integer-valued x → ∞. Proof. The proof of Corollary 6.2.6 is almost obvious from Theorem 6.2.4. Indeed, in the non-lattice case, for small Δ one has (6.2.16) and, moreover, for any fixed v > 0, v nΛ θ + = nΛ(θ) + λ+ v. n Therefore, for small Δ we have Δnλ+ V (y + v) −nΛ(θ) −λ+ v e P Sn ∈ Δ[x + v) = e (1 + o(1)), ϕ(λ+ ) and so for arbitrary Δ Δy nλ+ e−nΛ(θ) P Sn ∈ Δ[x) = ϕ(λ+ ) =ϕ
−1
Δ
V (y + v)e−λ+ v dv 1 + o(1)
0
(λ+ )nV (y)e−nΛ(θ) 1 − e−λ+ Δ (1 + o(1)), (6.2.18)
where the required uniformity of o(1) follows from Theorem 6.2.4 and obvious uniform bounds for the integral in (6.2.18). This proves (6.2.16) and (6.2.17). In the lattice case, the assertion (6.2.17) follows directly from Theorem 6.2.4. The corollary is proved.
6.2.3 Super-large deviations in the case α ∈ (0, 1) As already noted, in the case α ∈ (0, 1) one has θ+ = ∞ and therefore the probabilities of deviations of the form cn for any fixed c can be obtained within the framework of § 6.1. It turns out that, using the same approaches as in §§ 6.2.1 and 6.2.2, in the case α ∈ (0, 1) one can also find the asymptotics of P(Sn x) for x n. For Δ → 0 one has (cf. (6.1.4), (6.2.4)) (6.2.19) P Sn ∈ Δ[x) ∼ ϕn (λ+ )e−λ+ x P Sn(λ+ ) ∈ Δ[y) , where ζ := ξ (λ+ ) follows the distribution P(ζ ∈ dt) =
eλ+ t λ+ V (t) dt − dV (t) P(ξ ∈ dt) = , ϕ(λ+ ) ϕ(λ+ )
so that Vζ (t) := P(ζ t) =
λ+ tV (t) (1 + o(1)) = t−α Lζ (t), αϕ(λ+ )
Lζ (t) ∼
λ+ L(t) . αϕ(λ+ )
314
Random walks with exponentially decaying distributions
As in § 6.2.1, it follows from this that the scaled sums Zn /bn (see (6.2.8)) converge in distribution to the stable law Fα,1 with parameters (α, 1). Therefore, by virtue of Theorem 6.1.2, for Δ = Δn → 0 slowly enough, x Δ Δ +o , fα,1 P Zn ∈ Δ[x) = bn bn bn where fα,1 is the density of the distribution Fα,1 (recall that in the case of convergence to a stable law with α < 1, no centring of the sums Zn is needed). Returning to (6.2.19), we obtain, as in § 6.2.1, the following assertion. Theorem 6.2.7. Let conditions (6.2.1), (6.2.2) with α ∈ (0, 1) be satisfied and let x n. Then, in the non-lattice case, for Δ = Δn → 0 slowly enough as n → ∞ one has % & Δ n x −λ+ x ϕ (λ+ )e fα,1 + o(1) , P Sn ∈ Δ[x) = bn bn where the bn were defined in (6.2.8). In the arithmetic case, for integer-valued x,
% 1 n x −λ+ x P(Sn = x) = ϕ (λ+ )e fα,1 bn bn
Corollary 6.2.8. In the non-lattice case, as n → ∞, % x ϕn (λ+ )e−λ+ x fα,1 P(Sn x) = λ+ bn bn
& + o(1) . & + o(1) .
The above assertions could be made uniform in the same way as in § 6.2.1. They give exact asymptotics of the desired probabilities only when x is comparable with bn ∼ n1/α Lb (n), where Lb is an s.v.f. If x grows at a faster rate then the following analogue of Theorem 6.2.4 should be used. Let γ < α be an arbitrary fixed number. Theorem 6.2.9. Let conditions (6.2.1), (6.2.2) with α ∈ (0, 1) be satisfied. Then, as N → ∞, for x max{N, n1/γ } and Δ = Δx → 0 slowly enough, in the non-lattice case one has (6.2.20) P Sn ∈ Δ[x) = Δϕn−1 (λ+ )e−λ+ x nλ+ V (x)(1 + o(1)). In the arithmetic case, the assertion (6.2.20) remains true for integer-valued x and Δ = 1, provided that we replace the factor λ+ on the right-hand side of (6.2.20) by 1 − e−λ+ . Corollary 6.2.10. Under conditions (6.2.1), (6.2.2), for α ∈ (0, 1), N → ∞ and x max{N, n1/γ } one has P(Sn x) = ϕn−1 (λ+ )e−λ+ x nV (x)(1 + o(1)). We will omit the proofs of Theorem 6.2.9 and Corollary 6.2.10 as they are completely analogous to the proofs of Theorem 6.2.4 and Corollary 6.2.6 respectively.
6.3 Integro-local theorems for the sums Sn when the Cramér transform for the summands has a finite variance at the right boundary point

The structure of this section is similar to that of § 6.2. First we will consider deviations y = x − nθ+ = O(√n) under the assumption that ϕ''(λ+) < ∞ (here and elsewhere in the present chapter, by the derivatives at the end points λ± we understand the respective one-sided derivatives) and then, in the second part of the section, turn to deviations y ≫ √(n ln n) under conditions (6.2.1), (6.2.2), α > 2. Note that in the first part of the section the assumptions (6.2.1), (6.2.2) will not be needed.
6.3.1 Large deviations on the boundary of the Cramér zone

We will assume in this subsection that ϕ''(λ+) < ∞ (under the assumptions of § 6.2 this condition was not met). Approaches to studying the asymptotics of P(Sn ∈ Δ[x)) in this case are close to those used in § 6.1 to prove Theorem 6.1.6. The difference is that here we will use the Cramér transform at the fixed point λ+ (as in § 6.2). For the r.v. ζ = ξ^{(λ+)} − θ+ we have

Ee^{λξ^{(λ+)}} = ϕ(λ + λ+)/ϕ(λ+),   Eζ = 0,
b := Eζ² = ϕ''(λ+)/ϕ(λ+) − (ϕ'(λ+)/ϕ(λ+))² < ∞.   (6.3.1)
Theorem 6.3.1. Let ϕ''(λ+) < ∞. Then, in the non-lattice case, for y = x − θ+n = o(n) and Δ = Δn → 0 slowly enough as n → ∞, one has

P(Sn ∈ Δ[x)) = (Δ e^{−nΛ(θ+)−λ+y}/√(2πnb)) [e^{−y²/(2bn)} + o(1)],   (6.3.2)

where the remainder o(1) is uniform in x (in y). In the arithmetic case, for integer-valued x one has

P(Sn = x) = (e^{−nΛ(θ+)−λ+y}/√(2πnb)) [e^{−y²/(2bn)} + o(1)].

Recall that, for θ ≥ θ+, λ(θ) = λ+ and

e^{−nΛ(θ)} = e^{−nΛ(θ+)−λ+y} = ϕ^n(λ+) e^{−λ+x}.

The assertion of Theorem 6.3.1 extends, in the case ϕ''(λ+) < ∞, the applicability zone of Theorem 6.1.6 up to the boundary values θ = x/n ∼ θ+. We also have the following analogue of Corollary 6.1.7.
Random walks with exponentially decaying distributions
Corollary 6.3.2. In the non-lattice case, for any Δ ≥ Δn, where Δn → 0 slowly enough, one has

P(Sn ∈ Δ[x)) = (e^{−nΛ(θ+)−λ+y−y²/(2bn)}/(λ+√(2πnb))) (1 − e^{−λ+Δ})(1 + o(1))   (6.3.3)

uniformly in y, |y| ≤ c√n. In particular, for Δ = ∞,

P(Sn ≥ x) ∼ e^{−nΛ(θ+)−λ+y−y²/(2bn)}/(λ+√(2πnb)).   (6.3.4)

In the arithmetic case, for integer-valued x one has

P(Sn ≥ x) ∼ e^{−nΛ(θ+)−λ+y−y²/(2bn)}/((1 − e^{−λ+})√(2πnb)).

Corollary 6.3.2 follows from Theorem 6.3.1 in an obvious way.

Proof of Theorem 6.3.1. For the distribution of ζ = ξ^{(λ+)} − θ+ we have the relations (6.3.1), which imply that b = Var ζ < ∞. Hence the distribution of the suitably normalized sums Zn = ζ1 + ··· + ζn converges to the normal law, and the integro-local theorem 6.1.2 holds true in the non-lattice case:

P(Zn ∈ Δ[y)) = (Δ/√(2πnb)) (e^{−y²/(2nb)} + o(1))   (6.3.5)

uniformly in Δ ∈ [Δ1, Δ2]. It only remains to make use of (6.2.4). In the arithmetic case, the proof is similar. The theorem is proved.
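Theorem 6.3.1 rests on the integro-local normal approximation (6.3.5) for the tilted sums. As a numerical sanity check (not from the book; the three-point lattice step below is an arbitrary stand-in for ζ), one can compare the exact distribution of such a walk, obtained by convolution, with the Gaussian density in (6.3.5):

```python
import numpy as np

# Toy aperiodic lattice step on {-1, 0, 1} with mean 0 and b = Var = 0.6
# (an arbitrary stand-in for the tilted increments zeta).
step = np.array([0.3, 0.4, 0.3])
b, n = 0.6, 400

pmf = np.array([1.0])
for _ in range(n):
    pmf = np.convolve(pmf, step)    # pmf[i] = P(Z_n = i - n)

for y in (0, 10, 20, 30):           # y up to about 2 * sqrt(n b)
    exact = pmf[y + n]
    gauss = np.exp(-y**2 / (2 * n * b)) / np.sqrt(2 * np.pi * n * b)
    print(y, exact, gauss, exact / gauss)
```

On this example the exact lattice probabilities and the Gaussian approximation agree closely throughout the central zone |y| = O(√(nb)).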
6.3.2 Large deviations outside the Cramér zone
We will assume in this section that y = x − nθ+ ≫ √(n ln n). Such deviations are large for Zn, and so the relations (6.3.5) are not meaningful for them. However, we can now use the integro-local theorems from § 4.7. As a result, we can prove the following assertion.

Theorem 6.3.3. Let conditions (6.2.1), (6.2.2) with α > 2 be satisfied, and let y = x − nθ+ ≫ √(n ln n). Then, in the non-lattice case with Δ = Δy → 0 slowly enough as y → ∞, we have

P(Sn ∈ Δ[x)) = Δ e^{−nΛ(θ)} (nλ+V(y)/ϕ(λ+)) (1 + o(1))   (6.3.6)

as N → ∞, where the term o(1) is uniform in y and n such that y ≥ N√(n ln n). In the arithmetic case, the assertion (6.3.6) holds true for integer-valued x and Δ = 1, if we replace the factor λ+ on the right-hand side of (6.3.6) by 1 − e^{−λ+}.

Remark 6.2.5 following Theorem 6.2.4 is valid for the above theorem as well.
Proof. The proof of Theorem 6.3.3 essentially repeats that of Theorem 6.2.4. Under the new conditions, we still have the representation (6.2.13), and hence condition [D(1,q)] of § 3.7 holds in the form (3.7.2), where q and V1 are given in (6.2.14). Therefore the conditions of Theorem 4.7.1 are met and so, by virtue of that theorem, for Δ = Δy → 0 slowly enough as y → ∞, one has

P(Zn ∈ Δ[y)) = Δ n V1(y)(1 + o(1)),   V1(t) = λ+V(t)/ϕ(λ+),

as N → ∞, where the term o(1) is uniform in y and n such that y ≥ N√(n ln n). It remains to make use of (6.2.4). The proof in the lattice case is completely analogous. The theorem is proved.

Theorem 6.3.3 implies the following.

Corollary 6.3.4. Under the conditions of Theorem 6.3.3, for any Δ ≥ Δy,

P(Sn ∈ Δ[x)) = (e^{−nΛ(θ)} nV(y)/ϕ(λ+)) (1 − e^{−λ+Δ})(1 + o(1)).

In particular,

P(Sn ≥ x) = (e^{−nΛ(θ)} nV(y)/ϕ(λ+)) (1 + o(1))

as N → ∞, where the remainder o(1) is uniform in y ≥ N√(n ln n). The assertion remains true in the lattice case as well.
Assertions that are close to the theorems and corollaries of §§ 6.2 and 6.3 were obtained for narrower deviation zones in [27] (Lemmata 2, 3).

Remark 6.3.5. Observe that, under the assumptions of the present chapter, the integro-local theorems could be obtained from the corresponding integral theorems (provided that the latter are available). Indeed, for y = x − θ+n > 0,

e^{−nΛ(θ)} = e^{−nΛ(θ+)−λ+y},

and therefore P(Sn ≥ x) decays in x exponentially fast (in the presence of other, more slowly varying, factors). Hence, for any fixed Δ > 0, the asymptotics of the probability P(Sn ∈ Δ[x)) will differ from that of P(Sn ≥ x) just by the constant factor 1 − e^{−λ+Δ}. So the headings of §§ 6.2 and 6.3 reflect the methodological side of the subject rather than its essence, since the assertions in these sections were obtained with the help of the integro-local theorems of Chapters 3 and 4. Theorems 6.2.4 and 6.3.3 show that the integro-local theorems of §§ 3.7 and 4.7, together with the Cramér transform, are effective tools for studying large deviation probabilities outside the Cramér zone.
In concluding this section we observe that, under the conditions assumed in it, one can also obtain asymptotic expansions in the integro-local and integral theorems, in a way similar to that used in § 4.4 (see also [64]). 6.4 The conditional distribution of the trajectory {Sk } given Sn ∈ Δ[x) The present section is an analogue of § 4.9. For F ∈ ER, the conditional distribution of the trajectory {Sk ; 1 k n} given Sn x and that given Sn ∈ Δ[x) will not differ much from each other. So, for convenience of exposition, we will restrict ourselves to conditioning on the events {Sn ∈ Δ[x)} only. First observe that, in the Cram´er deviation zone (when θ := x/n < θ+ ), in √ contrast with the results of § 4.9, for x n → ∞ the process {x−1 Snt ; t ∈ [0, 1]} given Sn ∈ Δ[x) converges in distribution to the deterministic process {ζ1 (t) ≡ t; t ∈ [0, 1]}. This is the ‘first-order approximation’ in trajectory space. The ‘second-order approximation’ states that the (conditional) process {n−1/2 (Snt − xt); t ∈ [0, 1]} given Sn ∈ Δ[x) converges in distribution to the Brownian bridge process {w(t) − tw(1)}, where {w(t)} is the standard Wiener process (see [45, 47]). In the present section, we will show that, outside the Cram´er deviation zone, when y = x − θ+ n → ∞, we have for the conditional distributions of the sums {Sk } of r.v.’s with distributions from the class ER a picture that is intermediate between two absolutely different behaviour types, that described above and that established in § 4.9. As in § 4.9, let E(·) denote a step process on [0, 1]:
0 for t ω, E(t) := 1 for t > ω, where ω is an r.v. uniformly distributed over [0, 1]. Theorem 6.4.1. Let conditions (6.2.1), (6.2.2) be satisfied, Δ > 0 be an arbitrary fixed number, n → ∞ and one of the following two conditions hold true: (1) α ∈ (1, 2), y = x − θ+ n n1/γ , where γ < α is some fixed number; √ (2) α > 2, y = x − θ+ n n ln n. Then the conditional distribution of the process −1 y (Snt − θ+ nt); t ∈ [0, 1] given the event Sn ∈ Δ[x) will converge weakly in D(0, 1) to the distribution of E(t). Proof. We will present only a sketch of the proof. It is not difficult to give a more detailed and formal argument, using [45, 47] and Chapters 3 and 4. The Cram´er transform of the distribution of the sequence S1 , . . . , Sn at the point λ+ has the property that the distribution of the process {Snt } in the zone (λ )
+ of deviations of order x coincides with the distribution of {Snt − θ+ nt } (up
6.5 The probability of the crossing of a remote boundary
319
to a factor ϕn (λ+ )e−λ+ x , cf. (6.2.4), which then disappears when we switch to the conditional distribution given Sn ∈ Δ[x)). The conditional distribution of (λ+ ) the process {Snt − θ+ nt } given Sn ∈ Δ[x) coincides with the conditional distribution of (λ )
+ − θ+ nt Znt := Snt
given Zn ∈ Δ[y), where y = x − θ+ n. Now, the r.v.’s ζ = ξ (λ+ ) − θ+ have a regularly varying right tail Vζ and an exponentially fast decaying left tail. Hence we can make use of Theorems 4.9.2 and 4.9.4 to describe the conditional distribution of Znt given Zn ∈ Δ[y). Therefore, we will obtain the conditional behaviour of the original trajectory Snt by taking the sum of the trajectory θ+ t and the trajectory described in Theorems 4.9.2 and 4.9.4. This completes the proof of the theorem. The assertion of Theorem 6.4.1 means that the trajectory n−1 Snt given the event Sn ∈ Δ[x) can be represented as n−1 Snt = θ+ t + yn−1 E(t) + rn (t), where rn = O(n−1 b(n)) and (−1) V (1/n) when α ∈ (1, 2), b(n) = √ζ n when α > 2. As in § 4.9, one could also obtain here an approximation of higher order by determining the behaviour of the remainder rn . Namely, as in Theorem 4.9.3, one can show that conditioning on Sn ∈ Δ[x) results in the weak convergence $ 1 # Snt − θ+ nt − yEωn (t) ⇒ ζ(t) − ζ(1)E(t), b(n) where the ωn have the same meaning as in Theorem 4.9.3, but refer to the r.v.’s ζi , and ζ(t) is the corresponding stable process; {ζ(t)} and {E(t)} are independent.
6.5 Asymptotics of the probability of the crossing of a remote boundary by the random walk 6.5.1 Introduction As in Chapters 3 and 4, we will study here the asymptotics of the probability P(Gn ) of the crossing of a remote boundary g(·) = {g(k); k = 1, . . . , n} by the trajectory {Sk ; k = 1, . . . , n}: ( ) Gn := max(Sk − g(k)) 0 . kn
Without losing generality, we will assume in this section that Eξ = 0. We have a situation similar to the one that we had when studying the asymptotics of P(Sn x). When the boundary g(·) lies, roughly speaking, in the
320
Random walks with exponentially decaying distributions
Cram´er deviation zone, the asymptotics of P(Gn ) have been studied fairly thoroughly (see e.g. [36, 37, 38, 44]). If, for instance, g(n) = x, g(k) = ∞ for k < n, θ=
x ϕ (λ+ ) < θ+ ≡ n ϕ(λ+ )
and λ+ = sup{λ : ϕ(λ) < ∞}
then for P(Gn ) = P(Sn x) we obtain the asymptotics (6.1.16), (6.1.17). The asymptotic behaviour of P(S n x) and also that of the joint distribution P(S n x, Sn < x − y) for such x values was studied in detail in [33, 34, 37] (for lattice r.v.’s ξ, and also when the distribution of ξ has an absolutely continuous component). In the general case, first assume for simplicity that the boundary g(·) has the form g(k) = xf (k/n),
k = 1, . . . , n,
inf f (t) > 0,
0t1
(6.5.1)
where f (t), t ∈ [0, 1], is a fixed piecewise continuous function. As shown in [36], a determining role for the behaviour of the probabilities P(Gn ) is played by the level lines pθ (t), t ∈ [0, 1], pθ (1) = θ, which are given implicitly as solutions to the equation tΛ(pθ (t)/t) = Λ(θ),
t ∈ [0, 1],
(6.5.2)
for fixed values of the parameter θ. The relation (6.5.2) is obtained by equating the expressions kΛ(pθ (k/n)n/k) and nΛ(θ). Owing to the well-known logarithmic asymptotics [80] − ln P(Sk y) ∼ kΛ(y/k)
as
k → ∞,
(6.5.3)
the above values are the principal terms of the asymptotics for the quantities − ln P Sk pθ (k/n)n and − ln P(Sn θn) respectively (cf. (6.1.16), (6.1.17)). Therefore, any two points on a given level line pθ (t), t ∈ [0, 1], have the property that, for a suitably scaled random walk (for which both the time and the space coordinates are ‘compressed’ n times each), the probabilities that these points will be reached (in the respective times) are roughly the same. As was shown in [36], the functions pθ (·) are concave and θt pθ (t) θ for t ∈ [0, 1]. Moreover, in the general case, on the left of the point
" Λ(θ) tθ := min 1, (6.5.4) Λ(θ+ ) the function pθ is linear with a slope coefficient, whose value does not depend on θ: pθ (t) = λ−1 + Λ(θ) + g+ t,
t ∈ [0, tθ ],
g+ := λ−1 + ln ϕ(λ+ ),
(6.5.5)
6.5 The probability of the crossing of a remote boundary
321
whereas on the right of tθ this function is analytic with a derivative continuous at tθ . The deviation function Λ(θ) increases for θ > 0, so that for θ θ+ one has tθ = 1 owing to (6.5.4), and hence (6.5.5) holds for all t ∈ [0, 1]. This also implies that pθ (t) is an increasing function of θ for θ > 0. Moreover, it is obvious from (6.5.2) and (6.5.4) that t ∈ [0, tθ )
iff
pθ (t)/t > θ+ .
(6.5.6)
Now let θ∗ > 0 = Eξ be the minimum value of θ, for which a curve from the family {pθ (·); θ > 0} will ‘touch’ the scaled boundary xn−1 f (·); cf. (6.5.1). Then, using the well-known logarithmic asymptotics (6.5.3) and the crude bounds n max P Sk xf (k/n) P(Gn ) P Sk xf (k/n) kn
k=1
n max P Sk xf (k/n) kn
(or the results of [36]), one can easily verify that, as n → ∞, ln P(Gn ) ∼ −nΛ(θ∗ ).
(6.5.7)
In other words, the probability that the random walk will cross the boundary (6.5.1) has the same logarithmic asymptotics as the probability that the random walk will be above the point g(k) = xf (k/n) = npθ∗ (k/n) at the time k := nt∗ , where t∗ := inf{t > 0 : pθ∗ (t) = xn−1 f (t)} is the point where the level lines ‘touch’ the scaled boundary for the first time. In the case when t∗ lies in the ‘regularity interval’ (tθ∗ , 1) (i.e. when we are within the Cram´er deviation zone), the exact asymptotics of P(Gn ) were obtained in [36] under rather broad assumptions on f . In the present chapter, we are dealing with an alternative situation, t∗ ∈ (0, tθ∗ ),
(6.5.8)
and assuming, as in §§ 6.2 and 6.3, that λ+ < ∞,
ϕ(λ+ ) < ∞.
(6.5.9)
In this case, the nature of the asymptotics of P(Gn ) will be quite different from those in the case when the boundary g(·) lies within the Cram´er deviation zone. Obtaining the asymptotics of P(Gn ) in the case (6.5.8) is more difficult than in the Cram´er case t∗ ∈ (tθ∗ , 1); moreover, in the present situation there exists no common (universal) law for P(Gn ). As we have already noted, most often one encounters in applications boundaries that belong to one of the following two main types: (1) g(k) = x + gk , k 1, where the sequence {gk } depends neither on x nor on n;
322
Random walks with exponentially decaying distributions
(2) g(k) = xf (k/n), where f (t) > 0 is a function on [0, 1] that depends neither on x nor on n. In this section, we will somewhat change the form in which we represent the boundaries {g(k)} and will write k = 1, 2 . . . ;
g(k) = x + gk,x,n ,
this includes representations (1) and (2) as special cases. In addition, in each of the typical cases to be dealt with below we will impose on {gk,x,n } a few specific conditions. Recall that 1 ln ϕ(α+ ) > 0 (6.5.10) g+ = λ+ (because ϕ (0) = Eξ = 0 and ϕ(λ) is a convex function). The above-mentioned ‘typical cases’ of boundaries refer to situations where the level lines first touch the boundary at the left endpoint, at the right endpoint, or at a middle point, respectively. They are characterized by the following sets of conditions [A ]. [A0 ]
For some fixed k0 1 and γ > 0, gk,x,n (g+ + γ)k
for
k > k0
(6.5.11)
for all large enough x and n, and there exists a fixed number a (independent of x and n) such that, for any fixed k 1, gk,x,n → ak
as
x, n → ∞.
(6.5.12)
Note that, owing to (6.5.11), a g+ + γ necessarily holds and also that condition (6.5.12) actually stipulates convergence on an initial segment of the boundary {gk,x,n ; k = 1, . . . , n} (the relation (6.5.12) holds uniformly in k N, where the value N = N (x, n) → ∞ grows slowly enough as x, n → ∞, so that for, √ say, k > n the values of gk,x,n could be quite far from ak). Condition (6.5.12) could be referred to as asymptotic local linearity of the boundary at zero. An example of a boundary for which condition [A0 ] is met is given by a boundary of the form (6.5.1) with x ∼ cn, f (0) = 1 and f (t) − f (0) c−1 (g+ + γ)t and such that the function f (t) has a finite right derivative f (0) > c−1 (g+ + γ) at t = 0 (in this case a = cf (0)). The next condition has a similar form but refers to the terminal segment of the boundary {gk,x,n ; k = 1, . . . , n}. [An ]
For some fixed k0 1 and γ > 0, gn−k,x,n −(g+ − γ)k
for k > k0
6.5 The probability of the crossing of a remote boundary
323
for all large enough x and n, gn,x,n = 0 and there exists a fixed value b (independent of x and n) such that, for any fixed k 1, gn−k,x,n → −bk
as
x, n → ∞.
(6.5.13)
Remarks similar to those we made regarding condition [A0 ] apply to [An ] as well (in particular, that b g+ − γ). The case when the level lines first touch the boundary inside the temporal interval is described by the following combination of the above conditions. [Am ] There exists a sequence m = m(n) → ∞ such that n − m → ∞, gm,x,n = 0 and the sequences {gm+k,x,n ; k 1}
and {gm−k,x,n ; k 1}
possess the properties stated in conditions [A0 ] and [An ] respectively. Observe that, in the above-listed cases, the level lines pθ∗ (t) ‘touch’ the boundary n−1 g( nt ) at a non-zero angle. To get an idea of what happens in the case of a smooth contact of these lines, we will consider the following condition: [A0,n ]
g(k) = x + g+ k,
k = 1, . . . , n.
The analysis of the cases [A ] below will show how diverse the asymptotic behaviour of P(Gn ) can be. Including in our considerations the general case of ‘smooth contact’ of the level lines with the boundary would lead to an even wider variety of different types of asymptotic behaviour and to much more complicated proofs. At the same time, as we will see, under conditions [A ] the case of an arbitrary boundary can often be reduced to that of a linear boundary. 6.5.2 Boundaries under condition [A0 ] In this case, the boundary g(·) increases so fast that the most likely scenario for its crossing is that the random walk exceeds the boundary at the very beginning of the time interval. Hence the asymptotics of P(Gn ) will be the same as that of P(G∞ ), the probability that the boundary g(·) will ever be crossed. Prior to stating the main result observe that, when a > g+ , for the random walk Sk (a) := Sk − ak,
k = 0, 1, . . . ,
generated by the r.v.’s ξj (a) := ξj − a, we have Eξj (a) := ξj − a < −g+ < 0 (see (6.5.10)) and therefore S(a) = sup Sk (a) < ∞
a.s.
k0
Hence the first ladder epoch ηa := η(0+, a) = inf{k 1 : Sk (a) > 0}
(6.5.14)
324
Random walks with exponentially decaying distributions
is an improper r.v., P(ηa < ∞) < 1. Since any r.v.f. is an upper-power function (see Definition 1.2.20, p. 28), the following assertion holds true for a distribution class that is wider than ER. Theorem 6.5.1. Assume that the following conditions are met: F+ (t) = e−λ+ t V (t),
ϕ(λ+ ) < ∞,
V (t) is upper-power.
(6.5.15)
If condition [A0 ] holds then, as x → ∞ and n → ∞, P(Gn ) ∼ c(a)F+ (x),
(6.5.16)
1 − P(ηa < ∞) # $ . 1 − E exp{λ+ Sηa (a)}; ηa < ∞
(6.5.17)
where c(a) :=
eλ+ a
−
e−λ+ g+
We will need two lemmata to prove the theorem. The first follows immediately from Theorem 12, Chapter 4 of [42]. Lemma 6.5.2. If (6.5.15) holds and ϕ(a) (λ+ ) := Eeλ+ ξ(a) < 1 then, as x → ∞, P(S(a) x) ∼ c(a)F+ (x), where c(a) is given in (6.5.17). Lemma 6.5.3. Assume that (6.5.15) holds true. Then, for any a > g+ , we have ϕ(a) (λ+ ) = e−λ+ (a−g+ ) < 1 and, for any fixed N 1, as x → ∞, −1 P(SN (a) x) ∼ N ϕN (a) (λ+ )F+ (x)
and
(6.5.18)
P sup Sk (a) x k>N
cN ϕN (a) (λ+ )F+ (x)(1 + o(1)).
(6.5.19)
Proof of Lemma 6.5.3. From the definition of g+ = λ−1 + ln ϕ(λ+ ) we have ϕ(a) (λ+ ) = e−λ+ a ϕ(λ+ ) = e−λ+ (a−g+ ) < 1 for
a > g+ .
Further, repeating the argument from Example 1.2.11 (p. 19), we can establish that P(S2 (a) x) ∼ 2ϕ(a) (λ+ )F+ (x)
as
x→∞
(see (1.2.11)). The demonstration of (6.5.18) is completed by induction, in exactly the same way as in the above-mentioned computation in Example 1.2.11. Now we will prove (6.5.19). Clearly d
sup Sk (a) = SN +1 (a) + S,
k>N
6.5 The probability of the crossing of a remote boundary
325
d
where S = S(a) is an r.v. independent of SN +1 (a). Owing to Lemma 6.5.2 and the fact that V (t) is an l.c. function, we have P(S t) cP(ξ t) c1 P(ξ − a t),
t 0,
whereas for t < 0 the above inequality is obvious. Therefore, by virtue of (6.5.18)
P sup Sk (a) x = P(SN +1 (a) ∈ dt)P(S x − t) k>N c1 P(SN +1 (a) ∈ dt)P(ξ − a x − t) = c1 P(SN +2 (a) x) +1 c2 (N + 2)ϕN (a) (λ+ )F+ (x)(1 + o(1)).
The lemma is proved. Proof of Theorem 6.5.1. Fix an N k0 and put ε = εx,n (N ) := max |gk,x,n − ak|. kN
Clearly, for n N ,
P S(a) x + ε − P sup Sk (a) x + ε k>N
P sup Sk (a) x + ε kN
P(Gn ) P sup Sk (a) x − ε + P sup Sk (g+ + γ) x kN
k>N
P(S(a) x − ε) + cN e−λ+ γN F+ (x)(1 + o(1)), owing to (6.5.19) and the obvious equality ϕ(g+ +γ) (λ+ ) = e−λ+ γ . Again using Lemmata 6.5.2 and 6.5.3, we obtain, after dividing the left- and right-hand sides of the previous relation by c(a)F+ (x), the inequality F+ (x + ε) (1 + o(1)) − cN e−λ+ γN F+ (x) F+ (x − ε) P(Gn ) (1 + o(1)) + cN e−λ+ γN , c(a)F+ (x) F+ (x) since a − g+ γ. By choosing a large enough N one can make the term N e−λ+ γN arbitrarily small. To complete the proof of the theorem, it remains for us to notice that F+ (x ± ε) V (x ± ε) = e∓λ+ ε → 1, F+ (x) V (x) since, for any fixed N , one has that ε → 0 as x, n → ∞ owing to (6.5.12), and since V (x) is an l.c. function.
326
Random walks with exponentially decaying distributions
6.5.3 Boundaries under condition [An ] It follows from what we said in § 6.5.1 that, in the case [An ], the ‘most likely’ crossing time of the boundary g(·) (given that event Gn has occurred) will be close to the right-end point n. Recall that [An ] implies the inequality b g+ − γ. Let (λ+ )
τk := −ξk
+ b,
Tk := τ1 + · · · + τk .
(6.5.20)
Clearly, Eτ1 = b − θ+ < g+ − θ+ =
1 Λ(θ+ ) (ln ϕ(λ+ ) − λ+ θ+ ) = − < 0. λ+ λ+ (6.5.21)
Therefore T := sup Tk < ∞
a.s.,
(6.5.22)
k0
and we have from (6.1.1) that ϕτ (λ) := Eeλτ1 = eλb
ϕ(λ+ − λ) ϕ(λ+ )
and hence ϕτ (λ+ ) = e−λ+ (g+ −b) e−λ+ γ < 1.
(6.5.23)
Now put y := x − θ+ n ≡ (θ − θ+ )n. Theorem 6.5.4. For α > 1, let F+ (t) = e−λ+ t V (t),
V (t) = t−α−1 L(t),
L(t) be an s.v.f.,
(6.5.24)
let condition [An ] be met, n → ∞ and s > 0 be fixed. Then, for α ∈ (1, 2), uniformly in y n1/α for any fixed α < α, one has P(Gn ; Sn x − s) = e−nΛ(θ)
$ nV (y) # λ+ T ; T < s + eλ+ s P(T s) (1 + o(1)) (6.5.25) E e ϕ(λ+ )
and P(Gn ) = e−nΛ(θ)
nV (y) Eeλ+ T (1 + o(1)), ϕ(λ+ )
Eeλ+ T < ∞.
(6.5.26)
√ In the case α > 2, these relations hold uniformly in y Nn n ln n for any fixed sequence Nn → ∞. Proof. To simplify the argument, we will confine ourselves to considering the special case of a linear boundary g(k) = x − b(n − k),
k = 1, . . . , n,
b g+ − γ.
(6.5.27)
327
6.5 The probability of the crossing of a remote boundary
The changes that need to be made in the proof in the case of general (asymptotically locally linear) boundaries satisfying condition [An ] amount to adding an argument similar to that used in the proof of Theorem 6.5.1. First we will show that, if the random walk does cross the linear boundary (6.5.27) during the time interval [0, n] then, with a high probability, this will occur at the very end of the interval. For j n (the choice of j will be made later), introduce the event ( ) Gn,j := max (Si − g(i)) 0 . n−jin
On the one hand, it is obvious that P(Gn ) P(Gn,j ) P(Sn x). On the other hand, for the event difference Gn,j := Gn \ Gn,j , we have from Corollaries 6.2.6 and 6.3.4 the following bound:
n−j
P(Gn,j )
n−j
P(Sk g(k)) cnV (y)
k=1
= cnV (y)e−nΛ(θ)
n−j k=1
e−kΛ(g(k)/k)
k=1
" g(k) . exp nΛ(θ) − kΛ k
(6.5.28)
Since both the argument values, θ and g(k)/k > θ+ , of the function Λ in (6.5.28) lie, in the case under consideration, in the interval where this function is linear (see (6.2.11)), we conclude that the argument of the exponential on the right-hand side of (6.5.28) is equal to % & % & x x − b(n − k) n λ+ − ln ϕ(λ+ ) − k λ+ − ln ϕ(λ+ ) n k = λ+ (b − g+ )(n − k) −λ+ γ(n − k). Therefore
n−j
P(Gn,j )
−nΛ(θ)
cnV (y)e
e−λ+ γ(n−k) c1 e−λ+ γj P(Sn x),
k=1
by virtue of Corollaries 6.2.6 and 6.3.4. By choosing a large enough j, this probability could be made arbitrarily small relative to P(Sn x) and hence also relative to P(Gn ) and P(Gn ; Sn x − s) c P(Sn x − s). This means that we only have to evaluate the probabilities P(Gn,j ) and P(Gn,j ; Sn x − s). Next we observe that, for any s > 0, one has 0 P(Gn,j ; Sn x − s) = P(Sn x) +
P(Gn,j ; Sn ∈ x + du).
(6.5.29)
−s
In the range of x values with which we are dealing, one can make the Cram´er
328
Random walks with exponentially decaying distributions
transform according to (6.1.1) and switch to the random walk {Zi }, which was introduced in (6.2.5) and (6.2.8). Then, for u < 0, putting sk := x1 + · · · + xk , and
( A := (x1 , . . . , xn ) :
k = 1, 2, . . . ,
) max (sk − g(k)) > 0, sn ∈ x + du ,
n−jkn
we obtain the representation P(Gn,j ; Sn ∈ x + du) = P(ξ1 ∈ dx1 , . . . , ξn ∈ dxn ) A
= ϕ (λ+ ) n
(λ+ )
e−λ+ sn P(ξ1
∈ dx1 , . . . , ξn(λ+ ) ∈ dxn )
A
= e−nΛ(θ) e−λ+ u P(Hn,j | Zn = y + u) P(Zn ∈ y + du), where the event Hn,j :=
(
(6.5.30)
)
max (Zi − h(i)) 0
n−jin
means that the random walk {Zi } crossed the boundary h(i) := g(i) − θ+ i = y + (θ+ − b)(n − i) =: y + hn,i ,
i = 1, . . . , n, (6.5.31) during the time interval [n − j, n]. Note that the slope coefficient of this linear boundary is negative (see (6.5.21)). Thus the integral in (6.5.29) takes the form −nΛ(θ)
0
e
e−λ+ u P(Hn,j | Zn = y + u) P(Zn ∈ y + du).
(6.5.32)
−s
Since we have not assumed that V (t) is absolutely continuous (even for large enough t), we will now have to approximate the last integral by a finite sum. Fix a Δ > 0 such that k := s/Δ is an integer, and put ym := y − mΔ, m = 1, 2, . . . It is clear that the relative error of approximating the integral in (6.5.32) by the sum Σ :=
k
eλ+ mΔ P(Hn,j | Zn ∈ Δ[ym )) P(Zn ∈ Δ[ym ))
(6.5.33)
m=1
does not exceed max eλ+ mΔ − eλ+ (m−1)Δ λ+ eλ+ s Δ mk
and so could be made uniformly arbitrarily small by choosing a small enough Δ.
329
6.5 The probability of the crossing of a remote boundary Now consider the conditional probabilities
P(Hn,j | Zn ∈ Δ[ym ))
= P max Zi − y − hn,i 0 Zn ∈ Δ[ym ) n−jin
P max Zi − Zn − (m − 1)Δ − hn,i 0 Zn ∈ Δ[ym ) n−jin
(n) (6.5.34) = P max Ti (m − 1)Δ Zn ∈ Δ[ym ) , ij
(n)
where Ti
(n)
is the time-reversed random walk with jumps τi
(n)
Ti
(n)
:= τ1
(n)
+ · · · + τi
= Tn − Tn−i ,
:= τn−i+1 :
i = 0, 1, . . . , n.
Observe that the event under the last probability sign is expressed in terms of the r.v.’s ζn−j+1 , ζn−j+2 , . . . , ζn only (see (6.2.5), (6.2.8)), whereas their joint conditional distribution given Zn ∈ Δ[ym ) will be close to the unconditional distribution of ζ1 , . . . , ζj . Indeed, following the remark made at the beginning of § 4 in [45], for any fixed bounded Borel set B we have P(ζn ∈ B| Zn ∈ Δ[ym )) = P(ζn ∈ du| Zn ∈ Δ[ym )) B
P(ζn ∈ du)
= B
P(Zn−1 ∈ Δ[ym − u)) → P(ζ1 ∈ B), P(Zn ∈ Δ[ym ))
because the ratio of the probabilities in the last integrand converges to unity, owing to Corollaries 6.2.6 and 6.3.4. One can show in the same way that for any fixed j the joint conditional distribution of the random vector (ζn−j+1 , ζn−j+2 , . . . , ζn ) given Zn ∈ Δ[ym ) tends to the unconditional distribution of (ζ1 , . . . , ζj ). Therefore
(6.5.35) P(Hn,j | Zn ∈ Δ[ym )) P max Ti (m − 1)Δ (1 + o(1)). ij
We can establish in a similar way that
P(Hn,j | Zn ∈ Δ[ym )) P max Ti mΔ (1 + o(1)). ij
Note that
P max Ti u = P(T u) + εj , ij
(6.5.36)
(6.5.37)
where εj = εj (u) → 0 as j → ∞ uniformly in u ∈ [0, s], since T < ∞ a.s. Now combining (6.5.35) and (6.5.37) and using Theorems 6.2.4 and 6.3.3, we obtain for the sum (6.5.33) the upper bound Σn
k # $ λ+ V (y) Δeλ+ mΔ P(T (m − 1)Δ) + εj (1 + o(1)). ϕ(λ+ ) m=1
330
Random walks with exponentially decaying distributions
Now, fixing an arbitrarily small δ > 0, one can always choose j large enough that εj < δ and Δ small enough that the relative error of approximating the integral in (6.5.32) by the sum (6.5.33) and the relative error of the approximation k
s Δe
λ+ mΔ
P(T (m − 1)Δ) ≈
m=1
eλ+ u P(T u) du 0
will each not exceed δ. Together with a similar argument for the lower bound (using (6.5.36) instead of (6.5.35)), this shows that, for j = j(n) → ∞ slowly enough as n → ∞, the integral in (6.5.32) is asymptotically equivalent to nV (y) ϕ(λ+ )
s λ+ eλ+ u P(T u) du 0
=
$ nV (y) # λ+ T E e ; T < s + eλ+ s P(T s) − 1 ϕ(λ+ )
(we have used integration by parts). Thereby the asymptotic behaviour of the second term on the right-hand side of (6.5.29) is established. Next we observe that the asymptotic behaviour of the first term on the right-hand side of (6.5.29) has already been established in Corollaries 6.2.6 and 6.3.4. Taking into account the remark we made (at the beginning of the proof of the theorem) concerning the possibility of replacing Gn with Gn,j , the sum of these two asymptotics will give the desired representation (6.5.25). To prove (6.5.26), note that P(Gn,j ) = P(Gn,j ; Sn x − s) + P(Gn,j ; Sn < x − s), where
P(Gn,j ; Sn < x − s) P(Gn,j ) P min(Sk − b(k)) < −s kj
and, for a fixed j, the second factor on the right-hand side of this inequality could be made arbitrarily small by choosing a large enough s. Again turning to the remark about replacing Gn with Gn,j , we conclude that the desired result will follow immediately from (6.5.25) once we have shown that Eeλ+ T < ∞, since in this case # $ E eλ+ T ; T < s + eλ+ s P(T s) → Eeλ+ T < ∞ as
(6.5.38)
s → ∞.
To prove (6.5.38), observe that from (6.5.23) and Chebyshev’s inequality we have e−λ+ t P(Tk t) e−λ+ t ϕkτ (λ+ ) = , P(T t) 1 − e−λ+ γ k0
k0
6.5 The probability of the crossing of a remote boundary
331
and hence EeλT < ∞ for any λ ∈ [0, λ+ ). By the monotone convergence theorem, it remains to show that limλ↑λ+ EeλT < ∞. Now, the last relation follows from the factorization identity (see e.g. § 18 of [42] or § 3, Chapter 11 of [49]) EeλT = w(λ)/(1 − ϕτ (λ)),
(6.5.39)
where w(λ) is a function that is analytic in the half-plane Re λ > 0 and continuous for Re λ 0. Indeed, we see from (6.5.23) that the right-hand side of (6.5.39) is analytic for Re λ ∈ (0, λ+ ) and continuous in the closed strip Re λ ∈ [0, λ+ ]. Thus we have proved (6.5.38). The uniform smallness of the remainder terms in (6.5.25), (6.5.26) follows from the uniform smallness in the assertions of Theorems 6.2.4 and 6.3.3 and their corollaries. Theorem 6.5.4 is proved.
6.5.4 Boundaries under condition [Am ] In this case, the ‘most likely’ time of the crossing of the boundary g(·) by the random walk is in the vicinity of m. Theorem 6.5.5. Let conditions (6.5.24) and [Am ] be satisfied, n → ∞, and let θ := x/m,
y := x − θ+ m ≡ (θ − θ+ )m.
Then, for α ∈ (1, 2), uniformly in y n1/α for any fixed α < α, P(Gn ) = e−mΛ(θ)
mV (y) λ+ max{T , S(a)} Ee (1 + o(1)), ϕ(λ+ )
where T and S(a) are independent copies of the r.v.’s defined in (6.5.22) and (6.5.14) respectively. In the case α > 2,√the above representation for the probability P(Gn ) holds uniformly in y Nn n ln n for any fixed sequence Nn → ∞. Proof. Using an argument similar to (6.5.30) but transforming the distributions of the first m r.v.’s ξi only, we obtain for u < 0 and ) ( A := max(sk − g(k)) 0, sm ∈ x + du kn
the representation P(Gn ; Sm ∈ x + du) (λ ) (λ+ ) ∈ dxm , = ϕm (λ+ ) e−λ+ sm P ξ1 + ∈ dx1 , . . . , ξm A
ξm+1 ∈ dxm+1 , . . . ξn ∈ dxn
= e−mΛ(θ) e−λ+ u P(Hm ∪ Gm,n,u | Zm = y + u) P(Zm ∈ y + du), (6.5.40)
332
Random walks with exponentially decaying distributions
where the events
$$H_m := \Big\{\max_{k\le m}(Z_k - h(k)) \ge 0\Big\}, \qquad h(k) := g(k) - \theta_+ k, \quad k = 1, \ldots, m,$$
and
$$G_{m,n,u} := \Big\{\max_{k\le n-m}\big[(S_{m+k} - S_m) - (g(m+k) - x)\big] \ge -u\Big\}$$
are independent. It is not difficult to see that, for a fixed Z_m equal to y + u, y ≡ x − θ₊m ≡ g(m) − θ₊m, the occurrence of the event H_m is equivalent to the occurrence of the event
$$\Big\{\max_{k\le m}\big[(T_m - T_{m-k}) + g(m) - g(m-k) - bk\big] \ge -u\Big\}.$$
Using the argument from the proof of Theorem 6.5.4 we obtain that, for a fixed Z_m = y + u,
$$P(H_m \mid Z_m) \to P(T \ge -u) \quad\text{as } n \to \infty$$
(here we again have to consider events of the form G_{n,j} and H_{n,j} in order to approximate integrals by finite sums etc.). Thus the right-hand side of (6.5.40) takes the form
$$e^{-m\Lambda(\theta)}\, e^{-\lambda_+ u}\, \big[P(\max\{T,\, S(a)\} \ge -u) + o(1)\big]\, P(Z_m \in y + du).$$
The rest of the proof follows the argument establishing Theorem 6.5.4.

6.5.5 Boundaries under condition [A_{0,n}]
The case [A_{0,n}] adjoins, in a certain sense, both [A_0] and [A_n]. Recall that here we are considering a boundary of the form
$$g(k) = g(0) + g_+ k, \qquad k \ge 1, \qquad(6.5.41)$$
and that for ξ_j(g₊) := ξ_j − g₊ we have φ_{(g₊)}(λ₊) ≡ Ee^{λ₊ ξ₁(g₊)} = 1. As was the case for [A_0], in the case [A_{0,n}] we can establish the asymptotics of P(G_n) for a distribution class that is wider than ER. We will use the r.v.'s S_k(a) and η_a, which were introduced before Theorem 6.5.1 (see p. 323).
Theorem 6.5.6. Let F₊(t) = e^{−λ₊t}V(t), where the function V(t) satisfies condition (6.1.21) and
$$\int_0^\infty tV(t)\,dt < \infty.$$
If x → ∞, n → ∞ in such a way that
$$\frac{x}{n} + g_+ < \theta_+ - \gamma$$
for a fixed γ > 0 then, for the boundary (6.5.41), we have
$$P(G_n) \sim c_0 e^{-\lambda_+ x}, \qquad(6.5.42)$$
where
$$c_0 := \frac{1 - P(\eta_{g_+} < \infty)}{\lambda_+\, E\big[S_{\eta_{g_+}}(g_+)\, \exp\{\lambda_+ S_{\eta_{g_+}}(g_+)\};\ \eta_{g_+} < \infty\big]}. \qquad(6.5.43)$$
In the case when x/n + g₊ > θ₊ − γ, the asymptotics of P(G_n) can depend on n.
Proof. We have
$$P(G_n) \le P(S(g_+) \ge x) \le P(G_n) + P\Big(\sup_{k>n} S_k(g_+) \ge x\Big).$$
The probability in the middle part of this relation was evaluated in Theorem 11, Chapter 4 of [40] and has the following form.
Lemma 6.5.7. Under the conditions of Theorem 6.5.6, we have φ_{g₊}(λ₊) < 1 and, as x → ∞,
$$P(S(g_+) \ge x) \sim c_0 e^{-\lambda_+ x},$$
where c₀ is defined in (6.5.43).
To complete the proof of Theorem 6.5.6, it remains to demonstrate that, as x → ∞, n → ∞,
$$P\Big(\sup_{k>n} S_k(g_+) \ge x\Big) = o\big(e^{-\lambda_+ x}\big).$$
As before, let θ* = θ*(x, n) be the minimum value of the parameter θ for which a curve from the family of level lines {p_θ(·); θ > 0} will 'touch' the boundary n^{−1}g(nt) (although the level lines were introduced on p. 320 when considering a boundary of the form xf(k/n), the concept does not depend on the specific form of the boundary and retains its meaning in the general case). Since in the case under consideration λ₊ < ∞ and θ₊ < ∞ (cf. (6.1.19)) and hence Λ(θ₊) < ∞, by virtue of (6.5.4) we have t_θ > 0 for any fixed θ > 0. Therefore, there is a straight-line segment with slope coefficient g₊ at the beginning of any level line p_θ(·). As the boundary (6.5.41) itself has the constant slope coefficient g₊, and the level lines are concave, it is clear that the 'contact' of p_{θ*}(t) and n^{−1}g(nt) takes place over an entire interval at the initial part of the boundary and, in particular, for t = 0 one has
$$p_{\theta^*}(0) = \frac{g(0)}{n} = \frac{x}{n} = \frac{\Lambda(\theta^*(x, n))}{\lambda_+}. \qquad(6.5.44)$$
Further, from the property p_θ(1) = θ, the concavity of the level lines and (6.5.5), it follows owing to the assumptions of the theorem that
$$\theta^* = p_{\theta^*}(1) \le \frac{x}{n} + g_+ \le \theta_+ - \gamma.$$
Therefore, Λ(θ*)/Λ(θ₊) ≤ 1 − c₁, c₁ > 0, so that by virtue of (6.5.4) we have t_{θ*} ≤ 1 − c₂, c₂ > 0. From this it is easily seen that, for some c₃ > 0 and all k ≥ n,
$$\frac{x}{k} + g_+ \ge \theta^*(x, k) + c_3$$
and hence
$$\Lambda\Big(\frac{x}{k} + g_+\Big) \ge \Lambda(\theta^*(x, k) + c_3) > \Lambda(\theta^*(x, k)) + c_4, \qquad c_4 > 0.$$
So, owing to inequality (6.1.18) and to (6.5.44), we have
$$P\Big(\sup_{k>n} S_k(g_+) \ge x\Big) \le \sum_{k>n} P(S_k \ge x + g_+ k) \le \sum_{k>n} e^{-k\Lambda(x/k + g_+)} \le \sum_{k>n} e^{-k\Lambda(\theta^*(x,k)) - c_4 k} = e^{-\lambda_+ x} \sum_{k>n} e^{-c_4 k} = O\big(e^{-\lambda_+ x - c_4 n}\big) = o\big(e^{-\lambda_+ x}\big),$$
as required. The theorem is proved.
7 Asymptotic properties of functions of regularly varying and semiexponential distributions. Asymptotics of the distributions of stopped sums and their maxima. An alternative approach to studying the asymptotics of P(S_n ≥ x)

The present chapter continues § 1.4, where we studied the asymptotic properties of functions of subexponential distributions. More precisely, assuming that ζ is an r.v. with a subexponential distribution and that g(λ) = Ee^{iλζ} is its ch.f., we studied there the properties of the preimage A(x) corresponding to the function A(g(λ)), where A(w) is a given function of a complex variable:
$$A(x) := A([x, \infty)), \qquad \int e^{i\lambda x}\, A(dx) = A(g(\lambda)).$$
We refer to the measure A as a function of the distribution of the r.v. ζ. In this chapter, we will identify ζ with the original r.v. ξ considered in Chapters 1–6. (In Chapters 6 and 8 and in a number of other places as well, ζ is actually identified with r.v.'s other than ξ, and this is why this notation was introduced in § 1.4.) Thus, in this chapter, g(λ) = f(λ) ≡ Ee^{iλξ}, where ξ has a distribution F, F₊(t) = F([t, ∞)) = V(t); V is defined in (1.1.2) in the case when F ∈ R and in (5.1.1)–(5.1.3) (p. 233; see also (1.2.28) and (1.2.29)) for F ∈ Se. In the case when the function A(w) is analytic in the unit disk |w| < 1, including the boundary |w| = 1, and the distribution F is subexponential, the asymptotics of the preimage A(x) were studied in § 1.4. If, however, we assume that the distribution F belongs to R or Se then one can broaden the conditions on A, and the conditions we have to impose will now depend on the value of a := Eξ.

7.1 Functions of regularly varying distributions
In this section, we will assume only that
$$A(w) = \sum_{k=0}^{\infty} a_k w^k,$$
where the series Σ_{k=0}^∞ a_k is absolutely convergent, and that there exists A′(1) = Σ_{k=0}^∞ k a_k > 0. In some cases, we will also assume sufficiently fast (but not exponentially fast) decay of the sequence
$$T(n) := \sum_{k=n}^{\infty} a_k \to 0, \qquad n \to \infty,$$
where T(n) > 0 for all large enough n. Furthermore, for a non-integer t we will put T(t) := T(⌊t⌋). As in § 1.4, the main object of study in this section will be the asymptotics of the preimage A(x), which clearly admits a representation of the form
$$A(x) = \sum_{n=0}^{\infty} a_n P(S_n \ge x). \qquad(7.1.1)$$
Theorem 7.1.1. Let F ∈ R, x → ∞.
(i) If a = Eξ < 0 then, without any additional conditions,
A(x) ∼ A′(1)V(x). (7.1.2)
(ii) Let a = 0, α > 2 and Eξ² < ∞ (we will assume without loss of generality that Eξ² = 1). Then the following assertions hold true.
(ii₁) If
T(x²) = o(V(x)) (7.1.3)
then we have (7.1.2).
(ii₂) If
T ∈ R (7.1.4)
then
A(x) ∼ A′(1)V(x) + BT(x²), (7.1.5)
where
B = 2^{γ−1} π^{−1/2} Γ(γ + 1/2) (7.1.6)
and −γ < −1 is the index of the r.v.f. T.
(iii) Let a = 0, α ∈ (1, 2) and condition […] …
(iv) Let a > 0 and one of the following two conditions be satisfied:
(iv₁) T(x) = o(V(x)); (7.1.9)
(iv₂) T ∈ R. (7.1.10)
Then, in the former case, (7.1.2) holds true. In the latter case,
A(x) ∼ A′(1)V(x) + T(x/a). (7.1.11)
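The dichotomy in Theorem 7.1.1 reflects two competing contributions to the series (7.1.1). The following back-of-the-envelope split (a heuristic reading of the proof below, not an additional assertion) shows where the two terms come from:

```latex
\[
A(x)=\sum_{n\ge 0} a_n\,P(S_n\ge x)
 \;\approx\; \underbrace{\sum_{n\le n_*} a_n\, nV(x)}_{\approx\, A'(1)V(x)}
 \;+\; \underbrace{\sum_{n> n_*} a_n\cdot 1}_{\approx\, T(n_*)}.
\]
```

For moderate n the single-big-jump asymptotics P(S_n ≥ x) ≈ nV(x) apply, while beyond the critical scale n_* (n_* of order x² when a = 0, Eξ² < ∞, and of order x/a when a > 0) the event {S_n ≥ x} is no longer a large deviation and P(S_n ≥ x) ≈ 1. This is why the answer is A′(1)V(x) plus a term of order T(x²) or T(x/a), and why the T-term vanishes under (7.1.3) or (7.1.9).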
The condition F ∈ R of Theorem 7.1.1 could be broadened by replacing R by wider distribution classes, described in § 4.8.
Proof. (i) First let a = Eξ < 0. Then, uniformly in n, as x → ∞,
$$P(S_n \ge x) = P(S_n - an \ge x - an) \sim nV(x - an), \qquad(7.1.12)$$
so that for any k one has
$$\frac{P(S_k \ge x)}{V(x)} \to k \quad\text{as } x \to \infty, \qquad \frac{P(S_k \ge x)}{kV(x)} < c.$$
Therefore, by virtue of (7.1.1) and the dominated convergence theorem,
$$\frac{A(x)}{V(x)} \sim \sum_{k=0}^{\infty} k a_k = A'(1).$$
This proves (7.1.2).
(ii) Now let a = 0, Eξ² = 1. In this case, as x → ∞,
$$P(S_n \ge x) = nV(x)(1 + o(1)) \qquad(7.1.13)$$
uniformly in n ≤ n₁ = n₁(x) = c₁x²/ln x for some c₁ ∈ (0, 1/[2(α − 2)]) (see Remark 4.4.2 and Theorem 4.4.1). Represent the preimage A(x) from (7.1.1) as
$$A(x) = \sum_{n=0}^{\infty} a_n P(S_n \ge x) = \Sigma_1 + \Sigma_2 + \Sigma_3, \qquad(7.1.14)$$
where Σ₁ := Σ_{n<n₁}, …
… Let a > 0. Then, for n ≤ n(x) := xa^{−1}(1 − ε), ε > 0, we have P(S_n ≥ x) ∼ nV(x − an). Assume that condition (7.1.9) is met. As
$$\frac{V(x - an)}{V(x)} < c \ \text{ for } n < n(x), \qquad \frac{V(x - an)}{V(x)} \to 1 \ \text{ for } n = o(x),$$
we obtain, in a similar way to our previous calculations, that
$$A(x) = A'(1)V(x)(1 + o(1)) + \Sigma_2, \qquad(7.1.17)$$
where |Σ₂| ≤ |T(n(x))| ∼ c₁|T(x)| = o(V(x)). Now let T ∈ R. By the law of large numbers, S_n/n → a as n → ∞, so that P(S_n ≥ x) → 1 for n ≥ xa^{−1}(1 + ε) and hence (assuming for definiteness that T(xa^{−1}) > 0)
$$T(xa^{-1}(1 + \varepsilon))(1 + o(1)) \le \Sigma_2 \le T(xa^{-1}(1 - \varepsilon)).$$
As ε is arbitrarily small, this implies that Σ₂ ∼ T(xa^{−1}) and therefore, together with (7.1.17), proves the relation (7.1.11). The theorem is proved.
Remark 7.1.2. In the third part of the theorem, the asymptotics of A(x) remain unknown for a rather broad class of cases, for which neither (7.1.7) nor (7.1.8) holds; for example, in the case when W(t) ≫ V(t) and T(1/W(t)) is comparable with or greater than V(t). It is not difficult to determine the form of the asymptotics, but we could not give a rigorous proof thereof owing to the absence of estimates for P(S_n ≥ x) in the 'intermediate' zone n₁(x) ≤ n ≤ n₂(x), where … Here b(n) = W^{(−1)}(1/n) and F_{β,−1} is the stable law with parameters (β, −1). To obtain the asymptotics of A(x) in the case when W(t) ≫ V(t), T ∈ R, we need to split the series (7.1.1) representing A(x) into three parts, cf. (7.1.14), over the ranges n ≤ n₁(x), n₁(x) < n ≤ n₂(x) and n > n₂(x) respectively. As before, we can then derive that Σ₁ ∼ A′(1)V(x). Further, setting for brevity n₂(x) =: n₂ and using summation by parts, we obtain
$$\Sigma_3 = \sum_{n>n_2} a_n P\Big(\frac{S_n}{b(n)} \ge \frac{x}{b(n)}\Big) \sim T(n_2)\, F_{\beta,-1,+}\Big(\frac{x}{b(n_2)}\Big) + \sum_{n>n_2} T(n) f(x, n), \qquad(7.1.18)$$
where
$$f(x, n) = F_{\beta,-1,+}\Big(\frac{x}{b(n)}\Big) - F_{\beta,-1,+}\Big(\frac{x}{b(n+1)}\Big) = \int_{x/b(n+1)}^{x/b(n)} f(u)\, du,$$
f being the density of F_{β,−1}. Hence, setting T₁(t) := T(1/W(t)), we see that the second term on the right-hand side of (7.1.18) is asymptotically equivalent to
$$\int_0^{x/b(n_2)} T_1(xu^{-1}) f(u)\, du = T_1(x) \int_0^{x/b(n_2)} \frac{T_1(xu^{-1})}{T_1(x)}\, f(u)\, du \sim T_1(x) \int_0^{\infty} u^{\beta\gamma} f(u)\, du,$$
where −γ is the index of the r.v.f. T ∈ R. It remains to evaluate the 'intermediate' sum Σ₂. It is possible to obtain the desired bound Σ₂ = o(V(x) + T(1/W(x))), but this would require rather cumbersome calculations, which we will not present here. They would lead to the asymptotics
$$A(x) \sim A'(1)V(x) + BT(1/W(x)), \qquad B := \int_0^{\infty} t^{\beta\gamma} f(t)\, dt.$$
Remark 7.1.3. If we assume that P(ξ ∈ Δ[x)) ∼ ΔαV(x)/x as x → ∞, Δ[x) = [x, x + Δ), and that the integro-local theorems for P(S_n ∈ Δ[x)) hold true (see §§ 3.7, 4.7 and 9.2) then we can derive, in a way similar to our previous calculations, the asymptotics of A(Δ[x)) = A(x) − A(x + Δ) as x → ∞.
7.2 Functions of semiexponential distributions
In this section, we use the notation of § 7.1 but assume that F₊ = V ∈ Se.
Theorem 7.2.1. Let V ∈ Se, α ∈ (0, 1). Suppose that the series Σ_{n=0}^∞ n a_n = A′(1) > 0 converges.
(i) If a := Eξ < 0 then, as x → ∞,
A(x) ∼ A′(1)V(x). (7.2.1)
(ii) Let a = 0, Eξ² < ∞. If
|a_n| < exp{−n^{α/(2−α)} L₁(n)} (7.2.2)
for a suitable s.v.f. L₁ (see the proof below) then (7.2.1) holds true.
(iii) Let a > 0 and, for some ε > 0,
T(xa^{−1}(1 − ε)) = o(V(x)). (7.2.3)
Then (7.2.1) holds true.
As in Theorem 7.1.1, the condition V ∈ Se can be relaxed in the above assertion. Moreover, in parts (ii) and (iii) of the theorem one can obtain asymptotics of A(x), depending on the function T, in the case when T ∈ Se and T(t²) ≥ cV(t) for a = 0, and also when T(t/a) ≥ cV(t) for a > 0.
Proof. (i) The first assertion of the theorem is proved in exactly the same way as in Theorem 7.1.1.
(ii) To prove the second assertion, we will make use of the following results (see Theorem 5.4.1(ii) and Corollary 5.2.2):
$$P(S_n \ge x) \sim nV(x) \quad\text{for } n = o\Big(\frac{x^2}{l^2(x)}\Big) \qquad(7.2.4)$$
and
$$P(S_n \ge x) < cn \exp\Big\{-l(x)\Big(1 - \frac{bnl(x)}{x^2}\Big)(1 + o(1))\Big\} \quad\text{for } n \le \frac{c_1 x^2}{l(x)}, \qquad(7.2.5)$$
where the constants b and c₁ are known. First put n₁(x) := x²/l²(x), n₂(x) := c₁x²/l(x) > n₁(x) and split the series (7.1.1) into three parts as in (7.1.14) with
$$\Sigma_1 = \sum_{n<n_1}, \qquad \Sigma_2 = \sum_{n_1\le n\le n_2}, \qquad \Sigma_3 = \sum_{n>n_2}. \qquad(7.2.6)$$
… it follows from (7.2.10) that a bound of the form (7.2.11) will remain true for
$$l(an) + l(x')\Big(1 - \frac{2bnl(x')}{(x')^2}\Big) - l(x).$$
From this and (7.2.9) we obtain |a_n|P(S_n ≥ x) ≤ cnV(x) exp{−cl(x)^{1−α+δ}}. This clearly shows that the sub-sum over the range an ∈ [z, εx] admits the bound o(V(x)). For the sub-sum corresponding to an ∈ [εx, x(1 − ε)], we can use the representation obtained in Chapter 5 (see (5.4.44)):
$$l(an) + l(x - an) - l(x) = l(x)\,\gamma\Big(\frac{an}{x}\Big)(1 + o(1)),$$
where the function γ(v) = v^α + (1 − v)^α − 1 ≥ 0 is concave and symmetric around the point v = 1/2 and γ(v) ≥ γ(ε) > 0 for v ∈ [ε, 1 − ε], ε < 1/2. From this, we can again easily derive that the sub-sum over the range an ∈ [εx, x(1 − ε)] is o(V(x)). Therefore the same bound holds for the whole sum Σ₂. It remains to bound the third sum Σ₃ in (7.2.6). This sum is taken over the range an ≥ x(1 − ε) and so, owing to (7.2.3), is also o(V(x)). The theorem is proved.
7.3 Functions of distributions interpreted as the distributions of stopped sums. Asymptotics for the maxima of stopped sums
One could interpret the assertions of the previous sections and § 1.4 as follows. Let us depart from the representation (7.1.1). Given that
$$a_k \ge 0, \qquad A(1) = \sum_{k=0}^{\infty} a_k < \infty,$$
we can assume without loss of generality that A(1) = 1 and consider A(f(λ)) as the ch.f. of an r.v. S for which
$$Ee^{i\lambda S} = A(f(\lambda))$$
and which can be represented as
$$S = S_\tau = \xi_1 + \cdots + \xi_\tau, \qquad(7.3.1)$$
where the r.v. τ is independent of {ξ_i}, P(τ = k) = a_k. In this case, A′(1) = Eτ and our Theorem 1.4.1 implies, for instance, the following result (see also e.g. Theorem A3.20 of [113]).
Corollary 7.3.1. Let F ∈ S and E(1 + δ)^τ < ∞ for some δ > 0. Then
$$P(S_\tau \ge x) \sim E\tau\, V(x), \qquad x \to \infty. \qquad(7.3.2)$$
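Relation (7.3.2) is easy to probe numerically. The sketch below is an illustration of ours, not part of the text: the Pareto tail index, the geometric law for τ and the comparison level x are all arbitrary choices satisfying the hypotheses of Corollary 7.3.1 (F ∈ S, E(1+δ)^τ < ∞). It simulates S_τ for τ independent of centred Pareto jumps and compares the empirical tail with Eτ·V(x).

```python
import random

random.seed(1)

ALPHA = 2.0   # Pareto tail index: P(xi0 >= t) = t**(-ALPHA) for t >= 1
SHIFT = 2.0   # subtract the mean of Pareto(2), so E xi = 0
P_GEO = 0.5   # tau is geometric: P(tau = k) = (1 - P_GEO) * P_GEO**k

def draw_jump():
    # Pareto(ALPHA) on [1, inf) by inverse transform, centred to mean 0
    u = random.random()
    return u ** (-1.0 / ALPHA) - SHIFT

def draw_tau():
    # geometric number of summands; E(1+delta)**tau < inf for delta < 1
    k = 0
    while random.random() < P_GEO:
        k += 1
    return k

def tail_estimate(x, n_runs):
    hits = 0
    for _ in range(n_runs):
        s = sum(draw_jump() for _ in range(draw_tau()))
        if s >= x:
            hits += 1
    return hits / n_runs

x, n_runs = 30.0, 200_000
emp = tail_estimate(x, n_runs)
e_tau = P_GEO / (1 - P_GEO)                  # mean of the geometric law above
predicted = e_tau * (x + SHIFT) ** (-ALPHA)  # E tau * V(x), V the shifted Pareto tail
print(emp, predicted)
```

At moderate levels x the ratio of the two printed numbers is already close to 1, in line with the single-big-jump mechanism behind (7.3.2).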
One can similarly reformulate, in terms of the distribution of τ and the tails T(n) = P(τ ≥ n), the assertions of Theorems 7.1.1 and 7.2.1. In regard to the above probabilistic interpretation of Theorems 7.1.1 and 7.2.1, there arise the following two natural problems.
(1) To study the asymptotics of P(S̄_τ ≥ x), S̄_n = max_{k≤n} S_k.
(2) To establish under what conditions the relation (7.3.2) will remain valid when τ is an arbitrary stopping (Markov) time, not necessarily independent of {ξ_i} (i.e. an r.v. defined on a common probability space together with {ξ_i} and such that, for any n ≥ 0, the event {τ ≤ n} belongs to the σ-algebra σ(ξ₁, …, ξ_n) generated by ξ₁, …, ξ_n).
This and the next sections are devoted to solving the above problems. In the present section we will concentrate on analogues of Theorems 7.1.1 and 7.2.1 for S̄_τ in the case when τ is independent of {ξ_i}. Since the asymptotics of P(S_n ≥ x) and P(S̄_n ≥ x) are close to each other in many deviation zones, the assertions below do not differ much from those of Theorems 7.1.1 and 7.2.1.
Theorem 7.3.2. Assume that F ∈ R, x → ∞ and that an integer-valued r.v. τ with Eτ < ∞ is independent of {ξ_i}.
(i) If a := Eξ < 0 then
P(S̄_τ ≥ x) ∼ Eτ V(x). (7.3.3)
(ii) If a = 0, α > 2, Eξ² = 1 and at least one of the conditions (7.1.3), (7.1.4) holds true then
P(S̄_τ ≥ x) ∼ Eτ V(x) + 2BT(x²), (7.3.4)
where B is defined in (7.1.6).
(iii) Let a = 0, F satisfy condition […] …
(iv) Let a > 0 and one of the conditions (7.1.9) and (7.1.10) be met. Then, in the former case, (7.3.3) holds true. In the latter case,
P(S̄_τ ≥ x) ∼ Eτ V(x) + T(x/a). (7.3.5)
As was the case with Theorem 7.1.1, the condition F ∈ R could be broadened by replacing R by the wider distribution classes described in § 4.8.
Proof of Theorem 7.3.2. We can essentially repeat the argument used to prove Theorem 7.1.1. Instead of (7.1.1), we need to start with the relation
$$P(\overline{S}_\tau \ge x) = \sum_{n=0}^{\infty} a_n P(\overline{S}_n \ge x), \qquad(7.3.6)$$
where for the probabilities P(S̄_n ≥ x) we have, as a rule, the same bounds as for P(S_n ≥ x) (see Chapters 3 and 4). More precisely, the following hold.
(i) The proof of the first assertion repeats verbatim the argument from § 7.1 for the case a < 0.
(ii) The proof of the second assertion differs from that for Theorem 7.1.1 in two respects. These refer to the evaluation of analogues of the sums Σ₂ and Σ₃. In the sum Σ₃ = Σ_{n≥n₂(x)} a_n P(S̄_n ≥ x) one should take the summation limit n₂(x) := x²/N(x), where N(x) → ∞ slowly enough that, uniformly in n ≥ n₂(x),
$$P(\overline{S}_n \ge x) \sim 2\Big[1 - \Phi\Big(\frac{x}{\sqrt{n}}\Big)\Big].$$
The subsequent evaluation of the sum Σ₃ differs from the calculations in § 7.1 (see (7.1.16) etc.) only by the presence of the factor 2, and therefore we will eventually obtain Σ₃ ∼ 2BT(x²). To bound Σ₂ = Σ_{n=n₁(x)}^{n₂(x)} a_n P(S̄_n ≥ x), with the same value of the summation limit n₁(x) = c₁x²/ln x, we use inequalities from Corollary 4.1.4. This yields Σ₂ = o(V(x)) + o(T(x²)) and completes the proof of (7.3.4).
(iii), (iv) There are no substantial changes in the proofs of the third and fourth parts of the theorem compared with those in § 7.1, since the same bounds are valid for P(S_n ≥ x) and P(S̄_n ≥ x) in the respective calculations, and S̄_n/n → a as n → ∞. The theorem is proved.
Theorem 7.3.3. Assume that F ∈ Se, α ∈ (0, 1) and that an integer-valued r.v. τ with Eτ < ∞ is independent of {ξ_i}.
(i) If a := Eξ < 0 then
P(S̄_τ ≥ x) ∼ Eτ V(x), x → ∞. (7.3.7)
(ii) Let a = 0, Eξ² < ∞. If
|a_n| < exp{−n^{α/(2−α)} L₁(n)}
for a suitable s.v.f. L₁ then (7.3.7) holds true.
(iii) Let a > 0 and, for some ε > 0, T(xa^{−1}(1 − ε)) = o(V(x)). Then (7.3.7) holds true.
Proof. The proof of Theorem 7.3.3 does not differ from that of Theorem 7.2.1, because, under the conditions of the theorem, we have the same estimates for P(S_n ≥ x) and P(S̄_n ≥ x).
The remarks that we made following Theorems 7.1.1 and 7.2.1 remain valid for Theorems 7.3.2 and 7.3.3 (in Remark 7.1.2, instead of F_{β,−1} we now need the distribution of the maximum of a stable process {ζ(t); t ∈ [0, 1]} with stationary independent increments, for which ζ(1) ⊂= F_{β,−1}).
7.4 Sums stopped at an arbitrary Markov time
In this section we will concentrate on the second class of problems mentioned in § 7.3. They deal with the conditions under which the asymptotic laws established in §§ 7.1–7.3 remain valid in the case when τ is an arbitrary stopping time. It turns out that in this case the asymptotics of P(S̄_τ ≥ x) are, in a certain sense, more accessible than those of P(S_τ ≥ x).
7.4.1 Asymptotics of P(S̄_τ ≥ x)
We introduce the class S* of distributions F with a finite mean a = Eξ that have the property
$$\int_0^t F_+(u)F_+(t - u)\, du \sim 2E\xi^+ \cdot F_+(t) \quad\text{as } t \to \infty, \qquad(7.4.1)$$
where ξ⁺ = max{0, ξ}. It is not difficult to verify that R and Se are subclasses of S*. It is also known that if F ∈ S* then F ∈ S, and a distribution with the tail F^I(t) := ∫_t^∞ F₊(u) du will also belong to S (see [166]). The following rather general assertion was obtained in [126] (this paper also contains a comprehensive bibliography on related results).
Theorem 7.4.1. Let F ∈ S*, a < 0 and τ be an arbitrary stopping time. Then
$$P(\overline{S}_\tau \ge x) \sim E\tau\, F_+(x) \quad\text{as } x \to \infty. \qquad(7.4.2)$$
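A quick way to read (7.4.2) — a heuristic of ours, not the proof given in [126] — is through the one-big-jump principle combined with Wald's identity: with negative drift, the maximum exceeds a high level x essentially only when one of the jumps made before τ is itself larger than x,

```latex
\[
P\Big(\max_{k\le\tau} S_k \ge x\Big)
\;\approx\; P\Big(\bigcup_{k\le\tau}\{\xi_k\ge x\}\Big)
\;\approx\; E\sum_{k=1}^{\tau}\mathbf{1}\{\xi_k\ge x\}
\;=\; E\tau\cdot F_+(x),
\]
```

the last equality being Wald's identity, which applies to an arbitrary stopping time because the event {τ ≥ k} is determined by ξ₁, …, ξ_{k−1} and is therefore independent of ξ_k.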
For the case τ = min{k : S_k < 0} this assertion was proved in [8]. Similar results for the asymptotics of the trajectory maxima over cycles of an ergodic Harris Markov chain were obtained in [56]. Now let a be arbitrary.
Corollary 7.4.2. Let F ∈ S* and let there exist a function h(x) such that, as x → ∞, one has h(x) → ∞, h(x) = o(x) and
$$\frac{F_+(x - h(x))}{F_+(x)} \to 1, \qquad P(\tau > h(x)) = o(F_+(x)). \qquad(7.4.3)$$
Then (7.4.2) holds true.
The first relation in (7.4.3) means that F₊(t) is an h-l.c. function (see p. 18). The second relation clearly implies that Eτ < ∞.
Proof. Choose a number b > a = Eξ and introduce i.i.d. r.v.'s ξ_i(b) =_d ξ − b, so that Eξ_i(b) = a − b < 0. Then, on the one hand, for S_n(b) := Σ_{i=1}^n ξ_i(b), S̄_n(b) := max_{k≤n} S_k(b), we will have
$$P(\overline{S}_\tau \ge x) \le P(\overline{S}_\tau(b) + b\tau \ge x) \le P(b\tau > bh(x)) + P(\overline{S}_\tau(b) \ge x - bh(x))$$
$$= o(F_+(x)) + E\tau\, F_+(x - bh(x) + b)(1 + o(1)) \sim E\tau\, F_+(x),$$
owing to Theorem 7.4.1. On the other hand, clearly ξ > ξ(b), S̄_τ ≥ S̄_τ(b) and therefore
$$P(\overline{S}_\tau \ge x) \ge P(\overline{S}_\tau(b) \ge x) \sim E\tau\, F_+(x + b) \sim E\tau\, F_+(x),$$
since F₊ is l.c. The corollary is proved.
Corollary 7.4.3. (i) Let F ∈ R and, in the case a ≥ 0, in addition let
P(τ > x) = o(F₊(x)). (7.4.4)
Then (7.4.2) holds true.
(ii) Let F ∈ Se and, in the case a ≥ 0, in addition let
P(τ > z(x)) = o(e^{−l(x)}), (7.4.5)
where z(x) = x/(αl(x)). Then (7.4.2) holds true.
Theorems 7.1.1 and 7.2.1 show that conditions (7.4.4), (7.4.5) are essential for (7.4.2) to hold.
Proof. (i) If (7.4.4) holds then clearly there exists a function h(x) = o(x) such that P(τ > h(x)) = o(F₊(x)). This means that conditions (7.4.3) of Corollary 7.4.2 will be met.
(ii) If F ∈ Se and (7.4.5) holds true then, similarly, there exists a function h(x) = o(z(x)) such that P(τ > h(x)) = o(F₊(x)), and hence (7.4.3) is true. The corollary is proved.
7.4.2 On the asymptotics of P(S_τ ≥ x)
Now we turn to the asymptotic behaviour of S_τ. Unfortunately, under the conditions of Theorem 7.4.1, it is impossible to obtain a result of the form
$$P(S_\tau \ge x) \sim E\tau\, F_+(x) \quad\text{as } x \to \infty. \qquad(7.4.6)$$
This is demonstrated by the following simple example. Let a = Eξ < 0 and τ = η₋ := min{k ≥ 1 : S_k ≤ 0}. Then, clearly, S_τ ≤ 0 and P(S_τ ≥ x) = 0 for all x > 0, so that (7.4.6) cannot hold under any conditions. Thus, for (7.4.6) to hold true, we must narrow down the class of stopping times. The following assertion was obtained in [136] (on p. 43 of this paper the authors state that it could easily be extended to the case F ∈ R).
Theorem 7.4.4. Let F₊(x) ∼ x^{−α} and F₋(x) = O(x^{−β}) as x → ∞, where α, β > 0. Further, let τ be an arbitrary stopping time such that
$$P(\tau > n) = o(n^{-r}) \quad\text{as } n \to \infty, \qquad(7.4.7)$$
where r > max{1, α/β} and
$$r \ge \begin{cases} \alpha/2 & \text{if } E\xi = 0, \\ \alpha & \text{if } E\xi \ne 0. \end{cases}$$
Then P(S_τ ≥ x) ∼ Eτ x^{−α} as x → ∞.
Bounds for P(S_τ ≥ x) obtained under very broad conditions can be found in [76, 46]. In connection with the above example for τ = η₋, observe that condition (7.4.7) is not satisfied in that situation (indeed, by virtue of Theorem 8.2.3 we have P(η₋ > n) > cn^{−α}), so that condition (7.4.7) is essential for the assertion of Theorem 7.4.4: it restricts the class of stopping times under consideration. Another restriction of this class was considered in § 7.1; there we assumed that τ was independent of {ξ_i}. Now we will consider another restriction.
Suppose we are given two sequences (boundaries), {g₊(k)} and {g₋(k)}, such that g₊(k) > −g₋(k) for k ≥ 1. We will call τ a boundary stopping time if
$$\tau := \inf\{k \ge 1 : S_k \ge g_+(k) \ \text{or}\ S_k \le -g_-(k)\}. \qquad(7.4.8)$$
Observe that the differences g±(k) ∓ ak, a = Eξ, cannot grow too fast as k → ∞, as then τ might be an improper r.v., owing to the law of the iterated logarithm. For example, in the case Eξ = 0, Eξ² < ∞ it is natural to consider only functions satisfying g±(k) < c√(k ln k).
The values of g₋(k) in the definition (7.4.8) may all be infinite. In this case, we will have a boundary stopping time corresponding to a single boundary g(k) = g₊(k). In accordance with the above observation, we will assume in this case that
$$g(k) \le c + k, \qquad k \ge 1. \qquad(7.4.9)$$
Note that if g(k) is non-decreasing then, clearly, S_τ ≡ S̄_τ and so all the assertions of the previous subsection become applicable.
First we consider the case Eξ = 0.
Theorem 7.4.5. Let Eξ = 0, the distribution of ξ satisfy condition […] …
Proof. … Fix ε > 0 and introduce the events
$$A_k := \{\xi_k \ge (1 + \varepsilon)x\}, \qquad k = 1, 2, \ldots$$
One can easily see that, for the indicator function of the event in question, we have I(S_τ ≥ x) ≥ I₁ − I₂, where
$$I_1 := \sum_{k=1}^{\min\{\tau, N\}} I(A_k;\ S_\tau \ge x), \qquad I_2 := \sum_{\nu=1}^{\min\{\tau, N\}} (\nu - 1)\, I(\nu \text{ events } A_k \text{ occurred prior to the time } \min\{\tau, N\}).$$
Here
$$EI_1 = E\sum_{k=1}^{N} I(A_k;\ S_\tau \ge x,\ \tau \ge k), \qquad EI_2 \le \tfrac{1}{2} N^2 V^2(x)(1 + o(1)).$$
As τ is a boundary stopping time and g(k) < c + k, N = o(x), we see that, for k ≤ N,
$$A_k \cap \{S_{k-1} > -\varepsilon x/2,\ \tau \ge k\} \subset A_k \cap \{S_\tau \ge x,\ \tau \ge k\}.$$
Therefore
$$EI_1 \ge \sum_{k=1}^{N} P\Big(S_{k-1} \ge -\frac{\varepsilon x}{2},\ \tau \ge k\Big) P(A_k) = F_+((1 + \varepsilon)x) \sum_{k=1}^{N} \Big[P(\tau \ge k) - P\Big(\tau \ge k,\ S_{k-1} < -\frac{\varepsilon x}{2}\Big)\Big]$$
$$\ge F_+((1 + \varepsilon)x)\Big[E\tau(1 + o(1)) - \sum_{k=1}^{N} P\Big(S_{k-1} < -\frac{\varepsilon x}{2}\Big)\Big] \ge F_+((1 + \varepsilon)x)\Big[E\tau(1 + o(1)) - (1 + o(1))\sum_{k=1}^{N} (k - 1)W\Big(\frac{\varepsilon x}{2}\Big)\Big]$$
$$= F_+((1 + \varepsilon)x)\big[E\tau + o(1) + O(N^2 W(x))\big] = E\tau\, F_+((1 + \varepsilon)x) + o(F_+(x)).$$
Since ε is arbitrary, this implies (7.4.10). The theorem is proved.
Transition to the case a ≠ 0 under the conditions of Theorem 7.4.5 can be done in a way similar to that used in Corollaries 7.4.2 and 7.4.3.
Now we return to the general case (7.4.8) where τ is a stopping time specified by two boundaries. Here the sign of Eξ does not play any important role, whereas conditions on the distribution F can be relaxed. Set
$$\theta_\pm(x) := \sup\{k : g_\pm(k) \le x\},$$
where we let θ±(x) = ∞ if sup_k g±(k) ≤ x (the θ±(x) are generalized inverse functions for g±(k)). It is evident that θ±(x) → ∞ as x → ∞; moreover, θ±(x) may be equal to ∞ for finite values of x.
Lemma 7.4.6. Assume that Eτ < ∞ and that h(x) is an arbitrary function with the properties
h(x) < x/2, h(x) → ∞ as x → ∞.
(i) The following lower bound holds true:
P(S_τ ≥ x) ≥ F₊(x + h(x))(Eτ − δ(x)),
(7.4.12)
where δ(x) → 0 as x → ∞. If
g±(k) ≤ g < ∞ (7.4.13)
then, for x > g,
P(S_τ ≥ x) ≥ F₊(x + g) Eτ. (7.4.14)
(ii) The following upper bound holds true:
$$P(S_\tau \ge x) \le F_+(x - h(x))E\tau + F_+(x/2)\delta(x) + \sum_{k\ge\theta_+(x/2)} P(\tau > k). \qquad(7.4.15)$$
If g₊(k) ≤ g₊ < ∞ then, for x ≥ g₊,
$$P(S_\tau \ge x) \le F_+(x - g_+)\, E\tau. \qquad(7.4.16)$$
Proof. (i) Let
$$\theta(x) := \min\{\theta_-(h(x)),\ \theta_+(x)\}, \qquad C_k := \{S_j \in (-g_-(j),\ g_+(j)),\ j \le k\} \equiv \{\tau > k\}.$$
It is clear that θ(x) → ∞ as x → ∞. We have
$$P(S_\tau \ge x) = \sum_{k=0}^{\infty} P(\tau = k + 1,\ S_{k+1} \ge x) \ge \sum_{k=0}^{\theta(x)-1} P(C_k;\ S_k + \xi_{k+1} \ge x), \qquad(7.4.17)$$
where the inequality holds owing to the fact that g₊(k + 1) ≤ x for k < θ(x). Since for such k one has g₋(k) ≤ h(x), we obtain
$$P(S_k + \xi_{k+1} \ge x \mid C_k) \ge F_+(x + h(x)), \qquad(7.4.18)$$
so that
$$P(S_\tau \ge x) \ge F_+(x + h(x)) \sum_{k<\theta(x)} P(\tau > k),$$
and since Σ_{k<θ(x)} P(τ > k) → Eτ as x → ∞, this gives (7.4.12). If (7.4.13) holds then, for x > g, similarly
$$P(S_\tau \ge x) \ge F_+(x + g) \sum_{k=0}^{\infty} P(\tau > k) = E\tau\, F_+(x + g).$$
(ii) Using the equality in (7.4.17) we can split the sum on its right-hand side into three parts as follows:
$$P(S_\tau \ge x) = \sum_{k<\theta_+(h(x))} + \sum_{\theta_+(h(x))\le k<\theta_+(x/2)} + \sum_{k\ge\theta_+(x/2)} P(\tau = k + 1,\ S_{k+1} \ge x).$$
Since θ₊(h(x)) → ∞ as x → ∞ and Eτ < ∞ we have
$$\delta(x) := \sum_{k\ge\theta_+(h(x))} P(\tau > k) \to 0$$
as x → ∞. Hence
$$P(S_\tau \ge x) \le F_+(x - h(x))E\tau + F_+(x/2)\delta(x) + \sum_{k\ge\theta_+(x/2)} P(\tau > k).$$
This proves (7.4.15). If g₊(k) ≤ g₊ then the left-hand side in (7.4.18) does not exceed F₊(x − g₊) for x ≥ g₊, and so (7.4.17) implies (7.4.16). The lemma is proved.
Theorem 7.4.7. Let τ be a boundary stopping time (7.4.8) with Eτ < ∞ and let F₊ be an l.c. function.
(i) If (7.4.13) is true then we have (7.4.6).
(ii) Assume that F₊ has the property F₊(x/2) < cF₊(x) and, moreover,
$$\sum_{k\ge\theta_+(x/2)} P(\tau > k) = o(F_+(x)) \quad\text{as } x \to \infty. \qquad(7.4.19)$$
Then (7.4.6) holds true.
It is evident that functions F₊ ∈ R satisfy the conditions of the theorem.
Proof. The first assertion of the theorem is obvious from (7.4.14) and (7.4.16). To obtain the second assertion, observe that for an l.c. F₊(x) there always exists a function h(x) tending to infinity slowly enough that F₊(x − h(x)) ∼ F₊(x) as x → ∞. Therefore it remains for us to make use of (7.4.12) and (7.4.15). The theorem is proved.
The most difficult condition to verify in Theorem 7.4.7 is (7.4.19). It can be simplified for special boundaries g₊(k). For example, suppose that g₊(k) = k^γ, γ ∈ (0, 1). Then θ₊(x) ∼ x^{1/γ} and in the case F₊(x) = x^{−α}L(x), L being an s.v.f., condition (7.4.19) will be satisfied provided that
$$P(\tau > k) < k^{\theta}, \qquad \theta < -\alpha\gamma - 1.$$
Indeed, in this case the sum on the left-hand side in (7.4.19) does not exceed cx^{(θ+1)/γ}, where (θ + 1)/γ < −α.
If Eξ² < ∞ and g₊(k) + g₋(k) ≤ cn^γ for γ < 1/2 and all k ≤ n then it is not difficult to obtain the bound
$$P(\tau > n) \le (1 - p)^{n^{1-2\gamma}},$$
where p > 0 is the minimum probability that the random walk will reach a boundary during the time interval [jn^{2γ}, (j + 1)n^{2γ}] having started from inside the strip.
Remark 7.4.8. Recall that the l.c.-property used in Theorem 7.4.7, generally speaking, does not imply that F is subexponential (see Theorem 1.2.8). Thus Theorem 7.4.7 provides us with an example of a situation where the relation P(S_n ≥ x) ∼ nF₊(x) (as x → ∞) may not hold for each fixed n, but there exists a stopping time τ such that (7.4.6) holds true.

7.5 An alternative approach to studying the asymptotics of P(S_n ≥ x) for sub- and semiexponential distributions of the summands
As we stated in § 1.4, there is a class of large deviation problems for random walks that are analyzed more naturally using not the techniques developed in the main part of this monograph for regularly varying and semiexponential distributions but, rather, asymptotic analysis within the wider class of subexponential distributions. For example, one such problem is that on the asymptotics of P(S ≥ x), S = sup_{k≥0} S_k, in the case Eξ < 0 (see also problems from Chapter 8). For such problems, analytic approaches different from those used in Chapters 2–5 and in subsequent chapters prove to be the most effective. These approaches are based on factorization identities and employ the asymptotic analysis of subexponential distributions presented in §§ 1.2–1.4 and also in the present chapter. For upper-power distributions the asymptotics of P(S ≥ x) were obtained in that way in [39, 42]. For the class of subexponential distributions, these results were established, using the same approaches, in [275]. The present section is devoted to presenting the above-mentioned approaches. It lies somewhat outside the mainstream exposition of Chapters 2–5.

7.5.1 An integral theorem on the first-order asymptotics of P(S ≥ x)
As before, let the ξ_i be independent and identically distributed, Eξ < 0, and
$$S = \sup_{k\ge 0} S_k, \qquad S_k = \sum_{i=1}^{k} \xi_i, \qquad F_+(x) = P(\xi \ge x).$$
Set
$$F_+^I(t) := \int_t^{\infty} F_+(u)\, du.$$
A main aim of this subsection is to prove the following assertion (see also [275]).
Theorem 7.5.1. If the function F₊^I(t) is subexponential (F₊^I ∈ S) then
$$P(S \ge x) \sim -\frac{1}{E\xi}\, F_+^I(x) \quad\text{as } x \to \infty. \qquad(7.5.1)$$
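The asymptotics (7.5.1) can be probed by direct simulation. The sketch below is our illustration, not part of the text: the Pareto-type jump distribution, the truncation level for the walk and the comparison level x are arbitrary choices. It estimates P(S ≥ x) for a negative-drift walk and compares it with −F₊^I(x)/Eξ.

```python
import random

random.seed(7)

ALPHA = 3.0   # jump tail: P(xi0 >= t) = t**(-ALPHA) for t >= 1 (Pareto)
SHIFT = 3.0   # xi = xi0 - SHIFT, so E xi = ALPHA/(ALPHA-1) - SHIFT = -1.5 < 0

def sup_of_walk():
    # run the walk until it is far below 0; with drift -1.5 the supremum
    # has then been attained with overwhelming probability
    s, m = 0.0, 0.0
    while s > -60.0:
        s += random.random() ** (-1.0 / ALPHA) - SHIFT
        if s > m:
            m = s
    return m

x, n_runs = 10.0, 40_000
emp = sum(sup_of_walk() >= x for _ in range(n_runs)) / n_runs
# F_+(u) = (u + SHIFT)**(-ALPHA), so F_+^I(x) = (x + SHIFT)**(1-ALPHA)/(ALPHA-1)
mean_xi = ALPHA / (ALPHA - 1.0) - SHIFT
predicted = -(1.0 / mean_xi) * (x + SHIFT) ** (1.0 - ALPHA) / (ALPHA - 1.0)
print(emp, predicted)
```

At finite levels x the empirical tail typically sits somewhat above the first-order asymptote (the convergence in (7.5.1) is slow for subexponential tails), but the two printed numbers are already of the same order.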
Theorem 7.5.1 undoubtedly differs from the theorems in Chapters 2–5 on the distribution of S in the cases of regularly varying and semiexponential distributions F, in that it is more general and complete. Finding the asymptotics of P(S̄_n ≥ x) (or P(S̄_n(a) ≥ x) if Eξ = 0, a > 0) for subexponential distributions in the case of a finite growing n, however, requires much more effort and additional conditions (see e.g. [178]). If we also take into account that one needs to refine the asymptotics and find distributions of other functionals of the random walk {S_k} then the separate analysis of regularly varying and semiexponential distributions becomes justified. Note also that the condition F₊^I ∈ S is not only sufficient but also necessary for the asymptotics (7.5.1); see [177].
First we will state a factorization identity to be used in what follows. Let
$$\eta_+ := \inf\{k \ge 1 : S_k > 0\}, \qquad \eta_- := \inf\{k \ge 1 : S_k \le 0\}. \qquad(7.5.2)$$
On the events {η± < ∞} we can define the r.v.'s χ± := S_{η±}, which are respectively the first positive and the first non-positive sums. It is clear that in the case Eξ < 0 we have
$$P(\eta_- < \infty) = 1, \qquad P(\eta_+ < \infty) = P(S > 0) =: p < 1.$$
Further, set f(λ) := Ee^{iλξ} and introduce an r.v. χ with distribution U, given by the following relation:
$$P(\chi < t) := P(\chi_+ < t \mid \eta_+ < \infty) \equiv \frac{1}{p}\, P(\chi_+ < t,\ \eta_+ < \infty), \qquad t > 0.$$
Lemma 7.5.2. If Eξ < 0 then, for Im λ = 0,
$$1 - f(\lambda) = \big(1 - p\, Ee^{i\lambda\chi}\big)\big(1 - Ee^{i\lambda\chi_-}\big), \qquad(7.5.3)$$
$$Ee^{i\lambda S} = \frac{1 - p}{1 - p\, Ee^{i\lambda\chi}}. \qquad(7.5.4)$$
Proofs of these assertions can be found, for instance, in [42, 49, 122]. The latter identity can easily be derived directly from the representation S = Σ_{i=1}^ν χ_i, where the r.v. ν does not depend on {χ_i} (the χ_i =_d χ are independent) and is the number of 'upper ladder points' in the sequence {S_k}, P(ν = k) = (1 − p)p^k, k = 0, 1, 2, …
Asymptotic properties of functions of distributions
Proof of Theorem 7.5.1. We will split the proof into two steps. First we will find the asymptotics of the tail U (x) := P(χ x). Lemma 7.5.3. If Eξ < 0 and F+ (x) = o F+I (x)
(7.5.5)
as x → ∞ then U (x) ∼
F+I (x) , a− p
(7.5.6)
where a− := −Eχ− < ∞. Proof. The proof of the lemma follows a scheme used in [39, 42]. Let
0 for t 0, δ(t) := F0 (t) := δ(t) − P(ξ < t), 1 for t > 0, so that F0 (t) → 0 as t → ±∞. Then 1−f(λ) and 1/(1 − Eeiλχ− ) can be written, respectively, as 1 − f(λ) =
iλx
e
1 =− 1 − Eeiλχ−
dF0 (x),
0 eiλx dH(−x),
−∞
where H(x) is the renewal function for the r.v. −χ− 0. Differentiating the identity (7.5.3) at λ = 0 we obtain Eξ = (1 − p)Eχ− , so that a− := −Eχ− = −
Eξ >0 1−p
(7.5.7)
is finite and therefore H(x) ∼ x/a− as x → ∞. Hence the identity (7.5.3) can be rewritten as 0 −
eiλx dF0 (x)
eiλx dH(−x) = 1 − p Eeiλχ , −∞
which implies that, for x > 0, ∞ pU (x) =
H(t − x) F(dt).
(7.5.8)
x
The renewal function H(t) has the following property (see e.g. § 1, Chapter 9 of [49]): for any ε > 0 there exists an N < ∞ such that t t+N . H(t) < a− a− − ε
(7.5.9)
7.5 An alternative approach to studying P(S̄_n ≥ x)
Therefore

    (1/a_−) ∫_x^∞ (t − x) F(dt) ≤ p U(x) < (1/(a_− − ε)) ∫_x^∞ (t − x + N) F(dt),

where

    ∫_x^∞ (t − x) F(dt) = −∫_x^∞ (t − x) dF_+(t) = ∫_x^∞ F_+(t) dt ≡ F_+^I(x).

From this we find that

    1/a_− ≤ p U(x)/F_+^I(x) ≤ (1/(a_− − ε)) (1 + N F_+(x)/F_+^I(x)).

Since the right-hand side converges to 1/(a_− − ε) as x → ∞, and ε > 0 is arbitrary, the lemma is proved.

Now we can proceed to the second stage of the proof of the theorem. Let F_+^I ∈ S. Then, according to Theorem 1.2.4(v) and Theorem 1.2.8, we have F_+(x) = o(F_+^I(x)), and so (7.5.5) holds. Further, it follows from (7.5.4) that

    Ee^{iλS̄} = A(g(λ)),    (7.5.10)

where g(λ) := Ee^{iλχ} and A(z) = (1 − p)/(1 − pz). The function A(z) is analytic in the disk |z| < 1/p, and so we can make use of Theorem 1.4.1, according to which

    P(S̄ ≥ x) ∼ A′(1) P(χ ≥ x) = (p/(1 − p)) U(x) ∼ F_+^I(x)/(a_−(1 − p))
due to (7.5.6). It remains to use identity (7.5.7). The theorem is proved.

Corollary 7.5.4. If the tail F_+(x) = P(ξ ≥ x) is regularly varying or semiexponential then the conditions of Theorem 7.5.1 are satisfied and therefore (7.5.1) holds true.

Proof. The assertion is nearly obvious: if the function F_+ is regularly varying then F_+^I is also regularly varying, by virtue of Theorem 1.1.4(iv), and hence is subexponential. The same claim holds for semiexponential tails F_+ (see § 1.3).

The fact that

    U(x) ∼ (1/(a_− p)) ∫_x^∞ F_+(u) du,

and hence that, for an l.c. F_+(u), we have

    U(Δ[x)) = U(x) − U(x + Δ) ∼ (Δ/(a_− p)) F_+(x),    (7.5.11)

suggests that the function U(x) will, under broad conditions, be locally subexponential (see Definition 1.3.3, p. 47). This means that one could sharpen the 'integral' Theorem 7.5.1 and obtain, along with the latter, an 'integro-local' theorem on the asymptotics of P(S̄ ∈ Δ[x)) as well.

7.5.2 An integro-local theorem on the asymptotics of P(S̄ ≥ x)

The objective of this subsection is to give a proof of the following refinement of Theorem 7.5.1.

Theorem 7.5.5. Let the distribution of ξ be non-lattice, Eξ < 0, let F_+ be an l.c. function and F_+^I ∈ S. Further, assume that, as x → ∞,

    F_+^I(x) ∼ F_+(x)/v(x),    (7.5.12)

where v(x) → 0 is an upper-power function (see Definition 1.2.20, p. 28). Then, for any fixed Δ > 0,

    P(S̄ ∈ Δ[x)) ∼ (Δ/|Eξ|) F_+(x).    (7.5.13)

If the distribution of ξ is arithmetic then, under the same conditions, for integer-valued x → ∞,

    P(S̄ = x) ∼ F_+(x)/|Eξ|.    (7.5.14)

Corollary 7.5.6. Let Eξ < 0. If F ∈ R, or F ∈ Se with α ∈ (0, 1), then (7.5.13) ((7.5.14) in the arithmetic case) holds true.

This result could be strengthened by replacing the conditions F_+ ∈ R and F_+ ∈ Se by the weaker conditions of Theorems 1.2.25, 1.2.31 and 1.2.33. In the case when Eξ² < ∞, Corollary 7.5.6 also follows from Theorem 7.5.8 below.

An assertion close to the claims of Theorem 7.5.5 and Corollary 7.5.6 was obtained in [26, 11]. In [11], to ensure the validity of the relations (7.5.13), (7.5.14), the authors used the condition that F belongs to the distribution class S*, which is characterized by the relation (7.4.1).

Proof of Corollary 7.5.6. If F ∈ R then the assertion of the corollary is almost obvious. In this case F_+(t) = V(t) := t^{−α} L(t) is an l.c. function and, by Theorem 1.1.4(iv),
    F_+^I(x) = ∫_x^∞ u^{−α} L(u) du ∼ x^{−α+1} L(x)/(α − 1) = F_+(x)/v(x),

where v(x) = (α − 1)/x. The conditions of Theorem 7.5.5 are satisfied.

Now let F ∈ Se. Then F_+(t) = e^{−l(t)}, where

    l(x + u) − l(x) ∼ α u l(x)/x    as x → ∞, u = o(x), u l(x)/x → ∞.    (7.5.15)

Set v(x) := α l(x)/x and then choose N = N(x) such that N = o(x) and N v(x) → ∞. Then, by virtue of (7.5.15),

    ∫_x^{x+N} e^{−l(t)} dt = e^{−l(x)} ∫_0^N e^{−(l(x+u)−l(x))} du
        = e^{−l(x)} ∫_0^N e^{−u v(x)(1+o(1))} du
        = (e^{−l(x)}/v(x)) ∫_0^{N v(x)} e^{−y(1+o(1))} dy ∼ F_+(x)/v(x).

Repeating the same argument for

    ∫_{x+N}^{x+N+N(x+N)} e^{−l(t)} dt

and so on, we obtain

    F_+^I(x) ∼ F_+(x)/v(x).

This proves (7.5.12) and also that F_+^I ∈ Se, and therefore that F_+^I ∈ S. The conditions of Theorem 7.5.5 are again satisfied. The corollary is proved.

Proof of Theorem 7.5.5. The scheme of this proof is roughly the same as in [42, 275]. It is based on two elements: the well-known facts about factorization identities, and Theorems 1.3.6 and 1.4.2 (1.4.3 in the discrete case). The first element can be presented in the form of the following assertion, most of whose parts are well known. To formulate it, we will need the notation introduced after the statement of Theorem 7.5.1 (see p. 355) and also some factorization identities.

Lemma 7.5.7. Let F_+ be an l.c. function, Eξ < 0 and let x → ∞. Then the following assertions hold true.

(i) Along with (7.5.3), (7.5.4), we have

    U(x) ∼ F_+^I(x)/(a_− p),    a_− = −Eχ_−,    (7.5.16)
    U(Δ[x)) ∼ Δ F_+(x)/(a_− p)    (7.5.17)
for any fixed Δ > 0. In the arithmetic case, the quantity Δ in (7.5.17) has to be integer-valued.

(ii) If Eξ² < ∞ then, for non-lattice ξ's,

    U(x) = F_+^I(x)/(a_− p) + (b/p) F_+(x) + o(F_+(x)),    (7.5.18)

where b := a^{(2)}/(2a_−²) and a^{(2)} := Eχ_−² < ∞. In the arithmetic case (7.5.18) remains true, but with a somewhat different coefficient of F_+(x) (a^{(2)} is replaced by a^{(2)} + a_−).

Proof. (i) Since, in the case of an l.c. F_+, the relation (7.5.5) follows from Theorem 1.2.4(v), the assertion (7.5.16) was obtained in Lemma 7.5.3. Now we prove (7.5.17) (formally, (7.5.16) does not imply (7.5.17)). From the representation (7.5.8) and the local renewal theorem (see e.g. (10) in § 4, Chapter 9 of [49]), we obtain that for an l.c. F_+ one has

    p U(Δ[x)) = ∫_x^{x+Δ} H(t − x) F(dt) + ∫_{x+Δ}^∞ [H(t − x) − H(t − x − Δ)] F(dt)
        = o(F_+(x)) + (Δ/a_−) F_+(x + Δ)(1 + o(1))
        = (Δ/a_−) F_+(x) + o(F_+(x)).

This proves (7.5.17).

(ii) If Eξ² < ∞ then, differentiating the identity (7.5.3) twice at λ = 0, we obtain Eχ_−² < ∞. Therefore

    H(t) = t/a_− + b + ε(t),

where ε(t) → 0 as t → ∞ (see e.g. Appendix 1 of [42] or [120, 49]). Hence, owing to (7.5.8),

    U(x) = (1/p) ∫_x^∞ H(t − x) F(dt) = F_+^I(x)/(a_− p) + b F_+(x)/p + ε₁(x),

where

    ε₁(x) := (1/p) ∫_0^∞ ε(v) F(x + dv) = o(F_+(x))

when F_+ is an l.c. function. The lemma is proved.

Now we are in a position to prove the theorem. We will restrict ourselves to the non-lattice case and will make use of Theorem 1.3.6, where we take G to be the distribution U. That conditions (1.3.20) are satisfied follows from Lemma 7.5.7
and (7.5.12), and therefore U ∈ S_loc, owing to the above-mentioned theorem. It remains to use the representation (7.5.4), which has the form

    Ee^{iλS̄} = A(g(λ)),    (7.5.19)

where χ ⊂= U, g(λ) = Ee^{iλχ} and A(w) = (1 − p)/(1 − pw).

The function A(w) is analytic in the disk |w| < 1/p and so, by Theorem 1.4.2,

    P(S̄ ∈ Δ[x)) ∼ A′(1) U(Δ[x)).

From this, Lemma 7.5.7 and the relation (7.5.7) we derive (7.5.13). The theorem is proved.
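The hypothesis (7.5.12) is easy to check numerically for a concrete semiexponential tail. The sketch below is an illustration, not from the book: it takes l(t) = √t, for which the integrated tail has the closed form F_+^I(x) = ∫_x^∞ e^{−√t} dt = 2(1 + √x)e^{−√x}, while v(x) = αl(x)/x = 1/(2√x).

```python
import math

def FI(x):
    # integrated tail of F_+(t) = exp(-sqrt(t)); substitute u = sqrt(t) to integrate
    return 2.0 * (1.0 + math.sqrt(x)) * math.exp(-math.sqrt(x))

def F_over_v(x):
    v = 0.5 / math.sqrt(x)                   # v(x) = alpha*l(x)/x with alpha = 1/2
    return math.exp(-math.sqrt(x)) / v

for x in (1e2, 1e4, 9e4):
    print(x, FI(x) / F_over_v(x))            # ratio tends to 1 as x grows
```

Here the ratio equals 1 + x^{−1/2} exactly, so the convergence in (7.5.12) is visible already at moderate x.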
7.5.3 A refinement of the integral theorem for the maxima of sums

In this subsection we will obtain another refinement of Theorem 7.5.1, which contains the next term of the asymptotic expansion for P(S̄ ≥ x). Such a refinement in the case F ∈ R was established in Corollary 3 of [63], but this was under additional moment and smoothness conditions on F_+. As will follow from Theorem 7.5.8 below, these additional conditions prove to be superfluous.

For M/G/1 queueing systems, i.e. in the case ξ = ζ − τ, where the r.v.'s ζ ≥ 0 and τ ≥ 0 are independent and τ follows the exponential distribution, a refinement of the first-order asymptotics for the distribution of S̄ (or, equivalently, for the limiting waiting-time distribution in M/G/1 systems) was obtained, under the assumption that ζ has a heavy-tailed density of a special form, in [267].

Theorem 7.5.8. Let F ∈ R, or F ∈ Se with α ∈ (0, 1), Eξ < 0, Eξ² < ∞ and let the distribution of ξ be non-lattice. Then

    P(S̄ ≥ x) = −F_+^I(x)/Eξ + c F_+(x) + o(F_+(x)),    (7.5.20)

where

    c = b/(1 − p) − 2a_+ p/(Eξ(1 − p)),    b = Eχ_−²/(2(Eχ_−)²),    p = P(η_+ < ∞),    a_+ = Eχ.

In the arithmetic case, this representation remains true for integer-valued x and somewhat different values of c (see Lemma 7.5.7). One also has¹

    c = Eξ²/(2(Eξ)²) − ES̄/Eξ.

¹ The calculation of the constant c to be found in [63] contains an error: in the corresponding expansion, the terms that correspond to the second derivatives F_+″ and so account for the dependence of c on the variance of ξ were not taken into account.
For remarks on why the coefficient c of F_+(x) in (7.5.20) and the corresponding coefficient in the representation (4.6.4), which was obtained under more restrictive conditions, are different, see the end of Corollary 4.6.3 (p. 210).

Remark 7.5.9. Observe that the asymptotic expansion (7.5.20) was obtained for the classes R and Se only. The question whether (7.5.20) holds for the entire class S remains open.

Remark 7.5.10. In the case Eξ² < ∞, Theorem 7.5.8 clearly implies Corollary 7.5.6. As in Corollary 7.5.6, the conditions F ∈ R and F ∈ Se can be relaxed. For example, instead of F ∈ R it suffices to assume that

(1) F_+ has the property that, for any fixed v, F_+(x + v ln x)/F_+(x) → 1 as x → ∞;
(2) F_+ is an upper-power function;
(3) F_+ has a regularly varying majorant V such that x^{−1} V(x) = o(F_+(x)) as x → ∞.

The scheme of the proof of Theorem 7.5.8 is basically the same as that of Theorem 7.5.5: it is based on factorization identities, Lemma 7.5.7(ii) and direct calculations relating the distribution of S̄ to the distributions of the sums

    Z_n := Σ_{i=1}^{n} ζ_i,    ζ_i := χ_i − a_+,

where the r.v.'s χ_i =d χ are independent. We will denote the distribution of the r.v. ζ = χ − a_+ by G.

At the last stage of the proof we will need an additional proposition, refining the asymptotics of P(Z_n ≥ x). It is an insignificant modification (and simplification) of Theorem 3 of [63] in the case G ∈ R, and of Theorem 2.1 of [52] in the case G ∈ Se.

In the case G ∈ R (G_+(t) = t^{−α_G} L_G(t)), consider the following smoothness condition¹, which was introduced in § 4.7 (see p. 217):

[D_(1,q)] As t → ∞, Δ → 0,

    G_+(t(1 + Δ)) − G_+(t) = −G_+(t)[Δ α_G (1 + o(1)) + o(q(t))],    (7.5.21)

where q(t) → 0.

As observed in Remark 3.4.8 and in the proofs of Theorems 4.4.4 and 4.5.1, the remainder term o(q(t)) can be directly transferred to the final answer (cf. (7.5.23), (7.5.25) below), after which one can simply assume that condition [D_(1,0)] is met.

¹ This is a relaxed version of condition [D₁] of [63]: the latter did not have the term o(q(t)) on the right-hand side of (7.5.21). It almost coincides with condition [D_(1,q)] from [66], which contained O(q(t)) instead of o(q(t)).
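For a pure power tail, condition [D_(1,q)] holds with exactly the leading term −G_+(t)Δα_G of (7.5.21). A quick numerical illustration (not from the book; the values of α_G, t and Δ are arbitrary choices):

```python
alpha = 2.5
G = lambda t: t ** (-alpha)        # regularly varying tail with index alpha (L_G = 1)

t, delta = 50.0, 1e-3
lhs = G(t * (1.0 + delta)) - G(t)  # left-hand side of (7.5.21)
lead = -G(t) * alpha * delta       # leading term of the right-hand side
print(lhs / lead)                  # close to 1; the relative error is O(delta)
```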
In the case G ∈ Se, condition [D_(1,q)] takes the following form. As usual, put z(t) := t/l(t).

[D_(1,q)] As t → ∞, Δ = o(l^{−1}(t)) (or Δt = o(z(t))),

    G_+(t(1 + Δ)) − G_+(t) = −G_+(t)[Δ α_G l(t)(1 + o(1)) + o(q(t))].    (7.5.22)

(This condition is a relaxed version of condition [D₁] from [52].)

In the case G ∈ R, α_G < 2, we will need the inverse function

    σ(n) := G_+^{(−1)}(1/n).

The above-mentioned refinement of the asymptotics of P(Z_n ≥ x) is contained in the following assertion.

Theorem 7.5.11. Let Eζ = 0.

(i) If Eζ² < ∞, G ∈ R, α_G < 2 and condition [D_(1,q)] holds then

    P(Z_n ≥ x) = n G_+(x)[1 + o(q(x)) + o(√n x^{−1})]    (7.5.23)

uniformly in n < c x²/ln x for some c > 0.

(ii) If G ∈ R, α_G ∈ (1, 2), G_−(t) ≤ c G_+(t) and condition [D_(1,q)] is met then

    P(Z_n ≥ x) = n G_+(x)[1 + o(q(x)) + o(σ(n) x^{−1})]    (7.5.24)

uniformly in n < ε/G_+(x), ε > 0.

(iii) If G ∈ Se, α_G ∈ (0, 1), Eζ² < ∞ and (7.5.22) holds then

    P(Z_n ≥ x) = n G_+(x)[1 + o(q(x)) + o(√n z^{−1})],    (7.5.25)

where z = z(x) = x/l(x), uniformly in n = o(z²).

Proof. (i) When G ∈ R, Eζ² < ∞, the assertion (7.5.23) is a direct consequence of Theorem 4.4.4 for k = 1.

(ii) The assertion (7.5.24) is a direct consequence of Theorem 3.4.4(iii).

(iii) The case G ∈ Se can be dealt with in the same way. All the calculations from the proof of Theorem 5.4.1(iii) remain valid here except for the estimate of the quantity E₂, which was introduced in (5.4.58):

    E₂ = E[G_+(x − Z_{n−1}); |Z_{n−1}| ≤ z],

and which gives the principal part of the desired asymptotics. More precisely, owing to (5.4.58), (5.4.59) and (5.4.65), one has for z ≥ √n the representation

    P(Z_n ≥ x) = n E₂ + n G_+(x) o(n z^{−2}).    (7.5.26)

Since |G_+(x − vz) − G_+(x)| < c G_+(x) for |v| ≤ 1, the part of the integral E₂ over the set εz < |Z_{n−1}| ≤ z, where ε tends to zero slowly enough, can be bounded, owing to Chebyshev's inequality, by

    G_+(x) O(P(εz ≤ |Z_{n−1}| ≤ z)) = G_+(x) o(n z^{−2}).
Therefore we just need to evaluate

    E[G_+(x − Z_{n−1}); |Z_{n−1}| ≤ εz],

where ε → 0. For this term, owing to [D_(1,q)] and the relations EZ_{n−1} = 0, E|Z_{n−1}| = O(√n), we have, again assuming that ε → 0 slowly enough,

    E[G_+(x − Z_{n−1}); |Z_{n−1}| ≤ εz]
        = G_+(x) E[1 + α_G Z_{n−1} z^{−1} + o(|Z_{n−1}| z^{−1}) + o(q(x)); |Z_{n−1}| ≤ εz]
        = G_+(x){1 + o(√n z^{−1}) + o(q(x)) + O(P(|Z_{n−1}| > εz)) + E(|Z_{n−1}| z^{−1}; |Z_{n−1}| > εz)}
        = G_+(x)[1 + o(√n z^{−1}) + o(q(x))].

Together with (7.5.26), this proves (7.5.25). Theorem 7.5.11 is proved.

Proof of Theorem 7.5.8. It follows from the identity (7.5.4) that

    P(S̄ ≥ x) = (1 − p) Σ_{n=0}^{∞} p^n P(Z_n ≥ x − a_+ n),    (7.5.27)

where Z_n = Σ_{i=1}^{n} (χ_i − a_+), a_+ = Eχ and the χ_i are independent and follow the same distribution as χ. Hence, owing to Lemma 7.5.7(ii), the probability P(χ ≥ x) = U(x) has the form (7.5.18) and therefore

    G_+(t) ≡ P(χ − a_+ ≥ t) = U(t + a_+) = F_+^I(t + a_+)/(a_− p) + b F_+(t)/p + o(F_+(t)).    (7.5.28)
Now let us verify that G_+(t) satisfies condition [D_(1,q)]. First let F ∈ R, Eζ² < ∞. Choosing n₁ ≍ εz and n₂ ≍ εx, where ε → 0 more slowly than z^{−1/2}, we write

    Σ_{n=0}^{∞} = Σ_{n=0}^{n₁} + Σ_{n=n₁+1}^{n₂} + Σ_{n=n₂+1}^{∞} =: Σ₁ + Σ₂ + Σ₃.

Then

    Σ₃ = O(e^{−cεx}) = O(e^{−cεl(x)z}) = O(e^{−cl(x)√z}) = o(F_+(x)).    (7.5.32)

In the sum Σ₁ we have n = o(z). For such n, one clearly has l(x − a_+n) = l(x) + o(1) and so (cf. (7.5.30)) we obtain

    P(Z_n ≥ x − a_+ n) = n [F_+^I(x)/(a_− p) + a_+(n − 1) F_+(x)/(a_− p) + b F_+(x)/p] (1 + o(√n/z)) + o(F_+(x)),

so that, as in (7.5.31),

    (1 − p)Σ₁ = F_+^I(x)/(a_−(1 − p)) + 2a_+ p F_+(x)/(a_−(1 − p)²) + b F_+(x)/(1 − p) + o(F_+(x)).    (7.5.33)

In the sum Σ₂, we have n ≤ εx, π₁ ≡ n l(x) x^{−2} ≤ ε/z → 0 and therefore, owing to Corollary 5.2.2(i), for a c₁ < ∞,

    P(Z_n ≥ x − a_+ n) ≤ c₁ n G_+((x − a_+ n)/r₀),    r₀ = 1 + (π₁ h/2)(1 + o(1)).

But, for n = o(x) and all small enough π₁,

    l((x − a_+ n)/r₀) > l(x − a_+ n)(1 − cπ₁)
        = l(x) − α a_+ n z^{−1} + o(max{1, n z^{−1}}) + O(n z^{−2})
        > l(x) − c₂ n z^{−1}

for a c₂ > 0. Hence

    (1 − p)Σ₂ ≤ c₃ Σ_{n=n₁+1}^{∞} exp{−l(x) + c₂ n z^{−1} + n ln p + ln n}
        ≤ c₃ exp{−l(x) + (εz/2) ln p}
        ≤ c₃ exp{−l(x) + (√z/2) ln p} = o(F_+(x)).
From this and (7.5.32), (7.5.33) we obtain (7.5.20). The theorem is proved.

The coefficient c in (7.5.20) also admits a somewhat different representation. Since b = a^{(2)}/(2a_−²) and, owing to the factorization identities,

    a_− = −Eξ/(1 − p),    a_+ = (1 − p) ES̄/p,    Eξ² = a^{(2)}(1 − p) − 2 Eξ ES̄,

we have

    b = (Eξ² + 2 Eξ ES̄)(1 − p)/(2(Eξ)²),
    c = (Eξ² + 2 Eξ ES̄)/(2(Eξ)²) − 2 ES̄/Eξ = Eξ²/(2(Eξ)²) − ES̄/Eξ.
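The two representations of c can be cross-checked in a case where every ingredient is explicit. The sketch below is an illustration, not from the book (and only tests the algebraic identity between the two formulae for c — Theorem 7.5.8 itself concerns heavy tails): it takes ξ = ζ − τ with ζ ∼ Exp(μ), τ ∼ Exp(λ), λ < μ, for which classically p = λ/μ, χ ∼ Exp(μ), −χ_− ∼ Exp(λ) and ES̄ = p/(μ − λ).

```python
lam, mu = 1.0, 2.0
p = lam / mu
Exi  = 1.0 / mu - 1.0 / lam                               # E xi < 0
Exi2 = 2.0 / mu**2 - 2.0 / (mu * lam) + 2.0 / lam**2      # E xi^2 for zeta - tau
ES   = p / (mu - lam)                                     # E S-bar
a_plus = 1.0 / mu                                         # E chi
b = (2.0 / lam**2) / (2.0 * (1.0 / lam) ** 2)             # E chi_-^2 / (2 (E chi_-)^2)

c1 = b / (1 - p) - 2 * a_plus * p / (Exi * (1 - p))       # representation of Theorem 7.5.8
c2 = Exi2 / (2 * Exi**2) - ES / Exi                       # alternative representation
print(c1, c2)                                             # both equal 4.0 here
```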
7.6 A Poissonian representation for the supremum S̄ and the time when it was attained

We will complete this chapter with a remark that is somewhat outside the mainstream exposition of the book and applies to arbitrary random walks (those without any conditions on the distribution F) for which S̄ = sup_{k≥0} S_k < ∞ a.s.

Let (τ, Z), (τ₁, Z₁), (τ₂, Z₂), . . . be a sequence of i.i.d. random vectors with distribution

    P(τ = j, Z ≥ t) = P(S_j ≥ t)/(Dj),    j ≥ 1,    t > 0,    (7.6.1)

where

    D = Σ_{j=1}^{∞} P(S_j ≥ 0)/j < ∞.

The latter condition is equivalent to the relation S̄ < ∞ a.s. (see e.g. § 7, Chapter XII of [122] or § 2, Chapter 11 of [49]). Here we make no additional assumptions on the distribution of the ξ_j.

Denote by θ the time when the supremum value S̄ is first attained in the random walk {S_k}:

    θ := min{k ≥ 0 : S_k = S̄},

and by ν an r.v. which is independent of {(τ_j, Z_j); j ≥ 1} and has the Poisson distribution with parameter D.

Theorem 7.6.1. If D < ∞ then the following representation holds true:

    (θ, S̄) =d (τ₁, Z₁) + · · · + (τ_ν, Z_ν),    (7.6.2)

where the right-hand side is equal to (0, 0) when ν = 0. In particular,

    S̄ =d Σ_{j=1}^{ν} Z_j,

where

    P(Z ≥ t) =: U(t) = (1/D) Σ_{j=1}^{∞} P(S_j ≥ t)/j,    t > 0.

If the distribution U(t) is subexponential then, as x → ∞, for a suitable function n(x) → ∞ we have P(Z₁ + · · · + Z_n ≥ x) ∼ n U(x) for all n ≤ n(x). In this case,

    P(S̄ ≥ x) ∼ Σ_{k=0}^{∞} P(ν = k) k U(x) = Eν U(x) = Σ_{j=1}^{∞} P(S_j ≥ x)/j.

The required subexponentiality of U(t) will apparently follow from the subexponentiality of F. In any case, U(t) has this property under the conditions of Theorem 2.7.1 in the case E|ξ_j| = ∞ (see also (2.7.7)). This is also true when the conditions Eξ_j = −a < 0 and [ · , =], V ∈ R, are satisfied. Indeed, in this case, for all n as x → ∞,

    P(S_n ≥ x) = P(S_n + an ≥ x + an) ∼ n F_+(x + an),

so that, by Theorem 1.1.4(iv),

    U(x) = (1/D) Σ_{n=1}^{∞} P(S_n ≥ x)/n ∼ (1/D) Σ_{n=1}^{∞} F_+(x + an)
        ∼ (1/(aD)) ∫_x^∞ F_+(t) dt ∼ x F_+(x)/(aD(α − 1)).    (7.6.3)

The distribution U(t) will be subexponential for semiexponential distributions F as well; this can be verified in the same way as (7.6.3) (see Chapter 5).

Proof of Theorem 7.6.1. The assertion of the theorem immediately follows from Corollary 6, § 15 of [42], which establishes the infinite divisibility of the distribution of (θ, S̄) via the representation

    Eω^θ e^{iλS̄} = exp{ Σ_{k=1}^{∞} (ω^k/k) ∫_0^∞ (e^{iλt} − 1) dP(S_k < t) }
        = exp{ Σ_{k=1}^{∞} (ω^k/k) E(e^{iλS_k}; S_k ≥ 0) − D }
(see formula (10) in § 15 of [42]). Now, the double transform

    Eω^{Σ_{j≤ν} τ_j} e^{iλ Σ_{j≤ν} Z_j}

for the sum on the right-hand side of (7.6.2) has exactly this form. The theorem is proved.

Results similar to Theorem 7.6.1 can also be obtained for the distribution of (θ*, S̄), where θ* := max{k ≥ 0 : S_k = S̄} (see [42]).
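Theorem 7.6.1 can be illustrated numerically. The sketch below is an illustration, not from the book: it takes ξ ∼ N(−a, 1), for which P(S_j ≥ 0) = Φ(−a√j) is known exactly, computes D and the distribution (7.6.1), and compares the direct simulation of S̄ with a simulation of the Poissonian sum in (7.6.2); the truncation level J and the rejection sampler for Z are ad hoc choices.

```python
import bisect
import math
import random

random.seed(7)
a, J = 0.5, 200                                   # xi ~ N(-a, 1); truncate the series at J

Phi = lambda z: 0.5 * math.erfc(-z / math.sqrt(2.0))
pj = [Phi(-a * math.sqrt(j)) for j in range(1, J + 1)]    # P(S_j >= 0)
D = sum(q / j for j, q in enumerate(pj, 1))

cum, s = [], 0.0
for j, q in enumerate(pj, 1):                     # P(tau = j) = P(S_j >= 0)/(D j), cf. (7.6.1)
    s += q / (D * j)
    cum.append(s)

def draw_Z():
    j = bisect.bisect(cum, random.random()) + 1   # draw tau = j
    while True:                                   # Z ~ (S_j | S_j >= 0), by rejection
        sj = -a * j + math.sqrt(j) * random.gauss(0.0, 1.0)
        if sj >= 0.0:
            return sj

def sup_poisson():                                # S-bar via the Poisson(D) representation
    nu, t = 0, random.random()
    while t > math.exp(-D):                       # Knuth's Poisson sampler
        nu += 1
        t *= random.random()
    return sum(draw_Z() for _ in range(nu))

def sup_walk():                                   # direct S-bar = max(0, S_1, ..., S_J)
    s = smax = 0.0
    for _ in range(J):
        s += random.gauss(-a, 1.0)
        smax = max(smax, s)
    return smax

n = 10000
m1 = sum(sup_walk() for _ in range(n)) / n
m2 = sum(sup_poisson() for _ in range(n)) / n
print(m1, m2)                                     # two estimates of E S-bar should agree
```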
8 On the asymptotics of the first hitting times
8.1 Introduction

For x ≥ 0, let

    η_+(x) := inf{k ≥ 1 : S_k > x},    η_−(x) := inf{k ≥ 1 : S_k ≤ −x},

where we put η_+(x) = ∞ (η_−(x) = ∞) if S_k ≤ x (S_k > −x) for all k = 1, 2, . . .

The objective of the present chapter is to find the asymptotics of, and bounds for, the probabilities P(η_−(x) = n) and P(η_+(x) = n), or for their integral analogues P(n < η_−(x) < ∞) and P(n < η_+(x) < ∞), as n → ∞. The level x ≥ 0 can be either fixed or growing with n. Special attention will be paid to the random variables η_± = η_±(0).

Since {η_+(x) > n} = {S̄_n ≤ x}, the asymptotics of P(η_+(x) > n) → 0 can be considered as the asymptotics of the probabilities of small deviations of S̄_n.

In the study of the above-mentioned asymptotics, the determining role is played by the 'drift' in the random walk, which we will characterize using the quantities

    D = D_+ := Σ_{k=1}^{∞} P(S_k > 0)/k    and    D_− := Σ_{k=1}^{∞} P(S_k ≤ 0)/k,    (8.1.1)

where clearly D_+ + D_− = ∞. Let p := P(η_+ = ∞). It is well known (see e.g. [122, 49]) that

    {D_+ < ∞, D_− = ∞} ⟺ {η_− < ∞ a.s., p > 0} ⟺ {S̲ = −∞, S̄ < ∞ a.s.}    (8.1.2)

and

    {D_+ = ∞, D_− = ∞} ⟺ {η_− < ∞, η_+ < ∞ a.s.} ⟺ {S̲ = −∞, S̄ = ∞ a.s.},

where S̄ = sup_{k≥0} S_k and S̲ = inf_{k≥0} S_k. Also, it is obvious that a relation symmetric to (8.1.2) holds in the case {D_− < ∞, D_+ = ∞}. Taking into account this symmetry, we can confine ourselves to considering the case

    A₀ = {D_− = ∞, D_+ = ∞}
and only one of the two possibilities

    A_− = {D_− = ∞, D_+ < ∞}    or    A_+ = {D_− < ∞, D_+ = ∞}.

If Eξ = a exists then

    A₀ = {a = 0},    A_− = {a < 0},    A_+ = {a > 0}.

In what follows, the classification of the results will be made according to the following three main criteria:

(1) the value of x (we will distinguish between the cases x = 0, a fixed value of x > 0 and x → ∞);
(2) the direction of the drift (one of the possibilities A₀, A_±);
(3) the character of the distribution of ξ.

Accordingly, the present chapter has the following structure. In § 8.2 we study the case of a fixed level x, mostly when x = 0: §§ 8.2.1–8.2.3 are devoted to the case A_−, x = 0, for various distribution classes for ξ; § 8.2.4 deals with the case A₀. The relationship between the distributions of η_±(x) and η_± for a fixed x > 0 is discussed in § 8.2.5. In § 8.3 we consider the case where x → ∞ together with n; §§ 8.3.1–8.3.3 deal with various distribution classes for the law of ξ.

The asymptotics of the distributions of η_±(x) do not always depend on the exact asymptotic behaviour of the distribution tails of ξ. But sometimes, in the cases of regularly varying or exponentially fast decaying tails F_+, the desired asymptotics are closely related to each other (cf. Chapter 6). Therefore exponentially fast decaying distributions are also considered in the present chapter.

A survey of results close to the exposition of this chapter was presented in [193].

8.2 A fixed level x

8.2.1 The case A_− with x = 0. Introduction

Assuming that a = Eξ < 0 exists, we will concentrate here on studying the asymptotics of the probabilities

    P(η_− > n)    and    P(η_+ = n)    as    n → ∞,    (8.2.1)

and also on the closely related asymptotics of the probability P(θ = n) for the time θ := min{n ≥ 0 : S_n = S̄} when the random walk first attains its maximum value S̄. This relation has a simple form. Since

    P(θ = n) = P(S̄_{n−1} < S_n = S̄_n) P(max_{k≥n}(S_k − S_n) = 0)
(the first factor on the right-hand side is equal to 1 for n = 0) and, by the duality principle for random walks (see e.g. § 2, Chapter XII of [122]),

    P(S̄_{n−1} < S̄_n = S_n) = P(η_− > n),

it follows that

    P(θ = n) = p P(η_− > n)    (8.2.2)

for p = P(S̄ = 0) = P(η_+ = ∞) = 1/Eη_− (with regard to the last two equalities, see Theorem 8.2.1(i) below or [122, 49, 42]).

We will consider the following distribution classes, which we have already dealt with, and their extensions:

• the class R of distributions with regularly varying right tails

    F_+(t) = V(t) = t^{−α} L(t),    α ≥ 0,    (8.2.3)

where L(t) is an s.v.f., and

• the class Se of semiexponential distributions with tails

    F_+(t) = V(t) = e^{−l(t)},    l(t) = t^{α} L(t),    α ∈ (0, 1),    (8.2.4)

where L(t) is an s.v.f. such that, for Δ = o(t), t → ∞, and any fixed ε > 0,

    l(t + Δ) − l(t) ∼ α Δ l(t)/t    if α Δ l(t)/t > ε;
    l(t + Δ) − l(t) → 0    if α Δ l(t)/t → 0    (8.2.5)

(see also (5.1.1)–(5.1.5), p. 233). According to Remark 1.2.24, a sufficient condition for (8.2.5) is that the function L(t) is differentiable for all large enough t and that L′(t) = o(L(t)/t), t → ∞. Another sufficient condition for (8.2.5) is that l(t) = l₁(t) + o(1), t → ∞, where l₁(t) satisfies (8.2.5).

Some properties of distributions from the classes R and Se were studied in Chapter 1. In particular, we saw there that distributions from these classes are subexponential (see § 1.1.4 and Theorem 1.2.36 respectively).

It turns out that if we somewhat extend the classes R and Se by allowing the functions F_+ to 'oscillate slowly' (see below) then distributions from the new classes will still possess the properties of the distributions F ∈ R and F ∈ Se that are required for our purposes in this chapter. So we will deal with such extensions as well.

An alternative to R and Se is

• the class C of exponentially fast decaying distributions, i.e. distributions that satisfy the right-sided Cramér condition:

    λ_+ := sup{λ : ϕ(λ) < ∞} > 0,    where    ϕ(λ) := Ee^{λξ}.
Let λ₀ be the point at which the minimum

    min_λ ϕ(λ) = ϕ(λ₀)

occurs. We will distinguish between the two possibilities:

    (1) λ₀ ≤ λ_+, ϕ′(λ₀) = 0;    (2) λ₀ = λ_+, ϕ′(λ_+) < 0.    (8.2.6)

It is evident that we always have ϕ′(λ₀) = 0 in case (1) if λ₀ < λ_+ and Eξ < 0.

Now we will turn to previously known results. As usual, the symbol c, with or without indices, will denote a constant, not necessarily with the same meaning when it appears in different formulae.

In the case of bounded lattice-valued ξ's, the complete asymptotic analysis of P(η_±(x) = n) for all x and n → ∞ was presented in [33, 34]. For an extended version of the class R, the asymptotics P(θ = n) ∼ cV(−an), and hence that of P(η_− > n), were obtained in Chapter 4 of [42]. It was also established there that, for the class C in the case λ₀ < λ_+, one has

    P(θ = n) ∼ c ϕⁿ(λ₀)/n^{3/2}    (∼ p P(η_− > n) owing to (8.2.2)).    (8.2.7)

Comprehensive results on the asymptotics of the probabilities P(η_−(x) > n) and P(n < η_+(x) < ∞) in the case of a fixed x ≥ 0 for the classes R and C were obtained in [116, 32, 104, 27, 193]. In this connection, one should note that some results from Theorems 8.2.3, 8.2.12, 8.2.14 and 8.2.16 below are already known. Nevertheless, we present them here for completeness of exposition.

Necessary and sufficient conditions for the finiteness of Eη_−^γ, γ > 0, and some relevant problems were studied in [143, 154, 138, 160]. It was found, in particular, that Eη_−^γ < ∞, γ > 0, iff E(ξ⁺)^γ < ∞, where ξ⁺ = max{0, ξ} (see e.g. [143]). Unimprovable bounds for P(η_− > n) were given in § 43 of [50].

Because of the established relationship (8.2.2) between the distributions of the r.v.'s θ and η_−, we will concentrate on the distributions of η_± in what follows. We will begin with results that clarify the relationship between the distributions of η_− and η_+, which is of independent interest.

Introduce an r.v. ζ having the generating function

    H(z) := Ez^ζ = (1 − Ez^{η_−})/((1 − z) Eη_−)

(Eη_− < ∞ owing to our assumption that Eξ < 0), so that

    P(ζ = k) = P(η_− > k)/Eη_−,    k = 0, 1, . . . ,

and put

    p(z) := E(z^{η_+} | η_+ < ∞),    p := P(η_+ = ∞) = P(S̄ = 0),    q := 1 − p = P(η_+ < ∞).
Moreover, let η be an r.v. with distribution P(η = k) = P(η_+ = k | η_+ < ∞) (and generating function p(z)); let η₁, η₂, . . . be independent copies of η,

    T₀ := 0,    T_k := η₁ + · · · + η_k,    k ≥ 1,

and let ν be an r.v. with the geometric distribution P(ν = k) = p q^k, k ≥ 0, that is independent of {η_j}.

Theorem 8.2.1. The following assertions hold true.

(i)

    D = Σ_{k=1}^{∞} P(S_k > 0)/k = −ln p,    p = 1/Eη_−.    (8.2.8)

(ii)

    H(z) = (1 − q)/(1 − q p(z)),    (8.2.9)

and therefore the distribution of η_+ completely defines that of η_−, and vice versa.

(iii) For any n ≥ 0,

    P(η_− > n) = p^{−1} P(T_ν = n) > P(η_+ = n).    (8.2.10)

(iv) If the sequence {P(η_+ = n)/q; n ≥ 1} is subexponential then, as n → ∞,

    P(η_− > n) ∼ (1/p²) P(η_+ = n).    (8.2.11)

If the sequence

    b_n := P(S_n > 0)/(nD),    n ≥ 1,    (8.2.12)

is subexponential then, as n → ∞,

    P(η_− > n) ∼ (e^D/n) P(S_n > 0),
    P(η_+ = n) ∼ (e^{−D}/n) P(S_n > 0).    (8.2.13)

(v) The r.v. ζ has an infinitely divisible distribution and admits the representation

    ζ =d ω₁ + · · · + ω_ν,    (8.2.14)

where ω₁, ω₂, . . . are independent copies of an r.v. ω with distribution

    P(ω = k) = P(S_k > 0)/(kD),    k = 1, 2, . . . ,

and the r.v. ν is independent of {ω_i} and has the Poisson distribution with parameter D.
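Assertion (i) can be checked numerically for, say, a Gaussian walk. The sketch below is an illustration, not from the book: with ξ ∼ N(−a, 1) one has P(S_k > 0) = Φ(−a√k), so D is computable to high accuracy, while p = P(η_+ = ∞) and Eη_− are estimated by simulation; by (8.2.8), all three of p̂, e^{−D} and 1/Êη_− should nearly coincide (the horizon of 400 steps is an ad hoc truncation).

```python
import math
import random

random.seed(3)
a = 0.5                                            # xi ~ N(-a, 1)

Phi = lambda z: 0.5 * math.erfc(-z / math.sqrt(2.0))
D = sum(Phi(-a * math.sqrt(k)) / k for k in range(1, 2001))   # P(S_k > 0) = Phi(-a sqrt(k))

n, stayed_below, eta_total = 8000, 0, 0
for _ in range(n):
    s, eta_minus, went_up = 0.0, None, False
    for k in range(1, 401):                        # 400 steps suffice: the drift is -0.5
        s += random.gauss(-a, 1.0)
        went_up = went_up or s > 0.0
        if eta_minus is None and s <= 0.0:
            eta_minus = k                          # eta_- : first non-positive sum
    stayed_below += not went_up                    # event {eta_+ = infinity}, essentially
    eta_total += eta_minus if eta_minus else 400
p_hat = stayed_below / n
print(p_hat, math.exp(-D), n / eta_total)          # all three approximately equal
```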
It follows from (8.2.14) that

    P(η_− > n) = Eη_− P(ω₁ + · · · + ω_ν = n).    (8.2.15)

Remark 8.2.2. The assertion of the theorem that the distributions of the r.v.'s η_± determine each other looks somewhat surprising, as a similar assertion for the first positive and first non-positive sums χ_+ := S_{η_+} and χ_− := S_{η_−} would be wrong. Indeed, if F_−(t) = c e^{−ht} for t ≥ 0, c < 1, h > 0, then for any F_+ we would have P(χ_− < −t) = e^{−ht}, t > 0, whereas

    1 − E(e^{iλχ_+}; η_+ < ∞) = (1 − f(λ))(iλ + h)/(iλ),    f(λ) := Ee^{iλξ}.

Therefore the distributions P(χ_+ > x, η_+ < ∞) will be different for different tails F_+(t), t > 0.

Proof. The proof of Theorem 8.2.1 generally follows the path used in § 21 of [42] to study the asymptotics of P(θ = n). It is based on the factorization identities

    1 − Ez^{η_−} = exp{−Σ_{n=1}^{∞} (zⁿ/n) P(S_n ≤ 0)},    (8.2.16)
    1 − E(z^{η_+}; η_+ < ∞) = exp{−Σ_{n=1}^{∞} (zⁿ/n) P(S_n > 0)},    (8.2.17)

|z| ≤ 1 (see e.g. [122, 49, 42]). Since

    1/(1 − z) = exp{Σ_{n=1}^{∞} zⁿ/n} = e^{−ln(1−z)},

the identities (8.2.16), (8.2.17) can be rewritten as

    Σ_{n=0}^{∞} zⁿ P(η_− > n) = (1 − Ez^{η_−})/(1 − z) = exp{Σ_{n=1}^{∞} (zⁿ/n) P(S_n > 0)},    (8.2.18)
    Σ_{n=1}^{∞} zⁿ P(η_+ = n) = 1 − exp{−Σ_{n=1}^{∞} (zⁿ/n) P(S_n > 0)}.    (8.2.19)

We now consider the parts of the theorem in turn.

(i) Letting z = 1 in (8.2.18), (8.2.19), we obtain D = −ln p and p = 1/Eη_−.

(ii) The assertion (8.2.9) follows from comparing (8.2.18) with (8.2.19).

(iii) The relation (8.2.9) is clearly equivalent to the equality

    1/(1 − Σ_{n=1}^{∞} zⁿ P(η_+ = n)) = Σ_{n=0}^{∞} zⁿ P(η_− > n),
of which the left-hand side is equal to

    Σ_{k=0}^{∞} q^k Σ_{n=0}^{∞} zⁿ P(T_k = n) = p^{−1} Ez^{T_ν}.

This immediately implies (8.2.10).

(iv) Consider the entire functions

    A_−(v) := e^{vD}    and    A_+(v) := 1 − e^{−vD}.

For |z| ≤ 1, the relations (8.2.18), (8.2.19) can be rewritten in the form

    Σ_{n=0}^{∞} zⁿ P(η_− > n) = A_−(b(z)),    (8.2.20)
    Σ_{n=1}^{∞} zⁿ P(η_+ = n) = A_+(b(z)),    (8.2.21)

where

    b(z) := (1/D) Σ_{n=1}^{∞} zⁿ P(S_n > 0)/n ≡ Σ_{n=1}^{∞} zⁿ b_n

and the b_n were defined in (8.2.12). Since {b_n} is subexponential by assumption, it only remains to make use of the known theorems (see § 1.4 or e.g. [83]) on functions of distributions (specified by the functions A_± in (8.2.20), (8.2.21)). As A_± are entire functions and A_−′(1) = De^D, A_+′(1) = De^{−D}, we obtain, by virtue of (8.2.20), (8.2.21) and Theorem 1.4.3, that

    P(η_− > n) ∼ b_n A_−′(1) = e^D P(S_n > 0)/n = Eη_− P(S_n > 0)/n,
    P(η_+ = n) ∼ b_n A_+′(1) = e^{−D} P(S_n > 0)/n = P(η_+ = ∞) P(S_n > 0)/n.

This proves (8.2.13). The relation (8.2.11) can be proved in exactly the same way, taking into account the fact that the function

    A(v) := (1 − q)/(1 − qv)

is analytic in the disk |v| < 1/q, with A′(1) = q/(1 − q), and that H(z) = A(p(z)) owing to (8.2.9). Therefore, as n → ∞,

    P(η_− > n)/Eη_− ∼ (q/(1 − q)) · P(η_+ = n)/q,

which (taking account of (8.2.8)) is equivalent to (8.2.11).

(v) The assertion (8.2.14) follows from the observation that

    H(z) = exp{Σ_{n=1}^{∞} (zⁿ/n) P(S_n > 0) − D} = exp{D(Q(z) − 1)},
∞ z n P(Sn > 0) − D = exp D(Q(z) − 1) , H(z) = exp n n=1
(8.2.22)
where Q(z) =
∞ z n P(Sn > 0) . nD n=1
It is clear that the right-hand side of (8.2.22) is the generating function of the random sum ω1 + · · · + ων . The theorem is proved. Now we will turn to ‘explicit’ asymptotic representations for the distributions of the r.v.’s η± in terms of the distribution F. 8.2.2 The case Eξ < 0 for x = 0 when F belongs to the classes R or Se or to their extensions Theorem 8.2.3. Let a = Eξ < 0 and either F ∈ R or F ∈ Se, where we assume in the latter case that α < 1/2. Then, as n → ∞, P(η− > n) ∼ eD V (|a|n), −D
P(η+ = n) ∼ e
(8.2.23)
V (|a|n),
(8.2.24)
where D is defined in (8.1.1), (8.2.8) and eD = Eη− ,
e−D = P(S = 0) = P(η+ = ∞).
(8.2.25)
The same assertion holds for the ‘dual’ pair of r.v.’s 0 := inf{k 1 : Sk 0}, η+
0 η− := inf{k : Sk < 0}.
Theorem 8.2.4. Assume that the conditions of Theorem 8.2.3 are satisfied. Then, as n → ∞, 0
0 P(η− > n) ∼ eD V (|a|n), 0
0 = n) ∼ e−D V (|a|n), P(η+
where D0 :=
∞ P(Sk 0) , k
k=1
0
0 eD = Eη− =
1 . = ∞)
0 P(η+
Remark 8.2.5. Both assertions of Theorem 8.2.3 have a simple ‘physical’ interpretation. First note that, under the conditions of the theorem, the probability that during time m a large jump of size x ∼ cn or bigger will occur is approximately equal to mV (x). Further, the rare event {η− > n} where n is large occurs, roughly speaking, when there is a large jump of size |a|n in a time interval of length η− (with mean Eη− ); after that, the random walk will need, on average, a time n to reach the negative half-axis. Therefore it is natural to expect that P(η− > n) ∼ Eη− V (|a|n). The result P(S η− > x) ∼ Eη− V (x) of [8] admits a similar interpretation. The rare event {η+ = n} occurs when first (prior to time n) the trajectory stays
above zero (the probability of this is close to P(S̄ = 0); at time n the trajectory will be in a 'neighbourhood' of the point an) and then, at time n, there occurs a large jump of size −an = |a|n. These considerations explain, to some extent, the asymptotics (8.2.24).

Proof of Theorem 8.2.3. The assertions (8.2.23) and (8.2.24) are simple consequences of Theorem 8.2.1(iv) (see (8.2.12), (8.2.13)). We simply have to find the asymptotics of {b_n} as n → ∞ and verify that this sequence is subexponential.

The probability P(S_n > 0) can be written in the form P(S_n⁰ > |a|n), where S_k⁰ := S_k − ak, ES_k⁰ = 0. Now, if F ∈ R then

    P(S_n⁰ > |a|n) ∼ n V(|a|n)    (8.2.26)

(see e.g. §§ 3.4 and 4.4). The same relation holds if F ∈ Se (that is, conditions (8.2.4), (8.2.5) are satisfied) and α < 1/2 (see § 5.4; for α ≥ 1/2 the deviations x = |a|n do not belong to the region where (8.2.26) holds true). Thus, under the conditions of Theorem 8.2.3,

    b_n ∼ V(|a|n)/D,

and so the sequence {b_n} is subexponential (see p. 44). The theorem is proved.

It follows from Theorem 8.2.3 that the relation (8.2.11) holds as n → ∞ and that, for any increasing function Q(t), putting Q^I(t) := ∫_0^t Q(u) du, we have the following equivalences:

    EQ(η_−) < ∞ ⟺ E[Q(ξ/|a|); ξ > 0] < ∞,
    E[Q(η_+); η_+ < ∞] < ∞ ⟺ E[Q^I(ξ/|a|); ξ > 0] < ∞.

Theorem 8.2.4 can be proved in exactly the same way, using the 'dual' identities

    1 − Ez^{η_−^0} = exp{−Σ_{k=1}^{∞} (z^k/k) P(S_k < 0)},
    1 − E(z^{η_+^0}; η_+^0 < ∞) = exp{−Σ_{k=1}^{∞} (z^k/k) P(S_k ≥ 0)}.

The duality seen in Theorems 8.2.3 and 8.2.4 is present everywhere in the remainder of the chapter as well. Because of its obviousness, its further description will be omitted in what follows.

One can broaden the conditions under which the relations (8.2.23), (8.2.24) remain true. First we consider the case of finite variance.

Theorem 8.2.6. Let Eξ² < ∞.
378
On the asymptotics of the first hitting times
(i) Suppose that, instead of the condition F ∈ R, the following conditions hold in Theorem 8.2.3: (i1) [ · , an) (see Chapters 3 and 4). One obtains from (i1)–(i3) that, for h ∈ (0, 1),

P(S_n^0 > |a|n) = n P(S_n^0 > |a|n, ξn − a > h|a|n) + O((nV(h|a|n))²),

where, for M → ∞ slowly enough, one has, by virtue of Chebyshev’s inequality and (i2), (i3), that

P(S_n^0 > |a|n, ξn − a > h|a|n)
  = P(S_{n−1}^0 + ξn − a > |a|n, ξn − a > h|a|n)
  = P(S_{n−1}^0 > (1 − h)|a|n) F₊(|a|(hn − 1))
    + ∫_{−∞}^{(1−h)|a|n} P(S_{n−1}^0 ∈ dt) F₊(|a|(n − 1) − t)     (8.2.27)
  = o(V(n)) + E[ F₊(|a|(n − 1) − S_{n−1}^0); S_{n−1}^0 < M√n ]
    + E[ F₊(|a|(n − 1) − S_{n−1}^0); M√n ≤ S_{n−1}^0 < (1 − h)|a|n ]
  = F₊(|a|n) + o(F₊(n)).

This proves the relation

P(S_n^0 > |a|n) ∼ n F₊(|a|n).
The subexponentiality of {bn}, bn ∼ D^{−1} F₊(|a|n), follows from the relations (supposing for simplicity that n is even)

Σ_{k=0}^n bk b_{n−k} = 2 Σ_{k=0}^{n/2−1} bk b_{n−k} + b²_{n/2},

where

Σ_{k=0}^{n/2−1} = Σ_{k=0}^{M√n} + Σ_{k=M√n+1}^{n/2−1}

and, owing to (i2), (i3),

Σ_{k=0}^{M√n} bk b_{n−k} ∼ bn,   Σ_{k=M√n+1}^{n/2−1} bk b_{n−k} < c F₊(|a|n) Σ_{k=M√n+1} bk = o(bn).

(ii) The case when conditions (ii1)–(ii3) are met is dealt with in a similar manner, using the results of Chapter 5. Theorem 8.2.6 is proved.

The case Eξ² = ∞ can be considered in exactly the same way. For simplicity let α ∈ (1, 2) in (8.2.3), and let

F₋(t) ≡ P(ξ < −t) ≤ c F₊(t).     (8.2.28)
Theorem 8.2.7. Under the above conditions, the first assertion of Theorem 8.2.6 remains true in the case (8.2.28), provided that we replace the range |u| < M√t in (i3) by |u| < Mσ(t), σ(t) = V^{(−1)}(1/t).

If the condition P(ξ < −t) ≤ c F₊(t) does not hold then Theorem 8.2.7 still remains true, but with a different function σ(t) (see Chapter 3).

Proof. The proof of Theorem 8.2.7 completely repeats that of Theorem 8.2.6. One just replaces in (8.2.27) the deviation range {S_{n−1}^0 < M√n} by the range {S_{n−1}^0 < Mσ(n)} and then uses bounds from Chapter 3. One could also consider the cases α = 1, α = 2.

Now we will derive bounds for the probabilities (8.2.1). Since, owing to Theorem 8.2.1(iii), one has P(η+ = n) < P(η− > n), we will obtain bounds only for P(η− > n).

Theorem 8.2.8. Assume that there exists a distribution F̃ which satisfies the conditions of Theorem 8.2.6 and is such that F₊(t) ≤ Ṽ(t) := F̃₊(t) for all t (in other words, one has ξ ≤ ξ̃ in distribution, where P(ξ̃ ≥ t) = Ṽ(t); the r.v. ξ̃ satisfies the conditions of Theorem 8.2.3, if Eξ̃ = ã < 0). Then, for any ε > 0 and all large enough n,

P(η− > n) ≤ Ṽ(|a|n(1 − ε)) Eη−.     (8.2.29)
If F̃ ∈ Se, α ≥ 1/2, then we still have (8.2.29) (in the case F̃ ∈ Se, multiplying Ṽ(|a|n(1 − ε)) by a constant factor makes no difference, since a small variation in ε can change Ṽ(|a|n(1 − ε)) by more than a constant factor).

Proof. The assertion (8.2.29) is next to obvious, since η− ≤ η̃− in distribution, where η̃− is defined as η− but for i.i.d. r.v.’s ξ̃1, ξ̃2, . . . ⊂= F̃. Therefore, by virtue of Theorem 8.2.3,

P(η− > n) ≤ P(η̃− > n) ∼ Ṽ(|ã|n) Eη̃−,   n → ∞.     (8.2.30)

Now we will redefine the distribution of ξ̃ as follows:

P(ξ̃ ≥ t) = F₊(t) for t ≤ M,   P(ξ̃ ≥ t) = Ṽ(t) for t > Ṽ^{(−1)}(F₊(M)),

and then, for a given ε > 0, choose a value M such that Eξ̃ =: ã < a + ε, Eη̃− < Eη− + ε. This, together with (8.2.30), implies (8.2.29).

To prove the second assertion, we can again make use of the above argument, except that there is no relation of the form (8.2.26) for the sums S̃n = Σ_{i=1}^n ξ̃i. If Ṽ ∈ Se, α ≥ 1/2, then, from the results of Chapter 5, it follows only that, for any ε > 0 and all large enough n,

P(S̃n − ãn > |ã|n) ≤ cn Ṽ^{1−ε}(|ã|n) = cn e^{−l(|ã|n)(1−ε)}     (8.2.31)

(see Corollary 5.2.2, p. 240). Since l(tn) ∼ t^α l(n) for a fixed t > 0, we see that when ã < a + ε one can bound the right-hand side of (8.2.31) (slightly changing ε if needed) by

n e^{−l(|a|n(1−ε))} = n Ṽ(|a|n(1 − ε)).

Owing to (8.2.18), the probabilities P(η− > n) ≤ P(η̃− > n) do not exceed the coefficients in the Taylor expansion of

exp{ Σ_{k=1}^∞ (z^k/k) P(S̃k > 0) },

where

(1/k) P(S̃k > 0) ≤ Ṽ(|a|k(1 − ε))

for large enough k. Therefore these probabilities will also not exceed the coefficients in the expansion of

exp{ B Σ_{k=0}^∞ z^k b̃k },

where b̃k ∼ B^{−1} Ṽ(|a|k(1 − ε)), Σ_{k=1}^∞ b̃k = 1, and

B := Σ_{k=1}^M (1/k) P(S̃k > 0) + Σ_{k>M} Ṽ(|a|k(1 − ε)).
Now set A(v) := e^{Bv}, so that A′(1) = Be^B. As the sequence {b̃k} is subexponential, we obtain from Theorem 1.4.3 that

P(η− > n) < Ṽ(|a|n(1 − ε)) Be^B (1 + o(1)).

Since the value of Ṽ(|a|n(1 − ε)) can be diminished Be^B times (or by any other fixed factor) by slightly changing ε, we have obtained the second assertion of the theorem. The theorem is proved.

Corollary 8.2.9. If

E := E[ e^{l(ξ)}; ξ > 0 ] < ∞,

where l(t) = t^α L(t) is a non-decreasing function, α ∈ (0, 1), and L(t) is an s.v.f., then, for any ε > 0,

E e^{l(|a|(1−ε)η−)} < ∞,   E[ e^{l(|a|(1−ε)η+)}; η+ < ∞ ] < ∞.

This assertion follows directly from Theorem 8.2.8 and Chebyshev’s inequality P(ξ ≥ x) ≤ E e^{−l(x)}.

Remark 8.2.10. In the case Ṽ ∈ Se the assertion (8.2.29) admits a simpler proof, based on the crude inequalities (for, say, α < 1/2)

P(η− > n) ≤ P(Sn > 0) ≤ P(S̃n > 0)(1 + o(1)) ≤ n Ṽ(|ã|n) ≤ Ṽ(|a|n(1 − ε)).

Remark 8.2.11. In the assertion (8.2.29) of Theorem 8.2.8 one can assume that ε = ε(n) → 0 as n → ∞. The rate of convergence ε(n) → 0 can be obtained from the bounds for ε in (8.2.31), which are found in Corollary 5.2.2, and from estimates for the distance between a and ã.

8.2.3 The case Eξ < 0 for x = 0 when F belongs to the class C

First we consider the possibility (1) in (8.2.6) for the case F ∈ C. Recall that ϕ(λ) = Ee^{λξ}, that λ0 > 0 is the point where the function ϕ(λ) attains its minimum value ϕ := min_λ ϕ(λ), and that λ+ = sup{λ : ϕ(λ) < ∞}. In the case under consideration, one has λ0 ≤ λ+, ϕ′(λ0) = 0, which includes the following two substantially different subcases:

(1a) λ0 < λ+;
(1b) λ0 = λ+, ϕ′(λ+) = 0.
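The quantities λ0 and ϕ = min_λ ϕ(λ) can be illustrated on a toy two-point distribution (our own numerical sketch, not part of the book): for P(ξ = 1) = p < 1/2, P(ξ = −1) = 1 − p one has the closed forms λ0 = ln((1−p)/p)/2 and ϕ(λ0) = 2√(p(1−p)), and the Cramér transform of ξ at λ0 (cf. (8.2.35) below) has zero mean.

```python
from math import exp, log, sqrt

p = 0.3           # P(xi = +1); q = 1 - p = P(xi = -1); E xi = 2p - 1 < 0
q = 1 - p
phi = lambda lam: p * exp(lam) + q * exp(-lam)   # moment generating function

# Minimize the convex function phi by ternary search.
lo, hi = -10.0, 10.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if phi(m1) < phi(m2):
        hi = m2
    else:
        lo = m1
lam0 = (lo + hi) / 2

# Closed forms for this two-point example.
assert abs(lam0 - 0.5 * log(q / p)) < 1e-6
assert abs(phi(lam0) - 2 * sqrt(p * q)) < 1e-9

# Cramer transform: tilted weights e^{lam0 t} P(xi = t) / phi(lam0).
w_plus = p * exp(lam0) / phi(lam0)
w_minus = q * exp(-lam0) / phi(lam0)
tilted_mean = w_plus * 1 + w_minus * (-1)
assert abs(w_plus + w_minus - 1) < 1e-12
assert abs(tilted_mean) < 1e-6       # E xi^{(lam0)} = 0, as in the text
print(lam0, phi(lam0), tilted_mean)
```

For this example the tilted distribution is the symmetric ±1 law, making the drift removal performed by the transform explicit.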
In case (1b), the fact that both quantities ϕ(λ+) and ϕ′(λ+) are finite implies that (see (6.1.20), (6.1.21))

F₊(t) = e^{−λ+ t} V(t),   ∫_0^∞ t V(t) dt < ∞.     (8.2.32)
In this situation, we will need the following condition:

[B] Either ϕ″(λ+) < ∞, or ϕ″(λ+) = ∞ and

V^I(t) := ∫_t^∞ V(u) du ∈ R.     (8.2.33)

The relation (8.2.33) means that

V^I(t) = t^{−α} L0(t),     (8.2.34)
where α ∈ [1, 2] and L0 is an s.v.f.

Now return to the general case λ0 ≤ λ+ and consider the distribution conjugate to F (its Cramér transform):

F^{(λ0)}(dt) := (e^{λ0 t}/ϕ) F(dt).     (8.2.35)

Let the ξ_i^{(λ0)} be independent r.v.’s with distribution F^{(λ0)}, and put S_n^{(λ0)} := Σ_{i=1}^n ξ_i^{(λ0)}. Then clearly Eξ_i^{(λ0)} = 0. If λ0 < λ+ then d := E(ξ_i^{(λ0)})² < ∞, and therefore the distribution of S_n^{(λ0)}/√(nd) converges weakly to the normal law as n → ∞. If λ0 = λ+ then (cf. (6.2.6), (6.2.7), p. 309)

V0(t) := P(ξ_1^{(λ0)} ≥ t) = ∫_t^∞ (e^{λ0 u}/ϕ) F(du) = (1/ϕ) ∫_t^∞ (λ0 V(u) du − dV(u)) ∼ (λ0/ϕ) V^I(t) ∈ R.     (8.2.36)

This means (see § 1.5) that, provided that the conditions λ0 = λ+ and [B] are satisfied, the distribution of ξ_i^{(λ0)} belongs to the domain of attraction of the stable law F_{α,1} with parameters (α, 1), α ∈ [1, 2] (note that the left tail of the distribution F^{(λ0)} decays exponentially fast; for α = 2, the limiting law F_{α,1} is normal). The distribution F_{α,1} has a continuous density f. Put

Dϕ := Σ_{k=1}^∞ P(Sk > 0)/(k ϕ^k),   σn := √(nd) if d < ∞,   σn := V0^{(−1)}(1/n) if d = ∞,

where V0^{(−1)}(u) := inf{v : V0(v) ≤ u} is the (generalized) inverse of the function V0.

Recall that a distribution F is said to be non-lattice if its support is not part
of a set of the form {b + hk; k = 0, ±1, ±2, . . .}, 0 ≤ b ≤ h, where h can be assumed, without loss of generality, to be equal to 1. A distribution F is called arithmetic if the r.v. ξ ⊂= F is integer-valued with lattice span equal to 1.

Theorem 8.2.12. Assume that F ∈ C, λ0 ≤ λ+, ϕ′(λ0) = 0 and, in the case λ0 = λ+, that the conditions (8.2.32) and [B] are met. Then, in the non-lattice case,

P(η− > n) ∼ e^{Dϕ} f(0) ϕ^n/(λ0 n σn),     (8.2.37)
P(η+ = n) ∼ e^{−Dϕ} f(0) ϕ^n/(λ0 n σn),     (8.2.38)

where f(0) is the value of the density of the limiting stable law F_{α,1} at 0. If the r.v. ξ has an arithmetic distribution then the factor 1/λ0 on the right-hand sides of (8.2.37), (8.2.38) should be replaced by e^{−λ0}/(1 − e^{−λ0}).
Proof. The distributions of Sn and S_n^{(λ0)} have a similar relation to that in (8.2.35):

P(Sn ∈ dt) = ϕ^n e^{−λ0 t} P(S_n^{(λ0)} ∈ dt)

(see e.g. (6.1.3)), so that

P(Sn > 0) = ϕ^n ∫_0^∞ e^{−λ0 t} P(S_n^{(λ0)} ∈ dt) = ϕ^n λ0 ∫_0^∞ e^{−λ0 u} P(S_n^{(λ0)} ∈ (0, u]) du.     (8.2.39)

Repeating the argument from the proofs of Theorems 6.2.1 and 6.3.1, we obtain

P(Sn > 0) ∼ (λ0 ϕ^n f(0)/σn) ∫_0^∞ u e^{−λ0 u} du = ϕ^n f(0)/(λ0 σn).

This result also follows from Corollaries 6.2.3 and 6.3.2. Further, we will again make use of the factorization identities (8.2.18), (8.2.19), where, having made the change of variables s = zϕ, we will proceed as in the proof of Theorem 8.2.1, using in (8.2.20), (8.2.21) the function

b_ϕ(s) := b(s/ϕ) = Σ_{k=1}^∞ b_{ϕ,k} s^k

instead of b(z). The sequence

b_{ϕ,k} = P(Sk > 0)/(Dϕ k ϕ^k) ∼ f(0)/(λ0 k σk Dϕ)     (8.2.40)
is subexponential. Therefore, again applying results from § 1.4, we obtain

P(η− > n) ϕ^{−n} ∼ e^{Dϕ} f(0)/(λ0 n σn).

In the arithmetic case, we use the local theorem

sup_k | σn P(S_n^{(λ0)} = k) − f(k/σn) | → 0   as n → ∞

(see Theorem 4.2.2 of [152]), which implies that

P(Sn > 0) = ϕ^n Σ_{k=1}^∞ e^{−λ0 k} P(S_n^{(λ0)} = k) ∼ ϕ^n e^{−λ0} f(0)/((1 − e^{−λ0}) σn).

Again, the same result follows from Corollaries 6.2.3 and 6.3.2. The proof of the second assertion of the theorem is completely analogous. The theorem is proved.

Remark 8.2.13. Note that, in the cases λ0 < λ+ or λ0 = λ+, ϕ″(λ+) < ∞, one has f(0) = 1/√(2π), σn = √(nd) and d = ϕ″(λ0)/ϕ in the relations (8.2.37), (8.2.38). Observe also that one could consider the lattice non-arithmetic case as well, when ξ assumes the values b + hk, k = 0, ±1, . . . , 0 < |b| < h. In this case the coefficient of (nσn)^{−1} on the right-hand sides of (8.2.37), (8.2.38) will depend on n, oscillating within fixed limits.

Now consider possibility (2) in (8.2.6): λ0 = λ+, ϕ′(λ+) < 0. Then we still have (8.2.32). Suppose that V ∈ R, i.e.

V(t) = t^{−α−1} L(t),   α > 1,     (8.2.41)
where L is an s.v.f.

Theorem 8.2.14. Assume that F ∈ C, λ0 = λ+ and ϕ′(λ+) < 0, and also that conditions (8.2.32), (8.2.41) are satisfied. Then

P(η− > n) ∼ e^{Dϕ} ϕ^{n−1} V(a+ n),   P(η+ = n) ∼ e^{−Dϕ} ϕ^{n−1} V(a+ n),

where a+ = −Eξ^{(λ0)} = −ϕ′(λ+)/ϕ > 0.

Proof. Under the conditions of Theorem 8.2.14 one still has the equality (8.2.39) for P(Sn > 0), in which λ0 can be replaced by λ+, so that the problem again reduces to finding the asymptotics of

P(S_n^{(λ0)} ∈ (0, u]) = P(S_n^{(λ0)} + a+ n ∈ (a+ n, a+ n + u]).
We have

P(ξ_1^{(λ0)} ∈ (t, t + u]) = ∫_t^{t+u} (e^{λ+ v}/ϕ) F(dv) = (1/ϕ) ∫_t^{t+u} (λ+ V(v) dv − dV(v)) ∼ (λ+ u/ϕ) V(t).

Again repeating the argument from the proofs of Theorems 6.2.1 and 6.3.1 we obtain that, for any fixed u > 0,

P(S_n^{(λ0)} ∈ (0, u]) ∼ (λ+ u/ϕα) nV(a+ n).     (8.2.42)

This assertion also follows from Corollaries 6.2.3 and 6.3.2. As before, from (8.2.42) we find that, by virtue of (8.2.39),

P(Sn > 0) ϕ^{−n} ∼ λ+ ∫_0^∞ e^{−λ+ u} u du · (λ+ nV(a+ n)/ϕα) = nV(a+ n)/ϕα.

Thus, using the notation (8.2.40), we have

b_{ϕ,k} = P(Sk > 0) ϕ^{−k}/(Dϕ k) ∼ V(a+ k)/(ϕα Dϕ).

As in Theorem 8.2.12 the sequence {b_{ϕ,k}} is subexponential, and hence the remainder of the previous proof remains valid. The theorem is proved.

It appears that it would not be very difficult to extend the assertion of Theorem 8.2.14 to the case V ∈ Se, α < 1/2. Observe that the assumption (8.2.41), as may be seen from the relation (8.2.42), corresponds to the non-lattice case. The lattice case can be considered in exactly the same way, but this will require a change in the form of condition (8.2.41). Note also that the local theorems 2.1 and 3.1 of [54] for the lattice case were obtained earlier in [105].
8.2.4 The case A0 for x = 0

First note that we have here the following analogue of Theorem 8.2.1.

Theorem 8.2.15. For the case A0 we have that

(1 − E z^{η−})/(1 − z) = 1/(1 − E z^{η+}) = exp{ Σ_{k=1}^∞ (z^k/k) P(Sk > 0) },

that Eη± = ∞, and that the distribution of η− is uniquely determined by that of η+ and vice versa.
Proof. The proof of the theorem follows in an obvious way from (8.2.17), (8.2.18) and the fact that P(η+ < ∞) = 1.

Now, for γ ∈ (0, 1), put

Δn := P(Sn ≤ 0) − γ,   D(z) := exp{ − Σ_{k=1}^∞ (z^k/k) Δk }.     (8.2.43)

Theorem 8.2.16. The following assertions hold true for the case A0.

(i) The relation

P(η− > n) ∼ n^{−γ} L(n)/Γ(1 − γ)   as n → ∞, where γ ∈ (0, 1) and L is an s.v.f.,     (8.2.44)

holds iff

n^{−1} Σ_{k=1}^n P(Sk ≤ 0) → γ     (8.2.45)

or, equivalently, iff

P(Sn ≤ 0) → γ.     (8.2.46)

(ii) If (8.2.44) or (8.2.45) holds then necessarily L(n) ∼ D(1 − n^{−1}) and

P(η−(x) > n)/P(η− > n) → r(x),   n → ∞,

for any fixed x ≥ 0, where the function r(x) is given in explicit form.

(iii) If Eξ = 0, Eξ² < ∞ then γ = 1/2 and

Σ |Δn|/n < ∞.     (8.2.47)

This means that the function L(n) in the relation (8.2.44) can be replaced by D(1), 0 < D(1) < ∞.

Proof. The proof of Theorem 8.2.16 can be found in [32], pp. 381–2 (see [32] for a more detailed bibliography; the equivalence of conditions (8.2.45) and (8.2.46) was established in [106]).

It is evident that an assertion symmetric to (8.2.44) holds for η+, η+(x) (with γ and Δn replaced by 1 − γ and −Δn respectively):

P(η+ > n) ∼ n^{γ−1} L^{−1}(n)/Γ(γ).
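For the simple symmetric random walk, the n^{−1/2} decay corresponding to part (iii) of Theorem 8.2.16 (γ = 1/2) can be checked exactly: with η− = inf{k : Sk < 0}, the probability P(η− > 2n) of staying nonnegative equals the classical quantity C(2n, n) 4^{−n} ∼ 1/√(πn). The following Python sketch (our own illustration, not from the book) verifies this identity by dynamic programming in exact rational arithmetic.

```python
from fractions import Fraction
from math import comb, pi, sqrt

def stay_nonneg(m):
    """P(S_1 >= 0, ..., S_m >= 0) for the simple symmetric walk, by DP."""
    f = {0: Fraction(1)}
    for _ in range(m):
        g = {}
        for s, p in f.items():
            for t in (s - 1, s + 1):
                if t >= 0:
                    g[t] = g.get(t, Fraction(0)) + p / 2
        f = g
    return sum(f.values())

for n in range(1, 9):
    # classical identity: P(S_1 >= 0, ..., S_{2n} >= 0) = C(2n, n) / 4^n
    assert stay_nonneg(2 * n) == Fraction(comb(2 * n, n), 4 ** n)

# C(2n, n)/4^n ~ 1/sqrt(pi*n): the n^{-1/2} decay with gamma = 1/2
print(float(stay_nonneg(40)), 1 / sqrt(pi * 20))
```

The printed pair shows the exact value and the asymptote already close for moderate n.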
If Eξ² = ∞ (assuming that if Eξ is finite then Eξ = 0) then, under the well-known regularity conditions on the distribution tails of ξ (see p. 57 and Theorem 1.5.1), we have, as n → ∞,

P(Sn ≤ 0) → F_{α,ρ}((−∞, 0]) = F_{α,ρ,−}(0) ≡ γ,
where F_{α,ρ} is a stable law with parameters α ∈ (0, 2], ρ ∈ [−1, 1]. For symmetric ξ’s one has γ = 1/2,

P(Sn ≤ 0) − 1/2 = (1/2) P(Sn = 0) < c/√n,

so that the series (8.2.47) is convergent and

D(z) = exp{ −(1/2) Σ_{n=1}^∞ (z^n/n) P(Sn = 0) }.

It may be seen from the above that here, in contrast with the case A−, the influence of the distribution tails of ξ on the asymptotics (8.2.1) is less significant.

There is a vast literature on the rate of convergence of F^{(n)}(t) := P(Sn/σn < t) (for a suitable scaling sequence σn) to F_{α,ρ,−}(t). In particular, one can find in it conditions that are sufficient for convergence of the series (8.2.47) in the case Eξ² = ∞. For illustration purposes, we will present here just one of the results known to us:

Let ν_r := ∫ |x|^r |F(dx) − F_{α,ρ}(dx)| < ∞ for some r > α, and assume that ∫ x (F(dx) − F_{α,ρ}(dx)) = 0 in the case α ≥ 1. Then

sup_t | F^{(n)}(t) − F_{α,ρ,−}(t) | ≤ c ν_r n^{1−r/α}
(see [216, 217]). The convergence (8.2.47) clearly occurs in such a case.

Now we will obtain a refinement of Theorem 8.2.16.

Theorem 8.2.17.

(i) If in the case A0 we have that

Δn ∼ c n^{−γ}   as n → ∞,   0 ≤ c < ∞,   γ ∈ (0, 1),     (8.2.48)

then

P(η− = n) ∼ γ n^{−γ−1} D(1)/Γ(1 − γ),   0 < D(1) < ∞.     (8.2.49)

(ii) If Eξ = 0, E|ξ|³ < ∞ and the distribution of ξ is either lattice or satisfies the condition

lim sup_{|λ|→∞} |f(λ)| < 1,

then γ = 1/2 and the relations (8.2.48), (8.2.49) hold true.

Similar relations hold for P(η+ = n).
Concerning the asymptotics of P(η+ = n) under weaker conditions, see the remarks after Theorem 8.2.18 below.

Proof of Theorem 8.2.17. Owing to (8.2.16) and (8.2.43), we have

E z^{η−} = 1 − exp{ −γ Σ_{k=1}^∞ z^k/k − Σ_{k=1}^∞ (z^k/k) Δk } = 1 − (1 − z)^γ D(z).     (8.2.50)

The asymptotics of the coefficients ak in the expansion of the function

a(z) := −(1 − z)^γ = Σ_{k=0}^∞ ak z^k

are well known:

an ∼ γ n^{−1−γ}/Γ(1 − γ).     (8.2.51)

To find the asymptotics of the coefficients dk in the expansion D(z) = Σ_{k=0}^∞ dk z^k, we will make use of Theorem 1.4.6 (p. 55) with A(w) = e^w, gk = −Δk/k, to claim that

dn ∼ A′(g(1)) gn = −D(1) Δn/n ∼ c D(1) n^{−γ−1},   n → ∞.     (8.2.52)

Now, from (8.2.50) with ε = ε(n) → 0 slowly enough as n → ∞, we have (assuming for simplicity that εn is integer-valued)

P(η− = n) = Σ_{k=0}^n ak d_{n−k} = Σ_{k=0}^{εn} + Σ_{k=εn+1}^{(1−ε)n} + Σ_{k=(1−ε)n+1}^n,

where, by virtue of the equality a(1) = 0,

Σ_{k=0}^{εn} = Σ_{k=0}^{εn} ak dn (1 + o(1)) = o(dn).

Further,

Σ_{k=(1−ε)n+1}^n = an (1 + o(1)) Σ_{k=(1−ε)n+1}^n d_{n−k} ∼ D(1) an,

and so from (8.2.51), (8.2.52) we obtain that

Σ_{k=εn+1}^{(1−ε)n} ≤ c n ((εn)^{−1−γ})² = c ε^{−2−2γ} n^{−1−2γ} = o(an).

This proves (8.2.49).
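The coefficient asymptotics (8.2.51) for a(z) = −(1 − z)^γ are easy to confirm numerically (an illustrative sketch of ours, using the binomial-series recursion a_n/a_{n−1} = (n − 1 − γ)/n):

```python
from math import gamma

g = 0.3      # gamma in (0, 1)
n_max = 5000

# Coefficients of (1-z)^g: b_0 = 1, b_n = b_{n-1} * (n-1-g)/n; then a_n = -b_n.
b = 1.0
for n in range(1, n_max + 1):
    b *= (n - 1 - g) / n
a_n = -b

# Predicted by (8.2.51): a_n ~ g * n^{-1-g} / Gamma(1 - g)
predicted = g * n_max ** (-1 - g) / gamma(1 - g)
assert abs(a_n / predicted - 1) < 0.01
print(a_n, predicted)
```

The relative error is of order 1/n, so the ratio is already within a percent at n = 5000.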
The second assertion of the theorem follows from the observation that, under the conditions of part (ii), one has

Δn = P(Sn ≤ 0) − 1/2 = c Eξ³/√n + o(1/√n)

(see e.g. Theorem 21 in Chapter V of [224]), and so (8.2.48) holds with γ = 1/2. The theorem is proved.
8.2.5 A fixed level x > 0

The asymptotics of the distributions of the r.v.’s η±(x) for x > 0 differ from the respective asymptotics for η± by factors that depend only on x. Namely, the following assertion was established in [116, 32, 104, 27, 193].

Theorem 8.2.18. Let the conditions of one of Theorems 8.2.3 (for the class R), 8.2.12, 8.2.14, 8.2.16 ((8.2.45)) be satisfied. Then, for the cases A− and A0, as n → ∞, for any fixed x ≥ 0 we have

P(η−(x) > n)/P(η− > n) → r−(x),   P(n < η+(x) < ∞)/P(n < η+ < ∞) → r+(x),     (8.2.53)

where the functions r±(x) are given in explicit form.

In the case Eξ = 0, d = Eξ² < ∞ the following local theorem was obtained in [3]: if the distribution of ξ is arithmetic then there exists a function U(x) such that, for any fixed x ≥ 0,

lim_{n→∞} n^{3/2} P(η+(x) = n) = U(x),     (8.2.54)

where U(x) ∼ x/√(2πd) as x → ∞. If, in addition, E|ξ|³ < ∞ then (8.2.54) holds true for non-lattice distributions as well [193].¹ In the case of bounded lattice-valued r.v.’s ξ, asymptotic expansions for P(η±(x) = n) were derived in [33, 34].

In the next section, we will find the asymptotics of the functions r±(x) from (8.2.53) as x → ∞. In the present subsection we will restrict ourselves to giving a simple proof of Theorem 8.2.18 and the explicit form of the functions r±(x) in two cases: when Eξ = 0, Eξ² < ∞, and when Eξ = 0 and condition [R_{α,ρ}], ρ ∈ (−1, 1) (see p. 57), is met. For the case Eξ < 0 we will prove the first of the assertions (8.2.53).

¹ A more general result claiming that Eξ = 0, Eξ² < ∞ would suffice for (8.2.54) was published in [117]. This theorem, however, proved to be incorrect since its conditions contain no restrictions on the structure of F (as communicated to us by A.A. Mogulskii). A paper by A.A. Mogulskii with a correct version of this result was submitted to Siberian Advances in Mathematics.
On the asymptotics of the first hitting times
Proof of Theorem 8.2.18. Let Eξ 0 (we will assume that Eξ 2 < ∞ in the Eξ = 0), let η1 , η2 , . . . and χ1 , χ2 , . . . be independent r.v.’s that are distributed respectively as η− and χ− , and let Tk :=
k
ηi ,
Hk :=
i=1
k
χi ,
τ (x) := min{k : Hk −x}.
i=1
Then it is well known (see e.g. § 17 of [42]) that E|χ− | < ∞,
η− (x) = Tτ (x) ,
Eτ (x) < ∞.
Denote by Fn the σ-algebra generated by (η1 , χ1 ), (η2 , χ2 ), . . . , (ηn , χn ). Then obviously {τ (x) n} ∈ Fn , so that τ (x) will be a stopping time. By virtue of Theorems 8.2.3 and 8.2.16(i) (in the cases under consideration, when Eξ = 0 and either Eξ 2 < ∞ or condition [Rα,ρ ], ρ ∈ (−1, 1), is satisfied, clearly the limit (8.2.45) will always exist), the distribution tail of ηj is an r.v.f. Since ηi > 0 one has maxkn Hk = Hn and so one can make use of Corollary 7.4.2. It is evident that P(τ (x) > n) decays exponentially fast as n → ∞, and hence the conditions of that theorem are met and therefore P η− (x) > n = P(Tτ (x) > n) ∼ Eτ (x)P(η− > n), (8.2.55) where Eτ (x) =
∞
P(Hk −x) = r− (x).
k=1
The first assertion in (8.2.53) is proved. The second assertion in the case Eξ = 0, Eξ 2 < ∞ is proved in exactly the same way. If F ∈ C, Eξ < 0 then the function r− (x) will be of a more complex nature. Since by the renewal theorem Eτ (x) ∼
x E|χ− |
as
x → ∞,
it is natural to expect from (8.2.55) and (8.2.23) that, under the conditions of Theorem 8.2.3, for Eξ = a < 0 and large x, x xEη− P η− (x) > n ∼ V |a|n = V |a|n . E|χ− | |a|
(8.2.56)
For x = o(n), this fact (in its more precise formulation) will be proved in the next section. Observe also that relations similar to (8.2.55), (8.2.56), could also be obtained for the distribution tails of χ− (x) = Sη− (x) + x, x 0. Indeed, using the factorization identity (see e.g. formula (1) in Chapter 4 of [42]) 1 − Eeiλχ− = Eη− EeiλS 1 − f(λ) , f(λ) = Eeiλξ ,
8.3 A growing level x
391
one can easily demonstrate that, in the case of an l.c. left tail W (t) = P(ξ < −t), we have, as t → ∞, P(χ− < −t) ∼ Eη− W (t) (for more detail, see e.g. § 22 of [42]). This corresponds to the above reasoning and the representation χ− = ξ1 + · · · + ξη− . A similar representation holds for χ− (x), which makes it natural to expect that, as t → ∞, P(χ− (x) < −t) ∼ Eη− (x) W (t). 8.3 A growing level x In this section, it will be convenient for us to change the direction of the drift when it is non-zero, so that now the main case is A+ = {a > 0} and the main object of study will be the time η+ (x) of the first crossing of the level x → ∞. Accordingly, the regular variation condition will now be imposed on the left tail: F− (t) = P(ξ < −t) = W (t) ≡ t−β LW (t),
(8.3.1)
where β > 1 and LW (t) is an s.v.f. at infinity (this is condition [=, · ] as used in Chapters 3 and 4). In the present section, we will deal only with the tail classes R and C and also with distributions from the class Ms , for which E|ξ|s < ∞. It is clear that the intersection of the classes Ms and R is non-empty. Since {η+ (x) > n} = {S n x}, we can also view our problem as that on ‘small deviations’ of the maximum S n , if x does not grow fast enough. 8.3.1 The distribution class C For distributions from the class C, a comprehensive study of the asymptotics of P η+ (x) > n = P(S n x) for arbitrary a = Eξ was made in [33, 34, 37]. The method papers consists in finding double transforms (in x and used in these in n) of P η+ (x) > n in terms of solutions to Wiener–Hopf-type equations (or, equivalently, in terms of the factorization components; these components were found in [33, 34] in explicit form) and then using asymptotic inversion of these transforms. For this method to work, one has to assume that the following condition is met. [Ca ]
The distribution F of ξ has an absolutely continuous component.
The same approach (in its simpler form) is also applicable in the case ξ is lattice-valued (see [33, 34]). Along with condition [Ca ], we will also need Cram´er’s condition [C0 ] on the ch.f. f of the distribution F:
392
On the asymptotics of the first hitting times
[C0 ]
lim sup |f(λ)| < 1. |λ|→∞
Clearly, [Ca ] ⊂ [C0 ]. To formulate the corresponding results we will need some notation. As before, let θ = x/n, (8.3.2) ϕ(λ) = Eeλξ , Λ(θ) = sup λθ − ln ϕ(λ) λ
and λ+ = sup{λ : ϕ(λ) < ∞}, θ− = lim
λ↓λ−
ϕ (λ) , ϕ(λ)
λ− = inf{λ : ϕ(λ) < ∞}, θ+ = lim
λ↑λ+
ϕ (λ) . ϕ(λ)
The deviation function Λ(θ) is analytic in (θ− , θ+ ), and the supremum in (8.3.2) is attained at the point λ(θ) = Λ (θ) (see e.g. § 8, Chapter 8 of [49]). The next assertion follows from the results of [37]. Theorem 8.3.1. Let F ∈ C, condition [Ca ] be satisfied and n → ∞. x = o(1) then n cx P(η(x) = n) ∼ 3/2 e−nΛ(θ) , n
(i) If θ− < 0 < θ+ and x → ∞, θ =
where c is a known constant admitting a closed-form expression in terms of the factorization components of the function 1 − zf(λ). (ii) If 0 < θ1 θ θ2 < θ+ then c(θ) P(η(x) = n) ∼ √ e−nΛ(θ) , n where c(θ) is a known analytic function, also admitting a closed-form expression in terms of the above factorization components. From this theorem one can easily obtain asymptotic representations for P(η(x) > n),
P(S x),
P(η(x) > n| η(x) < ∞)
and so on. Note also that there are no conditions on the sign of a = Eξ in the statement of the theorem. As we have already observed, the method of proof of Theorem 8.3.1 is analytic and is based on the idea of factorization and inverting double transforms. It was discovered relatively recently that this approach is not the only possible one for solving the problems under consideration. Using direct probabilistic methods, it was shown in [45, 47] that, in a number of cases, the asymptotics of the probabilities of large (and small) deviations of S n can be found from the respective asymptotics for Sn . In particular, the following result holds true.
8.3 A growing level x
393
Theorem 8.3.2. Let a distribution F ∈ C be either lattice or satisfy condition [C0 ] and let θ = x/n satisfy the inequalities θ < a, 0 < θ1 θ θ2 < θ+ . Then, as n → ∞, P(η(x) > n) ∼ c(θ) P(Sn x),
(8.3.3)
where the function c(θ) is defined below (see 8.3.5). In fact, the relation (8.3.3) and the form of the function c(θ) are simple corollaries of the following general fact. Consider an r.v. ξ (λ(θ)) that is the Cram´er transform of ξ at the point λ(θ), eλ(θ)t P ξ (λ(θ)) ∈ dt = P(ξ ∈ dt), ϕ(λ(θ))
Eξ (λ(θ)) = θ.
n (λ(θ)) (λ(θ)) (λ(θ)) := , where the r.v.’s ξi are independent copies of Put Sn i=1 ξi (λ(θ)) the r.v. ξ . For simplicity assume that the distribution of Sn has a density and that x/n → θ = const. Then, for any fixed k, the conditional distribution of the random vector (ξn , ξn−1 , . . . , ξn−k ) given Sn ∈ x − dy, where y = o(x) (for instance, y is fixed), converges as n → ∞ to the distribution (λ(θ)) (λ(θ)) (λ(θ)) (see [45, 47]). From here it is not difficult to see , ξ2 , . . . , ξk of ξ1 that P S n x| Sn ∈ x − dy → P T (λ(θ)) y , (λ(θ)) < ∞ a.s. Now we use the total probability where T (λ(θ)) := supk0 −Sk formula for P(S n x) (conditioning on the value of Sn ) and take into account the facts that, for θ ∈ (θ− , θ+ ), θ < a, Λ (θ) −nΛ(θ) √ e P(Sn x) ∼ (8.3.4) λ(θ) 2πn (cf. (6.1.16); observe that Λ (θ) = 1/d(θ)) and P(Sn ∈ x − dy) ∼ λ(θ)eλ(θ)y dy. P(Sn x) Thus we arrive at (8.3.3) with ∞ c(θ) = λ(θ)
eλ(θ)y P(T (λ(θ)) y) dy < ∞
(8.3.5)
0
(here λ(θ) < 0). The above assumption that x/n → θ = const does not restrict the generality, since all the assertions that we have used hold uniformly in θ. Moreover, it is not difficult to derive from (8.3.3) and (8.3.5) the asymptotics of P η+ (x) = n .
394
On the asymptotics of the first hitting times
8.3.2 The distribution class Ms For distributions from the class Ms , in the absence of any regularity assumptions on the function (8.3.1) one can obtain the asymptotics of the probability P(η+ (x) > n) only in case A0 = {a = 0}. It will be assumed everywhere in what follows that, in the lattice case, the value of the level x → ∞ belongs to the respective lattice. √ Theorem 8.3.3. Let F ∈ M3 , a = 0, Eξ 2 = 1, x → ∞ and x = o n . Then 8 2 P(η+ (x) > n) ∼ x . (8.3.6) πn This assertion immediately follows from the convergence rate estimate & % c x 1 sup P S n x − 2 Φ √ − < √ , c < ∞, 2 n n x ∞. Here Φ is the from [203] (see also [204, 202]), which holds when E|ξ|3 < √ standard normal distribution function. Since Φ(u) − 1/2 ∼ u/ 2π as u → 0, we obtain (8.3.6). If the distribution of ξ is lattice or condition [C0 ] is satisfied then, provided that F ∈ Ms for s > 3, one can also study asymptotic expansions for P S n x as x → ∞ (see [204, 37, 175, 176]); it is also possible to obtain similar expansions for x P(η+ (x) = n) ∼ √ (8.3.7) 2πn3/2 (i.e. the local limit theorems for η+ (x)). As far as we know, the problem when F ∈ Ms with a > 0 and x → ∞ remains open. The next subsection is devoted to studying the problem for distributions from the class R.
8.3.3 Asymptotics of P(η+ (x) > n) under the conditions x → ∞, a > 0 and [=, · ] with W ∈ R Along with the conditions listed in the heading of this subsection, in the case β ∈ (1, 2) (see (8.3.1)) we might in addition need condition [ · , 1, where L is an s.v.f. Put z := an − x. Theorem 8.3.4. Let a > 0, x → ∞, x < an, and let condition [=, · ] with W ∈ R be satisfied. Then the following assertions hold true.
8.3 A growing level x √ (i) If Eξ 2 < ∞, β > 2 and z n ln n then P(η+ (x) > n) ∼
395
x W (z). a
(8.3.9)
(ii) The relation (8.3.9) remains true when β ∈ (1, 2), condition (8.3.8) is met and n and z are such that z → 0. (8.3.10) nW (z) → 0, nV ln z The meaning of the assertion (8.3.9) is rather simple: the principal contribution to the probability P(S n x) comes from trajectories that have a negative jump x − an at one of the first x/a steps (prior to the crossing of the boundary x by the ‘drift line’ a(k) = ESk = ak). It follows from Theorem 8.3.4 and the relation P(Sn x) ∼ nW (z) (see Chapters 3 and 4) that P(S n x) ∼
x P(Sn x), an
so that we will still have the asymptotics (8.3.3), but with a very simple function c(θ) = θ/a instead of a rather complex factor c(θ) (the case θ → 0 is not excluded). Proof of Theorem 8.3.4. We will split the argument into three steps. (1) A lower bound. Let Gn := {S n x},
Bj := {ξj < −z(1 + ε)},
v :=
x (1 − ε), a
where ε > 0 is a fixed small number. Then, assuming for simplicity that v is an integer, we will have (cf. (2.6.1)) + v v P(Gn ) P Gn ∩ Bj P(Gn Bj ) + O (xW (z))2 . = j=1
j=1
It is obvious that, for j v, P(Gn | Bj ) → 1 as x → ∞ owing to the invariance principle. As xW (z) anW (z) → 0, we have P(Gn )
v j=1
x P(Bj ) (1 + o(1)) + O (xW (z))2 ∼ (1 − ε) W (z(1 + ε)). a
Since ε > 0 is arbitrary, we finally obtain P(Gn )
x W (z) (1 + o(1)). a
(8.3.11)
(2) An upper bound for P(Gn ) for truncated summands. Set ( z) , Cj := ξj > − r
r > 1,
C :=
n * j=1
Cj .
396
On the asymptotics of the first hitting times
Then P(Gn ) = P(Gn C) + P(Gn C),
(8.3.12)
where C is the complement of C, rP(Gn C)
n
P(Gn C j ),
(8.3.13)
j=1
P(Gn C) P(Sn − an −z; C). √ Lemma 8.3.5. If Eξ 2 < ∞, β > 2 and x n ln n then
(8.3.14)
P(Gn C) (nW (z))r+o(1) . If β ∈ (1, 2) and conditions (8.3.8), (8.3.10) are met then r P(Gn C) c nW (z) . The assertion of the lemma follows from the bound (8.3.14) and Theorem 4.1.2 (or Corollary 4.1.3) and Theorem 3.1.1 (or Corollary 3.1.2). If we choose an r > β/(β − 1) then clearly r nW (z) = o xW (z) , P(Gn C) = o xW (z) . (8.3.15) (3) An upper bound for (8.3.13). We have
z (j) P(Gn C j ) = P S j−1 x, ξj − , Sj−1 + ξj + S n−j x , r (j)
d
where the r.v. S n−j = S n−j is independent of ξ1 , . . . , ξj . Since ξj does not (j)
depend on ξ1 , . . . , ξj−1 or S n−j , one has , (j) z(j) P(Gn C j ) E W −x + S n−j + Sj−1 ; S n−j + Sj−1 > x + r (z ) (z ) 0 EW max = EW max , −x + Sn−1 , z + Sn−1 r r %
"&−β 0 1 S , ∼ W (z) E max , 1+ n r z p
where√Sn0 = Sn − an. Since Sn0 /z − → 0 under the respective conditions (for z n ln n in part (i) of the theorem and under condition (8.3.10) in part (ii)), we obtain that P(Gn C j ) W (z)(1 + o(1)). Moreover, P(Gn C j ) W
z r
P(S j−1 x) = W
z r
P(η+ (x) j).
8.3 A growing level x Hence n j=1
P(Gn C j ) =
j(1+ε)x/a
397
+
j>(1+ε)x/a
z x (1 + ε)W (z) + W a r
P η(x) j .
j>(1+ε)x/a
(8.3.16) Here W (z/r) cW (z), and we find from the strong law of large numbers and the renewal theorem Eη+ (x) = P(η+ (x) > j) ∼ x/a that the second term on the right-hand side of (8.3.16) is o(x)W (z). Therefore n j=1
P(Gn C j )
x W (z)(1 + o(1)). a
Comparing this inequality with (8.3.12)–(8.3.15), we obtain the same bound for the probability $P(G_n)$. This, together with the lower bound (8.3.11), completes the proof of the theorem.

Corollary 8.3.6. Assume that the conditions of Theorem 8.3.4 are satisfied and $E\xi^2 < \infty$. Then, for a fixed $x \ge 0$, there exist constants $c_\pm = c_\pm(x)$ such that, for all sufficiently large $n$, we have
$$c_- < P(\eta_+(x) > n)/W(n) < c_+. \tag{8.3.17}$$

Proof. First assume that there exists a subsequence $n \to \infty$ such that
$$P\big(\eta_+(x) > n\big) \ge r(n)\, W(n), \tag{8.3.18}$$
where $r(n) \to \infty$, $r(n) = o(n)$. Set $y := r(n)$. Then, by Theorem 8.3.4(i), for all sufficiently large $n$ we have $x < y$ and
$$P\big(\eta_+(x) > n\big) \le P\big(\eta_+(y) > n\big) \sim \frac{r(n)}{a}\, W(an),$$
which contradicts (8.3.18). This proves the second inequality in (8.3.17). The first inequality in (8.3.17) follows from the relations
$$P\big(\eta_+(x) > n\big) \ge P(\xi_1 < -2an)\, P\big(\overline{S}_{n-1} < x + 2an\big) \sim W(2an)\, P\big(\overline{S}_{n-1} < x + 2an\big),$$
where the second factor on the final right-hand side tends to 1 as $n \to \infty$. The corollary is proved.
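The order-of-magnitude relation in Corollary 8.3.6 can be probed numerically. The sketch below is an illustration with arbitrarily chosen parameters (a shifted Pareto jump distribution, not taken from the book): for a walk with positive drift and heavy left tail $W(t) = P(\xi < -t)$, the Monte Carlo estimate of $P(\eta_+(0) > n)$ stays within constant multiples of $W(n)$.

```python
import random

# Hedged illustration (parameters are arbitrary choices, not from the book):
# xi = shift - X with X Pareto(beta) on [1, inf), so the walk has positive
# drift a = shift - beta/(beta-1) and heavy LEFT tail
# W(t) = P(xi < -t) = (t + shift)^(-beta).

def sample_xi(rng, beta=2.5, shift=2.0):
    u = rng.random()
    return shift - (1.0 - u) ** (-1.0 / beta)   # inverse-transform Pareto

def stays_below_zero(rng, n):
    # event {eta_+(0) > n}: all partial sums S_1..S_n stay at or below 0
    s = 0.0
    for _ in range(n):
        s += sample_xi(rng)
        if s > 0.0:
            return False
    return True

rng = random.Random(1)
n, trials = 15, 100_000
hits = sum(stays_below_zero(rng, n) for _ in range(trials))
p_hat = hits / trials
W_n = (n + 2.0) ** -2.5          # W(n) for beta = 2.5, shift = 2
print(p_hat, W_n, p_hat / W_n)   # ratio bounded away from 0 and infinity
```

The ratio printed at the end plays the role of the constants $c_\pm$ in (8.3.17); its exact value depends on the drift and on $\beta$, which is why the corollary only asserts two-sided bounds.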
9 Integro-local and integral large deviation theorems for sums of random vectors
9.1 Introduction

Let $\xi_1, \xi_2, \dots$ be independent $d$-dimensional random vectors, $d \ge 1$, having the same distribution $F$ as $\xi$, let $S_n = \sum_{i=1}^{n} \xi_i$, and let $\Delta[x)$ be a cube in $\mathbb{R}^d$ with edge length $\Delta > 0$ and a vertex at the point $x = (x^{(1)}, \dots, x^{(d)})$:
$$\Delta[x) := \big\{ y \in \mathbb{R}^d : x^{(i)} \le y^{(i)} < x^{(i)} + \Delta,\ i = 1, \dots, d \big\}.$$
Let $(x, y) = \sum_{i=1}^{d} x^{(i)} y^{(i)}$ be the standard scalar product. The main objective of this chapter is to study the asymptotics of
$$P\big(S_n \in \Delta[x)\big) \tag{9.1.1}$$
for
$$t := |x| = \sqrt{(x, x)} \ge \sigma(n), \qquad \Delta \in [\Delta_1, \Delta_2], \qquad t^{-\gamma} \le \Delta_1 < \Delta_2 = o(t),$$
where $\gamma > -1$ and the sequence $\sigma(n)$ determines the zone of large deviations (see below; $\sigma(n) = \sqrt{n \ln n}$ in the case $E\xi = 0$, $E|\xi|^2 < \infty$).

In the case when Cramér's condition holds,
$$E e^{(\lambda, \xi)} < C < \infty \tag{9.1.2}$$
in a neighbourhood of a point $\lambda_0 \ne 0$, a comprehensive study of the asymptotics of $P(S_n \in \Delta[x))$ as $(\lambda_0, x) \to \infty$ was presented in [69, 70]. The method of studying the asymptotics of (9.1.1) in Cramér's case (9.1.2), which is based on using the Cramér transform and was described in § 6.1 in the univariate setting, remains fully applicable in the multivariate case as well. In this chapter we will study the asymptotics of (9.1.1) in the case where condition (9.1.2) does not hold.

In the univariate case, the asymptotics of (9.1.1) were found in §§ 3.7 and 4.7 (for regularly varying tails in the cases $E\xi^2 = \infty$ and $E\xi^2 < \infty$ respectively). The problem becomes more complicated in the multivariate case $d > 1$, as there are no 'collective' limit theorems on the asymptotics of (9.1.1) in that situation; while in the Cramér case the asymptotics of these probabilities were determined
by the properties of the analytic function $E e^{(\lambda, \xi)}$ (see e.g. [69, 70, 49, 224]), in the case when (9.1.2) is not satisfied these asymptotics will depend strongly on the 'configuration' of the distribution $F$ (more precisely, on the configuration of certain level lines). The examples below illustrate this statement.

Example 9.1.1. Let $\xi = (\xi^{(1)}, \dots, \xi^{(d)})$, where the r.v.'s $\xi^{(i)}$ are independent and have distribution tails
$$V^{(i)}(u) := P\big(\xi^{(i)} \ge u\big) = u^{-\alpha_i} L_i(u),$$
the $L_i$ being s.v.f.'s at infinity such that the functions $V_i$ satisfy condition $[\mathbf{D}_{(1,0)}]$ (see pp. 167, 217). Then, in the case $E|\xi|^2 < \infty$ and $\min_i x^{(i)} \gg \sqrt{n \ln n}$, we clearly have from Theorem 4.7.1 that
$$P\big(S_n \in \Delta[x)\big) = \Delta^d n^d \prod_{i=1}^{d} V_1^{(i)}(x^{(i)})\,(1 + o(1)), \tag{9.1.3}$$
where $V_1^{(i)}$ is the function $V_1$ from (4.7.1) corresponding to the coordinate $\xi^{(i)}$.

Example 9.1.2. The character of the asymptotics of (9.1.1) is similar, in a sense, to (9.1.3) in the case when $\xi = (0, \dots, 0, \xi^{(i)}, 0, \dots, 0)$ with probability $p_i > 0$, $i = 1, \dots, d$, $\sum_{i=1}^{d} p_i = 1$, and the r.v.'s $\xi^{(i)}$ again satisfy condition $[\mathbf{D}_{(1,0)}]$. In this case, the components of the vector $\xi$ are strongly dependent. By the law of large numbers (or by the central limit theorem for the multinomial scheme),
$$P\big(S_n \in \Delta[x)\big) \sim \Delta^d \sum_{\sum n_i = n} \frac{n!}{n_1! \cdots n_d!}\, p_1^{n_1} \cdots p_d^{n_d} \prod_{i=1}^{d} n_i V_1^{(i)}(x^{(i)}) \sim \Delta^d n^d \prod_{i=1}^{d} p_i V_1^{(i)}(x^{(i)}). \tag{9.1.4}$$
In these two examples, the asymptotics of $P(\xi \in \Delta[x))$ as $|x| \to \infty$ are 'star-shaped': for equal $\alpha_i = \alpha$, say, and for the same values of $t = |x|$, the values of $P(\xi \in \Delta[x))$ along the coordinate axes (more precisely, when $x^{(i)} \gg \max_{j \ne i} x^{(j)}$ for some $i$) are much larger than, for instance, those on the 'diagonal' $x^{(1)} = \dots = x^{(d)}$.

Example 9.1.3. If, however, $P(\xi \in \Delta[x)) \sim \Delta^d V_1(x)$, where $V_1(x)$ has the form
$$V_1(x) = v(t)\, g(e(x)), \qquad t = |x|, \qquad e(x) := x/t, \tag{9.1.5}$$
$g$ being a continuous positive function on the unit sphere $S_{d-1}$ in $\mathbb{R}^d$, then, as we will see below, under broad assumptions on the r.v.f. $v$ the asymptotics of (9.1.1) will have the form
$$P\big(S_n \in \Delta[x)\big) \sim \Delta^d n V_1(x) \tag{9.1.6}$$
and therefore will be of a substantially different character compared with (9.1.3), (9.1.4).
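The 'star shape' is easy to see numerically. The following sketch (an illustration with arbitrary exponent and test points, not taken from the book) compares, for $d = 2$ and independent pure-Pareto coordinate tails, the product form underlying (9.1.3) at two points of the same norm: one on the diagonal and one near a coordinate axis.

```python
# Hedged numeric sketch (illustrative choices): independent coordinates with
# tails V(u) = u^(-alpha); compare the product bound at a diagonal point and
# at a near-axis point of the same Euclidean norm t.
alpha = 3.0
t = 100.0

def V(u):
    # Pareto tail on [1, inf); below 1 the tail is trivially 1
    return u ** -alpha if u >= 1.0 else 1.0

diag = (t / 2 ** 0.5, t / 2 ** 0.5)          # |diag| = t
axis = ((t ** 2 - 25.0) ** 0.5, 5.0)         # second coordinate fixed, |axis| = t

p_diag = V(diag[0]) * V(diag[1])
p_axis = V(axis[0]) * V(axis[1])
print(p_axis / p_diag)   # the near-axis point is orders of magnitude more likely
```

With these illustrative numbers the ratio is of order $10^3$: reaching the diagonal requires both coordinates to be large simultaneously, which is exactly why several large jumps are needed there.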
Example 9.1.4. Combining Examples 9.1.1 and 9.1.3, i.e. considering random vectors $\xi$ with independent subvectors satisfying conditions of the form (9.1.5), one can obtain for $P(S_n \in \Delta[x))$ asymptotics of the form
$$\Delta^d n^k \prod_{i=1}^{k} V_1^{(i)}(x)$$
for any $k$, $1 \le k \le d$, where $k$ will be determined, roughly speaking, by the minimum number of large 'regular' jumps required to move from 0 to $\Delta[x)$.

One could give other examples illustrating the fact that, even for distributions $F$ possessing the property of 'directional tail' regular variation, the asymptotics of the large deviation probabilities will essentially depend on the 'configuration' of the distribution. Moreover, in some cases, the very setting-up of the large deviation problem proves to be difficult (a possible general setup will be illustrated by Theorem 9.2.4 below). It is apparently for this reason that there are only rather few papers on large deviations in the multivariate 'non-Cramérian' case, most of the publications known to us being devoted mainly to the 'isotropic' situation, when the rate of decay of the distribution in different directions is given by the same r.v.f.

First of all, observe that the convergence of the scaled sums $S_n$ to a non-Gaussian limiting law takes place iff $P(|\xi| \ge t)$ is an r.v.f.,
$$P(|\xi| \ge t) = t^{-\alpha} L(t), \qquad L(t) \text{ an s.v.f.}, \tag{9.1.7}$$
where $\alpha \in (0, 2)$, and the distribution $S_t(B) := P\big(e(\xi) \in B \,\big|\, |\xi| \ge t\big)$ on the unit sphere $S_{d-1}$ converges weakly to a limiting distribution $S$:
$$S_t \Rightarrow S \quad \text{as} \quad t \to \infty \tag{9.1.8}$$
(Theorem 4.2 of [243]; see also Corollary 6.20 of [5]; further references, as well as a detailed discussion of the problem on the convergence of $S_n$ under operator scaling, can be found in [189]).

In the special case when $F$ has a bounded density $f$ that admits a representation of the form
$$f(x) = t^{-\alpha - d}\big[h(e(x)) + \theta(x)\,\omega(t)\big], \qquad t = |x| > 1, \tag{9.1.9}$$
where $h(e)$ is a continuous function on $S_{d-1}$, $|\theta(x)| \le 1$ and $\omega(t) = o(1)$ as $t \to \infty$, the large deviation problem was considered in [283, 200]. In [283], a local limit theorem for the density $f_n$ of $S_n$ was established for large deviations along 'non-singular directions', i.e. directions satisfying $e(x) \in \{e \in S_{d-1} : h(e) \ge \delta\}$ for a fixed $\delta > 0$. The theorem states that, uniformly in such directions, one
has $f_n(x) \sim n f(x)$ in the zone $t \gg n^{1/\alpha}$. This result was complemented in [200] by an analysis of the asymptotics of $f_n(x)$ along 'singular directions' (i.e. for $e \in S_{d-1}$ such that $h(e) = 0$) for an even narrower distribution class (in particular, it was assumed that there are only finitely many such singular directions and that the density $f(x)$ decays along them as a power function of order $-\beta - d$, $\beta > \alpha$). The main result of [200] shows that the principal contribution to the probabilities of large deviations along singular directions comes not from trajectories with a single large jump (which was the case in the univariate problem, and also for non-singular directions when $d > 1$) but from those with two large jumps. In § 9.2.2 below we will investigate this phenomenon under substantially broader assumptions, when the principal contribution comes from trajectories with $k \ge 2$ jumps.

In a more general case, when only condition (9.1.7) is met (with $\alpha > 0$), an integral-type large deviation theorem describing the behaviour of the probabilities $P(S_n \in xA)$ when the set $A \subset \mathbb{R}^d$ is bounded away from 0 and has a 'regular' boundary was obtained in [151], together with its functional version.

As we have already said, the above-mentioned difficulties related to the distribution configuration are absent in the univariate case, and the large deviation problem itself is quite well studied in that case, the known results including asymptotic expansions and large deviation theorems in the space of trajectories (see Chapters 2–8; a more complete bibliography can be found in these chapters and in the respective bibliographic notes at the end of the book).

We emphasize once again that in the multivariate case we encounter a situation where, for a distribution with 'heavy' tails, the principal contribution to the probability that a set in the large deviation zone will be reached is due not to one large jump (as in the univariate case) but to several large jumps.
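The univariate single-big-jump baseline against which this multivariate behaviour should be contrasted is easy to verify by simulation. The sketch below (illustrative Pareto parameters, not from the book) checks that $P(S_n \ge x) \approx n\,P(\xi \ge x)$ in the large deviation zone for centred heavy-tailed jumps.

```python
import random

# Hedged simulation (illustrative parameters): centred Pareto(alpha) jumps;
# in the heavy-tailed univariate case P(S_n >= x) ~ n P(xi >= x) -- one large
# jump produces the whole deviation.
alpha, n, x, trials = 2.5, 50, 100.0, 100_000
mean = alpha / (alpha - 1.0)        # mean of Pareto(alpha) on [1, inf)
rng = random.Random(7)

hits = 0
for _ in range(trials):
    s = sum((1.0 - rng.random()) ** (-1.0 / alpha) - mean for _ in range(n))
    if s >= x:
        hits += 1
p_hat = hits / trials
p_one_jump = n * (x + mean) ** -alpha   # n * P(xi >= x) for xi = X - mean
print(p_hat, p_one_jump)                # same order of magnitude
```

Here $x = 100 \gg \sqrt{n \ln n} \approx 14$, so the deviation is well inside the zone where the one-jump approximation is expected to apply; the two printed numbers agree up to a moderate constant factor.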
In Examples 9.1.1 and 9.1.2, say, in order to hit a cube $\Delta[x)$ located on the diagonal $x^{(1)} = \dots = x^{(d)}$, the random walk will need $d$ large jumps. All other trajectories will be far less likely to reach that cube.

In the present chapter, we will concentrate on the 'most regular' distribution types (for example, those of the form (9.1.5)) and also on distributions that are more general versions of the laws in Examples 9.1.1 and 9.1.2. Moreover, we would like to point out that, in the multivariate case, the language of integro-local theorems is the most natural and convenient one, because:
(1) describing the asymptotics of the probability that a 'small' remote cube will be hit is much easier than finding such asymptotics for an arbitrary remote set;
(2) in this case, the large deviation problem admits a complete (in a certain sense) solution, since knowing such 'local' probabilities for small cubes allows one to easily obtain the 'integral' probability of hitting a given arbitrary set;
(3) in integro-local theorems, one needs to impose essentially no additional
conditions on the distribution $F$, in contrast with integral theorems; in essence, we can obtain local limit theorems without assuming the existence of a density.
Moreover, integro-local theorems on the asymptotics of the probabilities (9.1.1) are of substantial independent interest: for a broad class of functions $f$, they prove to be quite useful for evaluating integrals of the form $E\big[f(S_n);\ S_n \in tA\big]$ as $t \to \infty$, where $A$ is a 'solid' set that is bounded away from the origin. The usefulness of integro-local theorems was demonstrated in Chapters 6–8 (e.g. in §§ 6.2 and 6.3, where such theorems helped us to study the asymptotics of large deviation probabilities for distributions from the class ER).

In § 9.2, we will study the asymptotics of (9.1.1) under the assumption of regular behaviour of $P(\xi \in \Delta[x))$ as $x$ escapes to infinity within a given cone, when the principal contribution to the probability $P(S_n \in \Delta[x))$ comes from trajectories containing a single large jump. Assuming that $E\xi = 0$, we consider both the case $E|\xi|^2 < \infty$ and the case $E|\xi|^2 = \infty$. We will also deal with a more general approach to the problem, in which regularity conditions are imposed on $P(S_j \in \Delta[x);\ B^{(j)})$, for a fixed $j$ and events $B^{(j)}$ that mean that all the jumps $\xi_1, \dots, \xi_j$ are large (for a more precise formulation, see (9.2.11) below). In this case, it may happen that the principal contribution to the probability $P(S_n \in \Delta[x))$ belongs to trajectories of a 'zigzag' shape, which contain several large jumps.

Integral large deviation theorems (i.e. theorems on the asymptotics of the probability $P(S_n \in tA)$ as $t \to \infty$) are obtained in § 9.3 as corollaries of the integro-local theorems of § 9.2. We also consider an alternative approach to proving integral theorems that is not related to integro-local theorems.

Of course, the asymptotics of (9.1.1) can also be found when $\xi$ has a regularly varying density. In this case, one can obtain an asymptotic representation for the density of $S_n$ (cf. Theorem 8 in [63]).
9.2 Integro-local large deviation theorems for sums of independent random vectors with regularly varying distributions

9.2.1 The case when the main contribution to the probability that Δ[x) will be hit is due to trajectories containing one large jump

We will begin with a more precise formulation (compared with that in (9.1.5)) of the conditions under which we will be studying the asymptotics of (9.1.1). Consider the asymptotics of $P(\xi \in \Delta[x))$ in a cone, i.e. for $x$ such that $t = |x| \to \infty$ and $e(x) = x/t \in \Omega$, where the subset $\Omega \subset S_{d-1}$ of the unit sphere, which characterizes the cone, is assumed to be open. An analogue $[\mathbf{D}_\Omega]$ of condition $[\mathbf{D}_{(1,0)}]$ (see §§ 3.7 and 4.7) has here the following form.
Introduce the half-space $\Pi(e) := \{v \in \mathbb{R}^d : (v, e) \ge 0\}$ and put
$$\Pi(\Omega) := \bigcup_{e \in \Omega} \Pi(e).$$
It is evident that if $\Omega$ contains a hemisphere then $\Pi(\Omega)$ coincides with the whole space $\mathbb{R}^d$.

$[\mathbf{D}_\Omega]$ There exist a sector $\Omega \subset S_{d-1}$ and functions $\Delta_1 = \Delta_1(t) \ge t^{-\gamma}$ for some $\gamma > -1$, $\Delta_2 = \Delta_2(t) = o(t)$ such that, for any $\Delta \in [\Delta_1, \Delta_2]$, $e(x) \in \Omega$, one has
$$P\big(\xi \in \Delta[x)\big) = \Delta^d V_1(x)\,(1 + o(1)) \tag{9.2.1}$$
as $t \to \infty$, where
$$V_1(x) = v(t)\, g(e(x)), \qquad t = |x|, \qquad e(x) = \frac{x}{t}, \qquad v(t) = t^{-\alpha - d} L(t),$$
$L(t)$ is an s.v.f. at infinity and $g(e)$ is a continuous function on $\Omega$ that satisfies on this set the inequalities $0 < g_1 < g(e) < g_2 < \infty$. Moreover, for any constant $c_1 > 0$ and all $y \in \Pi(\Omega)$, $|y| \ge c_1 t$, we have
$$P\big(\xi \in \Delta[y)\big) \le c_2 \Delta^d v(t). \tag{9.2.2}$$
The remainder term $o(1)$ in (9.2.1) is assumed to be uniform in the following sense: there exists a function $\varepsilon_u \downarrow 0$ as $u \uparrow \infty$ such that the $o(1)$ term in (9.2.1) can be replaced by $\varepsilon(x, \Delta) \le \varepsilon_u$ for all $t \ge u$, $e(x) \in \Omega$, $\Delta \in [\Delta_1, \Delta_2]$.

Clearly, under condition $[\mathbf{D}_\Omega]$ one has
$$P\big(|\xi| \in \Delta[t),\ e(\xi) \in \Omega\big) \sim c(\Omega)\, t^{d-1} v(t)\, \Delta, \tag{9.2.3}$$
so that the left-hand side of (9.2.3) follows the same asymptotics as the probability $P(\xi \in \Delta[t))$ in the univariate case under condition $[\mathbf{D}_{(1,0)}]$ (see §§ 3.7 and 4.7).

Note that condition (9.2.2) is essential for the claim of our theorem below. It does not allow the 'most plausible' of the trajectories hitting the cube $\Delta[x)$ to contain two (or more) large jumps (i.e. to reach the set $\Delta[x)$ along a 'zigzag' trajectory, as was the case in Example 9.1.1). Condition (9.2.2) can be weakened, depending on the deviation zone.

Denote by $\partial\Omega$ the boundary of the set $\Omega$ in $S_{d-1}$, by $U^\varepsilon(\partial\Omega)$ the $\varepsilon$-neighbourhood of $\partial\Omega$ in $S_{d-1}$ and by $\Omega_\varepsilon = \Omega \setminus U^\varepsilon(\partial\Omega)$ the '$\varepsilon$-interior' of $\Omega$. First we consider the case of finite variance.

Theorem 9.2.1. Let $E\xi = 0$, $E|\xi|^2 < \infty$, $\alpha > 2$ and condition $[\mathbf{D}_\Omega]$ be satisfied. Then, for $t = |x| \gg \sqrt{n \ln n}$ and $e(x) \in \Omega_\varepsilon$ for some $\varepsilon > 0$,
$$P\big(S_n \in \Delta[x)\big) = \Delta^d n V_1(x)\,(1 + o(1)), \tag{9.2.4}$$
where the remainder term $o(1)$ is uniform in the following sense: for any sequence $s(n) \to \infty$, there exists a sequence $R(n) \to 0$ such that the $o(1)$ term
in (9.2.4) can be replaced by $R$ with $|R| \le R(n)$ for all $t > s(n)\sqrt{n \ln n}$, $e(x) \in \Omega_\varepsilon$ and $\Delta \in [\Delta_1, \Delta_2]$.

Proof. We will follow the scheme of the proofs of Theorems 3.7.1 and 4.7.1. Using notation similar to before, we put
$$G_n := \{S_n \in \Delta[x)\}, \qquad B_j := \{(\xi_j, e(x)) < t/r\}, \quad r > 1, \qquad B := \bigcap_{j=1}^{n} B_j,$$
and again make use of the relations (3.7.8) and (3.7.9) ((4.7.4) and (4.7.5)).

(1) Bounding $P(G_n B)$. It is not hard to see that, under condition $[\mathbf{D}_\Omega]$ and for $e(x) \in \Omega$, as $t \to \infty$,
$$P\big((\xi, e(x)) \ge t\big) \le V_0(t), \qquad \text{where } V_0(t) \sim c\, t^d v(t) = c\, t^{-\alpha} L(t). \tag{9.2.5}$$
Hence, cf. §§ 3.7 and 4.7, we obtain from Corollary 4.1.3 that for any $\delta > 0$, as $t \to \infty$,
$$P(G_n B) \le \big(n V_0(t)\big)^{r - \delta},$$
where we choose $r > 2$ and $\delta$ in such a way that, for $t \ge \sqrt{n}$ and $\Delta \ge t^{-\gamma}$, we have
$$\big(n V_0(t)\big)^{r - \delta} \le n \Delta^d v(t).$$
Using an argument like that following (4.7.8) (p. 219), one can easily see that it suffices to choose $r - \delta$ in such a way that
$$r - \delta > 1 + \frac{(1 + \gamma)d}{\alpha - 2}.$$
Therefore, for such $r$ and $\delta$,
$$P(G_n B) = o\big(n \Delta^d V_1(x)\big). \tag{9.2.6}$$

(2) Bounding $P(G_n \overline{B}_{n-1} \overline{B}_n)$. Let
$$A_k := \big\{v \in \mathbb{R}^d : (v, e(x)) < t(1 - k/r) + \Delta\sqrt{d}\big\}, \qquad k = 1, 2. \tag{9.2.7}$$
Then $G_n \overline{B}_{n-1} \overline{B}_n \subset \{S_{n-2} \in A_2\}$ and
$$P(G_n \overline{B}_{n-1} \overline{B}_n) = \int_{z \in A_2} P(S_{n-2} \in dz) \int_{v \in A_1} P\big(z + \xi \in dv,\ (\xi, e(x)) \ge t/r\big)\; P\big(v + \xi \in \Delta[x),\ (\xi, e(x)) \ge t/r\big). \tag{9.2.8}$$
Since for $v \in A_1$ one has $x - v \in \Pi(\Omega)$, $|x - v| > t/r - \Delta\sqrt{d}$, we see from (9.2.2) (with $y = x - v$) that the last factor in (9.2.8) does not exceed $c\Delta^d v(t)$. Therefore (see also (9.2.5)), the integral over the range $A_1$ in (9.2.8) is less than
$$c\Delta^d v(t)\, P\big((\xi, e(x)) \ge t/r\big) \le c_1 \Delta^d v(t)\, V_0(t).$$
It is evident that the same bound holds for the integral over $A_2$. Hence
$$\sum_{i \ne j} P(G_n \overline{B}_i \overline{B}_j) \le c_1 \Delta^d n^2 v(t)\, V_0(t) = o\big(n \Delta^d V_1(x)\big). \tag{9.2.9}$$

(3) Evaluating $P(G_n \overline{B}_n)$. Since $G_n \overline{B}_n \subset \{S_{n-1} \in A_1\}$ by the definition of the set $A_1$, we obtain that, by virtue of condition $[\mathbf{D}_\Omega]$ (recall that $x - z \in \Pi(\Omega)$ for $z \in A_1$),
$$P(G_n \overline{B}_n) = \int_{A_1} P(S_{n-1} \in dz)\, P\big(\xi \in \Delta[x - z),\ (\xi, e(x)) \ge t/r\big) \le E\big[P\big(\xi \in \Delta[x - S_{n-1})\,\big|\, S_{n-1}\big);\ S_{n-1} \in A_1\big] \le \Delta^d E\big[V_1(x - S_{n-1});\ |S_{n-1}| < \varepsilon t\big](1 + o(1)) + c\Delta^d v(t)\, P\big(S_{n-1} \in A_1,\ |S_{n-1}| \ge \varepsilon t\big).$$
Here $e(x - z) \in \Omega$ for $|z| < \varepsilon t$, and $P(|S_{n-1}| \ge \varepsilon t) \to 0$ for $t \gg \sqrt{n}$. Hence
$$P(G_n \overline{B}_n) \le \Delta^d V_1(x)\,(1 + o(1)).$$
Setting $A_0 := \{v \in \mathbb{R}^d : (v, e(x)) < t(1 - 1/r) - \Delta\sqrt{d}\}$, we find in a similar way that
$$P(G_n \overline{B}_n) \ge \int_{A_0} P(S_{n-1} \in dz)\, P\big(\xi \in \Delta[x - z)\big) \ge \Delta^d E\big[V_1(x - S_{n-1});\ |S_{n-1}| < \varepsilon t\big](1 + o(1)) = \Delta^d V_1(x)\,(1 + o(1)).$$
This means that $P(G_n \overline{B}_n) = \Delta^d V_1(x)(1 + o(1))$. Together with (9.2.6), (9.2.9) and the relations (4.7.4), (4.7.5), this implies
$$P(G_n) = n \Delta^d V_1(x)\,(1 + o(1)).$$
As before, the required uniformity of the bounds follows from the argument used in the proof. The theorem is proved.

Remark 9.2.2. Observe that if we had used more precise bounds for the integrals in the proof of the above theorem then condition (9.2.2) could have been relaxed to the following:
$$P\big(\xi \in \Delta[x - z)\big) \le c \Delta^d v(t)\, \frac{|z|^2}{n}$$
for $e(x) \in \Omega_\varepsilon$, $z \in A_1$, $|z| \ge \varepsilon t$, $t \gg \sqrt{n}$ (cf. condition (9.2.12) below).
Now consider the case of infinite variance and assume that condition $[\mathbf{D}_\Omega]$ of the present subsection holds for $\alpha < 2$. For such an $\alpha$, one should complement condition $[\mathbf{D}_\Omega]$ by the following requirement: for any $e \in S_{d-1}$ we have
$$P\big((\xi, e) \ge t\big) \le W(t), \tag{9.2.10}$$
where $W(t) = t^{-\alpha_W} L_W(t)$ is an r.v.f. of index $-\alpha_W \le -\alpha$.

Theorem 9.2.3. Let condition $[\mathbf{D}_\Omega]$, complemented in the above-mentioned way, be satisfied. Then, in each of the following two cases:
(1) $\alpha < 1$, $t = |x| > n^{1/\theta}$, $\theta < \alpha$;
(2) $\alpha \in (1, 2)$, $E\xi = 0$ exists, $\alpha_W > 1$ and $t > n^{1/\theta}$, $\theta < \alpha_W$;
the relation (9.2.4) holds for all $x$ such that $e(x) \in \Omega_\varepsilon$, $t = |x| \to \infty$ and all $\Delta \in [\Delta_1, \Delta_2]$. The statements of Theorems 3.7.1 and 9.2.1 concerning the uniformity of the remainder term $o(1)$ in $t \ge t_* \to \infty$, $t > n^{1/\theta}$, $e(x) \in \Omega_\varepsilon$, $\Delta \in [\Delta_1, \Delta_2]$ remain true in this case.

Proof. The proof of Theorem 9.2.3 is quite similar to that of Theorems 3.7.1 (p. 169) and 9.2.1, and so is left to the reader. We just note that, at the third stage of the argument (see the proofs of Theorems 3.7.1 and 9.2.1), it is necessary to use the relation $P\big(|S_{n-1}| > M \sigma_W(n)\big) \to 0$ as $M \to \infty$, where $\sigma_W(n) = W^{(-1)}(1/n)$, which follows from Corollaries 2.2.4 and 3.1.2.
9.2.2 The case when the main contribution to the probability that Δ[x) will be hit is due to 'zigzag' trajectories containing several large jumps

We return to the case $E|\xi|^2 < \infty$ and consider some more general conditions that cover Examples 9.1.1–9.1.4 and allow situations when the most likely trajectories to reach $\Delta[x)$ are of 'zigzag' shape. For simplicity we restrict ourselves to the case when the cone from Theorem 9.2.1 coincides with the positive orthant, i.e.
$$\Omega = \big\{e = (e^{(1)}, \dots, e^{(d)}) \in S_{d-1} : e^{(i)} \ge 0\big\}.$$
As before, we use the notation
$$t = |x|, \qquad \overline{B}_j = \big\{(\xi_j, e(x)) \ge t/r\big\}, \qquad B^{(k)} = \bigcap_{j=1}^{k} \overline{B}_j.$$

In this subsection, we will be using the following conditions. For a fixed $j$ and the values of $\Delta$ specified in condition $[\mathbf{D}_\Omega]$, we have
$$P\big(S_j \in \Delta[x);\ B^{(j)}\big) = \Delta^d v_{(j)}(t)\, g_{(j)}(e(x))\,(1 + o(1)) \tag{9.2.11}$$
for $t = |x| \to \infty$, $e(x) \in \Omega_\varepsilon$, $\varepsilon > 0$, where $v_{(j)}$ is an r.v.f. of the form
$$v_{(j)}(t) = t^{-\alpha_{(j)} - d} L_{(j)}(t), \qquad \alpha_{(j)} > 2,$$
$L_{(j)}$ is an s.v.f., the $g_{(j)}(e) > 0$ are continuous functions on $\Omega_\varepsilon$ and the remainder term $o(1)$ in (9.2.11) is uniform in the same sense as in condition $[\mathbf{D}_\Omega]$. Moreover, for $e(x) \in \Omega_\varepsilon$, $z \in A_1$ (see (9.2.7)), $|z| \ge \varepsilon t$, $t \gg \sqrt{n}$, we have
$$P\big(S_j \in \Delta[x - z);\ B^{(j)}\big) \le c \Delta^d v_{(j)}(t)\, \frac{|z|^2}{n}. \tag{9.2.12}$$

We will also need the following condition.

$[\mathbf{D}^*_\Omega]$ Let there exist a $k$ possessing the following properties:
(1) for all $j \le k$, conditions (9.2.11), (9.2.12) are satisfied;
(2) $x$ is such that $v_{(j)}(t) = o\big(v_{(k)}(t)\, n^{k - j}\big)$ as $t = |x| \to \infty$ for $j < k$;
(3) $P\big(S_n \in \Delta[x);\ \widetilde{B}^{(k+1)}\big) = o\big(\Delta^d v_{(k)}(t)\, n^k\big)$, where
$$\widetilde{B}^{(k+1)} = \bigcup_{1 \le i_1 < \dots < i_{k+1} \le n} \overline{B}_{i_1} \cdots \overline{B}_{i_{k+1}}.$$

…can be dealt with in exactly the same way (although the argument is more tedious). For $j = d + 1$, the relation (9.2.14) is no longer correct. In this case,
$$P\big(S_{d+1} \in \Delta[x);\ B^{(d+1)}\big) \asymp \Delta^d v_{(d)}(t)\, t \prod_{i=1}^{d} V_1^{(i)}(t) \tag{9.2.16}$$
(we write $a \asymp b$ if $a = O(b)$ and $b = O(a)$). For example, when $d = 2$ the principal contribution to $P\big(S_3 \in \Delta[x);\ B^{(3)}\big)$ comes from trajectories for which either $S^{(1)}_3$ contains two large jumps and $S^{(2)}_3$ just one (this comprises the event $B^{(1)}(2) B^{(2)}(1)$) or the other way around (the event $B^{(1)}(1) B^{(2)}(2)$). Now, for some $c_1, c_2$, $0 < c_1 < c_2 < 1$,
$$P\big(S^{(1)}_3 \in \Delta[x^{(1)});\ B^{(1)}(2) B^{(2)}(1)\big) \sim \int_{-\infty}^{c_2 x^{(1)}} P\big(S^{(1)}_2 \in du,\ \text{one large jump in } S^{(1)}_2\big)\, P\big(\xi^{(1)}_3 \in \Delta[x^{(1)} - u)\big) \sim 2\Delta \int_{c_1 x^{(1)}}^{c_2 x^{(1)}} V_1^{(1)}(u)\, V_1^{(1)}\big(x^{(1)} - u\big)\, du \sim c\, \Delta\, x^{(1)} \big[V_1^{(1)}(x^{(1)})\big]^2.$$
Moreover, for the second component we clearly have
$$P\big(S^{(2)}_3 \in \Delta[x^{(2)});\ B^{(2)}(1)\big) \sim 3\Delta V_1^{(2)}(x^{(2)}).$$
Since analogous relations hold true for the event $B^{(1)}(1) B^{(2)}(2)$, this proves the relation (9.2.16). The case $d > 2$ is dealt with in a similar (but more cumbersome) way. Since
$$t V_1^{(i)}(t) \sim \alpha_i V_i(t), \qquad n \max_i V_i(t) \to 0 \quad \text{for } t \ge \sqrt{n},$$
the above relations prove (9.2.11) for $j \le k = d$, and they also prove that condition (2) and relation (9.2.13) hold for Example 9.1.1.

Now we prove (9.2.12). For $j \le d$, under additional conditions of the form $P\big(\xi^{(i)} \in \Delta[y^{(i)})\big) \le c\Delta V_1^{(i)}(|y^{(i)}|)$ for $y^{(i)} < 0$, we have
$$P\big(S_j \in \Delta[x - z);\ B^{(j)}\big) \le c(j)\, \Delta^d \prod_{i=1}^{d} V_1^{(i)}(t), \qquad t = |x|, \tag{9.2.17}$$
for all $e(x) \in \Omega_\varepsilon$ and $z \in A_1$, $|z| \ge \varepsilon t$. This proves (9.2.12) for $j \le d$. For $j = d + 1$ we obtain, cf. (9.2.16), that for $e(x) \in \Omega_\varepsilon$ and $z \in A_1$, $|z| > \varepsilon t$,
$$P\big(S_j \in \Delta[x - z)\big) \le \Delta^d v_{(d)}(t)\, t \prod_{i=1}^{d} V_1^{(i)}(t).$$
Since $n t \max_i V_1^{(i)}(t) \to 0$ for $t \gg \sqrt{n}$, we arrive at (9.2.14).

One can verify in a similar way that condition $[\mathbf{D}^*_\Omega]$ is satisfied in Examples 9.1.2 (with $k = d$) and 9.1.3 (with $k = 1$). It is not difficult to see that in Example 9.1.2 we have $P\big(S_j \in \Delta[x);\ B^{(j)}\big) = 0$ for $j < d$, $e(x) \in \Omega_\varepsilon$ and $t = |x| > 0$.

It is clear that condition $[\mathbf{D}^*_\Omega]$ will be satisfied for a much wider class of distributions $F$. For instance, this applies to distributions similar to those from Example 9.1.1 when there is a weak enough dependence between the components of $\xi$,
and also to distributions for which $P\big((\xi, x) \ge u\big) \sim h(e(x))\, u^{-f(e(x))}$, where the functions $h(\cdot)$ and $f(\cdot)$ satisfy some additional conditions, and so on.

Theorem 9.2.4. Let $E\xi = 0$, $E|\xi|^2 < \infty$ and condition $[\mathbf{D}^*_\Omega]$ be satisfied. Then, for $t = |x| \gg \sqrt{n \ln n}$ and $e(x) \in \Omega_\varepsilon$ for some $\varepsilon > 0$, we have
$$P\big(S_n \in \Delta[x)\big) = \Delta^d n^k v_{(k)}(t)\, g_{(k)}(e(x))\,(1 + o(1)),$$
where the term $o(1)$ is uniform in the same sense as in Theorem 9.2.1.

Proof. The scheme of the argument is somewhat different in this case. We will again make use of the equality (4.7.4) but will apply the latter representation to the probability $P(G_n \overline{B})$:
$$P(G_n \overline{B}) = \sum_{i=1}^{n} P\big(G_n \overline{B}^{(1)}_i\big) + \sum_{1 \le i < j \le n} P\big(G_n \overline{B}^{(2)}_{ij}\big) + \dots + \sum_{1 \le i_1 < \dots < i_k \le n} P\big(G_n \overline{B}^{(k)}_{i_1, \dots, i_k}\big) + P\big(G_n \widetilde{B}^{(k+1)}\big). \tag{9.2.18}$$
…$(\xi_j, e(x)) \ge t/r - \Delta\sqrt{d} \sim t/r$. Therefore we can apply condition (9.2.12) to the second factor in the integrand in (9.2.20). Note that this factor is asymptotically equivalent to the right-hand side of (9.2.11) for $|z| < \varepsilon_1 t$ as $\varepsilon_1 \to 0$. In the first factor,
$$P\Big(\bigcap_{i=1}^{n-j} B_i\Big) = \big(1 - P(\overline{B}_1)\big)^{n-j} \ge \big(1 - c_1 v_{(1)}(t)\big)^{n} \to 1, \qquad P\Big(\overline{S}_{n-j} < \varepsilon_1 t,\ \bigcap_{i=1}^{n-j} B_i\Big) \to 1, \qquad P\big(\overline{S}_{n-j} < \varepsilon_1 t\big) \to 1,$$
assuming that $\varepsilon_1 \to 0$ slowly enough that $\varepsilon_1 t \gg \sqrt{n}$. Now we represent the integral in (9.2.20) as the sum of the integrals over $\{|z| < \varepsilon_1 t\}$ and over its complement. …

9.3 Integral theorems

We will consider the asymptotics of $P(S_n \in tA)$ under the following conditions on the set $A$.

(1) The relations
$$\inf_{v \in A} |v| > 0, \qquad \sup\big\{\varepsilon > 0 : U^\varepsilon(\{v\}) \subset A \text{ for some } v \in A\big\} > 0 \tag{9.3.1}$$
hold true, i.e. the set $A$ is bounded away from the origin and is solid (contains an open subset).

(2) Condition $[\mathbf{D}_\Omega]$ from § 9.2 is satisfied for some $\alpha > 2$, $\gamma > -1$, $\varepsilon > 0$ and
$$\Omega^{(A)} := \{e(v) : v \in A\} \subset \Omega_\varepsilon, \tag{9.3.2}$$
where $\Omega_\varepsilon = \Omega \setminus U^\varepsilon(\partial\Omega)$ is the $\varepsilon$-interior of $\Omega$.

In the present context, the function $g(e)$ does not need to be everywhere positive on $\Omega^{(A)}$; here, the relation (9.2.1) has to be written in the form $P(\xi \in \Delta[x)) = \Delta^d V_1(x) + o(\Delta^d v(t))$, $t = |x|$, and it should be assumed that $\int_{\Omega^{(A)}} g(e)\, de > 0$.
(3) For any fixed $M > 0$, the Lebesgue measure $\mu$ of the intersection of the $\varepsilon$-neighbourhood $U^\varepsilon(\partial A)$ of the boundary $\partial A$ of $A$ with the ball $U^M(\{0\})$ tends to 0 as $\varepsilon \to 0$:
$$\mu\big(U^\varepsilon(\partial A) \cap U^M(\{0\})\big) \to 0. \tag{9.3.3}$$
It is obvious that the property (9.3.3) will hold for all sets $A$ with smooth enough boundaries.

Theorem 9.3.1. Under the above conditions (1)–(3), as $t \to \infty$ and $t \gg \sqrt{n \ln n}$, one has
$$P\big(S_n \in tA\big) = n t^{-\alpha} L(t) \int_A |v|^{-\alpha - d}\, g(e(v))\, dv\, (1 + o(1)), \tag{9.3.4}$$
where the functions $L$ and $g$ are defined in condition $[\mathbf{D}_\Omega]$ (see p. 403).

The nature of the large deviation probabilities for the sums $S_n$ clearly remains the same as before: the right-hand side of (9.3.4) can be written as
$$n P(\xi \in tA)\,(1 + o(1)), \tag{9.3.5}$$
so that the main contribution to the probability of the event $\{S_n \in tA\}$ comes from the $n$ events $\{\xi_j \in tA\}$, $j = 1, \dots, n$. With the help of Theorem 9.2.3, the reader will encounter no difficulties in proving a similar assertion in the case $E|\xi|^2 = \infty$.

As we have already said, Theorem 9.3.1 demonstrates the following useful property of integro-local theorems: when calculating the asymptotics of the probabilities $P(S_n \in tA)$, these theorems enable one to proceed as if $S_n$ had a density whose asymptotics we knew, although condition $[\mathbf{D}_\Omega]$ contains no assumptions on the existence of densities.

Proof of Theorem 9.3.1. Let $\mathbb{Z}^d_\Delta$ be a lattice in $\mathbb{R}^d$ with span $\Delta = o(t)$. Denote by $U^b := U^b(\{0\})$ the ball of radius $b > 0$ centred at the origin, let $A^{(t,M)}$ be the set of all points $z \in \mathbb{Z}^d_\Delta$ for which $\Delta[z) \subset tA \cap U^{tM}$, and let $\overline{A}^{(t,M)}$ be the minimal set of points $z \in \mathbb{Z}^d_\Delta$ such that
$$tA \cap U^{tM} \subset \bigcup_{z \in \overline{A}^{(t,M)}} \Delta[z).$$
Then, clearly,
$$\sum_{z \in A^{(t,M)}} P\big(S_n \in \Delta[z)\big) \le P\big(S_n \in tA \cap U^{tM}\big) \le \sum_{z \in \overline{A}^{(t,M)}} P\big(S_n \in \Delta[z)\big). \tag{9.3.6}$$
It is evident that, owing to conditions (9.3.1)–(9.3.3) and Theorem 9.2.1, each sum in (9.3.6) is asymptotically equivalent to
$$n \sum_{z \in tA \cap \mathbb{Z}^d_\Delta \cap U^{tM}} \Delta^d |z|^{-\alpha - d} L(|z|)\, g(e(z)) \sim n \int_{u \in tA \cap U^{tM}} |u|^{-\alpha - d} L(|u|)\, g(e(u))\, du \sim n t^{-\alpha} L(t) \int_{v \in A \cap U^{M}} |v|^{-\alpha - d}\, g(e(v))\, dv.$$
It only remains to notice that, for $\overline{U}^{tM} := \mathbb{R}^d \setminus U^{tM}$,
$$P\big(S_n \in tA \cap \overline{U}^{tM}\big) = o\big(n t^{-\alpha} L(t)\big) \quad \text{as } M \to \infty,$$
which can be verified either using relations of the form (9.2.4) or using bounds for the probability that $S_n$ hits a half-space of the form $\Pi(e, tM) = \{v \in \mathbb{R}^d : (v, e) \ge tM\}$, $e \in \Omega$. The theorem is proved.
9.3.2 A direct approach to integral theorems

The relations (9.3.4), (9.3.5) indicate that a somewhat different approach to proving integral large deviation theorems in $\mathbb{R}^d$, $d > 1$, may also be possible. Instead of condition $[\mathbf{D}_\Omega]$ one could postulate those properties of the set $A$ and vector $\xi$ that turn out to be essential for the asymptotics (9.3.5) to hold true. Namely, in place of the rather restrictive condition $[\mathbf{D}_\Omega]$, we will assume that condition $[\mathbf{A}]$, to be stated below, is satisfied.

For simplicity we will first restrict ourselves to the case when $E\xi = 0$, $E|\xi|^2 < \infty$ and the set $A$ can be separated from the origin by a hyperplane, i.e. there exist a $b > 0$ and a vector $e \in S_{d-1}$ such that
$$A \subset \Pi(e, b) = \{v \in \mathbb{R}^d : (v, e) \ge b\}. \tag{9.3.7}$$
(A remark concerning the transition to the general case, when $A$ is bounded away from the origin by a ball $U^b$ for some $b > 0$, will be made after the proof of Theorem 9.3.2.) Condition $[\mathbf{A}]$ has the following form:

$[\mathbf{A}]$

(1) We have the relation
$$P(tA) := P(\xi \in tA) \sim V(t)\, F(A), \tag{9.3.8}$$
where $V(t) = t^{-\alpha} L(t)$ ($\alpha > 2$ when $E|\xi|^2 < \infty$), $L$ is an s.v.f. at infinity and $F(A)$ is a functional defined on a suitable class of sets. This functional, the set $A$ and the vector $\xi$ are such that
$$P(z + tA) \sim P(tA) \quad \text{for } |z| = o(t), \quad t \to \infty. \tag{9.3.9}$$
The property (9.3.9) simply expresses the continuity of the functional $F$: we have $F(v + A) \sim F(A)$ as $|v| \to 0$. In Theorem 9.3.1 the role of the functional $F$ is played by the integral on the right-hand side of (9.3.4).

(2) Condition (9.3.7) is met and, for the corresponding $e \in S_{d-1}$,
$$P\big((\xi, e) \ge t\big) \le c V(t) \tag{9.3.10}$$
for some $c < \infty$.
Theorem 9.3.2. Let condition $[\mathbf{A}]$ be satisfied. Then, for $t \gg \sqrt{n \ln n}$,
$$P\big(S_n \in tA\big) \sim n V(t)\, F(A) \sim n P(\xi \in tA). \tag{9.3.11}$$

Proof. The proof of this result follows our previous approaches. Set
$$G_n := \{S_n \in tA\}, \qquad B_j := \{(\xi_j, e) < \rho t\} \quad \text{for } \rho < b, \qquad B = \bigcap_{j=1}^{n} B_j,$$
and again make use of the representations (4.7.4), (4.7.5).

(1) Bounding $P(G_n B)$. Since $G_n \subset G_{n,e} := \{(S_n, e) \ge bt\}$, we have the inequality $P(G_n B) \le P(G_{n,e} B)$, where, owing to the bounds from § 4.1, for any $\delta > 0$ and as $t \to \infty$,
$$P(G_{n,e} B) \le c \big(n V(t)\big)^{r - \delta}, \qquad r := \frac{b}{\rho} > 1.$$
Hence
$$P(G_n B) = o\big(n V(t)\big). \tag{9.3.12}$$

(2) The bound $P(G_n \overline{B}_{n-1} \overline{B}_n) \le P(\overline{B}_{n-1} \overline{B}_n) < c\, V(\rho t)^2$ is obvious here, so that
$$\sum_{i \ne j} P(G_n \overline{B}_i \overline{B}_j) \le c \big(n V(t)\big)^2 = o\big(n V(t)\big). \tag{9.3.13}$$

(3) Evaluating $P(G_n \overline{B}_n)$. We have
$$P(G_n \overline{B}_n) = \int P\big(S_{n-1} \in dz\big)\, P\big(z + \xi \in tA,\ (\xi, e) \ge \rho t\big) = \int_{|z| < M\sqrt{n}} + \int_{|z| \ge M\sqrt{n}}. \tag{9.3.14}$$
Here, in the first integral on the right-hand side, for $\rho < b$,
$$P\big(z + \xi \in tA,\ (\xi, e) \ge \rho t\big) = P(z + \xi \in tA) \sim P(\xi \in tA)$$
owing to condition (9.3.9), so that when $M \to \infty$ slowly enough,
$$\int_{|z| < M\sqrt{n}} \sim P(\xi \in tA).$$
In the second integral on the right-hand side of (9.3.14),
$$P\big(z + \xi \in tA,\ (\xi, e) \ge \rho t\big) \le P\big((\xi, e) \ge \rho t\big) \le c V(t),$$
so that, as $M \to \infty$,
$$\int_{|z| \ge M\sqrt{n}} = o\big(V(t)\big).$$
The above means that $P(G_n \overline{B}_n) \sim P(\xi \in tA)$. Together with (9.3.12), (9.3.13), this establishes (9.3.11). The theorem is proved.

The transition to the case where $A$ is bounded away from the origin by a ball of radius $b > 0$ can be made by, for example, partitioning the set $A$ into finitely many subsets $A_1, \dots, A_K$ (one could, say, take the intersections of $A$ with each of the $K = 2^d$ coordinate orthants), in such a way that each of the subsets lies in a half-space $\Pi(e_k, b_k)$ for suitable $e_k \in S_{d-1}$ and $b_k > 0$, $k = 1, \dots, K$. After that, the above argument is applied to each of the subsets $A_1, \dots, A_K$. Instead of (9.3.10) one should now, of course, assume that, for all $k = 1, \dots, K$ and some $c_k < \infty$,
$$P\big((\xi, e_k) \ge t\big) < c_k V(t).$$
The transition to the case $E|\xi|^2 = \infty$ can be made in a way similar to that used in Theorem 9.2.3 (see also § 3.7).
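The conclusion of Theorem 9.3.2 can be checked by simulation. The sketch below is an illustration under assumed parameters (independent centred Pareto coordinates and the set $A = [1,2] \times [-1,1]$, which satisfies (9.3.7) with $e = (1,0)$, $b = 1$); neither the distribution nor the set comes from the book. It compares a Monte Carlo estimate of $P(S_n \in tA)$ with $n\,P(\xi \in tA)$.

```python
import random

# Hedged Monte Carlo sketch (illustrative choices): xi has two independent
# centred Pareto(alpha) coordinates; A = [1,2] x [-1,1] is separated from the
# origin by the hyperplane (v, e) >= 1 with e = (1, 0), so Theorem 9.3.2-type
# asymptotics P(S_n in tA) ~ n P(xi in tA) should be visible.
alpha, n, t, trials = 2.5, 30, 60.0, 100_000
mean = alpha / (alpha - 1.0)
rng = random.Random(3)

def step():
    return (1.0 - rng.random()) ** (-1.0 / alpha) - mean

def in_tA(v1, v2):
    return t <= v1 <= 2 * t and -t <= v2 <= t

hits = 0
for _ in range(trials):
    s1 = sum(step() for _ in range(n))
    s2 = sum(step() for _ in range(n))
    if in_tA(s1, s2):
        hits += 1
p_hat = hits / trials

# n * P(xi in tA): the second coordinate falls in [-t, t] with probability ~ 1
p_xi = (t + mean) ** -alpha - (2 * t + mean) ** -alpha
print(p_hat, n * p_xi)   # comparable magnitudes
```

As in the univariate case, almost every trajectory that ends in $tA$ does so via a single jump whose first coordinate is of order $t$; the remaining $n-1$ steps only contribute fluctuations of order $\sqrt{n}$.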
10 Large deviations in trajectory space
10.1 Introduction

We return now to the one-dimensional random walks studied in Chapters 2–8. The most general of the problems considered in those chapters was that on the crossing of a given remote boundary by the trajectory of the walk. This is one of the simplest problems on large deviations in the space of trajectories. Now we will consider a more general setting of this problem.

Let $D(0, 1)$ be the space of functions $f(\cdot)$ on $[0, 1]$ without discontinuities of the second kind, endowed with the uniform norm $\|f\| = \sup_{t \in [0,1]} |f(t)|$ and the respective $\sigma$-algebra $\mathfrak{B}$ of Borel (or cylindrical) sets. For definiteness, we will assume that the elements $f \in D(0, 1)$ are right-continuous. Define a random process $\{S_n(t)\}$ in $D(0, 1)$ by letting $S_n(t) := S_{\lfloor nt \rfloor}$, $t \in [0, 1]$, where, as before, $S_k = \sum_{i=1}^{k} \xi_i$, the r.v.'s $\xi_i$ are independent and have the same distribution as $\xi$, and $\lfloor nt \rfloor$ is the integral part of $nt$. Then the probability
$$P\big(S_n(\cdot) \in xA\big) \tag{10.1.1}$$
is defined for any $A \in \mathfrak{B}$. If $E\xi = 0$ and the set $A$ is bounded away from zero (for a more precise definition, see below) then, provided that $x \to \infty$ fast enough, the probability (10.1.1) will tend to zero, and so there arises the problem of studying the asymptotics of this probability. This problem is referred to as the problem on probabilities of large deviations in the trajectory space. As we have already said, the simplest problems of this kind were dealt with in Chapters 2–8, where, for a given function $g$, we considered a set
$$A = A_g = \Big\{ f : \sup_{t \in [0,1]} \big(f(t) - g(t)\big) \ge 0 \Big\}.$$
In § 10.2 we will consider a problem on so-called one-sided large deviations in trajectory space under the assumption that condition [ · , =] of Chapters 3 and 4 is met. In § 10.3, the results obtained in the previous section will be extended to the general case.
10.2 One-sided large deviations in trajectory space
To formulate the main assertions, we will need some notation and notions. Introduce the step functions

e_{v,t} = e_{v,t}(u) := 0 for u < t,  v for u ≥ t,   t ∈ [0, 1], v ∈ R,

and the number set

A(t) := {v : e_{v,t} ∈ A},    (10.2.1)

which could be called the section of the set A ∈ B at the point t. Define the measure μ_{A(t)} of the set A(t) by

μ_{A(t)} := α ∫_{A(t)} u^{−α−1} du,    (10.2.2)

where −α is the index of the r.v.f. V(t) in condition [ · , =]. We will need the following conditions on the set A.

[A₀] The set A is bounded away from zero, i.e. inf_{f∈A} ‖f‖ > 0.

In this case, one can assume without loss of generality that

inf_{f∈A} ‖f‖ = 1.    (10.2.3)
The set A will be said to be one-sided if the following condition is met:

[A₀₊] inf_{f∈A} sup_{u∈[0,1]} f(u) > 0.

It is evident that condition [A₀₊] implies [A₀] and, moreover, one can assume without loss of generality that

inf_{f∈A} sup_{u∈[0,1]} f(u) = 1,   A(t) ⊂ [1, ∞).    (10.2.4)
The next condition is related to the structure of the sets A(t).

[A₁] For each t ∈ [0, 1], the set A(t) is a union of finitely many intervals (segments, half-intervals). The dependence of the boundaries of these intervals on t is such that the function μ_{A(t)} is Riemann integrable.

Thus, under condition [A₁] we can define the measure

μ(A) := ∫_0^1 μ_{A(t)} dt.    (10.2.5)
Remarks on a possible relaxation of condition [A₁] can be found after Theorem 10.2.3. Further, we will need a continuity condition. Take an arbitrary f ∈ D(0, 1) and add to it a jump of a variable size at a point t, i.e. consider the family of functions f_{v,t}(u) := f(u) + e_{v,t}(u). Those values of v for which f_{v,t} ∈ A form the set

A_f(t) := {v ∈ R : f_{v,t} ∈ A}.    (10.2.6)

Our continuity condition (for the measure μ_{A(t)}) consists in the following.

[A₂] For each t ∈ [0, 1], the set A_f(t) is a finite union of intervals (segments, half-intervals) and, as ‖f‖ → 0, the set A_f(t) converges 'in measure' to A(t): uniformly in t ∈ [0, 1],

α ∫_{A_f(t)} u^{−α−1} du → μ_{A(t)}.    (10.2.7)
Example 10.2.1. The boundary problems considered in Chapters 3–5. In these problems, we had

A = {f ∈ D(0, 1) : sup_{t∈[0,1]} (f(t) − g(t)) ≥ 0}

for a given function g. If inf_{t∈[0,1]} g(t) > 0 and g ∈ D(0, 1) then conditions [A₀]–[A₂] are satisfied, and the sets A(t) and A_f(t) are half-intervals of the form [v, ∞):

A(t) = [g_*(t), ∞),   g_*(t) = inf_{u∈(t,1]} g(u),

μ_{A(t)} = α ∫_{g_*(t)}^∞ u^{−α−1} du = g_*^{−α}(t),   μ(A) = ∫_0^1 g_*^{−α}(t) dt.
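As a numerical aside (ours, not part of the original text): the quantities g_*(t) and μ(A) of Example 10.2.1 are straightforward to approximate from the values of g on a grid, since g_*(t) is a running future infimum. The boundaries chosen below are arbitrary illustrations.

```python
# Numerical sketch of Example 10.2.1 (illustrative; the boundaries g are arbitrary).
# For a boundary g on [0, 1], the section A(t) is [g_*(t), infinity) with
# g_*(t) = inf over u in (t, 1] of g(u), and mu(A) = integral of g_*(t)**(-alpha) dt.

def mu_boundary(g_vals, alpha):
    """Riemann-sum approximation of mu(A) = int_0^1 g_*(t)**(-alpha) dt
    from grid values g_vals[j] = g((j + 1)/n), j = 0..n-1."""
    n = len(g_vals)
    g_star = list(g_vals)
    # future infimum, computed backwards: g_*(j/n) = min over k >= j of g(k/n)
    for j in range(n - 2, -1, -1):
        g_star[j] = min(g_star[j], g_star[j + 1])
    return sum(gs ** (-alpha) for gs in g_star) / n

alpha, n = 3.0, 10_000
# a V-shaped boundary: to the left of its minimum, g_* equals the minimal value of g
g = [1.0 + 2.0 * ((j + 1) / n - 0.5) ** 2 for j in range(n)]
print(mu_boundary(g, alpha))           # V-shaped boundary
print(mu_boundary([1.5] * n, alpha))   # constant boundary: mu(A) = 1.5**(-3)
```

For a constant boundary g ≡ c one has g_* ≡ c and μ(A) = c^{−α}, which the second call reproduces exactly.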
Example 10.2.2. The set

A = {f ∈ D(0, 1) : ∫_0^1 f(u) du ≥ b},   b > 0,

satisfies all the conditions [A₀]–[A₂]. In this situation, the sets A(t) and A_f(t) are, as was the case in Example 10.2.1, half-intervals of the form [v, ∞): we have A(t) = [b/(1 − t), ∞) and

μ_{A(t)} = α ∫_{b/(1−t)}^∞ u^{−α−1} du = ((1 − t)/b)^α,   μ(A) = ∫_0^1 ((1 − t)/b)^α dt = b^{−α}/(α + 1).
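A quick numerical check (ours, not from the original text) of the closed form in Example 10.2.2: a Riemann sum for (10.2.5) with μ_{A(t)} = ((1 − t)/b)^α should reproduce b^{−α}/(α + 1).

```python
# Illustrative check of Example 10.2.2:
# mu_{A(t)} = ((1 - t)/b)**alpha and mu(A) = int_0^1 mu_{A(t)} dt = b**(-alpha)/(alpha + 1).

def mu_A_t(t, b, alpha):
    # section measure at time t: alpha * int_{b/(1-t)}^inf u**(-alpha-1) du
    return ((1.0 - t) / b) ** alpha

def mu_A(b, alpha, n=100_000):
    # midpoint Riemann sum for (10.2.5)
    return sum(mu_A_t((j + 0.5) / n, b, alpha) for j in range(n)) / n

b, alpha = 2.0, 3.0
print(mu_A(b, alpha), b ** (-alpha) / (alpha + 1.0))
```

The two printed numbers agree to quadrature accuracy, for any admissible b > 0 and α.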
Now we can formulate the main assertion. First consider one-sided sets A and the case of finite variance Eξ² < ∞.

Theorem 10.2.3. Let Eξ = 0, Eξ² < ∞ and assume that conditions [ · , =] with V ∈ R and α > 2, [A₀₊], [A₁] and [A₂] are satisfied. Then, as n → ∞,

P(S_n(·) ∈ xA) = μ(A)nV(x)(1 + o(1)),    (10.2.8)

where the measure μ(A) is defined in (10.2.1), (10.2.2) and (10.2.5) and the remainder term o(1) is uniform in the zone n < cx²/ln x for a suitable c. Regarding the value of c, see Remark 4.4.2.

Remark 10.2.4. The assertion of the theorem is substantially different from those on the asymptotics of P(S_n(·) ∈ xA) in the 'Cramér case', when Ee^{λξ} < ∞ for some λ > 0. The foremost difference is that, in the Cramér case, one can generally find only the asymptotics of ln P(S_n(·) ∈ xA). In addition, the nature of that asymptotics proves to be completely different from (10.2.8) (see e.g. [38] and also Chapter 6).

Proof of Theorem 10.2.3. The scheme of the proof is the same as that used in Chapters 2–5. For

G_n := {S_n(·) ∈ xA},   B_j = {ξ_j < y},   B = ⋂_{j=1}^n B_j,

we have, for x = ry, r > 2,

P(G_n) = P(G_nB) + P(G_nB̄),    (10.2.9)

P(G_nB̄) = Σ_{j=1}^n P(G_nB̄_j) + O((nV(x))²),    (10.2.10)
where, owing to the obvious inclusion G_n ⊂ {S̄_n ≥ x} (see (10.2.4)) and Theorem 4.1.2, P(G_nB) = o((nV(x))²). As before, it remains to evaluate P(G_nB̄_j). Let

S_n^j(t) := S_n(t) − e_{ξ_j, j/n}(t),   j ≤ n,    (10.2.11)

so that the process S_n^j(t) is obtained from S_n(t) by removing the jump ξ_j at the point j/n. For ε > 0, put

C_j := {max_{t∈[0,1]} |S_n^j(t)| < εx}.    (10.2.12)

It is obvious that ξ_j and {S_n^j(t)} are independent of each other and that, for
ε = ε(x) → 0 slowly enough (but in such a way that εx ≫ √n), we have from the Kolmogorov inequality that max_{j≤n} P(C̄_j) → 0.
This implies that, for such an ε,

P(G_nB̄_j) = P(G_nB̄_jC_j) + o(V(x)).    (10.2.13)

Now let F_n^j be the σ-algebra generated by ξ₁, . . . , ξ_{j−1}, ξ_{j+1}, . . . , ξ_n. Then

P(G_nB̄_jC_j) = E[P(G_nB̄_j | F_n^j); C_j].    (10.2.14)

Furthermore, observe that x^{−1}S_n(·) ∈ U_ε(e_{ξ_j/x, j/n}) on the event C_j, where U_ε(f) denotes the ε-neighbourhood of f in the uniform norm. Next we note also that, for fixed σ-algebra F_n^j, the event G_nB̄_j in (10.2.14) is equivalent to the event B̄_j ∩ {ξ_j ∈ xA_f(j/n)} with f(·) = x^{−1}S_n^j(·), so that A_f(t) does not depend on ξ_j. By condition [A₂], on the set C_j one has

P(B̄_j ∩ {ξ_j ∈ xA_f(j/n)} | F_n^j) = P(ξ_j ≥ y, x^{−1}ξ_j ∈ A(j/n))(1 + o(1))    (10.2.15)

and

P(ξ_j ∈ xA(j/n)) = V(x)μ_{A(j/n)}(1 + o(1)).    (10.2.16)
The event {ξ_j ≥ y} on the right-hand side of (10.2.15) is 'redundant' owing to (10.2.4), whereas the equality in (10.2.16) holds because the set A(j/n) is a union of a finite number of intervals and V(ux) ∼ u^{−α}V(x) as x → ∞. Since P(C_j) → 1, we obtain from (10.2.14)–(10.2.16) that

P(G_nB̄_jC_j) = V(x)μ_{A(j/n)}(1 + o(1)).    (10.2.17)

Combining this relation with (10.2.9), (10.2.10) and (10.2.13) and using the uniformity in condition [A₂] and (10.2.15), (10.2.16), we obtain

P(G_n) = V(x) Σ_{j=1}^n μ_{A(j/n)}(1 + o(1)) = μ(A)nV(x)(1 + o(1)).
The uniformity of o(1) in (10.2.8) follows from that of the bounds from § 4.1. The theorem is proved.

Remark 10.2.5. An assertion close to Theorem 10.2.3 was obtained in [225] in the case when ξ has a regularly varying density

−F_+′(t) = αt^{−α−1}L(t),   L an s.v.f. at infinity.
In this case, conditions [A1 ] and [A2 ] could be somewhat relaxed with regard to the assumption that A(t) and Af (t) are unions of finite collections of intervals.
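The 'one big jump' mechanism behind (10.2.8) is easy to see in simulation. The sketch below is our illustration, not part of the original exposition: it takes the simplest one-sided set A = {f : sup_t f(t) ≥ 1}, for which μ(A) = 1 and the event {S_n(·) ∈ xA} is just {S̄_n ≥ x}, with an arbitrarily chosen symmetric Pareto-type jump law, and compares the empirical probability with μ(A)nV(x). At moderate x the ratio exceeds 1 but is of order one.

```python
# Monte Carlo sketch (illustrative) of P(max_k S_k >= x) ~ mu(A)*n*V(x), mu(A) = 1,
# for symmetric jumps with P(|xi| >= t) = t**(-alpha), t >= 1 (so E xi = 0, E xi^2 < inf).
import numpy as np

rng = np.random.default_rng(0)
alpha, n, x = 3.0, 50, 60.0            # V(x) = P(xi >= x) = 0.5 * x**(-alpha)
n_paths, chunk = 1_000_000, 100_000

hits = 0
for _ in range(n_paths // chunk):
    mag = (1.0 - rng.random((chunk, n))) ** (-1.0 / alpha)  # |xi|: Pareto(alpha) on [1, inf)
    sgn = rng.integers(0, 2, (chunk, n)) * 2 - 1            # symmetric sign
    s = np.cumsum(mag * sgn, axis=1)
    hits += np.count_nonzero(s.max(axis=1) >= x)

p_hat = hits / n_paths
approx = n * 0.5 * x ** (-alpha)       # mu(A) * n * V(x)
print(p_hat, approx, p_hat / approx)
```

Here x lies in the zone n < cx²/ln x of Theorem 10.2.3; the residual discrepancy reflects finite-x corrections (the walk's fluctuations before and after the big jump).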
422
Large deviations in trajectory space
√ The same paper [225] contains a ‘transient’ assertion for deviations x n as well, where an approximation for P(Gn ) includes also the term P w(·) ∈ xA , w(·) being the standard Wiener process. An assertion on the asymptotics of the probability P(S(·) ∈ xA) in the case Eξ = 0, Eξ 2 = ∞ has a similar form, as follows. Theorem 10.2.6. Let Eξ = 0, Eξ 2 = ∞ and let conditions [ 0, g− (t) < 0 possessing the properties inf g+ (t) = 1,
(the infimum being over t ∈ [0, 1]),   sup_{t∈[0,1]} g_−(t) = −g,   g > 0,    (10.3.1)
we want to find the asymptotic behaviour of the probability of the event G_n complementary to

Ḡ_n := {xg_−(k/n) < S_k < xg_+(k/n); k = 1, . . . , n}.

Clearly G_n = G_n⁺ ∪ G_n⁻, where

G_n⁺ = {max_{k≤n} (S_k − xg_+(k/n)) ≥ 0},   G_n⁻ = {min_{k≤n} (S_k − xg_−(k/n)) ≤ 0}.

Hence

P(G_n) = P(G_n⁺) + P(G_n⁻) − P(G_n⁺G_n⁻).    (10.3.2)
10.3 The general case
The asymptotics of P(G_n±) were found (under the appropriate conditions) in Chapters 3 and 4 (they are of the form c₊nV(x) and c₋nW(x) respectively). Now we will show that

P(G_n⁺G_n⁻) = o(P(G_n)).    (10.3.3)

To avoid repeating very similar formulations in the cases Eξ² < ∞, Eξ = 0 and Eξ² = ∞, Eξ = 0 (cf. e.g. Theorems 10.2.3 and 10.2.6), we will introduce a 'united' condition [Q*], which ensures that the approximations

P(S̄_n ≥ x) ∼ P(S_n ≥ x) ∼ nV(x),    (10.3.4)

P(S̲_n ≤ −x) ∼ P(S_n ≤ −x) ∼ nW(x),    (10.3.5)
where S n = minkn Sk , hold true (not necessarily both at the same time) or that the respective upper bounds cnV (x) and cnW (x) for the probabilities in question will be valid. √ In the case Eξ 2 < ∞, Eξ = 0, condition [Q∗ ] means that x > c n ln n, n → ∞, and that one of the following three alternative sets of conditions: √ (1) [=, =], W (x) ∼ c1 V (x), c1 > 0, α > 2, c > α − 2; (10.3.6) (2) [ 2, c > β − 2; (10.3.7) √ (10.3.8) (3) [=, 2, c > α − 2. It follows from Remark 4.4.2 that under conditions (10.3.6) we will have the relations (10.3.4), (10.3.5). Under condition (10.3.7), we will have (10.3.4) and the bound P(S n −x) < c2 nW (x) (see Remark 4.4.2 and Corollary 4.1.4). A similar (symmetric) assertion holds under condition (10.3.8). In the case Eξ 2 = ∞, Eξ = 0, condition [Q∗ ] means that x → ∞ and that one of the following three alternative sets of conditions is satisfied: (10.3.9) (1) [=, =], α ∈ (1, 2), W (x) ∼ c1 V (x), c1 > 0, nV (x) → 0; x (2) [ 0,
Non-identically distributed jumps with finite variances
Hence (cf. (4.1.31)) hD ln P −r ln T + r + 2 ln2 T + ε1 (T ) 2y hD = − r − 2 ln T ln T + r + ε1 (T ), 2y
(13.1.15)
where ε1 (T ) → 0 as T → ∞. Owing to [U1 ], we have ln T = − ln nV (x)+O(1). Next we will bound nV (x) from below. It follows from [U2 ] that for x > tδ one has nV (x) J(n, δ)x−α−δ .
(13.1.16)
Hence, putting J := J(n, δ), ρ := J/D and α := α + δ, we obtain that, for x = sσ(n) → ∞, σ(n) = (α − 2)D ln D,
nV (x) Jx−α = Js−α (α − 2)−α /2 D −α /2 (ln D)−α /2
= cD (2−α )/2 s−α (ln D)−α /2 ρ.
(13.1.17)
Therefore ln T = − ln nV (x) + O(1) α α − 2 ln D + α ln s − ln ρ + ln ln D + O(1) (13.1.18) 2 2 2α ln s 2 ln ρ α − 2 ln D 1 + − (1 + o(1)). = 2 α − 2 ln D α − 2 ln D
Since s 1, ρ 1 and 2α 2α , α − 2 α−2 we obtain ln T
α − 2 ln D 2
1+b
1 1 , α − 2 α−2 ln s 2 ln ρ − (1 + o(1)). ln D α − 2 ln D
Why the relative remainder term in (13.1.18) has the form o(1) can be explained as follows. The convergence x → ∞, which is necessary for nV (x) → 0, means that either s → ∞ or D → ∞. If s → ∞, D < c then the value in the large parentheses in (13.1.18) increases unboundedly. Hence the terms ln ln D + O(1) translate into a factor o(1) times the expression in the large parentheses. Alternatively, if D → ∞ then also ln D → ∞, and we again obtain o(1) for the same reason. If ln ρ = o(ln D) (which is only possible when D → ∞) then the representation (13.1.18) will coincide with (4.1.32) provided that in the latter we replace n by D.
13.1 Upper and lower bounds for the distributions of S n and Sn
As in § 4.1, (13.1.18) implies that hr2 hD δ ln s 2 ln ρ ln T 2 1 + 1+b − (1 + o(1)). 2y 2 4s α−2 ln D α − 2 ln D (13.1.19) Since δ can be chosen arbitrarily small we obtain that, owing to (13.1.15), one has the following bound with a new value of h, somewhat greater than that in (13.1.19): & % ln s hr2 + χ ln T. ln P r − r − 2 1 + b 4s ln D This proves the first assertion of the theorem. (ii) Now we will prove the second part of the theorem. We will make use of (13.1.13), where we put x μ := . Dh Then, for y = x (r = 1),
μ2 hD 1 ln P −μx + + cnV + nV (y)eμy (1 + o(1)) 2 μ 2 x2 Dh =− + cnV + ex /Dh nV (x)(1 + o(1)). 2Dh x
Here, for s2 c, we have from [U2 ] that 8 Dh D c1 nV nV x ln D where, for t tδ ,
V (t) V (tδ )
(13.1.20)
,
−α+δ
t tδ
.
Hence for δ < 2 − α we find from (13.1.4) and (13.1.9) that, for α := α − δ, Dh c2 D−α /2 J c3 D 1−α /2 → 0 as D → ∞. nV (13.1.21) x Consider the last term in (13.1.20). When
(recall that x
√
1 (α − 2)(h − τ ) s2 2(α − 2) (α − 2) ln D
(13.1.22)
D), we find in a similar way, using (13.1.9), that
nV (x) c1 x−α J(n, δ) < c2 s−α D 1−α and x2 (h − τ )(α − 2) ln D = Dh 2h
/2
c2 D 1−α
α − 2 τ (α − 2) − 2 2h
/2
(ln D)α
ln D.
/2
(13.1.23)
Therefore nV (x)ex
2
/Dh
D−τ (α
−2)/2h
(ln D)α
/2
→0
(13.1.24)
as D → ∞. Thus, x2 + o(1). (13.1.25) 2Dh The term o(1) could be removed from (13.1.25) by slightly changing h > 1. Since by choosing a suitable δ one can make the ratio (α − 2)/(α − 2) in (13.1.22) arbitrarily close to 1, we can replace the upper bound for s2 in (13.1.22) by s2 (h − τ )/2, again slightly changing, if necessary, either h or τ (see the end of the proof of Theorem 4.1.2 for a remark concerning this observation). This completes the proof of the second assertion. ln P −
(iii) To prove the third assertion, which uses ‘individual’ conditions [U] for the distributions Fj , one bounds the quantities Rj (μ, y) separately for each j. The whole argument that we used above to derive ‘averaged’ bounds will remain valid, as in this case the uniformity condition [U] will be satisfied for ‘individual’ distributions. Further, the relations (13.1.16), (13.1.18) will also remain true provided that in them we replace α by α∗ = maxjn αj and put α = α∗ + δ. In the proof of the second assertion, one replaces α by α∗ = minjn αj and lets α = α∗ − δ. Thus the inequalities (13.1.23), (13.1.24) will hold true when (α∗
(h − τ )(α∗ − 2) 1 s2 . − 2) ln D 2(α∗ − 2)
The theorem is proved.
13.1.2 Upper bounds for the distributions of S n . The second version of the conditions Along with condition [ · , 0, owing to (13.1.27). Further, for the last term in (13.1.20), we find, for
(recall that x
√
((α + 2) ln D)−1 s2 s20 D), that
nV (x) cH(n)x−α cH(n)D −α
/2
,
x2 s2 (α − 2) ln D. Dh h
Therefore nV (x)ex
2
/Dh
2
cH(n)D s0 (α−2)/h−α
/2
→0
2
as n → ∞, when H(n) Dα/2−s0 (α−2) and δ is small enough. Together with (13.1.21), this demonstrates (13.1.25) and hence the second assertion of the theorem as well. The theorem is proved. It is not hard to see that the assertions of Corollaries 13.1.3 and 13.1.4 will remain true provided that in them we replace conditions [ · , ] and [U1 ] or conditions [ · , >] and [UR]. Then P(Sn x) nV (x)(1 + o(1)). Proof. The proof of the corollary is obvious. It follows from Corollary 13.1.9, √ because if the conditions [U1 ] (or [UR]), x2 Dn and u = o(x/ Dn ) are satisfied then we have x ∼ y and so nV (y) ∼ nV (x).
13.2 Asymptotics of the probability of the crossing of an arbitrary remote boundary Under somewhat excessive conditions, the desired asymptotics for the distributions of S n and Sn can be obtained from the bounds derived in § 13.1. As in our
previous exposition, we will say that the averaged distribution 1 Fj n j=1 n
F=
satisfies condition [ · , =]U ([ · , =]UR ) if it satisfies condition [ · , =] with the function V satisfying condition [U] ([UR]) of § 13.1. Recall also that condition [N] from § 13.1 has the following form: [N]
For some δ < min{1, α − 2} and γ 0, nV (tδ ) cD−γ , J(n, δ) = tα+δ δ
where tδ is from condition [U]. Theorem 13.2.1. Let Eξj = 0 and the averaged distribution F satisfy the condi√ tions [ · , =]U , [N] and x Dn ln Dn . Then P(Sn x) = nV (x)(1 + o(1)), P(S n x) = nV (x)(1 + o(1)).
(13.2.1)
The assertion remains true if we replace conditions [ · , =]U and [N] by condition [ · , =]UR . Under condition [UR] with H(n) = n (see p. 492), the asymptotic relation P(Sn x) ∼ nV (x) for x > cn was established in [218]. The assertion of Theorem 13.2.1 can be sharpened. Namely, one can find an explicit value of c such that (13.2.1) holds true for x = s (α − 2)D ln D, s c. To achieve this, one would need to do a more detailed analysis, similar to that in the proof of Theorem 4.4.4 (cf. Remark 4.4.2 on p. 197). The assertion of the theorem could be stated in a uniform version, like that of Theorem 4.4.1. This follows from the fact that all the upper and lower bounds obtained above are explicit and uniform. Moreover, Theorem 13.2.1 can be extended to the case of arbitrary boundaries. Then, however, it is more natural to use ‘individual’ conditions (see § 13.1), where each tail Fj,+ satisfies condition [ · , =]U . As before, let Gx,n be the class of boundaries {g(k)} for which minkn g(k) = cx, c > 0. The following analogue of Theorem 4.6.7 for the probability of the event ( ) Gn = max Sk − g(k) 0 kn
holds true. Theorem 13.2.2. Let Eξj = 0 and the distributions Fj satisfy condition [ · , =]U uniformly in j and n with a common α = αj , j = 1, . . . , n. Moreover, let the
497
13.2 Probability of the crossing of a remote boundary
averaged distribution F satisfy condition [N]. Then there exists a c1 < ∞ such √ that, for x > c1 D ln D, x → ∞, 3 n 4 P(Gn ) = Vj g∗ (j) (1 + o(1)) + O n2 V 2 (x) , (13.2.2) j=1
where g∗ (j) = minkj g(k) and the reminder terms o(·) and O(·) are uniform over the class Gx,n of boundaries and the class F of distributions {Fj } that satisfy conditions [ · , =]U and [N]. The above assertion remains true if conditions [ · , =]U and [N] are replaced by [ · , =]UR . Recall that when n → ∞ condition [U] can be replaced by a weaker condition, [U∞ ] (see Remark 13.1.1). It follows from (13.2.2) that when max g(k) < c1 x
(13.2.3)
kn
we have P(Gn ) ∼
n
Vj g∗ (j) .
(13.2.4)
j=1
Proof. The proof of Theorem 13.2.2 is similar to those of Theorems 3.6.4 (see p. 156) and 12.2.1, its argument repeating almost verbatim the respective arguments for those theorems (up to obvious amendments due to the fact that now dj = Eξj2 ). Therefore it will be omitted. We could also obtain here analogues of all main assertions of § 12.3 on the probability of the crossing of an arbitrary boundary on an unbounded time interval. We will briefly discuss them. Consider a class of boundaries {g(k)} of the form g(k) = x + gk ,
k = 1, 2, . . . ,
(13.2.5)
which are defined on the whole axis and lie between two linear functions: c1 ak − p1 x gk c2 ak + p2 x,
p1 < 1,
k = 1, 2, . . . ,
(13.2.6)
where the constants 0 < c1 c2 < ∞ and pi > 0, i = 1, 2, do not depend on the parameter of the triangular array scheme and the variable a ∈ (0, a0 ), a0 = const > 0, can tend to zero. We are interested in deriving asymptotic representations for P(Gn ) that (1) are uniform in a as a → 0, and (2) hold for n, growing faster than cx/a, in particular, for n = ∞.
Put n1 := x/a and introduce, as in § 12.3, averaged majorants V(1) (x) :=
n1 1 Vj (x) n1 j=1
(we assume for simplicity that x/a is an integer). We will need the homogeneity condition [H]
For nk = 2k−1 n1 and all k = 1, 2, . . . and t > 0, c(1) V(1) (t)
2nk 1 Fj,+ (t) c(2) V(1) (t), nk j=n k
c(2) c(1) 1 Dn1 Dnk Dn1 . n1 nk n1 The second line of inequalities basically means that the values Dn grow almost linearly with n. As before, let S n (a) = max(Sk − ak), kn
Bj (v) = {ξj < y + vaj},
η(x, a) = inf{k : Sk − ak x}, v > 0,
B(v) =
n *
Bj (v).
j=1
The following theorem bounds probabilities of the form P S n (a) x; B(v) , P S n (a) x and P ∞ > η(x, a) xt/a for n n1 = x/a. (For n n1 such bounds can be obtained from Theorem 13.2.2.) In what follows, by condition [U] we understand condition [U∞ ] (see Remark 13.1.1). Theorem 13.2.3. (i) Let the averaged distribution F(1) =
n1 1 Fj , n1 j=1
n1 = x/a,
(13.2.7)
satisfy conditions [ · ,
c| ln a| , a
v
1 , 4r
r=
we have the inequalities # $ r1 P S n (a) x; B(v) c1 n1 V(1) (x) , P S n (a) x c1 n1 V(1) (x),
x 5 > , y 2
(13.2.8) (13.2.9)
where r1 = r/[2(1 + vr)] and the constants c and c1 are defined in the proof.
Moreover, for any fixed or slowly enough growing t,

P(∞ > η(x, a) ≥ xt/a) ≤ c₂n₁V₍₁₎(x)t^{1−α}.    (13.2.10)
If t → ∞ together with x at an arbitrary rate then the inequality (13.2.10) will remain true provided that we replace the exponent 1−α on its right-hand side by 1 − α + ε, where ε > 0 is an arbitrary fixed number. (ii) Let the conditions of part (i) be satisfied when the range of x is c| ln a| , a and assume additionally that Dn → ∞ as n → ∞. Then, for any fixed h > 1 and all large enough n, Dn1 d := . P S n (a) x c1 e−xa/2dh , n1 x
η(x, a) c1 e−γt , (13.2.11) a where the constants γ and c1 can be found explicitly. The assertion of the theorem remains true if conditions [ · , =]U and [N] are replaced by [ · , =]UR . It follows from the theorem that there exist constants γ > 0 and c < ∞ such that, for n n1 = x/a and all x, one has ( ) x P S n (a) x c max e−γxa/d , V (x) . (13.2.12) a Proof. The proof of the theorem repeats those of Theorems 4.2.1 and 12.3.1. The only difference is that now for n n1 we use Theorem 13.1.2 (instead of Theorem 4.1.2, as was the case in § 4.2). In comparison with Theorem 12.3.1, the exponent of the product n1 V(1) (x) in (13.2.8) is different (the r0 in (12.3.6) is replaced by r1 = r0 /2 in (13.2.8)). This is due to the fact that, as in Theorem 4.2.1, we can only use inequality (13.1.6) of Theorem 13.1.2 with the exponent r/2 on its right-hand side if we ensure that θ r/2. This last inequality will hold if s2 = c
x2 > c3 n1 ln n1
(13.2.13)
with a suitable constant c3 . As in (4.2.8), one can verify that (13.2.13) will be true provided that c4 | ln a| c4 ln x x> or a > a x for a suitable c4 . The rest of the argument does not differ from that used to prove Theorems 4.2.1 and 12.3.1. The theorem is proved.
Now we return to considering boundaries of the more general form (13.2.5). Let ηg (x) := min{k : Sk − gk x}. Corollary 13.2.4. Let the conditions of Theorem 13.2.3 (i) and conditions (13.2.5), (13.2.6) be satisfied. Then, for t bounded or growing slowly enough, cxV(1) (x) 1−α xt P ∞ > ηg (x) t . (13.2.14) a a If t → ∞ at an arbitrary rate then the inequality (13.2.14) will remain true provided that we replace the exponent 1 − α on its right-hand side by 1 − α + ε, where ε > 0 is an arbitrary fixed number. Proof. The assertion of the corollary follows from Theorem 13.2.3 and the inequality (see (13.2.6)) xt xt P ∞ > η (1 − p1 )x, c1 a . P ∞ > ηg (x) a a
Observe that a similar corollary can be obtained under the conditions of Theorem 13.2.3(ii). Now put gj∗ := min gk kj
and identify the parameter of the triangular array scheme with a. It is not hard to see that {gj∗ } is non-decreasing and, together with {gj }, satisfies the inequalities (13.2.6) (cf. (12.3.10)). Theorem 13.2.5. Let Eξj = 0 and let the distributions Fj satisfy condition [ · , =]U uniformly in j and a, with a common α = αj , j = 1, 2, . . . Moreover, let conditions [H], (13.2.5) and (13.2.6) be met and the averaged distribution F(1) (see (13.2.7)) with n = n1 = x/a satisfy condition [N]. Further, let x > c| ln a|/a for a suitable c (see the proof of Theorem 13.2.3). Then ∞
∗ P sup(Sk − gk ) x = Vj (x + gj ) (1 + o(1)), (13.2.15) k0
j=1
where the remainder term o(·) is uniform in a and also over the classes of distributions {Fj } and boundaries {gk } that satisfy the conditions of the theorem. The assertion of the theorem remains true if we replace conditions [ · , =]U and [N] by [ · , =]UR . Proof. The proof of the theorem repeats, with obvious amendments, the proof of Theorem 12.3.4.
If we make the homogeneity conditions for the tails and boundaries somewhat stronger then it is possible to obtain a simpler representation for the right-hand side of (13.2.15). Consider the following condition. [HΔ ]
Let condition [H] hold and, for any fixed Δ > 0 and 9 : xΔ = n1 Δ , nΔ := a
let there exist a g > 0 and majorants Vj such that uniformly in k one has 1 nΔ
(k+1)nΔ
Vj (x) ∼ V(1) (x),
j=knΔ +1
Dn 1 1 D(k+1)nΔ − DknΔ ∼ , nΔ n1 1 g(k+1)nΔ − gknΔ ∼ ga nΔ as n → ∞. Corollary 13.2.6. Let the averaged distribution F(1) (see (13.2.7)) satisfy conditions [ · , =]U and [N] with n = n1 . Moreover, assume that x > c| ln a|/a, condition [HΔ ] is met and, for definiteness, g1 = o(x). Then ∞
xV(1) (x) 1 . P sup(Sk − gk ) x ∼ V(1) (u)du ∼ ga ga(α − 1) k0
x
The above assertion remains true if conditions [ · , =]U and [N] are replaced by [ · , =]UR . Proof. The proof of the corollary repeats that of Corollary 12.3.5. Corollary 13.2.6 implies the following. Corollary 13.2.7. Let ξ1 , ξ2 , . . . be i.i.d. r.v.’s (Fj = F, Vj = V , dj = d < ∞) and let condition [ · , =]U and condition [N] with n = n1 = x/a be satisfied. Then, if x > c| ln a|/a (see Theorem 13.2.3) we have xV (x) P S(a) x ∼ . a(α − 1) The assertion remains true if conditions [ · , =]U and [N] are replaced by condition [ · , =]UR . It is clear that the assertions of Corollaries 13.2.6 and 13.2.7 can be stated in a uniform version, as in Theorem 13.2.5.
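In Corollaries 13.2.6 and 13.2.7 the answer is just the integrated tail: for a pure power tail V(u) = u^{−α} the relation xV(x)/(a(α − 1)) = a^{−1}∫_x^∞ V(u) du holds exactly, as the small sketch below (ours, not from the original text; the values of α, a, x are arbitrary) confirms numerically.

```python
# Illustrative check: for V(u) = u**(-alpha),
# (1/a) * int_x^inf V(u) du equals x * V(x) / (a * (alpha - 1)).

def tail_integral(V, x, upper=1e6, n=1_000_000):
    # midpoint rule on [x, upper]; the truncated tail beyond `upper` is negligible here
    h = (upper - x) / n
    return sum(V(x + (j + 0.5) * h) for j in range(n)) * h

alpha, a, x = 3.0, 0.1, 50.0
V = lambda u: u ** (-alpha)
lhs = tail_integral(V, x) / a
rhs = x * V(x) / (a * (alpha - 1.0))
print(lhs, rhs)
```

With a genuinely slowly varying factor in V the two sides agree only asymptotically, which is why the corollaries state an equivalence rather than an identity.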
13.3 The invariance principle. Transient phenomena
There is no need for us to go into much detail in this section, since the invariance principle and transient phenomena have already been well covered in the existing literature (the single reservation being that transient phenomena have been studied for i.i.d. jumps only). We will review here the main results (for completeness of exposition) and give brief explanations in those cases where the results are extended.
13.3.1 The invariance principle As before, let ξ1 , ξ2 , . . . , ξn be independent r.v.’s in the triangular array scheme, Eξj = 0,
dj = Eξj2 < ∞,
Dn =
n
dj .
j=1
√ In this case, the convergence of the distribution of Sn / Dn to the limiting normal law is determined by the Lindeberg condition: [L]
For any fixed τ > 0, as n → ∞, n $ 1 # 2 E ξj ; |ξj | > τ Dn → 0 Dn j=1
(see e.g. § 4, Chapter 8 of [49] or § 4, Chapter VIII of [122]). To ensure that the processes Snt ζn (t) := √ , Dn
t ∈ [0, 1],
(13.3.1)
converge to the standard Wiener process w(·), one also needs the homogeneity conditions [HD Δ]
For any fixed Δ > 0, nΔ = Δn and all k 1/Δ, Dn 1 D(k+1)nΔ − DknΔ ∼ nΔ n
as
n → ∞.
Let C(0, T ) be the space of continuous functions on [0, T ], endowed with the uniform metric. The trajectories of ζn (·) can be considered as elements of the space D(0, 1) of functions without discontinuities of the second kind. Theorem 13.3.1. Let conditions [L] and [HD Δ ] be satisfied. Then, for any measurable functional f on D(0, 1) that is continuous in the uniform metric at the points of the space C(0, 1), we have the weak convergence of distributions as n → ∞: f (ζn ) ⇒ f (w), where w is the standard Wiener process.
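Theorem 13.3.1 can be illustrated with the functional f(ζ) = max_{t≤1} ζ(t): by the reflection principle, P(max_{t≤1} w(t) ≤ z) = 2Φ(z) − 1 for the standard Wiener process. The sketch below is ours; the alternating jump variances are an arbitrary choice under which conditions [L] and [H^D_Δ] clearly hold.

```python
# Monte Carlo sketch (illustrative) of the invariance principle for the maximum functional.
import math
import numpy as np

rng = np.random.default_rng(1)
n, n_paths, chunk, z = 400, 50_000, 5_000, 1.0

# non-identically distributed jumps: centred normal with alternating variances 0.5 and 1.5
sd = np.sqrt(np.where(np.arange(n) % 2 == 0, 0.5, 1.5))
Dn = float(np.sum(sd ** 2))

count = 0
for _ in range(n_paths // chunk):
    s = np.cumsum(rng.standard_normal((chunk, n)) * sd, axis=1)
    count += np.count_nonzero(s.max(axis=1) / math.sqrt(Dn) <= z)

emp = count / n_paths
limit = 2.0 * (0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))) - 1.0   # 2*Phi(z) - 1
print(emp, limit)
```

The small positive bias of the empirical value reflects the O(n^{−1/2}) discretization of the maximum.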
Along with the process (13.3.1), one often considers a continuous process ζ!n (·) defined as a polygon with nodes at the points k Sk , k = 1, . . . , n. ,√ n Dn In this case, the assertion of Theorem 13.3.1 can be formulated as the weak convergence of the distributions of ζ!n (·) and w(·) in the metric space C(0, 1) endowed with the σ-algebra of Borel sets (which coincides with the σ-algebra generated by cylinders). Proof. The proof of Theorem 13.3.1 follows the standard path: one needs to establish the convergence of finite-dimensional distributions and the compactness of the family of distributions of ζn (·) (see e.g. [28, 44]). It will be convenient for us to rewrite the assertion of Theorem 13.3.1 in a somewhat different form. Let the totality of the independent r.v.’s ξ1 , ξ2 , . . . , ξn be extended (or shortened) to the sequence ξ1 , ξ2 , . . . , ξnT , T > 0, and let this new sequence also satisfy conditions [L] and [HD Δ ] (in condition [L] we sum up D the first nT r.v.’s, and in condition [HΔ ] the value k varies from 1 to T /Δ). The process ζn (·) (see (13.3.1)) will now be defined on the segment [0, T ]. Corollary 13.3.2. If the totality of independent r.v.’s ξ1 , . . . , ξnT satisfies conditions [L] and [HD Δ ] then, for any measurable functional f on D(0, T ) that is continuous in the uniform metric at the points of C(0, T ), we have the weak convergence of distributions as n → ∞: f (ζn ) ⇒ f (w), where w is the standard Wiener process on [0, T ].
13.3.2 Transient phenomena The essence of transient phenomena was described in sufficient detail in § 12.5. Here there is no real difference. The main result for i.i.d. summands ξj has already been given in § 12.5 (see (12.5.1)): for any z 0,
lim_{a→0} P(aS̄_n(a) ≥ z) = P(max_{t≤T} (√d · w(t) − t) ≥ z),    (13.3.2)
where w is the standard Wiener process, n = Ta^{−2} and d = lim_{a→0} Eξ_j² (see e.g. § 25, Chapter 4 of [42] or [165, 231]). For n = ∞ (T = ∞) the right-hand side of (13.3.2) is equal to e^{−2z/d}. Below we will extend these results to the case of non-identically distributed independent jumps ξ_j in the triangular array scheme. As before, let F₍₁₎ be the averaged distribution
F₍₁₎ = (1/n₁) Σ_{j=1}^{n₁} F_j   with n₁ = a^{−2}.
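The limit (13.3.2) with T = ∞ can be probed directly: for jumps of variance d and small a, the normalized supremum aS̄(a) satisfies P(aS̄(a) ≥ z) ≈ e^{−2z/d}. The sketch below is ours; the normal jumps, the truncation horizon n and the loose tolerance for the O(a) correction are arbitrary choices.

```python
# Monte Carlo sketch (illustrative) of the transient limit e**(-2z/d) for T = infinity.
import numpy as np

rng = np.random.default_rng(2)
a, d, z = 0.05, 1.0, 0.1
n, n_paths, chunk = 5_000, 4_000, 500   # n >> d/a**2, so the supremum is effectively over all k

drift = -a * np.arange(1, n + 1)
count = 0
for _ in range(n_paths // chunk):
    s = np.cumsum(rng.standard_normal((chunk, n)) * np.sqrt(d), axis=1)
    count += np.count_nonzero(a * np.max(s + drift, axis=1) >= z)

emp = count / n_paths
theory = float(np.exp(-2.0 * z / d))
print(emp, theory)
```

For this value of a the empirical probability sits a few percent below the limit, in line with the known first-order (overshoot) correction; the gap shrinks as a → 0.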
Theorem 13.3.3. Let condition [L] be met for n = n1 T for any fixed T, and Dn →d n
as
n → ∞.
(13.3.3)
Further, let the averaged distribution F(1) satisfy conditions [ · , 2, n1 1 E|ξj |α < c < ∞ n1 j=1
(13.3.5)
implies conditions [ 2, we have 2 2 P GT Jy,0 = O T V (x) , P Jy,2 = O T V (x) . It remains to consider the term P GT Jy,1 . Denote by J(dt, dv) the event that on the interval [0, T ] the trajectory {S(t)} has just one jump of size ζ y and, moreover, that ζ ∈ (v, v + dv), the time of the jump belonging to (t, t + dt). It is clear that, for v y > 1, P J(dt, dv) = e−tG+ (y) dt G(dv)e−(T −t)G+ (y) = e−T G+ (y) dt G(dv) and that
T ∞
P GT Jy,1 = 0
P GT J(dt, dv) .
(15.3.11)
y
Further, observe that, as the process {S(t)} is stochastically continuous, the distributions of S(t − 0) and S(t) and also those of S(t − 0) and S(t) will coincide. Next note that S 2). Then, as x → ∞, uniformly in all T, P(S(T, a) x) T = 0
k Li (x + au) G+ (x + au) 1 + i!(x + au)i i=1
% &" i i ES l (u) ES i−l (T − u, a) du × ES i (T − u, a) + l l=2 + O mG+ (x)(mG(x) + xG(x)) + o xG+ (x)q(x) , := max{G+ (t), W (t)} and W (t) is the r.v.f. domwhere m := min{T, x}, G(t) inating G− (t). All the remarks following Theorem 4.6.1 and an analogue of Corollary 4.6.3 will remain true in the present situation. The proofs of these assertions are similar to the arguments presented in § 4.6 (up to obvious changes, illustrated by the proof of Theorem 15.3.10). Note, however, that while finding the asymptotics of P(S(∞, a) x) it would seem to be more natural to use the approaches presented in § 7.5, where we studied the asymptotics of P(S(a) x) for random walks under conditions broader than those in Chapters 3–5. As we have observed already in §§ 15.1 and 15.2, when studying large deviation probabilities for processes with independent increments with ‘heavy’ tails, the component {S[0] (t)} of the process (see (15.3.1)) that corresponds to the first three terms in the representation (15.1.2) does not affect the asymptotics of the desired distributions, because it does not influence the behaviour of the tails of the measures F and G at infinity. This allows one, in particular, to reduce the problem on the first-order asymptotics for P(S(∞, a) x) to the corresponding problem for random walks in discrete time considered in Chapters 2–5. This is due to the fact that for compound Poisson processes (corresponding to the fourth term in (15.1.2)) such a reduction is quite simple. Indeed, let {S(t)} be a compound Poisson process with jump intensity Θ and jump distribution P(ζj ∈ dz) = GΘ (dz). The jump epochs tk in the process
Extension to processes with independent increments

{S(t)} have the form t_k = Σ_{j=1}^k τ_j, where the r.v.'s τ_j are independent and P(τ_j ≥ t) = e^{−Θt}, t > 0. Clearly,

S(∞, a) ≡ sup_{t≥0} (S(t) − at) = sup_{k≥0} (S(t_k) − at_k) = S* := sup_{k≥0} S_k*,
where Sk∗ :=
k0
k
∗ ∗ j=1 ξj , ξj
k0
:= ζj − aτj and the r.v.’s τj , ζj are independent. If
a∗ := Eξi∗ = Eζ − aΘ−1 < 0 then the first- and second-order asymptotics of the probability P(S(∞, a) x) = P(S ∗ x) can be found with the help of Theorems 7.5.1 and 7.5.8, where we should replace ξ by ξ ∗ and V (t) by ∗
∞
∗
V (t) := P(ξ t) = Θ
e−Θu GΘ,+ (t + au) du ∼ GΘ,+ (t)
0
(the last relation holds when GΘ,+ (t) := GΘ,+ ([t, ∞)) is an l.c. function), so that ∞ ∞ 1 1 GΘ,+ (u) du = G+ (u) du. P(S(∞, a) x) ∼ ∗ |a | |ES(1) − a| x
x
That the last answer has this particular integral form is in no way related to the assumption that {S(t)} is a compound Poisson process; it will clearly remain true for more general processes with independent increments. In a somewhat formal way, the same assertion could be obtained by, for example, using approaches from § 15.2. The above reduction to discrete-time random walks (when studying the distribution of S(∞, a)) will be carried out in a more general case (for generalized renewal processes) in § 16.1.
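The reduction just described is immediate to set up numerically. In the sketch below (ours, not from the original text; the Pareto jump law, intensity Θ and drift a are arbitrary choices) we form ξ_j* = ζ_j − aτ_j and check the mean a* = Eζ − aΘ^{−1} together with the tail equivalence V*(t) ∼ G_{Θ,+}(t).

```python
# Sketch (illustrative) of the compound-Poisson-to-random-walk reduction.
import numpy as np

rng = np.random.default_rng(3)
alpha, theta, a, N = 3.0, 2.0, 4.0, 1_000_000

zeta = (1.0 - rng.random(N)) ** (-1.0 / alpha)   # jumps: P(zeta >= t) = t**(-alpha), t >= 1
tau = rng.exponential(1.0 / theta, N)            # inter-jump times, intensity theta
xi_star = zeta - a * tau                         # increments of the embedded walk S*_k

a_star = alpha / (alpha - 1.0) - a / theta       # E zeta - a/theta; negative, so sup_k S*_k is finite
print(xi_star.mean(), a_star)

t = 20.0
ratio = np.mean(xi_star >= t) / t ** (-alpha)    # approaches 1 as t grows: V*(t) ~ G_{Theta,+}(t)
print(ratio)
```

With a* < 0 the supremum S* is a.s. finite, and Theorems 7.5.1 and 7.5.8 then yield the stated integrated-tail asymptotics for P(S* ≥ x).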
16 Extension of the results of Chapters 3 and 4 to generalized renewal processes
16.1 Introduction

In the present chapter we continue carrying over the results of Chapters 3 and 4 to continuous-time random processes. Here we will deal with generalized renewal processes (GRPs), which are defined as follows. Let τ, τ_1, τ_2, … be a sequence of i.i.d. positive r.v.’s with a finite mean a_τ := Eτ. Put
\[
t_0 := 0, \qquad t_k := \tau_1 + \dots + \tau_k, \quad k = 1, 2, \dots,
\]
and let
\[
\nu(t) := \sum_{k=1}^{\infty} \mathbf{1}(t_k \le t), \qquad t \ge 0,
\]
be the (right-continuous) renewal process generated by that sequence. Recall the obvious relation {ν(t) < k} = {t_k > t}, which will often be used in what follows, and note that ν(t) + 1 is the first hitting time of the level t by the random walk {t_k, k ≥ 0}: ν(t) + 1 = min{k ≥ 1 : t_k > t}. Further, suppose that
\[
S_0 = 0, \qquad S_k = \xi_1 + \dots + \xi_k, \quad k \ge 1,
\]
is a random walk generated by a sequence of i.i.d. r.v.’s ξ, ξ_1, ξ_2, …, which is independent of {τ_i; i ≥ 1}.

Definition 16.1.1. A continuous-time process
\[
S(t) := S_{\nu(t)} + qt, \qquad t \ge 0,
\]
where q is a real number, is called a generalized renewal process with linear drift q.
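In concrete terms the definition is straightforward to implement; a minimal sketch (the numerical values below are illustrative only) builds ν(t) and S(t) from given renewal data:

```python
from bisect import bisect_right
from itertools import accumulate

def make_grp(taus, xis, q):
    """Return the pair (nu, S) with S(t) = S_{nu(t)} + q t.

    taus : inter-renewal times tau_1, tau_2, ...
    xis  : jumps xi_1, xi_2, ... (same length)
    q    : linear drift coefficient
    """
    t_k = list(accumulate(taus))          # renewal epochs t_1 < t_2 < ...
    S_k = [0.0] + list(accumulate(xis))   # S_0 = 0, S_k = xi_1 + ... + xi_k

    def nu(t):
        # number of epochs t_k <= t (right-continuous counting process)
        return bisect_right(t_k, t)

    def S(t):
        return S_k[nu(t)] + q * t

    return nu, S

# Deterministic illustration
nu, S = make_grp(taus=[1.0, 0.5, 2.0], xis=[3.0, -1.0, 4.0], q=0.5)
assert nu(0.9) == 0 and nu(1.0) == 1 and nu(1.6) == 2
assert S(0.9) == 0.45             # no renewals yet: only the drift q*t
assert S(1.6) == 3.0 - 1.0 + 0.8  # S_2 + q * 1.6
```

Between renewal epochs the process moves along straight lines with slope q, and at each epoch t_k it jumps by ξ_k; this picture underlies all the heuristics of this chapter.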
Note that in the previous chapter, which was devoted to processes with independent increments, we have already considered an important special case of GRPs, that of compound Poisson processes. Our main aim will be to approximate probabilities of the form
\[
G_T := \Bigl\{\sup_{t \le T}\,(S(t) - g(t)) \ge 0\Bigr\} \qquad (16.1.1)
\]
for sufficiently ‘high’ boundaries {g(t); t ∈ [0, T]} (assuming, without loss of generality, that T ≥ 1; the case T → 0 is trivial). The most important special cases of such events G_T are, for sufficiently large x, the events
\[
\{S(T) \ge x\} \quad\text{and}\quad \{\overline S(T) \ge x\}, \qquad (16.1.2)
\]
where \overline S(T) := \sup_{t \le T} S(t). Regarding the distributions F and F_τ of the r.v.’s ξ and τ respectively, we will assume satisfied a number of conditions, mostly expressed in terms of the asymptotic behaviour at infinity of their tails
\[
F_{+}(t) = P(\xi \ge t), \qquad F_{-}(t) = P(\xi < -t), \qquad F_{\tau}(t) := P(\tau \ge t), \qquad t > 0.
\]
As in our previous exposition, the conditions F_+ ∈ R and F_τ ∈ R mean, respectively, that
\[
F_{+}(t) = V(t) := t^{-\alpha} L(t), \qquad \alpha > 1, \quad L \text{ is an s.v.f.}, \qquad (16.1.3)
\]
\[
F_{\tau}(t) = V_{\tau}(t) := t^{-\gamma} L_{\tau}(t), \qquad \gamma > 1, \quad L_{\tau} \text{ is an s.v.f.} \qquad (16.1.4)
\]
The problem on the asymptotic behaviour of the probabilities P(G_T) is a generalization of similar problems for random walks {S_k} with regularly varying tails, which were studied in detail in Chapters 3 and 4. The asymptotic behaviour of the large deviation probabilities for GRPs has been studied less extensively, although such processes, like random walks, are widely used in applications. One of the most important applications is the famous Sparre Andersen model in risk theory [257, 268, 9]. Throughout the chapter we will be using the notation
\[
a_{\xi} := \mathbf{E}\xi, \qquad a_{\tau} := \mathbf{E}\tau, \qquad H(t) := \mathbf{E}\nu(t), \qquad t \ge 0,
\]
so that H(t) is the renewal function for the sequence {t_k; k ≥ 0}. In [188], an analogue of the uniform representation (4.1.2) was given for GRPs under the additional assumptions that, apart from condition (16.1.3) on the distribution tail of a_τ ξ − a_ξ τ, one also has E|ξ|^{2+δ} < ∞ and Eτ^{2+δ} < ∞ for some δ > 0 and that the distribution F_τ has a bounded density. In [169], in the case q = 0 the authors established the relation
\[
P\bigl(S(T) - \mathbf{E}S(T) \ge x\bigr) \sim H(T)\,V(x) \quad\text{as}\quad T \to \infty, \quad x \ge \delta T, \qquad (16.1.5)
\]
for any fixed δ > 0, under the assumption that the distribution tail of the r.v. ξ ≥ 0 satisfies the condition of ‘extended regular variation’ (see § 4.8) and that, for the
process {ν(t)} (which in [169] can be of a more general form than a renewal process), the following condition holds: for some ε > 0 and c > 0,
\[
\sum_{k > (1+\varepsilon)T/a_{\tau}} e^{ck}\, P(\nu(T) > k) \to 0 \quad\text{as}\quad T \to \infty. \qquad (16.1.6)
\]
The paper [169] also contains sufficient conditions for (16.1.6) to hold for renewal processes: Eτ² < ∞ and P(τ ≤ t) ≥ 1 − e^{−bt}, t > 0, for some b > 0, i.e. an exponentially decaying tail of τ (Lemma 2.3 of [169]). It was shown in [262] that condition (16.1.6) is always met for renewal processes with a_τ < ∞, without any additional assumptions on the distribution F_τ. Observe also that the last fact immediately follows from inequality (16.2.13) below, in the proof of Lemma 16.2.8(i).

In this chapter, we will not only extend the zone of x values for which the relation (16.1.5) holds true (and this will be done for arbitrary q) but also establish the exact asymptotic behaviour of the probabilities P(G_T) for a much wider class of events G_T of the form (16.1.1).

First of all, note that the problem on the asymptotics of P(S(∞) ≥ x) as x → ∞ can be reduced to that on the asymptotics of P(S ≥ x) for the maxima S = sup_{k≥0} S_k of ordinary random walks, considered in Chapters 3 and 4. This follows from the observation that, for q ≤ 0, one has
\[
S(\infty) = \sup_{k \ge 0}\,(S_k + q t_k) =: Z \qquad (16.1.7)
\]
and, for q > 0,
\[
S(\infty) = \sup_{k \ge 1}\,(S_{k-1} + q t_k) = q\tau_1 + \sup_{k \ge 1}\,[S_{k-1} + q(t_k - \tau_1)] \stackrel{d}{=} q\tau + Z, \qquad (16.1.8)
\]
where the r.v.’s τ and Z are independent and Z = \sup_{k\ge 0} Z_k for the random walk Z_k := \sum_{j=1}^{k} \zeta_j generated by the i.i.d. r.v.’s ζ_j := ξ_j + qτ_j. These representations imply the following result.

Theorem 16.1.2. Let a := Eζ = a_ξ + qa_τ < 0. Then the following assertions hold true.
I. If q ≤ 0 and F_+ ∈ R then
\[
P(S(\infty) \ge x) \sim \frac{1}{|a|}\int_{x}^{\infty} F_{+}(t)\,dt \sim \frac{x V(x)}{(\alpha - 1)\,|a|}. \qquad (16.1.9)
\]
II. If q > 0 and one of the following three conditions is met:
(i) F_+ ∈ R and F_τ(t) = o(V(t)) as t → ∞;
(ii) F_+ ∈ R and F_τ ∈ R;
(iii) F_τ ∈ R and F_+(t) = o(V_τ(t)) as t → ∞,
then
\[
P(S(\infty) \ge x) \sim \frac{1}{|a|}\int_{x}^{\infty} \bigl(F_{+}(t) + F_{\tau}(t/q)\bigr)\,dt. \qquad (16.1.10)
\]
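For the pure power tail V(t) = t^{−α} the integral asymptotics behind (16.1.9) are exact: ∫_x^∞ t^{−α} dt = x^{1−α}/(α − 1) = xV(x)/(α − 1). The sketch below (plain Python; the parameters are illustrative) confirms this by a direct quadrature on a logarithmic grid:

```python
import math

def tail_integral(alpha, x, upper=1e8, n=200_000):
    # Trapezoidal quadrature of int_x^upper t^(-alpha) dt after the
    # substitution t = e^u, under which the integrand becomes t^(1 - alpha).
    lo, hi = math.log(x), math.log(upper)
    h = (hi - lo) / n
    g = lambda u: math.exp(u) ** (1 - alpha)
    total = 0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, n))
    return total * h

alpha, x = 2.5, 10.0
numeric = tail_integral(alpha, x)
karamata = x * x ** (-alpha) / (alpha - 1)   # x V(x) / (alpha - 1)
assert abs(numeric - karamata) / karamata < 1e-6
```

For a general V ∈ R the equality becomes the asymptotic equivalence of Karamata's theorem; the power tail merely makes the answer closed-form.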
Note that, in cases II(i) and II(iii), the second and the first summands respectively in the integrand in (16.1.10) become negligibly small and so can be omitted. Observe also that the first relation in (16.1.9) was obtained in [115].

We will need the following well-known assertion, whose first part is a consequence of Theorem 12, Chapter 4 of [42] (see also Corollary 3.6.3), while the second follows from the main theorem of [178].

Theorem 16.1.3. If F_+ ∈ R and a_ξ < 0 then, as x → ∞,
\[
P(S \ge x) \sim \frac{1}{|a_{\xi}|}\int_{x}^{\infty} V(t)\,dt \sim \frac{x V(x)}{(\alpha - 1)\,|a_{\xi}|} \qquad (16.1.11)
\]
and, moreover,
\[
P(\overline S_n \ge x) \sim \frac{1}{|a_{\xi}|}\int_{x}^{x + n|a_{\xi}|} V(t)\,dt \sim \frac{x V(x) - (x + n|a_{\xi}|)\,V(x + n|a_{\xi}|)}{(\alpha - 1)\,|a_{\xi}|} \qquad (16.1.12)
\]
uniformly in n ≥ 1.

Note that the first asymptotic relation in (16.1.11) was established in [42] in a more general case (later on, it was shown in [275, 115] that a sufficient condition for this relation is that F_+^I is the tail of a subexponential distribution; the necessity of this condition was proved in [177]), while the first relation in (16.1.12) was obtained in [178] for so-called strongly subexponential distributions. The second relations in each of the formulae (16.1.11), (16.1.12) are consequences of (16.1.3) and Theorem 1.1.4(iv).

Proof of Theorem 16.1.2. I. In the case q ≤ 0 one has (16.1.7), the tail F_{ζ,+} of the distribution F_ζ of ζ := ξ + qτ being asymptotically equivalent to the tail F_+ = V ∈ R:
\[
F_{\zeta,+}(t) = \int_{0}^{\infty} V(t - qu)\,dF_{\tau}(u) = V(t)\Biggl[\,\int_{0}^{M} \frac{V(t - qu)}{V(t)}\,dF_{\tau}(u) + \theta F_{\tau}(M)\Biggr], \qquad 0 < \theta < 1,
\]
where, for M = M (t), increasing to infinity slowly enough as t → ∞, the expression in the square brackets converges to 1 by the uniform convergence
theorem for r.v.f.’s (Theorem 1.1.2). Hence (16.1.9) follows immediately from (16.1.11) with a_ξ replaced in it by a.
II. When q > 0 one has the representation (16.1.8). In case (i) it is easily seen, cf. (1.1.39)–(1.1.42), that
\[
F_{\zeta,+}(t) = P(\xi + q\tau \ge t) \sim F_{+}(t), \qquad (16.1.13)
\]
so that, by virtue of (16.1.11),
\[
P(Z \ge x) \sim \frac{1}{|a|}\int_{x}^{\infty} F_{\zeta,+}(t)\,dt. \qquad (16.1.14)
\]
Therefore the tail of Z is also an r.v.f. and, moreover, F_τ(t) = o(P(Z ≥ t)). Hence, cf. (16.1.13), we obtain that P(qτ + Z ≥ x) ∼ P(Z ≥ x), which coincides in this case with (16.1.10) by virtue of (16.1.13), (16.1.14) (see also Chapter 11).
In case (ii), again using calculations similar to those in (1.1.39)–(1.1.42), we have F_{ζ,+}(t) ∼ V(t) + V_τ(t/q), so that F_{ζ,+} ∈ R and therefore, as in case (i),
\[
P(Z \ge x) \sim \frac{1}{|a|}\int_{x}^{\infty} F_{\zeta,+}(t)\,dt \sim c\,x\,F_{\zeta,+}(x)
\]
and P(qτ + Z ≥ x) ∼ P(Z ≥ x). Case (iii) is considered similarly to cases (i) and (ii). The theorem is proved.

For T < ∞ such a simple reduction of the problem on the asymptotic behaviour of P(\overline S(T) ≥ x) to the respective results for random walks is impossible, and so we will devote a special section (§ 16.5) to studying the asymptotics of P(G_T) for linear boundaries in the whole spectrum of deviations.

A possible way of carrying out the asymptotic analysis of probabilities of the form P(G_T) (in the first place, for the events (16.1.2)) for a GRP is to use the decomposition
\[
P(G_T) = \sum_{n=0}^{\infty} P\bigl(G_T \mid \nu(T) = n\bigr)\, P(\nu(T) = n). \qquad (16.1.15)
\]
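When {ν(t)} is a Poisson process (exponential τ, the compound Poisson case of the previous chapter), the decomposition can be evaluated directly. The sketch below uses a hypothetical two-point jump distribution, chosen only so that P(S_n + qT ≥ x) has a closed binomial form, and sums the series term by term:

```python
import math

def p_ST_geq_x(x, T, q, p_up=0.5, lam=1.0, n_max=60):
    """P(S(T) >= x) via the decomposition over {nu(T) = n}, for a GRP with
    Poisson(lam) renewals and jumps xi = +1 w.p. p_up, -1 otherwise."""
    total = 0.0
    for n in range(n_max + 1):
        w = math.exp(-lam * T) * (lam * T) ** n / math.factorial(n)
        thr = x - q * T
        # partial factorization: P(G_T | nu(T) = n) = P(S_n + qT >= x),
        # and here S_n = 2K - n with K ~ Binomial(n, p_up)
        p_n = sum(math.comb(n, k) * p_up ** k * (1 - p_up) ** (n - k)
                  for k in range(n + 1) if 2 * k - n >= thr)
        total += w * p_n
    return total

probs = [p_ST_geq_x(x, T=5.0, q=0.0) for x in (0, 2, 4)]
assert all(0.0 <= p <= 1.0 + 1e-12 for p in probs)
assert probs[0] >= probs[1] >= probs[2]   # the tail is non-increasing in x
```

The truncation at n_max is harmless here since the Poisson(λT) mass beyond it is negligible; for the event {S(T) ≥ x} the conditional probability indeed depends on the renewal trajectory only through ν(T), which is the partial factorization discussed next.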
If, on the set {ν(T) = n}, the conditional probability P(G_T | ν(t), t ≤ T) does not depend on the behaviour of the trajectory of the renewal process {ν(t)} inside the interval [0, T], i.e.
\[
P\bigl(G_T \mid \nu(T) = n\bigr) = P\bigl(G_T \mid \{\nu(t);\ t \le T\},\ \nu(T) = n\bigr)
\]
(which holds, for example, for the event G_T = {S(T) ≥ x}, when one has P(G_T | ν(T) = n) = P(S_n + qT ≥ x)), then we will say that partial factorization takes place, since in this case the problem reduces to studying the processes {S_n} and {ν(t)} separately. It then turns out that the asymptotic behaviour of P(G_T) (including some asymptotic expansions) can be derived from the known results for random walks presented in Chapters 3 and 4 and from some bounds for renewal processes.

In the general case, however, one may need to employ another approach, which directly follows the basic scheme for studying random walks with regularly varying jump distribution tails developed in Chapters 3 and 4. Namely, along with ‘truncated versions’ of the jumps ξ_i, we will now truncate the renewal-interval lengths τ_i. The main contribution to the probability of the event G_T will again be due to trajectories containing exactly one large jump, but now the latter can comprise not only a large ξ_i but also a large τ_i (when q > 0).

The asymptotic behaviour of the probabilities of the events (16.1.2) is mostly determined by simple relations between the mean values a_ξ and a_τ, the linear drift coefficient q and the quantities x and T. Since the mean number of jumps per time unit is equal to 1/a_τ, the rate of the mean trend in the process {S(t)} equals a/a_τ, where a := a_ξ + qa_τ is the mean trend of the GRP per renewal interval. Therefore the event {S(T) ≥ x}, say, will be a large deviation when x is much greater than aT/a_τ. To simplify the exposition, we will assume here and in what follows that the mean trend of the process S(t) per renewal interval is equal to zero:
\[
a := a_{\xi} + q a_{\tau} = 0. \qquad (16.1.16)
\]
It is clear that, when considering boundary-crossing problems, such an assumption does not restrict generality. Indeed, let {S(t)} be a general GRP with a linear drift. For this process, the event G_T will clearly coincide with the event G_T^0 = {sup_{t≤T}(S^0(t) − g^0(t)) > 0} of the same type, but for the ‘centred’ process S^0(t) := S(t) − (a/a_τ)t with zero mean trend and for the boundary g^0(t) := g(t) − (a/a_τ)t. In particular, the event {S(T) ≥ x} will become, for the new process {S^0(t)}, the event of the crossing of a linear boundary of the form x − (a/a_τ)t.

Next we will discuss briefly how the nature of the asymptotics of P(S(T) ≥ x) (the simplest of the probabilities under consideration) depends on the value of a_ξ, under the assumption (16.1.16).

First let a_ξ ≥ 0 (and therefore q ≤ 0 owing to (16.1.16)). Clearly, starting from the ‘vicinity of zero’ at a time t < T and moving along a straight line with slope coefficient q, at time T we will be even further below the (high) level x. This means that the occurrence of the event {S(T) ≥ x}, whether in the presence or absence of long renewal intervals (it is during these intervals that the process S(t) moves along straight lines with slope coefficient q ≤ 0), will be very unlikely in the absence of large jumps ξ_i. In this case, the asymptotics of the large deviation probabilities will be similar to those established for ordinary random walks: in
the respective zone of x values and under the assumption that F_+ ∈ R, one has
\[
P(S(T) \ge x) \sim H(T)\,V(x), \qquad x \to \infty \qquad (16.1.17)
\]
(recall that H(T) is simply the mean number of jumps in the process {S(t)} during the time interval [0, T], so that (16.1.17) is a natural generalization of the asymptotic relation
\[
P(S_n \ge x) \sim n V(x) \qquad (16.1.18)
\]
for random walks). If T → ∞ then, by the renewal theorem, the relation (16.1.17) clearly becomes
\[
P(S(T) \ge x) \sim \frac{T}{a_{\tau}}\,V(x) \qquad (16.1.19)
\]
for an appropriate range of values x → ∞.

If a_ξ < 0 (so that q > 0 owing to (16.1.16)) then we have to distinguish between the two cases qT ≤ x and qT > x. In the first case, for the event {S(T) ≥ x} to occur a large jump ξ_j is necessary (with a dominating probability). Moreover, it turns out that, while the relations (16.1.17)–(16.1.19) still hold true when x − qT > δT (δ > 0 is fixed), in the ‘threshold’ situation when o(T) = x − qT → ∞ the presence of a long renewal interval could make it much easier for the level x to be reached (it would then suffice for the ‘large’ jump ξ_j to have a substantially smaller value). This could be reflected in the appearance of an additional term on the right-hand side of (16.1.19).

In the second case, one has in any case to take into account the possibility of the occurrence of the event {S(T) ≥ x} due to a very long renewal interval τ_k. For this to happen, such an interval should appear close to the starting point of the trajectory of S(t). Indeed, owing to the zero mean trend of the process, the value of S(t_{k−1}) will be ‘moderate’ in the absence of large jumps ξ_j, j < k. Therefore, if t_{k−1} is close enough to T then, starting at the point with coordinates (t_{k−1}, S(t_{k−1})) and moving along a straight line with slope coefficient q, it will already be impossible to reach the level x by the time T. Roughly speaking, the occurrence of the event {S(T) ≥ x} due to a large τ_k is only possible when q(T − a_τ k) > x, i.e. when
\[
k < \frac{T - x/q}{a_{\tau}}.
\]
Then, for the event {S(T) ≥ x} to occur, it will suffice for the large jump at the beginning of the trajectory to have the property τ_k > x/q since, both before and after that long renewal interval, the process {S(t)} will not deviate far from the mean trend line, i.e. it will be moving roughly ‘horizontally’.
Assuming the regularity of the tail F_τ(t) of the distribution of τ, we arrive at the conclusion that, in such a case, one could expect asymptotic results of the form
\[
P(S(T) \ge x) \sim \frac{T}{a_{\tau}}\,V(x) + \frac{T - x/q}{a_{\tau}}\,F_{\tau}(x/q). \qquad (16.1.20)
\]
The corresponding results are established in Theorem 16.2.1 below, with the help of the representation (16.1.15). They include delicate ‘transient’ phenomena that occur when x ∼ qT. Moreover, it turns out that one can also obtain the first few terms of the asymptotic expansion for the probability P(S(T) ≥ x) (Theorem 16.3.1). The derivation of more complete expansions is substantially harder than solving the same problem for ordinary random walks, owing to the more complex structure of the process {S(t)}.

Arguments similar to those above also hold in the case of more general boundaries. In particular, the relations (16.1.17)–(16.1.20) remain true (under the respective conditions) for the probabilities P(\overline S(T) ≥ x) (Theorem 16.2.3). For boundaries of a general form, we will restrict ourselves to considering cases where the occurrence of the event G_T due to long renewal intervals is impossible, as otherwise the asymptotic analysis would become rather tedious. (An exceptional case is that of boundaries for which the right endpoint is the lowest, i.e. g(T) = min_{0≤t≤T} g(t). For such boundaries we will show in Corollary 16.4.4 that all the asymptotic results obtained in this chapter for the probabilities P(S(T) ≥ x) remain valid for the probabilities P(G_T).) Namely, we consider classes of boundaries {g(t); t ∈ [0, T]} satisfying the conditions x ≤ \inf_{t\ge 0} g(t) ≤ Kx (K > 1 is fixed), under the additional assumptions that x ≥ δT when q ≤ 0 and \inf_{t\le T}(g(t) − qt) ≥ δT when q > 0. It will be shown that, cf. the results in §§ 3.6 and 4.6 on the asymptotics of the boundary-crossing probabilities for a random walk, the main term of the probability P(G_T), as x → ∞, has the form
\[
\int_{0}^{T} V(g_{*}(t))\,dH(t),
\]
where g_{*}(t) := \inf_{t\le s\le T} g(s) (Theorem 16.4.1). If, moreover, T → ∞ then the above integral is asymptotically equivalent to
\[
\frac{1}{a_{\tau}}\int_{0}^{T} V(g_{*}(t))\,dt.
\]
For linear boundaries g(t) = x + gt, we will obtain more complete results in § 16.5. For a finite large T, we can find the asymptotics of P(G_T), including the case when the ‘deterministic drift’ line qt can cross the boundary x + gt. As in our analysis of the distributions of S(T) and \overline S(T), we also study the ‘threshold case’, when x + gT and qT are relatively close to each other. Note that the case of linear boundaries with T = ∞ is covered by Theorem 16.1.2.

In conclusion, we note that studying large deviation probabilities for S(T) in the case when the components of the vectors (τ_i, ξ_i) are dependent is also possible, but only for some special joint distributions of (τ_i, ξ_i) (those that are regularly
varying at infinity). The representation
\[
P(S(T) \ge x) = \sum_{n=1}^{\infty}\int_{0}^{T} P\bigl(T_n \in dt,\ S_n + qT \ge x\bigr)\, P(\tau_{n+1} > T - t) \qquad (16.1.21)
\]
reduces the problem to one of analysing the joint distribution of the vector of sums (T_n, S_n). Such an analysis, enabling one to find the asymptotics of (16.1.21), was given in Chapter 9 for some basic types of joint distributions of (τ_i, ξ_i) that display regular variation at infinity.

16.2 Large deviation probabilities for S(T) and \overline S(T)

We now introduce the conditions [Q_T] and [Φ_T], which are respectively similar to condition [Q] of Chapter 3 and the conditions that we often used in Chapter 4.

[Q_T] One has F_+ ∈ R for α ∈ (1, 2), and at least one of the following two conditions holds true:
(i) F_−(t) ≤ cV(t), t > 0, and TV(x) → 0;
(ii) F_−(t) ≤ W(t), t > 0, W ∈ R, x → ∞ and T[V(x/\ln x) + W(x/\ln x)] < c.

Clearly, [Q_T] is simply condition [Q] with n replaced in it by T; see p. 138.

[Φ_T]
One has F_+ ∈ R for α > 2, d := Var ξ < ∞ and x → ∞; moreover, for some c > 1, we have
\[
x > c\,\sqrt{(\alpha - 2)\,a_{\tau}^{-1}\, d\, T \ln T}.
\]
Furthermore, we will also need a condition of the form [ · , · ].
\[
P(S(T) \ge x) \sim \frac{T}{a_{\tau}}\,V(x) + \frac{x_{*}^{2}\,V(x_{*})\,V_{\tau}(T)}{q^{2} a_{\tau}^{2}\,(\alpha - 1)(\alpha - 2)}.
\]
(iii) If F_τ(t) = o(V(t)) as t → ∞ then the assertions of parts I(i), I(ii) hold without any additional assumptions on x and T (apart from those in conditions [Q_T] and [Φ_T] and in the above-mentioned parts of the theorem).
(iv) Let condition F_τ ∈ R hold for a γ ∈ (1, 2) ∪ (2, ∞) (see (16.1.4)) and let
\[
x_{*} = x - qT \to -\infty, \qquad x \ge T^{1/\gamma} L_{z}(T)
\]
for a suitable s.v.f. L_z (a way of choosing the function L_z(t) is indicated in Lemma 16.2.8(ii); for γ > 2 the latter inequality always holds owing to conditions [Q_T] or [Φ_T]). Then
\[
P(S(T) \ge x) \sim \frac{1}{a_{\tau}}\bigl[\,T V(x) + (T - x/q)\,V_{\tau}(x/q)\bigr] \quad\text{as}\quad T \to \infty.
\]
Remark 16.2.2. If the conditions α > 2 and [ · ] hold (γ > 2), the asymptotics acquire an additional term which depends on the distribution of τ_k and which may or may not dominate. Its presence is caused by the following (relatively probable) possibility: one renewal interval, starting at the very beginning of the time interval [0, T], proves to be very long (and covers the time point t = T), the sum of the jumps ξ_j ‘accumulated’ during the initial time interval (prior to the start of that renewal interval) being large enough for the process to negotiate the ‘gap’ between qT and x. This, in its turn, is also a ‘large deviation’ and is due to a single ‘moderately large’ jump ξ_j exceeding the value x_* (hence the factor V(x_*)). The presence of the factor x_*^2 can be explained by the fact that, roughly speaking, the number of such jumps at the beginning of the trajectory of the process that would have to correspond to these large values of τ_k and ξ_j is of order x_*, the sequences {τ_k} and {ξ_j} being independent of each other. Therefore the probability of the required combination of events will be of the order of magnitude of the product of x_* V(x_*) and x_* F_τ(T).

In case II(iv) the level x is already substantially lower than qT (owing to the condition x − qT → −∞), and hence the probability that the process will be above that level increases by a contribution corresponding to the presence of a single very large τ_k at the beginning of the interval [0, T]. The form of this additional term can be explained as follows. Roughly speaking, in the absence of large deviations in the random walk {S_n}, for the process S(t) to exceed the level x by the time T it suffices that one of the first (roughly) (T − x/q)/a_τ renewal intervals is long (> x/q).
In this case, the trajectory of S(t) will oscillate about zero until the start of the long renewal interval, and then during that interval it will move along a straight line with slope coefficient q > 0 (and this will take the trajectory above the level x). After that (if t is still less than T ), the trajectory will again oscillate at an approximately constant level (and therefore will still be above the level x by the time T ). The very narrow transient case x∗ = O(1) (when the values of qT and x are
almost the same) proves to be quite difficult. This case is not considered in the present exposition and is not covered by Theorem 16.2.1.

As was the case for ordinary random walks, the first-order asymptotics of the probabilities P(\overline S(T) ≥ x) of large deviations of the maximum of the process turn out to be of the same form as those for P(S(T) ≥ x). This is due to the same reason: if (say, due to a large jump ξ_j) the process {S(t)} crosses a high level x somewhere inside the interval [0, T] then, with a high probability, it will stay in a ‘neighbourhood’ of the point S(t_j) until the end of the interval [0, T] (recall that our process has a zero mean trend). Hence the events {S(T) ≥ x} and {\overline S(T) ≥ x} prove to be almost equivalent. The process can also cross the level x during an interval of its linear growth, but even then the above remains true.

Theorem 16.2.3. All the assertions of Theorem 16.2.1 remain true, under the respective assumptions, for the probability P(\overline S(T) ≥ x).

As we saw in § 4.8 in the case of random walks {S_n}, the asymptotics
\[
P(S_n \ge x) \sim n F_{+}(x), \qquad P(\overline S_n \ge x) \sim n F_{+}(x) \qquad (16.2.2)
\]
(valid in the respective deviation zones) extend to a wider distribution class than R (possibly in narrower deviation zones). Using the partial factorization approach, which is based on the relations (16.2.2), one can show that a similar situation occurs for GRPs as well. We will give here just the following simple corollary (further results can be derived in a similar way, using Theorem 4.8.6 and bounds from Lemmata 16.2.6–16.2.8).

Theorem 16.2.4. Suppose that the distribution of ξ satisfies the conditions of Theorem 4.8.1 and the relations (4.8.2) hold true (with n replaced in them by T), and let Eτ² < ∞. Assume also that (16.1.16) is met, and that δ > 0 is an arbitrary fixed number. Then, uniformly in T such that x ≥ (δ + max{0, q})T, one has
\[
P(S(T) \ge x) \sim P(\overline S(T) \ge x) \sim H(T)\,F_{+}(x).
\]
If T → ∞ then the term H(T) on the right-hand side can be replaced by T/a_τ.

To prove the theorems we will need a few auxiliary results on the deviations of the renewal process ν(t) from its mean value H(t) ∼ t/a_τ. We will state these results as separate lemmata. First we note the following.

Remark 16.2.5. To simplify the computations, we will assume in all the proofs of the present chapter that
\[
a_{\tau} = 1. \qquad (16.2.3)
\]
Clearly, this does not lead to any loss of generality. One can easily see how to
make the transition from the results obtained under this assumption to the general case. One just has to scale the time by a factor a_τ, i.e. to make the following changes in the derived results: replace T by T/a_τ and F_τ(·) by F_τ(a_τ ·) (and, correspondingly, H(·) by H(a_τ ·) and so on, so that, for example, the value H(T) will remain unchanged). Further, q should be replaced by qa_τ, and g(·) by g(a_τ ·) (so that, say, in the case of a linear boundary g(t) = x + gt one replaces, in the relations obtained under the assumption (16.2.3), the coefficient g by ga_τ). Precisely this was done when the results of our theorems were stated (convention (16.2.3) is not used therein).

Denote by
\[
t_{k}^{0} := t_{k} - k, \qquad k = 0, 1, 2, \dots,
\]
the centred random walk generated by the sequence {τ_i}, and by Λ_θ(v) the deviation function for the r.v. θ_j := 1 − τ_j:
\[
\Lambda_{\theta}(v) := \sup_{\lambda}\,\{v\lambda - \ln \varphi_{\theta}(\lambda)\}, \qquad \varphi_{\theta}(\lambda) := \mathbf{E}e^{\lambda\theta} = e^{\lambda}\,\mathbf{E}e^{-\lambda\tau}. \qquad (16.2.4)
\]
Lemma 16.2.6. (i) For all z ≥ 0 and n ≥ 1,
\[
P\Bigl(\min_{k \le n} t_{k}^{0} \le -z\Bigr) \le e^{-n\Lambda_{\theta}(z/n)}. \qquad (16.2.5)
\]
(ii) In particular, if Eτ² < ∞ then Λ_θ(v) ≥ cv² for v > 0 and some c > 0. Therefore, for all z ≥ 0, n ≥ 1,
\[
P\Bigl(\min_{k \le n} t_{k}^{0} \le -z\Bigr) \le e^{-cz^{2}/n}. \qquad (16.2.6)
\]
(iii) Assume that condition [ · ] is satisfied. Then, with h(t) := u^{(−1)}(t) the inverse of the function u(λ) = λ^{−1}V_τ(λ^{−1}) (see the proof below), for some c > 0 one has
\[
P\Bigl(\min_{k \le n} t_{k}^{0} \le -z\Bigr) \le e^{-czh(z/n)}. \qquad (16.2.8)
\]
Proof. (i) The inequality (16.2.5) follows immediately from Lemma 2.1.1 (it is obtained by minimizing the right-hand side of (2.1.5) with respect to μ; see p. 82).
(ii) If Eτ² < ∞ then there exists φ_θ''(0) = Var τ. Moreover, as is well known, the function Λ_θ(v) is convex (being the Legendre transform of a convex function, see e.g. § 8, Chapter 8 of [49]). Hence Λ_θ(v) ≥ cv² for v ∈ [0, 1] (Λ_θ(v) = ∞ for v > 1). This proves (16.2.6).
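In the simplest case the quadratic bound just proved is fully explicit. For τ ~ Exp(1) (so a_τ = 1) one has Ee^{−λτ} = 1/(1 + λ), hence Λ_θ(v) = sup_λ{vλ − λ + ln(1 + λ)} = −v − ln(1 − v) for v ∈ [0, 1). The sketch below recovers this by a grid search over λ and checks the minorant Λ_θ(v) ≥ v²/2 (the constant 1/2 is specific to this example):

```python
import math

def lambda_theta(v, lam_max=50.0, steps=200_000):
    # Grid approximation of sup_lam { v*lam - ln(e^lam * E e^{-lam*tau}) }
    # for tau ~ Exp(1), where E e^{-lam*tau} = 1/(1 + lam).
    best = 0.0
    for i in range(steps + 1):
        lam = lam_max * i / steps
        best = max(best, v * lam - lam + math.log1p(lam))
    return best

for v in (0.1, 0.3, 0.5, 0.8):
    closed = -v - math.log1p(-v)          # closed form -v - ln(1 - v)
    assert abs(lambda_theta(v) - closed) < 1e-4
    assert closed >= v * v / 2            # quadratic lower bound, c = 1/2
```

The optimizing point is λ* = v/(1 − v), which lies well inside the grid for the values of v used above.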
(iii) Here the asymptotics of Λ_θ(v) as v → 0 will differ from those in the case Eτ² < ∞. Integrating by parts yields the representation
\[
\mathbf{E}e^{-\lambda\tau} = 1 - \lambda\int_{0}^{\infty} F_{\tau}(t)\,e^{-\lambda t}\,dt = 1 - \lambda + \lambda^{2}\int_{0}^{\infty} F_{\tau}^{I}(t)\,e^{-\lambda t}\,dt,
\]
where, by virtue of condition [ · ], for c > 0 large enough we get
\[
\mathbf{E}e^{-\lambda\tau} \le 1 - \lambda + cV_{\tau}(1/\lambda) \le e^{-\lambda + cV_{\tau}(1/\lambda)}, \qquad \lambda \in [0, 1],
\]
and therefore
\[
\varphi_{\theta}(\lambda) = e^{\lambda}\,\mathbf{E}e^{-\lambda\tau} \le e^{cV_{\tau}(1/\lambda)}, \qquad \lambda \in [0, 1].
\]
Again using Lemma 2.1.1, we obtain
\[
P\Bigl(\min_{k \le n} t_{k}^{0} \le -z\Bigr) \le e^{-\lambda z + cnV_{\tau}(1/\lambda)}. \qquad (16.2.9)
\]
The assertions of the theorem regarding the regular variation at zero of the function u(λ) = λ^{−1}V_τ(λ^{−1}) and its inverse h(t) = u^{(−1)}(t) are obvious. Further, for λ = h(z/(cnγ)) the exponent on the right-hand side of (16.2.9) equals
\[
-\lambda z + cnV_{\tau}(1/\lambda) = \bigl(-z + cn\,u(\lambda)\bigr)\lambda = \Bigl(-z + \frac{z}{\gamma}\Bigr) h\Bigl(\frac{z}{cn\gamma}\Bigr) = -\frac{z(\gamma - 1)}{\gamma}\, h\Bigl(\frac{z}{cn\gamma}\Bigr) \le -c_{1} z\, h\Bigl(\frac{z}{n}\Bigr)
\]
for z/n ≤ 1, since h(t) is an r.v.f. as t → 0. This establishes (16.2.8) in the case z ≤ n. For z > n, the inequality (16.2.8) is trivial. Lemma 16.2.6 is proved.

The next result follows from Corollary 3.1.2 and Remark 4.1.5.
Lemma 16.2.7. If condition [ · ] … δ > 0 is fixed.

In the following lemma, V(t) is an arbitrary r.v.f.

Lemma 16.2.8. (i) Let n_+ := T + z_0 and z_0 = εx, where ε = ε(x) > 0, and let x ≥ δT for an arbitrary fixed δ > 0. Then, for any k ≥ 0 and m ≥ 0, uniformly in the zone x ≥ δT one has the relation
\[
\sum_{n > n_{+}} n^{k}\, P(\nu(T) = n) = o\bigl((T V(x))^{m}\bigr) \quad\text{as}\quad x \to \infty \qquad (16.2.10)
\]
provided that ε = ε(x) → 0 slowly enough as x → ∞.
(ii) Let condition [ · ] … δ > 0 is an arbitrary fixed number.
Proof. (i) Observe that
\[
\sum_{n > n_{+}} n^{k}\, P(\nu(T) = n) \le (T + z_{0})^{k}\, P(\nu(T) > T + z_{0}) + k\int_{z_{0}}^{\infty} (T + z)^{k-1}\, P(\nu(T) > T + z)\,dz. \qquad (16.2.12)
\]
Assuming for simplicity that T + z is an integer, we note that
\[
\{\nu(T) > T + z\} \subseteq \{t_{T+z} < T\} = \Bigl\{\sum_{j \le T+z} \theta_{j} > z\Bigr\}, \qquad \theta_{j} = 1 - \tau_{j}.
\]
Therefore, using the deviation function (16.2.4), we get from the Chebyshev inequality the following bound: for z ≥ z_0,
\[
P(\nu(T) > T + z) \le P\Bigl(\sum_{j=1}^{T+z} \theta_{j} > z\Bigr) \le \exp\Bigl\{-(T+z)\,\Lambda_{\theta}\Bigl(\frac{z}{T+z}\Bigr)\Bigr\} \le \exp\{-(T+z)\,\Theta(\varepsilon\delta)\}, \qquad (16.2.13)
\]
where Θ(u) := Λ_θ(u/(1+u)) > 0 for u > 0, and we have used the fact that the function Θ(u) is increasing and that u := z/T ≥ z_0/T = εx/T ≥ εδ. Hence the right-hand side of (16.2.12) does not exceed
\[
(T + \varepsilon x)^{k} e^{-(T+\varepsilon x)\Theta(\varepsilon\delta)} + k\int_{\varepsilon x}^{\infty} (T + z)^{k-1} e^{-(T+z)\Theta(\varepsilon\delta)}\,dz = o\bigl((T V(x))^{m}\bigr) \qquad (16.2.14)
\]
when ε → 0 slowly enough.
(ii) In this case the inequality in the first line of (16.2.13) and Lemma 16.2.6(iii) give the bound
\[
P(\nu(T) > T + z) \le \exp\Bigl\{-czh\Bigl(\frac{z}{T+z}\Bigr)\Bigr\}. \qquad (16.2.15)
\]
Since one can assume that z_0 < T, we see that the right-hand side of (16.2.12) is bounded from above by
\[
c_{1} T^{k} e^{-c_{2} z_{0} h(z_{0}/T)} + c_{1} T^{k-1}\int_{z_{0}}^{T} e^{-c_{2} z_{0} h(z_{0}/T)}\,dz + c_{1}\int_{T}^{\infty} z^{k-1} e^{-c_{3} z}\,dz \le 2c_{1} T^{k} \exp\Bigl\{-\frac{c_{2}\,c_{+}}{2}\ln T\Bigr\} + c_{4}\, e^{-c_{3} T/2} = o(T^{-c_{5}})
\]
for any given c_5, once c_+ is large enough. As one has x ≥ cT, the lemma is proved.
Proof of Theorem 16.2.1. As stated earlier (on p. 548), when considering the event {Sn x}, one could make use of partial factorization. Letting Sn0 := Sn − aξ n ≡ Sn + qn (the last relation holds by virtue of (16.2.3) and (16.1.16)), rewrite (16.1.15) as P Sn0 x − q(T − n) P(ν(T ) = n) P(S(T ) x) = n0
=
nn+
where the values n± are chosen according to the situation. It will be convenient
16.2 Large deviation probabilities for S(T ) and S(T )
559
to estimate these three sums separately, and, depending on the situation, to do this in different ways. The contribution of the last sum will always be negligibly small and the middle sum will reduce to the main term of the form H(T )V (x), while the first sum will contribute substantially only in those cases when the presence of a long renewal interval can ‘help’ the trajectory of the process S(t) to exceed the level x by the time T . I. The case q 0. (i) In this part we put n± := T ± εx, where ε = ε(x) → 0 as x → ∞ slowly enough that, uniformly in the specified zone of the T -values, one has P ν(T ) ∈ [n− , n+ ] = o(1) as x → ∞ (16.2.17) (since x δT , this is always possible owing to the law of large numbers for renewal processes). As x − q(T − n) x δT > δn
for
n < n− ,
in this case by Corollaries 3.1.2 and 4.1.4 one has P Sn0 x − q(T − n) = O(nV (x)) = O(T V (x)),
n < n− , (16.2.18)
and hence we obtain from (16.2.17) that = o(T V (x)). n n+ ) = o(T V (x)), (16.2.19) n>n+
so that it remains only to estimate the middle sum in the second line of (16.2.16). Since without loss of generality we can assume that |qε| < 1/2 and εδ 1, we have for n n+ that δ x δ δ x T+ n+ n. x − q(T − n) (1 + qε)x 2 4 δ 4 4 Therefore, by Theorems 3.4.1 and 4.4.1, = (1 + o(1)) nV (x − q(T − n)) P(ν(T ) = n) n− nn+
n− nn+
= (1 + o(1))V (x)
n P(ν(T ) = n)
n− nn+
= (1 + o(1))V (x)
n P(ν(T ) = n) + o(T V (x)) ∼ H(T )V (x)
n>0
(16.2.20) using the bounds from Lemma 16.2.8(i) and from (16.2.17). If T → ∞ then H(T ) ∼ T by the renewal theorem. Part I(i) of the theorem is proved.
560
Extension to generalized renewal processes
(ii) It follows from I(i) that here we could restrict ourselves to considering the case x cT (so that automatically T → ∞). Put n± = T ± εz0 (without loss of generality, one can assume for simplicity that n± are integers), where ε = ε(x) → 0 slowly enough as x → ∞ and z0 = T 1/γ Lz (T ) is defined in Lemma 16.2.8(ii), and then again turn to the representation (16.2.16). For n < n− we have x − q(T − n) x, and (16.2.18) will hold by virtue of the conditions imposed on x. If we show that T Vτ (z0 ) → 0
(16.2.21)
then, when ε tends to zero slowly enough, we will also have T Vτ (εz0 ) → 0 and therefore, by Lemma 16.2.7, P(ν(T ) < n− ) = P t0T −εz0 > εz0 = O(T Vτ (εz0 )) = o(1).
(16.2.22)
Thus for the first sum on the right-hand side of (16.2.16) we have
= O T V (x)P(ν(T ) < n− ) = o(T V (x)).
n 0.
On the one hand, as for the original r.v.’s τj , we have the bound (16.2.8) for the sums ˆt0j := kj=1 (ˆ τj − Eˆ τj ). Therefore, using an argument similar to that in the proof of Lemma 16.2.8(ii), for any given c > 0 one has P ˆt0T −z0 = o(T −c ),
T → ∞.
(16.2.23)
(−1)
On the other hand, putting bτ (T ) := Vτ (1/T ) one can easily see that, by virtue of Theorem 1.5.1, the distribution of ˆt0T /bτ (T ) converges as T → ∞ to the stable law Fγ,1 with parameters γ and 1. Since the support of that law is the whole real axis when γ > 1, this, together with (16.2.23), means that z0 bτ (T ) and hence (16.2.21) holds true. Further, to prove (16.2.19) in the case under consideration we put m+ := T +z0 and write = + . (16.2.24) n>n+
n+ m+
Owing to Corollary 3.1.2 and Lemma 16.2.6(iii), the first sum on the right-hand
16.2 Large deviation probabilities for S(T ) and S(T )
561
side does not exceed
Σ_{n+<n≤m+} P(S⁰_n ≥ x − |q|z0) P(ν(T) = n).
Further, P(|ν(T) − T| > εT) = O(T Vτ(εT)) = o(T V(x)) (when ε tends to 0 slowly enough), so that P(ν(T) ∉ [n−, n+)) = o(T V(x)). Moreover, for n ∈ [n−, n+) we have P(S⁰_n ≥ x − q(T − n)) ∼ nV(x). Now the desired assertion can easily be obtained, using computations similar to those in (16.2.20). We will only note that, in order to replace the sum Σ_{n−≤n≤n+} nP(ν(T) = n) by Σ_{n>0} nP(ν(T) = n),
Extension to generalized renewal processes
we will have to prove the required smallness of
Σ_{n>n+} nP(ν(T) = n) = o(T)  (16.2.26)
in a somewhat different way. Namely, since ν(T)/T → 1 a.s. as T → ∞ by the law of large numbers for renewal processes, and since Eν(T)/T = H(T)/T → 1 by the renewal theorem, we conclude that the r.v.'s ν(T)/T ≥ 0 are uniformly integrable. Therefore, when ε tends to 0 slowly enough, we get
E[ν(T)/T; ν(T)/T > 1 + ε] → 0  as T → ∞,
which is equivalent to (16.2.26).
When α > 2 and Eτ² < ∞, put n± := T ± ε√(T ln T) and let ε = ε(x) tend to 0 slowly enough. Then again the relation (16.2.18) clearly holds for n < n−. Combining this with the observation that
P(ν(T) ∉ [n−, n+]) = P(|ν(T) − T| > ε√(T ln T)) → 0  (16.2.27)
by the central limit theorem for renewal processes (see e.g. § 5, Chapter 9 of [49]), we see that the contribution of the sum Σ_{n<n−} is negligibly small. The bound Σ_{n>n+} = o(T V(x)) is established in the same way as in the proof of part I(ii), but this time we choose m+ := T + c1√(T ln T), where c1 is large enough, and use Lemma 16.2.6(i). Finally, for n ∈ [n−, n+] we again have x − q(T − n) ∼ x when x > c√(T ln T) (condition [ΦT]), and the proof is completed in the same way as in the previous parts of the theorem.
II. The case q > 0.
(i) If x − qT ≥ δT then all the arguments from the proof of part I(i) remain valid without any significant changes.
(ii) Turning to the representation (16.2.16) with n± := (1 ± ε)T (assuming for simplicity that T and εT are integers; similar assumptions will be tacitly made in what follows as well), one can easily verify that, as in the proof of part I(i), one has Σ_{n>n+} = o(T V(x)) (note that x ∼ qT in the case under consideration) and Σ_{n−≤n≤n+} = (1 + o(1))T V(x). Thus it remains to consider
Σ_{n<n−} = P(S⁰_{ν(T)} ≥ x∗ + qν(T), ν(T) < εT) + P(S⁰_{ν(T)} ≥ x∗ + qν(T), εT ≤ ν(T) < n−),  (16.2.28)
where the second term admits the bound
P(S⁰_{ν(T)} ≥ x∗ + qν(T), εT ≤ ν(T) < n−) ≤ cT V(εT) P(ν(T) < n−) ≤ cT V(εT) · T(εT)^{−γ} = o(T V(T))  (16.2.29)
when ε tends to 0 slowly enough (the first two relations in (16.2.29) follow from Corollaries 3.1.2 and 4.1.4 and from Lemma 16.2.7, respectively). To estimate the first term on the right-hand side of (16.2.28), fix a small enough δ > 0 and introduce events C^(k), k ≥ 0, of which the meaning is that for exactly k of the r.v.'s τj, j ≤ εT, one has τj ≥ δT, whereas for all the rest τj < δT. Clearly, the probability we require is equal to
P(S(T) ≥ x, ν(T) < εT) = Σ_{0≤k≤1/δ+1} P(S(T) ≥ x, ν(T) < εT; C^(k)).  (16.2.30)
We will show that the main contribution to the sum of probabilities on the right-hand side comes from the second summand (k = 1), which corresponds to the presence of exactly one long renewal interval τj (covering most of the segment [0, T]). It is this large τj that will be responsible for 'driving' the trajectory of the process S(t) = S_{ν(t)} + qt along a straight line, with slope coefficient q > 0, beyond the level x = qT + x∗ by time T. The total contribution of all the other terms (k ≠ 1) will be negligibly small relative to T V(x), and we already know that P(S(T) ≥ x) ≥ (1 + o(1))T V(x).
We will start with the case k = 0. By Theorems 3.1.1 and 4.1.2, for any given b > 0 we have
P(S(T) ≥ x, ν(T) < εT; C^(0)) ≤ P(t⁰_{εT} > (1 − ε)T, τi < δT, i ≤ εT) = O((εT Vτ(T))^m) = O(T^{−b})  (16.2.31)
provided that δ < 1/m, where m is large enough. Therefore this probability is o(T V(x)).
The main contribution to the sum (16.2.30), as we have already said, is from the term with k = 1. This case requires a detailed analysis. Consider the events
C_j^(1) := {τj ≥ δT, τi < δT, i ≤ εT, i ≠ j},  1 ≤ j ≤ εT,  (16.2.32)
so that C^(1) = ∪_{j≤εT} C_j^(1). We are interested in the second term in (16.2.30),
namely,
Σ_{j≤εT} P(S⁰_{ν(T)} ≥ x∗ + qν(T), ν(T) < εT; C_j^(1)).  (16.2.33)
Since, for n ≥ j with τi < δT, i ≤ εT, i ≠ j, one has
P(S⁰_n ≥ x∗ + qn) ∼ nV(x∗ + qn) ≥ cjV(x∗ + qj).
Therefore, the above-mentioned term does not exceed
c x∗ V(x∗) Σ_{j≤x∗} P(j ≤ ν(T) < εT; C_j^(1))
  + c Σ_{x∗<j≤εT} jV(x∗ + qj) P(j ≤ ν(T) < εT; C_j^(1)).  (16.2.36)
will be very small (by Theorems 3.1.1 and 4.1.2). In the same way one can consider the case k ≥ 3. Since the total number of probabilities in the sum (16.2.30) corresponding to that case is bounded, their total contribution will be small relative to (16.2.40). This observation completes the proof of part II(ii) of the theorem.
(iii) In this case, the only difference from the argument proving part I of the theorem is that we have to give another bound for Σ_{n<n−}. Conditioning on the respective r.v. Y, we obtain for the corresponding probability P1 a representation of the form
∫₀^{x/2} P(t⁰_{T−(x−z)/q} > (x − z)/q) P(Y ∈ dz) ∼ ∫₀^{x/2} (T − (x − z)/q) Vτ((x − z)/q) P(Y ∈ dz),
using the asymptotic relation from Theorem 3.4.1 for the distribution tails of the sums t⁰_n (condition [Q] holds for the r.v.'s τ for x ≥ T^{1/γ} L_z(T) owing to (16.2.21)). Since, when ε tends to 0 slowly enough, the probability P(Y > εx) vanishes by Corollaries 3.1.2 and 4.1.4, we obtain that ∫₀^{x/2} ∼ ∫₀^{εx}, and hence
P1 ≤ (T − x/q) Vτ(x/q)(1 + o(1)).  (16.2.43)
At the same time, since P(min_{k≤T} S⁰_k ≤ −εx) = o(1) when ε tends to 0 slowly enough (for α > 2 this follows from the Kolmogorov inequality, and for α ∈ (1, 2) it follows from the bounds of Corollary 3.1.3), we obtain
P(S⁰_{ν(T)} + q(T − ν(T)) ≥ x, ν(T) < n−, min_{k≤T} S⁰_k > −εx)
  ≥ P(ν(T) < T − (1 − ε)x/q) P(min_{k≤T} S⁰_k > −εx)
  ∼ P(ν(T) < T − (1 − ε)x/q) = P(t⁰_{T−(1−ε)x/q} > (1 − ε)x/q) ∼ (T − x/q) Vτ(x/q),
where the last relation follows from Theorems 3.4.1 and 4.4.1 (for γ ∈ (1, 2) one should use (16.2.21)) and from the fact that T − (1 − ε)x/q → ∞ in the case under consideration when −x∗ = q(T − x/q) > δx. Thus Σ_{n<n−} ∼ (T − x/q) Vτ(x/q).
Further,
P(min_{n≤2T} S(tn) < −εx/2) + P(max_{n≤2T+1} τn > εx/(2|q|)) + P(ν(T) > 2T) → 0
as T → ∞, which holds owing to the law of large numbers and the obvious relation
P(max_{n≤2T+1} τn > εx/(2|q|)) = O(T Vτ(εx)) = o(1)
when ε tends to 0 slowly enough. Cases I(ii), I(iii) and II(iii) are dealt with in almost the same way. In case II(i) we have q > 0, and the crossing of the level x is possible not only by a jump but also on a linear growth interval. The number of the renewal interval on which the process {S(t)} first crosses the level x is equal to
ηx := inf{n > 0 : max{S(tn), S(tn) − ξn} > x}.
Next we have to make use of the inequality
P(ηx = n, tn < T, S(T) − max{S(tn), S(tn) − ξn} < −εx)
  ≤ P(ηx = n, tn < T, inf_{t≤T} S(t) − |ξ| < −εx),
where ξ is independent of {S(t)}. All the subsequent calculations have to be modified accordingly. In case II(ii), when x∗ = x − qT = o(T), such a simple approach proves to be insufficient. We will again begin with inequality (16.2.48), but to derive (16.2.49) we have to repeat all the steps in the proof of Theorem 16.2.1, this time just to derive an upper bound only for P(S(T) ≥ x). The modifications that have to be made on the way are relatively minor and do not cause any particular difficulties. Thus, instead of the bounds for the probability
P(S(T) ≥ x, εT ≤ ν(T) ≤ n−) = P(S⁰_{ν(T)} ≥ x∗ + qν(T), εT ≤ ν(T) ≤ n−)
in (16.2.28), (16.2.29) (with n− = (1 − ε)T), we could use the following observations to bound a similar expression for S̄(T). Put T′ := (1 − ε/4)T (again assuming for simplicity that both T and T′ are integers). Then:
(1) By Lemma 16.2.6(i), one has
P(ν(T) − ν(T′) > εT/2) ≤ P(ν(εT/4) ≥ εT/2) = P(t_{εT/2} ≤ εT/4) = P(t⁰_{εT/2} ≤ −εT/4) ≤ e^{−cεT}.
(2) One has the following inclusion:
{S(T′) ≥ x, ν(T′) ≤ n−} = {S_{ν(t)} ≥ x∗ + q(T − t) for some t ≤ T′, ν(T′) ≤ n−}
  ⊂ {max_{n≤n−} Sn ≥ qεT/4, ν(T) ≤ n−}.
(3) If ε tends to 0 slowly enough then x − qT′ ≥ 0, and one has
{max_{t∈[T′,T]} S(t) ≥ x, εT ≤ ν(T) ≤ n−, ν(T) − ν(T′) ≤ εT/2}
  ⊂ {S⁰_{ν(t)} ≥ x − qt + qν(t) for some t ∈ [T′, T], εT ≤ ν(T) ≤ n−, ν(T) − ν(T′) ≤ εT/2}
  ⊆ {max_{n≤n−} S⁰_n ≥ qεT/2; ν(T) ≤ n−}.
From (1)–(3) it clearly follows that
P(S̄(T) ≥ x, εT ≤ ν(T) ≤ n−)
  ≤ P(ν(T) − ν(T′) > εT/2) + P(S(T′) ≥ x, ν(T′) ≤ n−)
    + P(max_{t∈[T′,T]} S(t) ≥ x, εT ≤ ν(T) ≤ n−, ν(T) − ν(T′) ≤ εT/2)
  ≤ e^{−cεT} + P(max_{n≤n−} Sn ≥ qεT/4) P(ν(T) ≤ n−) + P(max_{n≤n−} S⁰_n ≥ qεT/2; ν(T) ≤ n−)
  ≤ e^{−cεT} + 2P(max_{n≤n−} Sn ≥ qεT/4) P(ν(T) ≤ n−) = o(T V(T))  (16.2.50)
by virtue of (16.2.29), when ε vanishes slowly enough.
The main contribution to an analogue of the right-hand side of (16.2.33) for S̄(T) will also be from the middle sum, which can be estimated as follows: since
{S̄(T) ≥ x} = {sup_{t≤T}(S_{ν(t)} + qt) ≥ x} ⊂ {max_{j≤ν(T)} Sj ≥ x∗},
we have
Σ_j P(S̄(T) ≥ x; ν(T) = j, C_{j+1}^(1))
  ≤ c [(T + ψ1(x)) P(ν(T) > T + ψ1(x)) + ∫_{ψ1(x)}^∞ P(ν(T) > T + z) dz]
  ≤ (T + ψ1(x)) exp{−c1 ψ1²(x)/(T + ψ1(x))} + ∫_{ψ1(x)}^∞ exp{−c1 z²/(T + z)} dz = o(T).
Therefore P(S̄(T) ≥ x) ∼ H(T)F+(x) in the case under consideration. The modifications that one needs to make in the proof in the case q < 0, and also when considering the asymptotics of P(S(T) ≥ x), are equally elementary. The theorem is proved.
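The 'single large jump' mechanism established above is easy to observe in simulation. The following Monte Carlo sketch is our illustration, not part of the proof: it takes Exp(1) interarrival times and Pareto jumps with tail V(t) = t^{−α}, α = 1.5, centres the process so that the mean trend is zero, and checks that almost all rare paths with S(T) ≥ x contain one jump comparable with x.

```python
import random

def simulate_S_T(T, alpha, q, rng):
    """One path of S(T) = S_{nu(T)} + qT with Exp(1) interarrivals and
    centred Pareto(alpha) jumps; returns (S(T), largest jump)."""
    t = s = max_jump = 0.0
    mean_jump = alpha / (alpha - 1.0)      # E xi for Pareto on [1, inf)
    while True:
        t += rng.expovariate(1.0)          # next renewal epoch
        if t > T:
            break
        xi = rng.paretovariate(alpha)      # P(xi >= u) = u^{-alpha}
        s += xi - mean_jump - q            # centring: zero mean trend
        max_jump = max(max_jump, xi)
    return s + q * T, max_jump

rng = random.Random(1)
T, alpha, q, x = 50.0, 1.5, 0.0, 200.0
hits = big = 0
for _ in range(50_000):
    sT, mj = simulate_S_T(T, alpha, q, rng)
    if sT >= x:
        hits += 1
        if mj >= x / 2:
            big += 1
# hits/50000 should be of order T * V(x) = 50 * 200^{-1.5};
# among the hits, the fraction with one jump >= x/2 should be close to 1
print(hits, big / max(hits, 1))
```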
16.3 Asymptotic expansions
In this section we will state and prove an assertion establishing for the probabilities P(S(T) ≥ x) asymptotic expansions similar to those obtained in § 4.4 for
random walks. Our approach will consist in using partial factorization and relations of the form (16.2.16) in conjunction with the above-mentioned results for random walks. As in § 4.4, we will need additional conditions of the form [D(k,q)] on the distribution of the random jumps ξj (see p. 199). Moreover, we will also need additional conditions on the distribution Fτ. First of all, since the asymptotic formulae for P(S(T) ≥ x) will include the variance of ν(T), the minimum moment condition on τ will be aτ,2 := Eτ² < ∞. In the case when qT > x, and thus the event {S(T) ≥ x} could occur because of the presence of large intervals between the jumps in the process, we will also require the tail Fτ(t) to be regular. Since we will only be considering events of the form {S(T) ≥ x}, we can again assume without loss of generality that the mean trend in the process is equal to zero (assumption (16.1.16)). Recall that H(t) = Eν(t) denotes the renewal function for the process {ν(t)}, and put Hm(t) := Eν^m(t), m ≥ 2.
Theorem 16.3.1. Suppose that the distribution of ξ is non-lattice, conditions (16.1.3) hold for α > 2, d = Var ξ < ∞ and the right tail of the distribution of ξ satisfies [D(2,0)], and also that aτ,2 < ∞ and the mean trend in the process S(t) is equal to zero. Let δ > 0 be an arbitrary fixed number.
I. In the case q ≤ 0 the following assertions hold true.
(i) Uniformly in T satisfying x ≥ δT, as x → ∞,
P(S(T) ≥ x) = V(x) { H(T) + (q aτ L1(x)/x) [(T/aτ + 1) H(T) − H2(T)]
  + (α(α + 1)/(2x²)) [(H2(T) − H(T)) d + q² aτ² ((T/aτ + 1)² H(T) − 2(T/aτ + 1) H2(T) + H3(T))]
  + o((1 + T²) x^{−2}) }.  (16.3.1)
(ii) If T → ∞ then, uniformly in x ≥ δT,
P(S(T) ≥ x) = V(x) { T/aτ + (aτ,2/(2aτ²) − 1) − (3αqT/x)(aτ,2/(2aτ²) − 1)
  + (α(α + 1)T²/(2x²)) [d/aτ² + q²(aτ,2/aτ² − 1)] + o(1) }.  (16.3.2)
If Fτ is a lattice distribution then the relation (16.3.2) holds for T values from the lattice.
II. In the case q < 0, the following assertions hold true.
(i) The relations (16.3.1) and (16.3.2) remain true uniformly in the zone x ≥ (q + δ)T.
(ii) If Fτ ∈ R with exponent γ > 2 (see (16.1.4)) then, for x ≥ δT, T − x/q → ∞, the relation (16.3.2) holds with an additional term on the right-hand side of the form
(1/aτ)(T − x/q) Vτ(x/q)(1 + o(1)),  T → ∞.  (16.3.3)
If, moreover, E|ξ|³ < ∞ and condition [D(1,0)] holds for the distribution tail of τ then the remainder term o(1) in (16.3.3) can be replaced by o((T − x/q)^{−1/2}).
Remark 16.3.2. Repeating the argument from the proof of Theorem 16.3.1, one can easily verify that, under condition [D(1,0)] on the right distribution tail of ξ, the error in the approximation of the probabilities of the form P(Sn ≥ x) proves to be too large and so 'masks' the effects introduced by the randomness of the jump epochs in the process {S(t)}. Therefore, non-trivial asymptotic expansions for P(S(T) ≥ x) can be obtained only by imposing condition [D(2,0)] on V(t) = P(ξ ≥ t) and assuming that E(ξ² + τ²) < ∞. However, imposing additional conditions (say, [D(3,0)] on the right distribution tail of ξ and/or Eτ³ < ∞) does not lead to any substantial improvement of the results.
Proof of Theorem 16.3.1. Cases I and II(i). We will again make use of the decomposition (16.2.16) with n± = T ± εx, where ε → 0 slowly enough as x → ∞ (with the natural convention that n− = 0 for T − εx ≤ 0). We will also assume, as usual, that aτ = 1 (so that aξ = −q by (16.1.16)) and that d = 1. To estimate the sum Σ_{n<n−}, observe that for n < n− one has
x − q(T − n) ≥ cx,  c > 0  (16.3.4)
(for q ≤ 0 one can take c = 1, while for q > 0, x − qT ≥ δT one has x − q(T − n) ≥ x − qT ≥ δx/(q + δ)). Hence, using Lemma 16.2.7 (with γ = 2) to bound the probability P(t⁰_{n−} > εx) in the following formula, we obtain
Σ_{n<n−} ≤ max_{n<n−} P(S⁰_n > cx) P(ν(T) < n−) = O(T V(x) n− Vτ(εx)) = O(T² V(x) x^{−2} ε^{−2} Lτ(εx)) = o(T² V(x) x^{−2})
when ε vanishes slowly enough.  (16.3.5)
Moreover, by Lemma 16.2.6(ii), the last sum in (16.2.16) can be bounded as follows:
Σ_{n>n+} ≤ P(ν(T) > n+) ≤ exp{−cε²x²/(T + εx)} = o(T² V(x) x^{−2}),  (16.3.6)
and so it remains to consider the middle sum Σ_{n−≤n≤n+}. Using the obvious inequality P(ξ − aξ > t) = V(t + aξ) = V(t − q) and our observation (16.3.4), we have from condition [D(2,0)] for V(t) and Corollary 4.4.5(i) (p. 200) the following relation for all n− ≤ n ≤ n+ and the values of x specified in the conditions of the theorem:
P(S⁰_n ≥ x − q(T − n))
  = nV(x[1 − q(T − n + 1)/x]) [1 + (α(α + 1)(n − 1)d)/(2(x − q(T − n))²) (1 + o(1))]
  = nV(x) [1 + L1(x) q(T − n + 1)/x + (α(α + 1)/2)(q(T − n + 1)/x)²(1 + o(1))]
      × [1 + (α(α + 1)(n − 1)d)/(2(x − q(T − n))²) (1 + o(1))]
  = nV(x) [1 + L1(x) q(T − n + 1)/x + (α(α + 1)/2)((q(T − n + 1)/x)² + (n − 1)d/x²)] (1 + o(1))
      + O((|T − n| + 1) n x^{−3})  (16.3.7)
(note that on the right-hand side of the first line we have nV(x(1 + Δ)), where Δ := −q(T − n + 1)/x is indeed o(1) for n− ≤ n ≤ n+, x → ∞; we have used this to substitute the representation for V(x(1 + Δ)) from condition [D(2,0)], taking into account (4.4.9)).
If we substitute the expressions for the respective probabilities into the sum Σ_{n−≤n≤n+} from (16.2.16) and replace that sum by a sum over all n ≥ 0, the error introduced thereby will be of the order of
V(x) (Σ_{n<n−} + Σ_{n>n+}) [ n + n|T − n|/x + (n² + n(T − n)²)/x² ].  (16.3.8)
Here the first sum, by virtue of Lemma 16.2.7 (with γ = 2), is
O((T + T²/x + T³/x²) P(ν(T) < n−)) = O(T P(t⁰_{T−εx} > εx)) = o(T² x^{−2})
(cf. (16.3.5)), while the second sum is
O( Σ_{n>n+} (n + n²/x + n³/x²) P(ν(T) = n) ) = o(T² x^{−2})
by Lemma 16.2.8(i), so that the total error (16.3.8) is o(T² V(x) x^{−2}). Thus we obtain from (16.2.16) and (16.3.5)–(16.3.8) that
P(S(T) ≥ x) = V(x) { Eν(T) + (qL1(x)/x) E[ν(T)(T − ν(T) + 1)]
    + (α(α + 1)/(2x²)) [ q² E[ν(T)(T − ν(T) + 1)²] + (Eν²(T) − Eν(T)) d ] } (1 + o(1))
  + O( x^{−3} E[ν²(T)(|T − ν(T)| + 1)] ) + o(T² V(x) x^{−2}).  (16.3.9)
It remains to estimate the remainder term O(·) (this will complete the proof of the representation (16.3.1) in cases I(i) and II(i)) and also the coefficients expressed in terms of the expectations in the case when T → ∞. We will start with the latter problem and recall the well-known fact that, for aτ = 1 and aτ,2 < ∞,
Eν(T) = T + (aτ,2/2 − 1) + o(1),  (16.3.10)
Var ν(T) = (aτ,2 − 1)T + o(T)  (16.3.11)
(see e.g. § 12, Chapter XIII of [121] and § 4, Chapter XI of [122]; in the lattice case it is assumed that T belongs to the lattice). Hence
E[ν(T)(T − ν(T) + 1)] = −Var ν(T) − (Eν(T))² + (T + 1)Eν(T) = 3(1 − aτ,2/2)T + o(T),  (16.3.12)
Eν²(T) = T²(1 + o(1)).
Now we estimate the moment E[ν(T)(T − ν(T) + 1)²]. By the law of large numbers and the central limit theorem for renewal processes (see e.g. § 5, Chapter 9 of [49]), as T → ∞, we have
ν(T)/T → 1 a.s.,  ζT := (T − ν(T) + 1)/√T ⟹ N(0, aτ,2 − 1),  (16.3.13)
where the last relation means that the distributions of ζT converge weakly to the indicated normal law. Note that this weak convergence takes place together with the convergence of the first and second moments of ζT (cf. (16.3.10)–(16.3.11)),
so that, in particular, the r.v.'s ζT² are uniformly integrable. Now write
T^{−2} E[ν(T)(T − ν(T) + 1)²]
  = E[(ν(T)/T) ζT²; |ν(T)/T − 1| ≤ ε] + E[(ν(T)/T) ζT²; ν(T)/T < 1 − ε] + E[(ν(T)/T) ζT²; ν(T)/T > 1 + ε]
  =: E1 + E2 + E3
and note that, owing to the above,
E1 ∼ EζT² ∼ T^{−1} Var ν(T) → aτ,2 − 1,
E2 < E[ζT²; ν(T) < (1 − ε)T] = o(1),
E3 < T^{−2} E[ν(T)³; ν(T) > (1 + ε)T] = o(1)
as T → ∞ (assuming that ε = ε(T) tends to 0 slowly enough). Here the estimate for E2 follows from the uniform integrability of ζT² and the law of large numbers (owing to which P(ν(T) < (1 − ε)T) = o(1)), while the last relation follows from the inequalities (16.2.12) and (16.2.13) in the proof of Lemma 16.2.8. Thereby we have proved that
E[ν(T)(T − ν(T) + 1)²] = (aτ,2 − 1)T²(1 + o(1)),  T → ∞.  (16.3.14)
Finally, for the remainder term O(·) in (16.3.9) we have, from the relations (16.3.10)–(16.3.11), the Cauchy–Bunyakovskii inequality and the almost obvious relation Eν⁴(T) ∼ T⁴ as T → ∞ (cf. (16.3.13) and Lemma 16.2.8), the bound
x^{−3} E[ν²(T)(|T − ν(T)| + 1)] ≤ x^{−3} [Eν²(T) + (Eν⁴(T) E(T − ν(T))²)^{1/2}] = O(x^{−3}(1 + T^{5/2})) = o(1 + T² x^{−2}).  (16.3.15)
Now the assertions of parts I(i), (ii) and II(i) of Theorem 16.3.1 can be obtained by substituting the estimates (16.3.10)–(16.3.15) into the representation (16.3.9). One should just note that, for x ≥ δT, we have T^k/x^k = O(1), x^{−1} = O(T^{−1}) and T² V(x) x^{−2} = O(V(x)), and that T^{5/2} x^{−3} = O(T^{−1/2}) = o(1).
Case II(ii). In this case we clearly have to consider also the possibility that the event {S(T) ≥ x} occurs as a result of the presence of a very long renewal interval on [0, T]. Using the representation (16.2.16) together with the results for large deviation probabilities in the random walk {Sn} and for the moments of the renewal process is no longer sufficient. Now we will have to introduce truncated versions of not only the random jumps ξj but also of the renewal intervals τj.
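The renewal-moment asymptotics (16.3.10) and (16.3.11) used above are easy to check numerically. Below is a small Monte Carlo sketch (ours, not from the monograph), with Uniform(0, 2) interarrival times, so that aτ = 1 and aτ,2 = 4/3; the sample mean and variance of ν(T) are compared with T + aτ,2/2 − 1 and (aτ,2 − 1)T.

```python
import random
import statistics

def nu(T, rng):
    """Number of renewal epochs in [0, T] for i.i.d. Uniform(0, 2)
    interarrival times (a_tau = 1, a_{tau,2} = E tau^2 = 4/3)."""
    t = 0.0
    n = 0
    while True:
        t += rng.uniform(0.0, 2.0)
        if t > T:
            return n
        n += 1

rng = random.Random(7)
T, N = 200.0, 20_000
sample = [nu(T, rng) for _ in range(N)]
mean = statistics.fmean(sample)
var = statistics.pvariance(sample)
# (16.3.10): E nu(T) ~ T + (a_{tau,2}/2 - 1) = T - 1/3
# (16.3.11): Var nu(T) ~ (a_{tau,2} - 1) T = T/3
print(mean, var)
```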
Let n± := (1 ± ε)T (note that x ≥ T in this part of the theorem) and
B := ∩_{j=1}^{n+} {ξj < y},  C := ∩_{j=1}^{n−} {τj < y},
where y ≤ x is such that r := x/y > max{1, x/T}. It is evident that
P(B̄) = O(T V(T)),  P(C̄) = O(T Vτ(T)).  (16.3.16)
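The bounds of the form (16.3.16) are union bounds: the probability that at least one of n+ jumps exceeds the truncation level is at most n+ times the tail probability. The tiny numerical sketch below is our illustration (the Pareto-type tail is an assumed example); it shows how close the exact probability and the bound are when the bound itself is small.

```python
def tail_prob_any_large(n, p):
    """Exact probability that at least one of n i.i.d. jumps is 'large'
    (each with tail probability p), together with the union bound n*p."""
    exact = 1.0 - (1.0 - p) ** n
    return exact, n * p

# Pareto-type tail V(t) = t^{-alpha}; truncation level of order T.
alpha, T = 1.5, 10_000
n_plus = int(1.1 * T)          # n_+ = (1 + eps) T with eps = 0.1
p = float(T) ** (-alpha)       # V(y) with y = T
exact, bound = tail_prob_any_large(n_plus, p)
print(exact, bound)            # both are O(T * V(T)) = O(T^{1 - alpha})
```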
By virtue of Lemma 16.2.8(i), it suffices to estimate the probability of the event D := {S(T) ≥ x; ν(T) ≤ n+}. It is obvious that
P(D) = P(DB̄C̄) + P(DB̄C) + P(DB) =: P1 + P2 + P3,  (16.3.17)
where
P1 ≤ P(B̄) P(C̄) = O(T² V(T) Vτ(T))  (16.3.18)
and
P2 = P(DB̄C; ν(T) < n−) + P(DB̄C; ν(T) ≥ n−).  (16.3.19)
By Corollary 4.1.3, the first term on the right-hand side of the last equality does not exceed
P(B̄) P(ν(T) < n−; C) ≤ n+ V(y) P(t⁰_{n−} > εT; C) = O(T V(T)(T Vτ(εT))^r) = O(T² V(T) Vτ(T))
when ε vanishes slowly enough, while the second term does not exceed
P(DB̄; ν(T) ≥ n−) − P(DB̄C̄; ν(T) ≥ n−) = P(S(T) ≥ x, ν(T) ∈ [n−, n+]; B̄) + O(T² V(T) Vτ(T))
by virtue of the bound in (16.3.18). The first term on the right-hand side coincides, up to O(T² V²(T)), with P(S(T) ≥ x, ν(T) ∈ [n−, n+]) (we again make use of Corollary 4.1.3), and it can be estimated in exactly the same way as in the proof of part I of the theorem; this leads to the expression on the right-hand side of (16.3.2).
It remains to consider the term P3 in (16.3.17). When for the distribution of τ we assume only condition Fτ ∈ R, the required assertion can be derived in exactly the same way as in the proof of Theorem 16.2.1, II(iv). Now we turn to the case where we also impose condition [D(1,0)] on the function Vτ from (16.1.4) and require that E|ξ|³ < ∞. Introduce the notation
nT := T − x/q,  nT± := (1 ± ε)nT;
as usual, we will assume for simplicity that all these quantities (as well as T itself) are integers. We have
P3 = P(S(T) ≥ x, ν(T) ≤ n+; B) = Σ_{n≤n+} P(S⁰_n ≥ q(n − nT); B) P(ν(T) = n)
  = P(ν(T) < nT) − Σ_{n≤n+} [1(n < nT) − P(S⁰_n ≥ q(n − nT); B)] P(ν(T) = n).  (16.3.20)
Here, for the first term,
P(ν(T) < nT) = P(t⁰_{nT} > x/q) = nT Vτ(x/q)(1 + o(nT/x)).  (16.3.21)
This follows from Theorem 4.4.4 (for k = 1) and the fact that, by condition [D(1,0)] on the distribution tail of τ⁰ = τ − 1, one has the representation
Vτ⁰(t) := P(τ⁰ ≥ t) = Vτ(t + 1) = Vτ(t)(1 + O(1/t)).  (16.3.22)
So, we turn to the sums on the right-hand side of (16.3.20). Replacing them by the sums over n < nT− and over n ≥ nT− introduces two errors, E1 and E2, which we now bound; here one uses the fact that q(nT − n) ≥ qεnT uniformly in n < nT− when ε vanishes slowly enough. Using this relation and
the Chebyshev inequality, one obtains
E1 ≤ Σ_{n<nT−} P(S⁰_n < −qεnT) P(ν(T) = n) ≤ (c/(q²ε²nT²)) Σ_{n<nT−} n P(ν(T) = n) = o(Vτ(T)).
For the second error, we have
E2 = Σ_{nT+<n≤n+} P(S⁰_n > q(n − nT); B) P(ν(T) = n),
which is estimated by splitting the range of summation at n = δ1T, δ1 > 0. Further, for the contribution of the middle range, one obtains from (16.3.16) that
Σ_{nT−≤n≤nT+} P(···; B̄) P(ν(T) = n) ≤ P(B̄; ν(T) < nT+) = P(B̄) P(t⁰_{nT+} > T − nT+) = O(T² Vτ(T) V(T)) = o(Vτ(T)).
Further, put
z(u) := q(u − nT)/√(ud),  d = Var ξ,
and observe that, by the Berry–Esseen theorem (see e.g. Appendix 5 of [49]), the following asymptotic representations hold true uniformly in n ∈ [nT−, nT+]:
P(S⁰_n ≤ q(n − nT)) = Φ(z(n)) + O(nT^{−1/2}),
P(S⁰_n > q(n − nT)) = Φ(−z(n)) + O(nT^{−1/2}),
where Φ denotes the standard normal distribution function. Substituting these representations into the sums in (16.3.24) (now without the event B under the probability symbols), we see that, by Theorem 4.4.4 (in the case k = 1) and in view of (16.3.22), the contribution of the terms O(nT^{−1/2}) to these sums does not exceed
c nT^{−1/2} [P(ν(T) < nT+) − P(ν(T) < nT−)]
  = c nT^{−1/2} [P(t⁰_{nT+} > x/q − εnT) − P(t⁰_{nT−} > x/q + εnT)]
  = c nT^{−1/2} [nT+ Vτ(x/q − εnT)(1 + o(nT^{−1/2})) − nT− Vτ(x/q + εnT)(1 + o(nT^{−1/2}))].
Here the terms with o(nT^{−1/2}) will yield a term of order
O(nT^{−1/2} × nT Vτ(x) × o(nT^{−1/2})) = o(Vτ(T)),
while, owing to condition [D(1,0)] on the distribution tail Vτ(t) of the r.v. τ and in view of (16.3.22), the remaining main terms in the sum are equal to
c nT^{−1/2} [ nT (Vτ(x/q − εnT) − Vτ(x/q + εnT)) + εnT (Vτ(x/q − εnT) + Vτ(x/q + εnT)) ]
  = c(1 + ε) nT^{1/2} [ Vτ((x/q)(1 − qεnT/x)) − Vτ((x/q)(1 + qεnT/x)) ] + o(nT^{1/2} Vτ(T))
  = 2γ(c + o(1)) qε (nT^{3/2}/x) Vτ(x/q) + o(nT^{1/2} Vτ(T)) = o(nT^{1/2} Vτ(T)).
Therefore, up to negligibly small terms (i.e. terms of order not exceeding the error order stated in part II(ii) of the theorem), the sums in (16.3.24) are equal respectively to
J1 = Σ_{nT−≤n<nT} Φ(z(n)) P(ν(T) = n)
and a similar sum J2; after passing from summation over n to integration in s = n − nT, the former takes the form of an integral in s ∈ [−εnT, 0) with the factor Φ(qs/√((nT + s)d)) in the integrand.
Denoting the last integral by J1 and assuming for simplicity that nT + s is an integer, we note that, by Theorem 4.4.4 (for k = 1), the integrand in J1 is equal, by virtue of (16.3.22), to
[···] = P(t⁰_{nT+s} > x/q − s) − P(t⁰_{nT} > x/q)
  = (nT + s) Vτ(x/q − s)(1 + o(nT^{−1/2})) − nT Vτ(x/q)(1 + o(nT^{−1/2})).
It is clear that integrating the remainder terms will yield o(nT^{1/2} Vτ(T)), whereas the sum of the main terms is, by condition [D(1,0)], equal to
nT [Vτ(x/q − s) − Vτ(x/q)] + s Vτ(x/q − s)
  = nT (γqs/x) Vτ(x/q)(1 + o(1)) + s Vτ(x/q)(1 + o(1))
  = s (1 + γqnT/x) Vτ(x/q)(1 + o(1)),  −εnT ≤ s < 0.
Therefore, up to the term O(Φ(−qεnT d^{−1/2}) nT^{1/2} Vτ(T)) = o(nT^{1/2} Vτ(T)), one has
J1 = −(1 + γqnT/x) Vτ(x/q) ∫_{−εnT}^{0} Φ(qs/√((nT + s)d)) (1 + o(1)) s ds
  = −(1 + γqnT/x) Vτ(x/q) (√(nT d)/q) ∫_{−qεnT/√(nT d)}^{0} (1 + o(1)) v dΦ(v)
  = (1/q) √(nT d/(2π)) (1 + γqnT/x) Vτ(x/q) (1 + o(1)).
In a similar way we can establish that
J2 = (1/q) √(nT d/(2π)) (1 + γqnT/x) Vτ(x/q)(1 + o(1)) + o(nT^{1/2} Vτ(T)),
so that J1 − J2 = o(nT^{1/2} Vτ(T)). Theorem 16.3.1 is proved.
16.4 The crossing of arbitrary boundaries
In this section we consider the problem of the crossing of an arbitrary boundary {g(t); t ∈ [0, T]} by the trajectory of the process {S(t)}, when inf_{t≤T} g(t) tends to infinity fast enough, so that the event
GT := {sup_{t≤T}(S(t) − g(t)) ≥ 0}
belongs to the large deviation zone in which we are interested. We will again assume, without loss of generality, that the mean trend in the process {S(t)} is equal to zero (i.e. (16.1.16) holds true). Note that we have already considered the special case of a flat boundary g(t) ≡ x in Theorem 16.2.3. As everywhere in this chapter, an important factor in this situation is the possibility that the event GT may occur as a result of a very long renewal interval. To avoid cumbersome computations, we will exclude this possibility in the present section. For q ≤ 0 it simply does not exist, while in the case q > 0 we will require the 'gap' between the boundary g(t) and the trajectory of the deterministic linear drift qt, which appears in the definition of the process {S(t)}, to be wide enough:
inf_{0≤t≤T} (g(t) − qt) > δT  (16.4.1)
for a fixed δ > 0. In this case, in the absence of large jumps ξj and/or τj, the process S(t) = S_{ν(t)} + qt moves along the line of its mean trend (with a slope equal to zero), i.e. it stays in an εT-neighbourhood of its initial value, which is zero. The presence of a single large τj means that, over the respective renewal interval, the value of S(t) will increase linearly at the rate q > 0, while outside that interval it will just oscillate about the mean trend line. Therefore the above-mentioned condition (16.4.1) makes the probability that the process will cross the boundary g(t) during that long renewal interval very small. The presence of two or more large jumps τj and/or ξj will, as usual, be very unlikely. Therefore, as in the random walk case, the process {S(t)} can actually cross a high boundary only as a result of a single large jump ξj. At the time t of the jump it suffices if, instead of crossing the boundary g(t) itself, the process just exceeds the level
g∗(t) := inf_{t≤s≤T} g(s),
since afterwards its trajectory will again move along the (horizontal) mean trend
line and, at some time, will rise above the boundary, which by that time will have already 'dropped' down to the level g∗(t). As with the random walks, in the case of a general boundary we can only give the main term of the asymptotics. Owing to the 'averaging' of the jump epochs in the process {S(t)}, the answer will have the form of an integral with respect to the renewal measure dH(t) (see (16.4.2) below). As T → ∞, the renewal theorem allows one to replace dH(t) by aτ^{−1} dt.
Now we will state the main result of the section. Denote by G(x,K) the class of all measurable boundaries g(t) such that
x ≤ inf_{t≥0} g(t) ≤ Kx,  x > 0,  K ∈ (1, ∞).
Theorem 16.4.1. Let either condition [QT] or condition [ΦT] hold for the distribution of ξj, and let δ > 0 and K > 1 be arbitrary fixed numbers. Assume that the mean trend of the process {S(t)} is equal to zero.
I. In the case q ≤ 0, the relation
P(GT) = (1 + o(1)) ∫₀^T V(g∗(t)) dH(t),  x → ∞,  (16.4.2)
holds uniformly in g ∈ G(x,K), x ≥ δT.
II. In the case q > 0, the following assertions hold true.
(i) The relation (16.4.2) holds uniformly in g ∈ G(x,K) for
x∗ := inf_{t≤T}(g(t) − qt) ≥ δT.
(ii) If Fτ(t) = o(V(t)) as t → ∞ then all the assertions stated in part I of the theorem remain true.
Remark 16.4.2. It is not difficult to see that, as T → ∞, one has the relation
∫₀^T V(g∗(t)) dH(t) ∼ (1/aτ) ∫₀^T V(g∗(t)) dt  (≍ T V(x))  (16.4.3)
uniformly in g ∈ G(x,K), so that in this case the integral on the right-hand side of (16.4.2) can be replaced by the expression on the right-hand side of (16.4.3). Indeed, assuming for simplicity that the functions H(t) and V(g∗(t)) have no common jump points, and integrating twice by parts (recall that g∗(t) is a non-decreasing function, so that V(g∗(t)) is a function of bounded variation), we obtain by the renewal theorem that the integral on the left-hand side of (16.4.3)
equals
V(g∗(T)) H(T) − ∫₀^T H(t) dV(g∗(t))
  = (T/aτ) V(g∗(T)) + o(T V(x)) − (1/aτ) ∫₀^T t dV(g∗(t)) − ∫₀^T [H(t) − t/aτ] dV(g∗(t))
  = (1/aτ) ∫₀^T V(g∗(t)) dt + o(T V(x)) − ∫₀^T [H(t) − t/aτ] dV(g∗(t)).
One can easily see that the last integral is o(T V(x)) as well, since by the renewal theorem H(t) − t/aτ = o(t) as t → ∞, so that
| ∫₀^T [H(t) − t/aτ] dV(g∗(t)) | ≤ ( ∫₀^{εT} + ∫_{εT}^{T} ) |H(t) − t/aτ| |dV(g∗(t))|
≤ cεT V(g∗(0)) + o(T V(g∗(εT))) = o(T V(x))
when ε vanishes slowly enough.
The following corollary follows immediately from Theorem 16.4.1 and the definition of an r.v.f.
Corollary 16.4.3. Let f(s), s ∈ [0, 1], be a given measurable function taking values in an interval [c1, c2] ⊂ (0, ∞). Then, for the boundary
g(t) := x f(t/T),  0 ≤ t ≤ T,
under the respective conditions on x from Theorem 16.4.1 and, in the case q > 0, under the condition that
x∗ = inf_{0≤s≤1}(x f(s) − qsT) ≥ δT,
we have
P(GT) ∼ (T V(x)/aτ) ∫₀¹ f∗^{−α}(s) ds  as T → ∞,
where f∗(s) := inf_{s≤u≤1} f(u), 0 ≤ s ≤ 1.
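The limiting constant ∫₀¹ f∗^{−α}(s) ds in Corollary 16.4.3 is easy to evaluate numerically: on a grid, f∗ is just a running minimum taken from the right. The sketch below is our illustration (the function name is ours); it checks the increasing case f(s) = 1 + s, where f∗ = f and the integral equals 1/2 for α = 2.

```python
def limit_constant(f, alpha, m=10_000):
    """Numerical value of  int_0^1 f_*(s)^(-alpha) ds  with
    f_*(s) = inf_{s <= u <= 1} f(u), computed on a uniform grid."""
    grid = [f(k / m) for k in range(m + 1)]
    # running minimum from the right gives f_* on the grid in O(m)
    for k in range(m - 1, -1, -1):
        grid[k] = min(grid[k], grid[k + 1])
    # left-endpoint Riemann sum
    return sum(v ** (-alpha) for v in grid[:-1]) / m

# f(s) = 1 + s: int_0^1 (1 + s)^(-2) ds = 1 - 1/2 = 1/2
c = limit_constant(lambda s: 1.0 + s, 2.0)
print(c)
```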
Indeed, one just has to observe that
V(g∗(t)) = V(x f∗(t/T)) = (V(x f∗(t/T))/V(x)) V(x) = f∗^{−α}(t/T) (L(x f∗(t/T))/L(x)) V(x) = (1 + o(1)) f∗^{−α}(t/T) V(x)
uniformly on [0, T ] by virtue of Theorem 1.1.2. In contrast with Theorems 16.2.1 and 16.2.3, we do not consider here the case when q > 0 and x∗ → ∞ but x∗ = o(T ). As we saw in those theorems, in this case there appears the possibility that the boundary will be crossed owing to a combination of two rare events: first, at the very beginning of the interval [0, T ], the process exceeds a linear boundary of the form x∗ + qt and then, during a very long renewal interval, the process is taken above the boundary g(t) by the deterministic positive linear drift. It is clear that considering such a possibility in the case of an arbitrary boundary will be much more difficult than in the situation of Theorems 16.2.1 and 16.2.3. Therefore, in the present exposition we will restrict ourselves to analysing the above situation in the important special case of a linear boundary; this will be done in the next section. Another exceptional case is when the terminal point g(T ) of the boundary is the ‘lowest’ one, i.e. the function g(t) attains there its minimal value on [0, T ]: g∗ (t) = g(T ) =: x
for all t ∈ [0, T].  (16.4.4)
Then from the evident relations P(S(T) ≥ x) ≤ P(GT) ≤ P(S̄(T) ≥ x) and Theorems 16.2.1 and 16.2.3 we immediately obtain the following result.
Corollary 16.4.4. Under the assumptions of Theorem 16.2.1, all the asymptotic representations established in that theorem for P(S(T) ≥ x) will remain true for P(GT) uniformly over all the boundaries satisfying (16.4.4).
Proof of Theorem 16.4.1. As usual, we assume that aτ = 1, so that aξ = −q due to (16.1.16). Put t̂j := tj for j ≤ ν(T), t̂_{ν(T)+1} := T, and let τ̂_{j+1} := t̂_{j+1} − t̂_j, j ≤ ν(T). For j ≤ ν(T), introduce the variables
g̃(j) :=
inf_{t̂j ≤ t < t̂_{j+1}} g(t)
(together with the analogous quantities g̃±(j) and the events G, G±, D figuring in (16.4.6), (16.4.7) and (16.4.9)). The bound (16.4.9) for P(D̄) follows from the inclusion
{sup_{t≤T}(ν(t) − t) > εx} ⊂ {min_{k≤εx+T} t⁰_k < −εx}
and Lemma 16.2.6(i). Indeed, for n := εx + T we have
z := εx ≥ bε(εx + T) ≡ bε n  with  bε := εδ/(1 + εδ).
Since ε → 0 arbitrarily slowly, both bε ≤ z/n and Λθ(bε) (see (16.2.4)) will also have the same property. Therefore it follows from (16.2.5) and the inequality Λθ(z/n) ≥ Λθ(bε) that
P(min_{k≤εx+T} t⁰_k < −εx) ≤ exp{−(εx + T)Λθ(bε)} = o(T V(x))
if ε tends to 0 slowly enough.
I. The case q ≤ 0. From (16.4.6) we conclude that
P(GT) ≤ P(G+) ≤ P(G+D) + P(D̄),  where
P(G+D) = P(∪_{j≤ν(T)} {S⁰_j ≥ g̃+(j) − q(t̂j − j)}; D) = P(∪_{j≤ν(T)} {S⁰_j ≥ (1 + o(1)) g̃+(j)}; D),  (16.4.10)
since |t̂j − j| ≤ εx, j ≤ ν(T), on the event D, whereas g̃+(j) ≥ x for any boundary g ∈ G(x,K). Now put
g̃∗+(j) := min_{j≤k≤ν(T)} g̃+(k)
and note that one has the following double-sided inequality on the event D:
g∗(t̂j) ≤ g̃∗+(j) ≤ g∗(t̂j) + 2|q|εx,  j ≤ ν(T).
Using this observation and the fact that V((1 + o(1))t) = (1 + o(1))V(t) as t → ∞, we see that, by Theorems 3.6.4 and 4.6.7, the conditional probability of the event ∪_{j≤ν(T)}{···} appearing in the last line of (16.4.10), given t1, t2, ..., is equal on the event D to
(1 + o(1)) Σ_{j≤ν(T)} V((1 + o(1)) g̃∗+(j)) = (1 + o(1)) Σ_{j≤ν(T)} V(g̃∗+(j))
  = (1 + o(1)) Σ_{j≤ν(T)} V(g∗(tj)) ≤ ν(T) V(x).  (16.4.11)
To find the unconditional probability of the event of interest, it only remains to find the expectation of the expression above. To this end, first note that, for any bounded measurable function h(t), t ∈ (0, ∞),
E Σ_{0<j≤ν(T)} h(tj) = E ∫₀^T h(t) dν(t) = ∫₀^T h(t) dH(t),  T < ∞.
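The identity E Σ_{0<j≤ν(T)} h(tj) = ∫₀^T h(t) dH(t) is easy to check by simulation. In the sketch below (our illustration, not from the monograph) the renewal process is Poisson with rate 1, so that H(t) = t and, for h(t) = t², the expectation equals T³/3.

```python
import random

def sum_h_over_renewals(T, h, rng):
    """One realization of sum_{j <= nu(T)} h(t_j) for a rate-1 Poisson
    renewal process on [0, T] (so H(t) = E nu(t) = t)."""
    t = total = 0.0
    while True:
        t += rng.expovariate(1.0)
        if t > T:
            return total
        total += h(t)

rng = random.Random(42)
T, N = 10.0, 100_000
est = sum(sum_h_over_renewals(T, lambda t: t * t, rng) for _ in range(N)) / N
# E sum h(t_j) = int_0^T t^2 dH(t) = int_0^T t^2 dt = T^3/3
print(est)  # should be close to 1000/3
```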
Hence
E[(1 + o(1)) Σ_{j≤ν(T)} V(g∗(tj))] ∼ ∫₀^T V(g∗(t)) dH(t) ≤ cT V(x)  (16.4.12)
(using g(t) Kx). Moreover, since the r.v.’s ν(T )/T are uniformly integrable (see e.g. p. 56 of [138] or our argument following (16.2.26)) and one has (16.4.9), we have & % V (g∗ (tj )); D E (1 + o(1)) jν(T )
cV (x)E(ν(T ); D) = o(T V (x)). (16.4.13) From this and (16.4.10)–(16.4.12) we conclude that T P(GT ) (1 + o(1))
V (g∗ (t)) dH(t).
(16.4.14)
0
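In the simplest case when the renewals form a Poisson flow of unit rate (so that a_τ = 1 and H(t) = t), the identity E ∑_{0<j≤ν(T)} h(t_j) = ∫_0^T h(t) dH(t) used above can be checked by direct simulation. The sketch below is illustrative only (the choice h(t) = t² and all numerical parameters are arbitrary, not from the text); it compares a Monte Carlo estimate of the left-hand side with the exact integral ∫_0^T t² dt = T³/3:

```python
import random

def mean_renewal_sum(h, T, n_trials, seed=0):
    """Estimate E sum_{j : t_j <= T} h(t_j) for a rate-1 Poisson flow of renewals."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        t = rng.expovariate(1.0)          # first renewal epoch
        while t <= T:
            total += h(t)
            t += rng.expovariate(1.0)     # i.i.d. exponential inter-renewal times
    return total / n_trials

T = 10.0
estimate = mean_renewal_sum(lambda t: t * t, T, n_trials=20000)
exact = T ** 3 / 3                        # = int_0^T t^2 dH(t) with H(t) = t
print(estimate, exact)
```

For a general renewal flow only H changes; the identity itself is distribution-free.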
At the same time, from (16.4.6) it also follows that

P(G_T) ≥ P(G) = P(GD) + P(G D̄ D_+) + O(P(D̄_+)).   (16.4.15)

Repeating the argument used to estimate the probability P(G^+ D), we obtain

P(GD) = (1 + o(1)) ∫_0^T V(g_*(t)) dH(t).   (16.4.16)

Further, on the event D̄ D_+, for sufficiently large x we have

−q(t_j − j) = |q| t^0_j ≥ −|q|εx ≥ −(1/2) ĝ(j),   j ≤ ν(T),

so that

P(G D̄ D_+) = P(∪_{j≤ν(T)} {S^0_j ≥ ĝ(j) − q(t_j − j)}; D̄ D_+)
  ≤ P(∪_{j≤ν(T)} {S^0_j ≥ (1/2) ĝ(j)}; D̄ D_+)
  ≤ cV(x) E(ν(T); D̄) = o(T V(x)),   (16.4.17)

cf. (16.4.13). From (16.4.15)–(16.4.17) and (16.4.9) we obtain

P(G_T) ≥ (1 + o(1)) ∫_0^T V(g_*(t)) dH(t),

which, together with (16.4.14), establishes (16.4.2) in the case under consideration.

II. The case q > 0. (i) Using the left-hand side of (16.4.7) and repeating the argument proving (16.4.14), we obtain

P(G_T) ≤ P(G) ≤ P(GD) + o(T V(x)) ≤ (1 + o(1)) ∫_0^T V(g_*(t)) dH(t).

On the other hand, from the right-hand side of (16.4.7) we find that

P(G_T) ≥ P(G^−) = P(G^− D) + P(G^− D̄).
(16.4.18)
Since on the event D one has τ̂_{j+1} ≤ 2εx and |t_j − j| ≤ εx, we obtain

G^− D = D ∩ ∪_{j≤ν(T)} {S^0_j ≥ ĝ^−(j) − q(t_j − j)},

where, on the event D,

ĝ^−(j) − q(t_j − j) = inf_{t̂_j ≤ t < t̂_{j+1}} (g(t) − qt) + qj + O(|q|εx) ≥ cx.   (16.4.21)

Hence, repeating the argument that led to (16.4.16), we find that P(G^− D) = (1 + o(1)) ∫_0^T V(g_*(t)) dH(t). The only remaining step is to estimate the probability P(G^− D̄) in (16.4.18). Assuming that F_τ(t) = o(V(t)), one can easily see that P(D̄) = o(T V(x)) (cf. the proof of Theorem 16.2.1, I(iii); one just has to replace |ν(T) − T| by sup_{t≤T} |ν(t) − t|, while all the estimates used will remain true). This establishes the required assertion.
16.5 The case of linear boundaries

In this section we will consider a more special problem on the asymptotic behaviour of the probability of the event G_T introduced in (16.1.1), in the case of linear boundaries of the form {g(t) = x + gt; 0 ≤ t ≤ T} as x → ∞. This behaviour will depend on the relationship between the parameters of the problem, and in some particular cases it will immediately follow from the results derived above. Therefore we will give a general picture of what happens in that special case and will formulate new results only in the form of several separate theorems. As before, we assume that the zero mean trend condition (16.1.16) is met.

First of all, note that the case T = O(1) is covered by Theorem 16.4.1 on the crossing of arbitrary boundaries. In this case the conditions of Theorem 16.4.1 reduce to the requirement that the distribution of ξ_j satisfies (1.3); when α ∈ (1, 2) the additional requirement is that the condition W(t) ≤ cV(t), t > 0, holds for the majorant of the left tail, and when α > 2 it is that E ξ_j^2 < ∞. The relation (16.4.2) established in Theorem 16.4.1 now takes the form

P(G_T) ∼ H(T) V(x),   x → ∞.

Further, the case g ≤ 0 is covered by Corollary 16.4.4, which holds for boundaries satisfying (16.4.4). Clearly, in that case it suffices to consider the situation when T < ∞ (since, by virtue of (16.1.16) and the law of large numbers, one has P(G_T) = 1 when T = ∞). Then, according to Corollary 16.4.4,

P(G_T) ∼ P(S(T) ≥ x + gT),   (16.5.1)
provided that the conditions of Theorem 16.2.1 are satisfied (with x replaced in them by x_0 := x + gT), in which case the right-hand side of (16.5.1) can be replaced by the representations established in the above-mentioned theorem for the probability P(S(T) ≥ x_0).

Thus, it remains only to consider the situation of an increasing linear boundary g(t) = x + gt, g > 0, when either T → ∞ or T = ∞. The latter case is evidently covered by Theorem 16.1.2. Indeed, for the GRP S′(t) := S(t) − gt with a negative mean trend (a′ = a_ξ + (q − g)a_τ = −g a_τ by (16.1.16)) we have G_∞ = {S̄′(∞) ≥ x}, so that the probability P(G_∞) will be asymptotically equivalent, under the respective conditions, to the right-hand sides of the relations (16.1.9), (16.1.10) with |a| replaced in them by g a_τ. Note that the condition q ≤ 0 (q > 0) of Theorem 16.1.2 will then translate into r := g − q ≥ 0 (r < 0 respectively).

Now we turn to the case of a (finite) T → ∞. As one can expect from the earlier results of the present chapter, the asymptotic behaviour of the probability P(G_T) will depend, generally speaking, on whether the condition g ≥ q is met (this condition excludes the possibility that the boundary will be crossed in any way other than by one of the jumps ξ_j).

• The case r = g − q ≥ 0

Write

P(G_T) = P(G_∞) − P(G_∞ Ḡ_T).   (16.5.2)

The behaviour of the first probability on the right-hand side of (16.5.2) was considered in Theorem 16.1.2. On the event G_∞ Ḡ_T the crossing of the boundary g(t) = x + gt occurs (and, owing to the assumption r ≥ 0, it occurs at one of the jumps ξ_j) at a time t ∈ (T, ∞); as we know, with a very high probability this will be a 'large' jump. Since a combination of two or more 'large' jumps is quite unlikely, on this event the trajectory {S(t); t ≤ T} will, with high probability, deviate only 'moderately' from the (horizontal) mean trend line. Therefore one can expect that the relation S(T) = o(x + gT) will hold true, and hence that the probability that the boundary x + gt, t > T, is crossed will be close to the probability of the crossing of the boundary (x + gT) + gt, t ≥ 0, by the original process {S(t); t ≥ 0}, i.e. it will be close to the quantity (x + gT)V(x + gT)/((α − 1) g a_τ) by virtue of Theorem 16.1.2. More precisely, the following theorem holds true.
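The 'single large jump' heuristic invoked here is easy to observe numerically: for a centred random walk with a regularly varying jump distribution, almost all sample paths that reach a high level x contain one jump of order x. The sketch below is purely illustrative (the Pareto tail P(Y > y) = y^{−α} with α = 2.5 and all thresholds are arbitrary choices, not taken from the text); it estimates the level-crossing probability, compares it with the one-jump approximation nV(x), and records how often a crossing path contains a jump exceeding x/2:

```python
import random

ALPHA = 2.5                       # tail index: P(Y > y) = y**(-ALPHA), y >= 1
MEAN_Y = ALPHA / (ALPHA - 1.0)    # E Y = 5/3, so xi = Y - E Y is centred
N, X, TRIALS = 30, 40.0, 100_000

rng = random.Random(12345)
hits = big_jump_hits = 0
for _ in range(TRIALS):
    s = 0.0
    max_jump = 0.0
    crossed = False
    for _ in range(N):
        y = (1.0 - rng.random()) ** (-1.0 / ALPHA)   # Pareto(ALPHA) on [1, inf)
        max_jump = max(max_jump, y)
        s += y - MEAN_Y
        if s >= X:
            crossed = True
    if crossed:
        hits += 1
        if max_jump >= X / 2:
            big_jump_hits += 1

n_v_x = N * X ** (-ALPHA)         # one-large-jump approximation n V(x)
print(hits / TRIALS, n_v_x, big_jump_hits / max(hits, 1))
```

The estimated crossing probability is of the same order as nV(x), and the overwhelming majority of crossing paths contain a jump comparable to the level itself, which is exactly the mechanism described in the text.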
Theorem 16.5.1. Let g > 0, r = g − q ≥ 0, let either condition [Q_T] or condition [Φ_T] be satisfied for the distribution of ξ_j, and let the mean trend in the process {S(t)} be equal to zero. Then, as x → ∞ and T → ∞,

P(G_T) ∼ (1/((α − 1) g a_τ)) [x V(x) − (x + gT) V(x + gT)].   (16.5.3)

Proof. If T = O(x) then we are in the situation of Theorem 16.4.1 (I or II(i)); moreover, x_* ≡ inf_{t≤T} (g(t) − qt) = x ≥ δT for some fixed δ > 0, g_*(t) = g(t) = x + gt and T → ∞. Therefore, by virtue of the above-mentioned theorem and (16.4.3), one has

P(G_T) ∼ (1/a_τ) ∫_0^T V(x + gt) dt ∼ (1/((α − 1) g a_τ)) [x V(x) − (x + gT) V(x + gT)]

by Theorem 1.1.4(iv). Now if x = o(T) then the desired assertion will follow from (16.5.2). Indeed, as we have already noted, by Theorem 16.1.2 one has

P(G_∞) ∼ x V(x)/((α − 1) g a_τ).   (16.5.4)
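For the pure power function V(u) = u^{−α} (a slowly varying factor would only turn the relation into an asymptotic one) the Karamata-type step used in the proof is an exact identity: ∫_0^T V(x + gt) dt = [x V(x) − (x + gT) V(x + gT)]/((α − 1) g). A quick numerical check, with arbitrary illustrative parameter values:

```python
# Check: the integral of V(x + g t) over [0, T] equals
# (x V(x) - (x + gT) V(x + gT)) / ((alpha - 1) g) for V(u) = u**(-alpha).
ALPHA, X, G, T = 2.5, 50.0, 1.5, 200.0

def V(u):
    return u ** (-ALPHA)

# composite midpoint rule for the left-hand side
M = 200_000
dt = T / M
lhs = sum(V(X + G * (k + 0.5) * dt) for k in range(M)) * dt

rhs = (X * V(X) - (X + G * T) * V(X + G * T)) / ((ALPHA - 1.0) * G)
print(lhs, rhs)
```

The two values agree to high accuracy, confirming the closed-form antiderivative behind Theorem 1.1.4(iv) in this special case.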
Further, putting x_0 := (x + gT)/2 we conclude that, on the event {S̄(T) ≤ x_0}, the crossing of the boundary x + gt in the time interval (T, ∞) is only possible at a time t ≥ t_{ν(T)+1}, and, moreover,

S(t_{ν(T)+1}) ≤ S(T) + (t_{ν(T)+1} − T) q + ξ_{ν(T)+1} ≤ x_0 + (t_{ν(T)+1} − T) g + ξ_{ν(T)+1}.

Therefore the aforementioned crossing also implies that, for some t ≥ 0,

S(t + t_{ν(T)+1}) − S(t_{ν(T)+1}) ≥ x + (t_{ν(T)+1} + t) g − x_0 − (t_{ν(T)+1} − T) g − ξ_{ν(T)+1} = x_0 − ξ_{ν(T)+1} + gt.

Hence

P(G_∞ Ḡ_T) ≤ P(sup_{t>T} (S(t) − g(t)) ≥ 0; S̄(T) ≤ x_0) + P(S̄(T) > x_0)
  ≤ P(sup_{t≥0} [S(t + t_{ν(T)+1}) − S(t_{ν(T)+1}) − (x_0 − ξ_{ν(T)+1} + gt)] ≥ 0) + O(T V(x_0))
  ≤ P(ξ ≥ x_0/2) + P(sup_{t≥0} [S(t) − (x_0/2 + gt)] ≥ 0) + O(T V(x_0))
  = O(V(x_0) + x_0 V(x_0) + T V(x_0)) = o(x V(x))
(here we used Theorem 16.2.1 to bound P(S̄(T) > x_0), Theorem 16.1.2 to bound the last probability in the formula, and also the obvious relation (x_0 + T) V(x_0) = O(T V(T)) = o(x V(x))). So we obtain from (16.5.2) that P(G_T) ∼ P(G_∞), where the right-hand side was given in (16.5.4). Since (x + gT) V(x + gT) = o(x V(x)) in the case under consideration, the assertion (16.5.3) remains true.

• The case r = g − q < 0

We have

x_* ≡ inf_{t≤T} (g(t) − qt) = (x + gT) − qT = x + rT.

If x_* ≥ δT for a fixed δ > 0 then we are in the situation of Theorem 16.4.1, and hence (16.5.3) will hold true (cf. Theorem 16.5.1). Now suppose that x_* → ∞ but x_* = o(T). Along with the events D and D_+ from (16.4.8), introduce the event

D_− := {inf_{t≤T} (ν(t) − t) ≥ −εT}

(note that x ≤ T in the case under consideration, so that in the definition of the events D and D_± one can replace x with T, and this we have done). Now write
(note that x T in the case under consideration, so that in the definition of the events D and D± one can replace x with T , and this we have done). Now write P(GT ) = P(GT D) + P(GT D),
(16.5.5)
where, using the events G and G± introduced in (16.4.5) and arguing as in the proof of Theorem 16.4.1, II, we obtain that T (1 + o(1))
V (g∗ (t)) dH(t) P(GD) P(GT D) 0
T P(G− D) (1 + o(1))
V (g∗ (t)) dH(t). 0
In our case of a linear boundary g(t) = x + gt with g > 0, one clearly has g∗ (t) = g(t), so that the asymptotic behaviour of the integral (and therefore that of the probability P(GT D)) will be given by the right-hand side of (16.5.3). To estimate the second probability on the right-hand side of (16.5.5), observe that D = D− D+ and therefore D = D− D+ + D + . Hence P GT D = P GT D− D+ + P GT D+ . The last term does not exceed P(D+ ) = o(T V (x)), according to (16.4.9), so that we just have to bound the probability P(GT D− D+ ) = P sup Sν(t) + qt − (x + gt) 0; D− D+
tT
$ # 0 − (x + rt + qν(t)) 0; D− D+ = P sup Sν(t) tT
(16.5.6)
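The second equality in (16.5.6) is a straightforward rewriting in terms of the centred sums S^0_n := S_n − n a_ξ = S_n + qn (recall that a_ξ = −q when a_τ = 1); for the reader's convenience:

```latex
S_{\nu(t)} + qt \ge x + gt
\;\Longleftrightarrow\;
S^{0}_{\nu(t)} - q\,\nu(t) + qt \ge x + gt
\;\Longleftrightarrow\;
S^{0}_{\nu(t)} \ge x + (g - q)\,t + q\,\nu(t) = x + rt + q\,\nu(t).
```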
(assuming, as usual, for simplicity that a_τ = 1). Here it will be convenient for us to estimate the suprema over t ≤ (1 − ε)T and over (1 − ε)T < t ≤ T separately.

Since q > 0, for any ε > 0, on the event D_+ one clearly has the inequality

sup_{t≤(1−ε)T} [S^0_{ν(t)} − (x + rt + qν(t))] ≤ sup_{t≤(1−ε)T} [S^0_{ν(t)} − (x + rt)]
  ≤ sup_{t≤(1−ε)T} S^0_{ν(t)} − x_* + rεT ≤ max_{n≤T} S^0_n − |r|εT.

Therefore,

P(sup_{t≤(1−ε)T} [S^0_{ν(t)} − (x + rt + qν(t))] ≥ 0; D̄_− D_+)
  ≤ P(max_{n≤T} S^0_n ≥ |r|εT; D̄_− D_+)
  ≤ P(max_{n≤T} S^0_n ≥ |r|εT) P(D̄_−) = o(T V(x)),
cf. (16.2.29) (and using (16.4.9) again). At the same time, on the event {ν((1 − ε)T) ≥ εT} D_+ we have the bound sup_{(1−ε)T<t≤T} [···] ⊂ D̄_−; (3) removing the event D_+ from under the above-mentioned probability symbol will introduce, owing to (16.4.9), an error of order o(T V(x)).

The asymptotic behaviour of the probability on the right-hand side of (16.5.14) can be derived similarly to the way in which we evaluated the probability of the
event (16.5.13); this leads to the inequality

P_2 ≤ (1 + o(1)) ((q − g)/((γ − 1) g)) [(x/(q − g)) V_τ(x/(q − g)) − ((x + gT)/q) V_τ((x + gT)/q)] + o(T V(x)).

Therefore

P(G_T D̄_− D_+) = o(T V(x)) + (1 + o(1)) ((q − g)/((γ − 1) g)) [(x/(q − g)) V_τ(x/(q − g)) − ((x + gT)/q) V_τ((x + gT)/q)].

Together with (16.5.5), (16.5.10) and (16.5.11), the above relation establishes the following result.

Theorem 16.5.3. Let g > 0, r = g − q < 0, x_* = x + rT ≤ −δT and x ≥ δT for a fixed δ > 0, T → ∞. Assume that the distribution of the jumps ξ_j satisfies either condition [Q_T] or condition [Φ_T] and that the mean trend in the process {S(t)} is equal to zero. Then the following assertions hold true.

(i) If F_τ(t) = o(V(t)) as t → ∞ then the relation (16.5.3) holds true for the probability P(G_T).

(ii) If F_τ ∈ R, γ > 1 (see (16.1.4)) then

P(G_T) ∼ (1/((α − 1) g a_τ)) [x V(x) − (x + gT) V(x + gT)]
       + ((q − g)/((γ − 1) g a_τ)) [(x/(q − g)) V_τ(x/(q − g)) − ((x + gT)/q) V_τ((x + gT)/q)].
Bibliographic notes
Chapter 1 The concept of slow variation first appeared in [158] (where only continuous functions were considered). Somewhat earlier, the close concept of a very slowly oscillating sequence (v.s.o.s.; in German, sehr langsam oszillirende Folge) had been introduced in [249, 250] when studying various types of convergence in the context of extending the conditions of applicability of Tauberian theorems. By definition, a v.s.o.s. {s_n} is characterized by the property that, for any subsequence of indices k = k(n) such that k/n → u ∈ (0, ∞) as n → ∞, one has s_k − s_n → 0 (i.e. l_k/l_n → 1 for l_n := e^{s_n}). In [250] analogues of Theorems 1.1.2 and 1.1.4(ii) for v.s.o.s.'s were obtained, and it was noted that the very idea of a v.s.o.s. can be easily extended to functions of a continuous variable. The assertions of Theorems 1.1.2 and 1.1.3 were established in [158] for continuous functions; for arbitrary measurable functions they were first obtained in [174]. The proofs of these theorems presented in this book are close to those in [32] (see ibid. for other versions of the proofs and further historical comments). According to [251], the traditional notation L for s.v.f.'s goes back to [158], although the notation x^α L(x) for functions displaying regular variation type behaviour can be seen already in [250]. The publication of the books [130] and [122] played an important role in introducing r.v.f.'s into probability theory. For a more detailed account of the properties of s.v.f.'s and r.v.f.'s see [32, 251]. The class of subexponential distributions was introduced in [81]. This paper also contained the proof of the first part of Theorem 1.2.8. Our proof of Theorem 1.2.12(iii) mostly follows the exposition in [15] (the term 'subexponential distribution' apparently appeared in that monograph). According to [32], subexponential distributions on the whole real line were first considered in [137].
Theorem 1.2.21(i) was obtained in [133, 166]; Theorem 1.2.21(ii) was established in [166]. In conjunction with the relations l′(t) → 0 and t l′(t) → ∞ as t → ∞, the sufficient condition for a distribution G to belong to the class S (and simultaneously to the narrower class S^*) from Corollary 1.2.32 was obtained in [166] (as a corollary of Theorem 3 in that paper). Along with a number of other conditions sufficient for G ∈ S, it was presented in [133] (see ibid. for a more complete bibliography); see also [81, 266, 229, 114]. Surveys of the main properties of 'heavy-tailed' distributions can be found in [113, 9, 252, 235]. A proof of Theorem 1.3.2 can be found in [83]. A more general case was considered in [242]. A proof of Theorem 1.4.1 (in a somewhat more general form) can be found in [84]. Locally subexponential distributions were studied in [10]; some results of that paper are close to the results of §§ 1.3 and 1.4. For instance, Corollary 2 and Assertion 4 are close to our Theorem 1.3.4, while Theorem 2 is close to our Theorem 1.4.2. Some results presented in §§ 1.2–1.4 were established in [55]. The general theory of the convergence of sums of independent r.v.'s was presented in [130] (see also [122]; for results in the multivariate case, historical comments and further
bibliography, see e.g. [5, 189]). The role of the contribution of the maximum summands to the sum S_n in the case of convergence to a non-Gaussian stable distribution was studied in [97, 7] (see also [182, 100, 98]). The monographs [286, 86, 273, 245, 153] are concerned with stable distributions and processes; see also [256, 211, 25, 246]. The invariance principle (Theorem 1.6.1) was established in [107] (and in [230] for non-identically distributed summands ξ_j; for more details, see e.g. [28, 74]). The functional limit theorem on convergence to stable processes was obtained in [255]; conditions for the convergence of arbitrary functionals of the processes of partial sums were found in [41, 43, 75]. The law of the iterated logarithm was first obtained in the case of the uniform discrete distribution of ξ in [161]. Then this result was extended in [171] to the case of independent non-identically distributed bounded r.v.'s. In the case of i.i.d. r.v.'s with finite variance the law of the iterated logarithm was established in [140], and the converse result was obtained in [261]. The assertion of Theorem 1.6.6 was first obtained in [147] in the case when F belongs to the domain of normal attraction of a stable law F_{α,ρ}, α < 2, |ρ| < 1; it was then generalized in [274]. Detailed surveys of results related to the law of the iterated logarithm can be found in our § 3.9, in § 5.5.4 of [260] and in [31].
Chapters 2 and 3 Theorem 2.5.5 was established in [119]. An alternative proof of this theorem, using Kingman's subadditive theorem, was given in [102]. The assertion of Theorem 2.6.5 was obtained in [197]. In [118] a local renewal theorem for ξ ≥ 0 was obtained in the case when condition [·, =] holds with 1/2 < α < 1 (for α ∈ (0, 1/2] the paper gave an upper bound only). In the lattice case, these results were extended in [128, 279] to the case of r.v.'s ξ assuming both negative and positive values. Upper bounds for the distributions of S_n in terms of truncated moments and without assuming the existence of majorants were obtained in [201, 127] (and also in [206]). The asymptotics P(S_n ≥ x) ∼ nV(x) for x ≥ cn were established in [207]. Some results of Chapters 2 and 3 were obtained in [51, 54, 57, 66]. An analogue of the uniform representation (4.1.2) for x > n^{1/α} in the case when F belongs to the domain of attraction of a stable law with exponent α was obtained in [238, 239]. An analogue of the law of the iterated logarithm (the assertion of Corollary 3.9.2) was first obtained in [82] for the case of symmetric stable summands (it was noted in that paper that, as had been pointed out by V. Strassen, this assertion immediately follows from the results of [163], and also that, with the help of the results of [96], it could be extended to the case of distributions F that belong to a special subset of the domain of normal attraction of F_{α,ρ}, α < 2). For distributions F from the entire domain of attraction of a stable law F_{α,ρ}, α < 2, |ρ| < 1, this law was obtained in [147], while some refinements and extensions to it were established in [190]. See also the bibliography in [190, 164] and the detailed surveys of results related to the law of the iterated logarithm presented in § 5.5.4 of [260] and in [31]. Some assertions from § 3.9 were obtained in [51].
Chapter 4 For bibliographies concerning the asymptotic equivalence

P(S_n ≥ x) ∼ nV(x),   P(S̄_n ≥ x) ∼ nV(x)

under condition [·, =] with α > 2, see e.g. [225, 237, 191, 206]. The uniform representation (4.1.2) for x > √n under the minimal additional condition
that E(ξ²; |ξ| > t) = o(1/ln t), t → ∞, was established in Corollary 7 of [237]. In [206] the representation (4.1.2) was presented in Theorem 1.9 under the additional condition that E|ξ|^{2+δ} < ∞, δ > 0, with a reference to [194]; the latter, in fact, only dealt with the zone x ≥ n^{1/2} ln n (when (4.1.2) degenerates into (4.1.1)), but under no additional moment conditions. According to [191], the result presented in [206] was obtained in the doctoral thesis of A.V. Nagaev (On large deviations for sums of independent random variables, Institute of Mathematics of the Academy of Sciences of the UzSSR, Tashkent, 1970). In [225] the representation (4.1.2) was obtained under the assumption that F(t) = O(t^{−α}) as t → ∞ (Corollary 2). Under the additional assumption that F(t) = O(t^{−α}) as t → ∞, the representation (4.4.3) was established for x/√n → ∞ in Corollary 1 of [225]. Upper bounds for the distributions of S_n in terms of truncated moments and without assuming the existence of majorants were obtained in [201, 127] (and also in [206]). These inequalities are, in a sense, more general but also much more cumbersome. One cannot derive from them the assertions of Corollary 4.1.4 (even for the sums S_n). Some lower bounds for P(S_n ≥ x) were found in [205]. Lower bounds for P(|S_n| ≥ x) were obtained in [208]. In [194] the following integral and integro-local theorems for the distribution of S_n were presented. Let F ∈ R, α > 2, x/√ln x > √n and n → ∞. Then P(S_n > x) ∼ nV(x). If, moreover, P(ξ ∈ Δ[t)) ∼ αΔV(t)/t as t → ∞ and 0 < c ≤ Δ = o(t), then P(S_n ∈ Δ[x)) ∼ nαΔV(x)/x. Under the assumption that the r.v. ξ = ξ⁰ − Eξ⁰ was obtained by centring a non-negative r.v. ξ⁰ ≥ 0, the first of the asymptotic relations (4.8.2) was obtained in [90] for distributions with tails of extended regular variation (i.e. in the case when (4.8.3) holds true) and in [210] for distributions with 'intermediate regular variation' of the tails (i.e.
in the case when (4.8.4) holds; see also [89, 91, 156, 248]), but only in the zone x ≥ δn, where δ > 0 is fixed. The assertion of Theorem 4.9.1 in a narrower case (in particular, under the assumption that x = cn) was obtained in Theorem 3.1 of [108]. The limiting behaviour (as x → ∞) of the conditional distribution of the vector

({S_{η(x)t}(a)/N(x); t ∈ [0, 1]}, (S_{η(x)}(a) − x)/N(x), S_{η(x)−1}(a)/η(x), η(x)/N(x)),   N(x) := F_+^I(x)/F_+(x),

under the condition that η(x) := inf{k : S_k − ak ≥ x} < ∞ was studied in [12] in the case when Eξ = 0, a > 0, and either F ∈ R for α > 1 or F belongs to the maximum domain of attraction of the Gumbel distribution (the latter condition is known to be equivalent to the property that, for any u > 0,

lim_{t→∞} F_+(t + N(t)u)/F_+(t) = e^{−u};

see e.g. Proposition 3.1 of [12] or Theorem 3.3.27 of [113]; an alternative characterization of this class was given by Theorem 3.3.26 of [113]). The paper also dealt with the asymptotics of the conditional distribution of ξ_{η(x)} and some others; in particular, it was shown there that, in the case when F ∈ R, α > 1,

lim_{x→∞} P(x^{−1} ξ_{η(x)} > u | η(x) < ∞) = (1 + α(1 − u^{−1})) u^{−α},   u > 0.
Some results of Chapter 4 were established in [51, 54, 63].
Chapter 5 Problems on 'moderately large' deviations of the sums S_n for semiexponential and related distributions were studied in [152, 201, 220, 221, 280, 281, 247, 212, 206, 238] and some other papers, where, in particular, the validity of the Cramér approximation (5.1.11)
for P(S_n ≥ x) was established under the condition that E(e^{h(ξ)}; ξ ≥ 0) < ∞ (or E e^{h(ξ)} < ∞) for some function h(t) that is close, in a sense, to an r.v.f. of index α ∈ (0, 1) (conditions on the function h(t) vary from paper to paper). Similar results for P(S̄_n ≥ x) were obtained in [2]. The asymptotic representation P(S_n ≥ x) ∼ nV(x) for x ≥ n^{1/(2−2α)} was obtained in [238, 196, 227, 51]. For results concerning large deviations of S_n in the case of distributions satisfying the condition E e^{h(|ξ|)} < ∞, see also [206, 191] (one can find there more complete bibliographies as well). In [227] the asymptotics of P(S̄_n ≥ x) were established in cases where they coincide with the asymptotics of P(S_n ≥ x). Theorems on the asymptotics of P(S_n ≥ x) that would be valid on the whole real line were considered in [238]. In particular, the paper gives the form of P(S_n ≥ x) in the intermediate zone x ∈ (σ_1(n), σ_2(n)), but it does that under conditions of which the meaning is quite difficult to comprehend. The intermediate deviation zone σ_1(n) < x < σ_2(n) was also considered in [195]. The paper deals with the rather special case when the distribution F has density

f(t) ∼ e^{−|t|^α}   as |t| → ∞;

the asymptotics of P(S_n ≥ x) were found there for α > 1/2 in the form of recursive relations, from which one cannot extract, in the general case, a closed-form expression for the asymptotics of P(S_n ≥ x). Close results on the first-order asymptotics of P(S_n ≥ x) were obtained in [205] (they are also discussed in [206]) for the class of distributions with P(ξ ≥ t) = e^{−l(t)}, where the function l(t) is thrice differentiable and its derivatives satisfy a number of conditions. Asymptotic representations for P(S̄_n ≥ x) in the intermediate deviation zone σ_1(n) < x < σ_2(n) and also in the zone x ≥ σ_2(n) were studied in [52]. The asymptotic representation (5.1.17) for P(S̄_n(a) ≥ x), a > 0, which holds, as x → ∞, for all n and all so-called strongly subexponential distributions, was established in [178] (see also [275]); the sufficient conditions from [178] for a distribution to be strongly subexponential are satisfied for F ∈ Se. A number of results of Chapter 5 were obtained in [51, 52].
Chapter 6 In the Cramér case, methods for studying the probabilities of large deviations of S_n are quite well developed and go back to [95] (see also [233, 16, 219, 259, 120, 49] etc.). Somewhat later, these methods were extended in [37, 44, 69, 70] to enable one to study S̄_n and also to solve a number of other problems related to the crossing of given boundaries by the trajectory of a random walk. Theorem 6.1.1 was established by B.V. Gnedenko (see §§ 49 and 50 of [130]; see also Theorem 4.2.1 of [152] and Theorem 8.4.1 of [32]); the multivariate case was considered in [243]. Sufficiency in Theorem 6.1.2 was established in [259] (cf. [120]; the finite variance case was studied earlier in [254]) and the proof of the necessity part was presented in § 8.4 of [32]. Uniform versions of Theorems 6.1.1 and 6.1.2 were established in [72] and, to some extent, in [259]. The properties of the deviation function Λ (which is sometimes also referred to as the Chernoff function [80] or the rate function) are discussed in [38, 49, 67, 101]. The monograph [101] is devoted to the analysis of the crude (logarithmic) asymptotics of large deviation probabilities. In the case when the distribution of ξ has a density of the form e^{−λ_+ t} t^{γ−1} L(t), L(t) being an s.v.f. at infinity, γ > 0, a number of results on the large deviations of S_n (including (6.1.15)) were obtained in [241]. Some assertions close to the theorems and corollaries of §§ 6.2 and 6.3 were obtained for narrower deviation zones in [27] (Lemmata 2 and 3).
A detailed investigation of the 'lower sub-zone' in the boundary case θ ≡ x/n = θ_+ is presented in [198], where the values of x up to which the 'classical' exact asymptotic expansions for P(S_n ≥ x) remain correct were found. The paper also contains an integral analogue of Theorem 6.2.1. The multivariate case was considered in [284] in a very special situation. Some assertions of Theorems 6.2.1 and 6.3.1 can be obtained from [259]. The main results of § 6.5 were obtained in [64].
Chapter 7 Corollaries 1 and 2 of [272] contain conditions, sufficient for (7.3.2), that are close to the conditions of Theorems 7.1.1 and 7.2.1. The assertion of Theorem 7.4.1 was obtained in [126] (see ibid. for a more complete bibliography on related results). In the special case when τ is the number of the first positive sum, it was established in [8]. A necessary and sufficient condition for (7.4.6) in the case F ∈ R was obtained in [135]. This condition has the form

lim_{n→∞} lim sup_{t→∞} [P(S_τ ≥ t, τ > n) − P(S_n ≥ t, τ > n)]/P(ξ ≥ t) = 0.
The assertion of Theorem 7.4.4 was obtained in [136]. For upper-power distributions, the asymptotics of P(S̄ ≥ x) from the assertion of Theorem 7.5.1 were obtained in [39, 42]. For the class of subexponential distributions, Theorem 7.5.1 was established in [275] (see also [213, 282]; special cases were considered in [278, 269]). That the condition F_+^I ∈ S is necessary for the asymptotics (7.5.1) was proved in [177]; in the special case when ξ = ζ − τ, where the r.v.'s τ, ζ ≥ 0 are independent and τ has an exponential distribution (which corresponds to M/G/1 queueing systems), this result was obtained in [213, 115]. The case Eξ = −∞ was studied in [103]. Assertions close to Theorem 7.5.5 and Corollary 7.5.6 were obtained in [26, 11]. The condition used in [11] to ensure that (7.5.13), (7.5.14) hold true is that F_+ belongs to the distribution class S^*, which is characterized by the property (7.4.1). The assertion of Theorem 7.5.8 in the case F ∈ R, under additional moment conditions and assumptions concerning the smoothness of F_+, was established in Corollary 3 of [63]. For queueing systems of the type M/G/1 (see above), under the assumption that ζ has a heavy-tailed density of a special form, such a refinement for the distribution of S̄ (or for the limiting distribution of the waiting time in M/G/1) was obtained in [267]. See also [18, 19, 20]. Theorem 7.5.11 is a slight modification (and simplification) of Theorem 3 of [63] in the case G ∈ R and of Theorem 2.1 of [52] in the case G ∈ Se.
Chapter 8 A complete asymptotic analysis (including the derivation of asymptotic expansions) of P(η_±(x) = n) for all x and n → ∞ for bounded lattice-valued ξ was given in [33, 34]. For an extension of the class R, the asymptotics P(θ = n) ∼ cV(−an), and hence the asymptotics of P(η_− > n) as well, were found in Chapter 4 of [42]. Comprehensive results on the asymptotics of P(η_−(x) > n) and P(n < η_+(x) < ∞) in the case of a fixed x ≥ 0 for the classes R and C were obtained in [116, 32, 104, 27, 193]. Necessary and sufficient conditions for the finiteness of Eη_−^γ, γ > 0, and some relevant problems were studied in [143, 154, 138, 160]. Unimprovable bounds for P(η_− > n) were given in § 43 of [50]. Note also that the local theorems 2.1 and 3.1 of [54] in the lattice case were established earlier, in [105]. For a proof of Theorem 8.2.16 and bibliographic comments on it, see [32], pp. 381–2; that conditions (8.2.45) and (8.2.46) are equivalent was established in [106]. The assertion of Theorem 8.2.17(ii) was established in [39].
Theorem 8.2.18 was established in [116, 32, 104, 27, 193]. In the case when Eξ = 0, Eξ² < ∞ and the distribution F is arithmetic, local limit theorems for the asymptotics of P(η_±(x) = n) as n → ∞ were obtained in [3]. In the non-lattice case, it was shown in [193] that (8.2.54) holds true under the additional condition that E|ξ|³ < ∞. A more general result claiming that Eξ² < ∞ would suffice for (8.2.54) was published in [117]. However, as communicated to us by A.A. Mogulskii, this result is incorrect as its conditions contain no restrictions on the structure of F (a paper by A.A. Mogulskii with a correct version of this theorem was submitted to Siberian Adv. Math.). In the case of bounded lattice-valued r.v.'s ξ, asymptotic expansions for P(η_±(x) = n) were derived in [33, 34]. For distributions F ∈ C, a comprehensive study of the asymptotics of the probabilities P(η_+(x) > n) = P(S̄_n < x) for arbitrary a = Eξ, under the assumption that F has an absolutely continuous component, was made in [33, 34, 37]. The assertion of Theorem 8.3.1 follows from the results of [37], while that of Theorem 8.3.2 follows from bounds in [45, 47]. The assertion of Theorem 8.3.3 follows immediately from the convergence rate estimate obtained in [203] (see also [204, 202]). An asymptotic expansion for P(S̄_n ≥ x) as x → ∞ in the cases when Eξ = 0, E|ξ|^s < ∞, s > 3, and the distribution of ξ is either lattice or satisfies the Cramér condition lim sup_{|λ|→∞} |f(λ)| < 1 was studied in [204, 37, 175, 176]. The exposition in Chapter 8 is close to [58]. A survey of results close to those presented in Chapter 8 is contained in [193].
Chapter 9 Necessary and sufficient conditions for partial sums S_n of i.i.d. random vectors in R^d to converge to a non-Gaussian stable distribution were found in [243]; further references, as well as a detailed discussion of the problem of the convergence of S_n under operator scaling, can be found in [189]. The concept of regular variation in the multivariate setup was discussed in [23]. In the case when the Cramér condition E e^{(λ,ξ)} ≤ C < ∞ holds in a neighbourhood of a point λ_0 ≠ 0, a comprehensive study of the asymptotic behaviour of P(S_n ∈ Δ[x)) as (λ_0, x) → ∞ was presented in [69, 70]. In the special case when F has a bounded density f(x) admitting a representation of the form (9.1.9), the large deviation problem was considered in [283, 200]; see also [199]. In [283], a local limit theorem for the density f_n(x) of S_n was established for large deviations along 'non-singular directions'. This result was complemented in [200] by an analysis of the asymptotics of f_n(x) along 'singular directions' for an even narrower distribution class (in particular, it was assumed that there are only finitely many such directions, and that the density f(x) decays along such directions as a power function of order −β − d, β > α). The main result of [200] shows that the principal contribution to the probabilities of large deviations along singular directions comes not from trajectories with a single large jump (which is the case in the univariate problem and also for non-singular directions when d > 1) but from those with two large jumps. The case when ξ follows a lattice distribution (with probabilities regularly varying at infinity) was studied in [285]. In a more general case, when only conditions (9.1.7) (with α > 0) and (9.1.8) are met, an integral-type large deviation theorem, describing the behaviour of the probabilities P(S_n ∈ xA) when the set A ⊂ R^d is bounded away from 0 and has a 'regular' boundary, was obtained in [151], together with its functional version.
The main results presented in Chapter 9 were obtained in [54].
Bibliographic notes
Chapter 10 An assertion close to Theorem 10.2.3 was obtained in [225] in the case when ξ has a density regularly varying at infinity. In this case, conditions [A1] and [A2] could be somewhat relaxed, it being assumed that the sets A(t) and Af(t) are unions of finite collections of intervals. The same paper [225] contains a 'transient' assertion for deviations x of order √n, where the approximation for P(Gn) includes the term P(w(·) ∈ xA), w(·) being the standard Wiener process. An assertion close to Theorem 10.2.6 was obtained in [131] under more stringent conditions on F+(t) and somewhat relaxed versions of conditions [A1] and [A2]. In [150] a theorem was established which, in the case of Markov processes in Rd with weakly dependent increments and distributions regularly varying at infinity, enables one to make a transition from the large deviation results for one-dimensional distributions of the process to those for the trajectories of the process. This result was used in [151] to obtain a functional version of the large deviation theorem in the case when one has (9.1.7) (for an α > 0) and (9.1.8).
Chapter 11 The problem on the asymptotics of P(1, n, an) as n → ∞, when aξ < ∞, τ ≥ 0 and the distribution of τ is close to semiexponential, was studied in [13, 124] (see ibid. for motivations and applications to queueing theory).
Chapter 12 In [127] upper bounds for the distribution of Sn in terms of 'truncated' moments were obtained without any assumptions on the existence of regularly varying majorants (such majorants always exist in the case of finite moments, but their decay rate will not be the true one). This makes the bounds from [127] in a sense more general than those presented in § 12.1, but also substantially more cumbersome. One cannot derive from them bounds for Sn of the form (12.1.11), (12.1.12). In the case Eξ2 → d < ∞, the limiting relation (12.5.1) in the problem on transient phenomena was obtained in [165, 231] (see also [42]). Only some partial results for some special distributions of ξ were known in the case Eξ2 = ∞ (see [93, 78, 94]). Under condition [UR] with h(n) = n, the relation P(Sn ≥ x) ∼ nV(x) for x > cn was obtained in [218]. The main results presented in Chapter 12 were obtained in [59, 61].
Chapter 13 In [127] upper bounds for the distribution of Sn in terms of 'truncated' moments were obtained without any assumptions on the existence of regularly varying majorants (such majorants always exist in the case of finite moments, but their decay rate will not be the true one). This makes the bounds from [127] more general in a sense, but also substantially more cumbersome. One cannot derive from them bounds for Sn of the form (13.1.11), (13.1.12). Inequalities for P(|Sn| ≥ x) close to our lower bounds for P(Sn ≥ x) were obtained in [208]. The crude inequality P(Sn ≥ x) ≥ ½ ∑j≤n Fj,+(2x) for x ≥ 2√Dn was obtained in [206]. Some bounds for the distribution of Sn were also established in [240, 226]. Under condition [UR] with H(n) = n, the relation P(Sn ≥ x) ∼ nV(x) for x > cn was obtained in [218]. The main result (13.3.2) for transient phenomena in the i.i.d. case with Eξ2 → d < ∞ was established in [165, 231] (see also § 25, Chapter 4 of [42]).
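The crude lower bound from [206] quoted above, together with the single-big-jump relation P(Sn ≥ x) ∼ nV(x), can be illustrated numerically. The following Monte Carlo sketch (not from the book; the centred Pareto jump distribution, the values of n and x, and the trial count are all illustrative choices of ours) checks, in the i.i.d. case, that the empirical tail probability dominates ½·n·F+(2x) in the range x ≥ 2√Dn and is of the order n·F+(x):

```python
import numpy as np

# Illustrative sketch only: i.i.d. jumps xi = Y - E[Y], with Y Pareto(alpha),
# so that E[xi] = 0 and the right tail F_+(t) = P(xi >= t) is regularly varying.
rng = np.random.default_rng(0)

alpha = 3.0                               # tail index: P(Y >= y) = y**(-alpha), y >= 1
mean_Y = alpha / (alpha - 1.0)            # E[Y] = 1.5, subtracted to centre the jumps
var = alpha / (alpha - 2.0) - mean_Y**2   # Var(xi) = 0.75

def tail(t):
    """F_+(t) = P(xi >= t) for the centred Pareto jump (valid for t >= 1 - mean_Y)."""
    return (t + mean_Y) ** (-alpha)

n, trials = 20, 200_000
x = 10.0
assert x >= 2.0 * np.sqrt(n * var)        # deviation range where the crude bound applies

Y = rng.uniform(size=(trials, n)) ** (-1.0 / alpha)   # Pareto(alpha) via inverse transform
S = (Y - mean_Y).sum(axis=1)                          # centred partial sums S_n
p_emp = (S >= x).mean()                               # empirical P(S_n >= x)

lower = 0.5 * n * tail(2.0 * x)   # (1/2) * sum_{j<=n} F_{j,+}(2x) in the i.i.d. case
print(p_emp, lower, n * tail(x))  # empirical tail, crude lower bound, nV(x)
```

For heavy-tailed jumps the margin in the crude bound is large: the empirical probability is close to nV(x), which exceeds ½·nV(2x) by roughly the factor 2^(α+1) here.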
The main results presented in Chapter 13 were obtained in [60].
Chapter 14 In the case when ξj = f(Xj), where f is a given function on X and {Xk} forms a Harris Markov chain (having a positive recurrent state z0), the asymptotics of P(Sn ≥ x) in terms of regularly varying distribution tails of the sums of the quantities f(Xk) taken over a cycle (formed by consecutive visits to a fixed positive recurrent atom) were studied in [187]. The asymptotics of P(S(a) ≥ x) in the case when the Sn are partial sums of a sequence of moving averages of i.i.d. r.v.'s satisfying conditions of a subexponential type were established in [179]. The asymptotics of the distribution of the maximum of a random walk defined on a finite Markov chain were studied in [6]. An analogue of Theorem 7.5.1 for processes with 'modulated increments' (i.e. processes whose increment distributions depend on the value of an unobservable regenerative process), both in discrete and in continuous time, was obtained in [125, 123]. See also [155, 14, 4, 139]. For sequences {ξj} of r.v.'s satisfying a mixing condition and such that F ∈ R, results on the large deviations of Sn were obtained in [99]; for autoregressive processes with random coefficients (i.e. when ξn = Anξn−1 + Yn, where the sequences of positive i.i.d. r.v.'s {An}, {Yn} are independent, Yn ⊂= FY ∈ R), results on the large deviations of Sn were obtained in [173, 263].
Chapter 15 The properties of processes with independent increments are described in the monographs [256, 245, 25, 246, 273]. Theorem 15.1.1 (including the converse assertion) was proved in [112] (see also Theorem A3.22 of [113] and § 8.2.7 of [32]); more precisely, it was shown in [112] that the following assertions are equivalent: (1) F ∈ S; (2) G[1] ∈ S; (3) F+(t) ∼ G[1],+(t) as t → ∞. A similar assertion for the class R was obtained in [110]. The asymptotics of the tails of subadditive functionals of processes with independent increments in the case when G[1] ∈ R were studied in [79]. The asymptotics of P(S(∞) ≥ x) and of some other characteristics of a process with independent increments in the case when the spectral measure has a subexponential right tail were studied in [168].
Chapter 16 The assertion of Theorem 16.1.2, I was proved in [115]. The first assertion of Theorem 16.1.3 follows from Theorem 12 in Chapter 4 of [42], where it was established in a more general case (later on, it was shown in [275, 115] that a sufficient condition for this assertion is that F+I ∈ S; that this is a necessary condition as well was proved in [177]; see also the remarks on Theorem 7.5.1 above), while the second assertion follows from the main theorem of [178] (where it was proved for strongly subexponential distributions). An analogue of the uniform representation (4.1.2) for generalized renewal processes, under additional moment assumptions and the condition that the distribution Fτ has a bounded density, was presented in [188]. In the case q = 0 and under the assumptions that the distribution tail of the r.v. ξ ≥ 0 is of 'extended regular variation' (see § 4.8) and that the process {ν(t)} satisfies condition (16.1.6) for some ε > 0 and c > 0, the relation (16.1.5) for any fixed δ > 0 was established in [169] (the process {ν(t)} was assumed there to be of a more general form than a renewal process). This latter paper also contains sufficient
conditions for (16.1.6) to hold for a renewal process: Eτ2 < ∞ and Fτ(t) ≥ 1 − e−bt, t > 0, for some b > 0 (Lemma 2.3 of [169]). In [262] it was shown that condition (16.1.6) is always met for renewal processes with aτ < ∞, without any additional assumptions regarding Fτ. Note that the last fact also follows from the inequality (16.2.13). The main results presented in Chapter 16 were obtained in [65].
References
[1]
[2]
[3] [4] [5] [6] [7] [8]
[9] [10]
[11]
[12] [13] [14] [15] [16] [17]
Adler, R., Feldman, R. and Taqqu, R.S., eds. A Practical Guide to Heavy Tails: Statistical Techniques for Analysing Heavy-tailed Distributions (Birkh¨auser, Boston, 1998). Aleˇskjaviˇcene, A.K. On the probabilities of large deviations for the maximum of sums of independent random variables. I, II. Theory Probab. Appl., 24 (1979), 16– 33, 322–337. Alili, L. and Doney, R.A. Wiener-Hopf factorization revisited and some applications. Stochastics and Stoch. Reports., 66 (1999), 87–102. Alsmeyer, G. and Sbignev, M. On the tail behaviour of the supremum of a random walk defined on a Markov chain. Yokohama Math. J., 46 (1999), 139–159. Araujo, A. and Gin´e, E. The Central Limit theorem for Real and Banach Valued Random Variables (Wiley, New York, 1980). Arndt, K. Asymptotic properties of the distribution of the supremum of a random walk on a Markov chain. Theory Probab. Appl., 25 (1980), 309–324. Arov, D.Z. and Bobrov, A.A. The extreme members of a sample and their role in the sum of independent variables. Theory Probab. Appl., 5 (1960), 377–396. Asmussen, S. Subexponential asymptotics for stochastic processes: extremal behaviour, stationary distributions and first passage probabilities. Ann. Appl. Probab., 8 (1998), 354–374. Asmussen, S. Ruin Probabilities (World Scientific, Singapore, 2000). Asmussen, S., Foss, S. and Korshunov, D. Asymptotics for sums of random variables with local subexponential behaviour. J. Theoretical Probab., 16 (2003), 489– 518. Asmussen, S., Kalashnikov, V., Konstantinides, D., Kl¨uppelberg, C. and Tsitsiashvili, G. A local limit theorem for random walk maxima with heavy tails. Statist. Probab. Letters, 56 (2002), 399–404. Asmussen, S. and Kl¨uppelberg, C. Large deviations results for subexponential tails, with applications to insurance risk. Stoch. Proc. Appl., 64 (1996), 103–125. Asmussen, S., Kl¨uppelberg, C. and Sigman, K. Sampling of subexponential times with queueing applications. Stoch. Proc. Appl., 79 (1999), 265–286. Asmussen, S. 
and Møller, J.R. Tail asymptotics for M/G/1 type queueing processes with subexponential increments. Queueing Systems, 33 (1999), 153–176. Athreya, K.B. and Ney, P.E. Branching Processes (Springer, Berlin, 1972). Bahadur, R. and Ranga Rao, R. On deviations of the sample mean. Ann. Math. Statist., 31 (1960), 1015–1027. Baltrunas, A. On the asymptotics of one-sided large deviation probabilities. Lithua-
611
612 [18] [19] [20] [21] [22]
[23] [24] [25] [26] [27] [28] [29] [30] [31] [32] [33] [34] [35] [36]
[37]
[38] [39] [40] [41]
References nian Math. J., 35 (1995), 11–17. Baltrunas, A. Second-order asymptotics for the ruin probability in the case of very large claims. Siberian Math. J., 40 (1999), 1226–1235. Baltrunas, A. Second order behaviour of ruin probabilities. Scandinavian Actuar. J., (1999), 120–133. Baltrunas, A. The rate of convergence in the precise large deviation theorem. Probab. Math. Statist., 22 (2002), 343–354. Baltrunas, A. Second-order tail behavior of the busy period distribution of certain GI/G/1 queues. Lithuanian Math. J., 42 (2002), 243–254. Baltrunas, A., Daley, D.J. and Kl¨uppelberg, C. Tail behaviour of the busy period of a GI/GI/1 queue with subexponential service times. Stochastic Process. Appl., 11 (2004), 237–258. Basrak, B., Davis, R.A. and Mikosch, T. A characterization of multivariate regular variation. Ann. Appl. Probab., 12 (2002), 908–920. Bentkus, V., Bloznelis, M. Nonuniform estimate of the rate of convergence in the CLT with stable limit distribution. Lithuanian Math. J., 29 (1989), 8–17. Bertoin, J. L´evy Processes (Cambridge University Press, Cambridge, 1996). Bertoin, J. and Doney, R.A. On the local behaviour of ladder height distributions. J. Appl. Probab., 31 (1994), 816–821. Bertoin, J. and Doney, R.A. Some asymptotic results for transient random walks. Adv. Appl. Probab., 28 (1996), 207–226. Billingsley, P. Convergence of Probability Measures (Wiley, New York, 1968). Bingham, N.H. Maxima of sums of random variables and suprema of stable processes. Z. Wahrscheinlichkeitstheorie und verw. Geb., 26 (1973), 273–296. Bingham, N.H. Limit theorems in fluctuation theory. Adv. Appl. Probab., 5 (1973), 554–569. Bingham, N.H. Variants of the law of the iterated logarithm. Bull. London Math. Soc., 18 (1986), 433–467. Bingham, N.H., Goldie, C.M. and Teugels, J.L. Regular Variation (Cambridge University Press, Cambridge, 1987). Borovkov, A.A. Limit theorems on the distributions of maxima of sums of bounded lattice random variables. I. Theory Probab. 
Appl., 5 (1960), 125–155. Borovkov, A.A. Limit theorems on the distributions of maxima of sums of bounded lattice random variables. II. Theory Probab. Appl., 5 (1960), 341–355. Borovkov, A.A. Remarks on Wiener’s and Blackwell’s theorems. Theory Probab. Appl., 9 (1964), 303–312. Borovkov, A.A. Analysis of large deviations in boundary-value problems with arbitrary boundaries. I, II. Siberian Math. J., 5 (1964), 253–289, 750–767. (In Russian.) Borovkov, A.A. New limit theorems in boundary problems for sums of independent random variables. Select Transl. Math. Stat. Probab., 5 (1965), 315–372. (Original publication in Russian: Siberian Math. J., 3 (1962), 645–694.) Borovkov, A.A. Boundary-value problems for random walks and large deviations in function spaces. Theory Probab. Appl., 12 (1967), 575–595. Borovkov, A.A. Factorization identities and properties of the distribution of the supremum of sequential sums. Theory Probab. Appl., 15 (1970), 359–402. Borovkov, A.A. Notes on inequalities for sums of independent variables. Theory Probab. Appl., 17 (1972), 556–557. Borovkov, A.A. The convergence of distributions of functionals of stochastic processes. Russian Math. Surveys, 271(1) (1972), 1–42.
References [42] [43] [44] [45]
[46] [47] [48] [49] [50] [51]
[52] [53] [54]
[55]
[56]
[57] [58] [59] [60] [61] [62] [63]
[64]
613
Borovkov, A.A. Stochastic Process in Queueing Theory (Springer, New York, 1976). Borovkov, A.A. Convergence of measures and random processes. Russian Math. Surveys, 31(2) (1976), 1–69. Borovkov, A.A. Boundary problems, the invariance principle and large deviations. Russian Math. Surveys, 38(4) (1983), 259–290. Borovkov, A.A. On the Cram´er transform, large deviations in boundary value problems, and the conditional invariance principle. Siberian Math. J., 36 (1995), 417– 434. Borovkov, A.A. Unimprovable exponential bounds for distributions of sums of random number of random variables. Theory Probab. Appl., 40 (1995), 230–237. Borovkov, A.A. On the limit conditional distributions connected with large deviations. Siberian Math. J., 37 (1996), 635–646. Borovkov, A.A. Limit theorems for time and place of the first boundary passage by a multidimensional random walk. Dokl. Math., 55 (1997), 254–256. Borovkov, A.A. Probability Theory (Gordon and Breach, Amsterdam, 1998). Borovkov, A.A. Ergodicity and Stability of Stochastic Processes (Wiley, Chichester, 1998). Borovkov, A.A. Estimates for the distribution of sums and maxima of sums of random variables when the Cram´er condition is not satisfied. Siberian Math. J., 41 (2000), 811–848. Borovkov, A.A. Probabilities of large deviations for random walks with semiexponential distributions. Siberian Math. J., 41 (2000), 1061–1093. Borovkov, A.A. Large deviations of sums of random variables of two types. Siberian Adv. Math., 4 (2002), 1–24. Borovkov, A.A. Integro-local and integral limit theorems on the large deviations of sums of random vectors: regular distributions. Siberian Math. J., 43 (2002), 402–417. Borovkov, A.A. On subexponential distributions and asymptotics of the distribution of the maximum of sequential sums. Siberian Math. J., 43 (2002), 995–1022, 1253–1264. Borovkov, A.A. Asymptotics of crossing probability of a boundary by the trajectory of a Markov chain. Heavy tails of jumps. Theory Probab. 
Appl., 47 (2003), 584–608. Borovkov, A.A. Large deviations probabilities for random walks in the absence of finite expectations of jumps. Probab. Theory Relat. Fields, 125 (2003), 421–446. Borovkov, A.A. On the asymptotic behavior of the distributions of first-passage times. I, II. Math. Notes, 75 (2004), 23–37, 322–330. Borovkov, A.A. Large deviations for random walks with nonidentically distributed jumps having infinite variance. Siberian Math. J., 46 (2005), 35–55. Borovkov, A.A. Asymptotic analysis for random walks with nonidentically distributed jumps having finite variance. Siberian Math. J., 46 (2005), 1020–1038. Borovkov, A.A. Transient phenomena for random walks with nonidentically distributed jumps with infinite variances. Theory Probab. Appl., 50 (2005), 199–213. Borovkov, A.A. and Borovkov, K.A. Probabilities of large deviations for random walks with a regular distribution of jumps. Dokl. Math., 61 (2000), 162–164. Borovkov, A.A. and Borovkov, K.A. On probabilities of large deviations for random walks. I. Regularly varying distribution tails. Theory Probab. Appl., 46 (2001), 193-213. Borovkov, A.A. and Borovkov, K.A. On probabilities of large deviations for ran-
614
[65]
[66] [67]
[68]
[69] [70] [71] [72] [73]
[74]
[75] [76] [77] [78] [79] [80] [81]
[82] [83] [84] [85]
References dom walks. II. Regular exponentially decaying distributions. Theory Probab. Appl., 49 (2005), 189–206. Borovkov, A.A. and Borovkov, K.A. Large deviation probabilities for generalized renewal processes with regularly varying jump distributions. Siberian Adv. Math., 15 (2006), 1–65. Borovkov, A.A. and Boxma, O.J. Large deviation probabilities for random walks with heavy tails. Siberian Adv. Math., 13 (2003), 1–31. Borovkov, A.A. and Mogulskii, A.A. Large Deviations and the Testing of Statistical Hypotheses (Proceedings of the Institute of Mathematics, vol. 19. Nauka, Novosibirsk, 1992). Borovkov, A.A. and Mogulskii, A.A. The second rate function and the asymptotic problems of renewal and hitting the boundary for multidimensional random walks. Siberian Math. J., 37 (1996), 647–682. Borovkov, A.A. and Mogulskii, A.A. Integro-local limit theorems including large deviations for sums of random vectors. I. Theory Probab. Appl., 43 (1999), 1–12. Borovkov, A.A. and Mogulskii, A.A. Integro-local limit theorems including large deviations for sums of random vectors. II. Theory Probab. Appl., 45 (2001), 3–22. Borovkov, A.A. and Mogulskii, A.A. Limit theorems in the boundary hitting problem for a multidimensional random walk. Siberian Math. J., 42 (2001), 245–270. Borovkov, A.A. and Mogulskii, A.A. Integro-local theorems for sums of independent random vectors in the series scheme. Math. Notes, 79 (2006), 468–482. Borovkov, A.A. and Mogulskii, A.A. Integro-local and integral theorems for sums of random variables with semiexponential distributions. Siberian Math. J., 47 (2006), 990–1026. Borovkov, A.A., Mogulskii, A.A. and Sakhanenko, A.I. Limit Theorems for Random Processes (Current Problems in Mathematics. Fundamental Directions 82. Vsesoyuz. Inst. Nauchn. i Tekhn. Inform. (VINITI), Moscow, 1995). (In Russian.) Borovkov, A.A. and Peˇcerski, E.A. Weak convergence of measures and random processes. Z. Wahrscheinlichkeitstheorie verw. Geb., 28 (1973), 5–22. Borovkov, A.A. 
and Utev, S.A. Estimates for distributions of sums stopped at Markov time. Theory Probab. Appl., 38 (1993), 214–225. Borovkov, K.A. A note on differentiable mappings. Ann. Probab., 13 (1985), 1018–1021. Boxma, O.J. and Cohen, J.W. Heavy-traffic analysis for the GI/G/1 queue with heavy-tailed distributions. Queueing Systems, 33 (1999), 177–204. Braverman, M., Mikosch, T. and Samorodnitsky, G. Tail probabilities of subadditive functionals of L´evy processes. Ann. Appl. Probab., 12 (2002), 69–100. Chernoff, H.A. A measure of asymptotic efficiency for tests of a hypothesis based on the sums of observations. Ann. Math. Statist., 23 (1952), 493–507. Chistyakov, V.P. A theorem on sums of independent positive random variables and its application to branching random processes. Theory Probab. Appl., 9 (1964), 640–648. Chover, J. A law of the iterated logarithm for stable summands. Proc. Amer. Math. Soc., 17 (1965), 441–443. Chover, J., Ney, P.E. and Wainger, S. Functions of Probability Measures. J. Analyse Math., 26 (1973), 255–302. Chover, J., Ney, P.E. and Wainger, S. Degeneracy properties of subcritical branching processes. Ann. Probab., 1 (1973), 663–673. Chow, Y.S. and Lai, T.L. Some one-sided theorems on the tail distribution of sample sums with applications to the last time and largest excess of boundary crossing.
References [86] [87] [88] [89] [90]
[91] [92] [93] [94] [95] [96] [97] [98] [99]
[100]
[101] [102] [103] [104] [105] [106] [107] [108]
615
Trans. Amer. Math. Soc., 120 (1975), 108–123. Christoph, G. and Wolf, W. Convergence Theorems With a Stable Limit Law (Akademie-Verlag, Berlin, 1992). Cline, D.B.H. Convolution tails, product tails and domains of attraction. Probab. Theory Relat. Fields, 72 (1986), 529–557. Cline, D.B.H. Convolution of distributions with exponential and subexponential tails. J. Austral. Math. Soc. Ser. A, 43 (1987), 347–365. Cline, D.B.H. Intermediate regular and Π variation. Proc. London Math. Soc., 68 (1994), 594–616. Cline, D.B.H. and Hsing, T. Large deviations probabilities for sums and maxima of random variables with heavy or subexponential tails. Texas A&M University preprint (1991). Cline, D.B.H. and Samorodnitsky, G. Subexponentiality of the product of independent random variables. Stoch. Proc. Appl., 49 (1994), 75–98. Cohen, J.W. Some results on regular variation for distributions in queueing and fluctuation theory. J. Appl. Probab., 10 (1973), 343–353. Cohen, J.W. A heavy-traffic theorem for the GI/G/1 queue with Pareto-type service time distributions. J. Appl. Math. Stochastic Anal., 11 (1998), 247–254. Cohen, J.W. Random walk with a heavy-tailed jump distribution. Queueing Systems, 40 (2002), 35–73. Cram´er, H. Sur un nouveau th´eoreme-limite de la th´eorie des probabilit´es. Actualites Sci. Indust., 736 (1938), 5–23. Cram´er, H. On asymptotic expansions for sums of independent random variables with a limiting stable distribution. Sankhya A, 25 (1963), 12–24. Darling, D.A. The influence of the maximum term in the addition of independent random variables. Trans. Amer. Math. Soc., 73 (1952), 95–107. Davis, R.A. Stable limits for partial sums of dependent random variables. Ann. Probab., 11 (1983), 262–269. Davis, R.A. and Hsing, T. Point processes and partial sum convergence for weakly dependent random variables with infinite variance. Ann. Probab., 23 (1995), 879– 917. Davydov, Yu.A. and Nagaev, A.V. 
On the role played by extreme summands when a sum of independent and identically distributed random vectors is asymptotically α-stable. J. Appl. Probab., 41 (2004), 437–454. Dembo, A. and Zeitouni, O. Large Deviation Techniques and Applications (Jones and Bartlett, London, 1993). Denisov, D.F. and Foss, S.G. On transience conditions for Markov chains and random walks. Siberian Math. J., 44 (2003), 44–57. Denisov, D., Foss, S. and Korshunov, D. Tail asymptotics for the supremum of a random walk when the mean is not finite. Queueing Systems, 46 (2004), 15–33. Doney, R.A. On the asymptotic behaviour of first passage times for a transient random walk. Probab. Theory Relat. Fields, 18 (1989), 239–246. Doney, R.A. A large deviation local limit theorem. Math. Proc. Cambridge Phil. Soc., 105 (1989), 575–577. Doney, R.A. Spitzer’s condition and ladder variables in random walks. Probab. Theory Relat. Fields, 101 (1995), 577–580. Donsker, M.D. An invariance principle for certain probability limit theorems. Mem. Amer. Math. Soc., 6 (1951), 1–12. Durrett, R. Conditional limit theorems for random walks with negative drift. Z. Wahrscheinlichkeitstheorie verw. Geb., 52 (1980), 277–287.
616 [109]
[110] [111] [112] [113] [114] [115]
[116] [117] [118] [119] [120]
[121] [122] [123]
[124] [125]
[126]
[127]
[128] [129] [130]
References Embrechts, P. and Goldie, C.M. On closure and factorization theorems for subexponential and related distributions. J. Austral. Math. Soc. Ser. A, 29 (1980), 243– 256. Embrechts, P. and Goldie, C.M. Comparing the tail of an infinitely divisible distribution with integrals of its L´evy measure. Ann. Probab., 9 (1981), 468–481. Embrechts, P. and Goldie, C.M. On convolution tails. Stoch. Proc. Appl., 13 (1982), 263–278. Embrechts, P., Goldie, C.M. and Veraverbeke, N. Subexponentiality and infinite divisibility. Z. Wahrscheinlichkeitstheorie verw. Geb., 49 (1979), 335–347. Embrechts, P., Kl¨uppelberg, C. and Mikosch, T. Modeling Extremal Events (Springer, New York, 1997). Embrechts, P. and Omey, E. A property of long-tailed distributions. J. Appl. Prob., 21 (1982), 80–87. Embrechts, P. and Veraverbeke, N. Estimates for the probability of ruin with special emphasis on the possibility of large claims. Insurance: Math. Econom., 1 (1982), 55–72. Emery, D.J. Limiting behaviour of the distribution of the maxima of partial sums of certain random walks. J. Appl. Probab., 9 (1972), 572–579. Eppel, N.S. A local limit theorem for first passage time. Siberian Math. J., 20(1) (1979), 130-138. Erickson, K.B. Strong renewal theorems with infinite mean. Trans. Amer. Math. Soc., 151 (1970), 263–291. Erickson, K.B. The strong law of large numbers when the mean is undefined. Trans. Amer. Math. Soc., 185 (1973), 371–381. Feller, W. On regular variation and local limit theorems, in Proc. Fifth Berkeley Symp. Math. Stat. Prob. II(1), ed. Neyman, J. (University of California Press, Berkeley, 1967), pp. 373–388. Feller, W. An Introduction to Probability Theory and its Applications I, 2nd edn (Wiley, New York, 1968). Feller, W. An Introduction to Probability Theory and its Applications II, 3rd edn (Wiley, New York, 1971). Foss, S., Konstantopoulos, T. and Zachary, S. The principle of a single big jump: discrete and continuous time modulated random walks with heavy-tailed increments. 
arxiv.org/pdf/math.PR/0509605 (2005). Foss, S. and Korshunov, D. Sampling at random time with a heavy-tailed distribution. Markov Proc. Related Fields., 6 (2000), 543–568. Foss, S.G. and Zachary, S. Asymptotics for the maximum of a modulated random walk with heavy-tailed increments. Analytic methods in applied probability. Amer. Math. Soc. Transl. Ser. 2, 207 (2002), 37–52. Foss, S.G. and Zachary, S. The maximum on a random time interval of a random walk with long-tailed increments and negative drift. Ann. Appl. Probab., 1 (2003), 37–57. Fuk, D.H. and Nagaev, S.V. Probability inequalities for sums of independent random variables. Theory Probab. Appl., 16 (1971), 643–660. (Also, ibid., 1976, 21, 896.) Garsia, A. and Lamperti, J. A discrete renewal theorem with infinite mean. Comment Math. Helv., 36 (1962), 221–234. Gikhman, I.I. and Skorokhod, A.V. Introduction to the Theory of Random Processes (Dover, Mineola, 1996). (Translated from the 1965 Russian original.) Gnedenko, B.V. and Kolmogorov, A.N. Limit Distributions for Sums of Indepen-
References
[131] [132] [133]
[134] [135] [136] [137] [138] [139]
[140] [141]
[142] [143] [144] [145] [146] [147] [148] [149]
[150] [151]
[152]
617
dent Random Variables (Addison-Wesley, Reading, 1954). (Translated from the 1949 Russian original.) Godovanchuk, V.V. Probabilities of large deviations for sums of independent random variables attracted to a stable law. Theory Probab. Appl., 23 (1978), 602–608. Goldie, C.M. Subexponential distributions and dominated-variation tails. J. Appl. Prob., 15 (1978), 440–442. Goldie, C.M. and Kl¨uppelberg, C. Subexponential distributions, in A Practical Guide to Heavy Tails: Statistical Techniques for Analysing Heavy-tailed Distributions, ed. Adler, R. et al. (Birkh¨auser, Boston, 1998), pp. 435–454. Gradshtein, I.S. and Ryzhik, I.M. Table of Integrals, Series, and Products (Academic Press, New York, 1980). Greenwood, P. Asymptotics of randomly stopped sequences with independent increments. Ann. Probab., 1 (1973), 317–321. Greenwood, P. and Monroe, I. Random stopping preserves regular variation of process distributions. Ann. Probab., 5 (1977), 42–51. Gr¨ubel, R. Asymptotic analysis in probability theory using Banach-algebra techniques. Essen Universit¨at Habilitationsschrift (1984). Gut, A. Stopped Random Walks (Springer, Berlin, 1988). Hansen, N.R. and Jensen, A.T. The extremal behaviour over regenerative cycles for Markov additive processes with heavy tails. Stoch. Proc. Appl., 115 (2005), 579–591. Hartman, P. and Wintner, A. On the law of the iterated logarithm. Amer. J. Math., 63 (1941), 169–176. Heyde, C.C. A contribution to the theory of large deviations for sums of independent random variables. Z. Wahrscheinlichkeitstheorie verw. Geb., 7 (1967), 303– 308. Heyde, C.C. A limit theorem for random walks with drift. J. Appl. Probab., 4 (1967), 144–150. Heyde, C.C. Asymptotic renewal results for a natural generalization of classical renewal theory. J. Roy. Statist. Soc. Ser. B, 29 (1967), 141–150. Heyde, C.C. On large deviation problems for sums of random variables which are not attracted to the normal law. Ann. Math. Statist., 38 (1967), 1575–1578. Heyde, C.C. 
On large deviation probabilities in the case of attraction to a nonnormal stable law. Sankhya A, 30 (1968), 253–258. Heyde, C.C. On the maximum of sums of random variables and the supremum functional for stable processes. J. Appl. Probab., 6 (1969), 419–429. Heyde, C.C. A note concerning behaviour of iterated logarithm type. Proc. Amer. Math. Soc., 23 (1969), 85–90. Heyman, D.P., and Lakshman, T.V. Source models for VBR broadcast-video traffic. IEEE/ACM Trans. Netw., 4 (1996), 40–48. H¨oglund, T. A unified formulation of the central limit theorem for small and large deviations from the mean. Z. Wahrscheinlichkeitstheorie verw. Geb., 49 (1979), 105–117. Hult, H. and Lindskog, F. Extremal behaviour for regularly varying stochastic processes. Stoch. Proc. Appl., 115 (2005), 249–274. Hult, H., Lindskog, F., Mikosch, T. and Samorodnitsky, G. Functional large deviations for multivariate regularly varying random walks. Ann. Appl. Probab., 15 (2005), 2651–2680. Ibragimov, I.A. and Linnik, Yu.V. Independent and Stationary Sequences of Random Variables (Wolters-Noordhoff, Groningen, 1971). (Translated from the 1965
618 [153] [154]
[155]
[156] [157]
[158] [159] [160] [161] [162]
[163] [164]
[165] [166] [167] [168]
[169] [170] [171] [172] [173]
References Russian original.) Janicki, A. and Weron, A. Simulation and Chaotic Behavior of α-stable Stochastic Processes (Marcel Dekker, New York, 1994). Janson, S. Moments for first-passage and last-exit times. The minimum, and related quantities for random walks with positive drift. Adv. Appl. Probab., 18 (1986), 865– 879. Jelenkovi´c, P.R. and Lazar, A.A. Subexponential asymptotics of a Markovmodulated random walk with queueing applications. J. Appl. Probab., 25 (1998), 132–141. Jelenkovi´c, P.R. and Lazar, A.A. Asymptotic results for multiplexing subexponential on-off processes. Adv. Appl. Probab., 31 (1999), 394–421. Jelenkovi´c, P.R., Lazar, A.A. and Semret, N. The effect of multiple time scales and subexponentiality of MPEG video streams on queueing behavior. IEEE J. Sel. Areas Commun., 15 (1997), 1052–1071. Karamata, J. Sur un mode de croissance r´eguli`ere des fonctions. Mathematica (Cluj), 4 (1930), 38–53. Karamata, J. Sur un mode de croissance r´eguli`ere. Th´eor`emes fondamenteaux. Bull. Math. Soc. France, 61 (1933), 55–62. Kesten, H. and Maller, R.A. Two renewal theorems for general random walks tending to infinity. Probab. Theory Relat. Fields, 106 (1996), 1–38. ¨ Khintchine, A.Ya. Uber einen Satz der Wahrscheinlichkeitsrechnung. Math. Annalen, 6 (1924), 9–20. Khintchine, A.Ya. Asymptotische Gesetze der Wahrscheinlichkeitsrechnung (Springer, Berlin, 1933). (In German. Russian translation published by ONTI NTKP, 1936.) Khintchine, A.Ya. Two theorems on stochastic processes with stable increment distributions. Matem. Sbornik, 3(45) (1938), 577–584. (In Russian.) Khokhlov, Yu.S. The law of the iterated logarithm for random vectors with an operator-stable limit law. Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet., (1995), 62–69. (In Russian.) Kingman, F.G. On queues in heavy traffic. J. Royal Statist. Soc. Ser. B, 24 (1962), 383–392. Kl¨uppelberg, C. Subexponential distributions and integrated tails. J. Appl. Probab., 25 (1988), 132–141. 
Kl¨uppelberg, C. Subexponential distributions and characterizations of related classes. Probab. Theory Relat. Fields, 82 (1989), 259–269. Kl¨uppelberg, C., Kyprianou, A.E. and Maller, R.A. Ruin probabilities and overshoots for general L´evy insurance risk processes. Ann. Appl. Probab., 14 (2004), 1766–1801. Kl¨uppelberg, C. and Mikosh, T. Large deviations of heavy-tailed random sums with applications in insurance and finance. J. Appl. Probab., 34 (1997), 293–308. ¨ Kolmogoroff, A.N. Uber die Grenzwerts¨atze der Wahrscheinlichkeitsrechnung. Math. Annalen, 101 (1929), 120–126. ¨ Kolmogoroff, A.N. Uber das Gesetz des iterierten Logarithmus. Math. Annalen, 101 (1929), 126–135. ¨ Kolmogoroff, A.N. Uber die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Math. Annalen, 104 (1931), 415–458. Konstantinides, D. and Mikosch, T. Large deviations and ruin probabilities for solutions to stochastic recurrence equations with heavy-tailed innovations. Ann. Probab., 33 (2005), 1992–2035.
[174] Korevaar, J., van Aardenne-Ehrenfest, T. and de Bruijn, N.G. A note on slowly oscillating functions. Nieuw Arch. Wiskunde (2), 23 (1949), 77–86.
[175] Korolyuk, V.S. On the asymptotic distribution of the maximal deviations. Dokl. Akad. Nauk SSSR, 142 (1962), 522–525. (In Russian.)
[176] Korolyuk, V.S. Asymptotic analysis of distributions of maximum deviations in a lattice random walk. Theory Probab. Appl., 7 (1962), 383–401.
[177] Korshunov, D.A. On distribution tail of the maximum of a random walk. Stoch. Proc. Appl., 72 (1997), 97–103.
[178] Korshunov, D.A. Large deviations probabilities for maxima of sums of independent random variables with negative mean and subexponential distribution. Theory Probab. Appl., 46 (2001), 355–366.
[179] Korshunov, D.A., Shlegel, S. and Shmidt, F. Asymptotic analysis of random walks with dependent heavy-tailed increments. Siberian Math. J., 44 (2003), 833–844.
[180] Landau, E. Darstellung und Begründung Einiger Neuer Ergebnisse der Funktionentheorie, 2nd edn (Springer, Berlin, 1929).
[181] Leland, W.E., Taqqu, M.S., Willinger, W. and Wilson, D.V. On the self-similar nature of Ethernet traffic, in Proc. SIGCOMM '93 (1993), 183–193.
[182] LePage, R., Woodroofe, M. and Zinn, J. Convergence to a stable distribution via order statistics. Ann. Probab., 9 (1981), 624–632.
[183] Linnik, Yu.V. On the probability of large deviations for the sums of independent variables, in Proc. Fourth Berkeley Symp. Math. Stat. Prob. 2 (University of California Press, Berkeley, 1961), pp. 289–306.
[184] Linnik, Yu.V. Limit theorems for sums of independent variables taking into account large deviations. I, II. Theory Probab. Appl., 6 (1961), 113–148, 345–360.
[185] Linnik, Yu.V. Limit theorems for sums of independent variables taking into account large deviations. III. Theory Probab. Appl., 7 (1962), 115–129.
[186] Loève, M. Probability Theory, 4th edn (Springer, New York, 1977).
[187] Malinovskii, V.K. Limit theorems for Harris Markov chains. II. Theory Probab. Appl., 34 (1989), 252–265.
[188] Malinovskii, V.K. Limit theorems for stopped random sequences II: Large deviations. Theory Probab. Appl., 41 (1996), 70–90.
[189] Meerschaert, M.M. and Scheffler, H.-P. Limit Distributions for Sums of Independent Random Vectors: Heavy Tails in Theory and Practice (Wiley, New York, 2001).
[190] Mikosch, T. The law of the iterated logarithm for independent random variables outside the domain of partial attraction of the normal law. Vestnik Leningradskogo Univ. Mat. Mech. Astronom., 3 (1984), 35–39. (In Russian.)
[191] Mikosch, T. and Nagaev, A.V. Large deviations of heavy-tailed sums with applications in insurance. Extremes, 1 (1998), 81–110.
[192] Mogulskii, A.A. Integro-local theorem for sums of random variables with regularly varying distributions valid on the whole real line. Siberian Math. J. (In print.)
[193] Mogulskii, A.A. and Rogozin, B.A. A local theorem for the first hitting time of a fixed level by a random walk. Siberian Adv. Math., 15 (2005), 1–27.
[194] Nagaev, A.V. Limit theorems that take into account large deviations when Cramér's condition is violated. Izv. Akad. Nauk. UzSSR, Ser. Fiz-Mat Nauk, 13 (1969), 17–22. (In Russian.)
[195] Nagaev, A.V. Integral limit theorems taking large deviations into account when Cramér's condition does not hold. I, II. Theory Probab. Appl., 14 (1969), 51–64; 193–208.
[196] Nagaev, A.V. On a property of sums of independent random variables. Theory
Probab. Appl., 22 (1977), 326–338.
[197] Nagaev, A.V. On the asymmetric problem of large deviations when the limit law is stable. Theory Probab. Appl., 28 (1983), 670–680.
[198] Nagaev, A.V. Cramér large deviations when the extreme conjugate distribution is heavy-tailed. Theory Probab. Appl., 43 (1998), 405–421.
[199] Nagaev, A.V. and Zaigraev, A. Multidimensional limit theorems allowing large deviations for densities of regular variation. J. Multivariate Anal., 67 (1998), 385–397.
[200] Nagaev, A.V. and Zaigraev, A. New large deviation local theorems for sums of independent and identically distributed random vectors when the limit is α-stable. Bernoulli, 11 (2005), 665–688.
[201] Nagaev, S.V. Some limit theorems for large deviations. Theory Probab. Appl., 10 (1965), 214–235.
[202] Nagaev, S.V. On the speed of convergence in a boundary problem. I, II. Theory Probab. Appl., 15 (1970), 163–186, 403–429.
[203] Nagaev, S.V. On the speed of convergence of the distribution of maximum sums of independent random variables. Theory Probab. Appl., 15 (1970), 309–314.
[204] Nagaev, S.V. Asymptotic expansions for the maximum of sums of independent random variables. Theory Probab. Appl., 15 (1970), 514–515.
[205] Nagaev, S.V. Large deviations for sums of independent random variables, in Trans. Sixth Prague Conf. on Information Theory, Random Proc. and Statistical Decision Functions (Academia, Prague, 1973), pp. 657–674.
[206] Nagaev, S.V. Large deviations of sums of independent random variables. Ann. Probab., 7 (1979), 745–789.
[207] Nagaev, S.V. On the asymptotic behavior of one-sided large deviation probabilities. Theory Probab. Appl., 26 (1981), 362–366.
[208] Nagaev, S.V. Probabilities of large deviations in Banach spaces. Math. Notes, 34 (1983), 638–640.
[209] Nagaev, S.V. and Pinelis, I.F. Some estimates for large deviations and their application to strong law of large numbers. Siberian Math. J., 15 (1974), 153–158.
[210] Ng, K.W., Tang, Q., Yan, J.-A. and Yang, H. Precise large deviations for sums of random variables with consistently varying tails. J. Appl. Prob., 41 (2004), 93–107.
[211] Nikias, C.L. and Shao, M. Signal Processing with Alpha-stable Distributions and Applications (Wiley, New York, 1995).
[212] Osipov, L.V. On probabilities of large deviations for sums of independent random variables. Theory Probab. Appl., 17 (1972), 309–331.
[213] Pakes, A. On the tails of waiting-time distributions. J. Appl. Probab., 12 (1975), 555–564.
[214] Pakshirajan, R.P. and Vasudeva, R. A law of the iterated logarithm for stable summands. Trans. Amer. Math. Soc., 232 (1977), 33–42.
[215] Park, K. and Willinger, W., eds. Self-similar Network Traffic and Performance Evaluation (Wiley, New York, 2000).
[216] Paulauskas, V.I. Estimates of the remainder term in limit theorems in the case of stable limit law. Lithuanian Math. J., 14 (1974), 127–146.
[217] Paulauskas, V.I. Uniform and nonuniform estimates of the remainder term in a limit theorem with a stable limit law. Lithuanian Math. J., 14 (1974), 661–672.
[218] Paulauskas, V. and Skučaitė, A. Some asymptotic results for one-sided large deviation probabilities. Lithuanian Math. J., 43 (2003), 318–326.
[219] Petrov, V.V. Generalization of Cramér's limit theorem. Uspehi Matem. Nauk, 9 (1954), 195–202. (In Russian.)
[220] Petrov, V.V. Limit theorems for large deviations when Cramér's condition is violated. Vestnik Leningrad Univ. Math., 19 (1963), 49–68. (In Russian.)
[221] Petrov, V.V. Limit theorems for large deviations violating Cramér's condition. II. Vestnik Leningrad. Univ. Ser. Mat. Meh. Astronom., 19 (1964), 58–75. (In Russian.)
[222] Petrov, V.V. On the probabilities of large deviations for sums of independent random variables. Theory Probab. Appl., 10 (1965), 287–298.
[223] Petrov, V.V. Sums of Independent Random Variables (Springer, Berlin, 1975). (Translated from the 1972 Russian original.)
[224] Petrov, V.V. Limit Theorems of Probability Theory: Sequences of Independent Random Variables (Clarendon Press, Oxford University Press, New York, 1987). (Translated from the 1987 Russian original.)
[225] Pinelis, I.F. A problem on large deviations in a space of trajectories. Theory Probab. Appl., 26 (1981), 69–84.
[226] Pinelis, I.F. On certain inequalities for large deviations. Theory Probab. Appl., 26 (1981), 419–420.
[227] Pinelis, I.F. Asymptotic equivalence of the probabilities of large deviations for sums and maxima of independent random variables. Trudy Inst. Mat., 5 (1985), 144–173. (In Russian.)
[228] Pinelis, I. Exact asymptotics for large deviation probabilities, with applications, in Modelling Uncertainty. Internat. Ser. Oper. Res. Management Sci. 46 (Kluwer, Boston, 2002), pp. 57–93.
[229] Pitman, E.J.G. Subexponential distribution functions. J. Austral. Math. Soc. Ser. A, 29 (1980), 337–347.
[230] Prokhorov, Yu.V. Convergence of random processes and limit theorems in probability theory. Theory Probab. Appl., 1 (1956), 157–214.
[231] Prokhorov, Yu.V. Transition phenomena in queueing processes. Litovsk. Math. Sb., 3 (1963), 199–206. (In Russian.)
[232] Resnick, S.I. Extreme Values, Regular Variation, and Point Processes (Springer, New York, 1987).
[233] Richter, V. Multi-dimensional local limit theorems for large deviations. Theory Probab. Appl., 3 (1958), 100–106.
[234] Rogozin, B.A. On the constant in the definition of subexponential distributions. Theory Probab. Appl., 44 (2001), 409–412.
[235] Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. Stochastic Processes for Insurance and Finance (Wiley, New York, 1999).
[236] Rozovskii, L.V. An estimate for probabilities of large deviations. Math. Notes, 42 (1987), 590–597.
[237] Rozovskii, L.V. Probabilities of large deviations of sums of independent random variables with common distribution function in the domain of attraction of the normal law. Theory Probab. Appl., 34 (1989), 625–644.
[238] Rozovskii, L.V. Probabilities of large deviations on the whole axis. Theory Probab. Appl., 38 (1994), 53–79.
[239] Rozovskii, L.V. Probabilities of large deviations for sums of independent random variables with a common distribution function from the domain of attraction of an asymmetric stable law. Theory Probab. Appl., 42 (1998), 454–482.
[240] Rozovskii, L.V. A lower bound for the probabilities of large deviations of the sum of independent random variables with finite variances. J. Math. Sci., 109 (2002), 2192–2209.
[241] Rozovskii, L.V. Superlarge deviations of a sum of independent random variables having a common absolutely continuous distribution under the Cramér condition.
Theory Probab. Appl., 48 (2003), 108–130.
[242] Rudin, W. Limits of ratios of tails of measures. Ann. Probab., 1 (1973), 982–994.
[243] Rvacheva, E.L. On domains of attraction of multi-dimensional distributions. Select. Transl. Math. Statist. Probab., 2 (1962), 183–205. (Original publication in Russian: L'vov. Gos. Univ. Uč. Zap. Ser. Meh.-Mat., 3 (1954), 5–44.)
[244] Sahanenko, A.I. On the speed of convergence in a boundary problem. Theory Probab. Appl., 19 (1974), 399–403.
[245] Samorodnitsky, G. and Taqqu, M. Stable Non-Gaussian Random Processes (Chapman & Hall, New York, 1994).
[246] Sato, K. Lévy Processes and Infinitely Divisible Distributions (Cambridge University Press, Cambridge, 1999).
[247] Saulis, L. and Statulevičius, V.A. Limit Theorems for Large Deviations (Kluwer, Dordrecht, 1991). (Translated and revised from the 1989 Russian original.)
[248] Schlegel, S. Ruin probabilities in perturbed risk models. Insurance Math. Econom., 22 (1998), 93–104.
[249] Schmidt, R. Über das Borelsche Summierungsverfahren. Schriften der Königsberger gelehrten Gesellschaft, 1 (1925), 202–256.
[250] Schmidt, R. Über divergente Folgen und lineare Mittelbildungen. Mathem. Zeitschrift, 22 (1925), 89–152.
[251] Seneta, E. Regularly Varying Functions (Springer, Berlin, 1976).
[252] Sigman, K. A primer on heavy-tailed distributions. Queueing Systems, 33 (1999), 261–275.
[253] Sgibnev, M.S. Banach algebras of functions with the same asymptotic behavior at infinity. Siberian Math. J., 22 (1981), 467–473.
[254] Shepp, L.A. A local limit theorem. Ann. Math. Statist., 35 (1964), 419–423.
[255] Skorokhod, A.V. Limit theorems for stochastic processes with independent increments. Theory Probab. Appl., 2 (1957), 138–171.
[256] Skorokhod, A.V. Random Processes with Independent Increments (Kluwer, Dordrecht, 1991). (Translated and revised from the 1964 Russian original.)
[257] Sparre Andersen, E. On the collective theory of risk in the case of contagion between the claims, in Trans. XVth Internat. Congress of Actuaries II (New York, 1957), pp. 219–229.
[258] Stone, C. A local limit theorem for nonlattice multi-dimensional distribution functions. Ann. Math. Statist., 36 (1965), 546–551.
[259] Stone, C. On local and ratio limit theorems, in Proc. Fifth Berkeley Symp. Math. Stat. Prob. II(2), ed. Neyman, J. (University of California Press, Berkeley, 1967), pp. 217–224.
[260] Stout, W. Almost Sure Convergence (Academic Press, New York, 1974).
[261] Strassen, V. A converse to the law of the iterated logarithm. Z. Wahrscheinlichkeitstheorie verw. Geb., 4 (1966), 265–268.
[262] Tang, Q., Su, Ch., Jiang, T. and Zhang, J. Large deviations for heavy-tailed random sums in compound renewal model. Statist. Probab. Lett., 52 (2001), 91–100.
[263] Tang, Q. and Tsitsiashvili, G. Precise estimates for the ruin probability in finite horizon in a discrete-time model with heavy-tailed insurance and financial risks. Stoch. Proc. Appl., 108 (2004), 299–325.
[264] Tang, Q. and Yang, J. A sharp inequality for the tail probabilities of sums of i.i.d. r.v.'s with dominatedly varying tails. Sci. China A, 45 (2002), 1006–1011.
[265] Teugels, J.L. The sub-exponential class of probability distributions. Theory Probab. Appl., 19 (1974), 821–822.
[266] Teugels, J.L. The class of subexponential distributions. Ann. Probab., 3 (1975),
1000–1011.
[267] Teugels, J.L. and Willekens, E. Asymptotic expansions for waiting time probabilities in an M/G/1 queue with long tailed service time. Queueing Systems Theory Appl., 10 (1992), 295–311.
[268] Thorin, O. Some remarks on the ruin problem in case the epochs of claims form a renewal process. Skand. Aktuarietidskr. (1970), 29–50.
[269] Thorin, O. and Wikstad, N. Calculation of ruin probabilities when the claim distribution is lognormal. Astin Bulletin, 9 (1976), 231–246.
[270] Tkačuk, S.G. Local limit theorems, allowing for large deviations, in the case of stable limit laws. Izv. Akad. Nauk UzSSR Ser. Fiz.-Mat. Nauk, 17 (1973), 30–33. (In Russian.)
[271] Tkačuk, S.G. A theorem on large deviations in R^s in case of a stable limit law, in Random processes and statistical inference, 4 (Fan, Tashkent, 1974), pp. 178–184.
[272] Shneer, V.V. Estimates for the distributions of the sums of subexponential random variables. Siberian Math. J., 45 (2004), 1143–1158.
[273] Uchaikin, V.V. and Zolotarev, V.M. Chance and Stability (VSP Press, Utrecht, 1999).
[274] Vasudeva, R. Chover's law of the iterated logarithm and weak convergence. Acta Math. Hungar., 44 (1984), 215–221.
[275] Veraverbeke, N. Asymptotic behaviour of Wiener–Hopf factors of a random walk. Stoch. Proc. Appl., 5 (1977), 27–37.
[276] Vinogradov, V. Refined Large Deviation Limit Theorems (Longman, Harlow, 1994).
[277] Vinogradov, V.V. and Godovanchuk, V.V. Large deviations of sums of independent random variables without several maximal summands. Theory Probab. Appl., 34 (1989), 512–515.
[278] von Bahr, B. Asymptotic ruin probabilities when exponential moments do not exist. Scand. Actuarial Journal (1975), 6–10.
[279] Williamson, J.A. Random walks and Riesz kernels. Pacific J. Math., 25 (1968), 393–415.
[280] Wolf, W. On probabilities of large deviations in the case in which Cramér's condition is violated. Math. Nachr., 70 (1975), 197–215. (In Russian.)
[281] Wolf, W. Asymptotische Entwicklungen für Wahrscheinlichkeiten grosser Abweichungen. Z. Wahrscheinlichkeitstheorie verw. Geb., 40 (1977), 239–256.
[282] Zachary, S. A note on Veraverbeke's theorem. Queueing Systems, 46 (2004), 9–14.
[283] Zaigraev, A. Multivariate large deviations with stable limit laws. Probab. Math. Statist., 19 (1999), 323–335.
[284] Zaigraev, A.Yu. and Nagaev, A.V. Abelian theorems, limit properties of conjugate distributions, and large deviations for sums of independent random vectors. Theory Probab. Appl., 48 (2004), 664–680.
[285] Zaigraev, A.Yu., Nagaev, A.V. and Jakubowski, A. Probabilities of large deviations of the sums of lattice random vectors when the original distribution has heavy tails. Discrete Math. Appl., 7 (1997), 313–326.
[286] Zolotarev, V.M. One-dimensional Stable Distributions (American Mathematical Society, Providence RI, 1986). (Translated from the 1983 Russian original.)
Index
Abelian type theorem, 9
arithmetic distribution, 44, 302
boundary stopping time, 349
condition
  Cramér's, xx, 234, 303, 371, 398
  Lindeberg's, 502
conjugate distribution, 301, 382
convolution, 14
  of sequences, 44
Cramér
  approximation, 235, 251–253, 298, 433
  condition, xx, 234, 303, 371, 398
  deviation zone, 252
  transform, 301, 382, 393
density, subexponential, 46
deviation function, 305, 321, 392, 555
deviations
  moderately large, 309
  normal, 309
  super-large, 308
distribution
  arithmetic, 302
  classes, see below
  conjugate, 301, 382
  exponentially tilted, 301
  function of, 335
  locally subexponential, 46, 47, 358
  regularly varying exponentially decaying, 307
  semiexponential, 29, 233
  stable, 61
  strongly subexponential, 237, 250, 274, 546
  subexponential, 14
  tail, 11, 14, 57
domain of attraction, 62
Esscher transform, 301
exponentially tilted distribution, 301
extreme deviation zone, 252
function
  deviation, 305, 321, 392, 555
  generalized inverse, 58, 83, 508
  locally constant (l.c.), 16, 348, 358–360
  regularly varying (r.v.f.), 1
  slowly varying (s.v.f.), 1
  upper-power, 28, 224, 307, 358, 362
  ψ-locally constant (ψ-l.c.), 18, 224
function of distribution, 335
generalized
  inverse function, 58, 83, 508
  renewal process, 543
inequality, Kolmogorov–Doob type, 82
integral representation theorem, 2
intermediate deviation zone, 252
invariance principle, 76, 395
iterated logarithm, law of the, 79
Kolmogorov–Doob type inequality, 82
large deviation rate function, 305, 321, 392
law of the iterated logarithm, 79
level lines, 320
Lindeberg condition, 502
locally constant function, 16, 348, 358–360
locally subexponential distribution, 46, 47, 358
Markov time, 345, 428
martingale, 506
moderately large deviations, 309
normal deviations, 309
partial factorization, 548, 554
process
  generalized renewal, 543
  renewal, 543
  stable, 78, 469
  Wiener, 75, 76, 472
random walk, 14
  defined on a Markov chain, 508
regularly varying exponentially decaying distribution, 307
regularly varying function (r.v.f.), 1
renewal process, 543
semiexponential distribution, 29, 233
sequence
  subexponential, 44
  convolution of, 44
Skorokhod metric, 75
slowly varying function (s.v.f.), 1
stable
  distribution, 61
  process, 78, 469
stopping time, 345, 428
  boundary, 349
strongly subexponential distribution, 237, 250, 274, 546
subexponential
  density, 46
  distribution, 14
  sequence, 44
super-large deviations, 308
tail, two-sided, 57
Tauberian theorem, 10
theorem
  Abelian type, 9
  integral representation, 2
  Tauberian, 10
  uniform convergence, 2
  Wiener–Lévy, 55
time, stopping, 345, 428
transform
  Cramér, 301, 382, 393
  Esscher, 301
  Laplace, 9
transient phenomena, 439, 471, 503
uniform convergence theorem, 2
upper-power function, 28, 224, 307, 358, 362
Wiener process, 75, 76, 472, 502, 503
Wiener–Lévy theorem, 55
ψ-(asymptotically) locally constant (ψ-l.c.) function, 18, 224
Boundary classes
  G_{x,K}, 586
  G_{x,n}, 155, 455
  G_{x,T,ε}, 526
  G_{x,T}, 526
Conditions and properties [