Stochastic Systems In Merging Phase Space
Stochastic Systems in Merging Phase Space
Vladimir S. Koroliuk, Institute of Mathematics, National Academy of Sciences, Ukraine
Nikolaos Limnios, Applied Mathematics Laboratory, University of Technology of Compiègne, France
World Scientific
New Jersey · London · Singapore · Beijing · Shanghai · Hong Kong · Taipei · Chennai
Published by World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
Copyright © 2005 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN 981-256-591-4
Printed in Singapore by World Scientific Printers (S) Pte Ltd
Preface
"... the theory of systems should be built on the methods of simplification and is, essentially, the science of simplification."
Walter Ashby (1969)
The actual problem of systems theory is the development of mathematically justified methods of simplification of complicated systems whose mathematical analysis is difficult to perform even with the help of modern computers. The main difficulties are caused by the complexity of the phase (state) space of the system, which leads to virtually boundless mathematical models. A simplified model for a system must satisfy the following conditions:
(i) The local characteristics of the simplified model are determined by rather simple functions of the local characteristics of the original model.
(ii) The global characteristics describing the behavior of the stochastic system can be effectively calculated on large enough time intervals.
(iii) The simplified model admits an effective mathematical analysis, and the global characteristics of the simplified model are close enough to the corresponding characteristics of the original model for applications.
Stochastic systems considered in the present book are evolutionary systems in a random medium, that is, dynamical systems whose state space is subject to random variations. From a mathematical point of view, such systems are naturally described by operator-valued stochastic processes on Banach spaces and are nowadays known as random evolutions. This book gives recent results on stochastic approximation of systems by weak convergence techniques. General and particular schemes of proofs for average, diffusion, diffusion with equilibrium, and Poisson approximations of stochastic systems are presented. The particular systems studied here
are stochastic additive functionals, dynamical systems, stochastic integral functionals, increment processes, and impulsive processes. As applications we give the cases of absorption times, stationary phase merging, semi-Markov random walk, Lévy approximation, etc.
The main mathematical object of this book is a family of coupled stochastic processes $\xi^{\varepsilon}(t), x^{\varepsilon}(t)$, $t \ge 0$, $\varepsilon > 0$ (where $\varepsilon$ is the small series parameter), called the switched and switching processes. The switched process $\xi^{\varepsilon}(t)$, $t \ge 0$, describes the system evolution and, in general, is a stochastic functional of a third process. The switching process $x^{\varepsilon}(t)$, $t \ge 0$, also called the driving or modulating process, is the perturbing process or the random medium, and can represent the environment, the technical structure, or any perturbation factor. The two modes of switching considered here are Markovian and semi-Markovian. Of course, we could present only the semi-Markov case, since the Markov case is a special case of it. But we present both, mainly for two reasons: first, the proofs are simpler in the Markov case; second, most readers are mainly interested in the Markov case.
The switching processes are considered in a phase split and merging scheme. The phase merging scheme is based on the split of the phase space into disjoint classes
$$E = \bigcup_{k \in V} E_k, \qquad E_k \cap E_{k'} = \emptyset, \quad k \ne k', \tag{0.1}$$
and on the further merging of these classes $E_k$, $k \in V$, into distinct states $k \in V$. So the merged phase space of the simplified model of the system is $\widehat{E} = V$ (see Figure 4.1).¹ The transitions (connections) between the states of the original system $S$ are merged to yield the transitions between merged states of the merged system $\widehat{S}$. The analysis of the merged system is thus significantly simplified. It is important to note that an additional supporting system $S_0$, with the same phase space $E$ but without connections between the classes of states $E_k$, is used. Splitting the phase space as in (0.1) just means introducing a new supporting system consisting of isolated subsystems $S_k$, $k \in V$, defined on the classes of states $E_k$, $k \in V$. The merged system $\widehat{S}$ is constructed with respect to the ergodic property of the supporting system $S_0$. It is worth noticing that the initial processes in the series scheme contain no diffusion part. Diffusion processes appear only as limit processes.
¹Figures, theorems, lemmas, etc. are numbered by x.y, where x is the number of the chapter, and y is the number of the figure, theorem, etc. within the chapter.
The general scheme of proof of weak convergence for stochastic processes in the series scheme is the following.

I. Limit compensating operator:
1. Construction of the compensating operator $\mathbb{L}^{\varepsilon}$ of the Markov additive process $\zeta^{\varepsilon}(t)$, $t \ge 0$.
2. Asymptotic form of $\mathbb{L}^{\varepsilon}$ acting on a suitable class of test functions $\varphi^{\varepsilon}$.
3. Singular perturbation problem: $\mathbb{L}^{\varepsilon}\varphi^{\varepsilon} = \mathbb{L}\varphi + \varepsilon\theta^{\varepsilon}$.

II. Tightness:
1. Compact containment condition:
$$\lim_{M \to \infty}\ \sup_{0 < \varepsilon \le \varepsilon_0} \mathbb{P}\Big(\sup_{0 \le t \le T} |\xi^{\varepsilon}(t)| > M\Big) = 0.$$

In this book, the space where the trajectories of processes are considered is $D[0,\infty)$, the space of right-continuous functions having left-hand limits, endowed with the Skorokhod metric. We call such trajectories and processes càdlàg. Let also $C[0,\infty)$ be the subspace of $D[0,\infty)$ of continuous functions, with the sup-norm $\|x\| = \sup_{t \ge 0} |x(t)|$, $x \in C[0,\infty)$. These two spaces are Polish spaces (see Appendix A). Let $\mathbf{B}$ be the Banach space, that is, a complete linear normed space, of all bounded real-valued measurable functions on $E$, with the sup-norm
$$\|\varphi\| = \sup_{x \in E} |\varphi(x)|, \qquad \varphi \in \mathbf{B}.$$
Let us also consider a fixed stochastic basis $\mathfrak{B} = (\Omega, \mathcal{F}, \mathbb{F} = (\mathcal{F}_t, t \ge 0), \mathbb{P})$, where $(\mathcal{F}_t, t \ge 0)$ (for discrete time, $\mathbb{F} = (\mathcal{F}_n, n \ge 0)$) is a filtration of sub-$\sigma$-algebras of $\mathcal{F}$, that is, $\mathcal{F}_s \subset \mathcal{F}_t \subset \mathcal{F}$ for all $s < t$ and $t \ge 0$. The filtration $\mathbb{F} = (\mathcal{F}_t, t \ge 0)$ is said to be complete if $\mathcal{F}_0$ contains all the $\mathbb{P}$-null sets. Set $\mathcal{F}_{t+} = \bigcap_{s > t} \mathcal{F}_s$, $t \ge 0$. If $\mathcal{F}_t = \mathcal{F}_{t+}$ for any $t \ge 0$, then the filtration $\mathcal{F}_t$, $t \ge 0$, is said to be right-continuous. If a filtration is complete and right-continuous, we say that it satisfies the usual conditions.
CHAPTER 1. MARKOV AND SEMI-MARKOV PROCESSES
A mapping $T : \Omega \to [0, +\infty]$ such that $\{T \le t\} \in \mathcal{F}_t$ is called a stopping time. If $T$ is a stopping time, we denote by $\mathcal{F}_T$ the collection of all sets $A \in \mathcal{F}$ such that $A \cap \{T \le t\} \in \mathcal{F}_t$. An $(E, \mathcal{E})$-valued stochastic process $x(t)$, $t \in I$ ($I = \mathbb{R}_+$ or $I = \mathbb{N}$), defined on the stochastic basis $\mathfrak{B}$, is adapted if, for any $t \in I$, $x(t)$ is $\mathcal{F}_t$-measurable. The set of values $E$ is said to be the state (or phase) space of the process $x(t)$, $t \ge 0$. Given a probability measure $\alpha$ on $(E, \mathcal{E})$, we define the probability measure $\mathbb{P}_\alpha$ by
$$\mathbb{P}_\alpha(B) = \int_E \alpha(dx)\, \mathbb{P}_x(B),$$
where $\mathbb{P}_x(B) := \mathbb{P}(B \mid x(0) = x)$. We denote by $\mathbb{E}_\alpha$ and $\mathbb{E}_x$ the expectations corresponding respectively to $\mathbb{P}_\alpha$ and $\mathbb{P}_x$. We will also consider the following spaces, endowed with the corresponding sup-norms:
- $\mathbf{B}$ is the Banach space of real-valued measurable bounded functions $\varphi(u, x)$, $u \in \mathbb{R}^d$, $x \in E$;
- $\mathbf{B}^1 := C^1(\mathbb{R}^d \times E) \cap \mathbf{B}$ is the Banach space of functions continuously differentiable in $u \in \mathbb{R}^d$, uniformly in $x \in E$, with bounded first derivative;
- $\mathbf{B}^2 := C^2(\mathbb{R}^d \times E) \cap \mathbf{B}$ is the Banach space of functions twice continuously differentiable in $u \in \mathbb{R}^d$, uniformly in $x \in E$, with bounded first two derivatives.
1.2 Markov Processes

1.2.1 Markov Chains
Definition 1.1 A positive-valued function $P(x, B)$, $x \in E$, $B \in \mathcal{E}$, is called a Markov transition function, or a Markov kernel, or a transition (probability) kernel, if:
1) for any fixed $x \in E$, $P(x, \cdot)$ is a probability measure on $(E, \mathcal{E})$; and
2) for any fixed $B \in \mathcal{E}$, $P(\cdot, B)$ is a Borel measurable function, that is, an $\mathcal{E}$-measurable function.
If $P(x, E) \le 1$ for all $x \in E$, then $P$ is said to be a sub-Markov kernel. If, for fixed $x \in E$, $P(x, \cdot)$ is a signed measure, then it is said to be a
signed kernel. In that case, we will suppose that the signed kernel $P$ is of bounded variation, that is,
$$|P|(x, E) < +\infty. \tag{1.1}$$
If $E$ is a finite or countable set, we take $\mathcal{E} = \mathcal{P}(E)$ (the set of all subsets of $E$); the Markov kernel is then determined by the matrix $(P(i,j);\ i, j \in E)$, with $P(i, B) = \sum_{j \in B} P(i, j)$, $B \in \mathcal{E}$.
Definition 1.2 A time-homogeneous Markov chain associated to a Markov kernel $P(x, B)$ is an adapted sequence of random variables $x_n$, $n \ge 0$, defined on some stochastic basis $\mathfrak{B}$, satisfying, for every $n \in \mathbb{N}$, $x \in E$, and $B \in \mathcal{E}$, the relation
$$\mathbb{P}(x_{n+1} \in B \mid \mathcal{F}_n) = \mathbb{P}(x_{n+1} \in B \mid x_n) =: P(x_n, B), \quad \text{(a.s.)}, \tag{1.2}$$
which is called the Markov property. In most cases we consider the Markov property with respect to $\mathcal{F}_n := \sigma(x_k, k \le n)$, $n \ge 0$, the natural filtration generated by the chain $x_n$, $n \ge 0$. When the Markov property (1.2) is satisfied for any finite $\mathcal{F}_n$-stopping time, it is called the strong Markov property, and the chain a strong Markov chain.
The product of two Markov kernels $P$ and $Q$ defined on $(E, \mathcal{E})$ is also a Markov kernel, defined by
$$PQ(x, B) = \int_E P(x, dy)\, Q(y, B). \tag{1.3}$$
Let us denote by $P^n(x, B) = \mathbb{P}(x_n \in B \mid x_0 = x) = \mathbb{P}(x_{n+m} \in B \mid x_m = x)$ the $n$-step transition probability, which is defined inductively by (1.3). By the Markov property we get, for $n, m \in \mathbb{N}$,
$$P^{n+m}(x, B) = \int_E P^n(x, dy)\, P^m(y, B),$$
which is the Chapman-Kolmogorov equation.
A subset $B \in \mathcal{E}$ is called accessible from a state $x \in E$ if
$$\mathbb{P}_x(x_n \in B, \text{ for some } n \ge 1) > 0,$$
or, equivalently, $P^n(x, B) > 0$ for some $n \ge 1$.
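For a finite phase space, the Chapman-Kolmogorov equation is just the multiplicativity of matrix powers. A quick numerical check (the matrix is an arbitrary illustration):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

def n_step(P, n):
    # P^n(x, B), defined inductively by the product of kernels (1.3):
    # for a finite phase space this is the n-th matrix power.
    return np.linalg.matrix_power(P, n)

# Chapman-Kolmogorov: P^{n+m}(x, B) = sum_y P^n(x, dy) P^m(y, B).
n, m = 3, 4
lhs = n_step(P, n + m)
rhs = n_step(P, n) @ n_step(P, m)
assert np.allclose(lhs, rhs)
```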
Definition 1.3
1) A Markov chain $x_n$, $n \ge 0$, is called Harris recurrent if there exists a $\sigma$-finite measure $\psi$ on $(E, \mathcal{E})$, with $\psi(E) > 0$, such that
$$\mathbb{P}_x\Big(\bigcup_{n \ge 1} \{x_n \in A\}\Big) = 1, \quad x \in E, \tag{1.4}$$
for any $A \in \mathcal{E}$ with $\psi(A) > 0$.
2) If the probability (1.4) is positive, the Markov chain is called $\psi$-irreducible.
3) The Markov chain is said to be uniformly irreducible if, for any $A \in \mathcal{E}$,
$$\sup_x \mathbb{P}_x(\tau_A > N) \to 0, \quad N \to \infty, \tag{1.5}$$
where $\tau_A := \inf\{n \ge 0 : x_n \in A\}$ is the hitting time of the set $A \in \mathcal{E}$.
Definition 1.4 A Markov chain $x_n$, $n \ge 0$, is said to be $d$-periodic ($d > 1$) if there exists a cycle, that is, a sequence $(C_1, \ldots, C_d)$ of sets, $C_i \in \mathcal{E}$, $1 \le i \le d$, with $P(x, C_{j+1}) = 1$, $x \in C_j$, $1 \le j \le d-1$, and $P(x, C_1) = 1$, $x \in C_d$, such that:
- the set $E \setminus \bigcup_{i=1}^{d} C_i$ is $\psi$-null;
- if $(C_1', \ldots, C_{d'}')$ is another cycle, then $d'$ divides $d$, and $C_i'$ differs from a union of $d/d'$ members of $(C_1, \ldots, C_d)$ only by a $\psi$-null set of the type $\bigcup_{i \ge 1} V_i$, where, for any $i \ge 1$, $\mathbb{P}_x(\limsup_n \{x_n \in V_i\}) = 0$.
If $d = 1$, then the Markov chain is said to be aperiodic.

Definition 1.5 A probability measure $\rho$ on $(E, \mathcal{E})$ is said to be a stationary distribution, or invariant probability, for the Markov chain $x_n$, $n \ge 0$ (or for the Markov kernel $P(x, B)$) if, for any $B \in \mathcal{E}$,
$$\rho(B) = \int_E \rho(dx)\, P(x, B). \tag{1.6}$$
Definition 1.6
1) If a Markov chain is $\psi$-irreducible and has an invariant probability, it is called positive; otherwise it is called null.
2) If a Markov chain is Harris recurrent and positive, it is called Harris positive.
3) If a Markov chain is aperiodic and Harris positive, it is called (Harris) ergodic.

Proposition 1.1 Let $x_n$, $n \ge 0$, be an ergodic Markov chain. Then:
1) for any probability measure $\alpha$ on $(E, \mathcal{E})$, we have
$$\|\alpha P^n - \rho\| \to 0, \quad n \to \infty;$$
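Part 1) of Proposition 1.1 can be illustrated numerically for a small chain; the three-state matrix below is an arbitrary illustration, not from the book:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# Stationary distribution rho: the left fixed point rho P = rho,
# computed from the eigenvector of P^T for eigenvalue 1.
w, v = np.linalg.eig(P.T)
rho = np.real(v[:, np.argmin(np.abs(w - 1))])
rho = rho / rho.sum()
assert np.allclose(rho @ P, rho)

# Part 1) of Proposition 1.1: ||alpha P^n - rho|| -> 0 for any
# initial distribution alpha (total-variation-type norm).
alpha = np.array([1.0, 0.0, 0.0])
dists = [np.abs(alpha @ np.linalg.matrix_power(P, n) - rho).sum()
         for n in range(20)]
assert dists[19] < 1e-6 and dists[19] <= dists[1]
```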
2) for any $\varphi \in \mathbf{B}$, we have
$$\mathbb{E}_\mu[\varphi(x_n)] \to \int_E \rho(dx)\, \varphi(x), \quad n \to \infty,$$
for any probability measure $\mu$ on $(E, \mathcal{E})$.
Let us denote by $P$ the operator of transition probabilities on $\mathbf{B}$, defined by
$$P\varphi(x) = \mathbb{E}[\varphi(x_{n+1}) \mid x_n = x] = \int_E P(x, dy)\, \varphi(y),$$
and denote by $P^n$ the $n$-step transition operator corresponding to $P^n(x, B)$. The Markov property (1.2) can be represented in the following form:
$$\mathbb{E}[\varphi(x_{n+1}) \mid \mathcal{F}_n] = P\varphi(x_n), \quad \text{(a.s.)}.$$

Definition 1.7 Let us denote by $\Pi$ the stationary projector in $\mathbf{B}$ defined by the stationary distribution $\rho(B)$, $B \in \mathcal{E}$, of the Markov chain $x_n$, as follows:
$$\Pi\varphi(x) := \int_E \rho(dy)\, \varphi(y)\, \mathbb{1}(x),$$
where $\mathbb{1}(x) = 1$ for all $x \in E$. Of course, we have $\Pi^2 = \Pi$.
Definition 1.8 The Markov chain $x_n$ is called uniformly ergodic if
$$\|P^n - \Pi\| \to 0, \quad n \to \infty. \tag{1.7}$$
Note that uniform ergodicity implies Harris recurrence [117, 80, 137, 139]. Moreover, the convergence in (1.7) is of exponential rate (see, e.g., [137]). So the series
$$R_0 := \sum_{n=0}^{\infty} [P^n - \Pi]$$
is convergent and defines the potential operator of the Markov chain $x_n$, $n \ge 0$, satisfying the property (see Section 1.6)
$$R_0[I - P] = [I - P]R_0 = I - \Pi.$$
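The potential operator and its defining identity can be checked numerically for a finite uniformly ergodic chain. A sketch, assuming an arbitrary three-state kernel:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# Stationary projector Pi: Pi phi(x) = sum_y rho(y) phi(y), i.e. the
# rank-one matrix whose every row equals rho.
w, v = np.linalg.eig(P.T)
rho = np.real(v[:, np.argmin(np.abs(w - 1))])
rho = rho / rho.sum()
Pi = np.tile(rho, (3, 1))

# Potential operator R0 = sum_{n>=0} (P^n - Pi); the series converges
# at an exponential rate for a uniformly ergodic chain.
I = np.eye(3)
R0 = sum(np.linalg.matrix_power(P, n) - Pi for n in range(200))

# The defining property: R0 [I - P] = [I - P] R0 = I - Pi.
assert np.allclose(R0 @ (I - P), I - Pi, atol=1e-8)
assert np.allclose((I - P) @ R0, I - Pi, atol=1e-8)
```

As a cross-check, $R_0$ coincides with the deviation matrix $(I - P + \Pi)^{-1} - \Pi$ familiar from finite Markov chain theory.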
1.2.2 Continuous Time Markov Processes
Let us consider a family of Markov kernels $(P_t = P_t(x, B), t \in \mathbb{R}_+)$ on $(E, \mathcal{E})$. Let an adapted $(E, \mathcal{E})$-valued stochastic process $x(t)$, $t \ge 0$, be defined on some stochastic basis $\mathfrak{B}$.
Definition 1.9 A stochastic process $x(t)$, $t \ge 0$, is said to be a time-homogeneous Markov process if, for any fixed $s, t \in \mathbb{R}_+$ and $B \in \mathcal{E}$,
$$\mathbb{P}(x(t+s) \in B \mid \mathcal{F}_s) = \mathbb{P}(x(t+s) \in B \mid x(s)) = P_t(x(s), B), \quad \text{(a.s.)}. \tag{1.8}$$
When the Markov property (1.8) holds for any finite $\mathbb{F}$-stopping time $\tau$ instead of a deterministic time $s$, we say that the Markov process $x(t)$, $t \ge 0$, satisfies the strong Markov property, and that $x(t)$ is a strong Markov process.

Definition 1.10 On the Banach space $\mathbf{B}$, the operator $P_t$ of transition probability is defined by
$$P_t\varphi(x) = \mathbb{E}[\varphi(x(t)) \mid x(0) = x] = \int_E P_t(x, dy)\, \varphi(y). \tag{1.9}$$
This is a contractive operator (that is, $\|P_t\varphi\| \le \|\varphi\|$). The Chapman-Kolmogorov equation is equivalent to the following semigroup property of $P_t$:
$$P_t P_s = P_{t+s}, \quad \text{for all } t, s \in \mathbb{R}_+. \tag{1.10}$$
The Markov process $x(t)$, $t \ge 0$, has a stationary (or invariant) distribution, $\pi$ say, if, for any $B \in \mathcal{E}$,
$$\pi(B) = \int_E \pi(dx)\, P_t(x, B), \quad t \ge 0.$$
Definition 1.11 The Markov process $x(t)$, $t \ge 0$, is said to be ergodic if, for every $\varphi \in \mathbf{B}$, we have
$$\mathbb{E}_\mu[\varphi(x(t))] \to \int_E \pi(dx)\, \varphi(x), \quad t \to \infty,$$
for any probability measure $\mu$ on $(E, \mathcal{E})$.
The stationary projector $\Pi$ of an ergodic Markov process with stationary distribution $\pi$ is defined as follows (see Definition 1.7):
$$\Pi\varphi(x) := \int_E \pi(dy)\, \varphi(y)\, \mathbb{1}(x),$$
where $\mathbb{1}(x) = 1$ for all $x \in E$. Of course, we have $\Pi^2 = \Pi$.
Let us consider a Markov process $x(t)$, $t \ge 0$, on the stochastic basis $\mathfrak{B}$, with trajectories in $D[0,\infty)$ and semigroup $(P_t, t \ge 0)$. There exists a linear operator $Q$ acting on $\mathbf{B}$, defined by
$$Q\varphi = \lim_{t \downarrow 0} \frac{1}{t}(P_t\varphi - \varphi), \tag{1.11}$$
where the limit exists in norm. Let $D(Q)$ be the subset of $\mathbf{B}$ for which the above limit exists; this is the domain of the operator $Q$. The operator $Q$ is called a (strong) generator, or (strong) infinitesimal operator.
Definition 1.12 A Markov semigroup $P_t$, $t \ge 0$, is said to be uniformly continuous on $\mathbf{B}$ if
$$\lim_{t \downarrow 0} \|P_t - I\| = 0,$$
where $I$ is the identity operator on $\mathbf{B}$.
A time-homogeneous Markov process is said to be (purely) discontinuous, or of jump type, if its semigroup is uniformly continuous. In that case, the process stays in any state for a strictly positive time, and after leaving a state it moves directly to another one. We will call it a jump Markov process [34, 56, 153, 165].
Let $x(t)$, $t \ge 0$, be a time-homogeneous jump Markov process. Let $\tau_n$, $n \ge 0$, be the jump times, for which we have $0 = \tau_0 \le \tau_1 \le \cdots \le \tau_n \le \cdots$. A Markov process is said to be regular (non-explosive) if $\tau_n \to +\infty$, as $n \to \infty$ (see, e.g., [56]).
Definition 1.13 The stochastic process $x_n$, $n \ge 0$, defined by
$$x_n = x(\tau_n), \quad n \ge 0,$$
is called the embedded Markov chain of the Markov process $x(t)$, $t \ge 0$.
Let $P(x, B)$ be the transition probability of $x_n$, $n \ge 0$. The generator $Q$ of the jump Markov process $x(t)$, $t \ge 0$, is of the form (see, e.g., [56, 134])
$$Q\varphi(x) = q(x) \int_E P(x, dy)\, [\varphi(y) - \varphi(x)], \tag{1.12}$$
where the kernel $P(x, dy)$ is the transition kernel of the embedded Markov chain, and $q(x)$, $x \in E$, is the intensity of jumps function.
Proposition 1.2 (see, e.g., [165, 52]) Let $(P_t, t \ge 0)$ be a uniformly continuous semigroup on $\mathbf{B}$, and $Q$ its generator with domain $D(Q) \subset \mathbf{B}$. Then:
1) the limit in (1.11) exists, and the operator $Q$ is bounded, with $D(Q) = \mathbf{B}$;
2) $dP_t/dt = QP_t = P_tQ$;
3) $P_t = \exp(tQ) = I + \sum_{k \ge 1} (tQ)^k / k!$.
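Proposition 1.2 can be illustrated on a finite phase space, where $Q$ is a rate matrix of the form (1.12) and $P_t = \exp(tQ)$. The intensities and embedded-chain kernel below are arbitrary illustrations:

```python
import numpy as np

# Generator of a jump Markov process on a finite phase space, in the
# form (1.12): Q = diag(q) (P - I), a rate matrix with zero row sums.
q = np.array([1.0, 2.0, 0.5])          # jump intensities q(x)
P = np.array([[0.0, 0.7, 0.3],         # embedded-chain kernel P(x, dy)
              [0.5, 0.0, 0.5],
              [0.6, 0.4, 0.0]])
Q = np.diag(q) @ (P - np.eye(3))
assert np.allclose(Q.sum(axis=1), 0.0)

def semigroup(t, terms=60):
    # Part 3) of Proposition 1.2: P_t = exp(tQ) = I + sum (tQ)^k / k!.
    Pt, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ (t * Q) / k
        Pt = Pt + term
    return Pt

# Semigroup property (1.10): P_t P_s = P_{t+s}.
assert np.allclose(semigroup(0.3) @ semigroup(0.7), semigroup(1.0))
# Generator limit (1.11): (P_t - I)/t -> Q as t -> 0.
t = 1e-6
assert np.allclose((semigroup(t) - np.eye(3)) / t, Q, atol=1e-4)
```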
If $x(t)$ has a stationary distribution $\pi$, then $x_n$ also has a stationary distribution $\rho$, and we have
$$\pi(dx)\, q(x) = q\, \rho(dx), \qquad q := \int_E \pi(dx)\, q(x).$$
Let us consider the counting process
$$\nu(t) = \max\{n : \tau_n \le t\}, \tag{1.13}$$
with $\max\emptyset = 0$, which gives the number of jumps of the Markov process in $(0, t]$.
▷ Example 1.1. The generator $\mathbb{L}$ of a Poisson process with intensity $\lambda > 0$ is
$$\mathbb{L}\varphi(x) = \lambda[\varphi(x+1) - \varphi(x)], \qquad x \in \mathbb{N}. \qquad \triangleleft$$
▷ Example 1.2. Let $\tau_n$, $n \ge 0$, be a renewal process on $\mathbb{R}_+$, $\tau_0 = 0$, with distribution function $F$ of the interarrival times $\theta_n := \tau_n - \tau_{n-1}$, and hazard rate
$$\lambda(t) := \frac{F'(t)}{\overline{F}(t)},$$
where $\overline{F}(t) := 1 - F(t)$. Let $\nu(t)$, $t \ge 0$, be the corresponding counting process, that is, $\nu(t) := \sup\{n : \tau_n \le t\}$. The generator of the Markov process $x(t) := t - \tau(t)$, $t \ge 0$, $\tau(t) := \tau_{\nu(t)}$, is given by
$$\mathbb{L}\varphi(x) = \varphi'(x) + \lambda(x)[\varphi(0) - \varphi(x)], \qquad x \in \mathbb{R}_+.$$
The domain of this generator is $D(\mathbb{L}) = C^1(\mathbb{R})$. ◁
▷ Example 1.3. Let $x(t)$, $t \ge 0$, be a non-homogeneous jump Markov process. The generator of the coupled Markov process $t, x(t)$, $t \ge 0$, is defined as follows:
$$\mathbb{L}\varphi(t, x) = \frac{\partial}{\partial t}\varphi(t, x) + Q_t\varphi(t, \cdot)(x),$$
with $D(\mathbb{L}) = C^{1,0}(\mathbb{R}_+ \times E)$. ◁
▷ Example 1.4. Let $x(t)$, $t \ge 0$, be a pure jump Markov process (that is, without drift and diffusion part) with state space $E$ and generator $Q$, and let $\nu(t)$, $t \ge 0$, be the corresponding counting process of jumps and $x_n$, $n \ge 0$, the embedded Markov chain. Let $a$ be a real-valued measurable function on the state space $E$, and consider the increment process
$$\alpha(t) := \sum_{k=1}^{\nu(t)} a(x_k).$$
Then the generator of the coupled Markov process $\alpha(t), x(t)$, $t \ge 0$, is
$$\mathbb{L} = Q + Q_0[\Gamma(x) - I], \tag{1.14}$$
where
$$Q_0\varphi(x) := q(x)\int_E P(x, dy)\, \varphi(y), \qquad \Gamma(x)\varphi(u) := \varphi(u + a(x)),$$
and $I$ is the identity operator. ◁
▷ Example 1.5. For the jump Markov process $x(t)$, $t \ge 0$, as in the previous example, let us consider the process
For $\varphi \in D(Q)$ and $t \ge 0$, we have the Dynkin formula [45, 34]
$$\mathbb{E}_x\varphi(x(t)) = \varphi(x) + \mathbb{E}_x\int_0^t Q\varphi(x(s))\, ds. \tag{1.25}$$
From this formula, using conditional expectation, we get
that the process
$$\mu(t) := \varphi(x(t)) - \varphi(x) - \int_0^t Q\varphi(x(s))\, ds \tag{1.26}$$
is an $\mathcal{F}_t^x = \sigma(x(s),\ s \le t)$-martingale.
The following theorem gives the martingale characterization of Markov processes.

Theorem 1.1 ([45]) Let $(E, \mathcal{E})$ be a standard state space and let $x(t)$, $t \ge 0$, be a stochastic process on it, adapted to the filtration $\mathbb{F} = (\mathcal{F}_t, t \ge 0)$. Let $Q$ be the generator of a strongly continuous semigroup $P_t$, $t \ge 0$, on the Banach space $\mathbf{B}$, with dense domain $D(Q) \subset \mathbf{B}$. If, for any $\varphi \in D(Q)$, the process $\mu(t)$, $t \ge 0$, defined by (1.26) is an $\mathcal{F}_t$-martingale, then $x(t)$, $t \ge 0$, is a Markov process generated by the infinitesimal generator $Q$.
The process $x(t)$, $t \ge 0$, is then said to solve the martingale problem for the generator $Q$. The martingale (1.26) is square integrable; its square characteristic is given in the next theorem.

Theorem 1.2 The square characteristic of the martingale $\mu(t)$, $t \ge 0$ (see (1.26)), denoted by $\langle\mu\rangle_t$, $t \ge 0$, is the process
$$\langle\mu\rangle_t = \int_0^t \big[Q\varphi^2(x(s)) - 2\varphi(x(s))\, Q\varphi(x(s))\big]\, ds.$$
PROOF. Let us denote $a_t := \int_0^t Q\varphi(x(s))\, ds$. Then, from the representation of the martingale $\mu(t)$, we have
$$\varphi^2 = (\mu + a)^2 = \mu^2 + 2\mu a + a^2 = \mu^2 + L,$$
where $L := 2\mu a + a^2$. Differentiating $L$, we get
$$dL = 2\, d\mu\, a + 2\mu\, Q\varphi\, ds + 2a\, Q\varphi\, ds.$$
Now the martingale representation $\mu = \varphi - a$ gives
$$dL = 2\, d\mu\, a + 2\varphi\, Q\varphi\, ds,$$
and, by integration,
$$L(t) = 2\int_0^t a(s)\, d\mu(s) + 2\int_0^t \varphi(x(s))\, Q\varphi(x(s))\, ds.$$
The first term is a martingale, since it is an integral with respect to the martingale $\mu(s)$. It is obvious that
$$\mu_1(t) := \varphi^2(x(t)) - \int_0^t Q\varphi^2(x(s))\, ds$$
is a martingale. Hence
$$\mu^2(t) = \mu_2(t) + \int_0^t \big[Q\varphi^2(x(s)) - 2\varphi(x(s))\, Q\varphi(x(s))\big]\, ds,$$
where $\mu_2$ is a martingale. So, the latter relation gives the square characteristic of the martingale $\mu(t)$. □
Let $x_n$, $n \ge 0$, be a Markov chain on a measurable state space $(E, \mathcal{E})$ induced by a stochastic kernel $P(x, B)$, $x \in E$, $B \in \mathcal{E}$. Let $P$ be the corresponding transition operator defined on the Banach space $\mathbf{B}$. Let us now construct the following martingale as a sum of martingale differences:
$$\mu_n = \sum_{k=0}^{n-1} \big[\varphi(x_{k+1}) - \mathbb{E}[\varphi(x_{k+1}) \mid \mathcal{F}_k]\big]. \tag{1.27}$$
By using the Markov property and a rearrangement of terms in (1.27), the martingale takes the form
$$\mu_n = \varphi(x_n) - \varphi(x_0) - \sum_{k=0}^{n-1} [P - I]\varphi(x_k). \tag{1.28}$$
This representation of the martingale is associated with a characterization of Markov chains.
Lemma 1.2 Let $x_n$, $n \ge 0$, be a sequence of random variables taking values in a measurable space $(E, \mathcal{E})$ and adapted to the filtration $\mathcal{F}_n$, $n \ge 0$. Let $P$ be a bounded linear positive operator on the Banach space $\mathbf{B}$ induced by a transition probability kernel $P(x, B)$ on $(E, \mathcal{E})$. If, for every $\varphi \in \mathbf{B}$, the right-hand side of (1.28) is a martingale $\mu_n, \mathcal{F}_n$, $n \ge 0$, then the sequence $x_n$, $n \ge 0$, is a Markov chain with transition probability kernel $P(x, B)$ induced by the operator $P$.
PROOF. Using (1.28), we have
$$\mathbb{E}[\mu_n \mid \mathcal{F}_{n-1}] = \mathbb{E}[\varphi(x_n) \mid \mathcal{F}_{n-1}] - \varphi(x_0) - \sum_{k=0}^{n-1} [P - I]\varphi(x_k) = \mathbb{E}[\varphi(x_n) \mid \mathcal{F}_{n-1}] - P\varphi(x_{n-1}) + \mu_{n-1}.$$
So, the martingale property $\mathbb{E}[\mu_n \mid \mathcal{F}_{n-1}] = \mu_{n-1}$ is equivalent to the Markov property
$$\mathbb{E}[\varphi(x_{n+1}) \mid \mathcal{F}_n] = \mathbb{E}[\varphi(x_{n+1}) \mid x_n] = P\varphi(x_n). \tag{1.29}$$
□
By the definition of the square characteristic of a martingale, it is easy to check that
$$\langle\mu\rangle_n = \sum_{k=0}^{n-1} \big[P\varphi^2(x_k) - (P\varphi(x_k))^2\big]. \tag{1.30}$$
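The equivalence of (1.27) and (1.28), and the form (1.30) of the square characteristic, can be checked pathwise on a simulated chain (the kernel and test function are illustrative, not from the book):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
phi = np.array([1.0, -2.0, 0.5])       # an arbitrary test function on E
Pphi = P @ phi                         # the transition operator P phi

rng = np.random.default_rng(1)
x = [0]
for _ in range(50):
    x.append(rng.choice(3, p=P[x[-1]]))

# (1.27): sum of martingale differences phi(x_{k+1}) - E[phi(x_{k+1})|F_k].
n = len(x) - 1
mu_diff = sum(phi[x[k + 1]] - Pphi[x[k]] for k in range(n))

# (1.28): the rearranged form phi(x_n) - phi(x_0) - sum [P - I]phi(x_k).
mu_rear = phi[x[n]] - phi[x[0]] - sum(Pphi[x[k]] - phi[x[k]]
                                      for k in range(n))

# The two representations agree pathwise.
assert np.isclose(mu_diff, mu_rear)

# Square characteristic (1.30): a sum of conditional variances, hence >= 0.
Pphi2 = P @ phi**2
bracket = sum(Pphi2[x[k]] - Pphi[x[k]] ** 2 for k in range(n))
assert bracket >= 0.0
```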
1.3 Semi-Markov Processes

The semi-Markov process is a generalization of the Markov and renewal processes. We present shortly the definitions and basic properties of semi-Markov processes useful in the sequel of the book (see, e.g., [127, 116]).

1.3.1 Markov Renewal Processes
Definition 1.18 A positive-valued function $Q(x, B, t)$, $x \in E$, $B \in \mathcal{E}$, $t \in \mathbb{R}_+$, is called a semi-Markov kernel on $(E, \mathcal{E})$ if:
(i) $Q(x, B, \cdot)$, for $x \in E$, $B \in \mathcal{E}$, is a non-decreasing, right-continuous real function, such that $Q(x, B, 0) = 0$;
(ii) $Q(\cdot, \cdot, t)$, for any $t \in \mathbb{R}_+$, is a sub-Markov kernel on $(E, \mathcal{E})$;
(iii) $P(\cdot, \cdot) = Q(\cdot, \cdot, \infty)$ is a Markov kernel on $(E, \mathcal{E})$.
For any fixed $x \in E$, the function $F_x(t) := Q(x, E, t)$ is a distribution function on $\mathbb{R}_+$.

1.4 Semimartingales

- $C = (C^{ij})$ is the predictable quadratic characteristic of the continuous martingale part, that is, the process $x^{ic}x^{jc} - \langle x^{ic}, x^{jc}\rangle$ is a local martingale;
- $\nu$ is the compensator of the measure $\mu$ of jumps of $x(t)$, that is, a predictable measure on $\mathbb{R}_+ \times \mathbb{R}^d$.
It is convenient to introduce the second modified characteristic $\widetilde{C} = (\widetilde{C}^{ij})$.
We will use semimartingales as a tool in order to establish Poisson approximation results (see Chapter 7).
▷ Example 1.7. Brownian motion. Let $w(t)$, $t \ge 0$, be a Wiener process with $w(0) = 0$. This is a local martingale with $\langle w, w\rangle_t = \sigma^2(t)$. Its predictable characteristics are $(B, C, \nu) = (0, \sigma^2(t), 0)$. ◁
▷ Example 1.8. Gaussian process. Let $x(t)$, $t \ge 0$, be a Gaussian process. We have $(B, C, \nu) = (\mathbb{E}x(t), \mathbb{E}(x(t) - \mathbb{E}x(t))^2, 0)$. ◁
▷ Example 1.9. Generalized diffusion ([62]). Let us consider Borel functions $a \ge 0$ and $b$ defined on $\mathbb{R}_+ \times \mathbb{R}$, and a family of transition kernels $K_t$, $t \ge 0$, on $(\mathbb{R}, \mathcal{B})$, satisfying the condition
$$\int_{\mathbb{R}} (1 \wedge y^2)\, K_t(x, dy) < +\infty.$$
Let $x(t)$, $t \ge 0$, be a semimartingale with predictable characteristics $(B, C, \nu)$ given by
$$B_t = \int_0^t b(s, x(s))\, ds, \qquad C_t = \int_0^t a(s, x(s))\, ds, \qquad \nu(dt, dy) = dt\, K_t(x(t), dy).$$
In that case, the semimartingale $x(t)$, $t \ge 0$, is said to be a generalized diffusion. If $a(t, x)$, $b(t, x)$ and $K_t(x, B)$ do not depend upon $t$, then it is called a time-homogeneous generalized diffusion. ◁
For a time-homogeneous generalized diffusion $x(t)$, $t \ge 0$, the infinitesimal generator $\mathbb{L}$ acts on functions $\varphi \in C^2(\mathbb{R})$ as follows:
$$\mathbb{L}\varphi(x) = b(x)\varphi'(x) + \tfrac{1}{2} a(x)\varphi''(x) + \int_{\mathbb{R}} K(x, dy)\, [\varphi(x + y) - \varphi(x) - h(y)\varphi'(x)], \tag{1.48}$$
where $h$ is a truncation function. The triplet $(b, a, K)$ is called the infinitesimal characteristics of the time-homogeneous generalized diffusion.
▷ Example 1.10. Processes with stationary independent increments. For the processes with stationary independent increments given in Section 1.2.4, with cumulant function $\psi(\lambda)$ given in (1.16), we have $(B_t, C_t, \nu_t(dz)) = (at, \sigma^2 t, tH(dz))$. ◁
1.5 Counting Markov Renewal Processes
In this section, we consider counting processes as semimartingales. Let $x_n, \theta_n$, $n \ge 0$, be a Markov renewal process taking values in $E \times [0, +\infty)$ and defined by the semi-Markov kernel
$$Q(x, B, t) = P(x, B)\, F_x(t).$$
So, the components $x_{n+1}$ and $\theta_{n+1}$ are conditionally independent given $x_n$. The renewal moments are defined by
$$\tau_n = \sum_{k=1}^{n} \theta_k, \qquad \tau_0 = 0,$$
and the counting process by
$$\nu(t) = \max\{n \ge 1 : \tau_n \le t\}.$$

Definition 1.25 (see, e.g., [21, 70, 133]) An integer-valued random measure
for the Markov renewal process $x_n, \tau_n$, $n \ge 0$, is defined by the relation
$$\mu(dx, dt) = \sum_{n \ge 0} \delta_{(x_n, \tau_n)}(dx, dt)\, \mathbb{1}(\tau_n < \infty), \tag{1.49}$$
where $\gamma(s) := s - \tau(s)$, $\tau(s) := \tau_{\nu(s)}$.
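A minimal simulation sketch of a Markov renewal process $(x_n, \tau_n)$ and of its counting process, assuming exponential sojourn distributions $F_x$ (the two-state chain and rates are illustrative, not from the book):

```python
import random

# A Markov renewal process (x_n, tau_n) with semi-Markov kernel
# Q(x, B, t) = P(x, B) F_x(t): the next state is drawn from P(x, .)
# and the sojourn time from F_x, independently given x_n = x.
P = {0: [0.0, 1.0], 1: [1.0, 0.0]}     # embedded chain: alternates states
rate = {0: 2.0, 1: 0.5}                # F_x = exponential(rate[x])

def simulate(x0, n_steps, rng):
    x, tau = [x0], [0.0]
    for _ in range(n_steps):
        xn = x[-1]
        theta = rng.expovariate(rate[xn])          # sojourn theta_{n+1}
        nxt = rng.choices([0, 1], weights=P[xn])[0]
        tau.append(tau[-1] + theta)
        x.append(nxt)
    return x, tau

def nu(t, tau):
    # Counting process nu(t) = max{n >= 1 : tau_n <= t} (0 if none).
    return max((n for n in range(1, len(tau)) if tau[n] <= t), default=0)

rng = random.Random(7)
x, tau = simulate(0, 200, rng)
assert all(tau[k] <= tau[k + 1] for k in range(len(tau) - 1))
t = tau[-1] / 2
n = nu(t, tau)
assert tau[n] <= t and tau[n + 1] > t
```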
It is worth noticing that the compensator of the counting Markov renewal process is a stochastic integral functional of the Markov process $x(t), \gamma(t)$, $t \ge 0$ (see Section 2.2).
PROOF. Introduce the conditional distributions of the Markov renewal process $x_n, \tau_n$, $n \ge 0$. By Theorem III.1.33 of [70], the compensating measure of the multivariate point process (1.49) can be represented in the stated form. □

$$\Phi_t^x(u) := A_t(x)\varphi(u), \qquad \Phi_0^x(u) = \varphi(u), \qquad x \in E, \ u \in \mathbb{R}, \tag{2.10}$$
CHAPTER 2. STOCHASTIC SYSTEMS WITH SWITCHING
in the Banach space $C(\mathbb{R})$. The operators $A_t(x)$ transform an initial test function $\varphi(u)$ into $\Phi_t^x(u)$. The evolution (2.10) can be determined by a solution of the evolution equation
$$\frac{d}{dt}\Phi_t^x(u) = A(x)\Phi_t^x(u), \qquad \Phi_0^x(u) = \varphi(u), \tag{2.11}$$
or, in another form (see (2.9)),
$$\frac{d}{dt}\Phi_t^x(u) = a(x)\frac{d}{du}\Phi_t^x(u), \qquad \Phi_0^x(u) = \varphi(u). \tag{2.12}$$
Definition 2.2 The random evolution for the integral functional is defined on a test function $\varphi \in C(\mathbb{R})$ by the relation
$$\Phi_t(u) := \varphi(U(t)), \qquad t \ge 0, \ u \in \mathbb{R}, \tag{2.13}$$
where $U(t)$, $t \ge 0$, is the integral functional (2.1) with $U(0) = u$.
Lemma 2.2 The random evolution (2.13) can be represented in the following form:
$$\Phi_t(u) = A_{\gamma(t)}(x(t)) \prod_{k=1}^{\nu(t)} A_{\theta_k}(x_{k-1})\, \varphi(u), \qquad t \ge 0, \ u \in \mathbb{R}, \tag{2.14}$$
and satisfies the evolution equation
$$\frac{d}{dt}\Phi_t(u) = A(x(t))\Phi_t(u), \qquad t \ge 0, \quad \Phi_0(u) = \varphi(u).$$
Indeed, if $\nu(t) = 0$, then $\gamma(t) = t$, $x(t) = x$; hence, in (2.14),
$$\Phi_t(u) = A_t(x)\varphi(u) = \varphi(U(t)).$$
Next, formula (2.14) can be proved by induction using (2.2).
The characterization of the stochastic integral functional with semi-Markov switching is realized by using the compensating operator of the extended Markov renewal process
$$U_n := U(\tau_n), \quad x_n := x(\tau_n), \quad \tau_n, \qquad n \ge 0, \tag{2.15}$$
where $\tau_n$, $n \ge 0$, are the renewal jump times of the semi-Markov process $x(t)$, $t \ge 0$.
2.2 Stochastic Integral Functionals
Definition 2.3 The compensating operator of the extended Markov renewal process (2.15) is defined by the relation
$$\mathbb{L}\varphi(u, x, t) = \mathbb{E}\big[\varphi(U_1, x_1, \tau_1) - \varphi(u, x, t) \mid U_0 = u,\ x_0 = x,\ \tau_0 = t\big] / m(x). \tag{2.16}$$
It is easy to verify that the compensating operator has the following homogeneity property:
$$\mathbb{L}\varphi(u, x, t) = \mathbb{E}\big[\varphi(U_{n+1}, x_{n+1}, \tau_{n+1}) - \varphi(u, x, t) \mid U_n = u,\ x_n = x,\ \tau_n = t\big] / m(x),$$
where $m(x) := \mathbb{E}\theta_x = \int_0^\infty [1 - F_x(t)]\, dt$.
Lemma 2.3 The compensating operator (2.16) can be represented in the following form:
$$\mathbb{L} = q(x)\big[\mathbb{F}(x)P - I\big], \qquad \mathbb{F}(x) := \int_0^\infty F_x(ds)\, A_s(x),$$
where $A_s(x)$, $s \ge 0$, $x \in E$, are the semigroups defined in (2.6) by the generators $A(x)$, $x \in E$, in (2.9), and $q(x) = 1/m(x)$.
The transformation of the compensating operator can be realized as follows. The first step of the transformation is
$$\mathbb{L} = Q + [\mathbb{F}(x) - I]Q_0, \qquad Q := q[P - I], \tag{2.17}$$
where $Q$ is the generator of the associated Markov process, and
$$Q_0\varphi(x) = q(x)\int_E P(x, dy)\, \varphi(y).$$
The second step of the transformation consists in using the integral equation for the semigroup:
$$A_s(x) - I = A(x)\int_0^s A_v(x)\, dv.$$
For the second term in (2.17), we obtain
$$\int_0^\infty F_x(ds)\, [A_s(x) - I] = A(x)\int_0^\infty \overline{F}_x(s)\, A_s(x)\, ds.$$
So, we get the equivalent representation
$$\mathbb{F}(x) - I = A(x)\mathbb{F}^{(1)}(x),$$
where, by definition,
$$\mathbb{F}^{(1)}(x) := \int_0^\infty \overline{F}_x(s)\, A_s(x)\, ds.$$
So doing, we have proven the following result.

Lemma 2.4 The compensating operator of the extended Markov renewal process (2.15) is represented as follows:
$$\mathbb{L} = Q + A(x)\mathbb{F}^{(1)}(x)\, Q_0. \tag{2.18}$$
2.3 Increment Processes
The discrete analogue of the integral functional considered in Section 2.2 is the increment process, defined by the sum over the embedded Markov chain $x_n$, $n \ge 0$:
$$\alpha(t) := \sum_{k=1}^{\nu(t)} a(x_k), \qquad t \ge 0, \tag{2.19}$$
with a given real-valued measurable bounded function $a(x)$, $x \in E$. The counting process
$$\nu(t) := \max\{n \ge 0 : \tau_n \le t\}, \qquad t \ge 0, \tag{2.20}$$
is defined by the renewal moments $\tau_n$, $n \ge 0$, of the switching semi-Markov process $x(t)$, $t \ge 0$. Introduce the family of shift linear operators on the Banach space $C(\mathbb{R})$:
$$D(x)\varphi(u) = \varphi(u + a(x)), \qquad x \in E, \ u \in \mathbb{R}. \tag{2.21}$$
Definition 2.4 The random evolution associated with the increment process $\alpha(t)$, $t \ge 0$, is defined by the relation
$$\Phi_t(u) := \varphi(\alpha(t)), \qquad \alpha(0) = u. \tag{2.22}$$
Clearly, the random evolution (2.22) can be represented in the following form:
$$\Phi_t(u) = \prod_{k=1}^{\nu(t)} D(x_k)\, \varphi(u). \tag{2.23}$$
Indeed, for $t < \tau_1$, by definition, $\Phi_t(u) = \varphi(u)$. Next, for $\tau_n \le t < \tau_{n+1}$, from (2.23) and (2.21),
$$\Phi_t(u) = \varphi\Big(u + \sum_{k=1}^{n} a(x_k)\Big),$$
that is, Equation (2.22). The recursive relation for the random evolution (2.23),
$$\Phi_{\tau_{n+1}}(u) = D(x_{n+1})\Phi_{\tau_n}(u), \tag{2.24}$$
provides the following additive representation of the random evolution (2.23):
$$\Phi_t(u) = \varphi(u) + \sum_{k=1}^{\nu(t)} [D(x_k) - I]\, \Phi_{\tau_{k-1}}(u), \qquad t \ge 0. \tag{2.25}$$
In what follows, it will be useful to characterize the increment process by the generator of the coupled increment process
$$\alpha(t), x(t), \qquad t \ge 0. \tag{2.26}$$
Let the switching process $x(t)$, $t \ge 0$, be Markovian and defined by the generator
$$Q\varphi(x) = q(x)\int_E P(x, dy)\, [\varphi(y) - \varphi(x)]. \tag{2.27}$$

Proposition 2.1 The coupled increment process (2.26) is also Markovian and can be defined by the generator
$$\mathbb{L}\varphi(u, x) = \big[Q + Q_0(D(x) - I)\big]\varphi(u, x), \tag{2.28}$$
where
$$Q_0\varphi(x) := q(x)\int_E P(x, dy)\, \varphi(y).$$
Let the switching semi-Markov process $x(t)$, $t \ge 0$, associated to the Markov renewal process $x_n, \tau_n$, $n \ge 0$, be given by the semi-Markov kernel
$$Q(x, B, t) = P(x, B)\, F_x(t), \qquad x \in E, \ B \in \mathcal{E}, \ t \ge 0. \tag{2.29}$$
Introduce the extended Markov renewal process
$$\alpha_n := \alpha(\tau_n), \quad x_n, \quad \tau_n, \qquad n \ge 0. \tag{2.30}$$

Proposition 2.2 The compensating operator of the extended Markov renewal process (2.30) can be represented as follows:
$$\mathbb{L}\varphi(u, x) = q(x)\Big[\int_E P(x, dy)\, D(y)\varphi(u, y) - \varphi(u, x)\Big].$$
PROOF. Let $\mathbb{P}_{u,x,t}$ be the conditional probability given $(\alpha_0 = u, x_0 = x, \tau_0 = t)$, and $\mathbb{E}_{u,x,t}$ the corresponding expectation.
Then we have
$$\mathbb{E}_{u,x,t}\varphi(\alpha_1, x_1) = \int_E P(x, dy)\, \varphi(u + a(y), y).$$
But $\varphi(u + a(y), y) = D(y)\varphi(u, y)$; so
$$\mathbb{E}_{u,x,t}\varphi(\alpha_1, x_1) = \int_E P(x, dy)\, D(y)\varphi(u, y),$$
and the conclusion follows from Definition 2.3. □
It is easy to verify the following result.
Corollary 2.1 The compensating operator $\mathbb{L}$ acts on test functions $\varphi(u, x)$ as follows:
$$\mathbb{L}\varphi(u, x) = \big[Q + Q_0(D(x) - I)\big]\varphi(u, x).$$
(Compare with (2.28).)

2.4 Stochastic Evolutionary Systems
Various stochastic systems can be described by evolutionary processes with Markov or semi-Markov switching.
Definition 2.5 The evolutionary switched process $U(t)$, $t \ge 0$, in $\mathbb{R}^d$, is defined as a solution of the evolutionary equation
$$\frac{d}{dt}U(t) = a(U(t); x(t)), \qquad U(0) = u. \tag{2.31}$$
The local velocity is given by the $\mathbb{R}^d$-valued continuous function $a(u; x)$, $u \in \mathbb{R}^d$, $x \in E$. The switching regular semi-Markov process $x(t)$, $t \ge 0$, is considered in the standard phase space $(E, \mathcal{E})$, given by the semi-Markov kernel (see Section 1.3.1) $Q(x, B, t) = P(x, B)F_x(t)$.
The integral form of the evolutionary equation is
$$U(t) = u + \int_0^t a(U(s); x(s))\, ds. \tag{2.32}$$
In what follows, it is assumed that the velocity $a(u; x)$ satisfies the condition of unique global solvability of the deterministic problems (2.33), that is, a Lipschitz condition in $u \in \mathbb{R}^d$ with a constant independent of $x \in E$. In order to emphasize the dependence of $U(t)$ on the initial condition $u$, let us write
$$\frac{d}{dt}U(t; x, u) = a(U(t; x, u); x), \qquad U(0; x, u) = u, \tag{2.33}$$
for all $x \in E$.
The wellposedness of the stochastic process U ( t ) , t 2 0, by the solution of Equation (2.31) follows from the fact that this solution can be represented in the following recursive form by using the solution of Problem (2.33)
The initial values for Problem (2.33) are defined by a recursive relation (2.34) at the renewal moments τ_n. The recursive relation (2.34) can be represented in the following form:

U(t) = U(t − τ_n; x_n, U(τ_n)), τ_n ≤ t < τ_{n+1}, n ≥ 0. (2.35)
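The recursive representation above lends itself to direct simulation. The sketch below is a minimal illustration, assuming a hypothetical two-state phase space E = {0, 1} with exponential sojourn times (a Markov special case of semi-Markov switching) and the linear velocity a(u; x) = C[x] − u, whose flow U(t; x, u) is available in closed form; none of these concrete choices come from the book.

```python
import math
import random

# Hypothetical example: E = {0, 1}, exponential sojourns (Markov switching),
# velocity a(u; x) = C[x] - u, with exact flow U(t; x, u).
RATE = {0: 1.0, 1: 2.0}   # sojourn intensities q(x)
C = {0: -1.0, 1: 1.0}     # drift targets in each state

def flow(t, x, u):
    """Exact solution U(t; x, u) of dU/dt = C[x] - U, U(0) = u."""
    return C[x] + (u - C[x]) * math.exp(-t)

def sample_path(T, x0, u0, rng):
    """Piece the solution together: U(t) = U(t - tau_n; x_n, U(tau_n))."""
    t, x, u = 0.0, x0, u0
    while True:
        theta = rng.expovariate(RATE[x])   # sojourn time in state x
        if t + theta >= T:
            return flow(T - t, x, u)       # final, censored piece
        u = flow(theta, x, u)              # evolve up to the jump moment
        t += theta
        x = 1 - x                          # embedded chain: alternation

print(sample_path(10.0, 0, 0.0, random.Random(0)))
```

For this flow the semigroup property (2.36) holds exactly: flow(t + t′, x, u) equals flow(t′, x, flow(t, x, u)).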
The existence of a global solution U(t), t ≥ 0, on an arbitrary time interval [0, T] follows from the regularity of the switching semi-Markov process x(t), t ≥ 0 (see Section 1.3). It is well known (see, e.g., [100]) that the solution of the deterministic problem (2.33), for a fixed value of x ∈ E, has a semigroup property which can be expressed as follows:

U(t + t′; x, u) = U(t′; x, U(t; x, u)). (2.36)

This means that the trajectory at time t + t′ with initial value u can be obtained by extending, for a time t′, the trajectory with initial value U(t; x, u).
The semigroup property (2.36) can be reformulated for the semigroup operators in abstract form by the relation

Γ_t(x)φ(u) := φ(U(t; x, u)), t ≥ 0, (2.37)

in the Banach space C(R^d) of continuous bounded real-valued functions φ(u), u ∈ R^d.

It is easy to see that the operators Γ_t(x), t ≥ 0, satisfy the semigroup property

Γ_{t+t′}(x) = Γ_{t′}(x) Γ_t(x).

Indeed:

Γ_{t′}(x)Γ_t(x)φ(u) = Γ_{t′}(x)φ(U(t; x, u)) = φ(U(t′; x, U(t; x, u))) = φ(U(t + t′; x, u)) by (2.36) = Γ_{t+t′}(x)φ(u).

Definition (2.37) of the semigroup Γ_t(x), t ≥ 0, implies the contraction property

‖Γ_t(x)‖ ≤ 1,

and the uniform continuity

lim_{t→0} ‖Γ_t(x) − I‖ = 0.
Proposition 2.3 The generator Γ(x) of the semigroup Γ_t(x), t ≥ 0, is defined by the following relation:

Γ(x)φ(u) = a(u; x)φ′(u). (2.38)

PROOF. We have

Γ(x)φ(u) = lim_{t→0} t⁻¹[Γ_t(x) − I]φ(u) = lim_{t→0} t⁻¹ ∫₀ᵗ a(U(s); x) ds · φ′(u) = a(u; x)φ′(u).

For the sake of simplicity we have written U(s) := U(s; x, u). □
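Proposition 2.3 can be checked numerically. In the sketch below the velocity a(u; x) = x·u is an illustrative assumption (its flow is U(t; x, u) = u·e^{xt}); the finite difference (Γ_t(x)φ(u) − φ(u))/t should approach a(u; x)φ′(u) as t → 0.

```python
import math

# Hypothetical velocity a(u; x) = x*u with flow U(t; x, u) = u * exp(x*t).
def gamma(t, x, u, phi):
    """Semigroup action: (Gamma_t(x) phi)(u) = phi(U(t; x, u))."""
    return phi(u * math.exp(x * t))

phi = lambda u: u * u        # test function
dphi = lambda u: 2 * u       # its derivative

x, u, t = 0.5, 1.3, 1e-6
finite_diff = (gamma(t, x, u, phi) - phi(u)) / t
exact = x * u * dphi(u)      # generator value a(u; x) * phi'(u)
print(finite_diff, exact)
```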
Remark 2.1. In the vector case we have to consider the scalar product, that is, aφ′(u) = Σ_k a_k ∂_k φ(u). Note that the domain of definition D(Γ(x)) of the generator Γ(x) contains C¹(R^d), the space of continuously differentiable functions φ(u) with bounded first derivative.

2.5 Markov Additive Processes
Markov additive processes (MAP) constitute a very large family of processes, including semi-Markov processes as a particular case. Since MAPs generalize Markov renewal (semi-Markov) processes, Markov processes, and renewal processes, their field of applications is very large: reliability, survival analysis, queueing theory, risk processes, etc.
Definition 2.6 An R^d × E-valued coupled stochastic process ξ(t), x(t), t ≥ 0, is called a MAP if: 1) the coupled process ξ(t), x(t), t ≥ 0, is a Markov process; and 2) on {ξ(t) = u} we have, almost surely, that the conditional distribution of (ξ(t + s) − ξ(t), x(t + s)), given F_t, depends only on x(t), for all t ≥ 0 and s ≥ 0, where F_t := σ(ξ(s), x(s); s ≤ t), t ≥ 0.

From 2), it is clear that x(t), t ≥ 0, is a Markov process. A typical example of a MAP is the Markov renewal process, when the time t is discrete and ξ(t), t ≥ 0, is an increasing sequence of R₊-valued random variables. Let us also define the transition function P_t(x, A, B), A ∈ B(R^d), B ∈ 𝓔, t ≥ 0.
Φ(t) = ∏_{k=1}^{ν(t)} D(x_k), (2.63)

which is an equivalent form of (2.60). The linear forms of the relations (2.46) and (2.61) provide the most effective asymptotic analysis of stochastic systems in the series scheme, considered in Chapter 3.

Proposition 2.7 The mean value of the coupled random evolution defined by the family of bounded operators D(x), x ∈ E, is determined by a solution of the Markov renewal equation (2.64).
PROOF. We use the following recursive relation for the jump random evolution:

Φ(t, x(t)) = D(x₁) Φ(t − τ₁, x(t − τ₁)) 1(τ₁ ≤ t) + φ(u, x) 1(τ₁ > t). (2.65)

The mean value of (2.65) gives the Markov renewal equation (2.64). □
Corollary 2.3 The mean value of the Markov jump random evolution is determined by a solution of the evolutionary equation

(d/dt) U(t, x) = [Q + Q₀(D(x) − I)] U(t, x), U(0, x) = φ(u, x).
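For a finite phase space, the evolution equation of Corollary 2.3 is a linear ODE system and can be checked against simulation. The sketch below assumes a hypothetical two-state chain with scalar jump multipliers d(y) applied at each jump, so the generator Q + Q₀(D − I) reads componentwise as dM(t, x)/dt = q(x)Σ_y P(x, y)d(y)M(t, y) − q(x)M(t, x); all numbers are illustrative assumptions.

```python
import random

# Hypothetical two-state jump random evolution with scalar multipliers.
Q_RATE = [1.0, 1.0]              # q(x)
P = [[0.0, 1.0], [1.0, 0.0]]     # embedded chain: alternation
D = [0.9, 1.1]                   # scalar jump multipliers d(y)

def ode_mean(t_end, dt=1e-3):
    """Euler integration of dM/dt = [Q + Q0(D - I)]M, M(0, x) = 1."""
    M = [1.0, 1.0]
    for _ in range(int(t_end / dt)):
        dM = [sum(Q_RATE[x] * P[x][y] * D[y] * M[y] for y in range(2))
              - Q_RATE[x] * M[x] for x in range(2)]
        M = [M[x] + dt * dM[x] for x in range(2)]
    return M

def mc_mean(t_end, x0, n, rng):
    """Monte Carlo estimate of E_x[prod of d(x_k) over jumps up to t]."""
    total = 0.0
    for _ in range(n):
        t, x, prod = 0.0, x0, 1.0
        while True:
            t += rng.expovariate(Q_RATE[x])
            if t >= t_end:
                break
            x = 1 - x
            prod *= D[x]       # multiplier of the state entered at the jump
        total += prod
    return total / n

print(ode_mean(1.0)[0], mc_mean(1.0, 0, 20000, random.Random(2)))
```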
So, the mean value of the Markov jump random evolution is characterized by the generator

L₀ = Q + Q₀[D(x) − I].

It is worth noticing that the generator L₀ characterizes the coupled Markov process α(t), x(t), t ≥ 0 (see Section 2.3), with the switched increment process α(t), t ≥ 0. The operator Q₀, defined in Section 2.3, is

Q₀φ(x) = q(x) ∫_E P(x, dy) φ(y).

2.7.3 Semi-Markov Random Evolutions
The semi-Markov random evolution can be characterized by the compensating operator.

Definition 2.11 The compensating operator of the continuous coupled random evolution (2.51) is determined by the relation

LΦ(t, x) := {E[Φ(τ₁, x₁) | τ₀ = t, x₀ = x] − Φ(t, x)} / E[θ₁ | x₀ = x]. (2.66)
It is worth noticing that the compensating operator (2.66) satisfies the homogeneity condition

LΦ(t, x) := {E[Φ(τ_{n+1}, x_{n+1}) | τ_n = t, x_n = x] − Φ(t, x)} / E[θ_{n+1} | x_n = x]. (2.67)
Proposition 2.8 The compensating operator of the continuous coupled random evolution Φ(t, x(t)), (2.51), can be represented as in (2.68), where q(x) := 1/m(x) := 1/E θ_x.

PROOF. We calculate the conditional expectation in (2.67) by using (2.51); hence (2.68) follows from definition (2.67). □
2.7. RANDOM EVOLUTIONS
Proposition 2.9 The compensating operator (2.68), acting on the test functions φ ∈ C^k(R^d × E), k ≥ 3, can be transformed into

Lφ(u, x) = Qφ(u, x) + Γ(x)P₁(x)Q₀φ(u, x), (2.69)

Lφ(u, x) = Qφ(u, x) + Γ(x)Pφ(u, x) + Γ²(x)P₂(x)Q₀φ(u, x), (2.70)

and

Lφ(u, x) = Qφ(u, x) + Γ(x)Pφ(u, x) + μ₂(x)Γ²(x)Pφ(u, x) + Γ³(x)P₃(x)Q₀φ(u, x). (2.71)
The operator

Qφ(x) := q(x) ∫_E P(x, dy)[φ(y) − φ(x)]

is the generator of the associated Markov process x⁰(t), t ≥ 0, with the same embedded Markov chain x_n, n ≥ 0, as the semi-Markov process, and with intensity of the sojourn times q(x) = 1/m(x),

m(x) = E θ_x := ∫₀^∞ F̄_x(t) dt.

As usual, Q₀φ(x) := q(x) ∫_E P(x, dy)φ(y). The bounded operators P_k(x), x ∈ E, k = 1, 2, 3, are given by the relation (2.72), where

F̄_x^{(k)}(t) := ∫_t^∞ F̄_x^{(k−1)}(s) ds, k ≥ 2, F̄_x^{(1)}(t) := F̄_x(t),

and

μ₂(x) := m₂(x)/[2m(x)], m₂(x) := 2 ∫₀^∞ F̄_x^{(2)}(t) dt.
PROOF. Integration by parts gives

P(x) := ∫₀^∞ F_x(ds) Γ_s(x) = I + ∫₀^∞ F̄_x(s) dΓ_s(x). (2.72)

Using the differential equation for semigroups,

dΓ_s(x) = Γ(x)Γ_s(x) ds,

we get

P(x) = I + Γ(x)P₁(x),

where P₁(x) is defined through F̄_x^{(1)}. In the same way we get (2.73) and, iterating once more, (2.74). Successively putting the P_k(x), k = 1, 2, 3, given by formulas (2.73) and (2.74), into (2.72), we get the representations (2.69)–(2.71). □

Proposition 2.10 The compensating operator for the jump random evolution (2.59) can be represented as follows:
LΦ(t, x) = q(x) [∫_E P(x, dy) D(y) Φ(t, y) − Φ(t, x)]. (2.75)

PROOF. From Definition 2.9 we calculate the conditional expectation; now, by Definition 2.11, we obtain (2.75). □

It is worth noticing that the compensating operator for the random evolution can be considered directly on the test functions φ(u, x), as represented in Proposition 2.9.
Corollary 2.4 The compensating operator of the jump random evolution can be represented as follows:
2.8 Extended Compensating Operators

The semi-Markov continuous random evolution can be characterized by the extended compensating operator, which is constructed by using the extended Markov renewal process (2.77), where x_n, τ_n, n ≥ 0, is the Markov renewal process associated with the switching semi-Markov process x(t), t ≥ 0, defined by the semi-Markov kernel (see Section 1.3.1), with θ_{n+1} := τ_{n+1} − τ_n, n ≥ 0.

The first component in (2.77) is the continuous part of the random evolution, generated by the values of the semigroup Γ_t(x_n), t ≥ 0, as in (2.78), where Δτ_n := τ_{n+1} − τ_n, n ≥ 0. For example, the evolutionary stochastic system defined in Section 2.3 is generated by the first component of the extended Markov renewal process (2.77) by the corresponding semigroup, where U(t), t ≥ 0, is a solution of the evolutionary equation (2.31).
Analogously, the first component of the extended Markov renewal process (2.77) can be defined for the other stochastic systems considered in Sections 2.2–2.4 above.
Definition 2.12 The extended compensating operator of the Markov renewal process (2.77) is defined by the relation

Lφ(u, x, t) := E[φ(ξ₁, x₁, τ₁) − φ(u, x, t) | x₀ = x, τ₀ = t] / m(x),

where m(x) := E θ_x = ∫₀^∞ F̄_x(t) dt.
Proposition 2.11 The extended compensating operator of the Markov renewal process (2.77) is represented as in (2.79), where Γ_s(x), s ≥ 0, x ∈ E, is the family of semigroups with generators Γ(x), x ∈ E, defined in (2.40). The proof of Proposition 2.11 is based on the representation (2.78) for the increments of the first component and on the homogeneity property of the compensating operator.

The coupled process ξ(t), Δ(t), x(t), t ≥ 0, is defined by the generator

Lφ(u, v, x) = [Q + Γ(x) + A(x)]φ(u, v, x),

where A(t) is one of the predictable characteristics (2.83)–(2.85), defined by the generator

A(x)φ(u) := a(u; x)φ′(u).

The function a(u; x) is one of the local characteristics b(u; x) and c(u; x).
Chapter 3

Stochastic Systems in the Series Scheme

3.1 Introduction

This chapter deals with the stochastic systems presented in Chapter 2, in a series scheme. That is, for a process ξ(t), t ≥ 0, as in the previous chapter, we consider here a family of processes ξ^ε(t), t ≥ 0, ε > 0, where 0 < ε < ε₀ is the series parameter, defined on a stochastic basis B = (Ω, F, F = (F_t, t ≥ 0), P). We are interested in the weak convergence of the probability measures P ∘ (ξ^ε)⁻¹ as ε → 0. Here ε is supposed to be a sequence ε_n → 0 as n → ∞. Instead of a common probability space, we could consider different spaces for each ε.

Two different schemes are considered here: the average approximation and the diffusion approximation. The switching semi-Markov process is considered with fast time-scale t/ε for the average approximation, and t/ε² for the diffusion approximation. The ergodic property of the switching process is used in the average and diffusion approximation algorithms.

The main results presented in this chapter concern the asymptotic representation of compensating operators of the coupled switched-switching processes. First we give results for random evolutions (Propositions 3.1–3.5), on which the average and diffusion approximation results are based. The average approximation is presented for stochastic additive functionals and increment processes (Theorems 3.1–3.2). The diffusion approximation is presented in two different schemes: the first is the usual one, whose equilibrium point is the average limit fixed point (Theorems 3.3–3.5); the second is one whose equilibrium point is a deterministic function (Theorems 3.6–3.8). In the next chapter we will also present results for a diffusion approximation whose equilibrium point is a random process.
3.2 Random Evolutions in the Series Scheme

The characterization of random evolutions in the series scheme is considered with two different switching processes, semi-Markov and Markov, and with different algorithms.

3.2.1 Continuous Random Evolutions
The continuous random evolution with semi-Markov switching in the average scheme, with small series parameter ε > 0, ε → 0, is given by a solution of the evolutionary equation (compare with Proposition 2.4)

(d/dt) Φ^ε(t) = Γ(x(t/ε)) Φ^ε(t), t ≥ 0, (3.1)
Φ^ε(0) = I.

Here Γ(x), x ∈ E, is the family of generators of the semigroup operators Γ_t(x), t ≥ 0, x ∈ E, which determines the random evolution in the following form (compare with Definition 2.8):

Φ^ε(t) = Γ_{t−ετ_{ν(t/ε)}}(x(t/ε)) ∏_{k=1}^{ν(t/ε)} Γ_{εθ_k}(x_k), t ≥ 0, Φ^ε(0) = I. (3.2)

The semi-Markov continuous random evolution Φ^ε(t), t ≥ 0, in the average series scheme can be characterized by the compensating operator on the test functions φ ∈ C(R^d × E), given by relation (3.3) (compare with Proposition 2.8). The normalizing factor ε⁻¹ corresponds to the fast time-scaling of the switching semi-Markov process in (3.1). The small time-scaling ε in the semigroup Γ_{εs}(x) provides the representation (3.2) for the random evolution in the series scheme. As usual, we suppose that the domain D(Γ(x)) contains the Banach space C¹(R^d).
Proposition 3.1 The compensating operator (3.3) in the average scheme, acting on the test functions φ ∈ C^{2,0}(R^d × E), has the asymptotic representation (3.4) (compare with Proposition 2.9), where the remainder operators θ_k^ε(x), k = 1, 2, are, as usual, negligible.
PROOF. The same transformation as in the proof of Proposition 2.9 is used, with one essential difference: the equation for the semigroup is now

Γ_{εs}(x) = I + εΓ(x) ∫₀^s Γ_{εv}(x) dv,

that is, in differential form,

dΓ_{εs}(x) = εΓ(x)Γ_{εs}(x) ds. □
The continuous random evolution in the diffusion approximation scheme, with accelerated switching, is represented by a solution of the evolutionary equation

(d/dt) Φ^ε(t) = Γ^ε(x(t/ε²)) Φ^ε(t), t ≥ 0, (3.6)
Φ^ε(0) = I.

The family of generators Γ^ε(x), x ∈ E, has the following representation:

Γ^ε(x) = ε⁻¹Γ(x) + Γ₁(x). (3.7)

Note that generalizing the average scheme in such a way would not be productive. The compensating operator of the random evolution (3.6) on the test functions φ ∈ C(R^d × E) is given by the relation (in symbolic form, see Section 2.8)

L^ε φ(u, x) = ε⁻²q(x)[P_ε(x) − I]φ(u, x), (3.8)

where P_ε(x) is defined in (3.9).
Proposition 3.2 The compensating operator (3.8)–(3.9), in the diffusion approximation scheme, acting on the test functions φ ∈ C³(R^d × E), has the following asymptotic representation:

L^ε φ(u, x) = [ε⁻²Q + ε⁻¹Γ(x)P + Γ₂(x)P + θ^ε(x)]φ = [ε⁻²Q + ε⁻¹Γ(x)P + θ̃^ε(x)]φ, (3.10)

where the operators Γ₂(x) and θ̃^ε(x) are given in (3.11)–(3.12), and the remainder terms are:

θ₂^ε(x) := [Γ²(x)F̄_x^{(2)} + Γ₁(x)P]Q₀, (3.13)

θ₃^ε(x) := Γ_ε(x)[Γ²(x)F̄_x^{(3)} + Γ₁(x)F̄_x^{(2)}]Q₀. (3.14)

Here, by definition, Γ_ε(x) := Γ(x) + εΓ₁(x).
PROOF. The starting point is the integral equation for the semigroup (3.15), or, in differential form,

dΓ^ε_{ε²s}(x) = εΓ_ε(x)Γ^ε_{ε²s}(x) ds.

Here we use the relation ε²Γ^ε(x) = εΓ_ε(x). The initial representation of the compensating operator is

L^ε = ε⁻²Q + ε⁻²[P_ε(x) − I]Q₀, (3.16)
where

P_ε(x) = ∫₀^∞ F_x(ds) Γ^ε_{ε²s}(x) (3.17)

is transformed, by using (3.15), into

P_ε(x) − I = εΓ_ε(x) P̄_ε(x),

with P̄_ε(x) defined accordingly. Now, by using (3.16), an integration by parts gives

P̄_ε^{(1)}(x) = m(x)I + εΓ_ε(x)P̄_ε^{(2)}(x), P̄_ε^{(2)}(x) = ½ m₂(x)I + εΓ_ε(x)P̄_ε^{(3)}(x), (3.18)

where, by definition,

P̄_ε^{(k)}(x) := ∫₀^∞ F̄_x^{(k)}(s) Γ^ε_{ε²s}(x) ds, k = 1, 2, ..., (3.19)

and

m₂(x) := ∫₀^∞ s² F_x(ds).

Now, putting (3.18) and (3.19) into (3.15), then into (3.8), and using (3.7), we get (3.10). □

The following result concerns the coupled random evolution defined in Definition 2.9.
Proposition 3.3 The coupled Markov random evolution, with the switching Markov process x(t), t ≥ 0, in the average scheme can be characterized by the generator

L^ε φ(u, x) = ε⁻¹Qφ + Γ(x)φ.

The coupled Markov random evolution in the diffusion approximation scheme can be characterized by the generator

L^ε φ(u, x) = ε⁻²Qφ + ε⁻¹Γ(x)φ + Γ₁(x)φ.
It is worth noticing that the characterization of the Markov random evolution is comparatively simpler than the characterization of the semi-Markov random evolution (see Propositions 3.1 and 3.2).

3.2.2 Jump Random Evolutions
The jump random evolution in the average series scheme is represented with fast scaling (3.20). The family of bounded operators D^ε(x), x ∈ E, is supposed to have the following asymptotic representation:

D^ε(x) = I + εD(x) + D₁^ε(x), x ∈ E, (3.21)

on a space B₀ dense in C₀^∞(R^d × E), with the negligible term

‖D₁^ε(x)φ‖ → 0, ε → 0, φ ∈ B₀. (3.22)
The compensating operator of the semi-Markov jump random evolution in the average scheme is represented as in (3.23) (see Proposition 2.10).

Proposition 3.4 The compensating operator (3.23) has the following asymptotic representations:

L^ε φ(u, x) = [ε⁻¹Q + Q₀D(x) + Q₀D₁^ε(x)]φ(u, x) = [ε⁻¹Q + Q₀D_ε(x)]φ(u, x), (3.24)

with

D_ε(x) := D(x) + D₁^ε(x),

and the negligible term (3.25), where, as usual, the norm of the remainder vanishes as ε → 0.
PROOF. The proof is obtained by putting the expansion (3.21) into (3.23). □

The jump random evolution in the diffusion approximation scheme is considered in the accelerated fast-scaling scheme:

Φ^ε(t) = ∏_{k=1}^{ν(t/ε²)} D^ε(x_k), t ≥ 0, Φ^ε(0) = I. (3.26)

The family of bounded operators D^ε(x), x ∈ E, has the following asymptotic expansion:

D^ε(x) = I + εD(x) + ε²D₁(x) + ε²D₂^ε(x), (3.27)

on the test functions φ ∈ B₀, a dense subset of C²(R^d), with the negligible term (3.28). The compensating operator acting on the test functions φ(u, x) is given by (3.29).

Proposition 3.5 The compensating operator of the jump random evolution in the diffusion approximation scheme has the asymptotic representation (3.30).

PROOF. The proof is obtained by putting (3.27) into (3.29). □

The Markov jump random evolutions in the average and diffusion approximation schemes are respectively characterized by the generators L^ε represented in (3.24) and (3.30), with the generator Q of the switching Markov process. It is worth noticing that the semi-Markov random evolution is characterized by the compensating operators in the asymptotic forms (3.24) and (3.30) with the generator Q of the associated Markov process x⁰(t), t ≥ 0. The intensity function of the renewal times is q(x) = 1/m(x), where m(x) := E θ_x is the mean value of the renewal times of the switching semi-Markov process.
3.3 Average Approximation

The phase merging effect for stochastic systems can be achieved under different scalings of the stochastic system and of the switching semi-Markov process. Let us consider the main model of stochastic systems presented in the previous Chapter 2, that is, the stochastic additive functional model.
3.3.1 Stochastic Additive Functionals

Stochastic additive functionals are considered in the following scaling scheme:

ξ^ε(t) = ξ^ε(0) + ∫₀ᵗ η^ε(ds; x(s/ε)), t ≥ 0. (3.31)

The switching semi-Markov process x(t), t ≥ 0, on the standard phase space (E, 𝓔) is given by the semi-Markov kernel

Q(x, B, t) = P(x, B) F_x(t), x ∈ E, B ∈ 𝓔, t ≥ 0. (3.32)
The family of Markov processes with locally independent increments η^ε(t; x), t ≥ 0, x ∈ E, with values in the Euclidean space R^d, d ≥ 1, is given by the generators

Γ_ε(x)φ(u) = a(u, x)φ′(u) + ε⁻¹ ∫_{R^d} [φ(u + εv) − φ(u)] Γ(u, dv; x), (3.33)

defined on the Banach space C¹(R^d). The fast time-scaling for the switching process in (3.31) corresponds to the scale factor ε for the increments εv of the switched processes η^ε(t; x), t ≥ 0. This explains why the large-scale intensity of the switching process is compensated by the small scale of the increments of the switched processes.

By subtracting the first moment of the jump values in (3.33), the generator takes the form

Γ_ε(x)φ(u) = Γ(x)φ(u) + γ_ε(x)φ(u), (3.34)

where

Γ(x)φ(u) := g(u; x)φ′(u), (3.35)

and the remainder γ_ε(x) is given in (3.36).
Here

g(u; x) := a(u; x) + b(u; x), b(u; x) := ∫_{R^d} v Γ(u, dv; x). (3.37)
Let us consider the following assumptions.

A1: The switching semi-Markov process x(t), t ≥ 0, is uniformly ergodic with stationary distribution π(B), B ∈ 𝓔.

A2: The function g(u; x), u ∈ R^d, x ∈ E, is (globally) Lipschitz continuous in u ∈ R^d, with a common Lipschitz constant L for all x ∈ E. Hence there exists a global solution to the evolutionary systems

(d/dt) U_x(t) = g(U_x(t), x), x ∈ E.

A3: The operators γ_ε(x) are negligible for φ ∈ C₀¹(R^d), that is, ‖γ_ε(x)φ‖ → 0, ε → 0.

A4: The initial value condition is: ξ^ε(0) ⇒ ξ(0), E|ξ^ε(0)| ≤ C < +∞.
The average phase merging principle is formulated as follows.

Theorem 3.1 Under Assumptions A1–A4, the stochastic additive functional (3.31) converges weakly, as ε → 0, to the average evolutionary deterministic system Û(t), t ≥ 0, determined by a solution of the evolutionary equation

(d/dt) Û(t) = ĝ(Û(t)), (3.38)

where the average velocity ĝ(u) is given by

ĝ(u) = â(u) + b̂(u), (3.39)

where

â(u) = ∫_E π(dx) a(u; x), b̂(u) = ∫_E π(dx) b(u; x). (3.40)
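Theorem 3.1 can be illustrated by a small Monte Carlo experiment. The two-state switching process and the velocity a(u; x) = C[x] − u below are illustrative assumptions: as ε → 0, the switched trajectories should merge with the averaged system dÛ/dt = ĉ − Û, where ĉ = Σ_x π(x)C[x] = −1/3.

```python
import math
import random

RATE = {0: 1.0, 1: 2.0}   # q(x); stationary distribution pi = (2/3, 1/3)
C = {0: -1.0, 1: 1.0}
C_BAR = (2.0/3.0) * C[0] + (1.0/3.0) * C[1]     # averaged target, -1/3

def u_eps(T, eps, rng):
    """dU/dt = C[x(t/eps)] - U, using the exact flow between jumps."""
    t, x, u = 0.0, 0, 0.0
    while t < T:
        theta = eps * rng.expovariate(RATE[x])  # accelerated sojourn
        dt = min(theta, T - t)
        u = C[x] + (u - C[x]) * math.exp(-dt)
        t += dt
        x = 1 - x
    return u

rng = random.Random(3)
paths = [u_eps(2.0, 0.005, rng) for _ in range(10)]
u_avg = C_BAR + (0.0 - C_BAR) * math.exp(-2.0)  # averaged solution at t = 2
print(sum(paths) / len(paths), u_avg)
```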
Remark 3.1. The weak convergence

ξ^ε(t) ⇒ Û(t), ε → 0,

means, in particular, that for every finite time T,

sup_{0≤t≤T} |ξ^ε(t) − Û(t)| → 0 in probability, as ε → 0. (3.41)
The verification of the average merging principle (3.38)–(3.40) is made in Chapter 5. The weak convergence (3.41) is investigated in Chapter 6. The proof of Theorem 3.1 is based on the representation of the stochastic additive functional (3.31) by the associated continuous random evolution (3.1). The corresponding family of generators Γ_ε(x), x ∈ E, ε > 0, is represented in (3.34). Setting (3.34) into the asymptotic representation (3.4) of the compensating operator (3.3) (see Proposition 3.1), we get the following form of the compensating operator for the stochastic additive functional:

L^ε φ(u, x) = [ε⁻¹Q + Γ(x)P + εθ^ε(x)]φ = [ε⁻¹Q + Γ̃_ε(x)]φ, (3.42)

where

Γ̃_ε(x) := γ_ε(x)P + εθ^ε(x), (3.43)

and the remaining terms θ_k^ε(x), k = 1, 2, are given in (3.19), with the generators Γ_ε(x) and the semigroups Γ_{εs}(x) depending on the series parameter ε > 0. The family of generators Γ(x), x ∈ E, is represented in (3.35), and the negligible term γ_ε(x), x ∈ E, is represented in (3.36).

Let us now give a heuristic explanation of the phase merging effect of Theorem 3.1. The average algorithm in Theorem 3.1 is evident from the ergodic-theorem point of view; the question is how the ergodicity principle works. To explain this, let us consider the Markov additive process ξ^ε(t), x(t/ε), t ≥ 0, which can be characterized by the generator

L^ε φ(u, x) = [ε⁻¹Q + Γ(x)]φ(u, x),

on the Banach space C¹(R^d × E) of functions φ(u, x), where the generator Γ(x) is defined in (3.35) and the negligible operator (3.36) is neglected. The
uniform ergodicity of the switching Markov process with generator Q provides the definition of the projection operator Π (see Section 1.6), which satisfies the property

ΠQ = QΠ = 0.

The projector Π acts on functions φ ∈ B(E) as follows:

Πφ(x) = ∫_E π(dx) φ(x) · 1(x) = φ̂ · 1(x),

where 1(x) = 1 for all x ∈ E, and φ̂ := ∫_E π(dx)φ(x).

Since Πφ(u) = φ(u), the generator L^ε of the Markov additive process acts on a function φ ∈ C¹(R^d) that does not depend on x ∈ E as follows:

L^ε φ(u) = Γ(x)φ(u).

Note that, since ΠQ = 0,

ΠL^ε φ(u, x) = [ε⁻¹ΠQ + ΠΓ(x)]φ(u, x) = ΠΓ(x)φ(u, x).

Hence, we have

ΠL^εΠφ(u, x) = ΠΓ(x)Πφ(u, x) = Γ̂ φ̂(u),

where φ̂(u) := ∫_E π(dx)φ(u; x). The average evolution in Theorem 3.1 is characterized by the main part of the averaged generator
Γ̂Π = ΠΓ(x)Π.

Note that the problem of verification of such a scheme is still open (see Chapters 5 and 6). The stochastic homogeneous additive functional in the series scheme
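For a finite phase space, the projector identities above can be verified by direct matrix computation. The two-state generator below is a hypothetical example: Π is the rank-one matrix built from the stationary distribution π, and the averaged velocity at a fixed u is â = Σ_x π(x) a(u; x).

```python
# Hypothetical 2-state generator Q (rows sum to zero), with pi Q = 0.
Q = [[-1.0, 1.0], [2.0, -2.0]]
pi = [2.0/3.0, 1.0/3.0]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Projector: (Pi phi)(x) = sum_y pi(y) phi(y), the same for every x.
Pi = [[pi[0], pi[1]], [pi[0], pi[1]]]

PQ, QP = matmul(Pi, Q), matmul(Q, Pi)
print(PQ, QP)              # both should be the zero matrix

a = [1.0, -0.5]            # values a(u; x) at a fixed u, per state
a_hat = sum(pi[x] * a[x] for x in range(2))
print(a_hat)               # averaged velocity, here 2/3 - 1/6 = 1/2
```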
The variance σ² is calculated by

σ² = σ₀² + σ₁²,

where σ₀² and σ₁² are the corresponding averaged characteristics,
3.4. DIFFUSION APPROXIMATION
and the velocity of the drift â₁ is defined accordingly. The potential operator R₀ (see Section 1.6) corresponds to the generator Q associated to the Markov process, where q(x) := 1/m(x), m(x) := ∫₀^∞ F̄_x(t) dt.

Remark 3.3. The function ρ(x) is positive if the density f_x of F_x (with respect to Lebesgue measure on R₊) is a completely monotone function, that is, if the derivatives f_x^{(n)}, n = 1, 2, ..., exist and (−1)ⁿ f_x^{(n)}(t) ≥ 0. This class of distribution functions is included in the class of decreasing failure rate distribution functions (see [84]). In the case where f_x is a Pólya frequency function of infinite order (PF_∞), we have ρ(x) ≤ 0; this class of distribution functions is a subset of the class of increasing failure rate distribution functions. We have ρ(x) = 0 for exponentially distributed renewal times, that is, for switching Markov processes.
Corollary 3.6 Under Conditions D1–D3, the integral functional (3.53) with switching Markov process converges weakly,

ξ^ε(t) ⇒ ξ⁰(t) := u₀ + â₁ t + σ₀ w(t), ε → 0,

where σ₀² is defined as in Theorem 3.3.

Let us give here a heuristic explanation of the diffusion approximation of the integral functional. By using the representation (3.54), the integral functional takes the form

∫₀ᵗ a_ε(x(s/ε²)) ds = ε⁻¹ ∫₀ᵗ a(x(s/ε²)) ds + ∫₀ᵗ a₁(x(s/ε²)) ds = ε ∫₀^{t/ε²} a(x(s)) ds + ∫₀ᵗ a₁(x(s/ε²)) ds.
It is easy to see that the second term satisfies the average principle

∫₀ᵗ a₁(x(s/ε²)) ds ⇒ â₁ t, ε → 0.
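The first term can be explored numerically. For a hypothetical two-state chain with sojourn rates (1, 2) and values a = (1, −2), the balance condition π(a) = 0 holds, and the classical central limit theorem for additive functionals gives the asymptotic variance σ₀² = 2Σ_x π(x)a(x)h(x), where h solves the Poisson equation Qh = −a; here h = (1/3, −2/3) and σ₀² = 4/3. This example and its hand-computed variance are assumptions used for illustration, not the book's computation.

```python
import random

RATE = {0: 1.0, 1: 2.0}    # stationary pi = (2/3, 1/3)
A = {0: 1.0, 1: -2.0}      # pi . a = 2/3 - 2/3 = 0 (balance condition)
SIGMA2 = 4.0 / 3.0         # 2 * sum pi(x) a(x) h(x), with Q h = -a

def xi_over_sqrtT(T, rng):
    """One sample of (1/sqrt(T)) * integral_0^T a(x(s)) ds."""
    t, x, xi = 0.0, 0, 0.0
    while t < T:
        dt = min(rng.expovariate(RATE[x]), T - t)
        xi += A[x] * dt
        t += dt
        x = 1 - x
    return xi / T ** 0.5

rng = random.Random(4)
sample = [xi_over_sqrtT(200.0, rng) for _ in range(2000)]
var = sum(v * v for v in sample) / len(sample)
print(var)                 # should be close to SIGMA2 = 4/3
```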
The first term requires a more thorough explanation. The integral functional with time-scaling

α^ε(t) = ε ∫₀^{t/ε²} a(x(s)) ds,

under the balance condition

∫_E π(dx) a(x) = 0,

induces fluctuations comparable to the accelerated motion determined by the velocity

ã₀(x) = a(x) R₀ a(x).

Indeed, the potential kernel R₀(x, dy) can be interpreted as an intensity of transition between the state x and dy. The variance of the Wiener process can then be interpreted as a characteristic of its accelerated motion.

3.4.2 Stochastic Additive Functionals
The diffusion approximation is applied to the stochastic additive functionals (Section 3.3) in the series scheme with accelerated switching:

ξ^ε(t) = ξ^ε(0) + ∫₀ᵗ η^ε(ds; x(s/ε²)), t ≥ 0. (3.56)

The family of processes with locally independent increments η^ε(t; x), t ≥ 0, x ∈ E, also depends on the series parameter ε and is determined by the generators (3.57). The process x(t/ε²), t ≥ 0, is a semi-Markov process as described in the previous section.
The selection of the first two moments of the jump values in (3.57) transforms the generators into the following form:

Γ_ε(x)φ(u) = g_ε(u; x)φ′(u) + ½ c_ε(u; x)φ″(u) + γ_ε(x)φ(u).

The intensity kernel has the representation

Γ_ε(u, dv; x) = Γ(u, dv; x) + εΓ₁(u, dv; x), (3.58)

and the velocity of the deterministic drift has the representation

g_ε(u; x) = g(u; x) + εg₁(u; x). (3.59)

The time-scaling of the increments in (3.57) is made for the same reasons as in the average scheme (Section 3.3.1). The time-scaling of the intensity kernel is connected with the finiteness of the second moments of the increments. The balance condition (3.60) provides the compensation of the velocity ε⁻¹g(u; x) appearing in the diffusion scheme. Let us state the following additional conditions.
D3′: The velocity functions g(u; x) and g₁(u; x) belong to C¹(R^d × E), and the balance condition is fulfilled:

∫_E π(dx) g(u; x) = 0. (3.60)

D4: The operators

γ_ε(x)φ(u) := ε⁻¹ ∫_{R^d} [φ(u + εv) − φ(u) − εvφ′(u) − (ε²v²/2)φ″(u)] Γ_ε(u, dv; x)

are negligible for φ ∈ C₀³(R^d), that is, ‖γ_ε(x)φ‖ → 0, ε → 0.
Theorem 3.4 Under Assumptions D1, D2, D3′ and D4, the following weak convergence holds:

ξ^ε(t) ⇒ ξ⁰(t), ε → 0,

provided that the diffusion coefficient B̂(u) is positive for u ∈ R^d. The limit diffusion process ξ⁰(t), t ≥ 0, is defined by the generator

Lφ(u) = ĝ(u)φ′(u) + ½ B̂(u)φ″(u).

The velocity of the drift is

ĝ(u) = ĝ₁(u) + ĝ₂(u) + ĝ₃(u),

where the summands ĝ_k(u) and the diffusion coefficient B̂(u) are given by the corresponding averaged characteristics.
Let us consider a stochastic additive functional ζ^ε(t), t ≥ 0, represented by

ζ^ε(t) = ζ₀ + ∫₀ᵗ ζ^ε(ds; x(s/ε²)), t ≥ 0. (3.61)
The family of Markov processes with independent increments ζ(t; x), t ≥ 0, x ∈ E, is determined by the generators (3.62).
Subtracting the first two moments of the jump values, the transformed generators take the following form:

Γ_ε(x)φ(u) = g_ε(x)φ′(u) + ½ c_ε(x)φ″(u) + γ_ε(x)φ(u).

Here

c_ε(x) := ∫_{R^d} v v* Γ_ε(dv; x).

The velocity of the deterministic drift has the representation

g_ε(x) = ε⁻¹g(x) + g₁(x). (3.63)

The intensity kernel is

Γ_ε(dv; x) = Γ(dv; x) + εΓ₁(dv; x). (3.64)
Then the following balance condition holds:

∫_E π(dx) g(x) = 0. (3.65)

The first two moments of the increments are bounded functions.

Corollary 3.7 Under Assumptions D1–D2, the following weak convergence holds:

∫₀ᵗ ζ^ε(ds; x(s/ε²)) ⇒ ζ⁰(t), ε → 0.
The limit diffusion process ζ⁰(t), t ≥ 0, is determined by the generator

L⁰φ(u) = ĝ φ′(u) + ½ B̂ φ″(u),

with the averaged coefficients ĝ and B̂ defined through the stationary distribution π.
3.4.3 Stochastic Evolutionary Systems

The evolutionary stochastic system in the diffusion approximation scheme is given by the evolutionary equation

(d/dt) U^ε(t) = g^ε(U^ε(t); x(t/ε²)), U^ε(0) = u.

The velocity g^ε has the following representation:

g^ε(u; x) = ε⁻¹g(u; x) + g₁(u; x),

and the balance condition (3.60) holds.

Corollary 3.8 Under Assumptions D1, D2, and D3′, the following weak convergence holds:

U^ε(t) ⇒ ζ⁰(t), ε → 0,

provided that the diffusion coefficient B̂ is positive. The limit diffusion process ζ⁰(t), t ≥ 0, is determined by the generator

L⁰φ(u) = ĝ(u)φ′(u) + ½ B̂(u)φ″(u).

The velocity of the drift is

ĝ(u) = ĝ₁(u) + ĝ₂(u) + ĝ₃(u).
The covariance function is determined by the corresponding averaged second-moment characteristics.

3.4.4 Increment Processes
The diffusion approximation for the increment processes in the series scheme is considered with the following time-scaling:

ζ^ε(t) = ζ₀ + ε Σ_{n=1}^{ν(t/ε²)} a_ε(x_n), t ≥ 0. (3.66)

The values of the jumps are

a_ε(x) = a(x) + εa₁(x).

The following balance condition holds:

∫_E ρ(dx) a(x) = 0,

where ρ(dx) is the stationary distribution of the embedded Markov chain x_n, n ≥ 0.
Theorem 3.5 Under Assumptions D1–D3, the following weak convergence holds:

ζ^ε(t) ⇒ ζ₀ + â t + σ w(t), ε → 0,

provided that σ² > 0. The variance σ² is calculated by

σ² = σ₀² + σ₁²,

where:
C₀(x) := C(x) R₀ C(x), C(x) := b(x)/m(x),

and the drift velocity â is defined by the corresponding averaged characteristic.

Remark 3.4. As in the averaging scheme (Section 3.3.2), since the increment process (3.66) has its jumps at the renewal moments, the averaging effect is realized by using the stationary distribution ρ(dx) of the embedded Markov chain. The normalizing factor 1/m transforms the discrete jumps of the increment process into the continuous characteristics of the limit process.
3.5 Diffusion Approximation with Equilibrium

The balance condition in the diffusion approximation for the stochastic additive functional in the series scheme, considered in Section 3.4.2, provides a limit diffusion process that is homogeneous in time. In applications there are situations in which the average approximation is not trivial; that is, the limit process must be considered as an equilibrium process, very often deterministic, determining the main behavior of the stochastic system on increasing time intervals. The problem of approximating the fluctuations of stochastic systems with respect to the equilibrium is considered in this section.
3.5.1 Locally Independent Increment Processes

First we consider a stochastic system in the series scheme, with small series parameter ε > 0, ε → 0, described by a Markov process with locally independent increments η^ε(t), t ≥ 0, on the Euclidean space R^d, d ≥ 1, given by the generator (3.67).
The main condition in the average scheme is the asymptotic representation of the first moment of the jumps:

b_ε(u) := ∫_{R^d} v Γ_ε(u, dv) = b(u) + εb₁(u) + εθ^ε(u),

with bounded continuous functions b(u), b₁(u), and with the negligible term

‖θ^ε‖ → 0, ε → 0.

Then the Markov process η^ε(t), t ≥ 0, converges weakly to the solution of the evolutionary equation

(d/dt) ρ(t) = b(ρ(t)), ρ(0) = η(0) = u.
If there exists an equilibrium point ρ for the velocity b(u), that is,

b(ρ) = 0,

and the initial value of the process is close to the point ρ (see (3.75)), then the weak convergence

η^ε(t) ⇒ ρ, ε → 0, t → ∞, (3.68)

holds. The approximation of the fluctuation η^ε(t) − ρ is considered in the following centered and normalized scheme:

ζ^ε(t) := η^ε(t/ε) − ε⁻¹ρ, t ≥ 0. (3.69)

Such a normalization can be explained by noticing that

ζ^ε(t) := [εη^ε(t/ε) − ρ]/ε. (3.70)

The convergence (3.68) provides the weak convergence

εη^ε(t/ε) ⇒ ρ, ε → 0, t → ∞.

Hence, the normalized scheme (3.70) is productive.
Theorem 3.6 Let the intensity of the jump values of the Markov process η^ε(t), t ≥ 0, given by the generator (3.67), have the asymptotic representations of the first two moments of jumps, as ε → 0:

b_ε(z + εu) := ∫_{R^d} v Γ_ε(z + εu, dv) = b(z) + εb̃(z, u) + εθ₁^ε(z, u), (3.71)

B_ε(z + εu) := ∫_{R^d} v v* Γ_ε(z + εu, dv) = B(z) + θ₂^ε(z, u), (3.72)

with the negligible residual terms

‖θ_i^ε‖ → 0, ε → 0, i = 1, 2. (3.73)

Then the normalized centered process (3.69) converges weakly, as ε → 0, to the diffusion process ζ⁰(t), t ≥ 0, given by the following generator:

L⁰φ(u) = b̃(ρ, u)φ′(u) + ½ B(ρ)φ″(u). (3.74)

The initial value of the limit diffusion is

ζ⁰(0) = lim_{ε→0} [εη^ε(0) − ρ]/ε, that is, εη^ε(0) ≈ ρ + εζ⁰(0). (3.75)
Remark 3.5. Let the intensity kernel be represented by r E ( u , d v )= r ( u , d v )
+ &I'1(u,dv),
(3.76)
and the kernel r ( u , d v ) have continuous derivative in u.Then the asymptotic representation (3.71) has the following form
v ( z , u ) = qz)+&[bl(z)
+ uqu)l +&eyz,u),
where r
Corollary 3.9 Under the conditions of Theorem 3.6 and the additional condition (3.76), the limit diffusion process Co(t),t2 0 , i s defined by the generator
3.5.DIFFUSION APPROXIMATION WITH EQUILIBRIUM
93
where
that is the Ornstein Uhlenbeck diffusion process. 3.5.2
Stochastic Additive finctionals with Equilibrium
More complicated but some what similar is the diffusion approximation of the stochastic additive functional (Section 3.3.1) in the series scheme satisfying the average approximation conditions with nonzero average limit processes. That is, the stochastic additive functional with Markov switching in the average approximation scheme is represented as follows
&(t)= 50“+
t
$(dS;z(s/E)),
t 2 0.
0
The family of Markov processes with locally independent increments
f ( t ;x),t 1. 0, x E E , with values in the Euclidean space Rd, d 3 1, is given by the generators (Section 3.3.1)
+
]r&)cp(u) = &J(%xC>Cp’(u) &  l % ( 4 ( P ( 4 ,
(3.77)
defined on the Banach space C1(Rd). The switching Markov process x ( t ) , t 2 0 , on the standard state space ( E , E ) is given by the generator Q d z ) = q(z)
1 E
P ( x ,~ Y ) [ V ( Y )cp(z)l.
(3.79)
According to Theorem 3.1, the following weak convergence holds
r“(t)r. Cg(t),
&
+
0.
The limit process f ( t ) , t 2 0, is a solution of the deterministic evolution equation
$At) = Xm, (3.80)
CgP) = &.
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
94
The average velocity c(u),u E Rd, is defined as follows
Now we consider the centered stochastic additive functional
with the rescaled switching Markov process as follows rt
(3.82) and with the more general of the stochastic additive functional qe(t;x),t 2 0, x El
where
This generalization means that the velocity of drift g E ( u ; x ) and the intensity kernel r,(u,dv;x)now depend on the parameter series in the following way S E ( Y
). = s(u;).
+ Egl(U; 21,
(3.85)
and
Subtracting the second moment of jump values in (3.83)3.84) gives the representation
Here:
3.5.DIFFUSION APPROXIMATION W I T H EQUILIBRIUM
and CE(u;x) :=
ld
vv*rE(u, d v ; x).
95
(3.88)
From (3.86), we get in (3.88) c E ( u ; x= ) C ( u ; x )+&cl(u;x),
where
Theorem 3.7 (Diflusion approximation without balance condition). Let the following conditions be fulfilled. D1’: The velocity and the intensity kernel are represented by (3.85) and (5’.86). D2’: The velocity functions and the second moments of jumps have the following asymptotic expansion:
+ E u ; x) = C(v; x) + e;(v,
C(?J
21;
x),
with the negligible terms ei(v, u;x),k = 1 , 2 , 3 satisfying the condition, f o r any R > 0 , sup
u;
+ 0,
+0.
ZEE IuIO
5 c < +m.
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
96
Then the weak convergence holds
C ( t )===+a t > ,
E
4
0,
provided that B ( v ) > 0. The limit diffusion process t 2 0, is determined by the generator of the coupled Markov process C(t),r(t),t 2 0,
t(t),
1 W u , v) = b(v,u)cpl(u,). + f(")cp:,(%
v) + %J)(P1(% v),
where: b(v,'LL)= &(v)
+ uY(v),
The covariance function is
where:

C(v) =
s,
7r(dz)C(v;z).
Here
j(v;z)
:= g(v;z) 
gv),
and & is the potential operator of Q (Section 1.6). This means that the coupled Markov process c(t),r(t),t 2 0, can be defined as a solution of the system of stochastic differential equations
+0(5^(WW(t),
d a t ) = b ( t @ ) ,F(t))dt
d r ( t ) = g^(F(t))dt. The covariance function g(v) is determined from the representation
B ( v ) = O(v)a*(v).
97
3.5. DIFFUSION APPROXIMATION WITH EQUILIBRIUM
Remark 3.6. The limit diffusion process r ( t ) , t 2 0, is not homogeneous in time and is determined by the generator
The limit diffusion process is switched by the equilibrium process r ( t ) t, 2 0.
Remark 3.7. The stationary regime in the averaged process (3.80) is obtained when the velocity has an equilibrium point p, that is, c ( p ) = 0. Then the limit diffusion process c(t), t 2 0, is of the OrnsteinUhlenbeck type with generator Cop(.)
+ p1 d y u ) ,
= b(u)cp'(u)
where: b ( u ) = bo
bo = &), 3.5.3
bl
+
Ubl,
= Z'(p),
B = B(p).
Stochastic Evolutionary Systems with SemiMarkov Switching
Now the stochastic evolutionary systems in diffusion approximation scheme considered in Section 3.4.3 is investigated without balance condition (3.60) but under assumption of average approximation conditions of Corollary 3.3, (Section 3.3). The centered and normalized process is considered as follows
C'(t) = El[U'(t)  G ( t ) ] ,
(3.89)
The stochastic evolutionary system V ( t )is described by a solution of the evolutionary equation in Rd
U'(t) d dt
= aE(U'(t);Z(t/E2)),
(3.90)
with a,(u;x) = a ( u ; x )+cal(u;Z), where u E Rd and x E E.
(3.91)
98
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
The switching semiMarkov process x ( t ) ,t 2 0 , on the standard state space ( E ,E ) , is given by the semiMarkov kernel
Q(., B , t ) = p(., B ) F z ( t ) ,
(3.92)
for x E E, B E E , and t 2 0, supposed t o be uniformly ergodic with the stationary distribution 7r(B),B E E , satisfying the relation
r ( d x ) = p(d.c)m(.)/m,
(3.93)
where p ( B ) ,B E E , is the stationary distribution of the embedded Markov chain x,, n 2 0, given by the stochastic kernel
P(.,B) := P(.,+1
E
B 1 x, = x).
(3.94)
As usual: M
m ( z ):=
Fx(t)dt, F z ( t ):= 1  F z ( t ) , m := L p ( d x ) m ( x ) . ( 3 . 9 5 )
The deterministic average process of the average evolutionary equation d
V(t) dt
6(t),t 2 0, is defined by a solution
= 2(6(t)),
(3.96)
r ( d z ) u ( u ;2).
(3.97)
with the average velocity h
a(.)
= /E
Theorem 3.8 Let the stochastic evolutionary system (3.89) be defined by relations (3.89)(3.97) and the following conditions be fulfilled. C1: The switching semiMurkov process x ( t ) , t 2 0, is uniformly ergodic with stationary distribution x(dx) on the compact phase space E . C2: The following asymptotic expansions take place:
+
a(w
EU;
where, for any R > 0 ,
= a(w;x)
+ Eua;(w;). + e;(w,u;.).
3.5. DIFFUSION A P P R O X I M A T I O N W I T H EQUILIBRIUM
99
Moreover, the velocity functions a(u;x) and a l ( u ;x) satisfy the global solution of equations (3.90) and (3.91). Then the weak convergence for 0 5 t 5 T ,
C“(t)===+ c O ( t ) ,
E
+
0,
takes place. The limit diffusion process cO(t),t2 0, is determined by the generator of the coupled process co(t),6(t), t 2 0,
ILv(u, V ) = b ( ~W,) ( P ; ( U, W)
1 + B(v)p;,(u, + G(W)~:(U,w). 2 W)
(3.98)
Here:
b(u,v)= 2 1 ( W ) + u2’(v), U(W)
=
L
(3.99)
,.
n ( d x ) a ( v ; x ) , a1(v) =
The covariance matrix B ( v ) ,v
E Rd,
L
r(dx)al(v;x).
is determined b y the relations:
B ( v ) = Bo(v) + B i ( v ) ,
(3.100)
r ( d x ) Z ( v x)RoZ(v; ; z), Bl(V) =
s,
n ( d x ) p ( x ) Z ( vx)Z*(w; ; x)
(3.101)
4 2 ) = b z ( x )  2m2(41/m(4 a(v; x) = a(v;x)  q.).
In the particular case of Markov switching, we have p(x) = 0 (see Remark 3.3, page 83). The limit diffusion process co(t),t 2 0 , is nonhomogeneous in time and is solution of the following SDE
+
+
dCO(t)= [ a l ( C ( t ) ) E’(C(t))CO(t)]dt B1’2(6(t))dw(t), (3.102) where w ( t ) ,t 2 0 is the standard Wiener process in Rd. The stationary regime for the average process 6(t),t2 0, is obtained when the average velocity 2(v) has an equilibrium point p, that is, 2 ( p ) = 0. Then the limit diffusion process c(t),t2 0, is an OrnsteinUhlenbeck process with the following generator 1 t p ( u ) = b(u)p’(u) ~ B p ” ( u ) ,
+
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
100
where
b ( ~=) bl + t h o ,
bl = Zl(p),
B
bo = E’(p),
= B(p).
PROOF. The proof of Theorem 3.8 is divided into several steps. First, the extended Markov chain h
Uz = U(E~T,), Z, = z(T,),
[: = [ ‘ ( E ~ T , ) ,
n 2 0,
(3.103)
is considered, where ~ ~ 2 , 0, nis the sequence of the Markov renewal moments (moments of jumps of the semiMarkov process x ( t ) , t 2 0), that is:
F S ( t )= p(e,+l I t I 2,
=
Let us introduce the following families of semigroups:
rz(z)cp(u) = cp(U:(t)),
U:(o)
=uE
Rd,
(3.104)
where U i ( t ) , t 2 0, is a solution of the evolutionary system
d
UG(t) dt
= ae(U:(t);Z),
zE E,
(3.105)
and, similarly,
Xt&)
= cp(G(t)), G(0)= 2, E Rd,
(3.106)
where C(t),t L: 0, is the solution of the average evolutionary system (3.96). It is worth noticing that the generators of semigroups (3.104) and (3.106) are respectively: IrE(5)cp(U)
icp(2,)
= a,(u; z)cp’(4, = 2(2,)cp’(.).
The following generators will be also used: T(z)cp(u) = a ( u ;Zc)cp’(4, F(Z)y7(U)
= qu;Z)cp’(U),
a(u;Z):= a(u;Z) E(u).
The main object in asymptotic analysis with semiMarkov processes is the compensating operator of the extended embedded Markov chain (3.103) given here in the next lemma.
101
3.5. DIFFUSION APPROXIMATION W I T H EQUILIBRIUM
Lemma 3.1 The compensating operator of the extended embedded Markov chain (3.103) is determined by the relation
(3.107)
where the semigroup I'Z(xlv),t 2 0, is defined by the generator:
+
A E ( vz)cp(u) ; = [ d ( EU; ~ z)  Z(v)]cp'(u), a"(u;x) := &la& z) = cla(u; x) a1(u; z),
+
(3.108) (3.109)
It is worth noticing that the generator AE(v; x) in (3.108) can be transformed by using condition C2 of Theorem 3.8, as follows (3.110)
A'(.; x) = E'A&(v; x),
+
A,(v; x)cp(u) := [a,(v EU; x)  Z(v)]cp'(~) = a(v;x)cp'(u) cb(u,u; x)cp'(u) where by definition:
+ F(v,u;z)p(u),
+
I
a(v;z) = a(v;x)  Z(v), b ( v , U ; x )= al(v;z) +uu;(v;x).
PROOF OF LEMMA 3.1. The proof of this lemma is based on the conditional expectation of the extended embedded Markov chain (3.103) which is calculated by using (3.89)(3.91) and (3.96): E[cp(C:+,,
1

u:+1,2,+1)
00
=
F,(dt)E[cp(u
+
I C:
=
u,u; = v,z,
= x]
1
E2
E  ~
aE(Ug(s);x)ds
t
2(6(s))ds],
0 The next step in the asymptotic analysis is to construct the asymptotic expansion of the compensating operator with respect to c , (see Lemmas 5.35.4, Section 5.5.3).
CHAPTER 3. STOCHASTIC SYSTEMS IN THE SERIES SCHEME
102
Lemma 3.2 The compensating operator (3.1Or)
[email protected]) has the following asymptotic representation o n test functions 'p E c:J(w~ x I W ~ )
+ &'X(V; v)P'p(u,*, .) +[LO(Z,V ) P P ( * , .>+ &J)PP(% ., .)I
ILE'p(u,V ,X) = cW2Q'p(., .,x)
21,
+OfP(%v,x),
(3.111)
with the negligible term
Here, by definition
QdxC) = q(x)[P Ilc~(x),
(3.112)
is the generator of the associated Markov process xo(t),t 1 0 , with the intensity function
The generator X(V;x), and the operator ILO(v; x) are defined as follows: X(v; z)p(u) = q v ;z)'p'(u),
(3.113)
and
+
b(v, u;x) := a l ( v ;x) ua:(v; x), B1(v;x) := p2(x)iz(w;x)ii*(v; x),
(3.115) (3.116)
pz(x) := m a ( x ) / m ( z ) ,
1
00
mz(2) :=
t2Fz(dt).
The proof of Lemma 3.2 is given in Section 5.5.3.
(3.117)
0
Chapter 4
Stochastic Systems with Split and Merging
4.1 Introduction In the study of real systems a special problem arises, connected to the generally high complexity of the state space. Concerning this problem, in order to be able to give analytical or numerical tractable models, the state space must be simplified via a reduction of the number of states. This is possible when some subsets are connected between them by small transition probabilities and the states within such subsets are asymptotically connected. That is typically the case of reliability and in most applications involving hitting time models, for which the state space is naturally cut into two subsets (the up states set and the down states set) In this case, transitions between the subsets are slow compared with those within the subsets. In the literature, the reduction of state space is also called aggregation, lumping, or consolidation of state space. This chapter deals with average and diffusion approximations with single and double asymptotic phase split and merging of the switching process. The asymptotic merging provides a simpler process and for that reason is important for applications, as for example in reliability where in general two subsets of states are of interest: up and down states. 1009127.
The main object studied here is the following stochastic additive functional (see Sections 2.6, 3.3.1, 3.4.2, and 3.5.2)
["(t)= ['(O)
+
/
t
q"(ds; z " ( s / E ) ) ,
t 2 0 , E > 0.
0
The switching semiMarkov process z ( t ) is considered in two cases: er103
104
CHAPTER 4 . STOCHASTIC SYSTEMS WITH SPLIT A N D MERGING
godic and absorbing. Particular cases of the above additive functional that will be studied are the following three: 1. Integral Functional
d ( t )=
I"
a " ( z E ( s / & ) ) d s , t 2 0,
E
> 0.
2. Dynamical System d dtU & ( t )
= CE(U&(t); X&(S/&)),
t 2 0, & > 0.
(44
3. Compound Poisson Process V(t/&)
cE(t)= E
C aE(z;),
t 2 0, t > 0,
(4.3)
k=l
where ~ ( t t) 2 , 0 , is a Poisson process. The above functional F ( t ) , t 2 0, can also be written in the following form V E ( t / & ) 1
( " (t)=
1
+ $(&ee(t);zE(t/&)),t 2 0,
'$(&ek;z;i)
&
> 0.
k=l
The generators r,(z),z E E , of the Markov processes with locally independent increments $(t; z), are given in Section 3.3, that is ~,(z)Cp(u)= a&(Kz)Cp'(u) +&I
4.2
S,.
[Cp(u
+ &v)  Cp(u)

&vcpyu)lr,(u, dv;z). (4.4)
Phase Merging Scheme
4.2.1 Ergodic Merging The general scheme of phase merging, described in the introduction, now will be realized for the semiMarkov processes zc"(t),t 2 0, with the standard phase (state) space ( E , E ) ,in the series scheme with the small series parameter E 0, E > 0, on the split phase space (see Fig. 4.1) f
N
E= UEk, k= 1
[email protected],
k#k'.
(4.5)
4.2. PHASE MERGING SCHEME
105
Remark 4.1. More general split schemes can be used without essential changes in formulation, for example
E
=
u E,,
E V p v =r 0,
#
VEV
where the factor space (V,V) is a compact measurable space. The case where V is a finite set is of particular interest in applications. The semiMarkov kernel is
wher e xE E , B ~ & , t 2 0 . Let us introduce the following assumptions:
ME1: The transition kernel of the embedded Markov chain x;, n 2 0, has the following representation
P(z, B ) = P ( z ,B ) + EPI(IC, B).
(4.7)
The stochastic kernel P ( x ,B ) is coordinated with the split phase space (4.5) as follows
The stochastic kernel P ( z ,B ) determines the support Markov chain z,,n >_ 0, on the separate classes Ek, 1 5 k 5 N , (see Fig. 4.1 (b)). Moreover, the perturbing signed kernel Pl(s, B ) satisfies the conseruative condition
which is a direct consequence of (4.7) and P E ( zE , ) = P ( z ,E ) = 1. M E 2 : The associated Markov process z o ( t ) t, 2 0 , given by the generator
where q ( z ) := l/rn(z), is uniformly ergodic in every class Ek, 1 5 k 5 N , with the stationary distributions 'ITk(dx),1 5 k I N , satisfying the
106
CHAPTER 4. STOCHASTIC SYSTEMS WITH SPLIT A N D MERGING
relations:
As a consequence, the Markov chain z,,n 2 0, is uniformly ergodic with the stationary distributions p k ( B ) , B E &k = & f l Ek,1 5 k 5 N , satisfying the integral equations
ME3: The average exit probabilities
are positive, and the merged mean values
are positive and bounded. The perturbing signed kernel Pl(x,B ) in (4.7) defines the transition probabilities between classes Ek, 1 5 k I N . So, relation (4.7) means that the embedded Markov chain xE,n 2 0, spends a long time in every class Ek and jumps from one class to another with the small probabilities E P ~ ( ~ , E \ E It ~ )is. worth noticing that under fast timescaling the initial semiMarkov process can be approximated by some merged stochastic process on the merged phase space E = (1,..., N } . The particularity of phase merging effect is that the approximating process will be Markovian. Introduce the merging function (see Fig. 4.1. (c)) A
.(z) = k,
z EEk,
15 k 5 N ,
(4.14)
and the merged process T ( t ):= w(z"(t/&)),t 2 0,
(4.15)
h
on the merged phase space E = (1, ...,N } . The phase merging principle establishes the weak convergence, as E of the merged process (4.15) to the limit Markov process.
4
0,
107
4.8. PHASE MERGING SCHEME
(a) Initial System S,
(b) Supporting System S
3 A
(c) Merged System
S
Fig. 4.1 Asymptotic ergodic merging scheme
108
CHAPTER 4. STOCHASTIC S Y S T E M S W I T H SPLIT A N D MERGING
Theorem 4.1 (Ergodic Phase merging principle). Under Assumptions MElMES, the following weak convergence holds
2&(t)==+ q t ) ,
(4.16)
& + 0.
The limit Markov process 2(t),t 1 0, o n the merged phase space ?!, = (1, ...,N ) is determined by the generating matrix (j = (&, 1 5 k , r 5 N ) , where:
a
First let us precise the matrix which is the generating matrix of some conservative Markov process. Indeed, from (4.7) and (4.8), we calculate: &p^kT =
L,
/'k(dx)&Pl(x,
Er)
where d k r is the Kronecker symbol. Hence, p^kT 2 0, if r # k and p^I& 5 0. Using the conservative condition in ME1: P l ( z ,E ) = 0, and taking into account (4.12) we obtain the following:
Fk = 
P k ( d x ) P l ( x , E k ) = pkk
=I P k r , T#k
cT+
and after dividing by p^k , we get p^kr = 1. After multiplying by &, together with (4.17), it gives (4.18) h
That is the condition of conservation of the generating matrix Q. Condition (4.12) ensures that all states in E are stable. h
We will introduce the following additional assumption.
4.2. PHASE MERGING SCHEME
109
ME4: The merged Markov process .^(t),t 2 0, is ergodic, with the stationary distribution ?? = ( X k , k E E ) . h
In the particular case of the Markov initial process zc"(t), t 2 0, with the semiMarkov kernel Q E ( zB , , t ) = P E ( zB, ) [ 1  e  q ( z ) t ] ,
the statement of the phase merging process theorem is valid with m ( z )= l / q ( z ) , z E E . Using the equation for the stationary distributions of the support Markov process n k ( d z ) q ( z ) = qkPk(dz),
we calculate:
that iS qk = l / f % k . Hence, the intensity of the merged Markov process can be represented as in (4.17) with the merged intensity T k = qkgk,
15
5 N.
(4.19)
From the heuristic point of view the merging formulas (4.17) and (4.19) are natural. Indeed, in order to calculate an average exit prcrbability by using the stationary distribution of the semiMarkov process defined by the semiMarkov kernel (4.6)(4.7),we have to calculate:
 &gk.
The relations = &k/%k,l 5 k 5 N , also are natural, since the intensity of the limit Markov process has to be directly proportional to the average intensity qk = l / & k 7 in the class Ek with factor p k which is the
CHAPTER
110
4 . STOCHASTIC S Y S T E M S W I T H SPLIT A N D MERGING
exit probability. Only one question remain, why are the stationary distributions of the support Markov process used? It is a natural consequence of the limit merging effect in the fast timescaling scheme. Equalities (4.17) are obtained by using the phase merging principle based on a solution of singular perturbation problem (see Section 5.2). 4.2.2
Merging with Absorption
We will study here the merging scheme with an absorbing state. The semiMarkov process z"(t),t 2 0, is considered on a split phase space
uEk, N
Eo = E U { O } , E =
Ek n E p = 8, k # k',
(4.20)
k=l
with absorbing state 0. For example, in Fig. 4.2., we have: Eo = El U E2 U E3 U ( 0 ) ; E = El U EzU E3; and = {1,2,3}. Let us introduce the following assumptions.
MA1: The semiMarkov kernel
of which stochastic transition kernel is perturbed, that is
P E ( xB l ) = P ( x ,B ) 4E P l ( Z , B ) , where P(xlB ) satisfies relation (4.8). MA2: The perturbing kernel P ~ ( Z B ,) satisfies the following absorption condition. There exists at least one Ic E El such that the absorption probability from Ic is positive, that is (4.21)
MA3: The stochastic kernel P ( x , B ) defines the support embedded Markov chain x,, n 2 0 , P ( Z ,B ) = P(Z,+l E B
1 2,
= x),
which is uniformly ergodic in every class Ek, 1 5 k 5 N , with the stationary distributions & ( B ) , 1 5 k 5 N , defined by a solution of the
4.2. PHASE MERGING SCHEME
111
equations:
Fig. 4.2
Asymptotic merging scheme with absorption
Theorem 4.2 (Absorbing phase merging scheme) Under split phase merging scheme (4.20) and Assumptions MA1MA3 (Section 4.2.1) for the support Markov process, the following weak convergence takes place
*q t ) ,
V(Zc'(t/&))

E
4
c,
0,
where the limit Markov process i?(t),O 5 t 5 is defined o n the merged phase space Eo = U { 0 } , 5= (1, ..., N } by the generating matrix A
Q = [ T k r ; oI k , r I N ] ,
112
CHAPTER 4. STOCHASTIC S Y S T E M S W I T H SPLIT A N D MERGING
where:
qk = l / m k .
Tk = pkqk,
c7
The random time is the absorption (stoppage) time of the merged Markov process, that is, := inf{t 2 0 : 2(t)= 0).
0, on a standard state space (El&), with semiMarkov kernel
QE(zlB , t) = P(z, B)F,(t).
113
4.2. PHASE MERGING SCHEME
Fig. 4.3 Double merging
We consider the following finite split of the state space E (see Fig. 4.3): Nk
N
E = U E k , Ek= U E L , I I k S N , k=l
EL
r=l
E;: = 0, IC # IC' or T #
TI.
(4.22)
Let us introduce the following assumptions, specific to the double phase merging.
MD1: The stochastic kernel P E ( zdy) , has the following representation p E ( d~y ,) = p ( z ,d y ) 4 ~ P i ( 5d y, )
+ c2pz(2,d y ) ,
(4.23)
where the stochastic kernel P ( z ,dy) defines the associated Markov chain z, n 2 0, and Pl(z,dy) and P2(2,dy) are perturbing signed kernels. The first one concerns transitions between classes EL and the second one between classes Ek. The perturbing kernels PI and P2 satisfy the following conservative merging conditions:
Pl(z,Ek)= 0 , z E Ek, P ~ ( zE,) = 0 , z E E .
15 k 5 N
(4.24) (4.25)
MD2: The associated Markov process z o ( t ) , t 2 0, is uniformly ergodic with generator Q, defined by
Q 4 z )= q(z)
P ( z 7d y ) [CP(Y)

cp(z)l,
(4.26)
CHAPTER 4. STOCHASTIC SYSTEMS WITH SPLIT AND MERGING
114
with stationary distribution 7r;(dz), 1 5 r L Nk, 1 5 k I N . As a consequence, the associated Markov chain xn, n 2 0, is also uniformly ergodic in every class E l , 1 5 r 5 Nk, 1 5 k 5 N . MD3: The merged Markov process 2(t),t 1. 0 is uniformly ergodic, with stationary distribution (?;, 1 5 r 5 Nk, 1 5 k 5 N ) . The perturbing operators Qk, Ic = 1,2 are defined as follows
Let us define the following two merging functions:
G(z) = w,L,
if
z E EL,
and h
G(%) = k, if
X E
Ek.
Theorem 4.3 (Ergodic double merging) Assume that the merging conditions MDlMDS hold, then the following weak convergences take place: G ( Z E ( t / & ) ) ===+ 2(t), E + 0,
* $(t)
A
G(."(t/&2))
& +
(4.27)
0.
(4.28)
The limit process ?(t) has the state space E =
u?=l,!?k,
,!?k
=
{v;
:
A
5 N k } , and $(t) the state space 2 = {1,2, ..., N } . The generators of processes 2(t) and 2(t) are respectively and which are defined 15
T
h
G2,
h
below. The contracted operators 61 and
A
6,
are defined as follows:
IIQlII = GlII and
a&. A
A
h
h
IIQ2II = The projectors II and
fi are defined as follows:
ncp(.)
=
c N
Nk
k=l
r=l
Glrk(2)
4.2. PHASE MERGING SCHEME
115
where:
and
(4.29)
where:
Thus we have
0 2 =
(g), where
6 A
Moreover Qz =
($ek)r
where
and
Now, we have: N
N
116
CHAPTER 4 . STOCHASTIC SYSTEMS WITH SPLIT AND MERGING
and
m= 1
5 0.
( ‘‘1
Qg),
h
Thus we have Q1 = diag(Qij, ..., where Qfj = qki . Let us introduce the following additional assumption, needed in the sequel.
MD4: The merged Markov process $(t),t 2 0, is ergodic with stationary distribution ( G k , 1 5 Ic 5 N ) . 4.3
Average with Merging
In this section we consider switched stochastic systems with split and merging of the switching semiMarkov process. Let the processes ~ “ (x), t; t 2 0, x E E , E > 0, be given by the generators (4.4), = a&(u;x)(P’(u)
+ &v)  +)
+&I Ld[p(u
Let the stochastic evolutionary system,

&up’(u)lrE(u, dv;4.(4.30)
P(t),be represented
by
rt
[“(t)= [ “ ( O )
+ J0
V E ( d s ;z & ( s / E ) ) ,
t 2 0,
E
> 0.
(4.31)
Let us introduce the following conditions.
A l : The drift velocity a(u; x) belongs to the Banach space B1, with a,(u;
x) =
x) + eyu; x),
where F ( u ;x) goes to 0 as E + 0 uniformly on (u; x). And rE(u, dv;x) = r ( u ,dv;x) is independent of E . A2: The operator
4.3. AVERAGE WITH MERGING
117
is negligible on B 1 ,that is, SUP IpEB'
Ilr&(xc>Pll 0, +
E
+
0.
A3: Convergence in probability of the initial values of p ( t )v(x"(t/E)), , t2 0 hold, that is,
and there exists a constant c E
E%+,
such that
supE I"(0)l 5 c < +w. E>O
Remark 4.2.The operator y,(z) is the jump part after extraction of the drift part due to the jumps of the process $(t, x). 4.3.1 Ergodic Average In this section we will give a theorem for the averaging of the evolutionary system p ( t ) t, 2 0 , in ergodic single split of the switching semiMarkov process z"(t),t L 0. The switching semiMarkov process x E ( t ) t, 2 0 , is considered in a split phase space
and supposed t o satisfy the phase merging assumptions ME1ME3 (Section 4.2.1).
Theorem 4.4 (Ergodic Average) Let the switching semiMarkov process x"(t), t 2 0 , satisfies the phase merging conditions ME1ME3. Then, under Assumptions A l  A 3 , the stochastic evolutionary system J E ( t )t, 2 0 , (4.31), converges weakly to the averaged stochastic system
6(t): ( " ( t )=+ G ( t ) ,
&
4
0.
CHAPTER 4 . STOCHASTIC SYSTEMS WITH SPLIT AND MERGING
118
The limit process G(t), t 2 0 , is defined by a solution of the evolutionary equation d U ( t ) = Z ( G ( t ) ;Z ( t ) ) , G(0) = dt where the averaged velocity is determined by 7rk(dz)a(u;z),
a(u;k) = L
m,
15k
(4.32)
IN.
k
The following corollary gives particular results of Theorem 4.4,in the three cases described in Section 4.1.
Corollary 4.2 1) The stochastic integral functional (4.1) converges weakly as follows
l
l
a ( z ( s ) ) d s e 4 0,
u(z‘(s/e))ds =+
where q(dx)a(x).
a(k) = L
k
2) The dynamical system defined by (4.2), with
cE(u; ). = c
( ).~ +; eyu; z),
where F ( u ;x) is the negligible term IleE(U;2)ll
0,
E
0,
converges weakly to a dynamical system with switching process .^(t),t 2 0, d
 U ( t ) = C(G(t);.^(t)), dt where
C(u;k) =
lk
7rk(dx)C(u;x).
3) The compound Poisson process with Murkov switching defined by (4.3) converges weakly as follows
(“(t)+ fZ(Z(s))ds, 0
e t 0.
4.3. AVERAGE WITH MERGING
119
Average with Absorption
4.3.2
The switching semiMarkov process z"(t), t 2 0 , is considered in a split phase space (see relation (4.20), Section 4.2.2)
u N
Eo = E U { O } ,
€3 =
Ek,
Ek
n E p = 8, k # k',
(4.33)
k=l with absorbing state 0, and supposed t o satisfy the phase merging scheme, that is, Assumptions MA1MA2 (Section 4.2.2).
Theorem 4.5 (Average with Absorption) Let the switching semiMarkov process x"(t), t 2 0 , satisfy the phase merging Assumptions M A l  M A 2 , (Section 4.2.2). Then, under Assumptions A1A3, the stochastic evolutionary system ,t 2 0 , (4.3l), converges weakly to the averaged stochastic system U ( tA
q(t),
c):
_ 0 , x (4.4). Let Assumptions A1A3 hold. Then the weak convergence
E
E , are given by the generator of
C"(t)=+ 6 ( t ) ,
E +0 h
takes place. The limit double averaged system C(t),$(t),t>_ 0 , is defined also equivalently by a solution of the equation d= dt
V(t)
A
=
[email protected](t),$(t)),
(4.35)
4.3. AVERAGE WITH MERGING
121
where
Remark 4.3. The stochastic ergodic system (4.35) can be considered in ergodic average scheme (see Section 4.3.1). The same ergodic average result can be obtained for the initial stochastic system (4.34) with timescaling c3 instead of c2.
Remark 4.4. A result analogous to Corollary 4.1 can be obtained for the double merged process .^(t),t 2 0 , in the cases of stochastic integral functional (4.1), of dynamical system (4.2), and of the compound Poisson process with Markov switching (4.3) loo. 4.3.4
Double Average with Absorption
The following result is an averaging result for the evolutionary system CE(t) in the double merging scheme (4.22). Define
c
the absorption time of the process $(t),by
c = min{t 2 o : $(t>= 0). Corollary 4.4 (Double average) Let the switching Markov process x " ( t ) , t 2 0 , satisfy the conditions of double merging scheme (4.22). Let the stochastic system be represented as follows
( " ( t )= C"(0)+
t
J 77"(d.;xE(s/E2)), 0
where the processes q E ( t ; x ) ,t 2 0, x E E , are given b y the generator of (4.4). Let Conditions A l  A 3 (Section 4.3) hold. Then the weak convergence
C'(t) + @(tA
c)
E + 0,
takes place. The limit double averaged system c(t),t 2 0 , is defined by a
CHAPTER 4. STOCHASTIC SYSTEMS WITH SPLIT AND MERGING
122
solution of the equation
where
r=l
c,
h
The stopping time, for N = 1, parameter
has an exponential distribution with the
h
= 9P,
where:
and p ( x ) is defined as follows PE(x, (0)) = &2P2(2,E ) = E 2 P ( Z ) . 4.4
Diffusion Approximation with Split and Merging
In this section we consider the additive functional c"(t),t 2 0, under the following timescaling of the switching process (4.36) The processes q"(t;x),t 2 0 , x E E , e (compare with (4.30))
> 0, are given
Let us consider the following conditions.
by the generator
4.4. DIFFUSION APPROXIMATION W I T H SPLIT A N D MERGING
123
D1: The drift velocity function has the following representation
a'(u; x) = €la(u; x)
+ al(u;x),
where a ( u ; x )and al(u;z) belong to the Banach space BC1: Balance condition
r ( d x ) a ( u ; x ) 0.
B2. (4.38)
D2: The operators
are negligible on B2,that is,
4.4.1 Ergodic Split and Merging The switching Markov processes x c ' ( t ) t, 2. 0, are considered on the split phase space (4.5) and the support Markov process x ( t ) ,t 2 0, defined by the generator (4.9)is uniformly ergodic in every class E k , 1 5 k _< N , with the stationary distributions r k ( d x ) ,1 5 k 5 N , satisfying relation (4.10). The stochastic additive functional 0, dt
(4.40)
converges weakly to a diffusion as in the above theorem. 3) The compound Poisson process with Markov switching defined by generators (4.3) converges weakly
6 " ( t / E 2 ) +r^ct),
E
4
0,
where the limit process r(t),t 2 0 , is a diffusion process with generator (4.41), with drijl
CHAPTER 4 . STOCHASTIC SYSTEMS WITH SPLIT A N D MERGING
126
and covariance coefficient
with:
4.4.2
Split and Merging with Absorption
Here, the switching Markov process z'(t),t 2 0, E > 0, is supposed to be as in the previous Section 4.4.1 but with N = 1,for simplification and without the conservative condition ME1, Section 4.2.1. The stochastic additive functionals c(t),t 2 0, are given in (4.36)(4.37) and satisfy Conditions DlD2, and the balance condition BC1.
Theorem 4.8 (Split and merging with absorption) Let the switching Markov processes x"(t),t 2 O , E > 0, satisfy Condition M S l . Then the weak convergence
E " W =+ tw,
0
I t 5 t, E
takes place. The limit diffusion process c(t),0 5 t 5
c,
+
0,
is defined by the generator
+
Ecp(u) =%(u)cp'(u) Iij(u)p"(u)  ;ip(u). 2
(4.41)
The drift coefficient is defined by A
+&(u),
b ( u ) =;I(.) where i?l(u)= L ? i ( d z ) a l ( u ; z ) and
b l ( u ) =
The covariance function is defined by
where
s,
?i(dZ)a(u;x)&aL(u;z).
4.4. DIFFUSION APPROXIMATION WITH SPLIT A N D MERGING
127
where v* is the transpose of the vector v. The absorption time is exponentially distributed with intensity
r^

A = qp? The following corollary concerns particular cases of Theorem 4.8 in the three cases given in Section 4.1.
Corollary 4.6 1) The stochastic integral functional (4.1) converges weakly
The limit process r(t),t where b(u) = 21
=
2 0, is a diffusion process with generator (4.41),
T ( d z ) a l ( z ) and , B ( u )3
n(dz)a(z)fia(x).
2) The dynamical system (4.40) converges weakly to a diffusion as in the above theorem. 3 ) The compound Poisson process with Marlcov switching defined by generators (4.3) converges weakly 0.
(4.45)
The following condition will be used next.
BC3: Balance condition Nk
(4.46)
where
a k ( u ):=
T
L,
7r;(dz)a(u;z).
Theorem 4.10 (Double merging) Under Conditions D1-D3 and the balance condition BC3, the following weak convergence holds:

\zeta^\varepsilon(t) \Rightarrow \hat{\hat\zeta}(t), \quad \varepsilon \to 0.

The limit diffusion process \hat{\hat\zeta}(t), t \ge 0, switched by the twice merged Markov process \hat{\hat x}(t), t \ge 0, is defined by the generator of the coupled Markov process \hat{\hat\zeta}(t), \hat{\hat x}(t), t \ge 0, where the generating matrix \hat{\hat Q} of the twice merged Markov process \hat{\hat x}(t), t \ge 0, is defined by the relations in Section 4.2.3. The drift function is defined by

\hat b(u;k) = \hat a_1(u;k) + \hat b_1(u;k),
where

\hat a_1(u;k) = \sum_{r=1}^{N_k}\hat\pi_r^k\,a_1(u;k,r), \qquad \hat b_1(u) = \hat a(u)\,\hat R_0\,\hat a'_u(u).

The covariance function is defined analogously. Here, the operator \hat R_0 is the potential operator of the merged Markov process \hat x(t), t \ge 0, defined by the generating matrix \hat Q.
Corollary 4.8 1) The stochastic integral functional (4.1) converges weakly:

\int_0^t a^\varepsilon(x^\varepsilon(s/\varepsilon^3))\,ds \Rightarrow \hat{\hat\zeta}(t), \quad \varepsilon \to 0.

The limit process \hat{\hat\zeta}(t), t \ge 0, is a diffusion process with generator (4.47).
2) The compound Poisson processes with Markov switching defined by generators (4.3) converge weakly:

\zeta^\varepsilon(t/\varepsilon^3) \Rightarrow \hat{\hat\zeta}(t), \quad \varepsilon \to 0,

where the limit process \hat{\hat\zeta}(t), t \ge 0, is a diffusion process with generator (4.47), where

\hat B(k) = \sum_{r=1}^{N_k}\hat\pi_r^k \int_{E_k^r}\pi_k^r(dx)\,C(x).

4.4.5 Double Split and Double Merging
Here, the switching Markov processes x^\varepsilon(t), t \ge 0, \varepsilon > 0, are considered as in the previous Section 4.4.4 and satisfy Conditions MD1-MD3, Section 4.2.3. The stochastic additive functionals \zeta^\varepsilon(t), t \ge 0, are considered with the more accelerated switching x^\varepsilon(t/\varepsilon^4), \varepsilon \to 0.
Theorem 4.11 (Double split and double merging) Under Conditions D1-D3 and the balance condition BC2 (Section 4.4.4), the following weak convergence takes place:

\zeta^\varepsilon(t) \Rightarrow \hat{\hat\zeta}(t), \quad \varepsilon \to 0.

The limit diffusion process \hat{\hat\zeta}(t), t \ge 0, is defined by the generator

\hat{\hat{\mathbb{L}}}\varphi(u) = \hat{\hat b}(u)\,\varphi'(u) + \frac{1}{2}\hat{\hat B}(u)\,\varphi''(u).   (4.48)

The drift coefficient is defined by

\hat{\hat b}(u) = \hat{\hat a}_1(u) + \hat{\hat a}_0(u),
where:

\hat{\hat a}(u;k) = \sum_{r=1}^{N_k}\hat\pi_r^k\,\hat a(u;k,r), \qquad \hat a(u;k,r) = \int_{E_k^r}\pi_k^r(dx)\,a(u;x).

The covariance matrix is defined by

\hat{\hat B}(u) = \sum_{k=1}^{N}\hat{\hat\pi}_k\,\hat B(u;k),

where:

\hat B(u;k) = \hat a(u;k)\,\hat R_0\,\hat a(u;k), \qquad \hat C_0(u;k,r) = \int_{E_k^r}\pi_k^r(dx)\,C_0(u;x).
Here, the potential operator \hat{\hat R}_0 corresponds to the twice contracted operator \hat{\hat Q}_2, that is,

\hat{\hat Q}_2\,\hat{\hat R}_0 = \hat{\hat R}_0\,\hat{\hat Q}_2 = \hat{\hat\Pi} - I.

Corollary 4.9 1) The stochastic integral functional (4.1) converges weakly.
The limit process \hat{\hat\zeta}(t), t \ge 0, is a diffusion process with generator (4.48), where:

\hat{\hat B}(u) = \sum_{k=1}^{N}\hat{\hat\pi}_k\,\hat B_k, \qquad \hat B_k = \hat a(k)\,\hat R_0\,\hat a(k), \qquad \hat a(k) = \sum_{r=1}^{N_k}\hat\pi_r^k\int_{E_k^r}\pi_k^r(dx)\,a(x). □
2) The compound Poisson processes with Markov switching defined by generators (4.3) converge weakly:

\zeta^\varepsilon(t/\varepsilon^4) \Rightarrow \hat{\hat\zeta}(t), \quad \varepsilon \to 0,

where the limit process \hat{\hat\zeta}(t), t \ge 0, is a diffusion process with generator (4.48).
4.5 Integral Functionals in Split Phase Space

In this section, we consider the integral functional

\alpha^\varepsilon(t) = \alpha_0 + \int_0^t a(x^\varepsilon(s))\,ds,   (4.49)

with Markov switching x^\varepsilon(t), t \ge 0, in single and double ergodic split.

4.5.1 Ergodic Split
Let us define the potential matrix \hat R_0 = [\hat r_{kl};\ 1 \le k, l \le N] by the following relations:

\hat Q\hat R_0 = \hat R_0\hat Q = \hat\Pi - I = [\hat\pi_k - \delta_{lk};\ 1 \le l, k \le N],

where the generator \hat Q is defined in Theorem 4.1.
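For a finite state space these relations determine the potential matrix explicitly: with the stationary projector \hat\Pi (all rows equal to the stationary distribution), one has \hat R_0 = (\hat\Pi - \hat Q)^{-1} - \hat\Pi. A minimal numpy sketch, with an illustrative 3-state generator that is not taken from the text:

```python
import numpy as np

# Illustrative 3-state ergodic generator (rows sum to zero).
Q = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -3.0, 2.0],
              [ 2.0, 2.0, -4.0]])
N = Q.shape[0]

# Stationary distribution: pi Q = 0, pi 1 = 1 (consistent overdetermined system).
A = np.vstack([Q.T, np.ones(N)])
b = np.concatenate([np.zeros(N), [1.0]])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Projector onto the null space of Q: every row equals pi.
Pi = np.tile(pi, (N, 1))

# Potential matrix R0 = (Pi - Q)^{-1} - Pi, satisfying Q R0 = R0 Q = Pi - I.
R0 = np.linalg.inv(Pi - Q) - Pi
```

The defining relations Q R0 = R0 Q = \Pi - I, together with \pi R0 = 0, can then be verified directly with `numpy.allclose`.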
The centering shift coefficient \hat a is defined by the relation

\hat a = \sum_{k=1}^{N}\hat\pi_k\,a_k,

or, in an equivalent form, through the stationary distribution of the embedded Markov chain x_n, n \ge 0. The centered vector \tilde a is defined as \tilde a = (\tilde a_k := a_k - \hat a,\ 1 \le k \le N).

Proposition 4.1 Let the merging condition of Theorem 4.3 be fulfilled, and let the limit merged Markov process \hat x(t), t \ge 0, be ergodic with the stationary distribution \pi = (\pi_k,\ 1 \le k \le N). Then the normalized centered integral functional

\tilde\alpha^\varepsilon(t) = \varepsilon^2\alpha^\varepsilon(t/\varepsilon^3) - \varepsilon^{-1}t\,\hat a

converges weakly, as \varepsilon \to 0, to a diffusion process \xi(t), t \ge 0, with zero mean and variance \sigma^2 given by (4.51).
The variance (4.51) can be represented by Liptser's formula [132] in the following way:

\sigma^2 = 2\sum_{i=1}^{N}\pi_i\,\tilde a_i\,b_i,   (4.52)

where (b_1, \dots, b_N) is the solution of the equations

\sum_{j=1}^{N}\tilde Q_{ij}\,b_j = a(i) - a(0), \qquad i = 1, \dots, N,   (4.53)

and \tilde Q = [\tilde Q_{ij};\ 1 \le i, j \le N] is the nonsingular matrix defined by

\tilde Q_{ij} := Q_{ij} - Q_{0j}.   (4.54)
As was shown in [101], the variance (4.51) can be represented in the following form (q_i := q_{ii}):

\sigma^2 = 2\sum_{i=1}^{N}\pi_i\,q_i\,b_i^2 - \sum_{i \ne j \ge 1} b_i\,b_j\,[\pi_i q_{ij} + \pi_j q_{ji}].   (4.55)
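The equivalence of the two variance representations can be checked numerically. The sketch below uses an illustrative 4-state generator and observable (not from the text); the quadratic form (4.55) is insensitive to the sign convention adopted in (4.53), so the computation is robust to that choice.

```python
import numpy as np

# Illustrative 4-state ergodic generator and observable a.
Q = np.array([[-3.0, 1.0, 1.0, 1.0],
              [ 2.0, -4.0, 1.0, 1.0],
              [ 1.0, 1.0, -3.0, 1.0],
              [ 1.0, 2.0, 1.0, -4.0]])
a = np.array([1.0, -2.0, 3.0, 0.5])
N = 4

# Stationary distribution and centered observable (balance condition pi a_t = 0).
A = np.vstack([Q.T, np.ones(N)])
pi = np.linalg.lstsq(A, np.concatenate([np.zeros(N), [1.0]]), rcond=None)[0]
a_t = a - pi @ a

# Poisson equation: Q b = a_t (b is unique up to an additive constant).
b = np.linalg.lstsq(Q, a_t, rcond=None)[0]

# Variance via the potential matrix: sigma2 = 2 pi . (a_t * R0 a_t).
Pi = np.tile(pi, (N, 1))
R0 = np.linalg.inv(Pi - Q) - Pi
sigma2 = 2 * pi @ (a_t * (R0 @ a_t))

# Form (4.55): 2 sum_i pi_i q_i b_i^2 - sum_{i != j} b_i b_j (pi_i q_ij + pi_j q_ji).
q = -np.diag(Q)
s1 = 2 * np.sum(pi * q * b**2)
s2 = sum(b[i] * b[j] * (pi[i] * Q[i, j] + pi[j] * Q[j, i])
         for i in range(N) for j in range(N) if i != j)
sigma2_liptser = s1 - s2
```

Both expressions reduce algebraically to the same bilinear form in b and the centered observable, and the asymptotic variance is nonnegative.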
From (4.53)-(4.54), by using the balance condition

\sum_{i=0}^{N}\pi_i\,\tilde a_i = 0   (4.56)

in the right-hand side of (4.53), we obtain the auxiliary relations (4.57) and (4.58) for the solution (b_1, \dots, b_N).
Let us consider the negative term in (4.55):

\sum_{i \ne j} b_i\,b_j\,[\pi_i q_{ij} + \pi_j q_{ji}] = 2\sum_{i=1}^{N}\pi_i\,b_i\sum_{j \ne i} q_{ij}\,b_j = 2\sum_{i=1}^{N}\pi_i\,b_i\,[\tilde a_i + q_i b_i] \quad \text{(from (4.58))}.

Thus the variance (4.52) is transformed into the form (4.55).

4.5.2 Double Split and Merging

Let \hat R_0 = (\hat r_{kl},\ 1 \le k, l \le N) be the potential matrix of the operator \hat Q (see Theorem 4.3), defined by the relations

\hat Q\hat R_0 = \hat R_0\hat Q = \hat\Pi - I.
In the same way, \hat{\hat R}_0 is the potential matrix of the operator \hat{\hat Q}. Let w(t), t \ge 0, be the standard Wiener process. Then the following result takes place.

Proposition 4.2 If the merging condition of Theorem 4.3 holds true and the limit merged Markov process \hat x(t), t \ge 0, has a stationary distribution \hat\pi = (\hat\pi_r^k,\ 1 \le r \le N_k,\ 1 \le k \le N), then, under the balance condition \hat{\hat a} = 0, the following weak convergence takes place:

\varepsilon^{-1}\int_0^t a(x^\varepsilon(s/\varepsilon^4))\,ds \Rightarrow \sigma w(t), \quad \varepsilon \to 0,

with variance \sigma^2 given by (4.59).
4.5.3 Triple Split and Merging
Proposition 4.3 If the merging condition of Theorem 4.3 holds true and the limit merged Markov process \hat{\hat x}(t), t \ge 0, has a stationary distribution, [...]

\Gamma^\varepsilon(x)\varphi(u) = [\varepsilon^{-1}A(x) + A_1(x)]\,\varphi(u),   (5.107)

where:

A(x)\varphi(u) = a(x)\,\varphi'(u), \quad \text{and} \quad A_1(x)\varphi(u) = a_1(x)\,\varphi'(u).   (5.108)

The balance condition (5.104) can be expressed in terms of the projector \Pi of the generator Q of the switching uniformly ergodic Markov process x(t), t \ge 0, as in Theorem 3.3.
CHAPTER 5. PHASE MERGING PRINCIPLES
The limit generator \mathbb{L} in Corollary 3.5 is obtained by a solution of the singular perturbation problem for the generator (5.106):

\mathbb{L}^\varepsilon\varphi^\varepsilon(u,x) = [\varepsilon^{-2}Q + \varepsilon^{-1}A(x) + A_1(x)]\,[\varphi(u) + \varepsilon\varphi_1(u,x) + \varepsilon^2\varphi_2(u,x)] = \mathbb{L}\varphi(u) + \varepsilon\theta^\varepsilon(u,x),   (5.109)

and has the following form (see Proposition 5.2, Section 5.2):

\mathbb{L} = \Pi A_1(x)\Pi + \Pi A(x)R_0A(x)\Pi,   (5.110)

with negligible term \theta^\varepsilon(u,x), and where R_0 is the potential of Q (Section 1.6). Now, we calculate, using representation (5.107):
\Pi A_1(x)\Pi\varphi(u) = \Pi a_1(x)\varphi'(u) = \bar a_1\,\varphi'(u),

where \bar a_1 := \int_E \pi(dx)\,a_1(x). By a similar calculus, we have:

\Pi A(x)R_0A(x)\Pi\varphi(u) = \Pi a(x)R_0a(x)\varphi''(u) = \frac{1}{2}a_0^2\,\varphi''(u),

where

a_0^2 := 2\int_E \pi(dx)\,a_0(x), \qquad a_0(x) := a(x)R_0a(x).

Note that actually a_0^2 \ge 0 (see Appendix C). Therefore, the limit generator \mathbb{L} in Corollary 3.5 is represented by

\mathbb{L}\varphi(u) = \bar a_1\,\varphi'(u) + \frac{1}{2}a_0^2\,\varphi''(u),   (5.111)
5.4. DIFFUSION APPROXIMATION PRINCIPLE
that is, the limit diffusion process is

\alpha_0(t) = \alpha_0 + \bar a_1 t + a_0 w(t), \quad t \ge 0,   (5.112)

exactly as in Corollary 3.5.
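For a finite phase space E, the parameters of the limit process (5.112) are directly computable from the generator and the functions a, a_1. A minimal numpy sketch (generator and functions are illustrative, not from the text); the nonnegativity of a_0^2 noted above can be observed numerically:

```python
import numpy as np

# Illustrative generator Q of the switching Markov process and the
# functions a(x), a1(x) of (5.107)-(5.108) on a 3-point phase space.
Q = np.array([[-1.0, 1.0, 0.0],
              [ 1.0, -2.0, 1.0],
              [ 0.5, 0.5, -1.0]])
N = 3
a1 = np.array([0.3, -0.1, 0.4])

# Stationary distribution pi: pi Q = 0, pi 1 = 1.
pi = np.linalg.lstsq(np.vstack([Q.T, np.ones(N)]),
                     np.concatenate([np.zeros(N), [1.0]]), rcond=None)[0]

# a(x) centered so that the balance condition Pi a = 0 holds.
a = np.array([1.0, -1.0, 2.0])
a = a - pi @ a

# Potential R0 and the coefficients of the limit generator (5.111).
Pi = np.tile(pi, (N, 1))
R0 = np.linalg.inv(Pi - Q) - Pi
a1_bar = pi @ a1                      # drift: bar a_1
a0_sq = 2 * pi @ (a * (R0 @ a))       # diffusion coefficient: a_0^2 >= 0
```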
The proof of Theorem 3.3 is based on the representation of the stochastic integral functional (5.102) by the associated semi-Markov random evolution, given by a solution of the evolutionary equation (3.6). The corresponding family of generators \Gamma^\varepsilon(x), x \in E, in (3.7) is given in (5.107), that is, \Gamma_1(x) = A_1(x). Here \Gamma_s^\varepsilon(x), s \ge 0, x \in E, is the family of semigroups determined by the evolutions

\alpha_x^\varepsilon(s) = u + s\,a_\varepsilon(x), \quad s \ge 0,\ x \in E,   (5.113)

that is,

\Gamma_s^\varepsilon(x)\varphi(u) = \varphi(\alpha_x^\varepsilon(s)).   (5.114)

The compensating operator of the extended Markov renewal process

\alpha_n^\varepsilon,\ x_n^\varepsilon,\ \tau_n^\varepsilon:\quad \alpha_n^\varepsilon = \alpha^\varepsilon(\tau_n^\varepsilon), \quad \tau_n^\varepsilon = \varepsilon^2\tau_n, \quad n \ge 0,   (5.115)

is given by relations (3.8)-(3.9), that is,

\mathbb{L}^\varepsilon\varphi(u,x) := \varepsilon^{-2}q(x)\left[\int_0^\infty F_x(ds)\,\Gamma^\varepsilon_{\varepsilon^2 s}(x)\,P\varphi(u,x) - \varphi(u,x)\right].   (5.116)
Now we can use the asymptotic representation (3.10) given in Proposition 3.2, with the obvious changes \Gamma(x) = A(x) and \Gamma_1(x) = A_1(x). □

Proposition 5.8 The generator \mathbb{L} of the limit diffusion process in Theorem 3.3 is calculated by a solution of the singular perturbation problem for the truncated generator

\mathbb{L}_0^\varepsilon = \varepsilon^{-2}Q + \varepsilon^{-1}A(x)P + Q_2(x)P,   (5.117)

as follows:

\mathbb{L} = \Pi Q_2(x)\Pi + \Pi A(x)PR_0A(x)P\Pi,

where

Q_2(x) = A_1(x) + \rho_2(x)A^2(x), \qquad \rho_2(x) = m_2(x)/2m(x).
The operator R_0 is the potential of Q (see Section 1.6):

QR_0 = R_0Q = \Pi - I,   (5.118)

or, equivalently,

q[P - I]R_0 = \Pi - I.   (5.119)

Hence

PR_0 = R_0 + m(\Pi - I),

where q(x) = 1/m(x).

PROOF. We calculate:
\Pi Q_2(x)\Pi\varphi(u) = \bar a_1\,\varphi'(u) + \frac{1}{2}B_1\,\varphi''(u),

where

\bar a_1 := \int_E \pi(dx)\,a_1(x), \qquad B_1 := 2\int_E \pi(dx)\,\rho_2(x)\,a^2(x).   (5.120)

Next:

\Pi A(x)PR_0A(x)P\Pi\varphi(u) = \Pi a(x)PR_0a(x)\varphi''(u) = \Pi a(x)R_0a(x)\varphi''(u) - \Pi m(x)a^2(x)\varphi''(u)

(by using (5.119)), where, by definition,

B_0 := 2\int_E \pi(dx)\,a_0(x), \qquad a_0(x) := a(x)R_0a(x).
Hence, we get the following representation of the limit generator:

\mathbb{L}\varphi(u) = \bar a_1\,\varphi'(u) + \frac{1}{2}B\,\varphi''(u),

where:

\bar a_1 := \int_E \pi(dx)\,a_1(x), \qquad B := B_0 + B_{00},

with:

B_0 := 2\int_E \pi(dx)\,a_0(x), \qquad B_{00} := \int_E \pi(dx)\,\rho(x)\,a^2(x), \qquad \rho(x) := [m_2(x) - 2m^2(x)]/m(x).

Note that \rho(x) can be seen as a distance of the sojourn-time distributions F_x(t) from the exponential distribution (see Remark 3.3, page 83).

5.4.2 Continuous Random Evolutions
The diffusion approximation principle in the ergodic merging scheme can be verified for a continuous random evolution in the series scheme with accelerated semi-Markov switching, given by a solution of the evolutionary equation (Section 3.2.1)

\frac{d}{dt}\Phi^\varepsilon(t) = \Gamma^\varepsilon(x(t/\varepsilon^2))\,\Phi^\varepsilon(t), \quad t \ge 0, \qquad \Phi^\varepsilon(0) = I,   (5.121)

on the Banach space B, with the given family of generators \Gamma^\varepsilon(x), x \in E, of the semigroups \Gamma_t^\varepsilon(x), t \ge 0, x \in E. The generators \Gamma^\varepsilon(x), x \in E, have the following form:

\Gamma^\varepsilon(x) = \varepsilon^{-1}\Gamma(x) + \Gamma_1(x), \quad x \in E.   (5.122)
In what follows, the generators \Gamma(x) and \Gamma_1(x), x \in E, are supposed to have a common domain of definition B_0, dense in B. The coupled random evolution (Section 2.7.3, Definition 2.11)

\Phi^\varepsilon(t)\,\varphi(u, x(t/\varepsilon^2)), \qquad x(0) = x,
can be characterized by the compensating operator (Section 3.2.1)

\mathbb{L}^\varepsilon\varphi(u,x) = \varepsilon^{-2}q(x)\left[\int_0^\infty F_x(ds)\,\Gamma^\varepsilon_{\varepsilon^2 s}(x)\int_E P(x,dy)\,\varphi(u,y) - \varphi(u,x)\right].   (5.123)

The factor \varepsilon^{-2} corresponds to the accelerated time-scaling of the switching semi-Markov process x(t), t \ge 0, in (5.121), given by the semi-Markov kernel

Q(x, dy, ds) = P(x,dy)\,F_x(ds).

The fast time-scaling \varepsilon^2 of the semigroup in (5.123) provides a diffusion approximation of the increments under the balance condition for the first term in (3.10):

\Pi\,\Gamma(x)\,\Pi = 0,   (5.124)
where \Pi is the projector of the associated ergodic Markov process x_0(t), t \ge 0, defined by the generator

Q\varphi(x) = q(x)\int_E P(x,dy)\,[\varphi(y) - \varphi(x)],   (5.125)

with stationary distribution \pi(dx). The key problem in the diffusion approximation is to construct an asymptotic representation for the compensating operator (5.123), by using Proposition 5.2 and Assumptions (5.122) and (5.124). The diffusion approximation principle for the semi-Markov continuous random evolution in the series scheme (5.121), with the switching ergodic semi-Markov process x(t), t \ge 0, is realized by a solution of the singular perturbation problem for the truncated operator (see Proposition 5.2).
The limit generator is given, in our case, by

\mathbb{L} = \Pi Q_2(x)\Pi + \Pi\,\Gamma(x)PR_0\Gamma(x)P\,\Pi,   (5.127)
where:

Q_1(x)\varphi(u,x) := \Gamma(x)\varphi(u,x), \qquad Q_2(x)\varphi(u,x) := [\Gamma_1(x) + \rho_2(x)\Gamma^2(x)]\,\varphi(u,x).

Let us now compute the limit generator in explicit form. The first term in (5.127) gives:

\Pi Q_2(x)\Pi = \Pi[\Gamma_1(x) + \rho_2(x)\Gamma^2(x)]\Pi = (\hat\Gamma_1 + \hat\Gamma_{01})\Pi,   (5.128)

where, by definition:

\hat\Gamma_1\Pi = \Pi\Gamma_1(x)\Pi, \qquad \hat\Gamma_{01}\Pi = \Pi\rho_2(x)\Gamma^2(x)\Pi.

Recall that the potential operator R_0 satisfies the equation (see Section 1.6):

QR_0 = R_0Q = \Pi - I, \qquad Q = q[P - I].

Hence,

PR_0 = R_0 + m[\Pi - I].

Next, we calculate the second term in (5.127):

\Pi\Gamma(x)PR_0\Gamma(x)\Pi = \Pi\Gamma(x)R_0\Gamma(x)\Pi - \Pi m(x)\Gamma^2(x)\Pi = (\hat\Gamma_0 - \hat\Gamma_{02})\Pi,

where, by definition:   (5.129)

\hat\Gamma_0\Pi = \Pi\Gamma(x)R_0\Gamma(x)\Pi, \qquad \hat\Gamma_{02}\Pi = \Pi m(x)\Gamma^2(x)\Pi.

Gathering all the above calculations, we get
\mathbb{L} = \hat\Gamma_0 + \hat\Gamma_1 + \hat\Gamma_{00},   (5.130)

where, by definition:

\hat\Gamma_{00} := \hat\Gamma_{01} - \hat\Gamma_{02} = \frac{1}{2}\int_E \pi(dx)\,\rho(x)\,\Gamma^2(x),   (5.131)

\rho(x) := [m_2(x) - 2m^2(x)]/m(x).   (5.132)
It is worth noticing that formulas (5.127)-(5.132) give the preliminary "blank cheque" for constructing the limit generator in the diffusion approximation scheme for stochastic systems with ergodic semi-Markov switching, considered in Section 3.4. The generators of the limit diffusion processes are constructed by formulas (5.129)-(5.132) with the following generators of the corresponding continuous random evolutions (Section 3.4):

1. In Theorem 3.4:
2. In Corollary 3.6:
3. In Corollary 3.7:

Now, the generators of the limit diffusion processes for stochastic systems in Section 3.4 can be calculated in explicit form by formulas (5.129)-(5.132). First, we calculate the generator for the stochastic evolutionary system (Section 3.4.3, Corollary 3.7), where, as in Theorem 3.4:
Next, we calculate the second term and then, by using (5.129)-(5.132), obtain the coefficients defined in (5.137) and (5.138). Gathering (5.136)-(5.138), we obtain the generator of the limit diffusion process in Corollary 3.7. An analogous calculation can be done for the stochastic additive functionals considered in Section 3.4.2 (Theorem 3.4).
5.4.3 Jump Random Evolutions

The increment process in the series scheme with semi-Markov switching, considered in Section 3.4.4, is here considered in the diffusion approximation scheme with the following accelerated scaling:

\zeta^\varepsilon(t) = \zeta_0 + \varepsilon\sum_{k=1}^{\nu(t/\varepsilon^2)} a_\varepsilon(x_k), \quad t \ge 0.   (5.139)

The values of the jumps are defined by the bounded deterministic function a_\varepsilon(x), x \in E, which takes values in the Euclidean space \mathbb{R}^d and has the following representation:

a_\varepsilon(x) = a(x) + \varepsilon a_1(x).   (5.140)
The first term satisfies the balance condition

\int_E \rho(dx)\,a(x) = 0,   (5.141)

where \rho(dx) is the stationary distribution of the embedded Markov chain x_n, n \ge 0. To verify the algorithm of diffusion approximation formulated in Theorem 3.5, the associated jump random evolution is considered (Section 3.2.2), defined by the family of bounded operators

\mathbb{D}^\varepsilon(x)\varphi(u) := \varphi(u + \varepsilon a_\varepsilon(x)), \quad x \in E,   (5.142)

given on the test functions \varphi \in C^3(\mathbb{R}^d). By using representation (5.140), the following asymptotic expansion is valid:

\mathbb{D}^\varepsilon(x) = I + \varepsilon\mathbb{D}(x) + \varepsilon^2\mathbb{D}_1(x) + \varepsilon^2\theta^\varepsilon_{\mathbb{D}}(x),   (5.143)

where, by definition:

\mathbb{D}(x)\varphi(u) = a(x)\,\varphi'(u),   (5.144)

and

\mathbb{D}_1(x)\varphi(u) = a_1(x)\,\varphi'(u) + \frac{1}{2}a^2(x)\,\varphi''(u).   (5.145)
Proposition 5.9 The diffusion approximation principle for the semi-Markov jump random evolution in the series scheme with the switching ergodic semi-Markov process, satisfying Conditions D1-D3 of Theorem 3.3 (Section 3.4.1), and the family of jump operators \mathbb{D}^\varepsilon(x), x \in E, is realized by a solution of the singular perturbation problem for the truncated operator

\mathbb{L}_0^\varepsilon = \varepsilon^{-2}Q + \varepsilon^{-1}Q_0\mathbb{D}(x) + Q_0\mathbb{D}_1(x).   (5.147)

PROOF. The proof is obtained by using the asymptotic representation (3.30) of the compensating operator (3.29) for the jump random evolution (3.26)-(3.28).
Considering the operator (5.147) on the perturbed test functions \varphi^\varepsilon(u,x) = \varphi(u) + \varepsilon\varphi_1(u,x) + \varepsilon^2\varphi_2(u,x), with \varphi \in C^3(\mathbb{R}^d), we get, by Proposition 5.2, the limit operator, with the negligible term

\|\theta^\varepsilon(x)\varphi\| \to 0, \quad \varepsilon \to 0, \quad \varphi \in C^3(\mathbb{R}^d).

The limit operator \mathbb{L} is calculated by the formula (see Proposition 5.2)

\mathbb{L}\Pi = \Pi Q_0\mathbb{D}_1(x)\Pi + \Pi Q_0\mathbb{D}(x)R_0Q_0\mathbb{D}(x)\Pi,

where R_0 is the potential of the operator Q (see Section 5.2). Now we calculate, using representation (5.144)-(5.145) and Q_0\varphi(x) = q(x)P\varphi(x):

\mathbb{L}_1\Pi\varphi = \Pi Q_0\mathbb{D}_1(x)\Pi\varphi(u) = \Pi Q_0\Big[a_1(x)\,\varphi'(u) + \frac{1}{2}a^2(x)\,\varphi''(u)\Big].

Hence the first term of the limit generator is obtained (compare with Theorem 3.5). Next, the operator \Pi Q_0\mathbb{D}(x)R_0Q_0\mathbb{D}(x)\Pi is calculated as follows:
Hence we obtain the second term of the limit generator, where, by definition,

b(x) := Pa(x) = \int_E P(x, dy)\,a(y),

that is, exactly as in Theorem 3.5. Note that, according to Appendix C, the corresponding diffusion coefficient satisfies a_0^2 \ge 0.

5.4.4 Random Evolutions with Markov Switching
The diffusion approximation principle for random evolutions with Markov switching can be obtained from the results presented in Sections 5.3.2 and 5.3.3 for the semi-Markov random evolutions, by putting the "distance from exponential distribution" parameter \rho(x) equal to 0 (see Remark 3.3, p. 83). According to Propositions 5.8 and 5.9, we can formulate the following result as a corollary.
Proposition 5.10 The diffusion approximation principle for the Markov random evolution, given by a solution of the evolutionary equation (5.121) with the switching Markov process x(t), t \ge 0, defined by the generator (5.125), is realized by a solution of the singular perturbation problem for the generator (see Proposition 3.3). The limit generator is given by Proposition 5.2.
Thus we obtain the preliminary "blank cheque" to construct the limit generator in the diffusion approximation scheme for stochastic systems with ergodic Markov switching considered in Section 3.4. The diffusion approximation principle for the jump random evolution with Markov switching coincides with the analogous one for the semi-Markov jump random evolution. We have to keep in mind that, in the case of a switching Markov process, q(x) is the true intensity of the exponential distribution of the renewal times; hence the corresponding parameter is obtained directly, without the transformation from the equality q(x) = 1/m(x) which was used in the case of switching semi-Markov processes.
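The "distance from exponential distribution" parameter can be checked on standard sojourn-time laws; a small sketch (illustrative moment values) confirms that \rho vanishes exactly in the exponential case, which is why the Markov results follow by setting \rho(x) = 0:

```python
def rho(m, m2):
    """Distance-from-exponential parameter rho = (m2 - 2 m^2) / m,
    where m, m2 are the first two moments of the sojourn time."""
    return (m2 - 2.0 * m**2) / m

# Exponential(lam): m = 1/lam, m2 = 2/lam^2, so rho = 0 (Markov case).
lam = 0.5
rho_exp = rho(1.0 / lam, 2.0 / lam**2)

# Deterministic sojourn of length c = 3: m = 3, m2 = 9, so rho = -c.
rho_det = rho(3.0, 9.0)

# Uniform on [0, c] with c = 3: m = c/2, m2 = c^2/3, so rho = -c/3.
rho_unif = rho(1.5, 3.0)
```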
5.5 Diffusion Approximation with Equilibrium

The main problem in constructing the diffusion approximation principle for stochastic systems with equilibrium, considered in Section 3.5, is the representation of the generator of the centered and normalized process in a suitable asymptotic form. Certainly, the situations considered in Sections 3.5.1 and 3.5.2 are completely different. The centered and normalized process (3.69) is Markovian, which essentially simplifies the problem, while the centered and normalized process (3.81) has to be extended to a Markov process by two components: the deterministic shift process \zeta(t), t \ge 0, defined by a solution of the evolutionary equation (3.80), and the switching Markov process x(t), t \ge 0, defined by the generator (3.79).
5.5.1 Locally Independent Increment Processes

The generator of the centered and normalized process (3.69) is constructed by using the generator (3.67) and the following relation (see (3.70)):

\varepsilon\,\eta^\varepsilon(t/\varepsilon) = \rho + \varepsilon\,\zeta^\varepsilon(t).   (5.148)

Lemma 5.1 [...] The generator is given by

\Gamma_\varepsilon(x)\varphi(u) = g_\varepsilon(u;x)\,\varphi'(u).   (5.158)
E , is given (5.158)
For simplification we dropped the remaining term in (3.83). The decisive step in the asymptotic analysis of the considered problem is the construction of the generator of the three component Markov process
r"(t), E^(t), x; Lemma 5.2
:= z(t/e2),
t 2 0.
(5.159)
The generator of the Markov process (5.159) is represented
as follows
+
+
IL;(P(u, v,x) = [ E  ~ Q r E ( v
EU;
x)]cp(u,v,x)
+ m c p ( U , 21,
(5.160)
Here Q is the generator of the switching Markov process x ( t ) , t 2 0, F(v)cp(v) := g(v)cp'(v).
(5.161)
The generator Ira is defined by
r&(v+ au;x)cp(u):= [g&(v+ E U ; x)  g(v)]cp'(U).
(5.162)
PROOF. The representation (5.162) provides the following equality (compare with (5.156))
F ( t )= a t , + E C E ( t ),
(5.163)
that is, under the conditions \zeta(t) = v, \xi^\varepsilon(t) = u, we obtain the representation (5.160). □

The diffusion approximation schemes in Sections 4.4.2-4.4.5 are considered with Markov switching, which simplifies the asymptotic analysis: the generators of the Markov stochastic systems are considered instead of the compensating operator. A generalization to switching semi-Markov processes can also be obtained, following the analysis considered in Section 5.7.1.
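The merging of the state space referred to here can be made concrete for a finite chain: with the fast part Q block diagonal over the ergodic classes and a slow coupling part Q_1, the merged generator averages Q_1 over the within-class stationary distributions. A numpy sketch under assumed illustrative matrices (not taken from the text):

```python
import numpy as np

# Illustrative 4-state space split into classes E1 = {0, 1}, E2 = {2, 3}.
# Fast part Q is block diagonal (conservative within each class);
# slow part Q1 couples the classes.
Q = np.array([[-1.0, 1.0, 0.0, 0.0],
              [ 2.0, -2.0, 0.0, 0.0],
              [ 0.0, 0.0, -3.0, 3.0],
              [ 0.0, 0.0, 1.0, -1.0]])
Q1 = np.array([[-1.0, 0.0, 1.0, 0.0],
               [ 0.0, -2.0, 1.0, 1.0],
               [ 1.0, 0.0, -1.0, 0.0],
               [ 0.5, 0.5, 0.0, -1.0]])
classes = [[0, 1], [2, 3]]

def stationary(Qb):
    """Stationary distribution of an ergodic generator block."""
    n = Qb.shape[0]
    A = np.vstack([Qb.T, np.ones(n)])
    return np.linalg.lstsq(A, np.concatenate([np.zeros(n), [1.0]]), rcond=None)[0]

# Merged generator: Q_hat[k, l] = sum_{i in E_k} pi_k(i) * sum_{j in E_l} Q1[i, j].
Q_hat = np.zeros((2, 2))
for k, Ek in enumerate(classes):
    pik = stationary(Q[np.ix_(Ek, Ek)])
    for l, El in enumerate(classes):
        Q_hat[k, l] = sum(pik[n] * Q1[i, j]
                          for n, i in enumerate(Ek) for j in El)
```

Since the rows of Q_1 sum to zero, the merged matrix Q_hat is again a conservative generator on the merged (two-point) phase space.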
5.7.1 Ergodic Split and Merging

According to Conditions D1-D3 in Section 4.4 and the additional assumptions of Theorem 4.7, Section 4.4.1, the compensating operator of the continuous random evolution in the split and merging scheme is represented in the following form (compare with (5.123), Section 5.4.2):

\mathbb{L}^\varepsilon\varphi(u,x) = \varepsilon^{-2}q(x)\big[F^\varepsilon(x)\,P^\varepsilon\varphi(u,x) - \varphi(u,x)\big].   (5.198)

Here, by definition:

F^\varepsilon(x) := \int_0^\infty F_x(ds)\,\Gamma^\varepsilon_{\varepsilon^2 s}(x),   (5.199)

and:

P^\varepsilon(x, B) = P(x, B) + \varepsilon^2 P_1(x, B).   (5.200)
5.7. DIFFUSION APPROXIMATION WITH SPLIT AND MERGING

Proposition 5.14 The compensating operator (5.198)-(5.200), acting on test functions \varphi \in C^3(\mathbb{R}^d \times E), has the following asymptotic representation (compare with Proposition 3.2, Section 3.2.1), where

Q_2(x) = [\Gamma_1(x) + \rho_2(x)\Gamma^2(x) + Q_1]\,P.

PROOF. In the representation (3.10), put Q^\varepsilon = Q + \varepsilon^2 Q_1, with Q_1 = qP_1. □
Now, the solution of the singular perturbation problem given in Proposition 5.2 for the truncated operator

\mathbb{L}_0^\varepsilon = \varepsilon^{-2}Q + \varepsilon^{-1}\Gamma(x)P + Q_2(x),

gives the limit generator of the coupled Markov process \zeta(t), \hat x(t), t \ge 0, in the form given in Theorem 4.7, Section 4.4.1. The calculation of the limit generator \mathbb{L} almost coincides with the calculation of the limit generator in Section 5.4.2. As a result, we obtain the following construction of the limit generator (compare with (5.130)):

\hat{\mathbb{L}} = \hat\Gamma_1 + \hat\Gamma_0 + \hat\Gamma_{00}(k) + \hat Q_1(k), \qquad k \in \hat E,   (5.201)

with the operators depending on the state of the limit merged Markov process \hat x(t), t \ge 0, that is,

\hat\Gamma_0 = \int_E \pi(dx)\,\Gamma_0(x), \qquad \Gamma_0(x) = \Gamma(x)R_0\Gamma(x),   (5.202)

and \hat Q_1 defined by \hat Q_1\Pi = \Pi Q_1\Pi. Formulas (5.201)-(5.202) allow us to construct the limit generator for the stochastic systems considered in the diffusion approximation scheme with split and merging of the state space of the switching semi-Markov process. The limit generator \mathbb{L} of the limit diffusion process \zeta(t), t \ge 0, switched by the merged Markov process \hat x(t), t \ge 0, in Theorem 4.7, is calculated by formulas (5.201)-(5.202), setting:

\Gamma(x)\varphi(u) = a(u;x)\,\varphi'(u), \qquad \Gamma_1(x)\varphi(u) = a_1(u;x)\,\varphi'(u) + C_0(u;x)\,\varphi''(u).

5.7.2 Split and Double Merging
According to Conditions D1-D2 in Section 4.4 and the additional Condition BC2, Section 4.4.3, the generator of the random evolution \zeta^\varepsilon(t), x^\varepsilon(t/\varepsilon^2), t \ge 0, described in Theorem 4.9 is represented in the following form:

\mathbb{L}^\varepsilon = \varepsilon^{-3}Q + \varepsilon^{-2}Q_1 + \varepsilon^{-1}A(x) + B(x) + \theta^\varepsilon(x),   (5.203)

where, by definition:

A(x)\varphi(u) := a(u;x)\,\varphi'(u),   (5.204)

B(x)\varphi(u) := a_1(u;x)\,\varphi'(u) + C_0(u;x)\,\varphi''(u),   (5.205)

with the negligible term \|\theta^\varepsilon(x)\varphi\| \to 0, as \varepsilon \to 0, for \varphi \in C^3(\mathbb{R}^d).
Proposition 5.15 The generator \mathbb{L} of the limit diffusion process in Theorem 4.9 is calculated by using a solution of the singular perturbation problem for the generator (5.203), given in Proposition 5.4, in the following form:

\hat{\hat{\mathbb{L}}} = \hat{\hat B} + \hat{\hat A}\,\hat R_0\,\hat{\hat A}.   (5.206)

The calculation of \mathbb{L} in formula (5.206), using (5.204) and (5.205), leads to the representation of the limit generator in Theorem 4.9.

5.7.3 Double Split and Merging
Conditions MD1-MD4, Section 4.2.3, and Conditions D1-D3 and BC3, Section 4.4.4, in Theorem 4.10, provide the generator \mathbb{L}^\varepsilon of the Markov process

\xi^\varepsilon(t),\quad x_t^\varepsilon := x(t/\varepsilon^3),\quad \hat x_t^\varepsilon := \hat v(x_t^\varepsilon), \quad t \ge 0,

represented in the form (5.207), with the operators defined as in (5.208), and the negligible term vanishing as \varepsilon \to 0.

Proposition 5.16 The generator \mathbb{L} of the limit diffusion process \hat\zeta(t), t \ge 0, switched by the twice merged Markov process \hat{\hat x}(t), t \ge 0, is defined by a solution of the singular perturbation problem for the truncated operator

\mathbb{L}_0^\varepsilon = \varepsilon^{-3}Q + \varepsilon^{-2}Q_1(x) + \varepsilon^{-1}A(x) + [Q_2 + B(x)],   (5.209)
in the following form:

\hat{\hat{\mathbb{L}}} = \hat{\hat Q}_2 + \hat{\hat B} + \hat{\hat A}\,\hat R_0\,\hat{\hat A},   (5.210)

where the generator \hat{\hat Q}_2 of the twice merged Markov process \hat{\hat x}(t), t \ge 0, is given in Condition MD4, Section 4.2.3. The potential \hat R_0 is defined for the generator \hat Q_1 as follows:

\hat Q_1\hat R_0 = \hat R_0\hat Q_1 = \hat\Pi - I.

The twice averaged operators in (5.210) are calculated by

\hat{\hat B} = \hat\Pi\,\hat B\,\hat\Pi, \qquad \hat B = \Pi B(x)\Pi,   (5.211)

and, analogously,

\hat{\hat A} = \hat\Pi\,\hat A\,\hat\Pi, \qquad \hat A = \Pi A(x)\Pi.   (5.212)

PROOF. The formulas (5.210)-(5.212) are obtained straightforwardly from Proposition 5.4, Section 5.2. Calculation by formulas (5.210)-(5.212), with (5.208), gives the representation of the limit generator \mathbb{L} in (4.47) of Theorem 4.10. □

5.7.4 Double Split and Double Merging
Under the conditions of Theorem 4.11, Section 4.4.5, and taking into account the conditions of Theorem 4.3, we can calculate the generator \mathbb{L}^\varepsilon of the coupled Markov process \zeta^\varepsilon(t), x_t^\varepsilon := x^\varepsilon(t/\varepsilon^4), t \ge 0, with the first component \zeta^\varepsilon(t), t \ge 0, given in Theorem 4.11, in the following form:

\mathbb{L}^\varepsilon\varphi(u,x) = [\varepsilon^{-4}Q + \varepsilon^{-3}Q_1(x) + \varepsilon^{-2}Q_2(x) + \varepsilon^{-1}A(x) + B(x)]\,\varphi(u,x) + \theta^\varepsilon(x)\varphi,   (5.213)

where, by definition, Q is the generator of the support Markov process x_0(t), t \ge 0, given by the generator (4.26), and Q_i(x) := q(x)P_i(x, B), i = 1, 2 (see Section 4.2). The operators are (compare with Section 5.7.2):

A(x)\varphi(u) := a(u;x)\,\varphi'(u),

B(x)\varphi(u) := a_1(u;x)\,\varphi'(u) + C_0(u;x)\,\varphi''(u),

with the negligible term \|\theta^\varepsilon(x)\varphi\| \to 0, as \varepsilon \to 0, for \varphi \in C^3(\mathbb{R}^d).
Proposition 5.17 The generator \mathbb{L} of the limit diffusion process [...]

H1: For all nonnegative functions \varphi \in C_0^\infty(\mathbb{R}^d) there exists a constant A_\varphi \ge 0 such that (\varphi(x^\varepsilon(t)) + A_\varphi t, \mathcal{F}_t^\varepsilon) is a nonnegative submartingale.

H2: Given a nonnegative \varphi \in C_0^\infty(\mathbb{R}^d), the constant A_\varphi can be chosen so that it does not depend on the translates of \varphi.

Then, under the initial condition [...], the family of associated probability measures P_\varepsilon, \varepsilon > 0, is relatively compact.
CHAPTER 6. WEAK CONVERGENCE
In order to verify the weak convergence of a family of stochastic processes on D_E[0,\infty), we have to establish the relative compactness and the weak convergence of finite-dimensional distributions. Both of these problems for a family of Markov processes in D_E[0,\infty) can be solved by using the martingale characterization of Markov processes and the convergence of generators. The particularity of fast time-scaled switching processes is that the convergence of generators or compensating operators cannot be obtained in a direct way, because the generators are considered in a singularly perturbed form. But, as was shown in Chapter 5, the phase merging and averaging algorithms, as well as the diffusion and Poisson approximation schemes, can be obtained by using a solution of the singular perturbation problem for reducible-invertible operators. Such an approach will be used in the following. Another approach to verifying the relative compactness of a family of Markov processes consists in using the martingale characterization and the compactness conditions for square integrable martingales [132]. The uniqueness of the limit measure follows from the uniqueness of the solution of the martingale problem [45].
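For a finite-state Markov process, the martingale characterization used below is equivalent to the semigroup identity P_t = \exp(tQ) together with the Kolmogorov equation dP_t/dt = QP_t. A small numpy sketch (illustrative generator; the Taylor-series exponential is an assumption adequate for small matrices) verifies these identities numerically:

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential via Taylor series (adequate for small generators)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Illustrative 3-state generator of a switching Markov process.
Q = np.array([[-1.0, 1.0, 0.0],
              [ 0.5, -1.5, 1.0],
              [ 1.0, 1.0, -2.0]])
t = 0.7
Pt = expm_taylor(Q * t)            # transition semigroup P_t = exp(tQ)

# Stationary distribution, for checking the invariance pi P_t = pi.
pi = np.linalg.lstsq(np.vstack([Q.T, np.ones(3)]),
                     np.concatenate([np.zeros(3), [1.0]]), rcond=None)[0]
```

The rows of P_t sum to one, its entries are nonnegative, \pi is invariant, and a finite-difference check confirms dP_t/dt = QP_t, which is the generator relation underlying the martingale problem.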
6.3 Pattern Limit Theorems

6.3.1 Stochastic Systems with Markov Switching
The stochastic systems with Markov switching in the series scheme with small series parameter \varepsilon > 0, defined by the coupled Markov process (see Sections 3.2-3.3), can be characterized by the martingale

\mu_t^\varepsilon = \varphi^\varepsilon(\xi^\varepsilon(t), x^\varepsilon(t)) - \int_0^t \mathbb{L}^\varepsilon\varphi^\varepsilon(\xi^\varepsilon(s), x^\varepsilon(s))\,ds.   (6.3)

The generators \mathbb{L}^\varepsilon, \varepsilon > 0, have the common domain of definition D(\mathbb{L}), which is supposed to be dense in C_0^\infty(\mathbb{R}^d \times E). The limit Markov process \xi(t), t \ge 0, is considered on \mathbb{R}^d, characterized
by the martingale

\mu_t = \varphi(\xi(t)) - \int_0^t \mathbb{L}\varphi(\xi(s))\,ds.   (6.4)

Here the closure \overline{D(\mathbb{L})} of the domain D(\mathbb{L}) of the generator \mathbb{L} is a convergence-determining class (see Appendix A).

Theorem 6.3 Let the following conditions hold for a family of Markov processes \xi^\varepsilon(t), t \ge 0, \varepsilon > 0:
C1: There exists a family of test functions \varphi^\varepsilon(u,x) in C_0^\infty(\mathbb{R}^d \times E) such that

\lim_{\varepsilon \to 0}\varphi^\varepsilon(u,x) = \varphi(u), \quad \text{uniformly on } u, x.

C2: The following convergence holds:

\lim_{\varepsilon \to 0}\mathbb{L}^\varepsilon\varphi^\varepsilon(u,x) = \mathbb{L}\varphi(u), \quad \text{uniformly on } u, x.

The family of functions \mathbb{L}^\varepsilon\varphi^\varepsilon, \varepsilon > 0, is uniformly bounded, and \mathbb{L}\varphi and \mathbb{L}^\varepsilon\varphi^\varepsilon belong to C(\mathbb{R}^d \times E).

C3: The quadratic characteristics of the martingales (6.3) have the representation

\langle\mu^\varepsilon\rangle(t) = \int_0^t \zeta^\varepsilon(s)\,ds,

where the random functions \zeta^\varepsilon, \varepsilon > 0, satisfy the condition

\sup_{\varepsilon>0}\mathbb{E}\,|\zeta^\varepsilon(t)| \le c < +\infty.

C4: The convergence of the initial values holds, that is,

\xi^\varepsilon(0) \Rightarrow \xi(0), \quad \varepsilon \to 0,

and

\sup_{\varepsilon>0}\mathbb{E}\,|\xi^\varepsilon(0)| \le c < +\infty.

Then the weak convergence

\xi^\varepsilon(t) \Rightarrow \xi(t), \quad \varepsilon \to 0,

takes place. The limit Markov process \xi(t), t \ge 0, is characterized by the martingale (6.4).
In the particular case, Condition (6.5) may be replaced by

\sup_{\varepsilon>0}\mathbb{E}\Big|\int_0^t \zeta^\varepsilon(s)\,ds\Big| \to 0, \quad \varepsilon \to 0.

PROOF. We estimate:

\mathbb{E}\,|\mu_t^\varepsilon - \mu_t| \le \mathbb{E}\,\big|\varphi^\varepsilon(\xi^\varepsilon(t), x^\varepsilon(t)) - \varphi(\xi^\varepsilon(t))\big| + \mathbb{E}\,\Big|\varphi(\xi^\varepsilon(t)) - \int_0^t \mathbb{L}\varphi(\xi^\varepsilon(s))\,ds - \mu_t\Big| + \mathbb{E}\,\Big|\int_0^t \big[\mathbb{L}\varphi(\xi^\varepsilon(s)) - \mathbb{L}^\varepsilon\varphi^\varepsilon(\xi^\varepsilon(s), x^\varepsilon(s))\big]\,ds\Big|.

The first and third terms on the right-hand side tend to zero, as \varepsilon \to 0, by Conditions C1 and C2 of the theorem. Due to the relative compactness of the family of stochastic processes \xi^\varepsilon(t), t \ge 0, \varepsilon > 0, the convergence of the middle term takes place. Now, due to Condition C3, we calculate:
Hence, the following convergence holds:

\mathbb{E}\,\mu_t^\varepsilon \to \mathbb{E}\,\varphi(\xi(0)), \quad \varepsilon \to 0.

Consequently, we get that the limit process \xi(t), t \ge 0, is characterized by the martingale (6.4), with \mathbb{E}\,\mu_t = \mathbb{E}\,\varphi(\xi(0)). □

The following convergence theorem is an adaptation of Theorem 8.2, p. 226, and Theorem 8.10, p. 234, in [45], to our conditions, with the solution of the singular perturbation problem.
Theorem 6.4 Suppose that, for the generator \mathbb{L} of the coupled Markov process \xi(t), \hat x(t), t \ge 0, on the state space \mathbb{R}^d \times V, with V a finite set, there is at most one solution of the martingale problem in D_{\mathbb{R}^d \times V}[0,\infty), and that the closure of the domain D(\mathbb{L}) is a convergence-determining class. Suppose that the family of Markov processes \xi^\varepsilon(t/\varepsilon), x_t^\varepsilon, t \ge 0, \varepsilon > 0, on \mathbb{R}^d \times E, defined by the generators \mathbb{L}^\varepsilon, \varepsilon > 0, with domains D(\mathbb{L}^\varepsilon) dense in C_0^\infty(\mathbb{R}^d \times E), satisfies the following conditions:

C1: The family of probability measures (P^\varepsilon, \varepsilon > 0) associated to the processes (\xi^\varepsilon(t), v(x^\varepsilon(t/\varepsilon)), t \ge 0, \varepsilon > 0) is relatively compact.

C2: There exists a collection of functions \varphi^\varepsilon(u,x) \in C(\mathbb{R}^d \times E) such that the following uniform convergence takes place:

\lim_{\varepsilon \to 0}\varphi^\varepsilon(u,x) = \varphi(u, v(x)) \in C(\mathbb{R}^d \times V),   (6.8)

and such that (6.9) holds for every T > 0.

C3: The uniform convergence of generators

\lim_{\varepsilon \to 0}\mathbb{L}^\varepsilon\varphi^\varepsilon(u,x) = \mathbb{L}\varphi(u, v(x))   (6.10)

takes place, the functions \mathbb{L}^\varepsilon\varphi^\varepsilon, \varepsilon > 0, are uniformly bounded on \varepsilon > 0, and \mathbb{L}\varphi \in C(\mathbb{R}^d \times V).

C4: The convergence in probability of the initial values holds, that is,

(\xi^\varepsilon(0), v(x^\varepsilon(0))) \xrightarrow{P} (\xi(0), \hat x(0)), \quad \varepsilon \to 0,

with uniformly bounded expectation

\sup_{\varepsilon>0}\mathbb{E}\,|\xi^\varepsilon(0)| \le c < +\infty.

Then the weak convergence in D_{\mathbb{R}^d \times V}[0,\infty)

(\xi^\varepsilon(t), v(x^\varepsilon(t))) \Rightarrow (\xi(t), \hat x(t)), \quad \varepsilon \to 0,

takes place. The limit Markov process \xi(t), \hat x(t), t \ge 0, is defined by the generator \mathbb{L}.
Remark 6.1. The main algorithmic conditions (8.53) and (8.54) of Theorem 8.10 in [45] are represented in the conditions of Theorem 6.4, respectively (6.8) and (6.10). The additional boundedness conditions (8.51) and (8.52) correspond to the additional conditions C2 and C3. All other conditions of Theorem 8.10 are represented in the convergence Theorem 6.4 in the same form. We will also use the following theorem, which is a compilation of Theorem 9.4, p. 145, and Corollary 8.6, p. 231, in [45], under our conditions in diffusion approximation schemes.
Theorem 6.5 Let us consider the family of coupled Markov processes

\xi^\varepsilon(t),\ x^\varepsilon(t/\varepsilon^2), \quad t \ge 0,\ \varepsilon > 0,   (6.11)

with state space \mathbb{R}^d \times E, and generators \mathbb{L}^\varepsilon, \varepsilon > 0, with domains D(\mathbb{L}^\varepsilon) dense in C(\mathbb{R}^d \times E). Let \xi(t), \hat x(t), t \ge 0, be a Markov process with state space \mathbb{R}^d \times V and generator \mathbb{L} with domain D(\mathbb{L}), whose closure is a convergence-determining class. Consider also the test functions \varphi^\varepsilon(u,x). Suppose that the following conditions are fulfilled:

C1: The family of processes \xi^\varepsilon(t), t \ge 0, \varepsilon > 0, satisfies the compact containment condition

\lim_{c \to \infty}\sup_{\varepsilon>0}\mathbb{P}\Big\{\sup_{0 \le t \le T}|\xi^\varepsilon(t)| > c\Big\} = 0.

[...]   (6.14)
It can be characterized by the martingale

\mu_{n+1}^\varepsilon = \varphi(\xi_{n+1}^\varepsilon, x_{n+1}^\varepsilon) - \sum_{k=1}^{n}\mathbb{L}^\varepsilon\varphi(\xi_k^\varepsilon, x_k^\varepsilon), \quad n \ge 0,   (6.15)

with respect to the filtration \mathcal{F}_n^\varepsilon := \sigma(\xi_k^\varepsilon, x_k^\varepsilon;\ k \le n). [...]

C1: The family of probability measures associated to the processes \xi^\varepsilon(t), t \ge 0, \varepsilon > 0, is relatively compact.

C2: There exists a family of test functions \varphi^\varepsilon(u,x) in C_0^\infty(\mathbb{R}^d \times E) such that

\lim_{\varepsilon \to 0}\varphi^\varepsilon(u,x) = \varphi(u), \quad \text{uniformly on } u, x.

C3: The following uniform convergence holds:

\lim_{\varepsilon \to 0}\mathbb{L}^\varepsilon\varphi^\varepsilon(u,x) = \mathbb{L}\varphi(u), \quad \text{uniformly on } u, x.

The family of functions \mathbb{L}^\varepsilon\varphi^\varepsilon, \varepsilon > 0, is uniformly bounded, and \mathbb{L}^\varepsilon\varphi^\varepsilon and \mathbb{L}\varphi belong to C(\mathbb{R}^d \times E).

C4: The convergence of the initial values holds, that is,

\xi^\varepsilon(0) \Rightarrow \xi(0), \quad \varepsilon \to 0,

and

\sup_{\varepsilon>0}\mathbb{E}\,|\xi^\varepsilon(0)| \le c < +\infty.

Then the weak convergence

\xi^\varepsilon(t) \Rightarrow \xi(t), \quad \varepsilon \to 0,   (6.19)

takes place. The limit process \xi(t), t \ge 0, is characterized by the martingale (6.18).
In the particular case where the martingale is constant, $\mu_t=\mu_0=\mathrm{const}$, the limit process $\xi(t)$, $t\ge 0$, is given by the solution of the deterministic equation
\[
\varphi(\xi(t))-\varphi(\xi(0))=\int_0^t \mathbb{L}\varphi(\xi(s))\,ds,
\]
or, in an equivalent form,
\[
\frac{d}{dt}\,\varphi(\xi(t))=\mathbb{L}\varphi(\xi(t)).
\]

PROOF. Let us introduce the following random variables:
\[
\nu^\varepsilon(t):=\max\{n\ge 0:\tau^\varepsilon_n\le t\},\qquad
\nu^\varepsilon_+(t):=\nu^\varepsilon(t)+1,
\]
\[
\tau^\varepsilon_+(t):=\tau^\varepsilon_{\nu^\varepsilon_+(t)},\qquad
\tau^\varepsilon(t):=\tau^\varepsilon_{\nu^\varepsilon(t)}.
\]
Recall that the time-scaled semi-Markov process in the average scheme is considered as $x^\varepsilon(s):=x(s/\varepsilon)$, and in the diffusion approximation scheme as $x^\varepsilon(s):=x(s/\varepsilon^2)$. Note also that the random variables $\nu^\varepsilon_+(t)$ are stopping times with respect to the corresponding filtration.
For the proof of the theorem we need the following lemma. In what follows, we consider the embedded stochastic system with piecewise trajectories, defined as follows.
Lemma 6.1 The process $\zeta^\varepsilon(t)$ has the martingale property
\[
\mathbb{E}\big[\zeta^\varepsilon(t)-\zeta^\varepsilon(s)\mid\mathcal{F}^\varepsilon_s\big]=0,\quad 0\le s\le t\le T.
\tag{6.23}
\]

PROOF. It is worth noticing that $\zeta^\varepsilon(t)=\zeta^\varepsilon(\tau^\varepsilon(t))$, and
The family of functions $\mathbb{L}^\varepsilon\varphi^\varepsilon$, $\varepsilon>0$, is uniformly bounded, and $\mathbb{L}^\varepsilon\varphi^\varepsilon$ and $\mathbb{L}\varphi$ belong to $C(\mathbb{R}^d\times E)$.

C4: The convergence of the initial values holds, that is,
\[
\xi^\varepsilon_0\ \xrightarrow{\ \mathbb{P}\ }\ \xi_0,\quad \varepsilon\to 0,
\]
and $\sup_{\varepsilon>0}\mathbb{E}\,|\xi^\varepsilon_0|\le c<+\infty$.

Then the weak convergence
\[
\xi^\varepsilon_t\Longrightarrow \xi_t,\quad \varepsilon\to 0,
\tag{6.35}
\]
takes place. The limit process $\xi_t$, $t\ge 0$, is characterized by the martingale (6.34). In the particular case where the martingale is constant, $\mu_t=\mu_0=\mathrm{const}$, the limit process is determined by a solution of the deterministic equation
\[
\varphi(\xi(t))-\varphi(\xi(0))=\int_0^t \mathbb{L}\varphi(\xi(s))\,ds,
\]
or, in equivalent form,
\[
\frac{d}{dt}\,\varphi(\xi(t))=\mathbb{L}\varphi(\xi(t)).
\]
PROOF. In order to verify the weak convergence (6.35), we have to estimate the expectation of the following process (6.36)
by using the conditions of Theorem 6.7 and the martingale property of the piecewise process (6.27). Let us decompose the difference into four terms: the discrepancies $\varphi^\varepsilon(\xi^\varepsilon_t,x^\varepsilon_t)-\varphi(\xi^\varepsilon_t)$ at the endpoints, the martingale sum (6.27), and the difference between the integral $\int_0^t\mathbb{L}\varphi(\xi^\varepsilon(s))\,ds$ and the corresponding integral sum of the terms $\mathbb{L}^\varepsilon\varphi^\varepsilon(\xi^\varepsilon_k,x^\varepsilon_k)$. Due to Condition C2 of the theorem, the first and third terms of the sum tend to zero as $\varepsilon\to 0$. The fourth term also tends to zero, since in the square brackets we have the difference between the integral and the corresponding integral sum. The second term is exactly the martingale (6.27). In conclusion, we obtain:
Finally, with the negligible remainder term collected, the limit process in (6.36) is the martingale (6.34). □
6.4 Relative Compactness

In this section, the relative compactness of the family of stochastic systems $\xi^\varepsilon(t)$, $t\ge 0$, $\varepsilon>0$, is realized by using the Stroock and Varadhan criteria formulated in Theorem 6.2.

6.4.1
Stochastic Systems with Markov Switching
The stochastic systems with Markov switching in the series scheme with the small parameter $\varepsilon>0$, $\varepsilon\to 0$, are characterized by the martingale (6.37), where the generators $\mathbb{L}^\varepsilon$, $\varepsilon>0$, have the common domain of definition $D(\mathbb{L})$, which is supposed to be dense in $C_0^3(\mathbb{R}^d\times E)$.
Lemma 6.2 Let the generators $\mathbb{L}^\varepsilon$, $\varepsilon>0$, satisfy the estimate
\[
|\mathbb{L}^\varepsilon\varphi(u)|\le C_\varphi,
\tag{6.38}
\]
for any real-valued nonnegative function $\varphi\in C_0^3(\mathbb{R}^d)$, where the constant $C_\varphi$ depends on the norm of $\varphi$, but not on $\varepsilon>0$, nor on shifts of $\varphi$ [45]. Suppose that the compact containment condition (6.39) holds. Then the family of stochastic processes $\xi^\varepsilon(t)$, $t\ge 0$, $\varepsilon>0$, is relatively compact.
PROOF. Let us consider the process
\[
V^\varepsilon(t):=\varphi(\xi^\varepsilon(t))+C_\varphi\,t,\quad t\ge 0,
\]
and prove that it is an $\mathcal{F}^\varepsilon_t$-nonnegative submartingale. To see this, let us calculate, by using the martingale characterization (6.37), for $0\le s\le t$:
\[
\mathbb{E}\big[V^\varepsilon(t)\mid\mathcal{F}^\varepsilon_s\big]
=\mathbb{E}\big[\varphi(\xi^\varepsilon(t))\mid\mathcal{F}^\varepsilon_s\big]+C_\varphi\,t.
\]
So, the following equality takes place:
\[
\mathbb{E}\big[V^\varepsilon(t)\mid\mathcal{F}^\varepsilon_s\big]
=V^\varepsilon(s)+\mathbb{E}\Big[\int_s^t\big(\mathbb{L}^\varepsilon\varphi(\xi^\varepsilon(u))+C_\varphi\big)\,du\ \Big|\ \mathcal{F}^\varepsilon_s\Big],
\]
where the last term is nonnegative due to the estimate (6.38). Hence
\[
\mathbb{E}\big[V^\varepsilon(t)\mid\mathcal{F}^\varepsilon_s\big]\ge V^\varepsilon(s),\quad s<t.
\]
Now, we can see that both hypotheses of Theorem 6.2 are valid. □

We need the following lemma for the proof of Lemma 6.4 below.
Lemma 6.3 (Lemma 3.2, p. 174, in [45]) Let $x(t)$, $t\ge 0$, be a Markov process defined by the generator $L$, and $G_t\supseteq\mathcal{F}^x_t$. Then, for any fixed $\lambda\in\mathbb{R}$ and $\varphi\in D(L)$,
\[
e^{-\lambda t}\varphi(x(t))+\int_0^t e^{-\lambda s}\big[\lambda\varphi(x(s))-L\varphi(x(s))\big]\,ds
\]
is a $G_t$-martingale.

Lemma 6.4 Let the generators $\mathbb{L}^\varepsilon$, $\varepsilon>0$, satisfy, for $\varphi_0(u)=\sqrt{1+|u|^2}$, the estimate
\[
|\mathbb{L}^\varepsilon\varphi_0(u)|\le C_1\,\varphi_0(u),\quad |u|\le c,
\]
where the constant $C_1$ depends on the function $\varphi_0$, but not on $\varepsilon>0$. Then the compact containment condition holds:
\[
\lim_{c\to\infty}\sup_{\varepsilon>0}\mathbb{P}\Big(\sup_{0\le t\le T}|\xi^\varepsilon(t)|\ge c\Big)=0.
\tag{6.41}
\]

PROOF. Since $\varphi_0(u)=\sqrt{1+|u|^2}$, we have
\[
|\varphi_0'(u)|\le 1\le\varphi_0(u),\qquad |\varphi_0''(u)|\le 1\le\varphi_0(u).
\]
Let us define the stopping time $\tau^\varepsilon_c$ by
\[
\tau^\varepsilon_c:=\inf\{t\ge 0:|\xi^\varepsilon(t)|\ge c\}.
\]
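The proof above rests on the elementary bounds $|\varphi_0'(u)|\le 1\le\varphi_0(u)$ and $|\varphi_0''(u)|\le 1\le\varphi_0(u)$ for $\varphi_0(u)=\sqrt{1+u^2}$. A quick numerical sanity check of these bounds (a sketch, not part of the proof):

```python
import numpy as np

# phi0(u) = sqrt(1 + u^2) and its first two derivatives
def phi0(u):
    return np.sqrt(1.0 + u**2)

def dphi0(u):
    return u / np.sqrt(1.0 + u**2)

def d2phi0(u):
    return (1.0 + u**2) ** (-1.5)

u = np.linspace(-50, 50, 10001)
# |phi0'| <= 1 <= phi0 and |phi0''| <= 1 <= phi0, as used in Lemma 6.4
bound1 = bool(np.all(np.abs(dphi0(u)) <= 1.0) and np.all(phi0(u) >= 1.0))
bound2 = bool(np.all(np.abs(d2phi0(u)) <= 1.0))
```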
By Lemma 6.3, applied with a stopping time instead of a fixed time $t$, we have that
\[
e^{-C_1(t\wedge\tau^\varepsilon_c)}\varphi_0(\xi^\varepsilon(t\wedge\tau^\varepsilon_c))
+\int_0^{t\wedge\tau^\varepsilon_c}e^{-C_1 s}\big[C_1\varphi_0(\xi^\varepsilon(s))-\mathbb{L}^\varepsilon\varphi_0(\xi^\varepsilon(s))\big]\,ds
\tag{6.43}
\]
is a martingale. For $s\le t\wedge\tau^\varepsilon_c$ the integrand is nonnegative, and from (6.43) we obtain
\[
\mathbb{E}\big[e^{-C_1(t\wedge\tau^\varepsilon_c)}\varphi_0(\xi^\varepsilon(t\wedge\tau^\varepsilon_c))\big]
\le\mathbb{E}\,\varphi_0(\xi^\varepsilon(0)).
\tag{6.45}
\]
The convexity of $\varphi_0$ and the inequality $\varphi_0(u)\ge 1$ together provide the estimate for
\[
p^\varepsilon_c:=\mathbb{P}\Big(\sup_{0\le t\le T}|\xi^\varepsilon(t)|\ge c\Big),
\]
and Chebyshev's inequality yields
\[
p^\varepsilon_c\le\mathbb{E}\big[\varphi_0(\xi^\varepsilon(\tau^\varepsilon_c))\big]\big/\varphi_0(c).
\tag{6.46}
\]
From inequality (6.46), together with the inequality $e^{-C_1\tau^\varepsilon_c}\ge e^{-C_1 T}$ (since $\tau^\varepsilon_c\le T$), we obtain that $p^\varepsilon_c\to 0$ as $c\to\infty$, uniformly in $\varepsilon>0$. Then the family of processes $\xi^\varepsilon(t)$, $t\ge 0$, $\varepsilon>0$, is relatively compact. □

6.4.2
Stochastic Systems with Semi-Markov Switching

The stochastic systems with semi-Markov switching in the series scheme with the small series parameter $\varepsilon>0$, $\varepsilon\to 0$, are characterized by the process (see Lemma 6.1)
\[
\zeta^\varepsilon(t)=\varphi(\xi^\varepsilon(t),x^\varepsilon(t))
-\int_0^{\tau^\varepsilon_+(t)}\mathbb{L}^\varepsilon\varphi(\xi^\varepsilon(s),x^\varepsilon(s))\,ds,
\tag{6.51}
\]
where the compensating operators $\mathbb{L}^\varepsilon$, $\varepsilon>0$, have the following asymptotic representation (see Proposition 3.1):
\[
\mathbb{L}^\varepsilon\varphi(u,x)=\varepsilon^{-1}Q\varphi+\Gamma(x)P\varphi+\varepsilon\,\theta^\varepsilon_B(x)\varphi,
\tag{6.52}
\]
on the test functions $\varphi(u,x)$ in $C_0^3(\mathbb{R}^d\times E)$, with remainder term $\theta^\varepsilon_B(x)=\Gamma^2(x)\bar F^{(2)}(x)Q_0$. The process $\zeta^\varepsilon(t)$ in (6.51) has the martingale property (see Lemma 6.1). The relative compactness of the family of stochastic systems $\zeta^\varepsilon(t)$, $t\ge 0$, $\varepsilon>0$, is realized by using the Stroock and Varadhan criteria formulated in Theorem 6.2.
Lemma 6.5 Let the compensating operators $\mathbb{L}^\varepsilon$, $\varepsilon>0$, satisfy the estimate
\[
|\mathbb{L}^\varepsilon\varphi(u)|\le C_\varphi,
\tag{6.53}
\]
for any real-valued nonnegative function $\varphi(u)$ in $C_0^3(\mathbb{R}^d)$, where the constant $C_\varphi$ depends only on the norm of $\varphi$, but not on $\varepsilon$, nor on shifts of $\varphi$ [45]. Let the compact containment condition (6.39) hold, together with condition (6.55) and the additional condition that $\varphi(\xi^\varepsilon(t))$ is relatively compact for each test function $\varphi(u)$ in a dense set, say $H$, in $C(E)$, in the topology of uniform convergence on compact sets. The uniqueness of the solution of the martingale problem for the limit generator of the Markov process, together with Condition (6.55), provides the weak convergence of the processes (see Theorem 9.1, Ch. 3 in [45]).
Lemma 6.6 The family of processes $\xi^\varepsilon(t)$, $t\ge 0$, $0<\varepsilon\le\varepsilon_0$, characterized by the compensating operator (6.52), with bounded initial value $\mathbb{E}\,|\xi^\varepsilon(0)|\le b<\infty$, satisfies the compact containment condition (6.56) (see [45]).

PROOF. We will use the function $\varphi_0(u)=\sqrt{1+|u|^2}$. The asymptotic representation (6.52) of the compensating operator and the following properties
of $\varphi_0$ yield the inequality
Let us now use the process $\zeta^\varepsilon(t)$, defined as in Lemma 7.8. First, the following inequality is satisfied for any large enough $c>0$. Using the martingale property of the process $\zeta^\varepsilon(t)$ and inequality (6.58), we obtain a further inequality, whose left-hand side is estimated as follows; the second term is estimated using the property below. Hence, we will use the following property of the random sojourn times $\gamma^\varepsilon(T)$ (see Appendix C): for all $\delta>0$,
(6.62)
Note that the function $d(s)=b\,s\,e^{-cs}$ is bounded on $s\in\mathbb{R}_+$:
\[
0\le d(s)\le \ell<+\infty.
\tag{6.63}
\]
Let us estimate, similarly, using property (6.62):
\[
\mathbb{E}\big[e^{-c\,\gamma^\varepsilon(T)}\mid\mathcal{F}_T\big]
=\mathbb{E}\big[e^{-c\,\gamma^\varepsilon(T)}\big[\mathbf{1}(\gamma^\varepsilon(T)\ge\delta)+\mathbf{1}(\gamma^\varepsilon(T)<\delta)\big]\mid\mathcal{F}_T\big]
\ge e^{-c\delta}\,\mathbb{P}(\gamma^\varepsilon(T)<\delta)
=e^{-c\delta}\big[1-\mathbb{P}(\gamma^\varepsilon(T)\ge\delta)\big].
\tag{6.65}
\]
By (6.62), we have $\mathbb{E}\,[e^{-c\,\gamma^\varepsilon(T)}]\ge h>0$, for $0<\varepsilon\le\varepsilon_0$. Inequality (6.59) can now be transformed into the following:
\[
h\,e^{-cT}\,\mathbb{E}\,\varphi_0(\xi^\varepsilon(T))\le\mathbb{E}\,\varphi_0(\xi^\varepsilon(0)).
\tag{6.66}
\]
The convexity of $\varphi_0(u)=\sqrt{1+|u|^2}$ and the inequality $\varphi_0(u)\ge 1$ provide the estimate, and Chebyshev's inequality yields the bound on the exceedance probability. Now, inequality (6.66) is used:
6.5 Verification of Convergence

The verification of convergence of stochastic systems with semi-Markov switching in the average merging scheme (see Theorem 3.1) is based on the determination of the pattern limit Theorem 6.6 conditions, by using the explicit representation of the solution of the singular perturbation problem given in Proposition 5.7. First, the explicit representation of the remainder term in (5.65)-(5.67) and Condition A2 in Section 3.3.1 (Theorem 3.1) provide that Condition (6.53) in Lemma 6.5 is valid. Next, the compact containment condition (6.56) is realized by Lemma 6.6. So, Condition C1 of Theorem 6.6 is true. Conditions C2 and C3 are also true from the same explicit representation (5.65)-(5.67) of the limit generator and the remainder term and, of course, Conditions A2 and A3 in Section 3.3.1 (Theorem 3.1). The characterization of the limit process in (6.20) with the limit generator $\mathbb{L}=\hat\Gamma$, $\hat\Gamma\varphi(u)=\hat\gamma(u)\varphi'(u)$ (see (5.59)), due to Condition C4, completes the proof of Theorem 3.1.
Verification of weak convergence of stochastic additive functionals (3.56) in Theorem 3.4 is achieved following a scheme analogous to that of Theorem 3.1. First, Conditions C2 and C3 of Theorem 6.6 are obtained by using the asymptotic representation (3.10) in Proposition 3.2. Next, the compact containment condition (6.39) is realized by Lemma 6.6. Condition (6.53) in Lemma 6.5 is proved for the perturbed test functions $\varphi^\varepsilon(u,x)=\varphi(u)+\varepsilon\varphi_1(u,x)$, such that the constant $C_\varphi$ depends only on $\varphi(u)$, but not on $\varepsilon$, nor on shifts of $\varphi$. The characterization of the limit generator $\mathbb{L}$ in Proposition 5.8 completes the proof of Theorem 3.4. The proof of Theorem 3.3 can be obtained as a particular case of that of Theorem 3.4. The verification of weak convergence in Theorems 3.2 and 3.5 is obtained similarly, by using Propositions 3.4 and 5.9 respectively.
The convergence in distribution of the coupled Markov process $\zeta^\varepsilon(t),\hat x(t)$, $t\ge 0$, $\varepsilon>0$, is obtained by the Pattern Limit Theorem 6.3. For the weak convergence, we propose to the interested reader to calculate the square characteristic of the martingale characterization of the coupled Markov process $\zeta^\varepsilon(t),\hat x(t)$, $t\ge 0$, and to verify the relative compactness of the family $\zeta^\varepsilon(t)$, $t\ge 0$, $\varepsilon>0$, as $\varepsilon\to 0$.

Theorem 3.6 can be considered as a particular case of Theorem 3.7. The weak convergence in Theorems 4.7-4.11 is based on the solutions of the singular perturbation problems given in Propositions 5.14-5.17 and on the Pattern Limit Theorem 6.4 in the average merging scheme and Theorem 6.5 in the diffusion approximation scheme. Switching semi-Markov processes are considered in Theorem 6.6. The verification of relative compactness is made by using the estimates of the generator (or compensating operator) on the test functions given in Lemmas 6.2 and 6.4. The relative compactness of the processes in the series scheme is shown by using the Stroock-Varadhan approach given in Theorem 6.2.
Chapter 7
Poisson Approximation
7.1 Introduction
The Poisson approximation merging scheme is presented here for two kinds of stochastic systems: impulsive processes with Markov switching (Sections 7.2.1 and 7.2.2) and stochastic additive functionals with semi-Markov switching (Section 7.2.3). The average and diffusion approximation merging principles are constructed for stochastic systems in the series scheme with the small series parameter $\varepsilon\to 0$ ($\varepsilon>0$) normalizing the values of jumps. In the Poisson approximation scheme, the jump values of the stochastic system are split into two parts: a small jump taking values with probabilities close to one, and a big jump taking values with probabilities tending to zero together with the series parameter $\varepsilon\to 0$. So, in the Poisson merging principle, the probabilities (or intensities) of jumps are normalized by the series parameter $\varepsilon$. The main assumption in the Poisson merging principle is the asymptotic representation of the probability measure on the measure-determining class of functions $\varphi\in C_3(\mathbb{R})$, which are real-valued, bounded, and such that $\varphi(u)/u^2\to 0$ as $|u|\to 0$ [70] (see Appendix B). The techniques of proofs developed here are quite different from those used in the previous chapters for diffusion and average approximations. The proofs of theorems in the present chapter make use of semimartingale theory. Theorems 7.1 and 7.2 concern impulsive processes, with and without state space merging of the switching Markov process. Theorem 7.3 concerns additive functionals with semi-Markov switching. The main framework of proofs is that of Theorems VIII.2.18 and IX.3.27 in [70] (see Appendix B, Theorems B.1 and B.2). But the main point here is to prove convergence of predictable characteristics of semimartingales, which are integral functionals
of switching Markov processes. This is done by techniques given in Chapters 5 and 6. The Poisson merging principle is constructed similarly to the average approximation principle (see Section 5.4), with some special devices. As usual, there are four different schemes: the continuous and jump random evolutions, considered with Markov and semi-Markov switching. The associated continuous random evolution in the Poisson approximation scheme is given by the family of generators $\Gamma_\varepsilon(x)$, $x\in E$, which defines the switched Markov processes with locally independent increments $\eta^\varepsilon(t;x)$, $t\ge 0$, $x\in E$, with values in $\mathbb{R}^d$, $d\ge 1$, and by the switching Markov renewal process $x^\varepsilon_n,\tau^\varepsilon_n$, $n\ge 0$, which determines the states $x_k\in E$ and the renewal times by the transition probabilities given by the Markov kernel. The starting point of the construction of the Poisson approximation principle is the compensating operator of the continuous random evolution in the series scheme (see Section 5.3). For easier understanding by the reader, we consider Markov switching in the first part, but the same approach can be used for semi-Markov switching when we replace the generator by the compensating operator.
7.2 Stochastic Systems in the Poisson Approximation Scheme

7.2.1 Impulsive Processes with Markov Switching
Let $x(t)$, $t\ge 0$, be a Markov jump process on a standard state space $(E,\mathcal{E})$, defined by the generator
\[
Q\varphi(x)=q(x)\int_E P(x,dy)\,[\varphi(y)-\varphi(x)].
\tag{7.1}
\]
The semi-Markov kernel
\[
Q(x,B,t)=P(x,B)\,(1-e^{-q(x)t}),\quad x\in E,\ B\in\mathcal{E},\ t\ge 0,
\]
defines the associated Markov renewal process $x_k,\tau_k$, $k\ge 0$, where $x_k$, $k\ge 0$, is the embedded Markov chain defined by the stochastic kernel
\[
P(x,B)=\mathbb{P}(x_{k+1}\in B\mid x_k=x),
\tag{7.2}
\]
and $\tau_k$, $k\ge 0$, is the point process of jump times, defined by the distribution function of the sojourn times $\theta_{k+1}=\tau_{k+1}-\tau_k$, $k\ge 0$.
We suppose that the Markov process $x(t)$, $t\ge 0$, is uniformly ergodic with stationary distribution $\pi(B)$, $B\in\mathcal{E}$. Thus the embedded Markov chain $x_k$, $k\ge 0$, is uniformly ergodic too, with stationary distribution $\rho(B)$, $B\in\mathcal{E}$, connected by the following relations:
\[
\pi(dx)\,q(x)=q\,\rho(dx),\qquad q:=\int_E\pi(dx)\,q(x).
\tag{7.3}
\]
In the sequel, we will suppose that
\[
0<q_0\le q(x)\le q_1<+\infty,\quad x\in E.
\tag{7.4}
\]
The impulsive process with Markov switching is defined by
\[
\xi^\varepsilon(t):=\xi(0)+\sum_{k=1}^{\nu(t/\varepsilon)}\alpha^\varepsilon_k(x_k),\quad t\ge 0,
\tag{7.5}
\]
where $\nu(t)=\max\{k:\tau_k\le t\}$ is the counting process of jumps. The family of random variables $\alpha^\varepsilon_k(x)$, $k\ge 1$, $x\in E$, is considered in the series scheme with a small series parameter $\varepsilon>0$, and is defined by the distribution functions $\Phi^\varepsilon_x(du)$ on the real line $\mathbb{R}$. Analogous results can be obtained for impulsive processes in $\mathbb{R}^d$, $d\ge 1$. In the sequel, we will suppose that, for any fixed sequence $(x_k)$ in $E$, the sequence $\alpha^\varepsilon_k(x_k)$, $k\ge 1$, consists of independent random variables. Let the following conditions hold.

A1: The switching jump Markov process $x(t)$, $t\ge 0$, is uniformly ergodic with the stationary distributions (7.3).

A2: The family of random variables $\alpha^\varepsilon_k(x)$, $k\ge 1$, $x\in E$, is uniformly square-integrable, that is,
\[
\sup_{\varepsilon>0}\sup_{x\in E}\int_{|u|>c}u^2\,\Phi^\varepsilon_x(du)\to 0,\quad c\to\infty.
\]

A3: Approximation of mean values:
\[
\int_{\mathbb{R}} u\,\Phi^\varepsilon_x(du)=\varepsilon\,[a(x)+\theta^\varepsilon_a(x)],
\]
with $\sup_{x\in E}|a(x)|<+\infty$.
A4: Poisson approximation condition:
\[
\int_{\mathbb{R}} g(u)\,\Phi^\varepsilon_x(du)=\varepsilon\,[\Phi_x(g)+\theta^\varepsilon_g(x)],\quad g\in C_3(\mathbb{R}),
\]
and $\sup_{x\in E}|\Phi_x(g)|\le\Phi(g)<\infty$, where the measure $\Phi_x(du)$ is defined on the measure-determining class $C_3(\mathbb{R})$ by the relation
\[
\Phi_x(g)=\int_{\mathbb{R}} g(u)\,\Phi_x(du).
\]

A5: Square-integrability condition:
\[
\sup_{x\in E}\int_{\mathbb{R}} u^2\,\Phi_x(du)<+\infty.
\]

The negligible terms $\theta^\varepsilon_a(x)$, $\theta^\varepsilon_c(x)$ and $\theta^\varepsilon_g(x)$ in the above conditions satisfy $\sup_{x\in E}|\theta^\varepsilon_\cdot(x)|\to 0$ as $\varepsilon\to 0$.
Theorem 7.1 Under Assumptions A1-A5, the impulsive process (7.5) converges weakly to the compound Poisson process with drift
\[
\xi^0(t)=\xi_0+\hat a\,t+\sum_{k=1}^{\nu^0(t)}\alpha_k,\quad t\ge 0,
\tag{7.7}
\]
where the common distribution of the jumps $\alpha_k$, $k\ge 1$, is defined on the measure-determining class $C_3(\mathbb{R})$ of functions $g$ by the relation (7.8), with the averaged measure (7.9) given by
\[
\Phi_0(g):=\int_E\rho(dx)\,\Phi_x(g).
\]
The counting Poisson process $\nu^0(t)$ is defined by the intensity
\[
q_0:=q\,\Phi_0(1).
\tag{7.10}
\]
The drift parameter $\hat a$ is defined by (7.11).
The following corollary adapts the above theorem to the case of finitely many jump values $\alpha^\varepsilon_k(x)$.

Corollary 7.1 The impulsive process (7.5), with a finite number of jump values,
\[
\mathbb{P}(\alpha^\varepsilon_k(x)=a_m)=\varepsilon\,p_m(x),\quad 1\le m\le M,
\tag{7.12}
\]
converges weakly to the compound Poisson process (7.7), determined by the distribution function of jumps
\[
\mathbb{P}(\alpha_k=a_m)=p^0_m,\quad 1\le m\le M,
\]
where
\[
p^0_m:=\bar p_m/\bar p_0,\qquad \bar p_m:=\int_E\rho(dx)\,p_m(x),\qquad \bar p_0:=\sum_{m=1}^M\bar p_m.
\tag{7.13}
\]
The intensity of the counting Poisson process $\nu^0(t)$, $t\ge 0$, is defined by
\[
q_0:=q\,\bar p_0,
\tag{7.14}
\]
and the drift parameter $a_0$ is given in (7.12).
Remark 7.1. Assumptions A3 and A4 together split the jumps into two parts. The first part gives the deterministic drift, and the second part gives the jumps of the limit Poisson process. The small jumps of the initial process, characterized by the function $a(x)$ in A3, are transformed into the deterministic drift of the limit process.
Remark 7.2. The stochastic exponential process for the impulsive process (7.5) is defined as follows [106]:
\[
\mathcal{E}(\chi\xi^\varepsilon)_t:=\prod_{k=1}^{\nu(t/\varepsilon)}\big[1+\chi\,\alpha^\varepsilon_k(x_k)\big],\quad t\ge 0.
\tag{7.15}
\]
The weak limit of the process (7.15), as $\varepsilon\to 0$, is
\[
\mathcal{E}(\chi\xi^0)_t:=e^{\chi\hat a t}\prod_{k=1}^{\nu^0(t)}\big[1+\chi\,\alpha_k\big],\quad t\ge 0.
\tag{7.16}
\]
Example 7.1. Let us consider a two-state ergodic Markov process $x(t)$, $t\ge 0$, with generating matrix $Q$, and the transition matrix $P$ of the embedded Markov chain:
\[
Q=\begin{pmatrix}-\lambda&\lambda\\ \mu&-\mu\end{pmatrix},\qquad
P=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}.
\]
Thus, the stationary distributions of $x(t)$, $t\ge 0$, and $x_n$, $n\ge 0$, are, respectively,
\[
\pi=\Big(\frac{\mu}{\lambda+\mu},\,\frac{\lambda}{\lambda+\mu}\Big),\qquad
\rho=\Big(\frac{1}{2},\,\frac{1}{2}\Big).
\]
Now suppose that, for each $\varepsilon>0$, the random variables $\alpha^\varepsilon_k(z)$, $z=1,2$, $k\ge 1$, take values in $\{\varepsilon a_0, a_1\}$ with probabilities depending on the state $z$:
\[
\Phi^\varepsilon_z(\varepsilon a_0)=\mathbb{P}(\alpha^\varepsilon_k=\varepsilon a_0)=1-\varepsilon p_z,\qquad
\Phi^\varepsilon_z(a_1)=\mathbb{P}(\alpha^\varepsilon_k=a_1)=\varepsilon p_z,\quad z\in E.
\]
We have
\[
\int_{\mathbb{R}} g(u)\,\Phi^\varepsilon_z(du)=\varepsilon\,[g(a_1)p_z+\theta^\varepsilon_g(z)],
\]
where $\theta^\varepsilon_g(z):=\varepsilon^{-1}g(\varepsilon a_0)(1-\varepsilon p_z)=o(1)$ for $\varepsilon\to 0$ (since $g(u)/u^2\to 0$ as $|u|\to 0$), and
\[
\int_{\mathbb{R}} u\,\Phi^\varepsilon_z(du)=\varepsilon\,[(a_0+a_1p_z)+\theta^\varepsilon_a(z)],
\]
where $\theta^\varepsilon_a(z)=-\varepsilon a_0p_z$. For the limit process, we have $\mathbb{P}(\alpha_k=a_1)=1$, thus
\[
\xi^0(t)=q a_0 t+a_1\nu^0(t),\quad\text{with }\ \mathbb{E}\,\nu^0(t)=q_0t,\ q=\lambda+\mu,\ q_0=q\bar p_0=q(p_1+p_2)/2.
\]
Let us now take: $\lambda=\mu=0.01$; $p_1=0.5$; $p_2=0.6$; $a_1=100$; $a_0=-2$; $\varepsilon=0.1$. Then we get $q_0=0.0165$, and Fig. 7.1 gives two trajectories in the time interval $[0,4500]$, one for the initial process and the other for the limit process.
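A Monte Carlo sketch of the jump mechanism in this example, using the stated parameters (the alternating embedded chain and the per-event jump law below are my reading of the garbled text, so the sketch is illustrative only): small jumps $\varepsilon a_0$ occur with probability close to one and build the drift, while rare jumps $a_1$ occur with probability $\varepsilon p_z$ per event and survive in the limit as Poisson jumps.

```python
import random

# Per-event jump law of Example 7.1: value eps*a0 with prob. 1 - eps*p_z,
# value a1 with prob. eps*p_z, states z = 1, 2 alternating (embedded chain P).
eps, a0, a1 = 0.1, -2.0, 100.0
p = {1: 0.5, 2: 0.6}

rng = random.Random(42)
n_events, big = 100_000, 0
total, z = 0.0, 1
for _ in range(n_events):
    if rng.random() < eps * p[z]:      # rare "big" jump -> limit Poisson part
        total += a1
        big += 1
    else:                              # frequent small jump -> limit drift part
        total += eps * a0
    z = 3 - z                          # embedded chain alternates 1 <-> 2

big_rate = big / n_events              # ~ eps*(p1 + p2)/2 = 0.055
mean_step = total / n_events           # ~ eps*a0*(1 - 0.055) + a1*0.055 = 5.311
```

The empirical big-jump frequency per event matches $\varepsilon(p_1+p_2)/2$, which is the source of the limit Poisson intensity after time rescaling.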
Fig. 7.1 Trajectories of the initial and limit processes, and of the drift
7.2.2 Impulsive Processes in an Asymptotic Split Phase Space

Now the switching Markov process $x^\varepsilon(t)$, $t\ge 0$, is considered in the series scheme with a small series parameter $\varepsilon>0$, on an asymptotically split state space:
\[
E=\bigcup_{v\in V}E_v,\qquad E_v\cap E_{v'}=\emptyset,\quad v\ne v',
\tag{7.17}
\]
where $(V,\mathcal{V})$ is the factor compact measurable space. The generator is given by the relation
\[
Q^\varepsilon\varphi(x)=q(x)\int_E P^\varepsilon(x,dy)\,[\varphi(y)-\varphi(x)].
\tag{7.18}
\]
The transition kernel $Q^\varepsilon$ has the following representation:
\[
Q^\varepsilon(x,B)=q(x)P^\varepsilon(x,B)=Q(x,B)+\varepsilon Q_1(x,B),
\tag{7.19}
\]
with the stochastic kernel $P^\varepsilon$ given by the representation
\[
P^\varepsilon(x,B)=P(x,B)+\varepsilon P_1(x,B).
\tag{7.20}
\]
The stochastic kernel $P(x,B)$ is coordinated with the split state space (7.17) as follows:
\[
P(x,E_v)=\mathbf{1}_{E_v}(x)=\begin{cases}1,& x\in E_v,\\ 0,& x\notin E_v.\end{cases}
\tag{7.21}
\]
In the sequel, we suppose that the signed kernel $P_1$ is of bounded variation, that is,
\[
|P_1|(x,E)<+\infty.
\tag{7.22}
\]
According to (7.20) and (7.21), the Markov process $x^\varepsilon(t)$, $t\ge 0$, spends a long time in every class $E_v$, and the probability of transition from one class to another is $O(\varepsilon)$. The state space merging scheme (7.17) is realized under the condition that the supporting Markov process $x(t)$, $t\ge 0$, with generator (7.1), is uniformly ergodic in every class $E_v$, $v\in V$, with the stationary distributions
\[
\pi_v(dx),\quad v\in V.
\tag{7.23}
\]
Let us define the merging function
\[
v(x)=v,\quad x\in E_v.
\tag{7.24}
\]
By the state merging scheme, the merged process converges weakly (see Section 4.2),
\[
v(x^\varepsilon(t/\varepsilon))\Longrightarrow \hat x(t),\quad \varepsilon\to 0,
\tag{7.25}
\]
to the merged Markov process $\hat x(t)$, $t\ge 0$, defined on the merged state space $V$ by the generating kernel (7.26). The counting process of jumps, denoted by $\hat\nu(t)$, $t\ge 0$, can be obtained as the following limit:
\[
\varepsilon\,\nu^\varepsilon(t/\varepsilon)\Longrightarrow \hat\nu(t),\quad \varepsilon\to 0.
\]
Theorem 7.2 Under Conditions A1-A5, in the state space merging scheme, the impulsive process with Markov switching in the series scheme
\[
\xi^\varepsilon(t):=\sum_{k=1}^{\nu^\varepsilon(t/\varepsilon)}\alpha^\varepsilon_k(x^\varepsilon_k),\quad t\ge 0,
\tag{7.27}
\]
converges weakly to the additive semimartingale $\xi^0(t)$, $t\ge 0$, given in (7.28), or, in the equivalent increment form (7.29). The compound Poisson processes $\xi^0_v(t)$ are defined by the generators (7.30), and $\nu^0_v(t)$ are the counting Poisson processes characterized by the intensities $q^0_v=q_v\Phi_v(1)$, or, in an explicit form, as given there.

The stochastic additive functional with semi-Markov switching is considered in the series scheme, namely (7.31), where $\eta^\varepsilon(t;x)$, $t\ge 0$, $x\in E$, $\varepsilon>0$, is a family of Markov jump processes in the series scheme defined by the generators
\[
\Gamma_\varepsilon(x)\varphi(u)=\int_{\mathbb{R}^d}\big[\varphi(u+w)-\varphi(u)\big]\,\Gamma_\varepsilon(dw;x),\quad x\in E,
\tag{7.32}
\]
switched by the semi-Markov process $x(t)$, $t\ge 0$, defined on a standard state space $(E,\mathcal{E})$ by the semi-Markov kernel
\[
Q(x,B,t)=P(x,B)\,F_x(t),\quad x\in E,\ B\in\mathcal{E},\ t\ge 0,
\]
which defines the associated Markov renewal process $x_n,\tau_n$, $n\ge 0$:
\[
Q(x,B,t)=\mathbb{P}(x_{n+1}\in B,\ \theta_{n+1}\le t\mid x_n=x)
=\mathbb{P}(x_{n+1}\in B\mid x_n=x)\,\mathbb{P}(\theta_{n+1}\le t\mid x_n=x).
\tag{7.34}
\]
Remark 7.4. Here we do not consider a drift for the processes $\eta^\varepsilon(t;x)$, $t\ge 0$, as was the case in the diffusion approximation (see Section 4.1), since only random jumps can be transformed into jumps of the limit Poisson processes. Let the following conditions hold.

C1: The switching semi-Markov process $x(t)$, $t\ge 0$, is uniformly ergodic with the stationary distribution
\[
\pi(dx)=\rho(dx)\,m(x)/m,\qquad
\rho(B)=\int_E\rho(dx)\,P(x,B),\quad \rho(E)=1.
\]

C2: Approximation of the mean jumps: (7.35) and (7.36), where $a(x)$ and $c(x)$ are bounded, that is, $|a(x)|\le a<+\infty$, $|c(x)|\le c<+\infty$.

C3: Poisson approximation condition: (7.37) for all $g\in C_3(\mathbb{R}^d)$, and the kernel $\Gamma_g(x)$ is bounded for all $g\in C_3(\mathbb{R}^d)$. The negligible terms in (7.35)-(7.37) satisfy the condition (7.38).

C4: Uniform square-integrability, where the kernel $\Gamma(dv;x)$ is defined on the measure-determining class $C_3(\mathbb{R}^d)$ by the corresponding relation.

C5: Cramér's condition.

Now, we get the following result.
Theorem 7.3 Under Assumptions C1-C5, the additive functional (7.31) converges weakly to the limit Markov process $\xi^0(t)$, $t\ge 0$.

Consider the coupled Markov process $\xi^\varepsilon(t),x^\varepsilon(t/\varepsilon)$, $t\ge 0$, on the product space $\mathbb{R}^d\times E$, defined by the generators $\mathbb{L}^\varepsilon$, $\varepsilon>0$. The domains of definition $D(\mathbb{L}^\varepsilon)$ are supposed to be dense in the space $C(\mathbb{R}^d\times E)$ of real-valued, bounded, continuous functions $\varphi(u,x)$, $u\in\mathbb{R}^d$, $x\in E$, with sup-norm
\[
\|\varphi\|=\sup_{u\in\mathbb{R}^d,\ x\in E}|\varphi(u,x)|.
\]
The first, switched, component $\xi^\varepsilon(t)$ takes values in the Euclidean space $\mathbb{R}^d$, $d\ge 1$. The second, switching, Markov component $x^\varepsilon(t/\varepsilon)$ is defined on the standard state space $(E,\mathcal{E})$ by the generator $Q^\varepsilon$ in perturbed form, with the kernel $Q^\varepsilon=Q+\varepsilon Q_1$. The switched Markov process $x^\varepsilon(t)$, $t\ge 0$, is considered on the asymptotically split state space (7.17). The merged state space $V$ is defined by the merging function (7.24).
The limit Markov process
\[
\xi^0(t),\ \hat x(t),\quad t\ge 0,
\tag{7.46}
\]
is considered on the product space $\mathbb{R}^d\times V$ and is defined by the generator $\mathbb{L}$, with domain $D(\mathbb{L})$ dense in $C(\mathbb{R}^d\times V)$.

7.3.1 Impulsive Processes as Semimartingales
Let $\mathcal{F}^x_t:=\sigma(x(s),\,0\le s\le t)$, $t\ge 0$, be the natural filtration of the Markov process $x(t)$, $t\ge 0$. Let us also define the filtration $\mathbb{F}^\varepsilon=(\mathcal{F}^\varepsilon_t,\,t\ge 0)$, $\mathcal{F}^\varepsilon_t:=\sigma(x^\varepsilon(s),\alpha^\varepsilon_k(x_k);\ 0\le s\le t,\ k\le\nu^\varepsilon(t))$, and the discrete-time filtration $\mathcal{F}^\varepsilon_n:=\sigma(x^\varepsilon_k,\alpha^\varepsilon_k(x_k);\ k\le n)$, $n\ge 0$. The semimartingale characterization of the impulsive process with Markov switching (7.27) is given by the predictable characteristics as follows.

Lemma 7.1 Under Assumptions A1-A5, the predictable characteristics $(B^\varepsilon(t),C^\varepsilon(t),\Phi^\varepsilon_t)$ of the semimartingale
\[
\xi^\varepsilon(t)=\sum_{k=1}^{\nu^\varepsilon(t/\varepsilon)}\alpha^\varepsilon_k(x^\varepsilon_k),\quad t\ge 0,
\tag{7.47}
\]
are defined as follows. The first predictable characteristic is
\[
B^\varepsilon(t)=\varepsilon\sum_{k=1}^{\nu^\varepsilon(t/\varepsilon)}b(x^\varepsilon_{k-1})+\theta^\varepsilon_b(t),\quad t\ge 0,
\]
where $b(x)=Pa(x)=\int_E P(x,dy)\,a(y)$, $x\in E$, and the predictable measure is
\[
\Phi^\varepsilon_t(g)=\varepsilon\sum_{k=1}^{\nu^\varepsilon(t/\varepsilon)}P\Phi_{x^\varepsilon_{k-1}}(g)+\theta^\varepsilon_g(t),\quad t\ge 0.
\tag{7.48}
\]
The modified second characteristic is given by (7.49). The continuous part of the second predictable characteristic is $C^\varepsilon_c(t)\equiv 0$. The negligible terms satisfy the following asymptotic condition, for every finite $T>0$:
O Z. So, renewal moments T,, n 2 0, can be described by the Markov chain x,, Cn, n 2 0, with values in E , and which transition probabilities are given in the matrix (8.34)
where: Fl(Z
 dy) := P(Z,+l = 1,G + 1 E dy I z, = 1,Cn = S),
F ~ (x d y ) := P(z,+~ = 2,
E dy
I Z,
= 2, Cn = z).
Here the transition ((1, z), (1,dy)) means that a’ E 2  dy, that is, z  y < a1 5 z  y dy. Similar interpretation holds for the other transitions. The particularity of the Markov chain zn,C,,n 2 0, is that it has a stationary distribution determined by
+
pl(dz) = ~ F , ( z ) d z , p 2 ( d ~ = ) ~Fl(~)dz. where a = l/(al
+ a2), ai = Ed,i = 1,2.
(8.35)
It is worth noticing that the densities (8.35) can be defined by the stationary residual times $\alpha^{i*}$ according to the renewal theorem:
\[
\rho_1(dz)=p_1\,f_2^*(z)\,dz,\qquad \rho_2(dz)=p_2\,f_1^*(z)\,dz,
\qquad\text{where}\quad f_i^*(z)=\bar F_i(z)/a_i.
\]
The semi-Markov kernel of the Markov renewal process $x_n,\zeta_n$, $n\ge 0$, can be calculated starting from (8.33) as follows. Set $Q_{ij}(z,dy,t)$ instead of $Q((i,z),(j,dy),t)$. We have
\[
Q_{12}(z,dy,t)=\mathbb{P}(\alpha^1>z,\ \alpha^1\in z+dy,\ \theta\le t)=F_1(z+dy)\,\mathbf{1}(z\le t),
\]
and, similarly,
\[
Q_{21}(z,dy,t)=F_2(z+dy)\,\mathbf{1}(z\le t).
\]
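The renewal-theorem form of the stationary residual-time density, $f^*(z)=\bar F(z)/a$, used in the stationary distribution above, is easy to check by simulation. A sketch for a single renewal process with Uniform(0, 2) inter-arrivals (my choice of distribution, purely illustrative), for which the mean stationary residual time is $\mathbb{E}X^2/(2\,\mathbb{E}X)=(4/3)/2=2/3$:

```python
import random

# Residual (forward recurrence) time of a renewal process observed at a
# large time T; its stationary density is f*(z) = (1 - F(z)) / a, a = E[X].
rng = random.Random(1)

def residual_at(T, rng):
    """First passage over T minus T, for Uniform(0, 2) inter-arrivals."""
    s = 0.0
    while s <= T:
        s += rng.uniform(0.0, 2.0)
    return s - T

samples = [residual_at(100.0, rng) for _ in range(2000)]
mean_residual = sum(samples) / len(samples)   # ~ E[X^2]/(2 E[X]) = 2/3
```

Note the empirical mean is 2/3 rather than the naive E[X]/2 = 1/2: this length-biasing is exactly what the factor $\bar F(z)/a$ encodes.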
\[
\bar F=\bar F_++\bar F_-,
\tag{8.45}
\]
where, as usual, $\bar F(t):=1-F(t)$. The weak convergence of the embedded SMRW,
\[
\zeta^\varepsilon(t)\Longrightarrow\zeta(t),\quad \varepsilon\to 0,
\]
holds. The limit process $\zeta(t)$ is the solution of the following martingale problem:
\[
\varphi(\zeta(t))-\int_0^t\mathbb{L}\varphi(\zeta(s))\,ds=\mu_t.
\]
Thus, the process $\zeta(t)$, $t\ge 0$, is the Ornstein-Uhlenbeck process with generator $\mathbb{L}$ given in (9.4). Now, in order to get the weak convergence of the stationary distributions, we establish the stochastic boundedness of the processes $\zeta^\varepsilon(t)$. For a Lyapunov function of the form
\[
V(u)=V_0+V_1\int_0^u e^{b(y)}\,dy,\qquad b(z)=-\int_0^z a(u)\,du\big/B^2,
\]
with $V_0>0$ and $V_1>\int_0^\infty e^{b(y)}\,dy$, we obtain the required negative-drift estimate. Hence we get $\chi^\varepsilon\Rightarrow\chi^0$. □

PROOF OF THEOREM 9.2. The proof of this theorem follows the same lines as that of Theorem 9.1. In this case, relation (9.11) becomes
Thus we have
\[
a^+_\varepsilon(u)=\begin{cases}
\varepsilon\,a(x)\,u,& u>c_0,\\
\varepsilon\,[c_0\lambda(x)+\mu(x)\,u],& u\le c_0,
\end{cases}
\]
and
\[
a^+_\varepsilon(u)+a^-_\varepsilon(u)=\begin{cases}
\varepsilon\,[2b(x)-\varepsilon\,C(x)\,u],& u>c_0,\\
\varepsilon\,[2b(x)-\varepsilon\,(c_0\lambda(x)+\mu(x)\,u)],& u\le c_0,
\end{cases}
\]
where $C(x)=\lambda(x)-\mu(x)$. From these, we can proceed as previously. □
9.2 Lévy Approximation of Impulsive Processes

9.2.1 Introduction

The impulsive processes considered here are switched by Markov processes (see Sections 2.9.1, 7.2.1, 7.2.2 and 7.3.1). Let us consider a family of random sequences $\alpha^\varepsilon_k(x)$, $k=1,2,\ldots$, $x\in E$, where $E$ is a nonempty set, indexed by the small parameter $\varepsilon>0$, and a family of jump Markov processes $x^\varepsilon(t)$, $t\ge 0$, with embedded Markov renewal processes $x^\varepsilon_k,\tau^\varepsilon_k$, $k\ge 0$, and counting processes of jumps $\nu^\varepsilon(t)$, $t\ge 0$. Thus, the times $\tau^\varepsilon_k$, $k\ge 0$, are jump times, $x^\varepsilon_k:=x^\varepsilon(\tau^\varepsilon_k)$, and $\nu^\varepsilon(t):=\max\{k\ge 0:\tau^\varepsilon_k\le t\}$. Define now the impulsive process as partial sums in a series scheme, with series parameter $\varepsilon>0$, by
\[
\xi^\varepsilon(t):=\sum_{k=1}^{\nu^\varepsilon(t/\varepsilon^2)}\alpha^\varepsilon_k(x^\varepsilon_k).
\]
The limit Lévy process obtained here has been used directly in [55] in order to model the time of ruin via a defective renewal equation. So, the results of the present section can be used directly in order to take into account a more general real situation, and the results of [55] can be used in order to get ruin time probabilities for the limit Lévy process. Since Lévy processes are now standard, Lévy approximation is quite useful for analyzing complex systems (see, e.g., [13], [155]-[156]). Moreover, they are involved in many applications, e.g., risk theory, finance, queueing, physics, etc. For background on Lévy processes see, e.g., [137], [155].

Let $(E,\mathcal{E})$ be a standard state space. Let us consider an $E$-valued cadlag
Markov jump process $x(t)$, $t\ge 0$, with generator $Q$, that is,
\[
Q\varphi(x)=q(x)\int_E P(x,dy)\,[\varphi(y)-\varphi(x)],
\]
and let $x_n,\tau_n$, $n\ge 0$, be the Markov renewal process associated to $x(t)$, $t\ge 0$. The transition probability kernel of $x_n$, $n\ge 0$, is $P(x,B)$, $x\in E$, $B\in\mathcal{E}$. Let $\nu(t)$, $t\ge 0$, be the counting process of jumps of $x(t)$, $t\ge 0$, that is, $\nu(t)=\sup\{n\ge 0:\tau_n\le t\}$. We suppose here that the process $x(t)$, $t\ge 0$, is uniformly ergodic with stationary probability $\pi(B)$, $B\in\mathcal{E}$. Thus the embedded Markov chain is uniformly ergodic too. Let $\rho(B)$, $B\in\mathcal{E}$, denote the stationary probability measure of the embedded Markov chain $x_n$, $n\ge 0$. These two probability measures are related by the relation
\[
\pi(dx)\,q(x)=q\,\rho(dx),\qquad q:=\int_E\pi(dx)\,q(x).
\]
Define the projector $\Pi$ by
\[
\Pi\varphi(x):=\int_E\pi(dy)\,\varphi(y)\,\mathbf{1}(x),
\]
where $\mathbf{1}(x)=1$ for all $x\in E$. Let us denote by $R_0$ the potential operator defined by (see Section 1.6)
\[
R_0Q=QR_0=\Pi-I.
\tag{9.12}
\]
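For a finite state space, the defining identity (9.12) can be checked numerically: with $\Pi$ the projector onto constants along $\pi$, the operator $R_0=\Pi-(Q+\Pi)^{-1}$ (one standard construction of the potential, cf. Section 1.6) satisfies $R_0Q=QR_0=\Pi-I$. A sketch with an illustrative two-state generator:

```python
import numpy as np

# Two-state generator Q with stationary row vector pi (pi @ Q = 0).
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])
pi = np.array([1/3, 2/3])
Pi = np.ones((2, 1)) @ pi[None, :]   # projector onto constants: Pi f = <pi, f> 1

# Potential operator: R0 = Pi - (Q + Pi)^{-1}; then R0 Q = Q R0 = Pi - I.
R0 = Pi - np.linalg.inv(Q + Pi)
I = np.eye(2)
lhs1, lhs2 = R0 @ Q, Q @ R0          # both should equal Pi - I
```

The check works because $Q\Pi=\Pi Q=0$, so $(Q+\Pi)^{-1}Q=I-\Pi$ and $(Q+\Pi)^{-1}\Pi=\Pi$.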
Let $\varepsilon>0$ be a small parameter and define the family of Markov processes $x^\varepsilon(t):=x(t/\varepsilon^2)$, $t\ge 0$. We formulate here a new result: the approximation by a Lévy process of the impulsive processes
\[
\xi^\varepsilon(t):=\xi_0+\sum_{k=1}^{\nu^\varepsilon(t/\varepsilon^2)}\alpha^\varepsilon_k(x^\varepsilon_k),\quad t\ge 0,\ \varepsilon>0.
\tag{9.13}
\]
For any $\varepsilon>0$ and any sequence $x_k$, $k\ge 1$, of elements of $E$, the random variables $\alpha^\varepsilon_k(x_k)$, $k\ge 1$, are supposed to be independent. Let us denote by $G^\varepsilon_x$ the distribution function of $\alpha^\varepsilon_k(x)$, that is,
\[
G^\varepsilon_x(dv):=\mathbb{P}(\alpha^\varepsilon_k(x)\in dv),\quad k\ge 0,\ \varepsilon>0,\ x\in E.
\]
It is worth noticing that the coupled process $\xi^\varepsilon(t),x^\varepsilon(t)$, $t\ge 0$, is a Markov additive process (see Section 2.5).
Let $\zeta(t)$, $t\ge 0$, be a Lévy process with characteristic exponent (cumulant) given by the Lévy-Khintchine formula
\[
\psi(u):=\frac{1}{t}\ln\mathbb{E}\,e^{iu\zeta(t)}
= iub-\frac{1}{2}\sigma^2u^2
+\int_{\mathbb{R}}\big[e^{iuv}-1-iuv\,\mathbf{1}_{\{|v|\le 1\}}\big]\,\Lambda(dv).
\tag{9.14}
\]
The generator of the coupled Markov process $\xi^\varepsilon(t),x^\varepsilon(t)$, $t\ge 0$, admits the asymptotic representation (9.17), where
\[
Q_0\varphi(x):=q(x)\int_E P(x,dy)\,\varphi(y),\qquad
b_0(x):=\int_{\mathbb{R}} v\,G_x(dv).
\]
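The Lévy-Khintchine cumulant (9.14) can be checked directly against simulation in the simplest non-trivial case, a compound Poisson process with drift ($\sigma=0$, finite Lévy measure, so no truncation term is needed); all parameters below are illustrative:

```python
import cmath
import math
import random

# zeta(t) = b t + sum_{k <= N(t)} J_k, N ~ Poisson(lam * t), J_k = +-1
# equiprobable, so E exp(iu zeta(t)) = exp(t (iub + lam (cos u - 1))).
lam, b, t, u = 0.5, 0.3, 2.0, 1.0
rng = random.Random(11)

def sample_zeta():
    # Poisson(lam * t) count via the waiting-time representation
    n, s = 0, rng.expovariate(lam)
    while s <= t:
        n += 1
        s += rng.expovariate(lam)
    return b * t + sum(rng.choice((-1.0, 1.0)) for _ in range(n))

N = 200_000
emp = sum(cmath.exp(1j * u * sample_zeta()) for _ in range(N)) / N
theory = cmath.exp(t * (1j * u * b + lam * (math.cos(u) - 1.0)))
```

With 200,000 samples the Monte Carlo characteristic function agrees with the closed-form cumulant to within a few thousandths.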
PROOF. From the definitions above we can write the corresponding decomposition. In order to apply Assumptions L3-L5, let us consider the operator $\Gamma_\varepsilon(y)$, which is transformed as follows, where the function involved belongs to $C_3(\mathbb{R})$, and
\[
b^\varepsilon(y):=\int_{\mathbb{R}} v\,G^\varepsilon_y(dv),\qquad
c_\varepsilon(y):=\int_{\mathbb{R}} v^2\,G^\varepsilon_y(dv)=\varepsilon^2\,[c(x)+\theta^\varepsilon_c(x)].
\]
Then, using Assumptions L3-L5, we get
\[
\Gamma_\varepsilon(y)\varphi(u)=\varepsilon^2\Big\{\int_{\mathbb{R}} G_y(dv)\,\Gamma_v\varphi(u)
+[\varepsilon b_1(y)+b(y)]\,\varphi'(u)+\tfrac12 c(y)\,\varphi''(u)\Big\}+o(\varepsilon^2),
\]
or, in another form,
\[
\Gamma_\varepsilon(y)\varphi(u)=\varepsilon^2\Big\{\Gamma_y\varphi(u)
+\varepsilon b_1(y)\,\varphi'(u)+[b(y)-b_0(y)]\,\varphi'(u)+\tfrac12 c(y)\,\varphi''(u)\Big\}+o(\varepsilon^2).
\]
Putting this representation in (9.16), we get (9.17). □

We will now obtain the limit generator by solving the following singular perturbation problem for the reducible-invertible operator $Q$, according to Proposition 5.2,
\[
\mathbb{L}^\varepsilon\varphi^\varepsilon(u,x)=\mathbb{L}\varphi(u)+\varepsilon\,\theta^\varepsilon(x),
\tag{9.18}
\]
for a test function $\varphi^\varepsilon(u,x)=\varphi(u)+\varepsilon\varphi_1(u,x)+\varepsilon^2\varphi_2(u,x)$. Let us define the operators
\[
Q_1:=Q_0B_1(x)
\qquad\text{and}\qquad
Q_2:=Q_0\,[B(x)+\Gamma_x+C(x)],
\]
where $B_1(x)\varphi(u):=b_1(x)\varphi'(u)$, $B(x)\varphi(u):=[b(x)-b_0(x)]\varphi'(u)$, and $C(x)\varphi(u):=\tfrac12 c(x)\varphi''(u)$, and
\[
Q_3:=Q_2+Q_1R_0Q_1.
\]
284
Note that under the balance condition we have
IIQiII = 0. Lemma 9.3
T h e asymptotic representation
[ E  ~ Q +&'&I
+ Q2][cp + &pi+ ~
~
= b~ p
+ e"(z), 2
1
i s verified by 'PI = RoQicp,
with negligible t e r m eE(z>= [Qi
+ EQzI'P~ + Q 2 ' ~ i .
T h e limit operator L can be obtained by Proposition 5.2
W ( U=) ~ [ Q + z QiRoQi]n~(u).
(9.19)
Calculation of the limit operator. Taking into account RoP = Ro+HI, and the balance condition L6, the limit operator (9.19) is represented by (9.15). Specifically, by a straightforward calculus, we obtain:
nQoWz)cp(u) ( b  bo)cp'(u),
and
In order to prove the relative compactness of the family of processes ξ^ε(t), t ≥ 0, ε > 0, we can follow the lines of Chapter 6.
PROOF OF THEOREM 9.4. For the coupled Markov process ξ^ε(t), x^ε(t), t ≥ 0, with generator L^ε, we have the following equation

φ^ε(ξ^ε(t), x^ε(t)) = φ^ε(u, x) + ∫_0^t L^ε φ^ε(ξ^ε(s), x^ε(s)) ds + γ^ε(t),   (9.20)
where γ^ε(t) is an F_t^ε-martingale, and the test functions φ^ε considered here are of the form

φ^ε(u, x) = φ(u) + εφ_1(u, x).   (9.21)

For the limit process ξ^0(t), t ≥ 0, in Theorem 9.4, we get

φ(ξ^0(t)) = φ(u) + ∫_0^t Lφ(ξ^0(s)) ds + γ^0(t),   (9.22)

where γ^0(t), t ≥ 0, is a σ(ξ^0(s), s ≤ t)-martingale. From (9.20) and (9.21), we get (9.23).
Now, from (9.22) and (9.23), we get

φ(ξ^ε(t)) − φ(ξ^0(t)) = ∫_0^t [L^ε φ^ε(ξ^ε(s), x^ε(s)) − Lφ(ξ^0(s))] ds + γ^ε(t) − γ^0(t) + εθ_1^ε(t).   (9.25)

From Lemma 9.3, we have

L^ε φ^ε(ξ^ε(s), x^ε(s)) = Lφ(ξ^ε(s)) + θ^ε(s).   (9.26)

From (9.25) and (9.26), we get

φ(ξ^ε(t)) − φ(ξ^0(t)) = γ^ε(t) − γ^0(t) + ∫_0^t [Lφ(ξ^ε(s)) − Lφ(ξ^0(s))] ds + θ_1^ε(t),

where θ_1^ε(t) := ∫_0^t θ^ε(s) ds. Since E[γ^ε(t) − γ^0(t)] = 0, the result follows from the latter equality. □
Problems to Solve
Here we give about fifty problems for the reader to solve. These problems propose results stated without proofs in the previous chapters, alternative proofs, and some extensions. They are grouped by chapter.
Chapter 1
▷ Problem 1. Prove that the generators in the examples of Section 1.2.2 are as stated there.

▷ Problem 2. Prove Proposition 1.3.

▷ Problem 3. Prove Proposition 1.4.

▷ Problem 4. Prove the Markov renewal equation (1.40).

▷ Problem 5. Prove the identities of Proposition 1.6 for R_0 given by (1.66).

▷ Problem 6. Prove Proposition 1.7.

▷ Problem 7. Prove identity (1.59).

▷ Problem 8. Prove that the generator of the backward Markov process γ(t) = t − τ_{ν(t)}, t ≥ 0, of a renewal process on ℝ_+, considered in Example 1.2, is a reducible-invertible operator.
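A hedged hint for Problem 8 (assuming the inter-arrival distribution F has a density f, with survival function F̄ = 1 − F; these are assumptions of this sketch, not hypotheses stated in the problem): between renewals the age γ(t) grows linearly, and at a renewal it is reset to 0 with hazard rate f/F̄, so the generator should act as

```latex
Q\varphi(t) = \varphi'(t) + \frac{f(t)}{\bar F(t)}\,\bigl[\varphi(0) - \varphi(t)\bigr].
```

Reducible invertibility can then be checked against the stationary density F̄(t)/m, with m = ∫_0^∞ F̄(s) ds.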
STOCHASTIC SYSTEMS IN MERGING PHASE SPACE
288
▷ Problem 9. Let x(t), t ≥ 0, be a regular semi-Markov process. 1) Prove that the process x(t), γ(t), t ≥ 0, is a Markov process (see Section 1.3.3). 2) Calculate its generator and prove that it is a reducible-invertible operator.

▷ Problem 10. Let x(t), t ≥ 0, be a jump Markov process with a standard state space (E, ℰ). Let q(x) be the intensity function, m(x) the mean jump value at x, m_2(x) the second moment of the jump at x, and Q(x, ·) the distribution of jumps at x. Prove that: 1) the compensator of the process x(t), t ≥ 0, is

a(t) = ∫_0^t q(x(s)) m(x(s)) ds;

2) the square characteristic of the local martingale μ(t) = x(t) − x(0) − a(t), t ≥ 0, is

⟨μ⟩(t) = ∫_0^t q(x(s)) m_2(x(s)) ds,

provided that m_2(x) is finite for any x ∈ E.
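A numerical sanity check of part 1) of Problem 10, for an illustrative setup chosen here (not taken from the book): with constant intensity q(x) = q and state-independent jumps of +1 with probability 0.7 and −1 otherwise, the mean jump is m = 0.4, so the compensator is a(t) = q·m·t and E[x(t) − x(0)] should match it.

```python
import random

random.seed(7)

# Hypothetical illustration of Problem 10 (not the book's example): a jump
# Markov process on R with constant intensity q(x) = q and jumps +1 w.p. 0.7
# and -1 otherwise, so m(x) = 0.4 and the compensator is a(t) = q*m*t.
# Then mu(t) = x(t) - x(0) - a(t) should have mean zero.

q, m, T, reps = 2.0, 0.4, 5.0, 4000

def terminal_value():
    t, x = 0.0, 0.0
    while True:
        t += random.expovariate(q)        # exponential holding time, rate q
        if t > T:
            return x
        x += 1 if random.random() < 0.7 else -1

mean_x = sum(terminal_value() for _ in range(reps)) / reps
compensator = q * m * T
print(mean_x, compensator)                # mean_x should be close to q*m*T
```

The Monte Carlo mean of x(T) agrees with the compensator up to statistical error, which is the martingale property of μ(t) in disguise.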
Chapter 2

▷ Problem 11. Prove Lemma 2.2.

▷ Problem 12. Prove Lemma 2.3.

▷ Problem 13. Prove Proposition 2.1.

▷ Problem 14. Prove Corollary 2.1.

▷ Problem 15. Prove Corollary 2.3.

▷ Problem 16. Let x(t), t ≥ 0, be a semi-Markov process, let x_n, τ_n, n ≥ 0, be the corresponding Markov renewal process, and let L be
its compensating operator. Define the process ζ(t), t ≥ 0, by

ζ(t) := φ(x_{ν(t)+1}, τ_{ν(t)+1}) − ∫_0^{τ_{ν(t)+1}} Lφ(x_{ν(s)}, τ_{ν(s)}) ds,

and F_t := σ({τ_n ≤ t} ∩ {(x_0, ..., x_n) ∈ B}; B ∈ ℰ^{n+1}, n ≥ 0). The function φ is measurable, and φ and Lφ are bounded. Prove that: 1) for t ≥ 0 and s ≥ 0,

E[ζ(t + s) − ζ(t) | F_t] = 0,  a.s.;

2) the process ζ(t), t ≥ 0, is not measurable with respect to F_t. So, the process ζ(t), t ≥ 0, is not an F_t-martingale.
▷ Problem 23 (Liptser's formula). Let x(t), t ≥ 0, be an irreducible jump Markov process with finite state space E = {1, ..., d} and generating matrix Q. Denote by π_i, 1 ≤ i ≤ d, its stationary distribution. Let a be a real-valued function defined on E. Let us define the following family of integral functionals

α^ε(t) = ε^{-1} ∫_0^t [a(x(s/ε²)) − Ea(x(s/ε²))] ds,  t ≥ 0, ε > 0.

Let (v_1, ..., v_{d−1}) be the solution of the following system of equations

Σ_{j=1}^{d−1} q̃_{ij} v_j = a(i) − a(d),  i = 1, ..., d − 1,

where Q̃ = (q̃_{ij}) is a nonsingular matrix obtained from Q. Deduce from Theorem 3.3 that

α^ε(t) ⇒ b w(t),  ε → 0,

where w(t), t ≥ 0, is a standard Wiener process, and b is the corresponding diffusion coefficient.
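The diffusion limit of Problem 23 can be illustrated numerically on a hypothetical two-state chain (rates and function a chosen here, not taken from the book). For E = {0, 1} with rates λ (0 → 1) and μ (1 → 0), a(0) = 1, a(1) = 0, the centered functional I(T) = ∫_0^T [a(x(s)) − â] ds satisfies Var I(T) ≈ σ²T with σ² = 2π_0π_1/(λ + μ), which is the same scaling as α^ε.

```python
import random

random.seed(1)

# Hypothetical two-state illustration (not the book's example): x(t) jumps
# 0 -> 1 at rate lam and 1 -> 0 at rate mu; a(0) = 1, a(1) = 0.  With
# pi0 = mu/(lam + mu), the centered integral functional
#     I(T) = int_0^T [a(x(s)) - a_hat] ds
# satisfies Var I(T) ~ sigma2 * T, sigma2 = 2*pi0*(1 - pi0)/(lam + mu).

lam, mu, T, reps = 1.0, 2.0, 40.0, 3000
pi0 = mu / (lam + mu)
sigma2 = 2.0 * pi0 * (1.0 - pi0) / (lam + mu)   # = 4/27 here

def integral():
    x = 0 if random.random() < pi0 else 1       # stationary start
    t, acc = 0.0, 0.0
    while t < T:
        rate = lam if x == 0 else mu
        h = min(random.expovariate(rate), T - t)
        acc += ((1.0 if x == 0 else 0.0) - pi0) * h   # exact piecewise integral
        t += h
        x = 1 - x
    return acc

var_hat = sum(integral() ** 2 for _ in range(reps)) / reps / T
print(var_hat, sigma2)
```

The sample variance per unit time matches the closed-form σ², a special case of the coefficient b² in Liptser's formula.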
▷ Problem 24. For the process of the previous problem, define the occupation time of state i,

A_i(t) = meas{s : x(s) = i, 0 ≤ s ≤ t},  t ≥ 0,

and the vector A^ε = (A_1^ε, ..., A_d^ε), suitably centered and normalized. Deduce from Liptser's formula that the vector A^ε converges weakly to a multivariate normal distribution with mean zero and covariance matrix to be determined.
▷ Problem 25. Let us consider a family of contraction semigroups Γ_t(x), t ≥ 0, x ∈ E, on a Banach space B, and the random evolution operator

Φ(t) = Γ_{τ_1}(x(0)) Γ_{τ_2 − τ_1}(x(τ_1)) ⋯ Γ_{t − τ_{ν(t)}}(x(τ_{ν(t)})).

1) Prove that Φ(t) is a contraction operator on B. 2) Prove that for any f ∈ B the mapping t ↦ Φ(t)f is continuous, differentiable at points t ≠ τ_1, τ_2, ..., and satisfies the following differential equation

(d/dt) Φ(t)f = Φ(t)Γ(x(t))f,

where Γ(x) is the generator of Γ_t(x), x ∈ E. 3) Define the expectation semigroup as follows

(Π(t)f)(x) := E_x[Φ(t)f(x(t))].

Prove that Π(t), t ≥ 0, is a contraction semigroup.

Chapter 4
▷ Problem 26. Construct the generator of the limit stochastic system û(t), x̂(t), t ≥ 0, in Theorem 4.4.

▷ Problem 27. State and prove the result of Theorem 4.6 when the limit merged Markov process x̂(t), t ≥ 0, has null generator Q̂ = 0.

▷ Problem 28. By ad hoc time scaling, give averaging results for system (4.35) in Theorem 4.6.

▷ Problem 29. State and prove the result of Theorem 4.10 when the limit merged Markov process x̂(t), t ≥ 0, is not conservative.
Chapter 5

▷ Problem 30. Prove Lemma 5.5 by using the following formula (Proposition 5.2):

LΠ = ΠQ_2(v; x)Π + ΠK(v; x)P R_0 K(v; x)Π.

▷ Problem 31. Derive the results of Proposition 5.17 for the following representation of the generator of the switching Markov process

Q^ε = Q + εQ_1 + ε²Q_2 + ε³Q_3.
Give an interpretation of this scheme.

▷ Problem 32. Give a conclusion such as in Proposition 5.5 for the following generator

L^ε = ε^{-k}Q + ε^{-k+1}Q_1 + ⋯ + ε^{-2}Q_{k−2} + ε^{-1}Q_{k−1} + Q_k,   (9.29)

for k > 4.
▷ Problem 33. Show that the generator of the random evolution in Proposition 4.1 is represented as follows

L^ε = ε^{-3}Q + ε^{-2}Q_1 + ε^{-1}Ã(x),

where the operator

Ã(x)φ(u) = ã(x)φ'(u),  ã(x) := a(x) − â,

satisfies the balance condition

ΠÃ(x)Π = 0.

Prove that the limit generator is the following

L̂ = ΠÃR_0ÃΠ.
Hint. Use Proposition 5.4.

▷ Problem 34. Prove that the generator of the random evolution in Proposition 4.2 is represented as follows

L^ε = ε^{-4}Q + ε^{-3}Q_1 + ε^{-2}Q_2 + ε^{-1}Ã(x),

where the generator Ã(x) satisfies the balance condition

Π̂ΠÃ(x)ΠΠ̂ = 0.

Prove that the limit generator is the following

L̂ = Π̂ΠÃR_0ÃΠΠ̂.
Hint. Use Proposition 5.5.

▷ Problem 35. Prove that the generator of the random evolution in Proposition 4.3 is represented as follows

L^ε = ε^{-2}Q̂ + ε^{-1}Ã̂,

where Q̂ is the generator of the twice merged Markov process x̂(t), t ≥ 0, and the operator

Ã̂ = Π̂ΠÃ(x)ΠΠ̂

satisfies the balance condition Π̂Ã̂Π̂ = 0. The limit generator has the following form

L̂ = Π̂Ã̂R̂_0Ã̂Π̂.

Hint. Use Proposition 5.3.
▷ Problem 36. Prove that the generator of the random evolution in Proposition 4.4 is represented as follows

L^ε = ε^{-3}Q + ε^{-2}Q_1 + ε^{-1}Q_2 + ε^{-1}Ã(x),

where the operator

Ã(x)φ(u) = ã(x)φ'(u),  ã(x) := a(x) − â,

satisfies the balance condition

Π̂ΠÃ(x)ΠΠ̂ = 0.

Prove that the limit generator is the following

L̂ = Π̂ΠÃR_0ÃΠΠ̂.

Hint. Use Proposition 5.5.

▷ Problem 37. Prove Theorem 4.6 by using the development of Section 5.6.1.
Chapter 6

▷ Problem 38. Let x_n, n ≥ 1, be a sequence of i.i.d. centered random variables with unit variance, and define the family of stochastic processes x^ε(t), t ≥ 0, ε > 0, by

x^ε(t) = ε Σ_{k=1}^{[t/ε²]} x_k.

Show that x^ε(t) ⇒ w(t), where w(t), t ≥ 0, is a standard Wiener process.

▷ Problem 39. Prove the diffusion approximation result of Theorem 3.4, following the calculus of Section 5.5.2 and Chapter 6.

▷ Problem 40. Let x^ε(t/ε), t ≥ 0, ε > 0, be a family of semi-Markov processes with phase space E, split as follows

E = ⋃_{k=1}^N E_k,  E_k ∩ E_{k'} = ∅,  k ≠ k'.
Let v be the merging function on E with values in {1, 2, ..., N}. Suppose that the following averaging principles are fulfilled:

v(x^ε(t/ε)) ⇒ x̂(t),  εν^ε(t/ε) ⇒ ν̂(t),

where x̂(t), t ≥ 0, is a Markov process. 1) Compute the compensating operator of

ν^ε(t) = ∫_0^t λ(x^ε(s), γ^ε(s)) ds,

and that of ν̂(t) = ∫_0^t λ̂(x̂(s)) ds. 2) Show that we have

λ̂(k) = ∫_{E_k × ℝ_+} π̃(dx × ds) λ(x, s),

where π̃ is the stationary distribution of the Markov process x^ε(t), γ^ε(t), t ≥ 0, on E × ℝ_+.
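The invariance principle of Problem 38 (Donsker's theorem) can be checked numerically with a hypothetical choice of summands (±1 fair coin flips, which are centered with unit variance; a choice made here, not in the book): at a fixed time t, x^ε(t) should be approximately N(0, t).

```python
import random

random.seed(3)

# Numerical check of Problem 38 with hypothetical summands: x_k = +/-1 fair
# coin flips are centered with unit variance, so
#     x_eps(t) = eps * sum_{k <= t/eps^2} x_k
# should be approximately N(0, t) for small eps.

eps, t, reps = 0.025, 1.0, 2000
n = round(t / eps ** 2)                   # number of summands

samples = []
for _ in range(reps):
    s = sum(1 if random.random() < 0.5 else -1 for _ in range(n))
    samples.append(eps * s)

mean = sum(samples) / reps
var = sum(v * v for v in samples) / reps - mean ** 2
print(mean, var)                          # should be near 0 and near t = 1
```

The empirical mean and variance of x^ε(1) match those of w(1), which is the one-dimensional marginal of the claimed weak convergence.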
Chapter 7

▷ Problem 41. Formulate the corresponding stochastic singular perturbation problem and prove Lemma 7.3.

▷ Problem 42. Show that the predictable characteristics (B(t), C̃(t), ν_t(g)) of the semimartingale ξ^0(t), t ≥ 0, in Theorem 7.2, relation (7.28), are given by:

B(t) = ∫_0^t b(x̂(s)) ds,  b(v) = q_v â(v),  â(v) := ∫_{E_v} ρ_v(dx) a(x).
The modified second characteristic is

C̃(t) = ∫_0^t c̃(x̂(s)) ds,  c̃(v) = q_v ∫_{E_v} ρ_v(dx) C_0(x),  v ∈ V,

where C_0(x) = ∫_ℝ u² Φ_x(du). The predictable measure ν_t(g) is obtained similarly.
▷ Problem 43. Prove that the compensating operator formula is as stated in Lemma 7.5.

▷ Problem 44. Prove that under the conditions of Theorem 7.1, the following convergence takes place

ℰ(X ∘ ξ^ε)_t ⇒ ℰ(X ∘ ξ^0)_t = ∏_{k=1}^{ν^0(t)} [1 + X(y_k)] exp(t q â_0),  ε → 0.

Chapter 8

▷ Problem 45. Show that the stationary distribution of the semi-Markov random walk (SMRW) x_n, z_n, n ≥ 0, with phase space E, is given by

ρ^±(dz) = F̄^±(z) dz / (a^+ + a^-),

where a^± = Eα^±.
▷ Problem 46. Consider a centered SMRW defined as in Section 8.4, where

b = b^+/p^+ − b^-/p^-.

Let b ≠ 0 and the third moments E[β_n^±]³ < ∞. For the notation see Section 8.4. Show that the weak convergence

ξ^ε(t) ⇒ ξ^0(t) = u + σw(t),  ε → 0,

takes place, and that the variance σ² = σ_0² + σ_1² is given by

σ_0² = 2 ∫_0^∞ [ρ^-(z) b̂_0^+(z) + ρ^+(z) b̂_0^-(z)] dz,  b̂_0^±(x) := λ^±(x) R_0^± b^±(x),

σ_1² = ∫ [ρ^+(x) C^-(x) + ρ^-(x) C^+(x)] dx,  C^±(x) := E[γ²_{n+1} | z_n = x],  x ∈ E^±.

The potential operators R_0^± are defined for the semi-Markov kernels Q^± = q(x)[P^± − I]. The process w(t) is the standard Wiener process.
Chapter 9

▷ Problem 47. Let x^ε(t), t ≥ 0, ε > 0, be a family of Markov processes with embedded Markov chain x_n^ε, n ≥ 0, and standard state space (E, ℰ); let the process ν(t), t ≥ 0, be a Poisson process with intensity q. Let the following autoregressive real-valued process a^ε(t), t ≥ 0, be defined by

a^ε(t) = a^ε(0) + ε Σ_{k=1}^{ν(t/ε)} a(a_k^ε; x_k^ε),   (9.30)

where a is a fixed real-valued function defined on ℝ × E.

1) Prove that the generator of the coupled Markov process a^ε(t), x^ε(t), t ≥ 0, can be written in terms of the operator D^ε(x), where

[D^ε(x) − I]φ(u) = φ(u + εa(u; x)) − φ(u).

2) Formulate the singular perturbation problem and find the limit generator. 3) Prove the following weak convergence result

a^ε(t) ⇒ a^0(t),

where the limit process a^0(t), t ≥ 0, (deterministic) is defined as the solution of the evolutionary equation

(d/dt) a^0(t) = â(a^0(t)),  â(u) = q ∫_E ρ(dx) a(u; x).
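The averaging result 3) of Problem 47 can be illustrated with hypothetical choices made here (not from the book): q = 1, switching values x_k i.i.d. uniform on {0, 1} as a toy stand-in for a stationary embedded chain, and a(u; x) = x − u, so that â(u) = q(0.5 − u) and the limit ODE has the explicit solution a^0(t) = 0.5 + (a(0) − 0.5)e^{−qt}.

```python
import math
import random

random.seed(11)

# Averaging demo for Problem 47 with hypothetical data: q = 1, x_k i.i.d.
# uniform on {0, 1}, a(u; x) = x - u.  Then a_hat(u) = q*(0.5 - u) and the
# limit ODE da/dt = a_hat(a) gives a0(t) = 0.5 + (a(0) - 0.5)*exp(-q*t).

q, eps, t_end, a_init, reps = 1.0, 0.01, 2.0, 2.0, 200

def path_value():
    u, t = a_init, 0.0
    while True:
        t += random.expovariate(q / eps)      # jump epochs of nu(t/eps)
        if t > t_end:
            return u
        x = random.randint(0, 1)
        u += eps * (x - u)                    # u_{k+1} = u_k + eps*a(u_k; x_k)

avg = sum(path_value() for _ in range(reps)) / reps
limit = 0.5 + (a_init - 0.5) * math.exp(-q * t_end)
print(avg, limit)
```

Averaged over a few hundred paths, a^ε(2) matches the ODE value, and single paths already fluctuate around it only at order √ε.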
▷ Problem 48. Let the process a^ε(t), t ≥ 0, of the previous problem be scaled as follows

a^ε(t) = a^ε(0) + ε Σ_{k=1}^{ν(t/ε²)} a_ε(a_k^ε; x_k^ε),   (9.31)

with a_ε(u; x) = a(u; x) + εa_1(u; x). Prove that the following weak convergence takes place

a^ε(t) ⇒ a^0(t),

where the limit diffusion process a^0(t), t ≥ 0, is defined by the generator

Lφ(u) = b(u)φ'(u) + (1/2)B(u)φ''(u),

where the drift coefficient is defined by

b(u) = q ∫_E ρ(dx) b_1(u; x),

with b_1(u; x) = a_1(u; x) + a(u; x)a'_u(u; x), and the diffusion coefficient is

B(u) = q ∫_E ρ(dx) a_0(u; x),

with a_0(u; x) = a²(u; x).
General problems

▷ Problem 49. Let ν^ε(t), t ≥ 0, ε > 0, be a family of counting processes with intensities

λ^ε(t) = ε^{-1} C(t; εν^ε(t)),  t ≥ 0, ε > 0.

Prove that the following convergence holds

εν^ε(t) ⇒ x^0(t),  ε → 0,

where x^0(t), t ≥ 0, is the solution of

(d/dt) x(t) = C(t; x(t)),  x(0) = 0.
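A fluid-limit simulation of Problem 49, with a hypothetical rate function chosen here for illustration: for C(t; x) = 1 − x the limit ODE is dx/dt = 1 − x, whose solution is x^0(t) = 1 − e^{−t}, and εν^ε(t) should track it for small ε.

```python
import math
import random

random.seed(5)

# Fluid-limit demo for Problem 49 with the hypothetical rate C(t; x) = 1 - x:
# the counting process has intensity eps^{-1} * C(t; eps*nu(t)), constant
# between events, so it can be simulated exactly with exponential gaps.

eps, t_end, reps = 0.005, 1.0, 100

def scaled_count():
    t, n = 0.0, 0
    while True:
        rate = (1.0 - eps * n) / eps          # eps^{-1} * C(t; eps*n)
        if rate <= 0.0:
            return eps * n
        t += random.expovariate(rate)
        if t > t_end:
            return eps * n
        n += 1

avg = sum(scaled_count() for _ in range(reps)) / reps
x0 = 1.0 - math.exp(-t_end)
print(avg, x0)
```

The averaged scaled count at t = 1 agrees with the ODE value 1 − e^{−1}, with per-path fluctuations of order √ε.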
▷ Problem 50. Let us consider a birth and death process ν(t), t ≥ 0, with state space E_N = {0, 1, ..., N} and jump intensities Q(i, i+1) = (N − i)λ, 0 ≤ i < N, and Q(i, i−1) = iμ, 0 < i ≤ N.

1) Put ε = 1/N and define the normalized family of processes

ν^ε(t) := εν(t/ε),  t ≥ 0, ε > 0,

on the state spaces E^ε = {u = iε : 0 ≤ i ≤ N}. Prove that ν^ε(t) ⇒ p(t), as ε → 0, where the limit process p(t), t ≥ 0, is a deterministic function obtained as the solution of the evolutionary equation

(d/dt) p(t) = C(p(t)),  with C(u) = λ(1 − u) − μu.

2) Put ε := N^{-1/2} and consider the normalized family of processes

ζ^ε(t) := εν(t/ε²) − ε^{-1}ρ,  where ρ = λ/(λ + μ).
Prove that the family ζ^ε(t) converges weakly as ε → 0.

Appendix A

Let P_n, n ≥ 1, be a relatively compact sequence in M_1(E), such that every convergent subsequence has the same limit P. Then P_n ⇒ P.

Definition A.6 1) The probability measure P ∈ M_1(E) is tight if for every ε > 0 there exists a compact subset K of E, such that P(K^c) < ε. 2) A subset M of M_1(E) is tight if, for every ε > 0, there exists a compact subset K of E such that P(K^c) < ε, for every P in M.

Theorem A.3 (Prohorov) A subset M of M_1(E) is relatively compact (for the weak topology) if and only if it is tight.

Theorem A.4 Let x_n(t), t ≥ 0, n ≥ 0, be a sequence of processes and let x(t), t ≥ 0, be a process with sample paths in D[0, ∞). 1) If x_n(t) ⇒ x(t), then

(x_n(t_1), ..., x_n(t_k)) ⇒ (x(t_1), ..., x(t_k)),  n → ∞,   (A.1)

for any finite set {t_1, ..., t_k} ⊂ D_x := {t ≥ 0 : P(x(t) = x(t−)) = 1}. 2) If the sequence x_n(t) of processes is relatively compact and there exists a dense set D ⊂ [0, ∞) such that (A.1) holds for every finite set {t_1, ..., t_k} ⊂ D, then

x_n(t) ⇒ x(t),  n → ∞.

Theorem A.5 (Skorokhod representation) Let x_n, n ≥ 1, and x be E-valued stochastic elements, and suppose that x_n ⇒ x. Then there exist stochastic elements x̃_n, n ≥ 1, and x̃, all defined on a common probability space, such that x̃_n has the same distribution as x_n, and x̃ as x, and x̃_n → x̃ a.s.

Remark A.1. The above definitions and more detailed results can be found, e.g., in [16, 35, 63, 70, 31, 32].
Appendix B

Some Limit Theorems for Stochastic Processes
The present appendix gives three theorems used in the proofs of theorems on the Poisson approximation of Chapter 7 (Theorems B.1-B.2, from [70]) and on the Lévy approximation of the SMRW in Chapter 9 (Theorem B.3, from [16]).
B.1 Two Limit Theorems for Semimartingales
Let us consider the classes of functions C_1(ℝ^d), C_2(ℝ^d), and C_3(ℝ^d), defined as follows (see [70], p. 354).

C_2(ℝ^d) is the set of all real-valued continuous bounded functions defined on ℝ^d which are zero around 0 and have a limit at infinity. C_1(ℝ^d) is the subclass of C_2(ℝ^d) of all nonnegative functions g_a(x) = (a|x| − 1)^+ ∧ 1 for all positive rationals a, with the following property: let ρ_n, ρ be positive measures on ℝ^d \ {0}, finite on the complement of any neighborhood of 0; then ρ_n f → ρf for all f ∈ C_1(ℝ^d) implies ρ_n f → ρf for all f ∈ C_2(ℝ^d). So, it is a convergence-determining class. C_3(ℝ^d) is the measure-determining class of functions φ which are real-valued, bounded, and such that φ(x)/‖x‖² → 0, as ‖x‖ → 0.

The above three classes satisfy the following inclusion relations:

C_1(ℝ^d) ⊂ C_2(ℝ^d) ⊂ C_3(ℝ^d).

Integral process ([70]). First we consider a random measure ν = {ν(ω; dt, dx); ω ∈ Ω} on (ℝ_+ × E, B_+ × ℰ), such that ν({0} × E) = 0.

Let R be a measurable function on (Ω × ℝ_+ × E, O × ℰ), where O is the optional σ-algebra on Ω × ℝ_+. Define the integral process R * ν(ω, t) by

R * ν(ω, t) := ∫_{[0,t] × E} R(ω; s, x) ν(ω; ds, dx),

when ∫_{[0,t] × E} |R(ω; s, x)| ν(ω; ds, dx) < ∞; and R * ν(ω, t) = 0 otherwise.
Theorem B.1 (Theorem VIII.2.18, p. 423, in [70]) Let x(t), t ≥ 0, be a semimartingale of an independent increment process, continuous in probability, and let ν^ε(t), t ≥ 0, and ν(t), t ≥ 0, be such that

|x| * ν^ε(t) < +∞,  |x| * ν(t) < +∞, for all t ≥ 0.

Define B'^ε(t), t ≥ 0, and B'(t), t ≥ 0, as follows

B'^ε(t) = B^ε(t) + (x − h(x)) * ν^ε(t),  B'(t) = B(t) + (x − h(x)) * ν(t),

and C̃'^ε(t), t ≥ 0, C̃'(t), t ≥ 0, as follows

C̃'^ε_{jk}(t) = C̃^ε_{jk}(t) + Σ_{s≤t} ΔB'^ε_j(s) ΔB'^ε_k(s).

If

sup_{s≤t} |B'^ε(s) − B'(s)| → 0 in probability, for all t ≥ 0,

and the analogous convergences hold for C̃'^ε(t) and for g * ν^ε(t), g ∈ C_1(ℝ), then

ℒ(x^ε) ⇒ ℙ,  ε → 0,

where ℒ(x^ε) is the law of the process x^ε(t), t ≥ 0.
B.2 A Limit Theorem for Composed Processes

Let n_ε, ε > 0, be a family of positive nonrandom numbers such that n_ε → ∞, as ε → 0; let α_k^ε, k = 1, 2, ..., ε > 0, be a family of real-valued random variables, and let a family of stochastic processes ξ^ε(t), t ≥ 0, ε > 0, be defined as follows

ξ^ε(t) = Σ_{k=1}^{[t n_ε]} α_k^ε,  t ≥ 0, ε > 0.

Let further μ^ε, ε > 0, be a family of nonnegative random variables, and set ν^ε := μ^ε/n_ε. Define now the corresponding cadlag process. Then

(ξ^ε(t), t ∈ W) ⇒ (ξ^0(t), t ∈ W),  ε → 0,

where W = ℝ_+ \ A, A is any set at most countable, and Δ_j(ξ^ε, c, T) is the modulus of compactness, defined as follows: Δ_j(