MATHEMATICAL PROGRAMMING STUDIES
Editor-in-Chief
M.L. BALINSKI, International Institute for Applied Systems Analysis, Laxenburg, Austria, and City University of New York, N.Y., U.S.A.

Senior Editors
E.M.L. BEALE, Scientific Control Systems, Ltd., London, Great Britain
GEORGE B. DANTZIG, Stanford University, Stanford, Calif., U.S.A.
L. KANTOROVICH, National Academy of Sciences, Moscow, U.S.S.R.
TJALLING C. KOOPMANS, Yale University, New Haven, Conn., U.S.A.
A.W. TUCKER, Princeton University, Princeton, N.J., U.S.A.
PHILIP WOLFE, IBM Research, Yorktown Heights, N.Y., U.S.A.

Associate Editors
PETER BOD, Hungarian Academy of Sciences, Budapest, Hungary
VACLAV CHVATAL, Stanford University, Stanford, Calif., U.S.A.
RICHARD W. COTTLE, Stanford University, Stanford, Calif., U.S.A.
J.E. DENNIS, Jr., Cornell University, Ithaca, N.Y., U.S.A.
B. CURTIS EAVES, Stanford University, Stanford, Calif., U.S.A.
R. FLETCHER, The University, Dundee, Scotland
TERJE HANSEN, Norwegian School of Economics and Business Administration, Bergen, Norway
ELI HELLERMAN, Bureau of the Census, Washington, D.C., U.S.A.
ELLIS L. JOHNSON, IBM Research, Yorktown Heights, N.Y., U.S.A.
C. LEMARECHAL, IRIA-Laboria, Le Chesnay, Yvelines, France
C.E. LEMKE, Rensselaer Polytechnic Institute, Troy, N.Y., U.S.A.
GARTH P. McCORMICK, George Washington University, Washington, D.C., U.S.A.
GEORGE L. NEMHAUSER, Cornell University, Ithaca, N.Y., U.S.A.
WERNER OETTLI, Universität Mannheim, Mannheim, West Germany
MANFRED W. PADBERG, New York University, New York, U.S.A.
L.S. SHAPLEY, The RAND Corporation, Santa Monica, Calif., U.S.A.
K. SPIELBERG, IBM Scientific Center, Philadelphia, Pa., U.S.A.
D.W. WALKUP, Washington University, Saint Louis, Mo., U.S.A.
R. WETS, University of Kentucky, Lexington, Ky., U.S.A.
C. WITZGALL, National Bureau of Standards, Washington, D.C., U.S.A.
MATHEMATICAL PROGRAMMING STUDY 5

Stochastic Systems: Modeling, Identification and Optimization, I

Edited by Roger J.-B. WETS
November 1976
NORTH-HOLLAND PUBLISHING COMPANY - AMSTERDAM
© The Mathematical Programming Society, 1976 All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.
This STUDY is also available to non-subscribers in a book edition.
Printed in The Netherlands
PREFACE
This volume and its companion on stochastic systems are essentially* the proceedings of a symposium held in Lexington, Kentucky, in June 1975. The purpose of the meeting was to bring together researchers involved in one way or another in the description and control of stochastic phenomena. One sometimes hears that if a field has reached the level where the unsolved problems are well formulated, the language has been unified, and almost everybody agrees on the fundamentals, then - from a researcher's viewpoint - the field is nearly moribund. Taking the opposite criteria as the measure of a field's vitality, stochastic systems is alive and well. Naturally, the field is more solidly established in modeling than in identification, and we know relatively more about identification than about optimization. However, since there are strong interconnections between the questions raised in the optimization, identification and modeling of stochastic processes, it is imperative to approach the problems from a global standpoint. At this juncture, unfortunately, the development of such a general theory seems to be a major undertaking that would also require from its author more than common foresight, since we have by no means even scratched the surface of possible applications of stochastic systems theory. The disparity of background of the researchers - of which those contributing to these volumes are a representative sample - gives us one more hint of the wide range of applicability of the models. This great variety of applications is also in many ways responsible for the diversity of techniques, terminologies, etc. that are being used. As already indicated above, one of the goals of the symposium was to help reduce the distance between the work going on in the different "subfields" of stochastic systems and to start the dialogue that would allow the "translation" of results from one area to another.

* A few participants were unable to meet the publication's deadlines and we regret that these proceedings are not as complete as they could have been.

In this first volume, we have included those articles that deal with modeling and identification. The second volume is exclusively devoted to
models involving - at least partial - control of the stochastic process. (The preceding paragraph indicates that these divisions must remain to some extent arbitrary.) The papers by Çinlar, Elliott and Hida are concerned with extending the class of stochastic processes that can be mathematically modeled. Çinlar shows that it is possible to associate to each semi-regenerative process (enjoying the strong Markov property) a Markov additive process. Elliott reviews results on the martingales that can be associated to a jump process. Hida extends the definition of a Brownian functional (a function of Brownian motion) so as to handle certain problems arising in stochastic control, quantum mechanics, and elsewhere. The possibility of handling stochastic models analytically, often by means of approximations, is the concern of Brockett, Holland, Kurtz, McShane, Pinsky and Zachrisson. Brockett studies a class of Ito equations of the form

dx = Ax dt + Σ_{i=1}^m B_i x dw_i + Σ_j g_j dv_j,

where the w_i and v_j are standard independent Wiener processes. He shows that if the Lie algebra generated by the B_i possesses some natural properties, then many of the features of the purely linear case can be salvaged. Kurtz shows that for a certain class of one-parameter families of Markov chains X_A(t), X_A(t)/A converges in probability as A → ∞ to the solution of a system of ordinary differential equations. He then considers approximation by diffusions. The effect of replacing a stochastic process by an idealized version of this process is studied by McShane. This is done so that convergence and approximation results can be derived. Pinsky considers a system of Ito equations in R^n and studies stability properties, in particular "asymptotic stability in probability". A number of conditions are given that guarantee this property for systems of Ito equations, and the results are applied to the Dirichlet problem Lu = 0 with L the infinitesimal operator of the Ito process.
Zachrisson is also concerned with the "reduction" of practical problems to Gauss-Markov processes. He justifies this procedure by characterizing processes in terms of their first and second order moments and then giving the "equivalent" process generated by a linear Ito equation. Holland uses a probabilistic technique to handle ordinary and boundary layer expansions derived from classes of singularly perturbed semilinear second order elliptic and parabolic partial differential equations. Finally, Benes settles an equivalence question between an innovation process and a related observation process. This result, a generalization of a result of Clark, exemplifies the difficulties one encounters when passing from the purely modeling stage to modeling with identification. In which way should we study the passage of information from the
modeler to the controller? In their contribution Ho and Sun view this as a two-person game and study the value of information, which is defined in terms of the relative strength of the player and the nature of the market. Tse shows that there is a tight coupling between the control adopted and the information collected. The probing aspect of the strategy is defined in terms of Shannon's information measure. He then gives an application to the control of a Markov process with imperfect observation. The papers of Casti, Le Breton, Lindquist and Mehra are more directly concerned with the identifiability of a system and the associated calculations. Le Breton considers the problem of parameter estimation in a linear stochastic differential equation with constant coefficients. He compares continuous and discrete sampling. Mehra suggests the use of Markov models for the one-step-ahead estimation of the values of independent variables in time series. He shows, among other things, that these predicted estimates can be used for computing maximum likelihood estimates of the unknown parameters. Convergence of parameter estimates is the object of the paper of Ljung. This is studied when the parameter set contains a true description of the system that generated the data, and also when - more realistically - we cannot assume this a priori nice situation. In view of the richness of the contributions it has only been possible to give a quick overview of the content of this volume. I hope that nevertheless this volume and its companion will help the process of cross-fertilization that seems so promising. The symposium and the preparation of these volumes would not have been possible without the help of the Mathematics Department of the University of Kentucky and the National Science Foundation. They contributed moral and financial support without hesitation.
I must single out Raymond Cox, Clifford Swanger, Robert Goor and Anders Lindquist (all of the University of Kentucky) and Robert Agins (of N.S.F.). The organizing committee - A. Balakrishnan, G. Dantzig, H. Kushner, N. Prabhu and R. Rishel - also provided us with invaluable expertise in setting up the program. Finally, we must recognize that the preparation and the running of the symposium would not have been possible without the organizational and secretarial skills of Ms Linda Newton and Pat Nichols.

Louvain, 1976
Roger J.-B. Wets
CONTENTS

Preface ... v
Contents ... viii

MODELING

(1) Extension of Clark's innovations equivalence theorem to the case of signal z independent of noise, with ∫_0^t z_s^2 ds < ∞ a.s., V. Beneš ... 2
(2) Parametrically stochastic linear differential equations, R. Brockett ... 8
(3) Entrance-exit distributions for Markov additive processes, E. Çinlar ... 22
(4) Martingales of a jump process and absolutely continuous changes of measure, R. Elliott ... 39
(5) Analysis of Brownian functionals, T. Hida ... 53
(6) Probabilistic representations of boundary layer expansions, C. Holland ... 60
(7) Limit theorems and diffusion approximations for density dependent Markov chains, T. Kurtz ... 67
(8) The choice of a stochastic model for a noise system, E. McShane ... 79
(9) Asymptotic stability and angular convergence of stochastic systems, M. Pinsky ... 93
(10) Remarks on wide sense equivalents of continuous Gauss-Markov processes in R^n, L. Zachrisson ... 103

IDENTIFICATION

(11) A reduced dimensionality method for the steady-state Kalman filter, J. Casti ... 116
(12) On continuous and discrete sampling for parameter estimation in diffusion type processes, A. Le Breton ... 124
(13) On integrals in multi-output discrete-time Kalman-Bucy filtering, A. Lindquist ... 145
(14) On consistency and identifiability, L. Ljung ... 169
(15) Identification and estimation of the error-in-variables model (EVM) in structural form, R. Mehra ... 191
(16) Value of information in zero-sum games, F. Sun and Y. Ho ... 211
(17) Sequential decision and stochastic control, E. Tse ... 227
Mathematical Programming Study 5 (1976) 2-7. North-Holland Publishing Company
EXTENSION OF CLARK'S INNOVATIONS EQUIVALENCE THEOREM TO THE CASE OF SIGNAL z INDEPENDENT OF NOISE, WITH ∫_0^t z_s^2 ds < ∞ a.s.

V.E. BENEŠ
Bell Laboratories, Murray Hill, N.J., U.S.A.

Received 15 September 1975
Revised 20 December 1975

Let z be a process, w a Brownian motion, and y_t = ∫_0^t z_s ds + w_t a noisy observation of z. The innovations process v_t, defined in terms of the estimate ẑ_t = E{z_t | y_s, 0 ≤ s ≤ t} by y_t = ∫_0^t ẑ_s ds + v_t, is also a Brownian motion. The innovations problem is to determine whether y is adapted to v. The positive answer of Clark for bounded z independent of w is extended to z independent of w with ∫_0^t z_s^2 ds < ∞ a.s.
1. Introduction
It is a classical problem of filtering theory to estimate signals from the past of observations of them corrupted by noise. A standard mathematical idealization of this problem is as follows: The signal z_t is a measurable stochastic process with E|z_t| < ∞, the noise w_t is a Brownian motion, and the observations consist of the process

y_t = ∫_0^t z_s ds + w_t.   (1)

Introduce ẑ_t = E{z_t | y_s, 0 ≤ s ≤ t}, the expected value of z_t given the past of the observations up to t. It can be shown [4] that if ∫_0^t z_s^2 ds < ∞ a.s., then there is a measurable version of ẑ with ∫_0^t ẑ_s^2 ds < ∞ a.s. The innovations process for this setup is defined to be

v_t = ∫_0^t (z_s - ẑ_s) ds + w_t,
and it is a basic result of Frost [1] and also of Kailath [2] that under weak
conditions v_t is itself a Wiener process with respect to the observations. Thus (1) is equivalent to the integral equation

y_t = ∫_0^t ẑ_s ds + v_t,   (2)

which reduces the general case (1) to that in which z is adapted to y, a special property useful in problems of absolute continuity, filtering, and detection. The innovations problem, first posed by Frost [1], is to determine when the innovations process v contains the same information as the observations y, that is, when y is adapted to v. This problem has been outstanding since about 1968, in both senses of the word, and it has drawn the attention of electrical engineers and probabilists alike. An account of the problem and its background can be found in a paper by Orey [3] and in lecture notes by Meyer [4]. It is now known that the answer to the general innovations problem is in the negative. Cirel'son has given a counterexample [7] for the following special case: Suppose that the signal z_t is a causal function a(t, y) of the observations, i.e., the signal is entirely determined by feedback from the observations. Then z = ẑ, w = v, and the problem reduces to asking whether the observations are "well-defined" in the strong sense of being adapted to the noise; for in this vestigial or degenerate case the noise is the only process left. Cirel'son's disturbing example consists of a choice of a(·,·) for which there is a unique weak solution y, nonanticipative in the sense that the future increments of w are independent of the past of y, but such that y cannot be expressed as a causal functional of w. Prior to this counterexample, several cases of the innovations problem had been settled in the affirmative. Clark [5] showed that if noise and signal are independent and the signal is bounded (uniformly in t and ω), then observations are adapted to innovations. Also, the case of Gaussian observations turns out affirmatively: it follows from results of Hitsuda [8] that in this case ẑ is a linear functional of (the past of) y and the equation (2) is solvable for y by a Neumann series. More generally, the case in which ẑ is a Lipschitz functional of y also turns out affirmatively; in practice, though, it is difficult to find out when this condition is met. We extend Clark's result to the case where signal and noise are independent and the signal is square-integrable almost surely. The method used is similar to that of Clark, as described by Meyer [4], who also states, without proof, a generalization to sup_t |z_t| < ∞ a.s.
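The setup above is easy to experiment with numerically. The following sketch (our illustration, not from the paper) takes the signal to be a single N(0,1) random variable θ, constant in time and independent of w; such a signal is unbounded but square-integrable, so it falls under the extension treated here rather than under Clark's boundedness hypothesis. For this Gaussian case the filter is explicit, ẑ_t = y_t/(1 + t), and the innovations v_t = y_t - ∫_0^t ẑ_s ds should again look like a Brownian motion, with Var(v_T) close to T:

```python
import random
import math

# Toy illustration: z_t = theta, a single N(0,1) variable independent of w.
# For this Gaussian case the conditional expectation is explicit:
#   zhat_t = E{theta | y_s, s <= t} = y_t / (1 + t),
# and the innovations v_t = y_t - int_0^t zhat_s ds should be
# (approximately) Brownian, so Var(v_T) should be close to T.

random.seed(0)
T, n_steps, n_paths = 1.0, 200, 4000
dt = T / n_steps

v_samples = []
for _ in range(n_paths):
    theta = random.gauss(0.0, 1.0)
    y, v, t = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        zhat = y / (1.0 + t)                       # current estimate of theta
        dy = theta * dt + random.gauss(0.0, math.sqrt(dt))
        y += dy
        v += dy - zhat * dt                        # dv = dy - zhat dt
        t += dt
    v_samples.append(v)

mean_v = sum(v_samples) / n_paths
var_v = sum((v - mean_v) ** 2 for v in v_samples) / n_paths
print(mean_v, var_v)
```

The empirical variance of v_T over the simulated paths should be near T = 1, consistent with v being a Wiener process.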
2. Preliminaries

The σ-algebra in probability space generated by random variables x_t, t ∈ T, with T some indexing set, is denoted by σ{x_t, t ∈ T}. Since we are assuming that z and w are completely independent, as opposed to just having the future increments of w independent of the past of z, we can represent the conditional expectation ẑ_t by Kallianpur and Striebel's formula [6]. For this and later purposes, it is convenient to introduce, with Orey [3], the functional q defined for pairs of suitable functions f, g by

q(f, g)_t = exp{∫_0^t f(s) dg(s) - ½ ∫_0^t f(s)^2 ds};

"suitable" pairs f, g will be those for which f is locally square-integrable and g is a Wiener function; for such pairs the stochastic integration in the formula for q is well-defined. With this notation, Kallianpur and Striebel's formula is

ẑ_t = E{z_t | y_s, 0 ≤ s ≤ t} = ∫ Z(dz) z_t q(z, y)_t / ∫ Z(dz) q(z, y)_t = a(t, y),   (3)

where Z is the (independent) measure for the signal process. In addition, the transformation ẑ(·) is defined by ẑ(f)_t = a(t, f). It is easy to see that the denominator Λ_t above is just q(ẑ(y), y)_t. For a quick proof, multiply by Λ_t, integrate both sides from 0 to t with respect to dy, and add one to each side. Then

1 + ∫_0^t ẑ(y)_s Λ_s dy_s = 1 + ∫_0^t ∫ Z(dz) z_s q(z, y)_s dy_s = ∫ Z(dz)[1 + ∫_0^t z_s q(z, y)_s dy_s] = Λ_t,

because dq(z, y)_s = z_s q(z, y)_s dy_s. The unique solution of this equation is just q(ẑ(y), y)_t, and we are done [4].
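The functional q can be checked numerically in a simple special case (our illustration, not from the paper): for a deterministic, locally square-integrable f and g a Wiener path, q(f, g)_t is the familiar exponential martingale, so E[q(f, w)_t] = 1:

```python
import random
import math

# Monte Carlo sanity check of
#   q(f, g)_t = exp( int_0^t f(s) dg(s) - (1/2) int_0^t f(s)^2 ds )
# for a deterministic f and g = w a Wiener path: E[q(f, w)_t] = 1
# (the exponential martingale property).

random.seed(1)
T, n_steps, n_paths = 1.0, 100, 20000
dt = T / n_steps
f = lambda s: 1.0 + 0.5 * s          # an arbitrary deterministic integrand

totals = 0.0
for _ in range(n_paths):
    stoch, quad = 0.0, 0.0
    for k in range(n_steps):
        s = k * dt
        dw = random.gauss(0.0, math.sqrt(dt))
        stoch += f(s) * dw           # int f dw (Ito sum, left endpoints)
        quad += f(s) ** 2 * dt       # int f^2 ds
    totals += math.exp(stoch - 0.5 * quad)

mean_q = totals / n_paths
print(mean_q)
```

For deterministic f the discretized stochastic integral is exactly Gaussian, so the sample mean of q should be close to one up to Monte Carlo error.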
3. Argument

We prove the following result:

Theorem. If z and w are independent, if E|z_t| < ∞ for each t, and if

P{∫_0^∞ z_s^2 ds < ∞} = 1,

then y is adapted to v.

Iterating on this argument we see that A^k g_i, A^k B_j A^m g_i, A^k B_j A^m B_{j'} A^r g_i, etc., must all belong to an invariant subspace of the collection A, B_1, B_2, ..., B_m. On the other hand, if there is such an invariant subspace which contains all the g_i, then clearly Z is singular. If the eigenvalues of L(·) have negative real parts, then the solution of (5) is bounded. Since Z(t) is monotone increasing for Z(0) = 0, this implies that there is a limit Z(∞) which satisfies (6). On the other hand, if there exists a positive definite solution P of (6), then for X(0) = X'(0) positive definite we see that the solutions of
Ẋ = AX + XA' + Σ_{i=1}^m B_i X B'_i

satisfy X(t) = X'(t) > 0 for all t > 0 and that
R. W. Brockett / Parametrically stochastic differential equations
d/dt ⟨X, P⟩ = ⟨A'P + PA + Σ_{i=1}^m B'_i P B_i, X(t)⟩ = -⟨CC', X(t)⟩ ≤ 0,

where ⟨P, X⟩ = tr(P'X) ≥ 0.
3. The diffusion equation and invariant measures
One of our goals will be to establish some properties of the invariant measures of (1). If one could solve for these measures in closed form, then this work would be unnecessary. For this reason we now investigate more fully certain special cases. Associated with the Ito equation (3) is the forward equation

∂p/∂t = -⟨∇_x, Axp⟩ + ½⟨∇_x ∇'_x, (Bxx'B' + gg')p⟩.   (7)
One may verify with some effort that

⟨∇_x, Axp⟩ = (tr A)p + ⟨∇_x p, Ax⟩,

⟨∇_x ∇'_x, Bxx'B'p⟩ = [tr(B^2) + (tr B)^2]p + ⟨∇_x p, [BB' + B'B + tr(B)(B + B')]x⟩ + ⟨Bx, (∇_x ∇'_x p)Bx⟩.
Example 2 suggests that we investigate the possibility of a Gaussian invariant measure. For p = ((2π)^n det Q)^{-1/2} exp(-½⟨x, Q^{-1}x⟩) we have

∇_x p = -Q^{-1}x p,    ∇_x ∇'_x p = (-Q^{-1} + Q^{-1}xx'Q^{-1})p.

Using these formulas we see that the steady state part of the Fokker-Planck equation is satisfied if and only if

tr(A) - ⟨Ax, Q^{-1}x⟩ - ½ tr(B^2) - ½(tr B)^2 - ½⟨Q^{-1}x, [BB' + B'B + tr(B)(B + B')]x⟩
   + ½⟨Q^{-1}x, gg'Q^{-1}x⟩ + ½⟨Bx, (Q^{-1}xx'Q^{-1} - Q^{-1})Bx⟩ = 0.   (8)

In view of the fourth degree terms in x a necessary condition for this equation to hold is that Q^{-1}B be skew symmetric. Since Q would necessarily be a solution of the steady state variance equation

QA' + AQ + BQB' + gg' = 0,   (9)

this gives an easily checked necessary condition. Moreover (9) implies

2 tr A + tr(BQB'Q^{-1}) + tr(gg'Q^{-1}) = 0.

In order to have the zero-th degree terms in (8) cancel it must happen then that tr(BQB'Q^{-1}) = -tr B^2 - (tr B)^2. But this holds if BQ = -QB', since in this case tr B = 0. A short calculation shows that the quadratic terms also sum to zero if BQ = -QB'. Putting this together with the stability condition and making the generalization to an arbitrary number of B's gives the following result.

Theorem 3. The Ito equation (1) admits a unique nondegenerate Gaussian invariant measure if and only if the following conditions are satisfied:
(i) equation (2) is controllable,
(ii) L(·) = A(·) + (·)A' + Σ_{i=1}^m B_i(·)B'_i has all its eigenvalues in Re s < 0,
(iii) the solution Q_0 of L(Q) = -GG' (which exists and is positive definite by (ii)) satisfies Q_0 B_i = -B'_i Q_0, i = 1, 2, ..., m.
In this case the invariant measure is zero mean Gaussian with variance Q_0.
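Conditions (ii) and (iii) are straightforward to test numerically. The sketch below (an illustration with hand-picked matrices, not an example from the paper) vectorizes L via Kronecker products, using vec(AQ + QA' + BQB') = (I ⊗ A + A ⊗ I + B ⊗ B) vec(Q), solves L(Q) = -GG', and checks the eigenvalue and skew-symmetry conditions; here B is skew-symmetric and G = I, so Q_0 = I should result:

```python
import numpy as np

# Numerical check of Theorem 3's conditions on a hand-picked example:
#   dx = Ax dt + Bx dw + g1 dv1 + g2 dv2,  with G = [g1 g2] = I.
# vec(AQ + QA' + BQB') = (I kron A + A kron I + B kron B) vec(Q),
# so L(Q) = -GG' becomes a linear system for vec(Q).

A = np.array([[-1.0, 2.0], [-2.0, -1.0]])
B = np.array([[0.0, 1.0], [-1.0, 0.0]])    # skew-symmetric: B' = -B
G = np.eye(2)                               # so GG' = I

n = A.shape[0]
I = np.eye(n)
Lmat = np.kron(I, A) + np.kron(A, I) + np.kron(B, B)

# condition (ii): all eigenvalues of L in Re s < 0
max_re = max(np.linalg.eigvals(Lmat).real)

# solve L(Q) = -GG'; by construction A + A' + BB' = -I, so Q0 = I
q0 = np.linalg.solve(Lmat, -(G @ G.T).reshape(-1))
Q0 = q0.reshape(n, n)

# condition (iii): Q0 B = -B' Q0  (automatic here since B is skew and Q0 = I)
skew_ok = np.allclose(Q0 @ B, -B.T @ Q0)

print(max_re, Q0, skew_ok)
```

Since L(I) = A + A' + BB' = -I is negative definite with I positive definite, the operator L is stable and the computed Q_0 recovers the identity, so all three conditions of the theorem can be confirmed mechanically.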
Notice that it can happen that the system is not hypoelliptic with B_i = 0, i.e. it may happen that (A, G) is not controllable, yet the effect of the B_i x dw_i terms is to give hypoellipticity. If we drop the controllability condition in this theorem there still may be an invariant Gaussian measure but it will be degenerate if (ii) holds, or possibly nondegenerate but nonunique if (i) and (ii) both fail. Thus if "unique" is dropped from the theorem statement, the theorem does not hold.
4. The moment equations

In this section we set up the necessary machinery to investigate the moments of equation (1). This material is classical in a certain sense but I am unaware of a satisfactory reference. Let N(n, p) denote the binomial coefficient (n + p - 1 choose p). If A: R^n → R^n, there is a family of maps A_[p], p = 1, 2, ..., which map R^{N(n,p)} into R^{N(n,p)} and which are defined as follows. Given x ∈ R^n, define x^[p] ∈ R^{N(n,p)} as

x^[p] = col(x_1^p, c_1 x_1^{p-1} x_2, ..., x_n^p).

That is, the components of x^[p] are the (suitably scaled) monomials of degree p in x_1, ..., x_n, the components of x. The scale factors c_α are chosen so as to obtain for the Euclidean norm (see [9])

‖x‖^p = ‖x^[p]‖.

The map A^[p] is then the unique map such that the following diagram commutes:

                A
    R^n ----------------> R^n
     |                     |
    [p]                   [p]
     v                     v
R^{N(n,p)} ----------> R^{N(n,p)}
              A^[p]
This process was investigated by I. Schur in connection with the theory of group representations. The entries of the matrix A^[p] are homogeneous of degree p in the entries of A. Moreover one sees easily that (AB)^[p] = A^[p] B^[p], which more or less explains the significance of the theory for group representations. We are more interested in the infinitesimal version of this process, which plays a corresponding role in the representation of Lie algebras. Suppose x(t) ∈ R^n with x(·) differentiable. If ẋ(t) = Ax(t), then
d/dt x^[p](t) = lim_{h→0} (1/h)[x^[p](t + h) - x^[p](t)] = lim_{h→0} (1/h)[(I + hA)^[p] x^[p](t) - x^[p](t)] = A_[p] x^[p](t).

If Φ_A(t, τ) represents the fundamental solution of ẋ(t) = A(t)x(t), then of course

Φ_{A_[p]}(t, τ) = [Φ_A(t, τ)]^[p],

which specializes in the constant case to e^{A_[p]t} = [e^{At}]^[p].

We now observe several properties of the "sub p" operator.
(a) [αA + βB]_[p] = α A_[p] + β B_[p].
(b) (A')_[p] = (A_[p])'.
(c) [I]_[p] = p I_{N(n,p)}; I = n-dimensional identity.
(d) If (λ_1, λ_2, ..., λ_n) are the eigenvalues of A (including possible repetitions), the spectrum of A_[p] consists of all (n + p - 1 choose p) formally distinct sums of p eigenvalues of A.
(e) If

Ā = [0  0]
    [g  A]

with Ā (n + 1) by (n + 1) and A n by n, then Ā_[p] is block lower triangular, with diagonal blocks 0, A_[1], A_[2], ..., A_[p] and with the blocks immediately below the diagonal determined by g.
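For n = 2, p = 2 everything can be written out by hand (the closed form below is worked out from the definition, not quoted from the paper): with x^[2] = (x_1^2, √2 x_1 x_2, x_2^2) the scale factor √2 gives ‖x‖^2 = ‖x^[2]‖, and differentiating ((I + hA)x)^[2] at h = 0 yields A_[2]. The sketch checks the norm identity, the infinitesimal property, and property (d) on a sample matrix:

```python
import numpy as np

# Explicit lift for n = 2, p = 2.  With x^[2] = (x1^2, sqrt(2) x1 x2, x2^2),
# differentiating ((I + hA)x)^[2] at h = 0 gives, for A = [[a, b], [c, d]],
#   A_[2] = [[2a,        sqrt(2) b, 0        ],
#            [sqrt(2) c, a + d,     sqrt(2) b],
#            [0,         sqrt(2) c, 2d       ]].

def lift2(x):
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

def sub2(A):
    (a, b), (c, d) = A
    r2 = np.sqrt(2.0)
    return np.array([[2 * a, r2 * b, 0.0],
                     [r2 * c, a + d, r2 * b],
                     [0.0, r2 * c, 2 * d]])

A = np.array([[1.0, 2.0], [2.0, -1.0]])   # symmetric, eigenvalues +-sqrt(5)
x = np.array([0.3, 0.7])

# norm property: ||x||^2 = ||x^[2]||
norm_ok = np.isclose(np.dot(x, x), np.linalg.norm(lift2(x)))

# infinitesimal property: lift((I + hA)x) ~ lift(x) + h A_[2] lift(x)
h = 1e-6
deriv = (lift2(x + h * (A @ x)) - lift2(x)) / h
deriv_ok = np.allclose(deriv, sub2(A) @ lift2(x), atol=1e-4)

# property (d): spectrum of A_[2] is all sums of two eigenvalues of A,
# here {2 sqrt(5), 0, -2 sqrt(5)}
eigs = np.sort(np.linalg.eigvals(sub2(A)).real)
s5 = np.sqrt(5.0)
spec_ok = np.allclose(eigs, [-2 * s5, 0.0, 2 * s5], atol=1e-8)

print(norm_ok, deriv_ok, spec_ok)
```

The eigenvalue check illustrates property (d) concretely: A has eigenvalues ±√5, and the three formally distinct pair sums 2√5, 0, -2√5 are exactly the spectrum of A_[2].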
The moment equations for (1) are easily expressed in terms of this notation. It is known [2] that if the g_i are zero in (1) then

d/dt E x^[p] = [(A - ½ Σ_{i=1}^m B_i^2)_[p] + ½ Σ_{i=1}^m (B_{i[p]})^2] E x^[p].
If the g_i are nonzero we can write (1) as

d [1; x] = [0 0; 0 A] [1; x] dt + Σ_{i=1}^m [0 0; 0 B_i] [1; x] dw_i + Σ_j [0 0; g_j 0] [1; x] dv_j,

where [1; x] denotes the column vector in R^{n+1} obtained by adjoining the constant 1 to the state x.
Applying the above construction together with comment (e) above gives

d/dt E x^[q] = (Ã_[q] + N_q) E x^[q] + G_q E x^[q-2],   q = 1, 2, ..., p,   (10)

with the conventions x^[0] = 1 and x^[-1] = 0; that is, the full vector E col(1, x^[1], x^[2], ..., x^[p]) is governed by a block lower triangular matrix whose diagonal blocks are 0, Ã_[1] + N_1, ..., Ã_[p] + N_p and whose G_q blocks sit two block positions below the diagonal. Here Ã = A - ½ Σ_{i=1}^m B_i^2, N_q = ½ Σ_{i=1}^m (B_{i[q]})^2, and the G_q are the coupling matrices defined by the [(q - 2), q]-th block in

½ Σ_j ([0 0; g_j 0]_[p])^2.
Thus

d/dt E x^[p] = [(A - ½ Σ_{i=1}^m B_i^2)_[p] + ½ Σ_{i=1}^m (B_{i[p]})^2] E x^[p] + G_p E x^[p-2].

We note by way of explanation that [0 0; g 0]_[p] has its nonzero entries in blocks along a diagonal immediately below the main diagonal. Thus its square has nonzero entries in blocks along a diagonal 2 below the main diagonal. The blocks in the square are the product of exactly two nonzero entries. Since ‖A_[p]‖ ≤ p‖A‖ we see that the norms of the G_p terms in (10) are bounded by

‖G_p‖ ≤ ½ Σ_i ‖g_i‖^2 p^2.   (11)
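In the scalar case (n = 1) the diagonal entries of (10) can be checked against the known lognormal moments of dx = ax dt + bx dw (a sanity check of ours, not from the paper): Ã = a - b^2/2, B_[p] = pb, N_p = ½(pb)^2, so the predicted growth rate of E x^p is pa + ½p(p - 1)b^2:

```python
import random
import math

# Scalar sanity check of the diagonal entries of (10).  For n = 1,
# dx = a x dt + b x dw, we have A~ = a - b^2/2, B_[p] = p b, and
# N_p = (1/2)(p b)^2, so the predicted moment growth rate is
#   mu_p = p (a - b^2/2) + (1/2)(p b)^2 = p a + (1/2) p (p - 1) b^2,
# the known lognormal moment exponent: E x(t)^p = x0^p exp(mu_p t).

a, b, p, t, x0 = 0.1, 0.3, 2, 1.0, 1.0
mu_p = p * (a - 0.5 * b * b) + 0.5 * (p * b) ** 2
assert abs(mu_p - (p * a + 0.5 * p * (p - 1) * b * b)) < 1e-12

# Monte Carlo check via the exact solution x(t) = x0 exp((a - b^2/2)t + b w_t)
random.seed(2)
n_paths = 50000
acc = 0.0
for _ in range(n_paths):
    w = random.gauss(0.0, math.sqrt(t))
    acc += (x0 * math.exp((a - 0.5 * b * b) * t + b * w)) ** p
mc_moment = acc / n_paths
exact = x0 ** p * math.exp(mu_p * t)
print(mc_moment, exact)
```

The simulated second moment agrees with the rate predicted by the diagonal block of (10), illustrating that the lifted equations reproduce the classical moment formulas in dimension one.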
As an immediate consequence of this representation of the moment equations we have the following theorem.
Theorem 4. Consider the Ito equation (1). If p(0, ·) is such that E(c[x(0)])^p exists for all linear functionals c, then for p = 1, 2, ..., (i) E(c[x(t)])^p exists for all t ≥ 0.

Here, (H_t) is a continuous additive functional of X, L is a transition kernel from (E, ℰ) into (E, ℰ) × (R_+, ℛ_+), and a is a positive Borel function on E (see the next section for the precise meaning). Define

U(x, f) = E^x ∫_0^∞ f(X_t, S_t) dH_t   (1.4)

for x ∈ E and f positive Borel on E × R_+. The following proposition is preparatory to the theorem after it, which is our main result.

(1.5) Proposition. Let A = {a > 0}. There exists a transition kernel (t, x, Λ) → u_t(x, Λ) from (R_+, ℛ_+) × (E, ℰ) into (E, ℰ) vanishing on E\A such that

U(x, Λ × B) = ∫_B u_t(x, Λ) dt   (1.6)

for all x ∈ E, Λ ⊂ A Borel, B ∈ ℛ_+.
E. Çinlar / Entrance-exit distributions for Markov processes

(1.7) Main Theorem. For any positive Borel measurable function f on E × E × R_+ × R_+,

E^x[f(Z_t^-, Z_t^+, R_t^-, R_t^+)] = ∫_E u_t(x, dy) a(y) f(y, y, 0, 0)
    + ∫_{E×[0,t]} U(x, dy, ds) ∫_{E×R_+} L(y, dz, t - s + du) f(y, z, t - s, u)   (1.8)

for every x ∈ E and (Lebesgue) almost every t ∈ R_+. If f(·, ·, ·, 0) = 0 identically, then the same is true for every t.
A number of further results will be given in Section 3 along with the proofs. For the time being we note the interesting deficiency of the preceding theorem: the equality is shown to hold only for almost every t. The difficulty lies with computing the probabilities of events contained in {R_t^- = 0, R_t^+ = 0}. When E is a singleton, this becomes the infamous problem resolved by Kesten [10]. Using his result, we are able to resolve the matter when X is a regular step process (and therefore, in particular, if E is countable). The matter is simpler for the event {R_t^- > 0, R_t^+ = 0}; in general, the qualifier "almost every t" cannot be removed. But again, we do not have a complete solution. To see the significance of this seemingly small matter, and also to justify the title of this paper, consider a semiregenerative process (Z_s; M) in the sense of Maisonneuve [14]. Here (Z_s) is a process, M is a right closed random set, and (Z_s) enjoys the strong Markov property at all stopping times whose graphs are contained in M. This is a slightly different, but equivalent, formulation of the "regenerative systems" of Maisonneuve [12]. Then, M is called the regeneration set, and

L_s = sup{t ≤ s : t ∈ M},    N_s = inf{t > s : t ∈ M}   (1.9)

are the last time of regeneration before s and the next time of regeneration after s. In accordance with the terminology of the boundary theory of Markov processes, the processes

(Z_s^-) = (z_{L_s -}),    (Z_s^+) = (z_{N_s})   (1.10)

are called, respectively, the exit and the entrance processes. Under reasonable conditions on M (see Jacod [9] and also Maisonneuve [15] for the precise results) it can be shown that there exists a Markov additive process (X, S) such that the entities defined by (1.2), (1.3), (1.9), (1.10) are related to each other as follows:

M = {s : S_t = s for some t ∈ R_+},   (1.11)
R_s^- = s - L_s,    Z_s^- = z_{L_s -};   (1.12)

R_s^+ = N_s - s,    Z_s^+ = z_{N_s}.   (1.13)
In other words, R_s^- and R_s^+ are the "backward and forward recurrence times" and Z_s^- and Z_s^+ are the exit and entrance states. So, R_s^+ = 0 implies that s ∈ M, that is, that s is a time of regeneration. Conversely, starting with a Markov additive process (X_t, S_t), if (Z_s) is defined by (1.3) and M by (1.11), the pair (Z_s; M) is semiregenerative in the sense of Maisonneuve [14]. It was noted by Maisonneuve [12] that semiregenerative processes may be studied by using the techniques of Neveu [15, 16] and Pyke [17, 19] via Markov additive processes. We are essentially doing just this by bringing in "renewal theoretic" thinking together with the results on Markov additive processes obtained in Çinlar [3, 4]. In fact, our techniques may be used to obtain most of the results given by Maisonneuve [12-14] and by Jacod [9], but we have limited ourselves to results which are extensions of their work. Moreover, these results are related to the last exit-first entrance decompositions for Markov processes of Getoor and Sharpe [6, 7]. We are planning to show the precise connections later; roughly, they were working with the conditional expectations of our additive process (S_t).
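The quantities in (1.9)-(1.13) are elementary to compute for a concrete regeneration set; the sketch below (with an illustrative finite set M, not an example from the paper) just evaluates L_s, N_s and the two recurrence times:

```python
import math

# Discrete illustration of (1.9)-(1.13) for a hypothetical regeneration set M.

M = [0.0, 2.0, 3.0, 7.0]   # an illustrative right-closed regeneration set

def last_next(s):
    """L_s = sup{t <= s : t in M},  N_s = inf{t > s : t in M}."""
    L = max((t for t in M if t <= s), default=-math.inf)
    N = min((t for t in M if t > s), default=math.inf)
    return L, N

def recurrence(s):
    """Backward and forward recurrence times: R-_s = s - L_s, R+_s = N_s - s."""
    L, N = last_next(s)
    return s - L, N - s

# at s = 4: last regeneration at 3, next at 7
assert last_next(4.0) == (3.0, 7.0)
assert recurrence(4.0) == (1.0, 3.0)

# at a regeneration time s = 3 the backward time vanishes; the forward time
# is positive here because this finite M does not accumulate at 3 from the right
rm, rp = recurrence(3.0)
print(rm, rp)
```

Note that with N_s defined through a strict inequality, R_s^+ = 0 can only occur where M accumulates at s from the right, which is why the event {R_t^+ = 0} is delicate for general regeneration sets.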
2. Preliminaries
Let (X, S) = (Ω, ℳ, ℳ_t, X_t, S_t, θ_t, P^x) be a Markov additive process as in the introduction. It was shown in Çinlar [4] that there is a Lévy system (H', L') for (X, S), where H' is a continuous additive functional of X and L' is a transition kernel, such that

E^x[Σ_{s≤t} f(X_{s-}, X_s, S_s - S_{s-})] = E^x ∫_0^t dH'_s ∫_{E×R_+} L'(X_s, dy, du) f(X_s, y, u)   (2.1)

for all x, t and all positive ℰ × ℰ × ℛ_+ measurable functions f. This was shown for X Hunt with a reference measure; but the work of Benveniste and Jacod [1] shows that the same is true for arbitrary standard Markov processes X. The process S can be decomposed as (see Çinlar [3] for this)
S = A + S^d + S^f   (2.2)

where A is a continuous additive functional of X, S^d is a pure jump increasing additive process which is continuous in probability with respect to P^x(· | 𝒳), and S^f is a pure jump increasing additive process whose jumps coincide with those of X, so that its jump times are fixed by X. We define

H_t = H'_t + A_t + t,   (2.3)

and let the positive ℰ-measurable functions a and h be such that

A_t = ∫_0^t a(X_s) dH_s,    H'_t = ∫_0^t h(X_s) dH_s   (2.4)

(that this is possible follows from Blumenthal and Getoor [2, Chapter VI]). Define
L(x, ·) = h(x) L'(x, ·);   (2.5)

L^d(x, B) = L(x, {x} × B),    L^f(x, Λ × B) = L(x, (Λ \ {x}) × B);   (2.6)

K(x, Λ) = L^f(x, Λ × R_+),    F(x, y, B) = L^f(x, dy × B) / K(x, dy)   (2.7)

(in fact one starts from F and K and defines L^f; see Çinlar [4] for the derivations). Then (H, K) is a Lévy system for X alone; (H, L^f) is a Lévy system for the Markov additive process (X, S^f); (H, a) defines A by (2.4); and (H, L^d) defines the conditional law of S^d given 𝒳 = σ(X_s; s ≥ 0). Finally, if τ is a jump time of X, then F(X_{τ-}, X_τ, ·) is the conditional distribution of the magnitude of the jump of S^f at τ given 𝒳. We call (H, L, a) the Lévy system of (X, S).

The following random time change reduces the complexity of the future computations. Define

G_t = inf{s : H_s > t},    X̃_t = X(G_t),   (2.9)

S̃_t = S(G_t),   (2.10)

and define Z̃_t^-, Z̃_t^+, R̃_t^-, R̃_t^+ by (1.2) and (1.3) but from (X̃, S̃). Then we have the following
(2.11) Proposition. (X̃, S̃) is a Markov additive process with a Lévy system (H̃, L, a) where H̃_t = t ∧ ζ̃. Moreover,

(Z̃_t^-, Z̃_t^+, R̃_t^-, R̃_t^+) = (Z_t^-, Z_t^+, R_t^-, R_t^+).   (2.12)

Proof. This is immediate from the definitions involved since H is strictly increasing and continuous (which makes G continuous and strictly increasing); see also Çinlar [4, Proposition (2.35)].

Note that the potential U defined by (1.4) is related to (X̃, S̃) by

U(x, f) = E^x ∫_0^∞ f(X_t, S_t) dH_t = E^x ∫_0^∞ f(X̃_t, S̃_t) dt.   (2.13)
In view of (2.11) and (2.13), it is advantageous to work with $(\hat X, \hat S)$. We will do this throughout the remainder of this paper, but will also drop the hats from the notation. In other words, we may, without loss of any generality, assume that the Lévy system (H, L, a) is such that

$H_t = t \wedge \zeta$.  (2.14)

Notations. In addition to the usual notations it will be convenient to introduce the following. For any $t \in \mathbb{R}_+ = [0, \infty)$ we write $R_t = (t, \infty)$ for the set of all real numbers to the right of t, and $B_t = [0, t]$ for the set of all numbers before t; for $B \subset \mathbb{R}_+$ we write $B - t = \{b - t \ge 0 : b \in B\}$ and $B + t = \{b + t : b \in B\}$. For any topological space G we write $\mathscr{G}$ for the set of all its Borel subsets; we write $f \in p\mathscr{G}$ to mean that f is a positive $\mathscr{G}$-measurable function on G. If N is a transition kernel from $(E, \mathscr{E})$ into $(F, \mathscr{F}) \times (G, \mathscr{G})$, we write N(x, dy, du) instead of N(x, d(y, u)), and write N(x, f, g) instead of N(x, h) whenever h has the form h(y, u) = f(y) g(u); that is,

$N(x, f, g) = \int_{F \times G} N(x, dy, du)\, f(y)\, g(u)$.  (2.15)
If N is a transition kernel from $(E, \mathscr{E})$ into $(E, \mathscr{E}) \times (\mathbb{R}_+, \mathscr{R}_+)$, and if $f \in p\mathscr{E} \times \mathscr{R}_+$, we define the "convolution" of N and f by

$N * f(x, t) = \int_{E \times B_t} N(x, dy, du)\, f(y, t - u)$.  (2.16)
If M and N are two transition kernels from $(E, \mathscr{E})$ into $(E, \mathscr{E}) \times (\mathbb{R}_+, \mathscr{R}_+)$, their convolution is defined by

$M * N(x, h) = \int_{E \times \mathbb{R}_+} M(x, dy, du) \int_{E \times \mathbb{R}_+} N(y, dz, ds)\, h(z, u + s)$,  $h \in p\mathscr{E} \times \mathscr{R}_+$.  (2.17)
The convolution operation is associative: M * (N * f) = (M * N) * f, but in general it is not commutative.
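The convolutions (2.16) and (2.17) are easy to experiment with numerically. The sketch below is purely illustrative (a hypothetical finite state space and a uniform time grid, with a kernel N(x, dy, du) stored as an array of point masses N[x, y, k]); it implements both operations and checks associativity.

```python
import numpy as np

rng = np.random.default_rng(0)
nE, nT = 3, 40          # number of states, number of time-grid points

def random_kernel():
    # probability kernel N(x, dy, du), stored as point masses N[x, y, k]
    N = rng.random((nE, nE, nT))
    return N / N.sum(axis=(1, 2), keepdims=True)

def conv_kernels(M, N):
    """(M*N)(x, dz, dv) = sum_y sum_{u+s=v} M(x, dy, du) N(y, dz, ds)  -- cf. (2.17)."""
    out = np.zeros((nE, nE, nT))
    for x in range(nE):
        for y in range(nE):
            for z in range(nE):
                # convolve the time marginals, truncating to the grid
                out[x, z] += np.convolve(M[x, y], N[y, z])[:nT]
    return out

def conv_fn(N, f):
    """(N*f)(x, t) = sum_y sum_{u<=t} N(x, dy, du) f(y, t-u)  -- cf. (2.16)."""
    out = np.zeros((nE, nT))
    for x in range(nE):
        for y in range(nE):
            out[x] += np.convolve(N[x, y], f[y])[:nT]
    return out

M, N = random_kernel(), random_kernel()
f = rng.random((nE, nT))
lhs = conv_fn(conv_kernels(M, N), f)   # (M*N)*f
rhs = conv_fn(M, conv_fn(N, f))        # M*(N*f)
assert np.allclose(lhs, rhs)           # associativity holds on the grid
```

Non-commutativity can be seen the same way: `conv_kernels(M, N)` and `conv_kernels(N, M)` generally differ.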
3. Proof of the main theorem

Let (X, S) be a Markov additive process as in the introduction, and let (H, L, a) be its Lévy system. We may and do assume $H_t = t \wedge \zeta$ without any loss of generality (see the preceding section). Let b > 0 be fixed, and define

$T = \inf\{t : S_t - S_{t-} > b\}$.  (3.1)
The following auxiliary result is of interest in itself. Recall that $R_b = (b, \infty)$.

(3.2) Proposition. For all $x \in E$ and $f \in p\mathscr{E} \times \mathscr{R}_+$,

$E^x[f(X_{T-}, S_{T-})] = E^x \int_0^T f(X_t, S_t)\, L(X_t, E, R_b)\,dt$.
Proof. Define

$S_t^b = A_t + \sum_{u \le t} (S_u - S_{u-})\, 1_{\{S_u - S_{u-} \le b\}}$.  (3.3)

Then $S_t = S_t^b$ on $\{T > t\}$ and $S_{T-} = S_{T-}^b = S_T^b$. Moreover, given $\mathscr{F}_\infty = \sigma(X_s;\ s \ge 0)$, T is conditionally independent of X and $S^b$, and

$M_t = P^x\{T > t \mid \mathscr{F}_\infty\} = \exp\left[-\int_0^t L^d(X_s, R_b)\,ds\right] \prod_{u \le t} F(X_{u-}, X_u, B_b)$

(see Çinlar [3]). Hence, with $\mathscr{H}_\infty = \sigma(X_t, S_t^b;\ t \ge 0)$, we have
$E^x[f(X_{T-}, S_{T-}) \mid \mathscr{H}_\infty] = \int_{\mathbb{R}_+} f(X_{t-}, S_{t-}^b)\,(-dM_t)$  (3.4)

$= \int_0^\infty f(X_{t-}, S_{t-}^b)\, M_t\, L^d(X_t, R_b)\,dt + \sum_{t \in \mathbb{R}_+} f(X_{t-}, S_{t-}^b)\, M_{t-}\, F(X_{t-}, X_t, R_b)$.

Now, the process $W_t = f(X_{t-}, S_{t-}^b)\, M_{t-}$ is predictable; therefore, by theorems on Lévy systems,
$E^x\left[\sum_t W_t\, F(X_{t-}, X_t, R_b)\right] = E^x \int_0^\infty dt \int_E K(X_{t-}, dy)\, F(X_{t-}, y, R_b)\, W_t$  (3.5)

$= E^x \int_0^\infty W_t\, L^f(X_{t-}, E, R_b)\,dt$.
Putting this into (3.4) while taking expectations, and noting that $X_{t-}$, $S_{t-}^b$, $M_{t-}$ can be replaced by $X_t$, $S_t^b$, $M_t$ since their jump times are countable, we get

$E^x[f(X_{T-}, S_{T-})] = E^x \int_0^\infty f(X_t, S_t^b)\, M_t \left(L^d(X_t, R_b) + L^f(X_t, E, R_b)\right) dt$

$= E^x \int_0^\infty f(X_t, S_t^b)\, L(X_t, E, R_b)\, 1_{\{T > t\}}\,dt$

$= E^x \int_0^T f(X_t, S_t)\, L(X_t, E, R_b)\,dt$,
as desired.

The following is immediate from the strong Markov property for (X, S); see Çinlar [3, p. 103].

(3.6) Proposition. Let $\tau$ be a stopping time of $(\mathscr{F}_t)$ and define $Q(x, f) = E^x[f(X_\tau, S_\tau)]$. Then, for any $f \in p\mathscr{E} \times \mathscr{R}_+$ and $x \in E$,

$U(x, f) = E^x \int_0^\tau f(X_t, S_t)\,dt + Q * U(x, f)$.
The next result is essentially the second statement of the main theorem (1.7).

(3.7) Proposition. Let b > 0, $A \times B \times C \times D \in \mathscr{E} \times \mathscr{E} \times \mathscr{R}_+ \times \mathscr{R}_+$, and

$\Gamma_t = \{Z_t^- \in A,\ Z_t^+ \in B,\ R_t^- \in C,\ R_t^+ \in b + D\}$.  (3.8)

Then,

$P^x(\Gamma_t) = \int_{A \times B_t} U(x, dy, ds)\, 1_C(t - s)\, L(y, B, t + b + D - s)$  (3.9)

for every $x \in E$ and every $t \in \mathbb{R}_+$ (recall that $B_t = [0, t]$).
Proof. Define T as in (3.1), and put

$f(x, t) = P^x(\Gamma_t)$;  $g(x, t) = P^x(\Gamma_t;\ S_T > t)$.  (3.10)

Then, by the strong Markov property for (X, S) at T, and by the additivity of S (which implies $S_{T+t} = S_T + S_t \circ \theta_T$),

$f(x, t) = g(x, t) + Q * f(x, t)$,  (3.11)

where Q is defined as in (3.6) for the stopping time T. On $\{S_T > t\}$ we have $C_t = T$, and $Z_t^- = X_{T-}$, $Z_t^+ = X_T$, $R_t^- = t - S_{T-}$, and $R_t^+ = S_T - t$. So,

$g(x, t) = P^x\{X_{T-} \in A,\ X_T \in B,\ S_{T-} \in t - C,\ S_T \in t + b + D\}$.  (3.13)
The stopping time T is totally inaccessible since S is quasi-left-continuous. Therefore, by the results of Weil [19] on conditioning on the strict past $\mathscr{M}_{T-}$, we have

$P^x\{X_T \in B,\ S_T - S_{T-} \in b + D' \mid \mathscr{M}_{T-}\} = \dfrac{L(X_{T-}, B, b + D')}{L(X_{T-}, E, R_b)}$  (3.14)

for any $B \in \mathscr{E}$ and $D' \in \mathscr{R}_+$; here we used the additivity of S, so that $S_T - S_{T-}$ is conditionally independent of $\mathscr{M}_{T-}$ given $X_{T-}$. Putting (3.14) into (3.13) we obtain

$g(x, t) = E^x\left[h(X_{T-}, t - S_{T-}) / L(X_{T-}, E, R_b)\right]$  (3.15)

where

$h(y, u) = 1_A(y)\, 1_C(u)\, L(y, B, u + b + D)$.  (3.16)
where In view of Proposition (3.2), (3.15) implies g = V* h
(3.17)
where h is as defined by (3.16) and V(x,k)=E"
k(X,,S,)dt,
Putting (3.17) into (3.11) we obtain
kEp~X~+.
(3.18)
$f = V * h + Q * f$,  (3.19)
and by Proposition (3.6) we have

$U * h = V * h + Q * U * h$.  (3.20)

It is now clear from (3.19) and (3.20) that

$f = U * h$  (3.21)
is a solution to (3.19). Note that U * h(x, t) is exactly the right-hand side of (3.9). Therefore, the proof will be complete once we show that U * h is the only solution to (3.19). To show this, let f′ and f″ be two bounded solutions of (3.19); then k = f′ − f″ satisfies

$k = Q * k$.  (3.22)

Let $Q_n$ be defined recursively by $Q_1 = Q$, $Q_{n+1} = Q_n * Q$, through the formula (2.17). Then (3.22) implies $k = Q_n * k$, and

$|k(x, t)| = |Q_n * k(x, t)| \le \cdots$

Since for $t \ge S_\infty$ we have $C_t = \infty$ and $X_\infty = \Delta$, so that $f(X_{t-}, X_t) = 0$ by the usual conventions, we get

lhs(3.30) $= E^x \int_0^\infty f(X_{t-}, X_t)\,d(1 - e_\lambda(S_t))$,  (3.32)
where "lhs(·)" stands for "the left-hand side of (·)". By the generalized version of Fubini's theorem,

$E^x\left[\int_0^\infty f(X_{t-}, X_t)\,d(-e_\lambda(S_t)) \,\Big|\, \mathscr{F}_\infty\right] = -\int_0^\infty f(X_{t-}, X_t)\,dM_t$,  (3.33)
where

$M_t = E^x[e_\lambda(S_t) \mid \mathscr{F}_\infty] = \exp\left[-\lambda \int_0^t a(X_s)\,ds - \int_0^t L^d(X_s, 1 - e_\lambda)\,ds\right] \prod_{u \le t} F(X_{u-}, X_u, e_\lambda)$  (3.34)

(see Çinlar [3] for this formula). Now (3.32), (3.33), (3.34) yield

lhs(3.30) $= E^x \int_0^\infty f(X_{t-}, X_t)\, M_t \left(\lambda a(X_t) + L^d(X_t, 1 - e_\lambda)\right) dt + E^x \sum_t f(X_{t-}, X_t)\, M_{t-}\, F(X_{t-}, X_t, 1 - e_\lambda)$.  (3.35)
The process $(M_{t-})$ is predictable; therefore, by theorems on Lévy systems, the second term on the right-hand side is equal to

$E^x \int_0^\infty dt\, M_{t-} \int_E K(X_{t-}, dy)\, f(X_{t-}, y)\, F(X_{t-}, y, 1 - e_\lambda) = E^x \int_0^\infty dt\, M_t \int_E L^f(X_t, dz, 1 - e_\lambda)\, f(X_t, z)$
by the definitions (2.6) and (2.7). Putting this in (3.35) and noting (3.34), we see that

lhs(3.30) $= E^x \int_0^\infty g(X_t)\, M_t\,dt = E^x \int_0^\infty g(X_t)\, e_\lambda(S_t)\,dt = U(x, g, e_\lambda)$  (3.36)

where

$g(y) = f(y, y)\left(\lambda a(y) + L^d(y, 1 - e_\lambda)\right) + \int L^f(y, dz, 1 - e_\lambda)\, f(y, z) = \lambda a(y) f(y, y) + \int L(y, dz, 1 - e_\lambda)\, f(y, z)$.  (3.37)
With this g, $U(x, g, e_\lambda)$ is precisely the right-hand side of (3.30); and thus the proof is complete.

Next we consider the problem of inverting the Laplace transform (3.30). Note that g defined by (3.37) can be written as

$g(y) = \lambda \left\{ a(y) f(y, y) \int \varepsilon_0(du)\, e^{-\lambda u} + \int_0^\infty \left[\int_E L(y, dz, R_u)\, f(y, z)\right] e^{-\lambda u}\,du \right\}$,  (3.38)

which has the form $\lambda \int n(y, du)\, e^{-\lambda u}$. Putting this in (3.36) we see that
$U(x, g, e_\lambda) = \lambda \int U * n(x, du)\, e^{-\lambda u}$,

and this is equal to the right-hand side of (3.30). Inverting the Laplace transforms on both sides of (3.30), we obtain

$\int_B E^x[f(Z_t^-, Z_t^+)]\,dt = \int_E U(x, dy, B)\, a(y) f(y, y) + \int_B dt \int_{E \times B_t} U(x, dy, ds) \int_E L(y, dz, R_{t-s})\, f(y, z)$  (3.39)
for every $B \in \mathscr{R}_+$. We are now ready to give the proof.

(3.40) Proof of Proposition (1.5). Choose f such that h(y) = f(y, y) is strictly positive, and let $\Lambda = \{a > 0\}$. Now the first term on the right-hand side of (3.39) is $U(x, a \cdot h, B)$, and clearly this is at most equal to the left-hand side. It follows that the measure $B \to U(x, a \cdot h, B)$ is absolutely continuous with respect to the Lebesgue measure. Let $u_t(x, ah)$ be its Radon–Nikodym derivative with respect to the Lebesgue measure. Since X is standard, its state space $(E, \mathscr{E})$ is locally compact with a countable base. Therefore, it is possible to choose this derivative such that $(t, x) \to u_t(x, ah)$ is $\mathscr{R}_+ \times \mathscr{E}$ measurable. Now, for $k \in p\mathscr{E}$, define $\tilde u_t(x, k)$ to be $u_t(x, a\tilde k)$ where $\tilde k = k / a$. By the special nature of $(E, \mathscr{E})$ again, by theorems on the existence of regular versions of conditional probabilities, we may take $\tilde u_t(x, \cdot)$ to be a measure on $\mathscr{E}$ while retaining the measurability of the mapping $(t, x) \to \tilde u_t(x, A)$ for each $A \in \mathscr{E}$. Finally, let

$u_t(x, A) = \tilde u_t(x, A \cap \Lambda)$,  $A \in \mathscr{E}$.  (3.41)
The statement (1.5) is true for this transition kernel u.

The following is immediate from Proposition (1.5) applied to (3.39).

(3.42) Theorem. For any $f \in p\mathscr{E} \times \mathscr{E}$ and $x \in E$,

$E^x[f(Z_t^-, Z_t^+)] = \int_E u_t(x, dy)\, a(y) f(y, y) + \int_{E \times B_t} U(x, dy, ds) \int_E L(y, dz, R_{t-s})\, f(y, z)$  (3.43)

for (Lebesgue) almost every $t \in \mathbb{R}_+$.
In view of Corollary (3.26), the second term on the right-hand side of (3.43) is equal to the expectation of $f(Z_t^-, Z_t^+)$ on $\{R_t^+ > 0\}$. Hence, (3.43) implies that

$Z_t^- = Z_t^+ \in \Lambda$ a.s. on $\{R_t^+ = 0\}$;

$E^x[g(Z_t^+);\ R_t^+ = 0] = u_t(x, ag)$  (3.44)
for any $g \in p\mathscr{E}$, $x \in E$, and almost every $t \in \mathbb{R}_+$.

The method of the proof of Proposition (3.7) goes through to show that

$P^x\{Z_t^- \in A,\ Z_t^+ \in B,\ R_t^- \in b + C,\ R_t^+ \in D\} = \int_{A \times B_t} U(x, dy, ds)\, 1_C(t - s - b)\, L(y, B, t - s + D)$  (3.45)

for all $A, B \in \mathscr{E}$, all $C, D \in \mathscr{R}_+$, all $t \in \mathbb{R}_+$, and all b > 0. In particular, this yields
$P^x\{Z_t^- = Z_t^+ \in A,\ R_t^- > 0,\ R_t^+ = 0\} = \int_{A \times [0, t)} U(x, dy, ds)\, L^d(y, \{t - s\})$  (3.46)

$= \int_A \int_0^t u_s(x, dy)\, L^d(y, \{t - s\})\,ds = 0$,

since the function $s \to L^d(y, \{t - s\})$ is zero everywhere except on a countable set. Hence, for any t,

$R_t^- = 0$ a.s. on $\{Z_t^- = Z_t^+ \in \Lambda;\ R_t^+ = 0\}$.  (3.47)
Conversely, Corollary (3.26) and Theorem (3.42) show that

$R_t^+ = 0$ a.s. on $\{Z_t^- = Z_t^+;\ R_t^- = 0\}$.  (3.48)

It follows that a.s. on $\{Z_t^- = Z_t^+\}$, either $R_t^- = R_t^+ = 0$, or $R_t^- > 0$ and $R_t^+ > 0$.

(3.49) Proof of Theorem (1.7). The proof now follows from these observations put together with Theorem (3.42) and Corollary (3.26).
4. From almost to all
This section is devoted to showing that under certain reasonable conditions, when X is a regular step process, Theorem (1.7) can be strengthened so that (4.8) is true for every t (instead of for almost every t). Unfortunately, our technique does not generalize to arbitrary X. We shall need the following facts concerning the case where X is trivial:
that is, where E is a singleton {x}. Then, writing $\hat u_t = u_t(x, \{x\})$, $U(ds) = U(x, \{x\}, ds)$, $\ell = L^d(x, \cdot)$, etc., we obtain from Theorem (1.7) that

$E[f(R_t^-, R_t^+)] = \hat u_t f(0, 0) + \int U(ds) \int \ell(t - s + du)\, f(t - s, u)$  (4.1)

for almost every t.
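In the trivial case (4.1) describes the backward and forward recurrence pair $(R_t^-, R_t^+)$ of an increasing Lévy process. A Monte Carlo sketch for a purely illustrative choice (d = 0, with the range of S given by partial sums of Exp(1) jump sizes — these choices are hypothetical, not from the text): by memorylessness the overshoot $R_t^+$ is then exactly Exp(1), which the simulation confirms.

```python
import numpy as np

rng = np.random.default_rng(2)
level, n_paths = 20.0, 50_000

# range of S = partial sums of Exp(1) jump sizes (60 jumps always clear level 20)
jumps = rng.exponential(1.0, size=(n_paths, 60))
sums = np.cumsum(jumps, axis=1)
first = (sums > level).argmax(axis=1)        # first point of the range beyond t
rows = np.arange(n_paths)
s_after = sums[rows, first]                  # S at the passage (> level)
s_before = np.where(first > 0, sums[rows, first - 1], 0.0)

fwd = s_after - level                        # R_t^+ : forward recurrence time
bwd = level - s_before                       # R_t^- : backward recurrence time

# overshoot of a level by Exp(1) partial sums is again Exp(1) (memorylessness)
assert abs(fwd.mean() - 1.0) < 0.05
assert abs(bwd.mean() - 1.0) < 0.05
```

For a positive drift d the pair would instead charge the diagonal event $R_t^- = R_t^+ = 0$, which is the $\hat u_t f(0,0)$ term in (4.1).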
for almost every t. This is the result which Kingman [11] obtained by inverting a triple Laplace transform in the case where d > 0 . If d > 0 , then t---~ fi, is continuous and U ( d s ) = fisds, and (4.1) holds for every t (this is due to Neveu [15]. If d = 0 and/~ (Ro) = + ~, then 8, = 0 and (4.1) holds for every t again (this is due to Kesten [10] essentially). If d = 0 and /~(Ro) < 0% then fi, -~ 0 but the restriction "almost every t" cannot be removed in the absence of further restrictions on the smoothness of/~. In fact, if/~ has a countable support and a = 0,/~ (R0) < o% then (4.1) fails to hold at every t belonging to the group generated by that support. In view of this, the only real restriction in the proposition below is on the process X. Define D={x~E:
a(x)>0
or
Ld(x, R0)=o@
(4.2)
(4.3) Proposition. Suppose X is a regular step process. Then we may take $t \to u_t(x, A)$ to be continuous, and we have

$E^x[g(Z_t^+, R_t^-, R_t^+);\ Z_t^- = Z_t^+ \in D] = \int_D u_t(x, dy)\, a(y)\, g(y, 0, 0) + \int_{D \times B_t} U(x, dy, ds) \int L^d(y, t - s + du)\, g(y, t - s, u)$  (4.4)

for all $x \in E$ and all $t \in \mathbb{R}_+$.
(4.5) Remark. Suppose X is a regular step process and $(S_t)$ is strictly increasing (which means that $(C_t)$ defined by (1.2) is continuous, which in turn means that the regeneration set M is without isolated points). In particular, Getoor [5] assumes this holds. $(S_t)$ can be strictly increasing only if D = E. Hence, Proposition (4.3) applies to this important case.

Proof of (4.3). The expectation on $\{R_t^- > 0\}$ is equal to the second term on the right-hand side of (4.4) by the second statement of Theorem (1.7). Hence, we need only show that
$P^x\{Z_t^+ \in A,\ R_t^- = 0,\ R_t^+ = 0,\ Z_t^- = Z_t^+\}$  (4.6)

is continuous in t for any $A \in \mathscr{E}$. (This is equal to $u_t(x, a 1_A)$ for almost every t; and therefore the density $t \to u_t(x, a 1_A)$ can be taken continuous and equal to (4.6) for all t.)

Let $\tau$ be the time of the first jump of X, and define $\tau_0 = 0$, $\tau_{n+1} = \tau_n + \tau \circ \theta_{\tau_n}$. Then $\tau_n$ is the time of the nth jump of X, $\tau_n \uparrow \zeta$ almost surely, and X remains constant on each interval $[\tau_n, \tau_{n+1})$. Therefore, (4.6) is equal to
$\sum_{n=0}^\infty P^x\{Z_t^- = Z_t^+ \in A,\ R_t^- = R_t^+ = 0,\ S_{\tau_n} \le t < S_{\tau_{n+1}}\} = \sum_{n=0}^\infty P^x\{X_{\tau_n} \in A,\ R_t^- = R_t^+ = 0,\ S_{\tau_n} \le t < S_{\tau_{n+1}}\}$.  (4.7)
Note that, on $\{S_{\tau_n} = t - u\}$, we have $R_t^+ = R_u^+(\theta_{\tau_n})$ and $R_t^- = R_u^-(\theta_{\tau_n})$. By the strong Markov property at $\tau_n$,

$P^x\{R_t^- = R_t^+ = 0,\ S_{\tau_n} \le t < S_{\tau_{n+1}} \mid \mathscr{M}_{\tau_n}\} = f(X_{\tau_n}, t - S_{\tau_n})\, 1_{[0, t]}(S_{\tau_n})$  (4.8)

where

$f(y, u) = P^y\{R_u^- = R_u^+ = 0,\ \tau > u\}$.  (4.9)
Starting at y, X stays there an exponential time with parameter $k(y) = K(y, E)$; and during that sojourn, S has the law of an increasing Lévy process with drift parameter a(y) and Lévy measure $L^d(y, \cdot)$. It follows from the results mentioned following (4.1) that

$f(y, t) = 0$ if $a(y) = 0$;  $f(y, t) = a(y)\, r(y, t)$ if $a(y) > 0$  (4.10)

for all t, where $r(y, \cdot)$ is the density (which exists when a(y) > 0) of the potential measure $R(y, \cdot)$ with

$\hat R(y, e_\lambda) = \left[\lambda a(y) + L^d(y, 1 - e_\lambda) + k(y)\right]^{-1}$.  (4.11)
Putting (4.7)–(4.10) together, we see that (4.6) is equal to

$\sum_{n=0}^\infty E^x\left[a(X_{\tau_n})\, r(X_{\tau_n}, t - S_{\tau_n});\ X_{\tau_n} \in A \cap \Lambda;\ S_{\tau_n} \le t\right] = \int_{(A \cap \Lambda) \times [0, t)} V(x, dy, ds)\, a(y)\, r(y, t - s)$  (4.12)

by an obvious definition for V. This is essentially a convolution, and the function $t \to r(y, t)$ is continuous (Neveu [15]). Hence, (4.12) is continuous in t, and the proof of Proposition (4.3) is complete.
(4.13) Remark. As mentioned before, the restriction "almost every t" cannot be removed on E \ D without adding other conditions of smoothness. Similarly for equalities concerning expectations on the event $\{Z_t^- \ne Z_t^+\}$.
References

[1] A. Benveniste and J. Jacod, "Systèmes de Lévy des processus de Markov", Inventiones Mathematicae 21 (1973) 183-198.
[2] R.M. Blumenthal and R.K. Getoor, Markov processes and potential theory (Academic Press, New York, 1968).
[3] E. Çinlar, "Markov additive processes II", Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 24 (1972) 94-121.
[4] E. Çinlar, "Lévy systems of Markov additive processes", Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 31 (1975) 175-185.
[5] R.K. Getoor, "Some remarks on a paper of Kingman", Advances in Applied Probability 6 (1974) 757-767.
[6] R.K. Getoor and M.J. Sharpe, "Last exit times and additive functionals", Annals of Probability 1 (1973) 550-569.
[7] R.K. Getoor and M.J. Sharpe, "Last exit decompositions and distributions", Indiana University Mathematics Journal 23 (1973) 377-404.
[8] J. Jacod, "Un théorème de renouvellement et classification pour les chaînes semi-markoviennes", Annales de l'Institut Henri Poincaré, Sec. B, 7 (1971) 83-129.
[9] J. Jacod, "Systèmes régénératifs et processus semi-markoviens", Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 31 (1974) 1-23.
[10] H. Kesten, "Hitting probabilities of single points for processes with stationary independent increments", Memoirs of the American Mathematical Society 93 (1969).
[11] J.F.C. Kingman, "Homecomings of Markov processes", Advances in Applied Probability 5 (1973) 66-102.
[12] B. Maisonneuve, "Systèmes régénératifs", Astérisque 15 (1974), Société Mathématique de France, Paris.
[13] B. Maisonneuve, "Exit systems", Annals of Probability 3 (1975) 399-411.
[14] B. Maisonneuve, "Entrance-exit results for semi-regenerative processes", Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 32 (1975) 81-94.
[15] J. Neveu, "Une généralisation des processus à accroissements positifs indépendants", Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 25 (1961) 36-61.
[16] J. Neveu, "Lattice methods and submarkovian processes", Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability 2 (1961) 347-391.
[17] R. Pyke, "Markov renewal processes: definitions and preliminary properties", The Annals of Mathematical Statistics 32 (1961) 1231-1242.
[18] R. Pyke, "Markov renewal processes with finitely many states", The Annals of Mathematical Statistics 32 (1961) 1243-1259.
[19] M. Weil, "Conditionnement par rapport au passé strict", in: Séminaire de Probabilités V, Springer Lecture Notes in Mathematics 191 (Springer, Berlin, 1971) pp. 362-372.
Mathematical Programming Study 5 (1976) 39-52. North-Holland Publishing Company
MARTINGALES OF A JUMP PROCESS AND ABSOLUTELY CONTINUOUS CHANGES OF MEASURE

Robert J. ELLIOTT

University of Hull, Hull, England

Received 15 April 1975
Revised manuscript received 20 October 1975

A process $(x_t)_{t \ge 0}$ with values in a standard Borel space $(X, \mathscr{S})$ is considered. Starting at the fixed position $z_0$ it has a single jump at the random time S to a random position $z \in X$. The probability space $\Omega$ can be taken to be $Y = (\mathbb{R}_+ \cup \{\infty\}) \times X$, and the jump time S and jump position z are described by a probability $\mu$ on $\Omega$. Distribution functions $F_t^A = \mu(]t, \infty] \times A)$ and $F_t = F_t^X$ are introduced, and the countably many discontinuities of $F_t$ are predictable stopping times. Associated with the jump process $(x_t)$ are certain basic martingales q(t, A), $A \in \mathscr{S}$. The jumps of q(t, A) at the discontinuities of $F_t$ can be considered as innovation projections, and the optional [q, q](t, A) and predictable ⟨q, q⟩(t, A) quadratic variations of q(t, A) are determined. Using the innovation projection norm, local martingales on the family of σ-fields generated by $(x_t)$ can be represented as stochastic integrals of the q martingales. Finally, it is shown how the Lévy system, or local description, of the $(x_t)$ process changes when the measure $\mu$ is replaced by an absolutely continuous measure $\tilde\mu$.
1. Introduction

Following the classical results of Kunita and Watanabe [12] for martingales on the family of σ-fields generated by a Hunt process, Boel, Varaiya and Wong [2] established a martingale representation theorem for martingales on the family of σ-fields generated by a jump process whose jump times were totally inaccessible and had no finite accumulation point. In a second paper [3] they applied their results to detection and filtering problems, and Boel in his thesis [1] discusses optimal control of jump processes using these results. Extending a technique of Chou and Meyer, a similar representation theorem, in which the jump process has at most one finite accumulation point but with no restriction on the nature of the jump times, was proved by Davis in [4]. A related optimal control result using the Lévy system, or local description, is obtained in the paper of Rishel [14].
R.J. Elliott / Martingales of a jump process
In references [7], [8], [9] and [10] we obtain representation theorems for martingales on the family of σ-fields generated by a jump process $(x_t)$ whose jump times may have both an accessible and a totally inaccessible part, and whose jump times may have finite accumulation points (indeed, accumulation points of arbitrary order). Below, for simplicity of exposition, we discuss the case when $(x_t)$ has a single jump at a random time S from its initial position $z_0$ to a random position z. The well measurable and predictable quadratic variation processes of certain basic martingales associated with the jump process are obtained and related to certain innovation projections. Using the innovation projection norms, local martingales on the family of σ-fields are represented as stochastic integrals of the basic martingales. Finally, it is shown how the Lévy system, or local description, of the jump process changes when the measure determining the time and position of the jump is replaced by an absolutely continuous measure. It is indicated how this is related to solving a stochastic differential equation driven by a jump process. Details of the latter result appear in [11], and applications are now being made to filtering and control.
2. The single jump case

Consider a stochastic process $(x_t)_{t \ge 0}$ with values in a standard Borel space $(X, \mathscr{S})$, which has a single jump at the random time S from its initial position $z_0$ to a random position $z \in X$. The underlying probability space can be taken to be $\Omega = Y = (\mathbb{R}_+ \cup \{\infty\}) \times X$ with the σ-field $\mathscr{B}$ which is the product σ-field $\mathscr{B}(\mathbb{R}_+) \times \mathscr{S}$ together with the atom $\{\infty\} \times X$. Here $\mathscr{B}(\cdot)$ denotes the Borel σ-field, and the σ-field $\mathscr{F}_t^0$ generated by the process up to time t is $\mathscr{B}([0, t]) \times (\mathscr{S} - \{z_0\})$ together with the atoms $(]t, \infty] \times X)$ and $(\mathbb{R}_+ \times \{z_0\})$. This last term covers the situation when the process has a 'zero jump' at S, so that we cannot observe when it happens. The time S and position $z \in X$ of the jump are described by a probability measure $\mu$ on $\Omega$, and we suppose that $\mu(\mathbb{R}_+ \times \{z_0\}) = 0$, so the probability of a zero jump is zero. $\mathscr{F}_t$ is the completion of $\mathscr{F}_t^0$ by adding the null sets of $\mu$. For $A \in \mathscr{S}$ write $F_t^A = \mu(]t, \infty] \times A)$ for the probability that S > t and $z \in A$. Define $F_t = F_t^X$, so that $F_t$ is right continuous and monotonic decreasing. Consequently, $F_t$
has only countably many points of discontinuity {u}, where $\Delta F_u = F_u - F_{u-} \ne 0$. Each such point u is a constant and so certainly a predictable stopping time. The measure on $(\mathbb{R}_+, \mathscr{B}(\mathbb{R}_+))$ given by $F_t^A$ is absolutely continuous with respect to that given by $F_t$, so there is a positive function $\lambda(A, s)$ such that

$F_t^A - F_0^A = \int_{]0, t]} \lambda(A, s)\,dF_s$.

Write

$\Lambda(t) = -\int_{]0, t]} dF_s / F_{s-}$.
Definition 2.1. $(\lambda, \Lambda)$ is called the Lévy system for the basic jump process $(x_t)$.

Write $\tilde\Lambda(t) = \Lambda(t \wedge S)$; then, roughly speaking, $d\tilde\Lambda(s)$ is the probability that the jump occurs at s given that it has not happened so far, and $\lambda(A, s)$ is the probability that the jump value z is in A given that it occurs at time s. Define for $A \in \mathscr{S}$:

$p(t, A) = 1_{\{t \ge S\}}\, 1_{\{z \in A\}}$,

$\tilde p(t, A) = \int_{]0, t]} \lambda(A, s)\,d\tilde\Lambda(s)$
$= -\int_{]0, t \wedge S]} \lambda(A, s)\,dF_s / F_{s-}$.

The basic martingales are described by the following result ([4, Proposition 3]):

Lemma 2.2. $q(t, A) = p(t, A) - \tilde p(t, A)$ is an $\mathscr{F}_t$ martingale.
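Lemma 2.2 can be checked by simulation in the simplest concrete case (all choices below are hypothetical illustrations: $S \sim$ Exp(1), so $F_t = e^{-t}$ and $\Lambda(t) = t$, and jump position z uniform on [0, 1] independent of S, so $\lambda(A, s) = |A|$). Then $\tilde p(t, A) = |A|\,(t \wedge S)$, and the martingale property forces $E[q(t, A)] = 0$ for every t:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
S = rng.exponential(1.0, n)        # jump time, F_t = exp(-t), so Lambda(t) = t
z = rng.uniform(0.0, 1.0, n)       # jump position, lambda(A, s) = |A|

def q(t, a_lo, a_hi):
    """q(t, A) = p(t, A) - ptilde(t, A) with A = [a_lo, a_hi]."""
    p = ((S <= t) & (a_lo <= z) & (z <= a_hi)).astype(float)
    ptilde = (a_hi - a_lo) * np.minimum(t, S)   # |A| * Lambda(t ^ S)
    return p - ptilde

for t in (0.5, 1.0, 3.0):
    assert abs(q(t, 0.2, 0.7).mean()) < 0.01    # E q(t, A) = 0 for a martingale
```

Analytically, $E\,p(t, A) = |A|(1 - e^{-t})$ and $E\,\tilde p(t, A) = |A|\,E[t \wedge S] = |A|(1 - e^{-t})$, so the two terms cancel exactly.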
3. Quadratic variation

Introduce the process

$r(t, A) = \cdots$

In [8] the following result is proved.

Theorem 4.1. Suppose $M_t$ is a square integrable $\mathscr{F}_t$ martingale. Then there is a $g \in L^2(\langle q, q\rangle)$ such that

$M_t = \int_Y 1_{\{s \le t\}}\, g(s, x)\, q(ds, dx)$  a.s.
The existence of g is a consequence of work of Davis [4]; that $g \in L^2(\langle q, q\rangle)$ is established by calculations (see [8]) from the observation that $M_\infty = \lim_{t \to \infty} M_t$ exists and is in $L^2(\Omega, \mu)$. We now indicate another proof of why $g \in L^2(\langle q, q\rangle)$, together with an explanation of the $\|\cdot\|_{\langle q, q\rangle}$ norm. Now

$\int_\Lambda g\,dq = \int_\Lambda g\,dq^c + \int_\Lambda g\,dq^d$

and, by orthogonality,

$E\left[\left(\int_\Lambda g\,dq\right)^2\right] = E\left[\left(\int_\Lambda g\,dq^c\right)^2\right] + E\left[\left(\int_\Lambda g\,dq^d\right)^2\right] = \|g\|_c^2 + \|g\|_d^2$, say,

from properties of stochastic integrals (see Doléans-Dade and Meyer [6]) and from the characterization of the quadratic variations in Section 3. If $\|g\|_t < \infty$ this is

$= \|g\|_c^2 + \|g\|_d^2 = \|g\|_{\langle q, q\rangle}^2$.
For any point of discontinuity u of $F_t$,

$\int_\Lambda g\,dq^{du} = g(u, z)\, 1_{\{S = u\}} + \frac{\Delta F_u}{F_{u-}} \int_X g(u, x)\, \lambda(dx, u)\, 1_{\{S \ge u\}}$.

However,

$E[g(u, z)\, 1_{\{S = u\}} \mid \mathscr{F}_{u-}] = -\frac{\Delta F_u}{F_{u-}} \int_X g(u, x)\, \lambda(dx, u)\, 1_{\{S \ge u\}}$,

so $\int_\Lambda g\,dq^{du}$ is the extra information in $g(u, z)\, 1_{\{S = u\}}$ that we do not have given only the information in $\mathscr{F}_{u-}$. That is, $\int_\Lambda g\,dq^{du}$ is the innovation projection of $g(u, z)\, 1_{\{S = u\}}$ given $\mathscr{F}_{u-}$. Furthermore, the square of its $L^2$ norm is

$-\int_X g(u, x)^2\, \lambda(dx, u)\, \Delta F_u - E\left[\left(\frac{\Delta F_u}{F_{u-}} \int_X g(u, x)\, \lambda(dx, u)\right)^2 1_{\{S \ge u\}}\right]$.

Therefore

$\int_\Lambda g\,dq^d = \sum_u \int_\Lambda g\,dq^{du}$

is the sum of the innovation projections at the points of discontinuity of $F_t$. By orthogonality,

$\|g\|_d^2 = \sum_u E\left[\left(\int_\Lambda g\,dq^{du}\right)^2\right]$.

$\|g\|_d^2$ is, therefore, the sum of the squares of the $L^2$ norms of the innovation projections. Having determined the meaning of the $L^2(\langle q, q\rangle)$ norm, we make the following definitions:

Definition 4.2.
$L^1(\langle q, q\rangle) = \{g : \|g\|_1 < \infty\}$, where

$\|g\|_1 = E\left[\left|\int_\Lambda g\,dq^c\right|\right] + \sum_u E\left[\left|\int_\Lambda g\,dq^{du}\right|\right]$.
$L^1_{loc}(\langle q, q\rangle) = \{g : g\,1_{\{s \le t\}} \in L^1(\langle q, q\rangle) \text{ for each } t\}$.

Let $F_n$ be the $(L^2)^\wedge(\mathbb{R}^n)$-function associated with $\varphi \in \mathscr{H}_n$ by the integral representation, and let $H^m(\mathbb{R}^n)$ be the Sobolev space of order m on $\mathbb{R}^n$ with the norm $|||\cdot|||_m$. Set

$\mathscr{H}_n^{(n)} = \{\varphi \in \mathscr{H}_n : F \in H^{(n+1)/2}(\mathbb{R}^n) \cap (L^2)^\wedge(\mathbb{R}^n)\}$.  (8)
If we introduce a norm $\|\cdot\|_n$ in $\mathscr{H}_n^{(n)}$ by

$\|\varphi\|_n = (n!)^{1/2}\, |||F|||_{(n+1)/2}$,

then $\mathscr{H}_n^{(n)}$ is a Hilbert space and the injection of $\mathscr{H}_n^{(n)}$ into $\mathscr{H}_n$ is continuous. We are now able to define the dual space $\mathscr{H}_n^{(-n)}$ of $\mathscr{H}_n^{(n)}$ in such a way that with any G in $H^{-(n+1)/2}(\mathbb{R}^n)$ we can associate a member $\psi_G$ of $\mathscr{H}_n^{(-n)}$ satisfying

$(\psi_G, \varphi)_n = n!\,\langle G, F\rangle_n$  (9)

for any $\varphi$ in $\mathscr{H}_n^{(n)}$ with the integral representation $F \in (L^2)^\wedge(\mathbb{R}^n)$, where $(\cdot, \cdot)_n$ (respectively $\langle \cdot, \cdot\rangle_n$) is the canonical bilinear form which links $\mathscr{H}_n^{(n)}$ and $\mathscr{H}_n^{(-n)}$ (respectively $H^{(n+1)/2}(\mathbb{R}^n)$ and $H^{-(n+1)/2}(\mathbb{R}^n)$). We are thus given a triple
YG") c ~,,
c
~.-"',
(10)
where the injections from left to right are both continuous. Let II" tl-. be the norm in ~ . "), and define
t n
(L~) - =
n~O
)
the dual space of (L2) §
Then the following theorem is straightforward.
T. Hida / Analysis of Brownian functionals

Theorem 2. Any $\varphi$ in $(L^2)^-$ is expressed in the form

$\varphi = \sum_{n=0}^\infty \varphi_n$,  $\varphi_n \in \mathscr{H}_n^{(-n)}$,  (11)

with the property that

$\sum_{n=0}^\infty \|\varphi_n\|_{-n} < \infty$.  (12)
A member of $(L^2)^-$ is said to be a generalized Brownian functional. Thus we have been able to generalize Brownian functionals with the help of the integral representation and Sobolev spaces. The following examples will serve to illustrate those functionals in $(L^2)^+$ and $(L^2)^-$.

Example 1.¹ Let $\varphi(x)$ be in $\mathscr{H}_2$ and assume that it is a positive quadratic functional minus its expectation. This implies that the associated $(L^2)^\wedge(\mathbb{R}^2)$-function F(u, v) is expressible, in terms of an eigensystem $\{\lambda_n, \eta_n;\ n \ge 1\}$, as
$F(u, v) = \sum_n \frac{1}{\lambda_n}\, \eta_n(u)\, \eta_n(v)$,  $\lambda_n > 0$,  (13)
where $\sum_n 1/\lambda_n < \infty$ and the series (13) converges uniformly (see [6]). The original functional itself can be expressed as

$\varphi(x) = \sum_n \frac{1}{\lambda_n}\left((x, \eta_n)^2 - 1\right)$  in $(L^2)$  (14)

(cf. [3]). If we write it in this form, then we see that $\varphi(x)$ is a continuous function on a Hilbert space, since $x \to \{(x, \eta_n),\ n \ge 1\}$ presents a coordinate representation of x (we add, if necessary, some $\eta_n$'s so that $\{\eta_n\}$ becomes a complete orthonormal system in $L^2(\mathbb{R}^1)$).
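Example 1 is easy to simulate. With the hypothetical choice of eigenvalues $\lambda_n = n^2$ (not from the text, just an admissible example with $\sum 1/\lambda_n < \infty$), the coordinates $(x, \eta_n)$ are i.i.d. standard normals, the series (14) converges in $(L^2)$, and its variance is $\sum_n 2/\lambda_n^2 = 2\sum_n n^{-4} = \pi^4/45$:

```python
import numpy as np

rng = np.random.default_rng(4)
n_terms, n_samples = 100, 50_000

# (x, eta_n) are i.i.d. standard normals for an orthonormal system {eta_n}
xi = rng.standard_normal((n_samples, n_terms))
lam = np.arange(1, n_terms + 1) ** 2          # hypothetical eigenvalues n^2
phi = ((xi ** 2 - 1) / lam).sum(axis=1)       # partial sum of (14)

assert abs(phi.mean()) < 0.03                 # centred, as in (14)
assert abs(phi.var() - np.pi ** 4 / 45) < 0.15  # Var = sum 2 / lam_n^2
```

Truncating at 100 terms changes the variance only by about $2\sum_{n>100} n^{-4} \approx 7\cdot 10^{-7}$, far below the Monte Carlo error.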
Example 2. Consider a quadratic functional

$U(\xi) = \int f(u)\, \xi(u)^2\,du$,  $f \in H^1(\mathbb{R}^1)$,  (15)

defined on $\mathscr{S}$. Such a functional never appears in the integral representation of $\mathscr{H}_2$; however, it does appear when we extend the representation to the class $\mathscr{H}_2^{(-2)}$. Indeed, if we would express $U(\xi)$ in the form

$U(\xi) = \iint f(u)\, \delta(u - v)\, \xi(u)\, \xi(v)\,du\,dv$,  (15′)

¹ The author enjoyed discussions with Professor G. Kallianpur.
then the symmetric function $f(u)\,\delta(u - v)$, being in $H^{-3/2}(\mathbb{R}^2)$, could be associated with a member, say $\psi(x)$, in $\mathscr{H}_2^{(-2)}$. Let us make an observation on this functional $\psi(x)$. A system $\{(x, \chi_{[t \wedge 0,\ t \vee 0]});\ t \in \mathbb{R}^1\}$ in $\mathscr{H}_1$ is taken to be a version of Brownian motion. Consider a functional
$\psi_n(x) = \sum_i f(u_i)\left\{\left(\frac{(x, \chi_{\Delta_i})}{\sqrt{|\Delta_i|}}\right)^2 - 1\right\}$,  $u_i \in \Delta_i$,  (16)

where $\{\Delta_i\}$ is the system of intervals with length $|\Delta_i| = 1/n$ and $\bigcup_i \Delta_i = \mathbb{R}^1$. We can associate the $(L^2)^\wedge(\mathbb{R}^2)$-function $F_n(u, v) = \sum_i n\, f(u_i)\, \chi_{\Delta_i \times \Delta_i}(u, v)$ with $\psi_n(x)$. The $F_n$ does not converge in $(L^2)^\wedge(\mathbb{R}^2)$ but does in $H^{-3/2}(\mathbb{R}^2)$, with the limit $f(u)\,\delta(u - v)$. Thus we claim that the limit of the $\psi_n(x)$ does not exist in $\mathscr{H}_2$, while the limit can be found in $\mathscr{H}_2^{(-2)}$; the limit must be the same as $\psi(x)$. A formal expression for $\psi(x)$ can now be introduced: suggested by (16), and noting $(dB(t)/\sqrt{dt})^2 = \dot B(t)^2\,dt$, we may write

$\psi(x) = \int f(u)\left\{\dot B(u)^2 - E(\dot B(u)^2)\right\} du$.  (17)
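The failure of $\psi_n$ to converge in $(L^2)$ can be seen numerically. With the hypothetical choice $f \equiv 1$ on [0, 1] and $f = 0$ elsewhere (an assumption for illustration only), $\psi_n$ in (16) is a centred chi-square sum whose variance $2\sum_i f(u_i)^2 = 2n$ grows without bound as the mesh $1/n \to 0$:

```python
import numpy as np

rng = np.random.default_rng(5)
n_samples = 20_000

def psi(n):
    """psi_n of (16) for f = 1 on [0,1]: n intervals of length 1/n."""
    dB = rng.standard_normal((n_samples, n)) / np.sqrt(n)   # Brownian increments
    return ((dB * np.sqrt(n)) ** 2 - 1).sum(axis=1)

v10, v40 = psi(10).var(), psi(40).var()
# Var(psi_n) = 2n, so quadrupling n roughly quadruples the variance
assert 2.5 < v40 / v10 < 6.0
```

In contrast, the pairing of the limit with a fixed smooth test functional stays finite, which is exactly why the limit lives in $\mathscr{H}_2^{(-2)}$ rather than $\mathscr{H}_2$.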
Finally we note that for $\psi(x)$ in $\mathscr{H}_2^{(-2)}$ as above and $\varphi \in \mathscr{H}_2^{(2)}$ with the integral representation F we have

$(\psi, \varphi)_2 = 2\iint F(u, v)\, f(u)\, \delta(u - v)\,du\,dv = 2\int F(u, u)\, f(u)\,du$

(use the trace theorem for F; see, e.g., [5]).
4. Multiplication by $\dot B(t)$

As we have seen in Section 3, we are able to deal with $\dot B(t)$. In this section we shall show that the multiplication of an $(L^2)^+$-functional by $\dot B(t)$ is possible and that the product is in $(L^2)^-$. Again we shall make use of the integral representation. Since the system $\{e^{i(x, \eta)};\ \eta \in \mathscr{S}\}$ of exponential functionals generates the $(L^2)$ space, we start with the product

$\dot B(t)\, e^{i(x, \eta)}$,  $\eta \in \mathscr{S}$,  $t \in \mathbb{R}^1$ fixed,  (18)

where B(t) is taken to be $(x, \chi_{[t \wedge 0,\ t \vee 0]})$ as in Section 3. The generalized functional given by (18) is approximated in $(L^2)^-$ by

$(x, \chi_\Delta / |\Delta|)\, e^{i(x, \eta)}$.
Applying the transformation $\mathscr{T}$ to this approximation yields

$i\left(\xi + \eta, \frac{\chi_\Delta}{|\Delta|}\right) \exp\left[-\tfrac{1}{2}\|\eta\|^2 - (\xi, \eta)\right] C(\xi)$.

As $|\Delta| \to 0$, $\Delta$ containing t, this transformation tends to

$i\left(\xi(t) + \eta(t)\right) \exp\left[-\tfrac{1}{2}\|\eta\|^2 - (\xi, \eta)\right] C(\xi)$.  (19)

The fact that the transform of the approximation tends to the quantity (19) enables us to prove

$\mathscr{T}\left(\dot B(t)\, H_n((x, \eta)/\sqrt{2})\right)(\xi) = i^{n+1}\, 2^{n/2}\left\{\xi(t)(\xi, \eta)^n + n\,\eta(t)(\xi, \eta)^{n-1}\right\} C(\xi)$,  $\|\eta\| = 1$.  (20)
This means that with $\dot B(t)\, H_n((x, \eta)/\sqrt{2})$, $\|\eta\| = 1$, we can associate the pair of kernels

$2^{n/2}(\eta^{\otimes n} \otimes \delta_t)^\wedge$,  $2^{n/2}\, n\,\eta(t)\, \eta^{\otimes(n-1)}$,  (21)

where $\otimes$ denotes the tensor product and $(\cdot)^\wedge$ symmetrization; this is a sum of an $H^{-(n+2)/2}(\mathbb{R}^{n+1})$-function and an $(L^2)^\wedge(\mathbb{R}^{n-1})$-function. Namely

$\dot B(t)\, H_n((x, \eta)/\sqrt{2}) \in \mathscr{H}_{n+1}^{(-(n+1))} + \mathscr{H}_{n-1}$.
Such a relation can be extended to the product of $\dot B(t)$ and Fourier–Hermite polynomials. Thus we can immediately prove the following theorem.

Theorem 3. We can multiply functionals in $\mathscr{H}_n^{(n)}$ by $\dot B(t)$, and the product is in the space $\mathscr{H}_{n+1}^{(-(n+1))} + \mathscr{H}_{n-1}$.

Finally, we would like to note that this result, together with the expressions (20) and (21), is helpful in solving a certain class of stochastic differential equations.
References

[1] A.V. Balakrishnan, "Stochastic bilinear partial differential equations", Preprint (1974).
[2] T. Hida, Stationary stochastic processes (Princeton University Press, Princeton, N.J., 1970).
[3] T. Hida, "Quadratic functionals of Brownian motion", Journal of Multivariate Analysis 1 (1971) 58-69.
[4] P. Lévy, Problèmes concrets d'analyse fonctionnelle (Gauthier-Villars, Paris, 1951).
[5] J.L. Lions and E. Magenes, Non-homogeneous boundary value problems and applications, I (Springer, Berlin, 1972).
[6] F. Smithies, Integral equations (Cambridge University Press, London, 1970).
Mathematical Programming Study 5 (1976) 60-66. North-Holland Publishing Company
PROBABILISTIC REPRESENTATIONS OF BOUNDARY LAYER EXPANSIONS

Charles J. HOLLAND

Purdue University, West Lafayette, Ind., U.S.A.
Received 27 May 1975
Revised manuscript received 22 July 1975

Using probabilistic methods we treat the problem of asymptotic expansions for a class of semilinear elliptic equations depending upon a small parameter ε which degenerate to parabolic equations when ε becomes zero. The method depends upon the representation of the solution of the partial differential equation as the expected value of a functional of an Ito stochastic differential equation.

In [3, 4, 5] we have used probabilistic methods to derive asymptotic expansions for a class of singularly perturbed second order semilinear elliptic and parabolic partial differential equations in two independent variables. Singularly perturbed indicates that the highest order derivatives are multiplied by a small positive parameter ε, so that the equation becomes of first order when ε is zero. Our results include the validity of the regular expansion and of the ordinary and parabolic boundary layer expansions in the theory of singular perturbations. Our approach is probabilistic, depending upon the representation of the solution of the partial differential equation as the expected value of a functional of an Ito stochastic differential equation. This approach has been used previously by Fleming [2] to treat the regular expansion. The work discussed above appears to be the first theoretical treatment of both ordinary and parabolic boundary layer expansions for semilinear equations. These results also have the advantage over previously existing work for the linear case in that we only require local estimates and consequently prove local results.

In this paper we illustrate our probabilistic method on a class of semilinear elliptic equations depending upon a small parameter ε which degenerate to parabolic equations when ε becomes zero. Consider for ε > 0 the semilinear elliptic equation
C(x)φ^ε_{x1x1} + εφ^ε_{x2x2} + A(x)φ^ε_{x1} − B(x)φ^ε_{x2} + F(x, φ^ε) = 0    (1)
in the square S = (0, 1) × (−1, 0) with Dirichlet boundary data φ^ε = Λ on ∂S and C > 0, B > 0 on S ∪ ∂S. Under certain assumptions the regular and ordinary boundary layer expansions are established. In the regular expansion the solution φ^ε of (1) is approximated for small ε by the solution φ⁰ of the parabolic equation (1) with ε = 0 taking the boundary data φ⁰ = Λ on the lower and lateral sides of the square. Near the top of the square φ⁰ is not generally a good approximation to φ^ε, and in that region one determines a boundary layer correction term. We consider the case where φ⁰_{x2x2} is bounded in S. This depends partly upon the compatibility condition that the boundary data satisfy the parabolic equation (1) with ε = 0 at the corner points (0, −1) and (1, −1). For the linear case with the additional restriction that B is only a function of x2, Zlamal [6] has considered the case where φ⁰_{x2x2} is not assumed bounded. We prove the following theorem.

Theorem 1. Let 0 < m < ½ and let there exist positive constants ε₀, K* such that the following hold:
(B1) S = (0, 1) × (−1, 0).
(B2) A, B, C are C² functions on S̄ and F is a C² function on S̄ × (−∞, ∞).
(B3) C > 0 and B > 0 on S̄.
(B4) There exists a solution φ⁰ to (1) with ε = 0 on S taking boundary data φ⁰ = Λ on the bottom and lateral sides of S such that φ⁰_{x2x2} is uniformly bounded on S.
(B5) For 0 < ε < ε₀ there exists a solution of class C² to (1) on S, continuous on S ∪ ∂S, satisfying |φ^ε| ≤ K*.

Then θ^ε satisfies the linear elliptic equation
θ^ε_{x1x1} + εC^{-1}(x)θ^ε_{x2x2} + (−B(x)C^{-1}(x) + εD(x))θ^ε_{x2}
    + ε^{1−m}[C^{-1}(x)V⁰_{x1x1} + D(x)V⁰_{x2} + H(x)V⁰] = 0
with the boundary data θ^ε = 0 on the bottom and lateral sides of S and θ^ε = ε^{m−1}[φ^ε − Λ − V⁰] along the top of S. We now make a probabilistic representation of θ^ε. Consider for x ∈ S and ε ≥ 0 the Ito stochastic differential equation
dξ^ε = [−B(ξ^ε)C(ξ^ε)^{-1} + εD(ξ^ε)]dt + ε^{1/2}(C(ξ^ε))^{-1/2} dw₂    (8)
with initial condition ξ^ε(0) = x. For x ∈ (0, 1) × (−1, −δ(ε)), let τ^ε_x be the first time t ≥ 0 that ξ^ε(t) ∈ ∂S and let ρ be a constant such that ρ > 2τ⁰_x for all x ∈ S̄. The existence of a constant ρ is guaranteed by the assumption B/C > 0 on S̄. Let γ^ε_x = min(τ^ε_x, ρ); then using the Ito stochastic differential rule we have for x ∈ (0, 1) × (−1, −δ(ε)) that
θ^ε(x) = E_x{∫₀^{γ^ε_x} Q^ε_x(t) ε^{1−m}[C^{-1}V⁰_{x1x1} + DV⁰_{x2} + HV⁰](ξ^ε_x(t)) dt + Q^ε_x(γ^ε_x) θ^ε(ξ^ε_x(γ^ε_x))}    (9)
where

Q^ε_x(t) = exp ∫₀^t [εH(ξ^ε_x(s)) + G_V(ξ^ε_x(s), V⁰(ξ^ε_x(s)))] ds.
We show that θ^ε → 0 using the representation (9). The proof depends upon the estimates (10), (11), (12) relating trajectories of the ε-system (8) to the system with ε = 0. Redefine the functions A, B, C, φ outside S so that there exists a constant M which is both a bound for |D| and a Lipschitz constant for |BC^{-1}| on R². For functions h defined on [0, t], let ‖h‖_t = sup_{0≤t'≤t} |h(t')| and let ζ^ε(t) = ε^{1/2}∫₀^t C(ξ^ε(s))^{-1/2} dw₂(s). Since the first component of the vector ξ^ε(t) is independent of ε ≥ 0, an application of Gronwall's inequality yields
‖ξ^ε_x − ξ⁰_x‖_t ≤ e^{Mρ}[εMρ + ε^{1/2}‖ζ^ε‖_ρ].

Then from equation (14) in [3] we have that P{‖ξ^ε_x − ξ⁰_x‖_t > ½δ(ε)}    (13)
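The trajectory comparison behind these estimates can be illustrated numerically. The coefficients B and C below are assumed purely for the sketch (they are not taken from the text); both the ε-system and the ε = 0 flow are driven by the same Brownian increments, and the sup-norm deviation shrinks as ε decreases:

```python
import numpy as np

def B(x):
    # assumed smooth coefficient with B > 0 (illustrative only)
    return 1.0 + 0.2 * np.sin(x)

def C(x):
    # assumed smooth coefficient with C > 0 (illustrative only)
    return 1.0 + 0.1 * np.cos(x)

def sup_deviation(eps, n=20000, T=1.0, x0=-0.5, seed=0):
    """Euler-Maruyama for d(xi) = [-B/C](xi) dt + sqrt(eps) C(xi)^(-1/2) dw,
    compared pathwise against the eps = 0 flow with the same increments."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dw = rng.normal(0.0, np.sqrt(dt), n)
    xi_eps, xi_0, dev = x0, x0, 0.0
    for k in range(n):
        xi_eps += -B(xi_eps) / C(xi_eps) * dt \
                  + np.sqrt(eps) * C(xi_eps) ** (-0.5) * dw[k]
        xi_0 += -B(xi_0) / C(xi_0) * dt
        dev = max(dev, abs(xi_eps - xi_0))
    return dev

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, sup_deviation(eps))
```

The deviation scales roughly like √ε, in line with the Gronwall-type bound above.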
where Z(s), 0 ≤ s ≤ t, and Z(0) ∈ K implies Z(t) ∈ K for all t > 0. In addition to the assumptions of Theorem (2.3) suppose there is a point x₀ such that F(x₀) = 0, lim_{t→∞} |Z(t) − x₀| = 0 for every solution of Ż = F(Z) with Z(0) ∈ K, and that the eigenvalues of the matrix H = ((∂_iF_j(x₀))) have negative real parts. Suppose
sup_{x∈K} |∂_iF_j(x)| < ∞,    sup_{x∈K} |g_{ij}(x)| < ∞,

lim_{d→∞} sup_{x∈K} Σ_{|l|>d} |l|² f(x, l) = 0.
Let ψ_A(s, θ) be the characteristic function of √A(Z_A(s) − Z(s)). Then lim √A(Z_A(0) − Z(0)) = 0 implies

lim_{A→∞} sup_s |ψ_A(s, θ) − ψ(s, θ)| = 0,    (2.8)
where ψ(s, θ) satisfies (2.4). In particular, suppose Z_A(s) converges in distribution to Z_A(∞) as s → ∞. Then √A(Z_A(∞) − x₀) converges in distribution to a normal random vector with mean zero and covariance matrix GH^{-1}, where G_{ij} = g_{ij}(x₀) and H_{ij} = ∂_iF_j(x₀).
The primary significance of Theorem (2.7) is that it gives a central limit theorem for limiting distributions. It is applicable to many chemical reaction models and some genetics and population models. Its chief restriction is that it essentially allows only one fixed point and usually would only be applicable to population models with immigration and genetics models with mutation, and of course is not applicable to the epidemic model. Nagaev and Startsev [19] have, however, given limit theorems for the limiting distribution of the epidemic model under various assumptions about the relationships among the parameters. We restate one in our language. Theorem (2.9) (Nagaev and Startsev).
Let X_A(t) = (S_A(t), I_A(t)) be the epidemic process discussed in Section 1, with ρ = μ/A < 1 and let Z(t) =
Th.G. Kurtz / Density dependent Markov chains
(S(t), I(t)) be a solution of Ż = F(Z) with F given by (1.5). Suppose lim √A(Z_A(0) − Z(0)) = 0 and let W_A = lim_{t→∞} S_A(t)/A and W = lim_{t→∞} S(t). Then

lim P{√A(W_A − W) < x} = P{V < x}    (2.10)

where V is normally distributed with mean zero and variance

W(S(0) − W)(S(0)W + ρ²) / [S(0)(ρ − W)²].

3. Diffusion approximations

Formally the infinitesimal generator for Z_A(t) is given by
B_A h(x) = Σ_l A[h(x + l/A) − h(x)] f(x, l).    (3.1)

If we assume h is smooth and expand in a Taylor series we see

B_A h(x) ≈ Σ_i F_i(x)∂_i h(x) + (1/2A) Σ_{i,j} g_{ij}(x)∂_i∂_j h(x) ≡ D_A h(x).    (3.2)
This suggests that in some sense Z_A(t) can be approximated by a diffusion process Y_A(t) with generator D_A. We already know that Z_A(t) = Z(t) + O(1/√A). Consequently if the diffusion approximation is to be worthwhile it must do better than Z(t). Our goal in this section is to give conditions under which
Z_A(t) = Y_A(t) + O(log A / A)    (3.3)

in a precise sense.

Lemma (3.5). There is a version of Z_A such that

Z_A(t) = Z_A(0) + Σ_l (l/A) Y_l(A ∫₀^t f(Z_A(s), l) ds)    (3.6)

for t < τ_∞ = sup{s : sup_{u≤s} |Z_A(u)| < ∞}, where the Y_l are independent Poisson processes with parameter 1.
Proof. Consider (3.6) as an equation for Z_A. Observe that the probability of no jump in [0, t] is

exp{−t Σ_l Af(Z_A(0), l)}.

The finiteness of the sum in the exponent insures that there is a first jump, and similarly a second jump, etc. Since sup_{x∈K} Σ_l f(x, l) < ∞ for all bounded sets K, (3.6) determines Z_A(t) uniquely up to τ_∞.

Lemma (3.7). Let E be an open set such that
sup_{x∈E} Σ_l |l|² f(x, l) < ∞,    Σ_l |l|² |√f(x, l) − √f(y, l)|²
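The representation (3.6) and the law of large numbers Z_A ≈ Z can be illustrated by direct simulation of a density dependent chain. The rates below are an assumed SIR-type epidemic example (transitions l = (−1, 1) and (0, −1)), simulated by the equivalent jump-by-jump method and compared with the limiting ODE Ż = F(Z):

```python
import numpy as np

def simulate(A, T=5.0, beta=2.0, mu=1.0, seed=0):
    """Density dependent chain Z_A = X_A / A: infection at rate A*beta*s*i
    (jump l = (-1, 1)/A) and removal at rate A*mu*i (jump l = (0, -1)/A)."""
    rng = np.random.default_rng(seed)
    s, i, t = 0.9, 0.1, 0.0            # assumed initial densities Z_A(0)
    while t < T and i > 0:
        r1 = A * beta * s * i
        r2 = A * mu * i
        t += rng.exponential(1.0 / (r1 + r2))
        if t >= T:
            break
        if rng.random() < r1 / (r1 + r2):
            s -= 1.0 / A; i += 1.0 / A
        else:
            i -= 1.0 / A
    return s, i

def ode(T=5.0, beta=2.0, mu=1.0, n=100000):
    """Euler integration of s' = -beta*s*i, i' = beta*s*i - mu*i."""
    s, i = 0.9, 0.1
    dt = T / n
    for _ in range(n):
        s, i = s - beta * s * i * dt, i + (beta * s * i - mu * i) * dt
    return s, i

s_ode, i_ode = ode()
for A in (100, 10000):
    s, i = simulate(A)
    print(A, abs(s - s_ode), abs(i - i_ode))
```

The deviation from the deterministic solution shrinks roughly like 1/√A, consistent with the fluctuation results of Section 2.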
(6.2)
in all other cases.
With this evaluation (5.9) becomes
y^i(t_{j+1}) = y^i(t_j) + g^i_ρ(y(t_j))Δ_j z^ρ − [g^i_{1,1}(y(t_j)) + ⋯ + g^i_{r,r}(y(t_j))]Δ_j t/2 + ½ g^i_{ρ,σ}(y(t_j))Δ_j z^ρ Δ_j z^σ.    (6.3)

If the system satisfies a set of equations E*, which are equations (2.2) with asterisks on the x^i, z^ρ and g^i_ρ, and the noise process z*^ρ has sample functions that are uniformly Lipschitzian, and we wish to idealize the system by replacing the z*^ρ by independent Wiener processes, we must choose equations E for the idealization such that the right members of (6.1), with the g^i_ρ replaced by the g*^i_ρ, are the same functions as the right members of (6.3), with the g^i_ρ of equations E. We cannot attain this by simply choosing the g*^i_ρ for the g^i_ρ, because of the term with factor Δ_j t in (6.3). But we recall that z⁰ = t, so if we define

g^i_0(t, x) = g*^i_0(t, x) + [g*^i_{1,1}(t, x) + ⋯ + g*^i_{r,r}(t, x)]/2    (i = 1, …, n),
g^i_ρ(t, x) = g*^i_ρ(t, x)    (all other ρ and i);    (6.4)
we find that the right member of (6.1), with the g*^i_ρ in place of the g^i_ρ, and the right member of (6.3), with the g^i_ρ defined by (6.4), are the same function. Therefore the equations E for the idealization are those in which the coefficients are deduced from the g*^i_ρ by equations (6.4).

The procedure espoused in [2] is as follows. Given the system E* for uniformly Lipschitzian noises, replace it by its "canonical extension"

x*^i(t) = x*^i(a) + ∫_a^t g*^i_ρ(x*(s))dz*^ρ(s) + ½ ∫_a^t g*^i_{ρ,σ}(x*(s))dz*^ρ(s)dz*^σ(s).    (6.5)
For noise-processes z* with uniformly Lipschitzian sample functions these
E.J. McShane / The choice of a stochastic model
equations have the same solutions as E * . If we idealize to any other noise-process z that satisfies the standing hypotheses, we keep the same equation, the g~ being the same as the g*~. When the idealizations z p are independent standard Wiener processes, from the known facts that
∫_a^t f(s)dz^ρ(s)dz^σ(s) = ∫_a^t f(s)dt    (ρ = σ ≥ 1),
                          = 0             (all other ρ and σ),
we see at once that these canonical equations are the same as the ones we have obtained by (6.3). There is, however, a small difference between the approximation procedures in the preceding pages and those in [2]. In that book, the equations were written in form (2.1) rather than in the condensed form (2.2). Accordingly, the Cauchy-Maruyama approximation procedure applied to the canonical extension in [2], which omits all second-order integrals with ρ or σ equal to 0, will lack some small terms that are present when the Cauchy-Maruyama procedure is applied to the condensed form (2.2). The foregoing discussion leads to the suspicion that these terms, though small, might yield some improvement in the approximation. This might be worth some investigation.

Consider next a point-process z^ρ. This is constant between jumps, which occur at the discontinuities of a right-continuous random function N whose sample-functions N(·, ω) are constant between finitely many points, at each of which it increases by 1. Let u be a process (t, ω) ↦ u(t, ω) such that all the random variables u(t, ·) have the same distribution and form an independent set. For each ω in Ω, we define z(·, ω) to be the function such that z(a, ω) = 0 and z(t, ω) − z(s, ω) = sum of u(τ)[N(τ, ω) − N(τ−, ω)] over all τ in [s, t) at which N(·, ω) is discontinuous. We assume that increments of N over disjoint intervals are independent and homogeneous in time; the expectation and the variance of ΔN are proportional to Δt, as on p. 87 of [2]. We also assume that E(u(a)) = 0; for simplicity we shall write u for u(a). The conditional distribution of Δz under the condition ΔN = n is the distribution of the sum of n independent random variables u₁, …, u_n, each with the same distribution as u. By the computation on p. 87 of [2], E(Δz) = 0, and E([Δz]²) is a constant multiple of Δt.
The conditional distribution of [Δz]⁴, given ΔN = n, is that of the fourth power of the sum of n independent random variables, each with the same distribution as u has. Let these be u₁, …, u_n. The fourth power of the sum contains terms in which some u_j has exponent 1. All these have expectation 0. So
E([Δz]⁴ | ΔN = n) = E(Σ_j u_j⁴) + E(Σ_{j≠k} u_j²u_k²)
    = nE(u⁴) + n(n − 1)(E(u²))²
    = n var u² + n²(E(u²))².

We multiply by P(ΔN = n) and sum over all non-negative integers n, obtaining

E([Δz]⁴) = (var u²)E(ΔN) + (E(u²))²E([ΔN]²).

We now define Δ_j = (Δ_j z)² − E(ΔN). E(ΔN) is proportional to Δt, and for notational simplicity we shall consider that it is equal to Δt. If 𝒫 is a belated partition of [a, b] with evaluation-points τ₁, …, τ_n and division-points t₁ = a, …, t_{n+1} = b, we can repeat the steps of the computation at the top of page 84 of [2] and find that the Riemann sums S(𝒫; f; z, z) and S(𝒫; f; t) have a difference whose L²-norm does not exceed a multiple of [var u² + (E(u²))²]^{1/2}. Since the distribution of jumps, which is that of u, is fixed, we do not have the privilege of letting this tend to 0. Nevertheless, if the distribution of z is to be close enough to that of a Wiener process to make it at all reasonable to try to use a Wiener-process idealization, in time-intervals of the size appropriate to the problem there must be many jumps, each of small size in comparison to the total amount during such a time-interval. In this case, [var u² + (E(u²))²]^{1/2} must be small in comparison with other numbers that occur in the approximation procedure (5.9). We shall therefore discard this term, and thus obtain an approximation to the second-order integral which is the same as the first of equations (6.2). However, with Wiener processes equations (6.2) gave evaluations of the integrals. With our point-processes, the first of these equations merely furnishes an estimate, and we should be a bit more suspicious of an idealization based on (6.2). There is no difficulty in showing that if z^ρ and z^σ are two independent point-processes of the type just described, the second of equations (6.2) is exact. Since we are accepting equations (6.2) for our point-processes, and the derivation of (6.3) was based on equations (6.2), we obtain the same procedure (6.3) for the point-process that we obtained for the Wiener processes.
Consequently, if we wish to idealize a system in which the noises are point-processes, replacing them by Wiener processes, the best procedure is to retain the same g^i_ρ in the idealized system as were in equations E* of the actual system. This is in accord with the conclusions of Barrett and Wright [1].
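The drift correction in (6.4) is the familiar shift between smooth-noise limits and the Ito idealization. A toy sketch for the assumed scalar system x′ = xz′ (not an example from the text): with z a piecewise-linear, uniformly Lipschitzian interpolation of a Brownian path, each linear piece integrates exactly to exp(Δz), so the smooth-noise solution is exp(w(T)); the Ito idealization reproduces this only after adding the corrected drift g₀ = ½gg′ = x/2:

```python
import numpy as np

rng = np.random.default_rng(42)
T, n = 1.0, 2**14
dt = T / n
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# System E*: x' = x z', z a polygonal (Lipschitzian) version of the path.
x = 1.0
coarse = w[::64]                     # coarse piecewise-linear noise
for k in range(len(coarse) - 1):
    x *= np.exp(coarse[k + 1] - coarse[k])   # exact on each linear piece
smooth_limit = x                     # equals exp(w(T)) for this system

# Idealization E with corrected drift g0 = x/2 (cf. (6.4)):
# Euler-Maruyama for the Ito equation dx = (x/2) dt + x dw.
y = 1.0
for k in range(n):
    y += 0.5 * y * dt + y * (w[k + 1] - w[k])

print(smooth_limit, y)   # both approximate exp(w(T))
```

Without the x/2 correction the Ito scheme would instead approximate exp(w(T) − T/2), which is the discrepancy equations (6.4) are designed to remove.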
Obviously we could consider other examples. For instance, let the system be affected by noises indirectly, through a smoothing process. That is, when the noise-processes are z¹, …, z^r, let the equations of the system be

x^i(t) = x^i(a) + ∫_a^t g^i_ρ(s, x(s))dv^ρ(s),    (6.6)

v^ρ(t) = ∫_{−∞}^t φ(t − s)dz^ρ(s),

where φ is a continuously differentiable function on (−∞, ∞) that vanishes outside some interval [0, T]. It is an obvious conjecture, and can be proved correct, that in order to realize this by any other process satisfying the standing hypotheses the best choice of equations E for the idealized system is the original set of equations, unchanged. However, for general functional equations I have not been able to obtain any results of interest.
References [1] J.F. Barrett and D.J. Wright, "The random nature of stockmarket prices", Operations Research 22 (1974) 175-177. [2] E.J. McShane, Stochastic calculus and stochastic models (Academic Press, New York, 1974). [3] E.J. McShane, "Stochastic differential equations", Journal of Multivariate Analysis 5 (1975) 121-177.
Mathematical Programming Study 5 (1976) 93-102. North-Holland Publishing Company
ASYMPTOTIC STABILITY AND ANGULAR CONVERGENCE OF STOCHASTIC SYSTEMS

Mark A. PINSKY*, Northwestern University, Evanston, Ill., U.S.A.

Received 9 May 1975
Revised manuscript received 23 July 1975

A system of Itô equations is considered in the unit disc. The circumference is assumed to be a stable invariant manifold with a finite number of stable equilibrium points. Under supplementary hypotheses it is proved that the solution converges to a limit and that the angle converges to a limit. This leads to a well-posed Dirichlet problem and to the determination of all L-harmonic functions for the infinitesimal operator of the process.
1. Introduction
In this paper we study the asymptotic behavior when t → ∞ of a Markov process defined by a system of Itô stochastic equations in the (x₁, x₂) plane. This work is a continuation of previous works [4, 5, 8] with an attempt to formulate the results under minimal technical hypotheses.

The most basic case of stochastic stability concerns the asymptotic stability of the zero solution of a system of Itô equations. In order to prove asymptotic stability in concrete cases, we have introduced the notion of S-function. This corresponds to the logarithm of the stochastic Lyapunov function [6]. Whenever an S-function exists, we have "asymptotic stability in probability", i.e.,

lim_{x→0} P{lim_{t→∞} X^x_t = 0} = 1.

This is only a local concept, as is the case for deterministic systems [7]. Dual to the notion of asymptotic stability is the question of angular behavior, in the case of stochastic systems in two variables. For oscillatory systems [6], it is natural to prove "spiraling behavior", i.e., that θ(t) → ∞

* Research supported by National Science Foundation MPS71-02838-A04.
when t → ∞. These questions have been treated in detail in earlier work [3, 4, 8] and therefore are omitted from the subsequent discussion here.

We shall be more interested in the question of angular convergence. This corresponds to the case of an improper node in the classification of ordinary differential equations [1, ch. 15, Sec. 1]: every orbit has a limiting direction at (0, 0), but not all limiting directions are possible. At the other extreme is so-called ergodic behavior, where a continuum of directions is attained infinitely often as the orbit converges to (0, 0). We shall give concrete instances of both cases.

Once we have determined the possible angular convergence, it is natural to ask whether there can be discerned any other asymptotic characteristics of the stochastic system. Mathematically, this corresponds to the classification of all harmonic functions for the given Itô process in the neighborhood of an equilibrium point. In a companion paper [9] we study this question by examining the limiting behavior of f(x₁, x₂) when (x₁, x₂) → (0, 0), where f is an L-harmonic function for the Itô process with an equilibrium point at (0, 0). These results appear in Section 4 of the present work.

Sections 2 and 3 of this work contain the statements and proofs of results on a system of Itô equations in the unit disc x₁² + x₂² < 1, where the unit circle is assumed to be an attractive invariant set which contains a finite number of stable equilibrium points. This set-up was already studied in [5, 8] in connection with the Dirichlet problem.

In the following sections θ will denote an element of the interval [0, 2π] or the point (cos θ, sin θ) on the perimeter of the unit circle. Thus the notation x → θ is a shorthand for the statement (x₁ → cos θ, x₂ → sin θ).
2. Statement of results
We assume given a system of stochastic equations

dx = σ(x)dw + b(x)dt,    (2.1)

where w(t) = (w¹(t), …, wⁿ(t)) is an n-dimensional Wiener process, x ↦ σ(x) is a C^∞ 2 × n matrix, and x ↦ b(x) is a C^∞ 2-vector, which satisfy the following conditions:

(A) Interior conditions: inf
b,l-O, l < - i < - n .
(2.7)
If σ̃(θ_i) = 0 and b̃(θ_i) = 0, then

Q_i ≡ lim_{θ→θ_i} [b̃(θ)/(θ − θ_i) − ½σ̃²(θ)/(θ − θ_i)²] < 0.    (2.8)
These points will be labeled θ₁, …, θ_m, m < n. In order to state the final set of conditions, we introduce a special coordinate system in the neighborhood of (1, θ_i), where 1 ≤ i ≤ m. Let

ρ_i = [(1 − r)² + (θ − θ_i)²]^{1/2},    φ_i = tan⁻¹[(θ − θ_i)/(1 − r)],

where 0 < ρ < ρ₀ and 0 < φ < π. The stochastic equations (2.6) can be written in the form

dρ = σ̃_s(ρ, φ)dw^s + b̃(ρ, φ)dt,    (2.9)
dφ = σ̂_s(ρ, φ)dw^s + b̂(ρ, φ)dt,

where the coefficients σ̃, b̃, σ̂, b̂ have limits when ρ → 0, which we denote by σ̃_s(φ), b̃(φ), respectively, and σ̃(φ)² ≡ Σ_{s=1}^n σ̃_s(φ)².

(D) Angular boundary conditions
We assume that at each θ_i (1 ≤ i ≤ m) we have

σ̂(φ)² > 0    (0 < φ < π).

As φ → 0 or φ → π,

lim ([b̂′(φ) − ½[σ̂′(φ)]²]) = Q_i − Q₁₁.    (3.18)
By hypothesis (2.11), this quantity is negative. Therefore by an explicit solution of the equation for f we see that lim f(φ) = −∞ whenever φ → 0 or φ → π. Now we apply Itô's formula:

f(φ_t) = f(φ₀) + ∫₀^t (σ̂f′)dw_s + ∫₀^t (Lf)ds.    (3.19)
On the set {X_t → θ_i}, we have

lim_{t→∞} f(φ_t)/t ≤ −1.

Therefore φ_t(X_t) → 0 or π when t → ∞, which is assertion (2.16). To prove (2.17), let g = exp(λf), λ > 0. Then for λ sufficiently small, we have Lg ≤ 0 in the open half-neighborhood (0 < ρ_i < ρ₀, 0 < φ_i < π). By following the argument of [8, Lemma 2.3], we see that for given ε, η > 0, by choosing ρ₀ sufficiently small, we have P{φ_t < η for all t > 0} ≥ 1 − ε. Applying (3.19) on this set proves that P{φ_t(X_t) → 0} ≥ 1 − ε, which was to be proved. The same argument applies to φ = π, which completes the proof.

Proof of Theorem 2.3. In this case the equations (3.17)-(3.18) are still valid, but now the limit in (3.18) is positive. We now take f(φ) to be the solution of the equation ½σ̂²f″ + b̂f′ = 0, f(π/2) = 0, f′(π/2) = 1. Since the limit (3.18) is positive, we must have lim_{φ→π} f(φ) = +∞, lim_{φ→0} f(φ) = −∞; further f′ > 0.
Appealing to (3.19), we see that the final term is bounded, since (Lf)(r_t, θ_t) = O(1 − r_t) = O(e^{−kt}), t → ∞. The stochastic integral is a martingale and is equivalent to W_{A_t}, where W is a Wiener process and A_t = ∫₀^t |σ̂f′|²ds. But |σ̂f′| ≥ β > 0 on (0, π), and therefore A_t → ∞. Thus (3.19) becomes

f(φ_t) = W_{A_t} + O(1)    (t → ∞).

But lim sup_{t→∞} W_{A_t} = +∞ and lim inf_{t→∞} W_{A_t} = −∞. Therefore the same is true of f(φ_t) and thus (2.18) follows. The proof is complete.
4. Dirichlet Problem; Fatou Theorems

The preceding results allow us to formulate a well-posed boundary value problem for the operator

Lu = ½(σσ*)_{ij} ∂²u/∂x_i∂x_j + b_i ∂u/∂x_i    (4.1)

and to find all bounded solutions of Lu = 0 in the interior of the unit disc.
Theorem 4.1. Assume (A), (B), (C) and (D) (2.10). Assume that for each 1 ≤ i ≤ m₁ we have (2.11) and for m₁ < i ≤ m we have (2.12). Then the problem

Lu = 0,    |x| < 1,    (4.2)

lim_{x→θ_i} u(x) = f_i    (1 ≤ i ≤ m₁),    (4.3)
lim
u(x)=f?
(mlTt} 9
(4.6)
i =;,nl+l
Theorem 4.2. A s s u m e (A), (B), (C), and (D) (2.10). A s s u m e that for 1 1 s) and Q(t, t) are abs. cont. as functions of t. (2)
There exists a function A : [0, ∞) ∋ t ↦ A(t) ∈ (n × n)-matrices, locally belonging to L¹, such that almost everywhere

dr(t)/dt = A(t)r(t),    (2′)

∂Q(t, s)/∂t = A(t)Q(t, s)    (t > s).    (2″)

(II) In such case the matrix

W(t) ≡ (d/dt)Q(t, t) − A(t)Q(t, t) − Q(t, t)A*(t)

is positive semidefinite (a.e.). Let B(t) be any L²-solution of the equation B(t)B*(t) = W(t). Then the simple diffusion process (A, B, r(0), Q(0, 0) − r(0)r*(0)) is wide sense equivalent to x.

We shall now discuss to what extent the conditions in the proposition determine A(t). According to the proof of the lemma, a minimal condition that A(t) shall fulfil almost everywhere is

A(t)Q(t, t) = lim_{h↘0} (1/h){Q(t + h, t) − Q(t, t)}.    (8′)

So if Q(t, t) is invertible, A(t) is unique at t.
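A minimal numerical sketch of (8′): for a scalar stationary Ornstein-Uhlenbeck-type covariance Q(t, s) = q·e^{−a(t−s)} for t ≥ s (an assumed example, not taken from the text), the difference quotient in (8′), divided by the invertible Q(t, t) = q, recovers A(t) = −a:

```python
import numpy as np

a, q = 0.7, 2.0   # assumed parameters: Q(t, s) = q * exp(-a * (t - s)), t >= s

def Q(t, s):
    return q * np.exp(-a * abs(t - s))

def recover_A(t, h=1e-6):
    # finite-difference version of (8'): A(t) Q(t,t) = lim (Q(t+h,t) - Q(t,t))/h
    dQ = (Q(t + h, t) - Q(t, t)) / h
    return dQ / Q(t, t)   # Q(t,t) = q is invertible (scalar, nonzero)

print(recover_A(1.0))   # close to -a = -0.7
```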
L.E. Zachrisson / Equivalents of continuous Gauss-Markov processes
If Q(t, t) does not have full rank, it is nevertheless true that the ambiguity set of A is just the ambiguity set of (8′) for different t's (apart from measurability restrictions). In other words, if A₁(t) is a solution of (2′), (2″) and hence of (8′), then any L-measurable solution A₂(t) of (8′) is also a solution of (2′) and (2″) and so is just as good as A₁(t) for the proposition. This fact is made plausible from the fact that (8′) is an equation which gives constraints on the behavior of A(t) only on Im Q(t, t), which is the smallest linear set in Rⁿ containing all the probability mass of the stochastic vector x(t). Things outside of that set should not influence the development of the process.

To prove our ambiguity statement we first observe that the definition of W(t) amounts to the same for A₁ and A₂, as

A₁(t)Q(t, t) = A₂(t)Q(t, t)

according to assumption; put A₁ − A₂ = Δ. To prove that also (2′) and (2″) with A₂ instead of A₁ are true, we have to show that Δ(t)Q(t, t) = 0 implies Δ(t)r(t) = 0 and Δ(t)Q(t, s) = 0. The easy proof by Schwarz' inequality is omitted.

Finally, it is good to recognize the following facts. Under the proposition Q(t, s) has the same rank as Q(s, s) if t ≥ s (trivial according to (2″)). The rank d(t) of Q(t, t) is an integer valued, monotonically increasing, left continuous function of t. So a set of constancy for d(t) is a left open, right closed interval. The proof is best carried out by considering
P(t, t) ≡ Φ⁻¹(t)Q(t, t)(Φ⁻¹)*(t),

which has the same rank properties as Q. The differential equation for P is

dP(t, t)/dt = Φ⁻¹(t)W(t)(Φ⁻¹)*(t),

so P(t, t) − P(s, s) is positive semidef. if t ≥ s. This shows that (N means "null-space of")

NP(t, t) ⊂ NP(s, s)    and    Im P(t, t) ⊃ Im P(s, s),

especially d(t) ≥ d(s). So NP(t, t) and Im P(t, t) do not depend on t within a set where d(t) = constant. Furthermore, the graph of NP(t, t) is closed, due to continuity. From these facts it is easy to see that the function d(·) is left continuous.
3. Further comments

It can be shown that (a version of) the process x in the sufficiency part of the proposition can be considered as generated by some process w(t) with centered orthogonal increments via eq. (4′):

dx(t) = A(t)x(t)dt + dw(t).    (4′)

We define w(t) as

w(t) = − x(0) + x(t) − ∫₀^t A(τ)x(τ)dτ    (9)
and have to show that w(t) is a process with centered, orthogonal increments, also orthogonal to x(0). To carry out the proof we have to assume that x is measurable T × Ω. That w(t) is orthogonal to x(0), for instance, is equivalent to proving E(w(t)x*(0)) = 0. But

E(w(t)x*(0)) = − Q(0, 0) + Q(t, 0) − ∫₀^t A(τ)Q(τ, 0)dτ = 0

according to eq. (7b). We do not try to carry through the measure-theoretic details but have all the same obtained a strong indication that only processes generated via stochastic equations of type (4′) with a process w with centered, orthogonal increments can be candidates for having a wide sense equivalent of simple diffusion type.

If w in eq. (4′) happens to be a martingale, then the decomposition of x(t) into the martingale x(0) + w(t) and the "predictable" process ∫₀^t A(s)x(s)ds (Meyer) is a simple instance of the Doob decomposition. See
[5].

The martingale case is connected with the following observation (see [6]). Both equations (2′) and (2″) are consequences of the stronger condition that there exists a version of each conditional expectation E{x(t) | x(s)} = r_s(t) (each t ≥ s; s fixed but arbitrary; versions are with respect to x(s)-space) such that

dr_s(t)/dt = A(t)r_s(t);    lim_{t↘s} r_s(t) = x(s)    (a.s.)

The proof is simple:

r_s(t) = Φ(t)Φ⁻¹(s)x(s)    (a.s.),
r(t) = E r_s(t) = Φ(t)Φ⁻¹(s)r(s),    hence (2′),
Q(t, s) = E{r_s(t)x*(s)} = Φ(t)Φ⁻¹(s)Q(s, s),    hence (2″).
4. How special is a simple diffusion process?

This is a natural question. A fairly complete answer for the centered case (r(0) = 0) is delivered by
Proposition 2. A centered Gaussian Markov process

x : [0, ∞) × Ω ∋ (t, ω) ↦ x(t, ω) ∈ Rⁿ

has a version which is a simple diffusion process if
(1) Q(t, s) (for t ≥ s) and Q(t, t) are abs. cont. as functions of t,
(2) there exists a deterministic A(·) ∈ L¹ (locally) such that for almost every t ≥ 0

lim_{h↘0} [Q(t + h, t) − Q(t, t)]/h = A(t)Q(t, t).

Under these conditions the process is generated by

dx(t, ω) = A(t)x(t, ω)dt + dw(t, ω),

where w(t) is a Gaussian process with independent centered increments and

E{w(t)w*(t)} = Q(t, t) − Q(0, 0) − ∫₀^t [A(τ)Q(τ, τ) + Q(τ, τ)A*(τ)]dτ.
h %0
for all s ~< t (not only for s = t). This is where the Markov property comes in. "Centered Gaussian" implies that there is a n Ah(t) such that E{x(t + h)l x(t)} = x(t) + hAh (t)x(t)
(a.s.)
and " M a r k o v " shows that this Ah also satisfies E{x(t + h ) l x ( t ) , x(s)} = x ( t ) + h A h ( t ) x ( t ) (a.s.) (The "a.s." says that Ah (t) as an operator in R" is unique with resp. to its action on I m Q ( t , t ) but can be arbitrary outside that set.) Now E(x(t + h ) l x ( t ) , x ( s ) ) - x ( t + h ) i s orthogonal both to x ( t ) and x(s); So postmultiplication by x(s) and taking expectations gives
[Q(t + h, s) − Q(t, s)]/h = A_h(t)Q(t, s)    for any s ≤ t
with covariance r(s,t)=e
"~
e "~ e "~ )
e '~
for
O 0 ,
and for i = 0, 1, 2, there exists a
A. Le Breton / Continuous and discrete sampling
sequence {ε^i_N(K)}_{N≥1} of positive random variables such that, for all θ* ∈ ℒ(Rⁿ),
(j) {ε^i_N(K); N ≥ 1} is bounded in μ^{θ*} probability, i.e., lim_{A→+∞} sup_{N≥1} P[|ε^i_N(K)| > A] = 0,
(jj) sup_{‖θ‖≤K} |L^{(i)}_N(θ) − L^{(i)}(θ)| ≤ ε^i_N(K) μ^{θ*} almost surely.

… for all K > 0 and for i = 0, 1, 2, there exists a sequence {Δ^i_{N,T}(K); N ≥ 1} of positive random variables such that, for all θ* ∈ ℒ(Rⁿ),
(j) {Δ^i_{N,T}(K); N ≥ 1} is bounded in μ^{θ*}_T probability,
(jj) sup_{‖θ‖≤K} |L^{(i)}_{N,T}(θ) − L^{(i)}_T(θ)| ≤ δ_N^{1/2} Δ^i_{N,T}(K) μ^{θ*}_T almost surely.

Let {θ̃_N; N ≥ 1} be another sequence of random variables satisfying the conditions. One has, for all K ≥ 1,
{ω ∈ Ω : ω ∈ Ω_N(K), ‖θ̃_N(ω) − θ̂(ω)‖ ≤ 1/K, L^{(1)}_N(θ̃_N(ω), ω) = 0} ⊂ [θ̃_N = θ̂_N],

so,

P_{θ*}[θ̃_N ≠ θ̂_N] ≤ (1 − P_{θ*}(Ω_N(K))) + P_{θ*}[‖θ̃_N − θ̂‖ > 1/K] + P_{θ*}[L^{(1)}_N(θ̃_N) ≠ 0];
then

lim sup_{N→+∞} P_{θ*}[θ̃_N ≠ θ̂_N] ≤ 1 − lim inf_{N→+∞} P_{θ*}(Ω_N(K)).

Using lim_{K→+∞} lim inf_{N→+∞} P_{θ*}(Ω_N(K)) = 1, it yields

lim_{N→+∞} P_{θ*}[θ̃_N = θ̂_N] = 1.
Let {θ̂_N; N ≥ 1} be a sequence of random variables satisfying the conditions; we can write, for ω ∈ [L^{(1)}_N(θ̂_N) = 0],

L^{(1)}(θ̂_N(ω), ω) − L^{(1)}_N(θ̂_N(ω), ω) = L^{(1)}(θ̂_N(ω), ω) = L^{(1)}(θ̂(ω), ω) + L^{(2)}(ω)(θ̂_N(ω) − θ̂(ω)) = L^{(2)}(ω)(θ̂_N(ω) − θ̂(ω)).

So that

γ_N^{-1}(θ̂_N(ω) − θ̂(ω)) = γ_N^{-1}[L^{(2)}(ω)]^{-1}{L^{(1)}(θ̂_N(ω), ω) − L^{(1)}_N(θ̂_N(ω), ω)}

and, for ω ∈ [L^{(1)}_N(θ̂_N) = 0] ∩ [‖θ̂_N‖ ≤ K],
γ_N^{-1}‖θ̂_N(ω) − θ̂(ω)‖ ≤ ‖(L^{(2)}(ω))^{-1}‖ V_N(K).

Now consider

P_{θ*}[γ_N^{-1}‖θ̂_N − θ̂‖ > A],    K ≥ 1. Let ε > 0 be given; then there exists K₀ > 0 such that
P_{θ*}[‖θ̂‖ > K₀] < ε/6,    lim_{N→+∞} P_{θ*}[‖θ̂_N‖ > K₀] = P_{θ*}[‖θ̂‖ > K₀].

We can choose N₁ such that for N ≥ N₁,

|P_{θ*}[‖θ̂_N‖ > K₀] − P_{θ*}[‖θ̂‖ > K₀]| < ε/6,

so that P_{θ*}[‖θ̂_N‖ > K₀] < ε/3. Further, using lim_{N→+∞} P_{θ*}[L^{(1)}_N(θ̂_N) ≠ 0] = 0, we find N₂ such that for N ≥ N₂, P_{θ*}[L^{(1)}_N(θ̂_N) ≠ 0] < ε/3, and, using the fact that {‖L^{(2)}(·)^{-1}‖ V_N(K₀); N ≥ 1} is bounded in P_{θ*}-probability, we can find A and N₃ such that N ≥ N₃ implies P_{θ*}[‖L^{(2)}(·)^{-1}‖ V_N(K₀) > A] < ε/3. So, ε being given, there exist N₀ (N₀ = Max(N₁, N₂, N₃)) and A such that P_{θ*}[γ_N^{-1}‖θ̂_N − θ̂‖ > A] < ε. Then lim_{A→+∞} sup_N P_{θ*}[γ_N^{-1}‖θ̂_N − θ̂‖ > A] = 0. This completes the proof.
6. The result Now we come back to our problem about the sequence of statistical structures N ((7, ~Tr, {/z0T . N ., 0 ~ f(R~)}).
From Theorems 1, 2, 3 and [2] we immediately deduce the following theorem. Theorem 4. There exists a sequence {ON r ; N >_ 1} of estimates of O such that, for all O* E ~ ( R " ) , one has
(j) θ̂_{N,T} is ℱ_{N,T}-measurable,
(jj) lim_{N→+∞} μ^{θ*}_T[L^{(1)}_{N,T}(θ̂_{N,T}) = 0] = 1,
(jjj) μ^{θ*}_T-lim_{N→+∞} θ̂_{N,T} = θ̂_T = [∫₀^T dH_t · H′_t][∫₀^T H_t^{⊗2} dt]^{-1}.

Further, the sequences {δ_N^{-1/2}(θ̂_{N,T} − θ̂_T); N ≥ 1} and {δ_N^{-1/2}(θ̄_{N,T} − θ̂_T); N ≥ 1}, where

θ̄_{N,T} = [Σ_{i=1}^N (H_{t_i} − H_{t_{i−1}})H′_{t_{i−1}}][Σ_{i=1}^N (t_i − t_{i−1})H_{t_{i−1}}^{⊗2}]^{-1},

are bounded in μ^{θ*}_T-probability.

Proof. Using Theorems 1 and 2 and the fact that ∫₀^T H_t^{⊗2} dt is almost surely a strictly positive definite matrix [2], one can easily show that assumptions (A1)-(A4) in Theorem 3 are satisfied for {L_T, L_{N,T}; N ≥ 1} and {L_T, L̄_{N,T}; N ≥ 1} with γ_N = δ_N^{1/2}. So Theorem 3 applied to {L_T, L_{N,T}; N ≥ 1} provides assertions (j), (jj), (jjj) and the fact that {δ_N^{-1/2}(θ̂_{N,T} − θ̂_T); N ≥ 1} is a sequence which is bounded in μ^{θ*}_T-probability. Further, θ̄_{N,T} is clearly the unique solution of L̄^{(1)}_{N,T}(θ) = 0 and {θ̄_{N,T}; N ≥ 1} is nothing but the sequence of estimates provided by Theorem 3 for the sequence {L̄_{N,T}(θ); N ≥ 1}.
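As an illustrative scalar analogue of Theorem 4 (the assumed model dX = θX dt + dW stands in for the vector equation of the paper), the discrete-sampling estimator built from increments on the grid t_i = iT/N approaches the continuous-sample estimate as N grows:

```python
import numpy as np

def sample_path(theta=-0.5, T=50.0, n=2**15, x0=1.0, seed=3):
    """Fine Euler-Maruyama path of dX = theta * X dt + dW (assumed model)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + theta * x[k] * dt + rng.normal(0.0, np.sqrt(dt))
    return x

def theta_hat(x, T, N):
    """Discrete-sampling estimator on t_i = i T / N:
    sum X_{t_{i-1}} (X_{t_i} - X_{t_{i-1}}) / sum X_{t_{i-1}}^2 (t_i - t_{i-1})."""
    step = (len(x) - 1) // N
    xs = x[::step]
    return float(np.sum(xs[:-1] * np.diff(xs)) / np.sum(xs[:-1] ** 2 * (T / N)))

x = sample_path()
for N in (2**5, 2**10, 2**15):
    print(N, theta_hat(x, 50.0, N))
```

The coarse grids carry a discretization bias of order δ_N = T/N, while the statistical error of the continuous-sample limit is of order 1/√T, matching the δ_N^{1/2}-rate statements of the theorem in spirit.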
References

[1] A. Le Breton, "Estimation des paramètres d'une équation différentielle stochastique vectorielle linéaire", Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences, Paris, Série A, t. 279 (1974) 289-292.
[2] A. Le Breton, "Parameter estimation in a linear stochastic differential equation", in: Transactions of the 7th Prague Conference and 1974 E.M.S., to appear.
[3] J. Aitchison and S.D. Silvey, "Maximum-likelihood estimation of parameters subject to restraints", The Annals of Mathematical Statistics 29 (1958) 813-828.
Mathematical Programming Study 5 (1976) 145-168. North-Holland Publishing Company
ON INTEGRALS IN MULTI-OUTPUT DISCRETE-TIME KALMAN-BUCY FILTERING*
Anders LINDQUIST
University of Kentucky, Lexington, Ky., U.S.A.

Received 9 May 1975
Revised manuscript received 29 January 1976
The problem of determining the error covariance matrix for a discrete-time Kalman-Bucy filter is equivalent to solving a certain n × n-matrix Riccati equation, which, due to symmetry, contains n(n + 1)/2 scalar, coupled first-order difference equations. In this paper we show that, under the extra assumption that the underlying system is stationary, the matrix Riccati equation can be replaced by an algorithm containing mn first-order difference equations, where m is the number of outputs. Hence we have reduced the number of first-order recursions whenever, as is often the case, m < (n + 1)/2. This reduction is however bought at the expense of greater algebraic complexity, and therefore, at present, the merit claimed for this result is that of theoretical insight rather than computational efficiency. We hope that this insight will bring about better numerical procedures in the future. The reduction is achieved through the exploitation of certain redundancies in the matrix Riccati equation. Although the main portion of this paper is concerned with stationary systems, our method also works in a more general setting. Hence we investigate what happens when the assumption of stationarity is removed, and, under certain conditions, we find that the matrix Riccati equation can be replaced by νn first-order recursions, where ν ≤ n. We conjecture that this is true in general.
1. Introduction
Consider the discrete-time, n × n matrix Riccati equation

P(t + 1) = F[P(t) − P(t)H'(HP(t)H' + S)^{-1}HP(t)]F' + GG',  (1.1)
P(0) = P_0,  t = 0, 1, 2, ...,

where the n × n-matrices F and P_0, the m × n-matrix H, the n × p-matrix G, and the m × m-matrix S are all constant. Prime denotes

* This work was supported by the National Science Foundation under grant MPS75-07028.
A. Lindquist / Integrals in discrete-time Kalman-Bucy filtering
transpose. The matrices P_0 and S are nonnegative definite and symmetric, and moreover they are defined so as to insure the existence of the inverse in (1.1). (We may for example take S to be positive definite.) To begin with (in Sections 2-7) we shall also assume that F is a stability matrix (i.e. all its eigenvalues are inside the unit circle) and that P_0 is the (unique) solution of the Liapunov equation

P_0 − FP_0F' − GG' = 0.  (1.2)
The Riccati equation (1.1) is encountered in Kalman-Bucy filtering theory, where the matrix sequences

K(t) = FP(t)H',  (1.3)
R(t) = HP(t)H' + S  (1.4)
are to be determined. Condition (1.2) corresponds to the extra assumption that the underlying stochastic system is stationary. The solution of (1.1), subject to the additional condition (1.2), can also be determined [8, 10, 12, 13, 18] from the matrix recursion
P(t + 1) = P(t) − Q*(t)R*(t)^{-1}Q*(t)',  (1.5)
P(0) = P_0,

where the n × m-matrix sequence Q* is generated by the non-Riccati algorithm

K(t + 1) = K(t) − FQ*(t)R*(t)^{-1}Q*(t)'H',  (1.6)
Q*(t + 1) = [F − K(t)R(t)^{-1}H]Q*(t),  (1.7)
R(t + 1) = R(t) − HQ*(t)R*(t)^{-1}Q*(t)'H',  (1.8)
R*(t + 1) = R*(t) − Q*(t)'H'R(t)^{-1}HQ*(t)  (1.9)

with initial conditions

K(0) = Q*(0) = FP_0H',  (1.10)
R(0) = R*(0) = HP_0H' + S.  (1.11)
This algorithm determines K and R, which are the quantities usually required, directly without resort to (1.5). It can be written in a more compact form [10, 12, 13], but the present formulation is more suitable for later references. In fact, it can be shown that recursion (1.8) is not needed, and therefore (in view of the fact that the m × m-matrix R* is symmetric) the
algorithm actually contains only 2mn + m(m + 1)/2 scalar first-order difference equations. Often, and this is the raison d'être for our study, m ≪ n. Then the non-Riccati algorithm contains a much smaller number of equations than the n(n + 1)/2 of the matrix Riccati equation (1.1). This is of course provided that only K and R are required, for if we also want P, (1.5) has to be invoked.

The non-Riccati algorithm (1.6)-(1.9) was first obtained by the author [10]. The original derivation [10] proceeds from basic results in filtering of stationary time series [1, 5, 9, 22, 23] and does not in any way involve the Riccati equation. However, once the structure of the algorithm is known, it is not hard to derive it directly from the Riccati equation. Indeed, this merely amounts to observing that the factorization (1.5) holds. In doing this, (1.2) has to be applied in the initial stage to insure the correct initial conditions, for without (1.2) the algorithm (1.6)-(1.9) does not hold. The factorization (1.5) is however still valid provided that we account for the non-zero left member of (1.2) in determining the initial conditions. Obviously, the algorithm thus obtained has the same structure as (1.6)-(1.9), although Q* in general has a different number of columns. Whenever this number is small (which depends on the initial factorization), the procedure will result in a useful algorithm. This rather natural extension of our work was first presented by Kailath, Morf and Sidhu [8, 18].¹ A similar factorization technique had previously been used by Kailath [6] in obtaining the continuous-time analog [6, 7, 11, 12] of the non-Riccati algorithm, thereby extending the Chandrasekhar-type results of Casti, Kalaba and Murthy [3]. The results of [6] and [10] were developed independently. Another algorithm, which is similar to (1.6)-(1.9) and to which we shall have reason to return below, was developed independently by Rissanen [19, 20].
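The agreement between the Riccati recursion (1.1) and the algorithm (1.6)-(1.9) under the stationarity condition (1.2) is easy to check numerically. The following sketch is only illustrative; all system matrices are arbitrary choices (with F a stability matrix), not data from the text.

```python
import numpy as np

def stationary_P0(F, G, iters=4000):
    # Fixed-point iteration for the Liapunov equation (1.2): P0 = F P0 F' + G G'.
    P = np.zeros((F.shape[0], F.shape[0]))
    for _ in range(iters):
        P = F @ P @ F.T + G @ G.T
    return P

def riccati_KR(F, G, H, S, P0, steps):
    # K(t), R(t) from (1.3)-(1.4), with P(t) propagated by the Riccati equation (1.1).
    P, out = P0.copy(), []
    for _ in range(steps):
        R = H @ P @ H.T + S
        out.append((F @ P @ H.T, R))
        P = F @ (P - P @ H.T @ np.linalg.solve(R, H @ P)) @ F.T + G @ G.T
    return out

def fast_KR(F, G, H, S, P0, steps):
    # The non-Riccati algorithm (1.6)-(1.9) with initial conditions (1.10)-(1.11).
    K = F @ P0 @ H.T
    R = H @ P0 @ H.T + S
    Qs, Rs = K.copy(), R.copy()          # Q*(0) and R*(0)
    out = []
    for _ in range(steps):
        out.append((K.copy(), R.copy()))
        K, Qs, R, Rs = (K - F @ Qs @ np.linalg.solve(Rs, Qs.T) @ H.T,   # (1.6)
                        (F - K @ np.linalg.solve(R, H)) @ Qs,           # (1.7)
                        R - H @ Qs @ np.linalg.solve(Rs, Qs.T) @ H.T,   # (1.8)
                        Rs - Qs.T @ H.T @ np.linalg.solve(R, H @ Qs))   # (1.9)
    return out

# arbitrary example: n = 3 states, m = 2 outputs, F stable (Gershgorin radius < 1)
F = np.array([[0.5, 0.1, 0.0], [0.0, 0.4, 0.2], [0.1, 0.0, 0.3]])
G = np.array([[1.0], [0.5], [0.2]])
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
S = np.eye(2)
P0 = stationary_P0(F, G)
for (K1, R1), (K2, R2) in zip(riccati_KR(F, G, H, S, P0, 10), fast_KR(F, G, H, S, P0, 10)):
    assert np.allclose(K1, K2) and np.allclose(R1, R2)
```

Note that `fast_KR` propagates only K and R; recovering P itself would require (1.5), exactly as remarked above.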
The Riccati equation (1.1) and the non-Riccati algorithm (1.6)-(1.9) both have the form

x_i(t + 1) = f_i(x_1(t), x_2(t), ..., x_N(t)),  i = 1, 2, ..., N,  (1.12)
x_i(0) = a_i,

where f_1, f_2, ..., f_N are real-valued functions, and N equals n(n + 1)/2 and 2mn + m(m + 1) respectively. In this paper we shall study certain relations of the type

¹ These papers were completed while the authors had privileged access to [10]. We wish to point this out, since this fact is not reflected in the reference list of [8].
φ_i(x_1(t), x_2(t), ..., x_N(t)) ≡ constant  (1.13)
for t = 0, 1, 2, ..., where φ_1, φ_2, ..., φ_M (M < N) are real-valued functions. Following the terminology of the classical theory of differential equations, a relation such as (1.13) will be denoted an integral of the system (1.12). We believe that knowledge of these integrals will prove valuable in studying the numerical properties of the algorithms.

Using (1.6)-(1.9) in place of (1.1) amounts to a reduction in the number N of first-order difference equations whenever n ≥ 4m. This poses an interesting question: Is it possible to reduce this number even further? It is the purpose of this paper to show that this is indeed the case, and we shall use a system of integrals of (1.6)-(1.9) to see this. In fact, under the very natural assumption that (H, F, G) is a minimal realization [2], i.e. (H, F) is observable and (F, G) is controllable, we shall find that only mn first-order difference equations are needed to solve the Riccati equation (1.1) subject to the initial-condition constraint (1.2), and under certain special conditions even fewer are required. This reduction is mainly of theoretical interest. Indeed, from a computational point of view, the important problem (which we shall not consider here) is to minimize the number of arithmetic operations. Our basic approach does not rely on the assumption (1.2), and at the end of the paper we shall briefly discuss what happens when this constraint is removed.

The outline of the paper goes as follows. In Section 2 the algorithm (1.6)-(1.9) is transformed into a system which is "parameter free" in the sense that the recursions do not depend on (H, F, G), other than through a certain partitioning pattern. In fact, the system parameters enter only in the initial conditions. In Section 3 a set of mn + m(m + 1)/2 integrals of the parameter-free system is derived. These relations are highly nonlinear, but by a suitable transformation a bilinear system results, so that mn + m(m + 1)/2 variables can be solved in terms of the remaining mn. The proof of this is found in Section 4. In Section 5, the n(n + 1)/2 integrals of the combined system (1.5)-(1.9), obtained by eliminating P(t + 1) between (1.1) and (1.5), are solved for P. As a preliminary of Section 7, in Section 6 a transformation is applied to the parameter-free version, the purpose of which is to eliminate R*. Then, in Section 7, the reduced-order algorithm is presented. Finally, Section 8 is devoted to a brief discussion of the nonstationary case.

This work extends the results of our previous paper [14] to the (discrete-time) multi-output case (m > 1). The corresponding continuous-time result
is reported in [15]. Recently a seemingly similar result has been presented by Luo and Bullock [17]. However, it should be noted that the mn equations of [17] are not of first order and therefore do not constitute a reduced-order system in the sense of this paper.
2. A generating function formulation of the non-Riccati algorithm

Letting e_i^{(k)} be the ith unit vector in R^k, define the k × k shift matrix

J_k = (0, e_1^{(k)}, e_2^{(k)}, ..., e_{k-1}^{(k)})  (2.1)

and the m × k-matrices

H_{ik} = (e_i^{(m)}, 0, 0, ..., 0)  (2.2)
with a unit vector in the first column and zeros in all other positions. Then, assuming that (H, F) is observable and H has full rank, it is no restriction to take F and H to be in the canonical form

F = J − AH,  (2.3)
H = (H_{1n_1}, H_{2n_2}, ..., H_{mn_m}),  (2.4)

where n_1, n_2, ..., n_m are positive integers such that n_1 + n_2 + ... + n_m = n, J is the n × n block-diagonal matrix

    J = | J_{n_1}  0       ...  0       |
        | 0        J_{n_2} ...  0       |
        | ...                           |
        | 0        0       ...  J_{n_m} |,   (2.5)
and A is a constant n × m-matrix, which we shall partition in the following way:

    A = | a^{11}  a^{12}  a^{13}  ...  a^{1m} |
        | a^{21}  a^{22}  a^{23}  ...  a^{2m} |
        | ...                                 |
        | a^{m1}  a^{m2}  a^{m3}  ...  a^{mm} |,   (2.6)
a^{ij} being an n_i-dimensional (column) vector. In fact, (F, H) can always be
transformed into the form (2.3)-(2.4) (see e.g. [16]), and in the sequel we shall assume that this has been done. The non-Riccati algorithm can now be recast into a form which is parameter-free in the sense that the recursions do not depend on the system parameters A:

Lemma 2.1. The matrix sequences K, R, Q* and R*, defined in Section 1, are determined by the system of matrix recursions

Q(t + 1) = Q(t) − JQ*(t)Γ*_t;  Q(0) = (J − AH)P_0H' + AR(0),  (2.7)
Q*(t + 1) = JQ*(t) − Q(t)Γ_t;  Q*(0) = (J − AH)P_0H',  (2.8)
R(t + 1) = R(t)(I − Γ_tΓ*_t);  R(0) = HP_0H' + S,  (2.9)
R*(t + 1) = R*(t)(I − Γ*_tΓ_t);  R*(0) = R(0),  (2.10)

where the m × m-matrix parameter sequences Γ and Γ* are defined as

Γ_t = R(t)^{-1}HQ*(t),  (2.11)
Γ*_t = R*(t)^{-1}Q*(t)'H',  (2.12)

and the gain K is given by

K(t) = Q(t) − AR(t).  (2.13)
Proof. Relations (2.9) and (2.10) are the same as (1.8) and (1.9). To obtain (2.7), insert (2.13) and (2.3) into (1.6):

Q(t + 1) − AR(t + 1) = Q(t) − AR(t) − (J − AH)Q*(t)Γ*_t.

By (2.11) the last term can be written JQ*(t)Γ*_t − AR(t)Γ_tΓ*_t, and therefore, in view of (2.11) and (2.12), relation (2.7) follows. Similarly, (1.7), (2.3) and (2.11) give us

Q*(t + 1) = JQ*(t) − AR(t)Γ_t − K(t)Γ_t,

which, by (2.13), is the same as (2.8).

For m = 1 this version of the non-Riccati algorithm is essentially identical to Rissanen's algorithm [19]. (See Remark 4.2 of [14], where this is explained in detail.) For m > 1, the relation is more complicated in that Rissanen's algorithm contains more equations.
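Lemma 2.1 can likewise be checked numerically. The sketch below (ours, not the paper's) builds a single-output (m = 1) canonical pair, with J having ones on the superdiagonal and H picking the first state, and verifies that the gain K(t) = Q(t) − AR(t) of (2.13) coincides with FP(t)H' of (1.3); all numbers are arbitrary illustrative choices.

```python
import numpy as np

n = 4
J = np.diag(np.ones(n - 1), 1)               # shift matrix (2.1) for m = 1
H = np.zeros((1, n)); H[0, 0] = 1.0          # canonical H, (2.4)
A = np.array([[0.3], [0.2], [0.1], [0.05]])  # arbitrary choice keeping F stable
F = J - A @ H                                # canonical form (2.3)
G = np.array([[1.0], [0.5], [0.25], [0.1]])
S = np.array([[1.0]])

# stationary P0 from the Liapunov equation (1.2), by fixed-point iteration
P0 = np.zeros((n, n))
for _ in range(3000):
    P0 = F @ P0 @ F.T + G @ G.T

# initial conditions of Lemma 2.1
Q  = (J - A @ H) @ P0 @ H.T + A @ (H @ P0 @ H.T + S)
Qs = (J - A @ H) @ P0 @ H.T
R  = H @ P0 @ H.T + S
Rs = R.copy()

P = P0.copy()
for t in range(8):
    assert np.allclose(Q - A @ R, F @ P @ H.T)     # (2.13) against (1.3)
    assert np.allclose(R, H @ P @ H.T + S)         # (1.4)
    Gam  = np.linalg.solve(R, H @ Qs)              # (2.11)
    Gams = np.linalg.solve(Rs, Qs.T @ H.T)         # (2.12)
    Q, Qs, R, Rs = (Q - J @ Qs @ Gams,             # (2.7)
                    J @ Qs - Q @ Gam,              # (2.8)
                    R @ (np.eye(1) - Gam @ Gams),  # (2.9)
                    Rs @ (np.eye(1) - Gams @ Gam)) # (2.10)
    # reference trajectory from the Riccati equation (1.1)
    P = F @ (P - P @ H.T @ np.linalg.solve(H @ P @ H.T + S, H @ P)) @ F.T + G @ G.T
```

Here `Gam` and `Gams` stand for Γ_t and Γ*_t; the recursions indeed contain A only through the initial conditions, as the lemma asserts.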
Note that the algorithm of Lemma 2.1 actually only contains 2mn + m nontrivial scalar (first-order) recursions. In fact, from (1.3), (1.4), (2.3) and (2.13), we have

Q(t) = JP(t)H' + AS,  (2.14)
R(t) = HP(t)H' + S.  (2.15)

It follows from (2.4) that row number ν_i in Q (where ν_i = Σ_{k=1}^{i} n_k) is constant in t and need not be updated. (This can also be seen from (2.7).) Since there are m such rows, each with m elements, m² equations can be removed from the 2mn + m(m + 1) recursions of Lemma 2.1. From (2.14) and (2.15) it is also clear that the components of the n × m-matrix P(t)H' from which R(t) is formed are exactly the ones excluded from (2.14). Similarly,
R(t + 1) = R(t) − HQ*(t)Γ*_t  (2.16)
and (2.7) show that Q and R are updated by the corresponding components of Q*(t)Γ*_t. This fact will be used in the generating function formulation of Lemma 2.1, to which we now proceed. To this end, define the sets of n_i-dimensional vectors

{q^{ij}(t); i = 1, 2, ..., m, j = 1, 2, ..., m}  (2.17)

and

{q*^{ij}(t); i = 1, 2, ..., m, j = 1, 2, ..., m}  (2.18)

formed by partitioning Q and Q* in the same way as in (2.6). For these vectors we have the following generating functions

Q_t^{ij}(z) = Σ_{k=0}^{n_i} q_k^{ij}(t) z^k,  (2.19)
Q*_t^{ij}(z) = Σ_{k=0}^{n_i} q*_k^{ij}(t) z^k,  (2.20)

i, j = 1, 2, ..., m, where we have defined the 0th components of the vectors (2.17) and (2.18) as

q_0^{ij}(t) = r_{ij}(t) = element (i, j) in R(t),  (2.21)
q*_0^{ij}(t) = 0.  (2.22)
Then the following lemma is an immediate consequence of (2.7), (2.8) and (2.16):
Lemma 2.2. The m × m-matrix valued polynomials Q_t(z) and Q*_t(z), with components (2.19) and (2.20), satisfy the following recursions:

Q_{t+1}(z) = Q_t(z) − (1/z)Q*_t(z)Γ*_t,  (2.23)
Q*_{t+1}(z) = (1/z)Q*_t(z) − Q_t(z)Γ_t.  (2.24)
3. Redundancy in the non-Riccati algorithm
The number of recursions in Lemma 2.1 can be further reduced. In fact, there is an extensive redundancy in the algorithm, manifested in the existence of a number of simple integrals. In the following lemma, which generalizes a previously presented result [14] to the case m > 1, we give a generating function formulation of these relations.

Lemma 3.1. Let Q_t(z) and Q*_t(z), t = 0, 1, 2, ..., be the m × m-matrix valued polynomials defined in Section 2. Then there is a constant matrix polynomial B(z) such that

Q_t(z)R(t)^{-1}Q_t(1/z)' − Q*_t(z)R*(t)^{-1}Q*_t(1/z)' = ½[B(z) + B(1/z) − B(0)]  (3.1)

holds for all t.
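For m = 1 (where R* ≡ R) the integral (3.1) can be verified numerically: running the recursions of Lemma 2.1 and forming the Laurent coefficients of Q_t(z)Q_t(1/z)/R(t) − Q*_t(z)Q*_t(1/z)/R*(t) gives the same coefficients at every t. The system below is an arbitrary illustrative choice, not data from the text.

```python
import numpy as np

n = 3
J = np.diag(np.ones(n - 1), 1)
H = np.zeros((1, n)); H[0, 0] = 1.0
A = np.array([[0.4], [0.2], [0.1]])
F = J - A @ H
G = np.array([[1.0], [0.6], [0.3]])
S = np.array([[1.0]])

P0 = np.zeros((n, n))
for _ in range(3000):                       # Liapunov equation (1.2)
    P0 = F @ P0 @ F.T + G @ G.T

Q  = (J - A @ H) @ P0 @ H.T + A @ (H @ P0 @ H.T + S)
Qs = (J - A @ H) @ P0 @ H.T
R  = H @ P0 @ H.T + S
Rs = R.copy()

def laurent_invariant(Q, Qs, R, Rs):
    # Coefficients (z-powers -n..n) of Q(z)Q(1/z)/R - Q*(z)Q*(1/z)/R*.
    # Scalar Q_t(z) has coefficients (r(t), Q[0], ..., Q[n-1]) by (2.19), (2.21);
    # Q*_t(z) has 0th coefficient 0 by (2.22).
    a = np.concatenate(([R[0, 0]], Q[:, 0]))
    b = np.concatenate(([0.0], Qs[:, 0]))
    c = np.zeros(2 * n + 1)
    for p in range(n + 1):
        for q in range(n + 1):
            c[p - q + n] += a[p] * a[q] / R[0, 0] - b[p] * b[q] / Rs[0, 0]
    return c

c0 = laurent_invariant(Q, Qs, R, Rs)
for t in range(6):
    Gam  = np.linalg.solve(R, H @ Qs)
    Gams = np.linalg.solve(Rs, Qs.T @ H.T)
    Q, Qs, R, Rs = (Q - J @ Qs @ Gams, J @ Qs - Q @ Gam,
                    R @ (np.eye(1) - Gam @ Gams), Rs @ (np.eye(1) - Gams @ Gam))
    assert np.allclose(laurent_invariant(Q, Qs, R, Rs), c0)
```

The constant vector `c0` is (up to the symmetric arrangement) the coefficient set of B(z) in (3.1).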
(Explicit formulas for the coefficients of B(z) in terms of A and G will be given in Section 5.)

Proof. We need the following relation, the proof of which can be found in [10, Lemma 3.2]:

Γ_t R*(t + 1)^{-1} = R(t + 1)^{-1}Γ*_t'.  (3.2)

By using the recursions of Lemma 2.2, we obtain

Q_{t+1}(z)R(t + 1)^{-1}Q_{t+1}(1/z)' − Q*_{t+1}(z)R*(t + 1)^{-1}Q*_{t+1}(1/z)'
= Q_t(z)[I − Γ_tΓ*_t]R(t + 1)^{-1}Q_t(1/z)' − Q*_t(z)[I − Γ*_tΓ_t]R*(t + 1)^{-1}Q*_t(1/z)'  (3.3)

which, in view of (2.9) and (2.10), establishes that the left member of (3.1) is constant in t. Clearly, it must have the form exhibited in the right member of
(3.1). To obtain (3.3), we have used (3.2) three times, whereupon the appropriate cancellations have occurred.

Equation (3.1) provides us with m² generating function relations, the ijth of which contains n_i + n_j + 1 scalar relations, i.e. 2mn + m² in total. However, due to symmetry, most of these occur in pairs, so that we only have mn + m(m + 1)/2 different relations. (We can only use the functions, say, above the diagonal, and the ith diagonal one only provides n_i + 1 scalar relations.) These relations are highly nonlinear, but as in the scalar case [14], we shall transform them into bilinear forms. However, unlike the scalar case, R* ≠ R, and therefore the simple transformation of [14] is no longer directly applicable, but needs to be modified slightly. To this end, we define
Q̃*_t(z) = Q*_t(z)R*(t)^{-T/2}R(t)^{T/2},  (3.4)

where R = R^{1/2}R^{T/2} is the Cholesky factorization (see e.g. [21]), i.e. R^{1/2} is lower triangular and R^{T/2} is its transpose. (Some other factorization could also be used.) In Section 6 we shall reformulate the non-Riccati algorithm in terms of Q and Q̃*. Equation (3.1) can now be written

Q_t(z)R(t)^{-1}Q_t(1/z)' − Q̃*_t(z)R(t)^{-1}Q̃*_t(1/z)' = ½[B(z) + B(1/z) − B(0)],  (3.5)
to which we can apply the transformation

U_t(z) = [Q_t(z) − Q̃*_t(z)]R(t)^{-1},  (3.6)
V_t(z) = Q_t(z) + Q̃*_t(z)  (3.7)

to obtain

U_t(z)V_t(1/z)' + V_t(z)U_t(1/z)' = B(z) + B(1/z) − B(0),  (3.8)
which is the required bilinear form. The m × m-matrix polynomials U_t and V_t have components

U_t^{ij}(z) = Σ_k u_k^{ij}(t) z^k,  (3.9)
V_t^{ij}(z) = Σ_k v_k^{ij}(t) z^k.  (3.10)

Here,
u_0^{ij} = δ_{ij} = 1 if i = j, 0 if i ≠ j,  (3.11)
v_0^{ij} = r_{ij} = element (i, j) in R.  (3.12)
Note that the 0th coefficient of the matrix polynomial U_t(z) is the identity matrix, so that the only time-varying coefficients of U_t^{ij}(z) are the ones collected in the n_i-dimensional (column) vector

u^{ij} = (u_1^{ij}, u_2^{ij}, ..., u_{n_i}^{ij})'.  (3.13)
In analogy to (2.6) we shall arrange the vectors u^{ij} in the n × m-matrix

    U = | u^{11}  u^{12}  u^{13}  ...  u^{1m} |
        | u^{21}  u^{22}  u^{23}  ...  u^{2m} |
        | ...                                 |
        | u^{m1}  u^{m2}  u^{m3}  ...  u^{mm} |.   (3.14)
The vectors v^{ij} and the matrix V are defined in the same manner. The mn + m(m + 1)/2 equations of (3.8) can now be used to solve the mn + m(m + 1)/2 components of V(t) and R(t) in terms of the mn components of U(t). In Section 7 we shall thus be able to present an algorithm for U to replace the present non-Riccati equations. This algorithm will of course only contain mn scalar (first-order) equations. Identifying coefficients of z^i in (3.8) we obtain
Σ_{k=1}^{m} Σ_{l=0}^{ñ} [u^{αk}_{l+i} v^{βk}_l + v^{αk}_{l+i} u^{βk}_l] = b^{αβ}_i,  (3.15)

where i = −n_α, −n_α + 1, ..., 0, ..., n_β − 1, n_β (if α ≠ β) or i = 0, 1, ..., n_α (if α = β). [Note that (3.9) and (3.10) define u^{ij}_k and v^{ij}_k to be zero whenever k < 0 or k > n_i. We have chosen the upper limit ñ of the second summation of (3.15) to be greater than all n_i.] Let us now define the rectangular Toeplitz matrix function (the two first arguments of which exhibit its dimension)
T(i, j, x_{−i}, x_{−i+1}, ..., x_j) =

    | x_0      x_1       x_2       ...  x_j     |
    | x_{−1}   x_0       x_1       ...  x_{j−1} |
    | x_{−2}   x_{−1}    x_0       ...  x_{j−2} |
    | ...                                       |
    | x_{−i}   x_{−i+1}  x_{−i+2}  ...  x_{j−i} |   (3.16)
and the Hankel matrix function

H(i, j, x_0, x_1, ..., x_{i+j}) =

    | x_0  x_1      x_2      ...  x_j     |
    | x_1  x_2      x_3      ...  x_{j+1} |
    | x_2  x_3      x_4      ...  x_{j+2} |
    | ...                                 |
    | x_i  x_{i+1}  x_{i+2}  ...  x_{i+j} |   (3.17)
in terms of which we can write (3.15) as
Σ_{k=1}^{m} {T(n_i + n_j, n_j, 0', δ_{jk}, u^{jk}, 0') [r_{jk}; v^{jk}] + H(n_i + n_j, n_j, 0', δ_{ik}, u^{ik}, 0') [r_{ik}; v^{ik}]} = b^{ij}  (3.18)

for j = i + 1, i + 2, ..., m; i = 1, 2, ..., m, and

Σ_{k=1}^{m} {T(n_i, n_i, 0'', δ_{ik}, u^{ik}) + H(n_i, n_i, δ_{ik}, u^{ik}, 0''')} [r_{ik}; v^{ik}] = b^i  (3.19)

for i = j. Here 0_k is the k-dimensional zero vector, δ_{jk} is defined by (3.11), b^{ij} is the (n_i + n_j + 1)-dimensional vector

b^{ij} = (b^{ij}_{−n_i}, b^{ij}_{−n_i+1}, ..., b^{ij}_{n_j})',  (3.20)

and b^i the (n_i + 1)-dimensional vector

b^i = (b^i_0, b^i_1, ..., b^i_{n_i})'.  (3.21)

(Note that, in (3.18)-(3.19), 0_k and u^{ik} are arrays of arguments.) The m(m − 1)/2 vector relations (3.18) and the m vector relations (3.19) together constitute a system of mn + m(m + 1)/2 linear equations in the mn + m(m + 1)/2 components of V and R. If the coefficient matrix has full rank, we can solve this system to obtain R and V in terms of U:
R(t) = M_0(U(t)),  (3.22)
V(t) = M(U(t)).  (3.23)
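The matrix functions (3.16) and (3.17) can be realized directly from their definitions; the helpers below are our own sketch (the names `toeplitz_T` and `hankel_H` are not from the text), with the arguments passed as a single list.

```python
import numpy as np

def toeplitz_T(i, j, x):
    # T(i, j, x_{-i}, ..., x_j) of (3.16); x lists (x_{-i}, ..., x_j),
    # so x[i + k] holds x_k.  The (i+1) x (j+1) matrix has entry (r, c) = x_{c-r}.
    assert len(x) == i + j + 1
    return np.array([[x[i + c - r] for c in range(j + 1)] for r in range(i + 1)])

def hankel_H(i, j, x):
    # H(i, j, x_0, ..., x_{i+j}) of (3.17); entry (r, c) = x_{r+c}.
    assert len(x) == i + j + 1
    return np.array([[x[r + c] for c in range(j + 1)] for r in range(i + 1)])
```

For example, `toeplitz_T(2, 1, [5, 4, 3, 2])` (arguments x_{-2}, x_{-1}, x_0, x_1) has rows (x_0, x_1), (x_{-1}, x_0), (x_{-2}, x_{-1}), i.e. [[3, 2], [4, 3], [5, 4]].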
A sufficient condition for this to be the case is provided by the following theorem, the proof of which will be given in Section 4.

Theorem 3.2. Let (F, H) be given by (2.3)-(2.4), and assume that (F, G) is controllable. Let C be any matrix such that S = CC', and suppose there is a matrix X such that
CX = H.  (3.24)

Then, for each fixed t = 0, 1, 2, ..., the system (3.18)-(3.19) of linear equations has a unique solution (3.22)-(3.23).
In particular, condition (3.24) is satisfied if S is nonsingular.

Remark 3.3. The Toeplitz-Hankel structure of the system (3.18)-(3.19) permits the derivation of a fast algorithm for the solution of the system. Hence this latter task should not be quite as laborious as it first appears.

Remark 3.4. To more explicitly bring out the similarities with the exposition in [14] we may instead use the following formalism: Let ñ = max(n_1, n_2, ..., n_m), and define the m × m-matrices

    Û_i = | u_i^{11}  u_i^{12}  ...  u_i^{1m} |
          | u_i^{21}  u_i^{22}  ...  u_i^{2m} |
          | ...                               |
          | u_i^{m1}  u_i^{m2}  ...  u_i^{mm} |,   (3.25)
i = 0, 1, 2, .... The matrices V̂_i are defined analogously. Clearly Û_i = V̂_i = 0 for i > ñ, and by (3.11) and (3.12)

Û_0 = I,  (3.26)
V̂_0(t) = R(t).  (3.27)

In fact, Û_i(t) and V̂_i(t) are the coefficients of the matrix polynomials U_t(z) and V_t(z):

U_t(z) = Σ_{i=0}^{ñ} Û_i(t)z^i,  (3.28)
V_t(z) = Σ_{i=0}^{ñ} V̂_i(t)z^i,  (3.29)

which inserted into (3.8) yields

Σ_{j=0}^{ñ} [Û_{j+i}V̂_j' + V̂_{j+i}Û_j'] = B_i  (3.30)

for i = 0, 1, ..., ñ, after identifying coefficients of z^i. This is as far as we can
bring the analogy with the scalar case, for in order to be able to write the left member of (3.30) as the sum of a block Toeplitz and a block Hankel system, we would need to have the second term transposed. Relation (3.30) is of course identical to (3.15), and we shall have to proceed as above.

Remark 3.5. Precisely as in the scalar case [14], some components of Q and Q*, and hence of U, will be zero when the polynomial B(z) satisfies certain conditions, which however are somewhat more complicated in the multivariate case. For example, if S = 0, q_{n_i}^{ij} ≡ 0 by (2.14); then, in view of (2.8), q*_{n_i}^{ij}(t) ≡ 0 whenever t ≥ 1; hence u_{n_i}^{ij}(t) ≡ 0, too, for the same values of t. Therefore the number of equations can be further reduced.
4. Proof of Theorem 3.2
Lemma 4.1. Let (F, H) be given by (2.3)-(2.4), and let {P(t); t = 0, 1, 2, ...} be the solution of (1.1)-(1.2). Then, for each fixed t = 0, 1, 2, ..., P(t) satisfies the Liapunov equation

P = (J − UH)P(J − UH)' + (U − A)S(U − A)' + GG'  (4.1)

where U = U(t), defined by (3.6).

Proof. From (3.6) and (3.4) we have

Q*(R*)^{-T/2}R^{T/2} = K − (U − A)R.  (4.2)

Then eliminating P(t + 1) and Q*(t) between (1.1), (1.5) and (4.2) we have

P = [F − (U − A)H]P[F − (U − A)H]' + (U − A)S(U − A)' + GG',

which is the same as (4.1).

The following lemma is a discrete-time version of a result due to Wonham [24, p. 289]. We shall use the same idea of proof.

Lemma 4.2. Assume that the conditions of Theorem 3.2 hold. Then, for each fixed t = 0, 1, 2, ...,

J − U(t)H  (4.3)

is a stability matrix, i.e. all its eigenvalues have moduli less than one.

Proof. Let (4.3) be denoted F̃(t), and let G̃ be defined by
G̃G̃' = (U − A)S(U − A)' + GG'.

Since (F, G) is controllable, (F − (U − A)CX, G̃) is controllable for each matrix X of suitable dimension [24, Lemma 4.1]. Choose X to be a solution of (3.24). Then it follows that (F̃, G̃) is controllable. The Liapunov equation (4.1) can be written

P = F̃PF̃' + G̃G̃',  (4.1)'

or, by successive iterations,

P = F̃^s P(F̃^s)' + Σ_{i=0}^{s-1} (F̃^i G̃)(F̃^i G̃)',  (4.5)

which holds for all s = 0, 1, 2, .... Now (4.1)', and hence (4.5), has a solution which is nonnegative definite, namely the Riccati matrix P(t). Hence

Σ_{i=0}^{s-1} (F̃^i G̃)(F̃^i G̃)' ≤ P(t)

for s = 0, 1, 2, ... (where ≤ is defined with respect to the cone of nonnegative definite matrices). Consequently F̃ must be a stability matrix.

Lemma 4.3. Let Y be an n × m-matrix, and let Y(z) be the m × m-matrix polynomial formed from Y in analogy with (3.28). Then
Y(z)X(1/z)' + X(z)Y(1/z)' = B(z) + B(1/z) − B(0)  (4.6)

has a unique m × m-matrix polynomial solution X(z) (of degree

D_T(S, M) with probability one as N → ∞, and to be parameter identifiable if, in addition, D_T(S, M) consists of only one point. Although this definition makes no reference to any "true" parameter value θ_0, it should be regarded as "consistency-oriented", since the requirement that D_T(S, M) is non-empty implies that there is a "very good model" available among the set of models M. Indeed, if D_T contains a "true" parameter θ_0, then this definition of parameter identifiability is equivalent to the one first given.

These definitions require that the true system allows an exact description within the model set. In practice this is usually not a very realistic assumption, since almost any real-life process is more complex than we would allow our model to be. However, even if the set of models does not contain the true system, questions of identifiability of the model parameters are still relevant. One could think of a state space model like (1) where all the matrices are filled with parameters. Even if the data are furnished by an infinitely complex system, it will not be possible to identify the parameters of the model, simply because several models give exactly the same fit, i.e., the identification criterion V_N(θ) does not have a unique minimum. This leads us to "uniqueness-oriented identifiability definitions", like in [11], where a model set is said to be (globally) identifiable if the identification criterion used has a unique global minimum.

A complication in the present context is that the identification criterion is a random function and a bit awkward to handle. We would be much better off if V_N(θ) converges (with probability one) to a deterministic function (or asymptotically behaves like one). Let us remark here already that such convergence must be uniform in θ, in order to enable us to relate minima of V_N(θ) to minima of the deterministic function.
We shall have occasion to return to this point below. In addition to the references mentioned above, interesting results can also be found in, e.g., [29, 30 and 31].
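A hypothetical toy illustration of the uniqueness-oriented viewpoint (ours, not from the text): in the over-parameterized model y(t) = θ_1θ_2 u(t) + e(t), only the product θ_1θ_2 enters the predictor, so the criterion has no unique minimum and the individual parameters are not identifiable even though the gain is.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(500)
y = 6.0 * u + 0.1 * rng.standard_normal(500)   # data generated with gain 6

def V_N(theta):
    # quadratic prediction-error criterion for the model y = th1 * th2 * u
    return np.mean((y - theta[0] * theta[1] * u) ** 2)

# (2, 3) and (3, 2) give the same predictor, hence the same criterion value:
assert V_N((2.0, 3.0)) == V_N((3.0, 2.0))
```

Every point on the hyperbola θ_1θ_2 = 6 minimizes the criterion, so no minimization routine can single out one parameter pair from the data alone.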
L. Ljung / Consistency and identifiability
4. Some consistency results
The consistency problem for the maximum likelihood method has been quite widely studied. For independent observations the consistency has been studied by, e.g., Cramér [12], Wald [13] and Kendall-Stuart [14]. The application of the maximum likelihood method to system identification (for single-input single-output models in difference equation form) was introduced in [7], where it also is shown how the assumption of independent observations can be relaxed. Applications to other (linear) model choices have been considered in, e.g., [15, 16, 17, 18, 9 and 19]. However, it should be remarked that several of the proofs of strong consistency (convergence with probability one to the true parameter value) are not complete, a fact that can be traced back to a shortcoming in the proof in [14]. The first complete strong consistency proofs for applications to system identification seem to be given in [2 and 20].

Let us cite, for future discussion, the following consistency result from [3, Theorem 4.2 and Lemma 5.1].

Theorem 1. Consider the set of models described in Example 1. Assume that
D̄, over which the search in θ is performed, is a compact subset of D_S(M) (cf. (16)), and is such that D_T(S, M) defined by (22) is non-empty. Assume that the actual system (with possible feedback terms) is exponentially stable and that the innovations of its output have bounded variance and are of full rank. Then the identification estimate θ_N that minimizes the criterion (19) converges into

D_I = {θ | θ ∈ D̄; lim inf_{N→∞} (1/N) Σ_{t=1}^{N} E|ŷ(t|S) − ŷ(t|θ)|² = 0 for the actual input to the process}  (23)

with probability one as N tends to infinity.

This result is rather general and is not based on any ergodicity assumptions. To ensure parameter consistency, it should be required first that the actual input during the identification experiment was sufficiently general so that D_I ⊂ D_T(S, M) holds (which implies "system identifiability"), and secondly, that the model is suitably parameterized so that
D_T(S, M) = {θ*} holds. It is convenient to study these conditions separately.

The restrictive assumption in the theorem apparently is that D_T(S, M) be non-empty. This requires the true system to be "not too complex" and is rarely met for real-life processes. However, the philosophy of consistency results should be viewed as a test of the method: if the method is unable to recognize the true system in a family of models, then it is probably not a good method. The same philosophy clearly lies behind testing identification methods on simulated data. It should however be noted that from such consistency results, strictly speaking, nothing can be stated about the performance of the method when applied to a system that does not have an exact description within the set of models.
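The testing-on-simulated-data philosophy is easy to illustrate (a hypothetical example of ours, not taken from the text): for a first-order autoregression that does lie in the model set, the least-squares estimate, which coincides with Gaussian maximum likelihood here, approaches the true parameter as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, N = 0.7, 20000
y = np.zeros(N + 1)
for t in range(N):
    y[t + 1] = a_true * y[t] + rng.standard_normal()

# least-squares (Gaussian ML) estimate of a in y(t+1) = a y(t) + e(t)
a_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
assert abs(a_hat - a_true) < 0.05
```

The standard error here is of order sqrt((1 − a²)/N), about 0.005 for these values, so the estimate recognizes the true system well within the stated tolerance.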
5. A limit result for the criterion function
In this section we shall give results for determining the asymptotic behaviour of the estimates θ_N that minimize the criterion function (19), V_N(θ), also in the case where the true system is more complex than can be described within the set of models. We shall do that by giving conditions under which

V̄_N(θ) = E[Q_N(θ)]

can be used for the asymptotic analysis. Thereby "the stochastic part of the problem" is removed and the analysis can proceed with the deterministic loss function V̄_N(θ).

In order to make the analysis as general as possible, we would like to impose as weak conditions as possible upon the actual system. The important property we need is a stability property, but in order to state it, we shall assume that the true system with (possibly adaptive) feedback admits a description as follows:

x(t + 1) = f[t; x(t), u(t), e(t)],
y(t) = g[t; x(t), e(t)],  (24)
u(t) = h[t; x(t), ..., x(0), u_R(t)],

where y(·) is the output, u(·) the actual input to the process, u_R(·) a reference (extra) input, or noise in the feedback, and e(·) is a sequence of
independent random variables. The over-all stability property which we shall require is the following. Define y_s^0(·) and u_s^0(·) through

x_s^0(t + 1) = f[t; x_s^0(t), u_s^0(t), e(t)],
y_s^0(t) = g[t; x_s^0(t), e(t)],
u_s^0(t) = h[t; x_s^0(t), ..., x_s^0(s), 0, ..., 0, u_R(t)],  (25a)
x_s^0(s) = 0 (or any value independent of e(r), r < s).

Then the property is

E|y(t) − y_s^0(t)|^4 < Cλ^{t−s},
E|u(t) − u_s^0(t)|^4 < Cλ^{t−s}.

N_0(θ*, ρ, ε, ω) for all ω ∈ Ω(θ*), where P[Ω(θ*)] = 1], and similarly for ȳ(t, θ*, θ) for all θ*.  (A2)
Proof of Lemma A. We shall throughout the proof use constants C and λ, where λ < 1, that do not need to be the same. We shall also allow ourselves to suppress arguments and subscripts freely, when there is no risk of confusion. Let the linear filter determining ŷ(t|θ) be given by (12). For B(θ*, ρ) ⊂ D̄, we then have

sup_{θ∈B} [|h_{t,k}(θ)| + |f_{t,k}(θ)|] < Cλ^{t−k}  (A3a)

and hence

sup_{θ∈B} |ŷ(t|θ)| = sup_{θ∈B} |Σ_{k=0}^{t} h_{t,k}(θ)y(k) + Σ_{k=0}^{t} f_{t,k}(θ)u(k)|
> 0, there exists a pair of prices, (P_A*, P_B*), with 0 < (V_{A/B} − P_A*) ... Max(V_{B/A}, V_{B/Ā}), then there exist P_A and P_B, satisfying the conditions of Corollary 3.2, such that

J_C(P_A, P_B) > J_C(P̂_A, P̂_B).

Note that (a) condition (i) in the corollary simply requires that AI cannot be simultaneously contained in I_A and I_B, and that it is not useless. (b) If both P̂_A and P̂_B are greater than Max(V_{A/B}, V_{A/B̄}) and Max(V_{B/A}, V_{B/Ā}), respectively, then it is obvious that there exists such a pair (P_A, P_B) that J_C(P_A, P_B) > J_C(P̂_A, P̂_B). To see this, by part (b) of Proposition 3.1, A and B will not buy AI with such a pair of prices. Thus, C will gain nothing.
Corollary 3.6. Let condition (i) in Corollary 3.5 be assumed.
(i) If P̄_A = Max(V_{A/B}, V_{A/Ā}) (or P̄_B = Max(V_{B/A}, V_{B/Ā})), then A (or B) will either choose "D" definitely or be indifferent between "D" and "B".
(ii) If P̄_A = Min(V_{A/B}, V_{A/Ā}) (or P̄_B = Min(V_{B/A}, V_{B/Ā})), then A (or B) will either choose "B" definitely or be indifferent between "D" and "B". However, if A (or B) chooses "B" definitely, then there exists a pair (P_A, P_B) such that

J_C(P_A, P_B) > J_C(P̄_A, P_B) [or J_C(P_A, P̄_B)],

where (P_A, P_B) satisfies the conditions in Corollary 3.2.

Remark 3.7. From these four corollaries, the only region left in which C can possibly increase his return definitely, compared with (P̄_A, P̄_B), is to announce (P_A, P_B) in R, where
F. Sun, Y. Ho / Value of information in zero-sum games
R = {(P_A, P_B): Min(V_{A/B}, V_{A/Ā}) < P_A < Max(V_{A/B}, V_{A/Ā}),
Min(V_{B/A}, V_{B/Ā}) < P_B < Max(V_{B/A}, V_{B/Ā})}.

However, we have the following proposition.
Proposition 3.8. If (P_A, P_B) ∈ R, then
(a) there exist no permanently optimal strategies for both A and B;
(b) J_C(P_A, P_B) = x_1 P_A + y_1 P_B < (V_{A/B} + V_{B/A}) = (V_b − V_c) for any Nash equilibrium ((x_1, 1 − x_1), (y_1, 1 − y_1)) with 0 < x_1, y_1 < 1.

... for any ε > 0, there exists a P*_A with 0 < [Min(V_{A/B}, V_{A/Ā})] − P*_A < ε/2 such that A will choose "Buy", and that is independent of what P_B is. Thus the game facing B becomes
[payoff table for B; the surviving entries are (V_d + P_B) and V_b]

B would choose "Buy" if P_B < (V_b − V_d) = V_{B/A}. Obviously, for this same ε > 0, C could pick a P*_B with 0 < (V_{B/Ā} − P*_B) < ε/2 such that B will act "B" also. Since

J_C(P*_A, P*_B) = P*_A + P*_B
if both A and B buy AI, we have

0 < (V_{A/B} + V_{B/Ā} − J_C(P*_A, P*_B)) < ε.

... V_b − V_c.
This implies that Min(V_{B/A}, V_{B/Ā}) = V_{B/A}. Thus, by interchanging the roles of A and B in the previous proof, we complete the proof.

Proof of Corollary 3.3. If Min(V_{A/B}, V_{A/Ā}) = V_{A/B}, then

Min(V_{A/B}, V_{A/Ā}) + Max(V_{B/A}, V_{B/Ā}) = V_{A/B} + V_{B/A}

from Lemma 3.1. Conversely, if Min(V_{A/B}, V_{A/Ā}) = V_{A/Ā}, then Max(V_{B/A}, V_{B/Ā}) = V_{B/Ā}. It follows that

Min(V_{A/B}, V_{A/Ā}) + Max(V_{B/A}, V_{B/Ā}) = V_b − V_c
= (V_b − V_a) + (V_a − V_c) = V_{A/Ā} + V_{B/Ā}.

Proof of Corollary 3.5. If only P̄_A > Max(V_{A/B}, V_{A/Ā}), then A will not buy AI, by the fact that the optimality derived in Proposition 3.1 is permanent. The game facing B is

[payoff table for B; the surviving entries are "B": (V_c + P̄_B) and "D": V_a]
Thus, B will buy AI only if P̄_B < V_a − V_c. Since V_a ≠ V_b by assumption, we have
V_b > V_a and
P̄_B < (V_a − V_c). ... K_A ≥ 0 (or K_B ≤ 0), if the assumption holds. For the second part, if A chooses "B" definitely, then K_A > 0. Thus, if Min(V_{A/B}, V_{A/Ā}) = V_d − V_c, then

K_A = (V_b − P̄_A − V_a) y_2 > 0

only if y_2 ≠ 0. However, if y_2 = 1, then J_C(P̄_A, P̄_B) = V_a − V_c < V_b − V_c. The assertion is obviously true. By Proposition 3.1, if P̄_B > Max(V_{B/A}, V_{B/Ā}), then y_2 = 1, and that is the previous case. Hence, we only need to consider the case where P̄_B ≤ Max(V_{B/A}, V_{B/Ā}). ... > 0. Thus, (x*, x*) cannot be a p.o.s. in this case. For the rest of the cases the proof is identical.

(b) The first equality is trivial. To prove the second part, assume that
(V_b − P_A) < (V_b − V_a). Then A will choose "D", and x_1 = 0. Thus, ((1, 0), (0, 1)) and ((1, 0), (1, 0)) cannot be Nash equilibria.
Similarly, it can be shown that ((0, 1), (1, 0)) and ((0, 1), (0, 1)) cannot be Nash equilibria either. Thus, the only region in which a Nash equilibrium may exist when (P_A, P_B) ∈ R is x_1 ∈ (0, 1). Let
S_A = x_1 y_1 (V_d − V_b − V_c + V_a) + x_1 (V_b − P_A) + (1 − x_1) V_a + y_1 (V_c − V_a);

S_B = x_1 y_1 (V_d − V_c − V_b + V_a) + y_1 (V_c + P_B) + (1 − y_1) V_a + x_1 (V_b − V_a).
The necessary conditions for ((x_1, (1 − x_1)), (y_1, (1 − y_1))) with x_1 and y_1 ∈ (0, 1) to be a Nash equilibrium are (∂S_A/∂x_1) = 0 and (∂S_B/∂y_1) = 0, i.e.,

y_1 = (V_a − V_b + P_A) / (V_d − V_b − V_c + V_a),
x_1 = (V_a − V_c − P_B) / (V_d − V_b − V_c + V_a).
It is easy to check that this pair is indeed a Nash equilibrium. Next,

x_1 P_A + y_1 P_B = [(V_a − V_b + P_A) P_B + (V_a − V_c − P_B) P_A] / (V_d − V_b − V_c + V_a)
< [(V_a − V_b + V_d − V_c)(V_b − V_d) + (V_a + V_d − V_b − V_c)(V_d − V_c)] / (V_d − V_b − V_c + V_a)
= V_b − V_c.
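The first-order conditions above are the usual indifference conditions for an interior mixed equilibrium of a 2×2 bimatrix game: each player's mixing probability makes the opponent indifferent between his two pure choices. A generic sketch of this principle, with illustrative payoff numbers rather than the paper's V's:

```python
# Generic check (illustrative numbers, not the paper's V's) of the
# indifference principle used above: an interior mixed equilibrium
# (x1, y1) of a 2x2 bimatrix game solves dS_A/dx1 = 0 and dS_B/dy1 = 0,
# i.e. each player's mixture makes the other indifferent.
def interior_equilibrium(A, B):
    """A[i][j], B[i][j]: payoffs to players 1 and 2 for row i, column j."""
    # Player 1 mixes rows with prob x1; indifference of player 2 fixes x1.
    x1 = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # Player 2 mixes columns with prob y1; indifference of player 1 fixes y1.
    y1 = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return x1, y1

A = [[3.0, 0.0], [1.0, 2.0]]   # hypothetical "B"/"D" payoffs for player 1
B = [[1.0, 2.0], [3.0, 0.0]]   # hypothetical payoffs for player 2
x1, y1 = interior_equilibrium(A, B)

# Expected payoff of each pure strategy against the mixture -- the two
# rows (and the two columns) must give the same value at equilibrium.
row0 = y1 * A[0][0] + (1 - y1) * A[0][1]
row1 = y1 * A[1][0] + (1 - y1) * A[1][1]
col0 = x1 * B[0][0] + (1 - x1) * B[1][0]
col1 = x1 * B[0][1] + (1 - x1) * B[1][1]
print(x1, y1, row0 - row1, col0 - col1)
```

The closed-form fractions mirror the expressions for x_1 and y_1 above: numerator and denominator are differences of the four payoff entries.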
The last inequality holds by the assumptions. For the converse part, the proof is similar.

Proof of Proposition 3.10. Since v(N) = 0 and

v({A}) + v({B}) + v({C}) = −V_b + V_c < 0,

we have

v(N) > Σ_{i∈N} v({i}).

Thus, this game is essential. Furthermore, since this game is zero-sum, we have by a theorem in [8] that the core of an essential constant-sum game is empty.
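The emptiness of the core can be checked numerically for a small instance. The sketch below uses a hypothetical characteristic function (not the paper's): for a 3-player zero-sum game with v(N) = 0, the constant-sum property gives v({i, j}) = −v({k}), and a grid search over imputations finds no point satisfying all core inequalities whenever the game is essential.

```python
# Numeric illustration (hypothetical characteristic function) of the
# theorem from [8] used above: an essential constant-sum game has an
# empty core.  For a 3-player zero-sum game, constant-sum means
# v({i,j}) = v(N) - v({k}) = -v({k}); a core imputation would need
# x_i >= v({i}) and x_i + x_j >= v({i,j}) with x_1 + x_2 + x_3 = v(N).
from itertools import product

def core_empty_3p_zero_sum(v1, v2, v3, grid=201):
    """Brute-force search over a grid of imputations; returns True if
    no grid point satisfies every core inequality."""
    singles = (v1, v2, v3)
    for i, j in product(range(grid), repeat=2):
        x1 = -1.0 + 2.0 * i / (grid - 1)      # search x1, x2 in [-1, 1]
        x2 = -1.0 + 2.0 * j / (grid - 1)
        x3 = -x1 - x2                          # efficiency: sum = v(N) = 0
        x = (x1, x2, x3)
        ok = all(x[k] >= singles[k] for k in range(3))
        ok = ok and x1 + x2 >= -v3 and x1 + x3 >= -v2 and x2 + x3 >= -v1
        if ok:
            return False
    return True

# Essential: v({1}) + v({2}) + v({3}) = -0.6 < 0 = v(N), so the core
# must be empty (summing the pair constraints gives 0 >= 0.6).
print(core_empty_3p_zero_sum(-0.1, -0.2, -0.3))
```

Summing the three pair constraints gives 2·v(N) ≥ −Σ v({i}), which contradicts essentiality; the brute force merely confirms this on a grid.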
References

[1] K. Arrow, "The value and demand for information", in: C. McGuire and R. Radner, eds., Decision and organization (North-Holland, Amsterdam, 1972) Ch. 6.
[2] Y.C. Ho and K.C. Chu, "Information structure in dynamic multi-person control problems", Automatica 10 (1974) 341-351.
[3] H. Witsenhausen, "On the relations between the values of a game and its information structure", Information and Control 19 (3) (1971).
[4] Y.C. Ho and F.K. Sun, "Value of information in two-team zero-sum problems", Journal of Optimization Theory and Applications 14 (5) (1974).
[5] F.K. Sun and Y.C. Ho, "Role of information in the stochastic zero-sum differential game", Journal of Optimization Theory and Applications (1976).
[6] H.S. Witsenhausen, "Alternatives to the tree model for extensive games", in: J.D. Grote, ed., Proceedings of NATO advanced study institute on differential games (Reidel Publishing Co., Dordrecht, 1974).
[7] H.S. Witsenhausen, see [6].
[8] G. Owen, Game theory (Saunders Company, Philadelphia, 1968).
Mathematical Programming Study 5 (1976) 227-243. North-Holland Publishing Company

SEQUENTIAL DECISION AND STOCHASTIC CONTROL*

Edison TSE
Stanford University, Stanford, Calif., U.S.A.

Received 8 July 1975
Revised manuscript received 22 March 1976

The tight coupling between learning and control in a sequential decision or a stochastic control problem is considered in this paper. A quantitative description of the learning capability of a control law is defined in terms of Shannon's information measure. It is shown that when the control law cannot influence the amount of learning, the stochastic control problem has the separation property regardless of the cost criterion.
1. Introduction

In many processes arising in social, economic, engineering and biological systems, the problem of decision making or control under various sources of uncertainty is inherent. If all the uncertain events were made known to the decision maker before his decision was made, he could, at least in principle, make the best decision, one which optimizes his objective function. In reality, however, the decision maker is forced to make a decision without having full knowledge of these uncertainties. Therefore, an intuitive, yet plausible, approach is to treat the problem as two interconnected problems: first the decision maker tries to estimate the uncertain events; then he makes an optimum decision based on the estimation results. In most cases, the estimation procedure is independent of the decision rule chosen, whereas the optimum decision made at a particular time depends strongly on the effect of the present decision on future estimation accuracy, which will, in turn, help in making good decisions in the future. This tight coupling between learning and control exists in most statistical sequential decision problems. In the stochastic control literature, this interaction between learning and control has been studied under the topic of dual control theory [1, 5, 14].

* This research is supported by ONR Contract N00014-75-C-0738.
One very common and widely used (or misused) approach to the sequential decision problem is to neglect the dependency between the present decision and future estimation performance; a suboptimal decision is then made by assuming that the estimation performance remains unchanged in the future. In so doing, we break the problem into two disjoint subproblems, and such an approach is called separation. It is, however, conceivable that for a certain class of degenerate problems the separation approach yields the optimum decision. In fact, it is shown in Section 4 that the separation property is a structural property of the stochastic dynamic process.

Except for a difference in terminology, a discrete time stochastic control problem is equivalent to a Bayesian sequential decision problem. A common ground formulation is to express the problem in terms of the conditional density for the predicted observation, which depends on the past decisions and the underlying uncertainty. This is done in Section 2. In Section 3, the optimal control (or decision) rule is derived using the principle of optimality [4]. A quantitative measure of the learning capability of a control law is defined in Section 4. In terms of this measure, the coupling between learning and control in a stochastic control problem can be discussed in a quantitative manner. When such a coupling disappears, the system is said to be neutral. One of the main results established in Section 4 relates neutrality of a system to the separation property in a stochastic control problem. Using this result one can deduce, very simply, all the separation results which are pertinent to discrete time dynamical systems. Several specific cases are discussed in Section 5.
2. Stochastic control problem

Consider a stochastic control process operating for N time steps. Observations are made at each step, and a control is made after each observation. The stochastic control problem is to find a control law which maps the cumulative information data into a value of the control input such that the expected value of an objective criterion is minimized or maximized.

We shall distinguish two types of uncertainties: parameter (or process) uncertainty and observation uncertainty. Let (Ω, ℬ) be a measurable space which represents the parameter uncertainty, and (Ω′, ℬ′) be another measurable space which represents the observation uncertainty. For each parameter θ ∈ Ω, we have a probability measure P_θ on ℬ′. For a fixed admissible deterministic control sequence U^{N−1} ≜ {u_0, ..., u_{N−1}} with
u_k ∈ 𝒰_k, k = 0, 1, ..., N − 1, and a fixed parameter θ ∈ Ω, we have a random sequence (ℬ′-measurable)

y_i = y_i[w; θ, U^{i−1}];   i = 1, 2, ..., N;   w ∈ Ω′

which represents the observations at each time step. If there is no confusion, the arguments of y_i will be dropped for simplicity, and the sequence {y_1, ..., y_k} will be denoted by Y^k. Given {P_θ; θ ∈ Ω} and {y_i[w; θ, U^{i−1}]; θ ∈ Ω, u_k ∈ 𝒰_k, k = 0, 1, ..., i − 1}_{i=1}^{N}, a family of conditional densities, {p(y_k | Y^{k−1}, θ; U^{k−1})},¹ can be deduced. Let u = {u_k(Y^k, U^{k−1}) ∈ 𝒰_k}_{k=0}^{N−1} be an admissible control law. The family of conditional densities on y_k is given by

p_u(y_k | Y^{k−1}; θ) = p(y_k | Y^{k−1}, θ; u_0, u_1(Y^1, u_0), ..., u_{k−1}(Y^{k−1}, U^{k−2}));   θ ∈ Ω    (2.1)
where the subscript u on the left-hand side denotes that the control law u is held fixed. Thus we see that the specification of ℱ ≜ {p(y_k | Y^{k−1}, θ; U^{k−1}) | θ ∈ Ω, u_i ∈ 𝒰_i, i = 0, ..., k − 1}_{k=1}^{N} allows us to deal with both deterministic control sequences and feedback control laws. If a control law is being considered, we shall use the notation ū_i to denote the realization of the control value at step i.

A payoff, or a cost, is associated with the triple (θ, U^{N−1}, Y^N), denoted by J(θ, U^{N−1}, Y^N). A stochastic control problem is defined as follows: given the family of conditional densities ℱ and a prior probability measure P on ℬ, find an optimal control law u*_k = u*_k(Y^k, U*^{k−1}) ∈ 𝒰_k, k = 0, 1, ..., N − 1, such that E{J(θ, U^{N−1}, Y^N)} is either maximized or minimized. In the following discussions, we shall always minimize E{J(θ, U^{N−1}, Y^N)}.

The above formulation is within the framework of Bayesian sequential decision analysis. To see that the classical stochastic control problem is included in the present general formulation, consider a stochastic system described by
x_{k+1} = f_k(x_k, u_k, w_k);   x_k ∈ R^n, u_k ∈ R^r, w_k ∈ R^s    (2.2)

with observation

y_k = h_k(x_k, v_k);   y_k ∈ R^m, v_k ∈ R^d;   m ≥ d    (2.3)
where {v_k, w_k} are noise processes with known statistics, and h_k(x, ·) is a

¹ In the following discussion, we assume that these conditional densities are well defined. If y_k is a discrete random variable, we shall replace the conditional density by a conditional probability and the integral by a summation.
one-one mapping from R^d to R^m for all x ∈ R^n. The performance criterion is given by E{J(X^N, U^{N−1}, Y^N)}. To cast this problem into the above framework, let us define

θ = {x_0, w_0, ..., w_{N−1}}.
For a given observation sequence Y^{k−1} and a realized sequence of controls Ū^{k−1}, a parameter θ is consistent with the input-output data if there exists a sequence V^{k−1} = {v_1, ..., v_{k−1}} which satisfies

y_i = h_i(x_i(Ū^{i−1}, W^{i−1}, x_0), v_i);   i = 1, ..., k − 1    (2.4)

where

W^{k−1} = {w_0, ..., w_{k−1}}

and x_i(U^{i−1}, W^{i−1}, x_0) is the solution of (2.2) with U^{k−1} = Ū^{k−1}. Since h_k(x, ·) is a one-one mapping from R^d to R^m for all x ∈ R^n, the existence of V^{k−1} implies that it is also the unique solution of (2.4). If θ is consistent with (Y^{k−1}, Ū^{k−1}), then

p(y_k | Y^{k−1}, θ; Ū^{k−1}) = p(y_k | x_k(Ū^{k−1}, W^{k−1}, x_0), V^{k−1}, W^{k−1})    (2.5)
where V^{k−1} is the unique sequence that satisfies (2.4). If, on the other hand, θ is not consistent with (Y^{k−1}, Ū^{k−1}), the conditional density for y_k is zero. Therefore, given (2.2), (2.3) and the noise statistics, the family ℱ and the prior P are specified. The performance criterion can be rewritten in terms of (θ, U^{N−1}, Y^N):

E{J(X^N, U^{N−1}, Y^N)} = E{J(x_i(U^{i−1}, W^{i−1}, x_0), i = 1, ..., N, x_0, U^{N−1}, Y^N)} ≜ E{J(θ, U^{N−1}, Y^N)}.    (2.6)

Note that in the above discussion there is no requirement that {x_k} be a Markov sequence, i.e., that {w_k} be a white noise sequence, or that {w_k} be independent of {v_k}.
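The way the conditional densities (2.1) combine with the prior can be sketched for discrete θ (an illustrative model, not the paper's): by Bayes' rule, the posterior after Y^k is proportional to p(θ) times the product of the likelihoods p(y_k | Y^{k−1}, θ; Ū^{k−1}), with a normalizing constant playing the role of C_k in Section 3.

```python
# Sketch (illustrative model, not the paper's) of how the conditional
# densities (2.1) and the prior combine by Bayes' rule: for discrete
# theta, the posterior after Y^k is proportional to
#   p(theta) * prod_k p(y_k | Y^{k-1}, theta; U^{k-1}).
def bayes_update(posterior, likelihoods):
    """One observation step: posterior[t] *= p(y_k | t), then renormalize."""
    unnorm = {t: posterior[t] * likelihoods[t] for t in posterior}
    z = sum(unnorm.values())            # the normalizing constant
    return {t: p / z for t, p in unnorm.items()}

# theta in {0, 1}; the realized control u shifts observation accuracy.
def lik(y, theta, u):
    q = 0.9 if u == 1 else 0.6          # hypothetical accuracies
    return q if y == theta else 1.0 - q

posterior = {0: 0.5, 1: 0.5}            # prior p(theta)
for y, u in [(1, 1), (1, 0), (1, 1)]:   # observed outputs, realized controls
    posterior = bayes_update(posterior, {t: lik(y, t, u) for t in (0, 1)})
print(posterior[1])
```

Note that the realized controls enter only through the likelihoods, matching the remark after (3.1) that only realized control values affect the conditional expectation.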
3. Optimal stochastic control

Let us assume that a certain control law {u_k(·)}, k = 0, 1, ..., N − 2, is specified, and that the observation sequence Y^{N−1} has been obtained. The remaining problem is to select u_{N−1} such that the conditional expected cost
E{J(θ; U^{N−1}, Y^N) | Y^{N−1}} = E{J(θ; Ū^{N−2}, u_{N−1}, Y^N) | Y^{N−1}, Ū^{N−2}}

is minimized. Using Bayes' rule and equation (2.1) we have

E{J(θ; U^{N−1}, Y^N) | Y^{N−1}} = C_{N−1}(Y^{N−1}, Ū^{N−2}) ∫ J(θ; Ū^{N−2}, u_{N−1}, Y^N) p(y_N | Y^{N−1}, θ; Ū^{N−2}, u_{N−1}) ∏_{k=1}^{N−1} p(y_k | Y^{k−1}, θ; Ū^{k−1}) p(θ)² dθ dy_N    (3.1)

where C_{N−1}(Y^{N−1}, Ū^{N−2}) is a normalizing constant for the conditional density. It should be noted that in (3.1) only the realized values of the past control sequence affect the computation of the conditional expected cost. The minimizing u_{N−1} has the form³
u*_{N−1} = u*_{N−1}(Y^{N−1}, Ū^{N−2}).    (3.2)
The optimal return function has the form

J*_{N−1}(Y^{N−1}, Ū^{N−2}) = E{J(θ; Ū^{N−2}, u*_{N−1}(Y^{N−1}, Ū^{N−2}), Y^N) | Y^{N−1}, Ū^{N−2}}.    (3.3)
In applying the optimization procedure backward in time, we have the following equations (k = 0, 1, ..., N − 2):

J*_k(Y^k, Ū^{k−1}) = min_{u_k ∈ 𝒰_k} E{J*_{k+1}(Y^{k+1}, Ū^{k−1}, u_k) | Y^k, Ū^{k−1}},    (3.4)

E{J*_{k+1}(Y^{k+1}, Ū^{k−1}, u_k) | Y^k, Ū^{k−1}} = C_k(Y^k, Ū^{k−1}) ∫ J*_{k+1}(Y^{k+1}, Ū^{k−1}, u_k) p(y_{k+1} | Y^k, θ; Ū^{k−1}, u_k) ∏_{i=1}^{k} p(y_i | Y^{i−1}, θ; Ū^{i−1}) p(θ) dθ dy_{k+1},    (3.5)

and u*_k is a minimizing solution of (3.4). We can easily see that

u*_k = u*_k(Y^k, Ū^{k−1}).    (3.6)

The optimal control law is then given by {u*_k}_{k=0}^{N−1}.
The optimal control taw is then given by {Uk}k-o. , u-, For stochastic control problems with state and observation equations given by (2.2) and (2.3), and with additive cost N-,
j ( X ~, UN-,, yN) = K[xu, yu] + Z Lk [xk, uk, yk]. k=O
z We shall write formally d P ( 0 ) = p(O)dO and allow p(O) to contain impulses. 3 Throughout the discussions, we assume that the optimal control law exists.
(3.7)
equations (2.6) and (3.1) give

E{J(θ, U^{N−1}, Y^N) | Y^{N−1}} = E{J(X^N, U^{N−1}, Y^N) | Y^{N−1}, Ū^{N−2}}
= E{K[x_N, y_N] + L_{N−1}[x_{N−1}, u_{N−1}, y_{N−1}] + Σ_{k=0}^{N−2} L_k[x_k, ū_k, y_k] | Y^{N−1}, Ū^{N−2}}.    (3.8)

Since E{L_k(x_k, ū_k, y_k) | Y^{N−1}, Ū^{N−2}} is independent of u_{N−1} for k = 0, 1, ..., N − 2, the optimizing u*_{N−1}(Y^{N−1}, Ū^{N−2}) is obtained by minimizing

E{K(x_N, y_N) + L_{N−1}(x_{N−1}, u_{N−1}, y_{N−1}) | Y^{N−1}, Ū^{N−2}},

where x_N, y_N depend on u_{N−1} through the system dynamics (2.2) and the observation equation (2.3). Define the expected optimal cost I_{N−1}(Y^{N−1}, Ū^{N−2}) as

I_{N−1}(Y^{N−1}, Ū^{N−2}) ≜ E{K[f_{N−1}(x_{N−1}, u*_{N−1}, w_{N−1}), h_N(f_{N−1}(x_{N−1}, u*_{N−1}, w_{N−1}), v_N)] + L_{N−1}(x_{N−1}, u*_{N−1}, y_{N−1}) | Y^{N−1}, Ū^{N−2}}.    (3.9)

Equation (3.4) becomes

J*_{N−1}(Y^{N−1}, Ū^{N−2}) = E{Σ_{k=0}^{N−2} L_k(x_k, ū_k, y_k) | Y^{N−1}, Ū^{N−2}} + I_{N−1}(Y^{N−1}, Ū^{N−2}).    (3.10)
When we carry the minimization one step backward in time, equation (3.4), with k = N − 2, becomes

J*_{N−2}(Y^{N−2}, Ū^{N−3}) = min_{u_{N−2}} E{Σ_{k=0}^{N−3} L_k(x_k, ū_k, y_k) + L_{N−2}(x_{N−2}, u_{N−2}, y_{N−2}) + I_{N−1}(Y^{N−2}, y_{N−1}, Ū^{N−3}, u_{N−2}) | Y^{N−2}, Ū^{N−3}}
= E{Σ_{k=0}^{N−3} L_k(x_k, ū_k, y_k) | Y^{N−2}, Ū^{N−3}} + min_{u_{N−2}} E{L_{N−2}(x_{N−2}, u_{N−2}, y_{N−2}) + I_{N−1}(Y^{N−2}, y_{N−1}, Ū^{N−3}, u_{N−2}) | Y^{N−2}, Ū^{N−3}}    (3.11)

where y_{N−1} depends on u_{N−2} through (2.2) and (2.3). Inductively, the expected optimal cost to go, I_k(Y^k, Ū^{k−1}), satisfies
E. Tse / Sequentialdecisionandstochasticcontrol
233
I_k(Y^k, Ū^{k−1}) = min_{u_k} E{L_k(x_k, u_k, y_k) + I_{k+1}(Y^k, y_{k+1}, Ū^{k−1}, u_k) | Y^k, Ū^{k−1}}    (3.12)

and the optimal return function is given by

J*_k(Y^k, Ū^{k−1}) = E{Σ_{i=0}^{k−1} L_i(x_i, ū_i, y_i) | Y^k, Ū^{k−1}} + I_k(Y^k, Ū^{k−1}).    (3.13)
Equation (3.12) is the usual dynamic programming equation. It should be noted that in the above derivation, (3.1)-(3.6), we deviate from the conventional approach of introducing the concept of an information state [7, 11, 12]. Instead, we express the results in terms of the conditional density of the future observations. These results, equations (3.1)-(3.6), indicate clearly that the coupling between learning and control lies in the fact that, in general, the control law can influence the predictive conditional density of the output observations. A measure of how much a control law can influence the learning capability is discussed in the next section.
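The backward recursion (3.4)-(3.6) can be sketched for a tiny discrete problem. In the toy instance below (illustrative numbers, not from the paper), θ ∈ {0, 1}, the two observations are noisy readings of θ whose accuracy does not depend on the control, and the cost is additive. Because the control cannot influence the observations, this is a degenerate case in which the stage decisions decouple, and backward induction over observation histories agrees with brute-force enumeration of every history-measurable control law.

```python
from itertools import product

# Toy instance of the backward recursion (3.4)-(3.6): theta in {0, 1}
# with a uniform prior, two observations y_k (= theta with probability
# q), a control u_k in {0, 1} chosen after each observation, and cost
# sum_k |u_k - theta|.  All numbers are illustrative.
q, N = 0.8, 2

def joint(theta, ys):
    p = 0.5                                   # prior P(theta)
    for y in ys:
        p *= q if y == theta else 1.0 - q     # p(y_k | theta)
    return p

def posterior(ys):
    num = joint(1, ys)
    return num / (num + joint(0, ys))

def dp_value():
    """Expected optimal cost: since u_k does not affect the observations,
    (3.4) reduces to the Bayes decision min_u E{|u - theta| | Y^k}
    at each history -- the stage costs decouple."""
    val = 0.0
    for k in range(1, N + 1):
        for ys in product((0, 1), repeat=k):
            p_hist = joint(0, ys) + joint(1, ys)   # P(Y^k = ys)
            p1 = posterior(ys)
            val += p_hist * min(p1, 1.0 - p1)
    return val

def law_cost(u1, u2):
    """u1[y1], u2[(y1, y2)]: a history-measurable law; exact expectation."""
    val = 0.0
    for theta, y1, y2 in product((0, 1), repeat=3):
        p = 0.5 * (q if y1 == theta else 1 - q) * (q if y2 == theta else 1 - q)
        val += p * (abs(u1[y1] - theta) + abs(u2[(y1, y2)] - theta))
    return val

# Brute force over all 4 * 16 history-measurable control laws.
best = min(
    law_cost(dict(zip((0, 1), a)),
             dict(zip(product((0, 1), repeat=2), b)))
    for a in product((0, 1), repeat=2)
    for b in product((0, 1), repeat=4)
)
print(dp_value(), best)
```

When the control does influence future observation densities, the stage costs no longer decouple and the full recursion (3.12) must be carried out; the toy above is exactly the situation the next section calls neutral.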
4. Learning, neutrality and separation

In this section we shall define a measure of the learning capability of a control law. Neutrality of the system, a concept which was introduced by Feldbaum [5] and vaguely defined by several authors [3, 8], is then defined in terms of lack of learning capability. The justification for the definition of neutrality given here rests on the relation we can establish between neutrality and the separation property.

For a given admissible control law u = {u_k(·)}_{k=0}^{N−1}, the mutual information between θ and Y^k is defined as
I_u(Y^k; θ) = ∫ p_u(Y^k | θ) p(θ) log [p_u(Y^k | θ) / p_u(Y^k)] dY^k dθ    (4.1)

for k = 1, 2, ..., N − 1. Denote ℋ_f ≜ {x ∈ R^{N−1} | x_i = I_u(Y^i; θ), i = 1, ..., N − 1, u is an admissible control law} ⊂ R^{N−1}. Similarly, we can define for a given admissible control sequence U^{N−1}

I(Y^k; θ | U^{k−1}) = ∫ p(Y^k | θ; U^{k−1}) p(θ) log [p(Y^k | θ; U^{k−1}) / p(Y^k | U^{k−1})] dY^k dθ    (4.2)

for k = 1, 2, ..., N − 1, and denote ℋ_d ≜ {x ∈ R^{N−1} | x_i = I(Y^i; θ | U^{i−1}), i = 1, ..., N − 1, U^{N−1} is an admissible control sequence}. Since an admissible
control sequence can be viewed as a singular admissible control law, we must have ℋ_d ⊂ ℋ_f. The properties of the mutual information measure between Y^k and θ are discussed in the literature: mutual information is always nonnegative, and its numerical value represents, quantitatively, the amount of information about the unknown "source" θ that is contained in the observation Y^k (see e.g. [6, 10]). Thus a measure of the learning capability of the control law u can be defined in terms of I_u(Y^k; θ). A partial ordering on the class of admissible control laws, in terms of learning capability, can be defined as follows:
u~
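The measure (4.1) is easy to evaluate in a discrete toy case (illustrative numbers, not from the paper): take θ ∈ {0, 1} with a uniform prior and a single binary observation whose accuracy depends on the control, P(y = θ) = q(u). A law choosing the more accurate control attains a strictly larger I_u(Y; θ), so such a system is not neutral; neutrality would require every admissible law to yield the same mutual information.

```python
from math import log

# Discrete sketch of the learning measure (4.1), with illustrative
# numbers: theta in {0, 1} uniform, one observation y with
# P(y = theta) = q, where q is set by the control.
def mutual_information(q, prior=0.5):
    """I(Y; theta) in nats for the binary channel P(y = theta) = q."""
    p_theta = (1.0 - prior, prior)
    mi = 0.0
    for y in (0, 1):
        p_y = sum(p_theta[t] * (q if y == t else 1 - q) for t in (0, 1))
        for t in (0, 1):
            p_y_given_t = q if y == t else 1 - q
            if p_y_given_t > 0:
                mi += p_theta[t] * p_y_given_t * log(p_y_given_t / p_y)
    return mi

# Two admissible "control laws": u = 0 gives accuracy 0.6, u = 1 gives 0.9.
i_low, i_high = mutual_information(0.6), mutual_information(0.9)
print(i_low, i_high)
# The law choosing u = 1 strictly learns more about theta, so the
# control influences the amount of learning: the system is not neutral.
```

At q = 0.5 the observation is uninformative and the mutual information vanishes, the extreme case in which no admissible law can learn anything.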