Stochastic Dynamics and Control
MONOGRAPH SERIES ON NONLINEAR SCIENCE AND COMPLEXITY
SERIES EDITORS
Albert C.J. Luo, Southern Illinois University, Edwardsville, USA
George Zaslavsky New York University, New York, USA
ADVISORY BOARD
Valentin Afraimovich, San Luis Potosi University, San Luis Potosi, Mexico
Maurice Courbage, Université Paris 7, Paris, France
Ben-Jacob Eshel, School of Physics and Astronomy, Tel Aviv University, Tel Aviv, Israel
Bernold Fiedler, Freie Universität Berlin, Berlin, Germany
James A. Glazier, Indiana University, Bloomington, USA
Nail Ibragimov, IHN, Blekinge Institute of Technology, Karlskrona, Sweden
Anatoly Neishtadt, Space Research Institute Russian Academy of Sciences, Moscow, Russia
Leonid Shilnikov, Research Institute for Applied Mathematics & Cybernetics, Nizhny Novgorod, Russia
Michael Shlesinger, Office of Naval Research, Arlington, USA
Dietrich Stauffer, University of Cologne, Köln, Germany
Jian-Qiao Sun, University of Delaware, Newark, USA
Dimitry Treschev, Moscow State University, Moscow, Russia
Vladimir V. Uchaikin, Ulyanovsk State University, Ulyanovsk, Russia
Angelo Vulpiani, University La Sapienza, Roma, Italy
Pei Yu, The University of Western Ontario, London, Ontario N6A 5B7, Canada
Stochastic Dynamics and Control
JIAN-QIAO SUN
University of Delaware, Newark, DE 19716, USA
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Elsevier Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
First edition 2006
Copyright © 2006 Elsevier B.V. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher.
Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: [email protected]. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting Obtaining permission to use Elsevier material.
Notice: No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein.
Library of Congress Cataloging-in-Publication Data: A catalog record for this book is available from the Library of Congress.
British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library.
ISBN-13: 978-0-444-52230-6
ISBN-10: 0-444-52230-1
Series ISSN: 1574-6917
For information on all Elsevier publications visit our website at books.elsevier.com
Printed and bound in The Netherlands
Contents
Preface
Chapter 1. Introduction
1.1. Stochastic dynamics
1.2. Stochastic control
1.2.1 Covariance control
1.2.2 PDF control
1.2.3 Time delayed systems
1.2.4 FPK based design
1.2.5 Optimal control
Chapter 2. Probability Theory
2.1. Probability of random events
2.2. Random variables
2.3. Probability distributions
2.4. Expectations of random variables
2.5. Common probability distributions
2.6. Two-dimensional random variables
2.6.1 Expectations
2.7. n-Dimensional random variables
2.7.1 Expectations
2.8. Functions of random variables
2.8.1 Linear transformation
2.8.2 General vector transformation
2.9. Conditional probability
Exercises
Chapter 3. Stochastic Processes
3.1. Definitions
3.2. Expectations
3.3. Vector process
3.4. Gaussian process
3.5. Harmonic process
3.6. Stationary process
3.6.1 Scalar process
3.6.2 Vector process
3.6.3 Correlation length
3.7. Ergodic process
3.7.1 Statistical properties of time averages
3.7.2 Temporal density estimation
3.8. Poisson process
3.8.1 Compound Poisson process
3.9. Markov process
Exercises
Chapter 4. Spectral Analysis of Stochastic Processes
4.1. Power spectral density function
4.1.1 Definitions
4.1.2 One-sided spectrum
4.1.3 Power spectrum of vector processes
4.2. Spectral moments and bandwidth
4.2.1 Narrowband process
4.2.2 Broadband process
4.3. Process with rational spectral density function
4.4. Finite time spectral analysis
Exercises
Chapter 5. Stochastic Calculus
5.1. Modes of convergence
5.2. Stochastic differentiation
5.2.1 Statistical properties of derivative process
5.2.2 Spectral analysis of derivative processes
5.3. Stochastic integration
5.3.1 Statistical properties of stochastic integrals
5.3.2 Integration of weakly stationary processes
5.3.3 Riemann–Stieltjes integrals
5.4. Itô calculus
5.4.1 Brownian motion
5.4.2 Itô and Stratonovich integrals
5.4.3 Itô and Stratonovich differential equations
5.4.4 Itô’s lemma
5.4.5 Moment equations
Exercises
Chapter 6. Fokker–Planck–Kolmogorov Equation
6.1. Chapman–Kolmogorov–Smoluchowski equation
6.2. Derivation of the FPK equation
6.2.1 Derivation using Itô’s lemma
6.3. Solutions of FPK equations for linear systems
6.4. Short-time solution
6.4.1 Improvement of the short-time solution
6.5. Path integral solution
6.5.1 Markov chain representation of path integral
6.6. Exact stationary solutions
6.6.1 First order systems
6.6.2 Second order systems
6.6.3 Dimentberg’s system
6.6.4 Equivalent Itô equation
6.6.5 Hamiltonian systems
6.6.6 Detailed balance
Exercises
Chapter 7. Kolmogorov Backward Equation
7.1. Derivation of the backward equation
7.2. Reliability formulation
7.3. First-passage time probability
7.4. Pontryagin–Vitt equations
Exercises
Chapter 8. Random Vibration of SDOF Systems
8.1. Solutions in the mean square sense
8.1.1 Expectations of the response
8.1.2 Stationary random excitation
8.1.3 Power spectral density
8.2. Solutions with Itô calculus
8.2.1 Moment equations
8.2.2 Fokker–Planck–Kolmogorov equation
Exercises
Chapter 9. Random Vibration of MDOF Discrete Systems
9.1. Lagrange’s equation
9.1.1 Formal solution
9.2. Modal solutions of MDOF systems
9.2.1 Eigenvalue analysis
9.2.2 Classical and non-classical damping
9.2.3 Solutions with classical damping
9.2.4 Solutions with non-classical damping
9.3. Response statistics
9.3.1 Stationary random excitation
9.3.2 Spectral properties
9.3.3 Cross-correlation and coherence function
9.4. State space representation and Itô calculus
9.4.1 Formal solution in state space
9.4.2 Modal solution
9.4.3 Solutions with Itô calculus
9.4.4 Moment equations
9.4.5 Stationary response of moment equations
9.5. Filtered white noise excitation
Exercises
Chapter 10. Random Vibration of Continuous Structures
10.1. Distributed random excitations
10.2. One-dimensional structures
10.2.1 Bernoulli–Euler beam
10.2.2 Response statistics of the beam
10.2.3 Equations of motion in operator form
10.2.4 Timoshenko beam
10.2.5 Response statistics
10.3. Two-dimensional structures
10.3.1 Rectangular plates
10.3.2 Response statistics of the plate
10.3.3 Discrete solutions
Exercises
Chapter 11. Structural Reliability
11.1. Modes of failure
11.2. Level crossing
11.2.1 Single level crossing
11.2.2 Method of counting process
11.2.3 Higher order statistics of level crossing
11.2.4 Dual level crossing
11.2.5 Local minima and maxima
11.2.6 Envelope processes
11.3. Vector process
11.4. First-passage reliability based on level crossing
11.5. First-passage time probability – general approach
11.5.1 Example of SDOF linear oscillators
11.5.2 Common safe domains
11.5.3 Envelope process of SDOF linear oscillators
11.6. Structural fatigue
11.6.1 S-N model
11.6.2 Rainflow counting
11.6.3 Linear damage model
11.6.4 Time-domain analysis of fatigue damage
11.7. Dirlik’s formula for fatigue prediction
11.8. Extended Dirlik’s formula for non-Gaussian stress
11.8.1 Regression analysis of fatigue
11.8.2 Validation of the regression model
11.8.3 Case studies of fatigue prediction
Exercises
Chapter 12. Monte Carlo Simulation
12.1. Random numbers
12.1.1 Linear congruential method
12.1.2 Transformation of uniform random numbers
12.1.3 Gaussian random numbers
12.1.4 Vector of random numbers
12.2. Random processes
12.2.1 Gaussian white noise
12.2.2 Random harmonic process
12.2.3 Shinozuka’s method
12.3. Stochastic differential equations
12.3.1 Second order equations
12.3.2 State equation
12.3.3 Runge–Kutta algorithm
12.3.4 First-passage time probability
12.4. Simulation of non-Gaussian processes
12.4.1 Probability distribution
12.4.2 Spectral distribution
Exercises
Chapter 13. Elements of Feedback Controls
13.1. Transfer function of linear dynamical systems
13.1.1 Common Laplace transforms
13.2. Concepts of stability
13.2.1 Stability of linear dynamic systems
13.2.2 Bounded input–bounded output stability
13.2.3 Lyapunov stability
13.3. Effects of poles and zeros
13.4. Time domain specifications
13.5. PID controls
13.5.1 Control of a second order oscillator
13.6. Routh’s stability criterion
13.7. Root locus design
13.7.1 Properties of root locus
Exercises
Chapter 14. Feedback Control of Stochastic Systems
14.1. Response moment control of SDOF systems
14.2. Covariance control
14.3. Generalized covariance control
14.3.1 Moment specification for nonlinear systems
14.3.2 Control of a Duffing oscillator
14.3.3 Control of Yasuda’s system
14.4. Covariance control with maximum entropy principle
14.4.1 Maximum entropy principle
14.4.2 Control of the Duffing system
Exercises
Chapter 15. Concepts of Optimal Controls
15.1. Optimal control of deterministic systems
15.1.1 Total variation
15.1.2 Problem statement
15.1.3 Derivation of optimal control
15.1.4 Pontryagin’s minimum principle
15.1.5 The role of Lagrange multipliers
15.1.6 Classes of optimal control problems
15.1.7 The Hamilton–Jacobi–Bellman equation
15.2. Optimal control of stochastic systems
15.2.1 The Hamilton–Jacobi–Bellman equation
15.3. Linear quadratic Gaussian (LQG) control
15.3.1 Kalman–Bucy filter
15.4. Sufficient conditions
Exercises
Chapter 16. Stochastic Optimal Control with the GCM Method
16.1. Bellman’s principle of optimality
16.2. The cell mapping solution approach
16.2.1 Generalized cell mapping (GCM)
16.2.2 A backward algorithm
16.3. Control of one-dimensional nonlinear system
16.4. Control of a linear oscillator
16.5. Control of a Van der Pol oscillator
16.6. Control of a dry friction damped oscillator
16.7. Systems with non-polynomial nonlinearities
16.8. Control of an impact nonlinear oscillator
16.9. A note on the GCM method
Exercises
Chapter 17. Sliding Mode Control
17.1. Variable structure control
17.1.1 Robustness
17.2. Concept of sliding mode
17.2.1 Single input systems
17.2.2 Nominal control
17.2.3 Robust control
17.2.4 Bounds of tracking error
17.2.5 Multiple input systems
17.3. Stochastic sliding mode control
17.3.1 Nominal sliding mode control
17.3.2 Response moments
17.3.3 Robust sliding mode control
17.3.4 Variance reduction ratio
17.3.5 Rationale of switching term kS
17.4. Adaptive stochastic sliding mode control
Exercises
Chapter 18. Control of Stochastic Systems with Time Delay
18.1. Method of semi-discretization
18.2. Stability and performance analysis
18.3. An example
18.4. A note on the methodology
Exercises
Chapter 19. Probability Density Function Control
19.1. A motivating example
19.2. PDF tracking control
19.3. General formulation of PDF control
19.4. Numerical examples
Exercises
Appendix A. Matrix Computation
A.1. Types of matrices
A.2. Blocked matrix inversion
A.3. Matrix decomposition
A.3.1 LU decomposition
A.3.2 QR decomposition
A.3.3 Spectral decomposition
A.3.4 Singular value decomposition
A.3.5 Cholesky decomposition
A.3.6 Schur decomposition
A.4. Solution of linear algebraic equations and generalized inverse
A.4.1 Underdetermined system with minimum norm solution
A.4.2 Overdetermined system with least squares solution
A.4.3 A general case
Appendix B. Laplace Transformation
B.1. Definition and basic properties
B.2. Laplace transform of common functions
Bibliography
Subject Index
Preface
When I first started working with my doctoral advisor, Professor C.S. Hsu, in 1984 at the University of California–Berkeley, he instructed me to look into random vibrations and identify problems to study by using the cell mapping method. That marked the beginning of my research career and interest in random vibrations and stochastic dynamics. Since I joined the faculty in the Department of Mechanical Engineering at the University of Delaware in 1994, I have taught a graduate course on random vibrations, and my research has evolved to deal with random vibration analysis and applications as well as control of nonlinear stochastic systems. This book is a compilation of my class notes and my recent publications on control of nonlinear stochastic systems, based on the work done by several of my excellent graduate students.

The book consists of two parts: random vibrations and stochastic control. Random vibrations is a well-established area. There are several excellent books on the subject that have been my major references: the books by Professor Y.K. Lin (1967, 1995), by Professor R.A. Ibrahim (1985), by Professor T.T. Soong and colleagues (1973, 1993), by Professor P.H. Wirsching and colleagues (1995) and by Professor I. Elishakoff (1983). The first part of this book on random vibrations covers much of the same material as these seminal books. Chapters 2 to 4 review the basic theory of probability, stochastic processes and spectral analysis of stochastic processes. Chapters 5 to 7 discuss stochastic calculus, including Itô’s lemma, the Fokker–Planck–Kolmogorov equation and the backward Kolmogorov equation governing the evolution of the probability density function of Markov processes. Chapters 8 to 10 present random vibration analysis of single and multiple degree of freedom systems and continuous structural systems.
Chapter 11 deals with applications of the random vibration theory to structural reliability problems, including level crossing, first-passage time probability and random fatigue. Some recent research results of one of my graduate students are included in this chapter. Chapter 12 discusses the Monte Carlo simulation method for random vibration analysis.

The second part of the book deals with stochastic controls, and contains many of the recent research results on stochastic control in the literature, including publications from my research group at the University of Delaware. Chapter 13 reviews basic concepts of classical feedback controls. Chapter 14 studies feedback controls of stochastic dynamical systems, including response moment controls, the celebrated covariance controls, and the generalized covariance control for nonlinear stochastic dynamical systems. Chapter 15 reviews the theory of optimal
control for deterministic systems, and presents the optimal control formulation for stochastic systems governed by Itô differential equations. The Hamilton–Jacobi–Bellman equation is derived with the help of Itô’s lemma. The linear quadratic Gaussian (LQG) optimal control is also discussed. Chapter 17 presents sliding mode controls for deterministic and stochastic dynamical systems. Robust and adaptive sliding mode controls are illustrated with examples. Chapter 18 presents the control study of stochastic systems with time delay by using the method of semi-discretization. Chapter 19 studies a special class of control problems: tracking a prespecified probability density function for stochastic dynamical systems.

I have included many exercises at the end of each chapter in the hope that this book will be a good choice as a textbook for an upper division undergraduate or graduate level course on random vibrations, stochastic dynamics and control. The book contains many of the recent research results on stochastic dynamics and control, which can serve as references for researchers and practicing engineers in this area.

I would like to acknowledge my graduate students at Delaware whose research has contributed to the book. Dr. Xiangyu Wang contributed to a new frequency-domain method for random fatigue analysis. Dr. Ozer Elbeyli did much of the work on control of delayed stochastic systems, nonlinear feedback and generalized covariance controls of stochastic systems. Dr. Luis G. Crespo extended the cell mapping methods to deterministic and stochastic optimal control problems. I would also like to thank my postdoctoral fellow, Dr. Ling Hong, and my graduate student, Dr. Huseyin Denli, for proofreading the manuscript. Finally, I would like to thank my wife, Jue, my son, Eliot, and my daughter, Crystal, for their love and support. Being a ‘fool’ professor, I feel very lucky to have my loving and supporting family.
Jian-Qiao Sun Newark, Delaware February, 2006
Chapter 1
Introduction
Uncertainties are ubiquitous. When there is a sufficient amount of data to form a sample space, uncertainties can be modeled as random variables or stochastic processes by means of statistical inference. There are many examples of dynamical systems subject to random uncertainties, which arise from external excitations to the system and from the system parameters. The economy is one example of a dynamical system with random uncertainties: many random factors such as weather, government policy, natural disasters and wars affect the economy, and the evolution of an economic system is a stochastic process. Another example is structural and mechanical systems, which can be subject to uncertainties from external loadings such as earthquakes, ocean waves, wind loading and aerodynamic forces on fast-moving vehicles, and from randomness of material parameters such as uncertain modulus, internal material damping and geometrical variations due to variabilities of manufacturing processes. The subject of stochastic dynamics and control deals with response analysis and control design for dynamical systems with random uncertainties.
1.1. Stochastic dynamics

It is commonly recognized that Einstein (1956) established the theoretical foundation of stochastic calculus in his doctoral research and a series of papers on this subject, although Brownian motions, as a fundamental building block of stochastic calculus, were studied long before him (Hänggi et al., 2005). Since then, the literature on stochastic dynamics has been steadily increasing. A quick Google book search returns a large number of textbooks and monographs in various subject areas including economics, finance, geology, ecology, population dynamics and biology. In the area of mechanical and civil systems, stochastic dynamics is known as random vibrations. The pioneering research work and the resulting book by Crandall and Mark (1963) marked the beginning of the theory of random vibrations in the engineering community. Since then, there have been quite a few excellent books on random vibrations. The ones that heavily influenced this author are the following books: Lin (1967), Nigam (1983), Newland (1984), Ibrahim (1985),
Soong (1973), Young and Chang (1988), Soong and Grigoriu (1993), Lin and Cai (1995), Wirsching et al. (1995) and Elishakoff (1983). Random vibrations remain an active research topic. The survey paper by Crandall and Zhu (1983) helped me start a research career in random vibrations. Housner et al. (1997) represented a major review effort of structural controls research including random vibrations. Other review papers were focused on special topics. Spanos (1981) and Socha and Soong (1991) examined the theory and application of the method of statistical linearization to nonlinear stochastic systems. An even more comprehensive study of the subject was documented in the book by Roberts and Spanos (1990). Roberts and Spanos (1986) presented a survey of the method of stochastic averaging and its applications to the analysis of nonlinear stochastic systems. Stochastic averaging is a popular method and is rigorously based on the mathematical results due to Khasminskii (1966). A recent edited monograph (Zhu et al., 2003) presented the current research activities and trends in stochastic structural dynamics.

An important goal of structural analysis is to investigate the reliability of a structure subject to random loadings, and to predict the probability of structural failure over a certain service time. The subject of random vibrations provides the mathematical theory and solution methodologies for engineers to conduct structural design and analysis taking random uncertainties into account, and to develop a probabilistic assessment of the reliability of the structure. Chapters 2 to 12 of this book cover the theory and practice of random vibrations. A thorough understanding of stochastic dynamics including the theory of random vibrations is essential to attacking the problem of stochastic controls.
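As a minimal illustration of the kind of problem random vibrations addresses (a sketch, not an example from this book; all parameter values are assumed for illustration), the following Euler–Maruyama simulation drives a linear SDOF oscillator with Gaussian white noise and compares the sample stationary variance against the classical analytical result $E[x^2] = \pi S_0/(2\zeta\omega_0^3)$ for a two-sided excitation PSD level $S_0$.

```python
import numpy as np

# assumed parameters (illustrative only)
zeta, w0, S0 = 0.2, 2.0, 0.1
dt, T = 1e-3, 500.0

def simulate_sdof(seed=0):
    """Euler-Maruyama simulation of x'' + 2*zeta*w0*x' + w0**2*x = W(t),
    where W(t) is Gaussian white noise with two-sided PSD level S0
    (autocorrelation 2*pi*S0*delta(tau))."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    # Wiener increments driving the velocity equation:
    # variance per step is 2*pi*S0*dt for a two-sided PSD level S0
    dW = rng.normal(0.0, np.sqrt(2.0 * np.pi * S0 * dt), size=n)
    x = v = 0.0
    xs = np.empty(n)
    for k in range(n):
        # simultaneous update: right-hand side uses the old (x, v)
        x, v = x + v * dt, v + (-2.0 * zeta * w0 * v - w0**2 * x) * dt + dW[k]
        xs[k] = x
    return xs

xs = simulate_sdof()
var_mc = xs[len(xs) // 2:].var()                  # discard the transient
var_exact = np.pi * S0 / (2.0 * zeta * w0**3)     # classical result
print(var_mc, var_exact)
```

The Monte Carlo estimate fluctuates with the seed and simulation length, but for these parameters it should agree with the analytical value to within a few tens of percent.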
1.2. Stochastic control

When passive approaches such as damping, structural design and modification cannot change the system response to meet certain performance objectives, active controls are often resorted to. Control of stochastic systems is a much more difficult topic than that of deterministic systems, and is still an active research area. There have been a few books on this subject. The book by Bryson Jr. and Ho (1969) presents formulations of optimal control for random systems. The book by Stengel (1986) deals with state estimation, the Kalman filter and optimal control of linear stochastic systems. Yong and Zhou (1999) present a systematic theory of optimal control formulation for stochastic Hamiltonian systems.

The stability and control of externally and parametrically excited stochastic systems have attracted the attention of several researchers. Blaquiere (1992) studied the controllability and existence of solutions to the general Fokker–Planck–Kolmogorov (FPK) equation. Florchinger (1997, 1999) investigated feedback stabilization of stochastic differential systems leading to discontinuous feedback
laws. Control strategies in conjunction with linearization techniques are commonly used. Socha and Blachuta (2000) applied statistical and equivalent linearization methods to develop quasi-optimal control problems. Young and Chang (1988) proposed a suboptimal nonlinear controller by using external linearization. Drakunov et al. (1993) studied a discrete time sliding mode control for continuous time linear systems subject to stationary stochastic disturbances. Pieper and Surgenor (1994) applied a single input discrete sliding mode control to a gantry crane problem. Stochastic effects were considered in the study, leading to an optimal switching gain in the quadratic sense. Sinha and Miller (1995) investigated stochastic optimal sliding mode controls for space applications. Sun and Xu (1998) designed sliding mode controls to reduce the response variance of nonlinear stochastic systems.

1.2.1. Covariance control

Many controls of stochastic systems seek to achieve the pre-specified response mean and covariance in steady state. Studies by Skelton and colleagues (Grigoriadis and Skelton, 1997; Lui and Skelton, 1998; Skelton et al., 1998), Field and Bergman (1998) and Wojtkiewicz and Bergman (2001) are good examples of moment specification and covariance controls in engineering applications. Iwasaki and Skelton (1994) designed covariance controls based on the observed states and established conditions for assignable covariances. Grigoriadis and Skelton (1997) investigated a minimum energy covariance control. Iwasaki et al. (1998) formulated a recursive approach to achieve assignable response covariances. The book by Skelton et al. (1998) is an excellent text about variance control techniques for linear systems. Chung and Chang (1994a, 1994b) developed covariance controls of bilinear systems. Chang et al. (1997) used describing functions to study the covariance control of a nonlinear hysteretic system.
The maximum entropy principle was applied to design covariance controls for nonlinear stochastic systems (Wojtkiewicz and Bergman, 2001). The maximum entropy principle provides an approximate probability density function (PDF) of the response. The availability of the response PDF makes the evaluation of higher order response moments fairly effective, even though the PDF is approximate. Elbeyli and Sun (2004) investigated a class of nonlinear feedback controls that render the exact stationary PDF of the response to assist the evaluation of the response moments in the design of covariance controls. The advantage of having the exact PDF is that arbitrarily high order moments, or the expectation of general nonlinear functions of the response variables, can be evaluated accurately.
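To make the computation underlying covariance control concrete, here is a small numerical sketch (illustrative only; the system matrices, gain and noise intensity are assumed, not taken from the references above). For a linear Itô system $dx = (A - BK)x\,dt + G\,dW$ with a stabilizing gain $K$, the stationary covariance $P$ satisfies the Lyapunov equation $(A-BK)P + P(A-BK)^{\mathsf{T}} + GG^{\mathsf{T}} = 0$; covariance control amounts to choosing $K$ so that the resulting $P$ matches a specification.

```python
import numpy as np

def stationary_covariance(Ac, G):
    """Solve Ac @ P + P @ Ac.T + G @ G.T = 0 for the stationary covariance P
    by vectorizing with Kronecker products (Ac must be a stable matrix)."""
    n = Ac.shape[0]
    I = np.eye(n)
    # row-major vec identity: ravel(A @ X @ B) = kron(A, B.T) @ ravel(X)
    L = np.kron(Ac, I) + np.kron(I, Ac)
    P = np.linalg.solve(L, -(G @ G.T).ravel()).reshape(n, n)
    return 0.5 * (P + P.T)   # symmetrize against round-off

# assumed example: controlled oscillator x'' = -x - 0.2 x' + u + noise
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([[0.0], [1.0]])
K = np.array([[3.0, 1.8]])          # assumed feedback gain
G = np.array([[0.0], [0.5]])        # noise enters the velocity equation
P = stationary_covariance(A - B @ K, G)
print(P)
```

Sweeping the gain `K` and re-solving shows directly how feedback reshapes the stationary covariance, which is the essence of the covariance assignment problem discussed above.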
1.2.2. PDF control

There is a strong interest in the control of stochastic systems to reshape the probability density function (PDF) of the response. Controlling the PDF, of course, implies the control of the response moments of any order, and is potentially a far more difficult problem. Two control design approaches for shaping the response PDF are available in the literature.

The first method directly uses the Fokker–Planck–Kolmogorov (FPK) equation. The desired PDF of the system is shaped with the feedback terms. Generally, the functional form of the feedback is determined with the help of the available analytical solutions of the FPK equation (Elbeyli and Sun, 2002). This method is thus limited to systems whose exact stationary PDF solutions can be obtained in closed form, and sometimes leads to impractical feedback controls. Approximate techniques such as the stochastic averaging method and statistical linearization can be used to design controls for tracking the target PDF (Socha and Blachuta, 2000; Elbeyli and Sun, 2002).

The second method directly controls the system response to track the target PDF. The complex input–output relationship of stochastic systems is formulated with a series expansion of the response PDF using stochastic eigenvectors (Seo and Choi, 2001). Orthonormal series expansion can be used to parametrize the response PDF in conjunction with nonlinear system identification (Scott and Mulgrew, 1997). Some control methods also require online estimation of the response mean and variance. Forbes and colleagues (Forbes et al., 2003a, 2003b, 2004) designed a constant feedback gain control to achieve the target PDF in steady state. They used Gram–Charlier expansions as the PDF base functions, studied multidimensional systems, and obtained an approximately parametrized response PDF and a sub-optimal control to track the target PDF in steady state.
Crespo and Sun (2003a) found controls to minimize the error between the response and target PDFs. The control has a switching term, as in robust sliding mode control, to handle the parameter uncertainty of the system. The state of the art of stochastic control for tracking PDFs is represented by the work of Wang and colleagues (Guo and Wang, 2003; Wang, 2002, 2003; Wang and Yue, 2001; Wang and Zhang, 2001). They used B-spline expansions to represent the output PDF of discrete-time ARMAX systems and derived optimal controls to track the target PDF. The systems considered there are subject to bounded random disturbances. One of the advantages of discrete-time systems is that a one-step mapping of the response PDF can be obtained analytically in terms of the disturbance PDF. Wang (1998) also developed a B-spline neural network MIMO control to minimize the difference between the PDF of the system output and a desired density function. Some issues in designing feedback controls for systems governed by stochastic differential equations to track a target PDF precisely at any time were
studied (Elbeyli et al., 2005a). Moment equations of the response were used to demonstrate a hierarchical control design procedure.

1.2.3. Time delayed systems

Time delay comes from different sources and often leads to instability or poor performance in control systems. Other than a few exceptional cases, delay is undesired, and control strategies to eliminate or minimize its unwanted effects have to be employed. The effects of time delay on the stability and performance of deterministic control systems have been the subject of many studies. Yang and Wu (1998) and Stepan (1998) studied structural systems with time delay. Ali et al. (1998) studied the stability and performance of feedback controls with multiple time delays by considering the roots of the closed loop characteristic equation. Niculescu et al. (1998) presented a survey of stability analysis of deterministic delayed linear systems.

For stochastic systems, an effective Monte Carlo simulation scheme was presented by Kuchler and Platen (2002). Buckwar (2000) studied numerical solutions of Itô type differential equations with time delay in both the drift and diffusion terms, and their convergence. Guillouzic et al. (1999) studied first order delayed Itô differential equations using a small delay approximation and obtained PDFs as well as the second order statistics analytically. Frank and Beek (2001) obtained the PDFs using the FPK equation for linear delayed stochastic systems and studied the stability of fixed point solutions in biological systems. Fu et al. (2003) investigated the state feedback stabilization of nonlinear time delayed stochastic systems. Pinto and Goncalves (2002) fully discretized a nonlinear SDOF system to study control problems with time delay. Klein and Ramirez (2001) studied MDOF delayed optimal regulator controllers with a hybrid discretization technique where the state equation was partitioned into discrete and continuous portions. Elbeyli et al.
(2005b) extended the semi-discretization method to stochastic systems, and studied the effect of various higher order approximations in semi-discretization on the computational efficiency and accuracy.

1.2.4. FPK based design

Feedback control and stability of nonlinear stochastic systems have been studied extensively in the literature. Florchinger (1997) and Bensoubaya et al. (2000) studied the stability of feedback control systems with the Lyapunov approach. Boulanger (2000) studied the asymptotic stability in probability of the feedback control of an Itô equation. Brockett and Liberzon (1998) studied the explicit solution of the steady state FPK equation for nonlinear feedback systems for which one can obtain an exact probability density function (PDF). Brockett and Liberzon
(1998) studied the existence of stationary PDFs of the FPK equation for feedback nonlinear systems of Lur’e type. A detailed review of the control of structural systems with stochastic considerations is presented by Housner et al. (1997). Duncan et al. (1998) studied adaptive controls for a partially known semi-linear stochastic system in the infinite dimensional space. Kazakov (1995) constructed a realizable conditional optimal control based on local criteria. By using the closed form solutions to the Hamilton–Jacobi–Bellman (HJB) equation for a linear single degree of freedom system, Dimentberg et al. (2000) studied the optimal control of steady-state random vibrations.

1.2.5. Optimal control

Optimal control of stochastic nonlinear dynamic systems is an active area of research due to its relevance to many engineering applications. This is a difficult problem to study, particularly when the system is strongly nonlinear and there are constraints on the states and the control. Optimal feedback controls for systems under white-noise random excitations may be studied by the Pontryagin maximum principle, Bellman’s principle of optimality and the Hamilton–Jacobi–Bellman (HJB) equation. When the control and the state are bounded, the direct solution of the HJB equation faces serious difficulties since it is multidimensional, nonlinear and defined in a domain that in general might not be simply connected (Yong and Zhou, 1999). Very few closed form solutions to this problem have been found so far. Bratus et al. (2000) studied the HJB equation for optimal control of MDOF linear systems. They obtained a closed form solution of the HJB equation, and studied optimal control problems subject to constraints. Using the generalized HJB equation, Saridis and Wang (1994) studied the suboptimal control of nonlinear stochastic systems. In their study, they addressed the controller design and stability issues.
Jumarie (1996) used higher order terms in the Taylor expansion in the presence of strong white noise excitation to improve optimal control results, and applied the controller to the moment equations. Krstic and Deng (1998) pointed out the similarities of optimal control design for deterministic and stochastic systems, and illustrated a backstepping control design similar to the one applicable to deterministic systems. Westman and Hanson (1999) investigated a dynamic programming approach to find optimal controls for a nonlinear stochastic system. Crespo and Sun (2003c) extended the generalized cell mapping method to stochastic optimal control problems in connection with Bellman's principle of optimality. Zhu et al. (2001) proposed a strategy for optimal feedback control of randomly excited structural systems based on the stochastic averaging method for quasi-Hamiltonian systems and the HJB equation. Given the intrinsic complexity of the problem, we usually must resort to numerical methods to find approximate control solutions (Anderson et al., 1984; Kushner and Dupuis, 2001). While certain numerical methods of solution to the
HJB equation are known, the numerical approaches often require advance knowledge of the boundary/asymptotic behavior of the solution (Bratus et al., 2000). Chapters 13 to 19 review classical feedback controls and optimal control theory, and study controls of linear and nonlinear stochastic systems.
Chapter 2
Probability Theory
This chapter presents brief descriptions of random variables and probability theory. Further in-depth discussions on the subject can be found in the classical books by Feller (1968, 1972) as well as the one by Lin (1967).
2.1. Probability of random events

A random phenomenon can have many unpredictable outcomes. The totality of all possible outcomes is called the sample space, denoted by Ω. An element of Ω is called a random sample. There are many events in our daily life that are not predictable with certainty. Take tossing a coin as an example: we could have a head or a tail. Here, the head and the tail are random samples; together, they form the sample space. A subset A of the sample space Ω is called a random event. We assign to the random event a quantity denoted by P(A) to measure how likely it is to occur. This quantity is known as probability. Let Ai (i = 1, 2, . . .) be a set of subsets of Ω. Let F represent a family of subsets of Ω such that

1. If A ∈ F, then A^c ∈ F, where A^c is the complement of A.
2. If Ai ∈ F (i = 1, 2, . . .), then ⋃_{i=1}^{∞} Ai ∈ F.

Such a family F is called a sigma algebra. We are now ready to state the axioms of probability theory.

AXIOM 2.1. Assume that A ∈ F, Ai ∈ F (i = 1, 2, . . .) and the Ai are mutually exclusive. Then

0 ≤ P(A) ≤ 1,  P(⋃_{i=1}^{∞} Ai) = Σ_{i=1}^{∞} P(Ai),  P(Ω) = 1. (2.1)
Some common results in probability theory are listed below.
1. Complementary event:

P(A^c) = 1 − P(A). (2.2)

2. Total event: Given A, B ∈ F. Then

P(A ∪ B) = P(A) + P(B) − P(A ∩ B). (2.3)

3. Conditional probability: Let P(A|B) denote the probability of A conditional on B. Then

P(A|B) = P(A ∩ B)/P(B), provided P(B) > 0. (2.4)

4. Joint events:

P(A ∩ B) = P(A|B)P(B) = P(B|A)P(A), (2.5)
P(A ∩ B ∩ C) = P(A)P(B ∩ C|A) = P(A)P(B|A)P(C|A ∩ B). (2.6)

5. Independent events: Given A, B ∈ F. A and B are said to be independent if P(A|B) = P(A), or equivalently

P(A ∩ B) = P(A)P(B). (2.7)

6. Null set: Let ∅ denote the set containing no random samples. Then

P(∅) = 0. (2.8)

The null set represents the physically impossible event.
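Results 2–4 above are easy to sanity-check by direct enumeration. The following sketch (an illustration, not from the text) uses a fair die with A = {even outcome} and B = {outcome ≥ 4}:

```python
# Sketch: verify the total-event identity (2.3) and the conditional
# probability definition (2.4) by enumeration for a fair die.
from fractions import Fraction

omega = range(1, 7)                                # sample space
P = lambda E: Fraction(sum(1 for w in omega if w in E), 6)

A = {2, 4, 6}   # even outcome
B = {4, 5, 6}   # outcome >= 4

# total event, Eq. (2.3)
assert P(A | B) == P(A) + P(B) - P(A & B)

# conditional probability (2.4) and joint events (2.5)
P_A_given_B = P(A & B) / P(B)
assert P(A & B) == P_A_given_B * P(B)
print(P_A_given_B)
```

Note that here P(A ∩ B) = 1/3 ≠ P(A)P(B) = 1/4, so A and B are dependent events, consistent with result 5.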
2.2. Random variables

Consider the stress response X of a structure subject to random excitations. X is a function of the random excitation. Formally, we define a random variable X(ω) as a real function of the random sample ω ∈ Ω. In this example, X(ω) maps Ω to R^1. Let Ax denote a random event defined as

Ax = {ω ∈ Ω: X(ω) ≤ x}. (2.9)

Ax represents all the random stresses less than or equal to x. A probability of the event can be assigned as P(Ax) = P(ω ∈ Ω: X(ω) ≤ x). Other random events can be defined in a similar manner, for example {ω ∈ Ω: x1 ≤ X < x2} and {ω ∈ Ω: X = x}. In what follows, we drop the symbol ω in the argument of the random variable X for the sake of brevity. The random variable X considered here is a scalar; random variables can also be vectors and matrices, which are dealt with later.
2.3. Probability distributions

Let FX(x) be the distribution function of X defined as

FX(x) = P(Ax) = P(X ≤ x). (2.10)

Note that A−∞ = ∅ and A+∞ = Ω, where ∅ is the null set. Hence

FX(−∞) = 0,  FX(∞) = 1. (2.11)

A+∞ represents the event that the random stress is less than +∞; this is, of course, a sure event. Since x1 ≤ x2 implies Ax1 ⊆ Ax2, FX(x1) ≤ FX(x2). Therefore, the distribution function is a non-decreasing function of x. In general, FX(x) is continuous and piecewise differentiable. The derivative of the distribution function is known as the probability density function (PDF), defined as

pX(x) = d FX(x)/dx. (2.12)

In terms of the probability density function, the probability of the event {x1 < X ≤ x2} is given by

P(x1 < X ≤ x2) = FX(x2) − FX(x1) = ∫_{x1}^{x2} pX(x) dx. (2.13)

The probability density function is nonnegative, and satisfies the normalization condition in accordance with the probability axiom,

∫_{−∞}^{∞} pX(x) dx = 1. (2.14)

A random variable is completely specified by its probability density function, meaning that any statistics of the random variable can be obtained from the probability density function.
2.4. Expectations of random variables

The expected value or mean value E[X] of the random variable X is defined as

μX ≡ E[X] = ∫_{−∞}^{∞} x pX(x) dx. (2.15)
The variance Var[X] of X is defined as

σX^2 ≡ Var[X] = E[(X − μX)^2] = ∫_{−∞}^{∞} (x − μX)^2 pX(x) dx = E[X^2] − μX^2. (2.16)

σX is known as the standard deviation. When μX ≠ 0, we define a variational coefficient V[X] as

V[X] = σX/μX. (2.17)
The nth order moment of X is defined as

E[X^n] = ∫_{−∞}^{∞} x^n pX(x) dx. (2.18)

The nth order central moment is defined as

E[(X − μX)^n] = ∫_{−∞}^{∞} (x − μX)^n pX(x) dx. (2.19)
The skewness of random variable X is defined as

γX = E[(X − μX)^3] / σX^3. (2.20)

When the probability density function of X is symmetric, γX = 0; γX is thus a measure of the asymmetry of the probability density function. When γX > 0, the mean of X is to the right of the mode of the probability density function; when γX < 0, the mean of X is to the left of the mode. The kurtosis of random variable X is defined as

αX = E[(X − μX)^4] / σX^4. (2.21)

The skewness and kurtosis are commonly used to characterize the non-normality or non-Gaussianity of a probability density function. The characteristic function of random variable X is defined as the Fourier transform of the probability density function,

MX(θ) ≡ E[e^{iθX}] = ∫_{−∞}^{∞} e^{iθx} pX(x) dx. (2.22)
Expanding the exponential term e^{iθx} in a power series, we have

MX(θ) = 1 + Σ_{n=1}^{∞} ((iθ)^n/n!) E[X^n]. (2.23)

The moments of the random variable can be calculated from the characteristic function as

E[X^n] = (1/i^n) d^n MX(θ)/dθ^n |_{θ=0}. (2.24)

The nth order cumulant functions can also be derived from the characteristic function as

kn(X) = (1/i^n) d^n ln MX(θ)/dθ^n |_{θ=0}. (2.25)

Moments and cumulant functions are useful when we attempt to obtain the PDF of a random variable from a large set of sampled data.
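Equation (2.24) can be checked numerically. The sketch below (an illustration, not from the text) differentiates the characteristic function of an exponential random variable, M(θ) = (1 − iθ/λ)^{−1}, by central finite differences at θ = 0, recovering E[X] = 1/λ and E[X^2] = 2/λ^2:

```python
# Sketch: moments from the characteristic function, Eq. (2.24),
# E[X^n] = i^{-n} d^n M / d theta^n at theta = 0, evaluated by
# finite differences for an exponential variable with rate lam.
lam = 2.0

def M(theta):
    # characteristic function of the exponential distribution
    return 1.0 / (1.0 - 1j * theta / lam)

h = 1e-3
m1 = (M(h) - M(-h)) / (2 * h) / 1j               # first moment, n = 1
m2 = (M(h) - 2 * M(0.0) + M(-h)) / h**2 / 1j**2  # second moment, n = 2

print(round(m1.real, 6))  # 0.5  (= 1/lam)
print(round(m2.real, 6))  # 0.5  (= 2/lam^2)
```

The finite-difference error is O(h^2), so the recovered moments agree with the exact values to about six digits here.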
2.5. Common probability distributions

First, we consider a set of continuous random variables.

Uniform, X ∈ U(a, b). A uniformly distributed random variable is defined by the following probability density function; the mean, variance and characteristic function are listed below.

pX(x) = 1/(b − a), a ≤ x ≤ b; 0, elsewhere, (2.26)

μX = (a + b)/2,  σX^2 = (b − a)^2/12,
MX(θ) = (e^{ibθ} − e^{iaθ}) / (iθ(b − a)). (2.27)

The same information on other random variables is presented in the following.

Normal or Gaussian, X ∈ N(μ, σ^2).

pX(x) = (1/(√(2π) σ)) e^{−(x−μ)^2/(2σ^2)}, (2.28)

μX = μ,  σX^2 = σ^2,  MX(θ) = e^{iμθ − σ^2 θ^2/2}. (2.29)
The unique property of normal random variables lies in that the moments of the first two orders completely specify the probability density function, and therefore any other statistics. Other non-Gaussian random variables often need the knowledge of moments up to infinite order to completely specify the probability density function. Another reason why normal variables are important is the central limit theorem of probability theory, which suggests that a physical phenomenon influenced by a large number of random factors tends to be normally distributed. When μ = 0 and σ = 1, we introduce a common notation for pX(x),

φX(x) = (1/√(2π)) e^{−x^2/2}, (2.30)

and the distribution function

ΦX(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−y^2/2} dy. (2.31)

φX(x) and ΦX(x) are called the standard unit-variance Gaussian PDF and distribution function, respectively. They are shown in Figure 2.1.

[Figure 2.1. The standard Gaussian PDF φX(x) and the distribution function ΦX(x).]

Chi-square, X ∈ χ^2(n).

pX(x) = (1/(2^{n/2} Γ(n/2))) x^{n/2 − 1} e^{−x/2}, x ≥ 0; 0, x < 0, (2.32)

μX = n,  σX^2 = 2n,  MX(θ) = (1 − 2iθ)^{−n/2}. (2.33)
Gamma, X ∈ Γ(α, β).

pX(x) = (β^α/Γ(α)) x^{α−1} e^{−βx}, x ≥ 0; 0, x < 0, (2.34)

μX = α/β,  σX^2 = α/β^2,  MX(θ) = (1 − iθ/β)^{−α}. (2.35)

Rayleigh, X ∈ R(σ^2).

pX(x) = (x/σ^2) e^{−x^2/(2σ^2)}, x ≥ 0; 0, x < 0, (2.36)

μX = √(π/2) σ,  σX^2 = ((4 − π)/2) σ^2. (2.37)

Two examples of Rayleigh PDFs are shown in Figure 2.2.

[Figure 2.2. Examples of Rayleigh probability density functions.]

Exponential, X ∈ E(λ).

pX(x) = λ e^{−λx}, x ≥ 0; 0, x < 0, (2.38)

μX = 1/λ,  σX^2 = 1/λ^2,  MX(θ) = (1 − iθ/λ)^{−1}. (2.39)

Weibull, X ∈ W(x0, m, α).

pX(x) = (m/α)(x − x0)^{m−1} e^{−(x−x0)^m/α}, x ≥ x0; 0, x < x0, (2.40)

μX = α^{1/m} Γ(1 + 1/m) + x0,
σX^2 = α^{2/m} [Γ(1 + 2/m) − Γ^2(1 + 1/m)]. (2.41)

Two examples of Weibull PDFs are shown in Figure 2.3.

[Figure 2.3. Examples of Weibull probability density functions.]

Log-normal, X ∈ Ln(μ, σ^2).

pX(x) = (1/(√(2π) σ x)) e^{−(ln x − μ)^2/(2σ^2)}, x > 0; 0, x ≤ 0, (2.42)

μX = e^{μ + σ^2/2},  σX^2 = e^{2μ + σ^2}(e^{σ^2} − 1). (2.43)
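The closed-form moments above are easily spot-checked by Monte Carlo sampling. The sketch below (an illustration, not from the text) checks the Rayleigh mean and variance of equation (2.37); it assumes numpy's `rayleigh` sampler, whose scale parameter coincides with the σ used above:

```python
# Sketch: Monte Carlo check of the Rayleigh moment formulas (2.37),
# mu_X = sqrt(pi/2) sigma and sigma_X^2 = (4 - pi)/2 sigma^2.
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
x = rng.rayleigh(scale=sigma, size=1_000_000)

mean_exact = np.sqrt(np.pi / 2) * sigma
var_exact = (4 - np.pi) / 2 * sigma**2

print(abs(x.mean() - mean_exact))  # sampling error, of order 1e-3
print(abs(x.var() - var_exact))
```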
Next, we present several examples of probability distributions of discrete valued random variables.

Binomial, X ∈ B(n, p). The probability of the random event X = x is given by

PB(X = x) = Cnx p^x (1 − p)^{n−x}, (2.44)

where

Cnx = n(n − 1) ··· (n − x + 1)/x!, (2.45)

0 < p < 1, n is an integer and x = 0, 1, 2, . . . , n. Here p is the probability that a certain event occurs in one random experiment, and PB(X = x) is the probability that exactly x occurrences of the same event are observed in n repeated experiments. Counting heads in coin tossing is a typical binomial random variable; for a fair coin, the probability of the head-up event is p = 1/2. The mean of X is given by

μX = Σ_{x=0}^{n} x PB(X = x) = Σ_{x=0}^{n} x Cnx p^x (1 − p)^{n−x} = np. (2.46)

The variance is given by

σX^2 = Σ_{x=0}^{n} (x − np)^2 PB(X = x) = np(1 − p). (2.47)

The characteristic function is

MX(θ) = E[e^{iθX}] = (p e^{iθ} + 1 − p)^n. (2.48)

Poisson, X ∈ P(λ). The probability of the random event X = x is given by

PP(X = x) = (λ^x/x!) e^{−λ}, x = 0, 1, 2, . . . , (2.49)

μX = σX^2 = λ,  MX(θ) = e^{λ(e^{iθ} − 1)}. (2.50)
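The binomial mean np and variance np(1 − p) of equations (2.46)–(2.47) follow directly from the probability mass function. A small sketch (an illustration, not from the text) evaluates the sums exactly:

```python
# Sketch: compute the binomial mean and variance directly from the
# pmf (2.44) and compare with np and np(1-p) of Eqs. (2.46)-(2.47).
from math import comb

n, p = 10, 0.3
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

mean = sum(x * q for x, q in enumerate(pmf))
var = sum((x - mean)**2 * q for x, q in enumerate(pmf))

print(round(mean, 12))  # 3.0  (= n p)
print(round(var, 12))   # 2.1  (= n p (1 - p))
```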
EXAMPLE 2.1. Chebyshev inequality. Let X be a random variable with mean μX and standard deviation σX. Then, for a given α > 0,

P(|X − μX| ≥ α) ≤ σX^2/α^2. (2.51)

By definition, we have

σX^2 = ∫_{−∞}^{∞} (x − μX)^2 pX(x) dx ≥ ∫_{|x−μX| ≥ α} (x − μX)^2 pX(x) dx
     ≥ α^2 ∫_{|x−μX| ≥ α} pX(x) dx = α^2 P(|X − μX| ≥ α). (2.52)
2.6. Two-dimensional random variables

Let X and Y be random variables defined on the same sample space Ω. The vector X = [X, Y]^T is a two-dimensional random variable. Let Ax = {X ≤ x} and By = {Y ≤ y} be the events. The intersection Ax ∩ By is also an event, and its probability P(Ax ∩ By) defines

FXY(x, y) = P({X ≤ x} ∩ {Y ≤ y}), (2.53)

where FXY(x, y) is a function of (x, y) and represents the distribution function of the two-dimensional random variable [X, Y]. Since A∞ = B∞ = Ω and A−∞ = B−∞ = ∅, it follows from the probability axiom that

FXY(x, ∞) = P(Ax ∩ Ω) = P(Ax) = FX(x),
FXY(∞, y) = P(Ω ∩ By) = P(By) = FY(y),
FXY(∞, ∞) = P(Ω ∩ Ω) = P(Ω) = 1, (2.54)
FXY(−∞, y) = P(∅ ∩ By) = P(∅) = 0,
FXY(x, −∞) = P(Ax ∩ ∅) = P(∅) = 0.

FXY(x, y) is assumed to have piecewise continuous mixed second order derivatives. The probability density function of the joint random variable [X, Y] is defined as

pXY(x, y) = ∂^2 FXY(x, y)/∂x∂y. (2.55)

Hence, we have

FXY(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} pXY(u, v) dv du. (2.56)

Because pXY(x, y) dx dy represents the probability of the event {x < X ≤ x + dx} ∩ {y < Y ≤ y + dy}, pXY(x, y) is nonnegative for all x and y. The individual probability density functions of X and Y, known as the marginal probability density functions, can be obtained from the joint probability density function as

pX(x) = ∫_{−∞}^{∞} pXY(x, y) dy, (2.57)
pY(y) = ∫_{−∞}^{∞} pXY(x, y) dx. (2.58)
When X and Y are mutually independent random variables, we have

pXY(x, y) = pX(x) pY(y). (2.59)

2.6.1. Expectations

The expected values of X and Y can be defined in the same manner as for single random variables,

μX = E[X] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x pXY(x, y) dx dy, (2.60)
μY = E[Y] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} y pXY(x, y) dx dy. (2.61)

Furthermore, joint moments and joint central moments can be defined for two-dimensional random variables as follows:

E[X^n Y^m] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^n y^m pXY(x, y) dx dy, (2.62)
E[(X − μX)^n (Y − μY)^m] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − μX)^n (y − μY)^m pXY(x, y) dx dy. (2.63)

When n = m = 1, we have the correlation RXY and the covariance κXY of the random variables,

RXY = E[XY], (2.64)
κXY = E[(X − μX)(Y − μY)] = RXY − μX μY. (2.65)

A correlation coefficient is defined as

ρXY = κXY/(σX σY). (2.66)

When ρXY = 0, X and Y are said to be uncorrelated. When two random variables are mutually independent, they are also uncorrelated; when they are uncorrelated, however, they may or may not be independent. Now we prove that |ρXY| ≤ 1. Consider the following non-negative expectation:

E[((X − μX) + λ(Y − μY))^2] ≥ 0, ∀λ. (2.67)
Expanding the square terms, we have

σX^2 + λ^2 σY^2 + 2λκXY ≥ 0. (2.68)

Taking λ = −κXY/σY^2, we obtain

σX^2 σY^2 ≥ κXY^2. (2.69)

This is known as the Schwarz inequality. Hence, |κXY/(σX σY)| ≤ 1. The characteristic function of the two-dimensional probability density function is given by

MXY(θx, θy) ≡ E[e^{i(θx X + θy Y)}] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{i(θx x + θy y)} pXY(x, y) dx dy. (2.70)

Moments and cumulant functions of [X, Y] can be readily derived from MXY(θx, θy).

EXAMPLE 2.2. The probability density function of a two-dimensional Gaussian random variable [X1, X2] is given by

pX1X2(x1, x2) = (1/(2πσ1σ2√(1 − ρ^2))) exp{ (−1/(2(1 − ρ^2))) [ ((x1 − μ1)/σ1)^2 − 2ρ((x1 − μ1)/σ1)((x2 − μ2)/σ2) + ((x2 − μ2)/σ2)^2 ] }. (2.71)

Some expectations of the random variable can be found as

E[X1] = μ1,  E[X2] = μ2,  Var[X1] = σ1^2,  Var[X2] = σ2^2,  κX1X2 = ρσ1σ2. (2.72)
2.7. n-Dimensional random variables

Let X1, . . . , Xn be a set of random variables all defined on the sample space Ω. The vector X = [X1, . . . , Xn]^T is an n-dimensional random variable. As a straightforward generalization of the two-dimensional case, the distribution function of X is defined as

FX(x) = P({X1 ≤ x1} ∩ ··· ∩ {Xn ≤ xn}). (2.73)
The probability density function of X is defined as

pX(x) = ∂^n FX(x)/(∂x1 ∂x2 ··· ∂xn). (2.74)

The probability that X falls in a domain D ⊆ R^n can be written as

P(X ∈ D) = ∫_D pX(x) dx. (2.75)
2.7.1. Expectations

The mean of the n-dimensional random vector is given by

μX = E[X] = ∫_{R^n} x pX(x) dx, (2.76)

where μX is an n-dimensional real vector. The qth order joint moments and joint central moments are defined as follows:

E[∏_{k=1}^{n} Xk^{mk}] = ∫_{R^n} ∏_{k=1}^{n} xk^{mk} pX(x) dx, (2.77)
E[∏_{k=1}^{n} (Xk − μXk)^{mk}] = ∫_{R^n} ∏_{k=1}^{n} (xk − μXk)^{mk} pX(x) dx, (2.78)

where q = Σ_{k=1}^{n} mk. The correlation matrix of the random variables is defined as

RXX = E[XX^T]. (2.79)

RXX is symmetric by definition. The covariance matrix of the random variables is defined as

CXX = E[(X − μX)(X − μX)^T] = RXX − μX μX^T. (2.80)

CXX is symmetric by definition and is positive semi-definite. The proof of the semi-definiteness is presented here. Define a scalar random variable a = b^T(X − μX), where b is an arbitrary non-zero deterministic vector. Because E[a^2] ≥ 0 is always true and a = (X − μX)^T b, we have

E[a^2] = E[b^T(X − μX)(X − μX)^T b] = b^T CXX b ≥ 0. (2.81)

This completes the proof. When the random variables have zero mean, we have RXX = CXX.
The characteristic function of the n-dimensional probability density function is given by

MX(θx) ≡ E[e^{iθx^T X}] = ∫_{R^n} e^{iθx^T x} pX(x) dx, (2.82)

where θx is an n-dimensional vector.

EXAMPLE 2.3. The probability density function of an n-dimensional normally distributed random variable, X ∈ N(μX, CXX), is given by

pX(x) = (1/((2π)^{n/2} [det(CXX)]^{1/2})) exp[−(1/2)(x − μX)^T CXX^{−1} (x − μX)], (2.83)

where

E[X] = μX,  CXX = E[(X − μX)(X − μX)^T]. (2.84)
2.8. Functions of random variables

Consider a continuous function g(x). If X is a random variable, Y = g(X) is also a random variable. Given the probability distribution of X, how can we find the probability density function of Y? Assume that g(x) is a differentiable, monotonically increasing or decreasing function of x. This guarantees a unique inverse X = g^{−1}(Y). Note that

∫_{−∞}^{∞} pY(y) dy = ∫_{−∞}^{∞} pX(x) dx = 1. (2.85)

Consider now the change of variable in the integration,

∫_{−∞}^{∞} pX(x) dx = ∫_{−∞}^{∞} pX(g^{−1}(y)) (1/|g′(g^{−1}(y))|) dy. (2.86)

We have

pY(y) = pX(g^{−1}(y)) / |g′(g^{−1}(y))|. (2.87)
EXAMPLE 2.4. Let X ∈ N(μ, σ^2) and y = e^x. Then g′(x) = e^x, x = g^{−1}(y) = ln y, and g′(g^{−1}(y)) = y. Hence

pY(y) = (1/(√(2π) σ y)) exp[−(ln y − μ)^2/(2σ^2)], y > 0. (2.88)

That is, Y ∈ Ln(μ, σ^2). An example with μ = 1 and σ = 1 is shown in Figure 2.4.

[Figure 2.4. The probability density functions pY(y) and pX(x), where X ∈ N(0, 1) and y = e^x.]

EXAMPLE 2.5. Let X ∈ N(μ, σ^2) and y = g(x) = x^2. This function is not monotonic, so we have to partition the domain (−∞, ∞) into regions where g(x) is monotonic:

∫_{−∞}^{∞} (1/(√(2π) σ)) e^{−(x−μ)^2/(2σ^2)} dx
= ∫_{−∞}^{0} (1/(√(2π) σ)) e^{−(x−μ)^2/(2σ^2)} dx + ∫_{0}^{∞} (1/(√(2π) σ)) e^{−(x−μ)^2/(2σ^2)} dx
= ∫_{0}^{∞} (1/(2√(2π) σ)) [e^{−(−√y − μ)^2/(2σ^2)} + e^{−(√y − μ)^2/(2σ^2)}] dy/√y. (2.89)

Hence, we have

pY(y) = (1/(2√(2π) σ √y)) [e^{−(√y − μ)^2/(2σ^2)} + e^{−(−√y − μ)^2/(2σ^2)}], y ≥ 0; 0, y < 0. (2.90)

An example with μ = 1 and σ = 1 is shown in Figure 2.5.

[Figure 2.5. The probability density functions pY(y) and pX(x), where X ∈ N(0, 1) and y = x^2.]

EXAMPLE 2.6. Let X ∈ N(μ, σ^2) and y = g(x) = x^3. This function is monotonic in (−∞, ∞):

∫_{−∞}^{∞} (1/(√(2π) σ)) e^{−(x−μ)^2/(2σ^2)} dx = ∫_{−∞}^{∞} (1/(3√(2π) σ)) e^{−(y^{1/3} − μ)^2/(2σ^2)} dy/y^{2/3}. (2.91)

Hence, we have

pY(y) = (1/(3√(2π) σ y^{2/3})) e^{−(y^{1/3} − μ)^2/(2σ^2)}. (2.92)

Note that pY(y) is singular at y = 0, but it is integrable, as shown in Figure 2.6.

[Figure 2.6. The probability density functions pY(y) and pX(x), where X ∈ N(0, 1) and y = x^3.]
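The change-of-variables formula (2.87) can be checked by sampling. The sketch below (an illustration, not from the text) exponentiates normal samples, as in Example 2.4, and compares a normalized histogram against the log-normal density:

```python
# Sketch: check Eq. (2.87) for Y = e^X with X ~ N(mu, sigma^2) by
# comparing a histogram of samples against the log-normal density (2.88).
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 1.0, 1.0
y = np.exp(rng.normal(mu, sigma, size=1_000_000))

def p_Y(y):
    # p_X(g^{-1}(y)) / |g'(g^{-1}(y))| with g(x) = e^x, Eq. (2.88)
    return np.exp(-(np.log(y) - mu)**2 / (2 * sigma**2)) / (
        np.sqrt(2 * np.pi) * sigma * y)

counts, edges = np.histogram(y, bins=np.linspace(0.5, 8.0, 40))
width = edges[1] - edges[0]
dens = counts / (y.size * width)              # empirical density on the grid
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(dens - p_Y(centers))))    # small
```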
2.8.1. Linear transformation

Probability density function of transformed variables. Consider a linear transformation

Y = a + BX, (2.93)

where Y and X are n-dimensional random variables, a is an n-dimensional deterministic vector and B is an n × n nonsingular deterministic matrix. The inverse transformation becomes

X = B^{−1}(Y − a). (2.94)

In calculus, the differential volume elements dx and dy are related through the Jacobian J of the transformation as

dy = |J| dx,  J = det(B). (2.95)

By following the same steps as demonstrated above, we obtain the probability density function of Y in terms of that of X as follows:

pY(y) = pX(B^{−1}(y − a)) / |det(B)|. (2.96)

Statistics of the transformed variables. Taking the mathematical expectation of the linear transformation, we have

μY = a + BμX. (2.97)

The covariance matrix CYY of Y can be obtained as

CYY = B CXX B^T. (2.98)
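These two moment relations are easy to verify on sampled data. The sketch below (an illustration, not from the text, with arbitrarily chosen a, B and CXX) checks them for a Gaussian sample:

```python
# Sketch: verify mu_Y = a + B mu_X (2.97) and C_YY = B C_XX B^T (2.98)
# on a multivariate Gaussian sample.
import numpy as np

rng = np.random.default_rng(3)
mu_X = np.array([1.0, -2.0])
C_XX = np.array([[2.0, 0.6],
                 [0.6, 1.0]])
a = np.array([0.5, 0.5])
B = np.array([[1.0, 2.0],
              [0.0, 3.0]])

X = rng.multivariate_normal(mu_X, C_XX, size=500_000)
Y = a + X @ B.T                      # row-wise Y = a + B X

print(np.max(np.abs(Y.mean(axis=0) - (a + B @ mu_X))))   # ~0
print(np.max(np.abs(np.cov(Y.T) - B @ C_XX @ B.T)))      # ~0
```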
These two relationships hold for any random variable X. In particular, they hold for Gaussian variables, as shown in the following example.

EXAMPLE 2.7. Let X ∈ N(μX, CXX) and Y = a + BX. We have

pY(y) = exp[−(1/2) y1^T CXX^{−1} y1] / ((2π)^{n/2} [det(CXX)]^{1/2} |det(B)|)
      = exp[−(1/2) y2^T (B CXX B^T)^{−1} y2] / ((2π)^{n/2} [det(B CXX B^T)]^{1/2}), (2.99)

where y1 = B^{−1}(y − a) − μX and y2 = y − a − BμX. Hence, Y ∈ N(a + BμX, B CXX B^T): normality is preserved under linear transformations. This result can be shown to hold even when X and Y are of different dimensions.

This example has further implications. Consider a linear structural system subject to Gaussian random excitations. If the solution of the structural response can be formulated as a result of linear transformations of the Gaussian excitation, then it must be Gaussian. We shall discuss this in Chapters 8, 9 and 10.

EXAMPLE 2.8. Consider now a special linear transformation such that

a + BμX = 0, (2.100)
B CXX B^T = I, (2.101)

where I is the identity matrix. Then equation (2.99) takes the form

pY(y) = ∏_{j=1}^{n} (1/√(2π)) exp(−yj^2/2) = ∏_{j=1}^{n} pYj(yj). (2.102)

Hence, all components of Y are mutually independent N(0, 1) random variables. This example also demonstrates that one can find a linear transformation that maps a set of correlated random variables into a set of independent random variables, and vice versa.
Uncorrelated normal random variables. When Xj and Xk are mutually independent random variables, the joint probability density function is given by

pXjXk(xj, xk) = pXj(xj) pXk(xk). (2.103)

We have the covariance as

CXjXk = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (xj − μXj)(xk − μXk) pXj(xj) pXk(xk) dxj dxk
      = ∫_{−∞}^{∞} (xj − μXj) pXj(xj) dxj ∫_{−∞}^{∞} (xk − μXk) pXk(xk) dxk = 0. (2.104)

That is, Xj and Xk are uncorrelated when they are independent. When the random variables in X are uncorrelated and normal, CXX becomes diagonal. From equation (2.83), it follows that pX(x) = ∏_{j=1}^{n} pXj(xj). Hence, uncorrelated normal random variables are mutually independent. This result is unique to normally distributed random variables; in general, uncorrelated random variables may not be mutually independent.

When Dim(Y) ≠ Dim(X). When the dimension of Y is greater than that of X, there must be linearly dependent elements in Y, assuming that the n elements of X are mutually independent. In this case, only the n independent elements of Y are considered first in the transformation. The probability distribution of the rest of the dependent variables can be obtained from that of the n independent elements of Y. When the dimension of Y is less than that of X, we can artificially add enough linearly independent elements to Y so that its dimension is equal to n. The following example illustrates how to do this.

EXAMPLE 2.9. Let Z = X + Y, where X and Y are joint random variables with a probability density function pXY(x, y). Find pZ(z). Consider the following extended mapping:

[V; Z] = [1 0; 1 1] [X; Y]. (2.105)

Note that det(B) = 1. According to equation (2.96), we have the joint probability density function for [V, Z],

pVZ(v, z) = pXY(v, z − v). (2.106)
Hence, we obtain

pZ(z) = ∫_{−∞}^{∞} pVZ(v, z) dv = ∫_{−∞}^{∞} pXY(v, z − v) dv. (2.107)
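The convolution integral (2.107) can be evaluated numerically. The sketch below (an illustration, not from the text) takes independent X, Y ∈ U(0, 1), so pXY(x, y) = 1 on the unit square; the sum then has the triangular density z on (0, 1) and 2 − z on (1, 2), which also appears as Exercise 2.8 below:

```python
# Sketch: numerical evaluation of Eq. (2.107) for the sum of two
# independent U(0,1) variables; the result is the triangular density.
import numpy as np

def p_XY(x, y):
    # joint density of independent U(0,1) variables
    return np.where((0 < x) & (x < 1) & (0 < y) & (y < 1), 1.0, 0.0)

v = np.linspace(-1.0, 3.0, 400_001)   # integration grid
dv = v[1] - v[0]

def p_Z(z):
    # Riemann-sum approximation of integral p_XY(v, z - v) dv
    return np.sum(p_XY(v, z - v)) * dv

print(round(p_Z(0.5), 3))  # ~0.5
print(round(p_Z(1.0), 3))  # ~1.0
print(round(p_Z(1.5), 3))  # ~0.5
```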
2.8.2. General vector transformation

Consider the nonlinear bijective (one-to-one) mapping

Y = g(X), (2.108)

where X and Y are n-dimensional random variables. Using the same arguments as those leading to equations (2.85) and (2.96), the probability density function of Y is evaluated as

pY(y) = pX(g^{−1}(y)) / |J|, (2.109)

where

J = det(∂g(g^{−1}(y))/∂x^T) (2.110)

is the Jacobian of the mapping, and x = g^{−1}(y) is the inverse.

EXAMPLE 2.10. Let Z = XY, where X and Y are joint random variables with a probability density function pXY(x, y). Find pZ(z). Consider the following extended mapping:

V = X,  Z = XY. (2.111)

Evaluate the Jacobian as

J = det([1 0; y x]) = x = v. (2.112)

We have the joint probability density function for [V, Z],

pVZ(v, z) = (1/|v|) pXY(v, z/v). (2.113)

Hence, we obtain

pZ(z) = ∫_{−∞}^{∞} pVZ(v, z) dv = ∫_{−∞}^{∞} (1/|v|) pXY(v, z/v) dv. (2.114)

Note that the singularity at v = 0 has to be considered when a specific probability density function pXY(x, y) is given.
2.9. Conditional probability

Let [X, Y] be a two-dimensional random variable. The conditional probability density function of X on Y is defined as

pX|Y(x|y) = pXY(x, y)/pY(y),  pY(y) > 0. (2.115)

The conditional expected value of the function g(X) of random variable X on Y = y is defined as

E[g(X)|y] = ∫_{−∞}^{∞} g(x) pX|Y(x|y) dx. (2.116)

When g(X) = X and g(X) = (X − E[X|y])^2, we obtain the conditional mean and variance. Other statistics of X conditional on Y = y can be obtained by using pX|Y(x|y) in the same manner.

EXAMPLE 2.11. Let [X, Y] ∈ N(μ, C), where

μ = [μX; μY],  C = [σX^2, σXσYρ; σXσYρ, σY^2]. (2.117)

By using equations (2.28) and (2.71), we obtain

pX|Y(x|y) = pXY(x, y)/pY(y)
= (1/(√(2π) σX √(1 − ρ^2))) exp{−[x − μX − ρ(σX/σY)(y − μY)]^2 / (2σX^2(1 − ρ^2))}. (2.118)

Equation (2.118) shows that the conditional distribution is also normal, with the following conditional expected value and conditional variance:

E[X|y] = μX + ρ(σX/σY)(y − μY), (2.119)
Var[X|y] = σX^2 (1 − ρ^2). (2.120)

To generalize this example to higher dimensional problems, we consider an n-dimensional random vector X ∈ N(μX, CXX) and an m-dimensional random vector Y ∈ N(μY, CYY). The covariance between Y and X is given by the matrix CYX = E[(Y − μY)(X − μX)^T]. It can be shown that the probability density function of X on condition that Y = y is still Gaussian, with mean μX + CYX^T CYY^{−1} (y − μY) and covariance matrix CXX − CYX^T CYY^{−1} CYX.
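The conditional-mean formula (2.119) can be checked by conditioning a bivariate Gaussian sample on a thin slab of y values. The sketch below is an illustration with arbitrarily chosen parameters, not from the text:

```python
# Sketch: Monte Carlo check of the conditional mean (2.119),
# E[X|y] = mu_X + rho (s_x/s_y)(y - mu_Y), for a bivariate Gaussian.
import numpy as np

rng = np.random.default_rng(4)
mu_x, mu_y, s_x, s_y, rho = 1.0, -1.0, 2.0, 1.0, 0.8
C = np.array([[s_x**2, rho * s_x * s_y],
              [rho * s_x * s_y, s_y**2]])
xy = rng.multivariate_normal([mu_x, mu_y], C, size=2_000_000)

y0 = 0.0
sel = np.abs(xy[:, 1] - y0) < 0.02          # condition on Y ~ y0
cond_mean = xy[sel, 0].mean()
exact = mu_x + rho * (s_x / s_y) * (y0 - mu_y)   # Eq. (2.119)
print(abs(cond_mean - exact))  # small
```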
Exercises

EXERCISE 2.1. Find the mean, variance and characteristic function of a normal random variable.

EXERCISE 2.2. Find the cumulant functions of a normal random variable by using the characteristic function in equation (2.43), and show that kn(X) = 0 for n ≥ 3.

EXERCISE 2.3. Let X ∈ N(μX, σX^2). Show that

E[(X − μX)^k] = 1·3·5···(k − 1) σX^k, k even; 0, k odd. (2.121)

EXERCISE 2.4. Let X ∈ N(μX, σX^2) and Y = a + bX with b ≠ 0. Find pY(y), μY and σY.

EXERCISE 2.5. Let X ∈ N(0, 1) and Y = |X|. Show that

pY(y) = √(2/π) e^{−y^2/2}, y ≥ 0. (2.122)

EXERCISE 2.6. Consider the following non-negative expectation:

E[((X − μX) + λ(Y − μY))^2] = σX^2 + λ^2 σY^2 + 2λκXY ≥ 0, ∀λ. (2.123)

By considering equation (2.123) as a quadratic in λ, prove the Schwarz inequality

σX^2 σY^2 ≥ κXY^2. (2.124)

Hence, |ρXY| ≤ 1.

EXERCISE 2.7. Let X ∈ U(0, 2π) and Y = sin(X). Show that

pY(y) = 1/(π√(1 − y^2)), −1 ≤ y ≤ 1. (2.125)

EXERCISE 2.8. Consider the joint probability density of X and Y,

pXY(x, y) = 1, 0 < x < 1, 0 < y < 1. (2.126)

Let Z = X + Y. Show that

pZ(z) = z, 0 < z < 1; 2 − z, 1 < z < 2. (2.127)

EXERCISE 2.9. Let pXY(x, y) be the joint probability density of X and Y. Let V = X + Y and W = X − Y. Show that the joint probability density pVW(v, w) of V and W is given by

pVW(v, w) = (1/2) pXY((v + w)/2, (v − w)/2). (2.128)

By using this result, show that

pV(v) = ∫_{−∞}^{∞} pXY(x, v − x) dx. (2.129)

EXERCISE 2.10. Let R ∈ R(σ^2) and θ ∈ U(0, 2π) be mutually independent random variables, and let X = R cos θ and Y = R sin θ. Show that X and Y are mutually independent and identically distributed random variables, N(0, σ^2). Recall that for Gaussian variables, you only need to show that they are uncorrelated in order to prove the independence.

EXERCISE 2.11. Let X1, X2, . . . , Xn have a joint probability density function

pX(x1, x2, . . . , xn) = (1/(2π)^{n/2}) exp[−(1/2)(x1^2 + x2^2 + ··· + xn^2)]. (2.130)

Let Yk = Σ_{j=1}^{k} Xj, k = 1, 2, . . . , n. Find the joint probability density function pY for Y1, Y2, . . . , Yn.

EXERCISE 2.12. Let R ∈ R(σ^2) and Φ ∈ U(0, 2π) be independent random variables. Consider the transformation

[X; Y] = [R cos(α + Φ); R sin(α + Φ)], (2.131)

where α is an arbitrary constant. Show that X ∈ N(0, σ^2), Y ∈ N(0, σ^2) and that X and Y are mutually independent.

EXERCISE 2.13. Let U1 ∈ U(0, 1) and U2 ∈ U(0, 1) be independent, uniformly distributed random variables. Consider the transformation

[X1; X2] = [√(−2 ln U2) cos(2πU1); √(−2 ln U2) sin(2πU1)]. (2.132)

Show that X1 ∈ N(0, 1), X2 ∈ N(0, 1) and that X1 and X2 are mutually independent. This example is known as the Box–Muller transformation.

EXERCISE 2.14. Let X ∈ P(λX) and Y ∈ P(λY) be two independent and discrete Poisson random variables, and let Z = X + Y. (a) Show that the characteristic function of Z is MZ(θ) = e^{(λX + λY)(e^{iθ} − 1)}. (b) Show that Z ∈ P(λX + λY).
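The Box–Muller transformation of Exercise 2.13 is also a practical recipe for generating normal samples from uniform ones. The sketch below (an illustration, not from the text) applies it and checks the claimed statistics by sampling:

```python
# Sketch: the Box-Muller transformation of Exercise 2.13, checked
# empirically: x1, x2 should be N(0,1) and uncorrelated.
import numpy as np

rng = np.random.default_rng(5)
u1 = rng.uniform(size=1_000_000)
u2 = rng.uniform(size=1_000_000)

r = np.sqrt(-2 * np.log(u2))       # Rayleigh-distributed radius
x1 = r * np.cos(2 * np.pi * u1)
x2 = r * np.sin(2 * np.pi * u1)

print(abs(x1.mean()), abs(x1.var() - 1))   # both small: x1 ~ N(0, 1)
print(abs(np.mean(x1 * x2)))               # ~0: x1, x2 uncorrelated
```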
Chapter 3
Stochastic Processes
3.1. Definitions

A stochastic process is an indexed set {X(t), t ∈ T} of random variables X(t) defined on the same sample space Ω. Here t is the index parameter and T denotes the index set. In general, T can be discrete or continuous. In this book, we are interested in the case where T is a continuous time interval and X(t) is a continuous random variable; that is, we are only interested in continuous stochastic processes as functions of time t ∈ R^1. An example of a scalar stochastic process is the stress response at a critical point on a building during an earthquake. A continuous stochastic process X(t) can essentially be viewed as an infinite dimensional random vector. As in the case of n-dimensional random variables, the statistics of the stochastic process are described by the joint probability density or distribution functions of the random variables in the process. For example, consider the first, second and up to nth order joint probability distribution functions

FX(t1)(x1), FX(t1)X(t2)(x1, x2), . . . , FX(t1)···X(tn)(x1, . . . , xn), (3.1)

where tk ∈ R^1 (1 ≤ k ≤ n) are the time instances. These probability distribution functions are obviously functions of the index parameters tk besides the random variables xk. The following notation is adopted for brevity:

FX(x1, t1; x2, t2; . . . ; xn, tn) = FX(t1)X(t2)···X(tn)(x1, x2, . . . , xn). (3.2)

FX(x1, t1; x2, t2; . . . ; xn, tn) is invariant under arbitrary permutations of the pairs (xk, tk) with 1 ≤ k ≤ n, according to the definition of the probability distribution function. The probability density function of the nth order of the stochastic process is defined as

pX(x1, t1; x2, t2; . . . ; xn, tn) = ∂^n FX(x1, t1; x2, t2; . . . ; xn, tn) / (∂x1 ∂x2 ··· ∂xn). (3.3)
3.2. Expectations

The mean value function μX(t) of the stochastic process X(t) is defined as

μX(t) = E[X(t)] = ∫_{−∞}^{∞} x pX(x, t) dx. (3.4)

The autocorrelation function of the stochastic process X(t) at two time instances t1 and t2 is defined as

RXX(t1, t2) = E[X(t1)X(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 pX(x1, t1; x2, t2) dx1 dx2. (3.5)

The autocovariance function of the stochastic process X(t) at two time instances t1 and t2 is defined as

κXX(t1, t2) = E[(X(t1) − μX(t1))(X(t2) − μX(t2))]
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x1 − μX(t1))(x2 − μX(t2)) pX(x1, t1; x2, t2) dx1 dx2
= RXX(t1, t2) − μX(t1)μX(t2). (3.6)

Due to the symmetry of the expectation, it follows that

RXX(t1, t2) = RXX(t2, t1),  κXX(t1, t2) = κXX(t2, t1). (3.7)

When the process has zero mean, we have RXX(t1, t2) = κXX(t1, t2). The variance function σX^2(t) of the stochastic process at a given time is given by

σX^2(t) = E[(X(t) − μX(t))^2]. (3.8)

The autocorrelation coefficient function ρXX(t1, t2) of the process is defined as

ρXX(t1, t2) = κXX(t1, t2) / (σX(t1)σX(t2)). (3.9)

By using the Schwarz inequality, we can show that |ρXX(t1, t2)| ≤ 1 for any t1 and t2. The nth order moment functions of X(t) are defined as E[X(t)^n], and the corresponding central moment functions are defined as E[(X(t) − μX(t))^n]. Likewise, the characteristic function and the cumulant functions of the stochastic process X(t) can be defined in the same way as in the case of a single random
variable. They are listed below.
$$M_X(\theta,t) \equiv E\big[e^{i\theta X(t)}\big] = \int_{-\infty}^{\infty} e^{i\theta x}\, p_X(x,t)\, dx. \qquad (3.10)$$
Expanding the exponential term $e^{i\theta x}$ in a power series, we have
$$M_X(\theta,t) = 1 + \sum_{n=1}^{\infty} \frac{(i\theta)^n}{n!}\, E\big[X(t)^n\big]. \qquad (3.11)$$
The $n$th order moment and cumulant functions of the stochastic process are given by
$$E\big[X(t)^n\big] = \frac{1}{i^n}\,\frac{\partial^n M_X(\theta,t)}{\partial\theta^n}\bigg|_{\theta=0}, \qquad (3.12)$$
$$k_n\big[X(t)\big] = \frac{1}{i^n}\,\frac{\partial^n \ln M_X(\theta,t)}{\partial\theta^n}\bigg|_{\theta=0}. \qquad (3.13)$$
3.3. Vector process

A stochastic vector process consists of $N$ scalar random processes
$$\mathbf{X}^T(t) = \big[X_1(t), X_2(t), \ldots, X_N(t)\big], \qquad (3.14)$$
where all random variables are defined on the same sample space. For example, consider the response of an off-shore structure subject to random wave excitations. We may construct a vector process including displacements, stresses and crack damages of the structure at certain critical locations to represent the state of the structure. A vector process can be considered as a doubly indexed set of random variables, where $t \in \mathbb{R}^1$ and the subscripts of $X_k(t)$ are the index parameters. In the same manner as for the scalar stochastic process, the statistics of a vector process are specified by a sequence of probability distribution functions
$$F_{X_{i_1}(t_1)}(x_1),\ F_{X_{i_1}(t_1)X_{i_2}(t_2)}(x_1,x_2),\ \ldots,\ F_{X_{i_1}(t_1)\cdots X_{i_n}(t_n)}(x_1,x_2,\ldots,x_n). \qquad (3.15)$$
The distribution functions of the $n$th order depend on the time instances $t_k$ ($1 \le k \le n$) and on the subscripts $i_k$ of the process. The following notation will be adopted for brevity,
$$F_X(x_1,t_1,i_1; x_2,t_2,i_2; \ldots; x_n,t_n,i_n) = F_{X_{i_1}(t_1)\cdots X_{i_n}(t_n)}(x_1,x_2,\ldots,x_n). \qquad (3.16)$$
In this case, $F_X(x_1,t_1,i_1; \ldots; x_n,t_n,i_n)$ is invariant with respect to an arbitrary permutation of the triplets $(x_k,t_k,i_k)$, $k = 1, 2, \ldots$ The cross-covariance function $\kappa_{X_{i_1}X_{i_2}}(t_1,t_2)$ between two different stochastic processes $X_{i_1}(t)$ and $X_{i_2}(t)$ is defined as
$$\kappa_{X_{i_1}X_{i_2}}(t_1,t_2) = E\big[\big(X_{i_1}(t_1)-\mu_{X_{i_1}}(t_1)\big)\big(X_{i_2}(t_2)-\mu_{X_{i_2}}(t_2)\big)\big] = R_{X_{i_1}X_{i_2}}(t_1,t_2) - \mu_{X_{i_1}}(t_1)\mu_{X_{i_2}}(t_2), \qquad (3.17)$$
where $R_{X_{i_1}X_{i_2}}(t_1,t_2) = E[X_{i_1}(t_1)X_{i_2}(t_2)]$ is the correlation function between $X_{i_1}(t_1)$ and $X_{i_2}(t_2)$, and $\mu_{X_i}(t) = E[X_i(t)]$ is the mean of $X_i(t)$ at time $t$. The cross-covariance function is also symmetric,
$$\kappa_{X_{i_1}X_{i_2}}(t_1,t_2) = \kappa_{X_{i_2}X_{i_1}}(t_2,t_1). \qquad (3.18)$$
The covariance matrix of the vector process $\mathbf{X}(t)$ is defined as
$$\mathbf{C}_{XX}(t) = E\big[\big(\mathbf{X}(t)-\boldsymbol{\mu}_X(t)\big)\big(\mathbf{X}(t)-\boldsymbol{\mu}_X(t)\big)^T\big]. \qquad (3.19)$$
The cross-correlation coefficient function $\rho_{X_{i_1}X_{i_2}}(t_1,t_2)$ between two different stochastic processes $X_{i_1}(t)$ and $X_{i_2}(t)$ is defined as
$$\rho_{X_{i_1}X_{i_2}}(t_1,t_2) = \frac{\kappa_{X_{i_1}X_{i_2}}(t_1,t_2)}{\sigma_{X_{i_1}}(t_1)\,\sigma_{X_{i_2}}(t_2)}. \qquad (3.20)$$
By the Schwarz inequality, $|\rho_{X_{i_1}X_{i_2}}(t_1,t_2)| \le 1$ for any $t_1$ and $t_2$.
3.4. Gaussian process

Consider a Gaussian probability density function for $X(t)$ at a given time $t$,
$$p_X(x,t) = \frac{1}{\sqrt{2\pi}\,\sigma(t)} \exp\left[-\frac{(x-\mu_X(t))^2}{2\sigma^2(t)}\right], \qquad (3.21)$$
where $\mu_X(t) = E[X(t)]$ is the mean, and $\sigma^2(t) = E[(X(t)-\mu_X(t))^2]$ is the variance. $X(t)$ is said to be a Gaussian process if every finite linear combination of the form
$$Y = \sum_{k=1}^{N} a_k X(t_k) \qquad (3.22)$$
is a Gaussian random variable.

THEOREM 3.1. (Doob, 1953) Let $\mu_X(t)$ be any function of $t \in \mathbb{R}^1$ and $\kappa_{XX}(t,s)$ be a function of $(t,s) \in \mathbb{R}^2$ such that
$$\kappa_{XX}(t,s) = \kappa_{XX}(s,t). \qquad (3.23)$$
Let $t_1, t_2, \ldots, t_N$ be a finite set in $\mathbb{R}^1$, and let the matrix $[\kappa_{XX}(t_m,t_n)]$ be positive definite. Then there exists a Gaussian process such that
$$\mu_X(t) = E\big[X(t)\big], \qquad \kappa_{XX}(t,s) = E\big[\big(X(t)-\mu_X(t)\big)\big(X(s)-\mu_X(s)\big)\big]. \qquad (3.24)$$
Let $X(t)$ be a Gaussian process. Then $\mathbf{X}^T = [X(t_1), \ldots, X(t_n)]$ is normally distributed for any order $n$ and arbitrary time instances $t_k \in \mathbb{R}^1$. By definition, $Y = \boldsymbol{\theta}^T\mathbf{X}$ is a Gaussian random variable, where $\boldsymbol{\theta}$ is an $n$-dimensional vector of real numbers. The mean and variance of $Y$ are given by
$$E[Y] = \sum_{k=1}^{n} \theta_k\,\mu_X(t_k), \qquad \mathrm{Var}[Y] = \sum_{j,k=1}^{n} \theta_j\theta_k\,\kappa_{XX}(t_j,t_k). \qquad (3.25)$$
The characteristic function of the vector $\mathbf{X}$ is then given by
$$M_X(\boldsymbol{\theta}) = E\big[e^{i\boldsymbol{\theta}^T\mathbf{X}}\big] = E\big[e^{iY}\big] = \exp\left[i\sum_{k=1}^{n}\theta_k\,\mu_X(t_k) - \frac{1}{2}\sum_{j,k=1}^{n}\theta_j\theta_k\,\kappa_{XX}(t_j,t_k)\right]. \qquad (3.26)$$
Hence, the mean vector $\boldsymbol{\mu}_X$ and covariance matrix $\mathbf{C}_{XX}$ of the random vector $\mathbf{X}$ can both be obtained from the mean function $\mu_X(t)$ and the autocovariance function $\kappa_{XX}(t,s)$. Therefore, a Gaussian process is completely described by its mean and autocovariance functions.
3.5. Harmonic process

Let $R \in \mathcal{R}(\sigma^2)$ and $\Theta \in U(0,2\pi)$ be mutually independent random variables. Consider a stochastic process $X(t)$ defined as
$$X(t) = R\cos(\omega_0 t + \Theta) = R\cos\Theta\cos(\omega_0 t) - R\sin\Theta\sin(\omega_0 t), \qquad (3.27)$$
where $\omega_0$ is a deterministic frequency. Equation (3.27) represents a harmonic process. It is generated from only two random variables, $R$ and $\Theta$. Since $\Theta \in U(0,2\pi)$, it follows that
$$E\big[\cos(\omega_0 t + \Theta)\big] = \frac{1}{2\pi}\int_0^{2\pi} \cos(\omega_0 t + \theta)\, d\theta = 0. \qquad (3.28)$$
We have the mean and autocovariance function of the process as
$$\mu_X(t) = E[R]\,E\big[\cos(\omega_0 t + \Theta)\big] = 0, \qquad (3.29)$$
$$\begin{aligned} \kappa_{XX}(t_1,t_2) &= E\big[R^2\big]\,E\big[\cos(\omega_0 t_1 + \Theta)\cos(\omega_0 t_2 + \Theta)\big] \\ &= \sigma^2\Big(E\big[\cos\omega_0(t_1-t_2)\big] + E\big[\cos\big(\omega_0(t_1+t_2) + 2\Theta\big)\big]\Big) \\ &= \sigma^2\cos\omega_0(t_1-t_2). \end{aligned}$$
Let $t = t_1 = t_2$. We have
$$\sigma_X^2(t) = \kappa_{XX}(t,t) = \sigma^2. \qquad (3.30)$$
Furthermore, we clearly have
$$\kappa_{XX}(t_1,t_2) = \kappa_{XX}(t_2,t_1) = \kappa_{XX}(\tau) = \sigma^2\cos\omega_0\tau, \qquad (3.31)$$
where $\tau = t_1 - t_2$. Since $\mu_X(t) = 0$, we have $R_{XX}(\tau) = \kappa_{XX}(\tau) = \sigma^2\cos(\omega_0\tau)$. Recall that $R\cos\Theta$ and $R\sin\Theta$ are mutually independent and identically distributed $N(0,\sigma^2)$ random variables. Since $X(t)$ is a linear combination of these random variables according to equation (3.27), $X(t)$ is $N(0,\sigma^2)$ too. The proof of this statement is left as an exercise. Even though $X(t)$ is normally distributed for all $t$, the stochastic process $X(t)$ is not Gaussian, because none of the higher order joint distributions are normal.
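The moment results (3.29)–(3.31) are easy to confirm by Monte Carlo sampling of the two underlying random variables; NumPy's `rayleigh` with scale $\sigma$ gives $E[R^2] = 2\sigma^2$, matching $R \in \mathcal{R}(\sigma^2)$ here. A sketch (the particular values of $\sigma$, $\omega_0$, $t_1$, $t_2$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, omega0 = 1.0, 2.0
n = 200_000

# R Rayleigh-distributed with E[R^2] = 2 sigma^2, Theta uniform on (0, 2 pi)
R = rng.rayleigh(scale=sigma, size=n)
Theta = rng.uniform(0.0, 2.0 * np.pi, size=n)

t1, t2 = 0.3, 1.1
X1 = R * np.cos(omega0 * t1 + Theta)
X2 = R * np.cos(omega0 * t2 + Theta)

print(X1.mean())          # ~ 0            (eq. 3.29)
print(X1.var())           # ~ sigma^2      (eq. 3.30)
print(np.mean(X1 * X2))   # ~ sigma^2 cos(omega0 (t1 - t2))  (eq. 3.31)
print(sigma**2 * np.cos(omega0 * (t1 - t2)))
```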
3.6. Stationary process

3.6.1. Scalar process

A stochastic process $X(t)$ is said to be stationary in the strict sense (strictly stationary, also known as strictly homogeneous) if the set of finite dimensional joint probability distributions of the process is invariant under a translation $t \to t + a$ for every $a \in \mathbb{R}^1$:
$$\begin{aligned} F_X(x_1,t_1) &= F_X(x_1,t_1+a), \\ F_X(x_1,t_1; x_2,t_2) &= F_X(x_1,t_1+a; x_2,t_2+a), \\ &\;\;\vdots \\ F_X(x_1,t_1; \ldots; x_n,t_n) &= F_X(x_1,t_1+a; \ldots; x_n,t_n+a), \\ &\;\;\vdots \end{aligned} \qquad (3.32)$$
These equations require that $t$ be defined on an unbounded set, i.e., $T = \mathbb{R}^1$. If equation (3.32) holds only for $n = 1$ and $n = 2$, the process is stationary in the weak sense, or simply weakly stationary, also known as weakly homogeneous. In this book, the terms strictly and weakly stationary are used synonymously with strongly and weakly homogeneous.
Take the special value $a = -t_1$. Equation (3.32) indicates that for weakly stationary processes, the first order distribution function is independent of $t$, and the second order distribution function depends only on the difference $t_2 - t_1$, i.e.,
$$F_X(x,t) = F_X(x), \qquad F_X(x_1,t_1; x_2,t_2) = F_X(x_1,x_2; t_2-t_1). \qquad (3.33)$$
For a strictly homogeneous process, the higher order distribution functions depend on the differences $t_2-t_1, t_3-t_1, \ldots, t_n-t_1$, i.e.,
$$F_X(x_1,t_1; x_2,t_2; \ldots; x_n,t_n) = F_X(x_1,x_2,\ldots,x_n; t_2-t_1,\ldots,t_n-t_1). \qquad (3.34)$$
Since $F_X(x)$ is independent of $t$ for a weakly homogeneous process, the mean and variance functions are independent of $t$, i.e., $\mu_X(t) = \mu_X$ and $\sigma_X^2(t) = \sigma_X^2$. As $F_X(x_1,t_1; x_2,t_2)$ is a function of $\tau = t_2 - t_1$, the autocovariance function can be written as
$$\kappa_{XX}(t_1,t_2) = \kappa_{XX}(\tau). \qquad (3.35)$$
The symmetry of $\kappa_{XX}(t_1,t_2)$ with respect to $t_1$ and $t_2$ implies that
$$\kappa_{XX}(\tau) = \kappa_{XX}(-\tau). \qquad (3.36)$$
Hence the autocovariance function of a weakly homogeneous process is an even function of the time difference.

3.6.2. Vector process

The previous definitions can be extended to $N$-dimensional stochastic vector processes $\mathbf{X}(t)$. The process is strictly stationary if for any order $n$, any index parameters $(t_k, i_k)$ with $t_k \in \mathbb{R}^1$ and $1 \le i_k \le N$, and any $a \in \mathbb{R}^1$, the probability distribution functions satisfy
$$\begin{aligned} F_X(x_{i_1},t_1,i_1) &= F_X(x_{i_1},t_1+a,i_1), \\ F_X(x_{i_1},t_1,i_1; x_{i_2},t_2,i_2) &= F_X(x_{i_1},t_1+a,i_1; x_{i_2},t_2+a,i_2), \\ &\;\;\vdots \\ F_X(x_{i_1},t_1,i_1; \ldots; x_{i_n},t_n,i_n) &= F_X(x_{i_1},t_1+a,i_1; \ldots; x_{i_n},t_n+a,i_n), \\ &\;\;\vdots \end{aligned} \qquad (3.37)$$
A vector process is weakly stationary if equation (3.37) holds only for $n = 1$ and $n = 2$. When the vector process $\mathbf{X}(t)$ is stationary, all its elements are stationary. The converse is not necessarily true.
For a weakly stationary vector process, the mean value functions are constant, and the cross-covariance functions depend only on the time difference $\tau = t_2 - t_1$,
$$\mu_{X_i}(t) = \mu_{X_i}, \qquad \kappa_{X_{i_1}X_{i_2}}(t_1,t_2) = \kappa_{X_{i_1}X_{i_2}}(\tau). \qquad (3.38)$$
The symmetry of $\kappa_{X_{i_1}X_{i_2}}(t_1,t_2)$ leads to
$$\kappa_{X_{i_1}X_{i_2}}(\tau) = \kappa_{X_{i_2}X_{i_1}}(-\tau). \qquad (3.39)$$
An application note

All random processes observed in nature are of limited extent in space and time. Hence, no natural process can be strictly or weakly homogeneous as discussed here; the concept of a stationary process is a mathematical approximation. A necessary condition for a physical process to be modeled as stationary is that significant correlations among the random variables are present only within a time difference $\tau = |t_1 - t_2|$ which is small compared to the duration $T$ of observation of the process. Let $\{X(t), t \in T\}$ be the response of a vibratory system exposed to random excitations during the interval $T$. The above requirement is met if the decay interval $\tau_1$ of free vibrations fulfils $\tau_1 \ll T$. For wind loads, $T \approx 1$–$2$ hours, whereas $\tau_1 < 1$ minute depending on the damping, and the requirement is consequently fulfilled in this case. However, for earthquake excitations, $\tau_1 \sim T$. Hence, the structural response to earthquakes cannot be modelled as a stationary process.

3.6.3. Correlation length

A quantitative measure of the stationarity of stochastic processes can be defined in terms of the autocorrelation coefficient function as
$$\tau_0 = \int_0^{\infty} \rho_{XX}(\tau)\, d\tau, \qquad (3.40)$$
where $\tau_0$ is referred to as the correlation length of the process. When the process is totally uncorrelated, so that $\rho_{XX}(\tau) = \delta(\tau)$, then $\tau_0 = 1$. When there is strong correlation between $X(t_1)$ and $X(t_2)$ even as $\tau = t_2 - t_1 \gg 1$, we can have $\tau_0 \to \infty$.
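As a numerical illustration (with a hypothetical exponentially decaying correlation, not taken from the text), $\rho_{XX}(\tau) = e^{-\tau/\tau_c}$ gives $\tau_0 = \tau_c$ exactly, which a trapezoidal evaluation of (3.40) reproduces:

```python
import numpy as np

# Correlation length (3.40) for a hypothetical exponentially correlated
# process, rho(tau) = exp(-tau / tau_c), for which tau_0 = tau_c exactly.
tau_c = 0.7
tau = np.linspace(0.0, 50.0 * tau_c, 100_001)   # truncate where rho ~ 0
d = tau[1] - tau[0]
rho = np.exp(-tau / tau_c)
tau0 = 0.5 * d * (rho[:-1] + rho[1:]).sum()     # trapezoidal rule
print(tau0)   # ~ tau_c = 0.7
```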
3.7. Ergodic process

The expectations of a stochastic process are also known as ensemble averages. To evaluate the expectations, we need a large number of realizations of the process. Assume that $N$ realizations $x_k(t)$ ($1 \le k \le N$) of the process $X(t)$
are available. The mean value function and the autocovariance function can be computed from the ensemble averages
$$\mu_X(t) = E\big[X(t)\big] = \lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^{N} x_n(t), \qquad (3.41)$$
$$\kappa_{XX}(t_1,t_2) = \lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^{N} x_n(t_1)x_n(t_2) - \mu_X(t_1)\mu_X(t_2), \qquad (3.42)$$
where $\mu_X(t)$ in equation (3.42) is obtained from equation (3.41). The accuracy of the averages depends on the number of realizations $N$. For many physical processes, e.g., earthquakes, it is impractical to obtain a large number of observations of the stochastic process. However, quite often a single, long time series of the process is available. Consider an observation $x(t)$ of a stochastic process $X(t)$ in the interval $[-T, T]$ where $T$ is sufficiently large. We define a temporal average of $X(t)$ as
$$\langle X(t)\rangle = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x(t)\, dt. \qquad (3.43)$$
When $\langle X(t)\rangle = E[X(t)]$, we say that the process is ergodic in the mean. Since $\langle X(t)\rangle$ is independent of time $t$, $E[X(t)]$ must be a constant. Similarly, a process ergodic in the mean square satisfies
$$\big\langle X^2(t)\big\rangle = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x^2(t)\, dt = E\big[X^2(t)\big]. \qquad (3.44)$$
The process is ergodic in the correlation if the temporal correlation function estimate is equal to the ensemble average of the correlation function,
$$\big\langle X(t)X(t+\tau)\big\rangle = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x(t)x(t+\tau)\, dt = E\big[X(t)X(t+\tau)\big] = R_{XX}(\tau). \qquad (3.45)$$
The process is ergodic in the autocovariance if the temporal autocovariance function estimate is equal to the corresponding ensemble average,
$$\langle\kappa_{XX}\rangle(\tau) = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x(t)x(t+\tau)\, dt - \langle X(t)\rangle^2 = \kappa_{XX}(\tau). \qquad (3.46)$$
Since the temporal averages $\langle X(t)X(t+\tau)\rangle$ and $\langle\kappa_{XX}\rangle(\tau)$ are functions of $\tau$ only, the corresponding ensemble averages of an ergodic process must be functions of $\tau$ only as well. Furthermore, ergodicity in the higher order joint moments implies ergodicity in the lower order moments, whereas the reverse is not necessarily true. Hence, ergodicity in the covariance implies ergodicity in the mean value. Therefore, a process ergodic in the autocovariance must also be a weakly stationary process.

3.7.1. Statistical properties of time averages

Consider now a process ergodic in the autocovariance. The expectations of the temporal average mean and autocovariance are given by
$$E\big[\langle X(t)\rangle\big] = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} E\big[X(t)\big]\, dt = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} \mu_X\, dt = \mu_X, \qquad (3.47)$$
$$E\big[\langle\kappa_{XX}\rangle(\tau)\big] = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} \Big(E\big[X(t)X(t+\tau)\big] - \mu_X^2\Big)\, dt = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} \kappa_{XX}(\tau)\, dt = \kappa_{XX}(\tau). \qquad (3.48)$$
These results indicate that the time averages in equations (3.43) and (3.46) are unbiased. Furthermore, consider the variance of the temporal average $\langle X(t)\rangle$:
$$\mathrm{Var}\big[\langle X(t)\rangle\big] = \lim_{T\to\infty} \frac{1}{4T^2}\int_{-T}^{T}\int_{-T}^{T} E\big[\big(X(t_1)-\mu_X\big)\big(X(t_2)-\mu_X\big)\big]\, dt_1\, dt_2 = \lim_{T\to\infty} \frac{1}{4T^2}\int_{-T}^{T}\int_{-T}^{T} \kappa_{XX}(t_1-t_2)\, dt_1\, dt_2. \qquad (3.49)$$
Let $\tau_1 = t_1 + t_2$ and $\tau_2 = t_1 - t_2$. We have the Jacobian
$$J = \det\left[\frac{\partial(t_1,t_2)}{\partial(\tau_1,\tau_2)}\right] = \frac{1}{2}. \qquad (3.50)$$
Figure 3.1. The mapping of the integration domain from $(t_1,t_2)$ to $(\tau_1,\tau_2)$.

Hence, we obtain
$$\mathrm{Var}\big[\langle X(t)\rangle\big] = \lim_{T\to\infty} \frac{1}{4T^2}\iint_{D_\tau} \frac{1}{2}\,\kappa_{XX}(\tau_2)\, d\tau_1\, d\tau_2 = \lim_{T\to\infty} \frac{1}{T}\int_0^{2T} \left(1 - \frac{\tau_2}{2T}\right)\kappa_{XX}(\tau_2)\, d\tau_2, \qquad (3.51)$$
where the domain $D_\tau$ is illustrated in Figure 3.1. This indicates that the temporal average $\langle X(t)\rangle$ converges to a deterministic number as $T \to \infty$.

EXAMPLE 3.1. Recall the harmonic process defined in equation (3.27). The probability density function of $X(t)$ is a zero-mean Gaussian function with a constant variance $\sigma^2$. Let $x(t) = r\cos(\omega_0 t + \theta)$ be a realization of the process, where $r$ and $\theta$ are realizations of the random amplitude and phase. Using this time series in equations (3.43) and (3.46), the following results are obtained as $T \to \infty$:
$$\langle X(t)\rangle = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} r\cos(\omega_0 t + \theta)\, dt = 0 = \mu_X, \qquad (3.52)$$
$$\langle\kappa_{XX}\rangle(\tau) = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} r^2\cos(\omega_0 t + \theta)\cos\big(\omega_0(t+\tau) + \theta\big)\, dt - \langle X(t)\rangle^2 = \frac{r^2}{2}\cos(\omega_0\tau) \ne \kappa_{XX}(\tau). \qquad (3.53)$$
Equation (3.52) is the true mean value function of the harmonic process; the harmonic process is consequently ergodic in the mean. In general, however, the temporal average autocovariance amplitude $r^2/2$ is different from $\sigma^2$. Hence, the harmonic process is not ergodic in the autocovariance.
3.7.2. Temporal density estimation

In addition to the moments, we can also estimate the probability density function of an ergodic stochastic process from an available time series. This procedure is known as ergodic sampling. Assume that the time series is sampled at a time interval $\Delta t = T/N$, where $N$ is the total number of points in the series in $t \in [0, T]$. Denote $X_i = x(i\,\Delta t)$, a random number indexed by the time step $i$. The probability of the random samples $X_i$ falling in the interval $(x, x+\Delta x]$ is approximately equal to $p_X(x)\,\Delta x$, where $p_X(x)$ is the probability density function of $X$. This probability is estimated by the fraction $N(x)/N$, where $N(x)$ is the number of samples $X_i$ falling in the interval $(x, x+\Delta x]$. We have
$$p_X(x) \approx \frac{1}{\Delta x}\,\frac{N(x)}{N}. \qquad (3.54)$$
Equation (3.54) is known as the histogram estimate of the probability density function and converges to the true probability density function as $N \to \infty$ and $\Delta x \to 0$ under certain conditions (Silverman, 1986). Introduce an indicator function $1_A(x(t))$ for the interval, defined as
$$1_A(y) = \begin{cases} 1, & y \in A, \\ 0, & y \notin A, \end{cases} \qquad (3.55)$$
where $A = (x, x+\Delta x]$. Then the histogram estimate can be written in temporal average form as
$$p_X(x) \approx \frac{1}{\Delta x\, T}\int_0^{T} 1_{(x,x+\Delta x]}\big(x(t)\big)\, dt. \qquad (3.56)$$
Since the probability density function determines the moments of the underlying process to arbitrary order, the stochastic process must be ergodic in the moments of all orders for the density estimate to be accurate. The probability distribution function on the interval $(-\infty, x]$ can be estimated from the time average
$$F_X(x) \approx \frac{1}{T}\int_0^{T} 1_{(-\infty,x]}\big(x(t)\big)\, dt. \qquad (3.57)$$
EXAMPLE 3.2. Let $x(t) = r\cos(\omega_0 t + \theta)$ be a realization of the harmonic process defined in equation (3.27), where $r$ and $\theta$ are realizations of the random amplitude and phase. Note that $x(t)$ is a periodic function of the phase angle $\varphi = \omega_0 t + \theta$ and $\varphi \in U(0,2\pi)$. Considering the transformation $x = r\cos\varphi$ from $\varphi$ to $x$, we can obtain the probability density function for $X$ as
$$p_X(x) = \begin{cases} \dfrac{1}{\pi\sqrt{r^2 - x^2}}, & |x| < r, \\[4pt] 0, & |x| > r. \end{cases} \qquad (3.58)$$
Note that equation (3.58), shown in Figure 3.2, is completely different from a Gaussian density.

Figure 3.2. The probability density function of the harmonic process.

EXAMPLE 3.3. Consider a different process $X(t) = r_0\cos(\omega_0 t + \Theta)$ with $\Theta \in U(0,2\pi)$ and a deterministic amplitude $r_0$. Ergodicity in the moments of any order can be shown. Also, the probability density function of $X$ is given by equation (3.58) with $r = r_0$.
3.8. Poisson process

The Poisson process is a counting process of arrivals of random events, such as cars on a highway, bumps in the road that a car goes over, customers coming to a shop, or telephone calls. Let $N(t)$ denote the stochastic Poisson process that counts the number of random events arriving in the time interval $(0, t]$. $N(t)$ is defined by means of the following set of assumptions.

1. Independent arrivals: The numbers of arrivals of the random events in nonoverlapping time intervals are independent random numbers.
2. Stationary arrival rate: There exists a constant $\lambda > 0$ such that the probability of exactly one arrival in the time interval $(t, t+dt]$ is equal to $\lambda\, dt$ for any time $t$.
3. Isolated arrivals: The probability of simultaneous arrivals of the random events in the time interval $(t, t+dt]$ is negligible.
From the above assumptions, we have the conditional probabilities
$$P\big(N(t+dt) = n \mid N(t) = n-1\big) = \lambda\, dt, \qquad P\big(N(t+dt) = n \mid N(t) = n\big) = 1 - \lambda\, dt. \qquad (3.59)$$
Let $P_N(n,t) = P(\{N(t) = n\})$ be the probability distribution function of the Poisson process. We have
$$\begin{aligned} P_N(n, t+dt) &= P\big(N(t+dt) = n\big) \\ &= P\big(N(t) = n-1\big)\,P\big(N(t+dt) = n \mid N(t) = n-1\big) \\ &\quad + P\big(N(t) = n\big)\,P\big(N(t+dt) = n \mid N(t) = n\big) \\ &= P_N(n-1,t)\,\lambda\, dt + P_N(n,t)\,(1 - \lambda\, dt). \end{aligned} \qquad (3.60)$$
From here, we obtain a differential equation for the probability distribution function
$$\frac{dP_N(n,t)}{dt} = -\lambda P_N(n,t) + \lambda P_N(n-1,t). \qquad (3.61)$$
Since $n$ is nonnegative, we must have
$$P_N(-1,t) = 0, \qquad t > 0. \qquad (3.62)$$
Also, we have the initial condition
$$P_N(0,0) = 1. \qquad (3.63)$$
By solving equation (3.61) recursively starting from $n = 0$, we obtain
$$P_N(n,t) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}, \qquad n \ge 0,\; t \ge 0. \qquad (3.64)$$
With this distribution function, we can evaluate the mean, autocorrelation and autocovariance of the Poisson process. The mean is
$$\mu_N(t) = \sum_{n=0}^{\infty} n\, P_N(n,t) = \lambda t\sum_{n=1}^{\infty} \frac{(\lambda t)^{n-1} e^{-\lambda t}}{(n-1)!} = \lambda t. \qquad (3.65)$$
To evaluate the autocorrelation function $E[N(t_1)N(t_2)]$, in principle we need the joint distribution of the Poisson process at two different time instances, which can be difficult to obtain. Instead, we make use of the independent-increments property stated in Assumption 1. Assume that $t_2 > t_1$, and let $N(t_2) = N(t_1) + N(t_2 - t_1)$: the number of arrivals at $t_2$ is the sum of the number of arrivals at the earlier time, $N(t_1)$, plus an increment $N(t_2 - t_1)$ over the time interval $t_2 - t_1$. Note that the time intervals $(0, t_1)$ and $(t_1, t_2)$ are non-overlapping. According to Assumption 1 of
the Poisson process, $N(t_2 - t_1)$ is independent of $N(t_1)$ and has the distribution function
$$P_N(n, t_2 - t_1) = \frac{[\lambda(t_2-t_1)]^n e^{-\lambda(t_2-t_1)}}{n!}, \qquad n \ge 0,\; t_2 - t_1 \ge 0. \qquad (3.66)$$
This solution is obtained by substituting $t$ with $t_2 - t_1$ in equation (3.64). The autocorrelation function of the Poisson process is then evaluated as
$$\begin{aligned} R_N(t_1,t_2) &= E\big[N(t_1)N(t_2)\big] = E\big[N^2(t_1)\big] + E\big[N(t_1)\big]\,E\big[N(t_2-t_1)\big] \\ &= (\lambda t_1)^2 + \lambda t_1 + \lambda t_1\cdot\lambda(t_2-t_1) = \lambda\min(t_1,t_2) + \lambda^2 t_1 t_2. \end{aligned} \qquad (3.67)$$
The autocovariance is obtained as
$$\kappa_{NN}(t_1,t_2) = E\big[\big(N(t_1)-\mu_N(t_1)\big)\big(N(t_2)-\mu_N(t_2)\big)\big] = E\big[N(t_1)N(t_2)\big] - \mu_N(t_1)\mu_N(t_2) = \lambda\min(t_1,t_2). \qquad (3.68)$$
Furthermore, we can obtain the characteristic function of the Poisson process as
$$M_N(\theta,t) = E\big[e^{i\theta N(t)}\big] = \sum_{n=0}^{\infty} e^{i\theta n}\,\frac{(\lambda t)^n e^{-\lambda t}}{n!} = e^{-\lambda t}\sum_{n=0}^{\infty} \frac{(\lambda t e^{i\theta})^n}{n!} = e^{-\lambda t}\,e^{\lambda t e^{i\theta}}. \qquad (3.69)$$
3.8.1. Compound Poisson process

The Poisson process only counts the number of arrivals of the random events. When the nature of the random events is also considered, we have a compound Poisson process. For example, when the heights of road bumps are modeled as random variables, the excitation to the vehicle is a compound Poisson process. Let $Y_n$ denote a set of independent and identically distributed random variables with the probability density function $p_Y(y)$. Define a stochastic process as
$$X(t) = \sum_{n=1}^{N(t)} Y_n, \qquad (3.70)$$
where $N(t)$ is the Poisson process. $X(t)$ is a compound Poisson process. The mean, autocovariance and characteristic function can be evaluated as
$$\begin{aligned} E\big[X(t)\big] &= \sum_{n=1}^{\infty} n\, P_N(n,t)\int_{-\infty}^{\infty} y\, p_Y(y)\, dy = \lambda t\,\mu_Y, \\ \kappa_{XX}(t_1,t_2) &= \lambda\min(t_1,t_2)\,E\big[Y^2\big], \\ M_X(\theta,t) &= E\big[e^{i\theta X(t)}\big] = P\big(N(t)=0\big) + \sum_{k=1}^{\infty} E\big[e^{i\theta\sum_{n=1}^{k} Y_n}\,\big|\, N(t)=k\big]\,P\big(N(t)=k\big) \\ &= e^{\lambda t[M_Y(\theta)-1]}, \qquad t > 0. \end{aligned} \qquad (3.71)$$
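A Monte Carlo sketch of (3.71), with a hypothetical Gaussian mark distribution $Y \sim N(\mu_Y, 1)$ (so that $E[Y^2] = \mu_Y^2 + 1$):

```python
import numpy as np

# Compound Poisson process (3.70) with hypothetical marks Y ~ N(mu_Y, 1).
# Check of (3.71): E[X(t)] = lam*t*mu_Y and Var[X(t)] = lam*t*E[Y^2].
rng = np.random.default_rng(4)
lam, t, mu_Y, m = 3.0, 2.0, 0.5, 100_000

N = rng.poisson(lam * t, size=m)                        # arrival counts
X = np.array([rng.normal(mu_Y, 1.0, size=k).sum() for k in N])

EY2 = mu_Y**2 + 1.0                    # E[Y^2] for Y ~ N(mu_Y, 1)
print(X.mean(), lam * t * mu_Y)        # both ~ 3.0
print(X.var(), lam * t * EY2)          # both ~ 7.5
```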
3.9. Markov process

Many physical systems have a short memory of the past history: only the most recent history affects the outcome in the immediate future. Let $X(t)$ be a stochastic process, and $t_k$ be a time sequence such that $t_1 < t_2 < \cdots < t_n$, with $x_n = X(t_n)$. If the conditional probability density of the process satisfies
$$p_X(x_n,t_n \mid x_{n-1},t_{n-1}; \ldots; x_1,t_1) = p_X(x_n,t_n \mid x_{n-1},t_{n-1}), \qquad (3.72)$$
we say that $X(t)$ is a Markov process. The probability of the event $(x_n,t_n)$ conditional on the past history $(x_{n-1},t_{n-1}; \ldots; x_1,t_1)$ depends only on the most recent past event $(x_{n-1},t_{n-1})$.

Markov processes have some interesting properties. For example, the second order joint probability density function can be written as
$$p_X(x_1,t_1; x_2,t_2) = p_X(x_1,t_1)\, p_X(x_2,t_2 \mid x_1,t_1), \qquad (3.73)$$
the third order joint probability density function can be written as
$$p_X(x_1,t_1; x_2,t_2; x_3,t_3) = p_X(x_1,t_1; x_2,t_2)\, p_X(x_3,t_3 \mid x_1,t_1; x_2,t_2) = p_X(x_1,t_1)\, p_X(x_2,t_2 \mid x_1,t_1)\, p_X(x_3,t_3 \mid x_2,t_2), \qquad (3.74)$$
and so on. This shows that a higher order joint probability density function of a Markov process can always be expressed in terms of the first and second order functions. A stationary Markov process is such that the conditional probability density function is a function of the time difference only,
$$p_X(x_n,t_n \mid x_{n-1},t_{n-1}) = p_X(x_n, t_n - t_{n-1} \mid x_{n-1}, 0). \qquad (3.75)$$
More discussion of Markov processes will be presented in Chapter 5. The references (Doob, 1953; Jazwinski, 1970; Karlin and Taylor, 1981; Lin, 1967; Rice, 1954; Sólnes, 1997; Wong, 1971) contain further discussions of stochastic processes.
Exercises

EXERCISE 3.1. Let $R \in \mathcal{R}(\sigma^2)$ and $\Theta \in U(0,2\pi)$ be mutually independent random variables, and
$$X(t) = R\cos\Theta\cos(\omega_0 t) - R\sin\Theta\sin(\omega_0 t). \qquad (3.76)$$
Show that the harmonic stochastic process $X(t)$ is Gaussian with zero mean and a constant variance $\sigma^2$ at any given time $t$.

EXERCISE 3.2. Find the characteristic function of the process in Exercise 3.1.

EXERCISE 3.3. Let $t_1$ and $t_2$ be two time instances such that $t_1 \ne t_2$. Show that $Y = aX(t_1) + bX(t_2)$ is not normal, where $X(t)$ is the harmonic process in Exercise 3.1 and $a$ and $b$ are real constants.

EXERCISE 3.4. Let $A$ and $\Theta$ be mutually independent random variables with $\Theta \in U(0,2\pi)$. Define a harmonic process
$$X(t) = A\sin(\omega_0 t + \Theta). \qquad (3.77)$$
Show that the correlation function of $X(t)$ is given by
$$R_{XX}(t_1,t_2) = E\big[X(t_1)X(t_2)\big] = \frac{1}{2}E\big[A^2\big]\cos\omega_0(t_1-t_2). \qquad (3.78)$$
EXERCISE 3.5. Show that a harmonic process is always ergodic in the mean and in the correlation.

EXERCISE 3.6. Let $\Phi_k \in U(0,2\pi)$ ($1 \le k \le N$) be independent random variables. Consider the sum of random phase processes
$$X(t) = \sum_{j=1}^{N} r_j\cos(\omega_j t + \Phi_j), \qquad (3.79)$$
where $r_j$ and $\omega_j$ are arbitrary positive constants. Determine the autocovariance function of $X(t)$. Is $X(t)$ normally distributed for finite $N$? Does $X(t)$ converge to a Gaussian process as $N \to \infty$?
EXERCISE 3.7. Derive the cumulant functions $k_n[X(t)]$ for a Gaussian process. Show that $k_n[X(t)] = 0$ for $n \ge 3$.

EXERCISE 3.8. Show the steps leading to the mean and correlation functions
$$\mu_N(t) = \lambda t, \qquad R_N(t_1,t_2) = \lambda\min(t_1,t_2) + \lambda^2 t_1 t_2 \qquad (3.80)$$
of a Poisson process with the probability distribution function
$$P_N(n,t) = \frac{(\lambda t)^n e^{-\lambda t}}{n!}, \qquad n \ge 0,\; t \ge 0. \qquad (3.81)$$
Chapter 4
Spectral Analysis of Stochastic Processes
Besides the probability distribution, another way to characterize stochastic processes is to examine their frequency content. This study is known as spectral analysis and is commonly practiced in scientific and engineering research (Bendat and Piersol, 2000; Shinozuka and Deodatis, 1991; Newland, 1984).
4.1. Power spectral density function

The power spectral density function of a stochastic process describes the frequency content of the process, and provides a basis for Monte Carlo simulation of the process on digital computers.

4.1.1. Definitions

We shall restrict our discussion to weakly stationary stochastic processes with zero mean, such that the autocovariance and cross-covariance functions depend only on the time difference. We define a Fourier transformation of the autocorrelation function $R_{XX}(\tau)$ of a scalar stochastic process $X(t)$ as
$$S_{XX}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_{XX}(\tau)\,e^{-i\omega\tau}\, d\tau. \qquad (4.1)$$
$S_{XX}(\omega)$ is known as the power spectral density function of $X(t)$; it exists when $R_{XX}(\tau)$ is absolutely integrable. The inverse Fourier transformation of $S_{XX}(\omega)$ returns the autocorrelation function
$$R_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(\omega)\,e^{i\omega\tau}\, d\omega. \qquad (4.2)$$
Equations (4.1) and (4.2) are called the Wiener–Khintchine relations. From the definition, we have $S_{XX}(\omega) = S_{XX}^*(-\omega)$. Because $R_{XX}(\tau) = R_{XX}(-\tau)$ is an
even function, we have
$$S_{XX}(\omega) = \frac{1}{\pi}\int_0^{\infty} R_{XX}(\tau)\cos\omega\tau\, d\tau, \qquad (4.3)$$
and $S_{XX}(\omega)$ is real and even. Hence, we also have
$$R_{XX}(\tau) = 2\int_0^{\infty} S_{XX}(\omega)\cos\omega\tau\, d\omega. \qquad (4.4)$$
These two relations are also referred to as the cosine transforms.
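The cosine-transform pair can be verified numerically. For the hypothetical autocorrelation $R_{XX}(\tau) = e^{-\alpha|\tau|}$ (not discussed in the text), equation (4.3) gives the closed form $S_{XX}(\omega) = \alpha/[\pi(\alpha^2+\omega^2)]$, which a trapezoidal quadrature reproduces:

```python
import numpy as np

# Numerical check of the cosine transform (4.3) for the hypothetical
# autocorrelation R(tau) = exp(-alpha |tau|), whose exact spectrum is
# S(omega) = alpha / (pi (alpha^2 + omega^2)).
alpha = 1.5
tau = np.linspace(0.0, 40.0, 400_001)     # exp(-alpha * 40) is negligible
d = tau[1] - tau[0]
R = np.exp(-alpha * tau)

results = {}
for omega in (0.0, 1.0, 3.0):
    integrand = R * np.cos(omega * tau)
    S_num = 0.5 * d * (integrand[:-1] + integrand[1:]).sum() / np.pi
    S_exact = alpha / (np.pi * (alpha**2 + omega**2))
    results[omega] = (S_num, S_exact)
    print(omega, S_num, S_exact)
```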
These two relations are also referred to as the cosine transforms. E XAMPLE 4.1. The harmonic process considered in Section 3.5 has zero mean and an autocovariance function κXX (τ ) = σ 2 cos(ω0 τ ). The power spectral density function of the process is given by 1 SXX (ω) = 2π = =
σ2 4π
∞ −∞ ∞
σ 2 cos ω0 τ e−iωτ dτ
e−i(ω−ω0 )τ + e−i(ω+ω0 )τ dτ
−∞ 2 σ
δ(ω − ω0 ) + δ(ω + ω0 ) .
(4.5) 2 In the derivation, we have used the Fourier transform of f (τ ) = 1 and δ(ω) in the sense of generalized functions as follows ∞ 1=
δ(ω)e
iωτ
dω,
−∞
1 δ(ω) = 2π
∞
1 · e−iωτ dτ.
(4.6)
−∞
4.1.2. One-sided spectrum

Due to the symmetry of $S_{XX}(\omega)$, a one-sided power spectral density function is often used in practice,
$$S_X(\omega) = 2S_{XX}(\omega), \qquad \omega \in [0,\infty). \qquad (4.7)$$
$S_{XX}(\omega)$ is sometimes referred to as the double-sided power spectral density function. The transform relations in terms of $S_X(\omega)$ are as follows:
$$S_X(\omega) = \frac{2}{\pi}\int_0^{\infty} R_{XX}(\tau)\cos\omega\tau\, d\tau, \qquad (4.8)$$
$$R_{XX}(\tau) = \int_0^{\infty} S_X(\omega)\cos\omega\tau\, d\omega. \qquad (4.9)$$
Since $\sigma_X^2 = R_{XX}(0)$ for a zero-mean process, we have
$$\sigma_X^2 = \int_0^{\infty} S_X(\omega)\, d\omega. \qquad (4.10)$$
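A quick numerical check of (4.9)–(4.10), using a hypothetical rectangular one-sided spectrum $S_X(\omega) = S_0$ on $[\omega_1, \omega_2]$ and zero elsewhere:

```python
import numpy as np

# Numerical check of eqs. (4.9)-(4.10) for a hypothetical rectangular
# one-sided spectrum: S_X(w) = S0 on [w1, w2], zero elsewhere.
S0, w1, w2 = 0.8, 2.0, 5.0
w = np.linspace(w1, w2, 200_001)
dw = w[1] - w[0]
S = np.full_like(w, S0)

def trapz(y, d):
    return 0.5 * d * (y[:-1] + y[1:]).sum()

# Variance (4.10): sigma_X^2 = integral of S_X = S0 (w2 - w1)
sigma2 = trapz(S, dw)
print(sigma2)                       # S0 * (w2 - w1) = 2.4

# Autocorrelation (4.9) at tau = 0.7 versus the closed form
tau = 0.7
R_num = trapz(S * np.cos(w * tau), dw)
R_exact = S0 * (np.sin(w2 * tau) - np.sin(w1 * tau)) / tau
print(R_num, R_exact)
```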
4.1.3. Power spectrum of vector processes

Consider now an $N$-dimensional stochastic vector process $\mathbf{X}^T(t) = [X_1(t), \ldots, X_N(t)]$. Assume that the cross-correlation functions $R_{X_jX_k}(\tau)$ are absolutely integrable. We define the cross-power spectral density functions as
$$S_{X_jX_k}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_{X_jX_k}(\tau)\,e^{-i\omega\tau}\, d\tau. \qquad (4.11)$$
The inverse transformation returns the cross-correlation function
$$R_{X_jX_k}(\tau) = \int_{-\infty}^{\infty} S_{X_jX_k}(\omega)\,e^{i\omega\tau}\, d\omega. \qquad (4.12)$$
$R_{X_jX_k}(\tau)$ is always real, whereas $S_{X_jX_k}(\omega)$ is generally complex. Making use of the property $R_{X_kX_j}(-\tau) = R_{X_jX_k}(\tau)$, we can show that
$$S_{X_jX_k}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_{X_kX_j}(-\tau)\,e^{-i\omega\tau}\, d\tau = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_{X_kX_j}(\tau)\,e^{i\omega\tau}\, d\tau = \left[\frac{1}{2\pi}\int_{-\infty}^{\infty} R_{X_kX_j}(\tau)\,e^{-i\omega\tau}\, d\tau\right]^* = S_{X_kX_j}^*(\omega). \qquad (4.13)$$
Also, from the definition, we have
$$S_{X_jX_k}(\omega) = S_{X_jX_k}^*(-\omega). \qquad (4.14)$$
In contrast to the cross-power spectral density function, $S_{XX}(\omega)$ is sometimes referred to as the auto power spectral density function.
Coherence function

Similar to the concept of the correlation coefficient for random variables, we can define a coherence function for stochastic processes as
$$\mathrm{Co}_{X_jX_k}(\omega) = \frac{|S_{X_jX_k}(\omega)|^2}{S_{X_jX_j}(\omega)\,S_{X_kX_k}(\omega)}. \qquad (4.15)$$
By definition, $0 \le \mathrm{Co}_{X_jX_k}(\omega) \le 1$. The coherence function indicates the degree of linear relationship between two processes. When there is a linear relationship, e.g., $X_j(t) = cX_k(t)$ where $c$ is a constant, then $\mathrm{Co}_{X_jX_k}(\omega) = 1$.
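SciPy's `signal.coherence` implements a Welch-type estimate of (4.15). A sketch contrasting a perfect linear relation with statistically independent records (the white sequences below are hypothetical test signals):

```python
import numpy as np
from scipy.signal import coherence

# Estimated coherence function (4.15): near 1 at all frequencies for a
# linear relation X_j = c X_k, near 0 for independent processes.
rng = np.random.default_rng(5)
x = rng.normal(size=65_536)
y_lin = 3.0 * x                        # X_j(t) = c X_k(t)
y_ind = rng.normal(size=65_536)        # independent of x

f, C_lin = coherence(x, y_lin, fs=1.0, nperseg=1024)
f, C_ind = coherence(x, y_ind, fs=1.0, nperseg=1024)

print(C_lin.min())     # 1 up to round-off
print(C_ind.mean())    # small (finite averaging keeps it above 0)
```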
4.2. Spectral moments and bandwidth

With the help of the one-sided spectral density function $S_X(\omega)$, we can define a set of spectral moments as
$$\lambda_n = \int_0^{\infty} \omega^n S_X(\omega)\, d\omega, \qquad n = 0, 1, 2, \ldots \qquad (4.16)$$
From equation (4.10), we have $\lambda_0 = \sigma_X^2$. A normalized moment with the dimension of circular frequency can be defined as
$$\gamma_n = \left(\frac{\lambda_n}{\lambda_0}\right)^{1/n}. \qquad (4.17)$$
$\gamma_1$ is known as the central frequency and has the geometrical meaning of the centroid of the spectral distribution $S_X(\omega)$; $\gamma_2$ has the geometrical meaning of the radius of gyration of $S_X(\omega)$ about the origin. A variance parameter $\delta$ describing the dispersion of $S_X(\omega)$ around its central frequency is defined as (Vanmarcke, 1983)
$$\delta = \frac{(\lambda_0\lambda_2 - \lambda_1^2)^{1/2}}{\lambda_1} = \sqrt{\frac{\lambda_0\lambda_2}{\lambda_1^2} - 1}. \qquad (4.18)$$
From the Schwarz inequality, we have $0 \le \lambda_1^2/(\lambda_0\lambda_2) \le 1$ and $0 \le \delta < \infty$. For a harmonic process, $\lambda_1 = \omega_0\lambda_0$ and $\lambda_2 = \omega_0^2\lambda_0$, so $\delta = 0$. Hence, small values of $\delta$ indicate narrowbandedness. Similarly, another bandwidth parameter $\varepsilon$ can be defined as
$$\varepsilon = \sqrt{1 - \frac{\lambda_2^2}{\lambda_0\lambda_4}} = \sqrt{1 - \alpha_2^2}, \qquad (4.19)$$
where λ2 α2 = √ , λ0 λ4
(4.20)
is called the irregularity factor. A family of bandwidth parameters can be defined as λn αn = √ (4.21) , n = 1, 2, . . . λ0 λ2n 4.2.1. Narrowband process When the power spectral density function has a marked concentration near a given frequency, it is called a narrowband process. A harmonic process is a special case of the narrowband process whose power spectral density function consists of a delta function in the frequency domain. Figure 4.1(a) shows a typical narrowband process, which is approximately periodic with the circular frequency ω0 . The amplitude of the process is modulated by a random variable. Each crossing of the t-axis by X(t) is followed by exactly one local maximum or minimum. This is a distinct property of the narrowband process. The autocovariance in Figure 4.1(b) consists of a cosine function with circular frequency ω0 and a decreasing amplitude. The power spectral density of the process shown in Figure 4.1(c) indicates clear dominance of the frequency component near frequency ω0 . The half power bandwidth B shown in the figure can be used to measure the narrowbandedness. A nondimensional bandwidth parameter ζ can be defined as ζ =
B . 2ω0
(4.22)
If X(t) is the response of a slightly damped single-degree-of-freedom system subjected to white noise excitation, ζ can be identified as the damping ratio of the oscillator. With ζ as a bandwidth measure, the processes with ζ < 0.05 are classified as narrowband. 4.2.2. Broadband process In a vague term, a broadband process is defined as a process whose power spectrum is distributed over a wide band of frequencies in such a way that no single significantly dominating frequency exists in the spectrum. A realization, the autocovariance function and the auto power spectral density function of a typical broadband process are shown in Figure 4.2. The realization in Figure 4.2(a) has a very irregular variations. Each crossing of the t-axis is quite
56
Figure 4.1.
Chapter 4. Spectral Analysis of Stochastic Processes
Realization of a typical narrowband process and its covariance and power spectral density functions (Nielsen, 2000).
often succeeded by more than one local maximum or local minimum, contrary to the narrowband case. Let ν0 = ω0/(2π) be the expected number of upcrossings of the t-axis per unit time, and let μ0 be the expected number of local maxima per unit time. It then follows that

γ = μ0/ν0 ≥ 1, (4.23)

where γ > 1 for a broadband process, γ ≈ 1 for a narrowband process and γ = 1 for a harmonic process. More discussion of the parameters μ0 and ν0 will be presented in Chapter 11. γ may then be considered as an alternative measure of bandwidth. In terms of γ, the bandwidth parameter ε can be redefined as

ε = √(1 − γ⁻²), (4.24)

where γ = 1 and γ = ∞, corresponding to the extremely narrowband and broadband cases, are mapped into ε = 0 and ε = 1, respectively. In Figure 4.2(a), the time is scaled by T0 = 2π/ω0. As seen from Figure 4.2(b), significant correlations are only present for time separations of magnitude T0, i.e., the correlation length is such that τ0 < T0 for a broadband process. The
4.2. Spectral moments and bandwidth

Figure 4.2. A broadband process. (a) A realization. (b) Autocovariance function. (c) Autospectral density function (Nielsen, 2000).
autospectral density function is broad over a wide range of frequencies, as shown in Figure 4.2(c).

White noise

A special broadband process is the white noise, a stationary Gaussian process X(t) with mean value μX = 0 and a constant power spectral level at all frequencies,

SXX(ω) = S0, ω ∈ [0, ∞). (4.25)

The term “white noise” comes from optics, where the power spectral density function of sunlight is almost constant over the visible frequency range of the electromagnetic spectrum. The autocorrelation function of the white noise is obtained as

RXX(τ) = 2πS0 δ(τ). (4.26)

When 2πS0 = 1, the process is called a unit white noise. The autocorrelation and power spectral density functions of the white noise process are shown in Figure 4.3.
Figure 4.3. The power spectral density and the correlation function of the white noise.
According to equation (4.26), X(t) and X(t + τ) for a white noise are independent no matter how small τ is. Since σX² = RXX(0), we have σX² = ∞, implying X(t) ∈ N(0, ∞). Let a < b. Then

P(a < X(t) ≤ b) = lim_{σX→∞} [Φ(b/σX) − Φ(a/σX)] = 0.5 − 0.5 = 0. (4.27)

Φ(x) is the distribution function of the unit normal random variable X ∼ N(0, 1). Equations (4.26) and (4.27) imply that the realizations of a white noise process are discontinuous for all t, and the samples are outside any finite interval with probability one. Clearly, such a process is a mathematical abstraction, and is not physically realizable. The harmonic process and the white noise process are two opposite extreme cases. The former has a power spectral density function concentrated at one single circular frequency ω0, while the latter is uniformly distributed over all frequencies. Both are mathematical idealizations, and are used to approximate physical processes.

Band-limited white noise

A weakly homogeneous stochastic process is called a band-limited white noise if the one-sided power spectral density function is given by

SXX(ω) = σX²/B for ω ∈ [ω0 − B/2, ω0 + B/2], and SXX(ω) = 0 elsewhere, (4.28)

where ω0 is the center frequency and B is the bandwidth of the power spectral density function such that B ≪ 2ω0, as seen in Figure 4.4.

Figure 4.4. The power spectral density function of the band-limited white noise.

The autocorrelation function follows from equation (4.9),

RXX(τ) = (σX²/B) ∫_{ω0−B/2}^{ω0+B/2} cos(ωτ) dω
       = (σX²/(Bτ)) [sin((ω0 + B/2)τ) − sin((ω0 − B/2)τ)]
       = (2σX²/(Bτ)) cos(ω0τ) sin(Bτ/2). (4.29)

The spectral moments are obtained from equation (4.16),

λn = (σX²/B) ∫_{ω0−B/2}^{ω0+B/2} ωⁿ dω = (σX²/(B(n + 1))) [(ω0 + B/2)^{n+1} − (ω0 − B/2)^{n+1}]. (4.30)

The variance parameter δ is evaluated to be

δ = ζ/√3, (4.31)

where ζ = B/(2ω0).
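The closed-form moments (4.30) and the parameter (4.31) are easy to cross-check numerically. The following sketch is our own illustration, not from the text; it assumes the standard definition δ = √(1 − λ1²/(λ0 λ2)) of the variance parameter, and all parameter values are arbitrary choices.

```python
import numpy as np

def spectral_moment(omega, G, n):
    """n-th spectral moment of a one-sided spectrum G(omega), trapezoidal rule."""
    d = omega[1] - omega[0]
    y = omega**n * G
    return d * (y.sum() - 0.5 * (y[0] + y[-1]))

# Band-limited white noise: G = sigma^2/B on [w0 - B/2, w0 + B/2].
sigma2, w0, B = 1.0, 10.0, 0.5
omega = np.linspace(w0 - B / 2, w0 + B / 2, 20001)
G = np.full_like(omega, sigma2 / B)

lam = [spectral_moment(omega, G, n) for n in range(3)]

# Closed form (4.30): lam_n = sigma^2/(B(n+1)) [(w0+B/2)^(n+1) - (w0-B/2)^(n+1)]
lam_exact = [sigma2 / (B * (n + 1)) * ((w0 + B / 2)**(n + 1) - (w0 - B / 2)**(n + 1))
             for n in range(3)]

# Bandwidth parameters, assuming delta = sqrt(1 - lam1^2/(lam0 lam2)).
zeta = B / (2 * w0)
delta = np.sqrt(1.0 - lam[1]**2 / (lam[0] * lam[2]))
print(delta, zeta / np.sqrt(3))
```

For a narrowband choice such as ζ = 0.025, the numerically computed δ agrees with ζ/√3 to several decimal places.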
4.3. Process with rational spectral density function

Consider a weakly homogeneous stochastic process {X(t), t ∈ R} with zero mean and a rational spectral density function given by

SXX(ω) = S0 B(iω)B(−iω) / (A(iω)A(−iω)), (4.32)

where S0 is a positive real constant, B(s) = b0 s^m + b1 s^{m−1} + · · · + bm and A(s) = s^n + a1 s^{n−1} + · · · + an = ∏_{j=1}^{n} (s − sj) are polynomials of order m and n, and sj are the roots of A(s) = 0.
Further, it is assumed that n > m, and that all roots sj have a negative real part. By the method of residues, it can be shown that the autocorrelation function can be evaluated as

RXX(τ) = −πS0 Σ_{j=1}^{n} (e^{sj|τ|}/sj) B(sj)B(−sj) / ∏_{k=1, k≠j}^{n} (sk² − sj²). (4.33)

The rational spectrum often represents the power spectral density function of the response of linear structural systems to random excitations. This is the subject of Chapter 10.

EXAMPLE 4.2. Consider a weakly stationary process with the power spectral density function of order (m, n) = (0, 2),

SXX(ω) = S0 / [(ω0² − ω²)² + (2ζω0ω)²], (4.34)

where ζ ∈ (0, 1), ω0 > 0, and S0 > 0. Equation (4.34) is a rational power spectral density function with

A(s) = s² + 2ζω0 s + ω0², B(s) = 1. (4.35)

The roots of A(s) = 0 are s1, s2 = −ζω0 ± iωd, where ωd = ω0√(1 − ζ²). The autocorrelation function is given by

RXX(τ) = −(πS0/(4iω0²ζ√(1 − ζ²))) [e^{s1|τ|}/s1 − e^{s1*|τ|}/s1*]
       = −(πS0/(2ω0²ζ√(1 − ζ²))) Im(e^{s1|τ|}/s1) = σX² ρXX(τ), (4.36)

where σX² = πS0/(2ζω0³) and

ρXX(τ) = e^{−ζω0|τ|} [cos(ωd τ) + (ζ/√(1 − ζ²)) sin(ωd|τ|)], (4.37)

where ρXX(τ) is the autocorrelation coefficient function and Im(·) denotes the imaginary part. The parameter ζ can be identified as the nondimensional half power bandwidth in equation (4.22), if ζ ≪ 1.
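The closed-form variance σX² = πS0/(2ζω0³) in Example 4.2 can be checked by integrating the spectrum (4.34) numerically. A minimal sketch (the parameter values and the truncation of the frequency axis are our own choices):

```python
import numpy as np

# Parameters of the rational spectrum (4.34); arbitrary test choices.
S0, zeta, w0 = 1.0, 0.1, 2.0

def S_XX(w):
    return S0 / ((w0**2 - w**2)**2 + (2 * zeta * w0 * w)**2)

# sigma_X^2 = integral of S_XX over (-inf, inf); truncated at a large frequency,
# where the integrand decays like S0/w^4.
w = np.linspace(-200.0, 200.0, 2_000_001)
dw = w[1] - w[0]
var_numeric = np.sum(S_XX(w)) * dw

var_exact = np.pi * S0 / (2 * zeta * w0**3)
print(var_numeric, var_exact)
```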
4.4. Finite time spectral analysis

Assume that the stochastic process is available only in a finite time interval, as defined by

XT(t) = X(t) for −T ≤ t ≤ T, and XT(t) = 0 elsewhere. (4.38)

We also assume that as T → ∞, we recover the original stochastic process. In practice, we often take the Fourier transform of the sample function as

X(ω, T) = (1/2π) ∫_{−T}^{T} X(t) e^{−iωt} dt. (4.39)

The condition for the existence of the Fourier transform in a statistical sense will be discussed in Chapter 5. In general, X(ω, T) is a random process indexed by the frequency ω. Consider the correlation function of X(ω, T) with respect to the index ω,

E[X(ω1, T) X*(ω2, T)] = (1/(2π)²) ∫_{−T}^{T} ∫_{−T}^{T} E[X(t1)X(t2)] e^{−iω1t1} e^{iω2t2} dt1 dt2. (4.40)

Let ω1 = ω2 = ω and t1 − t2 = τ. We have

E[|X(ω, T)|²] = (1/(2π)²) ∫_{−2T}^{2T} (2T − |τ|) RXX(τ) e^{−iωτ} dτ. (4.41)

Hence,

lim_{T→∞} (π/T) E[|X(ω, T)|²]
  = lim_{T→∞} (1/2π) ∫_{−2T}^{2T} RXX(τ) e^{−iωτ} dτ − lim_{T→∞} (1/4π) ∫_{−2T}^{2T} (|τ|/T) RXX(τ) e^{−iωτ} dτ
  = (1/2π) ∫_{−∞}^{∞} RXX(τ) e^{−iωτ} dτ
  = SXX(ω). (4.42)

When T is sufficiently large and finite, equation (4.42) suggests an estimate of the power spectral density function as

SXX(ω, T) = (π/T) E[|X(ω, T)|²]. (4.43)

As we have shown here, when T → ∞, this estimate approaches the exact spectral density function.
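The estimator (4.43) can be tried out numerically. The sketch below is our own illustration: it simulates records of a stationary Gaussian process with RXX(τ) = σX² e^{−α|τ|} via its exact AR(1) discretization, forms the finite-time transform (4.39) with the FFT, and replaces the expectation by an average over records. The double-sided spectrum of this process, σX²α/(π(ω² + α²)), is used as the reference (cf. Exercise 4.1).

```python
import numpy as np

rng = np.random.default_rng(0)

sigma2, alpha, dt = 1.0, 1.0, 0.01
N, n_rec = 4096, 200
phi = np.exp(-alpha * dt)

# n_rec independent records with R_XX(tau) = sigma^2 exp(-alpha|tau|),
# simulated by the exact AR(1) recursion X_{k+1} = phi X_k + eps_k.
x = np.empty((n_rec, N))
x[:, 0] = rng.normal(scale=np.sqrt(sigma2), size=n_rec)
eps = rng.normal(scale=np.sqrt(sigma2 * (1 - phi**2)), size=(n_rec, N - 1))
for k in range(N - 1):
    x[:, k + 1] = phi * x[:, k] + eps[:, k]

# Finite-time Fourier transform (4.39) via the FFT, and the estimate (4.43)
# with the expectation replaced by the record average.
L = N * dt                 # record length, i.e. 2T in the notation of (4.38)
T = L / 2
omega = 2 * np.pi * np.fft.rfftfreq(N, d=dt)
X = dt / (2 * np.pi) * np.fft.rfft(x, axis=1)
S_est = np.pi / T * np.mean(np.abs(X)**2, axis=0)

S_exact = sigma2 * alpha / (np.pi * (omega**2 + alpha**2))
```

Averaged over a band of frequencies, the estimate reproduces the exact spectrum to within the statistical scatter of the finite ensemble.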
Exercises

EXERCISE 4.1. Let X(t) be a zero mean stationary process with autocorrelation function RXX(τ) = σX² e^{−α|τ|}, where α > 0. Show that the double-sided power spectral density function is

SXX(ω) = (σX²/π) α/(ω² + α²). (4.44)

EXERCISE 4.2. Let X(t) be a zero mean stationary process with autocorrelation function RXX(τ) = σX² e^{−ατ²}, where α > 0. Show that

SXX(ω) = (σX²/(2√(πα))) e^{−ω²/(4α)}. (4.45)

EXERCISE 4.3. Let X(t) be a zero mean stationary process with autocorrelation function

RXX(τ) = σX² (1 − |τ|/T) for |τ| ≤ T, and RXX(τ) = 0 for |τ| > T. (4.46)

Show that

SXX(ω) = (2σX²/(πT)) sin²(ωT/2)/ω². (4.47)

EXERCISE 4.4. Let X(t) be a zero mean stationary process with autocorrelation function RXX(τ) = σX² e^{−α|τ|} cos(ω0τ), where α > 0. Show that the power spectral density function is

SXX(ω) = (σX²/2π) [α/((ω + ω0)² + α²) + α/((ω − ω0)² + α²)]. (4.48)

EXERCISE 4.5. Let X(t) be a zero mean stationary process with autocorrelation function RXX(τ) = σX² sin(ω0τ)/(πτ). Show that

SXX(ω) = σX²/(2π) for |ω| ≤ ω0, and SXX(ω) = 0 for |ω| > ω0. (4.49)

EXERCISE 4.6. Let X(t) be a zero mean stationary process with autocorrelation function RXX(τ) = σX² cos(ω0τ). Show that the power spectral density function is

SXX(ω) = (σX²/2) [δ(ω + ω0) + δ(ω − ω0)]. (4.50)

Figure 4.5. The power spectral density function of a band-limited white noise process.

EXERCISE 4.7. Given a weakly stationary process X(t) with a band-limited white noise power spectral density function characterized by the bandwidth parameter B and the center frequency ω0, determine the bandwidth parameters δ and ε as functions of B/ω0.

EXERCISE 4.8. X(t) is a band-limited white noise process with the double-sided power spectral density function shown in Figure 4.5. Determine the power spectral density function and the autocovariance function of the derivative process Ẋ(t).

EXERCISE 4.9. Consider a stationary stochastic process with a one-sided power spectral density function

G(ω) = G0 for ωa ≤ ω ≤ ωb, and G(ω) = 0 elsewhere. (4.51)

(a) Show that the first three spectral moments are given by

λ0 = G0 (ωb − ωa), (4.52)
λ1 = (G0/2)(ωb² − ωa²), (4.53)
λ2 = (G0/3)(ωb³ − ωa³). (4.54)

(b) Compute the bandwidth parameters δ and ε. Show that when ωb − ωa ≪ (ωb + ωa)/2, the process is narrowband.
Chapter 5
Stochastic Calculus
The subject of stochastic calculus deals with differentiation and integration of stochastic processes (Jazwinski, 1970; Lin and Cai, 1995; Sólnes, 1997; Soong, 1973; Wong, 1971). It provides an important mathematical tool for modeling stochastic dynamical systems and formulating stochastic control problems.
5.1. Modes of convergence

In the mathematics of calculus, the convergence of a series in some measure is the basis for concepts such as continuity, differentiation and integration of functions. When we study the calculus of random processes, such as differentiation, integration and convergence of a series, we need to define the notions of convergence (Lin, 1967).

Convergence with probability one, also known as almost sure convergence, is defined as follows. Let Xn be a sequence of random variables with

P(lim_{n→∞} Xn = X) = 1. (5.1)

We say that Xn converges to X with probability 1. Here, P(·) is the probability measure of the random event lim_{n→∞} Xn = X. Convergence in probability is defined as

lim_{n→∞} P(|Xn − X| > ε) = 0, ∀ε > 0. (5.2)

Convergence in distribution is defined as

lim_{n→∞} FXn(x) = FX(x). (5.3)

Finally, convergence in mean square is defined as

lim_{n→∞} E[|Xn − X|²] = 0, or l.i.m._{n→∞} Xn = X. (5.4)

The convergence in mean square is the strongest among the modes of convergence defined above. It implies the convergence in probability, which in turn
Figure 5.1. The relationship among the different convergence modes.

implies the convergence in distribution. Convergence with probability 1 also implies the convergence in probability. The weakest mode of convergence is the convergence in distribution. This discussion is illustrated in Figure 5.1.

There are two very useful theorems about the convergence in mean square.

THEOREM 5.1. If l.i.m._{t→t0} X(t) = X and l.i.m._{s→s0} Y(s) = Y, then we have

lim_{t→t0, s→s0} E[X(t)Y(s)] = E[XY], (5.5)

lim_{t→t0} E[X(t)] = E[l.i.m._{t→t0} X(t)] = E[X]. (5.6)

Equation (5.6) implies that the mean square limit l.i.m. and the expectation operator E[·] commute.

THEOREM 5.2. A second order stochastic process X(t) with E[X²(t)] < ∞ satisfies

l.i.m._{t→t0} X(t) = X, (5.7)

if and only if the following limit exists and is finite,

lim_{t,t′→t0} E[X(t)X(t′)] = RXX(t0, t0). (5.8)
In the rest of this chapter, the convergence in mean square is used unless it is otherwise indicated.
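A minimal numerical illustration of convergence in mean square (our own example, not from the text) is the sample mean Xn of n i.i.d. Gaussian variables with mean μ and variance σ²: E[|Xn − μ|²] = σ²/n → 0 as n → ∞.

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical mean-square error E|X_n - mu|^2 of the sample mean of n i.i.d.
# N(mu, sigma^2) variables, estimated over many independent paths.
mu, sigma, n_paths = 3.0, 2.0, 5000
errors = []
for n in (10, 100, 1000):
    Z = rng.normal(mu, sigma, size=(n_paths, n))
    Xn = Z.mean(axis=1)
    errors.append(np.mean((Xn - mu)**2))
    print(n, errors[-1], sigma**2 / n)   # empirical vs. sigma^2/n
```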
5.2. Stochastic differentiation

Consider a stochastic process X(t) defined on the sample space Ω. A derivative process Ẋ(t) can be defined in terms of the following limit

Ẋ(t) = l.i.m._{Δt→0} [X(t + Δt) − X(t)]/Δt, (5.9)

in the mean square sense. In the same way, the higher order derivatives of the process can be defined. These definitions are also applicable to vector stochastic processes. According to Theorem 5.2, when the correlation function of the ratio [X(t + Δt) − X(t)]/Δt exists and is finite as Δt → 0, Ẋ(t) exists and the process X(t) is differentiable in the mean square sense. Consider the correlation function of the ratio,

lim_{Δt→0, Δs→0} E{ ([X(t + Δt) − X(t)]/Δt) ([X(s + Δs) − X(s)]/Δs) }
  = lim_{Δt→0, Δs→0} (1/(ΔtΔs)) [RXX(t + Δt, s + Δs) − RXX(t + Δt, s) − RXX(t, s + Δs) + RXX(t, s)]
  = ∂²RXX(t, s)/∂t∂s. (5.10)

Hence, if ∂²RXX(t, s)/∂t∂s exists at (t, t) and is finite, X(t) is differentiable in the mean square sense. As a consequence of the definition of differentiation in the mean square sense, many properties of differentiation in calculus also hold in the same sense. Two examples are listed below,

d/dt [aX(t) + bY(t)] = aẊ(t) + bẎ(t), (5.11)

d/dt [g(t)X(t)] = ġ(t)X(t) + g(t)Ẋ(t), (5.12)

where a and b are constants, and g(t) is a deterministic function.

EXAMPLE 5.1. Consider the white noise X(t) with mean value μX = 0 and the autocorrelation function

RXX(t, s) = 2πS0 δ(t − s). (5.13)

Clearly, the limit lim_{t→s} RXX(t, s) is unbounded, and the derivative ∂²RXX(t, s)/∂t∂s does not exist. Hence, the limit l.i.m._{t→t0} X(t) of the white noise does not exist for any t0, and the white noise is not differentiable, both in the mean square sense.
5.2.1. Statistical properties of derivative process

The mean value function μẊ(t) of Ẋ(t) can be evaluated by equation (5.9),

μẊ(t) = lim_{Δt→0} E[(X(t + Δt) − X(t))/Δt]
      = E[l.i.m._{Δt→0} (X(t + Δt) − X(t))/Δt]
      = lim_{Δt→0} [μX(t + Δt) − μX(t)]/Δt
      = μ̇X(t). (5.14)

The correlation function E[Ẋ(t1)Ẋ(t2)] has been shown to be

RẊẊ(t1, t2) = E[Ẋ(t1)Ẋ(t2)] = ∂²RXX(t1, t2)/∂t1∂t2.

The autocovariance function of the derivative process becomes

κẊẊ(t1, t2) = E[Ẋ(t1)Ẋ(t2)] − μẊ(t1)μẊ(t2) = ∂²κXX(t1, t2)/∂t1∂t2. (5.15)

Higher order moments of the derivative process can be determined in the same way. The joint nth order moment is given by

E[Ẋ(t1)Ẋ(t2) · · · Ẋ(tn)] = ∂ⁿE[X(t1)X(t2) · · · X(tn)]/∂t1∂t2 · · · ∂tn. (5.16)

Generally, the mean function μX^(n)(t) of the nth order derivative process X^(n)(t), and the cross-covariance function κX^(m)X^(n)(t1, t2) of X^(m)(t) and X^(n)(t), are given by

μX^(n)(t) = dⁿμX(t)/dtⁿ, (5.17)

κX^(m)X^(n)(t1, t2) = ∂^{m+n}κXX(t1, t2)/∂t1^m∂t2^n. (5.18)

Gaussian processes

Let X(t) be a Gaussian process. The derivatives at an arbitrary set of time instances [Ẋ(t1), Ẋ(t2), . . . , Ẋ(tn)] are a linear combination of the normally distributed random variables [X(t1 + Δt1), X(t1), . . . , X(tn + Δtn), X(tn)] as Δtk → 0 (k = 1, 2, . . . , n). Hence, Ẋ(t) is a Gaussian process. By induction, it follows that the nth order derivative process X^(n)(t) is Gaussian if X(t) is Gaussian.
Stationary processes

If X(t) is stationary in the strict (weak) sense, Ẋ(t) is also strictly (weakly) stationary. Again, by induction, this statement holds true for the derivative process of arbitrary order. Recall that the mean of a stationary process is constant so that

μẊ(t) = dμX/dt ≡ 0. (5.19)

The autocovariance function of the derivative process in the stationary case is given by

κẊẊ(t1, t2) = ∂²κXX(t2 − t1)/∂t1∂t2 = −d²κXX(τ)/dτ², τ = t2 − t1. (5.20)

In equation (5.20), we have used ∂t2 = dτ with t1 kept constant and ∂t1 = −dτ with t2 kept constant. Similarly, the cross-covariance function of X^(m)(t) and X^(n)(t) is

κX^(m)X^(n)(t1, t2) = ∂^{m+n}κXX(t2 − t1)/∂t1^m∂t2^n = (−1)^m d^{m+n}κXX(τ)/dτ^{m+n}, (5.21)

where τ = t2 − t1. In particular, for m = 0 and n = 1, the cross-covariance function of X(t) and Ẋ(t) is obtained as

κXẊ(τ) = dκXX(τ)/dτ. (5.22)

Since κXX(τ) is a symmetric function of τ, its derivative dκXX(τ)/dτ is skew-symmetric with respect to τ and must be zero at τ = 0. Then

κXẊ(0) = (d/dτ)κXX(0) = 0. (5.23)

This shows that for a weakly stationary process, the stochastic process X(t) and its derivative Ẋ(t) are uncorrelated. Moreover, when X(t) is Gaussian, X(t) and Ẋ(t) are independent.
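Relations (5.20), (5.22) and (5.23) can be checked by finite differences for a concrete mean-square differentiable covariance, e.g. κXX(τ) = σX² e^{−ατ²} (our own choice of example):

```python
import numpy as np

# kappa_XX(tau) = sigma^2 exp(-alpha tau^2): a smooth stationary covariance.
sigma2, alpha, h = 2.0, 0.5, 1e-4

def kappa(tau):
    return sigma2 * np.exp(-alpha * tau**2)

def d_kappa(tau):    # first derivative, central difference; eq. (5.22)
    return (kappa(tau + h) - kappa(tau - h)) / (2 * h)

def d2_kappa(tau):   # second derivative, central difference
    return (kappa(tau + h) - 2 * kappa(tau) + kappa(tau - h)) / h**2

# (5.23): kappa_XXdot(0) = 0, i.e. X(t) and Xdot(t) are uncorrelated.
print(d_kappa(0.0))

# (5.20): kappa_XdotXdot(tau) = -kappa''(tau); exact value for this kappa.
tau = 0.7
exact = sigma2 * (2 * alpha - (2 * alpha * tau)**2) * np.exp(-alpha * tau**2)
print(-d2_kappa(tau), exact)
```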
EXAMPLE 5.2. Let X(t) be a nonstationary Gaussian process with a mean μX(t), standard deviation σX(t) and autocorrelation coefficient function ρXX(t1, t2). X(t) and Ẋ(t) are jointly normally distributed, and μẊ(t) and σẊ(t) can be determined from equations (5.14) and (5.15), respectively,

μẊ(t) = dμX(t)/dt, (5.24)

σẊ²(t) = ∂²κXX(t, t)/∂t1∂t2. (5.25)

Thus the cross-correlation coefficient function becomes

ρ(t) = ρXẊ(t, t) = κXẊ(t, t)/(σX(t)σẊ(t)) = (1/(σX(t)σẊ(t))) ∂κXX(t, t)/∂t2. (5.26)

Note that ∂/∂t2 represents the partial derivative with respect to the second argument of κXX(t, t). From equation (2.71), the joint probability density function of X(t) and Ẋ(t) is obtained as

pXẊ(x, ẋ, t) = (1/(2πσX(t)σẊ(t)√(1 − ρ²(t)))) exp[−(ξ1² − 2ρ(t)ξ1ξ2 + ξ2²)/(2(1 − ρ²(t)))], (5.27)

ξ1 = (x − μX(t))/σX(t), ξ2 = (ẋ − μẊ(t))/σẊ(t). (5.28)

If X(t) is weakly stationary, ρ(t) ≡ 0 and equation (5.27) reduces to

pXẊ(x, ẋ, t) = pX(x)pẊ(ẋ), (5.29)

pX(x) = (1/σX) φ((x − μX)/σX), (5.30)

pẊ(ẋ) = (1/σẊ) φ(ẋ/σẊ), (5.31)

where φ(x) denotes the probability density function of the unit-variance Gaussian variable N(0, 1).
5.2.2. Spectral analysis of derivative processes

Recall equation (4.2). We have the correlation function of stationary derivative processes as

RẊẊ(τ) = −d²RXX(τ)/dτ² = −(d²/dτ²) ∫_{−∞}^{∞} e^{iωτ} SXX(ω) dω
        = ∫_{−∞}^{∞} e^{iωτ} ω² SXX(ω) dω, (5.32)

RX^(m)X^(n)(τ) = (−1)^m (d^{m+n}/dτ^{m+n}) ∫_{−∞}^{∞} e^{iωτ} SXX(ω) dω
              = ∫_{−∞}^{∞} e^{iωτ} (−1)^m (iω)^{m+n} SXX(ω) dω. (5.33)

Equations (5.32) and (5.33) suggest that the auto power spectral density functions of the derivative processes are given by

SẊẊ(ω) = ω² SXX(ω), (5.34)

SX^(m)X^(n)(ω) = (−1)^m (iω)^{m+n} SXX(ω). (5.35)

The variance function σ²X^(n) is given by

σ²X^(n) = ∫_{−∞}^{∞} ω^{2n} SXX(ω) dω = ∫_{0}^{∞} ω^{2n} SX(ω) dω = λ2n, n = 0, 1, . . . , (5.36)

where λ2n are the spectral moments defined in equation (4.16). When σ²X^(n) < ∞, ω^{2n}SXX(ω) is integrable over (−∞, ∞). For a process with a rational auto power spectral density function, the variance function of the nth order derivative process
is

σ²X^(n) = ∫_{−∞}^{∞} ω^{2n} S0 B(iω)B(−iω)/(A(iω)A(−iω)) dω. (5.37)

Time series of physically realizable processes are continuous, and the variance function σ²X^(n)(t) is finite. Continuity of sample curves implies that the random variables X^(n)(t − Δt), X^(n)(t) and X^(n)(t + Δt) for a given t become increasingly correlated as Δt → 0. This means that the autocovariance function κX^(n)X^(n)(t, t + Δt) is continuous as Δt → 0. The requirements for existence of X^(n)(t) can then be stated as

(1) κX^(n)X^(n)(t1, t2) continuous at t1 = t2;
(2) κX^(n)X^(n)(t, t) = σ²X^(n)(t) < ∞.

EXAMPLE 5.3. Consider zero mean and weakly stationary processes with the auto power spectral density function and autocovariance function given by

SXX(ω) = S0/[(ω0² − ω²)² + (2ζω0ω)²], (5.38)

RXX(τ) = −(πS0/(2ω0²ζ√(1 − ζ²))) Im(e^{s1|τ|}/s1) = σX² ρXX(τ), (5.39)

where ζ ∈ (0, 1), ω0 > 0, S0 > 0, σX² = πS0/(2ζω0³) and

ρXX(τ) = e^{−ζω0|τ|} [cos(ωd τ) + (ζ/√(1 − ζ²)) sin(ωd|τ|)]. (5.40)

The auto power spectral density functions of Ẋ(t) and Ẍ(t) can be evaluated from equations (5.34) and (5.35),

SẊẊ(ω) = ω²S0/[(ω0² − ω²)² + 4ζ²ω0²ω²], (5.41)

SẌẌ(ω) = ω⁴S0/[(ω0² − ω²)² + 4ζ²ω0²ω²]. (5.42)

As ω → ±∞, SẊẊ(ω) ∝ S0/ω² and SẌẌ(ω) → S0. Hence, SẊẊ(ω) is integrable and σ²Ẋ < ∞, whereas σ²Ẍ = ∞.
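These integrability claims can be checked numerically: with n = 1 in (5.36), σ²Ẋ = ∫ω²SXX(ω) dω evaluates to ω0²σX², while the truncated integral of ω⁴SXX grows roughly like 2S0 times the cutoff, since ω⁴SXX(ω) → S0. A sketch with our own parameter choices:

```python
import numpy as np

S0, zeta, w0 = 1.0, 0.2, 3.0

def S_XX(w):
    return S0 / ((w0**2 - w**2)**2 + (2 * zeta * w0 * w)**2)

w = np.linspace(-500.0, 500.0, 2_000_001)
dw = w[1] - w[0]

var_x = np.sum(S_XX(w)) * dw               # sigma_X^2
var_xdot = np.sum(w**2 * S_XX(w)) * dw     # sigma_Xdot^2, eq. (5.36) with n = 1
print(var_xdot / var_x, w0**2)             # ratio should equal omega_0^2

# sigma_Xddot^2 diverges: the truncated integral of w^4 S_XX keeps growing.
tails = []
for cutoff in (100.0, 200.0, 400.0):
    m = np.abs(w) <= cutoff
    tails.append(np.sum(w[m]**4 * S_XX(w[m])) * dw)
print(tails)
```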
The autocovariance function of Ẋ(t) follows from equation (5.20) when the mean function is zero,

κẊẊ(τ) = σ²Ẋ ρẊẊ(τ), (5.43)

σ²Ẋ = πS0/(2ζω0) = ω0² σX², (5.44)

ρẊẊ(τ) = −ρ″XX(τ)/ω0² ≡ −(1/ω0²) d²ρXX(τ)/dτ²
        = e^{−ζω0|τ|} [cos(ωd τ) − (ζ/√(1 − ζ²)) sin(ωd|τ|)], (5.45)
where the primes indicate differentiation with respect to τ.

Since d⁴κXX(τ)/dτ⁴ has different limits as τ → 0⁺ or τ → 0⁻, Ẍ(t) is not continuous and has unbounded autocovariance. Hence, Ẍ(t) has certain characteristics of the Gaussian white noise.

EXAMPLE 5.4. Given a stationary Gaussian process X(t) with the mean μX and the autocovariance function κXX(τ) = σX² ρ(τ). Consider the 4-dimensional random variable

Z = [X0; X], X0 = [X(0), Ẋ(0)]ᵀ, X = [X(t), Ẋ(t)]ᵀ. (5.46)

Assume that at time t = 0 the initial values X0 = x0 = [x0, ẋ0]ᵀ are prescribed probabilistically. Find the distribution of X = [X(t), Ẋ(t)]ᵀ conditional on X0.

The conditional distribution is normal with the conditional mean μX|X0 and the conditional covariance matrix CXX|X0. In order to determine μX|X0 and CXX|X0, the corresponding unconditional moments of the stochastic vector Z are determined first. Due to the stationarity it follows that

κX(0)X(t) = σX² ρ(t), κX(0)X(0) = κX(t)X(t) = σX², κX(0)Ẋ(0) = κX(t)Ẋ(t) = 0. (5.47)

Furthermore, from equations (5.20), (5.21) and (5.22), it follows that

κX(0)Ẋ(t) = −κẊ(0)X(t) = σX² ρ′(t), κẊ(0)Ẋ(t) = −σX² ρ″(t). (5.48)

The mean and covariance matrix of Z read

μZ = [μX0; μX], μX0 = μX = [μX, 0]ᵀ, (5.49)

CZZ = [CX0X0, CX0X; CX0Xᵀ, CXX]
    = σX² [ 1,      0,       ρ(t),    ρ′(t);
            0,     −ρ″(0),  −ρ′(t),  −ρ″(t);
            ρ(t), −ρ′(t),    1,       0;
            ρ′(t), −ρ″(t),   0,      −ρ″(0) ]. (5.50)

μX|X0 and CXX|X0 then follow from Example 2.11,

μX|X0 = μX0 + [ρ(t), −ρ′(t); ρ′(t), −ρ″(t)] [1, 0; 0, −ρ″(0)]⁻¹ [x0 − μX; ẋ0], (5.51)

CXX|X0 = CXX − CX0Xᵀ CX0X0⁻¹ CX0X
       = σX² [ 1 − ρ²(t) + ρ′²(t)/ρ″(0),         −ρ′(t)(ρ(t) − ρ″(t)/ρ″(0));
               −ρ′(t)(ρ(t) − ρ″(t)/ρ″(0)),       −ρ″(0) − ρ′²(t) + ρ″²(t)/ρ″(0) ]. (5.52)
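Example 5.4 is a special case of Gaussian conditioning. The sketch below is our own illustration, using ρ(t) = e^{−at²} as a concrete correlation coefficient function: it builds CZZ per (5.50), computes the conditional covariance by the generic Schur-complement formula, and compares it entrywise with the closed form (5.52).

```python
import numpy as np

# rho(t) = exp(-a t^2); rho1 and rho2 are its exact first and second derivatives.
a, sigma2, t = 0.3, 1.5, 0.8
rho = np.exp(-a * t**2)
rho1 = -2 * a * t * rho                        # rho'(t)
rho2 = (-2 * a + 4 * a**2 * t**2) * rho        # rho''(t)
rho20 = -2 * a                                 # rho''(0)

# Covariance matrix of Z = [X(0), Xdot(0), X(t), Xdot(t)], eq. (5.50).
C = sigma2 * np.array([
    [1.0,    0.0,   rho,   rho1],
    [0.0,  -rho20, -rho1, -rho2],
    [rho,  -rho1,   1.0,   0.0],
    [rho1, -rho2,   0.0, -rho20],
])

# Conditional covariance C_XX|X0 = C_XX - C_X0X^T C_X0X0^-1 C_X0X.
C00, C0X, CXX = C[:2, :2], C[:2, 2:], C[2:, 2:]
C_cond = CXX - C0X.T @ np.linalg.inv(C00) @ C0X

# Entrywise closed form of (5.52).
c11 = sigma2 * (1 - rho**2 + rho1**2 / rho20)
c12 = sigma2 * (-rho1 * (rho - rho2 / rho20))
c22 = sigma2 * (-rho20 - rho1**2 + rho2**2 / rho20)
print(C_cond)
print(c11, c12, c22)
```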
5.3. Stochastic integration

Consider a stochastic process {X(t), t ∈ [a, b]} defined on the sample space Ω. x(t, ω) denotes a realization of X(t), where ω ∈ Ω is a random sample. Define the following Riemann integral

y(t, ω) = ∫_a^t h(t, τ) x(τ, ω) dτ, (5.53)

where the weighting function h(t, τ) is deterministic. If the Riemann integral in equation (5.53) exists for all ω ∈ Ω and for any t ∈ [a, b], a stochastic process {Y(t), t ∈ [a, b]} is generated. This stochastic process is the stochastic integral or the integrated process of {X(t), t ∈ [a, b]} with the weight h(t, τ). The integral process is formally written as

Y(t) = ∫_a^t h(t, τ) X(τ) dτ. (5.54)

Equation (5.54) can be considered as the limit of the following random sequence in the mean square sense,

Yn(t) = Σ_{k=1}^{n} h(t, τk) X(τk) Δτk, (5.55)

where a = τ1 < τ2 < · · · < τn = t and Δτk = τk − τk−1. This integral exists as long as RYY(t, t) exists and is finite according to Theorem 5.2. That is, the following integral exists and is finite,

∫_a^t ∫_a^t h(t, τ1) h(t, τ2) RXX(τ1, τ2) dτ1 dτ2. (5.56)

We can take the derivative of the integrated process, leading to the Leibniz rule in the mean square sense,

Ẏ(t) = ∫_a^t (∂h(t, τ)/∂t) X(τ) dτ + h(t, t) X(t). (5.57)

When we integrate a derivative process, we obtain by integration by parts

Y(t) = ∫_a^t h(t, τ) Ẋ(τ) dτ = [h(t, τ)X(τ)]_a^t − ∫_a^t (∂h(t, τ)/∂τ) X(τ) dτ. (5.58)

In particular, when h(t, τ) = 1, we have

X(t) − X(a) = ∫_a^t Ẋ(τ) dτ. (5.59)
EXAMPLE 5.5. Consider again the white noise X(t) with mean value μX = 0 and the autocorrelation function RXX(t, s) = 2πS0 δ(t − s). Then,

∫_a^t ∫_a^t h(t, τ1) h(t, τ2) RXX(τ1, τ2) dτ1 dτ2
  = ∫_a^t ∫_a^t h(t, τ1) h(t, τ2) 2πS0 δ(τ2 − τ1) dτ1 dτ2
  = 2πS0 ∫_a^t h²(t, τ1) dτ1. (5.60)

As long as ∫_a^t h²(t, τ1) dτ1 < ∞, the integral defined in equation (5.54) exists in the mean square sense. We can show that the impulse response function of all damped dynamical systems satisfies this condition.

5.3.1. Statistical properties of stochastic integrals

Consider the mean and autocovariance function of the integrated process,
μY(t) = E[Y(t)] = lim_{Δτk→0, n→∞} Σ_{k=1}^{n} h(t, τk) E[X(τk)] Δτk = ∫_a^t h(t, τ) μX(τ) dτ, (5.61)

κYY(t1, t2) = lim_{Δτj→0, Δτk→0, n→∞} Σ_{j=1}^{n} Σ_{k=1}^{n} h(t1, τj) h(t2, τk) κXX(τj, τk) Δτj Δτk
            = ∫_a^{t1} ∫_a^{t2} h(t1, τ1) h(t2, τ2) κXX(τ1, τ2) dτ1 dτ2. (5.62)

The joint nth order moment reads

E[Y(t1)Y(t2) · · · Y(tn)] = ∫_a^{t1} ∫_a^{t2} · · · ∫_a^{tn} h(t1, τ1) h(t2, τ2) · · · h(tn, τn) E[X(τ1)X(τ2) · · · X(τn)] dτ1 dτ2 · · · dτn. (5.63)

Recall that the limit is in the mean square sense.
Integration of Gaussian processes

Let {X(t), t ∈ [a, b]} be a normal process. Consider the n-dimensional random variable [Y(t1), Y(t2), . . . , Y(tn)], where Y(t) is the integration of X(t) given in equation (5.54). Equation (5.55) defines a linear transformation from the Gaussian vector [X(τ1), X(τ2), . . . , X(τn)] to another vector [Y(t1), Y(t2), . . . , Y(tn)]. Hence, this n-dimensional vector of Y(t) is normal. That is to say, the integrated process is a Gaussian process if X(t) is Gaussian.

Integration of vector processes

Consider an N-dimensional stochastic vector process {X(t), t ∈ [a, b]} and an M-dimensional integrated vector process {Y(t), t ∈ [a, b]} such that

Yk(t) = Σ_{j=1}^{N} ∫_a^t hkj(t, τ) Xj(τ) dτ, k = 1, . . . , M, (5.64)

or

Y(t) = ∫_a^t h(t, τ) X(τ) dτ, (5.65)

where hkj(t, τ) are the components of an (M × N)-dimensional weighting matrix h(t, τ). The mean function μYk(t), cross-covariance function κYi1Yi2(t1, t2) and joint nth order moment of the integrated vector process can be written as

μYk(t) = Σ_{j=1}^{N} ∫_a^t hkj(t, τ) μXj(τ) dτ, (5.66)

κYi1Yi2(t1, t2) = Σ_{j1=1}^{N} Σ_{j2=1}^{N} ∫_a^{t1} ∫_a^{t2} hi1j1(t1, τ1) hi2j2(t2, τ2) κXj1Xj2(τ1, τ2) dτ1 dτ2, (5.67)

E[Yi1(t1)Yi2(t2) · · · Yin(tn)] = Σ_{j1=1}^{N} Σ_{j2=1}^{N} · · · Σ_{jn=1}^{N} ∫_a^{t1} ∫_a^{t2} · · · ∫_a^{tn} hi1j1(t1, τ1) hi2j2(t2, τ2) · · · hinjn(tn, τn) E[Xj1(τ1)Xj2(τ2) · · · Xjn(τn)] dτ1 dτ2 · · · dτn. (5.68)
5.3.2. Integration of weakly stationary processes

In the following, we consider a special weighting function, which is the impulse response function of a dynamic system, such that

h(t, τ) = h(t − τ) for t ≥ τ, and h(t, τ) = 0 for t < τ. (5.69)

Assume that X(t) is weakly stationary. Consider an integrated process defined as

Y(t) = ∫_{−∞}^{t} h(t − τ) X(τ) dτ. (5.70)

The mean function of the integrated process is

μY(t) = ∫_{−∞}^{t} h(t − τ) μX dτ = μX ∫_{−∞}^{∞} h(t − τ) dτ = μX ∫_{−∞}^{∞} h(u) du ≡ μY. (5.71)

Hence, μY(t) is constant also. The autocovariance function is given by

κYY(t1, t2) = ∫_{−∞}^{t1} ∫_{−∞}^{t2} h(t1 − τ1) h(t2 − τ2) κXX(τ2 − τ1) dτ1 dτ2
            = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(u1) h(u2) κXX(t2 − t1 + u1 − u2) du1 du2
            = κYY(t2 − t1). (5.72)
∞
e−iωτ κY Y (τ ) dτ
−∞ ∞
e −∞
−iωτ
∞ ∞ h(u1 )h(u2 )κXX (τ + u1 − u2 ) du1 du2 dτ −∞ −∞
5.3. Stochastic integration
∞
∞
=
e
iωu1
h(u1 ) du1
−∞
×
79
e−iωu2 h(u2 ) du2
−∞
1 2π
∞
e−iω(τ +u1 −u2 ) κXX (τ + u1 − u2 ) dτ.
(5.73)
−∞
The last integral of equation (5.73) is equal to SXX (ω). Equation (5.73) can then be written as 2 SY Y (ω) = H ∗ (ω)H (ω)SXX (ω) = H (ω) SXX (ω), (5.74) where ∞ H (ω) =
e−iωu h(u) du.
(5.75)
−∞
H (ω) is known as the frequency response function of the system. The impulse response function is related to H (ω) by 1 h(τ ) = 2π
∞ eiωτ H (ω) dω.
(5.76)
−∞
Setting ω = 0 in equation (5.75), we have

H(0) = ∫_{−∞}^{∞} h(u) du = 1/k, (5.77)

where H(0) is known as the DC gain or the static response of the system, and k is the static stiffness. Hence, μY in equation (5.71) reads

μY = (1/k) μX. (5.78)

Consider now the integration of stationary vector processes,

Yk(t) = Σ_{j=1}^{N} ∫_{−∞}^{t} hkj(t − τ) Xj(τ) dτ, (5.79)
where k = 1, . . . , M. We have the mean, cross-covariance and cross-spectral density functions as follows,

μYi = Σ_{j=1}^{N} μXj ∫_{−∞}^{∞} hij(u) du, (5.80)

κYi1Yi2(τ) = Σ_{j1=1}^{N} Σ_{j2=1}^{N} ∫_{−∞}^{∞} ∫_{−∞}^{∞} hi1j1(u1) hi2j2(u2) κXj1Xj2(τ + u1 − u2) du1 du2, (5.81)

SYi1Yi2(ω) = Σ_{j1=1}^{N} Σ_{j2=1}^{N} H*i1j1(ω) Hi2j2(ω) SXj1Xj2(ω), (5.82)

where τ = t2 − t1, and

Hi1j1(ω) = ∫_{−∞}^{∞} e^{−iωu} hi1j1(u) du. (5.83)

The (M × N)-dimensional matrix H(ω) with components Hi1j1(ω) is known as the frequency response matrix.

5.3.3. Riemann–Stieltjes integrals

Stochastic integrals also come in the form of Riemann–Stieltjes integrals as follows,

Y = ∫_a^b g(t) dX(t), and Z = ∫_a^b X(t) dg(t). (5.84)
These integrals can be defined via the following limits,

Yn = Σ_{k=1}^{n} g(tk) [X(tk) − X(tk−1)], and (5.85)

Zn = Σ_{k=1}^{n} X(tk) [g(tk) − g(tk−1)]. (5.86)

According to Theorem 5.2, the integrals exist if and only if the following Riemann–Stieltjes integrals exist and are finite,

∫_a^b ∫_a^b g(t) g(s) d_t d_s RXX(t, s), and (5.87)

∫_a^b ∫_a^b RXX(t, s) dg(t) dg(s). (5.88)
Furthermore, it is interesting to point out that the existence of one integral, say Y, implies that of the other. This can be shown by the following result of integration by parts,

∫_a^b g(t) dX(t) = [g(t)X(t)]_a^b − ∫_a^b X(t) dg(t). (5.89)

5.4. Itô calculus

In Itô calculus, our attention is focused on the dynamical systems driven by Brownian motions such that the response is a Markov process (Jazwinski, 1970; Lin and Cai, 1995; Soong, 1973).

5.4.1. Brownian motion

The Brownian motion B(t) is a special stochastic process, also known as the Wiener process, with the following interesting properties. It is a Gaussian process with zero mean, and a covariance function

RBB(t, s) = κBB(t, s) = E[B(t)B(s)] = σB² min(t, s). (5.90)

Let t1 < t2 < t3 < t4. Then,

E{[B(t2) − B(t1)][B(t4) − B(t3)]} = E[B(t2)B(t4) − B(t1)B(t4) − B(t2)B(t3) + B(t1)B(t3)]
                                  = σB²(t2 − t1 − t2 + t1) = 0. (5.91)
Hence, B(t) has independent increments over nonoverlapping time intervals. All the increments of the Brownian motion have zero mean,

E[B(t) − B(s)] = E[B(t)] − E[B(s)] = 0. (5.92)

Let dB(t) = B(t + dt) − B(t) denote a differential increment of B(t) at time t. Then, we have

E[dB(t) dB(s)] = σB² dt for t = s, and E[dB(t) dB(s)] = 0 for t ≠ s. (5.93)

In the mean square sense, we can write

B(t + Δt) − B(t) = ∫_t^{t+Δt} dB(τ). (5.94)

We obtain

E[B(t + Δt) − B(t)] = 0, (5.95)

E{[B(t + Δt) − B(t)]²} = ∫_t^{t+Δt} σB² dτ = σB² Δt, (5.96)

E[∫_t^{t+Δt} dB(τ1) ∫_t^{t+Δt} dB(τ2)] = σB² Δt. (5.97)
Lévy’s oscillation property

Consider the Brownian motion in a finite time interval [a, b]. Let a = t0 < t1 < · · · < tn = b, Δtk = tk − tk−1 and Δn = max_k Δtk. Then,

l.i.m._{Δn→0, n→∞} Σ_{k=1}^{n} [B(tk) − B(tk−1)]² = σB²(b − a). (5.98)

This is known as Lévy’s oscillation property of the Brownian motion. To prove the Lévy oscillation property, we consider a random sequence defined as

Xn = Σ_{k=1}^{n} [B(tk) − B(tk−1)]² − σB²(b − a). (5.99)

According to equation (5.96), we have

E[Xn] = Σ_{k=1}^{n} E{[B(tk) − B(tk−1)]²} − σB²(b − a) = Σ_{k=1}^{n} σB²(tk − tk−1) − σB²(b − a) = 0, (5.100)

and

E[Xn²] = E{ Σ_{k=1}^{n} Σ_{j=1}^{n} [B(tk) − B(tk−1)]² [B(tj) − B(tj−1)]² }
         − 2 Σ_{k=1}^{n} E{[B(tk) − B(tk−1)]²} σB²(b − a) + σB⁴(b − a)²
       = E{ Σ_{k=1}^{n} Σ_{j=1}^{n} [B(tk) − B(tk−1)]² [B(tj) − B(tj−1)]² } − σB⁴(b − a)². (5.101)

Note that when j ≠ k, B(tj) − B(tj−1) and B(tk) − B(tk−1) are independent. Hence,

E{ Σ_{k=1}^{n} Σ_{j=1}^{n} [B(tk) − B(tk−1)]² [B(tj) − B(tj−1)]² }
  = Σ_{k=1}^{n} E{[B(tk) − B(tk−1)]⁴} + Σ_{k=1}^{n} Σ_{j=1, j≠k}^{n} E{[B(tk) − B(tk−1)]²} E{[B(tj) − B(tj−1)]²}
  = Σ_{k=1}^{n} 3σB⁴(tk − tk−1)² + Σ_{k=1}^{n} Σ_{j=1, j≠k}^{n} σB²(tk − tk−1) σB²(tj − tj−1)
  = 2 Σ_{k=1}^{n} σB⁴(tk − tk−1)² + Σ_{k=1}^{n} Σ_{j=1}^{n} σB⁴(tk − tk−1)(tj − tj−1)
  = 2 Σ_{k=1}^{n} σB⁴(tk − tk−1)² + σB⁴(b − a)², (5.102)

where we have used the property of the zero mean Gaussian random variable, E[X⁴] = 3E[X²]². Therefore, we have

E[Xn²] = 2 Σ_{k=1}^{n} σB⁴(tk − tk−1)² + Σ_{k=1}^{n} Σ_{j=1}^{n} σB⁴(tk − tk−1)(tj − tj−1) − σB⁴(b − a)²
       = 2 Σ_{k=1}^{n} σB⁴(tk − tk−1)²
       ≤ 2σB⁴ Δn Σ_{k=1}^{n} (tk − tk−1) = 2σB⁴(b − a) Δn → 0, as n → ∞, Δn → 0. (5.103)

A special case of Lévy’s oscillation property of the Brownian motion is when the time interval is [t, t + dt]. We have then, in the mean square sense,

dB(t) · dB(t) = [B(t + dt) − B(t)]² = σB² dt. (5.104)

This result is a stronger statement of equation (5.93). It indicates that dB(t) is of the order dt^{1/2}. Hence, the ratio dB(t)/dt is always unbounded as dt → 0. Therefore, the Brownian motion is not differentiable in the mean square sense.
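Both the quadratic-variation limit (5.98) and the unbounded variation of B(t) are easy to observe in simulation. The sketch below is our own illustration; the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Quadratic variation sum (Delta B)^2 -> sigma_B^2 (b - a) as the partition of
# [a, b] is refined (eq. 5.98), while sum |Delta B| grows like sqrt(n) (eq. 5.106).
sigma_B, a, b = 1.3, 0.0, 2.0
tvs = []
for n in (10**2, 10**4, 10**6):
    dB = rng.normal(scale=sigma_B * np.sqrt((b - a) / n), size=n)
    qv = np.sum(dB**2)               # quadratic variation
    tvs.append(np.sum(np.abs(dB)))   # total variation
    print(n, qv, tvs[-1])
print(sigma_B**2 * (b - a))
```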
Chapter 5. Stochastic Calculus
84
Since

$$\sum_{k=1}^{n} \bigl[B(t_k)-B(t_{k-1})\bigr]^2 \le \max_{1\le j\le n}\bigl|B(t_j)-B(t_{j-1})\bigr| \times \sum_{k=1}^{n} \bigl|B(t_k)-B(t_{k-1})\bigr|, \tag{5.105}$$

and $\max_{1\le j\le n}|B(t_j)-B(t_{j-1})| \to 0$ as $n\to\infty$ and $\Delta_n\to 0$, we have

$$\mathop{\mathrm{l.i.m.}}_{\substack{n\to\infty \\ \Delta_n\to 0}} \sum_{k=1}^{n} \bigl|B(t_k)-B(t_{k-1})\bigr| \to \infty. \tag{5.106}$$

This indicates that $B(t)$ has unbounded variation in any finite time interval.

5.4.2. Itô and Stratonovich integrals

Consider a Riemann–Stieltjes integral defined as

$$Y(t) = \int_a^t f\bigl[X(\tau), \tau\bigr]\, dB(\tau), \tag{5.107}$$

where $X(t)$ is a stochastic process and $B(t)$ is a Brownian motion. Let $a = \tau_0 < \tau_1 < \cdots < \tau_n = t$. This integral can be viewed as the mean square limit of a random sequence of partial sums. In particular, when the sequence is defined as

$$Y_n^I(t) = \sum_{k=1}^{n} f\bigl[X(\tau_{k-1}), \tau_{k-1}\bigr]\bigl[B(\tau_k) - B(\tau_{k-1})\bigr], \tag{5.108}$$
where $f[X(\tau),\tau]$ is evaluated at the lower end of the time interval and $dB(\tau)$ is taken as the forward difference, we obtain the so-called Itô integral. Because the increment $B(\tau_k) - B(\tau_{k-1})$ occurs after the time instance $\tau_{k-1}$, it is independent of $X(\tau_{k-1})$. This offers advantages in evaluating expectations. However, the integration rules of conventional calculus may no longer apply. On the other hand, when the sequence is defined as

$$Y_n^S(t) = \sum_{k=1}^{n} f\bigl[X(\tau_{k-1/2}), \tau_{k-1/2}\bigr]\bigl[B(\tau_k) - B(\tau_{k-1})\bigr], \tag{5.109}$$

where $\tau_{k-1/2} = (\tau_k + \tau_{k-1})/2$ is a shorthand, we obtain the Stratonovich integral. The integration rules of conventional calculus are all applicable to the Stratonovich integral. However, in many applications, the process $X(\tau_{k-1/2})$ may depend on the increment $B(\tau_k) - B(\tau_{k-1})$, which makes the evaluation of expectations difficult.
EXAMPLE 5.6. Consider a special Itô integral

$$(I)\int_a^t B(\tau)\, dB(\tau) = \mathop{\mathrm{l.i.m.}}_{\substack{n\to\infty \\ \Delta_n\to 0}} \sum_{k=1}^{n} B(\tau_{k-1})\bigl[B(\tau_k) - B(\tau_{k-1})\bigr]$$
$$= \mathop{\mathrm{l.i.m.}}_{\substack{n\to\infty \\ \Delta_n\to 0}} \frac{1}{2}\Biggl\{\sum_{k=1}^{n} \bigl[B^2(\tau_k) - B^2(\tau_{k-1})\bigr] - \sum_{k=1}^{n} \bigl[B(\tau_k) - B(\tau_{k-1})\bigr]^2\Biggr\}$$
$$= \frac{1}{2}\bigl[B^2(t) - B^2(a)\bigr] - \frac{1}{2}\sigma_B^2 (t-a), \tag{5.110}$$

where $(I)$ indicates the Itô integral, $\Delta_n = \max_k \Delta\tau_k$ and $\Delta\tau_k = \tau_k - \tau_{k-1}$.
EXAMPLE 5.7. The same integral in the Stratonovich sense is given by

$$(S)\int_a^t B(\tau)\, dB(\tau) = \mathop{\mathrm{l.i.m.}}_{\substack{n\to\infty \\ \Delta_n\to 0}} \sum_{k=1}^{n} B(\tau_{k-1/2})\bigl[B(\tau_k) - B(\tau_{k-1})\bigr]$$
$$= \mathop{\mathrm{l.i.m.}}_{\substack{n\to\infty \\ \Delta_n\to 0}} \sum_{k=1}^{n} \frac{1}{2}\bigl[B(\tau_k) + B(\tau_{k-1})\bigr]\bigl[B(\tau_k) - B(\tau_{k-1})\bigr] = \frac{1}{2}\bigl[B^2(t) - B^2(a)\bigr], \tag{5.111}$$

where $(S)$ indicates the Stratonovich integral. Clearly, the Stratonovich integral follows the usual integration rule

$$(S)\int_a^t B(\tau)\, dB(\tau) = \int_a^t \frac{1}{2}\, dB^2(\tau) = \frac{1}{2}\bigl[B^2(t) - B^2(a)\bigr]. \tag{5.112}$$
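The two examples above can be reproduced numerically (an illustrative sketch with an assumed discretization, not part of the text): the left-point sum converges to the Itô value (5.110), while the midpoint-average sum matches the Stratonovich value (5.111).

```python
import numpy as np

# Left-point (Ito) and midpoint-average (Stratonovich) sums for the integral
# of B dB over [a, t], compared against the closed forms (5.110) and (5.111).
# A unit Brownian motion (sigma_B = 1) is assumed.
rng = np.random.default_rng(1)
a, t, n = 0.0, 1.0, 200_000
dtau = (t - a) / n
B = np.concatenate(([0.0], np.cumsum(np.sqrt(dtau) * rng.standard_normal(n))))

dB = np.diff(B)
ito = np.sum(B[:-1] * dB)                    # f evaluated at the left end point
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)  # f averaged over the end points

ito_exact = 0.5 * (B[-1]**2 - B[0]**2) - 0.5 * (t - a)
strat_exact = 0.5 * (B[-1]**2 - B[0]**2)
print(ito - ito_exact, strat - strat_exact)  # both differences are small
```

The midpoint-average sum telescopes exactly to $\tfrac{1}{2}[B^2(t)-B^2(a)]$, while the left-point sum differs from it by half the quadratic variation, which is the content of Example 5.6.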
5.4.3. Itô and Stratonovich differential equations

Consider a stochastic differential equation

$$dX(t) = m(X,t)\, dt + \sigma(X,t)\, dB(t), \tag{5.113}$$

and its formal solution

$$X(t) = X(0) + \int_0^t m(X,\tau)\, d\tau + \int_0^t \sigma(X,\tau)\, dB(\tau). \tag{5.114}$$

From now on, we assume that $B(t)$ is a unit Brownian motion with $\sigma_B = 1$. One can view $\sigma(X,t)$ as a scaling factor that absorbs the parameter $\sigma_B$.
When the Riemann–Stieltjes integral of the formal solution is interpreted in the Itô sense, equation (5.113) is called the Itô differential equation. When this integral is viewed in the Stratonovich sense, equation (5.113) is called the Stratonovich differential equation. It should be noted that the first integral in the formal solution is Riemannian in the mean square sense. It is interesting then to find out the difference between the Itô and Stratonovich integrals:

$$(S)\int_0^t \sigma(X,\tau)\, dB(\tau) - (I)\int_0^t \sigma(X,\tau)\, dB(\tau) = \mathop{\mathrm{l.i.m.}}_{\substack{n\to\infty \\ \Delta_n\to 0}} \sum_{k=1}^{n} \bigl\{\sigma\bigl[X(\tau_{k-1/2}), \tau_{k-1/2}\bigr] - \sigma\bigl[X(\tau_{k-1}), \tau_{k-1}\bigr]\bigr\}\bigl[B(\tau_k) - B(\tau_{k-1})\bigr]. \tag{5.115}$$

Assume that the function $\sigma$ is differentiable so that a Taylor expansion can be obtained as

$$\sigma\bigl[X(\tau_{k-1/2}), \tau_{k-1/2}\bigr] = \sigma\bigl[X(\tau_{k-1}), \tau_{k-1}\bigr] + \frac{\partial\sigma(X(\tau_{k-1}), \tau_{k-1})}{\partial X}\,\frac{X(\tau_k) - X(\tau_{k-1})}{2} + \frac{\partial\sigma(X(\tau_{k-1}), \tau_{k-1})}{\partial\tau}\,\frac{\Delta\tau_k}{2} + \text{h.o.t.}, \tag{5.116}$$

where h.o.t. denotes the higher order terms. Note that as $n\to\infty$ and $\Delta_n\to 0$, $X(\tau_k) - X(\tau_{k-1}) \to dX(\tau)$ and $B(\tau_k) - B(\tau_{k-1}) \to dB(\tau)$. From equation (5.113), we have

$$dX(\tau)\, dB(\tau) = m(X,\tau)\, d\tau\, dB(\tau) + \sigma(X,\tau)\, dB(\tau)\, dB(\tau) = \sigma(X,\tau)\, d\tau + \text{h.o.t.} \tag{5.117}$$

Note that the term including the product $\Delta\tau_k\bigl[B(\tau_k) - B(\tau_{k-1})\bigr] \to d\tau\, dB(\tau)$ is of higher order than $d\tau$. Substituting equations (5.116) and (5.117) in equation (5.115), and dropping higher order terms, we obtain the difference between the two integrals

$$(S)\int_0^t \sigma(X,\tau)\, dB(\tau) - (I)\int_0^t \sigma(X,\tau)\, dB(\tau) = \frac{1}{2}\int_0^t \frac{\partial\sigma(X(\tau),\tau)}{\partial X}\,\sigma(X,\tau)\, d\tau. \tag{5.118}$$
The right-hand side of the above equation is a Riemann integral in the mean square sense.

5.4.3.1. Wong–Zakai correction

Consider the Stratonovich stochastic differential equation

$$(S)\quad dX(t) = m(X,t)\, dt + \sigma(X,t)\, dB(t), \tag{5.119}$$

and its formal solution

$$(S)\quad X(t) = X(0) + \int_0^t m(X,\tau)\, d\tau + \int_0^t \sigma(X,\tau)\, dB(\tau). \tag{5.120}$$

By making use of equation (5.118), we convert equation (5.120) to an Itô integral as

$$(I)\quad X(t) = X(0) + \int_0^t \biggl[m(X,\tau) + \frac{1}{2}\frac{\partial\sigma(X,\tau)}{\partial X}\,\sigma(X,\tau)\biggr] d\tau + \int_0^t \sigma(X,\tau)\, dB(\tau). \tag{5.121}$$

Hence, $X(t)$ satisfies the following Itô differential equation

$$(I)\quad dX(t) = \biggl[m(X,t) + \frac{1}{2}\frac{\partial\sigma(X,t)}{\partial X}\,\sigma(X,t)\biggr] dt + \sigma(X,t)\, dB(t), \tag{5.122}$$

where the term $\frac{1}{2}\frac{\partial\sigma(X,t)}{\partial X}\sigma(X,t)$ is called the Wong–Zakai correction. For the $n$-dimensional vector stochastic process $X_j(t)$ satisfying the Stratonovich differential equation

$$(S)\quad dX_j(t) = m_j(X,t)\, dt + \sum_{k=1}^{m} \sigma_{jk}(X,t)\, dB_k(t), \tag{5.123}$$

the corresponding Itô differential equation is given by

$$(I)\quad dX_j(t) = m_j(X,t)\, dt + \frac{1}{2}\sum_{k=1}^{n}\sum_{l=1}^{m} \frac{\partial\sigma_{jl}(X,t)}{\partial X_k}\,\sigma_{kl}(X,t)\, dt + \sum_{k=1}^{m} \sigma_{jk}(X,t)\, dB_k(t), \tag{5.124}$$
where $dB_k(t)$ are increments of $m$ independent unit Brownian motions. In general, $n \neq m$. Equations (5.123) and (5.124) are the essence of the Wong–Zakai limit theorem. In the theorem, $dB_k(t)$ in the Stratonovich equation represents a piecewise differentiable Gaussian process, modeling a physical phenomenon, that converges to the mathematical Brownian motion in some sense. Finally, we note that when $\sigma_{jk}(X,t)$ is not a function of $X$, the Wong–Zakai correction vanishes, and the Itô and Stratonovich differential equations are completely equivalent.

5.4.4. Itô's lemma

Assume that the $n$-dimensional vector stochastic process $X_j(t)$ satisfies the Itô differential equation

$$(I)\quad dX_j(t) = m_j(X,t)\, dt + \sum_{k=1}^{m} \sigma_{jk}(X,t)\, dB_k(t). \tag{5.125}$$
Let $F(X,t)$ be an arbitrary function of $X$ and $t$. Then, the differential of the function is given by the Taylor expansion of $F(X,t)$ up to the order $dt$,

$$dF(X,t) = \Biggl[\frac{\partial F}{\partial t} + \sum_{j=1}^{n} m_j \frac{\partial F}{\partial X_j} + \frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n} b_{jk} \frac{\partial^2 F}{\partial X_j \partial X_k}\Biggr] dt + \sum_{j=1}^{n}\sum_{k=1}^{m} \sigma_{jk} \frac{\partial F}{\partial X_j}\, dB_k(t), \tag{5.126}$$

where

$$b_{jk} = \sum_{l=1}^{m} \sigma_{jl}\sigma_{kl}. \tag{5.127}$$

It is worthwhile pointing out that, according to conventional calculus, the total differential of the function $F(X,t)$ is given by the chain rule

$$dF(X,t) = \frac{\partial F}{\partial t}\, dt + \sum_{j=1}^{n} \frac{\partial F}{\partial X_j}\, dX_j(t) = \Biggl[\frac{\partial F}{\partial t} + \sum_{j=1}^{n} m_j \frac{\partial F}{\partial X_j}\Biggr] dt + \sum_{j=1}^{n}\sum_{k=1}^{m} \sigma_{jk} \frac{\partial F}{\partial X_j}\, dB_k(t), \tag{5.128}$$

where the term

$$\frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n} b_{jk} \frac{\partial^2 F}{\partial X_j \partial X_k}$$

is missing. This term is due to the peculiar property of the Brownian motion, $dB(t)^2 \sim dt$, which can only be captured by including the second order derivatives of $F(X,t)$ with respect to $X$ in the total differential formula.

5.4.5. Moment equations

An important application of Itô's lemma is to derive a set of moment equations for the response of the Itô differential equation. Recall that in the Itô sense, $dB_k(t)$ is defined as the forward difference and $\sigma_{jk}(X,t)$ is independent of $dB_k(t)$. Also, $E[dB_k(t)] = 0$. Taking the mathematical expectation on both sides of equation (5.124), we have
$$\frac{dE[X_j(t)]}{dt} = E\bigl[m_j(X,t)\bigr]. \tag{5.129}$$

Consider $F(X,t) = X_j X_k$. According to Itô's lemma, we have

$$d(X_j X_k) = \bigl[m_j X_k + m_k X_j + b_{jk}\bigr] dt + \sum_{l=1}^{m} (\sigma_{jl} X_k + \sigma_{kl} X_j)\, dB_l(t). \tag{5.130}$$

Taking the expectation of the equation, and recalling that

$$E\bigl[(\sigma_{jl} X_k + \sigma_{kl} X_j)\, dB_l(t)\bigr] = 0, \tag{5.131}$$

because of the independence of the terms $\sigma_{jl} X_k + \sigma_{kl} X_j$ and $dB_l(t)$ in the Itô sense, we have the equation for the correlation function

$$\frac{dE[X_j X_k]}{dt} = E\bigl[m_j X_k + m_k X_j + b_{jk}\bigr]. \tag{5.132}$$

By following the same steps, we can construct differential equations governing the evolution of the moments of any order.

EXAMPLE 5.8. Consider a linear Itô differential equation

$$dX_j(t) = \sum_{k=1}^{n} A_{jk} X_k(t)\, dt + \sum_{k=1}^{m} \sigma_{jk}\, dB_k(t). \tag{5.133}$$

The equation governing the evolution of the mean is

$$\frac{dE[X_j(t)]}{dt} = \sum_{k=1}^{n} A_{jk} E\bigl[X_k(t)\bigr]. \tag{5.134}$$

In vector form, it reads

$$\frac{dE[X]}{dt} = A\, E[X]. \tag{5.135}$$

The correlation matrix satisfies the following equation

$$\frac{dE[X_j X_k]}{dt} = \sum_{l=1}^{n} \bigl(A_{jl} E[X_l X_k] + A_{kl} E[X_l X_j]\bigr) + b_{jk}, \tag{5.136}$$

or

$$\frac{dR_{XX}}{dt} = A R_{XX} + R_{XX} A^T + b. \tag{5.137}$$
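The moment equations (5.135) and (5.137) can be integrated directly as ordinary differential equations. The following sketch (with assumed parameter values, not taken from the text) does this for a linear oscillator of the type in the examples of this book, $dX_1 = X_2\,dt$, $dX_2 = (-\omega_0^2 X_1 - 2\zeta\omega_0 X_2)\,dt + \sqrt{2D}\,dB(t)$:

```python
import numpy as np

# Euler integration of the mean equation (5.135) and covariance equation
# (5.137) for a linear oscillator; w0, zeta, D are illustrative assumptions.
w0, zeta, D = 2.0, 0.25, 0.5
A = np.array([[0.0, 1.0], [-w0**2, -2.0 * zeta * w0]])
b = np.array([[0.0, 0.0], [0.0, 2.0 * D]])   # b = sigma sigma^T per (5.127)

mu = np.array([1.0, 0.0])                    # initial mean
R = np.zeros((2, 2))                         # initial correlation matrix
dt, nsteps = 1e-3, 40_000
for _ in range(nsteps):
    mu = mu + dt * (A @ mu)                  # d mu/dt = A mu
    R = R + dt * (A @ R + R @ A.T + b)       # dR/dt = A R + R A^T + b

# The stationary values solve A R + R A^T + b = 0:
# E[X1^2] -> D/(2 zeta w0^3) = 0.125, E[X2^2] -> D/(2 zeta w0) = 0.5
print(mu, R[0, 0], R[1, 1])
```

The fixed point of the covariance iteration is exactly the stationary Lyapunov solution of $AR + RA^T + b = 0$, so the printed values converge to the analytical stationary variances.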
Exercises

EXERCISE 5.1. Define an inner product of two random variables $X$ and $Y$ as $\langle X, Y\rangle = E[XY]$, a norm of a random variable $X$ as $\|X\| = \sqrt{\langle X, X\rangle}$, and a distance between two random variables as $d(X,Y) = \|X - Y\|$. Show that

$$d(X,Y) \ge 0,$$
$$d(X,Y) = 0 \quad\text{if and only if}\quad X = Y,$$
$$d(X,Y) = d(Y,X), \tag{5.138}$$
$$d(X,Y) \le d(X,Z) + d(Z,Y).$$

EXERCISE 5.2. Prove that if $\mathrm{l.i.m.}_{t\to t_0} X(t) = X$ and $\mathrm{l.i.m.}_{t\to t_0} X(t) = Y$, then $X$ and $Y$ are equivalent in the sense that $P(\{X \neq Y\}) = 0$.

EXERCISE 5.3. Let $\{X_n\}$ be a sequence of random variables defined by

$$X_n = \frac{1}{n}\sum_{k=1}^{n} Y_k, \tag{5.139}$$

where $Y_k$ are independent and identically distributed random variables with zero mean and variance $\sigma_Y^2$. Show that

$$\lim_{n,m\to\infty} \|X_n - X_m\|^2 = 0. \tag{5.140}$$

Hence, $\mathrm{l.i.m.}_{n\to\infty} X_n = X$ and $E[X] = 0$. Show that

$$\lim_{n\to\infty} E\bigl[X_n^2\bigr] = E\Bigl[\Bigl(\mathop{\mathrm{l.i.m.}}_{n\to\infty} X_n\Bigr)^2\Bigr] = \lim_{n\to\infty} \frac{\sigma_Y^2}{n}. \tag{5.141}$$
EXERCISE 5.4. Let $W(t)$ be a Wiener process such that

$$E\bigl[W(t)\bigr] = 0, \qquad R_{WW}(t,s) = 2D\min(t,s). \tag{5.142}$$

Show that the integrated process

$$Y(t) = \int_0^t W(\tau)\, d\tau \tag{5.143}$$

exists and $R_{YY}(t,s) = \frac{Ds^2}{3}(3t - s)$, where $t \ge s \ge 0$.

EXERCISE 5.5. Consider the stochastic differential equation

$$\frac{d}{dt} Y(t) = W(t), \quad t > 0, \qquad Y(0) = 0, \tag{5.144}$$

where $W(t)$ is a unit white noise such that

$$E\bigl[W(t)\bigr] = 0, \qquad E\bigl[W(t)W(t+\tau)\bigr] = 2\delta(\tau). \tag{5.145}$$

The solution process $Y(t)$ is known as a Wiener process. Determine the mean and autocovariance functions of $Y(t)$. Let $0 < t_1 < t_2 < t_3 < t_4$. Show that the increments $\Delta Y(t_1, t_2) = Y(t_2) - Y(t_1)$ and $\Delta Y(t_3, t_4) = Y(t_4) - Y(t_3)$ are independent random variables. This example shows that the white noise is a formal derivative of the Wiener process with independent increments.
(5.146)
where a > 0 and σ are constants, and B(t) is a unit Brownian motion. Let F (X, t) = X 2 . Show that
dF = −2aX 2 + 2aσ 2 dt + 2X 2aσ 2 dB(t). (5.147) Show that the first and second order moments of X satisfy the following equations d E[X] = −aE[X], dt
d 2 E X = −2aE X 2 + 2aσ 2 . dt X(t) in this example is known as the Ornstein–Uhlenbeck process.
(5.148)
Chapter 5. Stochastic Calculus
92
E XERCISE 5.7. Consider the following Itô differential equation dX = −aX dt + 2aσ 2 dB(t),
(5.149)
Show that Y (t) satisfies the where a > 0 and σ are constants. Let Y (t) = Itô differential equation
dY (t) = −aY ln Y + aσ 2 Y dt + Y 2aσ 2 dB(t). (5.150) eX(t) .
E XERCISE 5.8. Recall that the Stratonovich integral follows the conventional rules of calculus. Find the solution of the Stratonovich differential equation (S) dX(t) = X(t) dB(t).
(5.151)
Verify that the corresponding Itô differential equation is given by (I) dX(t) =
1 X(t) dt + X(t) dB(t). 2
(5.152)
E XERCISE 5.9. Derive the first and second order moment equations of X(t) governed by the Itô differential equation dX(t) = X(t) dB(t).
(5.153)
E XERCISE 5.10. Derive the first and second order moment equations of X(t) governed by the Itô differential equation dX(t) =
1 X(t) dt + X(t) dB(t). 2
(5.154)
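A numerical companion to Exercises 5.8–5.10 (an illustrative sketch, not part of the text; scheme choices and sample sizes are assumptions): integrating $dX = X\,dB$ with the Euler–Maruyama scheme realizes the Itô interpretation, under which $E[X(t)]$ stays constant, while the Heun (midpoint) scheme realizes the Stratonovich interpretation, whose equivalent Itô drift $\tfrac{1}{2}X$ makes $E[X(t)]$ grow like $e^{t/2}$:

```python
import numpy as np

# Ito vs. Stratonovich integration of dX = X dB, starting from X(0) = 1.
rng = np.random.default_rng(2)
n_paths, n_steps, T = 100_000, 200, 1.0
dt = T / n_steps

x_ito = np.ones(n_paths)
x_str = np.ones(n_paths)
for _ in range(n_steps):
    dB = np.sqrt(dt) * rng.standard_normal(n_paths)
    x_ito = x_ito + x_ito * dB               # Euler-Maruyama: Ito sense
    pred = x_str + x_str * dB                # Heun predictor step
    x_str = x_str + 0.5 * (x_str + pred) * dB  # averaged diffusion: Stratonovich

print(x_ito.mean(), x_str.mean(), np.exp(0.5 * T))
```

The sample mean of the Itô paths stays near 1, while the Stratonovich paths' mean approaches $e^{1/2} \approx 1.65$, in agreement with the moment equations of Exercises 5.9 and 5.10.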
Chapter 6
Fokker–Planck–Kolmogorov Equation
In the previous chapters, we have analyzed the statistics of random variables and stochastic processes, including mean, variance, and correlation functions. These are relatively low order statistics. Arbitrarily high order statistics are determined by the probability density function. This chapter and the next address the questions: What are the equations governing the evolution of probability density functions of stochastic processes that in turn are governed by stochastic differential equations? How do we obtain the probability density function? This chapter first derives the equation, known as the Fokker–Planck–Kolmogorov (FPK) equation, governing the probability density function forward in time (Lin, 1967; Risken, 1984; Soize, 1994). The FPK equation discussed in this book is primarily for Markov processes. For non-Markov processes, both the governing equation of the probability density function and its solution are far more difficult to obtain.
6.1. Chapman–Kolmogorov–Smoluchowski equation

Probability density functions satisfy a so-called consistency equation

$$p_X(x_2, t_2|x_1, t_1) = \int_{R^1} p_X(x_2, t_2; x, t|x_1, t_1)\, dx. \tag{6.1}$$

Applying the formula for conditional probability, we have

$$p_X(x_2, t_2; x, t|x_1, t_1) = p_X(x_2, t_2|x, t; x_1, t_1)\, p_X(x, t|x_1, t_1). \tag{6.2}$$

Hence, equation (6.1) becomes

$$p_X(x_2, t_2|x_1, t_1) = \int_{R^1} p_X(x_2, t_2|x, t; x_1, t_1)\, p_X(x, t|x_1, t_1)\, dx. \tag{6.3}$$

Assume that $t_2 > t > t_1$ and that $X(t)$ is a Markov process. Then, equation (6.3) reduces to

$$p_X(x_2, t_2|x_1, t_1) = \int_{R^1} p_X(x_2, t_2|x, t)\, p_X(x, t|x_1, t_1)\, dx. \tag{6.4}$$

This is the well-known Chapman–Kolmogorov–Smoluchowski equation, which is the starting point for deriving a differential equation governing the evolution of the probability density function for Markov processes. The Chapman–Kolmogorov–Smoluchowski equation for vector Markov processes can be derived in the same manner and reads

$$p_X(x_2, t_2|x_1, t_1) = \int_{R^n} p_X(x_2, t_2|x, t)\, p_X(x, t|x_1, t_1)\, dx, \tag{6.5}$$

where $x \in R^n$.
6.2. Derivation of the FPK equation

Consider an infinitely differentiable and arbitrary function $R(y)$ such that

$$\lim_{y\to\pm\infty} \frac{d^n R(y)}{dy^n} = 0, \quad\text{for any } n \ge 0. \tag{6.6}$$

Consider the integral

$$I = \int_{R^1} R(y)\, \frac{\partial}{\partial t} p_X(y, t|x_0, t_0)\, dy = \int_{R^1} R(y) \lim_{\Delta t\to 0} \frac{1}{\Delta t}\bigl[p_X(y, t+\Delta t|x_0, t_0) - p_X(y, t|x_0, t_0)\bigr]\, dy, \tag{6.7}$$

where $t > t_0$. The Chapman–Kolmogorov–Smoluchowski equation suggests

$$p_X(y, t+\Delta t|x_0, t_0) = \int_{R^1} p_X(x, t|x_0, t_0)\, p_X(y, t+\Delta t|x, t)\, dx. \tag{6.8}$$

Expanding $R(y)$ about the point $x$ in a Taylor series, we have

$$R(y) = R(x) + (y-x)R'(x) + \frac{(y-x)^2}{2!}R''(x) + \cdots \tag{6.9}$$

Define two symbols as

$$A(x,t) = \lim_{\Delta t\to 0} \frac{1}{\Delta t} \int_{R^1} (y-x)\, p_X(y, t+\Delta t|x, t)\, dy, \tag{6.10}$$

$$B(x,t) = \lim_{\Delta t\to 0} \frac{1}{\Delta t} \int_{R^1} (y-x)^2\, p_X(y, t+\Delta t|x, t)\, dy. \tag{6.11}$$

Denote $\Delta X = y - x$. Equations (6.10) and (6.11) can be written as

$$A(x,t) = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E[\Delta X|X = x], \qquad B(x,t) = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E\bigl[\Delta X^2|X = x\bigr].$$

Putting all these together, we have

$$0 = \int_{R^1} \biggl\{\frac{\partial}{\partial t} p_X + \frac{\partial}{\partial x}\bigl[A(x,t)p_X\bigr] - \frac{\partial^2}{\partial x^2}\biggl[\frac{B(x,t)}{2!} p_X\biggr] + \cdots\biggr\} R(x)\, dx. \tag{6.12}$$

Since $R(x)$ is arbitrary, we obtain a partial differential equation governing the evolution of the conditional probability density function,

$$\frac{\partial p_X(x,t|x_0,t_0)}{\partial t} = -\frac{\partial}{\partial x}\bigl[A(x,t)p_X\bigr] + \frac{\partial^2}{\partial x^2}\biggl[\frac{B(x,t)}{2!} p_X\biggr] - \cdots \tag{6.13}$$

A question often asked is when to truncate the series expansion in equation (6.13). The Pawula theorem provides an answer (Risken, 1984).

THEOREM 6.1 (Pawula Theorem). In order for the probability density function $p_X(x,t|x_0,t_0)$ to be positive, the series may stop after the first or second term. If it does not stop after the second term, it must contain an infinite number of terms.

When the process $X(t)$ satisfies an Itô differential equation driven by a Brownian motion, the right hand side of equation (6.13) needs to retain only the first two terms. In this case, we arrive at the FPK equation

$$\frac{\partial}{\partial t} p_X(x,t|x_0,t_0) = -\frac{\partial}{\partial x}\bigl[A(x,t)p_X\bigr] + \frac{\partial^2}{\partial x^2}\biggl[\frac{B(x,t)}{2!} p_X\biggr]. \tag{6.14}$$

The first term containing $A(x,t)$ on the right hand side of the equation is known as the drift term, associated with the deterministic behavior of the system, while the second term containing $B(x,t)$ is the diffusion term, due to the stochasticity of the excitation. For an $n$-dimensional vector stochastic process, the FPK equation can be derived in the same manner, and it reads

$$\frac{\partial}{\partial t} p_X(x,t|x_0,t_0) = -\sum_{k=1}^{n} \frac{\partial}{\partial x_k}\bigl[A_k(x,t)p_X\bigr] + \sum_{j=1}^{n}\sum_{k=1}^{n} \frac{\partial^2}{\partial x_j \partial x_k}\biggl[\frac{B_{jk}(x,t)}{2!} p_X\biggr], \tag{6.15}$$
where the drift coefficient $A_k(x,t)$ is a vector, and the diffusion coefficient $B_{jk}(x,t)$ is a symmetric matrix. To fully determine the probability density function $p_X(x,t|x_0,t_0)$ from the FPK equation, we need to specify an initial condition, and proper boundary conditions in the state space spanned by the variables $x$. It should be noted that the above derivation is for a generic Markov process. In the following, we use examples to demonstrate how to derive the FPK equation for Markov processes that are the solutions of Itô differential equations.

EXAMPLE 6.1. Consider a one dimensional linear stochastic system governed by an Itô differential equation

$$dX = -\alpha X\, dt + \sigma\, dB(t), \tag{6.16}$$

where $\alpha$ and $\sigma$ are constants, and $B(t)$ is the unit Brownian motion. It is known that $X(t)$ is a Markov process. Equation (6.16) can be understood in the increment sense as

$$\Delta X = -\alpha X \Delta t + \sigma \Delta B(t). \tag{6.17}$$

Using this equation, we can evaluate the coefficients of the FPK equation for the solution process $X(t)$ as

$$A = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E[\Delta X|X = x] = -\alpha x,$$
$$B = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E\bigl[\Delta X^2|X = x\bigr] = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E\bigl[\alpha^2 x^2 \Delta t^2 - 2\alpha\sigma x\, \Delta t\, \Delta B(t) + \sigma^2 \Delta B^2(t)\bigr] = \sigma^2. \tag{6.18}$$

The FPK equation then reads

$$\frac{\partial}{\partial t} p_X(x,t|x_0,t_0) = -\frac{\partial}{\partial x}\bigl[-\alpha x\, p_X\bigr] + \frac{\sigma^2}{2!}\frac{\partial^2 p_X}{\partial x^2}. \tag{6.19}$$
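The drift and diffusion coefficients (6.18) can also be recovered statistically from simulated increments (an illustrative sketch; the parameter values and sample sizes are assumptions added here):

```python
import numpy as np

# Estimate A(x) and B(x) of Example 6.1 from one-step increments of
# dX = -alpha X dt + sigma dB(t), conditioned on X = x.
rng = np.random.default_rng(3)
alpha, sigma, dt = 1.2, 0.8, 1e-4
x = 0.7                                  # condition on X = x
n = 1_000_000

dB = np.sqrt(dt) * rng.standard_normal(n)
dX = -alpha * x * dt + sigma * dB        # increments given X = x

A_hat = dX.mean() / dt                   # estimates A = -alpha x = -0.84
B_hat = (dX**2).mean() / dt              # estimates B = sigma^2  =  0.64
print(A_hat, B_hat)
```

The diffusion estimate is very accurate because $E[\Delta X^2] = \sigma^2\Delta t + O(\Delta t^2)$; the drift estimate carries a larger Monte Carlo error since the mean of $\Delta X$ is small relative to its standard deviation, which is why drift identification from data generally needs many samples.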
EXAMPLE 6.2. The equation of motion of a linear oscillator can be written in a state space form

$$dX_1 = X_2\, dt, \qquad dX_2 = \bigl[-2\zeta\omega_0 X_2 - \omega_0^2 X_1\bigr] dt + d\beta(t), \tag{6.20}$$

where $\beta(t)$ is a Wiener process with the following properties

$$E\bigl[d\beta(t)\bigr] = 0, \qquad E\bigl[d\beta(t)\,d\beta(s)\bigr] = \begin{cases} 2D\, dt, & t = s, \\ 0, & t \neq s. \end{cases} \tag{6.21}$$

The formal derivative of $\beta(t)$ is known as the white noise. Let $X = [X_1, X_2]^T$ be the vector stochastic process of the system. According to equation (6.11), we have

$$A_1 = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E[\Delta X_1|X = x] = x_2, \tag{6.22}$$

$$A_2 = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E[\Delta X_2|X = x] = -2\zeta\omega_0 x_2 - \omega_0^2 x_1, \tag{6.23}$$

$$B_{11} = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E\bigl[\Delta X_1^2|X = x\bigr] = \lim_{\Delta t\to 0} \frac{\Delta t^2\, E[X_2^2|X = x]}{\Delta t} = 0, \tag{6.24}$$

$$B_{12} = B_{21} = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E[\Delta X_1 \Delta X_2|X = x] = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E\bigl[\Delta t^2 X_2\bigl(-2\zeta\omega_0 X_2 - \omega_0^2 X_1\bigr) + \Delta t\, X_2\, \Delta\beta(t)\big|X = x\bigr] = 0, \tag{6.25}$$

$$B_{22} = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E\bigl[\Delta X_2^2|X = x\bigr] = \lim_{\Delta t\to 0} \frac{1}{\Delta t} E\Bigl[\bigl(-2\zeta\omega_0 X_2 - \omega_0^2 X_1\bigr)^2 \Delta t^2 + 2\bigl(-2\zeta\omega_0 X_2 - \omega_0^2 X_1\bigr)\Delta t\, \Delta\beta(t) + \Delta\beta(t)^2 \Big|X = x\Bigr] = 2D. \tag{6.26}$$

The FPK equation for the oscillator is given by

$$\frac{\partial}{\partial t} p_X(x,t|x_0,t_0) = -\frac{\partial}{\partial x_1}\bigl[A_1(x,t)p_X\bigr] - \frac{\partial}{\partial x_2}\bigl[A_2(x,t)p_X\bigr] + \frac{\partial^2}{\partial x_1^2}\biggl[\frac{B_{11}(x,t)}{2!} p_X\biggr] + 2\frac{\partial^2}{\partial x_1 \partial x_2}\biggl[\frac{B_{12}(x,t)}{2!} p_X\biggr] + \frac{\partial^2}{\partial x_2^2}\biggl[\frac{B_{22}(x,t)}{2!} p_X\biggr]$$
$$= -\frac{\partial}{\partial x_1}[x_2 p_X] - \frac{\partial}{\partial x_2}\bigl[\bigl(-2\zeta\omega_0 x_2 - \omega_0^2 x_1\bigr)p_X\bigr] + D\frac{\partial^2 p_X}{\partial x_2^2}. \tag{6.27}$$

It is known that the solution to this equation is a Gaussian probability density function.
6.2.1. Derivation using Itô's lemma

Recall Itô's lemma stated in equation (5.126) for an arbitrary function $F(X)$,

$$dF(X) = \Biggl[\sum_{j=1}^{n} m_j \frac{\partial F}{\partial X_j} + \frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n}\sum_{l=1}^{m} \sigma_{jl}\sigma_{kl} \frac{\partial^2 F}{\partial X_j \partial X_k}\Biggr] dt + \sum_{j=1}^{n}\sum_{k=1}^{m} \sigma_{jk} \frac{\partial F}{\partial X_j}\, dB_k(t) + O\bigl(dt^{1.5}\bigr), \tag{6.28}$$

where

$$dX_j(t) = m_j(X,t)\, dt + \sum_{k=1}^{m} \sigma_{jk}(X,t)\, dB_k(t)$$

in the Itô sense, and $\{B_k(t)\}$ is an $m$-dimensional vector of independent unit Brownian motions. Taking the mathematical expectation of equation (6.28) conditional on $(x_0, t_0)$ and noting that

$$E\Biggl[\sum_{j=1}^{n}\sum_{k=1}^{m} \sigma_{jk} \frac{\partial F}{\partial X_j}\, dB_k(t) \bigg| x_0, t_0\Biggr] = 0, \tag{6.29}$$

due to Itô's interpretation, we obtain

$$\frac{d}{dt} E\bigl[F(X)|x_0, t_0\bigr] = E\Biggl[\sum_{j=1}^{n} m_j \frac{\partial F}{\partial X_j} + \frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n} b_{jk} \frac{\partial^2 F}{\partial X_j \partial X_k} \bigg| x_0, t_0\Biggr]. \tag{6.30}$$

Recall that $b_{jk} = \sum_{l=1}^{m} \sigma_{kl}\sigma_{jl}$. By definition, we have

$$\frac{d}{dt}\int_{R^n} F(x)\, p_X(x,t|x_0,t_0)\, dx = \int_{R^n} \Biggl[\sum_{j=1}^{n} m_j \frac{\partial F}{\partial x_j} + \frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n} b_{jk} \frac{\partial^2 F}{\partial x_j \partial x_k}\Biggr] p_X(x,t|x_0,t_0)\, dx. \tag{6.31}$$

Integrating the right hand side of equation (6.31) by parts, making the assumption that as $|x_k| \to \infty$, $p_X(x,t|x_0,t_0)$ and its derivatives up to second order with respect to $x$ all vanish, and making use of the fact that $F(x)$ is arbitrary, we obtain the FPK equation

$$\frac{\partial}{\partial t} p_X(x,t|x_0,t_0) = -\sum_{k=1}^{n} \frac{\partial}{\partial x_k}\bigl[m_k(x,t)p_X\bigr] + \sum_{j=1}^{n}\sum_{k=1}^{n} \frac{\partial^2}{\partial x_j \partial x_k}\biggl[\frac{b_{jk}(x,t)}{2!} p_X\biggr]. \tag{6.32}$$

This derivation indicates that the FPK equation is accurate to the order $dt$. The truncation of the series expansion in equation (6.13) at the second order leads to the same FPK equation and therefore has the same order of accuracy in $dt$.
6.3. Solutions of FPK equations for linear systems

Consider a linear Itô stochastic equation of an $n$-dimensional vector process $X_j(t)$,

$$dX_j(t) = \sum_{k=1}^{n} a_{jk} X_k\, dt + \sum_{k=1}^{m} \sigma_{jk}\, dB_k(t), \tag{6.33}$$

where $a_{jk}$ and $\sigma_{jk}$ are constant matrices, and $B_k(t)$ are independent unit Brownian motions. The FPK equation for the $n$-dimensional vector stochastic process is

$$\frac{\partial}{\partial t} p_X(x,t|x_0,t_0) = -\sum_{j=1}^{n}\sum_{k=1}^{n} \frac{\partial}{\partial x_j}\bigl[a_{jk} x_k\, p_X\bigr] + \sum_{j=1}^{n}\sum_{k=1}^{n} \frac{b_{jk}}{2} \frac{\partial^2 p_X}{\partial x_j \partial x_k}. \tag{6.34}$$

Let $A$ denote the matrix $\{a_{jk}\}$ and $B$ denote the matrix $\{b_{jk}\}$. Introduce a transformation $C$ such that

$$z = Cx, \qquad CA = \Lambda C, \tag{6.35}$$

where $\Lambda = \{\lambda_k\}$ is a diagonal matrix consisting of the eigenvalues $\lambda_k$ of the matrix $A$. Hence, $C$ is the eigenmatrix of $A$. Let $H = CBC^T = \{h_{jk}\}$. Under this transformation, equation (6.34) becomes

$$\frac{\partial}{\partial t} p_Z(z,t|z_0,t_0) = -\sum_{k=1}^{n} \frac{\partial}{\partial z_k}\bigl[\lambda_k z_k\, p_Z\bigr] + \sum_{j=1}^{n}\sum_{k=1}^{n} \frac{h_{jk}}{2} \frac{\partial^2 p_Z}{\partial z_j \partial z_k}. \tag{6.36}$$

For simplicity, we assume that $p_Z(z,t|z_0,t_0)$ satisfies the following initial condition

$$p_Z(z,t_0|z_0,t_0) = \delta(z - z_0). \tag{6.37}$$

Consider the characteristic function of the probability density function

$$M_Z(\theta, t) \equiv E\bigl[e^{i\theta^T z}\big| z_0, t_0\bigr] = \int_{R^n} e^{i\theta^T z}\, p_Z(z,t|z_0,t_0)\, dz. \tag{6.38}$$
Substituting the definition into equation (6.36), we have

$$\frac{\partial M_Z(\theta,t)}{\partial t} - \sum_{k=1}^{n} \lambda_k \theta_k \frac{\partial M_Z}{\partial \theta_k} = -\frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n} h_{jk}\theta_j\theta_k M_Z. \tag{6.39}$$

The solution of this equation can be obtained from the subsidiary equations (Rice, 1954)

$$\frac{dt}{1} = -\frac{d\theta_1}{\lambda_1\theta_1} = -\frac{d\theta_2}{\lambda_2\theta_2} = \cdots = -\frac{d\theta_n}{\lambda_n\theta_n} = -\frac{dM_Z}{\frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n} h_{jk}\theta_j\theta_k M_Z}. \tag{6.40}$$

This equation suggests that

$$M_Z(\theta,t) = f\bigl(\theta_1 e^{\lambda_1 t}, \theta_2 e^{\lambda_2 t}, \ldots, \theta_n e^{\lambda_n t}\bigr) \times \exp\Biggl[\frac{1}{2}\sum_{j,k=1}^{n} h_{jk} \frac{\theta_j\theta_k}{\lambda_j + \lambda_k}\Biggr], \tag{6.41}$$

where $f(\cdot)$ is an arbitrary function. From equation (6.37), setting $t_0 = 0$ for simplicity, we have

$$M_Z(\theta, 0) = \exp\Biggl[i\sum_{k=1}^{n} \theta_k z_{0k}\Biggr]. \tag{6.42}$$

Applying this initial condition to the solution (6.41), we have

$$f(\theta_1, \theta_2, \ldots, \theta_n) = \exp\Biggl[-\frac{1}{2}\sum_{j,k=1}^{n} h_{jk}\frac{\theta_j\theta_k}{\lambda_j+\lambda_k} + i\sum_{k=1}^{n}\theta_k z_{0k}\Biggr]. \tag{6.43}$$

Therefore,

$$M_Z(\theta,t) = \exp\Biggl[i\sum_{k=1}^{n} \theta_k e^{\lambda_k t} z_{0k} + \frac{1}{2}\sum_{j,k=1}^{n} h_{jk} \frac{\theta_j\theta_k}{\lambda_j+\lambda_k}\bigl(1 - e^{(\lambda_j+\lambda_k)t}\bigr)\Biggr]. \tag{6.44}$$

This characteristic function represents the Fourier transform of an $n$-dimensional Gaussian probability density function

$$p_Z(z,t|z_0,t_0) = \frac{1}{(2\pi)^{n/2}\bigl(\det(C_{ZZ})\bigr)^{1/2}} \exp\biggl[-\frac{1}{2}(z-\mu_Z)^T C_{ZZ}^{-1}(z-\mu_Z)\biggr], \tag{6.45}$$
where

$$(\mu_Z)_k = z_{0k}\, e^{\lambda_k t}, \qquad (C_{ZZ})_{jk} = -\frac{h_{jk}}{\lambda_j + \lambda_k}\bigl(1 - e^{(\lambda_j+\lambda_k)t}\bigr). \tag{6.46}$$

Hence, $p_X(x,t|x_0,t_0)$ is also Gaussian, with the mean and variance given by

$$\mu_X = C^{-1}\mu_Z, \qquad C_{XX} = C^{-1} C_{ZZ}\bigl(C^T\bigr)^{-1}. \tag{6.47}$$
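The closed-form covariance (6.46)–(6.47) can be checked against a direct integration of the covariance equation $dR/dt = AR + RA^T + B$ (a sketch with illustrative matrices, not from the text):

```python
import numpy as np

# Verify (6.46)-(6.47) for a linear system against Euler integration of
# dR/dt = A R + R A^T + B, starting from R(0) = 0 (a delta initial condition).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])          # eigenvalues -1 and -2
B = np.array([[0.0, 0.0], [0.0, 1.0]])            # B = sigma sigma^T
t = 0.8

lam, V = np.linalg.eig(A)                          # A V = V diag(lam)
C = np.linalg.inv(V)                               # rows of C satisfy C A = diag(lam) C
H = C @ B @ C.T

L = lam[:, None] + lam[None, :]
Czz = -H / L * (1.0 - np.exp(L * t))               # equation (6.46)
Cxx = np.linalg.inv(C) @ Czz @ np.linalg.inv(C.T)  # equation (6.47)

R, dt = np.zeros((2, 2)), 1e-5                     # reference: covariance ODE
for _ in range(int(t / dt)):
    R = R + dt * (A @ R + R @ A.T + B)

print(np.max(np.abs(Cxx.real - R)))  # small discrepancy, O(dt)
```

The eigen-transform route and the direct moment-equation route agree to the accuracy of the Euler step, illustrating that the linear FPK solution is fully characterized by its Gaussian mean and covariance.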
6.4. Short-time solution

When the stochastic differential equation that generates the FPK equation is nonlinear, the closed form solution of $p_X(x,t|x_0,t_0)$ is often difficult to obtain. However, an approximate closed form solution over a very small time interval starting from a given initial condition can be constructed. Consider equation (6.14) as an example. We rewrite the equation in operator form as

$$\frac{\partial}{\partial t} p_X(x,t|x_0,t_0) = L_{FPK}(x,t)\, p_X, \tag{6.48}$$

where

$$L_{FPK}(x,t) = -\frac{\partial}{\partial x} A(x,t) + \frac{1}{2}\frac{\partial^2}{\partial x^2} B(x,t), \tag{6.49}$$

subject to the initial condition

$$p_X(x,t_0|x_0,t_0) = \delta(x - x_0). \tag{6.50}$$

The formal solution of equation (6.48) can be written as (Risken, 1984)

$$p_X(x,t|x_0,t_0) = e^{L_{FPK}(x,t)\cdot(t-t_0)}\, \delta(x - x_0). \tag{6.51}$$

Let $\tau = t - t_0$ and assume that $\tau \ll 1$. Expanding the solution with respect to $\tau$, we have the short-time solution

$$p_X(x,t_0+\tau|x_0,t_0) = \bigl[1 + L_{FPK}(x,t)\cdot\tau + O(\tau^2)\bigr]\delta(x - x_0). \tag{6.52}$$

The following identity in the generalized function sense is applied to the short-time solution,

$$\delta(x - x_0)\, f(x) = \delta(x - x_0)\, f(x_0), \tag{6.53}$$

leading to

$$p_X(x,t_0+\tau|x_0,t_0) = e^{-\frac{\partial}{\partial x}A(x_0,t)\tau + \frac{1}{2}\frac{\partial^2}{\partial x^2}B(x_0,t)\tau}\, \delta(x - x_0). \tag{6.54}$$

Substituting the generalized function representation of $\delta(x-x_0)$,

$$\delta(x - x_0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\theta(x-x_0)}\, d\theta, \tag{6.55}$$

into this solution, we obtain

$$p_X(x,t_0+\tau|x_0,t_0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \exp\biggl[-i\theta A(x_0,t)\tau - \frac{\theta^2}{2}B(x_0,t)\tau + i\theta(x-x_0)\biggr] d\theta$$
$$= \frac{1}{\sqrt{2\pi B(x_0,t)\tau}} \exp\biggl[-\frac{\bigl[x - x_0 - A(x_0,t)\tau\bigr]^2}{2B(x_0,t)\tau}\biggr]. \tag{6.56}$$

Hence, the short-time solution of the FPK equation starting from the initial condition $\delta(x-x_0)$ is a Gaussian probability density function with mean $x_0 + A(x_0,t)\tau$ and variance $B(x_0,t)\tau$.

6.4.1. Improvement of the short-time solution

The short-time solution presented above is based on a Taylor expansion of first order accuracy in $\tau$. If the short-time solution is viewed as a numerical integration of the FPK equation in the time domain, it is comparable to the Euler integration rule. To improve the accuracy of the short-time solution while keeping its normality, we naturally look for higher order estimates of the mean and variance. One approach for nonlinear systems is to apply the Gaussian closure or non-Gaussian closure to the moment equations of the system response. This allows for more accurate estimates of the mean and variance over a longer time period, leading to a more accurate Gaussian probability density function over a somewhat longer time interval (Sun and Hsu, 1989a, 1990).
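The short-time Gaussian can be tested against a Monte Carlo simulation of a nonlinear scalar SDE over a small $\tau$ (an illustrative sketch; the drift and diffusion functions below are assumptions, not taken from the text):

```python
import numpy as np

# Compare the short-time Gaussian moments of (6.56) with a fine Euler-Maruyama
# simulation of dX = A(X) dt + sqrt(B(X)) dB(t) over a small interval tau.
rng = np.random.default_rng(4)
A = lambda x: x - x**3            # assumed drift A(x)
Bf = lambda x: 0.5 + 0.1 * x**2   # assumed diffusion B(x)

x0, tau, nsub, npaths = 0.4, 0.01, 100, 200_000
dt = tau / nsub
x = np.full(npaths, x0)
for _ in range(nsub):             # substeps over [t0, t0 + tau]
    x = x + A(x) * dt + np.sqrt(Bf(x) * dt) * rng.standard_normal(npaths)

# Short-time Gaussian prediction: mean x0 + A(x0) tau, variance B(x0) tau
print(x.mean() - (x0 + A(x0) * tau), x.var() - Bf(x0) * tau)
```

Both discrepancies are of order $\tau^2$ plus Monte Carlo noise, which is the first-order accuracy in $\tau$ discussed above.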
6.5. Path integral solution

Let $p_X(x_0,t_0)$ denote the initial probability density function of the stochastic process $X(t)$. Recall the definition of conditional probability,

$$p_X(x, t_0+\tau; x_0, t_0) = p_X(x, t_0+\tau|x_0, t_0)\, p_X(x_0, t_0). \tag{6.57}$$

Hence,

$$p_X(x, t_0+\tau) = \int_{R^1} p_X(x, t_0+\tau|x_0, t_0)\, p_X(x_0, t_0)\, dx_0. \tag{6.58}$$

Repeating this step over the time interval $t_0+\tau$ to $t_0+2\tau$, we have

$$p_X(x, t_0+2\tau) = \int_{R^1} p_X(x, t_0+2\tau|x_1, t_0+\tau)\, p_X(x_1, t_0+\tau)\, dx_1 \tag{6.59}$$
$$= \int_{R^2} p_X(x, t_0+2\tau|x_1, t_0+\tau)\, p_X(x_1, t_0+\tau|x_0, t_0)\, p_X(x_0, t_0)\, dx_0\, dx_1.$$

Assume that $\tau$ is small. We apply the short-time solution, leading to a general expression over $n$ steps,

$$p_X(x, t_0+n\tau) = \int_{R^n} \prod_{k=0}^{n-1} \frac{1}{\sqrt{2\pi B(x_k, t_0+k\tau)\tau}} \exp\biggl[-\frac{\bigl[x_{k+1} - x_k - A(x_k, t_0+k\tau)\tau\bigr]^2}{2B(x_k, t_0+k\tau)\tau}\biggr]\, p_X(x_0, t_0)\, dx_0\, dx_1 \cdots dx_{n-1}, \tag{6.60}$$

where $x_n = x$. This is known as the path integral of the FPK equation. The path integral converges to the correct solution at a given $t > t_0$ as $n \to \infty$ with an accuracy of the order $1/n$ (Risken, 1984).

6.5.1. Markov chain representation of path integral

To compute the path integral, we have to discretize the solution. Consider a one-dimensional example as shown in equation (6.60). Let us partition $R^1$ into equal intervals, with the $j$th interval defined as $x_{j-1} \le x < x_j$. Define a probability vector as

$$p_j(k) = \int_{x_{j-1}}^{x_j} p_X(x, t_0+k\tau)\, dx. \tag{6.61}$$

$p_j(k)$ is the probability that the system lies in the interval $x_{j-1} \le x < x_j$ at the $k$th time step. We further assume that this probability is concentrated at the center $x_{j-1/2} = (x_j + x_{j-1})/2$ of the interval. Hence, we have an approximate histogram representation of $p_X(x, t_0+k\tau)$ as follows:

$$p_X(x, t_0+k\tau) = \sum_j p_j(k)\, \delta(x - x_{j-1/2}), \tag{6.62}$$
where $\Delta x_j = x_j - x_{j-1}$. Substituting the discrete representation of $p_X(x, t_0+k\tau)$ in equation (6.60), we have

$$p_j(n) = \sum_{l_n, l_{n-1}, \ldots, l_0} P_{j l_n}(n)\, P_{l_n l_{n-1}}(n-1) \cdots P_{l_1 l_0}(1)\, p_{l_0}(0), \tag{6.63}$$

where

$$P_{jl}(k) = \frac{1}{\Delta x_l} \int_{x_{j-1}}^{x_j} dx \int_{x_{l-1}}^{x_l} p_X\bigl(x, t_0+k\tau\big|x_1, t_0+(k-1)\tau\bigr)\, dx_1. \tag{6.64}$$

$P_{jl}(k)$ is the one step transition probability at the $k$th time step from the $l$th interval to the $j$th interval. Introduce the vector notations $p(n) = \{p_j(n)\}$ and $P(k) = \{P_{jl}(k)\}$. Equation (6.63) now reads

$$p(n) = P(n)P(n-1)\cdots P(1)\, p(0). \tag{6.65}$$

Equation (6.65) is known as a nonstationary Markov chain. When the short-time solution is substituted in equation (6.64), equation (6.65) becomes the Markov chain representation of the path integral solution. This discrete representation is readily amenable to numerical computations. When the stochastic process $X(t)$ is weakly stationary such that $p_X(x, t_0+k\tau|x_1, t_0+(k-1)\tau) = p_X(x, t_0+\tau|x_1, t_0)$ and $P_{jl}(k)$ is independent of $k$, we have

$$P(n) = P(n-1) = \cdots = P(1) \equiv P. \tag{6.66}$$

Equation (6.65) then reduces to

$$p(n) = P^n\, p(0). \tag{6.67}$$

This equation represents a stationary Markov chain. This solution is also known as the short-time Gaussian generalized cell mapping, and can be applied to a vector process $X(t)$ (Sun and Hsu, 1989b).
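The Markov chain representation above can be coded in a few lines (an illustrative sketch for the Ornstein–Uhlenbeck equation $dX = -\alpha X\,dt + \sigma\,dB(t)$; the partition, time step and parameter values are assumptions): the one-step transition matrix is assembled from the short-time Gaussian evaluated at the cell centers, and the density is propagated by $p(n) = P^n p(0)$.

```python
import numpy as np

# Markov chain (cell mapping) propagation of the density of an OU process.
alpha, sigma, tau = 1.0, 1.0, 0.02
edges = np.linspace(-4.0, 4.0, 401)          # partition of the state interval
centers = 0.5 * (edges[:-1] + edges[1:])

mean = centers + (-alpha * centers) * tau    # x + A(x) tau for each source cell
std = np.sqrt(sigma**2 * tau)                # sqrt(B(x) tau), constant here
# P[j, l]: short-time Gaussian probability of moving from cell l to cell j
P = np.exp(-(centers[:, None] - mean[None, :])**2 / (2 * std**2))
P /= P.sum(axis=0)                           # normalize each column

p = np.zeros(len(centers))
p[np.argmin(np.abs(centers - 2.0))] = 1.0    # delta initial condition at x = 2
for _ in range(500):                         # p(n) = P^n p(0)
    p = P @ p

var = np.sum(centers**2 * p) - np.sum(centers * p)**2
print(var)  # approaches the exact stationary variance sigma^2/(2 alpha) = 0.5
```

The long-time variance of the chain matches the exact stationary variance of the OU process up to the $O(\tau)$ and cell-size discretization errors discussed for the short-time solution.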
6.6. Exact stationary solutions

When the stochastic differential equation is nonlinear, it is in general very difficult to obtain exact transient or stationary solutions of the FPK equation. Under certain conditions, however, we can obtain exact stationary solutions of the FPK equation. Here, we study examples of exact solutions of the stationary probability density function for the FPK equation

$$-\sum_{k=1}^{n} \frac{\partial}{\partial x_k}\bigl[A_k(x,t)p_X\bigr] + \sum_{j=1}^{n}\sum_{k=1}^{n} \frac{\partial^2}{\partial x_j \partial x_k}\biggl[\frac{B_{jk}(x,t)}{2!} p_X\biggr] = 0. \tag{6.68}$$
6.6.1. First order systems

The stationary probability density function, if it exists, is independent of time, and satisfies the reduced FPK equation

$$-\frac{d}{dx}\bigl[A(x)p_X(x)\bigr] + \frac{1}{2}\frac{d^2}{dx^2}\bigl[B(x)p_X(x)\bigr] = 0. \tag{6.69}$$

Integrating the equation once and assuming that $B(x) \neq 0$ for all $x \in R^1$, we obtain

$$\frac{d}{dx}\bigl[B(x)p_X(x)\bigr] - 2\frac{A(x)}{B(x)}\bigl[B(x)p_X(x)\bigr] = -2C_1, \tag{6.70}$$

where $C_1$ is a constant. Integrating the above equation, we have

$$p_X(x) = \frac{C_0}{B(x)}\exp\biggl[2\int_0^x \frac{A(z)}{B(z)}\,dz\biggr] - \frac{C_1}{B(x)}\int_0^x \exp\biggl[2\int_z^x \frac{A(y)}{B(y)}\,dy\biggr] dz, \tag{6.71}$$

where $C_0$ is another constant. When the system is defined in $R^1$ and stable in the sense that as $|x| \to \infty$, $p_X(x) \to 0$ and $\frac{d}{dx}p_X \to 0$, then $C_1 = 0$ and

$$p_X(x) = \frac{C_0}{B(x)}\exp\biggl[2\int_0^x \frac{A(z)}{B(z)}\,dz\biggr], \tag{6.72}$$

where $C_0$ is determined from the normalization condition.

6.6.2. Second order systems

Consider a general nonlinear system (Wang and Yasuda, 1999),

$$\ddot{x} + g(x,\dot{x}) = \eta_1 W_1(t) + \eta_2 x W_2(t), \tag{6.73}$$

where $W_j(t)$ are independent Gaussian white noises such that

$$E\bigl[W_j(t)\bigr] = 0, \qquad E\bigl[W_j(t)W_k(t+\tau)\bigr] = 2D_k\delta_{jk}\delta(\tau). \tag{6.74}$$
(6.75) 2 k=1
σ2k (X, t) dBk (t),
Chapter 6. Fokker–Planck–Kolmogorov Equation
106
where Bk (t) are independent unit Brownian motions and
0 0 √ √ σj k (X, t) = . η1 2D1 η2 X1 2D2
(6.76)
The Wong–Zakai correction term 1 ∂σj l (X, t) σkl (X, t) 2 ∂Xk n
m
k=1 l=1
is zero for this system. Hence, the equation in the Itô sense is the same as equation (6.75) in the Stratonovich sense. To derive the FPK equation, we have m2 = −g(x1 , x2 ), m1 = x2 , m 0 0 [bj k ] = σkl σj l = . 0 2η12 D1 + 2η22 D2 x12
(6.77)
l=1
The FPK equation reads ∂ ∂
∂ g(x1 , x2 )pX [x2 pX ] + pX (x1 , x2 , t|x0 , t0 ) = − ∂t ∂x1 ∂x2
2 ∂ 2 pX + η1 D1 + η22 D2 x12 . ∂x22
(6.78)
The stationary probability density function pX (x) satisfies the reduced FPK equation ∂ 2 pX (x1 , x2 )
2 η1 D1 + η22 D2 x12 ∂x22 ∂ ∂
g(x1 , x2 )pX = 0. − [x2 pX ] + ∂x1 ∂x2 Assume that the solution is in the following form pX (x1 , x2 ) = C0 ef (x1 ,x2 ) ,
(6.79)
(6.80)
where $C_0$ is the normalization constant and $f(x_1,x_2)$ is a function to be determined from the reduced FPK equation. $f(x_1,x_2)$ satisfies the following

$$\bigl(\eta_1^2 D_1 + \eta_2^2 D_2 x_1^2\bigr)\frac{\partial^2 f(x_1,x_2)}{\partial x_2^2} + \frac{\partial g(x_1,x_2)}{\partial x_2} - x_2\frac{\partial f(x_1,x_2)}{\partial x_1} + g(x_1,x_2)\frac{\partial f(x_1,x_2)}{\partial x_2} + \bigl(\eta_1^2 D_1 + \eta_2^2 D_2 x_1^2\bigr)\biggl[\frac{\partial f(x_1,x_2)}{\partial x_2}\biggr]^2 = 0. \tag{6.81}$$

To find the solution $f(x_1,x_2)$ of this equation, we split it into two equations

$$\frac{\partial^2 f(x_1,x_2)}{\partial x_2^2} + \frac{1}{\eta_1^2 D_1 + \eta_2^2 D_2 x_1^2}\frac{\partial g(x_1,x_2)}{\partial x_2} = h(x_1,x_2), \tag{6.82}$$

$$-\bigl(\eta_1^2 D_1 + \eta_2^2 D_2 x_1^2\bigr)\biggl[\frac{\partial f(x_1,x_2)}{\partial x_2}\biggr]^2 - g(x_1,x_2)\frac{\partial f(x_1,x_2)}{\partial x_2} + x_2\frac{\partial f(x_1,x_2)}{\partial x_1} = \bigl(\eta_1^2 D_1 + \eta_2^2 D_2 x_1^2\bigr)\, h(x_1,x_2), \tag{6.83}$$

where $h(x_1,x_2)$ is an arbitrary function. By choosing different functions for $h(x_1,x_2)$, we can find various solutions for $f(x_1,x_2)$ that satisfy both equations (6.82) and (6.83). As an example, we choose $h(x_1,x_2) = 0$ and assume that

$$g(x_1,x_2) = g_0(x_1) + \sum_{k=1}^{n} g_k(x_1)\, x_2^k. \tag{6.84}$$

A general solution for $f(x_1,x_2)$ can be obtained as

$$f(x_1,x_2) = -\int \frac{g_0(x_1)\, g_1(x_1)}{\eta_1^2 D_1 + \eta_2^2 D_2 x_1^2}\, dx_1 - \sum_{k=2}^{n} \frac{g_{k-1}(x_1)}{k\bigl[\eta_1^2 D_1 + \eta_2^2 D_2 x_1^2\bigr]}\, x_2^k, \tag{6.85}$$

subject to the following constraints on $g_k(x_1)$:

$$g_0(x_1)\, g_2(x_1) = 0, \tag{6.86}$$

$$g_{k-1}'(x_1) - \frac{2\eta_2^2 D_2 x_1}{\eta_1^2 D_1 + \eta_2^2 D_2 x_1^2}\, g_{k-1}(x_1) - k\, g_0(x_1)\, g_{k+1}(x_1) = 0, \quad k = 2, 3, \ldots \tag{6.87}$$
When g0(x1) = 0 and g2(x1) ≠ 0, the system is unstable and the stationary probability density function does not exist. Therefore, we only consider the case when g0(x1) ≠ 0 and g2(x1) = 0, as required by the constraint (6.86). Some examples are presented below.

EXAMPLE 6.3. Consider a nonlinear system

 ẍ + 2[c(x² + ẋ²/ω0²) + β/(ω0²x² + ẋ²)] ẋ + ω0²x = ηxW(t).   (6.88)

The stationary probability density function of the response is given by

 pX(x1, x2) = C0 (ω0²x1² + x2²)^{−β/(2η²D)} exp[−(c/(η²D))(ω0²x1² + x2²)].   (6.89)
Chapter 6. Fokker–Planck–Kolmogorov Equation
Equation (6.89) can be transformed to the polar coordinates (a, θ) by using

 x1 = (a/ω0) sin θ,  x2 = a cos θ.   (6.90)

We have

 pX(a, θ) = C1 a^{−β/(η²D) + 1} exp[−(c/(η²D)) a²],   (6.91)

where 0 ≤ a < ∞ and 0 ≤ θ < 2π, and C1 is another normalization constant of pX(a, θ). In order for pX(a, θ) to be integrable over the domain 0 ≤ a < ∞, we must have

 c > 0,  −β/(η²D) + 1 > −1.   (6.92)
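As a numerical sanity check of (6.91)–(6.92), the amplitude density p(a) ∝ a^ν e^{−γa²}, with ν = −β/(η²D) + 1 and γ = c/(η²D), can be normalized in closed form via the Gamma function. The sketch below compares a trapezoidal quadrature with that closed form; the parameter values are illustrative assumptions.

```python
import numpy as np
from math import gamma

# Assumed illustrative parameters satisfying the conditions (6.92):
c, beta, eta, D = 0.5, 0.3, 1.0, 0.4
nu = -beta / (eta**2 * D) + 1.0      # exponent of a in (6.91)
gam = c / (eta**2 * D)               # decay rate of exp(-gam*a^2) in (6.91)
assert c > 0.0 and nu > -1.0         # integrability conditions (6.92)

# Normalization integral of p(a) ∝ a^nu * exp(-gam*a^2) over 0 <= a < infinity,
# once by trapezoidal quadrature and once via the identity
# ∫_0^∞ a^nu e^(-gam a^2) da = Γ((nu+1)/2) / (2*gam^((nu+1)/2)).
a = np.linspace(1e-8, 20.0, 400_001)
f = a**nu * np.exp(-gam * a**2)
numeric = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a)))
closed_form = gamma((nu + 1.0) / 2.0) / (2.0 * gam ** ((nu + 1.0) / 2.0))
```

When either condition in (6.92) fails, the integral diverges and no normalization constant exists, in line with the discussion below.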
When the second condition is not satisfied, the stationary probability density function does not exist in the conventional sense; it approaches a delta distribution at the origin (x, ẋ) = (0, 0), suggesting that the origin is stable and that the steady-state response of the system is unaffected by the stochasticity.

6.6.3. Dimentberg's system

Consider the nonlinear Stratonovich stochastic equation (Dimentberg, 1982)

 ẍ + 2αẋ[1 + η(t)] + β1ẋ(x² + ẋ²/Ω²) + Ω²x[1 + ξ(t)] = ζ(t),   (6.93)

where β1 ≥ 0, α > 0, and η(t), ξ(t) and ζ(t) are zero-mean independent Gaussian physical white noises with the correlation functions

 E[η(t)η(t + τ)] = Dη δ(τ),  E[ξ(t)ξ(t + τ)] = Dξ δ(τ),  E[ζ(t)ζ(t + τ)] = Dζ δ(τ).   (6.94)

The equation of motion in the Stratonovich sense reads

 (S) dX1(t) = X2 dt,
  dX2(t) = −[2αX2 + β1X2(X1² + X2²/Ω²) + Ω²X1] dt + Σ_{k=1}^{3} σ2k(X, t) dBk(t),   (6.95)
where Bk(t) are independent unit Brownian motions and

 [σjk(X, t)] = [ 0, 0, 0 ; 2αX2√Dη, Ω²X1√Dξ, √Dζ ].   (6.96)
We convert the equation to the Itô sense as

 (I) dX1(t) = X2 dt,
  dX2(t) = −[2αX2 + β1X2(X1² + X2²/Ω²) + Ω²X1 − 2α²Dη X2] dt + Σ_{k=1}^{3} σ2k(X, t) dBk(t).   (6.97)

To derive the FPK equation, we have

 m1 = x2,
 m2 = −[2αx2 + β1x2(x1² + x2²/Ω²) + Ω²x1 − 2α²Dη x2],
 [bjk] = Σ_{l=1}^{m} σkl σjl = [ 0, 0 ; 0, 4α²Dη x2² + Ω⁴Dξ x1² + Dζ ].   (6.98)
The FPK equation reads

 ∂pX(x1, x2, t|x0, t0)/∂t = −∂/∂x1 [x2 pX]
  + ∂/∂x2 {[2αx2 + β1x2(x1² + x2²/Ω²) + Ω²x1 − 2α²Dη x2] pX}
  + (1/2) ∂²/∂x2² {[4α²Dη x2² + Ω⁴Dξ x1² + Dζ] pX}.   (6.99)

The stationary probability density function pX(x) satisfies the reduced FPK equation

 0 = (1/2) ∂²/∂x2² {[4α²Dη x2² + Ω⁴Dξ x1² + Dζ] pX(x1, x2)} − ∂/∂x1 [x2 pX]
  + ∂/∂x2 {[2αx2 + β1x2(x1² + x2²/Ω²) + Ω²x1 − 2α²Dη x2] pX}.   (6.100)

If

 Ω²Dξ = 4α²Dη,   (6.101)
the reduced FPK equation has the exact solution

 pX(x1, x2) = C0 (κ + x1² + x2²/Ω²)^{−(δ−κβ)} exp[−β(x1² + x2²/Ω²)],   (6.102)

where

 κ = Dζ/(Ω⁴Dξ),  δ = 2α/(Ω²Dξ) + 1/2,  β = β1/(Ω²Dξ),   (6.103)
and C0 is the normalization constant.

6.6.4. Equivalent Itô equation

The FPK equation (6.99) suggests that the Itô equation (6.97) with three independent Brownian motions is equivalent to another Itô equation with only one Brownian motion excitation, in the sense that both share the same FPK equation. The equivalent system can be readily obtained by considering equation (6.99) as

 (I) dX1(t) = X2 dt,
  dX2(t) = −[2αX2 + β1X2(X1² + X2²/Ω²) + Ω²X1 − 2α²Dη X2] dt
   + √(4α²Dη X2² + Ω⁴Dξ X1² + Dζ) dB(t),   (6.104)

where B(t) is a unit Brownian motion. This is an important result. When we study numerical solutions of a stochastic system such as the one defined in equation (6.93) by Monte Carlo simulation, we must simulate three independent white noise processes. If we use the equivalent Itô equation, we only need to simulate one Brownian motion, which is far more efficient computationally.

6.6.5. Hamiltonian systems

Consider the following Itô equations for a Hamiltonian system (Zhu and Yang, 1996)

 (I) dQj(t) = Pj(t) dt,
  dPj(t) = −[∂H/∂Qj + Σ_{k=1}^{n} mjk(Q, P) ∂H/∂Pk] dt + Σ_{k=1}^{m} σjk(Q, P) dBk(t),   (6.105)
where H = H(Q, P) is the Hamiltonian function of the system, Q is the n-dimensional generalized displacement vector, P is the corresponding generalized momentum vector, and Bk(t) are m independent unit Brownian motions. The FPK equation for the system reads

 ∂pH(q, p, t|q0, p0, t0)/∂t = −Σ_{k=1}^{n} ∂/∂qk [(∂H/∂pk) pH] + Σ_{k=1}^{n} ∂/∂pk [(∂H/∂qk) pH]
  + Σ_{k=1}^{n} Σ_{j=1}^{n} ∂/∂pj [mjk(q, p)(∂H/∂pk) pH]
  + Σ_{j=1}^{n} Σ_{k=1}^{n} ∂²/∂pj∂pk [(bjk(q, p)/2!) pH],   (6.106)

where

 bjk(q, p) = Σ_{l=1}^{m} σkl σjl (q, p).

Define a Poisson bracket as

 [pH, H] = −Σ_{k=1}^{n} ∂/∂qk [(∂H/∂pk) pH] + Σ_{k=1}^{n} ∂/∂pk [(∂H/∂qk) pH]
  = −Σ_{k=1}^{n} (∂H/∂pk)(∂pH/∂qk) + Σ_{k=1}^{n} (∂H/∂qk)(∂pH/∂pk).   (6.107)

The reduced FPK equation governing the stationary probability density function pH(q, p) is

 0 = [pH, H] + Σ_{k=1}^{n} Σ_{j=1}^{n} ∂/∂pj [mjk(q, p)(∂H/∂pk) pH]
  + Σ_{j=1}^{n} Σ_{k=1}^{n} ∂²/∂pj∂pk [(bjk(q, p)/2!) pH].   (6.108)

One way to systematically search for exact stationary probability density functions is to split this equation into two parts:

 [pH, H] = 0,   (6.109)

 Σ_{k=1}^{n} Σ_{j=1}^{n} ∂/∂pj [mjk(q, p)(∂H/∂pk) pH] + Σ_{j=1}^{n} Σ_{k=1}^{n} ∂²/∂pj∂pk [(bjk(q, p)/2!) pH] = 0.   (6.110)
Note that when pH(q, p) is assumed to be a function of the Hamiltonian H only, the first equation is satisfied automatically. Let pH(q, p) = F(H). Then

 [pH, H] = −Σ_{k=1}^{n} (∂H/∂pk)(∂pH/∂qk) + Σ_{k=1}^{n} (∂H/∂qk)(∂pH/∂pk)
  = (dF/dH)[−Σ_{k=1}^{n} (∂H/∂pk)(∂H/∂qk) + Σ_{k=1}^{n} (∂H/∂qk)(∂H/∂pk)] = 0.   (6.111)

We integrate the second equation once with respect to pj and apply the boundary condition at infinity, where pH(q, p) and its derivatives vanish. This leads to

 Σ_{k=1}^{n} mjk(q, p)(∂H/∂pk) pH + Σ_{k=1}^{n} ∂/∂pk [(bjk(q, p)/2!) pH] = 0,  j = 1, 2, …, n.   (6.112)

Assume that pH(q, p) = C0 e^{−f(H)}. We obtain a set of equations for the function f(H):

 0 = Σ_{k=1}^{n} mjk(q, p)(∂H/∂pk)
  + Σ_{k=1}^{n} [(1/2)(∂bjk(q, p)/∂pk) − (bjk(q, p)/2)(df/dH)(∂H/∂pk)].   (6.113)
These n equations must determine one unique solution of f(H). This uniqueness requirement puts constraints on the functional forms of mjk(q, p) and σjk(q, p).

EXAMPLE 6.4. Consider a two degree-of-freedom system governed by the Itô equations

 dQ1(t) = P1(t) dt,  dQ2(t) = P2(t) dt,
 dP1(t) = −[∂H/∂Q1 + c1 ∂H/∂P1] dt + √(2D1) dB1(t),
 dP2(t) = −[∂H/∂Q2 + c2 ∂H/∂P2] dt + √(2D2) dB2(t),   (6.114)

where c1, c2, D1 and D2 are constants and

 H = (1/2)k1q1² + (1/2)k2q2² + p1²/(2m1) + p2²/(2m2).   (6.115)
Equation (6.113) for this system becomes

 2c1 (∂H/∂p1) − 2D1 (∂H/∂p1)(df(H)/dH) = 0,
 2c2 (∂H/∂p2) − 2D2 (∂H/∂p2)(df(H)/dH) = 0.   (6.116)

Hence, we have

 df(H)/dH = c1/D1 = c2/D2 ≡ c/D,   (6.117)

where c1/D1 = c2/D2 is the compatibility condition. Then, f(H) = (c/D)H, and

 pH(q, p) = C0 e^{−(c/D)H}
  = C0 exp{−(c/D)[(1/2)k1q1² + (1/2)k2q2² + p1²/(2m1) + p2²/(2m2)]}.   (6.118)
This is the well-known result for linear oscillating systems.

EXAMPLE 6.5. Consider a system given by

 dX1(t) = X2 dt,
 dX2(t) = −g(X1, X2) dt + √(2D) dB(t),   (6.119)

where the formal derivative W(t) = dB(t)/dt is a zero-mean Gaussian white noise with correlation function E[W(t)W(t + τ)] = 2Dδ(τ), and

 g(x1, x2) = g0(x1) + (dQ(H)/dH)(∂H/∂x2) = g0(x1) + x2 dQ(H)/dH,   (6.120)

 H(x1, x2) = ∫_0^{x1} g0(x) dx + x2²/2.   (6.121)

Q(H) is a differentiable function. The reduced FPK equation for the system is given by

 D ∂²pX(x1, x2)/∂x2² − ∂/∂x1 [x2 pX] + ∂/∂x2 [g(x1, x2) pX] = 0.   (6.122)

Let pX(x1, x2) = C0 e^{−f(H)}. Equation (6.113) for this system reduces to

 dQ(H)/dH − D df(H)/dH = 0.

Hence, we have

 f(x1, x2) = (1/D) Q(H),   (6.123)

 pX(x1, x2) = C0 e^{−Q(H)/D}.   (6.124)
6.6.6. Detailed balance

Systems with invertible diffusion matrix

Recall the FPK equation (6.32). Consider the stationary solution pX(x) of the FPK equation, assuming that the drift vector mk(x) and the diffusion matrix bjk(x) are not explicit functions of time. We have the reduced FPK equation

 −Σ_{k=1}^{n} ∂/∂xk [mk(x) pX(x)] + Σ_{j=1}^{n} Σ_{k=1}^{n} ∂²/∂xj∂xk [(bjk(x)/2!) pX(x)] = 0.   (6.125)

Following the lines of discussion in Soize (1994), we define a probability current vector J = {Jk} as

 Jk = mk(x) pX(x) − Σ_{j=1}^{n} ∂/∂xj [(bjk(x)/2!) pX(x)]
  = [mk(x) − Σ_{j=1}^{n} (1/2!)(∂bjk(x)/∂xj)] pX(x) − Σ_{j=1}^{n} (bjk(x)/2!)(∂pX(x)/∂xj).   (6.126)

The reduced FPK equation (6.125) implies that

 ∇ · J = 0.   (6.127)

Let us consider a constant solution J = J0. Equation (6.127) implies that

 ∫_{R^n} ∇ · J dx = 0,   (6.128)

and, since pX(x) and its derivatives vanish at infinity, we must have J → 0 as ‖x‖ → ∞. Hence, J0 = 0. Therefore, from equation (6.126), we have n partial differential equations

 [mk(x) − Σ_{j=1}^{n} (1/2!)(∂bjk(x)/∂xj)] pX(x) − Σ_{j=1}^{n} (bjk(x)/2!)(∂pX(x)/∂xj) = 0,   (6.129)
where 1 ≤ k ≤ n. In vector form, we have

 F(x) pX(x) − (1/2!) B(x) · ∇pX(x) = 0,   (6.130)

where

 F(x) = {Fk(x)} = {mk(x) − Σ_{j=1}^{n} (1/2!)(∂bjk(x)/∂xj)},   (6.131)
 B(x) = [bjk(x)].   (6.132)

The condition for the probability current J to vanish is called the condition of detailed balance. Assume for now that the diffusion matrix B(x) is invertible for all x ∈ R^n. Equation (6.130) becomes

 ∇pX(x) − 2B⁻¹(x)F(x) pX(x) = 0.   (6.133)

Consider a solution in the exponential form with a potential function φ(x),

 pX(x) = C0 exp[−φ(x)].   (6.134)

We obtain an equation for φ(x),

 ∇φ(x) = −2B⁻¹(x)F(x).   (6.135)

The necessary and sufficient condition for the existence of the potential function is

 ∇ × [B⁻¹(x)F(x)] = 0.   (6.136)

Systems with non-invertible diffusion matrix

In most structural systems, where the governing stochastic equation is of second order, the diffusion matrix is not invertible. The condition of detailed balance can still be achieved, as discussed in Lin and Cai (1995). When a time reversal t → −t is introduced, some state variables change sign and others do not. The ones that change sign are said to be odd, and the ones that do not change sign are said to be even. For second order structural systems, displacements are even and velocities are odd. Let x̃ denote the state vector x in the reversed time −t, and let εk = +1 if xk is even and εk = −1 if xk is odd. We decompose the drift vector m(x) into reversible and irreversible parts as follows:

 mkR(x) = (1/2)[mk(x) − εk mk(x̃)],
 mkI(x) = (1/2)[mk(x) + εk mk(x̃)].   (6.137)
Next, we substitute the exponential solution of equation (6.134) into equation (6.129) and require the irreversible part of the drift vector to balance the effect of diffusion, leading to

 mkI(x) − Σ_{j=1}^{n} (1/2!)(∂bjk(x)/∂xj) + Σ_{j=1}^{n} (bjk(x)/2!)(∂φ(x)/∂xj) = 0  (1 ≤ k ≤ n).   (6.138)
This is reasonable since the diffusion process is irreversible. Applying this result back to equation (6.125), we obtain

 Σ_{k=1}^{n} [∂mkR(x)/∂xk − mkR(x) ∂φ(x)/∂xk] = 0.   (6.139)

The potential function φ(x) must satisfy the (n + 1) equations (6.138) and (6.139).

EXAMPLE 6.6. Reconsider Dimentberg's system

 (I) dX1(t) = X2 dt,
  dX2(t) = −[2αX2 + β1X2(X1² + X2²/Ω²) + Ω²X1 − 2α²Dη X2] dt + Σ_{k=1}^{3} σ2k(X, t) dBk(t),   (6.140)

where

 m1 = x2,   (6.141)
 m2 = −[2αx2 + β1x2(x1² + x2²/Ω²) + Ω²x1 − 2α²Dη x2],   (6.142)
 [bjk] = Σ_{l=1}^{m} σkl σjl = [ 0, 0 ; 0, 4α²Dη x2² + Ω⁴Dξ x1² + Dζ ].   (6.143)

The reduced FPK equation is

 0 = −∂/∂x1 [x2 pX] + (1/2) ∂²/∂x2² {[4α²Dη x2² + Ω⁴Dξ x1² + Dζ] pX(x1, x2)}
  + ∂/∂x2 {[2αx2 + β1x2(x1² + x2²/Ω²) + Ω²x1 − 2α²Dη x2] pX}.   (6.144)
We assume that

 Ω²Dξ = 4α²Dη.   (6.145)

Note that x1 is even and x2 is odd, so ε1 = 1 and ε2 = −1, and m2(x̃) = 2αx2 + β1x2(x1² + x2²/Ω²) − Ω²x1 − 2α²Dη x2. Hence,

 m1R = (1/2)[x2 − 1 × (−x2)] = x2,   (6.146)

 m1I = (1/2)[x2 + 1 × (−x2)] = 0,   (6.147)

 m2R = (1/2)[m2(x) + m2(x̃)] = −Ω²x1,   (6.148)

 m2I = (1/2)[m2(x) − m2(x̃)] = −[2αx2 + β1x2(x1² + x2²/Ω²) − 2α²Dη x2].   (6.149)

From equation (6.138), the equation for k = 1,

 m1I(x) − Σ_{j} (1/2!)(∂bj1(x)/∂xj) + Σ_{j} (bj1(x)/2!)(∂φ(x)/∂xj) = 0,   (6.150)

is satisfied identically since m1I = 0 and bj1 = 0. For k = 2, using condition (6.145), the irreversible drift becomes

 m2I = −[2αx2 + β1x2(x1² + x2²/Ω²) − (1/2)Ω²Dξ x2],   (6.151)

and equation (6.138) reads

 m2I(x) − Ω²Dξ x2 + (1/2)[Ω⁴Dξ(x1² + x2²/Ω²) + Dζ] ∂φ(x)/∂x2 = 0.   (6.152)

From equation (6.139),

 −x2 ∂φ(x)/∂x1 + Ω²x1 ∂φ(x)/∂x2 = 0.   (6.153)

Rewrite equation (6.153) as

 (1/x1) ∂φ(x)/∂x1 = [1/(x2/Ω²)] ∂φ(x)/∂x2.   (6.154)
This suggests that φ(x) = φ(R), where R = x1² + x2²/Ω². Substituting φ(R) into the detailed balance equation (6.152), we obtain

 dφ(R) = [β1 (R + (2α + Ω²Dξ/2)/β1)] / [Ω²Dξ (R + Dζ/(Ω⁴Dξ))] dR.   (6.155)

This equation can be readily integrated in closed form. Using the shorthands defined in equation (6.103), we obtain the stationary probability density function, which is the same as the one given in equation (6.102).
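Since the algebra leading to (6.102) is lengthy, a finite-difference check is reassuring: evaluating the reduced FPK operator of (6.100) on the closed-form density should give a residual at the level of the discretization error. The parameter values below are illustrative assumptions, with Dη chosen to enforce condition (6.101).

```python
import numpy as np

# Check that the closed-form density (6.102) annihilates the reduced FPK
# operator (6.100) when Omega^2*D_xi = 4*alpha^2*D_eta (condition (6.101)).
alpha, Omega, beta1, Dxi, Dzeta = 0.2, 1.0, 0.5, 0.1, 0.02
Deta = Omega**2 * Dxi / (4.0 * alpha**2)          # enforce condition (6.101)
kappa = Dzeta / (Omega**4 * Dxi)
delta = 2.0 * alpha / (Omega**2 * Dxi) + 0.5
beta = beta1 / (Omega**2 * Dxi)                   # shorthands (6.103)

def p(x1, x2):                                    # stationary density (6.102), unnormalized
    R = x1**2 + x2**2 / Omega**2
    return (kappa + R) ** (-(delta - kappa * beta)) * np.exp(-beta * R)

def flux1(x1, x2):                                # x2 * p
    return x2 * p(x1, x2)

def drift2(x1, x2):                               # damping-restoring bracket of (6.100) times p
    R = x1**2 + x2**2 / Omega**2
    return (2 * alpha * x2 + beta1 * x2 * R + Omega**2 * x1
            - 2 * alpha**2 * Deta * x2) * p(x1, x2)

def diff2(x1, x2):                                # b22 * p
    return (4 * alpha**2 * Deta * x2**2 + Omega**4 * Dxi * x1**2 + Dzeta) * p(x1, x2)

h, pts = 1e-3, [(0.3, -0.4), (1.0, 0.7), (-0.8, 1.2)]
residuals = []
for x1, x2 in pts:
    r = (-(flux1(x1 + h, x2) - flux1(x1 - h, x2)) / (2 * h)
         + (drift2(x1, x2 + h) - drift2(x1, x2 - h)) / (2 * h)
         + 0.5 * (diff2(x1, x2 + h) - 2 * diff2(x1, x2) + diff2(x1, x2 - h)) / h**2)
    residuals.append(abs(r))
```

The residuals are dominated by the O(h²) truncation error of the central differences and vanish as h is refined.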
Exercises

EXERCISE 6.1. Complete the steps of the derivation of the FPK equation from Itô's lemma.

EXERCISE 6.2. Consider the stochastic differential equation

 dY(t)/dt = W(t),  t > 0,  Y(0) = 0,   (6.156)

where W(t) is a unit white noise such that

 E[W(t)] = 0,  E[W(t)W(t + τ)] = 2δ(τ).   (6.157)

Determine the conditional probability density function pY(y, t|y0, t0) of the solution process Y(t).

EXERCISE 6.3. Derive the FPK equation for the Ornstein–Uhlenbeck process governed by the Itô differential equation

 dX = −aX dt + √(2aσ²) dB(t),   (6.158)

where a > 0 and σ are constants, and B(t) is a unit Brownian motion.

EXERCISE 6.4. Solve the FPK equation for the Ornstein–Uhlenbeck process subject to an initial condition pX(x, t|x0, t0) = δ(x − x0).

EXERCISE 6.5. Show that the stationary probability density function of the Ornstein–Uhlenbeck process is given by

 pX(x) = [1/(2√π σ)] e^{−x²/(4σ²)}.   (6.159)
EXERCISE 6.6. Consider a linear spring–mass–damper oscillator excited by a white noise

 Ẍ + 2ζω0Ẋ + ω0²X = W(t).   (6.160)

Assume that W(t) is a formal derivative of a Brownian motion B(t) such that W(t) dt = dB(t), where dB(t) has the properties

 E[dB(t)] = 0,  E[dB(t1) dB(t2)] = 2Dδ(t1 − t2).   (6.161)

Derive the FPK equation for the oscillator.

EXERCISE 6.7. Solve the FPK equation for the linear oscillator starting from an initial condition

 pX(x, t|x0, t0) = δ(x − x0).   (6.162)

EXERCISE 6.8. Finish the integration in Example 6.6 to obtain the solution in equation (6.102).

EXERCISE 6.9. Consider a nonlinear stochastic system

 Ẍ + h(Λ)Ẋ + f(X) = W(t),   (6.163)

where W(t) is a white noise such that

 E[W(t)] = 0,  E[W(t)W(t + τ)] = 2Dδ(τ).   (6.164)

Λ is a function of (X, Ẋ) defined as

 Λ(X, Ẋ) = (1/2)Ẋ² + ∫_0^X f(x) dx.   (6.165)

By applying the method of detailed balance, show that the system has an exact stationary probability density function satisfying the reduced FPK equation

 pX(x1, x2) = C0 exp[−(1/D) ∫_0^λ h(r) dr],   (6.166)

where

 λ(x, ẋ) = (1/2)ẋ² + ∫_0^x f(z) dz.   (6.167)
EXERCISE 6.10. Consider a nonlinear stochastic system

 Ẍ + (α + βX²)Ẋ + ω0²[1 + W1(t)]X = W2(t),   (6.168)

where W1(t) and W2(t) are independent white noise excitations such that

 E[Wj(t)] = 0,  E[Wj(t)Wj(t + τ)] = 2Dj δ(τ)  (j = 1, 2).   (6.169)

(A) Derive the corresponding Itô stochastic differential equations for the system, and the FPK equation.
(B) Apply the method of detailed balance to show that the system has the stationary probability density function

 pX(x1, x2) = C0 exp[−(α/(2D2))(ω0²x1² + x2²)],   (6.170)

when

 α/β = D2/(D1ω0⁴).   (6.171)
Chapter 7
Kolmogorov Backward Equation
The conditional probability density function of a Markov process pX (x, t|x0 , t0 ) is a function of two sets of arguments, namely, (x, t) and (x0 , t0 ). The FPK equation in the previous chapter deals with the variation of the density function with respect to the arguments (x, t). This describes the forward dynamics of the system since t > t0 . In this chapter, we study how pX (x, t|x0 , t0 ) changes with respect to (x0 , t0 ). In other words, we are interested in the backward dynamics (Lin, 1967; Risken, 1984; Soize, 1994).
7.1. Derivation of the backward equation

Rewrite the Chapman–Kolmogorov–Smoluchowski equation as

 pX(y, t|x0, t0) = ∫_{R¹} pX(y, t|x, t0 + Δt) pX(x, t0 + Δt|x0, t0) dx.   (7.1)

Expanding pX(y, t|x, t0 + Δt) with respect to x about the point x0 in a Taylor series, we have

 pX(y, t|x, t0 + Δt) = pX(y, t|x0, t0 + Δt) + Δx ∂pX(y, t|x0, t0 + Δt)/∂x0
  + (Δx²/2!) ∂²pX(y, t|x0, t0 + Δt)/∂x0² + ···,   (7.2)

where Δx = x − x0. Define two symbols as

 A(x0, t0) = lim_{Δt→0} (1/Δt) ∫_{R¹} Δx · pX(x, t0 + Δt|x0, t0) dx,
 B(x0, t0) = lim_{Δt→0} (1/Δt) ∫_{R¹} Δx² pX(x, t0 + Δt|x0, t0) dx.   (7.3)
Equation (7.3) can be written as

 A(x0, t0) = lim_{Δt→0} (1/Δt) E[ΔX|X = x0],
 B(x0, t0) = lim_{Δt→0} (1/Δt) E[ΔX²|X = x0].   (7.4)

We obtain the backward Kolmogorov equation governing the evolution of the conditional probability density function:

 −∂pX(x, t|x0, t0)/∂t0 = A(x0, t0) ∂pX/∂x0 + (B(x0, t0)/2!) ∂²pX/∂x0² + ···.   (7.5)

According to Theorem 6.1 (the Pawula theorem), we keep the first two terms of the backward Kolmogorov equation, leading to

 −∂pX(x, t|x0, t0)/∂t0 = A(x0, t0) ∂pX/∂x0 + (B(x0, t0)/2!) ∂²pX/∂x0².   (7.6)
In the same manner, we can derive the backward Kolmogorov equation for vector processes as

 −∂pX(x, t|x0, t0)/∂t0 = Σ_{j=1}^{n} Aj(x0, t0) ∂pX/∂x0j
  + Σ_{j=1}^{n} Σ_{k=1}^{n} (Bjk(x0, t0)/2!) ∂²pX/∂x0j∂x0k.   (7.7)
Note that t0 ≤ t < ∞.

EXAMPLE 7.1. Consider a one-dimensional linear stochastic system given by

 dX = −αX dt + σ dB(t),   (7.8)

where α and σ are constants and B(t) is the unit Brownian motion. Then

 A = lim_{Δt→0} (1/Δt) E[ΔX|X = x0] = −αx0,   (7.9)

 B = lim_{Δt→0} (1/Δt) E[ΔX²|X = x0]
  = lim_{Δt→0} (1/Δt) E[α²x0²Δt² − 2ασx0 Δt ΔB(t) + σ²ΔB²(t)] = σ².   (7.10)

The backward Kolmogorov equation reads

 −∂pX(x, t|x0, t0)/∂t0 = −αx0 ∂pX/∂x0 + (σ²/2!) ∂²pX/∂x0².   (7.11)
E XAMPLE 7.2. The equation of motion of a linear oscillator excited by a white noise can be written in a state space form dX1 = X2 dt, dX2 = −2ζ ω0 X2 − ω02 X1 dt + dβ(t),
(7.12)
where β(t) is a Wiener process with the following properties
E β(t) = 0,
E β(t1 )β(t2 ) = 2D(t1 − t2 ),
t 1 > t2 .
(7.13)
Let X = [X1 , X2 ]T be the vector stochastic process of the system. According to equation (7.4), we have 1 E[X1 |X = x0 ] = x02 , t→0 t 1 E[X2 |X = x0 ] = −2ζ ω0 x02 − ω02 x01 , A2 = lim t→0 t
A1 = lim
(7.14)
1
E X12 |X = x0 t→0 t
B11 = lim
t 2 E[X2 |X = x0 ] = 0, t→0 t 1 = B12 = lim (7.15) E[X1 X2 |X = x0 ] t→0 t 1 2 E t x02 −2ζ ω0 x02 − ω02 x01 = lim t→0 t + tx02 β(t)|X = x0 = 0, 1
= lim E X22 |X = x t→0 t 2 1 E −2ζ ω0 x02 − ω02 x01 t 2 = lim t→0 t + 2 −2ζ ω0 x02 − ω02 x01 tβ(t) + β(t)2 |X = x0 = 2D.
= lim B21
B22
The backward Kolmogorov equation for the oscillator is given by −
∂ pX (x, t|x0 , t0 ) ∂t0 = x02
∂pX ∂pX ∂ 2 pX − 2ζ ω0 x02 + ω02 x01 +D 2 . ∂x01 ∂x02 ∂x02
(7.16)
Chapter 7. Kolmogorov Backward Equation
124
7.2. Reliability formulation An important application of the backward Kolmogorov equation is the reliability study. Consider the case of n-dimensional vector processes X(t). Let S ⊆ R n be a domain in which the system is considered to be safe. Γ is the boundary of S. Assume that X(t0 ) = x0 ∈ S at time t0 . The probability that the system is still in the safe domain S at time t is given by RS (t, t0 , x0 ) = P t < T ∩ X(t) ∈ S|X(t0 ) = x0 = pX (x, t|x0 , t0 ) dx, (7.17) S
where T is the first time when X(t) crosses the boundary Γ . RS (t, t0 , x0 ) is also known as the reliability against the first-passage failure with respect to the safe domain S. Integrating equation (7.7) over S with respect to x, we obtain a partial differential equation of the reliability function RS (t, t0 , x0 ). ∂RS (t, t0 , x0 ) ∂RS (t, t0 , x0 ) = Aj (x0 , t0 ) ∂t0 ∂x0j n
−
j =1
+
n n Bj k (x0 , t0 ) ∂ 2 RS (t, t0 , x0 ) j =1 k=1
2!
∂x0j ∂x0k
.
(7.18)
subject to the following initial and boundary conditions RS (t0 , t0 , x0 ) = 1,
x0 ∈ S,
(7.19)
RS (t, t0 , x0 ) = 0,
x0 ∈ Γ.
(7.20)
When X(t) is a stationary process such that pX (x, t|x0 , t0 ) = pX (x, t − t0 |x0 ), Aj (x0 , t0 ) = Aj (x0 ) and Bj k (x0 , t0 ) = Bj k (x0 ). Equation (7.17) implies that RS (t, t0 , x0 ) = RS (t − t0 , x0 ). Let τ = t − t0 . Equation (7.18) becomes ∂RS (τ, x0 ) ∂RS (τ, x0 ) Aj (x0 ) = ∂τ ∂x0j n
j =1
+
n n Bj k (x0 ) ∂ 2 RS (τ, x0 ) j =1 k=1
2!
∂x0j ∂x0k
.
(7.21)
The initial and boundary conditions are RS (0, x0 ) = 1,
x0 ∈ S,
(7.22)
RS (τ, x0 ) = 0,
x0 ∈ Γ.
(7.23)
7.3. First-passage time probability
125
7.3. First-passage time probability Denote the complement of RS (t, t0 , x0 ) as FS (t, t0 , x0 ), which is the probability distribution function of the first-passage time. We have FS (t, t0 , x0 ) = P t T |X(t0 ) = x0 = 1 − RS (t, t0 , x0 ).
(7.24)
Substituting this relationship to equation (7.18), we obtain ∂FS (t, t0 , x0 ) ∂FS (t, t0 , x0 ) = Aj (x0 , t0 ) ∂t0 ∂x0j n
−
j =1
+
n n Bj k (x0 , t0 ) ∂ 2 FS (t, t0 , x0 ) j =1 k=1
2!
∂x0j ∂x0k
.
(7.25)
The probability density function of the first-passage time denoted by pT (t|x0 , t0 ) is given by pT (t|x0 , t0 ) =
∂FS (t, t0 , x0 ) ∂RS (t, t0 , x0 ) =− . ∂t ∂t
(7.26)
Differentiating equation (7.25) with respect to t, we yield the governing equation for pT (t|x0 , t0 ) ∂pT (t|x0 , t0 ) ∂pT (t|x0 , t0 ) = Aj (x0 , t0 ) − ∂t0 ∂x0j n
j =1
+
n n Bj k (x0 , t0 ) ∂ 2 pT (t|x0 , t0 ) j =1 k=1
2!
∂x0j ∂x0k
.
(7.27)
Since, at a given time t > t0 and when x0 ∈ Γ , the reliability of the system vanishes RS (t, t0 , x0 ) = 0, this suggests the boundary condition pT (t|x0 , t0 ) = 0,
t > t0 , x0 ∈ Γ.
(7.28)
Assume that initially, the system starts from a point in the safe domain with probability one, we have an initial condition pT (t0 |x0 , t0 ) = δ(x0 ),
x0 ∈ S.
We leave the discussion on stationary process as an exercise.
(7.29)
Chapter 7. Kolmogorov Backward Equation
126
7.4. Pontryagin–Vitt equations The first-passage time is a random variable and its rth order moment can be defined as ∞
r Mr (x0 , t0 ) = E (T − t0 ) |x0 , t0 = (t − t0 )r pT (t|x0 , t0 ) dt. (7.30) t0
From equation (7.27), we obtain a set of integral-partial differential equations for the moments of the first-passage time as ∞ n ∂pT (t|x0 , t0 ) ∂Mr (x0 , t0 ) − (t − t0 )r dt = Aj (x0 , t0 ) ∂t0 ∂x0j j =1
t0
+
n n j =1 k=1
Bj k (x0 , t0 ) ∂ 2 Mr (x0 , t0 ) . 2! ∂x0j ∂x0k
(7.31)
This equation is in general difficult to solve. Next, we assume that X(t) is a stationary process. Recall that ∂RS (τ, x0 ) ∂τ from equation (7.26). Equation (7.27) now reads pT (τ |x0 ) = −
∂pT (τ |x0 ) ∂pT (τ |x0 ) Aj (x0 ) = ∂τ ∂x0j n
j =1
+
n n Bj k (x0 ) ∂ 2 pT (τ |x0 )
2!
j =1 k=1
∂x0j ∂x0k
(7.32)
.
Multiplying equation (7.32) by τ r and integrating from 0 to ∞ with respect to τ , we obtain ∞
∂pT (τ |x0 ) ∂Mr (x0 ) τ Aj (x0 ) dτ = ∂τ ∂x0j n
r
0
j =1
+
n n Bj k (x0 ) ∂ 2 Mr (x0 ) j =1 k=1
2!
∂x0j ∂x0k
,
(7.33)
where ∞ Mr (x0 ) =
τ r pT (τ |x0 ) dτ. 0
(7.34)
Exercises
127
Assume that limτ →∞ τ r pT (τ |x0 ) = 0. Integration by part leads to ∞
∂pT (τ |x0 ) τ dτ = −r ∂τ
∞ τ r−1 pT (τ |x0 ) dτ = −rMr−1 (x0 ).
r
0
(7.35)
0
Hence, equation (7.33) becomes −rMr−1 (x0 ) =
n
Aj (x0 )
j =1
+
∂Mr (x0 ) ∂x0j
n n Bj k (x0 ) ∂ 2 Mr (x0 ) j =1 k=1
2!
∂x0j ∂x0k
.
(7.36)
These are called the generalized Pontryagin–Vitt equations. All the moments satisfy the same boundary condition Mr (x0 ) = 0,
x0 ∈ Γ, r = 1, 2, 3, . . .
(7.37)
Note that M0 (x0 ) = 1 because pT (τ |x0 ) is a probability density function of τ . Hence, the mean of the first-passage time satisfies the following equation −1 =
n j =1
∂M1 (x0 ) Bj k (x0 ) ∂ 2 M1 (x0 ) + . ∂x0j 2! ∂x0j ∂x0k n
Aj (x0 )
n
(7.38)
j =1 k=1
This is known as the Pontryagin–Vitt equations.
Exercises E XERCISE 7.1. Derive the backward Kolmogorov equation for the Ornstein– Uhlenbeck process governed by the following Itô differential equation dX = −aX dt + 2aσ 2 dB(t), (7.39) where a > 0 and σ are constants, and B(t) is a unit Brownian motion. E XERCISE 7.2. Show that for a stationary process X(t), the first-passage time probability density is a function of the time increment τ = t − t0 , i.e., pT (t|x0 , t0 ) = pT (τ |x0 ). Derive the governing equation, initial and boundary conditions for pT (τ |x0 ) with respect to the safe domain S. E XERCISE 7.3. Derive the Pontryagin–Vitt equations for the Ornstein–Uhlenbeck process in Exercise 7.1.
Chapter 8
Random Vibration of SDOF Systems
In this chapter, some essential results of the random vibration theory of linear structural systems are presented. We first consider single-degree-of-freedom (SDOF) systems. The emphasis is on the various methods available for obtaining the response statistics for the system (Elishakoff, 1983; Lin, 1967; Newland, 1984; Nigam, 1983; Soong and Grigoriu, 1993; Young, 1986; Wirsching et al., 1995; Sólnes, 1997).
8.1. Solutions in the mean square sense Consider a SDOF spring-mass oscillator subject to an external random loading W (t). The motion of the system is governed by the stochastic differential equation mX¨ + cX˙ + kX = W (t), t > 0,
(8.1)
˙ X(0) = X˙ 0 ,
X(0) = X0 ,
where m is the mass, k is the spring constant, and c is the damping coefficient. Let h(t) be the impulse response of the system ⎧ ⎨ e−ζ ωn t sin(ωd t), t 0, h(t) = (8.2) ⎩ mωd 0, t < 0, $ where ωn = mk is the resonant frequency, ζ = 2ωcn m is the damping ratio and ωd = ωn 1 − ζ 2 is the damped resonant frequency. The response of the system is given by Duhamel’s integral in the mean square sense t X(t) = XI (t) +
h(t − τ )W (τ ) dτ, 0 129
(8.3)
Chapter 8. Random Vibration of SDOF Systems
130
where XI (t) is the response due to initial conditions ˙ XI (t) = h(t)c + h(t)m X0 + h(t)mX˙ 0 .
(8.4)
Recall that when the damping ratio ζ < 1, the system is underdamped. When ζ > 1, the system is overdamped. When ζ = 1, the system is critically damped. When ζ = 0, the system has a resonance at ωn . All these properties of the oscillatory system still exist in the stochastic response. 8.1.1. Expectations of the response The mean and autocovariance functions of the response X(t) are given by
μX (t) = E X(t) = μXI (t) +
t h(t − τ )μW (τ ) dτ,
(8.5)
0
κXX (t1 , t2 ) = E X(t1 ) − μX (t1 ) X(t2 ) − μX (t2 ) t1 t2 = κXI XI (t1 , t2 ) +
h(t1 − τ1 )h(t2 − τ2 )κW W (τ1 , τ2 ) dτ1 dτ2 , (8.6) 0 0
where ˙ μX0 + h(t)mμX˙ 0 , μXI (t) = h(t)c + h(t)m κXI XI (t1 , t2 ) = A(t1 )A(t2 )σX2 0 + B(t1 )B(t2 )σX2˙ 0 + A(t1 )B(t2 ) + A(t2 )B(t1 ) κX0 X˙ 0 .
(8.7)
The mean of the velocity is defined as
˙ = μX˙ I (t) + μX˙ (t) = E X(t)
t
˙ − τ )μW (τ ) dτ. h(t
(8.8)
0 dμX (t) dt
Hence, μX˙ (t) = in the mean square sense. The cross-covariance function of the displacement and velocity is given by
˙ 2 ) − μ ˙ (t2 ) κXX˙ (t1 , t2 ) = E X(t1 ) − μX (t1 ) X(t X ∂κXX (t1 , t2 ) = (8.9) . ∂t2 The auto-covariance function of the velocity is obtained as κX˙ X˙ (t1 , t2 ) =
∂ 2 κXX (t1 , t2 ) . ∂t1 ∂t2
(8.10)
8.1. Solutions in the mean square sense
131
8.1.2. Stationary random excitation Assume now that the excitation W (t) is a weakly or strictly stationary random process such that its mean is constant and κW W (τ1 , τ2 ) = κW W (τ2 − τ1 ). Since μXI (t) → 0 as t → ∞, we have ∞ μX = lim μX (t) = μW
h(τ ) dτ =
t→∞
1 μW . k
(8.11)
0
Note that κXI XI (t1 , t2 ) → 0 as t1 , t2 → ∞. Consider the limit of the autocovariance function κXX (t1 , t2 ) such that t1 , t2 → ∞ and τ = t2 − t1 is finite. We have ∞ ∞ h(τ1 )h(τ2 )κW W (τ − τ1 + τ2 ) dτ1 dτ2 κXX (t1 , t2 ) = 0
0
= κXX (τ ).
(8.12)
The stationary mean square response of the oscillator is given by σX2 = κXX (0). Recall that h(t) = 0 for t < 0. Equation (8.12) can be rewritten as ∞ ∞ κXX (τ ) =
h(τ1 )h(τ2 )κW W (τ − τ1 + τ2 ) dτ1 dτ2 .
(8.13)
−∞ −∞
E XAMPLE 8.1. Consider a white noise excitation such that μW = 0 and κW W (t2 − t1 ) = 2πS0 δ(t2 − t1 ). Assume that the initial conditions of the system are all zero so that XI (t) = 0. Then, for t1 , t2 > 0, t1 t2 κXX (t1 , t2 ) = 2πS0
h(τ1 )h(τ2 )δ(t2 − t1 − τ1 + τ2 ) dτ1 dτ2 0
=
2πS0 m2 ωd2
0
min(t 1 ,t2 )
e−ζ ωn (t1 +t2 −2τ ) sin ωd (t1 − τ ) sin ωd (t2 − τ ) dτ
0
ζ ωn πS0 −ζ ωn |t2 −t1 | sin ωd |t2 − t1 | = e cos ωd (t1 − t2 ) + ωd 2m2 ζ ωn3 2 ζ ωn ω sin ωd (t1 + t2 ) − e−ζ ωn (t1 +t2 ) n2 cos ωd (t2 − t1 ) + ωd ωd " ζ 2 ωn2 − (8.14) cos ωd (t1 + t2 ) . ωd2
Chapter 8. Random Vibration of SDOF Systems
132
As t1 , t2 → ∞ and τ = t2 − t1 is finite, we have πS0 −ζ ωn |τ | ζ ωn e τ + sin ω |τ | κXX (t1 , t2 ) = cos ω d d ωd 2ζ ωn3 m2 = κXX (τ ),
(8.15)
and the steady state mean square response σX2 = κXX (0) =
πS0 . 2ζ ωn3 m2
(8.16)
The transient mean square response is obtained by setting t = t1 = t2 in κXX (t1 , t2 ) σX2 (t) =
πS0 2ζ ωn3 m2
" 2 ζ ωn ζ 2 ωn2 ω × 1 − e−2ζ ωn t n2 + sin 2ωd t − cos 2ω t . (8.17) d ωd ωd ωd2
Examples of the transient mean square response are shown in Figure 8.1. Recall that in the steady state, we have κXX (t1 , t2 ) = κXX (t2 − t1 ) = κXX (τ ), and κXX (τ ) is an even function of τ . That is, κXX (t1 − t2 ) = κXX (t2 − t1 ). Equations (8.9) and (8.10) then become κXX˙ (τ ) =
Figure 8.1.
∂κXX (t2 − t1 ) ∂κXX (τ ) = , ∂t2 ∂τ
(8.18)
The transient mean square response of a SDOF oscillator subject to the Gaussian white noise excitation as a function of the damping ratio. m = 1. k = 1. S0 = 1.
8.1. Solutions in the mean square sense
∂ 2 κXX (t2 − t1 ) ∂ 2 κXX (τ ) =− . ∂t1 ∂t2 ∂τ 2
κX˙ X˙ (τ ) =
133
(8.19)
8.1.3. Power spectral density Continue to assume that W (t) is stationary with zero mean such that κW W (τ ) = RW W (τ ). Since X(t) is also stationary and has zero mean in steady state as t → ∞ and κXX (τ ) = RXX (τ ), we can compute its power spectral density function from equation (8.13) 1 SXX (ω) = 2π 1 2π
=
∞
κXX (τ )e−iωτ dτ
−∞ ∞ ∞ ∞
h(τ1 )h(τ2 ) −∞ −∞ −∞
× κW W (τ − τ1 + τ2 )e−iωτ dτ1 dτ2 dτ.
(8.20)
The impulse response is related to the frequency response function by 1 h(t) = 2π
∞
H (ω)e−iωt dω,
(8.21)
−∞
where ∞ H (ω) =
h(τ )eiωτ dτ = −∞
1 . m(ωn2 − ω2 + 2ζ ωn ωi)
(8.22)
Also, we have ∞ κW W (τ ) =
SW W (ω)eiωτ dω.
(8.23)
−∞
Substituting all these relationships to equation (8.20), we have 2 SXX (ω) = H (ω) SW W (ω).
(8.24)
In the steady state, we also have κXX˙ (τ ) = RXX˙ (τ ) and κX˙ X˙ (τ ) = RX˙ X˙ (τ ). The power spectral densities for the cross-covariance and the velocity are given by 1 SXX˙ (ω) = 2π
∞ −∞
κXX˙ (τ )e−iωτ dτ
Chapter 8. Random Vibration of SDOF Systems
134
1 = 2π
∞ −∞
∂κXX (τ ) −iωτ dτ e ∂τ
∞ 1 −iωτ τ =∞ −iωτ = + iω κXX (τ )e dτ κXX (τ )e τ =−∞ 2π −∞
2 = iωSXX (ω) = iω H (ω) SW W (ω), 1 SX˙ X˙ (ω) = 2π
∞
(8.25)
κX˙ X˙ (τ )e−iωτ dτ
−∞
∞ 2 1 ∂ κXX (τ ) −iωτ =− e dτ 2π ∂τ 2 −∞ τ =∞ " 1 ∂κXX (τ ) = − − iωκXX (τ ) e−iωτ 2π ∂τ τ =−∞ ∞ 2 + ω2 κXX (τ )e−iωτ dτ = ω2 H (ω) SW W (ω),
(8.26)
−∞
where we have assumed that ∂κXX (τ ) (8.27) = 0. τ →±∞ τ →±∞ ∂τ In the same manner, we can obtain the power spectral density for the acceleration as 2 SX¨ X¨ (ω) = ω4 H (ω) SW W (ω). (8.28) lim κXX (τ ) = 0,
lim
A plot of SXX (ω), SX˙ X˙ (ω) and SX¨ X¨ (ω) is shown in Figure 8.2 when the system is subject to the Gaussian white noise excitation with a constant power spectral density S0 . It is noted that on the logarithmic scale, the slope of SX¨ X¨ (ω) as ω ωn is zero. In other words, SX¨ X¨ (ω) becomes a flat line in the region ω ωn indicating that the acceleration behaves like a white noise at high frequencies. This observation has significant consequences when accelerometers are used to measure random structural responses. E XAMPLE 8.2. Let W (t) be the same Gaussian white noise with SW W (ω) = S0 as in the previous example. We have SXX (ω) =
S0 . m2 [(ωn2 − ω2 )2 + 4ζ 2 ωn2 ω2 ]
(8.29)
8.2. Solutions with Itô calculus
135
Figure 8.2. The power spectral densities of the displacement (solid line), velocity (dash-dot line) and acceleration (dashed line) of the SDOF oscillator subject to the Gaussian white noise excitation with a constant spectral density S0 = 1. m = 1. k = 1. ζ = 0.1.
The stationary mean square response of $X(t)$ can be expressed in terms of $S_{XX}(\omega)$:
$$\sigma_X^2=\kappa_{XX}(0)=\int_{-\infty}^{\infty}S_{XX}(\omega)e^{i\omega\tau}\Big|_{\tau=0}\,d\omega
=\int_{-\infty}^{\infty}|H(\omega)|^2S_{WW}(\omega)\,d\omega. \tag{8.30}$$
In the exercises, we shall ask the reader to show that
$$\sigma_{\dot X}^2=\kappa_{\dot X\dot X}(0)=\int_{-\infty}^{\infty}\omega^2|H(\omega)|^2S_{WW}(\omega)\,d\omega, \tag{8.31}$$
$$\sigma_{\ddot X}^2=\kappa_{\ddot X\ddot X}(0)=\int_{-\infty}^{\infty}\omega^4|H(\omega)|^2S_{WW}(\omega)\,d\omega, \tag{8.32}$$
provided that the integrals exist.
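As a quick numerical check (not part of the book), the integral in equation (8.30) for white noise input can be compared with the closed-form value $\pi S_0/(2\zeta\omega_n^3m^2)$ quoted later in equation (8.40):

```python
import numpy as np

# Verify sigma_X^2 = integral of |H|^2 S0 against the closed form.
m, zeta, wn, S0 = 1.0, 0.1, 1.0, 1.0

w = np.linspace(-100.0, 100.0, 1_000_001)
dw = w[1] - w[0]
H2 = 1.0 / (m**2 * ((wn**2 - w**2)**2 + 4 * zeta**2 * wn**2 * w**2))
sigma2_num = np.sum(H2 * S0) * dw          # Riemann sum of equation (8.30)
sigma2_exact = np.pi * S0 / (2 * zeta * wn**3 * m**2)
print(sigma2_num, sigma2_exact)            # both approximately 15.708
```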
8.2. Solutions with Itô calculus

Assume that the excitation is a white noise, interpreted as the formal derivative of Brownian motion. Define the state vector of the response as $\mathbf X(t)=[X(t),\dot X(t)]^T$. The equation of motion (8.1) can then be written in the Itô sense as
$$dX_1(t)=X_2\,dt,\qquad t>0,\quad \mathbf X(0)=\mathbf x_0,$$
$$dX_2(t)=\left(-\omega_n^2X_1-2\zeta\omega_nX_2\right)dt+\sigma_{21}\,dB_1(t), \tag{8.33}$$
where
$$m_1(\mathbf X)=X_2,\qquad m_2(\mathbf X)=-\omega_n^2X_1-2\zeta\omega_nX_2,\qquad
\sigma_{11}=0,\qquad \sigma_{21}=\frac{1}{m}\sqrt{2\pi S_0}, \tag{8.34}$$
and the unit Brownian motion $B_1(t)$ satisfies the conditions
$$E[dB_1(t)]=0,\qquad E[dB_1(t)\,dB_1(s)]=\begin{cases}dt,&t=s,\\ 0,&t\neq s.\end{cases} \tag{8.35}$$
8.2.1. Moment equations

Consider the differential of the polynomial function $X_1^kX_2^l$ by applying Itô's lemma stated in equation (5.126). We have
$$d\!\left(X_1^kX_2^l\right)=\left[m_1(\mathbf X)kX_1^{k-1}X_2^l+m_2(\mathbf X)lX_1^kX_2^{l-1}
+\frac{\sigma_{21}^2}{2}\,l(l-1)X_1^kX_2^{l-2}\right]dt
+\sigma_{21}lX_1^kX_2^{l-1}\,dB_1(t)+O\!\left(dt^2\right). \tag{8.36}$$
By selecting the pairs of power indices $(k,l)$ to be $(1,0)$, $(0,1)$, $(2,0)$, $(1,1)$ and $(0,2)$, we arrive at a set of moment equations
$$\frac{dE[X_1]}{dt}=E[X_2],\qquad \frac{dE[X_2]}{dt}=-\omega_n^2E[X_1]-2\zeta\omega_nE[X_2], \tag{8.37}$$
and
$$\frac{dE[X_1^2]}{dt}=2E[X_1X_2],$$
$$\frac{dE[X_1X_2]}{dt}=E\!\left[X_2^2\right]-\omega_n^2E\!\left[X_1^2\right]-2\zeta\omega_nE[X_1X_2], \tag{8.38}$$
$$\frac{dE[X_2^2]}{dt}=-2\omega_n^2E[X_1X_2]-4\zeta\omega_nE\!\left[X_2^2\right]+\frac{2\pi S_0}{m^2}.$$
Higher order moment equations can be constructed in the same fashion. To solve the moment equations, we need to specify the initial conditions for these
Figure 8.3. The mean responses of the SDOF oscillator subject to the Gaussian white noise. The initial conditions are x1 (0) = 1 and x2 (0) = 1 with probability one. m = 1. k = 1. ζ = 0.1. S0 = 1. Solid line: E[X1 ]. Dash-dot line: E[X2 ].
moments and integrate this set of deterministic ordinary differential equations. For example, assume that the system starts at a point $\mathbf x_0$ with probability one. Then, the initial conditions for the moments at $t=0$ are
$$E[X_1]=x_{10},\quad E[X_2]=x_{20},\quad E\!\left[X_1^2\right]=x_{10}^2,\quad E[X_1X_2]=x_{10}x_{20},\quad E\!\left[X_2^2\right]=x_{20}^2. \tag{8.39}$$
The time histories of the first and second order moments of the system are shown in Figures 8.3 and 8.4. The steady state solutions of the moment equations are obtained by setting the time derivatives of the moments to zero in equations (8.37) and (8.38), leading to
$$E[X_1]=E[X_2]=E[X_1X_2]=0,\qquad
E\!\left[X_1^2\right]=\frac{\pi S_0}{2\zeta\omega_n^3m^2},\qquad
E\!\left[X_2^2\right]=\frac{\pi S_0}{2\zeta\omega_nm^2}. \tag{8.40}$$
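The moment equations (8.37)–(8.38) are linear ODEs and are easy to integrate directly. The sketch below (not from the book, forward Euler with the parameters of Figures 8.3–8.4) reproduces the steady state values of equation (8.40):

```python
import numpy as np

# Parameters: m = k = 1, zeta = 0.1, S0 = 1, x1(0) = x2(0) = 1 w.p. 1.
m, zeta, wn, S0 = 1.0, 0.1, 1.0, 1.0
dt, nsteps = 1e-3, 60_000            # integrate to t = 60

m1, m2 = 1.0, 1.0                    # E[X1], E[X2]
m11, m12, m22 = 1.0, 1.0, 1.0        # E[X1^2], E[X1 X2], E[X2^2]
for _ in range(nsteps):
    dm1 = m2
    dm2 = -wn**2 * m1 - 2 * zeta * wn * m2
    dm11 = 2 * m12
    dm12 = m22 - wn**2 * m11 - 2 * zeta * wn * m12
    dm22 = -2 * wn**2 * m12 - 4 * zeta * wn * m22 + 2 * np.pi * S0 / m**2
    m1 += dt * dm1; m2 += dt * dm2
    m11 += dt * dm11; m12 += dt * dm12; m22 += dt * dm22

# Steady state of equation (8.40): E[X1^2] = pi S0/(2 zeta wn^3 m^2) ~ 15.708
print(m11, m12, m22)
```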
8.2.2. Fokker–Planck–Kolmogorov equation To evaluate the probability distribution and other statistics of the response of the oscillator, we need to have the probability density function. The probability density function of the response is governed by the Fokker–Planck–Kolmogorov (FPK) equation.
Figure 8.4. The second order moment responses of the SDOF oscillator subject to the Gaussian white noise. The initial conditions are x1 (0) = 1 and x2 (0) = 1 with probability one. m = 1. k = 1. ζ = 0.1. S0 = 1. Solid line: E[X12 ]. Dash-dot line: E[X1 X2 ]. Dashed line: E[X22 ].
Recall the FPK equation (6.27) for the linear oscillator in Section 6.2,
$$\frac{\partial}{\partial t}p_X(\mathbf x,t|\mathbf x_0,t_0)
=-\frac{\partial}{\partial x_1}\left[x_2p_X\right]
-\frac{\partial}{\partial x_2}\left[\left(-\omega_n^2x_1-2\zeta\omega_nx_2\right)p_X\right]
+\frac{\pi S_0}{m^2}\frac{\partial^2p_X}{\partial x_2^2}, \tag{8.41}$$
where $D$ has been substituted by $\pi S_0/m^2$. Let the initial condition be given by
$$p_X(\mathbf x,t_0|\mathbf x_0,t_0)=\delta(\mathbf x-\mathbf x_0). \tag{8.42}$$
As has been shown in Section 6.2, $p_X(\mathbf x,t|\mathbf x_0,t_0)$ is a Gaussian probability density function given in equation (6.45) with mean vector and covariance matrix
$$\boldsymbol\mu_X(t)=\begin{bmatrix}(h(t)c+\dot h(t)m)x_0+h(t)m\dot x_0\\[2pt](\dot h(t)c+\ddot h(t)m)x_0+\dot h(t)m\dot x_0\end{bmatrix}, \tag{8.43}$$
and
$$\mathbf C_{XX}(t)=\begin{bmatrix}\kappa_{XX}(t)&\kappa_{X\dot X}(t)\\ \kappa_{X\dot X}(t)&\kappa_{\dot X\dot X}(t)\end{bmatrix}, \tag{8.44}$$
where
$$\kappa_{XX}(t)=\kappa_{XX}(t_1,t_2)\Big|_{t_1,t_2=t},\qquad
\kappa_{X\dot X}(t)=\frac{\partial\kappa_{XX}(t_1,t_2)}{\partial t_2}\bigg|_{t_1,t_2=t},\qquad
\kappa_{\dot X\dot X}(t)=\frac{\partial^2\kappa_{XX}(t_1,t_2)}{\partial t_1\,\partial t_2}\bigg|_{t_1,t_2=t}. \tag{8.45}$$
Figure 8.5. Contours of the probability density function of the linear oscillator subject to the Gaussian white noise excitation. The initial condition at time t = 0 of the system is at (x, ẋ) = (1, 1). T is the undamped period of the oscillator.
κXX (t1 , t2 ) is given in equation (8.14). Examples of the transient solution of the FPK equation at different times are shown in Figure 8.5.
Exercises

EXERCISE 8.1. The linear SDOF system is excited by a weakly stationary random process $W(t)$ with the autocovariance function
$$\kappa_{WW}(\tau)=\pi\alpha S_0e^{-\alpha|\tau|},\qquad \alpha>0,\ S_0>0. \tag{8.46}$$
Determine the autocovariance function $\kappa_{XX}(\tau)$ of the displacement process in steady state.

EXERCISE 8.2. Recall the transient mean square response
$$\sigma_X^2(t)=\frac{\pi S_0}{2\zeta\omega_n^3m^2}\left\{1-e^{-2\zeta\omega_nt}\left[\frac{\omega_n^2}{\omega_d^2}+\frac{\zeta\omega_n}{\omega_d}\sin 2\omega_dt-\frac{\zeta^2\omega_n^2}{\omega_d^2}\cos 2\omega_dt\right]\right\}. \tag{8.47}$$
Prove that when $\zeta\to0$, the mean square response of the undamped linear oscillator is given by
$$\sigma_X^2(t)=\frac{\pi S_0}{2\omega_n^3m^2}\left[2\omega_nt-\sin 2\omega_nt\right],\qquad t>0. \tag{8.48}$$
EXERCISE 8.3. Consider the SDOF system driven by a white noise process such that $\mu_W=0$ and $\kappa_{WW}(t_2-t_1)=2\pi S_0\delta(t_2-t_1)$. Assume that the initial conditions of the system are all zero. Show that
$$\kappa_{X\dot X}(t_1,t_2)=\frac{\pi S_0}{2\zeta\omega_d\omega_nm^2}\left\{-e^{-\zeta\omega_n|t_2-t_1|}\sin\omega_d(t_2-t_1)\right.$$
$$\left.+\,e^{-\zeta\omega_n(t_1+t_2)}\left[\sin\omega_d(t_2-t_1)+\frac{\zeta\omega_n}{\omega_d}\cos\omega_d(t_2-t_1)-\frac{\zeta\omega_n}{\omega_d}\cos\omega_d(t_1+t_2)\right]\right\}. \tag{8.49}$$
EXERCISE 8.4. Find $\kappa_{X\dot X}(t_1,t_2)=\kappa_{X\dot X}(\tau)$ as $t_1,t_2\to\infty$ while $\tau=t_2-t_1$ remains finite. Show that $\kappa_{X\dot X}(0)=0$, which indicates that at any given moment in steady state the displacement and velocity are uncorrelated.

EXERCISE 8.5. Under the same conditions as the above exercise, show that
$$\kappa_{\dot X\dot X}(t_1,t_2)=\frac{\pi S_0}{2\zeta\omega_nm^2}\left\{e^{-\zeta\omega_n|t_2-t_1|}\left[\cos\omega_d(t_2-t_1)-\frac{\zeta\omega_n}{\omega_d}\sin\omega_d|t_2-t_1|\right]\right.$$
$$\left.-\,e^{-\zeta\omega_n(t_1+t_2)}\left[\frac{\omega_n^2}{\omega_d^2}\cos\omega_d(t_2-t_1)-\frac{\zeta\omega_n}{\omega_d}\sin\omega_d(t_1+t_2)-\frac{\zeta^2\omega_n^2}{\omega_d^2}\cos\omega_d(t_1+t_2)\right]\right\}. \tag{8.50}$$
Find $\kappa_{\dot X\dot X}(t_1,t_2)=\kappa_{\dot X\dot X}(\tau)$ as $t_1,t_2\to\infty$ while $\tau=t_2-t_1$ remains finite.

EXERCISE 8.6. By using the results in the above exercise, find the transient mean square response of the velocity $\sigma_{\dot X}^2(t)$.

EXERCISE 8.7. Recall that
$$\kappa_{\dot X\dot X}(t_1,t_2)=\frac{\partial^2\kappa_{XX}(t_1,t_2)}{\partial t_1\,\partial t_2}. \tag{8.51}$$
Show that in steady state, when
$$\kappa_{XX}(t_1,t_2)=\kappa_{XX}(t_2-t_1), \tag{8.52}$$
the power spectral density of the velocity can be written as
$$S_{\dot X\dot X}(\omega)=\omega^2|H(\omega)|^2S_{WW}(\omega). \tag{8.53}$$
Similarly, show that for the acceleration,
$$S_{\ddot X\ddot X}(\omega)=\omega^4|H(\omega)|^2S_{WW}(\omega). \tag{8.54}$$
E XERCISE 8.8. Assume that the initial conditions of all the moments are zero. Find the solution of the second order moments by solving equation (8.38). E XERCISE 8.9. Plot the evolution of pX (x, t|x0 , t0 ) of the linear oscillator in the state space for t = 0, π/ωn , and 2π/ωn when the initial condition is δ(x − x0 ). E XERCISE 8.10. Consider a first order stochastic differential equation subject to a band-limited Gaussian white noise excitation, X˙ + aX = Wb (t),
a > 0, X(0) = 0,
where the power spectral density of Wb (t) is given by
S0 , |ω| ωb , SWb (ω) = 0, |ω| > ωb .
(8.55)
(8.56)
Show that the power spectral density of the response is given by SWb (ω) , a 2 + ω2 and the correlation function is ωb eiωτ dω. RXX (τ ) = S0 a 2 + ω2 SXX (ω) =
(8.57)
(8.58)
−ωb
E XERCISE 8.11. In Exercise 8.10, when the excitation is a true Gaussian white noise such that ωb → ∞, show that RXX (τ ) =
πS0 −a|τ | . e a
(8.59)
E XERCISE 8.12. Let X(t) be a weakly stationary stochastic process. The Hilbert ˆ transform X(t) of X(t) is defined by the stochastic integral 1 ˆ X(t) = π
∞ −∞
X(τ ) dτ. t −τ
(8.60)
Chapter 8. Random Vibration of SDOF Systems
142
ˆ X(t) can also be viewed as the response of a dynamic system with impulse response function 1 . (8.61) πt Determine the frequency response function of the integral transformation. Next, determine the mean, power spectral density function and autocovariance function ˆ ˆ of the process X(t). Finally, show that X(t) and X(t) are uncorrelated. Hint: you may find the following integral useful. ⎧π ∞ ω > 0, ⎨ 2, 1 ω = 0, sin(ωu) du = 0, (8.62) ⎩ − π , ω < 0. u h(t) =
0
2
Chapter 9
Random Vibration of MDOF Discrete Systems
In this chapter, we study the random vibration problem of multi-degree-offreedom (MDOF) systems. The mathematics of the MDOF system analysis is a generalization of that for the SDOF system. Furthermore, it provides an important foundation for the treatment of continuous structural systems (Elishakoff, 1983; Lin, 1967; Soong and Grigoriu, 1993; Wirsching et al., 1995; Sólnes, 1997).
9.1. Lagrange’s equation A linear time-invariant n degree-of-freedom system is determined by the mass matrix M, the damping matrix C and the stiffness matrix K. The mass matrix M is symmetric and positive definite. In general, the stiffness matrix K is symmetric and semi-positive definite. The damping matrix C is positive definite, and need not be symmetric. When C is not symmetric, only the symmetric part 12 (C + CT ) contributes to the energy dissipation. We summarize the properties of these matrices here xT Mx > 0, M = MT , xT Kx 0, K = KT , T
x Cx > 0,
(9.1)
∀x ∈ R , x = 0. n
The symmetry property of K is a consequence of the Maxwell–Betti reciprocal theorem in elasticity. Semi-positive definiteness of K implies that the rigid body type motion without elastic deformation is possible. When the system is excited by an external dynamic excitation in the form of an n-dimensional stochastic vector process W(t), the displacements caused by the dynamic load become a stochastic vector process X(t). To make the discussion brief, we assume that X represents a set of independent generalized coordinates, and W includes all the forces acting along these generalized coordinate directions. The total kinetic and potential energies of the system are given by T =
1 ˙T ˙ X MX, 2
V =
1 T X KX. 2 143
(9.2)
Chapter 9. Random Vibration of MDOF Discrete Systems
144
The virtual work done by the damping force and the external loading is given by ˙ T δX + WT δX. δW = −(CX)
(9.3)
The Hamiltonian principle states t2 (δT − δV + δW ) dt = 0,
(9.4)
t1
where t1 and t2 are two arbitrary time instances. After going through the steps of variations, we arrive at Lagrange’s equations as ∂V ∂W d ∂T ∂T + = , k = 1, . . . , n. (9.5) − dt ∂ X˙ k ∂Xk ∂Xk ∂Xk This leads to a stochastic matrix differential equation ¨ + CX ˙ + KX = W(t), t > 0, MX ˙ ˙ 0. X(0) = X0 , X(0) =X
(9.6)
˙ 0 are n-dimensional random vectors (Meirovitch, The initial conditions X0 and X 1967). E XAMPLE 9.1. Consider a two-degree of freedom dynamic system shown in Figure 9.1. The kinetic energy of the system is given by 1 ˙ T m1 ˙ T = X (9.7) X, m2 2 ˙ = [X˙ 1 , X˙ 2 ]T . The potential energy is given by where X 1 k 1 + k2 −k2 X. V = XT −k2 k 2 + k3 2
(9.8)
The virtual work is given by ˙ T δX + WT δX, δW = −(CX)
Figure 9.1.
A two-degree-of-freedom system subject to random excitations.
(9.9)
9.1. Lagrange’s equation
where
−c2 , c2 + c3
c 1 + c2 C= −c2
145
W1 (t) W= . W2 (t)
The equations of motion are given by −c2 m1 ¨ + c1 + c 2 ˙ X X m2 −c2 c 2 + c3 k 1 + k2 W1 (t) −k2 + X= . W2 (t) −k2 k 2 + k3
(9.10)
(9.11)
9.1.1. Formal solution The solution of equation (9.6) can be formally written as t X(t) = XI (t) +
H(t − τ )W(τ ) dτ,
(9.12)
0
where
˙ ˙ 0, XI (t) = H(t)C + H(t)M X0 + H(t)MX
(9.13)
H(t) is the impulse response matrix of the system. As is the case with SDOF systems, H(t) is related to the frequency response matrix as 1 H(t) = 2π
∞ H(ω)eiωt dω,
(9.14)
−∞
where
−1 H(ω) = K − ω2 M + iωC .
(9.15)
The j th column hj (t) of H(t) is determined as the response of equation (9.6) to the excitation ej δ(t) with zero initial conditions, and the j th column hj (ω) of H(ω) is determined as the stationary response of equation (9.6) to the excitation ej eiωt . ej is defined as ej = [ 0
···
0 1 0 ···
0 ]T .
(9.16)
j th component
Note that the frequency response matrix H(ω) in equation (9.15) involves matrix inversion. When the degree of freedom is high and when the response is to be evaluated in a wide range of frequencies, the computations for the matrix H(ω) become prohibitive. In Section 9.2, we shall present the method of modal solution to compute the matrix H(ω).
Chapter 9. Random Vibration of MDOF Discrete Systems
146
9.2. Modal solutions of MDOF systems 9.2.1. Eigenvalue analysis First, we consider undamped free response of equation (9.6) in the form X(t) = x cos(ωt − φ),
(9.17)
where φ is an arbitrary phase. The amplitude vector x and circular frequency ω are obtained as solutions to the linear homogeneous equations K − ω2 M x = 0. (9.18) Nontrivial solutions x = 0 exist when the transcendental equation is satisfied det K − ω2 M = 0. (9.19) The roots ωn of equation (9.19) are the resonant frequencies of the system, and are often ordered in such a way 0 ω1 ω2 · · · ωn . For each of the frequency, we have an eigen vector xn known as an eigenmode. The eigenmodes are orthogonal with respect to the mass matrix. After normalizing the eigenmodes, we have the following orthogonality properties xTj Mxk = δj k ,
xTj Kxk = ωj2 δj k .
(9.20)
Since the eigenmodes form a linearly independent basis in the space R n , the solution vector X(t) ∈ R n to equation (9.6) can be expressed in terms of the following modal expansion X(t) =
n
Qj (t)xj = Q(t),
(9.21)
j =1
where = [x1 , x2 , . . . , xn ],
T Q(t) = Q1 (t), Q2 (t), . . . , Qn (t) .
(9.22)
Qj (t) are called the modal coordinates. is the modal matrix. Equation (9.21) defines a modal transformation from the physical coordinates X to the modal coordinates Q. Equation (9.20) can be rewritten in matrix notation as T M = I,
T K = ,
where I is the n-dimensional identity matrix and ⎡ 2 ⎤ ω1 ⎢ ⎥ ω22 ⎢ ⎥ ⎥. =⎢ ⎢ ⎥ .. ⎣ ⎦ . ωn2
(9.23)
(9.24)
9.2. Modal solutions of MDOF systems
147
With the modal transformation, equation (9.6) is converted to ¨ + T CQ ˙ + Q = T W(t) ≡ F(t), Q ˙ ˙ 0, Q(0) = T MX0 , Q(0) = T MX
(9.25)
where F(t) = T W(t). The elements of F(t) are the modal force processes. 9.2.2. Classical and non-classical damping In general, the damping term T C in equation (9.25) is not diagonal while the inertial and stiffness terms are decoupled because the matrices I and are diagonal. When T C is diagonal, we say that the damping is classical. Otherwise, it is non-classical. Under certain conditions, the damping term can be diagonal. 1. When C = αM + βK where α and β are real numbers, T C = αI + β is diagonal. In this case, C is known as the Rayleigh damping. 2. When CM−1 K = KM−1 C, T C is also diagonal. This result is due to Caughey and O’Kelly (Caughey and O’Kelly, 1964). E XAMPLE 9.2. Continue with Example 9.1. Let X(t) = x cos(ωt −φ). The eigen frequencies of the system are determined from det K − ω2 M k1 + k2 − m1 ω2 −k2 = det −k2 k2 + k3 − m2 ω2
= m1 m2 ω4 − m1 (k1 + k2 ) + m2 (k2 + k3 ) ω2 + k 1 k 2 + k2 k 3 + k1 k 3 .
(9.26)
Consider a special case when m1 = m2 = m, k1 = k2 = k3 = k and c1 = c2 = c3 = c. Let ωn2 = k/m. Then, we have ω4 − 4ωn2 ω2 + 3ωn4 = 0,
(9.27)
and the two roots ω12 = ωn2 , and ω22 = 3ωn2 . The associated eigen vectors are obtained as 1 1 x1 = (9.28) , x2 = . 1 −1 Applying the condition xTi Mxi = 1, we obtain the normalized eigen vectors as 1 1 1 1 . , x2 = √ x1 = √ (9.29) 2m −1 2m 1
Chapter 9. Random Vibration of MDOF Discrete Systems
148
Hence,
1 1 1 2 1 , = ωn = √ (9.30) . 3 2m 1 −1 It is straightforward to check that equation (9.23) holds. Furthermore, we have 1 , T C = 2ζ ωn (9.31) 3
where mc = 2ζ ωn . Hence, the system has classical damping. Consider the modal transformation X(t) = Q(t). The modal equations become 1 1 0 ˙ 2 1 ¨ Q + 2ζ ωn Q + ωn Q 1 0 3 3 1 W1 (t) + W2 (t) F1 (t) ≡ . =√ (9.32) F2 (t) 2m W1 (t) − W2 (t) The initial conditions for Q(t) are given by m X10 + X20 , Q(0) = T MX0 = (9.33) 2 X10 − X20 m X˙ 10 + X˙ 20 T ˙ ˙ Q(0) = MX0 = . 2 X˙ 10 − X˙ 20 9.2.3. Solutions with classical damping For classical damping, we write ⎡ 2ζ ω ⎢ ⎢ T C = ⎢ ⎣
⎤
1 1
2ζ2 ω2 ..
.
⎥ ⎥ ⎥, ⎦
(9.34)
2ζn ωn where ζj is the j th modal damping ratio. Equation (9.25) becomes ¨ j (t) + 2ζj ωj Q ˙ j (t) + ωj2 Qj (t) = Fj (t), Q
1 j n,
together with initial conditions ˙0 . Qj (0) = T MX0 j , Q˙j (0) = T MX j
(9.35)
(9.36)
The solution to equation (9.35) for each j is the same as that of the SDOF system. t Qj (t) = QI,j (t) +
hq,j (t − τ )Fj (τ ) dτ, 0
(9.37)
9.2. Modal solutions of MDOF systems
where
hq,j (t) =
e
−ζj ωj t
ωd,j
sin(ωd,j t),
0,
t 0, t < 0,
149
(9.38)
QI,j (t) = e−ζj ωj t ˙ j (0) + ζj ωj Qj (0) Q × Qj (0) cos(ωd,j t) + sin(ωd,j t) . (9.39) ωd,j $ hq,j (t) is the j th modal impulse response function. ωd,j = ωj 1 − ζj2 is the j th damped circular frequency. Introduce a short-hand for the convolution integral t hq,j (t) ∗ Fj (t) =
hq,j (t − τ )Fj (τ ) dτ,
(9.40)
0
and the following n × n diagonal matrices ζj ωj −ζj ωj t Aq (t) = diag e sin(ωd,j t) cos(ωd,j t) + ωd,j
= diag h˙ q,j (t) + 2ζj ωj hq,j (t) ,
1 Bq (t) = diag e−ζj ωj t sin(ωd,j t) = diag hq,j (t) , ωd,j
Hq (t) = diag hq,j (t) .
(9.41)
We rewrite equation (9.37) in vector form as ˙ Q(t) = Aq (t)Q(0) + Bq (t)Q(0) + Hq (t) ∗ F(t) T T ˙ 0 + Hq (t) ∗ T W(t). = Aq (t) MX0 + Bq (t) MX
(9.42)
By using equation (9.21), we obtain the solution in the original physical coordinates ˙0 X(t) = Aq (t)T MX0 + Bq (t)T MX + Hq (t) ∗ T W(t).
(9.43)
Comparing this solution with equation (9.12), we have ˙ 0, XI (t) = Aq (t)T MX0 + Bq (t)T MX
(9.44)
H(t) = Hq (t)T .
(9.45)
and
Chapter 9. Random Vibration of MDOF Discrete Systems
150
Therefore, we have ˙ 0 + H(t) ∗ W(t). X(t) = Aq (t)T MX0 + Bq (t)T MX
(9.46)
So far, we have essentially rederived the solution in equation (9.12) by using the modal transformation for the case of classical damping. The solution in equation (9.43) involves no matrix inversion and is thus computationally feasible for systems with many degrees of freedom. Note that the impulse response matrix of the modal system Hq (t) is diagonal, while that of the physical system H(t) is in general not diagonal. E XAMPLE 9.3. Continue with Example 9.2. The modal impulse matrix is given by hq,1 (t) Hq (t) = (9.47) , hq,2 (t) where
hq,1 (t) = hq,2 (t) = and
e−ζ ωn t ωd,1
sin(ωd,1 t),
0, e−3ζ ωn t ωd,2
t 0,
(9.48)
t < 0, sin(ωd,2 t),
0,
$ ωd,1 = ωn 1 − ζ 2 ,
t 0, t < 0,
$ ωd,2 = ωn 3 1 − 3ζ 2 .
(9.49)
The impulse response matrix of the original system is then given by H(t) = Hq (t)T 1 hq,1 (t) + hq,2 (t) = 2m hq,1 (t) − hq,2 (t)
hq,1 (t) − hq,2 (t) hq,1 (t) + hq,2 (t)
.
(9.50)
9.2.4. Solutions with non-classical damping There are various methods to obtain approximate solutions of MDOF systems with non-classical damping. Here, we discuss several common methods. Approximate methods There are several approximate methods to obtain the solutions of the modal equations. We discuss two here. The first one is simply to drop the off-diagonal
9.3. Response statistics
151
terms of the matrix T C when they are much smaller than the diagonal elements. There have been many studies on the error analysis of such approximations (Hwang and Ma, 1993; Park et al., 1992; Shahruz and Langari, 1992; Shahruz and Packard, 1994; Shahruz and Srimatsya, 1997). Studies reported in these references indicate that even when the off-diagonal terms of the matrix T C are much smaller than the diagonal elements, the error in the approximate solution can still be large in some cases. The second method proposes an iterative solution (Udwadia, 1993; Udwadia and Esfandiari, 1990). When a small parameter is introduced for the lightly damped system, a perturbation solution can be formulated (Shahruz and Packard, 1993). The perturbation solution can be obtained in the frequency domain also (Peres-Da-Silva et al., 1995; Perotti, 1994). The perturbation solution is, however, obtained iteratively. The iterative method solves the modal equations with the diagonal terms of the matrix T C, and treats the off-diagonal terms as excitations. Under certain conditions as demonstrated in the cited references, the iteration converges and accurate responses can be computed. State space approach ˙ T , we can then reformulate If we define a 2n × 1 state vector as Y(t) = [X, X] the problem in the state space. An eigen value problem and its adjoint can be solved leading to two sets of orthogonal eigen vectors. In this higher dimensional state space, the issue of decoupling the damping matrix is essentially by-passed. Details of this solution method are presented in Section 9.4.2.
9.3. Response statistics The mean function of the modal solution in equation (9.35) is given in component form as μQj (t) = μQI,j (t) + hq,j (t) ∗ μFj (t).
(9.51)
In vector form, we have μQ (t) = μQI (t) + Hq (t) ∗ μF (t).
(9.52)
Hence, the mean function of the original physical process is μX (t) = μQ (t) = μQI (t) + Hq (t) ∗ μF (t) = μXI (t) + H(t) ∗ μW (t). where μXI (t) = Aq (t)T MμX0 + Bq (t)T MμX˙ 0 .
(9.53)
Chapter 9. Random Vibration of MDOF Discrete Systems
152
The autocovariance of X(t) is given by
T κXX (t1 , t2 ) = E X(t1 ) − μX (t1 ) X(t2 ) − μX (t2 ) = κQQ (t1 , t2 )T ,
(9.54)
where κQQ (t1 , t2 ) is the autocovariance of Q(t) with a component defined as κQi1 Qi2 (t1 , t2 ) = κQI,i1 QI,i2 (t1 , t2 ) t1 t2 +
hq,i1 (t1 − τ1 )hq,i2 (t2 − τ2 )κFi1 Fi2 (τ1 , τ2 ) dτ1 dτ2 . 0
(9.55)
0
9.3.1. Stationary random excitation Assume now that the excitation W(t) is a stationary random process such that its mean is constant and κWW (τ1 , τ2 ) = κWW (τ1 − τ2 ). Since μXI (t) → 0 as t → ∞, we have ∞ μX = lim μX (t) =
H(τ ) dτ μW .
t→∞
(9.56)
0
Note that κQI QI (t1 , t2 ) → 0 as t1 , t2 → ∞. Consider the limit of the autocovariance function κQQ (t1 , t2 ) such that t1 , t2 → ∞ and τ = t2 − t1 is finite. We have κQQ (t1 , t2 ) = κQQ (τ ),
(9.57)
κXX (t1 , t2 ) = κQQ (τ )T = κXX (τ ).
(9.58)
and
9.3.2. Spectral properties Continue to assume that W(t) is stationary. Since Q(t) is also stationary in steady state as t → ∞, we can compute its power spectral density function as 1 SQQ (ω) = 2π =
1 2π
∞
κQQ (τ )e−iωτ dτ
−∞ ∞ ∞ ∞
Hq (τ1 )κFF (τ − τ1 + τ2 )
−∞ −∞ −∞ × HTq (τ2 )e−iωτ
dτ1 dτ2 dτ
9.3. Response statistics
153
= H∗q (ω)SFF (ω)HTq (ω) = H∗q (ω)T SWW (ω)HTq (ω),
(9.59)
where Hq (ω) is the frequency response matrix given by ∞ Hq (ω) =
Hq (τ )eiωτ dτ.
(9.60)
−∞
The impulse response matrix Hq (t) is given in equation (9.41). Hence, we have SXX (ω) = SQQ (ω)T = H∗ (ω)SWW (ω)HT (ω),
(9.61)
where H(ω) is the frequency response matrix based on the impulse response matrix H(t) in equation (9.45). E XAMPLE 9.4. Continue with Example 9.2. The modal frequency responses are given by hq,1 (ω) , Hq (ω) = (9.62) hq,2 (ω) where hq,1 (ω) =
1 , ωn2 − ω2 + 2j ζ ωωn
hq,2 (ω) =
1 . 3ωn2 − ω2 + 6j ζ ωωn
(9.63)
The frequency response of the original system is given by H(ω) = Hq (ω)T 1 hq,1 (ω) + hq,2 (ω) = 2m hq,1 (ω) − hq,2 (ω)
hq,1 (ω) − hq,2 (ω) . hq,1 (ω) + hq,2 (ω)
(9.64)
9.3.3. Cross-correlation and coherence function In Section 4.1.3, we have discussed the concept of cross-correlation and coherence function. Recall that when the stationary vector process X(t) has zero mean, its cross-correlation is the same as its cross-covariance function. That is, RXj Xk (τ ) = κXj Xk (τ ). When j = k, κXj Xk (τ ) are the off-diagonal elements of the covariance matrix κXX (τ ). Likewise, the cross-power spectral density functions are the off-diagonal elements of the matrix SXX (ω). The coherence function between two response
Chapter 9. Random Vibration of MDOF Discrete Systems
154
processes is defined as CoXj Xk (ω) =
|SXj Xk (ω)|2 SXj Xj (ω)SXk Xk (ω)
.
(9.65)
The coherence function is the frequency domain counterpart of the crosscorrelation coefficient function in equation (3.20). When two processes are uncorrelated such that RXj Xk (τ ) = 0, we have SXj Xk (ω) = 0. Hence, the coherence function between two uncorrelated processes is zero. On the other hand, when two processes are linearly related, the coherence function is unity.
9.4. State space representation and Itô calculus Equation (9.6) is often represented in the state space as in the case of SDOF systems. Define a 2n × 1 state vector as X Y(t) = ˙ , (9.66) X Equation (9.6) then reads ˙ Y(t) = AY(t) + GW(t),
(9.67)
where the initial conditions of the system are given by X0 Y(0) = Y0 = ˙ , X0 the state matrix 0 I A= , −M−1 K −M−1 C and the disturbance influence matrix 0 G= . M−1
(9.68)
(9.69)
(9.70)
9.4.1. Formal solution in state space The formal solution of Y(t) can be written as t Y(t) = e Y0 + At
eA(t−τ ) GW(τ ) dτ,
(9.71)
0 2
k
where eAt = I + At + t2! A2 + · · · + tk! Ak + · · ·. It can be computed with the help of spectral decomposition of the state matrix A. This is illustrated in the following modal solution.
9.4. State space representation and Itô calculus
155
9.4.2. Modal solution Consider the free response of equation (9.67) as Y(t) = ueλt . Then, we come to an eigen value problem to determine the modal vector u and the eigen value λ. Au = λu.
(9.72)
Let λj and uj (j = 1, 2, . . . , 2n) be the solution of equation (9.72). Let λj and vj (j = 1, 2, . . . , 2n) be the solution of the adjoint eigenvalue problem AT v = λv.
(9.73)
Assume that uj and vj are normalized such that the following orthogonality conditions hold vTj uk = δj k ,
vTj Auk = λj δj k .
(9.74)
Define two eigen matrices as U = [u1 , u2 , . . . , u2n ], and a diagonal matrix ⎡ λ1 ⎢ λ2 ⎢ =⎢ .. ⎣ .
V = [v1 , v2 , . . . , v2n ],
(9.75)
⎤ ⎥ ⎥ ⎥. ⎦
(9.76)
λ2n Introduce a modal transformation as Y(t) = UZ(t),
(9.77)
where Z(t) is a 2n × 1 vector, known as the modal vector. Then, the state equation (9.67) is converted to ˙ Z(t) = Z(t) + VT GW(t),
(9.78)
subject to the initial condition Z(0) = Z0 = VT Y0 .
(9.79)
Equation (9.78) is decoupled and can be solved one at a time as t Zj (t) = Z0j e
λj t
+ 0
eλj (t−τ ) VT GW(τ ) j dτ.
(9.80)
Chapter 9. Random Vibration of MDOF Discrete Systems
156
In vector notation, we have t Z(t) = et Z0 +
e(t−τ ) VT GW(τ ) dτ,
(9.81)
0
where
⎡
⎢ ⎢ et = ⎢ ⎣
⎤
eλ1 t
⎥ ⎥ ⎥. ⎦
eλ2 t ..
.
(9.82)
eλ2n t Therefore, t t
Ue(t−τ ) VT GW(τ ) dτ.
Y(t) = Ue V Y0 + T
(9.83)
0
By comparing equations (9.71) and (9.83), we have the spectral decomposition of the matrix eAt as eAt = Uet VT .
(9.84)
The computation of the exponential matrix eAt is thus substantially simplified with the help of spectral decomposition. Define a matrix as (t) = Uet VT .
(9.85)
(t) is known as the fundamental matrix of the system defined by equation (9.67). It is straightforward to show that the fundamental matrix satisfies the following equation and initial condition d(t) = A(t), dt
(0) = I.
(9.86)
The solution in equation (9.83) can be written in a compact form as t Y(t) = (t)Y0 +
(t − τ )GW(τ ) dτ.
(9.87)
0
In principle, we can compute the response statistics by using the explicit solution given by equation (9.87). In the following, we use Itô calculus to derive a set of differential equations governing the evolution of response moments.
9.4. State space representation and Itô calculus
157
9.4.3. Solutions with Itô calculus Assume that W(t) is a formal derivative of an m × 1 vector of independent unit Brownian processes B(t) and
2 E W(t) = 0, E W(t)WT (t + τ ) = σW (9.88) δ(τ ), 2 is an m × m positive definite matrix representing the strength of the where σW random excitation W(t). Equation (9.67) can be written as Itô equations
dY(t) = AY(t) dt + GσW dB(t),
(9.89)
where
E dB(t) = 0,
I dt, T E dB(t)dB (s) = 0,
(9.90) t = s, t = s,
or
E B(t) = 0,
κBB (t, s) = E B(t)BT (s) = I min(t, s).
(9.91)
Note that when the components of the random excitation W(t) are independent, 2 is diagonal. Because σ 2 fully describes the correlation among the compoσW W nents of the random excitation, the Brownian motions in B(t) can be treated as being independent without the loss of generality. 9.4.4. Moment equations Applying Itô’s lemma, we can derive moment equations of various orders for the state vector Y(t). We present the first and second order moment equations here. The mean of the response can be obtained as d (9.92) μY (t) = AμY (t), μY (0) = E[Y0 ]. dt The solution of the mean function can be readily obtained by using the spectral decomposition of the matrix A as μY (t) = (t)E[Y0 ].
(9.93)
The equation for the 2nd order joint moments in the correlation matrix RYY (t) = E[Y(t)YT (t)] can be derived as d 2 T G , RYY (t) = ARYY (t) + RYY (t)AT + GσW dt
(9.94)
158
Chapter 9. Random Vibration of MDOF Discrete Systems
RYY (0) = E Y0 YT0 . This equation is known as the Lyapunov equation. By definition, we have RYY (t) = RTYY (t). The Lyapunov equation only contains n(2n + 1) independent linear differential equations. The covariance matrix of Y(t) is given by CYY (t) = RYY (t) − μY (t)μTY (t). From equations (9.92) and (9.95), it follows that d d CYY (t) = RYY (t) − μ˙ Y (t)μTY (t) − μY (t)μ˙ TY (t) dt dt = A RYY (t) − μY (t)μTY (t) 2 T G + RYY (t) − μY (t)μTY (t) AT + GσW 2 T = ACYY (t) + CYY (t)AT + GσW G ,
t > 0,
(9.95)
E[Y0 YT0 ] − E[Y0 ]E[YT0 ].
where CYY (t) satisfies the initial condition CYY (0) = The spectral decomposition method can also be used to solve for the solutions of the correlation or covariance matrix if we take the independent elements of RYY (t) or CYY (t) to form an n(2n + 1) × 1 vector. 9.4.5. Stationary response of moment equations In steady state as t → ∞, μY and CYY become time-independent such that μ˙ Y = ˙ YY = 0. Equations (9.92) and (9.95) lead to 0 and C ACYY + CYY A
T
μY 2 T + GσB G
= 0,
(9.96)
= 0.
E XAMPLE 9.5. Consider a two-dimensional problem. Assume that the vector 2 = I. W(t) consists of two unit independent white noise processes such that σW The system matrices A and G and the initial condition Y0 are given as 2 −8 2 −1 1 A= , G= , Y0 = (9.97) . 5 −10 1 −1 1 Note that Y0 is deterministic. The differential equations for the mean functions μYj (t) and the covariance κYj Yk (t, t) then become d μY1 2 −8 μY1 μY1 (0) 1 (9.98) = , = . 5 −10 μY2 (0) μY2 1 dt μY2 2 2 σY1 κY1 Y2 σY1 κY1 Y2 d 2 −8 = 5 −10 dt κY1 Y2 σY2 κY1 Y2 σY22 2 2 σY1 κY1 Y2 2 5 2 −1 1 2 1 + + −8 −10 1 −1 1 −1 −1 κY1 Y2 σY22
9.5. Filtered white noise excitation
=
4σY21 − 16κY1 Y2
5σY21 − 8κY1 Y2 − 8σY22
5σY21 − 8κY1 Y2 − 8σY22
10κY1 Y2 − 20σY22
159
5 3 + . 3 2 (9.99)
Three nontrivial differential equations for σY21 , κY1 Y2 and σY22 may be extracted from the above equation as ⎡ 2 ⎤ ⎡ ⎤⎡ 2 ⎤ ⎡ ⎤ σY1 4 −16 0 σY1 5 d ⎣ (9.100) κY1 Y2 ⎦ = ⎣ 5 −8 −8 ⎦ ⎣ κY1 Y2 ⎦ + ⎣ 3 ⎦ , dt 2 2 0 10 −20 2 σY2 σY2 with the initial conditions ⎡ ⎤ ⎡ ⎤ 0 σY21 (0) ⎣ κY1 Y2 (0, 0) ⎦ = ⎣ 0 ⎦ . 0 σY22 (0)
(9.101)
Equations (9.98) and (9.100) are 1st order linear differential equations with constant coefficients. By using the method of exponential function, we obtain the solutions as follows μY1 (t) 1 1 −4t (9.102) = e cos(2t) − 1 e−4t sin(2t), μY2 (t) 1 2 ⎞ ⎛ ⎡ 248 ⎤ ⎡ 48 ⎤ −8t ⎜ ⎣ 162 ⎦ − ⎣ 12 ⎦ e cos(4t) ⎟ ⎡ 2 ⎤ ⎟ ⎜ σY1 (t) ⎟ ⎜ 1 −12 ⎟. ⎜ ⎡ 113⎤ ⎣ κY1 Y2 (t) ⎦ = ⎡ ⎤ (9.103) ⎟ ⎜ 96 200 320 ⎜ ⎟ 2 σY2 (t) ⎝ − ⎣ 84 ⎦ e−8t sin(4t) − ⎣ 150 ⎦ e−8t ⎠ 66 125 The mean and covariance functions approach the following stationary values as t →∞ ⎡ ⎤ ⎡ 2 ⎤ 248 σY1 (∞) 1 0 μY1 (∞) ⎣ 162 ⎦ . ⎣ κY1 Y2 (∞) ⎦ = = , (9.104) μY2 (∞) 0 320 113 σY22 (∞) Note that the convergence rate of the mean and covariance functions is controlled by e−4t and e−8t , respectively. The covariances approach the steady state significantly faster than the means.
9.5. Filtered white noise excitation The white noise process is a mathematical idealization and is physically impossible. In reality, structures are often subject to non-white or colored random excitations. The Itô calculus cannot be directly applied to such systems. However,
160
Chapter 9. Random Vibration of MDOF Discrete Systems
many random excitations can be modeled as the output of a dynamic system subject to a Gaussian white noise excitation. In other words, random excitations in real engineering systems can be viewed as the filtered white noise process. In the following, we discuss a well-known example of the earthquake model known as the Kanai–Tajimi filter. E XAMPLE 9.6. Consider a linear elastic one-storey frame built on a flexible sediment layer on top of the bedrock as shown in Figure 9.2. The columns and beams are assumed to be inextensible in the axial direction. The equivalent stiffness of all columns in the horizontal direction is k. The mass of the columns is neglected. The mass of the storey beam is lumped to be m. The horizontal displacement Y (t) of the frame is relative to the ground surface. The dissipation in the columns is modelled by a linear viscous damper with a coefficient c. The displacement of the ground surface relative to the bedrock is U (t), and the absolute horizontal displacement of the bedrock is Xb (t) (Nielsen, 2000). The equations of motion for the system are mY¨ + cY˙ + kY = −m 2ζs ωs U˙ + ωs2 U , (9.105) 2 ¨ ˙ ¨ U + 2ζs ωs U + ωs U = −Xb (t). The load −m(X¨ b (t) + U¨ (t)) = −m(2ζs ωs U˙ + ωs2 U ) on the storey beam in equation (9.105) is known as the Kanai–Tajimi filter. In many applications −X¨ b (t) is modeled as a Gaussian white noise. The load on the storey beam −m(2ζs ωs U˙ + ωs2 U ) is non-white. In the higher dimensional space with the state vector X(t) = [Y, Y˙ , U, U˙ ]T , the Itô calculus is applicable and the response vector X(t) is a Markov process.
Figure 9.2. a) Linear elastic one-storey frame. b) Internal forces applied to free storey mass. c) Shear model for sediment layer.
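The four-dimensional state-space form of Example 9.6 can be integrated numerically. The sketch below is a minimal illustration, not the author's procedure; the parameter values m, c, k, ζ_s, ω_s and the noise intensity S_0 are assumed, and a simple Euler–Maruyama step is used:

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not from the text)
m, c, k = 1.0e4, 2.0e3, 4.0e5   # storey mass (kg), damping, stiffness
zs, ws = 0.6, 15.0              # sediment damping ratio and frequency (rad/s)
S0 = 0.05                       # spectral density of the white noise -Xb''(t)

dt, n = 1.0e-3, 20000
rng = np.random.default_rng(0)
x = np.zeros(4)                 # state vector [Y, Ydot, U, Udot]
Y = np.empty(n)
for i in range(n):
    W = rng.normal(0.0, np.sqrt(2.0*np.pi*S0/dt))      # discretized white noise
    dY2 = (-c*x[1] - k*x[0])/m - (2*zs*ws*x[3] + ws**2*x[2])
    dU2 = -2*zs*ws*x[3] - ws**2*x[2] + W               # U'' with -Xb'' = W(t)
    x = x + dt*np.array([x[1], dY2, x[3], dU2])
    Y[i] = x[0]
print("sample std of Y over the second half:", Y[n//2:].std())
```

Because the excitation reaching the frame has passed through the sediment filter, its spectrum is peaked near ω_s rather than flat, which is the point of the Kanai–Tajimi model.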
Exercises

EXERCISE 9.1. Explain the significance of positive semi-definiteness of the stiffness matrix K.

EXERCISE 9.2. Consider the stochastic differential equations

Ÿ + 2ζ_0 ω_0 Ẏ + ω_0² Y = 2ζ_s ω_s U̇ + ω_s² U,
Ü + 2ζ_s ω_s U̇ + ω_s² U = W(t), (9.106)

where ζ_0 and ζ_s are damping ratios, ω_0 and ω_s are natural frequencies, and W(t) is a white noise with power spectral density S_0. Show that the frequency response function of the system is

Y(ω) = H(ω) W(ω), H(ω) = P(iω)/Q(iω), (9.107)

where

P(z) = −2ζ_s ω_s z − ω_s², Q(z) = (z² + 2ζ_0 ω_0 z + ω_0²)(z² + 2ζ_s ω_s z + ω_s²). (9.108)

Determine the autocovariance function κ_YY(τ) of the output process Y(t) and the autocovariance function κ_ẎẎ(τ) of the velocity process Ẏ(t), and plot κ_YY(τ) for ω_0 = 1, ζ_0 = 0.01, ω_s = 3, ζ_s = 0.2 and

S_0 = ζ_0 ω_0³ [(ω_0² − ω_s²)² + 4ζ_s² ω_s² ω_0²] / [π (ζ_s² ω_0² ω_s² + ω_s⁴)]. (9.109)

EXERCISE 9.3. In Exercise 9.2, assume that the circular frequencies ω_0 and ω_s are well separated. Determine the variance σ_Y² of the response process by means of the equivalent white noise approximation for the case where ζ_0 ≪ 1 and ζ_s ≈ 0.5, and for the case where ζ_0 ≪ 1 and ζ_s ≪ 1.

EXERCISE 9.4. The linear SDOF system with a spring, mass and damper is excited by a zero-mean, weakly stationary dynamic loading process V(t) with the autocovariance function

κ_VV(τ) = (πS_0/a) e^{−a|τ|}, a > 0, S_0 > 0. (9.110)

Determine the stationary autocovariance function κ_XX(τ) of the displacement process.

EXERCISE 9.5. Reconsider Exercise 9.4 by treating V(t) as a filtered white noise:

Ẍ + 2ζ_0 ω_0 Ẋ + ω_0² X = V(t), (9.111)
Figure 9.3. Half car model traveling on a rough road at a constant speed v.
V̇(t) + aV(t) = W(t),

where W(t) is a Gaussian white noise with power spectral density S_0. Apply the Itô calculus to obtain the moment equations for the system response.

EXERCISE 9.6. Derive the equations of motion for a half car model traveling on a rough road as shown in Figure 9.3. The surface roughness under the wheels is modeled as a zero-mean process with the correlation function

R_YY(τ) = (D/α) e^{−α|τ|}, α > 0, D > 0. (9.112)

EXERCISE 9.7. In Exercise 9.6, assume that (a + b)/v ≫ 1/α so that Y(t) and Y(t + (a + b)/v) can be treated as independent processes. Let Y(t) be the output of a first order filter driven by a Gaussian white noise with power spectral density S_0,

Ẏ(t) + αY(t) = W(t). (9.113)

Derive the FPK equation for the half car model, and the first and second order moment equations.

EXERCISE 9.8. Show that the fundamental matrix Φ(t) of the system in equation (9.67) satisfies the following equation and initial condition

dΦ(t)/dt = A Φ(t), Φ(0) = I. (9.114)
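Exercise 9.8 can be checked numerically for a constant system matrix, for which Φ(t) = exp(At). The 2×2 matrix A below is an assumed example (an SDOF oscillator in state-space form), and the matrix exponential is evaluated by a truncated power series:

```python
import numpy as np

# Assumed example system matrix (not from the text)
A = np.array([[0.0, 1.0], [-4.0, -0.4]])

def phi(t, terms=60):
    """Fundamental matrix Phi(t) = exp(A t) by truncated power series."""
    P, term = np.eye(2), np.eye(2)
    for j in range(1, terms):
        term = term @ (A*t) / j
        P = P + term
    return P

t, h = 0.7, 1.0e-6
lhs = (phi(t + h) - phi(t - h)) / (2*h)   # dPhi/dt by central difference
rhs = A @ phi(t)
print(np.allclose(lhs, rhs, atol=1e-5), np.allclose(phi(0.0), np.eye(2)))
```

The two printed checks confirm dΦ/dt = AΦ and Φ(0) = I for this example.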
Chapter 10
Random Vibration of Continuous Structures
10.1. Distributed random excitations

Random excitations acting on continuous structures in general have a spatial distribution. Before we study the random response of structures, we first briefly describe distributed random excitations. Consider a one-dimensional random excitation denoted by F(x, t). The mean of the excitation is

E[F(x, t)] = μ_F(x, t), (10.1)

where μ_F(x, t) is a deterministic function of both space and time. The space–time cross-covariance function κ_FF(x1, x2, t1, t2) is defined as

κ_FF(x1, x2, t1, t2) = E[(F(x1, t1) − μ_F(x1, t1))(F(x2, t2) − μ_F(x2, t2))]
= E[F(x1, t1) F(x2, t2)] − μ_F(x1, t1) μ_F(x2, t2)
= R_FF(x1, x2, t1, t2) − μ_F(x1, t1) μ_F(x2, t2), (10.2)

where the space–time correlation function is

E[F(x1, t1) F(x2, t2)] = R_FF(x1, x2, t1, t2). (10.3)

When the random excitation is a weakly stationary process in time for any x, the mean and space–time cross-covariance function reduce to

E[F(x, t)] = μ_F(x), κ_FF(x1, x2, t1, t2) = R_FF(x1, x2, t1 − t2) − μ_F(x1) μ_F(x2). (10.4)

The space–time cross-spectral density function of the weakly stationary process is

S_FF(x1, x2, ω) = (1/2π) ∫_{−∞}^{∞} κ_FF(x1, x2, τ) e^{−iωτ} dτ. (10.5)

Conversely, we have

κ_FF(x1, x2, τ) = ∫_{−∞}^{∞} S_FF(x1, x2, ω) e^{iωτ} dω. (10.6)

EXAMPLE 10.1. When a stationary distributed random excitation is spatially independent, such that the space–time cross-covariance has the form

κ_FF(x1, x2, τ) = δ(x1 − x2) g(τ), (10.7)

it is called rain on the roof. When g(τ) = D_F δ(τ), it is a spatially independent white noise excitation.
10.2. One-dimensional structures

We start the discussion of structural random vibrations with a simple system, namely, a Bernoulli–Euler beam. We then generalize the discussion to other one-dimensional structures.

10.2.1. Bernoulli–Euler beam

In the theory of Bernoulli–Euler beams, the rotational inertia and the shear strain are neglected. A differential element of the deformed beam is shown in Figure 10.1. The displacement field of the beam has two components: the longitudinal displacement U(x, t) and the deflection W(x, t). In the case of pure bending, U(x, t) is determined from the relationship

U(x, t) = −z ∂W(x, t)/∂x. (10.8)

The bending moment and shear force of the beam are given by

M(x, t) = EI ∂²W(x, t)/∂x², (10.9)

Figure 10.1. The differential element of the Bernoulli–Euler beam.
Q(x, t) = −EI ∂³W(x, t)/∂x³. (10.10)

Consider a simply supported uniform beam of length L subject to a random loading F(x, t) per unit length. The equation of motion for the deflection W(x, t) of the beam is

EI ∂⁴W(x, t)/∂x⁴ + c ∂W(x, t)/∂t + ρA ∂²W(x, t)/∂t² = F(x, t), (10.11)

where EI is the bending rigidity of the beam, c is the damping coefficient, ρ is the mass density per unit volume, and A is the cross-sectional area. The boundary conditions are

W(0, t) = EI ∂²W(0, t)/∂x² = 0, (10.12)
W(L, t) = EI ∂²W(L, t)/∂x² = 0. (10.13)

In the following, we review two methods of solution for the vibration problem of the beam.
Modal solution

Consider the free undamped vibration of the beam. Following the method of separation of variables, we assume W(x, t) = X(x) e^{iωt} for harmonic solutions. This leads to the spatial eigenvalue problem

EI d⁴X(x)/dx⁴ − ρAω² X(x) = 0, (10.14)

subject to

X(0) = d²X(0)/dx² = 0, (10.15)
X(L) = d²X(L)/dx² = 0. (10.16)

The nontrivial solutions of the eigenvalue problem form a set of mode functions

Φ_n(x) = √(2/(ρAL)) sin(nπx/L), n = 1, 2, . . . , (10.17)

and resonant frequencies

ω_n = (nπ/L)² √(EI/(ρA)). (10.18)
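Equations (10.17) and (10.18) are straightforward to evaluate. The sketch below uses assumed beam properties (E, I, ρ, A, L are illustrative values, not from the text) to compute the first resonant frequencies and to check the mass normalization (10.20) by the trapezoidal rule:

```python
import numpy as np

# Illustrative simply supported Bernoulli-Euler beam (assumed properties)
E, I = 2.1e11, 8.0e-6            # Young's modulus (Pa), second moment (m^4)
rho, A, L = 7800.0, 1.0e-2, 5.0  # density (kg/m^3), area (m^2), length (m)

def omega(n):
    """Resonant frequency, equation (10.18)."""
    return (n*np.pi/L)**2 * np.sqrt(E*I/(rho*A))

def phi(n, x):
    """Mass-normalized mode function, equation (10.17)."""
    return np.sqrt(2.0/(rho*A*L)) * np.sin(n*np.pi*x/L)

x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]
def trap(f):                     # composite trapezoidal rule
    return (f[0] + f[-1])*dx/2 + f[1:-1].sum()*dx

g11 = trap(rho*A*phi(1, x)**2)          # (10.20): should be 1
g12 = trap(rho*A*phi(1, x)*phi(2, x))   # (10.20): should be 0
print(round(omega(1), 2), round(g11, 4), round(abs(g12), 4))
```

Note from (10.18) that the frequencies grow like n², so ω_2 = 4ω_1 for this boundary condition.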
The mode functions are normalized with respect to the mass density ρA and are orthogonal to each other, such that

∫_0^L Φ_n(x) EI d⁴Φ_m(x)/dx⁴ dx = δ_nm ω_n², (10.19)
∫_0^L ρA Φ_n(x) Φ_m(x) dx = δ_nm. (10.20)

The forced and damped vibration response of the beam can now be expressed in terms of the mode functions in the mean square sense

W(x, t) = Σ_{j=1}^∞ Q_j(t) Φ_j(x). (10.21)

Note that

∫_0^L c Φ_n(x) Φ_m(x) dx = (c/(ρA)) δ_nm ≡ 2ζ_n ω_n δ_nm, (10.22)

where ζ_n = c/(2ρAω_n) is the modal damping ratio. We expand the random loading F(x, t) in terms of the mode functions in the mean square sense as

F(x, t) = Σ_{j=1}^∞ ρA F_j(t) Φ_j(x), (10.23)

where the F_j(t) are the modal force components given by

F_j(t) = ∫_0^L Φ_j(x) F(x, t) dx. (10.24)

Here, the integration of the random excitation is also in the mean square sense. The equations governing the modal coefficients Q_j(t) are readily obtained as

Q̈_j + 2ζ_j ω_j Q̇_j + ω_j² Q_j = F_j(t), j = 1, 2, . . . (10.25)

We have studied how to obtain the solution of these equations in Section 9.2. Assume that all the initial conditions are zero. We have

Q_j(t) = ∫_0^t h_{q,j}(t − τ) F_j(τ) dτ ≡ h_{q,j}(t) ∗ F_j(t), (10.26)
167
where e
hq,j (t) =
−ζj ωj t
ωd,j
sin(ωd,j t),
0,
t 0, t < 0,
(10.27)
$ hq,j (t) is the j th modal impulse response function. ωd,j = ωj 1 − ζj2 is the j th damped frequency. Hence, we obtain the response of the beam to the random excitation in the mean square sense as W (x, t) =
∞
hq,j (t) ∗ Fj (t)Φj (x).
(10.28)
j =1
Solution by Green’s function Consider a response of the beam to a unit impulse excitation at a point xF , i.e. F (x, t) = δ(x − xF )δ(t − tF ). Let h(x, xF , t − tF ) denote this solution. It is called the unit impulse response function, also known as Green’s function. The response to the general random excitation F (x, t) is given by t W (x, t) =
L h(x, y, t − τ )F (y, τ ) dy,
dτ 0
(10.29)
0
where the integration is in the mean square sense. We have assumed that the initial conditions of W (x, t) are zero. Modal representation of Green’s function Applying the modal solution to the special force F (x, t) = δ(x − xF )δ(t − tF ), we have L Fj (t) =
Φj (x)F (x, t) dx = Φj (xF )δ(t − tF ),
(10.30)
0
Qj (t) = Φj (xF )hq,j (t − tF ).
(10.31)
Hence, h(x, xF , t − tF ) =
∞ j =1
hq,j (t − tF )Φj (xF )Φj (x).
(10.32)
10.2.2. Response statistics of the beam

Taking the expectation of equation (10.29), we obtain the mean of the response as

μ_W(x, t) = ∫_0^t dτ ∫_0^L h(x, y, t − τ) μ_F(y, τ) dy. (10.33)

In terms of the mode functions, we have

μ_W(x, t) = Σ_{j=1}^∞ Φ_j(x) ∫_0^t dτ ∫_0^L h_{q,j}(t − τ) Φ_j(y) μ_F(y, τ) dy. (10.34)

The space–time cross-covariance function of the response is given by

κ_WW(x1, x2, t1, t2) = ∫_0^{t1} dτ1 ∫_0^{t2} dτ2 ∫_0^L ∫_0^L h(x1, y1, t1 − τ1) h(x2, y2, t2 − τ2) κ_FF(y1, y2, τ1, τ2) dy1 dy2, (10.35)

or

κ_WW(x1, x2, t1, t2) = Σ_{j=1}^∞ Σ_{k=1}^∞ Φ_j(x1) Φ_k(x2) ∫_0^{t1} dτ1 ∫_0^{t2} dτ2 h_{q,j}(t1 − τ1) h_{q,k}(t2 − τ2) ∫_0^L ∫_0^L Φ_j(y1) Φ_k(y2) κ_FF(y1, y2, τ1, τ2) dy1 dy2. (10.36)

Stationary excitations

Assume now that the excitation F(x, t) is a weakly or strictly stationary random process, such that its mean is independent of time at any point on the beam and κ_FF(y1, y2, τ1, τ2) = κ_FF(y1, y2, τ1 − τ2). In the steady state as t → ∞, we have

μ_W(x) = lim_{t→∞} μ_W(x, t) = Σ_{j=1}^∞ Φ_j(x) ∫_0^∞ h_{q,j}(τ) dτ ∫_0^L Φ_j(y) μ_F(y) dy
= Σ_{j=1}^∞ (Φ_j(x)/ω_j²) ∫_0^L Φ_j(y) μ_F(y) dy. (10.37)
Consider the limit of the space–time cross-covariance function κ_WW(x1, x2, t1, t2) when t1, t2 → ∞ and τ = t1 − t2 is finite. We have

κ_WW(x1, x2, t1, t2) = Σ_{j=1}^∞ Σ_{k=1}^∞ Φ_j(x1) Φ_k(x2) ∫_0^∞ ∫_0^∞ h_{q,j}(τ1) h_{q,k}(τ2) dτ1 dτ2 ∫_0^L ∫_0^L Φ_j(y1) Φ_k(y2) κ_FF(y1, y2, τ − τ1 + τ2) dy1 dy2
= κ_WW(x1, x2, τ). (10.38)

From equation (10.6), we have

κ_FF(x1, x2, τ − τ1 + τ2) = ∫_{−∞}^∞ S_FF(x1, x2, ω) e^{iω(τ−τ1+τ2)} dω. (10.39)

Recall that

H*_{q,j}(ω) = ∫_0^∞ h_{q,j}(τ1) e^{−iωτ1} dτ1, (10.40)
H_{q,k}(ω) = ∫_0^∞ h_{q,k}(τ2) e^{iωτ2} dτ2. (10.41)

We obtain

κ_WW(x1, x2, τ) = ∫_{−∞}^∞ Σ_{j=1}^∞ Σ_{k=1}^∞ Φ_j(x1) Φ_k(x2) H*_{q,j}(ω) H_{q,k}(ω) ∫_0^L ∫_0^L S_FF(y1, y2, ω) Φ_j(y1) Φ_k(y2) dy1 dy2 e^{iωτ} dω. (10.42)
By definition, we have the space-spectral power density

S_WW(x1, x2, ω) = Σ_{j=1}^∞ Σ_{k=1}^∞ Φ_j(x1) Φ_k(x2) H*_{q,j}(ω) H_{q,k}(ω) ∫_0^L ∫_0^L S_FF(y1, y2, ω) Φ_j(y1) Φ_k(y2) dy1 dy2. (10.43)

The variance function of the beam response at a given point is given by

σ_W²(x) = κ_WW(x, x, 0) = ∫_{−∞}^∞ Σ_{j=1}^∞ Σ_{k=1}^∞ Φ_j(x) Φ_k(x) H*_{q,j}(ω) H_{q,k}(ω) ∫_0^L ∫_0^L S_FF(y1, y2, ω) Φ_j(y1) Φ_k(y2) dy1 dy2 dω. (10.44)
Modal representation of the random excitation

Recall the modal expansion of the excitation in equation (10.23), and continue to consider the stationary case. The mean and space–time cross-covariance function of the excitation in terms of the modal forces are given by

μ_F(x) = Σ_{j=1}^∞ ρA μ_{Fj} Φ_j(x), (10.45)
κ_FF(x1, x2, τ) = Σ_{j=1}^∞ Σ_{k=1}^∞ (ρA)² κ_{FjFk}(τ) Φ_j(x1) Φ_k(x2), (10.46)

where μ_{Fj} is the mean of the modal force F_j(t), and κ_{FjFk}(τ) is the cross-covariance function of the modal forces F_j(t) and F_k(t). From equation (10.5), we have

S_FF(x1, x2, ω) = (1/2π) ∫_{−∞}^∞ Σ_{j=1}^∞ Σ_{k=1}^∞ (ρA)² κ_{FjFk}(τ) Φ_j(x1) Φ_k(x2) e^{−iωτ} dτ
= Σ_{j=1}^∞ Σ_{k=1}^∞ (ρA)² S_{FjFk}(ω) Φ_j(x1) Φ_k(x2), (10.47)

where

S_{FjFk}(ω) = (1/2π) ∫_{−∞}^∞ κ_{FjFk}(τ) e^{−iωτ} dτ. (10.48)

These modal representations of the random excitation can be substituted into the solutions of the beam response. For example, the variance function of the beam response becomes

σ_W²(x) = κ_WW(x, x, 0)
= ∫_{−∞}^∞ Σ_{j=1}^∞ Σ_{k=1}^∞ Φ_j(x) Φ_k(x) H*_{q,j}(ω) H_{q,k}(ω) ∫_0^L ∫_0^L Σ_{n=1}^∞ Σ_{m=1}^∞ (ρA)² S_{FnFm}(ω) Φ_n(y1) Φ_m(y2) Φ_j(y1) Φ_k(y2) dy1 dy2 dω
= Σ_{j=1}^∞ Σ_{k=1}^∞ Φ_j(x) Φ_k(x) ∫_{−∞}^∞ H*_{q,j}(ω) H_{q,k}(ω) S_{FjFk}(ω) dω. (10.49)
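As an illustration of equation (10.49), suppose the modal forces are uncorrelated and white with a common constant PSD S_0 (an assumption made here for the sketch, not a case treated above). Then only the j = k terms survive, and for a lightly damped mode the frequency integral of |H_{q,j}(ω)|² S_0 equals πS_0/(2ζ_j ω_j³):

```python
import numpy as np

# Variance of the deflection from (10.49) under the assumption of
# uncorrelated white modal forces with constant PSD S0 (illustrative).
rho, A, L, EI = 7800.0, 1.0e-2, 5.0, 1.68e6
zeta, S0, nmodes = 0.02, 1.0e-4, 50

def phi(j, x): return np.sqrt(2.0/(rho*A*L))*np.sin(j*np.pi*x/L)
def omega(j):  return (j*np.pi/L)**2*np.sqrt(EI/(rho*A))

def var_W(x):
    # integral of |H_{q,j}|^2 * S0 over omega = pi*S0/(2*zeta*omega_j^3)
    return sum(phi(j, x)**2 * np.pi*S0/(2*zeta*omega(j)**3)
               for j in range(1, nmodes + 1))

print(var_W(L/2))   # largest at midspan; the series converges like 1/omega_j^3
```

Because ω_j grows like j², the modal contributions fall off like j^{−6}, so very few modes dominate the deflection variance.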
Expectations of moments and shear forces

For the bending moment, we have

M(x, t) = EI Σ_{j=1}^∞ [h_{q,j}(t) ∗ F_j(t)] d²Φ_j(x)/dx² = EI ∫_0^t dτ ∫_0^L [∂²h(x, y, t − τ)/∂x²] F(y, τ) dy. (10.50)

The mean of the moment is given by

μ_M(x, t) = EI ∂²μ_W(x, t)/∂x². (10.51)

The space–time covariance function is given by

κ_MM(x1, x2, t1, t2) = (EI)² ∂⁴κ_WW(x1, x2, t1, t2)/∂x1²∂x2². (10.52)

Likewise, we have the statistics for the shear force:

Q(x, t) = −EI Σ_{j=1}^∞ [h_{q,j}(t) ∗ F_j(t)] d³Φ_j(x)/dx³ = −EI ∫_0^t dτ ∫_0^L [∂³h(x, y, t − τ)/∂x³] F(y, τ) dy, (10.53)

μ_Q(x, t) = −EI ∂³μ_W(x, t)/∂x³, (10.54)
κ_QQ(x1, x2, t1, t2) = (EI)² ∂⁶κ_WW(x1, x2, t1, t2)/∂x1³∂x2³. (10.55)
EXAMPLE 10.2. Consider the simply supported homogeneous Bernoulli–Euler beam in Figure 10.2. At t = 0 the beam is at rest, when a load P enters the beam with constant velocity V. P and V are assumed to be mutually independent random variables. The random excitation is expressed as F(x, t) = P δ(x − V t). Find the mean function μ_M(x, t) and the variance σ_M²(x, t) of the bending moment in the middle of the beam (Nielsen, 2000).

The modal initial conditions are Q_j(0) = Q̇_j(0) = 0, and the modal force components are

F_j(t) = ∫_0^L Φ_j(x) F(x, t) dx = P Φ_j(V t) for 0 ≤ V t ≤ L, and F_j(t) = 0 for V t > L. (10.56)

The beam response for 0 ≤ V t ≤ L can be written

W(x, t) = Σ_{j=1}^∞ Q_j(t) Φ_j(x) = Σ_{j=1}^∞ P ∫_0^t h_j(t − τ) Φ_j(V τ) dτ Φ_j(x). (10.57)

Figure 10.2. A moving random load on a simply supported homogeneous beam.

The moment is given by

M(x, t) = EI ∂²W(x, t)/∂x² = EI P Σ_{j=1}^∞ ∫_0^t h_j(t − τ) Φ_j(V τ) dτ d²Φ_j(x)/dx². (10.58)

Let

M_0(x, t, V) = EI Σ_{j=1}^∞ ∫_0^t h_j(t − τ) Φ_j(V τ) dτ d²Φ_j(x)/dx². (10.59)

The moment is re-written as M(x, t) = P · M_0(x, t, V) when 0 ≤ V t ≤ L. The mean and variance of the moment are given by

μ_M(x, t) = E[P] E[M_0(x, t, V)], (10.60)
σ_M²(x, t) = E[P²] E[M_0²(x, t, V)] − μ_M²(x, t). (10.61)

In order to compute the second order statistics of the moment, we need to know the second order statistics of the random variable P and the probability density function of the variable V. The PDF of V is needed because the moment is a nonlinear function of V.
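The expectations in equations (10.60) and (10.61) can be estimated by Monte Carlo sampling once distributions for P and V are chosen. The sketch below assumes a normal P and a uniform V (both illustrative) and evaluates the convolution in (10.59) with undamped modes on a time grid:

```python
import numpy as np

# Monte Carlo sketch of equation (10.60) for Example 10.2.  The statistics
# of P and V are assumed for illustration; the modes are undamped and the
# modal series is truncated.
rho, A, L, EI = 7800.0, 1.0e-2, 5.0, 1.68e6
nmodes, rng = 12, np.random.default_rng(1)

def omega(j): return (j*np.pi/L)**2*np.sqrt(EI/(rho*A))
def phi(j, x): return np.sqrt(2.0/(rho*A*L))*np.sin(j*np.pi*x/L)

def M0(x, t, V, nt=300):
    """Unit-load moment factor, equation (10.59)."""
    tau = np.linspace(0.0, t, nt)
    total = 0.0
    for j in range(1, nmodes + 1):
        wj = omega(j)
        f = np.sin(wj*(t - tau))/wj * phi(j, V*tau)
        Q = np.sum(0.5*(f[1:] + f[:-1])*np.diff(tau))   # trapezoid rule
        total += EI*Q*(-(j*np.pi/L)**2)*phi(j, x)       # phi_j'' = -(j*pi/L)^2 phi_j
    return total

P = rng.normal(1.0e3, 1.0e2, 200)   # assumed random load magnitude
V = rng.uniform(8.0, 12.0, 200)     # assumed random speed
t = 0.2                             # V*t <= L for all samples
m0 = np.array([M0(L/2, t, v) for v in V])
print("mu_M =", P.mean()*m0.mean()) # E[P] E[M0], equation (10.60)
```

The factorization E[P]E[M_0] is valid here only because P and V are independent, which is exactly the assumption stated in the example.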
M W (x, t) = L W (x, t) + F (x, t), x ∈ D, (10.62) where L[·] is a linear spatial differential operator of order 2n, M[·] is the inertial operator and F (x, t) is a random excitation. The time derivatives in M[·] is of second order. The initial conditions of the random vibration problem of the beam are W (x, 0) = W0 (x),
W˙ (x, 0) = W˙ 0 (x),
and the boundary conditions are
Bk W (x, t) = 0, k = 1, 2, . . . , 2n.
(10.63)
(10.64)
where Bk [W (x, t)] are the differential operators with spatial derivatives up to (2n − 1)th order, and time derivatives up to second order.
Chapter 10. Random Vibration of Continuous Structures
174
Modal solution We consider the harmonic solution of the free vibration problem, given in the form W (x, t) = X(x)eiω . This leads to the following general eigenvalue problem
L X(x) = ω2 M X(x) , x ∈ D, (10.65)
Bk X(x) = 0, k = 1, 2, . . . , 2n. (10.66) Let Φj (x) be a set of normalized mode functions of the eigenvalue problem, and ωj be the corresponding eigen-frequencies. The mode functions satisfy the following orthogonality conditions,
(10.67) M Φj (x) Φk (x) dx = δj k , D
L Φj (x) Φk (x) dx = δj k ωj2 .
(10.68)
D
The modal solution of the forced random response is given in the mean square sense by W (x, t) =
∞
Qj (t)Φj (x).
(10.69)
j =1
We expand the random excitation as follows F (x, t) =
∞
Fj (t)M Φj (x) ,
(10.70)
j =1
where
Fj (t) =
Φj (x)F (x, t) dx.
(10.71)
D
Substituting all the expansions in equation (10.62) and making use of the orthogonality conditions, we obtain the modal equations (10.25). From this point on, the solution procedures are the same as discussed before.

10.2.4. Timoshenko beam

In the Timoshenko beam, the shear deformation and the rotational inertia are included. The total slope of the deflection curve of the beam consists of two parts,

∂W(x, t)/∂x = β(x, t) + ψ(x, t), (10.72)

where β(x, t) is the shear angle of the beam and ψ(x, t) is the angle of rotation of the cross-section, as shown in Figure 10.3. Because of equation (10.72), only two of the three functions are independent. It is common to keep W(x, t) and ψ(x, t). The bending moment and shear force of the beam are given by

M(x, t) = EI ∂ψ(x, t)/∂x, (10.73)
Q(x, t) = kGA β(x, t) = kGA [∂W(x, t)/∂x − ψ(x, t)], (10.74)

Figure 10.3. The differential element of the Timoshenko beam.
where k is a correction factor depending on the geometry of the cross section, known as the Timoshenko shear coefficient, G is the shear modulus, and A is the cross sectional area of the beam. Following Lagrange’s method (Meirovitch, 1967), we can derive the equations of motion as

[ρA ∂²W(x, t)/∂t²; ρAr² ∂²ψ(x, t)/∂t²] = [kGA ∂²W/∂x² − kGA ∂ψ/∂x; kGA ∂W/∂x + (EI ∂²/∂x² − kGA)ψ] + [F(x, t); 0], (10.75)

where r is the radius of gyration of the cross section about the neutral axis, so that the mass moment of inertia J per unit length and the moment of inertia I of the cross section about the same axis are

J = ρAr², I = Ar², (10.76)

and the boundary conditions are

EI [∂ψ(x, t)/∂x] δψ(x, t) = 0, kGA [∂W(x, t)/∂x − ψ(x, t)] δW(x, t) = 0, x = 0, L, (10.77)
where F(x, t) is the distributed random excitation on the beam, L is the length of the beam, and δψ(x, t) denotes the virtual variation of the function ψ(x, t). For the simply supported boundary conditions, we have

W(0, t) = W(L, t) = 0, EI ∂ψ(0, t)/∂x = EI ∂ψ(L, t)/∂x = 0. (10.78)

Modal solution

Consider the free undamped vibration of the beam and look for harmonic solutions. Assume that

[W(x, t); ψ(x, t)] = [W(x); ψ(x)] e^{iωt}. (10.79)

We obtain

−ω² [ρA W(x); ρAr² ψ(x)] = [kGA d²W/dx² − kGA dψ/dx; kGA dW/dx + (EI d²/dx² − kGA)ψ]. (10.80)

Two matrix operators can be defined as

M_TB[·] = [ρA, 0; 0, ρAr²], (10.81)
L_TB[·] = −[kGA d²/dx², −kGA d/dx; kGA d/dx, EI d²/dx² − kGA]. (10.82)

Assume that

[W(x); ψ(x)] = [W; ψ] e^{iλx}. (10.83)

This leads to an algebraic equation relating λ to ω:

[−kGAλ² + ρAω², −ikGAλ; ikGAλ, −EIλ² − kGA + ρAr²ω²] [W; ψ] = [0; 0]. (10.84)

For a nontrivial solution of {W, ψ}, we require

det [−kGAλ² + ρAω², −ikGAλ; ikGAλ, −EIλ² − kGA + ρAr²ω²] = 0. (10.85)
Expanding the determinant, we have

λ⁴ − (ρAr²/(EI))(1 + E/(kG)) λ²ω² − (ρA/(EI)) ω² + ((ρA)²/(kGEA²)) ω⁴ = 0. (10.86)

Solving for λ², we obtain

λ² = (1/2)(ρAr²/(EI))(1 + E/(kG)) ω² ± √[ (1/4)(ρA)²(r⁴/(EI)²)(1 + E/(kG))² ω⁴ + (ρA/(EI)) ω² − ((ρA)²/(kGEA²)) ω⁴ ]. (10.87)

There are four roots of λ as functions of the frequency ω. In general, there are a pair of complex conjugates iλ1 and −iλ1, where λ1 is real, and a pair of real roots λ2 and −λ2. Associated with each root, we can find a non-zero vector denoted by u_j,

u_j = [W; ψ]_j, j = 1, 2, 3, 4. (10.88)

The general solution of equation (10.80) is given by

[W(x); ψ(x)] = C1 u1 cos λ1x + C2 u2 sin λ1x + C3 u3 cosh λ2x + C4 u4 sinh λ2x, (10.89)

where the Cj (1 ≤ j ≤ 4) are undetermined coefficients. Applying the boundary conditions in equation (10.78), we obtain a homogeneous algebraic equation for the coefficients Cj:

[u11, 0, u31, 0;
0, λ1u22, 0, λ2u42;
u11 cos λ1L, u21 sin λ1L, u31 cosh λ2L, u41 sinh λ2L;
−λ1u12 sin λ1L, λ1u22 cos λ1L, λ2u32 sinh λ2L, λ2u42 cosh λ2L] {C1; C2; C3; C4} = 0. (10.90)

Setting the determinant of this matrix to zero yields the transcendental equation that determines the resonant frequencies of the beam; its roots can be obtained numerically.
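The dispersion relation can be checked numerically: the sketch below substitutes the two roots (10.87) back into the quartic (10.86), using assumed material and section data:

```python
import numpy as np

# Check of the Timoshenko dispersion relation: roots (10.87) substituted
# back into the quartic (10.86).  Material and section data are assumed.
E, G, k = 2.1e11, 8.1e10, 5.0/6.0
rho, A, r = 7800.0, 1.0e-2, 0.05
I = A*r**2
EI = E*I

def quartic(lam2, w):
    """Left-hand side of equation (10.86), written in lambda^2."""
    return (lam2**2
            - (rho*A*r**2/EI)*(1 + E/(k*G))*lam2*w**2
            - (rho*A/EI)*w**2
            + (rho*A)**2/(k*G*E*A**2)*w**4)

def lam2_roots(w):
    """Equation (10.87): the two roots for lambda^2."""
    b = 0.5*(rho*A*r**2/EI)*(1 + E/(k*G))*w**2
    disc = np.sqrt(b**2 + (rho*A/EI)*w**2 - (rho*A)**2/(k*G*E*A**2)*w**4)
    return b + disc, b - disc

w = 2000.0
l2a, l2b = lam2_roots(w)
print(abs(quartic(l2a, w)) < 1e-6*l2a**2,
      abs(quartic(l2b, w)) < 1e-6*l2b**2)
```

For this data one root λ² is positive and the other negative, which is why the general solution (10.89) mixes trigonometric and hyperbolic terms.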
Assume that we have found a discrete set of frequencies ω_j and the associated mode functions

Φ_j(x) = [W_j(x); ψ_j(x)]. (10.91)

The mode functions satisfy the orthogonality conditions

∫_D Φ_k^T(x) M_TB[Φ_j(x)] dx = δ_{jk}, (10.92)
∫_D Φ_k^T(x) L_TB[Φ_j(x)] dx = δ_{jk} ω_j². (10.93)

The general solution of the Timoshenko beam subject to random excitations can be written as the modal summation

[W(x, t); ψ(x, t)] = Σ_{j=1}^∞ Q_j(t) Φ_j(x). (10.94)
We expand the random excitation as

[F(x, t); 0] = Σ_{j=1}^∞ F_j(t) M_TB[Φ_j(x)], (10.95)

where

F_j(t) = ∫_D Φ_j^T(x) [F(x, t); 0] dx. (10.96)

Hence, the equation for the modal coefficient Q_j(t) of an undamped Timoshenko beam is

Q̈_j + ω_j² Q_j(t) = F_j(t), j = 1, 2, 3, . . . (10.97)
Q_j(t) can be readily solved by using the methods discussed in the previous chapters.

10.2.5. Response statistics

After Q_j(t) is obtained, we can follow the same steps to compute response statistics as in the case of the Bernoulli–Euler beam. The mean and space–time correlation function of the response are

E[W(x, t); ψ(x, t)] = Σ_{j=1}^∞ E[Q_j(t)] Φ_j(x), (10.98)

E[[W(x1, t1); ψ(x1, t1)][W(x2, t2), ψ(x2, t2)]] = Σ_{j=1}^∞ Σ_{k=1}^∞ E[Q_j(t1) Q_k(t2)] Φ_j(x1) Φ_k^T(x2). (10.99)

More solutions of the response statistics of the Timoshenko beam are left as exercises.
10.3. Two-dimensional structures

10.3.1. Rectangular plates

Consider the pure bending vibration problem of a simply supported uniform rectangular plate. The equation of motion is

D∇⁴W(x, t) + c ∂W(x, t)/∂t + ρh ∂²W(x, t)/∂t² = F(x, t), x ∈ D, (10.100)

with the boundary conditions

W(0, x2, t) = W(a, x2, t) = 0, D ∂²W(0, x2, t)/∂x1² = D ∂²W(a, x2, t)/∂x1² = 0,
W(x1, 0, t) = W(x1, b, t) = 0, D ∂²W(x1, 0, t)/∂x2² = D ∂²W(x1, b, t)/∂x2² = 0, (10.101)

and the initial conditions

W(x, 0) = W_0(x), Ẇ(x, 0) = Ẇ_0(x), (10.102)

where x^T = [x1, x2], D = {0 < x1 < a; 0 < x2 < b}, ρ is the mass density per unit volume, h is the plate thickness, c is a damping coefficient, F(x, t) is the external transverse random load per unit area, and the biharmonic differential operator is defined as

∇⁴ = ∂⁴/∂x1⁴ + 2 ∂⁴/∂x1²∂x2² + ∂⁴/∂x2⁴. (10.103)

The bending rigidity D of the plate is defined as

D = Eh³/[12(1 − ν²)], (10.104)

where ν is the Poisson ratio of the material, and E is Young’s modulus.
Modal solution

Consider the harmonic response of a free undamped plate. Following the method of separation of variables, we look for the bending response of the plate in the form

W(x, t) = X1(x1) X2(x2) e^{iωt}. (10.105)

This leads to the eigenvalue problem of the plate. The nontrivial solutions of the eigenvalue problem are a set of mode functions

Φ_nm(x) = (2/√(ρhab)) sin(nπx1/a) sin(mπx2/b), n, m = 1, 2, . . . , (10.106)

and resonant frequencies

ω_nm = [(nπ/a)² + (mπ/b)²] √(D/(ρh)). (10.107)
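A quick evaluation of equation (10.107), with assumed plate data, lists the lowest resonant frequencies in ascending order:

```python
import numpy as np

# Natural frequencies (10.107) of a simply supported rectangular plate,
# with assumed material and geometric data (aluminium-like values).
E, nu, rho = 7.0e10, 0.3, 2700.0
h, a, b = 0.003, 1.2, 0.8
D = E*h**3/(12*(1 - nu**2))      # bending rigidity, equation (10.104)

def omega(n, m):
    return ((n*np.pi/a)**2 + (m*np.pi/b)**2)*np.sqrt(D/(rho*h))

modes = sorted((omega(n, m), n, m) for n in range(1, 4) for m in range(1, 4))
for w, n, m in modes[:4]:
    print(f"mode ({n},{m}): {w:8.1f} rad/s")
```

Unlike the beam, the ordering of plate modes depends on the aspect ratio a/b, which is why the frequencies are sorted rather than listed by index.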
The mode functions are normalized with respect to the mass ρh and are orthogonal to each other, such that

∫_D Φ_nm(x) D∇⁴Φ_kl(x) dx = δ_nk δ_ml ω_nm², (10.108)
∫_D ρh Φ_nm(x) Φ_kl(x) dx = δ_nk δ_ml. (10.109)

The forced and damped vibration response of the plate is expressed in terms of the mode functions in the mean square sense

W(x, t) = Σ_{n,m=1}^∞ Q_nm(t) Φ_nm(x). (10.110)

Note that

∫_D c Φ_nm(x) Φ_kl(x) dx = (c/(ρh)) δ_nk δ_ml ≡ 2ζ_nm ω_nm δ_nk δ_ml, (10.111)

where ζ_nm = c/(2ρhω_nm) is the modal damping ratio. We expand the random loading F(x, t) in terms of the mode functions in the mean square sense as

F(x, t) = Σ_{n,m=1}^∞ ρh F_nm(t) Φ_nm(x), (10.112)

where the modal force components are given by

F_nm(t) = ∫_D Φ_nm(x) F(x, t) dx. (10.113)

The integration of the random excitation is in the mean square sense. The equations governing the modal coefficients Q_nm(t) are readily obtained as

Q̈_nm + 2ζ_nm ω_nm Q̇_nm + ω_nm² Q_nm = F_nm(t), n, m = 1, 2, . . . (10.114)
From now on, the solution process is the same as for the beam. Assume that all the initial conditions are zero. We have

Q_nm(t) = ∫_0^t h_{q,nm}(t − τ) F_nm(τ) dτ ≡ h_{q,nm}(t) ∗ F_nm(t), (10.115)

where

h_{q,nm}(t) = (e^{−ζ_nm ω_nm t}/ω_{d,nm}) sin(ω_{d,nm} t) for t ≥ 0, and h_{q,nm}(t) = 0 for t < 0, (10.116)

is the (n, m)th modal impulse response function, and ω_{d,nm} = ω_nm √(1 − ζ_nm²) is the (n, m)th damped frequency. Hence,

W(x, t) = Σ_{n,m=1}^∞ [h_{q,nm}(t) ∗ F_nm(t)] Φ_nm(x). (10.117)
Green’s function

Applying the modal solution to the special force F(x, t) = δ(x − x_F) δ(t − t_F), we have

F_nm(t) = ∫_D Φ_nm(x) F(x, t) dx = Φ_nm(x_F) δ(t − t_F), (10.118)
Q_nm(t) = Φ_nm(x_F) h_{q,nm}(t − t_F). (10.119)

Hence, we obtain Green’s function of the plate in terms of the mode functions

h(x, x_F, t − t_F) = Σ_{n,m=1}^∞ h_{q,nm}(t − t_F) Φ_nm(x_F) Φ_nm(x). (10.120)

The plate response can be written in a compact form as

W(x, t) = ∫_0^t dτ ∫_D h(x, y, t − τ) F(y, τ) dy. (10.121)
10.3.2. Response statistics of the plate

The mean response of the plate is

μ_W(x, t) = E[W(x, t)] = Σ_{n,m=1}^∞ [h_{q,nm}(t) ∗ E[F_nm(t)]] Φ_nm(x), (10.122)

or

μ_W(x, t) = ∫_0^t dτ ∫_D h(x, y, t − τ) μ_F(y, τ) dy, (10.123)
where μ_F(x, t) = E[F(x, t)] is the mean of the random excitation. The space–time cross-covariance function of F(x, t) is given by

κ_FF(x1, x2, t1, t2) = E[(F(x1, t1) − μ_F(x1, t1))(F(x2, t2) − μ_F(x2, t2))]
= E[F(x1, t1) F(x2, t2)] − μ_F(x1, t1) μ_F(x2, t2)
= R_FF(x1, x2, t1, t2) − μ_F(x1, t1) μ_F(x2, t2), (10.124)

where the space–time correlation function is E[F(x1, t1) F(x2, t2)] = R_FF(x1, x2, t1, t2). The space–time cross-covariance function of the plate response is then obtained as

κ_WW(x1, x2, t1, t2) = ∫_0^{t1} dτ1 ∫_0^{t2} dτ2 ∫_D ∫_D h(x1, y1, t1 − τ1) h(x2, y2, t2 − τ2) κ_FF(y1, y2, τ1, τ2) dy1 dy2, (10.125)

or

κ_WW(x1, x2, t1, t2) = Σ_{n,m=1}^∞ Σ_{k,l=1}^∞ Φ_nm(x1) Φ_kl(x2) ∫_0^{t1} dτ1 ∫_0^{t2} dτ2 h_{q,nm}(t1 − τ1) h_{q,kl}(t2 − τ2) ∫_D ∫_D Φ_nm(y1) Φ_kl(y2) κ_FF(y1, y2, τ1, τ2) dy1 dy2. (10.126)

Stationary excitations

Assume that the excitation F(x, t) is a weakly or strictly stationary random process, such that its mean is independent of time at any point on the plate and κ_FF(y1, y2, τ1, τ2) = κ_FF(y1, y2, τ1 − τ2). In the steady state as t → ∞, we have
μ_W(x) = lim_{t→∞} μ_W(x, t) = Σ_{n,m=1}^∞ Φ_nm(x) ∫_0^∞ h_{q,nm}(τ) dτ ∫_D Φ_nm(y) μ_F(y) dy
= Σ_{n,m=1}^∞ (Φ_nm(x)/ω_nm²) ∫_D Φ_nm(y) μ_F(y) dy. (10.127)
Consider the limit of the space–time cross-covariance function κ_WW(x1, x2, t1, t2) when t1, t2 → ∞ and τ = t1 − t2 is finite. We have

κ_WW(x1, x2, t1, t2) = Σ_{n,m=1}^∞ Σ_{k,l=1}^∞ Φ_nm(x1) Φ_kl(x2) ∫_0^∞ ∫_0^∞ h_{q,nm}(τ1) h_{q,kl}(τ2) dτ1 dτ2 ∫_D ∫_D Φ_nm(y1) Φ_kl(y2) κ_FF(y1, y2, τ − τ1 + τ2) dy1 dy2
= κ_WW(x1, x2, τ). (10.128)
From equation (10.6), we have

κ_FF(x1, x2, τ − τ1 + τ2) = ∫_{−∞}^∞ S_FF(x1, x2, ω) e^{iω(τ−τ1+τ2)} dω. (10.129)

Recall that

H*_{q,nm}(ω) = ∫_0^∞ h_{q,nm}(τ1) e^{−iωτ1} dτ1, (10.130)
H_{q,kl}(ω) = ∫_0^∞ h_{q,kl}(τ2) e^{iωτ2} dτ2. (10.131)

We obtain

κ_WW(x1, x2, τ) = ∫_{−∞}^∞ Σ_{n,m=1}^∞ Σ_{k,l=1}^∞ Φ_nm(x1) Φ_kl(x2) H*_{q,nm}(ω) H_{q,kl}(ω) ∫_D ∫_D S_FF(y1, y2, ω) Φ_nm(y1) Φ_kl(y2) dy1 dy2 e^{iωτ} dω. (10.132)
By definition, we have the space-spectral power density of the plate response

S_WW(x1, x2, ω) = Σ_{n,m=1}^∞ Σ_{k,l=1}^∞ Φ_nm(x1) Φ_kl(x2) H*_{q,nm}(ω) H_{q,kl}(ω) ∫_D ∫_D S_FF(y1, y2, ω) Φ_nm(y1) Φ_kl(y2) dy1 dy2. (10.133)

The variance function of the plate response at a given point is given by

σ_W²(x) = κ_WW(x, x, 0) = ∫_{−∞}^∞ Σ_{n,m=1}^∞ Σ_{k,l=1}^∞ Φ_nm(x) Φ_kl(x) H*_{q,nm}(ω) H_{q,kl}(ω) ∫_D ∫_D S_FF(y1, y2, ω) Φ_nm(y1) Φ_kl(y2) dy1 dy2 dω. (10.134)
Modal representation of the random excitation

Recall the modal expansion of the excitation in equation (10.112), and continue to consider the stationary case. The mean and space–time cross-covariance function of the excitation in terms of the modal forces are given by

μ_F(x) = Σ_{n,m=1}^∞ ρh μ_{Fnm} Φ_nm(x), (10.135)
κ_FF(x1, x2, τ) = Σ_{n,m=1}^∞ Σ_{k,l=1}^∞ (ρh)² κ_{Fnm Fkl}(τ) Φ_nm(x1) Φ_kl(x2), (10.136)

where μ_{Fnm} is the mean of the modal force F_nm(t), and κ_{Fnm Fkl}(τ) is the cross-covariance function of the modal forces F_nm(t) and F_kl(t). From equation (10.5), we have

S_FF(x1, x2, ω) = (1/2π) ∫_{−∞}^∞ Σ_{n,m=1}^∞ Σ_{k,l=1}^∞ (ρh)² κ_{Fnm Fkl}(τ) Φ_nm(x1) Φ_kl(x2) e^{−iωτ} dτ
= Σ_{n,m=1}^∞ Σ_{k,l=1}^∞ (ρh)² S_{Fnm Fkl}(ω) Φ_nm(x1) Φ_kl(x2), (10.137)
where

S_{Fnm Fkl}(ω) = (1/2π) ∫_{−∞}^∞ κ_{Fnm Fkl}(τ) e^{−iωτ} dτ. (10.138)

These modal representations of the random excitation can be substituted into the solutions of the plate response. The variance function of the plate response becomes

σ_W²(x) = ∫_{−∞}^∞ Σ_{n,m=1}^∞ Σ_{k,l=1}^∞ Φ_nm(x) Φ_kl(x) H*_{q,nm}(ω) H_{q,kl}(ω) ∫_D ∫_D Σ_{o,p=1}^∞ Σ_{q,s=1}^∞ (ρh)² S_{Fop Fqs}(ω) Φ_op(y1) Φ_qs(y2) Φ_nm(y1) Φ_kl(y2) dy1 dy2 dω
= Σ_{n,m=1}^∞ Σ_{k,l=1}^∞ Φ_nm(x) Φ_kl(x) ∫_{−∞}^∞ H*_{q,nm}(ω) H_{q,kl}(ω) S_{Fnm Fkl}(ω) dω. (10.139)
10.3.3. Discrete solutions

Modal solutions and Green’s functions for structures such as plates and shells with irregular geometry are generally difficult to obtain analytically. It is now commonplace to discretize elastic structures by using the finite element method. The advantage of such a numerical solution approach is the flexibility in handling irregular geometry of the structure and complex loadings. Consider, for example, a finite element model of a structure, and let X denote the vector of all nodal displacements of the finite elements. X can contain all translational and rotational degrees of freedom. After imposing the boundary conditions, we arrive at an equation of motion of the same form as equation (9.6),

MẌ + CẊ + KX = F(t), t > 0, X(0) = X_0, Ẋ(0) = Ẋ_0. (10.140)

Equation (10.140) represents a finite degree of freedom approximation of the continuous structure, and the solutions presented in Chapter 9 become applicable to its random vibration problem. Further readings on random vibration of continuous structures can be found in Elishakoff (1983), Lin (1967), Sólnes (1997), Soong and Grigoriu (1993), Wirsching et al. (1995).
Exercises

EXERCISE 10.1. The uniform simply supported Bernoulli–Euler beam is subject to a point force at x = x_0 (0 < x_0 < L) with zero mean and a time-correlation function D_0 δ(τ). Obtain the space–time cross-covariance function of the response κ_WW(x1, x2, t1, t2) and the mean square response σ_W²(x) = κ_WW(x, x, 0).

EXERCISE 10.2. Consider the random vibration problem of a uniform Bernoulli–Euler beam with simply supported boundary conditions subject to the rain-on-the-roof white noise excitation. Derive the mean and space–time cross-covariance function of the response.

EXERCISE 10.3. Consider the random vibration problem of a uniform Bernoulli–Euler beam with clamped-clamped boundary conditions subject to the rain-on-the-roof white noise excitation. Derive the mean and space–time cross-covariance function of the response.

EXERCISE 10.4. Assume that the Timoshenko beam in Section 10.2.4 is subject to a Gaussian white noise excitation with a uniform spatial distribution, i.e. F(x, t) = F_0 V(t), where E[V(t)] = 0 and E[V(t)V(t + τ)] = δ(τ). Derive the mean and space–time cross-covariance function of the response vector.

EXERCISE 10.5. The uniform simply supported plate is subject to a point force at x = x_0 (x_0 ∈ D) with zero mean and a time-correlation function D_0 δ(τ). Obtain the space–time cross-covariance function of the response κ_WW(x1, x2, t1, t2) and the mean square response σ_W²(x) = κ_WW(x, x, 0).
Chapter 11
Structural Reliability
11.1. Modes of failure

When a structure is subject to random excitations, it may lose its function because of structural failure. There are several possible modes of failure, including first-passage failure and cumulative fatigue damage failure. In this chapter, we study these modes of failure (Lin, 1967; Sólnes, 1997; Soong and Grigoriu, 1993; Wirsching et al., 1995).
11.2. Level crossing 11.2.1. Single level crossing We study two approaches to calculate the statistics of level crossing of the system response. The first approach constructs a random event and computes its probability. The second approach, on the other hand, constructs a counting process and evaluates its mean. Consider a scalar random response of the system X(t). Let b be a level of ˙ ˙ t) of X(t) and X(t) is available. interest. Assume that the joint PDF pXX˙ (x, x, Consider a random event that the response X(t) crosses the level b from below at time t when X(t − t) < b for an arbitrary small t > 0 as illustrated in Figure 11.1. In other words, the random event is described as B=
˙ ˙ X(t), X(t) : X(t − t) = X(t) − X(t)t + O t 2 < b ˙ − t) > 0 and X(t) = b . and X(t
(11.1)
This random event is denoted by the region B in the state space spanned by (x, x) ˙ as shown in Figure 11.2. 187
Chapter 11. Structural Reliability
188
Figure 11.1.
Figure 11.2.
A realization of crossing level b from below of a scalar random process X(t).
The region in the state space defining the random event of level b crossing from below.
By definition, the probability of this event denoted as P (b+ , t) is given by P (b+ , t) = pXX˙ (x, x, ˙ t) dx d x˙ B
∞ b =
pXX˙ (x, x, ˙ t) dx d x. ˙
(11.2)
˙ 0 b−xt
For small t, we apply the mean value theorem of integration b
pXX˙ (x, x, ˙ t) dx = pXX˙ (b, x, ˙ t)xt ˙ + O t 2 .
(11.3)
b−xt ˙
Hence, +
∞
P (b , t) = t
pXX˙ (b, x, ˙ t)x˙ d x. ˙ 0
(11.4)
11.2. Level crossing
189
The ratio P (b+ , t)/t as t → 0 is defined as the mean frequency of upcrossing of level b. The mean crossing frequency denoted as vb+ (t) is given by vb+ (t)
∞ =
pXX˙ (b, x, ˙ t)x˙ d x. ˙
(11.5)
0
E XAMPLE 11.1. Assume that X(t) is a stationary Gaussian process with zero mean. The joint PDF of the response is 1 1 x2 x˙ 2 ˙ = exp − + 2 . pXX˙ (x, x) (11.6) 2πσX σX˙ 2 σX2 σX˙ Hence, −
vb+ =
b2 2 2σX
e 2πσX σX˙
∞ 0
b2 1 x˙ 2 1 σX˙ − 2σ 2 X. exp − e x ˙ d x ˙ = 2 σ 2˙ 2π σX
(11.7)
X
This result is due to Rice (Rice, 1954). Recall the definitions of the spectral moments λ0 , λ2 and the geometrical meanfunction ing of the radius of gyration γ2 of the power spectral density √ G(ω) √ about the origin ω = 0 in Section 4.2. We have σX = λ0 , σX˙ = λ2 , and γ2 = (λ2 /λ0 )1/2 . Consequently, we express the mean crossing frequency as # λ2 − 2λb2 1 γ2 − 2λb2 e 0 = e 0. vb+ = (11.8) 2π λ0 2π When b = 0, we denote γ2 (11.9) . 2π v0 is the mean equivalent frequency of the random process. In radians per second, we have the mean equivalent frequency v0 = vb+ =
ωe = 2πv0 = γ2 =
σX˙ . σX
(11.10)
11.2.2. Method of counting process Let us now consider the approach of counting process. Define a random process as Y (t) = H X(t) − b , (11.11)
Chapter 11. Structural Reliability
190
Figure 11.3.
A realization of the counting process Y (t) corresponding to Figure 11.1.
where H (t) is the Heaviside step function defined as 1, t > 0, H (t) = 1/2, t = 0, 0, t < 0.
(11.12)
Corresponding to Figure 11.1, a realization of Y (t) is shown in Figure 11.3. Differentiate Y (t) with respect to t in some sense. We have ˙ Y˙ (t) = δ X(t) − b X(t),
(11.13)
where δ(t) is the Dirac delta function. Y˙ (t) represents a sequence of impulses as shown in Figure 11.4. ˙ When X(t) crosses b from below with X(t) > 0, Y˙ (t) is a positive impulse. ˙ When X(t) crosses b from above with X(t) < 0, Y˙ (t) is a negative impulse. The net number of crossings, from below and above, during the time interval (t − t, t) is Nb± (t
t − t, t) =
˙ dt. δ X(t) − b X(t)
(11.14)
t−t
Figure 11.4.
A realization of the derivative of the counting process Y (t) corresponding to Figure 11.3.
11.2. Level crossing
191
Nb± (t − t, t) is known as a counting process. The mean of the counting process is t
E Nb± (t − t, t) =
˙ dt E δ X(t) − b X(t)
t−t
t
∞ ∞ pXX˙ (x, x, ˙ t)δ(x − b)|x| ˙ dx d x˙ dt
= t−t −∞ −∞
0 pXX˙ (b, x, ˙ t)(−x) ˙ d x˙
= t −∞
∞ + t
pXX˙ (b, x, ˙ t)x˙ d x˙ + O t 2 .
(11.15)
0
The mean crossing frequency from below and above is then E[Nb± (t − t, t)] t→0 t ∞
= pXX˙ (b, −x, ˙ t) + pXX˙ (b, x, ˙ t) x˙ d x. ˙
vb± (t) = lim
(11.16)
0
The first part of this integration represents the mean crossing frequency from above, while the second part is the same as in equation (11.5). When ˙ t) = pXX˙ (b, x, ˙ t), which is true for most physical processes, the pXX˙ (b, −x, mean crossing frequency from above is equal to that from below. 11.2.3. Higher order statistics of level crossing Equation (11.14) represents a nonlinear transformation from a pair of random ˙ variables (X(t), X(t)) to one random variable Nb± (t −t, t). In theory, for a given ˙ t), we can find the PDF of the counting process Nb± (t − joint PDF pXX˙ (x, x, t, t). It is, however, a difficult task. ˙ Assume that a higher order joint PDF pXX˙ (x1 , x˙1 , t1 ; x2 , x˙2 , t2 ) of (X(t), X(t)) exists. We can evaluate the correlation function of the counting process. Take vb+ (t) as an example.
E vb+ (t1 )vb+ (t2 ) =
∞ ∞ pXX˙ (x1 , x˙1 , t1 ; x2 , x˙2 , t2 )x˙1 x˙2 d x˙1 d x˙2 . 0
0
(11.17)
Chapter 11. Structural Reliability
192
Figure 11.5.
A realization of dual level crossings of a scalar random process X(t).
11.2.4. Dual level crossing The above discussion can be extended to the case where the response X(t) crosses a lower level a from above or a upper level b from below as illustrated in Figure 11.5. The random event that the response X(t) crosses the level b from below OR that X(t) crosses a from above during the time interval (t − t, t) is represented by two regions in the state space in Figure 11.6. These crossings represent two mutually exclusive random events. Therefore, the probability of the joint event is equal to the sum of the probabilities of the two crossings during the time interval (t − t, t). It is given by pXX˙ (x, x, ˙ t) dx d x˙ P a − , b+ , t = A∪B ˙ 0 a− xt
∞ b pXX˙ (x, x, ˙ t) dx d x˙ +
= −∞
pXX˙ (x, x, ˙ t) dx d x. ˙ (11.18) ˙ 0 b−xt
a
Following the same steps to deal with small time interval t, we obtain the mean frequency of crossings of the two levels ± vab
0 =−
pXX˙ (a, x, ˙ t)x˙ d x˙ +
−∞
=
∞
∞
pXX˙ (b, x, ˙ t)x˙ d x˙ 0
pXX˙ (a, −x, ˙ t) + pXX˙ (b, x, ˙ t) x˙ d x. ˙
(11.19)
0
Hence, the total outcrossing mean frequency is the sum of the outcrossing frequencies at both the upper and lower level. When a = b, this result becomes the same as in equation (11.16).
11.2. Level crossing
Figure 11.6.
193
The regions in the state space defining the random event of level b crossing from below or level a crossing from above.
E XAMPLE 11.2. Consider again the stationary Gaussian process X(t) with zero mean in the previous example. We have the mean crossing frequency of the dual levels as ± vab
=
=
a2 2 2σX
−
1 σX˙ % e 2π σX
2 − a2 2σX
e
−
+e 2πσX σX˙
b2 2 2σX
∞ 0
+e
1 x˙ 2 exp − x˙ d x˙ 2 σ 2˙ X
2 − b2 2σX
& .
(11.20)
Make use of the spectral moments λ0 , λ2 and the geometrical meaning of the radius of gyration γ2 of the power spectral density function G(ω) about the origin ω = 0 in Section 4.2. We have # 2 & 2 & λ2 % − 2λa2 1 γ2 % − 2λa2 −b −b ± vab (11.21) = e 0 + e 2λ0 . e 0 + e 2λ0 = 2π λ0 2π This result can also be derived by using the counting process. This is left as exercise. 11.2.5. Local minima and maxima When a function reaches a local maximum, its first order derivative is zero and the second order derivative is negative. Assume that the random process X(t) is twice differentiable in the mean square sense, and the joint PDF pX˙ X¨ (x, ˙ x, ¨ t) of
Chapter 11. Structural Reliability
194
its velocity and acceleration exists. Hence, the expected number of local maxima of the process X(t) per unit time is the same as the mean frequency of the veloc˙ ity process X(t) crossing zero from above. From equation (11.16), we have the expected number of local maxima of the process X(t) per unit time ∞ μ0 (t) =
pX˙ X¨ (0, −x, ¨ t)x¨ d x. ¨
(11.22)
0
E XAMPLE 11.3. When X˙ and X¨ are stationary and jointly Gaussian, then 1 x˙ x¨ 1 ˙ x) ¨ = φ φ , pX˙ X¨ (x, (11.23) σX˙ σX˙ σX¨ σX¨ where φ(x) is the standard unit-variant Gaussian PDF. In this case, μ0 (t) is constant # ∞ 1 1 x¨ 2 1 λ4 1 σX¨ μ0 = (11.24) exp − = , x¨ d x¨ = 2πσX˙ σX¨ 2 σ 2¨ 2π σX˙ 2π λ2 0
X
where λ2 and λ4 are the spectral moments. Recall that μ0 is the parameter used in the definition of the bandwidth parameter γ and ε in equations (4.23) and (4.24). Hence, we have # # λ22 λ0 λ4 μ0 = , ε = . 1 − γ = (11.25) ν0 λ0 λ4 λ22 When a function reaches a local minimum, its first order derivative is also zero, but the second order derivative is positive. The expected number of local minima of the process X(t) per unit time is the same as the mean frequency of the velocity ˙ process X(t) crossing zero from below. Hence, we have the expected number of local minima of the process X(t) per unit time ∞ μ0 (t) =
pX˙ X¨ (0, x, ¨ t)x¨ d x. ¨
(11.26)
0
In physical systems where the random process is continuous in the mean square sense, the expected numbers of local minima and local maxima are equal. That is why we use the same notation μ0 (t) for the two expected numbers. Distribution of local maxima of stationary Gaussian processes Let X(t) be a stationary and ergodic Gaussian process with the mean value μX , standard deviation σX and autocorrelation function ρXX (τ ). The magnitude of a
11.2. Level crossing
195
local maximum of the process is denoted by A. We derive the distribution function FA (a) = 1 − P (A > a) of A in the following. Let T be the length of a realization of the process. When T is sufficiently large, the probability P (A > a) can be estimated by counting the number of maxima above a in the interval [0, T ] in proportion to the total number of local maxima in [0, T ]. Let Na (t) be the number of maxima above a per unit time and N0 (t) be the number of maxima per unit time. N˙ a (t) and N˙ 0 (t) are the formal derivative of Na (t) and N0 (t). By definition, we have, -T 1 T →∞ T 0 -T lim T1 T →∞ 0 lim
P (A > a) =
N˙ a (t) dt = N˙ 0 (t) dt
E[N˙ a ] . E[N˙ 0 ]
(11.27)
For stationary Gaussian processes, E[N˙ 0 ] is given by equation (11.24) 1 σX¨ . 2π σX˙
E[N˙ 0 ] = μ0 =
(11.28)
By definition, E[N˙ a ] is given by E[N˙ a ] =
∞ 0 −xp ¨ XX˙ X¨ (x, 0, x) ¨ d x¨ dx.
(11.29)
a −∞
˙ ¨ The joint PDF of the vector XT = [X(t), X(t), X(t)] is written in terms of the normalized random variables U (t) = (X(t) − μX )/σX ,
˙ V (t) = X(t)/σ X˙ ,
˙ x) ¨ = pXX˙ X¨ (x, x,
1
¨ W (t) = X(t)/σ X¨
as √ (2π) σX σX˙ σX¨ 1 − r 2 2 u + (1 − r 2 )v 2 + w 2 − 2ruw × exp − 2(1 − r 2 ) 1 w − ru 1 ϕ(u) √ φ √ , = √ σX 2πσX˙ σX¨ 1 − r 2 1 − r2
where u =
x−μX σX ,
v=
3 2
x˙ σX˙ ,
w=
x¨ σX¨ ,
σ2
(11.30)
and r = − σXXσ˙ ¨ = − √λλ2λ . φ(u) denotes X
0 4
Chapter 11. Structural Reliability
196
the standard unit variant Gaussian PDF. Hence, we have ∞ σ 1 ¨ X φ(u) E[N˙ a ] = √ 2π σX˙ 0 × −∞
where α =
a−μX σX .
α
w − ru −√ φ √ 1 − r2 1 − r2 w
dw du,
Note that,
w − ru −√ φ √ dw 1 − r2 1 − r2 −∞ ru ru 2 = 1−r φ √ − ruΦ − √ . 1 − r2 1 − r2 Hence, ∞ √ ru 2 P (A > a) = 2π 1 − r φ √ 1 − r2 α ru ru −√ Φ −√ φ(u) du, 1 − r2 1 − r2 and the PDF of the dFA (a) α φ(α) 2 pA (a) = 1−r φ √ = da σX 1 − r2 √ rα − 2πrαΦ − √ φ(α). 1 − r2 0
w
(11.31)
(11.32)
(11.33)
(11.34)
11.2.6. Envelope processes We have discussed level crossing, local minima and maxima of a generic scalar random process X(t). For mechanical or structural systems, X(t) can represent the displacement of the response. Other scalar processes of the random response can be constructed such as the envelope process. In the following, we study a couple of special envelope processes. Energy envelope A common energy envelope process is defined as # ˙ 2 2 X(t) 1 X(t) − μX + , E(t) = σX ωe
(11.35)
11.2. Level crossing
197
σ
where ωe = σXX˙ . When X(t) is the displacement of the response of a system, E(t) ˙ has a unit of energy. Furthermore, when X(t) = 0, E(t) = (X(t) − μX )/σX is a local minimum or maximum. Hence, E(t) is called an energy envelope process. Assume that we know the joint PDF of the response X(t) and its derivatives. We can obtain the joint PDF for the envelope process E(t). With the joint PDF, we can then obtain the frequency of level crossings of the envelope process. The following example demonstrates how to do this. E XAMPLE 11.4. Let X(t) be a stationary Gaussian process with the mean μX , standard deviation σX and autocorrelation coefficient function ρXX (τ ). Assume ¨ exists in the mean square sense. Define the normalized random variables that X(t) as U (t) =
X(t) − μX , σX
V (t) =
˙ X(t) , σX˙
W (t) =
¨ X(t) , σX¨
(11.36)
where (2)
σX2˙ = −ρXX (0)σX2 ,
(4)
σX2¨ = ρXX (0)σX2 .
(11.37)
ρ (j ) (τ ) denotes the j th order derivative of the autocorrelation coefficient function with respect to the argument. ˙ ˙ X(t)] ¨ Note that E[X(t)X(t)] = E[X(t) = 0. In terms of the normalized variables, we have E[U (t)V (t)] = E[V (t)W (t)] = 0. The covariance matrix of the random vector XT = [U (t), V (t), W (t)] and its inverse are ⎡ ⎤ 1 0 r CXX = ⎣ 0 1 0 ⎦ , (11.38) r 0 1 ⎤ ⎡ 1 0 −r 1 ⎣ 0 1 − r2 0 ⎦ , C−1 (11.39) XX = 1 − r 2 −r 0 1 where
r = E U (t)W (t) =
¨ E[X(t)X(t)] σX σX¨
= σX2
(2) σ 2˙ ρXX (0) λ2 = − X = −√ . σX σX¨ σX σX¨ λ0 λ4
(11.40)
Chapter 11. Structural Reliability
198
The joint PDF of the vector X is 1 √ (2π) 1 − r 2 2 u + (1 − r 2 )v 2 + w 2 − 2ruw × exp − 2(1 − r 2 ) = pU V (u, v) · pW |U V (w|u, v),
pU V W (u, v, w) =
3 2
(11.41)
where
2 1 u + v2 exp − , pU V (u, v) = 2π 2 1 1 w − ru 2 . pW |U V (w|u, v) = exp − √ 2 1 − r2 2π(1 − r 2 )
(11.42) (11.43)
pU V (u, v) is the joint PDF of U and V , and pW |U V (w|u, v) is the PDF of W conditional on (U, V ) = (u, v). Let us now relate U and V to the envelope process E. Consider a polar transformation U (t) = E(t) cos Θ(t),
V (t) = E(t) sin Θ(t).
(11.44)
Differentiating the transformation, we obtain σX dU (t) σX ˙ ˙ (E cos Θ − E sin Θ · Θ), = σX˙ dt σX˙ σ˙ σ ˙ dV (t) ˙ = X (E˙ sin Θ + E cos Θ · Θ). W (t) = X σX¨ dt σX¨ V (t) =
(11.45) (11.46)
Recall that V (t) = E(t) sin Θ(t). Equation (11.45) leads to ˙ Θ(t) =
˙ cos Θ(t) σX˙ E(t) . − E(t) sin Θ(t) σX
(11.47)
Equation (11.46) results in σ 2˙ σX˙ E˙ − X E cos Θ σX¨ sin Θ σX σX¨ 1 E˙ . = r E cos Θ − ωe sin Θ
W (t) =
(11.48)
Together, equations (11.44) and (11.48) complete a transformation between ˙ Θ) and (U, V , W ). The Jacobi of the transformation is (E, E, det du dv dw de d e˙ dθ
11.2. Level crossing
⎡ cos θ = det ⎣ sin θ r cos θ e r . = ωe sin θ
0 0 − ω1e
r sin θ
−e sin θ e cos θ r −e sin θ + ωe˙e
199
cos θ sin2 θ
⎤ ⎦ (11.49)
˙ Hence, the joint PDF of (E(t), E(t), Θ(t)) is e r pE EΘ ˙ θ ) = pU V W (u, v, w) ˙ (e, e, ω sin θ e
e r . = pU V (u, v)pW |U V (w|u, v) ωe sin θ ˙ Θ), we have In terms of the transformed variables (E, E, 1 1 pU V (u, v) = exp − e2 cos2 θ + e2 sin2 θ 2π 2 1 1 = exp − e2 , 2π 2 pW |U V (w|u, v) =
(11.50)
(11.51)
1
2π(1 − r 2 ) e˙ 2 [r(e cos θ − ωe sin θ ) − re cos θ ] × exp − 2(1 − r 2 ) sin θ ωe r 1 e˙2 exp − =√ (11.52) , 2 σ 2 (θ ) 2πσ (θ ) $ 2 = εωe |sin θ|. Hence, we have where σ (θ) = ωe |sin θ | 1−r r2 −1 2 e e˙2 exp ˙ θ) = e + 2 . pE EΘ (11.53) ˙ (e, e, 2 (2π)3/2 σ (θ) σ (θ ) ˙ can be readily obFrom equation (11.53) the joint PDFs of (E, Θ) and (E, E) tained as e 1 pEΘ (e, θ ) = (11.54) exp − e2 , 2π 2 1 1 ˙ = e exp − e2 pE E˙ (e, e) 2 2π 2π e˙2 1 × √ (11.55) exp − 2 dθ. 2σ (θ ) 2πσ (θ ) 0
Chapter 11. Structural Reliability
200
˙ Equation (11.53) suggests that E(t) and Θ(t) are correlated. Equation (11.54) indicates that E(t) and Θ(t) are mutually independent with the marginal distribution of E(t) being R(1) and of Θ(t) being U (0, 1). Equation (11.55) also ˙ are independent. The marginal distribution of E(t) ˙ is indicates that E(t) and E(t) not necessarily normal. This is only true in the limiting case of extreme narrowbandedness with r = −1. Consider now the crossing of the envelope process of the level β from below. The crossing frequency is + vE,β =
∞
1 1 pE E˙ (β, e) ˙ e˙ d e˙ = β exp − β 2 2 2π
0
2π ∞ × 0
0
1 e˙2 e˙ exp − d e˙ dθ √ 2 σ 2 (θ ) 2π σ (θ )
2π 1 1 2 σ (θ) dθ = β exp − β 2 (2π)3/2 0
2π 1 1 2 εωe |sin θ | dθ = β exp − β 2 (2π)3/2 = εωe
8 1 2 β exp − β . π 2
0
(11.56)
11.3. Vector process The concept of level crossing of scalar random processes can be extended to vector random processes X(t). Let Γ denote the boundary of a domain, which the process X(t) crosses at time t. Consider a point b on Γ with a unit outward normal vector n. If the process crosses Γ at this point during the time interval (t, t + t). Then, we must have (X(t) − b)T n < 0 and (X(t + t) − b)T n > 0. This is a well defined random event. The probability of this event is
T T + vX,b t = P X(t) − b n < 0 ∧ X(t + t) − b n > 0 ∞ = t 0
pXX˙ n (b, x˙n , t)x˙n d x˙n + O t 2 ,
(11.57)
11.3. Vector process
201
where ˙ T n. X˙ n (t) = X(t)
(11.58)
= (X(t)− ˙ T n + O(t 2 ). p ˙ (x, x˙n , t) is the joint PDF of X(t) and X˙ n (t). b)T n + t X(t) X Xn The frequency of the vector process outcrossing the entire boundary Γ is In the derivation, we have used the Taylor expansion (X(t +t)−b)T n
+ vX,Γ
∞ =
pXX˙ n (b, x˙n , t)x˙n d x˙n dA(b), Γ
(11.59)
0
where dA(b) is a differential element of Γ at point b. E XAMPLE 11.5. Consider a rectangular domain shown in Figure 11.7, S = (x1 , x2 ) | a1 < x1 < b1 ∧ a2 < x2 < b2 , (11.60) where a1 , b1 , a2 , b2 are constant. The boundary Γ of S consists of four straight lines. The frequency of the vector process X(t) crossing Γ is computed along these four lines. Left surface: n = {1, 0}T , b = {b1 , x2 }T , ˙ X˙ n (t) = nT X(t) = X˙ 1 (t), pXX˙ n (b, x˙n , t) = pX1 X2 X˙ 1 (b1 , x2 , x˙1 , t).
Figure 11.7.
The rectangular domain and its boundary.
(11.61)
Chapter 11. Structural Reliability
202
Right surface: n = {−1, 0}T ,
b = {a1 , x2 }T ,
˙ X˙ n (t) = nT X(t) = −X˙ 1 (t),
(11.62)
pXX˙ n (b, x˙n , t) = pX1 X2 X˙ 1 (a1 , x2 , −x˙1 , t). Top surface: n = {0, 1}T ,
b = {x1 , b2 }T ,
˙ X˙ n (t) = nT X(t) = X˙ 2 (t),
(11.63)
pXX˙ n (b, x˙n , t) = pX1 X2 X˙ 2 (x1 , b2 , x˙2 , t). Bottom surface: n = {0, −1}T ,
b = {x1 , a2 }T ,
˙ X˙ n (t) = nT X(t) = −X˙ 2 (t),
(11.64)
pXX˙ n (b, x˙n , t) = pX1 X2 X˙ 2 (x1 , a2 , −x˙2 , t). The frequency of the vector process outcrossing the boundary Γ is given by + vX,Γ
b2 ∞ =
pX1 X2 X˙ 1 (b1 , x2 , x˙1 , t)x˙1 d x˙1 dx2 a2 0
b2 ∞ +
pX1 X2 X˙ 1 (a1 , x2 , −x˙1 , t)x˙1 d x˙1 dx2 a2 0
b1 ∞ +
pX1 X2 X˙ 2 (x1 , b2 , x˙2 , t)x˙2 d x˙2 dx1 a1 0
b1 ∞ +
pX1 X2 X˙ 2 (x1 , a2 , −x˙2 , t)x˙2 d x˙2 dx1 .
(11.65)
a1 0
11.4. First-passage reliability based on level crossing We now make use of the results of level crossings of random responses to assess the reliability of the system. There are engineering systems whose response X(t) must be in a domain all the time in order to be safe and operational. This domain is known as the safe domain S. Let S = {x| −∞ < x b} be a one-dimensional
11.4. First-passage reliability based on level crossing
203
safe domain for the system. When the response level is higher than b, the system is considered to have failed. Such a failure is called the first-passage failure. The question is then when the system response X(t) crosses b when the system is initially safe with X(0) ∈ S. Consider a stationary scalar process X(t). We assume that the up-crossing of level b is a rare event such that the crossing rate is a constant and there is not more than one crossing during an infinitesimal time interval dt. Hence, the counting process of the level crossings can be modelled as a Poisson process. Recall that n −λt the probability of exactly n crossings at time t is given by PN (n, t) = (λt) n! e where λ is the mean frequency of up-crossing of level b. By definition, the mean frequency vb+ of up-crossing is equal to the arrival rate λ of the Poisson process. Therefore, the probability that the system has zero crossing up to time t is given by +
PN (0, t) = e−vb t .
(11.66)
The probability that the system crosses the level b from below at least once during the time interval (0, t] denoted by FT (t) is then +
FT (t) = 1 − PN (0, t) = 1 − e−vb t .
(11.67)
T denotes the random variable of the first-passage time. By definition, FT (t) represents the distribution function of the random variable T such that +
P (T t) = FT (t) = 1 − e−vb t .
(11.68)
The PDF of the first passage time T is given by + dFT (t) (11.69) = vb+ · e−vb t . dt The reliability of the system in the sense of no up-crossing of level b at time t
pT (t) =
is +
R(t) = PN (0, t) = 1 − FT (t) = e−vb t .
(11.70)
The mean first-passage time is ∞ E[T ] =
pT (t)t dt = 0
1 . vb+
(11.71)
The variance of the first-passage time is ∞ σT2
=
pT (t)t 2 dt − E[T ]2 = 0
1
. (vb+ )2
(11.72)
Chapter 11. Structural Reliability
204
The Poisson approximation of the first-passage problem is reasonable only when the level crossing is rare and statistically uniform in time. Many engineering systems don’t necessarily meet all these requirements. There are notably so-called seasonal effects when the level crossing is more frequent during a certain period of time. The general theory presented in Section 7.3 can address this issue.
11.5. First-passage time probability – general approach Assume that the system response is initially in the safe domain such that X(0) ∈ S. In Section 7.3, we have presented the general theory of first-passage time probability. Here, we briefly review the theory and present an example of a linear oscillatory system subject to the white noise excitation. The probability density function of the first-passage time denoted by pT (τ |x0 ) for a stationary process satisfies the equation ∂pT (τ |x0 ) ∂pT (τ |x0 ) Aj (x0 ) = ∂τ ∂x0j n
j =1
+
n n Bj k (x0 ) ∂ 2 pT (τ |x0 )
2!
j =1 k=1
∂x0j ∂x0k
,
(11.73)
subject to the boundary condition pT (τ |x0 ) = 0,
τ > 0, x0 ∈ Γ,
(11.74)
and an initial condition pT (τ |x0 ) = δ(x0 ),
τ = 0, x0 ∈ S.
(11.75)
The rth order moment of the first-passage time satisfies the generalized Pontryagin–Vitt equations −rMr−1 (x0 ) =
n
Aj (x0 )
j =1
+
∂Mr (x0 ) ∂x0j
n n Bj k (x0 ) ∂ 2 Mr (x0 ) j =1 k=1
2!
∂x0j ∂x0k
.
(11.76)
subject to the boundary condition Mr (x0 ) = 0,
x0 ∈ Γ, r = 1, 2, 3, . . .
(11.77)
11.5. First-passage time probability – general approach
205
In particular, the mean of the first-passage time satisfies the Pontryagin–Vitt equation, −1 =
n j =1
∂M1 (x0 ) Bj k (x0 ) ∂ 2 M1 (x0 ) Aj (x0 ) + . ∂x0j 2! ∂x0j ∂x0k n
n
(11.78)
j =1 k=1
11.5.1. Example of SDOF linear oscillators Consider the linear oscillator subject to the white noise excitation. Recall the example in Section 7.1. The equation of motion is written in the Itô sense as dX1 = X2 dt, dX2 = −2ζ ωn X2 − ωn2 X1 dt + dB(t), where B(t) is a Brownian motion with the following properties
E dB(t) = 0,
2D dt, t = s, E dB(t)dB(s) = 0, t = s.
(11.79)
(11.80) (11.81)
Let X = [X1 , X2 ]T be the vector stochastic process of the system. According to equation (7.4), we have A1 (x0 ) = x02 , A2 (x0 ) = −ωn2 x01 − 2ζ ωn x02 , B11 (x0 ) = B12 (x0 ) = B21 (x0 ) = 0,
(11.82)
B22 (x0 ) = 2D, The backward Kolmogorov equation for the oscillator is given by ∂pX ∂ pX (x, t|x0 , t0 ) = x02 , ∂t0 ∂x01 ∂pX ∂ 2 pX +D 2 . − 2ζ ωn x02 + ωn2 x01 ∂x02 ∂x02 −
(11.83)
Assume that the response is a stationary process. The first-passage time probability density function satisfies the following equation, ∂pT (τ |x0 ) ∂pT (τ |x0 ) ∂pT (τ |x0 ) − 2ζ ωn x02 + ωn2 x01 = x02 ∂τ ∂x01 ∂x02 +D
∂ 2 pT (τ |x0 ) . 2 ∂x02
(11.84)
Chapter 11. Structural Reliability
206
The generalized Pontryagin–Vitt equations read ∂Mr (x0 ) , ∂x01 ∂Mr (x0 ) ∂ 2 Mr (x0 ) +D . − 2ζ ωn x02 + ωn2 x01 2 ∂x02 ∂x02 −rMr−1 (x0 ) = x02
(11.85)
11.5.2. Common safe domains There are three common safe domains for SDOF systems studied extensively in the literature. They are single-sided, double-sided and envelope domains as shown in Figure 11.8. In general, analytical solutions to the first-passage time problem are difficult to obtain. Numerical solutions are common. 11.5.3. Envelope process of SDOF linear oscillators Assume that the oscillator is lightly damped ζ 1. Consider the transformation X1 = A cos(ωn t + Φ),
(11.86)
X2 = −Aωn sin(ωn t + Φ). A represents the envelope of the response state vector (X1 , X2 ). Equation (11.79) is converted to 1
−2ζ ωn2 A sin2 (ωn t + Φ) dt − sin(ωn t + Φ) dB(t) , ωn 1
dΦ = −ζ ωn2 A sin(2ωn t + 2Φ) dt − cos(ωn t + Φ) dB(t) . Aωn dA =
Figure 11.8.
(11.87) (11.88)
From left to right, the single-sided, double-sided and envelope safe domains for SDOF systems.
11.5. First-passage time probability – general approach
207
Applying the method of stochastic averaging (Khasminskii, 1966), we obtain √ D D dβ1 (t), dA = −ζ ωn A + (11.89) dt + ωn 2ωn2 A 1 dΦ = (11.90) dβ2 (t), Aωn where β1 (t) and β2 (t) are two independent unit Brownian motions. Since the envelope process is decoupled from the phase response Φ(t), we can derive the backward Kolmogorov equation for the first-passage time probability density function as follows ∂pT (τ |a0 ) D ∂pT (τ |a0 ) = −ζ ωn a0 + 2 ∂τ ∂a0 2ωn a0 +
D ∂ 2 pT (τ |a0 ) . 2ωn2 ∂a02
(11.91)
The initial and boundary conditions for the first-passage time probability density function can be specified as pT (0|a0 ) = 1, pT (τ |ασX1 ) = 0,
0 a0 ασX1 ,
(11.92)
pT (τ |0) < ∞, τ > 0,
where σX1 is the steady state standard deviation of the response X1 and α is a scaling factor describing the critical level of the envelope process. Recall that σX2 1 =
D 2ζ ωn3
or
D = ζ ωn σX2 1 . 2ωn2
(11.93)
Following the method of separation of variables, we assume that pT (τ |a0 ) = T (τ )f (a0 ). This leads to ˙ D D f¨(a0 ) f (a0 ) T˙ = −ζ ωn a0 + + 2 T 2ωn a0 f (a0 ) 2ωn2 f (a0 ) ≡ −2ζ ωn λ.
(11.94)
(11.95)
Hence, we have T (τ ) = T0 e−2ζ ωn λτ , and an ordinary differential equation for the function f (a0 ) ζ ωn σX2 1 f˙(a0 ) + 2ζ ωn λf (a0 ) = 0. (11.96) ζ ωn σX2 1 f¨(a0 ) + −ζ ωn a0 + a0
Chapter 11. Structural Reliability
208
Introduce a new variable z =
1 2σX2
a02 . We have
1
zf¨(z) + (1 − z)f˙(z) + λf (z) = 0,
0 a. E XERCISE 11.6. Continue with Exercise 11.5. Define an envelope process as # x˙ 2 A = x2 + 2 . (11.156) ω0 Show that the probability of the envelope being less than a given level α is given by √ ω0 α 2 −x 2 α Fα (A) = P (A α) = 4 dx (11.157) pXX˙ (x, x) ˙ d x, ˙ 0
0
and the probability density function of A is pA (α) =
dFα (A) πα . = dα 2abω0
(11.158)
Chapter 12
Monte Carlo Simulation
12.1. Random numbers 12.1.1. Linear congruential method A sequence of pseudo-random numbers Ui can be generated by the linear congruential method (Knuth, 1998). Zi+1 = (aZi + c) mod m, Ui+1 = Ui+1 ÷ m,
(12.1)
where m = − 1, a and c are constants. c is often set to be zero, while a is a large number. Some common values for a in different software packages include 397204094, 764261123, 507435369, and 1277850838. The initial value of Z0 , also known as a seed, lies in the range 0 < Z0 < 231 − 2. Many studies have been done to show that the collection of a large number of the random numbers Ui has a uniform distribution over (0, 1), and each random number can be treated as being independent from others. 231
12.1.2. Transformation of uniform random numbers Let FX (x) be the probability distribution function of a random variable X, and U ∈ U (0, 1). Recall that FX (x) is monotonically increasing in x and its inverse FX−1 (u) exists with 0 u 1. Consider a random variable defined as Y = FX−1 (U ).
(12.2)
The distribution function of Y is FY (x) = P (Y x) = P FX−1 (U ) x = P U FX (x) = FX (x).
(12.3)
Note that P (U u) = u with u ∈ [0, 1]. Hence, Y and X are identically distributed. The transformation in equation (12.2) generates independent samples of 227
Chapter 12. Monte Carlo Simulation
228
X based on independent samples of U . This method is referred as the inverse method. E XAMPLE 12.1. Consider a Rayleigh distributed variable R ∈ R(σ 2 ) with 1 r2 FR (r) = 1 − exp − . 2 σ2
(12.4)
Hence, the transformation in equation (12.2) reads R = σ −2 ln(1 − U ).
(12.5)
12.1.3. Gaussian random numbers In the exercises of Chapter 2, we have shown the Box–Muller transformation from uniform random variables to Gaussian ones. Let U1 ∈ U (0, 1) and U2 ∈ U (0, 1) be independent, uniformly distributed random numbers. The random numbers X1 and X2 defined by the following transformation √ −2 ln U2 cos(2πU1 ) X1 = √ (12.6) , −2 ln U2 sin(2πU1 ) X2 are mutually independent, and Gaussian. A polar form of the Box–Muller transformation is more efficient and robust. The polar method is described by the following algorithm. Let X1 = 2U1 − 1 and X2 = 2U2 − 1. Then, X1 ∈ U (−1, 1) and X2 ∈ U (−1, 1). W = X12 + X22 , if W 1, W = −2 ln W/W , Y1 X1 =W Y2 X2 else regenerate U1 and U2 to repeat. In each step, the polar method gives a pair of independent Gaussian random numbers Y1 and Y2 with zero mean and unit variance. With both the uniform random numbers and Gaussian random numbers, we can construct many random numbers with different distributions by means of transformations.
12.2. Random processes
229
12.1.4. Vector of random numbers Consider a set of independent Gaussian random numbers {Xi } ∈ N (0, 1) generated by the method described above. Let X be a n-dimensional vector consisting of the random numbers Xi , and Y be a n-dimensional vector with prescribed mean μY and covariance matrix CYY . How to find a transformation from X to Y? Note that the mean of X is μX = 0 and the covariance matrix is CXX = I. Recall the linear transformation discussed in Section 2.8.1, Y = a + BX.
(12.7)
Take the mathematical expectation of the transformation. We have μY = a + BμX = a.
(12.8)
The covariance matrix CYY of Y is given by CYY = BCXX BT = BBT .
(12.9)
Since CYY is positive definite, we can obtain a spectral decomposition as ⎡ ⎤ λ1 ⎢ ⎥ λ2 ⎢ ⎥ T CYY = ⎢ (12.10) ⎥ , . .. ⎣ ⎦ λn where λi 0 are the eigenvalues of the matrix CYY and is the associated eigenmatrix. Hence, we obtain the transformation matrix, ⎡√ ⎤ λ1 √ ⎢ ⎥ λ2 ⎢ ⎥ B=⎢ (12.11) ⎥. . .. ⎣ ⎦ √ λn
12.2. Random processes 12.2.1. Gaussian white noise In theory, the Gaussian white noise W (t) is a mathematical idealization, and cannot be physically realized. Therefore, we cannot simulate a true white noise. Instead, we can simulate a broadband process with a power spectral density function, which is flat and approximately constant at the value S0 of the targeted white noise over a range of frequencies. The bandwidth of the simulated process should be selected based on the consideration of the detailed application.
230
Chapter 12. Monte Carlo Simulation
Consider random process consisting of a sequence of mutually independent 2 ). W are placed at the and identically distributed random numbers Wi ∈ N (0, σW i instants of time Ti = (α + i − 1)t. t is the sample interval and α ∈ U (0, 1) is independent of Wi . In the time interval [Ti , Ti+1 ), the random process is obtained by linear interpolation between the values Wi and Wi+1 , i.e. (Nielsen, 2000) W (t) = Wi + (Wi+1 − Wi )
t − Ti , t
t ∈ [Ti , Ti + t).
(12.12)
The mean function of W (t) is t − Ti μW (t) = E Wi + (Wi+1 − Wi ) t t − E[Ti ] = E[Wi ] + E[Wi+1 − Wi ] (12.13) = 0. t To derive the autocovariance of W (t), we first consider the velocity of W (t). 1 (Wi+1 − Wi ), t ∈ [Ti , Ti+1 ). W˙ (t) = (12.14) t Let τ 0. We have
κW˙ W˙ (τ ) = E W˙ (0)W˙ (τ ) τ 1 2 τ E (W1 − W0 ) 0.
(13.47)
Hence, none of the elements in the first column of the array is negative. The system is stable. However, since there is one zero element, there must be a pair of poles on the imaginary axis. The system is therefore marginally stable as we know it. E XAMPLE 13.7. Consider the closed-loop characteristic equation (13.41) of a second order system under a PD control. We have the Routh array as s2 : 1 ωn2 (1 + KP ) 1 2 s : 2ζ ωn + KD ωn s0 : b1 where
1 1 b1 = − det 2ζ ωn + KD ωn2 2ζ ωn + KD ωn2
ωn2 (1 + KP ) 0
= ωn2 (1 + KP ).
(13.48)
The stability conditions for the closed-loop system are 2ζ ωn + KD ωn2 > 0,
ωn2 (1 + KP ) > 0.
(13.49)
In terms of control gains, KD > −
2ζ , ωn
KP > −1.
(13.50)
EXAMPLE 13.8. Consider the closed-loop characteristic equation (13.40) of a second order system under a PID control. We have the Routh array

s³:  1                ωn²(1 + KP)
s²:  2ζωn + KDωn²     KIωn²
s¹:  b1
s⁰:  c1

where

b1 = −(1/(2ζωn + KDωn²)) det[1, ωn²(1 + KP); 2ζωn + KDωn², KIωn²]
   = −[KIωn² − ωn²(1 + KP)(2ζωn + KDωn²)]/(2ζωn + KDωn²),   (13.51)

c1 = −(1/b1) det[2ζωn + KDωn², KIωn²; b1, 0] = KIωn².   (13.52)

The stability conditions for the closed-loop system are

2ζωn + KDωn² > 0,
ωn²(1 + KP)(2ζωn + KDωn²) − KIωn² > 0,   (13.53)
KIωn² > 0.

In terms of control gains,

KD > −2ζ/ωn,  KP > −1,  0 < KI < (1 + KP)(2ζωn + KDωn²).   (13.54)
This last example suggests that there is a bounded range of integral control gains for which the system is stable. Therefore, the integral control is conditionally stable.
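For a cubic characteristic polynomial the first column of the Routh array reduces to three inequalities; the following sketch (illustrative parameter values, helper name our own) confirms that the bound on KI in equation (13.54) separates stable from unstable closed loops:

```python
import numpy as np

def routh_stable_cubic(a2, a1, a0):
    """First-column Routh test for s^3 + a2*s^2 + a1*s + a0:
    stable iff a2 > 0, a0 > 0 and a2*a1 - a0 > 0."""
    return a2 > 0 and a0 > 0 and a2*a1 - a0 > 0

# PID closed loop: s^3 + (2*zeta*wn + KD*wn^2)*s^2 + wn^2*(1+KP)*s + KI*wn^2.
zeta, wn, KP, KD = 0.1, 2.0, 0.5, 0.5             # illustrative values
KI_max = (1 + KP)*(2*zeta*wn + KD*wn**2)          # upper bound in (13.54)

results = []
for KI in (0.5*KI_max, 1.5*KI_max):
    a2, a1, a0 = 2*zeta*wn + KD*wn**2, wn**2*(1 + KP), KI*wn**2
    by_routh = routh_stable_cubic(a2, a1, a0)
    by_poles = bool(np.all(np.roots([1.0, a2, a1, a0]).real < 0))
    results.append((KI < KI_max, by_routh, by_poles))

print(results)   # [(True, True, True), (False, False, False)]
```

The Routh test and the numerically computed pole locations agree on both sides of the KI bound.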
13.7. Root locus design

We have yet to pick proper control gains to meet the time domain specifications of the system. Here, we introduce the root locus method to design the feedback gains. Consider a proportional feedback control given by U(s) = KP E(s), where E(s) is the tracking error. The closed-loop system response in equation (13.34) now reads

X(s) = [KP G(s)/(1 + KP G(s))] R(s).   (13.55)

The closed-loop characteristic equation has the following equivalent forms, known as the root locus form,

1 + KP G(s) = 0,
A(s) + KP B(s) = 0,   (13.56)
G(s) = −1/KP.
The root locus design of the control makes use of the variation of the roots of the closed-loop characteristic equation as the gain KP varies from zero to positive infinity.

13.7.1. Properties of root locus

1. On the root locus, G(s) is equal to a real negative number. Hence, the phase angle of G(s) must be π or 180°.
2. The root locus starts (KP = 0) from the open-loop poles (A(s) = 0).
3. If the system has p open-loop zeros (B(s) = 0) where p ≤ m, then p branches of the root locus will converge to the open-loop zeros as KP → ∞. The other branches approach infinity at an asymptotic angle given by

φk = [180° + 360°(k − 1)]/(n − m),  k = 1, 2, ..., n − m.

More properties of the root locus can be found in textbooks such as the one by Franklin et al. (2003).
EXAMPLE 13.9. We return to the example of the PID control of the second order system. We rewrite the closed-loop characteristic equation (13.39) as

1 + KP [ωn²(βs² + s + α)]/(s³ + 2ζωns² + ωn²s) = 0,   (13.57)

where α = KI/KP and β = KD/KP. In the root locus design with respect to KP, α and β need to be determined beforehand. They can be chosen such that the zeros of the transfer function, i.e., the roots of βs² + s + α = 0, are at proper locations to shape the closed-loop root locus.
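A root locus can be traced numerically by sweeping KP in A(s) + KP B(s) = 0 and collecting the roots; a sketch for equation (13.57) with hypothetical values of ζ, ωn, α and β:

```python
import numpy as np

zeta, wn, alpha, beta = 0.1, 2.0, 0.5, 0.4            # hypothetical values
A = np.array([1.0, 2*zeta*wn, wn**2, 0.0])            # s^3 + 2 zeta wn s^2 + wn^2 s
B = wn**2 * np.array([0.0, beta, 1.0, alpha])         # wn^2 (beta s^2 + s + alpha)

gains = np.logspace(-3, 3, 200)
locus = [np.roots(A + KP*B) for KP in gains]          # A(s) + KP B(s) = 0

# Branches start at the open-loop poles (KP -> 0), and two of them approach
# the open-loop zeros, the roots of beta s^2 + s + alpha, as KP -> infinity.
print(locus[0])          # approximately the roots of A(s)
print(np.roots(B[1:]))   # the open-loop zeros
```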
Exercises

EXERCISE 13.1. Consider the speed control problem of a DC motor. The equations of motion are given by

J θ̈ + bθ̇ = kt i + T(t),
ke θ̇ + L di/dt + Ri = v(t),   (13.58)

where θ is the rotor angle, J is the rotor inertia, b is the damping coefficient, kt is the torque constant, ke is the emf constant, T is the mechanical torque loading as disturbance, L is the circuit inductance, R is the circuit resistance, i is the current, and v is the applied voltage. Derive the transfer functions from the voltage V(s) to the rotor angle Θ(s) and from V(s) to the current I(s).
Chapter 13. Elements of Feedback Controls
EXERCISE 13.2. Consider the SDOF oscillator,

ẍ + 2ζωnẋ + ωn²x = ωn²u,   (13.59)

and a PD control u = −KP x − KD ẋ. Plot a family of the root loci of the control system with respect to KD when KP is held constant.

EXERCISE 13.3. Reconsider Exercise 13.2 with an acceleration feedback u = −KA ẍ.

EXERCISE 13.4. Write a Matlab program to plot the root locus of the PID control in equation (13.57). Experiment with different selections of α and β and observe the effect of the zeros. Simulate the step responses of the closed-loop system for each control design.

EXERCISE 13.5. The transfer function of an inverted pendulum-cart system from a horizontal control force U(s) to the angle output Θ(s) is given by

Θ(s) = mp l U(s)/{[I(mt + mp) + mt mp l²]s² − mp l(mt + mp)g},   (13.60)

where the inertial terms (I, mt, mp) are all positive, l is the length of the pendulum rod and g is the gravitational acceleration. Consider a PID control U(s) = (KP + sKD + KI/s)(R(s) − Θ(s)). By using the Routh stability criterion, point out which of the following combinations of PID controls will stabilize the system and which will not: (a) PD control (i.e., KI = 0, etc.), (b) PI control, (c) ID control.
Chapter 14
Feedback Control of Stochastic Systems

14.1. Response moment control of SDOF systems

Here, we consider controlling random vibrations of an SDOF oscillator subject to Gaussian white noise excitations. We shall study the regulation of the mean trajectory, reduction of the response variance, and tailoring of the probability density function of the response. Consider an SDOF spring-mass oscillator subject to an external random loading W(t) and a control U(t). The motion of the system is governed by the stochastic differential equation

Ẍ + 2ζωnẊ + ωn²X = W(t) + U(t),  t > 0.   (14.1)

Assume that W(t) is the Gaussian white noise such that

E[W(t)] = 0,  E[W(t)W(t + τ)] = 2πS0δ(τ).   (14.2)

Consider a proportional and derivative control given by

U(t) = −KP X(t) − KD Ẋ(t),   (14.3)
where KP > 0 and KD > 0 are the control gains. Let X1 = X(t) and X2 = Ẋ(t). Following the solution steps outlined in Section 8.2, we arrive at a set of moment equations of the closed loop system,

dE[X1]/dt = E[X2],
dE[X2]/dt = −(ωn² + KP)E[X1] − (2ζωn + KD)E[X2],   (14.4)

and

dE[X1²]/dt = 2E[X1X2],
dE[X1X2]/dt = E[X2²] − (ωn² + KP)E[X1²] − (2ζωn + KD)E[X1X2],   (14.5)
dE[X2²]/dt = −2(ωn² + KP)E[X1X2] − 2(2ζωn + KD)E[X2²] + 2πS0.

The closed loop moment equations indicate that the control gains KP and KD regulate the convergence of the moment responses as in the case of deterministic systems. We have assumed that the system is initially at point (x10, x20) with probability one. Then, the initial conditions for the moments are
E[X1] = x10,  E[X2] = x20,  E[X1²] = x10²,
E[X1X2] = x10x20,  E[X2²] = x20²,  at t = 0.   (14.6)

Examples of time histories of the first and second order moments are shown in Figures 14.1 and 14.2. The steady state solution of the moment equations can be obtained by setting the time derivatives of the moments to zero in equations (8.37) and (8.38), leading to

E[X1]ss = E[X2]ss = 0,  E[X1X2]ss = 0,
E[X1²]ss = πS0/[(2ζωn + KD)(ωn² + KP)],   (14.7)
E[X2²]ss = πS0/(2ζωn + KD).
Since both X1 and X2 are zero mean, the variances are Var[X1] = E[X1²] and Var[X2] = E[X2²]. Hence, to reduce the response variance in the presence of the Gaussian white noise excitation, we need large gains. On the other hand, if we would like the steady state to reach given variances such that

E[X1²]ss = E[X1²]ref,  E[X1X2]ss = E[X1X2]ref,  E[X2²]ss = E[X2²]ref,   (14.8)

we must specify E[X1X2]ref = 0. Otherwise, the linear PD control in equation (14.3) cannot lead the system to the target. From equations (14.7) and (14.8), we solve for the required feedback gains

KD = πS0/E[X2²]ref − 2ζωn,  KP = E[X2²]ref/E[X1²]ref − ωn².   (14.9)

If we specify the steady state variances such that E[X1X2]ref ≠ 0, the PD control in equation (14.3) does not have enough degrees of freedom to reach the target.
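The moment equations (14.4) and (14.5) form a closed linear system of ODEs and can be integrated directly. A Python sketch using the parameter values quoted for Figures 14.1 and 14.2, which recovers the steady state values of equation (14.7):

```python
import numpy as np
from scipy.integrate import solve_ivp

wn, zeta, S0, KP, KD = 2.0, 0.1, 1.0, 1.0, 1.6
a, b = wn**2 + KP, 2*zeta*wn + KD            # closed loop stiffness and damping

def rhs(t, m):
    m1, m2, m20, m11, m02 = m                # E[X1], E[X2], E[X1^2], E[X1X2], E[X2^2]
    return [m2,
            -a*m1 - b*m2,
            2*m11,
            m02 - a*m20 - b*m11,
            -2*a*m11 - 2*b*m02 + 2*np.pi*S0]

x10 = x20 = 1.0
m0 = [x10, x20, x10**2, x10*x20, x20**2]     # initial moments, equation (14.6)
m_ss = solve_ivp(rhs, (0.0, 40.0), m0, rtol=1e-9, atol=1e-12).y[:, -1]

print(m_ss[2], np.pi*S0/(b*a))               # E[X1^2]_ss vs equation (14.7)
print(m_ss[4], np.pi*S0/b)                   # E[X2^2]_ss
```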
Figure 14.1. KP = 1 and KD = 1.6; x10 = 1 and x20 = 1; ωn = 2 and ζ = 0.1; S0 = 1.

Figure 14.2. KP = 1 and KD = 1.6; x10 = 1 and x20 = 1; ωn = 2 and ζ = 0.1; S0 = 1.
14.2. Covariance control

Consider a linear time invariant system given by

Ẋ = AX(t) + BU(t) + GW(t),   (14.10)

where X(t) ∈ Rⁿ, U(t) ∈ Rᵐ and W(t) ∈ Rᵖ. A, B and G are matrices of proper dimensions. W(t) is a vector Gaussian white noise process such that

E[W(t)] = 0,  E[W(t)Wᵀ(t + τ)] = σW²δ(τ),   (14.11)

where σW² is a p × p positive definite matrix. Let Cr,XX denote a prespecified reference covariance matrix for the steady state response of X(t). Consider a full state feedback

U(t) = −KccX(t).   (14.12)

Substituting the control into equation (14.10), we have the closed loop system,

Ẋ = (A − BKcc)X(t) + GW(t).   (14.13)

Following the steps in Chapter 9, we obtain a governing equation for the covariance of the response X(t) as

dCXX(t)/dt = (A − BKcc)CXX(t) + CXX(t)(A − BKcc)ᵀ + GσW²Gᵀ,  t > 0.   (14.14)

In the steady state as t → ∞, we have ĊXX(∞) = 0 and set CXX(∞) = Cr,XX. This leads to an algebraic equation to determine the gain matrix,

(A − BKcc)Cr,XX + Cr,XX(A − BKcc)ᵀ + GσW²Gᵀ = 0.   (14.15)

THEOREM 14.1. (Hotz and Skelton, 1987; Skelton et al., 1998) Cr,XX must satisfy the following condition that guarantees the existence of the covariance control gain matrix Kcc satisfying equation (14.15),

(I − BB†)(ACr,XX + Cr,XXAᵀ + GσW²Gᵀ)(I − BB†) = 0,   (14.16)

where B† is the Moore-Penrose inverse of B. The set of all possible gain matrices is given by

Kcc = ½B†(ACr,XX + Cr,XXAᵀ + GσW²Gᵀ)(2I − BB†)Cr,XX⁻¹ + B†LBB†Cr,XX⁻¹ + (I − BB†)Z,   (14.17)

where L = −Lᵀ is arbitrary and real, and Z is an arbitrary real matrix.
For a given system, we cannot arbitrarily pick the reference covariance matrix Cr,XX . An iterative procedure has been developed to find the pair of the reference covariance matrix Cr,XX and the feedback gain matrix Kcc (Skelton et al., 1998).
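As a minimal numerical check of Theorem 14.1 (with hypothetical matrices): when B is square and invertible, B† = B⁻¹, the condition (14.16) holds trivially, and the gain formula (14.17) with L = 0 and Z = 0 reproduces a prescribed Cr,XX:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])
B = np.eye(2)                                  # invertible, so B† = B^-1
G = np.array([[0.0], [1.0]])
sigW2 = np.array([[2.0]])                      # noise intensity matrix
Cr = np.array([[0.5, 0.0],                     # desired steady state covariance
               [0.0, 1.5]])

Q = A @ Cr + Cr @ A.T + G @ sigW2 @ G.T
Bp = np.linalg.pinv(B)                         # Moore-Penrose inverse
Kcc = 0.5 * Bp @ Q @ (2*np.eye(2) - B @ Bp) @ np.linalg.inv(Cr)   # eq. (14.17)

Acl = A - B @ Kcc                              # closed loop matrix
residual = Acl @ Cr + Cr @ Acl.T + G @ sigW2 @ G.T
print(np.abs(residual).max())                  # ~ 0: eq. (14.15) is satisfied
```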
14.3. Generalized covariance control

14.3.1. Moment specification for nonlinear systems

The covariance control presented in Section 14.2 is for linear systems. When the system is nonlinear, the moment equations are coupled to form an infinite hierarchy. It becomes very difficult to specify the stationary covariance matrix of the system response. However, when the system admits an exact stationary probability density function, we can make use of the probability density function to design a generalized covariance control. Recall the examples of nonlinear systems discussed in Section 6.6 that admit exact solutions of the stationary probability density function of the FPK equation. Consider a nonlinear stochastic system

Ẍ + h(X, Ẋ) = U(t) + g(X, Ẋ)W(t),   (14.18)

where h and g are nonlinear functions of the state variables, U is a control, and W(t) is the Gaussian white noise with zero mean. Let X1 = X and X2 = Ẋ. The objectives are as follows.

1. Find a nonlinear feedback control U = f(X1, X2; Kj), where the Kj are the control gains, such that the exact stationary probability density function pX(x1, x2) of the response is obtainable.
2. Select the control gains by following the concept of the covariance control.

Denote the moments of the state variables as

mjk = E[X1^j X2^k] = ∫∫ x1^j x2^k pX(x1, x2) dx1 dx2,   (14.19)

where j, k = 0, 1, 2, ... and the integrals extend over (−∞, ∞). Let mrjk denote the target values for the (jk)th order moments. We define a quadratic index consisting of the moment tracking errors and the control penalty as

J = Ĵ + βE[U²],   (14.20)

where Ĵ denotes the moment tracking performance,

Ĵ = Σ_{j,k} bjk (mrjk − mjk)².   (14.21)

The weighting constants bjk ≥ 0 and β > 0. The optimal control gains are determined by minimizing the index J with respect to Kj,

∂J/∂Kj = 0 for all j.   (14.22)

The necessary condition of equation (14.22) must be satisfied by a set of finite gains Kj. The necessary condition does not guarantee the global minimum in the parameter space of the control gains. Numerical schemes, however, can be developed to make sure at least that the solution of equation (14.22) is a local minimum. When the control achieves a perfect tracking such that Ĵ = 0, the desired moments mrjk can be reached exactly. This becomes the moment specification control. There are three cases to be considered with regard to the number of control gains versus the number of target moments: the control design problem can be

1. determined, when the number of equations is equal to the number of control gains;
2. overdetermined, when there are more equations than control gains;
3. underdetermined, when there are not enough equations to fully specify the control gains.

The functional form of the feedback control also plays an important role. For example, a linear feedback cannot specify the kurtosis of the displacement of a Duffing oscillator. Hence, we look for a nonlinear feedback control such that the exact stationary probability density function pX(x1, x2) of the response is obtainable. One advantage of having the exact probability density function in control design is that the expectation of general nonlinear functions, including nonpolynomial types, of the system response can be evaluated without much difficulty.

Stability. Since the moments of an unstable system are unbounded, the existence of a finite J implies that the control system is stable up to the highest order of moments included in J. Furthermore, the existence of the probability density function can lead to even stronger stability conditions. For example, the existence of an exponential type probability density function of the closed loop system guarantees the existence of finite response moments of any order.
On the other hand, the nonexistence of the probability density function does not necessarily indicate the instability of the system, as will be shown in the next section.

14.3.2. Control of a Duffing oscillator

Consider a Duffing oscillator subject to an external white noise excitation,

Ẍ + 2ζωnẊ + ωn²X + ωn²c3X³ = W(t) + U(t),   (14.23)

where ζ, ωn, and c3 are system parameters, and W(t) is a white noise such that E[W(t)] = 0 and E[W(t)W(t + τ)] = 2Dδ(τ). The exact stationary probability density function of the uncontrolled Duffing system is obtainable. Consider a nonlinear feedback control U(t) = −K1X − K2Ẋ − K3X³. The closed loop system becomes

Ẍ + 2ζ*Ẋ + ω*X + ΩX³ = W(t),   (14.24)

and the stationary probability density function of the closed loop system can be obtained as

pX(x, ẋ) = C0 exp[−(ζ*/D)(ẋ² + ω*x² + ½Ωx⁴)],   (14.25)

where C0 is the normalization constant of the probability density function, and

ζ* = ζωn(1 + K2/(2ζωn)),  ω* = ωn²(1 + K1/ωn²),  Ω = ωn²(c3 + K3/ωn²).   (14.26)

Define a performance index

J = b20(mr20 − m20)² + b02(mr02 − m02)² + b40(mr40 − m40)²
  + β(K1²m20 + K2²m02 + K3²m60 + 2K1K3m40).   (14.27)
Because the moments with odd powers such as m11 and m31 vanish in steady state, they are not included in the performance index. All the response moments are implicit functions of the control gains. The necessary condition, equation (14.22), for minimizing the performance index reads

0 = b20(mr20 − m20)∂m20/∂K1 + b02(mr02 − m02)∂m02/∂K1 + b40(mr40 − m40)∂m40/∂K1
  + β(K1m20 + K3m40)
  + (β/2)(K1²∂m20/∂K1 + K2²∂m02/∂K1 + K3²∂m60/∂K1 + 2K1K3∂m40/∂K1),   (14.28)

0 = b20(mr20 − m20)∂m20/∂K2 + b02(mr02 − m02)∂m02/∂K2 + b40(mr40 − m40)∂m40/∂K2
  + βK2m02
  + (β/2)(K1²∂m20/∂K2 + K2²∂m02/∂K2 + K3²∂m60/∂K2 + 2K1K3∂m40/∂K2),   (14.29)

0 = b20(mr20 − m20)∂m20/∂K3 + b02(mr02 − m02)∂m02/∂K3 + b40(mr40 − m40)∂m40/∂K3
  + β(K3m60 + K1m40)
  + (β/2)(K1²∂m20/∂K3 + K2²∂m02/∂K3 + K3²∂m60/∂K3 + 2K1K3∂m40/∂K3).   (14.30)

Evaluation of terms such as ∂m20/∂K1 can be readily done by using the exact probability density function in equation (14.25). Equations (14.28) to (14.30) are a set of nonlinear algebraic equations for the control gains Kj and can be solved numerically.
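The moments appearing in equations (14.28) to (14.30) can be evaluated from the exact density (14.25) by quadrature. The sketch below uses the gains K1 = −1.5363, K2 = 0.04444, K3 = 1.1754 obtained later in Example 14.1, whose targets are m20 = 0.7 and m02 = 0.9 (with m40 = 1.0, cf. case 4 of Table 14.1):

```python
import numpy as np

wn, zeta, c3, D = 1.0, 0.2, 0.1, 0.4
K1, K2, K3 = -1.5363, 0.04444, 1.1754
zs = zeta*wn*(1 + K2/(2*zeta*wn))      # zeta*  in equation (14.26)
ws = wn**2*(1 + K1/wn**2)              # omega*
Om = wn**2*(c3 + K3/wn**2)             # Omega

# x1-marginal of the exact PDF (14.25); the x2-marginal is Gaussian.
x = np.linspace(-4.0, 4.0, 8001)
f = np.exp(-(zs/D)*(ws*x**2 + 0.5*Om*x**4))
m20 = (x**2 * f).sum() / f.sum()
m40 = (x**4 * f).sum() / f.sum()
m02 = D/(2.0*zs)

print(m20, m02, m40)   # close to the targets m20 = 0.7, m02 = 0.9, m40 = 1.0
```

Note that ω* is negative here, so the closed loop has a double-well potential; the density nevertheless exists because Ω > 0.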
Numerical examples
We choose the system parameters ωn = 1, ζ = 0.2, c3 = 0.1 and D = 0.4. For the weighting constants, we fix b20 = b02 = 1, and vary b40 and β. The target moments, weighting constants and corresponding optimal gains of eleven case studies are listed in Table 14.1. Cases 1, 2 and 4 correspond to the moment specification control where the control is not penalized (β = 0), and minimization of the index J results in Ĵ = 0. The control is underdetermined when there are two target moments and three gains. In Table 14.2, we list the optimal gains for the problem with more demanding target moments. Case 2 is equivalent to the moment specification control. Time domain simulations corresponding to several cases in Table 14.2 are shown in the figures. Figure 14.3 shows the second order moments of the displacement and velocity of the Duffing system. The simulation results are based on an average over 80000 samples. Figure 14.4 shows the expected control effort and the fourth order moment of the displacement. The agreement of the theoretical results with the simulations is excellent.

14.3.3. Control of Yasuda's system

Consider a parametrically excited nonlinear system,

Ẍ + ωn²X + c1X²Ẋ = ηXW(t) + U(t),   (14.31)

where W(t) is the same white noise as in Section 14.3.2. This example demonstrates the accuracy and flexibility of the optimization approach when the expectation of nonpolynomial nonlinear functions of the response is required and is evaluated with the help of an exact probability density function. With the knowledge of the solvability of the exact stationary probability density function as discussed in Section 6.6.2, we propose the following
Table 14.1. Optimal feedback gains of the Duffing system. ζ = 0.2, c3 = 0.1, ωn² = 1.0, D = 0.4, mr20 = 0.7, mr02 = 0.9, mr40 = 1.0, b20 = 1 and b02 = 1. K3 = 0 corresponds to the cases where the nonlinear feedback gain is prescribed to be zero.

Case #  b40   β     K1        K2       K3       m20      m02      m40      Ĵ        E[U²]
1       0.0   0.0   0.09201   0.04444  0        0.70000  0.90000  1.35596  0.00000  0.00770
2       0.0   0.0   0.03074   0.04444  0.03433  0.70000  0.90000  1.32865  0.00000  0.00981
3       1.0   0.0   0.31481   0.04532  0        0.60487  0.89824  1.02948  0.00992  0.06179
4       1.0   0.0   −1.53631  0.04444  1.17542  0.70000  0.90000  1.00000  0.00000  0.67067
5       1.0   0.01  −0.70006  0.04669  0.58690  0.66520  0.89548  1.01320  0.00141  0.23911
6       1.0   0.1   −0.03341  0.05671  0.15849  0.62980  0.87582  1.03316  0.00661  0.05634
7       1.0   1.0   0.06061   0.09253  0.03642  0.62742  0.81213  1.07528  0.01866  0.01778
8       0.0   0.1   0.06647   0.04673  0        0.70930  0.89540  1.38946  0.00011  0.00509
9       0.0   1.0   0.02318   0.04614  0        0.73217  0.89659  1.47447  0.00105  0.00230
10      0.25  0.5   0.05544   0.07067  0.03860  0.65287  0.84986  1.16011  0.01114  0.01595
11      4.0   0.5   0.06998   0.08545  0.06518  0.61309  0.82398  1.01692  0.01448  0.02929

Table 14.2. Optimum feedback gains for the same system with mr20 = 0.6, mr02 = 0.6, mr40 = 0.8, b20 = 1, b02 = 1.

Case #  b40   β     K1        K2       K3       m20      m02      m40      Ĵ        E[U²]
1       1.0   0.1   −0.18137  0.25034  0.09742  0.55489  0.61507  0.81461  0.00248  0.04517
2       1.0   0.0   −0.91992  0.26668  0.58995  0.59994  0.59999  0.79998  0.00000  0.19449
3       0.2   0.1   −0.16358  0.24794  0.07093  0.56424  0.61734  0.85061  0.00209  0.04313
4       5.0   0.1   −0.18778  0.25118  0.10668  0.55189  0.61427  0.80322  0.00257  0.04611
Figure 14.3. Time history of the second order moments of the Duffing system. (a) displacement. (b) velocity. Thick solid line represents the uncontrolled response. (- - -) case 7 in Table 14.1. The following lines correspond to the cases in Table 14.2: cases 1 (· · ·), 2 (− · −), 3 (—–), and 4 (- - -).
control,

U(t) = −K1X − K2X²Ẋ − K3 X²Ẋ/(ω*X² + Ẋ²),   (14.32)

where the Kj are the control gains, and ω* = ωn² + K1. Let X1 = X and X2 = Ẋ. The stationary probability density function of the response is given by

pX(x1, x2) = C0 (ω*x1² + x2²)^(−K3/(2η²D)) exp[−((c1 + K2)/(2η²D))(ω*x1² + x2²)],   (14.33)

where C0 is the normalization constant of the probability density function pX(x1, x2). Equation (14.33) can be transformed to the polar coordinates (A, Θ) by

X1 = (A/√ω*) sin(Θ),  X2 = A cos(Θ),   (14.34)

leading to

pAΘ(a, θ) = C0 a^(−K3/(η²D)+1) exp[−((c1 + K2)/(2η²D))a²],   (14.35)
Figure 14.4. The response of the controlled Duffing system. (a) The fourth order moment of the displacement m40. (b) The control effort E[U²]. Thick solid line represents the uncontrolled response, (- - -) case 7 in Table 14.1. The following lines correspond to the cases in Table 14.2: cases 1 (· · ·), 2 (− · −), 3 (—–), and 4 (- - -).

where 0 ≤ a < ∞ and 0 ≤ θ < 2π, and C0 is the normalization constant of pAΘ(a, θ). The existence of the probability density function depends on the control gains. To ensure the exponential decay of the probability density function as a → ∞ and the stability of the system, we must have

c1 + K2 > 0.   (14.36)

To guarantee that pAΘ(a, θ) is integrable at a = 0, we must have

−K3/(η²D) + 1 > −1.   (14.37)

When this condition is not satisfied, the stationary probability density function does not exist in the conventional sense, and it approaches a delta distribution at the origin (X1, X2) = (0, 0), suggesting that the controlled system is stable and the controller eliminates the stochasticity of the system in steady state. In this case, we have K3 > 2η²D.
Consider the performance index,

J = b20(mr20 − m20)² + b02(mr02 − m02)² + b40(mr40 − m40)²
  + β{K1²m20 + K2²m42 + K3²E[X1⁴X2²/(ω*X1² + X2²)²] + 2K2K3E[X1⁴X2²/(ω*X1² + X2²)]}.   (14.38)
The necessary conditions to minimize J with respect to the control gains are three nonlinear algebraic equations. We leave the derivation of these equations as an exercise. As an example, we show how expectations such as E[X1⁴X2²/(ω*X1² + X2²)] are evaluated explicitly:

E[X1⁴X2²/(ω*X1² + X2²)] = E[(ω*)⁻²A⁴ sin⁴(Θ) cos²(Θ)]
  = (πC0/(8(ω*)²)) ∫₀^∞ a^(−K3/(η²D)+5) exp[−((c1 + K2)/(2η²D))a²] da.   (14.39)
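Integrals of this type reduce to gamma functions through ∫₀^∞ a^m e^(−γa²) da = Γ((m+1)/2)/(2γ^((m+1)/2)). The sketch below (hypothetical gain values, chosen so that the density exists) checks the closed form of equation (14.39) against direct quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

wn, eta, c1, D = 1.0, 0.6, 0.4, 0.5
K1, K2, K3 = 0.1, 0.04, 0.1               # hypothetical; K3 < 2*eta^2*D
ws = wn**2 + K1
p = -K3/(eta**2*D)                        # power of a in p_A_Theta, eq. (14.35)
g = (c1 + K2)/(2*eta**2*D)                # Gaussian decay rate

I = lambda m: quad(lambda a: a**m*np.exp(-g*a**2), 0.0, np.inf)[0]

# E[X1^4 X2^2/(w* X1^2 + X2^2)] = (1/(16 w*^2)) I(p+5)/I(p+1), eq. (14.39),
# after eliminating C0 with the normalization 2*pi*C0*I(p+1) = 1.
E_quad  = I(p + 5)/I(p + 1)/(16*ws**2)
E_gamma = gamma((p + 6)/2)/(g**2*gamma((p + 2)/2))/(16*ws**2)
print(E_quad, E_gamma)                    # the two evaluations agree
```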
Numerical examples
We take the parameters as ωn = 1, η = 0.6, c1 = 2/5, D = 0.5, b20 = 1.0, and b02 = 1.0. The effect of the control weighting β and the error weighting b40 on the optimal solutions is summarized in Table 14.3. As before, when β = 0, the control becomes the moment specification with a small tracking error Ĵ = 1.9 × 10⁻⁹. Figure 14.5 shows the simulation results of the second order moments for several cases in Table 14.3. The fourth order moment of displacement and the expected control effort are shown in Figure 14.6. As an example of comparison, the simulated steady state moments of the uncontrolled system, i.e. Case 1 in Table 14.3, are m20 = 0.897471, m02 = 0.886263, and m40 = 2.346586. Case 2 is to be compared with the simulation results m20 = 0.600491, m02 = 0.692606, m40 = 1.162720, and E[U²] = 0.019149, while Case 5 is to be compared with the simulation results m20 = 0.599384, m02 = 0.643422, m40 = 1.179737, and E[U²] = 0.009135. The agreement of the theoretical results with the simulations is again excellent.
Table 14.3. Optimal feedback gains for the parametrically excited system. ωn² = 1.0, c1 = 2/5, η = 0.6, D = 0.5, mr20 = 0.6, mr02 = 0.7, mr40 = 1.2, b20 = 1, b02 = 1.

Case #  b40   β     K1       K2        K3       m20      m02      m40      Ĵ        E[U²]
1       N/A   N/A   0.0      0.0       0.0      0.89987  0.89987  2.42764  N/A      0.00000
2       1.0   0.0   0.16657  0.02062   0.13111  0.60004  0.69999  1.19999  0.00000  0.01913
3       0.10  0.10  0.13800  0.02918   0.12807  0.60598  0.68961  1.22800  0.00057  0.01497
4       10.0  0.10  0.12153  0.06590   0.07812  0.61421  0.68885  1.20062  0.00326  0.01543
5       1.0   1.0   0.08983  0.02916   0.16214  0.59638  0.64995  1.22205  0.00300  0.00934
6       1.0   4.0   0.05384  −0.00881  0.24535  0.57568  0.60668  1.25119  0.01192  0.00496
Figure 14.5. Time history of the second order moments of the parametrically excited system. (a) The displacement. (b) The velocity. Thick solid line represents case 1, the uncontrolled response, in Table 14.3. Cases 2 (− · −), 3 (· · ·), 4 (—–), 5 (- - -), and 6 (......).

Next, we show the case when the steady state probability density function does not exist. We keep two of the control gains fixed as K1 = 0.1 and K2 = 0.04. The condition for which the steady state probability density function approaches a delta function at the origin is K3 ≥ 0.72. We simulate the system with six different values of K3 ≥ 0.72. The responses are shown in Figures 14.7 and 14.8. As expected, the second and fourth order moments of the displacement, and the second order moment of the velocity, vanish faster as K3 increases. It is interesting to note that the control effort required to remove the stochasticity from the system in each case is finite.
Figure 14.6. The response of the controlled parametrically excited system. (a) The fourth order moment of the displacement m40. (b) The control effort E[U²]. Thick solid line represents case 1, the uncontrolled response, in Table 14.3. Cases 2 (− · −), 3 (· · ·), 4 (—–), 5 (- - -), and 6 (......).

14.4. Covariance control with maximum entropy principle

14.4.1. Maximum entropy principle

When the system nonlinear terms are in the form of polynomials, it is convenient to use the maximum entropy principle to construct an approximate probability density function of the system response for the covariance control design (Wojtkiewicz and Bergman, 2001). Recall the Itô differential equation for the n dimensional vector stochastic process X(t),

dXj(t) = mj(X, t) dt + Σ_{k=1}^{m} σjk(X, t) dBk(t),  1 ≤ j ≤ n.   (14.40)
Define a qth order polynomial of the state variables as

F(X) = X1^{q1} X2^{q2} ··· Xn^{qn},   (14.41)

where

q = Σ_{k=1}^{n} qk.   (14.42)
Figure 14.7. Effect of the nonlinear feedback gain K3 on the response decay. (a) The second order moment of the displacement m20 . (b) The second order moment of the velocity m02 , with control gains K1 = 0.1, and K2 = 0.04. All other parameters are the same as the previous example. (· · ·) K3 = 0.8, (- - -) K3 = 1.0, (− · −) K3 = 1.2, (—–) K3 = 1.4, (- - -) K3 = 1.6, and (......) K3 = 1.8.
The qth order moment equations of the system can be generated by choosing all possible combinations of qk and applying Itô's lemma,

dE[F(X)]/dt = E[ Σ_{j=1}^{n} mj ∂F(X)/∂Xj + ½ Σ_{j=1}^{n} Σ_{k=1}^{n} bjk ∂²F(X)/∂Xj∂Xk ],   (14.43)
where bjk = Σ_{l=1}^{m} σjl σkl. If we are interested in the response moments up to the qth order, we can construct a probability density function as

pX(x) = λ0 exp[ −Σ_{k=1}^{n} λk xk − Σ_{j,k=1}^{n} λjk xj xk − ···
        − Σ_{q1+q2+···+qn=q} λq1q2···qn x1^{q1} x2^{q2} ··· xn^{qn} ].   (14.44)
Figure 14.8. Effect of the nonlinear feedback gain K3 on the response decay. (a) The fourth order moment of the displacement m40 . (b) The control effort E[U 2 ]. All the parameters are the same as in Figure 14.7. (· · ·) K3 = 0.8, (- - -) K3 = 1.0, (− · −) K3 = 1.2, (—–) K3 = 1.4, (- - -) K3 = 1.6, and (......) K3 = 1.8.
Consider Shannon's entropy,

H = E[−ln pX(x)] = −∫_{Rⁿ} pX(x) ln pX(x) dx.   (14.45)

According to the maximum entropy principle, the coefficients λ should be chosen such that the entropy is maximized subject to the constraint

∫_{Rⁿ} pX(x) dx = 1.   (14.46)

This leads to a set of algebraic equations for determining λ. The λ will be ultimately determined as functions of the response moments up to the qth order. When the probability density function determined by the maximum entropy principle is applied to evaluate the expectations in equation (14.43), it leads to a set of implicit and closed nonlinear functions of moments. In principle, both the transient and stationary solutions can be obtained with the maximum entropy principle. However, the computational effort would be prohibitive for transient
solutions. Therefore, it is realistic to design the covariance control in steady state with the maximum entropy principle.

14.4.2. Control of the Duffing system

Consider again the Duffing system under the nonlinear feedback control,

Ẋ1 = X2,
Ẋ2 = −2ζωnX2 − ωn²X1 − εX1³ + U(t) + W(t),   (14.47)
U(t) = −K1X1 − K2X2 − K3X1³,

where K1, K2 and K3 are control gains, and W(t) is a white noise such that E[W(t)] = 0 and E[W(t)W(t + τ)] = 2Dδ(τ). The steady state moment equations up to the second order are given by

m01 = 0,
−(ωn² + K1)m10 − (ε + K3)m30 = 0,   (14.48)

and

m11 = 0,
(2ζωn + K2)m11 + (ωn² + K1)m20 + (ε + K3)m40 − m02 = 0,   (14.49)
(2ζωn + K2)m02 + (ωn² + K1)m11 + (ε + K3)m31 = D.

The approximate probability density function is taken to be
pX(x1, x2) = exp(λ0 − λ2x1² − λ4x2² − λ6x1⁴).   (14.50)
Consider the following index consisting of Shannon's entropy and the normalization constraint of the probability density function,

Ĥ = −∫∫_{R²} (λ0 − λ2x1² − λ4x2² − λ6x1⁴) pX(x1, x2) dx1 dx2
    + λ( ∫∫_{R²} pX(x1, x2) dx1 dx2 − 1 ),   (14.51)
where λ is a Lagrange multiplier. The necessary conditions for maximizing Shannon's entropy subject to the normalization constraint are

(1 − λ + λ0) − λ2m20 − λ4m02 − λ6m40 = 0,
m20(1 − λ + λ0) − λ2m40 − λ4m22 − λ6m60 = 0,
m02(1 − λ + λ0) − λ2m22 − λ4m04 − λ6m42 = 0,   (14.52)
m40(1 − λ + λ0) − λ2m60 − λ4m42 − λ6m80 = 0,

and

∫∫_{R²} pX(x1, x2) dx1 dx2 = 1.   (14.53)
These conditions and the moment equations form a set of nonlinear algebraic equations. After imposing the covariance on the system response, we still need to search for the solutions of the nonlinear algebraic equations. It has been found that the fsolve command in Matlab is an effective tool for finding the solutions (Wojtkiewicz and Bergman, 2001).

EXAMPLE 14.1. In equation (14.47), we choose the parameters as ωn = 1, ε = 0.1, ζ = 0.2 and D = 0.4, and specify the target moments as mr10 = 0, mr01 = 0, mr11 = 0, mr20 = 0.7 and mr02 = 0.9. Since the steady state probability density function of the Duffing system can be obtained, it can be used to compute the control gains, leading to K1 = −1.5363, K2 = 0.04444, and K3 = 1.1754.

A comparison
Reconsider the system defined by equation (14.31). We select the target moments as mr20 = 0.7, mr02 = 0.7, and mr40 = 1.2. We take the probability density function in equation (14.50). Following the procedure outlined above, we obtain

λ0 = 1.59618,  λ2 = 0.354372,  λ4 = 0.104975,  λ6 = 0.714286.
(14.54)
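The maximum entropy fit can be automated. For the family (14.50), the x2 marginal is Gaussian, so λ4 = 1/(2mr02), and λ2, λ6 follow from matching mr20 and mr40; λ0 is then fixed by normalization. The book uses Matlab's fsolve; the sketch below uses scipy's least_squares in the same role (the resulting λ values need not coincide with equation (14.54), which follows the book's normalization):

```python
import numpy as np
from scipy.optimize import least_squares

m20_t, m02_t, m40_t = 0.7, 0.7, 1.2      # target moments of the comparison

x = np.linspace(-5.0, 5.0, 2001)

def x1_moments(lam):
    l2, l6 = lam
    f = np.exp(-l2*x**2 - l6*x**4)       # unnormalized x1 marginal of (14.50)
    return (x**2*f).sum()/f.sum(), (x**4*f).sum()/f.sum()

res = least_squares(lambda lam: np.subtract(x1_moments(lam), (m20_t, m40_t)),
                    x0=[0.5, 0.1], bounds=([-2.0, 1e-6], [5.0, 5.0]))
l2, l6 = res.x
l4 = 1.0/(2.0*m02_t)                     # the x2 marginal is Gaussian
m20, m40 = x1_moments([l2, l6])
print(m20, m40)                          # the prescribed moments are recovered
```

The fit is feasible here because the target kurtosis m40/m20² is below the Gaussian value of 3, so the quartic coefficient λ6 is positive and the density is normalizable.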
The approximate probability density function in equation (14.50) is used to compute the expectations in the optimization problem to minimize J defined in equation (14.38). With the exact probability density function, we set the weighting β = 0 and look for the control gains that attain the perfect tracking. These control gains are found to be K1 = −5.02335 × 10⁻⁶, K2 = 0.412923, and K3 = −0.418092. Table 14.4 compares the results obtained by the exact probability density function and the maximum entropy method. It is seen that the results by the maximum entropy method are less accurate, especially in estimating the expected control effort E[U²], than the approach using the exact probability density function. That is mainly because the functional form used with the maximum entropy principle deviates substantially from the exact probability density function. To show this point graphically, we plot the cross sections of the exact probability density function and compare them with those obtained by the maximum entropy principle. The results are shown in Figure 14.9. Note that these curves are scaled so that the area under each curve is unity for better visualization. We also present the time domain simulations of the response moments in Figure 14.10. These results clearly demonstrate the advantage of having an exact probability density function for nonlinear stochastic systems.

Figure 14.9. Cross-sections of the steady state PDF pX(x1, x2) of the exact solution for (− · −) x2 = 0, (—–) x2 = 0.5, (- - -) x2 = 1.0 and (· · ·) the approximate solution by the maximum entropy principle. All the curves are scaled so that the area under each is unity.

Figure 14.10. Time history of various expected control inputs and system responses for the parametrically excited system (refer to Table 14.4) with control gains K1 = −5.02335 × 10⁻⁶, K2 = 0.412923, K3 = −0.418092. (a) (......) m20, (—–) m02, (− · −) m40, (- - -) m42. (b) (− · −) E[X1⁴X2²/(ω*X1² + X2²)²], (- - -) E[X1⁴X2²/(ω*X1² + X2²)], (—–) E[U²].

Table 14.4. Comparison of expectations of various quantities by the exact PDF and the maximum entropy principle (MEP) for the parametrically excited system with ωn² = 1.0, c1 = 2/5, η = 0.6, D = 0.5, mr20 = 0.7, mr02 = 0.7, mr40 = 1.2, b20 = 1, and b02 = 1. Note that the MEP approximation should agree with the three prescribed moments.

Expected value                  Exact PDF  MEP       Error (%)
m20                             0.700000   0.700000  0.0
m02                             0.699999   0.699998  0.0
m40                             1.199999   1.200000  0.0
m42                             0.634276   0.839999  32.43
E[X1⁴X2²/(ω*X1² + X2²)]         0.199999   0.233799  16.90
E[X1⁴X2²/(ω*X1² + X2²)²]        0.087501   0.091313  4.36
E[U²]                           0.054387   0.078460  44.26
Exercises

EXERCISE 14.1. Determine a PD control for the SDOF system such that the second order moments of the response converge to the steady state solution in three cycles of the undamped natural frequency. With this control, what is the convergence speed of the mean response?

EXERCISE 14.2. Derive the necessary conditions to minimize J in equation (14.38) with respect to the control gains Kⱼ.

EXERCISE 14.3. The exact steady state probability density function of the Duffing oscillator in equation (14.47) is given by

p_X(x₁, x₂) = C₀ exp[ −(ζωₙ/(2D)) ( ωₙ²x₁² + x₂² + (ε/2)x₁⁴ ) ].  (14.55)

Verify the control gains for Example 14.1 on page 275.
Chapter 15
Concepts of Optimal Controls
Optimal control of stochastic nonlinear dynamic systems is an active area of research because of its relevance to many engineering applications. The problem is difficult to study, particularly when the system is strongly nonlinear and there are constraints on the states and the control. Optimal feedback controls for systems under white-noise random excitations may be studied with the Pontryagin maximum principle, Bellman's principle of optimality and the Hamilton–Jacobi–Bellman (HJB) equation. When the control and the state are bounded, the direct solution of the HJB equation faces severe difficulties since it is multidimensional, nonlinear, and defined in a domain that in general may not be simply connected. The theory of viscosity solutions, first introduced by Crandall and colleagues (Yong and Zhou, 1999), provides a convenient framework for studying the HJB equation. Very few closed form solutions to this problem have been found so far. Dimentberg and colleagues found analytical solutions of the optimal control of a linear spring–mass oscillator with Lagrange and Mayer cost functionals in a region of the phase space (Bratus et al., 2000; Dimentberg et al., 2000). Given the intrinsic complexity of the problem, we usually must resort to numerical methods to find approximate control solutions (Anderson et al., 1984; Kushner and Dupuis, 2001). While certain numerical methods of solution to the HJB equation are known, they often require knowledge of the boundary or asymptotic behavior of the solution in advance (Bratus et al., 2000). Relatively few problems solved through the use of the HJB equation are known today. Zhu et al. (2001) proposed a strategy for optimal feedback control of randomly excited structural systems based on the stochastic averaging method for quasi-Hamiltonian systems and the HJB equation. In this chapter, we shall first study the theory of optimal control for deterministic systems.
We derive the governing equations for the optimal control by using the variational method. Several classes of optimal control problems are discussed. We also formulate the HJB equation for deterministic optimal control problems. Subsequently, we study the theory of optimal control for stochastic systems. We derive the stochastic HJB equation by making use of Itô's lemma. Finally, we study the stochastic optimal control problem with state estimation for linear systems. This is known as the linear quadratic Gaussian (LQG) control.
15.1. Optimal control of deterministic systems

This section presents the formulation of the optimal control problem for deterministic systems. The discussion follows the lines of derivation of equations for determining optimal controls in Lewis and Syrmos (1995).

15.1.1. Total variation

Before we present the formulation of optimal controls, we review an important relation in the variational principle. The total variation of a function x(t) is given by

dx(t) = δx(t) + ẋ(t) dt,  (15.1)

where δx(t) is the virtual change of x(t) over no time without breaking any physical constraint, and dx(t) is the total variation of x(t) due to the virtual change and the time change dt.

15.1.2. Problem statement

Consider a deterministic dynamic system governed by the state equation

ẋ = f(x, u, t),  x(t₀) = x₀,  (15.2)

where t₀ is the initial time, x₀ is the initial state, x ∈ Rⁿ, u ∈ Rᵐ, and f(·) is a nonlinear function of its arguments. Define a performance index, also known as the cost function, as

J = φ(x(T), T) + ∫_{t₀}^{T} L(x, u, τ) dτ,  (15.3)

where T is the terminal time, φ(x(T), T) ≥ 0 is the terminal cost, and L(x, u, t) > 0 is called the Lagrange function.

EXAMPLE 15.1. Several common performance indices are listed here.

Minimum time:  J = ∫_{t₀}^{T} 1 dτ = T − t₀,  (15.4)
Minimum fuel:  J = ∫_{t₀}^{T} Σᵢ₌₁ᵐ |uᵢ(τ)| dτ,

Minimum energy:  J = ½ xᵀ(T) S_T x(T) + ½ ∫_{t₀}^{T} [xᵀ(τ)Qx(τ) + uᵀ(τ)Ru(τ)] dτ,

where S_T and Q are positive semi-definite and R is strictly positive definite.

Define a target set at T as

ψ(x(T), T) = 0,  (15.5)

where ψ ∈ Rᵖ defines a set where the terminal state x(T) must settle at time T. An example of such a set is the predetermined orbit of a satellite. The optimal control problem amounts to finding a control u that drives the system from the initial state x₀ to the target set ψ(x(T), T) = 0 in such a way that the performance index J is minimized while the state equation is satisfied.
15.1.3. Derivation of optimal control

The above optimal control statement defines a constrained optimization problem. To derive the equations governing the optimal control, we apply the method of Lagrange multipliers and consider an augmented performance index

Ĵ(x, u, t₀, T) = φ(x(T), T) + νᵀψ(x(T), T) + ∫_{t₀}^{T} { L(x, u, τ) + λᵀ(τ)[f(x, u, τ) − ẋ] } dτ,  (15.6)

where ν ∈ Rᵖ and λ(t) ∈ Rⁿ are Lagrange multipliers. Define a Hamiltonian function as

H(x, u, λ, t) = L(x, u, t) + λᵀ(t)f(x, u, t).  (15.7)
Note that Ĵ is a functional of the unknown functions x(t) and u(t), and a function of t₀ and T. The necessary condition for Ĵ to achieve optimality is that the total variation of Ĵ with respect to all its arguments vanishes. That is,

dĴ = [∂(φ + ψᵀν)/∂x]ᵀ dx(T) + [∂(φ + ψᵀν)/∂t] dT + ψᵀ dν
  + [H(x, u, λ, T) − λᵀ(T)ẋ(T)] dT − [H(x, u, λ, t₀) − λᵀ(t₀)ẋ(t₀)] dt₀
  + ∫_{t₀}^{T} [ (∂H/∂x)ᵀ δx + (∂H/∂u)ᵀ δu − λᵀ δẋ + (∂H/∂λ − ẋ)ᵀ δλ ] dτ
  = 0.  (15.8)

Note that ∂H/∂x, ∂H/∂u and ∂H/∂λ denote column vectors. An example is as follows,

∂H/∂x = [∂H/∂x₁, ∂H/∂x₂, …, ∂H/∂xₙ]ᵀ.  (15.9)

Also, integrating by parts, we have

∫_{t₀}^{T} −λᵀ δẋ dτ = −λᵀ(T) δx(T) + λᵀ(t₀) δx(t₀) + ∫_{t₀}^{T} λ̇ᵀ δx dτ,  (15.10)

dx(t₀) = δx(t₀) + ẋ(t₀) dt₀,  (15.11)

dx(T) = δx(T) + ẋ(T) dT.  (15.12)

Reorganizing the terms of dĴ,

dĴ = [∂(φ + ψᵀν)/∂x − λ(T)]ᵀ dx(T) + [∂(φ + ψᵀν)/∂t + H(x, u, λ, T)] dT + ψᵀ dν
  − H(x, u, λ, t₀) dt₀ + λᵀ(t₀) dx(t₀)
  + ∫_{t₀}^{T} [ (∂H/∂x + λ̇)ᵀ δx + (∂H/∂u)ᵀ δu + (∂H/∂λ − ẋ)ᵀ δλ ] dτ.  (15.13)
Since δx(t), δu(t) and δλ(t) are arbitrary and independent over the time interval (t₀, T), dν is arbitrary and independent of the rest of the variables, and the initial and terminal conditions are independent, we have the following equations and terminal conditions for determining the optimal control.

State Equation

ẋ = ∂H/∂λ = f(x, u, t).  (15.14)

Co-state Equation

λ̇ = −∂H/∂x = −∂L/∂x − (∂fᵀ/∂x)λ.  (15.15)

Optimality Condition

∂H/∂u = 0.  (15.16)

Initial and Terminal Conditions

−H(x, u, λ, t₀) dt₀ + λᵀ(t₀) dx(t₀) = 0,  (15.17)

[∂(φ + ψᵀν)/∂x − λ(T)]ᵀ dx(T) + [∂(φ + ψᵀν)/∂t + H(x, u, λ, T)] dT = 0.  (15.18)

Target Set

ψ(x(T), T) = 0.  (15.19)
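The conditions (15.14)–(15.16) suggest a simple numerical scheme: sweep the state equation forward, the co-state equation backward, and descend along ∂H/∂u. A minimal sketch for a hypothetical scalar problem (ẋ = −x + u, x(0) = 1, L = ½(x² + u²), φ = 0, so λ(T) = 0), assuming an explicit Euler discretization:

```python
# Steepest-descent illustration of the necessary conditions:
# state equation (15.14), co-state equation (15.15), gradient dH/du from (15.16).
# Hypothetical problem: x' = -x + u, x(0) = 1, L = (x^2 + u^2)/2, phi = 0.
T, n = 2.0, 400
dt = T / n
u = [0.0] * (n + 1)

def forward(u):
    x = [1.0] * (n + 1)
    for k in range(n):                     # x' = -x + u
        x[k + 1] = x[k] + dt * (-x[k] + u[k])
    return x

def backward(x):
    lam = [0.0] * (n + 1)                  # lambda(T) = 0: free terminal state, phi = 0
    for k in range(n, 0, -1):              # lambda' = -dH/dx = -x + lambda
        lam[k - 1] = lam[k] - dt * (-x[k] + lam[k])
    return lam

for _ in range(200):                       # descend along dH/du = u + lambda
    lam = backward(forward(u))
    u = [uk - 0.5 * (uk + lk) for uk, lk in zip(u, lam)]

grad = max(abs(uk + lk) for uk, lk in zip(u, backward(forward(u))))
assert grad < 1e-6                         # optimality condition (15.16) nearly satisfied
print("max |dH/du| after descent:", grad)
```

At convergence u = −λ holds pointwise, which is exactly the discrete counterpart of ∂H/∂u = 0 for this problem.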
15.1.4. Pontryagin's minimum principle

The optimality condition ∂H/∂u = 0 assumes that there is no constraint on the control. If the admissible control u lies in a bounded subset U of Rᵐ, this condition may not hold in U. In this case, the optimality condition is replaced by Pontryagin's minimum principle, which states that

H(x∗, u∗, λ∗, t) ≤ H(x∗, u, λ∗, t),  ∀u ∈ U ⊂ Rᵐ,  (15.20)

where x∗, u∗ and λ∗ denote the optimal solutions. Note that Pontryagin's minimum principle is both a necessary and sufficient condition, while the optimality condition (15.16) is only necessary.

15.1.5. The role of Lagrange multipliers

In the derivation of optimal controls, we use Lagrange multipliers to include the constraints of the minimization problem. Lagrange multipliers therefore provide an important and indispensable tool for handling various constraints in optimization. The consequence of using Lagrange multipliers is an increased number of unknown functions and variables, such as the co-state vector λ(t). The dimension of the state space in which the optimal solutions are defined is doubled from n to 2n. However, the order of the required differentiability of the solutions is kept the same as for the original system, i.e., x(t) and λ(t) ∈ C¹. In some cases, one can in principle eliminate Lagrange multipliers such as the co-state vector λ(t) by using the state and co-state equations, resulting in a differential equation for the state vector x(t) of twice the order. This would require a much smoother solution x(t) ∈ C². If the optimal control system is to be implemented in real time, we may or may not be able to achieve such smoothness. In fact, the dimension of the state space and the order of the differential equation form a pair of compromising factors: their product is a constant that describes the complexity of the system, so a decrease of one factor leads to an increase of the other. From the viewpoint of numerical computations and real-time implementation of the optimal control, keeping a lower order of differentiation is almost always more advantageous than the other way around.

15.1.6. Classes of optimal control problems

Most practical applications require specified initial conditions such that

dt₀ = 0 and dx(t₀) = 0.  (15.21)
The terminal conditions depend on the control objective.

Fixed final time problem

When

dT = 0 and δx(T) ≠ 0,  (15.22)

we have the fixed final time and free terminal state optimal control problem.

EXAMPLE 15.2. Consider a control problem of a first order dynamical system defined as

ẋ = −ax + bu  (a > 0).  (15.23)

The initial conditions are all fixed such that

x(t₀ = 0) = x₀ = 0.  (15.24)

Consider a performance index

J = ½ s_T [x(T) − 10]² + ½ ∫₀ᵀ u²(τ) dτ.  (15.25)

We fix the final time such that dT = 0. The Hamiltonian function is given by

H = ½u² + λ(−ax + bu).  (15.26)

The co-state equation and the optimality condition are given by

λ̇ = −∂H/∂x = aλ,  (15.27)

∂H/∂u = u + bλ = 0.  (15.28)

The terminal condition at t = T is given by

[∂φ/∂x + (∂ψᵀ/∂x)ν − λ]_{t=T} = s_T[x(T) − 10] − λ(T) = 0.  (15.29)

Hence,

λ(T) = s_T [x(T) − 10].  (15.30)

The solution of the co-state equation is

λ(t) = λ(T)e^{−a(T−t)}.  (15.31)

Substituting this solution and u = −bλ into the state equation, we obtain

x(t) = x₀e^{−at} − (b²/a) λ(T) e^{−aT} sinh(at).  (15.32)

Applying the terminal condition (15.30), we finally have the optimal solutions

x∗(t) = 10 s_T b² sinh(at) / [a e^{aT} + s_T b² sinh(aT)],  (15.33)

λ∗(t) = −10 a s_T e^{at} / [a e^{aT} + s_T b² sinh(aT)],  (15.34)

u∗(t) = 10 a b s_T e^{at} / [a e^{aT} + s_T b² sinh(aT)].  (15.35)
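As a quick sanity check, the closed-form solutions (15.33)–(15.35) can be verified numerically against the state equation, the co-state equation, the optimality condition and the terminal condition (15.30). A sketch with hypothetical parameter values:

```python
import math

# Numerical check of the optimal solutions (15.33)-(15.35) of Example 15.2.
# Parameter values are hypothetical, chosen only for illustration.
a, b, sT, T = 0.8, 1.5, 2.0, 3.0
den = a * math.exp(a * T) + sT * b**2 * math.sinh(a * T)

def x(t):   return 10 * sT * b**2 * math.sinh(a * t) / den   # eq. (15.33)
def lam(t): return -10 * a * sT * math.exp(a * t) / den      # eq. (15.34)
def u(t):   return 10 * a * b * sT * math.exp(a * t) / den   # eq. (15.35)

h, t = 1e-6, 1.2                          # central finite differences at t = 1.2
xdot = (x(t + h) - x(t - h)) / (2 * h)
lamdot = (lam(t + h) - lam(t - h)) / (2 * h)

assert abs(xdot - (-a * x(t) + b * u(t))) < 1e-5    # state equation (15.23)
assert abs(lamdot - a * lam(t)) < 1e-5              # co-state equation (15.27)
assert abs(u(t) + b * lam(t)) < 1e-12               # optimality condition (15.28)
assert abs(lam(T) - sT * (x(T) - 10)) < 1e-9        # terminal condition (15.30)
print("conditions (15.23), (15.27), (15.28), (15.30) all hold")
```

The same check can be repeated for any (a, b, s_T, T) with a > 0.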
In this example, because λ(T) is determined by the system response x(T) at t = T, and x(T) is influenced by u(t), the optimal control contains feedback of the response.

Fixed final time and state problem – trajectory planning

When both the final time and state are predetermined, the optimal control problem is also known as dynamic trajectory planning. The resulting control is often open-loop.

EXAMPLE 15.3. Reconsider Example 15.2 with a fixed final time and state such that dT = 0, δx(T) = 0 and x(T) = 10. Since the final state is fixed, we modify the performance index as

J = ½ ∫₀ᵀ u²(τ) dτ.  (15.36)
Applying the terminal condition x(T) = 10 to the solution in equation (15.32), we obtain

λ(T) = −20a / [b²(1 − e^{−2aT})].  (15.37)

Hence, the optimal control solutions are given by

x∗(t) = 10 sinh(at) / sinh(aT),  (15.38)

λ∗(t) = −10 a e^{at} / [b² sinh(aT)],  (15.39)

u∗(t) = 10 a e^{at} / [b sinh(aT)].  (15.40)
Free final time problem

When dT ≠ 0, the final time of the optimal control is not fixed. Recall the terminal condition

0 = [∂(φ + ψᵀν)/∂x − λ(T)]ᵀ dx(T) + [∂(φ + ψᵀν)/∂t + H(x, u, λ, T)] dT.  (15.41)

There are a few cases to be considered separately, depending on how the terminal state x(T) is specified. If x(T) is also free and independent of T, we must have two terminal conditions

∂(φ + ψᵀν)/∂x − λ(T) = 0,  (15.42)

∂(φ + ψᵀν)/∂t + H(x, u, λ, T) = 0.  (15.43)

This is known as the transversality condition. When x(T) must equal a prescribed function r(T) while T is still free, we have

dx(T) = [dr(T)/dT] dT.  (15.44)

In this case, there is no target set to be specified, so that ψ = 0. The terminal condition then reads

[∂φ/∂x − λ(T)]ᵀ dr(T)/dT + ∂φ/∂t + H(x, u, λ, T) = 0.  (15.45)
More discussions and examples of free final time optimal control problem can be found in Lewis and Syrmos (1995).
EXAMPLE 15.4. Consider the target-hitting problem of missile control. The missile must hit the target precisely; when it hits the target is not important, as long as the final time T is finite.

15.1.7. The Hamilton–Jacobi–Bellman equation

Define a value function based on the performance index as

V(x∗(t), t) = φ(x∗(T), T) + ∫ₜᵀ L(x∗(τ), u∗(τ), τ) dτ
           = min_u { φ(x∗(T), T) + ∫ₜᵀ L(x∗, u, τ) dτ }.  (15.46)

We have also changed the integration dummy variable from t to τ. Taking the derivative of the value function with respect to time t based on the definition in equation (15.46),

dV(x∗(t), t)/dt = −L(x∗(t), u∗(t), t).  (15.47)

By treating V(x∗(t), t) as a multi-variable function of the state and time, we can also apply the chain rule of differentiation along the optimal path to obtain

dV(x∗(t), t)/dt = ∂V(x∗(t), t)/∂t + [∂V(x∗(t), t)/∂x]ᵀ ẋ∗(t)
              = ∂V(x∗(t), t)/∂t + [∂V(x∗(t), t)/∂x]ᵀ f(x∗(t), u∗(t), t).  (15.48)

Equating these two derivatives, we have

∂V(x∗(t), t)/∂t = −L(x∗(t), u∗(t), t) − [∂V(x∗(t), t)/∂x]ᵀ f(x∗(t), u∗(t), t)
              = −H(x∗(t), u∗(t), ∂V(x∗(t), t)/∂x, t)
              = −min_u H(x∗(t), u(t), ∂V(x∗(t), t)/∂x, t).  (15.49)

This is a partial differential equation defined in the state space governing the value function and is known as the Hamilton–Jacobi–Bellman (HJB) equation. In this equation, ∂V/∂x plays the role of λ in the original definition of the Hamiltonian (15.7).
According to the definition of the value function, we have a terminal condition for the HJB equation,

V(x∗(T), T) = φ(x∗(T), T).  (15.50)

When the states are unconstrained, the only boundary condition is at infinity; that is, as |x| → ∞, V(x, t) → 0 for stable systems. When there are constraints imposed on the states, boundary conditions can be formulated.

EXAMPLE 15.5. Reconsider the control problem of the first order dynamical system in Example 15.2. The initial conditions are all fixed such that

dt₀ = 0,  x(t₀ = 0) = 0.  (15.51)

The final time is fixed such that dT = 0. The final state x(T) is free. Consider a performance index

J = (s_T/2) x²(T) + ½ ∫₀ᵀ u²(τ) dτ.  (15.52)

The value function is defined as

V(x∗(t), t) = (s_T/2) x∗²(T) + ½ ∫ₜᵀ [u∗(τ)]² dτ.  (15.53)

The Hamilton–Jacobi–Bellman equation for this example reads

∂V(x∗(t), t)/∂t = −min_u H(x∗(t), u(t), ∂V(x∗(t), t)/∂x, t)
              = −min_u { ½u² + (−ax + bu) ∂V(x∗(t), t)/∂x }.  (15.54)

Since there is no constraint on u, the minimum of the right-hand side of the above equation is achieved when

∂H/∂u = u + b ∂V(x∗(t), t)/∂x = 0.  (15.55)

Since ∂²H/∂u² = 1 > 0, we obtain the optimal control which minimizes the performance index as

u∗ = −b ∂V(x∗(t), t)/∂x.  (15.56)
Substituting the optimal control into equation (15.54), we obtain a partial differential equation for the value function,

∂V(x∗(t), t)/∂t = (b²/2)[∂V(x∗(t), t)/∂x]² + a x∗ ∂V(x∗(t), t)/∂x,  (15.57)

subject to the terminal condition

V(x∗(T), T) = (s_T/2) x∗²(T).  (15.58)

Motivated by the terminal condition, we assume that

V(x∗(t), t) = (s(t)/2) x∗²(t).  (15.59)

Note that x(t) is treated as an independent variable and

∂V(x∗(t), t)/∂t = (ṡ(t)/2) x∗²(t),  (15.60)

∂V(x∗(t), t)/∂x = s(t)x∗(t).  (15.61)

We obtain an equation and a terminal condition for s(t),

ṡ(t) = b²s²(t) + 2as(t),  s(T) = s_T.  (15.62)

Equation (15.62) is known as the Riccati equation. Integrating the Riccati equation, we have

s(t) = 2a s_T e^{2a(t−T)} / [b² s_T (1 − e^{2a(t−T)}) + 2a].  (15.63)

Finally, the optimal control is given by

u∗ = −b s(t) x(t) = −2ab s_T e^{2a(t−T)} x(t) / [b² s_T (1 − e^{2a(t−T)}) + 2a].  (15.64)
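The closed-form solution (15.63) can be cross-checked by integrating the Riccati equation (15.62) backward from s(T) = s_T. A sketch with hypothetical parameter values, using a classical fourth-order Runge–Kutta step:

```python
import math

# Cross-check of the closed-form Riccati solution (15.63) against a backward
# numerical integration of (15.62). Parameter values are hypothetical.
a, b, sT, T = 0.7, 1.2, 0.5, 2.0

def s_closed(t):                       # eq. (15.63)
    e = math.exp(2 * a * (t - T))
    return 2 * a * sT * e / (b**2 * sT * (1 - e) + 2 * a)

def f(s):                              # right-hand side of eq. (15.62)
    return b**2 * s**2 + 2 * a * s

n = 20000
dt = -T / n                            # negative step: integrate from T down to 0
s = sT
for _ in range(n):                     # classical RK4
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    s += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

assert abs(s - s_closed(0.0)) < 1e-8   # numerical and closed-form s(0) agree
print("s(0) =", round(s, 6))
```

The agreement holds for any a, b, s_T > 0 on a finite horizon.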
15.2. Optimal control of stochastic systems

Consider now a dynamical system governed by the Itô differential equation

dX(t) = m(X, U, t) dt + σ(X, t) dB(t),  X(t₀) = x₀,  (15.65)

where X is an n-dimensional vector stochastic process, x₀ is a fixed initial state at t₀, m(·) ∈ Rⁿ and σ(X, t) ∈ Rⁿˣ۹ are known functions of their arguments, U ∈ Rᵐ is the control vector, and dB(t) is a q-dimensional vector of independent unit Brownian motions such that

E[dB(t)] = 0,  E[dB(t) dBᵀ(s)] = { I_q dt, t = s; 0, t ≠ s },  (15.66)
where I_q is the q × q identity matrix. Consider a performance index

J = E[J_d] = E[ φ(X(T), T) + ∫_{t₀}^{T} L(X, U, τ) dτ ],  (15.67)

where T is the terminal time, φ(X(T), T) ≥ 0 is the terminal cost, and L(X, U, t) > 0 is the Lagrange function. J_d has the same form as the performance index for the deterministic system. Similarly, we can also impose the target set for the system in a probability sense. For example, the target set may be reached in the mean,

E[ψ(X(T), T)] = 0.  (15.68)

15.2.1. The Hamilton–Jacobi–Bellman equation

Define the value function in terms of the optimal solution as

V(X∗(t), t) = E[V_d] = E[ φ(X∗(T), T) + ∫ₜᵀ L(X∗(τ), U∗(τ), τ) dτ ],  (15.69)

where t₀ ≤ t ≤ T. V_d has the same form as that in equation (15.46), given by

V_d(X∗(t), t) = φ(X∗(T), T) + ∫ₜᵀ L(X∗(τ), U∗(τ), τ) dτ.  (15.70)

Hence, V(X∗(t), t) can be viewed as the expectation of the value function in equation (15.46) when the system is subject to stochastic excitations. The time derivative of V(X∗(t), t) based on the above definition can be obtained by taking the expectation of equation (15.47) and is given by

dV(X∗(t), t)/dt = −E[L(X∗(t), U∗(t), t)].  (15.71)

We make an important assumption that at time t, both X∗(t) and U∗(t) are fully determined, so that

E[L(X∗(t), U∗(t), t)] = L(X∗(t), U∗(t), t).  (15.72)

Hence, the time derivative of V(X∗(t), t) can be rewritten as

dV(X∗(t), t)/dt = −L(X∗(t), U∗(t), t).  (15.73)
By treating V(X∗(t), t) as a multi-variable function of the state and time, we can obtain another expression for the time derivative. Applying Itô's lemma to V_d, we have

dV_d(X∗(t), t) = [∂V_d(X∗(t), t)/∂t] dt + [∂V_d(X∗(t), t)/∂X]ᵀ dX∗(t)
  + ½ dX∗ᵀ(t) [∂²V_d(X∗(t), t)/∂X²] dX∗(t) + O(dt^{1.5}).  (15.74)

In component form, we have

dV_d = [ ∂V_d/∂t + Σⱼ₌₁ⁿ mⱼ ∂V_d/∂Xⱼ + ½ Σⱼ₌₁ⁿ Σₖ₌₁ⁿ bⱼₖ ∂²V_d/∂Xⱼ∂Xₖ ] dt
  + Σⱼ₌₁ⁿ Σₖ₌₁۹ σⱼₖ (∂V_d/∂Xⱼ) dBₖ(t),  (15.75)

where

bⱼₖ = Σₗ₌₁۹ σⱼₗ σₖₗ.  (15.76)

Taking the expectation of equation (15.75), we have

dE[V_d]/dt = ∂E[V_d]/∂t + Σⱼ₌₁ⁿ mⱼ ∂E[V_d]/∂Xⱼ + ½ Σⱼ₌₁ⁿ Σₖ₌₁ⁿ bⱼₖ ∂²E[V_d]/∂Xⱼ∂Xₖ,  (15.77)

or, in vector notation and in terms of V(X∗(t), t),

dV(X∗(t), t)/dt = ∂V(X∗(t), t)/∂t + [∂V(X∗(t), t)/∂X]ᵀ m(X∗(t), U∗(t), t)
  + ½ Tr{ σᵀ(X∗(t), t) [∂²V(X∗(t), t)/∂X²] σ(X∗(t), t) }.  (15.78)
Combining the two time derivatives, we have

−∂V(X∗(t), t)/∂t = L(X∗(t), U∗(t), t) + [∂V(X∗(t), t)/∂X]ᵀ m(X∗(t), U∗(t), t)
  + ½ Tr{ σᵀ(X∗(t), t) [∂²V(X∗(t), t)/∂X²] σ(X∗(t), t) }
= min_U { L(X∗(t), U(t), t) + [∂V(X∗(t), t)/∂X]ᵀ m(X∗(t), U(t), t)
  + ½ Tr{ σᵀ(X∗(t), t) [∂²V(X∗(t), t)/∂X²] σ(X∗(t), t) } }.  (15.79)

This is the HJB equation for the stochastic optimal control problem. The trace term is due to the peculiar behavior of the Brownian motion as described by the Itô calculus.

EXAMPLE 15.6. Consider a control problem of a first order stochastic dynamical system defined as

dX = (−aX + bU) dt + σ dB  (a > 0),  (15.80)
where dB(t) is a unit Brownian motion. Consider a performance index

J = E[ (s_T/2) X²(T) + ½ ∫₀ᵀ U²(τ) dτ ].  (15.81)

The value function is given by

V(X∗(t), t) = E[ (s_T/2) X²(T) + ½ ∫ₜᵀ U²(τ) dτ ].  (15.82)

The HJB equation reads

∂V(X∗(t), t)/∂t = −min_U { ½U² + (−aX + bU) ∂V(X∗(t), t)/∂X + (σ²/2) ∂²V(X∗(t), t)/∂X² }.  (15.83)

The optimal control is still given by

U∗ = −b ∂V(X∗(t), t)/∂X.  (15.84)

Hence,

∂V(X∗(t), t)/∂t = (b²/2)[∂V(X∗(t), t)/∂X]² + aX∗ ∂V(X∗(t), t)/∂X − (σ²/2) ∂²V(X∗(t), t)/∂X².  (15.85)

Assume that

V(X∗(t), t) = (S(t)/2) X∗²(t) + ½ ∫ₜᵀ σ²S(τ) dτ.  (15.86)

The integral term accounts for the stochastic excitation and does not affect the control. We obtain the Riccati equation for S(t),

Ṡ(t) = b²S²(t) + 2aS(t),  S(T) = s_T.  (15.87)

Integration of the Riccati equation leads to

S(t) = 2a s_T e^{2a(t−T)} / [b² s_T (1 − e^{2a(t−T)}) + 2a].  (15.88)

The closed loop system reads

dX = −[a + b²S(t)] X(t) dt + σ dB  (a > 0).  (15.89)
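The closed-loop system (15.89) is easy to simulate with an Euler–Maruyama discretization, which also gives a Monte Carlo estimate of the terminal response moment. A sketch with hypothetical parameter values:

```python
import math, random

# Monte Carlo simulation of the closed-loop system (15.89) with the gain S(t)
# of eq. (15.88), using Euler-Maruyama. All parameter values are hypothetical.
a, b, sT, T, sigma = 1.0, 1.0, 1.0, 1.0, 0.5

def S(t):                                  # eq. (15.88)
    e = math.exp(2 * a * (t - T))
    return 2 * a * sT * e / (b**2 * sT * (1 - e) + 2 * a)

random.seed(0)
n_paths, n_steps = 2000, 200
dt = T / n_steps
total = 0.0
for _ in range(n_paths):
    x, t = 1.0, 0.0                        # start each path at X(0) = 1
    for _ in range(n_steps):               # Euler-Maruyama step
        x += -(a + b**2 * S(t)) * x * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        t += dt
    total += x * x
mean_sq = total / n_paths
print("Monte Carlo estimate of E[X(T)^2]:", round(mean_sq, 4))
```

The controlled paths decay faster than the open-loop rate a, at the cost of the control effort penalized in (15.81).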
15.3. Linear quadratic Gaussian (LQG) control

Consider now a linear dynamical system governed by the Itô differential equation

dX(t) = (AX + GU) dt + σ dB(t),  X(t₀) = x₀,  (15.90)

where X is an n-dimensional vector stochastic process, x₀ is a fixed initial state at t₀, A ∈ Rⁿˣⁿ is the state matrix, G ∈ Rⁿˣᵐ is the control influence matrix, U ∈ Rᵐ is the control vector, σ ∈ Rⁿˣ۹ is the matrix describing the strength of the random excitations, and dB(t) is a q-dimensional vector of independent unit Brownian motions such that

E[dB(t)] = 0,  E[dB(t) dBᵀ(s)] = { I_q dt, t = s; 0, t ≠ s },  (15.91)

where I_q is the q × q identity matrix. Consider a performance index

J = E[ ½ Xᵀ(T) S_T X(T) + ½ ∫_{t₀}^{T} [Xᵀ(τ)QX(τ) + Uᵀ(τ)RU(τ)] dτ ],  (15.92)

where T is the terminal time, S_T ≥ 0, Q ≥ 0, and R > 0. Define the value function in terms of the optimal solution as

V(X∗(t), t) = E[ ½ Xᵀ(T) S_T X(T) + ½ ∫ₜᵀ [Xᵀ(τ)QX(τ) + Uᵀ(τ)RU(τ)] dτ ],  (15.93)

where t₀ ≤ t ≤ T. The HJB equation is given by

−∂V(X∗(t), t)/∂t = min_U { ½[X∗ᵀ(t)QX∗(t) + Uᵀ(t)RU(t)] + [∂V(X∗(t), t)/∂X]ᵀ [AX∗(t) + GU(t)]
  + ½ Tr[ (∂²V(X∗(t), t)/∂X²) σσᵀ ] }.  (15.94)

Minimizing the right-hand side of equation (15.94), we obtain an expression for the optimal control,

U∗(t) = −R⁻¹Gᵀ ∂V(X∗(t), t)/∂X.  (15.95)

Following Example 15.6, we assume that

V(X∗(t), t) = ½ X∗ᵀ(t) S(t) X∗(t) + ½ ∫ₜᵀ Tr[S(τ)σσᵀ] dτ,  (15.96)

where S(t) = Sᵀ(t) and the integral cancels the trace term in equation (15.94). Recall that at time t, both X∗(t) and U∗(t) are assumed to be fully determined. We obtain

∂V(X∗(t), t)/∂X = S(t)X∗(t),  (15.97)

U∗(t) = −R⁻¹GᵀS(t)X∗(t).  (15.98)

Note that

X∗ᵀ(t)S(t)AX∗(t) = ½ X∗ᵀ(t)[S(t)A + AᵀS(t)]X∗(t).  (15.99)

We obtain the matrix Riccati equation for S(t),

−Ṡ(t) = S(t)A + AᵀS(t) − S(t)GR⁻¹GᵀS(t) + Q,  S(T) = S_T.  (15.100)

15.3.1. Kalman–Bucy filter

The optimal control derived here is a full state feedback control and requires knowledge of the full state vector X(t). When the system is observed through a set of sensor measurements, the full state has to be estimated.
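Before turning to the estimator, note that the matrix Riccati equation (15.100) rarely has a closed-form solution for n > 1 and is usually integrated backward from S(T) = S_T. A minimal sketch, with hypothetical system matrices (a lightly damped oscillator) and an explicit Euler step:

```python
import numpy as np

# Backward-in-time explicit Euler integration of the matrix Riccati
# equation (15.100). The system matrices below are hypothetical.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
G = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
ST = np.zeros((2, 2))                  # terminal weight S(T)
T, n = 5.0, 5000
dt = T / n

S = ST.copy()
for _ in range(n):                     # step from t = T down to t = 0
    dS = S @ A + A.T @ S - S @ G @ np.linalg.solve(R, G.T) @ S + Q
    S = S + dt * dS                    # S(t - dt) = S(t) - dt * (dS/dt)

K = np.linalg.solve(R, G.T @ S)        # gain at t = 0: U* = -K X*, cf. eq. (15.98)
assert np.allclose(S, S.T, atol=1e-8)           # S remains symmetric
assert np.all(np.linalg.eigvalsh(S) > -1e-9)    # and positive semi-definite
print("feedback gain K(0) =", K.round(3))
```

For a long horizon, S(0) approaches the solution of the algebraic Riccati equation and the gain becomes time-invariant.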
Consider again the linear stochastic system defined in equation (15.90) and an output equation

Y(t) = CX(t) + v(t),  (15.101)

where C ∈ Rᵖˣⁿ is the output matrix, Y ∈ Rᵖ is the output vector, and v(t) ∈ Rᵖ is the measurement noise. v(t) is commonly assumed to be a zero-mean Gaussian white noise such that

E[v(t)] = 0,  E[v(t)vᵀ(τ)] = D_v δ(t − τ).  (15.102)

Before we present the filter design, we state all the assumptions.
1. (A, C) forms an observable pair.
2. The stochastic excitations to the system, i.e., the Brownian motions in equation (15.90), are uncorrelated with the measurement noise v(t).
3. The state model characterized by the matrices (A, G, σ) is perfect.

Define the state estimation equation as

dX̂(t) = [AX̂ + GU + L(Y(t) − CX̂(t))] dt,  (15.103)

where L is the state estimation gain matrix and X̂(t) is an estimate of X(t) starting from initial conditions such that

E[X̂(0)] = μ_X̂(0),  E[(X̂(0) − μ_X̂(0))(X̂(0) − μ_X̂(0))ᵀ] = C_X̂X̂(0).  (15.104)

Define the state estimate error vector as X̃ = X − X̂. Subtracting equations (15.90) and (15.103), and making use of the output equation, we obtain

dX̃(t) = (A − LC)X̃ dt − Lv(t) dt + σ dB(t).  (15.105)

Assume that the measurement noise can be expressed as a formal derivative of a Brownian motion,

v(t) dt = dB_v(t),  (15.106)

such that dB_v(t) is independent of dB(t) and has the following properties,

E[dB_v(t)] = 0,  E[dB_v(t) dB_vᵀ(s)] = { D_v dt, t = s; 0, t ≠ s }.  (15.107)

Equation (15.105) can then be written in the form of the Itô stochastic differential equation

dX̃(t) = (A − LC)X̃ dt − L dB_v(t) + σ dB(t).  (15.108)

The moment equations for the error process X̃(t) can be readily derived from equation (15.108) by applying Itô's lemma,

dE[X̃(t)]/dt = (A − LC)E[X̃],  (15.109)

dR_X̃X̃(t)/dt = (A − LC)R_X̃X̃ + R_X̃X̃(A − LC)ᵀ + LD_vLᵀ + σσᵀ,  (15.110)

where R_X̃X̃(t) = E[X̃(t)X̃ᵀ(t)] = {E[X̃ⱼ(t)X̃ₖ(t)]} is the correlation matrix of the error process X̃(t). The covariance matrix of the error process is defined as C_X̃X̃(t) = R_X̃X̃(t) − E[X̃(t)]E[X̃ᵀ(t)]. Combining the above two equations, we have

dC_X̃X̃(t)/dt = (A − LC)C_X̃X̃ + C_X̃X̃(A − LC)ᵀ + LD_vLᵀ + σσᵀ.  (15.111)

The mean of the estimation error E[X̃(t)] represents the biased and deterministic deviation in the state estimation, while the covariance matrix C_X̃X̃(t) represents the uncertainty in the estimation due to the measurement noise and random excitation. The objectives of the estimator design are twofold. First, the estimator gain L must be chosen so that the mean error governed by equation (15.109) converges to zero as fast as possible. Second, L must also minimize the uncertainty, i.e., the covariance of the estimation error X̃.

The first objective is fulfilled by following the deterministic state estimation approach. When (A, C) is an observable pair, we can always find a gain matrix L that makes the matrix A_o = A − LC stable, so that the mean estimation error converges exponentially (Lewis and Syrmos, 1995). It is more difficult to fulfill the second objective of making the covariance matrix C_X̃X̃(t), the response of equation (15.111), as small as possible. One common and simplified solution is to minimize the steady state response of equation (15.111) when A_o is stable. In the steady state, equation (15.111) reads

A_o C_X̃X̃ + C_X̃X̃ A_oᵀ + LD_vLᵀ + σσᵀ = 0.  (15.112)

Define a performance index as

J = ½ trace(C_X̃X̃).  (15.113)

The problem is to select the estimator gain L to minimize J subject to the constraint of equation (15.112). Define an extended performance index including the
constraint by means of Lagrange multipliers,

Ĵ = ½ trace(C_X̃X̃) + ½ trace(ES),  (15.114)

where S is an n × n matrix of Lagrange multipliers and

E = A_o C_X̃X̃ + C_X̃X̃ A_oᵀ + LD_vLᵀ + σσᵀ.  (15.115)

The necessary conditions for minimizing Ĵ are

∂Ĵ/∂S = ½ E = 0,  (15.116)

∂Ĵ/∂C_X̃X̃ = A_oᵀS + SA_o + I = 0,  (15.117)

½ ∂Ĵ/∂L = SLD_v − SC_X̃X̃Cᵀ = 0.  (15.118)

Detailed derivations of these necessary conditions can be carried out by writing all the equations in indicial form. This is left as an exercise for the reader. When A_o is stable, the solution S to equation (15.117) is positive definite. Hence, from equation (15.118), we obtain the estimator gain matrix

L = C_X̃X̃CᵀD_v⁻¹.  (15.119)

Substituting the gain matrix into equation (15.116), we obtain an algebraic Riccati equation for the steady state covariance matrix,

AC_X̃X̃ + C_X̃X̃Aᵀ + σσᵀ − C_X̃X̃CᵀD_v⁻¹CC_X̃X̃ = 0.  (15.120)
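A simple way to obtain the steady-state covariance of equation (15.120) numerically is to integrate the covariance equation (15.111), with L = C_X̃X̃CᵀD_v⁻¹ substituted at every step, forward in time until it stops changing. A sketch for a hypothetical two-state, one-output system:

```python
import numpy as np

# Numerical solution of the steady-state filter Riccati equation (15.120)
# by forward integration of the covariance flow. System data are hypothetical.
A = np.array([[0.0, 1.0], [-2.0, -0.3]])
Cm = np.array([[1.0, 0.0]])            # output matrix C (p = 1)
sig = np.array([[0.0], [0.4]])         # excitation strength sigma
Dv = np.array([[0.1]])                 # measurement-noise intensity
W = sig @ sig.T
Dvi = np.linalg.inv(Dv)

P = np.zeros((2, 2))
dt = 1e-3
for _ in range(100000):                # iterate until the flow reaches its fixed point
    P = P + dt * (A @ P + P @ A.T + W - P @ Cm.T @ Dvi @ Cm @ P)

residual = A @ P + P @ A.T + W - P @ Cm.T @ Dvi @ Cm @ P   # eq. (15.120)
L = P @ Cm.T @ Dvi                                          # eq. (15.119)
assert np.linalg.norm(residual) < 1e-8
assert np.all(np.linalg.eigvals(A - L @ Cm).real < 0)       # stable error dynamics
print("Kalman-Bucy gain L =", L.ravel().round(4))
```

The fixed point of the iteration satisfies (15.120) exactly, and the resulting gain makes A − LC stable, as required for the mean error equation (15.109).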
After obtaining the covariance matrix, we can then compute the estimator gain matrix.

EXAMPLE 15.7. Consider a control problem of a first order stochastic dynamical system defined as

dX = (−aX + bU) dt + σ dB  (a > 0),  (15.121)

where dB(t) is a unit Brownian motion. Consider a performance index

J = E[ (s_T/2) X²(T) + ½ ∫₀ᵀ [qX²(τ) + rU²(τ)] dτ ].  (15.122)

The output of the system is given by

dY(t) = dX(t) + v(t) dt,  (15.123)

where the measurement noise v(t) is a formal derivative of a Brownian motion,

v(t) dt = dB_v(t).  (15.124)

dB_v(t) is independent of dB(t) and has the following properties,

E[dB_v(t)] = 0,  E[dB_v(t) dB_vᵀ(s)] = { D_v dt, t = s; 0, t ≠ s }.  (15.125)

From equation (15.98), we have

U∗(t) = −(b/r)S(t)X∗(t).  (15.126)

The Riccati equation for S(t) reads

−Ṡ(t) = −(b²/r)S²(t) − 2aS(t) + q,  S(T) = s_T.  (15.127)

Consider the steady state suboptimal control when Ṡ(t) = 0. We obtain

S = (1/b²)[ −ar + √(b²qr + a²r²) ].  (15.128)

The optimal estimation gain is given by equation (15.119),

L = C_X̃X̃D_v⁻¹,  (15.129)

where the steady state covariance satisfies the algebraic Riccati equation (15.120),

−2aC_X̃X̃ + σ² − C_X̃X̃²D_v⁻¹ = 0.  (15.130)

We obtain

C_X̃X̃ = −aD_v + √(a²D_v² + D_vσ²),  (15.131)

L = −a + √(a² + σ²/D_v).  (15.132)

The closed loop system reads

U(t) = −(1/(br))[ √(b²qr + a²r²) − ar ] X̂(t),  (15.133)

dX = { −aX(t) + (1/r)[ ar − √(b²qr + a²r²) ] X̂(t) } dt + σ dB,  (15.134)

dX̃(t) = −√(a² + σ²/D_v) X̃ dt − [ −a + √(a² + σ²/D_v) ] dB_v(t) + σ dB(t).  (15.135)
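The scalar estimator results (15.131) and (15.132) are easy to verify directly against the algebraic Riccati equation (15.130) and the gain formula (15.129). A sketch with hypothetical parameter values:

```python
import math

# Verification of the scalar Kalman-Bucy results (15.131)-(15.132).
# Parameter values are hypothetical.
a, sigma, Dv = 0.9, 0.6, 0.2

C = -a * Dv + math.sqrt(a**2 * Dv**2 + Dv * sigma**2)   # eq. (15.131)
L = -a + math.sqrt(a**2 + sigma**2 / Dv)                # eq. (15.132)

assert abs(-2 * a * C + sigma**2 - C**2 / Dv) < 1e-12   # satisfies eq. (15.130)
assert abs(L - C / Dv) < 1e-12                          # consistent with eq. (15.129)
print("steady-state covariance:", round(C, 6), " estimator gain:", round(L, 6))
```

Note that larger measurement-noise intensity D_v lowers the gain L, as the filter then trusts the model more than the measurements.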
15.4. Sufficient conditions

The HJB equation for both deterministic and stochastic systems is a sufficient condition, not a necessary condition, for optimality. For deterministic systems, there are two other sufficient conditions. When there is no constraint on the control, a sufficient condition for the optimal control to minimize the performance index is

∂²H(x∗, u∗, λ∗, t)/∂u² ≥ 0.  (15.136)

For linear systems with the quadratic performance index of minimum energy defined in equation (15.4),

J = ½ xᵀ(T) S_T x(T) + ½ ∫_{t₀}^{T} [xᵀ(τ)Qx(τ) + uᵀ(τ)Ru(τ)] dτ,  (15.137)

the sufficient condition reads

R > 0.  (15.138)

That is, the control penalty must be positive. When the control is bounded, i.e., u ∈ U ⊂ Rᵐ where U is a bounded set, Pontryagin's minimum principle is both necessary and sufficient for the control to minimize the performance index. The principle states

H(x∗, u∗, λ∗, t) ≤ H(x∗, u, λ∗, t),  ∀u ∈ U.  (15.139)
Exercises

EXERCISE 15.1. Consider the SDOF linear system

ẍ + 2ζωₙẋ + ωₙ²x = ωₙ²u,  x(0) = x₀,  ẋ(0) = ẋ₀,  (15.140)

and a performance index

J = ½ xᵀS_Tx(T) + ½ ∫₀^{T_f} [xᵀQx(τ) + uᵀRu(τ)] dτ,  (15.141)

where x = {x, ẋ}ᵀ and u ∈ R. Derive all the governing equations for the optimal control by going through the variational steps.

EXERCISE 15.2. Consider the control problem of a unit mass particle moving along a line. The equation of motion is given by

ẍ = u(t),  (15.142)

where x denotes the position of the particle on the line and u(t) is a control force in the direction of the line. Let the performance index be

J = ∫_{t₀}^{T} 1 dτ = T − t₀.  (15.143)

Show that the result is the bang-bang control

u(t) = −sgn(λ(t)),  (15.144)

where λ(t) is the co-state variable. The control is either +1 or −1 depending on whether the state is on one side of a switching curve or the other. Sketch the switching curve in the phase plane.

EXERCISE 15.3. Derive the HJB equation for the control system in Example 15.1.

EXERCISE 15.4. Derive equations (15.133) to (15.135).

EXERCISE 15.5. Consider an index

J = ½ trace(P) + ½ trace(ES) = ½ Pₖₖ + ½ EₖⱼSⱼₖ,  (15.145)

where the summation convention over repeated indices in their corresponding ranges is adopted, and

Eₖⱼ = [AₖₗPₗⱼ + PₖₗAⱼₗ] + LₖₗDₗₕLⱼₕ + bₖⱼ,  Aₖₗ = A⁰ₖₗ − LₖᵣCᵣₗ.  (15.146)

Derive equations (15.116) to (15.118) by using the indicial operations.
Chapter 16
Stochastic Optimal Control with the GCM Method
The cell mapping methods, originally developed by Hsu (1980), have been applied to the optimal control problem of deterministic dynamic systems (Hsu, 1985). Strategies to solve optimal control problems with fixed final time (Crespo and Sun, 2003b) and fixed final state (Crespo and Sun, 2000) terminal conditions have been developed and applied to several problems. The generalized cell mapping (GCM) method is a very effective tool for studying the global behavior of strongly nonlinear stochastic systems (Hsu, 1987). In Sun (1995), Sun and Hsu (1990), the GCM method is integrated with the short-time Gaussian approximation (STGA) scheme to provide a very efficient way of constructing Markov chains that describe the global dynamics of the system. In this chapter, these tools are applied to the stochastic optimal control problem. Specifically, we first present Bellman's principle of optimality and then use the GCM method with STGA to generate and investigate global control solutions for stochastic optimal control problems. The STGA-GCM method involves both analytical and numerical steps, and offers several advantages: it can handle strongly nonlinear and non-smooth systems, it can include various state and control constraints in the solution process, and it leads to global solutions. These features are particularly valuable because the stochastic optimal control problem is considerably more intractable than its deterministic counterpart.
16.1. Bellman's principle of optimality

Bellman's principle of optimality states as follows. Let (X*, U*) be an optimal solution pair over the time interval [t0, T] subject to an initial condition X(t0) = x0. Let t̂ be a time instant such that t0 ≤ t̂ ≤ T. Then, (X*, U*) is still the optimal solution pair over [t̂, T] subject to the initial condition X(t̂) = X*(t̂). Let V(x0, t0, T) = J(U*, x0, t0, T) be the value function or optimal cost function. Bellman's principle of optimality can be stated as (Yong and Zhou, 1999)
V(x0, t0, T) = min_U E[ ∫_{t0}^{t̂} L(X(t), U(t)) dt + ∫_{t̂}^{T} L(X*(t), U*(t)) dt + φ(X*(T), T) ],   (16.1)

where t0 ≤ t̂ ≤ T. Consider the optimal control problem of the system starting from Xj = X(jτ) in the time interval [jτ, T], where τ is a discrete time step. Define an incremental cost and an accumulative cost as

Jτ(Xj, Uj) = E[ ∫_{jτ}^{(j+1)τ} L(X(t), U(t)) dt ],   (16.2)

JT(X*_{j+1}, U*_{j+1}) = E[ φ(X*(T), T) + ∫_{(j+1)τ}^{T} L(X*(t), U*(t)) dt ],   (16.3)

where (X*(t), U*(t)) is the optimal solution pair over the time interval [(j+1)τ, T] and Uj = U(jτ). Then, equation (16.1) can be rewritten as

V(Xj, jτ, T) = min_U [ Jτ(Xj, Uj) + JT(X*_{j+1}, U*_{j+1}) ].   (16.4)
The incremental cost Jτ (Xj , Uj ) is the cost for the system to march one time step forward starting from a deterministic initial condition Xj . The system lands on an intermediate set of the state variables. The accumulative cost JT (X∗j +1 , U∗j +1 ) is the cost for the system to reach the target set starting from this intermediate set, and is calculated through the accumulation of incremental costs over several short time intervals between (j + 1)τ and T . Bellman’s principle of optimality as stated in equation (16.4) suggests that one can obtain a local solution of the optimal control problem over a short time interval τ to form the global solution provided that certain continuity conditions on the solution are satisfied. Here, we must impose such conditions in the probability sense. This is explained later. The global solution consists of all the intermediate solutions that are constructed backward in time starting from the terminal condition φ(X(T ), T ) at time T .
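To make the recursion in equation (16.4) concrete, the following sketch carries out the backward minimization on a coarse grid for a simple deterministic scalar system. The dynamics ẋ = −x + u, the quadratic Lagrangian, and all numerical values are invented for illustration and are not from the text:

```python
import numpy as np

# Tabular sketch of the backward recursion (16.4):
#   V(x_j, j) = min_u [ J_tau(x_j, u) + V(x_{j+1}, j+1) ],
# for the illustrative deterministic system x' = -x + u with Lagrangian
# L = x^2 + u^2 (grid, horizon and costs are invented for illustration).
tau = 0.1
xs = np.linspace(-2.0, 2.0, 41)            # discretized state "cells"
us = np.linspace(-1.0, 1.0, 21)            # admissible control levels

V = xs**2                                  # terminal cost phi(x(T)) = x^2
policy = np.zeros_like(xs)
for j in range(50):                        # backward sweeps from t = T
    Vnew = np.empty_like(V)
    for i, x in enumerate(xs):
        best = np.inf
        for u in us:
            x_next = x + (-x + u) * tau              # one Euler step
            k = int(np.argmin(np.abs(xs - x_next)))  # nearest cell, cf. GCM
            cost = (x**2 + u**2) * tau + V[k]        # incremental + cost-to-go
            if cost < best:
                best, policy[i] = cost, u
        Vnew[i] = best
    V = Vnew

# The value function vanishes at the target cell x = 0, and the stored
# policy pushes both ends of the grid toward the origin.
print(round(float(V[20]), 4), policy[0] > 0, policy[-1] < 0)
```

Each backward sweep extends the horizon by one step τ; the stored policy plays the role of the optimal pairs saved by the cell mapping algorithm of Section 16.2.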
16.2. The cell mapping solution approach

We need to evaluate the mathematical expectations in equations (16.2) and (16.3) for a given control. In other words, we need the conditional probability density function pX(x, τ|x0, 0), which satisfies the FPK equation associated with the
process. It should be noted that for a given feedback control law U = f(X(t)), the response X(t) is a stationary Markov process (Lin and Cai, 1995). For a very small τ, the density function pX(x, τ|x0, 0) is known to be approximately Gaussian within an error of order O(τ²) (Risken, 1984). When pX(x, τ|x0, 0) is Gaussian, only the first and second order moments of X(t) need to be evaluated in order to completely specify the density function. We can readily derive the moment equations from the Itô equation. When the system is nonlinear, the moment equations can be closed by applying the Gaussian closure method (Lin and Cai, 1995; Sun and Hsu, 1987, 1989a), which is consistent with the short-time Gaussian approximation. The short-time conditional PDF pX(x, τ|x0, 0) gives a probabilistic description of the system response X(t). The short-time solution has been implemented in two ways in the literature. The first is the path integral (Naess and Moe, 2000; Tombuyses and Aldemir, 1997; Wehner and Wolfer, 1983a, 1983b), which treats the phase space as a continuum. The second is the cell mapping method (Sun and Hsu, 1990).

16.2.1. Generalized cell mapping (GCM)

The GCM method provides a probabilistic description of the time evolution of the system response. After the optimal control is determined, the GCM method is used to evaluate the control performance in the time domain. Here, we present a brief description of the method. The GCM method discretizes the state space into a collection of small cells and discretizes time to form mappings describing the evolution of the system dynamics. When the system is initially in a cell with probability one, after one mapping time step the system evolves to occupy a region containing a number of cells, as illustrated in Figure 16.1.
Let pX(x, t) be the probability density function of the response process X(t) at time t, and pX(x, t|x0, t0) the probability density function of X(t) at time t conditional on X0 at time t0. When X(t) is a stationary Markov process, we have pX(x, t|x0, t0) = pX(x, t − t0|x0, 0) for t − t0 > 0. Under this condition, the consistency equation for the probability density function reads

pX(x, t) = ∫_{Rⁿ} pX(x, t − t0|x0, 0) pX(x0, t0) dx0.   (16.5)
Let t0 = (n − 1)τ and t = nτ, where τ is a mapping time step. This equation can be rewritten as

pX(x, nτ) = ∫_{Rⁿ} pX(x, τ|x0, 0) pX(x0, (n − 1)τ) dx0.   (16.6)
Figure 16.1. An illustration of the generalized cell mapping in the two-dimensional phase plane.
We discretize Rⁿ into a countable set S of cells denoted by z1, z2, ..., zm ∈ S. Equation (16.6) can be approximated by the evolution equation of a discrete Markov chain as

p(n + 1) = P p(n),   (16.7)

where p(n) = [pi(n)], with pi(n) the probability of the system being in the ith cell at time nτ, and P = [Pij], with Pij the probability of the system being in the ith cell at time τ when the system is initially in the jth cell with probability one. The terms in equation (16.7) are defined as

Pij = ∫_{Ci} pX(x, τ|xj, 0) dx,   pi(n) = ∫_{Ci} pX(x, nτ) dx,   (16.8)

where Ci is the domain in Rⁿ occupied by cell zi, and xj is the center of cell zj (Sun and Hsu, 1990). The Markov chain, represented by the transition probability matrix P, contains the topological structure of the global system response in the cellular state space. The iteration of equation (16.7) generates the time evolution of the probability density function of x(t), including the transient and steady state responses of the system. For more discussion of cell mapping methods, the reader is referred to Fischer and Kreuzer (2001), Hsu (1987). There have been two ways to generate the transition probability matrix. The first is Monte Carlo simulation. This method is time-consuming and can have significant statistical errors (Sun and Hsu, 1988b). The second method makes use of the short-time Gaussian property
of the probability density function (Sun and Hsu, 1990). This scheme has provided a very efficient and accurate way of calculating the transition probability matrix P. After applying the algorithm, the optimal control solution is available at the cell centers. The calculation of a controlled Markov chain can be done as follows. For each cell zj ∈ S, the moment equations with u = u*(zj) are integrated for τ seconds. Then, the corresponding Gaussian density function pX(x, τ|xj, 0) is built. Equation (16.8) is used to construct the transition probability matrix P. Equation (16.7) then provides the transient and steady state response of the controlled system starting from any initial condition.

16.2.2. A backward algorithm

The backward solution process starts from the last segment over the time interval [T − τ, T]. Since the terminal condition for the last segment of the local solutions is specified, we can obtain a family of local optimal solutions for all possible initial conditions x(T − τ). We then repeat this process to obtain the next segment of the local optimal solution over the time interval [T − 2τ, T], subject to the continuity condition at t = T − τ. In general, the optimal control in the time interval [jτ, T] is determined by minimizing the sum of the incremental cost and the accumulative cost in equation (16.4), leading to V(xj, jτ, T) subject to the continuity condition x((j + 1)τ) = x*_{j+1}, where x*_{j+1} is a set of initial conditions used for the problem in the time interval [(j + 1)τ, T]. Since x*_{j+1} is a random variable, the equality x((j + 1)τ) = x*_{j+1} has to be interpreted in a probabilistic sense. Here, it is interpreted in the sense of maximum probability. Quantitatively, this is done as follows. In theory, the conditional PDF pX(x, τ|x0, 0) of a diffusion process, even for a very short time τ, covers the entire phase space. Let Ω be the extended target set, to be defined later, such that x*_{j+1} ∈ Ω. For a given control, we define

PΩ(xj, uj) = ∫_Ω pX(x, τ|xj, 0) dx.   (16.9)

PΩ represents the probability that the system enters the extended target set Ω in time τ when it starts at xj with probability one. The controlled response x(t) starting from a set of initial conditions xj becomes a candidate for the optimal solution when PΩ is maximal among all the initial conditions under consideration. Since we shall use a cellular structure of the phase space, we consider a finite region D ⊂ Rⁿ and discretize D into a countable number of small cells. We shall assume that the control is constant over each time interval [jτ, (j + 1)τ]. Let Ω ⊂ Rⁿ denote the set of cells that form the target set defined by ψ(x(T), T) = 0. As the backward solution steps proceed, the set Ω will be expanded. The terminal
cost JT is initially assigned to be E[φ(x(T), T)]. In the following discussion, we suppress the time index j because the solution is implicitly ordered backward in time by the algorithm. Instead, we denote uk as one of the admissible controls, where k ∈ {1, 2, ..., K}, and zj as one of the cell centers encountered during the search process. The backward algorithm for searching the optimal control solutions is stated as follows:

1. Find all the cells that surround the target set Ω. Denote these cell centers as zj.
2. Construct the conditional PDF pX(x, τ|zj, 0) for each control uk and for all cell centers zj. Call every combination (zj, uk) a candidate pair.
3. Calculate the incremental cost Jτ(zj, uk), the accumulative cost JT(z*_ĵ, u*_k̂) and PΩ(zj, uk) for all candidate pairs. z*_ĵ is an image cell of zj in Ω, and u*_k̂ is the optimal control identified for z*_ĵ in previous iterations.
4. Search for the candidate pairs that minimize Jτ(zj, uk) + JT(z*_ĵ, u*_k̂) and maximize PΩ(zj, uk). Denote such pairs as (z*_j, u*_k).
5. Save the minimized accumulative cost function JT(z*_j, u*_k) = Jτ(z*_j, u*_k) + JT(z*_ĵ, u*_k̂) and the optimal pairs (z*_ĵ, u*_k̂).
6. Expand the target set Ω by including the cells z*_j.
7. Repeat the search from Step (1) to Step (6) until all the cells in the entire region D are processed or until the original initial condition x0 of the optimal control problem is reached.

After the algorithm finishes, every cell that has been processed in the backward search has an optimal control and a minimized accumulative cost associated with it. Hence, the present method obtains not only the optimal control solution for a specific initial condition, but also the solutions for all the initial conditions encountered during the backward search process.
As a computational note, in the application of the algorithm we have chosen the integration time τ as a function of the initial condition zj in order to obtain mean trajectories with the same length scale as the cells uniformly over the entire region D. The GCM method provides a description of the time evolution of the probability density function of the system response. After finding the optimal control solution for all the cells in the domain of interest, we use the GCM method to create a control dependent Markov chain. In this fashion, the performance of the control solution is evaluated.
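As a minimal illustration of equations (16.7) and (16.8), the following sketch builds the one-step transition probability matrix for a scalar Ornstein–Uhlenbeck process, whose short-time conditional PDF is exactly Gaussian, and iterates the resulting Markov chain. The process and all parameter values are illustrative choices, not taken from the text:

```python
import numpy as np
from math import erf, exp, sqrt

# STGA-GCM construction of equation (16.8) for the Ornstein-Uhlenbeck
# process dX = -a X dt + sigma dB, whose short-time conditional PDF is
# exactly Gaussian (process and parameter values are illustrative).
a, sigma, tau = 1.0, 0.5, 0.1
edges = np.linspace(-3.0, 3.0, 61)           # boundaries of 60 cells C_i
centers = 0.5 * (edges[:-1] + edges[1:])

def Phi(z):                                  # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# One-step conditional mean and standard deviation from a cell center x_j
mean = centers * exp(-a * tau)
sd = sqrt(sigma**2 * (1.0 - exp(-2.0 * a * tau)) / (2.0 * a))

# P[i, j] = probability of landing in cell C_i from center x_j, eq. (16.8)
P = np.array([[Phi((edges[i + 1] - mean[j]) / sd) - Phi((edges[i] - mean[j]) / sd)
               for j in range(len(centers))]
              for i in range(len(centers))])

# Iterate p(n + 1) = P p(n), equation (16.7), from a point mass in one cell
p = np.zeros(len(centers)); p[5] = 1.0
for _ in range(200):
    p = P @ p

m_stat = float(centers @ p)                  # stationary mean, close to 0
print(round(float(P[:, 30].sum()), 3), round(m_stat, 3))
```

The columns of P are (nearly) probability vectors; the small deficit in the outermost columns is the probability leaking out of the computational domain, a point revisited in Section 16.6.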
16.3. Control of one-dimensional nonlinear system

Consider the one-dimensional stochastic differential equation in the Stratonovich sense given by

Ẋ + aX + bX² + cX³ = U(t) + W(t),   (16.10)
where the control is subject to the constraint |U| ≤ 1, and

E[W(t)] = 0,   E[W(t)W(t + ξ)] = 2Dδ(ξ),   ξ > 0.

The cost functional is defined by

J = E[ ∫_{t0}^{∞} (αX² + βU²) dt ].   (16.11)
The control objective is to drive the system from any initial condition in x ∈ [−1.5, 1.5] to the origin while J is minimized. The moment equations for the mean m(t) and the variance v(t) of the stochastic process X(t) are given by

ṁ = −am − b(m² + v) − cm(m² + 3v) + u,   (16.12)
v̇ = −2av − 4bmv − 6cv(m² + v) + 2D,

where u = E[U]. At the current time t, the control is fully determined and is treated as a deterministic quantity. Hence, its variance is zero and does not affect v explicitly. Note that the Gaussian closure method has been applied to express the higher order moments in terms of the mean and variance in equation (16.12).

Case studies

For this problem, the region of interest x ∈ [−1.5, 1.5] is divided into 71 cells. The control set is discretized uniformly into 21 levels: u ∈ {−1, −0.9, ..., 1}. Assume that a = −2, b = 0, c = 2, α = 0.5, β = 4 and D = 0.1. The associated deterministic system has two attracting points separated by an unstable node at x = 0. The bi-modal uncontrolled steady state probability density function is shown in Figure 16.2. The optimal control solution obtained is presented in Figure 16.3. The time evolutions of the moments and the expected value of the Lagrangian E[L] for the optimally controlled response are shown in Figure 16.4. The controlled steady state probability density function is shown in Figure 16.5. In this case, the control is able to drive the system to the target, resulting in a uni-modal steady state probability density function centered at x = 0. The origin of the uncontrolled system is thus globally stabilized in the stochastic sense. Next, we consider the same system with different parameters: a = −3, b = 1, c = 5, α = 0.5, β = 2 and D = 0.1. The inclusion of the quadratic term with b ≠ 0 breaks the symmetry of the system. The uncontrolled steady-state probability density function is shown in Figure 16.6. The optimal control solution is presented in Figure 16.7.
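A simple Euler integration of the moment equations (16.12) illustrates the short-time moment propagation used by the method; the step size, horizon, and initial moments below are illustrative:

```python
# Euler integration of the Gaussian-closure moment equations (16.12)
# for the symmetric case a = -2, b = 0, c = 2, D = 0.1 with u = 0
# (step size, horizon and initial moments are illustrative).
a, b, c, D = -2.0, 0.0, 2.0, 0.1
dt, u = 1e-3, 0.0
m, v = 1.0, 0.01                      # initial mean and variance
for _ in range(20000):                # integrate to t = 20
    dm = -a * m - b * (m**2 + v) - c * m * (m**2 + 3.0 * v) + u
    dv = -2.0 * a * v - 4.0 * b * m * v - 6.0 * c * v * (m**2 + v) + 2.0 * D
    m, v = m + dm * dt, v + dv * dt

# Without control the mean settles near the stable equilibrium x = 1 of
# the deterministic system, consistent with the bi-modal PDF of Figure 16.2.
print(round(m, 2), round(v, 3))
```

In the cell mapping algorithm the same equations are integrated only over one short step τ from each cell center, for each admissible control level.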
The time evolutions of the mean, variance and E[L], starting from the uncontrolled steady state response of the system, are shown in Figure 16.8. The controlled steady state probability density function of the system
Figure 16.2. Steady state PDF of the uncontrolled system for the symmetric case. (With permission from Springer Science and Business Media.)
Figure 16.3. Optimal control solution with |U| ≤ 1 for the symmetric case. (With permission from Springer Science and Business Media.)
is shown in Figure 16.9. In this case, the control authority is limited, so the steady state probability density function is bi-modal, indicating the existence of two stable equilibrium states at the peak locations of the probability density function.
Figure 16.4. Time evolutions of the controlled system response for the symmetric case; the target set is the cell centered at x = 0. Top: mean m (–) and variance v (- -) starting from the initial condition specified by the steady state PDF of the uncontrolled system. Bottom: time evolution of the integrand of the cost functional, E[L]. (With permission from Springer Science and Business Media.)
Figure 16.5. Steady state PDF of the controlled system for the symmetric case. (With permission from Springer Science and Business Media.)
Figure 16.6. Steady state PDF of the uncontrolled system for the asymmetric case. (With permission from Springer Science and Business Media.)
Figure 16.7. Optimal control solution with |U| ≤ 1 for the asymmetric case. (With permission from Springer Science and Business Media.)
Figure 16.8. Time evolution for the asymmetric case. Top: the mean m(-) and the variance v(- -) of the controlled system starting from the initial condition specified by the steady state PDF of the uncontrolled system. The evolution of the integrand of the cost functional, E[L], is shown in the bottom. (With permission from Springer Science and Business Media.)
Figure 16.9. Steady state PDF of the controlled system for the asymmetric case. The target set is the cell centered at x = 0. (With permission from Springer Science and Business Media.)
16.4. Control of a linear oscillator

Consider a linear spring-mass oscillator subject to a white noise excitation

Ẍ + Ω²X = U(t) + W(t),   (16.13)

where E[W(t)] = 0, E[W(t)W(t + ξ)] = 2Dδ(ξ), ξ > 0, and U(t) is a bounded control. The cost functional of the control problem is

J = E[ ∫_{t0}^{∞} (αX1² + βX2² + γU²) dt ],   (16.14)
where α, β and γ are positive weighting coefficients, X1 = X, X2 = Ẋ and |U| ≤ 1. The control objective is to drive the system from any arbitrary initial condition to the origin of the phase space while J is minimized. Since the system is linear, the moment equations are straightforward to derive and integrate; we omit the details here.

Case studies

Let Ω = 1, α = 0.5, β = 0.5 and D = 0.5. The region defined by x1 ∈ [−25/7, 25/7] and x2 ∈ [−25/7, 25/7] is discretized with 25 × 25 = 625 uniform cells, while the control set is discretized to be {−1, −3/4, ..., 1} with 9 levels. First, we take γ = 0 in the cost functional so that the control effort is not penalized. This problem has been extensively studied (Bratus et al., 2000; Dimentberg et al., 2000; Iourtchenko, 2000). In Dimentberg et al. (2000), the solution to the fixed final state problem was found as a limiting process of the fixed final time problem. It was found that the dry-friction control, also known as bang-bang control, is not only the asymptotic solution to the fixed final time problem in an "outer domain", but also the optimal solution for the steady-state response. Here, we solve the same problem with a target cell centered at the origin of the phase space. Figure 16.10 shows the time evolutions of the moments and E[L] of the uncontrolled system. Note that the system is marginally stable and the second order moments grow unbounded. The uncontrolled system response does not reach a steady state probability density function. Figure 16.11 shows the global distribution of optimal controls obtained by the present method. The solution in the "outer domain" follows the dry friction control law found analytically in Dimentberg et al. (2000). The shape of the switching curves agrees with the ones found in Iourtchenko (2000), where a finite difference scheme with exact boundary conditions was used.
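For reference, the moment equations omitted above follow directly from the Itô equation; the sketch below states and integrates them with u = E[U] treated as deterministic (notation m_jk = E[X1^j X2^k]; the initial moments are illustrative):

```python
# Sketch of the omitted moment equations for the oscillator (16.13),
# derived from the Ito equation with u = E[U] treated as deterministic:
#   m10' = m01,            m01' = -Omega^2 m10 + u,
#   m20' = 2 m11,          m11' = m02 - Omega^2 m20 + u m10,
#   m02' = -2 Omega^2 m11 + 2 u m01 + 2 D.
Omega, D, dt, u = 1.0, 0.5, 1e-3, 0.0      # Omega and D as in the case study
m10, m01 = 1.0, 0.0                        # illustrative initial condition
m20, m11, m02 = 1.0, 0.0, 0.0
for _ in range(10000):                     # Euler integration to t = 10
    dm10 = m01
    dm01 = -Omega**2 * m10 + u
    dm20 = 2.0 * m11
    dm11 = m02 - Omega**2 * m20 + u * m10
    dm02 = -2.0 * Omega**2 * m11 + 2.0 * u * m01 + 2.0 * D
    m10 += dm10 * dt; m01 += dm01 * dt
    m20 += dm20 * dt; m11 += dm11 * dt; m02 += dm02 * dt

# With u = 0 and Omega = 1, d(m20 + m02)/dt = 2D, so the total second
# moment grows linearly, m20 + m02 = 1 + 2*D*t: the unbounded growth
# noted for Figure 16.10.
print(round(m20 + m02, 2))  # → 11.0
```

The marginally stable oscillator accumulates mean-square response at the constant rate 2D, which is why the uncontrolled response never reaches a stationary probability density function.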
The time evolutions of the mean, variance and cost function of the controlled response starting from a deterministic initial condition are shown
Figure 16.10. Time evolution of the moments and E[L] for the uncontrolled spring-mass oscillator. In the middle figure, m10 (-·-) and m01 (- -). In the bottom figure, m20 (-·-) and m02 (- -). (With permission from Springer Science and Business Media.)
Figure 16.11. Global distribution of the optimal control for the spring-mass oscillator. Every cell is color-coded according to the optimal control level. Cells in white have u* = 1 and cells in gray have u* = −1. The target cell in the center is black. (With permission from Springer Science and Business Media.)
Figure 16.12. Time evolution of the moments and E[L] of the controlled response with γ = 0. Legends are the same as in Figure 16.10. (With permission from Springer Science and Business Media.)
Figure 16.13. Steady state PDF pX (x1 , x2 ) of the controlled response with γ = 0. The target set is the cell centered at (0, 0). (With permission from Springer Science and Business Media.)
Figure 16.14. Vector field of the mean response of the controlled spring-mass oscillator with α = β = γ = 0.5. (With permission from Springer Science and Business Media.)
Figure 16.15. Time evolution of the moments and E[L] for the controlled system. Legends are the same as in Figure 16.10. (With permission from Springer Science and Business Media.)
in Figure 16.12. The steady-state probability density function of the controlled response is shown in Figure 16.13. It can be seen that the control drives the system to the target cell with a very high probability. Next, we set γ = 0.5 so that the control is penalized. The optimal control is no longer bang-bang, and its analytical solution is not available. Figure 16.14 shows the vector field of the mean response of the controlled system. The time evolutions of E[L] and the moments of the controlled response are shown in Figure 16.15.
16.5. Control of a Van der Pol oscillator

Consider a stochastic Van der Pol oscillator given by

Ẍ + θ(X² − 1)Ẋ + Ω²X = U(t) + W(t),   (16.15)
where E[W(t)] = 0, E[W(t)W(t + ξ)] = 2Dδ(ξ), ξ > 0, and U(t) is bounded with |U| ≤ 1. The same cost functional as in equation (16.14) is considered in this example. The control objective is to drive the system from any arbitrary initial condition to the origin of the phase space while J is minimized. This is equivalent to the global stabilization of the origin, which is unstable without control. Introduce a notation for the moments as m_jk = E[X1^j X2^k]. The moment equations of the state variables can be derived as

ṁ10 = m01,
ṁ01 = θ(m01 − m21) − Ω²m10 + u,
ṁ20 = 2m11,   (16.16)
ṁ02 = 2θ(m02 − m22) − 2Ω²m11 + 2um01 + 2D,
ṁ11 = θ(m11 − m31) − Ω²m20 + um10 + m02,

where u = E[U] and is considered to be fully determined at time t. The higher order moments are approximated by the Gaussian closure with the following relationships:

m12 = m10m02 + 2m11m01 − 2m10m01²,
m30 = 3m10m20 − 2m10³,
m21 = 2m10m11 + m01m20 − 2m10²m01,   (16.17)
m22 = 2(m10m12 + m01m21) − 2(m10²m02 + m01²m20) − 8m10m01m11 + m20m02 + 2m11² + 6m10²m01²,
m31 = m01m30 + 3m10m21 − 6m10m01m20 − 6m11m10² + 3m11m20 + 6m01m10³.
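The closure relations (16.17) are exact for jointly Gaussian variables, which can be checked directly against Isserlis-type moment formulas; the numerical values below are arbitrary test inputs:

```python
# Check of the closure relation m21 = 2*m10*m11 + m01*m20 - 2*m10^2*m01
# from (16.17) against the exact Gaussian moment E[X1^2 X2] obtained from
# Isserlis' theorem (mean and covariance values are arbitrary test inputs).
mu1, mu2 = 0.7, -0.3                  # means of (X1, X2)
s11, s12 = 0.4, 0.2                   # Var[X1] and Cov[X1, X2]

m10, m01 = mu1, mu2
m20 = s11 + mu1**2                    # E[X1^2]
m11 = s12 + mu1 * mu2                 # E[X1 X2]

# Exact Gaussian third moment: E[X1^2 X2] = mu1^2 mu2 + mu2 s11 + 2 mu1 s12
m21_exact = mu1**2 * mu2 + mu2 * s11 + 2.0 * mu1 * s12
# Closure expression from equation (16.17):
m21_closure = 2.0 * m10 * m11 + m01 * m20 - 2.0 * m10**2 * m01

print(abs(m21_exact - m21_closure) < 1e-12)  # → True
```

The remaining relations in (16.17) can be verified in the same way; for non-Gaussian responses they are approximations, which is the essence of the Gaussian closure.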
Case studies The parameters of the system are set as θ = 1, Ω = 1, α = 0.5, β = 0.5, γ = 0 and D = 0.2. The region defined by x1 ∈ [−4, 4] and x2 ∈ [−4, 4] is discretized with 25 × 25 = 625 uniform cells. When γ = 0, the control is found to be bang-bang. Thus, the control set is bi-level {−1, 1}. The steady state probability density function with the limit cycle of the uncontrolled system is shown in Figure 16.16. At the peak location of the probability density function, the response moves slower along the limit cycle. The same behavior exists in the deterministic Van der Pol oscillator. Figure 16.17 shows the vector field of the mean response of the controlled system. The control has clearly eliminated the limit cycle behavior of the response. Time evolutions of the moments and E[L] with the initial condition specified by the steady state probability density function in Figure 16.16 are shown in Figure 16.18. The corresponding steady state probability density function of the controlled response is shown in Figure 16.19. The steady state response has a mean right on the target with a very small variance. A note on the computation is in order. For the examples studied, the system response is kept in a bounded domain of the state space with probability one. The leakage of the probability to the outside of the domain is negligible. We did not study the effect of cell size on the accuracy of the control solution. In general,
Figure 16.16. Steady state PDF of the uncontrolled Van der Pol oscillator. (With permission from Springer Science and Business Media.)
Figure 16.17. Vector field of the mean response of the controlled Van der Pol oscillator. (With permission from Springer Science and Business Media.)
Figure 16.18. Time evolution of the means, variances and E[L] of the response of the controlled Van der Pol oscillator. Legends are the same as in Figure 16.10. (With permission from Springer Science and Business Media.)
Figure 16.19. Steady state PDF pX(x1, x2) of the controlled Van der Pol oscillator. (With permission from Springer Science and Business Media.)
smaller cells around the target area can be used to reduce the steady state variances of the controlled response due to discretization.
16.6. Control of a dry friction damped oscillator

Consider a nonlinear system with dry friction damping

Ẍ + μ(g + v̈) sgn(Ẋ) + 2ζẊ + ωn²X + εX³ = f̈ + U(t),   (16.18)
where X(t) describes the horizontal sliding motion of a mass block placed on a moving foundation with a rough contact surface, and U(t) is a bounded control force satisfying |U| ≤ 1 (Su and Ahmadi, 1990; Sun, 1995). ζ is the viscous damping coefficient, μ is the dry friction damping coefficient, g is the gravitational acceleration, ωn is the natural frequency of the linear system and ε is the nonlinear stiffness coefficient. Assume that f̈ and v̈ are correlated Gaussian white noise processes such that
E[f̈] = 0,   E[v̈] = 0,
E[v̈(t)v̈(t′)] = 2Dv δ(t − t′),
E[v̈(t)f̈(t′)] = 2Dvf δ(t − t′),   (16.19)
E[f̈(t)f̈(t′)] = 2Df δ(t − t′).

Let X1 = X and X2 = Ẋ. The cost functional to be minimized is defined as

J = E[ ∫_{t0}^{∞} (α‖X‖² + βU²) dt ].   (16.20)
The control objective is to drive the system from any arbitrary initial condition to the origin of the phase space while J is minimized. Following the rules of the Itô calculus in Chapter 5, we convert equation (16.18) into a set of stochastic differential equations in the Itô sense,

dX1 = X2 dt,
dX2 = [−μg sgn(X2) − 2ζX2 − ωn²X1 − εX1³ + μ²Dv sgn(X2) sgn′(X2) + μDvf sgn′(X2) + U] dt + [2μ²Dv sgn²(X2) + 4μDvf sgn(X2) + 2Df]^{1/2} dB(t),   (16.21)
where B(t) is the unit Brownian motion. An infinite hierarchy of moment equations for the state variables can be derived from the Itô equation. Define m_jk = E[X1^j X2^k]. We apply the Gaussian closure method to truncate the hierarchy. The standard joint Gaussian conditional probability density function of X1 and X2 is given by

pX(x, τ|x0, 0) = 1/(2πσ1σ2√(1 − ρ²)) × exp{ −[α1² − 2ρα1α2 + α2²] / (2(1 − ρ²)) },   (16.22)

where

mj = E[Xj],   σj² = E[(Xj − mj)²],
c12 = E[(X1 − m1)(X2 − m2)],   (16.23)
ρ = c12/(σ1σ2),   αj = (xj − mj)/σj,   j = 1, 2.
In these expressions, mj, σj and ρ are the means, standard deviations and correlation coefficient of the density function evaluated at time t = τ when the system starts from the deterministic initial condition x0. Recall that for real variables, |ρ| ≤ 1. The expectation of an arbitrary function of X1 and X2 using the
Gaussian closure can be computed as

E[f(X1, X2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} [f(x1, x2)/(2πσ1σ2√(1 − ρ²))] exp{ −[α1² − 2ρα1α2 + α2²]/(2(1 − ρ²)) } dx1 dx2.   (16.24)

The following expressions can be found readily:

E[sgn(Xj)] = sgn(mj) erf(|mj|/(√2 σj)),   (16.25)
E[sgn′(Xj)] = (2/(√(2π) σj)) e^(−mj²/(2σj²)),   (16.26)
E[Xj sgn(Xk)] = (2ρσ1σ2/(√(2π) σk)) e^(−mk²/(2σk²)) + mj E[sgn(Xk)],   (16.27)
E[Xj sgn′(Xk)] = −(2mk ρσ1σ2/(√(2π) σk³)) e^(−mk²/(2σk²)) + mj E[sgn′(Xk)],   (16.28)
E[Xk sgn(Xk)] = (2σk/√(2π)) e^(−mk²/(2σk²)) + mk E[sgn(Xk)],   (16.29)
E[(Xj − mj)] E[sgn(Xk)] = 0,   (16.30)
E[Xj sgn(Xj) sgn′(Xj)] = 0,   (16.31)
E[Xj sgn(Xk) sgn′(Xk)] = 0,   (16.32)
E[sgn(Xk) sgn′(Xk)] = 0,   (16.33)
E[sgn(Xk)²] = 1,   (16.34)
E[(Xj − mj)Xj³] = 3σj²(σj² + mj²),   (16.35)
E[(Xj − mj)Xk³] = 3ρσ1σ2(σk² + mk²),   (16.36)
E[Xj sgn′(Xj)] = 0,   E[Xj³] = mj(3σj² + mj²).   (16.37)
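Expressions such as (16.25) and (16.29) can be verified numerically by quadrature of the defining integral (16.24) in the scalar case; the mean, standard deviation, and midpoint grid below are illustrative:

```python
from math import erf, exp, pi, sqrt

# Quadrature check of the closed forms (16.25) and (16.29) for a scalar
# Gaussian variable; mean m, standard deviation s and the midpoint grid
# are illustrative.
m, s = 0.4, 0.8

def gauss(x):
    return exp(-0.5 * ((x - m) / s) ** 2) / (s * sqrt(2.0 * pi))

n, lo, hi = 100000, m - 10.0 * s, m + 10.0 * s   # midpoint rule on [lo, hi]
h = (hi - lo) / n
E_sgn = sum((1.0 if lo + (i + 0.5) * h > 0.0 else -1.0)
            * gauss(lo + (i + 0.5) * h) * h for i in range(n))
E_xsgn = sum(abs(lo + (i + 0.5) * h) * gauss(lo + (i + 0.5) * h) * h
             for i in range(n))

# Closed forms: E[sgn X] = sgn(m) erf(|m|/(sqrt(2) s)), equation (16.25);
# E[X sgn X] = (2 s / sqrt(2 pi)) exp(-m^2/(2 s^2)) + m E[sgn X], eq. (16.29).
sgn_cf = erf(m / (sqrt(2.0) * s))
xsgn_cf = 2.0 * s / sqrt(2.0 * pi) * exp(-m**2 / (2.0 * s**2)) + m * sgn_cf
print(abs(E_sgn - sgn_cf) < 1e-3, abs(E_xsgn - xsgn_cf) < 1e-3)
```

Note that E[X sgn(X)] is simply E[|X|], which is why the closed form combines a Gaussian density value at zero with the mean times E[sgn(X)].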
With these expressions, we obtain a closed set of nonlinear differential equations for the first and second order moments,

ṁ10 = m01,
ṁ01 = −μg sgn(m01) erf(|m01|/(√2 σ2)) − 2ζm01 − ωn²m10 − εm10(3σ1² + m10²) + μDvf √(2/π) (1/σ2) e^(−m01²/(2σ2²)) + u,   (16.38)

ṁ20 = 2c12 + 2m10m01,
ṁ11 = σ2² + m01² − 2ζc12 − ωn²σ1² − μg √(2/π) (c12/σ2) e^(−m01²/(2σ2²)) − 3εσ1²(σ1² + m10²) − μDvf √(2/π) (c12m01/σ2³) e^(−m01²/(2σ2²)) + m10ṁ01 + ṁ10m01,   (16.39)
ṁ02 = −4ζσ2² − 2ωn²c12 − 6εc12(σ1² + m10²) − 2μg √(2/π) σ2 e^(−m01²/(2σ2²)) − 2μDvf √(2/π) (m01/σ2) e^(−m01²/(2σ2²)) + 2μ²Dv + 2Df + 2m01ṁ01 + 4μDvf sgn(m01) erf(|m01|/(√2 σ2)).

The initial conditions required to integrate equations (16.38) and (16.39) from t = 0 to t = τ are specified by the coordinates of a cell center (x1c, x2c), i.e., m10(0) = x1c, m01(0) = x2c, m20(0) = x1c², m02(0) = x2c² and m11(0) = x1c x2c. After obtaining the time evolution of the first and second order moments, we construct a joint Gaussian probability density function for (X1, X2) from time t = 0 to t = τ. With this probability density function, the cost functional and other response statistics over one time step τ can be readily calculated. Notice that the system is parametrically and externally excited, and the diffusion term of the system is a nonlinear function of the state. Such features make analytical as well as numerical solutions of the stochastic optimal control problem difficult to obtain (Zhu et al., 2001).

Case studies

We consider a region D = [−2, 2] × [−2, 2], discretized with 25 × 25 = 625 uniform cells. The parameters of the system are set as follows: μ = 0.05, ζ = 0.1, ωn = 1, ε = 1, Dv = 0.1, Df = 0.1, and Dvf = 0. The Lagrangian of the cost functional in equation (16.20) is specified with α = β = 0.5, and the control set is uniformly discretized into 11 levels: u ∈ {−1, −0.8, ..., 1}. There is a region on the x1-axis about the origin where the mean trajectories of the uncontrolled response are trapped due to the effect of dry friction. In equations (16.38) and (16.39), when the term μg sgn(m01) erf(|m01|/(√2 σ2)) is dominant, a never ending sequence of changes in the sign of x2 takes place. This
16.6. Control of a dry friction damped oscillator
Figure 16.20.
323
Vector field of the mean trajectories of the controlled response of the system (16.18).
term causes the response to switch indefinitely about the trapping region without having a net displacement. Figure 16.20 shows the vector field of the mean trajectory for the controlled response. The size of the arrows about the origin is enlarged to enable better observation. Jumps in the velocity are still present, but the trapping region on the x1 -axis is not. These jumps show that the velocity just before and after reaching maximum elongation of the spring differ in magnitude. Now we evaluate the time evolution of the system response starting from a uniformly distributed initial condition. This initial condition allows us to study the control performance on the entire computational domain. The out-crossing of ¯ acts as a sink cell any boundary of D is an irreversible process in the sense that D (Hsu, 1987). The reader should notice that such boundaries eventually will absorb all the probability. This is a mere consequence of having a bounded computational domain. In order to circumvent this difficulty, we define stationarity as the state in which the leakage of probability is stable and considerably small, e.g., both the probability of the system remaining in D and the expected value of the Lagrangian approach to a constant value with very small changes thereafter. The stationary probability density function of the uncontrolled system is obtained after 10 time units with 67% of the probability remaining in D, while the stationary probability density function for the controlled response is obtained in
Chapter 16. Stochastic Optimal Control with the GCM Method
Figure 16.21. Time evolutions of the normalized cost $J_D$ and the probability of being in $D$, $P_D$, for the control of a nonlinear oscillator subjected to dry friction. Data for the uncontrolled (—) and the controlled (−−) system responses are shown.
about 7 time units with 82% of the probability in $D$. To evaluate the control performance, the leakage of probability to the outside of $D$ must be taken into consideration. We propose the following quantity to evaluate the control performance:
$$J_D(t) = \frac{E[L(\mathbf X(t), \mathbf U(t))]}{P_D(t)}, \qquad (16.40)$$
where $P_D(t)$ is the probability of the system being in $D$, as defined in equation (16.9). Figure 16.21 shows the time evolutions of $J_D$ and $P_D$ for both the uncontrolled and controlled system responses. By comparison we conclude that (i) the controlled response reaches the target at a lower value of the cost $J$, (ii) the stationary probability density function of the controlled system is more concentrated around the desired target set, (iii) convergence to the stationary probability density function is faster, and (iv) a higher percentage of the probability $P_D(t)$ is kept in the domain $D$.
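Equation (16.40) is straightforward to evaluate once the cell probabilities are available. The following is a minimal sketch; the cell layout, Lagrangian values and probability vector are illustrative, not taken from the case study:

```python
import numpy as np

def normalized_cost(p, L, inside):
    """Normalized cost J_D(t) = E[L] / P_D(t) of equation (16.40).

    p      : probability vector over all cells (mass may have leaked out of D)
    L      : Lagrangian evaluated at each cell center
    inside : boolean mask marking the cells that belong to D
    """
    p_d = p[inside].sum()              # P_D(t), probability of being in D
    if p_d == 0.0:
        return np.inf                  # all probability has been absorbed
    expected_l = np.dot(p[inside], L[inside])
    return expected_l / p_d

# Toy example with 4 cells, the last one acting as the sink cell
p = np.array([0.3, 0.4, 0.2, 0.1])
L = np.array([1.0, 2.0, 4.0, 0.0])
inside = np.array([True, True, True, False])
print(round(normalized_cost(p, L, inside), 4))
```

Dividing by $P_D(t)$ makes the cost comparable between systems with different leakage rates, which is exactly the normalization used to compare the controlled and uncontrolled responses above.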
16.7. Systems with non-polynomial nonlinearities

The closure methods can readily handle polynomials. However, for other types of nonlinearities, the closure methods might not only require tedious and lengthy
integrations (Sun, 1995), but might also lead to expressions that do not admit a closed form. This problem prevents us from integrating the system of moment equations efficiently. To overcome this difficulty, we make use of the cellular structure of the state space by approximating such nonlinearities with Taylor expansions. Once the approximations are available, the infinite hierarchy of moments can be readily closed. We illustrate this approach next.

The example is concerned with the problem of navigating a boat in a vortex. The boat moves on the $(x_1, x_2)$ plane with a constant velocity relative to the water. Let the control $U$ be the heading angle with respect to the positive $x_1$-axis. The velocity field of the water is given by $v(\mathbf x) = ar/(br^2 + c)$, where $\mathbf x = (x_1, x_2)$ and $r = |\mathbf x|$ is the distance to the origin, i.e., the center of the vortex. The equation of motion of the boat is described by the stochastic differential equations in the Stratonovich sense
$$\begin{aligned}
\dot X_1 &= \cos(U) - vX_2/|\mathbf X| + \alpha v W_1, \\
\dot X_2 &= \sin(U) + vX_1/|\mathbf X| + \alpha v W_2,
\end{aligned} \qquad (16.41)$$
where $\alpha$ is a constant, and $W_1$ and $W_2$ are correlated Gaussian white noise processes with zero mean such that $E[W_1(t)W_1(t')] = 2D_1\delta(t - t')$, $E[W_2(t)W_2(t')] = 2D_2\delta(t - t')$ and $E[W_1(t)W_2(t')] = 2D_{12}\delta(t - t')$. The control objective is to navigate the boat from an initial condition to a target set such that the cost function
$$J = E\left[\int_{t_0}^{T} \lambda v(\mathbf X)\,dt\right] \qquad (16.42)$$
is minimized. Here, we assume that the risk of navigating on the vortex is proportional to the velocity of the vortex; $\lambda$ is a scaling constant. The cost in equation (16.42) represents the cumulative risk associated with the selected path. From equation (16.41), we define the following matrices:
$$\boldsymbol\sigma = \alpha v(\mathbf X)\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \mathbf Q = 2\begin{bmatrix} D_1 & D_{12} \\ D_{12} & D_2 \end{bmatrix}. \qquad (16.43)$$
The Wong–Zakai correction terms are given by
$$m_j = \frac{1}{2}\sum_{k,s,l} \sigma_{ks} Q_{sl} \frac{\partial \sigma_{jl}}{\partial X_k}, \qquad (16.44)$$
or explicitly,
$$m_1 = \alpha^2 v\left(D_1\frac{\partial v}{\partial X_1} + D_{12}\frac{\partial v}{\partial X_2}\right), \qquad
m_2 = \alpha^2 v\left(D_2\frac{\partial v}{\partial X_2} + D_{12}\frac{\partial v}{\partial X_1}\right). \qquad (16.45)$$
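The vortex field and the correction terms (16.45) are easy to tabulate per cell. Below is a sketch using the case-study parameters; the helper names `grad_v` and `wong_zakai` are illustrative, and the finite-difference check at the end guards the hand-derived chain rule:

```python
import math

# Case-study parameters from the text (D12 = 0, i.e., uncorrelated noises)
a, b, c = 15.0, 10.0, 2.0
alpha, D1, D2, D12 = 1.0, 0.05, 0.05, 0.0

def v(x1, x2):
    """Vortex speed v = a*r / (b*r^2 + c), r = |x|."""
    r = math.hypot(x1, x2)
    return a * r / (b * r * r + c)

def grad_v(x1, x2):
    """Chain rule: dv/dxi = v'(r)*xi/r, with v'(r) = a(c - b r^2)/(b r^2 + c)^2."""
    r = math.hypot(x1, x2)
    dv_dr = a * (c - b * r * r) / (b * r * r + c) ** 2
    return dv_dr * x1 / r, dv_dr * x2 / r

def wong_zakai(x1, x2):
    """Correction terms m1, m2 of equation (16.45)."""
    vv = v(x1, x2)
    v1, v2 = grad_v(x1, x2)
    m1 = alpha ** 2 * vv * (D1 * v1 + D12 * v2)
    m2 = alpha ** 2 * vv * (D2 * v2 + D12 * v1)
    return m1, m2

# Sanity check of grad_v against a central finite difference
x1, x2, h = 0.8, -0.32, 1e-6
fd1 = (v(x1 + h, x2) - v(x1 - h, x2)) / (2 * h)
g1, _ = grad_v(x1, x2)
assert abs(fd1 - g1) < 1e-6
print(tuple(round(m, 6) for m in wong_zakai(x1, x2)))
```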
Note that the diffusion term in the FPK equation for this system is given by $\boldsymbol\sigma\mathbf Q\boldsymbol\sigma^T = 2\alpha^2 v^2\mathbf Q$. From the FPK equation, we recover a set of stochastic differential equations in the Itô sense:
$$\begin{aligned}
dX_1 &= \left[\cos(U) - \frac{X_2 v}{|\mathbf X|} + \alpha^2 v\left(D_1\frac{\partial v}{\partial X_1} + D_{12}\frac{\partial v}{\partial X_2}\right)\right]dt + \sigma_{e1}\,dB_1, \\
dX_2 &= \left[\sin(U) + \frac{X_1 v}{|\mathbf X|} + \alpha^2 v\left(D_2\frac{\partial v}{\partial X_2} + D_{12}\frac{\partial v}{\partial X_1}\right)\right]dt + \sigma_{e2}\,dB_2,
\end{aligned} \qquad (16.46)$$
where
$$\sigma_{e1} = \left[2\alpha^2\left(D_1 v^2 + D_{12}\frac{\partial v}{\partial X_2}\right)\right]^{1/2}, \qquad
\sigma_{e2} = \left[2\alpha^2\left(D_2 v^2 + D_{12}\frac{\partial v}{\partial X_1}\right)\right]^{1/2}, \qquad (16.47)$$
and $B_1$ and $B_2$ are independent unit Brownian motions. Applying Itô's lemma, we obtain the moment equations
$$\begin{aligned}
\dot m_{10} &= \cos(u) - E\left[\frac{X_2 v}{|\mathbf X|}\right] + \alpha^2 D_1 E\left[v\frac{\partial v}{\partial X_1}\right], \\
\dot m_{01} &= \sin(u) + E\left[\frac{X_1 v}{|\mathbf X|}\right] + \alpha^2 D_2 E\left[v\frac{\partial v}{\partial X_2}\right],
\end{aligned} \qquad (16.48)$$
$$\begin{aligned}
\dot m_{11} ={}& \cos(u)\,m_{01} - E\left[\frac{X_2^2 v}{|\mathbf X|}\right] + \alpha^2 D_1 E\left[\frac{X_2 v}{|\mathbf X|}\frac{\partial v}{\partial X_1}\right] \\
&+ \sin(u)\,m_{10} + E\left[\frac{X_1^2 v}{|\mathbf X|}\right] + \alpha^2 D_2 E\left[\frac{X_1 v}{|\mathbf X|}\frac{\partial v}{\partial X_2}\right], \\
\dot m_{20} ={}& 2m_{10}\cos(u) - 2E\left[\frac{X_1 X_2 v}{|\mathbf X|}\right] + 2\alpha^2 D_1 E\left[\frac{X_1 v}{|\mathbf X|}\frac{\partial v}{\partial X_1}\right] + 2\alpha^2 D_1 E\bigl[v^2\bigr], \\
\dot m_{02} ={}& 2m_{01}\sin(u) + 2E\left[\frac{X_1 X_2 v}{|\mathbf X|}\right] + 2\alpha^2 D_2 E\left[\frac{X_2 v}{|\mathbf X|}\frac{\partial v}{\partial X_2}\right] + 2\alpha^2 D_2 E\bigl[v^2\bigr].
\end{aligned} \qquad (16.49)$$
In the example, we use Taylor expansions for the functions such as v and v/|X| up to the second order. After substituting the polynomial approximations into equations (16.48) and (16.49), we obtain the expected values of the polynomial
functions in terms of lower order moments. Thus, the infinite hierarchy of moments is closed using the Gaussian closure method.

Consider the special case in which the white noise processes are uncorrelated. The first and second order moments of the state variables are derived as
$$\begin{aligned}
\dot m_{10} &= \cos(u) - E\left[\frac{X_2 v}{|\mathbf X|}\right] + \alpha^2 D_1 E\left[v\frac{\partial v}{\partial X_1}\right], \\
\dot m_{01} &= \sin(u) + E\left[\frac{X_1 v}{|\mathbf X|}\right] + \alpha^2 D_2 E\left[v\frac{\partial v}{\partial X_2}\right],
\end{aligned} \qquad (16.50)$$
$$\begin{aligned}
\dot m_{11} ={}& \cos(u)\,m_{01} - E\left[\frac{X_2^2 v}{|\mathbf X|}\right] + \alpha^2 D_1 E\left[\frac{X_2 v}{|\mathbf X|}\frac{\partial v}{\partial X_1}\right] \\
&+ \sin(u)\,m_{10} + E\left[\frac{X_1^2 v}{|\mathbf X|}\right] + \alpha^2 D_2 E\left[\frac{X_1 v}{|\mathbf X|}\frac{\partial v}{\partial X_2}\right], \\
\dot m_{20} ={}& 2m_{10}\cos(u) - 2E\left[\frac{X_1 X_2 v}{|\mathbf X|}\right] + 2\alpha^2 D_1 E\left[\frac{X_1 v}{|\mathbf X|}\frac{\partial v}{\partial X_1}\right] + 2\alpha^2 D_1 E\bigl[v^2\bigr], \\
\dot m_{02} ={}& 2m_{01}\sin(u) + 2E\left[\frac{X_1 X_2 v}{|\mathbf X|}\right] + 2\alpha^2 D_2 E\left[\frac{X_2 v}{|\mathbf X|}\frac{\partial v}{\partial X_2}\right] + 2\alpha^2 D_2 E\bigl[v^2\bigr].
\end{aligned} \qquad (16.51)$$
Several expected values, such as $E[X_2 v/|\mathbf X|]$, in the above moment equations cannot be analytically expressed in terms of the lower order moments. That is to say, the Gaussian closure method cannot produce analytically simple expressions for such terms, and thus cannot close the infinite hierarchy of the moment equations.

Consider an initially delta probability density function evolving over a short mapping time step $\tau$. The one step probability density function at $t = \tau$ will still be concentrated in a small region, with a mean not far from the initial condition. This fact suggests that we expand the non-analytically closeable functions about the initial condition, i.e., a cell center, in a Taylor series. The Taylor series provides an accurate representation of these nonlinear functions in the small neighborhood where the one step probability density function at $t = \tau$ is defined. The polynomials of the Taylor series allow us to analytically express the higher order moments in terms of the lower order ones according to the Gaussian closure method.

Let $f(\mathbf x)$ denote a non-analytically closeable function such as $x_2 v/r$ in equations (16.50) and (16.51). A second order Taylor approximation $\tilde f(\mathbf x)$ is given by
$$\tilde f(\mathbf x) = \beta_1 + \beta_2 x_1 + \beta_3 x_2 + \beta_4 x_1^2 + \beta_5 x_1 x_2 + \beta_6 x_2^2, \qquad (16.52)$$
where the coefficients $\beta_k$ are calculated by imposing the following conditions:
$$\begin{aligned}
f(\mathbf x)|_z &= \tilde f(\mathbf x)|_z, & f_{,x_1}(\mathbf x)|_z &= \tilde f_{,x_1}(\mathbf x)|_z, & f_{,x_2}(\mathbf x)|_z &= \tilde f_{,x_2}(\mathbf x)|_z, \\
f_{,x_1x_1}(\mathbf x)|_z &= \tilde f_{,x_1x_1}(\mathbf x)|_z, & f_{,x_1x_2}(\mathbf x)|_z &= \tilde f_{,x_1x_2}(\mathbf x)|_z, & f_{,x_2x_2}(\mathbf x)|_z &= \tilde f_{,x_2x_2}(\mathbf x)|_z.
\end{aligned} \qquad (16.53)$$
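The fitting conditions (16.53), together with the Gaussian closure of $E[\tilde f(\mathbf X)]$, can be sketched as follows. The finite-difference step `h` and the helper names are illustrative assumptions; the key point is that the expectation of the quadratic (16.52) involves only moments up to second order:

```python
import numpy as np

a, b, c = 15.0, 10.0, 2.0  # vortex parameters of the case study

def f(x1, x2):
    """Non-closeable function x2*v/r of the text; with v = a*r/(b*r^2 + c),
    this simplifies to x2*a/(b*r^2 + c)."""
    r = np.hypot(x1, x2)
    return x2 * a / (b * r * r + c)

def taylor_coeffs(f, z1, z2, h=1e-4):
    """beta_1..beta_6 of (16.52), matching value, gradient and Hessian of f
    at the cell center z as required by the conditions (16.53)."""
    f0 = f(z1, z2)
    fx = (f(z1 + h, z2) - f(z1 - h, z2)) / (2 * h)
    fy = (f(z1, z2 + h) - f(z1, z2 - h)) / (2 * h)
    fxx = (f(z1 + h, z2) - 2 * f0 + f(z1 - h, z2)) / h ** 2
    fyy = (f(z1, z2 + h) - 2 * f0 + f(z1, z2 - h)) / h ** 2
    fxy = (f(z1 + h, z2 + h) - f(z1 + h, z2 - h)
           - f(z1 - h, z2 + h) + f(z1 - h, z2 - h)) / (4 * h ** 2)
    # Expand f0 + fx*(x1-z1) + ... into the monomial basis of (16.52)
    b4, b5, b6 = 0.5 * fxx, fxy, 0.5 * fyy
    b2 = fx - fxx * z1 - fxy * z2
    b3 = fy - fyy * z2 - fxy * z1
    b1 = f0 - fx * z1 - fy * z2 + b4 * z1 ** 2 + b5 * z1 * z2 + b6 * z2 ** 2
    return b1, b2, b3, b4, b5, b6

def closed_expectation(beta, m10, m01, m20, m11, m02):
    """E[f~(X)] in terms of the raw moments only; exact for the quadratic."""
    b1, b2, b3, b4, b5, b6 = beta
    return b1 + b2 * m10 + b3 * m01 + b4 * m20 + b5 * m11 + b6 * m02

z1, z2 = 0.8, -0.32                  # a cell center
beta = taylor_coeffs(f, z1, z2)
# A narrow Gaussian centred at z: E[f~] should be close to f(z)
s = 1e-3                             # small variance after a short step
Ef = closed_expectation(beta, z1, z2, z1**2 + s, z1 * z2, z2**2 + s)
assert abs(Ef - f(z1, z2)) < 1e-2
```

The fit is redone for every initial cell, matching the per-cell use of the expansion described below.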
In these equations, $z$ stands for the state coordinates of the initial cell center, and the subscripts of the functions refer to partial derivatives with respect to the state variables. It should be emphasized that the Taylor expansion is constructed for each initial cell $z$, and is used only to integrate the moment equations over a short time interval $(0, \tau)$. This is why the proposed closure method in the cell state space is very accurate.

Numerical examples

The region $D = [-2, 2] \times [-2, 2]$ is discretized with 1089 square cells. A set of 30 evenly spaced angle values in the $2\pi$ range is taken as the control set, i.e., $\mathbf U = \{-\pi, -14\pi/15, \ldots, 14\pi/15\}$. Notice that the set $\mathbf U$ represents an unbounded circular set of controls. The system parameters are chosen as follows: $\lambda = 1$, $a = 15$, $b = 10$, $c = 2$, $\alpha = 1$, $D_1 = 0.05$ and $D_2 = 0.05$. The target set $\{(x_1 = 2, x_2)\}$ is represented by a discrete set of cells, consisting of the cells in the rightmost column of the discretized domain $D$.

Figure 16.22 shows the mean vector field of the vortex. A trajectory of the boat freely moving in the vortex starting from $\mathbf x = (0, -1)$ is superimposed. The center of the vortex attracts probability due to the state dependence of the diffusion term, not to the existence of an attracting point at the origin. This behavior has no counterpart in the deterministic case.

The vector field for the mean of the controlled response is shown in Figure 16.23. The trajectory of the boat under the optimal navigation starting from the same initial condition as in Figure 16.22 is superimposed. In the process, the control keeps the system in $D$ with probability one. A discontinuity in the vector field exists in spite of having an unbounded control set. Such a discontinuity implies a dichotomy in the long term behavior of the controlled response depending on the initial state of the system.
From initial conditions above the discontinuity, the control guides the boat against the velocity field of the vortex. On the other hand, from initial conditions below the discontinuity, the control moves the boat in the direction of the current. Figure 16.24 shows the controlled response of a system starting from the deterministic initial condition $\mathbf x = (0.8, -0.32)$ after 3 time units. A twin-peak probability density function emerges due to the discontinuity in the control solution.
Figure 16.22. Vector field of the mean trajectories for the uncontrolled response of navigating on a vortex field. A state trajectory for the initial condition x = (0, −1) is also shown.
Figure 16.23. Vector field of the mean trajectories for the controlled response of navigating on a vortex field. A state trajectory for the initial condition x = (0, −1) is also shown.
Figure 16.24. Controlled response PDF pX (x1 , x2 ) of the boat after 0.3 time units for the initial condition x(0) = (0.8, −0.32). This initial condition is very close to the discontinuity in optimal controls.
16.8. Control of an impact nonlinear oscillator

A vibro-impact system with a one-sided rigid barrier subject to a Gaussian white noise excitation is considered. The impact is assumed to be perfectly elastic. The control of the vibro-impact system is then subject to a state constraint given by the one-sided rigid barrier. Such a constraint makes the solution of the optimal control problem very difficult to obtain analytically. A transformation of the state variables is used to effectively remove the non-smooth state constraint. The optimal control problem with fixed terminal conditions is solved in the transformed domain, and the control solution is then transformed back to the physical domain. In the transformed domain, the solution satisfies the constraints; however, the control constraint may be violated when the solution is transformed back to the physical domain. For systems with bang-bang controls, the inverse transformation preserves the control bounds. Bang-bang controls are considered in this example.

The equation of motion for the vibro-impact system is given by
$$\ddot Y + \theta\bigl(Y^2 - 1\bigr)\dot Y + 2\zeta\Omega\dot Y + \Omega^2 Y = W(t) + U(t), \qquad (16.54)$$
subject to the impact condition
$$\dot Y\bigl(t^+_{\text{impact}}\bigr) = -\dot Y\bigl(t^-_{\text{impact}}\bigr) \quad\text{at } Y(t_{\text{impact}}) = -h, \qquad (16.55)$$
where $t_{\text{impact}}$ is the time instant at which the impact occurs, $Y(t)$ is the displacement, $W(t)$ is a Gaussian white noise process satisfying $E[W(t)] = 0$ and $E[W(t)W(t + \xi)] = 2D\delta(\xi)$, and $U(t)$ is a bounded control satisfying $|U| \le \hat u$. The cost function in equation (16.20) is used here with $\alpha = 0.5$ and $\beta = 0$. The control objective is to drive the system from any arbitrary initial condition to the origin of the phase space, i.e., $(y, \dot y) = (0, 0)$, while the energy of the system is minimized. The computational domain is chosen to be $D_Y = [-h, a] \times [-v, v]$.

As suggested in reference (Dimentberg, 1988), we introduce the transformation $Y = |X| - h$. This leads to the stochastic differential equation for $X$
$$\ddot X + \theta\bigl(X^2 - 2h|X| + h^2 - 1\bigr)\dot X + 2\zeta\Omega\dot X + \Omega^2 X = \operatorname{sgn}(X)\bigl[U(t) + W(t) + h\Omega^2\bigr]. \qquad (16.56)$$
Now, the transformed domain is given by $D_X = [-2h, 2a] \times [-v, v]$, the target set is $(x, \dot x) = (\pm h, 0)$, and the cost functional is given by
$$J_x = E\left[\alpha\int_{t_0}^{\infty}\Bigl(\bigl(|X| - h\bigr)^2 + \dot X^2\Bigr)dt\right]. \qquad (16.57)$$
Let $X_1 = X$ and $X_2 = \dot X$. The Itô stochastic differential equation of the system is given by
$$\begin{aligned}
dX_1 ={}& X_2\,dt, \\
dX_2 ={}& \Bigl[-\theta X_1^2 X_2 + 2\theta h|X_1|X_2 - \bigl(\theta\bigl(h^2 - 1\bigr) + 2\zeta\Omega\bigr)X_2 - \Omega^2 X_1 \\
&+ \operatorname{sgn}(X_1)\bigl(U(t) + h\Omega^2\bigr)\Bigr]dt + \operatorname{sgn}(X_1)\sqrt{2D}\,dB,
\end{aligned} \qquad (16.58)$$
where $B(t)$ is a unit Wiener process. After some manipulations, we find that the first and second order moment equations of the state variables are given by
$$\begin{aligned}
\dot m_{10} ={}& m_{01}, \\
\dot m_{01} ={}& -\theta m_{21} + 2\theta h E\bigl[|X_1|X_2\bigr] - \bigl(\theta\bigl(h^2 - 1\bigr) + 2\zeta\Omega\bigr)m_{01} \\
&- \Omega^2 m_{10} + E\bigl[\operatorname{sgn}(X_1)\bigr]\bigl(u + h\Omega^2\bigr),
\end{aligned} \qquad (16.59)$$
$$\begin{aligned}
\dot m_{20} ={}& 2m_{11}, \\
\dot m_{11} ={}& m_{02} - \theta m_{31} + 2\theta h E\bigl[|X_1|X_1X_2\bigr] - \Omega^2 m_{20} \\
&- \bigl(\theta\bigl(h^2 - 1\bigr) + 2\zeta\Omega\bigr)m_{11} + E\bigl[X_1\operatorname{sgn}(X_1)\bigr]\bigl(u + h\Omega^2\bigr), \\
\dot m_{02} ={}& -2\theta m_{22} + 4\theta h E\bigl[|X_1|X_2^2\bigr] - 2\Omega^2 m_{11}
\end{aligned} \qquad (16.60)$$
$$\begin{aligned}
&- 2\bigl(\theta\bigl(h^2 - 1\bigr) + 2\zeta\Omega\bigr)m_{02} + 2D\,E\bigl[\operatorname{sgn}(X_1)^2\bigr] \\
&+ 2E\bigl[X_2\operatorname{sgn}(X_1)\bigr]\bigl(u + h\Omega^2\bigr),
\end{aligned}$$
where $u = E[U]$ is fully determined at time $t$. The above moment equations are closed with the Gaussian closure method.

Next, we apply the current method and solve for optimal controls in the state domain $D_X$ with the cost function $J_x$ and the target set $(x, \dot x) = (\pm h, 0)$. After obtaining the global optimal control solution $u^*(X_1, X_2)$, we transform it back to the original domain, leading to the optimal control of the original system $u^*(Y, \dot Y)$. Notice that, in general, this process leads to controls that do not preserve the control constraints in $D_Y$. The optimal control in both domains is bang-bang. The bang-bang control is fully determined by the switching curves in the phase space. From the solution $u^*(X_1, X_2)$, we find numerical approximations of such curves and then transform them back to the physical state space.

Numerical simulations

In the numerical example, we have chosen $\hat u = 1$, $h = 1$, $a = 1$, $v = 1$, $\theta = 0$, $\zeta = 0$, $\Omega = 1$ and $D = 0.1$. The transformed state space is $D_X = [-2, 2] \times [-2, 2]$, discretized with 849 square cells. The transformed target set is formed by the cells that contain the points $(x, \dot x) = (\pm 1, 0)$. Since the optimal control is bang-bang, we have $\mathbf U = \{-1, 1\}$.

The uncontrolled response of the system is marginally stable about the origin since $\theta = 0$ and $\zeta = 0$. When the system impacts at $Y = -h$, it is reflected back in such a way that $Y$ remains the same and the sign of $\dot Y$ is reversed. The crossing of any other boundary is an irreversible process in the sense that $\bar D_Y$ acts as the sink cell.

To demonstrate the behavior of the uncontrolled system, we consider a uniformly distributed initial condition in $D_Y = [-0.86, -0.78] \times [-0.64, -0.57]$. The probability density function of the response after 3 time units is shown in Figure 16.25. At this time, only 22% of the probability remains inside $D_Y$, indicating the strong effect of diffusion on the marginally stable system.
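The heavy leakage of the uncontrolled system can also be probed by direct Monte Carlo simulation. The sketch below uses a simple Euler–Maruyama scheme with velocity reversal at the barrier; the step size and sample count are illustrative choices, so the estimate is only indicative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncontrolled vibro-impact oscillator (theta = zeta = 0, Omega = 1, D = 0.1):
#   y'' + y = W(t), with elastic reflection of the velocity at y = -1.
Omega, D, h = 1.0, 0.1, 1.0
dt, t_end, n = 1e-3, 3.0, 20000

# Uniformly distributed initial condition in the small box used in the text
y = rng.uniform(-0.86, -0.78, n)
yd = rng.uniform(-0.64, -0.57, n)
alive = np.ones(n, dtype=bool)          # samples still inside D_Y

for _ in range(int(t_end / dt)):
    yd[alive] += (-Omega**2 * y[alive]) * dt \
                 + np.sqrt(2 * D * dt) * rng.standard_normal(alive.sum())
    y[alive] += yd[alive] * dt
    # Perfectly elastic impact: fold the overshoot back and reverse the velocity
    hit = alive & (y < -h)
    y[hit] = -2 * h - y[hit]
    yd[hit] = -yd[hit]
    # Crossing any other boundary of D_Y = [-1, 1] x [-1, 1] is irreversible
    alive &= (y <= 1.0) & (np.abs(yd) <= 1.0)

print(f"fraction remaining in D_Y after {t_end} time units: {alive.mean():.2f}")
```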
It should be noted that even the probability that remains inside $D_Y$ does not concentrate around the target.

The global optimal control solution obtained in the domain $D_X$ is shown in Figure 16.26, which indicates the existence of switching curves of the bang-bang control. Numerical approximations of the switching curves are obtained by curve fitting this control solution; these curves are superimposed in Figure 16.26. The switching curves as well as the control solutions are mapped back to the physical domain $D_Y$, as shown in Figure 16.27. It is interesting to point out the qualitative differences in the controlled response for states in the
Figure 16.25. PDF pY (y1 , y2 ) of the uncontrolled response of a spring mass oscillator subject to elastic impacts at y = −1 after 3 time units. The system starts from the initial condition y = [−0.7, −0.7].
Figure 16.26. Global control solution of the system (16.56) in $D_X$. Cells marked with circles denote the regions where the starting optimal control is $u^* = \hat u$, and cells with crosses where the starting optimal control is $u^* = -\hat u$. Curves that approximate the switching curves are superimposed.
Figure 16.27. Global control solution of the system (16.54) in $D_Y$. Cells marked with circles denote the regions where the starting optimal control is $u^* = \hat u$, and cells with crosses where the starting optimal control is $u^* = -\hat u$. Curves that approximate the switching curves are superimposed.
third quadrant of $D_Y$. For the states marked with crosses, the control speeds up the system toward impact, while it avoids impact for the states marked with circles.

Consider now the same uniformly distributed initial condition in $D_Y$. The controlled system is found to converge to the steady state probability density function shown in Figure 16.28 in 4 time units, with 98% of the probability remaining in $D_Y$. As discussed in the previous example, the switching curves in this example split a uni-modal probability density function of the response into a bi-modal one when the system comes near them. In particular, in the third quadrant of $D_Y$ just before impact, part of the probability flow changes the sign of the velocity while the rest retains it, thus changing a uni-modal probability density function into one with two peaks. The time evolutions of the cost function and moments of the response are shown in Figure 16.29. As in the previous examples, we have found that the controlled response (i) converges to the target set with a high probability, (ii) minimizes the cost, and (iii) maximizes the probability of staying in the computational domain.
16.9. A note on the GCM method

Like many other numerical methods, the GCM method is well suited for systems with low dimensions. The computational effort becomes prohibitive for higher
Figure 16.28. Stationary PDF $p_Y(y_1, y_2)$ of the controlled response of a spring mass oscillator subject to elastic impacts at $y = -1$.
Figure 16.29. Time evolutions of $P_D$ (–), $J_D$ (-) and the moments $m_{10}$ (-·-), $m_{01}$ (–), $m_{20}$ (-·-), $m_{02}$ (–) for the controlled response of the system (16.54).
dimensional systems. Nevertheless, it has been a persistent endeavor and dream of many researchers to extend the method to MDOF systems (Flashner and Burns, 1990; Smith and Corner, 1992; Song et al., 1999; van der Spek et al., 1994; Wang and Lever, 1994; Zhu and Leu, 1990; Zufiria and Guttalu, 1993).
Exercises

EXERCISE 16.1. Write a Matlab program to compute the transition probability matrix for the one-dimensional system in equation (16.10) when the control is off ($U = 0$). You need to integrate equation (16.12) over one mapping step starting from the center of a cell. After you obtain the mean and variance, construct a Gaussian probability density function, and use this density function to compute the transition probability matrix $\mathbf P$.

EXERCISE 16.2. Derive equations (16.25) to (16.37).
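For orientation, the computation asked for in Exercise 16.1 can be sketched as follows, in Python rather than Matlab. Since equations (16.10) and (16.12) are not reproduced in this section, a scalar linear SDE with assumed parameters `k` and `Dn` (for which the one-step mean and variance are known in closed form) stands in for the system:

```python
import numpy as np
from math import erf, sqrt

# Stand-in scalar system dX = -k*X dt + sqrt(2*Dn) dB (assumed parameters)
k, Dn, tau = 1.0, 0.05, 0.1
edges = np.linspace(-2.0, 2.0, 41)            # 40 uniform cells on D
centers = 0.5 * (edges[:-1] + edges[1:])

def gauss_cdf(x, m, s):
    return 0.5 * (1.0 + erf((x - m) / (sqrt(2.0) * s)))

P = np.zeros((len(centers), len(centers)))
for i, x0 in enumerate(centers):
    # One-step mean and variance of the linear SDE (exact for this stand-in)
    m = x0 * np.exp(-k * tau)
    s = sqrt(Dn / k * (1.0 - np.exp(-2.0 * k * tau)))
    # Probability mass of the one-step Gaussian falling into each cell j
    P[i] = [gauss_cdf(edges[j + 1], m, s) - gauss_cdf(edges[j], m, s)
            for j in range(len(centers))]

# Rows sum to at most one; the deficit is the leakage to the sink cell
assert np.all(P.sum(axis=1) <= 1.0 + 1e-12)
print(round(P.sum(axis=1).min(), 6))
```

For a nonlinear system, the exact one-step mean and variance would be replaced by a short integration of the moment equations, as in the examples of this chapter.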
Chapter 17
Sliding Mode Control
Sliding mode control belongs to a larger class of controls, namely variable structure controls, and is commonly used to handle uncertainties in the control system (Slotine and Li, 1991). In this chapter, we first review the concept of sliding mode control for deterministic systems, and then apply it to stochastic systems.
17.1. Variable structure control

Classical controls, including PID and LQR controls, have a "fixed structure". For example, the PID control in equation (13.33),
$$U(s) = \left(K_P + \frac{K_I}{s} + K_D s\right)E(s), \qquad (17.1)$$
is fixed in the sense that, after the control gains are selected, the structure of the control function remains unchanged even when we find out that there are uncertainties due to poor knowledge of the system parameters or to unmodeled dynamics. The fixed structure makes the control less robust with respect to uncertainties. Furthermore, it also limits the performance of the control system.

Consider the control problem of a unit mass particle moving along a line. The equation of motion is given by
$$\ddot x = u(t), \qquad (17.2)$$
where $x$ denotes the position of the particle on the line and $u(t)$ is a control force in the direction of the line. The objective of the control is to bring the particle to the origin of the state space $(x, \dot x) = (0, 0)$ starting from any initial condition. Consider a simple proportional control,
$$u = -kx, \qquad k > 0. \qquad (17.3)$$
The closed loop system is governed by
$$\ddot x + kx = 0. \qquad (17.4)$$
Figure 17.1. The closed loop response of the particle under a fixed structure proportional feedback control.
Integrating this equation once, we obtain
$$\dot x^2 + kx^2 = r^2 = \text{const}, \qquad (17.5)$$
which represents a family of closed orbits in the state space, as shown in Figure 17.1. Hence, the proportional control alone can never bring the system to the origin from any non-zero initial condition. Adding a derivative feedback control term $-k_d\dot x$ would make the system approach the origin along a spiral path. Consider the same proportional control, but this time allow the "structure" of the control to vary such that
$$u = \begin{cases} -k_1 x, & \text{when } x\dot x < 0, \\ -k_2 x, & \text{when } x\dot x > 0, \end{cases} \qquad (17.6)$$
where
$$0 < k_1 < 1 < k_2. \qquad (17.7)$$
This is a variable structure control, or a switching control. The closed loop response now converges to the origin at a reasonably fast rate, as shown in Figure 17.2. Clearly, allowing simple controls to have variable structures can potentially improve the control performance. To prove that the variable structure control is stable, we consider a Lyapunov function
$$V = \frac{1}{2}\bigl(\dot x^2 + x^2\bigr). \qquad (17.8)$$
Then,
$$\dot V = \dot x\ddot x + x\dot x = \dot x u(t) + x\dot x
= \begin{cases} (1 - k_1)\,x\dot x, & \text{when } x\dot x < 0, \\ (1 - k_2)\,x\dot x, & \text{when } x\dot x > 0. \end{cases} \qquad (17.9)$$
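The switching law (17.6) and the Lyapunov argument can be checked in simulation. A minimal sketch with illustrative gains $k_1 = 0.5$, $k_2 = 2$ and semi-implicit Euler integration:

```python
# Particle x'' = u with the switching proportional control of (17.6):
# u = -k1*x when x*xdot < 0, u = -k2*x when x*xdot > 0, 0 < k1 < 1 < k2.
k1, k2 = 0.5, 2.0          # illustrative gains
dt, steps = 1e-3, 40000    # 40 time units
x, xd = 1.0, 0.0           # initial condition

V0 = 0.5 * (xd**2 + x**2)
for _ in range(steps):
    u = -k1 * x if x * xd < 0 else -k2 * x
    xd += u * dt           # semi-implicit Euler: velocity first,
    x += xd * dt           # then position with the updated velocity
V = 0.5 * (xd**2 + x**2)

# The Lyapunov function V = (x'^2 + x^2)/2 decays along the trajectory
assert V < 0.05 * V0
print(f"V(0) = {V0:.3f}, V(T) = {V:.6f}")
```

Each quadrant of the phase plane lies on an ellipse of one of the two stiffnesses, and every full revolution shrinks the amplitude by the factor $k_1/k_2$, which is the decay visible in the simulation.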
Figure 17.2. The closed loop response of the variable structure proportional feedback control.
Hence, $\dot V \le 0$ holds everywhere in the state space, and the control system is stable. The invariant set of the control system is defined as
$$\mathcal I = \bigl\{(x, \dot x)\colon \dot V(x, \dot x) = 0\bigr\}. \qquad (17.10)$$
Let $\mathcal E$ denote the equilibrium set in $\mathcal I$. According to the invariant set theorem (Slotine and Li, 1991), the system converges to the equilibrium set $\mathcal E \subset \mathcal I$. For this example, the only equilibrium in $\mathcal I$ is $(0, 0)$. Therefore, the variable structure control brings the particle to rest at $x = 0$. In this example, the control switches values when the state crosses the manifold defined by $s(x, \dot x) = x\dot x = 0$. This manifold is known as the switching curve.

EXAMPLE 17.1. Consider another well known simple variable structure control. Let us now find a control that drives the particle to rest from any given initial condition in minimum time. The result is the bang-bang control
$$u(t) = -\operatorname{sgn}\bigl(\lambda(t)\bigr), \qquad (17.11)$$
where $\lambda(t)$ is the co-state variable, as shown in Exercise 15.2. The control is either $+1$ or $-1$, depending on which side of a switching curve the state lies, as illustrated in Figure 17.3. Since the co-state variable is a function of the state, we denote it as $\lambda = s(x, \dot x)$. The switching curve is specified by a generally nonlinear equation $s(x, \dot x) = 0$.

17.1.1. Robustness

Consider the particle motion problem again. Assume that there is a small model uncertainty such that the equation of motion of the particle becomes
$$\ddot x = -a\sin x + u(t), \qquad (17.12)$$
Figure 17.3. The switching curve (thick line) of the minimum time optimal control.
where $0 < a \ll 1$. Assume that $a$ is not known precisely. Consider a variable structure control
$$u(t) = -\operatorname{sgn}\bigl(s(x, \dot x)\bigr), \qquad (17.13)$$
where $s(x, \dot x) = mx + \dot x$ and $m > 0$. Let $V = \frac{1}{2}s^2$ be the control Lyapunov function. Then,
$$\dot V = s(m\dot x + \ddot x) = s\bigl(m\dot x - a\sin x - \operatorname{sgn}(s)\bigr) \le |s|\bigl(m|\dot x| + a - 1\bigr) < 0, \qquad (17.14)$$
when $m|\dot x| + a - 1 < 0$. Note that the same control works for the system with $a = 0$, in which case the stability condition becomes $m|\dot x| - 1 < 0$. As $t \to \infty$, $V \to 0$ and $s \to 0$ in both systems. Once $s = 0$, the system undergoes a sliding motion on the switching curve given by $\dot x = -mx$, regardless of the uncertainty $a$. Hence, the variable structure control is robust with respect to the small model uncertainty. The sliding motion on the switching curve is of order one less than that of the original system, and the state $x \sim e^{-mt}$ converges to the origin exponentially.
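The robustness claim is easy to illustrate numerically: the controller below uses only the sign of $s = mx + \dot x$ and never sees the value of $a$. The gains and step size are illustrative choices:

```python
import numpy as np

# Robust variable structure control (17.13) for x'' = -a*sin(x) + u:
# u = -sgn(s), s = m*x + x'. The value of a is NOT used by the controller.
m, a = 0.5, 0.2            # slope of the switching line; unknown uncertainty
dt, steps = 1e-4, 200000   # 20 time units
x, xd = 1.0, 0.0

for _ in range(steps):
    s = m * x + xd
    u = -np.sign(s)
    xd += (-a * np.sin(x) + u) * dt   # the plant uses the true a
    x += xd * dt

# After reaching the sliding surface, x follows x' = -m*x and decays
assert abs(m * x + xd) < 0.05           # s has been driven to (near) zero
print(f"x(20) = {x:.5f}, s = {m*x + xd:.5f}")
```

Repeating the run with a different value of `a` (or with `a = 0`) gives essentially the same sliding motion, which is the point of the robustness argument.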
17.2. Concept of sliding mode

17.2.1. Single input systems

Consider the normal form of an $n$th order nonlinear dynamical system defined by
$$\begin{aligned}
\dot x_1 &= x_2, \\
\dot x_2 &= x_3,
\end{aligned} \qquad (17.15)$$
$$\vdots \qquad \dot x_n = f(\mathbf x) + b(\mathbf x)u,$$
where $\mathbf x = [x_1, x_2, \ldots, x_n]^T$. Let $\mathbf x_d = [x_d, \dot x_d, \ldots, x_d^{(n-1)}]^T$ be the reference state for $\mathbf x$ to track. The tracking error is defined as
$$\mathbf y = \mathbf x - \mathbf x_d \equiv [y, \dot y, \ldots, y^{(n-1)}]^T. \qquad (17.16)$$
We define a scalar sliding function as
$$s(\mathbf y, t) = \left(\frac{d}{dt} + \lambda\right)^{n-1} y
= \left(\frac{d^{n-1}}{dt^{n-1}} + (n-1)\lambda\frac{d^{n-2}}{dt^{n-2}} + \cdots + \lambda^{n-1}\right)y, \qquad (17.17)$$
where $\lambda > 0$. When $s(\mathbf y, t) = 0$, we have
$$\frac{d^{n-1}y}{dt^{n-1}} + (n-1)\lambda\frac{d^{n-2}y}{dt^{n-2}} + \cdots + \lambda^{n-1}y = 0. \qquad (17.18)$$
This equation has an $(n-1)$-fold repeated pole at $-\lambda$. Thus, $y$ decays exponentially on the sliding surface $s(\mathbf y, t) = 0$. The goal of the sliding mode control design is to find a control $u(t)$ that drives the system onto the sliding surface and keeps it there.

EXAMPLE 17.2. For a second order system with $n = 2$, the sliding surface is a straight line passing through the origin, as shown in Figure 17.4. On the sliding surface, the system converges to the origin exponentially and along a straight path. The natural path of the system is a spiral, as shown in the figure.
17.2.2. Nominal control

Assume that we find a control that has driven the system onto the sliding surface at time $t_s$ and keeps it there for $t > t_s$. We thus have $\dot s(\mathbf y, t) = 0$ for $t > t_s$. Rewrite this condition in the following form:
$$\frac{d^n x}{dt^n} + G(x) = D(x_d), \qquad (17.19)$$
where $x = x_1$ and
$$G(x) = n\lambda\frac{d^{n-1}x}{dt^{n-1}} + \cdots + \lambda^n x, \qquad (17.20)$$
$$D(x_d) = \frac{d^n x_d}{dt^n} + n\lambda\frac{d^{n-1}x_d}{dt^{n-1}} + \cdots + \lambda^n x_d. \qquad (17.21)$$
Figure 17.4. The sliding surface in the two dimensional phase plane is a straight line passing through the origin.
From the last line of equation (17.15), we have
$$\frac{d^n x}{dt^n} = f(\mathbf x) + b(\mathbf x)u. \qquad (17.22)$$
Hence, we obtain a nominal control as
$$u_n = \frac{1}{b(\mathbf x)}\bigl[D(x_d) - G(x) - f(\mathbf x)\bigr]. \qquad (17.23)$$
This control is valid in the region where $b(\mathbf x) \ne 0$.

17.2.3. Robust control

The nominal control requires a complete knowledge of the system in terms of the functions $f(\mathbf x)$ and $b(\mathbf x)$. Assume that $f(\mathbf x)$ is not known precisely, but $b(\mathbf x)$ is known. Let $\hat f(\mathbf x)$ be an estimate of $f(\mathbf x)$ such that
$$\bigl|\hat f(\mathbf x) - f(\mathbf x)\bigr| \le F(\mathbf x), \qquad (17.24)$$
where $F(\mathbf x)$ is given. The control can only use the estimate $\hat f(\mathbf x)$. To account for this uncertainty, we consider a variable structure control with a switching term,
$$u = \frac{1}{b(\mathbf x)}\bigl[D(x_d) - G(x) - \hat f(\mathbf x) - k\operatorname{sgn}(s)\bigr], \qquad (17.25)$$
where $k > 0$ is a control gain. Consider the Lyapunov function $V = \frac{1}{2}s^2$. Then,
$$\dot V = s\dot s = s\left(\frac{d^n x}{dt^n} + G(x) - D(x_d)\right)
= s\bigl(f(\mathbf x) - \hat f(\mathbf x) - k\operatorname{sgn}(s)\bigr)
\le |s|\,\bigl|\hat f(\mathbf x) - f(\mathbf x)\bigr| - k|s|
\le |s|\bigl(F(\mathbf x) - k\bigr). \qquad (17.26)$$
If we choose
$$k = F(\mathbf x) + \eta \qquad (\eta > 0), \qquad (17.27)$$
we have
$$\dot V \le -\eta|s|. \qquad (17.28)$$
Hence, the system is stable. The implementable control reads
$$u = \frac{1}{b(\mathbf x)}\Bigl[D(x_d) - G(x) - \hat f(\mathbf x) - \bigl(F(\mathbf x) + \eta\bigr)\operatorname{sgn}(s)\Bigr]. \qquad (17.29)$$
This sliding mode control is robust because it brings the system to the sliding surface $s(\mathbf y, t) = 0$ as long as the uncertainty in the system dynamics $f(\mathbf x)$ satisfies the inequality (17.24). When the function $b(\mathbf x)$ is not known precisely and an estimate of $b(\mathbf x)$ with known error bounds is available, a robust sliding mode control can also be designed. This is left as an exercise.

Chattering

When the system is on the sliding surface, the switching term $-[F(\mathbf x) + \eta]\operatorname{sgn}(s)$ does not stay zero all the time. In a real time implementation of the control, or in a numerical simulation, $s$ never holds a value of exactly zero. It is actually a very small random number, determined by the measurement noise in experiments or by the computer precision in numerical simulations. The sign of the term $\operatorname{sgn}(s)$ therefore changes randomly and arbitrarily fast. When the gain portion $F(\mathbf x) + \eta$ of the switching term is reasonably large, this leads to high frequency, large amplitude changes of the control. This is known as chattering (Slotine and Li, 1991). One common way to remove chattering is to replace the signum function $\operatorname{sgn}(s)$ with the saturation function $\operatorname{sat}(s/\phi)$, where $\phi$ is the thickness of a boundary layer near the sliding surface. It is common to use a time-varying boundary layer. More details on this subject can be found in Slotine and Li (1991).
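The effect of the boundary layer can be demonstrated by running a second order case ($n = 2$, $b = 1$, $\hat f = 0$, $x_d = 0$) of the control (17.29) twice, once with $\operatorname{sgn}(s)$ and once with $\operatorname{sat}(s/\phi)$. The jump counter below is a crude chattering measure, and all numerical values are illustrative:

```python
import numpy as np

# Regulation (x_d = 0) of x'' = -a*sin(x) + u with the robust control (17.29),
# here n = 2, b = 1, f_hat = 0 and F = a_max >= |a|:
#   u = -G(x) - (a_max + eta)*sw(s),  G(x) = 2*lam*xdot + lam**2 * x
# sw is either sgn(s) (chatters) or the saturation sat(s/phi).
lam, a, a_max, eta, phi = 1.0, 0.3, 0.5, 0.1, 0.05
dt, steps = 1e-3, 20000

def run(sw):
    x, xd, jumps, u_prev = 1.0, 0.0, 0, 0.0
    for i in range(steps):
        s = lam * x + xd
        u = -(2 * lam * xd + lam**2 * x) - (a_max + eta) * sw(s)
        if i > 0 and abs(u - u_prev) > a_max:
            jumps += 1                    # count large control jumps
        u_prev = u
        xd += (-a * np.sin(x) + u) * dt   # the plant uses the true, unknown a
        x += xd * dt
    return x, jumps

x_sgn, jumps_sgn = run(np.sign)
x_sat, jumps_sat = run(lambda s: np.clip(s / phi, -1.0, 1.0))
assert abs(x_sgn) < 0.05 and abs(x_sat) < 0.05  # both regulate x to ~0
assert jumps_sat < jumps_sgn                    # boundary layer removes chatter
print(jumps_sgn, jumps_sat)
```

Both runs regulate the state; only the discontinuous version pays for it with a rapidly switching control, which is the trade-off the boundary layer is designed to remove.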
17.2.4. Bounds of tracking error

Let us rewrite the sliding function in a cascade form:
$$\begin{aligned}
\dot z_1 + \lambda z_1 &= s(\mathbf y, t), \\
\dot z_2 + \lambda z_2 &= z_1, \\
&\;\;\vdots \\
\dot y + \lambda y &= z_{n-2}.
\end{aligned} \qquad (17.30)$$
Assume that the sliding function is bounded such that $|s(\mathbf y, t)| \le \Phi$. Furthermore, we assume that $y(0) = 0$. Integrating the first equation, we obtain $z_1$ and $\dot z_1$ as
$$z_1(t) = \int_0^t e^{-\lambda(t-\tau)} s(\mathbf y, \tau)\,d\tau, \qquad (17.31)$$
$$\dot z_1(t) = s(\mathbf y, t) - \lambda\int_0^t e^{-\lambda(t-\tau)} s(\mathbf y, \tau)\,d\tau. \qquad (17.32)$$
Hence,
$$\bigl|z_1(t)\bigr| \le \Phi\int_0^t e^{-\lambda(t-\tau)}\,d\tau = \frac{\Phi}{\lambda}\bigl(1 - e^{-\lambda t}\bigr) \le \frac{\Phi}{\lambda}, \qquad (17.33)$$
$$\bigl|\dot z_1(t)\bigr| \le \Phi + \lambda\,\frac{\Phi}{\lambda} = 2\Phi. \qquad (17.34)$$
(17.35) (17.36)
17.2.5. Multiple input systems Consider a linear system with uncertain nonlinear dynamic element given by (Edwards and Spurgeon, 1998) x˙ (t) = Ax + Bu + f(x, u, t),
x(t0 ) = x0 ,
(17.37)
where x ∈ R n , u ∈ R m is the control vector, A ∈ R n×n is the state matrix, B ∈ R n×m is the control influence matrix and f : R n × R m × R → R n with
17.2. Concept of sliding mode
345
n m. Define a set of m functions as s(x) = Sx,
(17.38)
where S ∈ R m×n with a full rank m. The hyper-plane defined by S = x ∈ R n : s(x) = 0 ,
(17.39)
is the sliding surface. Equivalent control When the system is sliding on S for t ts , we have s(x) = 0,
and
S˙x = 0,
t ts .
(17.40)
Assume that f(x, u, t) = 0, the sliding condition becomes SAx + SBu = 0.
(17.41)
We obtain the so-called equivalent control as ueq = −(SB)−1 SAx. The closed loop system in this case reads x˙ (t) = I − B(SB)−1 S Ax ≡ Ps Ax,
(17.42)
(17.43)
where P_s is known as a projection operator,

P_s = I − B(SB)^{−1}S.   (17.44)

We leave the proof of the following properties to the reader in the exercises:

SP_s = 0,  P_s B = 0.   (17.45)
P_s can have at most n − m nonzero eigenvalues. The associated eigenvectors all belong to the null space of S.

Disturbance rejection

Let the uncertain nonlinear dynamics be in the form

f(x, u, t) = Dg(x, t),   (17.46)

where D ∈ R^{n×l} and g : R^n × R → R^l. When R(D) ⊂ R(B), where R(·) denotes the range of the matrix, f is said to be a matched uncertainty. Otherwise, it is an un-matched uncertainty. When R(D) ⊂ R(B), we can find two vectors d ∈ R^l and b ∈ R^m such that

Dd = Bb.   (17.47)
This implies that D can be expressed as

D = BR,  Rd = b,   (17.48)

where R is a matrix of elementary column operations.

THEOREM 17.1. The ideal sliding motion s(x) = 0 is totally insensitive to the uncertain function g if R(D) ⊂ R(B).

PROOF. When f(x, u, t) ≠ 0, the equivalent control reads

u_eq = −(SB)^{−1}(SAx + SDg).   (17.49)

The closed loop sliding motion satisfies

ẋ(t) = P_s Ax + P_s Dg,  t ≥ t_s.   (17.50)

Note that

P_s D = P_s BR = 0R = 0.   (17.51)
Hence, ẋ(t) = P_s Ax, and the system is free of the effect of the uncertain nonlinear dynamics. In other words, the sliding mode control rejects the matched uncertainty.

Selection of S

For a given matrix B, we can find an invertible matrix T of elementary row operations such that

TB = [ 0  ]
     [ B2 ],   (17.52)

where B2 ∈ R^{m×m} and det(B2) ≠ 0. Note that T can even be orthogonal so that T^T = T^{−1}. We continue to assume that f(x, u, t) = 0. Introduce the transformation

Tx = y = [ y1 ]
         [ y2 ].   (17.53)

The state equation (17.37) becomes

ẏ1(t) = A11 y1 + A12 y2,   (17.54)

ẏ2(t) = A21 y1 + A22 y2 + B2 u.   (17.55)

Let S = [S1, S2], where S1 ∈ R^{m×(n−m)} and S2 ∈ R^{m×m} is non-singular. Assume that when t ≥ t_s, sliding motion occurs such that

Sy = S1 y1 + S2 y2 = 0.   (17.56)
Hence, we have

y2 = −S2^{−1}S1 y1 ≡ −My1.   (17.57)

Substituting this result into equation (17.54), we have

ẏ1(t) = (A11 − A12 M)y1.   (17.58)

On the sliding surface, we wish y1 → 0 as t → ∞. Therefore, S1 and S2 must be chosen such that A11 − A12 M is stable. Next, we need to show under what condition we can find a matrix M to stabilize y1 governed by equation (17.58).

PROPOSITION 17.1. The matrix pair (A11, A12) is controllable if and only if (A, B) is controllable.

PROOF. Note that det(B2) ≠ 0. Hence,

rank[sI − A, B] = rank [ sI − A11   −A12       0  ]
                       [ −A21      sI − A22   B2 ]
                = rank[sI − A11, A12] + m,  s ∈ C.   (17.59)

Hence, rank[sI − A, B] = n ⟺ rank[sI − A11, A12] = n − m.

THEOREM 17.2. The following statements are equivalent.
• (A, B) is controllable.
• [B, AB, . . . , A^{n−1}B] has full rank.
• [sI − A, B] has full rank for all s ∈ C.
• The spectrum of A + BF can be assigned arbitrarily by F ∈ R^{m×n}.

Assume now that (A, B) is controllable. We can then find a matrix M to arbitrarily place the poles of equation (17.58). The matrix M can be determined either by following the optimal control approach or by direct pole placement. After the matrix M is determined, we can use the following relationship to compute the matrices for the sliding surface:

S1 = S2 M.   (17.60)

We can pick an arbitrary nonsingular matrix S2 and then compute S1. The simplest choice of S2 is the unit matrix.
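The recipe above can be sketched in a few lines of Python. The system matrices and pole locations below are illustrative assumptions (not taken from the text); the orthogonal T is built from a full QR factorization of B, and M is obtained by pole placement on the pair (A11, A12).

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical third-order, single-input system (n = 3, m = 1)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -2.0, 0.5]])
B = np.array([[0.0],
              [0.0],
              [1.0]])
n, m = B.shape

# Step 1: an orthogonal T with TB = [0; B2].  A full QR of B gives an
# orthonormal basis of range(B) (first m columns of Q) and its complement.
Q, _ = np.linalg.qr(B, mode="complete")
T = np.vstack([Q[:, m:].T, Q[:, :m].T])   # complement rows first, range(B) last
TB = T @ B
B2 = TB[n - m:, :]                        # m x m, nonsingular since rank(B) = m

# Step 2: partition the transformed state matrix T A T^{-1} (T is orthogonal).
Abar = T @ A @ T.T
A11 = Abar[: n - m, : n - m]
A12 = Abar[: n - m, n - m:]

# Step 3: choose M so that A11 - A12 M has the desired sliding-mode poles.
desired = [-2.0, -3.0]
M = place_poles(A11, A12, desired).gain_matrix

# Step 4: with the simplest choice S2 = I, equation (17.60) gives S1 = M,
# and the sliding matrix in the original coordinates is S = [S1, S2] T.
S = np.hstack([M, np.eye(m)]) @ T

print(np.sort(np.linalg.eigvals(A11 - A12 @ M).real))   # ~ [-3, -2]
print(abs(np.linalg.det(S @ B)) > 1e-9)                  # SB nonsingular
```

The reduced sliding dynamics then have exactly the placed poles, and SB is nonsingular as required by the equivalent control (17.42).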
17.3. Stochastic sliding mode control

We now apply sliding mode control to stochastic systems to reduce the response variance of the system. Response variance reduction can also be obtained by using classic proportional-derivative feedback control, and if the objective is to bring the system to the origin, a linear quadratic regulator will be adequate. The advantage of sliding mode control is that, with relatively limited knowledge of the system, such as the bounds of its parameters, a controller can be designed to ensure the stability and the transient performance of the system (Yao and Tomizuka, 1993, 1995).

17.3.1. Nominal sliding mode control

Consider a set of stochastic differential equations in the Itô sense

dX1 = X2 dt,   (17.61)

dX2 = −[2ζωn X2 + ωn²(1 + εX1²)X1] dt + U(t) dt + dB(t),   (17.62)

where

E[dB(t)] = 0,   (17.63)

E[dB(t1) dB(t2)] = { 2D dt,  t1 = t2 = t,
                   { 0,      t1 ≠ t2.   (17.64)

Let us define a sliding function S as

S = X2 + λX1  (λ > 0).   (17.65)
Assume that the measurements of X1 and X2 are available and all the system parameters are known. By setting Ṡ = 0, we obtain a nominal sliding mode control as

Un(t) = 2ζωn X2 + ωn²(1 + εX1²)X1 − λX2.   (17.66)

To account for system uncertainties later, we introduce a ‘switching’ term −kS, leading to a sliding mode control given by

U(t) = 2ζωn X2 + ωn²(1 + εX1²)X1 − λX2 − kS,   (17.67)

where k > 0. The rationale for choosing kS instead of k sgn(S) for stochastic systems will be discussed in Section 17.3.5. Here, we shall show that this control guarantees that the sliding function S decays exponentially in the mean and is bounded in the mean square sense. When k sgn(S) is chosen, the moment equations become nonlinear and are more difficult to analyze. Furthermore, since the system
response is random, k sgn(S) changes values far more erratically than it does in the deterministic system when S is sufficiently small.

17.3.2. Response moments

Following Itô’s lemma, we have

dS² = {2S[λX2 − 2ζωn X2 − ωn²(1 + εX1²)X1 + U(t)] + 2D} dt + 2S dB.   (17.68)

We obtain the moment equations for the sliding function:

(d/dt)E[S] = (d/dt)E[X2] + λ(d/dt)E[X1]
           = −2ζωn E[X2] − ωn²E[(1 + εX1²)X1] + E[U] + λE[X2]
           = −kE[S],   (17.69)

(d/dt)E[S²] = 2E[S(λX2 − 2ζωn X2 − ωn²(1 + εX1²)X1 + U)] + 2D
            = −2kE[S²] + 2D.   (17.70)

In the steady state as t → ∞, we have

E[S] = 0,  E[S²] = D/k.   (17.71)
The sliding mode control of equation (17.67) is therefore stable in the mean square sense. This conclusion is in agreement with the well known input–output stability theorem for linear systems (Desoer and Vidyasagar, 1975). The effect of k on the system performance can be seen from the above steady-state solutions. Apparently, larger values of k make the system converge to the sliding surface faster in the mean and reduce the variance of S more. To further study the effect of the control parameters, we consider the first and second order moments of the controlled system. Let us denote

m1 = E[X1],  m2 = E[X2],  m20 = E[X1²],  m11 = E[X1X2],  m02 = E[X2²].   (17.72)

From the equation of motion and Itô’s lemma, we have

ṁ1 = m2,
ṁ2 = −(λ + k)m2 − kλm1,
ṁ20 = 2m11,
ṁ11 = m02 − (λ + k)m11 − kλm20,
ṁ02 = −2(λ + k)m02 − 2kλm11 + 2D.   (17.73)
The effect of λ and k can be seen from the equations for the first order moments. The sliding mode control changes the dynamics of the original system by linearizing it and placing two real poles of the linearized system at −λ and −k. These two parameters also dictate the decay rate of the first and second order moments, as can be seen in the moment equations. In the steady state, we have

m1 = m2 = m11 = 0,  m20 = D/(kλ(λ + k)),  m02 = D/(λ + k).   (17.74)
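A quick numerical check of (17.73)–(17.74) can be sketched as follows; the parameter values and initial moments are arbitrary illustrative choices. For a linear moment system the Euler fixed point coincides with the exact steady state.

```python
import numpy as np

lam, k, D = 2.0, 1.0, 0.5          # illustrative values
dt, n_steps = 1e-3, 200_000

m1, m2, m20, m11, m02 = 1.0, 0.0, 1.0, 0.0, 0.0
for _ in range(n_steps):
    dm1 = m2
    dm2 = -(lam + k) * m2 - k * lam * m1
    dm20 = 2.0 * m11
    dm11 = m02 - (lam + k) * m11 - k * lam * m20
    dm02 = -2.0 * (lam + k) * m02 - 2.0 * k * lam * m11 + 2.0 * D
    m1, m2 = m1 + dt * dm1, m2 + dt * dm2
    m20, m11, m02 = m20 + dt * dm20, m11 + dt * dm11, m02 + dt * dm02

print(round(m20, 4), round(D / (k * lam * (lam + k)), 4))  # both -> 0.0833
print(round(m02, 4), round(D / (lam + k), 4))              # both -> 0.1667
```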
In general, larger values of both λ and k reduce the variance of the system response more. However, there are trade-offs. As discussed in Slotine and Li (1991), the sliding surface introduces a low pass filter with bandwidth λ. If λ is chosen too large, the bandwidth of the control system will be high, and as a result high frequency disturbances can excite the unmodeled dynamics of the system. Of course, large values of λ and k also result in large control effort. For this simplified case, we have assumed that the state vector (X1, X2) is available through measurements. For more complicated cases with multiple degrees of freedom and internal state variables, one often has to estimate the full state vector.

17.3.3. Robust sliding mode control

Assume that the parameters ζ and ε are not known precisely. Let ζ̂ and ε̂ be estimates of ζ and ε, respectively, and let

ε̄ = max|ε̂ − ε|,  ζ̄ = max|ζ̂ − ζ|,   (17.75)

where ζ̄ and ε̄ are prescribed bounds. Consider a robust sliding mode control in the form

U = 2ζ̂ωn X2 + ωn²(1 + ε̂X1²)X1 − λX2 − η sgn(S),   (17.76)

where η > 0. Also, we consider
(1/2) dS² = S[2(ζ̂ − ζ)ωn X2 + ωn²(ε̂ − ε)X1³] dt + D dt − ηS sgn(S) dt + S dB
          ≤ −[η − 2ζ̄ωn|X2| − ωn²ε̄|X1³|]|S| dt + D dt + S dB.   (17.77)

Let

η = 2ζ̄ωn|X2| + ωn²ε̄|X1³| + k|S|  (k > 0).   (17.78)
We then have

(1/2) dS² ≤ −kS² dt + D dt + S dB,   (17.79)

(d/dt)E[S²] ≤ −2kE[S²] + 2D.   (17.80)
Therefore, E[S²] is bounded. Since 0 ≤ (E[S])² ≤ E[S²], E[S] is also bounded. The robust sliding mode control is stable in the mean square sense. Let us now consider the first and second order moment equations of the controlled system:

ṁ1 = m2,   (17.81)

ṁ2 = −(λ + k)m2 − λkm1 + 2(ζ̂ − ζ)ωn m2 − 2ζ̄ωn E[|X2| sgn(S)]
     + ωn²(ε̂ − ε)E[X1³] − ωn²ε̄E[|X1³| sgn(S)],   (17.82)

ṁ20 = 2m11,

ṁ11 = m02 − (λ + k)m11 − λkm20 + 2(ζ̂ − ζ)ωn m11 − 2ζ̄ωn E[|X2|X1 sgn(S)]
     + ωn²(ε̂ − ε)E[X1⁴] − ωn²ε̄E[|X1³|X1 sgn(S)],

ṁ02 = −2(λ + k)m02 − 2λkm11 + 2D + 4(ζ̂ − ζ)ωn m02 − 4ζ̄ωn E[|X2|X2 sgn(S)]
     + 2ωn²(ε̂ − ε)E[X1³X2] − 2ωn²ε̄E[|X1³|X2 sgn(S)].

In this set of equations, there are moments of order higher than two, as well as expectations of nonlinear terms involving |X1³|, sgn(S), etc. To close these moment equations, we apply the Gaussian closure method to approximate the higher order moments and the nonlinear expectations in terms of the first and second order moments. The following moment relationships for Gaussian processes are used.
E[X1³] = 3E[X1²]E[X1] − 2E[X1]³,

E[X1X2²] = E[X1]E[X2²] + 2E[X2]E[X1X2] − 2E[X1]E[X2]²,   (17.83)

E[X1⁴] = 4E[X1³]E[X1] − 12E[X1²]E[X1]² + 3(E[X1²])² + 6E[X1]⁴.
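The closure identities (17.83) can be verified exactly against the raw moments of a scalar Gaussian. The check below uses arbitrary test values of the mean and variance.

```python
# Exact raw moments of a scalar Gaussian with mean mu and variance var:
# E[X^2] = mu^2 + var, E[X^3] = mu^3 + 3*mu*var, E[X^4] = mu^4 + 6*mu^2*var + 3*var^2
def raw_moments(mu, var):
    return {1: mu,
            2: mu**2 + var,
            3: mu**3 + 3*mu*var,
            4: mu**4 + 6*mu**2*var + 3*var**2}

mu, var = 0.7, 1.3          # arbitrary test values
m = raw_moments(mu, var)

# First and third closure relations of equation (17.83)
lhs3 = m[3]
rhs3 = 3*m[2]*m[1] - 2*m[1]**3
lhs4 = m[4]
rhs4 = 4*m[3]*m[1] - 12*m[2]*m[1]**2 + 3*m[2]**2 + 6*m[1]**4

print(abs(lhs3 - rhs3) < 1e-12, abs(lhs4 - rhs4) < 1e-12)   # True True
```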
The expectations of nonlinear terms such as E[|X1³| sgn(S)] in these equations can be numerically evaluated by using the Gaussian probability density function as follows:

E[|X1³| sgn(S)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} |x1³| sgn(x2 + λx1) p(x1, x2, t) dx1 dx2.   (17.84)
Because the probability density function decays exponentially, the integration domain can be truncated to a finite area in which 99.95% probability is captured (Sun and Hsu, 1989b). Another approach to calculate the integration is to derive an approximate expression for the integral in terms of the mean and variance of the probability density function. One of the objectives of studying the moment equations is to determine the steady state values of the first and second order moments of the system response as a function of the control parameters. Such results will help us choose the control parameters λ and k. It should be pointed out that because the controlled system contains highly nonlinear terms, Gaussian approximation may become invalid in some parameter regions or under some initial conditions. 17.3.4. Variance reduction ratio The uncontrolled system admits an exact steady state probability density function and a closed form solution of the steady state mean square response of the velocity given by
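A truncated-domain evaluation of (17.84) can be sketched with standard quadrature; the mean, covariance, and λ below are assumed illustrative values, and the domain is cut at ±5 standard deviations per axis, which captures essentially all of the probability mass.

```python
import numpy as np
from scipy import integrate
from scipy.stats import multivariate_normal

# Assumed illustrative moments of (X1, X2) and sliding-surface slope lam
mean = np.array([0.1, -0.2])
cov = np.array([[0.5, 0.1],
                [0.1, 0.8]])
lam = 2.0
pdf = multivariate_normal(mean, cov).pdf

# Truncate the infinite domain to mean +/- 5 standard deviations per axis
lo = mean - 5.0 * np.sqrt(np.diag(cov))
hi = mean + 5.0 * np.sqrt(np.diag(cov))

val, err = integrate.dblquad(
    lambda x2, x1: abs(x1)**3 * np.sign(x2 + lam * x1) * pdf([x1, x2]),
    lo[0], hi[0], lambda x1: lo[1], lambda x1: hi[1])

print(val, err)   # E[|X1^3| sgn(S)] and the quadrature error estimate
```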
E X22 u =
D . 2ζ ωn
(17.85)
Comparing this solution with m02 of the controlled system, we can define a variance reduction ratio as, m02 ρ≡ (17.86) . E[X22 ]u For the system under the ideal sliding mode control, the steady state moment m02 is given by equation (17.74). Hence, the variance reduction ratio is ρ=
2ζ ωn . λ+k
(17.87)
Since λ defines the bandwidth of a low pass filter introduced by the sliding function, it is often chosen as a multiple of the highest frequency of the modeled system. Let λ = N ωn (N > 1) and take k = λ for example. We have for the ideal
17.3. Stochastic sliding mode control
353
sliding mode control ρ = ζ /N . For a given damping ratio ζ , we can determine a number N such that a desired reduction of the steady state velocity response variance can be achieved. The variance reduction ratio ρ can thus be used as a guide in selecting the control parameters. For the uncertain nonlinear stochastic system under the robust sliding mode control, the analytical solution of steady state solution m02 cannot be obtained. It has to be numerically computed by integrating the moment equations. When the nonlinear terms are small or if they converge to very small values in the steady state, the variance reduction ratio for the robust control will be very close to that of the ideal sliding mode control. In this case, the above discussion on choosing control parameters based on the ideal sliding mode control can be useful to the robust control. 17.3.5. Rationale of switching term kS In the literature, sliding mode controls for deterministic systems are designed such that the system approaches the sliding surface in finite time and slides on it thereafter. The discontinuous term k sgn(S) is used in the control to ensure the finite approaching time. However, once the system is on the sliding surface, this discontinuous term causes the control to switch infinitely fast (Slotine and Li, 1991). This is known as chattering. Chattering occurs when the magnitude of S approaches the noise floor in the experiment or when it becomes smaller than the precision of the computer in numerical simulations. To avoid chattering, a saturation function is commonly used in the control so that when the system falls within a boundary layer around the sliding surface S = 0, k sgn(S) is replaced by a linear term in S and the sliding condition becomes 1 2
(17.88) dS −kS 2 + 2D dt + S dB, for |S| φ 1, k > 0, 2 where φ is a parameter defining the thickness of the boundary layer. In this case, the system can reach the boundary layer in finite time, but can no longer approach the sliding surface in finite time. For stochastic systems, the noise floor for chattering to happen is higher than the deterministic system. Therefore, chattering will occur when the magnitude of S is bigger than the corresponding value for deterministic systems. For this reason, the thickness φ of the boundary layer depends on the level of random excitation and should generally be bigger for stronger random excitations. This relationship can be determined quantitatively by properly defining the onset of chattering. Because, for stochastic systems, φ has to be determined according to the level of random excitations in order to avoid chattering, this is not a robust feature. The control in equation (17.76) together with the choice of η in equation (17.78) can avoid chattering completely regardless the level of random excitations. On the
354
Chapter 17. Sliding Mode Control
other hand, random disturbances are often not directly measurable. Without the knowledge of the random disturbance, it is difficult to make the system slide on the surface S = 0 over a period of time since the disturbance will inevitably drive the system away from the sliding surface. We would like the control to be able to bring the system back to the vicinity of the sliding surface at an exponential rate, which is faster than a constant rate particularly when S is large. This way, we shall be able to keep the response variance of the system low without control chattering. The controls presented in this section are designed with this objective in mind. However, one should keep in mind that when |S| is greater than one, the control term kS is bigger than k sgn(s) for the same gain k. Numerical examples When the excitation level is high or when the estimate of the unknown system parameter ε is far deviated from the true value and the upper bound ε is set very large, the transient behavior of the ideal and robust controls can be very different. However, both the controls lead to a nearly identical steady state response of the system. Figure 17.5 shows the steady state variance reduction ratio as a function of the parameter λ when the system is subjected to a very strong random excitation D = 1. Other parameters are given here: ωn = 1, ζ = ζˆ = 0.04, ζ = 0, ε = 0.01, εˆ = 0.011, ε = 0.005, t = 0.01, m1 (0) = 1, m2 (0) = 0, m20 (0) = 1.1, m11 (0) = 0, and m02 (0) = 0.1.
Figure 17.5. Variance reduction ratio as a function of λ. Top: k = 2; Bottom: k = λ. Solid line: equation (17.87) for the ideal sliding mode control. Symbol +: Computed ratio for the robust control.
17.3. Stochastic sliding mode control
355
Figure 17.6. The first and second order moments of the uncontrolled system. Top: m1 : Dash-Dot line; m2 : Solid line. Bottom: m20 : Dash-Dot line; m02 : Solid line; m11 : Dash line.
Figure 17.7. The second order moments (top) of the system response under the robust sliding mode control. m20 : Dash-Dot line; m02 : Solid line; m11 : Dash line. The first and second order moments (bottom) of the robust control. E[U ]: Solid line; E[U 2 ]: Dash line. k = 1, k = 2ωn .
Chapter 17. Sliding Mode Control
356
The reason that both controls lead to close steady state responses lies in the fact that the nonlinear terms in the moment equations of the system under the robust sliding control become smaller as the system approaches the steady state. Hence, the nonlinear system under the robust control approaches its linear counterpart under the ideal sliding mode control. Next, we present numerical solutions of the moment equations of the robust sliding mode control. The parameters used in the simulations are listed here ωn = 1, ζ = ζˆ = 0.04, ζ = 0, ε = 0.01, εˆ = 0, ε = 0, D = 0.001, t = 0.01, m1 (0) = 0.1, m2 (0) = 0, m20 (0) = 0.011, m11 (0) = 0, and m02 (0) = 0.05. Figure 17.6 shows the first and second order moments of the uncontrolled system. Because the system is lightly damped, the first order moments of the system decay slowly over time and the second order moments reach steady state values. In Figure 17.7, the second order moments are presented for the system under the robust sliding control and the moments of the control. It should be pointed out that because of the low level of random excitation, the ideal and robust sliding mode controls have very close transient and steady state responses.
17.4. Adaptive stochastic sliding mode control Let us consider a linear stochastic system as an example. The system is governed by stochastic differential equations in the Itô sense as dX1 = X2 dt,
dX2 = −2ζ ωn X2 − ωn2 X1 dt + U (t) dt + dB(t),
(17.89) (17.90)
where
E dB(t) = 0,
2D dt, t = t1 = t2 , E dB(t1 ) dB(t2 ) = 0, t1 = t2 .
(17.91) (17.92)
We use the sliding function S = X2 + λX1 (λ > 0). Assume that the measurements of X1 and X2 are available, ζ is unknown and ωn is known. After obtaining the nominal sliding mode control, we focus on the following control U (t) = 2ζˆ ωn X2 + ωn2 X1 − λX2 − kS,
(17.93)
where ζˆ is an estimate of ζ , k > 0, and −kS is the ‘switching’ term. Consider a Lyapunov function V = E[Vd ] where Vd =
2 1 2 1 S + ζˆ − ζ . 2 2γ
(17.94)
Clearly, V > 0. In adaptive controls, it is common to assume that the unknown parameter is slowly time-varying. That is, dζ /dt ≈ 0. To evaluate the time deriv-
17.4. Adaptive stochastic sliding mode control
ative of Vd by following Itô’s lemma, we have 1 ζˆ − ζ d ζˆ dVd = S dS + γ
= S λX2 − 2ζ ωn X2 − ωn2 X1 + U (t) dt 1 + dB(t) + 2D dt + ζˆ − ζ d ζˆ γ
= 2ωn S ζˆ − ζ X2 − kS 2 + 2D dt 1 + S dB(t) + ζˆ − ζ d ζˆ . γ Let d ζˆ = −2γ ωn SX2 dt. This defines a rule for updating the parameter estimate ζˆ . We have
dVd = −kS 2 + 2D dt + S dB(t).
357
(17.95)
(17.96)
(17.97)
Hence, we obtain
dE[Vd ] V˙ = (17.98) = −kE S 2 + 2D. dt In the absence of stochastic excitation with D = 0, we have V˙ < 0 and as t → ∞, V˙ → 0. Hence, V (t) V (0) for all t > 0 (Slotine and Li, 1991). The closedloop system response is bounded in the mean square sense. Let us consider the closed-loop equations in terms of (X1 , X2 , ζˆ ) as dX1 = X2 dt,
dX2 = −(k + λ)X2 − kλX1 + 2ωn ζˆ − ζ X2 dt + dB(t), d ζˆ = −2γ ωn (X2 + λX1 )X2 dt.
(17.99)
Note that when there is no parameter uncertainty, i.e., ζ̂ = ζ, the closed-loop system responses X1 and X2 converge exponentially to the steady state at a rate determined by k and λ. Hence, E[S] and E[S²] are bounded. When the system is subject to an external stochastic excitation with parameter uncertainty, in the steady state we have E[X1X2] → 0 and E[dζ̂/dt] = −2γωn E[X2²] dt < 0. Hence, the parameter estimate ζ̂ can grow unbounded in magnitude, and the system becomes unstable. If we impose a lower bound ζ̂ ≥ 0, based on the physical meaning of ζ̂, and modify the adaptation rule as

dζ̂ = { −2γωn(X2 + λX1)X2 dt,  ζ̂ > 0,
      { 0,                      ζ̂ = 0,   (17.100)

then ζ̂ will be bounded as t → ∞. When (ζ̂ − ζ)² is bounded, V̇ > 0 will eventually lead to an increase of E[S²]. The increase of E[S²] will in turn result in V̇ < 0. This
Figure 17.8. The average responses of the stochastic system under the adaptive sliding mode control with k = 1, λ = 1, ωn = 1, ζ = 0.5, γ = 1, D = 0.1, and Δt = 0.1. The number of sample trajectories used to compute the average is 100.
suggests that E[S 2 ] is bounded. Hence, the system is stable in the mean square sense under the modified adaptation rule. A numerical simulation of the adaptive control is shown in Figure 17.8.
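A minimal Euler–Maruyama simulation of the closed loop (17.99) with the bounded adaptation rule (17.100) can be sketched as follows. The parameter values mirror the caption of Figure 17.8; since E[dB²] = 2D dt here, the Wiener increments are drawn with standard deviation √(2D dt). The number of steps and paths are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
k, lam, wn, zeta, gamma, D = 1.0, 1.0, 1.0, 0.5, 1.0, 0.1
dt, n_steps, n_paths = 0.01, 5000, 200

x1 = np.ones(n_paths)           # X1(0) = 1
x2 = np.zeros(n_paths)          # X2(0) = 0
zeta_hat = np.zeros(n_paths)    # start from a poor estimate of zeta

for _ in range(n_steps):
    s = x2 + lam * x1
    dB = rng.normal(0.0, np.sqrt(2.0 * D * dt), n_paths)
    dx2 = (-(k + lam) * x2 - k * lam * x1
           + 2.0 * wn * (zeta_hat - zeta) * x2) * dt + dB
    dzh = -2.0 * gamma * wn * s * x2 * dt
    x1 += x2 * dt
    x2 += dx2
    zeta_hat = np.maximum(zeta_hat + dzh, 0.0)   # lower bound of (17.100)

S = x2 + lam * x1
print(abs(S.mean()), (S**2).mean())   # E[S] near zero, E[S^2] small and bounded
```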
Exercises

EXERCISE 17.1. Let us express the minimum time optimal control of the particle in the following form:

u = { −1,  when s(x, ẋ) > 0,
    { 1,   when s(x, ẋ) < 0,   (17.101)

where the switching function s(x, ẋ) depends on a parameter λ > 0.   (17.102)

Consider a Lyapunov function V = (1/2)s². Show that when m|ẋ| < 1, the system is stable and approaches the switching manifold s(x, ẋ) = 0 as t → ∞.
EXERCISE 17.2. Complete the steps leading to equations (17.35) and (17.36).

EXERCISE 17.3. Reconsider the system given by equation (17.15). Assume now that both the functions f(x) and b(x) are uncertain. Furthermore, b(x) is bounded above and below,

0 < b_min ≤ b(x) ≤ b_max.   (17.103)

Let f̂(x) be an estimate of f(x) such that

|f̂(x) − f(x)| ≤ F(x),   (17.104)

and

b̂ = √(b_min b_max).   (17.105)

Define a parameter

β = √(b_max/b_min) ≥ 1.   (17.106)

Show that

β^{−1} ≤ b̂/b ≤ β,  β^{−1} ≤ b/b̂ ≤ β,   (17.107)

and that the robust sliding control

u = (1/b̂)[D(x_d) − G(x) − f̂(x) − k sgn(s)]   (17.108)

will stabilize the system when

k ≥ β[F(x) + η] + (β − 1)|f̂(x) + G(x) − D(x_d)|,   (17.109)

where η > 0.
(17.110)
Show that SPs = 0, Ps B = 0.
(17.111)
E XERCISE 17.5. Let λj be a nonzero eigenvalue of Ps and vj be the associated eigenvector. Then, Ps vj = λj vj .
(17.112)
Chapter 17. Sliding Mode Control
360
Show that vj belongs to the null space of S. Since the full rank of S is m, there can be at most n − m such a nonzero eigenvalues of Ps . Let V = [v1 , v2 , . . . , vn−m ].
(17.113)
Show that SV = 0,
and
rank[V B] = n.
(17.114)
Chapter 18
Control of Stochastic Systems with Time Delay
Time delay comes from different sources and often leads to instability or poor performance in control systems. Other than a few exceptional cases, delay is undesired and control strategies to eliminate or minimize its unwanted effects have to be employed. Effects of time delay on the stability and performance of deterministic control systems are a subject of many studies. Yang and Wu (1998) and Stepan (1998) have studied structural systems with time delay. A study on stability and performance of feedback controls with multiple time delays is reported in Ali et al. (1998) by considering the roots of the closed loop characteristic equation. For deterministic delayed linear systems a survey of recent methods for stability analysis is presented in Niculescu et al. (1998). An effective Monte Carlo simulation scheme that converges in a weak sense is presented by Kuchler and Platen (2002). Buckwar (2000) studied numerical solutions of Itô type differential equations and their convergence where the system considered has time delay both in diffusion and drift terms. Guillouzic et al. (1999) studied first order delayed Itô differential equations using a small delay approximation and obtained PDFs as well as the second order statistics analytically. Frank and Beek (2001) obtained the PDFs using FPK equation for linear delayed stochastic systems and studied the stability of fixed point solutions in biological systems. State feedback stabilization of nonlinear time delayed stochastic systems are investigated by Fu et al. (2003) where a Lyapunov approach is used. The delayed systems are studied using discretization techniques with an extended state vector. Pinto and Goncalves (2002) fully discretized a nonlinear SDOF system to study control problems with time delay. Klein and Ramirez (2001) studied MDOF delayed optimal regulator controllers with a hybrid discretization technique where the state equation was partitioned into discrete and continuous portions. 
Another powerful discretization method is the semidiscretization method. It is a well established method in the literature and used widely in structural and fluid mechanics applications. The method was applied to delayed deterministic systems by Insperger and Stepan (2001). They studied high dimensional multiple time delayed systems in Insperger and Stepan (2002). The merit of the semi-discretization method lies in that it makes use of the exact solu361
Chapter 18. Control of Stochastic Systems with Time Delay
362
tion of linear systems over a short time interval to construct the mapping of a finite dimensional state vector for the system with time delay. The method has been extended to design feedback controls of time-delayed systems (Elbeyli et al., 2005b; Sheng et al., 2004; Sheng and Sun, 2005). In this chapter, we apply the semi-discretization method to the systems with time delay and subject to additive and multiplicative stochastic disturbances.
18.1. Method of semi-discretization Consider a stochastic system in the Stratonovich sense ˙ = f X(t), X(t − τ ), t + G(X, t)W(t), X
(18.1)
where X ∈ R r , W ∈ R p , f describes the system dynamics with time delay, and G = {Gj k } is a matrix determining the parametric and external random excitations. Wj (t) are delta correlated Gaussian white noise processes with E[Wj (t)Wk (t + T )] = 2πKj k δ(T ). Equation (18.1) can be converted to the stochastic differential equations in the Itô sense dX = m X(t), X(t − τ ), t dt + σ (X, t) dB(t), (18.2) where m is the drift vector including the Wong–Zakai correction term, and σ (X, t) is the diffusion matrix defined through the following equation p
σj l (X, t)σkl (X, t) =
l=1
p p
2πKrs Gj r Gks .
(18.3)
r=1 s=1
The Brownian motion dB(t) has the following properties tj +t
dB(t) = 0,
E tj
E dBj (t1 ) dBk (t2 ) =
δj k dt, t1 = t2 = t, t2 . 0, t1 =
(18.4)
(18.5)
We restrict ourselves to linear stochastic differential equations with m(X(t), X(t− τ ), t) = AX(t) + Aτ X(t − τ ), and G(X, t) is a linear function of X dX = AX(t) dt + Aτ X(t − τ ) dt + σ (t) dB(t),
(18.6)
where A is the state matrix and Aτ is the state matrix related to the delayed response, σ is the same as defined above. The formal solution to this equation in
18.1. Method of semi-discretization
363
integral form is written as follows tj +t
X(tj + t) = X(tj ) +
A(t)X(t) dt tj
tj +t
+
tj +t
Aτ X(t − τ ) dt + tj
σ (t) dB(t),
(18.7)
tj
where X(tj ) is the initial value in the time interval [tj , tj + t]. Note that the fourth term on the RHS is a multi-dimensional stochastic integral that must be interpreted in the Itô sense. Although equation (18.7) is exact, it does not provide any simplifications for the desired solution in a mapping form. To circumvent this problem, we introduce the following notations which will be used in the formulation X(tj − τ ) = X (j − n)t = X[j − n], X(tj ) = X[j ], A(tj ) = A[j ], σ (tj ) = σ [j ], Aτ X (j − n)t = Aτ X[j − n] = F[j − n],
(18.8)
where the time lag τ is divided into an integer n intervals of length t such that τ = nt. In the meantime, we denote tj = j t. More discussions of this procedure and applications to systems with multiple time delays can be found in Insperger and Stepan (2001, 2002). We further assume that Aτ X(t − τ ) and σ (t) remain constant over the interval [tj , tj + t]. This is the essence of the semi-discretization method and allows us to generate a discrete mapping of diffusion terms. Equation (18.7) then reads X[j + 1] = X[j ] + A[j ]X[j ]t tj +t
+ F[j − n]t + σ [j ]
dB(t).
(18.9)
tj
Some remarks on the method are in order. First, the above mapping solution is of the order O(t 2 ). Second, the solution of σ (X, t) from equation (18.3) is not unique. One possible solution is that the σ (X, t) is proportional to the matrix G, which is linear in X. When a nonlinear solution is taken for σ (X, t), the diffusion term has a nonlinear effect on the first order moments for non-zero mean Brownian motion, but produces linear relationship for the second order moments. Third, the non-delayed drift part of the mapping X[j + 1] = X[j ] + A[j ]X[j ]t can be determined by using the exact solution of the linear system with the state matrix A[j ].
Chapter 18. Control of Stochastic Systems with Time Delay
364
Define ((n + 1)r) × 1 dimensional state vectors as T Y[j ] = XT [j ], FT [j − 1], . . . , FT [j − n] , R[j ] =
T
tj +t
σ [j ]
dB(t)
n×r
(18.10)
.T
8 9: ; , 0, . . . , 0
.
(18.11)
tj
A mapping of Y[j ] over the interval [tj , tj +1 ] becomes Y[j + 1] = [j ]Y[j ] + R[j ], where [j ] is given by ⎧ I + A[j ]t ⎪ ⎪ ⎪ ⎪ Aτ ⎪ ⎨ [j ] = ⎪ ⎪ ⎪ ⎪ ⎪ ⎩
(18.12) ⎫ It ⎪ ⎪ ⎪ ⎪ ⎪ ⎬
I ..
. I
⎪ ⎪ ⎪ ⎪ ⎪ ⎭
.
(18.13)
18.2. Stability and performance analysis Define the first order moment and the correlation matrix of Y[j ] as
z[j ] = zk [j ] = E Y[j ] ,
Z[j ] = Zkl [j ] = E Y[j ]YT [j ] . Introduce the following notations
J[j ] = E [j ] , b[j ] = E R[j ] , [j ] = ψklpq [j ]
= E [j ] ⊗ [j ] = Eφkl [j ]φqp [j ] , Θ[j ] = θklp [j ]
= E [j ] ⊗ R[j ] + R[j ] ⊗ T [j ]
= E φkl [j ]Rp [j ] + Rk [j ]φlp [j ] ,
E R[j ] ⊗ R[j ] = E Rk [j ]Rp [j ] ,
(18.14) (18.15)
(18.16)
and the inner products [j ] $ Z[j ] = ψklpq [j ]Zlq [j ], Θ[j ] $ z[j ] = θklp [j ]zl [j ].
(18.17)
18.2. Stability and performance analysis
365
Recall that G is assumed to be linear in X. E[R[j ] ⊗ R[j ]] can be written in the following form
E R[j ] ⊗ R[j ] = 1 [j ] $ Z[j ] + Θ1 [j ] $ z[j ] + [j ], (18.18) where 1 has the same dimension as , Θ1 has the same dimension as Θ, and has the same dimension as E[R[j ] ⊗ R[j ]]. In the computation of the quantity E[R[j ] ⊗ R[j ]], we have used the following result tj +t E
tj +t
dBl (t2 ) = δkl t.
dBk (t1 ) tj
(18.19)
tj
With the above notations and the assumption that [j ] and Y[j ] are statistically independent, we can then write down a mapping for the first and second order moments. z[j + 1] = J[j ] · z[j ] + b[j ],
(18.20)
Z[j + 1] = H[j ] $ Z[j ] + a[j ] $ z[j ] + [j ], where H = + 1 and a = Θ + Θ1 . The stability of the mean is dictated by the eigenvalues of J[j ]. When the mean of the response is zero, the stability of the second order moments is determined by the eigenvalue of H[j ]. Let λmax (j ) be the largest absolute eigenvalue of H[j ]. Then, < < < < 0, we need K = b − a and the system must start with the initial condition x0 = xˆ0 . To track the variance, we must have σ2 σˆ 2 = . 2(a + K) 2b
(19.4)
The linear proportional feedback control needs to have an additional degree of freedom in order to track the variance so that σ = σˆ even when the reference mean and variance are specified with the implied functional relationship. In order for the mean to track its reference regardless the initial condition, the feedback must contain the information about mr (t). Otherwise, even when K = b − a, m(t) will never be the same as mr (t) if x0 = xˆ0 due to disturbances. The same observation applies to the variance tracking control. When vr (t) is not specified with the implied functional relationship with mr (t), additional feedback terms must be needed in order to independently track mr (t) and vr (t). In the following, we present a procedure to design such a feedback control.
19.2. PDF tracking control

We continue with the one-dimensional example to illustrate the discussion. From Itô's lemma in equation (5.126) on page 88, we obtain the following moment equations:

m˙ = −am + E[U], (19.5)
v˙ = −2av + σ² + 2E[UX] − 2E[U]m. (19.6)

The feedback controls in terms of E[U] and E[UX] can be designed so that the response of equations (19.5) and (19.6) will follow the references mr(t) and vr(t):

E[U] = m˙r + am + λ1(mr − m), (19.7)
2E[UX] − 2E[U]m = v˙r + 2av + λ2(vr − v), (19.8)
where the gains λ1 > 0 and λ2 > 0. We have assumed that the system parameter a is known precisely. The random disturbance is generally not measurable, and thus σ² is assumed to be unknown. When a is not known exactly, an adaptive or robust control will have to be considered. Equation (19.7) suggests that

U(X) = m˙r − aX + λ1(mr − X) + g(X), (19.9)
where E[g(X)] = 0. This provides a constraint on the otherwise arbitrary function g(X). Substituting equation (19.9) into equation (19.8), we obtain an equation for g(X) as
E[g(X)X] = (1/2)[v˙r + λ2(vr − v)]. (19.10)

After some searching, we come to an expression for g(X) as

g(X) = (X − m)[C1(t) + C2(t)vr], (19.11)

where the functions C1(t) and C2(t) are determined from equation (19.10):

C1(t) = (v˙r − λ2 v)/(2v),  C2(t) = λ2/(2v). (19.12)

Note that equation (19.10) also admits the more general solution g(X) = (X − m)[C1(t) + C2(t)vr] + h(X), where the arbitrary function h(X) must satisfy the following constraints:

E[h(X)] = 0,  E[h(X)X] = 0. (19.13)
The function h(X) will be determined through the equations of higher order moments subject to these two constraints. This leads to a hierarchy of feedback controls. Since we only need to control the moments up to second order for the present example, we shall stop here and arrive at the final control by setting h(X) = 0 as
U(X) = m˙r − aX + λ1(mr − X) + (X − m)[C1(t) + C2(t)vr]. (19.14)

The mean and variance of the closed-loop system satisfy the following equations:

d/dt (mr − m) = −λ1(mr − m), (19.15)
d/dt (vr − v) = −λ2(vr − v) − σ². (19.16)

Hence, when the system parameter a is known precisely, the control given in equation (19.14) will make the system track the mean and variance of the target PDF with exponential convergence. As t → ∞, perfect tracking of the mean is achieved. Furthermore, in steady state as t → ∞, the response variance is given by

vss = σ²/λ2 + vr. (19.17)
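The closed-loop behavior in equations (19.15)–(19.17) can be checked with a small ensemble simulation. The sketch below is an illustration, not the book's implementation: the drift-cancellation term is written as +aX and the mean-error feedback acts on the ensemble mean m, sign and argument choices made so that the error dynamics reduce exactly to (19.15)–(19.16); all parameter values are invented for the demonstration.

```python
import numpy as np

# Ensemble sketch of the moment-tracking control for
# dX = (-a X + U) dt + sigma dB, with constant references mr, vr and
# the unknown-sigma^2 design, so the variance settles at vr + sigma^2/lam2.
rng = np.random.default_rng(1)
a, sigma = -1.0, 1.0            # a = -1: open-loop drift +X, unstable
lam1, lam2 = 1.0, 2.0           # feedback gains (illustrative)
mr, vr = 5.0, 4.0               # constant target mean and variance
dt, n_steps, n_paths = 0.01, 1000, 20000

X = rng.standard_normal(n_paths)            # random start so v(0) > 0
for _ in range(n_steps):
    m, v = X.mean(), X.var()
    C1 = (0.0 - lam2 * v) / (2.0 * v)       # (19.12) with v'_r = 0
    C2 = lam2 / (2.0 * v)
    U = a * X + lam1 * (mr - m) + (X - m) * (C1 + C2 * vr)
    X += (-a * X + U) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

print(X.mean())      # close to mr = 5
print(X.var())       # close to vr + sigma^2/lam2 = 4.5, cf. eq. (19.17)
```

The ensemble variance converges to vr + σ²/λ2 rather than vr, reproducing the steady state tracking error of equation (19.17).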
Hence, the term σ²/λ2 is the steady state variance tracking error. A large gain λ2 is needed to reduce the variance tracking error. When we assume that σ² is known, we include it in the control design of equation (19.8):

2E[UX] − 2E[U]m = v˙r + 2av − σ² + λ2(vr − v). (19.18)

In this case, we would have d/dt (vr − v) = −λ2(vr − v) and vss = vr, with zero variance tracking error.
Discussions

Some remarks are in order. The control in equation (19.14) has time varying feedback gains, involves the mean and variance of the response at time t, and is highly nonlinear. In order to implement the control in real time, one needs to estimate the mean and variance online. Since we have to estimate the mean and variance over time, as opposed to over the sample space, the system must be assumed to be ergodic. This example clearly suggests that in order to track a Gaussian PDF with independently specified mean and variance, both of which are functions of time, we must have a nonlinear and time varying feedback control involving moving temporal averages of the system response. The moving temporal averages take time to converge, and will undoubtedly deteriorate the transient performance of the control system, as will be demonstrated in the numerical examples. In summary, it is a far more difficult task to track a time-varying PDF or a set of time-varying moments than the steady state PDF or moments. Furthermore, when we have to track a non-Gaussian PDF, the hierarchical design would have to include many more terms in order to control the higher order moments. This is, of course, a reflection of the complexity of non-Gaussian PDFs.

Stability

When the mean and variance of the response are estimated accurately online, the present control guarantees the stability of the moment equations up to the order involved in the design. Hence, the closed-loop system is said to be stable in the sense of moments up to that order. Moment stability is a common notion in the community of random vibrations (Cai and Lin, 1996). However, the effect of poor estimates of the mean and variance on stability needs further investigation.
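The moving temporal averages mentioned above can be realized with a simple rolling window over a single (assumed ergodic) trajectory. The following is a minimal sketch; the class name and window length are hypothetical choices (the numerical examples later use a 15-point window):

```python
import numpy as np
from collections import deque

# Rolling-window estimator of m(t) and v(t) from one sample path,
# as required for real-time implementation of the control (19.14).
class MovingMoments:
    def __init__(self, width=15):
        self.buf = deque(maxlen=width)   # keeps only the newest samples

    def update(self, x):
        self.buf.append(x)
        data = np.asarray(self.buf)
        m = data.mean()
        v = data.var() if len(data) > 1 else 0.0
        return m, v                      # current estimates of m(t), v(t)

rng = np.random.default_rng(2)
est = MovingMoments(width=15)
for _ in range(200):                     # feed stationary N(5, 4) samples
    m, v = est.update(5.0 + 2.0 * rng.standard_normal())
print(m, v)                              # noisy estimates near 5 and 4
```

With only 15 points in the window the estimates are quite noisy, which is exactly the source of the degraded transient performance noted above.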
19.3. General formulation of PDF control The above hierarchical design procedure can be extended to stochastic systems of higher order. Consider a set of stochastic differential equations in the Itô sense as
follows:

dXj = [fj(X, t) + Σ_{k=1}^{m} Bjk Uk(X, t)] dt + Σ_{k=1}^{p} σjk(X, t) dβk, (19.19)
where X ∈ R^n is the state vector with components Xj, U ∈ R^m is the control vector with components Uk, Bjk is an n × m control influence matrix, βk denotes a p × 1 vector of independent unit Brownian motions, and σjk is an n × p matrix of the state-dependent intensities of the random excitations. The objective is to find a control Uk(X, t) such that the PDF of the response X(t) tracks a given target PDF. Assume that the target PDF is characterized by a set of moments. We shall design such a control by considering the response moment equations. Let F(X, t) be a function of the response. By applying Itô's lemma, we obtain

dE[F(X, t)]/dt = E[ Σ_{j=1}^{n} (∂F/∂Xj)(fj(X, t) + Σ_{k=1}^{m} Bjk Uk(X, t)) + Σ_{j=1}^{n} Σ_{k=1}^{n} (bjk/2) ∂²F/(∂Xj ∂Xk) ], (19.20)

where bjk = Σ_{l=1}^{p} σjl σkl. By selecting E[F(X, t)] as the qth order mixed moment of the system, we can generate a set of moment equations.

Let Uj^(1)(X, t) be the control for E[X(t)] to track the mean mr(t) of the target PDF. Then, the control of the system can be written as

Uj(X, t) = Uj^(1)(X, t) + gj^(1)(X, t),  j = 1, 2, ..., m, (19.21)

subject to the constraint

E[gj^(1)(X, t)] = 0,  j = 1, 2, ..., m. (19.22)
Substituting equation (19.21) into the second order moment equations, we can obtain an expression for gj^(1)(X, t) as

gj^(1)(X, t) = Uj^(2)(X, t) + gj^(2)(X, t), (19.23)

subject to the constraints

E[gj^(2)(X, t)] = 0,  j = 1, 2, ..., m,
E[gj^(2)(X, t)Xk] = 0,  k = 1, 2, ..., n. (19.24)
Following the same procedure with the equations for higher order moments, we can derive a hierarchy of controls such that

gj^(l)(X, t) = Uj^(l+1)(X, t) + gj^(l+1)(X, t),  l = 1, 2, 3, ..., (19.25)

subject to the constraints

E[gj^(l+1)(X, t)] = 0,  j = 1, 2, ..., m,
E[gj^(l+1)(X, t)Xk^q] = 0,  k = 1, 2, ..., n,  q = 1, 2, ..., l. (19.26)

As the order goes higher, more constraints on gj^(l+1)(X, t) are added, which makes the solution more difficult to obtain.
19.4. Numerical examples

Return to the one-dimensional example in Section 19.1. Assume that the system in equation (19.1) is unstable with a = −1. Let the external excitation be a zero mean Gaussian process with variance σ² = π². In the example, we assume that σ² is known. We choose the control gains as λ1 = 1 and λ2 = 0.9. First, consider a target PDF that is Gaussian with constant mean and variance, mr = 5 and vr = 4. To implement the control in equation (19.14) under the assumption that σ² is known, we estimate the response mean m(t) and variance v(t) online using the most recent 15 points.
Figure 19.1. The steady state mean and variance of the closed-loop system in equation (19.1). a) The mean. (− − −): mr , (——): m(t). b) The variance. (− − −): vr , (——): v(t).
Figure 19.2. Comparison of the steady state PDF of the controlled system and the target PDF. (− − −): the controlled response PDF. (——): the target PDF.
We numerically generate 10000 trajectories and then take the sample average to obtain the time history of the mean and variance of the system response. The results are shown in Figure 19.1. The proposed control indeed drives the mean and variance to the desired ones. The online estimation of m(t) and v(t) takes time to converge, and thus deteriorates the transient performance of the control. We estimate the steady state mean and variance of the system by taking the time and sample averages of the last 300 points. We have m = 5.0059 and v = 4.1292, with 0.12% and 3.23% deviations from the respective references. Using the same data points, we also generate a histogram estimate of the response PDF and compare it with the target in Figure 19.2. The agreement is good. Next, we consider a target mean and variance that are harmonic functions of time, given by mr(t) = 8 + 3 sin(0.4(t − 2))H(t − 2) and vr(t) = 5 + 2 sin(0.2(t − 2))H(t − 2), where H(t) is the unit step function. All other parameters are the same as in the previous example. A delay is artificially introduced to the time-varying part of the target mean and variance to accommodate the temporal estimation of these quantities. However, this time delay does not affect the convergence of the estimation. 10000 time series are generated. Figure 19.3 compares the target mean and the response mean, whereas Figure 19.4 shows the variance tracking. It is seen from the figures that the proposed controller tracks the target mean and variance well after the initial transient periods.
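The harmonic-reference experiment can be sketched with an ensemble simulation. As in the earlier sketch, this is an illustration rather than the book's code: the drift cancellation is written as +aX, the mean-error feedback acts on the ensemble mean, ensemble averages replace the 15-point temporal window (so the artificial delay is unnecessary), and σ² is subtracted in the variance channel as in equation (19.18):

```python
import numpy as np

# Ensemble simulation of the time-varying tracking example:
# mr(t) = 8 + 3 sin(0.4(t-2))H(t-2), vr(t) = 5 + 2 sin(0.2(t-2))H(t-2),
# a = -1, sigma^2 = pi^2 (known), lam1 = 1, lam2 = 0.9.
rng = np.random.default_rng(3)
a, sigma2 = -1.0, np.pi**2
lam1, lam2 = 1.0, 0.9
dt, T, n_paths = 0.01, 20.0, 20000
sigma = np.sqrt(sigma2)

def refs(t):
    h = 1.0 if t >= 2.0 else 0.0
    mr = 8.0 + 3.0 * np.sin(0.4 * (t - 2.0)) * h
    dmr = 1.2 * np.cos(0.4 * (t - 2.0)) * h        # d(mr)/dt
    vr = 5.0 + 2.0 * np.sin(0.2 * (t - 2.0)) * h
    dvr = 0.4 * np.cos(0.2 * (t - 2.0)) * h        # d(vr)/dt
    return mr, dmr, vr, dvr

X = 8.0 + rng.standard_normal(n_paths)
t = 0.0
while t < T:
    mr, dmr, vr, dvr = refs(t)
    m, v = X.mean(), X.var()
    C1 = (dvr - sigma2 - lam2 * v) / (2.0 * v)     # sigma^2 known, eq. (19.18)
    C2 = lam2 / (2.0 * v)
    U = dmr + a * X + lam1 * (mr - m) + (X - m) * (C1 + C2 * vr)
    X += (-a * X + U) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    t += dt

mr, _, vr, _ = refs(T)
print(X.mean(), mr)     # tracked mean vs. reference at t = T
print(X.var(), vr)      # tracked variance vs. reference at t = T
```

After the initial transient, both moments follow the harmonic references with no steady offset, the behavior shown in Figures 19.3 and 19.4.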
Exercises

EXERCISE 19.1. Show the steps leading to equation (19.14).
Figure 19.3. Mean tracking performance of the control. The system and control design parameters are the same as in the previous example. (− − −): mr(t), (——): m(t).

Figure 19.4. Variance tracking performance of the control. The system and control design parameters are the same as in the previous example. (− − −): vr(t), (——): v(t).
EXERCISE 19.2. Design feedback controls for the following linear one-dimensional stochastic system to track time-varying mean and variance:

dX = (aX + U) dt + σ dβ,  X(0) = x0, (19.27)

where a > 0, U is the control, β is a unit Brownian motion and σ is a constant representing the intensity of the random excitation. x0 is a deterministic initial condition. Note that the open-loop system is unstable.
Appendix A
Matrix Computation
There are many excellent textbooks on matrix computations. The references (Golub and Loan, 1983; Junkins and Kim, 1993) should serve well those readers of this book who wish to pursue in-depth discussions of this subject.
A.1. Types of matrices

Let A = {ajk} denote an n × n matrix with ajk as its jkth element.

Symmetric Matrix: A = A^T, i.e., ajk = akj. (A.1)

Skew Symmetric Matrix: A = −A^T, i.e., ajk = −akj. (A.2)

Hermitian Matrix: A = A^H, i.e., ajk = a*_kj. (A.3)

Skew Hermitian Matrix: A = −A^H, i.e., ajk = −a*_kj. (A.4)

Orthogonal Matrix: AA^T = A^T A = I, i.e., A^{-1} = A^T. (A.5)

Unitary Matrix: AA^H = A^H A = I, i.e., A^{-1} = A^H. (A.6)

Idempotent: AA = A² = A. (A.7)

Nilpotent: A^k = 0 for some integer k > 0. (A.8)
Positive Definite:

x^T A x > 0, ∀x ≠ 0, (A.9)

when A is real and symmetric, and

x^H A x > 0, ∀x ≠ 0, (A.10)

when A is Hermitian. The consequence of positive definiteness is that all the eigenvalues of the matrix are positive.

Positive Semidefinite:

x^T A x ≥ 0, ∀x ≠ 0, (A.11)

when A is real and symmetric, and

x^H A x ≥ 0, ∀x ≠ 0, (A.12)

when A is Hermitian. The consequence of positive semidefiniteness is that all the eigenvalues of the matrix are non-negative.
A.2. Blocked matrix inversion

Let A be a square matrix partitioned in the following block form:

A = [ A11, A12 ; A21, A22 ], (A.13)

where A11 and A22 are square matrices, and A12 and A21 are rectangular matrices with proper dimensions. Assume that A11 is invertible. We define a Schur complement matrix as

S = A22 − A21 A11^{-1} A12. (A.14)

Matrix A is invertible if and only if A11 and S are invertible. The inverse of A is given by

A^{-1} = [ B, −A11^{-1} A12 S^{-1} ; −S^{-1} A21 A11^{-1}, S^{-1} ], (A.15)

where

B = A11^{-1} + A11^{-1} A12 S^{-1} A21 A11^{-1}. (A.16)

On the other hand, if we assume that A22 is invertible, we define the Schur complement as

Ŝ = A11 − A12 A22^{-1} A21. (A.17)
385
As in the above case, matrix A is invertible if and only if A22 and Sˆ are invertible. The inverse of A is given by −Sˆ −1 A12 A−1 Sˆ −1 22 A−1 = (A.18) , ˆ −1 −A−1 Bˆ 22 A21 S where ˆ = A−1 + A−1 A21 Sˆ −1 A12 A−1 . B 22 22 22
(A.19)
Comparing the two inverses of matrix A, we conclude that when both A11 and A22 are invertible, we must have ˆ S−1 = B,
or Sˆ −1 = B.
(A.20)
In terms of the sub-block matrices, we have
−1 A22 − A21 A−1 = A−1 11 A12 22
−1 −1 −1 + A22 A21 A11 − A12 A22 A21 A12 A−1 22 ,
−1 −1 −1 A11 − A12 A22 A21 = A11
−1 −1 + A−1 A21 A−1 11 A12 A22 − A21 A11 A12 11 .
(A.21)
(A.22)
A.3. Matrix decomposition A.3.1. LU decomposition Let Ak (k = 1, 2, . . . , s) be the leading k × k principal submatrix of the matrix A ∈ R m×n where s = min(m − 1, n). Then, there exists a unit lower triangular matrix L ∈ R m×m and an upper triangular matrix U ∈ R m×n such that A = LU.
(A.23)
A.3.2. QR decomposition Let the matrix A ∈ R m×n have linearly independent columns. Then, A can be uniquely expressed in the factored form A = QR,
(A.24)
has orthonormal columns such that QQ = = I. R ∈ where Q ∈ R m×n is upper triangular with positive diagonal elements, and it is in the form U R= . 0 R m×m
T
QT Q
Appendix A. Matrix Computation
386
A.3.3. Spectral decomposition Let A ∈ R n×n and B ∈ R n×n . Consider the right and left eigenvalue problems of the matrix pair as Aϕj = λj Bϕj ,
(A.25)
A ψ j = λj B ψ j , T
T
where λj (1 j n) are the eigenvalues, ϕi are the right eigenvectors and ψi the left eigenvectors. They have the following orthogonality properties ψkT Bϕj = δkj ,
ψkT Aϕj = λk δkj .
(A.26)
Define the eigenmatrices as = [ϕ1 , ϕ2 , . . . , ϕn ],
(A.27)
= [ψ1 , ψ2 , . . . , ψn ], and
⎡ ⎢ ⎢ Λ=⎢ ⎣
⎤
λ1 λ2 ..
.
⎥ ⎥ ⎥. ⎦
(A.28)
λn Then, we have the spectral decomposition of A and B as A = −T Λ−1 ,
B = −T −1 .
(A.29)
When B = I, we have T = −1 . Therefore, A = Λ T .
(A.30)
When B = I and A = AT , we have = and T = −1 . Therefore, = −1 and
T
A = ΛT .
(A.31)
A.3.4. Singular value decomposition

Let A ∈ C^{m×n} be a rectangular complex matrix with rank r. It can be factored as

A = U Σ V^H, (A.32)

where

Σ = [ Σ1, 0 ; 0, 0 ], (A.33)
and Σ1 = diag(σ1, σ2, ..., σr), where σ1 ≥ σ2 ≥ ··· ≥ σr > 0 are the real singular values of A. U ∈ C^{m×m} and V ∈ C^{n×n} are unitary matrices such that

U^H U = I,  V^H V = I. (A.34)
A.3.5. Cholesky decomposition

Let A ∈ R^{n×n} be positive definite. It can be factored into the following product:

A = L L^T, (A.35)
where L is a real lower triangular matrix with positive diagonal terms.

A.3.6. Schur decomposition

Let λj (1 ≤ j ≤ n) be real eigenvalues of the matrix A ∈ R^{n×n}. The eigenvalues are not necessarily distinct. Then, there exists a unitary matrix U, with U^H U = UU^H = I, such that the following triple product is an upper triangular matrix:

U^H A U = [ λ1, *, ..., * ; 0, λ2, ..., * ; ... ; 0, 0, ..., λn ]. (A.36)

When some eigenvalues of the matrix A are complex, we have the real Schur form, in which each complex conjugate pair of eigenvalues appears in a 2 × 2 diagonal block. For example, when λ2 = λ2r + iλ2i is complex, we have

U^H A U = [ λ1, *, *, ..., * ; 0, λ2r, λ2i, ..., * ; 0, −λ2i, λ2r, ..., * ; ... ; 0, 0, ..., λn ]. (A.37)
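All of the factorizations in this section are available in standard numerical libraries. A quick spot-check with arbitrary random test matrices (note that SciPy's LU routine includes row pivoting, A = PLU, a slight generalization of (A.23)):

```python
import numpy as np
from scipy.linalg import lu, cholesky, schur

# Numerical illustrations of the decompositions of Section A.3.
rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))

P, L, U = lu(A)                     # LU with row pivoting: A = P L U
print(np.allclose(P @ L @ U, A))

Q, R = np.linalg.qr(A)              # QR: A = Q R with Q^T Q = I, eq. (A.24)
print(np.allclose(Q @ R, A))

S = A @ A.T + 4.0 * np.eye(4)       # symmetric positive definite
C = cholesky(S, lower=True)         # Cholesky: S = L L^T, eq. (A.35)
print(np.allclose(C @ C.T, S))

T, Z = schur(A)                     # real Schur form: A = Z T Z^T
print(np.allclose(Z @ T @ Z.T, A))
```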
A.4. Solution of linear algebraic equations and generalized inverse

Consider a system of linear algebraic equations

Ax = b. (A.38)

When A ∈ R^{n×n} and the rank of A is full, i.e., r(A) = n, there is a unique solution to the equation given by

x = A^{-1} b. (A.39)
A.4.1. Underdetermined system with minimum norm solution

When the rank of A ∈ R^{n×n} is less than n, there are fewer independent equations in equation (A.38) than unknowns. This is equivalent to the case of a matrix A ∈ R^{m×n} with m < n. There are infinitely many solutions to equation (A.38). It is common to select the solution that minimizes the index

J = (1/2) x^T W x, (A.40)

where W is a positive definite weighting matrix. When W = I, the solution obtained is known as the minimum norm solution. Hence, we come to a constrained optimization problem. Define an augmented index as

Ĵ = (1/2) x^T W x + λ^T (Ax − b). (A.41)

The necessary conditions for minimizing Ĵ are

∂Ĵ/∂x = Wx + A^T λ = 0,  ∂Ĵ/∂λ = Ax − b = 0. (A.42)

This leads to the minimum norm solution

x = W^{-1} A^T (A W^{-1} A^T)^{-1} b. (A.43)

The inverse (A W^{-1} A^T)^{-1} exists when the rank r(A) = m. Consider the case when W = I and r(A) = m. We have

x = A^T (A A^T)^{-1} b ≡ A† b, (A.44)
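The minimum norm property of (A.44) can be checked directly: the pseudo-inverse solution lies in the row space of A, so any other solution x + z with Az = 0 has a strictly larger norm. A sketch with arbitrary random data:

```python
import numpy as np

# Minimum norm solution (A.44) of an underdetermined system (m < n).
rng = np.random.default_rng(6)
m, n = 2, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

Ap = A.T @ np.linalg.inv(A @ A.T)   # pseudo-inverse A† = A^T (A A^T)^-1
x = Ap @ b
print(np.allclose(A @ x, b))        # x solves A x = b

# build another solution x + z with z in the null space of A:
w = rng.standard_normal(n)
z = w - Ap @ (A @ w)                # projection of w onto the null space
x_other = x + z
print(np.allclose(A @ x_other, b))
print(np.linalg.norm(x) < np.linalg.norm(x_other))  # x has minimum norm
```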
where A† = A^T (A A^T)^{-1} is a pseudo-inverse of A such that A A† = I.

A.4.2. Overdetermined system with least squares solution

When the matrix A is rectangular with dimensions m × n and m > n, there are more independent equations than unknowns. The system is overdetermined and equation (A.38) may not have a solution. In this case, we seek a solution x such that the following error is minimized:

e = Ax − b. (A.45)

Consider the index

J = (1/2) e^T W e, (A.46)
where W is a positive definite weighting matrix. We minimize J with respect to x. This leads to the so-called least squares solution

x = (A^T W A)^{-1} A^T W b ≡ A† b. (A.47)

The inverse (A^T W A)^{-1} exists when the rank r(A) = n. When W = I, A† ≡ (A^T A)^{-1} A^T is another pseudo-inverse of A such that A† A = I.

A.4.3. A general case

When A ∈ R^{m×n} is not full rank, i.e., r(A) = r < min(m, n), the inverse (A^T W A)^{-1} does not exist. However, the pseudo-inverse can be expressed by making use of the singular value decomposition. Consider a special case when W = I. We have

A† = V Σ† U^T, (A.48)

where

Σ† = [ Σ1^{-1}, 0 ; 0, 0 ], (A.49)
and Σ1^{-1} = diag(σ1^{-1}, σ2^{-1}, ..., σr^{-1}). The pseudo-inverses of the matrix A discussed above are also known as the Moore–Penrose generalized inverse. The Moore–Penrose inverse has the following properties:

A A† A = A,  A† A A† = A†,  (A A†)^∗ = A A†,  (A† A)^∗ = A† A. (A.50)
Appendix B
Laplace Transformation
B.1. Definition and basic properties

Let f(t) be a real function. Assume that there exists a real number σ > 0 such that

lim_{t→∞} f(t)e^{−σt} = 0. (B.1)
Let s = σ + iω be a complex number. The Laplace transformation F(s) of f(t) is defined as

L[f(t)] = F(s) = ∫_0^∞ f(t)e^{−st} dt. (B.2)
The Laplace transformation has the following basic properties. Note that a, b and τ are constants.

Superposition

L[a f1(t) + b f2(t)] = a F1(s) + b F2(s). (B.3)
Time delay

L[f(t − τ)] = ∫_0^∞ f(t − τ)e^{−st} dt = F(s)e^{−τs}. (B.4)
Time scaling

L[f(at)] = ∫_0^∞ f(at)e^{−st} dt = (1/|a|) F(s/a). (B.5)
Frequency shift

L[e^{−at} f(t)] = ∫_0^∞ e^{−at} f(t)e^{−st} dt = F(s + a). (B.6)
Differentiation

Let f(t) be k times differentiable with the initial conditions f(0) and f^{(j)}(0) (j = 1, 2, ..., k − 1). Then

L[d^k f(t)/dt^k] = ∫_0^∞ (d^k f(t)/dt^k) e^{−st} dt = s^k F(s) − s^{k−1} f(0) − s^{k−2} f^{(1)}(0) − ··· − f^{(k−1)}(0). (B.7)
Integration

L[∫_0^t f(τ) dτ] = ∫_0^∞ [∫_0^t f(τ) dτ] e^{−st} dt = (1/s) F(s). (B.8)
Convolution Theorem

L[∫_0^t f1(t − τ)f2(τ) dτ] = ∫_0^∞ [∫_0^t f1(t − τ)f2(τ) dτ] e^{−st} dt = F1(s)F2(s). (B.9)
Initial and Final Value Theorem

lim_{s→∞} sF(s) = f(0),  lim_{s→0} sF(s) = lim_{t→∞} f(t). (B.10)
B.2. Laplace transform of common functions
f(t)                              F(s)
δ(t)                              1
H(t)                              1/s
t^k                               k!/s^{k+1}
t^{k−1}e^{−at}/(k − 1)!           1/(s + a)^k
sin at                            a/(s² + a²)
cos at                            s/(s² + a²)
e^{−at} sin bt                    b/((s + a)² + b²)
e^{−at} cos bt                    (s + a)/((s + a)² + b²)
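Entries of this table are easy to spot-check symbolically; for example, with SymPy (the symbols are taken positive so the defining integrals converge):

```python
import sympy as sp

# Symbolic verification of two common Laplace transform pairs.
t, s = sp.symbols('t s', positive=True)
a, b = sp.symbols('a b', positive=True)

F1 = sp.laplace_transform(sp.exp(-a*t) * sp.sin(b*t), t, s, noconds=True)
print(sp.simplify(F1 - b / ((s + a)**2 + b**2)))      # 0: matches the table

F2 = sp.laplace_transform(sp.cos(a*t), t, s, noconds=True)
print(sp.simplify(F2 - s / (s**2 + a**2)))            # 0: matches the table
```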
Bibliography
Abramowitz, M., Stegun, I.A., 1972. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover Publications, New York. Ali, M.S., Hou, Z.K., Noori, M.N., 1998. Stability and performance of feedback control systems with time delays. Computers and Structures 66 (2–3), 241– 248. Anderson, D., Townehill, J., Pletcher, R., 1984. Computational Fluid Mechanics and Heat Transfer. Hemisphere, New York. ASTM, 1990. Cycle counting in fatigue analysis. Annual Book of ASTM Standards 03.01. Benasciutti, D., 2004. Fatigue analysis of random loadings. Ph.D. Dissertation, University of Ferrara. Bendat, J.S., Piersol, A.G., 2000. Random Data: Analysis and Measurement Procedures. John Wiley and Sons, New York. Bensoubaya, M., Ferfera, A., Iggidr, A., 2000. Jurdjevic–Quinn-type theorem for stochastic nonlinear control systems. IEEE Transactions on Automatic Control 45, 93–98. Bishop, N.W.M., 1988. The use of frequency domain parameters to predict structural fatigue. Ph.D. Dissertation, University of Warwick. Bishop, N.W.M., Sherratt, F., 1990. Theoretical solution for the estimation of rainflow ranges from power spectral denisty data. Fatigue and Fracture of Engineering Materials and Structures 13 (4), 311–326. Blaquiere, A., 1992. Controllability of a Fokker–Planck equation, the Schroedinger system, and a related stochastic optimal control. Dynamics and Control 2 (3), 235–253. Boulanger, C., 2000. Stabilization of a class of nonlinear stochastic systems. Nonlinear Analysis, Theory, Methods and Applications 41 (3), 277–286. Bratus, A., Dimentberg, M., Iourtchenko, D., Noori, M., 2000. Hybrid solution method for dynamic programming equations for MDOF stochastic systems. Dynamics and Control 10, 107–116. Brockett, R.W., Liberzon, D., 1998. On explicit steady-state solutions of FokkerPlanck equations for a class of nonlinear feedback systems. In: Proceedings of American Control Conference. Philadelphia, Pennsylvania, pp. 264–268. Bryson, A.E. 
Jr., Ho, Y.C., 1969. Applied Optimal Control. Blaisdell Publishing Company, Waltham, Massachusetts. Buckwar, E., 2000. Introduction to the numerical analysis of stochastic delay differential equations. Journal of Computational and Applied Mathematics 125 (1–2), 297–307. 395
Cai, G.Q., Lin, Y.K., 1996. Exact and approximate solutions for randomly excited MDOF non-linear systems. International Journal of Non-Linear Mechanics 31 (5), 647–655. Caughey, T.K., Ma, F., 1982. Exact steady-state solution of a class of non-linear stochastic systems. International Journal of Non-Linear Mechanics 17 (3), 137–142. Caughey, T.K., O’Kelly, M.E.J., 1964. Classical normal modes in damped linear dynamic systems. Journal of Applied Mechanics, 583–588. Chang, K.-Y., Wang, W.-J., Chang, W.-J., 1997. Covariance control for stochastic multivariable systems with hysteresis nonlinearity. International Journal of Systems Science 28 (7), 731–736. Chernick, M.R., 1999. Bootstrap Methods: A Practitioner’s Guide. John Wiley and Sons. Chung, H.-Y., Chang, W.-J., 1994a. Constrained variance design using covariance control with observed-state feedback for bilinear stochastic continuous systems. Journal of the Chinese Institute of Engineers, Transactions of the Chinese Institute of Engineers, Series A 17 (1), 113–119. Chung, H.Y., Chang, W.J., 1994b. Extension of the covariance control principle to nonlinear stochastic systems. IEE Proceedings: Control Theory and Applications 141 (2), 93–98. Crandall, S.H., Mark, W.D., 1963. Random Vibration in Mechanical Systems. Academic Press, New York. Crandall, S.H., Zhu, W.Q., 1983. Random vibration: A survey of recent developments. Journal of Applied Mechanics 50 (4b), 953–962. Crespo, L.G., Sun, J.Q., 2000. Solution of fixed final state optimal control problems via simple cell mapping. Nonlinear Dynamics 23 (4), 391–403. Crespo, L.G., Sun, J.Q., 2003a. Control of nonlinear stochastic systems via stationary probability distributions. Probabilistic Engineering Mechanics 18, 79– 86. Crespo, L.G., Sun, J.Q., 2003b. Fixed final time optimal control via simple cell mapping. Nonlinear Dynamics 31, 119–131. Crespo, L.G., Sun, J.Q., 2003c. Stochastic optimal control of nonlinear dynamic systems via Bellman’s principle and cell mapping. 
Automatica 39 (12), 2109– 2114. Deodatis, G., Micaletti, R.C., 2001. Simulation of highly skewed non-Gaussian stochastic processes. Journal of Engineering Mechanics 127 (12), 1284–1295. Desoer, C.A., Vidyasagar, M., 1975. Feedback Systems: Input-Output Properties. Academic Press, New York. Dimentberg, M., 1988. Statistical Dynamics of Nonlinear and Time-Varying Systems. Research Studies Press, Tauton, UK. Dimentberg, M.F., 1982. An exact solution to a certain non-linear random vibration problem. International Journal of Non-Linear Mechanics 17 (4), 231–236.
Dimentberg, M.F., Iourtchenko, A.S., Brautus, A.S., 2000. Optimal bounded control of steady-state random vibrations. Probabilistic Engineering Mechanics 15 (4), 381–386. Dirlik, T., 1985. Application of computers in fatigue analysis. Ph.D. Dissertation, University of Warwick. Doob, J.L., 1953. Stochastic Processes. Wiley, New York. Dowling, N.E., 1999. Mechanical Behavior of Materials: Engineering Methods for Deformation, Fracture, and Fatigue. Prentice Hall, Upper Saddle River, New Jersey. Drakunov, S., Su, W.C., Ozguner, U., 1993. Discrete-time sliding-mode in stochastic systems. In: Proceedings of the American Control Conference, pp. 966–970. Duncan, T.E., Maslowski, B., Pasik-Duncan, B., 1998. Some results on the adaptive control of stochastic semilinear systems. In: Proceedings of the 37th IEEE Conference on Decision and Control, Vol. 4. Tampa, Florida, pp. 4028–4032. Edwards, C., Spurgeon, S.K., 1998. Sliding Mode Control—Theory and Applications. Taylor and Francis, London. Einstein, A., 1956. Investigations on the Theory of the Brownian Movement (edited by R. Furth). Dover Publications, Inc., New York. Elbeyli, O., Sun, J.Q., 2002. A stochastic averaging approach for feedback control design of nonlinear systems under random excitations. Journal of Vibration and Acoustics 124, 561–565. Elbeyli, O., Sun, J.Q., 2004. Covariance control of nonlinear dynamic systems via exact stationary probability density function. Journal of Vibration and Acoustics 126 (1), 71–76. Elbeyli, O., Hong, L., Sun, J.Q., 2005a. On the feedback control of stochastic systems tracking prespecified probability density functions. Transactions of the Institute of Measurement and Control 27 (5), 319–330. Elbeyli, O., Sun, J.Q., Ünal, G., 2005b. A semi-discretization method for delayed stochastic systems. Communications in Nonlinear Science and Numerical Simulation 10, 85–94. Elishakoff, I., 1983. Probabilistic Methods in the Theory of Structures. John Wiley and Sons, New York. 
Feller, W., 1968. An Introduction to Probability Theory and Its Applications— Volume I. John Wiley and Sons, New York. Feller, W., 1972. An Introduction to Probability Theory and Its Applications— Volume II. John Wiley and Sons, New York. Field, R.V., Bergman, L.A., 1998. Reliability based approach to linear covariance control design. Journal of Engineering Mechanics 124 (2), 193–199. Fischer, J., Kreuzer, E., 2001. Generalized cell mapping for randomly perturbed dynamical systems. Zeitschrift für Angewandte Mathematik und Mechanik 81 (11), 769–777.
Flashner, H., Burns, T.F., 1990. Spacecraft momentum unloading: The cell mapping approach. Journal of Guidance, Control and Dynamics 13, 89–98. Florchinger, P., 1997. Feedback stabilization of affine in the control stochastic differential systems by the control Lyapunov function method. SIAM Journal on Control and Optimazation 35 (2), 500–511. Florchinger, P., 1999. Passive system approach to feedback stabilization of nonlinear control stochastic systems. SIAM Journal on Control and Optimization 37 (6), 1848–1864. Forbes, M.G., Forbes, J.F., Guay, M., 2003a. Control design for discrete-time stochastic nonlinear processes with a nonquadratic performance objective. In: Proceedings of the 42nd IEEE Conference on Decision and Control, Vol. 4. Maui, Hawaii, pp. 4243–4248. Forbes, M.G., Forbes, J.F., Guay, M., 2003b. Regulatory control design for stochastic processes: Shaping the probability density function. In: Proceedings of the American Control Conference, Vol. 5. Denver, Colorado, pp. 3998–4003. Forbes, M.G., Guay, M., Forbes, J.F., 2004. Control design for first-order processes: Shaping the probability density of the process state. Journal of Process Control 14 (4), 399–410. Frank, T.D., Beek, P.J., 2001. Stationary solutions of linear stochastic delay differential equations: Applications to biological systems. Physical Review E 64 (2 I), 219171/1–12. Franklin, G.F., Powell, J.D., Emami-Naeini, A., 2003. Feedback Control of Dynamic Systems. Prentice Hall, Englewood Cliffs, New Jersey. Fu, Y., Tian, Z., Shi, S., 2003. State feedback stabilization for a class of stochastic time-delay nonlinear systems. IEEE Transactions on Automatic Control 48 (2), 282–286. Golub, G.H., Loan, C.F.V., 1983. Matrix Computations. The Johns Hopkins University Press, Baltimore, Maryland. Grigoriadis, K.M., Skelton, R.E., 1997. Minimum-energy covariance controllers. Automatica 33 (4), 569–578. Grigoriu, M., 1998. Simulation of stationary non-Gaussian translation processes. 
Journal of Engineering Mechanics 124 (2), 121–126. Guillouzic, S., L’Heureux, I., Longtin, A., 1999. Small delay approximation of stochastic delay differential equations. Physical Review E 59 (4), 3970–3982. Guo, L., Wang, H., 2003. Optimal output probability density function control for nonlinear ARMAX stochastic systems. In: Proceedings of the 42nd IEEE Conference on Decision and Control, Vol. 4. Maui, Hawaii, pp. 4254–4259. Hänggi, P., Marchesoni, F., Nori, F., 2005. Brownian motors. Annalen der Physik 14 (1–3), 51–70. Hotz, A., Skelton, R.E., 1987. Covariance control theory. International Journal of Control 46 (1), 13–32.
Housner, G.W., Bergman, L.A., Caughey, T.K., Chassiakos, A.G., Claus, R.O., Masri, S.F., Skelton, R.E., Soong, T.T., Spencer, B.F. Jr., Yao, J.T.P., 1997. Structural control: Past, present, and future. ASCE Journal of Engineering Mechanics 123, 897–971. Hsu, C.S., 1980. A theory of cell-to-cell mapping dynamical systems. Journal of Applied Mechanics 47, 931–939. Hsu, C.S., 1985. A discrete method of optimal control based upon the cell state space concept. Journal of Optimization Theory and Applications 46, 547–569. Hsu, C.S., 1987. Cell-to-Cell Mapping: A Method of Global Analysis for Nonlinear Systems. Springer-Verlag, Berlin. Hwang, J.H., Ma, F., 1993. On the approximate solution of nonclassically damped linear systems. Journal of Applied Mechanics 60 (3), 695–701. Ibrahim, R.A., 1985. Parametric Random Vibration. John Wiley and Sons, New York. Insperger, T., Stepan, G., 2001. Semi-discretization of delayed dynamical systems. In: Proceedings of ASME 2001 Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Pittsburgh, Pennsylvania. Insperger, T., Stepan, G., 2002. Semi-discretization method for delayed systems. International Journal for Numerical Methods in Engineering 55 (5), 503–518. Iourtchenko, D., 2000. Stochastic optimal bounded control for a system with the Boltz cost function. Journal of Vibration and Control 6, 1195–1204. Iwasaki, T., Skelton, R.E., 1994. On the observer-based structure of covariance controllers. Systems and Control Letters 22 (1), 17–25. Iwasaki, T., Skelton, R.E., Corless, M., 1998. Recursive construction algorithm for covariance control. IEEE Transactions on Automatic Control 43 (2), 268– 272. Jacobs, O.L.R., 1974. Introduction to Control Theory. Clarendon Press, Oxford. Jazwinski, A.H., 1970. Stochastic Processes and Filtering Theory. Academic Press, New York. Jumarie, G., 1996. 
Improvement of stochastic neighbouring-optimal control using nonlinear Gaussian white noise terms in the taylor expansions. Journal of the Franklin Institute 333B (5), 773–787. Junkins, J.L., Kim, Y., 1993. Introduction to Dynamics and Control of Flexible Structures. AIAA, Washington, DC. Karlin, S., Taylor, H.M., 1981. A Second Course in Stochastic Processes. Academic Press, New York. Kazakov, I.Y., 1995. Analytical construction of conditionally optimal control with respect to a local criterion in a nonlinear stochastic system. Journal of Computer and Systems Sciences International 33 (6), 85–94. Khasminskii, R.Z., 1966. A limit theorem for the solutions of differential equations with random right-hand sides. Theory of Probability and Its Applications 11 (3), 390–405.
400
Bibliography
Kihl, D.P., Sarkani, S., 1999. Mean stress effects in fatigue of welded steel joints. Probabilistic Engineering Mechanics 14, 97–104.
Klein, E.J., Ramirez, W.F., 2001. State controllability and optimal regulator control of time-delayed systems. International Journal of Control 74 (3), 281–289.
Knuth, D.E., 1998. The Art of Computer Programming. Addison-Wesley, Reading, Massachusetts.
Krstic, M., Deng, H., 1998. Closed-form optimal controllers for nonlinear uncertain systems. In: Proceedings of American Society of Mechanical Engineers, Dynamic Systems and Control Division Publication, pp. 563–567.
Kuchler, U., Platen, E., 2002. Weak discrete time approximation of stochastic differential equations with time delay. Mathematics and Computers in Simulation 59 (6), 497–507.
Kushner, H.J., Dupuis, P., 2001. Numerical Methods for Stochastic Control Problems in Continuous Time. Springer-Verlag, New York.
Lewis, F.L., Syrmos, V.L., 1995. Optimal Control. John Wiley and Sons, New York.
Lin, Y.K., 1967. Probabilistic Theory of Structural Dynamics. McGraw-Hill, New York.
Lin, Y.K., Cai, G.Q., 1995. Probabilistic Structural Dynamics—Advanced Theory and Applications. McGraw-Hill, New York.
Lu, J., Skelton, R.E., 1998. Covariance control using closed-loop modelling for structures. Earthquake Engineering and Structural Dynamics 27 (11), 1367–1383.
Meirovitch, L., 1967. Analytical Methods in Vibration. MacMillan, London.
Naess, A., Moe, V., 2000. Efficient path integration methods for nonlinear dynamic systems. Probabilistic Engineering Mechanics 15, 221–231.
Newland, D.E., 1984. Random Vibrations and Spectral Analysis. Longman, New York.
Niculescu, S.-I., Verriest, E.I., Dugard, L., Dion, J.-M., 1998. Stability of linear systems with delayed state: A guided tour. In: Proceedings of the IFAC Workshop: Linear Time Delay Systems. Grenoble, France, pp. 31–38.
Nielsen, S.R.K., 2000. Vibration Theory Volume 3: Linear Stochastic Vibration Theory. Aalborg Tekniske Universitetsforlag, Aalborg, Denmark.
Nigam, N.C., 1983. Introduction to Random Vibrations. The MIT Press, Cambridge, Massachusetts.
Park, I.W., Kim, J.S., Ma, F., 1992. On model coupling in nonclassically damped linear systems. Mechanics Research Communications 19 (5), 407.
Peres-Da-Silva, S.S., Cronin, D.L., Randolph, T.W., 1995. Computation of eigenvalues and eigenvectors of nonclassically damped systems. Computers and Structures 57 (5), 883–891.
Perotti, F., 1994. Analytical and numerical techniques for the dynamic analysis of non-classically damped linear systems. Soil Dynamics and Earthquake Engineering 13, 197–212.
Pieper, J.K., Surgenor, B.W., 1994. Discrete time sliding mode control applied to a gantry crane. In: Proceedings of the IEEE Conference on Decision and Control, Vol. 3. Lake Buena Vista, Florida, pp. 829–834.
Pinto, O.C., Goncalves, P.B., 2002. Control of structures with cubic and quadratic non-linearities with time delay consideration. Revista Brasileira de Ciencias Mecanicas/Journal of the Brazilian Society of Mechanical Sciences 24 (2), 99–104.
Rice, S.O., 1954. Mathematical analysis of random noise. In: Wax, N. (Ed.), Selected Papers on Noise and Stochastic Processes. Dover, New York, pp. 133–294.
Risken, H., 1984. The Fokker–Planck Equation—Methods of Solution and Applications. Springer-Verlag, New York.
Roberts, J.B., Spanos, P.D., 1986. Stochastic averaging: An approximate method of solving random vibration problems. International Journal of Non-Linear Mechanics 21 (2), 111–134.
Roberts, J.B., Spanos, P.D., 1990. Random Vibration and Statistical Linearization. Wiley, New York.
Saridis, G.N., Wang, F.Y., 1994. Suboptimal control of nonlinear stochastic systems. Control—Theory and Advanced Technology 10 (4 pt 1), 847–871.
Scott, I., Mulgrew, B., 1997. Nonlinear system identification and prediction using orthonormal functions. IEEE Transactions on Signal Processing 45 (7), 1842–1853.
Seo, Y.B., Choi, J.W., 2001. Stochastic eigenvalues for LTI systems with stochastic modes. In: Proceedings of the 40th SICE Annual Conference. Nagoya, Japan, pp. 142–145.
Shahruz, S.M., Langari, G., 1992. Closeness of the solutions of approximately decoupled damped linear systems to their exact solutions. Journal of Dynamic Systems, Measurement and Control 114 (3), 369–374.
Shahruz, S.M., Packard, A.K., 1993. Approximate decoupling of weakly nonclassically damped linear second-order systems under harmonic excitations. Journal of Dynamic Systems, Measurement and Control 115 (1), 214–218.
Shahruz, S.M., Packard, A.K., 1994. Large errors in approximate decoupling of nonclassically damped linear second-order systems under harmonic excitations. Dynamics and Control 4 (2), 169–183.
Shahruz, S.M., Srimatsya, P.A., 1997. Approximate solutions of non-classically damped linear systems in normalized and physical co-ordinates. Journal of Sound and Vibration 201 (2), 262–271.
Sheng, J., Elbeyli, O., Sun, J.Q., 2004. Stability and optimal feedback controls for time-delayed linear periodic systems. AIAA Journal 42 (5), 908–911.
Sheng, J., Sun, J.Q., 2005. Feedback controls and optimal gain design of delayed periodic linear systems. Journal of Vibration and Control 11 (2), 277–294.
Shinozuka, M., Deodatis, G., 1991. Simulation of stochastic processes by spectral representation. Applied Mechanics Review 44 (4), 191–204.
Silverman, B.W., 1986. Density Estimation for Statistics and Data Analysis. Chapman and Hall, London.
Sinha, A., Miller, D.W., 1995. Optimal sliding-mode control of a flexible spacecraft under stochastic disturbances. Journal of Guidance, Control, and Dynamics 18 (3), 486–492.
Skelton, R.E., Iwasaki, T., Grigoriadis, K.M., 1998. A Unified Algebraic Approach to Linear Control Design. Taylor and Francis, London, UK.
Slotine, J.J., Li, W., 1991. Applied Nonlinear Control. Prentice Hall, New York.
Smith, K.N., Watson, P., Topper, T.H., 1970. A stress-strain function for the fatigue of metals. Journal of Materials 5 (4), 767–778.
Smith, S.M., Corner, D.J., 1992. An algorithm for automated fuzzy logic controller tuning. In: Proceedings of IEEE International Conference on Fuzzy Systems, pp. 615–622.
Socha, L., Blachuta, M., 2000. Application of linearization methods with probability density criteria in control problems. In: Proceedings of American Control Conference. Chicago, Illinois, pp. 2775–2779.
Socha, L., Soong, T.T., 1991. Linearization in analysis of nonlinear stochastic systems. Applied Mechanics Reviews 44 (10), 399–422.
Soize, C., 1994. The Fokker–Planck Equation for Stochastic Dynamical Systems and its Explicit Steady State Solutions. World Scientific Publishing Company, River Edge, New Jersey.
Sólnes, J., 1997. Stochastic Processes and Random Vibrations—Theory and Practice. John Wiley and Sons, New York.
Song, F., Smith, S.M., Rizk, C.G., 1999. Fuzzy logic controller design methodology for 4D systems with optimal global performance using enhanced cell state space based best estimate directed search method. In: Proceedings of IEEE International Conference on Systems, Man and Cybernetics, Vol. 6. Tokyo, Japan.
Soong, T.T., 1973. Random Differential Equations in Science and Engineering. Academic Press, New York.
Soong, T.T., Grigoriu, M., 1993. Random Vibration of Mechanical and Structural Systems. Prentice Hall, Englewood Cliffs, New Jersey.
Spanos, P.D., 1981. Stochastic linearization in structural dynamics. Applied Mechanics Review 34 (1), 1–8.
Stengel, R., 1986. Stochastic Optimal Control. John Wiley and Sons.
Stepan, G., 1998. Delay-differential equation models for machine tool chatter. In: Moon, F.C. (Ed.), Dynamics and Chaos in Manufacturing Processes. Wiley, New York, pp. 165–192.
Su, L., Ahmadi, G., 1990. A comparison study of performances of various isolation systems. Earthquake Engineering and Structural Dynamics 19, 21–23.
Sun, J.Q., 1995. Random vibration analysis of a nonlinear system with dry friction damping by the short-time Gaussian cell mapping method. Journal of Sound and Vibration 180 (5), 785–795.
Sun, J.Q., Hsu, C.S., 1987. Cumulant-neglect closure methods for nonlinear systems under random excitations. Journal of Applied Mechanics 54, 649–655.
Sun, J.Q., Hsu, C.S., 1988a. First-passage time probability of nonlinear stochastic systems by generalized cell mapping method. Journal of Sound and Vibration 124 (2), 233–248.
Sun, J.Q., Hsu, C.S., 1988b. Statistical study of generalized cell mapping. Journal of Applied Mechanics 55 (3), 694–701.
Sun, J.Q., Hsu, C.S., 1989a. Cumulant-neglect closure methods for asymmetric non-linear systems driven by Gaussian white noise. Journal of Sound and Vibration 135 (2), 338–345.
Sun, J.Q., Hsu, C.S., 1989b. Random vibration of hinged elastic shallow arch. Journal of Sound and Vibration 132 (2), 299–315.
Sun, J.Q., Hsu, C.S., 1990. The generalized cell mapping method in nonlinear random vibration based upon short-time Gaussian approximation. Journal of Applied Mechanics 57, 1018–1025.
Sun, J.Q., Xu, Q., 1998. Response variance reduction of a nonlinear mechanical system via sliding mode control. Journal of Vibration and Acoustics 120, 801–805.
Tombuyses, B., Aldemir, T., 1997. Continuous cell-to-cell mapping. Journal of Sound and Vibration 202 (3), 395–415.
Udwadia, F.E., 1993. Further results on iterative approaches to the determination of the response of nonclassically damped systems. Journal of Applied Mechanics 60 (1), 235–239.
Udwadia, F.E., Esfandiari, R.S., 1990. Nonclassically damped dynamic systems: an iterative approach. Journal of Applied Mechanics 57 (2), 423–433.
van der Spek, J.A.W., De Hoon, C.A.L., De Kraker, A., van Campen, D.H., 1994. Application of cell mapping methods to a discontinuous dynamic system. Nonlinear Dynamics 6 (1), 87–99.
Vanmarcke, E., 1983. Random Fields: Analysis and Synthesis. The MIT Press, Cambridge, Massachusetts.
Wang, F.Y., Lever, P.J.A., 1994. A cell mapping method for general optimum trajectory planning of multiple robotic arms. Robotics and Autonomous Systems 12, 15–27.
Wang, H., 1998. Robust control of the output probability density functions for multivariable stochastic systems. In: Proceedings of the 37th IEEE Conference on Decision and Control, Vol. 2. Tampa, Florida, pp. 1305–1310.
Wang, H., 2002. Minimum entropy control of non-Gaussian dynamic stochastic systems. IEEE Transactions on Automatic Control 47 (2), 398–403.
Wang, H., 2003. Control of conditional output probability density functions for general nonlinear and non-Gaussian dynamic stochastic systems. IEE Proceedings: Control Theory and Applications 150 (1), 55–60.
Wang, H., Yue, H., 2001. Minimum entropy control of non-Gaussian dynamic stochastic systems. In: Proceedings of the 40th IEEE Conference on Decision and Control, Vol. 1. Orlando, Florida, pp. 382–387.
Wang, H., Zhang, J.H., 2001. Bounded stochastic distributions control for pseudo-ARMAX stochastic systems. IEEE Transactions on Automatic Control 46 (3), 486–490.
Wang, R., Yasuda, K., 1999. Exact stationary response solutions of nine classes of nonlinear stochastic systems under stochastic parametric and external excitations. In: Proceedings of the ASME Design Engineering Technical Conferences. Las Vegas, Nevada, pp. 1–15.
Wang, X., 2004. Random fatigue of structures with uncertain parameters and non-Gaussian stress response. Ph.D. Dissertation, University of Delaware.
Wang, X., Eisenbrey, J., Zeitz, M., Sun, J.Q., 2004. Multi-stage regression analysis of acoustical properties of polyurethane foams. Journal of Sound and Vibration 273, 1109–1117.
Wehner, M.F., Wolfer, W.G., 1983a. Numerical evaluation of path-integral solutions to Fokker–Planck equations. Physical Review A 27 (5), 2663–2670.
Wehner, M.F., Wolfer, W.G., 1983b. Numerical evaluation of path-integral solutions to Fokker–Planck equations II. Restricted stochastic processes. Physical Review A 28 (5), 3003–3011.
Westman, J.J., Hanson, F.B., 1999. Computational method for nonlinear stochastic optimal control. In: Proceedings of the American Control Conference, pp. 2798–2802.
Winterstein, S.R., 1988. Nonlinear vibration models for extremes and fatigue. Journal of Engineering Mechanics 114 (10), 1772–1790.
Wirsching, P.H., Paez, T.L., Ortiz, H., 1995. Random Vibrations—Theory and Practice. John Wiley and Sons, New York.
Wojtkiewicz, S.F., Bergman, L.A., 2001. Moment specification algorithm for control of nonlinear systems driven by Gaussian white noise. Nonlinear Dynamics 24 (1), 17–30.
Wong, E., 1971. Stochastic Processes in Information and Dynamical Systems. McGraw-Hill, New York.
Yang, B., Wu, X., 1998. Modal expansion of structural systems with time delays. AIAA Journal 36 (12), 2218–2224.
Yao, B., Tomizuka, M., 1993. Robust adaptive motion and force control of robot manipulators in unknown stiffness environments. In: Proceedings of the 32nd Conference on Decision and Control, pp. 142–147.
Yao, B., Tomizuka, M., 1995. Robust adaptive constrained motion and force control of manipulators with guaranteed transient performance. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 893–898.
Yong, J., Zhou, X.Y., 1999. Stochastic Controls, Hamiltonian Systems and HJB Equations. Springer-Verlag, New York.
Young, C.Y., 1986. Random Vibration of Structures. John Wiley and Sons, New York.
Young, G.E., Chang, R.J., 1988. Optimal control of stochastic parametrically and externally excited nonlinear control systems. Journal of Dynamic Systems, Measurement, and Control 110, 114–119.
Zhu, W.H., Leu, M.C., 1990. Planning optimal robot trajectories by cell mapping. In: Proceedings of Conference on Robotics and Automation, pp. 1730–1735.
Zhu, W.Q., Cai, G.Q., Zhang, R.C., 2003. Advances in Stochastic Structural Dynamics. CRC Press, Boca Raton, Florida.
Zhu, W.Q., Yang, Y., 1996. Exact stationary solutions of stochastically excited and dissipated integrable Hamiltonian systems. Journal of Applied Mechanics 63, 493–500.
Zhu, W.Q., Ying, Z.G., Soong, T.T., 2001. An optimal nonlinear feedback control strategy for randomly excited structural systems. Nonlinear Dynamics 24 (1), 31–51.
Zufiria, P.J., Guttalu, R.S., 1993. Adjoining cell mapping and its recursive unraveling, Part I: Description of adaptive and recursive algorithms. Nonlinear Dynamics 3, 207–225.
Subject Index
almost sure convergence, 65
asymptotical stability, 245
auto power spectral density function, 53
– see also power spectral density function
autocorrelation coefficient function, 34
autocorrelation function, 34
autocovariance function, 34
backward algorithm
– optimal control solution, 305
band-limited white noise, 58
bandwidth parameter, 54, 55
Bellman’s principle of optimality, 301
bending moment, 164
bending rigidity, 165
Bernoulli–Euler beam, 164
bijective (one-to-one) mapping, 28
bounded input-bounded output stability, 246
Box–Muller transformation, 31, 228
broadband process, 55
Brownian motion, 81
– see also Wiener process
cell mapping methods, 301
– generalized, 303
central frequency, 54
central moment functions, 34
central moments, 12
– joint, 19
Chapman–Kolmogorov–Smoluchowski equation, 94
characteristic equation
– closed-loop, 250
– open-loop, 244
characteristic function, 12
chattering, 343
Cholesky decomposition, 387
classical damping, 147
co-state equation, 282
coherence function, 54, 153
conditional expected value, 29
conditional probability, 10
– density function, 29
conditional stability, 252
convergence
– in distribution, 65
– in mean square, 65
– in probability, 65
– with probability one, 65
correlation
– coefficient, 19
– function, 19
– length, 40
– matrix, 21
cosine transforms, 52
counting process, 191
covariance, 19
covariance control, 260
– generalized, 261
– maximum entropy principle, 270
covariance matrix, 21
critically damped, 130
cross-correlation coefficient function, 36
cross-covariance function, 36
cross-power spectral density function, 53
crossing frequency, 189
cumulant functions, 13
cycle counting, 214
damped resonant frequency, 129
damping ratio, 55, 129
DC gain, 244
derivative process, 67
detailed balance, 114, 115
diffusion term, 95
Dimentberg’s system, 108
Dirlik’s formula, 214
distribution function
– scalar random variable, 11
– two-dimensional, 18
disturbance rejection, 345
double-sided power spectral density function, 52
drift term, 95
Duffing oscillator
– nonlinear feedback control, 274
– PDF based control, 262
Duhamel’s integral, 129
eigenmode, 146
eigenvalue analysis, 146
energy envelope process, 197
envelope process, 196
equivalent Itô equation, 110
ergodic
– in the autocovariance, 41
– in the correlation, 41
– in the mean, 41
– in the mean square, 41
– process, 40
– sampling, 44
expected number of local maxima per unit time, 56
expected number of upcrossings per unit time, 56
expected value, 11
– see also mean value
fatigue damage, 209
final value theorem, 392
first-passage failure, 203
first-passage time, 203
– probability, 238
Fokker–Planck–Kolmogorov equation, 95
– exact stationary solutions for nonlinear systems, 104
– path integral solution, 103
– short-time solution, 101
frequency response function, 79
frequency response matrix, 80, 145
fundamental matrix, 156
Gaussian process, 36
Green’s function, 167
half power bandwidth, 55
Hamilton–Jacobi–Bellman equation
– deterministic systems, 287
– stochastic systems, 292
Hamiltonian function, 281
Hamiltonian systems, 110
harmonic process, 37, 232
impulse response function, 78
impulse response matrix, 145
independent random variables, 19
indicator function, 44
instability, 245
integrated vector process, 77
integrator windup, 252
invariant set, 339
irregularity factor, 55
Itô differential equation, 86
Itô integral, 84
Itô’s lemma, 88
joint events, 10
Kalman–Bucy filter, 294
Kanai–Tajimi filter, 160
Kolmogorov backward equation, 122
kurtosis, 12
Lévy’s oscillation property, 82
Lagrange function, 280
Lagrange multipliers, 281
Laplace transformation, 391
Laplace transforms, 244
least squares solution, 388
level crossing, 187
linear congruential method, 227
linear quadratic Gaussian (LQG) control, 293
LU decomposition, 385
Lyapunov equation, 158
Lyapunov function, 247
Lyapunov stability, 247
– asymptotical, 247
– unstable, 247
Lyapunov stability theorem, 247
marginal probability density, 18
marginal stability, 245
Markov chains, 304
Markov process, 48
matched uncertainty, 345
mean value
– function, 34
– scalar random variable, 11
minimum norm solution, 388
minimum phase, 248
modal
– coordinates, 146
– damping ratio, 148
– force processes, 147
– impulse response function, 149, 167, 181
– matrix, 146
– transformation, 146, 155
mode functions, 165
moment equations, 89
moment functions, 34
moment specification, 261
moments, 12
– joint, 19
– of the first-passage time, 126
Moore–Penrose generalized inverse, 389
multi-degree-of-freedom systems, 143
narrowband process, 55
non-classical damping, 147
non-Gaussian stress, 215
non-minimum phase, 248
normal distribution
– scalar random variable, 13
normal random variables, 22
null set, 10
one-sided power spectral density function, 52
optimal control
– dry friction damped oscillator, 319
– impact nonlinear oscillator, 330
– linear oscillator, 312
– non-polynomial nonlinearities, 324
– one-dimensional nonlinear system, 306
– Van der Pol oscillator, 316
optimality condition, 283
Ornstein–Uhlenbeck process, 91
orthogonality, 146
overdamped, 130
Palmgren–Miner linear damage accumulation rule, 212
Pawula theorem, 95
PDF tracking control, 375
performance index, 280
PID controls, 250
Poisson process, 45
– compound, 47
– level crossing, 203
polar method, 228
poles
– closed-loop, 250
– open-loop, 244
Pontryagin’s minimum principle, 283
Pontryagin–Vitt equations, 127
– generalized, 127
positive definite, 143
power spectral density function, 51
probability, 9
probability axiom, 9
probability density function
– scalar random variable, 11
– two-dimensional, 18
probability density function of the nth order, 33
pseudo-inversion, 388
pseudo-random numbers, 227
QR decomposition, 385
radius of gyration, 54
rainflow ranges, 214
random event, 9
random sample, 9
random variable
– n-dimensional, 20
– scalar random variable, 10
– two-dimensional variable, 18
rational spectral density function, 59
Rayleigh damping, 147
realizability, 244
regression analysis of fatigue, 216
relative degree, 244
reliability against the first-passage failure, 124
resonant frequency, 129, 146
response moment control, 257
Riccati equation, 289
Riemann–Stieltjes integral, 80, 84
Routh array, 252
Routh’s stability criterion, 252
Runge–Kutta algorithm, 237
S-N curve, 209
safe domain, 124, 202
sample space, 9
Schur decomposition, 387
Schwarz inequality, 20
semi-discretization, 362
semi-positive definite, 143
Shannon’s entropy, 273
shear force, 164
Shinozuka’s method, 235
short-time Gaussian approximation, 301
sigma algebra, 9
simulated Gaussian white noise, 229
single-degree-of-freedom systems, 129
singular value decomposition, 386
skewness, 12
sliding mode control, 337
– equivalent, 345
– nominal, 341, 348
– robust, 342
– stochastic, 348
– stochastic robust, 350
space–time correlation function, 163, 182
space–time cross-covariance function, 163, 182
space–time cross-spectral density function, 163
spectral decomposition, 156, 386
spectral moments, 54
standard deviation, 12
state equation, 282
stationary Markov process, 48
stationary process
– in the strict sense, 38
– in the weak sense, 38
– vector process, 39
stochastic integral
– integrated process, 75
stochastic process, 33
stochastic vector process, 35
Stratonovich differential equation, 86
Stratonovich integral, 84
stress ranges
– overall, 210
– simple, 210
sufficient condition for optimal control, 299
target set, 281
terminal conditions, 283
time delay, 361
time domain specifications, 249
Timoshenko beam, 174
total variation, 280
transcendental equation, 146
transfer function
– closed-loop, 250
– open-loop, 243
transversality condition, 286
uncorrelated, 19
underdamped, 130
unit impulse response function, 167
value function, 287
variable structure controls, 337
– see also sliding mode control
variance, 12
variance function, 34
variance reduction ratio, 352
variational coefficient, 12
virtual variation, 280
white noise, 57
Wiener process, 81
Wiener–Khintchine relations, 51
Wong–Zakai correction, 87
Yasuda’s system, 105, 264
zeros
– open-loop, 244