RANDOM PROCESSES IN NONLINEAR CONTROL SYSTEMS

A. A. Pervozvanskii
MOSCOW, U.S.S.R.

Translated by SCRIPTA TECHNICA, INC.
Translation Editor: Ivo Herzer, COLUMBIA UNIVERSITY, NEW YORK
1965
ACADEMIC PRESS New York London
COPYRIGHT © 1965, BY ACADEMIC PRESS INC.
ALL RIGHTS RESERVED. NO PART OF THIS BOOK MAY BE REPRODUCED IN ANY FORM, BY PHOTOSTAT, MICROFILM, OR ANY OTHER MEANS, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS.
ACADEMIC PRESS INC., 111 Fifth Avenue, New York, New York 10003. United Kingdom Edition published by ACADEMIC PRESS INC. (LONDON) LTD., Berkeley Square House, London W.1
LIBRARY OF CONGRESS CATALOG CARD NUMBER: 65-17383
PRINTED IN THE UNITED STATES OF AMERICA
RANDOM PROCESSES IN NONLINEAR CONTROL SYSTEMS THIS BOOK WAS ORIGINALLY PUBLISHED AS: SLUCHAYNYYE PROTSESSY V NELINEYNYKH AVTOMATICHESKIKH SISTEMAKH BY STATE PRESS FOR PHYSICAL AND MATHEMATICAL LITERATURE, MOSCOW, 1962
EDITOR'S FOREWORD

As the theory of control processes has become of greater interest and importance, its domain has widened considerably. On the one hand, we see an awareness of deterministic processes subject to realistic constraints; on the other hand, we see a recognition of the necessity for a determined study of stochastic and adaptive processes. The many years during which deterministic processes were the sole objects of research have bequeathed us a most important library of special processes and particular methods for their treatment. Furthermore, over time, a number of quite important approximation techniques have been developed. No such catalog of problems and methods exists for stochastic processes, and certainly not for nonlinear stochastic processes. The purpose of the present book is to supply a set of methods which can be effectively used, together with detailed applications. It should, for this reason, prove extremely useful to those engaged in research in the general theory of stochastic processes, as well as to those specifically concerned with control. We have taken the liberty of changing the title slightly, replacing the older adjective "automatic" by "control," and adding the term "nonlinear" to indicate the principal contribution of the book.
RICHARD BELLMAN
Santa Monica, California February 1965
PREFACE

The designer of control systems invariably has to deal with nonlinear phenomena. Indeed, only over a limited range can linear relations describe the real elements of such a system. Backlash and damping interfere with the linear transmission of small signals, whereas mechanical and energy limitations often prevent the use of high-power signals. On the other hand, it has been shown in recent years that the dynamic properties of control systems can be considerably improved by the introduction of nonlinear techniques. Similarly, the increased utilization of self-adjusting and, in particular, of extremal systems points up the significance of nonlinear relations. Furthermore, statistical methods are beginning to be used intensively in computations for complicated modern control systems. Today, the well-read engineer specializing in control systems knows very well that statistical methods, especially methods in the theory of random functions, make it possible to study and construct systems which, in the first place, successfully combat interference and which, in the second place, work reasonably well not only for several common fixed signals but for a whole spectrum of possible factors that may arise under real conditions. Hence, the combination of the two subjects indicated above, the nonlinearity in dynamic idealizations of real systems and the statistical nature of the input signals, is of vital interest in the theory of control systems. The monograph by V. S. Pugachev [65], which is the most comprehensive work on the theory of random functions and its application to problems in control systems, deals at great length with nonlinear problems. However, the broad range of questions covered in this work precludes the possibility of developing practical approximation techniques and of giving sufficient attention to the physical aspects of nonlinear phenomena. The elegant exposition of the approximation techniques of statistical analysis given by Ye. P. Popov and I. P.
Pal'tov [64] fills this gap only to a limited extent. At the same time, a large number of articles have been published dealing with important practical topics related to random processes in nonlinear control systems. This book attempts to give a systematic representation of these publications and to submit new material, previously unpublished. For practical reasons, and because of the personal preference of the author, this book pays special attention to simple techniques of approximation. However, to a large extent it is also concerned with exact methods, because their application to many special problems allows a more precise understanding of nonlinear phenomena; from a computational point of view, an exact method is also often the simpler one. This is illustrated in great detail in the development of several problems concerning the statistical theory of extremal systems, which may be of special interest to the reader, both as an independent topic and as an example of the application of the various techniques.

The book consists of an introduction, four chapters, and appendices. The introduction formulates the basic problem of statistical theory in control systems; it introduces and discusses several general methods of calculation and, finally, it reviews elementary propositions in the theory of random functions and the characteristics of transformations which are used in dynamic idealizations of real control systems. Chapter 1 gives methods of analysis and synthesis of nonlinear transformations without feedback. In developing the problems of analysis, particular attention is paid to the qualitative effects that take place when random signals pass through the nonlinear devices most frequently found in practice, as well as to the formal research techniques. Essentially, this chapter describes the methods of obtaining exact solutions. Although this involves a rather cumbersome exposition, it shows nonetheless that, for a number of typical nonlinearities, the necessary computations have already been carried out. Thus, an engineer not interested in the technique itself can use the prepared formulas that are given in the appendices.
The problem of synthesis by the criterion of the mean-square deviation is examined in a quite general form. In applying this criterion, several means of constructing optimal nonlinear filters are studied; the problem of statistical linearization of noninertial nonlinearities is also investigated. Chapter 2 contains a short general survey of statistical methods applicable to nonlinear systems with feedback and an examination of the stationary states of such systems. The application of the
concept of statistical linearization to the analysis of closed systems in the presence of wide-band, normally distributed random inputs is described in detail; techniques are developed that make for a rigorous method of computing the distortion, in the form of a correlation function, of the signal that passes through a nonlinear element. These methods also give approximate solutions for special cases when the spectral density of the input signal has a narrow bandwidth or when its distribution differs somewhat from the normal. Furthermore, this chapter deals with the problem of synthesis of optimal linear correction chains with some given nonlinearity. Special attention is paid to the case when the nonlinearity imposes a limitation on the magnitude of the output signal. The conclusion gives an exact method of analysis for several nonlinear problems using the theory of Markov processes. In Chapter 3 we develop techniques for studying nonstationary operation in nonlinear closed systems, mostly applicable to problems in which the nonstationary aspect has a periodic character. These problems have great significance both in the analysis of the effect of an impulse on a system with harmonic and stationary random signals and in connection with the practical problem of guaranteeing stability with respect to random interference. The solution makes extensive use of the concepts of the frequency characteristics of input signals; this introduces several approximative devices, which are based on the obvious combination of the ideas of statistical and harmonic linearization. The last section of this chapter presents an exact method for studying periodic operation in relay systems with small random disturbances. The solution is based on the alignment (fitting) technique commonly used in the theory of relay systems. Chapter 4 is devoted to an examination of extremal systems.
In view of the relative novelty of this subject, the exposition begins with a survey of the basic methods of constructing extremal systems and their classification, and proceeds in the order dictated by this classification, which does not, however, correspond to the methods used in analysis. It covers systems with a time separation between testing and operation (of the on-off and of the proportional type), systems with a frequency separation of these operations, and, finally, oscillatory systems in which testing and operation are completely coincident. The main objective of this study is to obtain an estimate of the qualitative operation of extremal systems and some idea of how to choose their parameters.
The development of most of the methods described in the text is supplemented by examples. Familiarity with these illustrative examples is necessary for the reader who wants to acquire greater skill in computational methods and, particularly, a better understanding of the qualitative nature of nonlinear phenomena. The appendices contain the basic information necessary for practical calculations; they also discuss additional mathematical theories, such as an introduction to the theory of Markov chains and processes. The book is written for the design engineer as well as for the research scientist whose work is concerned with control systems. It is assumed that, in addition to the usual fundamental mathematics taught at university level, the reader is acquainted with the elements of the theory of probability and of random functions, for example, with the material covered by J. Kh. Laning and R. G. Battin in Chapters 2-5 and 7 of ref. [51] or by V. V. Solodovnikov in Chapters 2-7 of ref. [80]. A reader beginning the study of the theory of random functions with the monograph by Pugachev [65] should be familiar with its sections on linear problems, which are summarized in the preface to that monograph. In spite of its extreme brevity, this summary covers all the necessary elements of the general theory of random variables. The reader must also understand the general theory of control systems and, in particular, the approximation techniques for solving nonlinear problems, at least on the level of the course of Ye. P. Popov [62]. Altogether, the demands on the preliminary preparation of the reader are somewhat advanced, even though this is primarily a technical and not a mathematical book. Borrowing an appropriate expression from one of his teachers, A. I.
Lurie, the author would remark that the book is written "by an engineer for engineers." Therefore, wherever a strict mathematical explanation was found to be either too difficult or too cumbersome, it has been replaced by an explanation based on simple physical concepts, which are founded on an analogy or on a practical experiment. In particular, the discussion is simplified in the Introduction and in Sections 1.5, 2.2, 2.3, 3.1, 3.2, and 3.3, where approximation methods for closed systems are described. Most of these sections (with the exception of that part of the text that is in small print) are
independent of the rest of the book. An engineer who wishes to master the practical side of the subject as quickly as possible may wish to concentrate on these sections. The author is deeply grateful to Ye. P. Popov, on whose initiative this work on nonlinear problems was begun and who encouraged the publication of this book, and also to A. A. Fel'dbaum, who gave valuable advice on its structure. The author expresses gratitude to his colleagues in the department headed by Professor A. I. Lurie at the Leningrad Polytechnic Institute, whose kind attention and help were invaluable in allowing the book to go to press. The author would also like to sincerely thank O. K. Sobolev for his diligent editing of the manuscript.
A. A. PERVOZVANSKII
CONTENTS

EDITOR'S FOREWORD
PREFACE
INTRODUCTION

Chapter 1 NONLINEAR TRANSFORMATIONS WITHOUT FEEDBACK
1.1 Nonlinear Lagless Transformations
1.2 Nonlinear Transformations with Lag
1.3 The Problem of Synthesis. Optimal Conditions for Various Classes of Transformations; Application of Methods of Synthesis
1.4 Nonlinear Filters
1.5 Statistical Linearization

Chapter 2 NONLINEAR TRANSFORMATIONS WITH FEEDBACK. STATIONARY STATES
2.1 A Short Description of the Basic Methods of Investigation
2.2 The Application of Statistical Linearization to the Analysis of Nonlinear Transformations with Normally Distributed Stationary Signals
2.3 Computation of Frequency Distortions Introduced by Nonlinear Elements
2.4 Restrictions Imposed by the Requirement That the Input Signal of the Nonlinear System Be Normal
2.5 The Synthesis of Linear Compensation Networks in Closed-Loop Systems with Nonlinearities
2.6 Application of the Theory of Markov Processes in the Study of Some Nonlinear Systems

Chapter 3 NONLINEAR TRANSFORMATIONS WITH FEEDBACK. NONSTATIONARY STATES
3.1 The Transformation of a Slowly Changing Signal in the Presence of a High-Frequency Random Interference
3.2 Passage of a Slowly Varying Random Signal through a System in a State with Periodic Oscillations
3.3 Transformation of the Sum of Wide-Band, Normal, Random Signals and Harmonic Signals in a Nonlinear System with Feedback (Method of Statistical Linearization)
3.4 Random Disturbances of Periodic States in Relay Systems (Exact Solution by the Method of Alignment)

Chapter 4 EXTREMAL SYSTEMS
4.1 Basic Principles of the Operation of Extremal Systems
4.2 Extremal Systems with a Time Separation between Testing and Operation; Systems with Proportional Action
4.3 Discrete Extremal Systems with Constant Steps
4.4 Extremal Systems in Which Testing and Operation Are Separated by a Frequency Band
4.5 An Automatic Extremal System with Simultaneous Testing and Operation

Appendix I FUNCTIONS m_y(m_x, σ_x), h_1(m_x, σ_x), σ_1(m_x, σ_x), AND σ_y(m_x, σ_x) FOR SEVERAL TYPICAL NONLINEARITIES
1. The Ideal Relay Y = l sgn X
2. A Relay with a Dead Zone
3. An Element with a Bounded Zone of Linearity
4. An Element with a Dead Zone
5. An Element with a Bounded Zone of Linearity and a Dead Zone
6. An Element with a Characteristic of the Form Y = NX² sgn X
7. An Element with the Characteristic Y = NX³
8. A Relay with a Hysteresis Loop
9. An Element with a Bounded Zone of Linearity with Nonsymmetrical Bounds

Appendix II REPRESENTATION OF A LAGLESS NONLINEAR TRANSFORMATION IN THE FORM OF AN INTEGRAL TRANSFORMATION IN A COMPLEX REGION. THE THEOREM OF R. PRICE

Appendix III COMPUTATION OF THE INTEGRALS I_n

Appendix IV THE COEFFICIENTS OF STATISTICAL LINEARIZATION h_1(a, σ) AND h_1'(a, σ) FOR TYPICAL NONLINEARITIES
1. The Ideal Relay
2. A Relay with a Dead Zone
3. An Element with a Bounded Zone of Linearity
4. An Element with a Bounded Zone of Linearity and Dead Zone
5. An Element with the Characteristic f(X) = NX² sgn X
6. Elements with the Characteristic f(X) = NX^(2n+1), where n = 1, 2, 3, ...

Appendix V ELEMENTARY STATEMENTS ON THE THEORY OF MARKOV PROCESSES

RELATED LITERATURE
BIBLIOGRAPHY
SUBJECT INDEX
INTRODUCTION
The purpose of any automatic system is the generation of suitable responses as a function of variations in the external conditions. The external effects can vary widely in character. For example, the operating condition of a servo system is determined by the motion of the control shaft, by variations in the load moment, in the voltage of the power supply, in the temperature and humidity of the environment, or in the effect of electromagnetic interference. The reaction of an automatic system, i.e., the physical process that takes place inside the system, can be just as diversified. However, in dynamic studies most of the external factors are neglected, and only those that are significant are taken into account. We shall designate these basic external effects by the input signals Z(Z_1, Z_2, ..., Z_m). Similarly, we confine ourselves to a study only of those reactions of the system that, to a significant degree, characterize its correct performance. These reactions are called the output signals X(X_1, X_2, ..., X_n) of the given system. As a first approximation for a servo system, one may assume that the input signals are given by the position of the control shaft and the variations of the moment of the load, and the output signals by the position of the output shaft and the error signal. Then a dynamic analysis of an arbitrary automatic system is reduced to the study of the transformation of the input signals Z into the output signals X. The properties of this transformation, generally speaking, determine the whole set of physical characteristics of the system. However, every real system can be represented by an idealized dynamic scheme that approximately describes (by means of a rule or an algorithm) the transformation of the signals. The study of these idealized schemes and the development of schemes with optimal characteristics is the basic objective of the dynamic theory of automatic systems, which is also known as technical cybernetics.
The characteristics of an idealized scheme should closely approximate those of a real system; in developing the optimal scheme, it is necessary to choose characteristics that can be physically realized. If, in the operation of the real system, the value of the output signals at any instant depends only on the value of the input signals at the same instant, then the system can be made to correspond with the dynamic scheme of a lagless transformation (i.e., a transformation without memory). If the reaction of a real system to the combined action of several signals is equal to the sum of the reactions to each separate signal, and if the reaction of the system to a one-dimensional input signal is a one-dimensional output signal (i.e., if the superposition principle is valid in the system), then it can be made to correspond with the scheme of a linear transformation. The statement that a real system is linear or lagless always turns out to be valid only as an approximation which assumes certain operational conditions and which imposes certain limitations on the input signals. A more or less complete specification of the response time of a system is considered, at present, absolutely necessary for the design of almost any automatic system. At the same time, one usually finds that in studying the behavior of systems with random external influences the investigations are always limited to linear dynamic schemes. Further, it is assumed, and often correctly, that the inaccuracy of the dynamic idealization is compensated by the simplicity of the results of the analysis. However, in many important practical cases, the refusal to take nonlinearity into account leads to qualitatively false concepts about the operation of the real system, if such a system is already constructed, or to an underestimate of the possibilities for improving a system in the process of construction.
It should be stressed that, in the theory and practice of the construction of automatic systems, these "nonlinear effects" often do not require the study of a nonlinear transformation of any great complexity. It is completely permissible to limit oneself to rather simple schemes which are obtained by combining well-known linear and lagless nonlinear transformations. The form of these dynamic schemes frequently matches the structure of the real system and the properties of the elements of which it is composed. Beginning with the simplest schemes, we shall now give a short
summary of the basic characteristics of the transformations which are used in dynamic idealizations of automatic systems. For the sake of brevity, we shall limit our discussion to transformations with a single input and a single output.

1. Linear Lagless Transformations. The basic equation can be written in the form*

X = hZ,   (I.1)
where h is a constant coefficient or a given function of time.

2. Nonlinear Lagless Transformations. The relation between the input and output signals is given by the equation
X = f(Z),   (I.2)

where f(Z) is a given function of Z and, possibly, of time. Frequently, nonlinear transformations can be given in an implicit form,

X = f(Z, X),   (I.3)
which corresponds to a scheme having feedback with respect to the output signal.

3. Linear Transformation with Lag. In this case, the value of the output signal X(t) at the given time t depends not only on the input signal Z(t) at the same time, but also on its values at different times. This transformation is written in the explicit form

X(t) = ∫_{-∞}^{∞} h(t, τ) Z(τ) dτ,   (I.4)

where h(t, τ) is the impulse function (Green's function), which has the physical meaning of the response (of the output signal) of the system at time t to a unit impulse applied at the input at time τ. For physically realizable systems we have, for t < τ,

h(t, τ) = 0.   (I.5)
* Here, and in the following part of the Introduction, a linear transformation signifies only a homogeneous transformation.
Thus, the linear functional transformation (I.4) is completely determined by one function of two variables, h(t, τ). For the special case of a linear transformation with constant coefficients, we have the simpler expression

X(t) = ∫_{-∞}^{∞} h(t - τ) Z(τ) dτ,   (I.6)

or, taking (I.5) into account, we obtain

X(t) = ∫_{-∞}^{t} h(t - τ) Z(τ) dτ = ∫_0^{∞} h(τ) Z(t - τ) dτ.   (I.7)
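The convolution form of the linear transformation with lag lends itself directly to numerical evaluation once time is discretized. The sketch below is a minimal Python illustration; the step size, the first-order impulse function h(τ) = a·e^{-aτ}, and the unit-step input are assumptions made for the example, not data from the text.

```python
# Riemann-sum approximation of X(t) = integral over tau of h(tau) Z(t - tau)
import math

def convolve_causal(h, z, dt):
    """Approximate the convolution integral by a left Riemann sum.
    h[k] = h(k*dt) and z[k] = Z(k*dt); physical realizability means
    only past values of the input contribute."""
    x = []
    for i in range(len(z)):
        acc = 0.0
        for k in range(i + 1):            # tau = k*dt runs over the past only
            acc += h[k] * z[i - k] * dt
        x.append(acc)
    return x

# Illustrative first-order lag with unit static gain: h(tau) = a*exp(-a*tau).
a, dt, n = 2.0, 0.01, 1000
h = [a * math.exp(-a * k * dt) for k in range(n)]
z = [1.0] * n                             # unit-step input signal Z(t)
x = convolve_causal(h, z, dt)
# x rises monotonically toward the static gain of the filter, i.e. toward 1.
```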
The linear transformation can also be given in the implicit form

X(t) = ∫_{-∞}^{∞} h₀(t, τ) [Z(τ) - ∫_{-∞}^{∞} k(τ, s) X(s) ds] dτ,   (I.8)

which corresponds to the description of a system with feedback (that is, a closed system). Here, h₀(t, τ) is the impulse function of the forward loop and k(t, τ) is the feedback impulse function. The implicit equation (I.8) is essentially an equation for the output signals in the form of an integral equation with a kernel which depends on the values of the input signals at various instants. The equation of the linear transformation in terms of differential equations is more frequently encountered and usually follows from the analysis of the transfer characteristics of the separate elements of the automatic system:

Q(d/dt) X = R(d/dt) Z,   (I.9)

where Q(d/dt) and R(d/dt) are polynomials in powers of the differential operator, either with constant coefficients or with coefficients which are functions of time. We must specify the initial conditions for Eq. (I.9). Henceforth, we shall assume that they are all zero. Any equation can be reduced to this case by introducing nonzero initial conditions into the equivalent input signals. In practice, the equations for the transformations in the forms (I.4) and (I.9) are not equivalent, because the determination of an impulse function which corresponds to an equation with coefficients variable in time is a rather difficult problem.
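For constant coefficients the two descriptions, the differential equation and the impulse function, can be checked against one another numerically. The sketch below is illustrative only: the concrete system dX/dt + aX = aZ, its impulse function h(τ) = a·e^{-aτ}, and the sinusoidal input are assumptions made for the example.

```python
# Compare the differential-equation form with the convolution form for a
# first-order system with constant coefficients (zero initial conditions).
import math

a, dt, n = 2.0, 0.001, 5000
z = [math.sin(0.5 * k * dt) for k in range(n)]  # an arbitrary smooth input

# Differential-equation form: integrate dX/dt = a*(Z - X) by Euler steps.
x_ode, x = [], 0.0
for k in range(n):
    x_ode.append(x)
    x += dt * a * (z[k] - x)

# Impulse-function form: X(t) = integral from 0 to t of h(t - tau) Z(tau) dtau,
# with h(tau) = a*exp(-a*tau).
def x_conv(i):
    return sum(a * math.exp(-a * (i - k) * dt) * z[k] * dt for k in range(i))

# Both forms give the same output up to discretization error.
err = abs(x_ode[-1] - x_conv(n - 1))
```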
However, for equations with constant coefficients, the implicit form of Eq. (I.9) can be made explicit quite easily and, moreover, it is often convenient to use it in this form. We shall now apply the Laplace transformation to Eq. (I.9). We obtain

Q(p) X = R(p) Z   or   X = F(p) Z,   (I.10)

where X and Z are the Laplace transforms of the functions of time X(t) and Z(t), and where F(p) = R(p)/Q(p) is the transfer function of the linear transformation. On the other hand, applying the Laplace transformation to (I.7), we obtain

X = Z ∫_0^{∞} e^{-pτ} h(τ) dτ,   (I.11)

and, consequently,

F(p) = ∫_0^{∞} h(τ) e^{-pτ} dτ,   (I.12)

that is, the transfer function is the Laplace transform of the impulse function. Using transfer functions, it is easy to change the implicit equation for the linear transformation into an explicit one (the change is from a closed-loop to an equivalent open-loop circuit). Applying the Laplace transformation to, e.g., Eq. (I.8), it is not difficult to find an explicit expression for the transfer function of the closed system,

F(p) = F₀(p) / [1 + F₀(p) K(p)],   (I.13)

where F(p), F₀(p), and K(p) are the transfer functions which correspond to the closed system (with the explicit transformation), to the forward loop, and to the feedback loop, respectively.

4. Nonlinear Transformation with Lag. The fundamental form for this class of equations is the implicit differential equation

f(d/dt; X, Z) = 0.   (I.14)
The introduction of the differential operator as an argument of this function indicates that there is a functional dependence, not only
between the signals X and Z, but also between their derivatives of arbitrarily high order. Because there is no general method for solving nonlinear differential equations of arbitrary form, it is impossible to write an explicit expression for the nonlinear transformation. It has already been pointed out that in the theory of automatic systems there are several very important subclasses of nonlinear transformations with lag.

a. NONLINEAR TRANSFORMATIONS WHICH CAN BE REDUCED TO LINEAR TRANSFORMATIONS (NONLINEAR TRANSFORMATIONS WITHOUT FEEDBACK). Such transformations are the result of the sequential application of, first, a nonlinear transformation without lag and, second, a linear transformation:

Y = f(Z),   X = ∫_{-∞}^{∞} h(t, τ) Y(τ) dτ,   (I.15)

or

X = ∫_{-∞}^{∞} h(t, τ) f[Z(τ)] dτ.   (I.16)

One can associate with this subclass a more complicated transformation of the form

X = Σ_i ∫_{-∞}^{∞} h_i(t, τ) f_i[Z(τ)] dτ,   (I.17)

which corresponds (Fig. 1) to the parallel application of several transformations of the form (I.16), and also a transformation of the form

X = ∫_{-∞}^{∞} h₂(t, τ) f[∫_{-∞}^{∞} h₁(τ, s) Z(s) ds] dτ.   (I.18)

FIGURE 1
This corresponds to the sequential application of a linear transformation, a nonlinear transformation without lag, and, again, a linear transformation (Fig. 2).
FIGURE 2
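The chain of Fig. 2 is easy to exercise numerically. In the Python sketch below, both linear links are first-order lags and the lagless nonlinearity is an element with a bounded zone of linearity (saturation); all the concrete characteristics and parameter values are illustrative assumptions, not data from the text.

```python
# Linear transformation -> lagless nonlinearity -> linear transformation:
# a sketch of the reducible-to-linear chain corresponding to Fig. 2.
import math

def first_order_lag(z, a, dt):
    """Unit-static-gain filter with impulse function h(tau) = a*exp(-a*tau)."""
    x, out = 0.0, []
    for zk in z:
        x += dt * a * (zk - x)
        out.append(x)
    return out

def saturate(y, limit=1.0):
    """Lagless nonlinearity: an element with a bounded zone of linearity."""
    return [max(-limit, min(limit, v)) for v in y]

dt, n = 0.01, 2000
z = [3.0 * math.sin(k * dt) for k in range(n)]  # drives the element into saturation
x = first_order_lag(saturate(first_order_lag(z, 5.0, dt)), 5.0, dt)
# Because the final filter has unit static gain and a nonnegative impulse
# function, the output magnitude never exceeds the saturation level.
```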
Because of their structure and comparative simplicity, the transformations described above, which are similar to linear transformations, are called reducible-to-linear transformations [65]. The subclass of nonlinear transformations which are reducible to linear transformations may be generalized by considering more complicated integral operators than those given in (I.17) and (I.18), for example, by substituting an operator of the following type [117, 118]:

X = ∫_{-∞}^{∞} F{Z(τ), t} dτ,   (I.19)

where F is an arbitrary nonlinear function. However, in this case we lose the simplicity of the relationship between the operator and its corresponding differential equation and, as a result, we also complicate the relation of the operator to the physically realizable system by means of which the transformation is performed. Of more practical importance is the following subclass of nonlinear transformations with lag.

b. NONLINEAR TRANSFORMATIONS WITH FEEDBACK. Transformations of this type are given by the implicit equation
X = ∫_{-∞}^{∞} h₀(t, τ) f[Z(τ) - ∫_{-∞}^{∞} k(τ, s) X(s) ds] dτ,   (I.20)

which corresponds to the block diagram shown in Fig. 3.

FIGURE 3
Here h₀(t, τ) is the impulse function of the linear part of the forward loop of the transformation, k(t, τ) is the impulse function of the feedback loop, and f is a function which determines a lagless nonlinear transformation of the forward loop. The study of the simplest types of nonlinear transformations indicated here will be the main subject of this book.

Let us recall the forms in which the signals undergoing transformation can be fed into a system. The signals can be: (a) random functions of time; (b) nonrandom functions of time. Physically real nonrandom signals, that is, signals which have exact values at each moment of time, are extraordinarily diversified. However, the study of the transformational characteristics of a system usually involves only a few types of functions; the most important among these are harmonic functions (or sums of harmonic functions) and functions whose graphs have jump discontinuities. In using these, one takes advantage of the fact that in many cases real signals transformed by automatic systems can be approximated by certain typical functions and, further, that this provides a simpler solution of the basic problems in the theory of transformations. It is convenient to choose harmonic functions because other functions, belonging to a much broader class, can be described via the Fourier transform in the form of a finite or infinite sum of harmonic functions. A process Z(t) is expressed in terms of its Fourier transform (the complex spectrum)* Z(jω) in the following manner:

Z(t) = (1/2π) ∫_{-∞}^{∞} Z(jω) e^{jωt} dω,   (I.21)
that is, Z(t) is represented in the form of an integral sum of harmonic functions which have amplitudes |Z(jω) dω| and phases equal to arg Z(jω). For a given process Z(t), the complex spectrum is

Z(jω) = ∫_{-∞}^{∞} Z(t) e^{-jωt} dt.   (I.22)

* We are using the same notation for the transformed function as for the original function.
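The transform pair above can be illustrated numerically; for a sampled signal, the discrete Fourier transform plays the role of the complex spectrum. The harmonic test signal and the 64-point grid below are assumptions made for the example; a single harmonic concentrates the spectrum on one pair of lines.

```python
# Discrete analog of the spectrum Z(jw): a direct DFT of a sampled harmonic.
import cmath, math

n = 64
z = [math.cos(2 * math.pi * 5 * k / n) for k in range(n)]  # harmonic, index 5

def dft(z):
    """Evaluate the spectrum at the discrete frequencies w_m = 2*pi*m/n."""
    n = len(z)
    return [sum(z[k] * cmath.exp(-2j * math.pi * m * k / n) for k in range(n))
            for m in range(n)]

spectrum = [abs(c) for c in dft(z)]
# Lines appear only at indices 5 and n - 5 (the positive- and negative-
# frequency components of the cosine); all other values vanish.
```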
If Z(t) is a periodic process, then its spectrum will be a line spectrum; that is, it can be represented by a denumerable number of coefficients in its Fourier series. The spectral, or frequency, representation plays a very important role in the quantitative and, what is more important, the qualitative analysis of the transformed signals. Compared with signals which are definite functions of time, signals which are random functions of time present an altogether different set of problems in the theoretical and practical study of dynamic automatic systems. By definition, a random function is a finite or infinite set of given functions whose variations are governed by probability relations. Thus, to study the rules for transforming a random signal is equivalent to studying the general probability characteristics of the transformation for a whole collection of definite signals which are given by a single mathematical description. Moreover, in many real cases it is not generally known how each of the signals enters this collection, whereas the general probability characteristics can provide an effective means for theoretical and experimental study. When the signal to be transformed is given in the form of a random function, exact values for the signal itself or for the results of its transformation at any moment of time cannot be given. Instead, the problem of specifying the random signal reduces to the problem of specifying a system of functions which determine the probability that the value of the random signal lies within given limits at one moment of time and for a sequence of moments of time.
A complete description of a random function of time is given by the following infinite system of probability characteristics:

(1) The function W_1(z_1, t_1) is equal to the probability that Z(t_1) < z_1 for t = t_1;
(2) The function W_2(z_1, t_1; z_2, t_2) is equal to the probability that Z(t_1) < z_1 for t = t_1 and Z(t_2) < z_2 for t = t_2; and so on.

The function W_n(z_1, t_1; z_2, t_2; ...; z_n, t_n) gives the probability that at the moments of time t = t_1, t_2, ..., t_n all of the following conditions will be satisfied: Z(t_1) < z_1, Z(t_2) < z_2, ..., Z(t_n) < z_n. This function is called the n-dimensional distribution of the random process Z(t). If the functions W_n (n = 1, 2, ...) are differentiable with respect
10
Introduction
to each z , , then in addition to specifying the process by means of these functions it is convenient to make use of the system of ndimensional probability densities w o , ( z l , t,; z , , t,; ...; z , , t,), where w, =
anw
az, ax, ... az,
(1.23)
'
T h e quantity w, dz, dz, ... dz, is the probability of the joint event that for t = t , , z, < Z(t,) < z1 dz, , for t = 1, , 2, < Z(t,) < z, dz, ,
+ +
for
t
=
t, ,
z,
< Z(t,) < z, + dz,
Of particular significance in the theory of automatic systems is a special class of random functions: the class of stationary random functions (processes). A random process Z(t) is called stationary (in the narrow sense of the word) if, for arbitrary t_1, t_2, ..., t_n and τ, we have the identity

W_n(z_1, t_1; z_2, t_2; ...; z_n, t_n) = W_n(z_1, t_1 + τ; z_2, t_2 + τ; ...; z_n, t_n + τ),
that is, if a translation along the time axis by an arbitrary length τ for every moment of observation t_k (k = 1, 2, ..., n) does not change the character of the distribution of the process. The distribution of probabilities, generally speaking, gives a more complete characterization of the random function. However, in many cases one can find the properties of transformations of random signals from their moment characteristics. The moment characteristics of nth order (or simply the moments of nth order) are defined as the mathematical expectations of the products of the values of the random function at the moments of time t_1, t_2, ..., t_n:

m_n(t_1, t_2, ..., t_n) = M{Z(t_1) Z(t_2) ... Z(t_n)}
  = ∫_{-∞}^{∞} ... ∫_{-∞}^{∞} z_1 z_2 ... z_n w_n(z_1, t_1; z_2, t_2; ...; z_n, t_n) dz_1 dz_2 ... dz_n,   (1.24)

that is, they are defined as values of products averaged over all possible realizations of the random function, taking into account the probability of each realization.
Of special interest are the moments of first and second order, which are denoted by m_z(t) and B_z(t_1, t_2). The moment of first order, or the mathematical expectation of the random function Z(t), is given by the equation

m_z(t) = M{Z(t)} = ∫_{-∞}^{∞} z_1 w_1(z_1, t) dz_1,   (1.25)

and the moment of second order is given by

B_z(t_1, t_2) = M{Z(t_1) Z(t_2)} = ∫_{-∞}^{∞} ∫_{-∞}^{∞} z_1 z_2 w_2(z_1, t_1; z_2, t_2) dz_1 dz_2.   (1.26)

We introduce the concept of the unbiased (centered) function

Z⁰(t) = Z(t) - m_z(t).   (1.27)

The second moment of the unbiased random function is called the correlation function of the process:

R_z(t_1, t_2) = M{[Z(t_1) - m_z(t_1)][Z(t_2) - m_z(t_2)]}.   (1.28)

It is obvious that

R_z(t_1, t_2) = B_z(t_1, t_2) - m_z(t_1) m_z(t_2).   (1.29)

For stationary processes we have

m_z(t) = m_z = const   (1.30)

and

R_z(t_1, t_2) = R_z(t_1 - t_2) = R_z(τ),   (1.31)

where τ = t_1 - t_2. The property (1.31), which states that the correlation function of the random process Z(t) depends only on the difference of the arguments t_1 and t_2, is, generally speaking, satisfied by a broader class of random processes than the one described above. This class is often called the class of random functions which are stationary in the wide sense, or stationary in the sense of Khintchin.
If the condition (1.31) is satisfied, and if certain weak restrictions [65] are fulfilled, then we have the basic ergodic relations of the theory of stationary processes,

m_z = lim_{T→∞} (1/2T) ∫_{-T}^{T} z(t) dt   (1.32)

and

R_z(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{T} [z(t) - m_z][z(t + τ) - m_z] dt,   (1.33)

that is, in order to obtain the mathematical expectation and the correlation function, one can pass from an average over the realizations of the random function to an average with respect to time for one of the realizations. An important property of stationary random processes is that they can be described by means of spectral representations. Let us examine the random function Z_T(t) defined by the conditions

Z_T(t) = Z(t),  |t| ≤ T;   Z_T(t) = 0,  |t| > T.   (1.34)
We find the complex spectrum which corresponds to it,

Z_T(jω) = ∫_{-∞}^{∞} z_T(t) e^{-jωt} dt = ∫_{-T}^{T} z_T(t) e^{-jωt} dt,   (1.35)

and which, likewise, represents a random function. The limit

S_z(ω) = lim_{T→∞} (1/2T) M{|Z_T(jω)|²}

is called the spectral density of the process Z(t). It is not hard to show that the spectral density of a stationary process is related to its moment of second order by the equation

S_z(ω) = ∫_{-∞}^{∞} e^{-jωτ} B_z(τ) dτ,   (1.36)
that is, the spectral density is the Fourier transform of the moment of second order, and for an unbiased random function it is the Fourier transform of the correlation function.
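Relation (1.36) can be verified by direct quadrature for the exponential correlation function B_z(τ) = σ² e^{-β|τ|}, whose transform is 2σ²β/(β² + ω²); the parameter values below are illustrative assumptions:

```python
import numpy as np

# Quadrature check of (1.36) for B_z(tau) = sigma^2 * exp(-beta*|tau|):
# the exact transform is S_z(w) = 2*sigma^2*beta / (beta^2 + w^2).
sigma, beta = 1.3, 0.7
tau = np.linspace(-60.0, 60.0, 200_001)
dtau = tau[1] - tau[0]
B = sigma**2 * np.exp(-beta * np.abs(tau))

def S_numeric(w):
    # The integrand of (1.36) is even in tau, so only the cosine part survives.
    return float(np.sum(np.cos(w * tau) * B) * dtau)

for w in (0.0, 0.5, 2.0):
    exact = 2 * sigma**2 * beta / (beta**2 + w**2)
    print(w, S_numeric(w), exact)
```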
The description of a random function by its spectral density, of course, is not complete, and this distinguishes it from a nonrandom function. After this explanation of the basic types of dynamic characteristics of signals, we shall now give a general formulation of the problems which arise in the study of automatic systems. The first of these is usually called the problem of analysis for a given system, while the second is called the problem of optimal synthesis of the system.

1. The Problem of Analysis. Given the characteristics of the input signals Z, and given the dynamic representation of the system, or, equivalently, the form of the transformation F which corresponds to this system, it is required to find the characteristics of the output signals X.
2. The Problem of Synthesis. The characteristics of the input signals Z are given. The desired characteristics of the output signals X are also given. It is required to find, out of a given class of transformations, the form of the transformation F which will make F{Z} approximate X optimally (in some given sense).

We shall now investigate the fundamental aspects of some specific problems and the principal means of solution. The problem of analysis can be attacked in two ways: either analytically or experimentally. If an apparatus for the system is already constructed, then a direct experimental study of the output signals of the system, together with their statistical measurement, is almost always the most convenient method. If the system exists only as a design, it is possible to make a theoretical or experimental study of the transform characteristics of its separate parts and their interrelations; on the basis of this study, a mathematical description of the transform characteristics of the system as a whole can be given. An effort to take into account all the dynamic subtleties of the real system is hardly necessary; besides, it will lead to such a complication of the problem that its analytical solution becomes far too complex or even impossible, while an experimental solution with the help of analog models becomes much too labor consuming. More important, the search for the exact dynamic circuit usually makes it impossible to find the simple quantitative relations generally required by the engineer.
Therefore, the analytical method of solving a given problem is acceptable only if it enables us to obtain sufficiently simple quantitative and, more important, qualitative answers. This qualification requires that in the following exposition we pay attention to simple approximative methods regardless of their degree of accuracy, and even when limitations make them valid only in a qualitative sense. Given certain random input signals, the analysis of output signals sometimes excludes the possibility of the experimental approach, even in those cases when the random signal is given not as a collection of all the realizations, but only as some bounded number of probability characteristics of the collection as a whole. At the same time, since we can generate random signals (cf., for example, Feldbaum [94]), in principle we can also construct an arbitrary signal. This fact, together with the contemporary proficiency in making mathematical models with analog or digital computers, allows us to reduce the problem of analysis to the problem of statistically measuring the experimental results for the output signals. A similar and practically identical method, widely known as the Monte Carlo method, is always preferable to the purely analytical method when the dynamic circuit of the system and the characteristics of the signals are so complicated that simple approximative techniques of study are ineffective. As for the problem of synthesis, one would assume that the analytical methods would prevail; in practice, however, they are generally replaced by the method of testing several variants, a method which is based on intuition and on experience with the design and operation of similar systems.
Obviously, this situation arises not only because the existing analytical methods of synthesis are very complicated, but also because in the majority of cases the necessity of passing from the optimal transformation which has been found (for the ideal dynamic circuit) to a real system is not taken into account. The real system usually includes the controlled object and the power source, which is chosen on the basis of power and economic considerations; the characteristics of the power source and of the controlled object itself are often of a pronounced nonlinear type. An analytical solution of the synthesis problem which does not account for these factors is useful only in the sense that it points the way to potentially achievable limits of operation associated with the characteristics of the real signals. The theory of synthesis of nonlinear transformations essentially broadens the possibilities for analytical study, both in the sense that it gives a more complete account of the real characteristics of an automatic system, and in the sense that it gives a more precise indication of the potential limits on improvements of dynamic properties. Unfortunately, the methods of nonlinear synthesis are still far from being completely perfected and, furthermore, they cannot always be reduced to practically acceptable results. Therefore, apart from introducing some of the clearer modern methods of synthesis, the book is mainly concerned with the solution of problems of analysis. Here we keep in mind the fact that obtaining sufficiently simple results by means of analysis will facilitate the optimal choice of parameters in the system under study and will usually suggest a way to improve its structure; that is, in the last analysis, it will also simplify the solution of the general problem of synthesis.
chapter 1

NONLINEAR TRANSFORMATIONS WITHOUT FEEDBACK

1.1. Nonlinear Lagless Transformations
Nonlinear lagless transformations are the simplest form of nonlinear transformations. Their significance in the theory of nonlinear systems is determined by two factors: (1) For certain properties of the signals, the nonlinear lagless transformation is a satisfactory representation of the dynamic circuit of many real systems. (2) Nonlinear lagless transformations are often component elements of more complicated nonlinear transformations. In this section we shall study only problems involving the analysis of output signals; moreover, we shall consider only transformations with a single input and a single output. Let f be a given function which defines the transformation of the input signal Z(t) into the output signal X(t). The problem of analysis is to determine the probability characteristics of X(t), knowing the probability characteristics of the function Z(t). Since the transformation under consideration is lagless, the random variable X(t_i) depends only on the random variable Z(t_i), where t_i is an arbitrary moment of time. Therefore, the problem of finding the characteristics of the random function X is the same as the problem of finding the characteristics of a nonlinear function of a random variable. The time t enters into consideration only as a parameter. Let us study the basic problem of finding the first two moments m_x and B_x of the signal X(t). By definition [cf. (1.25) and (1.26)], the moment characteristics of X(t) are found from its probability density. However, it can be shown that, to find the mean of the function f(Z), it is admissible to integrate directly with respect to the probability distribution of its argument.
Therefore, the moments of the first and second orders are given by the following equations:

m_x(t) = M{X(t)} = ∫_{-∞}^{∞} f(z_1) w_1(z_1) dz_1   (1.1)

and

B_x(t, τ) = M{X(t) X(t + τ)} = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(z_1) f(z_2) w_2(z_1, z_2) dz_1 dz_2,   (1.2)
where Z_1 ≡ Z(t) and Z_2 ≡ Z(t + τ). We shall investigate a more exact process for computing these characteristics for several types of signals Z(t).

Let the distribution of the signal Z(t) be normal. Then we have (cf., for example, Pugachev [65])

w_1(z_1) = (1/(√(2π) σ_z(t))) exp[-(z_1 - m_z(t))²/(2σ_z²(t))]   (1.3)

and

w_2(z_1, z_2) = (1/(2π σ_z(t) σ_z(t + τ) √(1 - r²)))
  × exp{-(1/(2(1 - r²))) [(z_1 - m_z(t))²/σ_z²(t) - 2r (z_1 - m_z(t))(z_2 - m_z(t + τ))/(σ_z(t) σ_z(t + τ)) + (z_2 - m_z(t + τ))²/σ_z²(t + τ)]},

where r = r(t, τ) is the normalized correlation coefficient,

r(t, τ) = R_z(t, t + τ)/(σ_z(t) σ_z(t + τ)).

For a stationary signal

m_z(t) = m_z = const,   σ_z(t) = σ_z = const,   r(t, τ) = r(τ).   (1.4)
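Equation (1.1) reduces the computation of m_x to a single quadrature. A minimal sketch, with an assumed cubic nonlinearity and a normal input, for which the third moment E{Z³} = m_z³ + 3 m_z σ_z² is known exactly:

```python
import numpy as np

# m_x = integral of f(z) w1(z) dz, as in (1.1), for the assumed example f(z) = z^3.
m_z, sigma_z = 0.5, 1.0
z = np.linspace(m_z - 10 * sigma_z, m_z + 10 * sigma_z, 100_001)
dz = z[1] - z[0]
w1 = np.exp(-(z - m_z) ** 2 / (2 * sigma_z**2)) / (np.sqrt(2 * np.pi) * sigma_z)

m_x = float(np.sum(z**3 * w1) * dz)
m_x_exact = m_z**3 + 3 * m_z * sigma_z**2     # exact third moment of a normal variable
print(m_x, m_x_exact)
```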
Henceforth, we shall also be concerned with the case when only the unbiased component is a stationary function, while the mathematical expectation m_z is some function of time, that is, when the relation (1.4) is not satisfied. We see at once that, for a normally distributed input signal, the quantity m_x is determined directly by the quantities m_z and σ_z,

m_x = m_x(m_z, σ_z),   (1.6)
which may or may not depend on time [it does not if Z(t) is stationary]. There is no explicit dependence on the variable t in (1.6). We shall now go on to a direct computation of the mathematical expectation m_x(m_z, σ_z) for several nonlinear transformations. A summary of the results of the computations for many typical nonlinearities often encountered in the study of automatic systems is given in Appendix I.

Example 1.
The symmetrical limiter (saturation):

f(Z) = Z,  |Z| ≤ l;   f(Z) = l sgn Z,  |Z| > l.   (1.7)

For this transformation we have

m_x = ∫_{-l}^{l} z_1 w_1(z_1) dz_1 + l ∫_{l}^{∞} w_1(z_1) dz_1 - l ∫_{-∞}^{-l} w_1(z_1) dz_1,

which, after some simple manipulations, leads to

m_x/l = ((m_1 + 1)/2) Φ((1 + m_1)/(√2 σ_1)) + ((m_1 - 1)/2) Φ((1 - m_1)/(√2 σ_1))
  + (σ_1/√(2π)) {exp[-(1 + m_1)²/(2σ_1²)] - exp[-(1 - m_1)²/(2σ_1²)]},   (1.8)

where we are using the notation

m_1 = m_z/l,   σ_1 = σ_z/l,   (1.9)

and where

Φ(x) = (2/√π) ∫_0^x e^{-t²} dt
is the probability integral, for which tables are generally available. Figure 4 shows a graph of the dependence of m_x on m_1 for various values of σ_1.

[Figure 4]

From this graph one can draw definite qualitative conclusions. In the first place, the presence of a random component tends to smooth out the nonlinearity of the resulting characteristic. The range where, for practical purposes, there is linearity (where the deviation from linearity is about 5%) increases as the variance of the random component of the input signal increases. In the second place, the value of the amplification factor on the linear portion of the characteristic m_x(m_z) decreases with increasing σ_1 (at σ_1 = 1 the amplification factor is equal to 0.6 of its value when there is no interference). These properties are shared by a large class of nonlinear elements. A decrease in the effective amplification factor ∂m_x/∂m_z in the zone of linearity (with increasing σ) is characteristic of all elements in which the "differential" amplification factor ∂X/∂Z decreases with increasing absolute value of the signal, because the random component causes an averaging of the characteristic over the whole range of values of the signal. This phenomenon, as will be shown in greater detail in Chapter 2, turns out to have a decisive effect in estimating the stability
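The closed-form mean output of the limiter can be cross-checked against a direct Monte-Carlo average; math.erf is exactly the probability integral Φ, and the parameter values are assumptions made for the illustration:

```python
import math
import numpy as np

# Mean output of the symmetric limiter (f(z) = z for |z| <= l, l*sgn(z) beyond)
# for a normal input N(m_z, sigma_z^2), via the probability integral (erf).
def limiter_mean(l, m, s):
    a = (l + m) / (math.sqrt(2.0) * s)
    b = (l - m) / (math.sqrt(2.0) * s)
    return ((m + l) / 2 * math.erf(a) + (m - l) / 2 * math.erf(b)
            + s / math.sqrt(2 * math.pi) * (math.exp(-a * a) - math.exp(-b * b)))

rng = np.random.default_rng(1)
l, m_z, sigma_z = 1.0, 0.4, 0.8
z = rng.normal(m_z, sigma_z, 2_000_000)
mc = float(np.clip(z, -l, l).mean())          # Monte-Carlo estimate of m_x
print(limiter_mean(l, m_z, sigma_z), mc)
```

For m_z = 0 the formula gives exactly zero, as it must for an odd characteristic and a zero-mean input.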
and quality of nonlinear control systems which are operating in the presence of intense interference.

Example 2. The ideal relay:

f(Z) = l,  Z > 0;   f(Z) = -l,  Z < 0.   (1.10)

The effect of smoothing is still more pronounced for relays with sharper nonlinearities. The value of the mean component at the output of the ideal relay is given by the equation

m_x = l Φ(m_z/(√2 σ_z)).   (1.11)

This characteristic is practically linear, in the sense indicated above, when σ_z/m_z > 1.4. The effect of linearization of the nonlinearity by the random component is of the same type as the well-known effect of oscillatory linearization by means of a periodic signal. In this case, too, the analysis of the behavior of the mean component under the effect of either a random or a periodic oscillation is simplified; more specifically, it becomes possible to replace an essentially nonlinear transformation by an equivalent transformation which is, on the average, linear, and which has the transfer function

∂m_x/∂m_z = (l/σ_z) √(2/π) exp(-m_z²/(2σ_z²)),

which is constant over a rather wide range of variations of the mean component of the signal.

Example 3. In Examples 1 and 2 only odd functions of Z were considered. Let us now look at some cases where this condition is not satisfied. In many control systems with devices that limit the output signal, it is not uncommon for the limits to be nonsymmetric, that is, instead of (1.7),
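Formula (1.11) is easy to confirm by simulation; the parameter values below are assumptions made for the illustration:

```python
import math
import numpy as np

# Mean output of the ideal relay for a normal input:
# m_x = l * Phi(m_z / (sqrt(2)*sigma_z)), Phi being the probability integral (erf).
def relay_mean(l, m, s):
    return l * math.erf(m / (math.sqrt(2.0) * s))

rng = np.random.default_rng(2)
l, m_z, sigma_z = 1.0, 0.3, 1.0
z = rng.normal(m_z, sigma_z, 2_000_000)
mc = float((l * np.sign(z)).mean())           # Monte-Carlo estimate of m_x
print(relay_mean(l, m_z, sigma_z), mc)
```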
we have

f(Z) = l,  Z ≥ l;
f(Z) = Z,  -l(1 - a) < Z < l;
f(Z) = -l(1 - a),  Z ≤ -l(1 - a),   (1.12)

where 0 < a < 1. Then the mean value of the output signal is given by the following equation:

m_x = al/2 + ((m_z + l(1 - a))/2) Φ((l(1 - a) + m_z)/(√2 σ_z)) + ((m_z - l)/2) Φ((l - m_z)/(√2 σ_z))
  + (σ_z/√(2π)) {exp[-(l(1 - a) + m_z)²/(2σ_z²)] - exp[-(l - m_z)²/(2σ_z²)]}.   (1.13)

Let the expectation of the input signal be equal to zero, that is, let the signal be purely random. Then

m_x = al/2 + (l(1 - a)/2) Φ(l(1 - a)/(√2 σ_z)) - (l/2) Φ(l/(√2 σ_z))
  + (σ_z/√(2π)) {exp[-l²(1 - a)²/(2σ_z²)] - exp[-l²/(2σ_z²)]}.   (1.14)

It is not difficult to show that the expansion of m_x in a series in powers of a begins with a term of first order in a,

m_x ≈ (al/2) [1 - Φ(l/(√2 σ_z))],   (1.15)

and only for a = 0 is m_x = 0.
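The detection effect expressed by (1.14) and (1.15) can be seen directly by simulation; the values l = σ_z = 1 are illustrative assumptions:

```python
import numpy as np

# Nonsymmetric limiter of (1.12): upper level l, lower level -l*(1 - a).
# For a zero-mean normal input, the output acquires a mean that grows
# (to first order linearly) with the asymmetry parameter a.
rng = np.random.default_rng(3)
l, sigma_z = 1.0, 1.0
z = rng.normal(0.0, sigma_z, 4_000_000)

means = {}
for a in (0.0, 0.1, 0.2):
    means[a] = float(np.clip(z, -l * (1 - a), l).mean())
print(means)
```

At a = 0 the characteristic is symmetric and the mean output vanishes; as a grows, a mean component appears at the output although the input has none.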
Thus, the nonsymmetric property of the characteristic leads to a new qualitative effect: a nonlinear transformation makes it possible to detect the random component of the input signal. A mean component appears in the output signal even when there was no such component at the input.

Example 4. We shall study the simplest example of a nonlinear transformation of two input signals, namely the case when the
output signal depends nonlinearly both on the input signal and on its derivative,

X = f(Z, pZ),   p ≡ d/dt.   (1.16)

The expectation can be found from the equation

m_x = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(z, pz) w_2(z, pz) dz d(pz),   (1.17)

where w_2(z, pz) is the joint distribution density of the signal Z and of its derivative taken at the same instant. It is a well-known fact (cf., for example, [49]) that for a normal signal these values are independent and that the quantity pZ also has a normal distribution:

w_2(z, pz) = w_1(z) w_1(pz)
  = (1/(√(2π) σ_z)) exp[-(z - m_z)²/(2σ_z²)] · (1/(√(2π) σ_{pz})) exp[-(pz - pm_z)²/(2σ_{pz}²)],   (1.18)

where σ_{pz}² = -[d²R_z(τ)/dτ²]_{τ=0} is the variance of the derivative.
It should be emphasized that the multivalued, on-off-type nonlinearities with hysteresis encountered in the study of automatic systems are not of the type (1.16), although this description is often found in the literature. Let us now study an approximate solution of this problem, remembering that multivalued nonlinearities usually involve transformations with lag. Let X = f(Z) for Z < -Δ_1 and for Z > Δ_1; when -Δ_1 ≤ Z ≤ Δ_1 the dependence of X on Z is multivalued: X = f_1(Z) if the presence of Z in this interval was preceded by its presence in the interval Z > Δ_1, and X = f_2(Z) if Z was previously in the interval Z < -Δ_1. Then, if Z(t) is a stationary random process, it can be assumed that the probability that Z has arrived at the interval where f is multivalued from the right is equal to the relative time of dwell in the interval Z > Δ_1.
Σ_{i=0}^{∞} (-1)^i ((n + k + 2i - 2)!!/(i! (k + i)!)) (a/(2σ_E))^{k+2i}.   (1.76)
If a/(2σ_E) ≥ 1 (that is, if the random component is small), the resulting expansion converges poorly. In this case it is convenient to use another expansion. Hankel's well-known formula [14, 72] is as follows:

∫_0^∞ e^{-p²u²} u^{n-1} J_k(au) du = (Γ[(n + k)/2]/(2 p^n Γ(k + 1))) (a/(2p))^k ₁F₁((n + k)/2; k + 1; -a²/(4p²)),   (1.77)

where ₁F₁(α, β, -x) is the confluent hypergeometric function, which has an asymptotic representation (for large x) of the following form ([98], p. 373):

₁F₁(α, β, -x) ≈ (Γ(β)/Γ(β - α)) x^{-α} [1 + α(α - β + 1)/x + α(α + 1)(α - β + 1)(α - β + 2)/(2! x²) + ...],   (1.78)

where Γ is the gamma function. Making use of this representation, with p² = σ_E²/2, we find

h_{nk} = (i^{n+k-1} 2^n Γ[(n + k)/2])/(π a^n Γ[(k - n)/2 + 1]) × [1 + (n² - k²)σ_E²/(2a²) + ...].   (1.79)

Example 8. For the smoothed nonlinearity, if n + k is even, then h_{nk} = 0 [because of the symmetry of f(Z)]; if n + k is odd, then

h_{nk} = (2 i^{n+k-1}/π) ∫_0^∞ J_k(au) u^{n-1} exp{-u²(σ_E² + b²)/2} du.   (1.80)

It is not difficult to prove that (1.79) holds also if σ_E is replaced by √(σ_E² + b²) and if l equals 1.
Next we shall investigate the probability distribution of the output signal of a nonlinear transformation without lag. We shall derive from first principles the relations for a one-dimensional distribution. The value of the output signal X(t) = x at time t is determined only by the value of the input signal Z(t) = z at the same instant: X = f(Z). Z is a random quantity with probability density w_{1z}(z_1). Thus, the problem is to find the probability distribution W_{1x}(x_1) for a function of a random quantity when the probability density of its argument, w_{1z}(z_1), is given. By definition,

W_{1x}(x_1) = P{X < x_1} = ∫_{f(z_1) < x_1} w_{1z}(z_1) dz_1,   (1.81)

where P{X < x_1} denotes the probability that X < x_1, and where the integral is taken over all intervals of the z_1 axis where the condition f(Z) < x_1 is satisfied. If the inverse transformation is single valued,
Z = φ(X),

then by changing the variable of integration in the integral (1.81) we have

W_{1x}(x_1) = ∫_{-∞}^{x_1} w_{1z}[φ(x)] |dφ(x)/dx| dx   (1.82)

and, consequently,

w_{1x}(x_1) = w_{1z}[φ(x_1)] |dφ(x_1)/dx_1|.   (1.83)
If the inverse transformation breaks up into several branches,

z_i = φ_i(x)   (i = 1, 2, ...),   (1.84)

on each of which it is single valued, then the region of integration in (1.81) breaks up into a series of regions, each of which satisfies the condition Z < φ_i(x). In this case, the integral (1.81) reduces to the form

W_{1x}(x_1) = Σ_i ∫_{Z < φ_i(x_1)} w_{1z}(z_1) dz_1.   (1.85)

Differentiation with respect to x_1 gives for the probability density

w_{1x}(x_1) = Σ_i w_{1z}[φ_i(x_1)] |dφ_i(x_1)/dx_1|.   (1.86)
Let us study two simple examples.

Example 9. Let X = f(Z) = Z³ and let Z obey the normal distribution

w_{1z}(z_1) = (1/(√(2π) σ_z)) exp[-(z_1 - m_z)²/(2σ_z²)].

In this case

Z = φ(X) = X^{1/3}.

The transformation is single valued so long as we restrict ourselves to real numbers. Therefore,

w_{1x}(x_1) = (1/(3 x_1^{2/3} √(2π) σ_z)) exp[-(x_1^{1/3} - m_z)²/(2σ_z²)].

Example 10. Let X = a sin(ωt + Z), where

w_{1z}(z) = 1/(2π),  0 < z < 2π;   w_{1z}(z) = 0,  otherwise.

The inverse transformation obviously is not single valued. In the interval (0, 2π) there are two branches:

φ_1(X) = arcsin(X/a) - ωt,   φ_2(X) = π - arcsin(X/a) - ωt.

Substituting into Eq. (1.86), we obtain

w_{1x}(x_1) = 1/(π √(a² - x_1²)),  |x_1| < a,

because

|dφ_1(x_1)/dx_1| = |dφ_2(x_1)/dx_1| = 1/√(a² - x_1²).
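Example 10 can be confirmed with a histogram of samples; the values of a, ω, and t are arbitrary assumptions, and the resulting density does not depend on ω or t:

```python
import numpy as np

# X = a*sin(w*t + Z) with Z uniform on (0, 2*pi) follows the arcsine law
# w(x) = 1 / (pi * sqrt(a**2 - x**2)) on (-a, a), for any fixed w and t.
rng = np.random.default_rng(4)
a, w, t = 2.0, 1.0, 0.37
x = a * np.sin(w * t + rng.uniform(0.0, 2 * np.pi, 2_000_000))

counts, edges = np.histogram(x, bins=50, range=(-0.9 * a, 0.9 * a))
width = edges[1] - edges[0]
density = counts / (len(x) * width)           # empirical density per bin
centers = 0.5 * (edges[:-1] + edges[1:])
theory = 1.0 / (np.pi * np.sqrt(a**2 - centers**2))
max_err = float(np.max(np.abs(density - theory)))
print(max_err)
```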
Let us assume an n-dimensional probability density for the random process at the input,

w_{nz}(z_1, z_2, ..., z_n),

where z_i is the value of the process Z(t) at the moment of time t_i (i = 1, 2, ..., n). Then one can find the n-dimensional distribution for the process X(t) which is related to Z(t) by the equation

X(t) = f[Z(t)],   (1.87)

so that x_i = f(z_i). The distribution function is obviously given by a multiple integral of nth order,

W_{nx}(x_1, x_2, ..., x_n) = ∫ ... ∫ w_{nz}(z_1, z_2, ..., z_n) dz_1 dz_2 ... dz_n;   (1.88)

the integration is over the whole region where the inequalities

f(z_i) < x_i   (i = 1, 2, ..., n)   (1.89)

are valid. When the inverse transformation

Z = φ(X)   (1.90)

is single valued, the change of variables

z_i = φ(x_i')   (1.91)

reduces the integral (1.88) to the form

W_{nx}(x_1, x_2, ..., x_n) = ∫_{-∞}^{x_1} ... ∫_{-∞}^{x_n} w_{nz}[φ(x_1'), ..., φ(x_n')] |I| dx_1' ... dx_n',

where

I = Π_{i=1}^{n} dφ(x_i')/dx_i'   (1.92)

is the Jacobian of the transformation.
Differentiation of W_{nx}(x_1, x_2, ..., x_n) sequentially with respect to each of its variables enables us to find the n-dimensional probability density of the output process:

w_{nx}(x_1, x_2, ..., x_n) = |I| w_{nz}[φ(x_1), φ(x_2), ..., φ(x_n)].   (1.93)
Many typical nonlinearities lead to a transformation (1.90) which is not single valued and which does not break down into a finite number of single-valued branches, because the graph of f(Z) contains portions which are parallel to the Z axis. Let f(Z) have the form shown in Fig. 9.

[Figure 9]

In the range Z < z_0 the transformation (1.90) is single valued and, hence, in the region x_i < x_0 the probability density is given by the expression (1.93). In the regions where at least one of the conditions x_0 < x_i < ∞ (i = 1, 2, ..., n) is valid, it is clear that

w_{nx}(x_1, x_2, ..., x_n) = 0.

On the boundaries of the region the probability density is given by delta functions, with coefficients determined by the probability that the output lies on the given boundary. Hence, one can write that

w_{nx}(x_1, x_2, ..., x_n) = |I| w_{nz}[φ(x_1), ..., φ(x_n)],   x_i < x_0.
l2 S,(w).
( 1.103)
T h e simple algebraic relations (1.102) and (1.103) are more convenient than the integrals (1.100) and (1.101). Moreover, the spectral density of the output signal is frequently of greater practical interest. Thus, the calculation for the transformation can be based on the moment characteristic B J T ) [or RY(7)], which one can find by the methods described in the previous section, and by introducing the spectral density expression
into Eq. (1.103). T h e mean-square value of the output signal is given by M{xz} = B,(O)
=
1 -J 257 1
=257
" -w
1 "
-w
S,(w)dw S,(w) I @ ( j w ) 1% dw.
(1.104)
For fractional rational functions S,(W)the integral (1.104) is tabulated (cf. Appendix 111); thus, the mean-square value M{x2}or the meansquare deviation (the variance) uz2= M{x2}- mz2 can be expressed quickly in terms of the parameters of the spectral density S,(W) and of the frequency characteristic @ ( j w ) .
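A sketch of (1.104) by direct numerical integration, for an assumed input spectrum S_y(ω) = 2βσ²/(β² + ω²) and the filter |Φ(jω)|² = 1/(1 + T²ω²); for this particular pair the integral has the exact value σ²/(1 + βT):

```python
import numpy as np

# Mean square of the output via (1.104):
#   M{x^2} = (1/2pi) * Int S_y(w) |Phi(jw)|^2 dw.
sigma, beta, T = 1.0, 1.0, 2.0
w = np.linspace(-500.0, 500.0, 1_000_001)
dw = w[1] - w[0]
S_y = 2 * beta * sigma**2 / (beta**2 + w**2)
S_x = S_y / (1 + T**2 * w**2)                 # |Phi(jw)|^2 = 1/(1 + T^2 w^2)
msq = float(np.sum(S_x) * dw / (2 * np.pi))

msq_exact = sigma**2 / (1 + beta * T)         # exact value of this integral
print(msq, msq_exact)
```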
Let us illustrate this computational technique by a very simple example.

Example. Let f(Z) represent the characteristic of an ideal relay, i.e., f(Z) = l sgn Z, and let

Φ(jω) = 1/(Tjω + 1).

The correlation function of the input signal has the form

R_z(τ) = σ_z² exp(-β|τ|),

and m_z = 0. The distribution of Z(t) is normal. We write the correlation function of the signal R_y(τ) in the form of a series

R_y(τ) = Σ_{n=1}^{∞} a_n² ρ_z^n(τ),   ρ_z(τ) = R_z(τ)/σ_z²,

where

a_n² = (2l²/π) ((2k - 1)!!/((2k)!! (2k + 1))),  n = 2k + 1;
a_n² = 0,  n = 2k.
The corresponding spectral density is given by

S_y(ω) = Σ_{n=1}^{∞} a_n² S_{nρ}(ω),   (1.105)

where

S_{nρ}(ω) = ∫_{-∞}^{∞} e^{-jωτ} ρ_z^n(τ) dτ.

In this case the computation of S_{nρ}(ω) is very simple:

S_{nρ}(ω) = ∫_{-∞}^{∞} e^{-jωτ} e^{-nβ|τ|} dτ = 2nβ/(n²β² + ω²).

When ρ_z(τ) is given by a more complicated expression, it is often convenient to use the recursion relation

S_{nρ}(ω) = (1/2π) ∫_{-∞}^{∞} S_{1ρ}(ω - χ) S_{(n-1)ρ}(χ) dχ,   (1.106)

which follows from the usual convolution formula.
1.2. Nonlinear Transformations with Lag
The spectral density of the output signal is given by Eq. (1.103). Substituting the expression for Φ(jω) into it, we finally obtain

S_x(ω) = (4l²β/π) (1/(T²ω² + 1)) Σ_{k=0}^{∞} ((2k - 1)!!/(2k)!!) (1/(ω² + (2k + 1)²β²)).   (1.107)
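Each term of (1.107) integrates over frequency in closed form, since (1/2π)∫dω/[(ω² + c²)(1 + T²ω²)] = 1/[2c(cT + 1)], so the output variance can be summed term by term and compared with the single-term truncation; l = β = T = 1 are illustrative values:

```python
import math

# Output variance for the relay-plus-filter example, summing (1.107) term by
# term; each frequency integral is evaluated in closed form.
def sigma_x2(l, beta, T, kmax=200):
    total = 0.0
    r = 1.0                                   # (2k-1)!!/(2k)!! at k = 0
    for k in range(kmax + 1):
        c = (2 * k + 1) * beta
        total += (4 * l**2 * beta / math.pi) * r / (2 * c * (c * T + 1))
        r *= (2 * k + 1) / (2 * k + 2)        # recurrence for the double-factorial ratio
    return total

l, beta, T = 1.0, 1.0, 1.0
full = sigma_x2(l, beta, T)
one_term = sigma_x2(l, beta, T, kmax=0)       # first term of the series only
print(full, one_term)
```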
[Figure 10]

From Fig. 10 it is clear that the most significant contribution of the signal Y(t) to the spectral density S_y(ω) at the output of the lagless nonlinear transformation comes from the first term, which is similar in form to S_z(ω). The frequency distortion becomes significant only at high frequencies. Therefore, if the linear portion passes only the lower frequencies, the error resulting from the use of the approximate formula

S_x(ω) ≈ |Φ(jω)|² (a_1²/σ_z²) S_z(ω)   (1.108)
will be unimportant (see Fig. 11). The error in σ_x² is of the order of 3% when βT = 1.
If f ( Z ) is not an odd function or if the mean value of the input signal is different from zero, it may be necessary to take the higher terms into account in an expansion of the type (1.33) for the moment
[Figure 11]
of second order or for the spectral density which corresponds to it. However, this calculation is not very difficult if one makes use of the dependences a_n(m_z, σ_z) which are given in Appendix I for typical nonlinearities. We shall now give an example of the computation of S_x(ω) for the transformation described above when the mean component is different from zero. Let m_z = σ_z. Then from the graphs in Appendix I we find that

a_0 = 0.68,   a_1 = 0.48,   a_2 = -0.34,   a_3 = 0.

Consequently, the mean value is m_y = a_0 = 0.68, and the spectral density of the unbiased component is

S_y⁰(ω) = a_1² (2β/(β² + ω²)) + a_2² (4β/(4β² + ω²)).
From Fig. 12 it is clear that in this case it is perhaps not permissible to neglect the second term. Even after filtering (for βT = 1), the error in the variance σ_x² is greater than 25% (Fig. 13).

[Figure 12]

[Figure 13]
Let us proceed to the analysis of a more complicated nonlinear transformation, defined by Eqs. (1.109)-(1.111): the input signal at first undergoes a linear transformation with lag, then a nonlinear lagless transformation, and, again, a linear transformation with lag:

X(t) = ∫_0^∞ h_2(τ) Y(t - τ) dτ,   (1.109)

Y(t) = f(U),   (1.110)

U(t) = ∫_0^∞ h_1(τ) Z(t - τ) dτ.   (1.111)
We shall determine the first two moments of the output signal X(t). It follows from (1.98) and (1.99) that for this one must know the same moments for the signal Y(t). However, to find the latter moments it is not sufficient to know only the first two moments of the signal U(t). In addition, one must also know the two-dimensional probability distribution. Therefore, one must first find the probability distribution for the signal at the output of the linear transformation with lag defined by Eq. (1.111). Generally speaking, this problem is very complicated. Only in the following two special, but very important, cases does it have an elementary solution.
(1) The signal Z(t) is normally distributed. Then one can show that an arbitrary linear transformation of Z(t) will result in a normally distributed signal (cf., for example, Pugachev [65], p. 415). A normal distribution is completely determined if the first two moments are given. Then the problem for the normal input signal reduces to the simple problem of finding the moments.

(2) The signal Z(t) is a harmonic signal with a random phase. In this case the signal U(t) will also be harmonic; that is, the form of the distribution is preserved, and only a numerical parameter, the amplitude, is changed:

a_u² = |Φ_1(jω₀)|² a_z²,   (1.112)

where ω₀ is the frequency of the harmonic signal and Φ_1(jω) is the frequency characteristic corresponding to h_1(τ).
The problem is more complicated when Z(t) is an arbitrary definite function of the time t and of a random parameter φ (or of several random parameters) which has some known probability distribution. If it is possible to construct a definite solution U(t, φ) for fixed, but arbitrary, values of the parameter φ, then the computation of the probability distribution for the signal U(t, φ) reduces to computing the distribution for a function of the random parameter φ, which is a problem similar to that discussed in Section 1.1 for a harmonic signal with a random phase.*

We shall now investigate a property of linear transformations which considerably simplifies many problems in the theory of automatic systems. Let us look at the basic formula for a linear transformation:

X(t) = ∫_0^∞ h(τ) Z(t - τ) dτ.

* This method can be used in principle for solving a great variety of problems, because an arbitrary random function Z(t) defined over a finite interval can be represented in the form of a series (a canonical expansion),

Z(t) = m_z(t) + Σ_ν V_ν z_ν(t),   (1.113)

where the V_ν are mutually noncorrelated random variables and the z_ν(t) are fixed functions of time. Computational techniques based on the use of canonical expansions, and the theory for constructing these expansions, are exhaustively described by Pugachev [65] and are therefore not developed here.
We divide the interval of integration into subintervals

(0, τ_1), (τ_1, τ_2), ..., (τ_n, τ_{n+1}), ...,

and then use the theorem of the mean. Then we have

X(t) = Σ_{n=0}^{∞} ξ_n,   (1.114)

where

d_n = ∫_{τ_n}^{τ_{n+1}} h(τ) dτ,   ξ_n = d_n Z(t - τ_n*),   (1.115)

and where

τ_n ≤ τ_n* ≤ τ_{n+1}.   (1.116)
Suppose that

|β| ≪ 1/T_k.   (1.117)

Since |β| determines the width of the passband of the system, and 1/T_k is the width of the spectrum of Z, that is, of the region where S_z(ω) is significantly different from zero, the physical meaning of the condition (1.117) is as follows: in order to have a normalized output signal (that is, in order to get a normal distribution), the passband of the frequency characteristic |Φ(jω)| must be much smaller than the actual frequency band of the spectral density S_z(ω) of the input signal. This statement has a purely qualitative character, but it is widely used (and will be used in this exposition) in computations for nonlinear automatic systems. However, it is worth noting that the statement about the "normalization" of the sum of a large number N of independent random variables,

X_N = Σ_{n=0}^{N} ξ_n,

cannot always be interpreted in the strict sense of the central limit theorem, that is, in the sense that when N → ∞ the distribution of X_N comes arbitrarily close to a normal distribution. In fact, the latter is true only if several additional conditions are satisfied (the so-called Lindeberg conditions), which, generally speaking, are not satisfied by processes defined by linear transformations with lag. Let us investigate this question in more detail. We introduce the notation
$$S_N = \sum_{n=0}^{N} \xi_n,\qquad m_N = M\{S_N\} = \sum_{n=0}^{N} m_{\xi_n},\qquad s_N^2 = \sum_{n=0}^{N} D\{\xi_n\}.\qquad (1.118)$$
Then Lindeberg's conditions ([90], p. 207) can be written in the form

$$\sum_{n=0}^{N} D\{u_n\} \sim s_N^2,\qquad (1.119)$$

where

$$u_n = \xi_n - m_{\xi_n}\ \ \text{for}\ \ |\xi_n - m_{\xi_n}| < \varepsilon s_N,\qquad u_n = 0\ \ \text{for}\ \ |\xi_n - m_{\xi_n}| \geq \varepsilon s_N,$$

and where ε is an arbitrary number greater than zero. The sign ∼ designates that s_N → ∞ as N → ∞, and that the quotient of the two quantities linked by this sign approaches unity. A sufficient condition, obviously, would be the requirement of uniform boundedness,

$$|\xi_n| < A,\qquad (1.120)$$
as s_N → ∞. Usually h(t) can be represented in the form

$$h(\tau) = \sum_{k=0}^{l} c_k e^{\beta_k \tau},\qquad (1.121)$$
where the β_k are the roots of the characteristic equation of the system, that is, the poles of Φ(p); for a stable system Re β_k < 0. We shall now compute the coefficients d_n in Eq. (1.114), assuming that

$$|\beta_k \Delta\tau_n| \ll 1\qquad (\Delta\tau_n = \tau_{n+1} - \tau_n).$$

Then

$$d_n = h(\tau_n)\,\Delta\tau_n.\qquad (1.122)$$
This expression could have been written immediately because of the slow variation of h(τ). We also suppose that the process Z(t) is stationary, so that

$$D\{Z(t - \tau_n^*)\} = \sigma_z^2.$$

The computation of s_N² gives

$$s_N^2 = \sigma_z^2 \sum_{n=0}^{N} \Delta\tau_n^2\, h^2(\tau_n).$$

Obviously, the condition for the convergence of S_N to a normal distribution will not be satisfied, since for a stable transformation

$$\lim_{N\to\infty} s_N^2 = \sigma_z^2\, \Delta\tau \int_0^{\infty} h^2(\tau)\,d\tau = \text{const}\qquad (\Delta\tau = \Delta\tau_n = \text{const}).\qquad (1.123)$$
1. Nonlinear Transformations without Feedback
Hence, the effect of the normalization of a random signal in the operation of a narrow-band linear filter can be understood only in the sense that the distribution of the output signal will come closer to a normal distribution than the distribution of the input signal. The accuracy of the approximation to the normal distribution can be estimated by different methods. One of these is to estimate the higher moments of the output signal. It is well known ([37], p. 248) that an arbitrary one-dimensional probability density can be represented in the form of an orthogonal expansion, of which the first term coincides with the normal distribution and the remaining terms characterize its difference from this distribution*:

$$w_1(z) = \psi(z)\Big[1 + \sum_{n=3}^{\infty} \frac{c_n}{n!}\, H_n\Big(\frac{z - m_z}{\sigma_z}\Big)\Big],\qquad (1.124)$$

where ψ(z) = (1/√(2π) σ_z) exp[−(z − m_z)²/2σ_z²], and where the H_n are the Chebyshev-Hermite polynomials. We shall suppose that w₁(z) is completely described by the first three terms of the series (1.128) and, consequently, is determined by four parameters: m_z, σ_z, μ₃, γ. Then the determination of a one-dimensional probability density for a signal at the output of a linear transformation reduces to the problem of finding these parameters. We introduce the concept of an n-dimensional correlation function,

$$R_n(\tau_1, \tau_2, \ldots, \tau_n) = M\{Z(t)\, Z(t + \tau_1) \cdots Z(t + \tau_n)\}.\qquad (1.129)$$

Here, we are assuming that m_z = 0. Then it follows from (1.127) that

$$\sigma_z^n \mu_n = R_{n-1}(0, 0, \ldots, 0).\qquad (1.130)$$
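The "normalization" effect described above can be observed numerically. In this illustrative sketch (my own construction, not from the text), a strongly non-Gaussian white input is passed through a long moving average, standing in for a narrow-band linear stage, and the excess kurtosis of the output is compared with that of the input.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# Strongly non-Gaussian white input: centered chi-square, excess kurtosis 12.
z = rng.standard_normal(n) ** 2 - 1.0

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

# A long moving average stands in for a narrow-band linear filter.
h = np.ones(200) / 200.0
x = np.convolve(z, h, mode="valid")

k_in = excess_kurtosis(z)   # far from the normal value 0
k_out = excess_kurtosis(x)  # much closer to 0, but not exactly 0
```

Consistent with the discussion above, the output is only closer to normal, not exactly normal.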
Likewise, we introduce into our study the n-dimensional spectral density, which is defined by an nth-order Fourier transform of R_n:

$$S_n(\omega_1, \ldots, \omega_n) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} R_n(\tau_1, \ldots, \tau_n) \exp[-j(\omega_1 \tau_1 + \cdots + \omega_n \tau_n)]\, d\tau_1 \cdots d\tau_n.\qquad (1.131)$$

Obviously, the inverse relation

$$R_n(\tau_1, \ldots, \tau_n) = \frac{1}{(2\pi)^n} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} S_n(\omega_1, \ldots, \omega_n) \exp[j(\omega_1 \tau_1 + \cdots + \omega_n \tau_n)]\, d\omega_1 \cdots d\omega_n\qquad (1.132)$$

is also valid and, consequently, the moments (1.130) can be expressed in terms of integrals of the n-dimensional spectral density. The expression for the n-dimensional spectral density of the signal X(t) at the output of the linear transformation has the following form [42]:

$$S_n^{(x)}(\omega_1, \ldots, \omega_n) = \Phi[-j(\omega_1 + \cdots + \omega_n)]\, \Phi(j\omega_1) \cdots \Phi(j\omega_n)\, S_n^{(z)}(\omega_1, \ldots, \omega_n).\qquad (1.134)$$

When n = 1 in (1.134), it is obvious that we obtain again Eq. (1.103). Therefore, we can find the moments μ_n at the output of the linear transformation if the n-dimensional spectral density of the input signal is given (or if the n-dimensional correlation function is given). However, if the linear transformation is followed by a nonlinear transformation, as in (1.109)-(1.111), then, after computing the moments μ_n, it is not possible, as a rule, to find the correlation function of the signal Y(t) at the
output of the nonlinear transformation f(u), because for this we would have to know the two-dimensional probability density w₂(u₁, u₂). The direct computation of w₂(u₁, u₂), even if one uses an expansion of the type (1.124), is extremely difficult (cf., for example, Tikhonov [87]). Therefore, one usually tries to use some approximate device. If the coefficients of asymmetry and excess, μ₃ and γ, are small and, consequently, if the one-dimensional distribution is close to normal, then one may assume that the two-dimensional distribution is also nearly Gaussian. If this is not true, then it is expedient to use an approximate technique for computing the correlation function, a technique which is based on the idea of statistical linearization (cf. Sections 1.5, 2.2, and 3.3) and for which it is not necessary to know the two-dimensional probability density.
1.3. The Problem of Synthesis. Optimal Conditions for Various Classes of Transformations
The problem of synthesis for transformations which give the best approximation was formulated in the Introduction. We shall now describe a method which is based on the criterion of the minimum mean-square deviation (the minimum error). Assume that the statistical characteristics of the random processes X and Z, which are related to each other, are given. We shall look for a transformation F such that

$$\sigma_e^2 = M\{[F\{Z\} - X]^2\} = \min\qquad (1.135)$$

among all possible transformations F which belong to the given class. In what follows, we shall limit our study to those cases where X and Z are stationary and stationarily related. We shall denote the operation of finding the expectation by a bar over the expression which we are averaging. Let the operator F{Z} be optimal in the given class, that is, let it satisfy (1.135). The variation δσ_e² of the mean-square deviation caused by a small variation εG{Z} of the operator F{Z} must vanish to first order in ε, which leads to the fundamental optimality condition

$$\overline{[F\{Z\} - X]\, G\{Z\}} = 0\qquad (1.136)$$

for every admissible variation G{Z}. In particular, taking G = F in (1.136) shows that for the optimal transformation the error is

$$\sigma_e^2 = \sigma_x^2 - \sigma_y^2,\qquad (1.139)$$

where σ_y² is the variance of the output Y = F{Z}. For the optimal linear transformation with lag,

$$F\{Z\} = \bar{h} + \int_0^{\infty} h(\tau)\, Z(t - \tau)\, d\tau,$$

condition (1.136) yields the two conditions

$$\bar{h} + \bar{Z} \int_0^{\infty} h(\tau)\, d\tau = \bar{X},\qquad \int_0^{\infty} h(\tau)\, B_z(\tau_1 - \tau)\, d\tau = B_{zx}(\tau_1)\qquad (\tau_1 \geq 0).\qquad (1.140)$$

We note that, if the first of these conditions is satisfied, we have the equation

$$B_{zx}(\tau_1) = R_{zx}(\tau_1) + \bar{Z}\bar{h} + (\bar{Z})^2 \int_0^{\infty} h(\tau)\, d\tau.$$
But then the second condition can be rewritten in the following way,

$$\int_0^{\infty} h(\tau)\, R_z(\tau_1 - \tau)\, d\tau = R_{zx}(\tau_1)\qquad (\tau_1 \geq 0),\qquad (1.141)$$

since

$$B_z(\tau) = R_z(\tau) + (\bar{Z})^2.$$
We write the resulting expression for the optimal transformation in the form

$$F\{Z\} = \bar{h} + \bar{Z}\int_0^{\infty} h(\tau)\, d\tau + \int_0^{\infty} h(\tau)\,[Z(t - \tau) - \bar{Z}]\, d\tau = \bar{X} + \int_0^{\infty} h(\tau)\, Z^0(t - \tau)\, d\tau.$$
Thus, the optimal linear transformation is found to be a transformation which reproduces the mean components; the impulse function of the transformation for the unbiased component is given by the integral equation (1.141), which is an equation of the Wiener-Hopf type. The methods for solving Wiener-Hopf equations are now well developed. They are discussed in detail by Laning and Battin [51], Morosanov [54], Pugachev [65], and Solodovnikov [80]. Hence, it is not necessary to outline them here. Finally, we shall look at the simplest case, the optimal approximation by a lagless linear transformation:

$$F\{Z\} = \bar{h} + h_1 Z = \bar{h} + h_1 \bar{Z} + h_1 Z^0.$$

From the fundamental optimality condition (1.136) we obtain

$$\bar{h} + h_1 \bar{Z} = \bar{X},\qquad h_1 = \frac{\overline{X Z^0}}{\sigma_z^2}.\qquad (1.142)$$
The coefficient h₁ is usually called the transmission coefficient with respect to the random (unbiased) component. The transformation of the mean components is defined by the coefficient

$$h_0 = \frac{\bar{X}}{\bar{Z}} = \frac{m_x}{m_z}.\qquad (1.143)$$

Thus, the optimal approximation by a lagless linear transformation is given by the equation

$$Y = h_0 m_z + h_1 Z^0.\qquad (1.144)$$
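A minimal numerical sketch of Eqs. (1.142)-(1.144) on synthetic data (the quadratic dependence of X on Z and all numbers are arbitrary illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
Z = 1.0 + 0.5 * rng.standard_normal(100_000)
X = Z**2                                  # assumed nonlinear dependence

m_z, m_x = Z.mean(), X.mean()
Z0 = Z - m_z                              # unbiased component
h1 = np.mean(X * Z0) / Z.var()            # transmission coefficient (1.142)
h0 = m_x / m_z                            # mean-component coefficient (1.143)

Y = h0 * m_z + h1 * Z0                    # optimal lagless approximation (1.144)
mse = np.mean((Y - X) ** 2)
```

For this input, h₁ is close to its theoretical value 2, the mean of X is reproduced exactly, and any perturbation of h₁ can only increase the mean-square error.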
Of course, the transmission coefficient h₀ needs to be introduced only when m_z ≠ 0; when m_z = 0, it is easier to write directly

$$Y = m_x + h_1 Z^0.\qquad (1.145)$$
Sometimes it is convenient to use an approximation which is still simpler, the homogeneous transformation

$$F\{Z\} = h_c Z.\qquad (1.146)$$

Here, one does not get an exact reproduction of the mean components. The transmission coefficient h_c is given by the following equation, which is easily deduced from the general condition (1.136):

$$h_c = \frac{\overline{XZ}}{\overline{Z^2}}.\qquad (1.147)$$

It is not hard to see that the coefficient h_c is related to the coefficients h₀ and h₁ of the optimal nonhomogeneous transformation by the equation

$$h_c = \frac{h_0 m_z^2 + h_1 \sigma_z^2}{m_z^2 + \sigma_z^2};\qquad (1.148)$$

in other words, it is the weighted mean between them. In conclusion, we note that to find the optimal values of the parameters in the linear lagless approximation it is not necessary to solve the general variational problem using the condition (1.136). It is sufficient to require that the quantity σ_e² be a minimum, not as a functional, but simply as a function of the unknown parameters h_i. Minimization is guaranteed by the condition

$$\frac{\partial \sigma_e^2}{\partial h_i} = 0.\qquad (1.149)$$

The equivalence of Eq. (1.149) to the general condition can be checked by constructing the optimal approximation by the transformation (1.146). In fact,

$$\sigma_e^2 = \overline{(h_c Z - X)^2} = h_c^2\, \overline{Z^2} - 2 h_c\, \overline{XZ} + \overline{X^2},\qquad \frac{\partial \sigma_e^2}{\partial h_c} = 2\big(h_c\, \overline{Z^2} - \overline{XZ}\big),$$

from which we obtain (1.147).
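The weighted-mean relation (1.148) can be verified directly on sample moments; the sine nonlinearity and the numbers below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
Z = 2.0 + rng.standard_normal(200_000)
X = np.sin(Z)                                  # assumed nonlinearity

m_z, s2 = Z.mean(), Z.var()
h1 = np.mean((X - X.mean()) * (Z - m_z)) / s2  # Eq. (1.142)
h0 = X.mean() / m_z                            # Eq. (1.143)
hc = np.mean(X * Z) / np.mean(Z**2)            # Eq. (1.147)

# Eq. (1.148): hc is the weighted mean of h0 and h1.
hc_check = (h0 * m_z**2 + h1 * s2) / (m_z**2 + s2)
```

On sample moments the identity holds exactly (up to floating-point rounding), because mean(XZ) decomposes into the mean and covariance parts used by h₀ and h₁.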
2. The Nonlinear Lagless Transformation. In the general case we have

$$F\{Z\} = f(Z),\qquad (1.150)$$

where f is a nonlinear function. We shall look for an optimal transformation which satisfies condition (1.135) within a somewhat smaller class of nonlinear transformations, those which can be described by the equation

$$f(z) = \sum_{n=0}^{R} k_n \theta_n(z),\qquad (1.151)$$

where the k_n are constants and where the θ_n(z) are orthonormal polynomials in the variable z, of degree n, with weight w₁(z). Thus, for the θ_n(z) the following relations are valid:

$$\int_{-\infty}^{\infty} \theta_n(z)\, \theta_m(z)\, w_1(z)\, dz = \begin{cases} 1, & n = m,\\ 0, & n \neq m.\end{cases}\qquad (1.152)$$

We note that the first two terms of the system of orthogonal polynomials for an arbitrary probability density w₁ can be found by using the elementary relations

$$\int_{-\infty}^{\infty} w_1(z)\, dz = 1,\qquad \int_{-\infty}^{\infty} z\, w_1(z)\, dz = m_z,\qquad \int_{-\infty}^{\infty} (z - m_z)^2\, w_1(z)\, dz = \sigma_z^2,\qquad (1.153)$$

or

$$\int_{-\infty}^{\infty} 1 \cdot 1 \cdot w_1(z)\, dz = 1,\qquad \int_{-\infty}^{\infty} 1 \cdot (z - m_z)\, w_1(z)\, dz = 0,\qquad \int_{-\infty}^{\infty} \Big(\frac{z - m_z}{\sigma_z}\Big)^2 w_1(z)\, dz = 1.$$

It follows from this that the first two orthogonal polynomials are

$$\theta_0 = 1,\qquad \theta_1 = \frac{z - m_z}{\sigma_z}.\qquad (1.154)$$
The transformation G{Z} must also belong to the class (1.151), so that we have

$$G\{Z\} = \sum_{n=0}^{R} g_n \theta_n(Z).\qquad (1.155)$$

Condition (1.136) now takes the form

$$\overline{\Big[\sum_{m=0}^{R} k_m \theta_m(Z) - X\Big] \sum_{n=0}^{R} g_n \theta_n(Z)} = 0.$$

From the orthogonality of the polynomials θ_n(z), we obtain

$$\sum_{n=0}^{R} g_n \big[k_n - \overline{\theta_n(Z)\, X}\big] = 0.$$

Since this condition must be satisfied for arbitrary g_n, the desired constants k_n are given by the following equations:

$$k_n = \overline{\theta_n(Z)\, X}\qquad (n = 0, 1, \ldots, R),\qquad (1.156)$$

and, hence, from (1.154),

$$k_0 = \overline{\theta_0(Z)\, X} = m_x.\qquad (1.157)$$

Therefore, the optimal transformation without lag is given by the equation

$$Y = F\{Z\} = \sum_{n=1}^{R} \sigma_x D_n \theta_n(Z) + m_x,\qquad (1.158)$$

where

$$D_n = \frac{\overline{\theta_n(Z)\, X}}{\sigma_x}.\qquad (1.159)$$
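A sketch of the construction (1.151)-(1.158) built empirically: the polynomials θ_n are orthonormalized with respect to the sample distribution of Z by Gram-Schmidt, and the coefficients k_n are sample averages. The uniform input and the cubic dependence are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
Z = rng.uniform(-1.0, 1.0, 100_000)
X = Z**3 + 0.1 * rng.standard_normal(Z.size)   # assumed nonlinear dependence

# Orthonormalize 1, z, z^2, z^3 in the inner product <f, g> = mean(f g),
# taken over the sample of Z (empirical analogue of the weight w1).
P = np.vander(Z, 4, increasing=True)
Theta = np.empty_like(P)
for n in range(4):
    v = P[:, n].copy()
    for m in range(n):
        v -= np.mean(v * Theta[:, m]) * Theta[:, m]
    Theta[:, n] = v / np.sqrt(np.mean(v**2))

k = Theta.T @ X / len(X)                       # k_n = mean(theta_n(Z) X)
Y = Theta @ k                                  # optimal polynomial output
mse_poly = np.mean((Y - X) ** 2)

# Linear lagless filter for comparison.
h1 = np.mean(X * (Z - Z.mean())) / Z.var()
mse_lin = np.mean((X.mean() + h1 * (Z - Z.mean()) - X) ** 2)
```

Because the cubic dependence lies inside the polynomial class, mse_poly comes down to roughly the noise variance, well below the linear error.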
We shall now compute σ_y². From (1.138) and the orthogonality of the θ_n, we find

$$\sigma_y^2 = \overline{(Y - m_y)^2} = \overline{(Y - m_x)^2} = \sum_{n=1}^{R} \sigma_x^2 D_n^2.\qquad (1.160)$$

The error of the optimal nonlinear transformation is found by substituting (1.160) into the general equation (1.139):

$$\sigma_e^2 = \sigma_x^2\Big(1 - \sum_{n=1}^{R} D_n^2\Big).\qquad (1.161)$$
The quantities D_n can be found by using experimental data from a model which simulates the characteristics of the polynomials θ_n(z) and the signals X(t) and Z(t), or by analytical means if the statistical characteristics of X(t) and Z(t) and the relations between them are given. If all the D_n = 0 for n ≠ 1, then the optimal transformation is linear. This is true, in particular, when X and Z are normally distributed and when the joint probability density is given by an equation of the form (1.3). We shall write w₂(x, z) in the form of a series in the orthogonal Chebyshev-Hermite polynomials:

$$w_2(x, z) = w_1(x)\, w_1(z) \sum_{n=0}^{\infty} \frac{r^n}{n!}\, H_n\Big(\frac{x - m_x}{\sigma_x}\Big) H_n\Big(\frac{z - m_z}{\sigma_z}\Big),\qquad (1.162)$$

where r is the correlation coefficient of X and Z.
We also recall the fact that the system of polynomials H_n, when normalized, satisfies the conditions (1.152) for the one-dimensional probability density

$$w_1(\xi) = \frac{1}{\sqrt{2\pi}}\, e^{-\xi^2/2}.$$

We now compute the coefficients

$$D_n = \frac{1}{\sigma_x}\, \overline{\theta_n(Z)\, X} = \frac{1}{\sigma_x}\, \overline{\theta_n(Z)(X - m_x)}.$$

Taking (1.162) into account, we can rewrite the expression for D_n as a series whose terms contain the factors

$$b_m = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \xi \exp(-\xi^2/2)\, H_m(\xi)\, d\xi.$$

But ξ = H₁(ξ). Therefore,

$$b_m = 0,\quad m \neq 1;\qquad b_m = 1,\quad m = 1.$$

At the same time, because for a normally distributed Z the polynomial θ_n(σ_z ξ + m_z) is proportional to H_n(ξ), the coefficients D_n vanish for n ≠ 1, which also proves the proposition that the linear transformation is optimal in the class of lagless transformations if the input and output signals are normal.
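The vanishing of the higher coefficients D_n for jointly normal signals, shown above, can be checked by Monte Carlo; the correlation value below is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
r, n = 0.7, 1_000_000                      # assumed correlation coefficient
Z = rng.standard_normal(n)
X = r * Z + np.sqrt(1 - r**2) * rng.standard_normal(n)

# Normalized Chebyshev-Hermite polynomials He_m(z)/sqrt(m!), m = 1, 2, 3.
th1 = Z
th2 = (Z**2 - 1) / np.sqrt(2.0)
th3 = (Z**3 - 3 * Z) / np.sqrt(6.0)

d1 = np.mean(th1 * X)                      # close to r
d2 = np.mean(th2 * X)                      # close to 0
d3 = np.mean(th3 * X)                      # close to 0
```

Only the first-order coefficient survives, so the optimal lagless transformation for this pair reduces to the linear one.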
3. The Nonlinear Transformation with Lag but without Feedback (the Transformation Which Is Reducible to a Linear One). Again, we shall narrow the problem down somewhat by looking for a transformation which can be represented in the form

$$F\{Z\} = \sum_{m=0}^{R} \int_0^{\infty} h_{fm}(\tau)\, \theta_m[Z(t - \tau)]\, d\tau.\qquad (1.163)$$

With this limitation, G{Z} must also be representable in the form

$$G\{Z\} = \sum_{m=0}^{R} \int_0^{\infty} h_{gm}(\tau)\, \theta_m[Z(t - \tau)]\, d\tau,\qquad (1.164)$$

where h_{gm}(τ) is an arbitrary function for τ ≥ 0 and is identically equal to zero for τ < 0. Substitution into the optimality condition (1.136) in this case gives
Since h_{gn} is arbitrary, the following equation is valid:

$$\sum_{m=0}^{R} \int_0^{\infty} h_{fm}(\tau_2)\, C_z^{(nm)}(\tau_1 - \tau_2)\, d\tau_2 = C_x^{(n)}(\tau_1)\qquad (n = 0, 1, \ldots, R),\qquad (1.165)$$

which is a generalization of conditions (1.140) and (1.156). We shall use the fact that

$$C_z^{(0m)}(\tau) = 0,\quad m \neq 0;\qquad = 1,\quad m = 0;\qquad C_z^{(n0)}(\tau) = 0,\quad n \neq 0;\qquad = 1,\quad n = 0.\qquad (1.166)$$

Hence the equation with n = 0 in (1.165) is satisfied identically, and for n = 1, ..., R the system reduces to a system of integral equations of the Wiener-Hopf type (1.167). From Eqs. (1.166) and (1.167) one can then find the final expression for the mean-square error, which is a generalization of (1.161).
Thus, in order to solve the problem of synthesis for nonlinear transformations with lag, it is sufficient to calculate (or to determine experimentally) the functions C_z^{(nm)}(τ) and C_x^{(n)}(τ) and to solve a system of integral equations (1.167) of the Wiener-Hopf type, for example, by the method of undetermined coefficients [110]. Again, we note that if C_z^{(nm)} ≡ 0 when n ≠ 1 and m ≠ 1, and if C_x^{(n)} ≡ 0 when n ≠ 1, then the optimal system will be linear, and the system (1.167) will reduce to (1.141). It is not hard to show that this will be true, in particular, when the processes X and Z are normal. The method of synthesis described above for nonlinear transformations, which uses orthogonal polynomials, leads to some very refined results, especially in the problem of constructing a transformation without lag, when it is possible to obtain a decomposition of the system of equations determining the unknown coefficients of the optimal transformation. However, in applying this method one must also construct the system of orthogonal polynomials θ_n. For this purpose, one can, for example, use the recursion relation (1.169) given in [110].
Obviously, the procedure for constructing the θ_n is rather complicated. Therefore, if we seek an optimal nonlinear transformation with lag in a case where the application of orthogonal polynomials proves fruitless,* then it may be expedient to apply the following method: the optimal transformation may be sought directly in the form

$$F\{Z\} = \sum_{m=0}^{R} \int_0^{\infty} h_m(\tau)\, Z^m(t - \tau)\, d\tau.\qquad (1.170)$$
Using the optimality condition (1.136) leads to the following system of Wiener-Hopf equations for determining the unknown impulse functions h_m:

$$\sum_{m=0}^{R} \int_0^{\infty} h_m(\tau_2)\, B_z^{(nm)}(\tau_1 - \tau_2)\, d\tau_2 = B_x^{(n)}(\tau_1)\qquad (n = 0, 1, \ldots, R),\qquad (1.171)$$

* There is a simplification only when one is calculating the optimal error.
where

$$B_z^{(nm)}(\tau_1 - \tau_2) = \overline{Z^n(t - \tau_1)\, Z^m(t - \tau_2)},\qquad B_x^{(n)}(\tau_1) = \overline{Z^n(t - \tau_1)\, X(t)}.\qquad (1.172)$$
In any case, the solution of this system will be no more complicated than the solution of (1.165). Approximation by the criterion of the minimum mean-square error is not the only method. There are several well-known studies [3, 4, 65] on approximation in the sense of minimizing various probability functionals of the error. In the theory of linear approximation, special significance is given to the criterion which determines the transformation F, with Y = F{Z} required to approximate X, by the condition that the first two moments be equal:

$$m_y = m_x,\qquad (1.173)$$

$$R_y(\tau) = R_x(\tau).\qquad (1.174)$$

The first of these conditions was satisfied also in the synthesis using the criterion of the minimum mean-square error. The second condition leads to another determination of the unknown impulse function h(τ) of the optimal transformation F. Replacing it by the requirement that the spectral densities be equal,

$$S_y(\omega) = S_x(\omega),\qquad (1.175)$$

and using

$$S_y(\omega) = |\Phi(j\omega)|^2\, S_z(\omega),\qquad (1.176)$$

we obtain

$$|\Phi(j\omega)|^2 = \frac{S_x(\omega)}{S_z(\omega)},\qquad (1.177)$$

which expresses the amplitude-frequency characteristic of the desired optimal transformation in terms of the spectral densities S_z(ω) and S_x(ω) of the processes Z and X.
We expand the right-hand side of the expression (1.177) into factors:

$$\frac{S_x(\omega)}{S_z(\omega)} = \Psi(j\omega)\, \Psi(-j\omega).$$

Then, from the requirement that the system be physically realizable, we see that the zeros and poles of the frequency characteristic must be located in the lower half of the complex plane in the variable ω (cf. [19, 51], etc.); we can, therefore, find an expression for the frequency characteristic of the optimal linear transformation:

$$\Phi(j\omega) = \Psi(j\omega).\qquad (1.178)$$

The parameters of the optimal lagless transformation

$$Y = F\{Z\} = h_0 m_z + h_1 Z^0$$

can easily be shown to be determined by

$$h_0 = \frac{m_x}{m_z},\qquad h_1 = \frac{\sigma_x}{\sigma_z},\qquad (1.179)$$

if we require that Ȳ = X̄ and that the second moments of Y and X be equal. An attempt to solve in an analogous manner the synthesis problem for nonlinear transformations of the type (1.142), that is, by starting with the requirement that the moments of order 1, 2, ..., R be equal, obviously will not lead to reasonable results.

1.4. The Application of Methods of Synthesis. Nonlinear Filters
The general synthesis method described in Section 1.3 can be applied in solving several classes of problems, of which the following are the most important: (1) filtering or, in the more general case, the optimal transformation of a signal which is mixed with interference; (2) parallel compensation for a given system;
(3) the statistical determination of the characteristics of the transformation which the system realizes; (4) statistical linearization of nonlinear transformations. The most extensively studied problem is the one concerning optimal filtering. The filtering problem is the problem of separating two waveforms S(t) and N(t), where S(t) is considered a distinguishable signal and where N(t) is considered to be interference, the effect of which on the output signal Y(t) must be reduced to a minimum. Often, the "pure" filter problem is associated with the problem of transforming the signal. Hence, in the general case, one can assume that the signal which is to be transformed is some definite function of the inputs S(t) and N(t),

$$Z(t) = L[S(t), N(t)],\qquad (1.180)$$

and the signal X(t), which must be optimally approximated by the output of the filter,

$$Y = F\{Z\},\qquad (1.181)$$

is related to the "useful" component of the input signal by a given equation,

$$X(t) = H\{S(t)\}.\qquad (1.182)$$

The cases of practical importance are as follows: (a) the signal and the noise are related additively,

$$Z(t) = S(t) + N(t);\qquad (1.183)$$

(b) the signal and the noise are related multiplicatively,

$$Z(t) = S(t)\, N(t).\qquad (1.184)$$

We study the main method of constructing nonlinear filters when the signal S(t) and the noise N(t) are related additively and are statistically independent. By assuming that X(t) = S(t), we obtain the "pure" filter problem. We shall now give equations for the generalized moments B_z^{(nm)} and B_x^{(n)}.
We use the following notation: Z₁ = Z(t − τ₁), Z₂ = Z(t − τ₂), and similarly for S and N. Then

$$B_z^{(nm)} = \overline{Z_1^n Z_2^m} = \overline{(S_1 + N_1)^n (S_2 + N_2)^m} = \sum_{i=0}^{n} \sum_{k=0}^{m} C_n^i C_m^k\; \overline{S_1^i S_2^k}\;\; \overline{N_1^{n-i} N_2^{m-k}}\qquad (1.185)$$

and

$$B_x^{(n)} = \overline{Z_1^n X} = \overline{(S_1 + N_1)^n S} = \sum_{i=0}^{n} C_n^i\; \overline{S_1^i S}\;\; \overline{N_1^{n-i}}.\qquad (1.186)$$

The solution can be derived further by using Eq. (1.171), where the coefficients represent directly the moments B_z^{(nm)} and B_x^{(n)}, or by using Eq. (1.165) and performing the preliminary construction of the system of orthogonal polynomials θ_n(Z), for which one must also use the moments B_z^{(nm)}. We shall next use both of these schemes.
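Equation (1.185) can be checked numerically for one pair (n, m). The AR(1) models chosen for S and N below are illustrative assumptions of mine, not part of the examples that follow.

```python
import numpy as np
from math import comb

def ar1(rho, n, rng):
    """Stationary zero-mean AR(1) sequence with unit variance."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    w = rng.standard_normal(n) * np.sqrt(1.0 - rho**2)
    for k in range(1, n):
        x[k] = rho * x[k - 1] + w[k]
    return x

rng = np.random.default_rng(8)
T, lag, n_, m_ = 200_000, 3, 2, 2
S = ar1(0.6, T, rng)                     # assumed signal model
N = ar1(0.3, T, rng)                     # assumed independent noise model
Z = S + N

# Left side: the generalized moment of the observed mixture.
lhs = np.mean(Z[:-lag] ** n_ * Z[lag:] ** m_)
# Right side: the binomial expansion (1.185), using independence of S and N.
rhs = sum(
    comb(n_, i) * comb(m_, j)
    * np.mean(S[:-lag] ** i * S[lag:] ** j)
    * np.mean(N[:-lag] ** (n_ - i) * N[lag:] ** (m_ - j))
    for i in range(n_ + 1)
    for j in range(m_ + 1)
)
rel_err = abs(lhs - rhs) / abs(rhs)
```

The two sides agree up to Monte Carlo error, confirming that only the separate moments of S and N are needed.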
Example 1. The problem is to find the filter for a harmonic signal with a random phase in the presence of a stationary normal noise. The amplitude A of the signal and the variance σ_n² of the interference (m_n = 0) are given. Neither the frequency of the signal nor the spectral composition of the noise is known. Thus, filtering by linear stages with lag, based on the use of spectral properties, will be ineffective. We proceed to find the optimal nonlinear filter, limiting ourselves at the same time, for the sake of simplicity, to the class of polynomials of third order:

$$Y = f(Z) = a_1 Z + a_3 Z^3.\qquad (1.187)$$
The terms of even degree vanish because the distribution of the input signal is symmetric. The coefficients a₁ and a₃ of the optimal filter can be determined from the condition for the minimum mean-square error, which gives the equations

$$a_1 \overline{Z^2} + a_3 \overline{Z^4} = \overline{ZX},\qquad a_1 \overline{Z^4} + a_3 \overline{Z^6} = \overline{Z^3 X},\qquad (1.188)$$

where Z = S + N and X = S. We compute the moments \(\overline{Z^n}\) and \(\overline{Z^n X}\), which are the coefficients in these equations. Assuming that the signal and the noise are independent, we obtain

$$\overline{Z^{2m}} = \sum_{i=0}^{m} C_{2m}^{2i}\; \overline{S^{2i}}\;\; \overline{N^{2m-2i}}.\qquad (1.189)$$
Considering that

$$\overline{N^{2m}} = \frac{1}{\sqrt{2\pi}\,\sigma_n} \int_{-\infty}^{\infty} N^{2m} \exp\Big(-\frac{N^2}{2\sigma_n^2}\Big)\, dN = (2m - 1)!!\;\sigma_n^{2m},\qquad (1.190)$$

$$\overline{S^{2m}} = \frac{A^{2m}}{2\pi} \int_0^{2\pi} \sin^{2m}(\omega t + \varphi)\, d\varphi = A^{2m}\, \frac{(2m - 1)!!}{2^m m!},\qquad (1.191)$$

we find, finally,

$$\overline{Z^2} = \frac{A^2}{2} + \sigma_n^2,\qquad \overline{Z^4} = \frac{3A^4}{8} + 3A^2\sigma_n^2 + 3\sigma_n^4,$$

$$\overline{Z^6} = \frac{5A^6}{16} + \frac{45}{8}A^4\sigma_n^2 + \frac{45}{2}A^2\sigma_n^4 + 15\sigma_n^6,$$

$$\overline{ZX} = \frac{A^2}{2},\qquad \overline{Z^3X} = \frac{3A^4}{8} + \frac{3}{2}A^2\sigma_n^2.\qquad (1.192)$$
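The system (1.188) can be solved numerically from these moments; taking A = 1, with the noise level an arbitrary choice of mine:

```python
import numpy as np

A, s = 1.0, 0.1                      # s = (sigma_n / A)^2, arbitrary choice

# Moments (1.190)-(1.192):
Z2 = A**2 / 2 + s
Z4 = 3 * A**4 / 8 + 3 * A**2 * s + 3 * s**2
Z6 = 5 * A**6 / 16 + 45 * A**4 * s / 8 + 45 * A**2 * s**2 / 2 + 15 * s**3
ZX = A**2 / 2
Z3X = 3 * A**4 / 8 + 1.5 * A**2 * s

# System (1.188) for the cubic filter Y = a1 Z + a3 Z^3.
a1, a3 = np.linalg.solve(np.array([[Z2, Z4], [Z4, Z6]]), np.array([ZX, Z3X]))

mse_cubic = A**2 / 2 - a1 * ZX - a3 * Z3X   # mean-square error of the cubic
mse_lin = s / (1 + 2 * s)                    # linear lagless filter, Eq. (1.195)
```

At this noise level the cubic term is negative (it flattens the characteristic near the signal peaks) and the cubic filter improves on the linear one.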
It is obvious that the quantities a₁ and a₃A² depend only on the ratio (σ_n/A)². The calculations of these quantities for various values of (σ_n/A)² are shown in the accompanying table.
(σ_n/A)²     ∞       4            1        0.1      0.01     0
a₁           0       0.119        0.328    1.08     1.13     1
a₃A²         0       −6.30×10⁻⁴   −0.114   −0.163   −1.18    0
(σ_e/A)²     0.500   0.444        0.284    0.061    0.009    0
(σ_el/A)²    0.500   0.444        0.333    0.083    0.010    0
The table also shows the values of the relative error of the filter, (σ_e/A)², as computed from the equation

$$\sigma_e^2 = \overline{X^2} - a_1\, \overline{ZX} - a_3\, \overline{Z^3X},\qquad (1.193)$$

and, also, the errors for the optimal linear lagless transformation

$$Y = a_1 Z.\qquad (1.194)$$

In this case,

$$a_1 = \frac{1}{1 + 2(\sigma_n/A)^2},\qquad \frac{\sigma_{el}^2}{A^2} = \frac{(\sigma_n/A)^2}{1 + 2(\sigma_n/A)^2}.\qquad (1.195)$$
The figures in the table show that the introduction of the nonlinearity renders a significant reduction in the error only when σ_n/A < 1. This example clearly distinguishes between the filter problem as a problem of finding the best approximation and the filter problem as a problem of increasing the ratio of the power of the signal to that of the noise (the signal-to-noise ratio). It is obvious that for the second problem a linear lagless transformation serves no purpose, even though the improvement of the approximation in the sense of the mean-square error turns out to be significant. Therefore, in problems
where special importance is attached to the form of the signal, the mean-square error criterion may not be acceptable.
Example 2. We suppose that the signal S can take only the values +a and −a with equal probability,

$$w_1(S) = \tfrac{1}{2}\big[\delta(S - a) + \delta(S + a)\big].$$

The noise has a normal distribution with a mean-square deviation of σ_n. The signal and noise are mutually independent and additive. For the moments \(\overline{S^{2m}}\) and \(\overline{N^{2m}}\) we have Eq. (1.190) and

$$\overline{S^{2m}} = a^{2m}.\qquad (1.196)$$

Let us give the results of the calculations for the system of polynomials θ_n(Z) (n = 1, 3, 5) according to Eq. (1.169) for the case when a² = 0.8 and σ_n² = 0.2:
$$\theta_1(Z) = Z,\qquad \theta_3(Z) = 0.984 Z^3 - 1.69 Z,\qquad \theta_5(Z) = 0.612 Z^5 - 2.78 Z^3 + 2.31 Z.$$

The coefficients σ_x D_n = \(\overline{\theta_n(Z) X}\) in this case are

$$\sigma_x D_1 = 0.8,\qquad \sigma_x D_3 = 0.252,\qquad \sigma_x D_5 = 0.141.$$
The equation for the optimal filter (in the class of fifth-order polynomials) has the form

$$Y = 1.55 Z - 0.64 Z^3 + 0.087 Z^5.$$

Graphs of the optimal filters constructed for various values of the ratio a/σ_n (where a² + σ_n² = 1) are shown in Fig. 14 (continuous lines). Graphs of the exact solution,

$$Y = a \tanh\frac{aZ}{\sigma_n^2},\qquad (1.197)$$

are shown by dotted lines. This is a simple result for this particular problem [51, 110]. The dependence of the mean-square error on the ratio a/σ_n (for the constructed filters of the fifth order) is shown in Fig. 15 (Curve 1); the errors σ_e are referred to the error σ_el of the optimal linear filter. The improvement when a/σ_n is large (i.e., when the noise level is low) turns out to be very significant.
FIGURE 14    FIGURE 15
Comparing the form of the characteristics of the fifth order with the form of the exact solution (the optimal filter in the class of arbitrary functions) shows clearly that the approximation by a polynomial of the fifth order is not very good. However, one can see (Fig. 15, Curve 3) that the use of a filter constructed according to the exact solution (1.197) does not decrease the error significantly for small a/σ_n. Even when a/σ_n = 2, we have (σ_e/σ_el)² = 0.45 for filters of the fifth order and (σ_e/σ_el)² = 0.34 for a filter constructed according to (1.197). Only for large a/σ_n does the difference become significant: for a/σ_n = 4, we obtain according to the exact solution (σ_e/σ_el)² = 0.004. We also note that for large ratios a/σ_n in this problem it is appropriate to apply, as a nonlinear filter, the simple relay*

$$Y = \alpha\, \mathrm{sgn}\, Z,\qquad (1.198)$$

where the coefficient α is chosen in a corresponding manner.
* The choice of the relay characteristic becomes clear if one treats the resulting problem as a problem of potential interference-stability for the reception of a signal which takes the values ±a [36].
It can be shown that for a filter with an arbitrary odd characteristic f(Z) the error in the filtration of the given input signal is equal to

$$\sigma_e^2 = \int_{-\infty}^{\infty} [a - f(n + a)]^2\, w_1(n)\, dn,$$

where w₁(n) is the probability density of the noise. For the characteristic (1.198) with the optimal choice of α we have

$$\sigma_e^2 = a^2 - \alpha_{\mathrm{opt}}^2,\qquad (1.199)$$

where

$$\alpha_{\mathrm{opt}} = a\Big[2\Phi\Big(\frac{a}{\sigma_n}\Big) - 1\Big],$$

and Φ is the normal distribution function.
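A Monte Carlo comparison (my own sketch) of the exact filter (1.197), the relay (1.198) with this α_opt, and the optimal homogeneous linear gain; the ratio a/σ_n = 2 is an arbitrary choice.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(6)
a, sig_n, n = 2.0, 1.0, 500_000            # a / sigma_n = 2, arbitrary
S = a * np.where(rng.random(n) < 0.5, -1.0, 1.0)
Z = S + sig_n * rng.standard_normal(n)

# Exact conditional-mean filter (1.197).
Y_tanh = a * np.tanh(a * Z / sig_n**2)
# Relay (1.198) with the optimal gain; note 2*Phi(x) - 1 = erf(x / sqrt(2)).
alpha_opt = a * erf(a / (sig_n * sqrt(2.0)))
Y_relay = alpha_opt * np.sign(Z)
# Optimal homogeneous linear gain, Eq. (1.147).
hc = np.mean(S * Z) / np.mean(Z**2)

mse_tanh = np.mean((Y_tanh - S) ** 2)
mse_relay = np.mean((Y_relay - S) ** 2)
mse_lin = np.mean((hc * Z - S) ** 2)
```

The ordering mse_tanh < mse_relay < mse_lin is consistent with the qualitative conclusions drawn from Fig. 15, and the relay error agrees with the formula (1.199).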
Computations (Fig. 15, Curve 2) show that, when a/σ_n ≥ 1.5, filtering by the relay is always more effective (when a/σ_n = 4, the quantity σ_e²/σ_el² = 0.02) than filtering by a device which uses a characteristic in the form of a fifth-order polynomial. The problem of constructing the optimal nonlinear filter with lag, with the restriction to fifth-order polynomials, is discussed by Lubbock [45]. It is assumed there that the signal and the noise have the same correlation function, ρ_s = ρ_n = e^{−β|τ|}. The effect of introducing linear circuits turns out to be insignificant; it is smaller than the effect of passing from polynomials of the fifth order to the lagless transformation (1.197). Let us now consider the problem of parallel compensation. Let the system carry out a given transformation
$$U_0 = F_0\{Z\}.\qquad (1.200)$$

It is necessary to construct a transformation

$$U_1 = F_1\{Z\},\qquad (1.201)$$

such that the summed signal

$$U = U_0 + U_1 = F_0\{Z\} + F_1\{Z\}\qquad (1.202)$$

approximates some given S in the best possible way.*

* This problem can be considered as a special case of the general synthesis problem if, in looking for the optimal F₁, one takes the desired signal X to be given by the relation X = S − F₀{Z}.
Thus, the problem consists of designing an optimum parallel network F₁ [Fig. 16(a)] which compensates for the dynamic properties of the original system. It is obvious that the problem does not become more complicated mathematically if one introduces a more general type of compensation [Fig. 16(b)], where the input signal of the parallel stage is not the input signal of the system F₀ but, instead, is either some intermediate signal acting within the system or the output of F₀. If the system F₀ is given, and if the statistical properties of the signal which is the input for the parallel link, and also the properties of the output signal U₀, can be measured, then the general form of the transformation F₀ is unimportant for the solution of the synthesis problem. The indicated technique can be used, in particular, to construct parallel correcting links and to introduce a disturbance effect (compounding) in systems with feedback. The following basic synthesis problem, and especially the problem of the statistical determination of the characteristics of a transformation which is carried out within a given system, is similar to the problem of compensation. In this case, it is assumed that only the input and output signals of the system can be measured and that its internal structure is unknown (the system is considered to be a "black box"). The problem is to select a transformation F which will best reproduce the transformation of Z into X which is performed by the system. The mathematical formulation of the problem, obviously, is exactly the same as the general formulation of the synthesis problem. We note that the result of the synthesis, that is, the transformation F which best replaces the system performance, depends greatly on the characteristics of the input signal Z. Therefore, it is appropriate to choose as our Z that input signal which will be in use for the system we are studying under
real working conditions. If its form is not known, it will be convenient to introduce as an "experimental" signal normal white noise, for which the polynomials θ_n are the Chebyshev-Hermite polynomials and C_z^{(nm)}(τ) ≡ 0 when n ≠ m. Then the system of equations (1.167) which determines the functions h_{fm}(τ) decomposes into a series of separate equations which coincide exactly with the classical Wiener-Hopf equations.

1.5. Statistical Linearization
The problem of statistical linearization consists in finding the best description of a given nonlinear transformation in terms of a linear transformation. Its solution is important for studying complex systems into which mechanisms performing nonlinear transformations (which, in turn, can be linearized) enter as elements. Such systems are often encountered among feedback systems, which are of fundamental importance in the theory of automatic control. Replacing a nonlinear element by a linear transformation (the parameters of which will naturally depend on the statistical characteristics of the input signal) makes it possible, in the study of closed nonlinear systems, to use the methods of linear analysis, which greatly simplifies the problem. This section will only give a description of the method of constructing a statistically linearized transformation; the application to calculations for closed systems will be given in the following chapter. The problem of statistical linearization can be formulated mathematically in the following manner. Let F₀ be a given nonlinear transformation and let F be linear. We must select an F such that the signal Y = F{Z} approximates the signal X = F₀{Z} in the best possible way. It is obvious that this is a special case of the general synthesis problem of finding transformations which give the best approximation. We limit ourselves to a detailed study of the case where the linearized transformation approximates a nonlinear lagless transformation,

$$X = f_0(Z).\qquad (1.203)$$

In the analysis of an open-loop nonlinear transformation with lag given by equations of the type (1.163), it is convenient to linearize
directly the elementary lagless transformations which are part of it. In more general cases, it is possible to apply directly Eq. (1.141) or (1.178), provided one can successfully determine, theoretically or experimentally, the characteristics of the input and output signals of the transformation which is being linearized. First, let us study a solution of the problem of statistical linearization which is based on the criterion of the minimum mean-square deviation. The solution is given by the conditions (1.140) and (1.141), which here take on the following form:

$$\bar{h} + \bar{Z} \int_0^{\infty} h(\tau)\, d\tau = \overline{f_0(Z)}\qquad (1.204)$$

and

$$\int_0^{\infty} h(\tau)\, R_z(\tau_1 - \tau)\, d\tau = R_{zx}(\tau_1)\qquad (\tau_1 \geq 0).\qquad (1.205)$$

Thus, in the general case the solution reduces to the computation of the moment characteristic R_{zx}(τ) and the solution of the Wiener-Hopf equation. If the process Z(t) is normal, the solution is quite simple. In fact, in Section 1.1 it was shown that, for a lagless nonlinear transformation of a normal process, the cross-correlation function R_{zx}(τ) of the input signal Z(t) and the output signal X(t) is proportional to the autocorrelation function R_z(τ):

$$R_{zx}(\tau) = a_1 R_z(\tau),\qquad (1.206)$$

where

$$a_1 = \frac{\overline{f_0(Z)\, Z^0}}{\sigma_z^2}.$$

Substituting (1.206) into the condition (1.205), we see that it is satisfied when h(τ) = a₁δ(τ). Hence, when Z(t) is normal, the optimal approximation is a lagless transformation,

$$Y = h_0 m_z + h_1 Z^0,\qquad (1.207)$$

where h₀ = m_x/m_z and h₁ = a₁. It is not difficult to see that the coefficients of the optimal transformation depend only on the parameters m_z and σ_z. We shall compute the approximate value of the correlation function of the output signal X(t) of the nonlinear transformation f₀(Z) from its approximation (1.207):
M
Ry(7) = hz2R2(T)= a12p2(T).
(1.208)
By comparing this expression with the exact expression for R,(T), which was found in Section 1.1,
we see that R,(T) coincides with the first term of the exact series expansion. When p ( T ) > 0, it is always true that R&) > RW(4
(1.209)
that is, statistical linearization by the criterion of the minimum mean-square error gives a value of the correlation function of the output signal which is too low. The inaccuracy (at least when m_z = 0) is not very great since, as shown in Sections 1.1 and 1.2, only the first term is of great importance in the determination of R_x(τ) and, consequently, in the determination of the spectral density S_x(ω), which as a result of the statistical linearization turns out to coincide in form with S_z(ω). As shown by Barrett and Lampard [100], the proportionality of R_xz(τ) and R_z(τ) holds for a whole class of signals whose two-dimensional probability densities w_2(z_1, z_2) can be uniquely expanded in a series of the form

w_2(z_1, z_2) = w_1(z_1) w_1(z_2) Σ_{n=0}^{∞} A_n(τ) θ_n(z_1) θ_n(z_2),   (1.210)

where the θ_n(z) are polynomials orthogonal with the weight w_1(z) on the interval where z is defined.
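As a numerical check on these relations, the coefficients a_n can be computed directly. The following sketch (function names are ours; it assumes NumPy's probabilists' Hermite module) evaluates a_n = M[f_0(σ_z U) He_n(U)]/√(n!) by Gauss–Hermite quadrature, so that R_x(τ) = Σ_n a_n² ρⁿ(τ); for f_0(z) = z³ with σ_z = 1 it reproduces a_1² = 9 and a_3² = 6.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def hermite_coeffs(f, sigma, nmax=6, quad=80):
    """a_n such that R_x(tau) = sum_{n>=1} a_n^2 * rho(tau)^n
    for X = f(Z), Z a zero-mean normal signal with std sigma."""
    u, w = hermegauss(quad)            # nodes/weights for weight exp(-u^2/2)
    w = w / np.sqrt(2.0 * np.pi)       # normalize to the N(0,1) measure
    fu = f(sigma * u)
    a = []
    for n in range(nmax + 1):
        basis = np.zeros(nmax + 1)
        basis[n] = 1.0                               # selects He_n
        En = np.sum(w * fu * hermeval(u, basis))     # M[f(sigma U) He_n(U)]
        a.append(En / math.sqrt(math.factorial(n)))
    return np.array(a)

a = hermite_coeffs(lambda z: z ** 3, sigma=1.0)
# a[1]**2 = 9 and a[3]**2 = 6; by Parseval, sum of a_n^2 equals M[f(Z)^2] = 15
```

For a general σ_z the same routine gives a_1 = 3σ_z³ and a_3 = √6 σ_z³, the values used again in Example 1 below.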
Naturally, for these signals the optimality of a lagless linear approximation holds within the general class of linear approximations. Together with the normal signal, this class contains the harmonic signal with a random phase,

Z = a sin(ωt + Ψ).

Without showing that its two-dimensional probability density can be expanded in the series (1.210), we shall compute R_xz(τ) and R_z(τ) directly. From (1.53) we have

R_xz(τ) = (a cos ωτ/2π) ∫_0^{2π} f_0(a sin t) sin t dt,    R_z(τ) = (a²/2) cos ωτ.   (1.211)

Hence, the best linear approximation is given by the equation

Y = m_y + h_1 Z,   (1.212)

where

m_y = (1/2π) ∫_0^{2π} f_0(a sin t) dt,    h_1 = (1/πa) ∫_0^{2π} f_0(a sin t) sin t dt.
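The coefficients m_y and h_1 for a harmonic input are easy to evaluate numerically. The sketch below (function name ours) computes them as period averages on a uniform grid; for an ideal relay f_0(x) = sgn x it recovers the classical describing-function gain 4/(πa).

```python
import numpy as np

def harmonic_coeffs(f0, a, npts=200000):
    """m_y = (1/2pi) int_0^{2pi} f0(a sin t) dt,
       h_1 = (1/(pi a)) int_0^{2pi} f0(a sin t) sin t dt,
    both computed as averages over one period."""
    t = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    y = f0(a * np.sin(t))
    m_y = np.mean(y)                       # the integral divided by 2*pi
    h_1 = 2.0 * np.mean(y * np.sin(t)) / a
    return m_y, h_1

m_y, h_1 = harmonic_coeffs(np.sign, a=2.0)   # ideal relay with unit output
# h_1 is close to 4/(pi*a) = 2/pi = 0.6366..., and m_y is close to 0
```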
The right-hand side of (1.212) is the sum of the first two terms of the Fourier series of the function f_0(a sin t); this means that, for the nonlinear transformation of deterministic harmonic signals, the method of statistical linearization is equivalent to harmonic linearization. We note that harmonic linearization can thus be regarded as the best approximation in the sense of the minimum mean-square deviation, where the mean is taken with respect to time over a period. This fact is a special case of the so-called minimal property of the coefficients of a Fourier series. Statistical linearization by the method described above can be
difficult or simple according to the nature of the problem. Thus, in several cases it is convenient to use an approximation by the simpler homogeneous linear transformation

Y = h_2 Z.   (1.213)

From (1.148) the transmission coefficient is equal to

h_2 = M[XZ]/M[Z²],   (1.214)

which is common to both the mean and the random component. In problems where a nonlinear transformation has several input signals, that is, where

X = f(Z_0, Z_1, ..., Z_n),   (1.215)
it is usually reasonable to look for a linear approximation in the form

Y = Σ_{i=0}^{n} h_i Z_i.   (1.216)

We shall assume that the signals Z_i (i = 1, 2, ..., n) are random and mutually uncorrelated, that m_{z_i} = 0, and that the signal Z_0 does not have a random component. Then

h_0 = m_x/Z_0,    h_i = M[X Z_i]/σ_{z_i}²   (i = 1, 2, ..., n).   (1.217)
As an illustration, we take a nonlinear transformation of a sum of statistically independent signals, one normal and the other harmonic with a random phase:

Z = ζ + a sin(ωt + Ψ) = m_ζ + ζ⁰ + a sin(ωt + Ψ).

Approximating the nonlinear transformation f(Z) by the linear one

Y = h_0 m_ζ + h_1 a sin(ωt + Ψ) + h_2 ζ⁰,   (1.218)
from Eq. (1.217) we find that
I t is not hard to see that the coefficients ho , h, and h, are simply related to the coefficients of the exact equation (1.68) for B,(T) : h
-lo!!
O -
h,
m5'
2hOl
=, U
h,
=
h,, ,
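The gains of (1.217) for this two-input case can be obtained by direct numerical averaging. The sketch below (names ours) averages over the phase Ψ on a uniform grid and over ζ⁰ by Gauss–Hermite quadrature; for the test case f(Z) = Z³ with m_ζ = 0 the exact values are h_1 = 3σ_ζ² + (3/4)a² and h_2 = 3σ_ζ² + (3/2)a².

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def two_input_gains(f, m0, a, sigma, nq=64, nphi=4096):
    """h_i = M[X Z_i]/sigma_i^2 for X = f(m0 + zeta + a sin(phase)):
       Z_1 = a sin(phase), sigma_1^2 = a^2/2 (harmonic component),
       Z_2 = zeta, a zero-mean normal signal with variance sigma^2."""
    u, w = hermegauss(nq)
    w = w / math.sqrt(2.0 * math.pi)        # N(0,1) quadrature weights
    phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
    s = a * np.sin(phi)                     # samples of the harmonic input
    X = f(m0 + sigma * u[:, None] + s[None, :])
    EXs = float((w[:, None] * X * s[None, :]).sum() / nphi)            # M[X Z_1]
    EXz = float((w[:, None] * X * (sigma * u)[:, None]).sum() / nphi)  # M[X Z_2]
    return EXs / (a * a / 2.0), EXz / (sigma * sigma)

h1, h2 = two_input_gains(lambda z: z ** 3, m0=0.0, a=1.0, sigma=1.0)
# h1 = 3*sigma^2 + 0.75*a^2 = 3.75 and h2 = 3*sigma^2 + 1.5*a^2 = 4.5
```

The asymmetry between h_1 and h_2 (the harmonic and random components see different gains) is precisely the effect tabulated in Appendix IV.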
R_x(τ) computed from (1.218) corresponds to the first three terms of the series (1.69). Graphs showing the dependence of the coefficients h_1 and h_2 on the parameters σ_ζ and a (when m_ζ = 0) for several typical nonlinearities are given in Appendix IV. As another example, we shall look at the nonlinear transformation of a normal signal and of its derivative:
x = f(Z, P Z ) .
(1.220)
We look for its approximation in the form Y
= h,Z
+ h,pZ
= h,Z,
+ h,Z,,
(1.221)
where 2, = p Z . Here, we shall assume, for the sake of simplicity, that m, = 0. Then
fkl * ~ , ) ~ l W l ( ~ l > W , ( ~dz, , ) dzz h,
where
1
= -5
02
j j "
--I,
"
-n
9
(1.222) f(z1
?
~ Z ) ~ Z W l ( ~ l ) W Z ( ~d Zz ,)
dz,
7
Again, just as in Section 1.1, we note that these equations are not applicable to multivalued nonlinear dependencies with branching characteristics. The approximate expression for the transfer coefficient with respect to the random component for a nonlinearity of this type, derived under the same assumptions as the expression (1.19) for the expectation, has the form

σ_z² h_1 = ∫_{−∞}^{∞} (z − m_z)[μ_1 f_1(z) + μ_2 f_2(z)] w_1(z) dz,   (1.223)

where we use the same notation as for the branches of the multivalued function in (1.19). For a relay with a symmetric hysteresis loop of width 2Δ, we obtain the corresponding expression for h_1.
Next we develop the method of statistical linearization based on approximation with respect to the criterion that the first two moments be equal. The requirement of equality of the expectations and of the correlation functions actually guarantees an exact description of the nonlinear transformation within the bounds of correlation theory. The solution is given by Eqs. (1.173), (1.144) and (1.178).
The problem of finding the optimal linear transformation reduces to computing the mean value and the spectral density of the signal at the output of the lagless nonlinear transformation and then factoring the ratio S_x(ω)/S_z(ω). The first part of this problem was discussed in detail in Sections 1.1 and 1.2. The second part is also not very difficult; it is well known from many other problems in the theory of automatic systems.
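The factorization step can be sketched numerically. Assuming, as in Example 1 below, R_x(τ) = a_1² e^(−θ|τ|) + a_3² e^(−3θ|τ|) and S_z(ω) = 2θσ_z²/(ω² + θ²), the ratio S_x/S_z is rational in ω², and the stable factor Φ(jω) = k(T_1 jω + 1)/(T_2 jω + 1) can be checked pointwise (helper names ours; a_1² = 9 and a_3² = 6 correspond to f_0(z) = z³ with σ_z = 1).

```python
import numpy as np

def stable_factor(a1sq, a3sq, sigma, theta):
    """Stable spectral factor of S_x(w)/S_z(w), form k(T1 jw + 1)/(T2 jw + 1)."""
    k = np.sqrt(a1sq + a3sq / 3.0) / sigma
    T1 = np.sqrt((a1sq + 3.0 * a3sq) / (3.0 * (3.0 * a1sq + a3sq))) / theta
    T2 = 1.0 / (3.0 * theta)
    return k, T1, T2

def density_ratio(w, a1sq, a3sq, sigma, theta):
    Sx = 2*theta*a1sq/(w**2 + theta**2) + 6*theta*a3sq/(w**2 + 9*theta**2)
    Sz = 2*theta*sigma**2/(w**2 + theta**2)
    return Sx / Sz

k, T1, T2 = stable_factor(9.0, 6.0, 1.0, 1.0)
w = np.linspace(0.0, 10.0, 201)
err = np.max(np.abs(k**2 * (T1**2 * w**2 + 1) / (T2**2 * w**2 + 1)
                    - density_ratio(w, 9.0, 6.0, 1.0, 1.0)))
# err is at machine-precision level: |Phi(jw)|^2 reproduces S_x/S_z pointwise
```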
Example 1. Let f_0(Z) = Z³, and let Z(t) be a stationary random process with the spectral density

S_z(ω) = 2θσ_z²/(ω² + θ²),

that is, with ρ_z(τ) = e^(−θ|τ|).
We compute the correlation function of the output signal:

R_x(τ) = a_1² ρ_z(τ) + a_3² ρ_z³(τ) = a_1² e^(−θ|τ|) + a_3² e^(−3θ|τ|).

The corresponding spectral density is equal to

S_x(ω) = 2θ a_1²/(ω² + θ²) + 6θ a_3²/(ω² + 9θ²).

Hence, we find that

S_x(ω)/S_z(ω) = [(a_1² + 3a_3²) ω² + 3θ²(3a_1² + a_3²)]/[σ_z²(ω² + 9θ²)].

Separating out the factor whose roots lie in the lower half-plane, we obtain, finally,

Φ(jω) = k (T_1 jω + 1)/(T_2 jω + 1),

where

k(σ_z) = (1/σ_z) √(a_1² + a_3²/3)

is the statistical transfer constant, and

T_1 = (1/θ) √[(a_1² + 3a_3²)/(3(3a_1² + a_3²))],    T_2 = 1/(3θ)
are the time constants of the optimal linear approximation. It is often convenient to use a weaker criterion of approximation, namely, equality of the mean values and variances of the output signals of the approximating linear lagless transformation

Y = h_0 m_z + h_1* Z⁰

and of the given nonlinear transformation

X = f(Z).

From the requirements

m_y = m_x,    σ_y² = σ_x²

we immediately obtain equations for the coefficients h_0 and h_1*:

h_0(m_z, σ_z) = m_x/m_z,   (1.227)
h_1*(m_z, σ_z) = σ_x/σ_z.   (1.228)

We compute the correlation function of the signal Y at the output of the approximating transformation:

R_y(τ) = (h_1*)² R_z(τ) = σ_x² ρ_z(τ).
Let Z(t) have a normal distribution. Then

σ_x² = Σ_{n=1}^{∞} a_n²

and

R_x(τ) = Σ_{n=1}^{∞} a_n² ρ_zⁿ(τ).

Hence, we have the inequality

R_y(τ) > R_x(τ)

where ρ_z(τ) > 0; this means that a linear approximation constructed by this criterion gives an estimate of the correlation function lying above the correlation function of the actual output signal, whereas the linear approximation by the criterion of the minimum mean-square deviation gives an estimate lying below it [cf. Eq. (1.209)]. The first approximation is better for large ρ_z(τ) (that is, for small τ), and the second is better for small ρ_z(τ) (that is, for large τ). In general, the first case is the more important one, because the computed values of ρ_z(τ) are usually more reliable for small τ; this has been borne out by experimental data on input disturbances (cf., for example, Pugachev [65]). However, the computation of the transfer constant with respect to the random component* by the minimum mean-square deviation criterion is simpler, because h_1 is given by a linear operation on f(z), which makes it easier to tabulate. At the same time, if the coefficient h_1* has already been computed (the dependence h_1*(m_z, σ_z) for typical nonlinearities is given by Pugachev [65]), then one can use the averaged coefficient

(h_1 + h_1*)/2.
FIGURE 17
* The coefficient h_0 is, obviously, the same no matter which criterion is used.
To illustrate the effectiveness of this technique, graphs of the correlation function of the signal at the output of a relay, for which R_z(τ) = exp(−|τ|) and m_z = 0, are shown in Fig. 17 for the following cases:

(1) the exact solution, Eq. (1.29);
(2) the approximation by the minimum mean-square deviation;
(3) the approximation by equating variances; and
(4) the approximation with the averaged coefficient for linearization.
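The comparison of Fig. 17 is easy to reproduce numerically. For an ideal relay f(x) = l sgn x driven by a zero-mean normal signal, the exact output correlation is the arcsine law R_x(τ) = (2l²/π) arcsin ρ_z(τ) (which we take to be the content of Eq. (1.29)); the linearizations replace it by gain-squared multiples of R_z(τ). The sketch below (variable names ours) evaluates the four curves for R_z(τ) = e^(−|τ|).

```python
import numpy as np

l, sigma = 1.0, 1.0
tau = np.linspace(0.0, 2.0, 201)
rho = np.exp(-tau)                           # rho_z(tau) for R_z = sigma^2 e^{-|tau|}

R_exact = (2.0 * l**2 / np.pi) * np.arcsin(rho)   # arcsine law (exact solution)
h1  = (l / sigma) * np.sqrt(2.0 / np.pi)          # minimum mean-square gain
h1s = l / sigma                                   # variance-matching gain (sigma_x = l)
R_mse = h1**2  * sigma**2 * rho                   # lies below the exact curve
R_var = h1s**2 * sigma**2 * rho                   # lies above the exact curve
R_avg = ((h1 + h1s) / 2.0)**2 * sigma**2 * rho    # averaged-coefficient curve
# R_mse <= R_exact <= R_var for 0 < rho < 1, as in Fig. 17
```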
chapter 2

NONLINEAR TRANSFORMATIONS WITH FEEDBACK. STATIONARY STATES
2.1. A Short Description of the Basic Methods of Investigation
The fundamental characteristic of a nonlinear transformation with feedback is the fact that it is impossible to find an explicit expression for the dependence between the input and output signals. Hence, the techniques of computation described in Chapter 1 cannot be applied directly to find the statistical characteristics of the transformed signal. We now give a short summary of the current methods for dealing with nonlinear transformations which do not require an explicit expression for the dependence between the input and output signals.*

(1) The method of direct linearization. The nonlinear functions entering a transformation with feedback can be replaced by linear ones if one retains only the first two terms of their Taylor series. Wherever this operation is feasible (when the nonlinearities are analytic and the signals at the input are small), the problem loses its special nature and becomes a problem of linear transformations of random functions. We shall not discuss the method of direct linearization in detail, because it is assumed that, wherever such a method is applicable, it has already been incorporated in the process of passing from the real system to its dynamic model.

(2) Methods based on the application of canonical expansions of
* We do not pretend that this classification is complete; our objective is only to bring out the basic methods for solving this problem, and the degree to which they are developed in this book.
random signals. Here, a random process is represented, over a finite interval of time, by a sum of definite functions of time with coefficients which are mutually independent random variables (cf. Section 1.2). In principle, this kind of representation reduces the problem to the task of integrating nonlinear differential equations which contain only definite functions of time.

(3) Methods based on representing the output signals by Markov processes (either one-dimensional or multidimensional) and, subsequently, on using the Kolmogorov differential equations to compute the probability distributions of these signals (cf. Appendix V). The complexity of this procedure in general limits the scope of its application to the analysis of (a) transformations which are defined by differential equations of the first order or, in some cases, of the second order, and (b) transformations which reduce to these by way of introducing auxiliary transformations, such as harmonic linearization. The possibility of using Markov processes for exact solutions, even though feasible only for a limited number of problems, has attracted the attention of many researchers. This book gives a brief description of these methods, illustrated by examples (Section 2.6 and, in part, Sections 4.2 and 4.3).

(4) The method of dealing with transformations which are piecewise linear functions, based on the sequential piecing together (matching) of solutions for each region of the phase space where the transformation is linear. This method is applicable in analyzing vibrational states in the presence of small random disturbances (Section 3.4 and, also, Section 4.5).

(5) The method of successive approximations. This method derives from a physical picture of the process of establishing an operating condition in a system with feedback: an iterated circulation of the external disturbance around the closed loop.
Here, the integral equation (1.20), which implicitly expresses the transformation with feedback, is solved by the successive approximations

X_i(t) = ∫_{−∞}^{∞} h(t, τ)[Z(τ) − f(X_{i−1}(τ))] dτ,   (2.1)

where X_0(t) = Z(t); that is, the value of the signal X(t) inside the loop is taken each time from the previous circulation of the cycle [43, 69].
Formally, of course, one can think of the method (2.1) as an ordinary mathematical method of successive approximations which does not have to be associated with any physical interpretation. It is obvious that the application of this method changes the problem from that of a closed system to that of an open one. A variation of the method of successive approximations is described in detail by Pugachev [65, p. 524].
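The iteration (2.1) is easy to demonstrate on a discretized toy example (all parameters and names below are ours). A first-order lag plays the role of the linear part, and a weak cubic nonlinearity sits in the loop; each pass filters Z − f(X_{i−1}), and the successive corrections shrink geometrically because the loop gain is less than one.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T_lag, k = 0.01, 0.5, 0.5              # step, lag time constant, DC gain
z = rng.normal(0.0, 1.0, 4000)             # sample path of the disturbance Z

def lag_filter(u):
    """First-order lag k/(T p + 1), forward-Euler discretization."""
    x, out = 0.0, np.empty_like(u)
    for i, ui in enumerate(u):
        x += dt * (k * ui - x) / T_lag
        out[i] = x
    return out

f = lambda x: x + 0.1 * x**3               # lagless nonlinearity in the loop

X = np.zeros_like(z)                       # initial approximation X_0
for _ in range(25):                        # X_i = h * (Z - f(X_{i-1})), as in (2.1)
    X_new = lag_filter(z - f(X))
    delta = float(np.max(np.abs(X_new - X)))
    X = X_new
# delta is now negligibly small: the iterations have converged to the
# response of the closed loop, obtained entirely by open-loop computations
```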
(6) Approximation methods based on the assumption that the character of the distribution of the signal at the input of the nonlinear, lagless transformation is known, while several numerical parameters entering the expression for the distribution remain undetermined. The implicit relations for these parameters (they are usually transcendental equations) can be solved, for example, by graphical techniques. Remembering that in the process of filtering the distribution tends toward the normal one (cf. Section 1.2), one can usually assume that here, too, the distribution is normal. A normal distribution is completely determined by the values of the mean m_x and the mean-square deviation σ_x, and so, consequently, is the form of the correlation function. As an additional assumption, one uses, in particular, the method of statistical linearization of the lagless nonlinear transformation; that is, one retains only the term proportional to the correlation function of the signal X(t) at the input in the expression for the correlation function of the signal at the output of this transformation. This assumption considerably simplifies the problem, which then involves only the parameters m_x and σ_x. The idea of expanding in terms of a small parameter broadens the applicability of the method and helps in studying the effect of small distortions in the form of the correlation function, or of deviations of the distribution from the normal one. Because of its generality and comparative simplicity, this method of statistical linearization is of the greatest interest for practical computation.
It is convenient to divide the problems of nonlinear transformations with feedback into two parts: the first part (Chapter 2) is concerned with stationary states, that is, states where the signal acting from outside the feedback loop is a stationary function of time; the second part (Chapter 3) considers nonstationary states.
The type of state which is realized in a given system (or transformation) is determined, not by its structure, but by the characteristics of the input signals and the parameters of the system. In the study of real systems it is usually necessary to analyze both stationary and nonstationary states. It is of very great practical importance to determine the conditions under which the transition from one state to another occurs when the parameters of the signal and of the system change. These conditions frequently determine the so-called interference stability of a system, that is, the potential loss of stability because of random interference. Of special importance is the fact that both stationary and nonstationary states can be studied on the common basis of statistical and harmonic linearization. The development of these approximate methods takes up the larger part of this and the following chapters.

2.2. The Application of Statistical Linearization to the Analysis of Nonlinear Transformations with Normally Distributed Stationary Signals
Consider the nonlinear transformation with feedback of the signal Z(t) into the signal X(t), given by the system of differential equations

Q(p)X + R(p)Y = S(p)Z,    Y = f(X),   (2.2)

where Q(p), R(p) and S(p) are linear differential operators (i.e., polynomials with constant coefficients in the differential operator p = d/dt). The system of equations (2.2) corresponds to the block diagram shown in Fig. 18.

FIGURE 18
Let the input signal Z(t) be a stationary random process with a normal distribution. Physically, Z(t) can represent any signal or any combination of a signal and noise. We now introduce the basic proposition that the distribution of the signal X(t) at the input of the nonlinear element is normal. It is equivalent to the proposition that the output signal U(t) of the linear transformation is normal (Fig. 18). The latter proposition rests on the effect of normalization of a signal passing through a linear transformation (cf. Section 1.2). This effect does not take place under all circumstances, nor is it ever complete. However, in most practical cases the assumption that X(t) [or U(t)] is normal leads to sufficiently reasonable results. (For a more detailed statement of the conditions under which the method of statistical linearization is applicable, see Section 2.4.) Starting with this assumption, we construct the optimal linear lagless approximation to the nonlinear function f(X), which we shall first suppose to be odd and single-valued:

Y = h_0 m_x + h_1 X⁰.   (2.3)

As was shown in Section 1.5, the coefficients h_0 and h_1 are functions only of the parameters m_x and σ_x:

h_0 = h_0(m_x, σ_x),    h_1 = h_1(m_x, σ_x).   (2.4)

The further development of the method does not depend on the concrete form of the functions (2.4) and, therefore, does not depend on the accepted criterion of approximation. Let us separate from (2.2) the equations for the mean and the random components of the signals:

Q(p)m_x + h_0(m_x, σ_x) R(p)m_x = S(p)m_z,   (2.5)
Q(p)X⁰ + h_1(m_x, σ_x) R(p)X⁰ = S(p)Z⁰.   (2.6)
These equations can be solved formally for m_x and X⁰:

m_x = Φ_0(p, m_x, σ_x) m_z,   (2.7)
X⁰ = Φ_1(p, m_x, σ_x) Z⁰.   (2.8)

Here we have adopted the following notation: Φ_0 is the transfer function of the system with respect to the mean component, and Φ_1 is the transfer function with respect to the random component. Equations (2.7) and (2.8) are related, because the transfer constants h_0 and h_1 depend both on m_x and on σ_x; therefore, they must be solved simultaneously. First of all, it is important to note that Eq. (2.5) has the solution*

m_x = Φ_0(0, m_x, σ_x) m_z = const,   (2.9)

or

m_y(m_x, σ_x) = [S(0)m_z − Q(0)m_x]/R(0).   (2.10)

We have at our disposal the family of curves m_y(m_x, σ_x), which have already been constructed for typical nonlinearities (cf. Appendix I) for various values of σ_x. We now draw on the same diagram the straight line [Fig. 19(a)]

m_y = [S(0)m_z − Q(0)m_x]/R(0).   (2.11)

At the points of intersection we can immediately find the dependence

m_x = m_x(σ_x).   (2.12)

Then it is not difficult to calculate the dependence of the transfer constant h_1* with respect to the random component directly on the variable σ_x:

h_1*(σ_x) = h_1[m_x(σ_x), σ_x].   (2.13)

From (2.8) and (2.9), the basic equations for linear transformations, we obtain the following expression for the spectral density of the process X⁰(t):

S_x(ω) = |Φ_1(jω, m_x, σ_x)|² S_z(ω),   (2.14)

* From the condition that Z is stationary it follows that m_z = const.
where S_z(ω) is the spectral density of the external disturbance Z⁰(t). We compute the mean-square deviation σ_x in terms of S_x(ω):

σ_x² = ∫_{−∞}^{∞} |Φ_1(jω, m_x, σ_x)|² S_z(ω) dω.   (2.15)

The integral on the right is tabulated in Appendix III. Its value can be expressed directly in terms of the coefficients in S_x(ω), that is, in terms of the parameters of the linear operators which determine the form of S_x(ω), and of the transfer constant with respect to the random component. Thus, Eq. (2.15) can be considered an implicit relation between σ_x and the parameters of the system and of the external disturbance, from which σ_x can be found, for example, by graphical means. Equation (2.12) then immediately gives the value of the mean component. Of course, the proposed procedure for finding m_x and σ_x from the system of implicit equations (2.10) and (2.15) is not the only one possible, but it is quite convenient for computation. Its only drawback is that the dependence h_1*(σ_x) has to be computed numerically. However, this drawback can be eliminated if a second, slightly different procedure is used in the computation. Equation (2.15) can also be considered a functional dependence of σ_x on the transfer constant h_1:

σ_x = σ_x(h_1).

We construct this dependence in the (σ_x, h_1) plane on the same diagram which shows the tabulated dependence h_1(m_x, σ_x) [Appendix I], constructed for various values of the parameter m_x [Fig. 19(b)]. From the points of intersection of the curve σ_x(h_1) with the various curves of the family h_1(m_x, σ_x), we find a dependence of the form

m_x = m_x(σ_x).
Its graph in the (σ_x, m_x) plane, together with the graph of the function (2.12), gives the coordinates of the point of intersection (m_x0, σ_x0), which obviously is the desired point representing the steady state [Fig. 19(c)]. The whole procedure is illustrated graphically in Fig. 19. Its advantages are the visual representation and the elimination of intermediate analytical calculations.

FIGURE 19
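The graphical procedure of Fig. 19 can equally be carried out as a simple numerical fixed-point computation. The sketch below (all choices ours) does this for a loop of the type of Example 2 below, Y + (1/k)(Tp + 1)X = Z with an ideal relay Y = l sgn X and white-noise disturbance of intensity d, using the standard relay characteristics under a normal input, m_y = l·erf(m_x/(√2 σ_x)) and h_1 = (l/σ_x)√(2/π) e^(−m_x²/2σ_x²); the mean-balance line replaces the construction of Fig. 19(a), and the variance relation replaces Fig. 19(b).

```python
import math

l, k, T, d, m_z = 1.0, 1.0, 1.0, 1.0, 1.5   # illustrative parameter values

def m_y(m, s):       # relay mean output for a normal input N(m, s^2)
    return l * math.erf(m / (math.sqrt(2.0) * s))

def h1(m, s):        # relay gain with respect to the random component
    return l * math.sqrt(2.0 / math.pi) / s * math.exp(-m * m / (2.0 * s * s))

def solve_mean(s):
    """Intersection of m_y(m, s) with the line m_y = m_z - m/k (bisection)."""
    g = lambda m: m_y(m, s) - (m_z - m / k)
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

s = 1.0
for _ in range(300):                  # damped alternation of the two relations
    m = solve_mean(s)
    s_star = math.sqrt(k * k * d / (2.0 * T * (1.0 + k * h1(m, s))))
    s = 0.5 * s + 0.5 * s_star
m = solve_mean(s)                     # refresh the mean for the converged s
# (m, s) now satisfy the mean-balance line and the variance relation together
```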
A significant simplification of the method results if the random component of the external disturbance acting at the input of the nonlinearity, Z⁰(t), is a high-frequency one, that is, if for all frequencies present in S_z(ω) the following condition is satisfied: (2.16)
In this case we have the simple equation (2.17), and m_x is given by Eq. (2.12) as before. We now assume that f(X) is not an odd function of X. (Such functions were described in detail in Section 1.1.) It is obvious from physical considerations that in this case the nonlinear element corresponding to f(X) will detect the random signal, and, therefore, even in the absence of a constant component in the external disturbance, a constant signal can appear in the circuit. This fact does not allow the formal application of the concept of the transfer constant with respect to the mean component introduced previously. In fact, if m_y is replaced according to the equation m_y = h_0 m_x, one immediately arrives at the physically meaningless result that m_x = 0 when m_z = 0. The representation of m_y in the form h_0 m_x entails the implicit assumption that, when m_x = 0, the expectation of the output signal is also zero; but, for an arbitrary (even) function f(z), this cannot be true. Therefore, in making calculations for systems with this kind of nonlinear element, one must reject the concept of a transfer function Φ_0 with respect to the mean component of the signal. However, this does not lead to insurmountable complications in computation. In fact, we can use the first equation of the system (2.5). Since m_x = const, it follows from (2.5) that

Q(0)m_x + R(0)m_y = S(0)m_z,   (2.18)

or

m_y = [S(0)m_z − Q(0)m_x]/R(0).   (2.19)

Hence, one can directly apply the grapho-analytical technique developed above for systems with odd nonlinearities. Let us now study the case where the nonlinear dependence has the form

Y = f(X, pX).   (2.20)
In Section 1.5 it was shown that such functions can be effectively approximated by linear ones:

Y = m_y + h_1 X⁰ + h_2 pX⁰,   (2.21)

where the coefficients h_1, h_2 and m_y are determined by Eqs. (1.17) and (1.22), with Z replaced by X; the coefficients h_1, h_2 and m_y are thus functions of the parameters m_x, σ_x and σ_px. For finding these parameters we have the three implicit equations (2.22). The graphical determination of the parameters m_x, σ_x and σ_px from these equations is inordinately difficult. A more convenient solution is given by an iterative method according to the equations (2.26).
The suitability of this method of computation is obvious from physical considerations. We note that a similar iterative method can be used for solving problems with single-valued nonlinearities, if for some reason the graphical procedure is not convenient. Because of the importance of these methods in practical computations, several detailed examples are given.

Example 1. Let us study the simple system whose block diagram is shown in Fig. 20. The corresponding differential equation is

Y + (1/k)(Tp + 1)X = Z.   (2.27)

FIGURE 20

Let Y = X + αX³, and let Z(t) be white noise with density d, that is, R_z(τ) = d δ(τ).
We replace the nonlinear function by its linear approximation in accordance with the criterion of the minimum mean-square deviation,

Y = h_1 X,    where h_1 = 1 + 3ασ_x².

Then, for the signal X one can write the linear equation

(Tp + 1 + kh_1)X = kZ.
We now determine σ_x:

σ_x² = (k²d/T²) I_1.
We can find this integral from the table in Appendix III (although in the present case it can be found by elementary methods). For this integral
+
Hence, and
h ( p ) = P B, &(PI = 1,
a. = 1,
6,
=
a,
=
B,
1.
6 1 I 1 ----o=2aoa1 28
(2.28)
We substitute into this equation the expression for h, in terms of and solve the resulting quadratic equation for uT2:
uX2
Thus, in this simple case it is possible to find an explicit expression for the variance in terms of the parameters of the system and the characteristics of the signal.
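The closed-form result can be checked directly: substituting h_1 = 1 + 3ασ_x² into the variance relation σ_x² = k²d/[2T(1 + kh_1)] yields a quadratic in σ_x² whose positive root is the answer (the parameter values below are arbitrary illustrations).

```python
import math

k, T, d, alpha = 2.0, 0.5, 0.4, 0.3     # illustrative parameter values

# sigma_x^2 solves: 3*k*alpha*v^2 + (1 + k)*v - k^2*d/(2*T) = 0
A = 3.0 * k * alpha
B = 1.0 + k
C = -k * k * d / (2.0 * T)
var = (-B + math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)   # positive root

h1 = 1.0 + 3.0 * alpha * var
# consistency check: var equals k^2*d / (2*T*(1 + k*h1))
```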
Example 2. The system is the same as in Example 1, but the nonlinearity has the characteristic of an ideal relay: f(X) = l sgn X. The input signal Z has a constant mean component m_z and a random component which is white noise of intensity d. For the mean components we have the equation

m_y = m_z − (1/k)m_x.   (2.30)

According to the graphs of Appendix I, we construct the family of
FIGURE 21
curves (1/l)m_y(m_x/l, σ_1) for several values of σ_x/l = σ_1. On the same diagram we draw the line

(1/l)m_y = (1/l)m_z − (1/k)(m_x/l)

for the case when m_z/l = 1.5 and k = 1 [Fig. 21(a)]. By plotting the points of intersection, we find the graph of the function m_1(σ_1) [Fig. 21(b)]. Analogously, we construct the graph [Fig. 21(c)] of the family of curves h_1(σ_1) for various values of m_1 and draw onto it a section of the parabola (2.28):

h_1 = γ/σ_1² − 1/k,    γ = dk/(2Tl²).

Here, we assign to γ the values 0.5, 1.0, 2.0 and 3.0. The values of m_1(σ_1) for the various values of γ are shown in Fig. 21(b). The points of intersection give the solution, that is, the quantities m_1 and σ_1 for the respective values of γ. The results are given in the form of graphs of m_1(γ) and σ_1(γ) in Fig. 21(d).

Example 3. Consider the problem of the transformation of random noise at the input of a low-power servo system. The block diagram of the servo system is shown in Fig. 22, and the corresponding transfer-function block diagram in Fig. 23.
FIGURE 22

FIGURE 23
The input signal U_in consists of two parts, the mean component m_in and the random noise U⁰_in. We write the equations which relate the mean values (expectations) and the random components of the output signal* U_out and of the signal at the input of the limiter to the corresponding quantities for the input signal U_in. On the basis of the block diagram and with the aid of the equation of statistical linearization,

Y = h_0(m_x, σ_x)m_x + h_1(m_x, σ_x)X⁰,

we can find the equations (2.31) relating these quantities. We investigate two cases.

(a) Let m_in = const. Then m_x = 0. The mean-square deviation is given by the implicit equation (2.32).

* The voltage at the feedback potentiometer may be taken as the output signal.
The spectral density of the interference has the form

S_in(ω) = 2θσ²_in/(ω² + θ²).

The integral on the right in (2.32) can be computed either graphically for various h_1 or directly from the equations for I_4 in Appendix III. We carry out the analytical calculations neglecting the time constant of the amplifier. Forming the polynomials h_n(p) and g_n(p) which enter the equation for I_4, we obtain the coefficients

b_0 = 0,    b_1 = 2θT_m²σ²_in,    b_2 = −2θσ²_in,    b_3 = 0.

Substituting these values of the coefficients b_i into the general equation for I_4, we obtain

σ_x² = θσ²_in (a_3 T_m² + a_1)/(a_1 a_2 a_3 − a_0 a_3² − a_1² a_4).   (2.33)
The curves σ_x/l = σ_1 = σ_1(h_1) are constructed in Fig. 24 for the following values of the parameters of the system:

k_a = 150,    k_m = 13.4 rev/(sec·volt),    k_t = 0.013 volt·sec/rev,
k_p = 72 volts/rev,    k_red = 0.5,    T = 1/90 sec,
T_m = 0.1 sec,    T_f = 0.05 sec,    l = 12 volts,

and for various mean-square values of the input noise, for θ = 20 sec⁻¹.
FIGURE 24
The curve of the transfer constant h_1(σ_x), taken from Appendix I, is drawn on the same diagram. The points of intersection correspond to the required values of σ_x = σ_1 l and h_1. The function σ_x/l = σ_1(σ_in) for θ = 20 sec⁻¹ is shown in Fig. 25. The function σ_x/l = σ_1(θ) for σ_in = 3 volts is constructed in an analogous fashion and is shown in Fig. 26. The values found for the transfer constant h_1 are used to construct the functions σ_out(σ_in) for θ = 20 sec⁻¹ (Fig. 25) and σ_out(θ) for σ_in = 3 volts (Fig. 26). The computation is based on the equation (2.34) for the ratio σ²_out/σ²_in, which is derived in the same way as (2.33).
FIGURE 25
FIGURE 26
These graphs show clearly that it is precisely the entry of the signal into the region of nonlinearity that leads to a sharp increase in the variance of the signals U_out and X. Here, the entry into the region of nonlinearity can be brought about both by an increase of σ_in (σ_in > 2 volts for θ = 20 sec⁻¹) and by a change in the spectral composition of the interference (θ > 5 sec⁻¹ for σ_in = 3 volts).

(b) Let the signal be a linear function of time: m_in = λt.
Then, for the steady state,

m_y = λ/(k_m k_red).   (2.35)

From the points of intersection of the family of curves m_y(m_x, σ_x) with the horizontal line at the ordinate λ/(k_m k_red), one can find the function

m_x = m_x(σ_x).

Substituting this into the expression for the transfer constant with respect to the random component, we obtain

h_1 = h_1[m_x(σ_x), σ_x],   (2.36)

where m_1 = m_x/l and σ_1 = σ_x/l,
which makes it possible to find directly the dependence of h_1 on σ_x. A construction has been made for λ = 1 volt·sec⁻¹ and with the same parameters of the system as in case (a). It turns out that, for practical purposes, the form of the resulting explicit function h_1*(σ_x) is identical with that of the function h_1(σ_x) when m_x = 0 (Fig. 24). The latter fact can also be derived from general considerations. In fact, it is not difficult to see from (2.36) that when λ = 1 volt·sec⁻¹ the quantity m_x is very small, and that the expansion of h_1(m_x, σ_x) in powers of m_x contains no first-degree terms. Hence, the graphs of case (a) [Figs. 25 and 26] for σ_out and σ_x remain valid in this case. We now evaluate the variation in the settling error of the signal,
the so-called noise effect; in other words, we evaluate the variation in the quality of the servo system. In the absence of interference, the quality of the system is given by the equation

ē = (λ/k)(1 + 1/ρ),   (2.37)

and, in the presence of interference, by the equation

ē_n = (λ/k)(1 + 1/(ρ h₁)).   (2.38)
If σ_in = 3 volts and θ = 20 sec⁻¹, we see from Fig. 25 that σ_x = 9.6 ≈ 10. Moreover, from the graph in Fig. 27, we can find the point

FIGURE 27
of intersection of the curve (1/l) m_y(m̄₁, 10) with the horizontal line which has the ordinate λ/(k_am k_li) = 0.0078; this gives m̄₁ = 0.1 and h₁ = 0.08. Therefore,

ē/ē_n = (1 + 0.04)/(1 + 0.04(1/0.08)) ≈ 0.7.
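As a numerical aside, a transfer constant of this kind can be checked by direct averaging. The sketch below (Python; the ideal-relay nonlinearity, its level l, and the value of σ_x are illustrative choices, not the parameters of this example) compares the Monte-Carlo estimate M{f(X)X⁰}/σ_x² with the closed-form value √(2/π)·l/σ_x for a zero-mean normal input.

```python
import numpy as np

# Monte-Carlo check of the statistical-linearization transfer constant
# h1 = M{f(X) X} / sigma_x^2 for an ideal relay f(x) = l*sign(x) driven by
# a zero-mean normal input.  The closed-form value is h1 = l*sqrt(2/pi)/sigma_x.
# The relay level l and sigma_x are illustrative, not taken from the book.
rng = np.random.default_rng(0)
l, sigma_x = 1.0, 2.0
x = rng.normal(0.0, sigma_x, 2_000_000)
h1_mc = np.mean(l * np.sign(x) * x) / sigma_x**2
h1_exact = l * np.sqrt(2.0 / np.pi) / sigma_x
assert abs(h1_mc - h1_exact) < 1e-2
```

The agreement illustrates why the graphical constructions above can rely on precomputed h₁ curves.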
Here, the presence of interference degrades the quality of the system by a factor of about 1.4.

2.3. Computation of Frequency Distortions Introduced by Nonlinear Elements
The method for the statistical linearization of a lagless, nonlinear transformation f(X) investigated above for nonlinear closed-loop systems consisted in finding the optimally close linear, lagless transformation on the assumption that the random component of the signal X was normally distributed. We shall describe in greater detail a method of calculating the moment characteristics which is based only on the assumption that X is normally distributed, and which makes it possible to take into account the distortions which the nonlinear transformation introduces into the form of the correlation function. Let us take another look at a system with one nonlinear element which is described by Eq. (2.2), where Z(t) is assumed to be a stationary, random process. This equation can be written in the simple form

X + U = Z₁,   (2.39)

where
Z₁ = N(p)Z,   U = K(p)f(X).
We write the equation which links the expectations m_z1, m_x, and m_u of the processes Z₁(t), X(t), and U(t),

m_z1 = m_x + m_u;   (2.40)
the equation for the random components is

Z₁⁰ = X⁰ + U⁰.   (2.41)
Equation (2.40) is equivalent to Eq. (2.9),

N(0)m_z = m_x + K(0)m_y(m_x, σ_x),   (2.42)

which was obtained by the method of statistical linearization.
From Eq. (2.41) for the random components, we obtain the dependence of the correlation function R_z1(τ) of the process Z₁⁰ on the correlation functions R_x(τ) and R_u(τ) and on the cross-correlation functions R_xu(τ) and R_ux(τ) of the processes U⁰ and X⁰:

R_z1(τ) = R_x(τ) + R_u(τ) + R_xu(τ) + R_ux(τ).   (2.43)
Applying the Fourier transform to this identity, we obtain

S_z1(ω) = S_x(ω) + S_u(ω) + S_xu(ω) + S_ux(ω).   (2.44)
Thus, we find the following well-known equations from the theory of linear transformations:

S_z1(ω) = |N(jω)|² S_z(ω),   S_u(ω) = |K(jω)|² S_y(ω),   (2.45)

and, also, the condition

S_xu(ω) + S_ux(ω) = 2a₁ Re K(jω) S_x(ω).   (2.46)

This follows from a property of the cross-correlation function of the input and output signals of a nonlinear element, which was discussed in Section 1.1 on the assumption that the input signal is distributed normally. We also recall that, in the method of statistical linearization, the coefficient a₁ is precisely the transfer constant with respect to the random component, h₁, which was found by the criterion of least mean-square deviation:

a₁ = h₁(m_x, σ_x).   (2.47)
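The property invoked here, namely that for a normally distributed input the cross-correlation of the input and output of a lagless nonlinearity is proportional to the input correlation function, can be illustrated numerically; in the sketch below f(x) = x**3 is an illustrative choice for which the transfer constant is known exactly (h₁ = 3σ²).

```python
import numpy as np

# Numerical illustration of the cross-correlation property from Section 1.1:
# for normally distributed inputs, M{f(X1) X2} = h1 * M{X1 X2}, with
# h1 = M{f(X) X}/sigma^2.  For the illustrative cubic f(x) = x**3, h1 = 3*sigma^2.
rng = np.random.default_rng(1)
sigma, rho = 1.5, 0.6
n = 1_000_000
x1 = rng.normal(0.0, sigma, n)
# x2 is normal with the same sigma and correlation rho with x1
x2 = rho * x1 + np.sqrt(1.0 - rho**2) * rng.normal(0.0, sigma, n)
r_xy = np.mean(x1**3 * x2)   # cross-correlation of f(X) and X at this "lag"
r_x = np.mean(x1 * x2)       # input correlation at the same "lag"
h1 = 3.0 * sigma**2          # exact transfer constant for f(x) = x**3
assert abs(r_xy / r_x - h1) < 0.1
```

The ratio r_xy/r_x is independent of the correlation ρ, which is exactly what makes the spectral condition above hold at every frequency.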
Substituting these relations into (2.44), we obtain

|N(jω)|² S_z(ω) = [1 + 2a₁ Re K(jω)] S_x(ω) + |K(jω)|² S_y(ω).   (2.48)
But S_y(ω) is related to S_x(ω) by the following equations (cf. Section 1.2):

S_y(ω) = ∫_{−∞}^{∞} e^{−jωτ} R_y(τ) dτ,   (2.49)

or

S_y(ω) = Σ_{ν=1}^{∞} a_ν² σ_x^{2ν} S_ν(ω),   (2.50)

where

S_ν(ω) = ∫_{−∞}^{∞} e^{−jωτ} ρ_x^ν(τ) dτ = ∫_{−∞}^{∞} e^{−jωτ} [ (1/(2πσ_x²)) ∫_{−∞}^{∞} e^{jω′τ} S_x(ω′) dω′ ]^ν dτ.
Therefore, Eq. (2.48) can be regarded as a complex integral equation for the spectral density S_x(ω) of the signal at the input of the nonlinear element. For its solution, one can use the well-known technique of successive approximations. The method is based on the fact that in the expansion (2.50) only the first term is significant, while all the following terms contribute, at most, only small corrections. The limitation to the first term in (2.50) corresponds to a solution based on the method of statistical linearization. In fact,

a₁² σ_x² S₁(ω) = h₁² S_x(ω),   (2.51)

and Eq. (2.48) reduces to the relation (2.14),

|N(jω)|² S_z(ω) = |1 + h₁K(jω)|² S_x(ω),   (2.52)

of the method of statistical linearization.
or

S_x(ω) = |Φ(jω)|² S_z(ω),   (2.53)

where

Φ(jω) = N(jω)/(1 + h₁K(jω)).   (2.54)

Taking as the first approximations the quantities m_x^{(1)} and σ_x^{(1)}, obtained graphically (cf. Section 2.2) by the method of statistical linearization, we find

S_x^{(1)}(ω) = |Φ^{(1)}(jω)|² S_z(ω).   (2.55)
We find the subsequent approximations in the following way:

S_x^{(k+1)}(ω) = |Φ^{(k)}(jω)|² [ S_z(ω) − Σ_{ν=2}^{∞} a_ν²(m_x^{(k)}, σ_x^{(k)}) σ_x^{2ν} S_ν^{(k)}(ω) ],

[σ_x^{(k+1)}]² = (1/2π) ∫_{−∞}^{∞} S_x^{(k+1)}(ω) dω   (k = 1, 2, ...),   (2.56)
where

Φ^{(k)}(jω) = N(jω)/[1 + h₁(m_x^{(k)}, σ_x^{(k)}) K(jω)].

It is difficult to give a rigorous proof of the convergence of this process, although it is physically clear that it does converge and, in fact, rather quickly. In each of the approximations it is laborious to compute the functions S_ν^{(k)}(ω). One can use the recursion relation (1.106),

S_ν(ω) = (1/2π) ∫_{−∞}^{∞} S_{ν−1}(ω − ω′) S₁(ω′) dω′,   (2.57)

where

S₁(ω) = S_x(ω)/σ_x².
Nevertheless, the computation is still very laborious. One way of avoiding complicated calculations is to confine oneself to the construction of a second approximation* (that is, of the quantities m_x^{(2)} and σ_x^{(2)}) in the following simplified manner. The expression for S_x^{(1)}(ω) found by the method of statistical linearization is approximated by the simple relation

S_x^{(1)}(ω) = [σ_x^{(1)}]² 2θ/(ω² + θ²),   (2.58)

or

S_x^{(1)}(ω) = [σ_x^{(1)}]² [θ/((ω − β)² + θ²) + θ/((ω + β)² + θ²)].   (2.59)

The corresponding expressions for the correlation coefficients are

ρ_x^{(1)}(τ) = e^{−θ|τ|}   and   ρ_x^{(1)}(τ) = e^{−θ|τ|} cos βτ.

* Or, more precisely, to an improvement over the first approximation.
In the first case we have

S_ν^{(1)}(ω) = 2νθ/(ω² + (νθ)²)   (ν = 1, 2, 3, ...),   (2.60)

and, in the second,

(2.61)
For ν > 3 this way of finding S_ν^{(1)} cannot conveniently be used, both because of the inaccuracy of the resulting approximation and because of the rapid decrease of the coefficients a_ν². The correction to the variance is given by Eq. (2.62).
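The Lorentzian spectra (2.60) can be verified numerically: the ν-th power of the correlation coefficient e^{−θ|τ|} is e^{−νθ|τ|}, whose Fourier transform is 2νθ/(ω² + (νθ)²). A sketch with an illustrative value of θ:

```python
import numpy as np

# Check of the spectra (2.60): the Fourier transform of
# rho(tau)**nu = exp(-nu*theta*|tau|) is 2*nu*theta / (w**2 + (nu*theta)**2).
theta = 2.0
tau = np.linspace(0.0, 40.0, 400_001)   # half-axis; the integrand is even
dtau = tau[1] - tau[0]

def half_axis_transform(y, w):
    # trapezoidal rule for 2 * integral_0^inf cos(w*tau) * y(tau) dtau
    integrand = np.cos(w * tau) * y
    return 2.0 * dtau * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

max_err = 0.0
for nu in (1, 2, 3):
    rho_nu = np.exp(-nu * theta * tau)
    for w in (0.0, 1.0, 5.0):
        s_exact = 2.0 * nu * theta / (w**2 + (nu * theta)**2)
        max_err = max(max_err, abs(half_axis_transform(rho_nu, w) - s_exact))
assert max_err < 1e-3
```

The widening of the Lorentzian with ν is the frequency distortion that the successive approximations are designed to capture.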
The exact value of the mean component can be found graphically from the abscissa of the point of intersection of the line

m_y = [N(0)/K(0)] m_z − [1/K(0)] m_x

with the family of curves m_y(m_x, σ_x), taking the curve which corresponds to the value σ_x = σ_x^{(2)}.
In conclusion, let us note that there is little justification in attempting to construct a larger number of approximations, since the method is quite rough, based as it is on the assumption that the input of the nonlinear element is normally distributed. Moreover, in this technique large corrections indicate a large deviation of the distribution from the normal. It is
physically obvious, for example, that these corrections are more important when the frequency band of the spectrum of the signal at the input of the linear part of the system is narrower than the output band, since then, as shown in Section 1.2, the effect of normalization is diminished.
Example 1. We shall study the relay system which is described by the equations

(1/k) p(Tp + 1)X + Y = Z,   Y = l sgn X,   (2.63)
where Z has a random component in the form of white noise with intensity d. This system was analyzed by the method of statistical linearization in Example 2 of Section 2.2. We shall use those results to construct a second approximation.
By Eq. (2.60) we find the spectra S_ν^{(1)}(ω), and we compute the correction to the variance by the approximate equation (2.62); the necessary integrals are evaluated from the table of integrals in Appendix III.
Just as before, we set m_z/l = 1.5 and k = 1; then we find from the graph of Fig. 21d the quantities (1/l)m_x^{(1)} and (1/l)σ_x^{(1)} for different values of the parameter γ = d/(2Tl²). For example, let γ = 1. Then (1/l)m_x^{(1)} ≈ (1/l)σ_x^{(1)} ≈ 0.8, and h₁(m_x^{(1)}, σ_x^{(1)}) = 0.6. The coefficients a₂(m_x, σ_x) and a₃(m_x, σ_x) are given in Appendix I. In this case a₂ = 0.34, a₃ = 0. We find immediately

[σ_x^{(1)}]² − [σ_x^{(2)}]² = 0.0151 l².

The exact value of σ_x^{(2)} thus differs little from its initial value:

σ_x^{(2)} = 0.79 l.
Similarly, there is little change in the quantity m_x. For γ = 0.5, the approximation m_x^{(1)}/σ_x^{(1)} = 1.27 is even less satisfactory, although the correction to σ_x does not exceed 3%.

2.4. Restrictions Imposed by the Requirement That the Input Signal of the Nonlinear System be Normal
As shown above, the condition that the input of a nonlinear element have a normally distributed signal was the only limitation which prevented the method of statistical linearization from turning into a precise method which would, at least in principle, be as accurate as desired. It is therefore of obvious interest to assess the practical importance of this restriction. We shall study once more a system with only one nonlinear, zero-dead-time element. We shall first consider the important case in which the distribution of the external influence Z₁(t), applied at the input of the nonlinear element, can be assumed to be normal. In this case, the distribution of the output signal of the nonlinear element will obviously not be normal. The distribution at the output
of the linear portion will be determined essentially by the frequency characteristics of the linear part of the system and by the frequency characteristics of the signal at the input of the nonlinear element (or, roughly speaking, by those characteristics which the external disturbance imposes on the input). Usually, the linear part of an automatic control system can be thought of as a low-frequency filter with a passband over the range 0 ≤ ω ≤ ω_cp, where ω_cp is some boundary frequency (the cutoff frequency) above which the amplitude-frequency characteristic begins to fall off sharply. The relations between the passband of the linear part of the system and the band of effective frequencies in the spectrum of the input signal can take several forms:
K[jn(w,+ w ) ] ,
(2.69)
where w
Ikl 1, (XI 0.31
d q = 0.39.
For example, let 1 = 0.62. Then, h, operator K,(p) is given by the equation
=
0.96, and the desired
2.5. Synthesis of Linear Compensation Networks
Let us now look at the following example, which deals with the case in which the presence of a bound, generally speaking, makes it impossible to realize the optimal transfer function which is found by disregarding the effect of this bound.

Example 2. Let the linear part be K₂(p) = 1/(Tp + 1), with a nonlinearity of the same form as in the previous example, where there is no internal interference N, and where the spectral densities of the signal S and of the external interference N₁ are given by

S_s(ω) = 2ασ_s²/(ω² + α²),   S_n1(ω) = d.
As we shall see, in this case it is impossible, in general, to realize the transfer function which is found without regard to the effect of the bound. We find the solution by determining the optimal operator H(p), which takes into consideration the bound imposed on the variance of the signal X₁ at the output of the nonlinearity:

σ_x1² = Λ.
The optimal operator H(p) is given by Eq. (2.111). We shall find expressions for the functions K_Λ(p), C(p), and L⁺(p) which appear in this equation. In the present example we find

K_Λ(jω) = (T√λ jω + √(1 + λ)) / (Tjω + 1).

Moreover,
and, consequently,

C(jω) = √d (jω + β)/(jω + α),

where we use the notation

β = √(α² + 2ασ_s²/d).
We can find in a like manner the expression

L(ω) = K₂(−jω)B(ω) / [K_Λ(−jω)C(−jω)],

where B(ω) = S_s(ω). After substituting for the functions which appear in the expression for L(ω), we find

L(ω) = 2ασ_s² / [√d (jω + α)(−jω + β)(−T√λ jω + √(1 + λ))].

Expanding L(ω) into partial fractions, we can separate out the term L⁺(ω), which has poles in the upper half-plane of ω:

L⁺(ω) = 2ασ_s² / [√d (α + β)(αT√λ + √(1 + λ)) (jω + α)].
Moreover, putting all these expressions into Eq. (2.111), we find, finally,

H(p) = 2ασ_s² (Tp + 1) / [√d (α + β)(αT√λ + √(1 + λ)) (p + β)(T√λ p + √(1 + λ))],

and, hence, we obtain the desired operator K₀(p).
Further calculations are made for specific values of the parameters:

2ασ_s² = 7,   α = 3,   d = 1,   T = 1,   β = 4.

Graphs of the functions σ_x1²(λ) and σ_ε²(λ) are shown in Fig. 33.
FIGURE 33
Let the boundary level l for the nonlinearity be equal to 1.5. Then, following the simplified method developed above, we take

σ_x1 = 0.7 l = 1.05,   Λ = σ_x1².

From the graph in Fig. 33 we ascertain the corresponding value of λ to be 0.27, and the mean-square error σ_ε ≈ 1.65. The graph in Fig. 30 for σ_x1/l = 0.7 shows that h₁(σ_x1) is equal to 0.65. Using these data, we obtain, finally,
We also note that in this problem Λ = ∞ when λ = 0; therefore, it is impossible to construct the optimal circuit unless the bound is taken into consideration in advance, regardless of how weak this restriction may be.
2.6. Application of the Theory of Markov Processes in the Study of Some Nonlinear Systems
We shall describe one class of nonlinear problems for which, in principle, it is possible to obtain exact solutions. Suppose that the equations of the nonlinear system can be written in the form

dX_i/dt = f_i(X₁, ..., X_n, ξ₁, ..., ξ_n)   (i = 1, 2, ..., n),   (2.117)
where the f_i are nonlinear functions and the ξ_i are random functions of time such that the quantities ξ_i(t₁) and ξ_i(t₂), where t₁ and t₂ are arbitrary moments of time, are statistically independent. Without loss of generality we assume that M{ξ_i} = 0. We assume that the cross-correlation functions of the processes ξ_i and ξ_j are given by the expressions

M{ξ_i(t₁) ξ_j(t₂)} = G_ij(t₁) δ(t₂ − t₁).
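Systems of the form (2.117) with delta-correlated excitation can also be simulated directly. The sketch below integrates a scalar illustrative case, dX/dt = −X³ + ξ(t), by the Euler-Maruyama scheme (a standard method, not one used in the book), in which the white-noise intensity G enters through increments of variance G·Δt; the drift −x³ is an illustrative odd nonlinearity whose stationary distribution has zero mean.

```python
import numpy as np

# Euler-Maruyama simulation of a scalar system of the form (2.117),
# dX/dt = f(X) + xi(t), with the illustrative drift f(x) = -x**3 and xi a
# delta-correlated ("white") noise of intensity G.
rng = np.random.default_rng(2)
G, dt, n_steps = 0.5, 1e-3, 500_000

noise = np.sqrt(G * dt) * rng.normal(size=n_steps)
xs = np.empty(n_steps)
x = 0.0
for i in range(n_steps):
    x += -x**3 * dt + noise[i]   # drift step plus white-noise increment
    xs[i] = x
# The stationary density is proportional to exp(-x**4/(2*G)), which is
# symmetric, so the time average should be close to zero.
mean_x = xs[n_steps // 2:].mean()
assert abs(mean_x) < 0.15
```

Such sample-path simulation is the numerical counterpart of the exact Markov-process treatment: it produces realizations whose statistics approach the solution of the corresponding Fokker-Planck equation.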
The processes of variation of the X_i(t) are Markov processes.
Hence, the characteristic equation of the system in the presence of interference can be written down. Disregarding the small time constant, we rewrite the condition for stability (the Routh-Hurwitz criterion) as follows:

(3.30)

For a servo system with the parameters of Example 3, Section 2.2, we obtain h₁ > 0.05. It is not hard to calculate the frequency ω₀ at which loss of stability can occur. From the characteristic equation it is obvious that

(3.31)

and, hence, ω₀ ≈ 14 sec⁻¹. Therefore, strictly speaking, the loss of stability under the influence of interference can be considered real only if the spectral density of the interference is significantly greater than zero for frequencies higher than ω₀ = 14 sec⁻¹.
Example 2. We shall study the state of a system with a nonlinear element which is an ideal relay. The block diagram of the system is shown in Fig. 36. In the absence of an external disturbance there will be self-oscillations. To find out what they are, we apply the method of harmonic linearization.

FIGURE 36
Here, the system of equations has the form

(k/(p(Tp + 1)²)) Y + X = Z,   Y = sgn X.   (3.32)

Let X = a sin ω₀t;
then

Y ≈ q₁(a) X,

where q₁(a) = 4/(πa) is the harmonic-linearization coefficient for the ideal relay. The amplitude a and the frequency ω₀ of the self-oscillations are given by the conditions

q₁(a) = −(1/k) Re[jω₀(Tjω₀ + 1)²],   0 = Im[jω₀(Tjω₀ + 1)²],   (3.33)
and, hence, we obtain

ω₀ = 1/T,   a = 2kT/π.
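The harmonic-balance conditions (3.33) can also be solved numerically. In the sketch below the linear part is taken as K(p) = k/(p(Tp + 1)²), the form consistent with the stated results ω₀ = 1/T and a = 2kT/π (relay level l = 1 assumed); the frequency is found by bisection on the imaginary part of 1/K(jω), and the amplitude from q₁(a) = 4l/(πa).

```python
import numpy as np

# Numerical solution of the describing-function balance 1 + q1(a)*K(jw) = 0
# for K(p) = k/(p*(T*p+1)**2) and an ideal relay of level l (assumed form).
k, T, l = 50.0, 0.1, 1.0

def inv_K(w):
    p = 1j * w
    return p * (T * p + 1.0) ** 2 / k   # 1/K(jw)

# Self-oscillation frequency: Im[1/K(jw)] = w*(1 - w**2*T**2)/k = 0
lo, hi = 0.5 / T, 2.0 / T               # Im is positive at lo, negative at hi
for _ in range(100):                    # bisection
    mid = 0.5 * (lo + hi)
    if np.imag(inv_K(mid)) > 0.0:
        lo = mid
    else:
        hi = mid
w0 = 0.5 * (lo + hi)

# Amplitude from q1(a) = 4*l/(pi*a) = Re[-1/K(jw0)]
q1 = np.real(-inv_K(w0))
a = 4.0 * l / (np.pi * q1)
assert abs(w0 - 1.0 / T) < 1e-6
assert abs(a - 2.0 * k * l * T / np.pi) < 1e-6
```

The same bisection scheme applies to any linear part whose phase crosses −180 degrees once in the band of interest.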
Self-oscillations will occur for arbitrary values of the amplification coefficient k. Now let us impose upon the system a high-frequency external disturbance Z(t) (Fig. 36) such that, for all significant frequencies in S_z(ω), the condition for complete filtering is satisfied in the linear part of the system. Then σ_x = σ_z.
3.1. The Transformation of a Slowly Changing Signal
To determine the amplitude of the harmonic component in the signal X we have the equation (3.21),

h₁(a, σ_x) = (2T/k) ω₀² = 2/(kT).   (3.34)
The function h₁(a, σ_x) for the ideal relay is derived in Appendix IV:

h₁(a, σ_x) = (1/a) B̄₁(ᾱ),   (3.35)

where

ᾱ = a/(√2 σ_x).
The abscissa of the point of intersection of the curve B̄₁(ᾱ) with the line

a h₁(a, σ_x) = (2/(kT)) a = (2√2/(kT)) σ_x ᾱ = Lᾱ   (3.36)

gives the desired value of the parameter ᾱ and, therefore, of the amplitude a (Fig. 37), for arbitrary values of the combined parameter L.

FIGURE 37

The parameter L increases with an increase in the interference level, and when L > 2/√π, that is, when σ_x > kT/√(2π), the harmonic component vanishes. There is a "break" in the self-oscillations. The phenomenon of this "break" in the self-oscillations under random external disturbances has in its very character much in common with the phenomenon of locking under the effect of harmonic disturbances, although, obviously, it takes a slightly different form.
3. Nonlinear Transformations-Nonstationary States
3.2. Passage of a Slowly Varying Random Signal through a System in a State with Periodic Oscillations
Let the nonlinear system operate in a state with periodic oscillations. These periodic oscillations can be produced either by an external periodic disturbance (forced oscillations) or by the internal properties of the system (self-oscillations). We shall study a method of finding the dynamic characteristics of such a system when it is acted upon by a signal which is a slowly changing, stationary, normal, random function of time Z₀(t). "Slowness" here means that the band of significant frequencies in the spectrum of Z₀(t) lies considerably below the frequency of the undisturbed periodic state. For disturbances of the self-oscillatory state, this means that the spectrum of Z₀(t) is low-frequency with respect to the passband of the open loop. We shall restrict our investigation to the case in which the system contains only one nonlinear element and is in a state of forced oscillations produced by a harmonic signal*:

Q(p)Y + R(p)X = S(p)Z,   Y = f(X),   (3.37)

where

Z = m_z + Z₀,   m_z = a_z sin ω_z t.   (3.38)
We look for a solution of the system (3.37) in the form of a sum of a high-frequency harmonic component X₁ and a slowly varying, normal component X₀:

X = X₁ + X₀.

If it is assumed that the linear part of the system (3.37) satisfies the well-known conditions for the applicability of the method of harmonic linearization, then

X₁ = a sin(ω_z t + ψ),

* Computations for a self-oscillatory system are not significantly different [57].
and when we expand Y in a Fourier series, we can limit ourselves to the first terms:

Y = F₀(a, X₀) + q₁(a, X₀)X₁,   (3.39)

where*

F₀(a, X₀) = (1/2π) ∫₀^{2π} f(a sin φ + X₀) dφ,

q₁(a, X₀) = (1/πa) ∫₀^{2π} f(a sin φ + X₀) sin φ dφ.
Here, the amplitude a is, in general, a random, slowly varying function of time. Averaging with respect to time and assuming that the slowly varying functions are constant over one period, we can separate from the system (3.37) an equation for the periodic components,

Q(p)Y₁ + R(p)X₁ = S(p)m_z,   Y₁ = q₁(a, X₀)X₁,   (3.40)

and one for the slowly varying components,

Q(p)Y₀ + R(p)X₀ = S(p)Z₀,   Y₀ = F₀(a, X₀).   (3.41)

To solve the system (3.40), we use the method of "frozen coefficients." Then the amplitude a and the phase ψ of the process X₁, which are functions of X₀(t) and, therefore, slowly changing, random functions of time, are given by the equations (3.42). We shall consider the first equation in (3.42) as an equation which gives the dependence of the amplitude a on the slowly varying component X₀:

F(a, X₀) = 0.   (3.43)

* Equations and graphs for the coefficients F₀ and q₁ are given by Popov and Pal'tov [64].
This function can be constructed in an explicit form, for example, by graphical means. Thus, we find

a = a(X₀).   (3.44)
Then the slowly varying component can be expressed in terms of the slowly varying component of the input signal alone, and the system (3.41) reduces to the form (3.45), where

F₀*(X₀) = F₀[a(X₀), X₀].

If the condition for complete filtering is satisfied, the system turns out to be open with respect to the high-frequency component and

a = a₀ = const,   (3.46)

that is, the amplitude does not depend on X₀. Therefore,

F₀*(X₀) = F₀(a₀, X₀).   (3.47)
If the condition for filtering is not satisfied, that is, if ω₀ lies inside the passband of the open system, then the equation for the slowly changing component can usually be greatly simplified by neglecting terms of higher order. For a very low-frequency Z₀, the system can be regarded as lagless (with respect to Z₀); that is, X₀ will be given by the implicit equation

Q(0)F₀*(X₀) + R(0)X₀ = S(0)Z₀.   (3.48)
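For a concrete illustration of (3.48), the sketch below solves the implicit equation by bisection, taking a relay-type reduced characteristic F₀*(x) = (2/π) arcsin(x/a); the numerical values of Q(0), R(0), S(0), a, and Z₀ are illustrative, not taken from the book.

```python
import numpy as np

# Sketch of solving the lagless relation (3.48),
#   Q(0)*F0*(X0) + R(0)*X0 = S(0)*Z0,
# by bisection, with the illustrative relay reduced characteristic
# F0*(x) = (2/pi)*arcsin(x/a).  All numeric values here are illustrative.
Q0, R0, S0, a = 3.0, 1.0, 2.0, 1.5

def residual(x0, z0):
    return Q0 * (2.0 / np.pi) * np.arcsin(x0 / a) + R0 * x0 - S0 * z0

def solve_x0(z0, iters=200):
    # residual is monotonically increasing in x0 on (-a, a), so bisection works
    lo, hi = -a + 1e-9, a - 1e-9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if residual(mid, z0) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x0 = solve_x0(0.8)
assert abs(residual(x0, 0.8)) < 1e-9
```

Monotonicity of the reduced characteristic is what guarantees a unique static solution here; for non-monotonic F₀* several branches may exist.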
In the general case, Eq. (3.45) is an equation for a system with one nonlinear element which has an effectively real ("reduced") characteristic given by the function F₀*(X₀). Let us look for an approximate method of solution of these equations. The simplest method is straightforward linearization with a Taylor expansion of the function F₀*(X₀). Then it will not be necessary to find the implicit function (3.43). In fact,
dF₀*/dX₀ = (∂F₀/∂a)(da/dX₀) + ∂F₀/∂X₀. But for an odd, piecewise-differentiable function f(X) one can show that ∂F₀/∂a = 0 when X₀ = 0. Hence,
dF₀*/dX₀ = ∂F₀/∂X₀, while in the expression for the derivative, obviously, a = a₀, where a₀ is the amplitude of the periodic signal in the absence of the slowly varying component. The system (3.45) reduces to the linear equation

X₀ = Φ₀(p)Z₀,   (3.50)

where Φ₀(p) is the corresponding closed-loop operator,
which enables us to compute the mathematical expectation and variance of X₀ in the usual way. If direct linearization is not admissible (if the amplitude of the high-frequency component a₀ is comparatively small), then obviously statistical linearization is to be applied. The method for solving Eqs. (3.45) is no different from that outlined in Section 2.2, except that here the transfer constants must be constructed not from the real characteristic of the nonlinear element but from the "reduced" characteristic F₀*(X₀); in other words, the transfer constant is given by the equation
h₀ = (1/σ_x0²) ∫_{−∞}^{∞} F₀*(x)(x − m_x0) w₁(x) dx,   (3.51)

where

w₁(x) = (1/(σ_x0 √(2π))) exp[−(x − m_x0)²/(2σ_x0²)].
If the filtering condition is satisfied, then (3.52) holds; that is, the transfer constant is the same as the constant h₁ derived in Section 1.5 for the random component by statistical linearization of a nonlinear transformation of the sum of a random, normal signal and a harmonic signal with random phase [cf. (1.219)]. We note that the direct application of the method of statistical linearization to Eq. (3.37) would have been very difficult, because the transfer constants would then be rapidly changing functions of time. In problems of this type it is often of interest to find only the characteristics of the slowly changing component X₀. If it is required to estimate the fluctuations in the amplitude of the periodic component, then one can use Eq. (3.44), which can be written roughly in the form
from whence
One can also compute σ_a directly from the nonlinear equation (3.44) if one assumes that X₀ has a normal distribution, an assumption which was already made in applying the method of statistical linearization.
We shall now give a summary of the material developed in the previous two sections. Essentially, the techniques described depend on the successive analysis of the processes in the two components present in the system, the low-frequency and the high-frequency component. The basic premises of the method do not depend on whether these components are random or deterministic functions of time. The solution is sought in the form of the sum of a slowly changing and a high-frequency component. From the equations of the system we find, by averaging with respect to time (harmonic linearization) or with respect to the set of possible values (statistical linearization), the equations for the slowly varying components. The dependence of the coefficients of these equations on the high-frequency components of the system enters only through a single unspecified parameter (the amplitude or the mean-square deviation at the input of the nonlinear element). This parameter, or, more precisely, its dependence on the slowly varying component, is found by statistical or harmonic linearization if one assumes that the slowly varying coefficients in the equations can be considered "frozen." The substitution of this dependence into the equation for the low-frequency component makes it possible to analyze the transformation of this component independently. For this, the resulting nonlinear functions are replaced by the smooth "reduced" functions. In most cases it is possible to linearize them directly and to treat the equations for the low-frequency component as linear. It is then comparatively easy to determine the transforming properties of the system with respect to this component.
If direct linearization is not possible (either because of the properties of the external disturbance or because of the internal properties of the system), then one of the methods of the theory of nonlinear transformations must be applied, and one may once again use harmonic or statistical linearization to determine the character of the resulting states. The presence of a high-frequency component in the external disturbance can cause significant qualitative changes both in the properties of the system with respect to the transformation of the external slowly varying signal and in the internal dynamic properties, such as the stability of the equilibrium state and the possibility of attaining a self-oscillatory state. It is obvious that these properties
are considered realizable (or unrealizable) "in the mean," that is, averaged over the period of the high-frequency component. The method of separating the frequencies agrees with the physical nature of the processes in nonlinear systems. Its application is always productive in the analysis of both periodic and random processes. It is especially useful in composite problems of the type described above, where one of the components is periodic while the other is a random function of time.
Example 1. Consider a servo system with a sensitive element which has relay characteristics. Disregarding the time constant of the amplifier and the hysteresis of the relay, one can write the equation of the system in the form (3.56), where Z = a_z sin ω_z t + Z₀, and T is the time constant of the motor. The frequency of the harmonic interference ω_z is much higher than the significant frequencies in the spectral density S₀(ω) of the signal Z₀. The amplitude a of the interference at the input of the relay is given by the equation
(3.57)

where the transfer constant q₁(a, X₀) is given in [64]. The phase relations are usually of no interest. We take the parameters to have the following values:

ω_z = 2π·50 sec⁻¹,   T = 0.1 sec,   k = 50.
(3.58)
3.2. Passage of
a
Slowly Varying Random Signal
165
where

F₀(a, X₀) = (2/π) arcsin(X₀/a_z)   (a_z > |X₀|).   (3.59)

If the quotient σ_x0/a_z is
small enough, one can linearize directly:

F₀(a, X₀) ≈ (2/(π a_z)) X₀.   (3.60)
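The coefficient (3.59) can be checked by direct averaging over one period of the high-frequency component: the mean of sgn(a sin φ + x₀) over a period equals (2/π) arcsin(x₀/a) for |x₀| < a. A sketch with illustrative values:

```python
import numpy as np

# Numerical check of (3.59): averaging the relay output over one period of
# the high-frequency component gives F0 = (2/pi)*arcsin(x0/a) for |x0| < a.
a = 2.0                                   # illustrative interference amplitude
phi = np.linspace(0.0, 2.0 * np.pi, 2_000_001)
max_err = 0.0
for x0 in (-1.5, -0.5, 0.0, 0.7, 1.9):    # illustrative slow-signal values
    f0_numeric = np.mean(np.sign(a * np.sin(phi) + x0))
    f0_exact = (2.0 / np.pi) * np.arcsin(x0 / a)
    max_err = max(max_err, abs(f0_numeric - f0_exact))
assert max_err < 1e-3
```

The arcsin shape makes the "reduced" characteristic smooth even though the relay itself is discontinuous, which is what justifies the direct linearization (3.60).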
The variance of the signal X₀ (the error in the output signal) is found from the usual equation for a linear transformation, (3.61). If

S₀(ω) = [2θ/(ω² + θ²)] σ_z0²,

we obtain (cf. Appendix III) an integral of the standard tabulated form with the coefficients

a₀ = T,   a₁ = 1 + θT,   a₂ = k₀,   a₃ = θk₀,   b₀ = T²,   b₁ = 1.

Finally, we have the result (3.62).
When T < 1/θ, an increase in the amplitude of the interference increases the variance of the error, but when T > 1/θ, there will be a decrease. Naturally, this statement is valid only if the condition that σ_x0/a_z be small is satisfied. Suppose now that this condition is not satisfied; then one must use statistical linearization.
In this case, the mean-square error σ_x0 is given by an equation of the type (3.61) or (3.62), but the parameter k₀ turns out to depend on σ_x0 itself:

k₀ = k h₁(a_z, σ_x0).
It is shown in Appendix IV that
where ᾱ = a_z/(√2 σ_x0) and the function C₀(ᾱ) is given by graphs. A graph of the function

(a_z/√2) h₁ = ᾱ C₀(ᾱ) = C₀*(ᾱ)

is shown in Fig. 38.

FIGURE 38
Moreover, it follows that

ᾱ = ᾱ(C₀*).

The abscissa of the point of intersection of the functions ᾱ(C₀*) and C₀*(ᾱ) gives the desired value of ᾱ (Fig. 38). The construction is made for the values θ = 5 sec⁻¹, T = 0.1 sec, k = 50, and σ_z = √2, and for various values of a_z. Figure 39 shows the final graph of the function σ_x0(a_z/σ_z). It is
FIGURE 39
interesting to note that for small a_z/σ_z an increase in the amplitude of the interference brings an increase in the error of the output signal. Computation by the use of Eq. (3.62), that is, by direct linearization, gives a very good approximation when a_z/σ_z ≥ 1.

3.3. Transformation of the Sum of Wide-Band, Normal, Random Signals and Harmonic Signals in a Nonlinear System with Feedback (Method of Statistical Linearization)
In the preceding sections we solved the problem of the transformation of the sum Z(t) of a harmonic signal Z₁(t) and a normal, random signal Z₂(t) on the basis of certain assumptions regarding the frequency characteristics of the random component. Let us denote the spectral density of the stationary, random component Z₂(t) by S₂(ω), and the frequency of the periodic component by ω₀.
In the problems considered above it was assumed that the spectral density S₂(ω) was significantly different from zero at least in some frequency range ω₁ ≤ ω ≤ ω₂. The following variations were analyzed:

(1) The high-frequency random signal (ω₁ ≫ ω₀). The solution was sought in the form of a sum of a harmonic function and of a nonstationary, normal process with a mean-square value σ_x periodically varying with respect to time;
(2) The low-frequency random signal (ω₂ ≪ ω₀). The solution had the form of a sum of a stationary, normal process and of a harmonic function with random amplitude;
(3) The case where the spectrum of the signal is concentrated in a narrow band near the frequency ω₀ (ω₁ < ω₀ < ω₂, ω₂ − ω₁ = 2Δ, where Δ/ω₀ ≪ 1) is analyzed in a manner which, in principle, is no different from the technique described in Section 2.4 for the case of a narrow-band random disturbance of a stationary state.
3.4. Random Disturbances of Periodic States

We now analyze the disturbed state. We denote by τ_k (k = 1, 2, ...) the successive intervals of time (the "half-periods") during each of which the output signal of the relay Y is constant (taking the value +f₀ or −f₀). To be specific, we assume that the disturbance Z₀(t) is introduced into the system at the initial moment t₀, which is within the interval τ₁, where Y takes the value +f₀. The value of Z₀(t) at the end of the interval τ_k, that is, at the moment of switch-over, is denoted by Z_k⁰. We also introduce the index k to denote the processes of variation of the variables over the interval τ_k. Then
W_{0,k} = C_{0,k} + (−1)^{k−1} f₀ t,

W_{p,k} = C_{p,k} e^{λ_p t} + (−1)^{k−1} f₀/λ_p   (p = 1, 2, ..., n).
We take the beginning of each interval as the initial moment of time t. The continuity conditions

W_{p,k+1}(0) = W_{p,k}(τ_k)   (p = 0, 1, 2, ..., n)

make it possible to derive the recursion relations
C_{0,k} + (−1)^{k−1} f₀ τ_k = C_{0,k+1},

C_{p,k} e^{λ_p τ_k} + (−1)^{k−1} f₀/λ_p = C_{p,k+1} + (−1)^k f₀/λ_p   (p = 1, 2, ..., n).   (3.102)
We introduce variables which characterize the deviation from the undisturbed state:

Δτ_k = (−1)^k (τ − τ_k),   ΔC_{p,k} = C_{p,k} − C*_{p,k},

where τ is given by the equation of the periods (3.101), and

C*_{p,k} = (−1)^{k−1} C_p,   C_p = −(2f₀/λ_p) · 1/(1 + e^{λ_p τ})   (p = 1, 2, ..., n),

C*_{0,k} = C_{0,1} for k = 2m − 1,   C*_{0,k} = C_{0,2} for k = 2m.
We assume further that the disturbance Z_k⁰ is small in comparison with the amplitudes of the values of the input signal X(t) in the undisturbed state. Because of the stability of the indicated state, the deviations Δτ_k and ΔC_{p,k} (p = 0, 1, ..., n) are, in general, small in comparison with the corresponding undisturbed quantities. Using Eq. (3.97), we can linearize the continuity conditions (3.102):

ΔC_{0,k} = ΔC_{0,k−1} + f₀ Δτ_{k−1},

ΔC_{p,k} = e^{λ_p τ} ΔC_{p,k−1} + C_p e^{λ_p τ} λ_p Δτ_{k−1}.   (3.103)
3. Nonlinear Transformations - Nonstationary States
We see that Eqs. (3.103) are difference equations of the first order with respect to the ΔC_{p,k}, where p = 0, 1, ..., n; it is easy to find expressions for the dependence of these quantities on the fluctuations of the half-periods Δτ_i:

ΔC_{0,k} = f₀ Σ_{i=1}^{k−1} Δτ_i,    ΔC_{p,k} = C*_p λ_p Σ_{i=1}^{k−1} e^{λ_p τ (k−i)} Δτ_i    (p = 1, 2, ..., n). (3.104)
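The first-order recursions (3.103) and their summed forms (3.104) can be cross-checked numerically. The sketch below uses a single exponential mode p and hypothetical parameter values (none are taken from the book):

```python
import numpy as np

# Hypothetical numerical values: one stable mode of the linear part.
f0 = 1.0          # relay output amplitude
lam = -0.8        # lambda_p, a stable pole
tau = 1.0         # undisturbed half-period
Cp = 0.5          # undisturbed constant C*_p
rng = np.random.default_rng(0)
dtau = rng.normal(scale=1e-3, size=20)   # small half-period fluctuations

# Iterate the linearized continuity conditions (3.103).
dC0, dCp = [0.0], [0.0]
for k in range(1, 20):
    dC0.append(dC0[-1] + f0 * dtau[k - 1])
    dCp.append(np.exp(lam * tau) * dCp[-1]
               + Cp * lam * np.exp(lam * tau) * dtau[k - 1])

# Closed-form sums (3.104): weighted sums over the earlier fluctuations.
k = 19
dC0_cf = f0 * sum(dtau[i] for i in range(k))
dCp_cf = Cp * lam * sum(np.exp(lam * tau * (k - i)) * dtau[i] for i in range(k))
```

Both routes give the same ΔC values, which is exactly the statement that (3.104) solves the difference equations (3.103).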
We now use the switch-over conditions

V_k(0) = (−1)^{k−1} Δ − Z⁰_{k−1}, (3.105)

or

ΔV_k(0) = −Z⁰_{k−1}. (3.106)

Moreover,

ΔV_k(0) = Σ_{p=0}^{n} β_p ΔW_{p,k}(0) = Σ_{p=0}^{n} β_p ΔC_{p,k}. (3.107)
The substitution of (3.104) into Eq. (3.107) gives an infinite system of linear equations for the determination of the Δτ_k:

Σ_{i=1}^{k−1} [β₀ f₀ + Σ_{p=1}^{n} β_p C*_p λ_p e^{λ_p τ (k−i)}] Δτ_i = −Z⁰_{k−1}    (k = 2, 3, ...). (3.108)
Subtracting the (k − 1)th equation from each kth equation, we can put the system in the form

Σ_{i=1}^{k} a_{k−i+1} Δτ_i = −(Z⁰_k − Z⁰_{k−1}), (3.109)

where

a₁ = β₀ f₀ + Σ_{p=1}^{n} β_p C*_p λ_p e^{λ_p τ},
a_i = Σ_{p=1}^{n} β_p C*_p λ_p (e^{λ_p τ i} − e^{λ_p τ (i−1)})    (i = 2, 3, ...).
It is obvious that the coefficients a_i decrease rapidly as the index i increases. The physical meaning is very simple: a_i represents the difference between the values, at the moments iτ and (i − 1)τ, of the reaction of the linear part of the system to the impulse of the disturbance, taken not for zero initial conditions (which would give the impulse function corresponding to the operator K(p)) but for conditions which guarantee the continuity and the periodicity of the process in the undisturbed system. The solution of the system (3.109) presents no difficulty, because the matrix of the coefficients a_i is triangular, so that the values of Δτ_k can be expressed successively in terms of all the Z⁰_i with i ≤ k.
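Because the coefficient matrix of (3.109) is lower triangular with the constant a₁ on the diagonal, the Δτ_k follow by forward substitution. A minimal sketch, with made-up, rapidly decreasing coefficients a_i (not the book's values):

```python
import numpy as np

# Illustrative, rapidly decreasing coefficients a_i (hypothetical values).
a = np.array([1.0, -0.4, 0.12, -0.03])
Z = np.array([0.0, 0.02, -0.01, 0.015, 0.005])   # sample disturbance values Z_k^0
rhs = -(Z[1:] - Z[:-1])                          # right-hand sides of (3.109)

# Forward substitution: each new equation introduces one new unknown,
# so dtau[k] is obtained directly from the earlier dtau values.
dtau = np.zeros(len(rhs))
for k in range(len(rhs)):
    acc = sum(a[k - i] * dtau[i] for i in range(max(0, k - len(a) + 1), k))
    dtau[k] = (rhs[k] - acc) / a[0]
```

The truncation of `a` after a few terms mirrors the remark that the a_i decrease rapidly.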
The solution of the resulting infinite system of linear equations with a triangular matrix has the form

Δτ_i = Σ_{j=1}^{i} A_{i−j+1} Z⁰_j = Z⁰_1 A_i + Z⁰_2 A_{i−1} + ... + A_1 Z⁰_i. (3.133)
We find the values of the undetermined coefficients A_i by substituting (3.133) into the system (3.132) and, in the resulting identity, equating the coefficients of each Z⁰_j. As a preliminary step we change the order of summation; it is then obvious that

a₁ A₁ = −1,
a₂ A₁ + a₁ A₂ = 1,
Σ_{i=1}^{k} a_{k−i+1} A_i = 0    (k = 3, 4, ...). (3.134)

Finally, we find the following recursion relation:

A_k = −(1/a₁) Σ_{i=1}^{k−1} a_{k−i+1} A_i    (k = 3, 4, ...), (3.135)

while

A₁ = −1/a₁,    A₂ = (1 − a₂ A₁)/a₁.

Starting with k = 3, the coefficients A_k decrease rapidly. We also find an expression for the magnitude of the "phase incidence" ΔΦ_k.
We take the sequential sum of the equations (3.132), multiplying each even-numbered equation by −1. After changing the order of summation we find that the jth equation of the resulting system has the right-hand side

Z⁰_j (−1)^j + Z⁰_{j−1} (−1)^{j−1}. (3.136)

We introduce the quantities

S_k = Σ_{i=1}^{k} Δτ_i (−1)^{i+k}.

Then the system of equations (3.136) takes the form (3.137). The resulting system is the same as the system of equations (3.109) considered above. Its solution can be written in the form

S_k = Σ_{j=1}^{k} B_{k−j+1} Z⁰_j, (3.138)

where the coefficients B_i satisfy recursion relations of the same type as (3.135).
The coefficients B_i decrease rapidly because of the decrease in the a_i. Therefore, the variance of S_k is bounded, which is the opposite of what was found for the self-oscillatory system. The expressions for the correlation function and for the variance of the fluctuations in the half-periods are obviously the same as the expressions (3.115), (3.117), and (3.119) which were derived in Example a of this section. A computational example will be given later.

c. THE OUTPUT OF THE SUM OF A SIGNAL VARYING LINEARLY WITH TIME AND OF A STATIONARY RANDOM SIGNAL. In this case

m_x(t) = lt,    l = const. (3.139)

Consider first the undisturbed equilibrium state (Z⁰ = 0). The variation in X(t) is periodic, although the mean value of X(t) over a period is different from zero. The output signal V(t) of the linear part of the system must contain a term which varies linearly with time [otherwise there would be no compensation for the input signal m_x(t)].
From Eqs. (3.95), which give the results of integration in the original system, it is obvious that this term can appear only in the component W_{0,k}, which, in view of the above discussion, will not have a periodic character. From the continuity conditions

W_{0,k}(τ_k) = W_{0,k+1}(0),

it follows that

(−1)^{k−1} f₀ τ_k + C_{0,k} = C_{0,k+1}. (3.140)

We assume that

τ_k = τ₁ for k odd,    τ_k = τ₂ for k even. (3.141)

The process of variation of the coordinates W_p, where p = 1, 2, ..., n, is periodic, and thus we have

C_{p,k+2} = C_{p,k}. (3.142)
Moreover, from the continuity conditions

W_{p,k}(τ_k) = W_{p,k+1}(0),

we obtain

C_{p,1} = −(2f₀/λ_p) · (1 − e^{λ_p τ₁})/(1 − e^{λ_p (τ₁ + τ₂)})    (p = 1, 2, ..., n), (3.143)

with C_{p,2} given by the same expression with τ₁ and τ₂ interchanged and the sign reversed.
The switch-over conditions can be written in the form (3.144). Subtracting the second equation from the first and taking into account (3.141), (3.142), and (3.143), we obtain, after some fairly simple manipulations, the equation for the periods (3.145) and, hence, considering (3.142), we obtain (3.146).
Analogously,

C_{0,2m+2} − C_{0,2m} = (l/β₀)(τ₁ + τ₂). (3.147)

These relations, together with (3.141), make it possible to write a relation between the intervals τ₁ and τ₂:

f₀ (τ₁ − τ₂) = (l/β₀)(τ₁ + τ₂). (3.148)

The quantities τ₁ and τ₂ are then found by the simultaneous solution of Eqs. (3.145) and (3.148). From (3.148) one can also find a condition for the tracking stability:

|l| < β₀ f₀. (3.149)
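Taking (3.148) in the reconstructed form f₀(τ₁ − τ₂) = (l/β₀)(τ₁ + τ₂), and supposing the total period τ₁ + τ₂ is already known from (3.145), the half-periods follow directly. The numbers here are hypothetical:

```python
# Hypothetical parameter values for the reconstructed relation (3.148):
# f0*(tau1 - tau2) = (l/beta0)*(tau1 + tau2).
f0, beta0, l = 1.0, 2.0, 0.6
period = 2.0                      # tau1 + tau2, taken as known from (3.145)
ratio = l / (beta0 * f0)          # tracking requires |ratio| < 1, cf. (3.149)
tau1 = 0.5 * period * (1 + ratio)
tau2 = 0.5 * period * (1 - ratio)
# Both half-periods are positive exactly when |l| < beta0 * f0.
```

The positivity requirement on τ₁ and τ₂ is what turns (3.148) into the stability condition (3.149).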
Eq. (4.71) can be written in the form

(σ²/2T)(d²m/dx²) − (δx/T)(dm/dx) + 1 = 0. (4.74)

An analogous result can be obtained if we start from the differential equation which corresponds to the difference equation (4.19). Integration of (4.74) under the conditions of (4.72), neglecting the small quantity ε_m, gives

m/T = (2/δ) I(a), (4.75)

where

I(a) = ∫₀^a exp(t²) [∫₀^t exp(−y²) dy] dt. (4.76)
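Taking (4.76) in the reconstructed form I(a) = ∫₀^a exp(t²) [∫₀^t exp(−y²) dy] dt (the inner sign is not legible in this copy and is assumed here), the function is easy to evaluate by simple nested quadrature:

```python
import numpy as np

def I(a, n=4000):
    """Trapezoidal evaluation of the reconstructed integral (4.76)."""
    t = np.linspace(0.0, a, n)
    g = np.exp(-t**2)
    # Inner integral: cumulative trapezoid of exp(-y^2) from 0 to t.
    inner = np.concatenate(
        ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
    outer = np.exp(t**2) * inner
    # Outer integral: trapezoid of exp(t^2) * inner from 0 to a.
    return float(np.sum(0.5 * (outer[1:] + outer[:-1]) * np.diff(t)))
```

For small a the integrand behaves like t, so I(a) ≈ a²/2, which is a quick sanity check on the quadrature.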
For the range of small values of a which interest us, a graph of I(a) is shown in Fig. 56. We note that, in practice, the only case of interest is that in which there is intensive interference. In fact, if σ and δx are of the same order and x is sufficiently large (i.e., δ is small), it is not difficult, by means of successive approximations, to construct a solution of the form

m(x)/T = (1/δ) ln x + ...,

where the omitted correction is a series whose nth term carries the coefficient (2n − 1)!!/(2n).
In order to make the computations simple, we shall assume that ε₀ = 0 (this assumption does not have any special significance, since we are seeking the steady-state distribution). By definition, we have ε_j = ε₀ + jh = jh. Thus, the expression for the transition probability can finally be written in a very simple form:
P_{ij} = ½(1 ...