Lecture Notes in Control and Information Sciences
Edited by A. V. Balakrishnan and M. Thoma
20
Bo Egardt
Stability of Adaptive Controllers
Springer-Verlag Berlin Heidelberg New York 1979
Series Editors: A. V. Balakrishnan, M. Thoma

Advisory Board: L. D. Davisson, A. G. J. MacFarlane, H. Kwakernaak, Ya. Z. Tsypkin, A. J. Viterbi

Author: Dr. Bo Egardt, Dept. of Automatic Control, Lund Institute of Technology, S-220 07 Lund, Sweden
ISBN 3-540-09646-9 Springer-Verlag Berlin Heidelberg New York
ISBN 0-387-09646-9 Springer-Verlag New York Heidelberg Berlin

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine or similar means, and storage in data banks. Under § 54 of the German Copyright Law where copies are made for other than private use, a fee is payable to the publisher, the amount of the fee to be determined by agreement with the publisher.

© Springer-Verlag Berlin Heidelberg 1979
Printed in Germany
Printing and binding: Beltz Offsetdruck, Hemsbach/Bergstr.
PREFACE

The present work is concerned with the stability analysis of adaptive control systems in both discrete and continuous time. The attention is focussed on two well-known approaches, namely the model reference adaptive systems and the self-tuning regulators. The two approaches are treated in a general framework, which leads to the formulation of a fairly general algorithm. The stability properties of this algorithm are analysed and sufficient conditions for boundedness of closed-loop signals are given. The analysis differs from most other studies in this field in that disturbances are introduced in the problem.

Most of the material was originally presented as a Ph.D. thesis at the Department of Automatic Control, Lund Institute of Technology, Lund, Sweden, in December 1978. It is a pleasure for me to thank my supervisor, Professor Karl Johan Åström, who proposed the problem and provided valuable guidance throughout the work.
B. Egardt
Table of contents

1. INTRODUCTION  1

2. UNIFIED DESCRIPTION OF DISCRETE TIME CONTROLLERS  9
   2.1 Design method for known plants  9
   2.2 Class of adaptive controllers  13
   2.3 Example of the general control scheme  20
   2.4 The positive real condition  24

3. UNIFIED DESCRIPTION OF CONTINUOUS TIME CONTROLLERS  27
   3.1 Design method for known plants  27
   3.2 Class of adaptive controllers  30
   3.3 Examples of the general control scheme  35
   3.4 The positive real condition  41

4. STABILITY OF DISCRETE TIME CONTROLLERS  43
   4.1 Preliminaries  45
   4.2 L∞-stability  60
   4.3 Convergence in the disturbance-free case  77
   4.4 Results on other configurations  80
   4.5 Discussion  84

5. STABILITY OF CONTINUOUS TIME CONTROLLERS  87
   5.1 Preliminaries  87
   5.2 L∞-stability  95
   5.3 Convergence in the disturbance-free case  103

REFERENCES  107

APPENDIX A - PROOF OF THEOREM 4.1  111

APPENDIX B - PROOF OF THEOREM 5.1  132
1. INTRODUCTION
Generalities

Most of the current techniques to design control systems are based on knowledge of the plant and its environment. In many cases this information is, however, not available. The reason might be that the plant is too complex or that basic relationships are not fully understood, or that the process and the disturbances may change with operating conditions.

Different possibilities to overcome this difficulty exist. One way to attack the problem is to apply some system identification technique to obtain a model of the process and its environment from practical experiments. The controller design is then based on the resulting model. Another possibility is to adjust the parameters of the controller during plant operation. This can be done manually as is normally done for ordinary PID-controllers, provided that only a few parameters have to be adjusted. Manual adjustment is, however, not feasible if more than three parameters have to be adjusted. Some kind of automatic adjustment of the controller parameters is then needed.

Adaptive control is one possibility to tune the controller. In particular, self-tuning regulators and model reference adaptive systems are two widely discussed approaches to solve the problem for plants with unknown parameters. These techniques will be the main concern of the present work. Although these two approaches in practice can handle slowly time-varying plants, the design is basically made for constant but unknown plants. The basic ideas behind the two techniques are discussed below.
Self-tuning regulators

The self-tuning regulators (STR) are based on a fairly natural combination of identification and control. A design method for known plants is the starting-point. Since the plant is unknown, the parameters of the controller can, however, not be determined. They are instead obtained from a recursive parameter estimator. A separation between identification and control is thus assumed. Note that the only information from the estimator that is used by the control law is the parameter estimates. Schemes which utilize e.g. parameter uncertainties are not considered here.

Probably the first to formulate this simple idea as an algorithm was Kalman (1958). An on-line least-squares algorithm produced estimates of plant parameters. The estimates were then used at every sampling instant to compute a deadbeat control law. The self-tuning idea was brought up by Peterka (1970) and Åström/Wittenmark (1973) in a stochastic framework. Åström and Wittenmark's algorithm, based on minimum variance control, is described below in a simple case.

EXAMPLE 1.1
Consider the plant, given by

y(t) + a y(t-1) = b u(t-1) + e(t),

where u is the input, y the output and {e(t)} is a sequence of independent, zero-mean random variables. It is easy to see that the control law

u(t) = (a/b) y(t)

gives the minimum output variance. If the parameters a and b are unknown, the algorithm by Åström and Wittenmark can be applied. It consists of two steps, each repeated at every sampling instant:

1. Estimate the parameter α in the model

   y(t) = α y(t-1) + β₀ u(t-1) + ε(t),

   e.g. by minimizing Σ ε²(s). Denote the resulting estimate by α̂(t).

2. Use the control law

   u(t) = -(α̂(t)/β₀) y(t).
It should be noted that the estimation of α can be made recursively if a least-squares criterion is used. This makes the scheme practically feasible. □
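The two-step scheme of Example 1.1 can be simulated in a few lines. The sketch below is an illustrative reconstruction, not the book's code: the plant parameters, the noise level and the a priori value of β₀ are all assumed, and the recursive least-squares form of step 1 is the standard scalar RLS update.

```python
import numpy as np

# Illustrative sketch of the self-tuner of Example 1.1 (reconstruction;
# all numeric values are assumed).  Plant: y(t) + a*y(t-1) = b*u(t-1) + e(t).
# beta0 is fixed a priori; alpha is estimated by recursive least squares
# in the model y(t) = alpha*y(t-1) + beta0*u(t-1) + eps(t); true alpha = -a.
rng = np.random.default_rng(0)
a, b, beta0 = -0.9, 0.5, 0.5        # assumed plant parameters, beta0 = b
alpha_hat, p = 0.0, 100.0           # RLS estimate and scalar "covariance"
y_prev, u_prev = 0.0, 0.0
for t in range(500):
    y = -a * y_prev + b * u_prev + 0.1 * rng.standard_normal()  # plant
    # Step 1: recursive least-squares update of alpha_hat.
    phi = y_prev
    gain = p * phi / (1.0 + phi * p * phi)
    alpha_hat += gain * (y - alpha_hat * phi - beta0 * u_prev)
    p -= gain * phi * p
    # Step 2: certainty-equivalence minimum variance control law.
    u = -(alpha_hat / beta0) * y
    y_prev, u_prev = y, u
# alpha_hat should approach -a = 0.9
```

With these assumed values the closed loop stays stable throughout, since the closed-loop pole is 0.9 - α̂(t), which moves from 0.9 towards zero as the estimate converges.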
The above algorithm can easily be generalized to higher order plants with time delays. The paper by Åström and Wittenmark (1973) presented some analysis of the algorithm. The main conclusion was that if the algorithm converges at all, then it converges to the desired minimum variance controller, even if the noise {e(t)} is correlated. The latter result was somewhat surprising at that point. It has later been shown by Ljung (1977a) that the algorithm converges under a stability condition if the noise characteristics satisfy a certain positive realness condition. Similar results without the stability assumption were given by Goodwin et al. (1978b).
The self-tuning regulators are not confined to minimum variance control. For example, Åström/Wittenmark (1974) and Clarke/Gawthrop (1975) proposed generalizations of the basic algorithm. Algorithms based on pole placement design were discussed by Edmunds (1976), Wellstead et al. (1978) and Åström et al. (1978). Multivariable formulations are given by e.g. Borisson (1978).

The general configuration of a self-tuning regulator is shown in Fig. 1.1. The regulator can be thought of as composed of three parts: a parameter estimator, a controller, and a third part, which relates the controller parameters to the parameter estimates. This partitioning of the regulator is convenient when describing how it works and to derive algorithms. The regulator could, however, equally well be described as a single nonlinear regulator. There are of course many design methods and identification techniques that can be combined into a self-tuning regulator with this general structure. A survey of the field is given in Åström et al. (1977).
Figure 1.1. Block diagram of a self-tuning regulator.
Model reference adaptive systems

The area of model reference adaptive systems (MRAS) is more difficult to characterize in a general way. The main reason is that the many different schemes proposed have been motivated by different considerations. An early attempt to cope with e.g. gain variations in servo problems was Whitaker's "MIT-rule". Parks (1966) initiated a rapid development of the MRAS by using Lyapunov functions and stability theory in the design. He also observed the relevance of a certain positive realness condition. A simple example will illustrate the ideas.
EXAMPLE 1.2

A first order plant is assumed to have known time constant but unknown gain. The desired relationship between the input u and the output y is defined by a reference model with output yᴹ, see Fig. 1.2. The objective is thus to adjust the gain K such that e(t) = y(t) - yᴹ(t) tends to zero. The solution uses a Lyapunov function

V = (1/2)(e² + c R²),

where c > 0 and R = Kᴹ - K Kₚ. The derivative of V is

V̇ = -(1/T) e² - (1/T) R u e + c R Ṙ.

If the gain is adjusted according to

Ṙ = (1/(cT)) u e,    (1.1)

the derivative of V is negative semidefinite and it can be shown that the error e tends to zero. This implies that the objective is fulfilled. Note that (1.1) can equivalently be written as

K̇ = -(1/(cTKₚ)) u e

if Kₚ is assumed to be constant. Since c is arbitrary, this updating formula is possible to implement, although the adaptation rate will vary due to the unknown plant gain. □

Figure 1.2. Configuration of Example 1.2: the reference model Kᴹ/(1+sT) with output yᴹ, and the plant Kₚ/(1+sT) preceded by the adjustable gain K, with output y, both driven by the input u.
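The gain adaptation of Example 1.2 is easy to simulate with a forward-Euler discretization. The sketch below is illustrative only: the time constant, gains, adaptation rate and input signal are all assumed, and γ plays the role of 1/(cTKₚ), which is an admissible choice for any γ > 0 since c is arbitrary.

```python
import numpy as np

# Euler-discretized sketch of Example 1.2 (illustrative reconstruction;
# T, K_M, K_p, gamma, dt and the input signal are all assumed values).
T, K_M, K_p, gamma, dt = 1.0, 2.0, 4.0, 0.5, 0.01
y = y_M = K = 0.0
for step in range(200_000):
    u = np.sign(np.sin(0.5 * step * dt))   # persistently exciting input
    e = y - y_M                            # output error e = y - y_M
    y += dt * (-y + K * K_p * u) / T       # plant with adjustable gain K
    y_M += dt * (-y_M + K_M * u) / T       # reference model
    K += dt * (-gamma * u * e)             # gain adaptation law (1.1)
# K*K_p should approach K_M, i.e. K -> K_M/K_p = 0.5
```

The persistently exciting square-wave input is what drives the gain itself, and not only the error, to its correct value.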
The above example can be generalized considerably. The problem to follow a given reference signal was solved for higher order plants with unknown dynamics, see e.g. Gilbart et al. (1970) and Monopoli (1973). However, a crucial assumption in these references is that the plant's transfer function has a pole excess (i.e. difference between number of poles and number of zeros) equal to one. Monopoli (1974) proposed a modification of the earlier schemes to treat the general case. His ideas have inspired many authors in the field and in particular the stability problem associated with his scheme has been frequently discussed.

The basic idea behind the schemes can be described as in Fig. 1.3. The unknown plant is controlled by an adjustable controller. The desired behaviour of the plant is defined by a reference model. Some kind of adaptation mechanism modifies the parameters of the adjustable controller to minimize the difference between the plant output and the desired output. The methods to design the adaptation loop in MRAS have mostly been based on stability theory since Parks' important paper appeared.

Although the MRAS ideas were first developed for continuous time control, the same framework has been carried over to discrete time control. Surveys of the numerous variants of the technique are given by e.g. Landau (1974) and Narendra/Valavani (1976).
Figure 1.3. Block diagram of a model reference adaptive regulator.
Similarities between STR and MRAS

The STR and the MRAS were developed to solve different problems. The STR were originally designed to solve the stochastic regulator problem. The MRAS were developed to solve the deterministic servo problem. In spite of these differences, the two techniques exhibit some important similarities. This has been observed in e.g. Ljung (1977a) and Gawthrop (1977). The question has thus arisen, whether the two approaches are more closely related than earlier thought. Some answers are given in Ljung/Landau (1978), Narendra/Valavani (1978) and Egardt (1978).

The purpose of the first part of this work is to describe several MRAS and STR in a unified manner. The discussion is limited to systems with one input and one output. It is assumed that only the plant output is available for feedback. It will be shown that it is possible to derive MRAS from the STR point of view. This observation leads to the possibility to describe several MRAS and STR as special cases of a fairly general algorithm. The unified treatment also facilitates a comparison of the positive real conditions, which play an important role in the design and analysis of both MRAS and STR. It is shown that the condition can be removed in the deterministic case.

The discrete time case is covered by Chapter 2 and Chapter 3 gives the treatment for continuous time control. Since adaptive regulators are predominantly implemented using digital computers, the discrete time case is emphasized. The analysis is also a little simpler in that case.
Stability

There are a number of important properties of adaptive regulators which are poorly understood, e.g.

- overall stability
- convergence of the regulator
- properties of the possible limiting regulators
- effects of disturbances.
Overall stability of the closed loop system is perhaps the most fundamental property. This is of course important both practically and theoretically. The stability problem has also been encountered indirectly in most convergence studies. For MRAS without disturbances, boundedness of closed-loop signals was assumed to prove convergence of the output error to zero. See e.g. Feuer/Morse (1977), Narendra/Valavani (1978) and - for discrete time - Landau/Béthoux (1975). The paper by Feuer and Morse (1977) in fact contained a proof of global stability, but the algorithm considered was very complicated. For simpler schemes, the only rigorous convergence proofs without the stability requirement are the ones by Goodwin et al. (1978a) for discrete time, Egardt (1978) for both discrete and continuous time and Morse (1979) for continuous time. Goodwin et al. and Morse treat the disturbance-free case whereas Egardt (1978) contains results with disturbances, too.

Stability conditions are important also in the stochastic convergence analysis of STR. The convergence results presented in Ljung (1977a) for the minimum variance self-tuning regulator required a stability assumption. As mentioned above, similar results were given by Goodwin et al. (1978b) without the stability condition.

Stability analysis of adaptive schemes in the presence of disturbances is the topic of the second part. The stability properties of the algorithms described in Chapters 2 and 3 are investigated using the L∞-stability concept. The main effort is given to algorithms with a stochastic approximation type of estimation scheme. The main results (Theorems 4.1 and 5.1) state that the closed-loop signals remain bounded under some reasonable assumptions. The most important one - boundedness of parameter estimates - can be omitted if the algorithms are slightly modified. When no disturbances affect the plant, the stability results can be used to prove convergence of the output error to zero. This result thus holds without a priori requiring the closed loop to be stable and is analogous to the above mentioned results by Goodwin et al. (1978a) and Morse (1979). Chapter 4 treats the discrete time case and the continuous time schemes are analysed in Chapter 5.
2. UNIFIED DESCRIPTION OF DISCRETE TIME CONTROLLERS
The MRAS philosophy has been applied to the discrete time case in e.g. Landau/Béthoux (1975), Bénéjean (1977), and Ionescu/Monopoli (1977). Stability theory is the major design tool. The STR approach has been used almost exclusively for discrete time systems, see e.g. Åström/Wittenmark (1973), Clarke/Gawthrop (1975), and Åström et al. (1978). The basic idea is to use a certainty equivalence structure, i.e. to use a control law for the known parameter case and just replace the unknown parameters by their estimates. Since the control algorithms obtained by the MRAS and the STR approaches are very similar, it is of interest to investigate the connections between the two approaches. Results in this direction are given in Gawthrop (1977) and Ljung/Landau (1978).

A unified treatment of MRAS and STR for problems with output feedback will be presented in this chapter. It will be shown that MRAS can be derived from the STR point of view. Some problems in the design and analysis of the discrete time schemes are also discussed. In particular, the nature of the positive real condition, associated with both MRAS and STR, will be examined in detail. It is shown that this condition can be avoided in the deterministic case.
2.1. Design method for known plants

A design method, which will be the basis for the general adaptive algorithm in the next section, is described below. It consists of a pole placement combined with zero cancellation and adding of new zeros. Related schemes are given in e.g. Bénéjean (1977), Ionescu/Monopoli (1977), Gawthrop (1977), and Åström et al. (1978).

The plant is assumed to satisfy the difference equation

A(q⁻¹) y(t) = q⁻⁽ᵏ⁺¹⁾ b₀ B(q⁻¹) u(t) + w(t),    (2.1)

where q⁻¹ is the backward shift operator, k is a nonnegative integer, and A(q⁻¹) and B(q⁻¹) are polynomials defined by

A(q⁻¹) = 1 + a₁ q⁻¹ + ... + aₙ q⁻ⁿ
B(q⁻¹) = 1 + b₁ q⁻¹ + ... + bₘ q⁻ᵐ.
Furthermore, w(t) is a nonmeasurable disturbance.

REMARK

The parameter b₀ is not included in the B-polynomial, because it will be treated in a special way in the estimation part of the adaptive controller in the next section. □

The objective of the controller design is to make the difference between the plant output y(t) and the reference model output yᴹ(t) as small as possible. The reference output yᴹ is related to the command input uᴹ by the reference model, given by

yᴹ(t) = q⁻⁽ᵏ⁺¹⁾ Gᴹ(q⁻¹) uᴹ(t) = q⁻⁽ᵏ⁺¹⁾ [(b₀ᴹ + b₁ᴹ q⁻¹ + ... + bₘᴹ q⁻ᵐ) / (1 + a₁ᴹ q⁻¹ + ... + aₙᴹ q⁻ⁿ)] uᴹ(t) = q⁻⁽ᵏ⁺¹⁾ (Bᴹ(q⁻¹)/Aᴹ(q⁻¹)) uᴹ(t).    (2.2)
It is no restriction to assume that the polynomial degrees n and m are the same in the model and the plant, because coefficients may be zero in (2.2) and it is easy to add zeros or poles by modifying uᴹ. It is seen that the time delay of the reference model is greater than or equal to the time delay of the plant. This is a natural assumption to avoid noncausal control laws.

The problem will be approached by assuming the controller configuration shown in Fig. 2.1. Here R, S, and T are polynomials in the backward shift operator. Motivation for this structure can be found in e.g. Åström et al. (1978). It can be shown that the controller is closely related to the solution in a state space setup with Kalman filter and feedback from the state estimates. Notice that the process zeros are cancelled. This implies that only minimum phase systems can be considered. Other versions which allow nonminimum phase systems are discussed in Åström et al. (1978). The T-polynomial can be interpreted as the characteristic polynomial of an observer.
Figure 2.1. Controller configuration.
The design procedure will be given for two different problems. In the first one, the disturbances are neglected and the problem is treated as a pure servo-problem. This means that the design concentrates on tracking a given reference signal. The procedure will be referred to as a deterministic design. On the other hand, if the disturbance is considered as part of the problem, the controller should have a regulating property too. An interesting special case is when the disturbance w(t) is a moving average, given by

w(t) = C(q⁻¹) v(t) = (1 + c₁ q⁻¹ + ... + cₙ q⁻ⁿ) v(t),    (2.3)

where {v(t)} are independent, zero-mean random variables. A design procedure which has the objective to reject noise of the form (2.3) will be called stochastic. The deterministic design is considered first.
Deterministic design

Assuming w(t) = 0, it is possible to have the plant output equal to the reference model output yᴹ(t). This is obtained by making the closed-loop transfer function equal to the reference model transfer function, i.e.

q⁻⁽ᵏ⁺¹⁾ Bᴹ(q⁻¹)/Aᴹ(q⁻¹) = [q⁻⁽ᵏ⁺¹⁾ b₀ B(q⁻¹) T(q⁻¹) Bᴹ(q⁻¹)] / [A(q⁻¹) b₀ B(q⁻¹) R(q⁻¹) + q⁻⁽ᵏ⁺¹⁾ b₀ B(q⁻¹) S(q⁻¹)]

or, equivalently,

T(q⁻¹) Aᴹ(q⁻¹) = A(q⁻¹) R(q⁻¹) + q⁻⁽ᵏ⁺¹⁾ S(q⁻¹).    (2.4)
The observer polynomial T is cancelled in the closed-loop transfer function. Neglecting the effects of initial values, it can therefore be chosen arbitrarily. When T has been determined, the equation (2.4) has many solutions R and S. It will, however, be required that the degree of R is less than or equal to the time delay k. Then there is a unique solution to (2.4). The degree of S will depend on n, k, and the degree of T. Furthermore, it is required that R(0) ≠ 0 in order to get a causal control law. As seen from (2.4), this is equivalent to T(0) ≠ 0. Finally the R- and T-polynomials are scaled so that T(0) = R(0) = 1.

The deterministic design procedure can thus be summarized in the following steps:

1) Choose the polynomial T(q⁻¹) defined by

   T(q⁻¹) = 1 + t₁ q⁻¹ + ... + t_{nT} q^{-nT}.

2) Solve the polynomial equation

   T(q⁻¹) Aᴹ(q⁻¹) = A(q⁻¹) R(q⁻¹) + q⁻⁽ᵏ⁺¹⁾ S(q⁻¹)

   for the unique solutions R(q⁻¹) and S(q⁻¹), defined by

   R(q⁻¹) = 1 + r₁ q⁻¹ + ... + rₖ q⁻ᵏ
   S(q⁻¹) = s₀ + s₁ q⁻¹ + ... + s_{ns} q^{-ns},    ns = max(n + nT - k - 1, n - 1).
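Step 2) amounts to solving a polynomial Diophantine identity, which can be done by long division in ascending powers of q⁻¹ because A(0) = 1. The sketch below is one illustrative way to compute R and S numerically; the toy coefficients are made up and not taken from the text.

```python
import numpy as np

# Sketch of design step 2): solve T(q^-1)A_M(q^-1) = A(q^-1)R(q^-1)
# + q^-(k+1) S(q^-1) by long division in ascending powers of q^-1.
# Polynomials are coefficient arrays in ascending powers of q^-1.
def solve_identity(T, A_M, A, k):
    lhs = np.convolve(T, A_M)                       # T*A_M coefficients
    rem = np.concatenate([lhs.astype(float), np.zeros(k + len(A))])
    R = np.zeros(k + 1)
    for i in range(k + 1):                          # ascending long division
        R[i] = rem[i]                               # quotient coeff (A(0)=1)
        rem[i:i + len(A)] -= R[i] * A
    S = rem[k + 1:]                                 # remainder, shifted by k+1
    return R, np.trim_zeros(S, 'b')

# toy case (made-up coefficients): T = 1, A_M = 1 - 0.2 q^-1,
# A = 1 - 0.5 q^-1, k = 0
R, S = solve_identity(np.array([1.0]), np.array([1.0, -0.2]),
                      np.array([1.0, -0.5]), k=0)
# then A*R + q^-1*S = 1 - 0.5 q^-1 + 0.3 q^-1 = T*A_M
```

Note that R(0) = 1 comes out automatically, since T(0) = Aᴹ(0) = A(0) = 1, matching the scaling convention above.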
Stochastic design

The deterministic design procedure can of course be used also when disturbances are acting on the plant. The choice of observer polynomial will, however, be of importance not only during an initial transient period. If it is assumed that w(t) is given by (2.3), then it is well-known that the optimal choice of observer polynomial is T(q⁻¹) = C(q⁻¹), in the sense of minimum variance. This is explicitly demonstrated in Gawthrop (1977) as a generalization of the result on minimum variance regulators in Åström (1970).
2.2. Class of adaptive controllers

A general adaptive control scheme is defined in this section. The scheme is a self-tuning version of the controller described in the preceding section. It will be shown to include earlier proposed MRAS and STR as special cases.

The plant is still assumed to satisfy (2.1). The following assumptions are also introduced.

A1) The number of plant poles n and zeros m are known.

A2) The time delay k is known and the sign of b₀ is known. Without loss of generality b₀ is assumed positive.

A3) The plant is minimum phase, i.e. the numerator polynomial B(q⁻¹) in (2.1) has its zeros outside the unit circle.

REMARK

Notice that some coefficients in A(q⁻¹) or B(q⁻¹) may be zero. It therefore suffices to know an upper bound on the polynomial degrees to put the equation into the form of (2.1) with known n and m. The condition on k in A2) is the counterpart of the continuous time condition that the pole excess (i.e. the difference between the number of poles and number of zeros) is known. Compare Chapter 3. The minimum phase assumption was commented upon in Section 2.1. □
The objective of the controller is the same as in Section 2.1, i.e. to minimize the error defined by

e(t) = y(t) - yᴹ(t).

The controller to be described uses an implicit identification, Åström et al. (1978). This means that the controller parameters are estimated instead of the parameters of the model (2.1). The first step in the development of the algorithm is therefore to obtain a model of the plant, expressed in the unknown controller parameters. Thus, use the identity (2.4) and the equations (2.1) and (2.2) to write for the error:

T Aᴹ e(t) = T Aᴹ y(t) - T Aᴹ yᴹ(t) = (A R + q⁻⁽ᵏ⁺¹⁾ S) y(t) - T Aᴹ yᴹ(t) =
= q⁻⁽ᵏ⁺¹⁾ [b₀ B R u(t) + S y(t) - T Bᴹ uᴹ(t)] + R w(t).    (2.5)
To obtain some flexibility of the model structure, a filtered version of the error will be considered. Let Q and P be asymptotically stable polynomials, defined by

Q(q⁻¹) = 1 + q₁ q⁻¹ + ... + q_{nQ} q^{-nQ}

P(q⁻¹) = P₁(q⁻¹) P₂(q⁻¹) = 1 + p₁ q⁻¹ + ... + p_{nP} q^{-nP},

where P₁ and P₂ are factors of P of degree nP₁ and nP₂ respectively. It is assumed that P₁(0) = P₂(0) = 1. Define the filtered error by

e_f(t) = (Q(q⁻¹)/P(q⁻¹)) e(t).

Note that e_f(t) is a known quantity, because y(t) is measured and yᴹ(t), Q, and P are known. Using (2.5), e_f(t) can be written as

e_f(t) = (Q/(T Aᴹ)) q⁻⁽ᵏ⁺¹⁾ [b₀ (B R/P) u(t) + (S/P) y(t) - (T Bᴹ/P) uᴹ(t)] + (Q R/(T Aᴹ P)) w(t) =

= (Q/(T Aᴹ)) q⁻⁽ᵏ⁺¹⁾ [b₀ u(t)/P₁ + b₀ ((B R - P₂)/P) u(t) + (S/P) y(t) - (T Bᴹ/P) uᴹ(t)] + (Q R/(T Aᴹ P)) w(t).    (2.6)
REMARK

The polynomials Q and P give the necessary flexibility to cover both MRAS and STR. The exact choices of the polynomials and their degrees will be commented on in the examples in Section 2.3. It should also be noted that instead of polynomials Q and P, one could consider rational functions. We will however not elaborate this case. □

The general adaptive controller will first be given for the deterministic design case.
Deterministic design

The observer polynomial is now determined a priori. Let θ be a vector containing the unknown parameters of the polynomials B R - P₂ and S/b₀ and the constant 1/b₀ as the last element. Note that θ contains the parameters of the controller described in Section 2.1. Furthermore, define the vector φ(t) from

φᵀ(t) = [u(t-1)/P, u(t-2)/P, ..., y(t)/P, y(t-1)/P, ..., -(T Bᴹ/P) uᴹ(t)],    (2.7)

where the numbers of u- and y-terms are compatible with the definition of θ. Note that the elements of φ are known signals. Using the definitions of θ and φ, it is possible to write (2.6) as

e_f(t) = (Q/(T Aᴹ)) q⁻⁽ᵏ⁺¹⁾ [b₀ u(t)/P₁ + b₀ θᵀ φ(t)] + (Q R/(T Aᴹ P)) w(t).    (2.8)

This model, which contains the unknown controller parameters b₀ and θ, can be taken as a basis for a class of adaptive controllers. The intention is to estimate the unknown parameters b₀ and θ, and to use these estimates in the control law. The estimation procedure can be designed e.g. to force a prediction error of e_f(t) to zero. Note that e_f(t) is itself a known quantity. Taking the different possibilities of choosing e.g. estimation algorithm and control law into consideration, a class of controllers can be characterized in the following way.
BASIC CONTROL SCHEME

o Estimate the unknown parameters b₀ and θ (or some combination of these) in the model (2.8).

o Use these estimates to determine the control signal.

A natural requirement on the controller is that it performs as the controller in Section 2.1 if the parameter estimates are equal to the true parameters.
Stochastic design

The algorithm described above can of course be used also when w ≢ 0. However, if w(t) is given by (2.3) with an unknown C-polynomial, it was seen in Section 2.1 that the choice T = C is optimal. Since C is unknown, it might be desirable to estimate it. Some minor changes are then needed. Concatenate the θ-vector with a vector whose elements are the unknown parameters of C/b₀. Also, redefine the φ-vector as

φᵀ(t) = [u(t-1)/P, u(t-2)/P, ..., y(t)/P, y(t-1)/P, ..., uᴹ(t-1)/P, ..., -(Bᴹ/P) uᴹ(t)].    (2.9)

The filtered error can then be written as

e_f(t) = (Q/(C Aᴹ)) q⁻⁽ᵏ⁺¹⁾ [b₀ u(t)/P₁ + b₀ θᵀ φ(t)] + (Q R/(Aᴹ P)) v(t),    (2.10)

which constitutes the model for a class of algorithms in the same way as in the deterministic design case.

The class of algorithms described above contains many different schemes. Apart from the selection of the polynomials Q and P and the choice between fixed or estimated observer polynomial, the choices of control law and estimation algorithm generate different schemes. The choice of estimation algorithm will be commented on in connection with some examples in Section 2.3 and further discussed in Section 2.4. To proceed, it is however suitable to specify one particular method.
A characteristic feature of the model reference methods is that the estimation is based on a model like (2.8), where the parameters b₀ and θ enter bilinearly. The estimation scheme will be described in the deterministic design case. Let b̂₀(t-1) and θ̂(t-1) denote estimates at time t-1 of b₀ and θ. Using the model (2.8), a one step ahead prediction of e_f(t) is defined as

ê_f(t|t-1) = (Q/(T Aᴹ)) [b̂₀(t-1) u(t-k-1)/P₁ + b̂₀(t-1) θ̂ᵀ(t-1) φ(t-k-1)].    (2.11)

The prediction error ε(t) is defined as

ε(t) = e_f(t) - ê_f(t|t-1),    (2.12)

where e_f(t) is given by (2.8), and is usually used in the parameter updating. The following expression is obtained for ε(t) if it is assumed that the disturbance w(t) is equal to zero:

ε(t) = (Q/(T Aᴹ)) [[b₀ - b̂₀(t-1)] (u(t-k-1)/P₁ + θ̂ᵀ(t-1) φ(t-k-1)) + b₀ [θ - θ̂(t-1)]ᵀ φ(t-k-1)].    (2.13)

The following parameter updating is used in the constant gain case:

[b̂₀(t), θ̂ᵀ(t)]ᵀ = [b̂₀(t-1), θ̂ᵀ(t-1)]ᵀ + Γ ψ(t) ε(t),    (2.14)

where Γ is a constant, positive definite matrix and

ψ(t) = [u(t-k-1)/P₁ + θ̂ᵀ(t-1) φ(t-k-1), φᵀ(t-k-1)]ᵀ.

REMARK

It is straightforward to define stochastic approximation (SA) or least squares (LS) versions of the algorithm (2.14). For LS, Γ is replaced by P(t) = [Σᵗ ψ(s) ψᵀ(s)]⁻¹ and an SA variant uses e.g. (1/tr P⁻¹(t)) I instead of Γ. □
positive real t r a n s f e r function in order to establish
convergence of ~ ( t ) to zero. The motivation is the successful use of Lyapunov theory and the Kalman-Yakubovich lemma in continuous time, see Chapter 3. The problems that arise w i l l
be discussed next. Let us
j u s t b r i e f l y comment on the stochastic case. The algorithm given by (2.11),
(2.12), and (2.14) cannot be d i r e c t l y applied to the model
(2.10), because the C-polynomial is unknown. This implies that the prediction cannot be calculated according to (2.11). An easy modification is to replace C in f r o n t of the paranthesis with an a p r i o r i estimate of C or even with unity.
Choice of control law

The control law given in Section 2.1 can be written as

u(t) = − P1(q^-1) [ θ^T φ(t) ],

where θ is the vector of true parameters. Compare (2.6), (2.8). Any reasonable control law should equal this one when the parameter estimates are correct. Notice that a parameter estimator like (2.14) has the objective to force the prediction error ε(t) to zero. It would thus be desirable to choose a control such that êf(t|t−1) is equal to zero, because convergence of ef(t) to zero would then follow from the convergence of ε(t) to zero, cf. (2.12). This is accomplished by the control law

u(t) = − P1(q^-1) [ θ̂^T(t+k) φ(t) ]

as seen from (2.11). This control law is however noncausal. It is therefore natural to modify it in the following way:

u(t) = − P1(q^-1) [ θ̂^T(t) φ(t) ].     (2.15)

This control law is used in all control schemes of the type considered.
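As a minimal illustration (hypothetical numbers, and with P1 = 1 so that the filter drops out), the causal law (2.15) is plain certainty-equivalence feedback on the current estimates:

```python
import numpy as np

def control(theta_hat, phi):
    """Certainty-equivalence control u(t) = -theta_hat(t)^T phi(t) (case P1 = 1).

    With theta_hat equal to the true parameters this reproduces the known-parameter
    controller; the noncausal variant would need theta_hat(t + k), which is why the
    current estimate is substituted instead.
    """
    return -float(theta_hat @ phi)

theta_hat = np.array([0.5, -0.2])   # current estimates (hypothetical values)
phi = np.array([1.0, 2.0])          # lagged inputs and outputs
print(control(theta_hat, phi))      # approximately -0.1 = -(0.5*1.0 - 0.2*2.0)
```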
Difficulties with convergence analysis

There are two key problems in the analysis of the schemes of MRAS type described above. The first problem is that the control law (2.15) has to be used if a causal control law is required. This implies that êf(t|t−1) is not equal to zero in the case k ≠ 0. This in turn means that it is not easy to conclude that ef(t) tends to zero even if ε(t) tends to zero.

The second problem is to show that ε(t) tends to zero. Consider for simplicity the case k = 0, which is analogous to the case for continuous time systems where the pole excess is equal to one, cf. Chapter 3. Then ε(t) is equal to ef(t) if the control law (2.15) is used. Contrary to the continuous time case, convergence of ef(t) to zero cannot be proved straightforwardly. The reason is the following one. If the control law (2.15) is used and it is assumed that b0 = 1, the equation (2.13) can be written

ε(t) = ef(t) = H(q^-1) · q^-1 [ −θ̃^T(t) φ(t) ].     (2.16)

Here

H(q^-1) = Q(q^-1) / [ T(q^-1) AM(q^-1) ]

and θ̃(t) = θ̂(t) − θ. In continuous time the estimation error ε(t) is given by

ε(t) = G(p) [ −θ̃^T(t) φ(t) ].

Compare Chapter 3. Positive realness of G(p) can be used to prove the convergence of ε(t) to zero. It is however not possible to use the same arguments in discrete time, because the transfer function H(q^-1)·q^-1 can never be made positive real. The difference appears because a discrete time transfer function must contain a feedthrough term to be strictly positive real, whereas a continuous time transfer function may be strictly proper. This difficulty is also emphasized in Landau/Béthoux (1975).
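The obstruction is easy to see numerically: strict positive realness requires Re G(e^{iω}) > 0 for all ω, but the extra delay q^-1 alone rotates the phase past 90 degrees. A small illustrative check (not from the text):

```python
import numpy as np

# SPR requires Re G(e^{iw}) > 0 on the whole unit circle.
w = np.linspace(0.0, np.pi, 1000)
G1 = np.exp(-1j * w)                                   # G = q^-1 with H = 1 (best case)
G2 = np.exp(-1j * w) / (1.0 - 0.5 * np.exp(-1j * w))   # q^-1 times a stable first-order H

print(G1.real.min())   # reaches -1 at w = pi: not SPR
print(G2.real.min())   # also negative: the delay alone destroys positive realness
```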
The problem mentioned above and also, in the case k ≠ 0, the previously mentioned problem to relate convergence of ε(t) and ef(t) are closely related to the boundedness of the signals of the closed loop system. This is pointed out in e.g. Ionescu/Monopoli (1977).

2.3. Examples of the general control scheme

Some special cases of the basic control scheme, proposed in the preceding section, will now be given. Both model reference adaptive systems and self-tuning regulators will be shown to fit into the general description.
EXAMPLE 2.1. Ionescu's and Monopoli's scheme

The scheme in Ionescu/Monopoli (1977) is a straightforward translation into discrete time of the continuous time MRAS by Monopoli (1974). It is possible to treat the scheme as a special case of the general algorithm in the following way. Choose the polynomials as

P1 = T           of degree k
P2               of degree n−1
Q = P = P1 P2    of degree n+k−1.

The equation (2.6) then transforms into

ef(t) = e(t) = (P2/AM) q^-(k+1) [ b0 u(t)/P1 + b0 (BR − P2) u(t)/P + S y(t)/P − (BM/P2) uM(t) ],     (2.17)

where the disturbance w has been assumed to be zero as in the original presentation. This is the model used by Ionescu and Monopoli, and the estimation scheme is similar to the one in (2.14). The polynomial P2 is chosen to make the transfer function P2/AM strictly positive real. Some modifications of the estimation scheme are done to handle the problems discussed in the preceding section, although no complete solution is presented. The concept of augmented error, introduced in Monopoli (1974), is translated into discrete time. It can be shown that the augmented error η(t) in the case k = 0 is given by

η(t) = ε(t) − (P2/AM) [ K η(t) |φ(t−1)|² ],

where K is a constant. It is shown that η(t) tends to zero, but a boundedness assumption is needed to establish convergence of ε(t) or ef(t). Finally it should be noted that the polynomials P1 and P2 are called Zf and Zw in Ionescu/Monopoli (1977).     □
EXAMPLE 2.2. Bénéjean's scheme

A discrete time MRAS is presented in Bénéjean (1977). It can be shown that the algorithm is very similar to Ionescu's and Monopoli's scheme. The model used by Bénéjean is obtained by reparametrizing (2.17) as follows:

ef(t) = e(t) = (P2/AM) q^-(k+1) [ b0 (u(t)−uM(t))/P1 + b0 (BR − P2) (u(t)−uM(t))/P + S y(t)/P + (b0 BR − BM P1) uM(t)/P ].

The estimation algorithm used is similar to the one used by Ionescu and Monopoli. Note that more parameters have to be estimated because of the reparametrization.     □

In the two MRAS examples above the natural choice Q = P has been used. This implies that the filtered error ef(t) equals the error e(t). Another possibility is to choose the polynomials so that the transfer function Q/TAM becomes very simple. This is done below.
EXAMPLE 2.3. Self-tuning pole placement algorithm

A pole placement algorithm with fixed observer polynomial is described in Åström et al. (1978). It can be generated from the general structure in the following way. Choose the polynomials as

Q = TAM
P = P1 = P2 = 1,

which means that ef(t) = TAM e(t). This implies that (2.8) has the simple form

ef(t) = q^-(k+1) [ b0 u(t) + b0 θ^T φ(t) ],     (2.18)

where the elements of φ are simply lagged input and output signals. The disturbance has been assumed to be zero. The model (2.18) is used for the self-tuning regulator with a minor modification. The parameters which are estimated by a least squares algorithm are b0 and b0θ. Since the last element in θ is 1/b0, the effect is that one parameter is known to be equal to one. If θ and φ are redefined not to include the last known element, the equation (2.18) can be written as

ef(t) = TAM [ y(t) − yM(t) ] = q^-(k+1) [ b0 u(t) + b0 θ^T φ(t) ] − TAM yM(t),

which is the model used.     □

In the three examples above the choice of observer polynomial T was made in advance. However, if there is noise of the form given by (2.3), the optimal choice of observer polynomial is T = C, which is unknown. It can then be estimated as described in Section 2.2. Below some schemes of this type will be described.
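As an illustration of the regression form (2.18) (with k = 0, a single lagged output in φ, and the disturbance zero), the parameters b0 and b0θ can be recovered from data by batch least squares. The numbers below are hypothetical:

```python
import numpy as np

# Synthetic data for a regression of the form (2.18) with k = 0:
#   ef(t) = b0*u(t-1) + b0*theta*y(t-1)          (disturbance assumed zero)
rng = np.random.default_rng(1)
b0, theta = 2.0, 0.5                       # hypothetical true parameters
u = rng.standard_normal(100)               # columns already shifted: these are
y = rng.standard_normal(100)               # the lagged signals u(t-1), y(t-1)
ef = b0 * u + b0 * theta * y

X = np.column_stack([u, y])
coef, *_ = np.linalg.lstsq(X, ef, rcond=None)
print(coef)                                # [2. 1.] = [b0, b0*theta]
```

The recursive variant of this fit is what the self-tuning regulator runs online.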
EXAMPLE 2.4. Åström's and Wittenmark's self-tuning regulator

The basic self-tuning regulator is described in Åström/Wittenmark (1973). It is based on a minimum variance strategy, which minimizes the output variance. This is a special case of the problem considered in Section 2.1 with AM = 1 and uM = yM = 0. Inserting this into (2.6) and using the polynomials Q = P = 1, the following is obtained:

ef(t) = y(t) = (1/C) q^-(k+1) [ b0 u(t) + b0 (BR − 1) u(t) + S y(t) ] + R v(t).

This model can be written analogously with (2.10) as

ef(t) = y(t) = (1/C) q^-(k+1) [ b0 u(t) + θ^T φ(t) ] + R v(t)     (2.19)

and is the basis for the self-tuning regulator. Since C is unknown, the prediction is chosen as in (2.11) with T = C replaced by unity. Compare the discussion in Section 2.2. Hence,

êf(t|t−1) = ŷ(t|t−1) = b̂0(t−1) u(t−k−1) + θ̂^T(t−1) φ(t−k−1).     (2.20)

The fact that C is included in (2.19) but not in (2.20) makes it somewhat unexpected that the algorithm really converges to the optimal minimum variance regulator. It is shown in Ljung (1977a) that the scheme (with a stochastic approximation estimation algorithm) converges if 1/C is strictly positive real. If instead a least squares estimation algorithm is used, convergence holds if 1/C − 1/2 is SPR. The condition on 1/C and its relation to the positive real condition for MRAS will be further examined in the following section.     □
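Ljung's conditions are easy to check numerically for a given C-polynomial by evaluating on the unit circle. In this sketch the coefficients are hypothetical; a first-order C satisfies the least squares condition while a stable second-order C violates it:

```python
import numpy as np

def spr_margin(c_coeffs, n=2000):
    """min over the unit circle of Re[1/C(e^{-iw}) - 1/2]; positive means SPR holds."""
    w = np.linspace(0.0, np.pi, n)
    z = np.exp(-1j * w)
    C = sum(c * z**k for k, c in enumerate(c_coeffs))   # C(q^-1) at q^-1 = e^{-iw}
    return (1.0 / C - 0.5).real.min()

print(spr_margin([1.0, 0.9]))          # > 0: C = 1 + 0.9 q^-1 satisfies the condition
print(spr_margin([1.0, -1.6, 0.8]))    # < 0: a stable second-order C violating it
```

For first-order C = 1 + c1 q^-1 the margin is positive whenever |c1| < 1, so the condition only begins to bite for higher-order noise models.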
EXAMPLE 2.5. Clarke's and Gawthrop's self-tuning controller

Clarke and Gawthrop (1975) consider a 'generalized output'

φ(t) = P(q^-1) y(t) + Q(q^-1) u(t−k−1) − R(q^-1) uM(t−k−1)

and apply the basic self-tuning regulator to the system generating this output. It is possible to treat the algorithm within the general structure in the special case Q = 0 in their notation. Thus change the notation into:

φ(t) = AM(q^-1) y(t) − q^-(k+1) BM(q^-1) uM(t).

Then it follows that φ(t) equals ef(t) = AM e(t) if P = 1 and Q = AM. If the noise is given by (2.3) and T is chosen to be equal to C, the equation (2.6) can be written as

ef(t) = (1/C) q^-(k+1) [ b0 u(t) + b0 (BR − 1) u(t) + S y(t) − C BM uM(t) ] + R v(t).

This is the model used in the self-tuning controller. The fact that the first parameter in C is known to be unity is exploited. The prediction is calculated as in Example 2.4, i.e. C in front of the parenthesis is replaced by unity. The estimation scheme is a least squares algorithm.     □
2.4. The positive real condition

A special model structure and a specific estimation scheme were described in Section 2.2. The structure was obtained from an analogy with the model reference adaptive systems in continuous time. The intention was to use the properties of positive real transfer functions to establish convergence. It was noted in Example 2.4 that a positive real condition also appears in the analysis of a self-tuning regulator in the presence of noise. The relations between the conditions in the two cases will be treated below.

First consider the deterministic design case and for simplicity assume that k = 0 and b0 = 1. If the control law (2.15) is used, we have from (2.16)

ε(t) = − H(q^-1) [ θ̃^T(t−1) φ(t−1) ].     (2.21)
We want to show in a simple way that a positive real condition really appears in the analysis in a natural way. To do so, assume that a modified version of the parameter updating (2.14) is used:

θ̂(t) = θ̂(t−1) + [ φ(t−1) / |φ(t−1)|² ] ε(t).     (2.22)

This algorithm is similar to stochastic approximation schemes and is used in e.g. Ionescu/Monopoli (1977).
Subtract the true parameter vector θ from both sides, multiply from the left by the transpose and use (2.21) to get

|θ̃(t)|² = |θ̃(t−1)|² + 2 θ̃^T(t−1) φ(t−1) ε(t) / |φ(t−1)|² + ε²(t) / |φ(t−1)|²

         = |θ̃(t−1)|² − 2 ε(t) [ (1/H) ε(t) ] / |φ(t−1)|² + ε²(t) / |φ(t−1)|²

         = |θ̃(t−1)|² − 2 ε(t) [ (1/H − 1/2) ε(t) ] / |φ(t−1)|².     (2.23)

It can be seen that the positive real condition enters in a natural
way. If 1/H − 1/2 is positive real, the parameter error will eventually decrease. Moreover, ε(t)/|φ(t−1)| tends to zero if 1/H − 1/2 is SPR. It should be noted that the boundedness condition mentioned in Section 2.2 appears because (2.23) only proves convergence of ε(t)/|φ(t−1)|.

It is straightforward to show that the positive real condition can be avoided. Thus, let x̄ denote the signal obtained by filtering x by Q/TAM and rewrite (2.8) as

ef(t) = q^-1 [ ū(t)/P1(q^-1) + θ^T φ̄(t) ],     (2.24)

where the same assumptions as above are used. Now consider this as being the model instead of (2.8). The prediction (2.11) is then replaced by

êf(t|t−1) = ū(t−1)/P1 + θ̂^T(t−1) φ̄(t−1),

which is different from (2.11) because θ̂(t) is time-varying. Instead of (2.21) we then have
ε(t) = − θ̃^T(t−1) φ̄(t−1).

If the parameter updating (2.22) is replaced by

θ̂(t) = θ̂(t−1) + [ φ̄(t−1) / |φ̄(t−1)|² ] ε(t),     (2.25)

the following is obtained:

|θ̃(t)|² = |θ̃(t−1)|² + 2 θ̃^T(t−1) φ̄(t−1) ε(t) / |φ̄(t−1)|² + ε²(t) / |φ̄(t−1)|² = |θ̃(t−1)|² − ε²(t) / |φ̄(t−1)|².

It thus follows that ε(t)/|φ̄(t−1)| tends to zero without any positive real condition. Of course the boundedness of the closed loop signals mentioned in Section 2.2 is still a problem. The conclusion is that it is possible to eliminate the positive real condition in the deterministic design case if a modified estimation scheme is used.
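The cancellation in the last calculation is exact and can be verified numerically. In this sketch (synthetic data, hypothetical numbers) the update (2.25) with ε(t) = −θ̃^T(t−1)φ̄(t−1) reduces |θ̃|² by exactly ε²/|φ̄|² at every step:

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true = np.array([1.0, -2.0, 0.5])   # hypothetical true parameters
theta = np.zeros(3)                       # the estimate theta_hat
for _ in range(200):
    phi = rng.standard_normal(3)          # stands in for the filtered regressor phi_bar(t-1)
    eps = -(theta - theta_true) @ phi     # eps(t) = -theta_tilde^T phi_bar (noise-free case)
    old = np.sum((theta - theta_true) ** 2)
    theta = theta + phi * eps / (phi @ phi)              # update (2.25)
    new = np.sum((theta - theta_true) ** 2)
    assert np.isclose(new, old - eps**2 / (phi @ phi))   # the exact decrease shown above
print(theta)  # converges to theta_true -- no positive real condition involved
```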
Now consider the stochastic design, where the observer polynomial C is estimated. The transfer function H(q^-1), which was previously known, now contains the unknown C-polynomial. This implies that the filtering in (2.24) and (2.25) cannot be done with the true C-polynomial. The positive real condition then enters in the same way as in Example 2.4. The positive real condition on H(q^-1) = 1/C(q^-1) and a boundedness condition are in fact sufficient to assure convergence for the self-tuning regulator in Example 2.4, see Ljung (1977a). A natural modification in order to weaken the condition on C is to filter with 1/Ĉ(t), where Ĉ(t) is the time-varying estimate of C. This is further discussed in Ljung (1977a).

The conclusion of the discussion above is that the positive real conditions, which appear in the analysis of both deterministic MRAS and stochastic STR, are of a similar technical nature. There is, however, an important difference. The condition can be eliminated in the deterministic case by choosing another estimation algorithm, which includes filtering by the transfer function H(q^-1). In the stochastic case, the positive real condition cannot be dispensed with in the same way, because the filter is unknown.
3. UNIFIED DESCRIPTION OF CONTINUOUS TIME CONTROLLERS

The MRAS schemes were originally developed in continuous time. The solution for the problem with output feedback was given in Gilbart et al. (1970) for the easy case with pole excess of the plant equal to one or two. The pole excess is defined as the difference between the number of poles and the number of zeros. The solution was reformulated in a nice way by Monopoli (1973). Monopoli (1974) introduced the concept of augmented error to treat the general case. Similar schemes are proposed by Bénéjean (1977), Feuer/Morse (1977), and Narendra/Valavani (1977).

Self-tuning regulators have not been formulated in continuous time before. Yet, it is of interest to relate the MRAS philosophy and the separation idea behind the STR in continuous time too. In this chapter some MRAS schemes will be derived in a unified manner from the STR point of view. The development gives a new interpretation of the augmented error, introduced by Monopoli. Some problems in the analysis are also pointed out and the positive real condition for MRAS is examined. It is shown that the condition can be dispensed with.

It should be noted that the treatment of the continuous time schemes is not as complete as for discrete time. Only the deterministic design is considered. It should, however, be possible to carry through a development, analogous with discrete time, in the stochastic design case too.
3.1. Design method for known plants

Before a unified description of several algorithms is given, the known parameter case has to be considered. A design scheme, which includes interesting special cases, will be described in this section. It is analogous to the discrete time procedure in Section 2.1. The scheme is given in Åström (1976) and special cases are treated in e.g. Narendra/Valavani (1977), and Bénéjean (1977).
The plant is assumed to satisfy the differential equation

y(t) = [ b0 B(p)/A(p) ] u(t) = [ b0 (p^m + b1 p^(m−1) + … + b_m) / (p^n + a1 p^(n−1) + … + a_n) ] u(t),     (3.1)

where p denotes the differential operator.

REMARK 1

It is assumed that there is no disturbance. It is convenient to make this assumption in this chapter, because the design is deterministic. Disturbances will, however, be introduced in the stability analysis in Chapter 5.     □

REMARK 2

The parameter b0 is not included in the B-polynomial, because it will be treated in a special way in the estimation part of the adaptive controller in the next section. Compare Chapter 2.     □
The objective of the controller is to make the closed-loop transfer operator equal to a reference model transfer operator, given by

yM(t) = [ BM(p)/AM(p) ] uM(t) = [ (bM0 p^m + … + bMm) / (p^n + aM1 p^(n−1) + … + aMn) ] uM(t).     (3.2)

Here yM(t) is the desired output of the closed loop system and uM(t) is the command input. It is seen that the pole excess of the reference model is greater than or equal to the pole excess of the plant. This assumption is made to avoid differentiators in the control law.

Analogous to the discrete time case, a controller structure as shown in Fig. 3.1 will be considered. The controller polynomials R, S, and T are polynomials in the differential operator p. The configuration is motivated in e.g. Åström (1976). As in the discrete time case it is related to a solution with Kalman filter and state estimate feedback. The T-polynomial can be interpreted as an observer polynomial. Also note that the B-polynomial is cancelled, restricting the design method to minimum phase systems.
Figure 3.1. Controller configuration.

The desired closed-loop transfer function is obtained if the polynomials R, S, and T are chosen to satisfy the equation

BM(p)/AM(p) = b0 B(p) T(p) BM(p) / [ A(p) b0 B(p) R(p) + b0 B(p) S(p) ]

or, equivalently,

T(p) AM(p) = A(p) R(p) + S(p).     (3.3)
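Because the degree of S is restricted to be at most n−1, solving (3.3) amounts to polynomial division of T·AM by A: R is the quotient and S the remainder. A numerical sketch with hypothetical example polynomials:

```python
import numpy as np

# Coefficients in descending powers of p (hypothetical example with n = 2, nT = 1):
A  = np.array([1.0, 3.0, 2.0])      # A(p)  = p^2 + 3p + 2   (plant denominator)
AM = np.array([1.0, 2.0, 1.0])      # AM(p) = p^2 + 2p + 1   (desired closed-loop poles)
T  = np.array([1.0, 4.0])           # T(p)  = p + 4          (observer polynomial)

# T*AM = A*R + S with deg S <= n-1, so R and S are quotient and remainder:
R, S = np.polydiv(np.polymul(T, AM), A)
print(R)    # [1. 3.]   -> R(p) = p + 3 (monic, degree nT)
print(S)    # [-2. -2.] -> S(p) = -2p - 2
print(np.allclose(np.polymul(T, AM), np.polyadd(np.polymul(A, R), S)))  # True
```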
The observer polynomial T(p) is cancelled in the closed-loop transfer function. If the effects of initial values are neglected, it can therefore be chosen arbitrarily. When T(p) has been determined, the equation (3.3) has many solutions S(p) and R(p). It will, however, be assumed that the degree of S(p) is less than or equal to n−1, which assures that the equation has a unique solution, Åström (1976). Since the polynomials A(p) and AM(p) both have degree n, T(p) and R(p) will have the same degree nT. In order to assure that the control law does not contain any derivatives of the output, nT is chosen greater than or equal to n−m−1. Furthermore, R and T are scaled so that they are monic.

In summary then, the design scheme consists of the following steps:

1) Choose the monic polynomial T(p)

T(p) = p^nT + t1 p^(nT−1) + … + t_nT,   nT ≥ n−m−1.

2) Solve the equation T(p) AM(p) = A(p) R(p) + S(p) for the unique solutions R(p) and S(p), defined by

S(p) = s0 p^(n−1) + … + s_(n−1)
R(p) = p^nT + r1 p^(nT−1) + … + r_nT.

The first step, the choice of T(p) (including its degree), does not affect the closed-loop transfer function. However, it is of importance for the transient properties and the effect of disturbances, as was seen in Chapter 2. The importance of the noise colour for the choice of observer will, however, not influence the discussion in this chapter, since only the deterministic design case is considered.
3.2. Class of adaptive controllers

The idea behind the self-tuning regulators will be used in this section to define a general class of adaptive regulators. These regulators will be adaptive versions of the controller described in Section 3.1. The class of algorithms will later be shown to include several MRAS schemes as special cases.

The plant is still assumed to satisfy (3.1). The following assumptions are also introduced.

A1) The degrees n and m are known and m ≤ n−1.
A2) The parameter b0 is nonzero and its sign is known. Without loss of generality b0 is assumed to be positive.
A3) The plant is minimum phase.

REMARK

Notice that it is sufficient to know the pole excess and an upper bound on the number of poles to write the differential equation in the form of (3.1) with known n and m. Knowledge of the pole excess is the counterpart of the discrete time condition that the time delay is known, cf. Chapter 2. The minimum phase assumption was discussed in Section 3.1.     □

The desired closed-loop transfer function is given by (3.2). The first step in the development is to use the results in Section 3.1 to obtain a model, expressed in the unknown controller parameters. Compare with Section 2.2. The polynomial identity (3.3) and the equations (3.1) and (3.2) are used to get the following expression for the error e(t) = y(t) − yM(t):

TAM e(t) = TAM y(t) − TAM yM(t) = (AR + S) y(t) − TAM yM(t) = b0 BR u(t) + S y(t) − T BM uM(t).     (3.4)
Let P1(p), P2(p), and Q(p) be stable, monic polynomials of degree n−m−1, m+nT, and n+nT−1 respectively, and let P(p) be given by P(p) = P1(p) P2(p). Define the filtered error

ef(t) = [ Q(p)/P(p) ] e(t),

which thus is a known quantity. Using (3.4), ef(t) can be written as

ef(t) = (Q/TAM) [ b0 BR u(t)/P + S y(t)/P − T BM uM(t)/P ]
      = (Q/TAM) [ b0 u(t)/P1 + b0 (BR − P2) u(t)/P + S y(t)/P − T BM uM(t)/P ].     (3.5)

REMARK

The motive to introduce the polynomials Q and P and the filtered error is the flexibility obtained. Different choices of polynomials will be seen to generate different MRAS schemes in the examples in the next section. Also compare with Chapter 2. It should also be noted that Q and P could be chosen as rational functions, but this generalization will not be considered here.     □
Let θ be a vector containing the unknown parameters of the polynomials BR − P2 (degree m+nT−1) and S/b0 (degree n−1) and the constant 1/b0 as the last element. Note that the vector θ contains the parameters of the controller described in Section 3.1. Furthermore, define the vector

φ^T(t) = [ p^(m+nT−1) u(t)/P, …, u(t)/P, p^(n−1) y(t)/P, …, y(t)/P, −T BM uM(t)/P ].     (3.6)

It is then possible to rewrite the expression (3.5) for the filtered error ef(t) as

ef(t) = (Q/TAM) [ b0 u(t)/P1 + b0 θ^T φ(t) ].     (3.7)

This model provides the starting-point for a class of adaptive controllers as in discrete time. Note that ef(t) is still a known quantity. As before, there is a lot of freedom when specifying the estimation algorithm and the control law. The development done so far thus proposes a class of adaptive controllers, defined in two steps.

BASIC CONTROL SCHEME

o Estimate the unknown parameters b0 and θ (or some combination of these) in the model (3.7).
o Use these estimates to determine the control signal.

To make the discussion easier, it is convenient to specify a particular estimation algorithm as was done in discrete time.
A special parameter estimator

A specific structure of the estimation part will be discussed below. It is of special interest because many MRAS schemes use this structure. It is analogous to the special configuration for discrete time controllers discussed in Chapter 2. Using the model (3.7), an estimate of ef(t) is defined as

êf(t) = (Q/TAM) [ b̂0(t) u(t)/P1 + b̂0(t) θ̂^T(t) φ(t) ],
… ≤ … − c ε²(t)/r(t) + (1/(c r(t))) [ (R(q^-1)/P(q^-1)) w(t) ]²,     (4.15)

where c is a positive constant.     □
Proof

Write (4.6a,b) in terms of b̃0(t) and θ̃(t) and multiply them by their transposes:

b̃0²(t) = b̃0²(t−1) + 2 (ε(t)/r(t)) b̃0(t−1) [ u(t−k−1)/P1 + θ̂^T(t−1) φ(t−k−1) ] + (ε²(t)/r²(t)) [ u(t−k−1)/P1 + θ̂^T(t−1) φ(t−k−1) ]²

|θ̃(t)|² = |θ̃(t−1)|² + 2 β0 (ε(t)/r(t)) θ̃^T(t−1) φ(t−k−1) + β0² (ε²(t)/r²(t)) |φ(t−k−1)|².

Add the second equation, multiplied by b0/β0, to the first equation, which gives

b̃0²(t) + (b0/β0) |θ̃(t)|² ≤ b̃0²(t−1) + (b0/β0) |θ̃(t−1)|² + 2 (ε(t)/r(t)) (w̄(t) − ε(t)) + max(1, …) (ε²(t)/r²(t)) [ ( u(t−k−1)/P1 + θ̂^T(t−1) φ(t−k−1) )² + β0² |φ(t−k−1)|² ],

where w̄(t) = (R(q^-1)/P(q^-1)) w(t) and where (4.4), (4.6e) and (4.14) are used in the last step. Let

c = 1 − … max(1, …).

Then c is positive from the assumptions. Insert this into the inequality above and use (4.6c) to obtain
2 ε(t) (w̄(t) − ε(t)) / r(t) + (1 − c) ε²(t)/r(t) = − (1 + c) ε²(t)/r(t) + 2 ε(t) w̄(t)/r(t)

≤ − (1 + c) ε²(t)/r(t) + c ε²(t)/r(t) + (1/(c r(t))) w̄²(t)

≤ − c ε²(t)/r(t) + (1/(c r(t))) [ (R(q^-1)/P(q^-1)) w(t) ]²,

which concludes the proof.     □
Corollary

The same result holds for the DSA-algorithm with stochastic design if R is replaced by R0 and φ(t) is defined by (4.9).     □

Proof

The same proof still holds.     □
A corresponding result concerning the DLS-algorithm with known b0 is given in the following lemma.

LEMMA 4.3

Let θ̃(t) be defined by (4.14). Assume that b0 is known. Then the following holds for the DLS-algorithm for the deterministic design, (4.8):

θ̃^T(t) P^-1(t) θ̃(t) = λ θ̃^T(t−1) P^-1(t−1) θ̃(t−1) − [ λ / (λ + φ^T(t−k−1) P(t−1) φ(t−k−1)) ] ε²(t)/b0² + (1/b0²) [ (R(q^-1)/P(q^-1)) w(t) ]².     (4.16)     □
Proof

Write (4.8a) in terms of θ̃(t) and multiply from the left by P^-1/2(t). This gives

P^-1/2(t) θ̃(t) = P^-1/2(t) θ̃(t−1) + (1/b0) P^1/2(t) φ(t−k−1) ε(t)

and after multiplication with the transpose

θ̃^T(t) P^-1(t) θ̃(t) = θ̃^T(t−1) P^-1(t) θ̃(t−1) + (2/b0) θ̃^T(t−1) φ(t−k−1) ε(t) + (1/b0²) φ^T(t−k−1) P(t) φ(t−k−1) ε²(t)

= λ θ̃^T(t−1) P^-1(t−1) θ̃(t−1) + [ θ̃^T(t−1) φ(t−k−1) ]² + (2/b0) θ̃^T(t−1) φ(t−k−1) ε(t) + (1/b0²) φ^T(t−k−1) P(t) φ(t−k−1) ε²(t).

If (4.7), (4.8d) and (4.14) are used, the following is obtained:

θ̃^T(t) P^-1(t) θ̃(t) − λ θ̃^T(t−1) P^-1(t−1) θ̃(t−1) = − ε²(t)/b0² + (1/b0²) [ (R(q^-1)/P(q^-1)) w(t) ]² + (1/b0²) φ^T(t−k−1) P(t) φ(t−k−1) ε²(t).     (4.17)

The updating formula for P(t) is given by (4.8f). Multiply this equation from the left by φ^T(t−k−1) and from the right by φ(t−k−1). This gives

φ^T(t−k−1) P(t) φ(t−k−1) = φ^T(t−k−1) P(t−1) φ(t−k−1) / (λ + φ^T(t−k−1) P(t−1) φ(t−k−1)).

Insert this into the equation (4.17) to get

θ̃^T(t) P^-1(t) θ̃(t) − λ θ̃^T(t−1) P^-1(t−1) θ̃(t−1) = − [ λ / (λ + φ^T(t−k−1) P(t−1) φ(t−k−1)) ] ε²(t)/b0² + (1/b0²) [ (R(q^-1)/P(q^-1)) w(t) ]²,

which is identical to (4.16).     □
Corollary

The same result holds for the DLS-algorithm with stochastic design if R is replaced by R0 and φ(t) is defined by (4.9).     □

Proof

The proof remains the same.     □

The results of Lemma 4.2 and Lemma 4.3 can be interpreted in the following way. The estimation errors b̃0 and θ̃ decrease if the prediction error ε(t) is large. On the other hand, the errors increase if the noise magnitude is large. This is natural intuitively.
4.2. L∞-stability

The main results on L∞-stability will be given in this section. For convenience, make the following

DEFINITION

The closed loop system is L∞-stable if uniformly bounded disturbance (w) and command (uM) signals give uniformly bounded input (u) and output (y) signals.     □

It will thus be assumed in the sequel that w(t) and uM(t) are uniformly bounded. The main part of this section is devoted to the DSA-algorithm. The idea behind the stability analysis is the heuristic argument given in the beginning of this chapter. It was pointed out that there are some shortcomings of the argument. Firstly, it is necessary to show that not only a few of the parameter estimates become accurate when the signals are growing large. This is no problem for the DSA-algorithm. The second problem mentioned seems to be more difficult. It takes some time for the estimates to become accurate even if the signals are very large. The discussion thus requires that the output does not increase arbitrarily fast and that the parameter adjustment is not too slow. The latter condition is the reason why we do not consider estimation algorithms with decreasing gains. Compare the definitions of the DSA- and DLS-algorithms. The possibility that the output may increase arbitrarily fast is closely related to the magnitude of the parameter estimates. It will be eliminated by guaranteeing that the estimates are bounded. The following example illustrates that unbounded parameter estimates can lead to instability.
EXAMPLE 4.2

Consider a plant given by

y(t) + a y(t−1) = b0 u(t−1) + w(t),

where b0 is known to be unity. Assume that the reference model is yM(t) = uM(t−1). Choose Q = P = T = 1. Equation (4.4) can then be written as

y(t) − yM(t) = u(t−1) + s y(t−1) − uM(t−1) + w(t),     (4.18)

where s = −a. Since b0 is known, the prediction error can be written as

ε(t) = − s̃(t−1) y(t−1) + w(t),     (4.19)

where

s̃(t) = ŝ(t) − s.

With λ = 0 and β0 = 1 in the DSA-algorithm, the updating of the parameter estimate is given by

ŝ(t) = ŝ(t−1) + y(t−1) ε(t) / (1 + y²(t−1)).

This equation can be expressed in s̃(t) as

s̃(t) = s̃(t−1) + y(t−1) [ w(t) − s̃(t−1) y(t−1) ] / (1 + y²(t−1)).     (4.20)

The control law corresponding to (4.6f) is

u(t) = − ŝ(t) y(t) + uM(t),

which can be inserted into (4.18) to give

y(t) = − s̃(t−1) y(t−1) + uM(t−1) + w(t).     (4.21)

Eqs. (4.20) and (4.21) describe the closed loop system.
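The closed loop (4.20)-(4.21) is easy to simulate. The sketch below (hypothetical signals) also shows why the disturbance is essential to the construction: with w = 0, (4.20) reduces to s̃(t) = s̃(t−1)/(1 + y²(t−1)), so the parameter error can only shrink:

```python
def step(s_err, y, w, uM):
    """One step of the closed loop: s_err = s_tilde(t-1), y = y(t-1)."""
    y_next = -s_err * y + uM + w                          # (4.21)
    s_next = s_err + y * (w - s_err * y) / (1.0 + y * y)  # (4.20)
    return s_next, y_next

s_err, y = 0.8, 1.0                       # hypothetical initial error and output
errors = [abs(s_err)]
for t in range(20):                       # undisturbed loop: w = 0, bounded command uM = 1
    s_err, y = step(s_err, y, w=0.0, uM=1.0)
    errors.append(abs(s_err))
print(errors[-1] < errors[0])             # True: with w = 0 the error never grows
```

The destabilizing w and uM sequences of the example are not reproduced here; the point of the sketch is only that a nonzero disturbance is needed to pump the error up.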
The basic idea with the example is to show that the closed loop system is unstable by finding a disturbance w and a command input uM such that the parameter error s̃(t) can increase without limit. Thus, assume that the recursion (4.20), (4.21) starts at t = 1 with s̃(1) = 0, y(1) = 1. Define

f(t) = (√t − √(t−1)) (1 + …),   t = 2, 3, …, T−5,

for some large T. Choose the following disturbance

w(t) = 1 − 1/√t + f(t),   t = 2, 3, …, T−5,

and the following command signal

uM(t−1) = 1 − f(t),   t = 2, 3, …, T−5.

The signals w and uM are bounded. It is then easy to show that

s̃(t) = √t − 1,   y(t) = 1/√t,   for t = 1, …, T−5.

Further, let

w(t) = 0,   t = T−4, …, T,
uM(t−1) = 0 for t = T−4, and 1 for t = T−3, …, T.

It is then easy to check that s̃(t) and y(t) for large T are approximately given by:

t        s̃(t)          y(t)
T−4      √T             −1
T−3      √T/2           √T
T−2      1/(2√T)        −T/2
T−1      2/T^(5/2)      √T/4
T        ≈0             ≈1

Now choose w(T+1) and uM(T) such that s̃(T+1) = 0 and y(T+1) = 1. The state vector of (4.20), (4.21) is then equal to the initial state. By repeating the procedure for increasing values of T, a subsequence of y(t) will increase as T/2 and therefore is unbounded. The result of a simulation with T = 50, 100, 150, … is shown in Fig. 4.4.     □
Figure 4.4. Simulation results for Example 4.2.
The example shows that bounded disturbance and command signals can be found such that the output is unbounded. The assumption of bounded disturbance and command signals is thus not sufficient to guarantee L∞-stability. Some additional assumption is needed. Boundedness of parameter estimates is chosen here and other possibilities are discussed in Section 4.5. It should finally be noted that the same technique can be used to derive examples of instability with any λ < 1.

L∞-stability for the DSA-algorithm

The main result on L∞-stability for the DSA-algorithm is given in the following theorem.

THEOREM 4.1 (DSA-algorithm with noise)

Consider the plant (4.1) controlled by the DSA-algorithm with deterministic or stochastic design. Assume that assumptions A1−A3 are satisfied. Moreover assume that the parameter estimates are uniformly bounded and that b0 < 2β0. Then the closed-loop system is L∞-stable.     □
Proof The f u l l
proof for the d e t e r m i n i s t i c design is given in Appendix A. I t
can be concluded immediately that the proof holds also f o r the stocha s t i c design, using the representation (4.12) instead of (4.10). Some minor changes are needed, such as replacing R by R0 and Q/TAM by Q/AM. The q>-vector w i l l also contain more uM-components in the stochastic design case, see (4.9), The proof of the theorem is unfortunately f a i r l y
technical. An o u t l i n e
of the proof w i l l therefore be given. The idea of the proof is to examine the behaviour of the algorithm when l ~ ( t ) I is growing from an a r b i t r a r i l y large value to a larger one. The time i n t e r v a l under consideration can be shown to increase with the difference between the values i f the rate of growth is l i m i t e d . This is done in Step 1 of the proof. I t follows from Lemma 4.1 that e ( t ) must be large when I ~ ( t ) l
increases.
65
I t must in fact be of the order of l ~ ( t ) I many times i f the interval where I ~ ( t ) l increases is long. This is shown in the f i r s t part of Step 3 of the proof. Since r ( t ) is of the order of I ~ ( t ) l 2 and ~(t) is of the order of e ( t ) , i t then follows from Lemma 4.2 that the parameter errors decrease s i g n i f i c a n t l y at many time instants. Neglecting the noise term, i t thus follows from the boundedness of the estimates that there is a contradiction, which implies that a r b i t r a r i l y large values of l ~ ( t ) l do not exist. This is shown in the second part of Step 3. However, i f the disturbance w(t) is nonzero the parameter errors could increase in the intervals between any two time instants where they decrease. See Lemma 4.2. Hence, i t is important to get an upper bound on the length of these intervals. This is done in Step 2 of the proof, which u t i l i z e s the same kind of arguments as Step 3.
□
The conditions of the theorem have all been discussed earlier, except for the condition b₀ < 2β₀. This condition enters via Lemma 4.2. It will be shown below that the condition is in fact necessary for global stability. Consider, however, first the local stability properties. For simplicity assume that k = 0 and that w = 0. Linearize the equations for the closed-loop system around the true parameter values and a constant uM. It is then straightforward to verify that of the eigenvalues corresponding to Eq. (4.6b), all but one are equal to one, and the remaining eigenvalue is equal to 1 − b₀(1−λ)/β₀. A necessary condition for local stability is thus that

β₀ > b₀(1−λ)/2.

It is interesting to note that the condition requires only that β₀ is positive in the limit case λ = 1. This is exactly the condition which is met in the convergence analysis in presence of noise in Ljung/Wittenmark (1974). It will be shown in the following example that the condition β₀ > b₀(1−λ)/2 must be strengthened to β₀ > b₀/2 in order to assure global stability. The condition is also discussed in Åström/Wittenmark (1973) and Ljung/Wittenmark (1974).
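The local-stability condition can be checked numerically. The sketch below uses illustrative values for the plant gain b₀ and the forgetting factor λ (not taken from the text), and simply tests whether the linearized eigenvalue 1 − b₀(1−λ)/β₀ lies strictly inside the unit circle:

```python
# Linearized eigenvalue of the estimation loop: 1 - b0*(1 - lam)/beta0.
# Local stability requires |eig| < 1, which for b0, beta0 > 0 and lam < 1
# reduces to beta0 > b0*(1 - lam)/2. (Numerical values are illustrative.)
def stable(b0, beta0, lam):
    eig = 1.0 - b0 * (1.0 - lam) / beta0
    return abs(eig) < 1.0

b0, lam = 1.0, 0.95
assert stable(b0, beta0=0.5, lam=lam)       # 0.5 > b0*(1-lam)/2 = 0.025
assert not stable(b0, beta0=0.02, lam=lam)  # 0.02 < 0.025: eigenvalue leaves unit circle
```

Note that the eigenvalue is always below one; it is the lower bound −1 that yields the condition β₀ > b₀(1−λ)/2.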
EXAMPLE 4.3
Consider the plant and the controller described in Example 4.1. If uM is set to zero, only one parameter is estimated, namely s₀. It is easy to check that the estimation error s̃₀(t) is given by

s̃₀(t) = s̃₀(t−1) (1 − β₀ y²(t−1)/r(t)).

Let r(0) = 0 and y(0) = 1. Assume that β₀ = 1/2 − δ for some arbitrarily small δ > 0. Straightforward calculations then show that |y(t)| tends to infinity if the initial error s̃₀(0) is sufficiently large. The closed-loop system is thus not globally stable with this choice of β₀. ∎
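The role of β₀ in an error recursion of this form can be seen by iterating it numerically. The output sequence and the normalization r(t) = λ r(t−1) + y²(t−1) used below are assumptions for illustration only, not the closed-loop signals of the example:

```python
# Iterate s(t) = s(t-1) * (1 - beta0*y(t-1)**2 / r(t)),
# with r(t) = lam*r(t-1) + y(t-1)**2 (normalization assumed for illustration).
# A larger adaptation gain beta0 makes the per-step factor smaller,
# so the parameter error decays faster.
def run(beta0, y_seq, lam=0.99, s0=10.0, r0=0.0):
    s, r = s0, r0
    for y in y_seq:
        r = lam * r + y * y
        s *= 1.0 - beta0 * y * y / r
    return s

y_seq = [1.0] * 200                  # a persistently exciting constant signal
fast = abs(run(beta0=0.6, y_seq=y_seq))
slow = abs(run(beta0=0.05, y_seq=y_seq))
assert fast < slow                   # larger beta0 -> faster error decay
```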
Several results on boundedness of the closed-loop signals can be derived from Theorem 4.1. Consider first the case where the disturbance w(t) is zero. This is the situation most often analysed in connection with model reference adaptive regulators. The following theorem gives a solution to the boundedness problem discussed before.
THEOREM 4.2 (DSA-algorithm without noise)
Consider the plant (4.1) with no noise, i.e. w(t) = 0, controlled by the DSA-algorithm with deterministic design, (4.6). Assume that A1 - A3 are fulfilled and that b₀ < 2β₀. Then the closed-loop system is L∞-stable. □
Proof It follows from Lemma 4.2 that the parameter estimates are bounded if w(t) = 0. Theorem 4.1 then gives the result. □
The corresponding result is also true for the stochastic design case. The result is however not given, because it seems unrealistic to assume that there is no noise when the decision has been made to estimate the optimal observer from noise characteristics.
It appears that the conditions for the stability result above are fairly mild. The condition on β₀ has been shown to be necessary for global stability. Also, the choice λ < 1 is the common one in real applications. However, the assumption that the disturbance is equal to zero is not very satisfactory. It would thus be desirable to improve the result in Theorem 4.1 without the a priori assumption of boundedness of parameter estimates. Below, two stability results are presented, which treat modified versions of the DSA-algorithm.
THEOREM 4.3 (DSA-algorithm with conditional updating)
Consider the plant (4.1), controlled by the DSA-algorithm with deterministic or stochastic design, modified so that the estimates are kept constant when the regressor is small:

b̂₀(t) = b̂₀(t−1)
θ̂(t) = θ̂(t−1)        if |φ(t)| ≤ 2Kw/…

[The remainder of the theorem statement, and a stretch of proof consisting of a chain of integral estimates of ∫|φ(s)| ds and ∫|ε(s)| ds over the intervals (T_{j−1}, T_{j+1}), are illegible in the source.]
It is then easy to see that (B.30) still holds if ε(t) is replaced by ε(t)·|φ(t)| in the derivation.

(ii) Lemma 5.1 is used in the same way as above in Step 3 of the proof. Since the technique is similar, the above modifications can be used here too.

(iii) The modification might cause φ(t) to be zero sometimes. The estimate used in the proof of Lemma B.4 is, however, still valid.

The conclusion is that Theorem 5.1 can be applied, and the theorem is proven. □
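The conditional-updating device can be sketched in discrete time as follows. The threshold constant, the scalar model y = θφ (noise-free here), and the normalization are illustrative assumptions, not the book's exact equations:

```python
# Conditional (dead-zone) updating: skip the parameter update whenever the
# regressor is below a threshold tied to the noise bound Kw, so that
# noise-dominated measurements cannot drive the estimate.
# (Threshold choice c*Kw and the scalar model are illustrative.)
def update(theta, phi, y, r, beta0=1.0, lam=0.99, Kw=0.5, c=1.0):
    if abs(phi) <= c * Kw:        # regressor too small: freeze the estimate
        return theta, r
    r = lam * r + phi * phi       # normalization, cf. (A.5c)
    eps = y - theta * phi         # prediction error
    return theta + beta0 * phi * eps / r, r

theta, r = 0.0, 1.0
true_theta = 2.0
for phi in [3.0, 0.1, 2.5, 0.2, 4.0, 3.5]:    # the small phi's are skipped
    y = true_theta * phi                       # noise-free for the sketch
    theta, r = update(theta, phi, y, r)
assert abs(theta - true_theta) < abs(0.0 - true_theta)   # error decreased
```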
The comments on the modification of the estimation algorithm in discrete time apply here too. Another possibility to obtain bounded estimates is to project them into a bounded region. This idea is exploited in the following theorem.
THEOREM 5.4 (CSA-algorithm with projection)
Consider the plant (5.1) controlled by the CSA-algorithm, modified in the following way:
[The modified estimator equations (5.15) are illegible in the source; the update is altered whenever the estimate leaves the ball |θ̂(t)| ≤ C.]

Here γ is a positive constant and the constant C satisfies

C > 2 max(…),        (5.16)

a lower bound determined by the true plant parameters b₀ and θ. Assume that A1 - A4 are satisfied. Then the closed-loop system is L∞-stable. □
Proof Define

ϑᵀ = [b₀  θᵀ],   ϑ̂ᵀ(t) = [b̂₀(t)  θ̂ᵀ(t)],   ϑ̃ᵀ(t) = [b̃₀(t)  θ̃ᵀ(t)].
[About two pages of calculations, leading to a bound on ε̇(t), are garbled in the source.]
The parameter estimates and |φ(t)| are bounded. Also, ε(t) is bounded, as was seen above. Furthermore, H(p) is asymptotically stable, and r(t) is bounded from below by rmin, as shown in the proof of Lemma B.3, Appendix B. Finally, |φ̇(t)| is bounded from the proof of Lemma B.2. It is thus possible to conclude that ε̇(t) is bounded. Hence

d/dt [ε²(t)] = 2 ε(t) ε̇(t)

is bounded. It then follows from (5.19) that ε(t) → 0, t → ∞.
In the same way as in Lemma B.4 (Appendix B), we have

ε₁(t) = b̂₀(t) (G(p)[ϑ̃ᵀ(t) φ(t)]) / r(t),

where G(p) is a strictly proper, asymptotically stable transfer operator. Since |b̂₀(t)| and |ϑ̃(t)| are bounded and r(t) ≥ rmin for all t, it thus follows that ε₁(t) → 0, t → ∞. This implies that

ef(t) = ε(t) + ε₁(t) → 0,   t → ∞.

Hence

y(t) − yM(t) = e(t) = (P(p)/Q(p)) ef(t) → 0,   t → ∞,

because Q(p) is asymptotically stable and P(p)/Q(p) is proper. □
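The projection idea of Theorem 5.4 — keeping the estimate inside a ball |θ̂| ≤ C — can be sketched as follows. This hard radial projection is one common realization of the idea and is illustrative only; the exact modification (5.15)-(5.16) and its bound on C in terms of the true parameters are not reproduced here:

```python
import math

# Radial projection of the parameter estimate onto the ball |theta| <= C.
# Estimates inside the ball are untouched; estimates outside are scaled
# back onto the boundary, which keeps them uniformly bounded by C.
def project(theta, C):
    norm = math.sqrt(sum(x * x for x in theta))
    if norm <= C:
        return list(theta)
    return [C * x / norm for x in theta]

assert project([3.0, 4.0], C=10.0) == [3.0, 4.0]   # inside: unchanged
assert project([3.0, 4.0], C=1.0) == [0.6, 0.8]    # norm 5 scaled to radius 1
```

Applied after every update step, such a device guarantees bounded estimates regardless of the disturbance, which is exactly what the stability analysis needs.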
The output error thus converges to zero for the general algorithm defined in Chapter 3, provided that the estimation scheme and control law are chosen as for the CSA-algorithm. The output error will also converge to zero if the estimator is chosen as in the corollary or remark of Theorem 5.1. In particular, convergence of the output error is assured for the MRAS proposed earlier by Monopoli (1974) and Narendra/Valavani (1977), if some minor modifications of the algorithms are made. Firstly, the signals should be filtered by the transfer function Q/TAM. In fact, this modification seems to improve the properties of the algorithms, cf. Example 4.1. Secondly, the parameter adjustment should have r(t) (or |φ(t)|²) in the denominator. Finally, the control law should be chosen as in (5.6f). It should also be noted that the same conclusions can be made for the algorithms by Bénéjean (1977) and Feuer/Morse (1977) and for the new algorithm proposed in Section 3.3.

As in the discrete-time case, it is possible to go one step further and investigate conditions for the convergence of the parameter estimates. It is then necessary to assume that the number of parameters is chosen correctly. Note that this was not required for the above results to hold. The convergence of parameter estimates has been examined by others, e.g. Carroll/Lindorff (1973), Lüders/Narendra (1973), Kudva/Narendra (1973), Morgan/Narendra (1977). The well-known conditions on the frequency contents of the input signal are introduced to assure convergence of the parameter estimates. The problem will be left with these remarks.
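The recommendation above — dividing the parameter adjustment by r(t) or |φ(t)|² — can be illustrated by comparing the per-step error factor of an unnormalized and a normalized gradient step in the scalar case (the signal values below are illustrative):

```python
# For the scalar model y = theta*phi, a gradient step multiplies the
# parameter error by (1 - step), where step = gain*phi^2 unnormalized,
# or step = gain after dividing the update by |phi|^2.
# Normalization makes the factor independent of the regressor size,
# so large signals cannot destabilize the update.
def err_factor(phi, gain, normalized):
    step = gain * phi * phi
    if normalized:
        step /= phi * phi            # the r(t)- (here |phi|^2-) normalization
    return abs(1.0 - step)           # multiplicative factor on the error

assert err_factor(phi=10.0, gain=0.5, normalized=True) == 0.5
assert err_factor(phi=10.0, gain=0.5, normalized=False) == 49.0  # error grows
```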
REFERENCES

Albert A E, Gardner L A (1967): Stochastic Approximation and Nonlinear Regression. The MIT Press, Cambridge, Mass.

Åström K J (1970): Introduction to Stochastic Control Theory. Academic Press, New York.

Åström K J (1976): Reglerteori (in Swedish). Almqvist & Wiksell (2nd edition), Stockholm.

Åström K J (1979): New implicit adaptive pole-placement algorithms for nonminimum phase systems. Report to appear, Dept of Automatic Control, Lund Institute of Technology, Lund, Sweden.

Åström K J, Bohlin T (1965): Numerical identification of linear dynamic systems from normal operating records. IFAC Symp on Theory of Self-adaptive Control Systems, Teddington, England.

Åström K J, Wittenmark B (1973): On self-tuning regulators. Automatica 9, 185-199.

Åström K J, Wittenmark B (1974): Analysis of a self-tuning regulator for nonminimum phase systems. Preprints of the IFAC Symp on Stochastic Control, Budapest, Hungary, 165-173.

Åström K J, Borisson U, Ljung L, Wittenmark B (1977): Theory and applications of self-tuning regulators. Automatica 13, 457-476.

Åström K J, Westerberg B, Wittenmark B (1978): Self-tuning controllers based on pole-placement design. CODEN: LUTFD2/(TFRT-3148)/1-52/(1978), Dept of Automatic Control, Lund Institute of Technology, Lund, Sweden.

Bénéjean R (1977): La commande adaptative à modèle de référence évolutif. Université Scientifique et Médicale de Grenoble, France.

Borisson U (1979): Self-tuning regulators for a class of multivariable systems. Automatica 15, 209-215.

Carroll R L, Lindorff P D (1973): An adaptive observer for single-input, single-output linear systems. IEEE Trans Autom Control AC-18, 428-435.

Clarke D W, Gawthrop P J (1975): Self-tuning controller. Proc IEE 122, 929-934.

Edmunds J M (1976): Digital adaptive pole shifting regulators. PhD dissertation, Control Systems Centre, University of Manchester, England.

Egardt B (1978): Stability of model reference adaptive and self-tuning regulators. CODEN: LUTFD2/(TFRT-1017)/1-163/(1978), Dept of Automatic Control, Lund Institute of Technology, Lund, Sweden.

Feuer A, Morse A S (1977): Adaptive control of single-input, single-output linear systems. Proc of the 1977 IEEE Conf on Decision and Control, New Orleans, USA, 1030-1035.

Feuer A, Morse A S (1978): Local stability of parameter-adaptive control systems. Johns Hopkins Conf on Information Sciences and Systems.

Feuer A, Barmish B R, Morse A S (1978): An unstable dynamical system associated with model reference adaptive control. IEEE Trans Autom Control AC-23, 499-500.

Gawthrop P J (1977): Some interpretations of the self-tuning controller. Proc IEE 124, 889-894.

Gawthrop P J (1978): On the stability and convergence of self-tuning algorithms. Report 1259/78, Dept of Engineering Science, University of Oxford.

Gilbart J W, Monopoli R V, Price C F (1970): Improved convergence and increased flexibility in the design of model reference adaptive control systems. Proc of the IEEE Symp on Adaptive Processes, Univ of Texas, Austin, USA.

Goodwin G C, Ramadge P J, Caines P E (1978a): Discrete time multivariable adaptive control. Div of Applied Sciences, Harvard University.

Goodwin G C, Ramadge P J, Caines P E (1978b): Discrete time stochastic adaptive control. Div of Applied Sciences, Harvard University.

Ionescu T, Monopoli R V (1977): Discrete model reference adaptive control with an augmented error signal. Automatica 13, 507-517.

Kalman R E (1958): Design of a self-optimizing control system. Trans ASME 80, 468-478.

Kudva P, Narendra K S (1973): Synthesis of an adaptive observer using Lyapunov's direct method. Int J Control 18, 1201-1210.

Kudva P, Narendra K S (1974): An identification procedure for discrete multivariable systems. IEEE Trans Autom Control AC-19, 549-552.

Landau I D (1974): A survey of model reference adaptive techniques - theory and applications. Automatica 10, 353-379.

Landau I D, Béthoux G (1975): Algorithms for discrete time model reference adaptive systems. Proc of the 6th IFAC World Congress, Boston, USA, paper 58.4.

Ljung L (1977a): On positive real transfer functions and the convergence of some recursive schemes. IEEE Trans Autom Control AC-22, 539-550.

Ljung L (1977b): Analysis of recursive stochastic algorithms. IEEE Trans Autom Control AC-22, 551-575.

Ljung L, Landau I D (1978): Model reference adaptive systems and self-tuning regulators - some connections. Preprints of the 7th IFAC World Congress, Helsinki, Finland, 1973-1980.

Ljung L, Wittenmark B (1974): Asymptotic properties of self-tuning regulators. Report TFRT-3071, Dept of Automatic Control, Lund Institute of Technology, Lund, Sweden.

Ljung L, Wittenmark B (1976): On a stabilizing property of adaptive regulators. IFAC Symp on Identification and System Parameter Estimation, Tbilisi, USSR.

Lüders G, Narendra K S (1973): An adaptive observer and identifier for a linear system. IEEE Trans Autom Control AC-18, 496-499.

Monopoli R V (1973): The Kalman-Yakubovich lemma in adaptive control system design. IEEE Trans Autom Control AC-18, 527-529.

Monopoli R V (1974): Model reference adaptive control with an augmented error signal. IEEE Trans Autom Control AC-19, 474-484.

Morgan A P, Narendra K S (1977): On the uniform asymptotic stability of certain linear nonautonomous differential equations. SIAM J Control 15, 5-24.

Morse A S (1979): Global stability of parameter-adaptive control systems. Dept of Engineering and Applied Science, Yale University.

Narendra K S, Valavani L S (1976): Stable adaptive observers and controllers. Proc of the IEEE 64, 1198-1208.

Narendra K S, Valavani L S (1977): Stable adaptive controller design, part I: Direct control. Proc of the IEEE Conf on Decision and Control, New Orleans, USA, 881-886.

Narendra K S, Valavani L S (1978): Direct and indirect adaptive control. Preprints of the 7th IFAC World Congress, Helsinki, Finland, 1981-1987.

Osburn P V, Whitaker H P, Kezer A (1961): New developments in the design of adaptive control systems. Inst of Aeronautical Sciences, paper 61-39.

Parks P C (1966): Lyapunov redesign of model reference adaptive control systems. IEEE Trans Autom Control AC-11, 362-367.

Peterka V (1970): Adaptive digital regulation of noisy systems. 2nd IFAC Symp on Identification and System Parameter Estimation, Prague, Czechoslovakia.

Wellstead P E, Prager D, Zanker P, Edmunds J M (1978): Self tuning pole/zero assignment regulators. Report 404, Control Systems Centre, University of Manchester, England.
APPENDIX A - PROOF OF THEOREM 4.1
Theorem 4.1 Consider a plant described by

A(q⁻¹) y(t) = q^{−(k+1)} b₀ B(q⁻¹) u(t) + w(t)        (A.1)

or, alternatively,

εf(t) = b₀ q^{−(k+1)} [ū(t) + θᵀ φ̄(t)] + w̄(t).        (A.2)

Here '¯' denotes filtering by Q/TAM and

φᵀ(t) = [ū(t−1)/P₁ … ū(t−nu)/P₁   ȳ(t) … ȳ(t−ny+1)   (BM/P₁) ūM(t)],        (A.3)

where

nu = max(m+k, nP₂),
ny = max(n+nT−k, n).        (A.4)

The plant is controlled by the DSA-algorithm with fixed observer polynomial, defined by the estimation scheme

b̂₀(t) = b̂₀(t−1) + [ū(t−k−1) + θ̂ᵀ(t−1) φ̄(t−k−1)] ε(t)/r(t)        (A.5a)

θ̂(t) = θ̂(t−1) + β₀ φ̄(t−k−1) ε(t)/r(t)        (A.5b)

r(t) = λ r(t−1) + [ū(t−k−1) + θ̂ᵀ(t−1) φ̄(t−k−1)]² + β₀ |φ̄(t−k−1)|² + ρ.        (A.5c)
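One step of the estimation scheme (A.5) can be sketched as follows. The filtered signals ū, φ̄ are taken as given inputs, and since (A.5c) is only partly legible in the scanned source, the exact form of the normalization used here is an assumption:

```python
import numpy as np

# One step of the DSA estimation scheme (A.5), as an illustrative sketch.
# u_f, phi_f: filtered input and regressor at time t-k-1; eps: the error e(t).
def dsa_step(b0_hat, theta, r, u_f, phi_f, eps, beta0=1.0, lam=0.99, rho=0.01):
    v = u_f + theta @ phi_f                      # bracketed term in (A.5a)
    r_new = lam * r + v * v + beta0 * phi_f @ phi_f + rho   # (A.5c), assumed form
    b0_new = b0_hat + v * eps / r_new            # (A.5a)
    theta_new = theta + beta0 * phi_f * eps / r_new         # (A.5b)
    return b0_new, theta_new, r_new

b0_hat, theta, r = 0.0, np.zeros(2), 1.0
b0_hat, theta, r = dsa_step(b0_hat, theta, r, 1.0, np.array([1.0, -1.0]), 0.5)
assert r > 0.0    # rho > 0 keeps r(t) bounded away from zero
```

Keeping ρ > 0 and λ < 1 in the normalization is what bounds r(t) from below and discounts old data, the two properties the stability proof relies on.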
From (B.22) it is seen that there is at least one interval Jji in the interval Ii. Suppose that