Stability and Time-Optimal Control of Hereditary Systems
This is Volume 188 in MATHEMATICS IN SCIENCE AND ENGINEERING, edited by William F. Ames, Georgia Institute of Technology. A list of recent titles in this series appears at the end of this volume.
STABILITY AND TIME-OPTIMAL CONTROL OF HEREDITARY SYSTEMS. E. N. Chukwu, Department of Mathematics, North Carolina State University, Raleigh, North Carolina
ACADEMIC PRESS, INC. Harcourt Brace Jovanovich, Publishers
Boston San Diego New York London Sydney Tokyo Toronto
This book is printed on acid-free paper. Copyright © 1992 by Academic Press, Inc.
All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or
any information storage and retrieval system, without permission in writing from the publisher.
ACADEMIC PRESS, INC.
1250 Sixth Avenue, San Diego, CA 92101-4311
United Kingdom edition published by ACADEMIC PRESS LIMITED, 24-28 Oval Road, London NW1 7DX
This book was typeset by AMS-TeX, the TeX macro system of the American Mathematical Society.
Library of Congress Cataloging-in-Publication Data
Chukwu, Ethelbert N.
Stability and time-optimal control of hereditary systems / E. N. Chukwu.
p. cm. (Mathematics in science and engineering; v. 188)
Includes bibliographical references and index.
ISBN 0-12-174560-0
1. Control theory. 2. Mathematical optimization. 3. Stability. I. Title. II. Series.
QA402.3.C5567 1992   629.8'312-dc20   91-37886 CIP
PRINTED IN THE UNITED STATES OF AMERICA
92 93 94 95   9 8 7 6 5 4 3 2 1
DEDICATION

It is with gratitude and great joy that I dedicate this book to the following good and courageous men and women whose faith in my family sustained us through our recent difficulties. My entire family, including my wife Regina, and our children Ezeigwe, Emeka, Uchenna, Obioma, Ndubusi, and Chika are most humbled by the friendship from so many. Our trials were very harrowing, but with the support and belief of all these fine women and men, our faith in God and belief in the principles of truth and justice were reaffirmed. Dr. A. Schafer (Chair, Committee on Human Rights of Mathematicians, American Mathematical Society), Dr. William Browder (Princeton University; President, American Mathematical Society), Dr. M. Brin, Professor O. Hájek, Professor G. Leitmann, Professor R. Martin, Professor A. Fauntleroy, Professor E. Burniston, Dr. C. Christensen, Mr. R. O'Connor, Bishop F. Joseph Gossman, Father O. Howard, Father G. Wilson, Mr. H. Ray Daley, Jr., Dr. J. Nathan, Mr. J. Downs, Mr. G. Berens and Mrs. Marie Berens, Professor T. Hallam, Dr. H. Liston, Dr. S. Lenhart, Dr. S. Middleton, Professor H. Petrea, Professor and Mrs. C. Wolf, Dr. G. Briggs, Dr. C. Jefferson, Reverend Fr. Doherty, Reverend Fr. J. Scannel, Mr. A. Macnair, Mr. J. Barry, Reverend M. P. Shugrue, Dr. B. Westbrook, Dr. D. M. Lotz, Ms. S. McKenna, Ms. J. Hunt, Rev. G. Lewis, Rev. M. J. Carter, Mr. and Mrs. B. E. Spencer. My family and I are eternally grateful to the following political leaders who provided support and assistance during a very difficult time that occurred while I was in the process of developing and writing this book. Without their support and assistance I could not have completed this work, and our difficulties would have overwhelmed us. The Honorable Terry Sanford, U.S. Senator for North Carolina. The Honorable Jesse Helms, U.S. Senator for North Carolina. The Honorable Albert Gore, U.S. Senator for Tennessee.
The Honorable Claiborne Pell, U.S. Senator for Rhode Island. The Honorable David Price, U.S. Congressman for the 4th District of North Carolina. The Honorable Rufus L. Edmisten, North Carolina Secretary of State. My family and I rejoice in your generosity, and thank you with all our hearts and souls. It is with humility that I pay tribute to my undergraduate teacher Professor Harold Ward and my Ph.D. supervisor and friend Professor O. Hájek. Ethelbert Nwakuche Chukwu, January 1991, Raleigh, NC, USA
ACKNOWLEDGEMENTS

The author is indebted to Professors T. Angell and A. Kirsch for their kind permission to use their recently published paper. The author has consulted the work of Professors J. Hale, Haddock and Terjéki, T. Banks, M. Jacobs and Langenhop, A. Manitius, and Drs. G. Kent and D. Salamon, to all of whom he is greatly indebted. He acknowledges the enduring inspiration of Professors O. Hájek, L. Cesari, H. Hermes, J. P. LaSalle, and R. Bellman. It was Bellman who invited the author to write this book. He is particularly appreciative of the joyous efforts of Mrs. Vicki Grantham in typing the manuscript. The author acknowledges with immense gratitude the support of NSF for the summer of 1991, which enabled him to complete some of the original work reported here. It was a pleasure to work with the professional staff of Academic Press, especially Charles B. Glaser and Camille Pecoul, and the author offers his gratitude to them.
Ethelbert Nwakuche Chukwu January, 1991 Raleigh, NC, USA
CONTENTS

Preface   xi

Chapter 1  Examples of Control Systems Described by Functional Differential Equations   1
1.1 Ship Stabilization and Automatic Steering   1
1.2 Predator-Prey Interactions   3
1.3 Fluctuations of Current   6
1.4 Control of Epidemics   9
1.5 Wind Tunnel Model; Mach Number Control   17
1.6 The Flip-Flop Circuit   18
1.7 Hyperbolic Partial Differential Equations with Boundary Controls   22
1.8 Control of Global Economic Growth   25
1.9 The General Time-Optimal Control Problem and the Stability Problem   30

Chapter 2  General Linear Equations   35
2.1 The Fundamental Matrix of Retarded Equations   35
2.2 The Variation of Constants Formula of Retarded Equations   44
2.3 The Fundamental Solution of Linear Neutral Functional Differential Equations   46
2.4 Linear Systems Stability   52
2.5 Perturbed Linear Systems Stability   58

Chapter 3  Lyapunov-Razumikhin Methods of Stability of Delay Equations   67
3.1 Lyapunov Stability Theory   67
3.2 Razumikhin-type Theorems   73
3.3 Lyapunov-Razumikhin Methods of Stability in Delay Equations   75

Chapter 4  Global Stability of Functional Differential Equations of Neutral Type   87
4.1 Definitions   87
4.2 Lyapunov Stability Theorem   89
4.3 Perturbed System   94
4.4 Perturbations of Nonlinear Neutral Systems   101

Chapter 5  Synthesis of Time-Optimal and Minimum-Effort Control of Linear Ordinary Systems   105
5.0 Control of Ordinary Linear Systems   105
5.1 Synthesis of Time-Optimal Control of Linear Ordinary Systems   106
5.2 Construction of Optimal Feedback Controls   111
5.3 Time-Optimal Feedback Control of Nonautonomous Systems   120
5.4 Synthesis of Minimum-Effort Feedback Control Systems   156
5.5 General Method for the Proof of Existence and Form of Time-Optimal, Minimum-Effort Control of Ordinary Linear Systems   164

Chapter 6  Control of Linear Delay Systems   193
6.1 Euclidean Controllability of Linear Delay Systems   193
6.2 Linear Function Space Controllability   197
6.3 Constrained Controllability of Linear Delay Systems   202

Chapter 7  Synthesis of Time-Optimal and Minimum-Effort Control of Delay Systems   207
7.1 Linear Systems in Euclidean Space   207
7.2 Geometric Theory and Continuity of Minimal Time Function   217
7.3 The Index of a Control System   223
7.4 Time-Optimal Feedback Control of Autonomous Delay Systems   237
7.5 Minimum-Effort Control of Delay Systems   247
7.6 Proof of Minimum-Effort Theorems   259
7.7 Optimal Absolute Fuel Function   268
7.8 Control of Nonlinear Functional Differential Systems with Finite Delay: Existence, Uniqueness, and Continuity of Solutions   277
7.9 Sufficient Conditions for the Existence of a Time-Optimal Control   281
7.10 Optimal Control of Nonlinear Delay Systems with Target in E^n   288
7.11 Optimal Control of Delay Systems in Function Space   291
7.12 The Time-Optimal Problem in Function Space   299

Chapter 8  Controllable Nonlinear Delay Systems   303
8.1 Controllability of Ordinary Nonlinear Systems   303
8.2 Controllability of Nonlinear Delay Systems   308
8.3 Controllability of Nonlinear Systems with Controls Appearing Linearly   321

Chapter 9  Control of Interconnected Delay Differential Equations in W_2^(1)   337
9.1 Introduction   337
9.2 Nonlinear Systems   341
9.3 General Nonlinear Systems   345
9.4 Examples   349
9.5 Control of Global Economic Growth   355
9.6 Effective Solidarity Functions   367

Chapter 10  The Time-Optimal Control of Linear Differential Equations of Neutral Type   373
10.1 The Time-Optimal Control Problem of Linear Neutral Functional Systems in E^n: Introduction   373
10.2 Forcing to Zero   382
10.3 Normal and Proper Autonomous Systems   385
10.4 Pursuit Games and Time-Optimal Control Theory   388
10.5 Applications and Economic Growth   397
10.6 Optimal Control Theory of Linear Neutral Systems   400
10.7 The Theory of Time-Optimal Control of Linear Neutral Systems   408
10.8 Existence Results   412
10.9 Necessary Conditions for Optimal Control   419
10.10 Normal Systems   422
10.11 The Geometric Theory of Time-Optimal Control of Linear Neutral Systems   423
10.12 Continuity of the Minimal-Time Functions   426
10.13 The Index of the Control System   432
10.14 Examples   433

Chapter 11  The Time-Optimal Control Theory of Nonlinear Systems of Neutral Type   441
11.1 Introduction   441
11.2 Existence, Uniqueness, and Continuity of Solutions of Neutral Systems   443
11.3 Existence of Optimal Controls of Neutral Systems   452
11.4 Optimal Control of Neutral Systems in Function Space   458

Chapter 12  Controllable Nonlinear Neutral Systems   473
12.1 General Nonlinear Systems   473
12.2 Nonlinear Interconnected Systems   485
12.3 An Example: A Network of Flip-Flop Circuits   489

Chapter 13  Stability Theory of Large-Scale Hereditary Systems   495
13.1 Delay Systems   495
13.2 Uniform Asymptotic Stability of Large-Scale Systems of Neutral Type   500
13.3 General Comments   504

Index   505
PREFACE

This monograph introduces the theory of stability and of time-optimal control of hereditary systems. It is well known that the future state of realistic models in the natural sciences, economics, and engineering depends not only on the present but also on the past state and the derivative of the past state. Models that contain such past information are called hereditary systems. There are simple examples from biology (predator-prey Lotka-Volterra systems, the spread of epidemics, e.g., the AIDS epidemic), from economics (dynamics of capital growth of the global economy), and from engineering (mechanical and aerospace: aircraft stabilization and automatic steering using minimum fuel and effort, control of a high-speed closed-air-circuit wind tunnel; computer and electrical engineering: fluctuations of current in linear and nonlinear circuits, flip-flop circuits, the lossless transmission line). Such examples are used to study the stability and the time-optimal control of hereditary systems. Within this theory the problem of minimum-energy and minimum-effort control of systems is also explored. Conditions for stability of both linear and nonlinear systems are given. Questions of controllability are posed and answered. Their application to the growth of the global economy yields broad, obvious, but startling policy prescriptions. For the problem of finding some control to use in order to reach, in minimum time, an equilibrium point from any initial point and any initial state, existence theorems are given. Some effort is made to construct an optimal control strategy for the simple linear systems cited in the examples. For finite-dimensional linear systems, time-optimal and minimum-effort feedback controls are constructed. It is assumed that the reader is familiar with advanced calculus and has some knowledge of ordinary differential equations. Some familiarity with real and functional analysis is helpful.
The contents of this book were offered as a year-long course at North Carolina State University in the 1989-1991 sessions. The students were mainly from engineering, economics, mathematical biology, and applied mathematics. Though in many respects the book is self-contained, theorems are often stated and the appropriate references cited. The references should help further exploration of the subject.
There are thirteen chapters in this book. The first chapter is devoted to examples from applications in which delays are important in time-optimal, minimum-effort, and stability problems. The second chapter studies linear systems: the fundamental matrices of linear delay and linear neutral systems are calculated and used to obtain the variation of parameters formulae for nonhomogeneous systems, and the stability theory of linear and perturbed linear systems is considered. Chapter 3 presents the basic Lyapunov and Razumikhin theories and their extensions by Hale and LaSalle and, recently, by Haddock and Terjéki. On the basis of these, a simple quadratic form is used to deduce a necessary and sufficient condition for stability of linear delay equations. The fourth chapter treats the global stability of functional differential equations of neutral type. The second part of the book begins in Chapter 5. It deals with the construction of minimum-time and minimum-effort optimal feedback controls of linear ordinary differential systems, and is both an introduction and a model for such studies of linear hereditary systems. Most of the theory presented here is available only in research articles and theses. It will be most interesting to present parallel results for hereditary systems. Chapter 6 presents the theory of the underlying controllability assumptions required for the existence of optimal controls of delay equations. The geometric theory of time-optimal control is then presented in Chapter 7, where numerous examples are considered. Here also the maximum principle, both in Euclidean and in function space, is formulated. The fundamental work of Angell and Kirsch is stated: it gives a maximum principle in a setting that guarantees the existence of nontrivial regular multipliers for nonlinear problems involving pointwise control constraints. Chapter 7 also presents an introduction to the synthesis of optimal feedback control of linear hereditary systems.
The synthesis of optimal control of simple examples is attempted. In Chapter 8 controllability of nonlinear systems is treated. Interconnected systems are studied in Chapter 9. There, universal principles for the control of interconnected organizations are formulated; they have fundamental economic-policy implications of immense practical interest. Chapters 10 through 12 consider the corresponding controllability and time-optimal control of linear and nonlinear neutral equations. Chapter 13 presents the stability theory of large-scale hereditary systems in the spirit of Michel and Miller.
Chapter 1
Examples of Control Systems Described by Functional Differential Equations

1.1 Ship Stabilization and Automatic Steering
A ship is rolling in the waves. The differential equation satisfied by the angle of tilt x from the normal upright position is given by
$$ m\frac{d^2x(t)}{dt^2} + b\frac{dx(t)}{dt} + kx(t) = 0, \tag{1.1.1} $$
where m > 0 is the mass, b > 0 is the damping constant, and k > 0 is a constant. The damping constant b determines how fast the ship returns to its upright position. It is known that if b² < 4mk, the ship is "underdamped" and oscillates as it returns to its upright position. If b² > 4mk, it is "overdamped"; there is no oscillation, but the return to the upright position is slower the larger b becomes. When b² = 4mk, the ship is "critically damped" and the upright position is achieved most rapidly. This "roll-quenching" was a very important problem tackled by engineers for the ships and destroyers of the Second World War. In one such research effort by Minorsky [5,6], ballast tanks, partially filled with water, were introduced on each side of the ship in order to obtain the best value of b. A servomechanism is introduced and is designed to reduce the angle of tilt x and its velocity dx/dt to 0 as fast as possible. What the contrivance does is to add to the natural damping of the rolling ship a term proportional to the velocity at an earlier instant t - h: qẋ(t - h). The delay h is present because the servomechanism cannot respond instantaneously; it takes the time h to respond. Also introduced by the servomechanism is a control with components (u₁, u₂), which yields the following equations:
$$ \dot x(t) = y(t) + u_1(t), \qquad \dot y(t) = -\frac{b}{m}\,y(t) - \frac{q}{m}\,y(t-h) - \frac{k}{m}\,x(t) + u_2(t). \tag{1.1.2} $$
FIGURE 1.1.1. x = angle of tilt. Solutions of mẍ(t) + bẋ(t) + cx(t) = 0: b² < 4mc, decreasing oscillations; b² > 4mc, no oscillations; b² = 4mc, critically damped.

We note that (1.1.1) is equivalent to
$$ \dot x(t) = y(t), \qquad \dot y(t) = -\frac{b}{m}\,y(t) - \frac{k}{m}\,x(t). \tag{1.1.3} $$
Equation (1.1.2) can be written in matrix form
$$ \dot{\bar x}(t) = A_0\bar x(t) + A_1\bar x(t-h) + Bu(t), \tag{1.1.4} $$
where
$$ A_0 = \begin{bmatrix} 0 & 1 \\ -k/m & -b/m \end{bmatrix}, \quad A_1 = \begin{bmatrix} 0 & 0 \\ 0 & -q/m \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \bar x = \begin{bmatrix} x \\ y \end{bmatrix}. $$
Using matrix notation, the free system (u ≡ 0 in (1.1.4)) assumes the form
$$ \dot{\bar x}(t) = A_0\bar x(t) + A_1\bar x(t-h). \tag{1.1.5} $$
The point (0, 0) can be regarded as an equilibrium position, and the aim of the gadget is to steer the angle of tilt and its velocity to the equilibrium position. There are three problems. The first is the problem of stability: determine necessary and sufficient conditions on A₀, A₁ such that the solution of (1.1.5) satisfies
$$ x(t) \to 0 \quad \text{as } t \to \infty. \tag{1.1.6} $$
But it is undesirable to wait forever for the system to attain the upright position. The second problem is: Is it possible that a control u, introduced by the servomechanism, can drive the system to the equilibrium in finite time? Is the system controllable? The third problem is: Find a control strategy u that will drive the angle of tilt and its velocity to zero in minimum time. Such a control is called a time-optimal control. Our ultimate goal in optimal control theory is to obtain an optimal control as a function of the appropriate state space, i.e., to obtain the "feedback" or "closed-loop" control. The major advantage of such a control, as opposed to an "open-loop" one with u as a function of t, is that the system in question becomes self-correcting and automatic.
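The stability question just posed can be probed numerically. The following is a minimal forward-Euler sketch of the free delayed roll system, with u = 0; all parameter values and the initial history are hypothetical choices for illustration, not taken from the text.

```python
import numpy as np

# Forward-Euler sketch of the free delayed roll system (u = 0):
#   x'(t) = y(t),  y'(t) = -(b/m) y(t) - (q/m) y(t-h) - (k/m) x(t).
# All parameter values and the initial history are illustrative.
m, b, k, q, h = 1.0, 0.8, 1.0, 0.3, 0.5
dt, T = 0.01, 40.0
n, lag = int(T / dt), int(h / dt)

x, y = np.empty(n + 1), np.empty(n + 1)
x[0], y[0] = 1.0, 0.0   # tilted and at rest; history y ≡ 0 on [-h, 0]

for i in range(n):
    y_lag = y[i - lag] if i >= lag else 0.0   # constant pre-history
    x[i + 1] = x[i] + dt * y[i]
    y[i + 1] = y[i] + dt * (-(b / m) * y[i] - (q / m) * y_lag - (k / m) * x[i])

print(abs(x[-1]), abs(y[-1]))  # both small: the tilt is damped out
```

For these values the delayed feedback adds damping; for larger q or h the same scheme exhibits delay-induced oscillations, which is precisely what makes the stability problem (1.1.6) nontrivial.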
1.2 Predator-Prey Interactions
Let x(t) be the population at time t of a species of fish called the prey, and let y(t) be the population of another species called the predator, which lives off the prey. Under the assumption that without the predator present the prey will increase at a rate proportional to x(t), and that the feeding action of the predator reduces the growth rate of the prey by an amount proportional to the product x(t)y(t), we have
$$ \dot x(t) = a_1 x(t) - b_1 x(t)y(t). $$
If the predator eats the prey and breeds at a rate proportional to its number and the amount of food available, then
$$ \dot y(t) = -a_2 y(t) + b_2 x(t)y(t), $$
where a₁, a₂, b₁, b₂ are positive constants. The system of two equations
$$ \dot x(t) = a_1 x(t) - b_1 x(t)y(t), \qquad \dot y(t) = -a_2 y(t) + b_2 x(t)y(t) \tag{1.2.1} $$
is rather naive. A more realistic model assumes that the birthrate of the prey will diminish as x(t) grows because of overcrowding and shortage of available food. In this model it is assumed that there is a time delay of period h for the predator to respond to changes in the sizes of x and y. Thus
$$ \dot x(t) = a_1\Big[1 - \frac{x(t)}{p}\Big]x(t) - b_1 x(t)y(t), \qquad \dot y(t) = -a_2 y(t) + b_2 x(t-h)\,y(t-h), \tag{1.2.2} $$
where p is a positive constant. Volterra [9] studied (1.2.1) and (1.2.2) and various generalizations of (1.2.2). One such system is given by
$$ \dot x(t) = x(t)\Big[a_1 - c_1 x(t) - b_1 y(t) + \int_{-h}^{0} g_1(s)\,y(t+s)\,ds\Big], \qquad \dot y(t) = y(t)\Big[-a_2 - c_2 y(t) + b_2 x(t) + \int_{-h}^{0} g_2(s)\,x(t+s)\,ds\Big], \tag{1.2.3} $$
where g₁, g₂ are continuous, nonnegative functions, and c₁, c₂ are constants. In the interaction between the prey and the predator it is important to ask whether there are equilibrium states that may be reached for the system (1.2.3). If these states are not zero, then neither the predator nor the prey is extinct, and the following is true:
$$ a_1 - c_1 x(t) - b_1 y(t) + \int_{-h}^{0} g_1(s)\,y(t+s)\,ds = 0, \qquad -a_2 - c_2 y(t) + b_2 x(t) + \int_{-h}^{0} g_2(s)\,x(t+s)\,ds = 0. $$
The function z* = (x*, y*), which solves the above functional equation, is the equilibrium; it may well be the saturation levels of the given species. In this case it may be desirable that every solution (x(t), y(t)) = z(t) of (1.2.3) satisfy z(t) → z* as t → ∞. This is an asymptotic stability property. The state z* is not attained in finite time. A more desirable objective is to "manage" the interaction, to keep the population as near as possible to its equilibrium position, and to prevent near-periodic outbreaks of the predator population y beyond its equilibrium. We aim at coexistence of the two species at their equilibrium states, which are also the saturation levels of the two populations.
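For the simple undelayed system (1.2.1) the nonzero equilibrium can be written in closed form, x* = a₂/b₂ and y* = a₁/b₁, since these values annihilate both right-hand sides. A quick numerical check, with constants chosen only for illustration:

```python
# Right-hand side of the undelayed predator-prey system (1.2.1).
def rhs(x, y, a1, b1, a2, b2):
    return a1 * x - b1 * x * y, -a2 * y + b2 * x * y

a1, b1, a2, b2 = 1.0, 0.5, 0.8, 0.2   # illustrative positive constants
x_eq, y_eq = a2 / b2, a1 / b1         # candidate equilibrium (x*, y*)
dx, dy = rhs(x_eq, y_eq, a1, b1, a2, b2)
print(dx, dy)  # both vanish: the populations are stationary at (x*, y*)
```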
One management strategy is fishing. Man is interested in harvesting one or both species at some rates eᵢ, i = 1, 2. For this situation the dynamics of the interaction are
$$ \dot x(t) = a_1\Big[1 - \frac{x(t)}{p}\Big]x(t) - b_1 x(t)y(t) - u_1(t), \qquad \dot y(t) = -a_2 y(t) + b_2 x(t-h)\,y(t-h) - u_2(t). \tag{1.2.4} $$
If u₁(t) = e₁x(t), u₂(t) = e₂y(t), then uᵢ(t) is the harvest rate that is proportional to the population density. The effort level eᵢ(t) is a positive function with dimension 1/time, which measures the total effort at time t made to harvest a given species, and uᵢ is a piecewise continuous function [0, T] → [0, b], T > 0, b > 0. Thus eᵢ(t) can be considered a control function. The function u = (u₁, u₂) represents the rate at which the fisherman (i) selectively kills the prey x, (ii) selectively kills the predator y, or (iii) kills both x and y. There are two problems. By harvesting, what conditions on the system's parameters ensure that the two species can be driven to the equilibrium z* in finite time? This is a problem of controllability. If eᵢ(t) is negative in (1.2.4), u can be said to represent the rate at which laboratory-reared (iv) fishes x or (v) predators y are released into the system. One can use a combination of release and harvesting strategies to drive x, y to the saturation level z* in finite time. The second question is optimality: What is the best harvesting strategy that will, as quickly as possible, drive the system from any initial population to this equilibrium? Good fish management is interested in the time-optimal control problem for the system (1.2.4). A more general version of this system is given by
$$ \dot z(t) = L(t, z_t) + B(z(t), t)\,u(t), $$
where L(t, φ) is a 2 × 2 matrix function, which is linear in φ, and B(z(t), t) is a matrix function of the state z(t) and the time t.
A more general n-dimensional version of this system is given by
$$ \dot x(t) = L(t, x_t) + B(x(t), t)\,u(t), \tag{1.2.5} $$
where B is an n × m matrix function, u is in Eᵐ, and
$$ L(t, \phi) = \sum_{k=0}^{N} A_k(t)\,\phi(-h_k) + \int_{-h}^{0} A(t,s)\,\phi(s)\,ds. $$
Here each Aₖ is an n × n continuous matrix function, A(t, s) is integrable in s for each t, and there is an integrable function a(t) such that
$$ |L(t, \phi)| \le a(t)\,\|\phi\|, \qquad \phi \in C. $$
The three questions that we posed for (1.2.4) can also be asked for (1.2.5) or for the general nonlinear equation
$$ \dot x(t) = f(t, x_t) + B(t, x_t)\,u(t), \tag{1.2.6} $$
where f : E × C → Eⁿ, B : E × C → E^{n×m} is an n × m matrix-valued function, and u is an m-vector valued function. Here E = (−∞, ∞), Eⁿ is the n-dimensional Euclidean space, and C = C([−h, 0], Eⁿ) is the space of continuous functions from [−h, 0] into Eⁿ. The symbol xₜ ∈ C is the function defined by xₜ(s) = x(t + s), s ∈ [−h, 0]. In the discussion above we required all populations to be driven to the equilibrium, which can be taken as a target. A more general target can be assumed to be
$$ T = \{\phi \in C : H\phi = p\}. \tag{1.2.7} $$
Here H is a linear operator and p ∈ C. The target represents the final configuration at which it is desirable for the species to lie after harvesting.
1.3 Fluctuations of Current
Let us consider an electric circuit in which two resistances, a capacitance, and an inductance are connected in series. Assume that current is flowing through the loop, and its value at time t is x(t) amperes. We also use the following units: volts for the voltage, ohms for the resistance R, henries for the inductance L, farads for the capacitance c, coulombs for the charge on the capacitance, and seconds for the time t. It is well known that with
these systems of units, the voltage drop across the inductance is L dx(t)/dt, and that across the resistances it is (R + R₁)x(t). The voltage drop across the capacitance is q/c, where q is the charge on the capacitance. It is also known that x(t) = dq/dt. A fundamental law of Kirchhoff states that the sum of the voltage drops around the loop must be equal to the applied voltage:
$$ L\frac{dx}{dt} + (R + R_1)\,x(t) + \frac{q}{c} = 0. $$
On differentiating with respect to t we deduce
$$ L\frac{d^2x(t)}{dt^2} + (R + R_1)\frac{dx}{dt} + \frac{1}{c}\,x(t) = 0. \tag{1.3.1} $$
In Figure 1.3.2, the voltage across R₁ is applied to a nonlinear amplifier A. The output is provided with a special phase-shifting network P. This introduces a constant time lag h between the input and the output of P. The voltage drop across R in series with the output of P is
$$ q\,g(\dot x(t-h)), $$
where q is the gain of the amplifier to R measured through the network. The equation becomes
$$ L\frac{d^2x(t)}{dt^2} + R\,\dot x(t) + q\,g(\dot x(t-h)) + \frac{1}{c}\,x(t) = 0. $$
Finally, a control device is introduced to help stabilize the fluctuations of the current. If ẋ(t) = y(t), the "controlled" system may be described by
$$ \dot x(t) = y(t) + u_1(t), \qquad \dot y(t) = -\frac{R}{L}\,y(t) - \frac{q}{L}\,g(y(t-h)) - \frac{1}{cL}\,x(t) + u_2(t). \tag{1.3.2} $$
The system (1.3.2) can be put in the matrix form
$$ \dot{\bar x}(t) = A\bar x(t) + G(\bar x(t-h)) + Bu, \tag{1.3.3} $$
where
$$ \bar x = \begin{bmatrix} x \\ y \end{bmatrix}, \quad u = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}, \quad A = \begin{bmatrix} 0 & 1 \\ -\frac{1}{cL} & -\frac{R}{L} \end{bmatrix}, \quad G(\bar x(t-h)) = \begin{bmatrix} 0 \\ -\frac{q}{L}\,g(y(t-h)) \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}. $$
FIGURE 1.3.1. The control u = (u₁, u₂) is "created" and introduced by the stabilizer. The three basic questions are now natural: With u = 0, what are the properties of the system's parameters that will ensure that x²(t) + y²(t) → 0 as t → ∞? Will "admissible controls" u (say |uⱼ| ≤ 1, j = 1, 2) bring any wild fluctuations of current (any initial position) to a normal equilibrium in finite time? Can this be done in minimum time?
FIGURE 1.3.2 [5, p. 36].

1.4 Control of Epidemics
A. First Model [12, p. 113]. In this section we formulate a theory of epidemics as a problem of time-optimal control theory. It is a slight modification of Banks's presentation of the work of Gupta and Rink [2]. We assume as basic that there are four types of individuals in our population: (i) susceptibles: X₁(t),
(ii) exposed and infected but not yet infectious (infective) individuals: X₂(t), (iii) infectives: X₃(t), and (iv) removed individuals: X₄. We include in (iv) those who have recovered and gained immunity and those who have been removed from the population because of observable symptoms. We assume a constant rate A of inflow of new susceptible members. The latent period h₁ is the period from the time an individual is exposed and becomes infected to the time he becomes infective. The infectious period h₂ is the period the individual is in the infectious (infective) class. The incubation period will denote the time from exposure and infectedness to the time of removal (i.e., h₁ + h₂ = T). After normalization by setting xᵢ = Xᵢ/N, where N is the population, the dynamics of interaction is
$$ \begin{aligned} \dot x_1(t) &= -p\,x_1(t)x_3(t) + A, \\ \dot x_2(t) &= p\,x_1(t)x_3(t) - p\,x_1(t-h_1)x_3(t-h_1), \\ \dot x_3(t) &= p\,x_1(t-h_1)x_3(t-h_1) - p\,x_1(t-h_1-h_2)x_3(t-h_1-h_2). \end{aligned} \tag{1.4.1} $$
The constant p is the average number of individuals per unit time that any individual will encounter, sufficient to cause infection. To control the epidemic we allow two strategies: (i) the removal, at some rates uᵢ, i = 2, 3, of both the infectives and the exposed and infected individuals; the removal rate uᵢ(t) = eᵢ(t)bᵢ(xᵢ(t)) is proportional to a function bᵢ of the population density and the effort level eᵢ(t); and (ii) active immunization, the injection of dead, or live but attenuated, disease microorganisms into members of the population, resulting in antibodies in the population of vaccinated individuals. If E₁(t) represents the reliable rate per day at which members of the population are being actively vaccinated at time t, the normalized rate is e₁(t) = E₁(t)/N. We assume that the effective immunization rate is b₁(x₁(t))e₁(t) = u₁(t). If we assume that there is a delay h (hᵢ ≤ h, i = 1, 2) days before the antibodies become effective, then the rate is b₁(x₁(t))e₁(t − h) = u₁(t).
Thus the dynamical system is
$$ \begin{aligned} \dot x_1(t) &= -p\,x_1(t)x_3(t) - b_1(x_1(t))\,e_1(t-h) + A, \\ \dot x_2(t) &= p\,x_1(t)x_3(t) - p\,x_1(t-h_1)x_3(t-h_1) - u_2(t), \\ \dot x_3(t) &= p\,x_1(t-h_1)x_3(t-h_1) - p\,x_1(t-h_1-h_2)x_3(t-h_1-h_2) - u_3(t). \end{aligned} \tag{1.4.2} $$
We use control strategies (i) and (ii) to reduce the total number of infected to an "acceptable" size in finite time. This health policy may be used to "prevent an epidemic" in minimum time, where preventing an epidemic means that the solution of (1.4.2) satisfies (i) |x₂(t)| + |x₃(t)| ≤ A for all t ≥ 0, and (ii) max_{s ∈ [0,T]} |x₃(s)| ≤ B, where A, B are prescribed constants.
The solution of (1.4.2) may be viewed as lying in the space C([−h, 0], E³), the space of continuous functions mapping [−h, 0] into E³ with the sup norm; or it may be viewed as lying in E³, in which we define a target subset T ⊂ E³. The best health policy may be that solutions of (1.4.2) hit the target T in minimum time. Also desirable is to have solutions of (1.4.2) reach and remain in T, and to do so as fast as possible.
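The cohort structure of this epidemic model can be checked numerically: with no removal controls, no vaccination, and no inflow, adding the three equations shows that x₁ + x₂ + x₃ decreases exactly at the removal rate p x₁(t − h₁ − h₂)x₃(t − h₁ − h₂). The forward-Euler sketch below illustrates this; all parameter values and the constant pre-history are hypothetical.

```python
import numpy as np

# Euler sketch of the epidemic dynamics with u2 = u3 = 0, no vaccination, A = 0.
# Parameters and the constant pre-history are illustrative only.
p, h1, h2 = 0.8, 1.0, 2.0
dt, T = 0.01, 3.0
n, L1, L2 = int(T / dt), int(h1 / dt), int((h1 + h2) / dt)
hx1, hx3 = 0.9, 0.05              # constant history for x1 and x3 on [-h1-h2, 0]

x1, x2, x3 = np.empty(n + 1), np.empty(n + 1), np.empty(n + 1)
x1[0], x2[0], x3[0] = hx1, 0.05, hx3

def lagged(arr, i, lag, hist):
    return arr[i - lag] if i >= lag else hist

for i in range(n):
    B0 = p * x1[i] * x3[i]                                    # newly infected
    B1 = p * lagged(x1, i, L1, hx1) * lagged(x3, i, L1, hx3)  # become infective
    B2 = p * lagged(x1, i, L2, hx1) * lagged(x3, i, L2, hx3)  # removed
    x1[i + 1] = x1[i] - dt * B0
    x2[i + 1] = x2[i] + dt * (B0 - B1)
    x3[i + 1] = x3[i] + dt * (B1 - B2)

total = x1 + x2 + x3
print(total[0], total[-1])  # the total declines only through removals (B2)
```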
The time-optimal control problem formulated in relation to the epidemic models above can also be formulated for the more general nonlinear system
$$ \dot x(t) = f(t, x(t), x(t-h_1), \ldots, x(t-h_N), u(t), u(t-h)), \tag{1.4.3} $$
with target set
$$ T = \{x \in E^n : Hx = r, \ r \in E^n\}, $$
and H an n × n matrix. Equation (1.4.3) is a special case of the delay system
$$ \dot x(t) = f(t, x_t, u_t), \tag{1.4.4} $$
which includes the ordinary differential equation
$$ \dot x(t) = f(t, x(t), u(t)). \tag{1.4.5} $$
In (1.4.4), xₜ is the function defined by xₜ(s) = x(t + s), s ∈ [−h, 0]. The symbol uₜ is defined similarly.
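The segment notation xₜ is worth pausing over: the state of a hereditary system at time t is not the vector x(t) but the whole function xₜ on [−h, 0]. A minimal sketch of the idea, with a sample trajectory chosen only for illustration:

```python
# The state of a hereditary system at time t is the segment
#   x_t(s) = x(t + s),  s in [-h, 0],
# i.e., the whole recent history, not just the current value x(t).
h = 2.0

def x(t):
    return t * t  # a sample scalar trajectory, chosen only for illustration

def segment(x, t, h):
    """Return the segment x_t as a function on [-h, 0]."""
    def x_t(s):
        assert -h <= s <= 0.0, "segments are only defined on [-h, 0]"
        return x(t + s)
    return x_t

x_t = segment(x, 5.0, h)
print(x_t(0.0), x_t(-h))  # x(5) = 25.0 and x(3) = 9.0
```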
B. Control of Epidemics: AIDS. Just as in A, we formulate a theory of the acquired immunodeficiency syndrome (AIDS) epidemic as a problem of time-optimal control theory. It is a modification of the recent works of Castillo-Chavez, Cooke, Huang, and Levin [14]. We assume as basic that there are five classes (of sexually active male homosexuals with multiple partners): x₁: susceptible individuals; x₂: those infectious individuals who will go on to develop full-blown AIDS; x₃: infectious individuals who will not develop full-blown AIDS; x₅: those former x₃ who are no longer sexually active; and x₄: those former x₂ who have developed full-blown AIDS. Note that if an individual enters x₅ or x₄, he no longer enters into the dynamics of the disease. In contrast to our earlier treatment of the theory of epidemics, a latent class is excluded, i.e., those exposed individuals who are not yet infectious, since only a very short time is spent in that class. It is our assumption that an individual with full-blown AIDS has no sexual contacts and is therefore not infectious. Also, once infected, an individual is immediately infectious, and sexual inactivity or acquisition of AIDS occur at the constant rates a₃ and a₂, respectively, per unit time. We let A denote the total recruitment rate into the susceptible class, defined to be those individuals who are sexually active. Let μ be the natural mortality rate, and d the disease-induced mortality rate due to AIDS. Suppose p is the fraction of the susceptible individuals that after becoming infectious will enter the AIDS class, and (1 − p) the fraction that become infectious but will not develop full-blown AIDS. We use the following diagram:
FIGURE 1.4.1
to determine the epidemiological dynamics:
$$ \begin{aligned} \frac{dx_1(t)}{dt} &= A - \lambda C(T(t))\,x_1(t)\,\frac{W(t)}{T(t)} - \mu x_1(t), && (1.4.6) \\ \frac{dx_2(t)}{dt} &= \lambda p\,C(T(t))\,x_1(t)\,\frac{W(t)}{T(t)} - (a_2 + \mu)\,x_2(t), && (1.4.7) \\ \frac{dx_3(t)}{dt} &= \lambda(1-p)\,C(T(t))\,x_1(t)\,\frac{W(t)}{T(t)} - (a_3 + \mu)\,x_3(t), && (1.4.8) \\ \frac{dx_4(t)}{dt} &= a_2\,x_2(t) - (d + \mu)\,x_4(t), && (1.4.9) \\ \frac{dx_5(t)}{dt} &= a_3\,x_3(t) - \mu x_5(t), && (1.4.10) \end{aligned} $$
where
$$ W = x_2 + x_3 \quad \text{and} \quad T = W + x_1. \tag{1.4.11} $$
In the above formulation, C ( T )represents the mean number of sexual partners an average individual has per unit time when the total population is T , and the constant X denotes the average sexual risk per partner. Thus the expression X C ( T ) x l ( t ) W / T denotes the number of newly infected individuals per unit time. Very often the Michaelis-Menten type contact law is accepted for C(T):
C(T) = βT/(1 + kT),   (1.4.12)
where β, k are constants. Systems (1.4.6)-(1.4.10) are ordinary differential equations. Further to the assumption above that individuals become immediately infectious and the infectious period is equal to the incubation period, we designate h₂ to be the fixed period of infection for x₂, and h₃ to be that of x₃. We set x₂₀(t), x₃₀(t) to be the infectious x₂(t) and x₃(t) at time t = 0, and x₄₀(t), x₅₀(t) the living x₄(t), x₅(t) at time t = 0. Clearly x₄₀, x₅₀ have compact support, i.e., x₄₀(t), x₅₀(t) vanish as t → ∞. Note that x₂₀(t) = x₃₀(t) = 0 for t > max(h₂, h₃). If H(t) is the unit step or Heaviside function, i.e., H(t) = 0 for t < 0 and H(t) = 1 for t > 0, then the system is modified
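The system (1.4.6)-(1.4.10) can be explored numerically. The sketch below integrates it with a forward Euler step; every parameter value (Λ, λ, p, α₂, α₃, μ, d, β, k), the initial state, the step size, and the horizon are illustrative assumptions of ours, not values from the text.

```python
# Illustrative simulation of the AIDS model (1.4.6)-(1.4.10); all numbers are assumed.

def C(T, beta=0.2, k=0.01):
    """Michaelis-Menten contact law (1.4.12): C(T) = beta*T / (1 + k*T)."""
    return beta * T / (1.0 + k * T)

def step(x, dt, Lam=100.0, lam=0.05, p=0.3, a2=0.2, a3=0.1, mu=0.02, d=0.5):
    x1, x2, x3, x4, x5 = x
    W = x2 + x3                     # infectious individuals, (1.4.11)
    T = W + x1                      # sexually active population
    inc = lam * C(T) * x1 * W / T if T > 0 else 0.0   # new infections per unit time
    dx = [Lam - inc - mu * x1,
          p * inc - (a2 + mu) * x2,
          (1 - p) * inc - (a3 + mu) * x3,
          a2 * x2 - (d + mu) * x4,
          a3 * x3 - mu * x5]
    return [xi + dt * dxi for xi, dxi in zip(x, dx)]

x = [5000.0, 10.0, 5.0, 0.0, 0.0]   # hypothetical initial class sizes
for _ in range(10000):              # integrate to t = 100 with dt = 0.01
    x = step(x, 0.01)
print([round(v, 1) for v in x])
```

A finer integrator (e.g., Runge-Kutta) would be preferable for quantitative work; the point here is only that the five-class bookkeeping above translates directly into code.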
with initial conditions

x₁(t) = x₁₀(t),  x₂(t) = x₂₀(t),  x₃(t) = x₃₀(t).   (1.4.16)

This is a delay differential equation whose existence and uniqueness of solutions are easily proved. We now restrict our analysis to the case p = 1 (i.e., we model AIDS as a progressive disease). The study of the steady states reduces to the following set of equations:
ẋ₁(t) = Λ − λC(T(t)) x₁(t)x₂(t)/T(t) − μx₁(t),
ẋ₂(t) = λC(T(t)) x₁(t)x₂(t)/T(t) − (α₂ + μ)x₂(t),

with infection-free state (Λ/μ, 0) as equilibrium. It is possible to allow h₂, h₃ to be distributed. For this we follow the generalizations of [14] and define the survivorship functions P₂(s), P₃(s), which are the proportions of those individuals who become x₂ or x₃ infective at time t and, if alive, are still infectious at time t + s. These are nonnegative and nonincreasing with

P₂(0) = P₃(0) = 1.   (1.4.17)
Under these assumptions, −Ṗ₂(s) and −Ṗ₃(s) are the removal rates of individuals from the x₂ and x₃ classes into the x₄ and x₅ classes s time units after infection. We therefore have
x₂(t) = x₂₀(t) + p ∫₀ᵗ λC(T(s)) x₁(s)W(s)/T(s) e^{−μ(t−s)} P₂(t − s) ds,   (1.4.18)

x₃(t) = x₃₀(t) + (1 − p) ∫₀ᵗ λC(T(s)) x₁(s)W(s)/T(s) e^{−μ(t−s)} P₃(t − s) ds,   (1.4.20)

with analogous representations (1.4.19) and (1.4.22) for the remaining classes. Now denote by B the rate of infection, the number of new cases of infection per unit of time:

B(t) = λC(T(t)) x₁(t)W(t)/T(t).   (1.4.23)
Since μ is the mortality rate, we denote the rate of attrition by A(t): in (1.4.6) this is given by A(t) = μx₁(t). Thus, (1.4.6) becomes

ẋ₁(t) = Λ − B(t) − A(t).   (1.4.24a)

This can be generalized. If Λ is time varying, then Λ(t) is defined as the total rate of recruitment into the susceptible class at time t. Assuming a certain fraction of the total dies in each future time period after recruitment, then m(τ)dτ is the natural mortality density, i.e., the fraction that dies in any small interval of length dτ around the time point τ. Then Λ(t)m(τ)dτ will die in a small interval about t + τ, and if we replace t by t − τ, then Λ(t − τ)m(τ)dτ of the total recruitment made in a small interval about t − τ will die. The total natural death at time t of all previous total recruitments is ∫₀^∞ Λ(t − τ)m(τ)dτ. Equation (1.4.24a) becomes

ẋ₁(t) = Λ(t) − B(t) − ∫₀^∞ Λ(t − τ)m(τ) dτ.   (1.4.24b)
This can be considered an integral equation of renewal theory, where ẋ₁(t) and B(t) are known and Λ(t) is unknown. If we define the nth convolution m⁽ⁿ⁾(τ) of m(τ) recursively by the relations

m⁽¹⁾(τ) = m(τ),  m⁽ⁿ⁺¹⁾(τ) = ∫₀^τ m⁽ⁿ⁾(u) m(τ − u) du,   (1.4.25a)

and the replacement density

r(τ) = Σₙ₌₁^∞ m⁽ⁿ⁾(τ),

then the unique solution of (1.4.24b) is

Λ(t) = ẋ₁(t) + B(t) + ∫₀^∞ ẋ₁(t − τ) r(τ) dτ.   (1.4.25b)

If the mortality density is exponential, m(τ) = μe^{−μτ}, then

r(τ) = μ,   (1.4.26)

and (1.4.25b) becomes

Λ(t) = ẋ₁(t) + ∫₀^∞ ẋ₁(t − τ) r(τ) dτ + B(t),  or  Λ(t) = ẋ₁(t) + μx₁(t) + B(t).
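The collapse of the replacement density to r(τ) = μ for an exponential mortality density, as in (1.4.26), can be checked numerically by summing the convolution series; the grid, the truncation depth, and the value of μ below are our own choices.

```python
# Numerical check that for m(τ) = μ e^{-μτ} the replacement density r(τ) = Σ m^(n)(τ)
# is the constant μ, as asserted in (1.4.26). Grid and truncation are illustrative.
import math

mu, dt, N = 0.5, 0.01, 400                 # grid on [0, 4]
m = [mu * math.exp(-mu * i * dt) for i in range(N)]
conv = m[:]                                 # current convolution power, starts at m^(1)
r = m[:]                                    # running sum of the series
for _ in range(25):                         # m^(n+1)(τ) = ∫₀^τ m^(n)(u) m(τ-u) du
    conv = [dt * sum(conv[j] * m[i - j] for j in range(i + 1)) for i in range(N)]
    r = [r[i] + conv[i] for i in range(N)]
print(r[200], mu)                           # r(2.0) should be close to μ
```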
From this analysis one has (1.4.27). Since r(τ) ≥ 0 for all τ, there is a number 0 ≤ h(t) = h < ∞ such that

∫₀^∞ ẋ₁(t − τ) r(τ) dτ = ẋ₁(t − h) ∫₀^∞ r(τ) dτ,

and we can then postulate that for some finite number a₋₁,

∫₀^∞ ẋ₁(t − τ) r(τ) dτ = a₋₁ ẋ₁(t − h).   (1.4.28)

We have derived the neutral equation (1.4.29),
which now replaces (1.4.13). Because of the delays in (1.4.14) and (1.4.15) there is a possibility of oscillation and endemic equilibrium. One can now study the stability of the disease as well as its controllability by using any of the control strategies of Example A, i.e., allowing removal and active immunization. The results are available in Chukwu [15]. The interventions studied in [15] include the use of "positive controllers," e.g., putting (known) infected individuals in quarantine, and passive immunization, the direct injection of antibodies. The model (1.4.29) and the intervention of controllers suggest possible consequences for the spread of the AIDS epidemic. The study of such control systems will help us think through and focus on the moral and ethical questions raised by the controls used in the study.

1.5 Wind Tunnel Model; Mach Number Control
Delay equations of type (1.4.4) are also encountered in the design of aircraft control facilities. For example, in the control of a high-speed closed-circuit wind tunnel known as the National Transonic Facility (NTF), which is at present under development at NASA Langley Research Center, a linearized model of the Mach number dynamics is a system of three state equations with a delay, given by
The following finite-dimensional analogue of a result of Nakagiri [11] gives an explicit representation of U(t) in terms of e^{A₀t} and Aᵢ, i = 1, …, N.

Proposition 2.1.2 Define the matrix operators Vₖ, k = 1, 2, …, as follows:

V₁(t) = e^{A₀t}.   (2.1.9b)

The fundamental matrix U(t), t ≥ 0, of (2.1.6) is given by

U(t) = Σᵢ₌₁ᵏ Vᵢ(t − (i − 1)h),  t ∈ [(k − 1)h, kh].   (2.1.10a)

Writing out some expressions of Vₖ, we have
V₂(t) = ∫₀ᵗ e^{A₀(t−s)} A₁ e^{A₀s} ds,

V₃(t) = ∫₀ᵗ e^{A₀(t−s₁)} A₁ ∫₀^{s₁} e^{A₀(s₁−s)} A₁ e^{A₀s} ds ds₁.   (2.1.10b)
The expression (2.1.10) becomes simpler if Aᵢ, i = 1, …, N, commutes with e^{A₀t}. We have:
General Linear Equations
Corollary 2.1.1 Assume that for each i and for all t ≥ 0, Aᵢ commutes with e^{A₀t}, i.e., Aᵢe^{A₀t} = e^{A₀t}Aᵢ, i = 1, …, N. Then
For a special case of (2.1.7), namely

ẋ(t) = A₀x(t) + A₁x(t − 1),   (2.1.12)

we give an alternative description [7] of the fundamental matrix U. We note that by definition

(∂/∂t)U(t − s) = A₀U(t − s) + A₁U(t − s − 1),   (2.1.13)

for (t, s) ∈ [s, T] × [0, T], where

U(t − s) = I (the n × n identity matrix) for t = s, and U(t − s) = 0 for (t, s) ∈ [−1, s) × [0, T].   (2.1.14)
Let Uₖ(τ) = U(τ + k) for τ ∈ [0, 1], k = 0, 1, …, and set s = 0 in (2.1.13). Then by substituting in (2.1.13) we have

(d/dτ)U₀(τ) = A₀U₀(τ),  U₀(0) = I.   (2.1.15)

It follows that the solution of (2.1.13) and (2.1.14) over the interval t ∈ [k, k + 1] is given by U(t) = Uₖ(t − k). Set Zₖ(τ) = [U₀ᵀ(τ), …, Uₖᵀ(τ)]ᵀ to convert (2.1.15) into the system

(d/dτ)Zₖ(τ) = BₖZₖ(τ) for τ ∈ [0, 1],   (2.1.16)

Uₖ(τ) = EₖZₖ(τ),   (2.1.17)
where Zₖ(τ) is an n(k + 1) × n matrix, and Bₖ and Eₖ are respectively n(k + 1) × n(k + 1) and n × n(k + 1) matrices given by (2.1.18). The unique solution of (2.1.16) is

Zₖ(τ) = e^{Bₖτ} Zₖ(0).   (2.1.19)
We note that

Z₀(0) = I,  Zₖ(0) = [I; e^{B_{k−1}} Z_{k−1}(0)],  k = 1, 2, ….   (2.1.20)

From the definition of Uₖ(τ) and from (2.1.19), we deduce that

U(t) = Uₖ(t − k) = Eₖ e^{Bₖ(t−k)} Zₖ(0)   (2.1.21)

for t ∈ [k, k + 1]. The essential difference between the fundamental matrix solution of the ordinary differential equation (2.1.2) and that of the delay system is that U(t) may be singular for some values of t ∈ [0, ∞), whereas e^{A₀t} is always nonsingular. Indeed, consider the system
ẋ(t) = A₀x(t) − e^{A₀} x(t − 1).

We have U(t) = e^{A₀t} for t ∈ [0, 1] and U(t) = (2 − t)e^{A₀t} for t ∈ [1, 2]. It is clear that U(2) = 0, so that the fundamental matrix is singular at t = 2.
Problem 2.1.1: Find the fundamental matrix solution U(t) using both (2.1.10) and (2.1.8). Check the validity of the solutions of ẋ(t) = −x(t) + x(t − 1) on (0, 3].
U(t) = e⁻ᵗ,  t ∈ [0, 1],

U(t) = e⁻ᵗ[1 + e(t − 1)],  t ∈ [1, 2],

U(t) = e⁻ᵗ[1 + e(t − 1) + ½e²(t − 2)²],  t ∈ [2, 3].
Check validity by substitution. Does U(t) satisfy ẋ(t) = −x(t) + x(t − 1)? On [0, 1], U(t) = e⁻ᵗ, U̇(t) = −e⁻ᵗ, U(t − 1) = 0. Substituting, −e⁻ᵗ + e⁻ᵗ + 0 = 0, so the solution is valid on [0, 1]. Substituting into the equation on [1, 2], the solution is valid on [1, 2]. On [2, 3], substituting again, the solution is valid on [2, 3]. Using (2.1.8), on [0, 1],

U(t) = e^{A₀t} = e⁻ᵗ.
We now check the validity of this by substitution. Does U(t) satisfy

U̇(t) + U(t) − U(t − 1) = 0?

On the interval [0, 1], U(t) = e⁻ᵗ, U̇(t) = −e⁻ᵗ, U(t − 1) = 0. Substituting in the equation, we have −e⁻ᵗ + e⁻ᵗ − 0 = 0. The solution is valid on t ∈ [0, 1]. On the interval [1, 2], U(t) = e⁻ᵗ[1 + e(t − 1)], U̇(t) = −e⁻ᵗ[1 + e(t − 1)] + e⁻ᵗe, and U(t − 1) = e^{−(t−1)}. Substituting, −e⁻ᵗ[1 + e(t − 1)] + e⁻ᵗe + e⁻ᵗ[1 + e(t − 1)] − e⁻ᵗe = 0. The solution is valid on t ∈ [1, 2].
On the interval [2, 3] we obtain the same solution as for t ∈ [2, 3] using Equations (2.1.10a) and (2.1.10b).
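The piecewise closed forms for U(t) above can also be confirmed by integrating ẋ(t) = −x(t) + x(t − 1) directly by the method of steps; the step size and comparison points in the sketch below are arbitrary choices of ours.

```python
# Numerical check (method of steps, Euler integration) of the fundamental solution
# of x'(t) = -x(t) + x(t-1), which should match the piecewise formulas above.
import math

def U(t_end, dt=1e-4):
    """Fundamental solution with U(0) = 1 and U(t) = 0 for t < 0."""
    n_delay = round(1.0 / dt)
    hist = [0.0] * n_delay + [1.0]        # U on [-1, 0]
    for _ in range(round(t_end / dt)):
        u = hist[-1]
        u_delayed = hist[-1 - n_delay]    # U(t - 1)
        hist.append(u + dt * (-u + u_delayed))
    return hist[-1]

# closed forms on [0,1], [1,2], [2,3] from the text
exact = lambda t: math.exp(-t) * (1 + (math.e * (t - 1) if t >= 1 else 0)
                                    + (0.5 * math.e**2 * (t - 2)**2 if t >= 2 else 0))
for t in (0.5, 1.5, 2.5):
    print(t, U(t), exact(t))
```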
2.2 The Variation of Constant Formula of Retarded Equations

Using the fundamental solution U(t) of Section 2.1, we represent the solution x(φ, f) of the nonhomogeneous equation (2.1.1). We will show that this is given by the variation of constant formula, presented here for a very general linear retarded functional differential equation,
where f is locally integrable on E. In this case by a solution we mean a function x that satisfies (2.2.3) almost everywhere. We assume f is a function mapping [σ, ∞) → Eⁿ; L : [0, ∞) × C → Eⁿ is continuous and linear in φ, i.e., for any numbers α, β and functions φ, ψ ∈ C we have the relation L(t, αφ + βψ) = αL(t, φ) + βL(t, ψ). It follows from the Riesz Representation Theorem [6, p. 143] that

L(t, φ) = ∫₋ₕ⁰ [dθ η(t, θ)] φ(θ),   (2.2.4)

where η(t, θ) is an n × n matrix function that is measurable in (t, θ) ∈ E × E, such that

η(t, θ) = 0 for θ ≥ 0,  η(t, θ) = η(t, −h) for θ ≤ −h.

Also η(t, θ) is continuous from the left in θ on (−h, 0) and of bounded variation in θ on [−h, 0] for each t. Also there is a locally integrable function m : (−∞, ∞) → E such that

|L(t, φ)| ≤ m(t)‖φ‖,  ∀t ∈ (−∞, ∞), φ ∈ C.
Included in (2.2.3) is the system

ẋ(t) = Σₖ₌₁ᴺ Aₖ(t) x(t − wₖ) + ∫₋ₕ⁰ A(t, θ) x(t + θ) dθ + f(t),   (2.2.5)

where the Aₖ(t) are n × n matrices, A(t, θ) is an n × n matrix function that is integrable in θ for each t, and there is a locally integrable function a(t) such that
The fundamental matrix of

ẋ(t) = L(t, xₜ),  xₛ = φ,   (2.2.6)

is the n × n matrix U(t, s), which is a solution of the equation

∂U(t, s)/∂t = L(t, Uₜ(·, s)),  t ≥ s, a.e. in s and t,   (2.2.7)

with U(t, s) = 0 for s − h ≤ t < s and U(s, s) = I.

The solution x = 0 of (2.4.1) is uniformly stable if for every ε > 0 there is a δ = δ(ε) such that if ‖φ‖ ≤ δ, then ‖xₜ(σ, φ)‖ ≤ ε for all t ≥ σ. The solution x = 0 of (2.4.1) is uniformly asymptotically stable if it is uniformly stable and there is a positive constant λ > 0 such that for every η > 0 there is a T(η) such that if ‖φ‖ < λ, then ‖xₜ(σ, φ)‖ < η for all t ≥ σ + T(η), for each σ ∈ E. If y(t) is any solution of (2.4.1), then y is said to be uniformly stable if the solution z = 0 of
is uniformly stable. The concept of uniform asymptotic stability is defined similarly. In the definition given above, δ is independent of σ, and therefore stability is uniform. If δ = δ(ε, σ), we have mere stability. However, the two concepts coincide if L(t, φ) is periodic in t, i.e., L(t + ω, φ) = L(t, φ), or if L is time invariant. The concept of uniform boundedness is useful. It is defined as follows: The solution x(σ, φ) of Equation (2.4.1) is uniformly bounded if for any α > 0 there is a β(α) > 0 such that for all σ ∈ E, φ ∈ C with ‖φ‖ ≤ α we have ‖x(σ, φ)(t)‖ ≤ β(α) for all t ≥ σ. Recall that L satisfies, for some locally integrable M : (−∞, ∞) → E, the inequality
Stated below is a characterization of the two concepts of stability in terms of the fundamental matrix solution U of Equation (2.4.1).
Theorem 2.4.1 Consider Equation (2.4.1), where M in (2.4.2) satisfies, for some constant M₁, the inequality (2.4.3). Then the solution x = 0 of (2.4.1) is uniformly stable if and only if there is a constant k such that

|U(t, s)| ≤ k,  t ≥ s.   (2.4.4)

Also, the trivial solution x = 0 of (2.4.1) is uniformly asymptotically stable if and only if there are constants k > 0 and α > 0 such that for all s ∈ E,

|U(t, s)| ≤ ke^{−α(t−s)},  t ≥ s.   (2.4.5)
If the solution x(σ, φ) of (2.4.1) defines the operator T by T(t, σ)φ = xₜ(σ, φ), then we can replace (2.4.4) and (2.4.5) respectively by

|T(t, σ)| ≤ k,  t ≥ σ,   (2.4.6)

and

|T(t, σ)| ≤ ke^{−α(t−σ)},  t ≥ σ.   (2.4.7)
Asymptotic stability can also be deduced from some properties of the characteristic equation of a homogeneous linear differential-difference equation with constant coefficients (2.1.7), namely

Δ(λ) = Iλ − A₀ − A₁e^{−λh} = 0,   (2.4.8)

where λ is a complex number. Equation (2.4.8) is obtained from (2.1.7) by substituting x = e^{λt}ξ in (2.1.7), where λ is a constant and ξ is a nontrivial vector. We have:
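For a scalar instance of (2.4.8), a characteristic root can be located numerically. The sketch below applies Newton's method in the complex plane to Δ(λ) = λ + a − be^{−λh}, the scalar case of (2.4.8) with A₀ = −a, A₁ = b; the parameter values and starting point are illustrative assumptions.

```python
# Newton iteration on the scalar characteristic function Δ(λ) = λ + a - b e^{-λh}.
import cmath

def newton_root(a, b, h, lam0=0.0, iters=50):
    lam = complex(lam0)
    for _ in range(iters):
        f = lam + a - b * cmath.exp(-lam * h)       # Δ(λ)
        df = 1 + b * h * cmath.exp(-lam * h)        # Δ'(λ)
        lam -= f / df
    return lam

root = newton_root(a=2.0, b=1.0, h=1.0)
print(root)   # a root of the characteristic equation; here Re(root) < 0
```

Locating all roots requires scanning many starting points, since Δ has infinitely many zeros; this sketch only demonstrates the substitution x = e^{λt}ξ concretely.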
there exist constants α > 0, k > 0 such that if x(φ) is a solution of (2.4.11) with x₀(φ) = φ, then

‖xₜ(φ)‖ ≤ k‖φ‖e^{−αt},  t ≥ 0.   (2.4.14)

The definitions of uniform stability and uniform asymptotic stability can be applied to the neutral linear system (2.3.8), i.e.,

(d/dt)[Dxₜ] = Lxₜ,   (2.4.16)

where L is given by (2.4.17)
and

Dxₜ = x(t) − ∫₋ₕ⁰ [dμ(θ)] x(t + θ),   (2.4.18)

where the variation Var₍₋ₛ,₀₎μ of μ satisfies Var₍₋ₛ,₀₎μ → 0 as s → 0, and μ has no singular part. This means that

Dφ = φ(0) + Σₖ₌₁^∞ Aₖφ(−wₖ) + ∫₋ₕ⁰ A(s)φ(s) ds,  0 < wₖ ≤ h,

with

Σₖ₌₁^∞ |Aₖ| + ∫₋ₕ⁰ |A(s)| ds < ∞.
We now give conditions in terms of the characteristic equation

det Δ(λ) = 0,  Δ(λ) = λD(e^{λ·}I) − L(e^{λ·}I),   (2.4.19)

for uniform asymptotic stability.

Theorem 2.4.3 Suppose in (2.4.16), where L and D are given as in (2.4.17) and (2.4.18), we assume that Δ(λ) in (2.4.19) satisfies

sup{Re λ : det Δ(λ) = 0} < 0.

Then the zero solution of (2.4.16) is uniformly asymptotically stable. Furthermore, if x(φ) is the solution of (2.4.16) with x₀ = φ, and if U is the fundamental matrix solution of (2.4.16), then there exist constants k > 0, α > 0 such that

‖xₜ(φ)‖ ≤ k‖φ‖e^{−αt},  t ≥ 0.

For the autonomous system (2.4.16), stability implies uniform stability, but asymptotic stability does not imply uniform asymptotic stability. It is known from Brumley [1] that we can have all eigenvalues λ with Re λ < 0 and yet have some solutions unbounded. To have the same situation as in retarded
equations, where for autonomous or periodic systems uniform asymptotic stability is equivalent to asymptotic stability, we require that D be stable in the following sense:

Definition 2.4.2: Suppose D : C → Eⁿ is linear, continuous, and atomic at 0. Then D is called stable if the zero solution of the homogeneous difference equation

Dyₜ = 0,  t > 0,  y₀ = ψ,  Dψ = 0,   (2.4.20)

is uniformly asymptotically stable. Let the characteristic equation of D be det Δ_D(λ) = 0.
Stable operators are characterized by the following theorem:

Theorem 2.4.4 The following assertions are equivalent:
(a) D is stable.
(b) α_D = sup{Re λ : det Δ_D(λ) = 0} < 0.
(c) Any solution of the nonhomogeneous equation

Dyₜ = h(t),  t ≥ 0,   (2.4.21)

where h : [0, ∞) → Eⁿ is continuous, satisfies

‖yₜ‖ ≤ a(e^{−bt}‖y₀‖ + sup_{0≤u≤t} e^{−b(t−u)} |h(u)|),

where a > 0, b > 0 are some constants.
(d) Let Dφ = φ(0) − ∫₋ₕ⁰ [dμ(s)]φ(s), where Var₍₋ₛ,₀₎μ → 0 as s → 0 and μ has no singular part. Then all solutions of the characteristic equation

det[I − ∫₋ₕ⁰ e^{λs} dμ(s)] = 0

satisfy Re λ ≤ −δ for some δ > 0.
There are simple examples of stable operators:
1. Dφ = φ(0). This corresponds to retarded functional differential equations.
2. Dφ = φ(0) − Aφ(−h), where the eigenvalues of A have moduli less than 1. D is stable because the roots of det[I − Ae^{−λh}] = 0 have Re λ ≤ −δ < 0.
It is illuminating to state a corollary of Theorem 2.4.3.
Corollary 2.4.3 Consider the system

(d/dt)[x(t) − A₋₁x(t − h)] = A₀x(t) + A₁x(t − h).   (2.4.22)

Suppose

α₀ = sup{Re λ : λ(I − A₋₁e^{−λh}) = A₀ + A₁e^{−λh}} < 0,

and x(φ) is a solution of (2.4.22) that coincides with φ on [−h, 0]. Then |x(φ)(t)| ≤ ke^{−αt}‖φ‖, t ≥ 0, for some k ≥ 0 and α > 0.
Examples
Example 2.4.1 [9]: Consider the delay system

ẍ(t) + bẋ(t) + qẋ(t − h) + kx(t) = 0,   (2.4.23)

where b, q, k, and h are positive constants. If we assume

b > q,   (2.4.24)

then (2.4.23) is uniformly asymptotically stable. We shall prove this by showing that every root of the characteristic equation

λ² + bλ + qλe^{−λh} + k = 0   (2.4.25)

has negative real part. Indeed, suppose that λ = α + iβ is a root of (2.4.25) with α ≥ 0, and aim at a contradiction. Suppose β = 0. Then the left-hand side of (2.4.25) would be positive, yielding a contradiction. Hence β ≠ 0. Note that

Im[(λ² + bλ + qλe^{−λh} + k)/β] = 2α + b + qe^{−αh}(cos βh − (α/β) sin βh) ≥ 2α + b − qe^{−αh}(1 + αh) ≥ b − q > 0,

since e^{−αh}(1 + αh) ≤ 1 for α ≥ 0. Since the imaginary part of (2.4.25) is zero, we have deduced a contradiction. Hence every λ has negative real part.
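The conclusion of Example 2.4.1 can be illustrated by simulating (2.4.23) directly for sample constants with b > q; the parameter values, the crude Euler scheme, the constant initial history, and the time windows below are our own choices.

```python
# Illustrative simulation of x''(t) + b x'(t) + q x'(t-h) + k x(t) = 0 with b > q;
# the solution should decay, matching the uniform asymptotic stability claim.
b, q, k, h, dt = 2.0, 1.0, 1.0, 0.5, 1e-3
nd = round(h / dt)
x, v = 1.0, 0.0                   # constant initial history: x ≡ 1, x' ≡ 0 on [-h, 0]
vhist = [0.0] * nd                # ring buffer holding x'(t - h)
peak_early, peak_late = 0.0, 0.0
for i in range(60000):            # integrate to t = 60
    accel = -b * v - q * vhist[i % nd] - k * x
    vhist[i % nd] = v             # overwrite after reading the nd-steps-old value
    x += dt * v
    v += dt * accel
    t = (i + 1) * dt
    if t <= 10.0:
        peak_early = max(peak_early, abs(x))
    if t >= 50.0:
        peak_late = max(peak_late, abs(x))
print(peak_early, peak_late)
```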
Example 2.4.2: Consider (2.4.23). The following is another sufficient condition, depending on the magnitude of h, for the uniform asymptotic stability of (2.4.23):

b + q > 0,  (b + q + kh)h < π/2.   (2.4.26)

Suppose λ = α + iβ is a root of (2.4.25) with α ≥ 0. Then

|λ² + bλ + qλe^{−λh} + k| ≥ |λ|² − (b + q)|λ| − k.

From this we deduce that either |λ| < 1/h, so that |λh| < 1 < π/2, or |λ| ≤ b + q + k/|λ| ≤ b + q + kh, so that |λh| ≤ (b + q + kh)h < π/2. Hence |λh| < π/2. As before, with λ = α + iβ we have |βh| < π/2, and α ≥ 0, β = 0 is impossible for (2.4.23). Hence

Im[(λ² + bλ + qλe^{−λh} + k)/β] = 2α + b + qe^{−αh}(cos βh − (α/β) sin βh) > 2α + b − qe^{−αh}αh ≥ b + α(2 − qh) > 0,

since cos βh > 0 for |βh| < π/2 and qh < π/2 < 2 by (2.4.26). This contradiction proves the required result.
Example 2.4.3: The same methods yield the following sufficient conditions for uniform asymptotic stability:

ẋ(t) = a₀x(t) + a₁x(t − h), h > 0:  a₀ + |a₁| < 0;   (2.4.27)

ẋ(t) = −a₁x(t − h):  a₁ > 0, 0 ≤ a₁h < π/2;   (2.4.28)

ẍ(t) + bẋ(t) + qẋ(t − h₁) + kx(t) + px(t − h₂) = 0:  b, q, k, p, h₁, h₂ > 0.
2.5 Perturbed Linear Systems Stability [10]

In this section we consider conditions that will ensure that the stability properties of the homogeneous linear equation

ẋ(t) = L(t, xₜ)   (2.5.1)

will be shared by the system

ẋ(t) = L(t, xₜ) + f(t, xₜ),   (2.5.2)

where f(t, 0) = 0. In (2.5.1) there is a locally integrable function m : (−∞, ∞) → E such that

|L(t, φ)| ≤ m(t)‖φ‖,  ∀t ∈ (−∞, ∞).   (2.5.3)

Also the function f : E × C → Eⁿ is continuous. The following lemma is well known:
Lemma 2.5.1 (Bellman's Inequality): Let u and α be real-valued continuous functions on an interval [a, b], and assume that β is an integrable function on [a, b] with β ≥ 0 and

u(t) ≤ α(t) + ∫ₐᵗ β(s)u(s) ds,  a ≤ t ≤ b.   (2.5.4)

Then

u(t) ≤ α(t) + ∫ₐᵗ β(s)α(s) exp(∫ₛᵗ β(r) dr) ds,  a ≤ t ≤ b.   (2.5.5)

If we assume further that α is nondecreasing, then

u(t) ≤ α(t) exp(∫ₐᵗ β(s) ds),  a ≤ t ≤ b.   (2.5.6)
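Before the lemma is used, here is a small numerical illustration of the bound (2.5.6) in its equality case; the choice α(t) ≡ 1, β(s) = s and the grid are assumptions for demonstration only.

```python
# Numerical illustration of Bellman's inequality (2.5.6):
# u(t) = 1 + ∫₀ᵗ s·u(s) ds, so α(t) ≡ 1, β(s) = s, and the exact solution e^{t²/2}
# coincides with the Gronwall bound α(t)·exp(∫₀ᵗ β) — the equality case.
import math

dt = 1e-4
u, acc = 1.0, 0.0
for i in range(int(1.0 / dt)):       # march the integral equation on [0, 1]
    acc += dt * (i * dt) * u         # left-endpoint rule for ∫ β(s) u(s) ds
    u = 1.0 + acc
bound = math.exp(0.5)                # α(1)·exp(∫₀¹ s ds) = e^{1/2}
print(u, bound)
```

The left-endpoint rule slightly underestimates the increasing integrand, so the computed u sits just below the bound, as the inequality requires.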
Theorem 2.5.1 Suppose the system (2.5.1) is uniformly stable and there is a locally integrable function γ : [0, ∞) → E such that

|f(t, φ)| ≤ γ(t)‖φ‖,  with ∫₀^∞ γ(s) ds < ∞.

Then (2.5.2) is uniformly stable on [0, ∞). Suppose (2.5.1) is
(i) uniformly asymptotically stable, so that there exist constants k ≥ 1, α > 0 such that every solution of (2.5.1) satisfies ‖xₜ(σ, φ)‖ ≤ k‖φ‖e^{−α(t−σ)}, t ≥ σ; and
(ii) |f(t, φ)| ≤ [c + π(t)]‖φ‖, where c > 0 is the constant c = α/2k and π is an integrable function with Π = ∫₀^∞ π(t) dt < ∞.
Then every solution x(σ, φ) of (2.5.2) with x_σ(σ, φ) = φ satisfies

‖xₜ(σ, φ)‖ ≤ M‖φ‖e^{−(α/2)(t−σ)},  M = ke^{kΠ}.
Remark 2.5.1: Condition (ii) of the second part of Theorem 2.5.1 is not too severe; see [5, p. 86]. It is a natural generalization of conditions stated in the famous theorem of Lyapunov on stability with respect to the first approximation.

Proof: From Theorem 2.4.1 and condition (2.4.6), |T(t, σ)| ≤ k, t ≥ σ, for some k > 0, because (2.5.1) is uniformly stable. Also |T(t, σ)X₀| ≤ k for all t ≥ σ, where X₀ is given in (2.2.13). The variation of constant formula (2.2.14) applied to (2.5.2) implies

‖xₜ(σ, φ)‖ ≤ ‖T(t, σ)φ‖ + ∫_σ^t ‖T(t, s)X₀ f(s, xₛ)‖ ds ≤ k‖φ‖ + k ∫_σ^t γ(s)‖xₛ‖ ds,  t ≥ σ.

Apply Bellman's Inequality (2.5.6) to this to deduce that

‖xₜ(σ, φ)‖ ≤ k‖φ‖ exp(k ∫₀^∞ γ(s) ds)

for all φ ∈ C. This proves that (2.5.2) is uniformly stable on [0, ∞). For the second statement, recall that by Theorem 2.4.1,

‖T(t, σ)‖ ≤ ke^{−α(t−σ)},  ‖T(t, σ)X₀‖ ≤ ke^{−α(t−σ)}  for t ≥ σ.
It follows from the variation of constant formula applied to (2.5.2) that

‖xₜ(σ, φ)‖ ≤ k‖φ‖e^{−α(t−σ)} + ∫_σ^t ke^{−α(t−s)}[c + π(s)]‖xₛ‖ ds.

Thus if x̂ₜ = e^{αt}xₜ, then

‖x̂ₜ‖ ≤ k‖φ‖e^{ασ} + ∫_σ^t k[c + π(s)]‖x̂ₛ‖ ds.

Using Bellman's Inequality (2.5.6) once again, we deduce from this estimate that
This yields

‖xₜ(σ, φ)‖ ≤ M‖φ‖e^{−(α/2)(t−σ)},

where c = α/2k and M = ke^{kΠ}. The proof is complete.

Condition (ii) of Theorem 2.5.1 can be relaxed by using a recent sharper generalization of Bellman's Inequality due to En-Hao Yang [3].

Lemma 2.5.2 Let C(J, E⁺) be the Banach space of continuous functions taking J into E⁺ = [0, ∞). Let the following conditions be satisfied:
(i) a(t) ∈ C(J, E⁺) is a nondecreasing function with a(t) ≥ 1 for all t ∈ J.
(ii) The functions u(t), fᵢ(t) ∈ C(J, E⁺).
(iii) The inequality

u(t) ≤ a(t) + Σᵢ₌₁ⁿ ∫₀ᵗ fᵢ(s)(u(s))^{rᵢ} ds,  t ∈ [0, T] ≡ J,

holds, where rᵢ ∈ (0, 1]. Then we also have the inequality

u(t) ≤ a(t) ∏ᵢ₌₁ⁿ Gᵢ(t),  t ∈ J,

where the functions Gᵢ(t) are determined by the fᵢ and rᵢ as in [3]; herein ∏ᵢ₌₁⁰ Gᵢ(t) ≡ 1, t ∈ J.
Theorem 2.5.2 In (2.5.2) assume that
(i) f(t, 0) = 0.
(ii) The system (2.5.1) is uniformly asymptotically stable.
(iii) For each φ ∈ C, t ≥ σ ≥ 0, we have

|f(t, φ)| ≤ Σᵢ₌₁ⁿ cᵢ(t)‖φ‖^{rᵢ} + c_{n+1}(t),

where rᵢ ∈ (0, 1] are constants and the cⱼ(t) ∈ C(E⁺, E⁺) are known functions, j = 1, 2, …, n + 1.
(iv) ∫₀^∞ cⱼ(s)e^{αs} ds < ∞, j = 1, …, n + 1.
Then all solutions of (2.5.2) satisfy

‖xₜ(σ, φ)‖ ≤ M(σ, φ)e^{−αt}

for some constant M = M(σ, φ) and α > 0.
Proof: Just as before, using the uniform asymptotic stability of (2.5.1) and the variation of constant formula, we obtain an integral inequality for e^{αt}‖xₜ(σ, φ)‖. Now apply the lemma to deduce the inequality

e^{αt}‖xₜ(σ, φ)‖ ≤ a(t) ∏ₖ₌₁ⁿ Rₖ(t),

wherein ∏ₖ₌₁⁰ Rₖ(t) ≡ 1, t ≥ σ. Consequently,

‖xₜ(σ, φ)‖ ≤ e^{−αt}D(σ, φ),

where lim_{t→∞} ∏ₖ₌₁ⁿ Rₖ(t) = D(σ, φ) < ∞ by hypothesis (iv). Hence the result.
Suppose
(i) g(x) > 0 for 0 < x < k, where k is some positive constant (the carrying capacity in the absence of disease);
(ii) g(x) > xg′(x), 0 < x < k.
Then
(a) The equilibrium x∞ = 0 is unstable.
(b) The equilibrium x∞ = k is stable if (iii) βh₂k < 1, and unstable if βh₂k > 1.
(c) The equilibrium x∞ = 1/(βh₂) is stable if βh₂k > 1 and

a + ch₂ = βh₂g(1/(βh₂)) − 2g′(1/(βh₂)) > 0,

with a = βh₂g(x∞) − g′(x∞) and c = −g′(x∞)/h₂.

Proof: The equilibria for (2.5.7) are x∞ = 0 (and we have extinction) together with the solutions of

g(x∞) − βh₂x∞g(x∞) = 0,

i.e., x∞ = k (disappearance of the disease) or x∞ = 1/(βh₂) (endemic equilibrium).
We now linearize about the equilibrium x∞:

ż(t) = [g′(x∞) − βh₂g(x∞)] z(t) − βx∞ ∫_{t−h₁−h₂}^{t−h₁} z(s) ds,   (2.7.8)

and construct the characteristic equation, the condition on λ such that z(t) = ce^{λt} is a solution of (2.7.8):

λ = g′(x∞) − βh₂g(x∞) − βx∞ ∫_{−h₁−h₂}^{−h₁} e^{λs} ds.   (2.7.9)

At x∞ = 0, equation (2.7.8) is ż(t) = g′(0)z(t). This is unstable by hypothesis (ii). For the equilibrium x∞ = k, we observe that

a = −g′(k) > 0,  b = βk,  c = −βkg′(k) > 0,

since g(k) = 0 and g′(k) < 0. The roots of (2.7.9) have negative real part (by [8]) if βh₂k < 1, and (2.7.9) is unstable if βh₂k > 1; indeed, all the roots of (2.7.9) satisfy Re λ < 0 if h₁ ≥ 0, h₂ ≥ 0, 0 < |c|h₂ < a, and |b|h₂ ≤ 1.
At the equilibrium x∞ = 1/(βh₂),

a = βh₂g(1/(βh₂)) − g′(1/(βh₂)),  b = 1/h₂,  c = −g′(1/(βh₂))/h₂.

By (ii), a > 0. If g′(1/(βh₂)) < 0, then c > 0, and we have stability if a + ch₂ > 0, i.e., if

βh₂g(1/(βh₂)) − 2g′(1/(βh₂)) > 0.

We sum up: If βh₂k < 1, the only stable equilibrium is x∞ = k, and the disease disappears. If βh₂k > 1, there is an endemic equilibrium x∞ = 1/(βh₂), and this is stable if βh₂k is sufficiently close to 1 and unstable (with oscillation about x∞) if βh₂k is sufficiently large.
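The classification can be read off numerically for a concrete growth function; the logistic choice g(x) = r(1 − x/k) and all parameter values below are assumptions for illustration, not values from the text.

```python
# Numerical reading of the equilibrium classification above, with logistic growth
# g(x) = r(1 - x/k); r, k, beta, h2 are hypothetical values.
r, k, beta, h2 = 1.0, 100.0, 0.02, 1.0

def g(x):
    return r * (1.0 - x / k)

threshold = beta * h2 * k        # x∞ = k is stable iff βh₂k < 1
x_endemic = 1.0 / (beta * h2)    # endemic equilibrium x∞ = 1/(βh₂)
print(threshold, x_endemic, g(x_endemic))
```

Here βh₂k = 2 > 1, so the disease-free state x = k is unstable and the endemic equilibrium x∞ = 50 lies inside (0, k), consistent with the summary above.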
REFERENCES
1. W. E. Brumley, "On the Asymptotic Behavior of Solutions of Differential-Difference Equations of Neutral Type," J. Differential Equations 7 (1970) 175-188.
2. R. Datko, "Linear Autonomous Neutral Differential Equations in a Banach Space," J. Differential Equations 25 (1977) 258-274.
3. En-Hao Yang, "Perturbations of Nonlinear Systems of Ordinary Differential Equations," J. Math. Anal. Appl. 103 (1984) 1-15.
4. J. Hale, Theory of Functional Differential Equations, Springer-Verlag, New York, 1977.
5. J. Hale, Ordinary Differential Equations, Wiley, New York, 1969.
6. S. Nakagiri, "On the Fundamental Solution of Delay-Differential Equations in Banach Spaces," J. Differential Equations 41 (1981) 349-368.
7. R. B. Zmood and N. H. McClamroch, "On the Pointwise Completeness of Differential-Difference Equations," J. Differential Equations 12 (1972) 474-486.
8. F. Brauer, "Epidemic Models in Populations of Varying Size," in Mathematical Approaches to Problems in Resource Management and Epidemiology, edited by C. Castillo-Chavez, S. A. Levin, C. A. Shoemaker, Lecture Notes in Biomathematics 81, Springer-Verlag, 1989.
9. R. D. Driver, Ordinary and Delay Differential Equations, Springer-Verlag, New York, 1977.
10. E. N. Chukwu, "Null Controllability of Nonlinear Delay Systems with Restrained Controls," J. Math. Anal. Appl. 76 (1980) 283-296.
Chapter 3
Lyapunov-Razumikhin Methods of Stability in Delay Equations [5]

3.1 Lyapunov Stability Theory
For linear and quasilinear systems the methods of the previous chapter are adequate. For inherently nonlinear systems the ideas of Lyapunov as extended by LaSalle and Hale are powerful tools. For hereditary systems these methods require the construction of Lyapunov functionals. The main theory is now presented. Consider the nonlinear system

ẋ(t) = f(t, xₜ),   (3.1.1)

where f(t, 0) = 0. Suppose V : E × C → E is continuous. We define

V̇(t, φ) = limsup_{r→0⁺} (1/r)[V(t + r, x_{t+r}(t, φ)) − V(t, φ)],   (3.1.2)

where x(t, φ) is a solution of (3.1.1). The following result is valid [3, p. 105].

Theorem 3.1.1 Suppose f : E × C → Eⁿ in (3.1.1) takes E × (bounded sets of C) into bounded sets of Eⁿ, and the functions u, v, w : E⁺ → E⁺ are continuous, nondecreasing functions with the property that u(s), v(s) are positive for s > 0, u(0) = v(0) = 0. Suppose there exists a continuous function V : E × C → E such that
(i) u(|φ(0)|) ≤ V(t, φ) ≤ v(‖φ‖), and
(ii) V̇(t, φ) ≤ −w(|φ(0)|).
Then x = 0 is uniformly stable. If u(s) → ∞ as s → ∞, the equation is uniformly bounded. If w(s) > 0 for s > 0, the solution x = 0 of (3.1.1) is uniformly asymptotically stable.

Example 3.1.1: Consider the system (3.1.3).
We now show that if a > |b|, then (3.1.3) is uniformly asymptotically stable. We use the Lyapunov functional

V(φ) = ½φ²(0) + μ ∫₋ₕ⁰ φ²(s) ds,

where μ > 0. Differentiating V along solutions of (3.1.3) yields a quadratic form in φ(0) and φ(−h), and this form is negative definite provided a > |b| and μ is chosen suitably. Note that with μ = a/2, |b| can be taken as large as possible; as a consequence, |b| < a suffices. It is now very easy to see that conditions (i) and (ii) are satisfied with u(|φ(0)|) = ½φ²(0), v(‖φ‖) = V(φ), and w(|φ(0)|) = σφ²(0) for some σ > 0. We now apply Theorem 3.1.1 to an inherently nonlinear system.
Example 3.1.2: Consider

ẋ(t) = a(t)x³(t) + b(t)x³(t − h).   (3.1.4)

Assume that there are some constants δ > 0 and q such that

a(t) ≤ −δ < 0,  0 < q < 1,   (3.1.5)

|b(t)| ≤ qδ.   (3.1.6)

Then (3.1.4) is uniformly asymptotically stable. Indeed, consider the functional
which obviously satisfies condition (i) of Theorem 3.1.1. If we differentiate, we obtain a derivative V̇ that is negative definite by (3.1.5) and (3.1.6). Thus condition (ii) for uniform asymptotic stability in Theorem 3.1.1 is verified. For autonomous systems an interesting theory is available for investigating the asymptotic behavior of solutions. Consider
ẋ(t) = f(xₜ),   (3.1.7)

where f : C → Eⁿ is continuous on some open set

S = {φ ∈ C : ‖φ‖ < H},

where H is a constant. We denote the solution of (3.1.7) by x(φ) and define a path or motion in C through φ as the set ∪_{t≥0} xₜ(φ), where the solution x(φ) is defined on [0, ∞).
where α > 0 is a constant. To conclude the proof of the lemma, we observe that if x(φ) is a solution of (3.1.1) which is bounded, then ẋ(φ)(t) = f(xₜ(φ)), so that |f(xₜ(φ))| ≤ L for some L, since ‖xₜ(φ)‖ ≤ α and f maps bounded sets into bounded sets. Since ‖ẋ(φ)(t)‖ ≤ L for all t ≥ 0, the Arzelà-Ascoli Theorem guarantees that {xₜ(φ) : t ≥ 0} belongs to a compact set.
Lemma 3.1.2 Let x(φ) be a solution of Equation (3.1.1) which is defined on [−h, ∞) with ‖xₜ(φ)‖ ≤ α₁ < α for all t ∈ [0, ∞). Then Γ⁺(φ), the positive limiting set of φ, is nonempty, compact, connected, and invariant, and dist(xₜ(φ), Γ⁺(φ)) → 0 as t → ∞.
Definition 3.1.2: A map V : C → E⁺ is a Lyapunov functional on a set G in C relative to the equation

ẋ(t) = f(xₜ),   (3.1.7)

if V is continuous on Ḡ, the closure of G, and

V̇(φ) = limsup_{r→0⁺} (1/r)[V(x_r(φ)) − V(φ)] ≤ 0

on G. Let

S = {φ ∈ Ḡ : V̇(φ) = 0},

and let the set M be the largest set in S that is invariant with respect to (3.1.7).
Theorem 3.1.2 [3, p. 119] Suppose V : C → E⁺ is a Lyapunov functional on G and xₜ(φ) is a bounded solution of (3.1.7) that remains in G for each t. Then xₜ(φ) → M as t → ∞.

Theorem 3.1.3 [3, p. 119] Let U_l = {φ ∈ C : V(φ) < l}, where V : C → E⁺ is a Lyapunov functional and l is a constant. Suppose there is a constant k = k(l) such that |φ(0)| < k for every φ ∈ U_l. Then any solution xₜ(φ) of (3.1.7) with φ ∈ U_l approaches M as t → ∞.

Corollary 3.1.3 Let V : C → E⁺ be continuous. Suppose there are nonnegative functions u(r), v(r) such that u(r) → ∞ as r → ∞ and u(|φ(0)|) ≤ V(φ), V̇(φ) ≤ −v(|φ(0)|). Then every solution of (3.1.7) is bounded. If we assume further that v(r) is positive definite, so that v(0) = 0 and v(r) > 0 for r > 0, then every solution approaches zero as t → ∞.
Example 3.1.3: Consider the system of two masses and three linear springs. Let xᵢ denote the displacement of the mass mᵢ from its equilibrium position, with positive displacements measured to the right. The masses are subject to external forces Fⱼ(t). In matrix form this coupled system becomes

Mẍ + Bx = F,

where M is symmetric and positive definite and B is symmetric and positive semidefinite. The external force F can be taken to be a state feedback with delay, given by

H(xₜ) = ∫₀ʰ F(s)x(t − s) ds.
Thus the system considered is

Mẍ(t) + Bx(t) = ∫₀ʰ F(s)x(t − s) ds;   (3.1.8)

M, B are 2 × 2 symmetric matrices, and F is a 2 × 2 matrix function that is continuously differentiable. We now consider a coupled system of n such masses; then M, B, F are n × n symmetric matrices. If we set

A = B − ∫₀ʰ F(s) ds,

(3.1.8) can be rewritten as

ẋ(t) = y(t),  Mẏ(t) = −Ax(t) + ∫₀ʰ F(s)[x(t − s) − x(t)] ds.   (3.1.9)
Theorem 3.1.4 In (3.1.9) assume that A and M are positive definite, F(s) is positive semidefinite for s ∈ [0, h], and F has the property that there exists a c ∈ [0, h] such that Ḟ(c) < 0. Then every solution of (3.1.9) satisfies x(t) → 0 as t → ∞.

Proof: The proof of this result is due to Hale [3, p. 124]. It uses the following Lyapunov functional:

V(φ, ψ) = ½φᵀ(0)Aφ(0) + ½ψᵀ(0)Mψ(0) + ½∫₀ʰ [φ(−s) − φ(0)]ᵀ F(s) [φ(−s) − φ(0)] ds,

where ᵀ denotes vector transpose and φ, ψ are the initial values of x and y of (3.1.9). One verifies easily that

V̇(φ, ψ) = −½[φ(−h) − φ(0)]ᵀ F(h) [φ(−h) − φ(0)] + ½∫₀ʰ [φ(−s) − φ(0)]ᵀ Ḟ(s) [φ(−s) − φ(0)] ds ≤ 0,   (3.1.10)

by the conditions of the theorem. From the hypothesis there exists an interval I_c containing c such that Ḟ(s) < 0 for all s ∈ I_c. It follows from (3.1.10) that V̇(φ, ψ) = 0 implies φ(−s) = φ(0) for s ∈ I_c. For a solution (x, y) to belong to the largest invariant set where V̇(φ, ψ) = 0, we must have x(t − s) = x(t) for all t ∈ (−∞, ∞), s ∈ I_c. This implies that x(t) = b is a constant. From (3.1.9), y = 0 and thus Ab = 0. Because A > 0, b = 0. Therefore the largest set where V̇(φ, ψ) = 0 is {(0, 0)}, the origin. But V satisfies the conditions of Theorem 3.1.3 and Corollary 3.1.3. The conclusion is valid: every solution of (3.1.9) approaches (0, 0) as t → ∞.
3.2 Razumikhin-type Theorems
There is an extensive theory of the application of the Lyapunov stability method to ordinary differential equations. Often the Lyapunov functions constructed for ordinary systems can be fruitfully exploited to investigate the stability of analogous functional differential equations. The theorems of Razumikhin are the bridge. These results are based on the properties of a continuous function V : E × Eⁿ → E whose derivative V̇(t, φ(0)) along solutions x(t, φ) of the system (3.2.1) is defined to be

V̇(t, φ(0)) = limsup_{r→0⁺} (1/r) [V(t + r, x(t + r; t, φ)) − V(t, φ(0))],

where x(·; t, φ) denotes the solution of (3.2.1) through (t, φ).
The following results are contained in Hale [3, p. 127].

Theorem 3.2.1 Let f : E × C → Eⁿ take E × (bounded sets of C) into bounded sets of Eⁿ. Suppose u, v, w : E⁺ → E⁺ are continuous, nondecreasing functions, u(s), v(s) positive for s > 0, u(0) = v(0) = 0. Let V : E × Eⁿ → E be a continuous function such that

u(|x|) ≤ V(t, x) ≤ v(|x|),  t ∈ E, x ∈ Eⁿ,

and

V̇(t, φ(0)) ≤ −w(|φ(0)|)  whenever  V(t + θ, φ(θ)) ≤ V(t, φ(0)),  θ ∈ [−h, 0].

Then the solution x = 0 of (3.2.1) is uniformly stable. If w(s) > 0 for s > 0 and there is a continuous nondecreasing function p(s) > s for s > 0 such that

V̇(t, φ(0)) ≤ −w(|φ(0)|)  if  V(t + θ, φ(θ)) < p(V(t, φ(0))),  θ ∈ [−h, 0],

then the solution x = 0 of (3.2.1) is uniformly asymptotically stable. If u(s) → ∞ as s → ∞, then the solution x = 0 is globally uniformly asymptotically stable.
Example 3.2.1: Consider the scalar system

ẋ(t) = −a x(t) − b x(t − h).   (3.2.2)
Proposition 3.2.1 Assume that a, b are constants such that |b| ≤ a. Then the solution x = 0 of (3.2.2) is uniformly stable. If in addition there exists some q > 1 such that |b| q < a, then (3.2.2) is uniformly asymptotically stable.

Proof: Consider V = x²/2. Then

V̇(x(t)) = −a x²(t) − b x(t) x(t − h) ≤ −[a − |b|] x²(t)  if |x(t − h)| ≤ |x(t)|.

Thus V̇(x(t)) ≤ 0 if V(x(t − h)) ≤ V(x(t)). It follows from Theorem 3.2.1 that x = 0 is uniformly stable. For the second part, for some q > 1 choose p(s) = q² s. If p(V(x(t))) > V(x(t − h)), i.e., |x(t − h)| < q |x(t)|, then V̇(x(t)) ≤ −(a − |b| q) x²(t). Since a − |b| q > 0, the second part of the theorem guarantees that the solution x = 0 of (3.2.2) is uniformly asymptotically stable.

Example 3.2.2: Consider
ẋ(t) = y(t),
ẏ(t) = −a(t, y(t)) y(t) − f(x(t)) + g(x(t)) − g(x(t − h)).   (3.2.3)

Proposition 3.2.2 Assume in Equation (3.2.3) that:
(i) a(t, y) ≥ a > 0 for y ≠ 0, for some constant a;
(ii) f(x) sgn x → ∞ as |x| → ∞, and f(x) sgn x > 0;
(iii) |g′(x)| ≤ L ∀ x ∈ E, where Lhq < a for some q > 1.
Then (3.2.3) is uniformly asymptotically stable.

Proof: Consider

V(x, y) = ∫₀ˣ f(s) ds + y²/2,

so that along solutions

V̇(x(t), y(t)) = f(x(t)) y(t) + y(t) ẏ(t) = −a(t, y(t)) y²(t) + y(t)[g(x(t)) − g(x(t − h))].

Since g(x(t)) − g(x(t − h)) = ∫_{t−h}^{t} g′(x(s)) y(s) ds, the Razumikhin condition |y(s)| < q |y(t)| for s ∈ [t − h, t] gives

V̇(x(t), y(t)) ≤ −(a − Lhq) y²(t) ≤ 0

if a − Lhq > 0. The result follows.
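Proposition 3.2.1 is easy to check numerically. In the sketch below the values a = 1, b = 0.5, h = 1 are assumptions chosen so that |b| q < a holds with q = 1.5, not values from the text:

```python
# Sketch: Euler scheme for x'(t) = -a x(t) - b x(t-h), x = 1 on [-h, 0]
def delay_decay(a, b, h, T, dt=0.001):
    n = int(round(h / dt))
    xs = [1.0] * (n + 1)
    for _ in range(int(T / dt)):
        xs.append(xs[-1] + dt * (-a * xs[-1] - b * xs[-1 - n]))
    return xs[-1]

# |b| q < a with q = 1.5, so Proposition 3.2.1 predicts uniform asymptotic
# stability; the trajectory has essentially reached zero by t = 20
assert abs(delay_decay(1.0, 0.5, 1.0, T=20.0)) < 1e-3
```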
3.3 Lyapunov-Razumikhin Methods of Stability in Delay Equations [2]
The method of Lyapunov and its extensions by LaSalle and Hale require the construction of a Lyapunov functional, which can be very difficult in practice. These functionals are nonincreasing along solutions. Lyapunov functions, which are simpler and which need not be nonincreasing along solutions, can be employed instead. This is the fundamental idea of Haddock and Terjéki [2]: the development of an invariance principle using Razumikhin functions. In what follows, the power of the Lyapunov method is married to the simplicity of Razumikhin functions in the theory proposed by Haddock and Terjéki [2]. A quadratic Lyapunov function is shown to yield good stability criteria. Consider an autonomous retarded functional differential equation of the form
ẋ(t) = f(x_t),   (3.3.1)

where f : C → Eⁿ is continuous and maps closed and bounded sets into bounded sets. We also assume that solutions depend continuously on initial data.
Definition 3.3.1: Let φ ∈ C. An element ψ ∈ C belongs to the ω-limit set Ω(φ) of φ if the solution x(t, 0, φ) of (3.3.1) is defined on [−h, ∞) and there is a sequence {t_n}, t_n → ∞ as n → ∞, such that ‖x_{t_n}(φ) − ψ‖ → 0 as n → ∞, where x_t(φ)(s) = x(t + s, 0, φ). A set M ⊂ C is called an invariant set (with respect to Equation (3.3.1)) if for any φ ∈ M there is a solution x(φ) of (3.3.1) that is defined on (−∞, ∞) such that x₀ = φ and x_t ∈ M ∀ t ∈ (−∞, ∞). A set M ⊂ C is positively invariant if for each φ ∈ M, x_t(φ) ∈ M ∀ t ∈ [0, ∞). It is well known [8] that if x(φ) is a solution of (3.3.1) that is defined and bounded on [−h, ∞), then (i) the set {x_t(φ); t ≥ 0} is precompact, (ii) Ω(φ) is nonempty, compact, connected, and invariant, and (iii) x_t(φ) → Ω(φ) as t → ∞.
Haddock and Terjéki [2] applied a well-known Lyapunov function argument to these stated properties of ω-limit sets of solutions to obtain an invariance principle that utilizes the easier-to-construct Lyapunov function instead of the usual Lyapunov functional of Hale-LaSalle [8]. To state the result we need the following definitions:
Definition 3.3.2: A function V : Eⁿ → E that has continuous first partial derivatives is called a Lyapunov function or a Razumikhin function. Let V be a Lyapunov function. The upper right-hand derivative of V with respect to (3.3.1) is defined by

V̇[φ] = limsup_{r→0⁺} (1/r) ( V[φ(0) + r f(φ)] − V[φ(0)] ).   (3.3.2)

Because V has continuous first partial derivatives, we have

V̇[φ] = Σ_{i=1}^{n} (∂V(φ(0))/∂x_i) f_i(φ),   (3.3.3)

where f_i is the i-th component of f. We note that if x is a solution of (3.3.1), then V̇[x_t] = (d/dt) V(x(t)).
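The agreement of (3.3.2) and (3.3.3) can be verified by finite differences. The data below are assumptions for illustration: V(x) = x², f(φ) = −φ(0) − ½φ(−1), and the sample initial function φ(s) = 1 + s.

```python
# Sketch: compare the limit (3.3.2) with the gradient formula (3.3.3)
V = lambda x: x * x                           # Razumikhin function V(x) = x^2
f = lambda phi: -phi(0.0) - 0.5 * phi(-1.0)   # an assumed right-hand side
phi = lambda s: 1.0 + s                       # sample initial function on [-1, 0]

r = 1e-6
lim = (V(phi(0.0) + r * f(phi)) - V(phi(0.0))) / r   # definition (3.3.2)
grad = 2.0 * phi(0.0) * f(phi)                       # formula (3.3.3): V'(x) f
assert abs(lim - grad) < 1e-4
```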
Definition 3.3.3: Suppose V is a Lyapunov function and G ⊂ C a set. Let

E_V(G) = { φ ∈ G : max_{−h≤s≤0} V[x_t(φ)(s)] = max_{−h≤s≤0} V[φ(s)], ∀ t ≥ 0 }.

If, in addition, there is a δ > 0 such that the hypotheses of the invariance principle hold on {φ ∈ C : ‖φ‖ < δ}, then the solution x = 0 of Equation (3.3.1) is asymptotically stable. We note that if every solution of (3.3.1) is bounded on [−h, ∞) by a, say, then Corollary 3.3.1 gives a global asymptotic stability result. We now state a criterion for boundedness in terms of a Lyapunov function V.

Theorem 3.3.2 Suppose u, v, w : E⁺ → E⁺ are continuous, nondecreasing functions, u(s), v(s) positive for s > 0, u(0) = v(0) = 0. Let V : Eⁿ → E be a Lyapunov function such that
u(|x|) ≤ V(x) ≤ v(|x|)  ∀ x ∈ Eⁿ,   (3.3.4)

and

V̇[φ] ≤ −w(|φ(0)|)  for any φ ∈ C such that V[φ(0)] = max_{−h≤s≤0} V[φ(s)].   (3.3.5)

If u(s) → ∞ as s → ∞, the solutions of (3.3.1) are uniformly bounded.
Proof: Consider the function W[x_t] = max_{−h≤s≤0} V[x_t(s)]. Just as in Haddock and Terjéki [2], W is nonincreasing along solutions. Given α > 0, choose β > 0 such that u(β) = v(α). As a result, if ‖φ‖ ≤ α, the solution x(0, φ) with x₀ = φ satisfies |x(φ)(t)| ≤ β ∀ t ≥ 0, which implies uniform boundedness. Indeed,

u(|x(φ)(t)|) ≤ V[x(t)] ≤ W[x_t] ≤ W[φ] ≤ v(‖φ‖) ≤ v(α) = u(β).

Therefore |x(φ)(t)| ≤ β ∀ t ≥ 0, as asserted. It now follows from Theorem 3.3.2 and Corollary 3.3.1 that the following is true:
Corollary 3.3.2 Consider (3.3.1) and suppose
(i) f(0) = 0,
(ii) there are continuous nondecreasing functions u(s), v(s), positive for s > 0, with u(0) = v(0) = 0 and u(s) → ∞ as s → ∞.
Let V be a Lyapunov function such that
(iii) V[0] = 0, u(|x|) ≤ V(x) ≤ v(|x|), ∀ x ∈ Eⁿ,
(iv) V̇[φ] < 0 for any φ such that max_{−h≤s≤0} V[φ(s)] = V[φ(0)].
Then the solution x = 0 of (3.3.1) is globally asymptotically stable.

Suppose f has Fréchet derivative L at zero, so that for every ε > 0 there is a δ(ε) > 0 such that if ‖φ‖ < δ, then ‖g(φ)‖ < ε‖φ‖. Thus any f with Fréchet derivative L can be written as

f(φ) = L(φ) + f(0) + g(φ),   (3.3.9)

where for some ε > 0

|g(φ)| ≤ ε‖φ‖  for all φ ∈ Ω,   (3.3.10)

with Ω = {φ ∈ C : ‖φ‖ < δ}. We assume that f given in Equation (3.3.1) is of this form, where (3.3.10) holds for all φ ∈ C, since it is a global result that we need. If f(0) = 0 and g(φ) ≡ 0, we deduce conditions on the system's coefficients that guarantee global uniform stability. If f(0) = 0 but g(φ) satisfies (3.3.10) for all φ ∈ C, we deduce results on stability in the first approximation with acting perturbations. We also obtain the global decay (3.3.6) when f(0) ≠ 0.
Theorem 3.3.3 In Equation (3.3.1) assume that
(i) f(0) = 0.
(ii) There exists a constant, nonsingular, positive definite symmetric matrix H that satisfies

d₁ |x(t)|² ≤ xᵀ(t) H x(t) ≤ d₂ |x(t)|²,  ∀ t ∈ E,  d_i > 0, i = 1, 2, constants,

and the matrix B of (3.3.11) is negative definite (I is the identity matrix), and q ≥ 1.
(iii) f(φ) = L(φ) + g(φ), where L is given in (3.3.8), and

|g(φ)| ≤ ε‖φ‖,   (3.3.12)

ε a constant. Then every solution x(φ) of Equation (3.3.1) with x₀ = φ satisfies (3.3.6).
Proof: We take inner products in Equation (3.3.9) and use (3.3.12) to obtain the basic differential inequality. Since f(0) = 0, we have the required estimate. Now consider the quadratic function V(x) = xᵀ H x.
But if V[x(t)] = max_{−h≤s≤0} V[x_t(s)], then by Lemma 3.3.1 below we have ‖x_t‖ ≤ q |x(t)| for that t, and the decay estimate (3.3.6) follows with some k > 0, a > 0. In particular, when g(φ) ≡ 0, the conclusion (3.3.6) still holds.

Proof: This is an immediate consequence of Theorem 3.3.3, since g(φ) ≡ 0 implies that ε can be taken as zero, and Theorem 3.3.2 can be applied.
The estimate B < 0 may well be the best possible. The case

ẋ(t) = a x(t) + b x(t − h),

considered by Infante and Walker [6], can be used to test this statement. Indeed, assume H = 1. Then

B = 2a + 2|b| < 0,  or  a + |b| < 0.
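The criterion can be compared with direct simulation. The values a = −2, b = 1 below are assumptions chosen so that a + |b| < 0 (that is, B = 2a + 2|b| < 0); the zero solution then decays for every delay tried:

```python
# Sketch: Euler scheme for x'(t) = a x(t) + b x(t-h), x = 1 on [-h, 0]
def final_value(a, b, h, T, dt=0.005):
    n = int(round(h / dt))
    xs = [1.0] * (n + 1)
    for _ in range(int(T / dt)):
        xs.append(xs[-1] + dt * (a * xs[-1] + b * xs[-1 - n]))
    return xs[-1]

# a + |b| = -1 < 0: decay regardless of the size of the delay h
for h in (0.5, 2.0, 6.0):
    assert abs(final_value(-2.0, 1.0, h, T=120.0)) < 1e-2
```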
Recall that in [6] uniform asymptotic stability is valid for every h ∈ [0, ∞) if and only if a + |b| ≤ 0 and a + b < 0. If the system

ẋ(t) = A₀ x(t) + A₁ x(t − h)   (3.3.18)

is considered, and if H = I is taken, where the matrices are n × n, then the condition for global asymptotic stability is that B = A₀ + A₀ᵀ + 2‖A₁‖ I be negative definite, which is comparable to a recent result of Mori [7]. The next result can be proved by the methods of this section and the ideas of observability. The proof is contained in Chukwu [1].
Proposition 3.3.1 Consider the system

(S)  ẋ(t) = A₀ x(t) + Σ_{i=1}^{N} A_i x(t − i a),  a = h/N > 0.

Assume that
(i) B = H A₀ + A₀ᵀ H + Σ_{k=1}^{N} 2 q n M_k H is negative semidefinite for some positive definite symmetric matrix H, where M_k = max_{i,j} |(A_k)_{ij}|.
Then (S) is asymptotically stable if and only if
(ii) rank [Δ(λ), B] = n for each complex λ, where

Δ(λ) = λI − A₀ − Σ_{k=1}^{N} A_k e^{−λka},

and
(iii) for some complex number λ,

rank [A₀ − λI, A₁, …, A_N, B] = n.
Remark 3.3.1: Condition (i) of Proposition 3.3.1 ensures that (S) has uniformly bounded solutions.

Example 3.3.1: The scalar system

ẋ(t) = a₀ x(t) + Σ_{i=1}^{N} a_i x(t − ia)

has (by Proposition 3.3.1) the following property: if condition (i) holds, the scalar system is uniformly asymptotically stable if and only if the rank condition of (iii) holds for some complex number λ. This rank condition holds whenever a_N ≠ 0.
We now state a lemma that was used in the proof of the theorem.
Lemma 3.3.1 Let t ∈ (−∞, ∞). If V[x(t)] = max_{−h≤s≤0} V[x_t(s)], then ‖x_t‖ ≤ q |x(t)| for the same t, where q = a₂/a₁, a₁² is the least eigenvalue of H, and a₂² the greatest.

The zero solution of (4.1.2) is Weakly Uniformly Asymptotically Stable in the Large (WUASL) if ‖x_t(σ, φ)‖ → 0 as t → ∞ and for any a > 0 there exists a k(a) > 0 such that if ‖φ‖ ≤ a,

‖x_t(σ, φ)‖ ≤ k(a)‖φ‖,  ∀ t ≥ σ.

The zero solution is Uniformly Exponentially Asymptotically Stable in the Large (UEXASL) if there exist c > 0, k > 0 such that

‖x_t(σ, φ)‖ ≤ k e^{−c(t−σ)} ‖φ‖,  ∀ t ≥ σ.   (4.1.7)

The trivial solution is Eventually Uniformly Stable (EVUS) if for every ε > 0 there is α = α(ε) such that (4.1.2) is uniformly stable for t ≥ σ ≥ α(ε). Other eventual stability concepts can similarly be formulated by insisting that the initial time σ satisfy σ ≥ α. For example, the trivial solution of (4.1.2) is called Eventually Weakly Uniformly Asymptotically Stable in the Large (EVWUASL) if it is (EVUS) and if there exists α₀ > 0 such that for every σ ≥ α₀ and for every φ ∈ C, we have ‖x_t(σ, φ)‖ → 0 as t → ∞.
Global Stability of Functional Differential Equations of Neutral Type

4.2 Lyapunov Stability Theorem
Consider the homogeneous difference equation

D(t) x_t = 0,  t ≥ σ;  x_σ = φ,  D(σ)φ = 0,   (4.2.1)

which was discussed in Section 2.4. There it was stated that D is uniformly stable if there are constants β, α > 0 such that for every σ ∈ [τ, ∞), φ ∈ C the solution x(σ, φ) of (4.2.1) satisfies

‖x_t(σ, φ)‖ ≤ β e^{−α(t−σ)} ‖φ‖,  t ≥ σ.
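The definition can be illustrated on the scalar operator D(t)φ = φ(0) + cφ(−h) with |c| < 1 (an assumed example; the same operator appears in Example 4.2.1 below): iterating x(t) = −c x(t − h) forward yields geometric decay.

```python
# Sketch (assumed values c = 0.5, h = 1): the difference equation D x_t = 0,
# i.e. x(t) = -c x(t - h), propagated forward from x = 1 on [-h, 0]
c, dt = 0.5, 0.01
n = int(round(1.0 / dt))          # h / dt grid points per delay interval
xs = [1.0] * (n + 1)
for _ in range(10 * n):           # advance over ten delay intervals
    xs.append(-c * xs[-n])        # x(t) = -c x(t - h)
assert abs(xs[-1]) <= c ** 10     # |x(10h)| = |c|^10: geometric decay
```

The decay rate |c|^{t/h} matches the exponential bound β e^{−α(t−σ)} with α = −(ln |c|)/h.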
Cruz and Hale have given the following two lemmas [2]:

Lemma 4.2.1 If D is uniformly stable, then there are positive constants a, b, c, d such that for any h ∈ C([τ, ∞), Eⁿ), the space of continuous functions from [τ, ∞) to Eⁿ, and σ ∈ [τ, ∞), the solution x(σ, φ, h) of

D(t) x_t = h(t),  t ≥ σ,  x_σ = φ,   (4.2.2)

satisfies

‖x_t(σ, φ, h)‖ ≤ e^{−a(t−σ)} ( b‖φ‖ + c sup_{σ≤s≤t} |h(s)| ) + d sup_{σ≤s≤t} |h(s)|,  t ≥ σ.

Theorem 4.2.2 Suppose D is uniformly stable and the estimate of Lemma 4.2.1 holds with a > 0, c > 0. Then there are positive constants M, R and a continuous functional V : [τ, ∞) × C → E such that

|D(t)φ| ≤ V(t, φ) ≤ M‖φ‖,
V̇(t, φ) ≤ −(c/2) V(t, φ),   (4.2.4b)
|V(t, φ) − V(t, ψ)| ≤ R‖φ − ψ‖,  ∀ t ∈ [τ, ∞), φ, ψ ∈ C.
We now use Theorem 4.2.1 to study the system

(d/dt)[x(t) − A₋₁ x(t − h)] = f(t, x(t), x(t − h)),   (4.2.5)
where A₋₁ is an n × n symmetric constant matrix and f : [τ, ∞) × Eⁿ × Eⁿ → Eⁿ takes [τ, ∞) × (bounded sets of Eⁿ) × (bounded sets of Eⁿ) into bounded sets of Eⁿ. We also assume that f has continuous partial derivatives. If we let D(t)φ = φ(0) − A₋₁ φ(−h), then (4.2.5) is equivalent to

(d/dt) D(t) x_t = f(t, x(t), x(t − h)).   (4.2.6)

In (4.2.6) denote by D_u (u = 1, 2) the n × n symmetric matrices with entries ½(d_{uij} + d_{uji}), where

d_{1ij} = ∂f_i(t, φ(0), φ(−h)) / ∂φ_j(−h),  d_{2ij} = ∂f_i(t, φ(0), φ(−h)) / ∂φ_j(0).

Let

J_u = ½ (A D_u + D_uᵀ A),  u = 1, 2,   (4.2.7)
where A is a positive definite constant symmetric n × n matrix, and D_uᵀ is the transpose of D_u. Our stability study of (4.2.6) will be made under various assumptions on J_u.

Theorem 4.2.3 In (4.2.5), assume that
(i) Dφ = φ(0) − A₋₁ φ(−h) is uniformly stable,
(ii) f(t, 0, 0) = 0,
(iii) there are some positive definite constant symmetric matrices A and N such that the matrix

J = ( J₂ + N        ½P
      ½Pᵀ    −(N − A₋₁ N A₋₁) ),   (4.2.8a)

where P = J₁ + J₂ A₋₁ + N A₋₁ and the J_u are defined in (4.2.7), is negative definite,
(iv) A₋₁ is symmetric.
Then the solution x ≡ 0 of (4.2.5) is uniformly asymptotically stable.

Remark 4.2.3: J in (4.2.8) will be negative definite if
(i) the eigenvalues λ₂ⱼ of J₂ in (4.2.7) satisfy λ₂ⱼ ≤ −Λ₂ < 0 (j = 1, 2, …, n), ∀ φ(0) and t ≥ 0, where Λ₂ is a constant;
(ii) the eigenvalues λ₃ⱼ of the matrix J₂ + N satisfy λ₃ⱼ ≤ −Λ₃ < 0 (j = 1, 2, …, n), ∀ φ(0) and t ≥ 0, where Λ₃ is a constant; and
(iii) the eigenvalues λ₄ⱼ of N − A₋₁ N A₋₁ satisfy λ₄ⱼ ≥ Λ₄ (j = 1, …, n), where 2Λ₃Λ₄ > ‖J₁ + J₂A₋₁ + NA₋₁‖ and ‖·‖ denotes the matrix norm.
Proof: Let Dφ = φ(0) − A₋₁ φ(−h) and define the Lyapunov functional

V(t, φ) = ⟨Dφ, A Dφ⟩ + ∫_{−h}^{0} ⟨φ(s), N φ(s)⟩ ds,

where A and N are positive definite symmetric n × n constant matrices and ⟨·,·⟩ denotes the inner product in Eⁿ. Because A, N are positive definite symmetric matrices, there are constants a_i, n_i, i = 1, 2, bounding V above and below. Thus the first inequality of Theorem 4.2.1 is satisfied. That the second inequality of (4.2.4) is also satisfied follows from the following calculations: Because A is symmetric, we have

V̇(t, φ) = 2⟨Dφ, A f(t, φ(0), φ(−h))⟩ + ⟨φ(0), N φ(0)⟩ − ⟨φ(−h), N φ(−h)⟩,

since f(t, 0, 0) = 0, and we have used Lemma 3.1.2. Note that φ(0) = Dφ + A₋₁ φ(−h), so that
since J is negative definite. It now follows that (4.2.4) of Theorem 4.2.1 is completely verified. Because D is assumed to be uniformly stable, all the requirements of Theorem 4.2.1 are met: we conclude that the solution x = 0 of (4.2.5) is uniformly asymptotically stable. We now consider some examples.

Example 4.2.1: Consider the scalar equation
(d/dt)[x(t) + c x(t − h)] + a x(t) = 0,

where a and c are constants with a > 0, |c| < 1, and a(1 − c²) > c. Lemma 3.1 of Cruz and Hale [2] implies that the operator Dφ = φ(0) + cφ(−h) is uniformly stable. We now apply Theorem 4.2.3 with A₋₁ = −c, A = 1, J₁ = 0, J₂ = −a, N = a/2, J₂ + N = −a/2, and

P = J₁ + J₂A₋₁ + NA₋₁ = ac − ac/2 = ac/2,

so that J is negative definite. We can also use Remark 4.2.3 with Λ₂ = a, Λ₃ = a/2, Λ₄ = (a/2)(1 − c²), so that

2Λ₃Λ₄ = 2(a/2)(a/2)(1 − c²) = a²(1 − c²)/2 > ac/2,

since a(1 − c²) > c.
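The closing inequality of the example can be checked for concrete numbers. The values a = 1, c = 0.3 are assumptions chosen so that a(1 − c²) > c:

```python
# Sanity check of the numbers in Example 4.2.1 (illustrative values)
a, c = 1.0, 0.3
assert a > 0 and abs(c) < 1 and a * (1 - c * c) > c   # hypotheses
lam3 = a / 2.0                    # Λ3:  J2 + N = -a/2
lam4 = (a / 2.0) * (1 - c * c)    # Λ4:  N - A_{-1} N A_{-1} = (a/2)(1 - c^2)
P = a * c - a * c / 2.0           # J1 + J2 A_{-1} + N A_{-1} = ac/2
assert 2 * lam3 * lam4 > abs(P)   # the condition of Remark 4.2.3
```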
Example 4.2.2: Consider the equation

(d/dt)[x(t) + c(t) x(t − h)] + a x(t) + b(t) x(t − h) = 0,

where a > 0 is a constant, c(t), b(t) are continuous for t ≥ 0, and there is a δ > 0 such that c²(t) ≤ 1 − δ. Let D(t)φ = φ(0) + c(t)φ(−h), so that by a remark in Cruz and Hale [2, p. 338], D(t) is uniformly stable. Observe that f(t, x(t), x(t − h)) = −a x(t) − b(t) x(t − h). On taking A = 1, A₋₁ = −c(t), N = a/2, J₁ = −b(t), J₂ = −a = −Λ₂, J₂ + N = −a/2 = −Λ₃, J₂A₋₁ = ac(t), NA₋₁ = −ac(t)/2, and Λ₄ = N − A₋₁NA₋₁ = a/2 − ac²(t)/2, we observe that

2Λ₃Λ₄ = a²(1 − c²(t))/2 ≥ a²δ/2 > |J₁ + J₂A₋₁ + NA₋₁| = |b(t) − ac(t)/2|

ensures uniform asymptotic stability. This inequality will hold if a²δ/2 > |b(t) − ac(t)/2| for all t ≥ 0.
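Example 4.2.2 can be simulated directly by stepping the neutral equation through u(t) = x(t) + c x(t − h) = D(t)x_t. The constant coefficients a = 2, b = 0.5, c = 0.3, h = 1 below are assumed values, chosen so that δ = 1 − c² = 0.91 and a²δ/2 = 1.82 > |b − ac/2| = 0.2:

```python
# Sketch: d/dt [x(t) + c x(t-h)] = -a x(t) - b x(t-h), with x = 1 on [-h, 0]
a, b, c, h, dt = 2.0, 0.5, 0.3, 1.0, 0.001
n = int(round(h / dt))
xs = [1.0] * (n + 1)
u = xs[-1] + c * xs[-1 - n]          # u(t) = x(t) + c x(t-h) = D(t) x_t
for _ in range(int(30.0 / dt)):
    u += dt * (-a * xs[-1] - b * xs[-1 - n])   # u'(t) = -a x(t) - b x(t-h)
    xs.append(u - c * xs[-n])                  # recover x(t+dt) = u - c x(t+dt-h)
assert abs(xs[-1]) < 1e-3            # uniform asymptotic stability, as predicted
```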
Example 4.2.3: Consider the n-dimensional autonomous neutral equation

(d/dt)[x(t) − A₋₁ x(t − h)] = A₀ x(t) + A₁ x(t − h),

where x(t) ∈ Eⁿ; A₋₁, A₀, A₁ are constant n × n matrices, h > 0. Let D₁ = ½(A₁ + A₁ᵀ), D₂ = ½(A₀ + A₀ᵀ), J_i = ½(A D_i + D_iᵀ A), i = 1, 2, with A a positive definite matrix. Assume ‖A₋₁‖ < 1 and A₋₁ symmetric. For some constant positive definite symmetric matrix N, Remark 4.2.3 yields the required condition for uniform asymptotic stability.
4.3 Perturbed Systems
The converse Theorem 4.2.2 will now be employed to prove stability results for the system (4.1.3).
Theorem 4.3.1 In System (4.1.3), assume that
(i) |g(t, φ)| ≤ N‖φ‖ and |f(t, φ) − f(t, ψ)| ≤ N‖φ − ψ‖, ∀ φ, ψ ∈ C, t ∈ [τ, ∞); and D is uniformly stable.
(ii) The zero solution of (4.1.2) is (UEXASL), so that every solution x(t, φ) satisfies (4.1.7) for some constants c > 0, k > 0.
(iii) The function F in (4.1.3) satisfies F = F₁ + F₂ with |F₁(t, φ)| ≤ v(t)|D(t)φ|, ∀ t ≥ σ, φ ∈ C, where there is a constant ξ > 0 such that for any ρ > 0

(1/ρ) ∫_t^{t+ρ} v(s) ds < ξ,  t ≥ σ,   (4.3.1)

and for some δ > 0 a corresponding smallness condition holds for F₂.
Then every solution x(σ, φ) of (4.1.3) satisfies x_t(σ, φ) → 0 as t → ∞, uniformly with respect to σ ∈ [τ, ∞), and φ in closed bounded sets. In particular, the null solution of (4.1.3) is WUASL.
Proof: The assumptions we have made in Theorem 4.2.2 guarantee the existence of a Lyapunov functional V with the properties of Theorem 4.2.2. Suppose y = y(σ, φ) and x = x(σ, φ) are solutions of (4.1.3) and (4.1.2), respectively, with initial value φ at σ. If V̇₍₄.₁.₃₎ and V̇₍₄.₁.₂₎ are the derivatives of V along solutions of the systems (4.1.3) and (4.1.2), respectively, then the relations (4.2.4b) yield the inequality (4.3.2). Because g(t, φ) is uniformly nonatomic at zero and (4.1.5) holds, there is an r₀ > 0 such that

‖y_{σ+r} − x_{σ+r}‖ ≤ (1/(1 − c(r₀))) ∫_σ^{σ+r} |f(s, y_s) − f(s, x_s) + F(s, y_s)| ds

for 0 < r ≤ r₀. With this estimate in (4.3.2) we obtain the corresponding inequality ∀ t ≥ σ, φ ∈ C, where Q = 1/(1 − c(r₀)). It follows from (4.2.4b) that the decay of V can be estimated. We now choose some ξ < c/8Q so that from (4.3.1)
∫_σ^t (−c/4 + Q v(s)) ds ≤ (−c/4 + Qξ)(t − σ) ≤ −(c/8)(t − σ).

Using all these in (4.3.4), one obtains, with r(t) = V(t, x_t(σ, φ)), a scalar differential inequality whose solution decays exponentially. Since (4.2.4b) is valid, this last inequality leads to the decay of ‖x_t(σ, φ)‖. Because D is assumed uniformly stable, Lemma 4.2.2 can be invoked to ensure that x(σ, φ)(t) → 0 as t → ∞, uniformly with respect to σ ∈ [τ, ∞), and φ in closed bounded sets. To conclude the proof we verify uniform stability as follows: Because D is uniformly stable, there are positive constants a₁, a₂, a₃, a₄ for which the corresponding solution bounds hold. For any ε > 0 choose δ = (a₂ + a₃M + a₄M)⁻¹ ε, and observe that if ‖φ‖ < δ, then ‖x_t(σ, φ)‖ ≤ a₂δ + a₃Mδ + a₄Mδ < ε for t ≥ σ. This proves that x = 0 is uniformly stable, so that the zero solution of (4.1.3) is WUASL.
Remark 4.3.1: Condition (4.3.1) can be replaced by the assumption

∫_σ^∞ v(s) ds < ∞   (4.3.5)

to deduce the same results. When f(t, φ) ≡ A(t, φ) is linear in φ, one can obtain a generalization of the famous theorem of Lyapunov on stability with respect to the first approximation. It is the global stability result that is most useful when dealing with controllability questions of neutral systems with limited power. Consider the linear system

(d/dt) D(t) x_t = A(t, x_t),   (4.3.6)

and its perturbation

(d/dt)[D(t) x_t − G(t, x_t)] = A(t, x_t) + f(t, x_t),   (4.3.7)

where A(t, φ), G(t, φ) are linear in φ. The following result is valid:
Theorem 4.3.2 In Equations (4.3.6) and (4.3.7) assume that
(i) the linear system (4.3.6) is uniformly asymptotically stable, so that for some k ≥ 1, α > 0 the solution x(σ, φ) of (4.3.6) satisfies

‖x_t(σ, φ)‖ ≤ k e^{−α(t−σ)} ‖φ‖,  t ≥ σ, φ ∈ C.

(ii) The function F = F₁ + F₂ satisfies |F₁(t, φ)| ≤ v(t)|D(t)φ|, where

ξ = α(1 − k(λ + M₀)) / (2k[1 + k(λ + M₀)]),

and for each ρ > 0

(1/ρ) ∫_t^{t+ρ} v(s) ds < ξ,  ∀ t ≥ σ.   (4.3.8)

(iii) The function G = G₁ + G₂ satisfies the corresponding bounds for t ≥ σ, φ ∈ C, where n(t) is continuous and bounded with a bound M₀ such that

n(t) ≤ M₀,  t ≥ σ,  k(λ + M₀) < 1.

Then, for some N, every solution x(σ, φ) of (4.3.7) satisfies the corresponding decay estimate. The proof is contained in [1, p. 866].
Remark 4.3.2: The requirement of (4.3.8) can be replaced by the hypothesis

∫_0^∞ v(s) ds < ∞.

It is possible to weaken the conditions on F to deduce global eventual weak stability. Because of its importance it is now stated.

Theorem 4.3.3 For the systems (4.3.6) and (4.3.7) assume that
(i) Equation (4.3.6) is uniformly asymptotically stable.
(ii) The function F in (4.3.7) can be written as F = F₁ + F₂, where the corresponding bounds hold for all t ≥ σ ≥ T₀, for some sufficiently large T₀ and all φ ∈ C, where the bound is a sufficiently small number, and where

ξ = α(1 − k(λ + M₀)) / (4k[1 + k(λ + M₀)]).
Stability and Time-Optimal Control of Hereditary Systems
100
Thus the solution x = 0 of (4.3.7) is (EVWUASL). A proof is indicated in [1, p. 872].

Example 4.3.1: Delay Logistic Equation of Neutral Type [5]. Consider the neutral logistic system (4.3.9), where h₁, h₂, K, c, r are constants ≥ 0, with r, h₂, K > 0. We assume positive initial conditions. If we let

N(t) = K[1 + x(t)],
A(t) = r[1 + x(t)],
B(t) = 2ck A(t) / [1 + k² x²(t − h₂)],

then (4.3.9) is equivalent to

(d/dt)[x(t) + ∫_{t−h₁}^{t} A(s + h₁) x(s) ds] = −A(t + h₁) x(t) + B(t) x(t − h₂).   (4.3.10)
Set

a* = (1 + c) exp[r(1 + c)h₁],
b* = (1 − c) exp[h₁ r(1 − c)],

and let β₁ and β₂ denote the positive constants built from r, c, k, a*, b*, h₁, h₂ (their explicit expressions involve ra*, b* exp(−a*), and 2cka*r).
For this we use the Lyapunov functional

V(t) = V(t, x(t)),

and calculate the upper right derivative D⁺V of V along solutions of (4.3.10).

Proposition 4.3.1 Assume that r, h₁, k ∈ (0, ∞), h₂ ∈ [0, ∞),

β₁ > 0, β₂ > 0,   (4.3.11)
2ckra* < 1,   (4.3.12)
0 < c < 1.   (4.3.13)

Then every positive solution of (4.3.9) satisfies N(t) → K as t → ∞.

Proof: We need only show that if (4.3.11)–(4.3.13) hold, an arbitrary solution of (4.3.10) satisfies

x(t) → 0 as t → ∞.   (4.3.14)

The needed details are contained in [5].
4.4 Perturbations of Nonlinear Neutral Systems [6]
Consider a nonlinear system that is more general than (4.1.3), namely

(d/dt) D(t, x_t) = f(t, x_t) + g(t, x_t).   (4.4.1)

We assume the following as basic: Let Λ ⊂ C be an open set, I₁ ⊂ E an open interval, Γ = I₁ × Λ. Denote by E^{n×n} the set of n × n real matrices. Suppose D, f : Γ → Eⁿ are continuous functions. We assume also that 0 ∈ Λ. The main result in this section is stated as follows:
Theorem 4.4.1 Assume
(i) D, f in (4.4.1) satisfy the following hypotheses:
(a) The Fréchet derivatives of D, f with respect to φ, denoted by D_φ, f_φ respectively, exist and are continuous on Γ, as well as D_{φφ}; and
(b) for each (t, φ) ∈ Γ, ψ ∈ C, write

D_φ(t, φ)[ψ] = A(t, φ) ψ(0) − ∫_{−h}^{0} d_θ μ(t, φ, θ)[ψ(θ)]

for some functions A, μ, with A continuous, μ(t, φ, ·) of bounded variation on [−h, 0], and such that the map (t, φ) → ∫_{−h}^{0} d_θ μ(t, φ, θ)[ψ(θ)] is continuous for each ψ ∈ C.
(c) Assume that D is uniformly atomic at zero on compact sets K ⊂ Γ, i.e., det A(t, φ) ≠ 0 and

| ∫_{−s}^{0} d_θ μ(t, φ, θ)[ψ(θ)] | ≤ ℓ(s)‖ψ‖,  s ∈ [0, h],

for all (t, φ) ∈ K, for some function ℓ : [0, h] → [0, ∞) that is continuous and nondecreasing, with ℓ(0) = 0.
(ii) g is Lipschitzian in φ on compact subsets of Γ.
(iii) f(t, 0) = 0 for all t ∈ I₁.
(iv) For each (σ, φ) ∈ Γ,

‖T(t, σ, φ)‖ ≤ exp(a(t) + b(σ)),  t ≥ σ,

where a(t), b(t) are continuous functions from E⁺ = [0, ∞) into E⁺, i.e., are elements of C(E⁺, E⁺). Here T(t, σ; φ) is the solution operator associated with the variational equation

(d/dt) D_φ(t, x_t(σ, φ))[z_t] = f_φ(t, x_t(σ, φ))[z_t],  t ∈ [σ, σ + α).

(v) For each (σ, φ) ∈ Γ,

|g(t, ψ)| ≤ Σ_{i=1}^{N} c_i(t) ‖ψ‖^{r_i} + c_{N+1}(t),  t ≥ σ,   (4.4.2)

where r_i ∈ (0, 1], c_j(t) ∈ C(E⁺, E⁺), j = 1, 2, …, N + 1.
(vi) c_j(s) e^{r_j a(s) + b(s)}, c_{N+1}(s) e^{b(s)} ∈ L₁(τ, ∞), j = 1, …, N.
(vii) a(t) → −∞ as t → ∞.
Then every solution of (4.4.1) satisfies ‖x_t(σ, φ)‖ → 0 as t → ∞.

The proof is contained in Chukwu and Simpson [6, p. 57]. It uses the nonlinear variation of parameters formula (4.4.3), in which y_t(σ, φ) denotes the solution of the unperturbed equation

(d/dt) D(t, y_t) = f(t, y_t).   (4.4.4)

This formula (4.4.3) holds when the solution of (4.4.1) is unique.
REFERENCES
1. E. N. Chukwu, "Global Asymptotic Behavior of Functional Differential Equations of the Neutral Type," Nonlinear Analysis: Theory, Methods and Applications 6 (1981), 853–872.
2. M. A. Cruz and J. K. Hale, "Stability of Functional Differential Equations of Neutral Type," J. Differential Equations 7 (1970), 334–355.
3. J. K. Hale, Ordinary Differential Equations, Interscience, New York, 1969.
4. J. K. Hale, Theory of Functional Differential Equations, Applied Mathematical Sciences 3, Springer-Verlag, New York, 1977.
5. K. Gopalsamy, "Global Attractivity of Neutral Logistic Equations," in Differential Equations and Applications, edited by A. R. Aftabizadeh, Ohio University Press, Athens, 1989.
6. E. N. Chukwu and H. C. Simpson, "Perturbations of Nonlinear Systems of Neutral Type," J. Differential Equations 82 (1989), 28–59.
Chapter 5
Synthesis of Time-Optimal and Minimum-Effort Control of Linear Ordinary Systems

5.0 Control of Ordinary Linear Systems
The material in this section is introductory and deals with ordinary differential equations. Our aim is to solve the optimal problem for linear ordinary systems; this will then become the basis for generalization to hereditary systems. We shall first introduce in this section the theory of controllability of linear ordinary differential equations

ẋ(t) = A₀(t) x(t) + B(t) u(t),   (5.0.1)

where A₀(t), B(t) are analytic n × n, n × m matrices defined on [0, ∞). The admissible controls Q* are at first required to be square integrable on finite intervals. We need the following controllability concepts:
Definition 5.0.1: System (5.0.1) is controllable on an interval [t₀, t₁] if, given x₀, x₁ ∈ Eⁿ, there is a control u ∈ Q* such that the solution x(t, t₀, x₀, u) of (5.0.1) with x(t₀, t₀, x₀, u) = x₀ satisfies x(t₁, t₀, x₀, u) = x₁. It is called controllable at time t₀ if it is controllable on [t₀, t₁] for some t₁ > t₀. If it is controllable at each t₀ ≥ 0, we say it is controllable. System (5.0.1) is said to be fully controllable at t₀ if it is controllable on [t₀, t₁] for each t₁ > t₀. It is said to be fully controllable if it is fully controllable at each t₀ ≥ 0. If u is a control, the solution of (5.0.1) corresponding to this u is given by the variation of constants formula
x(t, t₀, x₀, u) = X(t) X⁻¹(t₀) x₀ + X(t) ∫_{t₀}^{t} X⁻¹(s) B(s) u(s) ds,   (5.0.2)

where X(t) is a fundamental matrix solution of

ẋ(t) = A₀(t) x(t).   (5.0.3)

Define

Y(s) = X⁻¹(s) B(s),
and define the operator Γ by

Γ = −A₀ + D,   (5.0.4)

where D = d/dt. Also define the matrix M as follows:

M(t₀, t) = ∫_{t₀}^{t} Y(s) Yᵀ(s) ds = ∫_{t₀}^{t} X⁻¹(s) B(s) Bᵀ(s) X⁻¹ᵀ(s) ds,   (5.0.5)

where Xᵀ is the transpose of X. Then the following theorem is well known:

Theorem 5.0.1 The following are equivalent:
(i) M(t₀, t₁) is nonsingular for all t₁ > t₀ and all t₀ ≥ 0.
(ii) System (5.0.1) is fully controllable.
Also, if A₀(t) and B(t) are analytic on (t₀, t₁), t₁ > t₀ ≥ 0, then (5.0.1) is fully controllable at t₀ if and only if for each t₁ > t₀ there exists a t ∈ (t₀, t₁) such that

rank [B(t), ΓB(t), …, Γ^{n−1}B(t)] = n.   (5.0.6)

Furthermore, if A₀ and B are constant, then (5.0.1) is fully controllable if and only if

rank [B, A₀B, …, A₀^{n−1}B] = n.   (5.0.7)
From the definition we note that full controllability at t₀ implies that one can start at t₀ and, using an admissible control, reach any point x₁ in an arbitrarily short time (any t₁ > t₀). If the matrices are constant and the system is fully controllable, then (5.0.7) holds no matter the time, and this contrasts with the situation in the delay case (Section 6.1).
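The constant-coefficient rank test (5.0.7) is mechanical to implement. The sketch below (helper names are my own) builds [B, A₀B, …, A₀^{n−1}B] in pure Python and checks its rank by Gaussian elimination; the controllable example is the harmonic-oscillator pair used in Example 5.1.1 below.

```python
# Sketch of the Kalman rank test (5.0.7)
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def rank(M, eps=1e-9):
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][col]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][col]) > eps:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def fully_controllable(A0, B):
    n = len(A0)
    block, cols = B, [row[:] for row in B]
    for _ in range(n - 1):
        block = matmul(A0, block)                        # next power A0^k B
        cols = [r1 + r2 for r1, r2 in zip(cols, block)]  # append its columns
    return rank(cols) == n

# harmonic oscillator pair: rank [B, A0 B] = 2, controllable
assert fully_controllable([[0.0, 1.0], [-1.0, 0.0]], [[0.0], [1.0]])
# A0 = I gives [B, B], rank 1: not controllable
assert not fully_controllable([[1.0, 0.0], [0.0, 1.0]], [[0.0], [1.0]])
```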
5.1 Synthesis of Time-Optimal Control of Linear Ordinary Systems
Minimum-Time Problem. Consider the following problem: Find a control u that minimizes t₁ subject to

ẋ(t) = A x(t) + B u(t),  x(0) = x₀,  x(t₁) = 0,   (5.1.1)

where A is an n × n and B an n × m matrix, and where

u ∈ U = { u measurable : u(t) ∈ Eᵐ, |u_j(t)| ≤ 1, j = 1, …, m }.

Thus we want explicitly to find a function f(·, x₀) : [0, t₁] → U, f(t, x₀) = u(t), that drives System (5.1.1) from x₀ to 0 in minimum time t₁. The following is well known:
Theorem 5.1.1 [1] In (5.1.1) assume that
(i) rank [B, AB, …, A^{n−1}B] = n.
(ii) The eigenvalues of A have no positive real part.
(iii) For each j = 1, …, m, the vectors [b_j, Ab_j, …, A^{n−1}b_j] are linearly independent.
Then there exists a unique optimal control u* of the form

u*(t) = u(t, x₀) = sgn[cᵀ e^{−At} B]   (5.1.2)

almost everywhere, which drives x₀ to zero in minimum time. Thus if y(t) is the Eᵐ-vector cᵀ e^{−At} B, the so-called index of the control system (5.1.1), then u_j*(t) = sgn y_j(t), j = 1, …, m, where

sgn y_j = 1 if y_j > 0,  −1 if y_j < 0.

It is undefined when y_j(t) = 0. Because (5.1.1) is normal in the sense that (iii) holds, y_j(t) has only isolated zeros, where the control switches in the following sense:

Definition 5.1.1: If t > 0 and u_j*(t − 0; x₀) u_j*(t + 0; x₀) < 0, then t is a switch time of u* (or u* switches at t). Thus each point of discontinuity of u* is called a switch time, and if u*(s) = u₁ on some interval [t₁, t) and u*(s) = u₂ on some (t, t₂), t₁ < t < t₂, where u₁ ≠ u₂ in U, then we say that u* switches from u₁ to u₂ at time t. It is clear that u* has only a finite number of switch points on [0, t₁]. If (5.1.2) is valid, then c is said to generate the control u*, and −c to generate −u* (which is the optimal control function for −x₀). If System (5.1.1) satisfies (iii) of Theorem 5.1.1, then it is said to be normal. Let
R(t) = { ∫_0^t e^{−As} B u(s) ds : u(s) ∈ U, u measurable }

be the reachable set at t ≥ 0, and R = ∪_{t>0} R(t) the reachable set. Note that R(t) is related to C(t), the set of points that can be steered to zero by admissible controls, as follows:

C(t) = e^{At} R(t).

If x is in C = ∪_{t≥0} C(t) and u* is its unique time-optimal control function, then the function

f(x) = u*(0; x),  x ≠ 0,  f(0) = 0,
defines a map f : C → U that is called the time-optimal feedback control or the synthesis function. Once f is found, the time-optimal problem is completely solved. For example, consider the system

ẋ(t) = A x + B f(x),  x(0) = x₀.   (5.1.3)
If x(t, x₀) is a solution of (5.1.3), then x describes the optimal trajectory from x₀ to the origin, and u*(t, x₀) = f(x(t, x₀)) is the time-optimal control function for x₀. We shall now describe how Theorem 5.1.1 helps us to construct optimal feedback control for the simple harmonic oscillator.

Example 5.1.1: The system

ẍ(t) + x(t) = u(t)   (5.1.4a)

is equivalent to

ẋ = A x + B u,  A = [0 1; −1 0],  B = [0; 1],  x = [x₁; x₂].   (5.1.4b)

It is controllable, normal, and satisfies condition (ii) of Theorem 5.1.1. Because

e^{At} = [cos t  sin t; −sin t  cos t],  e^{−At} B = [−sin t; cos t],

we have

cᵀ e^{−At} B = −c₁ sin t + c₂ cos t,  c = (c₁, c₂),

where c₁² + c₂² ≠ 0. Therefore the (unique) optimal control is given by

u*(t) = sgn(sin(t + δ)),  tan δ = −c₂/c₁,

for some −π ≤ δ ≤ π, since −c₁ sin t + c₂ cos t = α sin(t + δ), α > 0. We now derive optimal feedback control that drives (x₀, y₀) ∈ E² to (0, 0) in minimum time t₁. For each initial (x₀, y₀) ∈ E² there exists an optimal control
Synthesis of Time-Optimal and Minimum-Effort Control
u*(t) uniquely determined by u*(t) = sgn[sin(t + δ)]. When u = ±1, the trajectory in E^2 is a circle with center (±1, 0) and a clockwise direction of motion for increasing t. (In time t the trajectory moves through an arc of the circle of angle t.) Indeed, when u = +1,

ẋ_1 = x_2, ẋ_2 = −x_1 + 1, dx_1/dx_2 = x_2/(−x_1 + 1).

Hence (1 − x_1)dx_1 = x_2 dx_2, and x_2^2 + (1 − x_1)^2 = a^2. This is a circle centered at (1, 0). Note that 1 − x_1 = a cos t, x_2 = a sin t. When u = −1, x_2^2 + (−1 − x_1)^2 = a^2, which is a circle centered at (−1, 0), where x_1 = a cos t − 1, x_2 = −a sin t.

One way of discovering an optimal control law is to begin at zero and move backward in time until we hit (x_0, y_0) at time −t_1. Since we are moving in circles, the backward motion is counterclockwise. Begin with u = 1, when 0 < δ ≤ π, and move with decreasing t away from the origin in a counterclockwise direction along a semicircle with center (1, 0). At P_1 on this semicircle, t = −δ, and sin(t + δ) changes sign, so that u switches to −1. With this value the trajectory is a circle with center (−1, 0) that passes through P_1. For exactly π seconds we move counterclockwise along this circle and describe a semicircle. (Recall that a full circle takes 2π seconds.) After π seconds, P_2 is reached. This point is a reflection of P_1 onto the circle with radius 1 and center (−1, 0). At P_2 the control becomes u = +1, and the optimal trajectory becomes a circle with center (1, 0) that passes through P_2. After π seconds on this circle we reach P_3, and the control switches to u = −1. This is the way that an optimal trajectory passing through (x_0, y_0) is generated. Because the system is normal, this trajectory is unique.

There is another route. Begin at zero with u = −1, when −π ≤ δ < 0, until at Q_1, t = −δ − π and sin(t + δ) = 0. This is a circle with center (−1, 0), and motion is counterclockwise until we switch at Q_1 to u = 1. For π seconds the motion is a semicircle with center (1, 0), and at Q_2 we switch to u = −1, etc. Clearly, the switching occurs along two arcs Γ_+ and Γ_-. The plane is divided into two sets M_1 and M_2, and M_0 = {0}. Optimal feedback control is described as follows:
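The circle geometry above can be checked numerically. The following is a minimal sketch (not from the text): integrating ẋ_1 = x_2, ẋ_2 = −x_1 + u with a classical Runge–Kutta step and verifying that for u = +1 the quantity (1 − x_1)^2 + x_2^2 stays constant along the trajectory.

```python
def rk4_step(f, x, h):
    # one classical fourth-order Runge-Kutta step for x' = f(x)
    k1 = f(x)
    k2 = f([x[i] + 0.5*h*k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5*h*k2[i] for i in range(2)])
    k4 = f([x[i] + h*k3[i] for i in range(2)])
    return [x[i] + (h/6)*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

def oscillator(u):
    # x1' = x2, x2' = -x1 + u  (the harmonic oscillator of Example 5.1.1)
    return lambda x: [x[1], -x[0] + u]

# With u = +1 the trajectory stays on the circle (1 - x1)^2 + x2^2 = a^2.
x = [0.0, 0.0]
invariant0 = (1 - x[0])**2 + x[1]**2
for _ in range(1000):
    x = rk4_step(oscillator(+1.0), x, 0.005)
invariant1 = (1 - x[0])**2 + x[1]**2
print(abs(invariant1 - invariant0))  # ~0: the motion is a circle about (1, 0)
```

The same check with u = −1 and the invariant (1 + x_1)^2 + x_2^2 confirms the circle about (−1, 0).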
f(x, ẋ) = −1 if (x, ẋ) is above the semicircles or on the arc Γ_- (that is, in M_2); +1 if (x, ẋ) is below the semicircles or on Γ_+ (that is, in M_1).

The optimal trajectories are just solutions of ẍ + x = f(x, ẋ) with initial point (x_0, y_0) and final point (0, 0) at time t_1. See Figure 5.2.1. From this analysis we deduce that there exists a unique function f : E^2 → E such that

ẋ(t) = Ax(t) + Bf(x(t)) (5.1.5)

describes an optimal trajectory of reaching 0 in minimum time. In (5.1.5), A and B are as described following (5.1.4). The semicircles constitute the switching locus: Γ_+ consists of arcs of circles of radius 1 with centers (1, 0), (3, 0), ...; Γ_- consists of arcs of circles of radius 1 with centers (−1, 0), (−3, 0), .... They are traversed counterclockwise. These are described as "two one-manifolds." Above the semicircles is the subset M_2 of E^2, and below is the subset M_1 of E^2. These portions of the plane E^2 in some ε-neighborhood of zero are called "terminal two-manifolds." The synthesis of optimal feedback control is realized as follows: Given any initial state (x_0, ẋ_0) in some terminal manifold M_k, the optimal feedback function remains constant and the optimal solution remains in M_k until the instant the optimal solution enters M_{k−1}, when a switching occurs. After this it continues to move within M_{k−1} until it reaches the origin, M_0, and the motion terminates.

For multidimensional controls (m > 1) the feedback function can be quite complicated. But Yeung [8] has proved that there are precisely 2m^{n−1} "terminal n-manifolds," and the switching locus consists of a finite number of (analytic) k-manifolds, 0 ≤ k ≤ n − 1: one for k = 0 and at least two for k ≥ 1. Just as before, motion continues within M_{k−1} until it reaches M_{k−2} and the optimal feedback function switches again. The process terminates when the optimal solution reaches the origin, M_0. This description of the general situation is valid for controllable, strictly normal systems in an ε-neighborhood of the origin, i.e., in Int R(ε), ε > 0. In the case of the simple harmonic oscillator, ε can be taken to be π, and R(π) contains a circle of
radius 2 about the origin. If (x_0, ẋ_0) ∈ M_2, one uses the control u = −1 to describe a circle with center (−1, 0). This motion hits Γ_+ at some point. The control switches to u = 1, and motion continues on Γ_+, which is part of M_1, until it terminates at the origin, which is M_0. An analogous situation holds for (x_0, ẋ_0) ∈ M_1. Note carefully that in our circle of radius 2 there is at most one switch. In the next section we shall present Yeung's description of the general situation.
5.2 Construction of Optimal Feedback Controls

In this section, in an ε-neighborhood of the origin, we construct an optimal feedback control for controllable, autonomous, strictly normal systems. Using the switching times of the controls, which are among the roots of the index y(t) = c^T e^{-At} B of the control system on the interval [0, ε], we define terminal manifolds. Optimal feedback is constructed by identifying its value on each manifold as a constant, a unique vertex of the unit cube.
Definition 5.2.1: The system

ẋ(t) = Ax(t) + Bu(t) (5.2.1)

is strictly normal if for any integers r_j ≥ 0 satisfying Σ_{j=1}^{m} r_j = n, the vectors A^i b_j with j = 1, ..., m and 0 ≤ i ≤ r_j − 1 are linearly independent (whenever r_j = 0, there are no terms A^i b_j).
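Definition 5.2.1 can be checked mechanically: enumerate every m-tuple (r_1, ..., r_m) of nonnegative integers summing to n and test the rank of the resulting collection {A^i b_j}. The sketch below is my own (the function name and the NumPy-based rank test are not from the book):

```python
import itertools
import numpy as np

def strictly_normal(A, B):
    """Definition 5.2.1: for every (r_1,...,r_m) of nonnegative integers
    with sum n, the vectors A^i b_j, 0 <= i <= r_j - 1, must be linearly
    independent (r_j = 0 contributes no vectors)."""
    n, m = B.shape
    for r in itertools.product(range(n + 1), repeat=m):
        if sum(r) != n:
            continue
        vecs = []
        for j, rj in enumerate(r):
            for i in range(rj):
                vecs.append(np.linalg.matrix_power(A, i) @ B[:, j])
        if np.linalg.matrix_rank(np.column_stack(vecs)) < n:
            return False
    return True

# Example 5.1.1: single-input harmonic oscillator.  With m = 1, strict
# normality reduces to ordinary normality, i.e. rank [b, Ab] = 2.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([[0.0], [1.0]])
print(strictly_normal(A, b))  # True
```

For m = 1 this reproduces Remark 5.2.1: strict normality is equivalent to normality.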
Remark 5.2.1: If the system (5.2.1) is strictly normal, then rank B = min[m, n]. Also, if m = 1 and B is an n × 1 matrix, strict normality is equivalent to normality. The number ε > 0 that determines a neighborhood where the optimal feedback is constructed is obtained from the following fundamental lemma of Hájek:

Lemma 5.2.1: System (5.2.1) is strictly normal if and only if there exists an ε > 0 with the following property: for every nonzero complex n-vector c, in any interval of length less than or equal to ε, the number of roots, counting multiplicities, of the coordinates of the index y(t) = c^T e^{-At} B of (5.2.1) is less than n.

With ε determined by the above lemma, define the reachable set

R(ε) = {∫_0^ε e^{-As} B u(s) ds : u(s) ∈ U, u measurable}.
It is in Int R(ε), the interior of R(ε) relative to E^n, that the construction of a feedback will be made. In Int R(ε) we shall identify the terminal manifolds. But first recall that optimal controls are bang-bang and take their values at the vertices of the unit cube U. We need some definitions.

Definition 5.2.2: Let 0 < θ < ε, and suppose k is an integer, 1 ≤ k ≤ n. For any optimal control u : [0, θ] → U defined by u(s) = u_j for t_{j−1} ≤ s < t_j, with 0 = t_0 < t_1 < ... < t_k < ε and u_{j−1} ≠ u_j in U, where each u_j is a vertex, the finite sequence {u_1 → ... → u_k} is called an optimal switching sequence. If u is not optimal and the u_j are not necessarily vertices, then the finite sequence is called a switching sequence. The numbers t_j, 1 ≤ j ≤ k, are points of discontinuity of u, and they are called switch times of u.

The number of discontinuities of optimal controls is important. The following statement is valid:

Corollary 5.2.1: Let ε be given by the Fundamental Lemma, and consider the interval [0, ε]. Every optimal control u : [0, ε] → U has at most n − 1 discontinuities.

Proof: Let u : [θ_1, θ_2] → U be an optimal control, where 0 ≤ θ_1 < θ_2 and θ_2 − θ_1 ≤ ε. Then by Theorem 5.1.1, u = sgn c^T e^{-As} B a.e. on [θ_1, θ_2], for some c ≠ 0 in E^n. Since the discontinuities of u are among the roots of the index y(s) = c^T e^{-As} B, whose number, by Hájek's Lemma, is less than n, the assertion follows.
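For the harmonic oscillator of Example 5.1.1 the bound of Corollary 5.2.1 can be observed directly: the index y(t) = c^T e^{-At} B = −c_1 sin t + c_2 cos t changes sign at most n − 1 = 1 time on any interval of length less than π. A small numerical check (my own sketch, not the book's):

```python
import math

def index_sign_changes(c, eps, steps=10000):
    # index y(t) = c^T e^{-At} B for the oscillator of Example 5.1.1,
    # where e^{-At} b = (-sin t, cos t):  y(t) = -c1 sin t + c2 cos t
    y = lambda t: -c[0]*math.sin(t) + c[1]*math.cos(t)
    ts = [eps*k/steps for k in range(steps + 1)]
    signs = [math.copysign(1, y(t)) for t in ts if abs(y(t)) > 1e-12]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Hajek's lemma gives eps = pi here: on any interval of length < pi the
# index has fewer than n = 2 roots, so optimal controls switch at most once.
eps = math.pi - 0.01
worst = max(index_sign_changes((math.cos(th), math.sin(th)), eps)
            for th in [0.1*k for k in range(63)])
print(worst)  # 1: at most one switch, consistent with Corollary 5.2.1
```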
Definition 5.2.3: For each k = 1, ..., n, let M_k denote the set of points in Int R(ε) whose corresponding optimal controls have exactly k − 1 discontinuities. Set M_0 = {0}. We are now ready to identify and define the "terminal manifolds."

Definition 5.2.4: For each k = 1, ..., n and each k-tuple i = {u_1 → ... → u_k} of vertices of U with u_{j−1} ≠ u_j, define as a terminal manifold the set M_{ki} of points in Int R(ε) whose optimal controls have {u_1 → ... → u_k} as the optimal switching sequence. (Note that i ranges over a finite set I_k.) We say that the optimal switching sequence {u_1 → ... → u_k} corresponds to M_{ki}. The terminal manifolds M_{ki} are disjoint, since optimal controls are unique.
Next we shall relate the terminal manifolds to the reachable sets. The relation is contained in the following statement:

Proposition 5.2.1: For terminal manifolds, the following disjoint unions are valid:

M_k = ⋃_{j ∈ I_k} M_{kj}, Int R(ε) = ⋃_{k=0}^{n} M_k.

For k = n, there are precisely 2m^{n−1} nonvoid terminal manifolds M_{ni}. Each set M_{ni} is open and connected (in E^n). Furthermore, each M_{ki} is the image, under a diffeomorphism, of the set Q_k of all t = (t_1, ..., t_k) ∈ E^k with 0 < t_1 < ... < t_k < ε: M_{ki} = F(Q_k), where F : Q_k → Int R(ε) is defined by

F(t) = Σ_{j=1}^{k} ∫_{t_{j−1}}^{t_j} e^{-As} B u_j ds (t_0 = 0),

a function that is analytic in its variables and whose Jacobian has rank k at each point of Q_k. F^{-1} is continuous.
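The map F can be evaluated numerically for a given switching sequence. The sketch below is my own construction, specialized to the single-input oscillator of Example 5.1.1, where e^{-As} B = (−sin s, cos s); it integrates e^{-As} B u_j piecewise with the midpoint rule:

```python
import math

def expm_neg_As_Bu(s, u):
    # e^{-As} B u for A = [[0,1],[-1,0]], B = [0,1]^T (Example 5.1.1):
    # e^{-As} b = (-sin s, cos s), so the integrand is u * (-sin s, cos s).
    return (-u*math.sin(s), u*math.cos(s))

def F(switch_times, vertices, steps=2000):
    # F(t) = sum_j \int_{t_{j-1}}^{t_j} e^{-As} B u_j ds  (Proposition 5.2.1)
    pts = [0.0] + list(switch_times)
    total = [0.0, 0.0]
    for j, u in enumerate(vertices):
        a, b = pts[j], pts[j + 1]
        h = (b - a)/steps
        for k in range(steps):          # midpoint quadrature
            v = expm_neg_As_Bu(a + (k + 0.5)*h, u)
            total[0] += h*v[0]
            total[1] += h*v[1]
    return total

# Exact value for the one-arc sequence u = +1 on [0, 1]:
#   \int_0^1 (-sin s, cos s) ds = (cos 1 - 1, sin 1)
p = F([1.0], [+1.0])
print(p)  # approximately (cos(1) - 1, sin(1))
```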
The description of M_{ki} contained in Proposition 5.2.1 asserts that each M_{ki} is an analytic k-submanifold of E^n. The following is a consequence of these results:

Corollary 5.2.2: If {u_1 → ... → u_k} is an optimal switching sequence corresponding to M_{ki}, then {u_{j+1} → ... → u_k} is an optimal switching sequence corresponding to M_{k−j,i}, for j = 0, ..., k − 1, and M_{k−j,i} is nonvoid whenever M_{ki} is nonvoid. Moreover, if we set M_{0,i} = M_0, and if an optimal solution in Int R(ε) meets M_{k,i}, then it meets only M_{k−1,i}, ..., M_0 thereafter, in this order.
These assertions are used to prove the next theorem.

Proposition 5.2.2: The set Int R(ε) is made up of a union of 2m^{n−1} disjoint connected nonvoid open sets and a finite number of analytic k-manifolds, 0 ≤ k ≤ n − 1: one for k = 0 and at least two for k ≥ 1.

Proof: From Proposition 5.2.1,

Int R(ε) = ⋃_{k=0}^{n} M_k = M_n ∪ (⋃_{k=0}^{n−1} M_k).
Γ_+ in Figure 5.3.18.
For the overdamped case, λ = −α ± β < 0,

y_2(t) = e^{-αt}[c_1 e^{βt} + c_2 e^{-βt}].

There is at most one switch, at t = (1/2β) ln(−c_1/c_2). (Note that Hájek's Lemma yields ε = π/0 = ∞.) For u = +1, this corresponds to Γ_+ in Figure 5.3.18. For u = −1,
this corresponds to Γ_- in Figure 5.3.18. The terminal manifolds are described in Figure 5.3.19. The curves in both cases are hyperbolas.
Example 5.3.4: Consider the system

ẋ_1(t) = x_2(t) + u_1(t), ẋ_2(t) = −x_1(t) + u_2(t),

which is equivalent to ẋ = Ax + Bu, where

A = [0 1; −1 0], B = [b_1, b_2] = [1 0; 0 1].

The system is normal and controllable, since

rank [B, AB] = rank [1 0 0 1; 0 1 −1 0] = 2,

and rank [b_j, Ab_j] = 2, j = 1, 2. It is also strictly normal: for any integers r_j ≥ 0 with 2 = Σ_{j=1}^{2} r_j (n = 2, m = 2), the vectors A^i b_j, 0 ≤ i ≤ r_j − 1, are linearly independent. (If r_j = 0, there are no terms A^i b_j.) We have the following choices:

r_1  r_2
0    2
1    1
2    0

r_1 = 0 implies there are no terms A^i b_1, since r_1 − 1 = −1 and 0 ≤ i ≤ r_1 − 1. If r_1 = 1, then i = r_1 − 1 = 0 and r_2 = 1, so [A^i b_1, A^i b_2] = [A^0 b_1, A^0 b_2] = [b_1, b_2] (A^0 is the identity matrix), and

[b_1, b_2] = [1 0; 0 1],

which has rank 2. If r_1 = 2, then i = 0, 1 and r_2 = 0, and we have

[b_1, Ab_1] = [1 0; 0 −1].

This matrix has rank 2. If r_2 = 2, then r_1 = 0 and the corresponding vectors are

[b_2, Ab_2] = [0 1; 1 0],

with rank 2. We note that r_2 = 0 implies no terms A^i b_2, since r_2 − 1 = −1 and 0 ≤ i ≤ r_2 − 1. If r_2 = 1, then i = r_2 − 1 = 0 and r_1 = 1, and an earlier argument yields linear independence. We have verified the strict normality condition.

The equation ẋ = Ax has eigenvalues λ = ±i, which have zero real parts. All the conditions of the theorem are satisfied: there exists an optimal feedback control f(x_1, x_2), which we shall now determine. The open-loop control is given as

u*(t) = sgn[c e^{-At} B], 0 ≤ t ≤ t*,

where t* is the minimum time. For our system,

e^{At} = [cos t sin t; −sin t cos t], e^{-At} = [cos t −sin t; sin t cos t],

so that

c e^{-At} = a [sin(t + δ), cos(t + δ)], a = √(c_1^2 + c_2^2), δ = tan^{-1}(c_1/c_2).

It follows that

u*(t) = sgn[sin(t + δ), cos(t + δ)]^T.
Because cos and sin differ in phase by π/2, the sign changes of the two components of the optimal control are π/2 apart. Each component switches every π seconds. The control u* is bang-bang, and its values are the vertices
of the 2-dimensional unit cube. We now construct f(x_1, x_2).

Case 1: u = (1, 1)^T:

ẋ_1 = x_2 + 1, ẋ_2 = −x_1 + 1, dx_2/dx_1 = (−x_1 + 1)/(x_2 + 1),

so (x_2 + 1)dx_2 = (−x_1 + 1)dx_1; integrating,

(1/2)x_2^2 + x_2 = −(1/2)x_1^2 + x_1 + c_1, that is, (x_1 − 1)^2 + (x_2 + 1)^2 = a^2.

The equation describes a circle of radius a centered at (1, −1) in the phase plane.

Case 2: u = (1, −1)^T, so that

−(1/2)x_1^2 − x_1 = (1/2)x_2^2 + x_2 + c_1, (x_1 + 1)^2 + (x_2 + 1)^2 = a^2,

which is a circle of radius a centered at (−1, −1).

Case 3: u = (−1, 1)^T, which gives a circle of radius a centered at (1, 1).

Case 4: u = (−1, −1)^T:

ẋ_1 = x_2 − 1, ẋ_2 = −x_1 − 1, so (x_1 + 1)^2 + (x_2 − 1)^2 = a^2.

This is a circle of radius a centered at (−1, 1).

Forming the Switching Curves. With increasing time t, the circles described above are traversed clockwise; with t decreasing, they are traversed counterclockwise. Let us move backward in time to find the switching curves.
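The four centers found in Cases 1–4 are just the equilibria of ẋ = Ax + u at the vertices u of the unit square: assuming the plant of Example 5.3.4 has the form ẋ_1 = x_2 + u_1, ẋ_2 = −x_1 + u_2 (consistent with the case derivations), the center is −A^{-1}u = (u_2, −u_1). A quick check of this reconstruction:

```python
import numpy as np

# A as in the harmonic oscillator; B = I is assumed for Example 5.3.4.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def center(u):
    # Equilibrium of x' = Ax + u: the circle center is -A^{-1} u.
    return -np.linalg.solve(A, np.asarray(u, dtype=float))

for u, expected in [((1, 1), (1, -1)), ((1, -1), (-1, -1)),
                    ((-1, 1), (1, 1)), ((-1, -1), (-1, 1))]:
    print(u, "->", tuple(center(u)), "expected", expected)  # Cases 1-4
```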
Suppose our final control is u_0, and let δ_1 > π/2 > δ_2 > 0. Move δ_2 seconds counterclockwise to P_1 around the arc corresponding to u_0 (the circle centered at (1, 1) that passes through the origin). Since we are moving backward in time, our only possible control is then u_b, because δ_1 > π/2 > δ_2 > 0 and

u*(t) = sgn[sin(t + δ_1), sin(t + δ_2)]^T.

The control u_b corresponds to a circle centered at (−1, 1) that passes through P_1 in the phase plane. We now traverse this circle counterclockwise for π/2 seconds (δ_1 − δ_2 = π/2). Traversing the circle for π/2 seconds corresponds to moving a quarter circle counterclockwise from P_1 to P_2. At P_2, t = −δ_2 − π/2 (= −δ_1), so sin(t + δ_1) changes sign and the control becomes u_c, whose circle is centered at (−1, −1). See Figure 5.3.20.

Looking at Figure 5.3.20, if we vary δ_2 from t = 0 to t = π/2 seconds, we see the switching curves for our problem take shape. The remaining switching curves can be formed as before by moving backward in time, starting with all possible final controls. The switching curves are shown in Figure 5.3.21. If an optimal trajectory hits the switching locus, it changes to another value of the optimal control and traverses another optimal path until it reaches the origin. Detailed descriptions of the terminal manifolds will now be given.
FIGURE 5.3.20. The calculations needed for this diagram and some of the diagrams below were verified by Kenneth Coulter, C. Ukwu, and D. C. Etheridge. The information was made available by a personal communication.
FIGURE 5.3.21. SWITCHING CURVES

Terminal Manifolds and Switching Locus. Recall the definition: for each k = 1, ..., n, let M_k be the set of points in Int R(ε) whose corresponding optimal controls have exactly k − 1 discontinuities. Set M_0 = {0}. For each k = 1, ..., n and each k-tuple i = {u_1 → ... → u_k} of vertices of U with u_{j−1} ≠ u_j, let M_{ki} be the set of points in Int R(ε) whose optimal controls have {u_1 → ... → u_k} as the optimal switching sequence (M_{ki} may be void; the indices range over a finite set I_k). We may also say that the optimal switching sequence {u_1 → ... → u_k} corresponds to M_{ki}. The M_{ki} are called terminal manifolds. It follows from the uniqueness of optimal controls that the terminal manifolds are disjoint. For k = n there are precisely 2m^{n−1} nonvoid sets M_{ki}. The switching curves for this problem are shown below.
FIGURE 5.3.22.

We may define our terminal manifolds by moving backward in time from the origin, considering all possible control sequences as we move backward. The terminal manifold M_n is the set of points in Int R(ε) whose optimal controls have exactly n − 1 discontinuities (n − 1 = 2 − 1 = 1 in this case). Suppose our final control is u_c (see Figure 5.3.23). If we move backward in time (ccw) in our phase plane, we can describe the terminal manifold M_2. If our final control is u_c, the only possible control sequence as we approach the origin is {u_d → u_c}. Control u_d corresponds to a circle centered at (−1, −1); so we vary the radius of this circle, moving backward in time (ccw) until we reach
the next switching curve, to describe that particular manifold. See Figure 5.3.23.
FIGURE 5.3.23. TERMINAL MANIFOLD M_2

Using the definition, we see that M_2 is defined above. Once the system is within M_2, there is exactly n − 1 = 1 discontinuity before reaching the origin. Terminal manifolds M_3, M_4, and M_1 are constructed in a similar manner but with control sequences {u_c → u_b}, {u_b → u_a}, and {u_a → u_d}, respectively. See Figure 5.3.24 for the terminal manifolds.
FIGURE 5.3.24.

Now we may define the other manifolds in our phase plane by moving backward in time (ccw) from our terminal manifolds M_k. Suppose our final control sequence is {u_d → u_c} (terminal manifold M_2). If we move backward in time from M_2, the only possible control is u_a. So our optimal control sequence will be {u_a → u_d → u_c}. Now we construct the corresponding manifold. The control u_a corresponds to a circle centered at (1, 1) in the phase plane. So if we vary the radius of this circle, moving backward in time (ccw), from r = 0 until we reach our switching curve again, the manifold will be defined. See Figure 5.3.25. The optimal control sequence of the system once it reaches the manifold in Figure 5.3.25 is {u_a → u_d → u_c}. The rest of the manifolds may be found in a similar manner. The result is shown in Figure 5.3.26.
FIGURE 5.3.25. MANIFOLD M_31

Some of the calculations needed for diagram 5.3.25 were verified by C. Ukwu, D. L. Etheridge, and Kenneth Coulter (personal communication).
FIGURE 5.3.26. MANIFOLDS

Since the system is strictly normal, there is ε > 0 such that in Int R(ε) every optimal control, with switching function g(t) = c^T e^{-At} B, has at most one discontinuity. Clearly ε < π. Also

Int R(ε) = ⋃_{j=1}^{4} (M_j ∪ Γ_j).
FIGURE 5.3.27.

Now, since we have our terminal manifolds described, our time-optimal feedback control is also described. As an example, suppose our initial conditions place the state in M_44, as marked in Figure 5.3.27. The trajectory of the state and the time-optimal feedback control necessary to reach the origin are then known. As seen from Figure 5.3.27, the control sequence will be u_4 → u_1 → u_2 → u_3 → u_4 → u_1. The manifolds traversed will be M_44, M_41, M_42, M_43, M_4. Our control switches as we cross manifolds. Our final switch in control, once we are within the ε-neighborhood of the origin, will be from u_4 to u_1.
Our time-optimal feedback control for this problem is defined as:

f(x_1, x_2) = u_1 if (x_1, x_2) lies between Γ_4 and Γ_1 or on Γ_1; u_2 if (x_1, x_2) lies between Γ_1 and Γ_2 or on Γ_2; u_3 if (x_1, x_2) lies between Γ_2 and Γ_3 or on Γ_3; u_4 if (x_1, x_2) lies between Γ_3 and Γ_4 or on Γ_4.

Our optimal trajectory is simply the set of solutions of the equation

ẋ(t) = Ax(t) + Bf(x(t)).
5.4 Synthesis of Minimum-Effort Feedback Control Systems

In this section we present the solution of the minimum-effort/energy control problem. For unrestrained controls, an explicit formula is given for the optimal feedback control. When the controls are restrained, the Neustadt/Hájek [4,7] open-loop solution is presented. With the insight gained from Yeung's optimal feedback solution of the minimum-time problem, we attempt to construct optimal feedback control of the system. Define the set U as follows:

U = {u : [0, T] → E^m, u measurable and integrable}.
Suppose J : U → E; then J(u(T)) is the effort associated with u(t). The minimum-effort problem is stated as follows:

Minimize J(u(T)), u ∈ U, (5.4.0)

subject to

ẋ(t) = Ax(t) + Bu(t), (5.4.1)
x(0) = x_0, x(T, x_0, u) = x_1. (5.4.2)

Here x(·, x_0, u) is the solution of (5.4.1) with x(0, x_0, u) = x_0. Implicit in this problem is the assumption that, with x_0, x_1 ∈ E^n arbitrarily given, there is an appropriate T such that some control u transfers x from x_0 to x_1 in time T; that is, the solution satisfies x(0, x_0, u) = x_0 and x(T, x_0, u) = x_1. Thus we assume that system (5.4.1) is controllable on the interval [0, T] with controls as specified. We now isolate several cost functions J(u(T)) that describe efforts to be minimized, and then state the solutions. A general proof of these theories will be discussed in Section 5.5.
Theorem 5.4.1: Consider Problem (5.4.0) subject to (5.4.1) and (5.4.2), where

J_0(u(T)) = ∫_0^T u(t)^T R(t) u(t) dt, (5.4.3)

and where R(t) is a positive definite m × m matrix. (Here ^T denotes transpose.) Suppose

rank [B, AB, ..., A^{n−1}B] = n. (5.4.4)

Let

W(T) = ∫_0^T e^{-As} B R^{-1}(s) B^T e^{-A^T s} ds. (5.4.5)

The control

u*(t) = R^{-1}(t) B^T (e^{-At})^T W^{-1}(T) q, where q = e^{-AT} x_1 − x_0, (5.4.6)

is the optimal one.

Remark 5.4.1: Note carefully that there is no constraint on the controls. When R(t) ≡ I, the identity matrix, then

J(u(T)) = ∫_0^T u(t)^T u(t) dt, (5.4.8)

and the following corollary is valid:

Corollary 5.4.1: Consider Problem (5.4.0) with J in (5.4.8), subject to (5.4.1) and (5.4.2). The optimal control is

u*(t) = B^T (e^{-At})^T M^{-1} q, (5.4.9)

where M = W(T) with R(t) ≡ I, and

q = e^{-AT} x_1 − x_0. (5.4.10)
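Corollary 5.4.1 is easy to exercise numerically: build W(T), form u*(t) = B^T (e^{-At})^T W^{-1} q, and confirm through the variation-of-parameters formula that u* transfers x_0 to x_1. The sketch below is my own (the series matrix exponential and midpoint quadrature are implementation choices, not the book's):

```python
import numpy as np

def expm(M, terms=30):
    # Taylor series for the matrix exponential (adequate for small ||M||)
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator, Example 5.1.1
B = np.array([[0.0], [1.0]])
T, x0, x1 = 2.0, np.array([1.0, 0.0]), np.array([0.0, 0.0])

# W(T) = \int_0^T e^{-As} B B^T e^{-A^T s} ds   (R(t) = I, Corollary 5.4.1)
steps = 2000
h = T/steps
W = np.zeros((2, 2))
for k in range(steps):
    E = expm(-A*(k + 0.5)*h) @ B
    W += h * (E @ E.T)

q = expm(-A*T) @ x1 - x0
u = lambda t: B.T @ expm(-A*t).T @ np.linalg.solve(W, q)   # u*(t), (5.4.9)

# Verify by variation of parameters: x(T) = e^{AT}[x0 + \int e^{-As}B u(s) ds]
integral = np.zeros(2)
for k in range(steps):
    s = (k + 0.5)*h
    integral += h * (expm(-A*s) @ B @ u(s))
xT = expm(A*T) @ (x0 + integral)
print(xT)  # approximately x1 = (0, 0)
```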
Proof: It is quite standard that condition (5.4.4) is equivalent to the nonsingularity of W(T), so that (5.4.5) is well defined. Insert (5.4.6) into the solution of (5.4.1) that is given by the variation of parameters formula

x(t, x_0, u) = e^{At}[x_0 + ∫_0^t e^{-As} B u(s) ds]. (5.4.11)

Thus u* transfers x_0 to x_1 in time T. Suppose u(t) is any other control that transfers x_0 to x_1 in time T. We shall prove that

J(u(T)) ≥ J(u*(T)), (5.4.12)

and also that

J(u*(T)) = (q, W^{-1} q), (5.4.13)

where (·,·) denotes the inner product and q = e^{-AT} x_1 − x_0. Because both controls transfer x_0 to x_1, we have

∫_0^T e^{-As} B u(s) ds = ∫_0^T e^{-As} B u*(s) ds. (5.4.14)

If we subtract and take the inner product with W^{-1}(T)[e^{-AT} x_1 − x_0], we obtain

(∫_0^T e^{-As} B [u(s) − u*(s)] ds, W^{-1}(T)[e^{-AT} x_1 − x_0]) = 0. (5.4.15)

We use (5.4.6) and the properties of the inner product to obtain

∫_0^T (u(s) − u*(s), R(s) u*(s)) ds = 0. (5.4.16)

Using (5.4.16) and some easy manipulation, one deduces that indeed J(u(T)) ≥ J(u*(T)). But

J_0(u*(T)) = ∫_0^T u*(t)^T R(t) u*(t) dt, (5.4.17)

and, since W(T) is symmetric, (5.4.17) yields

J_0(u*(T)) = ∫_0^T (q, W^{-1}(T)[e^{-As}B] R^{-1}(s) [e^{-As}B]^T W^{-1}(T) q) ds = (q, W^{-1}(T) q),

which is (5.4.13).
The optimal solution given in Theorem 5.4.1 is a feedback one. It is global. But the controls are "big" and unrestrained. When the effort function is defined to correspond to the "maximum thrust" available, there are hard limits that bound the controls. This situation is treated in the next theorem.
Theorem 5.4.2: Consider the problem (5.4.0) with

J_1(u(T)) = max_{1≤j≤m} sup_{0≤t≤T} |u_j(t)|, (5.4.18)

subject to (5.4.1) and (5.4.2), with x_1 ≡ 0, where admissible controls lie in

U_1 = {u ∈ E^m : u measurable, |u_j(t)| ≤ 1, j = 1, ..., m, 0 ≤ t ≤ T}. (5.4.19)

Define the functions

y(t) = e^{-At} x_1(t) − x_0, (5.4.20)
g(t, c) = c · e^{-At} B, (5.4.21)

where c is an n-dimensional row vector, and g is the index of the system (5.4.1). Let

F(c, T) = Σ_{j=1}^{m} ∫_0^T |g_j(t, c)| dt. (5.4.22)

Assume that:
(i) rank [B, AB, ..., A^{n−1}B] = n. (5.4.23)
(ii) For each j = 1, ..., m, the vectors {b_j, Ab_j, ..., A^{n−1}b_j} are linearly independent. (5.4.24)
(iii) No eigenvalue of A has a positive real part.
Then for each x_0 and some response time T, there exists a minimum-effort control that transfers x_0 to 0 in time T. If y(T) ≠ 0, i.e., x_0 ≠ 0, the minimum effort J_1min = J_1(u*(T)) is given by

1/J_1min = min_{c ∈ P} F(c, T), (5.4.25)

where P is the plane c · y(T) = 1. Furthermore, the optimal control u*(t) is unique almost everywhere and is given by

u*(t) = J_1min sgn g(t, c*), (5.4.26)

where c* is any vector in P for which the minimum in (5.4.25) is attained. If y(T) = 0, i.e., x_0 = 0, the control u*(t) ≡ 0 is the desired minimum-effort control.
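Formula (5.4.25) can be evaluated by brute force for the oscillator of Example 5.1.1: scale each direction c onto the plane P = {c : c · y(T) = 1} (here y(T) = −x_0, since x_1 ≡ 0) and take the smallest F(c, T). The grid search below is my own sketch, standing in for the steepest-descent method the text mentions:

```python
import math

def g(t, c):
    # index g(t, c) = c e^{-At} B for the oscillator: -c1 sin t + c2 cos t
    return -c[0]*math.sin(t) + c[1]*math.cos(t)

def F(c, T, steps=1000):
    # F(c, T) = \int_0^T |g(t, c)| dt   (single input, (5.4.22))
    h = T/steps
    return sum(h*abs(g((k + 0.5)*h, c)) for k in range(steps))

def min_effort(x0, T, angles=360):
    # minimize F over the plane P = {c : c . y(T) = 1}, y(T) = -x0,
    # by scaling each grid direction onto P;  J1min = 1 / min F   (5.4.25)
    y = (-x0[0], -x0[1])
    best = float("inf")
    for k in range(angles):
        th = 2*math.pi*k/angles
        d = (math.cos(th), math.sin(th))
        dot = d[0]*y[0] + d[1]*y[1]
        if dot <= 1e-9:
            continue
        c = (d[0]/dot, d[1]/dot)       # c lies on P
        best = min(best, F(c, T))
    return 1.0/best

J1 = min_effort((0.5, 0.0), 2*math.pi)
print(J1)  # the optimal control is u*(t) = J1 * sgn g(t, c*)
```

A useful sanity check of the formula: since F is linear in c, doubling x_0 halves min F and doubles the required effort.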
Remark 5.4.2: The statement and proof of Theorem 5.4.2 are given by Neustadt [7]. He suggests that the method of steepest descent can be used to calculate min F(c, T) over c ∈ P = {c : c · y(T) = 1}. It is important to observe that for some fixed time T, the minimum-effort optimal control u* is given by

u*(t) = J_1min sgn g(t, c), t ∈ [0, T],

where J_1 = J_1min is a constant and g is the index of the control system. For the time-optimal problem, the optimal control v*(t) is given by v*(t) = sgn g(t, c), t ∈ [0, T], with T the minimum time. That is, the indices are switching functions of optimal controls. Thus, the optimal solution of the minimum-effort control problem for (5.4.1) is a constant multiple of the minimum-time one. Therefore, the treatment of Sections 5.2 and 5.3 can be appropriated for the construction, in a neighborhood of zero, of optimal feedback control for the minimum-effort problem. It is stated in the next result.

Theorem 5.4.3: Consider the problem (5.4.0) with J_1(u(T)) defined in (5.4.18), subject to (5.4.1) and (5.4.2), where x_1 = 0. Assume that:
(i) rank [B, AB, ..., A^{n−1}B] = n.
(ii) No eigenvalue of A has a positive real part.
(iii) The system (5.4.1) is strictly normal.
Then there exist λ > 0 and a function f : Int R(ε) → E^m, where ε = min[λ, T],
such that in Int R(ε) the set of solutions of

ẋ(t) = Ax(t) + Bf(x(t)) (5.4.27)

coincides with the set of optimal solutions of (5.4.1). Furthermore, f(0) = 0, and for x ≠ 0, f(x) is among the vertices of the cube

V = {v ∈ E^m : |v_j| ≤ J_min, j = 1, ..., m},

and f(x) = −f(−x). If m ≤ n, then f is uniquely determined by the condition that optimal solutions solve (5.4.27). Also, the inverse image of an open set in E^m is an F_σ-set.
Remark 5.4.3: If λ ≥ T, the optimal feedback control constructed from Theorem 5.4.3 is a global one for the given T, so that the minimal control strategy f(x) drives any x_0 to 0 in time T. If λ < T, only a local fuel-optimal feedback control is constructed for strictly normal systems.

Proof: The only modification needed in the proof is to replace the unit cube U by the cube V, whose components have magnitudes at most J_min. Thus, the optimal switching sequence is the finite sequence {v_1 → ... → v_k}, where the v_j are among the vertices of V, and an optimal control u on [0, θ] for θ < ε is defined by u(s) = v_j for t_{j−1} ≤ s < t_j, with 0 = t_0 < t_1 < ... < t_k and v_{j−1} ≠ v_j, where k is an integer, 1 ≤ k ≤ n. As before, we set M_0 = {0}, and for k = 1, ..., n we let M_k be the set of points in Int R(ε) whose optimal controls have exactly k − 1 discontinuities. For each k = 1, ..., n and each k-tuple i = {v_1, ..., v_k} of vertices of V with v_{j−1} ≠ v_j, let M_{ki} be the set of points in Int R(ε) whose optimal controls have {v_1 → ... → v_k} as the optimal switching sequence. Because (5.4.1) is strictly normal, there exists ε > 0 such that the index g(t, c) of (5.4.1) has at most n − 1 zeros on [0, ε]. With this ε, define R(ε) and note that

Int R(ε) = ⋃_{k=0}^{n} M_k, M_k = ⋃_{j ∈ I_k} M_{kj},

as before. Set f(0) = 0, and for 0 ≠ x ∈ Int R(ε), find the M_{kj} containing x and let {v_1 → ... → v_k} be the optimal switching sequence corresponding to M_{kj}. Finally, set f(x) = v_1. The rest of the argument follows as in Theorem 2.1.
Theorem 5.4.4: Consider the problem (5.4.0) with the cost function representing a pseudo-fuel (energy if p = 2):

J_2(u(T)) = (Σ_{j=1}^{m} ∫_0^T |u_j(t)|^p dt)^{1/p}, 1 < p < ∞, (5.4.28)

where admissible controls are the measurable u : [0, T] → E^m with J_2(u(T)) finite. As usual, the minimization is subject to (5.4.1) and (5.4.2). Assume that:
(i) rank [B, AB, ..., A^{n−1}B] = n.
(ii) No eigenvalue of A has a positive real part.
(iii) For each j = 1, ..., m, the vectors b_j, Ab_j, ..., A^{n−1}b_j are linearly independent.
Then for each x_0 and some T, there exists an optimal control u*(t) that transfers x from x_0 to 0 in time T. Furthermore, let

g(t, c) = c e^{-At} B, c ∈ E^n a row vector, (5.4.30)

and let F(c, T) be the L^q norm of g(·, c), where 1/p + 1/q = 1. If x_0 ≠ 0, then the minimum effort J_2min = J_2(u*(T)) is given by

1/J_2min = min_{c ∈ P} F(c, T), (5.4.31)

where

P = {c ∈ E^n : −c x_0 = 1}. (5.4.32)

The optimal control is unique almost everywhere and is given by

u_j*(t) = μ |g_j(t, c*)|^{q−1} sgn g_j(t, c*), (5.4.33)

where μ = J_2min [F(c*, T)]^{-q/p} and c* ∈ E^n is any vector in P where the minimum in (5.4.31) is attained. If x_0 = 0, the optimal control is u*(t) ≡ 0.

The last cost function considered is the measure of absolute fuel, defined by

J_3(u(T)) = Σ_{j=1}^{m} ∫_0^T |u_j(t)| dt, (5.4.34a)
where the set of admissible controls is given by

U_3 = {u : [0, T] → E^m, u measurable, J_3(u(T)) < ∞}.

This is the so-called absolute-fuel minimum problem. There is no optimal solution for this cost function if we assume that u ∈ U_3 is integrable. Hájek [4] has proved the following result:

Theorem 5.4.5: In problem (5.4.0), with cost (5.4.34) and constraints (5.4.1) and (5.4.2) with x_1 = 0, assume:
(i) rank [B, AB, ..., A^{n−1}B] = n.
(ii) rank [Ab_j, ..., A^n b_j] = n for each column b_j of B (this means that (5.4.1) is metanormal). (5.4.35)
Then there is no optimal solution with u integrable that steers x_0 to 0 in some T while minimizing J_3(u(T)).

We now admit, in absolute-fuel minimization problems, as admissible controls those that are unbounded and "impulsive" in nature, the so-called Dirac delta functions. If such generalized functions are admissible, optimal minimum-||u||_1 controls exist.
Theorem 5.4.6: In problem (5.4.0), with the cost function (5.4.34) and constraints (5.4.1) and (5.4.2) with x_1 = 0, suppose:
(i) rank [B, AB, ..., A^{n−1}B] = n.
(ii) rank [Ab_j, ..., A^n b_j] = n for each j = 1, ..., m.
Let

g(t, c) = c · e^{-At} B, F(T, c) = max_{1≤j≤m} max_{0≤t≤T} |g_j(c, t)|. (5.4.36)

If x_0 ≠ 0, and if we consider generalized (delta) functions to be admissible, then there exists a minimum-effort control u*(t) that transfers x_0 to 0 in time T. The minimum fuel J_3(u*(T)) = J_3min is given by

1/J_3min = min_{c ∈ P} F(T, c) = F(T, c*),

where P = {c : −c x_0 = 1}. The optimal control is given by u*(t) = J_3min ū(t, c*), where the boundary control ū is given by

ū_j(t) = Σ_{i=1}^{N_j} sgn(g_j(c*, τ_{ji})) δ(t − τ_{ji}) / Σ_{j=1}^{m} N_j, 1 ≤ j ≤ m.
Here the maximum in (5.4.36) can occur at multiple j and at multiple instants τ_{ji}, i = 1, 2, ..., N_j, where N_j equals zero if g_j does not attain the maximum. If we replace the control set accordingly, then the optimal control is given by

u*(t) = J_min ū(t, c*),

where the boundary control ū is defined analogously,

1/J_min = min_{c ∈ P} F(T, c) = F(T, c*), (5.4.37)

and the maxima in (5.4.37) may occur at multiple instants τ_{ji}, i = 1, 2, ..., M_j, M_j ≥ 1.
5.5 General Method for the Proof of Existence and Form of Time-Optimal, Minimum-Effort Control of Ordinary Linear Systems

In the string of results, Theorem 5.4.2 through Theorem 5.4.5, on the system

ẋ(t) = A(t)x(t) + B(t)u(t), x(0) = x_0, (5.5.1)

one begins with the variation of parameters formula,

x(t, x_0, u) = X(t)[x_0 + ∫_0^t X^{-1}(s) B(s) u(s) ds], (5.5.2)

where X(t) is the fundamental matrix solution of

ẋ(t) = A(t)x(t), (5.5.3)
and defines the functions (5.5.4) and (5.5.5). Also defined is the map

S_t(u) = ∫_0^t X^{-1}(s) B(s) u(s) ds,   (5.5.6)

where S_t : Y → E^n is a continuous linear map with S_0(Y) = 0. Here Y is the dual space Z* of a separable Banach space Z, or is a reflexive Banach space. It is the space of controls whose elements u influence the states of (5.5.1), which are elements of E^n. We assume a uniform bound M on the norms of the admissible controls:

U = { u ∈ Y : ||u|| ≤ M }.   (5.5.7)
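For a concrete feel for (5.5.2)–(5.5.6), the following sketch integrates the fundamental matrix of (5.5.3) and evaluates S_t(u) for a constant control; the double-integrator matrices are an assumed illustration, not from the text:

```python
import numpy as np

# Assumed example: double integrator xdot1 = x2, xdot2 = u.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def fundamental_matrix(t, steps=200):
    """Integrate Xdot = A X, X(0) = I, by classical RK4."""
    X, h = np.eye(2), t / steps
    for _ in range(steps):
        k1 = A @ X
        k2 = A @ (X + 0.5 * h * k1)
        k3 = A @ (X + 0.5 * h * k2)
        k4 = A @ (X + h * k3)
        X = X + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return X

def S(t, u, steps=200):
    """S_t(u) = int_0^t X^{-1}(s) B u(s) ds, by the trapezoidal rule."""
    h, total = t / steps, np.zeros(2)
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (np.linalg.inv(fundamental_matrix(i * h)) @ (B @ u(i * h)))
    return h * total

u = lambda s: np.array([1.0])          # a constant admissible control
x0 = np.array([1.0, 0.0])
x_t = fundamental_matrix(1.0) @ (x0 + S(1.0, u))   # variation of parameters (5.5.2)
print(x_t)   # [1.5 1. ] for this example
```

For this nilpotent A the computation is exact: x_1(1) = 1 + 1/2 and x_2(1) = 1.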
If z_1(t) is a time-varying target that the system is to hit, then y(t) describes the desired state of the system relative to the reachable set. Thus, at t = 0, y(0) = z_1(0) - x_0 ≠ 0, and (5.5.1) is not at a desirable state. But if the system hits the target at some t, then we have the coincidence

y(t) = X^{-1}(t) z_1(t) - x_0 = S_t(u)   (5.5.8)

for some u ∈ U. As a consequence, y(t) ∈ R(t), the reachable set, defined by

R(t) = { ∫_0^t X^{-1}(s) B(s) u(s) ds : u ∈ U }.   (5.5.9)

For the time-optimal problem, the minimal time for hitting the target is given by

t* = inf{ t ∈ [0,T] : S_t(u) = y(t) for some u ∈ U }.   (5.5.10)
The admissible control u* ∈ U with S_{t*}(u*) = y(t*) is the time-optimal control. For the minimum-fuel problem, the admissible control u* with S_t(u*) = y(t) that minimizes J(u(T)) is the minimum-fuel optimal control. As suggested by Hájek and Krabs [5], it is useful to study in an abstract setting the mapping
S_t : Y → E^n and its adjoint

S_t* : E^{n*} → Y*,

with the following insight. If Y = L_∞([0,T], E^m), then Y = Z* with Z = L_1([0,T], E^m). S_t is given in (5.5.6), and its adjoint S_t* is given by

S_t*(c^T)(τ) = (X^{-1}(τ) B(τ))^T c^T  a.e. τ ∈ [0, t],   S_t*(c^T)(τ) = 0  a.e. τ ∈ (t, T],   (5.5.11)

which obviously maps E^n into L_1([0,T], E^m) ⊂ Y*, so that

||S_t*(c)|| = ∫_0^t ||(X^{-1}(s) B(s))^T c^T||_1 ds,   c ∈ E^{n*} a row vector,   (5.5.12)

where ||·||_1 denotes the l_1 norm in E^m. If Y = L_2([0,T], E^m), then Y = Y* is a Hilbert space. For each t ∈ [0,T], the adjoint S_t* : E^n → Y of the operator S_t in (5.5.6) satisfies

||S_t*(c)|| = ( ∫_0^t ||(X^{-1}(s) B(s))^T c^T||^2 ds )^{1/2},   for c ∈ E^n.   (5.5.13)
We now consider the continuous linear mapping S_t : Y → E^n under the following assumptions:
(i) R(t) = { S_t(u) : u ∈ U } is closed.
(ii) The mapping t → S_t, t ∈ [0,T], is continuous with respect to the operator-norm topology of L(Y, E^n), the space of linear maps from Y into E^n.
(iii) The function y : [0,T] → E^n, t → y(t), is continuous.
(iv) y : [0,T] → E^n is constant and nonzero, i.e., y(t) = y_1 for all t ∈ [0,T], y_1 ≠ 0.
(v) For each c ∈ E^n, c ≠ 0, the function t → ||S_t*(c)|| is strictly increasing on [0,T].
Theorem 5.5.1  Assume condition (i), and let t ∈ [0,T]. Then there is an admissible control u ∈ U such that S_t(u) = y(t) if and only if

c y(t) ≤ M ||S_t*(c)||,   for all c ∈ E^{n*},   (5.5.14)

where E^{n*} is the Euclidean space of row vectors and S_t* is the adjoint of S_t, mapping E^{n*} to Y*.

Proof: If there is a u ∈ U with S_t(u) = y(t), then

c y(t) = c(S_t(u)) = S_t*(c)(u) ≤ M ||S_t*(c)||.

This proves (5.5.14). Conversely, recall the definition of the reachable set R(t), and note that it is convex since S_t is linear and U convex. Because R(t) is a closed convex subset of E^n, if there is no u ∈ U with S_t(u) = y(t), then y(t) ∉ R(t), and by the separation theorem for closed convex sets in E^n [6, p. 33] there is a hyperplane that separates R(t) and y(t). This means there is a c ∈ E^{n*} such that

c y(t) > sup{ c(S_t(u)) : u ∈ U } = sup{ S_t*(c)(u) : u ∈ U }.

This contradicts (5.5.14).
Theorem 5.5.2  Suppose conditions (i)–(iii) are satisfied. Then there exists a c ∈ E^{n*}, a row vector with ||c|| = 1, such that

c y(t*) = M ||S_{t*}*(c)||.   (5.5.15)
The proof is contained in Antosiewicz [9].
Theorem 5.5.3 (Hájek–Krabs Duality Theorem) [5]  Suppose (i), (ii), (iv), and (v) are valid. Let t* be the optimal time defined in (5.5.10). Then

t* = max{ t ∈ (0,T] : c y(t) = M ||S_t*(c)|| for some c ∈ E^{n*} with ||c|| = 1 }.

Proof: From Theorem 5.5.2 and its corollary, the minimum time t* is a point of the set over which the maximum is taken. Suppose t > t*, where t ∈ (0,T] and c y_1 = M ||S_t*(c)|| for some c ∈ E^{n*} with ||c|| = 1. Then by Theorem 5.5.1 and (v),

c y_1 ≤ M ||S_{t*}*(c)|| < M ||S_t*(c)|| = c y_1.

This is a contradiction.
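The duality theorem suggests a numerical scheme: bisect on t, testing reachability by the inequality of Theorem 5.5.1 over sampled directions c. The sketch below uses an assumed double-integrator example (X^{-1}(s)B = (-s, 1)^T; the minimum time from (1,0) to the origin with |u| ≤ 1 is known to be 2):

```python
import numpy as np

# Assumed example: double integrator, steer x0 = (1,0) to 0; y(t) = y1 = -x0.
M, y1 = 1.0, np.array([-1.0, 0.0])

def adj_norm(c, t, steps=400):
    """||S_t^*(c)|| = int_0^t |c X^{-1}(s) B| ds; here c X^{-1}(s) B = c2 - c1 s."""
    s = np.linspace(0.0, t, steps + 1)
    integrand = np.abs(c[1] - c[0] * s)
    h = t / steps
    return h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

def reachable(t, n_dirs=360):
    """Theorem 5.5.1: y1 reachable at t iff c.y1 <= M ||S_t^*(c)|| for all c."""
    for th in np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False):
        c = np.array([np.cos(th), np.sin(th)])
        if c @ y1 > M * adj_norm(c, t) + 1e-9:
            return False
    return True

lo, hi = 0.0, 10.0          # bisect for the t* of (5.5.10)
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if reachable(mid):
        hi = mid
    else:
        lo = mid
print(round(hi, 3))          # ≈ 2.0, the known minimum time for this example
```

Since ||S_t*(c)|| is increasing in t, reachability is monotone and bisection is justified.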
Corollary 5.5.1  Let u* be a time-optimal control and t* the minimum time. Then

S_{t*}*(c)(u*) = c(S_{t*}(u*)) = c y(t*) = M ||S_{t*}*(c)||.
Now, consider U ⊂ L_∞([0,t*], E^m) as the control space. From the above there exists u* such that

c y(t*) = ∫_0^{t*} (c X^{-1}(τ) B(τ)) u*(τ) dτ.

This implies that u* is of the form

u_j*(t,c) = M sgn(c X^{-1}(t) B(t))_j   (5.5.17a)

when (c X^{-1}(t) B(t))_j ≠ 0, for each c ≠ 0 and each j = 1, ..., m. Thus if the system is normal, optimal controls are given by

u*(t) = M sgn(c X^{-1}(t) B(t))   (5.5.17b)

when U ⊂ L_∞([0,t*], E^m). Suppose now that

U = { u ∈ L_p : ||u||_p ≤ M }.

If U ⊂ L_p([0,t*], E^m), where p > 1, we obtain from (5.5.15) that

c y(t*) = M ||S_{t*}*(c)||,   (5.5.18)
where 1/p + 1/q = 1. Recall that
U = { u ∈ L_p : ( ∫_0^T |u(s)|^p ds )^{1/p} ≤ M }.

Let g(s) = c X^{-1}(s) B(s). Then, by Hölder's inequality,

c y(t*) = ∫_0^{t*} g(s) u*(s) ds ≤ M ( ∫_0^{t*} Σ_{j=1}^m |g_j(s)|^q ds )^{1/q};

since by (5.5.18)

c y(t*) = M ( ∫_0^{t*} Σ_{j=1}^m |g_j(s)|^q ds )^{1/q},

we have equality everywhere, and the control u* that gives the equality is

u_j*(s,c) = (M/N) |g_j(s)|^{q/p} sgn(g_j(s)),   j = 1, ..., m,   (5.5.19)

where N = ( ∫_0^{t*} Σ_{j=1}^m |g_j(s)|^q ds )^{1/p}.
This is the time-optimal control. It holds when the system is normal. To obtain expressions for the minimum-fuel optimal controls, we note carefully that the controls u*(t,c) in (5.5.17) and (5.5.19) are boundary controls, in the sense that if

z(T,c) = ∫_0^T X^{-1}(s) B(s) u*(s,c) ds,   (5.5.20)
then z(T,c) is on the boundary of the reachable set R(T), so that

c z(T,c) = F(c,T) = M ||S_T*(c)||,   c z(T,c) > c y  for all y ∈ R(T), y ≠ z(T,c).   (5.5.21)
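The boundary-control property (5.5.20)–(5.5.21) can be checked numerically: generate z(T,c) with the bang-bang control and verify that no admissible control beats it in the direction c. A sketch, again on the assumed double-integrator example with X^{-1}(s)B = (-s, 1)^T:

```python
import numpy as np

M, T, steps = 1.0, 2.0, 1000
h = T / steps
s = (np.arange(steps) + 0.5) * h                  # midpoint quadrature nodes
cols = np.stack([-s, np.ones_like(s)], axis=1)    # rows are X^{-1}(s) B

def z_of(c):
    """z(T,c) of (5.5.20) with the bang-bang control u*(s,c) = M sgn(c X^{-1}(s)B)."""
    u = M * np.sign(cols @ c)
    return h * (cols * u[:, None]).sum(axis=0)

c = np.array([1.0, 0.5])
z = z_of(c)
# (5.5.21): z(T,c) supports R(T): c.z >= c.y for every reachable y.
rng = np.random.default_rng(0)
ok = all(
    c @ z >= c @ (h * (cols * rng.uniform(-M, M, steps)[:, None]).sum(axis=0)) - 1e-9
    for _ in range(200)
)
print(np.round(z, 2), ok)   # ≈ [ 1.75 -1. ] True
```

The switching instant of the sign function is where c X^{-1}(s)B vanishes, exactly as in (5.5.17).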
If y(T) is as defined in (5.5.4) and y(T) ∈ R(T), we can extend y(T) to reach the boundary as follows. Let

α = max{ β : β y(T) ∈ R(T) }.   (5.5.22)

If y(T) ≠ 0, α can be taken positive, and clearly α y(T) is a boundary point of R(T), so that α y(T) = z(T,c) for some row vector c, where matters can be so arranged that

c · y(T) = 1.   (5.5.23)

We observe that

ū(t) = u*(t,c)/α   (5.5.24)

transfers x_0 to z_1(T) in time T while minimizing J(u(T)). Also,

1/α = min J(u(T)) = 1/M(T),   (5.5.25)

where

α = min_{c ∈ P = { c ∈ E^{n*} : c y(T) = 1 }} F(c,T).   (5.5.26)

As a consequence of these observations, the minimum-energy control is given by (5.5.27),
where c* is the minimizing vector in (5.5.26). These are the verifications of Theorems 5.4.2 and 5.4.4. In detail, for L_∞ control, the minimum-fuel controls are given by (5.5.28), where c* is the minimizing vector. For L_p control, minimum pseudo-fuel controls are given by (5.5.29), where
Thus for minimizing L_p controls in U,

α = min_{c ∈ P} F(c,T) = F(c*,T) = 1/M(T),   (5.5.30)–(5.5.31)

and the optimal control of (5.5.31) is given by (5.5.29), where c* is the minimizing vector in (5.5.30) and (5.5.31). It is illuminating to link up minimum-time control with minimum-energy problems. In some situations, we show that the optimal controls are the same.
Definition 5.5.1: For each t ∈ (0,T], let M(t) be defined by (5.5.32), where

P = { c ∈ E^{n*} : c y_1 = 1 }.
Theorem 5.5.4  Suppose the basic assumptions (i), (ii), (iv), and (v) are valid. Then for each t ∈ (0,T],

M(t) ≥ M  if  t ≤ t*,   and   M(t) ≤ M  if  t ≥ t*,   (5.5.33)

where M is the bound on the corresponding control set.

Proof: From the definition we quickly deduce that

c y_1 ≤ M(t) ||S_t*(c)||,   for all c ∈ E^n.

Because E^n is finite dimensional, it is routine to verify that for each t ∈ (0,T] the minimum in (5.5.32) is attained at some c(t) ∈ P (5.5.34), and there is a u_t ∈ Y such that S_t(u_t) = y_1 with ||u_t|| = M(t). Using this in (5.5.34), there exists c(t) ∈ E^{n*} such that

c(t) y_1 = M(t) ||S_t*(c(t))|| = 1.

If we invoke Theorem 5.5.1 and Theorem 5.5.2, the result follows at once.
Theorem 5.5.5  Assume (i), (ii), (iv), and (v). The optimal control u* that transfers x_0 to 0 in minimum time t* while minimizing the effort J(u(t*)) is given by

u*(t) = sgn[g(t,c*)],   0 ≤ t ≤ t*,   (5.5.35)

when U_1 ⊂ L_∞ with J = J_1 and c* minimizes (5.5.26) and (5.4.22). Also, u* is given by

u_j*(t) = N |g_j(t,c)|^{q/p} sgn g_j(t,c)   (5.5.36)

if U_2 ⊂ L_p with J = J_2, where c in (5.5.36) minimizes (5.5.31) with F as in (5.4.30).
Proof: If T = t*, the minimum time required to transfer x_0 to 0, then from Theorem 5.5.4,

1/M = 1/M(t*) = inf{ ||S_{t*}*(c)|| : c ∈ P },

where P = { c ∈ E^n : c y_1 = 1 } and M is the bound on U. Since minimum-fuel controls are given by (5.5.27), the optimal controls are given by the controls ū*(t) of (5.5.28) or (5.5.29), where c is the minimizing vector of (5.5.26); using this ū*(t), the results (5.5.35) and (5.5.36) are deduced. Thus, when M = 1, a time-optimal control for the system (5.5.1) is also a minimum-fuel control.

Remarks on the assumptions: The coincidence y(t) = S_t(u) for some u ∈ U is equivalent to controllability with constraints. If y(t) = -x_0, this is null controllability with constraints, which is ensured by the following assumptions:
(a) rank [B, AB, ..., A^{n-1}B] = n.
(b) No eigenvalue of A has positive real part.
(c) The condition that for each c ≠ 0 the map t → ||S_t*(c)|| is strictly increasing is ensured by the system being proper, i.e., for each t_1, t_2 ∈ [0,T] with 0 ≤ t_1 ≤ t_2 ≤ T,

c e^{-At} B(t) = 0  for all t ∈ [t_1, t_2]  if and only if  c = 0.

This is equivalent to the controllability condition (a).
(ii) In (5.5.17) the optimal controls are uniquely determined if for each j = 1, ..., m and each c ≠ 0 in E^n, [c e^{-At} B(t)]_j ≠ 0 for almost every t.

We note that the theorems on minimum-fuel problems, Theorems 5.4.2–5.4.4, depend on the assumption that ensures that the reachable set is closed. Because the reachable sets are also convex, optimal controls are controls that generate points on the boundary of R(t). If, however, the problem is to minimize the L_1-cost function

J_3(u(T)) = ||u||_1 = ∫_0^T Σ_{j=1}^m |u_j(t)| dt   (5.5.37a)

subject to x(0) = x_0 and x(T) = 0, where U = U_3 is given in (5.5.34), then the reachable set (5.5.37b) is open, provided we assume that

rank [Ab_j, ..., A^n b_j] = n  for each column b_j of B.

This is the content of the following lemma by Hájek [4].
Lemma 5.5.1  Suppose (5.5.1), ẋ(t) = Ax + Bu, is metanormal, i.e., rank [Ab_j, ..., A^n b_j] = n for each j = 1, ..., m. Then the reachable set R(t) of (5.5.1) is open, bounded, convex, and symmetric.
Proof: Since boundedness, convexity, and symmetry are obvious, we give only the proof that R(t) is open. Suppose it is not open, so that some point x_0 of R(T), attained by a control u_0, is a boundary point. Let c ≠ 0 be an outer normal to (the closure of) R(T) at x_0, and use c to define the index

y(t) = c · e^{-At} B.
It is clear that the control u_0 maximizes c · ∫_0^T e^{-As} B u(s) ds over the unit ball of L_1 (5.5.38). But the mapping

u → ∫_0^T y(t) u(t) dt

is a linear functional on L_1[0,T] with norm ||y||_∞. Since (5.5.38) implies that ||y||_∞ is attained at the element u_0(·) of the unit ball (5.5.39), we have equality throughout, so that u_0 is supported where |y_j| attains its maximum. But Hájek has observed [4, p. 417] that metanormality implies |y_j| is nonconstant a.e. This implies u_0 = 0 a.e. With this u_0 = 0, our boundary point is x_0 = 0. This contradicts the containment

R_α(T) ⊂ Int R_β(T),   0 < α < β,

which is a consequence of metanormality. Thus the set of points -x_0 that can be steered to 0 at T by controls with L_1 bound ||u||_1 ≤ 1 is the open set R. If the controls have ||u||_1 ≤ M, the set is MR for M > 0. From this it follows that the minimal M can never be attained unless x_0 = 0. This proves Theorem 5.4.5.
The conditions for Euclidean controllability have been well studied: see Kirillova and Churakova [11], Gabasov and Kirillova [8], Weiss [15], and Manitius and Olbrot [13,14]. They are all conditions on matrices representing the system, and they are a consequence of the following lemma:
Lemma 6.1.1  In the system (6.1.1), the following are equivalent:
(i) The matrix

G(σ, t_1) = ∫_σ^{t_1} U(t_1, s) B(s) B*(s) U*(t_1, s) ds   (6.1.2)

has rank n, where U(t,s) is the n × n matrix solution of

ẋ(t) = L(t, x_t).   (6.1.3)

(ii) The relation

c^T Y(s, t_1) B(s) ≡ 0,   c ∈ E^n,  s ∈ [σ, t_1],

implies c = 0, where Y(s, t_1) = U(t_1, s) a.e. in s is the n × n matrix defined by

Y(α, t) = I - ∫_α^t Y(σ, t) η(σ, α - σ) dσ,  α ≤ t;   Y(α, t) = 0,  α > t.   (6.1.4)

(iii) The system (6.1.1) is Euclidean controllable on [σ, t_1], t_1 > σ.
As a consequence of this lemma, the next characterization of Euclidean controllability in terms of the system's coefficients is available for the autonomous system

ẋ(t) = A_0 x(t) + A_1 x(t - h) + B u(t),   (6.1.5)

where A_0, A_1, B are constant matrices. First introduce the determining equations

Q_k(s) = A_0 Q_{k-1}(s) + A_1 Q_{k-1}(s - h),   k = 1, 2, 3, ...,
Q_0(s) = B for s = 0,   Q_0(s) = 0 for s ≠ 0,   s ∈ (-∞, ∞),   (6.1.6)

and define Q_n(t_1) to be the matrix whose block columns are the Q_k(s) for 0 ≤ k ≤ n, 0 ≤ s < t_1. We have:
Control of Linear Delay Systems

Theorem 6.1.1  System (6.1.5) is Euclidean controllable on [0, t_1] if and only if

rank Q_n(t_1) = n.
Remark 6.1.1: Note that the nonzero elements of Q_k(s) form the sequence:

s = 0:  Q_0(s) = B,  Q_1(s) = A_0 B,  Q_2(s) = A_0^2 B;
s = h:  Q_1(s) = A_1 B,  Q_2(s) = (A_0 A_1 + A_1 A_0) B;
s = 2h: Q_2(s) = A_1^2 B.

Note that if t_1 ≤ h, the only elements in the sequence are the terms [B, A_0 B, ..., A_0^{n-1} B], so that E^n-controllability on an interval shorter than h implies the full rank of [B, ..., A_0^{n-1} B]. If this has less than full rank and t_1 > h, other terms can be added to Q_n(t_1), and the system may still be controllable on [0, t_1]. This contrasts with the situation in Section 5.1 for systems without delay, where if the system can be steered to some point of E^n, it can be steered to that point in an arbitrarily short time.
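The recursion (6.1.6) is straightforward to implement. The sketch below collects the blocks Q_k(s) and checks the rank criterion of Theorem 6.1.1; the 2 × 2 matrices are hypothetical stand-ins, since the entries of Example 6.1.1 are not legible here:

```python
import numpy as np

def determining_matrix(A0, A1, B, n, t1, h=1.0):
    """Collect the blocks Q_k(s) of (6.1.6) for 0 <= k <= n and 0 <= s < t1."""
    Q = {(0, 0.0): B}                  # Q_0(0) = B; Q_0(s) = 0 for s != 0
    blocks = [B]
    for k in range(1, n + 1):
        for j in range(k + 1):         # Q_k can be nonzero only at s = 0, h, ..., kh
            s = j * h
            prev = Q.get((k - 1, s), np.zeros_like(B))
            prev_shift = Q.get((k - 1, s - h), np.zeros_like(B))
            Q[(k, s)] = A0 @ prev + A1 @ prev_shift
            if s < t1:
                blocks.append(Q[(k, s)])
    return np.hstack(blocks)

# Hypothetical data (assumed for illustration):
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])
A1 = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
Qn = determining_matrix(A0, A1, B, n=2, t1=2.0)
print(np.linalg.matrix_rank(Qn))   # 2 -> Euclidean controllable on [0, 2]
```

Restricting to t1 ≤ h keeps only the blocks at s = 0, recovering the ordinary Kalman test noted in the remark above.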
Example 6.1.1: Consider

ẋ(t) = A_0 x(t) + A_1 x(t - 1) + B u(t),

where A_0, A_1 are 2 × 2 constant matrices and B is a 2 × 1 column. Here

Q_1(s) ≡ 0  for  s ≠ 1 and s ≠ 0.

Since rank Q_2(2) = 2, the system is Euclidean controllable on [0, 2].
Example 6.1.2: Consider

ẋ(t) = A_0 x(t) + A_1 x(t - h) + B u(t),

with constant matrices A_0, A_1 and column B for which the blocks A_0 B and A_1 B give

rank Q(t_1) = 3,   t_1 > h.

Hence this system is E^3-controllable.
Example 6.1.3: Consider the scalar differential–difference equation

x^{(n)}(t) = Σ_{i=0}^{n-1} a_i x^{(i)}(t) + Σ_{i=0}^{n-1} b_i x^{(i)}(t - 1) + c u(t),

where the a_i, b_i, and c are constants and x, u are scalar functions. Define z_1 = x, z_2 = ẋ, ..., z_n = x^{(n-1)}. Written in matrix form,

ż(t) = A_0 z(t) + A_1 z(t - 1) + B u(t),

with A_0 in companion form, A_1 carrying the b_i in its last row, and B = (0, ..., 0, c)^T, the determining matrix has full rank. Hence the system is Euclidean-space controllable on any [0, t_1], t_1 > 0.
6.2 Linear Function Space Controllability

The true state of a solution of (6.1.1) is an element of some function space. Thus the state at time t is denoted by x_t, and this is the segment of the trajectory s → x(s), t - h ≤ s ≤ t. Very often C = C([-h,0], E^n) is used. But W_2^{(1)}([-h,0], E^n) = W_2^{(1)}, the state space of absolutely continuous functions from [-h,0] to E^n with first derivative square integrable, i.e., in L_2([-h,0], E^n), is also natural. Indeed, if φ ∈ W_2^{(1)}([-h,0], E^n) and u ∈ L_2([0,t_1], E^n), then x(φ) = x : [0,t_1] → E^n is absolutely continuous, and if ẋ(t) = L(x_t) + u(t), then ẋ ∈ L_2([0,t], E^n). Therefore x ∈ W_2^{(1)}([-h,t_1], E^n), so that x_t ∈ W_2^{(1)}([-h,0], E^n) for all t ∈ [0,t_1]. In this section we explore controllability questions in W_2^{(1)}.
Definition 6.2.1: The system (6.1.1) is controllable on the interval [σ, t_1] if for each φ, ψ ∈ W_2^{(1)} there is a controller u ∈ L_2([σ,t_1], E^m) such that x_{t_1}(σ, φ, u) = ψ and x_σ(σ, φ, u) = φ. It is null controllable if ψ ≡ 0 in the above definition. It is said to be controllable (null controllable) if it is controllable (null controllable) on every interval [σ, t_1] with t_1 > σ + h. The following fundamental result is given in Banks, Jacobs, and Langenhop [2, p. 616].
Theorem 6.2.1  In (6.1.1), in addition to the prevailing conditions of this section (for L and B), assume that
(i) t → B^+(t), t ∈ E, is essentially bounded on [t_1 - h, t_1], where B^+(t) denotes the Moore–Penrose generalized inverse of B(t) [4].
Then (6.1.1) is controllable on the interval [σ, t_1] with t_1 > σ + h if and only if
(ii) rank B(t) = n on [t_1 - h, t_1].

When t → B(t) is constant, we have the following sharp result:

Corollary 6.2.1  Suppose the n × m matrix B in (6.1.1) is constant. A necessary and sufficient condition that (6.1.1) be controllable on [σ, t_1] is that rank B = n.

The proof of Theorem 6.2.1 is available in [2], where a thorough discussion of the result is given. Manitius also discusses the result [14]. In the investigation of the controllability of (6.1.1) in the state space W_2^{(1)} with L_2 controls, it is impossible to handle controls with pointwise constraints. In that case controls are taken in the space L_∞, and the state space is either W_∞^{(1)} or C. Though several studies are available on the optimal control of variants of (6.1.1) when the state space is C and the controls are L_∞ (see Banks and Kent [3] and Angell [1]), there seem to be no general results on the controllability of such systems, a necessary requirement for the existence of an optimal control. There are other problems connected with such a treatment; for example, the attainable sets need not be closed and the multipliers cannot in general be nontrivial. See Manitius [14, pp. 120, 128]. We begin to address some of these problems by first formulating function-space controllability conditions for (6.1.1) when the state space is C and the controls are L_∞. The result is also valid for the W_∞^{(1)} state space of initial functions and the C^1([-h,0], E^n) space of terminal conditions, with L_∞ controls. We assume that the control matrix B(t) is continuous.
Theorem 6.2.2  In (6.1.1), suppose the state space is C = C([-h,0], E^n) and B is continuous. Assume:
(i) rank B(t) = n on [t_1 - h, t_1].
Then (6.1.1) is controllable on [σ, t_1], t_1 > σ + h.
Proof: Assume (i). Then rank B(t_1 - h) = n, and this implies that

H(σ, t_1 - h) = ∫_σ^{t_1 - h} X(t_1 - h, s) B(s) B*(s) X*(t_1 - h, s) ds   (6.2.1)

has rank n. Let t_1 > σ + h and φ, ψ ∈ C([-h,0], E^n). It is a consequence of Lemma 2.3 of Manitius [14] (or of Lemma 6.1.1) that there is a u ∈ L_∞([σ, t_1 - h], E^m) such that x(t_1 - h, σ, φ, u) = ψ(-h). We extend u and x = x(·, σ, φ, u) to the interval [σ, t_1] so that ψ(t - t_1) = x(t), t_1 - h ≤ t ≤ t_1, and

ψ(t - t_1) = ψ(-h) + ∫_{t_1 - h}^t L(s, x_s) ds + ∫_{t_1 - h}^t B(s) u(s) ds   (6.2.2)

on [t_1 - h, t_1]. Note that the integral form of (6.1.1) is

x(t) = φ(0) + ∫_0^t L(s, x_s) ds + ∫_0^t B(s) u(s) ds,  t ≥ 0;   x(t) = φ(t),  t ∈ [-h, 0],  φ ∈ C.   (6.2.3)

Because of (i),

rank [B(t) B*(t)] = n,   for all t ∈ [t_1 - h, t_1],

and

H(t) = ∫_{t_1 - h}^t B(s) B*(s) ds,   t > t_1 - h,

has rank n for each t ∈ (t_1 - h, t_1]. As a consequence of this rank condition, define for each t ∈ (t_1 - h, t_1] a control v(s), s ≤ t, by

v(s) = B^+(s) H^{-1}(t) [ ψ(t - t_1) - ψ(-h) - ∫_{t_1 - h}^t L(s, x_s) ds ],

where x(·) is that solution of (6.1.1), or of (6.2.3), with x_σ(φ) = φ, x(t_1 - h, φ, u) = ψ(-h). That x exists follows from the usual argument [9, p. 141] for the existence of a solution of a linear system (6.1.1). With this v in the right-hand side of (6.2.2), we have

x(t) = ψ(t - t_1),   t ∈ [t_1 - h, t_1].
It is interesting to note the absence of the condition that t → B^+(t) be essentially bounded on [t_1 - h, t_1], which is required in Banks, Jacobs, and Langenhop [2]. Indeed, if B(t) has rank n on [t_1 - h, t_1] and is continuous, then B^+(t) is continuous on [t_1 - h, t_1]; see Campbell [4, p. 225]. In this case B^+(t) is uniformly bounded on [t_1 - h, t_1].
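Numerically, the Moore–Penrose inverse used in the proof is available as numpy.linalg.pinv; when B(t) has full row rank, B(t)B^+(t) = I, which is the identity the construction of v relies on. A sketch with an assumed continuous B(t):

```python
import numpy as np

# Assumed example: a continuous 2x3 matrix function with rank 2 on [0, 1].
def Bmat(t):
    return np.array([[1.0, np.sin(t), 0.0],
                     [0.0, 1.0, np.cos(t)]])

# B(t) B^+(t) = I at every sampled t, and pinv varies continuously with t,
# illustrating the boundedness remark above.
for t in np.linspace(0.0, 1.0, 5):
    Bp = np.linalg.pinv(Bmat(t))
    assert np.allclose(Bmat(t) @ Bp, np.eye(2), atol=1e-12)
print("B(t) B^+(t) = I on the sampled interval")
```

For a constant matrix the same call verifies the hypothesis of Corollary 6.2.1 directly.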
Remark 6.2.1: Note that the control ū that effects the transfer from φ to ψ is

ū = u on [σ, t_1 - h],   ū = v on [t_1 - h, t_1],

with v : [t_1 - h, t_1] → E^m continuous, so that ū ∈ L_∞([σ, t_1], E^m). Thus we can assume that in our situation the subspace of admissible controls is

U = { u ∈ L_∞([0, t_1], E^m) : u restricted to [t_1 - h, t_1] is continuous }.
Remark 6.2.2: Because the proof is carried out in an integrated form, it is easy to consider ψ ∈ C. If, as was done in [2], a "differentiated" version of (6.1.1) is used in the definition of control, ψ will naturally be in C^1([-h,0], E^n). In both cases the full rank of B(t) is needed, as formulated by Banks, Jacobs, and Langenhop [2]. The full rank of B required by condition (ii) of Theorem 6.2.1 is very strong: for W_2^{(1)}-controllability we must have as many controls as there are state variables. There are very many practical situations in which this fails. For example, the scalar nth-order retarded equation with a single control of Example 6.1.3 is ruled out. We now study other types of controllability, which we need in order to make progress in our study of time-optimal control. Consider the system
ẋ(t) = L(t, x_t) + B(t) u(t),   (6.1.1)

where L is given in (2.2.4) with the assumptions on L(t, φ) stated there. With these assumptions we may decompose L(t, φ) as in (6.2.4).
We call the operator t → H(t, ·) the "strictly retarded" part of L(t, φ). In the system

L(t, φ) = Σ_{i=1}^N A_i(t) φ(-w_i),   0 < w_1 < w_2 < ... < w_N = h,   (6.2.5)

this is the whole of L. We call system (6.1.1) a system with strict retardations if the mapping in (6.2.4) satisfies the condition: there exists a δ, 0 < δ < h, such that

H(t, θ) = 0  for  -δ ≤ θ ≤ 0,  for all t ∈ E.

Note that all systems of the form (6.2.5) are strictly retarded. We have:

Proposition 6.2.1  Suppose system (6.1.1) has strict retardation. Then (6.1.1) is Euclidean null controllable if and only if

G(σ, t_1 - h) = ∫_σ^{t_1 - h} U(t_1 - h, s) B(s) B*(s) U*(t_1 - h, s) ds

has rank n for every choice of σ, t_1 with t_1 > σ + h. With this one proves the following [2].
Theorem 6.2.3  Suppose:
(i) (6.1.1) has strict retardation, and
(ii) t → B^+(t), t ∈ E, is essentially bounded on [t_1 - h, t_1] for each t_1 with t_1 > σ + h.
Then (6.1.1) is null controllable if and only if
(iii) the rank condition of Proposition 6.2.1 is satisfied, and
(iv) B(t) B^+(t) H(t, θ) = H(t, θ) for -h ≤ θ ≤ 0, for almost every t ∈ [t_1 - h, t_1], for every t_1 > σ + h.

When L, B are independent of time, the following sharp condition is deduced:
Corollary 6.2.2  Suppose (6.1.1) is strictly retarded. Then (6.1.1) is null controllable if and only if
(i) condition (iv) of Theorem 6.2.3 is satisfied, and
(ii) rank [B, A_0 B, ..., A_0^{n-1} B] = n.

In particular, if we consider

ẋ(t) = A_0 x(t) + Σ_{i=1}^N A_i x(t - h_i) + B u(t),   (6.2.6)

where 0 < h_1 < ... < h_N, with A_i, i = 0, ..., N, n × n matrices, then one obtains:
Corollary 6.2.3  The system (6.2.6) is null controllable if and only if

B B^+ Σ_{i=1}^N A_i = Σ_{i=1}^N A_i   and   rank [B, A_0 B, ..., A_0^{n-1} B] = n.

As a consequence, the nth-order scalar differential–difference equation in Example 6.1.3 is null controllable.
6.3 Constrained Controllability of Linear Delay Systems [5]

In the preceding sections the controls were unrestrained. In this section we consider the controllability of

ẋ(t) = L(t, x_t) + B(t) u(t),   (6.3.1)

when the controls are required to lie in a bounded convex set U with a nonempty interior. For ease of treatment, U will be assumed to be the unit cube

C^m = { u ∈ E^m : |u_j| ≤ 1,  j = 1, ..., m }.   (6.3.2)

Here u_j denotes the jth component of u ∈ E^m. Consistent with our earlier treatment, the class of admissible controls is defined by

U_ad = { u ∈ L_∞([0, t_1], E^m) : u(t) ∈ C^m a.e. on [0, t_1] }.

It is easy to see that U_ad has a nonempty interior relative to L_∞, and that 0 ∈ U_ad. The state space is either W_2^{(1)} or C. The conditions on L and B of Sections 6.1 and 6.2 are assumed to prevail. We need some definitions.
Definition 6.3.1: The system (6.3.1) is null controllable with constraints if for each φ ∈ C there is a t_1 < ∞ and a control u ∈ U_ad such that the solution x(·) of (6.3.1) satisfies

x_σ(σ, φ, u) = φ,   x_{t_1}(σ, φ, u) = 0.

It is locally null controllable with constraints if there exists an open ball O about the origin in C with the following property: for each φ ∈ O, there exist a t_1 < ∞ and a u ∈ U_ad such that the solution x(·) of (6.3.1) satisfies

x_σ(σ, φ, u) = φ,   x_{t_1}(σ, φ, u) = 0.

We now study the null controllability with constraints of (6.3.1). Two preliminary propositions are needed.
Proposition 6.3.1 [5]  Suppose (6.3.1) is null controllable on the interval [σ, t_1]. Then there exists a bounded linear operator H : C → L_∞([σ, t_1], E^m) such that for each φ ∈ C the control u = Hφ has the property that the solution x(σ, φ, Hφ) of (6.3.1) satisfies

x_{t_1}(σ, φ, Hφ) = 0.

With this H, one proves the following:

Proposition 6.3.2 [5]  Assume that (6.3.1) is null controllable. Then it is locally null controllable with constraints.

Proof: Since (6.3.1) is null controllable, we have by Proposition 6.3.1 that there exists a bounded linear operator H : C → L_∞([σ, t_1], E^m) such that for each φ ∈ C the control u = Hφ yields a solution x(σ, φ, Hφ) of (6.3.1) with x_{t_1}(σ, φ, Hφ) = 0. Because H is continuous, it is continuous at zero in C. Hence for each neighborhood V of zero in L_∞([σ, t_1], E^m) there is a neighborhood O of 0 in C such that H(O) ⊂ V. In particular, choose V to be any open set in L_∞([σ, t_1], E^m) containing zero that is contained in U_ad. This choice is possible since U_ad has zero in its interior. For this particular choice, we see that there exists an open set O around the origin in C such that H(O) ⊂ V ⊂ L_∞([σ, t_1], E^m). Every φ ∈ O can be steered to zero by the control u = Hφ. Hence (6.3.1) is locally null controllable with constraints.
Theorem 6.3.1  Assume that:
(i) The system (6.3.1) is null controllable.
(ii) The system

ẋ(t) = L(t, x_t)   (6.3.3)

is uniformly asymptotically stable, so that there are constants k > 0, α > 0 such that for each σ ∈ E the solution x of (6.3.3) satisfies

||x_t(σ, φ, 0)|| ≤ k e^{-α(t - σ)} ||φ||,   t ≥ σ.

Then (6.3.1) is null controllable with constraints.
Proof: Condition (i) and Proposition 6.3.2 guarantee the existence of an open ball O ⊂ C such that every φ ∈ O can be steered to the zero function with controls in U_ad in time t_1 < ∞. Condition (ii) assures us that every solution of (6.3.3), i.e., every solution of (6.3.1) with u = 0, satisfies

x_t(σ, φ, 0) → 0   as   t → ∞.

Thus, using u = 0 ∈ U_ad, the system rolls on as Equation (6.3.3), and there is a finite t_0 such that ψ = x_{t_0}(σ, φ, 0) ∈ O. With this initial data (t_0, ψ) there exists a t_1 > t_0 such that for some u ∈ U_ad, x_{t_0}(t_0, ψ, u) = ψ and x_{t_1}(t_0, ψ, u) = 0. Thus with the control

v = 0 on [σ, t_0],   v = u on [t_0, t_1],

which is contained in U_ad, φ is indeed transferred to 0 in a time t_1 < ∞. This concludes the proof.
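The two-phase strategy of the proof (coast with u = 0 until the segment enters O, then steer) can be sketched numerically. The scalar delay equation below is an assumed example whose free motion is uniformly asymptotically stable:

```python
# Assumed example:  xdot(t) = -x(t) - 0.25 x(t-1) + u(t),  |u| <= 1.
# Phase 1 coasts with u = 0 on (6.3.3) until the state segment x_t enters a
# small ball O; a constrained steering control (phase 2) then finishes the
# transfer to the zero function.  Only phase 1 is simulated here (Euler).
h, dt = 1.0, 0.001
lag = int(h / dt)
hist = [1.0] * (lag + 1)          # initial function phi = 1 on [-1, 0]
t, radius = 0.0, 0.05             # radius of the ball O (illustrative)
while max(abs(v) for v in hist[-lag - 1:]) > radius:
    x, x_del = hist[-1], hist[-1 - lag]
    hist.append(x + dt * (-x - 0.25 * x_del))   # u = 0: free motion
    t += dt
print(f"phase 1 done: segment norm <= {radius} at t = {t:.2f}")
```

Uniform asymptotic stability guarantees the loop terminates for every initial function, which is exactly how condition (ii) is used in the proof.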
As a consequence we have the following sharp result:

Corollary 6.3.1  Consider

ẋ(t) = L(x_t) + B u(t),   (6.3.4)

where, for an n × n matrix function ξ(θ), -h ≤ θ ≤ 0, L is given by

L(φ) = ∫_{-h}^0 [dξ(θ)] φ(θ),   φ ∈ C.

Let Δ(λ) = λI - ∫_{-h}^0 e^{λθ} dξ(θ).
(i) Let the roots of the characteristic equation det Δ(λ) = 0 have negative real parts.
(ii) Suppose

ẋ(t) = L(x_t) + B u(t)   (6.3.5)

is null controllable.
Then (6.3.5) is null controllable with constraints.

Corollary 6.3.2  Consider the system (6.3.6).
Assume that:
(i) There exists a positive definite symmetric matrix H such that the matrix G is negative semidefinite, where

G = H A_0 + A_0^T H + 2 ρ n Σ_{k=1}^N M_k H,

and where ρ ≥ 1 is a constant and M_k = max_{i,j} |A_{kij}|.
(ii) rank [Δ(λ), G] = n for each complex λ, where

Δ(λ) = λI - A_0 - Σ_{k=1}^N A_k e^{-λ h_k}.

(iii) The rank of the block matrix

[ A_0 - λI  A_1  ...  A_N ]
[ A_1  ...  A_N  0        ]
[ ...                     ]
[ A_N  0  ...  0          ]

equals n plus the rank of the same matrix with its first block row deleted; or, in place of (ii) and (iii), G is negative definite.
(iv) B B^+ Σ_{i=1}^N A_i = Σ_{i=1}^N A_i.
(v) rank [B, A_0 B, ..., A_0^{n-1} B] = n.
Then (6.3.6) is null controllable with constraints.

Proof: For the system (6.3.6), conditions (i), (ii), and (iii) are the requirements for uniform asymptotic stability of Corollary 3.2.3 or of Chukwu [6]. Hypotheses (iv) and (v) are needed for null controllability.
Example 6.3.1: Consider

x^{(n)}(t) = Σ_{j=0}^{n-1} a_j x^{(j)}(t) + Σ_{i=0}^{n-1} b_i x^{(i)}(t - h) + u(t),

where the a_i, b_i are constants. If the homogeneous system is uniformly asymptotically stable, then this system is null controllable with constraints. The following result is implied by the main contribution in Chukwu [5, Corollary 4.1].

Theorem 6.3.2  Consider the system
ẋ(t) = L(t, x_t) + B(t) u(t),   (6.1.1)

where L is given in (2.2.4) with the assumptions on L(t, φ) stated there and with B continuous. Assume that the controls u are L_2 (respectively, L_∞) functions with values in a compact convex subset P of E^m with 0 ∈ P. Assume that the trivial solution of (6.3.3), ẋ(t) = L(t, x_t), is uniformly stable, and that (6.1.1) is function-space W_2^{(1)}-controllable (respectively, Euclidean E^n-controllable) on a finite interval [σ, t_1] with L_2 (respectively, L_∞) controls. Then (6.1.1) is function-space (respectively, Euclidean) controllable with constraints. As a consequence, uniform stability of (6.3.3) and controllability of (6.1.1) on some interval [σ, t_1] suffice for global null controllability of (6.1.1), where the controls are measurable with values in P.
REFERENCES
1. T. S. Angell, "Existence Theorems for Optimal Control Problems Involving Functional Differential Equations," J. Optimization Theory Appl. 7 (1971) 149–169.
2. H. T. Banks, M. Q. Jacobs, and C. E. Langenhop, "Characterization of the Controlled States in W_2^{(1)} of Linear Hereditary Systems," SIAM J. Control 13 (1975) 611–649.
3. H. T. Banks and G. A. Kent, "Control of Functional Differential Equations of Retarded and Neutral Type to Target Sets in Function Space," SIAM J. Control 10 (1972) 567–594.
4. S. L. Campbell and C. D. Meyer, Generalized Inverses of Linear Transformations, Pitman, London, 1979.
5. E. N. Chukwu, "Controllability of Delay Systems with Restrained Controls," J. Optimization Theory Appl. 29 (1979) 301–320.
6. E. N. Chukwu, "Function Space Null Controllability of Linear Delay Systems with Limited Power," J. Math. Anal. Appl. 124 (1987) 193–304.
7. E. N. Chukwu, "Global Behavior of Linear Retarded Functional Differential Equations," J. Math. Anal. Appl. 162 (1991) 277–298.
8. R. Gabasov and F. Kirillova, The Qualitative Theory of Optimal Processes, Marcel Dekker, New York, 1976.
9. J. Hale, Theory of Functional Differential Equations, Springer-Verlag, New York, 1977.
10. H. Hermes and J. P. LaSalle, Functional Analysis and Time Optimal Control, Academic Press, New York, 1969.
11. F. M. Kirillova and S. V. Churakova, "On the Problem of Controllability of Linear Systems with Aftereffect," Differentsial'nye Uravneniya 3 (1967) 436–445.
12. D. G. Luenberger, Optimization by Vector Space Methods, John Wiley, New York, 1969.
13. A. Manitius and A. W. Olbrot, "Controllability Conditions for Linear Hereditary Systems," SIAM J. Control 13 (1975) 611–649.
14. A. Manitius, "Optimal Control of Hereditary Systems," in Control Theory and Topics in Functional Analysis III, International Atomic Energy Agency, Vienna, 1976.
15. L. Weiss, "An Algebraic Criterion for Controllability of Linear Systems with Time Delay," IEEE Trans. Autom. Control AC-15 (1970) 443.
Chapter 7
Synthesis of Time-Optimal and Minimum-Effort Control of Linear Delay Systems

7.1 Linear Systems in Euclidean Space

Our aim here is to consider the time-optimal control of the linear system

ẋ(t) = A_0(t) x(t) + Σ_{i=1}^N A_i(t) x(t - h_i) + B(t) u(t),   (7.1.1)

where 0 < h_1 < h_2 < ... < h_N = h, the A_i, i = 0, ..., N, are n × n analytic matrix functions, and B is an n × m real analytic matrix function. The controls are locally measurable L_∞ functions that are constrained to lie in the unit cube

C^m = { u ∈ E^m : |u_j| ≤ 1,  j = 1, ..., m }.   (7.1.2)

Thus the class of admissible controls is defined by

U_ad = { u ∈ L_∞([σ, t_1], E^m) : u(t) ∈ C^m a.e. on [σ, t_1] }.
In E^n we consider the following problem: Minimize

J(t, u(t)) = t,   (7.1.3)

subject to

x_σ = φ,   (x(t, σ, φ, u), t) ∈ {0} × [0, ∞).   (7.1.4)

Here x(·, σ, φ, u) is a solution of (7.1.1) with x_σ = φ. This is the time-optimal problem. It is old [17] and very important [13,16,18], and it continues to be interesting even for linear ordinary differential systems [19], [20]. In economic applications, for example, it is desirable to force the value of capital to reach some target in minimum time while maximizing some welfare function: we may want the economy to grow as fast as possible. In Example 1.4, one would like to control the epidemic of AIDS. It is known [7, p. 183] that associated with the delay equations
that model AIDS as a progressive disease is a reproductive number R. The system has a unique positive endemic state if and only if R > 1. If R < 1, the infection-free state is globally asymptotically stable, while the endemic state is locally asymptotically stable whenever R > 1. Since R is the number of secondary infections produced by an infectious individual in a purely susceptible population, to control the spread of AIDS as rapidly as possible, R should be reduced to less than 1 in minimum time. This is a time-optimal problem. Theoretically this can be done by introducing control strategies that decrease λC, the transmission rate per unit time per infected partner, as fast as possible. Conceivably the control

u(t) = b(t)λC(T)S(t)/T(t)

is introduced, so that the system becomes a controlled version of the epidemic model. The aim is to drive this control system to the infection-free state in minimum time. With this as a motivation, we now study (7.1.1), a linear approximation, by constructing an optimal feedback control in a way similar to the development in [10]. Recent advances in the theory of functional differential equations in Section 7.3 [10], [11], and [14], and in the constrained controllability questions [9], [15], have made it possible to attempt to solve the time-optimal control problem of linear systems (7.1.1) within the classical theory. The development of the theory was long abandoned because of lack of progress in the directions which have now taken place. We shall illustrate our theory with several examples of (7.1.1) from technology, and one from ecology. If x(σ, φ, 0) is a solution of
ẋ(t) = A₀(t)x(t) + Σ_{i=1}^{N} A_i(t)x(t − h_i),   x_σ = φ ∈ C,   (7.1.5)
in E^n, the solution of (7.1.1) is given by

x(t, σ, φ, u) = x(t, σ, φ, 0) + ∫_σ^t U(t, s)B(s)u(s) ds,   (7.1.6)

where U(t, s) is the fundamental solution of (7.1.5).
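The fundamental solution appearing in (7.1.6) can be probed numerically by the method of steps: integrate forward, feeding back the stored solution one delay earlier. The sketch below is an illustration only (the scalar example, step size, and comparison value are choices made here, not taken from the text): it computes U(t) for ẋ(t) = a x(t) + b x(t − h), with U(0) = 1 and U(s) = 0 for s < 0, by forward Euler.

```python
import math

def fundamental_scalar(a, b, h, t, dt=1e-5):
    """U(t) for x'(t) = a x(t) + b x(t - h), U(0) = 1, U(s) = 0 for s < 0,
    by forward Euler combined with the method of steps."""
    m = int(round(h / dt))          # grid points per delay interval
    n = int(round(t / dt))
    vals = [1.0]                    # vals[k] approximates U(k * dt)
    for k in range(n):
        delayed = vals[k - m] if k >= m else 0.0   # U(t - h); zero before 0
        vals.append(vals[k] + dt * (a * vals[k] + b * delayed))
    return vals[-1]

# For a = -1, b = 1, h = 1 the method of steps gives, in closed form,
# U(t) = e^{-t} on [0, 1] and U(t) = e^{-t}(1 + e (t - 1)) on [1, 2].
approx = fundamental_scalar(-1.0, 1.0, 1.0, 1.5)
exact = math.exp(-1.5) * (1 + math.e * 0.5)
print(abs(approx - exact) < 1e-3)
```

The same stepping scheme extends componentwise to systems, at the cost of storing vector histories.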
Definition 7.1.1: The Euclidean reachable set of (7.1.1) at time t is defined by

R(t, σ) = { ∫_σ^t U(t, s)B(s)u(s) ds : u ∈ U_ad }.

This is the set of all points in E^n that can be attained from the initial point φ = 0 using all admissible controls. The following properties of R(t, σ) are easily proved:
Proposition 7.1.1 R(t, σ) is convex and compact, and 0 ∈ R(t, σ), ∀ t ≥ σ.

Consider the subset of U_ad defined by

U°_ad = {u ∈ E^m : u measurable, |u_j| = 1, j = 1, 2, ..., m}.

The following bang-bang principle is valid in E^n:

Theorem 7.1.1 Let

R°(t, σ) = { ∫_σ^t U(t, s)B(s)u(s) ds : u ∈ U°_ad }

and let R(t, σ) be as in Definition 7.1.1. Then R°(t, σ) = R(t, σ), ∀ t ≥ σ.
Proof: Because U(t, s) ∈ L_∞([σ, t], E^{n×n}) and B(t) is analytic, we have

U(t, s)B(s) ∈ L_∞([σ, t], E^{n×m}).

It now follows from Corollary 8.2 of Hermes and LaSalle [4] that

R(t, σ) = R°(t, σ),   ∀ t ≥ σ.

Remark 7.1.1: Theorem 7.1.1 states that the bang-bang principle of LaSalle is valid for (7.1.1); that is, if among all bang-bang steering functions there is an
optimal one relative to U°_ad, then it is optimal relative to U_ad. Also, if there is an optimal steering function, there is always a bang-bang steering function that is optimal.
Definition 7.1.2: Let Γ^n denote the metric space of all nonempty compact subsets of E^n with the metric ρ defined as follows: the distance of a point x from a set O is

d(x, O) = inf{|x − a| : a ∈ O},   N(O, ε) = {x ∈ E^n : d(x, O) ≤ ε},

ρ(O₁, O₂) = inf{ε : O₁ ⊂ N(O₂, ε) and O₂ ⊂ N(O₁, ε)}.
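For finite subsets of E^n the quantities in Definition 7.1.2 are directly computable. The sketch below is illustrative only (not from the text): it implements d(x, O) and ρ(O₁, O₂) for finite point sets in the plane.

```python
import math

def dist_point_set(x, O):
    """d(x, O) = inf{|x - a| : a in O} for a finite set O of points."""
    return min(math.dist(x, a) for a in O)

def hausdorff(O1, O2):
    """rho(O1, O2): smallest eps with O1 in N(O2, eps) and O2 in N(O1, eps)."""
    return max(max(dist_point_set(x, O2) for x in O1),
               max(dist_point_set(x, O1) for x in O2))

print(hausdorff([(0.0, 0.0)], [(3.0, 4.0)]))  # 5.0
```

Continuity of t → R(t, σ) in Proposition 7.1.2 below is exactly continuity with respect to this ρ.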
Proposition 7.1.2 The map R(·, σ) : [σ, ∞) → Γ^n is continuous.

Because t → U(t, s), t ≥ s, is continuous, the usual ideas such as are contained in Lee and Markus [5, p. 69] can be adapted to prove this using the variation of parameters. Associated with R(t, σ) is the Euclidean attainable set

A(t, σ) = {x(t, σ, φ, u) : u ∈ U_ad, x is a solution of (7.1.1)}.

The same type of proof as that of Proposition 7.1.2 yields that t → A(t, σ) = x(t, σ, φ, 0) + R(t, σ) is continuous in the Hausdorff metric, and that A(t, σ) is compact and convex. The following properties of R(t, σ) as a subset of E^n are available:

Lemma 7.1.1 R(t, σ) is compact and convex and satisfies the monotonicity relations:
(i) 0 ∈ R(t, σ) for each t ≥ σ.
(ii) W(t, s)R(s, σ) ⊂ R(t, σ), σ ≤ s ≤ t.
(iii) The mapping t → R(t, σ) is continuous with respect to the Hausdorff metric.
Remark 7.1.2: Relations (i) and (ii) are proved as in [21]. To motivate the discussion of the form of optimal controls, we note that if an admissible control u drives φ to zero in time t₁, then

−T(t₁, σ)φ(0) = ∫_σ^{t₁} U(t₁, s)B(s)u(s) ds ∈ R(t₁, σ).
Note that U(t₁, σ) = T(t₁, σ)I, where I is the identity (see [8, p. 146]). For example, for the system

ẋ(t) = A₀x(t) + A₁x(t − h),

with solutions

x(σ, φ, 0)(t) = U(t)φ(0) + A₁ ∫_{−h}^{0} U(t − s − h)φ(s) ds,

we may designate W(t) : C → E^n by

W(t)φ = x(σ, φ, 0)(t).

If x_σ = φ, then W(t) = W(t, σ). Also recall that T(t, σ)φ ≡ x_t(σ, φ, 0).
The coincidence

W(t₁)φ ∈ ∂R(t₁, σ),

where ∂R is the boundary of R, is very important in characterizing the form of optimal controls. We need the following definitions.

Definition 7.1.3: The control u ∈ U_ad is an extremal control if the associated solution x(σ, φ, u) corresponding to (σ, φ, u) lies on the boundary of A(t, σ), that is,

x(t, σ, φ, u) ∈ ∂A(t, σ).

Definition 7.1.4: System (7.1.5) is pointwise complete at time t₁ if and only if the solution operator defined by W(t, σ)φ = x(σ, φ, 0)(t) is such that W(t₁, σ) : C → E^n is a surjection: W(t₁, σ)C = E^n.

Conditions for pointwise completeness are found in [6]. These conditions are stated in Lemma 7.1.2.

Lemma 7.1.2
For System (2.1.12), namely

ẋ(t) = A₀x(t) + A₁x(t − h),

a necessary and sufficient condition for pointwise completeness for t = t₁ is that, for every nonzero η ∈ E^n, there exists a t ≥ t₁ such that ηᵀU(t) ≠ 0, where U is the fundamental matrix solution of (2.1.12). If we define F_k = Z_k(0)A₁, where Z_k is identified in (2.1.16) – (2.1.21), then (2.1.12) is pointwise complete for t₁ ∈ [k, k + 1), k = 0, 1, 2, ..., whenever the matrix

M(t₁) = [E_{k−1}F_{k−1}, ..., E_{k−1}A₀^{n−1}F_{k−1}]

has rank n.
Example 7.1.1: For a particular pair of 3 × 3 matrices A₀ and A₁ (with the associated fundamental matrix U(2) and a singular block E₂Z₂(0) computed explicitly), the matrix

M(2) =
[ 2  −2  0  2  −4  0  0  −4  0 ]
[ 1  −2  0  0  −2  0  0   0  0 ]
[ 0   2  0  2   0  0  0  −4  0 ]

has rank less than 3, because vᵀ = [1, −2, −1] satisfies vᵀM(2) = 0. The example is therefore not pointwise complete for t₁ ≥ 2. The following is also true.

Lemma 7.1.3 The system is pointwise complete for all t₁ ∈ [0, ∞) whenever A₀A₁ = A₁A₀. The system of Example 7.1.1 is pointwise complete for all t₁ ∈ [0, 2).

Theorem 7.1.2 The following are equivalent:
(i) The system (7.1.1) is Euclidean controllable on [σ, t], t > σ + h;
(ii) 0 ∈ Int R(t, σ), t > σ + h;
(iii) W(t, s)R(s, σ) ⊂ Int R(t, σ), σ < s < t.
Proof: Suppose (i) is valid. Then the mapping

H : L_∞([σ, t], E^m) → E^n

given by Hu = x(t, σ, φ, u) is surjective. Clearly H(U_ad) = R(t, σ). Also H(Θ) ⊂ R(t, σ), where Θ is an open ball containing 0 such that 0 ∈ Θ ⊂ U_ad. Because H is a continuous linear transformation of L_∞ onto E^n, it is an open map. Hence H(Θ) is open and contains 0. Thus 0 ∈ Int R(t, σ). Conversely, assume that 0 ∈ Int R(t, σ). Then 0 ∈ Int R(t, σ) ⊂ H(L_∞([σ, t], E^m)). Because
the last set in the inclusion is a subspace containing a full neighborhood of zero, we have H(L_∞([σ, t], E^m)) = E^n. Thus (7.1.1) is controllable.

Assume that 0 ∈ Int R(t, σ) and W(t, s)q ∈ ∂R(t, σ), where q ∈ R(s, σ); since by Lemma 7.1.1, W(t, s)q ∈ R(t, σ) and W(t, s) = T(t, s)I, we aim at a contradiction. Because the point lies in the boundary, there is a nonzero c ∈ E^n such that

(c, r) ≤ (c, W(t, s)q)   for all r ∈ R(t, σ),

where (·, ·) is the inner product in E^n. We now observe that, because of the semigroup property of the solution operator and because 0 ∈ Int R(t, σ), an admissible control u' can be defined whose trajectory yields a point r ∈ R(t, σ) with (c, r) > (c, W(t, s)q), contradicting the assumption that W(t, s)q is in the boundary of R(t, σ). Hence W(t, s)R(s, σ) ⊂ Int R(t, σ) for σ < s < t. This completes the proof of Theorem 7.1.2.

Proposition 7.1.3 Suppose (7.1.5) is pointwise complete and w is a time-optimal control for (7.1.1). Then w is extremal.
Proof: We first show that if w is a time-optimal control, then at the first instant t₁ of arrival at 0 ∈ E^n the solution lies on ∂A(t₁, σ), the boundary of the attainable set. We shall then show that if x(t, σ, φ, w) lies in the interior of A(t, σ) at any fixed t, then it remains in the interior thereafter.

Suppose that w is the optimal control that steers φ to 0 ∈ E^n in minimum time t₁, i.e., x(t₁) = x(t₁, σ, φ, w) = 0, and assume that x(t₁) = 0 is not in the boundary ∂A(t₁, σ). Then there is a ball O(0, p) of radius p about 0 such that O(0, p) ⊂ A(t₁, σ). Since A(t, σ) is continuous in t, we can preserve this inclusion at t near t₁ by reducing the size of O(0, p): that is,
there exists a δ > 0 such that O(0, p/2) ⊂ A(t, σ), t₁ − δ ≤ t ≤ t₁. Thus we can reach 0 at time t₁ − δ. This contradicts the optimality of t₁. The conclusion is valid that 0 = x(t₁, σ, φ, w) ∈ ∂A(t₁, σ).

We now prove the second part by assuming that x* = x(t*, σ, φ, w) ∈ Int A(t*, σ) for some σ < t* < t₁. We claim that x(t, σ, φ, w) ∈ Int A(t, σ) for t > t*.
Since x* ∈ Int A(t*, σ), there is a ball O(x*, δ) ⊂ A(t*, σ) such that each c ∈ O(x*, δ) can be reached from φ at time t* using some control u₀. Now introduce the new system

ẋ(t) = Σ_{i=0}^{N} A_i(t)x(t − h_i) + B(t)v(t),   t ≥ t*,   (7.1.7)

x_{t*} = φ₀ with x(t*) = c = φ₀(0), t* < t, with v fixed. We observe that for the coincidence φ₀ = x_{t*}, we have from uniqueness of solutions that x(t, t*, φ₀, w) = x(t, σ, φ, w), t ≥ t*. This solution is given in E^n by

x(t, t*, φ₀, v) = W(t, t*)φ₀(0) + ∫_{t*}^{t} U(t, s)B(s)v(s) ds,

that is,

x(t, t*, φ₀, v) = W(t, t*)φ₀(0) + d(t),

where d(t) = ∫_{t*}^{t} U(t, s)B(s)v(s) ds. Since (7.1.5) is pointwise complete, so that W(t, σ)C = E^n, W(t, σ) is an open map. Hence φ₀ = c → x(t, σ, φ₀, v) is an open map and takes open sets into open sets. As a consequence the image of the open ball O lies in the interior of A(t, σ). Thus the image of x*, which is just x(t, σ, φ, w), lies in Int A(t, σ).
Theorem 7.1.3 Suppose (7.1.5) is pointwise complete and (7.1.1) is Euclidean controllable. Let u* be an optimal control and t* the minimum time. Then

w ≡ −T(t*, σ)φ(0) ∈ ∂R(t*, σ),

the boundary of the reachable set R(t*, σ) in E^n, if and only if

u*(s) = sgn[y(s)B(s)],   y(s) ≢ 0, s ∈ [σ, t*],

where y : [σ, t*] → E^{n*} is an n-row-vector function of bounded variation that satisfies the adjoint equation

ẏ(s) = −y(s)A₀(s) − Σ_{i=1}^{N} y(s + h_i)A_i(s + h_i),   s ≤ t* − h.   (7.1.8)
Proof: For the system

ẋ(t) = A₀(t)x(t) + Σ_{i=1}^{N} A_i(t)x(t − h_i) + B(t)u(t),
x(t) = φ(t), t ∈ [σ − h, σ),   x(σ) = φ(0) = c,   t ≥ σ,

by definition the E^n-reachable set is

R(t, σ) = { ∫_σ^t U(t, s)B(s)u(s) ds : u ∈ U_ad }.

If (7.1.1) is null controllable with constraints, so that an optimal control u* exists, then x(t*, σ, φ, u*) = 0, and

w ≡ −T(t*, σ)φ(0) = ∫_σ^{t*} U(t*, s)B(s)u*(s) ds.

Consequently w ∈ R(t*, σ). Because u* is optimal and (7.1.5) is complete, the solution x(σ, φ, u*) is extremal. Therefore −T(t*, σ)φ(0) = w ∈ ∂R(t*, σ). But by Proposition 7.1.1, R(t*, σ) is compact and convex, and because (7.1.1) is controllable, Theorem 7.1.2 yields that 0 ∈ Int R(t*, σ). It follows from the separation theorem for closed convex sets [1, p. 418] that there exists a nontrivial d ∈ E^n, a row vector, such that

(d, r) ≤ (d, w)   for all r ∈ R(t*, σ),

where (·, ·) defines the scalar product in E^n. Thus

( d, ∫_σ^{t*} U(t*, s)B(s)u(s) ds ) ≤ ( d, ∫_σ^{t*} U(t*, s)B(s)u*(s) ds ),
so that

∫_σ^{t*} dᵀU(t*, s)B(s)u(s) ds ≤ ∫_σ^{t*} dᵀU(t*, s)B(s)u*(s) ds.   (7.1.9)
Recall the definition of Y(s, t) as the matrix solution of the adjoint equation (7.1.8), as treated in Hale [8, pp. 147–153], and the coincidence Y(s, t) = U(t, s) a.e. Then clearly (7.1.9) yields the inequality

∫_σ^{t*} dᵀY(s, t*)B(s)u(s) ds ≤ ∫_σ^{t*} dᵀY(s, t*)B(s)u*(s) ds.
So that

∫_σ^{t*} y(s, t*)B(s)u(s) ds ≤ ∫_σ^{t*} y(s, t*)B(s)u*(s) ds,

where y(·, t*) : E → E^{n*} vanishes on [t*, ∞), satisfies (7.1.8) on (−∞, t* − h], and is such that y(t*, t*) = d ≠ 0. As a consequence,

u*(s) = sgn[y(s, t*)B(s)],   y(s, t*) ≢ 0.

The argument can be reversed.
Remark 7.1.3: Note that y(t*, t*) = d ≠ 0, so that y(s, t*) ≢ 0.
Remark 7.1.4: The form of optimal control in Theorem 7.1.3 asserts that each component u*_j of u* is given by

u*_j(s) = sgn[y(s, t*, d)b_j(s)]

on [σ, t*], j = 1, ..., m, where b_j(s) is the jth column of B(s). Provided we assume completeness, it uniquely defines an optimal control if (7.1.1) is normal in the following sense:

Definition 7.1.5: Define g_j(d) = {s : y(s, t*, d)b_j(s) = 0, s ∈ [σ, t*]}. System (7.1.1) is normal on [σ, t*] if g_j(d) has measure zero for each j = 1, ..., m, where y(t*, t*, d) = d ≠ 0. If (7.1.1) is normal on each [σ, t*], we say (7.1.1) is normal. It follows from the above definition that (7.1.1) is normal on [σ, T], T > σ, if, for each j = 1, ..., m, dᵀU(T, s)b_j = 0 almost everywhere implies d = 0. Here b_j is the jth column of B. Recalling the definition of the determining equations for constant-coefficient delay systems,
Q₀(s) = I (the identity) for s = 0,   Q₀(s) = 0 for s ≠ 0,

and defining

Q_{k+1}(s) = A₀Q_k(s) + Σ_{i=1}^{N} A_i Q_k(s − h_i),   k = 0, 1, 2, ...,

one proves, in a way similar to [3, p. 79], that the following proposition is valid.
Theorem 7.1.4 A necessary and sufficient condition that a complete system (7.1.10) is normal on [0, T] is that for each j = 1, ..., m, the matrix

Q_{jn}(T) = {Q₀(s)b_j, ..., Q_{n−1}(s)b_j : s ∈ [0, T]}

has rank n.
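The rank condition of Theorem 7.1.4 lends itself to a mechanical check. The sketch below is an illustration under stated assumptions: it takes the single-delay determining-equation recursion to be Q_{k+1}(s) = A₀Q_k(s) + A₁Q_k(s − h), with Q₀(0) = I and Q₀(s) = 0 otherwise, restricts s to multiples of h, and uses numerical matrices chosen here purely for illustration.

```python
import numpy as np

def determining_matrices(A0, A1, h, n, T):
    """Q_k(s) for k = 0..n-1 and s in {0, h, 2h, ...} within [0, T], via
    Q_0(0) = I, Q_0(s) = 0 otherwise, Q_{k+1}(s) = A0 Q_k(s) + A1 Q_k(s-h)."""
    dim = A0.shape[0]
    svals = [k * h for k in range(int(T / h) + 1)]
    Q = {(0, 0.0): np.eye(dim)}
    for k in range(n - 1):
        for s in svals:
            Qs = Q.get((k, s), np.zeros((dim, dim)))
            Qsh = Q.get((k, s - h), np.zeros((dim, dim)))
            Q[(k + 1, s)] = A0 @ Qs + A1 @ Qsh
    return Q, svals

def is_normal(A0, A1, B, h, n, T):
    """Rank test in the spirit of Theorem 7.1.4: for each column b_j of B,
    stack Q_k(s) b_j and require rank n."""
    Q, svals = determining_matrices(A0, A1, h, n, T)
    dim = A0.shape[0]
    for j in range(B.shape[1]):
        cols = [Q.get((k, s), np.zeros((dim, dim))) @ B[:, j]
                for k in range(n) for s in svals]
        if np.linalg.matrix_rank(np.column_stack(cols)) < n:
            return False
    return True

# Illustrative data (hypothetical, not from the text).
A0 = np.array([[-2.0, 0.0], [0.0, -1.0]])
A1 = np.array([[0.0, -1.0], [1.0, 0.0]])
B = np.array([[1.0, 1.0], [1.0, 2.0]])
print(is_normal(A0, A1, B, h=1.0, n=2, T=2.0))  # True
```

A failing case is obtained, e.g., with A₁ = 0 and a single control column aligned with one coordinate axis of a diagonal A₀.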
7.2 Geometric Theory and Continuity of the Minimal Time Function
We note that if (7.1.1) is null controllable with constraints, then x(σ, φ, u)(t) = 0 for some t ≥ σ and some u ∈ U_ad, that is,

−T(t, σ)φ(0) = ∫_σ^t U(t, s)B(s)u(s) ds.

Consequently

−T(t, σ)φ(0) ∈ R(t, σ)

if and only if φ can be steered to zero at time t by some admissible control. If the system is Euclidean null controllable with constraints, then

A(σ) = {φ : x(σ, φ, u)(t) = 0 for some u ∈ U_ad and some t}
     = C = {φ : T(t, σ)φ(0) = x(t, σ, φ, 0) ∈ R(t, σ) for some t},

where x(t, σ, φ, 0) solves (7.1.5).
Definition 7.2.1: The minimal time function is the function M : A(σ) → E defined by

M(φ) = inf{t ≥ σ : −T(t, σ)φ(0) ∈ R(t, σ)}.

Thus σ ≤ M(φ) ≤ +∞, and −T(M(φ), σ)φ(0) ∈ R(M(φ), σ). We now show that M is continuous on A(σ): let φ_k → φ in A(σ).
Also, since (7.1.1) is Euclidean controllable and Lemma 7.1.1(ii) is valid, for any t > M(φ) there is some s, t < s, such that

−T(s, σ)φ(0) ∈ Int R(s, σ).

Therefore, for sufficiently large k,

−T(s, σ)φ_k(0) ∈ R(s, σ).

Because T(s, t)[−T(t, σ)φ_k(0)] = −T(s, σ)φ_k(0), we have M(φ_k) ≤ s. Therefore for all s > M(φ), lim sup M(φ_k) ≤ s. If we let s → M(φ)+, then

lim sup M(φ_k) ≤ M(φ).

This inequality is still valid if M(φ) = +∞, proving upper semicontinuity. For the proof of lower semicontinuity, suppose

M(φ) > lim inf M(φ_k).

Since lim inf M(φ_k) cannot be +∞, there is a subsequence such that M(φ_k) converges to a limit below M(φ). Therefore, for k large and for some s < M(φ),

M(φ_k) < s.

From the definition of M(φ_k), M(φ_k) ≤ t whenever −T(t, σ)φ_k(0) ∈ R(t, σ). There is some φ_{k_i} such that

−T(s, σ)φ_{k_i}(0) ∈ R(s, σ).

Since R(s, σ) is closed,

−T(s, σ)φ(0) ∈ R(s, σ), so that M(φ) ≤ s < M(φ).

This contradiction proves that M is lower semicontinuous. We conclude that M is continuous. Note that M is always lower semicontinuous.
Corollary 7.2.1 Let (7.1.1) be controllable and (7.1.5) be pointwise complete. For t ≥ σ we have the following relations:

⋂_{p=1}^{∞} R(t + 1/p, σ) = R(t, σ),   and   {−T(t, σ)φ(0) : M(φ) < t} ⊂ Int R(t, σ).

Proof: We first observe that R(t, σ) ⊂ ⋂_{p=1}^{∞} R(t + 1/p, σ). We now show that

⋂_{p=1}^{∞} R(t + 1/p, σ) ⊂ R(t, σ).

Let φ(0) ∈ ⋂_{p=1}^{∞} R(t + 1/p, σ). Then for each p there is an admissible control u_p such that

φ(0) = ∫_σ^{t + 1/p} U(t + 1/p, s)B(s)u_p(s) ds
     = ∫_σ^{t} U(t + 1/p, s)B(s)u_p(s) ds + ∫_t^{t + 1/p} U(t + 1/p, s)B(s)u_p(s) ds
     = z_p + y_p.

We note that

z_p = ∫_σ^{t} U(t + 1/p, s)B(s)u_p(s) ds = U(t + 1/p, t) ∫_σ^{t} U(t, s)B(s)u_p(s) ds.

Thus z_p ∈ U(t + 1/p, t)R(t, σ).
Hence z_p → T(t, t)φ(0) = φ(0) as p → ∞, since y_p → 0 as p → ∞, so that φ(0) is in the closed set R(t, σ).
For the second formula we note that, since (7.1.1) is Euclidean controllable and (7.1.5) is pointwise complete,

W(t, σ) : C → E^n,   (W(t, σ)φ = T(t, σ)φ(0)),

is an open map, and T, M are continuous. Hence

{−T(t, σ)φ(0) : M(φ) < t}

is open, so that

{−T(t, σ)φ(0) : M(φ) < t} ⊂ Int R(t, σ).

From the proof of Proposition 7.1.1 we know that if

−T(t*, σ)φ(0) ∈ Int R(t*, σ) for σ < t* < ∞,

then

−T(t, σ)φ(0) ∈ Int R(t, σ) for t > t*.

From the above we have M(φ) ≤ t* < t. This completes the proof.

As an immediate by-product of the continuity of the minimal time function, we construct an optimal feedback control for the system (7.1.1). In the spirit of Hájek [10], we present some needed preliminaries. We identify B₀ with the conjugate space of C and define a subset of B₀, which we describe as the cone of unit outward normals to support hyperplanes to R(t, σ) at a point T(t, σ)φ(0) on the boundary of R(t, σ).
Definition 7.2.2: For the controllable system (7.1.1), let

A(σ) = C = {φ : T(t, σ)φ(0) ∈ R(t, σ) for some t}.

For each φ ∈ A(σ), let

K(φ) = {ψ ∈ B₀ : ‖ψ‖ = 1 and (ψ(0), p) ≤ (ψ(0), T(M(φ), σ)φ(0)) ∀ p ∈ R(M(φ), σ)}.

Note that T(M(φ), σ)φ(0) ∈ ∂R(M(φ), σ), the boundary of R(M(φ), σ).
Lemma 7.2.1 Suppose (7.1.1) is Euclidean controllable. Then K(φ) is nonvoid, and K(−φ) = −K(φ) for t > σ + h, by Theorem 7.1.2.

Proof: Consider K(φ). Since T(M(φ), σ)φ(0) ∈ ∂R(M(φ), σ), by the usual separation theorem [1, p. 148] there exists a ψ ∈ B₀, the conjugate space of C, with ψ(0) ≠ 0, such that

(ψ(0), p) ≤ (ψ(0), T(M(φ), σ)φ(0))   ∀ p ∈ R(M(φ), σ).

It is clear that ψ can be chosen such that ‖ψ‖ = 1. Hence K(φ) is nonvoid. For the second assertion, observe that R(M(φ), σ) is symmetric about zero. Because t → R(t, σ) is continuous [21] and closed, and because φ → M(φ) is continuous, a simple argument proves the last assertion.

Because K(φ) is a nonvoid subset of B₀, for each φ we can choose a y(φ) ∈ K(φ) whenever (7.1.1) is Euclidean controllable. We now prove that this selection can be made in a measurable way.

Lemma 7.2.2 Assume that (7.1.1) is Euclidean controllable. There exists a measurable function y : A(σ) → B₀ such that

y(φ) ∈ K(φ) for all φ ∈ A(σ).
y(4) E K(4) for all 4 E d(o). Proof: We observe that C is a measurable space. Its conjugate Bo is a Hausdorff space. Also C is separable. Let k : C -+ BO be the mapping given by the collect ion
I((4) = (k(4) = 11, : 11, E Bo, ll$ll = 1 ( $ ( O ) , P ) I (11,(O),T(M(4),u)4(0))
v P E R(M(4),.>I.
The continuity of k is a simple direct consequence of its definition as well as the continuity o f t + R ( t ,u) and of + M ( 4 ) . In McShane and Warfield [23]we identify M = C = d(u), A = Bo, 11, E Bo, (and invoke the continuum hypothesis). Theorem 4 of [23] asserts that there exists a measurable function y : d ( u ) -+ Bo such that y(4) E K ( 4 ) . This proves Lemma 7.2.2. We are now ready for our main result.
Theorem 7.2.2 In (7.1.1) assume that the A_i and B are real analytic. Assume that (7.1.1) is both Euclidean controllable and normal. Let (7.1.5) be pointwise complete. Then there exists a measurable function f : A(σ) → E^m that is an optimal feedback control for (7.1.1) in the following sense: Consider the system

ẋ(t) = Σ_{i=0}^{N} A_i(t)x(t − h_i) + B(t)f(x_t),   x_σ = φ.   (7.2.1)

Each optimal solution of (7.1.1) is a solution of (7.2.1). As a partial converse, each solution of (7.2.1) is a solution (possibly not optimal) of (7.1.1). Also the function f is given by

f(φ) = lim_{s → M(φ)−} sgn[g(M(φ), s, y(φ))B(s)],   (7.2.2)

where g(t, s, ·) : B₀ → E^n is defined by

g(t, s, ψ) = ∫_{−h}^{0} [dψ(θ)][U(t + θ, s)],

and y : A(σ) → B₀ is the measurable selection of Lemma 7.2.2.

Proof: Let y : A(σ) → B₀ be the measurable selection described in Lemma 7.2.2. For each φ ∈ A(σ) set y(φ) = ψ ∈ B₀, and define f(φ) as in (7.2.2). We recall that

g(t, s, ψ) = ∫_{−h}^{0} [dψ(θ)][U(t + θ, s)],

where U solves (7.1.5). We recall from Tadmor [25, Theorem 4.1] (see also Corollary 10.2 of [18]) that s → U(t, s) is piecewise analytic for each s ∈ [σ, M(φ)], since A_i(t) is analytic in t. It follows from the analyticity of B(s) that each coordinate of g(t, s, ψ)B(s) is analytic on [s_{i−1}, s_i], i = 1, 2, ..., ν, for each partition σ ≤ s₀ ≤ s₁ ≤ ... ≤ s_ν = M(φ). Thus sgn[g(t, s, ψ)B(s)] is piecewise constant on [s_{i−1}, s_i], and therefore on [σ, M(φ)]. Hence the limit in (7.2.2) exists. Because y(φ) is a measurable selection of φ, so is f(φ).
Let x : [σ, M(φ)] → C be an optimal solution of (7.1.1) with x_σ = φ. We now show that x satisfies (7.2.1). Indeed, take an arbitrary s, σ ≤ s < M(φ); then x_s ∈ A(σ). There is an optimal control u_s that transfers x_s to zero. We take it to be

u_s(t) = sgn[g(M(x_s), t, y)B(t)]

for any choice of y ∈ K(x_s), i.e., −y ∈ K(−x_s). This choice follows from Theorem 7.1.3. We choose y = y(x_s). Because (7.1.1) is assumed normal, the optimal control is unique almost everywhere, and it uniquely determines a response on [σ, M(φ)]: that is, the response x. Thus for almost all t ≥ s and each s we have

u_s(t) = sgn[g(M(x_s), t, y(x_s))B(t)].

Now take the limit as t → s+ (σ ≤ s ≤ t). We now have

u_s(s) = lim_{t → s+} sgn[g(M(x_s), t, y(x_s))B(t)].

Since u_s is piecewise constant, u_s(s) = f(x_s) for almost all s. Therefore the response x to u_s satisfies

ẋ(t) = Σ_{i=0}^{N} A_i(t)x(t − h_i) + B(t)f(x_t)

almost everywhere. Because x is absolutely continuous, it is a solution of (7.2.1). Finally, let x be a solution of (7.2.1) that is a response to v(t) = f(x_t). Clearly v ∈ L_∞([σ, M(φ)], E^m) and v ∈ U_ad.
7.3 The Index of a Control System

From our analysis it is clear that the properties of the function

g(t, s, ψ) = ∫_{−h}^{0} [dψ(θ)][U(t + θ, s)],

with y_s(t, ·) = ψ, ψ ∈ B₀, the Banach space of functions ψ : [−h, 0] → E^{n*} of bounded variation on [−h, 0], continuous on the left on (−h, 0) and vanishing at zero, with norm Var_{[−h,0]} ψ, are important in determining optimal strategies for the delay equations (7.1.1). We call g the index of the control system. To determine it explicitly, one first finds the fundamental matrix U(t, s) of (7.1.1). We consider its autonomous version, namely

ẋ(t) = A₀x(t) + Σ_{i=1}^{N} A_i x(t − ih),   (7.3.1)

x(0) = x₀ ∈ E^n;   x(s) = φ(s), s ∈ [−Nh, 0), h > 0.

The determination of the fundamental matrix U of (7.3.1) was treated in Section 2.1. With the explicit formula for U, one determines the index of the control system using g, given as

g(t, s, ψ) = ∫_{−h}^{0} [dψ(θ)] U(t + θ − s).   (7.3.2a)

Thus, when ψ(0) = d as in the examples below,

g(s) = g(t, s, ψ) = dU(t − s).   (7.3.2b)
Consider N = 1, that is, the system

ẋ(t) = A₀x(t) + A₁x(t − h) + B(t)u(t).   (7.3.3)

The index of this control system is given as follows: on [0, 2h],

g(t, s, ψ) = ∫_{−h}^{0} [dψ(θ)] e^{A₀(t+θ−s)},   t − s ∈ [0, h],   (7.3.4a)

g(t, s, ψ) = ∫_{−h}^{0} [dψ(θ)][ e^{A₀(t+θ−s)} + ∫_h^{t+θ−s} e^{A₀(t+θ−s−τ)} A₁ e^{A₀(τ−h)} dτ ],   t − s ∈ [h, 2h].   (7.3.4b)

We now observe that k(s) is an m-vector function, the sign of whose components can be determined, and thus an open-loop control constructed. Thus, once the disconjugacy properties of k are analyzed, an optimal feedback control of (7.3.1) can be constructed, as was done in [20].

Applications
For linear autonomous systems, we have developed some theory of optimal control that, when applied to simple examples, will give us insight into the construction of optimal control laws. Since an optimal control exists when the complete system is Euclidean controllable with constraints, and is of the form u(s) = sgn(dᵀU(t − s)), the usual methods of ordinary differential equations can be applied. Using the values of U on [kh, (k+1)h], beginning at the origin, integrate backward with controls u(s), and find all optimal trajectories to the Euclidean origin. Since in the first interval [0, h], U(t) = e^{A₀t}, the analysis is relatively routine, though complicated. The application of this method, though feasible, is definitely limited because of the complicated nature of U. It does, however, provide insight into possible general methods of construction of optimal controls of systems with limited controls. Very many numerical methods that have so far been developed [28,29] are for L₂ controls, and can reasonably be modified to help solve the synthesis problem of time-optimal control systems with L_∞ controls whose components are bounded, a problem long abandoned by researchers more brilliant than I. This book is an affirmation of hope that they will return!

Example 7.3.1: We want to find a time-optimal control u*(t) such that x(t) = φ(t), t ∈ [−1, 0], and x(t*) = 0, with t* minimum, when

ẋ(t) = −x(t) + x(t − 1) + u(t),   |u(t)| ≤ 1.   (7.3.5)
It is easily proved that an optimal control exists. We use the fundamental matrix solution of Problem 2.1.1,

ẋ(t) = −x(t) + x(t − 1):

U(t) = e^{−t},   t ∈ [0, 1],
U(t) = e^{−t}[1 + e(t − 1)],   t ∈ [1, 2],
U(t) = e^{−t}[1 + e(t − 1) + (1/2)e²(t² − 4t + 4)],   t ∈ [2, 3],

to deduce the index of the control system,

g(s) = g(t, s, ψ),   (ψ(0) = d ≠ 0),
     = dU(t* − s)
     = d e^{s−t*},   s ∈ [t* − 1, t*],
     = d e^{s−t*}[1 + e(t* − s − 1)],   s ∈ [t* − 2, t* − 1],
     = d e^{s−t*}[1 + e(t* − s − 1) + (1/2)e²((t* − s)² − 4(t* − s) + 4)],   s ∈ [t* − 3, t* − 2].

Optimal control is given by sgn d, since all the factors multiplying d are positive. The time-optimal control will be either 1 or −1 on the entire interval [0, t*].
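The claim that the factor multiplying d stays positive can be checked over a grid. The sketch below is illustrative only (the grid and horizon are arbitrary choices): it encodes the three branches of U above and evaluates U(t* − s) over [0, t*].

```python
import math

def U(t):
    """Piecewise fundamental solution of x'(t) = -x(t) + x(t-1) on [0, 3]."""
    if 0 <= t <= 1:
        return math.exp(-t)
    if 1 < t <= 2:
        return math.exp(-t) * (1 + math.e * (t - 1))
    if 2 < t <= 3:
        return math.exp(-t) * (1 + math.e * (t - 1)
                               + 0.5 * math.e**2 * (t**2 - 4*t + 4))
    raise ValueError("t outside [0, 3]")

# g(s) = d * U(t* - s): the factor U(t* - s) should be positive on [0, t*].
t_star = 3.0
factors = [U(t_star - s) for s in [k * 0.01 for k in range(301)]]
print(all(f > 0 for f in factors))  # True
```

The branches also agree at t = 1 and t = 2, reflecting the continuity of the fundamental solution.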
Example 7.3.2: Antirolling Stabilization of a Ship
A ship is rolling in the waves. The linear dynamics of the angle of tilt x from the normal upright position is given by

ẋ(t) = y(t),   ẏ(t) = −b y(t) − q y(t − h) − k x(t),   (7.3.6)

ẋ(t) = y(t),   ẏ(t) = −b y(t) − q y(t − h) − k x(t) + u₂(t).   (7.3.7)

System (7.3.7) is obtained when a servomechanism is introduced and designed to reduce (x, y) to (0, 0) as fast as possible. What the contrivance does is to introduce an input to the natural damping of the rolling ship, a term proportional to the velocity at an earlier instant t − h: q y(t − h). Also introduced is a control with components (0, u₂), yielding the equation above. Thus

ẋ(t) = A₀x(t) + A₁x(t − h) + Bu(t),   (7.3.8)
FIGURE 7.3.1. x = angle of tilt.

with k = 2, b = 3, q = 1. The parameter values correspond to one particular operating point. In this example,

A₀ = [[0, 1], [−2, −3]],   A₁ = [[0, 0], [0, −1]],   Bu = (0, u₂)ᵀ,

and on [0, h],

U₁(t) = e^{A₀t} = [[2e^{−t} − e^{−2t},  e^{−t} − e^{−2t}], [−2e^{−t} + 2e^{−2t},  −e^{−t} + 2e^{−2t}]],   (7.3.10)

while, if h is arbitrary, U₂(t) on [h, 2h] combines these exponentials with factors linear in t − h (7.3.11), and so on. The index of the control system is
g(s) = g(t*, s, ψ) = dᵀU(t* − s),   ψ(0) = d = (d₁, d₂).

Our problem is to find a control u* that drives the initial position x̄ = (x₀, y₀) = φ(s), s ∈ [−h, 0], to the Euclidean terminal point x(t*) = (x(t*), y(t*)) = 0 in minimum time t*. We invoke Theorem 7.1.4 to show that the system is normal and Euclidean controllable. Uniform asymptotic stability is assured by the analysis of Example 2.4.1, since 3 = b > q = 1. The system is null controllable with constraints, and optimal controls exist and are uniquely determined (almost everywhere) by

u*(s) = sgn g(s),

where the index of the control system is

g(s) = dᵀU₁(t* − s),   s ∈ [0, h],  0 ≤ s ≤ t* ≤ h,
g(s) = dᵀU₂(t* − s),   s ∈ [0, h],  s + h ≤ t* ≤ 2h,
g(s) = dᵀU₃(t* − s),   s ∈ [0, h],  s + 2h ≤ t* ≤ 3h,   (7.3.12)

and so on. In general,

g(s) = dᵀU_k(t* − s),   s ∈ [0, h],  s + (k − 1)h ≤ t* ≤ kh,

where U_k = U is as defined in Equation (2.1.8) or Equation (2.1.10). We now use U, which has been determined. In [0, h],

g(s) = g₁(s) = (2(d₁ − d₂)e^{−s} + (2d₂ − d₁)e^{−2s}, (d₁ − d₂)e^{−s} + (2d₂ − d₁)e^{−2s}).   (7.3.13)
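Formula (7.3.13) can be cross-checked against a direct integration, since dᵀe^{A₀s} = (e^{A₀ᵀs}d)ᵀ. The sketch below is illustrative only: it hard-codes A₀ = [[0, 1], [−2, −3]] from the example (eigenvalues −1 and −2), takes an arbitrary d, and compares the closed form with a forward-Euler integration of y' = A₀ᵀy.

```python
import math

A0 = [[0.0, 1.0], [-2.0, -3.0]]   # k = 2, b = 3 from the example

def g1(d, s):
    """Index (7.3.13): d^T U1(s) with U1(s) = e^{A0 s} in closed form."""
    e1, e2 = math.exp(-s), math.exp(-2 * s)
    d1, d2 = d
    return (2*(d1 - d2)*e1 + (2*d2 - d1)*e2,
            (d1 - d2)*e1 + (2*d2 - d1)*e2)

def g1_numeric(d, s, steps=200000):
    """Same quantity from y'(t) = A0^T y(t), y(0) = d, by forward Euler."""
    dt = s / steps
    y = list(d)
    for _ in range(steps):
        y = [y[0] + dt * (A0[0][0]*y[0] + A0[1][0]*y[1]),
             y[1] + dt * (A0[0][1]*y[0] + A0[1][1]*y[1])]
    return tuple(y)

a = g1((-1.0, -2.0), 0.7)
b = g1_numeric((-1.0, -2.0), 0.7)
print(all(abs(x - y) < 1e-3 for x, y in zip(a, b)))  # True
```

At s = 0 the closed form reduces to d itself, as it must, since U₁(0) = I.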
In [0, h] with s + h ≤ t* ≤ 2h,

g(s) = g₂(s) = (2(d₁ − d₂), (d₁ − d₂))(e^{−(t*−s)} + (t* − s − h)e^{−(t*−s−h)})
       + (2d₂ − d₁)(1, 1)(e^{−2(t*−s)} − 2(t* − s − h)e^{−2(t*−s−h)}),
       s ∈ [0, h],  s + h ≤ t* ≤ 2h.   (7.3.14)

To obtain optimal trajectories in E², we start at the origin and integrate backwards in time. To do this we replace t by −τ in our system (7.3.7) and obtain the backward dynamics (7.3.15), where

u₂(τ) = sgn[(d₁ − d₂) + e^{τ}(2d₂ − d₁)].   (7.3.16)

Taking d₁ < 0, d₂ < 0, we begin at the origin (x(0) = y(0) = 0) along the trajectory S(τ) = α with u₂ = −1. This control may change sign at some τ₁ ∈ (0, h], where

τ₁ = ln( −(d₁ − d₂)/(2d₂ − d₁) ).

Therefore, starting at τ = 0 at the point α(τ₁) of S(τ) = α, with u₂ = 1, we leave α along a new curve β(τ) that begins at α(τ₁). There is no further change of sign for τ ∈ [0, h] until τ₂ = τ ∈ [h, 2h], and then the control u(s) = sgn g₂(s) is used. Also, if τ₁ ∈ [h, 2h], the control u(s) = sgn g₂(s) is used. In both situations, and under the assumption d₁ < 0, d₂ < 0,

u₂(s) = sgn[(d₁ − d₂)(e^{−(t*−s)} + (t* − s − h)e^{−(t*−s−h)}) + (2d₂ − d₁)(e^{−2(t*−s)} − 2(t* − s − h)e^{−2(t*−s−h)})].   (7.3.17)

The new control is then applied from the switching point onward.
Thus we begin at the point β(τ₂) with u₂(s) given by the sign of the second component of g₂(s), where d₁ < 0, d₂ < 0, and move along the trajectory S(−τ), which is ζ(τ), with this control, until a possible switch at the zero of the second component of g₂(s) on [h, 2h]. The earlier process is repeated. Next we start at the origin. With d₁ < 0, d₂ > 0, we have u₂ = 1. The process is repeated, and the control law is deduced, by considering the situations d₁ > 0, d₂ > 0 and d₁ > 0, d₂ < 0 as well.
Example 7.3.3: We consider a two-dimensional retarded system

ẋ₁(t) = −2x₁(t) − x₂(t − h) + u₁(t) + u₂(t),
ẋ₂(t) = −x₂(t) + x₁(t − h) + u₁(t) + 2u₂(t),   |u₁| ≤ 1,  |u₂| ≤ 1,   (7.3.19)

where x₁, x₂ stand for population densities of two species (and so they are nonnegative). This system may describe the linear dynamics of a predator-prey system around the equilibrium (x̄, ȳ). If x and y are the population densities of the prey and predator respectively, then

x₁ = x − x̄,   x₂ = y − ȳ.

The controls u = (u₁, u₂) are harvesting/seeding strategies. We want an optimal u* that will drive the populations to the equilibrium (0, 0) ∈ E² in minimum time, starting from any initial population density function (x₀, y₀) ≡ φ ∈ C([−h, 0], E²). The system can be written in matrix form as

ẋ(t) = A₀x(t) + A₁x(t − h) + Bu(t),

where

A₀ = [[−2, 0], [0, −1]],   A₁ = [[0, −1], [1, 0]],   B = [[1, 1], [1, 2]].
We observe that

ẋ(t) = A₀x(t) + A₁x(t − h)

is Exponentially Asymptotically Stable (EAS) for all h ≥ 0. Indeed, we can use a result in a very recent book [27, pp. 98–99] that the system

ẋ₁(t) = −a₁₁x₁(t) − b₁₂x₂(t − h),
ẋ₂(t) = −a₂₂x₂(t) + b₂₁x₁(t − h)   (7.3.20)

is EAS for all h ≥ 0 if
or if (7.3.22) holds. We can also use Proposition 3.3.3. The first condition gives 3 > 1, while the second gives 3 > 1/4, or 3 > −1 and 2h < 3/2. The system is also Euclidean controllable, since by Theorem 6.1.1 the relevant controllability matrix has rank 2 for any t > 0. Theorem 6.3.1 assures us that (7.3.19) is null controllable with constraints. Optimal controls exist, and since the system is normal for each t > 0, the optimal control is uniquely determined by

u*(s) = sgn(dᵀU_k(t − s)B),   s ∈ [0, h],  kh ≤ t ≤ (k + 1)h,

where U_k is the fundamental matrix, which we now calculate by the methods of Section 2.1 (7.3.23).
Hence the index (7.3.25) follows. For each point of the Euclidean state space (the plane), there is an optimal control. We first assume that the delay h is sufficiently large. Thus

dᵀU_k(t − s)B,   s ∈ [0, h],  kh ≤ t ≤ (k + 1)h,

is given by explicit exponential expressions whose signs we now analyze.
To obtain the optimal trajectories, we start at the origin and, as usual, integrate backwards. This is done by replacing t with −τ and considering the dynamics (7.3.28), with the controls given by (7.3.29).
We now assume that the optimal time t is known and that d1 e^t + d2 < 0, d1 e^t + 2d2 < 0, and d2 > 0, and begin at (0,0) with controls u1 = -1, u2 = -1 and move along the parabola defined by Equation (7.3.28), which is

α:  x1(τ) = e^{2τ} - 1,  x2(τ) = 3(e^τ - 1)  on [0, h].    (7.3.30)

Clearly u2(τ) will switch to a new value at a time τ1, and u1(τ) will change sign at a time τ2; hence τ2 - τ1 = ln 2. It follows then that if we begin at τ = 0 at the point (x1(τ1), x2(τ1)) of the curve α with u1 = -1 and u2 = 1, we depart from α along a new parabola. Clearly u1 changes sign at τ2. These are the calculations of Hermes and LaSalle [4, p. 83] for ordinary differential systems. They are valid here because we have assumed that h is so large that all the indicated switches occur on [0, h], and the dynamics there is that of e^{A0 t}. Thus the last equations above tell us how to transform α to obtain the curve β on which u1 changes sign. For each component there is only one sign change. We complete the analysis by treating the remaining sign configuration of d: begin at (0,0) with control u1 = -1, u2 = 1, with a sign change of u1 not later than ln 2 afterwards. The trajectory is along the line from (0,0) to (-1,0), which is the switching curve γ. The others are obtained by symmetry. Thus within [0, h], h large, the complete optimal control law is deduced. This is given in the diagram.
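The geometry of the curve α can be checked numerically. The short sketch below assumes only the parametrization in (7.3.30); eliminating τ (via e^τ = x2/3 + 1) shows that α is the parabola x1 = (x2/3 + 1)² - 1 in the plane.

```python
import math

def alpha(tau):
    """Point on the switching curve alpha of (7.3.30)."""
    return math.exp(2.0 * tau) - 1.0, 3.0 * (math.exp(tau) - 1.0)

def alpha_implicit(x2):
    """Eliminate tau: e^tau = x2/3 + 1, hence x1 = (x2/3 + 1)^2 - 1."""
    return (x2 / 3.0 + 1.0) ** 2 - 1.0

# alpha starts at the origin, and the parametric and implicit forms agree.
assert alpha(0.0) == (0.0, 0.0)
for tau in (0.1, 0.5, 1.0, 2.0):
    x1, x2 = alpha(tau)
    assert abs(x1 - alpha_implicit(x2)) < 1e-9
```

Reflecting α through the origin gives the symmetric branch used for the opposite sign configuration of d, as noted in the text.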
FIGURE 7.3.2. [4, p. 83].

If the switch time τ > h, we have to go to the next interval [h, 2h] before switching. Thus we consider u(t), t ∈ [h, 2h]; and then

u(τ) = sgn[dᵀ U_1(s - t) B],  t ∈ [h, 2h], with s = 0, i.e.,
u2(τ) = sgn[d1{e^{2t} - 2(e^{t-h} - e^{2(t-h)}) - 4(e^{t-h} - e^{2(t-h)})} + d2{e^{t-h} - e^{2(t-h)} + 2e^t}],
u1(τ) = sgn[d1 e^t + d2{e^t + e^{t-h} - 2e^{2(t-h)}}],

where d1 e^t + d2 < 0, d1 e^t + 2d2 < 0, and d2 > 0, t ∈ [h, 2h]. Thus we begin with this control at the point α(τ1) and move along this trajectory until switches occur at the zero of g1 and the zero of g2. The earlier process is repeated after going to the next interval, [2h, 3h]; the analysis is then complete. In this way the optimal control law is deduced.
Example 7.3.4: Wind Tunnel Model. The problem concerns the time-optimal control of a high-speed closed-circuit wind tunnel. We are concerned with so-called Mach number control. A linearized model of the Mach number dynamics is a system of three state equations with a delay. The state variables x1, x2, x3 represent deviations from a chosen operating point (equilibrium point) for the following quantities: x1 = Mach number, x2 = actuator position (guide vane angle in a driving fan), and x3 = actuator rate. The delay represents the time of transport between the fan and the test section. We assume that the control variable constitutes an input to the rate of the Mach number. The model has an equation of the form given below, with a = 1/1.964, k = 0.117, ω = 6, ξ = 1.6, and h = 0.33 s. The parameter values correspond to one particular operating point. We write the equation as
x'(t) = A0 x(t) + A1 x(t - h) + B u(t),

where

A0 = [ -a    0     0  ]      A1 = [ 0   ka   0 ]      B = [  0  ]
     [  0    0     1  ],          [ 0    0   0 ],         [  0  ]
     [  0  -ω²  -2ξω  ]           [ 0    0   0 ]          [ ω²  ]

The eigenvalues of A0 are λ1 = -0.509165, λ2 = -17.0939, and λ3 = -2.106024. The fundamental matrix is computed from these by the methods of Section 2.1.
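Both the eigenvalues just quoted and the controllability computation that follows can be reproduced numerically. The sketch below assumes the standard state-space form of the wind-tunnel model, x1' = -a x1 + ka x2(t-h), x2' = x3, x3' = -ω² x2 - 2ξω x3 + ω² u; this structure is an assumption, but it is consistent with the parameter values, the eigenvalues, and the entries of Q3(h) reported in the text.

```python
import numpy as np

a, k, w, xi = 1.0 / 1.964, 0.117, 6.0, 1.6

# Assumed structure: delay-free part A0, delayed coupling A1, input column B.
A0 = np.array([[-a, 0.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, -w**2, -2.0 * xi * w]])
A1 = np.array([[0.0, k * a, 0.0],
               [0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])
B = np.array([0.0, 0.0, w**2])

eigs = np.sort(np.linalg.eigvals(A0).real)
# eigs is approximately [-17.094, -2.106, -0.50916], matching the text.

# Determining equations Q_k(s) = A0 Q_{k-1}(s) + A1 Q_{k-1}(s - h),
# with Q_0(s) = B for s >= 0 and Q_k(s) = 0 for s < 0, evaluated at s = h:
Q0_h = B
Q1_h = A0 @ B                    # A1 @ Q_0(0) contributes A1 @ B = 0 here
Q2_h = A0 @ Q1_h + A1 @ (A0 @ B)

Q3 = np.column_stack([Q0_h, Q1_h, Q2_h])
# Q3 reproduces the entries 2.144603, 36, -691.2, 11975.05 of Q3(h) below.
assert np.linalg.matrix_rank(Q3) == 3    # Euclidean controllable
```

The delay term A1 is essential here: without it the first row of the controllability matrix would vanish and the rank would drop to 2.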
The system is Euclidean controllable on [0, t], for all t > h, since

Q3(h) = [  0       0       2.144603 ]
        [  0      36      -691.2    ]
        [ 36    -691.2   11975.05   ]

and rank Q3(h) = 3. The system
x'(t) = A0 x(t) + A1 x(t - h)

is uniformly asymptotically stable, since the characteristic equation

Δ(λ) = det [ λ + a   -ka e^{-λh}      0     ]
           [   0          λ          -1     ]  = (λ + a)(λ² + 19.2λ + 36) = 0
           [   0         36       λ + 19.2  ]

has only the roots λ1 = -0.50916, λ2 = -17.094, and λ3 = -2.106, all with negative real parts for every h ≥ 0.
It follows that the system is controllable with constraints. The fundamental matrix U(t) is computed as in Section 2.1, and the optimal control takes the form

u*(s) = sgn[ d1(0.00129 - 0.003603 e^{-2.6152(t-s)} + 0.07527 e^{-17.6032(t-s)}) + d2(0.06672(e^{-2.106(t-s)} - e^{-17.094(t-s)})) + d3(-0.1405 e^{-2.106 t} + 1.1405 e^{-17.094 t}) ].

7.4 Time-Optimal Feedback Control of Autonomous Delay Systems
In this section we complete the investigation, reported in Theorem 7.2.2, of the problem of constructing an optimal feedback control needed to reach the Euclidean space origin in minimum time for the linear system

x'(t) = A0 x(t) + Σ_{j=1}^{N} A_j x(t - τ_j) + B u(t).    (7.4.1)

Here 0 < τ_1 < τ_2 < ... < τ_N = h; the A_j are n × n constant matrices, and B is an n × m constant matrix. The controls are L_∞ functions whose values on any compact interval lie in the m-dimensional unit cube

C^m = {u ∈ E^m : |u_j| ≤ 1, j = 1, ..., m}.

We shall show that the time-optimal feedback system

x'(t) = Σ_{j=0}^{N} A_j x(t - τ_j) + B f(x(t))    (7.4.2)

(with τ_0 = 0) executes the time-optimal regime for (7.4.1) in the spirit of Hájek [30] and Yeung [32,33]. The construction of f provides a basis for direct design, and it is done for strictly normal systems, which we now define.
Definition 7.4.1: Let

J_0 = {t = jτ, j = 0, 1, 2, ...},

and assume J_0 is finite. Suppose U(ε, t) is the fundamental matrix solution of

x'(t) = A0 x(t) + Σ_{j=1}^{N} A_j x(t - τ_j)    (7.4.3)

on some interval [0, ε], ε > 0. Note that U(ε, t) is piecewise analytic, and its analyticity may break down only at points of J_0 (see Tadmor [25]). System (7.4.1) is strictly normal on some interval [0, ε) if for any integers r_j ≥ 0 satisfying

Σ_{j=1}^{m} r_j = n,

the vectors

Q_ij(s),  j = 1, ..., m,  s ∈ [0, ε] - J_0,  0 ≤ i ≤ r_j - 1    (7.4.4)

are linearly independent. (If r_j = 0, there are no terms Q_ij in (7.4.4).) Here, as before in (7.1.10),

Q_kj(s) = Σ_{i=0}^{N} A_i Q_{k-1, j}(s - τ_i),  k = 1, 2, ...,  s ∈ (-∞, ∞),  j = 1, ..., m,

and B = (b_1 ... b_m).
It follows from Theorem 7.1.4 that a complete, strictly normal system is normal and has rank B = min[m, n]. Indeed, choose any column b_j of B and set r_j = n, r_i = 0 for i ≠ j in the definition. Clearly

Q_0j(s), ..., Q_{n-1, j}(s),  s ∈ [0, ε] - J_0,

are linearly independent. Because of Theorem 7.1.4 we have normality, since b_j is arbitrary. The second assertion is obvious, since the b_j are linearly independent and we can take r_j = 1 or r_j = 0. If the system is an ordinary one,

x'(t) = A0 x(t) + B u(t),    (7.4.5)

our definition reduces to that of Hájek [31].
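The piecewise analyticity of the fundamental matrix, with breakdowns only at delay points, can be seen concretely by the method of steps. The scalar sketch below uses illustrative coefficients a0, a1, h (not from the text): on [0, h] the fundamental solution is e^{a0 t}, while on [h, 2h] it is e^{a0 t} + a1 (t - h) e^{a0 (t - h)}, so its first derivative jumps by a1 at t = h.

```python
import math

a0, a1, h = -1.0, 0.5, 1.0          # illustrative scalar coefficients
dt, T = 1e-4, 2.0
steps = int(round(T / dt))
nh = int(round(h / dt))             # delay measured in grid steps

# Forward-Euler method of steps for x'(t) = a0 x(t) + a1 x(t - h),
# with U(0) = 1 and U(t) = 0 for t < 0.
U = [1.0]
for i in range(steps):
    delayed = U[i - nh] if i >= nh else 0.0
    U.append(U[i] + dt * (a0 * U[i] + a1 * delayed))

def U_exact(t):
    """Closed form from integrating step by step over [0, h] and [h, 2h]."""
    if t <= h:
        return math.exp(a0 * t)
    return math.exp(a0 * t) + a1 * (t - h) * math.exp(a0 * (t - h))

assert abs(U[steps] - U_exact(T)) < 1e-3
# U' jumps by a1 at t = h: analyticity breaks exactly at a delay point.
```

On [2h, 3h] a further polynomial factor appears, which is the pattern behind the exceptional set J_0 in Definition 7.4.1.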
Lemma 7.4.1 (Fundamental Lemma). System (7.4.1) is strictly normal if there exists ε > 0 with the following property: for every n-vector c ≠ 0 and in any interval of length ≤ ε, the sum of the number of roots, counting multiplicities, of the coordinates of the index

g(t, c) = cᵀ U(ε, t) B

of the control system (7.4.1) is less than n.

Proof: Suppose that such an ε > 0 exists and (7.4.1) is not strictly normal on [0, ε) - J_0. Then there exist integers r_j ≥ 0 such that Σ_{j=1}^{m} r_j = n, and the vectors

Q_kj(iτ),  1 ≤ j ≤ m,  0 ≤ k ≤ r_j - 1,  ε - iτ > 0,  i = 1, 2, ...,

are linearly dependent. Hence there exists c ≠ 0, c ∈ E^n, such that

cᵀ Q_kj(iτ) = 0,  1 ≤ j ≤ m,  0 ≤ k ≤ r_j - 1.    (7.4.6)
Suppose a is a zero of g_j(t, c) = cᵀ U(ε, t) b_j (j = 1, ..., m) on [0, ε) - J_0. Then analyticity yields the Taylor expansion of g_j at a particular time (t = a+ or t = a-), with

g_j^{(k)}(a+, c) = (d^k g_j / dt^k)(a+, c),  k = 0, 1, 2, ...,

etc. It now follows that

0 = Σ_{k=0}^{∞} Δg_j^{(k)}(a, c) (t - a)^k / k!,    (7.4.7)

where Δg_j^{(k)} := g_j^{(k)}(a-, c) - g_j^{(k)}(a+, c). If a = ε - iτ > 0, i = 1, 2, ..., then

0 = Σ_{k=0}^{∞} Δg_j^{(k)}(ε - iτ) (t - (ε - iτ))^k / k!
  = Σ_{k=0}^{∞} (-1)^k cᵀ Q_kj(iτ) (t - (ε - iτ))^k / k!,

where we have used (7.4.7). It follows from (7.4.6) that (7.4.7) becomes

0 = [t - (ε - iτ)]^{r_j} Σ_{k=0}^{∞} cᵀ Q_{k + r_j, j}(iτ) (t - (ε - iτ))^k / (k + r_j)!.

Hence for each j = 1, ..., m, t = ε - iτ is a zero of g_j(t) of multiplicity r_j, and consequently the sum of the number of roots in [0, ε] - J_0, counting multiplicities, of the coordinates of g(t) = cᵀ U(ε, t) B is at least n. This completes the proof.
It is very desirable to have information on the largest ε that has the property of the Fundamental Lemma. Results on this do not seem to be currently available. It is conjectured that the converse of the Fundamental Lemma is valid. In all that follows, we assume as basic that (7.4.1) is strictly normal and that (7.4.3) is complete, in the sense that the operator W(t, σ) : C → E^n, defined by W(t, σ)φ = x(σ, φ, 0)(t), where x(σ, φ, 0)(t) is a solution of (7.4.3), is a surjection: W(t, σ)C = E^n. We shall also retain ε as that described by the Fundamental Lemma.
Corollary 7.4.1 Let k ≤ n. Suppose there are distinct times t_1 < ... < t_k with t_k - t_1 ≤ ε, and integers ℓ_1, ..., ℓ_k among 1, ..., m. If (7.4.1) is strictly normal, the vectors

U(ε, t_1) b_{ℓ_1}, ..., U(ε, t_k) b_{ℓ_k}

are linearly independent.

Proof: Since a subcollection of linearly independent vectors is also linearly independent, we shall prove the statement for k = n. Suppose the assertion is false; then there exists c ≠ 0, c ∈ E^n, such that cᵀ U(ε, t_i) b_{ℓ_i} = 0 for each i. Thus the sum of the number of roots, counting multiplicities, of the coordinates of g is at least n in [t_1, t_1 + ε]; and this is a contradiction.
Corollary 7.4.2 Suppose (7.4.1) is strictly normal, and 0 < t_1 < ... < t_n < ε. ...

... the set {t : cᵀ U(t_1, t) b_j(t) ≡ 0, t ∈ [σ, t_1]} has measure zero. In this case ||S_t*(cᵀ)|| > 0 in (7.5.39) or in (7.5.40), for each c ≠ 0, t > σ. Conditions on the system's coefficients for normality are given in (7.5.23). We state and prove this for the autonomous simple system

x'(t) = A0 x(t) + A1 x(t - h) + B u(t),    (7.5.42)

where the "determining equations" are given by

Q_kj(s) = A0 Q_{k-1, j}(s) + A1 Q_{k-1, j}(s - h),  k = 1, 2, 3, ...,  s ∈ (-∞, ∞).    (7.5.43)
Set

Q̄_nj(t_1) = {Q_0j(s), Q_1j(s), ..., Q_{n-1, j}(s), s ∈ [0, t_1]}.    (7.5.44)

Theorem 7.5.6 The system (7.5.42) is normal on [0, t_1] if and only if for each j = 1, ..., m, rank Q̄_nj(t_1) = n.
Proof: This is an easy adaptation of Manitius [37, p. 79]. Because of its utility in subsequent discussions, we shall outline it in some detail. We note that s → U(t_1, s) is the fundamental matrix solution of the adjoint equation (7.5.45). The function

g_j(s, c) = cᵀ U(t_1, s) b_j,

the j-th component of the index of the control system, has the following properties: s → g_j(s, c) is defined on [0, ∞); it vanishes on (t_1, ∞) and is piecewise analytic in (0, ∞). The isolated exceptional points are at s = t_1, t_1 - h, t_1 - 2h, ..., t_1 - ih, where i is such that t_1 - ih > 0 (see Tadmor [11]). Define

Δg_j^{(k)}(t, c) = g_j^{(k)}(t - 0, c) - g_j^{(k)}(t + 0, c),  t ∈ (0, ∞),    (7.5.46)

where we have designated g_j^{(k)}(t - 0, c), g_j^{(k)}(t + 0, c) as the left- (respectively right-) hand limits of the k-th derivative of g_j at s = t, k = 0, 1, 2, .... Clearly (7.5.47) holds, and the jump at t_1 occurs because of the jump discontinuity of the fundamental matrix U(t_1, s) at s = t_1. Also (7.5.48) holds otherwise. We can prove by induction that, in general,

Δg_j^{(k)}(t_1 - ih) = (-1)^k cᵀ Q_kj(ih),  i: t_1 - ih > 0.    (7.5.49)
258
Sfability and Time-Optimal Control of Hereditary Systems
Assume that rank Q̄_nj(t_1) = n, but (7.5.42) is not normal on [0, t_1]. Then there is a c ≠ 0, c ∈ E^n, such that for some j = 1, ..., m, g_j(s, c) = 0 on [0, t_1]. Because of this, g_j(s, c) ≡ 0 on [0, ∞), so that on differentiating,

0 = cᵀ Δg^{(k)}(t_1 - ih) = (-1)^k cᵀ Q_kj(ih),  for k = 0, 1, 2, ...,  i = 0, 1, 2, ...,  t_1 - ih > 0.

We deduce that c ∈ E^n is orthogonal to all the vectors of Q̄_nj(t_1), which is a contradiction. We have proved normality. We now prove the necessity. The constant-coefficient case is proved exactly as was done in [37, p. 80]. For the general situation of (7.5.1), it is far from clear whether the rank condition is necessary.

The next prevailing assumption on (7.5.1) is that of metanormality. The notion was introduced by Hájek [35, p. 416].

VI. The system (7.5.1) is metanormal on [σ, t_1] if and only if every index

g(t, c) = cᵀ U(t_1, t) B(t)

with c ≠ 0 has each component g_j(t, c) constant only on sets of measure zero: the set

{t > σ : cᵀ U(t_1, t) b_j(t) = α}

has measure zero for each column b_j of B, each constant α ∈ E, and each c ≠ 0 in E^n. Though metanormality is a condition on S_t*, we characterize it as a full-rank condition on some of the system's parameters. We restrict our discussion to (7.5.42).

Lemma 7.5.1 The system (7.5.42) is metanormal if and only if for each j = 1, ..., m,

rank {Q_1j(s), ..., Q_nj(s), s ∈ [0, t_1)} = n,

where Q_kj is as defined in (7.5.43).
Proof: Suppose a j-th coordinate g_j(t, c) = cᵀ U(t_1, t) b_j equals a constant α on a set of positive measure. Then t → g_j(t, c) - α ≡ M(t) = 0 on a set of positive measure. Since M is piecewise analytic except at the points s = t_1, t_1 - h, t_1 - 2h, ..., t_1 - ih, where i is such that t_1 - ih > 0, the function M(t) ≡ 0 on [0, ∞). Hence it follows that

0 = ΔM^{(k)}(t_1 - ih) = (-1)^k cᵀ Q_kj(ih),  t_1 - ih > 0,    (7.5.50)

for k = 1, 2, ..., n and i = 0, 1, ..., t_1 - ih > 0. Hence c is orthogonal to all the vectors Q_1j(s), ..., Q_nj(s), s ∈ [0, t_1]. Therefore (7.5.42) is not metanormal.
for k = 1 , 2 , . . . ,n, a' = 0 , 1 , ... , t l - ih > 0. Hence c is orthogonal to all vectors & l j ( s ) ,... , Q , l j ( s ) s E [O,tl]. Therefore (7.5.42) is not metanormal. We now prove necessity. We show that metanormality implies that for each j = 1 , . . . , rn rank Qoo, = n where Qoojdenotes a matrix with infinite number of columns given by Q l j ( s ) ,Q 2 j ( s ) ,. . . , S E [0, t l ] ;and the assertion rank Q-j = n means that there are n linearly independent columns in the sequence {Qk.j(s), s E [O,ti), k = 1 , 2 , * * . ) . Suppose this is false. Then there is a c
# 0, c E E"
such that
(7.5.51 ) Since Ao, A l , B are constant,the function
is piecewise analytic except at isolated points of [O,t1],which are seen to be t 1 , t l - h , t l - 2h, etc. But g j ( t , c ) vanishes for t > t l , hence M ( t ) E g j ( t ,c ) - a -a for t > t1. It follows that M ( k ) ( t l + O )= 0 for k = 1 , 2 , . . . . Since (7.5.47 - 7.5.50) holds and AAdk)(t)= M(k)(t-O)-M(k)(t+O), 2 E (O,oo), we deduce that M ( k ) ( t l - O ) = g jk( t , - O , c ) = 0, k = 1 , 2 . Because g j ( t , c ) is piecewise analytic on [O,tl], M(1) f 0 on [tl - h , t l ] . In this same way we obtain M ( t ) 0 on [ t l - 2 h , tl - h ] , and by induction M ( t ) f 0 on [ O , t , ] , i.e., gj(1,c) = a on [O,tl]. This contradicts metanormality. To complete the proof it can be shown that rank Q - j = n implies rank Q n = n , where Q n j = [ Q l j ( s ) , . *,.Q n j ( s ) E [o,t1]1-
=
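The rank criterion of Lemma 7.5.1 is mechanical to check once the determining equations (7.5.43) are implemented. The matrices A0, A1, b below are illustrative choices, not from the text; only the recursion and the rank test are the content of the lemma.

```python
import numpy as np

A0 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # illustrative 2 x 2 system
A1 = 0.1 * np.eye(2)
b = np.array([0.0, 1.0])
h = 1.0

def Q(k, s):
    """Determining equations (7.5.43): Q_0(s) = b for s >= 0, 0 for s < 0."""
    if s < 0:
        return np.zeros_like(b)
    if k == 0:
        return b
    return A0 @ Q(k - 1, s) + A1 @ Q(k - 1, s - h)

# Collect Q_1j(s), Q_2j(s) at sample instants s in [0, t1) and test the rank.
cols = [Q(k, s) for k in (1, 2) for s in (0.0, 1.0, 1.5)]
rank = np.linalg.matrix_rank(np.column_stack(cols))
assert rank == 2   # rank = n, so the criterion of Lemma 7.5.1 is met here
```

Sampling at finitely many s can only certify the rank, not refute it; a full test would sweep s over [0, t_1).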
7.6 Proof of Minimum-Effort Theorems

With the six general assumptions I-VI stated, and conditions for their validity in terms of the system's coefficients deduced, we are now prepared to state three preliminary results on which the solution of the problem of the minimum-effort control strategies is based.

Theorem 7.6.1 In (7.6.1), let t ∈ [σ, t_1], and consider U in (7.6.5) or (7.6.7). Then there exists an admissible control u ∈ U such that S_t(u) = y(t)
if and only if

cᵀ y(t) ≤ ||S_t*(cᵀ)||,  ∀ c ∈ E^n,    (7.6.1)

where S_t* is the adjoint of S_t, a map of E^n to L.

Proof: It is assumed that there is a u ∈ U such that S_t(u) = y(t). It follows that

cᵀ y(t) = cᵀ S_t(u) = S_t*(cᵀ)(u) ≤ ||S_t*(cᵀ)||.

Therefore (7.6.1) is valid. To prove the converse, we recall that the Euclidean reachable set R(t) is closed. It is also convex, being a linear image of the convex set U. If there is no u ∈ U such that S_t(u) = y(t), then y(t) ∉ R(t). Because R(t) is a closed and convex subset of E^n, the Separation Theorem [4, p. 33] asserts that there is a hyperplane that separates R(t) and y(t): there exists c ∈ E^n such that

cᵀ y(t) > sup{cᵀ (S_t(u)) : u ∈ U} = sup{S_t*(cᵀ)(u) : u ∈ U}.

This invalidates (7.6.1).

The assumption that there exists some u ∈ U such that S_t(u) = y(t) is a statement on the constrained controllability of (7.6.1) at some t. The optimal (minimum) time t* for hitting the target is defined as

t* = inf{t ∈ [σ, t_1] : S_t(u) = y(t) for some u ∈ U}.    (7.6.2)

The admissible control u* ∈ U that ensures the coincidence S_{t*}(u*) = y(t*) is the time-optimal control. For the minimum-effort problem, the optimal strategy u* that ensures S_{t_1}(u*) = y(t_1) while minimizing E(u(t_1)) is the (minimum) optimal control. Inspired by the ideas of Hájek and Krabs [36], we propose the following:
Theorem 7.6.2 In (7.6.1) assume that the point target y(t) ∈ E^n is continuous. Then there exists a c ∈ E^n with ||c|| = 1 such that

cᵀ y(t*) = ||S_{t*}*(cᵀ)||,    (7.6.3)

so that

S_{t*}*(cᵀ)(u*) = cᵀ (S_{t*}(u*)) = cᵀ y(t*),    (7.6.4)

where u* is the time-optimal control, and

||u*|| = 1,    (7.6.5)

if the system is normal.

Remark 7.6.1: Because z¹(t) is continuous, y(t) is continuous. The proof, as pointed out in [36, p. 5], is a slight adaptation of the argument in the proof of Theorem 6 in [34]. Because the next result links the time-optimal control and the minimum-fuel strategy, it is described by Hájek and Krabs in another setting [36] as the Duality Theorem. It is simple but fundamental.
Theorem 7.6.3 In (7.6.1) assume null controllability with constraints. This is satisfied if (7.6.11) is uniformly asymptotically stable and (7.6.1) is Euclidean controllable. If t* is the minimum time, then

t* = max{t ∈ (0, t_1] : cᵀ y(t) = ||S_t*(cᵀ)|| for some c ∈ E^n, ||c|| = 1}.    (7.6.6)
Proof: Because (7.6.1) is null controllable with constraints, we set z¹(t) ≡ 0; for each nontrivial φ ∈ C, y(t) = z¹(t) - z(t, σ, φ, 0) becomes y(t) = -z(t, σ, φ, 0); and if -z(t_1, σ, φ, 0) ≠ 0 and we set y(t) ≡ -z(t_1, σ, φ, 0) = y¹ for all t ≥ σ, then (IV) is satisfied. Because of Euclidean controllability, assumption (V) is satisfied. Since Theorem 7.6.2 is valid, the minimum time t* is a point at which the maximum in (7.6.6) is attained. Suppose there is a t > t* such that cᵀ y(t) = ||S_t*(cᵀ)|| for some c ∈ E^n with ||c|| = 1. Then, in the resulting chain of estimates, the first inequality follows from Theorem 7.6.1 and the second from assumption V. The obvious contradiction proves our assertion.

We now designate U to be as in (7.6.5) or (7.6.7), and derive from Theorems 7.6.1-7.6.3 optimal controls for minimizing effort and time.
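The duality formula (7.6.6) can be made concrete in a scalar illustration (not from the text): for x'(t) = -a x(t) + u(t), |u| ≤ 1, steering x0 > 0 to the origin, one has y(t) = x0 e^{-a t}, ||S_t*(c)|| = |c| (1 - e^{-a t}) / a, and the support condition cᵀ y(t) = ||S_t*(cᵀ)|| with ||c|| = 1 pins down the closed-form minimum time t* = ln(1 + a x0) / a.

```python
import math

a, x0 = 0.5, 2.0   # illustrative parameters

def gap(t):
    """c^T y(t) - ||S_t^*(c^T)|| with ||c|| = 1; its zero is the minimum time."""
    return x0 * math.exp(-a * t) - (1.0 - math.exp(-a * t)) / a

# Bisection for the zero of the gap, then compare with the closed form.
lo, hi = 0.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if gap(mid) > 0.0:
        lo = mid
    else:
        hi = mid

t_star = math.log(1.0 + a * x0) / a   # closed-form minimum time
assert abs(lo - t_star) < 1e-9
```

The same t* is obtained by running the bang-bang control u ≡ -1 from x0 and recording when the state hits the origin, which is the duality the theorem asserts.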
Theorem 7.6.4 In (7.5.1), assume:
(i) (7.5.1) is Euclidean controllable.
(ii) (7.5.11) is uniformly asymptotically stable.
(iii) (7.5.1) is normal.
(iv) System (7.5.11) is complete.
If U_∞ = L_∞([σ, t_1], C^m), t* is the minimum time, and u* is the time-optimal control, then u* is uniquely determined by

u_j*(t) = sgn[cᵀ U(t*, t) b_j(t)],  σ ≤ t ≤ t*,

for some c ≠ 0 and each j = 1, ..., m. If U_p ⊂ L_p([σ, t_1], E^m), p > 1, 1/p + 1/q = 1, is such that u ∈ U implies ||u||_p ≤ 1, then the time-optimal controls are uniquely determined by (7.6.8), where

g(t, c) = cᵀ U(t*, t) B(t).    (7.6.9)

Proof: From Theorem 7.6.3, if u* is a time-optimal control and t* the minimum time, then

S_{t*}*(cᵀ)(u*) = cᵀ (S_{t*}(u*)) = cᵀ y(t*) = ||S_{t*}*(cᵀ)||.

But then u* ∈ U_∞ is such that

cᵀ y(t*) = ∫_σ^{t*} cᵀ U(t*, t) B(t) u*(t) dt = cᵀ ∫_σ^{t*} U(t*, t) B(t) u*(t) dt.    (7.6.11a)

This implies that u* is of the form, a.e.,

u_j*(t) = sgn[cᵀ U(t*, t) b_j(t)],  σ ≤ t ≤ t*,
when cᵀ U(t*, t) b_j(t) ≠ 0 for c ≠ 0 and each j = 1, ..., m, which is true since the system is normal and (7.6.11) complete. For U_p, we obtain from (7.6.3) in Theorem 7.6.2 that

cᵀ y(t*) = ||S_{t*}*(cᵀ)|| = ( ∫_σ^{t*} |cᵀ U(t*, t) B(t)|^q dt )^{1/q} = ∫_σ^{t*} cᵀ U(t*, t) B(t) u*(t) dt,    (7.6.11)

where 1/p + 1/q = 1 and g(t, c) = cᵀ U(t*, t) B(t). By Hölder's inequality,

∫_σ^{t*} g(t, c) u*(t) dt ≤ ||g||_q ||u*||_p ≤ ||g||_q.

But then (7.6.11) is valid, and so we have equality everywhere in the above estimates. The control u* that gives the equality is (by inspection) the one in (7.6.8). This is the time-optimal control and, as observed before, it is uniquely determined when the system is normal. The proof is complete.
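The "equality everywhere" step for U_p is Hölder's inequality saturating, which happens exactly when |u| is proportional to |g|^{q-1}. The sketch below checks this for an arbitrary sample index g (an illustration, not the system's actual index): with u = |g|^{q-1} sgn(g) / ||g||_q^{q/p}, one gets ||u||_p = 1 and ∫ g u dt = ||g||_q.

```python
import numpy as np

p = 3.0
q = p / (p - 1.0)                       # conjugate exponent: 1/p + 1/q = 1
t = np.linspace(0.0, 1.0, 20001)
g = np.exp(-t) - 0.5                    # sample index g(t, c), with a sign change

I = np.trapz(np.abs(g) ** q, t)         # integral of |g|^q
g_q = I ** (1.0 / q)                    # ||g||_q
u = np.abs(g) ** (q - 1.0) * np.sign(g) / g_q ** (q / p)

u_p = np.trapz(np.abs(u) ** p, t) ** (1.0 / p)   # ||u||_p
val = np.trapz(g * u, t)                          # the payoff integral

assert abs(u_p - 1.0) < 1e-9            # the Hoelder bound is attained
assert abs(val - g_q) < 1e-9
```

The identities hold because p(q - 1) = q, so |u|^p is a rescaling of |g|^q; this is the algebraic reason the saturating control is unique a.e. when g vanishes only on a null set.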
Proof of Theorem 7.6.2: Because of the Euclidean controllability and stability assumptions, there is indeed a t_1 such that the solution z of (7.5.1) can be steered to zero at t_1. We observe that in this case y(t), defined in (7.5.19), satisfies y(t) = z¹(t) - z(t, σ, φ, 0) = -z(t_1, σ, φ, 0) ≡ y¹ ≠ 0 and y(t_1) ∈ R(t_1). Observe that u* in (7.6.7) is a boundary control in the sense that if

z(t_1, c) = ∫_σ^{t_1} U(t_1, t) B(t) u*(t, c) dt,

then z(t_1, c) is on the boundary of the reachable set R(t_1), so that

cᵀ z(t_1, c) > cᵀ y  for all y ∈ R(t_1),  y ≠ z(t_1, c).

With y(t_1) ≡ y¹ ∈ R(t_1), we can extend this to reach the boundary as follows: Let

α = max{β : β y(t_1) ∈ R(t_1)}.    (7.6.13)

Since y(t_1) ≠ 0, α can be assumed to be positive, and obviously α y(t_1) is a boundary point of R(t_1). This means that α y(t_1) = z(t_1, c) for some c, where

cᵀ y(t_1) = 1 = -cᵀ z(t_1, σ, φ, 0).    (7.6.14)

It is easy to verify that the control (7.6.15) steers φ to 0 in time t_1 while minimizing E_1(u(t_1)). Also (7.6.16) holds, where

α = min{F(t_1, c) : c ∈ P = {c ∈ E^n : cᵀ y¹ = 1}}.    (7.6.17)
Details of the verification are the same as in the case of ordinary linear differential equations (see Neustadt [38]). We conclude that (7.6.18) gives the minimum-energy control, where c* ∈ E^n is the vector in P at which the minimum in (7.6.17) is attained. Because u*(t) = sgn(g(t, c)), we deduce (7.6.19), where c* is the minimizing vector in (7.6.17). Normality ensures that ū is uniquely determined. If y(t_1) = y¹ = 0, then the choice ū ≡ 0 is appropriate. This completes the proof of Theorem 7.6.2.

Proof of Theorem 7.6.3: In this case, U_p in (7.5.7) is used to define the reachable set
which, as we have noted earlier, is closed. Just as in the proof of Theorem 7.5.2, the time-optimal control u*(t, c) in (7.6.8) is a boundary control that is uniquely defined. Thus if

z(t_1, c) = ∫_σ^{t_1} U(t_1, t) B(t) u*(t, c) dt,

then z(t_1, c) ∈ ∂R(t_1). With α as defined in (7.6.13), it is easy to verify that the minimum-effort control is (7.6.20), where (7.6.21) and (7.6.22) hold and F_2(t_1, c) is as given in (7.5.26). The vector c* is that which minimizes the function in (7.6.22). Because u*(t, c*) in (7.6.20) is given by (7.6.8), the minimum-effort strategy is given by

ū_j(t, c*) = (1/α) |g_j(t, c*)|^{q/p} sgn(g_j(t, c*)),    (7.6.23)

where g_j(t, c*) is the j-th component of the index g(t, c*) = c*ᵀ U(t_1, t) B(t).
Through Theorem 7.6.2 and the general ideas of Hájek and Krabs [36], we can link the time-optimal controls and the minimum-effort controls. In some situations we see that they are the same. We observe that ||S_t*(cᵀ)|| is given by (7.6.11), and it is to be minimized in (7.6.22) and (7.6.17) over the hyperplane P. We are led to the following definition: For each t ∈ [0, t_1] let

α(t) = inf{||S_t*(cᵀ)|| : c ∈ P},  where P = {c ∈ E^n : cᵀ y¹ = 1}.    (7.6.24)

Theorem 7.6.5 Let the prevailing assumptions I-V hold for (7.5.1). If t* is the minimum time, then

α(t*) = 1.    (7.6.25)

Remark 7.6.2: Note that 1 is the norm bound of the control set U, and t_1 is defined in the minimum-effort problem in (7.6.2).

Proof: Quite rapidly from (7.6.24), one has that

cᵀ y¹ ≤ (1/α(t)) ||S_t*(cᵀ)||,  ∀ c ∈ E^n.    (7.6.26)

From an elementary approximation-theory argument we can verify that for each t ∈ (0, t_1] there exists some c(t) ∈ P such that

||S_t*(cᵀ(t))|| = α(t) > 0.    (7.6.27)

Also there exists u_t ∈ L such that S_t(u_t) = y¹ and ||u_t|| = 1/α(t). From (7.6.27) we argue that there exists a c(t) ∈ E^n for which equality holds in (7.6.26). Invoke Theorem 7.6.1 and Theorem 7.6.2 to validate (7.6.25).
Theorem 7.6.6 Let the prevailing assumptions I-VI hold for System (7.5.1), in which (7.5.11) is complete. The optimal control u* that steers φ ∈ C to 0 in minimum time t* while minimizing E_1(u(t*)) is given by

u*(t) = sgn[g(t, c*)],  σ ≤ t ≤ t*,    (7.6.28)

where c* is the vector in P that minimizes (7.6.17), i.e.,

α = min_{c ∈ P} F(t*, c),    (7.6.29)

F is as in (7.6.30), and

g(t, c) = cᵀ U(t*, t) B(t).    (7.6.31)

The optimal strategy that is time-optimal and minimizes E_2(u(t*)) is u* with components

ū_j(t) = k |g_j(t, c*)|^{q/p} sgn g_j(t, c*),    (7.6.32)

where k is given by (7.6.33), with c* the minimizing vector in

α = min_{c ∈ P} F_1(t*, c),    (7.6.34)

and F_1 is as in (7.6.35).

Proof: By assumption, the feasible t_1 of (7.5.2) in the minimum-effort problem is the minimum time t*. But then

α(t*) = inf_{c ∈ P} ||S_{t*}*(cᵀ)|| = 1

by (7.6.25) in Theorem 7.6.5. Because the minimum-fuel controls are the functions

ū(t) = u*(t, c*) / α(t*) = u*(t, c*)

by (7.6.18) or (7.6.20), the corresponding expressions in (7.6.7) and (7.5.8) for the time-optimal controls of Theorem 7.5.4 prove that (7.5.8) and (7.5.32) are correct. The following fundamental principle is now obvious: In our dynamics (7.5.1), the time-optimal control is also the control that minimizes the effort function.
7.7 Optimal Absolute Fuel Function
The solutions of the minimum-time/minimum-fuel problems contained in Theorem 7.5.1, Theorem 7.5.2, Theorem 7.5.3, and Theorem 7.6.4 depend on the closure of the reachable sets. Since the reachable sets are also convex, optimal controls are boundary controls in the sense that they generate points on the boundary of R(t_1). But if the effort function is defined by E_3(u(t_1)) = ||u||_1 in (7.5.8), as the absolute fuel function, and if U is defined by

U = {u measurable : ||u||_1 ≤ 1},    (7.7.2)

then the reachable set

R(t_1) = { ∫_σ^{t_1} U(t_1, t) B(t) u(t) dt : u ∈ U }    (7.7.3)

is not closed, but open. Because the reachable set is open, its boundary points can only be "reached" by convex combinations of delta functions. Optimal controls that yield boundary points must therefore be delta or Dirac functions, which are impulsive in nature. If we admit such functions as controls in absolute-fuel minimization problems, optimal controls exist. If we rule them out and R(t) is open, there is no integrable optimal control. We consider

x'(t) = A0 x(t) + A1 x(t - h) + B u(t).    (7.7.4)
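Why openness forces impulsive optimizers can be seen in a one-line computation: over the ball ||u||_1 ≤ 1, the supremum of ∫ g u dt equals ||g||_∞, approached by unit-mass spikes narrowing onto the maximizer of |g| but attained by no integrable control. The index g below is an illustrative choice, not from the text.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)
g = np.sin(np.pi * t)          # ||g||_inf = 1, attained only at t = 0.5

vals = []
for eps in (0.2, 0.05, 0.01):
    # Unit-mass spike of width eps centered at the maximizer of |g|.
    spike = np.where(np.abs(t - 0.5) <= eps / 2.0, 1.0 / eps, 0.0)
    vals.append(np.trapz(g * spike, t))

# The payoff increases toward ||g||_inf = 1 yet stays strictly below it.
assert vals[0] < vals[1] < vals[2] < 1.0
```

In the limit the spikes converge (weakly) to a Dirac impulse at t = 0.5, which is exactly the impulsive "boundary control" described above.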
Lemma 7.7.1 Suppose (7.7.4) is metanormal, i.e., for each j = 1, ..., m,

rank {Q_1j(s), ..., Q_nj(s), s ∈ [0, t_1)} = n,

where Q_kj is as defined in (7.5.43). Then the reachable set R(t) of (7.7.4) is open, bounded, convex, and symmetric.
Proof: The symmetry, boundedness, and convexity of R(t) are easy to verify. We prove that R(t) is open. Assume on the contrary that R(t_1) is not open and that y¹ ∈ R(t_1) is a boundary point. Let c ≠ 0 be an outer normal to (the closure of) R(t_1) at y¹. With this c, define the index of (7.7.4), g(t, c) = cᵀ U(t_1, t) B. Obviously, u_0 is a control that maximizes:

∫_σ^{t_1} g(t, c) u(t) dt ≤ ∫_σ^{t_1} g(t, c) u_0(t) dt  whenever ||u||_1 ≤ 1.    (7.7.6)

But the mapping u(·) → ∫_σ^{t_1} g(t, c) u(t) dt is a linear functional on L_1([σ, t_1]) with norm ||g||_∞. The inequality (7.7.6) implies that the value ||g||_∞ is attained at the element u_0 of the unit ball in L_1. Therefore we have equality throughout, so that

(||g||_∞ - |g_j(t, c)|) · |u_{0j}(t)| = 0  a.e. for each j = 1, ..., m,  t ∈ [σ, t_1].

Because the system is metanormal, |g_j| is constant (= ||g||_∞) only on a set of measure zero. Hence u_0 = 0 a.e. on [σ, t_1]. But with u_0 = 0, our boundary point is y¹ = 0. This contradicts the containment (7.7.7), valid whenever 0 < α < β ≤ m t_1, which is an easy consequence of metanormality and completeness and can be proved by the methods of Hájek [35, p. 432], with the sets R_α as defined in (7.7.25) below. Thus the reachable set R(t_1) is open.

Proof of Theorem 7.5.4: Because R(t_1), the reachable set using L_1 controls with bound 1, is open, the minimal fuel bound can never be attained unless y¹ = -z(t_1, σ, φ, 0) = 0. Continuing, if g is the index, i.e., g(t) = g(t, c) = cᵀ U(t_1, t) B, then

cᵀ y¹ = cᵀ y(t_1) = ∫_σ^{t_1} g(t, c) u(t) dt.    (7.7.10)
The control that realizes a boundary point of R(t_1) definitely maximizes this integral:

∫_σ^{t_1} g(s) u(s) ds ≤ ∫_σ^{t_1} g(s) u_0(s) ds    (7.7.11)

whenever ||u||_1 ≤ 1, i.e., over u ∈ U. We have the estimate (7.7.12). If t* is a minimum time for the time-optimal control problem with u ∈ U, then

cᵀ y(t*) = ||S_{t*}*(cᵀ)||_∞,    (7.7.13)

where

||S_{t*}*(cᵀ)||_∞ = max_{1 ≤ j ≤ m} sup_t |g_j(c, t)|.    (7.7.14)

If equality holds in (7.7.12), then impulse functions applied at the points where |g_j| is largest maximize (7.7.11). The maximum values in (7.7.14)
Synthesis of Time-Optimal and Minimum-Effori Control
27 1
occur at multiple j and at multiple instants s_{ji}, i = 1, 2, ..., N_j, where N_j is taken to be zero if g_j does not attain the maximum. Because of these, the time-optimal controls (which are "boundary" controls) are the impulsive controls u* of (7.7.15). For the minimum-fuel problem we deduce, as before, that the optimal control is

ū(t) = u*(t, c*) / α,    (7.7.16)

where

1/α = M(t_1) = min E_3(u(t_1))    (7.7.17)

and

α(t_1) = min_{c ∈ P} F(t_1, c),  P = {c ∈ E^n : cᵀ y(t_1) = 1}.    (7.7.18), (7.7.19)

In (7.7.16), c* is the minimizing vector in (7.7.17). The proof is complete.

Remark 7.7.1: If U in (7.7.2) is replaced by

U = { u measurable, u(t) ∈ E^m : ∫_σ^{t_1} |u_j(t)| dt ≤ 1,  j = 1, ..., m },    (7.7.20)

then (7.7.21) holds. The maximum in (7.7.21) occurs, as before, at multiple instants τ_{ji}, i = 1, 2, ..., M_j, where M_j ≥ 1. The maximizing impulsive controls that approximate reachable points on the boundary of R(t) are the impulses of (7.7.22).
Therefore, with M defined in (7.7.23a) and

α(t_1) = min_{c ∈ P} F(t_1, c) = F(c*),    (7.7.23b)

the optimal control that minimizes the absolute fuel is

ū = u*(t, c*) / α(t_1),    (7.7.24)

where c* and α are determined by (7.7.23b).
Proof of Theorem 7.5.5: Recall the definitions of the sets

R_α,  α ≥ 0,    (7.7.25)

These sets are nonvoid, convex, and symmetric about 0. Clearly R_α ⊂ R(t_1), and equality holds if α ≥ m t_1. Indeed, if α ≥ m t_1 and y ∈ R(t_1), then y corresponds to some admissible control u such that

y = ∫_σ^{t_1} U(t_1, t) B u(t) dt,  ||u||_∞ ≤ 1.

This shows that y ∈ R_{m t_1} ⊂ R_α. We also observe that for 0 < α ≤ β,

R_α ⊂ R_β,  R_α ⊃ (α/β) R_β.    (7.7.27)

The first is obvious from the definition. The second follows from the containment

λ R_γ + (1 - λ) R_δ ⊂ R_{λγ + (1 - λ)δ}  for all γ, δ ≥ 0,  0 ≤ λ ≤ 1:

take δ = 0, γ = β, and λ = α/β. From (7.7.27) we deduce that

R_α ⊃ (α / m t_1) R(t_1),  0 ≤ α ≤ m t_1.

If (7.5.1) is controllable, then R(t_1) has a nonvoid interior, and this forces R_α to have a nonvoid interior for each α > 0. From the usual weak-star compactness argument we can prove that R(t_1) is compact. Other properties of R(t_1) are contained in the next lemma.
Lemma 7.7.2 For each φ ∈ C such that z(t_1, σ, φ, 0) ∈ R(t_1) (and for no other points) there is an admissible control ū that steers φ to 0 at time t_1 while minimizing E_3(u(t_1)) = ||u||_1. Also z(t_1, σ, φ, 0) ∈ ∂R_θ for θ = ||ū||_1.

Proof: By definition, φ can be steered to zero at time t_1 by a control if and only if z(t_1, σ, φ, 0) ∈ R(t_1). Let

θ = inf{α ≥ 0 : z(t_1, σ, φ, 0) ∈ R_α},    (7.7.28)

where

R(t_1) = ∪_{α > 0} R_α.    (7.7.29)

It is clear that there is a sequence of admissible controls u(·), each steering φ to 0 at time t_1, with ||u||_1 → θ+. The usual weak-compactness argument yields an optimal control ū with θ = ||ū||_1 = E_3(ū(t_1)) and

E_3(ū(t_1)) ≤ E_3(u(t_1)),  ∀ u.

To see this, interpret admissible controls as points in L_2-space, i.e., L_2([σ, t_1], E^m). Thus the conditions

||u||_∞ ≤ 1,  ||u||_1 ≤ θ + ε

determine a convex set that is bounded in the L_2 norm. It is weakly closed, since if a sequence converges weakly, then a sequence of convex combinations converges strongly in L_2, and as a result a subsequence converges pointwise a.e. Thus z(t_1, σ, φ, 0) ∈ R_θ. We now verify that z(t_1, σ, φ, 0) ∈ ∂R_θ. We assume this is false and aim at a contradiction: z(t_1, σ, φ, 0) ∈ Int R_θ. This implies that for sufficiently small λ - 1 > 0, λ z(t_1, σ, φ, 0) ∈ R_θ. But if 0 < α ≤ θ, then, by what was proved before,

R_θ ⊂ (θ/α) R_α.

Let α = θ/λ; then λ z(t_1, σ, φ, 0) ∈ R_α even though α < θ, contradicting the definition of θ. The proof is complete.
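The containments (7.7.27) used in this proof can be sighted in a scalar sketch (illustrative kernel and horizon, not from the text): with kernel e^{-(T-t)} on [0, T], |u| ≤ 1 and ||u||_1 ≤ α, the reachable set is the interval R_α = [-r(α), r(α)] with r(α) = 1 - e^{-α} for α ≤ T, since fuel is best spent where the kernel is largest.

```python
import math

T = 4.0   # illustrative horizon

def r(alpha):
    """Radius of R_alpha: concentrate u = 1 on [T - alpha, T]."""
    return 1.0 - math.exp(-min(alpha, T))

for alpha, beta in ((0.5, 1.0), (1.0, 3.0), (2.0, 4.0)):
    assert r(alpha) <= r(beta)                    # R_alpha inside R_beta
    assert (alpha / beta) * r(beta) <= r(alpha)   # (alpha/beta) R_beta inside R_alpha
```

The second containment is the concavity of r through the origin, which is exactly the relation R_θ ⊂ (θ/α) R_α invoked in the proof of Lemma 7.7.2.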
System (7.9.1) is function-space null controllable with constraints on the interval [σ, t_1] if for each φ ∈ C there is a control u ∈ L_∞([σ, t_1], E^m) such that the solution of (7.9.1) satisfies x_σ(σ, φ, u) = φ, x_{t_1}(σ, φ, u) = 0. The qualifying phrase "on the interval [σ, t_1]" is dropped if these hold on every interval with t_1 > h. In the above definition, we obtain Euclidean controllability by replacing φ by x¹ ∈ E^n. We now investigate the existence of optimal controls in the state space
C. Theorem 7.9.2 (Existence of a Time-Optimal Control) (i) Assume that in (7.9.1), conditions (i) - (iv) of Theorem 7.9.2 hold. (ii) For some T 2 u, System (7.9.1) is function-space z-controllable on [a, T ] with constraints. Then there exists an optimal control. Proof: Let
t* = inf{t : σ ≤ t ≤ T, x_t ∈ 𝒜(t,σ)}.

Because of (ii) there is at least one T such that x_T ∈ 𝒜(T,σ); as a consequence t* is well defined as a finite number. We now prove that x_{t*} ∈ 𝒜(t*,σ). Let {t_n} be a sequence of times that converge to t* such that x_{t_n} ∈ 𝒜(t_n,σ). For each n, let xⁿ be a solution of the contingent equation (7.9.4) with x_{t_n} = xⁿ_{t_n}. Then, by condition (7.9.3), the corresponding ξ is bounded almost everywhere. Since the target x_t is continuous in t, x_{t_n} → x_{t*} as t_n → t*.
For h > 0 let C = C([−h,0],Eⁿ) be the Banach space of continuous functions defined on [−h,0] with values in Eⁿ, equipped with the sup norm. If x is a function on [−h,T], define a Banach space C([−h,T],Eⁿ) analogously, where T > h, and for each t ∈ [0,T] let x_t ∈ C be defined by x_t(s) = x(t+s), −h ≤ s ≤ 0. Consider the following problem: Minimize
∫₀ᵀ f⁰(t, x_t, u(t)) dt   (7.11.1)

subject to the constraints

ẋ(t) = f(t, x_t, u(t)) a.e. on [0,T],   (7.11.2)

x₀ = φ ∈ W_∞^{(1)}([−h,0],Eⁿ),   (7.11.3)

x_T = ψ ∈ C^{(1)}([−h,0],Eⁿ),  u(t) ∈ Cᵐ a.e. on [0,T],   (7.11.4)

where

f¹(t, x_t) ≤ 0 on [0,T],   (7.11.5)

and

x ∈ W_∞^{(1)}([−h,T],Eⁿ),  u ∈ L_∞([0,T],Eᵐ).   (7.11.6)
For this problem we assume that W_∞^{(1)}([−h,T],Eⁿ) is the state space of absolutely continuous n-vector valued functions on [−h,T] with essentially bounded derivatives, and L_∞([0,T],Eᵐ) is the set of essentially bounded m-vector valued functions. Also, φ ∈ C¹([−h,0],Eⁿ) if φ is continuously differentiable. The functions f⁰, f, f¹ are mappings

f⁰ : [0,T] × C × Eᵐ → E,
f : [0,T] × C × Eᵐ → Eⁿ,
f¹ : [0,T] × C → Eᵖ,

which we assume to satisfy the following basic conditions: f, f⁰ are continuous, Fréchet-differentiable with respect to their second argument, and
292
Stability and Time-Optimal Control of Hereditary Systems
continuously differentiable with respect to their third argument, the derivative being assumed continuous with respect to all arguments. Also f¹ : [0,T] × C → Eᵖ is continuous and continuously Fréchet-differentiable with respect to its second argument. Denote by NBV_k([a,b]) and NBV_{k×p}([a,b]) respectively the k-vector and the k×p-matrix valued functions whose components are of bounded variation, left continuous at each point of [a,b], and normalized to zero at t = b. Associated with (7.11.2) is the linear equation

ẋ(t) = L(t, x_t) + B(t)u(t),  0 ≤ t ≤ T,   (7.11.7)

x₀ = 0,   (7.11.8)

where u ∈ U, x ∈ X, with these subspaces U, X defined as follows:

X = {x ∈ W_∞^{(1)}([−h,T],Eⁿ) : x_T ∈ C¹([−h,0],Eⁿ)},
U = {u ∈ L_∞([0,T],Eᵐ) : u_T ∈ C([−h,0],Eᵐ)}.

In (7.11.7),

L(t, x_t) = [f_φ(t, x_t*, u*(t))]x_t,  B(t) = f_u(t, x_t*, u*(t))

are the Fréchet derivatives with respect to φ and u respectively. Thus there exists η(t,·) ∈ NBV_{n×n}([−h,0]) such that L(t, x_t) = ∫_{−h}^0 d_θ η(t,θ) x(t+θ). Angell and Kirsch [45] have proved the following fundamental result:
Theorem 7.11.1 (i) Let (x*, u*) be the optimal solution pair of the problem (7.11.1)–(7.11.6). (ii) Assume that all the smoothness conditions are satisfied. (iii) Suppose the map t → B⁺(t) is continuous on [T−h,T], where B⁺(t) is the generalized inverse of B(t), and rank B(t) = n, ∀ t ∈ [T−h,T]. Then there exists

(λ, a, μ, ν, q) ∈ E × L_∞([0,T],Eⁿ) × NBV_p[0,T] × NBV_n[0,T] × Eⁿ,
(λ, a, μ, ν, q) ≠ (0, 0, 0, 0, 0),

such that λ ≥ 0, μ = (μ₁,…,μ_p), each μ_j is nondecreasing on [0,T], μ_j is constant on every interval where f_j^{(1)}(t, x_t*) < 0,

a(t) = λ∫_t^T η⁰(s, t−s) ds + ∫_t^T η¹(s, t−s)ᵀ dμ(s) − ∫_t^T η²(s, t−s)ᵀ a(s) ds + ∫_{T−h}^T η²(s, t−s)ᵀ dν(s−T) − q,   (7.11.9)
Synthesis of Time-Optimal and Minimum-Effort Control
293
and

λ∫₀ᵀ B₀(t)u(t) dt + ∫₀ᵀ aᵀ(t)B(t)u(t) dt − ∫_{T−h}^T dν(t−T)ᵀ B(t)u(t)
  ≤ λ∫₀ᵀ B₀(t)u*(t) dt + ∫₀ᵀ aᵀ(t)B(t)u*(t) dt − ∫_{T−h}^T dν(t−T)ᵀ B(t)u*(t)   (7.11.10)

for all u ∈ U_ad. The multiplier λ can be taken to be 1. We now state a Kuhn–Tucker type necessary optimality theorem on which the maximum principle is based. The notation is first stated.
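Condition (iii) of Theorem 7.11.1 requires B(t) to have full row rank and its generalized inverse B⁺(t) to vary continuously. A minimal numerical sketch of this hypothesis, with an assumed 2×3 constant matrix, using the full-row-rank identity B⁺ = Bᵀ(BBᵀ)⁻¹:

```python
import numpy as np

# Hypothetical n = 2, m = 3 input matrix with full row rank, as required
# by condition (iii): rank B(t) = n on [T-h, T].
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])

n = B.shape[0]
assert np.linalg.matrix_rank(B) == n

# For full row rank, the Moore-Penrose generalized inverse is
# B+ = B^T (B B^T)^{-1}, and it is a right inverse: B B+ = I_n.
B_plus = B.T @ np.linalg.inv(B @ B.T)
print(np.allclose(B @ B_plus, np.eye(n)))         # True
print(np.allclose(B_plus, np.linalg.pinv(B)))     # True
```

When B(t) depends continuously on t and keeps full row rank, the formula above shows why t → B⁺(t) is automatically continuous: it is a composition of continuous matrix operations.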
Definitions: Let X be a real Banach space. (i) Y ⊂ X is a linear subspace if x, y ∈ Y, λ, μ ∈ E implies λx + μy ∈ Y. (ii) Z ⊂ X is an affine manifold if x, y ∈ Z, λ ∈ E implies (1−λ)x + λy ∈ Z. (iii) H ⊂ X is convex if x, y ∈ H, λ ∈ [0,1] implies (1−λ)x + λy ∈ H. (iv) M ⊂ X is a cone (with vertex at 0) if x ∈ M, λ ≥ 0 implies λx ∈ M.
Definition 7.11.1: Let X be a real Banach space and S ⊂ X a subset. (i) The linear span of S, denoted by span(S), is the smallest linear subspace of X that contains S. (ii) The affine hull (or affine span) of S is the smallest affine manifold containing S.
Definition 7.11.2: Let X be a real Banach space. A transformation f : X → E is a functional. The space of all bounded linear functionals f : X → E is called the dual of X and is denoted by X*. We recall that X* is a Banach space with norm ‖f‖ = sup{|f(x)| : ‖x‖ ≤ 1}.
Let X be a real Banach space with X* as dual, and let Y ⊂ X. The interior of Y relative to X will be denoted by Y°, while its interior relative to its closed affine hull will be denoted by Y°°. Let Z ⊂ X be a convex subset of X. Define

Z⁺ = {ℓ ∈ X* : ℓ(z) ≥ 0, ∀ z ∈ Z}.

Let f : X → Y be a map of the Banach space X into the Banach space Y. The Fréchet derivative of f at x* ∈ X is denoted by Df(x*). We use
D_i f(x) to denote the partial Fréchet derivative of f with respect to the ith variable.
Theorem 7.11.2 Let R, Z₁, and Z₂ be Banach spaces, V ⊂ R a closed convex set with nonempty interior relative to its closed affine hull W, and let Y ⊂ Z₁ be a closed convex cone with vertex at the origin and having a nonempty interior in Z₁. Let

g⁰ : R → E,  g¹ : R → Z₁,  g² : R → Z₂,

and let

M = {x ∈ V : g¹(x) ∈ −Y, g²(x) = 0}.

Suppose that g⁰ takes on a local minimum at x* ∈ M, and that g⁰, g¹, and g² are continuously Fréchet differentiable at x*. Suppose that Dg²(x*)W is closed in Z₂, but either of the following two conditions fails: (i) There exists an x⁰ ∈ V°° with Dg²(x*)x⁰ = 0 and (ii) g¹(x*) + Dg¹(x*)x⁰ ∈ −Y°. Also (iii) Dg²(x*)(W) = Z₂. Then there exists a nontrivial triple (λ, ℓ₁, ℓ₂) ∈ E × Z₁* × Z₂* such that (i) λ ≥ 0, ℓ₁ ∈ Y⁺, (ii) ℓ₁ ∘ g¹(x*) = 0, (iii) λDg⁰(x*)x + ℓ₁Dg¹(x*)x + ℓ₂Dg²(x*)x ≥ 0, ∀ x ∈ V − x*.
To apply the multiplier rule to the optimal problem, we define the spaces Z₁ and Z₂ and set

R = X × U,  W = {(x, u) ∈ R : x₀ = φ},  V = {(x, u) ∈ W : u ∈ U_ad}.

Consider the three mappings g⁰ : X × U → E, g¹ : X × U → Z₁, and g² : X × U → Z₂ defined respectively by (7.11.11).
Under the basic prevailing assumptions on f⁰, f¹, f, the mappings g⁰, g¹, g² are continuously Fréchet differentiable at each point (x*, u*) ∈ X × U, and the derivatives are of the form

D_x g^i(x*, u*)x = ∫₀ᵀ L_i(t)x_t dt,  x ∈ X, i = 0, 1,

where L_i(t)x_t = [f^i_φ(t, x_t*, u*(t))]x_t, x_t ∈ C, and

D_u g^i(x*, u*)u = ∫₀ᵀ B_i(t)u(t) dt,  u ∈ U,

where B_i(t) = [f^i_u(t, x_t*, u*(t))].
We note that g^i, i = 0, 1, 2, satisfy the conditions of the multiplier rule of Theorem 7.11.2 relative to the spaces X, U, Z₁, and Z₂. Denote by L^C_∞ = {x ∈ L_∞([0,T],Eⁿ) : x_T ∈ C}, and observe that by Theorem 7.11.2 there exist ℓ ∈ [L^C_∞]*, the dual of L^C_∞, ν ∈ NBV_n([−h,0]), q ∈ Eⁿ, μ ∈ NBV_p([0,T]), and λ ≥ 0, (λ, ℓ, ν, q, μ) ≠ (0, 0, 0, 0, 0), such that μ is nondecreasing and
μ_j is constant on every interval where f_j^{(1)}(t, x_t*) < 0,   (7.11.14)

λ∫₀ᵀ L₀(t)x_t dt + ∫₀ᵀ dμ(t)ᵀ L₁(t)x_t + ℓ(ẋ(·) − L₂(·)x(·)) + ∫_{T−h}^T dν(t−T)ᵀ ẋ(t) + qᵀ x(T) = 0,   (7.11.15)
∀ x ∈ X with x₀ = 0, and

λ∫₀ᵀ B₀(t)u(t) dt + ℓ(B₂(·)u(·)) ≤ λ∫₀ᵀ B₀(t)u*(t) dt + ℓ(B₂(·)u*(·)),  ∀ u ∈ U_ad.   (7.11.16)

That the map Dg²(x*, u*) : (x, u) → (ẋ(·) − L₂(·)x(·) − B₂(·)u(·), x_T) is a surjection is a consequence of condition (iii), which is the criterion for the controllability of (7.11.7) in the space C¹([−h,0],Eⁿ). The proof is essentially contained in Theorem 6.2.2. We now affirm that (7.11.14)–(7.11.16) hold. The function ν is defined on [−h,0]. Outside this interval we set ν(θ) = ν(−h) for θ < −h. We claim that the functional ℓ is of the form

ℓ(x) = ∫₀ᵀ aᵀ(t)x(t) dt − ∫_{T−h}^T dν(t−T)ᵀ x(t),

for x ∈ L^C_∞, a ∈ L_∞([0,T],Eⁿ). Indeed, consider (7.11.15). Let z ∈ L^C_∞. The solution of

ẋ(t) − L₂(t)x_t = z(t) in [0,T],  x₀ = 0,   (7.11.17)

is given by

x(t, z) = ∫₀ᵗ X(t,s)z(s) ds,  t ∈ [0,T],

where X is the fundamental matrix solution of ẋ(t) = L₂(t)x_t. Thus the system (7.11.15) becomes

λ∫₀ᵀ L₀(t)x_t(·,z) dt + ∫₀ᵀ dμ(t)ᵀ L₁(t)x_t(·,z) + ℓ(z) + ∫_{T−h}^T dν(t−T)ᵀ L₂(t)x_t(·,z) + qᵀ x(T, z) = 0.
If we use the representation of L_i and of x(·,z) and change the order of integration, we deduce that
for all z ∈ L^C_∞, where a(t) depends on λ, q, X, μ, ν, η⁰, η¹, η². With this we rewrite (7.11.15) as follows:

λ∫₀ᵀ L₀(t)x_t(·,z) dt + ∫₀ᵀ dμ(t)ᵀ L₁(t)x_t + ∫₀ᵀ aᵀ(t)[ẋ(t) − L₂(t)x_t] dt + ∫_{T−h}^T dν(t−T)ᵀ[ẋ(t) − L₂(t)x_t] + qᵀ x(T) = 0,

for all x ∈ X with x₀ = 0. This is the same as

λ∫₀ᵀ L₀(t)x_t dt + ∫₀ᵀ dμ(t)ᵀ L₁(t)x_t + ∫₀ᵀ a(t)ᵀ[ẋ(t) + ∫_{t−h}^t η²(t, s−t)ẋ(s) ds] dt − ∫₀ᵀ a(t)ᵀ η²(t,−h)x(t−h) dt + ∫_{T−h}^T dν(t−T)ᵀ η²(t,−h)x(t−h) − ∫_{T−h}^T ∫_{t−h}^t dν(t−T)ᵀ η²(t, s−t)ẋ(s) ds + qᵀ x(T) = 0.

Now, for any y ∈ L^C_∞, extend y by zero and then define

x(t) = ∫₀ᵗ y(s) ds for t ∈ E.

Substituting this x into the preceding identity yields an equality, (7.11.18), that holds
for all y ∈ L^C_∞([0,T]). Because the equality in (7.11.18) holds for all y, the integrand has to vanish pointwise. As a consequence, after a change of variables in the double integrals, we have, for almost every s ∈ [0,T],

a(s) − λ∫_s^T η⁰(θ, s−θ) dθ − ∫_s^T η¹(θ, s−θ)ᵀ dμ(θ) + ∫_s^T η²(θ, s−θ)ᵀ a(θ) dθ − ∫_{T−h}^T η²(θ, s−θ)ᵀ dν(θ−T) + q = 0.

This proves equation (7.11.9). To show that (7.11.10) is also valid, we substitute the form of ℓ into (7.11.16). We observe that μ = (μ₁,…,μ_p) and each μ_j is nondecreasing on [0,T]. Because (λ, ℓ, ν, q, μ) = (0, 0, 0, 0, 0) is impossible, (λ, a, μ, ν, q) cannot vanish simultaneously. The proof is complete. As an immediate consequence of this, we state in the next section the solution of the time-optimal control problem.
7.12 The Time-Optimal Problem in Function Space
Theorem 7.12.1 (The Time-Optimal Problem) Consider the following problem: Minimize T subject to the constraints

ẋ(t) = f(t, x_t, u(t)) a.e. on [0,T],  x₀ = φ ∈ W_∞^{(1)}([−h,0],Eⁿ),   (7.12.1)

x_T = ψ ∈ C¹([−h,0],Eⁿ),  u(t) ∈ Cᵐ a.e. on [0,T],

where

Cᵐ = {u ∈ Eᵐ : |u_j| ≤ 1, j = 1,…,m}.

(i) We assume that f is continuous, Fréchet differentiable with respect to its second argument, and continuously differentiable with respect to its third argument, the derivative being assumed continuous with respect to all arguments. Associated with (7.12.1) is the linear equation

ẋ(t) = L(t, x_t) + B(t)u(t),  0 ≤ t ≤ T,  x₀ = 0,   (7.12.2)

where u ∈ U, x ∈ X, and these subspaces are defined as follows:

X = {x ∈ W_∞^{(1)}([−h,T],Eⁿ) : x_T ∈ C¹([−h,0],Eⁿ)},
U = {u ∈ L_∞([0,T],Eᵐ) : u_T ∈ C([−h,0],Eᵐ)}.

In (7.12.2),

L(t, x_t) = [f_φ(t, x_t*, u*(t))]x_t,  B(t) = f_u(t, x_t*, u*(t))

are the Fréchet derivatives with respect to φ and u respectively. As usual,

L(t, x_t) = ∫_{−h}^0 d_θ η(t,θ) x(t+θ).
(ii) Suppose the map t → B⁺(t) is continuous on [T−h,T], where B⁺(t) is the generalized inverse of B(t), and rank B(t) = n, ∀ t ∈ [T−h,T]. Then there exist multipliers a ∈ L_∞([0,T],Eⁿ), ν ∈ NBV_n[0,T], and q ∈ Eⁿ, not all zero, such that

a(t) = −q − ∫_t^T η(s, t−s)ᵀ a(s) ds + ∫_{T−h}^T η(s, t−s)ᵀ dν(s−T),  ∀ t ∈ [0,T],

and

∫₀ᵀ aᵀ(t)B(t)u(t) dt − ∫_{T−h}^T dν(t−T)ᵀ B(t)u(t) ≤ ∫₀ᵀ aᵀ(t)B(t)u*(t) dt − ∫_{T−h}^T dν(t−T)ᵀ B(t)u*(t),

for all u ∈ U_ad.
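The inequality above says that u* maximizes ∫ aᵀ(t)B(t)u(t) dt over the cube |u_j| ≤ 1 (up to the ν-term, which this sketch ignores), which forces the bang-bang form u*_j(t) = sgn[(Bᵀ(t)a(t))_j] wherever the switching function is nonzero. A discretized illustration with assumed sample data for a(t) and B(t):

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized sketch: maximize sum_t a(t)^T B(t) u(t) * h over |u_j| <= 1.
# The costate samples and B are assumed illustrative data.
N, n, m, h = 200, 2, 2, 0.01
t = np.arange(N) * h
a = np.stack([np.sin(t), np.cos(3 * t)], axis=1)            # N x n
B = np.tile(np.array([[1.0, 0.5], [0.0, 1.0]]), (N, 1, 1))  # N x n x m

s = np.einsum('ti,tij->tj', a, B)   # switching function B(t)^T a(t), N x m
u_star = np.sign(s)                 # pointwise maximizer on the cube

def cost(u):
    return h * np.einsum('tj,tj->', s, u)

best = cost(u_star)
trials = [cost(rng.uniform(-1.0, 1.0, size=(N, m))) for _ in range(100)]
print(all(v <= best + 1e-12 for v in trials))   # True
```

Each sampled admissible control scores no better than the sign control, since every term s_j(t)u_j(t) is bounded by |s_j(t)|.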
REFERENCES
1. N. Dunford and J. T. Schwartz, Linear Operators, Part I, Interscience, New York, 1957.
2. H. O. Fattorini, "The Time Optimal Control Problem in Banach Spaces," Applied Mathematics and Optimization 1 (1974) 163-168.
3. R. Gabasov and F. Kirillova, The Qualitative Theory of Optimal Processes, Marcel Dekker, New York, 1976.
4. H. Hermes and J. P. LaSalle, Functional Analysis and Time Optimal Control, Academic Press, New York, 1969.
5. E. B. Lee and L. Markus, Foundations of Optimal Control Theory, John Wiley and Sons, New York, 1967.
6. R. B. Zmood and N. H. McClamroch, "On the Pointwise Completeness of Differential-Difference Equations," J. Differential Equations 12 (1972) 474-486.
7. C. Castillo-Chavez, K. Cooke, W. Huang, and S. A. Levin, "The Role of Long Periods of Infectiousness in the Dynamics of Acquired Immunodeficiency Syndrome," in C. Castillo-Chavez, S. A. Levin, C. Shoemaker (eds.), Mathematical Approaches to Resource Management and Epidemiology (Lect. Notes Biomath.), Springer-Verlag, Berlin, Heidelberg, New York, 1989.
8. J. Hale, Theory of Functional Differential Equations, Springer-Verlag, New York, 1977.
9. E. N. Chukwu, "The Time Optimal Control of Nonlinear Delay Systems," in Operator Methods for Optimal Control Problems, edited by S. J. Lee, Marcel Dekker, Inc., New York, 1988.
10. O. Hájek, "Geometric Theory of Time Optimal Control," SIAM J. Control 9 (1971) 338-350.
11. G. Tadmor, "Functional Differential Equations of Retarded and Neutral Type: Analytic Solutions and Piecewise Continuous Controls," J. Differential Equations 51 (1984) 151-181.
12. S. Nakagiri, "On the Fundamental Solution of Delay-Differential Equations in Banach Spaces," Journal of Differential Equations 41 (1981) 349-368.
13. E. N. Chukwu, "The Time Optimal Control Theory of Functional Differential Equations Applicable to Mathematical Ecology," Proceedings of the Conference on Mathematical Ecology, International Center for Theoretical Physics, Trieste, Italy, edited by T. Hallam and L. Gross.
14. E. N. Chukwu, "Global Behaviour of Retarded Linear Functional Differential Equations," J. Mathematical Analysis and Applications 182 (1991) 277-293.
15. E. N. Chukwu, "Function Space Null-Controllability of Linear Delay Systems with Limited Power," J. Mathematical Analysis and Applications 121 (1987) 293-304.
16. A. Manitius and H. Tran, "Numerical Simulation of a Nonlinear Feedback Controller for a Wind Tunnel Model Involving a Time Delay," Optimal Control Applications and Methods 7 (1986) 19-39.
17. R. Bellman, I. Glicksberg, and O. Gross, "On the 'Bang-Bang' Control Problem," Quarterly of Applied Mathematics 14 (1956) 11-18.
18. H. T. Banks and M. Q. Jacobs, "The Optimization of Trajectories of Linear Functional Differential Equations," SIAM J. Control 8 (1970) 461-488.
19. H. J. Sussman, "Small-Time Local Controllability and Continuity of the Optimal Time Function of Linear Systems," J. Optimization Theory and Applications 53 (1987) 281-296.
20. E. N. Chukwu and O. Hájek, "Disconjugacy and Optimal Control," J. Optimization Theory and Applications 27 (1979) 333-356.
21. H. T. Banks and G. A. Kent, "Control of Functional Differential Equations of Retarded and Neutral Type to Target Sets in Function Space," SIAM J. Control 10 (1972) 567-594.
22. H. T. Banks, M. Q. Jacobs, and C. E. Langenhop, "Characterization of the Controlled States in W₂^{(1)} of Linear Hereditary Systems," SIAM J. Control 13 (1975) 611-649.
23. E. J. McShane and R. B. Warfield, "On Filippov's Implicit Function Lemma," Proc. Amer. Math. Soc. 18 (1967) 41-47.
24. O. Hájek, "On Differentiability of the Minimal Time Functions," Funkcialaj Ekvacioj 20 (1976) 97-114.
25. G. Tadmor, "Functional Differential Equations of Retarded and Neutral Type: Analytic Solutions and Piecewise Continuous Controls," J. Differential Equations 51 (1984) 151-181.
26. L. E. El'sgol'ts and S. B. Norkin, Introduction to the Theory and Application of Differential Equations with Deviating Arguments, Academic Press, New York, 1973.
27. G. Stépán, Retarded Dynamical Systems: Stability and Characteristic Functions, Longman Scientific and Technical, New York, 1991.
28. H. T. Banks and J. A. Burns, "Hereditary Control Problems: Numerical Methods Based on Averaging Approximations," SIAM J. Control 16 (1978) 169-208.
29. H. T. Banks and K. Ito, "A Numerical Algorithm for Optimal Feedback Gains in High Dimensional Linear Quadratic Regulator Problems," SIAM J. Control and Optimization 29 (1991) 499-511.
30. E. N. Chukwu, "Time Optimal Control of Delay Differential Systems in Euclidean Space," preprint.
31. O. Hájek, "Terminal Manifold and Switching Locus," Mathematical Systems Theory 6 (1973) 289-301.
32. D. S. Yeung, "Synthesis of Time-Optimal Controls," Ph.D. Thesis, Case Western Reserve University, 1974.
33. D. S. Yeung, "Time-Optimal Feedback Control," J. Optimization Theory and Applications 21 (1977) 71-82.
34. H. A. Antosiewicz, "Linear Control Systems," Arch. Rat. Mech. Anal. 12 (1963) 313-324.
35. O. Hájek, "L¹-Optimization in Linear Systems with Bounded Controls," J. Optimization Theory and Applications 29(3) (1979) 409-432.
36. O. Hájek and W. Krabs, "On a General Method for Solving Time-Optimal Linear Control Problems," Preprint No. 579, Technische Hochschule Darmstadt, Fachbereich Mathematik, Jan. 1981.
37. A. Manitius, "Optimal Control of Hereditary Systems," in Control Theory and Topics in Functional Analysis, Vol. III, International Center for Theoretical Physics, Trieste, International Atomic Energy Agency, Vienna, 1976.
38. L. W. Neustadt, "Minimum Effort Control Systems," SIAM J. Control 1 (1962) 16-31.
39. W. Rudin, Real and Complex Analysis, McGraw-Hill, New York, 1974.
40. R. G. Underwood and D. F. Young, "Null Controllability of Nonlinear Functional Differential Equations," SIAM Journal of Control and Optimization 17 (1979) 753-768.
41. A. F. Filippov, "On a Certain Question in the Theory of Optimal Control," SIAM J. Control 1 (1962) 76-84.
42. O. Hájek, Control Theory in the Plane, Springer-Verlag Lecture Notes in Control and Information Sciences, New York, 1991.
43. T. S. Angell, "Existence Theorems for a Class of Optimal Problems with Delay," Doctoral Dissertation, University of Michigan, Ann Arbor, Michigan, 1969.
44. T. S. Angell, "Existence Theorems for Optimal Control Problems Involving Functional Differential Equations," J. Optimization Theory and Applications 7 (1971) 149-169.
45. T. S. Angell and A. Kirsch, "On the Necessary Conditions for Optimal Control of Retarded Systems," Appl. Math. Optim. 22 (1990) 117-145.
46. H. T. Banks, "Optimal Control Problems with Delay," Doctoral Dissertation, Purdue University, 1967.
Chapter 8
Controllable Nonlinear Delay Systems

8.1 Controllability of Ordinary Nonlinear Systems
Consider the autonomous linear control system

ẋ(t) = Ax(t) + Bu(t),  x(0) = x₀,   (8.1.1)

with controls measurable functions whose values u(t) lie in the m-dimensional cube

Cᵐ = {u ∈ Eᵐ : |u_j| ≤ 1, j = 1,…,m}.   (8.1.2)

The solution of (8.1.1) is given by

x(t, x₀, u) = e^{At}x₀ + e^{At} ∫₀ᵗ e^{−As}Bu(s) ds.   (8.1.3)

Definition 8.1.1: The attainable set of (8.1.1) is the subset

𝒜(t, x₀) = {x(t, x₀, u) : u measurable, u(t) ∈ Cᵐ, x is a solution of (8.1.1)}

of Eⁿ. If x₀ = 0, we write 𝒜(t, x₀) ≡ 𝒜(t).
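The variation-of-constants formula (8.1.3) can be checked numerically. The sketch below uses an assumed diagonal A (so that e^{At} is the entrywise exponential), an assumed B and constant control, and compares the closed form against a fine Euler simulation; all numbers are illustrative test data.

```python
import math

# Assumed test data: A = diag(lam), B the column (1, 1)^T, u(s) = ubar.
lam = [-1.0, -2.0]
b = [1.0, 1.0]
x0 = [1.0, -1.0]
ubar, t1, steps = 0.5, 2.0, 20000
h = t1 / steps

# Closed form from (8.1.3) for constant control and diagonal A:
# x_i(t1) = e^{lam_i t1} x0_i + (e^{lam_i t1} - 1)/lam_i * b_i * ubar.
x_formula = [math.exp(l * t1) * x0[i] + (math.exp(l * t1) - 1.0) / l * b[i] * ubar
             for i, l in enumerate(lam)]

# Cross-check with a fine Euler simulation of x' = Ax + Bu.
x = list(x0)
for _ in range(steps):
    x = [x[i] + h * (lam[i] * x[i] + b[i] * ubar) for i in range(2)]
print(all(abs(x[i] - x_formula[i]) < 1e-3 for i in range(2)))   # True
```

For a non-diagonal A the same check goes through with a matrix-exponential routine in place of the entrywise exponentials.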
Definition 8.1.2: The domain 𝒞 of (Euclidean) null controllability of (8.1.1) is the set of all initial points x₀ ∈ Eⁿ, each of which can be steered to 0 in some finite time t₁ with u measurable, u(t) ∈ Cᵐ, t ∈ [0,t₁]:

𝒞 = {x₀ ∈ Eⁿ : x(t₁, x₀, u) = 0 for some t₁, some u with u(t) ∈ Cᵐ}.

Definition 8.1.3: If

0 ∈ Int 𝒞,   (8.1.4)
then (8.1.1) is locally (Euclidean) null controllable with constraints.
Lemma 8.1.1 Assume (8.1.1) is controllable; this holds if and only if

rank [B, AB, …, A^{n−1}B] = n.   (8.1.5)

Then 0 ∈ Int 𝒞.
Proof: Consider the mapping

u → x(t, 0, u) = e^{At} ∫₀ᵗ e^{−As}Bu(s) ds,
T : L_∞([0,t],Eᵐ) → Eⁿ,  Tu = x(t, 0, u).
Equation (8.1.1) is controllable if and only if

T(L_∞([0,t],Eᵐ)) = Eⁿ.

But T is a bounded linear map since u → Tu is continuous. By the open mapping theorem [5, p. 99], T is an open map. Therefore if U is an open ball such that U ⊂ L_∞([0,t],Cᵐ), then T(U) is open and T(U) ⊂ T(L_∞([0,t],Cᵐ)) = 𝒜(t). But 0 ∈ T(U) ⊂ 𝒜(t). As a consequence,

0 ∈ Int 𝒜(t).   (8.1.6)

It readily follows from (8.1.6) that (8.1.4) is valid. Indeed, assume that 0 is not contained in Int 𝒞. Then there is a sequence {x₀ₙ}₁^∞, x₀ₙ ∈ Eⁿ, x₀ₙ → 0 as n → ∞, and no x₀ₙ is in 𝒞. The trivial solution is a solution of (8.1.1), so that 0 ∈ 𝒞. Hence x₀ₙ ≠ 0 for any n. We also have that 0 ≠ x(t, x₀ₙ, u) for any t > 0 and any u ∈ L_∞([0,t],Cᵐ). Thus

yₙ ≡ −e^{At}x₀ₙ ≠ e^{At} ∫₀ᵗ e^{−As}Bu(s) ds

for any n, t > 0, and any u ∈ L_∞([0,t],Cᵐ). It follows that yₙ ∉ 𝒜(t) for any n. But yₙ → 0 as n → ∞. Thus the sequence {yₙ}₁^∞ has the property: yₙ → 0 as n → ∞, yₙ ∉ 𝒜(t) for any t > 0, yₙ ≠ 0 for any n. This means 0 ∉ Int 𝒜(t), a contradiction.
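The rank condition (8.1.5) is immediate to test numerically. A sketch for an assumed 3×3 companion-form pair (A, B):

```python
import numpy as np

# Kalman rank test (8.1.5) for an assumed example in companion form.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])

n = A.shape[0]
# Build the controllability matrix [B, AB, ..., A^{n-1}B].
blocks, M = [B], B
for _ in range(n - 1):
    M = A @ M
    blocks.append(M)
K = np.hstack(blocks)
print(np.linalg.matrix_rank(K) == n)   # True
```

Companion-form pairs with B = (0,…,0,1)ᵀ are always controllable, which the rank computation confirms here.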
Remark 8.1.1: The necessity and sufficiency of the rank condition (8.1.5) for controllability is old and due to Kalman (see [2, p. 74]). The statement in Lemma 8.1.1 can be generalized to nonlinear systems

ẋ(t) = f(t, x(t), u(t)),   (8.1.7)

where f : E × Eⁿ × Eᵐ → Eⁿ is continuous and continuously differentiable in the second and third arguments. We assume all solutions x(t, x₀, u) of (8.1.7) exist and that (t, x₀, u) → x(t, x₀, u) is continuous. The assumptions on f ensure this. In addition to (8.1.7), consider the linearized system

ẋ(t) = A(t)x(t) + B(t)u(t),   (8.1.8)

A(t) = D₂f(t, 0, 0),  B(t) = D₃f(t, 0, 0).   (8.1.9)
Controllable Nonlinear Delay Systems
305
Theorem 8.1.1 Consider (8.1.7), in which f : E × Eⁿ × Eᵐ → Eⁿ (i) is continuous, and continuously differentiable in the second and third arguments; (ii) f(t, 0, 0) = 0, ∀ t ≥ 0; (iii) System (8.1.8), with A(t) and B(t) given by (8.1.9), is Euclidean controllable on [0,t₁]. Then the domain 𝒞 of null controllability of (8.1.7) has 0 ∈ Int 𝒞.
Proof: The solution of

ẋ(t) = f(t, x(t), u(t)),  x(0) = 0,   (8.1.10)

is given by the integral equation

x(t, u) = ∫₀ᵗ f(s, x(s,u), u(s)) ds.

Consider the mapping

T : L_∞([0,t₁],Eᵐ) → Eⁿ

defined by Tu = x(t, u). Because of the smoothness assumptions on f, the partial derivative of x(t, u) with respect to u is given by

D_u x(t, u)v = ∫₀ᵗ [D₂f(s, x(s,u), u(s)) D_u x(s,u)v + D₃f(s, x(s,u), u(s)) v(s)] ds,  t ≥ 0.

On differentiating this with respect to t, we have

(d/dt) D_u x(t, u)v = D₂f(t, x(t,u), u(t)) D_u x(t,u)v + D₃f(t, x(t,u), u(t)) v(t).   (8.1.11)

Because (ii) holds, the function x(t, 0, 0) ≡ 0 is a solution of (8.1.10). We deduce from these calculations that T′(0)v = D_u x(t, 0)v = z(t, v) is a solution of (8.1.8). Therefore T′(0) : L_∞([0,t₁],Eᵐ) → Eⁿ is a surjection if and only if the system (8.1.8) with z(0, v) = 0 is controllable on [0,t₁]. It follows from Graves' Theorem [3, p. 193] that T is locally open: There
is an open ball B_ρ ⊂ L_∞([0,t₁],Cᵐ) of radius ρ centered at 0, and an open ball B_r ⊂ Eⁿ of radius r centered at 0, such that

B_r ⊂ T(B_ρ) ⊂ 𝒜(t),

where

𝒜(t) = {x(t, u) : u ∈ L_∞([0,t₁],Cᵐ), x is a solution of (8.1.10) with x(0, u) = 0}.

We have proved that 0 ∈ Int 𝒜(t). With this result one proves, as in the linear case, that 0 ∈ Int 𝒞. The proof is complete.
Theorem 8.1.1 is a statement on the constrained local controllability of (8.1.7). It is interesting to explore conditions that will yield the global result 𝒞 = Eⁿ. Such a statement is contained in the next theorem.
Theorem 8.1.2 For (8.1.7): (i) Assume that hypotheses (i)–(iii) of Theorem 8.1.1 are valid. (ii) The solution x(t) of

ẋ(t) = f(t, x(t), 0),  x(0) = x₀,   (8.1.12)

tends to x₁ = 0 as t → ∞. Then 𝒞 = Eⁿ; that is, (8.1.7) is globally (Euclidean) null controllable with constraints.
Proof: From Theorem 8.1.1 there is a neighborhood O of zero in Eⁿ that is contained in 𝒞. Because of (ii), each solution x(t, x₀), for any x₀ ∈ Eⁿ, glides into O in some time t₀: x(t₀, x₀) ∈ O ⊂ 𝒞. This point x(t₀, x₀) can then be brought to the precise zero in time t₁. Thus a control u ∈ L_∞([0,t₁],Cᵐ) drives x₀ into 0 in time t₁. This completes the proof.
Remark 8.1.2: Conditions are available for the behavior

x(t, x₀) → 0 as t → ∞   (8.1.13)

needed for (ii).
Proposition 8.1.1 In (8.1.12), assume:
(i) There exists a symmetric positive definite n×n constant matrix A such that the eigenvalues λ_k(x,t), k = 1,2,…,n, of the matrix

(1/2)(AJ + JᵀA)

satisfy λ_k ≤ −δ < 0, k = 1,2,…,n, ∀ (x,t) ∈ Eⁿ⁺¹, where δ is a constant and

J = ∂f(t, x, 0)/∂x.

(ii) There are constants r > 0 and p, 1 ≤ p ≤ 2, such that

∫_t^{t+r} |f(τ, 0, 0)|^p dτ → 0 as t → ∞.

Then every solution of (8.1.12) satisfies (8.1.13). The proof is contained in [1].
Example 8.1.1: Consider the model of a mass-spring system
ẍ + aẋ + bx = g(u).   (8.1.14)

Let x = (x₁, x₂), where x₁ = x, x₂ = ẋ, so that the mass-spring equation becomes

ẋ = [ x₂, −bx₁ − ax₂ + g(u) ]ᵀ = f(x, u).

Here

A(t) = A = [ 0   1 ]
           [ −b  −a ].

We may write (8.1.8) as ẋ = Ax + Bv, where B = [0, 1]ᵀ and v is in the unbounded closed convex cone g([−1,1]). The rank of

[B, AB] = [ 0   1 ]
          [ 1  −a ]

is 2. In (i), A has its characteristic values with negative real parts. All the hypotheses of Theorem 8.1.2 are seen to be satisfied. The system (8.1.14) is (globally) Euclidean null controllable with constraints.
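Both checks in the example, the rank of [B, AB] and the negativity condition of Proposition 8.1.1, can be verified numerically. Here a = 2, b = 1, and g(u) = u are assumed illustrative choices, and the Lyapunov matrix Lam below is an assumed candidate obtained by solving Lam·J + Jᵀ·Lam = −I for this particular a, b:

```python
import numpy as np

# Mass-spring example with assumed coefficients a = 2, b = 1, g(u) = u.
a, b = 2.0, 1.0
J = np.array([[0.0, 1.0], [-b, -a]])   # Jacobian of f(x, 0) w.r.t. x
B = np.array([[0.0], [1.0]])

# Rank condition: [B, JB] must have rank 2.
K = np.hstack([B, J @ B])
assert np.linalg.matrix_rank(K) == 2

# Assumed Lyapunov matrix: solves Lam J + J^T Lam = -I for a = 2, b = 1.
Lam = np.array([[1.5, 0.5], [0.5, 0.5]])
assert np.all(np.linalg.eigvalsh(Lam) > 0)   # symmetric positive definite
M = 0.5 * (Lam @ J + J.T @ Lam)              # the matrix of Proposition 8.1.1
print(np.linalg.eigvalsh(M).max() < 0)       # eigenvalues <= -delta < 0: True
```

With this Lam, the matrix M works out to −I/2, so δ = 1/2 serves as the constant of hypothesis (i).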
8.2 Controllability of Nonlinear Delay Systems
In this section we resolve the problem of controllability of general nonlinear delay systems in function space, on which the solution of the optimal problem rested. In [11], for example, the problem of minimizing a cost functional in nonlinear delay systems with W₂^{(1)} boundary conditions was explored under the condition of controllability. The time-optimal control theory of nonlinear systems requires controllability (see Theorem 7.9.3). As has been argued, if the controls are L_∞ functions, the natural state space is W_∞^{(1)}. For general nonlinear systems, a natural way of investigating this problem requires the Fréchet differentiability of an operator F : W₂^{(1)} × L₂ → W₂^{(1)}, which is very difficult to realize for nonlinear systems. To overcome this difficulty we use weaker concepts of differentiability in W₂^{(1)} and more powerful open-mapping theorems. The results obtained are analogous to those of ordinary differential systems. We shall first treat the problem in C, the space of continuous functions. Consider the general nonlinear system

ẋ(t) = f(t, x_t, u(t)),   (8.2.1)

where f : E × C × Eᵐ → Eⁿ is continuous and continuously Fréchet differentiable in the second and third arguments. We assume also that there exist integrable functions N_i : E → [0,∞), i = 1, 2, 3, such that the partial derivatives D_i f(t, φ, w) satisfy

‖D₂f(t, φ, w)‖ ≤ N₁(t) + N₂(t)|w|,  ‖D₃f(t, φ, w)‖ ≤ N₃(t),

for all t ∈ E, w ∈ Eᵐ, φ ∈ C. Under these assumptions we are guaranteed existence and uniqueness of a solution through each (σ, φ) ∈ E × C for each u ∈ L_∞([σ,∞),Eᵐ) (see Underwood and Young [15], Chukwu [6], and Section 7.8). Thus for each (σ, φ) ∈ E × C and for each u ∈ L_∞([σ,∞),Eᵐ) there exists a unique response x(σ, φ, u) : E⁺ = [σ,∞) → C with initial data x_σ(σ, φ, u) = φ corresponding to u. The mapping x(σ, φ, u) : E⁺ → C defined by t → x_t(σ, φ, u) represents, for each t, a point of C. We now study the mapping u → x_t(σ, φ, u):

x_t : L_∞([σ,∞),Eᵐ) → C.

Lemma 8.2.1 For each v ∈ L_∞([σ,T],Eᵐ), T > σ, the partial Fréchet derivative of x_t(σ₀, φ₀, u₀) with respect to u satisfies D_u x_t(σ₀, φ₀, u₀)(v) = z_t(σ₀, φ₀, u₀, v), where the mapping t → z(t, σ₀, φ₀, u₀, v) is the unique absolutely continuous solution of the linear differential equation

ż(t) = D₂f(t, x_t(σ₀,φ₀,u₀), u₀(t))z_t + D₃f(t, x_t(σ₀,φ₀,u₀), u₀(t))v(t),  z_{σ₀}(σ₀, φ₀, u₀) = 0.   (8.2.2)

Proof: The solution x of (8.2.1) is a solution of the integral equation

x_σ = φ on [−h, 0],
x(t, σ, φ, u) = φ(0) + ∫_σ^t f(s, x_s(σ,φ,u), u(s)) ds,  t ≥ σ.

Let D_u denote the partial derivative of x(t, σ, φ, u) with respect to u. Then D_u x(t, σ, φ, u) = 0 on [−h, 0], and

D_u x(t, σ, φ, u)v = ∫_σ^t D₂f(s, x_s(σ,φ,u), u(s)) D_u x_s(σ,φ,u)v ds + ∫_σ^t D₃f(s, x_s(σ,φ,u), u(s)) v(s) ds,  t ≥ σ.

(D_i f denotes the Fréchet derivative with respect to the ith argument.) These differentiation formulas follow from results in Dieudonné [9, pp. 107, 163]. On differentiating with respect to t we have

(d/dt)[D_u x(t, σ, φ, u)v] = D₂f(t, x_t(σ,φ,u), u(t)) · D_u x_t(σ,φ,u)v + D₃f(t, x_t(σ,φ,u), u(t)) v(t).

This proves the lemma.
Lemma 8.2.2 For φ ∈ C, u ∈ L_∞([σ,t₁],Eᵐ), t₁ > σ + h, let x(σ, φ, u) be a solution of (8.2.1) with x_σ(σ, φ, u) = φ. Consider the mapping u → x_{t₁}(σ, φ, u),

F : L_∞([σ,t₁],Eᵐ) → C([−h,0],Eⁿ) = C,  F(u) = x_{t₁}(σ, φ, u).

Then the Fréchet derivative

DF(u) = (d/du)F(u) : L_∞([σ,t₁],Eᵐ) → C

has a continuous local right inverse (is a surjective linear mapping) if and only if the variational control system (8.2.2) along the response t → x_t(σ, φ, u), namely

ż(t) = D₂f(t, x_t, u(t))z_t + D₃f(t, x_t, u(t))v(t),  z_σ(σ, φ, u) = 0,
is controllable on [σ,t₁], t₁ > σ + h.
Proof: System (8.2.2) is controllable on [σ,t₁] if and only if the mapping v → z_{t₁}(σ, φ, u, v) is surjective. The lemma follows from the observation in Lemma 8.2.1 that DF(u)v = D_u x_{t₁}(σ, φ, u)(v) = z_{t₁}(σ, φ, u, v).
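The variational system (8.2.2) can be integrated by a method-of-steps (history-buffer) scheme. The sketch below treats the linear autonomous scalar special case ż(t) = A₀z(t) + A₁z(t−h) + Bv(t) with zero initial segment; all coefficients are assumed illustrative data, and the point is only that v = 0 yields z ≡ 0 while a nonzero direction v moves z_t, as the derivative interpretation of Lemma 8.2.1 suggests.

```python
# Euler "method of steps" sketch for (8.2.2) in the linear autonomous
# scalar special case z'(t) = A0 z(t) + A1 z(t - h) + B v(t), z_sigma = 0.
A0, A1, Bc, hdelay = -1.0, -0.25, 1.0, 1.0   # assumed coefficients
dt = 1e-3
lag = int(round(hdelay / dt))                # steps per delay interval
N = int(round(3.0 / dt))                     # integrate on [sigma, sigma + 3]

def z_end(v):
    z = [0.0] * (lag + 1)                    # zero initial segment z_sigma = 0
    for k in range(N):
        znow, zlag = z[-1], z[-1 - lag]      # z(t) and z(t - h)
        z.append(znow + dt * (A0 * znow + A1 * zlag + Bc * v(k * dt)))
    return z[-1]

z_free = z_end(lambda t: 0.0)    # v = 0: z stays identically zero
z_forced = z_end(lambda t: 1.0)  # a nonzero direction v moves z_t
print(z_free == 0.0, abs(z_forced) > 0.1)    # True True
```

The buffer of the last lag + 1 values plays the role of the segment z_t ∈ C.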
In what follows, we assume in (8.2.1) that

f(t, 0, 0) = 0.   (8.2.3)

As a consequence, if φ ≡ 0 in (8.2.1), we have a unique trivial solution when u ≡ 0.
Definition 8.2.1: The C-attainable set of (8.2.1) is a subset of C([−h,0],Eⁿ) given by

𝒜(t, φ, σ) = {x_t(σ, φ, u) : u ∈ L_∞([σ,t],Eᵐ), x is a solution of (8.2.1) with x_σ = φ}.

If φ ≡ 0, we write 𝒜(t, 0, σ) ≡ 𝒜(t, σ). Let Cᵐ denote the unit cube

Cᵐ = {u ∈ Eᵐ : |u_j| ≤ 1, j = 1,…,m}.   (8.2.4)

The constrained C-attainable set is the subset of C defined by

a(t, φ, σ) = {x_t(σ, φ, u) : u ∈ L_∞([σ,t],Cᵐ), x is a solution of (8.2.1) with x_σ = φ}.

Whenever φ ≡ 0, we simply write a(t, 0, σ) = a(t, σ).
Definition 8.2.2: System (8.2.1) is proper on [σ,t], t > σ + h, if

0 ∈ Int a(t, σ).   (8.2.5)
Proposition 8.2.1 System (8.2.1) is proper on [σ,t₁], t₁ > σ + h, whenever

ż(t) = D₂f(t, 0, 0)z_t + D₃f(t, 0, 0)v(t)   (8.2.6)

is controllable on [σ,t₁], t₁ > σ + h.
Proof: Consider the response x(σ, φ, u) to u ∈ L_∞([σ,t₁],Eᵐ) for Equation (8.2.1), and the associated map u → x_{t₁}(σ, 0, u) given by Fu = x_{t₁}(σ, 0, u), where F : L_∞([σ,t₁],Eᵐ) → C. Evidently F(L_∞([σ,t₁],Cᵐ)) = a(t₁, σ). It follows from Lemma 8.2.1 and Lemma 8.2.2 that DF(0) is a surjective mapping of L_∞([σ,t₁],Eᵐ) → C. Therefore (Lang [3, p. 193]) F is locally open: There is an open ball B_ρ ⊂ L_∞([σ,t₁],Eᵐ) containing zero, of radius ρ, and an open ball B_r ⊂ C containing zero, of radius r, such that B_r ⊂ F(B_ρ). Because L_∞([σ,t₁],Cᵐ) contains an open ball containing zero, r > 0 and ρ > 0 can easily be chosen such that

B_r ⊂ F(L_∞([σ,t₁],Cᵐ)) = a(t₁, σ),

so that 0 ∈ Int a(t₁, σ). This completes the proof.
Denote by U the set of admissible controls

U = L_∞([σ,t₁],Cᵐ),

where Cᵐ is as defined in Equation (8.2.4).
Definition 8.2.3: Consider Equation (8.2.1). The domain 𝒟 of null controllability of (8.2.1) is the set of initial functions φ ∈ C such that the solution x(σ, φ, u) of (8.2.1), for some t₁ < ∞ and some u ∈ L_∞([σ,t₁],Cᵐ), satisfies x_σ(σ, φ, u) = φ, x_{t₁}(σ, φ, u) = 0. If 𝒟 contains an open neighborhood of zero, then (8.2.1) is said to be locally null controllable with constraints, the initial data being restricted to a certain small neighborhood of zero.
Proposition 8.2.2 In Equation (8.2.1), assume that
(4
f : E x C x Em + E" is continuous and continuously differentiable in the second and third argument; and there exists integrable functions N i : E --+ [0, oo), i = 1 , 2 , 3 such that
I P z f 4, w)ll 5 N l ( t ) + W
t ) IwI, f o r al l t E E , w E E m , 4 E C .
I I W ( t , 4, w)11 5 N3(t),
(ii) f(t,O,O)= 0, V t 2 a. (iii) System (8.2.6) is controllable in [a, t l ] , tl > u + h. Then the domain of null controllability D of (8.2.1) contains zero in its interior, that is, 0 E Int D ,so that (8.2.1) is locally null controllable with constraints. Proof: Assume that 0 is not contained in the interior of D ,and aim at a contradiction. By this assumption there is a sequence { & } O O , 4" E C, q5n --+ 0 as n --+ 00, and so 4" is in D. Because the trivial solution is a solution of (8.2.1) an account of (ii), 0 E D ,therefore 4,, # 0 for any n . Also z t ( a , 4 , , , u )# 0 for any t > a h , and any u E LM([a,t],Cm)= U . Thus 03, 0. Then (8.2.1) has its domain of null controlla-
Proof: By Proposition 8.2.2, 0 ∈ Int D, so that whenever (8.2.10) is valid, every solution of (8.2.1) with 0 = u ∈ U, that is, every solution of (8.2.9), satisfies x_t(σ,φ,0) → 0 as t → ∞. Thus there exists a finite t0 < ∞ such that x_{t0}(σ,φ,0) ≡ ψ ∈ O ⊂ D, where O is an open ball contained in D with zero as center. With this t0 as initial time and ψ as initial function in O, there exists some control v ∈ U such that the solution x(t0,ψ,v) of (8.2.1) satisfies x_{t0}(t0,ψ,v) = ψ, x_{t1}(t0,ψ,v) = 0. Using

  w(s) = 0 for s ∈ [σ,t0],  w(s) = v(s) for s ∈ [t0,t1],

we have w ∈ L∞([σ,t1],C^m), and the solution x = x(σ,φ,w) satisfies x_σ = φ, x_{t1} = 0. The proof is complete.
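The coast-then-steer concatenation used in this proof can be illustrated numerically. The sketch below is a hypothetical scalar example, not from the text: it coasts along ẋ = -x with u = 0 until the state enters a small ball O, then applies an explicit control that steers ẋ = -x + u to zero in time T = 1.

```python
import numpy as np

# Phase 1 mirrors w(s) = 0 on [sigma, t0]; phase 2 mirrors w(s) = v(s)
# on [t0, t1].  The system x'(t) = -x(t) + u(t) and all constants are
# illustrative choices.

dt, T = 1e-4, 1.0
x, t = 5.0, 0.0
while abs(x) >= 0.1:            # phase 1: u = 0, asymptotic stability does the work
    x += dt * (-x)
    t += dt
t0, x0 = t, x                   # switching time t0 and state psi = x0

# phase 2: u(s) = c*exp(-(T - s)) drives x0 to 0 at time T (exact for the
# continuous system; Euler integration introduces a small error)
c = -2.0 * np.exp(-T) * x0 / (1.0 - np.exp(-2.0 * T))
for k in range(int(round(T / dt))):
    s = k * dt
    u = c * np.exp(-(T - s))
    x += dt * (-x + u)

print(t0, abs(x))               # final |x| is close to 0
```

The two phases concatenated give one admissible control on the whole interval, exactly as w(s) does in the proof.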
We now restrict our attention to the state space W2^(1) and use the control space L2([σ,t1],E^m) for some t1 > σ + h. We have:
Theorem 8.2.2 In (8.2.1), assume that:
(i) f : E × C × E^m → E^n is continuous and continuously differentiable in the second and third arguments; and there exist integrable N_i : E → [0,∞), i = 1,2,3, such that

  ||D2 f(t,φ,w)|| ≤ N1(t) + N2(t)|w|,  ||D3 f(t,φ,w)|| ≤ N3(t),  ∀ t ∈ E, w ∈ E^m, φ ∈ C.

(ii) f(t,0,0) = 0, ∀ t ≥ σ.
(iii) The system

  ẏ(t) = D2 f(t,0,0)y_t + D3 f(t,0,0)v(t),  (8.2.6)

with v ∈ L2([σ,t1],E^m), is controllable in the state space W2^(1)([-h,0],E^n).
Then the system

  ẋ(t) = f(t, x_t, u(t))  (8.2.1)

is locally null controllable with constraints; that is, there exists an open ball O with center zero in W2^(1) such that all initial functions in O can be driven to zero with controls in

  U = {u ∈ L2([σ,t1],E^m) : ||u||_2 ≤ 1}.
Proof: The proof parallels that for the state space C, except that here the map F may not be Fréchet differentiable. More precisely, if u ∈ L2([0,t1],E^m), then the corresponding solution x(u) of (8.2.1) is an absolutely continuous function with derivative in L2([0,t1],E^n), so that x_t(u) ∈ W2^(1)([0,t1],E^n). Consider the mapping

  F : L2([σ,t1],E^m) → W2^(1)([-h,0],E^n),

defined by Fu = x_t(u). To proceed as in the previous case, we need F to be Fréchet differentiable. Because of the norms of L2 and W2^(1), this requirement would place very stringent conditions on f: unless u appears in an affine linear fashion in f, Fréchet differentiability is impossible, by Vainberg [19]. We use the Gateaux derivative, which does not require such stringent conditions, and we shall see that this will suffice. For this, let F'(u) denote (formally) the Gateaux derivative of x_t(u) ∈ W2^(1) with respect to u. Then F'(u) : L2 → W2^(1) exists and is given by

  F'(u)v = D_u x(t,u)(v) = z(t,u,v),

where the mapping t → z(t,u,v) of E into E^n is the unique solution of (8.2.2). This assertion follows from the following argument: The solution x(u) = x(σ,φ,u) of (8.2.1) is given as the integral

  x(t) = φ(0) + ∫_σ^t f(s, x_s, u(s)) ds, t ≥ σ;  x(t) = φ(t), t ∈ [-h,0].  (8.2.11)

We note that
where δf denotes the Gateaux differential of f. Since f : E × C × E^m → E^n is continuously Fréchet differentiable,

  δf(t, u0(t), h(t)) = D_u f(t, x_t(u0), u0(t)) h(t),

where D_u f denotes the Fréchet derivative with respect to u(t) ([14, Lemma 6.3]). Thus we obtain

  δT(u0,h) = lim_{τ→0} (T(u0 + τh) - T(u0))/τ = ∫_0^T D_u f(t, x_t(u0), u0(t)) h(t) dt,

since condition (i) of Theorem 8.2.2 holds and the Lebesgue Dominated Convergence Theorem is available. We now prove that δT(·,·) is continuous at (u0,0) ∈ L2([0,T]) × L2([0,T]), so that by Problem 6.61 of [14], δT(u0,h) = T'(u0)h, where T'(u0) ∈ B(L2, W2^(1)); that is, T'(u0) : L2 → W2^(1) is a bounded linear map from L2 into W2^(1). Suppose (u_k,h_k) → (u0,0). Then

  δT(u_k, h_k) → δT(u0, 0) = 0,

since D_u f(s, x_s(u_k), u_k(s)) h_k(s) → 0 as k → ∞, and assumption (i) of Theorem 8.2.2 and the Lebesgue Dominated Convergence Theorem are valid.
We note, from the continuity of D_u f in u, that for every r > 0 and u ∈ B_r(u0) = {u : ||u - u0||_2 ≤ r}, there exists some finite constant L < ∞ such that

  ||T'(u) - T'(u0)|| ≤ L.  (8.2.12)

We now observe that the solution x(u) of (8.2.11) is Fréchet differentiable with respect to u(t) ∈ E^m. Indeed, if D_u denotes this partial derivative, then by Dieudonné [9] this derivative exists.
Using the assertions previously proved,

  δT(u0,v) = T'(u0)v = D_u x(u)v,

and

  d/dt [T'(u0)v] = D2 f(t, x_t(u), u(t)) D_u x_t(u)v + D3 f(t, x_t(u), u(t)) v.

If u ∈ L2([0,T],E^m), T > h, and u → x_t(u) is the mapping F : L2([0,T],E^m) → W2^(1)([-h,0],E^n) given by Fu = x_t(u), where x(u) is the solution of (8.2.1), then by what we have proved,

  F'(u)v = D_u x_t(u)v = y_t(u,v),

where y is a solution of the variational equation (8.2.2) and

  F'(u) : L2([0,T],E^m) → W2^(1)([-h,0],E^n).

Evidently F'(0) is a bounded linear surjection if and only if the control system (8.2.2) is controllable on [σ,t1]. To sum up, consider the mapping

  F : L2([σ,t1],E^m) → W2^(1).

Its Gateaux derivative

  F'(0) : L2([σ,t1],E^m) → W2^(1)
is a surjection by condition (iii), so that F satisfies all the requirements of Corollary 15.2 of [10, p. 155], an open-mapping theorem. Thus for u0 = 0 ∈ L2([σ,t1],E^m), F(u0) = F(0) = 0 ∈ W2^(1), and to every r > 0 there is an open ball B(0,r) ⊂ L2([σ,t1],E^m) with center 0 ∈ L2 and radius r, and an open ball B(0,ρ) ⊂ W2^(1) with center 0 and radius ρ, such that

  B(0,ρ) ⊂ F(B(0,r)) ⊂ A(t1,σ),

where

  A(t1,σ) = {x_{t1}(σ,0,u) : u ∈ L2([σ,t1],E^m), ||u||_2 ≤ r, x(σ,u) a solution of (8.2.1)}.  (8.2.13)
We have proved that for any r > 0, 0 ∈ Int A(t1,σ), where the attainable set is as defined in (8.2.13) above, with controls in

  U_ad = {u ∈ L2([σ,t1],E^m) : ||u||_2 ≤ r}.

Since r is arbitrary, we can select it to be 1. What we have proved implies that 0 ∈ Int D, where D is the domain of null controllability, i.e., the set of all initial functions φ such that the solution x(σ,φ,u) of (8.2.1) with u ∈ U_ad satisfies x_σ = φ, x_{t1} = 0.
Indeed, suppose not. Then there is a sequence {φ_n} with φ_n → 0 as n → ∞ and no φ_n in D. Since x(σ,0,0) = 0 is a solution of (8.2.1), 0 ∈ D. We can therefore assume that φ_n ≠ 0 ∀ n. Thus x_{t1}(σ,φ_n,u) ≠ 0 for any u ∈ U_ad and any t1 > h. If ξ_n = x_{t1}(σ,φ_n,u), then

  ξ_n = x_{t1}(σ,φ_n,u) → x_{t1}(σ,0,0) = 0

(from continuity and uniqueness of solutions). Because x_{t1}(σ,0,0) = 0 ∈ A(t1,σ), we now have a sequence {ξ_n} ⊂ W2^(1) that has the following property:

  ξ_n → 0 as n → ∞,  ξ_n ≠ 0 for any n,  ξ_n ∉ A(t1,σ) for any t1 > h.

We conclude that 0 ∉ Int A(t1,σ) for any t1 > h, a contradiction. This concludes the proof that (8.2.1) is locally null controllable with constraints.
Remark 8.2.1: If D3 f(t,0,0) ≡ B(t) is continuous, a necessary and sufficient condition for W2^(1) controllability on [σ,t1], t1 > σ + h, is rank B(t) = n on [t1 - h, t1]. See Theorem 6.2.1. This condition is strong. It is interesting to know whether the controllability condition (iii) of Theorem 8.2.2 can be weakened. Indeed, it can be replaced by the closedness of the attainable set of (8.2.6). One such sharp result is obtained for the simple variant of (8.2.6), namely the system (8.2.14) of the following theorem.
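The pointwise rank condition of Remark 8.2.1 is easy to test numerically on a grid. In this sketch B(t) is a made-up 2 × 3 example, not taken from the text.

```python
import numpy as np

# Check rank B(t) = n at grid points of [t1 - h, t1] (cf. Remark 8.2.1).
def B(t):
    # illustrative 2 x 3 matrix function standing in for D3 f(t, 0, 0)
    return np.array([[1.0, 0.0, np.sin(t)],
                     [0.0, 1.0 + t, 0.0]])

n, t1, h = 2, 2.0, 0.5
grid = np.linspace(t1 - h, t1, 101)
full_rank = all(np.linalg.matrix_rank(B(t)) == n for t in grid)
print(full_rank)   # True: rank B(t) = 2 throughout [1.5, 2.0]
```

A grid check of this kind is of course only a sample; an analytic verification is needed on the whole interval.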
Theorem 8.2.3 In (8.2.1), assume conditions (i) and (ii) of Theorem 8.2.2. Also assume:
(iii) Its linearized system is

  ẏ(t) = A0(t)y(t) + A1(t)y(t - h) + B(t)u(t),  (8.2.14)

where t → A0(t), A1(t), B(t) are analytic on [0,t1], and the rank of B(t) is constant on [t1 - h, t1].
(iv) Im A1(t)Γ^i(t)B(t) ⊂ Im B(t), i = 0,...,n-1, for all but isolated points in [σ,t1], where the operator Γ is defined by

  Γ(t) = A0(t) - d/dt,

and Im H denotes the image of H.
Then Equation (8.2.1) is locally null controllable with constraints when the initial data are restricted to a certain subspace Y of W2^(1).
Proof: The proof is as before, but because (iii) - (iv) replace the controllability condition, the mapping F : L2([σ,t1],E^m) → W2^(1) does not have its Gateaux derivative F' a surjection. Instead, F'(L2([σ,t1],E^m)) = Y
is closed, as guaranteed by (iii) and (iv) and Theorem 2 of [13]. As a consequence, Y is a subspace of W2^(1)([-h,0],E^n). Consider the mapping F : L2([0,t1],E^m) → Y of one Banach space into another. We have that F' : L2([σ,t1],E^m) → Y is a surjection and satisfies all the requirements of Corollary 15.2 of [10, p. 155]. The proof is concluded as in the previous proof of Theorem 8.2.2.
The controllability assumption of Theorem 8.2.2 was relaxed in Theorem 8.2.3 by requiring the closure of the attainable set of the linearized equation. The theory of ordinary differential control systems suggests a further relaxation. In this theory, the linear system
  ẋ(t) = Ax(t) + Bu(t)

is locally null controllable with constraints, and the domain of null controllability is open, if and only if rank [B, AB, ..., A^{n-1}B] = n. Here the controls are taken to be in the unit cube C^m. This rank condition, which is equivalent to controllability with L2 unrestrained controls, is also equivalent to null controllability. But in functional differential systems, controllability is not equivalent to null controllability. It is interesting to assume the weaker condition of null controllability. Thus the problem posed for Equation (8.2.1) can be stated as follows: Suppose (8.2.6) is null controllable. Does the same version of Theorem 8.2.2 hold? The next result in the space C states further conditions that will guarantee this. In (8.2.6) we let
  D2 f(t,0,0)x_t = ∫_{-h}^0 d_s η(t,s) x(t + s),  (8.2.15)

where η(t,·) is of bounded variation on [-h,0], left continuous on (-h,0), and η(t,0) = 0. We need the following notation: Let the integer j ∈ [1,n] be fixed. Let x ∈ E^n. Let π1 x be the projection of x onto its first j components, and let π2 x denote the projection of x onto its last n - j components. We write x = (x1,x2), where x1 = π1 x, x2 = π2 x. Define

  π̃1 : C([-h,0],E^n) → C([-h,0],E^j),  π̃2 : C([-h,0],E^n) → C([-h,0],E^{n-j})

by

  (π̃i φ)(s) = π_i φ(s),  i = 1,2.
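The finite-dimensional rank condition rank [B, AB, ..., A^{n-1}B] = n recalled in the discussion above can be verified numerically. The pair (A, B) below is an illustrative example, not from the text.

```python
import numpy as np

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # harmonic oscillator
B = np.array([[0.0],
              [1.0]])         # scalar force input
print(kalman_rank(A, B))      # 2: the pair (A, B) is controllable
```

For the ODE case this single rank computation settles controllability, constrained null controllability, and openness of the domain of null controllability at once, in contrast with the delay case discussed above.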
The following result is extracted from the statement and proof of Theorem 2.1 of Underwood and Young [15]. It is the main contribution of Chukwu [7].
Theorem 8.2.4
Consider the system
and its linearization (8.2.6):

  ẏ(t) = D2 f(t,0,0)y_t + B(t)u(t).  (8.2.6)
Assume:
(i) Conditions (i) - (ii) of Proposition 8.2.2 hold.
(ii) For each t and s, the range of f is contained in the null space of η(t,s), the function defined in (8.2.15).
(iii) For any u ∈ L2([σ,t1],E^m) satisfying u(t) = 0 for τ ≤ t ≤ t1, and any y ∈ C([σ-h,t1],E^n) satisfying y(t) = 0 for τ - h ≤ t ≤ t1, there exists no solution z of ż(t) = f(t, y_t + z_t, u(t)) - D2 f(t,0,0)y_t - B(t)u(t) on [σ-h,t1] that satisfies both z(t1) = 0 and z_{t1} ≠ 0.
(iv) The only solution z ∈ C([σ-h,t1],E^n) of ż(t) = D2 f(t,0,0)z_t, σ ≤ t ≤ t1, with z(t1) = 0 that is constant on [σ-h,σ] is z = 0.
(v) t1 - t0 > 3h, and there exist functions

  f1 : E × E^j × C([-h,0],E^{n-j}) × E^m → E^j,
  f2 : E × E^{n-j} × E^m → E^{n-j},

such that f has the corresponding decomposition (f1,f2) relative to the projections π1, π2 defined above.
(vi) Let (8.2.6) be null controllable on [t0, t1 - 2h].
Then (8.2.1) is locally null controllable with restraints on [t0,t1], i.e., with controls in a unit closed sphere of L2([t0,t1],E^m) with center the origin. Furthermore,
if:
(vii) The trivial solution of

  ẋ(t) = f(t, x_t, 0)  (8.2.16)

is globally, uniformly, exponentially stable, so that for some M ≥ 1, α > 0, the solution of (8.2.16) satisfies ||x_t(σ,φ,0)|| ≤ M e^{-α(t-σ)} ||φ||.
(viii) In (i) - (iii) above, t0 is sufficiently large, so that t1 - t0 > 3h.
Then (8.2.1) is globally null controllable with constraints.
Remark 8.2.2: If (8.2.6) is null controllable on every interval [t0,t1], t1 > t0 + h, then condition (vi) of Theorem 8.2.4 prevails if t1 - t0 > 3h. Conditions for this are available in Theorem 6.2.3 and its corollaries. Thus if we assume that (8.2.6) is null controllable, we need not assume t0 large as in (viii). There are various criteria for the stability requirement of condition (vii). They are contained in [8]. They are needed for the global result. In Theorem 8.2.2, too, a global constrained null controllability theorem can be deduced by imposing the required global stability hypothesis.
8.3
Controllability of Nonlinear Systems with Controls Appearing Linearly
We now consider the special case of (8.2.1) of the form
  ẋ(t) = g(t, x_t, u(t)) + B(t, x_t)u(t).  (8.3.1)

Here g : E × C × C^m → E^n is a nonlinear function, and B(·,·) : E × C → E^{n×m} is an n × m matrix function. We assume that

  f(t, x_t, u(t)) = g(t, x_t, u(t)) + B(t, x_t)u(t)

satisfies all the hypotheses of Section 8.2 for the existence of a unique solution. This is summed up in the following statement:
Proposition 8.3.1 In (8.3.1), assume that:
(i) B : E × C → E^{n×m} is continuous, and B(t,·) : C → E^{n×m} is continuously differentiable.
(ii) There exist integrable functions N_i : E → [0,∞), i = 1,2, such that

  ||D2 B(t,φ)|| ≤ N1(t),  ||B(t,φ)|| ≤ N2(t),  ∀ t ∈ E and φ ∈ C([-h,0],E^n).

(iii) g(t,·,·) is continuously differentiable for each t.
(iv) g(·,φ,w) is measurable for each φ and w.
(v) For each compact set K ⊂ E^n, there exist an integrable function M1 : E → E^+ and square integrable functions M_i : E → [0,∞), i = 2,3, that dominate g and its derivatives on K.
Under assumptions (i) - (v), for each u ∈ L2, φ ∈ C, there exists a unique solution x = x(σ,φ,u) of (8.3.1); that is, an absolutely continuous function x : [σ-h,∞) → E^n such that (8.3.1) holds almost everywhere and x_σ = φ. The proof is given in Underwood and Young [15]. They also show [15, p. 761] that (φ,u) → x_t(σ,φ,u) ∈ C is continuously differentiable. Let the matrix H be defined as follows:

  H = ∫_σ^{t1} B(s,φ) B*(s,φ) ds,  t1 > σ,  (8.3.2)

where B* is the transpose of B and φ ∈ C([-h,0],E^n). The full rank of H is needed to prove that (8.3.1) is Euclidean controllable.
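Condition (ii) of the theorem below asks H to have a bounded inverse. The sketch approximates H of (8.3.2) by a Riemann sum and checks that it is symmetric and well conditioned; the matrix function B(s) is an illustrative stand-in for B(s,φ), not from the text.

```python
import numpy as np

# Approximate H = integral over [sigma, t1] of B(s) B*(s) ds (cf. (8.3.2)).
def B(s):
    # illustrative 2 x 2 matrix function standing in for B(s, phi)
    return np.array([[1.0, 0.0],
                     [np.cos(s), 1.0]])

sigma, t1, N = 0.0, 2.0, 2000
grid = np.linspace(sigma, t1, N + 1)
ds = grid[1] - grid[0]
H = sum(B(s) @ B(s).T for s in grid[:-1]) * ds   # left Riemann sum

print(np.allclose(H, H.T), np.linalg.cond(H))    # symmetric and well conditioned
```

A small condition number is a practical proxy for the "bounded inverse" hypothesis; H is always symmetric positive semidefinite, and invertibility fails exactly when some direction is never excited by B.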
Theorem 8.3.1 In (8.3.1), assume the following:
(i) Conditions (i) - (v) of Proposition 8.3.1 on f and B are valid, and there is a continuous function N1*(t) such that

  ||B*(t,φ)|| ≤ N1*(t),  ∀ t ∈ E, φ ∈ C.

(ii) The matrix H in (8.3.2) has a bounded inverse.
(iii) There exist continuous functions G_j : C × E^m → E^+ and integrable functions a_j : E → E^+, j = 1,...,q, such that

  |g(t, φ, u(t))| ≤ Σ_{j=1}^q a_j(t) G_j(φ, u(t))

for all (t, φ, u(t)) ∈ E × C × E^m, together with a growth condition on the G_j.
Then (8.3.1) is Euclidean controllable on [σ,t1], t1 > σ.
Remark 8.3.1: If g is uniformly bounded, then condition (iii) is met.
Proof: Let φ ∈ W2^(1), x1 ∈ E^n. Then the solution of (8.3.1) is the solution of the integral equation

  x(t) = φ(t),  t ∈ [-h,0],
  x(t) = φ(0) + ∫_σ^t g(s, x_s, u(s)) ds + ∫_σ^t B(s, x_s) u(s) ds,  t ≥ σ.  (8.3.3)
A control that does the transfer is defined as follows:

  u(t) = B*(t, x_t) H^{-1} [x1 - φ(0) - ∫_σ^{t1} g(s, x_s, u(s)) ds],  (8.3.4)

where x(·) is a solution of (8.3.1) corresponding to u and φ. We now prove that such a u exists. It is obviously an L2 function, since t → B(t,φ) is continuous. We need to prove that such a u exists as a solution of the integral equation (8.3.4).
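The existence argument below runs through a fixed point of an integral operator. The same idea can be tried numerically by successive approximation of (8.3.4): simulate x for the current guess of u, recompute u from the formula, repeat. The scalar delay system, the nonlinearity g, and all constants here are illustrative choices, not from the text; with B ≡ 1, H = t1 and the control reduces to a constant.

```python
import numpy as np

# Successive approximation of the control (8.3.4) for the toy system
#   x'(t) = g(x(t - h)) + u,  x(t) = phi0 on [-h, 0],
# steering phi0 to x1 at time t1.  Here B = 1, so B* H^{-1} = 1/t1.

N = 1000
h, t1 = 0.25, 1.0
dt = t1 / N
n_hist = int(round(h / dt))
phi0, x1 = 0.5, 2.0
g = lambda xd: 0.1 * np.sin(xd)      # bounded, so growth condition (iii) holds

def simulate(u):
    """Euler integration with constant history on [-h, 0]."""
    x = np.full(N + n_hist + 1, phi0)    # indices 0..n_hist hold the history
    for k in range(N):
        x[n_hist + k + 1] = x[n_hist + k] + dt * (g(x[k]) + u)
    return x

u = 0.0
for _ in range(50):                      # fixed-point iteration
    x = simulate(u)
    integral = dt * sum(g(x[k]) for k in range(N))
    u_new = (x1 - phi0 - integral) / t1  # formula (8.3.4)
    if abs(u_new - u) < 1e-12:
        break
    u = u_new

x = simulate(u)
print(u, abs(x[-1] - x1))                # the final state reaches x1
```

Because g is a mild nonlinearity, the map u → u_new is a contraction here and the iteration converges; the theorem's Schauder argument covers situations where a contraction is not available.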
Proof: Introduce the following spaces:

  X = C([-h,t1],E^n) × L2([σ,t1],E^m),

with norm ||(φ,u)|| = ||φ|| + ||u||_2. We show the existence of a positive constant r0 and a subset A(r0) of X such that A(r0) = A1(t1,r0) × A2(t1,r0), where

  A1(t1,r0) = {ξ : [-h,t1] → E^n continuous : ξ_σ = φ, ||ξ|| ≤ r0, t ∈ [σ,t1]},
  A2(t1,r0) = {u ∈ L2([σ,t1],E^m) : (i) |u(t)| ≤ r0 a.e. in t ∈ [σ,t1], (ii) ∫ |u(t+s) - u(t)|² dt → 0 as s → 0, uniformly with respect to u ∈ A2(t1,r0)}.

It is obvious that the two conditions for A2 ensure that A2 is a compact convex subset of the Banach space L2 ([12, p. 297]). Define the operator T on X as follows:

  T(ξ,u) = (x,v),  (8.3.5)

where x is given by the right side of (8.3.3) computed along (ξ,u), and

  v(t) = B*(t, x_t) H^{-1} [x1 - φ(0) - ∫_σ^{t1} g(s, x_s, u(s)) ds].  (8.3.6)
Obviously the solutions x(·) and u(·) of (8.3.3) and (8.3.4) are fixed points of T, i.e., T(x,u) = (x,u). Using Schauder's fixed-point theorem we shall prove the existence of such fixed points in A. Because the growth condition of (iii) is valid, there exists a constant r0 such that

  r0 - Σ_{i=1}^q c_i F_i(r0) ≥ d > 0,  i.e.,  Σ_{i=1}^q c_i F_i(r0) + d ≤ r0.

See a recent paper by Do [18, p. 44]. With this r0, define A(r0) as described above. If (x,u) ∈ A(r0), from (8.3.5) and (8.3.6) we have
Also |v(t)| ≤ r0, so that T maps A(r0) into itself. To verify the continuity of T, let

  (x(t), v(t)) = T(x,u),  (x'(t), v'(t)) = T(x',u').
Then, because u → B*(t, x_t^u) and u(t) → g(t, x_t^u, u(t)) are continuous, given any ε > 0 there exists an η > 0 such that if |u(t) - u'(t)| < η, then

  |B*(t, x_t^u) - B*(t, x_t^{u'})| < ε,  |g(t, x_t^u, u) - g(t, x_t^{u'}, u)| < ε,  ∀ t ∈ [σ,t1].

Divide [σ,t1] into two sets e1 and e2: let e1 be the set of points at which |u(t) - u'(t)| < η, and let e2 be the remainder. If we write ||u - u'||_2 = γ, then mes e2 ≤ (γ/η)², and the continuity of T follows. Schauder's theorem now yields a fixed point of T, and with it Euclidean controllability on [σ, t1 - h]: for any given φ and ψ there is a u ∈ L2([σ, t1 - h],E^m) such that the solution of (8.3.1) satisfies x_σ = φ, x(t1 - h, σ, φ, u) = ψ(-h). To conclude the proof we extend u and x(·,σ,φ,u) = x(·) to the interval [σ,t1], t1 > σ + h, so that
  ψ̇(t - t1) = g(t, x_t, u(t)) + B(t, x_t)u(t)  a.e. on [t1 - h, t1],  (8.3.9)

where x(τ) = ψ(τ - t1), t1 - h ≤ τ ≤ t1. Since (ii) holds, a control u can be defined as follows:

  u(t) = B^+(t, x_t)[ψ̇(t - t1) - g(t, x_t, u(t))]  (8.3.10)

for t1 - h ≤ t ≤ t1, where B^+ denotes the generalized inverse of B. That such a u exists can be proved as follows: We define the set

  A1(r0) = {u ∈ L2([t1 - h, t1],E^m) : |u(t)| ≤ r0 a.e. in [t1 - h, t1]}.

We shall prove that there is a constant r0 such that, with A = A1(r0), T : A → A and T is continuous. Because of [12, p. 297] and [12, p. 645], T is guaranteed a fixed point, that is,
  T(u) = u ∈ A,
which implies that (8.3.9) and (8.3.10) hold. Observe that A is a compact and convex subset of the Banach space L2. Because of a result of Campbell and Meyer [17, p. 225] and hypotheses (i) and (ii) of the theorem, the generalized inverse t → B^+(t, x_t) is continuous. As before, there is a constant r0 > 0 such that

  Σ_{j=1}^q c_j F_j(r0) + d ≤ r0,

so that T maps A into itself. Also, s can be made small enough for the second integral to be less than ε > 0. This completes the verification of the first part, that T : A → A. We now turn to the problem of continuity. Let u, u' ∈ A(r0), T(u) = v, T(u') = v'. Then
since u → B^+(t, x_t^u) and u(t) → g(t, x_t^u, u(t)) are continuous, given ε > 0 there exists an η > 0 such that if |u(t) - u'(t)| < η, the corresponding differences are less than ε.
Divide the interval I into two sets e1 and e2: let e1 be the set of points at which |u(t) - u'(t)| < η, and let e2 be the remainder. If we set ||u - u'||_2 = γ, then mes e2 ≤ (γ/η)², and continuity follows as before.
CT + h . In Theorem 9.2.2 we have stated conditions that guarantee the controllability of each isolated free subsystem ( S i ) . Next we assume these conditions and give additional conditions to the interconnection g i that will ensure that the composite system (9.2.2) is controllable. It should be carefully noted that L
g i ( t , x t , ~ ' ( t )=) C g i j ( t ,4 : v i ( t > ) j=1 j#i
is independent of xi the state of the i t h subsystem, though it is measured locally in the ( S i ) system.
Theorem 9.2.3 Consider (9.2.1) and its decomposition (9.2.2). Assume that:
(i) Conditions (i) - (iii) of Theorem 9.2.1 are valid, so that each isolated subsystem is Euclidean controllable on [σ,t1].
(ii) For each i, j = 1,...,ℓ, i ≠ j,

  g_i(t, x_t, v^i) = Σ_{j=1, j≠i}^ℓ g_ij(t, x_t^j, v^i(t))

satisfies the following conditions: There are continuous functions C_ij : E^{n_j} × E^{m_i} → E^+ and L¹ functions β_j : E → E^+, j = 1,...,q, such that

  |g_ij(t, φ^j, v^i)| ≤ Σ_{j=1}^q β_j(t) C_ij(φ^j, v^i)

for all (t, φ^j, v^i), together with a growth condition of the type in Theorem 8.3.1.
Then (9.2.1) is Euclidean controllable on [σ,t1]. The proof is contained in [14] and is similar to the proof of Theorem 9.2.1.
Theorem 9.2.4 Consider (9.2.1) and its decomposition (9.2.2). Assume that:
(i) Conditions (i) - (iv) of Theorem 9.2.2 hold.
(ii) Condition (ii) of Theorem 9.2.3 holds.
Then (9.2.1) is controllable on [σ,t1], t1 > σ + h.
Proof: Just as in the proof of Theorem 9.2.2, define

  H(t1 - h) = ∫_σ^{t1-h} B(s, x_s) B*(s, x_s) ds

in place of H_i(t1 - h), and conclude Euclidean controllability on [σ, t1 - h]. Define u as before for t1 - h ≤ t ≤ t1.
Theorem 9.3.2 Suppose each isolated subsystem (9.3.8) satisfies the conditions of Theorem 9.3.1 on [σ,t1], t1 > σ + h. Then the interconnected large-scale system (9.3.5) is locally null controllable with constraints.
Corollary 9.3.1 Assume:
(i) Conditions (i) - (iii) of Theorem 9.3.2 hold.
(ii) The system
  ẋ^i(t) = f_i(t, x_t^i, 0)  (9.3.9)

is globally exponentially stable.
Then the interconnected system is (globally) null controllable, with controls in U^i = {u^i ∈ L2([σ,t1],E^{m_i}) : ||u^i||_2 ≤ 1}.
Proof of Theorem 9.3.2: By Theorem 9.3.1,

  0 ∈ Int A_i(t,σ)  for t > σ + h,  (9.3.10)

where A_i is the attainable set associated with (9.3.8). Let x^i be the solution of (9.3.5) with x_σ^i = 0. Then
Thus, if we define the set H_i(t1,σ) accordingly, we deduce that A_i(t1,σ) ⊂ H_i(t1,σ). Because f_i(t,0,0) = g_ij(t,φ,0) = 0, and because x^i(t,0,0) = 0 is a solution of (9.3.5), 0 ∈ H_i(t1,σ). As a result of this and (9.3.10), we deduce

  0 ∈ Int A_i(t1,σ) ⊂ H_i(t1,σ).  (9.3.11)

There is an open ball B(0,r), center zero, radius r, contained in these sets. The conclusion

  0 ∈ Int H_i(t1,σ)  (9.3.12)

follows at once. Using this, one deduces readily that 0 ∈ Int D, the interior of the domain of null controllability of (9.3.5), proving local null controllability with constraints.
Proof of Corollary 9.3.1: One uses the control u* = 0 ∈ U^i, i = 1,...,ℓ, to glide along System (9.3.5) and approach an arbitrary neighborhood O of the origin in W2^(1)([-h,0],E^{n_i}). Because of the stability hypothesis (ii) on (9.3.9), every solution with u^i = 0 is entrapped in O at some finite time. Since (i) guarantees that all initial states in this neighborhood O can be driven to zero in finite time, the proof is complete.
Remark: Conditions for the global stability of hypothesis (ii) are available in [10, Theorem 4.2].
Remark 9.3.1: The condition (9.3.11), from which we deduced that

  0 ∈ Int H_i(t,σ),  (9.3.12)
is of fundamental importance. If the condition

  0 ∈ Int A_i(t,σ)  (9.3.13)

fails, the isolated system is "not well behaved" and cannot be controlled. Condition (9.3.12) may still prevail, however, and the composite system will be locally controllable. To have this situation, we require

  0 ∈ Int G_i(t,σ),  (9.3.14)

where G_i(t,σ) is the corresponding set attainable through the interconnections, with u^j ∈ U^j. In words, we require a sufficient amount of control impact (i.e., (9.3.14)) to be brought to bear on (S_i) that is not an integral part of S_i. Thus, knowing the limitations of the control u^i ∈ U^i, a sufficient signal

  Σ_{j=1, j≠i}^ℓ g_ij(t, x_t^j, u^j) = g_i(t, x_t, u)

is dispatched to make (9.3.14) hold. And (9.3.12) will follow.
Remark 9.3.2: The same type of reasoning yields a result similar to Theorem 9.3.2 if we consider the system (9.3.15), where:
(i) f(t,0,0) = 0.
(ii) g_i(t, x_t^1,..., x_t^{i-1}, 0, x_t^{i+1},..., x_t^ℓ, u^1(t),..., u^{i-1}(t), 0, u^{i+1}(t),..., u^ℓ(t)) = 0, and
  g_i(t, x_t^1,..., x_t^ℓ, 0, 0,..., 0) = 0.
Also conditions (ii) and (iii) of Theorem 9.3.2 are satisfied. If we consider the system (9.3.16) instead of (9.3.5), we can obtain the following result:
Theorem 9.3.3 In (9.3.16), assume conditions (i) - (iii) of Theorem 9.3.2, but with L_i(t, x_t^i) and B^i(t) in (9.3.7) defined by

  D2 f_i(t,0,0) x_t^i = L_i(t, x_t^i),
  D3(f_i(t,0,0) + g_i(t, x_t, 0)) = B^i(t).
Then (9.3.16) is locally null controllable with constraints.
The proof is essentially the same as that of Theorem 9.3.1. We note that the essential requirement for (9.3.16) to be locally null controllable is the controllability of

  ẋ(t) = L(t, x_t) + (B1(t) + B2(t))u(t),  (9.3.17)

where

  B1(t) = D3 f(t,0,0),  B2(t) = D3 g(t, x_t, 0).

If the isolated system (9.3.8) is not "proper" (and this may happen when B1(t) does not have full rank on [σ,t1], t1 > σ + h), the solidarity function g_i can be brought to bear to force the full rank of B = B1 + B2, from which (9.3.16) will be proper because (9.3.17) is controllable. Even if B1 has full rank and (9.3.8) is proper, the interconnected system need not be locally null controllable. The function g_i has to be so nice that B2 + B1 has full rank. An adequate, proper amount of regulation is needed in the form of a solidarity function g_i. In applications it is important to know something about g_i and to decide its adequacy. It is possible to consider g_i as a control, and view

  ẋ^i(t) = f_i(t, x_t^i, u^i(t)) + g_i(t)

as a differential game. Considered in this way, we must describe how big the control set for g_i must be. For the linear case see Chukwu [13]; the linear case is treated in Section 9.5.
9.4

Examples

Example 9.4.1: Fluctuations of Current
The flow of current in the simple circuit of Figure 9.4.1 is described by the equation

  L d²x(t)/dt² + (R + R1) dx(t)/dt + (1/C) x(t) = 0.
FIGURE 9.4.1.
The voltage across R1 is applied to a nonlinear amplifier A. The output is provided to a special phase-shifting network P. This introduces a constant time lag between the input and output of P. The voltage across R in series with the output of P is

  e(t) = q g(i(t - h)),

where q is the gain of the amplifier to R measured through the network. The equation becomes

  L d²x(t)/dt² + R i(t) + q g(i(t - h)) + (1/C) x(t) = 0.

Finally, a control device is introduced to help stabilize the fluctuations of the current. If i(t) = ẋ(t) = y(t), the "controlled" system is now given by

  ẋ(t) = y(t) + u1(t),
  ẏ(t) = -(R/L) y(t) - (q/L) g(y(t - h)) - (1/(CL)) x(t) + u2(t).  (9.4.1)
FIGURE 9.4.2 [Chapter 1, Ref. 6].
The question of interest is the following: With the control u = (u1,u2), which is "created" by the stabilizer, is it possible to bring any "wild fluctuations" of the current (any initial position) to a normal equilibrium position in finite time? Is (9.4.1) controllable?
Example 9.4.2: Fluctuations of Current in a Network
A six-loop electric network is connected up as in Figure 9.4.1 and Figure 9.4.2. Each loop is a simple circuit as in Figure 9.4.1 and is described by the equation above. The voltage across R1i is applied to a nonlinear amplifier Ai. The output is provided to a special phase-shifting network Pi. This introduces a constant time lag between the input and the output of Pi. The voltage across Ri in series with the output of Pi is e_i(t) = q_i g_i(i^i(t - h)), where q_i is the gain of the amplifier to Ri measured through the network. The equation becomes

  L_i d²x^i(t)/dt² + R_i ẋ^i(t) + q_i g_i(ẋ^i(t - h)) + (1/C_i) x^i(t) = 0.

For this ith loop, a control device is introduced to help stabilize the fluctuations of current. If ẋ^i(t) = y^i(t), the "controlled" i-subsystem is

  ẋ^i(t) = y^i(t) + u1^i(t),
  ẏ^i(t) = -(R_i/L_i) y^i(t) - (q_i/L_i) g_i(y^i(t - h)) - (1/(C_i L_i)) x^i(t) + u2^i(t).

The question of interest is the following: With a control u^i = (u1^i, u2^i), which is "created" by the stabilizer, is it possible to bring any "wild fluctuations" of current (any initial position) to a normal equilibrium position of the ith subsystem in finite time? Is the ith subsystem controllable? Furthermore, we study the overall large-scale system in Figure 9.4.3, which we consider as a decentralized control problem, where each subsystem is influenced by interaction terms

  g_i(t,x) = Σ_{j=1, j≠i}^ℓ g_ij(t, x^j, u^i)

that are monitored (measured) locally. Is the large-scale system controllable?

FIGURE 9.4.3.

Proposition 9.4.1 Assume:
(i) R_i, L_i, q_i, and C_i are positive, and g_i(φ) is continuous;
(ii) g_ij(t, φ^j, u^i) satisfies the growth condition of Theorem 9.2.4.
Then the interconnected system is controllable, and each subsystem is controllable.
Proof: Note that the coefficient matrix of the controls (u1^i, u2^i) in the controlled i-subsystem is the identity, which has full rank.
Example 9.4.3: Let x(t) denote the value of capital stock at time t. Suppose x can be used in one of two ways: (i) investment; (ii) consumption. We assume the value allocated to investment is used to increase capital stock. Let u(t), 0 ≤ u(t) ≤ 1, denote the fraction of x(t) allocated to investment. Capital accumulation can be said to satisfy the equation

  ẋ(t) = g(x(t), u(t)).

For example, g(x(t),u(t)) = k u(t) x(t) (k constant) shows that the rate of increase of capital is proportional to investment. Now if we take depreciation into account, and if at time a after accumulation the value of a unit of capital has decreased by a factor p(a), the equation of capital accumulation can be represented by

  ẋ(t) = g(x(t), u(t)) - ∫_0^L f(t - s, x(t - s)) p'(s) ds,

where ∫_0^L f(t - s, x(t - s)) p'(s) ds is the value of capital that has "evaporated" per unit time at time t. Implicit is the assumption that equipment depreciates over a time L (the lifetime of the equipment) to a value 0. More generally, we have

  ẋ(t) = g(x(t), u(t)) - ∫_0^L f(t - s, x(t - s)) dp(s),  (9.4.2)

where we use a Riemann-Stieltjes integral. Equation (9.4.2) is a special case of (9.3.5). Along with (9.4.2) are an initial capital endowment function φ ∈ W2^(1) and ψ ∈ W2^(1), its long-run "desired stock of capital" function. In time-optimal control theory, one minimizes T (time) subject to (9.4.2) and

  x0 = φ,  xT = ψ.  (9.4.3)

One can also minimize total consumption over [0,T],  (9.4.4)
subject to (9.4.2) and (9.4.3). System (9.4.2) is an isolated one. It may form part of a large-scale system

  dx^i(t)/dt = g_i(x^i(t), u^i(t)) - ∫_0^L f_i(t - s, x^i(t - s)) dp_i(s),  (9.4.5)

and the problem of controllability can be investigated. From our analysis we have the following insight: If the isolated system is controllable and the interconnection "nice," then the large-scale system is also controllable. Detailed investigations of (9.4.5) will be pursued in Section 9.5, where comments on the global economy will be made. See [11] and [12, p. 688]. It is seen from Remark 9.3.1 that a sufficient amount of control regulation is needed from outside the subsystem to make the large-scale system controllable. When this is lacking and the subsystem is not well behaved, the controllability of the large-scale system is not assured. This has grave policy implications for the global economy. Deregulation may not be a cure. It may lead to global depression, or lack of growth of the economy.
9.5 Control of Global Economic Growth

Introduction
In Section 1.8 we formulated a nonlinear general model of the dynamics of capital accumulation of a group of n firms whose destinies are linked with ℓ other systems to form a large-scale organization. As argued by Takayama [12], time lags are incorporated into the control procedure in a distributed fashion: the state and the control history are taken into account. Our aim is to control the growth of capital stock from an initial endowment to some target point, which can be taken to be the origin. From the theory, some qualitative laws are formulated for the control of the growth of capital stock. They help validate the correctness of the new economic policies of movement away from centralization of firms in the East; they raise questions on the wisdom of deregulation in the West. The principles cast doubt on the popular myth that to grow, the firms of the Third World should dismantle the "solidarity function."
In Section 1.8 we designated x(t) to be the value of capital stock at time t, and then postulated that the net capital formation ẋ(t) is given by

ẋ(t) = k u(t) x(t) - δ x(t),   (9.5.1)

where k and δ are positive constants and -1 ≤ u(t) ≤ 1, u being the fraction of stock that is used for payment of taxes or for investment. A more general version of (9.5.1) is the system

ẋ(t) = L(t, x_t, u_t) x_t + B(t, x_t, u_t) u_t.   (9.5.2)
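The scalar model (9.5.1) is simple enough to simulate directly. The sketch below integrates ẋ = (k u - δ)x by forward Euler under a constant bang-bang policy; the parameter values k, δ and the policy function are illustrative assumptions, not taken from the text.

```python
# Forward-Euler sketch of the scalar capital model (9.5.1):
# x'(t) = k*u(t)*x(t) - delta*x(t), with -1 <= u(t) <= 1.
# Parameter values and the policy below are illustrative assumptions.

def simulate_capital(x0, k, delta, policy, t_end, dt=1e-3):
    """Integrate x' = (k*u - delta)*x with investment/tax policy u = policy(t)."""
    x, steps = x0, int(t_end / dt)
    for i in range(steps):
        u = max(-1.0, min(1.0, policy(i * dt)))   # enforce the constraint |u| <= 1
        x += dt * (k * u - delta) * x
    return x

# Full disinvestment (u = -1): the stock decays at rate k + delta.
x_final = simulate_capital(x0=1.0, k=0.3, delta=0.1, policy=lambda t: -1.0, t_end=5.0)
```

With u held at -1 the stock decays like e^{-(k+δ)t}; with u held at +1 and k > δ it grows, which is the qualitative dichotomy the control problem exploits.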
Later we denote by x(t) = (x_1(t), ..., x_n(t)) the value of n capital stocks at time t, with investment and tax strategy u = (u_1, ..., u_n), where -1 ≤ u_j(t) ≤ 1. We therefore consider (9.5.2) as the equation of the net capital function for n stocks in a region that is isolated. They are linked to ℓ other such systems in the globe, and the interconnection or solidarity function is given by g_i(x_{1t}, ..., x_{ℓt}, u_{1t}, ..., u_{ℓt}). This function describes the effects of the other subsystems on the ith subsystem as measured locally at the ith location. Thus,

ẋ_i(t) = L_i(t, x_{it}, u_{it}) x_{it} + B_i(t, x_{it}, u_{it}) u_{it} + g_i(t, x_{1t}, ..., x_{ℓt}, u_{1t}, ..., u_{ℓt}),  i = 1, ..., ℓ,   (9.5.3)

is the decomposed interconnected large-scale system, whose free subsystem is

ẋ_i(t) = L_i(t, x_{it}, u_{it}) x_{it} + B_i(t, x_{it}, u_{it}) u_{it}.   (9.5 S_i)
We now introduce the following notation: Let Σ_{i=1}^{ℓ} n_i = n and Σ_{i=1}^{ℓ} m_i = m, and set

x = [x_1, ..., x_ℓ] ∈ E^n,  u = [u_1, ..., u_ℓ] ∈ E^m,  g = [g_1, ..., g_ℓ] ∈ E^n.

Then (9.5.3) is given as

ẋ(t) = L(t, x_t, u_t) x_t + B(t, x_t, u_t) u_t + g(t, x_t, u_t).   (9.5 S)
Control of Interconnected Nonlinear Delay Differential Equations
357
Preliminaries
The problem of controllability of (9.5 S) will be explored in this section. Conditions are stated for the controllability of the isolated system (9.5.2). Assuming that the subsystem (9.5 S_i) is controllable, we shall deduce conditions for (9.5 S) to be controllable when the solidarity function is "nice." The following basic assumptions will be maintained: In (9.5.2),

L(t, φ, ψ) x_t = ∫_{-h}^{0} d_s η(t, s, φ, ψ) x(t + s),

where η(t, s, φ, ψ) is measurable in (t, s) ∈ E × E, normalized so that η(t, s, φ, ψ) is continuous from the left in s on (-h, 0); η(t, s, φ, ψ) has bounded variation in s on [-h, 0] for each t, φ, ψ; and there is an integrable m such that

|L(t, φ, ψ) x_t| ≤ m(t) ‖x_t‖ for all t ∈ (-∞, ∞), φ, ψ, x_t ∈ C.

We assume L(t, ·, ·) is continuous. The assumption on B(t, φ, ψ) in (9.5.2) is continuity in all the variables. Also continuous is the function g : E × C × E^m → E^n. Enough smoothness conditions on L and g are imposed to ensure the existence of solutions of (9.5 S) and the continuous dependence of solutions on initial data. See [19] and [8], and Section 8.3.
Main Results
To solve the Euclidean controllability problem for the system (9.5.2), we consider the simpler system

ẋ(t) = L(t, z, v) x_t + B(t, z, v) u_t,   (9.5.4)

where the arguments x_t and u_t of L and B have been replaced by specified functions z ∈ C([-h, 0], E^n) ≡ C, v ∈ L_∞([-h, 0], E^m). Here C is the space of continuous E^n-valued functions on [-h, 0], and L_∞ is the space of essentially bounded measurable E^m-valued functions on [-h, 0]. For each (z, v) ∈ C × L_∞, the system (9.5.4) is linear, and one can deduce its variation of parameters formula by the ideas of Klamka [23]. Let X(t, s) = X(t, s, z, v) be the transition matrix of the system

ẋ(t) = L(t, z, v) x_t,   (9.5.5)
so that
∂X(t, s)/∂t = L(t, ·, z, v) X_t(·, s),   (9.5.6)

where X_t(·, s)(θ) = X(t + θ, s), -h ≤ θ ≤ 0, and

X(t, s) = 0 for s - h ≤ t < s,  X(s, s) = I.

Then the solution of (9.5.4) is

x(t) = x(t, σ, φ, 0) + ∫_σ^t [∫_{-h}^{0} d_θ H(s, z(θ), v(θ)) u(s + θ)] ds,  σ ≤ t ≤ t_1,

x(t) = φ(t), t ∈ [σ - h, σ],  u(t) = η(t), t ∈ [σ - h, σ].   (9.5.7)

Here x(t, σ, φ, 0) is the solution of (9.5.5). The last term in (9.5.7) contains values of u on [σ - h, σ]. As usual in such matters [7], [16], [20], and [23], we use Fubini's Theorem to write (9.5.7) as
x(t) = x(t, σ, φ, 0) + ∫_σ^t {∫_{-h}^{0} d_θ [H(s - θ, z(θ), v(θ))] u(s)} ds.   (9.5.8)

Thus (9.5.8) can be written as

x(t) = x(t, σ, φ, 0) + q(t, η, z, v) + ∫_σ^t S(t, s, z, v) u(s) ds,   (9.5.11)

where q(t, η, z, v) collects the contribution of the initial control values η on [σ - h, σ].
The controllability map of (9.5.4) at time t is

W(t) = W(σ, t, z, v) = ∫_σ^t S(t, s, z, v) S*(t, s, z, v) ds,   (9.5.12)
with W(t_1) = W, where the star denotes the matrix transpose. Define the reachable set R(t) of (9.5.4) as follows:

R(t) = {x(u)(t) : u ∈ U_ad},

where

U_ad = {u measurable : u(t) ∈ E^m, |u_j(t)| ≤ 1, j = 1, ..., m}.
Thus the reachable set is the set of all solutions x(u) of (9.5.4) with φ = 0, η = 0 as initial data and with controls u ∈ L_∞([σ, t_1], C^m), where C^m is the unit m-dimensional cube. The Euclidean attainable set of (9.5 S) is the subset of E^n defined as follows:

A(t) = {x(u)(t) : u ∈ U_ad, x is a solution of (9.5 S) with x_σ = 0, η = 0}.
The domain D of null controllability is the set of initial functions φ such that there exist a t_1 and a u ∈ L_∞([σ, t_1], C^m) such that the solution x of (9.5 S) satisfies x_σ(u) = φ and x(t_1, u) = 0. The following lemma is fundamental in the sequel:

Lemma 9.5.1  In (9.5.4):
(i) Maintain the basic assumptions on L and B.
(ii) Suppose there exists a constant c > 0 such that

det W(σ, t_1, z, v) ≥ c > 0 for all (z, v) ∈ C × L_∞.

Then

0 ∈ Int A(t).   (9.5.13)

Also, for (9.5.2),

0 ∈ Int D.   (9.5.14)
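Condition (ii) is a uniform positivity requirement on the determinant of the controllability map W of (9.5.12). In the delay-free special case, W reduces to the classical controllability Gramian of an ordinary linear system, which can be approximated numerically; the matrices below (a double integrator) are illustrative assumptions, not taken from the text.

```python
import numpy as np

def gramian(A, B, t1, steps=2000):
    """Approximate W = integral_0^{t1} e^{A s} B B^T e^{A^T s} ds by Euler steps."""
    n = A.shape[0]
    K = B.astype(float).copy()          # K(s) = e^{A s} B, starting from K(0) = B
    W = np.zeros((n, n))
    ds = t1 / steps
    for _ in range(steps):
        W += ds * (K @ K.T)             # accumulate the Gramian integrand
        K += ds * (A @ K)               # propagate K'(s) = A K(s)
    return W

# Double integrator (illustrative): controllable, so det W > 0.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
det_W = float(np.linalg.det(gramian(A, B, t1=1.0)))   # exact value is 1/12
```

A strictly positive determinant, bounded away from zero as in (ii), is exactly what makes the transfer control (9.5.17) well defined.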
Proof: By standard arguments, the nonsingularity of W = W(t_1) ≡ W(σ, t_1, z, v) is equivalent to the Euclidean controllability of (9.5.4) [22, p. 92] and to (9.5.4) being "proper" [22, p. 77]. As usual, this is equivalent to

0 ∈ Int R(t, z, v).   (9.5.15)

Because (ii) is uniform for all (z, v) ∈ C × L_∞, 0 ∈ Int R(t, z, v) for all (z, v) ∈ C × L_∞. That (9.5.15) holds follows from the fact that the map
T : L_∞ → E^n,  (Tu)(t) = ∫_σ^t X(t, τ, z, v) B(τ, z, v) u(τ) dτ   (9.5.16)
is a continuous linear surjection for each (z, v) ∈ C × L_∞, and is therefore an open map for each (z, v). The transfer of any arbitrary φ ∈ C to any x_1 ∈ E^n is accomplished by the control

u(t) = S*(t_1, t, z, v) W^{-1}[x_1 - x(t_1, σ, φ, 0) - q(t_1, η, z, v)],  t ∈ [σ, t_1],
u(t) = η(t),  t ∈ [σ - h, σ],   (9.5.17)

which, when inserted into (9.5.11), gives an expression for the solution of (9.5.4):

x(t) = x(t, σ, φ, 0) + q(t, η, z, v)
     + W(σ, t, z, v) W^{-1}(σ, t_1, z, v)[x_1 - x(t_1, σ, φ, 0) - q(t_1, η, z, v)].   (9.5.18)

With initial data x(σ) = (z(σ), φ, η) and x(t_1) = x_1, controllability of (9.5.4) is established. We now show controllability for (9.5.2). Suppose (x, u) solves the set of integral equations

u(t) = S*(t_1, t, x, u) W^{-1}[x_1 - x(t_1, σ, φ, 0) - q(t_1, η, x, u)] for t ∈ [σ, t_1],
u(t) = η(t), t ∈ [σ - h, σ],   (9.5.19)
and

x(t) = x(t, σ, φ, 0) + q(t, η, x, u) + ∫_σ^t S(t, s, x, u) u(s) ds,  t ≥ σ,
x(t) = φ(t),  t ∈ [σ - h, σ].   (9.5.20)

Then u ∈ L_∞([σ - h, t_1]) and x solves (9.5.2) with this u and with initial data x(σ) = (x(σ), φ, η), where u_σ = η. Also x(t_1) = x_1. Next we prove that a solution pair indeed exists for (9.5.19) and (9.5.20). Let 𝔹 be the Banach space of all functions (x, u) : [σ - h, t_1] × [σ - h, t_1] → E^n × E^m, where x ∈ C([σ - h, t_1], E^n) and u ∈ L_∞([σ - h, t_1], E^m), with norm ‖(x, u)‖ = ‖x‖ + ‖u‖_∞.
Define the operator P : 𝔹 → 𝔹 by P(x, u) = (y, v), where

v(t) = S*(t_1, t, x, u) W^{-1}[x_1 - x(t_1, σ, φ, 0) - q(t_1, η, x, u)],  t ∈ [σ, t_1],
v(t) = η(t),  t ∈ [σ - h, σ],   (9.5.21)

y(t) = x(t, σ, φ, 0) + q(t, η, x, u) + ∫_σ^t S(t, s, x, u) v(s) ds for t ∈ [σ, t_1],

and y(t) ≡ φ(t) for t ∈ [σ - h, σ].   (9.5.22)

By a tedious but standard analysis (see Sinha [7], and Balachandran and Dauer [15,16]), we can identify a closed, bounded, and convex subset Q(λ) of 𝔹, where Q(λ) = {(x, u) ∈ 𝔹 : ‖x‖ ≤ λ, ‖u‖_∞ ≤ λ}, such that P : Q(λ) → Q(λ) and P is a relatively compact mapping. We are then assured by Schauder's fixed-point theorem that P has a fixed point: P(x, u) = (x, u). Indeed, let

a_1 = sup{‖S*(t_1, t, x_{t_1}, u_{t_1})‖ : t ∈ [σ, t_1]},
a_2 = ‖W^{-1}(σ, t_1, x_{t_1}, u_{t_1})‖,
a_3 = sup{|x(t, σ, φ, 0)| + |q(t, η, x_t, u_t)| + |x_1| : t ∈ [σ, t_1]},
a_4 = sup{‖S(t, s, x_t, u_t)‖ : (t, s) ∈ [σ, t_1] × [σ, t_1]},
b = max{(t_1 - σ) a_4, 1}.

Then

|v(t)| ≤ a_1 a_2 a_3 for t ∈ [σ, t_1],  and  |v(t)| = |η(t)| ≤ sup_{t ∈ [σ-h, σ]} |η(t)| = a_5 for t ∈ [σ - h, σ].

If r_0 = max[a_1 a_2 a_3, a_5], then |v(t)| ≤ r_0 for t ∈ [σ - h, t_1], so that ‖v‖_∞ ≤ r_0. Also
we have proved that P(Q(λ)) ⊂ Q(λ). It is clear that Q(λ) is closed, bounded, and convex. Also, P is continuous, since (t, φ, v) → B(t, φ, v) is continuous. To prove that the set of y(t) defined in (9.5.22) is equicontinuous is routine. Observe that the set of v defined in (9.5.21) is relatively compact, since all such v are uniformly bounded in the L_∞ norm and ‖v_s - v‖_∞ → 0 as s → 0 uniformly, where
[4, pp. 296, 645]. Indeed, since v(t) is measurable in t and in L_∞, there exists a sequence {v_n(t)} of continuous functions such that ‖v - v_n‖_∞ → 0 as n → ∞. Hence
The first and last terms can be made arbitrarily less than ε by selecting n very large. The second term can be made less than ε if s is selected small enough. This proves that ‖v_s - v‖_∞ → 0 as s → 0. All this analysis verifies that P(Q(λ)) is relatively compact. Invoke Schauder's fixed-point theorem to conclude that there is a fixed point (x, u) ∈ Q(λ) such that P(x, u) = (x, u). With this fixed point established, we verify (9.5.13) as follows: Since T is an open map for each (z, v) ∈ C × L_∞, and in particular for (z, v) = (x, u), we have that T defined by
Tu = ∫_σ^{t_1} S(t, s, x, u) u(s) ds,  T : L_∞([σ, t_1], E^m) → E^n,
is open. Because of this, 0 ∈ Int T(L_∞([σ, t_1], C^m)) = A(t_1), where A(t_1) is the reachable set of (9.5.2), so that
0 ∈ Int A(t_1)   (9.5.23)

and

0 ∈ Int D.   (9.5.24)

Details of the needed arguments are contained in [19, p. 111].

Theorem 9.5.1  Consider the large-scale system (9.5 S) with its decomposition (9.5.3), and assume that:
(i) g_i(t, x_{1t}, ..., x_{i-1,t}, 0, x_{i+1,t}, ..., x_{ℓt}, u_{1t}, ..., u_{i-1,t}, 0, u_{i+1,t}, ..., u_{ℓt}) = 0 and g_i(t, x_{1t}, ..., x_{ℓt}, 0, ..., 0) = 0.
(ii) The system
ẋ_i(t) = L_i(t, φ, v) x_{it} + B_i(t, φ, v) u_i(t)   (9.5 S_i)

satisfies all the conditions of Lemma 9.5.1 for each (φ, v) ∈ C × L_∞([σ, t_1], E^m).
(iii) g_i : E × E^n × E^m → E^n is continuous.
Then (9.5 S) is locally null controllable on [σ, t_1] with constraints. If, in addition, the system

ẋ_i(t) = L_i(t, x_{it}) x_{it}   (9.5.25)

is globally exponentially stable, then (9.5 S) is (globally) null controllable with controls in U_i = {u_i ∈ L_∞([σ, t_1], E^{m_i}) for some t_1 : ‖u_{ij}‖_∞ ≤ 1, j = 1, ..., ℓ}.

Proof: Because of Lemma 9.5.1,
0 ∈ Int A_i(t_1) for t_1 > σ + h,   (9.5.26)
where A_i is the attainable set associated with (9.5 S_i). Let x_i be the solution of (9.5.3) with x_σ^i = 0 and u^i = η = 0. Then

(9.5.27)
If we define the set

H_i(t_1) = {x_i(u^i, u^j)(t_1) ∈ E^{n_i} : u^i ∈ U_i, u^j ∈ U_j, j ≠ i, j = 1, ..., ℓ},

we deduce that A_i(t_1) ⊂ H_i(t_1). Because g_i(t, x_{1t}, ..., 0, ..., x_{ℓt}, u_{1t}, ..., 0, ..., u_{ℓt}) = 0, and because x_i(σ, 0, 0)(t) ≡ 0 is a solution of (9.5.3), 0 ∈ H_i(t_1). Because (9.5.26) holds, 0 ∈ Int A_i(t_1) ⊂ H_i(t_1), so that
0 ∈ Int H_i(t_1).   (9.5.28)

Using this, one deduces readily that

0 ∈ Int D.   (9.5.29)
The rest of the proof is routine. Appropriate the control u_i ∈ U_i, u_i ≡ 0, i = 1, ..., ℓ, and use this in (9.5.3); then the solution of (9.5.3) is that of (9.5.25). There is a time σ < ∞ such that φ_b ≡ x(σ, φ, 0) ∈ O, where O ⊂ D and O is an open ball containing zero. With φ_b as initial state, there is a time t_1 > σ and an n-tuple u = (u_1, ..., u_ℓ) of controls u_i ∈ L_∞([σ, t_1], C^{m_i}) = U_i such that the solution of (9.5.3) satisfies
x_σ(σ, φ_b, u_i) = φ_b,  x(t_1, σ, φ_b, u_i) = 0.
The proof is complete. Note that Theorem 9.5.1 generalizes the main results in [7,15,18]. It removes the growth conditions on the nonlinear functions present in the equations. All that is needed here are the usual smoothness conditions required for the existence of solutions, and bigger "independent" control sets.
Remark 9.5.1: Our analysis describes Euclidean null controllability of large-scale systems. The results hold also for arbitrary point targets in E^n. Indeed, if x_1 is a nontrivial arbitrary target and y(t, φ, u^0) = y(t) is any solution of

ẋ(t) = f(t, x_t, u(t)),   (9.5.30)

with y_σ = φ, u^0 admissible, and y(t_1) = x_1, one can equivalently study the null controllability of the system

ż(t) = f(t, z_t + y_t, u_t) - f(t, y_t, u(t)),   (9.5.31)

where z_σ = 0, so that x_σ = y_σ = φ and x(t) = z(t) + y(t). Note that z(t, σ, φ, u^0, z) = 0 for t ≥ σ. If we can show that there is a neighborhood O of the origin in z-space such that every initial φ ∈ O can be brought to z(t_1, σ, φ, u^0) = 0 by some admissible control u at time t_1, then

x(t_1, σ, φ, u) - y(t_1) = 0;

that is, x(t_1) = y(t_1) = x_1.
Universal Laws for the Control of Global Economic Growth
The main theorem and some assertions in its proof provide very useful broad policy prescriptions for the control of a large-scale economic system's growth. They describe qualitatively when it is possible to control the growth of capital stock from an initial endowment to the long-run desired stock of capital, and when this control is impossible. Indeed, from the proof we deduce that if the isolated system (9.5 S_i) is "proper," and this is valid if (9.5.26) holds, then the large-scale system (9.5.3) will also be proper, i.e., (9.5.28) will hold, provided the "solidarity function" g_i is nice in the sense of being integrable, etc. From (9.5.28) one proves that the domain D of null controllability of the large-scale system is such that

0 ∈ Int D.   (9.5.32)

The condition

0 ∈ Int A_i(t),   (9.5.33)

which implies

0 ∈ Int H_i(t)   (9.5.34)

when the interconnection is nice, yields the following obvious but fundamental principles for the control of large-scale organizations.

Fundamental Principle 1  If the isolated system is "proper," which means that it is locally controllable with constraints, and if the interconnection that we call a "solidarity function" is nice, is measured locally at the ith subsystem, and is a function of the state of all subsystems and their controls, then the composite system is also proper and therefore locally null controllable with constraints.
Some isolated system may fail to have property (9.5.33) and thus is not well behaved. Condition (9.5.34), from which (9.5.32) is deduced, can still be salvaged. Obviously, what is needed is a condition that ensures that
0 ∈ Int G_i(t_1),   (9.5.35)

where

With (9.5.35), one easily deduces (9.5.34), since 0 ∈ A_i(t). A criterion such as (9.5.35) can be deduced from the controllability of a linearization of the nonlinear control system

ẋ_i(t) = g_i(t, x_{1t}, ..., x_{it}, ..., x_{ℓt}, u_1(t), ..., u_i(t), ..., u_ℓ(t)).
In other words, we require a sufficient amount of control impact from external sources g_i, monitored locally at the ith subsystem, to be "forced" on (9.5 S_i). It works as follows: The external source of power senses the control u_i available to (9.5 S_i) and constructs a sufficient controlling signal g_i that is a function of all the states of the subsystems and their controls, including u_i. With this, the right behavior is enforced. It is formalized in the following principle:

Fundamental Principle 2  If some subsystem is not proper and is not locally controllable with constraints, the large-scale system can be proper and be made locally controllable provided there is an external source of sufficient power available to the subsystem to enforce controllability and proper behavior. There is no theorem at the present time that ensures that the large-scale system is proper when some subsystem is not proper and there is no compensating proper behavior from an external source of power that is monitored locally.
We observe that the solidarity function does not ignore the subsystems' controls u_i, or initiative. Were they to be ignored, for example, with the solidarity function still operative and nontrivial, the condition 0 ∈ H_i(t), and subsequently (9.5.34), would fail. The large-scale system would not be proper and locally null controllable.

Fundamental Principle 3  If subsystems' control initiatives are ignored in the construction of a nontrivial solidarity function that is monitored and applied locally, the interconnected system will fail to be proper and locally null controllable.
Failure to acknowledge an individual subsystem's control initiative in highly centralized economic systems that are described by our dynamics may lead to lack of growth from the initial endowment φ to the desired economic target (which is an equilibrium in this case) in centralized systems. On the other hand, failure to enforce regulation by bringing to bear the right force g_i when some subsystems misbehave and are locally uncontrollable in individualistic systems may make the large-scale system locally uncontrollable and may trigger an economic depression. These observations seem to have a bearing as an explanation of what is happening in the East and what may happen in the West as a consequence of the 1980-89 U.S. dismantling of regulations (see [17, p. 264] and [26]) on the economy. In the East, economic targets cannot be reached because of too much centralization, an inflexible g_i; to make firms controllable, market forces are being advocated. In the
Third World, the popular policy prescription of dismantling the "solidarity function," which rides the wave of the deregulations of the 1980s in the West, is at work with fury. Aside from its hardships, it does not seem to have a solid theoretical foundation. A certain amount of solidarity function is effective for economic growth. In other words, carefully timed government interventions (or solidarity functions) are crucial for economic growth. These can be provided by central governments that cherish individual initiatives. A qualitative description of the solidarity function will be explored in Section 9.6.
9.6 Effective Solidarity Functions

In Equation (9.5.3) the dynamics of the ith subsystem are described by an equation of the form (9.6.1), where g(t, x_t, v_t) is the interconnection or the solidarity function. We now study simpler versions of (9.6.1) and in the process isolate the basic properties of effective solidarity functions. We consider them "controls," "disturbances," or "quarry controls," and appropriate an earlier formulation from [13]. The system we study is the linear one,
ẋ(t) = L(t, x_t) + B(t) u(t) + g(t, v(t)),   (9.6.2)
which can be written as (9.6.3), where

B(t)u(t) = -p(t),  q(t) = g(t, v(t)).   (9.6.4)

We assume p ∈ L_∞([σ, t_1], P), P ⊂ E^n, q ∈ L_∞([σ, t_1], Q), Q ⊂ E^n, and for each t ∈ E the linear operator φ → L(t, φ), φ ∈ C, has the form
Here the integral is in the Lebesgue-Stieltjes sense, and η(t, s) is an n × n matrix. Basic assumptions on η are contained in Chukwu [13, p. 327].
Viewed in our setting, Q in [13] is now called a solidarity set, and P is a control set. Define the set

U = P ∸ Q = {u : u + Q ⊂ P}   (9.6.4)

(the Pontryagin difference of P and Q). Associate with (9.6.3) the control system

ẋ(t) = L(t, x_t) - u(t),  u(t) ∈ U(t).   (9.6.5)

Let X(t, s) be the fundamental matrix of (9.6.3), i.e., the solution of

∂X(t, s)/∂t = L(t, X_t(·, s)),  t ≥ s, a.e. in s, t.
The solution operator T(t, σ) : C → C, defined by T(t, σ)φ = x_t(σ, φ), where x is a solution of system (10.1.1), is a homeomorphism, provided D is atomic at 0 and at -h [9, p. 279]. As asserted by Hale, since L(t, x_t) = f(t, x_t)
The Time-Optimal Control of Linear Differential Equations
381
is linear and therefore continuously differentiable in the second argument, the solution operator is a diffeomorphism. It follows from an argument similar to Hale [9] that the solution operator is also a homeomorphism.

Definition 10.1.3: The control u ∈ C^m is an extremal control on [σ, t_1] if for some φ ∈ C and each t ∈ [σ, t_1] the solution x(σ, φ, u) of (10.1.1) through (σ, φ) belongs to the boundary ∂A(t) of A(t).
Theorem 10.1.3  Assume that the conditions of Proposition 10.1.1a hold and that the solution of (10.1.7) is pointwise complete. Let u* be optimal on [σ, t*]. Then there is a nonzero n-dimensional row vector c*, depending on φ* and t*, such that

{u*(t)}_j = sgn{c* U(t*, t) B(t)}_j,  σ ≤ t ≤ t*,   (10.1.15)

for each 1 ≤ j ≤ m for which

{c* U(t*, t) B(t)}_j ≠ 0.
Proof: To prove that any optimal control is extremal, one uses the method of Strauss and the following proposition, which is proved exactly as in [5].

Proposition 10.1.2  If α is an interior point of A(t), then there is some ε > 0 such that the ε-neighborhood N(α, ε) of α satisfies

N(α, ε) ⊆ A(s)

for all s ∈ [t - ε, t].

To prove the last assertion, we note that x(t*, σ, φ*, u*) ∈ ∂A(t*). Because A(t*) is convex and closed, there exists a support plane π* through x(t*, σ, φ*, u*) = x* such that A(t*) lies on one side of π*. Let v be a unit normal to π* directed away from A(t*). Clearly, for each u ∈ C^m, x = x(t*, φ*, u) ∈ A(t*). Hence ⟨v, x* - x⟩ ≥ 0. From the variation of parameters formula (10.1.3a), this is equivalent to
382
Stability and Time-Optimal Control of Hereditary Systems
for all u ∈ C^m. Let λ(s) = v^T U(t*, s) B(s). Then, on any interval of positive length where λ_j(s) ≠ 0, it must be that u_j*(s) = sgn λ_j(s). We have proved the theorem.
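For the ordinary (delay-free) special case, the kernel reduces to U(t*, t) = e^{A(t*-t)}, and the extremal law (10.1.15) can be evaluated directly: the sign of the switching function λ(t) = c U(t*, t)B gives the bang-bang control. The matrices and the adjoint row vector c below are illustrative assumptions, not taken from the text.

```python
import numpy as np

def expm(A, t, terms=30):
    """Truncated power series for e^{A t} (adequate for small matrices and times)."""
    M = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * t) / k
        M = M + term
    return M

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # double integrator (illustrative)
B = np.array([[0.0], [1.0]])
c = np.array([1.0, -0.5])                 # assumed nonzero adjoint row vector c*
t_star = 2.0

def u_star(t):
    """Extremal control (10.1.15) in the delay-free case U(t*, t) = e^{A(t*-t)}."""
    lam = c @ expm(A, t_star - t) @ B     # switching function {c* U(t*, t) B}
    return float(np.sign(lam[0]))
```

Here λ(t) = (t* - t) - 0.5 is linear, so the control switches sign exactly once on [0, t*]: the bang-bang structure asserted by the theorem.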
Definition 10.1.4: System (10.1.1) is said to be normal on [σ, t] if no component of c^T Y(t, s), c ≠ 0, vanishes on a subinterval of [σ, t] of positive length. This definition is equivalent to the following: Let Y_j(t, s) be the jth column vector of Y(t, s). For each j = 1, ..., m, the functions Y_{j1}(t, s), ..., Y_{jn}(t, s) are linearly independent on each subinterval of [σ, t] of positive length.

Corollary 10.1.1  If System (10.1.1) is normal on [σ, t], then u*(t), the optimal control, is uniquely determined by (10.1.9) and is bang-bang.

Definition 10.1.5: Let S ⊆ E^n be a set. S is strictly convex if for every x_1 and x_2 in S, x_1 ≠ x_2, the open line segment {λx_1 + (1 - λ)x_2 : 0 < λ < 1} is contained in the interior of S.

continuous and strictly convex valued. Then if there exists a pair
{φ_m}_1^∞ ⊆ C such that φ_m → 0 as m → ∞ (the convergence is in the sup norm of C([-h, 0], E^n))
and no φ_m is in N (so φ_m ≠ 0). From the variation of constants formula we deduce that

for any t_1 > σ and any u ∈ C^m. Hence y_m = T(t_1, σ)φ_m(0) is not in R(t_1) for any t_1. We therefore obtain a sequence {y_m}_1^∞ ⊆ E^n, y_m ∉ R(t_1) for any t_1, y_m ≠ 0, such that y_m → 0 as m → ∞. We conclude that 0 is not in the interior of R(t_1) for any t_1, a contradiction. Hence 0 ∈ Int N.

Theorem 10.2.2  If (10.1.1) is proper and (10.1.2) is uniformly asymptotically stable, then (10.1.1) is Euclidean null controllable.

Proof: Let N be the domain of null controllability of (10.1.1). Since (10.1.1) is proper, we have from Theorem 10.2.1 that 0 ∈ Int N. In (10.1.1), consider the trivial admissible control u = 0 on [σ, ∞). Then the solution x(σ, φ, 0) of (10.1.1) with u = 0 (that is, the solution of (10.1.2)) satisfies
‖x_t(σ, φ, 0)‖ ≤ k e^{-c(t-σ)} ‖φ‖,  t ≥ σ, φ ∈ C,
for some k > 1 and c > 0. This behavior of (10.1.2) follows from Cruz and Hale [1]. Hence x_t(σ, φ, 0) → φ_0 = 0 ∈ Int N as t → ∞. Therefore there exists a t_1 > σ such that x_{t_1}(σ, φ, 0) ∈ B, where B is a ball in N with the origin as center and radius sufficiently small. With x_{t_1}(σ, φ, 0) ∈ N as an initial point, there exist some t_2 and some u ∈ L_1([t_1, t_2], C^m) such that the solution x(t_1, x_{t_1}, u) of (10.1.1) satisfies x(t_2, x_{t_1}, u) = 0. This proves the theorem.

We conclude this section by considering a more general class of controls, in which we only require them to be integrable on finite intervals. If a control u satisfies this condition, we simply write u ∈ C*.

Definition 10.2.4: System (10.1.1) is Euclidean controllable on [σ, t_1] if for each φ ∈ C and each x_1 ∈ E^n there exists a control u ∈ C* such that the solution x(σ, φ, u) of (10.1.1) satisfies x_σ(σ, φ, u) = φ and x(t_1, σ, φ, u) = x_1.
Theorem 10.2.3  System (10.1.1) is proper on [σ, t_1] if and only if (10.1.1) is Euclidean controllable on [σ, t_1].

Remark 10.2.1: The ideas behind the proof are standard in the literature. See, for example, Zmood [7, Theorem 2], and Lemma 6.1.1 here.
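The exponential decay estimate used in the proof of Theorem 10.2.2 can be observed numerically. The sketch below integrates, by Euler's method with history storage, a uniformly asymptotically stable delay equation ẋ(t) = -a x(t) + b x(t - h) with |b| < a; the coefficients are illustrative assumptions. With the trivial control u = 0, the solution enters any small ball around the origin in finite time, which is the step that hands the problem over to local null controllability.

```python
# Euler integration (method of steps) of the stable delay equation
# x'(t) = -a x(t) + b x(t - h), with |b| < a; coefficients are illustrative.
a, b, h, dt = 2.0, 0.5, 1.0, 1e-3
lag = int(h / dt)
hist = [1.0] * (lag + 1)              # initial function phi = 1 on [-h, 0]
for _ in range(int(10.0 / dt)):       # integrate out to t = 10
    x_now, x_lag = hist[-1], hist[-1 - lag]
    hist.append(x_now + dt * (-a * x_now + b * x_lag))
x_final = hist[-1]                    # decays roughly like k*e^{-c t}
```

Because |b| < a, every root of the characteristic equation λ = -a + b e^{-λh} has negative real part, so the computed trajectory shrinks geometrically, matching the bound ‖x_t‖ ≤ k e^{-c(t-σ)}‖φ‖.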
10.3 Normal and Proper Autonomous Systems In this section an autonomous special case of (10.1.1) will be considered. Sharp necessary and sufficient conditions for normality and for the system to be proper are deduced. The method of Gabasov and Kirillova [12] is used to obtain results that generalize the recent normality conditions of Kloch [8], and that specialize to well-known conditions for autonomous linear ordinary differential equations. Consider the system
(d/dt)(x(t) - A_{-1} x(t - h)) = A_0 x(t) + A_1 x(t - h) + B u(t),  t ≥ 0,   (10.3.1)

x(t) = φ(t) for t ∈ [-h, 0],

where the A_j are n × n constant matrices and B is an n × m constant matrix. Here u ∈ C^m and φ ∈ C([-h, 0], E^n) ≡ C. It is well known that under the above assumptions, for each admissible control u ∈ L_1^{loc}([0, ∞), C^m), there exists a unique solution of (10.3.1) on [-h, ∞) through φ. Furthermore, if A_{-1} ≠ 0, this solution exists on (-∞, ∞) and is unique. (See Hale [9, p. 26].) The fundamental matrix of
(d/dt)(x(t) - A_{-1} x(t - h)) = A_0 x(t) + A_1 x(t - h)   (10.3.2)
is a solution of (10.3.2) with initial data

X_0(t) = 0 for t < 0,  X_0(0) = I (identity),   (10.3.3)
for which U(t) - A_{-1}U(t - h) is continuous and satisfies (10.3.2) for t ≥ 0 except at kh, k = 0, 1, 2, .... Indeed, U(t) has a continuous first derivative on each interval (kh, (k + 1)h), k = 0, 1, 2, ..., and the right- and left-hand limits of U(t) exist at each kh, so that U(t) is of bounded variation on each compact interval and satisfies

(d/dt)(U(t) - A_{-1}U(t - h)) = A_0 U(t) + A_1 U(t - h),  t ≠ kh, k = 0, 1, 2, ....   (10.3.4)
Also, if U(t) is the fundamental matrix solution of (10.3.2) described above, then the solution x(φ, u) of (10.3.1) is given by

x(t, φ, u) = x(t, φ, 0) + ∫_0^t U(t - s) B u(s) ds,   (10.3.5)
where

x(t, φ, 0) = U(t)(φ(0) - A_{-1} φ(-h)) + A_1 ∫_{-h}^{0} U(t - s - h) φ(s) ds - A_{-1} ∫_{-h}^{0} dU(t - s - h) φ(s).
In order to introduce computational criteria to check when (10.3.1) is proper, or normal, we introduce the following notation, defining

Q_k(s) = A_0 Q_{k-1}(s) + A_1 Q_{k-1}(s - h) + A_{-1} Q_k(s - h),
  k = 1, 2, ...,  s = 0, h, 2h, ...,

Q_0(0) = I (the identity matrix),  Q_0(s) ≡ 0 if s ≠ 0.
Theorem 10.3.1  A necessary and sufficient condition for System (10.3.1) to be proper on the interval [0, T] is that the matrix Π(T) = {Q_k(s)B, k = 0, 1, ..., n - 1, s ∈ [0, T]} have rank n.
Proof: The proof is exactly as in [12, pp. 51-60].

Corollary 10.3.1  In (10.3.1), assume that A_{-1} = 0. Then a necessary and sufficient condition that (10.3.1) be proper on [0, T] is that

Π(T) = {Q_k(s), k = 0, 1, ..., n - 1, s ∈ [0, T]}

have rank n, where

Q_k(s) = A_0 Q_{k-1}(s) + A_1 Q_{k-1}(s - h),  Q_0(0) = B,  Q_0(s) ≡ 0 for s ≠ 0.

This is Theorem 6.1.1 of this book.
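The rank test of Corollary 10.3.1 is easy to mechanize: build the matrices Q_k(jh) by the recursion, stack them, and compare the rank of the stack with n. The example matrices below are illustrative assumptions; they show a system that is controllable only through the delayed term, failing the test when only s = 0 is allowed but passing once T ≥ h.

```python
import numpy as np

def proper_rank(A0, A1, B, num_lags):
    """Rank of Pi(T) = {Q_k(s), k = 0..n-1, s = 0, h, ..., num_lags*h}
    for x'(t) = A0 x(t) + A1 x(t-h) + B u(t)   (Corollary 10.3.1, A_{-1} = 0)."""
    n = A0.shape[0]
    Z = np.zeros_like(B, dtype=float)
    Q = {(0, 0): B.astype(float)}     # Q_0(0) = B, Q_0(s) = 0 for s != 0
    cols = [B.astype(float)]
    for k in range(1, n):
        for j in range(num_lags + 1):
            Qk = A0 @ Q.get((k - 1, j), Z) + A1 @ Q.get((k - 1, j - 1), Z)
            Q[(k, j)] = Qk
            cols.append(Qk)
    return np.linalg.matrix_rank(np.hstack(cols))

A0 = np.zeros((2, 2))
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])   # coupling enters only through the delay
B = np.array([[0.0], [1.0]])
r0 = proper_rank(A0, A1, B, num_lags=0)   # s = 0 only: rank 1, not proper
r1 = proper_rank(A0, A1, B, num_lags=1)   # s in {0, h}: rank 2, proper
```

The dependence of the rank on how many multiples of h fit into [0, T] reflects the remark following Corollary 10.3.2: once the system is proper on [0, T_1], it stays proper for all T ≥ T_1.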
Corollary 10.3.1 is the algebraic criterion for complete controllability given by Gabasov and Kirillova for the delay system

ẋ(t) = A_0 x(t) + A_1 x(t - h) + B u(t),   (10.3.6)

when the controls are not restrained to lie in a compact set but are only required to be integrable on compact intervals. We note that an algebraic
criterion for the delay equation (10.3.6) to be proper is given by Corollary 10.3.1. This is a generalization of the fundamental result of LaSalle in Hermes and LaSalle [6, p. 74] on the autonomous system

ẋ = A x + B u(t).   (10.3.7)
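LaSalle's criterion for (10.3.7) is the familiar Kalman rank condition rank[B, AB, ..., A^{n-1}B] = n, which is recovered from Corollary 10.3.1 when A_1 = 0 so that only s = 0 contributes. A sketch, with an illustrative pair (A, B):

```python
import numpy as np

def kalman_rank(A, B):
    """rank [B, AB, ..., A^{n-1} B] for x' = Ax + Bu   (10.3.7)."""
    n = A.shape[0]
    blocks, M = [], B.astype(float)
    for _ in range(n):
        blocks.append(M)
        M = A @ M                     # next block column A^k B
    return np.linalg.matrix_rank(np.hstack(blocks))

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # illustrative pair
B = np.array([[0.0], [1.0]])
r = kalman_rank(A, B)                    # r == n means (10.3.7) is proper
```

When the rank is full, the ordinary system is proper; the delay criterion of Corollary 10.3.1 adds the extra block columns Q_k(jh) that the delayed term contributes.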
Recall that (10.3.1) is normal on [0, T], T > 0, if for any r = 1, ..., m,

η^T {U(T - s)B}_r = 0 almost everywhere on s ∈ [0, T] implies η = 0.

If we follow the idea of Theorem 10.3.1, we deduce the following theorem:
Theorem 10.3.2  A necessary and sufficient condition for (10.3.1) to be normal on the interval [0, T] is that for each r = 1, 2, ..., m, the matrix

Π_r(T) = {Q_k(s) B_r,  k = 0, 1, ..., n - 1,  s ∈ [0, T]}

have rank n.

We now apply our result to the general nth-order scalar autonomous neutral equation of the form
x^{(n)}(t) = Σ_{i=0}^{n} b_i x^{(i)}(t - 1) + Σ_{i=0}^{n-1} a_i x^{(i)}(t) + u(t),   (10.3.8)

where |u| ≤ 1. Define
B = (0, 0, ..., 0, 1)^T ∈ E^n,

and let A_0, A_1, and A_{-1} be the n × n companion matrices determined by the coefficients: A_0 has ones on its superdiagonal and a_0, ..., a_{n-1} in its last row, A_1 has b_0, ..., b_{n-1} in its last row, and A_{-1} has b_n as its (n, n) entry.   (10.3.9)

Then System (10.3.8) is equivalent to (10.3.1) with A_j, B so defined.
Corollary 10.3.2  The scalar control system (10.3.8) is proper for every T > 0.

Proof: The result follows immediately from (10.3.9) and Theorem 10.3.1.
It is fairly obvious from Theorem 10.3.1 that if (10.3.1) is proper on [0, T_1], then it is proper on [0, T] for T ≥ T_1.
Theorem 10.3.3  Consider the pointwise complete system (10.3.1). Suppose
(i) for each T ≥ 0 and for each r = 1, 2, ..., m, the matrix

Π_r(T) = {Q_k(s) B_r,  k = 0, 1, ..., n - 1,  s ∈ [0, T]}   (10.3.10)

has rank n, and
(ii) α = sup{Re λ : det Δ(λ) = 0} < 0, where

Δ(λ) = λ(I - A_{-1} e^{-λh}) - A_0 - A_1 e^{-λh}.

Then there is precisely one time-optimal control that drives any φ ∈ C([-h, 0], E^n) to the origin in minimum time t*. It is given by

We know from Henry [25] that
‖X_{t_n}(·, s)‖ ≤ β < ∞ for all t_n, s, for some β; also, X_{t_n}(·, s) → X_{t*}(·, s) in the uniform topology of C. Hence, by the bounded convergence theorem, the second summand on the left-hand side tends to zero as n → ∞. From the continuity of the solution in time and the continuity of the target, ‖w_{t*} - w_{t_n}‖ → 0 as t_n → t*. Hence w_{t*} = lim_{n→∞} y(t*, u_n). Because A(t*, σ) is closed and y(t*, u_n) ∈ A(t*, σ), w(t*) = y(t*, u*) for some u* ∈ L_∞([σ, t_1], C^m), and by the definition of t*, u* is optimal.
A controllability assumption was made in Theorem 3.1 for System (10.1.9). When the target is the zero function, what is required is only the assumption of null controllability. To get conditions for this, we need two preliminary results and precise definitions. We work in the space W_∞^{(1)}. The argument is also valid in C.

Definition 10.8.2: System (10.1.9) is controllable on [σ, t_1], t_1 > σ + h, if for each ψ, φ ∈ W_∞ there exists a control u ∈ L_∞([σ, t_1], E^m) such that the solution of (10.1.9) satisfies x_σ(σ, φ, u) = φ and x_{t_1}(σ, φ, u) = ψ. If System (10.1.9) is controllable on each interval [σ, t_1], t_1 > σ + h, we simply say that it is controllable. It is null controllable on [σ, t_1] if ψ ≡ 0 in the above definition.
Definition 10.8.3: System (10.1.9) is null controllable with constraints if for each φ ∈ W_∞^{(1)} there exist a t_1 ≥ σ and a u ∈ L_∞([σ, t_1], C^m) such that the solution x(σ, φ, u) of System (10.1.9) satisfies x_σ(σ, φ, u) = φ and x_{t_1}(σ, φ, u) = 0.

Proposition 10.8.1  Suppose System (10.1.9) is null controllable on [σ, t_1]. Then for each φ ∈ W_∞ there exists a bounded linear operator H : W_∞^{(1)} → L_∞([σ, t_1], E^m) such that u = Hφ has the property that the solution x(σ, φ, Hφ) of System (10.1.9) satisfies

x_σ(σ, φ, Hφ) = φ,  x_{t_1}(σ, φ, Hφ) = 0.
Proof: From the variation of constants formula (10.1.10),

x_t(σ, φ, u) = T(t, σ)φ + C(t, σ)φ + S(t, σ)u,

where

T(t, σ)φ = x_t(σ, φ, 0),
C(t, σ)φ = ∫_σ^t d_s{X_t(·, s)}[G(σ)φ - G(s)x_s(σ, φ)],  t ∈ [σ, t_1],
S(t, σ)u = ∫_σ^t X_t(·, s) B(s) u(s) ds.
The null controllability of System (10.1.9) is equivalent to the following statement: for every φ ∈ W_∞ there exist a t_1 and a u ∈ L_∞([σ, t_1], E^n) such that

T(t_1, σ)φ + S(t_1, σ)u + C(t_1, σ)φ = 0,  t_1 > σ + h.
This is in turn equivalent to (10.8.1). The condition (10.8.1) is now valid by hypothesis. Denote by N the null space of S, and by N^⊥ the orthogonal complement of N in L_∞([σ, t_1], E^n). Let
So: Nl.
-+
S(tl, O')(Loo([O', til, En)
be the restriction of S(t 1,0') to u», Then Sol exists and is linear though not necessarily bounded, since S(td(Loo([O',td, En) is not necessarily closed. Define a mapping H : W~) -+ Loo([O', tIl, En) by H ¢
= -SOl [T(t l , O')¢ + C(t l, O')¢].
Xt(O', ¢, H ¢)«()) = x(O', ¢, H ¢)(h
Then
+ ()),
=T(t10')¢«()) + C(tl, O')¢«())
+ S(h, ())[-Sol(T(t1' O')¢( ()) + C(t 1, 0' )¢(()))] = 0,
-h
s () ::; O.
=
=
Since u = Hφ ∈ L_∞([σ, t₁], E^m), we deduce that x_{t₁}(σ, φ, u) = 0. We now prove the boundedness of H as follows: Let {φ_n} be a convergent sequence in W_∞^(1) such that {Hφ_n} converges in L_∞([σ, t₁], E^m), and let

φ = lim_{n→∞} φ_n,  u = lim_{n→∞} Hφ_n.

Since N⊥ is closed in L_∞([σ, t₁], E^m), u ∈ N⊥, and

T(t₁, σ)φ + C(t₁, σ)φ + S(t₁, σ)u = lim_{n→∞} [T(t₁, σ)φ_n + C(t₁, σ)φ_n + S(t₁, σ)Hφ_n] = 0.

Thus

u = −S₀⁻¹[T(t₁, σ)φ + C(t₁, σ)φ] = Hφ.

By the Closed Graph Theorem, H is bounded. The proposition is proved.

Definition 10.8.4: System (10.1.9) is locally null controllable with constraints if for each φ ∈ O, an open neighborhood of zero in W_∞^(1), there exist a finite t₁ and a control u ∈ L_∞([σ, t₁], C^m) ≡ U such that the solution x(σ, φ, u) of Equation (10.1.9) satisfies

x_σ(σ, φ, u) = φ,  x_{t₁}(σ, φ, u) = 0.
Proposition 10.8.2: Suppose that System (10.1.9) is null controllable. Then System (10.1.9) is locally null controllable with constraints.

Proof: Because System (10.1.9) is null controllable, by Proposition 10.8.1 there exists a bounded linear operator H : W_∞^(1) → L_∞([σ, t₁], E^m) such that for each φ the control u = Hφ steers φ to the zero function. Since H is bounded, ‖Hφ‖_∞ ≤ ‖H‖ ‖φ‖, so for every φ in a sufficiently small neighborhood O of zero in W_∞^(1) the control u = Hφ lies in the constraint set U and drives φ to zero.
10.9 Necessary Conditions for Optimal Control

We now return to our original problem of hitting a continuously moving point target w_t ∈ C in minimum time. At the time of hitting, in System (10.1.9),

x_t(σ, φ, u) = T(t, σ)φ + ∫_σ^t [d_s X_t(·, s)][G(σ)(φ) − G(s)(x_s)] + ∫_σ^t X_t(·, s)B(s)u(s) ds = w_t,

or equivalently,

w_t − T(t, σ)φ − C(t, σ)φ = z(t) = ∫_σ^t X_t(·, s)B(s)u(s) ds,

where

C(t, σ)φ = ∫_σ^t [d_s X_t(·, s)][G(σ)(φ) − G(s)(x_s)].

Thus, reaching w_t at time t corresponds to

w_t − T(t, σ)φ − C(t, σ)φ = z(t) ∈ a(t, σ).

We shall prove shortly that if u* is the optimal control with optimal time t*, i.e., if u* is the control that is used to hit w_{t*} in minimum time t*, then

w_{t*} − T(t*, σ)φ − C(t*, σ)φ = z(t*) ∈ ∂a(t*, σ),

that is, z(t*) is on the boundary of the constrained reachable set.
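The boundary condition z(t*) ∈ ∂a(t*, σ) already determines the minimum time in simple finite-dimensional cases. A toy, delay-free sketch (the target motion w(t) below is made up for illustration): for the scalar system ẋ = u, |u| ≤ 1, x(0) = 0, the reachable set at time t is a(t) = [−t, t], so the minimal hitting time t* is the first t with |w(t)| = t.

```python
# Toy illustration (scalar, delay-free): for xdot = u, |u| <= 1, x(0) = 0,
# the reachable set at time t is a(t) = [-t, t]. A moving target w(t) is
# first hit at the minimal t* with |w(t*)| = t*, i.e. when z(t*) = w(t*)
# lies on the boundary of a(t*).
def w(t):
    return 2.0 - 0.5 * t      # hypothetical target motion

def minimal_time(lo=0.0, hi=10.0, iters=60):
    # bisection on f(t) = |w(t)| - t, which is positive before the hit
    for _ in range(iters):
        mid = (lo + hi) / 2
        if abs(w(mid)) - mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t_star = minimal_time()
print(round(t_star, 6))   # |w(t*)| = t*  =>  2 - 0.5*t* = t*  =>  t* = 4/3
```

For this hypothetical w, the hitting time solves 2 − 0.5 t* = t*, so t* = 4/3, and z(t*) = w(t*) = 4/3 is indeed an endpoint of a(t*) = [−4/3, 4/3].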
Theorem 10.9.1: Let u* be the optimal control with t* the minimum time. Then z(t*) ∈ ∂a(t*, σ).

Proof: Suppose u* is used to hit w_{t*} in time t*:

z(t*) = w_{t*} − T(t*, σ)φ − C(t*, σ)φ ∈ a(t*, σ).

Suppose z(t*) is not on the boundary of a(t*, σ):

z(t*) ∈ Int a(t*, σ),  t* > σ.

Then there is a ball B(z(t*), ρ) of radius ρ about z(t*) such that

B(z(t*), ρ) ⊂ a(t*, σ).

Because a(t, σ) is a continuous function of t, we can preserve the above inclusion for t near t* if we reduce the size of B(z(t*), ρ); that is, there is a δ > 0 such that

B(z(t*), ρ/2) ⊂ a(t, σ),  t* − δ ≤ t ≤ t*.

Since z is continuous, z(t) ∈ B(z(t*), ρ/2) ⊂ a(t, σ) for t* − δ ≤ t ≤ t*, so the target could be reached before time t*. This contradicts the optimality of t*. We are led to conclude that

z(t*) ∈ ∂a(t*, σ).

Theorem 10.9.2: Let D and L satisfy the basic assumptions and Inequality (10.1.13). Suppose System (10.1.9) is controllable and the solution operator is a homeomorphism, and

z(t) = w_t − T(t, σ)φ − C(t, σ)φ.  (10.9.1)

If u* is an optimal control and t* the minimum time, then

z* = z(t*) ∈ ∂a(t*, σ) ⊂ C

if and only if u* is of the form

u*(t) = sgn[−y(t, t*)B(t)],  σ ≤ t ≤ t*,  (10.9.2)

where y(·, t*) is a nontrivial solution of the adjoint equation (10.1.6).

Proof: Because of Proposition 10.1.1, a(t, σ) is closed and convex. If u* is an optimal control and t* the minimum time, then by Theorem 10.9.1, z* ∈ ∂a(t*, σ). Also, from controllability and Theorem 10.8.3,

0 ∈ Int a(t*, σ).
By the Separation Theorem of Dunford and Schwartz [26, p. 148], there exists a ψ ∈ B₀ such that ψ ≠ 0 and

⟨ψ, p⟩ ≤ ⟨ψ, z*⟩ for every p ∈ a(t*, σ).

It follows from this that

⟨ψ, ∫_σ^{t*} T(t*, s)X₀B(s)u(s) ds⟩ ≤ ⟨ψ, ∫_σ^{t*} T(t*, s)X₀B(s)u*(s) ds⟩

for every u ∈ L_∞([σ, t*], C^m). On using Equation (10.1.4), we deduce that

⟨ψ, ∫_σ^{t*} T(t*, s)X₀B(s)u(s) ds⟩ = ∫_{−h}^0 [dψ(θ)] ∫_σ^{t*} [T(t*, s)X₀](θ)B(s)u(s) ds
= ∫_σ^{t*} {∫_{−h}^0 [dψ(θ)][T(t*, s)X₀](θ)} B(s)u(s) ds
= ∫_σ^{t*} −y(s, t*)B(s)u(s) ds,

where s → y(s, t*) is the solution of the adjoint equation (10.1.6). (See Henry [25].) Thus

∫_σ^{t*} [−y(s, t*)B(s)u(s)] ds ≤ ∫_σ^{t*} [−y(s, t*)B(s)u*(s)] ds  (10.9.3)

for every admissible u. We see from inequality (10.9.3) that u* is of the form

u*(s) = sgn[−y(s, t*)B(s)],  (10.9.4)

where y(s, t*) is the solution of the adjoint equation. We note that u* maximizes ∫_σ^{t*} −y(s, t*)B(s)u(s) ds over the admissible controls. Thus, with t* fixed and u* of the form (10.9.4), the point z* is on the boundary of a(t*, σ) and ⟨ψ, p⟩ ≤ ⟨ψ, z*⟩ for every p ∈ a(t*, σ), ψ ∈ B₀.

In Theorem 10.9.2 we have the form of the optimal control for the time-optimal problem in the space of continuous functions C. We now show that this form is also valid in E^n, without the controllability assumption. This is true because the Separation Theorem holds in E^n for any closed convex set; that is, through each boundary point of a closed convex set in E^n there passes a support plane.
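The switching law (10.9.4) is easy to evaluate once the adjoint solution is known. The sketch below uses made-up stand-ins for y(s) and B(s) (they are not the book's specific system); it only illustrates that u* is bang-bang: each component sits at ±1 and switches sign exactly where the switching function −y(s)B(s) crosses zero.

```python
import numpy as np

# Hypothetical adjoint solution y(s) (row vector, n = 2) and input matrix
# B(s) (n x m with m = 1); both are stand-ins, not the book's system.
def y(s):
    return np.array([np.cos(s), np.sin(s)])

def B(s):
    return np.array([[1.0], [0.5 * s]])

def u_star(s):
    """Bang-bang candidate u*(s) = sgn(-y(s) B(s)), taken componentwise."""
    return np.sign(-y(s) @ B(s))

# Each component of u* sits on the boundary of the unit cube |u_j| <= 1,
# switching sign whenever -y(s)B(s) crosses zero.
switch_profile = [float(u_star(s)[0]) for s in np.linspace(0.0, 3.0, 7)]
print(switch_profile)
```

For these stand-ins the switching function y(s)·B(s) = cos(s) + 0.5 s sin(s) is positive at first and changes sign between s = 2 and s = 2.5, so the control starts at −1 and switches to +1.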
Theorem 10.9.3: Let D and L satisfy the basic assumptions and Inequality (10.1.13). Let w(t) be a point target that is continuous in E^n. Let

z(t) = w(t) − T(t, σ)φ(0) − C(t, σ)φ(0).  (10.9.5)

If u* is an optimal control and t* the minimum time, then z(t*) ∈ ∂R(t*, σ) ⊂ E^n if and only if

u*(s) = sgn[−B*(s)y(s)] = sgn[−B*(s)[T*(t*, s)ψ](0)],

where ψ(0) ≠ 0 and y is a nontrivial solution of the adjoint equation.

10.10 Normal Systems

It is important to know when the necessary condition for optimal control in Theorem 10.9.2 or 10.9.3 uniquely determines an optimal control. This occurs when System (10.1.1) is normal, in the sense to be made precise below. First observe that the necessary condition for optimal control in C states that

u*(s) = sgn[−B*(s)y(s)],  σ ≤ s ≤ t*,  (10.10.1)

where y(s) is a nontrivial solution of the adjoint equation. Componentwise, this condition states that

u*_j(s) = sgn[−b*_j(s)y(s)] = sgn[−b*_j(s)[T*(s, t*)ψ](0)] on [σ, t*],

for j = 1, ..., m, where b*_j(s) is the jth row vector of B*(s). Define

Y_j(ψ) = {s ∈ [σ, t*] : b*_j(s)y(s) = 0} = {s ∈ [σ, t*] : b*_j(s)[T*(s, t*)ψ](0) = 0},

and

g_j(ψ) = {s ∈ [σ, t*] : y(s, t*, ψ)b_j(s) = 0}.

Definition 10.10.1: We say that System (10.1.1) is E^n-normal on [σ, t*] if Y_j(ψ) has measure zero for each j = 1, ..., m and for each nontrivial ψ with ψ(0) ≠ 0. The system is E^n-normal if it is normal on every interval [σ, t*]. Similar definitions can be made for C-normality if we replace Y_j(ψ) with g_j(ψ) in the above definition. Utilizing the ideas of Gabasov and Kirillova [12], we are led to a necessary and sufficient condition for normality that is easily computable: it is Theorem 10.3.1.
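Normality rules out the degenerate case in which a switching function vanishes on a whole interval. A toy numerical sketch (the row b_j and the adjoint y below are invented for illustration): we sample the switching function s ↦ b_j(s)y(s) and count its sign changes; when its zero set has measure zero, the control sgn[−b_j(s)y(s)] is determined almost everywhere.

```python
import numpy as np

# Hypothetical j-th row of B*(s) and adjoint solution y(s); not the book's data.
b = lambda s: np.array([1.0, -0.5])
y = lambda s: np.array([np.exp(-s), 1.0])

s_grid = np.linspace(0.0, 5.0, 10001)
vals = np.array([b(s) @ y(s) for s in s_grid])   # switching function e^{-s} - 0.5

# Each sign change is an isolated zero (a control switch); the switching
# function does not vanish on any interval, so u*_j = sgn(-vals) is
# determined almost everywhere on [0, 5].
switches = int(np.sum(np.sign(vals[:-1]) != np.sign(vals[1:])))
print(switches)
```

Here the switching function e^{−s} − 0.5 has a single isolated zero at s = ln 2, so the candidate control switches exactly once.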
10.11 The Geometric Theory of Time-Optimal Control of Linear Neutral Systems

In this section we study the time-optimal problem: Minimize

J(t, x_t) = t  (10.7.0)

subject to

ẋ(t) − A₋₁(t)ẋ(t − h) = A₀(t)x(t) + Σ_{j=1}^N A_j(t)x(t − h_j) + B(t)u(t),  (10.11.1)

x_σ = φ,  (t, x_t(σ, φ, u)) ∈ [0, ∞) × H,  H ⊂ C,  (10.11.2)

where the A_j are analytic n × n matrix functions and B is an n × m analytic matrix function. The controls are constrained to lie in

U_ad = {u ∈ L_∞^loc([σ, ∞), E^m) : u(t) ∈ U a.e. t ∈ [σ, ∞); U ⊂ E^m compact and convex; 0 ∈ Int U}.

Here L_∞^loc([σ, ∞), E^m) represents the space of measurable functions that are defined on each interval [σ, t], take values in E^m, and are essentially bounded. We work in the space C = C([−h, 0], E^n) of continuous functions from [−h, 0] into E^n with the sup norm. For these two spaces, the appropriate control set is

U = {u measurable : u(t) ∈ E^m, |u_j(t)| ≤ 1 a.e., j = 1, ..., m}.  (10.11.3)

In what follows, x_t ∈ C is defined by x_t(s) = x(t + s), s ∈ [−h, 0]. For the existence, uniqueness, and continuous dependence on initial data of the solution x(σ, φ, u) of (10.11.1), see Hale [9, pp. 25, 301]; see also Henry [16]. For conditions for the existence of analytic solutions of

ẋ(t) − A₋₁ẋ(t − h) = A₀x(t) + Σ_{j=1}^N A_j x(t − h_j),  (10.11.4)

see Tadmor [29]. If we designate the strongly continuous semigroup of linear transformations defined by solutions of (10.11.4) by T(t, σ), t ≥ σ, so that

T(t, σ)φ = x_t(σ, φ, 0),
then the solution x(σ, φ, u) of (10.11.1) with x_σ(σ, φ, u) = φ satisfies the relation

x_t(σ, φ, u) = T(t, σ)φ + ∫_σ^t T(t, s)X₀B(s)u(s) ds,  (10.11.5)

where X₀ is defined by

X₀(θ) = 0 for −h ≤ θ < 0,  X₀(0) = I.

Just as in Hajek [30] and Chukwu [31], we use the continuity of the minimal-time function M to construct an optimal feedback control for (10.11.1). We need a special subset of C* = B₀, the conjugate space of C, which may be described as the cone of unit outward normals to support hyperplanes to a(t, σ) at a point φ on the boundary of a(t, σ).
Definition 10.12.2: For each φ ∈ A(σ), let

K(φ) = {ψ ∈ B₀ : ‖ψ‖ = 1 and ⟨ψ, p⟩ ≤ ⟨ψ, φ⟩ for all p ∈ a(M(φ), σ)},

where ⟨·, ·⟩ is the pairing between B₀ and C.

Remark 10.12.1: Note that φ ∈ ∂a(M(φ), σ). We now outline key properties of the set K(φ). Let S(B₀) denote the collection of subsets of B₀; then K : C → S(B₀) is the mapping defined above.

Definition 10.12.3: We say that K is upper semicontinuous at φ if and only if lim sup K(φ_n) ⊂ K(φ) as φ_n → φ. We have:

Lemma 10.12.1: If (10.11.1) is controllable, then K(φ) is nonvoid and K(−φ) = −K(φ). Also, K(φ) is upper semicontinuous at φ.

Proof: Let (10.11.1) be controllable. Then 0 ∈ Int a(t, σ) by Theorem 10.11.1. Because φ ∈ ∂a(M(φ), σ), there exists a ψ ∈ B₀, ψ ≠ 0 [26, p. 418], such that ⟨ψ, p⟩ ≤ ⟨ψ, φ⟩ for all p ∈ a(M(φ), σ).
The choice of ψ can be made such that ‖ψ‖ = 1. Hence K(φ) is nonvoid.

Given ε > 0, there exists δ > 0 such that if meas(E) < δ, then ∫_E m(t) dt < ε. For this E we have

∫_E |(d/dt)y^n(t)| dt < ε,  y^n ∈ H,

so that the functions (d/dt)y^n(t), y^n ∈ H, are equi-absolutely integrable. It now follows that H is an equi-absolutely continuous set of functions. Since M is bounded and g(M) is bounded, we readily deduce that H is bounded. By Ascoli's theorem there is a subsequence, which we again call
The Time-Optimal Control Theory of Nonlinear Systems
{y^n} and which converges in the metric of C([σ, T], E^n) to a function y ∈ C([σ, T], E^n) that is absolutely continuous on [σ, T]. By condition (iii), we deduce

|y^n(t) − y^n(t₁)| ≤ ∫_{t₁}^t m(s) ds,  |y^n(t) − y(t)| ≤ K ∫ m(s) ds,

so that

∫_{E_i} (d/ds)[y^n(s)] ds → 0 as n → ∞,

uniformly with respect to each decreasing sequence {E_i}, E_i ⊂ [σ, T] ≡ I, with void intersection. Therefore [see 15, p. 292] there is a sequence {ẏ^n} (we retain the same notation) weakly convergent in L₁(I, E^n) to a function ξ ∈ L₁(I, E^n). Since the functions y^n converge uniformly on [σ, T] to y, they certainly converge pointwise almost everywhere to y, and hence by the absolute continuity of these functions we have ẏ(t) = ξ(t) a.e. Thus, for t_n ∈ I, t_n → t*, we may pass to the limit.
We note that the necessary conditions furnish multipliers λ, ρ, ν, q and a linear functional ℓ with

λ ≥ 0,  (λ, ℓ, ρ, ν, q) ≠ 0,  (11.4.33)

λ ∫_0^T L₀(t)x_t dt + ∫_0^T dρ(t)L₁(t)x_t + ℓ(ẋ(·) − Σ_{j=1}^N A₋₁j(·)ẋ(· − h_j) − L(·)x_(·))
+ ∫_{T−h}^T dν(t − T)[ẋ(t) − Σ_{j=1}^N A₋₁j(t)ẋ(t − h_j)] + q x(T) = 0  (11.4.34)

for all x ∈ X₁ with x₀ = 0; and

λ ∫_0^T B₀(t)[u(t) − u*(t)] dt + ℓ[−B(·)(u(·) − u*(·))] ≥ 0  (11.4.35)

for all u ∈ U_ad. Recall the definition of η, η_i and their extension outside their intervals of definition, and extend the definition of ν by constancy and continuity outside its original interval of definition. We need an expression for the linear functional ℓ; it is contained in the next lemma.
Lemma 11.4.1: The linear functional ℓ has the form

ℓ(z) = ∫_0^T a(t)z(t) dt − ∫_{T−h}^T dν(t − T)z(t)  (11.4.36)

for all z ∈ ℰ, where a ∈ L_∞([0, T], E^n).

Proof: Consider (11.4.34). Let z ∈ ℰ and consider the equation

ẋ(t) − Σ_{j=1}^N A₋₁j(t)ẋ(t − h_j) − L(t)x_t = z(t) in [0, T],  x₀ = 0.  (11.4.37)
The variation of constants formula of Hale and Meyer [27] gives the solution as

x(t; z) = ∫_0^t X(t, s)z(s) ds,  t ∈ [0, T],  (11.4.38)

where X(t, s) ∈ L_∞([0, T]², E^{n²}) is the fundamental matrix defined by

X(t, s) = ∂W(t, s)/∂s a.e. in s,  0 < s < t,

and W(·, s) satisfies

W(t, s) = Σ_{j=1}^N A_j(t)W(t − s − h_j) + ∫_s^t L(λ, W_λ(·, s)) dλ − (t − s)I.
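As a sanity check of the representation (11.4.38) in the simplest possible case: with no delays (A₋₁j = 0) and L(t)x_t = a x(t) scalar, the fundamental matrix is X(t, s) = e^{a(t−s)} and (11.4.38) reduces to the classical variation-of-constants integral. The sketch below (toy values, not from the text) compares the quadrature against the closed form.

```python
import numpy as np

# Delay-free scalar specialization: xdot(t) = a*x(t) + z(t), x(0) = 0, so
# x(t) = integral_0^t exp(a*(t-s)) z(s) ds.
a = -1.0
z = lambda s: np.ones_like(s)       # constant forcing z(t) == 1

def x(t, n=20001):
    s = np.linspace(0.0, t, n)
    vals = np.exp(a * (t - s)) * z(s)
    ds = s[1] - s[0]
    return ds * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule

# Closed form for this z: x(t) = 1 - exp(-t); compare at t = 2.
err = abs(x(2.0) - (1.0 - np.exp(-2.0)))
print(err < 1e-6)
```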
With this expression, (11.4.34) takes the form

λ ∫_0^T L₀(t)x_t(·; z) dt + ∫_0^T dρ(t)L₁(t)x_t(·; z) + ℓ(z) + ∫_{T−h}^T dν(t − T)z(t)
+ ∫_{T−h}^T dν(t − T)L(t)x_t(·; z) + q x(T; z) = 0  (11.4.39)

for all z ∈ ℰ. But then L₀, L₁, L are as given in (11.4.25)–(11.4.27), so that (11.4.34) yields

ℓ(z) = −λ ∫_0^T (∫_{−h}^0 d_s η₀(t, s)x(t + s; z)) dt − ∫_0^T dρ(t) ∫_{−h}^0 d_s η₁(t, s)x(t + s; z)
− ∫_{T−h}^T dν(t − T)z(t) − ∫_{T−h}^T dν(t − T) ∫_{−h}^0 d_s η(t, s)x(t + s; z) − q x(T; z).  (11.4.40)

We now substitute the expression for x(t; z) in (11.4.38) into (11.4.40), and change the order of integration to deduce (11.4.36), where a ∈ ℰ depends on λ, q, X, ν, η₀, η₁, η. With (11.4.36) established, we rewrite (11.4.39) as follows:
λ ∫_0^T L₀(t)x_t dt + ∫_0^T dρ(t)L₁(t)x_t + ∫_0^T a(t)[ẋ(t) − Σ_{j=1}^N A₋₁j(t)ẋ(t − h_j) − L(t)x_t] dt
− ∫_{T−h}^T dν(t − T)[ẋ(t) − Σ_{j=1}^N A₋₁j(t)ẋ(t − h_j) − L(t)x_t]
+ ∫_{T−h}^T dν(t − T)[ẋ(t) − Σ_{j=1}^N A₋₁j(t)ẋ(t − h_j)] + q x(T) = 0,
for all x ∈ X₁. Once more we use the expressions for η₀, η₁, η to deduce that

λ ∫_0^T ∫_{t−h}^t d_θ η₀(t, θ − t)x(θ) dt + ∫_0^T dρ(t) ∫_{t−h}^t d_θ η₁(t, θ − t)x(θ)
+ ∫_0^T a(t)[ẋ(t) − Σ_{j=1}^N A₋₁j(t)ẋ(t − h_j) − ∫_{t−h}^t d_θ η(t, θ − t)x(θ)] dt
− ∫_{T−h}^T dν(t − T) ∫_{t−h}^t d_θ η(t, θ − t)x(θ) + q x(T) = 0,

for all x ∈ X₁ with x₀ = 0. We consider each of the first four terms above, beginning with

λ ∫_0^T ∫_{t−h}^t d_θ η₀(t, θ − t)x(θ) dt

and ending with

∫_{T−h}^T ∫_{t−h}^t dν(t − T) d_θ η(t, θ − t)x(θ),

and integrate by parts:

−λ ∫_0^T η₀(t, −h)x(t − h) dt − λ ∫_0^T ∫_{t−h}^t η₀(t, s − t)ẋ(s) ds dt
− ∫_0^T dρ(t)η₁(t, −h)x(t − h) − ∫_0^T dρ(t) ∫_{t−h}^t η₁(t, s − t)ẋ(s) ds
+ ∫_0^T a(t)[ẋ(t) − Σ_{j=1}^N A₋₁j(t)ẋ(t − h_j)] dt − ∫_0^T a(t)η(t, −h)x(t − h) dt
− ∫_0^T a(t) ∫_{t−h}^t η(t, s − t)ẋ(s) ds dt + ∫_{T−h}^T dν(t − T)η(t, −h)x(t − h)
− ∫_{T−h}^T dν(t − T) ∫_{t−h}^t η(t, s − t)ẋ(s) ds + q x(T) = 0,

for all x ∈ X₁ with x₀ = 0. As in Angell and Kirsch [25], use the following abbreviation:

μ(t) = −∫_t^T a(s) ds,  0 ≤ t ≤ T − h;
μ(t) = −ν(t − T) − ∫_t^T a(s) ds,  T − h ≤ t ≤ T.

For any y ∈ ℰ that is extended by zero, set

x(t) = ∫_0^t y(s) ds,  t ∈ [0, T].
Use this above and change the order of integration:

−λ ∫_0^T (∫_s^{s+h} η₀(θ + h, −h) dθ) y(s) ds − λ ∫_0^T (∫_s^T η₀(θ, s − θ) dθ) y(s) ds
− ∫_0^T (∫_s^{s+h} dρ(θ) η₁(θ + h, −h)) y(s) ds − ∫_0^T (∫_s^T dρ(θ) η₁(θ, s − θ)) y(s) ds
+ ∫_0^T a(t)y(t) dt + ∫_0^T (∫_s^T dμ(θ) η(θ, s − θ)) y(s) ds + ∫_0^T (∫_s^{s+h} dμ(θ + h) η(θ + h, −h)) y(s) ds
+ q ∫_0^T y(s) ds + ∫_0^T Σ_{k=1}^N a(s + h_k)A₋₁k(s + h_k) y(s) ds = 0

for all y ∈ ℰ. Note that dμ(θ) = a(θ) dθ for 0 ≤ θ ≤ T − h, while dμ(θ) = a(θ) dθ − dν(θ − T) for T − h ≤ θ ≤ T. Because the integrand above must vanish pointwise, we have, for s ∈ [0, T],

−λ ∫_s^{s+h} η₀(θ + h, −h) dθ − λ ∫_s^T η₀(θ, s − θ) dθ − ∫_s^{s+h} dρ(θ) η₁(θ + h, −h) − ∫_s^T dρ(θ) η₁(θ, s − θ)
+ a(s) + ∫_s^T dμ(θ) η(θ, s − θ) + ∫_s^{s+h} dμ(θ + h) η(θ + h, −h) + q + Σ_{k=1}^N a(s + h_k)A₋₁k(s + h_k) = 0,

which yields

a(s) = λ ∫_s^T η₀(θ, s − θ) dθ − ∫_s^T a(θ)η(θ, s − θ) dθ + ∫_s^T dρ(θ) η₁(θ, s − θ) − ∫_s^T dν(θ) η(θ, s − θ)
− q − Σ_{k=1}^N a(s + h_k)A₋₁k(s + h_k).

We have proved that (11.4.31) holds. To prove (11.4.32), we insert ℓ(z) from (11.4.36) into (11.4.35):

λ ∫_0^T B₀(t)[u(t) − u*(t)] dt + ∫_0^T a(t)[−B(t)(u(t) − u*(t))] dt − ∫_{T−h}^T dν(t − T)[−B(t)(u(t) − u*(t))] ≥ 0

for all u ∈ U_ad. The result follows at once, since ρ is monotonic. Note that (λ, a, ρ, ν, q) do not vanish simultaneously, since otherwise (λ, ℓ, ν, q, ρ) ≡ (0, 0, 0, 0, 0). The proof of Corollary 11.4.1 and Theorem 11.4.2 follows as in [25].

Remark: The properties of the range of Dg₂ are conjectured as follows:
Closure: Proposition 1.1 of Section 12.1. Surjectivity: controllability of the linear system

ẋ(t) − Σ_{i=1}^N A_i(t)ẋ(t − h_i) = L(t, x_t) + B(t)u(t)

in W_p^(1). See, for example, Theorem 10.8.4. We now formulate the necessary conditions of the time-optimal control problem.

Theorem 11.4.2: Consider the following problem: Minimize T subject to the constraints

ẋ(t) − Σ_{i=1}^N A₋₁i(t)ẋ(t − h_i) = f(t, x_t, u(t)),  (11.4.2)

x₀ = φ ∈ W_p^(1)([−h, 0], E^n),  (11.4.3)

x_T = ψ,  D(T)ψ ∈ C¹([−h, 0], E^n),  u(t) ∈ C^m a.e. on [0, T],  (11.4.4)

where

C^m = {u ∈ E^m : |u_j| ≤ 1, j = 1, ..., m}.

(i) We assume that f is continuous, is Fréchet differentiable with respect to its second argument, and is continuously differentiable with respect to its third argument, the derivatives being continuous with respect to all arguments. Consider the linear approximation (11.4.9) of (11.4.2), and assume that:

(ii) The system

ẋ(t) − Σ_{i=1}^N A_i ẋ(t − h_i) = L(t, x_t) + B(t)u(t),  x₀ = 0,  (11.4.9)

with attainable set A defined by

A = {x_T(u) ∈ C([−h, 0], E^n), Dx_T ∈ C¹([−h, 0], E^n) : u ∈ U},
is such that

A = {ψ ∈ C([−h, 0], E^n) : Dψ ∈ C¹([−h, 0], E^n)}.

Then there exists

(a, ν, q) ∈ L_∞([0, T], E^n) × NBV_n([0, T]) × E^n,  (a, ν, q) ≠ (0, 0, 0),

such that

a(t) = −q − Σ_{k=1}^N a(t + h_k)A₋₁k(t + h_k) − ∫_t^T η(s, t − s)a(s) ds + ∫_t^T η(s, t − s) dν(s − T)

for all t ∈ [0, T], and

∫_0^T a(t)B(t)u(t) dt − ∫_{T−h}^T dν(t − T)B(t)u(t) ≤ ∫_0^T a(t)B(t)u*(t) dt − ∫_{T−h}^T dν(t − T)B(t)u*(t)

for all u ∈ U_ad.
Proof: We need to prove that the attainable set A is all of V = {ψ ∈ C([−h, 0], E^n) : Dψ ∈ C¹([−h, 0], E^n)} if and only if the map

Dg(x*, u*) : (X₁ × U) − (x*, u*) → Y₂

is surjective. Suppose that A = {ψ ∈ C([−h, 0], E^n) : Dψ ∈ C¹([−h, 0], E^n)}, and fix a point (z, v, p) ∈ Y₂. We shall prove that there exists a pair (x, u) ∈ X₁ × U such that if x̄ = x − x* and ū = u − u*, then

(d/dt)x̄(t) − Σ_{j=1}^N A₋₁j(t)(d/dt)x̄(t − h_j) − L(t, x̄_t) − Bū(t) = z(t),

Dx̄_T = v and x̄(T) = p.
Now define ψ ∈ C([−h, 0], E^n) with Dψ ∈ C¹([T − h, T], E^n) by

ψ(t) = ∫_{T−h}^t v(s) ds,  T − h ≤ t ≤ T.

By assumption, there exists a control ū such that

x(t; 0, ū) = ψ(t) − ∫_{T−h}^t U(t, s)z(s) ds,  T − h ≤ t ≤ T.

It follows from the variation of constants formula in Proposition 2.3.3 that

x(t; z, ū) = x(t; 0, ū) + ∫_0^t U(t, s)z(s) ds.

The required choice is the pair x = x(·; z, ū), u(t) = ū(t). To see that the converse is also valid, we note that if ψ ∈ C([−h, 0], E^n) with Dψ ∈ C¹([T − h, T], E^n), then the fact that Dg is surjective implies that there exists a pair (x, u) that is the preimage of the triple (0, ψ, ψ(T)). We conclude that (ii) holds if and only if the required map D₂g is surjective. But this is equivalent to the controllability of the linear system (11.4.9) with controls in U. Theorem 11.4.1 and its corollary can therefore be invoked to conclude the proof.

Remark 11.4.1: For autonomous systems, necessary and sufficient conditions for exact controllability on the interval [0, T] are given by Salamon [5, p. 155]. There we can identify our interval as [0, T − h], with controls u ∈ L_∞([0, T − h], E^m) and state space W^(1)([−h, 0], E^n). On the interval [T − h, T] one can use a control that is continuous. Thus it can easily be proved that in the autonomous case Salamon's conditions suffice for the surjectivity conditions.

REFERENCES

1. G. A. Kent, Optimal Control of Functional Differential Equations of Neutral Type, Ph.D. Thesis, Brown University, 1971.
2. H. T. Banks and G. A. Kent, "Control of Functional Differential Equations to Target Sets in Function Space," SIAM J. Control 10 (1972) 567-593.
3. R. Gabasov and F. Kirillova, The Qualitative Theory of Optimal Processes, Marcel Dekker, New York, 1976.
4. H. R. Rodas and C. E. Langenhop, "A Sufficient Condition for Function Space Controllability of a Linear Neutral System," SIAM J. Control and Optimization 16 (1978) 429-435.
5. D. Salamon, Control and Observation of Neutral Systems, Pitman Advanced Publishing Program, Boston, 1984.
6. E. N. Chukwu, "The Time Optimal Control Problem of Linear Neutral Functional Systems," J. of Nigerian Mathematics Society 1 (1982) 39-55.
7. E. N. Chukwu, "The Time Optimal Control Theory of Linear Differential Equations of Neutral Type," Proceedings, Second Bellman Continuum, Georgia Institute of Technology, Atlanta, Georgia, June 24, 1986.
8. M. Cruz and J. K. Hale, "Existence, Uniqueness and Continuous Dependence for Hereditary Systems," Annali di Matematica Pura ed Applicata 85 (1970) 63-82.
9. W. R. Melvin, "A Class of Neutral Functional Differential Equations," J. Differential Equations 12 (1972) 524-534.
10. M. A. Cruz and J. K. Hale, "Asymptotic Behavior of Neutral Functional Differential Equations," Archives of Rational Mechanics and Analysis 34 (1969) 331-353.
11. W. Rudin, Real and Complex Analysis, McGraw-Hill, New York, 1974.
12. J. Dieudonné, Foundations of Modern Analysis, Academic Press, New York, 1969.
13. M. A. Cruz and J. K. Hale, "Stability of Functional Differential Equations of Neutral Type," J. Differential Equations 7 (1970) 334-355.
14. K. Kuratowski and C. Ryll-Nardzewski, "A General Theorem on Selectors," Bull. Acad. Polon. Sci. 12 (1965) 397-403.
15. N. Dunford and J. T. Schwartz, Linear Operators, Part I, General Theory, Interscience, New York, 1958.
16. T. S. Angell, "Existence Theorem for Hereditary Lagrange and Mayer Problems of Optimal Control," SIAM J. Control and Optimization 14 (1976) 1-18.
17. S. Lang, Analysis II, Addison-Wesley, Reading, MA, 1969.
18. E. N. Chukwu, "An Estimate for the Solutions of a Certain Functional Differential Equation of Neutral Type," Proceedings of the International Conference on Nonlinear Phenomena in Mathematical Sciences, edited by V. Lakshmikantham, Academic Press, 1982.
19. E. N. Chukwu, "Global Asymptotic Behavior of Functional Differential Equations of the Neutral Type," J. Nonlinear Analysis: Theory, Methods and Applications 5 (1981) 853-872.
20. J. Hale, Theory of Functional Differential Equations, Springer-Verlag, New York, 1977.
21. O. Lopes, "Forced Oscillation in Nonlinear Neutral Differential Equations," SIAM J. of Applied Mathematics 29 (1975) 196-207.
22. H. J. Sussmann, "Small-Time Local Controllability and Continuity of the Optimal Time Function for Linear Systems," J. Optimization Theory and Applications 53 (1987) 281-296.
23. E. N. Chukwu and O. Hajek, "Disconjugacy and Optimal Control," J. Optimization Theory and Applications 27 (1979).
24. E. N. Chukwu and H. C. Simpson, "Perturbations of Nonlinear Systems of Neutral Type," J. Differential Equations 82 (1989) 28-59.
25. T. S. Angell and A. Kirsch, "On the Necessary Conditions for Optimal Control of Retarded Systems," Appl. Math. Optimization 22 (1990) 117-145.
26. H. T. Banks and M. Q. Jacobs, "An Attainable Sets Approach to Optimal Control of Functional Differential Equations with Function Space Terminal Conditions," J. Differential Equations 13 (1973) 129-149.
27. J. K. Hale and R. R. Meyer, "A Class of Functional Equations of Neutral Type," Mem. Amer. Math. Soc., No. 76, 1967.
Chapter 12

Controllable Nonlinear Neutral Systems

Introduction

Criteria for controllability and constrained controllability of linear systems are contained in Sections 10.1 and 10.2, where Euclidean and function-space targets are considered. In this chapter we treat the general nonlinear situation in W_p^(1).
12.1 General Nonlinear Systems

We now consider the general system

(d/dt)D(t, x_t) = f(t, x_t, u(t)),  (12.1.1a)

where f : E × C × E^m → E^n is continuous and is continuously differentiable in the second and third arguments, and where (12.1.1a) satisfies the basic assumptions on D and f in Section 11.2. In particular, we assume that there exists an m ∈ L₂([σ, ∞), E) such that

‖D_φ f(t, φ, u)‖ + ‖D_u f(t, φ, u)‖ ≤ m(t),  (12.1.1b)

and

D(t, x_t) = x(t) − g(t, x_t),

where

|g(t, φ)| ≤ k(t)‖φ‖ for all φ ∈ C, t ≥ σ,  (12.1.1c)

for some continuous k; g(t, φ) is linear in φ.

We need some preliminary definitions from analysis. We work in the space W_p^(1).

Definition 12.1.1: Let X and Y be real Banach spaces and F a mapping from an open set S of X into Y. If for each fixed point x₀ ∈ S and every h ∈ X the limit

lim_{t→0} [F(x₀ + th) − F(x₀)]/t = δF(x₀, h)

exists in the topology of Y, then the operator δF(x₀, h) is called the Gâteaux differential of F at x₀ in the direction of h. If for each fixed
x₀ ∈ X the Gâteaux differential δF(x₀, ·) is a bounded linear operator mapping X into Y, we write δF(x₀, h) = F′(x₀)h, and F′(x₀) is called the Gâteaux derivative of F at x₀. If F has a Gâteaux derivative at x₀, we say F is G-differentiable at x₀, and F′(x₀) ∈ L(X, Y), the space of bounded linear operators from X into Y.

Definition 12.1.2: F : X → Y is weakly G-differentiable at x₀ if there exists a bounded linear map F′(x₀) ∈ L(X, Y) such that

⟨[F(x₀ + th) − F(x₀)]/t − F′(x₀)h, y*⟩ → 0

as t → 0, for all h ∈ X, y* ∈ Y*. As a consequence, if F is G-differentiable, it is weakly G-differentiable. Consider

(d/dt)[D(t, x_t)] = f(t, x_t, u(t)),  x_σ = 0.

Then B_ρ(F(u₀)) ⊂ F(B_r(u₀)) for some ρ > 0, provided k is sufficiently small.

Theorem 12.1.1: Consider System (12.1.1a), whose linear variational system along x ≡ 0,

(d/dt)D(t)z_t = L(t, z_t) + B(t)v(t),  (12.1.7b)

is controllable on [σ, t₁], where t₁ > σ + h and where

D₂f(t, 0, 0)z_t = L(t, z_t),  D₃f(t, 0, 0)v = B(t)v.

Assume that f(t, 0, 0) = 0 for all t. Then

0 ∈ Int A(t₁, σ),  (12.1.8a)

where

A(t₁, σ) = {x_{t₁}(σ, 0, u) : u ∈ L_p([σ, t₁], E^m), ‖u‖_{L_p} ≤ 1, x(σ, 0, u) is a solution of (12.1.1a) with x_σ = 0}

is the attainable set associated with (12.1.1a).

Proof of Theorem: Let φ ∈ W_p^(1), u ∈ L_p([σ, t₁], E^m), t₁ > σ + h. Let u → x_{t₁}(σ, φ, u) be the mapping F : L_p([σ, t₁], E^m) → W_p^(1)([−h, 0], E^n)
given by Fu = x_{t₁}(σ, φ, u), where x(σ, φ, u) is a solution of (12.1.1a). Then by Lemma 12.1.1,

F′(u)v = D_u x_{t₁}(σ, φ, u)(v) = y_{t₁}(σ, φ, u, v),

where y is the solution of the variational equation (12.1.2b) and F′(u) : L_p([σ, t₁], E^m) → W_p^(1). Evidently F′ is a bounded linear surjection if and only if the control system (12.1.2b) is controllable on [σ, t₁]. As a consequence, F′(L_p([σ, t₁], E^m)) = W_p^(1), and Lemma 12.1.1 guarantees that all the requirements of the open mapping theorem are met for the map F.

Thus, for u₀ ≡ 0 ∈ L_p([σ, t₁], E^m) we have F(u₀) = F(0) = 0 ∈ W_p^(1), and for every r > 0 and open ball B(0, r) ⊂ L_p([σ, t₁], E^m) of center 0 ∈ L_p and radius r there is an open ball B(0, ρ) ⊂ W_p^(1) of center 0 and radius ρ such that

B(0, ρ) ⊂ B(F(u₀), ρ) ⊂ F(B(u₀, r)) = F(B(0, r)).

It follows from this that

0 ∈ Int A(t₁, σ) ⊂ W_p^(1),  (12.1.9)

where

A(t₁, σ) = {x_{t₁}(σ, 0, u) : u ∈ L_p([σ, t₁], E^m), ‖u‖_{L_p} ≤ r, x(σ, 0, u) is a solution of (12.1.1a)}.  (12.1.8b)

We have proved that for any r > 0, 0 ∈ Int A(t₁, σ), where A(t₁, σ) is defined in (12.1.8b). Since r is arbitrary and the set of admissible controls U_ad with constraints is

U_ad = {u ∈ L_p([σ, t₁], E^m) : ‖u‖_{L_p} ≤ 1},  (12.1.10)

(12.1.3) is valid.
Corollary 12.1.1: Assume all the conditions of Theorem 12.1.1. Then System (12.1.1a) is locally null controllable with constraints. That is, there is a neighborhood O of zero in W_p^(1) such that every initial point of O can be driven to zero in some finite time t₁ using some u ∈ U_ad.

Condition (12.1.3) yields not only local null controllability but constrained null controllability as well, because (12.1.9) implies that

0 ∈ Int D,  (12.1.11a)

where D is the domain of null controllability, i.e., the set of initial functions that can be steered to zero in finite time with admissible controls.
Chapter 13

Stability Theory of Large-Scale Hereditary Systems

Introduction

In Sections 2.4 to 4.4 we studied the stability properties of isolated systems and their perturbations, which are described by

ẋ(t) = f(t, x_t) + g(t, x_t),  (13.1.1)

or

(d/dt)[x(t) − A₋₁x(t − h)] = f(t, x_t) + g(t, x_t).  (13.1.2)

But several systems of practical interest, such as the economic models of Sections 1.8 and 9.5 or the nonlinear electric circuits described in Sections 1.6 and 12.3, may often be viewed as interconnected or composite systems. We now investigate the stability properties of such hereditary systems in terms of the same properties of the subsystems and the growth conditions of the interconnecting structure. The analysis will then be used to provide an interesting policy prescription for the growth of large-scale systems. The definitions of the various stability concepts of Section 2.4 will be maintained.
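Before turning to interconnections, note that delay equations of the form (13.1.1) are easy to integrate numerically. A toy sketch (the equation and all constants below are invented for illustration): forward-Euler stepping of the scalar delay equation ẋ(t) = −a x(t) + b x(t − h), for which a > |b| is a standard sufficient condition for uniform asymptotic stability of the trivial solution.

```python
# Toy sketch (equation and constants invented): forward-Euler integration of
# the scalar delay equation x'(t) = -a*x(t) + b*x(t - h). A standard
# sufficient condition for uniform asymptotic stability of x == 0 is a > |b|.
a, b, h = 2.0, 0.5, 1.0
dt, T = 0.001, 30.0
n_hist = int(h / dt)                  # number of steps spanning one delay
x = [1.0] * (n_hist + 1)              # initial function phi == 1 on [-h, 0]

for _ in range(int(T / dt)):
    x_now, x_delayed = x[-1], x[-1 - n_hist]
    x.append(x_now + dt * (-a * x_now + b * x_delayed))

print(abs(x[-1]) < 1e-6)   # the solution has decayed essentially to zero
```

The list x stores the whole history, so the delayed state x(t − h) is simply the entry n_hist steps back; this is the "method of steps" in its crudest form.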
13.1 Delay Systems

To deal with the problem, we let C^{n_i} denote the set of all continuous functions ψ^i : [−h, 0] → E^{n_i}, with norm ‖ψ^i‖ = max{|ψ^i(t)| : −h ≤ t ≤ 0}, where |ψ^i(t)| is the Euclidean norm of ψ^i(t). Consider the interconnected system

ż^i(t) = f_i(t, z_t^i) + Σ_{j=1, j≠i}^ℓ g_{ij}(t, z_t^j),  i = 1, ..., ℓ,  (13.1.3)

where

f_i : E × C^{n_i} → E^{n_i},  g_{ij} : E × C^{n_j} → E^{n_i}
are nonlinear functions. If we let

Σ_{i=1}^ℓ n_i = n,
x^T = [(z¹)^T, ..., (z^ℓ)^T] ∈ E^n,
f(t, x_t)^T = [f₁(t, z_t¹)^T, ..., f_ℓ(t, z_t^ℓ)^T],
g_i(t, x_t) = Σ_{j=1, j≠i}^ℓ g_{ij}(t, z_t^j),
g(t, x_t)^T = [g₁(t, x_t)^T, ..., g_ℓ(t, x_t)^T],

then (13.1.3) can be written as

ẋ(t) = f(t, x_t) + g(t, x_t).  (13.1.4)

System (13.1.4) can be viewed as an interconnection of ℓ isolated subsystems

ż^i(t) = f_i(t, z_t^i),  (13.1.5)

with interconnection described by

g_i(x_t) = Σ_{j=1, j≠i}^ℓ g_{ij}(z_t^j).

We call (13.1.4) a composite system and (13.1.3) its decomposition. The basic assumptions for the existence of unique solutions are valid for (13.1.1), (13.1.3), (13.1.4), and (13.1.5). We recall Theorem 3.1.1 for uniform asymptotic stability of (13.1.4) in C^n, which is rephrased for (13.1.5) in the space C^{n_i} as the definition of Property F1 for (13.1.5).

Definition 13.1.1: In (13.1.5), suppose f_i : E × C^{n_i} → E^{n_i} takes E × (bounded sets of C^{n_i}) into bounded sets of E^{n_i}. System (13.1.5) is said to have Property F1 if there exist continuous nondecreasing functions u_i, v_i, w_i : E⁺ → E⁺ such that u_i(s), v_i(s), w_i(s) are positive for s > 0, u_i(0) = v_i(0) = w_i(0) = 0, and u_i(s) → ∞ as s → ∞; and there exists a continuous functional V_i : E × C^{n_i} → E such that

(i) u_i(|φ^i(0)|) ≤ V_i(t, φ^i) ≤ v_i(‖φ^i‖),
(ii) V̇_i(t, φ^i) ≤ −c_i w_i(|φ^i(0)|), and
(iii) |V_i(t, φ^i) − V_i(t, φ̄^i)| ≤ L_i ‖φ^i − φ̄^i‖,

and all these hold for all φ^i, φ̄^i ∈ C^{n_i}, where L_i is a constant. Note that Property F1 comprises the required criteria for uniform asymptotic stability of (13.1.5).
Theorem 13.1.1: For the composite system (13.1.4) with decomposition (13.1.3), assume that the following conditions hold:

(i) Each isolated subsystem (13.1.5) possesses Property F1.
(ii) For each i, j = 1, ..., ℓ, i ≠ j, there are constants k_{ij} ≥ 0 such that |g_{ij}(t, φ^j)| ≤ k_{ij} w_j(|φ^j(0)|).
(iii) All successive principal minors of the ℓ × ℓ test matrix S = [s_{ij}] defined by

s_{ij} = c_i if i = j,  s_{ij} = −L_i k_{ij} if i ≠ j,

are positive.

Then the trivial solution of the composite system (13.1.4) is uniformly asymptotically stable.

Proof: Let V_i be the functionals of hypothesis (i), and let a_i > 0, i = 1, ..., ℓ, be arbitrary constants. Choose

V(t, φ) = Σ_{i=1}^ℓ a_i V_i(t, φ^i).

Theorem 13.2.1 requires in hypothesis (iv) that there exist constants a_i > 0 such that the matrix S = [s_{ij}] specified by

s_{ij} = a_i(c_i + M_i a_{ii}/(1 − L_i(h₀))) if i = j,
s_{ij} = ½(a_i M_i a_{ij}/(1 − L_i(h₀)) + a_j M_j a_{ji}/(1 − L_j(h₀))) if i ≠ j,

is a negative definite matrix. The constant L_i(h₀) is associated with g_j as in (13.2.3b).
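Hypothesis (iii) of Theorem 13.1.1 is a finite computation once the constants are known. A toy sketch (all constants invented for illustration): build S with s_ii = c_i and s_ij = −L_i k_ij, then verify that its successive (leading) principal minors are positive.

```python
import numpy as np

# Illustration (made-up numbers): the successive-principal-minors test of
# Theorem 13.1.1 for l = 3, with diagonal entries c_i and off-diagonal
# entries -L_i * k_ij.
c = [2.0, 1.5, 1.0]
Lc = [0.5, 0.5, 0.5]                      # Lipschitz constants L_i of the V_i
k = np.array([[0.0, 0.4, 0.3],
              [0.2, 0.0, 0.5],
              [0.1, 0.3, 0.0]])           # interconnection bounds k_ij

S = np.diag(c) - np.diag(Lc) @ k          # s_ii = c_i, s_ij = -L_i k_ij (i != j)
minors = [np.linalg.det(S[:m, :m]) for m in range(1, 4)]
print(all(m > 0 for m in minors))         # True: the test matrix passes
```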
Proof: Let V_i be given by hypothesis (ii), and let a_i, i = 1, ..., ℓ, be arbitrary positive constants. Define

V(t, x_t) = Σ_{i=1}^ℓ a_i V_i(t, z_t^i).
Because of Property A, there exist u, v, W with the same properties as
Ui, Vi, Wi l
s La,u,(ID'(t)z;D,
u(ID(t)XtD
i=l
l
s Vet, Xt) ~ L ai v,(lIz;ID, i=l
~ v(IIX t ID·
For the second inequality in the derivative of V, we use Cruz and Hale Equation 7.8 of [3, p. 354] as was done by Chukwu (4, p. 354] to evaluate the derivative of V along solutions of (13.2.5). Indeed, l
$$ \dot V(t, x_t) = \sum_{i=1}^{\ell} a_i \dot V_i(t, x_t^i) \le \sum_{i=1}^{\ell} a_i c_i w_i(|D^i(t)x_t^i|) + \sum_{i=1}^{\ell} a_i L_i \sum_{j \ne i} |k_{ij}(t, x_t^j)| $$
$$ \le \sum_{i=1}^{\ell} a_i \Bigl[ c_i w_i(|D^i(t)x_t^i|) + \frac{M_i}{1 - L_i} \sum_{j \ne i} a_{ij}\, w_j(|D^j(t)x_t^j|) \Bigr]. $$
Now let $R = [r_{ij}]$ be the $\ell \times \ell$ matrix specified by
$$ r_{ij} = \begin{cases} a_i\bigl[c_i + M_i a_{ii}/(1 - L_i)\bigr], & i = j, \\ a_i M_i a_{ij}/(1 - L_i), & i \ne j. \end{cases} $$
Then
$$ \dot V(t, x_t) \le w^T R w = w^T \bigl((R + R^T)/2\bigr) w = w^T S w, $$
where $S = [s_{ij}]$ is the test matrix in (iv), and where
$$ w^T = \bigl[w_1(|D^1(t)x_t^1|), \dots, w_\ell(|D^\ell(t)x_t^\ell|)\bigr]. $$
Because $S$ is symmetric and negative definite, all its eigenvalues are negative, so that for some $\lambda > 0$,
$$ \dot V(t, x_t) \le -\lambda\, w^T w = -\lambda \sum_{i=1}^{\ell} w_i^2(|D^i(t)x_t^i|). $$
Hence $\dot V(t, x_t)$ is negative definite. Global uniform asymptotic stability follows at once.
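The theorem's message — a composite system can be uniformly asymptotically stable even when one subsystem is unstable in isolation, provided the interconnections compensate — can be illustrated with a toy linear composite. The matrix below is hypothetical, chosen only so that subsystem 1 is unstable alone while the coupled system is stable:

```python
import numpy as np

# Toy composite system: in isolation, x1' = 0.3*x1 diverges, but the
# interconnection feedback through x2 makes the composite matrix Hurwitz
# (trace < 0 and det > 0 for this 2x2 example).
A = np.array([[0.3, -1.0],
              [1.0, -2.0]])

def simulate(A, x0, dt=1e-3, T=30.0):
    """Integrate x' = A x by forward Euler and return the final state."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (A @ x)
    return x

x_final = simulate(A, [1.0, 1.0])
print(np.linalg.norm(x_final) < 1e-3)  # True: trajectories decay to the origin
```

Subsystem 1 by itself grows like $e^{0.3t}$, yet both eigenvalues of the coupled matrix are negative (about $-0.28$ and $-1.42$), so the composite state decays; this anticipates the discussion of stabilizing external feedback in Section 13.3.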
13.3 General Comments
The parameter $c_i$ in Definition 13.2.1 is the so-called degree or margin of stability. As observed by Michel and Miller [2] for ordinary differential equations, for the system to satisfy (iv) of Theorem 13.2.1 it is necessary that $c_i + M_i a_{ii} < 0$. If $c_i > 0$, so that (13.2.7) is unstable, we must insist that $M_i a_{ii} < 0$ and $|M_i a_{ii}| > c_i$. Thus, to ensure stability of the large-scale system when a subsystem is unstable, we must provide a sufficient amount of stabilizing feedback from outside the subsystem. This has very great policy implications and is now restated as a universal principle.

Principle 13.2.1: If a subsystem is misbehaving and unstable, the large-scale system can be made stable provided sufficient external feedback (external force) is brought to bear on the subsystem to ensure stability. The dismantling of external feedback on a subsystem can create instability. This principle is analogous to the principles of controllability of large economic systems studied in Section 9.5.

REFERENCES
1. E. N. Chukwu, "Uniform asymptotic stability of large scale systems of neutral type," IEEE Proceedings of the 18th Southeastern Symposium on System Theory, 1986.
2. A. N. Michel and R. K. Miller, Qualitative Analysis of Large Scale Dynamical Systems, Academic Press, New York, 1977.
3. M. A. Cruz and J. K. Hale, "Stability of Functional Differential Equations of Neutral Type," J. Differential Equations 7 (1970), 334-355.
4. E. N. Chukwu, "Global Asymptotic Behaviour of Functional Differential Equations of the Neutral Type," Nonlinear Analysis, Theory, Methods and Applications 5 (1981), 853-872.
Index

A
Absolute fuel minimum problem, 163
Acquired immunodeficiency syndrome (AIDS), 12-17, 208
Active immunization, 10, 17
Actuator position, 18, 235
Adjoint equation, 214-215, 289
Adjoint system, 214-215, 402, 406-408, 420, 425
Affine hull, 293
Affine span, 293
Analytic, 105, 243, 430
Analytic matrix function, 207
Arzela-Ascoli Theorem, 69-70
Atomic at zero, 47, 56, 102, 380, 418
Attainable set, 359, 363, 378, 453

B
Banach space, 360
Bang-bang principle, 209, 378-379
Bang-of-bang, 249, 254
Bellman's Inequality, 59
Bounded, 59, 95, 173
Bounded linear operator, 414, 429
Bounded variation, 45, 87

C
Canonical isomorphism, 405, 412
Capital accumulation, 25-30, 354, 368
Cesari's property, 458
Characteristic equation, 53-57
Closed affine hull, 293-294
Codimensionality, 482
Compact, 69, 361, 377
Compact convex subset, 330, 377
Computational criterion, 382
Control, 1-3, 5-9, 17-18, 27-30, 32
Controllable, 30, 105, 285, 303
Controllable with constraints, 33, 202-204, 455, 484
Controller, 368
Control switches, 107, 112
Converge uniformly, 457
Convex, 174, 181, 369
Cost, 27, 32, 156-157, 401, 408
Coulter, K., 153

D
D'Alembert solution, 23
Damped harmonic oscillator, 1-2, 7
Depression, 488
Deregulations, 366-367
Determining equations, 194, 216, 249
Digital computer, 18-19, 505
Dirac functions, 254
Discontinuities, 241
Disturbances, 367
Domain of null controllability, 370, 480
Dominated Convergence Theorem, 280, 315, 477

E
Economic interpretation, 495, 508
Economic target, 365-367
Effort, 156
Eigenvalues, 53-58
En-Hao-Yang, 61
Epidemic, 9-17, 207
Epidemics, 9-17
Equi-absolutely, 456
Equibounded, 284
Equicontinuous, 70, 284, 327
Essentially bounded, 198
Etheridge, D., 153
Euclidean controllability, 193
Euclidean controllable, 193-194, 214, 262
Euclidean null controllable, 203
Euclidean reachable, 359, 370
Extremal, 211, 213, 381

F
Filippov's Lemma, 392
Finite escape times, 450
Flip-flop, 18
Flip-flop circuit, 18
Flip-flop circuits, 489-493
Fluctuation of the current, 6-9, 434
Frechet derivative, 102, 278, 299, 309, 445
Frechet differentiable, 292, 316, 475
Fredholm's alternative, 281
Fubini's Theorem, 358
Fully controllable, 105
Functional, 67-72, 89-91, 293, 308
Function-space controllable, 207
Function space controllability, 197-200
Fundamental lemma of Hajek, 111
Fundamental matrix, 35-40
Fundamental matrix solution, 36-44, 48-51
Fundamental principles, 365

G
Gateaux derivative, 314, 318, 405, 474
Gateaux differential, 315, 474, 476
Generalization of Bellman's Inequality, 61
Generalized (delta) functions, 163
Generalized inverse, 292, 299, 328
Global controllability, 483
Global economy, 354
Globally asymptotically stable, 52-65, 208
(Globally) null controllable, 363
Globally null controllable with constraints, 203, 320
Gronwall, 59-61, 279, 448
Growth condition, 322, 337, 343

H
Haddock and Terjeki, 75
Hajek and Krabs, 165
Hajek-Krabs Duality Theorem, 167
Hajek's Lemma, 114
Hamiltonian, 275, 288
Harvesting/seeding strategies, 5, 230
Hausdorff metric, 210
Health policy, 11
Holder inequality, 404
Homeomorphism, 380
Homogeneous system, 35-36

I
Implicit Function Theorem, 281, 448
Impulse control, 176, 189, 191
Impulsive controls, 176, 253
Index of the control system, 111, 223, 432
Individual initiatives, 367
Infectious disease, 9-17
Information pattern, 389
Initial endowment, 27-29, 355, 366
Initiatives, 367, 488
Inner product, 79, 83
Integral equations, 16
Interconnected, 339, 349, 486
Interconnected system, 337
Interconnection, 337-338, 340

K
Kuhn-Tucker type, 293

L
Laplace transform, 49
Large scale systems, 341, 364-365
Lebesgue-Stieltjes sense, 339
Linear differential equation of neutral type, 448
Linear functional, 463
Linearly independent, 111
Local maximum principle, 461
Logistic equation, 100
Lyapunov-Razumikhin type theorem, 76-85

M
Maximum principle, 275, 288, 443
Maximum Protoprinciple, 273
Maximum thrust, 159
Measurable function, 107, 222
Metanormal, 173, 258, 276
Metanormality, 174, 258
Michaelis-Menten, 13
Minimal control strategy, 161
Minimal time function, 217, 426-429
Minimal time functions, 217
Minimum effort, 252
Minimum effort control, 163, 252-254
Minimum effort problem, 156, 160
Minimum energy control, 265
Minimum fuel controls, 165
Minimum fuel optimal controls, 169
Minimum pseudo-fuel controls, 170
Minimum time, 266
Moore-Penrose generalized inverse, 343
Mortality, 15
Multiplier Rule, 295, 463
Multipliers, 461

N
Negative semidefinite, 81, 205
Neutral system, 46-51
Nonlinear, 473
Nonsingular, 106
Nonsingularity, 359
Nonvoid, 369
Normal, 107, 251, 262, 373, 382, 387
Normality, 385
Null controllability, 359
Null controllable, 285-286
Null controllable with constraints, 416

O
Open loop control, 32
Open map, 304, 360
Open Mapping Theorem, 478
Operator, 121, 323
Optimal control, 3, 31-32
Optimal control function, 32, 107
Optimal control strategy, 432
Optimal feedback control, 33, 106-110, 123-126, 149, 222, 430-431
Optimal problem, 32, 134
Optimal switching sequence, 242
Oscillations, 2
Outer product, 425, 429

P
Piecewise analytic, 403, 430
Point of discontinuity, 107
Pointwise complete, 212-214
Pointwise completeness, 388
Policy, 488
Positive definite symmetric, 79, 91, 205, 329
Positive semidefinite, 72, 329
Predator-prey, 3-5
Proper, 365, 373, 386, 417
Pursuit game, 373, 388

Q
Quadratic cost functions, 443
Quarry, 367
Quarry control, 390

R
Razumikhin function, 75
Reachable set, 107, 173, 359
Redmond and Silverberg, 189
Regulations, 488
Rest-to-Rest Maneuver, 130
Retarded Equations, 35-44
Riesz Representation Theorem, 47
Rigid body maneuver, 130, 180

S
Schauder's fixed-point theorem, 324-333, 361-363
Secondary infections, 208
Servomechanism, 1-3
Simple harmonic oscillator, 108, 177
Singular part, 55
Sobolev space, 31, 193
Solidarity function, 349, 366-370, 489
Solution operator, 102, 221
Square integrable, 105, 193
Stability, 33, 52-64
Stable, 67, 88-89
  uniformly asymptotically stable, 53-57, 88, 90-94, 123, 261, 434
  uniformly, exponentially stable, 54-55, 59, 62, 320, 483
  uniformly stable, 444
Strictly convex, 407
Strictly normal, 111, 238, 240
Strictly normal system, 111, 238
Strict retardations, 201
Stroboscopic strategy, 369, 394
Strongly continuous semigroup, 409, 423
Suitable strategy, 368
Surjection, 296, 317, 319, 479
Susceptible population, 208
Switching curve, 181
Switching curves, 138-142
Switching locus, 133, 148-151
Switch time, 107
Symmetric, 173, 369
Synthesis of Optimal Control, 106

T
Targets, 473
Terminal manifolds, 110, 112, 128-134, 246-247
Time-optimal control, 169, 455
Transition matrix, 357
Transmission-line problem, 19, 435
Transmission rate, 208

U
Ukwu, 188
Universal principle, 370
Upper semicontinuity, 217, 283, 454
Upper semicontinuous, 217, 453

V
Variational equation, 102
Variation of constant formula, 105, 400

W
Weakly convergent, 457
Weakly G-differentiable, 478
Wind Tunnel Model, 17-18, 235
Mathematics in Science and Engineering
Edited by William F. Ames, Georgia Institute of Technology
Recent titles

J. W. Jerome, Approximation of Nonlinear Evolution Systems
Anthony V. Fiacco, Introduction to Sensitivity and Stability Analysis in Nonlinear Programming
Hans Blomberg and Raimo Ylinen, Algebraic Theory for Multivariable Linear Systems
T. A. Burton, Volterra Integral and Differential Equations
C. J. Harris and J. M. E. Valenca, The Stability of Input-Output Dynamical Systems
George Adomian, Stochastic Systems
John O'Reilly, Observers for Linear Systems
Ram P. Kanwal, Generalized Functions: Theory and Technique
Marc Mangel, Decision and Control in Uncertain Resource Systems
K. L. Teo and Z. S. Wu, Computational Methods for Optimizing Distributed Systems
Yoshimasa Matsuno, Bilinear Transformation Method
John L. Casti, Nonlinear System Theory
Yoshikazu Sawaragi, Hirotaka Nakayama, and Tetsuzo Tanino, Theory of Multiobjective Optimization
Edward J. Haug, Kyung K. Choi, and Vadim Komkov, Design Sensitivity Analysis of Structural Systems
T. A. Burton, Stability and Periodic Solutions of Ordinary and Functional Differential Equations
Yaakov Bar-Shalom and Thomas E. Fortmann, Tracking and Data Association
V. B. Kolmanovskii and V. R. Nosov, Stability of Functional Differential Equations
V. Lakshmikantham and D. Trigiante, Theory of Difference Equations: Applications to Numerical Analysis
B. D. Vujanovic and S. E. Jones, Variational Methods in Nonconservative Phenomena
C. Rogers and W. F. Ames, Nonlinear Boundary Value Problems in Science and Engineering
Dragoslav D. Siljak, Decentralized Control of Complex Systems
W. F. Ames and C. Rogers, Nonlinear Equations in the Applied Sciences
Christer Bennewitz, Differential Equations and Mathematical Physics
Josip E. Pecaric, Frank Proschan, and Y. L. Tong, Convex Functions, Partial Orderings, and Statistical Applications
E. N. Chukwu, Stability and Time-Optimal Control of Hereditary Systems