MATHÉMATIQUES & APPLICATIONS
Directeurs de la collection : G. Allaire et M. Benaïm
66
MATHÉMATIQUES & APPLICATIONS
Comité de Lecture 2008–2011 / Editorial Board 2008–2011

Rémi Abgrall, INRIA et Mathématiques, Univ. Bordeaux 1, FR
Grégoire Allaire, CMAP, École Polytechnique, Palaiseau, FR
Michel Benaïm, Mathématiques, Univ. de Neuchâtel, CH
Olivier Catoni, Proba. et Mod. Aléatoires, Univ. Paris 6, FR
Thierry Colin, Mathématiques, Univ. Bordeaux 1, FR
Marie-Christine Costa, UMA, ENSTA, Paris, FR
Arnaud Debussche, ENS Cachan, Antenne de Bretagne, Avenue Robert Schumann, 35170 Bruz, FR
Jacques Demongeot, TIMC, IMAG, Univ. Grenoble I, FR
Nicole El Karoui, CMAP, École Polytechnique, Palaiseau, FR
Josselin Garnier, Proba. et Mod. Aléatoires, Univ. Paris 6 et 7, FR
Stéphane Gaubert, INRIA Saclay, Île-de-France, Orsay et CMAP, École Polytechnique, Palaiseau, FR
Claude Le Bris, CERMICS, ENPC et INRIA, Marne la Vallée, FR
Claude Lobry, INRA, INRIA, Sophia-Antipolis et Analyse Systèmes et Biométrie, Montpellier, FR
Laurent Miclo, Analyse, Topologie et Proba., Univ. Provence, FR
Felix Otto, Institute for Applied Mathematics, University of Bonn, DE
Valérie Perrier, Mod. et Calcul, ENSIMAG, Grenoble, FR
Bernard Prum, Statist. et Génome, CNRS, INRA, Univ. Evry, FR
Philippe Robert, INRIA, Domaine de Voluceau, Rocquencourt, FR
Pierre Rouchon, Automatique et Systèmes, École des Mines, Paris, FR
Annick Sartenaer, Mathématiques, Univ. Namur, BE
Eric Sonnendrücker, IRMA, Strasbourg, FR
Sylvain Sorin, Combinat. et Optimisation, Univ. Paris 6, FR
Alain Trouvé, CMLA, ENS Cachan, FR
Cédric Villani, UMPA, ENS Lyon, FR
Enrique Zuazua, Basque Center for Applied Mathematics, Bilbao, Basque Country, ES

Instructions aux auteurs : Les textes ou projets peuvent être soumis directement à l'un des membres du comité de lecture avec copie à G. Allaire ou M. Benaïm. Les manuscrits devront être remis à l'Éditeur sous format LaTeX2e (cf. ftp://ftp.springer.de/pub/tex/latex/svmonot1/).
Weijiu Liu
Elementary Feedback Stabilization of the Linear Reaction-Convection-Diffusion Equation and the Wave Equation
Weijiu Liu University of Central Arkansas Department of Mathematics 201 Donaghey Avenue Conway, AR 72035 USA
[email protected]

ISSN 1154-483X
ISBN 978-3-642-04612-4
e-ISBN 978-3-642-04613-1
DOI 10.1007/978-3-642-04613-1
Springer Heidelberg Dordrecht London New York

Library of Congress Control Number: 2009941063

Mathematics Subject Classification (2000): 93D15, 93C20, 93B52, 93B05, 93B07, 93B55, 35B40, 35K20, 35K57, 35L05, 35L15, 35L20, 49K15, 49K20

© Springer-Verlag Berlin Heidelberg 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: SPi Publisher Services

Printed on acid-free paper
Springer is part of Springer Science +Business Media (www.springer.com)
To Shi Danxia, Liu Xin, and my parents, Liu Yili and Zhang Louzhong
Preface
Mathematical control theory of applied partial differential equations is built on linear and nonlinear functional analysis, and many existence theorems in control theory result from applications of theorems in functional analysis. This makes control theory inaccessible to students who do not have a background in functional analysis. Many advanced books on the control theory of infinite-dimensional systems have been written using functional analysis and semigroup theory, and they present control theory in an abstract setting. This motivated me to write a text for control theory classes that presents the subject through concrete examples and minimizes the use of functional analysis. No functional analysis is assumed, and any analysis included here is elementary, using calculus tools such as integration by parts. The material presented in this text is essentially a simplification of material from existing advanced control books. It is therefore accessible to senior undergraduate students and first-year graduate students in applied mathematics who have taken linear algebra and ordinary and partial differential equations.

Elementary functional analysis is collected in Chapter 2; this material is needed to present the control theory of partial differential equations. Since many control concepts and theories for partial differential equations are transplanted from finite-dimensional control systems, a brief introduction to feedback control of such systems is given in Chapter 3. The topics covered in that chapter include controllability, observability, stabilizability, pole placement, and quadratic optimal control. Feedback stabilization of linear reaction-convection-diffusion equations is treated in Chapter 4, where both interior and boundary control problems are addressed; the methods employed include eigenfunction expansions, integral transforms, and optimal control. Finally, feedback stabilization of linear wave equations is presented in Chapters 5 and 6. The one-dimensional wave equation is considered first and the higher-dimensional wave equations follow, since the one-dimensional equation is easier to use to illustrate the theories and methods. The perturbed energy method is emphasized for both interior and boundary control problems, while the optimal control technique is also used.
This text can be used as a textbook for an introductory one-semester graduate course on control theory of partial differential equations. If students do not have a background in finite-dimensional control systems, one can cover Sections 3.1-3.7 of Chapter 3, Sections 4.1-4.3 of Chapter 4, all sections of Chapter 5, and Sections 6.1-6.3 of Chapter 6. Material from Chapter 2 can be introduced whenever needed, and all material about optimal control can be skipped. If students have a background in both functional analysis and finite-dimensional control systems, one can cover all of Chapters 4, 5, and 6 and give a brief review of Chapters 2 and 3.

Inevitably, the material is selected from the topics I have worked on or am most familiar with, and many other important topics are not covered. Control theory for other applied partial differential equations, such as elastic equations, thermoelastic equations, viscoelastic equations, Schrödinger equations, and Navier-Stokes equations, is not included. In addition, exact and approximate controllability is not discussed since it is difficult to present in an elementary way.

I thank my PhD advisor, Dr. Graham H. Williams, for introducing me to the area of control theory of partial differential equations. I thank Drs. George Haller, Miroslav Krstic, and Enrique Zuazua for directing my research and providing stimulating feedback on my work. I thank the two reviewers for their serious work in reading and evaluating this text and for their constructive suggestions. I thank my students, such as Jingvoon Chen, for using this text and correcting mistakes. I thank Ms. Peggy Arrigo and Dr. Danny Arrigo for their language editing. Finally, I thank my family for their constant support.

Conway, Arkansas, USA
May 2009
Weijiu Liu
Contents
1  Examples of Control Systems ............................................ 1
   1.1  Control of a Mass-Spring System .................................. 1
   1.2  Temperature Control of a Rod in a Chemical Reactor ............... 3
   1.3  Control of String Vibration ...................................... 6

2  Elementary Functional Analysis ......................................... 9
   2.1  Banach Spaces and Hilbert Spaces ................................. 9
   2.2  Open and Closed Sets ............................................. 15
   2.3  Linear Operators ................................................. 17
   2.4  Sobolev Spaces ................................................... 19
   2.5  Eigenvalue Problems .............................................. 21
   2.6  Semigroups ....................................................... 29
        2.6.1  Definitions and Properties ................................ 30
        2.6.2  Formulas for Semigroup Representation ..................... 32
        2.6.3  Tests for Semigroup Generation ............................ 34
        2.6.4  Semigroup Growth Bound .................................... 38
   2.7  Abstract Cauchy Problems ......................................... 44
   2.8  References and Notes ............................................. 48

3  Finite Dimensional Systems ............................................. 49
   3.1  General Forms of Control Systems ................................. 49
   3.2  Stability ........................................................ 51
   3.3  Controllability and Observability ................................ 57
   3.4  State Feedback Control ........................................... 67
   3.5  PID Controllers .................................................. 75
   3.6  Integral State Feedback Control .................................. 78
   3.7  Output Feedback Control .......................................... 83
        3.7.1  Observer-based Output Feedback Control .................... 83
        3.7.2  Integral Output Feedback Control .......................... 86
   3.8  Optimal Control on Finite-time Intervals ......................... 90
        3.8.1  Existence and Uniqueness .................................. 91
        3.8.2  Optimality System ......................................... 95
   3.9  Optimal State Feedback Controllers ............................... 100
   3.10 Asymptotic Tracking and Disturbance Rejection .................... 111
   3.11 References and Notes ............................................. 117

4  Linear Reaction-Convection-Diffusion Equation ......................... 119
   4.1  Stability ........................................................ 120
   4.2  Interior Feedback Stabilization .................................. 129
        4.2.1  State Feedback Controls ................................... 131
        4.2.2  Output Feedback Controls .................................. 139
   4.3  Boundary Feedback Stabilization .................................. 145
        4.3.1  State Feedback Controls ................................... 145
        4.3.2  Output Feedback Controls .................................. 163
   4.4  Optimal Interior Control ......................................... 173
        4.4.1  Existence and Uniqueness .................................. 174
        4.4.2  Necessary Conditions ...................................... 177
        4.4.3  Optimality Systems ........................................ 179
        4.4.4  Optimal Interior State Feedback Controls .................. 183
   4.5  Optimal Boundary Control ......................................... 193
        4.5.1  Necessary Conditions ...................................... 194
        4.5.2  Optimality Systems ........................................ 195
        4.5.3  Existence and Uniqueness .................................. 202
        4.5.4  Optimal Boundary State Feedback Controls .................. 205
   4.6  Generalization to Abstract Dynamical Systems ..................... 212
   4.7  References and Notes ............................................. 214

5  One-dimensional Wave Equation ......................................... 215
   5.1  Stability ........................................................ 215
   5.2  Linear Interior Feedback Stabilization ........................... 218
   5.3  Linear Boundary Feedback Stabilization ........................... 225
   5.4  References and Notes ............................................. 231

6  Higher-dimensional Wave Equation ...................................... 233
   6.1  Stability ........................................................ 233
   6.2  Linear Boundary Feedback Stabilization ........................... 235
   6.3  Nonlinear Boundary Feedback Stabilization ........................ 242
   6.4  Observability Inequalities ....................................... 248
   6.5  Linear Interior Feedback Stabilization ........................... 255
   6.6  Nonlinear Interior Feedback Stabilization ........................ 261
   6.7  Optimal Boundary Control ......................................... 271
   6.8  References and Notes ............................................. 287

References ............................................................... 287

Index .................................................................... 293
Chapter 1
Examples of Control Systems
Feedback control is the process of regulating a physical system to a desired output based on feedback of the actual outputs of the system. Feedback control systems are ubiquitous around us; examples include trajectory planning of a robot manipulator, guidance of a tactical missile toward a moving target, regulation of room temperature, and control of string vibrations. These control systems can be abstracted by Figure 1.1. Here a plant subject to a disturbance d is given, and a controller is to be designed so that the output y tracks the reference r. The plant can be modeled either by a system of ordinary differential equations (a finite dimensional system) or by partial differential equations (an infinite dimensional system).
1.1 Control of a Mass-Spring System

Linear finite dimensional control systems have the following general form:

    ẋ = Ax + Bu + Gd,
    ym = Cx + Du + Hd,
    yc = Ex + Fu + Jd,
    x(0) = x0,

where x = (x1, x2, ..., xn)* (hereafter * denotes the transpose of a vector or matrix) is a state vector, ẋ denotes the derivative dx/dt, x0 is an initial state caused by external disturbances, ym = (ym1, ..., yml)* is a measured output vector, yc = (yc1, ..., yck)* is a controlled output vector, u = (u1, ..., um)* is a control vector, and A, B, C, D, E, F, G, H, J are constant matrices with appropriate dimensions. Given a desired reference r, the control objective is to design the control u, based on the measured output ym, to regulate the controlled output yc to r, that is, yc(t) → r as t → ∞.
Fig. 1.1 Standard feedback control system. y is an output from a plant, d denotes a disturbance, and r is a reference input. A controller is to be designed so that the output y tracks the reference r.
As an example, we consider a mass-spring-damper system as shown in Figure 1.2. An object of mass m is attached to a spring. If the mass is initially disturbed from its equilibrium, it will oscillate. To control the oscillation, a control actuator, a damper, is attached to the mass. Let y(t) denote the displacement of the mass from the equilibrium position. Then the system equation is given by

    mÿ = −ky + u,                                                   (1.1)

where m, k are positive constants, the dot ˙ denotes the time derivative, and u is the force resulting from the damper. In analysis, it is sometimes convenient to convert a higher order differential equation into a system of first order differential equations. To do so, we define

    x1 = y,   x2 = ẏ.

Then we have

    ẋ1 = ẏ = x2,
    ẋ2 = ÿ = −(k/m)y + (1/m)u = −(k/m)x1 + (1/m)u.

This system can be written concisely in the matrix form

    [ẋ1]   [  0    1 ] [x1]   [ 0 ]
    [ẋ2] = [ −k/m  0 ] [x2] + [1/m] u.                              (1.2)

If our particular interest in the analysis of the system is some variable such as the displacement y, which is required to behave in a specified manner, we can set up another equation

    yc = [1  0] (x1, x2)* = x1.                                     (1.3)
Fig. 1.2 Mass-spring-damper system.
This is the controlled output. If the displacement can be measured physically, we then have another output equation

    ym = [1  0] (x1, x2)* = x1.                                     (1.4)

This is the measured output, which is used for feedback. The measured output can be different from the controlled output. If the displacement cannot be measured, but the velocity x2 = ẏ can be measured, then the measured output equation is

    ym = [0  1] (x1, x2)* = x2.                                     (1.5)

If the mass is expected to oscillate in a desired manner r(t) or to stay at a certain fixed position such as the equilibrium 0, we need to design a feedback control u = F(ym) such that yc(t) − r(t) converges to 0 as t tends to infinity.
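To make this example concrete, the following numerical sketch (not part of the text; the parameter values, feedback gain, and time step are illustrative assumptions) simulates the mass-spring system (1.2) with the damper force chosen as the velocity feedback u = −c·x2 and shows the displacement decaying toward the equilibrium 0.

```python
import numpy as np

# Illustrative (assumed) parameters: mass, spring constant, feedback gain
m, k, c = 1.0, 4.0, 0.8
A = np.array([[0.0, 1.0], [-k / m, 0.0]])
B = np.array([0.0, 1.0 / m])

def rhs(x):
    u = -c * x[1]                 # velocity feedback u = F(ym) = -c * x2
    return A @ x + B * u

# Integrate with the classical Runge-Kutta method
x, dt = np.array([1.0, 0.0]), 0.01    # initial displacement 1, initial velocity 0
for _ in range(3000):                 # simulate 30 seconds
    k1 = rhs(x); k2 = rhs(x + 0.5 * dt * k1)
    k3 = rhs(x + 0.5 * dt * k2); k4 = rhs(x + dt * k3)
    x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print("displacement after 30 s:", x[0])   # close to 0: the oscillation is damped
```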
1.2 Temperature Control of a Rod in a Chemical Reactor

Modern feedback control systems have become more and more complex. Stringent accuracy requirements lead to distributed parameter systems, in which partial differential equations are required to model the physical systems. Since these systems have infinitely many eigenfunctions, they are called infinite dimensional systems. One class of partial differential equations consists of the reaction-convection-diffusion equations

    ∂u/∂t = μ∇²u + ∇·(uv) + au.
Depending on the particular problem, u can represent temperature or the concentration of a chemical species. The constant μ > 0 is the diffusivity of the temperature or the species, the vector v(x) = (v1(x), ..., vn(x)) is the velocity field of a fluid flow, and a(x) is a reaction rate. ∇² is defined by

    ∇²u = ∂²u/∂x1² + ··· + ∂²u/∂xn²,

which is called the Laplace operator. The gradient of u is denoted by

    ∇u = (∂u/∂x1, ..., ∂u/∂xn),

and ∇·(uv) = div(uv) = ∑_{i=1}^n ∂(uvi)/∂xi denotes the divergence of the vector uv.

For partial differential equations, we can pose two kinds of control problems: interior control and boundary control. In the problem of interior control, the control acts inside a domain:

    ∂u/∂t = μ∇²u + ∇·(uv) + au + φ,

where φ denotes the controller. In the problem of boundary control, the control acts only on the boundary of the domain:

    ∂u/∂t = μ∇²u + ∇·(uv) + au,
    u|_∂Ω = φ,

where Ω denotes a bounded open set in Rⁿ and ∂Ω denotes the boundary of Ω. In this control problem, the state variable is u. The output of the system can be posed in many ways in accordance with the specific physical problem. The concentration of a chemical or a room temperature is usually desired to be homogenized. Then the controlled output is the concentration

    oc(u) = u(x,t).

If the average concentration on different subsets ωi° (i = 1, 2, ..., l) can be measured, then the measured output is

    om(u) = ( ∫_{ω1°} h1(x)u(x,t) dV, ..., ∫_{ωl°} hl(x)u(x,t) dV ),
where hi (i = 1, 2, ..., l) are given weight functions. A typical real example of such an output is the room temperature measurement in which a number of thermo-sensors are placed in a room. As in the case of finite dimensional control systems, the control objective is to design the control φ to regulate u to a desired state, such as a room temperature of 20°C.
Fig. 1.3 A thin rod in a chemical reactor: The reactor is fed with a pure species B and a zero-th order exothermic catalytic reaction of the form B → C takes place on the rod.
As an example of application, we consider a thin one-dimensional rod in a chemical reactor [20] as shown in Figure 1.3. The lateral surface area of the rod is not insulated. The reactor is fed with a pure species B and a zero-th order exothermic catalytic reaction of the form B → C takes place on the rod. Since the reaction is exothermic, a cooling medium that is in contact with the rod is used for cooling. We assume that the heat energy f(x,t) flowing out of the lateral sides per unit volume and per unit time is proportional to the difference between the rod temperature u(x,t) and a known cooling medium temperature um(x):

    f(x,t) = α[u(x,t) − um(x)],

where α is a positive proportionality constant. We also assume that the heat energy generated by the species B per unit volume and per unit time is equal to [20]

    β ( exp( −γ / (1 + u(x,t)/T0) ) − e^{−γ} ),

where β is a positive proportionality constant, γ denotes a dimensionless activation energy, and T0 is a reference temperature. Then the heat source is equal to

    Q(x,t) = β ( exp( −γ / (1 + u(x,t)/T0) ) − e^{−γ} ) − α[u(x,t) − um(x)].

It then follows that the governing equation for the heat flow in this rod is

    ∂u/∂t = μ ∂²u/∂x² + (β/(cρ)) ( exp( −γ / (1 + u(x,t)/T0) ) − e^{−γ} ) − (α/(cρ))[u(x,t) − um(x)],        (1.6)

where c denotes the specific heat of the rod (the heat energy that must be supplied to a unit mass of a substance to raise its temperature one unit) and ρ denotes the density of the rod.

In this chemical engineering problem, if the cooling medium is maintained at the temperature um = 0°, the operating equilibrium temperature u = 0 is desired. However, for some process parameters [20], the temperature u may grow. Thus
a control mechanism is needed to regulate the temperature to the equilibrium 0. A feasible control scheme is boundary control: the temperature u(0,t) at the left end x = 0 is fixed at zero and the temperature u(L,t) at the right end x = L is controlled, that is, u(L,t) = T(t). If the equation (1.6) is linearized at 0, we obtain a linear reaction-diffusion equation. Since the temperature distribution needs to be regulated, the controlled output is oc(x,t) = u(x,t). If the temperature u(x,t) can be measured, the measured output is om(x,t) = oc(x,t) = u(x,t). We then want to design a feedback control T(t) = F(u) such that u(x,t) tends to zero as t → ∞.
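The following sketch (an addition, with assumed parameter values) discretizes the linearized equation u_t = μu_xx + au with u(0,t) = 0 and boundary control u(L,t) = T(t) by explicit finite differences. With the control held at T ≡ 0 the solution grows whenever a > μπ²/L², which is exactly the instability that a stabilizing feedback T(t) = F(u), developed in Chapter 4, is meant to remove.

```python
import numpy as np

# Illustrative (assumed) parameters: the open loop is unstable since a > mu*pi^2/L^2
L, N, mu, a = 1.0, 50, 0.01, 0.5
dx = L / N
dt = 0.4 * dx**2 / mu                 # satisfies the explicit-scheme stability limit
x = np.linspace(0.0, L, N + 1)
u = np.sin(np.pi * x / L)             # initial temperature profile

def step(u, T):
    """One explicit step of u_t = mu*u_xx + a*u with u(0,t) = 0, u(L,t) = T."""
    v = u.copy()
    v[1:-1] = u[1:-1] + dt * (mu * (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2 + a * u[1:-1])
    v[0], v[-1] = 0.0, T
    return v

t, t_final = 0.0, 10.0
while t < t_final:
    u = step(u, T=0.0)                # no control applied: T(t) = 0
    t += dt

# The first mode grows roughly like exp((a - mu*pi^2/L^2) * t) ~ exp(4) here,
# which is why a stabilizing feedback T(t) = F(u) is needed.
print("max |u| at t = 10:", np.abs(u).max())
```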
1.3 Control of String Vibration

Another important class of partial differential equations consists of the linear wave equations

    ∂²u/∂t² = c²∇²u,

where the positive constant c (m/s) is a wave speed. We can pose the same control problems as in the case of the reaction-convection-diffusion equations. A typical physical example of the wave equation is the vertical vibration of a tightly stretched string. In this problem, u = u(x,t) denotes the vertical displacement of the string from its equilibrium. If we are interested in the manner of vibration of the string, then the controlled output is the displacement

    oc(x,t) = u(x,t).                                               (1.7)

The measured output depends on the specific physical problem. For instance, if the velocity can be physically measured, then the measured output is

    om(x,t) = ∂u/∂t (x,t).

If we control the string at its ends, we then have a boundary control problem

    u(0,t) = 0,   ∂u/∂x (L,t) = f(t),                               (1.8)
where f is a control force. In this case, the measured output could be the velocity at x = L,

    om(t) = ∂u/∂t (L,t).

In a real problem, the string may be expected to stay at a certain position profile r = r(x), such as the equilibrium r(x) ≡ 0. Thus we need to find a feedback control f = F(om) such that u(x,t) converges to r as t → ∞.
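As a preview of what such a feedback can achieve, the following sketch (illustrative choices throughout; the gain, discretization, and boundary scheme are assumptions, and the systematic treatment is in Chapters 5 and 6) simulates the string with the boundary control (1.8) chosen as the velocity feedback f(t) = −κ ∂u/∂t(L,t) and shows the vibration energy decaying.

```python
import numpy as np

# Illustrative (assumed) parameters and feedback gain
L, c, N = 1.0, 1.0, 200
kappa = 1.0 / c                      # gain of the boundary velocity feedback
dx = L / N
dt = 0.5 * dx / c                    # CFL-stable time step
x = np.linspace(0.0, L, N + 1)

u_old = np.sin(np.pi * x / L)        # initial displacement, zero initial velocity
u = u_old.copy()

def energy(u_now, u_prev):
    v = (u_now - u_prev) / dt        # approximate velocity
    return 0.5 * dx * (np.sum(v**2) + c**2 * np.sum((np.diff(u_now) / dx) ** 2))

E0 = energy(u, u_old)
r2 = (c * dt / dx) ** 2
for _ in range(4000):
    u_new = np.empty_like(u)
    u_new[1:-1] = 2*u[1:-1] - u_old[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
    u_new[0] = 0.0                                   # fixed end u(0,t) = 0
    # feedback (1.8): u_x(L,t) = f(t) = -kappa*u_t(L,t), i.e. u_t(L) = -u_x(L)/kappa
    u_new[-1] = u[-1] - (dt / kappa) * (u[-1] - u[-2]) / dx
    u_old, u = u, u_new

print("energy: initial", E0, "-> final", energy(u, u_old))
```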
Chapter 2
Elementary Functional Analysis
Elementary functional analysis, such as Hilbert spaces, Sobolev spaces, and linear operators, is collected in this chapter for the reader's convenience. Since the elementary stabilization results are presented without using functional analysis, readers may skip this chapter and come back when needed.

In this book, N denotes the set of all nonnegative natural numbers, C denotes the set of complex numbers, Rⁿ denotes the n-dimensional Euclidean space, and R = R¹ denotes the real line. Points in Rⁿ will be denoted by x = (x1, ..., xn), and the norm of x is defined by

    ‖x‖ = ( ∑_{i=1}^n xi² )^{1/2}.

The inner product of x and y is defined by

    (x, y) = ∑_{i=1}^n xi yi.
2.1 Banach Spaces and Hilbert Spaces

To introduce normed linear spaces, we examine the properties of the absolute value |x| of a real number x:

(1) |x| ≥ 0 for any x ∈ R, and |x| = 0 if and only if x = 0;
(2) |ax| = |a||x| for any a, x ∈ R;
(3) |x + y| ≤ |x| + |y| for any x, y ∈ R.

The absolute value plays a crucial role in calculus. Without it, we could not describe how close two numbers are and then we could not introduce the concept of limit.

Definition 2.1. Let X be a linear space over F (F = R or C). A norm on X, denoted by ‖·‖, is a map from X to R that satisfies the following conditions:

(1) ‖x‖ ≥ 0 for any x ∈ X, and ‖x‖ = 0 if and only if x = 0;
(2) ‖ax‖ = |a|‖x‖ for any a ∈ F and x ∈ X;
(3) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for any x, y ∈ X (the triangle inequality).

The space X endowed with the norm ‖·‖ is called a normed linear space.

According to the definition, R endowed with the norm |·|, the absolute value, is a normed linear space. The verification of condition (3) is usually difficult and needs the following Cauchy-Schwarz inequality:
    ( ∑_{i=1}^n xi yi )² ≤ ( ∑_{i=1}^n xi² ) ( ∑_{i=1}^n yi² )   for any x, y ∈ Rⁿ.        (2.1)

To prove this inequality, we consider the following nonnegative quantity:

    ∑_{i=1}^n (xi − ε yi)² ≥ 0

for any constant ε. Expanding the perfect square, we can readily derive that

    2ε ∑_{i=1}^n xi yi ≤ ∑_{i=1}^n xi² + ε² ∑_{i=1}^n yi².        (2.2)

If either x = 0 or y = 0, it is clear that (2.1) holds. So we can assume that x ≠ 0 and y ≠ 0. In this case, (2.1) can be derived by taking ε = (x, y)/‖y‖² in (2.2).
Example 2.1. The normed linear space Rⁿ. The linear space Rⁿ endowed with the Euclidean norm

    ‖x‖ = ( ∑_{i=1}^n xi² )^{1/2}                                   (2.3)

is a normed linear space. In fact, the conditions (1), (2), and (3) of Definition 2.1 are satisfied:

(1) It is clear that ∑_{i=1}^n xi² ≥ 0 and ∑_{i=1}^n xi² = 0 if and only if xi = 0 (i = 1, 2, ..., n), i.e., x = 0.
(2) For any number a ∈ R and any vector x ∈ Rⁿ, we have

    ‖ax‖ = ( ∑_{i=1}^n (axi)² )^{1/2} = |a| ( ∑_{i=1}^n xi² )^{1/2} = |a|‖x‖.

(3) Using the Cauchy-Schwarz inequality, we derive that

    ∑_{i=1}^n (xi + yi)² = ∑_{i=1}^n xi² + 2 ∑_{i=1}^n xi yi + ∑_{i=1}^n yi²
                         ≤ ‖x‖² + 2‖x‖‖y‖ + ‖y‖² = (‖x‖ + ‖y‖)².
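As a quick sanity check (added here, not part of the text), the following lines verify the Cauchy-Schwarz inequality (2.1) and the resulting triangle inequality for the Euclidean norm (2.3) on randomly chosen vectors in Rⁿ.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)

# Cauchy-Schwarz inequality (2.1)
print("Cauchy-Schwarz:", np.dot(x, y) ** 2 <= np.dot(x, x) * np.dot(y, y))

# Triangle inequality for the Euclidean norm (2.3)
print("Triangle inequality:",
      np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y))
```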
Example 2.2. The function space L^p = L^p(Ω). Let Ω be an open set in Rⁿ. For p ∈ [1, ∞), let L^p = L^p(Ω) denote the vector space of all functions u defined in Ω such that

    ‖u‖_{L^p} = ( ∫_Ω |u|^p dV )^{1/p} < ∞.
The integral should be understood in the sense of the Lebesgue integral, but it is acceptable to treat it as the integral of calculus for readers who do not have a background in advanced analysis. We show that L^p endowed with the above norm is a normed linear space. It is easy to prove that ‖·‖_{L^p} satisfies conditions (1) and (2) of Definition 2.1, but condition (3) is difficult to prove. We need Young's inequality and Hölder's inequality.

Young's inequality. Let 1 < p, q < ∞, 1/p + 1/q = 1. Then

    ab ≤ a^p/p + b^q/q   for any a, b ≥ 0.                          (2.4)

We do not give a detailed proof of the inequality, but provide an outline. If either a = 0 or b = 0, then (2.4) holds. Thus we can assume that a ≠ 0. We divide the inequality by a^p to obtain

    b/a^{p−1} ≤ 1/p + (1/q)(b/a^{p−1})^q.

Here we have used that p = (p − 1)q. This motivates us to consider the function f(x) = 1/p + (1/q)x^q − x. Then the proof of Young's inequality is reduced to showing that the minimum of f(x) over [0, ∞) is nonnegative.

Hölder's inequality. Let 1 < p, q < ∞, 1/p + 1/q = 1. If f ∈ L^p(Ω) and g ∈ L^q(Ω), we have

    ∫_Ω |f g| dV ≤ ‖f‖_{L^p} ‖g‖_{L^q}.                             (2.5)

Proof. If either ‖f‖_{L^p} = 0 or ‖g‖_{L^q} = 0, then (2.5) holds. Thus we can assume that ‖f‖_{L^p} ≠ 0 and ‖g‖_{L^q} ≠ 0. Defining f1 = f/‖f‖_{L^p} and g1 = g/‖g‖_{L^q}, we have ‖f1‖_{L^p} = ‖g1‖_{L^q} = 1. It then follows from Young's inequality (2.4) that

    ∫_Ω |f1 g1| dV ≤ (1/p) ∫_Ω |f1|^p dV + (1/q) ∫_Ω |g1|^q dV = 1/p + 1/q = 1,

which implies (2.5).
Once we have Hölder's inequality, we are ready to prove the triangle inequality (3) of Definition 2.1, which, in this case, is called Minkowski's inequality.

Minkowski's inequality. Let 1 ≤ p < ∞. If f, g ∈ L^p(Ω), we have

    ‖f + g‖_{L^p} ≤ ‖f‖_{L^p} + ‖g‖_{L^p}.                          (2.6)

Proof. It follows from Hölder's inequality that

    ∫_Ω |f + g|^p dV ≤ ∫_Ω |f + g|^{p−1}|f| dV + ∫_Ω |f + g|^{p−1}|g| dV
                     ≤ ( ∫_Ω |f + g|^{(p−1)q} dV )^{1/q} ( ∫_Ω |f|^p dV )^{1/p}
                       + ( ∫_Ω |f + g|^{(p−1)q} dV )^{1/q} ( ∫_Ω |g|^p dV )^{1/p}
                     = ( ∫_Ω |f + g|^p dV )^{1/q} ( ‖f‖_{L^p} + ‖g‖_{L^p} ).

Dividing this inequality by ( ∫_Ω |f + g|^p dV )^{1/q} gives (2.6).
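For readers who want to see these inequalities in action, the following discretized check (an addition; the exponent, interval, and sample functions are arbitrary illustrative choices) verifies Hölder's inequality (2.5) and Minkowski's inequality (2.6) by approximating the integrals on Ω = (0, 1) with Riemann sums.

```python
import numpy as np

p = 3.0
q = p / (p - 1.0)                     # conjugate exponent, 1/p + 1/q = 1
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
f = np.sin(3 * np.pi * x)             # illustrative functions on Omega = (0, 1)
g = np.exp(-x)

def Lp_norm(h, r):
    return (np.sum(np.abs(h) ** r) * dx) ** (1.0 / r)

print("Hölder (2.5):   ", np.sum(np.abs(f * g)) * dx <= Lp_norm(f, p) * Lp_norm(g, q))
print("Minkowski (2.6):", Lp_norm(f + g, p) <= Lp_norm(f, p) + Lp_norm(g, p))
```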
Before we introduce the concept of inner product space, we look at the inner product in Cⁿ,

    (x, y) = ∑_{i=1}^n xi ȳi,

where ¯ denotes the complex conjugate. The inner product has the following properties:

(1) (x, x) ≥ 0 for any x ∈ Cⁿ, and (x, x) = 0 if and only if x = 0;
(2) (x, y) = \overline{(y, x)} for any x, y ∈ Cⁿ (the bar denoting the complex conjugate);
(3) (ax + by, z) = a(x, z) + b(y, z) for any x, y, z ∈ Cⁿ and any a, b ∈ C.

This inner product can be extended to a general linear space.

Definition 2.2. Let X be a linear space over F (F = R or C). An inner product on X, denoted by (·, ·), is a map from X × X to F that satisfies the following conditions:

(1) (x, x) ≥ 0 for any x ∈ X, and (x, x) = 0 if and only if x = 0;
(2) (x, y) = \overline{(y, x)} for any x, y ∈ X;
(3) (ax + by, z) = a(x, z) + b(y, z) for any x, y, z ∈ X and any a, b ∈ F.

The space X endowed with the inner product (·, ·) is called an inner product space.

Example 2.3. The inner product space L²(Ω). For any f, g ∈ L²(Ω), define

    (f, g) = ∫_Ω f(x) ḡ(x) dV.

We show that (·, ·) is an inner product on L²(Ω), and so L²(Ω) is an inner product space.
(1) It is clear that (f, f) = ∫_Ω f(x) f̄(x) dV ≥ 0 for any f ∈ L²(Ω). Moreover, (f, f) = ∫_Ω f(x) f̄(x) dV = ∫_Ω |f(x)|² dV = 0 if and only if f = 0.
(2) For any f, g ∈ L²(Ω), we have

    (f, g) = ∫_Ω f(x) ḡ(x) dV = \overline{ ∫_Ω g(x) f̄(x) dV } = \overline{(g, f)}.

(3) For any a, b ∈ C and any f, g, h ∈ L²(Ω), we have

    (af + bg, h) = ∫_Ω (af(x) + bg(x)) h̄(x) dV = a ∫_Ω f(x) h̄(x) dV + b ∫_Ω g(x) h̄(x) dV = a(f, h) + b(g, h).

The Cauchy-Schwarz inequality (2.1) can be easily generalized to inner product spaces:

    |(x, y)| ≤ √(x, x) √(y, y).                                     (2.7)

To prove it, we use the nonnegative quantity (x − εy, x − εy) ≥ 0 for any constant ε. Expanding the inner product, we can readily derive that

    (x, x) − ε̄(x, y) − ε \overline{(x, y)} + |ε|²(y, y) ≥ 0,

and then

    ε̄(x, y) + ε \overline{(x, y)} ≤ (x, x) + |ε|²(y, y).            (2.8)

If either (x, x) = 0 or (y, y) = 0, it is clear that (2.7) holds. So we can assume that both are nonzero. In this case, (2.7) can be derived by taking ε = (x, y)/(y, y) in (2.8).

Using the Cauchy-Schwarz inequality, we can prove the following proposition.

Proposition 2.1. Let X be an inner product space. Then the map defined by ‖x‖ = √(x, x) is a norm on X.

The proof of this proposition is left as an exercise. Once we define a norm in a linear space, we can introduce limit and convergence in the space.

Definition 2.3. Let X be a normed linear space. We say that a sequence {xn} strongly converges to x ∈ X if lim_{n→∞} ‖xn − x‖ = 0.
Definition 2.4. A sequence {xn} in a normed linear space X is said to be a Cauchy sequence if for any ε > 0 there exists an integer N such that

    ‖xn − xm‖ ≤ ε   for all n, m ≥ N.

Definition 2.5. A normed linear space X is said to be complete if any Cauchy sequence {xn} strongly converges to an x ∈ X. A complete normed linear space is called a Banach space.

Definition 2.6. Let X be an inner product space. If X under the norm ‖x‖ = √(x, x) is complete, it is called a Hilbert space.

We can show that the normed linear spaces Rⁿ and L²(Ω) are Hilbert spaces and that L^p is a Banach space. The proofs of these results require advanced mathematical analysis, so we omit them and refer to the books of functional analysis [1, 34, 35, 105].

In the space Rⁿ, there is a basis: e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, ..., 0, 1). We now extend the basis concept to Hilbert spaces.

Definition 2.7. Let H be a Hilbert space.

(1) A set {ei} ⊂ H is orthogonal if (ei, ej) = 0 for i ≠ j.
(2) A set {ei} ⊂ H is orthonormal if (ei, ej) = δij, that is, (ei, ej) = 1 if i = j and 0 if i ≠ j.
(3) An orthonormal set {ei} ⊂ H is a basis for H if

    x = ∑_{i=1}^∞ (x, ei) ei

for every x ∈ H. The equation means that

    x = lim_{n→∞} ∑_{i=1}^n (x, ei) ei.

It is easy to verify that the set

    { 1/√(2π), (1/√π) sin x, (1/√π) cos x, ..., (1/√π) sin(nx), (1/√π) cos(nx), ... }

is orthonormal in L²(0, 2π). In fact, it is also a basis, but the proof is difficult and can be found in a book on functional analysis.
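As an illustration (added here; the sample function and quadrature grid are assumptions, not part of the text), the following sketch approximates the expansion coefficients (f, ei) of a function in L²(0, 2π) with respect to this orthonormal set by numerical quadrature and checks that the partial sums ∑(f, ei)ei approach f in the L² sense.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 4001)
dt = t[1] - t[0]
f = np.where(t < np.pi, 1.0, -1.0)          # an illustrative square wave in L^2(0, 2*pi)

def inner(u, v):
    return np.sum(u * v) * dt               # quadrature approximation of the L^2 inner product

# Partial sum of the expansion with respect to the orthonormal trigonometric set
e0 = np.ones_like(t) / np.sqrt(2 * np.pi)
approx = inner(f, e0) * e0
for n in range(1, 40):
    s = np.sin(n * t) / np.sqrt(np.pi)
    c = np.cos(n * t) / np.sqrt(np.pi)
    approx += inner(f, s) * s + inner(f, c) * c

# The L^2 error of the partial sum decreases as more basis functions are used
print("L2 error with 40 modes:", np.sqrt(inner(f - approx, f - approx)))
```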
Exercises 2.1

1. Let C[a, b] be the linear space of all continuous functions on [a, b]. Define

       ‖f‖_C = max_{a≤x≤b} |f(x)|

   for any f ∈ C[a, b]. Show that ‖·‖_C is a norm on C[a, b] and that C[a, b] endowed with this norm is a Banach space.
2. Let X be an inner product space with the inner product (·, ·). Prove that the map defined by ‖x‖ = √(x, x) is a norm on X.
3. Prove Young's inequality (2.4).
4. Let

       l² = { (x1, x2, ..., xn, ...) | xn ∈ C, ∑_{n=1}^∞ |xn|² < ∞ }.

   Show that l² is a Hilbert space endowed with the inner product

       (x, y) = ∑_{n=1}^∞ xn ȳn.
2.2 Open and Closed Sets

We now extend the concept of open (closed) interval in R to normed linear spaces.

Definition 2.8. A subset G of a normed linear space X is said to be open if for any x ∈ G there exists δ > 0 such that {y ∈ X | ‖x − y‖ < δ} ⊂ G.

The open ball B(c, r) = {x ∈ Rⁿ | ‖x − c‖ < r} (r > 0) is an open set in Rⁿ (see Figure 2.1).

Definition 2.9. Let G be a subset of a normed linear space X and x ∈ X. x is a limit point of G if there exists a sequence {xn} ⊂ G with xn ≠ x such that xn → x as n → ∞.

Any point in the ball B(c, r) is a limit point of B(c, r). Moreover, any point on the surface S(c, r) = {x ∈ Rⁿ | ‖x − c‖ = r} of the ball is also a limit point of B(c, r). So a limit point of G does not necessarily belong to G.

Definition 2.10. A subset G of a normed linear space X is said to be closed if it contains all its limit points.

The closed ball B̄(c, r) = {x ∈ Rⁿ | ‖x − c‖ ≤ r} (r > 0) is a closed set in Rⁿ.

Theorem 2.1. A subset G of a normed linear space X is closed if and only if its complement X − G = {x ∈ X | x ∉ G} is open.
Fig. 2.1 An open ball.
The proof of this theorem is left as an exercise. The closed ball B̄(c, r) is closed in Rⁿ, so its complement Rⁿ − B̄(c, r) = {x ∈ Rⁿ | ‖x − c‖ > r} is open in Rⁿ.

Definition 2.11. Let G be a subset of a normed linear space X. The closure of G is the union of G with the set of all limit points of G. The closure of G is denoted by Ḡ.

The closure of the open ball B(c, r) is the closed ball B̄(c, r).

Definition 2.12. A subset G of a normed linear space X is dense in X if Ḡ = X.

The set of all rational numbers is dense in R. A set S is countable if there exists a one-to-one map from S to a subset of N, the set of all nonnegative integers. The set {1, 2, 6, 10}, the set N, and the set of all rational numbers are countable.

Definition 2.13. A normed linear space X is separable if it contains a countable dense subset.

R is separable since the set of all rational numbers is countable and dense in R.

Definition 2.14. A subset G of a normed linear space X is compact if for any family of open sets {Gλ, λ ∈ Λ} with G ⊂ ∪_{λ∈Λ} Gλ, there exist finitely many Gλ1, ..., Gλk in this family such that G ⊂ ∪_{i=1}^k Gλi. G is relatively compact if the closure Ḡ is compact.

Any closed interval [a, b] and the closed ball B̄(c, r) are compact. Any open interval (a, b) and open ball B(c, r) are not compact, but relatively compact.
Exercises 2.2

1. Prove that the open ball B(c, r) = {x ∈ Rⁿ | ‖x − c‖ < r} (r > 0) is an open set in Rⁿ.
2. Prove that the closed ball B̄(c, r) = {x ∈ Rⁿ | ‖x − c‖ ≤ r} (r > 0) is a closed set in Rⁿ.
3. Prove that a subset G of a normed linear space X is closed if and only if its complement X − G = {x ∈ X | x ∉ G} is open.
4. Let G be a subset of a normed linear space X. Show that (1) Ḡ is closed; (2) Ḡ is the smallest closed set containing G.
5. Prove that the closed interval [a, b] and the closed ball B̄(c, r) are compact.
6. Prove that the open interval (a, b) and the open ball B(c, r) are not compact, but relatively compact.
2.3 Linear Operators

An n × n matrix A can be thought of as an operator in Rⁿ and has the following linear property:

    A(ax + by) = aAx + bAy   for any x, y ∈ Rⁿ, a, b ∈ R.

A linear operator in a normed linear space is just an extension of the matrix to the normed space.

Definition 2.15. Let X be a Banach space. A subset S of X is bounded if there exists R > 0 such that ‖x‖ ≤ R for all x ∈ S.

Definition 2.16. Let X and Y be two normed linear spaces and let D(A) be a subspace of X. A map A : D(A) → Y is called a linear operator if

    A(ax + by) = aAx + bAy   for any x, y ∈ D(A), a, b ∈ F.

The set D(A) is called the domain of A. If D(A) = X and A maps bounded sets into bounded sets, that is, there exists a constant C > 0 such that

    ‖Ax‖_Y ≤ C‖x‖_X   for all x ∈ X,

then A is called a bounded (or continuous) operator.

Example 2.4. For any integer k ≥ 0, let C^k[a, b] be the linear space of all k-times continuously differentiable functions on [a, b]. Define

    ‖f‖_{C^k} = max_{a≤x≤b, 0≤i≤k} |f^{(i)}(x)|

for any f ∈ C^k[a, b]. Then C^k[a, b] is a Banach space. We write C[a, b] for C⁰[a, b]. Define the differential operator

    A = d²/dx²

in C[a, b] with the domain D(A) = C²[a, b]. It is clear that A is linear. We now show that it is unbounded. Consider the set S = {xⁿ/max{|a|ⁿ, |b|ⁿ} | n = 0, 1, 2, ...}. Since ‖xⁿ/max{|a|ⁿ, |b|ⁿ}‖_C = 1, the set S is bounded. But A(S) = {n(n − 1)xⁿ⁻²/max{|a|ⁿ, |b|ⁿ} | n = 2, 3, ...} is unbounded since

    ‖n(n − 1)xⁿ⁻²/max{|a|ⁿ, |b|ⁿ}‖_C = n(n − 1)/max{|a|², |b|²}.

So A is unbounded.

Let L(X, Y) be the set of all linear bounded operators from X to Y. For any a, b ∈ F and A, B ∈ L(X, Y), we define aA + bB as follows:

    (aA + bB)(x) = aAx + bBx   for any x ∈ X.

Then L(X, Y) is a linear space. Define

    ‖A‖ = sup_{x≠0} ‖Ax‖/‖x‖.

We can show that ‖·‖ is a norm on L(X, Y) and that L(X, Y) is a Banach space. If Y = F (R or C), any f ∈ L(X, F) is called a linear bounded functional (or linear continuous functional) on X. We denote X* = L(X, F) and call it the dual space of X.
Example 2.5. Let X = L²(0, b) with b > 0. For any function f ∈ L²(0, b), we define

    F(g) = ∫₀ᵇ f(x)g(x) dx,   g ∈ L²(0, b).

Then F is a linear bounded functional on L²(0, b). In fact, by Hölder's inequality, we have

    |F(g)| ≤ ∫₀ᵇ |f(x)g(x)| dx ≤ ( ∫₀ᵇ |f(x)|² dx )^{1/2} ( ∫₀ᵇ |g(x)|² dx )^{1/2} = ‖f‖_{L²} ‖g‖_{L²}.

Thus F maps any bounded set in L²(0, b) into a bounded set in C. This shows that F ∈ (L²(0, b))*. Since F is uniquely determined by f(x), we can identify F with f(x) and then say f ∈ (L²(0, b))*. In fact, we have (L²(0, b))* = L²(0, b), whose proof can be found in functional analysis books such as [34, 105].
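To connect the operator norm ‖A‖ = sup_{x≠0} ‖Ax‖/‖x‖ with something computable, the following sketch (added here; the matrix is an arbitrary example) estimates the norm of a matrix viewed as an operator on Rⁿ by sampling random directions and compares it with the exact value, which equals the largest singular value.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))           # an arbitrary matrix, viewed as an operator on R^3

# Estimate sup_{x != 0} ||Ax|| / ||x|| by sampling many random directions
ratios = []
for _ in range(20000):
    x = rng.normal(size=3)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))

print("sampled estimate of ||A||:", max(ratios))
print("largest singular value   :", np.linalg.norm(A, 2))   # the exact operator norm
```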
Exercises 2.3

1. Show that an n × m matrix is a bounded operator from Rᵐ to Rⁿ.
2. Let A be a linear bounded operator from X to Y. Prove that A is continuous, that is, if xn → x in X, then Axn → Ax in Y.
3. Let X and Y be two normed linear spaces. Define

       ‖A‖ = sup_{x≠0} ‖Ax‖/‖x‖.

   Show that ‖·‖ is a norm on L(X, Y).
4. Let k(x, y) be a continuous function on [a, b] × [a, b]. Define the operator K : C[a, b] → C[a, b] by

       (Kf)(x) = ∫ₐᵇ k(x, y) f(y) dy.

   Show that K is a linear continuous operator on C[a, b].
2.4 Sobolev Spaces

If α = (α1, ..., αn) is an n-tuple with αi ∈ N, i = 1, 2, ..., n, we call α a multi-index, and |α| = ∑_{i=1}^n αi the modulus of α. We shall denote by x^α the monomial x1^{α1} ··· xn^{αn}, which has degree |α|. Similarly, if Di = ∂/∂xi for 1 ≤ i ≤ n, then

    D^α = D1^{α1} ··· Dn^{αn}

denotes a differential operator of order |α|; D^{(0,...,0)}u = u.

In what follows, Ω denotes a bounded domain (open set) in Rⁿ and Γ denotes its boundary ∂Ω. Given a nonnegative integer k, we let C^k(Ω) denote the vector space of all functions u which, together with all their derivatives D^α u of order |α| ≤ k, are continuous in Ω. We abbreviate C⁰(Ω) = C(Ω). We also set

    C^∞(Ω) = ∩_{k=0}^∞ C^k(Ω).

Moreover, the subspaces C0(Ω) and C0^∞(Ω) consist of all those functions in C(Ω) and C^∞(Ω), respectively, which have compact support in Ω; that is,

    supp u = \overline{{x ∈ Ω | u(x) ≠ 0}} ⊂ Ω.

We sometimes use the notation D(Ω) for C0^∞(Ω).

We next define C^k(Ω̄) (C(Ω̄) for k = 0) as the space of all those functions u ∈ C^k(Ω) for which D^α u is bounded and uniformly continuous in Ω for 0 ≤ |α| ≤ k. C^k(Ω̄) is a Banach space with the norm given by

    ‖u‖_{C^k} = max_{0≤|α|≤k} sup_{x∈Ω} |D^α u(x)|.

We now define the regularity of the boundary of a domain Ω ⊂ Rⁿ.
Definition 2.17. The boundary ∂Ω of a domain Ω is said to be C^k if for each point x0 ∈ ∂Ω there exist r > 0 and a C^k function f : R^{n−1} → R such that, after relabeling and reorienting the coordinate axes if necessary, we have

    Ω ∩ B(x0, r) = {x ∈ B(x0, r) | xn > f(x1, x2, ..., x_{n−1})}.

∂Ω is said to be C^∞ if it is C^k for k = 1, 2, ....

Definition 2.18. For p ∈ [1, ∞) and a nonnegative integer m, we define

    W^{m,p}(Ω) = {u | D^α u ∈ L^p(Ω) for all |α| ≤ m}

with the following norm:

    ‖u‖_{W^{m,p}} = ( ∑_{0≤|α|≤m} ‖D^α u‖_{L^p}^p )^{1/p}.

The space W0^{m,p}(Ω) is defined to be the closure of C0^∞(Ω) in the space W^{m,p}(Ω). W^{−m,q}(Ω) = (W0^{m,p}(Ω))*, the dual space of W0^{m,p}(Ω), where q is the conjugate exponent to p, i.e., 1/p + 1/q = 1. These spaces are called Sobolev spaces over Ω. In what follows, we shall write W^{m,2}(Ω) as H^m(Ω).

Theorem 2.2. The Sobolev spaces have the following properties:

(1) W^{m,p}(Ω) is a Banach space.
(2) For p ∈ [1, ∞), W^{m,p}(Ω) is separable.
(3) H^m(Ω) is a Hilbert space with the inner product

    (u, v)_{H^m} = ∑_{0≤|α|≤m} (D^α u, D^α v)_{L²},

where (u, v)_{L²} = ∫_Ω u(x)v(x) dV is the inner product in L²(Ω).

The proof of this theorem is referred to [1, p. 45-47] or [35, Chapter 5].

Theorem 2.3. C0^∞(Ω) is dense in L²(Ω).

The proof of this theorem is omitted and referred to [35, Chapter 5].
Exercises 2.4

1. Define the differential operator A = d²/dx² in L²(a, b) with the domain D(A) = H²(a, b). Show that A is unbounded in L²(a, b).
2. Show that u(x) = ln(1 − ln|x|) is in H¹(B(0, 1)), where B(0, 1) is the unit ball in R² and |x| = √(x1² + x2²).
2.5 Eigenvalue Problems

In matrix theory, an eigenvalue of a matrix A is a complex number λ such that the linear system Ax = λx has nonzero solutions. This concept can be extended to differential equations. Consider the eigenvalue problem

    −d²ϕ/dx² = λϕ,                                                  (2.9)
    ϕ(0) = 0,   ϕ(L) = 0.                                           (2.10)

The homogeneous boundary condition ϕ(0) = ϕ(L) = 0 is called the Dirichlet boundary condition. An eigenvalue is a complex number λ such that (2.9)-(2.10) has nonzero solutions. The nonzero solutions are called eigenfunctions corresponding to the eigenvalue.

Theorem 2.4. The eigenvalue problem (2.9)-(2.10) has the eigenvalues

    λn = n²π²/L²,   n = 1, 2, ...,

and the corresponding eigenfunctions

    ϕn = sin(nπx/L),   n = 1, 2, ....
Proof. The auxiliary equation for the problem is m² = −λ. We now discuss three cases of λ.

Case 1: λ < 0. In this case, the auxiliary equation has distinct real roots m1 = √(−λ) and m2 = −√(−λ), and then the general solution of (2.9) is

    ϕ(x) = c1 e^{√(−λ) x} + c2 e^{−√(−λ) x}.

The boundary conditions imply that c1 = c2 = 0. So no nonzero solutions exist and therefore λ < 0 is not an eigenvalue.

Case 2: λ = 0. In this case, the auxiliary equation has the repeated real root m1 = m2 = 0 and then the general solution of (2.9) is

    ϕ = c1 + c2 x.

The boundary conditions imply that c1 = c2 = 0. So no nonzero solutions exist and then λ = 0 is not an eigenvalue.

Case 3: λ > 0. In this case, the auxiliary equation has the conjugate complex roots m1 = i√λ and m2 = −i√λ and then the general solution of (2.9) is

    ϕ = c1 cos(√λ x) + c2 sin(√λ x).
ϕ(0) = 0 implies that c1 = 0. ϕ(L) = 0 gives

    sin(√λ L) = 0.

So √λ L = nπ (n = 1, 2, ...) and we obtain the eigenvalues

    λn = n²π²/L²,   n = 1, 2, ...,

and the corresponding eigenfunctions

    ϕn = sin(nπx/L),   n = 1, 2, ....
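For readers who like to see these eigenvalues numerically (this check is an addition, not part of the text), a standard second-order finite-difference approximation of −d²/dx² with Dirichlet boundary conditions has eigenvalues close to n²π²/L²:

```python
import numpy as np

# Finite-difference check of Theorem 2.4 on (0, L) with Dirichlet boundary conditions
L, N = 1.0, 400
h = L / N
main = 2.0 * np.ones(N - 1) / h**2
off = -1.0 * np.ones(N - 2) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # discretized -d^2/dx^2

numerical = np.sort(np.linalg.eigvalsh(A))[:4]
exact = np.array([(n * np.pi / L) ** 2 for n in range(1, 5)])
print("numerical:", numerical)
print("exact    :", exact)
```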
Consider the higher-dimensional eigenvalue problem

    μ∇²ψ + ∇·(ψv) + aψ = λψ in Ω,                                   (2.11)
    ψ = 0 on Γ,                                                     (2.12)

where a(x) is a given function, v(x) is a given vector function, ∇² is the Laplace operator

    ∇²u = ∂²u/∂x1² + ··· + ∂²u/∂xn²,

∇ denotes the gradient operator

    ∇ = (∂/∂x1, ..., ∂/∂xn),

and ∇·(ψv) = ∂(ψv1)/∂x1 + ··· + ∂(ψvn)/∂xn is the divergence of ψv. The homogeneous boundary condition ψ|_Γ = 0 is called the Dirichlet boundary condition. This eigenvalue problem can be solved explicitly only for very simple geometries such as rectangles. In other situations, we have to solve it numerically.
Example 2.6. Assume that Ω = (0, L) × (0, H), v(x) = 0, and a is a constant. Then (2.11) becomes

    μ∇²ψ = (λ − a)ψ in Ω,                                           (2.13)
    ψ(0, y) = 0, ψ(L, y) = 0, ψ(x, 0) = 0, ψ(x, H) = 0.             (2.14)

Setting ψ = X(x)Y(y) and plugging it into (2.13), we deduce that

    μX''/X = ((λ − a)Y − μY'')/Y = ν,
    X(0) = 0, X(L) = 0, Y(0) = 0, Y(H) = 0.

We then have two eigenvalue problems

    μX'' = νX,   X(0) = 0, X(L) = 0,

and

    μY'' = (λ − a − ν)Y,   Y(0) = 0, Y(H) = 0.

These problems have the following eigenvalues and corresponding eigenfunctions:

    νn = −μ(nπ/L)²,   Xn = sin(nπx/L),   n = 1, 2, ...,
    λmn = a − μ(mπ/H)² − μ(nπ/L)²,   Ym = sin(mπy/H),   m = 1, 2, ....

Therefore (2.13)-(2.14) have the following eigenvalues and corresponding eigenfunctions:

    λmn = a − μ(mπ/H)² − μ(nπ/L)²,                                  (2.15)
    ψmn = sin(nπx/L) sin(mπy/H),   m, n = 1, 2, ....                (2.16)

Example 2.7. Assume that Ω = (0, L) × (0, H), v = (v1, v2) (v1, v2 constants), and a is a constant. Then (2.11) becomes
    μ∇²ψ + v1 ψx + v2 ψy = (λ − a)ψ in Ω,                           (2.17)
    ψ(0, y) = 0, ψ(L, y) = 0, ψ(x, 0) = 0, ψ(x, H) = 0.             (2.18)

As above, setting ψ = X(x)Y(y) gives two eigenvalue problems

    μXxx + v1 Xx = νX,   X(0) = 0, X(L) = 0,

and

    μYyy + v2 Yy = (λ − a − ν)Y,   Y(0) = 0, Y(H) = 0.

These problems have the following eigenvalues and corresponding eigenfunctions:

    νn = −μ( (nπ/L)² + v1²/(4μ²) ),   Xn = e^{−v1 x/(2μ)} sin(nπx/L),   n = 1, 2, ...,
    λmn = a − μ( (mπ/H)² + (nπ/L)² + (v1² + v2²)/(4μ²) ),   Ym = e^{−v2 y/(2μ)} sin(mπy/H),   m = 1, 2, ....

Therefore (2.17)-(2.18) have the following eigenvalues and corresponding eigenfunctions:

    λmn = a − μ( (mπ/H)² + (nπ/L)² + (v1² + v2²)/(4μ²) ),           (2.19)
    ψmn = e^{−(v1 x + v2 y)/(2μ)} sin(nπx/L) sin(mπy/H),   m, n = 1, 2, ....        (2.20)

In general cases, although we cannot solve (2.11)-(2.12) explicitly, we can prove the following theorem.

Theorem 2.5. Assume that v(x) = 0. Then the eigenvalues and corresponding eigenfunctions of (2.11)-(2.12) have the following properties:

(1) All the eigenvalues are real.
(2) There are an infinite number of eigenvalues λ1 > λ2 > ··· > λn > ··· with lim_{n→∞} λn = −∞.
(3) There may be many eigenfunctions corresponding to one eigenvalue.
(4) Eigenfunctions belonging to different eigenvalues λ1 and λ2 are orthogonal, that is,

    ∫_Ω ψ_{λ1} ψ_{λ2} dV = 0.

Furthermore, different eigenfunctions belonging to the same eigenvalue can be made orthogonal. Thus, we may assume that all eigenfunctions are orthogonal to each other.
(5) All the eigenfunctions {ψm}_{m=1}^∞ form a basis in L²(Ω), that is, any function f ∈ L²(Ω) can be represented by the series of the eigenfunctions

    f = ∑_{m=1}^∞ am ψm.

The series converges in the mean:
    lim_{n→∞} ∫_Ω | f − ∑_{m=1}^n am ψm |² dV = 0.

The proof of this theorem is referred to [31, Chapter 8] and [35, p. 335, Theorem 1]. We can easily verify that the eigenvalues and eigenfunctions (2.15)-(2.16) satisfy all the properties in the above theorem.

If v ≠ 0, the eigenvalue problem is complicated and not all properties in the above theorem are true. For instance, the eigenfunctions (2.20) are not orthogonal in L²((0, L) × (0, H)). But, if we change the usual inner product of L²((0, L) × (0, H)) to the following weighted inner product,
    (u, v)_w = ∫₀ᴸ ∫₀ᴴ e^{(v1 x + v2 y)/μ} u v dx dy,   u, v ∈ L²((0, L) × (0, H)),        (2.21)

then they are orthogonal. Unfortunately, for a general v, we do not know whether there exists a weighted inner product that makes the eigenfunctions orthogonal. To state a result in this case, we introduce the concept of direct sum.

Let X1 and X2 be two subspaces of a vector space X. The sum of these two subspaces, denoted by X1 + X2, is the set of all sums x1 + x2, where x1 ∈ X1 and x2 ∈ X2. If X1 ∩ X2 = {0}, the sum is called a direct sum and is denoted by X1 ⊕ X2. If X = X1 ⊕ X2, then we say X has a direct decomposition and X1 and X2 are complementary subspaces.

As a simple example of a direct sum, we consider X = R², X1 = {(x, x) | x ∈ R} (the line y = x), X2 = {(x, 2x) | x ∈ R} (the line y = 2x). Then R² = X1 ⊕ X2. In fact, if (x, y) ∈ X1 ∩ X2, then x = y = 2x, so x = 0 and y = x = 0. In addition, for any (x, y) ∈ R², let (x, y) = (x1, x1) + (x2, 2x2). This implies that x1 + x2 = x, x1 + 2x2 = y. Solving this system gives x1 = 2x − y and x2 = y − x. Thus (x, y) = (2x − y, 2x − y) + (y − x, 2(y − x)) with (2x − y, 2x − y) ∈ X1 and (y − x, 2(y − x)) ∈ X2.

Let λ1 > λ2 > ··· > λn > ··· with lim_{n→∞} λn = −∞ be the eigenvalues of (2.11) with v = 0 and let ψr1, ψr2, ..., ψr nr be nr normalized (that is, ∫_Ω |ψrs|² dV = 1) orthogonal eigenfunctions corresponding to the eigenvalue λr. Let N be a positive integer. We define

    HN = span{ψ11, ψ12, ..., ψ1 n1; ...; ψN1, ψN2, ..., ψN nN}
       = { ∑_{r=1}^N ∑_{s=1}^{nr} crs ψrs | crs are any numbers },

    H∞ = span{ψ_{N+1,1}, ψ_{N+1,2}, ..., ψ_{N+1,n_{N+1}}, ...}
       = { ∑_{r=N+1}^{N+M} ∑_{s=1}^{nr} crs ψrs | crs are any numbers and M > 0 is an integer }.
As a direct result of parts (4) and (5) of Theorem 2.5, we have the following orthogonal decomposition of L²(Ω).

Corollary 2.1. If v = 0, then L²(Ω) = HN ⊕ H∞. Furthermore, the decomposition is orthogonal, that is, every function uN ∈ HN is orthogonal to every function u∞ ∈ H∞.

For a matrix A, if λ is not an eigenvalue of A, then λI − A is invertible. This concept can be extended to a linear operator in a normed space. If A is a linear, not necessarily bounded, operator in X, the resolvent set ρ(A) of A is the set of all complex numbers λ for which λI − A is invertible, i.e., (λI − A)⁻¹ is a bounded linear operator in X. The family R(λ, A) = (λI − A)⁻¹, λ ∈ ρ(A), of bounded linear operators is called the resolvent of A. The complementary set σ(A) of ρ(A) in the complex plane C is called the spectrum of A. For a matrix A, the spectrum consists of all its eigenvalues.

Definition 2.19. Let X and Y be two Banach spaces and D(A) be a linear subspace of X. Let A : D(A) → Y be a linear operator.

(1) A is densely defined if D(A) is dense in X.
(2) A is closed if the graph of A,

    G(A) = {(x, y) ∈ X × Y | x ∈ D(A), y = Ax},

is closed in X × Y. Alternatively, A is closed if whenever xn ∈ D(A), n = 1, 2, ..., and

    lim_{n→∞} xn = x,   lim_{n→∞} Axn = y,

it follows that x ∈ D(A) and Ax = y.

Consider the differential operator A = d²/dx² in L²(a, b) with the domain D(A) = H²(a, b) ∩ H₀¹(a, b). By Theorem 2.3, we deduce that the operator is densely defined. We now show that it is closed. Let un ∈ H²(a, b) ∩ H₀¹(a, b) with un → u and un'' → v as n → ∞. We need to prove that v = u''. Since un → u and un'' → v, the limit and differentiation can be exchanged, so we have

    v = lim_{n→∞} un'' = d²/dx² ( lim_{n→∞} un ) = u''.

Definition 2.20. Let X1 be a subspace of a Banach space X and A : D(A) ⊂ X → X be a linear operator. If A(X1 ∩ D(A)) ⊂ X1, then X1 is said to be invariant under A.

Theorem 2.6. Assume that the spectrum σ(A) of a closed operator A on a Banach space X is the union of two parts, σr(A) and σl(A), such that a rectifiable, simple closed curve Γ can be drawn so as to enclose an open set containing σr(A) in its interior and σl(A) in its exterior. Then the operator defined by
    Px = (1/(2πi)) ∫_Γ (λI − A)⁻¹ x dλ                              (2.22)

is a projection (that is, P² = P), where the integral is in the counterclockwise direction along Γ. This projection induces a decomposition of the space X,

    X = Xr ⊕ Xl,   Xr = PX,   Xl = (I − P)X.

Moreover, Xr and Xl are invariant under A, the restriction Ar of A to Xr is a bounded operator on Xr with the spectrum σ(Ar) = σr(A), and the restriction Al of A to Xl has the spectrum σ(Al) = σl(A).

The proof of this theorem needs advanced complex analysis and thus is referred to [24, p. 71, Lemma 2.5.7] and [44, p. 178, Theorem 6.17]. Using this theorem, we can show that L²(Ω) can be decomposed by using the eigenfunctions of the operator A defined on L²(Ω) by

    Aψ = μ∇²ψ + ∇·(ψv) + a(x)ψ                                      (2.23)

with the domain D(A) = H²(Ω) ∩ H₀¹(Ω).

Theorem 2.7. The eigenvalues and corresponding eigenfunctions of the operator (2.23) have the following properties:

(1) There are an infinite number of eigenvalues {λn}, n = 1, 2, ..., with lim_{n→∞} |λn| = ∞. The number of eigenvalues λn with Re λn > r is finite, where r is a real number.
(2) There may be many eigenfunctions corresponding to one eigenvalue.
(3) L²(Ω) = HN ⊕ H∞, where HN is a finite dimensional space containing all eigenfunctions corresponding to the eigenvalues λ1, λ2, ..., λN and H∞ is an infinite dimensional space containing all eigenfunctions corresponding to the eigenvalues λ_{N+1}, λ_{N+2}, .... Moreover, HN and H∞ are invariant under the operator A defined by (2.23).

The proof of this theorem is referred to [35, p. 305, Theorem 5] and [44, p. 178, Theorem 6.17]. If v ≠ 0, the decomposition L²(Ω) = HN ⊕ H∞ is usually not orthogonal.

The following divergence theorem will be frequently used. For any continuously differentiable vector field F, we have
    ∫_Ω ∇·F dV = ∫_Γ F·n dS,                                        (2.24)

where n denotes the outward normal of the boundary of Ω. For a scalar function G, we have ∇·(GF) = F·∇G + G∇·F. It then follows from (2.24) that

    ∫_Ω (F·∇G + G∇·F) dV = ∫_Ω ∇·(GF) dV = ∫_Γ GF·n dS,

and then

    ∫_Ω F·∇G dV = ∫_Γ GF·n dS − ∫_Ω G∇·F dV.                        (2.25)

The equation (2.25) is called the formula of integration by parts. Applying integration by parts, we obtain Green's identity

    ∫_Ω v∇²u dV = ∫_Γ v (∂u/∂n) dS − ∫_Ω ∇v·∇u dV.                  (2.26)
The following Poincaré inequality will play a key role in estimating solutions of partial differential equations.

Lemma 2.1. (Poincaré's inequality) Let λ1 > 0 be the smallest eigenvalue of

    −∇²ψ = λψ in Ω,                                                 (2.27)
    ψ = 0 on Γ.                                                     (2.28)

Then

    λ1 ∫_Ω |f(x)|² dV ≤ ∫_Ω |∇f(x)|² dV                             (2.29)

for all f ∈ H₀¹(Ω).
Proof. Let 0 < λ1 ≤ λ2 ≤ · · · ≤ λn ≤ · · · with lim λn = ∞ be the eigenvalues of n→∞ (2.27)-(2.28) and let ψ1 ,ψ2 , · · · , ψn , · · · be the corresponding normalized orthogonal eigenfunctions (that is, Ω |ψi |2 dV = 1). By Theorem 2.5, we have ∞
f=
∑ a m ψm ,
m=1
where am (m = 1, 2, · · · ) are constants. Integration by parts gives Ω
|∇ f |2 dV = − =− = =
Ω
f ∇2 f dV ∞
Ω m=1 ∞
∑
λm a2m
m=1 ∞
∑ a2m
m=1
= λ1
m=1 ∞
∑ am ψm ∑ am λm ψm dV
Ω m=1 ∞
≥ λ1
∞
∑ am ψm ∑ am ∇2 ψm dV
Ω
| f |2 dV.
m=1
2.6 Semigroups
29
Exercises 2.5 1. Solve the eigenvalue problem X = λ X, X(0) = 0, X (L) = 0. 2. Solve the eigenvalue problem ∇2 ψ = λ ψ , ∂ψ ∂ψ ∂ψ ∂ψ (0, y) = (L, y) = 0, (x, 0) = (x, H) = 0. ∂x ∂x ∂y ∂y 3. Show that the eigenfunctions of the problem
∂ψ ∂ψ + v2 = λ ψ in Ω , ∂x ∂y ψ (0, y) = 0, ψ (L, y) = 0, ψ (x, 0) = 0, ψ (x, H) = 0,
μ ∇2 ψ + v1
are orthogonal with the weighted inner product (2.21), where v1 , v2 are constants. 4. Prove Green’s identity Ω
v∇2 udV =
Γ
v
∂u dS − ∂n
Ω
∇v · ∇udV.
2.6 Semigroups To get an idea of what a semigroup is and how it can be used to deal with differential equations, we consider the linear system dx = Ax, dt
x(0) = x0 ,
(2.30)
where x is a state vector, x0 is an initial state, and A is an n × n constant matrix. The solution of this system is given by x(t) = eAt x0 , where eAt =
∞
An t n . n=0 n!
∑
(2.31)
The family of the matrices eAt has the following important properties, the so-called semigroup properties,
30
2 Elementary Functional Analysis
eA·0 = I,
eA(t+s) = eAt eAs .
(2.32)
Such a family of matrices is called a semigroup. A partial differential equation can be formulated into a problem similar to (2.30). Consider the wave equation
∂ 2u ∂ 2u = c2 2 in (0, L) × (0, ∞), 2 ∂t ∂x u(0,t) = 0, u(L,t) = 0, t ≥ 0, ∂u (x, 0) = u1 (x), u(x, 0) = u0 (x), ∂t
(2.33) (2.34) x ∈ (0, L).
We define an operator A on the Hilbert space H = H01 (0, 1) × L2 (0, 1) by 0 I A= d2 c2 dx 2 0
(2.35)
(2.36)
with the domain D(A) = (H 2 (0, 1) ∩ H01 (0, 1)) × H01(0, 1), where I denotes the identity operator. Set v = ∂∂ ut and u=
u0 u . , u0 = v u1
Then the problem (2.33)-(2.35) can be formulated as an abstract differential equation du = Au, u(0) = u0 (2.37) dt on H . We call the problem (2.37) as an abstract Cauchy problem. We could guess that the solution of (2.37) should be given by u(t) = eAt u0 . However, the definition (2.31) of the matrix exponential no longer makes a sense in this case and we have to explain what eAt means. This leads to the concept of semigroup in a Banach space.
2.6.1 Definitions and Properties Definition 2.21. Let X be a Banach space. A one-parameter family T (t), 0 ≤ t < ∞, of bounded linear operators from X into X is called a semigroup of bounded linear operators on X if (1) T (0) = I, (I is the identity operator on X); (2) T (t + s) = T (t)T (s) for every t, s ≥ 0 (the semigroup property).
2.6 Semigroups
31
Furthermore, if the semigroup T (t) also satisfies lim T (t)x = x
t→0+
for every x ∈ X ,
(2.38)
then T (t) is called a strongly continuous semigroup of bounded linear operators on X. A strongly continuous semigroup of bounded linear operators on X will be called a semigroup of class C0 or simply a C0 semigroup. It is clear that the definition of semigroup is an extension of eAt satisfying (2.32), where A is a matrix. For the matrix A, we have eAt x − x eAt x − eA0x d + eAt x
= lim =
. t t dt t=0 t→0+ t→0+
Ax = lim
This is a very important relationship between the matrix A and the semigroup eAt . We can say that the semigroup eAt is generated by A. This relationship can be extended to the case of C0 semigroup. The linear operator A defined by T (t)x − x D(A) = x ∈ X : lim exists t t→0+ and
T (t)x − x d + T (t)x
=
t dt t=0 t→0+
Ax = lim
for x ∈ D(A)
(2.39)
(2.40)
is called the infinitesimal generator of the semigroup T (t), D(A) is the domain of A. We symbolically write T (t) = eAt . Semigroups have the following important properties. Theorem 2.8. Let T (t) be a C0 semigroup and A its infinitesimal generator. Then (1) There exist constants ω ≥ 0 and M ≥ 1 such that T (t) ≤ Meω t ,
for 0 ≤ t < ∞.
(2.41)
(2) For every x ∈ X, t → T (t)x is a continuous function from [0, ∞) into X . (3) For x ∈ D(A), T (t)x ∈ D(A) and d T (t)x = AT (t)x = T (t)Ax. dt
(2.42)
(4) D(A), the domain of A, is dense in X and A is a closed linear operator. The proof is omitted and referred to [91, p.4, Theorems 2.2 and 2.4] because advanced results such as the uniform boundedness theorem [95, p.43, Theorem 2.5] from functional analysis are required. The property (2.42) shows that u(t) = T (t)x is the solution of the equation
32
2 Elementary Functional Analysis
du = Au. dt This is why the semigroup theory is a powerful tool to study partial differential equations. The domain of C0 semigroups is the real nonnegative axis. It is natural to extend the domain of the parameter to regions in the complex plane that include the real nonnegative axis. It is clear that the domain must be an additive semigroup of complex numbers to preserve the semigroup structure, such as angles around the positive real axis. Definition 2.22. Let ZΔ = {z ∈ C | φ1 < arg(z) < φ2 , φ1 < 0 < φ2 }. Suppose T (z) is a bounded linear operator on a Banach space X for z ∈ ZΔ . The family T (z), z ∈ ZΔ , is an analytic semigroup in ZΔ if (1) z → T (z) is analytic in ZΔ , that is, the derivative (2) T (0) = I and lim T (z)x = x for every x ∈ X;
dT dz
exists for every z ∈ ZΔ ;
z→0 z∈ZΔ
(3) T (z1 + z2 ) = T (z1 )T (z2 ) for z1 , z2 ∈ ZΔ . Clearly, the restriction of an analytic semigroup to the real axis is a C0 semigroup.
2.6.2 Formulas for Semigroup Representation If A is a matrix, then the semigroup generated by A is represented by the series (2.31). This is also true for a bounded linear operator. Unfortunately, if A is unbounded, the series no longer makes a sense. Then a question arises: Is there a formula to represent the semigroup in terms of A if A is unbounded? To answer this question, we look at Cauchy’s integral formula. If f (λ ) is analytic in and on a simple and closed curve Γ , then f (λ0 ) =
1 2π i
Γ
f (λ ) dλ , λ − λ0
where λ0 is inside Γ . Letting λ0 = a and f (λ ) = eλ t , we obtain eat =
1 2π i
Γ
1 eλ t dλ = λ −a 2π i
Γ
eλ t (λ − a)−1d λ .
This representation of the semigroup eat generated by the number a is more useful in the sense that it can be generalized to unbounded operators. Theorem 2.9. Let A be the infinitesimal generator of a C0 semigroup T (t) = eAt on a Banach space X satisfying eAt ≤ Meω t . Let γ > max(0, ω ). If x ∈ D(A2 ), then 1 e x= 2π i At
γ +i∞ γ −i∞
eλ t (λ I − A)−1 xd λ
(2.43)
2.6 Semigroups
33
and for every δ > 0, the integral converges uniformly in t for t ∈ [δ , 1/δ ]. The proof is omitted and referred to [91, p.29, Corollary 7.5] since advanced complex analysis is required. The integration in (2.43) is along the straight line x = γ in the complex plane such that Im(λ ) increases from −∞ to ∞. The next question is: Can we use the semigroup eAt to represent the resolvent R(λ , A) = (λ I − A)−1 of A? To answer this question, we look at the equation du = Au, dt
u(0) = u0 ,
(2.44)
where A is the infinitesimal generator of a C0 semigroup eAt on a Banach space X satisfying eAt ≤ Meω t . By Theorem 2.8, the solution of (2.44) is given by u(t) = eAt u0 . Multiplying (2.44) by e−λ t and integrating from 0 to ∞, we obtain ∞ 0
e− λ t
du dt = dt
∞
e−λ t Au(t)dt.
0
If Re(λ ) > ω , we can integrate by parts on the left hand side to obtain
λ
∞ 0
e−λ t eAt u0 dt − u0 = A
∞ 0
e−λ t eAt u0 dt.
It therefore follows that (λ I − A)
∞ 0
e−λ t eAt u0 dt = u0 ,
and then R(λ , A)u0 =
∞ 0
e−λ t eAt u0 dt.
Hence R(λ , A)u0 is the Laplace transform of the semigroup eAt . Note that this equation is really true if A is a number. If Re(λ ) > ω , we deduce that e−λ t eAt u0 ≤ Me(ω −Re(λ ))t u0 . It then follows that ∞ −λ t At ≤M e e u dt 0 0
0
∞
e(ω −Re(λ ))t u0 dt =
Mu0 . Re(λ ) − ω
This is not a rigorous mathematical reasoning because the exchange between the integral and the operator A, 0∞ e−λ t AeAt u0 dt = A 0∞ e−λ t eAt u0 dt, is not guaranteed if A is unbounded. For a rigorous mathematical proof, we refer to [24, p.24, Lemma 2.1.11] or [91, p.8, Theorem 3.1]. Theorem 2.10. Let eAt be a C0 semigroup with the infinitesimal generator A on a Banach space X. Assume that there exist constants M and ω such that eAt ≤ Meω t . If Re(λ ) > ω , then λ ∈ ρ (A) and for all u0 ∈ X, the following result holds:
34
2 Elementary Functional Analysis
R(λ , A)u0 =
∞ 0
e−λ t eAt u0 dt and R(λ , A) ≤
M . Re(λ ) − ω
(2.45)
2.6.3 Tests for Semigroup Generation In solving a partial differential equation, usually a linear and unbounded operator such as the operator (2.36) can be defined through the equation. Therefore, to make the semigroup theory useful, it is important to establish tests with which we can determine what kind of operators generate a semigroup. Theorem 2.11. (Hille-Yosida) A linear (unbounded) operator A is the infinitesimal generator of a C0 semigroup eAt on a Banach space X satisfying eAt ≤ Meω t if and only if (1) A is closed and D(A) is dense in X. (2) The resolvent set ρ (A) of A contains (ω , ∞) and for every λ > ω (R(λ , A))n ≤
M , (λ − ω )n
n = 1, 2, 3, · · · .
(2.46)
The proof of this theorem is omitted and referred to [24, p.26, Theorem 2.1.12], or [91, p.20, Theorem 5.3]. The inequality (2.41) provides a very important growth estimate of a semigroup. If ω = 0, then eAt ≤ M for all t ≥ 0 and eAt is called uniformly bounded. If ω = 0 and M = 1, eAt is called a C0 semigroup of contractions. If ω < 0, then eAt converges to zero exponentially as t → ∞. In application to partial differential equations, the verification of the condition (2.46) is usually difficult. So we continue to look for other tests. For an n × n matrix A = (ai j ) and vectors x, y in Cn , we have (Ax, y) = =
n
n
i=1
j=1
n
n
j=1
i=1 ∗
∑ y¯i ∑ ai j x j ∑ x j ∑ a¯i j yi
= (x, A y), where A∗ denotes the conjugate transpose of A. Definition 2.23. Let H be a Hilbert space and A a densely defined linear operator. The adjoint operator A∗ : D(A∗ ) ⊂ H → H of A is defined as follows. The domain D(A∗ ) consists of all y ∈ H such that there exists a y∗ ∈ H satisfying (Ax, y) = (x, y∗ ) for all x ∈ D(A). We then define
2.6 Semigroups
35
A ∗ y = y∗
for all y ∈ D(A∗ ).
Definition 2.24. A linear operator A is called self-adjoint if A = A∗ . Theorem 2.12. A self-adjoint operator is closed. The proof of the theorem is omitted and referred to [105]. Definition 2.25. A linear operator A on a Hilbert space H is dissipative if Re(Ax, x) ≤ 0 for every x ∈ D(A). Theorem 2.13. (Lumer-Phillips) Let A be a linear operator with a dense domain D(A) in a Hilbert space H. (1) If A is dissipative and there is a λ0 > 0 such that the range, R(λ0 I −A), of λ0 I −A is H, then A is the infinitesimal generator of a C0 semigroup of contractions on H. (2) If A is the infinitesimal generator of a C0 semigroup of contractions on H, then R(λ I − A) = H for all λ > 0 and A is dissipative. The proof of this theorem is omitted and referred to [91, p.14, Theorem 4.3]. Corollary 2.2. Let A be a densely defined closed linear operator on a Hilbert space H. If both A and its adjoint operator A∗ are dissipative, then A is the infinitesimal generator of a C0 semigroup of contractions on H. The proof of this theorem is omitted and referred to [24, p.33, Corollary 2.2.3], or [91, p. 15, Corollary 4.4]. The Lumer-Phillips theorem is very useful in solving the partial differential equations because the conditions on A can be easily verified. We now use it to prove that the wave operator defined in (2.36) generates a C0 semigroup. Using Poincar´e’s inequality (2.29), we can readily prove the following lemma. Lemma 2.2. The space H01 (0, L) endowed with the following inner product (u, v)H 1 = 0
L 0
c2
du d v¯ dx dx dx
(2.47)
is a Hilbert space. Moreover, the norm induced by this inner product is equivalent to the usual norm
2 L
du u2 +
dx, uH 1 = dx 0 that is, there exist constants C1 ,C2 > 0 such that L
2 du C1 uH 1 ≤ c2
dx ≤ C2 uH 1 . dx 0
36
2 Elementary Functional Analysis
In what follows, we will always use the inner product (2.47) for H01 (0, L). Theorem 2.14. The wave operator A defined in (2.36) generates a C0 semigroup of contraction on H = H01 (0, L) × L2 (0, L). Proof. It suffices to verify the conditions of Lumer-Phillips Theorem 2.13. By Theorem 2.3, D(A) = (H 2 (0, L) ∩ H01 (0, L)) × H01 (0, L) is dense in H = H01 (0, L) × L2 (0, L). Next let λ > 0 and ( f1 , f2 ) ∈ H01 (0, L) × L2 (0, L). Consider the equation u f (λ I − A) = 1 , f2 v which is equivalent to
λ u − v = f1 , 2 d u −c2 2 + λ v = f2 . dx u(0) = u(L) = 0.
(2.48) (2.49) (2.50)
Substituting equation (2.48) into (2.49) gives −c2
d2u + λ 2u = f2 + λ f1 , dx2
u(0) = u(L) = 0.
Using the variation of parameters formula, we obtain the unique solution u(x) = c1 exp (λ x/c) + c2 exp (−λ x/c) c exp (λ x/c) x exp (−λ s/c) ( f2 (s) + λ f1 (s))ds + 2λ 0 x c exp (−λ x/c) exp(λ s/c) ( f2 (s) + λ f1 (s))ds, − 2λ 0 where c1 , c2 are unique constants such that the boundary condition u(0) = u(L) = 0 2 is satisfied. It is clear that u and ux belong to L2 (0, L). Also c2 ddxu2 = λ 2 u − f2 − λ f1 belongs to L2 (0, L). Hence we deduce that u ∈ H 2 (0, L) ∩ H01 (0, L) and then (2.48) and (2.49) have a unique solution. This proves that R(λ I − A) = H . It remains to show that A is dissipative. We note that the inner product in H01 (0, L) is defined by L d f d g¯ dx ( f , g)H 1 = c2 0 dx dx 0 and the inner product in L2 (0, L) is defined by ( f , g) =
L 0
f (x)g(x)dx. ¯
2.6 Semigroups
37
For any ( f1 , f2 ) ∈ D(A) = (H 2 (0, L) ∩ H01 (0, L)) × H01 (0, L), integration by parts gives (the inner product here is the one in H01 (0, L) × L2 (0, L)) f f Re A 1 , 1 f2 f2 H 1 ×L2 0 f2 f 2 , 1 = Re f2 c2 ddxf21 H01 ×L2 L d f2 d f¯1 d 2 f1 + c2 2 f¯2 dx c2 = Re dx dx dx 0 L L ¯ d f2 d f1 d f1 ¯
L d f1 d f¯2 dx + Re c2 dx c2 c2 = Re f2 − Re dx dx dx dx dx 0 0 0 = 0. Theorem 2.15. Let A be a densely defined closed linear operator on a Banach space. Then A is the infinitesimal generator of an analytic semigroup eAt satisfying eAt ≤ Meω t if and only if there exist 0 < δ < π2 , M > 0, and a real ω such that π ρ (A) ⊃ Sω ,δ = λ | |arg(λ − ω )| < + δ ∪ {ω } (2.51) 2 and M for all λ ∈ Sω ,δ , λ = ω . (λ I − A)−1 ≤ (2.52) |λ − ω | Furthermore AeAt ≤
M ωt e t
(2.53)
and the semigroup eAt is given by eAt =
1 2π i
Γ
eλ t (λ − A)−1d λ ,
(2.54)
where Γ is a contour in the resolvent set ρ (A) with arg(λ ) → ±θ as |λ | → ∞ for some θ in (π /2, π ). The proof is omitted and referred to [42, p.20, Theorem 1.3.4], [82, p.48, Theorem 2.47], or [91, p.61, Theorem 5.2]. The operator A satisfying the conditions (2.51) and (2.52) is called a sectorial operator. The sectorial conditions (2.51) and (2.52) are equivalent to the condition: there exists a constant M1 and a real ω such that for every λ = σ + iτ , σ > ω (λ I − A)−1 ≤
M1 . |τ |
For a proof, we refer to [91, p.61, Theorem 5.2].
(2.55)
38
2 Elementary Functional Analysis
Theorem 2.16. The operator A defined in (2.23) generates an analytic semigroup on L2 (Ω ). The proof relies on advanced estimates about the elliptic operator A, such as G˚arding’s inequality, and is referred to [91, p.211, Theorem 2.7]. The property of being a generator is not destroyed by the addition of a bounded operator. Theorem 2.17. Let X be a Banach space and let A be the infinitesimal generator of a C0 semigroup T (t) on X, satisfying T (t) ≤ Meω t . If B is a bounded linear operator on X , then A + B is the infinitesimal generator of a C0 semigroup S(t) on X, satisfying S(t) ≤ Me(ω +MB)t . The proof of the theorem is referred to [91, p.77, Theorem 1.1]. If A is the infinitesimal generator of an analytic semigroup T (t) on X , then the boundedness on B can be relaxed. Theorem 2.18. Let A be the infinitesimal generator of an analytic semigroup T (t) on a Banach space X such that (λ I − A)−1 ≤
M , 1 + |λ |
Reλ > ω
for some M > 0 and ω ∈ R. Let B be a linear operator satisfying D(A) ⊂ D(B) and Bx ≤ aAx + bx for x ∈ D(A), where a < 1/(1 + M). Then A + B generates an analytic semigroup on X. The proof of the theorem is referred to [82, p.54, Theorem 2.53] or [91, p.80, Theorem 2.1].
2.6.4 Semigroup Growth Bound The ω in (2.41) is called the growth rate of the semigroup. Such ω is not unique. For example, any numbers γ ≥ ω are also growth rates. Thus we define the least growth rate by
ω0 = inf{ω | there exist constants ω and M(ω ) such that T (t) ≤ Meω t }. (2.56) The ω0 is called the growth bound of the semigroup. Theorem 2.19. Let T (t) be a C0 semigroup with the infinitesimal generator A on a Banach space X. Then the growth bound ω0 of the semigroup satisfies
ω0 = lim
t→∞
ln T (t) . t
(2.57)
2.6 Semigroups
39
Proof. We first prove that the limit exists. Let t0 > 0 be a fixed number and M = sup T (t). Then for every t ≥ t0 , there exists n ∈ N such that nt0 ≤ t < (n + 1)t0 .
t∈[0,t0 ]
It then follows from the semigroup definition that ln T (t − nt0 + nt0) ln T (t) = t t ln T (nt0 )T (t − nt0 ) = t ln T n (t0 )T (t − nt0) = t ln(T (t0 )n T (t − nt0)) ≤ t ln T (t0 ) nt0 ln M + . ≤ t0 t t Taking limit and noting that lim
t→∞
lim sup t→∞
nt0 = 1, we obtain t
ln T (t) ln T (t0 ) ≤ < ∞. t t0
Since t0 is arbitrary, we can let t0 go to ∞ to obtain lim sup t→∞
Hence lim
t→∞
ln T (t) ln T (t) ≤ lim inf . t→∞ t t
ln T (t) ln T (t) ln T (t) = lim sup = lim inf < ∞. t→∞ t t t t→∞
For any ω > ω0 , there exists M(ω ) such that T (t) ≤ M(ω )eω t . Consequently,
ln T (t) M ≤ ω + lim = ω. t→∞ t→∞ t t Since ω is arbitrary, we have lim
ln T (t) ≤ ω0 . t→∞ t lim
If
ln T (t) < ω0 , t→∞ t lim
then there exists ω such that
40
2 Elementary Functional Analysis
lim
t→∞
ln T (t) < ω < ω0 . t
This implies that there exists a t0 such that
and then
ln T (t) ≤ω t
for t ≥ t0
T (t) ≤ eω t
for t ≥ t0 .
But T (t) ≤ M0 Taking
for 0 ≤ t ≤ t0 .
M(ω ) = max{M0 , M0 e−ω t0 },
we obtain
T (t) ≤ M(ω )eω t
for t ≥ 0,
which, combing with the definition of the growth bound, implies that ω0 ≤ ω . This is a contradiction. The growth bound is closely related the spectrum of A. Theorem 2.20. Let T (t) be a C0 semigroup with the infinitesimal generator A on a Banach space X and σ (A) be the spectrum of A. Then the growth bound ω0 of the semigroup satisfies ω0 ≥ sup{Re(λ ), λ ∈ σ (A)}. (2.58) Proof. We argue by contradiction. If ω0 < sup{Re(λ ), λ ∈ σ (A)}, then there exists a constant ω1 such that ω0 < ω1 < sup{Re(λ ), λ ∈ σ (A)}. By the definition of the growth bound, we deduce that there exists a constant M(ω1 ) such that T (t) ≤ M(ω1 )eω1t . Moreover, there exists a λ1 ∈ σ (A) such that Re(λ1 ) > ω1 . But it follows from Theorem 2.10 that λ1 ∈ ρ (A). This is a contradiction. The equality in (2.58) may not hold. For such an example, we refer to [82, p.114, Example 3.6]. Fortunately, the equality does hold for a large class of semigroups such as analytic semigroups. Lemma 2.3. Let σ (A) be the spectrum of an operator A and denote λ0 = sup{Re(λ ), λ ∈ σ (A)}. If A satisfies the sectorial conditions (2.51) and (2.52), then for any real λ1 > λ0 , there exist C > 0 and 0 < ϕ < π /2 such that (λ I − A)−1 ≤
C |λ − λ1|
whenever |arg(λ − λ1 )
sup{Re(λ ), λ ∈ σ (A)}, then there exists λ1 such that ω0 > λ1 > sup{Re(λ ), λ ∈ σ (A)}. Since A is the infinitesimal generator of the analytic semigroup, it follows from Theorem 2.15 that A satisfies the sectorial conditions (2.51) and (2.52). Then by Lemma 2.3, A satisfies the condition (2.59). It therefore follows from Theorem 2.15 that T (t) ≤ Ceλ1t .
42
2 Elementary Functional Analysis
By the definition of growth bound, we deduce that λ1 ≥ ω0 . This is a contradiction. The equality (2.60) also holds for another class of semigroups whose eigenvectors form an orthogonal basis in a Hilbert space. Theorem 2.22. Suppose that the linear, closed operator A has eigenvalues {λn }∞ n=1 and the corresponding eigenvectors {φn }∞ n=1 form an orthogonal basis on a Hilbert space H. Then (1) A has the representation Ax =
∞
∑ λn(x, φn )φn ,
x ∈ D(A) =
x∈H|
n=1
∞
∑ |λn| |(x, φn )| 2
2
< ∞ . (2.61)
n=1
(2) If sup Re(λn ) < ∞, then A is the infinitesimal generator of a C0 semigroup T (t) n≥1
given by
∞
∑ (x, φn )eλnt φn .
T (t)x =
(2.62)
n=1
(3) The growth bound of the semigroup is given by
ω0 = sup Re(λn ).
(2.63)
n≥1
Proof. For x ∈ D(A), let
∞
∑ (x, φn )φn .
x=
n=1
Since A is closed, we have Ax = A
N
∑ (x, φn )φn
lim
N→∞
n=1
= lim A N→∞
N
∑ (x, φn )φn
n=1 N
∑ (x, φn )Aφn N→∞
= lim
= lim
n=1 N
N→∞
=
∑ λn(x, φn )φn
n=1
∞
∑ λn(x, φn )φn .
n=1
We can easily check that T (t) defined by (2.62) is a semigroup. Moreover ∞ ∞ d λn t T (t)x − x = ∑ (x, φn ) e |t=0 φn = ∑ λn (x, φn )φn . t dt t→0+ n=1 n=1
Ax = lim
2.6 Semigroups
43
Using the orthogonality of the basis, we deduce that T (t)x2 =
∞
∑ |(x, φn )|2 e2Re(λn)t
n=1
∞ ≤ exp 2 sup Re(λn )t ∑ |(x, φn )|2 n≥1
n=1
2 = x exp 2 sup Re(λn )t . n≥1
This implies (2.63). Theorem 2.22 also holds for Riesz-spectral operators whose eigenvectors form a Riesz basis in H. The Riesz basis can be obtained from an orthogonal basis by an invertible linear bounded transformation. For details, we refer to [24, Section 2.3]. There is a necessary and sufficient condition for the equality (2.60): for every ω > ω0 , the resolvent R(ω + λ , A) as an operator-valued function of λ is bounded on the right-half complex plane Re(λ ) > 0. For a proof, see [24, p.223, Theorem 5.1.6]. This result is nice, but the condition is usually difficult to verify in applications to partial differential equations.
Exercises 2.6 1. Suppose that T (t) is a C0 -semigroup on the Hilbert space X . a. Let λ ∈ C. Show that eλ t T (t) is also a C0 -semigroup. b. Prove that the infinitesimal generator of eλ t T (t) is λ I + A, where A is the infinitesimal generator of T (t). 2. A linear operator A on a Hilbert space H is nonnegative if (Ax, x) ≥ 0 for all x ∈ D(A). Prove that if A is a self-adjoint and nonnegative operator on H, then −A is the infinitesimal generator of a contraction semigroup on H. 3. Consider the heat equation with Dirichlet boundary conditions
∂u ∂ 2u = k 2, ∂t ∂x u(0,t) = 0, u(1,t) = 0, u(x, 0) = f (x), where k is a positive constant. Define the differential operator A in L2 (0, 1) by A=k
d2 dx2
44
2 Elementary Functional Analysis
with the domain D(A) = H 2 (0, 1) ∩ H01 (0, 1). a. Prove that A is the infinitesimal generator of a contraction semigroup. b. Give an explicit expression for the semigroup by solving the equation with the Fourier method (separation of variables). 4. Consider the convection-diffusion equation with Dirichlet boundary conditions
∂u ∂ 2u ∂u = k 2 +a , ∂t ∂x ∂x u(0,t) = 0, u(1,t) = 0, u(x, 0) = f (x), where a, k are constant and k > 0. Define A=k
∂ 2u ∂u +a ∂ x2 ∂x
with the domain D(A) = H 2 (0, 1) ∩ H01 (0, 1). a. Show that A is self-adjoint on L2a = L2 (0, 1) with the weighted inner product (u, v)L2a =
1
u(x)v(x)eax/k dx.
0
b. Prove that A is the infinitesimal generator of a contraction semigroup.
2.7 Abstract Cauchy Problems Let X be a Banach space and let A be a linear operator from D(A) ⊂ X into X . Given x ∈ X, the abstract Cauchy problem for A with an initial condition x consists of finding a solution u(t) to the initial value problem du(t) = Au(t), dt
u(0) = x.
(2.64)
By a solution here we mean an X -valued function u(t) such that it is continuous for t ≥ 0, continuously differentiable, u(t) ∈ D(A) for t > 0, and (2.64) is satisfied. Theorem 2.23. Let A be a densely defined linear operator with a nonempty resolvent set ρ (A). The initial value problem (2.64) has a unique solution u(t), which is continuously differentiable on [0, ∞), for every initial value x ∈ D(A) if and only if A is the infinitesimal generator of a C0 semigroup T (t). The proof of this theorem is omitted and refereed to [91, p.102, Theorem 1.3]. Definition 2.26. Let T (t) be the C0 semigroup generated by A. For any x ∈ X , u(t) = T (t)x is called a weak solution of (2.64).
2.7 Abstract Cauchy Problems
45
Using this theorem and Theorem 2.14, we can prove that the initial boundary value problem of the wave equation (2.33)-(2.35) has a unique solution. Theorem 2.24. For every initial condition (u0 , u1 ) ∈ (H 2 (0, L) ∩ H01 (0, L)) × H01 (0, L), the initial boundary value problem of the wave equation (2.33)-(2.35) has a unique solution u(t) At u0 =e , ∂u u1 ∂t where the wave operator A is defined in (2.36). This theorem just tells us that the problem (2.33)-(2.35) has a unique solution, but it does not tell us what the explicit expression of the solution is. Using the Fourier method, we can obtain the explicit solution u(x,t) =
! cnπ t cnπ t " nπ x + bn sin sin , an cos L L L n=1 ∞
∑
where nπ x 2 L dx, u0 (x) sin L 0 L nπ x 2 L cnπ = dx. bn u1 (x) sin L L 0 L
an =
Therefore, the semigroup eAt is given by ⎡ & () ' ( ' cnπ t ( ' ⎤ + bn sin cnLπ t sin nπL x ∑∞ n=1 an cos L u ⎢ ⎥ eAt 0 = ⎣ ! ' cnπ t ( cnπ bn ' cnπ t (" ' nπ x ( ⎦ . u1 cnπ an ∞ sin L ∑n=1 − L sin L + L cos L We now consider the nonhomogeneous initial value problem du(t) = Au(t) + f (t), dt
u(0) = x,
(2.65)
where f : [0, T ) → X . We denote by L p (a, b; X) (1 ≤ p < +∞) the space of (classes of) functions f : (a, b) → X such that f L p =
a
b
f (t) Xp dt
1
p
< +∞.
L p (a, b; X) is a Banach space [32, p.469]. Let C([0, T ]; X ) denote the space of all continuous X -valued functions. Endowed with the following maximum norm f C = max f (t, 0≤t≤T
46
2 Elementary Functional Analysis
C([0, T ]; X ) is a Banach space. Definition 2.27. An X-valued function u : [0, T ) → X is a (classical) solution if it is continuous on [0, T ), continuously differentiable (0, T ), u(t) ∈ D(A) for 0 < t < T , and (2.65) is satisfied on [0, T ). If A is a matrix, the solution of (2.65) is given by the formula of variation of parameters u(t) = eAt x +
t
0
eA(t−s) f (s)ds.
This is also true in general cases. Definition 2.28. Let A be the infinitesmal generator of a C0 semigroup T (t). Let x ∈ X and f ∈ L1 (0, T ; X ). The function u ∈ C([0, T ]; X ) given by u(t) = T (t)x +
t 0
T (t − s) f (s)ds,
0≤t ≤T
(2.66)
is called a weak (or mild) solution of the initial value problem (2.65) on [0, T ]. Theorem 2.25. Let A be the infinitesimal generator of a C0 semigroup T (t) on a Banach space. Assume that T (t) ≤ Meω t ,
f (t) ≤ Meγ t .
(2.67)
Then the solution of (2.65) satisfies
or
u(t) ≤ C(u0 + t)eω t ,
if ω = γ ,
(2.68)
u(t) ≤ C(u0 + 1)eτ t ,
if ω = γ ,
(2.69)
where τ = max(ω , γ ) and C is a positive constant. Proof. It follows from (2.66) that t u(t) = T (t)x + T (t − s) f (s)ds 0 t ≤ T (t)x + T (t − s) f (s)ds 0
≤ xMe
ωt
+ M2
t 0
= xMeω t + M 2 eω t = xMeω t + M 2 eω t This implies (2.68) and (2.69).
eω (t−s) eγ s ds t
e(γ −ω )s ds
0
t 0
e(γ −ω )s ds.
2.7 Abstract Cauchy Problems
47
For f ∈ L1 (0, T ; X ), by definition, the problem (2.65) has a unique weak solution. The following theorem states the conditions imposed on f such that the weak solution becomes a (classical) solution for x ∈ D(A). Theorem 2.26. Let A be the infinitesimal generator of a C0 semigroup T (t) on a Banach space X. If f is differentiable on [0, T ) and f ∈ L1 (0, T ; X ), then the initial value problem (2.65) has a solution u on [0, T ) for every x ∈ D(A). The proof of this theorem is omitted and refereed to [91, p.109, Theorem 2.9]. Example 2.8. Consider the non-homogeneous wave equation 2 ∂ 2u 2∂ u = c + f (x,t) in (0, L) × (0, ∞), ∂ t2 ∂ x2 u(0,t) = 0, u(L,t) = 0 t ≥ 0, ∂u (x, 0) = u1 (x) x ∈ (0, L). u(x, 0) = u0 (x), ∂t
If f (x,t) ∈ L2 (0, L) for every t ≥ 0 and is continuously differentiable in t, then it follows from Theorem 2.26 that the problem has a unique solution.
Exercises 2.7 1. Consider the wave equation with Dirichlet boundary conditions
∂ 2u ∂ 2u ∂u = −a , 2 2 ∂t ∂x ∂x u(0,t) = 0, u(1,t) = 0, ∂u (x, 0) = u1 (x), u(x, 0) = u0 (x), ∂t where a is a constant. Define A=
∂ 2u ∂u −a ∂ x2 ∂x
with the domain D(A) = H 2 (0, 1) ∩ H01 (0, 1). a. Formulate this problem into an abstract Cauchy problem in the state space 1 × L2 , where H 1 = H 1 (0, 1) with the weighted inner product H = H0,a a 0,a 0 (h, f )H 1 = 0,a
1 0
e−ax Ah(x) f¯(x)dx,
and L2a = L2 (0, 1) with the weighted inner product
48
2 Elementary Functional Analysis
(h, f )L2a =
1 0
e−ax h(x) f¯(x)dx.
b. Prove that the Cauchy problem has a unique solution using the semigroup theory.
2.8 References and Notes This chapter is mainly based on the references [1, 24, 30, 31, 32, 35, 39, 40, 44, 82, 91, 95, 105]. Sections 2.1, 2.2, and 2.3 are adopted from [44, 95, 105], Section 2.4 from [1, 35], Section 2.5 from [30, 31, 32, 35, 41, 44], Section 2.6 from [24, 82, 91], and Section 2.7 from [91].
Chapter 3
Finite Dimensional Systems
Because many control concepts and theories in finite dimensional systems have been transplanted to partial differential equations, we present a brief introduction to feedback control of finite dimensional systems. We start with the control systems without disturbances and address the control systems subject to a disturbance in Section 3.10.
3.1 General Forms of Control Systems Linear finite dimensional control systems have the following general form x˙ = Ax + Bu,
(3.1)
y = Cx + Du, x(0) = x0 ,
(3.2) (3.3)
where x = (x1 , x2 , · · · , xn )∗ (hereafter ∗ denotes the transpose of a vector or matrix) is a state vector, x0 is an initial state caused by external disturbances, y = (y1 , · · · , yl )∗ is an output vector, u = (u1 , · · · , um )∗ is a control vector, and A, B, C, D are n × n, n × m, l × n, l × m constant matrices, respectively. The equation (3.1) is called a state equation. A simple example of such control systems is the mass-spring system (1.2), which is repeated as follows 0 1 x1 0 x˙1 = + 1 u, (3.4) x˙2 − mk 0 x2 m x (3.5) y = [1 0] 1 . x2 Given a constant reference output r of the system (3.1), the control objective is to design the control u to regulate the output y to r, that is, y(t) → r as t → ∞. Such W. Liu, Elementary Feedback Stabilization of the Linear Reaction-Convection-Diffusion Equation and the Wave Equation, Math´ematiques et Applications 66, c Springer-Verlag Berlin Heidelberg 2010 DOI 10.1007/978-3-642-04613-1 3,
49
50
3 Finite Dimensional Systems
a control system where the reference output is constant is called a regulator system. Without loss of generality, we can assume that a reference output r of the system (3.1)-(3.3) is zero. In fact, if r = 0, we introduce the change of variables w = x − x¯ ,
z = y − r,
v = u − u¯ ,
where x¯ and u¯ are the steady states satisfying ¯ 0 = A¯x + Bu, ¯ r = C¯x + Du, and y¯ is given by ¯ y¯ = C¯x + Du. In most of cases, this linear system has a solution. For instance, in the mass-spring system, the coefficient matrix AB CD is nonsingular and then the above system has a unique solution x¯ and u. ¯ We then have ˙ = Aw + Bv, w z = Cw + Dv, which has a zero reference output. If we substitute u = v + u¯ into the original equation (3.1), we obtain x˙ = Ax + B(v + u¯ ), y = Cx + D(v + u¯ ), x(0) = x0 , ¯ which usually has a nonzero steady control u.
Exercises 3.1 1. Consider the system x˙1 = −2x1 , x˙2 = −11x1 + u(t), y(t) = x1 (t) + x2(t).
3.2 Stability
51
Write the system in state-space form (i.e., determine the matrices A, B, C, and D). 2. Consider the system x¨ = 9x + 2u(t), y(t) = x(t). Write the system in state-space form (i.e., determine the matrices A, B, C, and D). 3. Consider the control system x˙ = Ax + Bu, x(0) = x0 , where A is an n × n constant matrix and B is an n × m constant matrix. Derive the formula of variation of parameters x(t) = e x0 + At
t
eA(t−s) Bu(s)ds.
(3.6)
0
3.2 Stability Stability is one of central issues in control theory. Consider a linear system x˙ = Ax, x(0) = x0 .
(3.7) (3.8)
The system has the equilibrium point x¯ = 0, that is, A0 = 0. Definition 3.1. The equilibrium point 0 of (3.7) is (1) stable if, for any ε > 0, there exists δ = δ (ε ) > 0 such that x(t) < ε
for x(0) < δ , t ≥ 0.
(2) unstable if it is not stable. (3) exponentially stable if there exist σ , K > 0 such that for all x0 we have x(t) ≤ Kx0 e−σ t . The stability of the equilibrium point 0 can be characterized by the locations of the eigenvalues of the matrix A. The solution of (3.7) is given by x(t) = eAt x0 .
52
3 Finite Dimensional Systems
From matrix theory it follows that there exists a nonsingular matrix P that transforms A into its Jordan normal form ⎤ ⎡ J1 0 0 · · · 0 0 ⎢ 0 J2 0 · · · 0 0 ⎥ ⎥ ⎢ ⎥ ⎢ −1 P AP = J = ⎢ ... ... ... · · · ... ... ⎥ , ⎥ ⎢ ⎣ 0 0 0 · · · Jr−1 0 ⎦ 0 0 0 · · · 0 Jr where Ji is an mi × mi Jordan block associated with the eigenvalue λi of A. If mi = 1, then Ji = [λi ]. If mi > 1, then ⎤ ⎡ λi 1 0 · · · 0 0 ⎢ 0 λi 1 · · · 0 0 ⎥ ⎥ ⎢ ⎥ ⎢ Ji = ⎢ ... ... ... · · · ... ... ⎥ . ⎥ ⎢ ⎣ 0 0 0 · · · λi 1 ⎦ 0 0 0 · · · 0 λi To illustrate the Jordan normal form, we consider the matrix ⎡ ⎤ 1 −2 2 A = ⎣ −2 1 −2 ⎦ . 2 −2 1 The characteristic equation of A is det(A − λ I) = −(λ + 1)2 (λ − 5). The eigenvectors corresponding to the eigenvalue λ1 = λ2 = −1 are ⎡ ⎤ ⎡ ⎤ 1 0 P1 = ⎣ 1 ⎦ , P2 = ⎣ 1 ⎦ . 0 1 The eigenvector corresponding to the eigenvalue λ3 = 5 is ⎤ ⎡ 1 P3 = ⎣ −1 ⎦ . 1 Thus we obtain the nonsingular matrix P = [P1 P2 P3 ] such that ⎡ ⎤ −1 0 0 P−1 AP = ⎣ 0 −1 0 ⎦ . 0 0 5 Consider another example
3.2 Stability
53
⎡
⎤
10 0 A = ⎣ 2 2 −1 ⎦ . 01 0 The characteristic equation is det(A − λ I) = −(λ − 1)3. The eigenvector corresponding to the eigenvalue λ1 = λ2 = λ2 = 1 is ⎡ ⎤ 0 P1 = ⎣ 1 ⎦ . 1 Solving the following system (A − λ1I)P2 = P1 , (A − λ1I)P3 = P2 ,
we obtain two generalized eigenvectors ⎡ ⎤ ⎡ ⎤ 0 1/2 P2 = ⎣ 1 ⎦ , P3 = ⎣ 0 ⎦ . 0 0 Thus we obtain the nonsingular matrix P = [P1 P2 P3 ] such that ⎡ ⎤ 110 P−1 AP = ⎣ 0 1 1 ⎦ . 001 We rewrite the Jordan block as follows ⎡ λi 1 0 · · · 0 ⎢ 0 λi 1 · · · 0 ⎢ ⎢ Ji = ⎢ ... ... ... · · · ... ⎢ ⎣ 0 0 0 · · · λi 0 0 0 ··· 0 ⎡ λi 0 0 · · · 0 ⎢ 0 λi 0 · · · 0 ⎢ ⎢ = ⎢ ... ... ... · · · ... ⎢ ⎣ 0 0 0 · · · λi 0 0 0 ··· 0 = Λi + N.
⎤ 0 0⎥ ⎥ .. ⎥ . ⎥ ⎥ 1⎦ λi ⎤ ⎡ 01 0 ⎢0 0 0⎥ ⎥ ⎢ .. ⎥ + ⎢ .. .. ⎢ . ⎥ ⎥ ⎢. . ⎣0 0 ⎦ 0 00 λi
⎤ 00 0 0⎥ ⎥ .. .. ⎥ . .⎥ ⎥ 0 ··· 0 1 ⎦ 0 ··· 0 0 0 ··· 1 ··· .. . ···
(3.9)
54
3 Finite Dimensional Systems
Since exp(Λit) = exp(λit)I and Nk = 0 for k ≥ mi , we deduce that exp(Jit) = exp(Λi t) exp(Nt) = exp(λit) ⎡ ⎢ ⎢ ⎢ =⎢ ⎢ ⎢ ⎣
mi −1 k k t N
∑
k!
k=0
eλit teλit 0 .. . 0 0
λi t 2! e eλit teλit
.. . 0 0
t2
.. . 0 0
··· ··· ··· ··· ···
t mi −2 λi t t mi −1 λit (mi −2)! e (mi −1)! e t mi −3 λi t t mi −2 λit (mi −3)! e (mi −2)! e
.. .
eλi t 0
.. . teλit eλ i t
⎤ ⎥ ⎥ ⎥ ⎥. ⎥ ⎥ ⎦
It then follows that r mi −1
x(t) = eAt x0 = PeJt P−1 x0 = ∑
∑ t j e λ i t Ci j x 0 ,
(3.10)
i=1 j=0
where Ci j are constant matrices. For example, for the Jordan normal form (3.9), the solution can be expressed as x(t) = eAt x0 = PeJt P−1 x0 ⎛⎡ ⎤⎞ 2 et tet t2 et = P ⎝⎣ 0 et tet ⎦⎠ P−1 x0 0 0 et ⎤⎞ ⎤ ⎡ ⎤ ⎡ ⎛⎡ t 2 e 0 0 0 tet 0 0 0 t2 et = P ⎝⎣ 0 et 0 ⎦ + ⎣ 0 0 tet ⎦ + ⎣ 0 0 0 ⎦⎠ P−1 x0 0 0 0 0 0 et 00 0 ⎤ ⎤ ⎡ ⎡ 010 001 2 t = et x0 + tet P ⎣ 0 0 1 ⎦ P−1 x0 + et P ⎣ 0 0 0 ⎦ P−1 x0 . 2 000 000 Given a polynomial f (λ ) = (λ − λ1 )q1 · · · (λ − λn)qn , the algebraic multiplicity of the root λi (i = 1, · · · , n) of f (λ ) is defined to be qi . From matrix theory, the dimension mi of Ji is equal to 1 if and only if the dimension of the eigenvector space ker(A − λi I) is equal to the algebraic multiplicity qi of the eigenvalue λi , and if and only if rank(A − λi I) = n − qi . Hence, from (3.10), we have the following theorem. Theorem 3.1. The equilibrium point 0 of (3.7) is stable if and only if all eigenvalues of A satisfy Reλi ≤ 0 and for every eigenvalue with Reλi = 0 and algebraic multiplicity qi ≥ 2, rank(A − λi I) = n − qi , where n is the dimension of x. The equilibrium point 0 of (3.7) is exponentially stable if and only if all eigenvalues of A satisfy Reλi < 0.
3.2 Stability
55
Definition 3.2. A matrix A is said to be Hurwitz if all eigenvalues of A have negative real parts. Example 3.1. Consider the system x˙ = Ax,
(3.11)
0 1 A= . −1 0
where
The characteristic equation of A is det(λ I − A) = λ 2 + 1 = 0. Solving the equation, we obtain the eigenvalues λ = ±i. It is clear that the algebraic multiplicity of both i and −i is 1 and the ranks of iI − A and −iI − A are equal to 1. So the conditions of Theorem 3.1 are satisfied and then the equilibrium 0 is stable. In fact, the solution of the system is given by x1 cost sint = c1 + c2 , x2 − sint cost which shows that the equilibrium point 0 is stable. Example 3.2. Consider the system x˙ = Ax,
where A=
(3.12)
0 1 . −5 −2
The characteristic equation of A is
λ 2 + 2λ + 5 = 0. Then the eigenvalues are
λ = −1 ± 2i.
Thus, by Theorem 3.1, the equilibrium 0 is exponentially stable. In fact, the solution is x1 cos 2t e−t = c1 − cos2t − 2 sin 2t x2 sin 2t e−t , +c2 − sin 2t + 2 cos2t which decays exponentially.
56
3 Finite Dimensional Systems
For higher order systems (n ≥ 3), it may be impossible to solve characteristic polynomials explicitly. In this case, the Routh-Hurwitz’s stability criterion is needed for determining the signs of eigenvalues without actually solving for them. The proof of the criterion is long and is referred to [21]. Theorem 3.2. (Routh-Hurwitz’s Criterion) Suppose that all the coefficients of the polynomial f (s) = a0 sn + a1 sn−1 + · · · + an−1 s + an are positive. Construct the following table: sn : : s sn−2 : sn−3 : .. . n−1
a0 a1 b1 c1 .. .
a2 a4 a3 a5 b2 b3 c2 c3 .. .. . .
a6 a7 b4 c4 .. .
··· ··· ··· ··· .. .
s2 : d1 d2 s1 : e1 s0 : f1 where any undefined entries are set to zero and
1 a a 1 a a 1 a a b1 = −
0 2
, b2 = −
0 4
, b3 = −
0 6
, · · · a1 a1 a3 a1 a1 a5 a1 a1 a7
1 a a 1 a a 1 a a c1 = −
1 3
, c2 = −
1 5
, c3 = −
1 7
, · · · b1 b1 b2 b1 b1 b3 b1 b1 b4 .. . Then all zeros of f have negative real parts if and only if all entries in the first column of the table are well defined and positive. Example 3.3. Consider the polynomial f (s) = (s + 1)(s + 2)(s + 3) = s3 + 6s2 + 11s + 6, which has three negative zeros: -1, -2, -3. The Routh’s table is as follows s3 : 1 11 s2 : 6 6 s : 10 0 s0 : 6 All entries in the first column of the table are positive.
Exercises 3.2 1. Consider the system x˙ = Ax,
3.3 Controllability and Observability
57
⎤ −1 0 2 A = ⎣ 0 −1 1 ⎦ . 0 −1 −1 ⎡
where
a. Calculate the eigenvalues of A. b. Show the first column of the Routh’s array of the characteristic equation |λ I − A| = 0 is positive. 2. Consider the system x˙ = Ax, where
⎤ 0 1 0 A = ⎣ −b3 0 1 ⎦ . 0 −b2 −b1 ⎡
(A is called Schwarz matrix). Show that the first column of the Routh’s array of the characteristic equation |λ I − A| = 0 consists of 1, b1 , b2 , and b1 b3 .
3.3 Controllability and Observability Controllability and observability are structural properties of a system. Consider a control system x˙ = Ax + Bu, x(0) = x0 .
(3.13) (3.14)
Definition 3.3. System (3.13) or the pair (A, B) is controllable if any initial state x0 and final state x f , there exists a control vector u such that x(T ) = x f for some T > 0. In deriving a test for controllability, we need Cayley-Hamilton theorem that plays a very important role in solving problems involving matrix equations. Theorem 3.3. (Cayley-Hamilton) Any n × n matrix A satisfies its characteristic equation An + an−1An−1 + · · · + a1 A + a0I = 0, where
λ n + an−1λ n−1 + · · · + a1 λ + a0 = det(λ I − A).
Proof. Consider the adjugate adj(λ I − A) of λ I − A: ⎡ ⎤ C11 C21 · · · Cn1 ⎢ C12 C22 · · · Cn2 ⎥ ⎢ ⎥ adj(λ I − A) = ⎢ . . . ⎥, ⎣ .. .. · · · .. ⎦ C1n C2n · · · Cnn
58
3 Finite Dimensional Systems
where C ji = (−1)i+ j det(λ I − A) ji and (λ I − A) ji denotes the submatrix of λ I − A formed by deleting row j and column i. It is clear that adj(λ I − A) is a polynomial in λ of degree n − 1: adj(λ I − A) = Bn−1 λ n−1 + Bn−2 λ n−2 + · · · + B1 λ + B0 . Since
(λ I − A)adj(λ I − A) = det(λ I − A)I,
we obtain ( λ n + an−1λ n−1 + · · · + a1 λ + a0 I ( ' = (λ I − A) Bn−1 λ n−1 + Bn−2 λ n−2 + · · · + B1 λ + B0 . '
Substituting A for λ gives that An + an−1An−1 + · · · + a1 A + a0I = 0.
Consider the matrix A=
12 . 34
We have det(λ I − A) = (λ − 1)(λ − 4) − 6 and 0 2 −3 2 10 (A − I)(A − 4I) − 6I = −6 33 3 0 01 60 60 = − 06 06 00 = . 00 A square matrix A is positive definite if the inner product (Ax, x) > 0 for any x = 0. Consider the matrix 11 A= . 12 We have (Ax, x) = (x1 + x2)x1 + (x1 + 2x2 )x2 = x21 + 2x1x2 + 2x22 = (x1 + x2)2 + x22 >0 for any x = 0. So A is positive definite. Theorem 3.4. The following statements are equivalent: (1) (A, B) is controllable.
3.3 Controllability and Observability
59
(2) The matrix P(t) =
t
∗ (t−s)
eA(t−s) BB∗ eA
ds
0
is positive definite for all t > 0. (3) Kalman controllability matrix defined by C = [B AB A2 B · · · An−1 B] has rank n. Proof. (1) ⇒ (2). Argue by contradiction. Assume that P(T ) is not positive definite for some T > 0. Then there exists a vector v = 0 such that v∗ P(T )v = and then
T
∗ (T −s)
v∗ eA(T −s) BB∗ eA
0
vds = 0,
v∗ eAt B = 0.
For the initial state x0 = e−AT v, we have x(T ) = eAT e−AT v + and then
t
eA(T −s) Bu(s)ds,
0
v∗ x(T ) = v∗ v > 0.
Thus it is impossible to drive the system to x(T ) = 0. This contradicts with the controllability. (2) ⇒ (1). For any initial state x0 and any final state x f , choosing the control ∗ u(t) = −B∗ eA (T −t) P(T )−1 eAT x0 − x f , we obtain x(T ) = eAT x0 −
T 0
∗ (T −s)
eA(T −s) BB∗ eA
P(T )−1 eAT x0 − x f ds
T
∗ eA(T −s) BB∗ eA (T −s) dsP(T )−1 eAT x0 − x f 0 AT = e x0 − P(T )P(T )−1 eAT x0 − x f = eAT x0 −
= xf . (2) ⇒ (3). Argue by contradiction. Assume that the rank of C is less than n. Then the row vectors of C are linearly dependent. So we can find a nonzero vector v = (v1 , v2 , · · · vn ) such that v[B AB A2 B · · · An−1 B] = [vB vAB vA2 B · · · vAn−1B] = 0,
60
3 Finite Dimensional Systems
or vB = vAB = vA2 B = · · · = vAn−1 B = 0. It then follows from the Cayley-Hamilton theorem that −vAn B = an−1 vAn−1 B + · · · + a1 vAB + a0vB = 0. By induction, we can show that vAm B = 0 for m = 0, 1, 2, · · ·. Thus we derive that 1 veAt B = v I + At + A2t 2 + · · · B = 0, 2! and then vP(t) = 0. This contradicts the fact that P(t) is positive definite. (3) ⇒ (2). Next we assume that rank[C ] = n, but P(t) is not positive definite. Then there exists a nonzero vector v such that veA(T −s) B = 0,
0 ≤ s ≤ T.
Taking s = T gives that vB = 0. Differentiating it and letting s = T gives vAB = 0. Continuing this process, we find that vB = vAB = vA2 B = · · · = vAn−1 B = 0, which contradicts the assumption that rank[C ] = n. Example 3.4. Consider the mass-spring-damper system 0 1 x1 0 x˙1 = + 1 u(t). x˙2 − mk 0 x2 m Since the Kalman controllability matrix C = [B AB] =
0 1 m
1 m
0
has rank 2, the system is controllable. The following example shows that not every system is controllable. Example 3.5. The system
1 0 x1 0 x˙1 = + u(t) x˙2 1 1 x2 1
is not controllable since the rank of the Kalman controllability matrix 00 C = [B AB] = 11
3.3 Controllability and Observability
61
is equal to 1. To derive another test for controllability, we need the following technical lemma. Lemma 3.1. Let A, B be n × n and n × m matrices, respectively. If the rank of the controllability matrix rank[B AB A2 B · · · An−1 B] = q < n, then there exists a nonsingular matrix P such that A11 A12 B11 P−1 AP = , P−1 B = , 0 A22 0 where A11 is a q × q matrix, A12 is a q × (n − q) matrix, A22 is an (n − q) × (n − q) matrix, and B11 is a q × m matrix. Moreover, the pair (A11 , B11 ) is controllable. Proof. Let f1 , · · · , fq be q linearly independent column vectors of the controllability matrix and let us choose n − q additional vectors vq+1 , · · · , vn such that & ) P = f1 · · · fq vq+1 · · · vn is nonsingular. For each fi , there exists a column vector b of B and an integer 0 ≤ j ≤ n − 1 such that fi = A j b. If j < n − 1, then Afi = A j+1 b is one of the column vectors of the controllability matrix and so Afi , f1 , · · · , fq are linearly dependent. Thus there are ai1 , · · · , aiq such that Afi = ai1 f1 + · · · + aiqfq . (3.15) If j = n − 1, then Afi = An b. By the Cayley-Hamilton theorem, there exist constants c0 , c1 , · · · , cn−1 such that An = c0 I + c1 A + · · · + cn−1 An−1 . So Afi = c0 b + c1 Ab + · · · + cn−1 An−1 b. Since each A j b (0 ≤ j ≤ n − 1) can be expressed in terms of f1 , · · · , fq , (3.15) also holds in this case. It therefore follows that ) & ) A11 A12 & , Af1 · · · Afq Avq+1 · · · Avn = f1 · · · fq vq+1 · · · vn 0 A22 which implies
A11 A12 . P AP = 0 A22
−1
In addition, all column vectors of B can be expressed in terms of f1 , · · · , fq and so ) B11 & , B = f1 · · · fq vq+1 · · · vn 0 which implies
Moreover, we can show that
B11 . P B= 0 −1
62
3 Finite Dimensional Systems
P−1 [B AB A2 B · · · An−1 B] =
B11 A11 B11 A211 B11 · · · An−1 11 B11 . 0 0 0 ··· 0
Then the rank of [B11 A11 B11 A211 B11 · · · An−1 11 B11 ] is equal to q and so the pair (A11 , B11 ) is controllable. Theorem 3.5. (Popov-Belevitch-Hautus Test) System (3.13) is controllable if and only if the matrix [(λ I − A) B] has a rank n for all complex numbers λ . Proof. Suppose that [(λ I − A) B] does not have rank n. Then there exists a nonzero vector v∗ such that v∗ [(λ I − A) B] = 0. Then
λ v∗ = v∗ A,
v∗ B = 0
and so v∗ C = v∗ [B AB A2 B · · · An−1 B] = v∗ [B λ B λ 2 B · · · λ n−1 B] = 0. Hence, by Theorem 3.4, the system (3.13) is not controllable. We next suppose that the system (3.13) is not controllable. Then by Lemma 3.1 there exists a nonsingular matrix P such that A11 A12 B11 −1 −1 , P B= P AP = . 0 A22 0 Let λ be an eigenvalue of A22 and v22 be the left eigenvector so that v22 A22 = λ v22 . Then λ I − A11 −A12 B11 = 0, [0 v22 ] 0 λ I − A22 0 and so
[0 v22 ]P−1 [(λ I − A)P B] = 0.
Since P is nonsingular, we derive that [0 v22 ]P−1 (λ I − A) = 0, and then
[0 v22 ]P−1 [(λ I − A) B] = 0.
This contradicts the assumption that [(λ I − A) B] has rank n. Consider the mass-spring system (3.4). It is clear that the rank of the matrix λ −1 0 [(λ I − A) B] = k 1 m λ m
3.3 Controllability and Observability
63
is equal to 2. By Popov-Belevitch-Hautus Test, the mass-spring system is controllable. Consider the observation system x˙ = Ax,
(3.16)
y = Cx, x(0) = x0 ,
(3.17) (3.18)
where y = (y1 , y2 , · · · , yl )∗ is an output vector and C is an l × n constant matrix. Definition 3.4. System (3.16)-(3.17) or the pair (A, C) is observable if any initial state x0 can be uniquely determined by the observation y(t) over the interval [0, T ] for some T > 0. We define Kalman observability matrix O by ⎤ ⎡ C ⎢ CA ⎥ ⎥ ⎢ 2 ⎥ ⎢ O = ⎢ CA ⎥ . ⎢ .. ⎥ ⎣ . ⎦ CAn−1 Theorem 3.6. The following statements are equivalent: (1) (A, C) is observable. (2) Kalman observability matrix O has rank n. (3) (A∗ , C∗ ) is controllable. (4) The matrix t
Q(t) =
is positive definite for all t > 0. (5) The matrix
∗
eA s C∗ CeAs ds
0
λI−A C
has rank n for all complex numbers λ . Proof. (1) ⇒ (2). We first assume that the system is observable, but the rank of O is less than n. Then the column vectors of O are linearly dependent. So we can find a nonzero vector v = (v1 , v2 , · · · vn )∗ such that ⎡ ⎡ ⎤ ⎤ Cv C ⎢ CAv ⎥ ⎢ CA ⎥ ⎢ ⎢ ⎥ ⎥ 2 ⎢ ⎢ CA2 ⎥ ⎥ ⎢ ⎥ v = ⎢ CA v ⎥ = 0, ⎢ ⎢ .. ⎥ ⎥ .. ⎣ ⎣ . ⎦ ⎦ . CAn−1
CAn−1 v
64
3 Finite Dimensional Systems
or Cv = CAv = CA2 v = · · · = CAn−1 v = 0. It then follows from the Cayley-Hamilton theorem that −CAn v = an−1CAn−1 v + · · · + a1CAv + a0Cv = 0. By induction, we can show that CAm v = 0 for m = 0, 1, 2, · · · . Thus we derive that 1 y(t) = CeAt v = C I + At + A2t 2 + · · · v = 0. 2! In addition, we also have y(t) = CeAt 0. This implies that two different initial states 0 and v have the same output 0. So the initial state x0 cannot be uniquely determined by the observation y(t) over the interval [0, T ] and this contradicts the observability assumption. (2) ⇒ (1). Next we assume that rank[O] = n, but the system is not observable. Then there exists a nonzero initial state v such that CeAt v = 0,
0 ≤ t ≤ T.
Taking t = 0 gives that Cv = 0. Differentiating it and letting t = 0 gives CAv = 0. Continuing this process, we find that Cv = CAv = CA2 v = · · · = CAn−1 v = 0, which contradicts the assumption that rank[O] = n. The rest of the statements can be proved by using Theorem 3.4. From Theorem 3.6, we can derive the following observability inequality. Corollary 3.1. System (3.16)-(3.17) is observable if and only if the observability inequality holds T
0
Cx(t)2 dt ≥ Kx0 2
for all x0 ,
where K is a positive constant, depending on T , but independent of x0 . Proof. To prove the corollary, it suffices to notice that T 0
Cx(t)2 dt = = =
T 0
CeAt x0 2 dt
T 0
CeAt x0 , CeAt x0 dt
T 0
∗ eA t C∗ CeAt x0 , x0 dt
= (Q(T )x0 , x0 ) .
(3.19)
3.3 Controllability and Observability
65
Example 3.6. Consider the mass-spring system 0 1 x1 x˙1 = , x˙2 − mk 0 x2 y = [0 1]
x1 . x2
Since Kalman observability matrix 0 1 C O= = CA − mk 0 has rank 2, the system is observable. Example 3.7. The system
x˙1 x˙2
=
11 01
x1 , x2
x y = [0 1] 1 x2
is not observable since the rank of Kalman observability matrix 01 C = O= 01 CA is equal to 1. From Theorems 3.4 and 3.6, we can derive the following duality between controllability and observability. Theorem 3.7. The control system x˙ = Ax + Bu, x(0) = x0 is controllable if and only if its dual observation system x˙ = A∗ x, y = B∗ x, x(0) = x0 is observable.
66
3 Finite Dimensional Systems
Exercises 3.3 1. Consider the system ⎡
⎤ ⎡ ⎤⎡ ⎤ ⎡ ⎤ x˙1 −1 −2 −2 x1 2 ⎣ x˙2 ⎦ = ⎣ 0 −1 1 ⎦ ⎣ x2 ⎦ + ⎣ 0 ⎦ u, x˙3 x3 1 0 −1 1 ⎡ ⎤ & ) x1 y = 1 1 0 ⎣ x2 ⎦ . x3
Is the system controllable and observable? 2. Consider the system ⎤⎡ ⎤ ⎡ ⎤ ⎡ 200 x1 x˙1 ⎣ x˙2 ⎦ = ⎣ 0 2 0 ⎦ ⎣ x2 ⎦ , x˙3 x3 031 ⎡ ⎤ & ) x1 y = 1 1 1 ⎣ x2 ⎦ . x3 a. Show that the system is not observable. b. Show that the system is observable if the output is given by ⎡ ⎤ x1 y1 111 ⎣ ⎦ x2 . = 123 y2 x3 3. ([86]) Define for any nonzero p ⎤ p10 A = ⎣ 0 p 1 ⎦. 00 p ⎡
Show that (A, B) is not controllable for any B = [b1 b2 0]∗ . Find a condition on the observation matrix C such that (A, C) is observable. 4. ([86]) The dynamics of a hot air balloon is modeled by ⎤⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ T k2 0 T −k 0 0 d ⎣ ⎦ ⎣ 1 u σ −k3 0 ⎦ ⎣ v ⎦ + ⎣ 0 k3 ⎦ v = , w dt 0 1 0 h 0 0 h where T is the temperature of the balloon (relative to equilibrium temperature), u is heat added, v is velocity, h is height, and w is wind speed. The parameters k1 , k2 , k3 , and σ are all positive constants.
3.4 State Feedback Control
a. b. c. d. e.
67
If the output y(t) = h(t), is the system observable? If the output y(t) = T (t), is the system observable? Is this system controllable? Is this system controllable by u (heat) only? Why? Is this system controllable by w (wind) only? Why?
5. Show that if an n × n matrix A is positive definite, then there exists a positive constant C such that (Ax, x) ≥ Cx2 for all x ∈ Rn . 6. Show that if a matrix A is positive definite, then it is nonsingular. 7. Prove that rank(AB) ≤ min{rank(A), rank(B)}. 8. Prove that rank(AB) = rank(A) if B is nonsingular.
3.4 State Feedback Control We assume that all state variables are available for feedback and design a control of the form u = −Kx. (3.20) Such a scheme is called a state feedback. The m × n matrix K is called a state feedback gain matrix. Substituting equation (3.20) into (3.1) gives x˙ = (A − BK)x,
x(0) = x0 .
(3.21)
Definition 3.5. The pair (A, B) or the system (3.1) is stabilizable if there exists K such that the solution x(t) of (3.21) converges to zero exponentially as t → ∞ for any initial state x0 . The matrix K is called the feedback matrix. If the pair (A, B) is stabilizable, then the solution x(t) tends to zero. Thus we have that u(t) = −Kx(t) → 0 and y(t) = Cx(t) + Du(t) → 0. Therefore the problem of regulating the output to zero is transformed into the stabilization of the pair (A, B). Example 3.8. Consider the mass-spring system (3.4). Let K = [k1 k2 ]. The gain matrix K stabilizes the equilibrium point 0 if and only if all eigenvalues of the matrix A − BK have negative real parts. Let μ1 , μ2 be the desired eigenvalues. Then we set det(λ I − A + BK) = λ 2 +
k2 k + k1 = (λ − μ1)(λ − μ2 ). λ+ m m
Equating the coefficients gives k1 = mμ1 μ2 − k,
k2 = −m(μ1 + μ2 ).
We then have the following control law u = −k1 x1 − k2x2 = −k1 y − k2y. ˙
68
3 Finite Dimensional Systems
For the observation system, we introduce the output injection system x˙ = Ax − Ky = (A − KC)x,
x(0) = x0 .
(3.22)
Definition 3.6. The pair (A, C) or the system (3.16)-(3.17) is detectable if there exists K such that the solution x(t) of (3.22) converges to zero exponentially as t → ∞ for any initial state x0 . The matrix K is called the output injection matrix. Evidently, (A, C) is detectable if and only if (A∗ , C∗ ) is stabilizable. Consider the control system x˙1 = x1 + x2 + u, x˙2 = x2 . It is clear that this system is not stabilizable since x2 is not controlled by u. We can readily check that the system is not controllable. Thus it can be expected that controllability plays a key role in determining stabilizability of a system. In fact, we now show that if a system is controllable, the eigenvalues of the system can be placed at any location by a feedback. Lemma 3.2. Let B be an n × 1 vector and (A, B) be controllable. Then there exists a nonsingular matrix T that transforms (A, B) into the following controllable canonical form: ⎤ ⎡ ⎡ ⎤ 0 1 0 ··· 0 0 0 ⎥ ⎢ 0 0 1 ··· 0 ⎢0⎥ 0 ⎥ ⎢ ⎢ ⎥ ⎢ 0 0 0 ··· 0 ⎢0⎥ 0 ⎥ ⎥ ⎢ ⎢ ⎥ −1 −1 , T B = ⎢ . ⎥. T AT = ⎢ . ⎥ . . . . .. .. · · · .. .. ⎥ ⎢ .. ⎢ .. ⎥ ⎥ ⎢ ⎢ ⎥ ⎣ 0 0 0 ··· 0 ⎣0⎦ 1 ⎦ −a0 −a1 −a2 · · · −an−2 −an−1 1 Proof. Because (A, B) is controllable, the vectors B, AB, · · · , An−1 B are linearly independent. Let the characteristic polynomial of A be det(sI − A) = sn + an−1sn−1 + · · · + a1s + a0. Define f0 (s) = sn + an−1sn−1 + · · · + a1 s + a0, f1 (s) = sn−1 + an−1sn−2 + · · · + a2s + a1, .. . fn−1 (s) = s + an−1, fn (s) = 1.
3.4 State Feedback Control
69
Then fi satisfies the iteration relation s fi (s) = fi−1 (s) − ai−1 fn (s).
(3.23)
Let vi = fi (A)B,
T = [v1 , · · · , vn ].
Then v1 , · · · , vn are linearly independent and T is a nonsingular matrix. Replacing s by A in (3.23) and operating on B, we obtain Avi = vi−1 − ai−1vn ,
B = vn ,
and then ⎡
0 0 0 .. .
1 0 0 .. .
0 1 0 .. .
··· ··· ···
0 0 0 .. .
0 0 0 .. .
⎤
⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ AT = T ⎢ ⎥, ⎥ ⎢ · · · ⎥ ⎢ ⎣ 0 0 0 ··· 0 1 ⎦ −a0 −a1 −a2 · · · −an−2 −an−1 Example 3.9. Let
A=
Then
12 , 34
B=
⎡ ⎤ 0 ⎢0⎥ ⎢ ⎥ ⎢0⎥ ⎢ ⎥ B = T⎢ . ⎥. ⎢ .. ⎥ ⎢ ⎥ ⎣0⎦ 1
1 . 2
s − 1 −2 2
−3 s − 4 = s − 5s − 2
and f1 (s) = s − 5, Thus v1 = f1 (A)B = (A − 5I)B = and
f2 (s) = 1.
0 , 1
2 5 01 AT = =T , 4 11 25
v2 = B,
T=
01 12
0 T = B. 1
The following lemma [104] indicates that feedback control of a multiple input system can be reduced to a single input system. Lemma 3.3. Let (A, B) be controllable and b = Bu = 0, where u is a vector. Then there exists a matrix F such that (A + BF, b) is controllable. Proof. Let b1 = b, and let n1 be the largest integer such that b1 , Ab1 , · · · , An1 b1 are linearly independent. We can easily show that
70
3 Finite Dimensional Systems
b1 , Ab1 + b1 , · · · , An1 b1 + An1 −1 b1 + · · · + b1 are also linearly independent. If n1 < n, then there exists b2 = Bu2 ∈ / span(b1 , Ab1 , · · · , An1 b1 ). If not, then for any u, we have Bu = c0 b1 + c1 Ab1 + · · · + cn1 An1 b1 ∈ span(b1 , Ab1 , · · · , An1 b1 ). It then follows that Ai Bu ∈ span(b1 , Ab1 , · · · , An1 b1 ) for i = 0, 1, 2, · · ·n. This implies that the rank of [B AB An−1 B] is equal to n1 < n and then contradicts with the controllability assumption. Let n2 be the largest integer such that b2 , Ab2 , · · · , An2 b2 are linearly independent. If n1 + n2 < n, we can repeat the above procedure and obtain n linearly independent vectors: b1 , Ab1 , · · · , An1 b1 ; b2 , Ab2 , · · · , An2 b2 ; .. . bk , Abk , · · · , Ank bk . Define v1 = b1 , vi = Avi−1 + b1 ,
1 < i ≤ n1 ,
vn j +i = Avn j +i−1 + b j+1,
1 ≤ i ≤ n j+1 , 1 ≤ j ≤ k − 1.
We can show that v1 , · · · , vn are linearly independent. Define b¯ i = b1 = Bu1 , u¯ i = u1 , 1 ≤ i ≤ n1 , ¯bn +i = b j+1 = Bu j+1 , u¯ n +i = u j+1 , 1 ≤ i ≤ n j+1 , 1 ≤ j ≤ k − 1. j j Then we have vi = Avi−1 + b¯ i−1 ,
2 ≤ i ≤ n.
Since v1 , · · · , vn are linearly independent, there exists a matrix F such that Fvi = u¯ i . In fact, F is given by
3.4 State Feedback Control
71
F = [u¯ 1 , · · · , u¯ n ][v1 , · · · , vn ]−1 . It therefore follows that vi = Avi−1 + Bu¯ i−1 = Avi−1 + BFvi−1 = (A + BF)i−1 b, 2 ≤ i ≤ n. This implies that the rank of [b (A + BF)b (A + BF)n−1 b] is equal to n and so (A + BF, b) is controllable. Theorem 3.8. A system (A, B) of order n is controllable if and only if for any set S = {p1 , · · · , pn } of complex numbers with the property that if p ∈ S, then p¯ ∈ S, there exists a matrix K such that A − BK has eigenvalues p1 , · · · , pn . Proof. Let λ1 , λ2 , · · · λn be all eigenvalues of A and let the set S = {p1 , · · · pn } have the stated property in the theorem such that S ∩ {λ1 , λ2 , · · · λn } = 0. / Suppose that there exists a matrix K such that A − BK has eigenvalues p1 , · · · pn . We want to prove that rank[(sI − A) B] = n for all complex numbers s. For all λi (i = 1, 2, · · · , n), the matrix λi I− A+ BK is nonsingular. For any x ∈ Rn , we define x¯ = (λi I − A + BK)−1x and u¯ = K(λi I − A + BK)−1x. Then x¯ [(λi I − A) B] = (λi I − A)¯x + Bu¯ u¯ = (λi I − A + BK)(λiI − A + BK)−1x = x. This shows that [(λi I − A) B] has rank n. For all other complex numbers s, the rank of [(sI − A) B] is n since the rank of sI − A is n. Thus, by Theorem 3.5, (A, B) is controllable. Next assume that (A, B) is controllable. Let b = Bv = 0, where v is a vector. Then there exists a matrix F such that (A + BF, b) is controllable. Denote AF = A + BF. Then there exists a nonsingular matrix T that transforms AF and b into controllable canonical form T−1 AF T = Ac , T−1 b = bc , where
⎡
0 0 0 .. .
1 0 0 .. .
0 1 0 .. .
··· ··· ···
0 0 0 .. .
0 0 0 .. .
⎤
⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ Ac = ⎢ ⎥, ⎥ ⎢ · · · ⎥ ⎢ ⎣ 0 0 0 ··· 0 1 ⎦ −a0 −a1 −a2 · · · −an−2 −an−1
⎡ ⎤ 0 ⎢0⎥ ⎢ ⎥ ⎢0⎥ ⎢ ⎥ bc = ⎢ . ⎥ . ⎢ .. ⎥ ⎢ ⎥ ⎣0⎦ 1
72
3 Finite Dimensional Systems
Let the state feedback Kc = [k0 k1 · · · kn−1 ]. Then det(sI − Ac + bc Kc ) ⎡ s −1 0 ⎢ 0 s −1 ⎢ ⎢ 0 0 s ⎢ = det ⎢ . .. . .. ⎢ .. . ⎢ ⎣ 0 0 0 a 0 + k0 a 1 + k1 a 2 + k2
··· ··· ···
0 0 0 .. .
0 0 0 .. .
··· ··· s −1 · · · an−2 + kn−2 s + an−1 + kn−1
⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦
= sn + (an−1 + kn−1)sn−1 + · · · + (a1 + k1 )s + a0 + k0 . Let (s − p1) · · · (s − pn ) = sn + bn−1sn−1 + · · · + b1 s + b0. Choosing k0 , k1 , · · · , kn−1 such that an−1 + kn−1 = bn−1 , · · · , a0 + k0 = b0 , we obtain det(sI − Ac + bc Kc ) = (s − p1) · · · (s − pn ). Since det(sI − Ac + bc Kc ) = det(sI − T−1 AF T + T−1 bKc ) = det[T−1 (sI − AF + bKc T−1 )T] = det(sI − AF + bKc T−1 ), we have and then
det(sI − AF + bKc T−1 ) = (s − p1) · · · (s − pn ), det(sI − A + B(−F + vKcT−1 )) = (s − p1 ) · · · (s − pn ).
For a system (A, B), an eigenvalue of A is called a pole of the system. Thus Theorem 3.8 is referred as the pole placement theorem. The above constructive proofs provide an algorithm for computing the feedback gain matrix of the system (A, B). Since the computation of the feedback F for multiple input systems is complex, we summarize the algorithm for a single input system as follows: 1. Computation of transformation matrix T. det(sI − A) = sn + an−1sn−1 + · · · + a1s + a0, f1 (s) = sn−1 + an−1sn−2 + · · · + a2 s + a1,
3.4 State Feedback Control
73
.. . fn−1 (s) = s + an−1, fn (s) = 1, vi = fi (A)B, i = 1, 2, · · · , n, T = [v1 , · · · , vn ]. 2. Computation of the designated polynomial: (s − p1 ) · · · (s − pn ) = sn + bn−1sn−1 + · · · + b1s + b0. 3. Computation of the gain matrix: K = [b0 − a0 · · · bn−1 − an−1]T−1 . Example 3.10. Let
12 A= , 34
1 B= . 2
Find the feedback gain matrix K such that A − BK has eigenvalues 1 and 2. Solution. The characteristic polynomial is
s − 1 −2 2
−3 s − 4 = s − 5s − 2. So a0 = −2 and a1 = −5. Since (s − 1)(s − 2) = s2 − 3s + 2, we obtain b0 = 2 and b1 = −3. In Example 3.9, we have already obtained 01 −2 1 T= , T−1 = . 12 1 0 So the feedback matrix is K = [4 2]T−1 = [−6 4]. Note that a stabilizable system is not necessarily controllable. For instance, the following simple system x˙1 −1 0 x1 0 = + u(t) 0 −1 x2 1 x˙2 is stabilizable, but not controllable. Using Theorem 3.8, we prove the Popov-Belevitch-Hautus Test for stabilizability.
74
3 Finite Dimensional Systems
Theorem 3.9. (Popov-Belevitch-Hautus Test) The pair (A, B) or the system (3.1) is stabilizable if and only if the matrix [(λ I − A) B] has a rank n for all complex numbers λ with nonnegative real parts. Proof. Assume that the pair (A, B) is stabilizable. If [(λ I − A) B] did not have rank n for some complex number λ with a nonnegative real part, then there would exist a nonzero vector v∗ such that v∗ [(λ I − A) B] = 0. Then and so for any K
λ v∗ = v∗ A,
v∗ B = 0
v∗ (A − BK) = λ v∗ .
This implies that A − BK has the eigenvalue λ with a nonnegative real part. This contradicts with the fact that the pair (A, B) is stabilizable. We next assume that the matrix [(λ I − A) B] has a rank n for all complex numbers λ with nonnegative real parts. If the pair (A, B) was not stabilizable, it follows from Theorem 3.8 that it would not be controllable. Then by Lemma 3.1 there exists a nonsingular matrix P such that A11 A12 B11 −1 −1 P AP = , P B= . 0 A22 0 Because the pair (A, B) was not stabilizable and by Lemma 3.1 the (A11 , B11 ) is controllable, A22 must have an eigenvalue λ with a nonnegative real part. Let v22 be the left eigenvector so that v22 A22 = λ v22 . Then λ I − A11 −A12 B11 [0 v22 ] = 0, 0 λ I − A22 0 and so
[0 v22 ]P−1 [(λ I − A)P B] = 0.
Since P is nonsingular, we derive that [0 v22 ]P−1 (λ I − A) = 0, and then
[0 v22 ]P−1 [(λ I − A) B] = 0.
This contradict the assumption that [(λ I − A) B] has rank n.
3.5 PID Controllers
75
Exercises 3.4 1. Consider the system x˙ = Ax + Bu, where
⎤ 0 1 0 A = ⎣ 0 0 1 ⎦, −1 −5 −6 ⎡
⎡ ⎤ 0 B = ⎣ 1 ⎦. 1
a. Show that the pair (A, B) is controllable. b. Determine the state feedback gain matrix K such that the closed-loop system with the feedback control u = −Kx has the closed-loop poles (eigenvalues) at λ = −2 ± 4i and λ = −10. 2. Consider the system
0 1 x1 1 x˙1 u. = + 0 2 x2 0 x˙2
a. Show that the system is not controllable. b. Show that this system cannot be stabilized by the state feedback control u = −Kx, whatever matrix K is chosen. 3. Consider
34 A= , 56
1 B= . 0
Find the nonsingular matrix T that transforms (A, B) into the controllable canonical form. Then find the feedback gain matrix K such that A− BK has eigenvalues 1 − i and 1 + i. 4. Consider the system x˙ = Ax + Bu, where
1 1 , A= 1 −1
0 . B= 1
a. Show that the pair (A, B) is stabilizable. b. Determine the state feedback gain matrix K such that the closed-loop system with the feedback control u = −Kx has the closed-loop poles (eigenvalues of A − BK) at λ = −1 and λ = −2.
3.5 PID Controllers Control mechanisms most widely used in control engineering are proportionalintegral-derivative (PID) feedback controllers. Consider the mass-spring system my¨ + ky = u(t).
(3.24)
76
3 Finite Dimensional Systems
Given the reference position r, we want to regulate y to r. To achieve this, we try the proportional controller u = −K1 (y − r), (3.25) where K1 is a positive constant, called the control gain. This means that the force applied by the damper is proportional to the difference y − r. Substituting this controller into (3.24), we obtain my¨ + ky = −K1 (y − r). Solving it gives y = c1 cos
rK1 k + K1 k + K1 t + c2 sin t + , m m k + K1
where c1 and c2 are any constants. Hence y(t) does not converge to r as t tends to infinity. To improve the proportional controller, we look at the energy of the system E(t) =
) 1& m(y) ˙ 2 + k(y − r)2 . 2
(3.26)
Using the equation (3.24), we deduce that dE = y˙ (my¨ + k(y − r)) = y(u ˙ − kr). dt This motivate us to take u = kr − K2
dy dt
(3.27)
such that dE ˙ 2 ≤ 0, where K2 is a positive constant. Controller (3.27) is dt = −K2 (y) called a derivative control. Substituting (3.27) into (3.24) gives my¨ + ky = kr − K2
dy . dt
Taking K22 < 4mk and solving the equation, we obtain ⎞ ⎛ ⎡ 4mk − K22 t⎠ y = e−K2t/(2m) ⎣c1 cos ⎝ 2m ⎞⎤ ⎛ 4mk − K22 +c2 sin ⎝ t ⎠⎦ + r, 2m where c1 and c2 are any constants. Thus, y(t) converges to r, y(t) ˙ converges to 0, and u(t) converges to kr. The limit kr is called a steady state of control.
3.5 PID Controllers
77
Since the system parameter k (the stiffness of the spring) in controller (3.27) may not be estimated accurately, the controller may not be robust to the perturbation of the parameter. To design a controller that does not involve system parameters, we use an integral control to replace the control steady state kr as follows: u = −K2
d (y − r) − K3 dt
t 0
(y(s) − r)ds,
(3.28)
where K3 is a positive constant. This controller is called the integral-derivative (ID) feedback controller. The integral accumulates the error y − r(t) and converges to the control steady state kr. Substituting (3.28) into (3.24) gives d my¨ + ky = −K2 (y − r) − K3 dt
t 0
(y(s) − r)ds.
Differentiating the equation gives that ... m y + K2 y¨ + ky˙ + K3 y = K3 r.
(3.29)
This equation has a particular solution y p = r. Let yc be the general solution of the associate homogeneous equation ... m y + K2 y¨ + ky˙ + K3 y = 0. Then y = yc + r. The characteristic polynomial of the homogeneous equation is mλ 3 + K2 λ 2 + kλ + K3 = 0. The Routh’s table for the equation is
λ3 : λ2 : λ: λ0 :
m K2 K2 k−mK3 K2
k K3 0
K3
By Routh-Hurwitz’s stability criterion (Theorem 3.2), it follows that if K2 >0, K3 > 0, and mK3 < K2 k, then all zeros of the characteristic polynomial have negative real parts. Hence yc (t) converges to zero as t → ∞ and then y(t) converges to r. In practice, we usually include a proportional controller to improve a performance. Combining these controllers together, we obtain the proportional-integralderivative (PID) controller u = −K1 (y − r) − K2
d (y − r) − K3 dt
t 0
(y(s) − r)ds.
(3.30)
78
3 Finite Dimensional Systems
Exercises 3.5 1. Consider the mass-spring system my¨ + ky = u(t), and the periodic reference output r(t) = sin ω t, where ω is a constant. For ω = k/m, derive conditions on gains K1 , K2 such that the following integral-derivative controller u(t) = −K1 (y − r) − K2
t 0
(y(s) − r(s))ds
tracks the reference r, that is, y(t) − r(t) converges to 0 as t → ∞. If ω = does the controller track the reference r? 2. Consider the third-order system
k/m,
d3y + y = u(t). dt 3 Design a derivative controller to stabilize the equilibrium 0.
3.6 Integral State Feedback Control Let us first look at the mass-spring system 0 1 x1 0 x˙1 = + 1 u(t), x˙2 − mk 0 x2 m & ) x1 . y= 10 x2
(3.31) (3.32)
Suppose that the reference output r is not zero. Then the system has the equilibrium x¯1 = r, x¯2 = 0, y¯ = r, u¯ = kr. To change the non-zero reference output to the zero reference output, we introduce new variables: w1 = x1 − x¯1 , w2 = x2 , z = y − r, v = u − kr.
3.6 Integral State Feedback Control
79
It then follows from (3.31) and (3.32) that 0 1 w1 0 w˙ 1 = + 1 v(t), w˙ 2 − mk 0 w2 m & ) w1 . z= 10 w2 From Example 3.8 we know that the state feedback control v = −Kw = −k2 w2 regulates z to 0, where k2 > 0. Then the state feedback control u = kr + v = kr − k2 x2 regulates y to r. So if r = 0, the control contains the system parameter k. In practice, the estimate of a system parameter is approximate. Hence such a control containing system parameters may not be robust against estimate errors of parameters. To see this, suppose that u¯ is calculated by using an approximate value k0 of k. Then u¯ = k0 r. It therefore follows that the equilibrium point of the closed-loop system is given by x¯2 = 0 and k k0 r = 0. − x¯1 + m m Solving it for x¯1 gives k0 r
= r, x¯1 = k and then the desired state r is not achieved. In the state equation we used the exact value k, not k0 , because the mass-spring system is a fixed internal system and the states x1 and x2 are generated by the system. In the controller we used k0 because the control signal is generated by the external controller that uses the estimate k0 . Consider the control system x˙ = Ax + Bu, y = Cx,
x(0) = x0 ,
(3.33) (3.34)
where A, B, C are n × n, m × n, and m × n matrices, respectively. To design a new feedback control that does not contain system parameters, we need to add the integrator z˙ = y − r (3.35) to the above system. This integrator accumulates the regulation errors y − r and plays a role of the steady state u¯ = kr of control. Before we design a feedback control for the augmented system (3.33)-(3.35), we first present the following lemma that will play an important role in the design of integral control. Lemma 3.4. Let A, B, C be n × n, n × m and m × n matrices, respectively. If (A, B) is controllable and
80
3 Finite Dimensional Systems
rank then
AB = n + m, C0
(3.36)
A0 B , C0 0
is also controllable. Thus the regulator poles of ) A0 B & K1 K 2 − C0 0 can be placed at any locations by choosing the matrices K1 , K2 . Proof. By PBH test (Theorem 3.5), it suffices to prove A0 B rank sI − = m + n. C0 0 If s = 0, this is true because of (3.36). If s = 0, we also have sI − A B 0 B A0 = rank rank sI − C 0 sI 0 C0 = m + rank[sI − A B] = m + n, because (A, B) is controllable. Theorem 3.10. Let A, B, C be n × n, n × m, and m × n matrices, respectively. If (A, B) is controllable and (3.36) holds, then there exist the gain matrices K1 , K2 such that the integral state feedback control u = −K1 x − K2z, z˙ = e = y − r
(3.37) (3.38)
regulates the output y of the control system (3.33)-(3.34) to r. Proof. By Lemma 3.4, there exist matrices K1 , K2 such that ) A0 B & K1 K2 − C0 0 is Hurwitz. By (3.36), we deduce that the steady state system ¯ 0 = A¯x + Bu,
(3.39)
r = C¯x
(3.40)
has a unique solution. We introduce the integrator z˙ = y − r = C(x − x¯ ).
3.6 Integral State Feedback Control
81
We then consider a feedback control law of the form u = −K1x − K2z.
(3.41)
Note that K2 is nonsingular since ) A0 B & K1 K2 − C0 0 is Hurwitz and then nonsingular. In fact, if K2 is singular, then the rank of K2 is less than m, and so the rank of BK2 is less than m. It therefore follows that the rank of ) A0 B & A − BK1 −BK2 K1 K2 = − C0 0 C 0 is less than m + n. This contradicts with the nonsingularity of the matrix. Thus the equation (3.42) u¯ = −K1 x¯ − K2 z¯ has a unique solution z¯ . Subtracting (3.39) from (3.33) gives d x − x¯ A 0 x − x¯ = C0 z − z¯ dt z − z¯ & ) x − x¯ B K1 K2 . − z − z¯ 0
(3.43)
This system is exponentially stable. Then x(t) converges to its equilibrium x¯ . This implies that the controlled output y(t) = Cx(t) of the control system (3.33)-(3.34) converges to r = C¯x as t tends to infinity. Therefore, we have proved the theorem. The equation (3.42) is the key to removing the control steady state u¯ from the control laws designed in previous sections to make the new control law here robust. In fact, if we did not introduce the integrator, we would not have the term K2 z¯ and then we could not find a matrix K1 such that u¯ = −K1 x¯ . Therefore we could not obtain the stable system (3.43). Instead, we would have d (x − x¯ ) = A(x − x¯ ) + B(u − u¯ ). dt
(3.44)
The feedback control that stabilizes this system is u − u¯ = −K1 (x − x¯ ). and then u = −K1 x + u¯ + K1 x¯ . ¯ which usually depends on sysThis control law contains the control steady state u, tem parameters as shown in the mass-spring-damper system, and then it is not robust.
82
3 Finite Dimensional Systems
Example 3.11. We now design a robust integral controller for the mass-springdamper system. It is clear that the matrix ⎡ ⎤ 0 1 0 AB = ⎣ − mk 0 m1 ⎦ C0 1 0 0 has rank 3. Let K1 = [k1 , k2 ] and K2 = k3 . A calculation gives
λ I − A 0 + B [K1 K2 ] = λ 3 + k2 λ 2 + (k + k1)λ + k3 .
C0 0 m m Using Routh-Hurwitz’s criterion, we find that if k1 >
k3 − kk2 , k2
k2 > 0,
k3 > 0,
then all eigenvalues have negative real parts. Therefore, the integral feedback control u = −k1 y − k2y˙ − k3 z, z˙ = e = y − r
(3.45) (3.46)
regulates the output y to the desired value r. Since this controller does not contain any system parameters, it is robust. To see that this integral control (3.45) is the same as the PID control proposed in Section 3.5, we take the initial condition of z to be z(0) = − kk13r . Integrating equation (3.46) and then plugging z into (3.45), we obtain t k1 r u = −k1 y − k2y˙ − k3 − + (y(s) − r)ds k3 0 t d = −k1 (y − r) − k2 (y − r) − k3 (y(s) − r)ds, dt 0 which is the same as the PID control proposed in Section 3.5.
Exercises 3.6 1. Consider the system
x˙1 x˙2
0 −2 x1 2 + u, 1 −1 x2 0 & ) x1 . y= 11 x2 =
3.7 Output Feedback Control
83
Design an integral feedback control to regulate the output y to a reference output 1. 2. Consider the mass-spring-damper system. If the output is the velocity y = x2 , then C = [0, 1] and the matrix ⎤ ⎡ 0 1 0 AB = ⎣ − mk 0 m1 ⎦ C0 0 1 0 has the rank 2. a. Can we design a state feedback control to regulate the output y to a nonzero reference output r? b. Can we design an integral feedback control to regulate the output y to a nonzero reference output r?
3.7 Output Feedback Control In the state feedback control, we assumed that all state variables are available for feedback. However, in practice, this is not always true. We then need to estimate the state variables using only input and output measurements. Such an estimation scheme is commonly called a state observer.
3.7.1 Observer-based Output Feedback Control Consider the control system x˙ = Ax + Bu, y = Cx, x(0) = x0 .
(3.47) (3.48) (3.49)
The estimate x˜ of x can be generated by injecting the output into the system as follows x˙˜ = A˜x + Bu + Ke(y − C˜x).
(3.50)
This output injection system is called the state observer, known as the Luenberger observer. The mathematical model of the observer is basically the same as the control system except that the estimation error, the difference between the measured output y and the estimated output C˜x, is injected to compensate for inaccuracies in matrices A, B and the initial state x0 . The matrix Ke is the output injection matrix (also called an observer gain matrix). We then use the estimate x˜ for feedback and
84
3 Finite Dimensional Systems
introduce the observer-based output feedback control u = −K˜x,
(3.51)
which leads to an observer-based output feedback control system x˙ = Ax − BK˜x,
(3.52)
y = Cx, x˙˜ = A˜x − BK˜x + Ke (y − C˜x).
(3.53) (3.54)
Theorem 3.11. If the pair (A, B) is controllable and the pair (A, C) is observable, then for any given closed-loop poles Sλ = {λ1 , · · · , λn } and any observer poles S μ = { μ1 , · · · , μn } with the property that if λ ∈ Sλ (μ ∈ S μ ) then λ¯ ∈ Sλ (μ¯ ∈ S μ ), there exist matrices K and Ke such that the eigenvalues of A − BK is equal to λ1 , · · · , λn and the eigenvalues of A− Ke C is equal to μ1 , · · · , μn . If these poles are chosen such that they have negative real parts, then the observer-based output feedback control (3.51)-(3.54) regulates the output y of the system (3.52)-(3.54) to zero. Proof. We introduce the error vector e = x − x˜ . Subtracting equation (3.54) from equation (3.52) gives e˙ = Ae − Ke (y − C˜x) = Ae − Ke (Cx − C˜x) = (A − Ke C)e. Also the state equation (3.52) can be written as x˙ = (A − BK)x + BKe. Combining these two equations, we obtain x˙ A − BK BK x = . e˙ 0 A − Ke C e
(3.55)
Since (A, B) is controllable, there exists a gain matrix K such that the eigenvalues of A − BK are equal to λ1 , · · · , λn . Moreover, since the pair (A, C) is observable, it follows from Theorem 3.7 that the pair (A∗ , C∗ ) is controllable. Then there exists a matrix G such that the eigenvalues of A∗ − C∗ G are equal to μ1 , · · · , μn . But A − G∗ C has the same eigenvalues as A∗ − C∗ G. Let Ke = G∗ . Then A − Ke C has eigenvalues μ1 , · · · , μn . Since A − BK BK det λ I − 0 A − Ke C = det (λ I − [A − BK]) det (λ I − [A − Ke C]) , the matrix
3.7 Output Feedback Control
85
A − BK BK 0 A − Ke C
has eigenvalues λ1 , · · · , λn ; μ1 , · · · , μn . If these eigenvalues have negative real parts, then x(t) converges to zero exponentially and then so y(t) = Cx(t) does. Example 3.12. Consider the system x˙ = Ax + Bu, y = Cx, where
0 1 A= , 0 −2
0 B= , 4
& ) C= 1 0 .
Design an output feedback control by observer approach such that the desired closed-loop poles are located at √ √ λ = −2 + 2 3i, −2 − 2 3i and the desired observer poles are located at
λ = −8, −8. The feedback gain matrix K = [k1 , k2 ] should be selected such that √ √ det(λ I − A + BK) = (λ + 2 − 2 3i)(λ + 2 + 2 3i), and then
λ 2 + (2 + 4k2)λ + 4k1 = λ 2 + 4λ + 16.
Comparing the coefficients gives k1 = 4,
k2 = 0.5.
The observer gain matrix Ke = [k1e , k2e ]∗ should be selected such that det(λ I − A + KeC) = (λ + 8)2 , and then
λ 2 + (2 + k1e )λ + 2k1e + k2e = λ 2 + 16λ + 64.
Comparing the coefficients gives k1e = 14, Then the output feedback control is
k2e = 36.
86
3 Finite Dimensional Systems
x˜˙1 x˙˜2
u = −4x˜ − 0.5x˜ , 1 2 0 1 x˜1 0 x˜ 14 = − [4 0.5] 1 + (x1 − x˜1 ) 0 −2 x˜2 4 36 x˜2 0 1 14 x˜1 = + (x1 − x˜1 ). −16 −4 x˜2 36
3.7.2 Integral Output Feedback Control As in the case of state feedback control, if the reference output is not zero, the above designed output feedback control for the system (3.47)-(3.49) may contain system parameters and then is not robust. To see this, we suppose that the reference output r is not zero and the steady state system ¯ 0 = A¯x + Bu,
(3.56)
r = C¯x
(3.57)
has a solution (solutions of this system may not be unique). Subtracting (3.56) from (3.47) gives d (x − x¯ ) = A(x − x¯ ) + B(u − u¯ ). dt
(3.58)
Let the estimate x˜ of x be generated by the observer (3.50) and consider the output feedback control of the form u − u¯ = −K1 (˜x − x¯ ). It then follows from (3.58) that d (x − x¯ ) = A(x − x¯ ) − BK1 (˜x − x¯ ) dt = A(x − x¯ ) − BK1 (˜x − x + x − x¯ ) = (A − BK1)(x − x¯ ) + BK1 e,
(3.59)
where e = x − x˜ . Subtracting (3.50) from (3.47) gives e˙ = (A − Ke C)e.
(3.60)
If the pair (A, B) is controllable and the pair (A, C) is observable, Theorem 3.11 shows that there exist matrices K1 and Ke such that the system (3.59)-(3.60) is exponentially stable. Now the feedback control u = −K1 x˜ + u¯ + K1 x¯ .
3.7 Output Feedback Control
87
¯ which usually depends on system parameters as contains the control steady state u, shown in the mass-spring-damper system. Hence it is not robust. To design a robust feedback control, as in the case of integral state feedback control, we need to add the integrator z˙ = y − y¯ = C(x − x¯ )
(3.61)
to the above system, where y¯ = C¯x. Theorem 3.12. Let A, B, C be n × n, n × m, and m × n matrices, respectively. Let r be the reference output such that the steady state system ¯ 0 = A¯x + Bu,
(3.62)
r = C¯x
(3.63)
has a solution. Denote y¯ = C¯x. If (A, B) is controllable, (A, C) is observable, and AB = n + m, (3.64) rank C0 then there exist the matrices K1 , K2 , Ke such that the integral output feedback control u = −K1 x˜ − K2 z, z˙ = e = y − y¯
(3.65) (3.66)
regulates the controlled output y of the control system (3.47)-(3.50) to r. Proof. By Lemma 3.4, there exist matrices K1 , K2 such that ) A0 B & K1 K2 − C0 0 is Hurwitz. Consider an output feedback control of the form u = −K1x˜ − K2z.
(3.67)
As in the proof of Theorem 3.10, the equation u¯ = −K1 x¯ − K2 z¯
(3.68)
has a unique solution z¯ . Subtracting (3.68) from (3.67) gives u − u¯ = −K1 (˜x − x¯ ) − K2(z − z¯ ). Subtracting (3.62) from (3.47) and using (3.69), we obtain
(3.69)
88
3 Finite Dimensional Systems
d (x − x¯ ) = A(x − x¯ ) − BK1 (˜x − x¯ ) − BK2(z − z¯ ) dt = A(x − x¯ ) − BK1 (˜x − x + x − x¯ ) − BK2(z − z¯ ) = (A − BK1 )(x − x¯ ) + BK1 e − BK2(z − z¯ ). It then follows from (3.60), (3.61), and (3.70) that ⎤⎡ ⎤ ⎡ ⎤ ⎡ x − x¯ x − x¯ A − BK1 −BK2 BK1 d ⎣ ⎦ ⎣ z − z¯ ⎦ . z − z¯ ⎦ = ⎣ C 0 0 dt e e 0 0 A − Ke C
(3.70)
(3.71)
Because (A, C) is observable, there exists a gain matrix Ke such that A − Ke C is Hurwitz. Since ⎡ ⎤⎞ ⎛ A − BK1 −BK2 BK1 ⎦⎠ C 0 0 det ⎝λ I − ⎣ 0 0 A − Ke C A − BK1 −BK2 = det λ I − det(λ I − [A − Ke C]) , C 0 the system (3.71) is exponentially stable. Then x(t) converges to its equilibrium x¯ . This implies that the output y(t) = Cx(t) of the control system (3.47)-(3.49) converges to r = C¯x as t tends to infinity. Therefore we have proved the theorem. Example 3.13. We now design a robust integral output controller for the massspring-damper system 0 1 x1 0 x˙1 = + 1 u(t), x˙2 − mk 0 x2 m & ) x1 . y = y= 1 0 x2 It is clear that the matrix
⎡ ⎤ 0 1 0 AB = ⎣ − mk 0 m1 ⎦ C0 1 0 0
has the full rank 3. Let K1 = [k1 , k2 ] and K2 = k3 . A calculation gives
λ I − A 0 + B [K1 K2 ] = λ 3 + k2 λ 2 + (k + k1)λ + k3 .
C0 0 m m Using Routh-Hurwitz’s criterion, we find that if k1 >
k3 − kk2 , k2
k2 > 0,
k3 > 0,
then all eigenvalues have negative real parts. Let Ke = [k1e , k2e ]∗ . Then
3.7 Output Feedback Control
89
|λ I − A + KeC| = λ 2 + k1e λ +
k + k2e . m
So if k1e > 0 and k2e > mk , the eigenvalues have negative real parts. Therefore, the integral output feedback control u = −k1 x˜1 − k2x˜2 − k3z, ˙x˜1 = x˜2 + k1e (x1 − x˜1), k + k1 k2 x˜1 − x˜2 + k2e (x1 − x˜1), x˙˜2 = − m m z˙ = x1 − r regulates the output y = x1 to the desired value r.
Exercises 3.7 1. Consider the system x˙ = Ax + Bu, y = Cx, where
⎡
⎤ 0 1 0 A = ⎣ 0 0 1⎦, −5 −6 0
⎡ ⎤ 0 B = ⎣ 0⎦, 1
& ) C= 100 .
Design a state observer, assuming that the desired poles for the observer are located at λ = −10, −10, −15. 2. Consider the system x˙ = Ax + Bu, y = Cx, where
⎤ 0 1 0 A = ⎣ 0 0 1 ⎦, −6 −11 −6 ⎡
⎡ ⎤ 0 B = ⎣ 0 ⎦, 1
& ) C= 100 .
Design an output feedback control by observer approach such that the desired closed-loop poles are located at
λ = −1 + i, −1 − i, −5 and the desired observer poles are located at
90
3 Finite Dimensional Systems
λ = −6, −6, −6. 3. Consider the mass-spring-damper system. If the measured output y = x2 , then C = [0, 1] and the matrix ⎤ ⎡ 0 1 0 AB = ⎣ − mk 0 m1 ⎦ C0 0 1 0 has the rank 2. Discuss whether we can design an integral output feedback control to regulate the output y to a reference output r.
3.8 Optimal Control on Finite-time Intervals Consider the control problem x˙ = Ax + Bu(t), y = Cx.
x(0) = x0 ,
(3.72) (3.73)
If (A, B) is controllable, then there exist infinite feedback gain matrices K to stabilize the system. Which one is best? To answer the question, we first need to set up a criterion. Thus we define the quadratic performance criterion J(u, T ; x0 ) = = =
T' 0
T 0
T 0
( |Cx(t)|2 + |Du(t)|2 dt + x(T )∗ Gx(T )
(x(t)∗ C∗ Cx(t) + u(t)∗D∗ Du(t)) dt + x(T )∗ Gx(T ) (x(t)∗ Qx(t) + u(t)∗Ru(t)) dt + x(T )∗ Gx(T ),
(3.74)
where G is a nonnegative matrix and Q = C∗ C,
R = D∗ D.
The optimal control problem is to minimize the functional J over L2 ((0, T ), Rm ): min
u∈L2 ((0,T ),Rm )
J(u, T ; x0 ).
The function uopt such that J(uopt , T ; x0 ) =
min
u∈L2 ((0,T ),Rm )
J(u, T ; x0 )
(3.75)
3.8 Optimal Control on Finite-time Intervals
91
is called an optimal control. The first term |Cx(t)|2 in the criterion means that the optimal control drives the output Cx(t) as close to the desired output 0 as possible in the average sense (same meaning for the last term x(T )∗ Gx(T )) and the second term |Du(t)|2 means that the control effort is minimized in the average sense. Example 3.14. Consider the control system x˙ = Ax + Bu, y = Cx,
where A=
01 , 00
B=
0 , 1
C=
10 . 01
The performance criterion is defined by J(u, T ; x0 ) = =
T'
( |Cx(t)|2 + |u(t)|2 dt
0
T'
( |x1 (t)|2 + |x2 (t)|2 + |u(t)|2 dt,
0
where D = [1] .
3.8.1 Existence and Uniqueness To solve the optimal control problem (3.75), we need the concept of weak convergence in a Hilbert space. Definition 3.7. Let H be a Hilbert space. A sequence {xn } ⊂ H converges weakly to x ∈ H if lim (xn − x, y) = 0 for any y ∈ H. n→∞
We denote xn x. Consider the sequence {sin nπ x} ⊂ L2 (0, 1). By Theorem 2.5, any function f ∈ has the expansion
L2 (0, 1)
∞
f (x) =
∑ cn sin nπ x,
n=1
where cn = 2
1 0
f (x) sin nπ xdx.
It therefore follows that f 2L2 =
1 ∞ 2 ∑ cn , 2 n=1
92
3 Finite Dimensional Systems
and then
1
lim
n→∞ 0
f (x) sin nπ xdx = lim cn /2 = 0. n→∞
1
Thus sin nπ x converges weakly to 0. Since 0 | sin nπ x|2 dx = 1/2, sin nπ x does not converge strongly to 0. So the weak convergence does not imply the strong convergence. But the strong convergence does imply the weak convergence. Theorem 3.13. If a sequence {xn } ⊂ H converges strongly to x ∈ H, then it converges weakly to x ∈ H. Proof. For any y ∈ H, it follows from Cauchy-Schwarz inequality (2.7) that |(xn − x, y)| ≤ xn − xy, which implies that {xn } converges weakly to x ∈ H. The following well-known compactness theorem plays an important role in dealing with minimization problems. Theorem 3.14. Let H be a Hilbert space. If a sequence {xn } ⊂ H is bounded, then there exists a subsequence {xni } such that {xni } weakly converges to x ∈ H as i → ∞. The proof of this theorem is difficult and referred to [34] or [105]. We use the sequence {sin nπ x} ⊂ L2 (0, 1) to illustrate the theorem. Since 01 | sin nπ x|2 dx = 1/2, the sequence is bounded. In the above, we have proved that sin nπ x converges weakly to 0, but does not converge strongly to 0. Lemma 3.5. Let x˙ n = Axn + Bun (t), xn (0) = x0 , and x˙ = Ax + Bu(t), x(0) = x0 . If
T
lim
n→∞ 0
then
T
lim
n→∞ 0
un (t) · v(t)dt = xn (t) · y(t)dt =
Proof. A direct calculation gives
T 0
T 0
u(t) · v(t)dt,
∀v ∈ L2 (0, T ; Rm ),
x(t) · y(t)dt,
∀y ∈ L2 (0, T ; Rn ).
3.8 Optimal Control on Finite-time Intervals
T
lim
n→∞ 0
93
xn (t) · y(t)dt
T
e x0 + e Bun (s)ds · y(t)dt = lim n→∞ 0 0 T T T = eAt x0 · y(t)dt + lim eA(t−s) Bun (s) · y(t)dtds n→∞ 0 0 s T T T ∗ eAt x0 · y(t)dt + lim B∗ eA (t−s) y(t) · un (s)dtds = n→∞ 0 0 s T T T ∗ = eAt x0 · y(t)dt + B∗ eA (t−s) y(t) · u(s)dtds 0 0 s T t = eAt x0 + eA(t−s) Bu(s)ds · y(t)dt 0
=
T 0
t
At
A(t−s)
0
x(t) · y(t)dt.
Consider
xn (t) = x¯n (t) +
Then the solution
n , n+1
xn (t) = x0 e−t +
xn (0) = x0 .
n (1 − e−t ) n+1
converges to x(t) = x0 e−t + (1 − e−t ), which is the solution of x (t) = x(t) ¯ + 1,
xn (0) = x0 .
We now prove that there is a unique optimal control of the problem (3.75). Theorem 3.15. Assume that R is positive definite. For every initial condition, there is a unique optimal control such that J(uopt , T ; x0 ) =
min
u∈L2 ((0,T ),Rm )
J(u, T ; x0 ).
Proof. Existence. Let un ∈ L2 ((0, T ), Rm ) be a minimizing sequence such that lim J(un , T ; x0 ) =
n→∞
min
u∈L2 ((0,T ),Rm )
J(u, T ; x0 ).
Since R is positive definite, there exists a C > 0 such that un 2L2 ≤ CJ(un , T ; x0 ) ≤ M. Then there is a subsequence, still denoted by itself, such that T
lim
n→∞ 0
un (t) · v(t)dt =
T 0
uopt (t) · v(t)dt,
∀v ∈ L2 (0, T ; Rm ),
94
3 Finite Dimensional Systems
and
T
lim
n→∞ 0
xn (t) · z(t)dt =
T 0
xopt (t) · z(t)dt,
∀z ∈ L2 (0, T ; Rn ),
where xn and xopt are solutions of x˙ n = Axn + Bun (t), xn (0) = x0 , and x˙ opt = Axopt + Buopt (t), xopt (0) = x0 . A direct calculation gives J(un , T ; x0 ) = =
T 0
T 0
(xn (t)∗ Qxn (t) + un (t)∗ Run (t)) dt + xn(T )∗ Gxn (T ) ([xn (t) − xopt (t) + xopt (t)]∗ Q[xn (t) − xopt (t) + xopt (t)]
+[un (t) − uopt (t) + uopt (t)]∗ R[un (t) − uopt (t) + uopt (t)]) dt +[xn (T ) − xopt (T ) + xopt (T )]∗ G[xn (T ) − xopt (T ) + xopt (T )] = J(un − uopt , T ; 0) + J(uopt , T ; x0 ) +2
T 0
([xn (t) − xopt (t)]∗ Qxopt (t) + [un (t) − uopt (t)]∗ Ruopt (t)) dt
+2[xn (T ) − xopt (T )]∗ Gxopt (T ) ≥ J(uopt , T ; x0 ) +2 +2
T 0
T 0
([xn (t) − xopt (t)]∗ Qxopt (t) + [un (t) − uopt (t)]∗ Ruopt (t)) dt [eA(t−r) B(un (r) − uopt (r))]∗ Gxopt (T )dr
= J(uopt , T ; x0 ) +2 +2
T 0
T 0
([xn (t) − xopt (t)]∗ Qxopt (t) + [un (t) − uopt (t)]∗ Ruopt (t)) dt [un (r) − uopt (r)]∗ [eA(t−r) B]∗ Gxopt (T )dr.
It then follows that min
u∈L2 ((0,T ),Rm )
J(u, T ; x0 )
= lim J(un , T ; x0 ) n→∞
3.8 Optimal Control on Finite-time Intervals
95
≥ J(uopt , T ; x0 ) +2 lim
T
n→∞ 0
+ lim 2
T
n→∞
0
([xn (t) − xopt (t)]∗ Qxopt (t) + [un (t) − uopt (t)]∗ Ruopt (t)) dt [un (r) − uopt (r)]∗ [eA(t−r) B]∗ Gxopt (T )dr
= J(uopt , T ; x0 ) ≥
min
u∈L2 ((0,T ),Rm )
J(u, T ; x0 ).
Uniqueness. Assume that there are two optimal controls u1 , u2 . Using the inequality a+b 2 1 2 < (a + b2), ∀a = b, 2 2 we can show that 1 1 J(u1 , T ; x0 ) + J(u2 , T ; x0 ) 2 2 J(u, T ; x0 ). = min
J((u1 + u2 )/2, T ; x0 )
0. For every initial condition x0 , there is a control uT ∈ L2 ((0, T ), Rm ) such that the solution of x˙ T = AxT + BuT (t), xT (0) = x0 satisfies that
102
3 Finite Dimensional Systems
xT (T ) = 0.
Define u(t) =
uT (t), 0 ≤ t ≤ T, 0, t > T.
Then the solution of x˙ = Ax + Bu(t), x(0) = x0 is given by
x(t) =
xT (t), 0 ≤ t ≤ T, 0, t > T,
and so x ∈ L2 ((0, ∞), Rn ). It can be expected that the optimal control problem (3.96) over an infinite time interval would be the limit of the optimal control problem (3.75) over a finite time interval as T → ∞ in some sense. Thus we further study the differential Riccati equation and investigate the asymptotic behavior of its solution as t → ∞. Lemma 3.6. The differential Riccati equation
Π = Π A + A∗ Π + Q − Π BR−1 B∗ Π , Π (0) = G.
(3.97) (3.98)
has a unique continuous symmetric nonnegative solution satisfying min
u∈L2 ((0,t),Rm )
J(u,t; x0 ) = x∗0 Π (t)x0 .
(3.99)
If G = 0, then Π is increasing, that is x∗0 Π (t1 )x0 ≤ x∗0 Π (t2 )x0 for all x0 and t1 ≤ t2 . Proof. Define
ΠT (t) = P(T − t),
where P is the solution of the differential Riccati equation (3.89). It is clear that ΠT (t) is a solution of (3.97) and the equation (3.99) follows from Theorem 3.19. Uniqueness of the solution can be established by standard results in ordinary differential equations. To show that ΠT is independent of T , we take any T1 ≤ T2 . Since the solution is unique, we have ΠT1 (t) = ΠT2 (t), 0 ≤ t ≤ T1 . Now assume that G = 0. Then, for any x0 and t1 ≤ t2 , we have
3.9 Optimal State Feedback Controllers
103
x∗0 Π (t1 )x0 = ≤ = Consider
min
J(u,t1 ; x0 )
min
J(u,t2 ; x0 )
u∈L2 ((0,t1 ),Rm ) u∈L2 ((0,t2 ),Rm ) x∗0 Π (t2 )x0 .
x (t) = x(t) + u(t),
x(0) = x0
with the performance criterion J(u, x0 ) =
∞ 0
[3x2 (t) + u2(t)]dt.
Here G = 0, Q = 3, R = 1. The differential Riccati equation is p (t) − 2p(t) + p2(t) − 3 = 0,
p(0) = 0,
which has a unique positive increasing solution p(t) =
3(1 − e−4t ) . 1 + 3e−4t
The limit of the differential Riccati equation leads to algebraic Riccati equations. Lemma 3.7. If the optimal control problem (3.96) is well posed, then the algebraic Riccati equation PA + A∗P + Q − PBR−1 B∗ P = 0 has a symmetric nonnegative solution given by P = lim Π (t), t→∞
where Π is the solution of the differential Riccati equation (3.97). ¯ x0 ) < ∞. Since x∗0 Π (t)x0 is inProof. Assume u¯ ∈ L2 ((0, ∞), Rm ) such that J(u; creasing and x∗0 Π (t)x0 = the limit
min
u∈L2 ((0,t),Rm )
¯ x0 ) < ∞ J(u,t; x0 ) ≤ J(u;
lim x∗ Π (t)x0 t→∞ 0
exists. Taking limit in the differential Riccati equations, we can show that the limit is the required solution of the algebraic Riccati equation.
104
3 Finite Dimensional Systems
Example 3.17. Consider the system x˙ = Ax + Bu, y = Cx,
where A=
01 , 00
B=
0 , 1
(3.100) (3.101) C=
10 . 01
The performance criterion is defined by J(u, x0 ) = =
∞'
( |Cx(t)|2 + |u(t)|2 dt
0 ∞ '
( |x1 (t)|2 + |x2 (t)|2 + |u(t)|2 dt.
0
Thus D = [1] . The corresponding Riccati equation is p11 p12 00 p p 01 + 11 12 10 p12 p22 p12 p22 00 & ) 0 p11 p12 10 00 p p [1] 0 1 + = . − 11 12 1 01 00 p12 p22 p12 p22 This equation gives three equations: 1 − p212 = 0, p11 − p12 p22 = 0, 1 + 2p12 − p222 = 0. Solving the system, we obtain the positive definite symmetric matrix √ 3 √1 . P= 3 1 Note that this Riccati equation has other complex solutions. As in the proof of the identity (3.91), we can prove the following identity. Lemma 3.8. Let P be a symmetric solution of the algebraic Riccati equation PA + A∗P + Q − PBR−1 B∗ P = 0. Then t 0
(x(s)∗ Qx(s) + u(s)∗ Ru(s)) dt
= x∗0 Px0 − x(t)∗ Px(t) +
t 0
(3.102)
[u(s) + R−1 B∗ Px(s)]∗ R[u(s) + R−1 B∗ Px(s)]ds.
3.9 Optimal State Feedback Controllers
105
We are now ready to derive the optimal state feedback controller. Theorem 3.21. If the optimal control problem (3.96) is well posed, then there is a unique optimal control uopt (t) = −R−1 B∗ Px(t) ∈ L2 ((0, ∞), Rm ) such that J(uopt ; x0 ) =
min
u∈L2 ((0,∞),Rm )
J(u; x0 ) = x∗0 Px0 ,
where P is the minimal nonnegative solution of the algebraic Riccati equation. By minimal we mean that any other symmetric solution P1 satisfies x∗0 Px0 ≤ x∗0 P1 x0 ,
∀x0 ∈ Rn .
¯ x0 ) < ∞. For any t ≥ 0, Proof. Existence. Assume u¯ ∈ L2 ((0, ∞), Rm ) such that J(u; we have ¯ x0 ) ∞ > J(u; ≥ ≥ = =
min
J(u; x0 )
min
J(u,t; x0 )
min
J(u,t; x0 )
u∈L2 ((0,∞),Rm ) u∈L2 ((0,∞),Rm ) u∈L2 ((0,t),Rm ) x∗0 Π (t)x0 .
Taking limit gives min
u∈L2 ((0,∞),Rm )
J(u; x0 ) ≥ x∗0 Px0 .
(3.103)
On the other hand, it follows from (3.102) that J(u,t; x0 ) = =
t
(x(s)∗ Qx(s) + u(s)∗ Ru(s)) ds
0 x∗0 Px0 − x(t)∗Px(t) t −1 ∗
+
0
[u(s) + R B Px(s)]∗ R[u(s) + R−1 B∗ Px(s)]ds
≤ x∗0 Px0 + Taking we obtain and then
t 0
[u(s) + R−1 B∗ Px(s)]∗ R[u(s) + R−1 B∗ Px(s)]ds.
u(s) = uopt (s) = −R−1 B∗ Px(s) J(uopt ,t; x0 ) ≤ x∗0 Px0 , J(uopt ; x0 ) ≤ x∗0 Px0 .
(3.104)
106
3 Finite Dimensional Systems
Now we show that
uopt ∈ L2 ((0, ∞), Rm ).
Since R is positive definite, then there exists a C > 0 such that ∞ 0
|uopt (t)|2 dt ≤ C
∞ 0
uopt (t)∗ Ruopt (t)dt
≤ CJ(uopt ; x0 ) < ∞. Combing (3.103) and (3.104) gives J(uopt ; x0 ) = x∗0 Px0 =
min
u∈L2 ((0,∞),Rm )
J(u; x0 ).
Uniqueness. The uniqueness follows from the convexity: 1 1 J((u1 + u2 )/2; x0 ) < J(u1 ; x0 ) + J(u2 ; x0 ). 2 2 P is minimal. Let P1 be another solution. Then J(u,t; x0 ) = =
≤
t
Taking
u(s) = u1 (s) = −R−1 B∗ P1 x(s)
we obtain
J(u1 ,t; x0 ) ≤ x∗0 P1 x0 ,
and so Thus
(x(s)∗ Qx(s) + u(s)∗ Ru(s)) ds
0 x∗0 P1 x0 − x(t)∗ P1 x(t) t + [u(s) + R−1 B∗ P1 x(s)]∗ R[u(s) + R−1B∗ P1 x(s)]ds 0 x∗0 P1 x0 t + [u(s) + R−1 B∗ P1 x(s)]∗ R[u(s) + R−1B∗ P1 x(s)]ds. 0
J(u1 ; x0 ) ≤ x∗0 P1 x0 . x∗0 Px0 =
min
u∈L2 ((0,∞),Rm )
J(u; x0 ) ≤ J(u1 ; x0 ) ≤ x∗0 P1 x0 .
The next question is whether the optimal feedback controller stabilizes the system. Before answering this question, we first look at an example. Example 3.18. In Example 3.17, we have obtained the positive definite symmetric matrix √ 3 √1 P= . 3 1
3.9 Optimal State Feedback Controllers
107
We then obtain the optimal feedback gain matrix for the system (3.100)-(3.101) ! √ √ " 3 √1 −1 ∗ = 1 3 . Kopt = R B P = [1][0 1] 3 1 Thus the optimal state feedback controller of (3.100)-(3.101) is √ u = −Kopt x = −x1 − 3x2 .
The matrix
0 √ 1 = −1 − 3
A − BKopt
is Hurwitz. In this case, one can check that (A, C) is observable. If we change C to 00 C= , 01 the performance criterion is J(u, x0 ) = =
∞'
( |Cx(t)|2 + |u(t)|2 dt
0 ∞ ' 0
( |x2 (t)|2 + |u(t)|2 dt.
The corresponding Riccati equation is p11 p12 01 00 p11 p12 + 00 p12 p22 p12 p22 10 & ) p11 p12 p11 p12 0 00 00 − [1] 0 1 + = . 1 01 00 p12 p22 p12 p22 This equation gives three equations: p212 = 0, p11 − p12 p22 = 0, 1 + 2p12 − p222 = 0. Solving the system, we obtain the semi-definite symmetric matrix 00 P= , 01 and then the optimal feedback gain matrix Kopt
00 = R B P = [1][0 1] = [0 1] . 01 −1 ∗
108
3 Finite Dimensional Systems
Thus the optimal state feedback control is u = −Kopt x = −x2 .
The matrix A − BKopt
0 1 = 0 −1
is not Hurwitz. This happened because x1 was not penalized or not detected by the performance criterion. In fact, we can check that (A, C) is not detectable. This example demonstrates that certain conditions such as detectability are needed to be imposed on (A, C) to ensure that the optimal feedback controller stabilizes the system. Lemma 3.9. The function exp(At)x0 ∈ L2 ((0, ∞), Rn ) for every x0 ∈ Rn if only if A is Hurwitz. Proof. If A is Hurwitz, then exp(At)x0 ≤ Me−at x0 for some M, a > 0. So exp(At)x0 ∈ L2 ((0, ∞), Rn ). If A is not Hurwitz, then there is an eigenvalue s0 with Re(s0 ) ≥ 0 and corresponding eigenvector xs0 . Then / L2 ((0, ∞), Rn ). exp(At)xs0 = exp(s0t)xs0 ∈ Theorem 3.22. If the optimal control problem (3.96) is well posed and the pair (A, C) is detectable, then the optimal feedback Kopt = −R−1 B∗ P stabilizes (A, B), that is, A − BKopt is Hurwitz. Proof. Since ∞' 0
( Cx(s)2 + u(s)∗ Ruopt (s) ds = J(uopt ; x0 ) < ∞,
and R is positive definite, we have Cx(s) ∈ L2 ((0, ∞), Rn ),
uopt ∈ L2 ((0, ∞), Rm ).
Since the system (A, C) is detectable, there is a F such that A − FC is Hurwitz. Writing x˙ = Ax − B Kopt x = Ax + Buopt = (A − FC)x + FCx + Buopt , we obtain
3.9 Optimal State Feedback Controllers
x(t) = e(A−FC)t x0 + which implies that
t 0
109
e(A−FC)(t−s) [FCx(s) + Buopt (s)]ds,
x ∈ L2 ((0, ∞), Rn ).
So A − BKopt is Hurwitz. We next show that the optimal feedback matrix Kopt minimizes the performance criterion among all gain matrices. Let K denote the set of all m × n gain matrices. Consider a state feedback control u(t) = −Kx(t).
(3.105)
We define the performance criterion J(K, x0 ) = = = where
∞' 0 ∞ 0
∞ 0
( |Cx(t)|2 + |Du(t)|2 dt
(x(t)∗ C∗ Cx(t) + x(t)∗K∗ D∗ DKx(t))dt (x(t)∗ Qx(t) + x(t)∗K∗ RKx(t))dt, Q = C∗ C,
R = D∗ D.
Theorem 3.23. Suppose that the optimal control problem (3.96) is well posed and let Kopt = −R−1 B∗ P be the optimal feedback matrix. Then J(Kopt , x0 ) = min J(K, x0 ). K∈K
(3.106)
Proof. We first show that min
u∈L2 ((0,∞),Rm )
J(u; x0 ) ≤ min J(K, x0 ). K∈K
In fact, if u(t) = −Kx(t) ∈ / L2 ((0, ∞), Rm ), then J(K, x0 ) = ∞. If u(t) = −Kx(t) ∈ 2 m L ((0, ∞), R ), then min
u∈L2 ((0,∞),Rm )
J(u; x0 ) ≤ J(K, x0 ).
It therefore follows that J(Kopt , x0 ) =
min
u∈L2 ((0,∞),Rm )
J(u; x0 ) ≤ min J(K, x0 ) ≤ J(Kopt , x0 ). K∈K
110
3 Finite Dimensional Systems
Exercises 3.9 1. Prove the identity (3.102). 2. Consider the system x˙ = Ax + Bu, y = Cx,
where A=
00 , 10
B=
1 , 0
C=
10 . 01
The performance criterion is defined by J(u, x0 ) = =
∞'
( |Cx(t)|2 + |u(t)|2 dt
0 ∞ ' 0
( |x1 (t)|2 + |x2 (t)|2 + |u(t)|2 dt.
a. Write down the corresponding algebraic Riccati equation. b. Derive the optimal feedback matrix and show that it stabilizes the system. 3. Consider the control problem x˙ = Ax + Bu(t), y = Cx, x(0) = x0 with a state feedback control u(t) = −Kx(t). Suppose that the pair (A, B) is stabilizable and define K = {K : A − BK is Hurwitz}. Let D be a nonsingular matrix and define the quadratic performance criterion J(K) = = = where
∞' 0 ∞ 0
∞ 0
( |Cx(t)|2 + |Du(t)|2 dt
(x(t)∗ C∗ Cx(t) + x(t)∗K∗ D∗ DKx(t)) dt (x(t)∗ Qx(t) + x(t)∗K∗ RKx(t)) dt, Q = C∗ C,
R = D∗ D.
3.10 Asymptotic Tracking and Disturbance Rejection
111
Let P be a real symmetric nonnegative solution of the algebraic Riccati equation A∗ P + PA + Q − PBR−1B∗ P = 0. Consider a quadratic Lyapunov function candidate V (x) = x∗ Px. a. Show that for any gain matrix K V˙ = −x∗ Qx − x∗K∗ D∗ DKx +x∗ [(D∗ )−1 B∗ P − DK]∗[(D∗ )−1 B∗ P − DK]x. b. Show that for any K ∈ K J(K) = x∗0 Px0 + ≥ x∗0 Px0 . c. Let Show that
∞ 0
x∗ (t)[(D∗ )−1 B∗ P − DK]∗[(D∗ )−1 B∗ P − DK]x(t) dt
Kopt = D−1 (D∗ )−1 B∗ P = R−1 B∗ P. J(Kopt ) ≤ x∗0 Px0 ≤ min J(K). K∈K
3.10 Asymptotic Tracking and Disturbance Rejection Consider the control system subject to a disturbance d x˙ = Ax + Bu + Ed d, y = Cx + Du + Fd d.
(3.107) (3.108)
The reference r(t) and the disturbance d(t) are assumed to be generated by the exosystem r˙ = A1r r, d˙ = A1d d.
(3.109) (3.110)
The constant reference r can be generated by the system with A1r = 0 and the sinusoidal disturbance d = M sin(ω t) can be generated with 0 ω . A1d = −ω 0 The tracking error is given by
112
3 Finite Dimensional Systems
e = Cx + Du + Fd d − r. Define
r v= , d
A1r 0 , A1 = 0 A1d
E = [0 Ed ],
F = [−I Fd ].
Combining the control system (3.107)-(3.108) with the exosystem (3.109)-(3.110), we obtain x˙ = Ax + Bu + Ev, v˙ = A1 v,
(3.111) (3.112)
e = Cx + Du + Fv.
(3.113)
The control problem is to design a controller u to track the reference r: lim e(t) = 0.
t→∞
This problem is called asymptotic tracking and disturbance rejection. Consider the static state feedback control u = −K1 x + K2v.
(3.114)
The two matrices K1 and K2 are called the feedback gain and the feedforward gain, respectively. Substituting (3.114) into (3.111)-(3.113) gives x˙ = (A − BK1 )x + (BK2 + E)v, v˙ = A1 v, e = (C − DK1)x + (DK2 + F)v.
(3.115) (3.116) (3.117)
To eliminate v from the equation (3.115), we introduce the transformation x = w + Xv, where the matrix X is to be determined. Substituting this transformation into (3.115)-(3.117) gives ˙ + XA1 v = (A − BK1)w + ((A − BK1)X + BK2 + E)v, w
(3.118)
e = (C − DK1)w + ((C − DK1)X + DK2 + F)v.
(3.119)
This leads to the matrix equations: XA1 = (A − BK1 )X + BK2 + E,
(3.120)
0 = (C − DK1)X + DK2 + F.
(3.121)
The equation (3.120) for X is called the Sylvester equation. Let
3.10 Asymptotic Tracking and Disturbance Rejection
113
U = K2 − K1X. Then the above matrix equations are changed to XA1 = AX + BU + E, 0 = CX + DU + F.
(3.122) (3.123)
These equations are called the regulator equations. Lemma 3.10. Let A be an n × n matrix and C be a p × n matrix. For any matrices E and F, the regulator equations (3.122)-(3.123) are solvable if and only if A−λI B = n+ p (3.124) rank C D for all λ ∈ σ (A1 ), where σ (A1 ) denotes the spectrum of A1 . The proof of this lemma needs advanced linear algebra and thus is referred to [43, p.9, Theorem 1.9]. Theorem 3.24. Suppose that (A, B) is stabilizable and the following holds: A−λI B = n+ p rank C D for all λ ∈ σ (A1 ). Then there exist K1 and K2 such that the controller (3.114) drives the output to track the reference: lim e(t) = lim (y(t) − r(t)) = 0.
t→∞
t→∞
Proof. By Lemma 3.10, the regulator equations (3.122)-(3.123) have a solution X, U. Since (A, B) is stabilizable, there exists K1 such that A − K1 B is Hurwitz. Set K2 = U + K1X. Then the equations (3.118)-(3.119) become ˙ = (A − BK1)w, w e = (C − DK1)w. It then follows that lim e(t) = lim (C − DK1)w(t) = 0.
t→∞
t→∞
We now consider the Luenberger-observer based dynamic output feedback control
114
3 Finite Dimensional Systems
u = −Kz, B A E z+ u + L (e − [C F]z − Du) z˙ = 0 0 A1 = G1 z + G2e, where the gain matrices K and L are to be designed and B A E K − L([C F] − DK), G1 = − 0 0 A1
(3.125)
(3.126)
G2 = L.
Substituting (3.125) and (3.126) into (3.111)-(3.113) gives x˙ = Ax − BKz + Ev, z˙ = G2 Cx + (G1 − G2 DK)z + G2Fv, v˙ = A1 v,
(3.128) (3.129)
e = Cx − DKz + Fv.
(3.130)
(3.127)
To eliminate v from the equation (3.127), we introduce the transformation x = w + Xv,
z = q + Zv,
where matrices X and Z are to be determined. Substituting this transformation into (3.127)-(3.130) gives ˙ + XA1 v = Aw − BKq + (AX − BKZ + E)v, w q˙ + ZA1 v = G2 Cw + (G1 − G2 DK)q +(G2 CX + (G1 − G2DK)Z + G2F)v, e = Cw − DKq + (CX − DKZ + F)v.
(3.131) (3.132) (3.133)
This leads to the matrix equations: XA1 = AX − BKZ + E, 0 = CX − DKZ + F,
(3.134) (3.135)
ZA1 = G1 Z.
(3.136)
Let U = −KZ. Then the above matrix equations are changed to XA1 = AX + BU + E,
(3.137)
0 = CX + DU + F, ZA1 = G1 Z.
(3.138) (3.139)
3.10 Asymptotic Tracking and Disturbance Rejection
115
By Lemma 3.10, the regulator equations (3.137)-(3.138) have a solution if the rank condition (3.124) is satisfied. The equation (3.139) has a trivial solution 0. To find a nonzero solution, we rewrite the regulator equations (3.134)-(3.135) as the following matrix form: X XA1 = [A E] − BKZ I X = [A E] − BKZ + L0 I X = [A E] − BKZ + L(CX − DKZ + F) I X X = [A E] − BKZ + L [C F] − DKZ I I X . A1 = [0 A1 ] I
If we let Z= we obtain
ZA1 =
Thus Z =
X , I
B A E − K − L([C F] − DK) Z = G1 Z. 0 0 A1
X is a solution of the equation (3.139). I
Theorem 3.25. Suppose that (A, B) is stabilizable, the pair A E [C F], 0 A1 is detectable, and the following holds: A−λI B rank = n+ p C D for all λ ∈ σ (A1 ). Then there exist K and L such that the controller (3.125)-(3.126) drives the output to track the reference: lim e(t) = lim (y(t) − r(t)) = 0.
t→∞
t→∞
Proof. By Lemma 3.10, the regulator equations (3.137)-(3.138) have a solution X, U. Since (A, B) is stabilizable, there exists K1 suchthat A − K1 B is Hurwitz. X satisfies the equation Define K2 = −U − K1 X. Then U = −KZ, where Z = I
116
3 Finite Dimensional Systems
(3.139). Then the equations (3.131)-(3.133) become ˙ = Aw − BKq, w q˙ = G2 Cw + (G1 − G2DK)q, e = Cw − DKq.
Let L=
L1 . L2
Then the coefficient matrix can be written as ⎡ ⎤ A −BK1 −BK2 A −BK = ⎣ L1 C A − BK1 − L1 C E − BK2 − L1 F ⎦ . G2 C G1 − G2 DK −L2 C A1 − L2 F L2 C Subtracting the first row from the second row and adding the second column to the first column, we find that the coefficient matrix is equivalent to ⎡ ⎤ A − BK1 −BK1 −BK2 ⎣ 0 A − L1 C E − L1 F ⎦ . 0 −L2 C A1 − L2 F Thus the eigenvalues of the coefficient matrix are the union of the eigenvalues of A − BK1 and the eigenvalues of A − L1 C E − L1 F . −L2 C A1 − L2 F Because the pair
A E [C F], 0 A1
is detectable, there exists a matrix L such that the above matrix is Hurwitz. Thus we have shown that the coefficient matrix is Hurwitz and then lim e(t) = lim (Cw(t) − DKq(t)) = 0.
t→∞
t→∞
Exercises 3.10 1. Consider the control system subject to a disturbance x˙ = r˙ = d˙1 = d˙2 = y=
x + u + d 1, r, d2 , −d1 , x.
3.11 References and Notes
117
Solve the regulator equations (3.122)-(3.123) and then design a static state feedback control u = k1 x + k 2 r + k 3 d 1 + k 4 d 2 such that the output y tracks the reference r. 2. Let A = [ai j ] be an m × q matrix and B = [bi j ] be a p × n matrix. The Kronecker product of A and B, denoted by A ⊗ B, is defined by ⎤ ⎡ a11 B · · · a1q B ⎢ .. ⎥ . .. A ⊗ B = ⎣ ... . ⎦ . am1 B · · · amq B Show that for any matrices A, B, C, and D of appropriate dimensions (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), (A + B) ⊗ (C + D) = A ⊗ C + A ⊗ D + B ⊗ C + B ⊗ D. 3. For an n × m matrix X, define ⎤ X1 ⎢ ⎥ vec(X) = ⎣ ... ⎦ , ⎡
Xm where Xi is the ith column of X for i = 1, · · · , m. Prove that vec(BXA) = (A∗ ⊗ B)vec(X). 4. Prove that the Sylvester equation XA − BX = Q has a unique solution if and only if A and B have no eigenvalues in common.
3.11 References and Notes This chapter is based on the references [38, 45, 86, 90, 104]. Section 3.2 is adopted from [45], Section 3.3 from [86], Section 3.4 from [104], Sections 3.6 and 3.7 from [38, 45, 90], Sections 3.8 and 3.9 from [86], and Section 3.10 from [43]. Most of exercise problems are adopted from [86, 90]. For classical control theory, which is based on frequency-domain analysis, we refer to [38, 86, 90]; for nonlinear controls, we refer to [10, 43, 45]; for more discussions about PID trajectory tracking control for mechanical systems, we refer to [19].
Chapter 4
Linear Reaction-Convection-Diffusion Equation
In this chapter, we discuss the control problem of the linear reaction-convectiondiffusion equation ∂u = μ ∇2 u + ∇ · (uv) + au. (4.1) ∂t Depending on a particular real problem, u can represent a temperature or the concentration of a chemical species. The constant μ > 0 is the diffusivity of the temperature or the species, the vector v(x) = (v1 (x), · · · , vn (x)) is the velocity field of a fluid flow, and a(x) is a reaction rate. ∇2 is defined by ∇2 u =
∂ 2u ∂ 2u + · · · + , ∂ x2n ∂ x21
which is called the Laplace operator. The gradient of u is denoted by ∂u ∂u ∇u = . ,··· , ∂ x1 ∂ xn i) ∇ · (uv) = div(uv) = ∑ni=1 ∂ (uv ∂ xi denotes the divergence of the vector uv. We briefly mention that the equation (4.1) is a quite simplified model for real problems. As demonstrated in the equation (1.6), the reaction kinetics is usually nonlinear. Thus the linear term au is a linear approximation of the reaction kinetics originating in real problems. In a chemical reaction, there is usually more than one chemical. Competitive-consecutive reactions are the simplest model system and are described by A + B → P, B + P → W , where A and B are the initial reactants, P is the desired product, and W is a by-product (waste). The dynamics of the competitive-consecutive reactions can be modeled by the system of reactionconvection-diffusion equations [87]
∂A + (v · ∇)A = μ ∇2 A − k1AB, ∂t
W. Liu, Elementary Feedback Stabilization of the Linear Reaction-Convection-Diffusion Equation and the Wave Equation, Math´ematiques et Applications 66, c Springer-Verlag Berlin Heidelberg 2010 DOI 10.1007/978-3-642-04613-1 4,
119
120
4 Linear Reaction-Convection-Diffusion Equation
∂B + (v · ∇)B = μ ∇2 B − k1AB − k2BP, ∂t ∂P + (v · ∇)P = μ ∇2 P + k1 AB − k2BP, ∂t ∂W + (v · ∇)W = μ ∇2W + k2 BP, ∂t where k1 , k2 are the reaction rate constants. In addition, in real problems, the velocity field v should satisfy the incompressible Navier-Stokes equations (see, e.g., [37]) 1 ∂v + (v · ∇)v = − ∇p + ν ∇2 v + F(x,t) in Ω , ∂t ρ ∇ · v = 0 in Ω , where p denotes the pressure, ρ the density of the fluid, ν the kinematic viscosity of the fluid, and F(x,t) some external forces. In what follows, Ω denotes a bounded domain in Rn , Γ denotes its boundary ∂ Ω , and n denotes the unit normal on the boundary pointing the outside of Ω . In this text, for convenience, we will use the subscripts ut , ux or ∂∂ ut , ∂∂ ux interchangeably to denote the derivatives of u with respect to t, x, respectively. In this chapter, we focus on the Dirichlet boundary condition u|Γ = f , where f is a given function. However, the theories presented here also hold for other boundary conditions such as the Neumann boundary condition
∂ u
= f. ∂n Γ
4.1 Stability Before we study our control problems, we consider the stability of the boundary initial value problem
∂u = μ ∇2 u + ∇ · (uv) + au in Ω , ∂t u|Γ = 0, u(x, 0) = u0 (x),
(4.2) (4.3) (4.4)
where u0 is an initial condition. The steady state equation of the problem is
μ ∇2 ψ + ∇ · (ψ v) + aψ = 0 in Ω , ψ |Γ = 0.
(4.5) (4.6)
4.1 Stability
121
This problem has a trivial steady solution ψ = 0. Depending on v and a, it may have other non-trivial solutions. For example, if Ω = (0, π ), v = 0, and a = μ , it has the non-trivial solution ψ = sin x. We are interested in the stability of the equilibrium ψ = 0. Definition 4.1. The equilibrium point ψ = 0 of (4.2)-(4.4) is (1) stable if, for any ε > 0, there exists δ = δ (ε ) > 0 such that Ω
|u(x,t)|2 dV < ε
for
Ω
|u0 (x)|2 dV < δ , t ≥ 0.
(2) unstable if it is not stable. (3) asymptotically stable if for every u0 ∈ L2 (Ω )
lim
t→∞ Ω
|u(x,t)|2 dV = 0.
(4) exponentially stable if there exist constants γ ,C > 0 such that the solution of (4.2)-(4.4) satisfies
|u(x,t)| dV ≤ Ce 2
Ω
−γ t
Ω
|u0 (x)|2 dV.
Whenever the equilibrium is stable, unstable, asymptotically stable, or exponentially stable, we usually say that so is the equation. The number γ is called the decay rate. In this definition, the L2 -norm is used and thus the stability is called the L2 -stability. Unlike finite dimensional systems, other norms can be used, for example, the H 1 norm 1/2 ' ( |u(x,t)|2 + |∇u(x,t)|2 dV . Ω
In this case, the stability is called the H 1 -stability. In this text, for simplicity, we discuss only the L2 -stability. Other stabilities with high regularity, such as H 1 - or H 2 -stability, are usually difficult to establish. Analogous to the finite dimensional systems, the stability of the equilibrium point ψ = 0 of (4.2)-(4.4) is determined by the eigenvalues of the eigenvalue problem
μ ∇2 ψ + ∇ · (ψ v) + aψ = λ ψ in Ω , ψ = 0 on Γ .
(4.7) (4.8)
Readers who are not familiar with semigroup theory may skip this part and jump to the elementary Theorem 4.2. Corresponding to this problem, we define the operator A on L2 (Ω ) by Aψ = μ ∇2 ψ + ∇ · (ψ v) + a(x)ψ (4.9) with the domain D(A) = H 2 (Ω ) ∩ H01 (Ω ). According to Theorem 2.7, the spectrum σ (A) consists of infinite eigenvalues of A.
122
4 Linear Reaction-Convection-Diffusion Equation
Theorem 4.1. For any ω > sup{Re(λ ) | λ ∈ σ (A)}, there exists M(γ ) > 0 such that the solutions of (4.2)-(4.4) satisfies Ω
|u(x,t)|2 dV ≤ Me2ω t
Ω
|u0 (x)|2 dV.
(4.10)
Proof. Using the operator A defined by (4.9), the problem (4.2)-(4.4) can be formulated into the abstract Cauchy problem (2.64). By Theorem 2.16, A generates an analytic semigroup eAt on L2 (Ω ). Thus by Theorem 2.23, the problem (4.2)-(4.4) has a unique solution given by u = eAt u0 . Finally, (4.10) follows from Theorem 2.21. This theorem tells that if sup{Re(λ ) | λ ∈ σ (A)} < 0, then the equilibrium 0 of (4.2)-(4.4) is exponentially stable. In the case where v = 0, we can give an elementary proof of this theorem. In this case, by Theorem 2.5, all the eigenvalues of (4.7)-(4.8) consist of a sequence of real numbers converging to −∞ and all eigenfunctions form a basis in L2 (Ω ). Using the separation of variables, we can obtain the series solutions of (4.2)-(4.4). Theorem 4.2. Assume that v = 0. Let λ1 ≥ λ2 ≥ · · · ≥ λn ≥ · · · with lim λn = −∞ n→∞
∞ be the eigenvalues of (4.7)-(4.8) and2 let {ψi }i=1 be the corresponding normalized (by normalized we mean that Ω |ψi | dV = 1) orthogonal eigenfunctions. Then the solution of (4.2)-(4.4) is given by ∞
u = ∑ c i e λ i t ψi ,
(4.11)
i=1
where ci = Moreover,
Ω
Ω
u0 ψi dV.
|u(x,t)|2 dV ≤ e2λ1t
Ω
|u0 (x)|2 dV.
(4.12)
Proof. Let u = f (t)ψ (x). Substituting u into (4.2)-(4.4), we obtain (noting that v = 0) ft μ ∇2 ψ + a(x)ψ = = λ , ψ |Γ = 0. f ψ For each eigenvalue λi , solving df = λi f , dt we obtain fi = fi (0)eλit . Therefore the solution u is given by the series ∞
u = ∑ fi (0)eλit ψi . i=1
Setting t = 0 in the above equation, we obtain
4.1 Stability
123 ∞
u0 = ∑ fi (0)ψi . i=1
2 Since {ψi }∞ i=1 is an orthonormal basis in L (Ω ), multiplying the equation by ψi and integrating over Ω , we obtain
fi (0) =
Ω
u0 ψi dV.
Moreover, Ω
∞
∑ c2i e2λit
|u(x,t)|2 dV =
Ω
i=1 ∞
|ψi |2 dV
∑ c2i e2λit
=
i=1
∞
= e2λ1t ∑ c2i e2(λi −λ1 )t i=1 ∞
≤ e2λ1t ∑ c2i i=1
=e
2λ1 t
Ω
|u0 (x)|2 dV.
This theorem tells that the equilibrium 0 of (4.2)-(4.4) is exponentially stable if λ1 < 0, stable if λ1 = 0, and unstable if λ1 > 0. The above two theorems do not provide conditions on a and v under which the equilibrium ψ = 0 of (4.2)-(4.4) is exponentially stable. To obtain such conditions, we need Gronwall’s inequality. Lemma 4.1. (Gronwall’s inequality) If x ≤ g(t)x + h(t) for t ≥ s, then x(t) ≤ x(s)e
t s
g(r)dr
Proof. Multiplying (4.13) by e− x e −
t s
g(r)dr
t s
+e
t
g(r)dr
s
g(r)dr
t
h(z)e−
(4.13) z s
g(r)dr
dz.
(4.14)
s
gives
− g(t)xe−
t s
g(r)dr
≤ h(t)e−
t s
g(r)dr
,
and then
t d − st g(r)dr xe ≤ h(t)e− s g(r)dr . dt Integrating from s to t gives (4.14).
In what follows, we denote by div the divergence of a vector v: div(v) = ··· +
∂ vn ∂ xn .
∂ v1 ∂ x1
+
124
4 Linear Reaction-Convection-Diffusion Equation
Theorem 4.3. Assume that the velocity v is differentiable. Denote vm =
1 max{div(v(x,t))}, 2 x∈Ω
am = max a(x). x∈Ω
Let λ1 be the constant in Poincar´e’s inequality (2.29). If am + vm < μλ1 , then the equilibrium 0 of (4.2)-(4.4) is exponentially stable: Ω
|u(t)|2 dV ≤ e2(am +vm −μλ1 )t
Ω
|u0 |2 dV.
(4.15)
Proof. Multiplying (4.2) by u, integrating over Ω by parts, and using the boundary conditions on u, we deduce that

(1/2) d/dt ∫_Ω |u(t)|² dV
  = μ ∫_Ω u ∇²u(t) dV + ∫_Ω (∇·(uv)) u(t) dV + ∫_Ω a|u(t)|² dV
  (use (2.25) and (2.26))
  = μ ( ∫_Γ u(t) ∂u/∂n dS − ∫_Ω |∇u(t)|² dV ) + ∫_Γ u²(t) n·v dS − ∫_Ω u(t) v·∇u(t) dV + ∫_Ω a|u(t)|² dV
  = −μ ∫_Ω |∇u(t)|² dV − (1/2) ∫_Ω v·∇(u²(t)) dV + ∫_Ω a|u(t)|² dV
  = −μ ∫_Ω |∇u(t)|² dV − (1/2) ∫_Γ u²(t) n·v dS + (1/2) ∫_Ω div(v) u²(t) dV + ∫_Ω a|u(t)|² dV
  = −μ ∫_Ω |∇u(t)|² dV + (1/2) ∫_Ω div(v) u²(t) dV + ∫_Ω a|u(t)|² dV,

where the boundary integrals vanish because u|Γ = 0. The application of Poincaré's inequality (2.29) gives

d/dt ∫_Ω |u(t)|² dV ≤ 2(am + vm − μλ1) ∫_Ω |u(t)|² dV.

Then (4.15) follows from Gronwall's inequality.

Example 4.1. Consider the initial boundary value problem

∂u/∂t = ∂²u/∂x² + au,
u(0,t) = u(π,t) = 0,
u(x,0) = u0,
where a is a constant. Using the method of separation of variables, we can solve the problem and obtain the series solution

u(x,t) = Σ_{n=1}^∞ cn sin(nx) e^{(a−n²)t},

where the Fourier coefficient cn is given by

cn = (2/π) ∫_0^π u0(x) sin(nx) dx.

Evidently, if a < 1, the equilibrium 0 is exponentially stable; if a = 1, it is stable, but not exponentially stable; if a > 1, it is unstable because sin(x) e^{(a−1)t} goes to infinity as t → ∞.
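As a quick illustration of this series solution, one can sum a truncated version of it numerically; the initial datum u0(x) = x(π − x) and the two values of a below are illustrative choices, not taken from the text.

import numpy as np

def series_solution(a, x, t, n_modes=50):
    u0 = lambda s: s * (np.pi - s)                 # assumed sample initial datum
    s = np.linspace(0.0, np.pi, 2001)
    u = np.zeros_like(x)
    for n in range(1, n_modes + 1):
        cn = (2.0 / np.pi) * np.trapz(u0(s) * np.sin(n * s), s)   # Fourier coefficient
        u += cn * np.sin(n * x) * np.exp((a - n**2) * t)
    return u

x = np.linspace(0.0, np.pi, 201)
for a in (0.5, 1.5):                               # a < 1: decay,  a > 1: growth
    norms = [np.sqrt(np.trapz(series_solution(a, x, t)**2, x)) for t in (0.0, 2.0, 4.0)]
    print(f"a = {a}:  L2 norm at t = 0, 2, 4 ->", np.round(norms, 4))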
We now extend the definition of stability for the reaction-convection-diffusion equation to an abstract Cauchy problem

du/dt = Au,   u(0) = u0,    (4.16)

where A is the infinitesimal generator of a C0 semigroup T(t) on a Banach space X and u0 ∈ X is an initial condition. The space X is called the state space of the problem. By Theorem 2.23, the solution is given by u(t) = T(t)u0. Since A0 = 0, the zero element is an equilibrium point. Readers who do not have a background in operator theory may skip this part.

Definition 4.2. The equilibrium point 0 of (4.16) is
(1) stable if, for any ε > 0, there exists δ = δ(ε) > 0 such that ‖u(t)‖ < ε for ‖u0‖ < δ, t ≥ 0;
(2) unstable if it is not stable;
(3) asymptotically stable if lim_{t→∞} ‖u(t)‖ = 0 for each u0 ∈ X;
(4) exponentially stable if there exist constants γ, C > 0 such that the solution of (4.16) satisfies ‖u(t)‖ ≤ C e^{−γt} ‖u0‖.

Whenever the equilibrium is stable, unstable, asymptotically stable, or exponentially stable, we usually say that so is the system or the semigroup. If A is the infinitesimal generator of an analytic semigroup, Theorem 2.21 indicates that the equilibrium is exponentially stable if the spectrum σ(A) lies in the left-half complex plane. Unfortunately, this is not always true for general generators. For such an example, we refer to [82, p.114, Example 3.6]. To find conditions on A that ensure exponential stability, we examine the inequality (2.45). This inequality tells us that if the semigroup T(t) is exponentially stable, that is, ω < 0, then the resolvent R(λ, A) as an operator-valued function of
λ is bounded on the right-half complex plane Re(λ ) > 0. In fact, this condition is both necessary and sufficient. Theorem 4.4. Let T (t) be a C0 semigroup with infinitesimal generator A on a Hilbert space H. Then T (t) is exponentially stable if and only if the resolvent R(λ , A) as an operator-valued function of λ is bounded on the right-half complex plane Re(λ ) > 0. The proof uses the Laplace transform of operator-valued functions and is beyond the scope of this text. Thus we refer to [24, p.222, Theorem 5.1.5]. This is a nice theoretical result, but the condition is usually difficult to verify in applications to partial differential equations. The above tests for stability rely on the spectrum of A. Finding out the spectrum itself is a difficult problem. Thus it is valuable to have a test without using the spectrum. For this, we examine the simple exponential function eat . If ∞ 0
eat dt < ∞,
then a < 0. This is actually true for any semigroups. Theorem 4.5. If the semigroup T (t) on a Banach space X satisfies ∞ 0
T (t) p dt ≤ C < ∞
(4.17)
for some p > 0, then there exist constants α , σ > 0 such that T (t) ≤ α e−σ t .
(4.18)
Proof. From the properties of semigroups (Theorem 2.8) it follows that there exist K, κ > such that T (t) ≤ Keκ t . (4.19) Then for 0 ≤ t ≤ 1, we have that T (t) ≤ Keκ . For t ≥ 1, we deduce that 1 − e−pκ t 1 − e−pκ T (t) p ≤ T (t) p pκ pκ = =
t 0
t 0
≤ Kp
e−pκ s T (t) p ds e−pκ s T (s)T (t − s) pds t 0
e−pκ se pκ s T (t − s) p ds (use (4.19))
≤ Kp
∞ 0
T (t) p dt
≤ CK p . (use (4.17)) Thus, for all t ≥ 0, we obtain
T (t) p ≤ M
for some M > 0. Using this inequality, we deduce that tT (t) p = = ≤
t 0
t 0
t 0
≤M
T (t) p ds T (s)T (t − s) pds MT (t − s) pds
∞ 0
T (t) p dt
≤ MC. (use (4.17)) Hence T (t) p ≤
MC , t
and then
1 T (2MC) p ≤ . 2 Let t = 2MCn + r, where n is a non-negative integer and 0 ≤ r < 2MC. It then follows that T (t) p = T (2MCn + r) p = T (2MC)T (2MC(n − 1) + r) p 1 ≤ T (2MC(n − 1) + r) p 2 .. . 1 n T (r) p ≤ 2 n+1 p 1 ≤K e pκ MC 2 = K p e−(n+1) ln2 e pκ MC ln 2 = K p e− 2MC t e pκ MC . For the finite dimensional system (3.7), the number of the eigenvectors of the matrix A is finite. For operators defined through differential equations in a function space, such as the operator A in (4.9), the number of eigenvectors is infinite. Thus such a system as (4.2) is called an infinite dimensional system.
The extension of control theory from finite dimensional systems to infinite dimensional systems is not trivial. New methods and theories need to be created to deal with infinite dimensional control systems. Some of these methods originate from finite dimensional control systems, but many others are completely new.
Exercises 4.1 1. Consider the initial boundary value problem of the heat equation
∂ 2u ∂u = μ 2 + au, ∂t ∂x u(0,t) = 0, u(1,t) = 0, u(x, 0) = u0 (x), where a is a constant. a. Use the method of separation of variables to find its solution. b. For what values of the reaction constant a, is the equilibrium 0 unstable? 2. Consider the initial boundary value problem
∂ 2u ∂u = μ 2 + au, ∂t ∂x ∂u (π ,t) = 0, u(0,t) = ∂x u(x, 0) = u0 (x), where a is a constant. a. Use the method of separation of variables to find its solution. b. For what values of the reaction constant a, is the equilibrium 0 unstable? 3. Consider the initial boundary value problem
∂ 2u ∂u = μ 2, ∂t ∂x 1 ∂u (1,t) = − u(1,t), u(0,t) = 0, ∂x 2 u(x, 0) = u0 (x). Show that
1 0
|u(x,t)|2 dx ≤ e−σ t
where σ is a positive constant.
1 0
|u0 (x)|2 dx,
4. Consider the initial boundary value problem of the Burgers equation
∂ 2u ∂u ∂u = μ 2 +u , ∂t ∂x ∂x u(0,t) = 0, u(1,t) = 0, u(x, 0) = u0 (x). Show that
1 0
|u(x,t)|2 dx ≤ e−σ t
1 0
|u0 (x)|2 dx,
where σ is a positive constant. 5. Assume that the velocity v = (v1 (x,t), v2 (x,t), v3 (x,t)) over a bounded domain Ω satisfies the conditions div(v) =
∂ v1 ∂ v2 ∂ v3 + + = 0, ∂ x1 ∂ x2 ∂ x3
v|Γ = 0.
Show that the equilibrium 0 of the convection-diffusion equation
∂u + v · ∇u = μ ∇2 u in Ω , ∂t u|Γ = 0, u(x, 0) = u0 (x) is exponentially L2 -stable. Is it exponentially H 1 -stable?
4.2 Interior Feedback Stabilization Consider the interior control problem
∂u = μ ∇2 u + ∇ · (uv) + au + b(x) · φ (t) in Ω , ∂t u|Γ = 0, u(x, 0) = u0 (x),
(4.20) (4.21) (4.22)
where φ (t) = (φ1 (t), · · · , φm (t)) is a control input vector, b(x) = (b1 (x), · · · bm (x)) is a given vector which prescribes how a control action is distributed over the domain Ω , and b(x) · φ (t) = ∑m i=1 bi (x)φi (t). Typical functions bi are the characteristic func/ ωi ), where ωi (i = 1, 2, · · · , m) tions χωi (χωi (x) = 1 if x ∈ ωi and χωi (x) = 0 if x ∈ are subsets of Ω . In this control problem, the state variable is u. The output of the system can be proposed in many ways in accordance with specific physical problems. Concentration of a chemical or a room temperature is usually desired to be homogenized. Then the controlled output is the concentration oc (u) = u(x,t).
(4.23)
If the average concentration on different subsets ωio (i = 1, 2, · · · , l) can be measured, then the measured output is om (u) = h1 (x)u(x,t)dV, · · · , hl (x)u(x,t)dV , (4.24) ω1o
ωlo
where hi (i = 1, 2, · · · , l) are given weight functions. This control problem could simulate the regulation of room temperature. The controller ∑m i=1 bi (x)φi (t) represents m different heaters that are placed at different locations of the room and the measured output om represents l different thermosensors that are placed at different locations. In Section 4.6, we will show how to formulate this control problem into a control system similar to the finite dimensional system (3.1)-(3.2). Let the reference output or (x) = r(x). Since u|Γ = 0, r should satisfy r|Γ = 0. If the controls φi regulate u to r, then φi and u will converge to φ¯i and r, respectively, and ut will converge to zero. Thus the control steady states φ¯i satisfy m
μ ∇2 r + ∇ · (rv) + ar + ∑ bi (x)φ¯i = 0. i=1
Introducing new variables
ψi = φi − φ¯i ,
w = u − r, and
η m = om (u) − om (r) =
ω1o
ηc = oc − or = u − r = w,
h1 (x)w(x,t)dV, · · · ,
ωlo
hl (x)w(x,t)dV ,
we then transform the control problem to m ∂w = μ ∇2 w + ∇ · (wv) + aw + ∑ bi (x)ψi (t), ∂t i=1
ηc = w(x,t), ηm = h1 (x)w(x,t)dV, · · · , ω1o
ωlo
hl (x)w(x,t)dV
with the zero reference output. Therefore, in what follows, we assume that the reference output is zero. The control problem is to design a feedback controller φ to regulate the controlled output oc to zero, that is, the solution u of the problem (4.20)-(4.22) converges to zero in some function space. Note that zero is the equilibrium of the problem (4.20)(4.22) with zero steady-state control. Hence, as in the case of finite dimensional systems, the control problem is to stabilize the equilibrium.
4.2.1 State Feedback Controls We assume that the state u on the whole domain Ω is available for feedback and design a controller of the form φ = F(u), (4.25) where F is a linear operator from L2 (Ω ) to Rm , called a feedback operator. Definition 4.3. If there is a state feedback control φ = F(u) such that the closed-loop system (4.20)-(4.22) is exponentially stable, we say that the reaction-convectiondiffusion equation is exponentially stabilizable by interior feedback. We first consider the case where v = 0. In this case, the eigenfunctions of (2.11)(2.12) are orthogonal and then we can use them to decompose the reaction-diffusion equation into a finite dimensional control system and an infinite dimensional system. Let λ1 > λ2 > · · · > λn > · · · with lim λn = −∞ be the eigenvalues of (2.11)n→∞ ψ , ψ , · · · , ψ be n normalized (by normalized we mean that (2.12) and let in i i1 i2 i 2 Ω |ψi j | dV = 1) orthogonal eigenfunctions corresponding to the eigenvalue λi . Consider the eigenfunction expansion of u ∞
u= u0 =
ni
∑ ∑ ci j (t)ψi j (x),
(4.26)
∑ ∑ c0i j ψi j (x),
(4.27)
i=1 j=1 ∞ ni i=1 j=1
where ci j (t) = c0i j =
Ω Ω
u(x,t)ψi j (x)dV,
(4.28)
u0 (x)ψi j (x)dV.
(4.29)
Substituting (4.26) into (4.20), we obtain (noting that we are assuming v = 0) ∞
∞ ni ∞ ni dci j (t) ψi j (x) = μ ∑ ∑ ci j (t)∇2 ψi j (x) + ∑ ∑ ci j (t)a(x)ψi j (x) dt i=1 j=1 i=1 j=1 i=1 j=1 ni
∑∑
m
+ ∑ b p (x)φ p (t) =
∞
p=1 ni
m
∑ ∑ ci j (t)λi ψi j (x) + ∑ b p(x)φ p (t).
i=1 j=1
p=1
Multiplying the equation by ψi j (x) and integrating over Ω , we obtain m dci j = λi ci j + ∑ bi j p φ p (t), dt p=1
(4.30)
ci j (0) = c0i j ,
i = 1, 2, · · · ; j = 1, 2, · · · , ni ,
where bi j p =
Ω
(4.31)
ψi j (x)b p (x)dV.
Denote

ci = [ci1, ci2, ···, c_{i ni}]ᵀ,   ci⁰ = [c_{i1}⁰, ···, c_{i ni}⁰]ᵀ,

cf = [c1ᵀ, c2ᵀ, ···, cNᵀ]ᵀ,   cf⁰ = [c1⁰ᵀ, ···, cN⁰ᵀ]ᵀ,

c∞ = [c_{N+1}ᵀ, c_{N+2}ᵀ, ···]ᵀ,   c∞⁰ = [c_{N+1}⁰ᵀ, c_{N+2}⁰ᵀ, ···]ᵀ,

Λi = λi I (the ni × ni diagonal matrix with λi on the diagonal),

Λf = diag(Λ1, Λ2, ···, ΛN),   Λ∞ = diag(Λ_{N+1}, Λ_{N+2}, ···),

Bi = (b_{ijp}) (the ni × m matrix with entry b_{ijp} in row j and column p),

Bf = [B1ᵀ, B2ᵀ, ···, BNᵀ]ᵀ,   B∞ = [B_{N+1}ᵀ, B_{N+2}ᵀ, ···]ᵀ,

and φ = [φ1, φ2, ···, φm]ᵀ.
Then (4.30) can be decomposed into a finite dimensional control system

dcf/dt = Λf cf + Bf φ,   cf(0) = cf⁰,

and an infinite dimensional control system
(4.32) (4.33)
dc∞/dt = Λ∞ c∞ + B∞ φ,   c∞(0) = c∞⁰.
(4.34) (4.35)
If N is large enough, the eigenvalues in Λ ∞ are negative. So the system (4.34) is exponentially stable. Thus the original control problem (4.20) is reduced to the stabilization of the finite dimensional system (4.32). Lemma 4.2. If Rank(Bi ) = ni for i = 1, 2, · · · , N, then the pair (Λ f , B f ) is controllable. Proof. By PHB test (Theorem 3.5), it suffices to show that N
Rank[λ I − Λ f B f ] = ∑ ni i=1
for any λ ∈ ℂ. This is true if λ ≠ λ1, ···, λN. Let λ = λi, 1 ≤ i ≤ N. Since Rank(Bi) = ni, there exists an ni × ni sub-matrix bi of Bi such that det(bi) ≠ 0. Let P denote the Σ_{i=1}^N ni × Σ_{i=1}^N ni matrix λI − Λf with λiI − Λi replaced by bi, that is,
λi I − Λ 1 0 ⎢ 0 λ I − Λ2 i ⎢ ⎢ .. .. ⎢ . . ⎢ ⎢ 0 0 P=⎢ ⎢ 0 0 ⎢ ⎢ 0 0 ⎢ ⎢ ⎣ 0 0 0 0
··· ··· .. .
0 0 .. .
0 0 .. .
0 0 .. .
··· ··· .. .
· · · λi I − Λ i−1 0 0 ··· ··· 0 bi 0 ··· ··· 0 0 λi I − Λ i+1 · · · .. . ··· 0 0 0
···
0
0
0
0 0 .. . 0 0 0
0 · · · λi I − Λ N
⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥. ⎥ ⎥ ⎥ ⎥ ⎥ ⎦
It is easy to see that

det(P) = (λi − λ1)^{n1} ··· (λi − λ_{i−1})^{n_{i−1}} det(bi) (λi − λ_{i+1})^{n_{i+1}} ··· (λi − λN)^{nN} ≠ 0
and so Rank[λI − Λf  Bf] = Rank(P) = Σ_{i=1}^N ni.
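As an illustration of how Lemma 4.2 is used together with Theorem 3.8, one can place the poles of a small pair (Λf, Bf) numerically; the matrices below are made-up examples with a repeated eigenvalue of multiplicity n1 = 2, not data from the text.

import numpy as np
from scipy.signal import place_poles

Lambda_f = np.diag([1.0, 1.0, -0.5])          # lambda_1 repeated (n_1 = 2), then lambda_2
B_f = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.3, 0.2]])                  # Rank(B_1) = 2, so (Lambda_f, B_f) is controllable

K = place_poles(Lambda_f, B_f, [-2.0, -2.5, -3.0]).gain_matrix
closed_loop = Lambda_f - B_f @ K
print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(closed_loop), 4))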
Using the design method of feedback controls for finite dimensional control systems (Theorem 3.8), we can now derive a state feedback control law for the reaction-diffusion equation (4.20).

Theorem 4.6. Assume that v = 0. Let λ1 > λ2 > ··· > λn > ··· with lim_{n→∞} λn = −∞ be the eigenvalues of (2.11)-(2.12) and let ψi1, ψi2, ···, ψ_{i ni} be ni normalized orthogonal eigenfunctions corresponding to the eigenvalue λi. Let γ > 0 be a given number and N an integer such that λN ≥ −γ > λ_{N+1}. If Rank(Bi) = ni for i = 1, 2, ···, N, then there exists a state feedback control
φp(t) = Fp(u(t)) = Σ_{i=1}^N Σ_{j=1}^{ni} K_{p,q(i,j)} ∫_Ω ψij(x) u(x,t) dV,   p = 1, ···, m,    (4.36)

such that the solution of (4.20) satisfies

∫_Ω |u(x,t)|² dV ≤ M e^{−2γt} ∫_Ω |u0(x)|² dV,    (4.37)

where q(i,j) = Σ_{r=1}^{i−1} nr + j, K = (Kpq) is an m × Σ_{i=1}^N ni feedback gain matrix and M
is a positive constant. Proof. Let L =
N
∑ ni . For any numbers μ1 , μ2 , · · · , μL < −γ , it follows from The-
i=1
orem 3.8 and Lemma 4.2 that there exists a feedback gain matrix K = (K pq ) such that the eigenvalues of Λ f − B f K are equal to μ1 , μ2 , · · · , μL . This means that the solution of the finite dimensional system (4.32) with the state feedback control
φ = −Kc f satisfies
N
ni
N
(4.38)
ni
∑ ∑ c2i j (t) ≤ M ∑ ∑ c2i j (0)e−2γt ,
i=1 j=1
(4.39)
i=1 j=1
where M is a positive constant that may change from line to line. It then follows from (4.38) that m
ni
N
ni
N
∑ φ p2 (t) ≤ M ∑ ∑ c2i j (t) ≤ M ∑ ∑ c2i j (0)e−2γt .
p=1
i=1 j=1
(4.40)
i=1 j=1
To estimate c∞ , we solve (4.34) to obtain m
t
p=1
0
ci j (t) = eλit ci j (0) + ∑ bi j p
eλi (t−s) φ p (s)ds.
It then follows from (4.40) that
∞
ni
1/2
∑ ∑ c2i j (t)
∞
1/2
ni
∑ ∑ e2λit c2i j (0)
≤
i=N+1 j=1
(4.41)
i=N+1 j=1
⎛
+⎝
∞
ni
≤
∞
m
∑ ∑ ∑ bi j p
i=N+1 j=1
ni
p=1
0
1/2
∑ ∑ e2λit c2i j (0)
i=N+1 j=1
t
2 ⎞1/2 eλi (t−s) φ p (s)ds ⎠ N
ni
+ M ∑ ∑ c2i j (0)e−γ t . i=1 j=1
Since Ω
|u(x,t)|2 dV =
∞
ni
∑ ∑ c2i j (t),
i=1 j=1
Ω
|u0 (x)|2 dV =
∞
ni
∑ ∑ c2i j (0),
i=1 j=1
(4.37) follows from (4.39) and (4.41). Since ci j (t) =
ψi j (x)u(x,t)dV,
Ω
the state feedback control (4.36) follows from (4.38).

Example 4.2. We set v = 0, μ = 0.5, a = 3μπ², b(x,y) = 1, u0(x,y) = 5xy(1−x)(1−y) and the domain Ω = (0,1) × (0,1). In this case, the eigenvalue problem (2.11)-(2.12) has a positive eigenvalue λ1 = a − 2μπ² = μπ² with the eigenfunction ψ11 = 2 sin πx sin πy. To verify the conditions of Theorem 4.6, we calculate

b111 = ∫_0^1 ∫_0^1 ψ11(x,y) b(x,y) dx dy = 2 ∫_0^1 ∫_0^1 sin πx sin πy dx dy = 8/π².
Thus the conditions of Theorem 4.6 are satisfied. The finite dimensional system (4.32) becomes

dc/dt = μπ² c + (8/π²) φ(t).

Therefore the feedback control

φ(t) = −Kc = −K ∫_0^1 ∫_0^1 u(x,y,t) sin πx sin πy dx dy,   K > μπ⁴/8    (4.42)

exponentially stabilizes the equation:

∂u/∂t = (1/2)∇²u + (3/2)π²u − K ∫_0^1 ∫_0^1 u(x,y,t) sin πx sin πy dx dy,
u(0,y) = u(1,y) = u(x,0) = u(x,1) = 0.
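A quick numerical check of the gain condition in (4.42), using the reduced modal equation dc/dt = μπ²c + (8/π²)φ derived above; the two test gains below are illustrative.

import numpy as np

mu = 0.5
K_min = mu * np.pi**4 / 8.0                    # threshold from (4.42)
for K in (0.5 * K_min, 2.0 * K_min):
    rate = mu * np.pi**2 - 8.0 * K / np.pi**2  # closed-loop coefficient of c
    status = "stable" if rate < 0 else "unstable"
    print(f"K = {K:6.3f}   dc/dt = {rate:+6.3f} * c   ({status})")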
Example 4.3. Point control. In Example 4.2, we take b(x, y) to be the Dirac-delta function b(x, y) = δ (x − 0.5, y − 0.5).
Such a control acting at one point is called a point control. By the definition of the Dirac-delta function, we have

∫_0^1 ∫_0^1 f(x,y) δ(x − 0.5, y − 0.5) dx dy = f(0.5, 0.5)

for any continuous function f. Thus

b111 = ∫_0^1 ∫_0^1 ψ11(x,y) δ(x − 0.5, y − 0.5) dx dy
     = 2 ∫_0^1 ∫_0^1 sin πx sin πy δ(x − 0.5, y − 0.5) dx dy
     = 2 sin(π/2) sin(π/2) = 2,

and then the conditions of Theorem 4.6 are satisfied. The finite dimensional system (4.32) becomes

dc/dt = μπ² c + 2φ(t).

Then the feedback control

φ(t) = −Kc = −K ∫_0^1 ∫_0^1 u(x,y,t) sin πx sin πy dx dy,   K > μπ²/2    (4.43)

exponentially stabilizes the equation:

∂u/∂t = (1/2)∇²u + (3/2)π²u − K δ(x − 0.5, y − 0.5) ∫_0^1 ∫_0^1 u(x,y,t) sin πx sin πy dx dy,
u(0,y) = u(1,y) = u(x,0) = u(x,1) = 0.
We now consider the case where v ≠ 0. Readers who are not familiar with semigroup theory can skip this case. By Theorem 2.7, we have the decomposition

L²(Ω) = HN ⊕ H∞,
(4.44)
where HN is a finite dimensional space containing all eigenfunctions corresponding to the eigenvalues λ1 , λ2 , · · · , λN and H∞ is an infinite dimensional space containing all eigenfunctions corresponding to the eigenvalues λN+1 , λN+2 , · · · . Moreover, HN and H∞ are invariant under the operator A defined by (4.9), that is, A(HN ) ⊂ HN and A(H∞ ∩ D(A)) ⊂ H∞ . Therefore we can still decompose the reaction-convectiondiffusion equation into a finite dimensional control system and an infinite dimensional system. Usually, the space L2 (Ω ) is decomposed in the way such that HN contains all eigenfunctions whose eigenvalues may have positive real parts and H∞ contains all eigenfunctions whose eigenvalues have negative real parts. In this case, HN is called the unstable (or slow) manifold and H∞ is called the stable (or fast) manifold.
We define the restriction AN of A to HN and the restriction A∞ of A to H∞ by AN u = Au A∞ u = Au
for every u ∈ HN , for every u ∈ H∞ ∩ D(A).
Theorem 4.7. Let λ1 , λ2 , · · · be the eigenvalues of the operator A defined by (4.9) ordered in their real parts: Re(λ1 ) > Re(λ2 ), · · · . Then the restrictions AN , A∞ are the infinitesimal generators of analytic semigroups eAN t , eA∞t on HN , and H∞ , respectively. Moreover, for any γ > Re(λN+1 ), there exist constants M1 , M2 (γ ) such that eAN t ≤ M1 eRe(λ1 )t ,
(4.45)
≤ M2 e .
(4.46)
e
A∞t
γt
This theorem is a consequence of Theorem 2.21. For the detailed proof, we refer to [42, p.30, Theorem 1.5.3]. For the direct composition L2 (Ω ) = HN ⊕ H∞ , we define the projection PN from 2 L (Ω ) onto HN and the projection P∞ from L2 (Ω ) onto H∞ by PN u = uN ,
P∞ u = u∞ ,
where u = uN + u∞ with uN ∈ HN and u∞ ∈ H∞ . Evidently we have P∞ = I − PN . Let u0,N = PN u0 and u0,∞ = P∞ u0 . Applying the projection PN to (4.20), we obtain a finite dimensional system on HN m ∂ uN = μ ∇2 uN + ∇ · (uN v) + auN + ∑ PN bi (x)φi (t), ∂t i=1
uN = 0
on Γ ,
(4.47) (4.48)
uN (x, 0) = u0,N (x),
(4.49)
and an infinite dimensional system on H∞ m ∂ u∞ = μ ∇2 u∞ + ∇ · (u∞v) + au∞ + ∑ P∞ bi (x)φi (t), ∂t i=1
u∞ = 0
on Γ ,
(4.50) (4.51)
u∞ (x, 0) = u0,∞ (x).
(4.52)
Consider the eigenfunction expansions uN =
N
ni
∑ ∑ ci j (t)ψi j (x),
(4.53)
i=1 j=1
u0,N =
N
ni
∑ ∑ c0i j ψi j (x),
(4.54)
i=1 j=1
PN b p =
N
ni
∑ ∑ bi j pψi j (x),
i=1 j=1
(4.55)
where ψi j are eigenfunctions corresponding the eigenvalue λi . Substituting these expressions into (4.47) and using the independence of the eigenfunctions, we obtain m dci j = λi ci j + ∑ bi j p φ p (t), dt p=1
ci j (0) = c0i j ,
(4.56)
i = 1, 2, · · · , N; j = 1, 2, · · · , ni .
(4.57)
This system is the same as (4.32) except that bi j p is defined by (4.55). To derive a state feedback control for (4.20), we need to solve the system (4.53) for the coefficients ci j . Thus we multiply (4.53) by ψ pq and integrate over Ω to obtain the system
ni
N
Ω
uN (x)ψ pq (x)dV = ∑ ∑ ci j (t)
i=1 j=1
Ω
ψi j (x)ψ pq (x)dV,
(4.58)
where p = 1, 2, · · · , N and q = 1, 2, · · · , ni . This system has the solution c f = Ψ −1 UN ,
(4.59)
where the matrix Ψ and the column vector UN is defined by
Ψr(p,q),r(i, j) =
Ω
ψi j (x)ψ pq (x)dV,
Ψ = (Ψr(p,q),r(i, j)), UN,r(p,q) =
Ω
uN (x)ψ pq (x)dV,
UN = (U_{N,r(p,q)}), with n0 = 0 and r(i,j) = Σ_{p=1}^i n_{p−1} + j. Note that Ψ is a Σ_{i=1}^N ni × Σ_{i=1}^N ni matrix and UN is a Σ_{i=1}^N ni dimensional column vector.

Theorem 4.8. Let λ1, λ2, ··· be the eigenvalues of the operator A defined by (4.9), ordered by their real parts: Re(λ1) > Re(λ2) > ···. Let ψi1, ψi2, ···, ψ_{i ni} be ni eigenfunctions corresponding to the eigenvalue λi. Let γ > 0 be a given number and N an integer such that Re(λN) ≥ −γ > Re(λ_{N+1}). If Rank(Bi) = ni for i = 1, 2, ···, N, there exists a state feedback control
φ = −Kc f = −KΨ −1 UN
(4.60)
such that the solution of (4.20) satisfies u(t)L2 ≤ Mu0 L2 e−γ t ,
(4.61)
where K = (Ki j ) is an m× ∑Ni=1 ni feedback gain matrix and M is a positive constant.
Proof. Let L = ∑Ni=1 ni . For any numbers μ1 , μ2 , · · · , μL < −γ , it follows from Theorem 3.8 and Lemma 4.2 that there exists a feedback gain matrix K = (Ki j ) such that the eigenvalues of Λ f − B f K are equal to μ1 , μ2 , · · · , μL . This means that there exists a constant M1 > 0 such that the solution of the finite dimensional system (4.56) with the state feedback control (4.60) satisfies
N
ni
∑∑
1/2 c2i j (t)
N
ni
∑∑
≤M
i=1 j=1
1/2 c2i j (0)
e− γ t .
(4.62)
i=1 j=1
Here M denotes a generic constant that may change from line to line. It then follows from (4.60) that (4.63) φ ≤ MuN (0)L2 e−γ t . The estimate (4.62) is equivalent to uN (t)L2 ≤ MuN (0)L2 e−γ t .
(4.64)
To estimate u∞ , we can use (2.66) to rewrite the infinite system (4.50)-(4.52) as u∞ = eA∞t u0,∞ +
m
∑
t
p=1 0
eA∞ (t−s) P∞ b p φ p (s)ds.
It then follows from (4.46) and (4.63) that for −γ > −γ1 > Re(λN+1 ) u∞ (t)L2 ≤ u∞ (0)L2 e−γ1t +
m
∑
t
p=1 0
eA∞(t−s) P∞ b p φ p (s)L2 ds
≤ u∞ (0)L2 e−γ1t + MuN (0)L2
t
e−γ1 (t−s) e−γ s ds
0
' ( M uN (0)L2 e−γ t − e−γ1t γ1 − γ M uN (0)L2 e−γ t ≤ u∞ (0)L2 e−γ1t + γ1 − γ ≤ Mu(0)L2 e−γ t . = u∞ (0)L2 e−γ1t +
(4.65)
Thus (4.61) follows from (4.64) and (4.65). There is a difficulty in computing the controller (4.60): to compute it, we need the projection operator PN, but the expression (2.22) for PN is difficult to evaluate.
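As an illustration of the reconstruction step (4.58)-(4.59), the following sketch assembles the Gram matrix Ψ and solves for cf on a one-dimensional grid; the "eigenfunctions" used here are hypothetical stand-ins chosen only to make the family non-orthogonal.

import numpy as np

x = np.linspace(0.0, 1.0, 1001)
psi = [np.sin(np.pi * x),
       np.sin(np.pi * x) + 0.3 * np.sin(2 * np.pi * x),
       np.sin(3 * np.pi * x)]                      # non-orthogonal family (illustrative)
c_true = np.array([1.0, -0.5, 0.25])
u_N = sum(c * p for c, p in zip(c_true, psi))      # u_N = sum_k c_k psi_k

# Gram matrix Psi and right-hand side U_N, as in (4.58)
Psi = np.array([[np.trapz(pi * pj, x) for pj in psi] for pi in psi])
U_N = np.array([np.trapz(u_N * p, x) for p in psi])

c_f = np.linalg.solve(Psi, U_N)                    # c_f = Psi^{-1} U_N, as in (4.59)
print("recovered c_f:", np.round(c_f, 6))          # matches c_true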
4.2.2 Output Feedback Controls In reality, since only finite sensors are placed in the domain, only the output om (u) is available for feedback. To design an output feedback controller, we need to consider
the output injection equation
∂u = μ ∇2 u + ∇ · (uv) + au − K · om(u) in Ω , ∂t u|Γ = 0,
(4.66) (4.67)
where K = K(x) = (K1 (x), · · · , Kl (x)) is an output injection vector. Definition 4.4. If there is an output injection vector K such that (4.66)-(4.67) is exponentially stable, we say that the reaction-convection-diffusion equation is exponentially detectable by interior output injection. We show that the reaction-convection-diffusion is exponentially detectable. We look for the output injection vector K = (K1 , · · · , Kl ) in (HN )l . Thus we assume that N
ni
K p = ∑ ∑ k i j p ψi j ,
p = 1, · · · , l.
(4.68)
i=1 j=1
Applying the projection PN to (4.66), we obtain a finite dimensional output injection system on HN
∂ uN = μ ∇2 uN + ∇ · (uN v) + auN − K · om(u), ∂t uN = 0 on Γ ,
(4.69) (4.70)
and an infinite dimensional system on H∞
∂ u∞ = μ ∇2 u∞ + ∇ · (u∞v) + au∞, ∂t u∞ = 0 on Γ .
(4.71) (4.72)
Substituting the expression (4.53) into (4.69) and using the independence of the eigenfunctions, we obtain l N nr dci j = λi ci j − ∑ ki j p ∑ ∑ crs h prs + h p u∞ dV , (4.73) dt ω po p=1 r=1 s=1 where i = 1, 2, · · · , N, j = 1, 2, · · · , ni , and h prs = Define
⎡
ki11 ki12 ⎢ ki21 ki22 ⎢ Ki = ⎢ . .. ⎣ .. . kini 1 kini 2
··· ··· .. .
ω po
h p ψrs dV.
ki1l ki2l .. .
· · · kini l
⎤ ⎥ ⎥ ⎥, ⎦
(4.74) ⎡
⎤ K1 ⎢ K2 ⎥ ⎢ ⎥ Kf = ⎢ . ⎥, ⎣ .. ⎦ KN
4.2 Interior Feedback Stabilization
⎡
and
h1i1 ⎢ h2i1 ⎢ Hi = ⎢ . ⎣ ..
h1i2 h2i2 .. .
hli1 hli2
141
⎤ · · · h1ini · · · h2ini ⎥ ⎥ , . . .. ⎥ . . ⎦ · · · hlini
H f = [H1 , H2 · · · , HN ] .
Theorem 4.9. Let λ1 , λ2 , · · · be the eigenvalues of the operator A defined by (4.9) ordered in their real parts: Re(λ1 ) > Re(λ2 ), · · · . Let ψi1 , ψi2 , · · · , ψini be ni eigenfunctions corresponding to the eigenvalue λi . Let γ > 0 be a given number and N an integer such that Re(λN ) ≥ −γ > Re(λN+1 ). If Rank(Hi ) = ni for n = 1, 2, · · · , N, then there exists an output injection vector K(x) in (HN )l such that the solution of (4.66)-(4.67) satisfies u(t)L2 ≤ Mu0 L2 e−γ t , (4.75) where M is a positive constant. Proof. By (4.46), we deduce that u∞(t)L2 ≤ Mu(0)L2 e−γ t .
(4.76)
Let L = ∑Ni=1 ni . For any numbers μ1 , μ2 , · · · , μL < −γ , it follows from Theorem 3.8 and Lemma 4.2 that there exists a feedback gain matrix K f such that the eigenvalues of Λ f − K f H f are equal to μ1 , μ2 , · · · , μL . It therefore follows from Theorem 2.25 that there exists a constant M > 0 such that the solution of the finite dimensional system (4.73) satisfies
N
1/2
ni
∑∑
c2i j (t)
≤ Mu(0)L2
i=1 j=1
N
ni
∑∑
1/2 c2i j (0)
e− γ t .
(4.77)
i=1 j=1
The estimate (4.77) is equivalent to uN (t)L2 ≤ Mu(0)L2 e−γ t .
(4.78)
Thus (4.75) follows from (4.76) and (4.78). To design an output feedback controller, we consider the following state observer m ∂w = μ ∇2 w + ∇ · (wv) + aw + ∑ bi (x)φi (t) ∂t i=1
+K · (om(u) − om (w)) w|Γ = 0, w(x, 0) = 0.
in Ω ,
(4.79) (4.80) (4.81)
We then use the estimate w for feedback and introduce the observer-based output feedback controller φ = F(w) = (F1 (w), · · · , Fm (w)),
which leads to the observer-based output feedback control system m ∂u = μ ∇2 u + ∇ · (uv) + au + ∑ bi (x)Fi (w) in Ω , ∂t i=1
(4.82)
m ∂w = μ ∇2 w + ∇ · (wv) + aw + ∑ bi (x)Fi (w) ∂t i=1
+K · (om(u) − om (w)) u|Γ = w|Γ = 0.
in Ω ,
(4.83) (4.84)
Introducing the error e = u − w, we then transform the above system to m m ∂u = μ ∇2 u + ∇ · (uv) + au + ∑ bi (x)Fi (u) − ∑ bi (x)Fi (e), ∂t i=1 i=1
∂e = μ ∇2 e + ∇ · (ev) + ae − K · om(e) in Ω , ∂t u|Γ = e|Γ = 0.
(4.85) (4.86) (4.87)
Using this error equation and combing Theorems 2.25, 4.8 and 4.9, we can prove the following theorem. Theorem 4.10. Let λ1 , λ2 , · · · be the eigenvalues of the operator A defined by (4.9) ordered in their real parts: Re(λ1 ) > Re(λ2 ), · · · . Let ψi1 , ψi2 , · · · , ψini be ni eigenfunctions corresponding to the eigenvalue λi . Let γ > 0 be a given number and N an integer such that Re(λN ) ≥ −γ > Re(λN+1 ). If Rank(Bi ) = Rank(Hi ) = ni for n = 1, 2, · · · , N, then there exists an output injection vector K in (HN )l and an output feedback control φ = F(w) (4.88) such that the solution of (4.82)-(4.84) satisfies u(t)L2 ≤ Mu0 L2 e−γ t ,
(4.89)
where M is a positive constant . Example 4.4. Consider the control problem from Example 4.2. For the measured output, we assume that l = 1, ω1o = (0, 1/2) × (0, 1/2), and h1 = 1. From Example 4.2, we know that the eigenvalue problem (2.11)-(2.12) has a positive eigenvalue λ1 = a − 2μπ 2 = μπ 2 with the eigenfunction ψ11 = 2 sin π x sin π y. Thus the observer is given by
1/2 1 3 ∂w = ∇2 w + π 2 w + φ + K(x, y) ∂t 2 2 0 w(0, y) = w(1, y) = w(x, 0) = w(x, 1) = 0.
Using (4.74), we calculate
1/2 0
(u − w)dxdy,
(4.90) (4.91)
4.2 Interior Feedback Stabilization
h111 =
143
1/2 1/2 0
=2 =
0
ψ11 (x, y)h1 (x, y)dxdy
1/2 1/2 0
0
sin π x sin π ydxdy
2 . π2
Thus the conditions of Theorem 4.10 are satisfied. The finite dimensional system (4.73) becomes 2 dc = μπ 2 c − 2 kc. dt π Then the injection constant k > μπ2 makes this system exponentially detectable. It then follows from (4.68) that the injection function 4
K(x, y) = kψ11 = 2k sin π x sin π y makes the observer (4.90) exponentially stable. Combining with Example 4.2, we obtain the observer-based output feedback control system
1 1 1 3 ∂u = ∇2 u + π 2 u − K w(x, y,t) sin π x sin π ydxdy, ∂t 2 2 0 0 1 1 1 3 ∂w = ∇2 w + π 2 w − K w(x, y,t) sin π x sin π ydxdy ∂t 2 2 0 0
+2k sin π x sin π y
1/2 1/2 0
0
(u − w)dxdy,
u(0, y) = u(1, y) = u(x, 0) = u(x, 1) = 0, w(0, y) = w(1, y) = w(x, 0) = w(x, 1) = 0, K >
μπ 4 μπ 4 , k> , 8 2
which is exponentially stable. The structure of the feedback control in (4.20) can be proposed in different ways. For instance, we can consider the interior control problem of the following structure m ∂u = μ ∇2 u + ∇ · (uv) + au + ∑ φi (x) ∂t i=1
u=0
Γ
bi (x)u(x,t)dS,
on Γ ,
u(x, 0) = u0 (x), where bi ∈ L2 (Γ ) and φi ∈ L2 (Ω ) are functions to be designed to stabilize the problem. In this control problem, the measured output is assumed to be the temperature on the boundary. This problem requires advanced mathematics. For details, we refer to [102].
Exercises 4.2 1. Consider the control problem
∂u ∂ 2u = 2 + 2u + χ[0,π /2](x)φ1 (t) + χ[π /2,π ](x)φ2 (t), ∂t ∂x u(0,t) = u(π ,t) = 0, u(x, 0) = u0 . a. Design a state feedback controller to stabilize the equilibrium 0. b. Assume that the measured output is given by ε π om = u(x,t)dx, u(x,t)dx , 0
π −ε
where 0 < ε < π /2. Design an output feedback controller to stabilize the equilibrium 0. 2. Generalize Theorem 4.6 to the Neumann boundary case: m ∂u = μ ∇2 u + au + ∑ bi (x)φi (t) in Ω , ∂t i=1
∂ u
= 0, ∂n
Γ
u(x, 0) = u0 (x), where φ1 , · · · , φm are control inputs, and b1 , · · · bm are given functions. 3. Consider the control problem
∂u ∂ 2u = + 2u + cos(x)φ1 (t) + cos(2x)φ2 (t), ∂t ∂ x2 ∂u ∂u (0,t) = (π ,t) = 0, ∂x ∂x u(x, 0) = u0 . Design a state feedback control to stabilize the equilibrium 0. If the control cos(x)φ1 (t) + cos(2x)φ2 (t) is changed to sin(x)φ1 (t) + sin(2x)φ2 (t), is it possible to find a state feedback control to stabilize the equilibrium 0? 4. Consider the control problem 2 ∂u ∂ u ∂ 2u ∂u ∂u =μ − v2 + 2 − v1 2 ∂t ∂x ∂y ∂x ∂y 2 + v2 v +μ 3π 2 + 1 2 2 u + φ (t), 4μ
u(0, y,t) = u(1, y,t) = u(x, 0,t) = u(x, 1,t) = 0, u(x, y, 0) = u0 (x, y), where v1 and v2 are constants. a. Show that if φ = 0, the zero equilibrium of the equation is unstable. b. Find a feedback control φ to exponentially stabilize the zero equilibrium. c. Assume that the measured output is given by ε ε ε 1 dy u(x, y,t)dx, dy u(x, y,t)dx, om = 1−ε 0 0 0 1 1 1 ε dy u(x, y,t)dx, dy u(x, y,t)dx , 1−ε
1−ε
1−ε
0
where 0 < ε < 1/2. Design an output feedback controller to stabilize the equilibrium 0. d. Solve the controlled equation numerically to verify the stability. 5. Prove Theorem 4.10.
4.3 Boundary Feedback Stabilization Consider the boundary control problem
∂u = μ ∇2 u + ∇ · (uv) + au in Ω , ∂t u=
m
∑ bi (x)φi (t)
on Γ ,
(4.92) (4.93)
i=1
u(x, 0) = u0 (x),
(4.94)
where b1 , · · · bm are given functions which prescribe how control actions are distributed over the boundary. If there is a feedback control φi = Fi (u) such that the closed-loop system (4.92)-(4.94) is exponentially stable, we say that the reactionconvection-diffusion equation is exponentially stabilizable by boundary feedback. The boundary control problem is more difficult than the interior control.
4.3.1 State Feedback Controls 4.3.1.1 Eigenfunction expansion We first consider the case where v = 0. In this case, the eigenfunctions of (2.11)(2.12) are orthogonal. This enables us to use the eigenfunction expansion (4.26) to decompose the control problem into a finite dimensional control system and
an infinite dimensional control system. Since u does not satisfy the homogeneous boundary conditions, term-by-term differentiation with respect to the spatial variables is not justified, that is,

∇²u ≠ Σ_{i=1}^∞ Σ_{j=1}^{ni} cij(t) ∇²ψij(x).
Then we cannot substitute (4.26) into the right hand side of (4.92). However, termby-term time differentiations are valid. Thus we can substitute (4.26) into the left hand side of (4.92) to obtain (noting that we are assuming v = 0) ∞
dci j (t) ψi j (x) = μ ∇2 u + au. dt i=1 j=1 ni
∑∑
Multiplying the equation by ψi j and integrating over Ω by parts, we obtain ' ( dci j = μ ∇2 u + au ψi j dV dt Ω = (−μ ∇u∇ψi j + auψi j ) dV Ω
=−
Γ m
μu
∂ ψi j dS + ∂n
Ω
( ' u μ ∇2 ψi j + aψi j dV
∂ ψi j dS + λi = − ∑ φ p (t) μ b p ∂n Γ p=1 = λ i ci j +
Ω
uψi j dV
m
∑ bi j pφ p(t),
p=1
where bi j p = −
Γ
μ bp
∂ ψi j dS. ∂n
(4.95)
Thus we obtain m dci j = λi ci j + ∑ bi j pφ p (t), dt p=1
ci j (0) = c0i j ,
i = 1, 2, · · · ; j = 1, 2, · · · , ni .
(4.96) (4.97)
Since this system is the same as (4.30) except that b_{ijp} is defined by (4.95), we have the same result as in the case of the interior control problem.

Theorem 4.11. Let λ1 > λ2 > ··· > λn > ··· with lim_{n→∞} λn = −∞ be the eigenvalues of (2.11)-(2.12) and let ψi1, ψi2, ···, ψ_{i ni} be ni normalized orthogonal eigenfunctions corresponding to the eigenvalue λi. Let γ > 0 be a given number and N an integer such that λN ≥ −γ > λ_{N+1}. If Rank(Bi) = ni, with b_{ijp} defined by (4.95), for i = 1, 2, ···, N, then there exists a state feedback control
4.3 Boundary Feedback Stabilization ni
N
147
φ p (t) = ∑ ∑ K p,q(i, j) i=1 j=1
Ω
ψi j (x)u(x,t)dV,
p = 1, · · · , m.
(4.98)
such that the solution of (4.92)-(4.93) satisfies Ω
|u(x,t)|2 dV ≤ Me−2γ t
Ω
|u0 (x)|2 dV,
(4.99)
N
where q(i, j) = ∑i−1 r=1 nr + j, K = (K pq ) is an m × ∑ ni feedback gain matrix and M i=1
is a positive constant. Example 4.5. Consider the control problem
∂u ∂ 2u = + 2u, ∂t ∂ x2 u(0,t) = φ (t), u(π ,t) = 2φ (t).
(4.100) (4.101)
The eigenvalues and eigenfunctions related to this problem are 2 λn = 2 − n, ψn (x) = sin(nx), n = 1, 2, · · · π Using (4.95), we calculate bi j p as follows: 2 2 b111 = 3 , b211 = −2 . π π Then the conditions of Theorem 4.11 are satisfied. The finite dimensional control system is 2 dc1 = c1 + 3 φ, dt π dc2 2 = −2 φ. dt π Let Then
φ = −k1 c1 − k2 c2 .
- 2 2 1 − 3k1 c1 − 3k2 c2 , π π 2 dc2 =2 (k1 c1 + k2 c2 ). dt π dc1 = dt
The characteristic polynomial of the coefficient matrix is

λ² + ( 3k1 √(2/π) − 2k2 √(2/π) − 1 ) λ + 2k2 √(2/π) = 0.
If 3k1 √(2/π) > 2k2 √(2/π) + 1 and k2 > 0, then the real parts of the eigenvalues are negative.
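A quick numerical check of this condition for the gains chosen below (k1 = 4, k2 = 1), using the closed-loop coefficient matrix of the two-mode system derived above.

import numpy as np

r = np.sqrt(2.0 / np.pi)
k1, k2 = 4.0, 1.0
A = np.array([[1.0 - 3.0 * r * k1, -3.0 * r * k2],
              [2.0 * r * k1,        2.0 * r * k2]])    # closed-loop matrix
print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(A), 4))
print("condition 3*k1*sqrt(2/pi) > 2*k2*sqrt(2/pi) + 1:", 3 * k1 * r > 2 * k2 * r + 1)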
Thus the feedback control φ = −4c1 − c2 stabilizes the finite dimensional control system, and then the feedback control system
∂u/∂t = ∂²u/∂x² + 2u,
u(0,t) = −4 √(2/π) ∫_0^π sin(x) u(x,t) dx − √(2/π) ∫_0^π sin(2x) u(x,t) dx,
u(π,t) = −8 √(2/π) ∫_0^π sin(x) u(x,t) dx − 2 √(2/π) ∫_0^π sin(2x) u(x,t) dx

is exponentially stable.
If the velocity v ≠ 0, then the decomposition of the equation is difficult and methods from the inertial manifold theory, such as the Lyapunov-Perron method, are needed. For this complex case, we refer to [20, 54, 55, 76, 77], [97, Chapter 8], and [101]. For some simple velocities, we can transform the reaction-convection-diffusion equation to the reaction-diffusion equation. For example, if the ith component of v is a function of xi only, that is, vi = vi(xi), then we can use the change of variable

u = w ∏_{i=1}^n exp( −∫_0^{x_i} vi(s)/(2μ) ds )

to transform the reaction-convection-diffusion equation (4.92) to the following reaction-diffusion equation
∂w = μ ∇2 w + bw, ∂t where b is a function that depends on a, v, and μ (details are left as an exercise). Thus, in this case, the control problem can be solved by using the above theorem. If the velocity v = v(x,t) is time-dependent, then we no longer have the eigenvalue problem (2.11)-(2.12) and then it is even more difficult to decompose the reaction-convection-diffusion equations into a finite dimensional system and an infinite dimensional system [76]. In some simple cases, we can design feedback
controllers to stabilize the equation by using the method of integral transform introduced next. For details, we refer to [78]. 4.3.1.2 Integral transform method We now introduce the integral transform method (also called the backstepping method in the literature [48]). This method has been mainly used for onedimensional equations and there have been difficulties in extending it to higherdimensional equations. Thus we consider the one-dimensional reaction-diffusion equation ut = μ uxx + au
in (0, 1) × (0, ∞),
u(0,t) = 0, u(1,t) = φ u(x, 0) = u0 (x),
in (0, ∞),
(4.102) (4.103) (4.104)
where φ is a control to be designed. The subscripts ut , ux denote the derivatives of u with respect to t, x, respectively. Before we design a feedback control for the problem (4.102)-(4.104), we show that the reaction-diffusion equation wt = μ wxx − λ w in (0, 1) × (0, ∞), w(0,t) = w(1,t) = 0 in (0, ∞),
(4.105) (4.106)
w(x, 0) = w0 (x)
(4.107)
in (0, 1)
is exponentially stable, where λ is a positive constant. We first improve Poincar´e’s inequality (2.29) as follows L 0
|u|2 dx ≤
L2 π2
L 0
|ux |2 dx
for all u ∈ H01 (0, L)
(4.108)
and L 0
|ux |2 dx ≤
L2 π2
L 0
|uxx |2 dx
for all u ∈ H 2 (0, L) ∩ H01 (0, L).
In fact, u has the eigenfunction expansion ∞
∑ an sin
u=
n=1
nπ x . L
Differentiating u gives ux =
∞
∑ an
n=1 ∞
nπ x nπ cos , L L
uxx = − ∑ an n=1
nπ x n2 π 2 . sin L2 L
(4.109)
150
4 Linear Reaction-Convection-Diffusion Equation
It then follows that L 0
∞
|ux |2 dx =
n2 π 2 L2
∑ a2n
n2 π 2 L L2 2
n=1 ∞
=
n=1 π2 ∞
≥
L2
π L2
2
=
L
∑ a2n
cos2
0
nπ x dx L
L
∑ a2n 2
n=1 L 0
|u|2 dx,
and L 0
∞
|uxx |2 dx = = ≥
n4 π 4 L4
∑ a2n
n4 π 4 L L4 2
n=1 ∞
n=1 π2 ∞
=
L
∑ a2n
L2
π2 L2
0
nπ x dx L
n2 π 2 L L2 2
∑ a2n
n=1 L
sin2
0
|ux |2 dx.
Theorem 4.12. Assume that λ > 0 is a constant. Then, for arbitrary initial data w0(x) ∈ H₀¹(0,1), the solution of the problem (4.105)-(4.107) satisfies

‖w(t)‖_{H¹} ≤ ‖w0‖_{H¹} e^{−(λ+μπ²)t}.
(4.110)
Proof. Multiplying (4.105) by w, integrating over (0, 1) by parts, and using the boundary conditions on w, we deduce that 1 d 2 dt
1 0
|w(x,t)|2 dx = −μ = −μ
1 0
1 0
wwxx dx − λ
1 0
|wx (t)|2 dx − λ
' ≤ − λ + μπ 2
( 1
w2 dx
1
w2 dx
0
(4.111) (use (4.108))
w2 dx.
0
Multiplying (4.105) by wxx , integrating over (0, 1), and using the boundary conditions on w, we deduce that 1 d 2 dt
1 0
|wx (x,t)| dx = 2
1 0
=−
wx (x,t)wtx (x,t)dx
1 0
wt (x,t)wxx (x,t)dx
(4.112)
4.3 Boundary Feedback Stabilization
= −μ = −μ
1 0
1 0
151
1
|wxx |2 dx + λ
0
1
|wxx |2 dx − λ
' ≤ − λ + μπ 2
( 1
0
wwxx dx |wx |2 dx
(use (4.109))
|wx |2 dx.
0
Adding (4.111) and (4.112) together, we obtain

d/dt ( ‖w(t)‖²_{H¹} ) ≤ −2(λ + μπ²) ‖w(t)‖²_{H¹}.

It then follows from Lemma 4.1 that

‖w(t)‖²_{H¹} ≤ ‖w(0)‖²_{H¹} e^{−2(λ+μπ²)t}.
The key idea of finding a boundary feedback control to stabilize the problem (4.102)-(4.104) is to construct an integral transformation w(x,t) = u(x,t) +
x
k(x, y)u(y,t)dy
(4.113)
0
to convert the problem (4.102)-(4.104) to the exponentially stable problem (4.105)(4.107), where the kernel k is to be found. To find equations for the kernel k, we need to plug w into (4.105)-(4.107). So we compute the derivatives of w as follows. Differentiating w with respect to t, we obtain wt (x,t) = ut (x,t) + = ut (x,t) +
x 0 x 0
k(x, y)ut (y,t)dy
(use (4.102)
k(x, y)[μ uyy (y,t) + a(y)u(y,t)]dy
(Integration by parts twice) = ut (x,t) + μ k(x, x)ux (x,t) − μ k(x, 0)ux (0,t) − μ ky (x, x)u(x,t) + μ ky (x, 0)u(0,t) +
x 0
[μ kyy (x, y)u(y,t) + k(x, y)a(y)u(y,t)]dy.
(4.114)
Differentiating w with respect to x, we obtain wx (x,t) = ux (x,t) + k(x, x)u(x,t) +
x 0
kx (x, y)u(y,t)dy.
(4.115)
Differentiating wx with respect to x, we obtain wxx (x,t) = uxx (x,t) +
d (k(x, x))u(x,t) + k(x, x)ux (x,t) dx
+ kx (x, x)u(x,t) +
x
0
kxx (x, y)u(y,t)dy.
(4.116)
It then follows from (4.102) that wt − μ wxx + λ w = ut (x,t) + μ k(x, x)ux (x,t) − μ k(x, 0)ux (0,t) − μ ky (x, x)u(x,t) + μ ky (x, 0)u(0,t) +
x 0
[μ kyy (x, y)u(y,t) + k(x, y)a(y)u(y,t)]dy
− μ uxx (x,t) − μ
d (k(x, x))u(x,t) − μ k(x, x)ux (x,t) dx
− μ kx (x, x)u(x,t) − x
x
0
μ kxx (x, y)u(y,t)dy
+ λ u(x,t) + λ k(x, y)u(y,t)dy 0 d = a(x) − μ kx (x, x) − μ ky (x, x) − μ (k(x, x)) + λ u(x,t) dx + μ ky (x, 0)u(0,t) − μ k(x, 0)ux (0,t) +
x 0
[μ kyy (x, y) − μ kxx (x, y,t) + (a(y) + λ )k(x, y,t)]u(y,t)dy.
(4.117)
To make the right hand side equal to zero, the kernel k has to satisfy
μ kxx (x, y) − μ kyy (x, y) = (a(y) + λ )k(x, y), 0 ≤ y ≤ x ≤ 1, k(x, 0) = 0, 0 ≤ x ≤ 1, d μ kx (x, x) + μ ky (x, x) + μ (k(x, x)) = a(x) + λ , 0 ≤ x ≤ 1. dx
(4.118) (4.119) (4.120)
By the boundary condition (4.103), we deduce that w(0,t) = 0. Since w(1,t) = φ (t) +
1
k(1, y)u(y,t)dy, 0
we derive the feedback control u(1,t) = φ (t) = −
1
k(1, y)u(y,t)dy
(4.121)
0
so that w(1,t) = 0. In summary, we have proved the following lemma. Lemma 4.3. If k(x, y) is the solution of problem (4.118)-(4.120) and the feedback control is given by (4.121), then the transformation defined by (4.113) converts problem (4.102)-(4.104) into (4.105)-(4.107). To study the existence of a solution of problem (4.118)-(4.120), we need results about the uniform convergence of a sequence of functions. To introduce the concept of uniform convergence, we look at the sequence of functions fn (x) = xn ,
1 1 − ≤x≤ . 2 2
For each fixed x ∈ [− 12 , 12 ], it is clear that lim fn (x) = 0 = f (x).
n→∞
We now use the ε − N definition to prove this limit. For any ε > 0, we consider | fn (x) − f (x)| = |x|n < ε . Since |x| ≤ 12 , we have |x|n ≤
n 1 < ε, 2
and then n>−
ln ε . ln 2
ε Taking the integer N such that N > − ln ln2 , we then have that
| fn (x) − f (x)| < ε for all x ∈ [− 12 , 12 ] and all n > N. Note that the important fact that N is independent of x. Let us look at another sequence of functions fn (x) = xn ,
0 ≤ x ≤ 1.
For each fixed x ∈ [0, 1], it is clear that lim fn (x) =
n→∞
Define f (x) =
0 if 0 ≤ x < 1, 1 if x = 1.
0 if 0 ≤ x < 1, 1 if x = 1.
For a fixed x ∈ (0, 1) and any ε > 0, we consider | fn (x) − f (x)| = |xn | < ε . ε Taking the integer N such that N > − ln ln x , we then have that
| fn (x) − f (x)| < ε for all n > N. Note that this time N depends on x. Also for this sequence, if ε < 1, there is no integer N such that | fn (x) − f (x)| < ε
for all x ∈ [0, 1] and all n > N since max | fn (x) − f (x)| = 1.
0≤x≤1
The first convergence is called uniform convergence since N does not depend on x. The uniform convergence has many applications in analysis. Definition 4.5. The sequence { fn (x)} of functions is said to be convergent uniformly on the interval I to the function f (x) if and only if for any ε > 0 there is an integer N independent of x such that | fn (x) − f (x)| < ε for all x ∈ I and all n > N. The importance of uniform convergence with regard to continuous functions is illustrated in the next theorem. Theorem 4.13. Suppose that { fn (x)}∞ n=1 is a sequence of continuous functions on an interval I and that it converges uniformly to f (x) on I. Then f (x) is continuous on I. Proof. For any ε > 0, there is an integer N independent of x such that | fn (x) − f (x)|
N. Let x0 ∈ I. Since fN+1 is continuous on I, there is a δ > 0 such that ε | fN+1 (x) − fN+1 (x0 )| < 3 for all x ∈ I such that |x − x0| < δ . It therefore follows that | f (x) − f (x0 )| ≤ | f (x) − fN+1 (x)| + | fN+1 (x) − fN+1 (x0 )| +| fN+1 (x0 ) − f (x0 )| ε ε ε ≤ + + =ε 3 3 3 for all x ∈ I such that |x − x0 | < δ . Thus f is continuous at x0 , an arbitrary point of I. The next theorem shows that limit and integration can be exchanged for a uniformly convergent sequence of continuous functions. Theorem 4.14. Suppose that { fn (x)}∞ n=1 is a sequence of continuous functions on a bounded interval I and that it converges uniformly to f (x) on I. Let a ∈ I and define Fn =
x a
fn (t)dt.
Then f (x) is continuous on I and Fn converges uniformly to the function F(x) =
x a
f (t)dt.
Proof. By Theorem 4.13, we deduce that f is continuous on I. Let L be the length of I. For any ε > 0, it follows from the uniform convergence of { fn (x)}∞ n=1 that there is an integer N independent of x such that | fn (x) − f (x)|
N. It therefore follows that
x
|Fn (x) − F(x)| = [ fn (t) − f (t)]dt
a
≤ ≤
x a
| fn (t) − f (t)| dt
ε |x − a| ≤ ε L
for all x ∈ I and all n > N. Hence {Fn (x)}∞ n=1 converges uniformly on I. A counterexample for Theorem 4.14 is fn (x) = nxe−nx . 2
The next theorem shows that limit and differentiation can be exchanged for a uniformly convergent sequence of continuous functions. Theorem 4.15. Suppose that { fn (x)}∞ n=1 is a sequence of continuous functions each having one continuous derivative on a bounded open interval I. Suppose that ∞ { fn (x)}∞ n=1 converges to f (x) for each x on I and { f n (x)}n=1 converges uniformly to g(x) on I. Then g(x) is continuous on I and f (x) = g(x) on I. Proof. By Theorem 4.13, we deduce that g is continuous on I. Let a be any point of I. We then have x fn (t)dt = fn (x) − fn (a). a
{ fn (x)}∞ n=1
converges uniformly to g(x) on I and { fn (x)}∞ Since n=1 converges to f (x) for each x on I, it follows from Theorem 4.14 that x a
g(t)dt = f (x) − f (a).
Differentiating this equation gives f (x) = g(x) on I.
The concept of uniform convergence can be extended to series. Let fk (x), k = 1, 2, · · · , be functions defined on an interval I, and sn (x) =
n
∑ fk (x) be the partial
k=1
sum. ∞
Definition 4.6. The infinite series
∑ fk (x) is said to converge uniformly on I to a
k=1
function s(x) if and only if the sequence of partial sums {sn (x)} converges uniformly to s(x) on I. According to this definition, Theorems on uniform convergence of infinite series can be reduced to corresponding results for the uniform convergence of sequences. The next theorem is the analog to Theorem 4.13. Theorem 4.16. Suppose that fk (x), k = 1, 2, · · · , are continuous functions on an in∞
terval I and that
∑ fk (x) converges uniformly to s(x) on I. Then s(x) is continuous
k=1
on I. The next theorem is the analog to Theorem 4.14, which shows that a uniformly convergent series may be integrated term-by-term. Theorem 4.17. Suppose that fk (x), k = 1, 2, · · · , are integrable functions on a ∞
bounded interval I and that
∑ fk (x) converges uniformly to s(x) on I. Let a ∈ I.
k=1
Then
x a ∞
and
∑
x
k=1 a
s(t)dt =
x ∞
∑
a k=1
fk (t)dt =
fk (t)dt converges uniformly to
x a
∞
∑
x
k=1 a
fk (t)dt,
s(t)dt on I.
The next theorem is the analog to Theorem 4.15, which shows that a uniformly convergent series may be differentiated term-by-term. Theorem 4.18. Suppose that fk (x), fk (x), k = 1, 2, · · · , are continuous functions on ∞
a bounded open interval I. Suppose that ∞
and
∑
∑ fk (x) converges to s(x) for each x on I
k=1
fk (x)
converges uniformly to u(x) on I. Then s (x) = u(x) on I.
k=1
The following theorem gives a useful indirect test for uniform convergence. It is important to observe that the test can be applied without any knowledge of the sum of the series.
Theorem 4.19. (Weierstrass M-test) Let fk (x), k = 1, 2, · · · , be functions on a bounded open interval I. Suppose that | fk (x)| ≤ Mk Suppose that the constant series
for all k and all x ∈ I.
∞
∞
∞
k=1
k=1
k=1
∑ Mk converges. Then ∑ fk (x) and ∑ | fk (x)|
converge uniformly on I. ∞
Proof. Since
∑ Mk converges, for any ε > 0, there is an integer N such that
k=1
∞
∑
Mk < ε
k=n+1
for all n > N. It then follows that
∞
n ∞
∑ fk (x) − ∑ fk (x) = ∑ fk (x)
k=1 k=1 k=n+1 ∞
∑
≤
| fk (x)|
k=n+1
n ∞
= ∑ | fk (x)| − ∑ | fk (x)|
k=1 k=1 ∞
≤
∑
Mk
k=n+1
≤ε for all n > N. Hence
∞
∞
k=1
k=1
∑ fk (x) and ∑ | fk (x)| converge uniformly on I.
All results on the uniform convergence of sequences or series of functions defined on an interval can be readily generalized to functions defined on a subset of Rⁿ. The existence of a solution of problem (4.118)-(4.120) can be proved by transforming it to an integral equation using the variable change
ξ = x + y,
η = x − y.
Lemma 4.4. Suppose that a ∈ C1 [0, 1]. Then problem (4.118)-(4.120) has a unique solution which is twice continuously differentiable in 0 ≤ y ≤ x. Proof. We introduce new variables
ξ = x + y,
η = x−y
and denote
ξ +η ξ −η , G(ξ , η ) = k(x, y) = k 2 2
.
To derive equations in terms of these new variables, we compute kx (x, y) = Gξ + Gη , ky (x, y) = Gξ − Gη , kxx (x, y) = Gξ ξ + 2Gξ η + Gηη , kyy (x, y) = Gξ ξ − 2Gξ η + Gηη , kx (x, x) = Gξ (ξ , 0) + Gη (ξ , 0),
(since y = x, η = 0)
ky (x, x) = Gξ (ξ , 0) − Gη (ξ , 0), d d (k(x, x)) = (G(ξ , 0)) dx dx dξ (since y = x, ξ = 2x) = Gξ (ξ , 0) dx = 2Gξ (ξ , 0). Substituting these derivatives into (4.118)-(4.120), we obtain 1 ξ −η Gξ η (ξ , η ) = + λ G(ξ , η ), 0 ≤ η ≤ ξ ≤ 2, (4.122) a 4μ 2 G(ξ , ξ ) = 0,
0 ≤ ξ ≤ 2, 1 ξ +λ , a Gξ (ξ , 0) = 4μ 2
(4.123) 0 ≤ ξ ≤ 2.
(4.124)
Integrating twice gives the following integral equation G(ξ , η ) = +
1 4μ
ξ τ η
a
2
+ λ dτ
1 ξ η τ −s
4μ
η
a
0
2
+ λ G(τ , s)dsd τ .
(4.125)
We now use the method of successive approximations to show that this equation has a unique continuous solution. Set 1 ξ τ + λ dτ , a 4μ η 2 ξ η 1 τ −s + λ Gn−1 (τ , s)dsd τ a Gn (ξ , η ) = 4μ η 0 2
G0 (ξ , η ) =
(4.126) (4.127)
1 |a (x) + λ |. Then one can readily show that μ 0≤x≤1
and denote M = sup
|G0 (ξ , η )| ≤
1 M(ξ − η ) ≤ M, 4
|G1 (ξ , η )| ≤ M 2 ξ η , |G2 (ξ , η )| ≤
M3 2 2 ξ η , (2!)2
and by induction, M n+1 n n ξ η . (n!)2
|Gn (ξ , η )| ≤
It then follows from Weierstrass M-test (Theorem 4.19) that the series G(ξ , η ) =
∞
∑ Gn (ξ , η )
n=0
converges absolutely and uniformly in 0 ≤ η ≤ ξ ≤ 2. Furthermore, by Theorem 4.17, we deduce that G(ξ , η ) =
∞
∑ Gn (ξ , η )
n=0
=
1 4μ
ξ τ η
∞
+∑
a
1 ξ η τ −s
n=1 4 μ
=
1 4μ +
2
+ λ dτ a
η
0
ξ τ
a
η
2
2
+ λ Gn−1 (τ , s)dsd τ
+ λ dτ
1 ξ η τ −s
4μ
a
+λ
∞
∑ Gn−1 (τ , s)dsd τ
2 n=1 ξ 1 τ + λ dτ a = 4μ η 2 1 ξ η τ −s + λ G(τ , s)dsd τ . a + 4μ η 0 2 η
0
So G(ξ , η ) is a continuous solution of equation (4.125). To prove that the solution is unique, it suffices to prove that the equation 1 ξ η τ −s + λ G(τ , s)dsd τ a G(ξ , η ) = 4μ η 0 2
has only zero solution. Thus we estimate G(ξ , η ) and obtain |G(ξ , η )| ≤ M η |ξ − η | For η ≤
1 4M ,
max
0≤τ ≤2, 0≤s≤η
|G(τ , s)| ≤ 2M η
max
0≤τ ≤2, 0≤s≤η
|G(τ , s)|.
we derive that 1 max |G(ξ , η )| ≤ 0. 2 0≤ξ ≤2, 0≤η ≤1/(4M)
Therefore it follows that G(ξ , η ) = 0 for 0 ≤ ξ ≤ 2, 0 ≤ η ≤ 1/(4M) and then 1 ξ η τ −s + λ G(τ , s)dsd τ . a G(ξ , η ) = 4μ η 1/(4M) 2 Repeating the above procedure, we can show that G(ξ , η ) = 0 for 0 ≤ ξ ≤ 2, 0 ≤ η ≤ 2. Moreover, it follows from (4.125) that G is twice continuously differentiable because a ∈ C1 [0, 1]. Indeed, differentiating (4.125) with respect to ξ gives 1 η 1 ∂ G(ξ , η ) ξ ξ −s +λ + + λ G(ξ , s)ds, a a = ∂ξ 4μ 2 4μ 0 2 which implies that ∂ G(∂ξξ,η ) is continuous since G(ξ , η ) is continuous. By analogy, we can show that other derivatives of G are continuous. The proof of Lemma 4.4 provides a numeric computation scheme of successive approximation to compute the kernel function k in our feedback control (4.121). In fact, if a is a constant, then the kernel k can be explicitly computed using the iteration scheme (4.126)-(4.127). In this case we have G0 (ξ , η ) =
a+λ (ξ − η ). 4μ
By induction, we can show that [98]

Gn(ξ,η) = ((a+λ)/(4μ))^{n+1} (ξ−η) ξⁿ ηⁿ / ((n!)² (n+1)),

and then

G(ξ,η) = Σ_{n=0}^∞ ((a+λ)/(4μ))^{n+1} (ξ−η) ξⁿ ηⁿ / ((n!)² (n+1)).    (4.128)

Thus we obtain the kernel

k(x,y) = G(x+y, x−y) = Σ_{n=0}^∞ ((a+λ)/(4μ))^{n+1} 2y (x²−y²)ⁿ / ((n!)² (n+1)).    (4.129)
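The truncated series (4.129) is easy to evaluate numerically; the sketch below computes k(1, y) and the resulting feedback value (4.121), assuming the illustrative constants a = 4, λ = 1, μ = 1 and the sample state u(y) = sin(πy), none of which are taken from the text.

import numpy as np
from math import factorial

a, lam, mu = 4.0, 1.0, 1.0

def kernel(x, y, n_terms=30):
    """Truncated series (4.129) for k(x, y) with a constant reaction coefficient a."""
    q = (a + lam) / (4.0 * mu)
    return sum(q**(n + 1) * 2.0 * y * (x**2 - y**2)**n / (factorial(n)**2 * (n + 1))
               for n in range(n_terms))

y = np.linspace(0.0, 1.0, 1001)
u = np.sin(np.pi * y)                               # sample state
phi = -np.trapz(kernel(1.0, y) * u, y)              # feedback (4.121)
print("k(1, 0.5) ≈", round(float(kernel(1.0, 0.5)), 4))
print("feedback value phi ≈", round(float(phi), 4))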
Lemma 4.5. Let k(x, y) be the solution of problem (4.118)-(4.120) and define the linear bounded operator K : H i (0, 1) → H i (0, 1) (i = 0, 1, 2) by w(x) = (Ku)(x) = u(x) +
x
for u ∈ H i (0, 1).
k(x, y)u(y)dy, 0
(4.130)
Then K has a linear bounded inverse K −1 : H i (0, 1) → H i (0, 1) (i = 0, 1, 2). Proof. To prove that (4.130) has a bounded inverse, we set v(x) =
x
k(x, y)u(y)dy 0
and then w(x) = u(x) + v(x). Hence we have v(x) = = =
x
(u(y) = w(y) − v(y))
k(x, y)u(y)dy 0 x 0 x 0
k(x, y)[w(y) − v(y)]dy k(x, y)w(y)dy −
x
k(x, y)v(y)dy.
(4.131)
0
To show that this equation has a unique continuous solution, we set v0 (x) =
x
k(x, y)w(y)dy, 0
vn (x) = −
x 0
k(x, y)vn−1 (y)dy
sup |k(x, y)|. Then,
and denote M =
0≤y≤x≤1
|v0 (x)| ≤ M
x 0
|w(y)|dy ≤ M
1
1
dy 0
0
|w(y)|2 dy ≤ MwL2 ,
|v1 (x)| ≤ M 2 wL2 x, |v2 (x)| ≤
M 3 wL2 2 x , 2!
and by induction, M n+1 wL2 n x . n! It then follows from Weierstrass-M-test (Theorem 4.19) that the series |vn (x)| ≤
v(x) =
∞
∑ vn (x)
n=0
(4.132)
converges absolutely and uniformly in 0 ≤ x ≤ 1. As in the proof of Lemma 4.4, we can show that the sum is a unique continuous solution of equation (4.131). Moreover, it follows from (4.132) that ∞
∞
M n+1 . n=0 n!
∑ |vn (x)| ≤ wL2 ∑ 0≤x≤1
max |v(x)| ≤ max
0≤x≤1
n=0
Hence there exists a constant C > 0 such that vL2 ≤ CwL2 .
(4.133)
This implies that there exists a bounded linear operator Φ : L2 (0, 1) → L2 (0, 1) such that v(x) = (Φ w)(x) and then
u(x) = w(x) − v(x) = ((I − Φ )w)(x) = (K −1 w)(x).
(4.134)
It is clear that K −1 : L2 (0, 1) → L2 (0, 1) is bounded. To show that K −1 : H 1 (0, 1) → H 1 (0, 1) is bounded, we take derivative in (4.131) and obtain vx (x) = k(x, x)w(x) +
x 0
kx (x, y)w(y)dy − k(x, x)v(x) −
x 0
kx (x, y)v(y)dy,
which, combined with (4.133), implies that there exists constant C > 0 such that vx L2 ≤ CwL2 . Then by (4.134) uH 1 ≤ wH 1 + vH 1 ≤ CwH 1 . By analogy, we can show that K −1 : H 2 (0, 1) → H 2 (0, 1) is bounded. We now prove that the controlled problem ut (x,t) = μ uxx (x,t) + a(x)u(x,t) u(0,t) = 0,
u(1,t) = −
in (0, 1) × (0, ∞),
1
k(1, y)u(y,t)dy 0
in (0, ∞),
u(x, 0) = u0 (x)
(4.135) (4.136) (4.137)
is exponentially stable. Theorem 4.20. Assume that λ > 0 is any positive constant and a ∈ C1 [0, 1] is any function. For arbitrary initial data u0 (x) ∈ H 1 (0, 1) satisfying the compatible condition u0 (0) = 0,
u0 (1) = −
1
0
the solutions of problem (4.135)-(4.137) satisfy
k(1, y)u0 (y)dy,
u(t)H 1 ≤ Mu0 H 1 e−(λ +μπ )t , 2
∀t > 0,
(4.138)
where M is a positive constant independent of u0 . Proof. By Lemma 4.5, the problem (4.135)-(4.137) can be transformed to the problem (4.105)-(4.107) via the isomorphism defined by (4.130). Hence there exists a positive constant C > 0 such that u(t)H 1 ≤ Cw(t)H 1 , w0 H 1 ≤ Cu0H 1 . Then (4.138) follows from Theorem 4.12. The method of integral transform can be applied to the problem of flux control ut = μ uxx + au in (0, 1) × (0, ∞), ux (0,t) = 0, μ ux (1,t) = φ in (0, ∞),
(4.139) (4.140)
u(x, 0) = u0 (x).
(4.141)
For details, we refer to [75].
4.3.2 Output Feedback Controls We use the same measured output om (u) defined by (4.24) for feedback. To design an output feedback controller, we consider the following state observer
∂w = μ ∇2 w + aw + K · (om(u) − om (w)) in Ω , ∂t w|Γ =
(4.142)
m
∑ bi (x)φi (t),
(4.143)
i=1
w(x, 0) = 0,
(4.144)
where K is an output injection vector. We then use the estimate w for feedback and introduce the observer-based output feedback controller
φ = F(w) = (F1 (w), · · · , Fm (w)), which leads to the observer-based output feedback control system
∂u = μ ∇2 u + au in Ω , ∂t ∂w = μ ∇2 w + aw + K · (om(u) − om (w)) in Ω , ∂t u|Γ =
(4.145) (4.146)
m
∑ bi (x)Fi(w),
i=1
(4.147)
w|Γ =
m
∑ bi (x)Fi(w).
(4.148)
i=1
Introducing the error z = u − w, we then transform the above system to
∂w = μ ∇2 w + aw + K · om(z) in Ω , ∂t ∂z = μ ∇2 z + az − K · om(z) in Ω , ∂t w|Γ =
(4.149) (4.150)
m
∑ bi (x)Fi(w),
(4.151)
i=1
z|Γ = 0.
(4.152)
Theorem 4.21. Let λ1 > λ2 > ··· > λn > ··· with lim_{n→∞} λn = −∞ be the eigenvalues of (2.11)-(2.12) and let ψi1, ψi2, ···, ψ_{i ni} be ni normalized orthogonal eigenfunctions corresponding to the eigenvalue λi. Let γ > 0 be a given number and N an integer such that λN ≥ −γ > λ_{N+1}. If Rank(Bi) = Rank(Hi) = ni for i = 1, 2, ···, N, where Bi and Hi are defined in Theorems 4.11 and 4.9, respectively, then there exists an output injection vector K in (HN)^l and an output feedback control φ = F(w) such that the solution of (4.145)-(4.148) satisfies

‖u(t)‖_{L²} ≤ M ‖u0‖_{L²} e^{−γt},
(4.153)
where M is a positive constant. Proof. If the inequality (4.153) holds for w and z, then it also holds for u. Thus it suffices to show that (4.153) holds for w and z. By Theorem 4.9, there exists an output injection vector K such that z(t)L2 ≤ Mu0 L2 e−γ t .
(4.154)
By Theorem 4.11 and its proof (modifying and using (4.96)), we can show that there exists an output feedback control φ = F(w) such that (4.153) holds for w. (The details are left as an exercise.) Example 4.6. Consider the control problem (4.100) from Example 4.5:
∂u/∂t = ∂²u/∂x² + 2u,   (4.155)
u(0,t) = φ(t),  u(π,t) = 2φ(t).   (4.156)

For the measured output, we assume that l = 1, ω₁ᵒ = [0, π/2], and h₁(x) = 1. The eigenvalues and eigenfunctions related to this problem are

λ_n = 2 − n²,  ψ_n(x) = √(2/π) sin(nx),  n = 1, 2, · · ·
Then the state observer for this control system is given by

∂w/∂t = ∂²w/∂x² + 2w + K(x)∫₀^{π/2} (u(x,t) − w(x,t))dx,   (4.157)
w(0,t) = φ(t),  w(π,t) = 2φ(t).   (4.158)

Using (4.74), we calculate h_{pij} as follows:

h₁₁₁ = √(2/π) ∫₀^{π/2} sin x dx = √(2/π),  h₁₂₁ = √(2/π) ∫₀^{π/2} sin 2x dx = √(2/π).

Then the conditions of Theorem 4.21 are satisfied. The finite dimensional control system (4.73) in this case becomes

dc₁/dt = (1 − k₁√(2/π))c₁ − k₁√(2/π) c₂,
dc₂/dt = −k₂√(2/π)(c₁ + c₂).

The characteristic polynomial of the coefficient matrix is

λ² + (√(2/π) k₁ + √(2/π) k₂ − 1)λ − √(2/π) k₂ = 0.

If

k₂ < 0,  k₁ + k₂ > √(π/2),

then the real parts of the eigenvalues are negative. Hence the injection vector k = [4, −1] makes the finite dimensional system exponentially detectable. It therefore follows from (4.68) that the injection function

K(x) = 4√(2/π) sin x − √(2/π) sin 2x

makes the observer system (4.157) exponentially stable. Combining with Example 4.5, we have designed the following observer-based output feedback control to stabilize (4.155):
∂u/∂t = ∂²u/∂x² + 2u,
∂w/∂t = ∂²w/∂x² + 2w + [4√(2/π) sin x − √(2/π) sin 2x] ∫₀^{π/2} (u(x,t) − w(x,t))dx,
u(0,t) = −4√(2/π) ∫₀^π sin(x)w(x,t)dx − √(2/π) ∫₀^π sin(2x)w(x,t)dx,
u(π,t) = −8√(2/π) ∫₀^π sin(x)w(x,t)dx − 2√(2/π) ∫₀^π sin(2x)w(x,t)dx,
w(0,t) = −4√(2/π) ∫₀^π sin(x)w(x,t)dx − √(2/π) ∫₀^π sin(2x)w(x,t)dx,
w(π,t) = −8√(2/π) ∫₀^π sin(x)w(x,t)dx − 2√(2/π) ∫₀^π sin(2x)w(x,t)dx.
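As a quick sanity check of the detectability claim in Example 4.6, the following short script (our own illustration, not part of the original text) assembles the 2×2 coefficient matrix of the finite dimensional observer-error system with the injection gains k = [4, −1] and verifies numerically that both eigenvalues have negative real parts.

```python
import numpy as np

# Sketch (not from the book): eigenvalues of the 2-by-2 coefficient matrix of the
# finite-dimensional system in Example 4.6 with the injection gains k = [4, -1].
k1, k2 = 4.0, -1.0
s = np.sqrt(2.0 / np.pi)              # the output coefficients h111 = h121 = sqrt(2/pi)
A = np.array([[1.0 - k1 * s, -k1 * s],
              [-k2 * s,      -k2 * s]])
eig = np.linalg.eigvals(A)
print(eig)                             # both real parts should be negative
assert np.all(eig.real < 0)
```

With these gains the eigenvalues are approximately −0.70 ± 0.56i, consistent with the conditions k₂ < 0 and k₁ + k₂ > √(π/2).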
The output can be injected on the boundary as follows:

∂u/∂t = μ∇²u + au  in Ω,   (4.159)
u|_Γ = −K·o_m(u).   (4.160)

This problem is difficult; we refer the reader to [55].

We now use the method of integral transformation to design output feedback controllers for the one-dimensional reaction-diffusion equation. These controllers were developed by Krstic et al. [48]. Consider the control problem

u_t = u_xx + au,   (4.161)
u_x(0) = 0,  u(1) = φ(t).   (4.162)

We assume that only u(0) can be measured for feedback. As done for the equation (4.102), we can use the integral transformation (4.113) to transform (4.161)-(4.162) to the following exponentially stable equation

w_t = w_xx,  w_x(0,t) = w(1,t) = 0,

where the state feedback control is given by

φ = −∫₀¹ k(1,y)u(y,t)dy

and the kernel satisfies

k_xx(x,y) − k_yy(x,y) = a(y)k(x,y),   (4.163)
k_y(x,0) = 0,  k(0,0) = 0,   (4.164)
d/dx (k(x,x)) = a(x)/2.   (4.165)
We design the following observer

v_t = v_xx + av + p₁(x)[u(0) − v(0)],   (4.166)
v_x(0) = p₁₀[u(0) − v(0)],  v(1) = φ(t),   (4.167)
where p₁(x) is an injection function and p₁₀ is an injection constant. In this observer, the output is injected both in the domain and at the end point x = 0. Introducing the error z = u − v, from (4.161), (4.162), (4.166), and (4.167) we derive the following error equation

z_t = z_xx + az − p₁(x)z(0),   (4.168)
z_x(0) = −p₁₀ z(0),  z(1) = 0.   (4.169)
Our goal is to find p₁ and p₁₀ such that the error equation is exponentially stable. To do so, we use the integral transformation

z(x) = h(x) − ∫₀ˣ p(x,y)h(y)dy   (4.170)

to transform the error equation into the exponentially stable equation

h_t = h_xx,   (4.171)
h_x(0) = h(1) = 0.   (4.172)
Differentiating the transformation, we obtain

z_t = h_t − ∫₀ˣ p(x,y)h_yy(y)dy
    = h_t − p(x,x)h_x(x) + p(x,0)h_x(0) + p_y(x,x)h(x) − p_y(x,0)h(0) − ∫₀ˣ p_yy(x,y)h(y)dy,   (4.173)

z_xx = h_xx − h(x) d/dx(p(x,x)) − p(x,x)h_x(x) − p_x(x,x)h(x) − ∫₀ˣ p_xx(x,y)h(y)dy.   (4.174)

Subtracting (4.174) from (4.173), we obtain

a( h(x) − ∫₀ˣ p(x,y)h(y)dy ) − p₁(x)h(0) = 2h(x) d/dx(p(x,x)) − p_y(x,0)h(0) + ∫₀ˣ (p_xx(x,y) − p_yy(x,y))h(y)dy.

For the last equality to hold, three conditions must be satisfied:

p_xx(x,y) − p_yy(x,y) = a(x)p(x,y),
d/dx(p(x,x)) = a(x)/2,
p_y(x,0) = p₁(x).
Moreover, the boundary conditions (4.169) and (4.172) imply that p must also satisfy p(0,0) = p₁₀ and p(1,y) = 0. In summary, the kernel p satisfies

p_xx(x,y) − p_yy(x,y) = a(x)p(x,y),   (4.175)
d/dx(p(x,x)) = a(x)/2,   (4.176)
p(1,y) = 0,   (4.177)

and the injection function p₁(x) and constant p₁₀ are given by

p₁(x) = p_y(x,0),  p₁₀ = p(0,0).   (4.178)
Since the kernel equations (4.175)-(4.177) are the same as the kernel equations (4.118)-(4.120), they can be solved. Using the estimate v, we now introduce the output feedback controller
φ = −∫₀¹ k(1,y)v(y,t)dy

to obtain the following output feedback control system

u_t = u_xx + au,   (4.179)
v_t = v_xx + av + p₁(x)[u(0) − v(0)],   (4.180)
u_x(0) = 0,  u(1) = −∫₀¹ k(1,y)v(y,t)dy,   (4.181)
v_x(0) = p₁₀[u(0) − v(0)],  v(1) = −∫₀¹ k(1,y)v(y,t)dy.   (4.182)
Theorem 4.22. Let the injection function p₁(x) and the injection constant p₁₀ be given by (4.178) and let the state kernel k(x,y) be the solution of (4.163)-(4.165). Then the control system (4.179)-(4.182) is L²-exponentially stable:

∫₀¹ u²(x,t)dx ≤ M ∫₀¹ u²(x,0)dx · e^{−t/4},   (4.183)

where M is a positive constant.
Proof. If we can prove that ∫₀¹ v²(x,t)dx and ∫₀¹ z²(x,t)dx = ∫₀¹ (u(x,t) − v(x,t))²dx converge to zero exponentially as t → ∞, then ∫₀¹ u²(x,t)dx also converges to zero exponentially. Thus it suffices to show the exponential decay of the following system

v_t = v_xx + av + p₁(x)z(0),   (4.184)
z_t = z_xx + az − p₁(x)z(0),   (4.185)
v_x(0) = p₁₀ z(0),  v(1) = −∫₀¹ k(1,y)v(y,t)dy,   (4.186)
z_x(0) = −p₁₀ z(0),  z(1) = 0.   (4.187)
We have shown that the transformation (4.170) transforms equations (4.185) and (4.187) to the following exponentially stable system

h_t = h_xx,  h_x(0) = h(1) = 0.

As done for the equation (4.102), using (4.163)-(4.165), we can also show that the integral transformation

q(x) = v(x) + ∫₀ˣ k(x,y)v(y)dy

transforms (4.184)-(4.186) into

q_t = q_xx + h(0)[ p₁(x) − p₁₀ k(x,0) + ∫₀ˣ k(x,y)p₁(y)dy ],
q_x(0,t) = p₁₀ h(0),  q(1,t) = 0.

To show that h and q converge to zero exponentially, we construct the Lyapunov function

V = (A/2)∫₀¹ h²(x,t)dx + (1/2)∫₀¹ q²(x,t)dx.

Differentiating in t gives

dV/dt = −A∫₀¹ h_x²(x,t)dx − ∫₀¹ q_x²(x,t)dx − p₁₀ q(0,t)h(0,t)
        + h(0)∫₀¹ q(x,t)[ p₁(x) − p₁₀ k(x,0) + ∫₀ˣ k(x,y)p₁(y)dy ]dx.

Using the Poincaré inequality (4.108) and Young's inequality (2.4), we deduce that

−p₁₀ q(0,t)h(0,t) ≤ (1/4)q²(0,t) + p₁₀² h²(0,t) ≤ (1/4)∫₀¹ q_x²(x,t)dx + p₁₀² ∫₀¹ h_x²(x,t)dx

and

h(0)∫₀¹ q(x,t)[ p₁(x) − p₁₀ k(x,0) + ∫₀ˣ k(x,y)p₁(y)dy ]dx ≤ (1/4)∫₀¹ q_x²(x,t)dx + B∫₀¹ h_x²(x,t)dx,
where B = max_{x∈[0,1]} ( p₁(x) − p₁₀ k(x,0) + ∫₀ˣ k(x,y)p₁(y)dy )². It then follows that

dV/dt ≤ −(A − B − p₁₀²)∫₀¹ h_x²(x,t)dx − (1/2)∫₀¹ q_x²(x,t)dx
      ≤ −(1/4)(A − B − p₁₀²)∫₀¹ h²(x,t)dx − (1/8)∫₀¹ q²(x,t)dx.

Taking A = 2(B + p₁₀²), we obtain

dV/dt ≤ −(1/4)V.

Hence the (h,q)-system is exponentially stable. Since the (h,q)-system is related to the (z,v)-system by the invertible transformations, the (z,v)-system is also exponentially stable.
Exercises 4.3

1. Consider the control problem

∂u/∂t = ∂²u/∂x² + 3u,
u(0,t) = 0,  u(π,t) = φ(t),
u(x,0) = u₀.

a. Design a state feedback controller to stabilize the equilibrium 0.
b. Assume that the measured output is given by

o_m = ( ∫₀^ε u(x,t)dx, ∫_{π−ε}^π u(x,t)dx ),

where 0 < ε < π/2. Design an output feedback controller to stabilize the equilibrium 0.

2. Generalize Theorem 4.11 to the Neumann boundary case:
∂u = μ ∇2 u + au in Ω , ∂ t m ∂ u
= ∑ bi (x)φi (t), ∂n Γ
i=1
u(x, 0) = u0 (x), where φ1 , · · · , φm are control inputs, and b1 , · · · bm are given functions.
3. Consider the control problem
∂u ∂ 2u = + u, ∂t ∂ x2
∂u (0,t) = 0, ∂x ∂u (π ,t) = φ (t), ∂x u(x, 0) = u0 .
Design a state feedback control to stabilize the equilibrium 0. 4. Assume that v = (v1 (x1 ), v2 (x2 ), · · · , vn (xn )). Use the following change of variable x n i v (s) i u = w ∏ exp − ds 2μ 0 i=1 to transform the reaction-convection-diffusion equation (4.92) to the following reaction-diffusion equation
∂w = μ ∇2 w + bw, ∂t where b is a function that depends on a, v, and μ . 5. Consider the iteration G0 (ξ , η ) = ξ − η Gn (ξ , η ) =
ξ η η
0
Gn−1 (τ , s)dsd τ ,
n = 1, 2, · · ·
Use the mathematical induction to show that Gn (ξ , η ) =
(ξ − η )ξ n η n , (n!)2 (n + 1)
n = 1, 2, · · ·
6. Consider the iteration G0 (ξ , η ) = ξ + η , Gn (ξ , η ) = 2
η τ 0
0
Gn−1 (τ , s)dsd τ +
ξ η η
0
Gn−1 (τ , s)dsd τ .
Use the mathematical induction to show that Gn (ξ , η ) =
(ξ + η )ξ n η n , (n!)2 (n + 1)
n = 1, 2, · · ·
7. Consider the initial boundary value problem of the heat equation ut = μ uxx ,
(4.188)
ux (0,t) = −bu(0),
u(1,t) = 0,
u(x, 0) = u0 (x).
(4.189) (4.190)
a. Find the range of the constant b for which this system is unstable, that is, this system has a positive eigenvalue. b. Prove that the equilibrium 0 of the system wt = μ wxx , wx (0,t) = 0,
(4.191)
w(1,t) = 0
(4.192)
is exponentially stable. c. Derive equations for a kernel k and a boundary feedback control at the right end x = 1 such that the transformation w = u+
x
k(x, y)u(y)dy 0
converts the system (4.188)-(4.190) to the target system (4.191)-(4.192) 8. Design a flux control for the reaction convection diffusion equation ut = μ uxx + aux + bu, ux (0,t) = 0, μ ux (1,t) = φ , u(x, 0) = u0 (x), where a, b are constants. 9. Let the kernel k be the solution of (4.163)-(4.165). Show that the integral transform x q(x) = v(x) + k(x, y)v(y)dy 0
transforms the control problem vt = vxx + av + p1(x)z(0), vx (0) = p10 z(0), v(1) = − into
1
k(1, y)v(y,t)dy 0
x k(x, y)p1 (y)dy , qt = qxx + z(0) p1 (x) − p10k(x, 0) + 0
qx (0,t) = p10 z(0), q(1,t) = 0. 10. ([48, Section 5.3]) Consider the control problem ut = μ uxx + au, ux (0,t) = 0,
u(1,t) = φ (t),
where μ and a are positive constants. Assume that ux (1,t) is the measured output. Use the following observer vt = μ vxx + av + p1(x)[ux (1,t) − vx (1,t)], vx (0,t) = 0, v(1,t) = φ (t) + p10 [ux (1,t) − vx (1,t)], to design an output feedback controller to exponentially stabilize the equilibrium 0, where p1 (x) is an injection function and p10 is an injection constant to be designed.
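The iterations in Exercises 5 and 6 above can be checked symbolically. The following sketch (our own, not part of the text) verifies the closed form claimed in Exercise 5 for the first few n; the same pattern works for Exercise 6.

```python
import sympy as sp

# Symbolic check (our own sketch) of Exercise 5:
# G_0 = xi - eta,  G_n(xi, eta) = int_eta^xi int_0^eta G_{n-1}(tau, s) ds dtau,
# claimed closed form G_n = (xi - eta) * xi**n * eta**n / ((n!)**2 * (n + 1)).
xi, eta, tau, s = sp.symbols('xi eta tau s')

G = xi - eta
for n in range(1, 5):
    G = sp.integrate(sp.integrate(G.subs({xi: tau, eta: s}), (s, 0, eta)),
                     (tau, eta, xi))
    closed = (xi - eta) * xi**n * eta**n / (sp.factorial(n)**2 * (n + 1))
    assert sp.simplify(G - closed) == 0

print("Exercise 5 closed form verified for n = 1, ..., 4")
```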
4.4 Optimal Interior Control

Optimal control is an important method to design an optimal feedback controller. We consider the optimal control of
∂u = μ ∇2 u + ∇ · (uv) + au + φ (x,t) in Ω , ∂t u|Γ = 0, u(x, s) = u0 (x),
(4.193) (4.194) (4.195)
where φ is a control input. In many real problems, the homogenization of concentration of a physical quantity such as reactants in a chemical reaction is preferred. Mathematically this means that u(x,t) = c (constant). Without loss of generality, we may assume that c = 0. Hence, for T > s, we define the quadratic functional of concentration variance by J(φ ; u0 , s) =
T ' s
+γ
( α |u(x,t)|2 + β |φ (x,t)|2 dV dt
Ω
Ω
|u(x, T )|2 dV,
(4.196)
where α , β , γ ≥ 0 are weight constants. If J is minimized at some control φ ∗ , then the concentration is closest to the target 0. Thus the optimal control problem is to minimize the functional J over L2 (Ω × (s, T )). The function φ ∗ such that J(φ ∗ ; u0 , s) =
min
φ ∈L2 (Ω ×(s,T ))
J(φ ; u0 , s)
is called an optimal control. Since the state equation (4.193) is linear and the functional (4.196) is quadratic, the above optimal control problem is called a linear quadratic optimal control problem (LQ problem).
174
4 Linear Reaction-Convection-Diffusion Equation
4.4.1 Existence and Uniqueness

To get an idea of how to prove the existence and uniqueness of an optimal control, we look at the simple function f(x) = x². This function has a unique minimum at 0 and has the following properties: (1) f is strictly convex, i.e., for every x, y ∈ R, x ≠ y, and λ ∈ (0,1), f(λx + (1−λ)y) < λf(x) + (1−λ)f(y). (2) f is coercive, i.e., lim_{|x|→∞} x² = ∞.
This motivates us to introduce the following definition.

Definition 4.7. Let H be a Hilbert space and F : H → [−∞, +∞] be a functional.
(1) F is said to be strictly convex if for every x, y ∈ H, x ≠ y, and λ ∈ (0,1), F(λx + (1−λ)y) < λF(x) + (1−λ)F(y).
(2) F is said to be coercive if lim_{‖x‖→∞} F(x) = ∞.
(3) F is said to be continuous if lim_{x→y} F(x) = F(y) for all y ∈ H.
Theorem 4.23. If F is convex, continuous, and coercive, then the minimization problem inf_{x∈H} F(x) has a unique solution x_m, that is, F(x_m) = inf_{x∈H} F(x).
For the proof of this theorem, we refer to Theorem 38.C and Proposition 38.15 of [106]. To apply this theorem, we need to estimate the solution of (4.193)-(4.195). We define

a_m = max_{x∈Ω} a(x),  v_m = (1/2) max_{x∈Ω} div(v(x)).   (4.197)

Lemma 4.6. The solution u of (4.193)-(4.195) satisfies the following estimate

∫_Ω |u(t)|²dV + 2μ ∫_s^t ∫_Ω |∇u|²dV dr ≤ e^{(2a_m+2v_m+1)(t−s)} ( ∫_Ω |u₀|²dV + ∫_s^t ∫_Ω φ²dV dr ).   (4.198)
Proof. Multiplying (4.193) by u, integrating over Ω by parts, and using the boundary conditions, we obtain 1 d 2 dt
|u| dV = 2
Ω
=
' Ω
( μ u∇2 u + u(∇ · (uv)) + au2 + φ u dV
(use (2.26))
∂u dS − μ |∇u|2 dV + u2 n · vdS ∂ n Γ Ω Γ 2 − uv · ∇udV + au dV + φ udV μu Γ
Ω
Ω
(use Young’s inequality (2.4) for the last integral) 1 v · ∇(u2)dV ≤ −μ |∇u|2 dV − 2 Ω Ω 1 +am u2 dV + (φ 2 + u2 )dV 2 Ω Ω 1 = −μ |∇u|2 dV − u2 v · ndV 2 Γ Ω 1 1 u2 div(v)dV + am u2 dV + (φ 2 + u2)dV + 2 Ω 2 Ω Ω 1 1 u2 dV + φ 2 dV. (4.199) ≤ a m + vm + 2 Ω 2 Ω Using Gronwall’s inequality (4.14), we derive that t |u(t)|2 dV ≤ e(2am +2vm +1)(t−s) |u0 |2 dV + φ 2 dV dr . Ω
Ω
Ω
s
Integrating (4.199) over [s,t] gives Ω
≤ ≤
Ω
Ω
t
|u(t)|2 dV + 2 μ |u0 |2 dV + |u0 | dV + 2
s
t s
Ω
s
Ω
t
+ (2am + 2vm + 1)
|∇u|2 dV dr
φ 2 dV dr + (2am + 2vm + 1) φ 2 dV dr
t
t
Ω
e(2am +2vm +1)(z−s)
s
Ω
t s
Ω
u2 dV dr
|u0 |2 dV +
z s
Ω
φ 2 dV dr dz
φ 2 dV dr t t |u0 |2 dV + φ 2 dV dr e(2am +2vm +1)(z−s) dz + (2am + 2vm + 1) s Ω s Ω t |u0 |2 dV + φ 2 dV dr , = e(2am +2vm +1)(t−s)
≤
Ω
|u0 |2 dV +
s
Ω
Ω
which is (4.198).
s
Ω
Theorem 4.24. The functional J defined by (4.196) is strictly convex, coercive, and continuous. Proof. (1) J is strictly convex. We denote the solution of (4.193)-(4.195) corresponding to the control φ and the initial condition u0 by u(x,t; u0 , φ ). Since the equation is linear, we deduce that u(x,t; u0 , λ φ + (1 − λ )ψ ) = λ u(x,t; u0, φ ) + (1 − λ )u(x,t; u0, ψ ). It therefore follows that J(λ φ + (1 − λ )ψ ; u0, s) T ' ( α |u(x,t; u0 , λ φ + (1 − λ )ψ )|2 + β |λ φ + (1 − λ )ψ |2 dV dt = s
+γ =
Ω
|u(x, T ; u0 , λ φ + (1 − λ )ψ )|2dV
T & s
+γ
0, and then ∞ Ω
0
α T (t)2 dt ≤ C.
Thus the theorem follows from Theorem 4.5. We now show that the feedback controller (4.251) is optimal. For this we define the quadratic performance functional over an infinite time interval J(φ ; u0 ) =
∞ ' 0
Ω
( α |u(x,t)|2 + β |φ (x,t)|2 dV dt,
(4.259)
where u is the solution of (4.247)-(4.249) and α , β ≥ 0 are weight constants. We prove that the controller (4.251) minimizes the functional J over L2 (Ω × (0, ∞)). Since the control problem (4.247)-(4.249) is exponentially stabilizable, the optimal control problem is well posed, that is, for every u0 ∈ L2 (Ω ), there exists a control input φ ∈ L2 (Ω × (0, ∞)) such that J(φ ; u0 ) < ∞. Using the solutions of the algebraic Riccati equation (4.252), we can prove the existence of an optimal control. From (4.250), we first have the following technical lemma. Lemma 4.7. Let Π be the nonnegative symmetric solution of (4.252) and define J(φ ; u0 ,t) =
t '
( α |u(x,t)|2 + β |φ (x,t)|2 dV dt.
Ω
0
(4.260)
Then J(φ ; u0 ,t) =
Ω
+
u(0)Π u(0)dV −
1 β
t 0
Ω
Ω
u(t)Π u(t)dV
(Π u + β φ )2 dV.
(4.261)
Theorem 4.32. Let J be defined by (4.259). Then there exists a unique optimal control φ ∗ such that min
φ ∈L2 (Ω ×(0,∞))
J(φ ; u0 ) = J(φ ∗ ; u0 ) =
Ω
u0 Π u0 dV,
(4.262)
where Π is the nonnegative symmetric solution of (4.252) and 1 φ ∗ = − Π u. β Proof. We note that the optimal control problem is well posed and then the minimum below is finite. It follows from Theorem 4.30 that min
φ ∈L2 (Ω ×(0,∞))
J(φ ; u0 ) ≥ = =
min
J(φ ; u0 ,t)
min
J(φ ; u0 ,t)
φ ∈L2 (Ω ×(0,∞)) φ ∈L2 (Ω ×(0,t))
Ω
u0 P(t)u0 dV,
which, combined with (4.255), implies that min
φ ∈L2 (Ω ×(0,∞))
J(φ , u0 ) ≥
Ω
u0 Π u0 dV.
(4.263)
On the other hand, by (4.261), we deduce that J(φ ; u0 ,t) =
Ω
+ ≤
1 β
Ω
u0 Π u0 dV − t Ω
0
Ω
u(t)Π u(t)dV
(Π u + β φ )2 dV 1 β
u0 Π u0 dV +
t 0
Ω
(Π u + β φ )2 dV.
Taking φ = φ ∗ = − β1 Π u, we obtain J(φ ∗ ; u0 ,t) ≤
Ω
u0 Π u0 dV
and then min
φ ∈L2 (Ω ×(0,∞))
J(φ , u0 ) ≤ J(φ ∗ ; u0 ) ≤
Ω
u0 Π u0 dV.
(4.264)
Putting (4.263) and (4.264) together yields (4.262). In addition, we can readily show that J is strictly convex, and then the optimal control is unique.

Example 4.8. Consider the control problem

∂u/∂t = ∂²u/∂x² + φ(x,t),
u(0,t) = u(π,t) = 0,  u(x,0) = u₀(x)

with the performance functional

J(φ; u₀) = ∫₀^∞ ∫₀^π (u² + φ²)dx dt.

Let

Π sin(nx) = ∑_{m=1}^∞ p_mn sin(mx).

As in Example 4.7, taking u₁ = sin(mx), u₂ = sin(nx) in the algebraic Riccati equation (4.252), we obtain the infinite system

(m² + n²)p_mn + ∑_{l=1}^∞ p_ml p_ln − δ_mn = 0.   (4.265)

If m ≠ n, p_mn ≡ 0 is a solution. Then the above system is reduced to

2n² p_nn + p_nn² − 1 = 0.   (4.266)
Solving the system, we obtain the solution

p_nn = −n² ± √(n⁴ + 1).

Since P is nonnegative, we have

p_nn = −n² + √(n⁴ + 1).

Let

u(x,t) = ∑_{n=1}^∞ c_n(t) sin(nx),

where

c_n(t) = (2/π)∫₀^π u(x,t) sin(nx)dx.

We then obtain the optimal state feedback control

φ = −Πu(x,t) = −∑_{n=1}^∞ c_n(t) Π sin(nx) = −(2/π) ∑_{n=1}^∞ (−n² + √(n⁴+1)) sin(nx) ∫₀^π u(x,t) sin(nx)dx.
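As a small numerical companion to Example 4.8 (our own sketch, not from the text), the feedback gains p_nn and the resulting closed-loop modal decay rates can be tabulated directly; in modal coordinates the closed loop reads ċ_n = −(n² + p_nn)c_n = −√(n⁴+1) c_n.

```python
import numpy as np

# Sketch: gains and closed-loop decay rates for Example 4.8 (our own illustration).
N = 5
n = np.arange(1, N + 1).astype(float)
p = -n**2 + np.sqrt(n**4 + 1.0)          # p_nn
print("p_nn        :", np.round(p, 4))
print("decay rates :", np.round(n**2 + p, 4))   # = sqrt(n^4 + 1)
```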
Exercises 4.4 1. Verify that the operator P(s) defined by (4.215) is linear. 2. Show that the functional J(φ ; u0 , s) defined by (4.196) satisfies J(0; u0 , s) ≤ C
Ω
|u0 |2 dV,
where C is a positive constant. 3. Consider the control problem
∂u ∂ 2u = 2 + φ (x,t), ∂t ∂x ∂u ∂u (0,t) = (π ,t) = 0, ∂x ∂x u(x, 0) = u0 (x) with the performance functional J(φ ; u0 , 0) =
T π 0
0
(u2 + φ 2 )dxdt +
π 0
u(T )2 dx.
a. Calculate the Gâteaux-differential of J.
b. Derive the optimality system for the optimal control.
c. Derive a Riccati equation.
d. Design an optimal state feedback control.
4. Consider the optimal control problem m ∂u = μ ∇2 u + ∇ · (uv) + au + ∑ bi (x)φi (t), ∂t i=1
u|Γ = 0, u(x, s) = u0 (x) with the quadratic performance functional J(φ1 , · · · , φm ; u0 , s) = α
T s
m
Ω
|u|2 dV dt + β ∑
T
i=1 s
|φi |2 dt + γ
Ω
|u(T )|2 dV,
where α , β , γ ≥ 0 are weight constants and bi (i = 1, 2, · · · , m) are given functions. a. Calculate the Gˆateaux-differential of J. b. Derive the optimality system for the optimal control. c. Derive a Riccati equation. 5. Let H be a Hilbert space and F(x) = x for any x ∈ H. Prove that F (x) = for x = 0. 6. Consider the control problem
x x
∂u ∂ 2u = 2 + φ (x,t), ∂t ∂x ∂u ∂u (0,t) = (π ,t) = 0, ∂x ∂x u(x, 0) = u0 (x). Use Lyapunov’s procedure to derive an algebraic Riccati equation and then design an optimal state feedback control.
4.5 Optimal Boundary Control Consider the boundary control problem
∂u = μ ∇2 u + ∇ · (uv) + au, ∂t u|Γ = φ , u(x, s) = u0 (x),
(4.267) (4.268) (4.269)
where φ is a control input. For T > s, we define the quadratic functional of concentration variance by J(φ ; u0 , s, T ) = α
T s
+γ
Ω
Ω
|u|2 dvdt + β
T s
Γ
|φ |2 dSdt
|u(T )|2 dV,
(4.270)
where α , β , γ ≥ 0 are weight constants. The optimal control problem is to minimize the functional J over L2 (Γ ×(s, T )) with respect to φ , where u0 and s are parameters. This problem can be solved in the same way as for the interior control problem, but the existence is handled in a different way at the end of the section.
4.5.1 Necessary Conditions We denote the solution of (4.267)-(4.269) corresponding to the control φ and the initial condition u0 by u(x,t; u0 , φ ). Since the equation is linear, we can readily see that (4.271) u(x,t; u0 , φ + λ ψ ) = u(x,t; u0 , φ ) + λ u(x,t; 0, ψ ) for any control functions φ , ψ and any number λ . We now calculate the Gˆateauxdifferential of J. Theorem 4.33. The Gˆateaux-differential of J defined by (4.270) at φ is given by T s
Γ
J (φ ; u0 , s, T )ψ dSdt = 2α
T
+2γ +2β
u(x,t; u0 , φ )u(x,t; 0, ψ )dV dt
Ω
s
Ω
u(x, T ; u0 , φ )u(x, T ; 0, ψ )dV
T s
Γ
φ ψ dSdt,
(4.272)
where u(x,t; 0, ψ ) is the solution of
∂u = μ ∇2 u + ∇ · (uv) + au, ∂t u|Γ = ψ , u(x, s) = 0. Proof. Using (4.270) and (4.271), we derive that T s
= lim
Γ
λ →0+
J (φ ; u0 , s, T )ψ dSdt
J(φ + λ ψ ; u0, s) − J(φ ; u0 , s) λ
(4.273) (4.274) (4.275)
= lim
+ lim
195
T ' Ω
s
( |u(x,t; u0 , φ + λ ψ )|2 − |u(x,t; u0, φ )|2 dV dt
β T '
( |φ + λ ψ |2 − |φ |2 dSdt
λ s Γ ' ( γ + lim |u(x, T ; u0 , φ + λ ψ )|2 − |u(x, T ; u0 , φ )|2 dV λ →0+ λ Ω ' ( α T = lim 2λ u(x,t; u0, φ )u(x,t; 0, ψ ) + λ 2|u(x,t; 0, ψ )|2 dV dt + λ λ →0 s Ω T ' ( β + lim 2λ φ ψ + λ 2|ψ |2 dSdt + λ →0 λ s Γ ' ( γ + lim 2λ u(x, T ; u0 , φ )u(x, T ; 0, ψ ) + λ 2|u(x, T ; 0, ψ )|2 dV + λ →0 λ Ω λ →0+
= 2α
T
Ω
s
+2β
u(x,t; u0 , φ )u(x,t; 0, ψ )dV dt
T s
Γ
φ ψ dSdt + 2γ
Ω
u(x, T ; u0 , φ )u(x, T ; 0, ψ )dV.
By Theorem 4.27, we deduce the following necessary condition for the optimal control. Theorem 4.34. If the functional J has a minimum at φ ∗ , then J (φ ∗ ; u0 , s, T ) = 0.
(4.276)
4.5.2 Optimality Systems The following theorem is the boundary control version of Theorem 4.28. Theorem 4.35. The optimal control φ ∗ satisfies the following system
∂u = μ ∇2 u + ∇ · (uv) + au, ∂t ∂v = −μ ∇2 v + v · ∇v − av + α u, ∂t u|Γ = φ ∗ , v|Γ = 0,
μ ∂ v
φ∗ = − , β ∂ n Γ u(x, s) = u0 (x), v(x, T ) = −γ u(x, T ).
(4.277) (4.278) (4.279) (4.280) (4.281)
Proof. Multiplying (4.273) by the solution v of (4.278) and integrating over Ω × (s, T ) by parts, we have T s
=μ
v
Ω
T
∂u (x,t; 0, ψ )dV dt ∂t v∇2 u(x,t; 0, ψ )dV dt +
s
Ω
s
Ω
T
+
s Ω T
+
T
∇v · ∇u(x,t; 0, ψ )dV dt −
Ω
s
u(x,t; 0, ψ )v · ∇vdVdt
au(x,t; 0, ψ )vdVdt.
Ω
s
v∇ · (u(x,t; 0, ψ )v)dVdt
Ω
s
au(x,t; 0, ψ )vdVdt
T
= −μ
(4.282) T
Multiplying (4.278) by the solution u(x,t; 0, ψ ) of (4.273) and integrating over Ω × (s, T ) by parts, we have T Ω
s
T
= −μ −
s Ω T
Ω
s
s Γ T
Ω
s
+α
Ω
T
u(x,t; 0, ψ )∇2 vdV dt +
ψ
∂v dSdt + μ ∂n
T s
Ω
s T
u(x,t; 0, ψ )v · ∇vdVdt
Ω
Ω
s
u(x,t; 0, ψ )u(x,t; u0, φ ∗ )dV dt
∇u(x,t; 0, ψ ) · ∇vdVdt
u(x,t; 0, ψ )v · ∇vdVdt −
T s
∂v dV dt ∂t
au(x,t; 0, ψ )vdVdt + α
T
= −μ +
u(x,t; 0, ψ )
T s
Ω
au(x,t; 0, ψ )vdV dt
u(x,t; 0, ψ )u(x,t; u0, φ ∗ )dV dt.
(4.283)
Adding (4.282) and (4.283) gives
α =μ
T s
Ω
s
Γ
T
u(x,t; 0, ψ )u(x,t; u0, φ ∗ )dV dt + γ
ψ
∂v dSdt. ∂n
Ω
u(x, T ; 0, ψ )u(x, T ; u0 , φ ∗ )dV (4.284)
On the other hand, it follows from Theorems 4.33 and 4.34 that
α
T
+γ
s
Ω
Ω
u(x,t; u0 , φ ∗ )u(x,t; 0, ψ )dV dt + β
u(x, T ; u0 , φ ∗ )u(x, T ; 0, ψ )dV = 0
T s
Γ
φ ∗ ψ dSdt
for all functions ψ ∈ L2 (Γ × (s, T )). Substituting (4.284) into this equation, we obtain T ∂v ∗ + β φ ψ dV dt = 0 μ ∂n s Γ
for all functions ψ ∈ L2 (Γ × (s, T )). Thus β φ ∗ = −μ ∂∂ nv . Γ
To decouple the optimality system (4.277)-(4.281), we define a family of linear operators P(s) by (4.285) P(s)u0 (x) = −v(x, s; u0 ) for any given function u0 (x) ∈ L2 (Ω ), where v(x, s; u0 ) is the solution of (4.277)(4.281). Theorem 4.36. The operator P(s) defined by (4.285) has the following properties: (1) P(s) is symmetric, that is, Ω
p0 P(s)u0 dV =
Ω
for any u0 , p0 ∈ L2 (Ω ).
u0 P(s)p0 dV
(4.286)
(2) If φ ∗ is the optimal control of (4.267)-(4.269), then Ω
u0 P(s)u0 dV = α
T
+γ
s
Ω
Ω
|u| dV dt + β 2
T s
Γ
|u(T )|2 )dV.
|φ ∗ |2 dSdt (4.287)
Hence P(s) is nonnegative. (3) P(T ) = γ I. (4) P(s) is a linear bounded operator in L2 (Ω ). Proof. (1) Consider the system
∂p = μ ∇2 p + ∇ · (pv) + ap, ∂t ∂q = −μ ∇2 q + v · ∇q − aq + α p, ∂t p|Γ = ψ ∗ , q|Γ = 0,
μ ∂ q
ψ∗ = − , β ∂ n Γ p(x, s) = p0 (x), q(x, T ) = −γ p(x, T ).
(4.288) (4.289) (4.290) (4.291) (4.292)
Multiplying (4.277) by the solution q of (4.289) and integrating over Ω × (s, T ) by parts, we have
T Ω
s
q
∂u dV dt = μ ∂t +
T Ω
s
T
Ω
s
s Ω T
Ω
s
T
q∇ · (uv)dVdt
Ω
s
auqdVdt
T
= −μ +
q∇2 udV dt +
T
∇q · ∇udV dt −
Ω
s
uv · ∇qdVdt
auqdVdt.
(4.293)
Multiplying (4.289) by the solution u of (4.277) and integrating over Ω × (s, T ) by parts, we have T Ω
s
u
T
∂q dV dt = −μ ∂t +
T s Γ T
Ω
s
uv · ∇qdVdt
φ∗
∂q dSdt + μ ∂n
T s
Ω
∇q · ∇udVdt
(uv · ∇q − auq + α up)dVdt.
Ω
s
T
(α up − auq)dVdt
Ω
s
= −μ +
s Ω T
u∇2 qdV dt +
(4.294)
Adding (4.293) and (4.294) gives Ω
=α =α
u0 P(s)p0 dV
T s
Ω
T s
Ω
updV dt − μ updV dt + β
T s
Γ
T s
Γ
φ∗
∂q dSdt + γ ∂n
φ ∗ ψ ∗ dSdt + γ
Ω
Ω
u(T )p(T )dV u(T )p(T )dV.
(4.295)
Repeating the above procedure by multiplying (4.288) by the solution v of (4.278) and multiplying (4.278) by the solution p of (4.288), we can obtain Ω
=α
p0 P(s)u0 dV
T s
Ω
updV dt + β
T s
Γ
φ ∗ ψ ∗ dSdt + γ
Ω
u(T )p(T )dV.
(4.296)
Then (4.286) follows from (4.295) and (4.296). (2) If p0 = u0 , then u = p, v = q, and φ ∗ = ψ ∗ . It then follows from (4.296) that Ω
u0 P(s)u0 dV = α
T s
Ω
u2 dV dt + β
T s
Γ
|φ ∗ |2 dSdt + γ
Ω
u(T )2 dV.
(3) Letting s → T in (4.296), we derive that
p0 P(T )u0 dV = γ
Ω
Ω
for all p0 ∈ L2 (Ω ).
u0 p0 dV
Thus P(T )u0 = γ u0 for all u0 ∈ L2 (Ω ). (4) By (4.296), we deduce from H¨older’s inequality (2.5) and Young’s inequality (2.4) that
p0 P(s)u0 dV
Ω
≤γ
+β +α ≤
Ω
|u(T )|2 dV
T
Γ
s
T
s
1/2 Ω
|φ ∗ |2 dSdt |u| dV dt
1/2
1/2
Ω
s
T
Γ
s
× γ |p(T )|2 dV + β Ω
T
|ψ ∗ |2 dSdt
Γ
Γ
1/2
1/2 |p| dV dt 2
Ω
|φ ∗ |2 dSdt + α
T
s
T
s
2
γ |u(T )|2 dV + β Ω
1/2 |p(T )|2 dV
1/2
T Ω
s
∗ 2
|ψ | dSdt + α
|u|2 dV dt
T s
Ω
1/2 |p| dV dt 2
= (J(φ ∗ ; u0 , s, T ))1/2 (J(ψ ∗ ; p0 , s, T ))1/2 .
(4.297)
Moreover, by (4.15), we deduce that there exists a constant C > 0 such that J(φ ∗ ; u0 , s, T ) ≤ J(0; u0 , s, T ) =γ
Ω
≤C
|u(T )|2 dV + α
T s
Ω
|u|2 dV dt
|u0 |2 dV
Ω
and J(ψ ∗ ; p0 , s, T ) ≤ J(0; p0 , s, T ) =γ
≤C
Ω
|p(T )|2 dV + α
Ω
T s
Ω
|p|2 dV dt
|p0 |2 dV.
It therefore follows from (4.297) that
1/2
2 2
p0 P(s)u0 dV ≤ C |u | dV |p | dV 0 0
Ω Ω Ω
for all p0 , u0 ∈ L2 (Ω ). Taking p0 = P(s)u0 , we obtain P(s)u0 L2 ≤ Cu0 L2 . Thus P(s) is bounded. Since the optimality system (4.277)-(4.281) is linear, P(s) is linear. Consider the optimal control problem
∂u = μ ∇2 u + ∇ · (uv) + au, ∂t u|Γ = φ , u(x, 0) = w0 (x).
(4.298) (4.299) (4.300)
Note that the initial time t = 0. By Theorem 4.35, we derive that the optimal control φ ∗ satisfies the following system:
∂u = μ ∇2 u + ∇ · (uv) + au, ∂t ∂v = −μ ∇2 v + v · ∇v − av + α u, ∂t u|Γ = φ ∗ , v|Γ = 0,
μ ∂ v
∗ φ =− , β ∂n
(4.301) (4.302) (4.303) (4.304)
Γ
u(x, 0) = w0 (x), v(x, T ) = −γ u(x, T ).
(4.305)
If we restrict the system (4.301)-(4.305) on [s, T ], then the initial condition (4.305) becomes u(x, s) = u(x, s), v(x, T ) = −γ u(x, T ). (4.306) If we compare the system (4.277)-(4.281) with the system (4.301)-(4.305) on [s, T ], we find that u0 (x) (in (4.281)) = u(x, s) (solution of (4.301)), v(x, s; u0 ) (solution of (4.278)) = v(x, s; w0 ) (solution of (4.302)). Substituting these equations into (4.285) gives − v(x, s; w0 ) = P(s)u(x, s). We then obtain the optimal state feedback control
μ ∂ v
μ ∂ P(s)u(x, s)
φ∗ = − =
. β ∂ n Γ β ∂n Γ
(4.307)
(4.308)
We perform a formal computation to derive an equation for the operator P. Let u1 , u2 ∈ H 2 (Ω ). Multiplying (4.301) and (4.302) by u1 , u2 , respectively, and integrating over Ω , we obtain Ω
∂u u1 dV = ∂t
' Ω
=μ
Ω
∂v u2 dV = ∂t
∂u u1 dS − μ ∂n
Γ
+
( μ u1 ∇2 u + u1∇ · (uv) + auu1 dV
Ω
Γ
u
∂ u1 dS + μ ∂n
Ω
u∇2 u1 dV
(u1 ∇ · (uv) + auu1)dV,
(4.309)
' Ω
( −μ u2 ∇2 v + u2v · ∇v − avu2 + α uu2 dV.
(4.310)
Substituting (4.307) into (4.310) gives −
Ω
∂ (Pu) u2 dV = ∂t
' Ω
( μ u2 ∇2 P(u) − u2v · ∇P(u) + aP(u)u2 + α uu2 dV,
and then − =
Ω
' Ω
dP (u)u2 dV − dt
∂u u2 dV P ∂t Ω
( μ u2 ∇2 P(u) − u2v · ∇P(u) + aP(u)u2 + α uu2 dV.
It then follows from the symmetry of P that − =
Ω
' Ω
dP (u)u2 dV − dt
Ω
∂u P(u2 )dV ∂t
( μ u2 ∇2 P(u) − u2v · ∇P(u) + aP(u)u2 + α uu2 dV.
Substituting (4.309) into this equation for
∂u Ω ∂ t P(u2 )dV
gives
dP ∂u ∂ P(u2 ) (u)u2 dV − μ P(u2 )dS + μ u dS ∂n Ω dt Γ ∂n Γ ' ( − μ u∇2 P(u2 ) + P(u2)∇ · (uv) + auP(u2) dV −
Ω
=
' Ω
( μ u2 ∇2 P(u) − u2v · ∇P(u) + aP(u)u2 + α uu2 dV.
Since P(u2 )|Γ = 0, u|Γ = βμ ∂ P(u) ∂ n , and u is arbitrary, replacing u by u1 , we deduce from the property (3) of Theorem 4.36 that
Ω
u2
dP (u1 )dV = dt
μ 2 ∂ P(u1 ) ∂ P(u2 ) dS ∂n ∂n Γ β ' ( μ u1 ∇2 P(u2 ) + μ u2∇2 P(u1 ) dV −
+ −
u1 P(T )(u2 )dV = γ
Ω
Ω Ω
(au1 P(u2 ) + au2P(u1 ) + α u1 u2 ) dV,
(4.311)
u1 u2 dV
(4.312)
Ω Ω
(u2 v · ∇P(u1 ) − P(u2)∇ · (u1 v)) dV
for all u₁, u₂ ∈ H²(Ω). This equation is called a differential Riccati equation. The study of the Riccati equation requires advanced mathematics; we refer the reader to one of the advanced control books [6, 24, 64, 67, 69, 99]. We state a result from [6, p.449, Theorem 2.2] without proof.

Theorem 4.37. The differential Riccati equation (4.311)-(4.312) has a unique, nonnegative, symmetric, strongly continuous solution P(t) (that is, P(t)u₀ is continuous for any u₀ ∈ L²(Ω)).
4.5.3 Existence and Uniqueness Using the solutions of the Riccati equation, we can prove the existence of an optimal control. For this we define the Riccati variance V (u) =
Ω
uPudV.
(4.313)
Lemma 4.8. Let P be the nonnegative symmetric solution of (4.311)-(4.312). Then dV = −α dt
Ω
u2 dV − β
Γ
φ 2 dS +
1 β
Γ
μ
∂ Pu −βφ ∂n
2 dS.
(4.314)
Proof. Using the equation (4.267), we compute
dV = uP udV + ut PudV + uPut dV dt Ω Ω Ω ' ( 2 = uP udV + μ ∇ u + ∇ · (uv) + au PudV Ω
+
Ω
'
Ω
( uP μ ∇ u + ∇ · (uv) + au dV 2
(integrate twice by parts and use the symmetry of P and Pu|Γ = 0) ' ( ∂ Pu dS + 2μ u∇2 Pu + 2auPu dV uP udV − 2 μ φ = ∂n Ω Γ Ω +
Ω
(uP∇ · (uv) − uv · ∇Pu)dV
= −α = −α +
Ω
Ω
Γ
= −α
u2 dV − 2 μ u2 dV − β
μ2 β
∂ Pu ∂n
Γ
Γ 2
φ
∂ Pu μ2 dS + ∂n β
Γ
∂ Pu ∂n
2 dS
(by (4.311)
φ 2 dS − 2 μφ
∂ Pu + β φ 2 dS ∂n
u2 dV − β φ 2 dS Ω Γ 2 ∂ Pu 1 − β φ dS. μ + β Γ ∂n
Lemma 4.9. Let P be the nonnegative symmetric solution of (4.311)-(4.312). Then
1 J(φ ; u0 , s, T ) = u0 P(s)u0 dV + β Ω
T ∂ Pu s
μ
Γ
∂n
−βφ
2 dSdt.
(4.315)
Proof. The identity (4.315) can be obtained by integrating (4.314) from s to T and using V (T ) = Ω u(T )P(T )u(T )dV = γ Ω u(T )2 dV. Theorem 4.38. Let J be defined by (4.270). Then there exists a unique optimal control φ ∗ such that min
φ ∈L2 (Γ ×(s,T ))
J(φ ; u0 , s, T ) = J(φ ∗ ; u0 , s, T ) =
Ω
u0 P(s)u0 dV,
where P is the nonnegative symmetric solution of (4.311)-(4.312) and
μ ∂ Pu
φ∗ = . β ∂ n Γ
(4.316)
(4.317)
Proof. It follows from (4.315) that J(φ ∗ ; u0 , s, T ) ≥ ≥
min
φ ∈L2 (Γ ×(s,T ))
J(φ ; u0 , s, T )
Ω
u0 P(s)u0 dV
= J(φ ∗ ; u0 , s, T ) if we take
φ = φ∗ =
μ ∂ Pu . β ∂n
In addition, we can readily show that J is strictly convex and then the optimal control is unique.
Example 4.9. Consider the control problem
∂u ∂ 2u = 2, ∂t ∂x u(0,t) = φ1 (t), u(π ,t) = φ2 (t), u(x, 0) = u0 (x), with the performance functional J(φ ; u0 , 0) =
T π 0
0
u2 dxdt +
T 0
(φ12 + φ22 )dt +
π 0
u(T )2 dx.
Let ∞
∑ pmn (t) sin(mx).
P(t) sin(nx) =
m=1
Then pmn (t) =
2 π
π
sin(mx)P(t) sin(nx)dx. 0
Taking u1 = sin(mx) and u2 = sin(nx) in the Riccati equation (4.311), we derive that ' ( 2 ∞ d pmn = m2 + n2 pmn + ∑ pim p jn i j((−1)i+ j − 1) − δmn, dt π i, j=1 pmn (T ) = δmn .
(4.318) (4.319)
Unlike Example 4.7, this infinite system is coupled and cannot be solved explicitly. Let u(x,t) =
∞
∑ cn (t) sin(nx),
n=1
where cn (t) =
2 π
π
u(x,t) sin(nx)dx. 0
We then obtain the optimal boundary state feedback control
∂ P(t)u(x,t)
φ1 (t) = −
∂x x=0
∞ ∂ P(t) sin(nx)
= − ∑ cn (t)
∂x x=0 n=1 =−
2 ∞ ∞ ∑ ∑ mpmn (t) π n=1 m=1
π
u(x,t) sin(nx)dx, 0
∂ P(t)u(x,t)
φ2 (t) =
∂x x=π
∞ ∂ P(t) sin(nx)
= ∑ cn (t)
∂x x=π n=1 2 = π
∞
∞
∑ ∑ m cos(mπ )pmn(t)
n=1 m=1
π
u(x,t) sin(nx)dx. 0
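Although the coupled system (4.318)-(4.319) has no explicit solution, a truncated version is easy to integrate numerically. The following sketch (our own illustration; the truncation level N and the use of solve_ivp are our choices, not part of the text) integrates the first N×N equations backward in time from the terminal condition p_mn(T) = δ_mn.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: backward integration of a truncated version of (4.318)-(4.319).
N, T = 6, 1.0
G = np.array([[(2.0 / np.pi) * i * j * ((-1) ** (i + j) - 1)
               for j in range(1, N + 1)] for i in range(1, N + 1)])
n2 = np.arange(1, N + 1).astype(float) ** 2

def rhs(tau, p_flat):
    # tau = T - t, so dP/dtau = -dP/dt with
    # dp_mn/dt = (m^2 + n^2) p_mn + sum_{i,j} p_im G_ij p_jn - delta_mn.
    P = p_flat.reshape(N, N)
    dPdt = (n2[:, None] + n2[None, :]) * P + P.T @ G @ P - np.eye(N)
    return -dPdt.ravel()

sol = solve_ivp(rhs, (0.0, T), np.eye(N).ravel(), rtol=1e-8, atol=1e-10)
P0 = sol.y[:, -1].reshape(N, N)      # approximation of (p_mn(0))
print(np.round(P0, 4))
```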
4.5.4 Optimal Boundary State Feedback Controls We now show that the optimal feedback controller (4.317) actually exponentially stabilizes the equation when T → ∞. Consider the boundary control problem
∂u = μ ∇2 u + ∇ · (uv) + au, ∂t u|Γ = φ , u(x, 0) = u0 (x)
(4.320) (4.321) (4.322)
with the quadratic performance functional over an infinite time interval J(φ ; u0 ) = α
∞ 0
Ω
|u(x,t)|2 dV dt + β
∞ 0
Γ
|φ (x,t)|2 dSdt,
(4.323)
where α , β ≥ 0 are weight constants. The optimal control problem is to minimize the functional J over L2 (Γ × (0, ∞)). Since the control problem (4.320)-(4.322) is exponentially stabilizable, it is well posed; that is, for each u0 ∈ L2 (Ω ), there exists a control φ such that J(φ , u0 ) < ∞. The following optimality theorem is the infinite time version of Theorem 4.35. Its proof is beyond the scope of the text and is referred to [69]. Theorem 4.39. The optimal control φ ∗ satisfies the following system:
∂u = μ ∇2 u + ∇ · (uv) + au, ∂t ∂v = −μ ∇2 v + v · ∇v − av + α u, ∂t u|Γ = φ ∗ , v|Γ = 0,
μ ∂ v
∗ φ =− , β ∂ n Γ u(x, 0) = u0 (x), v(x, ∞) = 0.
(4.324) (4.325) (4.326) (4.327) (4.328)
We define a linear operator Π by
Π u0 (x) = −v(x, 0; u0 )
(4.329)
for any given function u0 (x) ∈ L2 (Ω ), where v(x, 0; u0 ) is the solution of (4.324)(4.328). It can be seen that
Π u(x,t; u0 ) = −v(x,t; u0).
(4.330)
In the same way as the derivation of the differential Riccati equation (4.311), we can derive that
μ 2 ∂ Π (u1 ) ∂ Π (u2 ) dS − β ∂n ∂n
Γ
+ −
Ω Ω
' Ω
( μ u1 ∇2 Π (u2 ) + μ u2∇2 Π (u1 ) dV
(u2 v · ∇Π (u1) − Π (u2 )∇ · (u1 v)) dV (au1 Π (u2 ) + au2Π (u1 ) + α u1 u2 ) dV = 0
(4.331)
for all u1 , u2 ∈ H 2 (Ω ). This equation is called an algebraic Riccati equation. The study of the Riccati equation is beyond the scope of this text and we state a result from [6, p.522, Proposition 2.2] without proof. Consider the differential Riccati equation Ω
u2
− +
Ω
μ 2 ∂ P(u1 ) ∂ P(u2 ) dS ∂n ∂n Γ β ' ( + μ u1 ∇2 P(u2 ) + μ u2∇2 P(u1 ) dV
dP (u1 )dV = − dt
u1 P(0)(u2 )dV = γ
Ω Ω
(au1 P(u2 ) + au2P(u1 ) + α u1u2 ) dV,
(4.332)
u1 u2 dV
(4.333)
Ω Ω
(u2 v · ∇P(u1) − P(u2)∇ · (u1 v)) dV
for all u1 , u2 ∈ H 2 (Ω ). The algebraic Riccati equation (4.331) is the steady state equation of the differential equation. Theorem 4.40. The differential Riccati equation (4.332)-(4.333) has a unique strongly continuous solution that has the following properties: 1.
Ω u0 P(t)u0 dV
=
min
φ ∈L2 (Γ ×(0,t))
J(φ , u0 , 0, T ), where J is defined by (4.270).
2. If γ = 0, then the limit
lim P(t)u0 = Π u0
t→∞
for all u0 ∈ L2 (Ω )
(4.334)
exists and Π is the minimal nonnegative symmetric solution of (4.331). That is, any other solution Π1 of (4.331) satisfies Ω
u0 Π u0 dV ≤
Ω
u0 Π1 u0 dV.
Using the solutions of the algebraic Riccati equation (4.331), we can prove the existence of an optimal control. Lemma 4.10. Let Π be the nonnegative symmetric solution of (4.331) and define t
J(φ ; u0 , 0,t) = α
0
Ω
|u(x,t)|2 dV dt + β
t
|φ (x,t)|2 dSdt.
Γ
0
(4.335)
Then J(φ ; u0 , 0,t) =
u(0)Π u(0)dV − u(t)Π u(t)dV Ω Ω 2 ∂Πu 1 t − β φ dSdt. μ + β 0 Γ ∂n
Proof. Define V (u) =
Ω
(4.336)
u(t)Π u(t)dV.
Using the equation (4.320), we compute dV = dt =
ut Π udV +
Ω ' Ω
Ω
uΠ ut dV (
μ ∇ u + ∇ · (uv) + au Π udV + 2
Ω
' ( uΠ μ ∇2 u + ∇ · (uv) + au dV
(integrate twice by parts and use the symmetry of Π and Π u|Γ = 0) ' ( ∂Πu dS + 2μ u∇2 Π u + uΠ ∇ · (uv) − uv · ∇Π u + 2auΠ u dV = −2μ φ ∂n Γ Ω ∂Πu μ2 ∂Πu 2 2 dS + u dV − 2μ φ dS. (by (4.331) = −α ∂n β Γ ∂n Ω Γ Adding β
Γ
|φ |2 dS to this equations and integrating from 0 to t, we obtain
J(φ ; u0 , 0,t) =
=
u(0)Π u(0)dV − u(t)Π u(t)dV Ω Ω 2 2 ∂Πu μ ∂Πu + β φ 2 dS − 2 μφ + β ∂ n ∂n Γ u(0)Π u(0)dV − u(t)Π u(t)dV Ω Ω 2 ∂Πu 1 t − β φ dSdt. μ + β 0 Γ ∂n
Theorem 4.41. Let J be defined by (4.323). Then there exists a unique optimal control φ ∗ such that min
φ ∈L2 (Γ ×(0,∞))
J(φ ; u0 ) = J(φ ∗ ; u0 ) =
Ω
u0 Π u0 dV,
(4.337)
where Π is the nonnegative symmetric solution of (4.331) and
μ ∂ Π u
∗ φ = . β ∂ n Γ
(4.338)
Proof. We note that the optimal control problem is well posed and then the minimum below is finite. It follows from Theorem 4.40 that min
φ ∈L2 (Γ ×(0,∞))
J(φ ; u0 ) ≥ = =
min
J(φ ; u0 , 0,t)
min
J(φ ; u0 , 0,t)
φ ∈L2 (Γ ×(0,∞)) φ ∈L2 (Γ ×(0,t))
u0 P(t)u0 dV,
Ω
which, combined with (4.334), implies that min
φ ∈L2 (Γ ×(0,∞))
J(φ , u0 ) ≥
Ω
u0 Π u0 dV.
(4.339)
On the other hand, by (4.336), we deduce that J(φ ; u0 , 0,t) =
u0 Π u0 dV − u(t)Π u(t)dV Ω Ω 2 ∂Πu 1 − β φ dS μ + β Γ ∂n 2 1 ∂Πu − β φ dS. ≤ u0 Π u0 dV + μ β Γ ∂n Ω
Taking φ = φ ∗ =
μ ∂Πu β ∂n ,
we obtain J(φ ∗ ; u0 , 0,t) ≤
Ω
u0 Π u0 dV
and then min
φ ∈L2 (Γ ×(0,∞))
J(φ , u0 ) ≤ J(φ ∗ ; u0 ) ≤
Ω
u0 Π u0 dV.
(4.340)
Putting (4.339) and (4.340) together yields (4.337). In addition, we can readily show that J is strictly convex and then the optimal control is unique. With the optimal feedback controller (4.338), we obtain the feedback control system
∂u = μ ∇2 u + ∇ · (uv) + au, ∂t
μ ∂ Π u
, u|Γ = β ∂ n Γ
(4.341) (4.342)
u(x, 0) = u0 (x).
(4.343)
Define T (t)u0 = u(t; u0 ). We can show that T (t) is a strongly continuous semigroup on L2 (Ω ), but the proof requires the advanced semigroup theory and is referred to [6, p.525, Proposition 3.2]. It follows from Theorem 4.41 that the solution of the system satisfies ∞ 0
Ω
α u2 dV dt ≤ J(φ ∗ ; u0 ) < ∞, ∞
and then
0
T (t)2 dt < ∞.
Thus, from Theorem 4.5, we obtain the following theorem. Theorem 4.42. Let β > 0 and Π be the nonnegative symmetric solution of (4.331). Then the feedback control system (4.341)-(4.343) is exponentially stable. Example 4.10. Consider the control problem
∂u ∂ 2u = 2, ∂t ∂x u(0,t) = φ1 (t), u(x, 0) = u0 (x)
u(π ,t) = φ2 (t),
with the performance functional J(φ ; u0 , 0) =
∞ π 0
0
u2 dxdt +
∞ 0
(φ12 + φ22 )dt.
Let ∞
Π sin(nx) =
∑ pmn sin(mx).
m=1
Then pmn =
2 π
π 0
sin(mx)Π sin(nx)dx.
Taking u1 = sin(mx) and u2 = sin(nx) in the Riccati equation (4.331), we derive that '
( 2 ∞ m2 + n2 pmn + ∑ pim p jn i j((−1)i+ j − 1) − δmn = 0. π i, j=1
This system cannot be solved explicitly. Let u(x,t) =
∞
∑ cn (t) sin(nx),
n=1
(4.344)
where
2 π u(x,t) sin(nx)dx. π 0 We then obtain the optimal boundary state feedback control
∂ Π u(x,t)
φ1 (t) = −
∂x x=0
∞ ∂ Π sin(nx)
= − ∑ cn (t)
∂x x=0 n=1 cn (t) =
=−
2 π
∞
π
∞
∑∑
mpmn
u(x,t) sin(nx)dx, 0
n=1 m=1
∂ Π u(x,t)
φ2 (t) =
∂x x=π
∞ ∂ Π sin(nx)
= ∑ cn (t)
∂x x=π n=1 =
2 ∞ ∞ ∑ ∑ m cos(mπ )pmn π n=1 m=1
π
u(x,t) sin(nx)dx. 0
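The truncated version of the algebraic system (4.344) can also be solved numerically. The sketch below (our own illustration; the truncation level and the use of fsolve are our choices) computes a symmetric solution of the first N×N equations; in practice one should check that the computed solution is nonnegative, since (4.344) may admit several solutions.

```python
import numpy as np
from scipy.optimize import fsolve

# Sketch: solve a truncated version of (4.344) for the symmetric matrix (p_mn).
N = 6
G = np.array([[(2.0 / np.pi) * i * j * ((-1) ** (i + j) - 1)
               for j in range(1, N + 1)] for i in range(1, N + 1)])
n2 = np.arange(1, N + 1).astype(float) ** 2
iu = np.triu_indices(N)                       # solve only for the upper triangle

def residual(p_upper):
    P = np.zeros((N, N))
    P[iu] = p_upper
    P = P + P.T - np.diag(np.diag(P))         # symmetric matrix from upper triangle
    R = (n2[:, None] + n2[None, :]) * P + P @ G @ P - np.eye(N)
    return R[iu]

p_upper = fsolve(residual, np.zeros(len(iu[0])))
P = np.zeros((N, N)); P[iu] = p_upper; P = P + P.T - np.diag(np.diag(P))
print(np.round(P, 4))
print("eigenvalues of P:", np.round(np.linalg.eigvalsh(P), 4))   # should be >= 0
```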
Exercises 4.5 1. Prove that the operator P(s) defined by (4.285) is linear. 2. Consider the control problem
∂u ∂ 2u = , ∂t ∂ x2
∂u (0,t) = φ1 (t), ∂x u(x, 0) = u0 (x)
∂u (π ,t) = φ2 (t), ∂x
with the performance functional J(φ ; u0 , 0, T ) = a. b. c. d.
T π 0
0
u2 dxdt +
T 0
(φ1 (t)2 + φ2 (t)2 )dxdt +
Calculate the Gˆateaux-differential of J. Derive the optimality system for the optimal control. Derive a Riccati equation. Design an optimal state feedback control.
3. Consider the optimal control problem
∂u = μ ∇2 u + ∇ · (uv) + au, ∂t
π 0
u(T )2 dx.
u|Γ =
m
∑ bi (x)φi (t),
i=1
u(x, s) = u0 (x) with the quadratic performance functional J(φ1 , · · · , φm ; u0 , s, T ) = α
T
m
Ω
s
|u|2 dV dt + β ∑
T
i=1 s
|φi |2 dt + γ
Ω
|u(T )|2 dV,
where α , β , γ ≥ 0 are weight constants and bi (i = 1, 2, · · · , m) are given functions. a. Calculate the Gˆateaux-differential of J. b. Derive the optimality system for the optimal control. c. Derive a Riccati equation. 4. Use the system (4.324)-(4.328) to show that the operator Π defined by (4.329) has the following properties: (1) Π is symmetric, that is, Ω
p0 Π u0 dV =
Ω
u0 Π p0 dV
for any u0 , p0 ∈ L2 (Ω ).
(4.345)
(2) If φ ∗ is the optimal control of (4.320)-(4.322), then Ω
u0 Π u0 dV = α
∞ Ω
0
|u|2 dV dt + β
∞ 0
Γ
|φ ∗ |2 dSdt.
(4.346)
5. Derive the algebraic Riccati equation (4.331) by following the procedure of deriving the differential Riccati equation (4.311). 6. Consider the control problem
∂u ∂ 2u = , ∂t ∂ x2
∂u (0,t) = φ1 (t), ∂x u(x, 0) = u0 (x)
∂u (π ,t) = φ2 (t), ∂x
with the performance functional J(φ1 , φ2 ; u0 ) =
∞ π 0
0
u2 dxdt +
Define V (t) =
π 0
∞ 0
(φ1 (t)2 + φ2 (t)2 )dxdt.
uΠ udx,
where Π is a linear operator to be designed.
a. Derive a Riccati equation for Π such that
π dV =− u2 dx − φ1 (t)2 − φ2 (t)2 + ([Π u](0,t) − φ1(t))2 dt 0 + ([Π u](π ,t) + φ2(t))2 .
(The optimal control is given by
φ1 (t) = [Π u](0,t),
φ2 (t) = [Π u](π ,t).
The proof is not required and beyond the text). b. Follow Example 4.10 to reduce the Riccati equation to an infinite system.
4.6 Generalization to Abstract Dynamical Systems We briefly mention that the control theory for the reaction-convection-diffusion equations has been extended to the abstract dynamical systems du = Au + Bφ , u(0) = u0 , dt v = Cu + Dφ ,
(4.347) (4.348)
where A is the infinitesimal generator of an analytic semigroup T (t) on a Hilbert space H, B is a linear operator from a Hilbert space X to H, C is a linear operator from a Hilbert space H to Y , D is a linear operator from X to Y , φ is a control input, and v is an output. To see that the control problem of the reaction-convectiondiffusion equations is an example of (4.347)-(4.348), we define the operator A on the Hilbert space L2 (Ω ) by Au = μ ∇2 u + ∇ · (uv) + au
(4.349)
with the domain D(A) = H 2 (Ω ) ∩ H01 (Ω ), where I is the identity operator. The control operator B : Rm → L2 (Ω ) is defined by B = [b1 (x), · · · , bm (x)] .
(4.350)
The output operator C : L2 (Ω ) → Rl is defined by h1 (x)u(x)dV, · · · , hl (x)u(x)dV . Cu = ω1o
ωlo
Then the problem (4.20)-(4.22) is formulated into (4.347)-(4.348) with D = 0. Definition 4.10. If there is a linear operator F from H to X such that A + BF generates an exponentially stable C0 semigroup, then we say that the system (4.347) is
exponentially stabilizable and F is called a feedback operator. If there is a linear operator L from Y to H such that A + LC generates an exponentially stable C0 semigroup, then we say that the system (4.347)-(4.348) is exponentially detectable and L is called an output injection operator. As in the case of the reaction-convection-diffusion equations, the stabilization of the problem (4.347)-(4.348) can be solved by decomposing it into a finite dimensional control system and an infinite dimensional system. For details, we refer to [24, 99]. The optimal control theory of the problem (4.347)-(4.348) has been also well established and is referred to [6, Part IV], [64], and [67, Chapter 9] for details. Given T > 0, we want to minimize the cost functional J(u) =
T' 0
( Cu(t)2 + φ (t)2 dt + (P0 u(T ), u(T ))
(4.351)
over all controls φ ∈ L2 (0, T ; X ) subject to the differential equation (4.347), where P0 is a hermitian and nonnegative operator on H. The optimal control problem can be solved by the dynamic programming approach in the following two steps: 1. Solve the Riccati equation dP = A∗ P + PA − PBB∗P + C∗C, dt
P(0) = P0 .
2. Prove that the optimal control φ ∗ is given by the optimal state feedback controller
φ ∗ = −B∗ P(T − t)u(t) and that u is the solution of the closed loop equation du = (A − BB∗P(T − t))u, dt
u(0) = u0 .
The operator B can be either bounded or unbounded. The bounded case corresponds to the interior control of the reaction-convection-diffusion equations while the unbounded case corresponds to the boundary control. In formulating the boundary control problem into (4.347)-(4.348), we need the following Dirichlet “lifting” operator L : L2 (Γ ) → L2 (Ω ) defined as follows. Let λ0 be a real number such that the equation μ ∇2 u + ∇ · (uv) + au + λ0u = 0, u|Γ = f has a unique solution. We then define L( f ) = u. Using this operator, (4.267)-(4.268) can be written as du = Aλ0 u − λ0u − Aλ0 L (φ (t)) , dt where Aλ0 u = μ ∇2 u + ∇ · (uv) + au + λ0u. Then the control operator is given by B = −Aλ0 L, which is unbounded.
4.7 References and Notes The material presented in this chapter is just a simplification of the material from the advanced control books and papers [6, 24, 62, 64, 67, 69, 99]. Here are the resources: Theorem 4.6 is adopted from [24]; the material about uniform convergence from [92]; Exercises 4.3 from [48]; the material about optimal control from [6, 24, 62, 64, 67, 69, 99]. There have been numerous references on feedback control of the parabolic equations and we mention some of them for further studies: Amamm [2], Boskovic, Balogh, and Krstic [8], Boskovic, Krstic, and Liu [7], Burns, Rubio, and King [9], Christofieds [20], Krstic [48], and Lasiecka and Triggiani [54, 55, 57, 58, 64]. For more references, we refer to the above mentioned control books.
Chapter 5
One-dimensional Wave Equation
In this chapter, we study the control problem of the one-dimensional wave equation 2 ∂ 2u 2∂ u = c . ∂ t2 ∂ x2
A typical physical problem modeled by the wave equation is the vibration of a string. In this problem, u = u(x,t) represents the vertical displacement of the string from its equilibrium and the positive constant c (m/s) is a wave speed. In what follows, for convenience, we will use the subscripts ut , ux or ∂∂ ut , ∂∂ ux interchangeably to denote the derivatives of u with respect to t, x, respectively.
5.1 Stability Consider the wave equation
∂ 2u ∂ 2u = c2 2 in (0, L) × (0, ∞), 2 ∂t ∂x u(0,t) = 0, u(L,t) = 0, t ≥ 0, ∂u (x, 0) = u1 (x), u(x, 0) = u0 (x), ∂t
(5.1) (5.2) x ∈ (0, L),
(5.3)
where u0 , u1 are initial conditions. We define the energy of the system (5.1)-(5.3) by
2
2
1 L
∂ u 2 ∂u
(x,t) + c (x,t)
dx. (5.4) E(t) =
2 0 ∂t ∂x The following theorem shows that the equilibrium 0 is stable, but not exponentially stable.
Theorem 5.1. The energy of the system (5.1)-(5.3) satisfies E(t) = E(0) for all t ≥ 0.
(5.5)
Proof. Using the equation (5.1) and integrating by parts, we derive that

dE/dt = ∫₀^L [ ∂u/∂t · ∂²u/∂t² + c² ∂u/∂x · ∂²u/∂x∂t ] dx
      = ∫₀^L [ c² ∂u/∂t · ∂²u/∂x² + c² ∂u/∂x · ∂²u/∂x∂t ] dx
      = c² [ ∂u/∂t · ∂u/∂x ]₀^L + ∫₀^L [ −c² ∂²u/∂t∂x · ∂u/∂x + c² ∂u/∂x · ∂²u/∂x∂t ] dx
      = 0.

So E(t) = E(0) for all t ≥ 0.

Analogous to finite dimensional control systems, the stability of the system (5.1)-(5.3) is determined by its eigenvalues. To see this, we define the operator A on the Hilbert space H = H₀¹(0,L) × L²(0,L) by

A = ( 0          I
      c² d²/dx²  0 )   (5.6)

with the domain D(A) = (H²(0,L) ∩ H₀¹(0,L)) × H₀¹(0,L), where I denotes the identity operator. Set v = u_t and denote u = (u, v)ᵀ, u₀ = (u₀, u₁)ᵀ. Then the problem (5.1)-(5.3) can be formulated as an abstract system

du/dt = Au,   (5.7)
u(0) = u₀,   (5.8)
in the state space H.

Theorem 5.2. The eigenvalues of A defined by (5.6) and their corresponding eigenfunctions are given by

λ_{±n} = ± ncπi/L,  u_{±n} = ( sin(nπx/L), ±(ncπi/L) sin(nπx/L) )ᵀ,  n = 1, 2, · · · ,   (5.9)
where i denotes the imaginary unit. Furthermore, {u±n } is an orthogonal basis in H01 (0, L) × L2 (0, L) and
ω0 = 0 = sup{Re(λ ), λ ∈ σ (A)},
(5.10)
where ω0 is the growth bound of the solution of the wave equation defined by (2.56) and σ (A) denotes the spectrum of A. Proof. The eigenvalue problem
Au = λ u
is equivalent to v = λ u,
c2
d2u = λ v, dx2
u(0) = u(L) = 0.
Then (5.9) follows from Theorem 2.4. To prove that {u±n } is an orthogonal basis in H01 (0, L) × L2 (0, L), it suffices to show that any real function (u, v) ∈ H01 (0, L) × L2 (0, L) can be expanded in terms of {u±n }. Let ' ( ∞ ' ( ∞ sin n'πL x ( sin nπL'x ( u = ∑ cn ncπ i + ∑ dn . nπ x v − ncLπ i sin nπL x L sin L n=1 n=1 Since {sin pansions
' nπ x ( L
} is an orthogonal basis in L2 (0.L), we have the eigenfunction exu=
∞
∑ an sin
nπ x
n=1
L
,
v=
∞
∑ bn sin
n=1
nπ x L
.
Then cn and dn must satisfy the system cn + d n = a n ,
ncπ i ncπ i cn − dn = bn . L L
Since this system has a unique solution, (u, v) can be expanded in terms of {u±n }. In addition, (5.10) follows from either (5.5) or Theorem 2.22.
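A quick numerical illustration of Theorem 5.2 (our own sketch, not from the text): discretizing the operator A in (5.6) by centered finite differences and computing eigenvalues gives a purely imaginary spectrum approximating ±ncπi/L.

```python
import numpy as np

# Sketch: eigenvalues of a finite-difference discretization of A in (5.6),
# with c = 1 and L = pi, should approximate +- n i for the lowest modes.
c, L, N = 1.0, np.pi, 200
h = L / (N + 1)
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h ** 2            # Dirichlet Laplacian
A = np.block([[np.zeros((N, N)), np.eye(N)],
              [c ** 2 * D2, np.zeros((N, N))]])
lam = np.linalg.eigvals(A)
print("max |Re(lambda)|  :", np.max(np.abs(lam.real)))    # ~ 0
print("lowest frequencies:", np.round(sorted(np.abs(lam.imag))[:8], 3))
```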
Exercises 5.1 1. Consider the wave equation 2 ∂ 2u 2∂ u = c . ∂ t2 ∂ x2
What happens to the energy E(t) if a. u(0,t) = 0 and b. u(0,t) = 0 and
∂u ∂ x (L,t) ∂u ∂ x (L,t)
= 0; = −au(L,t) − m ∂∂ t 2u (L,t) with a, m > 0. 2
2. Consider the wave equation
∂ 2u ∂ 2u ∂ 2u , = + a ∂ t2 ∂ x2 ∂ x∂ t u(0,t) = 0, u(1,t) = 0, ∂u (x, 0) = u1 (x), u(x, 0) = u0 (x), ∂t where a is a constant. Show that E(t) ≡ E(0). 3. Consider the wave equation
∂ 2u ∂ 2u ∂u = −a , 2 2 ∂t ∂x ∂x u(0,t) = 0, u(1,t) = 0, ∂u (x, 0) = u1 (x), u(x, 0) = u0 (x), ∂t where a is a constant. Define the weighted energy 1
∂ u 2 ∂ u 2 Ew (t) = e−ax
+
dx. ∂t ∂x 0 Show that Ew (t) ≡ Ew (0). 4. Use the method of separation of variable to solve the wave equation 2 ∂ 2u 2∂ u = c + au in (0, L) × (0, ∞), ∂ t2 ∂ x2 u(0,t) = 0, u(L,t) = 0, t ≥ 0, ∂u (x, 0) = u1 (x), x ∈ (0, L), u(x, 0) = u0 (x), ∂t
where a is a constant. Determine the range of a for which the equilibrium 0 is not stable.
5.2 Linear Interior Feedback Stabilization Consider the wave equation with an interior control 2 ∂ 2u 2∂ u = c +φ ∂ t2 ∂ x2
in (0, L) × (0, ∞),
(5.11)
u(0,t) = 0,
u(L,t) = 0, t ≥ 0, ∂u (x, 0) = u1 (x), u(x, 0) = u0 (x), ∂t
(5.12) x ∈ (0, L),
(5.13)
where φ = φ (x,t) is a control to be found, and u0 , u1 are initial states. Physically the control φ represents an external force exerted on a string. In this control system, the state variables are u and ∂∂ ut . If there is a feedback control φ = F(u, ut ) such that the above closed-loop system is exponentially stable, we say that the wave equation is exponentially stabilizable by interior feedback. The output of the system can be proposed in many ways in accordance with specific physical problems. Since our interest is in the vibration of a string, the controlled output is the displacement oc = u(x,t).
(5.14)
If the velocity ut (x,t) can be measured on [0, L], then the measured output is om = ut (x,t).
(5.15)
Let the reference output or (x) = r(x). Since u(0,t) = u(L,t) = 0, r(x) should satisfy r(0) = r(L) = 0. If the control φ regulates u to r, then φ (x,t), u(x,t) will converge to φ¯ (x) and r(x), respectively, and the measured output ut (x,t) will converge to zero. Thus the control steady state φ¯ satisfies c2
d2r ¯ + φ = 0. dx2
Introducing new variables w = u − r,
ψ = φ − φ¯ ,
ηc = oc − or ,
ηm = om ,
(5.16)
we then transform the control problem into 2 ∂ 2w 2∂ w = c + ψ, ∂ t2 ∂ x2 ηm = wt (x,t), ηc = w(x,t)
with the zero reference output. Therefore, in what follows, we assume that the reference output is zero. The control problem is to design a feedback control φ to regulate the controlled output oc to zero, that is, the solution u of the problem (5.11)-(5.13) converges to zero in some norm. Note that zero is the equilibrium of the problem (5.11)-(5.13) with zero steady-state control. Hence, as in the case of finite dimensional systems, the control problem is transformed into the stabilization of the zero equilibrium. Unlike the reaction-convection-diffusion equation, the control problem (5.11)(5.13) of the wave equation cannot be decomposed into a finite dimensional control
system and a stable infinite dimensional system because all eigenvalues have zero ' ( real parts. By Theorem 2.5, the set of eigenfunctions {sin iπLx } is a basis in L2 (0, L). Then we can assume that ∞ iπ x u = ∑ ci (t) sin , (5.17) L i=1 ∞ iπ x 0 u0 = ∑ ci sin , (5.18) L i=1 ∞ iπ x u1 = ∑ c1i sin , (5.19) L i=1 where
2 L iπ x dx, u(x,t) sin ci (t) = L 0 L 2 L iπ x c0i = dx, u0 (x) sin L 0 L 2 L iπ x dx. u1 (x) sin c1i = L 0 L
(5.20) (5.21) (5.22)
Substituting (5.17) into (5.11), we obtain ∞
iπ x ∑ c¨i (t) sin L i=1
∞
iπ = −c ∑ ci (t) L i=1
Multiplying the equation by sin
2
and integrating from 0 to L, we obtain
2
L
ci (0) = c0i ,
φi =
iπ x + φ (x,t). sin L
' iπ x (
c¨i = −
where
2
2 L
icπ L
ci + φi (t),
c˙i (0) = c1i ,
L
sin 0
i = 1, 2, · · · ,
(5.23) (5.24)
iπ x φ (x)dx. L
Evidently, the real parts of eigenvalues for each equation are zero. Thus the original control problem (5.11)-(5.13) cannot be decomposed into a finite dimensional control system and a stable infinite dimensional system. So we need a different method to design a feedback controller. If a control φ drives the energy to zero, the energy should be decreasing, and then its derivative should be negative. This motivates us to examine the derivative of the energy L ∂u ∂ 2u ∂u ∂ 2u dE = (x,t) 2 (x,t) + c2 (x,t) (x,t) dx dt ∂t ∂t ∂x ∂ x∂ t 0
5.2 Linear Interior Feedback Stabilization
221
∂u ∂ 2u ∂u ∂ 2u (x,t) 2 (x,t) + c2 (x,t) (x,t) dx ∂t ∂x ∂x ∂ x∂ t 0 L ∂u + (x,t)φ (x,t)dx 0 ∂t L ∂u (x,t)φ (x,t)dx. = 0 ∂t =
L
c2
To make it negative, we take
φ (x,t) = −k so that dE = −k dt
L
∂u 0
∂t
∂u (x,t) ∂t
(5.25)
2
(x,t)
dx ≤ 0,
(5.26)
and then the energy is decreasing, where k is a positive constant called control gain. In fact, we will show that this damping force will make the vibration die out exponentially. The feedback control (5.25) is called a velocity feedback controller. Since the state u is not used for feedback, this feedback can be regarded as an output feedback if the velocity can be physically measured on the whole domain. The above method of designing a feedback controller is referred as the method of energy. We now use the Fourier method to prove that the equilibrium 0 of the system (5.11)-(5.13) is exponentially stable. 2(N+1)cπ π Theorem 5.3. Assume that 2Nc for some integer N ≥ 0. Then the L N + 1, 2 2 2 4n c π ωn = k 2 − . L2
(5.32) (5.33)
Proof. Since u0 (x) = u1 (x) =
∞
∑ an sin
n=1 ∞
∑ bn sin
n=1
nπ x L nπ x L
,
(5.34)
,
(5.35)
we have
dcn (0) = bn . dt Substituting (5.27) into the equation (5.11), we obtain ∞ 2 n 2 c2 π 2 cn dcn nπ x d cn (t) + = 0, sin (t) + k ∑ dt 2 dt L2 L n=1 cn (0) = an ,
(5.36)
which implies that
n 2 c2 π 2 cn d 2 cn dcn (t) + (t) + k = 0. dt 2 dt L2 Solving the equations (5.36)-(5.37) gives (5.30)-(5.32).
(5.37)
2(N+1)cπ π for some integer N ≥ 0. Then there Corollary 5.1. Assume that 2Nc L ≤k< L exists M > 0 such that the energy of the problem (5.11)-(5.13) with the feedback control (5.25) satisfies
E(t) ≤ ME(0)e−σ t
for t ≥ 0,
(5.38)
where
σ =
k 0 < k ≤ 2cLπ , 2, 2 2 k 1 k2 − 4cL2π , k > 2cLπ . 2−2
Proof. By (5.34) and (5.35), we deduce that ∂ u0 2 ∂x 2 = L u1 2L2 =
∞
n2 π 2 a2n , 2L n=1
∑ ∞
Lb2n . n=1 2
∑
(5.39)
From (5.30)-(5.32), we derive that there exists a positive constant C such that |cn (t)|2 ≤ C(n2 a2n + b2n)e−σ t , b2n −σ t 2 2 |cn (t)| ≤ C an + 2 e . n It therefore follows that
nπ x 2 nπ x 2 1 L 2
∞ nπ 1 L
∞
c ∑ cn (t) cos E(t) =
∑ cn (t) sin
dx +
dx
n=1 2 0 n=1 L 2 0 L L =
L 2 ∞ L ∞ n2 π 2 2 c ∑ |cn (t)|2 2 |c (t)| + ∑ n 4 n=1 4 n=1 L
L ∞ L 2 ∞ n2 π 2 b2n −σ t 2 2 2 −σ t 2 ≤ ∑ C(n an + bn)e + c ∑ 2 C an + 2 e 4 n=1 4 n=1 L n ≤ ME(0)e−σ t , where M is a new positive constant. Since the function f (k) =
Since the function
$$f(k)=\frac{k}{2}-\frac12\sqrt{k^2-\frac{4c^2\pi^2}{L^2}}=\frac{2c^2\pi^2/L^2}{k+\sqrt{k^2-4c^2\pi^2/L^2}}$$
is decreasing on $[2c\pi/L,\infty)$, the maximum decay rate $\sigma_m$ that can be achieved by a velocity feedback is $2c\pi/L-\varepsilon$ for any small $\varepsilon>0$, attained when the control gain $k=2c\pi/L$. Hence, a larger control gain does not give a larger decay rate.
Define the operator A on the Hilbert space $H=H_0^1(0,L)\times L^2(0,L)$ by
$$A=\begin{pmatrix}0 & I\\[0.5ex] c^2\dfrac{d^2}{dx^2} & -kI\end{pmatrix}\qquad(5.40)$$
with the domain $D(A)=(H^2(0,L)\cap H_0^1(0,L))\times H_0^1(0,L)$, where I denotes the identity operator.

Theorem 5.4. The eigenvalues of A defined by (5.40) and their corresponding eigenfunctions are given by
$$\lambda_{\pm n}=\frac{-k\pm\sqrt{k^2-4n^2c^2\pi^2/L^2}}{2},\qquad(5.41)$$
$$u_{\pm n}=\begin{pmatrix}\sin(n\pi x/L)\\ \lambda_{\pm n}\sin(n\pi x/L)\end{pmatrix},\qquad n=1,2,\dots\qquad(5.42)$$
Moreover,
$$\omega_0=\sup\{\operatorname{Re}\lambda:\ \lambda\in\sigma(A)\},\qquad(5.43)$$
where $\omega_0$ is the growth bound of the solution of the wave equation defined by (2.56) and $\sigma(A)$ denotes the spectrum of A.
Proof. The eigenvalue problem $Au=\lambda u$, $u=(u,v)^{*}$, is equivalent to
$$v=\lambda u,\qquad c^2\frac{d^2u}{dx^2}-kv=\lambda v,\qquad u(0)=u(L)=0.$$
It then follows from Theorem 2.4 that
$$\lambda_n^2+k\lambda_n=-\frac{c^2n^2\pi^2}{L^2},\qquad u_n=\sin\frac{n\pi x}{L},$$
which implies (5.41). Then (5.43) follows from Corollary 5.1.
If $k=2nc\pi/L$, then the algebraic multiplicity of $\lambda_n=-k/2$ is 2. We define $u_{+n}$ as in (5.42) and the generalized eigenfunction $u_{-n}$ by
$$(A-\lambda_nI)u_{-n}=\Big(A+\frac{k}{2}I\Big)u_{-n}=u_{+n},\qquad(u_{+n},u_{-n})=0.$$
Solving this equation, we obtain
$$u_{-n}=\begin{pmatrix}\tfrac1k\sin(n\pi x/L)\\[0.5ex]\tfrac12\sin(n\pi x/L)\end{pmatrix}.$$
Evidently, $\{u_{\pm n}\}$ is not orthogonal. Define the linear operator $L$ by
$$L\begin{pmatrix}\sin(n\pi x/L)\\ \lambda_{\pm n}\sin(n\pi x/L)\end{pmatrix}=\begin{pmatrix}\sin(n\pi x/L)\\ \pm\tfrac{n\pi}{L}\,i\,\sin(n\pi x/L)\end{pmatrix}.$$
We can show that $L$ is an invertible bounded linear operator, so $\{u_{\pm n}\}$ is a Riesz basis. (By definition, a Riesz basis is the image of an orthogonal basis under an invertible bounded linear operator.) When the feedback gain k is a nonnegative function of x that is strictly positive on some subinterval, Corollary 5.1 still holds, but its proof is more sophisticated; we refer to [22].
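As a numerical illustration of the spectrum (5.41) and the growth bound (5.43), the following Python sketch computes the eigenvalues of A for several control gains and reports the spectral abscissa. The values of c, L, the gains, and the mode truncation are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

# Sketch: spectral abscissa sup Re(lambda) of the damped wave operator A in
# (5.40), using the eigenvalue formula (5.41),
#   lambda_{+/-,n} = (-k +/- sqrt(k^2 - 4 n^2 c^2 pi^2 / L^2)) / 2.
# Parameters below are illustrative only.
c, L = 1.0, 1.0
n = np.arange(1, 200)            # truncated mode index, for illustration

def spectral_abscissa(k):
    disc = k**2 - 4.0 * n**2 * c**2 * np.pi**2 / L**2 + 0j
    lam = np.concatenate([(-k + np.sqrt(disc)) / 2.0,
                          (-k - np.sqrt(disc)) / 2.0])
    return lam.real.max()

for k in [0.5, 2.0, 2*np.pi*c/L, 10.0, 50.0]:
    print(f"k = {k:8.3f}   sup Re(lambda) = {spectral_abscissa(k): .4f}")
```

The printed abscissa is most negative near k = 2cπ/L and creeps back toward the imaginary axis for very large gains, which is consistent with the remark above that a larger control gain does not give a larger decay rate.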
Exercises 5.2
1. Consider the wave equation with a velocity feedback control
$$\frac{\partial^2u}{\partial t^2}=c^2\frac{\partial^2u}{\partial x^2}+au-k\frac{\partial u}{\partial t}\ \text{in }(0,L)\times(0,\infty),\quad u(0,t)=0,\ u(L,t)=0,\ t\ge 0,$$
$$u(x,0)=u_0(x),\quad\frac{\partial u}{\partial t}(x,0)=u_1(x),\ x\in(0,L),$$
where a, k are constants with k > 0. Use the method of separation of variables to solve the problem and then determine the range of a such that the equation is exponentially stable.
2. Consider the wave equation with a velocity feedback control
$$\frac{\partial^2u}{\partial t^2}=c^2\frac{\partial^2u}{\partial x^2}-k\frac{\partial u}{\partial t}\ \text{in }(0,L)\times(0,\infty),\quad u(0,t)=0,\ \frac{\partial u}{\partial x}(L,t)=0,\ t\ge 0,$$
$$u(x,0)=u_0(x),\quad\frac{\partial u}{\partial t}(x,0)=u_1(x),\ x\in(0,L),$$
where k is a positive constant. Define the energy functional E(t) by
$$E(t)=\frac12\int_0^L\Big[\Big|\frac{\partial u}{\partial t}(x,t)\Big|^2+c^2\Big|\frac{\partial u}{\partial x}(x,t)\Big|^2\Big]dx,$$
and the perturbed energy functional $E_\varepsilon(t)$ by
$$E_\varepsilon(t)=E(t)+\varepsilon\int_0^L u(x,t)\frac{\partial u}{\partial t}(x,t)\,dx.$$
a. Show that if ε is small enough, then there exist positive constants $c_1$ and $c_2$ such that $c_1E(t)\le E_\varepsilon(t)\le c_2E(t)$.
b. Show that if ε is small enough, then there exists a positive constant δ such that $\dfrac{dE_\varepsilon}{dt}(t)\le-\delta E_\varepsilon(t)$.
c. Solve the above differential inequality and then show that $E(t)\le ME(0)e^{-\delta t}$, where M is a positive constant.
5.3 Linear Boundary Feedback Stabilization
In the interior control problem (5.11), a damping force is exerted along the whole string. Such a damping force can also be applied at the ends of the string; in fact, this boundary control mechanism is easier to implement in real problems.
Consider the wave equation with a boundary control
$$\frac{\partial^2u}{\partial t^2}=c^2\frac{\partial^2u}{\partial x^2}\quad\text{in }(0,L)\times(0,\infty),\qquad(5.44)$$
$$u(0,t)=0,\qquad c^2\frac{\partial u}{\partial x}(L,t)=\phi(t),\quad t\ge 0,\qquad(5.45)$$
$$u(x,0)=u_0(x),\qquad\frac{\partial u}{\partial t}(x,0)=u_1(x),\quad x\in(0,L),\qquad(5.46)$$
where φ = φ(t) represents a control force exerted at the right end of the string. The controlled output is the displacement $o_c=u(x,t)$, and the measured output is assumed to be the velocity at the right end x = L, $o_m=u_t(L,t)$. If there is a feedback control $\phi=F(u,u_t)$ such that the closed-loop system is exponentially stable, we say that the wave equation is exponentially stabilizable by boundary feedback. To find a feedback control that regulates u to zero, we examine the derivative of the energy
$$\frac{dE}{dt}=\int_0^L\Big[\frac{\partial u}{\partial t}\frac{\partial^2u}{\partial t^2}+c^2\frac{\partial u}{\partial x}\frac{\partial^2u}{\partial x\,\partial t}\Big]dx=\int_0^L\Big[c^2\frac{\partial^2u}{\partial x^2}\frac{\partial u}{\partial t}+c^2\frac{\partial u}{\partial x}\frac{\partial^2u}{\partial x\,\partial t}\Big]dx=\frac{\partial u}{\partial t}(L,t)\,\phi(t).$$
This leads us to take
$$\phi=-k\,\frac{\partial u}{\partial t}(L,t)\qquad(5.47)$$
so that
$$\frac{dE}{dt}=-k\Big(\frac{\partial u}{\partial t}(L,t)\Big)^2\le 0,\qquad(5.48)$$
and then the energy is decreasing, where k is a positive constant called the control gain. Thus we have designed the feedback control $\phi=F(u,u_t)=-k\,\frac{\partial u}{\partial t}(L,t)$. Since only the velocity at the right-end point is used for feedback, the feedback can be regarded as an output feedback. Using the perturbed energy method developed in [14, 15, 16, 17, 18, 74, 80, 81], we prove that the feedback controller exponentially stabilizes the equilibrium 0 of the system (5.44)-(5.46).
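Before turning to the proof, the following finite-difference sketch in Python illustrates the effect of the boundary velocity feedback (5.47) numerically. It is an illustration of my own, not the book's code: the parameter values, grid, and first-order treatment of the initial velocity are arbitrary choices.

```python
import numpy as np

# 1-D wave equation u_tt = c^2 u_xx on (0, L), u(0,t) = 0, with the boundary
# velocity feedback (5.47): c^2 u_x(L,t) = -k u_t(L,t).  Illustrative values.
c, L, k = 1.0, 1.0, 1.0          # k = c is the optimal gain suggested below
J, T = 200, 8.0
dx = L / J
dt = 0.5 * dx / c                # CFL-stable time step
x = np.linspace(0.0, L, J + 1)

u_old = np.sin(np.pi * x)        # initial displacement u0
u = u_old.copy()                 # zero initial velocity (first-order start)

def energy(u_prev, u_curr):
    v = (u_curr - u_prev) / dt
    ux = np.diff(u_curr) / dx
    return 0.5 * (np.sum(v**2) * dx + c**2 * np.sum(ux**2) * dx)

steps = int(T / dt)
r2 = (c * dt / dx) ** 2
for s in range(steps):
    u_new = np.empty_like(u)
    u_new[0] = 0.0                                    # Dirichlet end
    u_new[1:J] = 2*u[1:J] - u_old[1:J] + r2*(u[2:J+1] - 2*u[1:J] + u[0:J-1])
    # ghost value at x = L + dx enforcing c^2 u_x(L) = -k u_t(L)
    ghost = u[J-1] - 2*dx*k/c**2 * (u[J] - u_old[J]) / dt
    u_new[J] = 2*u[J] - u_old[J] + r2*(ghost - 2*u[J] + u[J-1])
    u_old, u = u, u_new
    if s % (steps // 8) == 0:
        print(f"t = {s*dt:5.2f}   E = {energy(u_old, u):.4e}")
```

The printed energy decays roughly exponentially, as Theorem 5.5 below predicts.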
We construct the following perturbed energy functional $E_\delta$:
$$F(t)=\int_0^L 2u_t\,x\,\frac{\partial u}{\partial x}\,dx,\qquad(5.49)$$
$$E_\delta(t)=E(t)+\delta F(t),\qquad(5.50)$$
where δ is a positive constant. We first show that, by choosing δ sufficiently small, $E_\delta$ and E are equivalent.

Lemma 5.1. The perturbed energy satisfies
$$\Big(1-\frac{2L\delta}{c}\Big)E(t)\le E_\delta(t)\le\Big(1+\frac{2L\delta}{c}\Big)E(t).\qquad(5.51)$$
Proof. Using Young's inequality (2.4), we derive that
$$|F(t)|=\Big|\int_0^L 2u_t\,x\,u_x\,dx\Big|\le\frac1c\int_0^L 2|x|\,|u_t|\,c|u_x|\,dx\le\frac{L}{c}\int_0^L 2|u_t|\,c|u_x|\,dx\le\frac{L}{c}\int_0^L\big(|u_t|^2+c^2|u_x|^2\big)dx=\frac{2L}{c}E(t).$$
It therefore follows that
$$E_\delta(t)\le E(t)+\delta|F(t)|\le\Big(1+\frac{2L\delta}{c}\Big)E(t)$$
and
$$E_\delta(t)\ge E(t)-\delta|F(t)|\ge\Big(1-\frac{2L\delta}{c}\Big)E(t).$$
Theorem 5.5. The solution of (5.44)-(5.46) with the feedback (5.47) satisfies
$$E(t)\le ME(0)e^{-\sigma t}\quad\text{for }t\ge 0,\qquad(5.52)$$
where
$$\delta=\frac12\min\Big\{\frac{c}{2L},\ \frac{kc^2}{L(c^2+k^2)}\Big\},\qquad(5.53)$$
$$\sigma=2\delta\Big(1-\frac{2L\delta}{c}\Big),\qquad(5.54)$$
$$M=\frac{c+2L\delta}{c-2L\delta}.\qquad(5.55)$$
Proof. By (5.49), we have
$$F'(t)=\int_0^L 2u_{tt}\,x\,\frac{\partial u}{\partial x}\,dx+\int_0^L 2u_t\,x\,\frac{\partial^2u}{\partial x\,\partial t}\,dx.\qquad(5.56)$$
Moreover,
$$\int_0^L 2u_{tt}\,x\,u_x\,dx=\int_0^L 2c^2\frac{\partial^2u}{\partial x^2}\,x\,u_x\,dx=Lc^2u_x^2(L,t)-c^2\int_0^L u_x^2(x,t)\,dx=\frac{Lk^2}{c^2}u_t^2(L,t)-c^2\int_0^L u_x^2(x,t)\,dx\qquad(5.57)$$
and
$$\int_0^L 2u_t\,x\,\frac{\partial^2u}{\partial x\,\partial t}\,dx=Lu_t^2(L,t)-\int_0^L u_t^2(x,t)\,dx.\qquad(5.58)$$
It therefore follows from (5.56), (5.57), and (5.58) that
$$F'(t)=L\Big(1+\frac{k^2}{c^2}\Big)u_t^2(L,t)-2E(t).\qquad(5.59)$$
We then derive from (5.48) that
$$E_\delta'(t)=E'(t)+\delta F'(t)=-2\delta E(t)+\Big[\delta L\Big(1+\frac{k^2}{c^2}\Big)-k\Big]u_t^2(L,t)\le-2\delta E(t).$$
It then follows from (5.51) that
$$E_\delta'(t)\le-2\delta\Big(1-\frac{2L\delta}{c}\Big)E_\delta(t)=-\sigma E_\delta(t).\qquad(5.60)$$
Using Gronwall's inequality (4.14), we derive that
$$E_\delta(t)\le E_\delta(0)\exp(-\sigma t).\qquad(5.61)$$
It therefore follows from (5.51) that
$$E(t)\le\frac{1}{1-\frac{2L\delta}{c}}E_\delta(t)\le\frac{1}{1-\frac{2L\delta}{c}}E_\delta(0)\exp(-\sigma t)\le\frac{1+\frac{2L\delta}{c}}{1-\frac{2L\delta}{c}}E(0)\exp(-\sigma t).$$
Since the function $f(k)=\dfrac{kc^2}{L(c^2+k^2)}$ attains its maximum $\dfrac{c}{2L}$ at k = c, δ attains its maximum $\dfrac{c}{4L}$ at k = c. In addition, the decay rate σ attains its maximum $\dfrac{c}{4L}$ at $\delta=\dfrac{c}{4L}$. Thus the maximum decay rate is achieved when the control gain k = c.
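The gain dependence just described is easy to check numerically. The short Python sketch below (my own illustration; the values of c and L and the gain grid are arbitrary) evaluates δ(k) from (5.53) and σ from (5.54) and locates the maximizing gain.

```python
import numpy as np

# Gain dependence in Theorem 5.5:
#   delta(k) = (1/2) min{ c/(2L), k c^2 / (L (c^2 + k^2)) },
#   sigma    = 2 delta (1 - 2 L delta / c).
# Both are maximized at k = c, where sigma = c/(4L).
c, L = 1.0, 1.0
ks = np.linspace(0.05, 10.0, 2000)
delta = 0.5 * np.minimum(c / (2*L), ks * c**2 / (L * (c**2 + ks**2)))
sigma = 2 * delta * (1 - 2 * L * delta / c)
i = np.argmax(sigma)
print(f"best gain k ~ {ks[i]:.3f}, sigma_max ~ {sigma[i]:.4f}, c/(4L) = {c/(4*L):.4f}")
```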
Exercises 5.3
1. Consider the boundary control problem
$$\frac{\partial^2u}{\partial t^2}=c^2\frac{\partial^2u}{\partial x^2}\ \text{in }(0,L)\times(0,\infty),\quad u(0,t)=0,\ c^2\frac{\partial u}{\partial x}(L,t)=-k\frac{\partial u}{\partial t}(L,t),\ t\ge 0,$$
$$u(x,0)=u_0(x),\quad\frac{\partial u}{\partial t}(x,0)=u_1(x),\ x\in(0,L).$$
Derive the eigenvalue problem and then solve it by assuming that $u=e^{\lambda t}\varphi(x)$.
2. Let $E:[0,\infty)\to[0,\infty)$ be a non-increasing function and assume that there exists a constant K > 0 such that
$$\int_t^{\infty}E(s)\,ds\le KE(t),\qquad t\ge 0.$$
Show that
$$E(t)\le eE(0)e^{-t/K},\qquad t\ge K.$$
(Hint: define $f(t)=e^{t/K}\int_t^{\infty}E(s)\,ds$ and then show that f(t) is non-increasing.)
3. Consider the wave equation with a boundary velocity feedback control
∂ 2u ∂ 2u = c2 2 2 ∂t ∂x
in (0, L) × (0, ∞),
∂u ∂u (L,t) = −k (L,t), t ≥ 0, ∂x ∂t ∂u (x, 0) = u1 (x), x ∈ (0, L), u(x, 0) = u0 (x), ∂t u(0,t) = 0,
c2
where k is a positive constant. Define the energy functional E(t) by
2
2
1 L
∂ u 2 ∂u
(x,t) + c (x,t)
dx. E(t) =
2 0 ∂t ∂x
(5.62) (5.63) (5.64)
a. Show that E(T ) + k
2
(L,t) dt = E(s)
∂t
T
∂u s
for any 0 < s < T.
b. Use this energy identity and the multiplier technique (multiply the equation (5.62) by x ∂∂ ux and then integrate over (0, L) × (s, T )) to show that there exists a constant K > 0 such that T s
E(t) ≤ KE(s)
for any 0 ≤ s < T.
c. Use the result of Problem 2 to show that there exist positive constants M and δ such that E(t) ≤ ME(0)e−δ t . 4. Prove the Poincar´e’s inequality of the transmission form: There exists a positive constant C such that x0
L
|u1 (x)|2 dx + |u2 (x)|2 dx x0 0
2 L
x0 du (x) 2
du (x) 1 2
dx +
dx
≤C
dx dx x0 0
(5.65)
for all u1 ∈ H 1 (0, x0 ) and u2 ∈ H 1 (x0 , L) with u1 (x0 ) = u2 (x0 ), where x0 ∈ (0, L). 5. Consider the transmission problem of the wave equation with a boundary velocity feedback control 2 ∂ 2 u1 2 ∂ u1 = c in (0, L/2) × (0, ∞), 1 ∂ t2 ∂ x2 2 ∂ 2 u2 2 ∂ u2 = c in (L/2, L) × (0, ∞), 2 ∂ t2 ∂ x2 ∂ u2 ∂ u2 u1 (0,t) = 0, c2 (L,t) = −k (L,t), ∂x ∂t ∂ u1 ∂ u2 (L/2,t) = c22 (L/2,t), u1 (L/2,t) = u2 (L/2,t), c21 ∂x ∂x ∂ ui ui (x, 0) = ui0 (x), (x, 0) = ui1 (x), i = 1, 2, ∂t
where k is a positive constant. Define
2
2
∂ u1 1 L/2
∂ u1 2 E(t) = (x,t)
+ c1
(x,t)
dx
2 0 ∂t ∂x
2
2
1 L
∂ u2 ∂ u 2 (x,t)
+ c22
(x,t)
dx, + 2 L/2 ∂ t ∂x
(5.66) (5.67) (5.68) (5.69) (5.70)
F(t) = 2
L/2 ∂ u1
x
(x,t)
∂t 0 Eε = E(t) + ε F(t).
a. Show that
∂ u1 (x,t)dx + 2 ∂x
L
x L/2
∂ u2 ∂ u2 (x,t) (x,t)dx, ∂t ∂x
2
∂ u2
dE = −k
(L,t)
. dt ∂t
b. Show that if ε is small enough, then there exist positive constants C1 and C2 such that C1 E(t) ≤ Eε (t) ≤ C2 E(t). c. Show that if c2 ≤ c1 and ε is small enough, then there exists a positive constant δ such that dEε (t) ≤ −δ Eε (t). dt d. Solve the above differential equation and then show that E(t) ≤ ME(0)e−δ t , where M is a positive constant.
5.4 References and Notes
This chapter is mainly based on the references [14, 15, 16, 17, 18, 22, 23, 24, 47, 70, 71, 80, 93, 110, 111]. The interior feedback control in Section 5.2 is a simplified version of the work by Castro and Zuazua [12, 13], Freitas and Zuazua [36], and Cox and Zuazua [22]. In their work, these authors considered the following feedback control problem
$$\frac{\partial^2u}{\partial t^2}=c^2\frac{\partial^2u}{\partial x^2}+a(x)\phi\ \text{in }(0,L)\times(0,\infty),\quad u(0,t)=u(L,t)=0,\ t\ge 0,$$
$$u(x,0)=u_0(x),\quad\frac{\partial u}{\partial t}(x,0)=u_1(x),\ x\in(0,L),$$
where a(x) is a function supported on a subset $\omega=(x_0-\varepsilon,x_0+\varepsilon)\subset[0,L]$, that is, a(x) = 0 if $x\notin\omega$. The effect of time delay in the boundary feedback control was investigated by Datko, Lagnese, and Polis [25]; by Datko [26, 27, 28, 29]; and by Li and Liu [68]. We mention more references for further study: Dehman, Lebeau, and Zuazua [33]; Zhang and Zuazua [107, 108].
Chapter 6
Higher-dimensional Wave Equation
In this chapter, we study the control problem of the linear wave equation
$$\frac{\partial^2u}{\partial t^2}=c^2\nabla^2u.$$
This equation can serve as a mathematical model for many physical problems, such as the vibration of a membrane. In the membrane problem, u = u(x, y, t) represents the vertical displacement of the membrane from its equilibrium and the positive constant c (m/s) is a wave speed.
In this chapter, we use the following notation. Let Ω be a bounded open set in $\mathbb{R}^n$ and $x^0\in\mathbb{R}^n$. Set (see Figure 6.1)
$$\Gamma=\partial\Omega,\qquad(6.1)$$
$$m(x)=x-x^0=(x_1-x_1^0,\dots,x_n-x_n^0),\qquad(6.2)$$
$$\Gamma(x^0)=\{x\in\Gamma:\ m(x)\cdot n(x)>0\},\qquad(6.3)$$
$$\Gamma_*(x^0)=\Gamma-\Gamma(x^0)=\{x\in\Gamma:\ m(x)\cdot n(x)\le 0\},\qquad(6.4)$$
where n denotes the unit normal pointing towards the outside of Ω.
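The geometric splitting (6.3)-(6.4) is easy to visualize computationally. The Python sketch below (an illustration of my own; the square domain, the sampling, and the choice of $x^0$ are arbitrary assumptions) classifies sampled boundary points of the unit square according to the sign of $m(x)\cdot n(x)$.

```python
import numpy as np

# For Omega = (0,1)^2 and a chosen point x0, classify boundary points into
# Gamma(x0) (m . n > 0) and Gamma_*(x0) (m . n <= 0), following (6.1)-(6.4).
x0 = np.array([-0.2, -0.2])      # a point below and to the left of the square

def outward_normal(p):
    x, y = p
    if np.isclose(x, 0.0): return np.array([-1.0, 0.0])
    if np.isclose(x, 1.0): return np.array([ 1.0, 0.0])
    if np.isclose(y, 0.0): return np.array([ 0.0, -1.0])
    return np.array([0.0, 1.0])   # side y = 1

s = np.linspace(0.01, 0.99, 50)   # sample points on each of the four sides
boundary = ([np.array([t, 0.0]) for t in s] + [np.array([t, 1.0]) for t in s] +
            [np.array([0.0, t]) for t in s] + [np.array([1.0, t]) for t in s])

gamma = [p for p in boundary if np.dot(p - x0, outward_normal(p)) > 0]
print(f"{len(gamma)} of {len(boundary)} sampled boundary points lie on Gamma(x0)")
```

With this choice of $x^0$, $\Gamma(x^0)$ consists of the top and right sides of the square; these are the portions of the boundary on which the controls of this chapter act.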
6.1 Stability
Consider the wave equation
$$\frac{\partial^2u}{\partial t^2}=c^2\nabla^2u\quad\text{in }\Omega\times(0,\infty),\qquad(6.5)$$
$$u=0\quad\text{on }\Gamma_*(x^0)\times(0,\infty),\qquad(6.6)$$
$$\frac{\partial u}{\partial n}=0\quad\text{on }\Gamma(x^0)\times(0,\infty),\qquad(6.7)$$
Fig. 6.1 Parts Γ (x0 ) and Γ∗ (x0 ) of the boundary of the domain Ω .
$$u(x,0)=u_0(x),\qquad\frac{\partial u}{\partial t}(x,0)=u_1(x)\quad\text{in }\Omega,\qquad(6.8)$$
where $u_0$ and $u_1$ are initial conditions. We define the energy of the system (6.5)-(6.8) by
$$E(t)=\frac12\int_\Omega\Big[\Big|\frac{\partial u}{\partial t}(x,t)\Big|^2+c^2|\nabla u(x,t)|^2\Big]dV.\qquad(6.9)$$
The following theorem shows that the equilibrium 0 is stable, but not exponentially stable.

Theorem 6.1. The energy of the system (6.5)-(6.8) satisfies the identity
$$E(t)=E(0)\quad\text{for all }t\ge 0.\qquad(6.10)$$
Proof. Using the equation (6.5) and Green's identity (2.26), we derive that
$$\frac{dE}{dt}=\int_\Omega\Big[\frac{\partial u}{\partial t}\frac{\partial^2u}{\partial t^2}+c^2\nabla u\cdot\nabla\frac{\partial u}{\partial t}\Big]dV=\int_\Omega\Big[c^2\frac{\partial u}{\partial t}\nabla^2u+c^2\nabla u\cdot\nabla\frac{\partial u}{\partial t}\Big]dV$$
$$=\int_\Gamma c^2\frac{\partial u}{\partial t}\frac{\partial u}{\partial n}\,dS-\int_\Omega c^2\nabla u\cdot\nabla\frac{\partial u}{\partial t}\,dV+\int_\Omega c^2\nabla u\cdot\nabla\frac{\partial u}{\partial t}\,dV=0,$$
since the boundary integral vanishes: $\partial u/\partial t=0$ on $\Gamma_*(x^0)$ and $\partial u/\partial n=0$ on $\Gamma(x^0)$. So E(t) = E(0) for all t ≥ 0.
We can easily show that Theorem 5.2 also holds for the higher-dimensional wave equation.
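The conservation identity (6.10) can be checked numerically with an exact series solution. The one-dimensional sketch below is an illustration of my own (the amplitudes and evaluation times are arbitrary); the same conclusion holds in higher dimensions.

```python
import numpy as np

# Check E(t) = E(0) for the undamped wave equation on (0, L) with fixed ends,
# using the exact series u(x,t) = sum_n a_n cos(n c pi t / L) sin(n pi x / L).
c, L = 1.0, 1.0
a = np.array([1.0, 0.5, 0.25])                # arbitrary Fourier amplitudes
n = np.arange(1, len(a) + 1)
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def trapezoid(f):
    return 0.5 * np.sum(f[1:] + f[:-1]) * dx

def energy(t):
    ut = np.sum(-a[:, None] * (n[:, None]*c*np.pi/L) *
                np.sin(n[:, None]*c*np.pi*t/L) * np.sin(n[:, None]*np.pi*x/L), axis=0)
    ux = np.sum(a[:, None] * (n[:, None]*np.pi/L) *
                np.cos(n[:, None]*c*np.pi*t/L) * np.cos(n[:, None]*np.pi*x/L), axis=0)
    return 0.5 * trapezoid(ut**2 + c**2 * ux**2)

for t in [0.0, 0.3, 1.7, 4.9]:
    print(f"t = {t:4.1f}   E(t) = {energy(t):.6f}")
```

All printed values agree up to quadrature error, confirming that without feedback the energy is conserved and the equilibrium is not asymptotically stable.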
6.2 Linear Boundary Feedback Stabilization
Consider the wave equation with a boundary control
$$\frac{\partial^2u}{\partial t^2}=c^2\nabla^2u\quad\text{in }\Omega\times(0,\infty),\qquad(6.11)$$
$$u=0\quad\text{on }\Gamma_*(x^0)\times(0,\infty),\qquad(6.12)$$
$$c^2\frac{\partial u}{\partial n}=\phi\quad\text{on }\Gamma(x^0)\times(0,\infty),\qquad(6.13)$$
$$u(x,0)=u_0(x),\qquad\frac{\partial u}{\partial t}(x,0)=u_1(x)\quad\text{in }\Omega,\qquad(6.14)$$
where φ = φ(x,t) is a control to be designed. In this control system, the state variables are u and ∂u/∂t, and the vertical component of the tensile force is controlled. Mathematically, a boundary control of this kind is called a Neumann boundary control. Since the equilibrium 0 is stable but not exponentially stable, we are interested in stabilizing it exponentially. If there is a feedback control $\phi=F(u,u_t)$ such that the closed-loop system is exponentially stable, we say that the wave equation is exponentially stabilizable by boundary feedback.
To design a feedback control, we examine the derivative of the energy
$$\frac{dE}{dt}=\int_\Omega\Big[\frac{\partial u}{\partial t}\frac{\partial^2u}{\partial t^2}+c^2\nabla u\cdot\nabla\frac{\partial u}{\partial t}\Big]dV=\int_\Omega\Big[c^2\frac{\partial u}{\partial t}\nabla^2u+c^2\nabla u\cdot\nabla\frac{\partial u}{\partial t}\Big]dV=\int_\Gamma c^2\frac{\partial u}{\partial n}\frac{\partial u}{\partial t}\,d\Gamma=\int_\Gamma\phi\,\chi_{\Gamma(x^0)}\frac{\partial u}{\partial t}\,d\Gamma.$$
This leads us to take
$$\phi=-k\,m\cdot n\,\frac{\partial u}{\partial t}\qquad(6.15)$$
so that
$$\frac{dE}{dt}=-k\int_{\Gamma(x^0)}m\cdot n\,\Big|\frac{\partial u}{\partial t}\Big|^2d\Gamma\le 0,\qquad(6.16)$$
where k is a positive constant. Then we have designed the feedback control $\phi=F(u,u_t)=-k\,m\cdot n\,\frac{\partial u}{\partial t}\big|_{\Gamma(x^0)}$. With this feedback control, the problem (6.11)-(6.14) becomes a closed-loop system:
$$\frac{\partial^2u}{\partial t^2}=c^2\nabla^2u\quad\text{in }\Omega\times(0,\infty),\qquad(6.17)$$
$$u=0\quad\text{on }\Gamma_*(x^0)\times(0,\infty),\qquad(6.18)$$
(6.17) (6.18)
∂u ∂u = −km · n on Γ (x0 ) × (0, ∞), ∂n ∂t ∂u u(x, 0) = u0 (x), (x, 0) = u1 (x) in Ω . ∂t c2
(6.19) (6.20)
This feedback control can be regarded as an output feedback control since only the velocity on part of the boundary is used for feedback. Evidently, the above energy functional E is a Lyapunov functional. The procedure of designing a feedback controller so that the energy functional becomes a Lyapunov functional is called a Lyapunov design. We now show that the system is exponentially stable. To this end, we need Poincaré's inequality.

Lemma 6.1. (Poincaré's inequality) There exists a positive constant M > 0 such that
$$\int_\Omega|u|^2\,dV\le M\int_\Omega|\nabla u|^2\,dV\qquad(6.21)$$
holds for any $u\in\{u\in H^1(\Omega):\ u=0\ \text{on }\Gamma_*(x^0)\}$.
The proof of this lemma is referred to [30]. Using Poincar´e’s inequality, we can readily prove the energy norm is equivalent to the usual norm in {u ∈ H 1 (Ω ) | u = 0 on Γ∗ (x0 )}. Lemma 6.2. There exists a positive constant M > 0 such that
M
Ω
(u2 + c2 |∇u|2 )dV ≤ ≤
c2 |∇u|2 dx
Ω
Ω
(u2 + c2 |∇u|2 )dV
(6.22)
holds for any u ∈ {u ∈ H 1 (Ω ) | u = 0 on Γ∗ (x0 )}. We use the perturbed energy method to establish the exponential stability of (6.17)-(6.20). So we construct the following perturbed energy functional Eδ F(t) =
Ω
[2ut m · ∇u + (n − 1)ut u]dV,
Eδ (t) = E(t) + δ F(t),
(6.23) (6.24)
where δ is a positive constant. How is this perturbed energy constructed? Here is the idea. To achieve the exponential stability, the perturbed energy should be constructed such that Eδ (t) = E (t) + δ F (t) ≤ −δ E(t). Then F should be constructed such that F contains −E(t). In the proof below, we will see that
F (t) ≤ −E(t) + C
Γ (x0 )
m · n|ut |2 dS.
where C is a positive constant. The positive term C Γ (x0 ) m · n|ut |2 dS can be cancelled by E (t) by making δ sufficiently small and then we obtain the desired differential inequality. The reason why m · ∇u is chosen in F will be further explained in the proof of Theorem 6.3 below. We first show that, by choosing δ sufficiently small, Eδ and E are equivalent. Lemma 6.3. There exists a constant C > 0 such that (1 − δ C)E(t) ≤ Eδ (t) ≤ (1 + δ C)E(t).
(6.25)
Proof. Using Poincar´e’s inequality, we deduce that there exists a constant C > 0 such that
[2ut m · ∇u + (n − 1)ut u]dV ≤ C [|ut |2 + |∇u|2 + |u|2]dV
Ω
Ω ≤C
Ω
[|ut |2 + c2 |∇u|2 ]dV
≤ CE(t),
(6.26)
where the constant C changes from line to line. Then (6.25) follows from (6.26). To prove the exponential stability, we need the trace theorem. Theorem 6.2. Let Ω be a bounded domain in Rn with C1 boundary. Then the operator γ : H 1 (Ω ) → L2 (Γ ) defined by
γ (u) = u|Γ is a linear continuous operator. The proof of this theorem is referred to [73, p. 41]. This theorem implies that there exists a constant C > 0 such that
Γ
|u(x)|2 dS ≤ C
' Ω
( |u(x)|2 + |∇u(x)|2 dV
for all u ∈ H 1 (Ω ).
To better understand this inequality, we consider the case Ω = (0, 1). Since f (0) = f (x) −
x 0
f (t)dt,
f (1) = f (x) +
1 x
it follows that Γ
| f (x)|2 dS = f 2 (0) + f 2 (1) =
1' 0
( f 2 (0) + f 2 (1) dx
f (t)dt,
(6.27)
=
1 0
≤C
1& 0
f (x) −
x
2
f (t)dt
0
+
f (x) +
1 x
2
f (t)dt
dx
) f 2 (x) + ( f (x))2 dx.
Theorem 6.3. Assume that $\Gamma_*(x^0)$ has a non-empty interior and
$$\Gamma_*(x^0)\cap\overline{\Gamma}(x^0)=\emptyset.\qquad(6.28)$$
Then there exist constants M, σ > 0 such that the solution of (6.17)-(6.20) satisfies
$$E(t)\le ME(0)e^{-\sigma t}\quad\text{for }t\ge 0.\qquad(6.29)$$
Proof. By (6.25), it suffices to show Eδ (t) = E (t) + δ F (t) ≤ −δ E(t) to prove (6.29). Thus we need to estimate F . By (6.23), we have
F (t) =
Ω
+
2utt m · ∇udV +
Ω
Ω
2ut m · ∇ut dV
(n − 1)uutt dV + (n − 1)
Ω
|ut |2 dV.
(6.30)
We now estimate every integral in (6.30) as follows. Since u = 0 on Γ∗ (x0 ), we have n=
∇u , ∇u
∂u = ∇u · n = ∇u. ∂n
It then follows that
∂u ∂u nk on Γ∗ (x0 ), = ∇unk = ∂ xk ∂n
(6.31)
where n = (n1 · · · , nn ). Using Green’s identity (2.26) and integration by parts (2.25), we derive that
2 =2
Ω
Ω
utt m · ∇udV
(use (6.17))
c2 ∇2 um · ∇udV
(use (6.21))
∂u m · ∇udS − 2 c2 ∇u · ∇ (m · ∇u)dV ∂ n Γ Ω n n ∂u ∂ ∂u 2 ∂u m · ∇udS − 2 c2 ∑ ∑ (x j − x0j ) dV = 2c ∂n ∂xj Γ Ω i=1 j=1 ∂ xi ∂ xi
=
=
2c2
Γ
2c2
∂u m · ∇udS − ∂n
Ω
' ( c2 2|∇u|2 + m · ∇(|∇u|2) dV (use (2.25))
" m · ∇u − m · n|∇u|2 dS + (n − 2)c2 |∇u|2 dV ∂n Γ Ω ∂u 2 2 2 m · ∇u − m · n|∇u| dS =c ∂n Γ (x0 )
2
∂u 2 +c m · n
dS + (n − 2)c2 |∇u|2 dV. ∂n Γ (x0 ) Ω
= c2
! ∂u
2
(6.32)
∗
This equation can be obtained by multiplying (6.17) by the multiplier m · ∇u and then integrating over Ω . Thus the construction of F(t) is motivated by the multiplier technique. Since m · n ≤ 0 on Γ∗ (x0 ), we have
2
∂u m · n
dS ≤ 0. ∂n Γ∗ (x0 )
c2
(6.33)
It therefore follows from (6.32) that
2
Ω
utt m · ∇udV ≤ c2
∂u 2 m · ∇u − m · n|∇u|2 dS ∂n Γ (x0 )
+(n − 2)c
2
Ω
|∇u|2 dV.
(6.34)
For the second integral in (6.30), we derive from (2.25) that
2
Ω
ut m · ∇ut dV =
Ω
= −n
m · ∇(|ut |2 )dV
Ω
(6.35)
|ut (t)|2 dV +
Γ (x0 )
m · n|ut |2 dS.
(6.36)
From this equation we can see how the multiplier m · ∇u is critical for producing the negative term −n Ω |ut (t)|2 dV , which is necessary for producing −E(t). Using (6.17), we deduce (n − 1)
Ω
uutt dV = (n − 1) = (n − 1)
Ω
uc2 ∇2 udV
Γ (x0 )
−(n − 1)c2
c2
Ω
∂u udS ∂n
|∇u|2 dV.
(6.37)
This equation tells why the term uut is included in F to produce the negative term −(n − 1)c2 Ω |∇u|2 dV , which is necessary for producing −E(t). It therefore follows from (6.30) and (6.34)-(6.37) that
∂u 2 2 2 m · ∇u − m · n|∇u| dS F (t) ≤ c ∂n Γ (x0 ) ∂u + m · n|ut |2 dS + (n − 1) c2 udS 0 0 ∂n Γ (x ) Γ (x )
−c2
Ω
|∇u|2 dV −
Ω
|ut |2 dV.
(6.38)
We now estimate the boundary integrals in the above inequality. Using Young’s inequality (2.4) and the boundary control (6.19), we deduce that ∂u 2 2 c 2 m · ∇u − m · n|∇u| dS (use (6.19)) ∂n Γ (x0 ) ' ( = −2km · nut m · ∇u − m · n|∇u|2 dS Γ (x0 )
(use Young’s inequality (2.4)) ≤C
Γ (x0 )
− =C
m · n|ut | dS +
2
Γ (x0 )
Γ (x0 )
(6.39)
m · n|∇u| dS 2
Γ (x0 )
m · n|∇u|2 dS
(6.40)
m · n|ut |2 dS.
(6.41)
Hereafter C denotes a generic positive constant that may change from line to line. Moreover, by Young’s inequality, the boundary control (6.19), and the trace theorem (the trace inequality (6.27)), we deduce that (n − 1)
Γ (x0 )
= −(n − 1) ≤C ≤C ≤C
Γ (x0 )
c2
Γ (x0 )
∂u udS ∂n
km · nut udS
m · n|ut |2 dS + ω
Γ (x0 )
m · n|ut | dS + ω M 2
Γ (x0 )
Γ (x0 )
m · n|ut |2 dS +
where ω is chosen such that ω M =
c2 2.
c2 2
Ω
Ω
|u|2 dS
|∇u|2 dV
|∇u|2 dV,
(6.42)
It therefore follows from (6.38)-(6.42) that
F (t) ≤ −E(t) + C
Γ (x0 )
m · n|ut |2 dS.
(6.43)
We then derive from (6.16) that Eδ (t) = E (t) + δ F (t) ≤ −δ E(t) + (δ C − k)
Γ (x0 )
m · n|ut |2 dS
≤ −δ E(t) by taking δ < k/C. It then follows from (6.25) that Eδ (t) ≤ −δ (1 − δ C)Eδ (t).
(6.44)
Using the differential inequality (4.13), we then derive that Eδ (t) ≤ Eδ (0) exp[−δ (1 − δ C)t].
(6.45)
Then (6.29) follows from (6.25). If the position of a string or membrane can be controlled, we can rewrite the controller (6.19) as follows. Integrating (6.19) from 0 to t, we obtain u(x,t) = u0 (x) −
c2 ∂ km · n ∂ n
t
u(x, s)ds.
(6.46)
0
This is an integral controller since it involves the integral 0t u(x, s)ds. The boundary control of this kind is called Dirichlet boundary control. If a term au with a ≥ 0 is added to the equation (6.17), Theorem 6.3 still holds. For details, we refer to [47, Chapter 8]. The geometrical condition (6.28) can be removed by using microlocal analysis. This is beyond the scope of the text and is referred to [5, 63].
Exercises 6.2 1. Define the operator A on the Hilbert space H = {(u, v) ∈ H 1 ((0, 1) × (0, 1)) × L2 ((0, 1) × (0, 1)) | u(0, y) = u(x, 0) = u(x, 1) = 0} by 0 I A= c2 ∇2 0 with the domain D(A) = {(u, v) ∈ H 2 ((0, 1)×(0, 1))×H 1((0, 1)×(0, 1)) | u(0, y) = u(x, 0) = u(x, 1) = v(0, y) = v(x, 0) = v(x, 1) = 0, c2 ∂∂ ux (1, y) = −kv(1, y)}, where I denotes the identity operator. Find eigenvalues and eigenfunctions of A.
6.3 Nonlinear Boundary Feedback Stabilization We now design a nonlinear boundary feedback control for the wave equation. As in the case of linear control, we look for the velocity feedback control
∂ 2u = c2 ∇2 u in Ω × (0, ∞), ∂ t2 u = 0 on Γ∗ (x0 ) × (0, ∞), ∂u ∂u = −m · n f on Γ (x0 ) × (0, ∞), c2 ∂n ∂t ∂u (x, 0) = u1 (x) in Ω , u(x, 0) = u0 (x), ∂t
(6.47) (6.48) (6.49) (6.50)
where f is a function to be designed.

Theorem 6.4. Suppose that the function $f\in C(\mathbb{R})$ satisfies the following conditions:
$$f(0)=0,\qquad(6.51)$$
$$|f(u_1)-f(u_2)|\le k_1|u_1-u_2|^q\quad\text{for }|u_1-u_2|\le 1,\qquad(6.52)$$
$$|f(u_1)-f(u_2)|\le k_1|u_1-u_2|\quad\text{for }|u_1-u_2|\ge 1,\qquad(6.53)$$
$$f(u)\cdot u\ge k_2|u|^{p+1}\quad\text{for }|u|\le 1,\qquad(6.54)$$
$$f(u)\cdot u\ge k_2|u|^{2}\quad\text{for }|u|\ge 1,\qquad(6.55)$$
$$0\le[f(u_1)-f(u_2)](u_1-u_2)\quad\text{for }u_1,u_2\in\mathbb{R},\qquad(6.56)$$
for some constants $k_1,k_2>0$ and p, q with $0<q\le 1$.
(1) If p = q = 1, there exist constants M ≥ 1 and ω > 0 such that
$$E(t)\le ME(0)e^{-\omega t},\quad\forall t\ge 0.\qquad(6.57)$$
(2) If p + 1 > 2q, then
$$E(t)\le\frac{E(0)+CE^{\sigma+1}(0)}{[1+F(E(0))t]^{1/\sigma}},\quad\forall t\ge 0,\qquad(6.58)$$
where C is a positive constant, F is a positive function with F(0) = 0, and σ is given by
$$\sigma=\frac{p+1-2q}{2q}.$$
(6.59)
Proof. As in the case of the proof of Theorem 6.3, we define a perturbed energy function by Eδ ,σ (t) = E(t) + δ [E(t)]σ F(t), (6.60)
where F is defined by (6.23). We now show that Eδ ,σ satisfies Eδ ,σ (t) ≤ −α Eδσ,+1 σ (t),
(6.61)
where α is a positive constant. Note that calculations of F (defined by (6.23)) in Theorem 6.3 are still valid. We then have the estimate for F ∂u 2 2 2 m · ∇u − m · n|∇u| dS F (t) ≤ c ∂n Γ (x0 ) ∂u 2 + m · n|ut | dS + (n − 1) c2 udS ∂n Γ (x0 ) Γ (x0 ) −c2
Ω
|∇u|2 dV −
Ω
|ut |2 dV.
(6.62)
We now estimate the boundary integrals in the above inequality. Using Young’s inequality and the boundary control (6.49), we deduce that ∂u 2 m · ∇u − m · n|∇u|2 dS (use (6.49)) c2 ∂n Γ (x0 ) ' ( = −2m · n f (ut )m · ∇u − m · n|∇u|2 dS Γ (x0 )
(use Young’s inequality (2.4)) ≤C
Γ (x0 )
− =C
Γ (x0 )
Γ (x0 )
m · n f 2 (ut )dS +
Γ (x0 )
m · n|∇u|2 dS
m · n|∇u|2 dS m · n f 2 (ut )dS.
(6.63)
Hereafter C denotes a generic positive constant that may change from line to line. Moreover, by Young’s inequality, the boundary control (6.49), and the trace theorem (the trace inequality (6.27)), we deduce that (n − 1)
Γ (x0 )
= −(n − 1) ≤C ≤C ≤C
Γ (x0 )
Γ (x0 )
Γ (x0 )
c2
Γ (x0 )
∂u udS ∂n
m · n f (ut )udS
m · n f 2 (ut )dS + ω
Γ (x0 )
m · n f 2 (ut )dS + ω M m · n f 2 (ut )dS +
c2 2
Ω
Ω
|u|2 dS
|∇u|2 dV
|∇u|2 dV,
(6.64)
where ω is chosen such that $\omega M=\dfrac{c^2}{2}$. It therefore follows from (6.62)-(6.64) that
$$F'(t)\le-E(t)+C\int_{\Gamma(x^0)}m\cdot n\,\big(|u_t|^2+f^2(u_t)\big)\,dS.\qquad(6.65)$$
As in the proof of (6.16), we can show that E (t) = −
Γ (x0 )
m · n f (ut )ut dS.
(6.66)
It then follows that d σ (E (t)F(t)) dt = E (t) + δ σ E σ −1(t)E (t)F(t) + δ E σ (t)F (t) (use (6.26) for the estimate of F)
Eδ ,σ (t) = E (t) + δ
≤ [1 − δ σ CE σ (t)] E (t) + δ E σ (t)F (t) (use (6.65) and (6.66))
≤ −δ E σ +1 (t) − [1 − δ σ CE σ (t)] +δ CE σ (t)
Γ (x0 )
Γ (x0 )
m · n f (ut )ut dS
m · n(|ut |2 + f 2 (ut ))dS.
(6.67)
We now distinguish the cases p = q = 1 and p + 1 > 2q. If p = q = 1, we take σ = 0. It follows from (6.67) that Eδ ,σ (t)
≤ −δ E(t) −
Γ (x0 )
m · n f (ut )ut dS + δ C
(use (6.51)-(6.56)) ≤ −δ E(t) + [δ C − k2 ]
Γ (x0 )
Γ (x0 )
m · n(|ut |2 + f 2 (ut ))dS
m · n|ut |2 dS
≤ −δ E(t) ( use (6.25)) δ E (t) ≤− 1 + δ C δ ,σ
(6.68)
if δ is sufficiently small. Solving this differential inequality and using (6.25), we obtain 1 E (t) 1 − δ C δ ,σ 1 E (0)e−δ t/(1+δ C) ≤ 1 − δ C δ ,σ 1 + δC E(0)e−δ t/(1+δ C) , ≤ 1 − δC
E(t) ≤
which proves (6.57).
(6.69)
We then consider the case where p + 1 > 2q. In the following, C denotes a generic constant that may change from line to line. By (6.52), we deduce that CE σ (t) ≤ CE σ (t)
Γ (x0 )∩[|ut |≤1]
Γ (x0 )∩[|ut |≤1]
m · n(|ut |2 + f 2 (ut ))dS m · n(|ut |2 + k12 |ut |2q )dS
(note that |ut |2 ≤ |ut |2q since q ≤ 1) ≤ CE σ (t)
Γ (x0 )∩[|ut |≤1]
m · n|ut |2q dS
(use Young’s inequality (2.4) with b > 0) ≤
p + 1 − 2q (p+1)/(p+1−2q) σ (p+1)/(p+1−2q) b E p+1 (p+1)/(2q) 2q 2q C + m · n|ut | dS (p + 1)b(p+1)/2q Γ (x0 )∩[|ut |≤1] (use H¨older’s inequality (2.5))
≤
p + 1 − 2q (p+1)/(p+1−2q) σ (p+1)/(p+1−2q) b E p+1 +
2qC (p + 1)b(p+1)/(2q)
Γ (x0 )∩[|ut |≤1]
m · n|ut | p+1 dS
(use (6.54)) ≤
p + 1 − 2q (p+1)/(p+1−2q) σ (p+1)/(p+1−2q) b E p+1 +
≤
2qC (p + 1)b(p+1)/(2q)
Γ (x0 )∩[|ut |≤1]
1 σ (p+1)/(p+1−2q) E +C 2
if we take
b=
m · n f (ut )ut dS
Γ (x0 )∩[|ut |≤1]
p+1 2(p + 1 − 2q)
m · n f (ut )ut dS
(p+1−2q)/(p+1)
.
It therefore follows from (6.67) and (6.70) that Eδ ,σ (t) ≤ −δ E σ +1 (t) − [1 − δ σ CE σ (t)] +δ CE σ (t)
Γ (x0 )
Γ (x0 )
m · n f (ut )ut dS
m · n(|ut |2 + f 2 (ut ))dS
(6.70)
= −δ E σ +1(t) − [1 − δ σ CE σ (t)] − [1 − δ σ CE σ (t)] +δ CE σ (t) +δ CE σ (t)
Γ (x0 )∩[|ut |≤1]
m · n f (ut )ut dS
Γ (x0 )∩[|ut |≥1]
Γ (x0 )∩[|ut |≤1]
Γ (x0 )∩[|ut |≥1]
m · n f (ut )ut dS
m · n(|ut |2 + f 2 (ut ))dS m · n(|ut |2 + f 2 (ut ))dS
(use (6.53) and (6.55) to derive |ut |2 + f 2 (ut ) ≤ C f (ut )ut for |ut | ≥ 1) ≤ −δ E σ +1(t) − [1 − δ σ CE σ (t)] − [1 − δ CE σ (t)] +δ CE σ (t)
Γ (x0 )∩[|ut |≥1]
Γ (x0 )∩[|ut |≤1]
Γ (x0 )∩[|ut |≤1]
m · n f (ut )ut dS
m · n f (ut )ut dS
m · n(|ut |2 + f 2 (ut ))dS
(use(6.70))
δ ≤ −δ E σ +1(t) + E σ (p+1)/(p+1−2q) 2 − [1 − δ CE σ (t)] − [1 − δ CE σ (t)]
Γ (x0 )∩[|ut |≤1]
Γ (x0 )∩[|ut |≥1]
We now choose σ so that
σ +1 = Then
σ=
m · n f (ut )ut dS m · n f (ut )ut dS.
(6.71)
σ (p + 1) . p + 1 − 2q
p + 1 − 2q . 2q
Taking δ small enough, we derive from (6.71) that
δ Eδ ,σ (t) ≤ − E σ +1 (t). 2
(6.72)
By (6.26), noting that E(t) ≤ E(0), we derive that Eδ ,σ (t)(t) ≤ [1 + δ CE σ (0)]E(t).
(6.73)
It therefore follows from (6.72) that Eδ ,σ (t) ≤ −
δ2 E σ +1 (t). 2[1 + δ CE σ (0)]σ +1 δ ,σ
(6.74)
Solving this differential inequality, we obtain Eδ ,σ (t) ≤ (Eδ ,σ (0))−σ +
σδt 2[1 + δ CE σ (0)]σ +1
≤ (E(0) + δ CE σ +1(0))−σ +
−1/σ
σδt 2[1 + δ CE σ (0)]σ +1
−1/σ
,
which, combined with (6.25), implies (6.58). Example 6.1. Let f (u) = ku, where k is a positive constant. This is a linear feedback control and all the assumptions of Theorem 6.4 are satisfied. Example 6.2. We can easily verify that the functions 3 u3 u , |u| ≤ 1, f1 (u) = , f (u) = 2 u, |u| ≥ 1. 1 + u2 satisfy all the assumptions of Theorem 6.4 with p = 3 and q = 1.
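The structural conditions of Theorem 6.4 can be spot-checked numerically for the first function in Example 6.2. The Python sketch below is my own illustration (the sampling ranges and the constant $k_2=1/2$ are choices made for the check, not values stated in the text).

```python
import numpy as np

# Spot-check that f(u) = u^3 / (1 + u^2) satisfies the conditions of
# Theorem 6.4 with p = 3, q = 1 (cf. Example 6.2):
#   f(u)*u >= k2 |u|^(p+1) for |u| <= 1,   f(u)*u >= k2 |u|^2 for |u| >= 1,
# with k2 = 1/2, and that f is nondecreasing.
f = lambda u: u**3 / (1.0 + u**2)

u_small = np.linspace(-1.0, 1.0, 4001)
u_large = np.linspace(1.0, 50.0, 4001)
print("small |u|:", np.all(f(u_small)*u_small >= 0.5*np.abs(u_small)**4 - 1e-12))
print("large |u|:", np.all(f(u_large)*u_large >= 0.5*u_large**2 - 1e-12))
u = np.linspace(-50.0, 50.0, 100001)
print("monotone :", np.all(np.diff(f(u)) >= 0))
```

All three checks print True, since $f(u)u=u^4/(1+u^2)\ge u^4/2$ for $|u|\le 1$, $f(u)u\ge u^2/2$ for $|u|\ge 1$, and $f'(u)=(3u^2+u^4)/(1+u^2)^2\ge 0$.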
Exercises 6.3 1. Show that the function
u3 1 + u2 satisfies all the conditions of Theorem 6.4 with p = 3 and q = 1. 2. Let E : [0, ∞) → [0, ∞) be a non-increasing function and assume that there exist two constants α > 0 and T > 0 such that f (u) =
∞ t
Show that
E α +1 (s)ds ≤ T E(0)α E(t),
T + αt E(t) ≤ E(0) T + αT
−1/α
,
t ≥ 0.
t ≥ T.
(Hint: Define f (t) = t∞ E α +1 (s)ds and differentiate f to establish a differential inequality for f . Then solve the inequality and use f (t) ≥ tT +(α +1)t E α +1 (s)ds ≥ (T + α t)E(T + (α + 1)t)α +1). 3. Consider the wave equation with a nonlinear boundary velocity feedback control utt = c2 uxx u(0,t) = 0,
in (0, L) × (0, ∞),
c2 ux (L,t)
u(x, 0) = u0 (x),
(6.75)
kut3 (L,t) , =− 1 + ut2(L,t)
ut (x, 0) = u1 (x),
t ≥ 0,
x ∈ (0, L),
(6.76) (6.77)
where k is a positive constant. Define the energy E(t) by E(t) =
1 2
L 0
a. Show that
|ut (x,t)|2 + c2 |ux (x,t)|2 dx.
kut4 (L,t) dE =− . dt 1 + ut2(L,t)
b. Multiply the equation (6.75) by 2xux E(t) and integrate over (0, L) × (t, T )) to show that T T L T L 2 2 E (s)ds = −2 E(s) xux us dx + 2 E (s) xux us dxds t
+L
T t
0
&
t
t
0
) E(s) c2 u2x (L, s) + u2s (L, s) ds.
c. Assume that E(0) ≤ 1. Prove that there exists a constant C > 0 such that
T
L
xux us dx ≤ CE(t),
E(s)
0 t
T
L
E (s) xux us dxds
≤ CE(t).
t
0
d. Assume that E(0) ≤ 1 and let I1 = {s ∈ [t, T ] : |us (L, s)| > 1}. Prove that there exists a constant C > 0 such that
& )
E(s) c2 u2 (L, s) + u2 (L, s) ds ≤ CE(t). x s
I
1
e. Assume that E(0) ≤ 1 and let I2 = {s ∈ [t, T ] : |us (L, s)| ≤ 1}. Prove that for every ε > 0 there exists a constant C(ε ) > 0 such that
T
& )
E(s) c2 u2x (L, s) + u2s (L, s) ds ≤ C(ε )E(t) + ε E 2 (s)ds.
t
I2
f. Show that there exists a positive constant M such that E(t) ≤
M . 1+t
6.4 Observability Inequalities To address the problems of interior feedback stabilization, we need observability inequalities of the wave equation
∂ 2v = c2 ∇2 v in Ω × (0, T ), ∂ t2
(6.78)
on ∂ Ω × (0, T ), ∂v (x, 0) = v1 (x) in Ω . v(x, 0) = v0 (x), ∂t v=0
(6.79) (6.80)
We start by establishing identities. In what follows, we denote Q = Ω × (0, T ) and Σ = Γ × (0, T ) for some T > 0. Lemma 6.4. The solution v of (6.78)-(6.80) satisfies
∂ v 2 ∂v
T 2 2
v(t), (t) =
∂ t − c |∇v| dV dt, ∂t 0 Q
(6.81)
∂v ∂v v(t), (t) = v(t) (t)dV. ∂t ∂t Ω
where
Proof. Multiplying the equation (6.78) by v and then integrating over Q by parts, we obtain
2 ∂v ∂v
T v(t), (t) −
dV dt = − c2 |∇v|2 dV dt, ∂t 0 Q ∂t Q which gives (6.81). Lemma 6.5. Let q = (qk ) be a vector field in [C1 (Ω¯ )]n . Suppose v is the solution of (6.78)-(6.80). Then the following identity holds:
2
∂v c q · n
d Σ ∂n Σ n n ∂v ∂v ∂qj ∂v
T (t), q · ∇v(t) + ∑ ∑ c2 = dV dt ∂t ∂ xi ∂ xi ∂ x j 0 i=1 j=1 Q
∂ v 2 1 2 2 div(q)
− c |∇v| dV dt, + 2 Q ∂t 1 2
where
2
(6.82)
∂v ∂v (t), q · ∇v(t) = (t)q · ∇v(t)dV. ∂t Ω ∂t
Proof. Multiplying (6.78) by q · ∇v and integrating over Q yields
∂ 2v q · ∇vdV dt = 2 Q ∂t
Q
c2 ∇2 vq · ∇vdV dt.
(6.83)
Integration by parts gives
∂ 2v q · ∇vdV dt = 2 Q ∂t
2
∂v ∂v
T 1 (t), q · ∇v + div(q)
dV dt ∂t 2 Q ∂t 0
(6.84)
and Q
=
Σ
c2 ∇2 vq · ∇vdV dt n n ∂v q · ∇vd Σ − ∑ ∑ ∂n i=1 j=1
c2
n
n
−∑ ∑
Σ
−
c2
1 2
Q
∂v ∂qj ∂v V dt ∂ xi ∂ xi ∂ x j
∂v qj V dt ∂ xi ∂ xi ∂ x j
c2
n n ∂v q · ∇vd Σ − ∑ ∑ ∂n i=1 j=1
Σ
c2
∂ 2v
i=1 j=1 Q
=
c2 q · n|∇v|2 d Σ +
1 2
Q
c2
∂v ∂qj ∂v V dt ∂ xi ∂ xi ∂ x j
c2 div(q)|∇v|2V dt.
Q
Since v = 0 on Γ , we have n = ∇v/|∇v| and then |∇v| = derive from the above equation that Q
=
1 2 +
∂v ∂n.
Using this result, we
c2 ∇2 vq · ∇vdV dt
2 n n
∂v ∂v ∂qj ∂v c2 q · n
d Σ − ∑ ∑ c2 V dt ∂n ∂ xi ∂ xi ∂ x j Σ Q i=1 j=1
1 2
c2 div(q)|∇v|2V dt.
(6.85)
Q
Then (6.82) follows from (6.84) and (6.85). From (6.10) and (6.82) we can easily derive the following lemma. Lemma 6.6. The solution v of (6.78)-(6.80) satisfies
2
c d Σ ≤ CE(0), ∂n Σ 2 ∂v
(6.86)
where E is the energy function defined in (6.9) and C is a positive constant independent of initial conditions. Proof. Taking q = n in (6.82), we obtain 1 2 =
Σ
2
2 ∂v
c dΣ ∂n
n n ∂v ∂v ∂nj ∂v
T (t), n · ∇v(t) + ∑ ∑ c2 dV dt ∂t ∂ xi ∂ xi ∂ x j 0 i=1 j=1 Q
∂ v 2 1 div(n)
− c2|∇v|2 dV dt + 2 Q ∂t
≤ C(E(0) + E(T )) + C
T
E(t)dt 0
≤ CE(0). We introduce an important constant used in the control theory of the wave equation:
1/2
n
0 0 2 R(x ) = max |m(x)| = max ∑ (xk − xk ) .
x∈Ω¯ x∈Ω¯ k=1 ) Lemma 6.7. If T > 2R(x c , then there exists a positive constant C such that any solutions of (6.78)-(6.80) satisfy 0
C(v0 , v1 )2H 1 (Ω )×L2 (Ω ) ≤ 0
2
∂v
(x,t) d Σ
Γ (x0 ) ∂ n
T 0
(6.87)
for any initial conditions (v0 , v1 ) ∈ H01 (Ω ) × L2 (Ω ). Proof. Taking q = m(x) in (6.82) and using (6.10) and (6.81), we obtain 1 2
2
∂v ∂v
T (t), m · ∇v(t) + c2 |∇v|2 dV dt c2 m · n
d Σ = ∂n ∂t 0 Σ Q 2
∂v n
− c2 |∇v|2 dV dt + 2 Q ∂t ∂v
T (t), m · ∇v(t) = ∂t 0 2
∂v 1 2 2
+ c |∇v| dV dt + 2 Q ∂t
∂ v 2 n−1 2 2
+
∂ t − c |∇v| dV dt 2 Q n−1 ∂v
T (t), m · ∇v(t) + v(t) = ∂t 2 0 +T E(0).
Furthermore, n−1 ∂v (t), m · ∇v(t) + v(t) ∂t 2
2
R(x0 )
∂ v (t) dV ≤
2 Ω ∂t
2
n−1 1
v(t)
dV m · ∇v(t) + +
0 2R(x ) Ω 2
(6.88)
=
R(x0 ) 2c
2
∂ v (t) dV
Ω ∂t
n−1 2 2 |m · ∇v(t)| + |v(t)| + (n − 1)v(t)m · ∇v(t) dV 2 Ω
2
R(x0 )
∂ v (t) dV =
2c Ω ∂ t n(n − 1) c n−1 2 2 2 2 |v(t)| dV |m · ∇v(t)| + |v(t)| − + 2R(x0) Ω 2 2
2
R(x0 )
∂ v (t) dV + c ≤ |m · ∇v(t)|2 dV 2c Ω ∂ t 2R(x0) Ω
R(x0 ) cR(x0 ) ∂ v
2
(t) ≤ dV + |∇v(t)|2 dV 2c Ω ∂ t 2 Ω c + 2R(x0)
≤
2
R(x0 ) E(0). c
It then follows from (6.88) that
2
2
∂v
∂v 1 1 2 2
c m · n dΣ ≥ c m · n
d Σ 2 Σ (x0 ) ∂n 2 Σ ∂n
∂v
T
(t), m · ∇v(t)
≥ T E(0) − ∂t 0 0 2R(x ) E(0). ≥ T− c The inequality (6.87) is called a boundary observability inequality. The inequality implies that if ∂∂ nv (x,t) = 0 on Γ (x0 ) during the period of time T , then the initial condition (v0 , v1 ) = (0, 0). Thus initial states of the system are uniquely determined by its output ∂∂ nv on the boundary Γ (x0 ). That is, the system (6.78)-(6.80) is observable from the boundary Γ (x0 ) in time T . ) Lemma 6.8. Let ω be a neighborhood of Γ (x0 ). If T > 2R(x c , then there exists a constant M such that the solution of the system (6.78)-(6.80) satisfies that 0
E(0) ≤ M
T 0
ω
[|vt |2 + |∇v|2 ]dV dt.
Proof. Let α > 0 be such that T − 2α > such that E(0) ≤ M
By Lemma 6.7, there exists M > 0
2
∂v
(x,t) dSdt.
0 ∂ n Γ (x )
T −α α
2R(x0 ) c .
(6.89)
It then suffices to prove that
2
T
∂v
(x,t) dSdt ≤ M [|vt |2 + |∇v|2 ]dV dt.
0 Γ (x0 ) ∂ n ω
T −α α
This can be proved by using identity (6.82) with q = t(T − t)h(x), where h ∈ [C1 (Ω¯ )]n satisfies h · n = 1 on Γ (x0 ), h · n ≥ 0 on Γ , supp(h) ⊂ ω . In fact, it follows from (6.82) that
2
2
T −α T
∂v
∂v
(x,t) dSdt ≤ M
(x,t) dSdt h · n
∂n α 0 Γ (x0 ) ∂ n Γ (x0 ) ≤M
T
ω
0
[|vt |2 + |∇v|2 ]dV dt.
2R(x0 )
Lemma 6.9. If T > c , then there exists a constant M such that the solution of the system (6.78)-(6.80) satisfies that E(0) ≤ M
T 0
ω
[|vt |2 + |v|2]dV dt.
(6.90)
Proof. For any ε > 0, we construct the neighborhood Oε = ∪x∈Γ (x0 ) B(x, ε ),
ωε = Oε ∩ Ω ,
where B(x, ε ) denotes the ball of the radius ε at the center x. Let α > 0 be such that T − 2α >
2R(x0 ) c .
By (6.89), there exists M > 0 such that E(0) ≤ M
T −α α
ωε /2
[|vt |2 + |∇v|2 ]dV dt.
(6.91)
It suffices to prove that T −α α
ωε /2
|∇v| dV dt ≤ M 2
T 0
ωε
[|vt |2 + |v|2 ]dV dt.
For this, we construct φ ∈ W 1,∞ (Oε ) such that |∇φ |2 0 ≤ φ ≤ 1 in Oε , φ = 1 in ωε /2 , φ
L∞ (Oε )
< ∞.
(6.92)
Multiplying (6.5) by t(T − t)φ (x)v(x,t) and integrating over Ω × (0, T ), we obtain T
= ≥
[t(T − t)φ (x)|vt (x,t)|2 + (T − 2t)φ (x)v(x,t)vt (x,t)]dV dt
0
Ω
0
Ω
0
Ω 1 T
T T
c2 [t(T − t)φ (x)|∇v|2 + t(T − t)v∇φ (x) · ∇v]dVdt c2t(T − t)φ (x)|∇v|2 dV dt
c2t(T − t)φ (x)|∇v|2 dV dt 2 0 Ω |∇φ (x)| 2 1 T c2t(T − t) |v| dV dt − 2 0 Ω φ
−
≥
1 2 −
1 ≥ 2 −
T
Ω T 1 0
2
0
c2t(T − t)φ (x)|∇v|2 dV dt
Ω
c2t(T − t)
T −α α
1 2
T 0
Ω Ω
|∇φ (x)| 2 |v| dV dt φ
c2t(T − t)φ (x)|∇v|2 dV dt
c2t(T − t)
|∇φ (x)| 2 |v| dV dt, φ
which implies (6.92). The inequalities (6.89) and (6.90) are called interior observability inequalities.
Exercises 6.4 1. Consider the wave equation 2 ∂ 2v 2∂ v = c in Ω × (0, T ), ∂ t2 ∂ x2 v(0,t) = v(L,t) = 0 t ∈ (0, T ),
v(x, 0) = v0 (x),
∂v (x, 0) = v1 (x) in Ω . ∂t
Use the series solution v(x,t) = to prove:
! ncπ t ncπ t " nπ x + bn sin sin an cos L L L n=1 ∞
∑
a. If ω = [x0 − ε , x0 + ε ] ⊂ [0, L] and T ≥ 2L/c, then CE(0) ≤
2
dxdt, (x,t)
ω ∂t
T
∂v 0
where C is a positive constant. b. If T ≥ 2L/c, then C1 E(0) ≤
2
(L,t) dt ≤ C2 E(0),
∂x
T
∂v 0
where C1 and C2 are positive constants.
6.5 Linear Interior Feedback Stabilization Consider the wave equation with an interior control
∂ 2u = c2 ∇2 u + φ χω in Ω × (0, ∞), ∂ t2 u = 0 on Γ × (0, ∞), ∂u (x, 0) = u1 (x) in Ω , u(x, 0) = u0 (x), ∂t
(6.93) (6.94) (6.95)
where ω is a nonempty open subset of Ω , χω denotes the characteristic function of ω (equal to 1 if x ∈ ω and equal to 0 if x ∈ ω ) and φ = φ (x,t) is a control to be found. Physically, the control φ represents an external force exerted on the part ω of the domain. To find a feedback control to regulate the solution to the equilibrium zero, we compute the derivative of the energy ∂ u ∂ 2u ∂u dE 2 = dV + c ∇u · ∇ dt ∂ t ∂ t2 ∂t Ω ∂u 2 2 ∂u ∂u = c ∇ u + φ χω + c2 ∇u · ∇ dV ∂t ∂t ∂t Ω ∂u = φ χω dV (use Green’s identity (2.26)). Ω ∂t This leads us to take
φ = −k so that dE = −k dt
∂u ∂t
2
∂ u dV ≤ 0,
ω ∂t
(6.96)
(6.97)
where k is a positive constant. With this feedback control, the equilibrium 0 of the system (6.11)-(6.14) is exponentially stable.
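A one-dimensional analogue already shows the effect of damping supported only on a subdomain ω. The Python sketch below is my own illustration of the interior feedback (6.96); the interval ω, the gain, and the discretization are arbitrary assumptions.

```python
import numpy as np

# 1-D analogue of (6.93)-(6.96): u_tt = c^2 u_xx - k * chi_omega(x) * u_t,
# u(0,t) = u(L,t) = 0, with damping active only on omega = (0.3, 0.6).
c, L, k = 1.0, 1.0, 5.0
J = 400
dx = L / J
x = np.linspace(0.0, L, J + 1)
dt = 0.5 * dx / c
chi = ((x > 0.3) & (x < 0.6)).astype(float)   # characteristic function of omega

u_old = np.sin(2*np.pi*x)                     # u0
u = u_old.copy()                              # zero initial velocity
r2 = (c*dt/dx)**2

for s in range(int(6.0/dt)):
    lap = np.zeros_like(u)
    lap[1:-1] = u[2:] - 2*u[1:-1] + u[:-2]
    damp = k * chi * (u - u_old) / dt         # k * chi_omega * u_t
    u_new = 2*u - u_old + r2*lap - dt**2 * damp
    u_new[0] = u_new[-1] = 0.0
    u_old, u = u, u_new
    if s % 600 == 0:
        v = (u - u_old) / dt
        E = 0.5*np.sum(v**2)*dx + 0.5*c**2*np.sum((np.diff(u)/dx)**2)*dx
        print(f"t = {s*dt:5.2f}   E = {E:.4e}")
```

The printed energy decays even though the damping acts only on part of the domain, which is the phenomenon made precise by Theorems 6.5 and 6.6 below.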
To prove this result, we construct the following perturbed energy functional Eδ defined by Eδ (t) = E(t) + 2δ
Ω
u(x,t)ut (x,t)dV,
(6.98)
where δ is a positive constant. We first show that, by choosing δ sufficiently small, Eδ and E are equivalent. Lemma 6.10. The perturbed energy functional satisfies 2δ M 2δ M 1− E(t) ≤ Eδ (t) ≤ 1 + E(t), c c
(6.99)
where M is the constant in Poincar´e’s inequality. Proof. Using Young’s inequality and Poincar´e’s inequality, we deduce that
c
2ut udV ≤ M 2|u |
u dV t
Ω
c Ω M
c 2 M
2 |ut | + u dV ≤ c Ω M M ≤ |ut |2 + |c∇u|2 dV c Ω 2M ≤ E(t), (6.100) c which implies (6.99). Theorem 6.5. Assume ω = Ω . Then there exist constants M, σ > 0 such that the solution of (6.93)-(6.95) with the feedback control (6.96) satisfies E(t) ≤ ME(0)e−σ t
for t ≥ 0.
(6.101)
Proof. It follows from (6.98), Young’s inequality, and Poincar´e’s inequality that
2utt udV + δ 2ut2dV (use (6.93) and (6.96)) Eδ (t) = E (t) + δ Ω Ω
∂ u 2 2c2 ∇2 uudV − kδ 2ut udV + δ 2ut2dV = −k
dV + δ Ω ∂t Ω Ω Ω (use Green’s identity (2.26))
kMut cu ∂ u 2 dV 2c2 |∇u|2 dV − δ 2 = −k
dV − δ c M Ω ∂t Ω Ω +δ
Ω
2ut2dV
(use Young’s inequality (2.4) and Poincar´e’s inequality (6.21))
2
∂ u dV − δ 2c2 |∇u|2 dV
Ω ∂t Ω 2 2 2 k M ut 2 2 dV + + c |∇u| δ 2ut2 dV. +δ c2 Ω Ω
≤ −k
We then derive that Eδ (t) ≤ −2δ E(t) +
δ k2 M 2 + 3δ − k c2
Ω
ut2 dV
≤ −2δ E(t) when δ is small enough. It then follows from (6.99) that 2δ M Eδ (t). Eδ (t) ≤ −2δ 1 − c
(6.102)
We then deduce from the differential inequality (4.13) that Eδ (t) ≤ Eδ (0)e−σ t .
(6.103)
Thus (6.101) follows from (6.99). If ω ⊂ Ω is a neighborhood of Γ (x0 ), the solution of (6.93)-(6.95) with the feedback control (6.96) still decays exponentially. The proof of this result is technically sophisticated and readers may skip it in the first reading. ) Lemma 6.11. Let ω be a neighborhood of Γ (x0 ). If T > 2R(x c , there exists a constant M > 0 such that the solution of (6.93)-(6.95) with the feedback control (6.96) satisfies 0
E(0) ≤ M
T 0
ω
(|u|2 + |ut |2 )dV dt.
(6.104)
Proof. We write the solution u as u = v+w where v and w are solutions of
∂ 2v = c2 ∇2 v in Ω × (0, T ), ∂ t2 v = 0 on ∂ Ω × (0, T ), ∂v (x, 0) = u1 (x) in Ω , v(x, 0) = u0 (x), ∂t
(6.105) (6.106) (6.107)
and
∂ 2w ∂u = c2 ∇2 w − k χω ∂ t2 ∂t
in Ω × (0, T ),
(6.108)
on ∂ Ω × (0, T ), ∂w (x, 0) = 0 in Ω . w(x, 0) = 0, ∂t w=0
(6.109) (6.110)
Energy estimates give Ω
(|wt |2 + c2|∇w|2 )dV ≤ M
T
|ut |2 dV dt.
ω
0
The observability inequality (6.90) yields E(0) ≤ M =M ≤M ≤M
T 0
ω
0
ω
T T 0
ω
0
ω
T
(|v|2 + |vt |2 )dV dt (|u − w|2 + |ut − wt |2 )dV dt T
(|u| + |ut | )dV dt + M 2
2
0
ω
(|w|2 + |wt |2 )dV dt
(|u|2 + |ut |2 )dV dt.
Lemma 6.12. There exists a constant M > 0 such that the solution of (6.93)-(6.95) with the feedback control (6.96) satisfies T 0
Ω
|u|2 dV dt ≤ M
T ω
0
|ut |2 dV dt.
(6.111)
Proof. We argue by contradiction. If (6.111) is not true, there exists a sequence of solutions {un } of
∂ 2 un ∂ un = c2 ∇2 un − k χω in Ω × (0, T ), 2 ∂t ∂t un = 0 on ∂ Ω × (0, T ), ∂ un (x, 0) = un1 (x) in Ω , un (x, 0) = un0 (x), ∂t
(6.112) (6.113) (6.114)
satisfying T 0
Ω
0
Ω
T
|un |2 dV dt = 1, |un |2 dV dt ≥ n
(6.115)
T 0
ω
|unt |2 dV dt.
It then follows from (6.104) that for 0 ≤ t ≤ T E(un ,t) ≤ E(un , 0) ≤M
T 0
ω
(|un |2 + |unt |2 )dV dt
(6.116)
T
M(1 + n) n M(1 + n) . = n ≤
ω
0
|un |2 dV dt
We then can extract a subsequence (still denoted by {un }) such that un → v star-weakly in L∞ ([0, T ]; H01 (Ω )), unt → vt star-weakly in L∞ ([0, T ]; L2 (Ω )), un → v
L2 ([0, T ]; L2 (Ω )).
strongly in
Passing to the limit in (6.115) and (6.116) gives T 0
Ω
|v|2 dV dt = 1,
vt = 0
in ω × (0, T ).
(6.117)
Passing to the limit in (6.112)-(6.114) gives
∂ 2v = c2 ∇2 v in Ω × (0, T ), ∂ t2 v = 0 on ∂ Ω × (0, T ).
(6.118) (6.119)
Differentiating the above equation gives
∂ 2 vt = c2 ∇2 vt in Ω × (0, T ), ∂ t2 vt = 0 on ∂ Ω × (0, T ).
(6.120) (6.121)
It then follows from (6.104) that E(vt ,t) = E(vt , 0) ≤M = 0.
T 0
ω
(|vt |2 + |vtt |2 )dV dt
So we have vt = 0 in Ω × (0, T ). Then equation (6.120) reduces to c2 ∇2 v = 0 v=0
in Ω × (0, T ), on ∂ Ω × (0, T ),
(6.122) (6.123)
which implies that v = 0. This is a contradiction of (6.117). Theorem 6.6. Let ω be a neighborhood of Γ (x0 ). Then there exist constants M, σ > 0 such that the solution of (6.93)-(6.95) with the feedback control (6.96) satisfies E(t) ≤ ME(0)e−σ t
for t ≥ 0.
(6.124)
Proof. Let T >
2R(x0 ) c .
By (6.97), we first have
T
∂ u
2
E(0) − E(T ) = k
dV dt. 0
ω
∂t
By (6.104) and (6.111), we have M
2 T
∂ u dV dt ≥ E(T ).
ω
0
∂t
It then follows that E(0) − E(T ) ≥ ME(T ), and then E(T ) ≤
1 E(0). 1+M
(6.125)
Let u(t; u0 , u1 ) denote the solution of (6.93)-(6.95) corresponding to the initial condition (u0 , u1 ). Then we have u(s + t; u0, u1 ) = u(s; u(t; u0 , u1 ), ut (t; u0 , u1 )) due to the uniqueness of the solution. For any t > 0, there exists an integer N such that t = NT + r, 0 ≤ r < T . It then follows from (6.125) that E(t) = E(u(NT + r; u0 , u1 )) = E(u(T ; u((N − 1)T + r; u0 , u1 ), ut ((N − 1)T + r; u0, u1 ))) 1 ≤ E(u((N − 1)T + r; u0 , u1 )) 1+M .. . 1 ≤ E(u(r; u0 , u1 )) (1 + M)N 1 ≤ E(0) (1 + M)N 1 (1+M)N
=e
ln
=e
−N ln(1+M)
=e
− t−r T
=e
r T
E(0) E(0)
ln(1+M)
ln(1+M)
E(0) t
E(0)e− T ln(1+M) t
≤ (1 + M)E(0)e− T ln(1+M) . As in the case of the reaction-convection-diffusion equations, the stabilization theory for the wave equation has been extended to the abstract dynamical control system (4.347)-(4.348), where the operator A generates a C0 semigroup. For details, we refer to [24].
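The last step of this proof, turning the one-period contraction (6.125) into an exponential bound, is elementary arithmetic and can be illustrated in a few lines of Python. The constants M, T, and E(0) below are arbitrary values chosen for the illustration.

```python
import numpy as np

# Illustration of the argument above: a contraction E(T) <= E(0)/(1+M),
# iterated over periods of length T, is dominated by the exponential bound
# E(t) <= (1+M) E(0) exp(-t ln(1+M)/T).
M, T, E0 = 0.8, 2.0, 1.0
for t in [0.5, 3.0, 7.0, 20.0]:
    N, r = divmod(t, T)
    iterated = E0 / (1 + M) ** int(N)              # contraction applied N times
    bound = (1 + M) * E0 * np.exp(-t * np.log(1 + M) / T)
    print(f"t = {t:5.1f}   iterated <= {iterated:.4e}   exponential bound = {bound:.4e}")
```

In every line the exponential bound dominates the iterated contraction, exactly as claimed in the proof of Theorem 6.6.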
Fig. 6.2 Legendre transformation: g(p) is the maximal vertical gap between the line y = px and the curve y = f(x), attained at x = x(p).
6.6 Nonlinear Interior Feedback Stabilization This section is technically involved and may be skipped in the first reading. Consider the wave equation with a nonlinear interior control ∂ 2u ∂u 2 2 in Ω × (0, ∞), = c ∇ u+ f (6.126) 2 ∂t ∂t (6.127) u = 0 on Γ × (0, ∞), ∂u (x, 0) = u1 (x) in Ω . (6.128) u(x, 0) = u0 (x), ∂t To design a feedback controller f , we need to introduce a generalized Young’s inequality and Jensen’s inequality. Let y = f (x) be a convex function and f (x) > 0. The Legendre transformation of the function f is a new function g of a new variable p. Consider the straight line y = px (Figure 6.2). We take the point x = x(p) at which the curve of f (x) is farthest from the straight line in the vertical direction: for each p the function F(p, x) = px − f (x) has a maximum with respect to x at the point x(p). Now we define g(p) = F(p, x) = px − f (x). Definition 6.1. Two functions f and g, which are the Legendre transformation of one another, are called dual in the sense of Young. Generalized Young’s inequality. Let y = f (x) be a convex function and f (x) > 0. Let g(p) be the dual function of f (x) in the sense of Young. By definition of the Legendre transformation, we have px ≤ f (x) + g(p). Example 6.3. If f (x) = 12 x2 , then F(p, x) = px − 12 x2 and x = p and g(p) = F(p, p) = 12 p2 . Hence we have
(6.129) ∂ ∂ x F(p, x) =
1 px ≤ f (x) + g(p) = (x2 + p2). 2
p − x = 0. So
6 Higher-dimensional Wave Equation xα α
Example 6.4. If f (x) = p−
xα −1
= 0. So x =
α −1 α /(α −1) α p
=
pβ β
p1/(α −1)
and g(p) = F(p, p) =
1 α
= =
, where β = α /(α − 1). Hence we have px ≤ f (x) + g(p) =
where
xα ∂ α and ∂ x F(p, x) α /(α −1) pp1/(α −1) − p α
with α > 1, then F(p, x) = px −
xα p β + , α β
+ β1 = 1.
Lemma 6.13. (Jensen’s inequality) Let ϕ (x) be a convex function. Then 1 1 ϕ f (x)dV ≤ ϕ [ f (x)]dV, mes(Ω ) Ω mes(Ω ) Ω where mes(Ω ) denotes the volume of Ω . Proof. Since ϕ is convex, we deduce that
ϕ
1 mes(Ω )
Ω
f (x)dV
n 1 lim ∑ f (xi )Δ Vi =ϕ mes(Ω ) n→∞ i=1 n 1 = lim ϕ ∑ f (xi )Δ Vi n→∞ mes(Ω ) i=1 n Δ Vi =1 since ∑ i=1 mes(Ω ) n 1 ϕ [ f (xi )]Δ Vi ∑ n→∞ mes(Ω ) i=1
≤ lim =
1 mes(Ω )
Ω
ϕ [ f (x)]dV.
We also need the following Sobolev embedding theorem from [1, p. 97]. Theorem 6.7. Let Ω be a bounded smooth domain in Rn . (1) If k < n/2, then
H k (Ω ) ⊂ L2n/(n−2k)(Ω )
and uL2n/(n−2k) ≤ CuH k ,
u ∈ H k (Ω ).
(2) If k = n/2, then for any 1 ≤ p < ∞ H k (Ω ) ⊂ L p (Ω ) and uL p ≤ CuH k ,
u ∈ H k (Ω ).
(6.130)
(3) If k > j + n/2, then
H k (Ω ) ⊂ C j (Ω¯ )
and uC j ≤ CuH k ,
u ∈ H k (Ω ).
Choose p to satisfy n+2 , if n > 2, n−2 1 ≤ p < ∞, if n ≤ 2.
1 ≤ p≤
(6.131) (6.132)
Then, by Sobolev’s embedding theorem, H01 (Ω ) is embedded into L p+1 (Ω ), and consequently, L(p+1)/p(Ω ) is continuously embedded into H −1 (Ω ). Let α = α (p) > 0 denote the best constant such that p/(p+1) |u|(p+1)/pdx , ∀ u ∈ L(p+1)/p(Ω ). (6.133) uH −1 (Ω ) ≤ α Ω
Theorem 6.8. Assume that f ∈ C(R) satisfies the following conditions: (1) f (0) = 0; (2) f is increasing on R; (3) there are constants c1 , c2 > 0 and p ≥ 1 satisfying (6.131)-(6.132) such that c1 |s| ≤ | f (s)| ≤ c2 |s| p ,
for |s| ≥ 1;
(6.134)
(4) there exist a strictly increasing positive function h(s) of class C2 defined on [0, ∞) and constants c3 , c4 > 0 such that c3 h(|s|) ≤ | f (s)| ≤ c4 h−1 (|s|),
for |s| ≤ 1,
(6.135)
where h−1 denotes the inverse of h; (5) there exists an increasing, positive and convex function ϕ = ϕ (s) defined on [0, ∞) and twice differentiable outside s = 0 such that ϕ (|s|(p+1)/p) ≤ h(|s|)|s| and ϕ (s)s is increasing on [0, ∞). Then the energy E(t) of solutions of (6.126)-(6.128) with (u0 , u1 ) ∈ H01 (Ω ) × L2 (Ω ) satisfies E(t) ≤ 2V (t), for t ≥ 0, (6.136) where V (t) is the solution of the following differential equation: ε V (t) aV (t) aV (t) − ε m1 ϕ ϕ b b b (p+1)/2 V (t) aV (t) , +m2 ελ p+1 ϕ c c
V (t) = −
and a, b, c, m1 , m2 , λ , ε are positive constants. Furthermore, we have
(6.137)
lim E(t) = lim V (t) = 0.
t→∞
(6.138)
t→∞
Proof. By a straightforward calculation, we obtain E (t) = −
Ω
ut f (ut )dV ≤ 0.
(6.139)
If E(t0 ) = 0 for some t0 ≥ 0, then, by (6.139), we have E(t) ≡ 0 for t ≥ t0 and the theorem holds. Therefore, we may assume that E(t) > 0 for t ≥ 0. This assumption ensures that, in the following proof, ϕ (aE(t)) is well defined as we have assumed that ϕ (s) is twice differentiable outside s = 0. Define the perturbed energy
V (t) = E(t) + εψ (E(t))
Ω
uut dV,
(6.140)
where ψ (s) and ψ (s)s are positive and increasing functions on (0, +∞) that will be determined in the proof. Using (6.139) and Poincar´e inequality, we deduce V (t) = E (t) + εψ (E(t))E (t) +εψ (E(t))
Ω
Ω
uut dV
[|ut |2 − |∇u|2 − u f (ut )]dV
(use (6.139) for E (t)) =−
Ω
ut f (ut )dV − εψ (E(t))
+εψ (E(t))
+2εψ (E(t))
Ω
uut dV
Ω
[−|ut |2 − |∇u|2 ]dV
Ω
Ω
ut f (ut )dV
(subtract |ut |2 and then add |ut |2 )
|ut |2 dV − εψ (E(t))
Ω
u f (ut )dV
(use Young inequality (2.4) for uut ) 2 1 |u| 2 dV ≤ − ut f (ut )dV + kεψ (E(t)) + |u | ut f (ut )dV t 2 k2 Ω Ω Ω −2εψ (E(t))E(t) + 2εψ (E(t))
Ω
|ut |2 dV − εψ (E(t))
Ω
u f (ut )dV
(use Poincar´e inequality (2.29) for u2 /k2 ) ' ( 1 ≤ − ut f (ut )dV + kεψ (E(t)) |∇u|2 + |ut |2 dV ut f (ut )dV 2 Ω Ω Ω −2εψ (E(t))E(t) + 2εψ (E(t))
=−
Ω
Ω
|ut |2 dV − εψ (E(t))
ut f (ut )dV + kεψ (E(t))E(t)
−2εψ (E(t))E(t) + 2εψ (E(t))
Ω
Ω
ut f (ut )dV
|ut |2 dV − εψ (E(t))
Ω
Ω
u f (ut )dV
u f (ut )dV,
where k denotes the constant in Poincar´e inequality. Moreover, by (6.134), we have |ut |2 ≤ c−1 1 ut f (ut ) for |ut | ≥ 1. Therefore, taking into account the fact that ψ (s)s is non-decreasing, we deduce that V (t) ≤ −2εψ (E(t))E(t) + [ε kψ (E(0))E(0) − 1] +2ε c−1 1 ψ (E(t)) −εψ (E(t))
Ω
[|ut |≥1]
+[ε kψ (E(0))E(0) − 1]
[|ut |≤1]
Ω
ut f (ut )dV
[|ut |≤1]
|ut |2 dV
u f (ut )dV
+[ε kψ (E(0))E(0) + 2ε c−1 1 ψ (E(0)) − 1]
−εψ (E(t))
Ω
ut f (ut )dV + 2εψ (E(t))
≤ −2εψ (E(t))E(t)
+2εψ (E(t))
[|ut |≤1]
[|ut |≥1]
ut f (ut )dV
ut f (ut )dV
|ut |2 dV (= I1 )
u f (ut )dV (= I2 ).
(6.141)
We now estimate I1 and I2 in terms of Ω ut f (ut )dV as follows. Let ϕ ∗ denote the dual of ϕ in the sense of Young. Then, by the generalized Young’s inequality (6.129) and Jensen’s inequality (6.130), we deduce
1 |ut |2 dV mes([|ut | ≤ 1]) [|ut |≤1] (note that |ut | ≤ 1 and p ≥ 1) 1 ≤ 2ε mes([|ut | ≤ 1])ψ (E) |ut |(p+1)/pdV mes([|ut | ≤ 1]) [|ut |≤1] (Use the generalized Young’s inequality (6.129)) 1 ∗ (p+1)/p |ut | dV ≤ 2ε mes([|ut | ≤ 1]) ϕ (ψ (E)) + ϕ mes([|ut | ≤ 1]) [|ut |≤1] (Use Jensen’s inequality (6.130))
I1 = 2ε mes([|ut | ≤ 1])ψ (E)
≤ 2ε mes([|ut | ≤ 1])ϕ ∗ (ψ (E)) + 2ε ≤ 2ε mes([|ut | ≤ 1])ϕ ∗ (ψ (E)) + 2ε ≤ 2ε mes(Ω )ϕ ∗ (ψ (E)) + 2ε c−1 3
[|ut |≤1] [|ut |≤1]
[|ut |≤1]
ϕ (|ut |(p+1)/p)dV |ut |h(|ut |)dV (use (6.135))
ut f (ut )dV.
(6.142)
Using the embedding inequality (6.133), the condition (6.134), and Young’s inequality (2.4), we deduce that
|I2 | = εψ (E(t)) u f (ut )dV
Ω
(6.143)
≤ εψ (E(t))uH 1 (Ω ) f (ut )H −1 (Ω ) 0
(use the embedding inequality (6.133)) p/(p+1) | f (ut )|(p+1)/pdV ≤ αεψ (E(t))uH 1 (Ω ) 0
≤ αεψ (E(t))uH 1 (Ω )
Ω
0
[|ut |≤1]
| f (ut )|(p+1)/pdV
p/(p+1)
(use condition (6.134): | f (ut )|(p+1) = | f (ut )|| f (ut )| p ≤ c2 |ut || f (ut )| p = c2 [ut f (ut )] p ) p/(p+1) 1/(p+1) ψ (E(t))uH 1 (Ω ) ut f (ut )dV +αε c2 [|ut |≥1]
0
(use Young inequality (2.4)) p λ −(p+1)/p | f (ut )|(p+1)/pdV ≤ αεψ (E(t)) p+1 [|ut |≤1] 1 p 1/(p+1) p+1 p+1 + λ uH 1 (Ω ) + αε c2 ψ (E(t)) λ −(p+1)/p p+1 p+1 0 1 p+1 p+1 ut f (ut )dV + λ uH 1 (Ω ) , p+1 0 [|ut |≥1] Young’s inequality where λ is any positive number. We now use the generalized (6.129) and Jensen’s inequality (6.130) to estimate ψ (E) [|ut |≤1] | f (ut )|(p+1)/pdV as follows:
ψ (E)
[|ut |≤1]
| f (ut )|(p+1)/pdV
1 |c−1 f (ut )|(p+1)/pdV mes([|ut | ≤ 1]) [|ut |≤1] 4 (Use the generalized Young’s inequality (6.129)) 1 (p+1)/p ∗ mes([|ut | ≤ 1]) ϕ (ψ (E)) + ϕ ≤ c4 mes([|ut | ≤ 1]) (p+1)/p |c−1 4 f (ut )| (p+1)/p
= c4
mes([|ut | ≤ 1])ψ (E)
[|ut |≤1]
(Use Jensen’s inequality (6.130)) (p+1)/p
≤ c4
(p+1)/p
mes([|ut | ≤ 1)ϕ ∗ (ψ (E)) + c4
(use condition (v)) (p+1)/p
≤ c4
(p+1)/p
mes(Ω )ϕ ∗ (ψ (E)) + c4
(use (6.135) )
[|ut |≤1]
[|ut |≤1]
(p+1)/p dV ϕ |c−1 f (u )| t 4
−1 |c−1 4 f (ut )|h(c4 | f (ut )|)dV
6.6 Nonlinear Interior Feedback Stabilization (p+1)/p
≤ c4
267
mes(Ω )ϕ ∗ (ψ (E)) + c4
1/p
[|ut |≤1]
ut f (ut )dV.
(6.144)
Thus it follows from (6.143) that p |I2 | ≤ αεψ (E(t)) λ −(p+1)/p | f (ut )|(p+1)/pdV p+1 [|ut |≤1] 1 p 1/(p+1) + + λ p+1 uHp+1 αε c ψ (E(t)) λ −(p+1)/p 1 (Ω ) 2 p+1 p+1 0 1 ut f (ut )dV + λ p+1 uHp+1 1 p+1 0 (Ω ) [|ut |≥1] αε p (p+1)/p c mes(Ω )λ −(p+1)/pϕ ∗ (ψ (E)) (use (6.144)) ≤ p+1 4 (p+1)/2 (p+1)/2 2 1 2 |∇u|2 dV (use uHp+1 = c ≤ c22 E (p+1)/2(t)) 1 (Ω ) c2 2 Ω 0
αε 2(p+1)/2 1/(p+1) + (1 + c2 )λ p+1 ψ (E(t))E (p+1)/2(t) (p + 1)c p+1 αε p 1/p −(p+1)/p c4 λ ut f (ut )dV + p+1 [|ut |≤1] αε p 1/(p+1) −(p+1)/p c2 λ ψ (E(0)) ut f (ut )dV. + p+1 [|ut |≥1]
(6.145)
It therefore follows from (6.141), (6.142) and (6.145) that V (t) ≤ −2εψ (E(t))E(t) + 2ε mes(Ω )ϕ ∗ (ψ (E)) αε p (p+1)/p c mes(Ω )λ −(p+1)/pϕ ∗ (ψ (E)) + p+1 4 +
αε 2(p+1)/2 1/(p+1) (1 + c2 )λ p+1 ψ (E(t))E (p+1)/2(t) (p + 1)c p+1
+(ε k1 − 1)
[|ut |≥1]
ut f (ut )dV + (ε k2 − 1)
[|ut |≤1]
ut f (ut )dV.(6.146)
where k1 and k2 are positive constants depending on E(0). By the definition of the dual function in the sense of Young, ϕ ∗ (t) is the Legendre transform of ϕ (s), which is given by ϕ ∗ (t) = t[ϕ ]−1 (t) − ϕ [[ϕ ]−1(t)]. (6.147) Thus, we have
ϕ ∗ (ψ (E)) = ψ (E(t))[ϕ ]−1 (ψ (E(t))) − ϕ [[ϕ ]−1 (ψ (E(t))).
(6.148)
This motivates us to make the choice
ψ (s) = ϕ (as)
(6.149)
so that
ϕ ∗ (ψ (E)) = ϕ (aE)aE − ϕ (aE)
where the constant a will be determined later. By condition (v), ψ (s) satisfies the requirement we set at the beginning of the proof, that is, ψ and ψ (s)s are positive and increasing on (0, +∞). Taking −1 α p (p+1)/p −(p+1)/p a = 2mes(Ω ) + c mes(Ω )λ , p+1 4
(6.150)
we deduce from (6.146) that V (t) ≤ −εϕ (aE(t))E(t) − ε m1ϕ (aE(t)) + m2ελ p+1 ϕ (aE(t))E (p+1)/2(t), (6.151) where m1 , m2 , and ε are positive constants. On the other hand, since ϕ (s) and ϕ (s) are positive and increasing on (0, ∞), it follows from Poincar´e’s inequality that [1 − ε kϕ (aE(0))]E(t) ≤ V (t) ≤ [1 + ε kϕ (aE(0))]E(t).
(6.152)
Therefore, we deduce from (6.151) and (6.152) that ε V (t) aV (t) V (t) ≤ − ϕ 1 + ε kϕ (aE(0)) 1 + ε kϕ (aE(0)) aV (t) − ε m1 ϕ 1 + ε kϕ (aE(0)) (p+1)/2 V (t) aV (t) (6.153) , +m2 ελ p+1 ϕ 1 − ε kϕ (aE(0)) (1 − ε kϕ (aE(0))) which proves (6.137). It remains to prove (6.138). We argue by contradiction. Suppose that E(t) doesn’t tend to zero as t → ∞. Since E(t) is decreasing on [0, ∞), we have E(0) ≥ E(t) ≥ σ > 0, ∀t ≥ 0,
(6.154)
bE(0) ≥ V (t) ≥ β > 0, ∀t ≥ 0.
(6.155)
and by (6.152), we have
Thus we have
ϕ (aE(0)) ≥ ϕ
aV (t) b
≥ γ > 0, ∀t ≥ 0.
Let λ > 0 be so small that V (t) (p+1)/2 aV (t) ≤ m1 ϕ (aβ /b), ∀t ≥ 0. m2 λ p+1ϕ c c
(6.156)
(6.157)
It therefore follows from (6.137) that V (t) ≤ −
εγ V (t), ∀t ≥ 0, b
(6.158)
which is in contradiction of (6.155). This completes the proof. Corollary 6.1. Assume that f ∈ C(R) satisfies all the conditions of Theorem 6.8. Suppose ϕ (s) = s p/(p+1)h(s p/(p+1)) is convex and twice continuously differentiable. Then the energy E(t) of (6.126)-(6.128) satisfies the following decay rate: E(t) ≤ 2V (t),
for t ≥ 0,
(6.159)
where V (t) satisfies the following differential equation: p 2p p a p−1 aV p+1 aV p+1 εp p+1 p+1 V (t) = − ( ) V − V h ( ) h ( ) p b (p + 1)b b b (p+1)b p+1 p+1 p p −1 aV p+1 aV p−1 pm2 ε l (p+1) aV p+1 V 2 aV p+1 p+1 ( ) h(( ) ) + ( ) h (( ) ) . + p+1 c c c c c (6.160) −1
ε (2p+1)a p+1
p p+1
Proof. Since
ϕ (s) = s p/(p+1)h(s p/(p+1)), p [s−1/(p+1)h(s p/(p+1)) + s(p−1)/(p+1)h (s p/(p+1))], ϕ (s) = p+1
(6.161) (6.162)
by substituting (6.161) and (6.162) into (6.137), we obtain (6.160). We give three examples to illustrate how to derive the usual exponential or polynomial decay rate and the logarithmic decay rate for the exponentially degenerate damping from our general result. In what follows, ω denotes various positive constants that may vary from line to line. Example 6.5. Exponential decay rate. Let f (s) = ks and p = 1, where k is a positive constant. Then h(s) = ks. In this case, all the assumptions of Corollary 6.1 are satisfied and (6.160) becomes V (t) = −ω V (t),
(6.163)
where ω is a positive constant. Thus, as usual, we obtain an exponential decay rate. Example 6.6. Polynomial decay rate. Assume f (s) = k|s|q−1 s with q > 1 and k > 0. Then h(s) = ksq and p = 1 , q > 1. Then (6.160) becomes V (t) = −ω [V (t)](q+1)/2,
(6.164)
which, as usual, implies the polynomial decay rate E(t) ≤ C(E(0))t −2/(q−1),
∀t > 0.
Example 6.7. Logarithmic decay rate. Let p = 1 and f(s) = s³ e^{−1/s²} near the origin. Let

h(s) = s³ e^{−1/s²},  s > 0.    (6.166)

Then, by (6.160), V satisfies

V′(t) ≤ −ω V² e^{−b/(aV)},    (6.167)

which is the same as

( e^{b/(aV)} )′ ≥ bω/a.    (6.168)

Solving the inequality, we obtain the logarithmic decay rate

V(t) ≤ (b/a) [ ln( (bω/a) t + e^{b/(aV(0))} ) ]^{−1}.    (6.169)
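The computation leading to (6.169) can be checked numerically in the equality case. The following Python sketch is not part of the original argument; the constants a, b, ω and the initial value V(0) are arbitrary illustrative choices. It integrates the ordinary differential equation V′ = −ωV²e^{−b/(aV)} and compares the result with the closed form obtained by solving (6.168) with equality.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (not from the text)
a, b, omega, V0 = 1.0, 2.0, 0.5, 3.0

def rhs(t, V):
    # Right-hand side of V' = -omega * V^2 * exp(-b/(a V)), cf. (6.167) with equality
    return -omega * V**2 * np.exp(-b / (a * V))

T = 200.0
t_eval = np.linspace(0.0, T, 400)
sol = solve_ivp(rhs, (0.0, T), [V0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

# Closed-form solution obtained from (e^{b/(aV)})' = b*omega/a, cf. (6.168)-(6.169)
V_exact = (b / a) / np.log((b * omega / a) * t_eval + np.exp(b / (a * V0)))

print("maximum relative deviation:", np.max(np.abs(sol.y[0] - V_exact) / V_exact))

The deviation is of the order of the integration tolerance, confirming that (6.169) holds with equality for the pure differential equation; for the inequality (6.167) it is an upper bound.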
Exercises 6.6

1. Consider the beam equation with a nonlinear velocity feedback control

u_tt = −u_xxxx − k u_t³/(1 + u_t²)  in (0, L) × (0, ∞),
u(0,t) = u(L,t) = u_x(0,t) = u_x(L,t) = 0,  t ≥ 0,
u(x, 0) = u₀(x),  u_t(x, 0) = u₁(x),  x ∈ (0, L),

where k is a positive constant. Define the energy E(t) by

E(t) = (1/2) ∫₀^L [ |u_t(x,t)|² + |u_xx(x,t)|² ] dx.

a. Show that

dE/dt = − ∫₀^L k u_t⁴/(1 + u_t²) dx.

b. Use the perturbed energy

E_ε(t) = E(t) + ε E(t) ∫₀^L u u_t dx

to show that there exists a positive constant M such that

E(t) ≤ M/(1 + t).
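Part (b) of this exercise can be explored numerically before it is proved. The sketch below is not part of the text: the grid size, the constants k and L, and the initial data are illustrative assumptions, and the clamped boundary conditions are imposed through standard ghost-point finite differences. It integrates the damped beam equation and monitors (1 + t)E(t), which should remain bounded if the decay estimate of part (b) holds.

import numpy as np
from scipy.integrate import solve_ivp

L, k, N = np.pi, 1.0, 30                 # illustrative parameters
h = L / N
x = np.linspace(0.0, L, N + 1)[1:-1]     # interior nodes; u(0) = u(L) = 0

def build_D4(n, h):
    """Finite-difference u_xxxx on n interior nodes for clamped ends (u = u_x = 0)."""
    D = np.zeros((n, n))
    for i in range(n):
        for j, c in zip(range(i - 2, i + 3), (1.0, -4.0, 6.0, -4.0, 1.0)):
            if 0 <= j < n:
                D[i, j] += c
    D[0, 0] += 1.0                        # ghost value u_{-1} = u_1 from u_x(0) = 0
    D[-1, -1] += 1.0                      # ghost value u_{N+1} = u_{N-1} from u_x(L) = 0
    return D / h**4

D4 = build_D4(N - 1, h)

def rhs(t, y):
    u, v = y[:N - 1], y[N - 1:]
    return np.concatenate((v, -D4 @ u - k * v**3 / (1.0 + v**2)))

def energy(u, v):
    up = np.concatenate(([0.0], u, [0.0]))
    uxx = (up[:-2] - 2.0 * up[1:-1] + up[2:]) / h**2
    return 0.5 * h * (np.sum(v**2) + np.sum(uxx**2))

u0 = np.sin(x)**2                         # compatible with u = u_x = 0 at both ends
v0 = np.zeros_like(x)
t_eval = np.linspace(0.0, 40.0, 11)
sol = solve_ivp(rhs, (0.0, 40.0), np.concatenate((u0, v0)),
                method="Radau", t_eval=t_eval, rtol=1e-6, atol=1e-9)

for t, y in zip(sol.t, sol.y.T):
    E = energy(y[:N - 1], y[N - 1:])
    print(f"t = {t:5.1f}   E(t) = {E:.4e}   (1+t)E(t) = {(1.0 + t) * E:.4e}")

On a short horizon one only observes the trend; the exercise asserts the bound for all t ≥ 0.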
6.7 Optimal Boundary Control

Consider the boundary control problem

∂²u/∂t² = c² ∇²u,    (6.170)
u|_Γ = φ,    (6.171)
u(x, s) = u₀(x),  ∂u/∂t(x, s) = u₁(x),    (6.172)

where φ is a control input. For T > s, we define the quadratic performance functional by

J(φ; u₀, u₁, s, T) = α ∫_s^T ∫_Ω |u|² dV dt + β ∫_s^T ∫_Γ |φ|² dS dt + γ ∫_Ω |u(T)|² dV,    (6.173)

where α, β, γ ≥ 0 are weight constants. The optimal control problem is to minimize the functional J over L²(Γ × (s, T)) with respect to φ, where u₀, u₁, and s are parameters. This problem can be solved in the same way as for the reaction-convection-diffusion equation. We first assume that an optimal control exists and prove the existence at the end of this section by using solutions of Riccati equations.

We denote the solution of (6.170)-(6.172) corresponding to the control φ and the initial condition u₀, u₁ by u(x, t; u₀, u₁, φ). Since the equation is linear, we can readily see that

u(x, t; u₀, u₁, φ + λψ) = u(x, t; u₀, u₁, φ) + λ u(x, t; 0, 0, ψ)    (6.174)
for any control functions φ, ψ and any number λ. We now calculate the Gâteaux differential of J.

Theorem 6.9. The Gâteaux differential of J defined by (6.173) at φ is given by

∫_s^T ∫_Γ J′(φ; u₀, u₁, s, T) ψ dS dt = 2α ∫_s^T ∫_Ω u(x,t; u₀, u₁, φ) u(x,t; 0, 0, ψ) dV dt
    + 2γ ∫_Ω u(x,T; u₀, u₁, φ) u(x,T; 0, 0, ψ) dV
    + 2β ∫_s^T ∫_Γ φ ψ dS dt,    (6.175)

where u(x,t; 0, 0, ψ) is the solution of

∂²u/∂t² = c² ∇²u,    (6.176)
u|_Γ = ψ,    (6.177)
u(x, s) = 0,  ∂u/∂t(x, s) = 0.    (6.178)
Proof. Using (6.173) and (6.174), we derive that

∫_s^T ∫_Γ J′(φ; u₀, u₁, s) ψ dS dt
  = lim_{λ→0⁺} [ J(φ + λψ; u₀, u₁, s) − J(φ; u₀, u₁, s) ] / λ
  = lim_{λ→0⁺} (α/λ) ∫_s^T ∫_Ω [ |u(x,t; u₀, u₁, φ + λψ)|² − |u(x,t; u₀, u₁, φ)|² ] dV dt
    + lim_{λ→0⁺} (β/λ) ∫_s^T ∫_Γ [ |φ + λψ|² − |φ|² ] dS dt
    + lim_{λ→0⁺} (γ/λ) ∫_Ω [ |u(x,T; u₀, u₁, φ + λψ)|² − |u(x,T; u₀, u₁, φ)|² ] dV
  = lim_{λ→0⁺} (α/λ) ∫_s^T ∫_Ω [ 2λ u(x,t; u₀, u₁, φ) u(x,t; 0, 0, ψ) + λ² |u(x,t; 0, 0, ψ)|² ] dV dt
    + lim_{λ→0⁺} (β/λ) ∫_s^T ∫_Γ [ 2λ φψ + λ² |ψ|² ] dS dt
    + lim_{λ→0⁺} (γ/λ) ∫_Ω [ 2λ u(x,T; u₀, u₁, φ) u(x,T; 0, 0, ψ) + λ² |u(x,T; 0, 0, ψ)|² ] dV
  = 2α ∫_s^T ∫_Ω u(x,t; u₀, u₁, φ) u(x,t; 0, 0, ψ) dV dt + 2β ∫_s^T ∫_Γ φψ dS dt + 2γ ∫_Ω u(x,T; u₀, u₁, φ) u(x,T; 0, 0, ψ) dV.
In a manner analogous to Theorem 4.35, we have the optimality (also called Hamiltonian) system for the optimal control.

Theorem 6.10. The optimal control φ* satisfies the following system:

∂²u/∂t² = c² ∇²u,    (6.179)
∂²v/∂t² = c² ∇²v + αu,    (6.180)
u|_Γ = φ*,  v|_Γ = 0,  φ* = (c²/β) ∂v/∂n |_Γ,    (6.181)
u(x, s) = u₀(x),  ∂u/∂t(x, s) = u₁(x),    (6.182)
v(x, T) = 0,  ∂v/∂t(x, T) = −γ u(x, T).    (6.183)
Proof. Multiplying (6.176) by the solution v of (6.180) and integrating over Ω × (s, T ) by parts, we have
∫_s^T ∫_Ω v ∂²u/∂t²(x,t; 0, 0, ψ) dV dt = c² ∫_s^T ∫_Ω v ∇²u(x,t; 0, 0, ψ) dV dt
  = −c² ∫_s^T ∫_Ω ∇v · ∇u(x,t; 0, 0, ψ) dV dt.    (6.184)
Multiplying (6.180) by the solution u(x,t; 0, 0, ψ) of (6.176) and integrating over Ω × (s, T) by parts, we have

∫_s^T ∫_Ω u(x,t; 0, 0, ψ) ∂²v/∂t² dV dt
  = c² ∫_s^T ∫_Ω u(x,t; 0, 0, ψ) ∇²v dV dt + α ∫_s^T ∫_Ω u(x,t; 0, 0, ψ) u(x,t; u₀, u₁, φ*) dV dt
  = c² ∫_s^T ∫_Γ ψ ∂v/∂n dS dt − c² ∫_s^T ∫_Ω ∇u(x,t; 0, 0, ψ) · ∇v dV dt
    + α ∫_s^T ∫_Ω u(x,t; 0, 0, ψ) u(x,t; u₀, u₁, φ*) dV dt.    (6.185)
Substituting (6.184) into (6.185) gives

c² ∫_s^T ∫_Γ ψ ∂v/∂n dS dt = ∫_s^T ∫_Ω [ u(x,t; 0, 0, ψ) ∂²v/∂t² − v ∂²u/∂t²(x,t; 0, 0, ψ) ] dV dt
    − α ∫_s^T ∫_Ω u(x,t; 0, 0, ψ) u(x,t; u₀, u₁, φ*) dV dt    (6.186)
  = −γ ∫_Ω u(x,T; 0, 0, ψ) u(x,T; u₀, u₁, φ*) dV
    − α ∫_s^T ∫_Ω u(x,t; 0, 0, ψ) u(x,t; u₀, u₁, φ*) dV dt.    (6.187)
On the other hand, it follows from Theorems 4.34 and 6.9 that

α ∫_s^T ∫_Ω u(x,t; u₀, u₁, φ*) u(x,t; 0, 0, ψ) dV dt + β ∫_s^T ∫_Γ φ* ψ dS dt
    + γ ∫_Ω u(x,T; u₀, u₁, φ*) u(x,T; 0, 0, ψ) dV = 0

for all functions ψ ∈ L²(Γ × (s, T)). Substituting (6.187) into this equation, we obtain

∫_s^T ∫_Γ [ β φ* − c² ∂v/∂n ] ψ dS dt = 0

for all functions ψ ∈ L²(Γ × (s, T)). Thus β φ* = c² ∂v/∂n |_Γ.
To decouple the optimality system (6.179)-(6.183), we define a family of linear operators P(s) by
P(s)(u₀, u₁) = ( −∂v/∂t(s; u₀, u₁), v(s; u₀, u₁) )    (6.188)
for any given function (u₀, u₁) ∈ L²(Ω) × H⁻¹(Ω), where v(s; u₀, u₁) is the solution of (6.179)-(6.183).

Theorem 6.11. The operator P(s) defined by (6.188) has the following properties:

(1) P(s) is symmetric, that is,

⟨P(s)(u₀, u₁), (p₀, p₁)⟩ = ⟨(u₀, u₁), P(s)(p₀, p₁)⟩    (6.189)

for (u₀, u₁), (p₀, p₁) ∈ L²(Ω) × H⁻¹(Ω), where ⟨·, ·⟩ denotes the dual product between L²(Ω) × H⁻¹(Ω) and L²(Ω) × H₀¹(Ω).

(2) If φ* is the optimal control of (6.170)-(6.172), then

⟨P(s)(u₀, u₁), (u₀, u₁)⟩ = α ∫_s^T ∫_Ω |u|² dV dt + β ∫_s^T ∫_Γ |φ*|² dS dt + γ ∫_Ω |u(T)|² dV.    (6.190)

Hence P(s) is nonnegative.

(3) P(T)(u₀, u₁) = (γu₀, 0) for all (u₀, u₁) ∈ L²(Ω) × H⁻¹(Ω).

(4) P(s) is a linear bounded operator from L²(Ω) × H⁻¹(Ω) to L²(Ω) × H₀¹(Ω).

Proof. (1) Consider the system
∂²p/∂t² = c² ∇²p,    (6.191)
∂²q/∂t² = c² ∇²q + αp,    (6.192)
p|_Γ = ψ*,  q|_Γ = 0,  ψ* = (c²/β) ∂q/∂n |_Γ,    (6.193)
p(x, s) = p₀(x),  ∂p/∂t(x, s) = p₁(x),    (6.194)
q(x, T) = 0,  ∂q/∂t(x, T) = −γ p(x, T).    (6.195)

According to the definition of P in the equation (6.188), we have

P(s)(p₀, p₁) = ( −∂q/∂t(s; p₀, p₁), q(s; p₀, p₁) ).

Multiplying (6.179) by the solution q of (6.192) and integrating over Ω × (s, T) by parts, we have
∫_s^T ∫_Ω q ∂²u/∂t² dV dt = c² ∫_s^T ∫_Ω q ∇²u dV dt = −c² ∫_s^T ∫_Ω ∇q · ∇u dV dt.    (6.196)
Multiplying (6.192) by the solution u of (6.179) and integrating over Ω × (s, T) by parts, we have

∫_s^T ∫_Ω u ∂²q/∂t² dV dt = c² ∫_s^T ∫_Ω u ∇²q dV dt + α ∫_s^T ∫_Ω u p dV dt
  = c² ∫_s^T ∫_Γ φ* ∂q/∂n dS dt − c² ∫_s^T ∫_Ω ∇q · ∇u dV dt + α ∫_s^T ∫_Ω u p dV dt.    (6.197)
Subtracting (6.196) from (6.197) gives

⟨(u₀, u₁), P(s)(p₀, p₁)⟩ = ∫_Ω [ u₁ q(s) − u₀ ∂q(s)/∂t ] dV
  = α ∫_s^T ∫_Ω u p dV dt + c² ∫_s^T ∫_Γ φ* ∂q/∂n dS dt + γ ∫_Ω u(T) p(T) dV
  = α ∫_s^T ∫_Ω u p dV dt + β ∫_s^T ∫_Γ φ* ψ* dS dt + γ ∫_Ω u(T) p(T) dV.    (6.198)
Repeating the above procedure by multiplying (6.191) by the solution v of (6.180) and multiplying (6.180) by the solution p of (6.191), we can obtain

⟨P(s)(u₀, u₁), (p₀, p₁)⟩ = ∫_Ω [ p₁ v(s) − p₀ ∂v(s)/∂t ] dV
  = α ∫_s^T ∫_Ω u p dV dt + β ∫_s^T ∫_Γ φ* ψ* dS dt + γ ∫_Ω u(T) p(T) dV.    (6.199)
Then (6.189) follows from (6.198) and (6.199).

(2) If p₀ = u₀, p₁ = u₁, then u = p, v = q, and φ* = ψ*. It then follows from (6.199) that

⟨P(s)(u₀, u₁), (u₀, u₁)⟩ = α ∫_s^T ∫_Ω u² dV dt + β ∫_s^T ∫_Γ |φ*|² dS dt + γ ∫_Ω u(T)² dV.

(3) Letting s → T in (6.199), we derive that

⟨P(T)(u₀, u₁), (p₀, p₁)⟩ = ∫_Ω ( γ u₀ p₀ + 0 · p₁ ) dV  for all (p₀, p₁) ∈ L²(Ω) × H⁻¹(Ω).
Thus P(T)(u₀, u₁) = (γu₀, 0) for all (u₀, u₁) ∈ L²(Ω) × H⁻¹(Ω).

(4) By (6.199), we deduce from Hölder's inequality and the Cauchy-Schwarz inequality that

|⟨P(s)(u₀, u₁), (p₀, p₁)⟩|
  ≤ γ ( ∫_Ω |u(T)|² dV )^{1/2} ( ∫_Ω |p(T)|² dV )^{1/2}
    + β ( ∫_s^T ∫_Γ |φ*|² dS dt )^{1/2} ( ∫_s^T ∫_Γ |ψ*|² dS dt )^{1/2}
    + α ( ∫_s^T ∫_Ω |u|² dV dt )^{1/2} ( ∫_s^T ∫_Ω |p|² dV dt )^{1/2}
  ≤ ( γ ∫_Ω |u(T)|² dV + β ∫_s^T ∫_Γ |φ*|² dS dt + α ∫_s^T ∫_Ω |u|² dV dt )^{1/2}
    × ( γ ∫_Ω |p(T)|² dV + β ∫_s^T ∫_Γ |ψ*|² dS dt + α ∫_s^T ∫_Ω |p|² dV dt )^{1/2}
  = ( J(φ*; u₀, u₁, s, T) )^{1/2} ( J(ψ*; p₀, p₁, s, T) )^{1/2}.    (6.200)
To estimate J in terms of (u₀, u₁), we introduce the change of variable

v = ∫_s^t u(r) dr + ψ,

where u is the solution of (6.170)-(6.172) with φ = 0 and ψ is the solution of

c² ∇²ψ = u₁,  ψ|_Γ = 0.

Multiplying this equation by ψ and integrating over Ω, we obtain

c² ∫_Ω |∇ψ|² dV = −∫_Ω u₁ ψ dV ≤ ‖u₁‖_{H⁻¹} ‖ψ‖_{H¹},

and so

‖ψ‖_{H¹} ≤ (1/c²) ‖u₁‖_{H⁻¹}.

On the other hand, we can easily verify that v satisfies the wave equation

∂²v/∂t² = c² ∇²v,  v|_Γ = 0,  v(x, s) = ψ(x),  ∂v/∂t(x, s) = u₀(x).

Evidently, the energy identity (6.10) also holds for this equation. It therefore follows that

∫_Ω [ |u|² + | ∫_s^t ∇u(r) dr + ∇ψ |² ] dV = ∫_Ω [ |v_t|² + |∇v|² ] dV
  = ∫_Ω [ |u₀|² + |∇ψ|² ] dV
  ≤ C ( ‖u₀‖²_{L²} + ‖u₁‖²_{H⁻¹} ).
We then deduce that there exists a constant C > 0 such that

J(φ*; u₀, u₁, s, T) ≤ J(0; u₀, u₁, s, T) = γ ∫_Ω |u(T)|² dV + α ∫_s^T ∫_Ω |u|² dV dt ≤ C ( ‖u₀‖²_{L²} + ‖u₁‖²_{H⁻¹} )

and

J(ψ*; p₀, p₁, s, T) ≤ J(0; p₀, p₁, s, T) = γ ∫_Ω |p(T)|² dV + α ∫_s^T ∫_Ω |p|² dV dt ≤ C ( ‖p₀‖²_{L²} + ‖p₁‖²_{H⁻¹} ).

It therefore follows from (6.200) that

|⟨P(s)(u₀, u₁), (p₀, p₁)⟩| ≤ C [ ( ‖u₀‖²_{L²} + ‖u₁‖²_{H⁻¹} ) ( ‖p₀‖²_{L²} + ‖p₁‖²_{H⁻¹} ) ]^{1/2}
for all (u₀, u₁), (p₀, p₁) ∈ L²(Ω) × H⁻¹(Ω). This implies that

‖P(s)(u₀, u₁)‖_{L²×H₀¹} ≤ C ‖(u₀, u₁)‖_{L²×H⁻¹}.

Thus P(s) is bounded. Since the optimality system (6.179)-(6.183) is linear, P(s) is linear. Since the optimality system (6.179)-(6.183) has a solution for any initial condition (u₀, u₁) ∈ L²(Ω) × H⁻¹(Ω), the state space is L²(Ω) × H⁻¹(Ω).

Consider the optimal control problem
∂²u/∂t² = c² ∇²u,    (6.201)
u|_Γ = φ,    (6.202)
u(x, 0) = w₀(x),  ∂u/∂t(x, 0) = w₁(x).    (6.203)
By Theorem 6.10, we derive that the optimal control φ ∗ satisfies the following system:
∂²u/∂t² = c² ∇²u,    (6.204)
∂²v/∂t² = c² ∇²v + αu,    (6.205)
u|_Γ = φ*,  v|_Γ = 0,  φ* = (c²/β) ∂v/∂n |_Γ,    (6.206)
u(x, 0) = w₀(x),  ∂u/∂t(x, 0) = w₁(x),    (6.207)
v(x, T) = 0,  ∂v/∂t(x, T) = −γ u(x, T).    (6.208)
If we restrict the system (6.204)-(6.208) to [t, T] and compare it with the system (6.179)-(6.183), we find that

P(t)(u(t), u_t(t)) = (−v_t(t), v(t)).    (6.209)

Let

P(t)[u(t), u_t(t)] = ( P₁(t)[u(t), u_t(t)], P₂(t)[u(t), u_t(t)] ).    (6.210)

Then we have

v(t) = P₂(t)[u(t), u_t(t)],
−∂v/∂t(t) = P₁(t)[u(t), u_t(t)].

We then obtain the optimal state feedback control

u|_Γ = φ* = (c²/β) ∂(P₂(t)[u(t), u_t(t)])/∂n |_Γ.    (6.211)
We now formally derive an equation for the operator P. For this, we define the operators

A = [ 0, I; c²∇², 0 ],    B = [ I, 0; 0, 0 ].

The adjoint of A is given by

A* = [ 0, c²∇²; I, 0 ].

Since the state space is L²(Ω) × H⁻¹(Ω), the domains of the operators A and A* are H₀¹(Ω) × L²(Ω) and L²(Ω) × H₀¹(Ω), respectively. Using the operators A and A*, we can rewrite the equations (6.204) and (6.205) as follows:

∂/∂t (u, u_t) = A (u, u_t),    (6.212)
∂/∂t (−v_t, v) = −A* (−v_t, v) − αB (u, u_t).    (6.213)

Substituting (6.209) into (6.213) gives

P′(t)(u, u_t) + P(t) ∂/∂t (u, u_t) = −A* P(t)(u, u_t) − αB (u, u_t).

Taking the dual product of this equation with (ξ, η) ∈ H₀¹(Ω) × L²(Ω), we obtain

⟨P′(t)(u, u_t), (ξ, η)⟩ + ⟨P(t) ∂/∂t (u, u_t), (ξ, η)⟩ = −⟨A* P(t)(u, u_t), (ξ, η)⟩ − α ⟨B(u, u_t), (ξ, η)⟩.

Substituting (6.212) into this equation for ∂/∂t (u, u_t), we deduce

⟨P′(u, u_t), (ξ, η)⟩ + ⟨PA(u, u_t), (ξ, η)⟩ = −⟨A* P(u, u_t), (ξ, η)⟩ − α ⟨B(u, u_t), (ξ, η)⟩.    (6.214)
Moreover,

⟨PA(u, u_t), (ξ, η)⟩ = ⟨(u_t, c²∇²u), P(ξ, η)⟩
  = ∫_Ω u_t P₁(ξ, η) dV + ∫_Ω c²∇²u P₂(ξ, η) dV
  (integrate by parts twice and note that P₂(ξ, η)|_Γ = 0)
  = ∫_Ω u_t P₁(ξ, η) dV + c² ∫_Ω u ∇²P₂(ξ, η) dV − c² ∫_Γ u ∂/∂n P₂(ξ, η) dS
  = ∫_Ω u_t P₁(ξ, η) dV + c² ∫_Ω u ∇²P₂(ξ, η) dV − (c⁴/β) ∫_Γ ∂/∂n P₂(u, u_t) ∂/∂n P₂(ξ, η) dS
  = ⟨(u, u_t), A* P(ξ, η)⟩ − (c⁴/β) ∫_Γ ∂/∂n P₂(u, u_t) ∂/∂n P₂(ξ, η) dS.    (6.215)
Substituting this equation into (6.214) gives

⟨P′(u, u_t), (ξ, η)⟩ = −⟨(u, u_t), A* P(ξ, η)⟩ − ⟨A* P(u, u_t), (ξ, η)⟩ − α ⟨B(u, u_t), (ξ, η)⟩
    + (c⁴/β) ∫_Γ ∂/∂n P₂(u, u_t) ∂/∂n P₂(ξ, η) dS.

Since (u, u_t) is arbitrary, it follows, together with property (3) of Theorem 6.11, that

⟨P′(ϕ, ψ), (ξ, η)⟩ = −⟨(ϕ, ψ), A* P(ξ, η)⟩ − ⟨A* P(ϕ, ψ), (ξ, η)⟩ − α ⟨B(ϕ, ψ), (ξ, η)⟩
    + (c⁴/β) ∫_Γ ∂/∂n P₂(ϕ, ψ) ∂/∂n P₂(ξ, η) dS,    (6.216)
⟨P(T)(ϕ, ψ), (ξ, η)⟩ = ⟨(γϕ, 0), (ξ, η)⟩,    (6.217)

for all (ϕ, ψ), (ξ, η) ∈ H₀¹(Ω) × L²(Ω). This equation is called a differential Riccati equation. The study of the Riccati equation is beyond the scope of this text; we refer the reader to one of the advanced control books [6, 24, 64, 67, 69, 99]. We state a result from [6, p.463, Theorem 2.1] without proof.

Theorem 6.12. The differential Riccati equation (6.216)-(6.217) has a unique, nonnegative, symmetric, strongly continuous solution.

Using the solutions of the Riccati equation, we can prove the existence of an optimal control. For this we define the Riccati wave energy of (6.170)-(6.172) by

V(u, u_t) = ⟨P(t)(u, u_t), (u, u_t)⟩.    (6.218)

Lemma 6.14. Let P be the nonnegative symmetric solution of (6.216)-(6.217). Then

dV/dt = −α ∫_Ω u² dV − β ∫_Γ φ² dS + (1/β) ∫_Γ [ c² ∂/∂n P₂(u, u_t) − βφ ]² dS.    (6.219)

Proof. Using the equation (6.212), we compute

dV/dt = ⟨P′(t)(u, u_t), (u, u_t)⟩ + ⟨P(t) ∂/∂t (u, u_t), (u, u_t)⟩ + ⟨P(t)(u, u_t), ∂/∂t (u, u_t)⟩
  = ⟨P′(t)(u, u_t), (u, u_t)⟩ + ⟨P(t)A(u, u_t), (u, u_t)⟩ + ⟨P(t)(u, u_t), A(u, u_t)⟩
  (use the symmetry of P)
  = ⟨P′(t)(u, u_t), (u, u_t)⟩ + 2 ⟨P(t)(u, u_t), A(u, u_t)⟩
  = ⟨P′(t)(u, u_t), (u, u_t)⟩ + 2 ⟨(u, u_t), A* P(u, u_t)⟩ − 2c² ∫_Γ φ ∂/∂n P₂(u, u_t) dS    (use (6.215))
  = −α ⟨B(u, u_t), (u, u_t)⟩ + (c⁴/β) ∫_Γ [ ∂/∂n P₂(u, u_t) ]² dS − 2c² ∫_Γ φ ∂/∂n P₂(u, u_t) dS    (use (6.216))
  = −α ∫_Ω u² dV − β ∫_Γ φ² dS + (c⁴/β) ∫_Γ [ ∂/∂n P₂(u, u_t) ]² dS − 2c² ∫_Γ φ ∂/∂n P₂(u, u_t) dS + β ∫_Γ φ² dS
  = −α ∫_Ω u² dV − β ∫_Γ φ² dS + (1/β) ∫_Γ [ c² ∂/∂n P₂(u, u_t) − βφ ]² dS.

Integrating the identity (6.219) from s to T, we obtain the following lemma.

Lemma 6.15. Let P be the nonnegative symmetric solution of (6.216)-(6.217). Then

J(φ, u₀, u₁, s, T) = ⟨P(s)(u₀, u₁), (u₀, u₁)⟩ + (1/β) ∫_s^T ∫_Γ [ c² ∂/∂n P₂(u, u_t) − βφ ]² dS dt.    (6.220)

Proof. Integrating the identity (6.219) from s to T, we obtain
T
T
u2 dV dt − β φ 2 dSdt s s Ω Γ 2 1 T ∂ u c2 P2 − β φ dSdt. + ut β s Γ ∂n
This implies (6.220) since V (T ) = γ Ω u(T )2 dV and 3 4 u u . V (s) = P(s) 0 , 0 u1 u1
Theorem 6.13. Let J be defined by (6.173). Then there exists a unique optimal control φ* such that

min_{φ ∈ L²(Γ×(s,T))} J(φ; u₀, u₁, s, T) = J(φ*; u₀, u₁, s, T) = ⟨P(s)(u₀, u₁), (u₀, u₁)⟩,    (6.221)

where P is the nonnegative symmetric solution of (6.216)-(6.217) and

φ* = (c²/β) ∂(P₂(u, u_t))/∂n |_Γ.    (6.222)

Proof. It follows from (6.220) that

J(φ*; u₀, u₁, s, T) ≥ min_{φ ∈ L²(Γ×(s,T))} J(φ; u₀, u₁, s, T) ≥ ⟨P(s)(u₀, u₁), (u₀, u₁)⟩ = J(φ*; u₀, u₁, s, T)

if we take

φ = φ* = (c²/β) ∂(P₂(u, u_t))/∂n |_Γ.

In addition, we can readily show that J is strictly convex, and hence the optimal control is unique.

To design an optimal boundary state feedback controller, we consider the boundary control problem
∂²u/∂t² = c² ∇²u,    (6.223)
u|_Γ = φ,    (6.224)
u(x, 0) = u₀(x),  ∂u/∂t(x, 0) = u₁(x)    (6.225)

with the quadratic performance functional over an infinite time interval

J(φ; u₀, u₁) = α ∫_0^∞ ∫_Ω |u|² dV dt + β ∫_0^∞ ∫_Γ |φ|² dS dt,    (6.226)

where α, β ≥ 0 are weight constants. The optimal control problem is to minimize the functional J over L²(Γ × (0, ∞)). Since the wave equation can be stabilized by the feedback controller (6.46), this minimization problem is well posed. The following optimality theorem is the infinite-time version of Theorem 6.10. Its proof is beyond the scope of the text; we refer to [69].
Theorem 6.14. The optimal control φ* satisfies the following system:

∂²u/∂t² = c² ∇²u,    (6.227)
∂²v/∂t² = c² ∇²v + αu,    (6.228)
u|_Γ = φ*,  v|_Γ = 0,    (6.229)
φ* = (c²/β) ∂v/∂n |_Γ,    (6.230)
u(x, 0) = u₀(x),  ∂u/∂t(x, 0) = u₁(x),    (6.231)
v(x, ∞) = 0,  ∂v/∂t(x, ∞) = 0.    (6.232)
We define a linear operator Π by

Π(u₀, u₁) = ( −∂v/∂t(0; u₀, u₁), v(0; u₀, u₁) )    (6.233)

for any given function (u₀, u₁) ∈ L²(Ω) × H⁻¹(Ω), where v(t; u₀, u₁) is the solution of (6.227)-(6.232).

Theorem 6.15. The operator Π defined by (6.233) has the following properties:

(1) Π is symmetric, that is,

⟨Π(u₀, u₁), (p₀, p₁)⟩ = ⟨(u₀, u₁), Π(p₀, p₁)⟩    (6.234)
for (u₀, u₁), (p₀, p₁) ∈ L²(Ω) × H⁻¹(Ω).

(2) If φ* is the optimal control of (6.223)-(6.225), then

⟨Π(u₀, u₁), (u₀, u₁)⟩ = α ∫_0^∞ ∫_Ω |u|² dV dt + β ∫_0^∞ ∫_Γ |φ*|² dS dt.    (6.235)
Hence Π is nonnegative.

The proof is similar to that of Theorem 6.11 and is left as an exercise.

It can be seen from the definition (6.233) of the operator Π that

Π(u(t), ∂u/∂t(t)) = ( −∂v/∂t(t), v(t) ).    (6.236)

Let

Π = (Π₁, Π₂).    (6.237)

We then obtain the optimal state feedback control

φ* = (c²/β) ∂(Π₂(u(t), u_t(t)))/∂n |_Γ.    (6.238)
In the same way as the derivation of the differential Riccati equation (6.216), we can derive the algebraic Riccati equation

⟨(ϕ, ψ), A* Π(ξ, η)⟩ + ⟨A* Π(ϕ, ψ), (ξ, η)⟩ + α ⟨B(ϕ, ψ), (ξ, η)⟩
    − (c⁴/β) ∫_Γ ∂/∂n Π₂(ϕ, ψ) ∂/∂n Π₂(ξ, η) dS = 0    (6.239)

for all (ϕ, ψ), (ξ, η) ∈ H₀¹(Ω) × L²(Ω). We state a result about the Riccati equation from [62, 65] without proof. Consider the differential Riccati equation

⟨P′(ϕ, ψ), (ξ, η)⟩ = ⟨(ϕ, ψ), A* P(ξ, η)⟩ + ⟨A* P(ϕ, ψ), (ξ, η)⟩ + α ⟨B(ϕ, ψ), (ξ, η)⟩
    − (c⁴/β) ∫_Γ ∂/∂n P₂(ϕ, ψ) ∂/∂n P₂(ξ, η) dS,    (6.240)
⟨P(0)(ϕ, ψ), (ξ, η)⟩ = ⟨(γϕ, 0), (ξ, η)⟩    (6.241)

for all (ϕ, ψ), (ξ, η) ∈ H₀¹(Ω) × L²(Ω). The algebraic Riccati equation (6.239) is the steady-state equation of the differential equation.

Theorem 6.16. The differential Riccati equation (6.240)-(6.241) has a unique, nonnegative, symmetric, strongly continuous solution that has the following properties:

(1) ⟨P(t)(u₀, u₁), (u₀, u₁)⟩ = min_{φ ∈ L²(Γ×(0,t))} J(φ, u₀, u₁, 0, t), where J is defined by (6.173).

(2) If γ = 0, then the limit

lim_{t→∞} P(t)(u₀, u₁) = Π(u₀, u₁)  for all (u₀, u₁) ∈ L²(Ω) × H⁻¹(Ω)    (6.242)

exists and Π is the minimal nonnegative symmetric solution of (6.239). That is, any other solution Π_m of (6.239) satisfies

⟨Π(u₀, u₁), (u₀, u₁)⟩ ≤ ⟨Π_m(u₀, u₁), (u₀, u₁)⟩.

We define the Riccati wave energy of (6.223)-(6.225) by

V(u, u_t) = ⟨Π(u, u_t), (u, u_t)⟩.    (6.243)
In a manner analogous to Lemma 6.14, we can prove the following lemma.
Lemma 6.16. Let Π be the nonnegative symmetric solution of (6.239). Then

dV/dt = −α ∫_Ω u² dV − β ∫_Γ φ² dS + (1/β) ∫_Γ [ c² ∂/∂n Π₂(u, u_t) − βφ ]² dS,    (6.244)

where Π(u₀, u₁) = [Π₁(u₀, u₁), Π₂(u₀, u₁)]. The proof is left as an exercise.

Integrating the identity (6.244) from 0 to t, we obtain the following lemma.

Lemma 6.17. Let Π be the nonnegative symmetric solution of (6.239). Then

J(φ, u₀, u₁, 0, t) = ⟨Π(u₀, u₁), (u₀, u₁)⟩ − ⟨Π(u(t), u_t(t)), (u(t), u_t(t))⟩
    + (1/β) ∫_0^t ∫_Γ [ c² ∂/∂n Π₂(u, u_t) − βφ ]² dS dt.    (6.245)

Theorem 6.17. Let J be defined by (6.226). Then there exists a unique optimal control φ* such that

min_{φ ∈ L²(Γ×(0,∞))} J(φ; u₀, u₁) = J(φ*; u₀, u₁) = ⟨Π(u₀, u₁), (u₀, u₁)⟩,    (6.246)

where Π is the nonnegative symmetric solution of (6.239) and

φ* = (c²/β) ∂(Π₂(u, u_t))/∂n |_Γ.    (6.247)
Proof. Since the control problem is well posed, the minimum below is finite. It follows from Theorem 6.16 that

min_{φ ∈ L²(Γ×(0,∞))} J(φ; u₀, u₁) ≥ min_{φ ∈ L²(Γ×(0,∞))} J(φ; u₀, u₁, 0, t) = min_{φ ∈ L²(Γ×(0,t))} J(φ; u₀, u₁, 0, t) = ⟨P(t)(u₀, u₁), (u₀, u₁)⟩,

which, combined with (6.242), implies that

min_{φ ∈ L²(Γ×(0,∞))} J(φ, u₀, u₁) ≥ ⟨Π(u₀, u₁), (u₀, u₁)⟩.    (6.248)

On the other hand, by (6.245), we deduce that
J(φ; u₀, u₁, 0, t) = ⟨Π(u₀, u₁), (u₀, u₁)⟩ − ⟨Π(u(t), u_t(t)), (u(t), u_t(t))⟩
    + (1/β) ∫_0^t ∫_Γ [ c² ∂/∂n Π₂(u, u_t) − βφ ]² dS dt
  ≤ ⟨Π(u₀, u₁), (u₀, u₁)⟩ + (1/β) ∫_0^t ∫_Γ [ c² ∂/∂n Π₂(u, u_t) − βφ ]² dS dt.

Taking φ = φ* = (c²/β) ∂/∂n Π₂(u, u_t), we obtain

J(φ*; u₀, u₁, 0, t) ≤ ⟨Π(u₀, u₁), (u₀, u₁)⟩

and then

min_{φ ∈ L²(Γ×(0,∞))} J(φ, u₀, u₁) ≤ J(φ*; u₀, u₁) ≤ ⟨Π(u₀, u₁), (u₀, u₁)⟩.    (6.249)
Putting (6.248) and (6.249) together yields (6.246). In addition, we can readily show that J is strictly convex, and hence the optimal control is unique.

As in the case of the reaction-convection-diffusion equations, the optimal control theory for the wave equation has been extended to the abstract dynamical control system (4.347)-(4.348). For details, we refer to [6, Part IV], [65], and [67].
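In finite dimensions, the analogue of (6.239) is an algebraic Riccati equation that standard software solves directly, and Theorem 6.17 then reduces to the classical infinite-horizon linear-quadratic regulator. The sketch below is not from the text; the two-mode model, the input matrix, and the weights are illustrative assumptions, and the sign conventions are those of the standard finite-dimensional problem rather than of the operator Π in (6.233). It computes the minimal cost x₀ᵀΠx₀ from the algebraic Riccati equation and compares it with the cost accumulated by simulating the closed loop under the associated state feedback.

import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# Two oscillatory modes as a crude stand-in for the wave equation: x' = A x + B phi
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.diag([1.0, 4.0]), np.zeros((2, 2))]])
B = np.array([[0.0], [0.0], [1.0], [0.5]])       # how the single control enters each mode
alpha, beta = 1.0, 1.0
Q = alpha * np.diag([1.0, 1.0, 0.0, 0.0])        # weight positions only, as in (6.226)
R = np.array([[beta]])

Pi = solve_continuous_are(A, B, Q, R)            # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ Pi)                 # optimal feedback phi = -K x
x0 = np.array([1.0, -0.5, 0.0, 0.0])

def closed_loop(t, y):
    x = y[:4]
    u = -(K @ x)
    dx = A @ x + B @ u
    dJ = x @ Q @ x + float(u @ R @ u)
    return np.concatenate((dx, [dJ]))

sol = solve_ivp(closed_loop, (0.0, 200.0), np.concatenate((x0, [0.0])), rtol=1e-9, atol=1e-12)
print("x0' Pi x0   :", x0 @ Pi @ x0)
print("simulated J :", sol.y[4, -1])             # should agree once the state has decayed

The agreement between the two printed numbers is the finite-dimensional counterpart of (6.246).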
Exercises 6.7

1. Verify that the operator P(s) defined by (6.188) is linear.

2. Consider the optimal control problem

∂²u/∂t² = c² ∇²u,
u|_Γ = Σ_{i=1}^m b_i(x) φ_i(t),
u(x, 0) = u₀(x),  u_t(x, 0) = u₁(x)

with the quadratic performance functional

J(φ₁, ···, φ_m) = α ∫_0^T ∫_Ω |u|² dV dt + β Σ_{i=1}^m ∫_0^T [ |φ_i|² + |φ_i′|² ] dt + γ ∫_Ω |u(T)|² dV,

where α, β, γ ≥ 0 are weight constants and b_i (i = 1, 2, ···, m) are given functions. Derive the optimality system for the optimal control.
3. Prove Theorem 6.15.

4. Prove Lemma 6.16.

5. Consider the control problem

∂²u/∂t² = ∂²u/∂x²,
∂u/∂x(0, t) = φ₁(t),  ∂u/∂x(π, t) = φ₂(t),
u(x, 0) = u₀(x),  ∂u/∂t(x, 0) = u₁(x)

with the performance functional

J(φ₁, φ₂) = ∫_0^∞ ∫_0^π u² dx dt + ∫_0^∞ ( φ₁(t)² + φ₂(t)² ) dt.
Derive the optimality system for the optimal control.
6.8 References and Notes

The material presented in this chapter is just a simplification of the material from the advanced control books [6, 24, 62, 64, 67, 69, 99] and from the references [14, 15, 16, 17, 18, 46, 47, 49, 50, 53, 66, 80]. Section 6.7 is from Example 4.3 of Chapter 3, Part 4 of [6]. The generalized Young's inequality is from [3] and Jensen's inequality from [95].

There have been numerous references on the feedback control of the wave equations. We mention some of them for further study: Lax and Phillips [66], Morawetz [85], Strauss [100], Russell [96], Chen [14, 15, 16, 17, 18], Avalos and Lasiecka [4], Bardos, Lebeau, and Rauch [5], Carpio [11], Castro and Zuazua [12], Cox and Zuazua [22, 23], Komornik and Zuazua [46], Lagnese [49], Lasiecka and Triggiani [52, 56, 59, 60, 61], Lions [70, 72], Liu and Williams [79, 74], Freitas and Zuazua [36], Lagnese [50], Martinez [83, 84], Lasiecka and Triggiani [51], Liu and Zuazua [80], Nakao [88, 89], Rauch, Zhang, and Zuazua [94], Wang and Chen [103], Zhang and Zuazua [108], and Zuazua [109].
References

1. Adams, R.: Sobolev Spaces. Academic Press, New York (1975)
2. Amann, H.: Feedback stabilization of linear and semilinear parabolic systems. In: Clement, P., Invernizzi, S., Mitidieri, E. and Vrabie, I.I. (eds.) Semigroup Theory and Applications. Lecture Notes in Pure and Applied Mathematics, vol. 116, pp. 21-57. Marcel Dekker, New York (1989)
3. Arnold, V. I.: Mathematical Methods of Classical Mechanics. Springer-Verlag, New York (1989)
4. Avalos, G., Lasiecka, I.: Optimal blowup rates for the minimal energy null control of the strongly damped abstract wave equation. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 2, no. 3, 601-616 (2003) 5. Bardos, C., Lebeau, G., Rauch, J.: Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary. SIAM J. Control Optim. 30, 1024-1065 (1992) 6. Bensoussan, A., Da Prato, G., Delfour, M. C., Mitter, S. K.: Representation and Control of Infinite Dimensional Systems. Birkhauser, Boston (2006) 7. Boskovic, D. M. , Krstic M. and Liu, W.: Boundary control of an unstable heat equation via measurement of domain-averaged temperature. IEEE Trans. Automatic Control 46, 20222028 (2001) 8. Boskovic, D., Balogh, A., Krstic, M.: Backstepping in infinite dimension for a class of parabolic distributed parameter systems. Mathematics of Control, Signals, and Systems 16, 44-75 (2003) 9. Burns, J. A., Rubio, D., King, B. B.: Regularity of feedback operators for boundary control of thermal processes. Proc. First International Conf. on Nonlinear Problems in Aviation and Aerospace, Daytona Beach, Florida (1996) 10. Byrnes, C. I., Priscoli, F. D., Isidori, A.: Output Regulation of Uncertain Nonlinear Systems. Birkh¨auser, Boston (1997) 11. Carpio, A.: Sharp estimates of the energy decay for solutions of second order dissipative evolution equations. Potential Analysis 1, 265-289 (1992) 12. Castro, C., Zuazua, E.: Low frequency asymptotic analysis of a string with rapidly oscillating density. SIAM J. Appl. Math. 60 1205-1233 (2000) 13. Castro, C., Zuazua, E.: High frequency asymptotic analysis of a string with rapidly oscillating density. Eur. J. Appl. Math. 11, 595-622 (2000) 14. Chen, G.: Energy decay estimates and exact boundary value controllability for the wave equation in a bounded domain. J. Math. Pures Appl. 58, 249-273 (1979) 15. Chen, G.: Control and stabilization for the wave equation in a bounded domain. SIAM J. Control Optim. 17, 66-81 (1979) 16. Chen, G.: A note on the boundary stabilization of the wave equation. SIAM J. Control Optim. 19, 106-113 (1981) 17. Chen, G.: Control and stabilization for the wave equation in a bounded domain, part II. SIAM J. Control Optim. 19, 114-122 (1981) 18. Chen, G.: Control and stabilization for the wave equation, part III: Domain with moving boundary. SIAM J. Control Optim. 19, 123-138 (1981) 19. Choi Y., Chung, W. K.: PID Trajectory Tracking Control for Mechanical Systems. Springer, Berlin (2004) 20. Christofides, P.D.: Nonlinear and Robust Control of PDE Systems, Methods and Applications to Transport-Reaction Processes. Birkh¨auser, Boston (2001) 21. Coppel, W. A.: Stability and Asymptotic Behavior of Differential Equations. D. C. Heath and Co., Boston (1966) 22. Cox S., Zuazua, E.: The rate at which energy decays in a damped string. Communications in Partial Differential Equations, 19, 213-243 (1994) 23. Cox S., Zuazua, E.: The rate at which energy decays in the string damped at one end. Indiana Univ. Math. J. 44, 545-573 (1995) 24. Curtain, R. F., Zwart, H.: An Introduction to Infinite-dimensional Linear Systems Theory. Springer-Verlag, New York (1995) 25. Datko, R., Lagnese, J., Polis, M.P.: An example on the effect of time delays in boundary feedback stabilization of wave equations. SIAM J. Control Optim. 24, 152-156 (1986) 26. Datko, R.: Not all feedback stabilized hyperbolic systems are robust with respect to small time delays in their feedbacks. SIAM J. Control Optim. 26, 697-713 (1988) 27. 
Datko, R.: The destabilizing effect of delays on certain vibrating systems. In: Advances in computing and control (Baton Rouge, LA, 1988), Lecture Notes in Control and Inform. Sci. 130, pp. 324-330. Springer, Berlin, New York (1989) 28. Datko, R., You, Y.C.: Some second-order vibrating systems cannot tolerate small time delays in their damping. J. Optim. Theory Appl. 70, 521-537 (1991)
29. Datko, R.: Two examples of ill-posedness with respect to small time delays in stabilized elastic systems. IEEE Trans. Automat. Control 38, 163-166 (1993) 30. Dautray, R., Lions, J.L.: Mathematical Analysis and Numerical Methods for Science and Technology, Vol.2, Functional and Variational Methods. Springer-Verlag, Berlin (1990) 31. Dautray, R., Lions, J.L.: Mathematical Analysis and Numerical Methods for Science and Technology, Vol.3, Spectral Theory and Applications. Springer-Verlag, Berlin (1990) 32. Dautray, R., Lions, J.L.: Mathematical Analysis and Numerical Methods for Science and Technology, Vol.5, Evolution Problems I. Springer-Verlag, Berlin (1992) 33. Dehman, B., Lebeau, G., Zuazua, E.: Stabilization and control for the subcritical semilinear wave equation. Annales Ecole Normale Superieure de Paris 36, 525-551 (2003) 34. DeVito, C. L.: Functional Analysis and Linear Operator Theory. Addison-Wesley Pub. Co., Redwood City, California (1990) 35. Evans, L.S.: Partial Differential Equations. American Mathematical Society, Providence, RI (1998) 36. Freitas, P., Zuazua, E.: Stability results for the wave equation with indefinite damping. J. Diff. Equations. 132, (1996), 338-352 (1996) 37. Fox, R. W., McDonald, A. T., Pritchard, P. J.: Introduction to Fluid Mechanics, John Wiley & Sons, Inc, Hoboken, N.J. (2004) 38. Franklin, G. F., Powell, D. J., Emami-Naeini, A.: Feedback control of dynamic systems, 4th Edition. Prentice Hall, Upper Saddle River, New Jersey (2002) 39. Gilbarg, D., Trudinger, N. S.: Elliptic Partial Differential Equations of Second Order. Springer-Verlag, Berlin (1983) 40. Grisvard, P.: Elliptic Problems in Nonsmooth Domains, Pitman, London (1985) 41. Haberman, R: Applied Partial Differential Equations with Fourier Seriers and Boundary Value Problems, 4th Edition. Prentice Hall, Upper Saddle River, New Jersey (2003) 42. Henry, D.: Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Mathematics vol. 840, Springer-Verlag, Berlin (1981) 43. Huang, J.: Nonlinear Output Regulation, Theory and Applications. Society for Industrial and Applied Mathematics, Philadelphia (2004) 44. Kato, T.: Perturbation Theory of Linear Operators. Springer-Verlag, New York (1966) 45. Khalil, H. K: Nonlinear Systems. Prentice Hall, New Jersey (2002) 46. Komornik, V., Zuazua, E.: A direct method for the boundary stabilization of the wave equation. J. Math. Pures Appl. 69, 33-54 (1990) 47. Komornik, V.: Exact Controllability and Stabilization: The Multiplier Method. John Wiley & Sons, Masson, Paris (1994) 48. Krstic, M., Smyshlyaev, A., Boundary Control of PDEs: A Course on Backstepping Designs. SIAM, Philadelphia (2008). 49. Lagnese, J.: Decay of solutions of wave equations in a bounded region with boundary dissipation. J. Differential Equations 50, 163-182 (1983) 50. Lagnese, J.: Control of wave process with distributed control supported on a subregion. SIAM J. Control Optim. 21, 68-85 (1983) 51. Lasiecka, I.: Stabilization of wave and plate-like equations with nonlinear dissipation on the boundary. J. Differential Equations 79, 340-381 (1989) 52. Lasiecka, I., Lions, J.L., Triggiani, R.: Nonhomogeneous boundary value problems for second order hyperbolic operators. J. Math. Pures Appl. 65, 149-192 (1986) 53. Lasiecka, I., Tataru, D.: Uniform boundary stabilization of semilinear wave equations with nonlinear boundary damping. Differential & Integral Equations 6, 507-533 (1993) 54. 
Lasiecka, I., Triggiani, R.: Stabilization of Neumann boundary feedback of parabolic equations: the case of trace in the feedback loop. Appl. Math. Optim. 10, 307-350 (1983) 55. Lasiecka, I., Triggiani, R.: Stabilization and structural assignment of Dirichlet boundary feedback parabolic equations. SIAM J. Control Optim. 21, 766-803 (1983) 56. Lasiecka, I., Triggiani, R.: Riccati equations for hyperbolic partial differential equations with L2 (0, T ; L2 (Γ )) -Dirichlet boundary terms. SIAM J. Control Optim. 24, 884-925 (1986)
57. Lasiecka, I., Triggiani, R.: The regulator problem for parabolic equations with Dirichlet boundary control. I. Riccati’s feedback synthesis and regularity of optimal solution. Appl. Math. Optim. 16, 147-168 (1987) 58. Lasiecka, I., Triggiani, R.: The regulator problem for parabolic equations with Dirichlet boundary control. II. Galerkin approximation. Appl. Math. Optim. 16, 187-216 (1987) 59. Lasiecka, I., Triggiani, R.: Uniform exponential energy decay of wave equations in a bounded region with L1 (0, ∞; L2 (Γ ))-feedback control in the Dirichlet boundary conditions. J. Differential Equations. 66, 340-390 (1987) 60. Lasiecka, I., Triggiani, R.: Sharp regularity theory for second order hyperbolic equations of Neumann type. Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. Rend. Lincei (9) 83, 109-113 (1989) 61. Lasiecka, I., Triggiani, R.: Uniform stabilization of the wave equation with Dirichletfeedback control without geometrical conditions. In: Stabilization of Flexible Structures (Montpellier, 1989), Lecture Notes in Control and Information Sciences, 147, pp. 62-108. Springer-Verlag, Berlin (1990) 62. Lasiecka, I., Triggiani, R.: Differential and Algebraic Riccati Equations with Application to Boundary/Point Control problems: Continuous Theory and Approximation Theory. Lecture Notes in Control and Information Sciences, 164, springer-Verlag, Berlin (1991) 63. Lasiecka, I., Triggiani, R.: Uniform stabilization of the wave equation with Dirichlet or Neumann feedback control without geometrical conditions. Appl. Math. Optim. 25, 189-224 (1992) 64. Lasiecka, I., Triggiani, R.: Control Theory for Partial Differential Equations, Continuous and Approximation Theories I: Abstract Parabolic Systems (Encyclopedia of Mathematics and its Applications). Cambridge University Press (2000) 65. Lasiecka, I., Triggiani, R.: Control Theory for Partial Differential Equations, Continuous and Approximation Theories II: Abstract Hyperbolic-like Systems over a Finite Time Horizon (Encyclopedia of Mathematics and its Applications). Cambridge University Press (2000) 66. Lax, P., Phillips, R. S.: Scattering theory for dissipative hyperbolic systems. J. Funct. Anal. 14, 172-235 (1973) 67. Li, X., Yong, J.: Optimal Control Theory for Infinite Dimensional Systems, Birkh¨auser, Boston (1995) 68. Li, X. J., Liu, K. S.: The effect of small time delays in the feedbacks on boundary stabilization. Science in China 36, 1435-1443 (1993) 69. Lions, J. L.: Optimal Control of Systems Governed by Partial Differential Equations. Springer-Verlag, Berlin (1971) 70. Lions, J. L.: Contrˆolabilit´e Exacte Perturbations et Stabilisation de Syst`emes Distribu´es, Tome 1, Contrˆolabilit´e Exacte. Masson, Paris Milan Barcelone Mexico (1988) 71. Lions, J. L.: Contrˆolabilit´e Exacte Perturbations et Stabilisation de Syst`emes Distribu´es, Tome 2, Perturbations. Masson, Paris Milan Barcelone Mexico (1988) 72. Lions, J. L.: Exact controllability, stabilization and perturbations for distributed systems. SIAM Rev. 30, 1-68 (1988) 73. Lions, J.L., Magenes, E.: Non-homogeneous Boundary Value Problems and Applications, Vol.I. Springer-Verlag, Berlin (1972) 74. Liu, W.: Stabilization and controllability for the transmission wave equation. IEEE Transcation on Automatic Control 46, 1900-1907 (2001) 75. Liu, W.: Boundary feedback stabilization of an unstable heat equation. SIAM J. Control Optim. 42, 1033-1043 (2003) 76. 
Liu, W., Haller, G.: Strange eigenmodes and decay of variance in the mixing of diffusive tracers, Physica D 188, 1-39 (2004) 77. Liu, W., Haller, G.: Inertial manifold and completeness of eigenmodes for unsteady magnetic dynamos. Physica D 194, 297-319 (2004) 78. Liu, W., Krstic, M.: Boundary feedback stabilization of homogeneous equilibriums in unstable fluid mixtures. International Journal of Control 80, 1-8 (2007) 79. Liu, W., Williams, G. H.: Exponential stability of the problem of transmission of the wave equation. Bull. Australian Math. Soc. 57, 305-327 (1998)
80. Liu, W., Zuazua, E.: Decay rates for dissipative wave equations. Ricerche di Matematica 48, 61-75 (1999) 81. Liu, W., Zuazua, E.: Uniform stabilization of the higher dimensional system of thermoelasticity with a nonlinear boundary feedback. Quarterly Appl. Math. 59, 269-314 (2001) 82. Luo, Z.H., Guo, B.Z., Morg¨ul, O.: Stability and Stabilization of Infinite Dimensional Systems with Applications. Springer, London (1999) 83. Martinez, P.: A new method to obtain decay rate estimates for dissipative systems with localized damping, Revista Matem´atica Complutense 12, 251-283 (1999) 84. Martinez, P.: A new method to obtain decay rate estimates for dissipative systems. ESAIM: COCV 4, 419-444 (1999) 85. Morawetz, C. S.: Decay for solutions of the exterior problem for the wave equation. Comm. Pure Appl. Math. 28, 229-264 (1975) 86. Morris, K. A.: Introduction to Feedback Control. 1st edition, Academic Press (2001) 87. Muzzio, F. J., and Liu, M.: Chemical reactions in chaotic flows. Chem. Eng. J. 64 117-127 (1996) 88. Nakao, M.: Asymptotic stability of the bounded or almost periodic solution of the wave equation with a nonlinear dissipative term. J. Math. Anal. Appl. 58, 336-343 (1977) 89. Nakao, M.: Energy decay for the wave equation with a nonlinear weak dissipation. Differential Integral Equations, 8, 681-688 (1995) 90. Ogata, K.: Modern Control Engineering, Fourth Edtion. Prentice Hall, Upper Saddle River, New Jersey (2002) 91. Pazy, A.: Semigroup of Linear Operators and Applications to Partial Differential Equations. Springer-Verlag, New York (1983) 92. Protter, M. H., Morrey, C. B.: A First Course in Real Analysis. Springer-Verlag, New York (1977) 93. Rauch, J., Taylor, M.: Exponential decay of solutions to hyperbolic equations in bounded domains. Indiana J. Math. 24 79-83 (1974) 94. Rauch, J., Zhang, X., Zuazua, E.: Polynomial decay for a hyperbolic-parabolic coupled system. J. Math. Pures Appl. 84, 407-470 (2005) 95. Rudin, W.: Functional analysis. McGraw-hill, New York (1973) 96. Russell, D. L.: Exact boundary value controllability theorems for wave and heat processes in star-complemented regions. In: Roxin, Liu, and Sternberg (eds.) Differential Games and Control Theory, pp.291-319. Marcel Dekker Inc., New York (1974) 97. Sell, G. R., You, Y.: Dynamics of Evolutionary Equations, Springer, New York (2002) 98. Smyshlyaev, A., Krstic, M.: Explicit state and output feedback boundary controllers for partial differential equations. JOURNAL OF AUTOMATIC CONTROL, UNIVERSITY OF BELGRADE 13, 1-9 (2003) 99. Staffans, O.: Well-Posed Linear Systems. Cambridge University Press (2005) 100. Strauss, W. A.: Dispersal of waves vanishing on the boundary of an exterior domain. Comm. Pure Appl. Math., 28, 265-278 (1975) 101. Temam, R.: Infinite Dimensonal Dynamical Systems in Mechanics and Physics, 2nd edition, Springer, New York (1997) 102. Triggiani, R.: On Nambu’s boundary stabilizability problem for diffusion processes. J. Differential Equations 33, 189-200 (1979) 103. Wang, H. K., Chen, G.: Asymptotic behavior of solutions of the one-dimensional wave equation with a nonlinear boundary stabilizer. SIAM J. Control Optim. 27, 758-775 (1989) 104. Wonham, W. M.: Linear Multivariable Control: a Geometric Approach, Second Edition, Springer-Verlag, New York (1976) 105. Yosida, K.: Functional Analysis. Springer-Verlag, New York, 1995. 106. Zeidler, E.: Nonlinear Functional Analysis and its Applications III, Variational Methods and Optimization, Springer-Verlag, New York (1985) 107. 
Zhang, X., Zuazua, E.: Control, observation and polynomial decay for a coupled heat-wave system. C. R. Acad. Sci. Paris, Serie I, 336, 823-828 (2003) 108. Zhang, X., Zuazua, E.: Polynomial decay and control of a 1-d hyperbolic-parabolic coupled system. J. Differential Equations 204, 380-438 (2004)
109. Zuazua, E.: Uniform Stabilization of the wave equation by nonlinear boundary feedback. SIAM J. Control Optim. 28, 466-477 (1990) 110. Zuazua, E.: Exponential decay for the semilinear wave equation with locally distributed damping. Commun. in Partial Differential Equations 15, 205-235 (1990) 111. Zuazua, E.: Controllability and observability of partial differential equations: some results and open problems. In: Dafermos, C. M., Feireisl, E. (eds.) Handbook of Differential Equations: Evolutionary Equations, vol. 3, pp.527-621. Elsevier Science, Amsterdam, The Netherlands (2006)