Min Wu Yong He Jin-Hua She
Stability Analysis and Robust Control of Time-Delay Systems With 12 figures
Authors

Min Wu
School of Information Science & Engineering, Central South University, Changsha, Hunan, 410083, China
Email: [email protected]

Yong He
School of Information Science & Engineering, Central South University, Changsha, Hunan, 410083, China
Email: [email protected]

Jin-Hua She
School of Computer Science, Tokyo University of Technology
Email: [email protected]
ISBN 978-7-03-026005-5  Science Press Beijing
ISBN 978-3-642-03036-9  e-ISBN 978-3-642-03037-6  Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2009942249
© Science Press Beijing and Springer-Verlag Berlin Heidelberg 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: Frido Steinen-Broo, EStudio Calamar, Spain
Printed on acid-free paper
Springer is a part of Springer Science+Business Media (www.springer.com)
Preface
A system is said to have a delay when the rate of variation in the system state depends on past states. Such a system is called a time-delay system. Delays appear frequently in real-world engineering systems. They are often a source of instability and poor performance, and greatly increase the difficulty of stability analysis and control design. So, many researchers in the field of control theory and engineering study the robust control of time-delay systems.

The study of such systems has been very active for the last 20 years; and new developments, such as fixed model transformations based on the Newton-Leibnitz formula and parameterized model transformations, are continually appearing. Although these methods are a great improvement over previous ones, they still have their limitations. We recently devised a method called the free-weighting-matrix (FWM) approach for the stability analysis and control synthesis of various classes of time-delay systems; and we obtained a series of less conservative delay-dependent stability criteria and controller design methods.

This book is based primarily on our recent research. It focuses on the stability analysis and robust control of various time-delay systems, and includes such topics as stability analysis, stabilization, control design, and filtering. The main method employed is the FWM approach. The effectiveness of this method and its advantages over other existing ones are proven theoretically and illustrated by means of various examples. The book will give readers an overview of the latest advances in this active research area and equip them with a state-of-the-art method for studying time-delay systems.

This book is a useful reference for control theorists and mathematicians working with time-delay systems, for engineers designing controllers for plants or systems with delays, and for graduate students interested in robust control theory and/or its application to time-delay systems.

We are grateful for the support of the National Natural Science Foundation of China (60574014), the National Science Fund for Distinguished Young
Scholars (60425310), the Program for New Century Excellent Talents in University (NCET-06-0679), the Specialized Research Fund for the Doctoral Program of Higher Education of China (20050533015 and 200805330004), and the Hunan Provincial Natural Science Foundation of China (08JJ1010). We are also grateful for the support of scholars both at home and abroad. We would like to thank Prof. Zixing Cai of Central South University, Prof. Qingguo Wang of the National University of Singapore, Profs. Guoping Liu and Peng Shi of the University of Glamorgan, Prof. Tongwen Chen of the University of Alberta, Prof. James Lam of the University of Hong Kong, Prof. Lihua Xie of Nanyang Technological University, Prof. Keqin Gu of Southern Illinois University Edwardsville, Prof. Zidong Wang of Brunel University, Prof. Li Yu of Zhejiang University of Technology, Prof. Xinping Guan of Yanshan University, Prof. Shengyuan Xu of Nanjing University of Science & Technology, Prof. Qinglong Han of Central Queensland University, Prof. Huanshui Zhang of Shandong University, Prof. Huijun Gao of the Harbin Institute of Technology, Prof. Chong Lin of Qingdao University, and Prof. Guilin Wen of Hunan University for their valuable help. Finally, we would like to express our appreciation for the great efforts of Drs. Xianming Zhang, Zhiyong Feng, Fang Liu, Yan Zhang and Chuanke Zhang, and graduate student Lingyun Fu.
Min Wu
Yong He
Jin-Hua She
July 2009
Contents

1. Introduction .... 1
   1.1 Review of Stability Analysis for Time-Delay Systems .... 1
   1.2 Introduction to FWMs .... 11
   1.3 Outline of This Book .... 12
   References .... 15

2. Preliminaries .... 19
   2.1 Lyapunov Stability and Basic Theorems .... 19
      2.1.1 Types of Stability .... 19
      2.1.2 Lyapunov Stability Theorems .... 22
   2.2 Stability of Time-Delay Systems .... 25
      2.2.1 Stability-Related Topics .... 25
      2.2.2 Lyapunov-Krasovskii Stability Theorem .... 28
      2.2.3 Razumikhin Stability Theorem .... 29
   2.3 H∞ Norm .... 30
      2.3.1 Norm .... 30
      2.3.2 H∞ Norm .... 31
   2.4 H∞ Control .... 32
   2.5 LMI Method .... 34
      2.5.1 Common Specifications of LMIs .... 34
      2.5.2 Standard LMI Problems .... 35
   2.6 Lemmas .... 36
   2.7 Conclusion .... 38
   References .... 38

3. Stability of Systems with Time-Varying Delay .... 41
   3.1 Problem Formulation .... 43
   3.2 Stability of Nominal System .... 44
      3.2.1 Replacing the Term ẋ(t) .... 44
      3.2.2 Retaining the Term ẋ(t) .... 47
      3.2.3 Equivalence Analysis .... 50
   3.3 Stability of Systems with Time-Varying Structured Uncertainties .... 51
      3.3.1 Robust Stability Analysis .... 51
      3.3.2 Numerical Example .... 53
   3.4 Stability of Systems with Polytopic-Type Uncertainties .... 54
      3.4.1 Robust Stability Analysis .... 54
      3.4.2 Numerical Example .... 57
   3.5 IFWM Approach .... 58
      3.5.1 Retaining Useful Terms .... 59
      3.5.2 Further Investigation .... 64
      3.5.3 Numerical Examples .... 66
   3.6 Conclusion .... 68
   References .... 68

4. Stability of Systems with Multiple Delays .... 73
   4.1 Problem Formulation .... 74
   4.2 Two Delays .... 75
      4.2.1 Nominal Systems .... 75
      4.2.2 Equivalence Analysis .... 81
      4.2.3 Systems with Time-Varying Structured Uncertainties .... 83
      4.2.4 Numerical Examples .... 85
   4.3 Multiple Delays .... 89
   4.4 Conclusion .... 91
   References .... 92

5. Stability of Neutral Systems .... 93
   5.1 Neutral Systems with Time-Varying Discrete Delay .... 94
      5.1.1 Problem Formulation .... 94
      5.1.2 Nominal Systems .... 95
      5.1.3 Systems with Time-Varying Structured Uncertainties .... 100
      5.1.4 Numerical Example .... 101
   5.2 Neutral Systems with Identical Discrete and Neutral Delays .... 101
      5.2.1 FWM Approach .... 102
      5.2.2 FWM Approach in Combination with Parameterized Model Transformation .... 105
      5.2.3 FWM Approach in Combination with Augmented Lyapunov-Krasovskii Functional .... 109
      5.2.4 Numerical Examples .... 114
   5.3 Neutral Systems with Different Discrete and Neutral Delays .... 116
      5.3.1 Nominal Systems .... 116
      5.3.2 Equivalence Analysis .... 120
      5.3.3 Systems with Time-Varying Structured Uncertainties .... 121
      5.3.4 Numerical Example .... 122
   5.4 Conclusion .... 123
   References .... 123

6. Stabilization of Systems with Time-Varying Delay .... 127
   6.1 Problem Formulation .... 128
   6.2 Iterative Nonlinear Minimization Algorithm .... 129
   6.3 Parameter-Tuning Method .... 136
   6.4 Completely LMI-Based Design Method .... 138
   6.5 Numerical Example .... 141
   6.6 Conclusion .... 145
   References .... 145

7. Stability and Stabilization of Discrete-Time Systems with Time-Varying Delay .... 147
   7.1 Problem Formulation .... 148
   7.2 Stability Analysis .... 149
   7.3 Controller Design .... 153
      7.3.1 SOF Controller .... 154
      7.3.2 DOF Controller .... 156
   7.4 Numerical Examples .... 158
   7.5 Conclusion .... 159
   References .... 159

8. H∞ Control Design for Systems with Time-Varying Delay .... 163
   8.1 Problem Formulation .... 163
   8.2 BRL .... 165
   8.3 Design of State-Feedback H∞ Controller .... 168
   8.4 Numerical Examples .... 171
   8.5 Conclusion .... 173
   References .... 174

9. H∞ Filter Design for Systems with Time-Varying Delay .... 177
   9.1 H∞ Filter Design for Continuous-Time Systems .... 178
      9.1.1 Problem Formulation .... 178
      9.1.2 H∞ Performance Analysis .... 180
      9.1.3 Design of H∞ Filter .... 184
      9.1.4 Numerical Examples .... 187
   9.2 H∞ Filter Design for Discrete-Time Systems .... 188
      9.2.1 Problem Formulation .... 188
      9.2.2 H∞ Performance Analysis .... 190
      9.2.3 Design of H∞ Filter .... 197
      9.2.4 Numerical Example .... 199
   9.3 Conclusion .... 200
   References .... 200

10. Stability of Neural Networks with Time-Varying Delay .... 203
   10.1 Stability of Neural Networks with Multiple Delays .... 205
      10.1.1 Problem Formulation .... 205
      10.1.2 Stability Criteria .... 206
      10.1.3 Numerical Examples .... 213
   10.2 Stability of Neural Networks with Interval Delay .... 215
      10.2.1 Problem Formulation .... 215
      10.2.2 Stability Criteria .... 216
      10.2.3 Numerical Examples .... 221
   10.3 Exponential Stability of Continuous-Time Neural Networks .... 223
      10.3.1 Problem Formulation .... 223
      10.3.2 Stability Criteria Derived by FWM Approach .... 224
      10.3.3 Stability Criteria Derived by IFWM Approach .... 230
      10.3.4 Numerical Examples .... 235
   10.4 Exponential Stability of Discrete-Time Recurrent Neural Networks .... 237
      10.4.1 Problem Formulation .... 237
      10.4.2 Stability Criterion Derived by IFWM Approach .... 238
      10.4.3 Numerical Examples .... 245
   10.5 Conclusion .... 246
   References .... 247

11. Stability of T-S Fuzzy Systems with Time-Varying Delay .... 251
   11.1 Problem Formulation .... 252
   11.2 Stability Analysis .... 253
   11.3 Numerical Examples .... 258
   11.4 Conclusion .... 260
   References .... 260

12. Stability and Stabilization of NCSs .... 263
   12.1 Modeling of NCSs with Network-Induced Delay .... 264
   12.2 Stability Analysis .... 266
   12.3 Controller Design .... 269
   12.4 Numerical Examples .... 271
   12.5 Conclusion .... 272
   References .... 273

13. Stability of Stochastic Systems with Time-Varying Delay .... 277
   13.1 Robust Stability of Uncertain Stochastic Systems .... 278
      13.1.1 Problem Formulation .... 278
      13.1.2 Robust Stability Analysis .... 279
      13.1.3 Numerical Example .... 283
   13.2 Exponential Stability of Stochastic Markovian Jump Systems with Nonlinearities .... 284
      13.2.1 Problem Formulation .... 284
      13.2.2 Exponential-Stability Analysis .... 286
      13.2.3 Numerical Example .... 294
   13.3 Conclusion .... 295
   References .... 296

14. Stability of Nonlinear Time-Delay Systems .... 299
   14.1 Absolute Stability of Nonlinear Systems with Delay and Multiple Nonlinearities .... 301
      14.1.1 Problem Formulation .... 301
      14.1.2 Nominal Systems .... 303
      14.1.3 Systems with Time-Varying Structured Uncertainties .... 308
      14.1.4 Numerical Examples .... 310
   14.2 Absolute Stability of Nonlinear Systems with Time-Varying Delay .... 311
      14.2.1 Problem Formulation .... 312
      14.2.2 Nominal Systems .... 313
      14.2.3 Systems with Time-Varying Structured Uncertainties .... 316
      14.2.4 Numerical Example .... 317
   14.3 Stability of Systems with Interval Delay and Nonlinear Perturbations .... 318
      14.3.1 Problem Formulation .... 318
      14.3.2 Stability Results .... 319
      14.3.3 Further Results Obtained with Augmented Lyapunov-Krasovskii Functional .... 325
      14.3.4 Numerical Examples .... 330
   14.4 Conclusion .... 330
   References .... 331

Index .... 335
Abbreviations
inf      infimum
lim      limit
max      maximum
min      minimum
sup      supremum
BRL      bounded real lemma
CCL      cone complementarity linearization
DOF      dynamic output feedback
FWM      free weighting matrix
ICCL     improved cone complementarity linearization
IFWM     improved free weighting matrix
LFT      linear fractional transformation
LMI      linear matrix inequality
MADB     maximum allowable delay bound
MATI     maximum allowable transfer interval
NCS      networked control system
NFDE     neutral functional differential equation
NLMI     nonlinear matrix inequality
RFDE     retarded functional differential equation
SOF      static output feedback
Symbols
R, R^n, R^{n×m}          set of real numbers, set of n-dimensional real vectors, and set of n × m real matrices
C, C^n, C^{n×m}          set of complex numbers, set of n-dimensional complex vectors, and set of n × m complex matrices
R̄+                       set of non-negative real numbers
Z̄+                       set of non-negative integers
Re(s)                    real part of s ∈ C
Aᵀ                       transpose of matrix A
A⁻¹                      inverse of matrix A
A⁻ᵀ                      shorthand for (A⁻¹)ᵀ
In                       n × n identity matrix (the subscript is omitted if no confusion will occur)
diag{A1, · · · , An}     diagonal matrix with Ai as its ith diagonal element
[X, Y; ∗, Z]             symmetric matrix [X, Y; Yᵀ, Z]; rows are separated by semicolons and ∗ denotes the block determined by symmetry
A > 0 (< 0)              symmetric positive (negative) definite matrix
A ≥ 0 (≤ 0)              symmetric positive (negative) semi-definite matrix
det(A)                   determinant of matrix A
Tr{A}                    trace of matrix A
λ(A)                     eigenvalue of matrix A
λmax(A)                  largest eigenvalue of matrix A
λmin(A)                  smallest eigenvalue of matrix A
σmax(A)                  largest singular value of matrix A
L2[0, +∞)                set of square integrable functions on [0, +∞)
l2[0, +∞)                set of square summable functions on [0, +∞)
C([a, b], R^n)           family of continuous functions φ from [a, b] to R^n
C^b_{F0}([a, b], R^n)    family of all bounded F0-measurable C([a, b], R^n)-valued random variables
L^2_{F0}([a, b], R^n)    family of all bounded F0-measurable C([a, b], R^n)-valued random variables ξ = {ξ(t) : a ≤ t ≤ b} with sup_{a≤t≤b} E‖ξ(t)‖² < ∞
| · |                    absolute value (or modulus)
‖ · ‖                    Euclidean norm of a vector or spectral norm of a matrix
‖ · ‖∞                   induced l∞-norm
‖φ‖c                     continuous norm: ‖φ‖c = sup_{a≤t≤b} ‖φ(t)‖ for φ ∈ C([a, b], R^n)
L                        weak infinitesimal operator of a stochastic process
Dxt                      operator that maps C([−h, 0], R^n) → R^n; that is, Dxt = x(t) − Cx(t − h)
E                        mathematical expectation
[A, B; C, D]             shorthand for the state-space realization C(sI − A)⁻¹B + D for a continuous-time system or C(zI − A)⁻¹B + D for a discrete-time system
∀                        for all
∈                        belongs to
∃                        there exists
⊆                        is a subset of
∪                        union
→                        tends toward or is mapped into (depending on the context)
⇒                        implies
:=                       is defined as
□                        end of proof
1. Introduction
In many physical and biological phenomena, the rate of variation in the system state depends on past states. This characteristic is called a delay or a time delay, and a system with a time delay is called a time-delay system. Time-delay phenomena were first discovered in biological systems and were later found in many engineering systems, such as mechanical transmissions, fluid transmissions, metallurgical processes, and networked control systems. They are often a source of instability and poor control performance.

Time-delay systems have attracted the attention of many researchers [1–3] because of their importance and widespread occurrence. Basic theories describing such systems were established in the 1950s and 1960s; they covered topics such as the existence and uniqueness of solutions to dynamic equations, stability theory for trivial solutions, etc. That work laid the foundation for the later analysis and design of time-delay systems. The robust control of time-delay systems has been a very active field for the last 20 years and has spawned many branches, for example, stability analysis, stabilization design, H∞ control, passive and dissipative control, reliable control, guaranteed-cost control, H∞ filtering, Kalman filtering, and stochastic control. Regardless of the branch, stability is the foundation. So, important developments in the field of time-delay systems that explore new directions have generally been launched from a consideration of stability as the starting point.

This chapter reviews methods of studying the stability of time-delay systems and points out their limitations, and then goes on to describe a new method called the free-weighting-matrix (FWM) approach.
1.1 Review of Stability Analysis for Time-Delay Systems

Stability is a very basic issue in control theory and has been extensively discussed in many monographs [4–6]. Research on the stability of time-delay
systems began in the 1950s, first using frequency-domain methods and later also using time-domain methods. Frequency-domain methods determine the stability of a system from the distribution of the roots of its characteristic equation [7] or from the solutions of a complex Lyapunov matrix function equation [8]. They are suitable only for systems with constant delays. The main time-domain methods are the Lyapunov-Krasovskii functional and Razumikhin function methods [1]. They are the most common approaches to the stability analysis of time-delay systems. Since it was very difficult to construct Lyapunov-Krasovskii functionals and Lyapunov functions until the 1990s, the stability criteria obtained were generally in the form of existence conditions; and it was impossible to derive a general solution. Then, Riccati equations, linear matrix inequalities (LMIs) [9], and Matlab toolboxes came into use; and the solutions they provided were used to construct Lyapunov-Krasovskii functionals and Lyapunov functions. These time-domain methods are now very important in the stability analysis of linear systems. This section reviews methods of examining stability and their limitations.

Consider the following linear system with a delay:

    ẋ(t) = Ax(t) + Ad x(t − h),
    x(t) = ϕ(t),  t ∈ [−h, 0],                                                          (1.1)

where x(t) ∈ R^n is the state vector; h > 0 is a delay in the state of the system, that is, it is a discrete delay; ϕ(t) is the initial condition; and A ∈ R^{n×n} and Ad ∈ R^{n×n} are the system matrices. The future evolution of this system depends not only on its present state, but also on its history. The main methods of examining its stability can be classified into two types: frequency-domain and time-domain.

Frequency-domain methods: Frequency-domain methods provide the most sophisticated approach to analyzing the stability of a system with no delay (h = 0). The necessary and sufficient condition for the stability of such a system is Re λ(A + Ad) < 0, that is, every eigenvalue of A + Ad lies in the open left half-plane. When h > 0, frequency-domain methods yield the result that system (1.1) is stable if and only if all the roots of its characteristic equation,

    f(λ) = det(λI − A − Ad e^{−hλ}) = 0,                                                (1.2)

have negative real parts. However, this equation is transcendental, which makes it difficult to solve. Moreover, if the system has uncertainties and a
time-varying delay, the solution is even more complicated. So the use of a frequency-domain method to study time-delay systems has serious limitations.

Time-domain methods: Time-domain methods are based primarily on two famous theorems: the Lyapunov-Krasovskii stability theorem and the Razumikhin theorem. They were established in the 1950s by the Russian mathematicians Krasovskii and Razumikhin, respectively. The main idea is to obtain a sufficient condition for the stability of system (1.1) by constructing an appropriate Lyapunov-Krasovskii functional or an appropriate Lyapunov function. This idea is theoretically very important; but until the 1990s, there was no good way to implement it. Then the Matlab toolboxes appeared and made it easy to construct Lyapunov-Krasovskii functionals and Lyapunov functions, thus greatly promoting the development and application of these methods. Since then, significant results have continued to appear one after another (see [10] and references therein). Among them, two classes of sufficient conditions have received a great deal of attention. One class is independent of the length of the delay, and its members are called delay-independent conditions. The other class makes use of information on the length of the delay, and its members are called delay-dependent conditions.

The Lyapunov-Krasovskii functional candidate is generally chosen to be

    V1(xt) = xᵀ(t)P x(t) + ∫_{t−h}^{t} xᵀ(s)Qx(s) ds,                                   (1.3)

where P > 0 and Q > 0 are to be determined and are called Lyapunov matrices; and xt denotes the translation operator acting on the trajectory: xt(θ) = x(t + θ), θ ∈ [−h, 0]. Calculating the derivative of V1(xt) along the solutions of system (1.1) and restricting it to less than zero yield the delay-independent stability condition of the system:

    [ PA + AᵀP + Q ,  PAd ;  ∗ ,  −Q ] < 0.                                             (1.4)

Since this inequality is linear with respect to the matrix variables P and Q, it is called an LMI. If the LMI toolbox of Matlab yields solutions to LMI (1.4) for these variables, then according to the Lyapunov-Krasovskii stability theorem, system (1.1) is asymptotically stable for all h ≥ 0; and furthermore, an appropriate Lyapunov-Krasovskii functional is obtained.
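As a concrete illustration of how such a feasibility problem is posed, the following Python sketch checks LMI (1.4) with CVXPY rather than the Matlab LMI toolbox mentioned above; the matrices A and Ad, the margin eps, and the solver choice are illustrative assumptions, not an example taken from this book.

```python
# Feasibility check of the delay-independent LMI (1.4) with CVXPY.
# A minimal sketch, assuming illustrative system matrices.
import numpy as np
import cvxpy as cp

A  = np.array([[-2.0, 0.0], [0.0, -0.9]])   # assumed A
Ad = np.array([[-1.0, 0.0], [-1.0, -1.0]])  # assumed Ad
n, eps = A.shape[0], 1e-6                   # eps enforces strict inequalities numerically

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)

# LMI (1.4): [[P A + A' P + Q, P Ad], [Ad' P, -Q]] < 0, together with P > 0, Q > 0.
lmi = cp.bmat([[P @ A + A.T @ P + Q, P @ Ad],
               [(P @ Ad).T,          -Q    ]])

constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]

problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()

# A feasible (P, Q) would certify asymptotic stability of (1.1) for every h >= 0.
print("status:", problem.status)
if problem.status == "optimal":
    print("P =\n", P.value)
    print("Q =\n", Q.value)
```

For this particular pair of matrices the LMI is infeasible: the system is not stable for arbitrarily large delays (it loses stability once h exceeds roughly 6.17), so a delay-independent test can say nothing about it, and only a delay-dependent condition, as discussed next, can certify the range of delays for which it is stable.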
Since delay-independent conditions contain no information on a delay, they are overly conservative, especially when the delay is small. This consideration has given rise to another important class of stability conditions, namely, delay-dependent conditions, which do contain information on the length of a delay. First of all, they assume that system (1.1) is stable when h = 0. Since the solutions of the system are continuous functions of h, there must exist an upper bound, h̄, on the delay such that system (1.1) is stable for all h ∈ [0, h̄]. Thus, the maximum possible upper bound on the delay is the main criterion for judging the conservativeness of a delay-dependent condition. The hot topics in control theory are delay-dependent problems in stability analysis, robust control, H∞ control, reliable control, guaranteed-cost control, saturation input control, and chaotic-system control.

Since the 1990s, the main approach to the study of delay-dependent stability has involved the addition of a quadratic double-integral term to the Lyapunov-Krasovskii functional (1.3):

    V(xt) = V1(xt) + V2(xt),                                                            (1.5)

where

    V2(xt) = ∫_{−h}^{0} ∫_{t+θ}^{t} xᵀ(s)Zx(s) ds dθ.

The derivative of V2(xt) is

    V̇2(xt) = hxᵀ(t)Zx(t) − ∫_{t−h}^{t} xᵀ(s)Zx(s) ds.                                  (1.6)

Delay-dependent conditions can be obtained from the Lyapunov-Krasovskii stability theorem. However, how to deal with the integral term on the right side of (1.6) is a problem. So far, three methods of studying delay-dependent problems have been devised: the discretized Lyapunov-Krasovskii functional method, fixed model transformations, and parameterized model transformations.

The main use of the discretized Lyapunov-Krasovskii functional method is to study the stability of linear systems and neutral systems with a constant delay. It discretizes the Lyapunov-Krasovskii functional, and the results can be written in the form of LMIs [11–15]. The advantage of doing this is that the estimate of the maximum allowable delay that guarantees the stability of the system is very close to the actual value. The drawbacks are that it is
computationally expensive and that it cannot easily handle systems with a time-varying delay. Consequently, this method has not been widely studied or used since it was first proposed by Gu in 1997 [11].

The primary way of dealing with the integral term on the right side of equation (1.6) is by using a fixed model transformation. It transforms a system with a discrete delay into a new system with a distributed delay (the integral term in (1.10)). The following inequalities play an important role in deriving the stability conditions:

Basic inequality: ∀a, b ∈ R^n and ∀R > 0,

    −2aᵀb ≤ aᵀRa + bᵀR⁻¹b.                                                              (1.7)

Park's inequality [16]: ∀a, b ∈ R^n, ∀R > 0, and ∀M ∈ R^{n×n},

    −2aᵀb ≤ [a; b]ᵀ [ R ,  RM ;  ∗ ,  (MᵀR + I)R⁻¹(RM + I) ] [a; b].                    (1.8)

Moon et al.'s inequality [17]: ∀a ∈ R^{na}, ∀b ∈ R^{nb}, ∀N ∈ R^{na×nb}, and for X ∈ R^{na×na}, Y ∈ R^{na×nb}, and Z ∈ R^{nb×nb}, if [X, Y; ∗, Z] ≥ 0, then

    −2aᵀNb ≤ [a; b]ᵀ [ X ,  Y − N ;  ∗ ,  Z ] [a; b].                                   (1.9)
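Both bounds can be spot-checked numerically. The following sketch (random data, NumPy only; the dimension and number of trials are arbitrary choices, not from the book) verifies (1.7) and (1.8) and makes the relationship between them visible.

```python
# Numerical spot-check of the bounding inequalities (1.7) and (1.8).
import numpy as np

rng = np.random.default_rng(0)
n = 3

def random_spd(k):
    """Random symmetric positive definite k x k matrix."""
    S = rng.standard_normal((k, k))
    return S @ S.T + k * np.eye(k)

for trial in range(1000):
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)
    R = random_spd(n)
    M = rng.standard_normal((n, n))
    Rinv = np.linalg.inv(R)

    lhs = -2.0 * a @ b

    # Basic inequality (1.7): -2 a'b <= a'R a + b'R^{-1} b.
    basic = a @ R @ a + b @ Rinv @ b

    # Park's inequality (1.8): quadratic form in the stacked vector [a; b].
    ab = np.concatenate([a, b])
    block = np.block([
        [R,          R @ M],
        [(R @ M).T,  (M.T @ R + np.eye(n)) @ Rinv @ (R @ M + np.eye(n))],
    ])
    park = ab @ block @ ab

    assert lhs <= basic + 1e-9
    assert lhs <= park + 1e-9

print("Both bounds held on all random trials.")
```

Setting M = 0 in (1.8) reduces the block matrix to diag(R, R⁻¹) and recovers the basic bound (1.7); since the bound holds for every M, the matrix M can be optimized jointly with the Lyapunov matrices, which is why Park's inequality is less conservative.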
The basic features of the typical model transformations discussed in [18] are described below.

Model transformation I

    ẋ(t) = (A + Ad)x(t) − Ad ∫_{t−h}^{t} [Ax(s) + Ad x(s − h)] ds.                      (1.10)

The following Lyapunov-Krasovskii functional is used to determine a delay-dependent stability condition:

    V(xt) = V1(xt) + V2(xt) + V3(xt),                                                   (1.11)

where

    V3(xt) = ∫_{−2h}^{−h} ∫_{t+θ}^{t} xᵀ(s)Z1 x(s) ds dθ.
The derivative of V(xt) along the solutions of system (1.10) is

    V̇(xt) = Ψ + η1 + η2 − ∫_{t−h}^{t} xᵀ(s)Zx(s) ds − ∫_{t−2h}^{t−h} xᵀ(s)Z1 x(s) ds,   (1.12)

where

    Ψ  = xᵀ(t)[2P(A + Ad) + Q + h(Z + Z1)]x(t) − xᵀ(t − h)Qx(t − h),
    η1 = −2 ∫_{t−h}^{t} xᵀ(t)P Ad A x(s) ds,
    η2 = −2 ∫_{t−2h}^{t−h} xᵀ(t)P Ad Ad x(s) ds.

η1 and η2 are called cross terms. Using the basic inequality (1.7) yields

    η1 ≤ h xᵀ(t)P Ad A Z⁻¹ Aᵀ Adᵀ P x(t) + ∫_{t−h}^{t} xᵀ(s)Zx(s) ds,
    η2 ≤ h xᵀ(t)P Ad Ad Z1⁻¹ Adᵀ Adᵀ P x(t) + ∫_{t−2h}^{t−h} xᵀ(s)Z1 x(s) ds.

Applying these two inequalities to (1.12) eliminates the quadratic integral terms, and a delay-dependent condition is established. This process has two key points:
(1) The purpose of a model transformation is to bring the integral term into the system equation so as to produce both cross terms and quadratic integral terms in the derivative of a Lyapunov-Krasovskii functional along the solutions of the system.
(2) The bounding of the cross terms, η1 and η2, eliminates the quadratic integral terms in the derivative of the Lyapunov-Krasovskii functional, thereby yielding a delay-dependent condition.

Model transformation II

    d/dt [ x(t) + Ad ∫_{t−h}^{t} x(s) ds ] = (A + Ad)x(t).                              (1.13)
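To see what the delay-dependent condition derived above for model transformation I looks like in computational form, note that requiring Ψ plus the two bounding terms to be negative definite is, by a Schur complement, an LMI in P, Q, Z, and Z1 for each fixed h. The sketch below assembles that LMI with CVXPY and bisects over h; the block structure is our own rewriting of the steps above, and the system matrices, bisection range, and tolerances are illustrative assumptions rather than anything prescribed by this book.

```python
# Sketch: a delay-dependent feasibility test assembled from (1.12) and the
# bounds on eta1 and eta2, checked with CVXPY for fixed h and bisected over h.
import numpy as np
import cvxpy as cp

A  = np.array([[-2.0, 0.0], [0.0, -0.9]])   # assumed example matrices
Ad = np.array([[-1.0, 0.0], [-1.0, -1.0]])
n, eps = A.shape[0], 1e-6

def feasible(h):
    P  = cp.Variable((n, n), symmetric=True)
    Q  = cp.Variable((n, n), symmetric=True)
    Z  = cp.Variable((n, n), symmetric=True)
    Z1 = cp.Variable((n, n), symmetric=True)
    block11 = P @ (A + Ad) + (A + Ad).T @ P + Q + h * (Z + Z1)
    # Schur-complement form of "Psi + h*P*Ad*A*Z^{-1}*(...)' + h*P*Ad*Ad*Z1^{-1}*(...)' < 0".
    lmi = cp.bmat([
        [block11,              h * P @ Ad @ A,   h * P @ Ad @ Ad],
        [h * (P @ Ad @ A).T,   -h * Z,           np.zeros((n, n))],
        [h * (P @ Ad @ Ad).T,  np.zeros((n, n)), -h * Z1],
    ])
    cons = [P >> eps * np.eye(n), Q >> eps * np.eye(n),
            Z >> eps * np.eye(n), Z1 >> eps * np.eye(n),
            lmi << -eps * np.eye(3 * n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == "optimal"

# Bisection for the largest h that this (conservative) test certifies,
# assuming feasibility holds on an interval of small positive delays.
lo, hi = 0.0, 10.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print("largest delay certified by this test: h ~", lo)
```

Typically the value certified this way is well below the delay at which the example system actually loses stability (about 6.17 for these matrices), which is precisely the conservativeness that later refinements try to reduce.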
In 2000 and 2001, Prof. Gu [19, 20] pointed out that, since model transformations I and II introduce additional dynamics into the transformed system, the transformed system is not equivalent to the original one. Thus, these transformations were soon replaced by others.
Model transformation III

    ẋ(t) = (A + Ad)x(t) − Ad ∫_{t−h}^{t} ẋ(s) ds.                                       (1.14)
In this case, the Lyapunov-Krasovskii functional is

    V(xt) = V1(xt) + V4(xt),                                                            (1.15)

where

    V4(xt) = ∫_{−h}^{0} ∫_{t+θ}^{t} ẋᵀ(s)Z ẋ(s) ds dθ.

The derivative of V(xt) is

    V̇(xt) = Φ + η3 − ∫_{t−h}^{t} ẋᵀ(s)Z ẋ(s) ds,                                       (1.16)

where

    Φ  = xᵀ(t)[2P(A + Ad) + Q]x(t) − xᵀ(t − h)Qx(t − h) + hẋᵀ(t)Z ẋ(t),
    η3 = −2 ∫_{t−h}^{t} xᵀ(t)P Ad ẋ(s) ds.

Just as for model transformation I, the bounding of the cross term, η3, eliminates the quadratic integral term in the derivative (1.16) of the Lyapunov-Krasovskii functional, thereby producing a delay-dependent condition.

Model transformation III was presented in [16]. The basic idea is the same as that of model transformation I, with the difference being that, after model transformation III, the transformed system is equivalent to the original one. In addition, after the transformation of system (1.1) into (1.14), when dealing with the term hẋᵀ(t)Z ẋ(t) in the derivative of V(xt), system (1.1) is used as a substitute for system (1.14). That is, to obtain system (1.14), the state-delay term x(t − h) in system (1.1) is replaced by using the Newton-Leibnitz formula; but x(t − h) is not replaced in the derivative of V(xt). This inconsistency in the elimination of the integral terms leads to conservativeness.

In 2001, Fridman devised the following descriptor model transformation [21], which attracted a great deal of attention in subsequent years.

Model transformation IV

    ẋ(t) = y(t),
    y(t) = (A + Ad)x(t) − Ad ∫_{t−h}^{t} y(s) ds.                                        (1.17)
Fridman employed the following generalized Lyapunov-Krasovskii functional:

    V(xt) = ξᵀ(t)EP ξ(t) + ∫_{t−h}^{t} xᵀ(s)Qx(s) ds + ∫_{−h}^{0} ∫_{t+θ}^{t} yᵀ(s)Zy(s) ds dθ,   (1.18)

where

    ξ(t) = [x(t); y(t)],   E = [I, 0; 0, 0],   P = [P1, 0; P2, P3].

The derivative of V(xt) along the solutions of system (1.17) is

    V̇(xt) = Σ + η4 − ∫_{t−h}^{t} yᵀ(s)Zy(s) ds,                                         (1.19)

where

    Σ  = ξᵀ(t) { 2Pᵀ [0, I; A + Ad, −I] + [Q, 0; 0, hZ] } ξ(t) − xᵀ(t − h)Qx(t − h),
    η4 = −2 ∫_{t−h}^{t} ξᵀ(t)Pᵀ [0; Ad] y(s) ds.
As before, the bounding of the cross term, η4, eliminates the quadratic integral term in the derivative (1.19) of the Lyapunov-Krasovskii functional, thereby producing a delay-dependent condition.

There are four important points regarding the development of model transformations.
(1) When double-integral terms are introduced into the Lyapunov-Krasovskii functional to produce a delay-dependent stability condition, it results in quadratic integral terms appearing in the derivative of that functional.
(2) Model transformations emerged as a way of dealing with those quadratic integral terms.
(3) More specifically, the purpose of a model transformation is to bring the integral terms into the system equation so as to produce cross terms and quadratic integral terms in the derivative of the Lyapunov-Krasovskii functional.
(4) Then, the bounding of the cross terms eliminates the quadratic integral terms.
The basic feature of all model transformations is that they produce cross terms in the derivative of the Lyapunov-Krasovskii functional. However, since no suitable bounding methods have yet been discovered, the bounding of cross terms results in conservativeness; and attempts to reduce the conservativeness have naturally focused on this point. For example, in 1999, Park extended the basic inequality (1.7) to produce Park's inequality [16]. In 2001, Moon et al. explored ideas in the proof of Park's inequality to extend it, resulting in Moon et al.'s inequality [17], which has greater generality. The use of Park's or Moon et al.'s inequality in combination with model transformation III or IV brought forth a series of delay-dependent conditions with less conservativeness that are very useful in stability analysis and control synthesis.

However, model transformations III and IV still have limitations: In a stability or performance analysis, they basically use the Newton-Leibnitz formula to replace delay terms in the derivative of the Lyapunov-Krasovskii functional; but not all the delay terms are necessarily replaced. For example, in [17], the derivative of the Lyapunov-Krasovskii functional is

    V̇(xt) = 2xᵀ(t)P ẋ(t) + · · · + hẋᵀ(t)Z ẋ(t) + · · · ,                              (1.20)

where P > 0 and Z > 0 are matrices to be determined in the Lyapunov-Krasovskii functional. When dealing with the term x(t − h) (which appears when ẋ(t) is replaced with the system equation) in V̇(xt), the x(t − h) in 2xᵀ(t)P ẋ(t) is replaced, but the x(t − h) in hẋᵀ(t)Z ẋ(t) is not. This treatment is equivalent to adding the following zero-equivalent term to the derivative of the Lyapunov-Krasovskii functional:

    2xᵀ(t)P Ad [ x(t) − x(t − h) − ∫_{t−h}^{t} ẋ(s) ds ].                               (1.21)
Fixed weighting matrices are used to express the relationships among the terms of the Newton-Leibnitz formula in (1.21). That is, the weighting matrix of x(t) is P Ad and that of x(t − h) is zero. Similarly, in [18, 22–24], which employ the descriptor model transformation, the delay term x(t − h) in

    2 [xᵀ(t), ẋᵀ(t)] [P1, 0; P2, P3]ᵀ [0; Ad] x(t − h)

in the derivative of the Lyapunov-Krasovskii functional is replaced with x(t) − ∫_{t−h}^{t} ẋ(s) ds. This treatment is equivalent to adding the following zero-equivalent term to the derivative of the Lyapunov-Krasovskii functional:
    2 [ xᵀ(t)P2ᵀAd + ẋᵀ(t)P3ᵀAd ] [ x(t) − ∫_{t−h}^{t} ẋ(s) ds − x(t − h) ].           (1.22)
Here, fixed weighting matrices are also used to express the relationships among the terms of the Newton-Leibnitz formula. (The weighting matrix of x(t) is P2ᵀAd, that of ẋ(t) is P3ᵀAd, and that of x(t − h) is zero.) This substitution method is currently used in model transformations III and IV to obtain a delay-dependent condition. Note that, when weighting matrices are used for the above purpose, optimal weights do exist and the values should not be chosen simply for convenience. However, no effective way of determining the weights has yet been devised.

The chief feature of a parameterized model transformation [25–28] is the division of the delay term of system (1.1) into two parts: a delay-independent one and one to which a fixed model transformation is applied. That transforms system (1.1) into

    ẋ(t) = Ax(t) + (Ad − C)x(t − h) + Cx(t − h),                                        (1.23)
where C is a matrix parameter to be determined. In this way, a parameterized model transformation is combined with a fixed model transformation; so the limitations of the latter remain. On the other hand, although an effective approach to matrix decomposition was presented by Han in [28] (Remark 7 on page 378), three undetermined matrices have to be equal, which leads to unavoidable conservativeness.

The stabilization problem is closely related to stability. Stabilization involves finding a feedback controller that stabilizes the closed-loop system, with the main feedback schemes being state and output feedback. Methods of stability analysis include both frequency- and time-domain approaches, but the latter are more commonly used for stabilization problems because the former do not lend themselves readily to solving such problems. For synthesis problems (such as delay-dependent stabilization and control), there is no effective controller synthesis algorithm, even for simple state feedback; solutions are even more difficult for output feedback. The main problem is that, even if model transformation I or II is used to derive an LMI-based controller synthesis algorithm, they both introduce additional eigenvalues into the original system, as mentioned above; so the transformed system is not equivalent to the original one. Moreover, they employ conservative vector inequalities. So, they have been replaced by model transformations III and IV. However, when using either of them to solve a
synthesis problem, the design of the controller depends on one or more nonlinear matrix inequalities (NLMIs). There are two main methods of solving this type of inequality. One is the iterative algorithm of Moon et al. [17], who used it on a robust stabilization problem; it was also applied to an H∞ control problem in [23, 24]. This method yields a small controller gain, which is easy to implement; but the solutions are suboptimal [17]. The other is the widely used parameter-tuning method of Fridman et al. [18, 22, 29–34]. It transforms the NLMIs into LMIs by using scalar parameters to set one or more undetermined matrices in the NLMIs to specific forms; and then the tuning of those parameters produces a controller. This method also yields a suboptimal solution, and experience is required to properly tune the parameters.
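To make concrete why even simple state feedback leads to an NLMI, consider the following sketch (the input matrix B and gain K are notation introduced here only for illustration; system (1.1) itself has no input). If a control term Bu(t) is appended to (1.1) and state feedback u(t) = Kx(t) is applied, the closed-loop system matrix becomes A + BK, and substituting it into a condition such as (1.4) gives

    [ P(A + BK) + (A + BK)ᵀP + Q ,  PAd ;  ∗ ,  −Q ] < 0,

which contains the product PBK of two unknown matrices and is therefore nonlinear in (P, K). The iterative algorithm and the parameter-tuning method mentioned above are two ways of coping with this nonlinearity.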
1.2 Introduction to FWMs

In Section 1.1, we saw that the method of Moon et al. [17] adds the term (1.21) to V̇(xt); and the descriptor model transformation [18, 22–24, 28–35] adds the term (1.22) to it. The difference is that the weighting matrices of terms such as x(t) and ẋ(t) are different, but they are all constant. For example, in Moon et al. [17], the weighting matrix of x(t) is P Ad, where Ad is a coefficient matrix and P is a Lyapunov matrix. P is closely related to other matrices and cannot be freely chosen. For other terms, also, the weighting matrix is constant (for example, for x(t − h) it is zero). Moreover, in the descriptor model transformation, they are also constant.

This is where FWMs come in. In equations (1.21) and (1.22), the weighting matrices of x(t), ẋ(t), and x(t − h) are replaced by unknown FWMs. From the Newton-Leibnitz formula, the following equation is true for any matrices N1 and N2 with appropriate dimensions:

    2 [ xᵀ(t)N1 + xᵀ(t − h)N2 ] [ x(t) − ∫_{t−h}^{t} ẋ(s) ds − x(t − h) ] = 0.          (1.24)
Now, we add the left side of this equation to the derivative of the Lyapunov-Krasovskii functional. The fact that N1 and N2 are free and that their optimal values can be obtained by solving LMIs overcomes the conservativeness arising from the use of fixed weighting matrices [36–43]. On the other hand, since the two sides of the system equation are equal, FWMs can also be used to express the relationships among the terms of that equation. That is, from system equation (1.1), the following equation is true for any matrices
T1 and T2 with appropriate dimensions:

    2 [ xᵀ(t)T1 + ẋᵀ(t)T2 ] [ ẋ(t) − Ax(t) − Ad x(t − h) ] = 0.                         (1.25)

And from the Newton-Leibnitz formula, the following equation is true for any matrices Ni, i = 1, 2, 3, with appropriate dimensions:

    2 [ xᵀ(t)N1 + ẋᵀ(t)N2 + xᵀ(t − h)N3 ] [ x(t) − ∫_{t−h}^{t} ẋ(s) ds − x(t − h) ] = 0.   (1.26)

Retaining the term ẋ(t) in the derivative of the Lyapunov-Krasovskii functional and adding the left sides of these two equations to the derivative produce another type of result; Chapter 3 theoretically proves the equivalence of these two methods. This shows that the descriptor model transformation of Fridman et al. is a special case of the FWM approach. Furthermore, this treatment in combination with a parameter-dependent Lyapunov-Krasovskii functional is easily extended to deal with the delay-dependent stability of systems with polytopic-type uncertainties [44–47].
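The reason N1, N2, and N3 can be chosen freely is that the bracketed Newton-Leibnitz factor in (1.24) and (1.26) vanishes identically along every solution of (1.1). The following sketch makes this concrete by simulating (1.1) with a simple fixed-step Euler scheme and evaluating the added term for randomly chosen weighting matrices; the system matrices, initial function, and step size are illustrative assumptions, not an example from this book.

```python
# Sketch: the term added in (1.26) is (numerically) zero along a trajectory
# of (1.1) for arbitrary N1, N2, N3, so the matrices only add freedom to the LMI.
import numpy as np

A  = np.array([[-2.0, 0.0], [0.0, -0.9]])
Ad = np.array([[-1.0, 0.0], [-1.0, -1.0]])
h, dt, T = 1.0, 1e-3, 6.0
steps_h = int(round(h / dt))

# Constant initial function phi(t) = [1, -1] on [-h, 0] (so xdot = 0 there).
hist_x  = [np.array([1.0, -1.0])] * (steps_h + 1)
hist_dx = [np.zeros(2)] * (steps_h + 1)

rng = np.random.default_rng(1)
N1, N2, N3 = (rng.standard_normal((2, 2)) for _ in range(3))

max_residual = 0.0
for k in range(int(T / dt)):
    x, x_delay = hist_x[-1], hist_x[-(steps_h + 1)]
    dx = A @ x + Ad @ x_delay                      # system (1.1)
    hist_x.append(x + dt * dx)
    hist_dx.append(dx)
    if k * dt > h:
        window = np.array(hist_dx[-(steps_h + 1):])
        integral = np.trapz(window, dx=dt, axis=0)  # integral of xdot over [t-h, t]
        bracket = hist_x[-1] - integral - hist_x[-(steps_h + 1)]
        added = 2.0 * (x @ N1 + dx @ N2 + x_delay @ N3) @ bracket
        max_residual = max(max_residual, abs(added))

print("largest value of the added term along the trajectory:", max_residual)
# Up to the O(dt) discretization error, the added term is identically zero.
```

Because the added term is zero in value, including it changes nothing along the trajectory; what it changes is the structure of the matrix inequality, in which N1, N2, and N3 become extra decision variables that the LMI solver can optimize.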
1.3 Outline of This Book

This book is organized as follows:

Chapter 1 reviews research on the stability of time-delay systems and describes the free-weighting-matrix approach.

Chapter 2 provides the basic knowledge and concepts on the stability of time-delay systems that are needed in later chapters.

Chapter 3 deals with linear systems with a time-varying delay. FWMs are used to express the relationships among the terms in the Newton-Leibnitz formula, and delay-dependent stability conditions are derived. The criteria are then extended to delay-dependent and rate-independent stability conditions without any limitations on the derivative of the delay. Two classes of criteria are obtained for two different treatments of the term ẋ(t) (retaining it or replacing it with the system equation) in the derivative of the Lyapunov-Krasovskii functional; and their equivalence is proved. On this basis, the criteria are extended to systems with time-varying structured uncertainties. Furthermore, since retaining the term ẋ(t) allows the Lyapunov matrices and system matrices to readily be separated, this treatment in combination with a parameter-dependent Lyapunov-Krasovskii functional is easily extended to
deal with the delay-dependent stability of systems with polytopic-type uncertainties. Finally, systems with a time-varying delay are investigated based on an improved FWM (IFWM) approach that yields less conservative results.

Chapter 4 focuses on systems with multiple constant delays. For a system with two delays, delay-dependent criteria are derived by using the FWM approach to take the relationship between the delays into account. When the delays are equal, the criteria are equivalent to those for a system with a single delay. This idea is extended to the derivation of delay-dependent stability criteria for a system with multiple delays.

Chapter 5 investigates neutral systems. The FWM approach is used to analyze the discrete-delay-dependent and neutral-delay-independent stability of a neutral system with a time-varying discrete delay. Delay-dependent stability criteria for neutral systems are derived for identical discrete and neutral delays using the FWM approach and using that approach in combination with a parameterized model transformation and an augmented Lyapunov-Krasovskii functional, respectively. Again based on the FWM approach, discrete-delay- and neutral-delay-dependent stability criteria are obtained for a neutral system with different discrete and neutral delays. It is shown that these criteria include those for identical discrete and neutral delays as a special case.

Chapter 6 deals with the stabilization of linear systems with a time-varying delay. Based on the delay-dependent stability criteria obtained in Chapter 3, a static-state-feedback controller that stabilizes the system is designed by an iterative method that uses the cone complementarity linearization (CCL) algorithm or the improved CCL (ICCL) algorithm that we devised by using a new stop condition, along with a method of adjusting the parameters. In addition, an LMI-based method of controller design is developed from a delay-dependent and rate-independent stability condition.

Chapter 7 employs the IFWM approach to investigate the output-feedback control of a linear discrete-time system with a time-varying interval delay. The delay-dependent stability is first analyzed by a new method of estimating the upper bound on the difference of a Lyapunov function that does not ignore any terms; and based on the stability criterion, a design criterion for a static-output-feedback (SOF) controller is derived. Since the conditions thus obtained for the existence of admissible controllers are not expressed strictly in terms of LMIs, the ICCL algorithm is employed to solve the nonconvex feasibility SOF control problem. Furthermore, the problem of designing a dynamic-output-feedback (DOF) controller is formulated as one of designing
an SOF controller; a DOF controller is then obtained by solving the resulting SOF design problem.

Chapter 8 concerns the design of an H∞ controller for systems with a time-varying interval delay. The IFWM approach is used to devise an improved delay-dependent bounded real lemma (BRL). A method of designing an H∞ controller is given that employs the ICCL algorithm.

Chapter 9 focuses on the design of an H∞ filter for both continuous-time and discrete-time systems with a time-varying delay. The IFWM approach is used to carry out a delay-dependent H∞ performance analysis for error systems. The resulting criteria are extended to systems with polytopic-type uncertainties. Based on the results of the analysis, H∞ filters are designed in terms of LMIs.

Chapter 10 discusses stability problems for neural networks with time-varying delays. First, the stability of neural networks with multiple time-varying delays is considered; and the FWM approach is used to derive a delay-dependent stability criterion, from which both a delay-independent and rate-dependent criterion, and a delay-dependent and rate-independent criterion are obtained as special cases. Next, the IFWM approach is used to establish stability criteria for neural networks with a time-varying interval delay. Moreover, the FWM and IFWM approaches are used to investigate the exponential stability of neural networks with a time-varying delay. Finally, the IFWM approach is used to deal with the exponential stability of a class of discrete-time recurrent neural networks with a time-varying delay.

Chapter 11 shows how the IFWM approach can be used to study the asymptotic stability of a Takagi-Sugeno (T-S) fuzzy system with a time-varying delay. By considering the relationships among the time-varying delay, its upper bound, and their difference, and without ignoring any useful terms in the derivative of the Lyapunov-Krasovskii functional, an improved LMI-based asymptotic-stability criterion is obtained for a T-S fuzzy system with a time-varying delay. Then the criterion is extended to a T-S fuzzy system with time-varying structured uncertainties.

Chapter 12 investigates the problem of designing a controller for a networked control system (NCS). The IFWM approach is used to derive an improved stability criterion for a networked closed-loop system. This leads to the establishment of a method of designing a state-feedback controller based on the ICCL algorithm.
Chapter 13 concerns the delay-dependent stability of a stochastic system with a delay. The robust stability of an uncertain stochastic system with a time-varying delay is discussed; and the exponential stability of a stochastic Markovian jump system with nonlinearity and a time-varying delay is investigated. Less conservative results are established using the IFWM approach.

Chapter 14 investigates the stability of nonlinear systems with delays. First, for Lur'e control systems with multiple nonlinearities and a constant delay, LMI-based necessary and sufficient conditions for the existence of a Lyapunov-Krasovskii functional in the extended Lur'e form that ensures the absolute stability of the system are obtained and extended to systems with time-varying structured uncertainties. Then, the FWM approach is used to derive delay-dependent criteria for the absolute stability of a Lur'e control system with a time-varying delay. Finally, the IFWM approach is used to discuss the stability of a system with nonlinear perturbations and a time-varying interval delay. Less conservative delay-dependent stability criteria are established because the range of the delay is taken into account and an augmented Lyapunov-Krasovskii functional is used.
References

1. J. K. Hale and S. M. Verduyn Lunel. Introduction to Functional Differential Equations. New York: Springer-Verlag, 1993.
2. S. I. Niculescu. Delay Effects on Stability: A Robust Control Approach. London: Springer, 2001.
3. K. Gu, V. L. Kharitonov, and J. Chen. Stability of Time-Delay Systems. Boston: Birkhäuser, 2003.
4. N. P. Bhatia and G. P. Szegö. Stability Theory of Dynamical Systems. New York: Springer-Verlag, 1970.
5. J. P. LaSalle. The Stability of Dynamical Systems. Philadelphia: SIAM, 1976.
6. X. X. Liao, L. Q. Wang, and P. Yu. Stability of Dynamical Systems. London: Elsevier, 2007.
7. T. Mori and H. Kokame. Stability of ẋ(t) = Ax(t) + Bx(t − τ). IEEE Transactions on Automatic Control, 34(4): 460-462, 1989.
8. S. D. Brierley, J. N. Chiasson, E. B. Lee, and S. H. Zak. On stability independent of delay. IEEE Transactions on Automatic Control, 27(1): 252-254, 1982.
9. S. Boyd, L. E. Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. Philadelphia: SIAM, 1994.
10. J. P. Richard. Time-delay systems: An overview of some recent advances and open problems. Automatica, 39(10): 1667-1694, 2003.
16
1. Introduction
11. K. Gu. Discretized LMI set in the stability problem for linear uncertain timedelay systems. International Journal of Control, 68(4): 923-934, 1997. 12. K. Gu. A generalized discretization scheme of Lyapunov functional in the stability problem of linear uncertain time-delay systems. International Journal of Robust and Nonlinear Control, 9(1): 1-4, 1999. 13. K. Gu. A further refinement of discretized Lyapunov functional method for the stability of time-delay systems. International Journal of Control, 74(10): 967-976, 2001. 14. Q. L. Han and K. Gu. On robust stability of time-delay systems with normbounded uncertainty. IEEE Transactions on Automatic Control, 46(9): 14261431, 2001. 15. E. Fridman and U. Shaked. Descriptor discretized Lyapunov functional method: analysis and design. IEEE Transactions on Automatic Control, 51(5): 890-897, 2006. 16. P. Park. A delay-dependent stability criterion for systems with uncertain timeinvariant delays. IEEE Transactions on Automatic Control, 44(4): 876-877, 1999. 17. Y. S. Moon, P. Park, W. H. Kwon, and Y. S. Lee. Delay-dependent robust stabilization of uncertain state-delayed systems. International Journal of Control, 74(14): 1447-1455, 2001. 18. E. Fridman and U. Shaked. Delay-dependent stability and H∞ control: constant and time-varying delays. International Journal of Control, 76(1): 48-60, 2003. 19. K. Gu and S. I. Niculescu. Additional dynamics in transformed time delay systems. IEEE Transactions on Automatic Control, 45(3): 572-575, 2000. 20. K. Gu and S. I. Niculescu. Further remarks on additional dynamics in various model transformations of linear delay systems. IEEE Transactions on Automatic Control, 46(3): 497-500, 2001. 21. E. Fridman. New Lyapunov-Krasovskii functionals for stability of linear retarded and neutral type systems. Systems & Control Letters, 43(4): 309-319, 2001. 22. E. Fridman and U. Shaked. An improved stabilization method for linear timedelay systems. IEEE Transactions on Automatic Control, 47(11): 1931-1937, 2002. 23. H. Gao and C. Wang. Comments and further results on “A descriptor system approach to H∞ control of linear time-delay systems”. IEEE Transactions on Automatic Control, 48(3): 520-525, 2003. 24. Y. S. Lee, Y. S. Moon, W. H. Kwon, and P. G. Park. Delay-dependent robust H∞ control for uncertain systems with a state-delay. Automatica, 40(1): 65-72, 2004. 25. S. I. Niculescu. On delay-dependent stability under model transformations of some neutral linear systems. International Journal of Control, 74(6): 608-617, 2001. 26. S. I. Niculescu. Optimizing model transformations in delay-dependent analysis of neutral systems: A control-based approach. Nonlinear Analysis, 47(8): 53785390, 2001.
References
17
27. Q. L. Han. Robust stability of uncertain delay-differential systems of neutral type. Automatica, 38(4): 718-723, 2002. 28. Q. L. Han. Stability criteria for a class of linear neutral systems with timevarying discrete and distributed delays. IMA Journal of Mathematical Control and Information, 20(4): 371-386, 2003. 29. E. Fridman and U. Shaked. A descriptor system approach to H∞ control of linear time-delay systems. IEEE Transactions on Automatic Control, 47(2): 253-270, 2002. 30. E. Fridman and U. Shaked. H∞ − Control of linear state-delay descriptor systems: an LMI approach. Linear Algebra and Its Applications, 3(2): 271-302, 2002. 31. E. Fridman and U. Shaked. On delay-dependent passivity. IEEE Transactions on Automatic Control, 47(4): 664-669, 2002. 32. E. Fridman and U. Shaked. Parameter dependent stability and stabilization of uncertain time-delay systems. IEEE Transactions on Automatic Control, 48(5): 861-866, 2003. 33. E. Fridman, U. Shaked, and L. Xie. Robust H∞ filtering of linear systems with time-varying delay. IEEE Transactions on Automatic Control, 48(1): 159-165, 2003. 34. E. Fridman and U. Shaked. An improved delay-dependent H∞ filtering of linear neutral systems. IEEE Transactions on Signal Processing, 52(3): 668-673, 2004. 35. E. Fridman. Stability of linear descriptor systems with delay: a Lyapunov-based approach. Journal of Mathematical Analysis and Applications, 273(1): 24-44, 2002. 36. M. Wu, Y. He, J. H. She, and G. P. Liu. Delay-dependent criteria for robust stability of time-varying delay systems. Automatica, 40(8): 1435-1439, 2004. 37. M. Wu, S. P. Zhu, and Y. He. Delay-dependent stability criteria for systems with multiple delays. Proceedings of 23rd Chinese Control Conference, Wuxi, China, 625-629, 2004. 38. Y. He, M. Wu, J. H. She, and G. P. Liu. Delay-dependent robust stability criteria for uncertain neutral systems with mixed delays. Systems & Control Letters, 51(1): 57-65, 2004. 39. Y. He and M. Wu. Delay-dependent robust stability for neutral systems with mixed discrete- and neutral-delays. Journal of Control Theorey and Applications, 2(4): 386-392, 2004. 40. Y. He, M. Wu, and J. H. She. Delay-dependent robust stability criteria for neutral systems with time-varying delay. Proceedings of 23rd Chinese Control Conference, Wuxi, China, 647-650, 2004. 41. Y. He, M. Wu, J. H. She, and G. P. Liu. Robust stability for delay Lur’e control systems with multiple nonlinearities. Journal of Computational and Applied Mathematics, 176(2): 371-380, 2005. 42. M. Wu, Y. He, and J. H. She. Delay-dependent robust stability and stabilization criteria for uncertain neutral systems. Acta Automatica Sinica, 31(4): 578-583, 2005.
18
1. Introduction
43. Y. He and M. Wu. Delay-dependent conditions for absolute stability of Lur’e control systems with time-varying delay. Acta Automatica Sinica, 31(3): 475478, 2005. 44. M. Wu, Y. He, and J. H. She. New delay-dependent stability criteria and stabilizing method for neutral systems. IEEE Transactions on Automatic Control, 49(12): 2266-2271, 2004. 45. M. Wu and Y. He. Parameter-dependent Lyapunov functional for systems with multiple time delays. Journal of Control Theory and Applications, 2(3): 239-245, 2004. 46. Y. He, M. Wu, J. H. She, and G. P. Liu. Parameter-dependent Lyapunov functional for stability of time-delay systems with polytopic type uncertainties. IEEE Transactions on Automatic Control, 49(5): 828-832, 2004. 47. Y. He, M. Wu, and J. H. She. Improved bounded-real-lemma representation and H∞ control for systems with polytopic uncertainties. IEEE Transactions on Circuits and Systems II, 52(7): 380-383, 2005.
2. Preliminaries
This chapter provides basic knowledge and concepts on the stability of time-delay systems, including the concept of stability, the H∞ norm, H∞ control, the LMI method, and some useful lemmas. They form the foundation for subsequent chapters.
2.1 Lyapunov Stability and Basic Theorems

The stability of a system generally refers to its ability to return to its initial state after an external disturbance ceases. Stability is the primary condition for the normal operation of a control system. The Lyapunov stability theory defines the stability of a system in terms of energy; its biggest advantage is that stability can be determined without the need to solve the motion equation of the system.

2.1.1 Types of Stability

This subsection defines various types of stability for continuous-time and discrete-time systems.

1) Continuous-Time Systems

Consider the following continuous-time system:

ẋ(t) = f(t, x(t)), x(t0) = x0,   (2.1)

where x(t) ∈ Rn is the state vector; f: R̄+ × Rn → Rn; and t is a continuous time variable. A point xe ∈ Rn is called an equilibrium point of system (2.1) if f(t, xe) = 0 for all t ≥ t0. The system remains at that point as long as there is no external action on it. The question is, if there is an external action, will the system remain near the equilibrium point, or will it move farther and farther away?
The problem of stability at an equilibrium point is discussed below. Shifting the origin of the system allows us to move the equilibrium point to xe = 0. If there are multiple equilibrium points, the stability of each must be studied by an appropriate shift of the origin. Various types of stability are defined below for system (2.1) at the equilibrium point, xe = 0.

Definition 2.1.1. [1]
(1) If, for any t0 ≥ 0 and ε > 0, there exists a δ1 = δ(t0, ε) > 0 such that

‖x(t0)‖ < δ(t0, ε) =⇒ ‖x(t)‖ < ε, ∀t ≥ t0,   (2.2)

then the system is stable (in the Lyapunov sense) at the equilibrium point, xe = 0.
(2) If the system is stable at the equilibrium point xe = 0 and if there exists a δ2 = δ(t0) > 0 such that

‖x(t0)‖ < δ(t0) =⇒ lim_{t→∞} x(t) = 0,   (2.3)

then the system is asymptotically stable at the equilibrium point, xe = 0.
(3) If there exist constants δ3 > 0, α > 0, and β > 0 such that

‖x(t0)‖ < δ3 =⇒ ‖x(t)‖ ≤ β‖x(t0)‖ e^{−α(t−t0)},   (2.4)

then the system is exponentially stable at the equilibrium point, xe = 0.
(4) If δ1 in (1) (or δ2 in (2)) can be chosen independently of t0, then the system is uniformly stable (or uniformly asymptotically stable) at the equilibrium point, xe = 0.
(5) If δ2 in (2) (or δ3 in (3)) can be an arbitrarily large, finite number, then the system is globally asymptotically stable (or globally exponentially stable) at the equilibrium point, xe = 0.

The two figures below may help the reader acquire an intuitive understanding of the concepts of stability and asymptotic stability. Fig. 2.1 illustrates the property of stability in the Lyapunov sense. It shows that the trajectory, x(t), remains in the neighborhood Ω(ε) of the equilibrium point as long as x(t0) is in the neighborhood Ω(δ). Fig. 2.2 illustrates the property of asymptotic stability. It shows that the system is asymptotically stable at the equilibrium point, xe = 0, if it is stable at that point and if all solutions starting near that point approach it as t → ∞.
Fig. 2.1. Stability in the Lyapunov sense
Fig. 2.2. Asymptotic stability
2) Discrete-Time Systems

Consider the following discrete-time system:

x(k + 1) = f(k, x(k)), x(k0) = x0,   (2.5)

where x(k) ∈ Rn is the state vector; f: Z̄+ × Rn → Rn; and f(k, x) is continuous in x. A point xe in Rn is called an equilibrium point of (2.5) if f(k, xe) = xe for all k ≥ k0. In the literature, xe is usually assumed to be the origin and is called the zero solution. We now define various types of stability for system (2.5) at the equilibrium point, xe = 0.

Definition 2.1.2. [2]
(1) If, for any k0 ≥ 0 and ε > 0, there exists a δ1 = δ(k0, ε) > 0 such that

‖x(k0)‖ < δ(k0, ε) =⇒ ‖x(k)‖ < ε, ∀k ≥ k0,   (2.6)
then the system is stable (in the Lyapunov sense) at the equilibrium point, xe = 0.
(2) If the system is stable at the equilibrium point, xe = 0, and if there exists a δ2 = δ(k0) > 0 such that

‖x(k0)‖ < δ(k0) =⇒ lim_{k→∞} x(k) = 0,   (2.7)

then the system is asymptotically stable at the equilibrium point, xe = 0.
(3) If there exist a δ3 > 0 and constants α > 0 and β > 0 such that

‖x(k0)‖ < δ3 =⇒ ‖x(k)‖ ≤ β‖x(k0)‖ e^{−α(k−k0)},   (2.8)

then the system is exponentially stable at the equilibrium point, xe = 0.
(4) If δ1 in (1) (or δ2 in (2)) can be chosen independently of k0, then the system is uniformly stable (or uniformly asymptotically stable) at the equilibrium point, xe = 0.
(5) If δ2 in (2) (or δ3 in (3)) can be an arbitrarily large, finite number, then the system is globally asymptotically stable (or globally exponentially stable) at the equilibrium point, xe = 0.

The figures below illustrate the concepts of stability and asymptotic stability. Regarding stability in the Lyapunov sense, when the movement of a solution starts inside a sphere of radius δ in the phase plane (Fig. 2.3), all states x(k) for k ≥ k0 remain in a disk with a radius of ε. The time trajectory in three-dimensional space (Fig. 2.4) provides another perspective on stability. Fig. 2.5 depicts the asymptotic stability of the zero solution.

2.1.2 Lyapunov Stability Theorems

Lyapunov used classical mechanics to investigate how the distribution of the energy field of a system influences its stability, and then devised a method based on the above definition of stability to determine the stability of a system without explicitly integrating a differential equation. This is called Lyapunov's direct method, or the second method of Lyapunov. Classical mechanics tells us that, in a physical system, a mass is less stable when it has a high energy than when it has a low energy. Thus, when a particle moves from an unstable state towards a stable state, its energy must continuously decrease. If we denote the energy by E, then this situation is described by
Fig. 2.3. Stability in the Lyapunov sense in phase plane
Fig. 2.4. Stability in the Lyapunov sense in three-dimensional space
E > 0,  dE/dt < 0.
Take a mechanical oscillation, for example. Even though the speed of the oscillator varies, the total energy of the system (which is the sum of the kinetic and potential energies) keeps decreasing and ultimately becomes zero at the equilibrium point. At that point, a passive system is stable; and it is impossible for the total energy of an independent passive system to increase.
Fig. 2.5. Asymptotic stability
That is, in the neighborhood of an equilibrium point, no positive change in the total energy of the system can occur. Based on the above principles, Lyapunov constructed an energy function, V(t, x(t)), that is expressed solely in terms of the state energy. If

V(t, x(t)) > 0 for x ≠ 0,  V(t, x(t)) = 0 for x = 0,

and V̇(t, x(t)) ≤ 0, then the stability at the equilibrium point can be proven without using any information on the solutions of the motion equation of the system. V(t, x(t)) is called a Lyapunov function.

Theorem 2.1.1. [1] (Lyapunov stability theorem for continuous-time systems) Consider system (2.1). Let f(t, 0) = 0, ∀t, which means that the equilibrium point of the system is xe = 0.
• If (1) there exists a positive definite function V(t, x(t)) and (2) V̇(t, x(t)) := dV(t, x(t))/dt is negative semi-definite, then the system is stable at the equilibrium point, xe = 0.
• If (1) there exists a positive definite function V(t, x(t)) and (2) V̇(t, x(t)) := dV(t, x(t))/dt is negative definite, then the system is asymptotically stable at the equilibrium point, xe = 0.
• If (1) the system is asymptotically stable at xe = 0 and (2) V(t, x(t)) → ∞ as ‖x‖ → ∞,
then the system is globally asymptotically stable at the equilibrium point, xe = 0.

Theorem 2.1.2. [2] (Lyapunov stability theorem for discrete-time systems) Consider system (2.5). Let f(k, 0) = 0, ∀k, which means that the equilibrium point of the system is xe = 0.
• If (1) there exists a positive definite function V(k, x(k)) and (2) ΔV(k, x(k)) := V(k + 1, x(k + 1)) − V(k, x(k)) ≤ 0, ∀k, ∀x ≠ 0, then the system is stable at the equilibrium point, xe = 0.
• If (1) there exists a positive definite function V(k, x(k)) and (2) ΔV(k, x(k)) := V(k + 1, x(k + 1)) − V(k, x(k)) < 0, ∀k, ∀x ≠ 0, then the system is asymptotically stable at the equilibrium point, xe = 0.
• If (1) the system is asymptotically stable at xe = 0 and (2) V(k, x(k)) → ∞ as ‖x‖ → ∞, then the system is globally asymptotically stable at the equilibrium point, xe = 0.
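For a linear system ẋ(t) = Ax(t), Theorem 2.1.1 is typically applied with the quadratic function V(x) = x^T P x: if P > 0 satisfies A^T P + PA = −Q for some Q > 0, then V̇(x) = −x^T Q x < 0 and the origin is globally asymptotically stable. The following minimal sketch checks this numerically with SciPy; the matrix A is an illustrative assumption, not an example taken from this book.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative Hurwitz matrix (an assumption for this sketch).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# Solve A^T P + P A = -Q; V(x) = x^T P x is a Lyapunov function if P > 0.
P = solve_continuous_lyapunov(A.T, -Q)

print("P =\n", P)
print("P > 0:", np.all(np.linalg.eigvalsh(P) > 0))
print("A^T P + P A + Q = 0:", np.allclose(A.T @ P + P @ A + Q, 0))
```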
2.2 Stability of Time-Delay Systems

This section presents some basic definitions and theoretical results in the theory of time-delay systems.

2.2.1 Stability-Related Topics

This subsection presents some basic information on time-delay systems, specifically, fundamental concepts, descriptions, and types of stability.

1) Time-Delay Systems

In science and engineering, differential equations are often used as mathematical models of systems. A fundamental assumption about a system that is modeled in this way is that its future evolution depends solely on the current values of the state variables and is independent of their history. For example, consider the following first-order differential equation:

ẋ(t) = f(t, x(t)), x(t0) = x0.

The future evolution of the state variable x at time t depends only on t and x(t), and does not depend on the values of x before time t.
If the future evolution of the state of a dynamic system depends not only on current values, but also on past ones, then the system is called a time-delay system. Actual systems of this type cannot be satisfactorily modeled by an ordinary differential equation; that is, an ordinary differential equation is only an approximate model. One way to describe such systems precisely is to use functional differential equations.

2) Functional Differential Equations

In many systems, there may be a maximum delay, h. In this case, we are often interested in the set of continuous functions that map [−h, 0] to Rn, which we denote simply by C = C([−h, 0], Rn). For any a > 0, any continuous function of time ψ ∈ C([t0 − h, t0 + a], Rn), and t0 ≤ t ≤ t0 + a, let ψt ∈ C be the segment of ψ given by ψt(θ) = ψ(t + θ), −h ≤ θ ≤ 0. The general form of a retarded functional differential equation (RFDE) (or functional differential equation of retarded type) is

ẋ(t) = f(t, xt),   (2.9)

where x(t) ∈ Rn and f: R × C → Rn. This equation indicates that the derivative of the state variable x at time t depends on t and x(ζ) for t − h ≤ ζ ≤ t. Thus, to determine the future evolution of the state, it is necessary to specify the initial value of the state variable, x(t), in a time interval of length h, say, from t0 − h to t0; that is,

xt0 = φ,   (2.10)

where φ ∈ C is given. In other words, x(t0 + θ) = φ(θ), −h ≤ θ ≤ 0. It is important to note that, in an RFDE, the derivative of the state contains no term with a delay. If such a term does appear, then we have a functional differential equation of neutral type. For example,

5ẋ(t) + 2ẋ(t − h) + x(t) − x(t − h) = 0

is a neutral functional differential equation (NFDE). For an a > 0, a function x is said to be a solution of RFDE (2.9) on the interval [t0 − h, t0 + a) if x is continuous and satisfies that RFDE on that interval. Here, the time derivative should be interpreted as a one-sided derivative in the forward direction. Of course, a solution also implies that (t, xt) is within the domain of definition of f. If the solution also satisfies the initial condition (2.10), we say that it is a solution of the equation with the initial condition (2.10), or simply a solution through (t0, φ).
We write it as x(t0, φ, f) when it is important to specify the particular RFDE and the given initial condition. The value of x(t0, φ, f) at t is denoted by x(t; t0, φ, f). We omit f and write x(t0, φ) or x(t; t0, φ) when f is clear from the context.

A fundamental issue in the study of both ordinary differential equations and functional differential equations is the existence and uniqueness of a solution. We state the following theorem without proof.

Theorem 2.2.1. [3] (Uniqueness) Suppose that Ω ⊆ R × C is an open set, the function f: Ω → Rn is continuous, and f(t, φ) is Lipschitzian in φ in each compact set in Ω. That is, for a given compact set Ω0 ⊂ Ω, there exists a constant L such that

‖f(t, φ1) − f(t, φ2)‖ ≤ L‖φ1 − φ2‖

for any (t, φ1) ∈ Ω0 and (t, φ2) ∈ Ω0. If (t0, φ) ∈ Ω, then there exists a unique solution of RFDE (2.9) through (t0, φ).

3) Concept of Stability

Let y(t) be a solution of RFDE (2.9). The stability of the solution depends on the behavior of the system when the system trajectory, x(t), deviates from y(t). Throughout this book, without loss of generality, we assume that RFDE (2.9) admits the solution x(t) = 0, which will be referred to as the trivial solution. If the stability of a nontrivial solution, y(t), needs to be studied, then we can use the variable transformation z(t) = x(t) − y(t) to produce the new system

ż(t) = f(t, zt + yt) − f(t, yt),   (2.11)

which has the trivial solution z(t) = 0. For a function φ ∈ C([a, b], Rn), define the continuous norm ‖·‖c to be

‖φ‖c = sup_{a≤θ≤b} ‖φ(θ)‖.

In this definition, the vector norm ‖·‖ is the 2-norm ‖·‖2. As we did above for continuous- and discrete-time systems, we now define various types of stability for the trivial solution of time-delay system (2.9).

Definition 2.2.1. [4]
• If, for any t0 ∈ R and ε > 0, there exists a δ = δ(t0, ε) > 0 such that ‖xt0‖c < δ implies ‖x(t)‖ < ε for t ≥ t0, then the trivial solution of (2.9) is stable.
• If the trivial solution of (2.9) is stable and if, for any t0 ∈ R and any ε > 0, there exists a δa = δa(t0, ε) > 0 such that ‖xt0‖c < δa implies lim_{t→∞} x(t) = 0, then the trivial solution of (2.9) is asymptotically stable.
• If the trivial solution of (2.9) is stable and if δ(t0, ε) can be chosen independently of t0, then the trivial solution of (2.9) is uniformly stable.
• If the trivial solution of (2.9) is uniformly stable and if there exists a δa > 0 such that, for any η > 0, there exists a T = T(δa, η) such that ‖xt0‖c < δa implies ‖x(t)‖ < η for t ≥ t0 + T and t0 ∈ R, then the trivial solution of (2.9) is uniformly asymptotically stable.
• If the trivial solution of (2.9) is (uniformly) asymptotically stable and if δa can be an arbitrarily large, finite number, then the trivial solution of (2.9) is globally (uniformly) asymptotically stable.
• If there exist constants α > 0 and β > 0 such that

‖x(t)‖ ≤ β sup_{−h≤θ≤0} ‖x(θ)‖ e^{−αt},

then the trivial solution of (2.9) is globally exponentially stable; and α is called the exponential convergence rate.

2.2.2 Lyapunov-Krasovskii Stability Theorem

Just as for a system without a delay, the Lyapunov method is an effective way of determining the stability of a system with a delay. When there is no delay, this determination requires the construction of a Lyapunov function, V(t, x(t)), which can be viewed as a measure of how much the state, x(t), deviates from the trivial solution, 0. In a delay-free system, x(t) is all we need to specify the future evolution of the system beyond t. In a time-delay system, we need the "state" at time t for that purpose; it is the value of x(t) on the interval [t − h, t] (that is, xt). So, it is natural to expect that, for a time-delay system, the Lyapunov function is a functional, V(t, xt), that depends on xt and indicates how much xt deviates from the trivial solution, 0. This type of functional is called a Lyapunov-Krasovskii functional.

More specifically, let V(t, φ): R × C → R be differentiable, and let xt(τ, φ) be the solution of RFDE (2.9) at time t for the initial condition xτ = φ. Calculating the time derivative of V(t, xt) and evaluating it at t = τ yield
V̇(τ, φ) = dV(t, xt)/dt |_{t=τ, xt=φ} = lim sup_{Δt→0⁺} [V(τ + Δt, x_{τ+Δt}(τ, φ)) − V(τ, φ)] / Δt.

If V̇(t, xt) is nonpositive, then xt does not grow with t, which means that the system under consideration is stable in the sense of Definition 2.2.1. The following theorem states this more precisely.

Theorem 2.2.2. [4] (Lyapunov-Krasovskii stability theorem) Suppose that f: R × C → Rn in (2.9) maps R × (bounded sets in C) into bounded sets in Rn, and that u, v, w: R̄+ → R̄+ are continuous nondecreasing functions, where u(τ) and v(τ) are positive for τ > 0 and u(0) = v(0) = 0.
• If there exists a continuously differentiable functional V: R × C → R such that

u(‖φ(0)‖) ≤ V(t, φ) ≤ v(‖φ‖c) and V̇(t, φ) ≤ −w(‖φ(0)‖),

then the trivial solution of (2.9) is uniformly stable.
• If the trivial solution of (2.9) is uniformly stable and w(τ) > 0 for τ > 0, then the trivial solution of (2.9) is uniformly asymptotically stable.
• If the trivial solution of (2.9) is uniformly asymptotically stable and if lim_{τ→∞} u(τ) = ∞, then the trivial solution of (2.9) is globally uniformly asymptotically stable.

2.2.3 Razumikhin Stability Theorem

Since the Lyapunov-Krasovskii functional involves the state variable x(t) on the interval [t − h, t], it requires the manipulation of functionals, which makes the Lyapunov-Krasovskii theorem difficult to apply. This difficulty can sometimes be circumvented by using the Razumikhin theorem, an alternative that involves only functions, but no functionals. The key idea behind the Razumikhin theorem is the use of a function, V(x), to represent the size of x(t):

V̄(xt) = max_{θ∈[−h, 0]} V(x(t + θ)).

This function indicates the size of xt. If V(x(t)) < V̄(xt), then V̄(xt) does not grow even when V̇(x(t)) > 0. In fact, for V̄(xt) not to grow, it is only necessary that
V̇(x(t)) not be positive whenever V(x(t)) = V̄(xt). The precise statement is given in the next theorem.

Theorem 2.2.3. [4] (Razumikhin theorem) Suppose that f: R × C → Rn in (2.9) maps R × (bounded sets of C) into bounded sets of Rn, and also that u, v, w: R̄+ → R̄+ are continuous nondecreasing functions, u(τ) and v(τ) are positive for τ > 0, u(0) = v(0) = 0, and v is strictly increasing.
• If there exists a continuously differentiable function V: R × Rn → R such that

u(‖x‖) ≤ V(t, x) ≤ v(‖x‖), t ∈ R, x ∈ Rn,   (2.12)

and the derivative of V along the solution, x(t), of system (2.9) satisfies

V̇(t, x(t)) ≤ −w(‖x(t)‖) whenever V(t + θ, x(t + θ)) ≤ V(t, x(t))   (2.13)

for θ ∈ [−h, 0], then the trivial solution of (2.9) is uniformly stable.
• If there exists a continuously differentiable function V: R × Rn → R such that

u(‖x‖) ≤ V(t, x) ≤ v(‖x‖), t ∈ R, x ∈ Rn,   (2.14)

if w(τ) > 0 for τ > 0, and if there exists a continuous nondecreasing function p(τ) > τ for τ > 0 such that condition (2.13) is strengthened to

V̇(t, x(t)) ≤ −w(‖x(t)‖) if V(t + θ, x(t + θ)) ≤ p(V(t, x(t)))   (2.15)

for θ ∈ [−h, 0], then the trivial solution of (2.9) is uniformly asymptotically stable.
• If the trivial solution of (2.9) is uniformly asymptotically stable and if lim_{τ→∞} u(τ) = ∞, then the trivial solution of (2.9) is globally uniformly asymptotically stable.
2.3 H∞ Norm

This section presents some basic concepts that are used in this book.

2.3.1 Norm

Let X be a vector space over the complex field C. For x ∈ X, let f(x): X → R be a real-valued function. If it has the following properties:
(1) f(x) ≥ 0,
(2) f(αx) = |α| f(x), ∀α ∈ R,
(3) f(x + y) ≤ f(x) + f(y), ∀y ∈ X, and
(4) f(x) = 0 if and only if x = 0,
then f(x) is said to be a norm on X; it is denoted by ‖x‖.

2.3.2 H∞ Norm

The H∞ space is a space of matrix functions, F(s), that are analytic on the open right-half plane (that is, Re(s) > 0), take values in C^{m×n}, and satisfy

‖F‖∞ = sup {σmax[F(s)] : Re(s) > 0} < +∞.   (2.16)
This equation defines ‖F‖∞, the H∞ norm of the matrix function F(s) [5].

Consider the following linear time-invariant system:

ẋ(t) = Ax(t) + Bw(t),  z(t) = Cx(t) + Dw(t),   (2.17)

where x(t) ∈ Rn is the state vector and x(0) = 0; w(t) ∈ Rm is a disturbance input vector; and A, B, C, and D are real matrices with appropriate dimensions. G(s) = C(sI − A)^{−1}B + D is the transfer function matrix of the system. For convenience, we denote

G(s) = [A B; C D].

From (2.16) and the maximum modulus theorem, the H∞ norm of the proper transfer function matrix, G(s), of a stable linear time-invariant system is

‖G‖∞ = sup_ω σmax[G(jω)].   (2.18)
When G(s) is a scalar transfer function, the H∞ norm is

‖G‖∞ = sup_ω |G(jω)|.   (2.19)
The H∞ norm can also be defined in the time domain. Let w(t) be a square-integrable input signal, and let z(t) be the output signal. Their energies are defined to be
‖w‖₂² = ∫_{−∞}^{+∞} w^T(t)w(t) dt,  ‖z‖₂² = ∫_{−∞}^{+∞} z^T(t)z(t) dt.

So, the H∞ norm of G(s) is

‖G‖∞ = sup_{w≠0} ‖z‖₂ / ‖w‖₂.   (2.20)
The H∞ norm reflects the maximum ratio of the output signal energy to the input signal energy, or in other words, the maximum energy amplification ratio of the system.
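Definition (2.18) suggests a direct numerical approximation of the H∞ norm: sweep a frequency grid and take the largest singular value of G(jω). The sketch below does this for an assumed stable state-space model of the form (2.17); the data are illustrative, and a finite grid only yields a lower bound on the true supremum.

```python
import numpy as np

# Illustrative stable system data for (2.17) (assumed for this sketch).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def sigma_max_G(w):
    """Largest singular value of G(jw) = C (jwI - A)^{-1} B + D."""
    G = C @ np.linalg.solve(1j * w * np.eye(A.shape[0]) - A, B) + D
    return np.linalg.svd(G, compute_uv=False)[0]

# Grid approximation of ||G||_inf = sup_w sigma_max[G(jw)], cf. (2.18).
ws = np.logspace(-3, 3, 2000)
print("||G||_inf ~", max(sigma_max_G(w) for w in ws))
```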
2.4 H∞ Control

Fig. 2.6 shows a block diagram of a standard H∞ control problem. G is the generalized plant, which is given in the problem statement; and K is the controller, which needs to be designed. Here, we assume that the system and controller are finite-dimensional, linear, and time-invariant. The external input, w, the control input, u, the controlled output, z, and the measured output, y, are all vector signals.
Fig. 2.6. Block diagram of standard H∞ control problem
Assume that G(s) and K(s) in Fig. 2.6 are both proper, real-rational transfer function matrices that describe linear time-invariant systems. From Fig. 2.6, we have

[z; y] = G(s) [w; u].
The state-space realization of G(s) is

ẋ = Ax + B1 w + B2 u,
z = C1 x + D11 w + D12 u,
y = C2 x + D21 w + D22 u.

So,

G(s) = [A B1 B2; C1 D11 D12; C2 D21 D22],   (2.21)

where x ∈ Rn is the state vector. Also, assume w ∈ Rm1, u ∈ Rm2, z ∈ Rp1, and y ∈ Rp2. Decomposing G(s) into

G(s) = [G11(s) G12(s); G21(s) G22(s)],   (2.22)

and comparing (2.22) and (2.21), we have

Gij(s) = Ci(sI − A)^{−1}Bj + Dij, i, j = 1, 2.   (2.23)
The closed-loop transfer function matrix from w to z is

Tzw(s) = G11(s) + G12(s)K(s)(I − G22(s)K(s))^{−1}G21(s) := Fl(G, K),   (2.24)

where Fl(G, K) is called the lower linear fractional transformation (LFT) of G(s) and K(s).

The H∞ optimal control problem for the closed-loop control system in Fig. 2.6 involves
(1) finding a proper real-rational controller, K(s), that stabilizes the system internally and
(2) minimizing the H∞ norm of Tzw(s), that is, finding

min_{K stabilizes G} ‖Fl(G, K)‖∞.   (2.25)

The H∞ suboptimal control problem for the closed-loop control system in Fig. 2.6 involves
(1) finding all proper real-rational controllers, K(s), that stabilize the closed-loop system internally and
(2) making the H∞ norm of Tzw(s) less than a given constant γ > 0:

‖Fl(G, K)‖∞ < γ.   (2.26)
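Once a realization (2.21) and a controller are fixed, the closed-loop matrix Tzw(s) in (2.24) can be evaluated pointwise on the imaginary axis. The sketch below builds Gij(jω) from (2.23) and forms the lower LFT for a static gain; all of the plant data and the gain K are assumptions made only for illustration.

```python
import numpy as np

# Illustrative partitioned plant data, cf. (2.21) (assumptions for this sketch).
A  = np.array([[-1.0, 1.0], [0.0, -2.0]])
B1 = np.array([[1.0], [0.0]]);  B2 = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 0.0]]);    C2 = np.array([[0.0, 1.0]])
D11 = np.zeros((1, 1)); D12 = np.zeros((1, 1))
D21 = np.zeros((1, 1)); D22 = np.zeros((1, 1))
K = np.array([[-2.0]])  # a static gain, assumed only for illustration

def G_ij(s, Ci, Bj, Dij):
    """G_ij(s) = C_i (sI - A)^{-1} B_j + D_ij, cf. (2.23)."""
    return Ci @ np.linalg.solve(s * np.eye(A.shape[0]) - A, Bj) + Dij

def T_zw(s):
    """Lower LFT F_l(G, K)(s) = G11 + G12 K (I - G22 K)^{-1} G21, cf. (2.24)."""
    G11 = G_ij(s, C1, B1, D11); G12 = G_ij(s, C1, B2, D12)
    G21 = G_ij(s, C2, B1, D21); G22 = G_ij(s, C2, B2, D22)
    return G11 + G12 @ K @ np.linalg.solve(np.eye(K.shape[0]) - G22 @ K, G21)

# Approximate ||T_zw||_inf by a frequency sweep, as in (2.18).
ws = np.logspace(-3, 3, 1000)
print(max(np.linalg.svd(T_zw(1j * w), compute_uv=False)[0] for w in ws))
```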
2.5 LMI Method

In the past couple of decades, LMIs have become a hot topic in the field of the analysis and design of control systems [6]. This is due to the good properties of LMIs, breakthroughs in mathematical programming, and the discovery of useful algorithms and ways of using them to solve problems. Of particular importance are the development of interior-point algorithms and the launch of the LMI toolbox in Matlab. Previously, Riccati equations and inequalities were used to represent and solve most control problems; but that involved a large number of parameters, and symmetric positive definite matrices needed to be adjusted beforehand. So, even though a solution might exist, it might not necessarily be found. This is a big drawback when dealing with real-world problems. LMIs, on the other hand, do not suffer from this handicap and, furthermore, require no adjustment of parameters.

2.5.1 Common Specifications of LMIs

An LMI is an expression of the form

F(x) = F0 + x1 F1 + · · · + xm Fm < 0,   (2.27)

where x1, x2, · · ·, xm are real variables, which are called the decision variables of the LMI (2.27); x = (x1, x2, · · ·, xm)^T ∈ Rm is the vector of decision variables, which is called the decision vector; and Fi = Fi^T ∈ R^{n×n}, i = 0, 1, · · ·, m, are given symmetric matrices.

In many system and control problems, the variables are matrices. One example is a Lyapunov matrix inequality:

F(X) = A^T X + XA + Q < 0,   (2.28)

where A ∈ R^{n×n} and Q = Q^T ∈ R^{n×n} are given constant matrices, and the variable X = X^T ∈ R^{n×n} is an unknown matrix. That is, the variable in this matrix inequality is a matrix. Let E1, E2, · · ·, Em be a basis of S^n = {N : N = N^T ∈ R^{n×n}}. For any symmetric matrix X = X^T ∈ R^{n×n}, there exist x1, x2, · · ·, xm such that X = Σ_{i=1}^{m} xi Ei. Therefore,

F(X) = F(Σ_{i=1}^{m} xi Ei) = A^T (Σ_{i=1}^{m} xi Ei) + (Σ_{i=1}^{m} xi Ei) A + Q
     = Q + x1(A^T E1 + E1 A) + · · · + xm(A^T Em + Em A) < 0.
Thus, (2.27) is the general form of a Lyapunov matrix inequality written in terms of LMIs. Replacing "<" with "≤" in (2.27) produces a non-strict LMI. For arbitrary affine functions F(x) and G(x): Rm → S^n, F(x) > 0 and F(x) < G(x) are also LMIs because they can be written as −F(x) < 0 and F(x) − G(x) < 0, respectively. The set of all x satisfying LMI (2.27) is a convex set. This property of LMIs makes it possible to solve some LMI problems by methods commonly used to solve convex optimization problems.

2.5.2 Standard LMI Problems

This section presents three generic LMI problems for which the Matlab LMI toolbox has solvers. Let F, G, and H be symmetric matrix affine functions, and let c be a given constant vector.

LMI problem (LMIP): For the LMI F(x) < 0, the problem is to determine whether or not there exists an x∗ such that F(x∗) < 0 holds. This is called a feasibility problem. That is, if there exists such an x∗, then the LMI is feasible; otherwise, it is infeasible.

Eigenvalue problem (EVP): The problem is to minimize the maximum eigenvalue of a matrix subject to an LMI constraint (or to prove that the constraint is infeasible). The general form of an EVP is:

Minimize λ subject to G(x) < λI, H(x) < 0.

EVPs can also appear in the equivalent form of minimizing a linear function subject to an LMI:

Minimize c^T x subject to F(x) < 0.

This is the standard form for the EVP solver in the LMI toolbox. The feasibility problem for the LMI F(x) < 0 can also be written as an EVP:
Minimize λ subject to F(x) − λI < 0.

Clearly, for any x, if λ is chosen large enough, (x, λ) is a feasible solution of the above problem; so, the problem certainly has a solution. If the minimum λ, λ∗, satisfies λ∗ ≤ 0, then the LMI F(x) < 0 is feasible.

Generalized eigenvalue problem (GEVP): The problem is to minimize the maximum generalized eigenvalue of a pair of affine matrix functions, subject to an LMI constraint. For two given symmetric matrices G and F of the same order and a scalar λ, if there exists a nonzero vector y such that Gy = λFy, then λ is called a generalized eigenvalue of the matrices G and F. The problem of calculating the maximum generalized eigenvalue of G and F can be transformed into an optimization problem subject to an LMI constraint. Suppose that F is positive definite and that λ is a scalar. If λ is sufficiently large, G − λF < 0. As λ decreases, G − λF becomes singular at some point; so, there exists a nonzero vector y such that Gy = λFy. This λ is the maximum generalized eigenvalue of G and F. Using this idea, we can obtain the maximum generalized eigenvalue of G and F by solving the following optimization problem:

Minimize λ subject to G − λF < 0.

If G and F are affine functions of x, the general form of the problem of minimizing the maximum generalized eigenvalue of the matrix functions G(x) and F(x) subject to an LMI constraint is

Minimize λ subject to G(x) < λF(x), F(x) > 0, H(x) < 0.

Note that, in this problem, the constraints are not jointly linear in x and λ.
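Outside Matlab, the same feasibility problems can be posed with a semidefinite-programming front end. The sketch below uses CVXPY as an illustrative stand-in for the LMI toolbox to test the Lyapunov matrix inequality (2.28) for an assumed Hurwitz matrix A, approximating the strict inequalities by a small margin ε.

```python
import numpy as np
import cvxpy as cp

# Illustrative data (assumptions for this sketch).
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
Q = np.eye(2)
eps = 1e-6  # margin used to approximate the strict inequalities

X = cp.Variable((2, 2), symmetric=True)
constraints = [X >> eps * np.eye(2),                      # X > 0
               A.T @ X + X @ A + Q << -eps * np.eye(2)]   # F(X) < 0, cf. (2.28)

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem (LMIP)
prob.solve()
print(prob.status)   # 'optimal' here means the LMI is feasible
print(X.value)
```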
2.6 Lemmas

In this section, some basic lemmas that are used extensively throughout this book are given without proof.
Lemma 2.6.1. [6] (Schur complement) For a given symmetric matrix S = S^T = [S11 S12; ∗ S22], where S11 ∈ R^{r×r}, the following conditions are equivalent:
(1) S < 0;
(2) S11 < 0, S22 − S12^T S11^{−1} S12 < 0; and
(3) S22 < 0, S11 − S12 S22^{−1} S12^T < 0.

Lemma 2.6.2. [7] For given matrices Q = Q^T, H, and E with appropriate dimensions,

Q + HF(t)E + E^T F^T(t) H^T < 0

holds for all F(t) satisfying F^T(t)F(t) ≤ I if and only if there exists a scalar ε > 0 such that

Q + ε^{−1} HH^T + ε E^T E < 0.

Lemma 2.6.3. [8] There exists a symmetric matrix X such that

[P1 + X  Q1; ∗  R1] > 0,  [P2 − X  Q2; ∗  R2] > 0

if and only if

[P1 + P2  Q1  Q2; ∗  R1  0; ∗  ∗  R2] > 0.

Lemma 2.6.4. [9] (S-procedure) Let Ti ∈ R^{n×n}, i = 0, 1, · · ·, p, be symmetric matrices. Consider the following condition on T0, T1, · · ·, Tp:

ζ^T T0 ζ > 0 for all ζ ≠ 0 such that ζ^T Ti ζ ≥ 0, i = 1, 2, · · ·, p.   (2.29)

Clearly, if there exist τi ≥ 0, i = 1, 2, · · ·, p, such that

T0 − Σ_{i=1}^{p} τi Ti > 0,   (2.30)

then (2.29) holds. It is a nontrivial fact that, when p = 1, the converse also holds (that is, (2.29) and (2.30) are equivalent), provided that there exists a ζ0 such that ζ0^T T1 ζ0 > 0.
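Lemma 2.6.1 is easy to sanity-check numerically: negative definiteness of a symmetric block matrix S can be tested either directly or through the two Schur complements in items (2) and (3). A small sketch with randomly generated data (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric negative definite S and partition it into blocks.
M = rng.standard_normal((4, 4))
S = -(M @ M.T) - 0.1 * np.eye(4)          # S < 0 by construction
S11, S12, S22 = S[:2, :2], S[:2, 2:], S[2:, 2:]

def neg_def(X):
    return np.all(np.linalg.eigvalsh(X) < 0)

print(neg_def(S))                                                   # condition (1)
print(neg_def(S11) and neg_def(S22 - S12.T @ np.linalg.solve(S11, S12)))  # condition (2)
print(neg_def(S22) and neg_def(S11 - S12 @ np.linalg.solve(S22, S12.T)))  # condition (3)
```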
Lemma 2.6.5. [10] Let A, D, E, F, and P be real matrices with appropriate dimensions, with F^T F ≤ I and P > 0. Then, the following propositions are true:
(1) For any x, y ∈ Rn, 2x^T y ≤ x^T P^{−1} x + y^T P y.
(2) For any x, y ∈ Rn and any ε > 0, 2x^T DFEy ≤ ε^{−1} x^T DD^T x + ε y^T E^T Ey.
(3) For any ε > 0 satisfying P − εDD^T > 0, (A + DFE)^T P^{−1} (A + DFE) ≤ ε^{−1} E^T E + A^T (P − εDD^T)^{−1} A.
2.7 Conclusion

This chapter provides basic knowledge and concepts on the stability of time-delay systems, including the concept of Lyapunov stability and some basic theorems, fundamental concepts related to the stability of time-delay systems, the H∞ norm, H∞ control, the LMI method, and some useful lemmas. This knowledge is the foundation for the study of later chapters.
References

1. X. X. Liao, L. Q. Wang, and P. Yu. Stability of Dynamical Systems. London: Elsevier, 2007.
2. S. Elaydi. An Introduction to Difference Equations. New York: Springer-Verlag, 2005.
3. J. K. Hale and S. M. Verduyn Lunel. Introduction to Functional Differential Equations. New York: Springer-Verlag, 1993.
4. K. Gu, V. L. Kharitonov, and J. Chen. Stability of Time-Delay Systems. Boston: Birkhäuser, 2003.
5. K. Zhou, J. C. Doyle, and K. Glover. Robust and Optimal Control. New Jersey: Prentice Hall, 1995.
6. S. Boyd, L. E. Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. Philadelphia: SIAM, 1994.
7. I. R. Petersen and C. V. Hollot. A Riccati equation approach to the stabilization of uncertain linear systems. Automatica, 22(4): 397-411, 1986.
8. K. Gu. A further refinement of discretized Lyapunov functional method for the stability of time-delay systems. International Journal of Control, 74(10): 967-976, 2001.
9. V. A. Yakubovič. The S-procedure in nonlinear control theory. Vestnik Leningrad University: Mathematics, 4(1): 73-93, 1977.
10. Y. Y. Cao, Y. X. Sun, and C. W. Cheng. Delay-dependent robust stabilization of uncertain systems with multiple state delays. IEEE Transactions on Automatic Control, 43(11): 1608-1612, 1998.
3. Stability of Systems with Time-Varying Delay
Increasing attention is being paid to delay-dependent stability criteria for linear time-delay systems. Investigations of such criteria for constant delays [1–10] usually involve either some type of frequency-domain method or the Lyapunov-Krasovskii functional method in the time domain. For time-varying delays, studies of such criteria [11–17] generally employ a fixed model transformation because frequency-domain methods and the discretized Lyapunov-Krasovskii functional method are too difficult to use in this case. Of the four types of fixed model transformations, three can handle time-varying delays: Model transformation I [12–14]; Model transformation III [18]; and Model transformation IV, which uses both Park's and Moon et al.'s inequalities. Model transformations III and IV are the most effective, and they can also be used to obtain delay-independent stability criteria. However, their use of fixed weighting matrices imposes certain limitations, as pointed out in Chapter 1.

On the other hand, there are often uncertainties due to errors in system modeling and changes in operating conditions. One way of describing a system uncertainty is a parameter uncertainty in the state equation, of which there are two types: a time-varying structured uncertainty and a polytopic-type uncertainty. For the former, [19] gave a necessary and sufficient condition under which stability criteria for nominal systems can easily be extended to uncertain systems. For the latter, research has shown that a parameter-dependent Lyapunov function or functional can eliminate the conservativeness of quadratic stability (see, e.g., [20–25] regarding linear continuous systems; [9–11, 26, 27] regarding time-delay systems; and [28–30] regarding discrete systems). However, one problem with using a parameter-dependent Lyapunov function or functional is that it is difficult to separate the system matrices from the Lyapunov matrices in the derivative. Many researchers have endeavored to do that. Some have devised a method of separating them that yields only a sufficient condition. Others have devised sufficient and
necessary criteria, but these have the drawback that they cannot be formulated solely in terms of LMIs and require the manual adjustment of parameters. So, they also impose limitations.

This chapter concerns delay-dependent stability problems for systems with a time-varying delay. For nominal systems, two approaches are first used to derive stability conditions for two different treatments of the term ẋ(t). The first approach is to replace the term ẋ(t) in the derivative of the Lyapunov-Krasovskii functional with the system equation and to use FWMs to express the relationships among the terms of the Newton-Leibnitz formula. This technique yields delay-dependent stability criteria. We also show that these criteria include delay-independent ones, and that Moon et al.'s criterion [5] is a special case of the criterion in this chapter [31]. The other approach is to retain the term ẋ(t) in the derivative of the Lyapunov-Krasovskii functional and to use FWMs to express the relationships among the terms of the state equation of the system. This technique yields delay-dependent stability criteria for systems with a time-varying delay. Moreover, we show that Fridman et al.'s criterion [11], which was obtained by using the descriptor model transformation in combination with Park's and Moon et al.'s inequalities, is a special case of the criterion in this chapter [32]. Then, we prove that the criteria obtained by these two different approaches are equivalent. Furthermore, we use Lemma 2.6.2 to extend these two categories of criteria to systems with time-varying structured uncertainties.

The criteria obtained by using FWMs to express the relationships among the terms of the system state equations separate the system matrices from the Lyapunov matrices in a natural way. That makes it easy to extend these criteria to systems with polytopic-type parameter uncertainties, which are handled using a parameter-dependent Lyapunov-Krasovskii functional. The resulting delay-dependent and delay-independent stability criteria are formulated in terms of LMIs [32]. Finally, the stability of systems with a time-varying delay is examined using the IFWM approach, which considers the relationships among the time-varying delay, its upper bound, and their difference, and does not ignore any useful terms in the derivative of the Lyapunov-Krasovskii functional [33–35]. This is in contrast to [26, 31, 32, 36–38], which did ignore some useful terms, thereby leading to considerable conservativeness.
3.1 Problem Formulation

Consider the following nominal linear system with a time-varying delay:

ẋ(t) = Ax(t) + Ad x(t − d(t)), t > 0,
x(t) = φ(t), t ∈ [−h, 0],   (3.1)

where x(t) ∈ Rn is the state vector; A and Ad are constant matrices with appropriate dimensions; the delay, d(t), is a time-varying continuous function; and the initial condition, φ(t), is a continuously differentiable function of t ∈ [−h, 0]. In this chapter, the delay is assumed to satisfy one or both of the following conditions:

0 ≤ d(t) ≤ h,   (3.2)
ḋ(t) ≤ μ,   (3.3)

where h and μ are constants.

A system containing time-varying structured uncertainties is described by

ẋ(t) = (A + ΔA(t))x(t) + (Ad + ΔAd(t))x(t − d(t)), t > 0,
x(t) = φ(t), t ∈ [−h, 0].   (3.4)

The uncertainties are assumed to be of the form

[ΔA(t) ΔAd(t)] = DF(t)[Ea Ead],   (3.5)

where D, Ea, and Ead are constant matrices with appropriate dimensions; and F(t) is an unknown, real, and possibly time-varying matrix with Lebesgue-measurable elements satisfying

F^T(t)F(t) ≤ I, ∀t.   (3.6)

This chapter also discusses another class of uncertainties, namely, polytopic-type uncertainties. For this class, the matrices A and Ad of system (3.1) contain uncertainties and satisfy the real, convex, polytopic-type model

[A Ad] ∈ Ω,  Ω = { [A(ξ) Ad(ξ)] = Σ_{j=1}^{p} ξj [Aj Adj],  Σ_{j=1}^{p} ξj = 1, ξj ≥ 0 },   (3.7)

where Aj and Adj, j = 1, 2, · · ·, p, are constant matrices with appropriate dimensions; and ξj, j = 1, 2, · · ·, p, are time-invariant uncertainties.
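Before turning to the LMI-based analysis, it can be instructive to simulate (3.1) directly. The sketch below integrates the system with a fixed-step Euler scheme and a stored history segment; the matrices, the constant initial function, and the sinusoidal delay are assumptions chosen only for illustration and are consistent with (3.2) and (3.3).

```python
import numpy as np

# Illustrative data for system (3.1) (assumed, not taken from the book).
A  = np.array([[-2.0, 0.0], [0.0, -1.0]])
Ad = np.array([[-1.0, 0.0], [-1.0, -1.0]])
h, mu = 0.8, 0.5
d = lambda t: 0.5 * h * (1.0 + np.sin(2.0 * mu * t / h))  # 0 <= d(t) <= h, |d'(t)| <= mu

dt, T = 1e-3, 10.0
n_hist = int(round(h / dt))
steps = int(round(T / dt))

# Constant initial function phi(t) = [1, -1]^T on [-h, 0].
x = np.tile(np.array([1.0, -1.0]), (n_hist + steps + 1, 1))

for k in range(n_hist, n_hist + steps):
    t = (k - n_hist) * dt
    k_del = k - int(round(d(t) / dt))            # index of x(t - d(t))
    x[k + 1] = x[k] + dt * (A @ x[k] + Ad @ x[k_del])

print("||x(T)|| =", np.linalg.norm(x[-1]))       # decays if the simulated system is stable
```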
3.2 Stability of Nominal System

Now, we use the FWM approach to obtain delay-dependent stability conditions for systems with a time-varying delay. Rather than using the Newton-Leibnitz formula to directly replace the delay term, we use FWMs to take into account the relationships among the terms of the Newton-Leibnitz formula in the derivation of delay-dependent stability criteria for nominal system (3.1).

This section is divided into three parts. In the first part, the term ẋ(t) is replaced with system equation (3.1) in the conventional way. In the second part, it is retained, and FWMs are used to express the relationships among the terms of the state equation of the system. This method enables the system matrices and Lyapunov matrices to be easily separated, which lays the foundation for a discussion of a parameter-dependent Lyapunov-Krasovskii functional in Section 3.4. In the third part, we prove that these two treatments are equivalent. This section first examines delay- and rate-dependent stability conditions; and based on those conditions, a simple procedure yields delay-dependent and rate-independent ones.

3.2.1 Replacing the Term ẋ(t)

The Newton-Leibnitz formula gives us

x(t − d(t)) = x(t) − ∫_{t−d(t)}^{t} ẋ(s)ds.

For any appropriately dimensioned matrices N1 and N2, the following is true:

2[x^T(t)N1 + x^T(t − d(t))N2][x(t) − ∫_{t−d(t)}^{t} ẋ(s)ds − x(t − d(t))] = 0.   (3.8)

In the next theorem, the terms on the left side of this equation are added to the derivative of the Lyapunov-Krasovskii functional. The FWMs, N1 and N2, indicate the relationships among the terms of the Newton-Leibnitz formula; and optimal values for them can be obtained by solving LMIs. Now, replacing the term ẋ(t) with system equation (3.1) yields the following theorem.

Theorem 3.2.1. Consider nominal system (3.1) with a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system is asymptotically stable if there exist matrices P > 0, Q ≥ 0, Z > 0, and
X = [X11 X12; ∗ X22] ≥ 0, and any appropriately dimensioned matrices N1 and N2 such that the following LMIs hold:

Φ = [ Φ11   Φ12   hA^T Z
      ∗     Φ22   hAd^T Z
      ∗     ∗     −hZ ] < 0,   (3.9)

Ψ = [ X11   X12   N1
      ∗     X22   N2
      ∗     ∗     Z ] ≥ 0,   (3.10)
t−d(t)
t+θ
where P > 0, Q 0, and Z > 0 are to be determined. This type of functional is called a quadratic Lyapunov-Krasovskii functional. ⎡ ⎤ For any matrix X = ⎣
hη1T (t)Xη1 (t)
−
X11 X12
t
t−d(t)
∗
X22
⎦ 0, the following inequality is true:
η1T (t)Xη1 (t)ds 0,
(3.12)
where η1 (t) = [xT (t), xT (t − d(t))]T . Calculating the derivative of V (xt ) along the solutions of (3.1) and adding the left sides of (3.8) and (3.12) to it yield V˙ (xt ) = xT (t)[P A + AT P ]x(t) + 2xT (t)P Ad x(t − d(t)) + xT (t)Qx(t) T ˙ −[1 − d(t)]x (t − d(t))Qx(t − d(t)) + hx˙ T (t)Z x(t) ˙ t − x˙ T (s)Z x(s)ds ˙ t−h
46
3. Stability of Systems with Time-Varying Delay
xT (t)[P A + AT P ]x(t) + 2xT (t)P Ad x(t − d(t)) + xT (t)Qx(t) −(1 − μ)xT (t − d(t))Qx(t − d(t)) + hx˙ T (t)Z x(t) ˙ t − x˙ T (s)Z x(s)ds ˙ t−d(t)
T T +2 x (t)N1 + x (t − d(t))N2 x(t) − +hη1T (t)Xη1 (t) − = η1T (t)Ξη1 (t) −
t
x(s)ds ˙ − x(t − d(t))
t−d(t) t
t−d(t)
t
t−d(t)
η1T (t)Xη1 (t)ds
η2T (t, s)Ψ η2 (t, s)ds,
where η2 (t, s) = [xT (t), xT (t − d(t)), x˙ T (s)]T , ⎡ ⎤ Φ11 + hAT ZA Φ12 + hAT ZAd ⎦, Ξ=⎣ T ∗ Φ22 + hAd ZAd with Φ11 , Φ12 , and Φ22 being defined in (3.9) and Ψ being defined in (3.10). If Ξ < 0 and Ψ 0, then V˙ (xt ) < −εx(t)2 holds for any sufficiently small ε > 0, which ensures the asymptotic stability of system (3.1). From the Schur complement, Ξ < 0 is equivalent to (3.9). Thus, if LMIs (3.9) and (3.10) are true, then system (3.1) is asymptotically stable. This completes the proof.
Remark 3.2.1. For a system with a constant delay, setting X12 = 0, X22 = 0, and N2 = 0 in Theorem 3.2.1 yields Theorem 1 in [5]. That is, the above theorem is an extension of Moon et al.’s. Instead of making X12 , X22 , and N2 fixed matrices, Theorem 3.2.1 selects them by solving LMIs. So, it always chooses suitable ones, thus overcoming the conservativeness of Theorem 1 in [5]. Remark 3.2.2. If the matrices N1 , N2 , and X in (3.10) are all set to zero, and Z = εI (where ε is a sufficiently small positive scalar), then Theorem 3.2.1 is identical to the well-known delay-independent stability criterion in [39] and [40], which is now stated. Corollary 3.2.1. Consider nominal system (3.1) with a delay, d(t), that satisfies both (3.2) and (3.3). When μ = 0, the system is asymptotically stable if there exist matrices P > 0 and Q 0 such that the following LMI holds:
3.2 Stability of Nominal System
⎡ ⎣
47
⎤
P A + AT P + Q
P Ad
∗
−Q
⎦ < 0.
Thus, any system that exhibits delay-independent stability, as determined by Corollary 3.2.1, is, for all practical purposes, asymptotically stable for any delay satisfying 0 d(t) h, where h is a positive real number. So, Theorem 3.2.1 contains the well-known delay-independent stability criterion. On the other hand, Theorem 3.2.1 contains a delay-dependent and rateindependent condition. That is, although there is a limit on the upper bound on the delay, there is no limit on the upper bound on the derivative of the delay. In fact, setting Q = 0 in the theorem yields a delay-dependent and rate-independent stability criterion. Corollary 3.2.2. Consider nominal system (3.1) with a delay, d(t), that satisfies (3.2) [but not necessary (3.3)]. Given a scalar h > 0, the system is stable if there exist matrices P > 0, Z > 0, and ⎡ asymptotically ⎤ X =⎣
X11 X12
⎦ 0, and any appropriately dimensioned matrices N1 and ∗ X22 N2 such that LMI (3.10) and the following LMI hold: ⎡ ⎤ Φ¯11 Φ12 hAT Z ⎥ ⎢ ⎥ ⎢ (3.13) Φ¯ = ⎢ ∗ Φ¯22 hAT ⎥ < 0, Z d ⎦ ⎣ ∗ ∗ −hZ
where Φ¯11 = P A + AT P + N1 + N1T + hX11 , Φ¯22 = −N2 − N2T + hX22 , and Φ12 is defined in (3.9). 3.2.2 Retaining the Term x(t) ˙ In contrast to the previous subsection, where x(t) ˙ in V˙ (xt ) was replaced with Ax(t) + Ad x(t − d(t)), in this subsection the term x(t) ˙ is retained, and FWMs are used to express the relationships among the terms of the system equation. The following equation holds for any matrices Tj , j = 1, 2 with appropriate dimensions:
48
3. Stability of Systems with Time-Varying Delay
˙ − Ax(t) − Ad x(t − d(t))] = 0. 2 xT (t)T1 + x˙ T (t)T2 [x(t)
(3.14)
Just as in Theorem 3.2.1, FWMs express the relationships among the terms of the Newton-Leibnitz formula, and we combine them with equation (3.14) to obtain the following theorem. Theorem 3.2.2. Consider nominal system (3.1) with a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system is asymptotically stable if⎤ there exist matrices P > 0, Q 0, Z > 0, and ⎡ X X12 ⎢ 11 ⎢ X = ⎢ ∗ X22 ⎣ ∗ ∗ Ni , i = 1, 2, 3 and
X13
⎥ ⎥ X23 ⎥ 0, and any appropriately dimensioned matrices ⎦ X33 Tj , j = 1, 2 such that the following LMIs hold:
⎡
⎤ Γ11 Γ12 Γ13
⎢ ⎢ Γ =⎢ ∗ ⎣ ∗
⎥ ⎥ Γ22 Γ23 ⎥ < 0, ⎦ ∗ Γ33
⎡
(3.15)
⎤ X11 X12 X13 N1
⎢ ⎢ ⎢ ∗ Θ=⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ X22 X23 N2 ⎥ ⎥ 0, ⎥ ∗ X33 N3 ⎥ ⎦ ∗ ∗ Z
(3.16)
where Γ11 Γ12 Γ13 Γ22 Γ23 Γ33
= Q + N1 + N1T − AT T1T − T1 A + hX11 , = P + N2T + T1 − AT T2T + hX12 , = N3T − N1 − T1 Ad + hX13 , = hZ + T2 + T2T + hX22 , = −N2 − T2 Ad + hX23 , = −(1 − μ)Q − N3 − N3T + hX33 .
Proof. From the Newton-Leibnitz formula, we know that the following equation holds for any appropriately dimensioned matrices Ni , i = 1, 2, 3: T T T 2 x (t)N1 + x˙ (t)N2 +x (t−d(t))N3 x(t)−x(t−d(t))−
t
x(s)ds ˙ = 0;
t−d(t)
(3.17)
3.2 Stability of Nominal System
and (3.14) holds on the basis of (3.1). On the other hand, for any matrix X 0, the following holds: t hζ1T (t)Xζ1 (t) − ζ1T (t)Xζ1 (t)ds 0,
49
(3.18)
t−d(t)
where ζ1 (t) = [xT (t), x˙ T (t), xT (t − d(t))]T . Calculating the derivative of V (xt ) and using (3.14), (3.17), and (3.18) yield t V˙ (xt ) ζ1T (t)Γ ζ1 (t) − ζ2T (t, s)Θζ2 (t, s)ds, (3.19) t−d(t)
where ζ2 (t, s) = [ζ1T (t), x˙ T (s)]T ; and Γ and Θ are defined in (3.15) and (3.16), respectively. If Γ < 0 and Θ 0, then, for any sufficiently small positive scalar ε, V˙ (xt ) < −εx(t)2 , which ensures that system (3.1) is asymptotically stable. This completes the proof.
Remark 3.2.3. If the matrices N3 , X13 , X23 , and X33 are all set to zero, then Theorem 3.2.2 is equivalent to Lemma 1 in [11] for systems with a single delay. However, in our theorem, the optimal values of these matrices can be obtained by solving LMIs. That is, Lemma 1 in [11] is a special case of Theorem 3.2.2. In the above theorem, the system matrices and Lyapunov matrices are separated, which sets the stage for a discussion of a parameter-dependent Lyapunov-Krasovskii functional in Section 3.4. Now, if we set Q = 0, we can use Theorem 3.2.2 to obtain a delaydependent and rate-independent stability criterion. Corollary 3.2.3. Consider nominal system (3.1) with a delay, d(t), that satisfies (3.2) [but not necessary (3.3)]. Given a scalar h > 0, the system is ⎡ asymptotically⎤stable if there exist matrices P > 0, Z > 0, and X = X X12 X13 ⎢ 11 ⎥ ⎢ ⎥ ⎢ ∗ X22 X23 ⎥ 0, and any appropriately dimensioned matrices Ni , i = ⎣ ⎦ ∗ ∗ X33 1, 2, 3 and Tj , j = 1, 2 such that LMI (3.16) and the following LMI hold: ⎡ ⎤ Γˇ11 Γ12 Γ13 ⎢ ⎥ ⎢ ⎥ (3.20) ⎢ ∗ Γ22 Γ23 ⎥ < 0, ⎣ ⎦ ∗ ∗ Γˇ33
50
3. Stability of Systems with Time-Varying Delay
where Γˇ11 = N1 + N1T − AT T1T − T1 A + hX11 , Γˇ33 = −N3 − N3T + hX33 ; and Γ12 , Γ13 , Γ22 , and Γ23 are defined in (3.15). 3.2.3 Equivalence Analysis In this subsection, we prove that Theorem 3.2.1 is equivalent to Theorem 3.2.2. Let ⎡ ⎤ I AT 0 ⎢ ⎥ J1 0 J1 = ⎣ 0 I 0 ⎦ , J2 = . 0 I T 0 Ad I Now,
⎡
⎤ Γˆ11 Γˆ12 Γˆ13 ⎢ ⎥ ˆ ˆ ⎥ Γˆ = J1 Γ J1T = ⎢ ⎣ ∗ Γ22 Γ23 ⎦ < 0, ∗ ∗ Γˆ33 ⎤ ⎡ ˆ11 Θ ˆ12 Θ ˆ13 N1 + AT N2 Θ ⎥ ⎢ ⎥ ⎢ ∗ X Θ ˆ N ⎥ ⎢ 22 23 2 ˆ = J2 ΘJ2T = ⎢ Θ ⎥ 0, T ⎢ ∗ ˆ ∗ Θ33 N3 + Ad N2 ⎥ ⎦ ⎣ ∗ ∗ ∗ Z
(3.21)
(3.22)
where Γ and Θ are defined in (3.15) and (3.16), respectively; and ˆ 11 , Γˆ11 = P A + AT P + Q + N1 + N1T + N2T A + AT N2 + hAT ZA + hΘ T T T ˆ Θ11 = X11 + X12 A + A X12 + A X22 A, ˆ12 , Γˆ12 = P + T1 + hAT Z + AT T2 + N2T + hΘ T ˆ12 = X12 + A X22 , Θ ˆ ˆ13 , Γ13 = P Ad + N2T Ad + N3T − N1 − AT N2 + hAT ZAd + hΘ T T ˆ 13 = X13 + A X23 + X12 Ad + A X22 Ad , Θ ˆ Γ22 = hZ + T2 + T2T + hX22 , ˆ23 , Γˆ23 = −N2 + T2T Ad + hZAd + hΘ ˆ23 = X23 + X22 Ad , Θ T T ˆ ˆ Γ33 = −(1 − μ)Q − N3 − N3T − AT d N2 − N2 Ad + hAd ZAd + hΘ33 , T T T ˆ 33 = X33 + X Ad + A X23 + A X22 Ad . Θ 23 d d On the one hand, if LMIs (3.21) and (3.22) are feasible, then setting ˆ ˆ ˆ N1 = N1 + AT N2 , N2 = N3 + AT d N2 , X11 = Θ11 , X12 = Θ13 , and X22 = Θ33
3.3 Stability of Systems with Time-Varying Structured Uncertainties
51
(where the right sides of these equations are the feasible solutions of LMIs (3.21) and (3.22)) guarantees that LMIs (3.9) and (3.10) hold. On the other hand, if LMIs (3.9) and (3.10) are feasible, then setting T1 = −P , T2 = −hZ, N2 = 0, X12 = 0, X22 = 0, X23 = 0, X13 = X12 , X33 = X22 , and N3 = N2 in LMIs (3.21) and (3.22) (where the right sides of these equations are the feasible solutions of LMIs (3.9) and (3.10)) makes all the elements in the second row and all those in the second column zero, except for Γˆ22 = −hZ. Removing that row and that column converts the equations into LMIs (3.9) and (3.10) (LMI (3.9) is equivalent to Ξ < 0), which guarantees that LMIs (3.21) and (3.22) hold. Therefore, LMIs (3.21) and (3.22) are equivalent to LMIs (3.9) and (3.10), which means that LMIs (3.15) and (3.16) are equivalent to LMIs (3.9) and (3.10).
3.3 Stability of Systems with Time-Varying Structured Uncertainties This section explains how to extend the stability criteria for nominal systems to systems with time-varying structured uncertainties using Lemma 2.6.2. 3.3.1 Robust Stability Analysis We can use Lemma 2.6.2 to extend the stability criteria for nominal system (3.1) to system (3.4), which has time-varying structured uncertainties. First of all, extending Theorem 3.2.1 to system (3.4) yields the following theorem. Theorem 3.3.1. Consider system (3.4) with a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system ⎡ is robustly ⎤ stable if there exist matrices P > 0, Q 0, Z > 0, and X = ⎣
X11 X12
⎦ 0, any ∗ X22 appropriately dimensioned matrices N1 and N2 , and a scalar λ > 0 such that LMI (3.10) and the following LMI hold: ⎡ ⎤ Φ11 + λEaT Ea Φ12 + λEaT Ead hAT Z P D ⎢ ⎥ ⎢ ⎥ T ⎢ Ead hAT Z 0 ⎥ ∗ Φ22 + λEad d ⎢ ⎥ < 0, (3.23) ⎢ ⎥ ⎢ ∗ ∗ −hZ hZD ⎥ ⎣ ⎦ ∗ ∗ ∗ −λI
52
3. Stability of Systems with Time-Varying Delay
where Φ11 , Φ12 , and Φ22 are defined in (3.9). Proof. Replacing A and Ad in (3.9) with A + DF (t)Ea and Ad + DF (t)Ead , respectively, makes (3.9) equivalent to the following condition: ⎡ ⎤ ⎡ ⎤ PD EaT ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ T ⎥ T T Φ+ ⎢ 0 ⎥ F (t) [Ea Ead 0]+ ⎢ Ead ⎥ F (t) D P 0 hDT Z < 0. (3.24) ⎣ ⎦ ⎣ ⎦ hZD 0 From Lemma 2.6.2, we know that a necessary and sufficient condition guaranteeing (3.24) is that there exists a scalar λ > 0 such that ⎡
⎡
⎤ PD
EaT
⎤
⎢ ⎢ ⎥ ⎥ ⎢ T ⎥ ⎢ ⎥ Φ+λ−1 ⎢ 0 ⎥ DT P 0 hDT Z +λ ⎢ Ead ⎥ Ea Ead 0 < 0. ⎣ ⎣ ⎦ ⎦ hZD 0
(3.25)
Applying the Schur complement shows that (3.25) is equivalent to (3.23). This completes the proof.
Similarly, extending Theorem 3.2.2 yields another theorem. Theorem 3.3.2. Consider system (3.4) with a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system if ⎡ is robustly stable ⎤ X ⎢ 11 ⎢ there exist matrices P > 0, Q 0, Z > 0, and X = ⎢ ∗ ⎣ ∗ any appropriately dimensioned matrices Ni , i = 1, 2, 3 and a scalar λ > 0 such that LMI (3.16) and the following LMI ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
Γ11 + λEaT Ea Γ12 Γ13 + λEaT Ead −T1 D ∗
Γ22
Γ23
∗
∗
T Ead Γ33 + λEad
∗
∗
∗
X12 X13
⎥ ⎥ X22 X23 ⎥ 0, ⎦ ∗ X33 Tj , j = 1, 2, and hold:
⎤
⎥ ⎥ −T2 D ⎥ ⎥ < 0, ⎥ 0 ⎥ ⎦ −λI
(3.26)
where Γij , i = 1, 2, 3, i j 3 are defined in (3.15). We can easily derive delay-dependent and rate-independent criteria from Theorems 3.3.1 and 3.3.2 by setting Q = 0.
3.3 Stability of Systems with Time-Varying Structured Uncertainties
53
Corollary 3.3.1. Consider system (3.4) with a delay, d(t), that satisfies (3.2) [but not necessary (3.3)]. Given a scalar h > 0, ⎡the system ⎤is robustly stable if there exist matrices P > 0, Z > 0, and X = ⎣
X11 X12
⎦ 0, any ∗ X22 appropriately dimensioned matrices N1 and N2 , and a scalar λ > 0 such that LMI (3.10) and the following LMI hold: ⎡ ⎤ Φ¯11 + λEaT Ea Φ12 + λEaT Ead hAT Z P D ⎢ ⎥ ⎢ ⎥ T ⎢ ⎥ ∗ Φ¯22 + λEad Ead hAT Z 0 d ⎢ ⎥ < 0, (3.27) ⎢ ⎥ ⎢ ∗ ∗ −hZ hZD ⎥ ⎣ ⎦ ∗ ∗ ∗ −λI where Φ12 is defined in (3.9), and Φ¯11 and Φ¯22 are defined in (3.13).
Corollary 3.3.2. Consider system (3.4) with a delay, d(t), that satisfies (3.2) [but not necessary (3.3)]. Given a scalar h > 0,⎡the system is robustly ⎤ X11 X12 X13 ⎢ ⎥ ⎢ ⎥ stable if there exist matrices P > 0, Z > 0, and X = ⎢ ∗ X22 X23 ⎥ 0, ⎣ ⎦ ∗ ∗ X33 any appropriately dimensioned matrices Ni , i = 1, 2, 3 and Tj , j = 1, 2, and a scalar λ > 0 such that LMI (3.16) and the following LMI hold: ⎡ ⎤ Γˇ11 + λEaT Ea Γ12 Γ13 + λEaT Ead −T1 D ⎢ ⎥ ⎢ ⎥ ⎢ Γ23 −T2 D ⎥ ∗ Γ22 ⎢ ⎥ < 0, (3.28) ⎢ ⎥ T ⎢ ∗ ∗ Γˇ33 + λEad Ead 0 ⎥ ⎣ ⎦ ∗ ∗ ∗ −λI where Γ12 , Γ13 , Γ22 , and Γ23 are defined in (3.15), and Γˇ11 and Γˇ33 are defined in (3.20). 3.3.2 Numerical Example Example 3.3.1. Consider the robust stability of system (3.4) with the following parameters: ⎡ ⎤ ⎡ ⎤ −2 0 −1 0 ⎦ , Ad = ⎣ ⎦, A=⎣ 0 −1 −1 −1 Ea = diag {1.6, 0.05} , Ead = diag {0.1, 0.3} , D = I.
54
3. Stability of Systems with Time-Varying Delay
Table 3.1. Allowable upper bound, h, for various μ (Example 3.3.1) μ
0
0.5
0.9
unknown μ
[7]
0.2013
—
—
—
[13]
0.2412
< 0.2
< 0.1
—
[14]
0.2412
0.2195
0.1561
—
[5]
0.7059
—
—
—
[11]
1.1490
0.9247
0.6710
0.5764
Theorems 3.3.1 and 3.3.2
1.1490
0.9247
0.6954
—
Corollaries 3.3.1 and 3.3.2
—
—
—
0.6274
Table 3.1 shows the upper bounds on the delay for different μ obtained from Theorems 3.3.1 and 3.3.2 and Corollaries 3.3.1 and 3.3.2. For comparison, the table also lists the upper bounds obtained from the criteria in [5,7,11,13,14]. Note that the values for [11] were obtained by using Lemma 1 in [11] together with Lemma 2.6.2 in Chapter 2. It is clear that the theorems and corollaries in this chapter produce much better results than those in [5,7,13,14], and the same or better results than those in [11]. This example shows that Theorems 3.3.1 and 3.3.2 produce the same results, as do Corollaries 3.3.1 and 3.3.2, which demonstrates the equivalence of the two classes of criteria in Subsection 3.2.3.
3.4 Stability of Systems with Polytopic-Type Uncertainties This section employs a parameter-dependent Lyapunov-Krasovskii functional to examine the stability of systems with polytopic-type uncertainties. 3.4.1 Robust Stability Analysis Polytopic-type uncertainties (3.7) are an important class of uncertainties because they can be used to represent uncertainties described in terms of interval matrices. Recent research has shown that a parameter-dependent Lyapunov-Krasovskii functional can overcome the conservativeness of a quadratic Lyapunov-Krasovskii functional of the form (3.11). The basic procedure for using one to handle polytopic-type uncertainties has three steps: Step 1: Obtain a stability condition for the nominal system.
3.4 Stability of Systems with Polytopic-Type Uncertainties
55
Step 2: Derive either a sufficient condition or a necessary and sufficient condition for the original condition by separating the Lyapunov matrices from the system matrices. Step 3: Extending the condition obtained in Step 2 with a parameterdependent Lyapunov-Krasovskii functional yields a new condition. However, it is difficult to separate the Lyapunov and system matrices. Conditions obtained from the separation are usually only sufficient conditions for the original conditions, which leads to conservativeness. Even if some of them are necessary and sufficient conditions for the original ones, the criteria obtained cannot be expressed in terms of LMIs because of newly introduced parameters. Regarding the delay-dependent conditions of a parameterdependent Lyapunov-Krasovskii functional, it is not difficult to separate the matrices by the method of Fridman et al. [11, 26, 27, 41]; but the problem is that the weighting matrices they employ are fixed, which leads to conservativeness. Theorem 3.2.2 separates the Lyapunov and system matrices in a natural way and is equivalent to Theorem 3.2.1. So, considering the delay-dependent conditions of a parameter-dependent Lyapunov-Krasovskii functional based on Theorem 3.2.2 leads to the following theorem. Theorem 3.4.1. Consider system (3.1) with polytopic-type uncertainties (3.7) and a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system is robustly stable if there exist matrices Pj > 0, Qj 0, ⎡ ⎤ (j) (j) (j) X11 X12 X13 ⎢ ⎥ ⎢ (j) (j) ⎥ Zj > 0, and X (j) = ⎢ ∗ X22 X23 ⎥ 0, j = 1, 2, · · · , p, and appropri⎣ ⎦ (j) ∗ ∗ X33 ately dimensioned matrices Nij , i = 1, 2, 3, j = 1, 2, · · · , p and Tk , k = 1, 2 such that LMI (3.29) and the following LMIs hold for j = 1, 2, · · · , p : ⎡ ⎤ (j) (j) (j) Γ˜11 Γ˜12 Γ˜13 ⎢ ⎥ ⎢ (j) ˜ (j) ⎥ (3.29) Γ˜ (j) = ⎢ ∗ Γ˜22 Γ23 ⎥ < 0, ⎣ ⎦ (j) ∗ ∗ Γ˜ 33
⎡
(j)
(j)
(j)
⎤
X11 X12 X13 N1j
Ψ˜ (j)
⎢ ⎢ ⎢ ∗ =⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ (j) (j) X22 X23 N2j ⎥ ⎥ 0, ⎥ (j) ∗ X33 N3j ⎥ ⎦ ∗ ∗ Zj
(3.30)
56
3. Stability of Systems with Time-Varying Delay
where (j) (j) T T − AT Γ˜11 = Qj + N1j + N1j j T1 − T1 Aj + hX11 , (j) (j) T T + T1 − AT Γ˜12 = Pj + N2j j T2 + hX12 , (j) (j) Γ˜ = N T − N1j − T1 Adj + hX , 13
13
3j
(j) (j) Γ˜22 = hZj + T2 + T2T + hX22 , (j) (j) Γ˜23 = −N2j − T2 Adj + hX23 , (j) (j) T Γ˜33 = −(1 − μ)Qj − N3j − N3j + hX33 .
Proof. Choose the following parameter-dependent Lyapunov-Krasovskii functional candidate: p p t T Vu (xt ) = x (t)ξj Pj x(t) + xT (s)ξj Qj x(s)ds j=1
+
p 0 j=1
−h
t−d(t)
j=1 t
x˙ T (s)ξj Zj x(s)dsdθ, ˙
(3.31)
t+θ
where Pj > 0, Qj 0, and Zj > 0, j = 1, 2, · · · , p are to be determined. Following a line similar to the one in Theorem 3.2.2 yields V˙ u (xt )
p j=1
ζ1T (t)ξj Γ˜ (j) ζ1 (t)−
p j=1
t
t−d(t)
ζ2T (t, s)ξj Ψ˜ (j) ζ2 (t, s)ds,
(3.32)
where ζ1 (t) and ζ2 (t, s) are defined in (3.18) and (3.19), respectively; and Γ˜ (j) and Ψ¯ (j) , j = 1, 2, · · · , p are defined in (3.29) and (3.30), respectively. If Γ˜ (j) < 0 and Ψ˜ (j) 0, j = 1, 2, · · · , p, then V˙ u (xt ) < −εx(t)2 for a sufficiently small ε > 0, which ensures the robust stability of system (3.1) with polytopic-type uncertainties. This completes the proof.
Now, setting Qj = 0, j = 1, 2, · · · , p, yields the following delay-dependent and rate-independent corollary. Corollary 3.4.1. Consider system (3.1) with polytopic-type uncertainties (3.7) and a delay, d(t), that satisfies (3.2) [but not necessary (3.3)]. Given a scalar h > 0, the system is robustly stable if there exist matrices Pj > 0, ⎡ ⎤ (j) (j) (j) X11 X12 X13 ⎢ ⎥ ⎢ (j) (j) ⎥ Zj > 0, and X (j) = ⎢ ∗ X22 X23 ⎥ 0, j = 1, 2, · · · , p, and appropri⎣ ⎦ (j) ∗ ∗ X33 ately dimensioned matrices Nij , i = 1, 2, 3, j = 1, 2, · · · , p and Tk , k = 1, 2 such that LMI (3.30) and the following LMI hold for j = 1, 2, · · · , p :
3.4 Stability of Systems with Polytopic-Type Uncertainties
⎡
(j) (j) (j) Γ´11 Γ˜12 Γ˜13
⎢ ⎢ ⎢ ∗ ⎣ ∗
57
⎤
⎥ (j) (j) ⎥ Γ˜22 Γ˜23 ⎥ < 0, ⎦ (j) ∗ Γ´33
(3.33)
where (j) (j) T T Γ´11 = N1j + N1j − AT j T1 − T1 Aj + hX11 ; (j) (j) T + hX33 ; Γ´33 = −N3j − N3j (j) (j) (j) (j) and Γ˜12 , Γ˜13 , Γ˜22 , and Γ˜23 are defined in (3.29).
On the other hand, Ψ˜ (j) must be positive semi-definite, not positive definite, to prove Theorem 3.4.1. Setting all the matrix elements of Ψ˜ (j) (namely, Zj , X (j) , and Nij , i = 1, 2, 3, j = 1, 2, · · · , p) to zero produces the following delay-independent and rate-dependent condition. That is, although a limit is imposed on the upper bound on the derivative of the delay, there is no limit on the upper bound on the delay. Corollary 3.4.2. Consider system (3.1) with polytopic-type uncertainties (3.7) and a delay, d(t), that satisfies (3.3) [but not necessary (3.2)]. Given a scalar μ, the system is robustly stable if there exist matrices Pj > 0 and Qj 0, j = 1, 2, · · · , p, and any appropriately dimensioned matrices Tk , k = 1, 2 such that the following LMI holds for j = 1, 2, · · · , p : ⎤ ⎡ (j) (j) (j) Γ˘11 Γ˘12 Γ˘13 ⎥ ⎢ ⎢ (j) ˘ (j) ⎥ (3.34) ⎢ ∗ Γ˘22 Γ23 ⎥ < 0, ⎦ ⎣ (j) ∗ ∗ Γ˘33 where (j) T Γ˘11 = Qj − AT j T1 − T1 Aj , (j) T Γ˘12 = Pj + T1 − AT j T2 , (j) Γ˘13 = −T1 Adj , (j) Γ˘22 = T2 + T2T , (j) Γ˘23 = −T2 Adj , (j) Γ˘ = −(1 − μ)Qj . 33
3.4.2 Numerical Example The numerical example in this subsection demonstrates the effectiveness of the above method and shows how much of an improvement it is over other methods.
58
3. Stability of Systems with Time-Varying Delay
Example 3.4.1. Consider system (3.1) with polytopic-type uncertainties (3.7) and with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ −0.2 0 −2 −1 −1.9 0 ⎦ , A2 = ⎣ ⎦ , A3 = ⎣ ⎦, A1 = ⎣ 0 −0.09 0 −2 0 −1 ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ −0.1 0 01 −0.9 0 ⎦ , Ad2 = ⎣ ⎦ , Ad3 = ⎣ ⎦. Ad1 = ⎣ −0.1 −0.1 10 −1 −1.1 Assume the delay is time-invariant (μ = 0). The upper bound on the delay is 0.4149 in [9], 0.6142 in [10], and 4.2423 in [11, 26, 27]. However, Theorem 3.4.1 in this section shows that the system is robustly stable for h = 4.2501. Theorem 3.4.1 yields a larger maximum upper bound on the allowable size of the delay than [9–11, 26, 27] do. Moreover, Table 3.2 compares the upper bounds obtained by Fridman & Shaked [11,26,27] and those we obtained with Theorem 3.4.1 and Corollary 3.4.1 for various μ. Clearly, ours are bigger than theirs. Table 3.2. Allowable upper bound, h, for various μ (Example 3.4.1) μ
0
0.5
0.9
unknown μ
[9]
0.4149
—
—
—
[10]
0.6142
—
—
—
[11, 26, 27]
4.2423
1.8088
0.9670
0.7963
Theorem 3.4.1
4.2501
1.8261
1.0589
—
Corollary 3.4.1
—
—
—
0.9090
3.5 IFWM Approach Although the FWM approach produces better results than other methods, there is room for further investigation. An important point is that previous sections in this chapter and some reports by other authors, such as [11,26,27], ignore useful terms in the derivative of a Lyapunov-Krasovskii functional. This is the issue discussed below. Section 3.2 uses the following Lyapunov-Krasovskii functional:
3.5 IFWM Approach
V1 (xt ) = xT (t)P x(t) +
t
xT (s)Qx(s)ds +
0
−h
t−d(t)
t
59
x˙ T (s)Z1 x(s)dsdθ. ˙
t+θ
(3.35) However, in the derivative of V1 (xt ), the term − t creased to − t−d(t) x˙ T (s)Z1 x(s)ds. ˙ Note that −
t
T
x˙ (s)Z1 x(s)ds ˙ =−
t−h
t
T
t t−h
x˙ (s)Z1 x(s)ds ˙ −
t−d(t)
x˙ T (s)Z1 x(s)ds ˙ is in-
t−d(t)
x˙ T (s)Z1 x(s)ds. ˙
t−h
(3.36) t−d(t) The term − t−h x˙ T (s)Z1 x(s)ds ˙ was ignored in previous studies, which may lead to considerable conservativeness. We can reduce the conservativeness by using the IFWM approach presented below to examine the stability of systems with a time-varying delay. It retains useful terms in the derivative of the Lyapunov-Krasovskii functional and takes into account the relationships among the delay, its upper bound, and their difference. 3.5.1 Retaining Useful Terms t−d(t) In this subsection, − t−h x˙ T (s)Z1 x(s)ds ˙ is retained when estimating the upper bound on the derivative of a Lyapunov-Krasovskii functional. A new class of Lyapunov-Krasovskii functional candidates is used to handle this term: t t V2 (xt ) = xT (t)P x(t) + xT (s)Qx(s)ds + xT (s)Rx(s)ds
0
t−d(t) t
+ −h
x˙ T (s)(Z1 + Z2 )x(s)dsdθ, ˙
t−h
(3.37)
t+θ
where P > 0, Q 0, R 0, and Zi > 0, i = 1, 2 are to be determined. Now, we give the following theorem. Theorem 3.5.1. Consider system (3.1) with a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system is asymptotically stable if there exist matrices P > 0, Q 0, R 0, and Zi > 0, i = 1, 2, T and any appropriately dimensioned matrices N = N1T N2T N3T , S = T T such that the following LMI S1T S2T S3T , and M = M1T M2T M3T
60
3. Stability of Systems with Time-Varying Delay
holds: ⎡
⎤ T Φ hN hS hM hA (Z + Z ) 1 2 c1 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ −hZ ⎥ 0 0 0 ⎢ ⎥ 1 ⎢ ⎥ ⎢ ⎥ ⎢∗ ⎥ < 0, ∗ −hZ 0 0 1 ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ∗ ∗ −hZ2 0 ⎢∗ ⎥ ⎢ ⎥ ⎣ ⎦ ∗ ∗ ∗ ∗ −h(Z1 + Z2 )
(3.38)
where Φ = Φ1 + Φ2 + ΦT 2, ⎡ ⎤ P A + AT P + Q + R P Ad 0 ⎢ ⎥ ⎢ ⎥ ⎢ Φ1 = ⎢ ∗ −(1 − μ)Q 0 ⎥ ⎥, ⎣ ⎦ ∗ ∗ −R Φ2 = [N + M − N + S − M − S] , Ac1 = [A Ad 0] . Proof. From the Newton-Leibnitz formula, we know that the following equations are true for any matrices N , S, and M with appropriate dimensions: t
2ζ1T (t)N x(t) − x(t − d(t)) −
x(s)ds ˙ = 0,
2ζ1T (t)S 2ζ1T (t)M
x(t − d(t)) − x(t − h) − x(t) − x(t − h) −
t−d(t)
x(s)ds ˙ = 0, t−h
(3.39)
t−d(t)
t
(3.40)
x(s)ds ˙ = 0,
(3.41)
t−h
T where ζ1 (t) = xT (t), xT (t − d(t)), xT (t − h) . Calculating the derivative of V2 (xt ) along the solutions of system (3.1), adding the left sides of (3.39)-(3.41) to it, and using (3.36) yield T ˙ V˙ 2 (xt ) = 2xT (t)P x(t) ˙ + xT (t)Qx(t) − (1 − d(t))x (t − d(t))Qx(t − d(t))
+xT (t)Rx(t) − xT (t − h)Rx(t − h) t T +hx˙ (t)(Z1 + Z2 )x(t) ˙ − x˙ T (s)(Z1 + Z2 )x(s)ds ˙ t−h
3.5 IFWM Approach
61
2xT (t)P x(t) ˙ + xT (t)(Q + R)x(t) −(1 − μ)xT (t − d(t))Qx(t − d(t)) − xT (t − h)Rx(t − h) +hx˙ T (t)(Z1 + Z2 )x(t) ˙ t x˙ T (s)Z1 x(s)ds ˙ − − −
t−d(t) t T t−h
t−d(t)
x˙ T (s)Z1 x(s)ds ˙
t−h
x˙ (s)Z2 x(s)ds ˙
+2ζ1T (t)N
x(t) − x(t − d(t)) −
x(s)ds ˙ t−d(t)
+2ζ1T (t)S +2ζ1T (t)M
t
x(t − d(t)) − x(t − h) −
x(t) − x(t − h) −
t−d(t)
x(s)ds ˙ t−h
t
x(s)ds ˙ t−h
−1 T ζ1T (t) Φ + hAT c1 (Z1 + Z2 )Ac1 + hN Z1 N +hSZ1−1 S T + hM Z2−1 M T ζ1 (t) t T ζ1 (t)N + x˙ T (s)Z1 Z1−1 N T ζ1 (t) + Z1 x(s) ˙ ds − t−d(t)
−
t−d(t)
t−h t
−
t−h
ζ1T (t)S + x˙ T (s)Z1 Z1−1 S T ζ1 (t) + Z1 x(s) ˙ ds
T ζ1 (t)M + x˙ T (s)Z2 Z2−1 M T ζ1 (t) + Z2 x(s) ˙ ds.
(3.42)
Since Zi > 0, i = 1, 2, the last three parts of (3.42) are all less than 0. −1 T −1 T −1 T So, if Φ + hAT < 0, c1 (Z1 + Z2 )Ac1 + hN Z1 N + hSZ1 S + hM Z2 M ˙ which is equivalent to (3.38) by the Schur complement, V2 (xt ) < −εx(t)2 for a sufficiently small ε > 0, which means that system (3.1) is asymptotically stable. This completes the proof.
Remark 3.5.1. If N3 = 0, S = M = 0, Z2 = ε1 I, and R = ε2 I (where εi > 0, i = 1, 2 are sufficiently small scalars), Theorem 3.5.1 reduces to Theorem 3.2.2. So, if we choose suitable values for N3 , S, M , Z2 and R, Theorem 3.5.1 overcomes the conservativeness of Theorem 3.2.2 and is an improvement over the criterion in [5]. The alternative version of Theorem 3.5.1 below enables us to use a parameter-dependent Lyapunov-Krasovskii functional for the investigation of systems with polytopic-type uncertainties.
62
3. Stability of Systems with Time-Varying Delay
Theorem 3.5.2. Consider system (3.1) with a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system is asymptotically stable if there exist matrices P > 0, Q 0, R 0, and Zi > 0, i = 1, 2, T ˜ = N T N T · · · N T , S˜ = and any appropriately dimensioned matrices N 1 2 4 T T T ˜ = M T M T · · · M T , and T = T T T T · · · T T S1T S2T · · · S4T , M 1 2 4 1 2 4 such that the following LMI holds: ⎤ ⎡ ˜ ˜ Θ hN hS˜ hM ⎥ ⎢ ⎥ ⎢ ⎢ ∗ −hZ1 0 0 ⎥ ⎥ < 0, ⎢ (3.43) ⎥ ⎢ ⎢∗ 0 ⎥ ∗ −hZ1 ⎦ ⎣ ∗ ∗ ∗ −hZ2 where Θ = Θ1 + Θ2 + Θ2T , ⎤ ⎡ Q+R 0 0 P ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ∗ −(1 − μ)Q 0 0 ⎥, Θ1 = ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ∗ −R 0 ⎦ ⎣ ∗ ∗ ∗ h(Z1 + Z2 ) ˜ +M ˜ −N ˜ + S˜ − M ˜ − S˜ 0 + T Ac2 + AT T T , Θ2 = N c2 Ac2 = [−A − Ad 0 I] . Proof. Choose the same Lyapunov-Krasovskii functional candidate as in (3.37) and note that 2ζ2T (t)T [x(t) ˙ − Ax(t) − Ad x(t − d(t))] = 0,
(3.44)
T where ζ2 (t) = xT (t), xT (t − d(t)), xT (t − h), x˙ T (t) . Then, the proof uses equations similar to (3.39)-(3.41) and follows a line similar to the one in Theorem 3.5.1.
Remark 3.5.2. Subsection 3.2.3 shows that Theorem 3.5.2 is equivalent to Theorem 3.5.1. Note that Theorem 3.5.2 can be extended to deal with a system with polytopic-type uncertainties by using a parameter-dependent Lyapunov-Krasovskii functional, as in Section 3.4. This is because the LMI condition in Theorem 3.5.2 does not involve any product of system matrices and Lyapunov matrices.
3.5 IFWM Approach
63
˜ = 0, Z2 = ε1 I, and R = ε2 I (where Remark 3.5.3. If N4 = 0, S˜ = M εi > 0, i = 1, 2 are sufficiently small scalars), Theorem 3.5.2 reduces to Theorem 3.4.1 and is an improvement over the theorems in [11]. Now, we consider two classes of uncertainties mentioned in Section 3.1. For system (3.4), which has time-varying structured uncertainties, we have a corollary similar to Theorem 3.3.1 in Subsection 3.3.1. Corollary 3.5.1. Consider system (3.4) with a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system is robustly stable if there exist matrices P > 0, Q 0, R 0, and Zi > 0, i = 1, 2, any appro T T priately dimensioned matrices N = N1T N2T N3T , S = S1T S2T S3T , T and M = M1T M2T M3T , and a scalar λ > 0 such that the following LMI holds: ⎤ ⎡ Φˆ hN hS hM hAT Pˆ D c1 (Z1 + Z2 ) ⎥ ⎢ ⎥ ⎢ 0 0 0 0 ⎥ ⎢ ∗ −hZ1 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢∗ 0 0 0 ∗ −hZ1 ⎥ < 0, ⎢ (3.45) ⎥ ⎢ ⎥ ⎢∗ ∗ ∗ −hZ 0 0 2 ⎥ ⎢ ⎥ ⎢ ⎢∗ ∗ ∗ ∗ −h(Z1 + Z2 ) h(Z1 + Z2 )D ⎥ ⎦ ⎣ ∗ ∗ ∗ ∗ ∗ −λI where
⎡
⎢ ⎢ Φˆ = Φ + ⎢ ⎣
λEaT Ea λEaT Ed 0 ∗ ∗
⎤
⎡
⎤ P
⎢ ⎥ ⎥ ⎢ ⎥ ⎥ λEdT Ed 0 ⎥ , Pˆ = ⎢ 0 ⎥ ; ⎣ ⎦ ⎦ 0 ∗ 0
and Φ and Ac1 are defined in (3.38). The next corollary is derived from Theorem 3.5.2 by using a parameterdependent Lyapunov-Krasovskii functional. Corollary 3.5.2. Consider system (3.1) with polytopic-type uncertainties (3.7) and a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system is robustly stable if there exist matrices Pj > 0, Qj 0, Rj 0, and Zij > 0, i = 1, 2, j = 1, 2, · · · , p, and any appropriately diT T ¯j = S T S T · · · S T ¯j = N T N T · · · N T , S , mensioned matrices N 1j 2j 4j 1j 2j 4j
64
3. Stability of Systems with Time-Varying Delay
T T ¯j = MT MT · · · MT T TT ··· TT , and T = , i = 1, 2, j = M T 1 2 4 1j 2j 4j 1, 2, · · · , p such that the following LMI holds for j = 1, 2, · · · , p : ⎤ ⎡ ¯ (j) hN ¯j ¯j Θ hS¯j hM ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ∗ −hZ1j 0 0 ⎥ < 0, ⎢ (3.46) ⎥ ⎢ ⎥ ⎢ ∗ 0 ∗ −hZ1j ⎦ ⎣ ∗ ∗ ∗ −hZ2j where ¯ (j) = Θ ¯ (j) + Θ ¯ (j) + [Θ ¯ (j) ]T , Θ 2 2 ⎤ ⎡1 Qj + Rj 0 0 Pj ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ∗ −(1 − μ)Qj 0 0 (j) ¯ ⎥, ⎢ Θ1 = ⎢ ⎥ ⎥ ⎢ 0 ∗ ∗ −Rj ⎦ ⎣ ∗ ∗ ∗ h(Z1j + Z2j ) ¯ (j) = N ¯j + M ¯j − N ¯j + S¯j − M ¯ j − S¯j 0 + T A¯j + A¯T T T , Θ j 2 A¯j = [−Aj − Adj 0 I] . 3.5.2 Further Investigation t−d(t) Even though in the previous subsection we retained the term − t−h x˙ T (s)Z1 x(s)ds ˙ in the derivative of the Lyapunov-Krasovskii functional and obtained improved delay-dependent stability criteria for systems with a time-varying delay, there is still room for further investigation. In the last part of formula (3.42), the terms d(t)N Z1−1 N T and (h − d(t))SZ1−1 S T are increased to hN Z1−1 N T and hSZ1−1 S T , respectively. Since d(t) and h − d(t) are closely related because their sum is h, this treatment may lead to conservativeness. We now present a new theorem for nominal system (3.1) and a new corollary for system (3.4) that do not ignore any useful terms and take the relationships among d(t), h − d(t), and h into account. Theorem 3.5.3. Consider nominal system (3.1) with a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system is asymptotically stable if there exist matrices P > 0, Q 0, R 0, Z > 0, and X 0, and any appropriately dimensioned matrices N and S such that the following LMIs hold:
3.5 IFWM Approach
⎤ ⎡ Z Φ hAT c1 ⎦ ⎣ < 0, ∗ − hZ
65
(3.47)
⎡ ⎤ X N ⎣ ⎦ 0, ∗ Z
(3.48)
⎡ ⎤ X S ⎣ ⎦ 0, ∗ Z
(3.49)
where Φ = Φ1 + Φ2 + ΦT 2 + hX, ⎡ ⎤ P A + AT P + Q + R P Ad 0 ⎢ ⎥ ⎢ ⎥ Φ1 = ⎢ ∗ −(1 − μ)Q 0 ⎥ , ⎣ ⎦ ∗ ∗ −R Φ2 = N −N + S −S , Ac1 = A Ad 0 . Proof. Choose the Lyapunov-Krasovskii functional candidate to be t t T T x (s)Qx(s)ds + xT (s)Rx(s)ds V3 (xt ) = x (t)P x(t) + t−d(t) t−h 0 t (3.50) x˙ T (s)Z x(s)dsdθ, ˙ + −h
t+θ
where P > 0, Q 0, R 0, and Z > 0 are to be determined. For any matrix X 0, the following holds: t t−d(t) T T hζ1 (t)Xζ1 (t) − ζ1 (t)Xζ1 (t)ds − ζ1T (t)Xζ1 (t)ds = 0. (3.51) t−d(t)
t−h
Calculating the derivative of V3 (xt ) along the solutions of nominal system (3.1) and using (3.39), (3.40), and (3.51) yield ⎡ ⎤ t X N ⎦ ζ2 (t, s)ds V˙ 3 (xt ) =ζ1T (t) Φ + hAT ζ2T (t, s) ⎣ c1 ZAc1 ζ1 (t) − t−d(t) ∗ Z ⎡ ⎤ t−d(t) X S ⎦ ζ2 (t, s)ds, ζ2T (t, s) ⎣ (3.52) − t−h ∗ Z
66
3. Stability of Systems with Time-Varying Delay
T where ζ2 (t, s) = ζ1T (t), x˙ T (s) . The rest of the proof follows a line similar to the one in Theorem 3.2.1.
Applying Lemma 2.6.2 gives us the following corollary for system (3.4). Corollary 3.5.3. Consider system (3.4) with a delay, d(t), that satisfies both (3.2) and (3.3). Given scalars h > 0 and μ, the system is robustly stable if there exist matrices P > 0, Q 0, R 0, Z > 0, and X 0, any appropriately dimensioned matrices N and S, and a scalar λ > 0 such that LMIs (3.48) and (3.49), and the following LMI hold: ⎡ ⎤ ˆ Φ hAT λEˆ c1 Z P D ⎢ ⎥ ⎢ ⎥ ⎢ ∗ −hZ hZD 0 ⎥ ⎢ ⎥ < 0, (3.53) ⎢ ⎥ ⎢∗ ∗ −λI 0 ⎥ ⎣ ⎦ ∗ ∗ ∗ −λI where ⎡ ⎤ ⎡ ⎤ ET P ⎢ a⎥ ⎢ ⎥ ⎢ ⎥ ⎥ ˆ=⎢ Pˆ = ⎢ 0 ⎥ , E ⎢EbT ⎥ ; ⎣ ⎦ ⎣ ⎦ 0 0 and Φ and Ac1 are defined in (3.47). Remark 3.5.4. Retaining the term x(t) ˙ enables us to use parameter-dependent Lyapunov-Krasovskii functionals in combination with the IFWM approach to derive improved criteria for systems with polytopic-type uncertainties. Moreover, combining the IFWM approach with the augmented LyapunovKrasovskii functional method in Subsection 5.2.3 produces even better results. Remark 3.5.5. The conditions in this section are all delay- and rate-dependent. However, if we set the Q (or Qj , j = 1, 2, · · · , p) in them to zero, they reduce to delay-dependent and rate-independent ones, which can be used when the delay is not differentiable or the derivative of the delay is unknown. 3.5.3 Numerical Examples Example 3.5.1. Consider the robust stability of system (3.4) with the following parameters:
3.5 IFWM Approach
⎡ A=⎣
−0.5 −2 1
−1
⎤
⎡
⎦ , Ad = ⎣
−0.5 −1 0
0.6
67
⎤ ⎦ , Ea = Ead = diag {0.2, 0.2} , D = I.
Table 3.3 lists the upper bounds on the delay obtained from Theorems 3.3.1 and 3.3.2, and Corollaries 3.3.1, 3.3.2, 3.5.1, and 3.5.3 for various μ. In addition, the results obtained from Lemma 1 in [11] in combination with Lemma 2.6.2 are also listed. Clearly, our results are significantly better than those in [11]. Note that the method in [11] fails for μ = 0.9, while the upper bound is 0.2420 for Theorems 3.3.1 and 3.3.2, 0.3155 for Corollary 3.5.1, and 0.3972 for Corollary 3.5.3. Furthermore, the table also shows a gradual improvement as our method progresses. Table 3.3. Allowable upper bound, h, for various μ (Example 3.5.1) μ
0
0.5
0.9
unknown μ
[11]
0.6812
0.1820
—
0.1622
Theorems 3.3.1 and 3.3.2
0.8435
0.2433
0.2420
—
Corollaries 3.3.1 and 3.3.2
—
—
—
0.2420
Corollary 3.5.1
0.8435
0.3155
0.3155
0.3155
Corollary 3.5.3
0.8435
0.3972
0.3972
0.3972
Example 3.5.2. Consider system (3.1) with the following parameters: ⎡ ⎤ ⎡ ⎤ 0 − 0.12+12ρ −0.1 − 0.35 ⎦ , Ad = ⎣ ⎦, A=⎣ 1 − 0.465 − ρ 0 0.3 and −0.035 ρ 0.035. If we let ρm = 0.035 and set ⎡ ⎡ ⎤ ⎤ 0 −0.12 + 12ρm 0 −0.12 − 12ρm ⎦ , A2 = ⎣ ⎦, A1 = ⎣ 1 −0.465 − ρm 1 −0.465 + ρm ⎡ ⎤ −0.1 −0.35 ⎦, Ad1 = Ad2 = Ad = ⎣ 0 0.3 then the system is recast as a system with polytopic-type uncertainties that are described by (3.7). Table 3.4 lists the upper bounds on the delay obtained
68
3. Stability of Systems with Time-Varying Delay
Table 3.4. Allowable upper bound, h, for various μ (Example 3.5.2) μ
0
0.5
0.9
unknown μ
[11, 26, 27]
0.782
0.465
0.454
0.454
Theorem 3.4.1
0.863
0.465
0.454
—
Corollary 3.4.1
—
—
—
0.454
Corollary 3.5.2
0.863
0.537
0.537
0.537
from [11, 26, 27], Theorems 3.4.1, and Corollaries 3.4.1 and 3.5.2 for various μ. Note that our methods produce larger upper bounds on the delay than [11, 26, 27] do.
3.6 Conclusion This chapter explains how the FWM and IFWM approaches can be used to examine the delay-dependent stability of systems with a delay. First, the FWM approach and two different treatments of the term x(t) ˙ (retaining it or replacing it) are used to produce two different forms of delay- and ratedependent stability conditions. They are proven to be equivalent and are easily extended to delay-dependent and rate-independent conditions without any limit on the derivative of the delay. Second, the robust stability of two classes of uncertainties is examined. Lemma 2.6.2 is used to extend the conditions for nominal systems obtained by the FWM approach to systems with time-varying structured uncertainties; and retaining the term x(t) ˙ and using a parameter-dependent Lyapunov-Krasovskii functional extends the conditions for the nominal system to systems with polytopic-type uncertainties. Finally, the IFWM approach is used to study the delay-dependent stability of systems with a time-varying delay. The resulting criteria are less conservative than those produced by other methods.
References 1. K. Gu. Discretized LMI set in the stability problem for linear uncertain timedelay systems. International Journal of Control, 68(4): 923-934, 1997. 2. K. Gu. A generalized discretization scheme of Lyapunov functional in the stability problem of linear uncertain time-delay systems. International Journal of Robust and Nonlinear Control, 9(1): 1-4, 1999.
References
69
3. K. Gu. A further refinement of discretized Lyapunov functional method for the stability of time-delay systems. International Journal of Control, 74(10): 967-976, 2001. 4. P. Park. A delay-dependent stability criterion for systems with uncertain timeinvariant delays. IEEE Transactions on Automatic Control, 44(4): 876-877, 1999. 5. Y. S. Moon, P. Park, W. H. Kwon, and Y. S. Lee. Delay-dependent robust stabilization of uncertain state-delayed systems. International Journal of Control, 74(14): 1447-1455, 2001. 6. T. J. Su and C. G. Huang. Robust stability of delay dependence for linear uncertain systems. IEEE Transactions on Automatic Control, 37(10): 16561659, 1992. 7. X. Li and C. E. de Souza. Delay-dependent robust stability and stabilization of uncertain linear delay systems: A linear matrix inequality approach. IEEE Transactions on Automatic Control, 42(8): 1144-1148, 1997. 8. C. E. de Souza and X. Li. Delay-dependent robust H∞ control of uncertain linear state-delayed systems. Automatica, 35(7): 1313-1321, 1999. 9. Y. Xia and Y. Jia. Robust stability functionals of state delayed systems with polytopic type uncertainties via parameter-dependent Lyapunov functions. International Journal of Control, 75(16): 1427-1434, 2002. 10. Y. Xia and Y. Jia. Robust control of state delayed systems with polytopic type uncertainties via parameter-dependent Lyapunov functionals. Systems & Control Letters, 50(3): 183-193, 2003. 11. E. Fridman and U. Shaked. An improved stabilization method for linear timedelay systems. IEEE Transactions on Automatic Control, 47(11): 1931-1937, 2002. 12. X. Li and C. E. de Souza. Criteria for robust stability and stabilization of uncertain linear systems with state delay. Automatica, 33(9):1657-1662, 1997. 13. J. H. Kim. Delay and its time-derivative dependent robust stability of timedelayed linear systems with uncertainty. IEEE Transactions on Automatic Control, 46(5): 789-792, 2001. 14. D. Yue and S. Won. An improvement on “Delay and its time-derivative dependent robust stability of time-delayed linear systems with uncertainty”. IEEE Transactions on Automatic Control, 47(2): 407-408, 2002. 15. X. Jiang and Q. L. Han. New stability criteria for linear systems with interval time-varying delay. Automatica, 44(10): 2680-2685, 2008. 16. E. Fridman and S. I. Niculescu. On complete Lyapunov-Krasovskii functional techniques for uncertain systems with fast-varying delays. International Journal of Robust and Nonlinear Control, 18(3): 364-374, 2007. 17. E. Fridman and U. Shaked. Input-output approach to stability and L2 -gain analysis of systems with time-varying delays. Systems & Control Letters, 55(12): 1041-1053, 2006. 18. M. Wu, Y. He, and J. H. She. Delay-dependent criteria for the robust stability of systems with time-varying delay. Journal of Control Theory and Application, 1(1): 97-100, 2003.
70
3. Stability of Systems with Time-Varying Delay
19. I. R. Petersen and C. V. Hollot. A Riccati equation approach to the stabilization of uncertain linear systems. Automatica, 22(4): 397-411, 1986. 20. J. C. Geromel, M. C. de Oliveira, and L. Hsu. LMI characterization of structural and robust stability. Linear Algebra and its Applications, 285(1-3): 68-80, 1998. 21. D. Peaucelle, D. Arzelier, O. Bachelier, and J. Bernussou. A new robust Dstability condition for real convex polytopic uncertainty. Systems & Control Letters, 40(1): 21-30, 2000. 22. H. D. Tuan, P. Apkarian, and T. Q. Nguyen. Robust and reduced-order filtering: new characterizations and methods. Proceedings of the American Control Conference, Chicago, USA, 1327-1331, 2000. 23. U. Shaked. Improved LMI representations for analysis and design of continuoustime systems with polytopic-type uncertainty. IEEE Transactions on Automatic Control, 46(4): 652-656, 2001. 24. P. J. de Oliveira, R. C. L. F. Oliveira, V. J. S. Oliveira, V. F. Montagner, and P. L. D. Peres. LMI based robust stability conditions for linear uncertain systems: a numerical comparison. Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, USA, 644-649, 2002. 25. Y. Jia. Alternative proofs for improved LMI representation for the analysis and the design of continuous-time systems with polytopic-type uncertainty: a predictive approach. IEEE Transactions on Automatic Control, 48(8): 14131416, 2003. 26. E. Fridman and U. Shaked. Delay-dependent stability and H∞ control: constant and time-varying delays. International Journal of Control, 76(1): 48-60, 2003. 27. E. Fridman and U. Shaked. Parameter dependent stability and stabilization of uncertain time-delay systems. IEEE Transactions on Automatic Control, 48(5): 861-866, 2003. 28. O. Bachelier, J. Bernussou, M. C. de Oliveira, and J. C. Geromel. Parameterdependent Lyapunov control design: numerical evaluation. Proceedings of the 38th IEEE Conference on Decision and Control, New York, USA, 293-297, 1999. 29. M. C. de Oliveira, J. Bernussou, and J. C. Geromel. A new discrete-time robust stability condition. Systems & Control Letters, 37(4): 261-265, 1999. 30. M. C. de Oliveira, J. C. Geromel, and L. Hsu. LMI characterization of structural and robust stability: the discrete-time case. Linear Algebra and its Applications, 296(1-3): 27-38, 1999. 31. M. Wu, Y. He, J. H. She, and G. P. Liu. Delay-dependent criteria for robust stability of time-varying delay systems. Automatica, 40(8): 1435-1439, 2004. 32. Y. He, M. Wu, J. H. She, and G. P. Liu. Parameter-dependent Lyapunov functional for stability of time-delay systems with polytopic type uncertainties. IEEE Transactions on Automatic Control, 49(5): 828-832, 2004. 33. Y. He, Q. G. Wang, L. Xie, and C. Lin. Further improvement of free-weighting matrices technique for systems with time-varying delay. IEEE Transactions on Automatic Control, 52(2): 293-299, 2007. 34. Y. He, Q. G. Wang, C. Lin, and M. Wu. Delay-range-dependent stability for systems with time-varying delay. Automatica, 43(2): 371-376, 2007.
References
71
35. Y. He, G. P. Liu, and D. Rees. New delay-dependent stability criteria for neural networks with time-varying delay. IEEE Transactions on Neural Networks, 18(1): 310-314, 2007. 36. Y. He, M. Wu, and J. H. She. Delay-dependent exponential stability of delayed neural networks with time-varying delay. IEEE Transactions on Circuits and Systems II, 53(7): 553-557, 2006. 37. Q. L. Han. Robust stability for a class of linear systems with time-varying delay and nonlinear perturbations. Computers & Mathematics with Applications, 47(8-9): 1201-1209, 2004. 38. Q. L. Han and D. Yue. Absolute stability of Lur’e systems with time-varying delay. IET Proceedings: Control Theory & Applications, 1(3): 854-859, 2007. 39. K. Gu, V. L. Kharitonov, and J. Chen. Stability of Time-Delay Systems. Boston: Birkh¨ auser, 2003. 40. J. K. Hale and S. M. Verduyn Lunel. Introduction to Functional Differential Equations. New York: Springer-Verlag, 1993. 41. E. Fridman and U. Shaked. A descriptor system approach to H∞ control of linear time-delay systems. IEEE Transactions on Automatic Control, 47(2): 253-270, 2002.
4. Stability of Systems with Multiple Delays
If a linear system with a single delay, h, is not stable for a delay of some length, ¯ for which but is stable for h = 0, then there must exist a positive number h ¯ the system is stable for 0 h h. Many researchers have simply extended this idea to a system with multiple delays, but this simple extension may lead to conservativeness. For example, Fridman & Shaked [1, 2] investigated a linear system with two delays: x(t) ˙ = A0 x(t) + A1 x(t − h1 ) + A2 x(t − h2 ).
(4.1)
¯ 1 and h ¯ 2 on h1 and h2 , respectively, are selected so that The upper bounds h ¯ 2 . However, the ranges of this system is stable for 0 h1 ¯ h1 and 0 h2 h h1 and h2 that guarantee the stability of this system are conservative because they start from zero, even though that may not be necessary. One reason for this is that the relationship between h1 and h2 was not taken into account in the procedure for finding the upper bounds. Another point concerns a linear system with a single delay, x(t) ˙ = A0 x(t) + (A1 + A2 )x(t − h1 ),
(4.2)
which is a special case of system (4.1), namely, the case h1 = h2 . The stability criterion for system (4.2) should be equivalent to that for system (4.1) for h1 = h2 ; but this equivalence cannot be demonstrated by the methods in [1,2]. This chapter presents delay-dependent stability criteria for systems with multiple constant delays based on the FWM approach [3, 4]. Criteria are first established for a linear system with two delays. They take into account t not only the relationships between x(t − h1 ) and x(t) − t−h1 x(s)ds, ˙ and t ˙ but also the one between x(t − h2 ) and x(t − h2 ) and x(t) − t−h2 x(s)ds, t−h1 ˙ Note that the last relationship is between h1 and h2 . x(t− h1 )− t−h2 x(s)ds. All these relationships are expressed in terms of FWMs, and their parameters are determined based on the solutions of LMIs. In addition, the equivalence
74
4. Stability of Systems with Multiple Delays
between system (4.2) and system (4.1) for h1 = h2 is demonstrated. Numerical examples show that the methods presented in this chapter are effective and are a significant improvement over others. Finally, these ideas are extended from systems with two delays to systems with multiple delays.
4.1 Problem Formulation Consider the following linear system with multiple delays: ⎧ m ⎪ ⎪ ⎨ x(t) A x(t − h ), t > 0, ˙ = i
i
i=0
⎪ ⎪ ⎩ x(t) = φ(t), t ∈ [−h, 0],
(4.3)
where x(t) ∈ Rn is the state vector; h0 = 0, hi 0, i = 1, 2, · · · , m are constant delays; h = max{h1 , h2 , · · · , hm }; Ai ∈ Rn×n , i = 0, 1, · · · , m are constant matrices; and the initial condition, φ(t), is a continuously differentiable initial function of t ∈ [−h, 0]. If the system contains time-varying structured uncertainties, it can be described by ⎧ m ⎪ ⎪ x(t) ⎨ ˙ = (Ai + ΔAi (t))x(t − hi ), t > 0, (4.4) i=0 ⎪ ⎪ ⎩ x(t) = φ(t), t ∈ [−h, 0]. The uncertainties are assumed to be of the form ΔAi (t) = DF (t)Ei , i = 0, 1, · · · , m,
(4.5)
where D and Ei , i = 0, 1, · · · , m are constant matrices with appropriate dimensions; and F (t) is an unknown, real, and possibly time-varying matrix with Lebesgue-measurable elements satisfying F T (t)F (t) I, ∀t.
(4.6)
This chapter also concerns system (4.3) with polytopic-type uncertainties. In this case, the matrices Ai , i = 0, 1, · · · , m of the system contain uncertainties and satisfy the real convex polytopic model [A0 A1 · · · Am ] ∈ Ω, ⎫ ⎧ p p ⎬ ⎨ ξj [A0j A1j · · · Amj ] , ξj = 1, ξj 0 , Ω = [A0 (ξ) A1 (ξ) · · · Am (ξ)] = ⎭ ⎩ j=1
j=1
4.2 Two Delays
75
(4.7) where Aij , i = 0, 1, · · · , m, j = 1, 2, · · · , p are constant matrices with appropriate dimensions, and ξj , j = 1, 2, · · · , p are time-invariant uncertainties.
4.2 Two Delays This section considers the stability of systems with two delays, the relationship between which is taken into account for the first time. 4.2.1 Nominal Systems First, consider the case m = 2. A delay-dependent stability criterion is established for system (4.3) with two delays by taking the relationship between h1 and h2 into account and replacing the term x(t) ˙ in the derivative of the Lyapunov-Krasovskii functional with the system equation. Theorem 4.2.1. Consider nominal system (4.3) with m = 2. Given scalars hi 0, i = 1, 2, the system is asymptotically stable if there exist matrices P > 0, Qi 0, i = 1, 2, Wj > 0, Xjj 0, Yjj 0, and Zjj 0, j = 1, 2, 3, and any appropriately dimensioned matrices Ni , Si , Mi , i = 1, 2, 3, Xij , Yij , and Zij , i = 1, 2, 3, i < j 3 such that the following LMIs hold: ⎡ ⎤ Φ Φ Φ 11 12 13 ⎢ ⎥ ⎢ ⎥ ⎢ (4.8) Φ = ⎢ ∗ Φ22 Φ23 ⎥ ⎥ < 0, ⎣ ⎦ ∗ ∗ Φ33 ⎤
⎡ X11 X12 X13 N1
⎥ ⎢ ⎥ ⎢ ⎢ ∗ X22 X23 N2 ⎥ ⎥ ⎢ Ψ1 = ⎢ ⎥ 0, ⎢ ∗ ∗ X33 N3 ⎥ ⎥ ⎢ ⎦ ⎣ ∗ ∗ ∗ W1 ⎤ ⎡ Y11 Y12 Y13 S1 ⎥ ⎢ ⎥ ⎢ ⎢ ∗ Y22 Y23 S2 ⎥ ⎥ ⎢ Ψ2 = ⎢ ⎥ 0, ⎢ ∗ ∗ Y33 S3 ⎥ ⎥ ⎢ ⎦ ⎣ ∗ ∗ ∗ W2
(4.9)
(4.10)
76
4. Stability of Systems with Multiple Delays
⎤
⎡ Z11 Z12 Z13 kM1
⎢ ⎢ ⎢ ∗ ⎢ Ψ3 = ⎢ ⎢ ∗ ⎢ ⎣ ∗
⎥ ⎥ Z22 Z23 kM2 ⎥ ⎥ ⎥ 0, ∗ Z33 kM3 ⎥ ⎥ ⎦ ∗ ∗ W3 ,
(4.11)
where
⎧ ⎨ 1, if h1 h2 , k= ⎩−1, if h < h ; 1 2
and T T T Φ11 = P A0 +AT 0 P +Q1 +Q2 +N1 +N1 +S1 +S1 +A0 HA0 +h1 X11 +h2 Y11 + |h1 − h2 |Z11 ,
Φ12 = P A1 −N1 +N2T +S2T −M1 +AT 0 HA1 +h1 X12 +h2 Y12 +|h1 −h2 |Z12 , Φ13 = P A2 +N3T +S3T −S1 +M1 +AT 0 HA2 +h1 X13 +h2 Y13 +|h1 −h2 |Z13 , Φ22 = −Q1 −N2 −N2T −M2 −M2T +AT 1 HA1 +h1 X22 +h2 Y22 +|h1 −h2 |Z22 , Φ23 = −N3T − S2 + M2 − M3T + AT 1 HA2 + h1 X23 + h2 Y23 + |h1 − h2 |Z23 , Φ33 = −Q2 −S3 −S3T +M3 +M3T +AT 2 HA2 +h1 X33 +h2 Y33 +|h1 −h2 |Z33 , H = h1 W1 + h2 W2 + |h1 − h2 |W3 . Proof. First, consider the case h1 h2 . Choose the following LyapunovKrasovskii functional candidate: t t V2 (xt ) = xT (t)P x(t) + xT (s)Q1 x(s)ds + xT (s)Q2 x(s)ds
0
t−h1 t
+ −h1 t+θ −h2 t
x˙ T (s)W1 x(s)dsdθ ˙ +
+
−h1
0 −h2
t−h2 t
x˙ T (s)W2 x(s)dsdθ ˙
t+θ
x˙ T (s)W3 x(s)dsdθ, ˙
(4.12)
t+θ
where P > 0, Qi 0, i = 1, 2, and Wj > 0, j = 1, 2, 3 are to be determined. Calculating the derivative of V2 (xt ) along the solutions of system (4.3) yields V˙ 2 (xt ) = 2xT (t)P [A0 x(t) + A1 x(t − h1 ) + A2 x(t − h2 )] +xT (t)Q1 x(t) − xT (t − h1 )Q1 x(t − h1 ) +xT (t)Q2 x(t) − xT (t − h2 )Q2 x(t − h2 ) t T +h1 x˙ (t)W1 x(t) ˙ − x˙ T (s)W1 x(s)ds ˙ t−h1
4.2 Two Delays
+h2 x˙ T (t)W2 x(t) ˙ −
t
x˙ T (s)W2 x(s)ds ˙
t−h2 T
77
+(h1 − h2 )x˙ (t)W3 x(t) ˙ −
t−h2
x˙ T (s)W3 x(s)ds. ˙
(4.13)
t−h1
From the Newton-Leibnitz formula, the following equations are true for any appropriately dimensioned matrices Ni , Si , and Mi , i = 1, 2, 3: 2 xT (t)N1 + xT (t − h1 )N2 + xT (t − h2 )N3 t x(s)ds ˙ = 0, (4.14) × x(t) − x(t − h1 ) − t−h1
2 xT (t)S1 + xT (t − h1 )S2 + xT (t − h2 )S3 t x(s)ds ˙ = 0, × x(t) − x(t − h2 ) −
(4.15)
t−h2
2 xT (t)M1 + xT (t − h1 )M2 + xT (t − h2 )M3
t−h2
× x(t − h2 ) − x(t − h1 ) −
x(s)ds ˙ = 0.
(4.16)
t−h1
On the other hand, for any matrices Xjj 0, Yjj 0, and Zjj 0, j = 1, 2, 3, and any appropriately dimensioned matrices Xij , Yij , and Zij , i = 1, 2, 3, i < j 3, the following equation holds: ⎡
⎤⎡
⎤T ⎡ x(t)
⎢ ⎥ ⎢ ⎥ ⎢ x(t − h1 ) ⎥ ⎣ ⎦ x(t − h2 )
⎤ x(t)
Λ11 Λ12 Λ13
⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥⎢ ⎥ ⎥⎢ ⎥ Λ22 Λ23 ⎥ ⎢ x(t − h1 ) ⎥ = 0, ⎦⎣ ⎦ x(t − h2 ) ∗ Λ33
(4.17)
where Λij = h1 (Xij −Xij )+h2 (Yij −Yij )+(h1 −h2 )(Zij −Zij ), i = 1, 2, 3, i j 3. Adding the left sides of (4.14)-(4.17) to V˙ 2 (xt ) yields t V˙ 2 (xt ) = η1T (t)Φη1 (t) − η2T (t, s)Ψ1 η2 (t, s)ds −
t−h1
t
t−h2
η2T (t, s)Ψ2 η2 (t, s)ds
−
t−h2
t−h1
where T η1 (t) = xT (t), xT (t − h1 ), xT (t − h2 ) , T η2 (t, s) = η1T (t), x˙ T (s) ,
η2T (t, s)Ψ3 η2 (t, s)ds,
(4.18)
78
4. Stability of Systems with Multiple Delays
and Φ and Ψi , i = 1, 2, 3 (where k = 1 in Ψ3 ) are defined in (4.8)-(4.11), respectively. If Φ < 0 and Ψi 0, i = 1, 2, 3, then V˙ 2 (xt ) < −εx(t)2 for a sufficiently small ε > 0. So, system (4.3) is asymptotically stable if LMIs (4.8)-(4.11) hold. On the other hand, when h1 < h2 , one Lyapunov-Krasovskii functional candidate is t t T T V2 (xt ) = x (t)P x(t) + x (s)Q1 x(s)ds + xT (s)Q2 x(s)ds
0
t−h1 t
+ −h1 t+θ −h1 t
x˙ T (s)W1 x(s)dsdθ ˙ +
+ −h2
0
t−h2 t
−h2
x˙ T (s)W2 x(s)dsdθ ˙
t+θ
x˙ T (s)W3 x(s)dsdθ. ˙
(4.19)
t+θ
Equation (4.16) can be rewritten as 2 xT (t)M1 + xT (t − h1 )M2 + xT (t − h2 )M3 × x(t − h2 ) − x(t − h1 ) +
t−h1
x(s)ds ˙ = 0.
(4.20)
t−h2
Then, following the procedure for the case h1 h2 yields a similar result; but note that, in this case, k = −1 in (4.11). This completes the proof.
Remark 4.2.1. The main modification to the Lyapunov-Krasovskii functional candidate is the addition of the last term, which contains an integral of the state that is a function of the upper bounds on h1 and h2 . This is a very important term: Without it, the stability is guaranteed from 0 to the upper bounds; but with it, the stability range for each delay can begin at a nonzero lower bound. This enlarges the stability range, thereby reducing the conservativeness. Now, retaining the term x(t) ˙ in the derivative of V˙ 2 (xt ) rather than replacing it with the system equation yields another theorem. Theorem 4.2.2. Consider nominal system (4.3) with m = 2. Given scalars hi 0, i = 1, 2, the system is asymptotically stable if there exist matrices P > 0, Qi 0, i = 1, 2, Wj > 0, j = 1, 2, 3, Xll 0, Yll 0, and Zll 0, l = 1, 2, · · · , 4, and any appropriately dimensioned matrices Ni , Si , Mi , i = 1, 2, · · · , 4, Xij , Yij , and Zij , i = 1, 2, · · · , 4, i < j 4 such that the following LMIs hold:
4.2 Two Delays
79
⎤
⎡ Ξ11 Ξ12 Ξ13 Ξ14
⎢ ⎢ ⎢ ∗ Ξ=⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ Ξ22 Ξ23 Ξ24 ⎥ ⎥ < 0, ⎥ ∗ Ξ33 Ξ34 ⎥ ⎦ ∗ ∗ Ξ44
⎡
(4.21)
⎤ X11 X12 X13 X14 N1
⎢ ⎢ ⎢ ∗ ⎢ ⎢ Θ1 = ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗ ⎡ Y11 ⎢ ⎢ ⎢ ∗ ⎢ ⎢ Θ2 = ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗ ⎡ Z11 ⎢ ⎢ ⎢ ∗ ⎢ ⎢ Θ3 = ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ X22 X23 X24 N2 ⎥ ⎥ ⎥ ∗ X33 X34 N3 ⎥ 0, ⎥ ⎥ ∗ ∗ X44 N4 ⎥ ⎦ ∗
∗
∗
(4.22)
W1 ⎤
Y12 Y13 Y14 S1
⎥ ⎥ Y22 Y23 Y24 S2 ⎥ ⎥ ⎥ ∗ Y33 Y34 S3 ⎥ 0, ⎥ ⎥ ∗ ∗ Y44 S4 ⎥ ⎦ ∗ ∗ ∗ W2 ⎤ Z12 Z13 Z14 kM1 ⎥ ⎥ Z22 Z23 Z24 kM2 ⎥ ⎥ ⎥ ∗ Z33 Z34 kM3 ⎥ 0, ⎥ ⎥ ∗ ∗ Z44 kM4 ⎥ ⎦ ∗ ∗ ∗ W3
(4.23)
(4.24)
where
⎧ ⎨ 1, if h1 h2 , k= ⎩−1, if h < h ; 1 2
and T T T Ξ11 = Q1 + Q2 − T1 A0 − AT 0 T1 + N1 + N1 + S1 + S1 + h1 X11 + h2 Y11 + |h1 − h2 |Z11 , T T T Ξ12 = P + T1 − AT 0 T2 + N2 + S2 + h1 X12 + h2 Y12 + |h1 − h2 |Z12 , Ξ13 = −T1 A1 + N3T − N1 + S3T − M1 + h1 X13 + h2 Y13 + |h1 − h2 |Z13 , Ξ14 = −T1 A2 + N4T + S4T − S1 + M1 + h1 X14 + h2 Y14 + |h1 − h2 |Z14 ,
80
4. Stability of Systems with Multiple Delays
Ξ22 Ξ23 Ξ24 Ξ33 Ξ34 Ξ44
= h1 W1 +h2 W2 +|h1 −h2 |W3 +T2 +T2T +h1 X22 +h2 Y22 +|h1 −h2 |Z22 , = −T2 A1 − N2 − M2 + h1 X23 + h2 Y23 + |h1 − h2 |Z23 , = −T2 A2 − S2 + M2 + h1 X24 + h2 Y24 + |h1 − h2 |Z24 , = −Q1 − N3 − N3T − M3 − M3T + h1 X33 + h2 Y33 + |h1 − h2 |Z33 , = −N4T − S3 + M3 − M4T + h1 X34 + h2 Y34 + |h1 − h2 |Z34 , = −Q2 − S4 − S4T + M4 + M4T + h1 X44 + h2 Y44 + |h1 − h2 |Z44 .
Proof. From the Newton-Leibnitz formula, we know that the following equations hold for any appropriately dimensioned matrices Ni , Si , and Mi , i = 1, 2, · · · , 4: 2 xT (t)N1 + x˙ T (t)N2 + xT (t − h1 )N3 + xT (t − h2 )N4 t x(s)ds ˙ = 0, (4.25) × x(t) − x(t − h1 ) − t−h1
2 xT (t)S1 + x˙ T (t)S2 + xT (t − h1 )S3 + xT (t − h2 )S4 t x(s)ds ˙ = 0, × x(t) − x(t − h2 ) −
(4.26)
t−h2
2 xT (t)M1 + x˙ T (t)M1 + xT (t − h1 )M3 + xT (t − h2 )M4 × x(t − h2 ) − x(t − h1 ) −
t−h2
x(s)ds ˙ = 0.
(4.27)
t−h1
Moreover, from system equation (4.3), the following is true for any appropriately dimensioned matrices Ti , i = 1, 2: 2 xT (t)T1 + x˙ T (t)T2 [x(t) ˙ − A0 x(t) − A1 x(t − h1 ) − A2 x(t − h2 )] = 0. (4.28) On the other hand, the following holds for any matrices Xjj 0, Yjj 0, and Zjj 0, j = 1, 2, · · · , 4, and any appropriately dimensioned matrices Xij , Yij , and Zij , i = 1, 2, · · · , 4, i < j 4: x(t)
⎥ ⎢ ⎥ ⎢ ⎥ ⎢ x(t) ˙ ⎥ ⎢ ⎥ ⎢ ⎢ x(t − h1 ) ⎥ ⎦ ⎣ x(t − h2 ) where
⎤⎡
⎤T ⎡
⎡
Λ11 Λ12 Λ13 Λ14
⎢ ⎢ ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
⎤ x(t)
⎥⎢ ⎥ ⎥⎢ ⎥ ⎥ ˙ Λ22 Λ23 Λ24 ⎥ ⎢ x(t) ⎥⎢ ⎥ = 0, ⎥⎢ ⎥ ⎢ ⎥ x(t − h1 ) ⎥ ∗ Λ33 Λ34 ⎦⎣ ⎦ x(t − h2 ) ∗ ∗ Λ44
(4.29)
4.2 Two Delays
81
Λij = h1 (Xij−Xij )+h2 (Yij−Yij )+(h1−h2 )(Zij−Zij ), i = 1, 2, · · · , 4, i j 4. We retain the term x(t) ˙ in V˙ 2 (xt ) and add the left sides of (4.25)-(4.29) ˙ to V2 (xt ). The proof is completed by following a line similar to the one in Theorem 4.2.1.
Remark 4.2.2. The equivalence of Theorems 4.2.1 and 4.2.2 can be proven in the same way that equivalence was proven in Subsection 3.2.3. 4.2.2 Equivalence Analysis Although the criterion for the case h1 = h2 should be equivalent to the criterion for a single delay, that cannot be demonstrated using previous methods. In contrast, since Theorem 4.2.1 takes the relationship between h1 and h2 into account, it is easy to show the equivalence of Theorem 4.2.1 for two identical delays and a criterion for a single delay, as explained below. We begin with a criterion for a single delay derived directly from Theorem 4.2.1. Corollary 4.2.1. Consider nominal system (4.3) with m = 1. Given a scalar ¯ ¯ h1 0, the system is asymptotically ⎤ stable if there exist matrices P > 0, Q ⎡ ¯ 12 ¯ X X ¯ > 0, and X ¯ = ⎣ 11 ⎦ 0, and any appropriately dimensioned 0, W ¯ 22 ∗ X ¯2 such that the following LMIs hold: ¯1 and N matrices N ⎡ ⎣ ⎡
Φ¯11 Φ¯12 ∗
Φ¯22
⎤
⎦ < 0,
¯ 11 X ¯ 12 N ¯1 X
⎢ ⎢ ⎢ ∗ ⎣ ∗
⎤
⎥ ¯ 22 N ¯2 ⎥ ⎥ 0, X ⎦ ¯ ∗ W
where T ¯ ¯T ¯ ¯ ¯ ¯ Φ¯11 = P¯ A0 + AT 0 P + Q + N1 + N1 + A0 HA0 + h1 X11 ,
¯1 + N ¯2T + AT ¯ ¯ Φ¯12 = P¯ A1 − N 0 HA1 + h1 X12 , ¯−N ¯2 − N ¯ T + AT HA ¯ 1 + h1 X ¯ 22 , Φ¯22 = −Q 2 1 ¯ = h1 W ¯. H
(4.30)
(4.31)
82
4. Stability of Systems with Multiple Delays
Now, we will show that Corollary 4.2.1 is equivalent to Theorem 4.2.1 for h1 = h2 when A1 is replaced with A1 + A2 in Φ¯12 and Φ¯22 . If the third row and third column of (4.8) are added to the second row and second column, respectively, then (4.8) is equivalent to the LMI ⎡ ⎤ Φ11 Π12 Φ13 ⎢ ⎥ ⎢ ⎥ (4.32) Π = ⎢ ∗ Π22 Π23 ⎥ < 0, ⎣ ⎦ ∗ ∗ Φ33 where Π12 = P A1 + P A2 + N2T + N3T − N1 + S2T + S3T − S1 + AT 0 H(A1 + A2 ) +h1 (X12 + X13 ) + h2 (Y12 + Y13 ) + |h1 − h2 |(Z12 + Z13 ), Π22 = −(Q1 + Q2 ) − N3 − N3T − S3 − S3T − N2 − N2T − S2 − S2T T +(A1 + A2 )T H(A1 + A2 ) + h1 (X22 + X23 + X23 + X33 ) T T +h2 (Y22 + Y23 + Y23 + Y33 ) + |h1 − h2 |(Z22 + Z23 + Z23 + Z33 ),
Π23 = −Q2 − S3 − S3T + M3 − N3T − S2 + M2 + (A1 + A2 )T HA2 +h1 (X23 + X33 ) + h2 (Y23 + Y33 ) + |h1 − h2 |(Z23 + Z33 ); and Φ11 , Φ13 , Φ33 , and H are defined in (4.8). On the one hand, if LMIs (4.30) and (4.31) in Corollary 4.2.1 are feasible (when A1 is replaced with A1 + A2 ), then the solutions can be expressed as appropriate forms of the feasible solutions of LMIs (4.9)-(4.11) and (4.32). In fact, for the feasible solutions of LMIs (4.30) and (4.31) in Corollary 4.2.1, ¯ 1 , N2 = N ¯2 , we can make the assignments: P = P¯ , Si = 0, i = 1, 2, 3, N1 = N T ¯ Q1 = Q ¯ − Q2 , M1 = −P¯ A2 − A0 HA ¯ 2 , M2 = N3 = 0, 0 < Q2 < Q, T ¯ ¯ ¯ ¯ 12 , Q2 − (A1 + A2 ) HA2 M3 = 0, W1 = W , W2 = 0, X11 = X11 , X12 = X ¯ 22 , X23 = 0, X33 = 0, and Yij = 0, i = 1, 2, 3, i j 3. X13 = 0, X22 = X Then, Zij , i = 1, 2, 3, i j 3, and W3 are the feasible solutions of the LMI ⎤ ⎡ Z11 Z12 Z13 M1 ⎥ ⎢ ⎥ ⎢ ⎢ ∗ Z22 Z23 M2 ⎥ ⎥ 0. ⎢ (4.33) ⎥ ⎢ ⎢ ∗ ∗ Z33 0 ⎥ ⎦ ⎣ ∗ ∗ ∗ W3 The above matrices must be the feasible solutions of LMIs (4.9)-(4.11) and (4.32). Consequently, Theorem 4.2.1 for h1 = h2 contains Corollary 4.2.1.
4.2 Two Delays
83
On the other hand, for the feasible solutions of LMIs (4.9)-(4.11) and ¯ = Q1 + Q2 , W ¯ = W1 + W2 , N ¯1 = (4.32), we make the assignments: P¯ = P , Q ¯ ¯ ¯ N1 +S1 , N2 = N2 +N3 +S2 +S3 , X11 = X11 +Y11 , X12 = X12 +Y12 +X13 +Y13 , ¯ 22 = X22 +Y22 +X23 +Y23 +X T +Y T +X33 +Y33 . This yields the feasible and X 23 23 solutions of LMIs (4.30) and (4.31) in Corollary 4.2.1. That is, Corollary 4.2.1 contains Theorem 4.2.1 for h1 = h2 . Thus, Corollary 4.2.1 and Theorem 4.2.1 are equivalent for the case h1 = h2 . 4.2.3 Systems with Time-Varying Structured Uncertainties Using Lemma 2.6.2 to extend Theorem 4.2.1 to system (4.4), which has timevarying structured uncertainties, produces a new theorem. Theorem 4.2.3. Consider system (4.4) with m = 2. Given scalars hi 0, i = 1, 2, the system is robustly stable if there exist matrices P > 0, Qi 0, i = 1, 2, Wj > 0, Xjj 0, Yjj 0, and Zjj 0, j = 1, 2, 3, any appropriately dimensioned matrices Ni , Si , Mi , i = 1, 2, 3, Xij , Yij , and Zij , i = 1, 2, 3, i < j 3, and a scalar λ > 0 such that LMIs (4.9)-(4.11) and the following LMI hold: ⎡ ⎤ Φˆ11 Φˆ12 Φˆ13 AT H P D 0 ⎢ ⎥ ⎢ ⎥ ˆ ˆ ⎢ ∗ Φ22 Φ23 AT H 0 ⎥ 1 ⎢ ⎥ ⎢ ⎥ (4.34) ⎢ ∗ ⎥ < 0, H 0 ∗ Φˆ33 AT 2 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ∗ ∗ −H HD ⎥ ⎣ ⎦ ∗ ∗ ∗ ∗ −λI where T T T Φˆ11 = P A0 + AT 0 P + Q1 + Q2 + N1 + N1 + S1 + S1 + λE0 E0 + h1 X11 + h2 Y11 + |h1 − h2 |Z11 , Φˆ12 = P A1 −N1 +N2T +S2T −M1 +λE0T E1 +h1 X12 +h2 Y12 +|h1 −h2 |Z12 ,
Φˆ13 = P A2 +N3T +S3T −S1 +M1 +λE0T E2 +h1 X13 +h2 Y13 +|h1 −h2 |Z13 , Φˆ22 = −Q1 −N2 −N2T −M2 −M2T +λE1T E1 +h1 X22 +h2 Y22 +|h1 −h2 |Z22 , Φˆ23 = −N3T − S2 + M2 − M3T + λE1T E2 + h1 X23 + h2 Y23 + |h1 − h2 |Z23 , Φˆ33 = −Q2 −S3 −S3T +M3 +M3T +λE2T E2 +h1 X33 +h2 Y33 +|h1 −h2 |Z33 , H = h1 W1 + h2 W2 + |h1 − h2 |W3 .
84
4. Stability of Systems with Multiple Delays
Proof. Applying the Schur complement and Lemma 2.6.2, and following a procedure similar to the one in the proof of Theorem 3.3.1, yield Theorem 4.2.3.
Theorem 4.2.2 can also be extended to system (4.4), although we omit the explanation here for brevity. Furthermore, Theorem 4.2.2 can be extended to a system with polytopictype uncertainties, as shown next. Theorem 4.2.4. Consider system (4.3) with polytopic-type uncertainties (4.7) and m = 2. Given scalars hi 0, i = 1, 2, the system is robustly stable if there exist matrices Pj > 0, Qij 0, i = 1, 2, Wij > 0, i = 1, 2, 3, (j) (j) (j) Xkk 0, Ykk 0, and Zkk 0, k = 1, 2, · · · , 4, j = 1, 2, · · · , p, and (j) (j) (j) any appropriately dimensioned matrices Nij , Sij , Mij , Xik , Yik , Zik , i = 1, 2, · · · , 4, i < k 4, j = 1, 2, · · · , p, and Ti , i = 1, 2 such that the following LMIs hold for j = 1, 2, · · · , p : ⎤ ⎡ (j) (j) (j) (j) Ξ Ξ Ξ Ξ 12 13 14 ⎥ ⎢ 11 ⎥ ⎢ ⎢ ∗ Ξ (j) Ξ (j) Ξ (j) ⎥ ⎥ ⎢ 22 23 24 ⎥ < 0, (4.35) Ξ (j) = ⎢ ⎢ (j) (j) ⎥ ⎥ ⎢ ∗ ∗ Ξ Ξ 33 34 ⎥ ⎢ ⎦ ⎣ (j) ∗ ∗ ∗ Ξ44 ⎡
(j)
(j)
(j)
(j)
(j) X23
(j) X24
(j)
(j)
X11 X12 X13 X14
(j)
Θ1
(j)
Θ2
⎢ ⎢ ⎢ ∗ X (j) 22 ⎢ ⎢ ⎢ =⎢ ∗ ∗ ⎢ ⎢ ⎢ ∗ ∗ ⎢ ⎣ ∗ ∗ ⎡ (j) (j) Y Y12 ⎢ 11 ⎢ ⎢ ∗ Y (j) 22 ⎢ ⎢ ⎢ =⎢ ∗ ∗ ⎢ ⎢ ⎢ ∗ ∗ ⎢ ⎣ ∗ ∗
X33 X34
(j)
∗
X44
∗
∗
(j)
Y13
(j)
Y23
(j)
(j)
Y14
(j)
Y24
(j)
Y33
Y34
∗
Y44
∗
∗
(j)
⎤ N1j N2j N3j N4j
⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ 0, ⎥ ⎥ ⎥ ⎥ ⎦
W1j ⎤ S1j ⎥ ⎥ S2j ⎥ ⎥ ⎥ ⎥ S3j ⎥ 0, ⎥ ⎥ S4j ⎥ ⎥ ⎦ W2j
(4.36)
(4.37)
4.2 Two Delays
⎡
(j)
(j)
(j)
(j)
85
⎤
Z11 Z12 Z13 Z14 M1j
(j)
Θ3
⎢ ⎢ ⎢ ∗ ⎢ ⎢ =⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ (j) (j) (j) Z22 Z23 Z24 M2j ⎥ ⎥ ⎥ (j) (j) ∗ Z33 Z34 M3j ⎥ 0, ⎥ ⎥ (j) ∗ ∗ Z44 M4j ⎥ ⎦ ∗ ∗ ∗ W3j
(4.38)
where (j)
(j)
T T T Ξ11 = Q1j + Q2j − T1 A0j − AT 0j T1 + N1j + N1j + S1j + S1j + h1 X11 (j)
Ξ12 = (j)
Ξ13 = (j)
Ξ14 = (j)
Ξ22 = (j)
Ξ23 = (j) Ξ24 = (j) Ξ33 = (j)
Ξ34 = (j)
Ξ44 =
(j) h2 Y11
(j) + + |h1 − h2 |Z11 , (j) (j) (j) T T T Pj + T1 − AT 0j T2 + N2j + S2j + h1 X12 + h2 Y12 + |h1 − h2 |Z12 , (j) (j) (j) T T −T1 A1j + N3j − N1j + S3j − M1j + h1 X13 + h2 Y13 + |h1 − h2 |Z13 , (j) (j) (j) T T −T1 A2j + N4j + S4j − S1j + M1j + h1 X14 + h2 Y14 + |h1 − h2 |Z14 , (j) (j) h1 W1j + h2 W2j + |h1 − h2 |W3j + T2 + T2T + h1 X22 + h2 Y22 (j) + |h1 − h2 |Z22 , (j) (j) (j) −T2 A1j − N2j − M2j + h1 X23 + h2 Y23 + |h1 − h2 |Z23 , (j) (j) (j) −T2 A2j − S2j + M2j + h1 X24 + h2 Y24 + |h1 − h2 |Z24 , (j) (j) (j) T T −Q1j − N3j − N3j − M3j − M3j + h1 X33 + h2 Y33 + |h1 − h2 |Z33 , (j) (j) (j) T T −N4j − S3j + M3j − M4j + h1 X34 + h2 Y34 + |h1 − h2 |Z34 , (j) (j) (j) T T −Q2j − S4j − S4j + M4j + M4j + h1 X44 + h2 Y44 + |h1 − h2 |Z44 .
4.2.4 Numerical Examples Example 4.2.1. Consider the stability of system (4.3) with m = 2 and ⎡ ⎡ ⎡ ⎤ ⎤ ⎤ −2 0 −1 0.6 0 −0.6 ⎦ , A1 = ⎣ ⎦ , A2 = ⎣ ⎦ . (4.39) A0 = ⎣ 0 −0.9 −0.4 −1 −0.6 0 If h1 = h2 , this system is equivalent to system (4.3) with m = 1 and ⎡ ⎤ ⎡ ⎤ −2 0 −1 0 ⎦ , A1 = ⎣ ⎦. (4.40) A0 = ⎣ 0 −0.9 −1 −1 The methods in [1, 2] and Corollary 4.2.1 show that system (4.3) with m = 1 and (4.40) is asymptotically stable for 0 h1 4.47. However, [1, 2] show that it is asymptotically stable for 0 h1 = h2 1.64. This result is conservative for multiple delays primarily because the relationship between h1 and h2 was not taken into account. In contrast, Theorem 4.2.1
86
4. Stability of Systems with Multiple Delays
shows that system (4.3) with m = 2 and (4.39) is asymptotically stable for 0 h1 = h2 4.47. This upper bound is much larger than the one in [1, 2] and is the same as the one for a single delay. Regarding the calculated range of h2 that ensures that system (4.3) with m = 2 and (4.39) is asymptotically stable for a given h1 , Table 4.1 compares the results for our method and for the one in [1, 2]; the results are also illustrated in Fig. 4.1. Clearly, our method produces significantly larger stability domains for h1 and h2 . Since the stable range for a single delay is generally from 0 to an upper bound, we usually just need to find that upper bound. As Fridman & Shaked simply extended the method for a single delay to two delays [1, 2], they were able to provide only a stable upper bound for h2 , but not an appropriate (possibly non-zero) lower bound. In the numerical example, their method yielded h1 < 2.25, and it was impossible to find the stable range of h2 for h1 2.25. In contrast, our method employs a cross term for h1 and h2 (the last term in (4.12)) to construct a new type of Lyapunov-Krasovskii functional. Unlike other methods, this is not a simple extension of the treatment for a single delay; and it takes the relationship between the two delays into account. Consequently, our method yields a stable range for h2 rather than a simple upper bound. In the numerical example, the stable range of h2 is much larger than that given by the method in [1, 2] when h1 < 2.25; and we can even obtain a stable range for h2 when h1 2.25. Note that in this case, the stable range of h2 no longer starts from 0. Example 4.2.2. Consider system (4.3) with two delays and the following parameters: ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 0 −0.12 + 12ρ −0.4 0.35 0.3 −0.7 ⎦ , A1 = ⎣ ⎦ , A2 = ⎣ ⎦ , (4.41) A0 = ⎣ 1 −0.465 − ρ 0.1 0.2 −0.1 0.1 and −0.035 ρ 0.035. When h1 = h2 , the above system is equivalent to the one in Example 3.5.2 in Chapter 3 for a constant delay. The methods for a system with a single delay in [2, 5] tell us that the system is stable for 0 h1 0.782. On the other hand, Theorem 3.4.1 in Chapter 3 gives 0 h1 0.863 as the stable range. Above we proved that a system with two equal delays is equivalent to the same system with a single delay. Solving LMIs (4.35)-(4.38) in Theorem 4.2.4 also demonstrates this point; that is, system (4.3) with two delays and
4.2 Two Delays
87
Fig. 4.1. Stability ranges ensuring asymptotic stability of system (4.3) with m = 2 and parameters (4.39) Table 4.1. Range of h2 ensuring asymptotic stability of system (4.3) with m = 2 and parameters (4.39) for a given h1 (Example 4.2.1) h1
1.51
1.52
1.53
1.55
h2 (Theorem 4.2.1)
[0, +∞]
[0, 3.36]
[0, 3.35]
[0, 3.34]
h2 ( [1, 2])
[0, +∞]
[0, 1.84]
[0, 1.81]
[0, 1.78]
h1
1.6
1.64
1.7
1.8
h2 (Theorem 4.2.1)
[0, 3.33]
[0, 3.33]
[0, 3.33]
[0, 3.36]
h2 ( [1, 2])
[0, 1.71]
[0, 1.64]
[0, 1.57]
[0, 1.42]
h1
1.9
2.0
2.1
2.2
h2 (Theorem 4.2.1)
[0, 3.39]
[0, 3.43]
[0, 3.47]
[0, 3.52]
h2 ( [1, 2])
[0, 1.22]
[0, 0.88]
[0, 0.40]
[0, 0.07]
h1
2.25
2.3
2.4
2.5
h2 (Theorem 4.2.1)
[0, 3.55]
[0.08, 3.57]
[0.22, 3.61]
[0.35, 3.65]
h2 ( [1, 2])
[0, 0]
—
—
—
h1
3.0
3.5
4.0
4.47
h2 (Theorem 4.2.1)
[1.04, 3.77]
[1.88, 3.90]
[3.59, 4.18]
[4.47, 4.47]
h2 ( [1, 2])
—
—
—
—
88
4. Stability of Systems with Multiple Delays
(4.41) is robustly stable for 0 h1 = h2 0.863. However, applying Lemma 1 in [2] and Corollary 3 in [1] or extending Theorem 1 in [5] to the twodelay case shows that system (4.3) with two delays and (4.41) is robustly stable for 0 h1 = h2 0.235. Clearly, the upper bound on h1 = h2 obtained in [1,5,6] is much smaller than the one produced by Theorem 4.2.4. Furthermore, Table 4.2 compares the methods in [1, 5, 6] and our method for the problem of calculating the upper bound on h2 for a given h1 ; the results are illustrated in Fig. 4.2. It is clear that our method produces significantly larger stability domains for h1 and h2 . Table 4.2. Range of h2 ensuring asymptotic stability of system (4.3) with two delays and parameters (4.41) for a given h1 (Example 4.2.2) h1
0.1
0.2
0.235
0.3
h2 (Theorem 4.2.4)
[0, 0.50]
[0, 0.54]
[0, 0.55]
[0, 0.57]
h2 ( [1, 5, 6])
[0, 0.36]
[0, 0.26]
[0, 0.235]
[0, 0.17]
h1
0.4
0.5
0.6
0.7
h2 (Theorem 4.2.4)
[0, 0.60]
[0.05, 0.64]
[0.41, 0.69]
[0.61, 0.75]
h2 ( [1, 5, 6])
[0, 0.08]
—
—
—
h1
0.8
0.863
h2 (Theorem 4.2.4)
[0.77, 0.82]
[0.863, 0.863]
h2 ( [1, 5, 6])
—
—
Fig. 4.2. Stability ranges ensuring asymptotic stability of system (4.3) with two delays and parameters (4.41)
4.3 Multiple Delays
89
4.3 Multiple Delays This section extends Theorem 4.2.1 to system (4.3) with m > 2. For convenience, we assume that 0 = h0 h 1 h 2 · · · h m .
(4.42)
We have the following theorem. Theorem 4.3.1. Consider system (4.3). Given scalars hi 0, i = 1, 2, · · · , m satisfying (4.42), the system is asymptotically stable if there exist matrices ⎤ ⎡ (ij) (ij) (ij) X00 X01 · · · X0m ⎥ ⎢ ⎢ (ij) (ij) ⎥ ∗ X11 · · · X1m ⎥ ⎢ ⎥ P > 0, Qi 0, i = 1, 2, · · · , m, X (ij) = ⎢ ⎢ .. .. .. ⎥ 0, i = ⎢ . . . ⎥ ⎦ ⎣ ∗
∗
(ij)
· · · Xmm
0, 1, · · · , m − 1, i < j m, and W (ij) > 0, i = 0, 1, · · · , m − 1, i < j m, (ij) and any appropriately dimensioned matrices Nl , l = 0, 1, · · · , m, i = 0, 1, · · · , m − 1, i < j m such that the following LMIs hold: ⎤ ⎡ Ξ00 Ξ01 · · · Ξ0m ⎥ ⎢ ⎥ ⎢ ⎢ ∗ Ξ11 · · · Ξ1m ⎥ ⎥ ¯ ⎢ (4.43) Ξ=⎢ . .. .. ⎥ < 0, ⎥ ⎢ .. . . ⎦ ⎣ ∗ ⎡
∗ (ij)
⎢ X00 ⎢ ⎢ ∗ ⎢ ⎢ . . Γ (ij) = ⎢ ⎢ . ⎢ ⎢ ⎢ ∗ ⎣ ∗
· · · Ξmm (ij)
X01
(ij) X11
(ij)
· · · X0m
(ij) X1m
···
.. .
.. .
(ij)
N0
⎤ ⎥
(ij) ⎥ N1 ⎥ ⎥
⎥ ⎥ 0, i = 0, 1, · · · , m − 1, i < j m, ⎥ ⎥ (ij) ⎥ Nm ⎥ ⎦ W (ij) .. .
(ij)
∗
· · · Xmm
∗
···
∗
(4.44) where Ξ00 = P A0 +
AT 0P
+
m
Qi +
i=1
+AT 0 GA0 +
m−1
m
i=0 j=i+1
m !
(0j) N0
+
j=1 (ij)
(hj − hi )X00 ,
(0j) T [N0 ]
"
90
4. Stability of Systems with Multiple Delays
Ξ0k = P Ak −
k−1
(ik)
N0
+
i=0
m
(0i) T
[Nk
] +
i=1
+AT 0 GAk +
m−1
m
m
(kj)
N0
j=k+1 (ij)
(hj − hi )X0k , k = 1, 2, · · · , m,
i=0 j=i+1
Ξkk = −Qk −
k−1 !
(ik) Nk
+
(ik) T [Nk ]
"
" m ! (kj) (kj) T + Nk + [Nk ]
i=0
j=k+1
+AT k GAk +
m−1
m
(ij)
(hj − hi )Xkk , k = 1, 2, · · · , m,
i=0 j=i+1
Ξlk = −
k−1
(ik)
Nl
−
i=0
l−1
(il) T
[Nk ] +
i=0
+AT l GAk +
m−1
m
(kj)
Nl
j=k+1 m
m
+
(lj) T
[Nk ]
j=l+1
(ij)
(hj − hi )Xlk , l = 1, 2, · · · , m, l < k m
i=0 j=i+1
G=
m−1
m
(hj − hi )W (ij) .
i=0 j=i+1
Proof. Choose the following Lyapunov-Krasovskii functional candidate: m t xT (s)Qi x(s)ds Vm (xt ) = xT (t)P x(t) + +
m m−1 i=0 j=i+1
i=1
t−hi
−hi
−hj
t
x˙ T (s)W (ij) x(s)dsdθ, ˙
(4.45)
t+θ
where P > 0, Qi 0, i = 1, 2, · · · , m, and W (ij) > 0, i = 0, 1, · · · , m− 1, i < j m are to be determined. According to the Newton-Leibnitz formula, for any appropriately dimen(ij) sioned matrices Nl , i = 0, 1, · · · , m − 1, i < j m, l = 0, 1, · · · , m, the following equation holds: m t−hi (ij) T x(t − hi ) − x(t − hj ) − x (t − hl )Nl x(s)ds ˙ = 0. (4.46) 2 t−hj
l=0
On the other hand, for any matrices X (ij) 0, i = 0, 1, · · · , m−1, i < j m, the following holds: m−1
m
i=0 j=i+1
(hj − hi )ζ1T (t) X (ij) − X (ij) ζ1 (t) = 0,
(4.47)
4.4 Conclusion
91
where T ζ1 (t) = xT (t), xT (t − h1 ), xT (t − h2 ), · · · , xT (t − hm ) . So, the derivative of Vm (xt ) along the solutions of system (4.3) can be written as ¯ 1 (t) − V˙ m (xt ) = ζ1T (t)Ξζ
m−1
m
i=0 j=i+1
t−hi
t−hj
ζ2T (t, s)Γ (ij) ζ2 (t, s)ds,
(4.48)
where T ˙ ; ζ2 (t, s) = ζ1T (t), x(s) and Ξ and Γ (ij) , i = 0, 1, · · · , m − 1, i < j m are defined in (4.43) and (4.44), respectively. From (4.48), if LMIs (4.43) and (4.44) hold, system (4.3) is asymptotically stable. This completes the proof.
Remark 4.3.1. If ∃ i ∈ {1, 2, · · · , m− 1} such that hi = hi+1 , then the system can be transformed into a system with m−1 delays. From the explanations for Theorem 4.2.1 and Corollary 4.2.1, it is easy to see that the delay-dependent condition is equivalent to the one for a system with m − 1 delays. Remark 4.3.2. Following a similar line, Theorem 4.2.4 can also be extended to a system with multiple delays and polytopic-type uncertainties using a parameter-dependent Lyapunov-Krasovskii functional, although we do not give the details here for brevity.
4.4 Conclusion This chapter presents new delay-dependent stability criteria for linear systems with multiple constant delays derived by the FWM approach. This method is less conservative than previous ones because it employs neither a system transformation nor an inequality to estimate the upper bound on a cross term, but instead uses FWMs to take the relationships among the delays into account. FWMs that express the reciprocal influences of the terms of the Newton-Leibnitz formula are easy to calculate and are determined by LMIs. In contrast to other methods, the stability domain of a delay provided by our method is a range, rather than just an upper bound.
92
4. Stability of Systems with Multiple Delays
References 1. E. Fridman and U. Shaked. Delay-dependent stability and H∞ control: constant and time-varying delays. International Journal of Control, 76(1): 48-60, 2003. 2. E. Fridman and U. Shaked. An improved stabilization method for linear timedelay systems. IEEE Transactions on Automatic Control, 47(11): 1931-1937, 2002. 3. Y. He, M. Wu, and J. H. She. Delay-dependent stability criteria for linear systems with multiple time delays. IEE Proceedings: Control Theory and Applications, 153(4): 447-452, 2006. 4. M. Wu and Y. He. Parameter-dependent Lyapunov functional for systems with multiple time delays. Journal of Control Theory and Applications, 2(3): 239-245, 2004. 5. E. Fridman and U. Shaked. Parameter dependent stability and stabilization of uncertain time-delay systems. IEEE Transactions on Automatic Control, 48(5): 861-866, 2003. 6. E. Fridman and U. Shaked. A descriptor system approach to H∞ control of linear time-delay systems. IEEE Transactions on Automatic Control, 47(2): 253-270, 2002.
5. Stability of Neutral Systems
A neutral system is a system with a delay in both the state and the derivative of the state, with the one in the derivative being called a neutral delay. That makes it more complicated than a system with a delay in only the state. Neutral delays occur not only in physical systems, but also in control systems, where they are sometimes artificially added to boost the performance. For example, repetitive control systems constitute an important class of neutral systems [1]. Stability criteria for neutral systems can be classified into two types: delay-independent [2–4] and delay-dependent [5–24]. Since the delayindependent type does not take the length of a delay into consideration, it is generally conservative. The basic methods for studying delay-dependent criteria for neutral systems are similar to those used to study linear systems, with the main ones being fixed model transformations. As mentioned in Chapter 1, the four types of fixed model transformations impose limitations on possible solutions to delay-dependent stability problems. The delay in the derivative of the state gives a neutral system special features not shared by linear systems. In a neutral system, a neutral delay can be the same as or different from a discrete delay. Neutral systems with identical constant discrete and neutral delays were studied in [6–8,12,14]; and systems with different discrete and neutral delays were studied in [10, 11, 13, 16–21,23]. The criteria in these reports usually require the neutral delay to be constant, but allow the discrete delay to be either constant [10,11,13,17,18,23] or time-varying [16,19–21]. Almost all these criteria take only the length of a discrete delay into account and ignore the length of a neutral delay. They are thus called discrete-delay-dependent and neutral-delay-independent stability criteria [13]. Discrete-delay- and neutral-delay-dependent criteria are rarely investigated, with two exceptions being [23, 25]. This chapter offers a comprehensive analysis of these various types of criteria based on the FWM approach. First, this approach is used to examine systems with a time-varying discrete delay and a constant neutral
94
5. Stability of Neutral Systems
delay, and discrete-delay-dependent and neutral-delay-independent stability criteria are obtained. We show that the criterion in [19], which relies on a descriptor model transformation, is a special case of ours. Furthermore, we point out that another reason why criteria derived using Park’s inequality in combination with a descriptor model transformation are conservative is that, when the coefficient matrix of a term with a discrete delay is nonsingular, Park’s inequality leads to conservativeness. Then, for a neutral system with identical constant discrete and neutral delays, we use the FWM approach to derive delay-dependent stability criteria; and we obtain less conservative results [26, 27] by using the FWM approach in combination with either a parameterized model transformation or an augmented Lyapunov-Krasovskii functional. Finally, for a neutral system with different constant discrete and neutral delays, we use the FWM approach to derive a discrete-delay- and neutral-delay-dependent stability criterion; and we show that, when the two delays are identical, the criterion is equivalent to the one obtained by using the FWM approach to directly handle identical discrete and neutral delays [23, 25].
5.1 Neutral Systems with Time-Varying Discrete Delay This section uses the FWM approach to examine the stability of neutral systems with a time-varying discrete delay and a constant neutral delay. 5.1.1 Problem Formulation Consider the following neutral system with a time-varying discrete delay: ⎧ ⎨ x(t) ˙ − C x(t ˙ − τ ) = Ax(t) + Ad x(t − d(t)), t > 0, (5.1) ⎩ x(t) = φ(t), t ∈ [−r, 0], where x(t) ∈ Rn is the state vector; A, Ad , and C are constant matrices with appropriate dimensions; all the eigenvalues of matrix C are inside the unit circle; the delay, d(t), is a time-varying continuous differentiable function satisfying 0 d(t) h
(5.2)
˙ μ, d(t)
(5.3)
and
5.1 Neutral Systems with Time-Varying Discrete Delay
95
where h and μ are constants; r is defined to be max{h, τ }; and the initial condition, φ(t), is a continuously differentiable initial function of t ∈ [−r, 0]. If the system contains time-varying structured uncertainties, it can be written as ⎧ ⎨ x(t) ˙ − C x(t ˙ − τ ) = (A + ΔA(t))x(t) + (Ad + ΔAd (t))x(t − d(t)), t > 0, ⎩ x(t) = φ(t), t ∈ [−r, 0]. (5.4) The uncertainties are assumed to be of the form [ΔA(t) ΔAd (t)] = DF (t) [Ea Ead ] ,
(5.5)
where D, Ea , and Ead are constant matrices with appropriate dimensions; and F (t) is an unknown, real, and possibly time-varying matrix with Lebesguemeasurable elements satisfying F T (t)F (t) I, ∀t.
(5.6)
5.1.2 Nominal Systems Choose the Lyapunov-Krasovskii functional candidate to be t t xT (s)Qx(s)ds + x˙ T (s)Rx(s)ds ˙ V (xt ) = xT (t)P x(t) +
0
t−d(t) t
+ −h
t−τ
x˙ T (s)Z x(s)dsdθ, ˙
(5.7)
t+θ
where P > 0, Q 0, R 0, and Z > 0 are to be determined. For any appropriately dimensioned matrices Ni , i = 1, 2, 3, the NewtonLeibnitz formula gives us 2 xT (t)N1 + xT (t − d(t))N2 + x˙ T (t − τ )N3 t × x(t) − x(s)ds ˙ − x(t − d(t)) = 0. (5.8) t−d(t)
⎡
X11 X12 X13
⎤
⎢ ⎥ ⎥ On the other hand, for any matrix X = ⎢ ⎣ ∗ X22 X23 ⎦ 0, the following ∗ ∗ X33 inequality holds: t T hη1 (t)Xη1 (t) − η1T (t)Xη1 (t)ds 0, (5.9) t−d(t)
96
5. Stability of Neutral Systems
T where η1 (t) = xT (t), xT (t − d(t)), x˙ T (t − τ ) . Then, calculating the derivative of V (xt ) along the solutions of system (5.1), adding the left sides of (5.8) and (5.9) to it, and replacing the term x(t) ˙ in V˙ (xt ) with the system equation yield the following theorem. Theorem 5.1.1. Consider nominal system (5.1). Given scalars h > 0 and μ, the system is asymptotically ⎡ stable if there⎤exist matrices P > 0, Q 0, X X12 X13 ⎢ 11 ⎥ ⎢ ⎥ R 0, Z > 0, and X = ⎢ ∗ X22 X23 ⎥ 0, and any appropriately ⎣ ⎦ ∗ ∗ X33 dimensioned matrices Ni , i = 1, 2, 3 such that the following LMIs hold: ⎡ ⎤ Φ11 Φ12 Φ13 AT H ⎢ ⎥ ⎢ ⎥ ⎢ ∗ Φ22 Φ23 AT ⎥ H d ⎥ < 0, (5.10) Φ=⎢ ⎢ ⎥ ⎢ ∗ ∗ Φ33 C T H ⎥ ⎣ ⎦ ∗ ∗ ∗ −H ⎡
⎤ X11 X12 X13 N1
⎢ ⎢ ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ X22 X23 N2 ⎥ ⎥ 0, ⎥ ∗ X33 N3 ⎥ ⎦ ∗ ∗ Z
(5.11)
where Φ11 = P A + AT P + Q + N1 + N1T + hX11 , Φ12 = P Ad + N2T − N1 + hX12 , Φ13 = P C + N3T + hX13 , Φ22 = −(1 − μ)Q − N2 − N2T + hX22 , Φ23 = −N3T + hX23 , Φ33 = −R + hX33 , H = R + hZ. On the other hand, for any appropriately dimensioned matrices Ni , i = 1, 2, · · · , 4, the Newton-Leibnitz formula gives us 2 xT (t)N1 + x˙ T (t)N2 + xT (t − d(t))N3 + x˙ T (t − τ )N4 t × x(t) − x(s)ds ˙ − x(t − d(t)) = 0. (5.12) t−d(t)
5.1 Neutral Systems with Time-Varying Discrete Delay
97
In addition, from system equation (5.1), we know that, for any appropriately dimensioned matrices Tj , j = 1, 2, ˙ − Ax(t) − Ad x(t − d(t)) − C x(t ˙ − τ )] = 0. (5.13) 2 xT (t)T1 + x˙ T (t)T2 [x(t) ⎤ ⎡ X11 X12 X13 X14 ⎥ ⎢ ⎥ ⎢ ⎢ ∗ X22 X23 X24 ⎥ ⎥ 0, the following Furthermore, for any matrix X = ⎢ ⎥ ⎢ ⎢ ∗ ∗ X33 X34 ⎥ ⎦ ⎣ ∗ ∗ ∗ X44 inequality holds: t T hη2 (t)Xη2 (t) − η2T (t)Xη2 (t)ds 0, (5.14) t−d(t)
where T η2 (t) = xT (t), x˙ T (t), xT (t − d(t)), x˙ T (t − τ ) . Calculating the derivative of V (xt ) along the solutions of system (5.1) and using (5.12)-(5.14) yield t V˙ (xt ) = η2T (t)Γ η2 (t) − η3T (t, s)Ψ η3 (t, s)ds, (5.15) t−d(t)
where T η3 (t, s) = η2T (t), x˙ T (s) ,
(5.16)
⎤
⎡ Γ11 Γ12 Γ13 Γ14
⎢ ⎢ ⎢ ∗ Γ =⎢ ⎢ ⎢ ∗ ⎣ ∗ ⎡ X11 ⎢ ⎢ ⎢ ∗ ⎢ ⎢ Ψ =⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗ and
⎥ ⎥ Γ22 Γ23 Γ24 ⎥ ⎥ < 0, ⎥ ∗ Γ33 Γ34 ⎥ ⎦ ∗ ∗ Γ44 ⎤ X12 X13 X14 N1 ⎥ ⎥ X22 X23 X24 N2 ⎥ ⎥ ⎥ ∗ X33 X34 N3 ⎥ 0, ⎥ ⎥ ∗ ∗ X44 N4 ⎥ ⎦ ∗ ∗ ∗ Z
(5.17)
(5.18)
98
5. Stability of Neutral Systems
Γ11 = Q + N1 + N1T − AT T1T − T1 A + hX11 , Γ12 = P + N2T + T1 − AT T2T + hX12 , Γ13 = N3T − N1 − T1 Ad + hX13 , Γ14 = N4T − T1 C + hX14 , Γ22 = R + hZ + T2 + T2T + hX22 , Γ23 = −N2 − T2 Ad + hX23 , Γ24 = −T2 C + hX24 , Γ33 = −(1 − μ)Q − N3 − N3T + hX33 , Γ34 = −N4T + hX34 , Γ44 = −R + hX44 . Thus, we arrive at the following theorem. Theorem 5.1.2. Consider nominal system (5.1). Given scalars h > 0 and μ, the system is asymptotically stable if there exist matrices P > 0, Q 0, ⎤ ⎡ X X12 X13 ⎢ 11 ⎢ ⎢ ∗ X22 X23 R 0, Z > 0, and X = ⎢ ⎢ ⎢ ∗ ∗ X33 ⎣ ∗ ∗ ∗
X14
⎥ ⎥ X24 ⎥ ⎥ 0, and any appropriately ⎥ X34 ⎥ ⎦ X44
dimensioned matrices Ni , i = 1, 2, · · · , 4 and Tj , j = 1, 2 such that LMIs (5.17) and (5.18) hold. If (5.12) is replaced with 2 xT (t)N1 + x˙ T (t)N2 + xT (t − d(t))N3 + x˙ T (t − τ )N4 Ad × x(t) −
t
x(s)ds ˙ − x(t − d(t)) = 0,
t−d(t)
the Z in Lyapunov-Krasovskii functional (5.7) is replaced with AT d ZAd , X is T −1 set to N1T N2T N3T N4T Z N1T N2T N3T N4T , and η3 (t, s) is set to T T η4 (t, s) = η2 (t), x˙ T (s)AT , then we get a corollary. d Corollary 5.1.1. Consider nominal system (5.1). Given scalars h > 0 and μ, the system is asymptotically stable if there exist matrices P > 0, Q 0, R 0, and Z > 0, and any appropriately dimensioned matrices Ni , i = 1, 2, · · · , 4 and Tj , j = 1, 2 such that the following LMI holds:
5.1 Neutral Systems with Time-Varying Discrete Delay
99
⎤
⎡ Π11 Π12 Π13 Π14 hN1
⎢ ⎢ ⎢ ∗ ⎢ ⎢ ⎢ Π =⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎢ ⎣ ∗
Π22 Π23 Π24 ∗
Π33 Π34
∗
∗
Π44
∗
∗
∗
⎥ ⎥ hN2 ⎥ ⎥ ⎥ ⎥ hN3 ⎥ < 0, ⎥ ⎥ hN4 ⎥ ⎥ ⎦ −hZ
(5.19)
where T T T Π11 = Q + N1 Ad + AT d N1 − A T1 − T1 A, T T T Π12 = P + AT d N2 + T 1 − A T 2 , T Π13 = AT d N3 − N1 Ad − T1 Ad , T Π14 = AT d N4 − T1 C, T Π22 = R + hAT d ZAd + T2 + T2 ,
Π23 = −N2 Ad − T2 Ad , Π24 = −T2 C, T Π33 = −(1 − μ)Q − N3 Ad − AT d N3 , T Π34 = −AT d N4 ,
Π44 = −R. Remark 5.1.1. Corollary 5.1.1 is equivalent to Theorem 5.1.2 when Ad is nonsingular. However, if Ad is singular, Ni Ad , i = 1, 2, · · · , 4 cannot describe all the FWMs, which means that Corollary 5.1.1 is more conservative than Theorem 5.1.2. Remark 5.1.2. The condition in Corollary 5.1.1 includes Theorem 1 in [19] T T for a single delay. In fact, if we set P = P1 , N1 = W11 + P2T , N2 = W12 + P3T , T T N3 = 0, N4 = 0, T1 = −P2 , T2 = −P3 , Q = S1 , and Z = R1 (where the terms on the right are the parameter matrices of Theorem 1 in [19]), then Corollary 5.1.1 yields precisely Theorem 1 in [19]. Moreover, N3 and N4 are selected by solving LMIs, rather than simply being set to zero, which makes our criterion an improvement over the one in [19]. Remark 5.1.3. This section concerns systems with a time-varying discrete delay and a constant neutral delay. Since the criterion obtained depends on the length of the discrete delay but not on that of the neutral delay, it is a discrete-delay-dependent and neutral-delay-independent condition.
100
5. Stability of Neutral Systems
5.1.3 Systems with Time-Varying Structured Uncertainties We can use Lemma 2.6.2 to extend Theorems 5.1.1 and 5.1.2 to systems with time-varying structured uncertainties. Corollary 5.1.2. Consider system (5.4). Given scalars h > 0 and μ, the system is robustly stable if there exist matrices P > 0, Q 0, R 0, Z > 0, ⎡ ⎤ X11 X12 X13 ⎢ ⎥ ⎢ ⎥ and X = ⎢ ∗ X22 X23 ⎥ 0, any appropriately dimensioned matrices ⎣ ⎦ ∗ ∗ X33 Ni , i = 1, 2, 3, and a scalar λ > 0 such that LMI (5.11) and the following LMI hold: ⎡ ⎤ Φ11 + λEaT Ea Φ12 + λEaT Ead Φ13 AT H P D ⎢ ⎥ ⎢ ⎥ T ⎢ Ead Φ23 AT H 0 ⎥ ∗ Φ22 + λEad d ⎢ ⎥ ⎢ ⎥ (5.20) ⎢ ∗ ∗ Φ33 C T H 0 ⎥ < 0, ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ∗ ∗ −H HD ⎥ ⎣ ⎦ ∗ ∗ ∗ ∗ −λI where Φij , i = 1, 2, 3, i j 3 and H are defined in (5.10). Corollary 5.1.3. Consider system (5.4). Given scalars h > 0 and μ, the system is robustly stable if there exist matrices P > 0, Q 0, R 0, Z > ⎡ ⎤ X11 X12 X13 X14 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ X22 X23 X24 ⎥ ⎢ ⎥ 0, any appropriately dimensioned 0, and X = ⎢ ⎥ ⎢ ∗ ∗ X33 X34 ⎥ ⎣ ⎦ ∗ ∗ ∗ X44 matrices Ni , i = 1, 2, · · · , 4 and Tj , j = 1, 2, and a scalar λ > 0 such that LMI (5.18) and the following LMI hold: ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
Γ11 + λEaT Ea Γ12 Γ13 + λEaT Ead Γ14 T1 D ∗
Γ22
Γ23
∗
∗
T Ead Γ33 + λEad
∗
∗
∗
∗
∗
∗
⎤
⎥ ⎥ Γ24 T2 D ⎥ ⎥ ⎥ Γ34 0 ⎥ < 0. ⎥ ⎥ Γ44 0 ⎥ ⎦ ∗ −λI
(5.21)
5.2 Neutral Systems with Identical Discrete and Neutral Delays
101
where Γij , i = 1, 2, · · · , 4, i j 4 are defined in (5.17) 5.1.4 Numerical Example This subsection uses a numerical example to compare the above method with the one in [19]. Example 5.1.1. Consider the stability of system (5.1) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 0 0 0.1 0 −2 1 ⎦, C = ⎣ ⎦. ⎦ , Ad = ⎣ A=⎣ 0 −1 0 0.1 1 −0.9 Table 5.1 shows the allowable upper bound, h, for various μ. Note that Ad is singular. As explained in Remarks 5.1.2 and 5.1.3, our results are better than those in [19]. Table 5.1. Allowable upper bound, h, for various μ (Example 5.1.1) μ
0
0.5
0.9
[19]
1.59
1.26
0.97
Theorems 5.1.1 and 5.1.2
1.96
1.51
1.07
5.2 Neutral Systems with Identical Discrete and Neutral Delays If the discrete delay, d(t), has a constant value of h, and if the neutral delay is also equal to h, then system (5.1) becomes a system with identical discrete and neutral delays: ⎧ ⎨ x(t) ˙ − C x(t ˙ − h) = Ax(t) + Ad x(t − h), t > 0, (5.22) ⎩ x(t) = φ(t), t ∈ [−h, 0]. Moreover, system (5.4) becomes ⎧ ⎨ x(t) ˙ − C x(t ˙ − h) = (A + ΔA(t))x(t) + (Ad + ΔAd (t))x(t − h), t > 0, ⎩ x(t) = φ(t), t ∈ [−h, 0], (5.23)
102
5. Stability of Neutral Systems
where the structured uncertainties are defined in (5.5) and (5.6). The structures of systems (5.22) and (5.23) are different from those of systems (5.1) and (5.4) in that they have only one delay. We can exploit this to overcome the conservativeness arising from the use of a discrete-delaydependent and neutral-delay-independent stability condition. It is known that Dxt must be stable if systems (5.22) and (5.23) are to be stable [28]. Several delay-dependent criteria derived by the FWM approach are given below. 5.2.1 FWM Approach First, we give a theorem for nominal system (5.22) that is based on the FWM approach. Theorem 5.2.1. Consider nominal neutral system (5.22). Given a scalar h > 0, the system is asymptotically stable if the operator Dxt is stable and P > 0, Q 0, R 0, W > 0, and ⎡ there exist matrices ⎤ X X12 X13 ⎢ 11 ⎥ ⎢ ⎥ X = ⎢ ∗ X22 X23 ⎥ 0, and any appropriately dimensioned matrices ⎣ ⎦ ∗ ∗ X33 Ni , i = 1, 2, 3 such that the following LMIs hold: ⎤ ⎡ Φ11 Φ12 Φ13 AT H ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ∗ Φ22 Φ23 AT H d ⎥ < 0, (5.24) Φ=⎢ ⎥ ⎢ ⎢ ∗ ∗ Φ33 C T H ⎥ ⎦ ⎣ ∗ ∗ ∗ −H ⎡
⎤ X11 X12 X13 N1
⎢ ⎢ ⎢ ∗ Ψ =⎢ ⎢ ⎢ ∗ ⎣ ∗
X22 X23 ∗
X33
∗
∗
⎥ ⎥ N2 ⎥ ⎥ 0, ⎥ N3 ⎥ ⎦ W
where Φ11 = P A + AT P + Q + N1 + N1T + hX11 , Φ12 = P Ad − AT P C − N1 + N2T + hX12 , Φ13 = N3T + hX13 ,
(5.25)
5.2 Neutral Systems with Identical Discrete and Neutral Delays
103
T Φ22 = −Q − C T P Ad − AT d P C − N2 − N2 + hX22 , T Φ23 = −N3 + hX23 , Φ33 = −R + hX33 , H = R + hW .
Proof. Choose the Lyapunov-Krasovskii functional candidate to be t t xT (s)Qx(s)ds + x˙ T (s)Rx(s)ds ˙ V (xt ) = (Dxt )T P (Dxt ) +
0
t−h t
+ −h
t−h
x˙ T (s)W x(s)dsdθ, ˙
(5.26)
t+θ
where P > 0, Q 0, R 0, and W > 0 are to be determined. From the Newton-Leibnitz formula, we obtain the following for any appropriately dimensioned matrices Ni , i = 1, 2, 3: 2 xT (t)N1 +xT (t − h)N2 + x˙ T (t − h)N3 x(t)−
t
x(s)ds−x(t ˙ − h) = 0.
t−h
(5.27) ⎡
⎤
X X12 X13 ⎢ 11 ⎥ ⎢ ⎥ On the other hand, for any matrix X = ⎢ ∗ X22 X23 ⎥ 0, the following ⎣ ⎦ ∗ ∗ X33 equation holds: ⎡
⎤⎡
⎤T ⎡ x(t)
⎢ ⎥ ⎢ ⎥ ⎢ x(t − h) ⎥ ⎣ ⎦ x(t ˙ − h)
Λ11 Λ12 Λ13
⎢ ⎢ ⎢ ∗ ⎣ ∗
⎤ x(t)
⎥ ⎥⎢ ⎥ ⎥⎢ Λ22 Λ23 ⎥ ⎢ x(t − h) ⎥ = 0, ⎦ ⎦⎣ x(t ˙ − h) ∗ Λ33
(5.28)
where Λij = h(Xij − Xij ), i = 1, 2, 3, i j 3. Calculating the derivative of V (xt ) along the solutions of system (5.22) and using (5.27) and (5.28) yield V˙ (xt ) = 2[x(t) − Cx(t − h)]T P [Ax(t) + Ad x(t − h)] + xT (t)Qx(t) −xT (t − h)Qx(t − h) + x˙ T (t)Rx(t) ˙ − x˙ T (t − h)Rx(t ˙ − h) t +hx˙ T (t)W x(t) ˙ − x˙ T (s)W x(s)ds ˙ t−h
104
5. Stability of Neutral Systems
+2 xT (t)N1 +xT (t − h)N2 + x˙ T (t − h)N3 x(t)− +hη1T (t)Xη1 (t) − = η1T (t)Ξη1 (t) −
t−h
x(s)ds−x(t−h) ˙
t−h t
t−h
t
t
η1T (t)Xη1 (t)ds
η2T (t, s)Ψ η2 (t, s)ds,
where η1 (t) = xT (t), xT (t − h), x˙ T (t − h) , T T η2 (t, s) = x (t), xT (t − h), x˙ T (t − h), x˙ T (s) , ⎡ ⎤ Φ11 + AT HA Φ12 + AT HAd N3T + AT HC ⎢ ⎥ ⎢ ⎥ T T Ξ=⎢ ⎥; ∗ Φ22 + AT HA −N + A HC d 3 d d ⎣ ⎦ ∗ ∗ −R + C T HC Φ11 , Φ12 , Φ22 , and H are defined in (5.24); and Ψ is defined in (5.25). If Ξ < 0, which is equivalent to Φ < 0 from the Schur complement, and if Ψ 0, then V˙ (xt ) −εx(t)2 for a sufficiently small ε > 0. In addition, operator Dxt is stable. So, system (5.22) is asymptotically stable if LMIs (5.24) and (5.25) are feasible. This completes the proof.
Remark 5.2.1. If x(t) ˙ is retained and FWMs are used to express the relationships among the terms of the system equation, we obtain a result equivalent to Theorem 5.2.1 that can be extended to systems with polytopic-type uncertainties through the use of a parameter-dependent Lyapunov-Krasovskii functional. If we do not add the left side of (5.28) to the derivative of V (xt ) in the proof of Theorem 5.2.1, we can write the derivative of V (xt ) as 1 t T ζ (t, s)Θζ1 (t, s)ds, V˙ (xt ) = h t−h 1 where ⎡ ⎢ ⎢ ⎢ Θ=⎢ ⎢ ⎢ ⎣
Φ + AT HA Φ12 + AT HAd ∗ ∗ ∗
N3T + AT HC
−hN1
⎤
⎥ ⎥ T T Φ22 + AT HA −N + A HC −hN d 2⎥ 3 d d ⎥. ⎥ ∗ −R + C T HC −hN3 ⎥ ⎦ ∗ ∗ −W
That leads to the following corollary.
5.2 Neutral Systems with Identical Discrete and Neutral Delays
105
Corollary 5.2.1. Consider nominal system (5.22). Given a scalar h > 0, the system is asymptotically stable if the operator Dxt is stable and there exist matrices P > 0, Q 0, R 0, and W > 0, and any appropriately dimensioned matrices Ni , i = 1, 2, 3 such that the following LMI holds: ⎤ ⎡ T T T T Φ + A HA Φ + A HA N + A HC −hN 12 d 1 3 ⎥ ⎢ ⎥ ⎢ ⎢ T T T ∗ Φ22 + Ad HAd −N3 + Ad HC −hN2 ⎥ ⎥ ⎢ ⎥ < 0. ⎢ (5.29) ⎥ ⎢ T ⎢ ∗ ∗ −R + C HC −hN3 ⎥ ⎥ ⎢ ⎦ ⎣ ∗ ∗ ∗ −W Extending Theorem 5.2.1 to neutral system (5.23), which has time-varying structured uncertainties, gives us the following stability criterion. Theorem 5.2.2. Consider neutral system (5.23). Given a scalar h > 0, the system is robustly stable if the operator Dxt is stable and there exist ⎤ matrices ⎡ X X12 X13 ⎥ ⎢ 11 ⎥ ⎢ P > 0, Q 0, R 0, W > 0, and X = ⎢ ∗ X22 X23 ⎥ 0, any ⎦ ⎣ ∗ ∗ X33 appropriately dimensioned matrices Ni , i = 1, 2, 3, and a scalar λ > 0 such that LMI (5.25) and the following LMI hold: ⎡ ⎤ Φ11 + λEaT Ea Φ12 + λEaT Ead Φ13 AT H PD ⎢ ⎥ ⎢ ⎥ T T T ⎢ ∗ Φ22 + λEad Ead Φ23 Ad H −C P D ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ T Φ=⎢ ⎥ < 0, (5.30) ∗ ∗ Φ33 C H 0 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ∗ ∗ −H HD ⎥ ⎢ ⎥ ⎣ ⎦ ∗ ∗ ∗ ∗ −λI where Φij , i = 1, 2, 3, i j 3 and H are defined in (5.24). 5.2.2 FWM Approach in Combination with Parameterized Model Transformation Now, we use the FWM approach in combination with a parameterized model transformation to investigate the stability of systems (5.22) and (5.23). First, we have the following theorem.
106
5. Stability of Neutral Systems
Theorem 5.2.3. Consider nominal neutral system (5.22). Given a scalar h > 0, the system is asymptotically ⎤stable if the operator Dxt is stable and ⎡ P11 P12
⎦ 0 (with P11 > 0), Q 0, R 0, Z ∗ P22 0, and W > 0, and any appropriately dimensioned matrices Ni , i = 1, 2, 3 such that the following LMI holds: ⎡ ⎤ Φ11 Φ12 Φ13 Φ14 −hN1 AT H ⎢ ⎥ ⎢ ⎥ H ⎢ ∗ Φ22 Φ23 Φ24 −hN2 AT ⎥ d ⎢ ⎥ ⎢ ⎥ T ⎢ ∗ ∗ Φ33 0 −hN3 C H ⎥ ⎢ ⎥ < 0, (5.31) Φ=⎢ ⎥ ⎢ ∗ ⎥ ∗ ∗ −hW 0 0 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ∗ ∗ ∗ −hZ 0 ⎥ ⎣ ⎦ ∗ ∗ ∗ ∗ ∗ −H there exist matrices P = ⎣
where T Φ11 = P11 A + AT P11 + P12 + P12 + Q + hW + N1 + N1T , T Φ12 = P11 Ad − AT P11 C − P12 − P12 C + N2T − N1 ,
Φ13 = N3T , Φ14 = h(AT P12 + P22 ), T T Φ22 = −Q + P12 C + C T P12 − C T P11 Ad − AT d P11 C − N2 − N2 ,
Φ23 = −N3T , Φ24 = h(AT d P12 − P22 ), Φ33 = −R, H = R + hZ. Proof. Choose the Lyapunov-Krasovskii functional candidate to be t V (xt ) = (Dxt )T P11 (Dxt ) + 2(Dxt )T P12 x(s)ds
T
t
x(s)ds
+
t−h t
t−h t
P22
x(s)ds t−h
T
+
t
x (s)Qx(s)ds + t−h
0
t
+ −h
x˙ T (s)Rx(s)ds ˙
t−h
t+θ
x˙ T (s)Z x(s)dsdθ ˙ +
0
−h
t
t+θ
xT (s)W x(s)dsdθ,
5.2 Neutral Systems with Identical Discrete and Neutral Delays
⎡ where P = ⎣
107
⎤ P11 P12
⎦ 0 (with P11 > 0), Q 0, R 0, Z 0, and ∗ P22 W > 0 are to be determined. It is clear that α1 Dxt 2 V (xt ) α2 xt 2c1 , where xt c1 =
sup {x(t + θ), x(t ˙ + θ)},
−hθ0
α1 = λmin (P ), α2 = λmax (P ) (1 + C + h) + h {λmax (Q) + λmax (R)} 1 + h2 {λmax (Z) + λmax (W )}. 2 From the Newton-Leibnitz formula, we have the following for any appropriately dimensioned matrices Ni , i = 1, 2, 3: T T T 2 x (t)N1+x (t − h)N2 + x˙ (t−h)N3 x(t)−
t
x(s)ds−x(t ˙ − h) = 0.
t−h
(5.32) Calculating the derivative of V (xt ) along the solutions of system (5.22) and using (5.32) yield T V˙ (xt ) = 2 [x(t) − Cx(t − h)] P11 [Ax(t) + Ad x(t − h)] t T +2 [Ax(t) + Ad x(t − h)] P12 x(s)ds t−h T
+2[x(t) − Cx(t − h)] P12 [x(t) − x(t − h)] t +2[x(t)−x(t−h)]T P22 x(s)ds+xT (t)Qx(t)−xT (t−h)Qx(t−h) t−h
+x˙ T (t)Rx(t)− ˙ x˙ T (t−h)Rx(t−h)+h ˙ x˙ T (t)Z x(t)− ˙ +hxT (t)W x(t) −
t
t−h t
xT (s)W x(s)ds
t−h
T T T +2 x (t)N1 +x (t−h)N2 + x˙ (t−h)N3 x(t)− =
1 h
x˙ T (s)Z x(s)ds ˙
t
x(s)ds−x(t ˙ − h)
t−h t
ζ T (t, s)Ξζ(t, s)ds,
t−h
where ζ(t, s) = [xT (t), xT (t − h), x˙ T (t − h), xT (s), x˙ T (s)]T ,
(5.33)
108
5. Stability of Neutral Systems
⎡ ⎢ ⎢ ⎢ ⎢ ⎢ Ξ=⎢ ⎢ ⎢ ⎢ ⎣
Φ11 + AT HA Φ12 + AT HAd Φ13 + AT HC
Φ14
T Φ22 + AT d HAd Φ23 + Ad HC
Φ24
∗ ∗
∗
∗
∗
∗
∗
−hN1
⎤
⎥ ⎥ −hN2 ⎥ ⎥ ⎥ Φ33 + C T HC 0 −hN3 ⎥, ⎥ ⎥ ∗ −hW 0 ⎥ ⎦ ∗ ∗ −hZ
and H is defined in (5.31). If Ξ < 0, which is equivalent to Ξ < 0 from the Schur complement, then V˙ (xt ) −εx(t)2 for a sufficiently small ε > 0. In addition, operator Dxt is stable. Thus, system (5.22) is asymptotically stable if LMI (5.31) is true. This completes the proof.
Remark 5.2.2. The matrix P in Theorem 5.2.3 can be chosen to be positive semi-definite. Setting P12 = 0, P22 = 0, and W = 0, turns Theorem 5.2.3 into Corollary 5.2.1, which was obtained by directly using the FWM approach. So, we can get appropriate values for the elements of matrices P12 , P22 , and W by solving an LMI rather than by setting these matrices to zero. On the other hand, setting Z = 0 and Ni = 0, i = 1, 2, 3 yields the following corollary. Corollary 5.2.2. Consider nominal system (5.22). Given a scalar h > 0, the system is asymptotically stable if the operator Dxt is stable and there exist ⎡ ⎤ matrices P = ⎣
P11 P12
⎦ 0 (with P11 > 0), Q 0, R 0, and W > 0 ∗ P22 such that the following LMI holds: ⎡ ⎤ Φ¯11 Φ¯12 0 Φ¯14 AT R ⎢ ⎥ ⎢ ⎥ ⎢ ∗ Φ¯22 0 ⎥ Φ¯24 AT R d ⎢ ⎥ ⎢ ⎥ (5.34) ⎢ ∗ 0 C T R ⎥ < 0, ∗ Φ¯33 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ∗ ∗ −hW 0 ⎥ ⎣ ⎦ ∗ ∗ ∗ ∗ −R where T Φ¯11 = P11 A + AT P11 + P12 + P12 + Q + hW , T T ¯ Φ12 = P11 Ad − A P11 C − P12 C − P12 , Φ¯14 = h(AT P12 + P22 ),
5.2 Neutral Systems with Identical Discrete and Neutral Delays
109
T C + C T P12 − C T P11 Ad − AT Φ¯22 = −Q + P12 d P11 C, T Φ¯24 = h(Ad P12 − P22 ), Φ¯33 = −R,
Remark 5.2.3. We can use a parameterized model transformation to derive Corollary 5.2.2 by combining parameter matrices, such as P12 and P22 , with a Lyapunov-Krasovskii functional. That enables the matrices to be obtained by solving an LMI. Theorem 5.2.3 can be obtained from this corollary and the explanation in Remark 5.2.2 by combining the FWM approach with a parameterized model transformation. Theorem 5.2.3 can also be extended to system (5.23), which has timevarying structured uncertainties, as stated in the following theorem. Theorem 5.2.4. Consider neutral system (5.23). Given a scalar h > 0, the system ⎤ stable if the operator Dxt is stable and there exist matrices ⎡ is robustly P11 P12 ⎦ 0 (with P11 > 0), Q 0, R 0, Z 0, and W > 0, any P =⎣ ∗ P22 appropriately dimensioned matrices Ni , i = 1, 2, 3, and a scalar λ > 0 such that the following LMI holds: ⎡ ⎤ T T T Φ + λE E Φ + λE E Φ Φ −hN A H P D 11 a 12 ad 13 14 1 11 a a ⎢ ⎥ ⎢ ⎥ T T T ⎢ ∗ Φ22 + λEad Ead Φ23 Φ24 −hN2 Ad H −C P11 D⎥ ⎢ ⎥ ⎢ ⎥ T ⎢ ⎥ 0 −hN C H 0 ∗ ∗ Φ 33 3 ⎢ ⎥ ⎢ ⎥ ⎢ T ∗ ∗ ∗ −hW 0 0 hP12 D ⎥ ⎢ ⎥ < 0, ⎢ ⎥ ⎢ ⎥ ∗ ∗ ∗ ∗ −hZ 0 0 ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ∗ ∗ ∗ ∗ ∗ −H HD ⎥ ⎢ ⎣ ⎦ ∗ ∗ ∗ ∗ ∗ ∗ −λI (5.35) where Φ11 , Φ12 , Φ13 , Φ14 , Φ22 , Φ23 , Φ24 , Φ33 , and H are defined in (5.31). 5.2.3 FWM Approach in Combination with Augmented Lyapunov-Krasovskii Functional At present, it is difficult to further reduce the conservativeness by using a general type of Lyapunov-Krasovskii functional.
110
5. Stability of Neutral Systems
This subsection describes an augmented Lyapunov-Krasovskii functional that takes the delay into account through augmentation of the terms of the general Lyapunov-Krasovskii functional. This functional in combination with the FWM approach yields an improved delay-dependent stability criterion for neutral system (5.22). It can be extended to systems with time-varying structured uncertainties and polytopic-type uncertainties although we do not do so here for brevity. An augmented Lyapunov-Krasovskii functional can be used for various types of time-delay systems; interested readers are referred to [27, 29, 30]. Theorem 5.2.5. Consider nominal system (5.22). Given a scalar h > 0, the system is asymptotically stable⎤if the operator Dxt is stable and there exist ⎡ ⎡ ⎤ L L L ⎢ 11 12 13 ⎥ Q Q 11 12 ⎥ ⎢ ⎦ matrices L = ⎢ ∗ L22 L23 ⎥ 0 (with L11 > 0), Q = ⎣ ⎦ ⎣ ∗ Q22 ∗ ∗ L33 ⎤ ⎡ Z11 Z12 ⎦ 0, and any appropriately dimensioned matrices 0, and Z = ⎣ ∗ Z22 Mi , i = 1, 2, 3 such that the following LMI holds: ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ Γ =⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
Γ11 Γ12 Γ13 ∗
Γ22 Γ23
Γ14 Γ24
∗
∗
Γ33
Γ34
∗
∗
∗
−hZ11
∗
∗
∗
∗
∗
∗
∗
∗
−hM1 AT S
⎤
⎥ ⎥ −hM2 AT S d ⎥ ⎥ ⎥ −hM3 C T S ⎥ ⎥ < 0, ⎥ −hZ12 0 ⎥ ⎥ ⎥ −hZ22 0 ⎥ ⎦ ∗ −S
where Γ11 Γ12 Γ13 Γ14 Γ22 Γ23 Γ24 Γ33
T = GA + AT GT + L13 + LT 13 + Q11 + hZ11 + M1 + M1 , T T T = GB + A L12 + L23 − L13 + M2 − M1 , = GC + L12 + M3T , = h(L33 + AT L13 ), T T T = AT d L12 + L12 Ad − L23 − L23 − Q11 − M2 − M2 , T = LT 12 C + L22 − Q12 − M3 , T = h(−L33 + Ad L13 ), = −Q22 ,
(5.36)
5.2 Neutral Systems with Identical Discrete and Neutral Delays
111
Γ34 = h(L23 + C T L13 ), S = Q22 + hZ22 , G = L11 + Q12 + hZ12 . Proof. Choose the Lyapunov-Krasovskii functional candidate to be t 0 t ζ2T (s)Qζ2 (s)ds + ζ2T (s)Zζ2 (s)dsdθ, V (xt ) = ζ1T (t)Lζ1 (t) + −h
t−h
t+θ
(5.37) ⎡
⎤
⎡ ⎤ L L L ⎢ 11 12 13 ⎥ Q11 Q12 ⎢ ⎥ ⎦ 0, where L = ⎢ ∗ L22 L23 ⎥ 0 (with L11 > 0), Q = ⎣ ⎣ ⎦ ∗ Q22 ∗ ∗ L33 ⎡ ⎤ ⎡ ⎤ x(t) ⎢ ⎥ Z11 Z12 ⎥ ⎦ 0 are to be determined; ζ1 (t) = ⎢ and Z = ⎣ ⎢ x(t − h) ⎥ ; and ⎣ ⎦ ∗ Z22 t x(s)ds t−h T T T ζ2 (t) = x (t), x˙ (t) . From the Newton-Leibnitz formula, the following equation is true for any appropriately dimensioned matrices Mi , i = 1, 2, 3: t x(s)ds−x(t−h) ˙ = 0. 2 xT (t)M1 + xT (t−h)M2 + x˙ T (t−h)M3 x(t)− t−h
(5.38) Calculating the derivative of V (xt ) along the solutions of system (5.22) and using (5.38) yield V˙ (xt ) = 2ζ1T (t)Lζ˙1 (t) + ζ2T (t)Qζ2 (t) − ζ2T (t − h)Qζ2 (t − h) t T +hζ2 (t)Zζ2 (t) − ζ2T (s)Zζ2 (s)ds t−h
T = 2ζ1T (t)L x˙ T (t) x˙ T (t − h) xT (t) − xT (t − h) + ζ2T (t)Qζ2 (t) t ζ2T (s)Zζ2 (s)ds −ζ2T (t − h)Qζ2 (t − h) + hζ2T (t)Zζ2 (t) − t−h +2 xT (t)M1 + xT (t − h)M2 + x˙ T (t − h)M3 t x(s)ds ˙ − x(t − h) × x(t) −
1 = h
t−h t
t−h
η1T (t, s)Γˆ η1 (t, s)ds,
112
5. Stability of Neutral Systems
where η1 (t, s) = [xT (t), xT (t − h), x˙ T (t − h), xT (s), x˙ T (s)]T , ⎡ Γ11 + AT SA Γ12 + AT SAd Γ13 + AT SC Γ14 ⎢ ⎢ T ⎢ ∗ Γ22 + AT Γ24 d SAd Γ23 + Ad SC ⎢ ⎢ ˆ T Γ =⎢ ∗ ∗ Γ33 + C SC Γ34 ⎢ ⎢ ⎢ ∗ ∗ ∗ −hZ11 ⎣ ∗ ∗ ∗ ∗ S = Q22 + hZ22 .
−hM1
⎤
⎥ ⎥ −hM2 ⎥ ⎥ ⎥ −hM3 ⎥ , ⎥ ⎥ −hZ12 ⎥ ⎦ −hZ22
From the Schur complement, we find that Γ < 0 is equivalent to Γˆ < 0, which means that V˙ (xt ) −εx(t)2 for a sufficiently small ε > 0. Therefore, nominal system (5.22) is asymptotically stable. This completes the proof.
Theorem 5.2.5 was established by using the Newton-Leibnitz formula and the FWMs Mi , i = 1, 2, 3 (see (5.38)). Below, we derive an alternative delaydependent criterion by retaining the term x(t) ˙ and employing another set of FWMs to express the relationships among the terms of system equation (5.22). Theorem 5.2.6. Consider nominal system (5.22). Given a scalar h > 0, the system is asymptotically stable⎤ if the operator Dxt is stable and there exist ⎡ ⎡ ⎤ L L L ⎢ 11 12 13 ⎥ Q Q 11 12 ⎥ ⎢ ⎦ 0, matrices L = ⎢ ∗ L22 L23 ⎥ 0 (with L11 > 0), Q = ⎣ ⎦ ⎣ ∗ Q22 ∗ ∗ L33 ⎤ ⎡ Z11 Z12 ⎦ 0, and any appropriately dimensioned matrices U, Mi , Z =⎣ ∗ Z22 i = 1, 2, 3, and Tj , j = 1, 2, · · · , 6 such that the following LMI holds: ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ Φ=⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
Φ11 Φ12 Φ13 Φ14 ∗
Φ22 Φ23 Φ24
Φ15 Φ25
∗
∗
∗
∗
∗
Φ44
Φ45
∗
∗
∗
∗
−hZ11
∗
∗
∗
∗
∗
Φ33 Φ34
Φ35
Φ16
⎤
⎥ ⎥ Φ26 ⎥ ⎥ ⎥ Φ36 ⎥ ⎥ < 0, ⎥ Φ46 ⎥ ⎥ ⎥ −hZ12 ⎥ ⎦ −hZ22
(5.39)
5.2 Neutral Systems with Identical Discrete and Neutral Delays
113
where T T T Φ11 = L13 + LT 13 + Q11 + hZ11 + M1 + M1 − T1 A − A T1 ,
Φ12 = L11 + Q12 + hZ12 + U T + T1 − AT T2T , T T T Φ13 = LT 23 − L13 + M2 − M1 − T1 Ad − A T3 ,
Φ14 = L12 + M3T − T1 C − AT T4T , Φ15 = hL33 − AT T5T ,
Φ16 = −hM1 − AT T6T , Φ22 = Q22 + hZ22 + T2 + T2T , Φ23 = L12 − U + T3T − T2 Ad , Φ24 = −T2 C + T4T , Φ25 = hL13 + T5T , Φ26 = −hU + T6T , T T T Φ33 = −L23 − LT 23 − Q11 − M2 − M2 − T3 Ad − Ad T3 ,
T Φ34 = L22 − Q12 − M3T − T3 C − AT d T4 , T Φ35 = −hL33 − AT d T5 , T Φ36 = −hM2 − AT d T6 ,
Φ44 = −Q22 − T4 C − C T T4T , Φ45 = hL23 − C T T5T , Φ46 = −hM3 − C T T6T . Proof. Choose the same Lyapunov-Krasovskii functional candidate as in (5.37). From system equation (5.22), we know that
t
2 t−h
η2T (t, s)T [x(t) ˙ − C x(t ˙ − h) − Ax(t) − Ad x(t − h)] ds = 0,
(5.40)
where T T = T1T T2T T3T T4T T5T T6T , η2 (t, s) = [xT (t), x˙ T (t), xT (t − h), x˙ T (t − h), xT (s), x˙ T (s)]T . On the other hand, x(t) ˙ in V˙ (xt ) is retained (Note that, in contrast to the proof of Theorem (5.2.5), x(t) ˙ is replaced with system equation (5.22)), and (5.38) is slightly modified to 2 xT (t)M1 + x˙ T (t)U + xT (t − h)M2 + x˙ T (t − h)M3 t x(s)ds ˙ − x(t − h) = 0. (5.41) × x(t) − t−h
114
5. Stability of Neutral Systems
If we follow a line similar to the proof of Theorem 5.2.5, but add (5.40) and (5.41) to V˙ (xt ), we get the desired result immediately. This completes the proof.
5.2.4 Numerical Examples The next two examples illustrate the effectiveness and advantages of the methods described above. Example 5.2.1. Consider the stability of nominal system (5.22) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ −1.1 −0.2 −0.2 0 −0.9 0.2 ⎦, C = ⎣ ⎦. ⎦ , Ad = ⎣ A=⎣ −0.1 −1.1 0.2 −0.1 0.1 −0.9 The allowable upper bound on the delay that guarantees the stability of the system is 0.3 in [7], 0.5658 in [8], and 0.74 in [12]. In contrast, solving LMIs (5.24) and (5.25) in Theorem 5.2.1 yields a maximum upper bound of h = 1.7855, which is about 451%, 192%, and 123% larger than the three values just mentioned. Example 5.2.2. Consider the robust stability of system (5.23) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ −2 0 −1 0 c 0 ⎦ , Ad = ⎣ ⎦, C = ⎣ ⎦ , 0 c < 1, A=⎣ 0 −0.9 −1 −1 0 c D = I, Ea = Ead = αI. [14] used a parameterized model transformation to solve this problem, but that method requires that coefficient matrix Ad be artificially decomposed. (Although Han [16] devised an effective way of decomposing it, that method is still conservative because it requires that three matrices be the same.) In contrast, if we use either the FWM approach or the FWM approach in combination with a parameter model transformation, solving LMIs gives us all the parameter matrices. Table 5.2 shows the maximum delay that ensures the asymptotic stability of nominal system (5.22) for α = 0. The method in this section produces significantly better results than those in Han [14] and Fridman & Shaked [19], especially when c is large. The results also show that a parameterized matrix transformation (Corollary 5.2.2) is almost equivalent to Theorem 5.2.3 but is conservative for c = 0; that is, the FWM approach in combination
5.2 Neutral Systems with Identical Discrete and Neutral Delays
115
Table 5.2. Allowable upper bound on h for α = 0 (Example 5.2.2) c
0
0.1
0.3
0.5
0.7
0.9
[19]
4.47
3.49
2.06
1.14
0.54
0.13
Corollary 5.2.1
4.47
3.65
2.32
1.31
0.57
0.10
[14]
4.35
4.33
4.10
3.62
2.73
0.99
Corollary 5.2.2
4.37
4.35
4.13
3.67
2.87
1.41
Theorem 5.2.3
4.47
4.35
4.13
3.67
2.87
1.41
Theorems 5.2.5 and 5.2.6
4.47
4.42
4.17
3.69
2.87
1.41
with a parameter model transformation (Theorem 5.2.3) is superior to a simple model transformation. Moreover, they also indicate that the FWM approach in combination with an augmented Lyapunov-Krasovskii functional (Theorems 5.2.5 and 5.2.6) yields the best results. Table 5.3 shows the maximum delay that ensures the robust stability of a system with time-varying structured uncertainties for α = 0.2 and various c. The results obtained with Theorem 5.2.4 are much better than those obtained by the method in Han [14]. Table 5.3. Allowable upper bound on h for α = 0.2 (Example 5.2.2) c
0
0.05
0.1
0.15
0.2
0.25
0.3
0.35
0.4
[14]
1.77
1.63
1.48
1.33
1.16
0.98
0.79
0.59
0.37
Theorem 5.2.4
2.43
2.33
2.24
2.14
2.03
1.91
1.78
1.65
1.50
Table 5.4 shows how the uncertainty bound, α, affects the upper bound on h for c = 0.1. Again, Theorem 5.2.4 produces better results than the method in Han [14]. Table 5.4. Allowable upper bound on h for c = 0.1 and various α (Example 5.2.2) α
0
0.05
0.1
0.15
0.2
0.25
[14]
4.33
3.61
2.90
2.19
1.48
0.77
Theorem 5.2.4
4.35
3.64
3.06
2.60
2.24
1.94
116
5. Stability of Neutral Systems
5.3 Neutral Systems with Different Discrete and Neutral Delays If the discrete delay, d(t), has a constant value of h, then system (5.1) turns into a system with different neutral and discrete delays: ⎧ ⎨ x(t) ˙ − C x(t ˙ − τ ) = Ax(t) + Ad x(t − h), t > 0, (5.42) ⎩ x(t) = φ(t), t ∈ [−r, 0]; and system (5.4) turns into ⎧ ⎨ x(t) ˙ − C x(t ˙ − τ ) = (A + ΔA(t))x(t) + (Ad + ΔAd (t))x(t − h), t > 0, ⎩ x(t) = φ(t), t ∈ [−r, 0], (5.43) where r = max{h, τ }. In addition, Dxt must be stable if systems (5.42) and (5.43) are to be stable [28]. 5.3.1 Nominal Systems First, we give a theorem for the nominal system that is based on the FWM approach. Theorem 5.3.1. Consider nominal system (5.42). Given scalars h 0 and τ 0, the system is asymptotically stable if the operator Dxt is stable and there exist matrices P > 0, Qi 0, i = 1, 2, R 0, Wi > 0, i = 1, 2, 3, Xjj 0, Yjj 0, and Zjj 0, j = 1, 2, · · · , 4, and any appropriately dimensioned matrices Ni , Si , Mi , Xij , Yij , and Zij , i = 1, 2, · · · , 4, i < j 4 such that the following LMIs hold: ⎡
Φ11 Φ12 Φ13 Φ14 AT H
⎢ ⎢ ⎢ ∗ ⎢ ⎢ Φ=⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
⎤
⎥ ⎥ H Φ22 Φ23 Φ24 AT ⎥ d ⎥ ⎥ ∗ Φ33 Φ34 0 ⎥ < 0, ⎥ ⎥ T ∗ ∗ Φ44 C H ⎥ ⎦ ∗
∗
∗
−H
(5.44)
5.3 Neutral Systems with Different Discrete and Neutral Delays
⎡
117
⎤ X11 X12 X13 X14 N1
⎢ ⎢ ⎢ ∗ ⎢ ⎢ Ψ1 = ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗ ⎡ Y11 ⎢ ⎢ ⎢ ∗ ⎢ ⎢ Ψ2 = ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗ ⎡ Z11 ⎢ ⎢ ⎢ ∗ ⎢ ⎢ Ψ3 = ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ X22 X23 X24 N2 ⎥ ⎥ ⎥ ∗ X33 X34 N3 ⎥ 0, ⎥ ⎥ ∗ ∗ X44 N4 ⎥ ⎦ ∗ ∗ ∗ W1 ⎤ Y12 Y13 Y14 S1 ⎥ ⎥ Y22 Y23 Y24 S2 ⎥ ⎥ ⎥ ∗ Y33 Y34 S3 ⎥ 0, ⎥ ⎥ ∗ ∗ Y44 S4 ⎥ ⎦ ∗ ∗ ∗ W2 ⎤ Z12 Z13 Z14 kM1 ⎥ ⎥ Z22 Z23 Z24 kM2 ⎥ ⎥ ⎥ ∗ Z33 Z34 kM3 ⎥ 0, ⎥ ⎥ ∗ ∗ Z44 kM4 ⎥ ⎦ ∗ ∗ ∗ W3
where
⎧ ⎨ 1, if h τ, k= ⎩−1, if h < τ,
and Φ11 = P A + AT P + Q1 + Q2 + N1 + N1T + S1 + S1T + Ξ11 , Φ12 = P Ad − N1 + N2T + S2T − M1 + Ξ12 , Φ13 = −AT P C + N3T + S3T − S1 + M1 + Ξ13 , Φ14 = N4T + S4T + Ξ14 , Φ22 = −Q1 − N2 − N2T − M2 − M2T + Ξ22 , T T Φ23 = −AT d P C − N3 − S2 + M2 − M3 + Ξ23 , Φ24 = −N4T − M4T + Ξ24 , Φ33 = −Q2 − S3 − S3T + M3 + M3T + Ξ33 , Φ34 = −S4T + M4T + Ξ34 , Φ44 = −R + Ξ44 , H = R + hW1 + τ W2 + |τ − h|W3 ,
(5.45)
(5.46)
(5.47)
118
5. Stability of Neutral Systems
Ξij = hXij + τ Yij + |τ − h|Zij , i = 1, 2, · · · , 4, i j 4. Proof. First, consider the case h τ . Choose the Lyapunov-Krasovskii functional candidate to be t t V (xt ) = (Dxt )T P (Dxt ) + xT (s)Q1 x(s)ds + xT (s)Q2 x(s)ds
t−h t
+
x˙ T (s)Rx(s)ds ˙ +
t−τ 0 t
+ −τ
0
−h
t−τ t
x˙ T (s)W2 x(s)dsdθ ˙ +
t+θ
x˙ T (s)W1 x(s)dsdθ ˙
t+θ
−τ
−h
t
x˙ T (s)W3 x(s)dsdθ, ˙
t+θ
where P > 0, Qi 0, i = 1, 2, R 0, and Wi > 0, i = 1, 2, 3 are to be determined. Calculating the derivative of V (xt ) along the solutions of system (5.42) yields V˙ (xt ) = 2(Dxt )T P [Ax(t)+Ad x(t−h)]+xT (t)Q1 x(t)−xT (t − h)Q1 x(t − h) +xT (t)Q2 x(t)−xT (t−τ )Q2 x(t−τ )+ x˙ T (t)Rx(t)− ˙ x˙ T (t−τ )Rx(t−τ ˙ ) t +hx˙ T (t)W1 x(t) ˙ − x˙ T (s)W1 x(s)ds ˙ ˙ − +τ x˙ T (t)W2 x(t)
t−h t
x˙ T (s)W2 x(s)ds ˙
t−τ
+(h − τ )x˙ T (t)W3 x(t) ˙ −
t−τ
x˙ T (s)W3 x(s)ds. ˙
t−h
From the Newton-Leibnitz formula, the following equations hold for any appropriately dimensioned matrices Ni , Si , and Mi , i = 1, 2, · · · , 4: 2 xT (t)N1 + xT (t − h)N2 + xT (t − τ )N3 + x˙ T (t − τ )N4 t x(s)ds ˙ = 0, (5.48) × x(t) − x(t − h) − t−h 2 xT (t)S1 + xT (t − h)S2 + xT (t − τ )S3 + x˙ T (t − τ )S4 t x(s)ds ˙ = 0, (5.49) × x(t) − x(t − τ ) − t−τ T 2 x (t)M1 + xT (t − h)M2 + xT (t − τ )M3 + x˙ T (t − τ )M4 t−τ x(s)ds ˙ = 0. (5.50) × x(t − τ ) − x(t − h) − t−h
On the other hand, the following is also true for any matrices Xjj 0, Yjj 0, and Zjj 0, j = 1, 2, · · · , 4, and any appropriately dimensioned matrices Xij , Yij , and Zij , i = 1, 2, · · · , 4, i < j 4:
5.3 Neutral Systems with Different Discrete and Neutral Delays
⎤T ⎡
⎡ x(t)
⎤
⎤⎡ x(t)
Λ11 Λ12 Λ13 Λ14
⎥ ⎢ ⎥ ⎢ ⎢ x(t − h) ⎥ ⎥ ⎢ ⎥ ⎢ ⎢ x(t − τ ) ⎥ ⎦ ⎣ x(t ˙ − τ)
⎢ ⎢ ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
119
⎥ ⎥⎢ ⎥ ⎥⎢ Λ22 Λ23 Λ24 ⎥ ⎢ x(t − h) ⎥ ⎥ = 0, ⎥⎢ ⎥ ⎥⎢ ∗ Λ33 Λ34 ⎥ ⎢ x(t − τ ) ⎥ ⎦ ⎦⎣ x(t ˙ − τ) ∗ ∗ Λ44
(5.51)
where Λij = h(Xij −Xij )+τ (Yij −Yij )+(h−τ )(Zij −Zij ), i = 1, 2, · · · , 4, i j 4. Now, adding the terms on the left sides of equations (5.48)-(5.51) to V˙ (xt ), we get t V˙ (xt ) = η1T (t)Ωη1 (t) − η2T (t, s)Ψ1 η2 (t, s)ds −
t−h
t
t−τ
η2T (t, s)Ψ2 η2 (t, s)ds −
t−τ
t−h
η2T (t, s)Ψ3 η2 (t, s)ds,
(5.52)
where η1 (t) = [xT (t), xT (t − h), xT (t − τ ), x˙ T (t − τ )]T , η2 (t, s) = [η1T (t), x˙ T (s)]T , ⎡ ⎤ Φ11 + AT HA Φ12 + AT HAd Φ13 Φ14 + AT HC ⎢ ⎥ ⎢ ⎥ T ⎢ ⎥ HA Φ Φ + A HC ∗ Φ22 + AT d 23 24 d d ⎢ ⎥. Ω=⎢ ⎥ ⎢ ⎥ ∗ ∗ Φ33 Φ34 ⎣ ⎦ T ∗ ∗ ∗ Φ44 + C HC If Ω < 0 and Ψi 0, i = 1, 2, 3, then V˙ (xt ) < −εx(t)2 for a sufficiently small scalar ε > 0. From the Schur complement, we find that Φ < 0 implies Ω < 0. Thus, system (5.42) is asymptotically stable if LMIs (5.44)-(5.47) are feasible. Next, consider the case h < τ . In this case, the candidate LyapunovKrasovskii functional is chosen to be t t V (xt ) = (Dxt )T P (Dxt ) + xT (s)Q1 x(s)ds + xT (s)Q2 x(s)ds t−h t−τ t 0 t T T x˙ (s)Rx(s)ds ˙ + x˙ (s)W1 x(s)dsdθ ˙ + −h t+θ t−τ 0 t −h t x˙ T (s)W2 x(s)dsdθ ˙ + x˙ T (s)W3 x(s)dsdθ; ˙ + −τ
t+θ
and (5.50) can be rewritten as
−τ
t+θ
120
5. Stability of Neutral Systems
2 xT (t)M1 + xT (t − h)M2 + xT (t − τ )M3 + x˙ T (t − τ )M4 × x(t − τ ) − x(t − h) +
t−h
x(s)ds ˙ = 0. (5.53) t−τ
Then, following the same procedure as for h τ yields the same conclusion. Note that, in this case, k in (5.47) is −1. This completes the proof.
5.3.2 Equivalence Analysis Now we consider the special case of identical delays (τ = h) in system (5.42). This turns system (5.42) into system (5.22), for which we have already used the FWM approach to obtain a delay-dependent stability criterion, namely Theorem 5.2.1. Theorem 5.3.1 should be equivalent to that theorem for τ = h. This point is discussed below. If the third row and third column of (5.44) are added to the second row and second column, respectively, (5.44) is equivalent to the following LMI: ⎡ ⎤ Φ11 Π12 Φ13 Φ14 AT H ⎢ ⎥ ⎢ ⎥ ⎢ ∗ Π22 Π23 Π24 AT ⎥ H d ⎢ ⎥ ⎢ ⎥ Π =⎢ ∗ (5.54) 0 ⎥ < 0, ∗ Φ33 Φ34 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ∗ ∗ Φ44 C T H ⎥ ⎣ ⎦ ∗ ∗ ∗ ∗ −H where Π12 = P Ad − AT P C + N2T + N3T − N1 + S2T + S3T − S1 + Ξ12 + Ξ13 , T T T Π22 = −C T P Ad − AT d P C − (Q1 + Q2 ) − N3 − N3 − S3 − S3 − N2 − N2 T −S2 − S2T + Ξ22 + Ξ23 + Ξ23 + Ξ33 , T T Π23 = −Q2 − AT P C − S − S + M 3 3 − N3 − S2 + M2 + Ξ23 + Ξ33 , 3 d T T Π24 = −N4 − S4 + Ξ24 + Ξ34 , and Φ11 , Φ13 , Φ14 , Φ33 , Φ34 , Φ44 , Ξ12 , Ξ13 , Ξ22 , Ξ23 , Ξ33 , Ξ24 , Ξ34 , and H are defined in (5.44). First, we show that, if LMIs (5.24) and (5.25) in Theorem 5.2.1 are feasible, then the solutions can be written as appropriate forms of the feasible solutions of LMIs (5.44)-(5.47). In fact, for the feasible solutions of ¯ LMIs (5.24) and (5.25) in Theorem 5.2.1, if we set P = P¯ , R = R, ¯ 1 , N2 = N ¯2 , N3 = 0, N4 = N ¯3 , 0 < Q2 < Q, ¯ Si = 0, i = 1, 2, · · · , 4, N1 = N T¯ T¯ ¯ Q1 = Q − Q2 , M1 = A P C, M2 = Ad P C + Q2 , M3 = 0, M4 = 0,
5.3 Neutral Systems with Different Discrete and Neutral Delays
121
¯ , W2 = 0, X11 = X ¯ 11 , X12 = X ¯ 12 , X13 = 0, X14 = X ¯ 13 , W1 = W ¯ 22 , X23 = 0, X24 = X ¯ 23 , X33 = 0, X34 = 0, X44 = X ¯ 33 , and X22 = X Yij = 0, i = 1, 2, · · · , 4, i j 4; and if we let Zij , i = 1, 2, · · · , 4, i j 4 and W3 be the feasible solutions of the following LMI for a given M1 and M2 : ⎤
⎡ Z11 Z12 Z13 Z14 M1
⎢ ⎢ ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ Z22 Z23 Z24 M2 ⎥ ⎥ ⎥ ∗ Z33 Z34 0 ⎥ 0, ⎥ ⎥ ∗ ∗ Z44 0 ⎥ ⎦ ∗ ∗ ∗ W3
then the above matrices must be the feasible solutions of LMIs (5.44)-(5.47). Therefore, Theorem 5.3.1 includes Theorem 5.2.1 for τ = h. Next, we show that, if LMIs (5.44)-(5.47) are feasible, then the solutions are feasible solutions of LMIs (5.24) and (5.25). That is, for the feasible ¯ = R, Q ¯ = Q1 + Q2 , W ¯ = solutions of LMIs (5.44)-(5.47), setting P¯ = P , R ¯ 1 = N 1 + S1 , N ¯ 2 = N 2 + N 3 + S2 + S3 , N ¯ 3 = N4 , X ¯ 11 = X11 + Y11 , W1 + W2 , N ¯ ¯ ¯ X12 = X12 + Y12 + X13 + Y13 , X13 = X14 + Y14 , X22 = X22 + Y22 + X23 + Y23 + T T ¯ 23 = X24 +Y24 +X34 +Y34 , and X ¯ 33 = X44 +Y44 yields X23 +Y23 +X33 +Y33 , X the feasible solutions of LMIs (5.24) and (5.25) in Theorem 5.2.1. Therefore, Theorem 5.2.1 includes Theorem 5.3.1 for τ = h. Thus, Theorems 5.3.1 and 5.2.1 are equivalent for the case τ = h. Remark 5.3.1. Better results can be obtained by using the FWM approach in combination with either a parameterized model transformation or an augmented Lyapunov-Krasovskii functional. For brevity, we do not give the details here. 5.3.3 Systems with Time-Varying Structured Uncertainties The next theorem extends Theorem 5.3.1 to a neutral system with timevarying structured uncertainties. Theorem 5.3.2. Consider neutral system (5.43). Given scalars τ 0 and h 0, the system is robustly stable if the operator Dxt is stable and there exist matrices P > 0, Qi 0, i = 1, 2, R 0, Wi > 0, i = 1, 2, 3, Xjj 0, Yjj 0, and Zjj 0, j = 1, 2, · · · , 4, any appropriately dimensioned matrices
122
5. Stability of Neutral Systems
Ni , Si , Mi , i = 1, 2, · · · , 4, Xij , Yij , and Zij , i = 1, 2, · · · , 4, i < j 4, and a scalar λ > 0 such that LMIs (5.45)-(5.47) and the following LMI hold: ⎡ ⎤ Φ11 + λEaT Ea Φ12 + λEaT Ead Φ13 Φ14 AT H PD ⎢ ⎥ ⎢ ⎥ T Ead Φ23 Φ24 AT H 0 ∗ Φ22 + λEad ⎢ ⎥ d ⎢ ⎥ ⎢ ⎥ ⎢ 0 −C T P D ⎥ ∗ ∗ Φ33 Φ34 ⎢ ⎥ < 0, (5.55) ⎢ ⎥ T ⎢ ⎥ ∗ ∗ ∗ Φ44 C H 0 ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ∗ ∗ ∗ ∗ −H HD ⎣ ⎦ ∗ ∗ ∗ ∗ ∗ −λI where Φij , i = 1, 2, · · · , 4, i j 4 and H are defined in (5.44). Since the criteria obtained in this section depend not only on the length of the discrete delay but also on that of the neutral delay, they are discretedelay- and neutral-delay-dependent conditions. 5.3.4 Numerical Example Example 5.3.1. Consider the robust stability of system (5.43) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ −2 0 −1 0 c 0 ⎦ , Ad = ⎣ ⎦, C = ⎣ ⎦ , 0 c < 1, A=⎣ 0 −0.9 −1 −1 0 c D = I, Ea = Ead = 0.2I. Table 5.5. Allowable upper bound on h for different c c
0
0.1
0.2
0.3
0.4
[14] (τ = h)
1.77
1.48
1.16
0.79
0.37
Theorem 5.2.2
2.39
1.86
1.42
1.06
0.76
Theorem 5.3.2 (τ = h)
2.39
1.86
1.42
1.06
0.76
Theorem 5.3.2 (τ = 0.1)
2.39
2.13
1.85
1.55
1.21
Table 5.5 compares Theorems 5.2.2 and 5.3.2 in this chapter with the method in [14]. The values in the table are the upper bound on the delay, h. Note that the values in the third and fourth rows are the same, which reflects the equivalence of Theorems 5.2.1 and 5.3.1 for τ = h. The results are better for τ = 0.1 than for τ = h. That shows that reducing the neutral delay, τ , increases the upper bound on the discrete delay, h, which means that discrete-delay- and neutral-delay-dependent criteria are very important.
References
123
5.4 Conclusion In this chapter, the FWM approach is first used to derive discrete-delaydependent and neutral-delay-independent stability criteria for neutral systems with a time-varying discrete-delay. Next, delay-dependent stability criteria for neutral systems with identical discrete and neutral delays are derived by the FWM approach and by that approach in combination with either a parameterized model transformation or an augmented Lyapunov-Krasovskii functional. Finally, the FWM approach is used to derive discrete-delay- and neutral-delay-dependent stability criteria for neutral systems with different discrete and neutral delays. Moreover, it is shown that, if we make the two delays the same in this criterion, the result is equivalent to the one obtained by using the FWM approach to directly handle identical discrete and neutral delays.
References 1. S. Hara, Y. Yamamoto, T. Omata, and M. Nakano. Repetitive control system: a new type servo system for periodic exogenous signals. IEEE Transactions on Automatic Control, 33(7): 659-668, 1988. 2. G. D. Hu and G. D. Hu. Some simple stability criteria of neutral delaydifferential systems. Applied Mathematics and Computation, 80(2-3): 257-271, 1996. 3. G. D. Hui and G. D. Hu. Simple criteria for stability of neutral systems with multiple delays. International Journal of Systems Science, 28(12): 1325-1328, 1997. 4. M. S. Mahmoud. Robust H∞ control of linear neutral systems. Automatica, 36(5): 757-764, 2000. 5. W. H. Chen and W. X. Zheng. Delay-dependent robust stabilization for uncertain neutral systems with distributed delays. Automatica, 43(1): 95-104, 2007. 6. C. H. Lien. New stability criterion for a class of uncertain nonlinear neutral time-delay systems. International Journal of Systems Science, 32(2): 215-219, 2001. 7. C. H. Lien, K. W. Yu, and J. G. Hsieh. Stability conditions for a class of neutral systems with multiple delays. Journal of Mathematical Analysis and Applications, 245(1): 20-27, 2000. 8. J. D. Chen, C. H. Lien, K. K. Fan, and J. H. Chou. Criteria for asymptotic stability of a class of neutral systems via an LMI approach. IEE Proceedings– Control Theory and Application, 148(6): 442-447, 2001.
124
5. Stability of Neutral Systems
9. J. H. Park. A new delay-dependent criterion for neutral systems with multiple delays. Journal of Computational and Applied Mathematics, 136(1-2): 177-184, 2001. 10. S. I. Niculescu. On delay-dependent stability under model transformations of some neutral linear systems. International Journal of Control, 74(6): 608-617, 2001. 11. S. I. Niculescu. Optimizing model transformations in delay-dependent analysis of neutral systems: A control-based approach. Nonlinear Analysis, 47(8): 53785390, 2001. 12. E. Fridman. New Lyapunov-Krasovskii functionals for stability of linear retarded and neutral type systems. Systems & Control Letters, 43(4): 309-319, 2001. 13. Q. L. Han. On delay-dependent stability for neutral delay-differential systems. International Journal of Applied Mathematics and Computer Science, 11(4): 965-976, 2001. 14. Q. L. Han. Robust stability of uncertain delay-differential systems of neutral type. Automatica, 38(4): 718-723, 2002. 15. J. H. Park. Stability criterion for neutral differential systems with mixed multiple time-varying delay arguments. Mathematics and Computers in Simulation, 59(5): 401-412, 2002. 16. Q. L. Han. Stability criteria for a class of linear neutral systems with timevarying discrete and distributed delays. IMA Journal of Mathematical Control and Information, 20(4): 371-386, 2003. 17. D. Iv˘ anescu, S. I. Niculescu, L. Dugard, J. M. Dionc, and E. I. Verriestd. On delay-dependent stability for linear neutral systems. Automatica, 39(2): 255261, 2003. 18. E. Fridman and U. Shaked. A descriptor system approach to H∞ control of linear time-delay systems. IEEE Transactions on Automatic Control, 47(2): 253-270, 2002. 19. E. Fridman and U. Shaked. Delay-dependent stability and H∞ control: constant and time-varying delays. International Journal of Control, 76(1): 48-60, 2003. 20. Q. L. Han. Robust stability for a class of linear systems with time-varying delay and nonlinear perturbations. Computers & Mathematics with Applications, 47(8-9): 1201-1209, 2004. 21. Q. L. Han and L. Yu. Robust stability of linear neutral systems with nonlinear parameter perturbations. IEE Proceedings–Control Theory & Applications, 151(5): 539-546, 2004. 22. Q. L. Han. A descriptor system approach to robust stability of uncertain neutral systems with discrete and distributed delays. Automatica, 40(10): 1791-1796, 2004. 23. Y. He, M. Wu, J. H. She, and G. P. Liu. Delay-dependent robust stability criteria for uncertain neutral systems with mixed delays. Systems & Control Letters, 51(1): 57-65, 2004.
References
125
24. Y. He, M. Wu, and J. H. She. Delay-dependent robust stability and stabilization of uncertain neutral systems. Asian Journal of Control, 10(3): 376-383, 2008. 25. Y. He and M. Wu. Delay-dependent robust stability for neutral systems with mixed discrete- and neutral-delays. Journal of Control Theory and Applications, 2(4): 386-392, 2004. 26. M. Wu, Y. He, and J. H. She. New delay-dependent stability criteria and stabilizing method for neutral systems. IEEE Transactions on Automatic Control, 49(12): 2266-2271, 2004. 27. Y. He, Q. G. Wang, C. Lin, and M. Wu. Augmented Lyapunov functional and delay-dependent stability criteria for neutral systems. International Journal of Robust and Nonlinear Control, 15(18): 923-933, 2005. 28. J. K. Hale and S. M. Verduyn Lunel. Introduction to Functional Differential Equations. New York: Springer-Verlag, 1993. 29. Y. He, Q. G. Wang, L. Xie, and C. Lin. Further improvement of free-weighting matrices technique for systems with time-varying delay. IEEE Transactions on Automatic Control, 52(2): 293-299, 2007. 30. Y. He, G. P. Liu, and D. Rees. Augmented Lyapunov functional for the calculation of stability interval for time-varying delay. IET Proceedings: Control Theory & Applications, 1(1): 381-386, 2007.
6. Stabilization of Systems with Time-Varying Delay
At present, there is no effective controller synthesis algorithm for solving delay-dependent stabilization problems, even for the simple situation of statefeedback; for output feedback, the problem is even more difficult. It is possible to use model transformations I and II to derive an LMI-based controller synthesis algorithm. However, as mentioned in [1,2], they add eigenvalues to the system, with the result that the transformed system is not equivalent to the original one. Thus, they have been abandoned in favor of model transformations III and IV, for which an NLMI is used to design a controller in synthesis problems. Two methods are available to solve the NLMI. One is an iterative algorithm devised by Moon et al. [3]. It has two steps: The original nonconvex problem is first reduced to an LMI-based nonlinear minimization problem; and then the CCL algorithm is used to obtain a suboptimal solution. [3] used this method to deal with a robust stabilization problem, and [4, 5] used it for an H∞ control problem. The controller obtained by this method has a small gain and is easy to implement, but the drawback is that the solution is suboptimal. There is still room for further investigation of the CCL algorithm in [3–5]. For instance, the iteration stop condition is very strict; and the gain matrix and some derived Lyapunov matrices must satisfy one or more matrix inequalities. However, once the gain matrix is derived, the delay-dependent stabilization conditions reduce to LMI ones, which means that the iteration can actually be stopped when the LMIs for that gain matrix are feasible. Moreover, some Lyapunov matrices can be used as decision variables rather than as fixed matrices. The other method of solving the NLMI is a parameter-tuning method often used by Fridman et al. [6–9]. It transforms the NLMI into an LMI by setting one or more undetermined matrices in the NLMI to a specific form with a scalar parameter, and then tunes the parameter to obtain a controller. This method also produces a suboptimal solution, and the parameter needs
128
6. Stabilization of Systems with Time-Varying Delay
to be continuously tuned based on experience. Although these two methods yield only suboptimal solutions, they are still the most effective methods now available. This chapter first explains how the two methods just mentioned can be used to extend the stability theorems in Chapter 3 to delay-dependent stabilization design. Then, an ICCL algorithm with a better stop condition is presented that gives a suboptimal solution when an iterative algorithm is used. The theorems obtained with model transformations III and IV are special cases of the ones derived by the FWM approach. Thus, a stabilization design method based on FWMs is less conservative than other methods [10,11]. Furthermore, an LMI-based controller synthesis algorithm based on delay-dependent and rate-independent stabilization is derived that has no parameter tuning or iterative processing.
6.1 Problem Formulation Consider the following nominal linear system with a time-varying delay: ⎧ ⎨ x(t) ˙ = Ax(t) + Ad x(t − d(t)) + Bu(t), t > 0, (6.1) ⎩ x(t) = φ(t), t ∈ [−h, 0], where x(t) ∈ Rn is the state vector; u(t) ∈ Rm is the control input; A, Ad , and B are constant matrices with appropriate dimensions; the delay, d(t), is a time-varying continuous function; and the initial condition, φ(t), is a continuously differentiable initial function of t ∈ [−h, 0]. The delay is assumed to satisfy one or both of the following conditions: 0 d(t) h,
(6.2)
˙ μ, d(t)
(6.3)
where h and μ are constants. Our objective in this chapter is to design a memoryless state-feedback controller with the following form to stabilize system (6.1): u(t) = Kx(t),
(6.4)
where K ∈ Rm×n is a constant gain matrix. Then, we extend the results for the nominal system to a system with time-varying structured uncertainties:
6.2 Iterative Nonlinear Minimization Algorithm
129
⎧ ⎪ ⎪ x(t) ˙ = (A + ΔA(t))x(t) + (Ad + ΔAd (t))x(t − d(t)) ⎪ ⎨ +(B + ΔB(t))u(t), t > 0, ⎪ ⎪ ⎪ ⎩ x(t) = φ(t), t ∈ [−h, 0].
(6.5)
The uncertainties are assumed to be of the form [ΔA(t) ΔAd (t) ΔB(t)] = DF (t) [Ea Ead Eb ] ,
(6.6)
where D, Ea , Ead , and Eb are constant matrices with appropriate dimensions; and F (t) is an unknown, real, and possibly time-varying matrix with Lebesgue measurable elements satisfying F T (t)F (t) I, ∀t.
(6.7)
6.2 Iterative Nonlinear Minimization Algorithm This section explains how to use an iterative nonlinear minimization algorithm to obtain the controller gain from NLMIs. This involves reducing the original nonconvex problem to an LMI-based nonlinear minimization problem and using the CCL algorithm to obtain a suboptimal solution. Moreover, an ICCL algorithm with a better iteration stop condition is presented that leads to less conservativeness. First, we give a theorem that follows from Theorem 3.2.1. Theorem 6.2.1. Consider nominal system (6.1) with a delay, d(t), that satisfies both (6.2) and (6.3). For given scalars h >⎤0 and μ, if there exist matri⎡ ces L > 0, W 0, R > 0, and Y = ⎣
Y11 Y12
⎦ 0, and any appropriately ∗ Y22 dimensioned matrices M1 , M2 , and V such that the following matrix inequalities hold: ⎡ ⎤ Π11 Π12 h(LAT + V T B T ) ⎢ ⎥ ⎢ ⎥ (6.8) ⎢ ∗ Π22 ⎥ < 0, hLAT d ⎣ ⎦ ∗ ∗ −hR ⎡
⎤ Y11 Y12
M1
Y22
M2
∗
LR−1 L
⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ ⎥ 0, ⎦
(6.9)
130
6. Stabilization of Systems with Time-Varying Delay
where Π11 = LAT + AL + BV + V T B T + M1 + M1T + W + hY11 , Π12 = Ad L − M1 + M2T + hY12 , Π22 = −M2 − M2T − (1 − μ)W + hY22 , then the system can be stabilized by control law (6.4), and the controller gain is K = V L−1 . Proof. Applying memoryless state-feedback controller (6.4) to closedloop system (6.1) yields x(t) ˙ = (A + BK)x(t) + Ad x(t − d(t)).
(6.10)
Now, we replace the A in Theorem 3.2.1 with A + BK, pre- and postmultiply (3.9) by diag {P −1 , P −1 , Z −1 }, pre- and post-multiply (3.10) by diag {P −1 , P −1 , P −1 }, and make the following assignments: L = P −1 , W = P −1 QP −1 , Y = diag {P −1 , P −1 } · X · diag {P −1 , P −1 }, R = Z −1 , M1 = P −1 N1 P −1 , M2 = P −1 N2 P −1 , V = KP −1 . These operations result in (6.8) and (6.9). This completes the proof.
Since the conditions in Theorem 6.2.1 are no longer LMIs because of the term LR−1 L in (6.9), we cannot use a convex optimization algorithm to find an appropriate controller gain. However, as mentioned in [3], we can use the method in [12], which involves solving a cone complementarity problem. First, we convert the problem into a nonlinear minimization problem. Define a new variable, S, for which LR−1 L S; and replace (6.9) with ⎡ ⎤ Y11 Y12 M1 ⎢ ⎥ ⎢ ⎥ (6.11) ⎢ ∗ Y22 M2 ⎥ 0 ⎣ ⎦ ∗ ∗ S and LR−1 L S.
(6.12)
Inequality (6.12) is equivalent to L−1 RL−1 S −1 , which the Schur complement allows us to write as ⎡ ⎤ S −1 L−1 ⎣ ⎦ 0. (6.13) ∗ R−1
6.2 Iterative Nonlinear Minimization Algorithm
131
We introduce the new variables J, U , and H so that we can write the original condition (6.9) as (6.11) and ⎡ ⎤ U J ⎣ ⎦ 0, J = L−1 , U = S −1 , H = R−1 . (6.14) ∗ H Thus, the problem is converted into the following LMI-based nonlinear minimization problem: Minimize
Tr{LJ + SU + RH}
subject to (6.8) and ⎧ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ L > 0, S > 0, ⎪ ⎪ ⎨
⎡ ⎤ ⎡ ⎤ Y11 Y12 M1 ⎢ ⎥ U J ⎢ ⎥ ⎦ 0, ⎢ ∗ Y22 M2 ⎥ 0, ⎣ ⎣ ⎦ ∗ H ∗ ∗ S ⎪ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎪ ⎪ ⎪ ⎪ L I S I R I ⎪ ⎪ ⎣ ⎦ 0, ⎣ ⎦ 0, ⎣ ⎦ 0. ⎪ ⎪ ⎩ ∗ J ∗ U ∗ H (6.15)
If the solution to this problem is 3n, that is, if Tr{LJ + SU + RH} = 3n, then from Theorem 6.2.1 closed-loop system (6.10) is asymptotically stable. Although it is still impossible to always find the global optimal solution, this nonlinear minimization problem is easier to solve than the original non-convex feasibility problem. Actually, we can easily find a suboptimal maximal delay by using the linearization method in [12] and the following CCL algorithm. Algorithm 6.2.1 To maximize h: Step 1: Choose a sufficiently small initial h > 0 such that there exists a feasible solution to (6.8) and (6.15). Set hmax = h. Step 2: Find a feasible set (L0 , J0 , W0 , R0 , H0 , Y0 , M10 , M20 , V0 , S0 , U0 ) satisfying (6.8) and (6.15). Set k = 0. Step 3: Solve the following LMI problem for the variables L, J, W, R, H, Y, M1 , M2 , V, S, and U : Minimize Tr{LJk + Lk J + SUk + Sk U + RHk + Rk H} subject to (6.8) and (6.15). Set Lk+1 = L, Jk+1 = J, Sk+1 = S, Uk+1 = U, Rk+1 = R, and Hk+1 = H.
132
6. Stabilization of Systems with Time-Varying Delay
Step 4: If (6.9) is satisfied, then set hmax = h, increase h, and return to Step 2.If it is not satisfied within a specified number of iterations, then exit. Otherwise, set k = k + 1 and go to Step 3. This algorithm can find a suboptimal maximum h for which the controller u(t) = V L−1 x(t) stabilizes system (6.1). However, the iteration stop condition is very strict. In addition, the gain matrix and some derived Lyapunov matrices must satisfy one or more matrix inequalities, which makes the iteration process very long. So, there is still room for investigation to reduce the number of iterations. In fact, [13] presents an ICCL algorithm with a better stop condition that does just that. To make the description of this algorithm easier, we first give a corollary obtained by replacing the A in Theorem 3.2.1 with A + BK. Corollary 6.2.1. Consider nominal system (6.1) with a delay, d(t), that satisfies both (6.2) and (6.3). For given ⎡scalars h >⎤0 and μ, if there exist matrices P > 0, Q 0, Z > 0, and X = ⎣
X11 X12
⎦ 0, and any appropriately ∗ X22 dimensioned matrices N1 and N2 such that the following matrix inequalities hold: ⎡ ⎤ Φ11 Φ12 hAT Z k ⎢ ⎥ ⎢ ⎥ (6.16) Φ = ⎢ ∗ Φ22 hAT ⎥ < 0, Z d ⎣ ⎦ ∗ ∗ −hZ ⎡
⎤ X11 X12 N1
⎢ ⎢ Ψ =⎢ ∗ ⎣ ∗
⎥ ⎥ X22 N2 ⎥ 0, ⎦ ∗ Z
(6.17)
where T Φ11 = P Ak + AT k P + N1 + N1 + Q + hX11 , T Φ12 = P Ad − N1 + N2 + hX12 , Φ22 = −N2 − N2T − (1 − μ)Q + hX22 , Ak = A + BK,
then the system can be stabilized by control law (6.4). Clearly, once the controller gain, K, is derived, the conditions in this corollary become LMI conditions; and the iteration stop condition can be
6.2 Iterative Nonlinear Minimization Algorithm
133
modified to include a determination of whether or not LMIs (6.16) and (6.17) are feasible for decision variables P , Q, Z, X, N1 , and N2 . Algorithm 6.2.2 To maximize h: Step 1: Choose a sufficiently small initial h > 0 such that there exists a feasible solution to (6.8) and (6.15). Set hmax = h. Step 2: Find a feasible set (L0 , J0 , W0 , R0 , H0 , Y0 , M10 , M20 , V0 , S0 , U0 ) satisfying (6.8) and (6.15). Set k = 0. Step 3: Solve the following LMI problem for the variables L, J, W, R, H, Y, M1 , M2 , V, S, U, and K: Minimize Tr{LJk + Lk J + SUk + Sk U + RHk + Rk H} subject to (6.8) and (6.15). Set Lk+1 = L, Jk+1 = J, Sk+1 = S, Uk+1 = U, Rk+1 = R, and Hk+1 = H. Step 4: For the K obtained in Step 3, if LMIs (6.16) and (6.17) are feasible for the variables P, Q, Z, X, N1 , and N2 , then set hmax = h, increase h, and return to Step 2. If they are not satisfied within a specified number of iterations, then exit. Otherwise, set k = k + 1 and go to Step 3. Remark 6.2.1. Note that the stop condition at the beginning of Step 4 is different from the one in Algorithm 6.2.1. If we follow the idea in Algorithm 6.2.1, the stop condition could be that matrix inequality (6.9) holds, which means that matrix inequalities (6.8) and (6.9) should be true for given L, W , R, Y , M1 , M2 , and V . However, once the gain matrix, K, is obtained, the stop condition reduces to the question of the feasibility of LMIs (6.16) and (6.17) for the decision variables P , Q, Z, X, N1 , and N2 rather than the fixed matrices L, W , R, Y , M1 , and M2 . So, in Algorithm 6.2.2, the stop condition is modified to include a determination of whether or not LMIs (6.16) and (6.17) are feasible for the specified K, which provides more freedom in the selection of variables, such as P , Q, Z, X, N1 , and N2 . Now, we present a theorem derived from Theorem 3.2.2. Theorem 6.2.2. Consider nominal system (6.1) with a delay, d(t), that satisfies both (6.2) and (6.3). For given scalars h > 0 and μ, if there exist ma⎡ ⎤ Y11 Y12 Y13 ⎢ ⎥ ⎢ ⎥ trices L > 0, W 0, R > 0, and Y = ⎢ ∗ Y22 Y23 ⎥ 0, and any ⎣ ⎦ ∗ ∗ Y33
134
6. Stabilization of Systems with Time-Varying Delay
appropriately dimensioned matrices Mi , i = 1, 2, 3, Sj , j = 1, 2, and V such that the following matrix inequalities hold: ⎡ ⎤ Λ11 Λ12 Λ13 hS1T ⎢ ⎥ ⎢ ⎥ ⎢ ∗ Λ22 Λ23 hS2T ⎥ ⎢ ⎥ < 0, (6.18) Λ=⎢ ⎥ ⎢ ∗ ∗ Λ33 0 ⎥ ⎣ ⎦ ∗ ∗ ∗ −hR ⎡
⎤ Y11 Y12 Y13
M1
Y22 Y23
M2
⎢ ⎢ ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
∗
Y33
M3
∗
∗
LR−1 L
⎥ ⎥ ⎥ ⎥ 0, ⎥ ⎥ ⎦
(6.19)
where Λ11 Λ12 Λ13 Λ22 Λ23 Λ33
= W + M1 + M1T + S1 + S1T + hY11 , = (AL + BV )T − S1T + S2 + M2T + hY12 , = M3T − M1 + hY13 , = −S2 − S2T + hY22 , = Ad L − M2 + hY23 , = −(1 − μ)W − M3 − M3T + hY33 ,
then the system can be stabilized by control law (6.4), and the controller gain is K = V L−1 . Proof. Replace the A in (3.15) with A + BK. The term Γ22 in Theorem 3.2.2 is negative definite, which implies that T2 + T2T is also negative definite. Thus, T2 is nonsingular. Define ⎡ ⎤ ⎤ ⎡ L 0 P 0 ˜ =⎣ ⎦ , H −1 = H ⎦. H =⎣ S1 S 2 −T1T −T2T ˜ L}, re˜ T , L} and diag {H, Pre- and post-multiply Γ in (3.15) by diag {H ˜ T , L, L} and spectively; pre- and post-multiply Θ in (3.16) by diag {H ˜ L, L}, respectively; and set diag {H, ˜ T , L} · X · diag {H, ˜ L}, Y = diag {H W = LQL, R = Z −1 , V = KL, T T ˜ T , L} · N T N T N T = diag {H · L. M1T M2T M3T 1 2 3
6.2 Iterative Nonlinear Minimization Algorithm
135
Then, we can use the Schur complement to write (3.15) and (3.16) as (6.18) and (6.19), respectively. This completes the proof.
The next two theorems extend Theorems 6.2.1 and 6.2.2 to system (6.5), which has time-varying structured uncertainties. Theorem 6.2.3. Consider system (6.5) with a delay, d(t), that satisfies both (6.2) and (6.3). For given scalars h >⎤0 and μ, if there exist matrices L > 0, ⎡ W 0, R > 0, and Y = ⎣
Y11 Y12
⎦ 0, any appropriately dimensioned
∗ Y22 matrices M1 , M2 , and V, and a scalar λ > 0 such that matrix inequality (6.9) and the following matrix inequality hold: ⎡ ⎤ Π11 +λDDT Π12 h(LAT +V T B T +λDDT ) (Ea L+Eb V )T ⎢ ⎥ ⎢ ⎥ T ⎢ ⎥ ∗ Π22 hLAT LE d ad ⎢ ⎥ < 0, (6.20) ⎢ ⎥ 2 T ⎢ ⎥ ∗ ∗ −hR + λh DD 0 ⎣ ⎦ ∗ ∗ ∗ −λI where Π11 , Π12 , and Π22 are defined in (6.8), then the system can be stabilized by control law (6.4), and the controller gain is K = V L−1 .
Theorem 6.2.4. Consider system (6.5) with a delay, d(t), that satisfies both (6.2) and (6.3). For given scalars h > 0 and μ, if there exist matrices L > 0, ⎡ ⎤ Y11 Y12 Y13 ⎢ ⎥ ⎢ ⎥ W 0, R > 0, and Y = ⎢ ∗ Y22 Y23 ⎥ 0, any appropriately dimen⎣ ⎦ ∗ ∗ Y33 sioned matrices Mi , i = 1, 2, Sj , j = 1, 2, 3, and V, and a scalar λ > 0 such that matrix inequality (6.19) and the following matrix inequality hold: ⎡ Λ11
⎢ ⎢ ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
Λ12
Λ13 hS1T (Ea L + Eb V )T
Λ22 + λDDT Λ23 hS2T
0
∗
Λ33
0
T LEad
∗
∗
−hR
0
∗
∗
∗
−λI
⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ < 0, ⎥ ⎥ ⎥ ⎦
(6.21)
where Λij , i = 1, 2, 3, i j 3 are defined in (6.18), then the system can be stabilized by control law (6.4), and the controller gain is K = V L−1 .
136
6. Stabilization of Systems with Time-Varying Delay
The conditions in Theorems 6.2.2-6.2.4 are no longer LMI conditions. Just as for Theorem 6.2.1, this problem can be transformed into an LMI-based nonlinear minimization problem; and the CCL or ICCL algorithm can be used to obtain a suboptimal maximum value for the bound h. Chapter 3 showed that the stability theorems derived using Moon et al.’s inequalities or a descriptor model transformation are special cases of the ones in Chapter 3. So, we can conclude that Theorems 6.2.1-6.2.4, which are stabilization design methods derived from the stability theorems in Chapter 3, are more effective than other methods. The controller thus obtained has a small gain and is easy to implement; but the iterative algorithm takes a long time to finish.
6.3 Parameter-Tuning Method This section explains a parameter-tuning method that uses an NLMI to obtain the controller gain. If we put some of the matrices in the NLMI into a special form, such as the product of a scalar and a matrix, then we can transform the NLMI in Section 6.2 into an LMI with only one scalar that needs to be tuned. For example, by writing matrix R in Theorem 6.2.1 as R = εL, we obtain a new corollary. Corollary 6.3.1. Consider nominal system (6.1) with a delay, d(t), that satisfies both (6.2) and (6.3). For given scalars ⎡ h > 0 ⎤and μ, if there exist matrices L > 0, W 0, R > 0, and Y = ⎣
Y11 Y12
⎦ 0, any appropriately ∗ Y22 dimensioned matrices M1 , M2 , and V, and a scalar ε > 0 such that the following matrix inequalities hold: ⎡ ⎤ Π11 Π12 h(LAT + V T B T ) ⎢ ⎥ ⎢ ⎥ (6.22) ⎢ ∗ Π22 ⎥ < 0, hLAT d ⎣ ⎦ ∗ ∗ −hεL ⎡
⎤ Y11 Y12
M1
Y22
M2
∗
ε−1 L
⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ ⎥ 0, ⎦
(6.23)
6.3 Parameter-Tuning Method
137
where Π11 , Π12 , and Π22 are defined in (6.8), then the system can be stabilized by control law (6.4), and the controller gain is K = V L−1 . A similar treatment produces three corollaries from Theorems 6.2.2-6.2.4. Corollary 6.3.2. Consider nominal system (6.1) with a delay, d(t), that satisfies both (6.2) and (6.3). For given scalars h > 0 and ⎤ μ, if there exist matri⎡ Y Y Y ⎢ 11 12 13 ⎥ ⎥ ⎢ ces L > 0, W 0, R > 0, and Y = ⎢ ∗ Y22 Y23 ⎥ 0, any appropriately ⎦ ⎣ ∗ ∗ Y33 dimensioned matrices Mi , i = 1, 2, 3, Sj , j = 1, 2, and V, and a scalar ε > 0 such that the following matrix inequalities hold: ⎡ ⎤ Λ11 Λ12 Λ13 hS1T ⎢ ⎥ ⎢ ⎥ ⎢ ∗ Λ22 Λ23 hS2T ⎥ ⎢ ⎥ < 0, (6.24) ⎢ ⎥ ⎢ ∗ ∗ Λ33 0 ⎥ ⎣ ⎦ ∗ ∗ ∗ −hεL ⎡
⎤ Y11 Y12 Y13
⎢ ⎢ ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
Y22 Y23 ∗
Y33
∗
∗
M1
⎥ ⎥ M2 ⎥ ⎥ 0, ⎥ M3 ⎥ ⎦ ε−1 L
(6.25)
where Λij , i = 1, 2, 3, i j 3 are defined in (6.18), then the system can be stabilized by control law (6.4), and the controller gain is K = V L−1 . Corollary 6.3.3. Consider system (6.5) with a delay, d(t), that satisfies both (6.2) and (6.3). For given scalars h >⎤0 and μ, if there exist matrices L > 0, ⎡ W 0, R > 0, and Y = ⎣
Y11 Y12
⎦ 0, any appropriately dimensioned ∗ Y22 matrices M1 , M2 , and V, and a scalar ε > 0 such that matrix inequality (6.23) and the following matrix inequality hold: ⎡ ⎤ Π11 +λDDT Π12 h(LAT +V T B T +λDDT ) (Ea L+Eb V )T ⎢ ⎥ ⎢ ⎥ T ⎢ ⎥ ∗ Π22 hLAT LEad d ⎢ ⎥ < 0, (6.26) ⎢ ⎥ 2 T ⎢ ⎥ ∗ ∗ −hεL + λh DD 0 ⎣ ⎦ ∗ ∗ ∗ −λI
138
6. Stabilization of Systems with Time-Varying Delay
where Π11 , Π12 , and Π22 are defined in (6.8), then the system can be stabilized by control law (6.4), and the controller gain is K = V L−1 . Corollary 6.3.4. Consider system (6.5) with a delay, d(t), that satisfies both (6.2) and (6.3). For given scalars h > 0 and ⎡ ⎤ μ, if there exist matrices L > 0, Y Y Y ⎢ 11 12 13 ⎥ ⎢ ⎥ W 0, R > 0, and Y = ⎢ ∗ Y22 Y23 ⎥ 0, any appropriately dimen⎣ ⎦ ∗ ∗ Y33 sioned matrices Mi , i = 1, 2, 3, Sj , j = 1, 2, and V, and a scalar ε > 0 such that LMI (6.25) and the following matrix inequality hold: ⎡ Λ11
Λ12
⎢ ⎢ ⎢ ∗ Λ22 + λDDT ⎢ ⎢ ⎢ ∗ ∗ ⎢ ⎢ ⎢ ∗ ∗ ⎣ ∗ ∗
Λ13 hS1T (Ea L + Eb V )T Λ23 hS2T Λ33
0
∗ −hεL ∗
∗
0 T LEad
0
⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ < 0, ⎥ ⎥ ⎥ ⎦
(6.27)
−λI
where Λij , i = 1, 2, 3, i j 3 are defined in (6.18), then the system can be stabilized by control law (6.4), and the controller gain is K = V L−1 . These four corollaries prevent an excessive amount of time from being spent in a nonlinear iterative algorithm. However, the parameter ε needs to be tuned and the matrix R must be put into a special form. So, the solution is still suboptimal. Fridman et al. has often used this method [6–9], and the theorems thus derived are special cases of the ones in this chapter. In other words, our theorems are more general and better than those, as is demonstrated by the numerical example in Section 6.5.
6.4 Completely LMI-Based Design Method Sections 6.2 and 6.3 presented the results of the best attempts so far at developing a method of solving delay-dependent robust stabilization problems; they are not general LMI-based methods. Even though it is possible to use an iterative nonlinear minimization algorithm or a parameter-tuning method to obtain a suboptimal solution, the iterations and the parameter tuning both take a long time. This section presents a completely LMI-based design
6.4 Completely LMI-Based Design Method
139
method, in which the controller is obtained by directly solving LMI-based stabilization conditions without any parameter tuning or iterative process, for a special case that is based on the delay-dependent and rate-independent stability conditions in Chapter 3. A simple transformation converts Corollary 3.2.2 to the following one. Corollary 6.4.1. Consider nominal system (6.1) with a delay, d(t), that satisfies (6.2). Given a scalar h > 0, the system is asymptotically stable⎤ when ⎡ u(t) = 0 if there exist matrices P > 0, Z > 0, and X = ⎣
X11 X12
⎦ 0, ∗ X22 and any appropriately dimensioned matrices N1 and N2 such that matrix inequality (3.10) and the following matrix inequality hold: ⎡ ⎤ ¯12 hAT Φ¯11 Φ ⎢ ⎥ ⎢ ¯22 hAT ⎥ < 0, (6.28) Φ¯ = ⎢ ∗ Φ d ⎥ ⎣ ⎦ ∗ ∗ −hZ −1 where Φ¯11 = P A + AT P + N1 + N1T + hX11 , Φ¯12 = P Ad − N1 + N2T + hX12 , Φ¯22 = −N2 − N2T + hX22 . From this corollary, we obtain an LMI-based delay-dependent and rateindependent stabilization condition. Theorem 6.4.1. Consider nominal system (6.1) with a delay, d(t), that satisfies (6.2). ⎡ For a given ⎤ scalar h > 0, if there exist matrices L > 0, R > 0, Y11 Y12 ⎦ 0, and any appropriately dimensioned matrices M1 , and Y = ⎣ ∗ Y22 M2 , and V such that the following LMIs hold: ⎡ ⎤ S + S T + hY11 Ad M2 + L − M1T + hY12 hS T ⎢ ⎥ ⎢ ⎥ < 0, (6.29) Π =⎢ ∗ −M2 − M2T + hY22 hM2T AT d ⎥ ⎣ ⎦ ∗ ∗ −hR ⎡ Θ=⎣
⎤ Y11
Y12
∗
Y22 − R
⎦ 0,
(6.30)
140
6. Stabilization of Systems with Time-Varying Delay
where S = AL + Ad M1 + BV, then the system can be stabilized by control law (6.4), and the controller gain is K = V L−1 . Proof. Setting ⎤ ⎡ T P 0 ⎦ , AK = A + BK Ad , I¯= I −I , A¯ = AT I¯ , H =⎣ K N1T N2T and replacing the A in (6.28) with A + BK yield ⎡ ⎤ H T A¯ + A¯T H + hX hAT K ⎦ < 0. Ξ=⎣ hAK −hZ −1
(6.31)
Also, (6.28) implies that −N2 − N2T is negative definite, which means that −N2 is nonsingular. So, we set ⎡ H −1 = ⎣
⎤−1 P
0
N1T N2T
⎦
⎡ =⎣
⎤ L
0
⎦.
M1 M2
Then, we pre- and post-multiply Ξ in (6.31) by diag {H −T , I} and diag {H −1 , I}, respectively, and set Y = H −T XH −1 , R = Z −1 , V = KL. This transforms (6.31) into (6.29). Pre- and post-multiplying Ψ in (3.10) by diag {H −T , I} and diag {H −1 , I}, respectively, and applying the Schur complement yield (6.30). This completes the proof.
Remark 6.4.1. For a given scalar h > 0, conditions (6.29) and (6.30) are LMI-based; that is, they constitute a completely LMI-based solution to the delay-dependent and rate-independent stabilization problem. Next, we use Lemma 2.6.2 to extend Theorem 6.4.1 to system (6.5), which has time-varying structured uncertainties. Theorem 6.4.2. Consider system (6.5) with a delay, d(t), that satisfies (6.2).⎡For a given ⎤ scalar h > 0, if there exist matrices L > 0, R > 0, and Y =⎣
Y11 Y12
⎦ 0, any appropriately dimensioned matrices M1 , M2 , and
∗ Y22 V, and a scalar λ > 0 such that LMI (6.30) and the following LMI hold:
6.5 Numerical Example
141
⎡ ⎤ S +S T +λDDT +hY11 Ad M2 +L−M1T +hY12 hS T +hλDDT ET ⎢ ⎥ ⎢ T T ⎥ ⎢ ∗ −M2 − M2T + hY22 hM2T AT M E 2 d ad ⎥ ⎢ ⎥< 0, ⎢ ⎥ ⎢ ⎥ ∗ ∗ −hR+λh2 DDT 0 ⎣ ⎦ ∗ ∗ ∗ −λI (6.32) where E = Ea L + Ead M1 + Eb V and S is defined in (6.29), then the system can be stabilized by control law (6.4), and the controller gain is K = V L−1 . Remark 6.4.2. There is an error in Theorem 2 of [7], which gives the delaydependent and rate-independent conditions as LMIs for ε¯i = I, i = 1, 2. In (28a) in the theorem, rows 4-7 and columns 4-7 were deleted. That is, both S¯1 and S¯2 must be zero. However, in (17), L = E0 , L1 = E1 , and L2 = E2 ; and (28a) was derived from (17) by pre- and post-multiplying it by ΔT and Δ (Δ = diag {Q, I, S¯1 , S¯2 , I}, respectively, not by diag {Q, I} and S¯1 = S1−1 , S¯2 = S2−1 ). Since S¯1 and S¯2 must be zero, this treatment is equivalent to making rows 3-4 and columns 3-4 in (17) all zero and then deleting them, which is clearly not correct. On the other hand, if either L1 or L2 in (17) is non-zero, the BRL representation of (17) cannot be extended to make it delay-dependent and rate-independent. In fact, it is a mistake that (28) does not contain E1 or E2 when there are uncertainties in A1 and A2 . So, the delay-dependent and rate-independent conditions in [7] are not valid for a delay with an uncertainty. Note that all the equation numbers mentioned here are those in [7].
6.5 Numerical Example Example 6.5.1. Consider a system with time-varying structured uncertainties and the following parameters: ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 0 0 −2 −0.5 0 ⎦ , Ad = ⎣ ⎦, B = ⎣ ⎦, A=⎣ 0 1 0 −1 1 D = I, Ea = 0.2I, Ead = αI, Eb = 0. If we assume that this system contains a constant delay (μ = 0), then the upper bound, h, for which the system is stabilizable by a state-feedback
142
6. Stabilization of Systems with Time-Varying Delay
Table 6.1. Maximum upper bound, h, for α = 0.2 Iterations
μ
Method
h
Feedback gain, K
0
[3]
0.45
[−4.8122 − 7.7129]
99
0.68
[−18.2332 − 23.4286]
203
0.68
[−16.3701 − 21.7856]
168
0.63
[−42.0837 − 42.2851]
243
0.63
[−23.5647 − 26.6134]
144
0.67
[−12.0052 − 16.7633]
220
0.67
[−8.2990 − 12.6240]
155
0.62
[−25.4782 − 27.6622]
240
0.62
[−20.9674 − 24.3177]
167
Theorem 6.2.3, 0
Algorithm 6.2.1 Theorem 6.2.3, Algorithm 6.2.2 Theorem 6.2.3,
0.5
Algorithm 6.2.1 Theorem 6.2.3, Algorithm 6.2.2 Theorem 6.2.4,
0
Algorithm 6.2.1 Theorem 6.2.4, Algorithm 6.2.2 Theorem 6.2.4,
0.5
Algorithm 6.2.1 Theorem 6.2.4, Algorithm 6.2.2
or param.
0
Corollary 6.3.3
0.61
[−0.5335 − 1.7433]
ε = 0.9
0.5
Corollary 6.3.3
0.54
[−1.5680 − 2.3956]
ε = 1.2
0
Corollary 6.3.4
0.61
[−0.5008 − 1.7331]
ε = 0.9
0.5
Corollary 6.3.4
0.54
[−1.5352 − 2.4047]
ε = 1.2
controller is 0.45 in [3] (where α = 0.2) and 0.5865 in [7] (where α = 0). Tables 6.1 and 6.2 list these values along with results for α = 0.2 or α = 0 obtained by Theorems 6.2.3 and 6.2.4 using Algorithm 6.2.1 or 6.2.2, and also Corollaries 6.3.3 and 6.3.4. Clearly, the conditions in this chapter produce the largest upper bounds. In addition, when the same theorem is used, Algorithm 6.2.2 produces a smaller controller gain in fewer iterations than Algorithm 6.2.1 does. Simulations were also run on a closed-loop system for α = 0, ΔA(t) = 0.2I, and μ = 0. Fig. 6.1 shows input and state response curves for the statefeedback controller gain obtained with Theorem 6.2.4 and that obtained by
6.5 Numerical Example
143
Fridman et al. [7] for h = 0.5865. Both controllers stabilize the system; but the one from Theorem 6.2.4 makes the state converge to zero more quickly, although it does produce a larger initial control input. Fig. 6.2 shows results for h = 0.84. In this case, Fridman et al.’s controller [7] cannot stabilize the system at all, while that of Theorem 6.2.4 can. Table 6.2. Maximum upper bound, h, for α = 0 μ
Method
0
[7] Theorem 6.2.3,
0
Algorithm 6.2.1 Theorem 6.2.3, Algorithm 6.2.2 Theorem 6.2.3,
0.5
Algorithm 6.2.1 Theorem 6.2.3, Algorithm 6.2.2 Theorem 6.2.4,
0
Algorithm 6.2.1 Theorem 6.2.4, Algorithm 6.2.2 Theorem 6.2.4,
0.5
Algorithm 6.2.1 Theorem 6.2.4, Algorithm 6.2.2
Iterations
h
Feedback gain, K
0.58
[−0.3155 − 4.4417]
—
0.79
[−33.2323 − 29.2854]
200
0.79
[−16.0425 − 16.1640]
115
0.75
[−59.6999 − 47.3689]
279
0.75
[−34.1414 − 30.4576]
145
0.79
[−23.3921 − 20.1295]
245
0.79
[−20.6913 − 18.5518]
194
0.73
[−30.5855 − 26.5910]
208
0.74
[−29.0621 − 25.8568]
183
or param.
0
Corollary 6.3.3
0.67
[−2.0523 − 1.9435]
ε = 1.1
0.5
Corollary 6.3.3
0.59
[−1.4623 − 1.7582]
ε = 1.1
0
Corollary 6.3.4
0.67
[−2.0552 − 1.9437]
ε = 1.1
0.5
Corollary 6.3.4
0.59
[−1.4376 − 1.7652]
ε = 1.1
˙ In addition, when the derivative of the time-varying delay, d(t), is unknown and a delay-dependent and rate-independent stabilization condition is used to find the upper bound, h, for which the system is stabilizable by a state-feedback controller, then the values are 0.489 in [6] and 0.496 in [7] for
144
6. Stabilization of Systems with Time-Varying Delay
Fig. 6.1. Simulation results for h = 0.5865: (a) method of Fridman et al. and (b) Theorem 6.2.4
Fig. 6.2. Simulation results for h = 0.84: (a) method of Fridman et al. and (b) Theorem 6.2.4
References
145
α = 0, and 0.496 for the LMI-based condition in Theorem 6.4.1. However, in contrast to LMIs (6.29) and (6.30) in Theorem 6.4.1, for which no parameters need to be tuned, the condition in [7] is not completely LMI-based and requires parameter tuning. A more important point is that, as mentioned in Remark 6.4.2, the method in [7] is ineffective when α = 0.2 because of the error in the condition. However, Theorem 6.4.2 yields a value of 0.451 for the upper bound, h, for which the system is stabilizable by a state-feedback controller.
6.6 Conclusion This chapter uses an LMI-based iterative nonlinear minimization algorithm and a parameter-tuning method to establish methods of designing controllers for systems with a time-varying delay. The methods are based on the delaydependent stabilization conditions obtained by the FWM approach. It also presents an ICCL algorithm that requires fewer iterations than the CCL algorithm. Furthermore, it describes an approach for designing an LMI-based delay-dependent and rate-independent stabilizable controller. Finally, a numerical example demonstrates the benefits of our method.
References 1. K. Gu and S. I. Niculescu. Additional dynamics in transformed time-delay systems. IEEE Transactions on Automatic Control, 45(3): 572-575, 2000. 2. K. Gu and S. I. Niculescu. Further remarks on additional dynamics in various model transformations of linear delay systems. IEEE Transactions on Automatic Control, 46(3): 497-500, 2001. 3. Y. S. Moon, P. Park, W. H. Kwon, and Y. S. Lee. Delay-dependent robust stabilization of uncertain state-delayed systems. International Journal of Control, 74(14): 1447-1455, 2001. 4. H. Gao and C. Wang. Comments and further results on “A descriptor system approach to H∞ control of linear time-delay systems”. IEEE Transactions on Automatic Control, 48(3): 520-525, 2003. 5. Y. S. Lee, Y. S. Moon, W. H. Kwon, and P. G. Park. Delay-dependent robust H∞ control for uncertain systems with a state-delay. Automatica, 40(1): 65-72, 2004. 6. E. Fridman and U. Shaked. Delay-dependent stability and H∞ control: constant and time-varying delays. International Journal of Control, 76(1): 48-60, 2003.
146
6. Stabilization of Systems with Time-Varying Delay
7. E. Fridman and U. Shaked. An improved stabilization method for linear timedelay systems. IEEE Transactions on Automatic Control, 47(11): 1931-1937, 2002. 8. E. Fridman and U. Shaked. A descriptor system approach to H∞ control of linear time-delay systems. IEEE Transactions on Automatic Control, 47(2): 253-270, 2002. 9. E. Fridman and U. Shaked. Parameter dependent stability and stabilization of uncertain time-delay systems. IEEE Transactions on Automatic Control, 48(5): 861-866, 2003. 10. M. Wu, Y. He, and J. H. She. Delay-dependent robust stability and stabilization criteria for uncertain neutral systems. Acta Automatica Sinica, 31(4): 578-583, 2005. 11. M. Wu, Y. He, and J. H. She. New delay-dependent stability criteria and stabilizing method for neutral systems. IEEE Transactions on Automatic Control, 49(12): 2266-2271, 2004. 12. E. L. Ghaoui, F. Oustry, and M. AitRami. A cone complementarity linearization algorithms for static output feedback and related problems. IEEE Transactions on Automatic Control, 42(8): 1171-1176, 1997. 13. Y. He, G. P. Liu, D. Rees, and M. Wu. Improved stabilization method for networked control systems. IET Control Theory & Applications, 1(6): 15801585, 2007.
7. Stability and Stabilization of Discrete-Time Systems with Time-Varying Delay
Increasing attention is being paid to the delay-dependent stability, stabilization, and H∞ control of linear systems with delays [1–14]. The literature discusses discrete-time systems with two types of time-varying delays: small and non-small. For a small delay, [15–17] presented methods of designing an H∞ state-feedback controller. More recently, a time-varying interval delay, which is a kind of non-small delay, has become a focus of attention for both continuous-time systems [18] and discrete-time systems [9, 19, 20]. [9] solved the robust H∞ control problem using an output-feedback controller; but the limitations of that approach are that matrix inequalities must be solved to obtain the decision matrix variables, and only a range of delays can be dealt with. [21] handled the problem of designing an H∞ filter by using a finite-sum inequality. [20] used Moon et al.’s inequality and criteria containing both the range and upper bound of the time-varying delay for the delay-dependent output-feedback stabilization of discrete-time systems with a time-varying state delay. And [19] derived H∞ control criteria using a descriptor model transformation in combination with Moon et al.’s inequality for uncertain linear discrete-time systems with a time-varying interval delay; in that approach, the delay was decomposed into a nominal part and an uncertain part. However, as mentioned in Chapter 3, Moon et al.’s inequality is more conservative than the FWM approach for continuous-time systems. This is also true for discrete-time systems. In addition, [20, 21] ignored some terms when estimating the upper bound on the difference of a Lyapunov function; but [6, 7, 22] showed that retaining those terms yields less conservative stability results. Another point is that the delay was increased to make it easy to handle. That is, the delay, d(k), where d1 d(k) d2 , was increased to d2 in many studies [19–22]; and d2 − d(k) was increased to d2 − d1 in [22], or in other words, d2 = d(k) + (d2 − d(k)) was increased to 2d2 − d1 . These methods may lead to considerable conservativeness. On the other hand, the
148
7. Stability and Stabilization of Discrete-Time Systems...
conditions for a delay-dependent stabilizing controller obtained by improved methods cannot be expressed strictly in terms of LMIs, although this type of problem can be solved by using either an iterative nonlinear minimization algorithm or a parameter-tuning method. Recently, [20] employed the CCL algorithm to deal with the problem of the output-feedback stabilization of discrete-time systems with a time-varying delay. However, as mentioned in Chapter 6, there is still room for further investigation of that algorithm. This chapter discusses the output-feedback control of a linear discretetime system with a time-varying delay [23]. First, the IFWM explained in Chapter 3 is used to derive a criterion for delay-dependent stability. Next, this criterion is used to design an SOF controller. Since the conditions for the existence of an admissible controller are not expressed strictly in terms of LMIs, the ICCL algorithm described in Chapter 6 is used to solve the nonconvex feasibility SOF stabilization control problem. Then, the problem of designing a DOF controller is transformed into one of designing an SOF controller, and a DOF controller is obtained by following the design method for an SOF controller. Finally, numerical examples demonstrate the effectiveness of this method and its advantages over other methods.
7.1 Problem Formulation Consider the following linear discrete-time system with a time-varying delay: ⎧ ⎪ ⎪ x(k + 1) = Ax(k) + Ad x(k − d(k)) + Bu(k), ⎪ ⎨ (7.1) y(k) = Cx(k) + Cd x(k − d(k)), ⎪ ⎪ ⎪ ⎩ x(k) = φ(k), − d k 0, 2
where x(k) ∈ Rn is the state vector; u(k) ∈ Rm is the control input; A, Ad , and B are constant matrices with appropriate dimensions; d(k) is a timevarying interval delay satisfying d1 d(k) d2 ,
(7.2)
where d1 and d2 are known positive integers; and φ(k) is the initial condition. This chapter considers both SOF and DOF controllers. The SOF control law is u(k) = F y(k),
(7.3)
7.2 Stability Analysis
149
where F ∈ Rm×p is a constant gain matrix to be determined. The DOF control law is ⎧ ⎪ ⎪ x (k + 1) = Ac xc (k) + Bc y(k), ⎪ ⎨ c (7.4) u(k) = Cc xc (k) + Dc y(k), ⎪ ⎪ ⎪ ⎩ x (k) = 0, k < 0, c where xc (k) ∈ Rr is the state of the controller; and Ac , Bc , Cc , and Dc are appropriately dimensioned matrices to be determined. The aim of this chapter is to find an SOF controller of the form (7.3) and a DOF controller of the form (7.4) such that the closed-loop system is asymptotically stable.
7.2 Stability Analysis This section presents a stability analysis of system (7.1) with u(k) = 0. It is used in subsequent sections to design output-feedback controllers. First, we give a theorem. Theorem 7.2.1. Consider system (7.1) with u(k) = 0. Given scalars d1 and d2 with d2 d1 > 0, the system is asymptotically stable if there⎤ exist ⎡ X11 X12 ⎦ 0, matrices P > 0, Qi 0, i = 1, 2, 3, Zj > 0, j = 1, 2, X = ⎣ ∗ X22 ⎡ ⎤ Y11 Y12 ⎦ 0, and any appropriately dimensioned matrices N = and Y = ⎣ ∗ Y22 T T T T T , S = S1T S2T , and M = M1T M2T such that the following N1 N 2 LMIs hold: ⎡ ⎤ √ √ Φ Ξ1T P d2 Ξ2T Z1 d12 Ξ2T Z2 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ −P ⎥ 0 0 ⎢ ⎥ < 0, (7.5) ⎢ ⎥ ⎢∗ ⎥ ∗ −Z1 0 ⎣ ⎦ ∗ ∗ ∗ −Z2 ⎡ Ψ1 = ⎣
⎤ X N ∗ Z1
⎦ 0,
(7.6)
150
7. Stability and Stabilization of Discrete-Time Systems...
⎡ Ψ2 = ⎣ ⎡ Ψ3 = ⎣ where
⎤ Y
S
∗ Z2
⎦ 0,
(7.7) ⎤
X +Y
M
∗
Z1 + Z2
⎡ Φ11 Φ12
S1
Φ22
S2
∗
−Q1
∗
∗
⎢ ⎢ ⎢ ∗ Φ=⎢ ⎢ ⎢ ∗ ⎣ ∗
⎦ 0,
−M1
(7.8)
⎤
⎥ ⎥ −M2 ⎥ ⎥, ⎥ 0 ⎥ ⎦ −Q2
Φ11 = Q1 + Q2 + (d12 + 1)Q3 − P + N1 + N1T + d2 X11 + d12 Y11 , Φ12 = N2T − N1 + M1 − S1 + d2 X12 + d12 Y12 , Φ22 =−Q3 − N2 −N2T + M2 + M2T − S2 − S2T + d2 X22 + d12 Y22 , Ξ1 = A Ad 0 0 , Ξ2 = A − I Ad 0 0 , d12 = d2 − d1 . Proof. Defining η(l) = x(l + 1) − x(l)
(7.9)
and using system equation (7.1) with u(k) = 0 yield x(k + 1) = x(k) + η(k)
(7.10)
η(k) = x(k + 1) − x(k) = (A − I)x(k) + Ad x(k − d(k)).
(7.11)
and
Choose the Lyapunov function candidate to be V (k) = V1 (k) + V2 (k) + V3 (k) + V4 (k),
(7.12)
where V1 (k) = xT (k)P x(k), V2 (k) =
0
k−1
θ=−d2 +1 l=k−1+θ
T
η (l)Z1 η(l) +
−d 1
k−1
θ=−d2 +1 l=k−1+θ
η T (l)Z2 η(l),
7.2 Stability Analysis
V3 (k) =
k−1 l=k−d1
V4 (k) =
k−1
xT (l)Q1 x(l) +
−d 1 +1
151
xT (l)Q2 x(l),
l=k−d2 k−1
xT (l)Q3 x(l);
θ=−d2 +1 l=k−1+θ
and P > 0, Qi 0, i = 1, 2, 3, and Zj > 0, j = 1, 2 are to be determined. Defining ΔV (k) = V (k + 1) − V (k) yields ΔV1 (k) = xT (k + 1)P x(k + 1) − xT (k)P x(k) = ζ1T (k)Ξ1T P Ξ1 ζ1 (k) − xT (k)P x(k), ΔV2 (k) =d2 η T (k)Z1 η(k) −
k−1
(7.13)
η T (l)Z1 η(l)
l=k−d2
+ d12 η T (k)Z2 η(k) −
k−d 1 −1
η T (l)Z2 η(l)
l=k−d2 k−1
=ζ1T (k)Ξ2T (d2 Z1 + d12 Z2 )Ξ2 ζ1 (k) −
η T (l)Z1 η(l)
l=k−d(k)
−
k−d 1 −1
k−d(k)−1
η T (l)Z2 η(l) −
η T (l)(Z1 + Z2 )η(l),
(7.14)
l=k−d2
l=k−d(k)
ΔV3 (k) =xT (k)(Q1 + Q2 )x(k) − xT (k − d1 )Q1 x(k − d1 ) − xT (k − d2 )Q2 x(k − d2 ),
T
(7.15)
ΔV4 (k) =(d2 − d1 + 1)x (k)Q3 x(k) −
k−d 1
xT (l)Q3 x(l)
l=k−d2 T
T
(d12 + 1)x (k)Q3 x(k) − x (k − d(k))Q3 x(k − d(k)),
(7.16)
where T ζ1 (k) = xT (k), xT (k−d(k)), xT (k−d1 ), xT (k−d2 ) . From (7.9), the following equations hold for any matrices N , M , and S with appropriate dimensions: ⎡ ⎤ k−1 T η(l)⎦ , (7.17) 0 = 2ζ2 (k)N ⎣x(k) − x(k − d(k)) − l=k−d(k)
152
7. Stability and Stabilization of Discrete-Time Systems...
⎡ 0 = 2ζ2T (k)M ⎣x(k − d(k)) − x(k − d2 ) −
k−d(k)−1
⎤ η(l)⎦ ,
l=k−d2
⎡
k−d 1 −1
0 = 2ζ2T (k)S ⎣x(k − d1 ) − x(k − d(k)) −
(7.18)
⎤ η(l)⎦ ,
(7.19)
l=k−d(k)
where T ζ2 (k) = xT (k), xT (k − d(k)) . ⎡ On the other hand, for any matrices X = ⎣
∗
⎤
⎡ ⎣
⎤ X11 X12
Y11 Y12 ∗ Y22 0=
⎦ 0 and Y =
X22
⎦ 0, the following equations are true:
k−1
k−1
ζ2T (k)Xζ2 (k) −
l=k−d2
ζ2T (k)Xζ2 (k)
l=k−d2
=d2 ζ2T (k)Xζ2 (k)
k−1
−
k−d(k)−1
ζ2T (k)Xζ2 (k)
−
ζ2T (k)Xζ2 (k),
l=k−d2
l=k−d(k)
(7.20) 0=
k−d 1 −1
ζ2T (k)Y ζ2 (k) −
l=k−d2
k−d 1 −1
ζ2T (k)Y ζ2 (k)
l=k−d2
=d12 ζ2T (k)Y ζ2 (k) −
k−d 1 −1
k−d(k)−1
ζ2T (k)Y ζ2 (k) −
ζ2T (k)Y ζ2 (k).
l=k−d2
l=k−d(k)
(7.21) Taking the forward difference of V (k) and adding the terms on the right sides of (7.17)-(7.21) to ΔV (k) allow us to write ΔV (k) as ΔV (k) ζ1T (k) Φ + Ξ1T P Ξ1 + Ξ2T (d2 Z1 +d12 Z2 )Ξ2 ζ1 (k) −
k−1
ζ3T (k, l)Ψ1 ζ3 (k, l)
−
l=k−d(k)
k−d 1 −1
ζ3T (k, l)Ψ2 ζ3 (k, l)
l=k−d(k)
k−d(k)−1
−
l=k−d2
where
ζ3T (k, l)Ψ3 ζ3 (k, l),
(7.22)
7.3 Controller Design
153
T ζ3 (k, l) = ζ2T (k), η T (l) . Thus, if Ψi 0, i = 1, 2, 3 and Φ + Ξ1T P Ξ1 + Ξ2T (d2 Z1 + d12 Z2 )Ξ2 < 0, which is equivalent to (7.5) from the Schur complement, then ΔV (k) < 0. That is, system (7.1) with u(k) = 0 is asymptotically stable. This completes the proof.
Remark 7.2.1. Lyapunov function (7.12) is different from the one in [20] in two ways: V4 (k) includes information on d2 that was not used in previous −d1 +1 k−1 T studies; and only V4 (k) = θ=−d l=k−1+θ x (l)Q3 x(l) is employed to 2 +1 k−1 handle the time-varying delay. In contrast, V4 (k) = l=k−d(k) xT (l)Qx(l) + −d1 +1 k−1 T θ=−d2 +2 l=k−1+θ x (l)Qx(l) was used in [9, 20]. Our treatment considerably simplifies the proof. Remark 7.2.2. Let Q1 = Q2 = Z2 = εI (where⎡ε is a sufficiently small posi⎤ X 0 ⎦, and N = Y T 0 T . tive scalar) and let M = 0, S = 0, Y = 0, X = ⎣ ∗ 0 Then, Theorem 7.2.1 reduces to Theorem 1 in [20]. That is, Theorem 7.2.1 provides more freedom in the selection of Q1 , Q2 , Z2 , M , S, X, N , and Y . Remark 7.2.3. In the proof of Theorem 7.2.1, d2 is separated into two parts: d2 = d(k) + (d2 − d(k)). In contrast, in (6) in [22], the inequalities d2 ζ T (k)M Z1−1 M T ζ(k) −
k−1
ζ T (k)M Z1−1 M T ζ(k) 0
l=k−d(k)
and
k−d(k)−1
d12 ζ T (k)SZ1−1 S T ζ(k) −
ζ T (k)SZ1−1 S T ζ(k) 0
l=k−d2
are employed; and d(k) and (d2 − d(k)) are increased to d2 and d12 , respectively. So, their theorem is more conservative than Theorem 7.2.1.
7.3 Controller Design The results of the stability analysis in the previous section are now employed to design both SOF and DOF controllers for system (7.1).
154
7. Stability and Stabilization of Discrete-Time Systems...
7.3.1 SOF Controller Connecting SOF controller (7.3) to system (7.1) yields the closed-loop system ⎧ ⎨ x(k + 1) = Ax(k) ˆ + Aˆd x(k − d(k)), (7.23) ⎩ x(k) = φ(k), − d k 0, 2
where Aˆ = A + BF C, Aˆd = Ad + BF Cd . We obtain the following theorem from Theorem 7.2.1. Theorem 7.3.1. Consider system (7.1). For given scalars d1 and d2 with d2 d1 > 0, if there exist⎡ matrices P⎤ > 0, L > 0, Qi ⎡0, i = 1, 2, ⎤ 3, Zj > 0, Y X11 X12 Y ⎦ 0, and Y = ⎣ 11 12 ⎦ 0, and Rj > 0, j = 1, 2, X = ⎣ ∗ X22 ∗ Y22 T T T T , S = S1T S2T , any appropriately dimensioned matrices N = N1 N2 T T T M = M1 M2 , and F such that LMIs (7.6)-(7.8) and the following matrix inequalities hold: ⎤ ⎡ √ √ ˆT ˆT ˆT Φ Ξ d2 Ξ d12 Ξ 1 2 2 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ∗ −L 0 0 ⎥ < 0, ⎢ (7.24) ⎥ ⎢ ⎥ ⎢∗ ∗ 0 −R1 ⎦ ⎣ ∗ ∗ ∗ −R2 P L = I, Zj Rj = I, j = 1, 2,
(7.25)
where
ˆ1 = Aˆ Aˆ 0 0 , Ξ d ˆ2 = Aˆ − I Aˆ 0 0 , Ξ d
and Φ is defined in (7.5), then the system can be stabilized by control law (7.3). Proof. In (7.5), replacing A and Ad with Aˆ and Aˆd , respectively, and preand post-multiplying the left and right sides by diag {I, P −1 , Z1−1 , Z2−1 } yield
7.3 Controller Design
⎡ Φ
ˆT Ξ 1
⎢ ⎢ ⎢ ∗ −P −1 ⎢ ⎢ ⎢∗ ∗ ⎣ ∗ ∗
⎤ √ √ ˆT d2 Ξˆ2T d12 Ξ 2 ⎥ ⎥ ⎥ 0 0 ⎥ < 0. ⎥ ⎥ 0 −Z1−1 ⎦ −1 ∗ −Z2
155
(7.26)
Defining L = P −1 and Rj = Zj−1 , j = 1, 2 in (7.26) gives equations (7.24) and (7.25). This completes the proof.
Since the conditions in Theorem 7.3.1 are no longer LMI conditions owing to (7.25), we cannot use a convex optimization algorithm to find a maximum d2 , d2 max . However, as mentioned in Chapter 6, we can use the idea for solving a cone complementarity problem first proposed in [24]. Defining L = P −1 and Rj = Zj−1 , j = 1, 2 converts this nonconvex problem into the following LMI-based nonlinear minimization problem: Minimize
Tr{P L +
2
Zj Rj }
j=1
subject to (7.6)-(7.8), (7.24), and ⎧ ⎪ ⎪ > 0, L > 0, Qi > 0, Zj > 0, Rj > 0 ⎪P ⎨ ⎡ ⎤ ⎡ ⎤ P I Z I ⎪ ⎣ ⎦ 0, ⎣ j ⎦ 0, i = 1, 2, 3, j = 1, 2. ⎪ ⎪ ⎩ ∗ L ∗ Rj
(7.27)
Then, for a given d1 > 0, a suboptimal maximum d2 can be found by using either the CCL or the ICCL algorithm. Here, we employ the ICCL algorithm because of its advantages. Algorithm 7.3.1 To maximize d2 : Step 1: Choose a sufficiently small initial d2 d1 such that there exists a feasible solution to (7.6)-(7.8), (7.24), and (7.27). Set d2 max = d2 . Step 2: Find a feasible set (P0 , L0 , Qi0 , i = 1, 2, 3, Zj0 , Rj0 , j = 1, 2, M0 , N0 , S0 , X0 , Y0 , F0 ) satisfying (7.6)-(7.8), (7.24), and (7.27). Set k = 0. Step 3: Solve the following LMI problem for the variables P, L, Qi , i = 1, 2, 3, Zj , Rj , j = 1, 2, M, N, S, X, Y, and F : ⎧ ⎫ 2 ⎨ ⎬ Minimize Tr Pk L + P Lk + [Zj Rjk + Zjk Rj ] ⎩ ⎭ j=1
subject to
(7.6)-(7.8), (7.24), and (7.27).
156
7. Stability and Stabilization of Discrete-Time Systems...
Set Pk+1 = P, Lk+1 = L, Zj,k+1 = Zj , and Rj,k+1 = Rj , j = 1, 2. Step 4: For the F obtained in Step 3, if LMIs (7.6)-(7.8) and ⎡
ˆ TP Φ Ξ 1
⎢ ⎢ ⎢∗ ⎢ ⎢ ⎢∗ ⎣ ∗
−P ∗ ∗
⎤ √ √ ˆ T Z1 ˆ T Z2 d2 Ξ d Ξ 12 2 2 ⎥ ⎥ ⎥ 0 0 ⎥ 0, if there exist⎡ matrices P⎤ > 0, L > 0, Qi ⎡0, i = 1, 2, ⎤ 3, Zj > 0, Y X11 X12 Y ⎦ 0, and Y = ⎣ 11 12 ⎦ 0, and Rj > 0, j = 1, 2, X = ⎣ ∗ X22 ∗ Y22 T T T T , S = S1T S2T , any appropriately dimensioned matrices N = N1 N2 T T T M = M1 M2 , and F˜ such that matrix inequalities (7.6)-(7.8), (7.25) and the following matrix inequality hold: ⎡ ⎤ √ √ ¯1T ¯2T ¯2T Φ Ξ d2 Ξ d12 Ξ ⎢ ⎥ ⎢ ⎥ ⎢ ∗ −L ⎥ 0 0 ⎢ ⎥ < 0, (7.32) ⎢ ⎥ ⎢∗ ∗ ⎥ 0 −R1 ⎣ ⎦ ∗ ∗ ∗ −R2 where
¯1 = A¯ A¯ 0 0 , Ξ d
158
7. Stability and Stabilization of Discrete-Time Systems...
¯2 = A¯ − I A¯ 0 0 , Ξ d and Φ is defined in (7.5) (Note that the dimensions here are different from those in Theorem 7.2.1.), then the system can be stabilized by control law (7.4). Remark 7.3.3. Since the problem of designing a DOF controller has been transformed into one of designing an SOF controller, Algorithm 7.3.1 can be used to solve the design problem by replacing A, Ad , B, C, and Cd in the ˜ A˜d , B, ˜ C, ˜ and C˜d , respectively. algorithm with A,
7.4 Numerical Examples The two examples below demonstrate the effectiveness of our method and its advantage over other methods. Example 7.4.1. Consider the stability of system (7.1) with u(k) = 0 and the following parameters: ⎡ ⎤ ⎡ ⎤ −0.1 0 0.8 0 ⎦. ⎦ , Ad = ⎣ A=⎣ −0.2 −0.1 0.05 0.9 This example was discussed in [20, 22]. Table 7.1 lists values of the upper bound on d2 that guarantee the stability of system (7.1) with u(k) = 0 that were reported in [20, 22] and those that were obtained with Theorem 7.2.1. The method presented in this chapter is significantly better than the others. Table 7.1. Upper bound on d2 for various d1 (Example 7.4.1) d1
2
4
6
10
12
[20]
7
8
9
12
13
[22]
13
13
14
15
17
Theorem 7.2.1
17
17
18
20
21
Example 7.4.2. Consider system (7.1) with the following parameters: ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 0.3 0 1 1 1 1 0 0.9 0.5 ⎦ , B =⎣ ⎦ , C =⎣ ⎦ , Cd = ⎣ ⎦. ⎦ , Ad = ⎣ A=⎣ 0.8 0.5 0.5 0 1 1 1 0.8 1
References
159
[20] stated that system (7.1) could be stabilized by SOF controller (7.3) for 3 d(k) 11 and by DOF controller (7.4) with r = 1 or r = 2 for 3 d(k) 100. However, using Algorithm 7.3.1 shows that system (7.1) can be stabilized by SOF controller (7.3) with F = [−0.3170 − 0.1519] for 3 d(k) 12. On the other hand, even for 3 d(k) 1000, system (7.1) can be stabilized by DOF controller (7.4) with an order of either one Ac = 0.0279, Bc = −0.0069 −0.0230 , Cc = −4.0479, Dc = −0.3821 −0.2737 , or two ⎡ Ac = ⎣
0.2381
−0.8620
⎤
⎡
⎦ , Bc = ⎣
⎤ 0.1069 0.3551
⎦,
−0.1464 0.4531 0.0324 0.1085 Cc = 0.1727 0.3208 , Dc = −0.3815 −0.2716 .
7.5 Conclusion This chapter discusses the output-feedback stabilization control of a linear discrete-time system with a time-varying delay. First, the IFWM approach is used to carry out a delay-dependent stability analysis. Next, based on that, a design criterion for an SOF controller is derived; and the problem of designing a DOF controller is reduced to the problem of designing an SOF controller. A design criterion for a DOF controller is also established. Then, since the conditions for the existence of an admissible controller are not expressed strictly in terms of LMIs, the ICCL algorithm described in Chapter 6 is used to solve the nonconvex feasibility SOF stabilization control problem. Finally, numerical examples demonstrate the effectiveness of this method and its advantage over other methods.
References 1. E. Fridman and S. I. Niculescu. On complete Lyapunov-Krasovskii functional techniques for uncertain systems with fast-varying delays. International Journal of Robust and Nonlinear Control, 18(3): 364-374, 2007. 2. K. Gu, V. L. Kharitonov, and J. Chen. Stability of Time-Delay Systems. Boston: Birkh¨ auser, 2003.
160
7. Stability and Stabilization of Discrete-Time Systems...
3. P. Richard. Time-delay systems: An overview of some recent advances and open problems. Automatica, 39(10): 1667-1694, 2003. 4. H. Gao and C. Wang. Comments and further results on “A descriptor system approach to H∞ control of linear time-delay systems”. IEEE Transactions on Automatic Control, 48(3): 520-525, 2003. 5. M. Wu, Y. He, J. H. She, and G. P. Liu. Delay-dependent criteria for robust stability of time-varying delay systems. Automatica, 40(8): 1435-1439, 2004. 6. Y. He, Q. G. Wang, L. Xie, and C. Lin. Further improvement of free-weighting matrices technique for systems with time-varying delay. IEEE Transactions on Automatic Control, 52(2): 293-299, 2007. 7. Y. He, Q. G. Wang, C. Lin, and M. Wu. Delay-range-dependent stability for systems with time-varying delay. Automatica, 43(2): 371-376, 2007. 8. C. Lin, Q. G. Wang, and T. H. Lee. A less conservative robust stability test for linear uncertain time-delay systems. IEEE Transactions on Automatic Control, 51(1): 87-91, 2006. 9. S. Xu and T. Chen. Robust H∞ control for uncertain discrete-time systems with time-varying delays via exponential output feedback controllers. Systems & Control Letters, 51(3-4): 171-183, 2004. 10. S. Xu and J. Lam. On equivalence and efficiency of certain stability criteria for time-delay systems. IEEE Transactions on Automatic Control, 52(1): 95-101, 2007. 11. X. M. Sun, J. Zhao, and D. J. Hill. Stability and L2 -gain analysis for switched delay systems: a delay-dependent method. Automatica, 42(10): 1769-1774, 2006. 12. X. M. Sun, G. P. Liu, D. Rees, and W. Wang. Delay-dependent stability for discrete systems with large delay sequence based on switching techniques. Automatica, 44(11): 2902-2908, 2008. 13. X. M. Sun, W. Wang, G. P. Liu, and J. Zhao. Stability analysis for linear switched systems with time-varying delay. IEEE Transactions on Systems, Man, and Cybernetics-Part B, 38(2): 528-533, 2008. 14. M. Wu, Y. He, and J. H. She. Delay-dependent stabilization for systems with multiple unknown time-varying delays. International Journal of Control, Automation, and Systems, 4(6): 662-668, 2006. 15. S. H. Song and J. K. Kim. H∞ control of discrete-time linear systems with norm-bounded uncertainties and time delay in state. Automatica, 34(1): 137139, 1998. 16. S. H. Song, J. K. Kim, C. H. Yim, and H. C. Kim. H∞ control of discrete-time linear systems with time-varying delays in state. Automatica, 35(9): 1587-1591, 1999. 17. Z. Wang, B. Huang, and H. Unbehauen. Robust H∞ observer design of linear state delayed systems with parametric uncertainties: the discrete-time case. Automatica, 35(6): 1161-1167, 1999. 18. Q. L. Han and K. Gu. Stability of linear systems with time-varying delay: A generalized discretized Lyapunov functional approach. Asian Journal of Control, 3(3): 170-180, 2001.
References
161
19. E. Fridman and U. Shaked. Stability and guaranteed cost control of uncertain discrete delay systems. International Journal of Control, 78(4): 235-246, 2005. 20. H. Gao, J. Lam, C. Wang, and Y. Wang. Delay-dependent output-feedback stabilization of discrete-time systems with time-varying state delay. IEE Proceedings–Control Theory & Applications, 151(6): 691-698, 2004. 21. X. M. Zhang and Q. L. Han. Delay-dependent robust H∞ filtering for uncertain discrete-time systems with time-varying delay based on a finite sum inequality. IEEE Transactions on Circuits and Systems II, 53(12): 1466-1470, 2006. 22. H. Gao and T. Chen. New results on stability of discrete-time systems with time-varying state delay. IEEE Transactions on Automatic Control, 52(2): 328333, 2007. 23. Y. He, M. Wu, G. P. Liu, and J. H. She. Output feedback stabilization for a discrete-time system with a time-varying delay. IEEE Transactions on Automatic control, 53(10): 2372-2377, 2008. 24. E. L. Ghaoui, F. Oustry, and M. AitRami. A cone complementarity linearization algorithms for static output feedback and related problems. IEEE Transactions on Automatic Control, 42(8): 1171-1176, 1997. 25. Y. S. Moon, P. Park, W. H. Kwon, and Y. S. Lee. Delay-dependent robust stabilization of uncertain state-delayed systems. International Journal of Control, 74(14): 1447-1455, 2001.
8. H∞ Control Design for Systems with Time-Varying Delay
During the last decade, considerable attention has been devoted to the problems of delay-dependent stability, stabilization, and H∞ controller design for time-delay systems [1–6]. However, as pointed out in [7, 8], most studies to date have ignored some useful terms in the derivative of the LyapunovKrasovskii functional [1, 5, 9–11]. Although [7, 8] retained these terms and established an improved delay-dependent stability criterion for systems with a time-varying delay, there is room for further investigation. For instance, in [1, 5, 7–12], the delay, d(t), where 0 d(t) h, was often increased to h. And in [7, 8], another term, h − d(t), was also taken to be equal to h; that is, h = d(t) + (h − d(t)) was increased to 2h, which may lead to conservativeness. Moreover, these methods are not applicable to systems with a time-varying interval delay. On the other hand, design conditions for a delay-dependent H∞ controller with memoryless state feedback obtained by improved methods cannot be expressed strictly in terms of LMIs. So, either an iterative nonlinear minimization algorithm or a parameter-tuning method is needed to solve the problem. Recently, [5] used the FWM approach and the CCL algorithm to solve the problem of designing an H∞ controller. However, as mentioned in Chapter 6, that algorithm can be improved. This chapter shows how the IFWM approach can be used to obtain an improved delay-dependent BRL for systems with a time-varying interval delay. Then, that BRL along with the ICCL algorithm described in Chapter 6 are used to design an H∞ controller. Finally, numerical examples demonstrate the effectiveness and advantages of this method.
8.1 Problem Formulation Consider the following linear system with a time-varying delay:
164
8. H∞ Control Design for Systems with Time-Varying Delay
⎧ ⎪ x(t) ˙ = Ax(t) + Ad x(t − d(t)) + Bu(t) + Bω ω(t), t > 0, ⎪ ⎪ ⎪ ⎤ ⎡ ⎪ ⎪ ⎪ ⎪ Cx(t) + D ω(t) ω ⎪ ⎨ ⎥ ⎢ ⎥ ⎢ z(t) = ⎢ Cd x(t − d(t)) ⎥ , ⎪ ⎦ ⎣ ⎪ ⎪ ⎪ ⎪ Du(t) ⎪ ⎪ ⎪ ⎪ ⎩ x(t) = φ(t), t ∈ [−h2 , 0],
(8.1)
where x(t) ∈ Rn is the state vector; u(t) ∈ Rm is the control input; ω(t) ∈ L2 [0, ∞) is an exogenous disturbance signal; z(t) ∈ Rr is the control output; A, Ad , B, Bω , C, Dω , Cd , and D are constant matrices with appropriate dimensions; and the delay, d(t), is a time-varying differentiable function that satisfies h1 d(t) h2
(8.2)
˙ μ, d(t)
(8.3)
and
where h2 h1 0 and μ are constants. Note that h1 may be non-zero. The initial condition, φ(t), is a continuously differentiable initial function of t ∈ [−h2 , 0]. For a given scalar γ > 0, the performance of the system is defined to be ∞ J(ω) = (z T (t)z(t) − γ 2 ω T (t)ω(t))dt. (8.4) 0
The H∞ control problem addressed in this chapter is stated as follows: For a memoryless state-feedback controller, find a value for the gain, K ∈ Rm×n , in the control law u(t) = Kx(t)
(8.5)
such that, for any delay, d(t), satisfying (8.2) and (8.3), (1) the closed-loop system of (8.1), x(t) ˙ = (A + BK)x(t) + Ad x(t − d(t)) + Bω ω(t),
(8.6)
is asymptotically stable under the condition ω(t) = 0, ∀t 0; and (2) J(ω) < 0 for all non-zero ω(t) ∈ L2 [0, ∞) and a given γ > 0 under the condition x(t) = 0, ∀t ∈ [−h2 , 0].
8.2 BRL
165
8.2 BRL Below, we use the IFWM approach to derive a new delay-dependent BRL. Theorem 8.2.1. Consider system (8.1) with u(t) = 0. Given scalars h2 h1 0, μ, and γ > 0, the system is asymptotically stable and satisfies J(ω) < 0 for all non-zero ω(t) ∈ L2 [0, ∞) under the condition x(t) = 0, ∀t ∈ [−h2 , 0] if there exist matrices P > 0, Qi 0, i = 1, 2, 3, Zj > 0, and Xj 0, j = 1, 2 and any appropriately dimensioned matrices Ni i = 1, 2, 3 such that the following LMIs hold: ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ Φ=⎢ ⎢ ⎢ ⎢ ⎣
Φ1 + Φ2 + ΦT 2 + h2 X1 + h12 X2 ∗ ∗ ∗ ∗
⎤ √ √ T T T h2 ΦT Z h Φ Z Φ Φ 1 12 2 3 3 4 5 ⎥ ⎥ −Z1 0 0 0 ⎥ ⎥ ⎥ 0 0 ⎥ < 0, ∗ −Z2 ⎥ ⎥ ∗ ∗ −I 0 ⎥ ⎦ ∗ ∗ ∗ −I (8.7)
⎡ Ψj = ⎣ ⎡ Ψ3 = ⎣ where
⎤ Xj N j ∗
Zj
⎦ 0, j = 1, 2,
(8.8)
⎤
X1 + X2
N3
∗
Z1 + Z2
⎡ T
⎢PA + A ⎢ ⎢ ⎢ ⎢ ⎢ Φ1 = ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
P+
3
⎦ 0,
(8.9)
⎤ Qi
P Ad
0
0
∗
−(1 − μ)Q3
0
0
∗
∗
−Q1
0
∗
∗
∗
−Q2
∗
∗
i=1
∗ ∗ Φ2 = [N1 N3 − N1 − N2 N2 − N3 0], Φ3 = [A Ad 0 0 Bω ], Φ4 = [C 0 0 0 Dω ], Φ5 = [0 Cd 0 0 0], h12 = h2 − h1 .
P Bω ⎥ ⎥ ⎥ ⎥ 0 ⎥ ⎥ ⎥, 0 ⎥ ⎥ ⎥ 0 ⎥ ⎦ −γ 2 I
166
8. H∞ Control Design for Systems with Time-Varying Delay
Proof. Choose the Lyapunov-Krasovskii functional candidate to be t 2 t xT (s)Qi x(s)ds + xT (s)Q3 x(s)ds V (xt ) = xT (t)P x(t) +
0
i=1 t
x˙ T (s)Z1 x(s)dsdθ+ ˙
+ −h2
t−hi
t+θ
t−d(t)
−h1
−h2
t
x˙ T (s)Z2 x(s)dsdθ, ˙
(8.10)
t+θ
where P > 0, Qi 0, i = 1, 2, 3, and Zj > 0, j = 1, 2 are to be determined. From the Newton-Leibnitz formula, the following equations are true for any matrices Ni i = 1, 2, 3 with appropriate dimensions: t x(s)ds ˙ , (8.11) 0 = 2ζ T (t)N1 x(t) − x(t − d(t)) − t−d(t)
T
t−h1
0 = 2ζ (t)N2 x(t − h1 ) − x(t − d(t)) −
x(s)ds ˙ , t−d(t)
T
t−d(t)
0 = 2ζ (t)N3 x(t − d(t)) − x(t − h2 ) −
(8.12)
x(s)ds ˙ ,
(8.13)
t−h2
where T ζ(t) = xT (t), xT (t − d(t)), xT (t − h1 ), xT (t − h2 ), ω T (t) . On the other hand, for any matrices Xj 0, j = 1, 2, the following equalities are true: t−d(t) t T T ζ (t)X1 ζ(t)ds− ζ T (t)X1 ζ(t)ds, (8.14) 0 = h2 ζ (t)X1 ζ(t)− t−h2
0 = h12 ζ T (t)X2 ζ(t)−
t−d(t)
ζ T (t)X2 ζ(t)ds−
t−d(t) t−h1
ζ T (t)X2 ζ(t)ds.
(8.15)
t−d(t)
t−h2
Moreover, the following equations are also true: −
t
x˙ T (s)Z1 x(s)ds ˙ =−
t−h2
t
x˙ T (s)Z1 x(s)ds− ˙
t−d(t)
t−d(t)
x˙ T (s)Z1 x(s)ds, ˙
t−h2
(8.16) −
t−h1
t−h2
x˙ T (s)Z2 x(s)ds ˙ =−
t−d(t)
t−h2
x˙ T (s)Z2 x(s)ds− ˙
t−h1
x˙ T (s)Z2 x(s)ds. ˙
t−d(t)
(8.17) Calculating the derivative of V (xt ) along the solutions of system (8.1), adding the right sides of (8.11)-(8.15) to it, and using equations (8.16) and (8.17) yield
8.2 BRL
167
\[
\dot{V}(x_t)+z^T(t)z(t)-\gamma^2\omega^T(t)\omega(t)
\le \zeta^T(t)\big[\Phi_1+\Phi_2+\Phi_2^T+h_2X_1+h_{12}X_2
+\Phi_3^T(h_2Z_1+h_{12}Z_2)\Phi_3+\Phi_4^T\Phi_4+\Phi_5^T\Phi_5\big]\zeta(t)
-\int_{t-d(t)}^{t}\xi^T(t,s)\Psi_1\xi(t,s)\,ds
-\int_{t-d(t)}^{t-h_1}\xi^T(t,s)\Psi_2\xi(t,s)\,ds
-\int_{t-h_2}^{t-d(t)}\xi^T(t,s)\Psi_3\xi(t,s)\,ds, \tag{8.18}
\]
where ξ(t, s) = [ζ^T(t), ẋ^T(s)]^T.
Thus, if Ψ_i ≥ 0, i = 1, 2, 3 and Φ_1 + Φ_2 + Φ_2^T + h_2X_1 + h_{12}X_2 + Φ_3^T(h_2Z_1 + h_{12}Z_2)Φ_3 + Φ_4^TΦ_4 + Φ_5^TΦ_5 < 0, which is equivalent to (8.7) by the Schur complement, then V̇(x_t) + z^T(t)z(t) − γ²ω^T(t)ω(t) < 0, which ensures that J(ω) < 0.
On the other hand, (8.7)-(8.9) imply that the following LMIs hold:
\[
\begin{bmatrix}
\hat\Phi_1+\hat\Phi_2+\hat\Phi_2^T+h_2\hat X_1+h_{12}\hat X_2 & \sqrt{h_2}\,\hat\Phi_3^TZ_1 & \sqrt{h_{12}}\,\hat\Phi_3^TZ_2\\
* & -Z_1 & 0\\
* & * & -Z_2
\end{bmatrix}<0, \tag{8.19}
\]
\[
\hat\Psi_j=\begin{bmatrix}\hat X_j & \hat N_j\\ * & Z_j\end{bmatrix}\ge 0,\quad j=1,2, \tag{8.20}
\]
\[
\hat\Psi_3=\begin{bmatrix}\hat X_1+\hat X_2 & \hat N_3\\ * & Z_1+Z_2\end{bmatrix}\ge 0, \tag{8.21}
\]
where
\[
\hat\Phi_1=\begin{bmatrix}
PA+A^TP+\sum_{i=1}^{3}Q_i & PA_d & 0 & 0\\
* & -(1-\mu)Q_3 & 0 & 0\\
* & * & -Q_1 & 0\\
* & * & * & -Q_2
\end{bmatrix},
\]
\[
\hat\Phi_2=[\,\hat N_1 \;\; \hat N_3-\hat N_1-\hat N_2 \;\; \hat N_2 \;\; -\hat N_3\,],\qquad
\hat\Phi_3=[\,A \;\; A_d \;\; 0 \;\; 0\,].
\]
Thus, V̇(x_t) < −ε‖x(t)‖² for a sufficiently small ε > 0. Therefore, system (8.1) with u(t) = 0 and ω(t) = 0 is asymptotically stable. This completes the proof.
Remark 8.2.1. Often there is no information on the derivative of the delay. In that case, a delay-dependent and rate-independent BRL for a delay satisfying only (8.2) can be derived by setting Q_3 = 0 in Theorem 8.2.1.

From the procedure used in the proof of Theorem 8.2.1, we obtain a corollary on the stability of system (8.1) with u(t) = 0 and ω(t) = 0.

Corollary 8.2.1. Consider system (8.1) with u(t) = 0 and ω(t) = 0. Given scalars h_2 ≥ h_1 ≥ 0 and μ, the system is asymptotically stable if there exist matrices P > 0, Q_i ≥ 0, i = 1, 2, 3, Z_j > 0, and X̂_j ≥ 0, j = 1, 2, and any appropriately dimensioned matrices N̂_i, i = 1, 2, 3 such that LMIs (8.19)-(8.21) hold.

Remark 8.2.2. For h_1 = 0, if Z_1 = Z, Q_3 = Q, N̂_2 = 0, N̂_3 = 0, N̂_1 = [\,Y^T \;\; T^T \;\; 0 \;\; 0\,]^T, X̂_2 = 0,
\[
\hat X_1=\begin{bmatrix}
X_{11} & X_{12} & 0 & 0\\
* & X_{22} & 0 & 0\\
* & * & 0 & 0\\
* & * & * & 0
\end{bmatrix},
\]
Q_i = ε_i I, i = 1, 2, and Z_2 = ε_3 I (where ε_j > 0, j = 1, 2, 3 are sufficiently small scalars), then Corollary 8.2.1 reduces to Theorem 2 in [9].
8.3 Design of State-Feedback H∞ Controller

This section extends Theorem 8.2.1 to the design of an H∞ controller for system (8.1) under control law (8.5).

Theorem 8.3.1. Consider closed-loop system (8.6). For given scalars h_2 ≥ h_1 ≥ 0, μ, and γ > 0, if there exist matrices L > 0, R_i ≥ 0, i = 1, 2, 3, Y_j ≥ 0, and W_j > 0, j = 1, 2, and any appropriately dimensioned matrices M_j, j = 1, 2, 3, and V such that the following matrix inequalities hold:
\[
\begin{bmatrix}
\Xi_1+\Xi_2+\Xi_2^T+h_2Y_1+h_{12}Y_2 & \sqrt{h_2}\,\Xi_3^T & \sqrt{h_{12}}\,\Xi_3^T & \Xi_4^T & \Xi_5^T & \Xi_6^T\\
* & -W_1 & 0 & 0 & 0 & 0\\
* & * & -W_2 & 0 & 0 & 0\\
* & * & * & -I & 0 & 0\\
* & * & * & * & -I & 0\\
* & * & * & * & * & -I
\end{bmatrix}<0, \tag{8.22}
\]
\[
\begin{bmatrix}
Y_j & M_j\\
* & LW_j^{-1}L
\end{bmatrix}\ge 0,\quad j=1,2, \tag{8.23}
\]
\[
\begin{bmatrix}
Y_1+Y_2 & M_3\\
* & LW_1^{-1}L+LW_2^{-1}L
\end{bmatrix}\ge 0, \tag{8.24}
\]
where
\[
\Xi_1=\begin{bmatrix}
AL+LA^T+BV+V^TB^T+\sum_{i=1}^{3}R_i & A_dL & 0 & 0 & B_\omega\\
* & -(1-\mu)R_3 & 0 & 0 & 0\\
* & * & -R_1 & 0 & 0\\
* & * & * & -R_2 & 0\\
* & * & * & * & -\gamma^2 I
\end{bmatrix},
\]
\[
\Xi_2=[\,M_1 \;\; M_3-M_1-M_2 \;\; M_2 \;\; -M_3 \;\; 0\,],\quad
\Xi_3=[\,AL+BV \;\; A_dL \;\; 0 \;\; 0 \;\; B_\omega\,],
\]
\[
\Xi_4=[\,CL \;\; 0 \;\; 0 \;\; 0 \;\; D_\omega\,],\quad
\Xi_5=[\,0 \;\; C_dL \;\; 0 \;\; 0 \;\; 0\,],\quad
\Xi_6=[\,DV \;\; 0 \;\; 0 \;\; 0 \;\; 0\,],
\]
then the system is asymptotically stable and satisfies J(ω) < 0 for all non-zero ω(t) ∈ L_2[0, ∞) under the condition x(t) = 0, ∀t ∈ [−h_2, 0], and u(t) = V L^{-1} x(t) is a stabilizing H∞ controller.

Proof. If system (8.1) in Theorem 8.2.1 is replaced with closed-loop system (8.6), then (8.7) should be changed to
\[
\tilde\Phi=\begin{bmatrix}
\tilde\Phi_1+\Phi_2+\Phi_2^T+h_2X_1+h_{12}X_2 & \sqrt{h_2}\,\tilde\Phi_3^TZ_1 & \sqrt{h_{12}}\,\tilde\Phi_3^TZ_2 & \Phi_4^T & \Phi_5^T & \tilde\Phi_6^T\\
* & -Z_1 & 0 & 0 & 0 & 0\\
* & * & -Z_2 & 0 & 0 & 0\\
* & * & * & -I & 0 & 0\\
* & * & * & * & -I & 0\\
* & * & * & * & * & -I
\end{bmatrix}<0, \tag{8.25}
\]
where
\[
\tilde\Phi_3=[\,A+BK \;\; A_d \;\; 0 \;\; 0 \;\; B_\omega\,],\qquad
\tilde\Phi_6=[\,DK \;\; 0 \;\; 0 \;\; 0 \;\; 0\,],
\]
and Φ̃_1 is obtained by replacing A in Φ_1 in (8.7) with A + BK; the other parameters are defined in Theorem 8.2.1.
Define Π = diag{P^{-1}, P^{-1}, P^{-1}, P^{-1}, I} and Θ = diag{Π, Z_1^{-1}, Z_2^{-1}, I, I, I}. Pre- and post-multiply Φ̃ in (8.25) by Θ; pre- and post-multiply Ψ_i, i = 1, 2 and Ψ_3 in (8.8) and (8.9) by diag{Π, L}; and make the following changes of variables:
\[
L = P^{-1},\quad V = KL,\quad M_i = \Pi N_i L,\quad R_i = LQ_iL,\; i=1,2,3,\quad
Y_j = \Pi X_j \Pi,\quad W_j = Z_j^{-1},\; j=1,2.
\]
Then, (8.22)-(8.24) are derived using the Schur complement. This completes the proof.
Note that the conditions in Theorem 8.3.1 are no longer LMI conditions due to the terms LW_j^{-1}L, j = 1, 2 in (8.23) and (8.24). As mentioned in Chapter 6, we can solve this nonconvex problem by using the idea for solving a cone complementarity problem in [13]. Defining new variables S_j, j = 1, 2 for which LW_j^{-1}L ≥ S_j, j = 1, 2, and letting P = L^{-1}, T_j = S_j^{-1}, and Z_j = W_j^{-1}, j = 1, 2, converts this nonconvex problem into the following LMI-based nonlinear minimization problem:
\[
\text{Minimize } \operatorname{Tr}\Big\{LP+\sum_{j=1}^{2}(S_jT_j+W_jZ_j)\Big\}
\]
subject to (8.22) and
\[
\begin{cases}
\begin{bmatrix} Y_j & M_j\\ * & S_j\end{bmatrix}\ge 0,\quad
\begin{bmatrix} Y_1+Y_2 & M_3\\ * & S_1+S_2\end{bmatrix}\ge 0,\quad
\begin{bmatrix} T_j & P\\ * & Z_j\end{bmatrix}\ge 0,\\[2mm]
\begin{bmatrix} L & I\\ * & P\end{bmatrix}\ge 0,\quad
\begin{bmatrix} S_j & I\\ * & T_j\end{bmatrix}\ge 0,\quad
\begin{bmatrix} W_j & I\\ * & Z_j\end{bmatrix}\ge 0,\quad j=1,2.
\end{cases} \tag{8.26}
\]
The minimum H∞ performance, γ_min, can be found for given h_2 ≥ h_1 ≥ 0 by using either the CCL or ICCL algorithm described in Chapter 6. Below, we use the ICCL algorithm because of its advantages.
Algorithm 8.3.1. To find γ_min:

Step 1: Choose a sufficiently large initial γ > 0 such that there exists a feasible solution to (8.22) and (8.26). Set γ_min = γ.

Step 2: Find a feasible set (P_0, L_0, V_0, R_{i0}, M_{i0}, Y_{j0}, Z_{j0}, W_{j0}, S_{j0}, T_{j0}, i = 1, 2, 3, j = 1, 2) that satisfies (8.22) and (8.26). Set k = 0.

Step 3: Solve the following LMI problem for the variables P, L, R_i, M_i, i = 1, 2, 3, Y_j, Z_j, W_j, S_j, T_j, j = 1, 2, and V:
\[
\text{Minimize } \operatorname{Tr}\Big\{LP_k+L_kP+\sum_{j=1}^{2}(S_jT_{jk}+S_{jk}T_j+W_jZ_{jk}+W_{jk}Z_j)\Big\}
\]
subject to (8.22) and (8.26). Set P_{k+1} = P, L_{k+1} = L, S_{j,k+1} = S_j, T_{j,k+1} = T_j, W_{j,k+1} = W_j, and Z_{j,k+1} = Z_j, j = 1, 2.

Step 4: For the K obtained in Step 3, if LMIs (8.8), (8.9), and (8.25) are feasible for the variables P, Q_i, N_i, i = 1, 2, 3, X_j, and Z_j, j = 1, 2, then set γ_min = γ, decrease γ, and return to Step 2. If they are not feasible within a specified number of iterations, then exit. Otherwise, set k = k + 1 and go to Step 3.
subject to (8.22) and (8.26). Set Pk+1 = P , Lk+1 = L, Sj,k+1 = Sj , Tj,k+1 = Tj , Wj,k+1 = Wj , and Zj,k+1 = Zj , j = 1, 2. Step 4: For the K obtained in Step 3, if LMIs (8.8), (8.9), and (8.25) are feasible for the variables P , Qi , Ni , i = 1, 2, 3, Xj , and Zj , j = 1, 2, then set γmin = γ, decrease γ, and return to Step 2. If they are not feasible within a specified number of iterations, then exit. Otherwise, set k = k +1 and go to Step 3. Remark 8.3.1. Note that the iteration stop condition at the beginning of Step 4 in [2] and [14] is very strict. The gain matrix, K, and other decision variables, such as L, Ri , Mi , i = 1, 2, 3, Wj , and Yj , j = 1, 2 that were obtained in the previous step must satisfy (8.22)-(8.24), which is the same as saying that the matrices P , Qi , Ni , i = 1, 2, 3, Zj , and Xj , j = 1, 2 obtained in the previous step must satisfy (8.8), (8.9), and (8.25) for the specified K. However, once K is obtained, the conditions in Theorem 8.3.1 reduce to LMIs for P , Qi , Ni , i = 1, 2, 3, Zj , and Xj , j = 1, 2. So, we modified the stop condition in Algorithm 8.3.1 to include a determination of whether or not LMIs (8.8), (8.9), and (8.25) are feasible, which may provide more freedom in the selection of variables, such as P , Qi , Ni , i = 1, 2, 3, Zj , and Xj , j = 1, 2.
8.4 Numerical Examples

This section presents two numerical examples that demonstrate the benefits of the method described above.
Example 8.4.1. Consider the stability of system (8.1) with u(t) = 0, ω(t) = 0, and
\[
A=\begin{bmatrix} 0 & 1\\ -1 & -2 \end{bmatrix},\qquad
A_d=\begin{bmatrix} 0 & 0\\ -1 & 1 \end{bmatrix}.
\]
If (h_1 + h_2)/2 = 1 and μ = 0.8, the lower and upper bounds on d(t) are h_1 = 0.88 and h_2 = 1.12 in [15], and h_1 = 0.46 and h_2 = 1.54 in [8]. However, Corollary 8.2.1 yields better values, namely, h_1 = 0.60 and h_2 = 1.40.
Table 8.1 lists values of the upper bound, h_2, for various μ and h_1 that were obtained with Corollary 8.2.1 and those that are given in [4, 8, 16]. Although our corollary produces the same values as those in [16] for h_1 = 0, the criteria in [16] are not applicable when h_1 > 0.

Table 8.1. Allowable upper bound, h2, for various h1 (Example 8.4.1)

  μ            Method            h2 for h1 = 0   0.3    0.5    0.8    1      2
  unknown μ    [4]               0.67            0.91   1.07   1.33   1.50   2.39
               [8]               0.77            0.94   1.09   1.34   1.51   2.40
               [16]              1.06            —      —      —      —      —
               Corollary 8.2.1   1.06            1.19   1.33   1.56   1.72   2.57
  μ = 0.3      [8]               2.19            2.19   2.20   2.20   2.21   2.40
               [16]              2.35            —      —      —      —      —
               Corollary 8.2.1   2.35            2.35   2.35   2.35   2.35   2.57
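As a purely illustrative check, the stability conditions of Corollary 8.2.1 can be posed as a semidefinite feasibility problem and tested numerically for the system of Example 8.4.1. The sketch below assumes CVXPY with an SDP solver (SCS is bundled); the strictness parameter eps and the test values of (h1, h2, μ) are my own choices, and solver tolerances mean the numerical feasibility boundary need not match Table 8.1 exactly.

```python
import numpy as np
import cvxpy as cp

# System of Example 8.4.1 (u = 0, omega = 0).
A  = np.array([[0.0, 1.0], [-1.0, -2.0]])
Ad = np.array([[0.0, 0.0], [-1.0, 1.0]])
n  = A.shape[0]

def corollary_821_feasible(h1, h2, mu, eps=1e-6):
    """Rough check of whether LMIs (8.19)-(8.21) admit a solution."""
    h12 = h2 - h1
    In, Z0 = np.eye(n), np.zeros((n, n))
    P, Q1, Q2, Q3, Z1, Z2 = (cp.Variable((n, n), symmetric=True) for _ in range(6))
    X1 = cp.Variable((4 * n, 4 * n), symmetric=True)
    X2 = cp.Variable((4 * n, 4 * n), symmetric=True)
    N1, N2, N3 = (cp.Variable((4 * n, n)) for _ in range(3))

    # Block rows/columns ordered as x(t), x(t-d(t)), x(t-h1), x(t-h2).
    Phi1 = cp.bmat([
        [P @ A + A.T @ P + Q1 + Q2 + Q3, P @ Ad,         Z0,  Z0],
        [Ad.T @ P,                       -(1 - mu) * Q3, Z0,  Z0],
        [Z0,                             Z0,             -Q1, Z0],
        [Z0,                             Z0,             Z0,  -Q2]])
    Phi2 = cp.hstack([N1, N3 - N1 - N2, N2, -N3])
    Phi3 = np.hstack([A, Ad, np.zeros((n, 2 * n))])          # constant row block

    Ups = Phi1 + Phi2 + Phi2.T + h2 * X1 + h12 * X2
    big = cp.bmat([
        [Ups,                        np.sqrt(h2) * Phi3.T @ Z1, np.sqrt(h12) * Phi3.T @ Z2],
        [np.sqrt(h2) * Z1 @ Phi3,    -Z1,                       Z0],
        [np.sqrt(h12) * Z2 @ Phi3,   Z0,                        -Z2]])

    cons = [P >> eps * In, Q1 >> 0, Q2 >> 0, Q3 >> 0,
            Z1 >> eps * In, Z2 >> eps * In, X1 >> 0, X2 >> 0,
            big << -eps * np.eye(6 * n),
            cp.bmat([[X1, N1], [N1.T, Z1]]) >> 0,             # (8.20), j = 1
            cp.bmat([[X2, N2], [N2.T, Z2]]) >> 0,             # (8.20), j = 2
            cp.bmat([[X1 + X2, N3], [N3.T, Z1 + Z2]]) >> 0]   # (8.21)

    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

print(corollary_821_feasible(h1=0.3, h2=1.0, mu=0.3))   # expected: True (well inside Table 8.1)
print(corollary_821_feasible(h1=0.3, h2=5.0, mu=0.3))   # expected: False (far outside)
```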
Example 8.4.2. Consider system (8.1) with
\[
A=\begin{bmatrix} 0 & 0\\ 0 & 1 \end{bmatrix},\quad
A_d=\begin{bmatrix} -1 & -1\\ 0 & -0.9 \end{bmatrix},\quad
B=\begin{bmatrix} 0\\ 1 \end{bmatrix},\quad
B_\omega=\begin{bmatrix} 1\\ 1 \end{bmatrix},
\]
\[
C=[\,0 \;\; 1\,],\quad D_\omega=0,\quad C_d=[\,0 \;\; 0\,],\quad D=0.1.
\]
When ḋ(t) is unknown and (h_1 + h_2)/2 = 2, the range of delays for which system (8.1) with ω(t) = 0 is stable is [1.78, 2.22] for K = [−20.5108, −34.6753] in [17].
Table 8.2. Controller gain and number of iterations for Algorithm 8.3.1 for γ = 0.1287 (Example 8.4.2)

  h1 = h2   Feedback gain from Algorithm 8.3.1   Number of iterations
                                                 Algorithm 8.3.1   [3]   [5]
  1.1       [−0.1718, −32.0748]                  2                 19    16
  1.2       [−0.1228, −33.6992]                  2                 32    22
  1.25      [−0.0905, −35.0062]                  2                 86    29
  1.40      [0.0009, −19.0760]                   7                 —     —
However, Algorithm 8.3.1 produces the range [1.70, 2.30] for K = [−2.9088, −8.0278]. Thus, the method in this chapter produces both a larger range of delays and smaller gains than the method in [17] does.
For a constant delay (d(t) = h_2 = h_1) and γ = 0.1287, the range of h_2 for which the system is stable is 0 ≤ h_2 ≤ 1.25 in [2], 0 ≤ h_2 ≤ 1.25 in [3], and 0 ≤ h_2 ≤ 1.38 in [5]. However, Algorithm 8.3.1 yields the range 0 ≤ h_2 ≤ 1.40. Table 8.2 shows that this algorithm requires far fewer iterations than the ones in [3] or [5] because the stop condition is less strict.
Regarding a time-varying delay, [5] only considered the case h_1 = 0; but d(t) can vary within an interval. Table 8.3 lists the minimum H∞ performance, γ_min, for the closed-loop system obtained for various h_1 and h_2. Note that our method yields better results than those in [5], even for h_1 = 0, because we use both an improved BRL and a new algorithm.

Table 8.3. γmin for various h1 and h2 (Example 8.4.2)

  h2     h1     μ = 0.5                      unknown μ
                [5]       Algorithm 8.3.1    [5]    Algorithm 8.3.1
  1      0      0.117     0.111              —      0.118
         0.5    0.117     0.108              —      0.109
  1.4    1.0    —         1.452              —      1.489
         1.2    —         1.280              —      1.280
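The closed-loop behaviour reported for the time-varying-delay case can be checked by direct simulation of (8.6). The sketch below (an illustrative check only, not part of the design procedure) uses the Example 8.4.2 data and the gain K = [−2.9088, −8.0278] found by Algorithm 8.3.1; the Euler step, the particular delay profile d(t) ∈ [1.70, 2.30], the disturbance, and the initial function are my own arbitrary choices.

```python
import numpy as np

# Example 8.4.2 data and the gain obtained for the time-varying-delay case.
A   = np.array([[0.0, 0.0], [0.0, 1.0]])
Ad  = np.array([[-1.0, -1.0], [0.0, -0.9]])
B   = np.array([[0.0], [1.0]])
Bw  = np.array([1.0, 1.0])
K   = np.array([[-2.9088, -8.0278]])
Acl = A + B @ K                       # closed-loop "current state" matrix

h1, h2 = 1.70, 2.30                   # delay range found by Algorithm 8.3.1
dt, T  = 1e-3, 20.0
steps  = int(T / dt)
hist   = int(h2 / dt) + 1             # length of the history buffer

x = np.zeros((steps + hist, 2))
x[:hist] = np.array([1.0, -1.0])      # constant initial function on [-h2, 0]

for k in range(hist, steps + hist):
    t = (k - hist) * dt
    d = h1 + 0.5 * (h2 - h1) * (1.0 + np.sin(0.3 * t))   # d(t) stays in [h1, h2]
    xd = x[k - int(round(d / dt))]                       # delayed state x(t - d(t))
    w  = np.exp(-t) * np.sin(5.0 * t)                    # finite-energy disturbance
    x[k] = x[k - 1] + dt * (Acl @ x[k - 1] + Ad @ xd + Bw * w)

print("final state norm:", np.linalg.norm(x[-1]))        # expected to be small (decay)
```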
8.5 Conclusion

This chapter describes a new delay-dependent BRL derived using the IFWM approach of Chapter 3. Based on this BRL, an H∞ controller is designed
using the ICCL algorithm of Chapter 6. Two numerical examples show that this method is less conservative than others.
References 1. E. Fridman and U. Shaked. Delay-dependent stability and H∞ control: constant and time-varying delays. International Journal of Control, 76(1): 48-60, 2003. 2. H. Gao and C. Wang. Comments and further results on “A descriptor system approach to H∞ control of linear time-delay systems”. IEEE Transactions on Automatic Control, 48(3): 520-525, 2003. 3. Y. S. Lee, Y. S. Moon, W. H. Kwon, and P. G. Park. Delay-dependent robust H∞ control for uncertain systems with a state-delay. Automatica, 40(1): 65-72, 2004. 4. X. Jiang and Q. L. Han. On H∞ control for linear systems with interval timevarying delay. Automatica, 41(12): 2099-2106, 2005. 5. S. Xu, J. Lam, and Y. Zou. New results on delay-dependent robust H∞ control for systems with time-varying delays. Automatica, 42(2): 343-348, 2006. 6. X. M. Zhang, M. Wu, Q. L. Han, and J. H. She. A new integral inequality approach to delay-dependent robust H∞ control. Asian Journal of Control, 8(2): 153-160, 2006. 7. Y. He, Q. G. Wang, L. Xie, and C. Lin. Further improvement of free-weighting matrices technique for systems with time-varying delay. IEEE Transactions on Automatic Control, 52(2): 293-299, 2007. 8. Y. He, Q. G. Wang, C. Lin, and M. Wu. Delay-range-dependent stability for systems with time-varying delay. Automatica, 43(2): 371-376, 2007. 9. M. Wu, Y. He, J. H. She, and G. P. Liu. Delay-dependent criteria for robust stability of time-varying delay systems. Automatica, 40(8): 1435-1439, 2004. 10. Y. He, M. Wu, J. H. She, and G. P. Liu. Parameter-dependent Lyapunov functional for stability of time-delay systems with polytopic type uncertainties. IEEE Transactions on Automatic Control, 49(5): 828-832, 2004. 11. Q. L. Han. On robust stability of neutral systems with time-varying discrete delay and norm-bounded uncertainty. Automatica, 40(6): 1087-1092, 2004. 12. J. Lam, H. Gao, and C. Wang. Stability analysis for continuous systems with two additive time-varying delay components. Systems & Control Letters, 56(1): 16-24, 2007. 13. E. L. Ghaoui, F. Oustry, and M. AitRami. A cone complementarity linearization algorithms for static output feedback and related problems. IEEE Transactions on Automatic Control, 42(8): 1171-1176, 1997. 14. Y. S. Moon, P. Park, W. H. Kwon, and Y. S. Lee. Delay-dependent robust stabilization of uncertain state-delayed systems. International Journal of Control, 74(14): 1447-1455, 2001.
15. E. Fridman. Stability of systems with uncertain delays: a new “complete” Lyapunov-Krasovskii functional. IEEE Transactions on Automatic Control, 51(5): 885-890, 2006. 16. P. G. Park and J. W. Ko. Stability and robust stability for systems with a time-varying delay. Automatica, 43(10): 1855-1858, 2007. 17. E. Fridman and U. Shaked. Descriptor discretized Lyapunov functional method: analysis and design. IEEE Transactions on Automatic Control, 51(5): 890-897, 2006.
9. H∞ Filter Design for Systems with Time-Varying Delay
One problem with estimating the state from corrupted measurements is that, if we assume that the noise source is an arbitrary signal with bounded energy, then the well-known Kalman filtering scheme is no longer applicable. To handle that case, H∞ filtering was proposed in [1]. It provides a guaranteed noise attenuation level [2-5]. H∞ filtering for time-delay systems has been a hot topic in recent years, and design methods for delay-independent H∞ filters have been presented in [6-10]. However, since delay-independent methods tend to be conservative, especially when the delay is small, attention has shifted to delay-dependent H∞ filtering [11-16]. A descriptor model transformation was first employed in H∞ filter design in [11]. Later, [12-16] devised less conservative results using Park's or Moon et al.'s inequality. An improved method based on the FWM approach was reported in [17] for systems with a time-varying interval delay. Recently, Zhang & Han employed a new integral inequality, which is equivalent to the FWM approach, to study H∞ filtering for neutral delay systems in [18] and robust H∞ filtering for uncertain linear systems with a time-varying delay in [19]. These methods are less conservative than previous ones. A similar method was used to design H∞ filters for discrete-time systems with a time-varying delay in [20, 21].
However, there is room for further improvement for systems with a time-varying delay. For example, for continuous-time systems with a time-varying delay, d(t), where 0 ≤ d(t) ≤ h and ḋ(t) ≤ μ, [12, 19] ignored the term $-\int_{t-h}^{t-d(t)}\dot x^T(s)Z\dot x(s)\,ds$ in the derivative of the Lyapunov-Krasovskii functional, which may lead to considerable conservativeness. The best way to deal with this term is to keep it; that is, all the terms in $h\dot x^T(t)Z\dot x(t)-\int_{t-h}^{t}\dot x^T(s)Z\dot x(s)\,ds = h\dot x^T(t)Z\dot x(t)-\int_{t-d(t)}^{t}\dot x^T(s)Z\dot x(s)\,ds-\int_{t-h}^{t-d(t)}\dot x^T(s)Z\dot x(s)\,ds$ should be retained. On the other hand, for discrete-time systems with a time-varying delay, in the calculation of the difference of the Lyapunov function in [20, 22], the term $-\sum_{l=k-\bar h}^{k-1}\eta^T(l)Z\eta(l)$ was increased to $-\sum_{l=k-d(k)}^{k-1}\eta^T(l)Z\eta(l)$ and
the term $-\sum_{l=k-\bar h}^{k-d(k)-1}\eta^T(l)Z\eta(l)$ was ignored, which may also lead to considerable conservativeness.
This chapter uses the IFWM approach to analyze the delay-dependent H∞ performance of the error systems of both continuous-time and discrete-time systems [23] with a time-varying delay. Useful terms in the derivative of a Lyapunov-Krasovskii functional and those in the difference of a Lyapunov function are retained, and the relationships among a time-varying delay, its upper bound, and their difference are taken into consideration. That treatment yields H∞ filters that are designed in terms of LMIs. The resulting criteria are extended to systems with polytopic-type uncertainties. Numerical examples demonstrate the effectiveness of the method and its advantages.
9.1 H∞ Filter Design for Continuous-Time Systems

First, we use the IFWM approach to design a delay-dependent H∞ filter for continuous-time systems with a time-varying delay.

9.1.1 Problem Formulation

Consider the following system with a time-varying delay:
\[
\begin{cases}
\dot{x}(t) = Ax(t) + A_d x(t-d(t)) + B\omega(t),\\
y(t) = Cx(t) + C_d x(t-d(t)) + D\omega(t),\\
z(t) = Lx(t) + L_d x(t-d(t)),\\
x(t) = \phi(t), \quad t \in [-h, 0],
\end{cases} \tag{9.1}
\]
where x(t) ∈ R^n is the state vector; y(t) ∈ R^r is the vector of the measured state; ω(t) ∈ R^m is a noise signal vector belonging to L_2[0, ∞); z(t) ∈ R^p is the signal to be estimated; A, A_d, B, C, C_d, D, L, and L_d are constant matrices with appropriate dimensions; the delay, d(t), is a time-varying differentiable function satisfying
\[
0 \le d(t) \le h \tag{9.2}
\]
and
\[
\dot{d}(t) \le \mu, \tag{9.3}
\]
where h > 0 and μ are constants; and the initial condition, φ(t), is a continuously differentiable initial function of t ∈ [−h, 0].
The aim of this section is to design a full-order, linear, time-invariant, asymptotically stable filter for system (9.1). The state-space realization of the filter has the form
\[
\begin{cases}
\dot{\hat{x}}(t) = A_F \hat{x}(t) + B_F y(t),\\
\hat{z}(t) = C_F \hat{x}(t) + D_F y(t),\\
\hat{x}(0) = 0,
\end{cases} \tag{9.4}
\]
where A_F ∈ R^{n×n}, B_F ∈ R^{n×r}, C_F ∈ R^{p×n}, and D_F ∈ R^{p×r} are filter parameters to be determined. Denote
\[
\zeta(t) = \begin{bmatrix} x(t)\\ \hat{x}(t) \end{bmatrix}, \qquad e(t) = z(t) - \hat{z}(t). \tag{9.5}
\]
Then, the filtering-error dynamics of system (9.1) are
\[
\begin{cases}
\dot{\zeta}(t) = \bar{A}\zeta(t) + \bar{A}_d E\zeta(t-d(t)) + \bar{B}\omega(t),\\
e(t) = \bar{C}\zeta(t) + \bar{C}_d E\zeta(t-d(t)) + \bar{D}\omega(t),\\
\zeta(t) = [\phi^T(t)\;\; 0]^T, \quad t \in [-h, 0],
\end{cases} \tag{9.6}
\]
where
\[
\bar{A} = \begin{bmatrix} A & 0\\ B_F C & A_F \end{bmatrix},\quad
\bar{A}_d = \begin{bmatrix} A_d\\ B_F C_d \end{bmatrix},\quad
\bar{B} = \begin{bmatrix} B\\ B_F D \end{bmatrix},\quad
E = [\,I \;\; 0\,],
\]
\[
\bar{C} = [\,L - D_F C \;\; -C_F\,],\quad
\bar{C}_d = L_d - D_F C_d,\quad
\bar{D} = -D_F D.
\]
The H∞ filtering problem addressed in this section is stated as follows: Given scalars h > 0, μ, and γ > 0, find a full-order, linear, time-invariant, asymptotically stable filter with a state-space realization of the form (9.4) for system (9.1) such that, for any delay, d(t), satisfying (9.2) and (9.3),
(1) filtering-error system (9.6) with ω(t) = 0 is asymptotically stable; and
(2) the H∞ performance
\[
\|e(t)\|_2 \le \gamma \|\omega(t)\|_2 \tag{9.7}
\]
is guaranteed under zero-initial conditions for all nonzero ω(t) ∈ L_2[0, ∞) and a given γ > 0.
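The augmented matrices of (9.6) are assembled purely by block stacking, which the following NumPy sketch makes explicit. All numerical values are placeholders chosen only for illustration (they are not the book's example data), and the candidate filter matrices are arbitrary.

```python
import numpy as np

n, r, m, p = 2, 1, 1, 1                        # dims of x, y, omega, z

A  = np.array([[-2.0, 0.0], [0.0, -0.9]])      # placeholder plant data
Ad = np.array([[-1.0, 0.0], [-1.0, -1.0]])
B  = np.array([[0.5], [1.0]])
C  = np.array([[1.0, 0.0]]); Cd = np.zeros((r, n)); D = np.array([[1.0]])
L  = np.array([[1.0, 2.0]]); Ld = np.zeros((p, n))

AF = -np.eye(n)                                # placeholder filter (9.4)
BF = np.ones((n, r)); CF = np.array([[1.0, 1.0]]); DF = np.zeros((p, r))

E      = np.hstack([np.eye(n), np.zeros((n, n))])   # E = [I 0]
A_bar  = np.block([[A, np.zeros((n, n))],
                   [BF @ C, AF]])                   # \bar{A}
Ad_bar = np.vstack([Ad, BF @ Cd])                   # \bar{A}_d
B_bar  = np.vstack([B, BF @ D])                     # \bar{B}
C_bar  = np.hstack([L - DF @ C, -CF])               # \bar{C}
Cd_bar = Ld - DF @ Cd                               # \bar{C}_d
D_bar  = -DF @ D                                    # \bar{D}

# One evaluation of the error dynamics (9.6) at arbitrary test vectors:
zeta, zeta_d, w = np.ones(2 * n), np.zeros(2 * n), np.array([0.1])
zeta_dot = A_bar @ zeta + Ad_bar @ (E @ zeta_d) + B_bar @ w
e        = C_bar @ zeta + Cd_bar @ (E @ zeta_d) + D_bar @ w
print(zeta_dot, e)
```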
Denote Λ = [A Ad B C Cd D L Ld ] .
(9.8)
For a system with polytopic-type uncertainties, we consider the set of system matrices Λ ∈ Ω, where Ω is the real, convex, polytopic domain
\[
\Omega = \Big\{ \Lambda(\lambda) = \sum_{j=1}^{p} \lambda_j \Lambda_j, \;\; \sum_{j=1}^{p} \lambda_j = 1, \;\; \lambda_j \ge 0 \Big\}. \tag{9.9}
\]
Here, Λ_j = [A_j  A_{dj}  B_j  C_j  C_{dj}  D_j  L_j  L_{dj}], j = 1, 2, ..., p are constant matrices with appropriate dimensions, and λ_j, j = 1, 2, ..., p are time-invariant uncertainties.

9.1.2 H∞ Performance Analysis

This subsection considers the case where the set of system matrices, Λ, is fixed. For this case, the following theorem holds.

Theorem 9.1.1. Consider filtering-error system (9.6). Given scalars h > 0, μ, and γ > 0, the system is asymptotically stable and (9.7) is satisfied under zero-initial conditions for all nonzero ω(t) ∈ L_2[0, ∞) if there exist matrices $P = \begin{bmatrix} P_1 & P_2\\ * & P_3 \end{bmatrix} > 0$, Q_i ≥ 0, i = 1, 2, Z > 0, and X ≥ 0, and any appropriately dimensioned matrices N and M such that the following matrix inequalities hold:
\[
\Phi(\Lambda) = \begin{bmatrix}
\Phi_1 + \Phi_2 + \Phi_2^T + hX & h\Phi_3^T Z & \Phi_4^T\\
* & -hZ & 0\\
* & * & -I
\end{bmatrix} < 0, \tag{9.10}
\]
\[
\Psi_1 = \begin{bmatrix} X & N\\ * & Z \end{bmatrix} \ge 0, \tag{9.11}
\]
\[
\Psi_2 = \begin{bmatrix} X & M\\ * & Z \end{bmatrix} \ge 0, \tag{9.12}
\]
where
\[
\Phi_1 = \begin{bmatrix}
\Phi_{11} & \Phi_{12} & \Phi_{13} & 0 & \Phi_{15}\\
* & \Phi_{22} & \Phi_{23} & 0 & \Phi_{25}\\
* & * & \Phi_{33} & 0 & 0\\
* & * & * & -Q_1 & 0\\
* & * & * & * & -\gamma^2 I
\end{bmatrix},\qquad
\Phi_2 = [\,N \;\; 0 \;\; M - N \;\; -M \;\; 0\,],
\]
\[
\begin{aligned}
\Phi_{11} &= P_1 A + A^T P_1 + P_2 B_F C + C^T B_F^T P_2^T + Q_1 + Q_2,\\
\Phi_{12} &= P_2 A_F + A^T P_2 + C^T B_F^T P_3,\\
\Phi_{13} &= P_1 A_d + P_2 B_F C_d,\qquad \Phi_{15} = P_1 B + P_2 B_F D,\\
\Phi_{22} &= P_3 A_F + A_F^T P_3,\qquad \Phi_{23} = P_2^T A_d + P_3 B_F C_d,\\
\Phi_{25} &= P_2^T B + P_3 B_F D,\qquad \Phi_{33} = -(1-\mu) Q_2,
\end{aligned}
\]
\[
\Phi_3 = [\,A \;\; 0 \;\; A_d \;\; 0 \;\; B\,],\qquad
\Phi_4 = [\,L - D_F C \;\; -C_F \;\; L_d - D_F C_d \;\; 0 \;\; -D_F D\,].
\]
Proof. Choose the Lyapunov-Krasovskii functional candidate to be
\[
V(t, \zeta_t) = \zeta^T(t) P \zeta(t) + \int_{t-h}^{t} x^T(s) Q_1 x(s)\,ds + \int_{t-d(t)}^{t} x^T(s) Q_2 x(s)\,ds + \int_{-h}^{0}\!\!\int_{t+\theta}^{t} \dot{x}^T(s) Z \dot{x}(s)\,ds\,d\theta, \tag{9.13}
\]
where P > 0, Q_i ≥ 0, i = 1, 2, and Z > 0 are to be determined.
From the Newton-Leibnitz formula, the following equations are true for any matrices N and M with appropriate dimensions:
\[
0 = 2\eta^T(t) N \Big[ x(t) - x(t-d(t)) - \int_{t-d(t)}^{t} \dot{x}(s)\,ds \Big], \tag{9.14}
\]
\[
0 = 2\eta^T(t) M \Big[ x(t-d(t)) - x(t-h) - \int_{t-h}^{t-d(t)} \dot{x}(s)\,ds \Big], \tag{9.15}
\]
where
\[
\eta(t) = \big[ x^T(t),\; \hat{x}^T(t),\; x^T(t-d(t)),\; x^T(t-h),\; \omega^T(t) \big]^T.
\]
On the other hand, for any matrix X ≥ 0, the following equation is true:
t
η T (t)Xη(t)ds −
0= t−h
T
= hη (t)Xη(t) −
t
η T (t)Xη(t)ds
t−h t−d(t) T
η (t)Xη(t)ds −
t−h
t
η T (t)Xη(t)ds. (9.16)
t−d(t)
In addition, the following equation is also true: −
t
T
x˙ (s)Z x(s)ds ˙ =−
t−h
t
T
x˙ (s)Z x(s)ds ˙ −
t−d(t)
t−d(t)
x˙ T (s)Z x(s)ds. ˙
t−h
(9.17) Calculating the derivative of V (t, ζt ) along the solutions of system (9.6), adding the right sides of (9.14)-(9.16) to it, and using (9.17) yield ˙ + xT (t)Q1 x(t) − xT (t − h)Q1 x(t − h) + xT (t)Q2 x(t) V˙ (t, ζt ) = 2ζ T (t)P ζ(t) T ˙ −(1 − d(t))x (t − d(t))Q2 x(t − d(t)) + hx˙ T (t)Z x(t) ˙ t − x˙ T (s)Z x(s)ds ˙ t−h T
˙ + xT (t)(Q1 + Q2 )x(t) − xT (t − h)Q1 x(t − h) 2ζ (t)P ζ(t) −(1 − μ)xT (t − d(t))Q2 x(t − d(t)) + hx˙ T (t)Z x(t) ˙ t−d(t) t x˙ T (s)Z x(s)ds ˙ − x˙ T (s)Z x(s)ds ˙ − t−d(t)
t−h
T
t
+2η (t)N x(t)−x(t−d(t))−
x(s)ds ˙ t−d(t)
T
+2η (t)M x(t − d(t)) − x(t − h) − −
t−d(t)
t−h
η T (t)Xη(t)ds −
t−d(t)
x(s)ds ˙ + hη T (t)Xη(t)
t−h t
η T (t)Xζ(t)ds.
(9.18)
t−d(t)
Thus, V˙ (t, ζt ) + eT (t)e(t) − γ 2 ω T (t)ω(t) T T = η T (t) Φ1 + Φ2 + ΦT 2 + hX + hΦ3 ZΦ3 + Φ4 Φ4 η(t) t−d(t) t ξ T (t, s)Ψ1 ξ(t, s)ds − ξ T (t, s)Ψ2 ξ(t, s)ds, − t−d(t)
t−h
(9.19)
T where ξ(t, s) = η T (t), x˙ T (s) . T Therefore, if Ψi 0, i = 1, 2, and Φ1 + Φ2 + ΦT 2 + hX + hΦ3 ZΦ3 + T Φ4 Φ4 < 0, which is equivalent to (9.10) by the Schur complement, then
V˙ (t, ζt ) + eT (t)e(t) − γ 2 ω T (t)ω(t) < 0. Following an argument similar to the one in [15] ensures that (9.7) holds. On the other hand, (9.10)-(9.12) imply that (9.20)-(9.22) hold, which guarantees that V˙ (t, ζt ) < −εζ(t)2 for a sufficiently small ε > 0 and thus that error system (9.6) with ω(t) = 0 is asymptotically stable. ⎡ ⎤ ˆ hΦˆT Z Φˆ1 + Φˆ2 + ΦˆT + hX 2 3 ⎣ ⎦ < 0, (9.20) ∗ −hZ ⎡ ⎤ ˆ N ˆ X ⎦ 0, (9.21) Ψˆ1 = ⎣ ∗ Z ⎡ ⎤ ˆ M ˆ X ⎦ 0, (9.22) Ψˆ2 = ⎣ ∗ Z where
⎤
⎡ Φ11 Φ12 Φ13
⎢ ⎢ ⎢ ∗ ˆ Φ1 = ⎢ ⎢ ⎢ ∗ ⎣ ∗ ˆ 0 Φˆ2 = N Φˆ3 = [A 0
0
⎥ ⎥ 0 ⎥ ⎥, ⎥ Φ33 0 ⎥ ⎦ ∗ −Q1 ˆ −M ˆ , −N
Φ22 Φ23 ∗ ∗ ˆ M
Ad 0],
ˆ M ˆ , and X ˆ 0 are decision variables with appropriate dimensions. and N, This completes the proof.
Remark 9.1.1. Regarding the design of a full-order filter for a single delay in [15], where s = 1, F = 0, and G = 0, it is easy to see that Theorem 9.1.1 in this section reduces to Theorem 1 in [15] for μ = 0 if we make the following assignments: ⎡ ⎡ ⎤ ⎤ X1 0 0 0 0 Y1 ⎢ ⎢ ⎥ ⎥ ⎢ ⎢ ⎥ ⎥ ⎢ 0 0 0 0 0 ⎥ ⎢ 0 ⎥ ⎢ ⎢ ⎥ ⎥ ⎢ ⎢ ⎥ ⎥ X = ⎢ 0 0 0 0 0 ⎥ , N = ⎢ 0 ⎥ , M = 0, Q1 = εI, ⎢ ⎢ ⎥ ⎥ ⎢ ⎢ ⎥ ⎥ ⎢ 0 0 0 0 0⎥ ⎢ 0 ⎥ ⎣ ⎣ ⎦ ⎦ 0 0 0 0 0 0 where ε > 0 is a sufficiently small scalar.
Remark 9.1.2. Often there is either no information on the derivative of a delay, or the function d(t) is continuous but not differentiable. For these situations, we can carry out a delay-dependent and rate-independent performance analysis for a delay satisfying (9.2), but not necessarily (9.3), by choosing Q2 = 0 in Theorem 9.1.1. Note that the method in [15] cannot handle this case. 9.1.3 Design of H∞ Filter This subsection uses Theorem 9.1.1 to solve the H∞ filter synthesis problem. Theorem 9.1.2. Consider filtering-error system (9.6). For given scalars h > 0, μ, and γ > 0, if there exist matrices P1 > 0, V > 0, Qi 0, i = 1, 2, ¯ 0, and any appropriately dimensioned matrices N ¯, M ¯ , A¯F , Z > 0, and X ¯ ¯ ¯ BF , CF , and DF such that the following LMIs hold: ⎡ ⎤ ¯ hΦ¯T Z Φ ¯T Φ¯1 + Φ¯2 + Φ¯T + h X 2 3 4 ⎥ ⎢ ⎢ ⎥ ¯ (9.23) Φ(Λ) = ⎢ ∗ −hZ 0 ⎥ < 0, ⎣ ⎦ ∗ ∗ −I ⎡ Ψ¯1 = ⎣ ⎡ Ψ¯2 = ⎣
¯ N ¯ X ∗ Z ¯ M ¯ X ∗
Z
⎤ ⎦ 0,
(9.24)
⎤ ⎦ 0,
(9.25)
P1 − V > 0, where
⎡
¯13 Φ¯11 Φ¯12 Φ
(9.26)
Φ¯15
0
⎤
⎢ ⎥ ⎢ ⎥ ¯23 0 ⎢ ∗ Φ¯22 Φ Φ¯25 ⎥ ⎢ ⎥ ⎢ ⎥ ¯33 0 Φ¯1 = ⎢ ∗ ∗ Φ 0 ⎥, ⎢ ⎥ ⎢ ⎥ ⎢ ∗ 0 ⎥ ∗ ∗ −Q1 ⎣ ⎦ ∗ ∗ ∗ ∗ −γ 2 I ¯ 0 M ¯ −N ¯ −M ¯ 0 , Φ¯2 = N ¯F C + C T B ¯ T + Q1 + Q2 , Φ¯11 = P1 A + AT P1 + B F T T ¯T ¯ ¯ Φ12 = AF + A V + C B , F
¯F Cd , Φ¯13 = P1 Ad + B ¯F D, Φ¯15 = P1 B + B ¯ ¯ ¯ Φ22 = AF + AT F, ¯F Cd , Φ¯23 = V Ad + B ¯ ¯ Φ25 = V B + BF D, Φ¯33 = −(1 − μ)Q2 , Φ¯3 = [A 0 Ad 0 B], ¯ F C − C¯F Ld − D ¯ F Cd 0 − D ¯F D , Φ¯4 = L − D then the system is asymptotically stable, the bound on the H∞ noise attenuation level is γ, and either of the following is a suitable filter of the form (9.4): ¯F ¯F , CF = C¯F , DF = D AF = V −1 A¯F , BF = V −1 B
(9.27)
¯F , CF = C¯F V −1 , DF = D ¯F . AF = A¯F V −1 , BF = B
(9.28)
or
⎤
⎡ Proof. For P = ⎣
P1 P2 ∗ P3
⎦ > 0 defined in Theorem 9.1.1, define
J1 = diag {I, P2 P3−1 , I, I, I}, J2 = diag {J1 , I, I}, J3 = diag {J1 , I}. (9.29) Pre- and post-multiply Φ(Γ ) by J2 and J2T , respectively; pre- and postmultiply Ψi , i = 1, 2 by J3 and J3T , respectively; and define the following new variables: ⎧ ⎨N ¯ = J1 N, M ¯ = J1 M, X ¯ = J1 XJ1T , V = P2 P −1 P2T , 3 −1 ⎩ A¯ = P A P −1 P T , B ¯ = P B , C¯ = C P P T , D ¯ =D . F
2
F
3
2
F
2
F
F
F
3
2
F
F
(9.30) Then, LMIs (9.10)-(9.12) are equivalent ⎡ ⎤to LMIs (9.23)-(9.25). In addition, P1 P2 ⎦ > 0 is equivalent to by the Schur complement, P = ⎣ ∗ P3 0 < P1 − P2 P3−1 P2T = P1 − V.
(9.31)
Since V = P2 P3−1 P2T > 0, P2 is that the following holds: ⎡ ⎤⎡ ⎤ ⎡ AF BF A¯ P2−1 0 ⎣ ⎦⎣ F ⎦=⎣ CF DF 0 I C¯F
nonsingular. Thus, using (9.30), we find
¯F B ¯F D
⎤⎡ ⎦⎣
P2−T P3 0 0
⎤ ⎦.
(9.32)
I
However, P2 and P3 cannot be derived from the solutions of LMIs (9.23)(9.26). Just as in [15], let the filter transfer function from y(t) to zˆ(t) be Tzˆy (s) = CF (sI − AF )−1 BF + DF .
(9.33)
Replacing the filter matrices with (9.32) and considering the relationship V = P2 P3−1 P2T yield Tzˆy (s) = CF (sI − AF )−1 BF + DF ¯F ¯F + D = C¯F P2−T P3 (sI − P2−1 A¯F P2−T P3 )−1 P2−1 B ¯F ¯F + D = C¯F (sV − A¯F )−1 B ¯F ¯F + D = C¯F (sI − V −1 A¯F )−1 V −1 B ¯F . ¯F + D = C¯F V −1 (sI − A¯F V −1 )−1 B In this way, the state-space realization of filter (9.27) or (9.28) is readily established. This completes the proof.
Remark 9.1.3. Proposition 11 in [19] can be derived from the above theorem by setting G in that paper to 0, making the following assignments, and using ¯ = πT , M ¯ = 0, X ¯ = π T Z −1 π2 , V = U , A¯F = N1 , the Schur complement: N 2 2 ¯F = N2 , C¯F = N3 , D ¯ F = N4 , and Q1 = εI (where ε > 0 is a sufficiently B ¯ and M ¯ provide extra freedom in the choice of these small scalar). Thus, N variables. For polytopic-type uncertainties, we have the following corollary. Corollary 9.1.1. Consider filtering-error system (9.6) with polytopic-type uncertainties (9.9). For given scalars h > 0, μ, and γ > 0, if there exist ¯ 0, and any matrices P1 > 0, V > 0, Qi 0, i = 1, 2, Z > 0, and X ¯ ¯ ¯ ¯ ¯ ¯ F such that appropriately dimensioned matrices N , M , AF , BF , CF , and D LMIs (9.24)-(9.26) and the following LMI hold: ¯ j ) < 0, j = 1, 2, · · · , p, Φ(Λ
(9.34)
¯ j ) is derived by replacing Λ with Λj in Φ(Λ), ¯ where Φ(Λ which is defined in (9.23), then the system is robustly stable, the bound on the H∞ noise attenuation level is γ, and either (9.27) or (9.28) constitute a suitable filter of the form (9.4). 9.1.4 Numerical Examples The next two examples demonstrate the benefits of the method explained above. Example 9.1.1. Consider system (9.1) with ⎡
⎤
⎡ ⎤ 0 ⎦ , Ad = ⎣ ⎦, B = ⎣ ⎦, A=⎣ 0 −0.9 −1 −1 1 C = 1 0 , Cd = 0 0 , D = 1, L = 1 2 , Ld = 0 0 . −2
⎡
0
−1
⎤
0
As mentioned in [19], the method in [11] fails for this example. Table 9.1 lists the results obtained by the method in this section along with those in [12] and [19] for h = 1 and various μ. Our method yields a smaller H∞ performance, γmin .
Table 9.1. Minimum γ for h = 1 and various μ (Example 9.1.1)

  Method           μ = 0.4    μ = 0.8
  [12]             1.8311     15.8414
  [19]             1.6837     2.6813
  Theorem 9.1.2    1.4103     1.4103
Example 9.1.2. Consider system (9.1) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ −0.5 0.5 + σ −0.5 −2 0 ⎦, B = ⎣ ⎦, ⎦ , Ad = ⎣ A=⎣ 0 −0.5 1 0 −0.9 + ρ C = 1 1 , Cd = 0.5 1 , D = 1, L = 0.5 2 , Ld = 0 0 , ρ < 0.8, σ < 0.2.
Table 9.2. Minimum γ for various h and μ (Example 9.1.2)

  h      μ            [15]       [19]      Corollary 9.1.1
  1      0            2.7572     2.1997    2.1997
         0.2          4.5970     2.2623    2.2308
         0.4          15.1919    2.3204    2.2490
         unknown μ    —          2.4464    2.2493
  1.2    0            4.9048     3.8686    3.8686
         0.2          11.0202    4.1161    3.9846
         0.4          —          4.3208    4.0278
         unknown μ    —          4.5669    4.0470
The uncertainties can be expressed as polytopic-type ones [24]. Table 9.2 compares the results in [15, 19] with those obtained by Corollary 9.1.1. Our results are clearly better.
9.2 H∞ Filter Design for Discrete-Time Systems

In this section, we use the IFWM approach to design a delay-dependent H∞ filter for discrete-time systems with a time-varying delay.

9.2.1 Problem Formulation

Consider the following system with a time-varying delay:
\[
\begin{cases}
x(k+1) = Ax(k) + A_d x(k-d(k)) + B\omega(k),\\
y(k) = Cx(k) + C_d x(k-d(k)) + D\omega(k),\\
z(k) = Lx(k) + L_d x(k-d(k)) + G\omega(k),\\
x(k) = \phi(k), \quad k = -d_2, -d_2+1, \cdots, 0,
\end{cases} \tag{9.35}
\]
where x(k) ∈ R^n is the state vector; y(k) ∈ R^m is the vector of the measured state; ω(k) ∈ R^q is a noise signal vector belonging to l_2[0, +∞); z(k) ∈ R^p is the signal to be estimated; A, A_d, B, C, C_d, D, L, L_d, and G are constant matrices with appropriate dimensions; d(k) is a time-varying delay satisfying
\[
d_1 \le d(k) \le d_2, \tag{9.36}
\]
where d1 and d2 are known positive integers and d12 = d2 − d1 ; and φ(k), k = −d2 , −d2 + 1, · · · , 0 is a known given initial condition. The aim of this section is to design a full-order, linear, asymptotically stable filter for system (9.35). The state-space realization of the filter has the form ⎧ ⎪ ⎪ x ˆ(k + 1) = AF x ˆ(k) + BF y(k), ⎪ ⎨ (9.37) ˆ(k) + DF y(k), zˆ(k) = CF x ⎪ ⎪ ⎪ ⎩x ˆ(0) = 0, where AF ∈ Rn×n , BF ∈ Rn×m , CF ∈ Rp×n , and DF ∈ Rp×m are filter parameters to be determined. Denote ⎡ ⎤ x(k) ⎦ , e(k) = z(k) − zˆ(k). ζ(k) = ⎣ (9.38) x ˆ(k) Then, the filtering-error dynamics of system (9.35) are ⎧ ⎪ ¯ ¯ ⎪ ζ(k + 1) = Aζ(k) + A¯d Eζ(k − d(k)) + Bω(k), ⎪ ⎨ ¯ ¯ e(k) = Cζ(k) + C¯d Eζ(k − d(k)) + Dω(k), ⎪ ⎪ ⎪ ⎩ ζ(k) = [φT (k) 0]T , k = −d , −d + 1, · · · , 0, 2 2 where
⎤
⎡
A¯ = ⎣
A
0
BF C AF
⎡
⎦, A¯d = ⎣
⎤ Ad BF Cd
⎤
⎡
¯ =⎣ ⎦, B
(9.39)
B
⎦,
BF D
C¯ = L − DF C −CF , C¯d = Ld − DF Cd , ¯ = G − DF D, E = I 0 . D The H∞ filtering problem addressed in this section is stated as follows: Given integers d2 d1 > 0 and a scalar γ > 0, find a full-order, linear, timeinvariant filter for system (9.35) with a state-space realization of the form (9.37) such that, for any delay, d(k), satisfying (9.36), (1) the filtering-error system (9.39) with ω(k) = 0 is asymptotically stable; and (2) the H∞ performance e2 γω2
(9.40)
is guaranteed under zero-initial conditions for all nonzero ω(k) ∈ l2 [0, +∞) and a given γ > 0.
Denote Λ = [A Ad B C Cd D L Ld G] .
(9.41)
For a system with polytopic-type uncertainties, we consider the set of system matrices, Λ ∈ Ω, where Ω is the real, convex, polytopic domain ⎫ ⎧ p p ⎬ ⎨ λj Λj , λj = 1, λj 0. (9.42) Ω = Λ(λ) = ⎭ ⎩ j=1
j=1
Here, Λj = [Aj Adj Bj Cj Cdj Dj Lj Ldj Gj ] , j = 1, 2, · · · , p are constant matrices with appropriate dimensions and λj , j = 1, 2, · · · , p are timeinvariant uncertainties. 9.2.2 H∞ Performance Analysis This subsection considers the case where the set of system matrices, Λ, is fixed. For this case, the following theorem holds. Theorem 9.2.1. Consider filtering-error system (9.39). Given integers d2 d1 > 0 and a scalar γ > 0, the system is asymptotically stable and (9.40) satisfied under zero-initial conditions for all nonzero ω(k) ∈ l2 [0, +∞) there exist ⎤ matrices P ⎡> 0, Qi ⎤ 0, i = 1, 2, 3, Zj > 0, j = 1, 2, X ⎡ ⎣
X11 X12 ∗
X22
⎦ 0, Y = ⎣
Y11 Y12 ∗
Y22
is if =
⎦ 0, and any appropriately dimensioned
T T T such matrices N = N1T N2T , M = M1T M2T , and S = S1T S2T that the following matrix inequalities hold: ⎡ Φ1
⎢ ⎢ ⎢ ∗ ⎢ ⎢ Φ=⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
⎤ √ √ T T d2 ΦT d12 ΦT 2 Z1 2 Z2 Φ3 Φ4 P ⎥ ⎥ −Z1 0 0 0 ⎥ ⎥ ⎥ 0 0 ⎥ < 0, ∗ −Z2 ⎥ ⎥ ∗ ∗ −I 0 ⎥ ⎦ ∗ ∗ ∗ −P
⎡ Ψ1 = ⎣
(9.43)
⎤ X N ∗ Z1
⎦ 0,
(9.44)
9.2 H∞ Filter Design for Discrete-Time Systems
⎡ Ψ2 = ⎣ ⎡ Ψ3 = ⎣ where
⎡
191
⎤ Y
S
∗ Z2
⎦ 0,
(9.45) ⎤
X +Y
M
∗
Z1 + Z2
⎦ 0,
Φ11 Φ12 E T S1 −E T M1
(9.46)
⎤ 0
⎢ ⎥ ⎢ ⎥ ⎢ ∗ Φ22 S2 −M2 0 ⎥ ⎢ ⎥ ⎢ ⎥ Φ1 = ⎢ ∗ ∗ −Q1 0 0 ⎥, ⎢ ⎥ ⎢ ⎥ ⎢ ∗ 0 ⎥ ∗ ∗ −Q2 ⎣ ⎦ ∗ ∗ ∗ ∗ −γ 2 I Φ11 = E T [Q1 + Q2 + (d12 + 1)Q3 + N1 + N1T + d2 X11 + d12 Y11 ]E − P , Φ12 = E T N2T − N1 + M1 − S1 + d2 X12 + d12 Y12 , Φ22 = −Q3 − N2 − N2T + M2 + M2T − S2 − S2T + d2 X22 + d12 Y22 , Φ2 = [(A − I)E Ad 0 0 B], ¯ , Φ3 = C¯ C¯d 0 0 D ¯ , Φ4 = A¯ A¯d 0 0 B d12 = d2 − d1 . Proof. Let η(l) = x(l + 1) − x(l).
(9.47)
Then, x(k + 1) = x(k) + η(k),
(9.48)
η(k) = x(k + 1) − x(k) = (A − I)x(k) + Ad x(k − d(k)) + B1 ω(k) = (A − I)Eζ(k) + Ad x(k − d(k)) + B1 ω(k) = Φ2 ξ1 (k),
(9.49)
where ξ1 (k) = [ζ T (k), xT (k − d(k)), xT (k − d1 ), xT (k − d2 ), ω T (k)]T . Choose the Lyapunov function candidate to be V (k) = V1 (k) + V2 (k) + V3 (k) + V4 (k),
(9.50)
192
9. H∞ Filter Design for Systems with Time-Varying Delay
where V1 (k) = ζ T (k)P ζ(k), V2 (k) =
0
k−1
θ=−d2 +1 l=k−1+θ
V3 (k) =
k−1 l=k−d1
V4 (k) =
k−1
η T (l)Z2 η(l),
θ=−d2 +1 l=k−1+θ k−1
xT (l)Q1 x(l) +
−d 1 +1
−d1
η T (l)Z1 η(l) +
xT (l)Q2 x(l),
l=k−d2 k−1
xT (l)Q3 x(l);
θ=−d2 +1 l=k−1+θ
and P > 0, Qi 0, i = 1, 2, 3, and Zj > 0, j = 1, 2 are to be determined. Defining ΔV (k) = V (k + 1) − V (k) yields ΔV (k) = ΔV1 (k) + ΔV2 (k) + ΔV3 (k) + ΔV4 (k), where ΔV1 (k) = ζ T (k + 1)P ζ(k + 1) − ζ T (k)P ζ(k) T = ξ1T (k)ΦT 4 P Φ4 ξ1 (k) − ζ (k)P ζ(k), T
ΔV2 (k) = d2 η (k)Z1 η(k) −
k−1
η T (l)Z1 η(l)
l=k−d2
+d12 η T (k)Z2 η(k) −
k−d 1 −1
η T (l)Z2 η(l),
l=k−d2 k−1
= ξ1T (k)ΦT 2 (d2 Z1 + d12 Z2 )Φ2 ξ1 (k) −
η T (l)Z1 η(l)
l=k−d(k) k−d 1 −1
−
k−d(k)−1
η T (l)Z2 η(l) −
η T (l)(Z1 + Z2 )η(l),
l=k−d2
l=k−d(k) T
T
ΔV3 (k) = x (k)(Q1 + Q2 )x(k) − x (k − d1 )Q1 x(k − d1 ) −xT (k − d2 )Q2 x(k − d2 ) = ζ T (k)E T (Q1 + Q2 )Eζ(k) − xT (k − d1 )Q1 x(k − d1 ) −xT (k − d2 )Q2 x(k − d2 ), T
ΔV4 (k) = (d2 − d1 + 1)x (k)Q3 x(k) −
k−d 1
xT (l)Q3 x(l)
l=k−d2
(d12 + 1)ζ T (k)E T Q3 Eζ(k) − xT (k − d(k))Q3 x(k − d(k)). From (9.47), we have
9.2 H∞ Filter Design for Discrete-Time Systems k−1
0 = x(k) − x(k − d(k)) −
193
η(l)
l=k−d(k)
= Eζ(k) − x(k − d(k)) −
k−1
η(l),
l=k−d(k)
k−d(k)−1
0 = x(k − d(k)) − x(k − d2 ) −
η(l),
l=k−d2
0 = x(k − d1 ) − x(k − d(k)) −
k−d 1 −1
η(l).
l=k−d(k)
T Then, the following equations hold for any matrices N = N1T N2T , M = T T with appropriate dimensions: M1T M2T , and S = S1T S2T ⎡ T 0 = 2 x (k)N1 + xT (k − d(k))N2 ⎣Eζ(k) − x(k − d(k)) −
k−1
⎤ η(l)⎦
l=k−d(k)
= 2 ζ T (k)E T N1 + xT (k − d(k))N2 ⎤ ⎡ k−1 η(l)⎦ , × ⎣Eζ(k) − x(k − d(k)) −
(9.51)
l=k−d(k)
0 = 2 ζ T (k)E T M1 + xT (k − d(k))M2 ⎤ ⎡ k−d(k)−1 η(l)⎦ , × ⎣x(k − d(k)) − x(k − d2 ) −
(9.52)
l=k−d2
0 = 2 ζ T (k)E T S1 + xT (k − d(k))S2 ⎤ ⎡ k−d 1 −1 η(l)⎦ . × ⎣x(k − d1 ) − x(k − d(k)) − l=k−d(k)
⎣
⎤
⎡
On the other hand, for any matrices X = ⎣
X11 X12
⎤
⎡ Y11 Y12 ∗
Y22
(9.53)
⎦ 0, the following equations are true:
∗
X22
⎦ 0 and Y =
194
9. H∞ Filter Design for Systems with Time-Varying Delay
0=
k−1
k−1
ξ2T (k)Xξ2 (k) −
l=k−d2
ξ2T (k)Xξ2 (k)
l=k−d2 k−1
= d2 ξ2T (k)Xξ2 (k)−
0=
ξ2T (k)Y ξ2 (k)
l=k−d2
−
ξ1T (k)Xξ2 (k),
(9.54)
l=k−d2
l=k−d(k) k−d 1 −1
k−d(k)−1
ξ2T (k)Xξ2 (k)− k−d 1 −1
ξ2T (k)Y ξ2 (k)
l=k−d2
= d12 ξ2T (k)Y ξ2 (k)−
k−d 1−1
k−d(k)−1
ξ2T (k)Y ξ2 (k)−
ξ2T (k)Y ξ2 (k),
(9.55)
l=k−d2
l=k−d(k)
where T T ξ2 (k) = xT (k), xT (k − d(k)) = ζ T (k)E T , xT (k − d(k)) . Adding the right sides of (9.51)-(9.55) to ΔV (k) yields ΔV (k) + eT (k)e(k) − γ 2 ω T (k)ω(k) T T ξ1T (k) Φ1 + ΦT 2 (d2 Z1 + d12 Z2 )Φ2 + Φ3 Φ3 + Φ4 P Φ4 ξ1 (k) −
k−1
ξ3T (k, l)Ψ1 ξ3 (k, l) −
l=k−d(k)
k−d 1 −1
ξ3T (k, l)Ψ2 ξ3 (k, l)
l=k−d(k)
k−d(k)−1
−
l=k−d2
ξ3T (k, l)Ψ3 ξ3 (k, l),
(9.56)
T where ξ3 (k, l) = ξ2T (k), η T (l) . T Thus, if Ψi 0, i = 1, 2, 3 and Φ1 + ΦT 2 (d2 Z1 + d12 Z2 )Φ2 + Φ3 Φ3 + T Φ4 P Φ4 < 0, which is equivalent to (9.43) by the Schur complement, then ΔV + eT (k)e(k) − γ 2 ω T (k)ω(k) < 0. If we follow an argument similar to the one in [20], this ensures that (9.40) holds under zero-initial conditions for all nonzero ω(k) ∈ L2 [0, +∞) and a given γ > 0. On the other hand, (9.43) implies that the following matrix inequality holds: ⎡ ⎤ √ √ ˆ T Z1 ˆT Φˆ1 d2 Φ d12 ΦˆT 2 2 Z2 Φ4 P ⎢ ⎥ ⎢ ⎥ ⎢ ∗ 0 0 ⎥ −Z1 ⎥ < 0, Φˆ = ⎢ ⎢ ⎥ ⎢ ∗ ∗ −Z2 0 ⎥ ⎣ ⎦ ∗ ∗ ∗ −P where
9.2 H∞ Filter Design for Discrete-Time Systems
⎡
Φ11 Φ12 E T S1 −E T M1
⎢ ⎢ ⎢ ∗ Φ22 Φˆ1 = ⎢ ⎢ ⎢ ∗ ∗ ⎣ ∗ ∗ Φˆ2 = [(A − I)E Φˆ4 = A¯ A¯d 0
S2 −Q1 ∗
195
⎤
⎥ ⎥ −M2 ⎥ ⎥, ⎥ ⎥ 0 ⎦ −Q2
Ad 0 0], 0 ,
thereby guaranteeing that ΔV (k) < 0, which means that error system (9.39) with ω(k) = 0 is asymptotically stable This completes the proof.
k−d(k)−1 Remark 9.2.1. The term − l=k−d2 η T (l)Zη(l), which was ignored in [20, 22], is retained in Theorem 9.2.1 to overcome the conservativeness of those methods. Furthermore, d2 , which was increased to 2d2 −d1 in [25], is separated into two parts (d(k), d2 − d(k)) in the procedure for proving Theorem 9.2.1. Just as for Theorem 2 in [16] and Proposition 2 in [20], (9.43) has an equivalent form, which we obtain by introducing the three variables H1 , H2 , and T : ⎡ ⎤ √ √ T T Φ1 d2 ΦT d12 ΦT 2 H1 2 H2 Φ3 Φ4 T ⎢ ⎥ ⎢ ⎥ ⎢ ∗ Λ1 0 0 0 ⎥ ⎢ ⎥ ⎢ ⎥ Φ˜ = ⎢ ∗ 0 0 ⎥ < 0, ∗ Λ2 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ⎥ ∗ ∗ −I 0 ⎣ ⎦ ∗ ∗ ∗ ∗ Λ3 where Λi = Zi − Hi − HiT , i = 1, 2, Λ3 = P − T − T T . We employ a parameter-dependent Lyapunov function to deal with error system (9.39) when it has polytopic-type uncertainties (9.42), which gives us the following corollary. Corollary 9.2.1. Consider filtering-error system (9.39) with polytopic-type uncertainties (9.42). Given integers d2 d1 > 0 and a scalar γ > 0, the system is robustly stable and (9.40) is satisfied under zero-initial conditions for all nonzero ω(k) ∈ l2 [0, +∞) if there exist matrices Pj > 0, Qij 0,
196
9. H∞ Filter Design for Systems with Time-Varying Delay
⎡ T Z1j > 0, Z2j = Z2j > 0, i = 1, 2, 3, j = 1, 2, · · · , p, X = ⎣
⎡ and Y = ⎣
⎤ Y11 Y12 ∗
Y22
⎤ X11 X12 ∗
X22
⎦ 0,
⎦ 0, and any appropriately dimensioned matrices Nj =
T T T T T T T T T , Mj = M1j , Sj = S1j , j = 1, 2, · · · , p, H1 , N2j M2j S2j N1j H2 , and T such that the following matrix inequalities hold for j = 1, 2, · · · , p: ⎡ T T T T ⎤ √ (j) √ (j) (j) (j) (j) Φ Φ Φ Φ Φ d H d H T⎥ 2 1 12 2 2 2 3 4 ⎢ 1 ⎢ ⎥ ⎢ ∗ ⎥ Λ1j 0 0 0 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ⎥ < 0, (9.57) ∗ Λ2j 0 0 ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ∗ ∗ −I 0 ⎢ ∗ ⎥ ⎣ ⎦ ∗ ∗ ∗ ∗ Λ3j
⎡ Ψ1j = ⎣
⎤ X Nj ∗ Z1j
⎡ Ψ2j = ⎣
⎦ 0,
(9.58)
⎤ Y Mj ∗ Z2j
⎦ 0,
(9.59)
⎡ Ψ3j = ⎣ where
(j)
Φ1
⎤ X +Y
Sj
∗
Z1j + Z2j
⎡
(j)
⎦ 0,
(j)
Φ11 Φ12 E T S1j −E T M1j
⎢ ⎢ ⎢ ∗ ⎢ ⎢ =⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
(j)
(9.60)
⎤ 0
Φ22
S2j
−M2j
0
∗
−Q1j
0
0
∗
∗
−Q2j
0
∗
∗
∗
−γ 2 I
(j)
(j)
(j)
(j)
⎥ ⎥ ⎥ ⎥ ⎥ ⎥, ⎥ ⎥ ⎥ ⎦
Φ11 = E T Ξ11 E − Pj , (j) T Ξ11 = Q1j + Q2j + (d12 + 1)Q3j + N1j + N1j + d2 X11 + d12 Y11 , Φ12 = E T Ξ12 , (j) T Ξ12 = N2j − N1j + M1j − S1j + d2 X12 + d12 Y12 , (j)
T T T + M2j + M2j − S2j − S2j + d2 X22 + d12 Y22 , Φ22 = −Q3j − N2j − N2j
9.2 H∞ Filter Design for Discrete-Time Systems
197
(j)
Φ2 = [(Aj − I)E Adj 0 0 Bj ], (j) ¯j , Φ3 = C¯j C¯dj 0 0 D (j) ¯j , Φ4 = A¯j A¯dj 0 0 B Λij = Zij − Hi − HiT , i = 1, 2, Λ3j = Pj − T − T T ; ¯j , C¯j , C¯dj , and D ¯ j are defined in the same way as A, ¯ A¯d , B, ¯ and A¯j , A¯dj , B ¯ C¯d , and D ¯ in (9.39) by replacing the elements in Λ with those in Λj . C, 9.2.3 Design of H∞ Filter This subsection uses Corollary 9.2.1 to solve the H∞ filter synthesis problem. Theorem 9.2.2. Consider filtering-error system (9.39). For ⎡ given integers ⎤ P1j P2j ⎦ > 0, d2 d1 > 0 and a scalar γ > 0, if there exist matrices P˜j = ⎣ ∗ P3j ⎡ ⎤ X11 X12 ⎦ Qij 0, Z1j > 0, Z2j > 0, i = 1, 2, 3, j = 1, 2, · · · , p, X = ⎣ ∗ X22 ⎡ ⎤ Y11 Y12 ⎦ 0, and any appropriately dimensioned matrices 0, and Y = ⎣ ∗ Y22 T T T T T T T T T , Mj = M1j , Sj = S1j , j = 1, 2, · · · , p, Nj = N1j N2j M2j S2j ¯F , C¯F , and D ¯ F such that LMIs (9.58)-(9.60) and H1 , H2 , T1 , V1 , V2 , A¯F , B the following LMI hold for j = 1, 2, · · · , p: ⎡ ⎤ (j) (j) (j) Ξ1 Ξ 2 Ξ 4 ⎢ ⎥ ⎢ ⎥ (9.61) ⎢ ∗ Ξ3(j) 0 ⎥ < 0, ⎣ ⎦ (j) ∗ ∗ Ξ5 where
⎡
(j)
Ξ1
⎢ ⎢ ⎢ ⎢ ⎢ ⎢ =⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
ˆ (j) −P2j Ξ (j) Ξ 11 12
S1j
−M1j
0
0
0
0
∗
−P3j
0
∗
∗
Φ22
S2j
−M2j
0
∗
∗
∗
−Q1j
0
0
∗
∗
∗
∗
−Q2j
0
∗
∗
∗
∗
∗
−γ 2 I
(j)
⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥, ⎥ ⎥ ⎥ ⎥ ⎥ ⎦
198
9. H∞ Filter Design for Systems with Time-Varying Delay
ˆ (j) = Ξ (j) − P1j , Ξ 11 11 T T T √ √ (j) (j) (j) (j) , Ξ2 = d2 Φ2 d12 Φ2 Φ3 H1 H2 # $ (j) Ξ3 = diag Z1j − H1 − H1T , Z2j − H2 − H2T , − I , ⎤ ⎡ ¯ T AT V1T + C T B ¯T AT T T + CjT B j j F F ⎥ ⎢ j 1 ⎥ ⎢ T T ¯ ¯ AF AF ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ T T T T T T T T ¯ ¯ ⎢ A T + C A V + C B B dj 1 dj F dj 1 dj F ⎥ (j) ⎥, Ξ4 = ⎢ ⎥ ⎢ ⎥ ⎢ 0 0 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ 0 0 ⎦ ⎣ ¯ T B T V T + DT B ¯T BjT T1T + DjT B j 1 j F F ⎤ T T P1j − T1 − T1 P2j − V1 − V2 ⎦, =⎣ ∗ P3j − V2T − V2T ⎡
(j)
Ξ5
(j)
(j)
(j)
and Ξ11 , Ξ12 , and Φ22 are defined in (9.57), then the system is robustly stable and (9.40) is satisfied under zero-initial conditions for all nonzero ω(k) ∈ l2 [0, +∞), and either of the following is a suitable filter of the form (9.37): ¯F ¯F , CF = C¯F , DF = D AF = V2−1 A¯F , BF = V2−1 B
(9.62)
¯F , CF = C¯F V −1 , DF = D ¯F . AF = A¯F V2−1 , BF = B 2
(9.63)
or
Proof. Let ⎡ ⎤ T1 T2 ⎦. T =⎣ T3 T4
(9.64)
Inequality (9.57) shows that T + T T > 0. Thus, T4 + T4T > 0, which implies that T4 is invertible. Define ⎡ ⎤ I 0 ⎦ , J2 = diag {J1 , I, I, I, I, I, I, I, J1 } . (9.65) J1 = ⎣ 0 T2 T4−1 Pre- and post-multiply (9.57) by J2 and J2T , respectively; and define the following new variables:
9.2 H∞ Filter Design for Discrete-Time Systems
⎧ ⎪ ⎪ ⎪ ⎨
⎡ V1 = T2 T4−1 T3 , V2 = T2 T4−1 T2T , P˜j = ⎣
199
⎤ P1j P2j ∗
P3j
⎦ = J1 Pj J1T ,
⎪ ⎪ ⎪ ⎩ A¯ = T A T −1 T T , B ¯ F = T2 BF , C¯F = CF T −1 T T , D ¯ F = DF . F 2 F 4 2 2 4
(9.66)
Thus, (9.57) is equivalent to (9.61). (j) On the other hand, Ξ5 < 0 implies that V2 = T2 T4−1 T2T > 0, which further implies that T2 is nonsingular. Thus, (9.66) yields the following: ⎤⎡ ⎤ ⎡ ⎤ ⎡ ⎤⎡ ¯F A¯F B AF BF T2−1 0 T2−T T4 0 ⎦⎣ ⎦. ⎣ ⎦=⎣ ⎦⎣ (9.67) ¯F CF DF 0 I 0 I C¯F D However, T2 and T4 cannot be derived from the solutions of LMIs (9.58)(9.61). Let the filter transfer function from y(k) to zˆ(k) be Tzˆy (z) = CF (zI − AF )−1 BF + DF .
(9.68)
Replacing the filter matrices with (9.67) and using the fact that V2 = T2 T4−1 T2T yield Tzˆy (z) = CF (zI − AF )−1 BF + DF ¯F ¯F + D = C¯F T −T T4 (zI −T −1 A¯F T −T T4 )−1 T −1 B 2
2
2
2
¯F ¯F + D = C¯F (zV2 − A¯F )−1 B ¯F ¯F + D = C¯F (zI − V2−1 A¯F )−1 V2−1 B ¯F . ¯F + D = C¯F V −1 (zI − A¯F V −1 )−1 B 2
2
Thus, the state-space realization of filter (9.62) or (9.63) is readily established. This completes the proof.
9.2.4 Numerical Example

The numerical example in this subsection demonstrates the benefits of the method described above.

Example 9.2.1. Consider system (9.35) with
\[
A=\begin{bmatrix} 0.9 & 0\\ 0 & 0.7+\phi \end{bmatrix},\quad
A_d=\begin{bmatrix} -0.1 & \sigma\\ -0.1 & -0.1 \end{bmatrix},\quad
B=\begin{bmatrix} 0\\ 1 \end{bmatrix},\quad
C=[\,1 \;\; 1\,],
\]
\[
C_d=[\,0.2 \;\; 0.5\,],\quad D=1,\quad L=[\,1 \;\; 2\,],\quad L_d=[\,0.5 \;\; 0.6\,],\quad G=-0.5,\quad |\phi| \le 0.2,\; |\sigma| \le 0.1.
\]
It is clear that the uncertainties can be expressed as polytopic-type uncertainties [24]. Table 9.3 compares the results in [20] with those obtained with Theorem 9.2.2. Our results are markedly better. Moreover, we can use (9.62) and (9.63) to calculate the filter parameters for given d_1, d_2, and γ. For example, when d_1 = 1, d_2 = 5, and γ = 3.5555, the parameters are
\[
A_F=\begin{bmatrix} 0.9895 & -0.1283\\ -0.0609 & 0.4409 \end{bmatrix},\quad
B_F=\begin{bmatrix} 0.2644\\ -0.3335 \end{bmatrix},\quad
C_F=[\,0.3418 \;\; -0.0955\,]\times 10^{-4},\quad
D_F=1.5359.
\]
In addition, when d_1 = 1 and d_2 = 2, we obtain a γ_min of 2.4219 by setting
\[
A_F=\begin{bmatrix} 0.7530 & -0.7934\\ 0.0074 & 0.5245 \end{bmatrix},\quad
B_F=\begin{bmatrix} -0.3697\\ -0.3217 \end{bmatrix},\quad
C_F=[\,0.0087 \;\; -0.2490\,],\quad
D_F=1.3264.
\]

Table 9.3. γmin for d2 = 5 and various d1 (Example 9.2.1)

  d1               1         2         3         4
  [20]             7.1709    5.4786    4.4587    3.7035
  Theorem 9.2.2    3.5555    3.5296    3.4973    3.4568
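The filter reported above for d1 = 1, d2 = 5, γ = 3.5555 can be exercised directly on the nominal plant (φ = σ = 0). The sketch below is an illustrative check only: the delay sequence and the finite-energy disturbance are arbitrary choices, and a single realization gives one sample ratio that, according to Theorem 9.2.2, should not exceed γ under zero initial conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal plant of Example 9.2.1 (phi = sigma = 0).
A  = np.array([[0.9, 0.0], [0.0, 0.7]])
Ad = np.array([[-0.1, 0.0], [-0.1, -0.1]])
B  = np.array([0.0, 1.0])
C  = np.array([1.0, 1.0]); Cd = np.array([0.2, 0.5]); D = 1.0
L  = np.array([1.0, 2.0]); Ld = np.array([0.5, 0.6]); G = -0.5

# Filter parameters quoted in the text for d1 = 1, d2 = 5, gamma = 3.5555.
AF = np.array([[0.9895, -0.1283], [-0.0609, 0.4409]])
BF = np.array([0.2644, -0.3335])
CF = 1e-4 * np.array([0.3418, -0.0955])
DF = 1.5359

d1, d2, N = 1, 5, 400
x = {k: np.zeros(2) for k in range(-d2, 1)}   # zero initial condition
xh = np.zeros(2)                              # filter state, xh(0) = 0
e_energy = w_energy = 0.0

for k in range(N):
    d = int(rng.integers(d1, d2 + 1))                    # d(k) in [d1, d2]
    w = 0.5 * np.exp(-0.01 * k) * np.sin(0.2 * k)        # finite-energy noise
    y = C @ x[k] + Cd @ x[k - d] + D * w
    z = L @ x[k] + Ld @ x[k - d] + G * w
    zh = CF @ xh + DF * y                                # filter output (9.37)
    xh = AF @ xh + BF * y
    x[k + 1] = A @ x[k] + Ad @ x[k - d] + B * w
    e_energy += (z - zh) ** 2
    w_energy += w ** 2

print("||e||_2 / ||omega||_2 =", np.sqrt(e_energy / w_energy), " (gamma = 3.5555)")
```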
9.3 Conclusion This chapter focuses on the design of H∞ filters for both continuous-time and discrete-time systems with a time-varying delay. The IFWM approach is first used to carry out a delay-dependent H∞ performance analysis for error systems. The resulting criteria are extended to systems with polytopic-type uncertainties. Then, based on the results of the performance analysis, H∞ filters are designed in terms of LMIs. Finally, numerical examples demonstrate that this method is less conservative than others.
References 1. A. Elsayed and M. J. Grimble. A new approach to H∞ design of optimal digital linear filters. IMA Journal of Mathematical Control and Information, 6(2): 233251, 1989.
References
201
2. C. E. de Souza, L. Xie, and Y. Wang. H∞ filtering for a class of uncertain nonlinear systems. Systems & Control Letters, 20(6): 419-426, 1993. 3. K. M. Nagpal and P. P. Khargonekar. Filtering and smoothing in an H∞ setting. IEEE Transactions on Automatic Control, 36(2): 152-166, 1991. 4. L. Xie, C. E. de Souza, and M. Fu. H∞ estimation for discrete-time linear uncertain systems. International Journal of Robust and Nonlinear Control, 1(6): 111-123, 1991. 5. H. Gao, J. Lam, and C. Wang. Mixed H2 /H∞ filtering for continuous-time polytopic systems: a parameter-dependent approach. Circuits, Systems & Signal Processing, 24(6): 1531-5878, 2005. 6. A. Pila, U. Shaked, and C. E. de Souza. H∞ filtering for continuous-time linear systems with delay. IEEE Transactions on Automatic Control, 44(7): 1412-1417, 1999. 7. Z. Wang and F. Yang. Robust filtering for uncertain linear systems with delayed states and outputs. IEEE Transactions on Circuits and Systems I, 49(1): 125130, 2002. 8. Z. Wang and D. W. C. Ho. Filtering on nonlinear time-delay stochastic systems. Automatica, 39(1): 101-109, 2003. 9. Z. Wang, D. W. C. Ho, and X. Liu. Robust filtering under randomly varying sensor delay with variance constraints. IEEE Transactions on Circuits and Systems II, 51(6): 320-326, 2004. 10. Z. Wang, F. Yang, D. W. C. Ho, and X. Liu. Robust H∞ filtering for stochastic time-delay systems with missing measurements. IEEE Transactions on Signal Processing, 54(7): 2579-2587, 2006. 11. E. Fridman and U. Shaked. A new H∞ filter design for linear time-delay systems. IEEE Transactions on Signal Processing, 49(11): 2839-2843, 2001. 12. E. Fridman, U. Shaked, and L. Xie. Robust H∞ filtering of linear systems with time-varying delay. IEEE Transactions on Automatic Control, 48(1): 159-165, 2003. 13. E. Fridman and U. Shaked. An improved delay-dependent H∞ filtering of linear neutral systems. IEEE Transactions on Signal Processing, 52(3): 668-673, 2004. 14. H. Gao and C. Wang. Robust L2 − L∞ filtering for uncertain systems with multiple time-varying state delays. IEEE Transactions on Circuits and Systems I, 50(4): 594-599, 2003. 15. H. Gao and C. Wang. Delay-dependent robust H∞ and L2 − L∞ filtering for a class of uncertain nonlinear time-delay systems. IEEE Transactions on Automatic Control, 48(9): 1661-1666, 2003. 16. H. Gao and C. Wang. A delay-dependent approach to robust H∞ filtering for uncertain discrete-time state-delayed systems. IEEE Transactions on Signal Processing, 52(6): 1631-1640, 2004. 17. Y. He, Q. G. Wang, and C. Lin. An improved H∞ filter design for systems with time-varying interval delay. IEEE Transactions on Circuits and Systems II, 53(11): 1235-1239, 2006.
202
9. H∞ Filter Design for Systems with Time-Varying Delay
18. X. M. Zhang and Q. L. Han. Stability analysis and H∞ filtering for delay differential systems of neutral type. IET Control Theory & Applications, 1(3): 749-755, 2007. 19. X. M. Zhang and Q. L. Han. Robust H∞ filtering for a class of uncertain linear systems with time-varying delay. Automatica, 44(1): 157-166, 2008. 20. X. M. Zhang and Q. L. Han. Delay-dependent robust H∞ filtering for uncertain discrete-time systems with time-varying delay based on a finite sum inequality. IEEE Transactions on Circuits and Systems II, 53(12): 1466-1470, 2006. 21. X. M. Zhang and Q. L. Han. A new finite sum inequality approach to delaydependent H-infinity control of discrete-time systems with time-varying delay. International Journal of Robust and Nonlinear Control, 18(6): 630-647, 2008. 22. H. Gao, X. Meng, and T. Chen. A new design of robust H∞ −filters for uncertain discrete-time state-delayed systems. Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, USA, 5564-5569, 2007. 23. Y. He, G. P. Liu, D. Rees, and M. Wu. H∞ filtering for discrete-time systems with time-varying delay. Signal Processing, 89(3): 275-282, 2009. 24. Y. He, M. Wu, J. H. She, and G. P. Liu. Parameter-dependent Lyapunov functional for stability of time-delay systems with polytopic-type uncertainties. IEEE Transactions on Automatic Control, 49(5): 828-832, 2004. 25. Y. He, M. Wu, Q. L. Han, and J. H. She, Delay-dependent H∞ control of linear discrete-time systems with an interval-like time-varying delay. International Journal of Systems Science, 39(4): 427-436, 2008.
10. Stability of Neural Networks with Time-Varying Delay
Neural networks are useful in signal processing, pattern recognition, static image processing, associative memory, combinatorial optimization, and other areas [1]. Although considerable effort has been expended on analyzing the stability of neural networks without a delay, in the real world such networks often have a delay due, for example, to the finite switching speed of amplifiers in electronic networks and to the finite signal propagation speed in biological networks. So, the stability of different classes of neural networks with a delay has become an important topic [2–20]. The criteria in these papers are based on various types of stability (asymptotic, complete, absolute, exponential, and so on); and they can be classified into two categories according to their dependence on information about the length of a delay: delay-independent [3–10, 13] and delay-dependent [2, 11–20]. Since delay-independent criteria tend to be conservative, especially when the delay is small or varies within an interval, the delay-dependent type receives greater attention. [13] presented stability criteria for the global asymptotic stability of a class of neural networks with multiple delays. They have two main weaknesses. First, the nonlinear parts were handled using inequalities rather than the S-procedure, which is the most effective way of dealing with nonlinearities. Second, no information on nonlinearities was included in the LyapunovKrasovskii functional, even though including it would have yielded better results. On the other hand, [16–19] took the range of a time-varying delay in a neural network to be from 0 to an upper bound. In practice, however, timevarying interval delays are often encountered for which the interval does not necessarily start from zero. The stability criteria for neural networks with a time-varying delay in [16–19] are conservative for this case because they do not use information on the lower bound of the delay. To our knowledge, few reports have appeared on the stability of neural networks with a time-varying interval delay.
As pointed out in [12], the property of exponential stability is particularly important when the exponential convergence rate is used to determine the speed of neural computations. Thus, in general, it is not only of theoretical interest but also of practical importance to determine the exponential stability of, and to estimate the exponential convergence rate of, dynamic neural networks. Accordingly, a great number of sufficient conditions guaranteeing the global exponential stability of both continuous-time [3, 4, 7, 9, 12, 14, 15, 17, 20-22] and discrete-time [23-26] neural networks with constant and/or time-varying delays have been derived. Among them, delay-dependent exponential stability criteria have been attracting a great deal of attention [12, 17, 20, 23-26] because they exploit information on the length of a delay, which makes them less conservative than delay-independent ones.
However, for continuous-time neural network systems with a time-varying delay, some negative terms in the derivative of the Lyapunov-Krasovskii functional tend to be ignored when delay-dependent stability criteria are derived [12, 17, 18, 20]. For example, the negative term $-\int_{t-h}^{t} e^{2ks}\dot z^T(s)Z\dot z(s)\,ds$ in the derivative of $\int_{-h}^{0}\!\int_{t+\theta}^{t} e^{2ks}\dot z^T(s)Z\dot z(s)\,ds\,d\theta$ was ignored in [20], which may lead to considerable conservativeness. [17] used $-e^{2k(t-h)}\int_{t-d(t)}^{t}\dot z^T(s)Z\dot z(s)\,ds$ as an estimate of the negative term $-e^{2k(t-h)}\int_{t-h}^{t}\dot z^T(s)Z\dot z(s)\,ds$; but they ignored the other term, $-e^{2k(t-h)}\int_{t-h}^{t-d(t)}\dot z^T(s)Z\dot z(s)\,ds$, which may also lead to considerable conservativeness. For discrete-time recurrent neural network systems with a time-varying delay, the derivations of delay-dependent criteria generally employ a fixed model transformation of the original system [23, 24, 26], which may lead to conservativeness. Recently, [25] employed the FWM approach to study the delay-dependent stability of discrete-time neural networks with a time-varying delay. However, further research is still possible. For example, the delay, d(k) (where h_1 ≤ d(k) ≤ h_2), was increased to h_2; and h_2 − d(k) was increased to h_2 − h_1. In other words, h_2 = d(k) + h_2 − d(k) was increased to 2h_2 − h_1, which may lead to considerable conservativeness.
In this chapter, first, the delay-dependent stability of neural networks with multiple time-varying delays is investigated by constructing a class of Lyapunov-Krasovskii functionals that contain information on nonlinearities [16]; and the S-procedure and FWM approach are used to derive a delay-dependent stability criterion. This criterion is shown to include the delay-independent and rate-dependent one, and it is extended to a delay-dependent and rate-independent stability criterion for multiple unknown time-varying delays. Second, the IFWM approach is used to establish stability criteria for
neural networks with a time-varying interval delay [27]. Third, the FWM and IFWM approaches are employed to derive delay-dependent exponential stability criteria for neural networks with a time-varying delay [17, 28]. Finally, for discrete-time recurrent neural networks with a time-varying delay, the IFWM approach is used to establish less conservative criteria without ignoring any useful terms in the difference of a Lyapunov function [29]. Numerical examples illustrate the effectiveness of these methods and the improvement over others.
10.1 Stability of Neural Networks with Multiple Delays

This section examines the stability of neural networks with multiple time-varying delays by constructing a class of Lyapunov-Krasovskii functionals containing information on nonlinearities.

10.1.1 Problem Formulation

Consider the following neural network with multiple time-varying delays:
\[
\dot{u}(t) = -Au(t) + W^{(0)}g(u(t)) + \sum_{k=1}^{r} W^{(k)}g(u(t-d_k(t))) + J, \tag{10.1}
\]
where u(t) = [u_1(t), u_2(t), ..., u_n(t)]^T is the neural state vector; A = diag{a_1, a_2, ..., a_n} is a positive diagonal matrix; $W^{(k)} = \big[w_{ij}^{(k)}\big]_{n\times n}$, k = 0, 1, ..., r are interconnection matrices; g(u) = [g_1(u_1), g_2(u_2), ..., g_n(u_n)]^T is the vector of neural activation functions, and g(0) = 0; J = [J_1, J_2, ..., J_n]^T is a constant input vector; and the delays, d_k(t), k = 1, 2, ..., r, are time-varying differentiable functions. In this section, the delays are assumed to satisfy one or both of the following conditions:
\[
0 \le d_k(t) \le h_k, \tag{10.2}
\]
\[
\dot{d}_k(t) \le \mu_k, \tag{10.3}
\]
where h_k and μ_k, k = 1, 2, ..., r are constants. In addition, the activation functions, g_j(·), j = 1, 2, ..., n, of the neurons in system (10.1) are assumed to satisfy the condition
\[
0 \le \frac{g_j(x)-g_j(y)}{x-y} \le \sigma_j, \quad \forall x, y \in \mathbb{R},\; x \ne y, \tag{10.4}
\]
where σ_j, j = 1, 2, ..., n are positive constants.
We use the transformation x(·) = u(·) − u* to shift the equilibrium point u* = [u*_1, u*_2, ..., u*_n]^T of system (10.1) to the origin, which means that the following holds:
\[
0 = -Au^*(t) + W^{(0)}g(u^*(t)) + \sum_{k=1}^{r} W^{(k)}g(u^*(t-d_k(t))) + J. \tag{10.5}
\]
System equation (10.1) minus (10.5) yields
\[
\dot{x}(t) = -Ax(t) + W^{(0)}f(x(t)) + \sum_{k=1}^{r} W^{(k)}f(x(t-d_k(t))), \tag{10.6}
\]
where x = [x_1, x_2, ..., x_n]^T is the state vector of the transformed system; f(x) = [f_1(x_1), f_2(x_2), ..., f_n(x_n)]^T; and f_j(x_j) = g_j(x_j + u*_j) − g_j(u*_j), j = 1, 2, ..., n. Note that the functions f_j(·) satisfy the condition
\[
0 \le \frac{f_j(x_j)}{x_j} \le \sigma_j, \quad \forall x_j \ne 0,\; j = 1, 2, \cdots, n, \tag{10.7}
\]
which is equivalent to
\[
f_j(x_j)\,[f_j(x_j) - \sigma_j x_j] \le 0, \quad j = 1, 2, \cdots, n. \tag{10.8}
\]
10.1.2 Stability Criteria In this subsection, we use a new class of Lyapunov-Krasovskii functionals, the S-procedure, and the FWM approach to establish stability criteria. Theorem 10.1.1. Consider neural network (10.6) with time-varying delays, dk (t), k = 1, 2, · · · , r, that satisfy both (10.2) and (10.3). Given scalars hk > 0 and μk , the system is asymptotically stable at the origin if there exist matrices P > 0, Qk 0, Rk 0, Zk > 0, Λ = diag{λ1 , λ2 , · · · , λn } 0, T = diag{t1 , t2 , · · · , tn } 0, and Sk = diag{sk1 , sk2 , · · · , skn } 0, and any appropriately dimensioned matrices Nkj and Mkj , k = 0, 1, · · · , r, j = 1, 2, · · · , r such that the following LMI holds:
10.1 Stability of Neural Networks with Multiple Delays
⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ Ξ =⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
Ξ11 Ξ12 Ξ13 + ΣT
h1 M01 · · · hr M0r
Ξ14
∗
Ξ22
Ξ23
Ξ24 + Σs h1 M1 · · ·
∗
∗
Ξ33 − 2T
Ξ34
∗
∗
∗
Ξ44 − 2S
∗ .. .
∗ .. .
∗ .. .
∗ .. .
∗
∗
∗
∗
h1 N01 · · · ···
h1 N 1
−h1 Z1 · · · .. . ∗
···
⎤
⎥ ⎥ hr M r ⎥ ⎥ ⎥ hr N0r ⎥ ⎥ ⎥ hr N r ⎥ ⎥ < 0, (10.9) ⎥ 0 ⎥ ⎥ .. ⎥ ⎥ . ⎥ ⎦ −hr Zr
where Ξ11 = Φ11 + AT HA + Ξ12 =
r
r
T M0k + M0k ,
k=1 T M1k
− M01 ,
k=1
r
T M2k
k=1 r
Ξ13 = Φ13 − AT HW (0) +
− M02 , · · · ,
Ξ14 = Φ14 − A HW1r + ⎡
Ξ22
⎢ ⎢ ⎢ = Φ22 + ⎢ ⎢ ⎢ ⎣
T Mrk
− M0r ,
k=1 T N0k ,
k=1 r
T
r
k=1
T N1k ,
r k=1
T N2k ,
··· ,
r
T Nrk
,
k=1
T T T −M11 − M11 −M12 − M21 · · · −M1r − Mr1
∗ .. . ∗
Ξ23 = [−N01 , − N02 , · · · , − N0r ]T , ⎤ ⎡ T T T −N11 −N21 · · · −Nr1 ⎥ ⎢ ⎢ T T ⎥ −N22 · · · −Nr2 ⎥ ⎢ ∗ ⎥ Ξ24 = ⎢ ⎢ .. .. .. ⎥, ⎥ ⎢ . . . ⎦ ⎣ T ∗ ∗ · · · −Nrr Σs = diag{ΣS1 , ΣS2 , · · · , ΣSr }, T
Ξ33 = Φ33 + [W (0) ] HW (0) , T
Ξ34 = Φ34 + [W (0) ] HW1r , T Ξ44 = Φ44 + W1r HW1r ,
⎤
⎥ T T ⎥ −M22 − M22 · · · −M2r − Mr2 ⎥ ⎥, ⎥ .. .. ⎥ . . ⎦ ∗
207
T · · · −Mrr − Mrr
208
10. Stability of Neural Networks with Time-Varying Delay
Φ11 = −P A − AP +
r
Qk ,
k=1
S = diag{S1 , S2 , · · · , Sr }, Σ = diag{σ1 , σ2 , · · · , σr }, W1r = W (1) , W (2) , · · · , W (r) , r hk Z k , H= k=1
T T T T Mk = M1k , M2k , · · · , Mrk , k = 1, 2, · · · , r, T T T T Nk = N1k , N2k , · · · , Nrk , k = 1, 2, · · · , r. and Φ13 = P W (0) − AT Λ, Φ14 = P W1r , Φ22 = diag{−(1 − μ1 )Q1 , − (1 − μ2 )Q2 , · · · , − (1 − μr )Qr }, r T Rk + ΛW (0) + W (0) Λ, Φ33 = k=1
Φ34 = ΛW1r , Φ44 = diag{−(1 − μ1 )R1 , − (1 − μ2 )R2 , · · · , − (1 − μr )Rr }. Proof. Choose the Lyapunov-Krasovskii functional to be T
V (xt ) = x (t)P x(t)+2 +
r k=1
n
λj
j=1
t
t−dk (t)
xj
0
fj (s)ds+
r k=1
0
−hk
t
x˙ T (s)Zk x(s)dsdθ ˙
t+θ
xT (s)Qk x(s) + f T (x(s))Rk f (x(s)) ds,
(10.10)
where P > 0, Qk 0, Rk 0, Zk > 0, k = 1, 2, · · · , r, and Λ = diag {λ1 , λ2 , · · · , λn } 0 are to be determined. Calculating the derivative of V (xt ) along the solutions of system (10.6) yields V˙ (xt ) = 2xT (t)P x(t) ˙ +2 +
r k=1
n j=1
λj fj (xj (t))x˙ j (t)
xT (t)Qk x(t) − (1 − d˙k (t))xT (t − dk (t))Qk x(t − dk (t))
10.1 Stability of Neural Networks with Multiple Delays
+
r T f (x(t))Rk f (x(t)) k=1
+
209
r
−(1 − d˙k (t))f T (x(t − dk (t)))Rk f (x(t − dk (t))) hk x˙ T (t)Zk x(t) ˙ −
k=1 T
t
t−dk (t)
x˙ T (s)Zk x(s)ds ˙
T
˙ + 2f (x(t))Λx(t) ˙ 2x (t)P x(t) r T x (t)Qk x(t) − (1 − μk )xT (t − dk (t))Qk x(t − dk (t)) + +
+
k=1 r
f T (x(t))Rk f (x(t))−(1 − μk )f T (x(t − dk (t)))Rk f (x(t − dk (t)))
k=1 r
T
hk x˙ (t)Zk x(t) ˙ −
k=1
t
T
t−dk (t)
x˙ (s)Zk x(s)ds ˙ .
(10.11)
From the Newton-Leibnitz formula, the following equations hold for any appropriately dimensioned matrices Njk and Mjk , k = 1, 2, · · · , r, j = 0, 1, · · · , r: ⎡ ⎤ r r ⎣xT (t)M0k + xT (t−dj (t))Mjk +f T (x(t))N0k + f T (x(t−dj (t)))Njk ⎦ j=1
× x(t)−x(t−dk (t))−
t
j=1
x(s)ds ˙ = 0.
(10.12)
t−dk (t)
On the other hand, for ⎡ (k) (k) X X12 ⎢ 11 ⎢ (k) ⎢ ∗ X22 ⎢ Xk = ⎢ ⎢ ∗ ∗ ⎣ ∗ ∗
any matrices ⎤ (k) (k) X13 X14 ⎥ (k) (k) ⎥ X23 X24 ⎥ ⎥ 0, k = 1, 2, · · · , r, (k) (k) ⎥ X33 X34 ⎥ ⎦ (k) ∗ X44
the following holds: T
hk ξ (t)Xk ξ(t) −
t
t−dk (t)
ξ T (t)Xk ξ(t)ds 0,
(10.13)
where ξ(t) = [xT (t), xT (t − d1 (t)), · · · , xT (t − dr (t)), f T (x(t)), f T (x(t − d1 (t))), · · · , f T (x(t − dr (t)))]T .
210
10. Stability of Neural Networks with Time-Varying Delay
Now, adding the terms on the left sides of (10.12) and (10.13) to V˙ (xt ) allows us to write V˙ (xt ) as r r t T ˆ ˙ hk Xk ξ(t)− ζ T (t, s)Ψk ζ(t, s)ds, (10.14) V (xt ) ξ (t) Ξ + k=1
k=1
t−dk (t)
where T ζ(t, s) = ξ T (t), x˙ T (s) , ⎤ ⎡ Ξ11 Ξ12 Ξ13 Ξ14 ⎥ ⎢ ⎥ ⎢ ⎢ ∗ Ξ22 Ξ23 Ξ24 ⎥ ⎥, ˆ ⎢ Ξ=⎢ ⎥ ⎢ ∗ ∗ Ξ33 Ξ34 ⎥ ⎦ ⎣ ∗ ∗ ∗ Ξ44 ⎡ (k) (k) (k) (k) X X12 X13 X14 ⎢ 11 ⎢ (k) (k) (k) ⎢ ∗ X22 X23 X24 ⎢ ⎢ (k) (k) Ψk = ⎢ ∗ ∗ X33 X34 ⎢ ⎢ (k) ⎢ ∗ ∗ ∗ X44 ⎣ ∗ ∗ ∗ ∗
⎤ M0k
⎥ ⎥ Mk ⎥ ⎥ ⎥ N0k ⎥ , k = 1, 2, · · · , r. ⎥ ⎥ Nk ⎥ ⎦ Zk
From (10.8), we have fj (xj (t)) [fj (xj (t)) − σj xj (t)] 0, j = 1, 2, · · · , n, fj (xj (t − dk (t))) [fj (xj (t − dk (t))) − σj xj (t − dk (t))] 0, j = 1, 2, · · · , n, k = 1, 2, · · · , r.
(10.15)
(10.16)
Thus, by applying the S-procedure, we find that system (10.6) is asymptotically stable if there exist T = diag {t1 , t2 , · · · , tn } 0 and Sk = diag {sk1 , sk2 , · · · , skn } 0, k = 1, 2, · · · , r such that V˙ (xt ) − 2 −2
n
tj fj (xj (t)) [fj (xj (t)) j=1 r n
{skj fj (xj (t − dk (t))) [fj (xj (t − dk (t))) − σj xj (t − dk (t))]}
k=1 j=1
T
¯+ ξ (t) Ξ
r k=1
0, Qk 0, Rk 0, Λ = diag{λ1 , λ2 , · · · , λn } 0, T = diag{t1 , t2 , · · · , tn } 0, and Sk = diag{sk1 , sk2 , · · · , skn } 0, k = 1, 2, · · · , r such that the following LMI holds:
212
10. Stability of Neural Networks with Time-Varying Delay
⎡
⎤ Φ11
⎢ ⎢ ⎢ ∗ Φ=⎢ ⎢ ⎢ ∗ ⎣ ∗
0
Φ13 + ΣT
Φ14
Φ22
0
Σs
∗
Φ33 − 2T
Φ34
∗
∗
Φ44 − 2S
⎥ ⎥ ⎥ ⎥ < 0, ⎥ ⎥ ⎦
(10.18)
where Φ11 , Φ13 , Φ14 , Φ22 , Φ33 , Φ34 , Φ44 , Σs , Σ, and S are defined in (10.9). Setting Qk and Rk , k = 1, 2, · · · , r to zero results in a delay-dependent and rate-independent criterion for which the derivative of the delay may be unknown. Corollary 10.1.2. Consider neural network (10.6) with time-varying delays, dk (t), k = 1, 2, · · · , r, that satisfy (10.2) [but not necessarily (10.3)]. Given scalars hk > 0, k = 1, 2, · · · , r, the system is asymptotically stable at the origin if there exist matrices P > 0, Zk > 0, Λ = diag{λ1 , λ2 , · · · , λn } 0, T = diag{t1 , t2 , · · · , tn } 0, and Sk = diag{sk1 , sk2 , · · · , skn } 0, k = 1, 2, · · · , r, and any appropriately dimensioned matrices Nkj and Mkj , k = 0, 1, · · · , r, j = 1, 2, · · · , r such that the following LMI holds: ⎡ ⎤ ˜11 Ξ12 Ξ13 + ΣT Ξ Ξ14 h1 M01 · · · hr M0r ⎢ ⎥ ⎢ ⎥ ˜22 ⎢ ∗ Ξ Ξ Ξ + Σ h M · · · h M 23 24 s 1 1 r r ⎥ ⎢ ⎥ ⎢ ⎥ ˜33 − 2T ⎢ ∗ ⎥ ∗ Ξ Ξ h N · · · h N 34 1 01 r 0r ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ˜ (10.19) ∗ ∗ Ξ44 − 2S h1 N1 · · · hr Nr ⎥ ⎢ ⎥ < 0, ⎢ ⎥ ⎢ ∗ ∗ ∗ ∗ −h1 Z1 · · · 0 ⎥ ⎢ ⎥ ⎢ . .. .. .. .. .. ⎥ ⎢ . ⎥ ⎢ . . . . . . ⎥ ⎣ ⎦ ∗ ∗ ∗ ∗ ∗ · · · −hr Zr where ˜11 = −P A − AP + AT HA + Ξ ⎡ ˜22 Ξ
⎢ ⎢ ⎢ =⎢ ⎢ ⎢ ⎣
T M0k + M0k ,
k=1
−M11 − ∗ .. .
T M11
T T −M12 − M21 · · · −M1r − Mr1
⎤
⎥ T T ⎥ −M22 − M22 · · · −M2r − Mr2 ⎥ ⎥, ⎥ .. .. ⎥ . . ⎦
T ∗ · · · −Mrr − Mrr T T = ΛW (0) + W (0) Λ + W (0) HW (0) ,
∗
˜33 Ξ
r
10.1 Stability of Neural Networks with Multiple Delays
213
T ˜44 = W1r HW1r , Ξ
and the other terms are defined in (10.9). Remark 10.1.3. Regarding the stability of a neural network with time-varying delays, previously derived criteria restricted the derivatives of the delays to being less than 1. Corollary 10.1.2 does not impose this limitation because it is based on the FWM approach. This enables us to obtain a stability criterion for neural networks with unknown time-varying delays. 10.1.3 Numerical Examples Example 10.1.1. Consider system (10.6) with r = 2 and the following parameters: ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 1 0 0.01 0.10 0.05 −0.01 ⎦ , W (0) = ⎣ ⎦ , W (1) = ⎣ ⎦, A=⎣ 0 1 0.10 0.03 −0.10 0.35 ⎡ ⎤ 0.10 −0.03 ⎦. W (2) = ⎣ 0 −0.35 Assume μ1 = 0.2 and μ2 = 0.1 for the time-varying delays; and let σ1 = 5.5 and σ2 = 1.19. Under these conditions, LMI (10.18) in Corollary 10.1.1 is feasible, which implies that the system is asymptotically stable regardless of the lengths of the delays. In contrast, LMI (4) in [13] is infeasible; and it is infeasible even for σ1 = 4.8 and σ2 = 1.0, which shows how conservative the method in [13] is. Now, assume h1 = 2 and h2 = 1. LMI (10.18) in Corollary 10.1.1 is infeasible in any of the following three cases: (1) σ1 exceeds 5.5 and σ2 equals 1.19; (2) σ1 equals 5.5 and σ2 exceeds 1.19; or (3) σ1 exceeds 5.5 and σ2 exceeds 1.19. But LMI (10.9) in Theorem 10.1.1 is feasible for σ1 = 5.6 and σ2 = 1.39, which demonstrates that it is better than the delay-independent criterion in Corollary 10.1.1. In fact, if we let g1 (x) = 5.6 tanh(x), g2 (x) = 1.39 tanh(x), u1 (θ) = 0.2, u2 (θ) = −0.5, (θ ∈ [−2, 0]), J1 = J2 = 1, d1 (t) = 1.8 + 0.2 sin t, and d2 (t) = 0.9 + 0.1 sin t, then system (10.1) is asymptotically stable at its T unique equilibrium point u∗ = [1.933, 1.520] . Figs. 10.1.3 and 10.1.3 show the convergence dynamics.
214
10. Stability of Neural Networks with Time-Varying Delay
Fig. 10.1. Time response curve of u1 (t) of system (10.6) (Example 10.1.1)
Fig. 10.2. Time response curve of u2 (t) of system (10.6) (Example 10.1.1)
Finally, suppose that no information on the derivatives of delays is available; that is, μ1 and μ2 could take any values. If the upper bounds, h1 and h2 , on the delays are known to be 2 and 1, respectively, then LMI (10.19) in Corollary 10.1.2 is feasible for σ1 = 4.2 and σ2 = 0.92. Example 10.1.2. Consider system (10.6) with r = 2 and the following parameters: ⎤ ⎤ ⎡ ⎤ ⎡ ⎡ ⎡ ⎤ 1 2 2 −5 0 1 0 − 0 − 3⎦ ⎦ , W (2) = ⎣ ⎦ , W (1) = ⎣ 3 ⎦ , W (0) = ⎣ 2 A=⎣ . 2 2 0 3 5 −2 −3 0 0 1 [21] discusses the case μ1 = μ2 = 0 and σ1 = σ2 = 1. Corollary 10.1.1 shows that this system is asymptotically stable regardless of the lengths of
10.2 Stability of Neural Networks with Interval Delay
215
the delays. In addition, this corollary also shows that the system with σ1 = σ2 = 4.5 and μ1 = μ2 = 0 is also asymptotically stable regardless of the lengths of the delays, which is significantly better than the results in [21].
10.2 Stability of Neural Networks with Interval Delay This section uses the IFWM approach to analyze the stability of neural networks with a time-varying interval delay. 10.2.1 Problem Formulation Consider the following neural network with a time-varying delay: x(t) ˙ = −Ax(t) + W0 g(x(t)) + W1 g(x(t − d(t))) + J,
(10.20)
where x(·) = [x1 (·), x2 (·), · · · , xn (·)]T ∈ Rn is the neural state vector; g(x(·)) = [g1 (x1 (·)), g2 (x2 (·)), · · · , gn (xn (·))]T ∈ Rn is the vector of neural activation functions; J = [J1 , J2 , · · · , Jn ]T ∈ Rn is a constant input vector; A = diag {a1 , a2 , · · · , an } is a diagonal matrix with ai > 0, i = 1, 2, · · · , n; W0 and W1 are the connection weight matrix and the delayed connection weight matrix, respectively; and the delay, d(t), is a time-varying differentiable function. In this section, the delay is assumed to satisfy one or both of the following conditions: 0 h1 d(t) h2 ,
(10.21)
˙ μ, d(t)
(10.22)
where h1 , h2 , and μ are constants. Note that h1 may be nonzero. In addition, we assume that the neural activation functions of system (10.20), gi (·), i = 1, 2, · · · , n, satisfy 0
gi (x) − gi (y) ki , ∀x, y ∈ R, x = y, x−y
(10.23)
where ki , i = 1, 2, · · · , n are positive constants. We use the transformation z(·) = x(·) − x∗ to shift the equilibrium point x∗ = [x∗1 , x∗2 , · · · , x∗n ]T of system (10.20) to the origin, which converts the system to the following form: z(t) ˙ = −Az(t) + W0 f (z(t)) + W1 f (z(t − d(t))),
(10.24)
216
10. Stability of Neural Networks with Time-Varying Delay
where z(·) = [z1 (·), z2 (·), · · · , zn (·)]T is the state vector of the transformed system; f (z(·)) = [f1 (z1 (·)), f2 (z2 (·)), · · · , fn (zn (·))]T ; and fi (zi (·)) = gi (zi (·) + zi∗ ) − gi (zi∗ ), i = 1, 2, · · · , n. Note that the functions fi (·), i = 1, 2, · · · , n, satisfy vspace*-1mm 0
fi (zi ) ki , fi (0) = 0, ∀zi = 0, i = 1, 2, · · · , n, zi
(10.25)
which is equivalent to fi (zi ) [fi (zi ) − ki zi ] 0, fi (0) = 0, i = 1, 2, · · · , n.
(10.26)
10.2.2 Stability Criteria We obtain the following theorem by considering the relationships among d(t), h2 − d(t), h2 , and h1 . Theorem 10.2.1. Consider system (10.24) with a time-varying interval delay, d(t), that satisfies both (10.21) and (10.22). Given scalars h2 > h1 0 and μ, the system is globally asymptotically stable at the origin if there exist matrices P > 0, Ql 0, l = 1, 2, · · · , 4, Zi > 0, i = 1, 2, Λ = diag{λ⎡ , λn } 0, Tj = diag{t 1 , λ2 , · · · ⎤ ⎡ 1j , t2j⎤, · · · , tnj } 0, j = 1, 2, X = ⎣
X11 X12 ∗
X22
⎦ 0, and Y = ⎣
Y11 Y12 ∗
Y22 T
⎦ 0, and any appropri-
ately dimensioned matrices N = N1T N2T , M = T S = S1T S2T such that the following LMIs hold: Φ1 + ΦT 2 (h2 Z1 + h12 Z2 )Φ2 < 0, ⎡ ⎤ X N ⎦ 0, Ψ1 = ⎣ ∗ Z1 ⎡ ⎤ Y S ⎦ 0, Ψ2 = ⎣ ∗ Z2 ⎡ ⎤ X +Y M ⎦ 0, Ψ3 = ⎣ ∗ Z1 + Z2 where
M1T M2T
T
, and
(10.27) (10.28)
(10.29)
(10.30)
10.2 Stability of Neural Networks with Interval Delay
⎡
Φ11 Φ12 Φ13 P W1
⎤
−M1
S1
217
⎥ ⎥ −M2 ⎥ ⎥ ⎥ ∗ ∗ Φ33 ΛW1 0 0 ⎥ ⎥, ⎥ ∗ ∗ ∗ Φ44 0 0 ⎥ ⎥ ⎥ 0 ⎥ ∗ ∗ ∗ ∗ −Q3 ⎦ ∗ ∗ ∗ ∗ ∗ −Q4 Φ11 = −P A − AT P + N1 + N1T + Q1 + Q3 + Q4 + h2 X11 + h12 Y11 , Φ12 = N2T − N1 + M1 − S1 + h2 X12 + h12 Y12 , Φ13 = P W0 − AT Λ + KT1 , Φ22 = −(1 − μ)Q1 − N2 − N2T + M2 + M2T − S2 − S2T + h2 X22 + h12 Y22 , Φ33 = Q2 − 2T1 + ΛW0 + W0T Λ, Φ44 = −(1 − μ)Q2 − 2T2 , Φ2 = [−A 0 W0 W1 0 0], K = diag{k1 , k2 , · · · , kn }, h12 = h2 − h1 . ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ Φ1 = ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
∗
Φ22
0
KT2
S2
Proof. Choose the Lyapunov-Krasovskii functional candidate to be V (zt ) = V1 (zt ) + V2 (zt ) + V3 (zt ) where T
V1 (zt ) = z (t)P z(t) + 2
n i=1
t
V2 (zt ) =
λi
(10.31)
zj
0
fj (s)ds,
z T (s)Q1 z(s) + f T (z(s))Q2 f (z(s)) ds
t−d(t)
+ V3 (zt ) =
2
t
z T (s)Qi+2 z(s)ds,
t−hi
i=1 0 t −h2
z˙ T (s)Z1 z(s)dsdθ ˙ +
t+θ
−h1
−h2
t
z˙ T (s)Z2 z(s)dsdθ; ˙
t+θ
and P > 0, Ql 0, l = 1, 2, · · · , 4, Zi > 0, i = 1, 2, and Λ = diag {λ1 , λ2 , · · · , λn } 0 are to be determined. Calculating the derivative of V (zt ) along the solutions of system (10.24) yields V˙ (zt ) = V˙ 1 (zt ) + V˙ 2 (zt ) + V˙ 3 (zt ), where
(10.32)
218
10. Stability of Neural Networks with Time-Varying Delay
˙ +2 V˙ 1 (zt ) = 2z T (t)P z(t)
n
λi fi (zi (t))z˙i (t)
i=1 T
˙ + 2f (z(t))Λz(t), ˙ = 2z T (t)P z(t) V˙ 2 (zt ) = z T (t)(Q1 + Q3 + Q4 )z(t) + f T (z(t))Q2 f (z(t)) T ˙ −(1 − d(t)) z (t − d(t))Q1 z(t − d(t)) +f T (z(t − d(t)))Q2 f (z(t − d(t))) −
2
z T (t − hi )Qi+2 z(t − hi )
i=1
z T (t)(Q1 + Q3 + Q4 )z(t) + f T (z(t))Q2 f (z(t)) −(1 − μ) z T (t − d(t))Q1 z(t − d(t)) +f T (z(t − d(t)))Q2 f (z(t − d(t))) −
2
z T (t − hi )Qi+2 z(t − hi ),
i=1
V˙ 3 (zt ) = h2 z˙ T (t)Z1 z(t) ˙ −
t
z˙ T (s)Z1 z(s)ds ˙
t−h2
+(h2 − h1 )z˙ T (t)Z2 z(t) ˙ −
t−h1
z˙ T (s)Z2 z(s)ds ˙
t−h2
˙ = z˙ T (t)(h2 Z1 + h12 Z2 )z(t) −
t
z˙ T (s)Z1 z(s)ds ˙ −
t−d(t)
−
t−d(t)
t−h1
z˙ T (s)Z2 z(s)ds ˙
t−d(t)
z˙ T (s)(Z1 + Z2 )z(s)ds. ˙
t−h2
From the Leibnitz-Newton formula, for any appropriately dimensioned T T T matrices N = N1T N2T , M = M1T M2T , and S = S1T S2T , the following equations are true: T T 0 = 2 z (t)N1 + z (t − d(t))N2 z(t) − z(t − d(t)) −
t
z(s)ds ˙ ,
t−d(t)
(10.33) T T 0 = 2 z (t)S1 + z (t − d(t))S2 z(t − h1 ) − z(t − d(t)) −
t−h1
z(s)ds ˙ ,
t−d(t)
(10.34)
10.2 Stability of Neural Networks with Interval Delay
T T 0 = 2 z (t)M1 +z (t − d(t))M2 z(t − d(t))−z(t − h2 )−
219
t−d(t)
z(s)ds ˙ .
t−h2
(10.35) On the other hand, from (10.26) we know that fi (zi (t)) [fi (zi (t)) − ki zi (t)] 0, i = 1, 2, · · · , n, fi (zi (t − d(t))) [fi (zi (t − d(t))) − ki zi (t − d(t))] 0, i = 1, 2, · · · , n. Thus, for any Tj = diag {t1j , t2j , · · · , tnj } 0, j = 1, 2, the following holds: 0 −2 −2
n i=1 n
ti1 fi (zi (t)) [fi (zi (t)) − ki zi (t)] ti2 fi (zi (t − d(t))) [fi (zi (t − d(t))) − ki zi (t − d(t))]
i=1 T
= 2z (t)KT1 f (z(t)) − 2f T (z(t))T1 f (z(t)) +2z T (t − d(t))KT2 f (z(t − d(t))) − 2f T (z(t − d(t)))T2 f (z(t − d(t))). ⎡ Moreover, for any matrices X = ⎣ the following equations hold: t η T (t)Xη(t)ds − 0= t−h2
T
⎤ X11 X12
t
∗
X22
= h2 η (t)Xη(t)−
t−h1
T
t−h2 T
t−d(t)
η (t)Xη(t)ds−
η T (t)Y η(t)ds −
= (h2 − h1 )η (t)Y η(t) −
Y11 Y12 ∗
Y22
⎦ 0,
η T (t)Xη(t)ds
t−d(t)
0=
⎡
⎦ 0 and Y = ⎣
t−h2 t
(10.36) ⎤
η T (t)Xη(t)ds,
(10.37)
t−h2
t−h1
2 t−h t−h1
t−d(t)
η T (t)Y η(t)ds T
η (t)Y η(t)ds −
t−d(t)
η T (t)Y η(t)ds,
t−h2
(10.38) where η(t) = [z T (t), z T (t − d(t))]T . Adding the terms on the right sides of (10.33)-(10.38) to V˙ (zt ) yields
220
10. Stability of Neural Networks with Time-Varying Delay
V˙ (zt ) ζ T (t) Φ1 + ΦT 2 (h2 Z1 + h12 Z2 )Φ2 ζ(t) − −
t−h1
ξ T (t, s)Ψ2 ξ(t, s)ds −
t−d(t)
t−d(t)
t
ξ T (t, s)Ψ1 ξ(t, s)ds
t−d(t)
ξ T (t, s)Ψ3 ξ(t, s)ds, (10.39)
t−h2
where T ζ(t) = z T (t), z T (t−d(t)), f T (z(t)), f T (z(t−d(t))), z T (t−h1 ), z T (t−h2 ) , ξ(t, s) = [z T (t), z T (t−d(t)), z˙ T (s)]T . Thus, if Φ1 + ΦT 2 (h2 Z1 + h12 Z2 )Φ2 < 0 and Ψi 0, i = 1, 2, 3, then V˙ (zt ) < −εz(t)2 for a sufficiently small ε > 0, which means that system (10.24) is asymptotically stable. This completes the proof.
Remark 10.2.1. Note that we do not simply increase d(t), h2 −d(t), and d(t)− h1 to h2 , h2 − h1 , and h2 − h1 , respectively. Instead, we use the relationships d(t) + (h2 − d(t)) = h2 and (d(t) − h1 ) + (h2 − d(t)) = h2 − h1 . For μ 1, Q1 and Q2 are no longer helpful in improving the stability condition because −(1 − μ)Qi , i = 1, 2 are positive definite. So, setting Qi = 0, i = 1, 2 results in the following easy, delay-range-dependent and rateindependent criterion for an unknown μ. Corollary 10.2.1. Consider system (10.24) with a time-varying interval delay, d(t), that satisfies (10.21) [but not necessarily (10.22)]. Given scalars h2 > h1 0, the system is globally asymptotically stable at the origin if there exist matrices P > 0, Ql 0, l = 3, 4, Zi > 0, i = 1, 2, Λ = diag{λ1 , λ2 , · · · , λn } 0, Tj = diag{t1j , t2j , · · · , tnj } 0, j = 1, 2, ⎤ ⎡ ⎤ ⎡ Y11 Y12 X11 X12 ⎦ 0, and Y = ⎣ ⎦ 0, and any appropriX = ⎣ ∗ X22 ∗ Y22 T T ately dimensioned matrices N = N1T N2T , M = M1T M2T , and T S = S1T S2T such that LMIs (10.28)-(10.30) and the following LMI hold: Φˆ1 + ΦT 2 (h2 Z1 + h12 Z2 )Φ2 < 0, where
(10.40)
10.2 Stability of Neural Networks with Interval Delay
⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ˆ Φ1 = ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣ Φˆ11 Φˆ22 Φˆ33 Φˆ44
Φˆ11 Φ12 Φ13 P W1
S1
∗
Φˆ22
∗
∗
∗
∗
∗
Φˆ44
0
∗
∗
∗
∗
−Q3
∗
∗
∗
∗
∗
0
KT2
S2
ˆ33 ΛW1 Φ
0
−M1
221
⎤
⎥ ⎥ −M2 ⎥ ⎥ ⎥ 0 ⎥ ⎥, ⎥ 0 ⎥ ⎥ ⎥ 0 ⎥ ⎦ −Q4
−P A − A P + N1 + N1T + Q3 + Q4 + h2 X11 + h12 Y11 , −N2 − N2T + M2 + M2T − S2 − S2T + h2 X22 + h12 Y22 , −2T1 + ΛW0 + W0T Λ,
= = = = −2T2 ,
T
and the other parameters are defined in (10.27). Remark 10.2.2. If we set N = 0, X = 0, and Z1 = εI (where ε is a sufficiently small scalar), then Theorem 10.2.1 reduces to a delay-independent and interval-dependent stability criterion. 10.2.3 Numerical Examples The next two examples demonstrate the benefits of the method described above. Example 10.2.1. Consider the stability of neural network (10.20) with A = diag {1.2769, 0.6231, ⎡ −0.0373 0.4852 ⎢ ⎢ ⎢ −1.6033 0.5988 W0 = ⎢ ⎢ ⎢ 0.3394 −0.0860 ⎣ −0.1311 0.3253 ⎡ 0.8674 −1.2405 ⎢ ⎢ ⎢ 0.0474 −0.9164 W1 = ⎢ ⎢ ⎢ 1.8495 2.6117 ⎣ −2.0413 0.5179
0.9230, 0.4480}, −0.3351 0.2336
⎤
⎥ ⎥ −0.3224 1.2352 ⎥ ⎥, ⎥ −0.3824 −0.5785 ⎥ ⎦ −0.9534 −0.5015 ⎤ −0.5325 0.0220 ⎥ ⎥ 0.0360 0.9816 ⎥ ⎥, ⎥ −0.3788 0.8428 ⎥ ⎦ 1.1734 −0.2775
k1 = 0.1137, k2 = 0.1279, k3 = 0.7994, k4 = 0.2368.
222
10. Stability of Neural Networks with Time-Varying Delay
Table 10.1. Upper bound, h2 , calculated for various h1 and μ (Example 10.2.1) h1
Method
μ = 0.1
μ = 0.9
unknown μ
[18]
3.2775
1.3164
1.2598
[19]
3.2793
1.5847
1.5444
Theorem 10.2.1 or Corollary 10.2.1
3.3039
2.0853
2.0389
1
Theorem 10.2.1 or Corollary 10.2.1
3.3068
2.2736
2.2393
2
Theorem 10.2.1 or Corollary 10.2.1
3.3125
2.6468
2.6299
0
[18] and [19] examined the case h1 = 0, but the methods used did not take the relationship between d(t) and h2 − d(t) into account. Table 10.1 compares results for this case obtained with Theorem 10.2.1 and Corollary 10.2.1 in this section and with the methods in [18] and [19]. It shows that our method is markedly better than the others. This is because Theorem 10.2.1 and Corollary 10.2.1 exploit the relationships among h2 , d(t), h2 − d(t), and d(t) − h1 . Moreover, for our method, h1 can be nonzero. Table 10.1 also lists values of the upper bound, h2 , calculated by our method for various h1 and μ. Example 10.2.2. Consider the stability of neural network (10.20) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 2 0 1 1 0.88 1 ⎦ , W0 = ⎣ ⎦ , W1 = ⎣ ⎦, A=⎣ 0 2 −1 −1 1 1 k1 = 0.4, k2 = 0.8. Table 10.2 lists values of the upper bound, h2 , calculated for various h1 and μ using Theorem 10.2.1, Corollary 10.2.1, and the methods in [18] and [19]. Just as for Example 10.2.1, our method is significantly better than the others. The other methods cannot even handle the case h2 2 when μ 0.8. Note that the calculated h2 increases as h1 increases. Table 10.2 shows results for values of h1 up to 100. It is clear that this system is still stable for an unknown μ when d(t) is in the interval [100, 101.3606]. The system is also stable when h2 − h1 1.3606 and h2 has a value of 1, 2, or 100. In fact, as stated in Remark 10.2.2, the delay-independent and intervaldependent criterion shows that the system is stable for an unknown μ when h2 − h1 1.3606 and h2 has any value.
10.3 Exponential Stability of Continuous-Time Neural Networks
223
Table 10.2. Upper bound, h2 , calculated for various h1 and μ (Example 10.2.2) h1
Method
μ = 0.8
μ = 0.9
unknown μ
[18]
1.2281
0.8636
0.8298
[19]
1.6831
1.1493
1.0880
Theorem 10.2.1 or Corollary 10.2.1
2.3534
1.6050
1.5103
1
Theorem 10.2.1 or Corollary 10.2.1
3.2575
2.4769
2.3606
2
Theorem 10.2.1 or Corollary 10.2.1
4.2552
3.4769
3.3606
100
Theorem 10.2.1 or Corollary 10.2.1
102.2552
101.4769
101.3606
0
10.3 Exponential Stability of Continuous-Time Neural Networks This section employs the FWM and IFWM approaches to analyze the exponential stability of continuous-time neural networks with a time-varying delay. 10.3.1 Problem Formulation Consider the following neural network with a time-varying delay: x(t) ˙ = −Cx(t) + Ag(x(t)) + Bg(x(t − d(t))) + J,
(10.41)
where x(·) = [x1 (·), x2 (·), · · · , xn (·)]T ∈ Rn is the neural state vector; g(x(·)) = [g1 (x1 (·)), g2 (x2 (·)), · · · , gn (xn (·))]T ∈ Rn is the vector of neural activation functions; J = [J1 , J2 , · · · , Jn ]T ∈ Rn is a constant input vector; C = diag {c1 , c2 , · · · , cn } is a diagonal matrix with ci > 0, i = 1, 2, · · · , n; A and B are the connection weight matrix and the delayed connection weight matrix, respectively; and the delay, d(t), is a time-varying differentiable function. In this section, the delay is assumed to satisfy one or both of the following conditions: 0 d(t) h,
(10.42)
˙ μ, d(t)
(10.43)
where h and μ are constants In addition, the neural activation functions, gj (·), j = 1, 2, · · · , n are assumed to satisfy
224
10. Stability of Neural Networks with Time-Varying Delay
0
gj (x) − gj (y) Lj , ∀x, y ∈ R, x = y, j = 1, 2, · · · , n, x−y
(10.44)
where Lj , j = 1, 2, · · · , n are positive constants. We use the transformation z(·) = x(·) − x∗ to shift the equilibrium point x∗ = [x∗1 , x∗2 , · · · , x∗n ]T of (10.41) to the origin, which converts the system to the following form: z(t) ˙ = −Cz(t) + Af (z(t)) + Bf (z(t − d(t))),
(10.45)
where z(·) = [z1 (·), z2 (·), · · · , zn (·)]T is the state vector of the transformed system; f (z(·)) = [f1 (z1 (·)), f2 (z2 (·)), · · · , fn (zn (·))]T ; and fj (zj (·)) = gj (zj (·) + zj∗ ) − gj (zj∗ ), j = 1, 2, · · · , n. Note that the functions fj (·), j = 1, 2, · · · , n satisfy the condition fj (zj ) 0 Lj , fj (0) = 0, ∀zj = 0, j = 1, 2, · · · , n, (10.46) zj which is equivalent to fj (zj ) [fj (zj ) − Lj zj ] 0, fj (0) = 0, j = 1, 2, · · · , n.
(10.47)
The following lemma is employed to derive a new criterion. Lemma 10.3.1. [20] If (10.46) holds, then u [fi (s) − fj (s)] ds [u − v] [fi (u) − fi (v)] , j = 1, 2, · · · , n. v
10.3.2 Stability Criteria Derived by FWM Approach We use the FWM approach to obtain a delay-dependent exponential stability criterion. Theorem 10.3.1. Consider system (10.45) with a time-varying delay, d(t), that satisfies both (10.42) and (10.43). Given scalars k : 0 < k < min{ci }, i = 1, 2, · · · , n, h 0, and μ < 1, the system is globally exponentially stable at the origin and has the exponential convergence rate k if there exist matrices P > 0, Q 0, W 0, Z > 0, D = diag{d1 , d2 , · · · , dn } 0, R = diag{r1 , r2 , · · · , rn } 0, and S = diag{s1 , s2 , · · · , sn } 0, and any apT propriately dimensioned matrices Ti , i = 1, 2 and N = N1T N2T N3T N4T N5T such that the following LMI holds: ⎤ ⎡ Φ hN ⎦ < 0, (10.48) Ξ=⎣ ∗ −he−2kh Z where
10.3 Exponential Stability of Continuous-Time Neural Networks
⎡
225
⎤ Φ11 Φ12 Φ13 Φ14 Φ15
⎢ ⎢ ⎢ ∗ ⎢ ⎢ Φ=⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ Φ22 Φ23 Φ24 Φ25 ⎥ ⎥ ⎥ ∗ Φ33 Φ34 Φ35 ⎥, ⎥ ⎥ ∗ ∗ Φ44 0 ⎥ ⎦ ∗ ∗ ∗ Φ55
Φ11 = 2kP + T1 C + C T T1T + N1 + N1T + e2kh Q, Φ12 = P + T1 + C T T2T + N2T , Φ13 = N3T − N1 , Φ14 = 2kD − T1 A + LR + N4T , Φ15 = −T1 B + N5T , Φ22 = T2 + T2T + hZ, Φ23 = −N2 , Φ24 = D − T2 A, Φ25 = −T2 B, Φ33 = −(1 − μ)Q − N3 − N3T , Φ34 = −N4T , Φ35 = LS − N5T , Φ44 = e2kh W − 2R, Φ55 = −(1 − μ)W − 2S, L = diag{L1 , L2 , · · · , Ln }. Proof. Choose the Lyapunov-Krasovskii functional to be V (zt ) = V1 (zt ) + V2 (zt ) + V3 (zt ), where V1 (zt ) = e2kt z T (t)P z(t) + 2 V2 (zt ) = e2kh V3 (zt ) =
0
dj e2kt
j=1
0
zj
fj (s)ds,
e2ks z T (s)Qz(s) + f T (z(s))W f (z(s)) ds,
t
t−d(t) t 2ks T
e
−h
n
(10.49)
z˙ (s)Z z(s)dsdθ; ˙
t+θ
and P > 0, Q 0, W 0, Z > 0, and D = diag {d1 , d2 , · · · , dn } 0 are to be determined. For any appropriately dimensioned matrices Ti , i = 1, 2, the following holds:
226
10. Stability of Neural Networks with Time-Varying Delay
˙ + Cz(t) − Af (z(t)) − Bf (z(t − d(t)))] . 0 = 2e2kt z T (t)T1 + z˙ T (t)T2 [z(t) (10.50) From the Newton-Leibnitz formula, the following is also true for any appropriately dimensioned matrix N : t 2kt T 0 = 2e ζ (t)N z(t) − z(t − d(t)) − z(s)ds ˙ , (10.51) t−d(t)
where ζ(t) = [z T (t), z˙ T (t), z T (t − d(t)), f T (z(t)), f T (z(t − d(t)))]T . In addition, for any matrix ⎡ X11 X12 X13 X14 ⎢ ⎢ ⎢ ∗ X22 X23 X24 ⎢ ⎢ X=⎢ ∗ ∗ X33 X34 ⎢ ⎢ ⎢ ∗ ∗ ∗ X44 ⎣ ∗ ∗ ∗ ∗ the following holds: 0 e2kt hζ T (t)Xζ(t) −
⎤ X15
⎥ ⎥ X25 ⎥ ⎥ ⎥ X35 ⎥ 0, ⎥ ⎥ X45 ⎥ ⎦ X55
t
ζ T (t)Xζ(t)ds .
(10.52)
t−d(t)
From (10.47), we know that fj (zj (t)) [fj (zj (t)) − Lj zj (t)] 0, j = 1, 2, · · · , n and fj (zj (t − d(t))) [fj (zj (t − d(t))) − Lj zj (t − d(t))] 0, j = 1, 2, · · · , n. So, for any R = diag {r1 , r2 , · · · , rn } 0 and S = diag {s1 , s2 , · · · , sn } 0, the following holds: n 0 −2e2kt rj fj (zj (t)) [fj (zj (t)) − Lj zj (t)] −2e2kt
j=1 n j=1
sj fj (zj (t − d(t))) [fj (zj (t − d(t))) − Lj zj (t − d(t))]
= 2e2kt z T (t)LRf (z(t)) − f T (z(t))Rf (z(t)) + z T (t − d(t))LSf (z(t − d(t))) −f T (z(t − d(t)))Sf (z(t − d(t))) . (10.53)
10.3 Exponential Stability of Continuous-Time Neural Networks
227
Calculating the derivative of V (zt ) along the solutions of system (10.45) yields V˙ (zt ) = V˙ 1 (zt ) + V˙ 2 (zt ) + V˙ 3 (zt ),
(10.54)
where V˙ 1 (zt ) = 2ke2kt z T (t)P z(t) + 2e2kt z T (t)P z(t) ˙ +4
n
kdj e
j=1
+2
n
2kt
0
zj
fj (s)ds
dj e2kt fj (zj (t))z˙j (t)
j=1
2ke2kt z T (t)P z(t) + 2e2kt z T (t)P z(t) ˙ + 4ke2kt f T (z(t))Dz(t) +2e2kt f T (z(t))Dz(t), ˙ ˙ V˙ 2 (zt ) = e2kh e2kt z T (t)Qz(t) + f T (z(t))W f (z(t)) − e2kh e2k(t−d(t)) (1 − d(t)) T × z (t − d(t))Qz(t − d(t)) + f T (z(t − d(t)))W f (z(t − d(t))) e2kh e2kt z T (t)Qz(t) + f T (z(t))W f (z(t)) −e2kt (1 − μ) z T (t − d(t))Qz(t − d(t)) +f T (z(t − d(t)))W f (z(t − d(t))) , t 2kt T ˙ ˙ − e2ks z˙ T (s)Z z(s)ds ˙ V3 (zt ) = he z˙ (t)Z z(t) t−h
he2kt z˙ T (t)Z z(t) ˙ − e2k(t−h) ˙ − e2k(t−h) he2kt z˙ T (t)Z z(t)
t
z˙ T (s)Z z(s)ds ˙
t−h t
z˙ T (s)Z z(s)ds. ˙
t−d(t)
Adding the terms on the right sides of (10.50)-(10.53) to V˙ (zt ) yields % & t 2kt T T ˙ ζ (t) (Φ + hX) ζ(t) − η (t, s)Ψ η(t, s)ds , (10.55) V (zt ) e t−d(t)
where
T η(t, s) = ζ T (t), z˙ T (s) , ⎤ ⎡ X N ⎦, Ψ =⎣ −2kh Z ∗ e
and Φ is defined in (10.48). If Φ + hX < 0 and Ψ 0, then V˙ (z(t)) < 0 for any ζ(t) = 0. Let X = e2kh N Z −1 N T , which ensures that X 0 and Ψ 0. In this case, Φ + hX < 0 is equivalent to Ξ < 0, according to the Schur complement.
228
10. Stability of Neural Networks with Time-Varying Delay
It follows from V˙ (zt ) < 0 that V (zt ) V (z(0)).
(10.56)
However, applying Lemma 10.3.1 yields zj (0) n T V (z(0)) = z (0)P z(0) + 2 dj fj (s)ds
+e2kh
e2ks z T (s)Qz(s) + f T (z(s))W f (z(s)) ds
−d(0) 0 2ks T
0
+
e
−h
z˙ (s)Z z(s)dsdθ ˙
θ
λmax (P )φ2 + 2 +e
2kh
0
j=1 0
+e2kh λmax (W ) +λmax (Z)
0
dj zj (0)fj (zj (0))ds
j=1
λmax (Q)
n
0
z T (s)z(s)ds
−d(0) 0
f T (z(s))f (z(s))ds
−d(0) 0 T
z˙ (s)z(s)dsdθ. ˙
−h
θ
It follows from Lemma 2.6.5 that ˙ = [−Cz(s) + Af (z(s)) + Bf (z(s − d(s)))] z˙ T (s)z(s)
T
× [−Cz(s) + Af (z(s)) + Bf (z(s − d(s)))] = z T (s)C T Cz(s) + f T (z(s))AT Af (z(s)) +f T (z(s − d(s)))B T Bf (z(s − d(s))) −2T z(s)C T Af (z(s)) − 2T z(s)C T Bf (z(s − d(s))) +2f T (z(s))AT Bf (z(s − d(s))) 3 z T (s)C T Cz(s) + f T (z(s))AT Af (z(s)) +f T (z(s − d(s)))B T Bf (z(s − d(s))) # $ 3 λmax (C T C) + λmax (AT A) + λmax (B T B) λmax (L2 ) φ2 . Thus, V (z(0)) λmax (P )φ2 + 2λmax (DL)φ2 + he2kh λmax (Q)φ2 +he2kh λmax (W )λmax (L2 )φ2 + 3h2 λmax (Z) λmax (C T C)
10.3 Exponential Stability of Continuous-Time Neural Networks
229
+λmax (AT A)λmax (L2 ) +λmax (B T B)λmax (L2 ) φ2 = Λφ2 , where Λ = λmax (P ) + 2λmax (DL) + he2kh λmax (Q) + he2kh λmax (W )λmax (L2 ) # $ +3h2 λmax (Z) λmax (C T C) + λmax (AT A) + λmax (B T B) λmax (L2 ) . On the other hand, V (zt ) e2kt z T (t)P z(t) e2kt λmin (P )z(t)2 . Therefore, ' z(t)
Λ φe−kt . λmin (P )
From Definition 2.2.1, system (10.45) is exponentially stable and has the exponential convergence rate k. This completes the proof.
Often there is no information on the derivative of the delay or the delay is nondifferentiable; and for μ 1, Q and W are no longer helpful in improving the stability condition because −(1−μ)Q and −(1−μ)W are positive definite. For those cases, a delay-dependent and rate-independent condition for a delay, d(t), satisfying only (10.2) is obtained by setting Q = 0 and W = 0 in Theorem 10.3.1. Corollary 10.3.1. Consider system (10.45) with a time-varying delay, d(t), that satisfies (10.42) [but not necessarily (10.43)]. Given scalars k : 0 < k < min{ci }, i = 1, 2, · · · , n and h 0, the system is globally exponentially stable at the origin and has the exponential convergence rate k if there exist matrices P > 0, Z > 0, D = diag{d1 , d2 , · · · , dn } 0, R = diag{r1 , r2 , · · · , rn } 0, and S = diag{s1 , s2 , · · · , sn } 0, and any appropriately dimensioned matrices Ti , i = 1, 2 and N = [N1T N2T N3T N4T N5T ]T , such that the following LMI holds: ⎤ ⎡ Φˆ hN ˆ=⎣ ⎦ < 0, (10.57) Ξ −2kh Z ∗ −he where
230
10. Stability of Neural Networks with Time-Varying Delay
⎡
Φˆ11 Φ12 Φ13 Φ14 Φ15
⎢ ⎢ ⎢ ∗ ⎢ ⎢ Φˆ = ⎢ ∗ ⎢ ⎢ ⎢ ∗ ⎣ ∗ Φˆ11 Φˆ33 Φˆ44 Φˆ55
⎤
⎥ ⎥ Φ22 Φ23 Φ24 Φ25 ⎥ ⎥ ⎥ ∗ Φˆ33 Φ34 Φ35 ⎥, ⎥ ⎥ ∗ ∗ Φˆ44 0 ⎥ ⎦ ∗ ∗ ∗ Φˆ55
= 2kP + T1 C + C T T1T + N1 + N1T , = −N3 − N3T , = −2R, = −2S,
and the other terms are defined in (10.48) Remark 10.3.1. In the derivation of the delay-dependent exponential stability criterion in [20], the negative term in the derivative of V3 (zt ) was ignored, which may lead to conservativeness. In contrast, the proof of Theorem t 10.3.1 shows that the negative term −e2k(t−h) t−d(t) z˙ T (s)Z z(s)ds ˙ in V˙ 3 (zt ) is retained. The FWM approach is employed to handle it and to derive a delay-dependent exponential stability criterion. 10.3.3 Stability Criteria Derived by IFWM Approach New exponential stability criteria are obtained by considering the relationships among a time-varying delay, its upper bound, and their difference. Theorem 10.3.2. Consider system (10.45) with a time-varying delay, d(t), that satisfies both (10.42) and (10.43). Given scalars k : 0 < k < min{ci }, i = 1, 2, · · · , n, h 0, and μ < 1, the system is globally exponentially stable at the origin and has the exponential convergence rate k if there exist matrices ⎡ ⎤ X11 X12 ⎦ 0, D = P > 0, Q 0, W 0, Z > 0, U > 0, X = ⎣ ∗ X22 diag{d1 , d2 , · · · , dn } 0, R = diag{r1 , r2 , · · · , rn } 0, and S = diag{s1 , s2 , · · · , sn } 0, and any appropriately dimensioned matrices T T Ti , i = 1, 2, N = N1T N2T , and M = M1T M2T , such that the following LMIs hold:
10.3 Exponential Stability of Continuous-Time Neural Networks
⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ Φ=⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
Φ11 Φ12 Φ13 −M1 ∗
Φ22
∗
∗
∗
∗
∗
−U
∗
∗
∗
∗
∗
∗
∗
∗
0
Φ33 −M2
X
N
∗ e
−2kh
Z
⎡ Ψ2 = ⎣
⎤
⎥ ⎥ −T2 B ⎥ ⎥ ⎥ 0 LS ⎥ ⎥ < 0, ⎥ 0 0 ⎥ ⎥ ⎥ e2kh W − 2R 0 ⎥ ⎦ ∗ Φ66 D − T2 A
(10.58)
⎤
⎡ Ψ1 = ⎣
0
−T1 B
Φ15
231
⎦ 0,
(10.59)
⎤ X
M
∗ e
−2kh
Z
⎦ 0,
(10.60)
where Φ11 = 2kP + T1 C + C T T1T + e2kh (Q + U ) + N1 + N1T + hX11 , Φ12 = P + T1 + C T T2T , Φ13 = hX12 − N1 + N2T + M1 , Φ15 = 2kD − T1 A + LR, Φ22 = T2 + T2T + hZ, Φ33 = −(1 − μ)Q + hX22 − N2 − N2T + M2 + M2T , Φ66 = −(1 − μ)W − 2S, L = diag{L1 , L2 , · · · , Ln }. Proof. Choose the Lyapunov-Krasovskii functional candidate to be V (zt ) = V1 (zt ) + V2 (zt ) + V3 (zt ),
(10.61)
where V1 (zt ) = e
2kt T
z (t)P z(t) + 2
n
dj e
2kt
j=1
V2 (zt ) = e2kh
+e2kh V3 (zt ) =
0 −h
t
zj
0
fj (s)ds,
e2ks [z T (s)Qz(s) + f T (z(s))W f (z(s))]ds
t−d(t) t
e2ks z T (s)U z(s) ds,
t−h t
t+θ
e2ks z˙ T (s)Z z(s)dsdθ; ˙
232
10. Stability of Neural Networks with Time-Varying Delay
and P > 0, Q 0, W 0, Z > 0, U > 0, and D = diag {d1 , d2 , · · · , dn } 0 are to be determined. The following equation is true: t t t−d(t) z˙ T (s)Z z(s)ds ˙ = z˙ T (s)Z z(s)ds+ ˙ z˙ T (s)Z z(s)ds. ˙ (10.62) t−h
t−d(t)
t−h
Thus, following a line similar to the one in Subsection 10.3.2 and using (10.62) yield V˙ (zt ) = V˙ 1 (zt ) + V˙ 2 (zt ) + V˙ 3 (zt ),
(10.63)
where ˙ V˙ 1 (zt ) 2ke2kt z T (t)P z(t) + 2e2kt z T (t)P z(t) ˙ +4ke2kt f T (z(t))Dz(t) + 2e2kt f T (z(t))Dz(t), T 2kh 2kt T z (t)(Q + U )z(t) + f (z(t))W f (z(t)) V˙ 2 (zt ) e e −e2kt (1 − μ) z T (t − d(t))Qz(t − d(t)) +f T (z(t − d(t)))W f (z(t − d(t))) − e2kt z T (t − h)U z(t − h), ˙ − V˙ 3 (zt ) = he2kt z˙ T (t)Z z(t)
t
e2ks z˙ T (s)Z z(s)ds ˙
t−h
he2kt z˙ T (t)Z z(t) ˙ − e2k(t−h)
t
z˙ T (s)Z z(s)ds ˙
t−h
˙ = he2kt z˙ T (t)Z z(t) t −e2k(t−h)
T
t−d(t)
z˙ (s)Z z(s)ds ˙ +
t−d(t)
T
z˙ (s)Z z(s)ds ˙ . t−h
On the other hand, the following holds for any appropriately dimensioned matrices Ti , i = 1, 2: ˙ + Cz(t) − Af (z(t)) − Bf (z(t − d(t)))] . 0 = e2kt z T (t)T1 + z˙ T (t)T2 [z(t) (10.64) From the Newton-Leibnitz formula, we know that the following equations are true for any appropriately dimensioned matrices N and M : 0 = 2e2kt ζ1T (t)N z(t) − z(t − d(t)) − 0=
2e2kt ζ1T (t)M
t
z(s)ds ˙ , t−d(t)
z(t − d(t)) − z(t − h) −
t−d(t)
(10.65)
z(s)ds ˙ , t−h
(10.66)
10.3 Exponential Stability of Continuous-Time Neural Networks
233
where ζ1 (t) = [z T (t), z T (t − d(t))]T .
⎡
In addition, the following equation holds for any matrix X = ⎣
⎤ X11 X12 ∗
X22
⎦
0: 0=e
2kt
t
t−h
ζ1T (t)Xζ1 (t)ds
= e2kt hζ1T (t)Xζ1 (t) −
−
t
t−h
t
t−d(t)
ζ1T (t)Xζ1 (t)ds
ζ1T (t)Xζ1 (t)ds
−
t−d(t)
t−h
ζ1T (t)Xζ1 (t)ds
.
(10.67) Just as in Subsection 10.3.2, for any R = diag {r1 , r2 , · · · , rn } 0 and S = diag {s1 , s2 , · · · , sn } 0, we have 0 2e2kt z T (t)LRf (z(t))−f T (z(t))Rf (z(t))+z T(t−d(t))LSf (z(t−d(t))) (10.68) −f T (z(t − d(t)))Sf (z(t − d(t))) . Thus, adding the terms on the right sides of (10.64)-(10.68) to V˙ (zt ) yields t 2kt T ˙ ζ (t)Φζ(t) − η T (t, s)Ψ1 η(t, s)ds V (zt ) e −
t−d(t) t−d(t)
T
η (t, s)Ψ2 η(t, s)ds ,
(10.69)
t−h
where ζ(t) = [z T (t), z˙ T (t), z T (t − d(t)), z T (t − h), f T (z(t)), f T (z(t − d(t)))]T , η(t, s) = [ζ1T (t), z˙ T (s)]T , and Φ, Ψ1 , and Ψ2 are defined in (10.58), (10.59), and (10.60), respectively. If Φ < 0 and Ψi 0, i = 1, 2, then V˙ (zt ) < 0 for any ζ(t) = 0. Following a line similar to the one in Subsection 10.3.2, we have V (z(0)) Λφ2 , where Λ = λmax (P ) + 2λmax (DL) + he2kh [λmax (Q) + λmax (U )] +he2kh λmax (W )λmax (L2 ) # $ +3h2 λmax (Z) λmax (C T C) + λmax (AT A) + λmax (B T B) λmax (L2 ) .
234
10. Stability of Neural Networks with Time-Varying Delay
On the other hand, V (zt ) e2kt z T (t)P z(t) e2kt λmin (P )z(t)2 . Therefore, ' z(t)
Λ φe−kt . λmin (P )
(10.70)
From Definition 2.2.1, system (10.45) is exponentially stable and has the exponential convergence rate k. This completes the proof.
Remark 10.3.2. In the derivation of the delay-dependent exponential stat bility criterion in [20], the negative term − t−h e2ks z˙ T (s)Z z(s)ds ˙ in V˙ (zt ) was ignored, which may lead to conservativeness. Although in Subsection t 10.3.2, the negative term −e2k(t−h) t−d(t) z˙ T (s)Z z(s)ds ˙ in V˙ (zt ) is retained, t−d(t) the other negative term −e2k(t−h) t−h z˙ T (s)Z z(s)ds ˙ is ignored, which may also lead to conservativeness. In contrast, in the proof of Theorem 10.3.2 all of the negative terms are retained and a new FWM, M , is used. t Remark 10.3.3. In [30], inequalities t−d(t) ζ T (t)N Z −1 N T ζ(t)ds hζ T (t)N · t Z −1 N T ζ(t) and t−d(t) ζ T (t)M Z −1 M T ζ(t)ds hζ T (t)M Z −1 M T ζ(t) are used to derive a delay-dependent asymptotic stability criterion. However, d(t) and h − d(t) are all increased to h. Theorem 10.3.2, on the other hand, uses the matrix hX and separates the delay into two parts, d(t)X and (h−d(t))X, which makes the theorem less conservative. Often there is no information on the derivative of the delay; and for μ 1, Q and W are no longer helpful in improving the stability condition because −(1 − μ)Q and −(1 − μ)W are positive definite. For those cases, a delaydependent and rate-independent condition for a delay, d(t), satisfying only (10.2) is obtained by setting Q = 0 and W = 0 in Theorem 10.3.2. Corollary 10.3.2. Consider system (10.45) with a time-varying delay, d(t), that satisfies (10.42) [but not necessarily (10.43)]. Given scalars k : 0 < k < min{ci }, i = 1, 2, · · · , n and h 0, the system is globally exponentially stable at the origin and has the exponential convergence rate k if there exist matrices ⎡ ⎤ P > 0, Z > 0, U > 0, X = ⎣
X11 X12
⎦ 0, D = diag{d1 , d2 , · · · , dn }
∗ X22 0, R = diag{r1 , r2 , · · · , rn } 0, and S = diag{s1 , s2 , · · · , sn } 0,
10.3 Exponential Stability of Continuous-Time Neural Networks
235
T and any appropriately dimensioned matrices Ti , i = 1, 2, N = N1T N2T , T T T and M = M1 M2 such that LMIs (10.59) and (10.60) and the following LMI hold: ⎡ ⎤ Φˆ11 Φ12 Φ13 −M1 Φ15 −T1 B ⎢ ⎥ ⎢ ⎥ 0 D − T2 A −T2 B ⎥ ⎢ ∗ Φ22 0 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ 0 LS ⎥ ∗ Φˆ33 −M2 ˆ ⎢ ⎥ < 0, (10.71) Φ=⎢ ⎥ ⎢ ∗ ⎥ ∗ ∗ −U 0 0 ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ⎥ ∗ ∗ ∗ −2R 0 ⎣ ⎦ ˆ ∗ ∗ ∗ ∗ ∗ Φ66 where Φˆ11 = 2kP + T1 C + C T T1T + e2kh U + N1 + N1T + hX11 , Φˆ33 = hX22 − N2 − N2T + M2 + M2T , Φˆ66 = −2S, and the other terms are defined in (10.58). 10.3.4 Numerical Examples This subsection presents two numerical examples that demonstrate the effectiveness of the above criteria. Example 10.3.1. Consider neural network (10.41) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 1 0 −0.1 0.1 −0.1 0.2 ⎦, A = ⎣ ⎦, B = ⎣ ⎦ , L1 = L2 = 1. C=⎣ 0 1 0.1 −0.1 0.2 0.1 We assume that d(t) = 0.5 sin2 t; so h = μ = 0.5. When k = 0.05, Theorem 1 in [20] shows the system to be globally exponentially stable; but Theorem 1 in [31] fails to verify that. However, Theorem 10.3.1 in this chapter in combination with the bisection search method yields k = 0.67, for which the system is globally exponentially stable. In addition, Theorem 10.3.1 is also applicable to cases in which the derivative of the delay is greater than or equal to 1. For example, if we let d(t) = sin2 t, then h = μ = 1. The bisection search method shows the system to be globally exponentially stable for k = 0.54. Moreover, we obtained an exponential convergence rate of k = 0.37 for d(t) = 2 sin2 t. In contrast, the methods in [20, 31] fail to verify exponential stability in either of these cases.
236
10. Stability of Neural Networks with Time-Varying Delay
Example 10.3.2. Consider neural network (10.41) with ⎡ C=⎣
⎤ 2
0
⎡
⎦, A = ⎣
0 3.5
−1 0.5 0.5 −1
⎤
⎡
⎦, B = ⎣
−0.5 0.5 0.5
0.5
⎤ ⎦ , L1 = L2 = 1.
When d(t) = 1 and k = 0.25 (note that the exponential convergence rate is k/2 in [20]), Theorem 1 in [20] shows the system to be globally exponentially stable; but Theorem 3 in [32] fails to verify that. However, Theorem 10.3.1 in this chapter in combination with the bisection search method shows the system to be globally exponentially stable, even for k = 1.15. Table 10.3 lists exponential convergence rates for the upper bound h = 1 and various μ obtained from Theorems 10.3.1 and 10.3.2, and from Corollaries 10.3.1 and 10.3.2, along with the results in [20]. From Table 10.3, we can see that, when the delay is time-invariant (μ = 0), the results obtained with Theorems 10.3.1 and 10.3.2 are much better than those in [20]. Furthermore, when the delay is time-varying, [20] fails to produce an allowable exponential convergence rate for the exponentially stable neural network system; but Theorems 10.3.1 and 10.3.2, and Corollaries 10.3.1 and 10.3.2, do. Theorem 10.3.2 also produces better results that guarantee the exponential stability of the neural network than Theorem 10.3.1 does.
Table 10.3. Allowable exponential convergence rate, k, for h = 1 and various μ (Example 10.3.2) μ
0
0.8
0.9
unknown μ
[20]
0.2500
—
—
—
Theorem 10.3.1
1.1540
0.7538
0.6106
—
Corollary 10.3.1
—
—
—
0.3391
Theorem 10.3.2
1.1540
0.8643
0.8344
—
Corollary 10.3.2
—
—
—
0.8169
On the other hand, Table 10.4 lists values of the upper bound, h, for the exponential convergence rate k = 0.8 and various μ obtained with Theorems 10.3.1 and 10.3.2, and with Corollaries 10.3.1 and 10.3.2.
10.4 Exponential Stability of Discrete-Time Recurrent Neural Networks
237
Table 10.4. Allowable upper bound, h, for k = 0.8 and various μ (Example 10.3.2) μ
0.5
0.8
unknown μ
[20]
—
—
—
Theorem 10.3.1
1.2606
0.9442
—
Corollary 10.3.1
—
—
0.8310
Theorem 10.3.2
1.2787
1.0819
—
Corollary 10.3.2
—
—
1.0366
10.4 Exponential Stability of Discrete-Time Recurrent Neural Networks This section employs the IFWM approach to analyze the exponential stability of discrete-time recurrent neural networks with a time-varying delay. 10.4.1 Problem Formulation Consider the following discrete-time recurrent neural network with a timevarying delay: x(k + 1) = Cx(k) + Af (x(k)) + Bf (x(k − d(k))) + J
(10.72)
for k = 1, 2, · · · , where x(k)=[x1 (k), x2 (k), · · · , xn (k)]T ∈ Rn is the neural state vector; f (x(k))=[f1 (x1 (k)), f2 (x2 (k)), · · · , fn (xn (k))]T ∈ Rn is the vector of neural activation functions; J ∈ Rn is the exogenous input; C = diag {c1 , c2 , · · · , cn }, where 0 ci < 1, i = 1, 2 · · · , n, is the state feedback coefficient matrix; A and B are the connection weight matrix and the delayed connection weight matrix; and d(k) is a time-varying delay satisfying h1 d(k) h2 ,
(10.73)
where d(k) is an integer for all k, and h1 0 and h2 0 are known integers. Moreover, the following assumption holds for the neural activation functions. In addition, we assume that the neural activation functions, fi (·), i ∈ 1, 2, · · · , n, satisfy Fi−
fi (α1 ) − fi (α2 ) Fi+ , ∀α1 , α2 ∈ R, α1 = α2 , α1 − α2
where Fi− and Fi+ are constants.
(10.74)
238
10. Stability of Neural Networks with Time-Varying Delay
For convenience, we denote $ # F1 = diag F1− F1+ , F2− F2+ , · · · , Fn− Fn+ , F2 = diag
F − + Fn+ F1− + F1+ F2− + F2+ , , ··· , n 2 2 2
.
We use the transformation y(·) = x(·) − x∗ to shift the equilibrium point x∗ = [x∗1 , x∗2 , · · · , x∗n ]T of (10.72) to the origin, which converts the system to the following form: y(k + 1) = Cy(k) + Ag(y(k)) + Bg(y(k − d(k))),
(10.75)
where y(k) = [y1 (k), y2 (k), · · · , yn (k)]T is the state vector of the transformed system; g(y(k)) = [g1 (y1 (k)), g2 (y2 (k)), · · · , gn (yn (k))]T ; and gi (yi (k)) = fi (xi (k) + x∗i ) − fi (x∗i ), i = 1, 2, · · · , n. The definition of the global exponential stability of a neural network is now given. Definition 10.4.1. System (10.72) is said to be globally exponentially stable if there exist constants μ > 0 and 0 < α < 1 such that x(k) − x∗ μαk
max x(i) − x∗ , k 0.
−h2 i0
10.4.2 Stability Criterion Derived by IFWM Approach This subsection uses the IFWM approach to derive a stability criterion for discrete-time recurrent neural networks with a time-varying delay. Theorem 10.4.1. Consider system (10.72). Given integers h1 and h2 , where h2 h1 0, the system is globally exponentially stable if there exist matrices P > 0, Q1 0, Q2 0, R > 0, Z1 > 0, Z2 > 0, D = diag{d1 , d2 , · · · , dn } 0, H = diag{h1 , h2 , · · · , hn } 0, X = ⎤ ⎡ ⎤ ⎡ Y11 Y12 X11 X12 ⎦ 0, and Y = ⎣ ⎦ 0, and any appropriately di⎣ ∗ X22 ∗ Y22 T T T mensioned matrices N = N1T N2T , M = M1T M2T , and S = S1T S2T such that the following LMIs hold:
10.4 Exponential Stability of Discrete-Time Recurrent Neural Networks
⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ Φ=⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
Φ11 Φ12
S1
−M1 Φ15 Φ16
∗
Φ22
S2
−M2
0
∗
∗
−Q1
0
0
∗
∗
∗
−Q2
0
∗
∗
∗
∗
Φ55
∗
∗
∗
∗
∗
⎡ Ψ1 = ⎣ ⎡ Ψ2 = ⎣
⎤
⎥ ⎥ Φ26 ⎥ ⎥ ⎥ 0 ⎥ ⎥ < 0, ⎥ 0 ⎥ ⎥ ⎥ Φ56 ⎥ ⎦ Φ66
(10.76)
⎤ X N ∗ Z1
⎦ 0, M
∗
Z1 + Z2
Y
S
∗ Z2
(10.77) ⎤
X +Y
⎡ Ψ3 = ⎣
239
⎤
⎦ 0,
⎦ 0,
(10.78)
(10.79)
where Φ11 = CP C − P + Q1 + Q2 + (h2 − h1 + 1)R + h2 (C − I)(Z1 + Z2 )(C − I) − h1 (C − I)Z2 (C − I) + N1 + N1T + h2 X11 + (h2 − h1 )Y11 − F1 D, Φ12 = −N1 + N2T + M1 − S2 + h2 X12 + (h2 − h1 )Y12 , Φ15 = CP A + h2 (C − I)(Z1 + Z2 )A − h1 (C − I)Z2 A + F2 D, Φ16 = CP B + h2 (C − I)(Z1 + Z2 )B − h1 (C − I)Z2 B, Φ22 = −R− N2 − N2T + M2 + M2T − S2 − S2T + h2 X22 + (h2 − h1 )Y22 − F1 H, Φ26 = F2 H, Φ55 = AT P A + h2 AT (Z1 + Z2 )A − h1 AT Z2 A − D, Φ56 = AT P B + h2 AT (Z1 + Z2 )B − h1 AT Z2 B, Φ66 = B T P B + h2 B T (Z1 + Z2 )B − h1 B T Z2 B − H, Proof. Define η(l) = y(l + 1) − y(l).
(10.80)
From (10.80) and (10.75), we find that y(k + 1) = y(k) + η(k), η(k) = y(k + 1) − y(k) = (C − I)y(k) + Ag(y(k)) + Bg(y(k − d(k))).
240
10. Stability of Neural Networks with Time-Varying Delay
Choose the Lyapunov function to be V (k) = V1 (k) + V2 (k) + V3 (k) + V4 (k),
(10.81)
where V1 (k) = y T (k)P y(k), k−1
V2 (k) =
i=k−h1
y T (i)Q2 y(i),
i=k−h2
k−h 1 k−1
V3 (k) =
k−1
y T (i)Q1 y(i) + y T (i)Ry(i),
l=k−h2 i=l −1
V4 (k) =
k−1
η T (l)Z1 η(l) +
i=−h2 l=k+i
−h 1 −1 k−1
η T (l)Z2 η(l);
i=−h2 l=k+i
and P > 0, Q1 0, Q2 0, R > 0, Z1 > 0, and Z2 > 0 are to be determined. The following equations are true: k−1
η (l)Z1 η(l) =
i=k−h2 k−h 1 −1
k−1
k−d(k)−1 T
η T (l)Z1 η(l) +
i=k−h2
i=k−d(k) k−h 1 −1
k−d(k)−1
η T (l)Z2 η(l) =
i=k−h2
η T (l)Z1 η(l), (10.82)
η T (l)Z2 η(l) +
i=k−h2
η T (l)Z2 η(l). (10.83)
i=k−d(k)
Defining ΔV (k) = V (k + 1) − V (k) and using (10.82) and (10.83) yield ΔV (k) = ΔV1 (k) + ΔV2 (k) + ΔV3 (k) + ΔV4 (k), where ΔV1 (k) = (Cy(k) + Ag(y(k)) + Bg(y(k − d(k))))T P (Cy(k) +Ag(y(k)) + Bg(y(k − d(k)))) − y T (k)P y(k), k
ΔV2 (k) =
i=k+1−h1 k−1
−
i=k−h1
k
T
y (i)Q1 y(i) + y T (i)Q1 y(i) −
y T (i)Q2 y(i)
i=k+1−h2 k−1 T
y (i)Q2 y(i)
i=k−h2
= y T (k)(Q1 + Q2 )y(k) − y T (k − h1 )Q1 y(k − h1 ) −y T (k − h2 )Q2 y(k − h2 ),
(10.84)
10.4 Exponential Stability of Discrete-Time Recurrent Neural Networks k+1−h 1
ΔV3 (k) =
k
k−h 1 k−1
y T (i)Ry(i) −
l=k+1−h2 i=l
241
y T (i)Ry(i)
l=k−h2 i=l k−h 1 T
= (h2 − h1 + 1)y T (k)Ry(k) −
y (l)Ry(l)
l=k−h2
(h2 − h1 + 1)y T (k)Ry(k) − y T (k − d(k))Ry(k − d(k)), −1
ΔV4 (k) =
k
η T (l)Z1 η(l) −
i=−h2 l=k+i+1
+
−h 1 −1
k
−1
k−1
η T (l)Z1 η(l)
i=−h2 l=k+i
η T (l)Z2 η(l) −
i=−h2 l=k+i+1
−h 1 −1 k−1
η T (l)Z2 η(l)
i=−h2 l=k+i k−1
= h2 η T (k)(Z1 + Z2 )η(k) − h1 η T (k)Z2 η(k) −
η T (l)Z1 η(l)
l=k−d(k)
−
k−h 1 −1
k−d(k)−1
η T (l)Z2 η(l) −
η T (l)(Z1 + Z2 )η(l).
l=k−h2
l=k−d(k)
On the other hand, the following equations are true for any appropriately dimensioned matrices N , M , and S: ⎡ ⎤ k−1 0 = 2ζ T (k)N ⎣y(k) − y(k − d(k)) − η(l)⎦ , (10.85) l=k−d(k)
⎡ 0 = 2ζ T (k)M ⎣y(k − d(k)) − y(k − h2 ) −
k−d(k)−1
l=k−h2
⎡ 0 = 2ζ T (k)S ⎣y(k − h1 ) − y(k − d(k)) −
k−h 1 −1
⎤ η(l)⎦ ,
(10.86)
⎤ η(l)⎦ ,
(10.87)
l=k−d(k)
where T ζ(k) = y T (k), y T (k − d(k)) . ⎤
⎡ In addition, the following equations hold for any matrices X = ⎣ ⎡ 0 and Y = ⎣
⎤ Y11 Y12 ∗
Y22
⎦ 0:
X11 X12 ∗
X22
⎦
242
10. Stability of Neural Networks with Time-Varying Delay
0=
k−1
k−1
ζ T (k)Xζ(k) −
l=k−h2
ζ T (k)Xζ(k)
l=k−h2
= h2 ζ T (k)Xζ(k)−
k−1
0=
T
ζ (k)Y ζ(k) −
l=k−h2
k−h 1 −1
ζ T (k)Xζ(k), (10.88)
l=k−h2
l=k−d(k)
k−h 1 −1
k−d(k)−1
ζ T (k)Xζ(k)−
ζ T (k)Y ζ(k)
l=k−h2 k−h 1 −1
= (h2 − h1 )ζ T (k)Y ζ(k) −
l=k−d(k)
k−d(k)−1
ζ T (k)Y ζ(k) −
ζ T (k)Y ζ(k).
l=k−h2
(10.89) From (10.74), we have
gi (y(k)) − Fi− yi (k) × gi (y(k)) − Fi+ yi (k) 0, gi (y(k − d(k)))−Fi− yi (k − d(k)) × gi (y(k − d(k)))−Fi+ yi (k − d(k)) 0, i = 1, 2, · · · , n,
which are equivalent to ⎡ ⎤ ⎡ ⎤ ⎡ ⎤T Fj− + Fj+ − + T T ej ej ⎥ − ⎢ Fj Fj ej ej y(k) y(k) 2 ⎥⎣ ⎦ 0, ⎣ ⎦ ⎢ ⎣ F− + F+ ⎦ j j T T g(y(k)) g(y(k)) − ej ej ej ej 2 ⎡ ⎤ − + ⎡ ⎤ ⎡ ⎤T +F F j j − + ej eT − ⎢ Fj Fj ej eT y(k − d(k)) y(k − d(k)) j ⎥ j 2 ⎥⎣ ⎦ 0, ⎣ ⎦ ⎢ ⎣ F − +F + ⎦ j j T g(y(k−d(k))) g(y(k−d(k))) ej ej − ej eT j 2 j = 1, 2, · · · , n. Thus, the following inequalities are true for dj > 0 and hj > 0, j = 1, 2, · · · , n: ⎡ ⎤ ⎡ ⎤ ⎤T Fj− +Fj+ − + T T n ej ej ⎥ y(k) − ⎢ Fj Fj ej ej y(k) 2 ⎥⎣ ⎦, ⎦ ⎢ 0− dj ⎣ ⎣ Fj− +Fj+ ⎦ j=1 T T g(y(k)) g(y(k)) ej ej − ej ej 2 (10.90) ⎡
10.4 Exponential Stability of Discrete-Time Recurrent Neural Networks
0−
n
⎡ hj ⎣
j=1
⎡ ×⎣
⎤T
y(k − d(k)) g(y(k − d(k)))
y(k − d(k)) g(y(k − d(k)))
⎦
⎡ − + ⎢ Fj Fj ej eT j ⎢ ⎣ F − +F + j j ej eT − j
2
⎤
243
⎤ Fj− +Fj+ T ej ej ⎥ − 2 ⎥ ⎦ T ej ej
⎦,
(10.91)
where ej denotes the unit column vector, which is a single column matrix in which the element in the jth row is “1” and all the other elements are 0. Therefore, adding the terms on the right sides of (10.85)-(10.91) to ΔV (k) yields k−1
ΔV (k) ζ1T (k)Φζ1 (k) −
ζ2T (k, l)Ψ1 ζ2 (k, l)
l=k−d(k)
k−d(k)−1
−
ζ2T (k, l)Ψ2 ζ2 (k, l)−
l=k−h2
k−h 1−1
ζ2T (k, l)Ψ3 ζ2 (k, l),
(10.92)
l=k−d(k)
where ζ1T (k) = [y T (k), y T (k − d(k)), y T (k − h1 ), y T (k − h2 ), g T (y(k)), T g T y(k − d(k))]T , ζ2T (k, l) = ζ T (k), η T (l) ; and Φ and Ψi , i = 1, 2, 3 are defined in (10.76) and (10.77)-(10.79), respectively. If Φ < 0 and Ψi 0, i = 1, 2, 3, then ΔV (k) < 0 for any ζ1 (k) = 0. Thus, we have 2
ΔV (k) λmax (Φ) y(k) .
(10.93)
From the definition of V (k), it is easy to verify that 2
V (k) λmax (P ) y(k) +ρ1
k−1 i=k−h2
2
y(i) +ρ2
k−1
2
y(i + 1) , (10.94)
i=k−h2
where ρ1 = (h2 + 1 − h1)λmax (R) + λmax (Q1 ) + λmax (Q2 ) + 2h2 λmax (Z1 + Z2 ) and ρ2 = 2h2 λmax (Z1 + Z2 ). For any scalar μ > 1, inequality (10.94) together with (10.93) implies that μk+1 V (k + 1) − μk V (k) = μk+1 ΔV (k) + μk (μ − 1)V (k) k−1 2 2 μk (μ − 1)λmax (P ) + μk+1 λmax (Φ) y(k) + ρ1 μk (μ − 1) y(i) i=k−h2
+ρ2 μk (μ − 1)
k−1 i=k−h2
2
y(i + 1) .
(10.95)
244
10. Stability of Neural Networks with Time-Varying Delay
Furthermore, for any integer N h2 + 1, summing both sides of (10.95) over k from 0 to N − 1 yields μN V (N )−V (0) [(μ−1)λmax (P )+μλmax (Φ)]
N−1
μk y(k)
2
k=0
+ρ1 (μ−1)
N−1 k−1
2
μk y(i) +ρ2 (μ−1)
k=0 i=k−h2
N−1
k−1
2
μk y(i+1) . (10.96)
k=0 i=k−h2
An easy calculation shows that ( −1 i+h N −1−h i+h N −1 k−1 N −1 2 2 2 2 k μ y(i) + + k=0 i=k−h2
i=−h2 k=0
h2 μh2
sup i∈[−h2 ,0]
i=0
k=i+1 2
N −1
) μk y(i)
2
i=N −h2 k=i+1
y(i) +h2 μh2
N −1
2
μi y(i) . (10.97)
i=0
Similarly, we have N −1
k−1
2
μk y(i + 1) h2 μh2
k=0 i=k−h2
sup i∈[−h2 ,0]
2
y(i) + h2 μh2
N
2
μi y(i) .
i=1
(10.98) If we let ρ = max{λmax (P ), ρ1 , ρ2 }, and ρ0 = λmin (P ), it readily follows from (10.94) and (10.81) that V (0) ρ
sup
2
y(i) ,
(10.99)
i∈[−h2 ,0]
V (N ) ρ0 y(N )2 .
(10.100)
Then, from (10.96)-(10.98), we have μN V (N ) V (0) + [(μ − 1)λmax (P ) + μλmax (Φ) + (ρ1 + ρ2 )(μ − 1)h2 μh2 ] N −1 2 2 μk y(k) + (ρ1 + ρ2 )(μ − 1)h2 μh2 sup y(i) . × k=0
i∈[−h2 ,0]
In addition, we can verify that there exists a scalar μ0 > 1 such that (μ0 − 1)λmax (P ) + μ0 λmax (Φ) + (ρ1 + ρ2 )(μ0 − 1)h2 μh0 2 = 0.
(10.101)
Thus, using (10.99)-(10.101), we obtain y(N )2
! 1 "N 1 ρ + (ρ1 + ρ2 )(μ0 − 1)h2 μh0 2 sup y(i)2 . ρ0 μ0 i∈[−h2 ,0]
10.4 Exponential Stability of Discrete-Time Recurrent Neural Networks
245
That is, ! 1 "N 1 2 h2 x(N )−x ρ+(ρ1 +ρ2 )(μ0 −1)h2 μ0 sup x(i)−x∗ . ρ0 μ0 i∈[−h2 ,0] ∗ 2
So, system (10.72) is globally exponentially stable. This completes the proof.
Remark 10.4.1. Lyapunov function (10.81) differs from the one in [24] in that we added a new term, V4 (k), that contains information on h2 . This may reduce the conservativeness of criteria for discrete-time neural networks with a time-varying delay. Remark 10.4.2. In [25], regarding the difference of the Lyapunov function for a discrete-time recurrent neural network with a time-varying delay, k−1 −1 T −1 T T T the terms i=k−d(k) α (k)S1 Z1 S1 α(k) = d(k)α (k)S1 Z1 S1 α(k) and k−d(k)−1 T −1 T −1 T T i=k−h2 α (k)S2 Z1 S2 α(k) = (h2 − d(k))α (k)S2 Z1 S2 α(k) were increased to h2 αT (k)S1 Z1−1 S1T α(k) and (h2 − h1 )αT (k)S2 Z1−1 S2T α(k), respectively. That is, h2 = d(k) + h2 − d(k) was increased to 2h2 − h1 , which may lead to conservativeness. In contrast, the proof of Theorem 10.4.1 exploits the relationships among h1 , h2 , and d(k). 10.4.3 Numerical Examples The two numerical examples below demonstrate the effectiveness of the above criterion. Example 10.4.1. Consider discrete-time recurrent neural network (10.72) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 0.2 −0.2 0.1 −0.2 0.1 0 0.4 0 0 ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ C = ⎢ 0 0.3 0 ⎥ , A = ⎢ 0 −0.3 0.2 ⎥ , B = ⎢ −0.2 0.3 0.1 ⎥ , ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ 0 0 0.3 −0.2 −0.1 −0.2 0.1 −0.2 0.3 f1 (x) = tanh(0.6x), f2 (x) = tanh(−0.4x), f3 (x) = tanh(−0.2x). Using these parameters, we find ⎡ ⎤ ⎡ 0 0 0 −0.3 ⎢ ⎥ ⎢ ⎢ ⎥ ⎢ F1 = ⎢ 0 0 0 ⎥ , F2 = ⎢ 0 ⎣ ⎦ ⎣ 0 0 0 0
that ⎤ 0 0.2 0
0
⎥ ⎥ 0 ⎥. ⎦ 0.1
246
10. Stability of Neural Networks with Time-Varying Delay
This example is discussed in [24], where the neural activation function, gi (x), is assumed to be equal to fi (x). Table 10.5 lists values of the upper bound, h2 , that guarantee the exponential stability of system (10.72) for various values of the lower bound, h1 , obtained with the theorems in [24] and [25] and with Theorem 10.4.1. The results obtained with our theorem are clearly better. Table 10.5. Allowable upper bound, h2 , for various h1 (Example 10.4.1) h1
0
2
4
6
10
[24] and [25]
4
6
8
10
14
Theorem 10.4.1
11
13
15
17
21
Example 10.4.2. Consider discrete-time recurrent neural network (10.72) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ 0.26 0 0.1 0.2 −0.25 0.1 ⎦, A = ⎣ ⎦, B = ⎣ ⎦, C =⎣ 0 0.1 −0.15 0.1 0.02 0.08 f1 (x) = 0.5(|x + 1| + |x − 1|), f2 (x) = |x + 1| + |x − 1| . F2+
Condition (10.74) is satisfied for F1− = −1, F1+ = 1, F2− = −2, and = 2. Thus, ⎡ ⎤ ⎡ ⎤ −1 0 0 0 ⎦ , F2 = ⎣ ⎦. F1 = ⎣ 0 −4 0 0
This example is discussed in [25]. Table 10.6 lists values of the upper bound, h2 , that guarantee the exponential stability of system (10.72) for various values of the lower bound, h1 , obtained by solving the LMIs in [24], [25], and Theorem 10.4.1. For example, when h1 is 2, h2 is 4 in [24, 25]; but Theorem 10.4.1 produces a value of 11, which shows that our results are significantly better.
10.5 Conclusion This chapter employs the FWM and IFWM approaches to investigate the stability of neural networks with one or more time-varying delays. First, the
References
247
Table 10.6. Allowable upper bound, h2 , for various h1 (Example 10.4.2) h1
2
4
6
10
12
[24] and [25]
4
6
8
12
14
Theorem 10.4.1
11
13
15
19
21
stability of a neural network with multiple time-varying delays is considered; and the FWM approach is employed to derive a delay-dependent stability criterion, from which a delay-independent and rate-dependent criterion is obtained as a special case. Next, the IFWM approach is used to investigate the stability of a neural network with a time-varying interval delay; and a less conservative stability criterion is obtained. Then, the FWM and IFWM approaches are used to examine the exponential stability of a neural network with a time-varying delay; and criteria are obtained that are less conservative than others. Finally, the IFWM approach is used to investigate the exponential stability of a class of discrete-time recurrent neural networks with a time-varying delay; and a less conservative criterion is derived without ignoring any useful terms in the difference of a Lyapunov function.
References 1. G. P. Liu. Nonlinear Identification and Control: A Neural Network Approach. New York: Springer-Verlag, 2001. 2. Y. He, Q. G. Wang, M. Wu, and C. Lin. Delay-dependent state estimation for delayed neural networks. IEEE Transactions on Neural Networks, 17(4): 10771081, 2006. 3. Y. Zhang. Global exponential stability and periodic solutions of delay Hopfield neural networks. International Journal of Systems Science, 27(2): 227-231, 1996. 4. Y. Zhang, P. A. Heng, and A. W. C. Fu. Estimate of exponential convergence rate and exponential stability for neural networks. IEEE Transactions on Neural Networks, 10(6): 1487-1493, 1999. 5. T. L. Liao and F. C. Wang. Global stability for cellular neural networks with time delay. IEEE Transactions on Neural Networks, 11(6): 1481-1484, 2000. 6. S. Arik. An analysis of global asymptotic stability of delayed cellular neural networks. IEEE Transactions on Neural Networks, 13(5): 1239-1242, 2002. 7. S. Arik. An analysis of exponential stability of delayed neural networks with time varying delays. Neural Networks, 17(7): 1027-1031, 2004.
8. V. Singh. A generalized LMI-based approach to the global asymptotic stability of delayed cellular neural networks. IEEE Transactions on Neural Networks, 15(1): 223-225, 2004. 9. J. Cao and J. Wang. Global exponential stability and periodicity of recurrent neural networks with time delays. IEEE Transactions on Circuits and Systems I, 52(5): 920-931, 2005. 10. S. Xu, J. Lam, D. W. C. Ho, and Y. Zou. Improved global robust asymptotic stability criteria for delayed cellular neural networks. IEEE Transactions on Systems, Man, and Cybernetics-Part B, 35(6): 1317-1321, 2005. 11. Y. He, M. Wu, and J. H. She. An improved global asymptotic stability criterion for delayed cellular neural networks. IEEE Transactions on Neural Networks, 17(1): 250-252, 2006. 12. X. Liao, G. Chen, and E. N. Sanchez. Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach. Neural Networks, 15(7): 855-866, 2002. 13. X. Liao and C. Li. An LMI approach to asymptotical stability of multi-delayed neural networks. Physica D, 200(2): 139-155, 2005. 14. T. L. Liao, J. J. Yan, C. J. Cheng, and C. C. Hwang. Globally exponential stability condition of a class of neural networks with time-varying delays. Physics Letters A, 339(3-5): 333-342, 2005. 15. H. T. Lu. Global exponential stability analysis of Cohen-Grossberg neural networks. IEEE Transactions on Circuits and Systems Part II: Express Briefs, 52(8): 476-479, 2005. 16. Y. He, Q. G. Wang, and M. Wu. LMI-based stability criteria for neural networks with multiple time-varying delays. Physica D, 212(1-2): 126-136, 2005. 17. Y. He, M. Wu, and J. H. She. Delay-dependent exponential stability of delayed neural networks with time-varying delay. IEEE Transactions on Circuits and Systems II, 53(7): 553-557, 2006. 18. C. Hua, C. Long, and X. Guan. New results on stability analysis of neural networks with time-varying delays. Physics Letters A, 352(4-5): 335-340, 2006. 19. Y. He, G. P. Liu, and D. Rees. New delay-dependent stability criteria for neural networks with time-varying delay. IEEE Transactions on Neural Networks, 18(1): 310-314, 2007. 20. Q. Zhang, X. Wei, and J. Xu. Delay-dependent exponential stability of cellular neural networks with time-varying delays. Chaos, Solitons & Fractals, 23(4): 1363-1369, 2005. 21. X. X. Liao and J. Wang. Algebraic criteria for global exponential stability of cellular neural networks with multiple time delays. IEEE Transactions on Circuits and Systems I, 50(2): 268-275, 2003. 22. X. Li, L. Huang, and J. Wu. A new method of Lyapunov functionals for delayed cellular neural networks. IEEE Transactions on Circuits and Systems I, 51(11): 2263-2270, 2004.
23. W. H. Chen, X. Lu, and D. Y. Liang. Global exponential stability for discretetime neural networks with variable delays. Physics Letters A, 358(3): 186-198, 2006. 24. Y. R. Liu, Z. Wang, A. Serrano, and X. H. Liu. Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis. Physics Letters A, 362(5-6): 480-488, 2007. 25. Q. K. Song and Z. Wang. A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays. Physics Letters A, 368(1-2): 134-145, 2007. 26. W. Xiong and J. Cao. Global exponential stability of discrete-time CohenGrossberg neural networks. Neurocomputing, 64: 433-446, 2005. 27. Y. He, G. P. Liu, D. Rees, and M. Wu. Stability analysis for neural networks with time-varying interval delay. IEEE Transactions on Neural Networks, 18(6): 1850-1854, 2007. 28. M. Wu, F. Liu, P. Shi, Y. He, and R. Yokoyama. Exponential stability analysis for neural networks with time-varying delay. IEEE Transactions on Systems Man and Cybernetics-Part B, 38(4): 1152-1156, 2008. 29. M. Wu, F. Liu, P. Shi, Y. He, and R. Yokoyama. Improved free-weighting matrix approach for stability analysis of discrete-time recurrent neural networks with time-varying delay. IEEE Transactions on Circuits and Systems II, 55(7): 690694, 2008. 30. M. Wu and Y. He. Parameter-dependent Lyapunov functional for systems with multiple time delays. Journal of Control Theory and Applications, 2(3): 239-245, 2004. 31. G. J. Yu, C. Y. Lu, J. S. H. Tsai, B. D. Liu, and T. J. Su. Stability of cellular neural networks with time-varying delay. IEEE Transactions on Circuits and Systems I, 50(5): 677-678, 2003. 32. J. Cao and J. Wang. Global asymptotic stability of recurrent neural networks with Lipschitz-continuous activation functions and time-varying delays. IEEE Transactions on Circuits and Systems I, 50(1): 34-44, 2003.
11. Stability of T-S Fuzzy Systems with Time-Varying Delay
Takagi-Sugeno (T-S) fuzzy systems [1] combine the flexibility of fuzzy logic and the rigorous mathematics of a nonlinear system into a unified framework. A variety of analytical methods have been used to express asymptotic stability criteria for them in terms of LMIs [2–5]. All of these methods are for systems with no delay. In the real world, however, delays often occur in chemical, metallurgical, biological, mechanical, and other types of dynamic systems. Furthermore, a delay usually causes instability and degrades performance. Thus, the analysis of the stability of T-S fuzzy systems with delays is not only of theoretical interest, but also of practical value [6–24]. Stability criteria for T-S fuzzy systems are generally classified into two types: delay-dependent and delay-independent. Since delay-dependent criteria make use of information on the lengths of delays, they are less conservative than delay-independent ones, especially when the delay is small. The delay-dependent stabilization of nominal T-S fuzzy systems with a constant delay was first discussed in [8] based on the Lyapunov-Krasovskii functional approach and Moon et al.'s inequality. Moreover, the stability of uncertain T-S fuzzy systems with a time-varying delay was studied in [13]. However, there is still room for further investigation. For example, hẋᵀ(t)Z ẋ(t) − ∫_{t−d(t)}^{t} ẋᵀ(s)Z ẋ(s)ds was used as an estimate of the derivative of −∫_{−h}^{0} ∫_{t+θ}^{t} ẋᵀ(s)Z ẋ(s)ds dθ, where 0 ≤ d(t) ≤ h, and the term −∫_{t−h}^{t−d(t)} ẋᵀ(s)Z ẋ(s)ds was ignored, which may lead to considerable conservativeness. The IFWM approach [25–27] has recently been devised to study the stability of time-delay systems, and less conservative stability criteria have been derived. This chapter employs it to examine the asymptotic stability of T-S fuzzy systems with a time-varying delay [28, 29]. A consideration of the relationships among the delay, its upper bound, and their difference yields improved LMI-based asymptotic stability criteria for uncertain T-S fuzzy
systems with a time-varying delay that do not ignore any useful terms in the derivative of a Lyapunov-Krasovskii functional. Finally, two numerical examples demonstrate the effectiveness and advantages of the method.
11.1 Problem Formulation

Consider a fuzzy system with a time-varying delay that is represented by a T-S fuzzy model composed of a set of fuzzy implications, each of which is expressed as a linear system model. The ith rule of the model has the following form:

Rule i: IF Θ1(t) is μi1, and · · · , and Θp(t) is μip, THEN

  ẋ(t) = (Ai + ΔAi(t))x(t) + (Adi + ΔAdi(t))x(t − d(t)),
  x(t) = φ(t),  t ∈ [−h, 0],  i = 1, 2, · · · , r.    (11.1)

In this equation, Θ1(t), Θ2(t), · · · , Θp(t) are the premise variables; μij, i = 1, 2, · · · , r, j = 1, 2, · · · , p, is a fuzzy set; x(t) ∈ Rⁿ is the state vector; Ai and Adi, i = 1, 2, · · · , r, are constant real matrices with appropriate dimensions; the scalar r is the number of IF-THEN rules; and d(t) is a time-varying delay satisfying

  0 ≤ d(t) ≤ h    (11.2)

and

  ḋ(t) ≤ μ,    (11.3)

where μ and h are constants; and the matrices ΔAi(t) and ΔAdi(t), i = 1, 2, · · · , r, are the uncertainties of the system and have the form

  [ΔAi(t)  ΔAdi(t)] = DF(t)[Ei  Edi],    (11.4)

where D, Ei, and Edi, i = 1, 2, · · · , r, are known constant matrices and F(t) is an unknown matrix function with Lebesgue measurable elements bounded by

  Fᵀ(t)F(t) ≤ I,  ∀t.    (11.5)

Fuzzy blending produces the overall fuzzy model
  ẋ(t) = Σ_{i=1}^{r} wi(θ(t))[(Ai + ΔAi(t))x(t) + (Adi + ΔAdi(t))x(t − d(t))] / Σ_{i=1}^{r} wi(θ(t))
       = Σ_{i=1}^{r} ρi(θ(t))[(Ai + ΔAi(t))x(t) + (Adi + ΔAdi(t))x(t − d(t))]
       = Ā x(t) + Ād x(t − d(t)),
  x(t) = φ(t),  t ∈ [−h, 0],    (11.6)

where θ = [θ1, θ2, · · · , θp]ᵀ; wi : Rᵖ → [0, 1], i = 1, 2, · · · , r, is the membership function of the system for plant rule i; and

  ρi(θ(t)) = wi(θ(t)) / Σ_{i=1}^{r} wi(θ(t)),
  Ā = Σ_{i=1}^{r} ρi(θ(t))(Ai + ΔAi(t)),
  Ād = Σ_{i=1}^{r} ρi(θ(t))(Adi + ΔAdi(t)).

The fuzzy weighting functions ρi(θ(t)) clearly satisfy

  ρi(θ(t)) ≥ 0,  Σ_{i=1}^{r} ρi(θ(t)) = 1.
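The blending in (11.6) is just a state-dependent convex combination of the rule matrices. The following minimal sketch illustrates it numerically; the rule matrices A1, A2 and the weights ρ1 = sin²θ, ρ2 = cos²θ are taken from Example 11.3.1 later in this chapter and are used here only as an assumed illustration.

```python
import numpy as np

# Sketch: evaluate the blended matrix A(θ) = Σ_i ρ_i(θ) A_i of (11.6)/(11.7)
# for a two-rule model. Any nonnegative, normalized weights would be handled
# in exactly the same way.
A1 = np.array([[-2.0, 0.0], [0.0, -0.9]])
A2 = np.array([[-1.5, 1.0], [0.0, -0.75]])

def blended_A(theta):
    w = np.array([np.sin(theta) ** 2, np.cos(theta) ** 2])   # rule weights w_i(θ)
    rho = w / w.sum()                                         # normalized ρ_i(θ), Σρ_i = 1
    return rho[0] * A1 + rho[1] * A2

print(blended_A(0.3))   # a convex combination of A1 and A2
```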
11.2 Stability Analysis

In this section, we use the IFWM approach to obtain a stability criterion for a T-S fuzzy system with a time-varying delay. First, we take up the case where ΔAi(t) = 0 and ΔAdi(t) = 0 in system (11.1); that is,

  ẋ(t) = Ax(t) + Ad x(t − d(t)),
  x(t) = φ(t),  t ∈ [−h, 0],    (11.7)

where

  A = Σ_{i=1}^{r} ρi(θ(t))Ai,  Ad = Σ_{i=1}^{r} ρi(θ(t))Adi.
The Lyapunov-Krasovskii stability theorem and the IFWM approach give us the following theorem.

Theorem 11.2.1. Consider system (11.7). Given scalars h ≥ 0 and μ, the system is stable if there exist matrices P > 0, Q ≥ 0, W ≥ 0, Z > 0, and X = [X11 X12; ∗ X22] ≥ 0, and any appropriately dimensioned matrices N = [N1ᵀ N2ᵀ]ᵀ and M = [M1ᵀ M2ᵀ]ᵀ such that the following LMIs hold for i = 1, 2, · · · , r:

  Φi = [Φ11  Φ12  −M1  hAiᵀZ;
        ∗    Φ22  −M2  hAdiᵀZ;
        ∗    ∗    −W   0;
        ∗    ∗    ∗    −hZ] < 0,    (11.8)

  Ψ1 = [X  N; ∗  Z] ≥ 0,    (11.9)

  Ψ2 = [X  M; ∗  Z] ≥ 0,    (11.10)

where

  Φ11 = P Ai + AiᵀP + Q + W + N1 + N1ᵀ + hX11,
  Φ12 = P Adi − N1 + N2ᵀ + M1 + hX12,
  Φ22 = −(1 − μ)Q − N2 − N2ᵀ + M2 + M2ᵀ + hX22.
Proof. Choose the fuzzy-weight-dependent Lyapunov-Krasovskii functional candidate to be

  V(x_t) = xᵀ(t)P x(t) + ∫_{t−d(t)}^{t} xᵀ(s)Q x(s)ds + ∫_{t−h}^{t} xᵀ(s)W x(s)ds + ∫_{−h}^{0} ∫_{t+θ}^{t} ẋᵀ(s)Z ẋ(s)ds dθ,    (11.11)

where P > 0, Q ≥ 0, W ≥ 0, and Z > 0 are to be determined.
From the Newton-Leibnitz formula, the following equations are true for any appropriately dimensioned matrices N and M:

  0 = 2ζ1ᵀ(t)N [x(t) − x(t − d(t)) − ∫_{t−d(t)}^{t} ẋ(s)ds],    (11.12)
  0 = 2ζ1ᵀ(t)M [x(t − d(t)) − x(t − h) − ∫_{t−h}^{t−d(t)} ẋ(s)ds],    (11.13)

where ζ1(t) = [xᵀ(t), xᵀ(t − d(t))]ᵀ.
On the other hand, for any matrix X = [X11 X12; ∗ X22] ≥ 0, the following equation holds:

  0 = ∫_{t−h}^{t} ζ1ᵀ(t)X ζ1(t)ds − ∫_{t−h}^{t} ζ1ᵀ(t)X ζ1(t)ds
    = hζ1ᵀ(t)X ζ1(t) − ∫_{t−d(t)}^{t} ζ1ᵀ(t)X ζ1(t)ds − ∫_{t−h}^{t−d(t)} ζ1ᵀ(t)X ζ1(t)ds.    (11.14)

The following equation is also true:

  ∫_{t−h}^{t} ẋᵀ(s)Z ẋ(s)ds = ∫_{t−d(t)}^{t} ẋᵀ(s)Z ẋ(s)ds + ∫_{t−h}^{t−d(t)} ẋᵀ(s)Z ẋ(s)ds.    (11.15)
Using (11.15) and calculating the derivative of V(x_t) in (11.11) along the solutions of system (11.7) yield

  V̇(x_t) = xᵀ(t)[P A + AᵀP]x(t) + 2xᵀ(t)P Ad x(t − d(t)) + xᵀ(t)Q x(t)
           − (1 − ḋ(t))xᵀ(t − d(t))Q x(t − d(t)) + xᵀ(t)W x(t)
           − xᵀ(t − h)W x(t − h) + hẋᵀ(t)Z ẋ(t) − ∫_{t−h}^{t} ẋᵀ(s)Z ẋ(s)ds
         ≤ xᵀ(t)[P A + AᵀP]x(t) + 2xᵀ(t)P Ad x(t − d(t)) + xᵀ(t)Q x(t)
           − (1 − μ)xᵀ(t − d(t))Q x(t − d(t)) + xᵀ(t)W x(t) − xᵀ(t − h)W x(t − h)
           + hẋᵀ(t)Z ẋ(t) − ∫_{t−d(t)}^{t} ẋᵀ(s)Z ẋ(s)ds − ∫_{t−h}^{t−d(t)} ẋᵀ(s)Z ẋ(s)ds.    (11.16)

Then, adding the terms on the right sides of equations (11.12)-(11.14) to V̇(x_t) yields

  V̇(x_t) ≤ xᵀ(t)[P A + AᵀP]x(t) + 2xᵀ(t)P Ad x(t − d(t)) + xᵀ(t)Q x(t)
           − (1 − μ)xᵀ(t − d(t))Q x(t − d(t)) + xᵀ(t)W x(t) − xᵀ(t − h)W x(t − h)
           + hẋᵀ(t)Z ẋ(t) − ∫_{t−d(t)}^{t} ẋᵀ(s)Z ẋ(s)ds − ∫_{t−h}^{t−d(t)} ẋᵀ(s)Z ẋ(s)ds
           + 2ζ1ᵀ(t)N [x(t) − x(t − d(t)) − ∫_{t−d(t)}^{t} ẋ(s)ds]
           + 2ζ1ᵀ(t)M [x(t − d(t)) − x(t − h) − ∫_{t−h}^{t−d(t)} ẋ(s)ds]
           + hζ1ᵀ(t)X ζ1(t) − ∫_{t−d(t)}^{t} ζ1ᵀ(t)X ζ1(t)ds − ∫_{t−h}^{t−d(t)} ζ1ᵀ(t)X ζ1(t)ds
         = ζᵀ(t)Ξ ζ(t) − ∫_{t−d(t)}^{t} ηᵀ(t, s)Ψ1 η(t, s)ds − ∫_{t−h}^{t−d(t)} ηᵀ(t, s)Ψ2 η(t, s)ds,    (11.17)

where

  ζ(t) = [xᵀ(t), xᵀ(t − d(t)), xᵀ(t − h)]ᵀ,
  η(t, s) = [ζᵀ(t), ẋᵀ(s)]ᵀ,
  Ξ = [Φ̃11 + hAᵀZA  Φ̃12 + hAᵀZAd   −M1;
       ∗             Φ̃22 + hAdᵀZAd  −M2;
       ∗             ∗              −W],
  Φ̃11 = P A + AᵀP + Q + W + N1 + N1ᵀ + hX11,
  Φ̃12 = P Ad − N1 + N2ᵀ + M1 + hX12,
  Φ̃22 = −(1 − μ)Q − N2 − N2ᵀ + M2 + M2ᵀ + hX22;

and Ψ1 and Ψ2 are defined in (11.9) and (11.10), respectively.
If Ξ < 0 and Ψi ≥ 0, i = 1, 2, then V̇(x_t) < −ε‖x(t)‖² for a sufficiently small ε > 0. From the Schur complement, Ξ < 0 is equivalent to the following inequality:

  Φ̃ = [Φ̃11  Φ̃12  −M1  hAᵀZ;
       ∗     Φ̃22  −M2  hAdᵀZ;
       ∗     ∗    −W   0;
       ∗     ∗    ∗    −hZ] < 0.    (11.18)

That is, if Φ̃ < 0 and Ψi ≥ 0, i = 1, 2, then V̇(x_t) < −ε‖x(t)‖² for a sufficiently small ε > 0. Furthermore, (11.8) implies Σ_{i=1}^{r} ρi(θ(t))Φi < 0, which is equivalent to (11.18). Therefore, if LMIs (11.8)-(11.10) are feasible, then system (11.7) is asymptotically stable. This completes the proof.
Based on Theorem 11.2.1, we have the following theorem for uncertain T-S fuzzy system (11.1).
Theorem 11.2.2. Consider system (11.1). Given scalars h ≥ 0 and μ, the system is robustly stable if there exist matrices P > 0, Q ≥ 0, W ≥ 0, Z > 0, and X = [X11 X12; ∗ X22] ≥ 0, any appropriately dimensioned matrices N = [N1ᵀ N2ᵀ]ᵀ and M = [M1ᵀ M2ᵀ]ᵀ, and a scalar λ > 0 such that the following LMIs hold for i = 1, 2, · · · , r:

  [Φ11 + λEiᵀEi   Φ12 + λEiᵀEdi    −M1   hAiᵀZ    P D;
   ∗              Φ22 + λEdiᵀEdi   −M2   hAdiᵀZ   0;
   ∗              ∗                −W    0        0;
   ∗              ∗                ∗     −hZ      hZD;
   ∗              ∗                ∗     ∗        −λI] < 0,    (11.19)

  Ψ1 = [X  N; ∗  Z] ≥ 0,    (11.20)

  Ψ2 = [X  M; ∗  Z] ≥ 0,    (11.21)

where Φij, i = 1, 2, i ≤ j ≤ 2, are defined in (11.8).

Proof. Replacing Ai and Adi in (11.8) with Ai + DF(t)Ei and Adi + DF(t)Edi, respectively, allows us to write (11.8) for system (11.1) as

  Φ + [P D; 0; 0; hZD] F(t) [Ei  Edi  0  0] + [Eiᵀ; Ediᵀ; 0; 0] Fᵀ(t) [DᵀP  0  0  hDᵀZ] < 0.    (11.22)

According to Lemma 2.6.2, (11.22) is true if there exists a scalar λ > 0 such that the following inequality holds:

  Φ + λ⁻¹ [P D; 0; 0; hZD] [DᵀP  0  0  hDᵀZ] + λ [Eiᵀ; Ediᵀ; 0; 0] [Ei  Edi  0  0] < 0.    (11.23)
By the Schur complement, (11.23) is equivalent to (11.19). This completes the proof.
Remark 11.2.1. In the derivative of the Lyapunov-Krasovskii functional for T-S fuzzy systems with a time-varying delay in [13], fixed weighting matrices express the relationships among the terms in the Newton-Leibnitz formula, which may lead to conservativeness. In contrast, we employ two FWMs, N and M, to express those relationships. That makes our criterion less conservative.

Remark 11.2.2. In the derivative in [13] mentioned in the previous remark, the negative term −∫_{t−h}^{t−d(t)} ẋᵀ(s)Z ẋ(s)ds in V̇(x_t) was ignored, which may lead to conservativeness. In contrast, the proofs of Theorems 11.2.1 and 11.2.2 show that this negative term is retained; and a new FWM, M, takes into account the relationships among h, d(t), and their difference.
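In practice, conditions such as (11.8)-(11.10) are checked with a semidefinite-programming solver. The following sketch encodes the LMIs of Theorem 11.2.1 in CVXPY for the two-rule system of Example 11.3.1 below; the helper name, the strict-inequality margin eps, and the use of CVXPY's default SDP solver are assumptions of this illustration, not part of the theorem.

```python
import numpy as np
import cvxpy as cp

n = 2
A_rules  = [np.array([[-2.0, 0.0], [0.0, -0.9]]),  np.array([[-1.5, 1.0], [0.0, -0.75]])]
Ad_rules = [np.array([[-1.0, 0.0], [-1.0, -1.0]]), np.array([[-1.0, 0.0], [1.0, -0.85]])]

def sym(expr):
    # expressions below are symmetric by construction; symmetrize for the solver
    return 0.5 * (expr + expr.T)

def theorem_11_2_1_feasible(h, mu, eps=1e-6):
    P, Q, W, Z, X11, X22 = (cp.Variable((n, n), symmetric=True) for _ in range(6))
    X12, N1, N2, M1, M2 = (cp.Variable((n, n)) for _ in range(5))
    I, Zero = np.eye(n), np.zeros((n, n))

    X = cp.bmat([[X11, X12], [X12.T, X22]])
    N, M = cp.vstack([N1, N2]), cp.vstack([M1, M2])
    cons = [P >> eps * I, Q >> 0, W >> 0, Z >> eps * I, sym(X) >> 0,
            sym(cp.bmat([[X, N], [N.T, Z]])) >> 0,    # Psi_1 >= 0, (11.9)
            sym(cp.bmat([[X, M], [M.T, Z]])) >> 0]    # Psi_2 >= 0, (11.10)

    for Ai, Adi in zip(A_rules, Ad_rules):            # one LMI (11.8) per fuzzy rule
        Phi11 = P @ Ai + Ai.T @ P + Q + W + N1 + N1.T + h * X11
        Phi12 = P @ Adi - N1 + N2.T + M1 + h * X12
        Phi22 = -(1 - mu) * Q - N2 - N2.T + M2 + M2.T + h * X22
        Phi = cp.bmat([[Phi11,      Phi12,       -M1,  h * Ai.T @ Z],
                       [Phi12.T,    Phi22,       -M2,  h * Adi.T @ Z],
                       [-M1.T,      -M2.T,       -W,   Zero],
                       [h * Z @ Ai, h * Z @ Adi, Zero, -h * Z]])
        cons.append(sym(Phi) << -eps * np.eye(4 * n)) # Phi_i < 0, (11.8)

    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == cp.OPTIMAL

print(theorem_11_2_1_feasible(h=1.0, mu=0.1))   # should be feasible per Table 11.1
```

Sweeping h upward until the feasibility check fails reproduces, up to solver tolerances, the allowable delay bounds reported in Table 11.1.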
11.3 Numerical Examples

This section provides two numerical examples that demonstrate the effectiveness of the above criteria.

Example 11.3.1. Consider a fuzzy system with a delay but without any uncertainties. The T-S fuzzy model of this system has the form

  ẋ(t) = Σ_{i=1}^{2} ρi (Ai x(t) + Adi x(t − d(t))),

where

  A1 = [−2 0; 0 −0.9],  Ad1 = [−1 0; −1 −1],
  A2 = [−1.5 1; 0 −0.75],  Ad2 = [−1 0; 1 −0.85],
  ρ1 = sin²(θ(t)),  ρ2 = cos²(θ(t)).
This example is discussed in [13]. Table 11.1 lists values of the upper bound, h, that guarantee the stability of system (11.7) for various μ obtained by solving the LMIs in [13] and those in Theorem 11.2.1.
Table 11.1. Allowable upper bound, h, for various μ

μ                0       0.01    0.1     0.5     unknown μ
[13]             0.583   —       —       —       —
Theorem 11.2.1   1.348   1.340   1.280   1.135   1.099
When the delay is time-invariant (μ = 0), Theorem 11.2.1 produces better results than the method in [13]. Furthermore, when the delay is time-varying, the corollary in [13] fails to verify the stability of the system, while Theorem 11.2.1 does verify it.

Example 11.3.2. Consider a fuzzy system with a delay and uncertainties. The T-S fuzzy model of this system has the form

  ẋ(t) = Σ_{i=1}^{2} ρi [(Ai + ΔAi(t))x(t) + (Adi + ΔAdi(t))x(t − d(t))],
where ⎡ A1 = ⎣
−2
⎤ 1
0.5 −1
⎦ , Ad1 = ⎣
⎡ E1 = ⎣
⎤ 1.6
0
0
0.05
−1
0
0
−0.03
⎤ 0.1
0
0
0.3
⎡
⎦, I = ⎣
⎡
⎦ , A2 = ⎣
⎡
⎤ 0.03
⎤ 0
−1 −1
⎦ , Ed1 = ⎣
⎡ D=⎣
⎡
⎤
−2
0
0
−1
⎡
⎦ , Ad2 = ⎣
⎡
⎦ , E2 = ⎣
0
0
−1
⎤ 1.6
0
0
−0.05
⎤
−1.6 ⎡
⎦ , Ed2 = ⎣
⎤ 0.1
0
0
0.3
⎤ 1 0
⎦,
⎦.
0 1
Table 11.2 lists values of the upper bound, h, that guarantee the stability of system (11.1) for various μ obtained with Theorem 1 in [13] and Theorem 11.2.2.

Table 11.2. Allowable upper bound, h, for various μ

μ                0       0.01    0.1     0.5     unknown μ
[13]             0.950   0.944   0.892   0.637   —
Theorem 11.2.2   1.353   1.348   1.303   1.147   1.081
When μ is known, our theorem produces better results than the one in [13]. This difference arises because our theorem not only retains all the terms in the derivative of the Lyapunov-Krasovskii functional, but also considers the relationships among h, d(t), and h − d(t).
11.4 Conclusion

This chapter applies the IFWM approach to a T-S fuzzy system with a time-varying delay to obtain less conservative LMI-based asymptotic stability criteria that do not ignore any terms in the derivative of the Lyapunov-Krasovskii functional. Two numerical examples demonstrate that this method is an improvement over others.
References 1. T. Takagi and M. Sugeno. Fuzzy identification of systems and its application to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics, 15(1): 116-132, 1985. 2. K. Tanaka and M. Sano. A robust stabilization problem of fuzzy control systems and its application to backing up control of a truck-trailer. IEEE Transactions on Fuzzy Systems, 2(2): 119-134, 1994. 3. H. O. Wang, K. Tanaka, and M. F. Griffin. An approach to fuzzy control of nonlinear systems: stability and design issues. IEEE Transactions on Fuzzy Systems, 4(1): 14-23, 1996. 4. M. C. Teixeira and S. H. Zak. Stabilizing controller design for uncertain nonlinear systems using fuzzy accepted models. IEEE Transactions on Fuzzy Systems, 7(2): 133-144, 1999. 5. B. S. Chen, C. S. Tseng, and H. J. Uang. Mixed H2 /H∞ fuzzy output feedback control design for nonlinear dynamic systems: an LMI approach. IEEE Transactions on Fuzzy Systems, 8(3): 249-265, 2000. 6. Y. Y. Cao and P. M. Frank. Stability analysis and synthesis of nonlinear timedelay systems via linear Takagi-Sugeno fuzzy models. Fuzzy Sets and Systems, 124(2): 213-229, 2001. 7. W. J. Chang and W. Chang. Fuzzy control of continuous time-delay affine TS fuzzy systems. Proceedings of the 2004 IEEE International Conference on Networking, Sensing and Control, Taipei, China, 618-623, 2004. 8. X. P. Guan and C. L. Chen. Delay-dependent guaranteed cost control for T-S fuzzy systems with time delays. IEEE Transactions on Fuzzy Systems, 12(2): 236-249, 2004.
9. J. Yoneyama. Robust control analysis and synthesis for uncertain fuzzy systems with time-delay. The 12th IEEE International Conference on Fuzzy Systems, St. Louis, USA, 396-401, 2003. 10. E. G. Tian and C. Peng. Delay-dependent stability analysis and synthesis of uncertain T-S fuzzy systems with time-varying delay. Fuzzy Set and Systems, 157(4): 544-559, 2006. 11. M. Akar and U. Ozguner. Decentralized techniques for the analysis and control of Takagi-Sugeno fuzzy systems. IEEE Transactions on Fuzzy Systems, 8(6): 691-704, 2000. 12. X. Jiang, Q. L. Han, and X. Yu. Robust H∞ control for uncertain TakagiSugeno fuzzy systems with interval time-varying delay. 2005 American Control Conference, Portland, Oregon, 1114-1119, 2005. 13. C. G. Li, H. J. Wang, and X. F. Liao. Delay-dependent robust stability of uncertain fuzzy systems with time-varying delays. IEE Proceedings Control Theory & Applications, 151(4): 417-421, 2004. 14. C. Lin, Q. G. Wang, and T. H. Lee. Improvement on observer-based H∞ control for T-S fuzzy systems. Automatica, 41(9): 1651-1656, 2005. 15. C. Lin, Q. G. Wang, and T. H. Lee. Stabilization of uncertain fuzzy time-delay systems via variable structure control approach. IEEE Transactions on Fuzzy Systems, 13(6): 787-798, 2005. 16. C. Lin, Q. G. Wang, and T. H. Lee. H∞ output tracking control for nonlinear systems via T-S fuzzy model approach. IEEE Transactions on Systems, Man, and Cybernetics-Part B, 36(2): 450-457, 2006. 17. C. Lin, Q. G. Wang, and T. H. Lee. Delay-dependent LMI conditions for stability and stabilization of T-S fuzzy systems with bounded time-delay. Fuzzy Sets and Systems, 157(9): 1229-1247, 2006. 18. C. Lin, Q. G. Wang, and T. H. Lee. Less conservative stability conditions for fuzzy large-scale systems with time delays. Chaos, Solitons & Fractals, 29(5): 1147-1154, 2006. 19. C. Lin, Q. G. Wang, and T. H. Lee. Stability and stabilization of a class of fuzzy time-delay descriptor systems. IEEE Transactions on Fuzzy Systems, 14(4): 542551, 2006. 20. C. Lin, Q. G. Wang, T. H. Lee, and B. Chen. H∞ filter design for nonlinear systems with time-delay through T-S fuzzy model approach. IEEE Transactions on Fuzzy Systems, 16(3): 739-746, 2008. 21. C. Lin, Q. G. Wang, T. H. Lee, and Y. He. Stability conditions for time-delay fuzzy systems using fuzzy weighting-dependent approach. IET ProceedingsControl Theory and Applications, 1(1): 127-132, 2007. 22. C. Lin, Q. G. Wang, T. H. Lee, and Y. He. LMI Approach to Analysis and Control of Takagi-Sugeno Fuzzy Systems with Time Delay. New York: SpringerVerlag, 2007. 23. C. Lin, Q. G. Wang, T. H. Lee, and Y. He. Design of observer-based H∞ control for fuzzy time-delay systems. IEEE Transactions on Fuzzy Systems, 16(2): 534543, 2008.
24. B. Chen, X. P. Liu, S. C. Tong, and C. Lin. Observer-based stabilization of T-S fuzzy systems with input delay. IEEE Transactions on Fuzzy Systems, 16(3): 652-663, 2008. 25. Y. He, Q. G. Wang, L. H. Xie, and C. Lin. Further improvement of freeweighting matrices technique for systems with time-varying delay. IEEE Transactions on Automatic Control, 52(2): 293-299, 2007. 26. Y. He, Q. G. Wang, C. Lin, and M. Wu. Delay-range-dependent stability for systems with time-varying delay. Automatica, 43(2): 371-376, 2007. 27. Y. He, G. P. Liu, and D. Rees. New delay-dependent stability criteria for neural networks with time-varying delay. IEEE Transactions on Neural Networks, 18(1): 310-314, 2007. 28. F. Liu, M. Wu, Y. He, Y. C. Zhou, and R. Yokoyama. New delay-dependent stability criteria for T-S fuzzy systems with a time-varying delay. Proceedings of the 17th World Congress, Seoul, Korea, 254- 258, 2008. 29. F. Liu, M. Wu, Y. He, Y. C. Zhou, and R. Yokoyama. New delay-dependent stability analysis and stabilizing design for T-S fuzzy systems with a timevarying delay. Proceeding of the 27th Chinese Control Conference, Kunming, China, 340-344, 2008.
12. Stability and Stabilization of NCSs
Closed control loops in communication networks are becoming more and more common as network hardware becomes cheaper and use of the Internet expands. Feedback control systems in which control loops are closed through a real-time network are called NCSs. In an NCS, network-induced delays of variable length occur during data exchange between devices (sensor, controller, actuator) connected to the network. This can degrade the performance of the control system and can even destabilize it [1–11]. It is important to make these delays bounded and as small as possible. On the other hand, it is also necessary to design a controller that guarantees the stability of an NCS for delays less than the maximum allowable delay bound (MADB) [12], which is also called the maximum allowable transfer interval (MATI) [5, 6]. Delay-dependent criteria for the MADB are attracting attention with regard to the stability of non-networked control systems with a delay [13–18]. The main methods in the literature are based on four fixed model transformations. Among them, the descriptor system combined with Park’s or Moon et al.’s inequality has been the most productive approach to deriving delay-dependent criteria [19–21]. However, the FWM approach is a better way [22, 23]. As for NCSs, [9] used the sample-date method to establish stability conditions for a continuous-time system under the assumption that the network-induced delay was less than the sampling period. Methods of calculating the MADB for an NCS using Moon et al.’s inequality for both discrete-time and continuous-time plants were presented in [12, 24]. [25] uses the FWM approach to establish a discrete-time state-space model. A new model for a network-induced delay that is larger than the sampling period was presented in [26], which used the FWM approach to derive a new criterion that guarantees the stability of an NCS. In addition, [27] presented a similar idea combined with an augmented Lyapunov-Krasovskii functional, and [28] used it to investigate the H∞ control of NCSs.
However, [29, 30] pointed out that [19, 21–23, 31] and other reports ignore useful terms in the derivative of the Lyapunov-Krasovskii functional for linear systems with a time-varying delay. The same problem appears in studies on the stabilization and H∞ control of NCSs in [26, 28]. For example, in [26], ηẋᵀ(t)T ẋ(t) − ∫_{i_k h}^{t} ẋᵀ(s)T ẋ(s)ds was used as an estimate of the derivative of −∫_{−η}^{0} ∫_{t+θ}^{t} ẋᵀ(s)T ẋ(s)ds dθ, and the term −∫_{t−η}^{i_k h} ẋᵀ(s)T ẋ(s)ds was ignored, which may lead to considerable conservativeness. Although [29, 30] retain all the terms and present an improved delay-dependent stability criterion for systems with a time-varying delay, there is room for further investigation. Those reports do not consider the relationships among a time-varying delay, its upper bound, and the difference between them. For instance, they increase the terms t − i_k h and η − (t − i_k h) to η; or in other words, they increase η = (t − i_k h) + (η − (t − i_k h)) to 2η, which may lead to conservativeness. On the other hand, [26] presented a parameter-tuning method for obtaining the gain of a networked state-feedback controller; but the problem with it is that it is very difficult to choose the parameters. Moon et al. devised a CCL algorithm that enables the design of a delay-dependent state-feedback stabilization controller [32]; but it has the drawback that the stop condition for iteration is very strict because the gain matrix and other Lyapunov matrices obtained in a previous iteration step must satisfy one or more matrix inequalities. However, once the gain matrix is determined, the stabilization conditions derived by this method reduce to LMIs. So, the iteration can be stopped if the LMIs with that gain matrix are feasible, in which case the other Lyapunov matrices are decision variables rather than given ones. This chapter first uses the IFWM approach to establish an improved stability condition for NCSs [33] that does not ignore any terms in the derivative of the Lyapunov-Krasovskii functional, but rather considers the relationships among a network-induced delay, its upper bound, and the difference between them. This condition and an ICCL algorithm, which has an improved stop condition, are used to design a state-feedback networked controller. Numerical examples demonstrate the effectiveness and advantages of this method.
12.1 Modeling of NCSs with Network-Induced Delay

Consider the following linear system:

  ẋ(t) = Ax(t) + Bu(t),    (12.1)
where x(t) ∈ Rⁿ is the state vector; u(t) ∈ Rᵐ is the controlled input vector; and A and B are constant matrices with appropriate dimensions. For convenience, we make the following assumptions.

Assumption 12.1.1 The NCS consists of a time-driven sensor, an event-driven controller, and an event-driven actuator, all of which are connected to a control network. The calculated delay is viewed as part of the network-induced delay between the controller and the actuator.

Assumption 12.1.2 The controller always uses the most recent data and discards old data. When old data arrive at the controller, they are treated as packet loss.

Assumption 12.1.3 The actual input obtained in (12.1) with a zero-order hold is a piecewise constant function.

The control network itself induces transmission delays and dropped data that degrade the control performance of the NCS. Based on these three assumptions, we can formulate a closed-loop system with a memoryless state-feedback controller:

  ẋ(t) = Ax(t) + Bu(t),
  u(t) = Kx(t*_k − τ_k),  t*_k ∈ {i_k h + τ_k},  k = 1, 2, · · · ,    (12.2)
where h is the sampling period; k = 1, 2, 3, · · · are the sequence numbers of the most recent data available to the controller, which are assumed not to change until new data arrive; i_k is an integer denoting the sequence number of the sampling times of the sensor, {i_1, i_2, i_3, · · · } ⊆ {1, 2, 3, · · · }; and τ_k is the delay from the instant i_k h, when a sensor node samples the sensor data from the plant, to the instant when the actuator transfers the data to the plant. Clearly, ∪_{k=1}^{+∞} [i_k h + τ_k, i_{k+1} h + τ_{k+1}) = [t_0, ∞). From Assumption 12.1.2, i_{k+1} > i_k is always true. The number of data packets lost or discarded is i_{k+1} − i_k − 1. When {i_1, i_2, i_3, · · · } = {1, 2, 3, · · · }, no packets are dropped. If i_{k+1} = i_k + 1, then h + τ_{k+1} > τ_k, which includes τ_k = τ_0 and τ_k < h as special cases. So, system (12.2) represents an NCS and takes the effects of both a network-induced delay and dropped data packets into account. Below, we assume that u(t) = 0 before the first control signal reaches the plant, and that a constant η > 0 exists such that

  (i_{k+1} − i_k)h + τ_{k+1} ≤ η,  k = 1, 2, · · · .    (12.3)
Based on this inequality, we can rewrite NCS (12.2) as

  ẋ(t) = Ax(t) + BKx(i_k h),  t ∈ [i_k h + τ_k, i_{k+1} h + τ_{k+1}),  k = 1, 2, · · · ,
  x(t) = x(t_0 − η)e^{A(t − t_0 + η)} = φ(t),  t ∈ [t_0 − η, t_0],    (12.4)

where the initial condition function, φ(t), of the system is continuously differentiable and vector-valued.
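The timing model above can be made concrete with a short simulation. The following sketch integrates the plant of Example 12.4.1 with small Euler steps while the actuator applies delayed, zero-order-held samples, as in (12.2); the sampling period, the delay distribution (kept shorter than one sampling period so packets arrive in order), and the absence of packet loss are simplifying assumptions of this illustration only.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, -0.1]])     # plant of Example 12.4.1
B = np.array([[0.0], [0.1]])
K = np.array([[-3.75, -11.5]])              # gain used in Example 12.4.1

dt, h_s = 1e-3, 0.05                        # integration step and sampling period h
rng = np.random.default_rng(1)
# (sampling instant i_k*h_s, arrival instant i_k*h_s + tau_k); tau_k < h_s for simplicity
samples = [(k * h_s, k * h_s + rng.uniform(0.005, 0.045)) for k in range(1, 200)]

x = np.array([1.0, 0.0])
u = np.zeros(1)                             # u(t) = 0 before the first packet arrives
pending, state_at_sample, trajectory = list(samples), {}, []

for step in range(int(10.0 / dt)):
    t = step * dt
    for t_sample, _ in samples:
        if abs(t - t_sample) < dt / 2 and t_sample not in state_at_sample:
            state_at_sample[t_sample] = x.copy()      # sensor samples x(i_k*h_s)
    if pending and t >= pending[0][1]:
        t_sample, _ = pending.pop(0)
        u = (K @ state_at_sample[t_sample]).ravel()   # actuator applies the delayed sample
    x = x + dt * (A @ x + B @ u)                      # Euler step of the plant
    trajectory.append(x.copy())

print("final state:", trajectory[-1])       # should decay toward the origin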
12.2 Stability Analysis

This section first presents a new stability criterion for NCS (12.4), assuming the gain, K, is given.

Theorem 12.2.1. Consider NCS (12.4). Given a scalar η > 0, the system is asymptotically stable if there exist matrices P > 0, Q ≥ 0, Z > 0, and X = [X11 X12; ∗ X22] ≥ 0, and any appropriately dimensioned matrices N = [N1ᵀ N2ᵀ]ᵀ and M = [M1ᵀ M2ᵀ]ᵀ such that the following matrix inequalities hold:

  Φ = [Φ11  Φ12  −M1  ηAᵀZ;
       ∗    Φ22  −M2  ηKᵀBᵀZ;
       ∗    ∗    −Q   0;
       ∗    ∗    ∗    −ηZ] < 0,    (12.5)

  Ψ1 = [X  N; ∗  Z] ≥ 0,    (12.6)

  Ψ2 = [X  M; ∗  Z] ≥ 0,    (12.7)

where

  Φ11 = P A + AᵀP + Q + N1 + N1ᵀ + ηX11,
  Φ12 = P BK − N1 + N2ᵀ + M1 + ηX12,
  Φ22 = −N2 − N2ᵀ + M2 + M2ᵀ + ηX22.
Proof. Choose the Lyapunov-Krasovskii functional candidate to be

  V(x_t) = xᵀ(t)P x(t) + ∫_{t−η}^{t} xᵀ(s)Q x(s)ds + ∫_{−η}^{0} ∫_{t+θ}^{t} ẋᵀ(s)Z ẋ(s)ds dθ,    (12.8)

where P > 0, Q ≥ 0, and Z > 0 are to be determined.
From the Newton-Leibnitz formula, the following equations are true for any matrices N = [N1ᵀ N2ᵀ]ᵀ and M = [M1ᵀ M2ᵀ]ᵀ with appropriate dimensions:

  0 = 2ζᵀ(t)N [x(t) − x(i_k h) − ∫_{i_k h}^{t} ẋ(s)ds],    (12.9)

  0 = 2ζᵀ(t)M [x(i_k h) − x(t − η) − ∫_{t−η}^{i_k h} ẋ(s)ds],    (12.10)

where ζ(t) = [xᵀ(t), xᵀ(i_k h)]ᵀ. On the other hand, for any matrix X = [X11 X12; ∗ X22] ≥ 0, the following equation holds:

  0 = ∫_{t−η}^{t} ζᵀ(t)X ζ(t)ds − ∫_{t−η}^{t} ζᵀ(t)X ζ(t)ds
    = ηζᵀ(t)X ζ(t) − ∫_{i_k h}^{t} ζᵀ(t)X ζ(t)ds − ∫_{t−η}^{i_k h} ζᵀ(t)X ζ(t)ds.    (12.11)

In addition, the following equation is also true:

  −∫_{t−η}^{t} ẋᵀ(s)Z ẋ(s)ds = −∫_{i_k h}^{t} ẋᵀ(s)Z ẋ(s)ds − ∫_{t−η}^{i_k h} ẋᵀ(s)Z ẋ(s)ds.    (12.12)
Calculating the derivative of V(x_t) along the solutions of system (12.4) for t ∈ [i_k h + τ_k, i_{k+1} h + τ_{k+1}), adding the right sides of (12.9)-(12.11) to it, and using (12.12) yield

  V̇(x_t) = 2xᵀ(t)P ẋ(t) + xᵀ(t)Q x(t) − xᵀ(t − η)Q x(t − η) + ηẋᵀ(t)Z ẋ(t)
           − ∫_{t−η}^{t} ẋᵀ(s)Z ẋ(s)ds
         = 2xᵀ(t)P ẋ(t) + xᵀ(t)Q x(t) − xᵀ(t − η)Q x(t − η) + ηẋᵀ(t)Z ẋ(t)
           − ∫_{i_k h}^{t} ẋᵀ(s)Z ẋ(s)ds − ∫_{t−η}^{i_k h} ẋᵀ(s)Z ẋ(s)ds
           + 2ζᵀ(t)N [x(t) − x(i_k h) − ∫_{i_k h}^{t} ẋ(s)ds]
           + 2ζᵀ(t)M [x(i_k h) − x(t − η) − ∫_{t−η}^{i_k h} ẋ(s)ds]
           + ηζᵀ(t)X ζ(t) − ∫_{i_k h}^{t} ζᵀ(t)X ζ(t)ds − ∫_{t−η}^{i_k h} ζᵀ(t)X ζ(t)ds
         = ξ1ᵀ(t)Φ̂ ξ1(t) − ∫_{i_k h}^{t} ξ2ᵀ(t, s)Ψ1 ξ2(t, s)ds − ∫_{t−η}^{i_k h} ξ2ᵀ(t, s)Ψ2 ξ2(t, s)ds,    (12.13)

where

  Φ̂ = [Φ11 + ηAᵀZA  Φ12 + ηAᵀZBK     −M1;
       ∗             Φ22 + ηKᵀBᵀZBK   −M2;
       ∗             ∗                −Q],
  ξ1(t) = [xᵀ(t), xᵀ(i_k h), xᵀ(t − η)]ᵀ,
  ξ2(t, s) = [ζᵀ(t), ẋᵀ(s)]ᵀ.

Thus, if Ψi ≥ 0, i = 1, 2, and Φ̂ < 0, which is equivalent to (12.5) by the Schur complement, then V̇(x_t) < −ε‖x(t)‖² for a sufficiently small ε > 0, which guarantees that system (12.4) is asymptotically stable. This completes the proof.
When M = 0 and Q = εI (where ε > 0 is a sufficiently small scalar), the following corollary readily follows from Theorem 12.2.1.

Corollary 12.2.1. Consider NCS (12.4). Given a scalar η > 0, the system is asymptotically stable if there exist matrices P > 0, Z > 0, and X = [X11 X12; ∗ X22] ≥ 0, and any appropriately dimensioned matrix N = [N1ᵀ N2ᵀ]ᵀ such that matrix inequality (12.6) and the following one hold:

  Ξ = [Ξ11  Ξ12  ηAᵀZ;
       ∗    Ξ22  ηKᵀBᵀZ;
       ∗    ∗    −ηZ] < 0,    (12.14)

where

  Ξ11 = P A + AᵀP + N1 + N1ᵀ + ηX11,
  Ξ12 = P BK − N1 + N2ᵀ + ηX12,
  Ξ22 = −N2 − N2ᵀ + ηX22.

Remark 12.2.1. Following a line similar to the one in [34, 35], we can easily show that this corollary is equivalent to Theorem 1 in [26]. Thus, Theorem
12.2.1 provides more freedom in choosing M and Q because, rather than being taken to be constant matrices, they can be selected by using the LMI toolbox.
12.3 Controller Design

Now, Theorem 12.2.1 is extended to the design of a stabilization controller with gain K for system (12.4).

Theorem 12.3.1. Consider NCS (12.4). For a given scalar η > 0, if there exist matrices L > 0, W ≥ 0, R > 0, and Y = [Y11 Y12; ∗ Y22] ≥ 0, and any appropriately dimensioned matrices S = [S1ᵀ S2ᵀ]ᵀ, T = [T1ᵀ T2ᵀ]ᵀ, and V such that the following matrix inequalities hold:

  Ξ = [Ξ11  Ξ12  −T1  ηLAᵀ;
       ∗    Ξ22  −T2  ηVᵀBᵀ;
       ∗    ∗    −W   0;
       ∗    ∗    ∗    −ηR] < 0,    (12.15)

  Π1 = [Y  S; ∗  LR⁻¹L] ≥ 0,    (12.16)

  Π2 = [Y  T; ∗  LR⁻¹L] ≥ 0,    (12.17)

where

  Ξ11 = AL + LAᵀ + W + S1 + S1ᵀ + ηY11,
  Ξ12 = BV − S1 + S2ᵀ + T1 + ηY12,
  Ξ22 = −S2 − S2ᵀ + T2 + T2ᵀ + ηY22,

then the system is asymptotically stable, and K = V L⁻¹ is a stabilizing controller gain.

Proof. Pre- and post-multiply Φ in (12.5) by diag{P⁻¹, P⁻¹, P⁻¹, Z⁻¹}, and pre- and post-multiply Ψi, i = 1, 2, in (12.6) and (12.7) by diag{P⁻¹, P⁻¹, P⁻¹}. Then, make the following changes to the variables:
  L = P⁻¹, R = Z⁻¹, V = KL, Si = LNiL, Ti = LMiL, i = 1, 2, W = LQL, Y = diag{P⁻¹, P⁻¹} · X · diag{P⁻¹, P⁻¹}.

These manipulations yield matrix inequalities (12.15)-(12.17). This completes the proof.
Note that the conditions in Theorem 12.3.1 are no longer LMI conditions due to the term LR⁻¹L in (12.16) and (12.17). Thus, we cannot use a convex optimization algorithm to obtain an appropriate gain matrix, K, for the state-feedback controller. [26, 28] used parameter tuning to obtain the gain of the networked state-feedback controller, but they presented no valid method of determining the parameters. As mentioned in Chapter 6, this problem can be solved by using the idea for solving a cone complementarity problem in [36]. Define a new variable, U, for which LR⁻¹L ≥ U; and let P = L⁻¹, H = U⁻¹, and Z = R⁻¹. Now, we convert the nonconvex problem into the following LMI-based nonlinear minimization problem:

  Minimize  Tr{LP + UH + RZ}
  subject to (12.15) and

    [Y  S; ∗  U] ≥ 0,  [Y  T; ∗  U] ≥ 0,  [H  P; ∗  Z] ≥ 0,
    [L  I; ∗  P] ≥ 0,  [U  I; ∗  H] ≥ 0,  [R  I; ∗  Z] ≥ 0.    (12.18)
A suboptimal maximum upper bound, η, on the delay can be obtained by using either the CCL or the ICCL algorithm in Chapter 6. Here we use the ICCL algorithm because of its advantages.

Algorithm 12.3.1 To maximize η:

Step 1: Choose a sufficiently small initial η > 0 such that there exists a feasible solution to (12.15) and (12.18). Set ηmax = η.
Step 2: Find a feasible set (P0, L0, W, S, T, Y, Z0, R0, U0, H0, V) satisfying (12.15) and (12.18). Set k = 0.
Step 3: Solve the following LMI problem for the variables P, L, W, S, T, Y, Z, R, U, H, V, and K:

  Minimize  Tr{LPk + LkP + UHk + UkH + RZk + RkZ}
  subject to (12.15) and (12.18).

Set Pk+1 = P, Lk+1 = L, Uk+1 = U, Hk+1 = H, Rk+1 = R, and Zk+1 = Z.
Step 4: For the K obtained in Step 3, if LMIs (12.5)-(12.7) are feasible for the variables P, Q, Z, N, M, and X, then set ηmax = η, increase η, and return to Step 2. If LMIs (12.5)-(12.7) are infeasible within a specified number of iterations, then exit. Otherwise, set k = k + 1 and go to Step 3.
12.4 Numerical Examples The numerical examples below demonstrate the advantages of our method. Example 12.4.1. Consider system (12.1) with ⎡ ⎤ ⎡ ⎤ 0 1 0 ⎦, B = ⎣ ⎦. A=⎣ 0 −0.1 0.1
If we let the controller gain matrix, K, be [−3.75, − 11.5], then the ηmax that ensures the stability of system (12.4) is 0.8695 in [26] and 0.8871 in [28]. However, the value obtained with Theorem 12.2.1 is 1.0432, which is much better.
272
12. Stability and Stabilization of NCSs
On the other hand, if we do not assume that we know the controller gain, K, then [26] reported that system (12.4) was stable for η = 402 and K = [−0.0025, − 0.0118], based on the tuning of some parameters. In contrast, using Algorithm 12.3.1, we find after just one iteration that it is stable for η = 600 and K = [−0.0001, 0.0273]. Now, to understand the superiority of the stop condition in Step 4 of Algorithm 12.3.1, consider what happens if we just use the condition that matrix inequalities (12.16) and (12.17) hold. We get the same value of η (namely, 600) that guarantees the stability of NCS (12.4); but since the condition is so strict, the number of iterations increases from 1 to 5. Example 12.4.2. Consider system (12.1) with ⎡ ⎤ ⎡ ⎤ 0 0 1 ⎦, B = ⎣ ⎦. A=⎣ 1 0 1
The eigenvalues of matrix A are 1 and −1, which means that the openloop system is unstable. When the state-feedback gain matrix, K, is [−2, −3], the ηmax that ensures the stability of system (12.4) is 0.3334 in [26]; but the value obtained with Theorem 12.2.1 is 0.4125, which is better. When K is unknown, Algorithm 1 in [26] yields a value of 0.97 for ηmax ; while after 313 iterations Algorithm 12.3.1 shows that NCS (12.4) is stable for η = 0.996 and K = [−1.0050, − 1.0049]. However, if we use the stop condition in [32], we need 368 iterations to find a suitable gain matrix.
12.5 Conclusion This chapter uses the IFWM approach to design a stabilization state-feedback controller for an NCS. An improved stability criterion for an NCS with a given state-feedback gain is first established by considering the relationships among the network-induced delay, its upper bound, and their difference. This criterion and the ICCL algorithm, which has a new stop condition, are used to design a networked state-feedback controller. Finally, numerical examples demonstrate the benefits of the method.
References
273
References 1. M. Y. Chow and Y. Tipsuwan. Gain adaptation of networked DC motor controllers on QoS variations. IEEE Transactions on Industrial Electronics, 50(5): 936-943, 2003. 2. K. C. Lee, K. Lee, and M. H. Lee. QoS-based remote control of networked control systems via profibus token passing protocol. IEEE Transactions on Industrial Informatics, 1(3): 183-191, 2005. 3. K. C. Lee, K. Lee, and M. H. Lee. Worst case communication delay of realtime industrial switched Ethernet with multiple levels. IEEE Transactions on Industrial Electronics, 53(5): 1669-1676, 2006. 4. Y. Tipsuwan and M. Y. Chow. Gain scheduler middleware: a methodology to enable existing controllers for networked control and teleoperation-part I: networked control. IEEE Transactions on Industrial Electronics, 51(6): 1228-1237, 2004. 5. G. Walsh, O. Beldiman, and L. Bushnell. Asymptotic behavior of nonlinear networked control systems. IEEE Transactions on Automatic Control, 46(7): 1093-1097, 2001. 6. G. Walsh, H. Ye, and L. Bushnell. Stability analysis of networked control systems. IEEE Transactions on Control Systems Technology, 10(3): 438-446, 2002. 7. F. W. Yang, Z. D. Wang, Y. S. Hung, and M. Gani. H∞ control for networked systems with random communication delays. IEEE Transactions on Automatic Control, 51(3): 511-518, 2006. 8. T. C. Yang. Networked control systems: a brief survey. IEE Proceedings–Control Theory & Applications, 153(4): 403-412, 2006. 9. W. Zhang, M. S. Branicky, and S. M. Phillips. Stability of networked control systems. IEEE Control Systems Magazine, 21(1): 84-99, 2001. 10. H. Gao, T. Chen, and J. Lam. A new delay system approach to network-based control. Automatica, 44(1): 39-52, 2008. 11. H. Gao and T. Chen. Network-Based H∞ Output Tracking Control. IEEE Transactions on Automatic Control, 53(3): 655-667, 2008 12. D. Kim, Y. Lee, W. Kwon, and H. Park. Maximum allowable delay bounds of networked control systems. Control Engineering Practice, 11, 1301-1313, 2003. 13. K. Gu, V. L. Kharitonov, and J. Chen. Stability of Time-Delay Systems. Boston: Birkh¨ auser, 2003. 14. H. Gao, J. Lam, C. Wang, and Y. Wang. Delay-dependent output-feedback stabilization of discrete-time systems with time-varying state delay. IEE Proceedings–Control Theory & Applications, 151(6): 691-698, 2004. 15. C. Lin, Q. G. Wang, and T. H. Lee. A less conservative robust stability test for linear uncertain time-delay systems. IEEE Transactions on Automatic Control, 51(1): 87-91, 2006. 16. X. Jiang and Q. L. Han. On H∞ control for linear systems with interval timevarying delay. Automatica, 41(12): 2099-2106, 2005.
274
12. Stability and Stabilization of NCSs
17. E. K. Boukas and N. F. Al-Muthairi. Delay-dependent stabilization of singular linear systems with delays. International Journal of Innovative Computing, Information and Control, 2(2): 283-291, 2006. 18. X. M. Zhang, M. Wu, J. H. She, and Y. He. Delay-dependent stabilization of linear systems with time-varying state and input delays. Automatica, 41(8): 1405-1412, 2005. 19. E. Fridman and U. Shaked. Delay-dependent stability and H∞ control: constant and time-varying delays. International Journal of Control, 76(1): 48-60, 2003. 20. H. Gao and C. Wang. Comments and further results on “A descriptor system approach to H∞ control of linear time-delay systems”. IEEE Transactions on Automatic Control, 48(3): 520-525, 2003. 21. Q. L. Han. On robust stability of neutral systems with time-varying discrete delay and norm-bounded uncertainty. Automatica, 40(6): 1087-1092, 2004. 22. M. Wu, Y. He, J. H. She, and G. P. Liu. Delay-dependent criteria for robust stability of time-varying delay systems. Automatica, 40(8): 1435-1439, 2004. 23. Y. He, M. Wu, J. H. She, and G. P. Liu. Parameter-dependent Lyapunov functional for stability of time-delay systems with polytopic-type uncertainties. IEEE Transactions on Automatic Control, 49(5): 828-832, 2004. 24. H. Park, Y. Kim, D. Kim, and W. Kwon. A scheduling method for network based control systems. IEEE Transactions on Control Systems Technology, 10(3): 318-330, 2002. 25. Y. J. Pan, H. J. Marquez, and T. Chen. Stabilization of remote control systems with unknown time-varying delays by LMI techniques. International Journal of Control, 79(7): 752-763, 2006. 26. D. Yue, Q. L. Han, and C. Peng. State feedback controller design of networked control systems. IEEE Transactions on Circuits and Systems II, 51(11): 640-644, 2004. 27. M. Wu, Y. He, and J. H. She. New delay-dependent stability criteria and stabilizing method for neutral systems. IEEE Transactions on Automatic Control, 49(12): 2266-2271, 2004. 28. D. Yue, Q. L. Han, and J. Lam. Network-based robust H∞ control of systems with uncertainty. Automatica, 41(6): 999-1007, 2005. 29. Y. He, Q. G. Wang, L. H. Xie, and C. Lin. Further improvement of freeweighting matrices technique for systems with time-varying delay. IEEE Transactions on Automatic Control, 52(2): 293-299, 2007. 30. Y. He, Q. G. Wang, C. Lin, and M. Wu. Delay-range-dependent stability for systems with time-varying delay. Automatica, 43(2): 371-376, 2007. 31. S. Xu, J. Lam, and Y. Zou. New results on delay-dependent robust H∞ control for systems with time-varying delays. Automatica, 42(2): 343-348, 2006. 32. Y. S. Moon, P. Park, W. H. Kwon, and Y. S. Lee. Delay-dependent robust stabilization of uncertain state-delayed systems. International Journal of Control, 74(14): 1447-1455, 2001.
References
275
33. Y. He, G. P. Liu, D. Rees, and M. Wu. Improved stabilization method for networked control systems. IET Control Theory & Applications, 1(6): 15801585, 2007. 34. Y. He, Q. G. Wang, C. Lin, and M. Wu. Augmented Lyapunov functional and delay-dependent stability criteria for neutral systems. International Journal of Robust and Nonlinear Control, 15(18): 923-933, 2005. 35. S. Xu, J. Lam, and Y. Zou. Simplified descriptor system approach to delaydependent stability and performance analysis for time-delay systems. IEE Proceedings–Control Theory & Applications, 152(2): 147-151, 2005. 36. E. L. Ghaoui, F. Oustry, and M. AitRami. A cone complementarity linearization algorithms for static output feedback and related problems. IEEE Transactions on Automatic Control, 42(8): 1171-1176, 1997.
13. Stability of Stochastic Systems with Time-Varying Delay
Stochastic phenomena are common in many branches of science and engineering, and stochastic perturbations can be a source of instability in systems. This has made stochastic systems an interesting topic of research; and stochastic modeling has become an important tool in science and engineering. Increasing attention is now being paid to the stability, stabilization, and H∞ control of stochastic time-delay systems [1–6]. Stability criteria for time-delay systems fall into two categories, depending on whether or not information on the lengths of delays is used: delayindependent and delay-dependent. Delay-independent criteria are generally conservative, particularly when the delays are small. So, more attention is being paid to delay-dependent stability; for example, the robust stability of uncertain stochastic systems with a time-varying delay was studied in [7] and the exponential stability of stochastic systems with a time-varying delay, nonlinearities, and Markovian switching was investigated in [8]. One problem with these papers is that the delay, d(t), where 0 d(t) h, is often increased to h, and h − d(t) is also taken to be h. However, d(t) and h − d(t) are closely related in that their sum is h. So, the above treatment may lead to conservativeness. In this chapter, consideration of the relationships among a time-varying delay, its upper bound, and their difference leads to less conservative stability criteria. This chapter uses the IFWM approach and Itˆ o’s differential formula to analyze the delay-dependent stability of stochastic systems with a time-varying delay. First, the robust stability of uncertain stochastic systems with a timevarying delay is examined [9]. Next, the exponential stability of stochastic Markovian jump systems with nonlinearities and time-varying delays is investigated [10]. The stability criteria obtained are less conservative than others because the method does not ignore any terms, considers the relationships among a time-varying delay, its upper bound, and their difference, and is based on both Itˆ o’s differential formula and Lyapunov-Krasovskii stability
278
13. Stability of Stochastic Systems with Time-Varying Delay
theory. Numerical examples demonstrate the effectiveness and advantages of the method.
13.1 Robust Stability of Uncertain Stochastic Systems This section considers uncertain linear stochastic systems with a time-varying delay. We use FWMs, consider the relationships among a time-varying delay, its upper bound, and their difference, and do not ignore any terms to obtain an improved delay-dependent robust stability criterion. 13.1.1 Problem Formulation Consider the following uncertain linear stochastic system with a time-varying delay: ⎧ ⎪ ⎪ dx(t) = [(A + ΔA(t))x(t) + (Ad + ΔAd (t))x(t − d(t))]dt ⎪ ⎨ +[(E + ΔE(t))x(t) + (Ed + ΔEd (t))x(t − d(t))]dw(t), (13.1) ⎪ ⎪ ⎪ ⎩ x(t) = φ(t), t ∈ [−h, 0], where x(t) ∈ Rn is the state vector; w(t) is a scalar describing Brownian motion in the complete probability space (Ω, F , P) with the filter {Ft }t0 ; φ(t) is any given initial condition in L2F0 ([−h, 0], Rn ); A, Ad , E, and Ed are known, real, constant matrices with appropriate dimensions; and ΔA(t) and ΔAd (t), and ΔE(t) and ΔEd (t), are unknown, time-varying matrices with appropriate dimensions that represent the system uncertainties and stochastic perturbation uncertainties, respectively, which are assumed to have the form [ΔA(t) ΔAd (t) ΔE(t) ΔEd (t)] = DF (t) [G1 G2 G3 G4 ] ,
(13.2)
where D, G1 , G2 , G3 , and G4 are known, real, constant matrices with appropriate dimensions; and F (t) is an unknown, real, and possibly time-varying matrix with Lebesgue measurable elements satisfying F T (t)F (t) I, ∀t.
(13.3)
The delay, d(t), is a time-varying differentiable function that satisfies 0 d(t) h
(13.4)
˙ μ. d(t)
(13.5)
and
13.1 Robust Stability of Uncertain Stochastic Systems
279
13.1.2 Robust Stability Analysis This subsection uses Lyapunov-Krasovskii stability theory to establish a robust stochastic stability criterion for system (13.1). For convenience, we define a new state variable, y(t), to be y(t) = (A + ΔA(t))x(t) + (Ad + ΔAd (t))x(t − d(t))
(13.6)
and a new perturbation variable, g(t), to be g(t) = (E + ΔE(t))x(t) + (Ed + ΔEd (t))x(t − d(t)).
(13.7)
Thus, system (13.1) becomes dx(t) = y(t)dt + g(t)dw(t).
(13.8)
Now, we present a delay-dependent stability criterion for system (13.1). Theorem 13.1.1. Consider system (13.1). Given scalars h > 0 and μ, the system is robustly stochastically stable if there exist matrices P > 0, Qi 0, i = 1, 2, Z > 0, and S > 0, scalars εj > 0, j = 1, 2, and any appropriately dimensioned matrices N, M, and H such that the following LMIs hold: ⎤ ⎡ √ ˆ hN Θ ⎦ < 0, Π1 = ⎣ (13.9) ∗ −Z ⎤ ⎡ √ ˆ hM Θ ⎦ < 0, Π2 = ⎣ (13.10) ∗ −Z where
⎡
⎢ ⎢ ⎢ ⎢ ⎢ ⎢ Θ=⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
Θ1 W T Pˆ
0
N
M
HD
∗
−Pˆ
Pˆ D
0
0
0
∗
∗
−ε1 I
0
0
0
∗
∗
∗
−S
0
0
∗
∗
∗
∗
−S
0
∗
∗
∗
∗
∗
−ε2 I
ˆ = [N 0 0 0 0 0] , N ˆ = [M 0 0 0 0 0] , M
⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥, ⎥ ⎥ ⎥ ⎥ ⎥ ⎦
(13.11)
280
13. Stability of Stochastic Systems with Time-Varying Delay
where Θ1 = Φ + Ψ + Ψ T , Pˆ = P + hS, ⎤ ⎡ Φ11 Φ12 0 P ⎥ ⎢ ⎥ ⎢ ⎢ ∗ Φ22 0 0 ⎥ ⎥, ⎢ Φ=⎢ ⎥ ⎢ ∗ ∗ −Q2 0 ⎥ ⎦ ⎣ ∗ ∗ ∗ hZ Ψ = [N + HA M − N + HAd − M
− H] ,
W = [E Ed 0 0] , and Φ11 =
2
T Qi + ε2 GT 1 G1 + ε1 G3 G3 ,
i=1 T Φ12 = ε1 GT 3 G4 + ε2 G1 G2 , T Φ22 = −(1 − μ)Q1 + ε2 GT 2 G2 + ε1 G4 G4 .
Proof. Choose the following stochastic Lyapunov-Krasovskii functional candidate: V (xt , t) =
5
Vi (xt , t),
(13.12)
i=1
where V1 (xt , t) = xT (t)P x(t), t V2 (xt , t) = xT (s)Q1 x(s)ds, t−d(t) t T
V3 (xt , t) =
x (s)Q2 x(s)ds,
t−h 0 t
V4 (xt , t) = V5 (xt , t) =
−h 0
t+θ t
−h
t+θ
y T (s)Zy(s)dsdθ, # $ Tr g T (s)Sg(s) dsdθ;
and P > 0, Qi 0, i = 1, 2, Z > 0, and S > 0 are to be determined. The weak infinitesimal operator, L, of the stochastic process {xt , t h} along the evolution of V (xt , t) is
13.1 Robust Stability of Uncertain Stochastic Systems
LV (xt , t) =
5
LVi (xt , t),
281
(13.13)
i=1
where # $ LV1 (xt , t) = 2xT (t)P y(t) + Tr g T (t)P g(t) , T ˙ LV2 (xt , t) = xT (t)Q1 x(t) − (1 − d(t))x (t − d(t))Q1 x(t − d(t)), LV3 (xt , t) = xT (t)Q2 x(t) − xT (t − h)Q2 x(t − h), t T LV4 (xt , t) = hy (t)Zy(t) − y T (s)Zy(s)ds, t−h
# $ LV5 (xt , t) = hTr g T (t)Sg(t) −
# $ Tr g T (s)Sg(s) ds.
t
t−h
In addition, from the Newton-Leibnitz formula, the following equations are true for any appropriately dimensioned matrices N and M : 2ξ T (t)N x(t) − x(t − d(t)) −
t
dx(s) = 0, t−d(t)
T
2ξ (t)M x(t − d(t)) − x(t − h) −
(13.14)
t−d(t)
dx(s) = 0.
(13.15)
t−h
From (13.8), the following is true for any appropriately dimensioned matrix H: 2ξ T (t)H [(A + ΔA(t))x(t) + (Ad + ΔAd (t))x(t − d(t)) − y(t)] = 0, (13.16) T where ξ(t) = xT (t), xT (t − d(t)), xT (t − h), y T (t) . If we add the left sides of (13.14)-(13.16) to (13.13) and use (13.8), then the weak infinitesimal operator of V (xt , t) along the trajectory of system (13.1) becomes # $ LV (xt , t) = 2xT (t)P y(t) + Tr g T (t)P g(t) + xT (t)(Q1 + Q2 )x(t) T ˙ −(1 − d(t))x (t − d(t))Q1 x(t − d(t)) − xT (t − h)Q2 x(t − h) t # $ T +hy (t)Zy(t) − y T (s)Zy(s)ds + hTr g T (t)Sg(t)
t−h
# $ Tr g T (s)Sg(s) ds − t−h t
+2ξ T (t)N x(t) − x(t − d(t)) −
t
t−d(t)
y(s)ds −
t
g(s)dw(s)
t−d(t)
282
13. Stability of Stochastic Systems with Time-Varying Delay
+2ξ T (t)M x(t − d(t) − x(t − h) −
t−d(t)
y(s)ds −
t−h
t−d(t)
g(s)dw(s) t−h
+2ξ T (t)H [(A + ΔA(t))x(t) + (Ad + ΔAd (t))x(t − d(t)) − y(t)] . (13.17) Now, from (3) of Lemma 2.6.5, the following is true for any scalar ε1 > 0 T satisfying (P + hS)−1 − ε−1 1 DD > 0: $ # Tr g T (t)(P + hS)g(t) = g T (t)(P + hS)g(t) T
= ξ T (t) ([E Ed 0 0] + DF (t)[G3 G4 0 0])
×(P + hS) ([E Ed 0 0] + DF (t)[G3 G4 0 0]) ξ(t) T −1 ξ T (t)W T (P + hS)−1 − ε−1 W ξ(t) 1 DD +ξ T (t)ε1 [G3 G4 0 0]T [G3 G4 0 0]ξ(t).
(13.18)
In addition, from (2) of Lemma 2.6.5, the following is true for any scalar ε2 > 0: 2ξ T (t)H[ΔE(t) ΔEd (t) 0 0]ξ(t) = 2ξ T (t)HDF (t)[G1 G2 0 0]ξ(t) T T T ξ T (t)HDε−1 2 (HD) ξ(t) + ξ (t)ε2 [G1 G2 0 0] [G1 G2 0 0] ξ(t).
(13.19) From (1) of Lemma 2.6.5, we have the following for any matrix S > 0: t T g(s)dw(s) ξ T (t)N S −1 N T ξ(t) −2 ξ (t)N (
t−d(t) t
)T
g(s)dw(s)
+
( S
t−d(t)
T
−2 ξ (t)M (
t−h
t−h
)T
(
t−d(t)
S
) g(s)dw(s) .
(13.21)
t−h
Note that ⎧( ) T ( )⎫ ⎨ t ⎬ t E g(s)dw(s) S g(s)dw(s) ⎩ t−d(t) ⎭ t−d(t) t # $ = Tr g T (s)Sg(s) ds, t−d(t)
(13.20)
g(s)dw(s) ξ T (t)M S −1 M T ξ(t)
g(s)dw(s)
+
g(s)dw(s) , t−d(t)
t−d(t)
t−d(t)
)
t
(13.22)
13.1 Robust Stability of Uncertain Stochastic Systems
E
⎧( ⎨
)T
t−d(t)
g(s)dw(s)
⎩
( S
t−h
t−d(t)
=
t−d(t)
t−h
283
)⎫ ⎬ g(s)dw(s) ⎭
# $ Tr g T (s)Sg(s) ds.
(13.23)
t−h
Then, applying (13.18)-(13.23) to (13.17) yields ⎡ ⎤ Ξ −hN 1 t ⎦ η(t, s)ds LV (xt , t) η T (t, s) ⎣ h t−d(t) ∗ −hZ ⎡ ⎤ Ξ −hM 1 t−d(t) T ⎦ η(t, s)ds, η (t, s) ⎣ + h t−h ∗ −hZ
(13.24)
where T η(t, s) = ξ T (t), y T (s) , T −1 Ξ = Θ1 + W T (P + hS)−1 − ε−1 W + N S −1 N T + M S −1 M T 1 DD T +HDε−1 2 (HD) ;
and Θ⎡ 1 is defined ⎤in (13.11).⎡ ⎤ Ξ −hN Ξ −hM ⎦ < 0 and ⎣ ⎦ < 0, which imply that (13.9) and If ⎣ ∗ −hZ ∗ −hZ (13.10) hold, respectively, then system (13.1) is robustly stable. This completes the proof.
13.1.3 Numerical Example The numerical example in this subsection demonstrates the effectiveness of the above method. Example 13.1.1. Consider system (13.1) with ⎡ ⎤ ⎡ ⎤ −2 0 −1 0 ⎦ , Ad = ⎣ ⎦ , ΔA(t) 0.2, ΔAd (t) 0.2, A=⎣ 0 −0.9 −1 −1 ΔE(t) 0.2, ΔEd (t) 0.2. Then, the uncertainties described by (13.2) are of the form ⎡ ⎤ ⎡ ⎤ 1 0 0.2 0 ⎦. ⎦ , G1 = G2 = G3 = G4 = ⎣ D=⎣ 0 1 0 0.2
284
13. Stability of Stochastic Systems with Time-Varying Delay
Table 13.1. Allowable upper bound, h, for various μ (Example 13.1.1) μ
0
0.5
0.9
unknown μ
[7]
1.0660
0.5252
0.1489
—
Theorem 13.1.1
1.8684
1.1304
0.9402
0.9262
Table 13.1 lists values of the upper bound, h, on the delay, d(t), that guarantee the robust stochastic stability of system (13.1) obtained with Theorem 13.1.1 along with the values in [7]. Note that our results are significantly better. Remark 13.1.1. [7] states that the maximum allowable delay for a μ of 0.9 is 0.6822. This appears to be incorrect. Using the method in that report, we calculated the actual value to be 0.1489.
13.2 Exponential Stability of Stochastic Markovian Jump Systems with Nonlinearities Markovian jump systems are a special class of stochastic systems. They are used to model various types of dynamic systems that are subject to abrupt changes in structure, such as failure-prone manufacturing systems, power systems, and economic systems. This section uses the IFWM approach and Itˆo’s differential formula to derive an improved exponential-stability criterion for stochastic Markovian jump systems with nonlinearities and time-varying delays. 13.2.1 Problem Formulation Consider the following stochastic system with nonlinearities, Markovian jump parameters, and time-varying delays: ⎧ ⎪ ⎪ dx(t) = [A(rt )x(t) + A1 (rt )x(t − d(t)) + f (t, x(t), x(t − d(t)), rt )] dt ⎪ ⎨ (13.25) +g(t, x(t), x(t − h(t)), rt )dw(t), ⎪ ⎪ ⎪ ⎩ x(t) = φ(t), r = r ∈ S, ∀t ∈ [−τ , 0], t 0 where x(t) ∈ Rn is the state of the system; A(rt ) ∈ Rn×n and A1 (rt ) ∈ Rn×n are known matrix functions of the Markovian jump process, {rt }; w(t) is
13.2 Exponential Stability of Stochastic Markovian Jump Systems with...
285
a vector describing m-dimensional Brownian motion, which is defined on a b probability space; the initial condition is φ(t) ∈ CF ([−τ , 0], Rn ); τ (t) and 0 h(t) are time-varying differentiable delays that satisfy 0 τ (t) τ,
(13.26)
0 h(t) h,
(13.27)
τ˙ (t) dτ , ˙ h(t) dh ;
(13.28)
and
(13.29)
and τ = max{τ, h}. {rt , t 0} is a right continuous Markov process on the probability space that takes values in the finite state set S = {1, 2, . . . , N } and has the generator Π = [πij ] , i, j ∈ S, which is given by ⎧ ⎨ πij Δ + o(Δ), i = j, P r{rt+Δ = j | rt = i} = ⎩ 1 + π Δ + o(Δ), i = j, ij
transition rate from where Δ > 0, limΔ→0 o(Δ) Δ = 0, πij 0 for i = j is the N mode i at time t to mode j at time t + Δ, and πii = − j=1, i=j πij . ¯ + ×Rn ×Rn ×S −→ Rn and g(·, ·, ·, ·) : R ¯ +× In system (13.25), f (·, ·, ·, ·) : R n n n×m R × R × S −→ R are nonlinear uncertainties that satisfy the following conditions: f (t, x(t), x(t − τ (t)), rt ) F1 (rt )x(t) + F2 (rt )x(t − τ (t)), (13.30) $ # Tr g T (t, x(t), x(t − h(t)), rt )g(t, x(t), x(t − h(t)), rt ) 2
2
G1 (rt )x(t) + G2 (rt )x(t − h(t)) ,
(13.31)
where Fj (rt ) and Gj (rt ), j = 1, 2 are matrix functions of the Markovian jump process {rt }. For each rt = i ∈ S, Fj (rt ) = Fij and Gj (rt ) = Gij are known matrices with appropriate dimensions. For each rt = i ∈ S, we can write system (13.25) as dx(t) = [Ai x(t) + Ai1 x(t − τ (t)) + fi (t, x(t), x(t − τ (t)))] dt +gi (t, x(t), x(t − h(t)))dw(t),
(13.32)
where Ai ∈ Rn×n and Ai1 ∈ Rn×n are known matrices. For convenience, we define yi (t) = Ai x(t) + Ai1 x(t − τ (t)) + fi (t, x(t), x (t − τ (t))) .
(13.33)
286
13. Stability of Stochastic Systems with Time-Varying Delay
From [11], we know that system (13.25) has a unique continuous solution, x(t), for t τ that satisfies 2 (13.34) E sup x(s) < +∞, t 0. −τ s0
From [12, 13], we know that {(xt , t), t τ } is a Markovian process with the initial state (φ(·), r0 ). The weak infinitesimal generator [14, 15] that acts 2 on function V : CF ([−τ , 0], Rn ) × S −→ R is 0 LV (x(t), t, i) = lim + Δ−→0
E {V (x(t +Δ), t+Δ, rt+Δ) |xt , rt = i}−V (x(t), t, i) . Δ
We now give the definition of exponential stability. Definition 13.2.1. System (13.25) is said to be exponentially stable in the mean square sense if there exist constants α > 0 and β > 0 such that, for t 0, # $ E x(t)2 αe−βt E sup φ(s)2 . −τ s0
β is called the exponential convergence rate. 13.2.2 Exponential-Stability Analysis We now use the IFWM approach to obtain a theorem for Markovian jump systems with nonlinearities and time-varying delays. Theorem 13.2.1. Consider system (13.25). Given scalars τ, h, dτ , and dh , the system is exponentially stable in the mean square sense if there exist matrices Pi > 0, Qj 0, Tj 0, Rj > 0, Xi 0, and Yi 0, any appropriately dimensioned matrices Ni , Mi , Hi , Ki , and Si , and scalars εl > 0, εik > 0 and δi > 0, i = 1, 2, · · · , N , j = 1, 2, l = 1, 4, k = 2, 3 such that the following LMIs hold for i = 1, 2, · · · , N : ⎡ ⎤ i Φ N K S H M M i i i i i i ⎢ 11 ⎥ ⎢ ∗ −ε I ⎥ 0 0 0 0 0 ⎢ ⎥ 1 ⎢ ⎥ ⎢ ∗ 0 0 0 0 ⎥ ∗ −ε1 I ⎢ ⎥ ⎢ ⎥ (13.35) Φi = ⎢ ∗ 0 0 0 ⎥ < 0, ∗ ∗ −ε4 I ⎢ ⎥ ⎢ ∗ ⎥ 0 0 ⎥ ∗ ∗ ∗ −ε4 I ⎢ ⎢ ⎥ i ⎢ ∗ 0 ⎥ ∗ ∗ ∗ ∗ −ε2 I ⎣ ⎦ ∗ ∗ ∗ ∗ ∗ ∗ −εi3 I
13.2 Exponential Stability of Stochastic Markovian Jump Systems with...
⎡ Θ1i = ⎣
⎤ Xi Ni ∗ R1
⎡ Θ2i = ⎣
∗ R1
(13.36)
⎦ 0,
(13.37)
⎤ Yi Si ∗ R2
⎡ Θ4i = ⎣
⎦ 0, ⎤
Xi K i
⎡ Θ3i = ⎣
287
⎦ 0,
(13.38)
⎤ Yi Hi ∗ R2
⎦ 0,
(13.39)
Pi δi I,
(13.40)
where Φi11 = Σi + Ψi + ΨiT + τ Xi + hYi , ⎡ Σi11 0 0 0 0 ⎢ ⎢ 0 0 ⎢ ∗ Σi22 0 ⎢ ⎢ ⎢ ∗ ∗ −Q2 0 0 Σi = ⎢ ⎢ ⎢ ∗ ∗ ∗ Σi44 0 ⎢ ⎢ ⎢ ∗ ∗ ∗ ∗ −T2 ⎣ ∗
∗
∗
∗
Σi11 = Q1 + Q2 + T1 + T2 +
Pi 0 0 0 0
⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥, ⎥ ⎥ ⎥ ⎥ ⎥ ⎦
∗ τ R1 + hR2 N
i T πij Pj + (δi + τ ε1 + hε4 )GT i1 Gi1 + ε2 Fi1 Fi1 ,
j=1
Σi22 = −(1 − dτ )Q1 +
T εi3 Fi2 Fi2 ,
Σi44 = −(1 − dh )T1 + (δi + τ ε1 + hε4 )GT i2 Gi2 , Ψi = [Ni + Si − Mi Ai Ki − Ni − Mi Ai1 − Ki Hi − Si − Hi Mi ]. Proof. Choose the stochastic Lyapunov-Krasovskii functional candidate to be V (xt , t, i) =
6 k=1
where
Vk (xt , t, i),
(13.41)
288
13. Stability of Stochastic Systems with Time-Varying Delay
V1 (xt , t, i) = xT (t)P (rt )x(t), t t V2 (xt , t, i) = ε1 G1 (rθ )x(θ)2 +G2 (rθ )x(θ − h(θ))2 dθds, t−τ s t t
V3 (xt , t, i) =
ε4 G1 (rθ )x(θ)2 +G2 (rθ )x(θ − h(θ))2 dθds,
t−h s t t
V4 (xt , t, i) =
t−τ
y T (θ, rθ )R1 y(θ, rθ )dθds
s t
+ V5 (xt , t, i) =
t−h t
t
y T (θ, rθ )R2 y(θ, rθ )dθds,
s
xT (s)Q1 x(s)ds +
t−τ (t) t T
V6 (xt , t, i) =
t
t−h(t) t
x (s)Q2 x(s)ds +
t−τ
xT (s)T1 x(s)ds,
xT (s)T2 x(s)ds;
t−h
and P (rt ) = Pi > 0 for rt = i ∈ S; Qj 0, Tj 0, and Rj > 0, j = 1, 2; and ε1 > 0 and ε4 > 0. Then, the weak infinitesimal operator, L, of the stochastic process {xt , t τ } along the evolution of V (xt , rt ) for rt = i ∈ S is LV (xt , t, i) =
6
LVi (xt , t, i),
(13.42)
k=1
where LV1 (xt , t, i) = 2xT (t)Pi yi (t) +
N
xT (t)πij Pj x(t)
j=1
# $ +Tr giT (t, x(t), x(t − h(t)))Pi gi (t, x(t), x(t − h(t))) , , ,2 LV2 (xt , t, i) = τ ε1 ,Gi1 x(t)2 + Gi2 x(t − h(t)), t 2 2 ε1 G1 (rs )x(s) + G2 (rs )x(s − h(s)) ds, − t−τ , ,2 LV3 (xt , t, i) = hε4 ,Gi1 x(t)2 + Gi2 x(t − h(t)), , t 2 2 ε4 G1 (rs )x(s) + G2 (rs )x(s − h(s)) ds, − t−h
LV4 (xt , t, i) = τ yiT (t)R1 yi (t) −
t
y T (s, rs )R1 y(s, rs )ds
t−τ
+hyi (t)T R2 yi (t) −
t t−h
y T (s, rs )R2 y(s, rs )ds,
13.2 Exponential Stability of Stochastic Markovian Jump Systems with...
289
LV5 (xt , t, i) = xT (t)(Q1 + T1 )x(t) − (1 − τ˙ (t))xT (t − τ (t))Q1 x(t − τ (t)) T ˙ −(1 − h(t))x (t − h(t))T1 x(t − h(t)), LV6 (xt , t, i) = xT (t)(Q2 + T2 )x(t) − xT (t − τ )Q2 x(t − τ ) −xT (t − h)T2 x(t − h). The following equations are true for any matrices Ni , Ki , Si , and Hi , i = 1, 2, · · · , N with appropriate dimensions: t
2ξiT (t)Ni x(t) − x(t − τ (t)) −
dx(s) = 0,
2ξiT (t)Ki x(t − τ (t)) − x(t − τ ) − 2ξiT (t)Si x(t) − x(t − h(t)) −
(13.43)
t−τ (t)
t−τ (t)
dx(s) = 0,
(13.44)
t−τ
t
dx(s) = 0,
(13.45)
t−h(t)
2ξiT (t)Hi x(t − h(t)) − x(t − h) −
t−h(t)
dx(s) = 0.
(13.46)
t−h
From (13.33), the following is true for any appropriately dimensioned matrices Mi , i = 1, 2, · · · , N : 2ξiT (t)Mi [yi (t) − Ai x(t) − Ai1 x(t − τ (t)) − fi (t, x(t), x (t − τ (t)))] = 0, (13.47) where ξi (t) = [xT (t), xT (t − τ (t)), xT (t − τ ), xT (t − h(t)), xT (t − h), yiT (t)]T . On the other hand, the following equations are true for any matrices Xi 0 and Yi 0, i = 1, 2, · · · , N : t t−τ (t) ξiT (t)Xi ξi (t)ds− ξiT (t)Xi ξi (t)ds = 0, (13.48) τ ξiT (t)Xi ξi (t)− t−τ (t)
hξiT (t)Yi ξi (t) −
t t−h(t)
t−τ
ξiT (t)Yi ξi (t)ds −
t−h(t)
t−h
ξiT (t)Yi ξi (t)ds = 0. (13.49)
Adding the terms on the left sides of (13.43)-(13.49) to (13.42) yields LV (xt , t, i) 2xT (t)Pi yi (t) +
N j=1
xT (t)πij Pj x(t)
# $ +Tr g T (t, x(t), x(t − h(t)), i)Pi g(t, x(t), x(t − h(t)), i)
290
13. Stability of Stochastic Systems with Time-Varying Delay
, ,2 +(τ ε1 + hε4 ) ,Gi1 x(t)2 + Gi2 x(t − h(t)), −
t
2 2 ε1 G1 (rs )x(s) + G2 (rs )x(s − h(s)) ds
t−τ
−
t
2 2 ε4 G1 (rs )x(s) + G2 (rs )x(s − h(s)) ds
t−h
+τ yiT (t)R1 yi (t) − −
t
t
y T (s, rs )R1 y(s, rs )ds + hyi (t)T R2 yi (t)
t−τ
y T (s, rs )R2 y(s, rs )ds + xT (t)(Q1 + Q2 + T1 + T2 )x(t)
t−h
−(1 − dτ )xT (t − τ (t))Q1 x(t − τ (t)) − xT (t − τ )Q2 x(t − τ ) −(1 − dh )xT (t − h(t))T1 x(t − h(t)) − xT (t − h)T2 x(t − h) t
+2ξiT (t)Ni x(t) − x(t − τ (t)) −
dx(s) t−τ (t)
+2ξiT (t)Ki
x(t − τ (t)) − x(t − τ ) −
dx(s) t−τ
+2ξiT (t)Si
t−τ (t)
x(t) − x(t − h(t)) −
t
dx(s) t−h(t)
+2ξiT (t)Hi x(t − h(t)) − x(t − h) −
t−h(t)
dx(s) t−h
+2ξiT (t)Mi [yi (t)−Ai x(t)−Ai1 x(t − τ (t))−fi (t, x(t), x (t − τ (t)))] +τ ξiT (t)Xi ξi (t) −
+hξiT (t)Yi ξi (t)−
t
t−τ (t)
ξiT (t)Xi ξi (t)ds −
t−τ (t)
t−τ
ξiT (t)Xi ξi (t)ds
t−h(t) T ξi (t)Yi ξi (t)ds− ξiT (t)Yi ξi (t)ds. t−h(t) t−h t
(13.50)
From (13.31) and (13.40), we have the following inequalities: $ # Tr g T (t, x(t), x(t − h(t)), i)Pi g(t, x(t), x(t − h(t)), i) T T xT (t)δi GT i1 Gi1 x(t) + x (t − h(t))δi Gi2 Gi2 x(t − h(t)),
(13.51)
13.2 Exponential Stability of Stochastic Markovian Jump Systems with...
−2ξiT Ni
t
t−τ (t)
dx(s) −2ξiT (t)Ni
t
t−τ (t)
T y(s, rs )ds + ξiT (t)ε−1 1 Ni Ni ξi (t)
, ,2 , t , , , +ε1 , g(x(s), x(s − h(s)), rs )dw(s), , , t−τ (t) , −2ξiT Ki
t−τ (t)
t−τ
dx(s) −2ξiT (t)Ki
t−τ (t) t−τ
t
t−h(t)
dx(s) −2ξiT (t)Si
t t−h(t)
t−h(t)
t−h
dx(s) −2ξiT (t)Hi
t−h(t) t−h
(13.53)
T y(s, rs )ds + ξiT (t)ε−1 4 Si Si ξi (t)
,2 , , , t , , +ε4 , g(x(s), x(s − h(s)), rs )dw(s), , , , t−h(t) −2ξiT Hi
(13.52)
T y(s, rs )ds + ξiT (t)ε−1 1 Ki Ki ξi (t)
, ,2 , t−τ (t) , , , +ε1 , g(x(s), x(s − h(s)), rs )dw(s), , , t−τ , −2ξiT Si
291
(13.54)
T y(s, rs )ds + ξiT (t)ε−1 4 Hi Hi ξi (t)
, ,2 , t−h(t) , , , +ε4 , g(x(s), x(s − h(s)), rs )dw(s), . , t−h ,
(13.55)
Note that ⎧, ,2 ⎫ ⎨, t , ⎬ , , E , g(x(s), x(s − h(s)), rs )dw(s), ⎩, t−τ (t) , ⎭
t
t−τ (t)
G1 (rs )x(s)2 + G2 (rs )x(s − h(s))2 ds,
(13.56)
⎧, ,2 ⎫ ⎨, t−τ (t) , ⎬ , , g(x(s), x(s − h(s)), rs )dw(s), E , , ⎭ ⎩, t−τ
t−τ (t)
G1 (rs )x(s)2 + G2 (rs )x(s − h(s))2 ds,
(13.57)
t−τ
⎧, ,2 ⎫ ⎨, t , ⎬ , , E , g(x(s), x(s − h(s)), rs )dw(s), , ⎭ ⎩, t−h(t) t G1 (rs )x(s)2 + G2 (rs )x(s − h(s))2 ds, t−h(t)
(13.58)
292
13. Stability of Stochastic Systems with Time-Varying Delay
⎧, ,2 ⎫ ⎨, t−h(t) , ⎬ , , E , g(x(s), x(s − h(s)), rs )dw(s), ⎩, t−h , ⎭
t−h(t)
G1 (rs )x(s)2 + G2 (rs )x(s − h(s))2 ds.
(13.59)
t−h
In addition, from (13.30), the following inequality holds: −2ξiT (t)Mi f (t, x(t), x(t − τ (t)), i) ξiT (t) (εi2 )−1 + (εi3 )−1 Mi MiT ξi (t) T T + xT (t)εi2 Fi1 Fi1 x(t) + xT (t − τ (t))εi3 Fi2 Fi2 x(t − τ (t)).
(13.60)
Then, applying inequalities (13.51)-(13.60) to (13.50) yields t # T $ ζi (t, s)Θ1i ζi (t, s) ds LV (xt , i) ξiT (t)Ξi ξi (t) − t−τ (t)
−
t−τ (t) #
t−τ
−
t−h(t)
t−h
where Ξi = Φi11 +
$ ζiT (t, s)Θ2i ζi (t, s) ds −
# T $ ζi (t, s)Θ4i ζi (t, s) ds,
εi2
−1
+ εi3
−1
t
t−h(t)
# T $ ζi (t, s)Θ3i ζi (t, s) ds (13.61)
−1 T T Mi MiT + ε−1 1 Ni Ni + ε1 K i K i +
ε−1 S S T + ε−1 H H T , with Φi11 being defined in (13.35), and ζi (t, s) = 4 T i i T 4T i i ξi (t), y (s) . By the Schur complement, (13.35) is equivalent to Ξi < 0. Therefore, if (13.35)-(13.39) are satisfied, then (13.61) implies that the following holds for all i ∈ S, . # $ E LV˜ (xt , i) −λ1 E xT (t)x(t) ,
(13.62)
where λ1 = mini∈S {λmin (−Ξi )}. Define the following new function: W (xt , rt ) = eβt V˜ (xt , rt ),
(13.63)
where β > 0. Its generator, LW (xt , rt ), is LW (xt , rt ) = βeβt V˜ (xt , rt ) + eβt LV˜ (xt , rt ). Using Itˆo’s generalized formula, we obtain
(13.64)
13.2 Exponential Stability of Stochastic Markovian Jump Systems with...
293
E{W (xt , rt )} − E{W (x0 , r0 )} t t βeβs E{V˜ (xs , rs )}ds + eβs E{LV˜ (xs , rs )}ds. (13.65) = 0
0
For system (13.25), the proof follows a line similar to the one in [16]. There exists a scalar α > 0 such that, for t 0, . E V˜ (xt , rt ) α
sup
−τ s0
. E φ(s)2 e−βt .
(13.66)
Since V˜ (xt , rt ) λ2 xT (t)x(t), where λ2 = mini∈S {λmin (Pi )}, we know from (13.66) that, for t 0, # $ E xT (t)x(t) α ¯
sup
−τ s0
. 2 E φ(s) e−βt .
(13.67)
From Definition 13.2.1, system (13.25) is exponentially stable. This completes the proof.
Remark 13.2.1. If we let Q2 = 0 and T2 = 0, and also Ki = 0 and Hi = 0, i = 1, 2, · · · , N , then we find that Theorem 13.2.1 is equivalent to Theorem 1 in [8], which means that the latter is a special case of the former. If we do not consider the switching modes of system (13.25), the system reduces to a conventional stochastic system with nonlinearities and delays. That gives us the following corollary. Corollary 13.2.1. Consider system (13.25) with i ∈ S = {1}. Given scalars τ, h, dτ , and dh , the system is exponentially stable in the mean square sense if there exist matrices P > 0, Qi 0, Ti 0, Ri > 0, i = 1, 2, X 0, and Y 0, any appropriately dimensioned matrices N, M, H, K, and S, and scalars εj > 0, j = 1, 2, · · · , 4 and δ > 0 such that the following LMIs hold : ⎤ ⎡ Φ N K S H M M 11 ⎥ ⎢ ⎥ ⎢ ⎢ ∗ −ε1 I 0 0 0 0 0 ⎥ ⎥ ⎢ ⎥ ⎢ ⎢ ∗ 0 0 0 0 ⎥ ∗ −ε1 I ⎥ ⎢ ⎥ ⎢ ⎢ (13.68) Φ=⎢ ∗ ∗ ∗ −ε4 I 0 0 0 ⎥ ⎥ < 0, ⎥ ⎢ ⎥ ⎢ ∗ 0 0 ⎥ ∗ ∗ ∗ −ε4 I ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ∗ ∗ ∗ −ε2 I 0 ⎥ ⎢ ∗ ⎦ ⎣ ∗ ∗ ∗ ∗ ∗ ∗ −ε3 I
294
13. Stability of Stochastic Systems with Time-Varying Delay
⎡ Θ1 = ⎣
⎤ X N ∗ R1
⎡ Θ2 = ⎣
∗ R1
⎦ 0,
(13.70)
⎤ Y
S
∗ R2
⎡ Θ4 = ⎣
(13.69)
⎤ X K
⎡ Θ3 = ⎣
⎦ 0,
⎦ 0,
(13.71)
⎤ Y
H
∗ R2
⎦ 0,
(13.72)
P δI,
(13.73)
where Φ11 = Σ + Ψ + Ψ T + τ X + hY, in which,
⎤
⎡
⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ Σ=⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
Σ11
0
0
0
0
P
∗
Σ22
0
0
0
0
∗
∗
−Q2
0
0
0
∗
∗
∗
Σ44
0
0
∗
∗
∗
∗
−T2
0
∗
∗
∗
∗
∗
τ R1 + hR2
⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥, ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦
T Σ11 = Q1 + Q2 + T1 + T2 + (δ + τ ε1 + hε4 )GT 1 G1 + ε2 F1 F1 ,
Σ22 = −(1 − dτ )Q1 + ε3 F2T F2 , Σ44 = −(1 − dh )T1 + (δ + τ ε1 + hε4 )GT 2 G2 , Ψ = [N + S − M A K − N − M A1 − K
H −S
− H M ].
13.2.3 Numerical Example The numerical example in this subsection demonstrates the effectiveness of the above method.
13.3 Conclusion
295
Example 13.2.1. Consider the system dx(t) = [(A + ΔA(t))x(t) + (A1 + ΔA1 (t))x(t − τ (t))]dt +g(t, x(t), x(t − h(t)))dw(t), with ⎡ A=⎣
⎤
−2
0
1
−1
⎡
⎦ , A1 = ⎣
−1
⎤ 0
−0.5 −1
⎦;
τ (t) and h(t) satisfy conditions (13.26)-(13.29); and ΔA(t) 0.1, ΔA1 (t) 0.1, # $ Tr g T (t, x(t), x(t − h(t))) g (t, x(t), x(t − h(t))) 0.1x(t)2 + 0.1x(t − h(t))2 . Since f (t, x(t), x(t − τ (t))) = ΔA(t)x(t) + ΔA1 (t)x(t − τ (t)), we choose √ $ #√ F1 = F2 = diag {0.1, 0.1}, G1 = G2 = diag 0.1, 0.1 . We assume that τ = h. Table 13.2 lists values of the maximum allowable τ , τmax , that guarantee that the system is exponentially stochastically stable for various dτ and dh . The data were obtained with Corollary 13.2.1 and the criterion in [8]. Our method is clearly better. Table 13.2. Maximum allowable τ , τmax , for various dτ and dh (Example 13.2.1) dτ
0.5
0.9
1
0.5
0.9
dh
0.5
0.9
1
0
0
[8]
0.8502
0.5261
0.5001
0.9366
0.7639
Corollary 13.2.1
0.9010
0.7312
0.7312
1.0027
0.9772
13.3 Conclusion This chapter examines the delay-dependent robust stability of linear stochastic systems with a time-varying delay and the delay-dependent exponential stability of a special type of stochastic system, namely, a Markovian jump
296
13. Stability of Stochastic Systems with Time-Varying Delay
system with nonlinearities and time-varying delays. The IFWM approach is used to establish stability conditions. Numerical examples demonstrate the effectiveness and advantages of this method.
References 1. S. Xu and T. Chen. Robust H∞ control for uncertain stochastic systems with state delay. IEEE Transactions on Automatic Control, 47(12): 2089-2094, 2002. 2. C. Y. Lu, J. S. H. Tsai, G. J. Jong, and T. J. Su. An LMI-based approach for robust stabilization of uncertain stochastic systems with time-varying delays. IEEE Transactions on Automatic Control, 48(2): 286-289, 2003. 3. S. Xu and T. Chen. Robust H∞ control for uncertain discrete-time systems with time-varying delays via exponential output feedback controllers. Systems & Control Letters, 51(3-4): 171-183, 2004. 4. W. H. Chen, Z. H. Guan, and X. M. Lu. Delay-dependent exponential stability of uncertain stochastic systems with multiple delays: an LMI approach. Systems & Control Letters, 54(6): 547-555, 2005. 5. Y. S. Fu, Z. H. Tian, and S. J. Shi. Output feedback stabilization for a class of stochastic time-delay nonlinear systems. IEEE Transactions on Automatic Control, 50(6): 847-851, 2005. 6. Z. H. Guan, W. H. Chen, and J. X. Xu. Delay-dependent stability and stabilizability of uncertain jump bilinear stochastic systems with mode-dependent time-delays. International Journal of Systems Science, 36(5): 275-285, 2005. 7. H. C. Yan, X. H. Huang, H. Zhang, and M. Wang. Delay-dependent robust stability criteria of uncertain stochastic systems with time-varying delay. Chaos, Solitons & Fractals, 40(4): 1668-1679, 2009. 8. D. Yue and Q. L. Han. Delay-dependent exponential stability of stochastic systems with time-varying delay, nonlinearity and Markovian switching. IEEE Transactions on Automatic Control, 50(2): 217-222, 2005. 9. Y. Zhang, Y. He, and M. Wu. Improved delay-dependent robust stability for uncertain stochastic systems with time-varying delay. Proceeding of the 27th Chinese Control Conference, Kunming, China, 764-768, 2008. 10. Y. He, Y. Zhang, M. Wu, and J. H. She. Improved exponential stability for stochastic Markovian jump systems with nonlinearity and time-varying delay. International Journal of Robust and Nonlinear Control, in press, 2008. 11. X. R. Mao and L. Shaikhet. Delay-dependent stability criteria for stochastic differential delay equations with Markovian switching. Stability and Control: Theory and Applications, 3(2): 87-101, 2000. 12. Z. Shu, J. Lam, and S. Y. Xu. Robust stabilization of Markovian delay systems with delay-dependent exponential estimates. Automatica, 42(11): 2001-2008, 2006.
References
297
13. S. Xu and X. R. Mao. Delay-dependent H∞ control and filtering for uncertain Markovian jump system with time-varying delay. IEEE Transactions on Circuits and systems I, 54(9): 2070-2077, 2007. 14. H. J. Kushner. Stochastic Stability and Control. New York: Academic Press, 1967. 15. A. V. Skorohod. Asymptotic Methods in the Theory of Stochastic Differential Equations. Providence, RI: American Mathematical Society, 1989. 16. X. R. Mao. Robustness of exponential stability of stochastic differential delay equation. IEEE Transactions on Automatic Control, 41(3): 442-447, 1996.
14. Stability of Nonlinear Time-Delay Systems
Since Lur’e brought up the subject of absolute stability in [1], three common ways of dealing with the problem of the absolute stability of Lur’e control systems have emerged. One is to use the Popov frequency-domain criterion [2–5]. The problem with it is that it is not suitable for dealing with multiple nonlinearities because it is not geometrically intuitive and cannot be examined by illustration. The second is to use the extended Popov frequency-domain criterion [6,7]. The problem here is that the condition is only sufficient. The third is a method based on a Lyapunov function in the Lur’e form [8,9]. It produces necessary and sufficient conditions for the existence of a Lyapunov function in the Lur’e form that ensures the absolute stability of a Lur’e control system with multiple nonlinearities in a bounded sector; but the conditions simply exist and are unsolvable. These ways of examining absolute stability depend on the selection of free parameters, such as a positive definite matrix or the coefficients of integral terms. However, the parameters cannot be derived by either an analytical or a numerical method, which makes the resulting criteria conservative. In fact, the criteria cannot be used to demonstrate that no Lyapunov function in the extended Lur’e form that guarantees the absolute stability of a system exists when suitable parameters are not found. So, they are not necessary and sufficient conditions in the fullest sense. Since delays are common in a great variety of control systems and are usually a source of instability, many researchers study the absolute stability of Lur’e control systems with a delay [10–16]. [16] extended the criterion in [8] to a Lur’e control system with a delay and gave necessary and sufficient conditions for the existence of a Lyapunov-Krasovskii functional in the extended Lur’e form that guarantees the absolute stability of the system. However, in this case, too, the conditions simply exist and are unsolvable. Moreover, these criteria still depend on the selection of free parameters and are not necessary and sufficient conditions in the fullest sense.
300
14. Stability of Nonlinear Time-Delay Systems
Since systems often have uncertainties, there is keen interest in Lur’e control systems either with just uncertainties or with both uncertainties and a delay. Many sufficient conditions for the robust absolute stability of Lur’e systems with uncertainties have been derived by the three techniques mentioned above [17–25]. Systems with uncertainties and multiple nonlinearities present the same difficulties as systems with no uncertainties with regard to the handling of robust absolute stability. Moreover, the necessary and sufficient conditions that [8,9,16] obtained using a Lyapunov function or functional in the Lur’e form cannot be extended to systems with time-varying uncertainties. So, deriving a delay-dependent stability criterion for Lur’e control systems with a delay and time-varying uncertainties is a challenging objective. [26] used fixed model transformations to investigate this problem, but the conservativeness of the transformations themselves imposes certain limitations. Better criteria can be found. On the other hand, many papers have appeared on delay-independent [27–31] and delay-dependent [30, 32–34] robust stability criteria for timedelay systems subject to nonlinear perturbations. [30] used a transformation with additional eigenvalues to derive a delay-dependent condition. [32] used Park’s inequality, and [34] used the FWM approach, to obtain improved delay-dependent conditions. However, there is room for further investigation. Useful terms tend to be ignored when the upper bound on the derivative of a Lyapunov-Krasovskii functional is estimated. For example, [32] t used hx˙ T (t)Z x(t) ˙ − t−d(t) x˙ T (s)Z x(s)ds ˙ as an estimate of the derivative 0 t t−d(t) T T of −h t+θ x˙ (s)Z x(s)dsdθ ˙ and ignored the term − t−h x˙ (s)Z x(s)ds, ˙ which may lead to considerable conservativeness [35]. On the other hand, [30,32–34] considered only a delay ranging from zero to an upper bound; but in practice, the lower bound is not necessarily zero. So, the criterion in [32] is conservative because it does not take into account information on the lower bound on the delay. This chapter concerns the absolute stability of Lur’e control systems with a delay. First, the absolute stability of nonlinear systems with a delay and multiple nonlinearities is discussed [36]. The problem of finding necessary and sufficient conditions for the existence of a Lyapunov-Krasovskii functional in the extended Lur’e form with a negative definite derivative that ensures the absolute stability of the system in [16] is converted into the problem of solving a set of LMIs. This idea is extended to a system with time-varying structured uncertainties [37, 38]. Then, the FWM approach is used to derive
14.1 Absolute Stability of Nonlinear Systems with Delay and Multiple...
301
delay-dependent criteria for the absolute stability of a Lur’e control system with a time-varying delay. Finally, the delay-dependent robust stability of a system with nonlinear perturbations and a time-varying interval delay is investigated [39]. The IFWM approach is used to estimate the upper bound on the derivative of a Lyapunov-Krasovskii functional; and consideration of the range of a delay and the use of an augmented Lyapunov-Krasovskii functional lead to improved delay-dependent stability criteria.
14.1 Absolute Stability of Nonlinear Systems with Delay and Multiple Nonlinearities This section discusses the absolute stability of nonlinear systems with a delay and multiple nonlinearities, and provides some necessary and sufficient conditions for the existence of a Lyapunov-Krasovskii functional in the extended Lur’e form. 14.1.1 Problem Formulation Consider the following nominal Lur’e control system with multiple nonlinearities and a delay: ⎧ ⎨ x(t) ˙ = Ax(t) + Bx(t − h) + Df (σ(t)), (14.1) ⎩ σ(t) = C T x(t), T
where x(t) = [x1 (t), x2 (t), · · · , xn (t)] is the state vector; A = [aij ]n×n , B = [bij ]n×n , C = [cij ]n×m = [c1 , c2 , · · · , cm ], and D = [dij ]n×m = [d1 , d2 , · · · , dm ]; cj and dj , j = 1, 2, · · · , m are the jth column of C and D, T respectively; and f (σ(t)) = [f1 (σ1 (t)), f2 (σ2 (t)), · · · , fm (σm (t))] , where T σ(t) = [σ1 (t), σ2 (t), · · · , σm (t)] , is a nonlinear function. The nonlinearities fj (·) satisfy # $ fj (·) ∈ Kj [0, kj ] = fj (σj )|fj (0) = 0, 0 σj fj (σj ) kj σj2 (σj = 0) (14.2) for 0 < kj < +∞, j = 1, 2, · · · , m. For simplicity, we abbreviate fj (σj (t)) as fj (σj ) when there is no possibility of confusion, and let K = diag {k1 , k2 , · · · , km }. We will also consider the following system with time-varying structured uncertainties:
302
14. Stability of Nonlinear Time-Delay Systems
⎧ ⎪ ⎪ ˙ = (A + ΔA(t))x(t) + (B + ΔB(t)) x(t − h) ⎪ ⎨ x(t) (14.3)
+(D + ΔD(t))f (σ(t)), ⎪ ⎪ ⎪ ⎩ σ(t) = C T x(t). The uncertainties are assumed to have the form [ΔA(t) ΔB(t) ΔD(t)] = HF (t)[Ea Eb Ed ],
(14.4)
where H, Ea , Eb , and Ed are known, real, constant matrices with appropriate dimensions, and F (t) is an unknown, real, time-varying matrix with Lebesgue measurable elements satisfying F T (t)F (t) I, ∀t.
(14.5)
Constructing a Lyapunov-Krasovskii functional in the extended Lur’e form yields σj t m T T x (s)Qx(s)ds + 2 λj fj (σj )dσj , (14.6) V (xt ) = x (t)P x(t) + t−h
j=1
0
where P > 0, Q 0, and λj 0, j = 1, 2, · · · , m are to be determined. The functional V (xt ) in (14.6) is said to be a Lyapunov-Krasovskii functional of system (14.3), which has uncertainties, (or of nominal system (14.1)) with a negative definite derivative if the following condition holds: V˙ (xt )|(14.3) < 0 (or V˙ (xt )|(14.1) < 0), (14.7) T ∀ xT (t), xT (t − h) = 0 and ∀fj (·) ∈ Kj [0, kj ], j = 1, 2, · · · , m. If condition (14.7) holds, (14.3) (or nominal system (14.1)) is robustly absolutely stable (or absolutely stable). Gan [16] considered the absolute stability of a general Lur’e control system with multiple delays and multiple nonlinearities and obtained necessary and sufficient conditions for the existence of a Lyapunov-Krasovskii functional in the extended Lur’e form with a negative definite derivative. Those conditions are the starting point for the derivation of LMI-based necessary and sufficient conditions; they are stated in the following lemma. Lemma 14.1.1. Consider nominal system (14.1) with m 2. Necessary and sufficient conditions for (14.7) are as follows: (1) For f1 (σ1 ) = α1 σ1 (α1 = 0, k1 ) and any fj (·) ∈ Kj [0, kj ], j = T 2, 3, · · · , m, if xT (t), xT (t − h) = 0, then V˙ (xt )|(14.1) < 0. (2) For f1 (σ1 ) ∈ K1 [0, k1 ] and any fj (σj ) = 0, j = 2, 3, · · · , m, if [xT (t), xT (t − h)]T = 0, V˙ (xt )|(14.1) < 0.
14.1 Absolute Stability of Nonlinear Systems with Delay and Multiple...
303
14.1.2 Nominal Systems This subsection presents a sufficient condition derived by directly applying the S-procedure to the nonlinearities. Then, for nominal system (14.1), it presents necessary and sufficient conditions for the existence of a LyapunovKrasovskii functional in the extended Lur’e form with a negative definite derivative that guarantees the absolute stability of the system. We begin with the following theorem. Theorem 14.1.1. Consider nominal system (14.1). Condition (14.7) holds and the system is absolutely stable in the sector bounded by K = diag{k1 , k2 , · · · , km } if there exist matrices P > 0, Q 0, T = diag{t1 , t2 , · · · , tm } 0, and Λ = diag{λ1 , λ2 , · · · , λm } 0 such that the following LMI holds: ⎡ ⎤ AT P + P A + Q P B P D + AT CΛ + CKT ⎢ ⎥ ⎢ ⎥ (14.8) Ω=⎢ ⎥ < 0. ∗ −Q B T CΛ ⎣ ⎦ ∗ ∗ ΛC T D + DT CΛ − 2T This condition is also a necessary condition when m = 1. Proof. The derivative of V (xt ) along the solutions of system (14.1) is V˙ (xt ) = x˙ T (t)P x(t) + x(t)P x˙ T (t) + xT (t)Qx(t) − xT (t − h)Qx(t − h) m λj fj (σj (t))cT ˙ +2 j x(t) j=1 T
= x (t)[AT P + P A + Q]x(t) + 2x(t)P Bx(t − h) −xT (t − h)Qx(t − h) + 2xT (t)(P D + AT CΛ)f (σ(t)) +2xT (t − h)B T CΛf (σ(t)) + f T (σ(t))(ΛC T D + DT CΛ)f (σ(t)) = η T (t)Ξη(t),
(14.9)
where T η(t) = xT (t), xT (t − h), f T (σ) , ⎡ ⎤ AT P + P A + Q P B P D + AT CΛ ⎢ ⎥ ⎢ ⎥ Ξ=⎢ ⎥. ∗ −Q B T CΛ ⎣ ⎦ ∗ ∗ ΛC T D + DT CΛ Nonlinear condition (14.2) is equivalent to fj (σj (t))(fj (σj (t)) − kj cT j x(t)) 0, j = 1, 2, · · · , m.
(14.10)
304
14. Stability of Nonlinear Time-Delay Systems
From the S-procedure, we know that, if there exists T = diag {t1 , t2 , · · · , tm } 0 such that η T (t)Ξη(t) − 2
m
tj fj (σj (t))(fj (σj (t)) − kj cT j x(t)) < 0
(14.11)
j=1
T for η(t) = 0, then, if xT (t), xT (t − h) = 0 and nonlinear condition (14.2) holds, then V˙ (xt )|(14.1) < 0. From (14.11), LMI (14.8) holds. Therefore, (14.7) holds for nominal system (14.1) and the system is absolutely stable. This completes the proof.
Remark 14.1.1. Theorem 14.1.1 is an extension of the theorem in [40], which is for a system with no delay. Since the S-procedure is applied directly to multiple nonlinearities, only a sufficient condition is obtained. It is more conservative than the necessary and sufficient condition below for a system with multiple nonlinearities. Now, we transform a system with multiple nonlinearities into multiple systems, each with just one nonlinearity. Let Γ1∼m = diag {α1 , α2 , · · · , αm }. Then, Dj1∼m = {Γ1∼m |αi = 0, for i j; and αi ∈ {0, ki }, for i < j, i = 1, 2, · · · , m} , j = 1, 2, · · · , m (14.12) for 2j−1 elements. The next theorem is for a nominal nonlinear Lur’e system with a delay. Theorem 14.1.2. Consider nominal system (14.1) with m 1. A necessary and sufficient condition for (14.7) is that V˙ (xt )|(14.1) < 0 when [xT (t), xT (t− h)]T = 0 for any Γ1∼m ∈ Dj1∼m , j = 1, 2, · · · , m, any fi (σi ) = αi σi , i = 1, 2, · · · , m, i = j, and any fj (σj ) ∈ Kj [0, kj ]. Proof. We prove this theorem by mathematical induction. From Lemma 14.1.1, we know that the theorem holds for m = 1. Suppose that it holds for m = ρ, and consider a system with ρ nonlinearities: x˙ = Ax + Bx(t − h) +
ρ+1 j=2
dj fj (σj ).
(14.13)
14.1 Absolute Stability of Nonlinear Systems with Delay and Multiple...
305
2∼(ρ+1)
Let Dj = {Γ2∼(ρ+1) |αi = 0, ∀i j; and αi ∈ {0, ki }, ∀i < j; i = 2, 3, · · · , ρ + 1}, j = 2, 3, · · · , ρ + 1. From the induction assumption, the necessary and sufficient condition for (14.7) is that V˙ (xt )|(14.1) < 0 when T T 2∼(ρ+1) x (t), xT (t − h) = 0 for any Γ2∼(ρ+1) ∈ Dj , j = 2, 3, · · · , ρ + 1, any fi (σi ) = αi σi , i = 2, 3, · · · , ρ + 1, i = j, and any fj (σj ) ∈ Kj [0, kj ]. Condition (14.7) holds if and only if (1) and (2) in Lemma 14.1.1 hold for m = ρ + 1. The necessary and sufficient condition for condition (2) is T that V˙ (xt )|(14.1) < 0 when xT (t), xT (t − h) = 0 for any Γ1∼(ρ+1) ∈ 1∼(ρ+1) D1 = diag {0, 0, · · · , 0}, any fi (σi ) = αi σi , i = 2, 3, · · · , ρ + 1, and any f1 (σ1 ) ∈ K1 [0, k1 ]. On the other hand, the necessary and sufficient condition for condition (1) has two parts: (i) If α1 = 0 and f1 (σ1 ) = 0, system (14.1) can be transformed into (14.13). Let . ¯ 1∼(ρ+1) = Γ1∼(ρ+1) )|α1 = 0, Γ2∼(ρ+1) ∈ D2∼(ρ+1) , j = 2, 3, · · · , ρ+1. D j j From the induction assumption, we know that the necessary and sufficient condition for condition (14.7) is that V˙ (xt )|(14.1) < 0 when T T ¯ 1∼(ρ+1) , j = 2, 3, · · · , ρ+1, x (t), xT (t − h) = 0 for any Γ1∼(ρ+1) ∈ D j any fi (σi ) = αi σi , i = 1, 2, · · · , ρ + 1, i = j, and any fj (σj ) ∈ Kj [0, kj ]. (ii) If α1 = k1 and f1 (σ1 ) = k1 σ1 , system (14.1) can be transformed into x˙ = Ax + Bx(t − h) + k1 d1 σ1 +
ρ+1
dj fj (σj ).
(14.14)
j=2
Let . ˆ 1∼(ρ+1) = Γ1∼(ρ+1) |α1 = k1 , Γ2∼(ρ+1) ∈ D2∼(ρ+1) , j = 2, 3, · · · , ρ+1. D j j The necessary and sufficient condition for condition (14.7) is that T = 0 for any Γ1∼(ρ+1) ∈ V˙ (xt )|(14.1) < 0 when xT (t), xT (t − h) 1∼(ρ+1) ˆ Dj , j = 2, 3, · · · , ρ + 1, any fi (σi ) = αi σi , i = 1, 2, · · · , ρ + 1, i = j, and any fj (σj ) ∈ Kj [0, kj ]. Then we have ⎧ ⎨ D1∼(ρ+1) , j = 1, 1 1∼(ρ+1) = Dj + ⎩D ¯ 1∼(ρ+1) D ˆ 1∼(ρ+1) , j = 2, 3, · · · , ρ + 1. j j
(14.15)
306
14. Stability of Nonlinear Time-Delay Systems
Therefore, conditions (1) and (2) in Lemma 14.1.1 are equivalent to the state T ment that V˙ (xt )|(14.1) < 0 when xT (t), xT (t − h) = 0 for any Γ1∼(ρ+1) ∈ 1∼(ρ+1) Dj , j = 1, 2, · · · , ρ + 1, any fi (σi ) = αi σi , i = 1, 2, · · · , ρ + 1, i = j, and any fj (σj ) ∈ Kj [0, kj ]. Thus, we have proved the theorem for m = ρ + 1. Therefore, the condition is true for any m 1. This completes the proof.
The necessary and sufficient condition for condition (14.7) in Theorem 14.1.2 is obtained by transforming a system with multiple nonlinearities into multiple systems, each with a single nonlinearity. It can be formulated in terms of LMIs. For simplicity, we abbreviate Γ1∼m as Γ ; and we assume A(Γ ) = A + DΓ C T , P (Γ ) = P + CΛΓ C T . Theorem 14.1.3. Consider nominal system (14.1). A necessary and sufficient condition for the existence of a Lyapunov-Krasovskii functional, V (xt ), that satisfies condition (14.7) and ensures the absolute stability of system (14.1) in the sector bounded by K = diag{k1 , k2 , · · · , km } is that, for any Γ ∈ Dj1∼m , j = 1, 2, · · · , m, there exist matrices P > 0 and Q 0, and scalars tΓ 0 and λj 0 such that the following LMI holds: ⎡ ⎤ Φ11 (Γ ) P (Γ )B Φ13,j (Γ ) + tΓ kj cj ⎢ ⎥ ⎢ ⎥ (14.16) Gj (Γ ) = ⎢ ∗ ⎥ < 0, −Q λj B T cj ⎣ ⎦ ∗ ∗ 2λj cT j dj − 2tΓ where Φ11 (Γ ) = AT (Γ )P (Γ ) + P (Γ )A(Γ ) + Q, Φ13,j (Γ ) = P (Γ )dj + λj AT (Γ )cj . Proof. Consider the case where Γ ∈ Dj1∼m and fi (σi ) = Γi σi , i = 1, 2, · · · , m, j = 1, 2, · · · , m, i = j. For any fj (·) ∈ Kj [0, kj ], we can transform system (14.1) into x(t) ˙ = Ax(t) + Bx(t − h) + = Ax(t) +
m
m
di fi (σi (t))
i=1
di Γi σi (t) + Bx(t − h) + dj fj (σj (t))
i=1 i=j
= Ax(t) +
m
di Γi cT i x(t) + Bx(t − h) + dj fj (σj (t))
i=1 i=j
= A(Γ )x(t) + Bx(t − h) + dj fj (σj (t)),
(14.17)
14.1 Absolute Stability of Nonlinear Systems with Delay and Multiple...
307
and we can transform the Lyapunov-Krasovskii functional into σi (t) t m xT (s)Qx(s)ds+2 λi Γi σi dσi V (xt ) = xT (t)P x(t)+ t−h
+2λj
fj (σj )dσj
= xT (t)P x(t) +
+2λj
m
λi Γi σi2 (t) +
t
xT (s)Qx(s)ds
t−h
i=1 i=j
σj (t)
fj (σj )dσj
0
= xT (t)P x(t)+
0
σj (t)
0
+2λj
i=1 i=j
m
λi Γi xT (t)ci cT i x(t)+
i=1 i=j
t
xT (s)Qx(s)ds
t−h
σj (t)
fj (σj )dσj
0
= xT (t)P (Γ )x(t) +
t
xT (s)Qx(s)ds + 2λj
t−h
σj (t)
0
fj (σj )dσj . (14.18)
When we calculate the derivative of V (xt ) along the solutions of system (14.1), we need to guarantee that the following expression holds for [xT (t), xT (t − h)]T = 0 and fj (·) ∈ Kj [0, kj ]: ⎡
⎤⎡
⎤T ⎡ x(t)
⎢ ⎥ ⎢ ⎥ V˙ (xt )|(14.1) = ⎢ x(t − h) ⎥ ⎣ ⎦ fj (σj (t))
⎢ ⎢ ⎢ ⎣
Φ11 (Γ ) P (Γ )B Φ13,j (Γ ) ∗
−Q
∗
∗
⎤ x(t)
⎥⎢ ⎥ ⎥⎢ ⎥ λj B T cj ⎥ ⎢ x(t − h) ⎥ < 0. ⎦⎣ ⎦ fj (σj (t)) 2λj cT j dj (14.19)
It is easy to show that # T $ x (t), xT (t − h), fjT (σj (t))]|[xT (t), xT (t − h)] = 0, fj (·) ∈ Kj [0, kj # = [xT (t), xT (t − h), fjT (σj (t))]| xT (t), xT (t − h), fjT (σj (t)) = 0, fj (·) ∈ Kj [0, kj ]} . (14.20) Since there is only one nonlinearity in system (14.17), the necessary and sufficient condition for (14.19) to hold is that there exists tΓ 0 such that
308
14. Stability of Nonlinear Time-Delay Systems
LMI (14.16) is feasible based on the S-procedure. This completes the proof.
Remark 14.1.2. Theorem 14.1.3 contains a necessary and sufficient condition, while Theorem 14.1.1 contains only a sufficient condition. For both of them, the values of the free parameters in the Lyapunov-Krasovskii functionals can be obtained by solving LMIs. In contrast, the criterion derived by Gan et al. [16] is only an existence condition with nothing to solve. Moreover, it cannot be extended to systems with time-varying structured uncertainties. 14.1.3 Systems with Time-Varying Structured Uncertainties In this subsection, we use Lemma 2.6.2 to extend the above theorems on nominal systems to systems with time-varying structured uncertainties. Theorem 14.1.4. Consider system (14.3). It is robustly absolutely stable in the sector bounded by K = diag{k1 , k2 , · · · , km } if there exist matrices P > 0, Q 0, Λ = diag{λ1 , λ2 , · · · , λm } 0, and T = diag{t1 , t2 , · · · , tm } 0, and a scalar ε > 0 such that the following LMI holds: ⎡ ⎤ Ψ11 P B + εEaT Eb Ψ13 PH ⎢ ⎥ ⎢ ⎥ ⎢ ∗ −Q + εEbT Eb B T CΛ + εEbT Ed 0 ⎥ ⎢ ⎥ < 0, (14.21) ⎢ ⎥ ⎢ ∗ ΛC T H ⎥ ∗ Ψ33 ⎣ ⎦ ∗ ∗ ∗ −εI where Ψ11 = AT P + P A + Q + εEaT Ea , Ψ13 = P D + AT CΛ + CKT + εEaT Ed , Ψ33 = ΛC T D + DT CΛ − 2T + εEdT Ed . This condition is also a necessary one when m = 1. Theorem 14.1.5. Consider system (14.3). A necessary and sufficient condition for the existence of a Lyapunov-Krasovskii functional, V (xt ), with the form (14.6) that satisfies inequality (14.7) and ensures the robust absolute stability of (14.3) in the sector bounded by K = diag{k1 , k2 , · · · , km } is that, for any Γ ∈ Dj1∼m , j = 1, 2, · · · , m, there exist matrices P > 0 and Q 0, and scalars tΓ 0, λi 0, i = 1, 2, · · · , m, and εΓ > 0 such that the following LMI holds:
14.1 Absolute Stability of Nonlinear Systems with Delay and Multiple...
⎡ ⎢ ⎢ ⎢ ˆ j (Γ ) = ⎢ G ⎢ ⎢ ⎣
Φˆ11 (Γ ) ∗
Φˆ12 (Γ )
Φˆ13,j (Γ )
−Q + εΓ EbT Eb λj B T cj + εΓ EbT Edj
∗
∗
Φˆ33,j (Γ )
∗
∗
∗
309
⎤ P (Γ )H
⎥ ⎥ ⎥ ⎥ < 0, ⎥ ⎥ λj cT j H ⎦ −εΓ I 0
(14.22) where Φˆ11 (Γ ) = Φ11 (Γ ) + εΓ EaT (Γ )Ea (Γ ), Φˆ12 (Γ ) = P (Γ )B + εΓ EaT (Γ )Eb , Φˆ13,j (Γ ) = Φ13,j (Γ ) + tΓ kj cj + εΓ EaT (Γ )Edj , T Φˆ33,j (Γ ) = 2λj cT j dj − 2tΓ + εΓ Edj Edj , T T Ea (Γ ) = (Ea + Ed Γ C ) (Ea + Ed Γ C T ), and Φ11 (Γ ) and Φ13,j (Γ ) are defined in (14.16). Proof. For simplicity, let ¯ = B + ΔB(t), D ¯ = D + ΔD(t), A(Γ ¯ ) = A¯ + DΓ ¯ C. A¯ = A + ΔA(t), B ¯ From Theorem 14.1.3, we know that Also, let d¯j be the jth column of D. condition (14.16) for system (14.3), which has time-varying structured uncertainties, is equivalent to the statement that there exist a matrix P > 0, scalars λj 0, j = 1, 2, · · · , m and tΓ 0, and a matrix Γ ∈ Dj1∼m such that the following LMI holds: ⎡ ⎤ ¯ Φ¯13,j (Γ ) + tΓ kj cj Φ¯11 (Γ ) P (Γ )B ⎢ ⎥ ⎥ ¯ j (Γ ) = ⎢ ¯ T cj (14.23) G ⎢ ∗ ⎥ < 0, −Q λj B ⎣ ⎦ ¯ ∗ ∗ 2λj cT j dj − 2tΓ where ¯ ) + Q, Φ¯11 (Γ ) = A¯T (Γ )P (Γ ) + P (Γ )A(Γ T ¯ ¯ ¯ Φ13,j (Γ ) = P (Γ )dj + λj A (Γ )cj . ¯ ), B(Γ ¯ ), and d¯j in (14.23) with A(Γ ) + HF (t)Ea (Γ ), B + Replacing A(Γ ¯ j (Γ ) as HF (t)Eb (Γ ), and dj + HF (t)Ebj , respectively, allows us to write G ⎡ ⎤ P (Γ )H ⎢ ⎥ ⎢ ⎥ ¯ Gj (Γ ) = Gj (Γ ) + ⎢ ⎥ F (t) Ea (Γ ) Eb Edj 0 ⎣ ⎦ λj cT H j
310
14. Stability of Nonlinear Time-Delay Systems
⎡
EaT (Γ )
⎢ ⎢ + ⎢ EbT ⎣ T Edj
⎤ ⎥ ⎥ T ⎥ F (t) H T P (Γ ) 0 λj H T cj , ⎦
(14.24)
where Gj (Γ ) is defined in (14.16). ¯ j (Γ ) < 0 if and only if From the Schur complement and Lemma 2.6.2, G LMI (14.22) holds. This completes the proof.
14.1.4 Numerical Examples Example 14.1.1. Consider nominal system (14.1) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ −1 0 −0.5 −0.1 0 −1 ⎦, B = ⎣ ⎦, D = ⎣ ⎦ , C = I, A=⎣ 1 −2 0.1 −0.5 −1 0 k1 = 1, k2 = 5. Since m = 2, D11∼2 = {diag {0, 0}} , D21∼2 = {diag {0, 0}, diag {k1 , 0}} . Solving LMI (14.16) yields the following parameters for the LyapunovKrasovskii functional: ⎡ ⎤ ⎡ ⎤ 208.7487 −97.2103 142.9228 −135.6343 ⎦, Q = ⎣ ⎦, P =⎣ −97.2103 340.3044 −135.6343 652.5408 λ1 = 5.3309, λ2 = 413.8239. Thus, system (14.1) is absolutely stable. Moreover, if the sector of one nonlinearity is set to k1 = 1, the maximum sector bound for the other nonlinearity is k2 = 5.49. In addition, solving LMI (14.16) yields ⎡ ⎤ ⎡ ⎤ 1.5496 −1.9397 2.1183 −2.1798 ⎦ × 104 , ⎦ × 104 , Q = ⎣ P =⎣ −1.9397 7.9711 −2.1798 7.6664 λ1 = 4.5653, λ2 = 3.4694 × 104 . Thus, the system is still absolutely stable. From Theorem 14.1.3, there is no Lyapunov-Krasovskii functional in the extended Lur’e form that guarantees
14.2 Absolute Stability of Nonlinear Systems with Time-Varying Delay
311
the absolute stability of system (14.1) if one of the nonlinearities exceeds the sector bounded by k1 = 1 and k2 = 5.50. Moreover, LMI (14.8) is infeasible for k1 = 1 and k2 = 4.63. The fact that Theorem 14.1.1 directly uses the S-procedure to examine absolute stability indicates the conservativeness of this method. A final point here is that the maximum bound on k1 can be obtained in a similar way by setting k2 to a particular value. Example 14.1.2. Consider system (14.3), which has time-varying structured uncertainties. Let A, B, C, and D be the same as in Example 14.1.1; and let the uncertainties ΔA(t), ΔB(t), and ΔD(t) have the form (14.4) with ⎡ H=⎣
⎤ 1 0 0 1
⎡
⎦ , Ea = ⎣
⎤ 0.2
0
0
0.2
⎡
⎦ , Eb = ⎣
⎤ 0.05
0
0
0.05
⎡
⎦ , Ed = ⎣
⎤ 0.05
0
0
0.05
⎦.
Assume that k1 = 1 and k2 = 2.23. Solving LMI (14.22) produces ⎡ ⎤ ⎡ ⎤ 16.4678 −9.3711 9.2568 −6.1537 ⎦, Q = ⎣ ⎦, P =⎣ −9.3711 29.0463 −6.1537 27.4173 λ1 = 0.3889, λ2 = 28.5416. Thus, system (14.3) is robustly absolutely stable. Moreover, LMI (14.21) is infeasible for k1 = 1 and k2 = 2.09. The results are conservative because Theorem 14.1.4 directly uses the S-procedure to deal with nonlinearities.
14.2 Absolute Stability of Nonlinear Systems with Time-Varying Delay Section 14.1 presented necessary and sufficient conditions for the existence of a Lyapunov-Krasovskii functional in the extended Lur’e form with a negative definite derivative that guarantees the absolute stability of a Lur’e control system with multiple nonlinearities and a delay. They are delay-independent, which means that they are conservative (as was pointed out in Chapters 1 and 3), especially when the delay is small. So, it is important to investigate delay-dependent conditions for the absolute stability of Lur’e control systems with a delay.
312
14. Stability of Nonlinear Time-Delay Systems
14.2.1 Problem Formulation Consider the following nominal Lur’e control system with multiple nonlinearities and a delay: ⎧ ⎨ x(t) ˙ = Ax(t) + Bx(t − d(t)) + Df (σ(t)), (14.25) ⎩ σ(t) = C T x(t), where x(t) = [x1 (t), x2 (t), · · · , xn (t)]T is the state vector; A ∈ Rn×n , B ∈ Rn×n , C ∈ Rn×m , and D ∈ Rn×m ; σ(t) = [σ1 (t), σ2 (t), · · · , σm (t)]T ; f (σ(t)) = [f1 (σ1 (t)), f2 (σ2 (t)), · · · , fm (σm (t))]T is a nonlinear function; fj (·) satisfies either the finite sector condition # $ fj (·) ∈ Kj [0, kj ] = fj (σj )|fj (0) = 0, 0 σj fj (σj ) kj σj2 (σj = 0) , j = 1, 2, · · · , m, (14.26) where 0 < kj < +∞, j = 1, 2, · · · , m, or the infinite sector condition fj (·) ∈ Kj [0, ∞] = {fj (σj )|fj (0) = 0, σj fj (σj ) > 0(σj = 0)} , j = 1, 2, · · · , m; (14.27) and the delay, d(t), is a time-varying continuous function. In this section, the delay is assumed to satisfy one or both of the following conditions: 0 d(t) h, ˙ μ. d(t)
(14.28) (14.29)
where h and μ are constants. We also consider a system with time-varying structured uncertainties: ⎧ ⎨ x(t) ˙ = (A+ΔA(t))x(t)+(B +ΔB(t))x(t−d(t))+(D+ΔD(t))f (σ(t)), ⎩ σ(t) = C T x(t). (14.30) The uncertainties are assumed to have the form (14.4).
14.2 Absolute Stability of Nonlinear Systems with Time-Varying Delay
313
14.2.2 Nominal Systems First, we use the FWM approach in combination with the S-procedure to derive a delay-dependent absolute stability condition. Theorem 14.2.1. Consider nominal system (14.25) with a delay, d(t), that satisfies both (14.28) and (14.29). Given scalars h > 0 and μ, the system is absolutely stable in the finite⎡ sector (14.26) ⎤ if there exist matrices P > 0, X X12 X13 ⎢ 11 ⎥ ⎢ ⎥ Q 0, Z > 0, and X = ⎢ ∗ X22 X23 ⎥ 0, and any appropriately ⎣ ⎦ ∗ ∗ X33 dimensioned matrices Ni , i = 1, 2, 3 such that the following LMIs hold: ⎤ ⎡ Φ11 Φ12 Φ13 + CKT hAT Z ⎥ ⎢ ⎥ ⎢ ⎢ ∗ Φ22 Φ23 hB T Z ⎥ ⎥ < 0, ⎢ (14.31) Φ=⎢ T ⎥ ⎢ ∗ ∗ Φ33 − 2T hD Z ⎥ ⎦ ⎣ ∗ ∗ ∗ −hZ ⎡
⎤ X11 X12 X13 N1
⎢ ⎢ ⎢ ∗ Ψ =⎢ ⎢ ⎢ ∗ ⎣ ∗
⎥ ⎥ X22 X23 N2 ⎥ ⎥ 0, ⎥ ∗ X33 N3 ⎥ ⎦ ∗ ∗ Z
(14.32)
where K = diag{k1 , k2 , · · · , km }, Φ11 = AT P + P A + Q + N1 + N1T + hX11 , Φ12 = P B + N2T − N1 + hX12 , Φ13 = P D + AT CΛ + N3T + hX13 , Φ22 = −(1 − μ)Q − N2 − N2T + hX22 , Φ23 = B T CΛ − N3T + hX23 , Φ33 = ΛC T D + DT CΛ + hX33 . Proof. Choose the Lyapunov-Krasovskii functional candidate to be σj t m V (xt ) = xT (t)P x(t) + xT (s)Qx(s)ds + 2 λj fj (σj )dσj
0
t−d(t) t
+ −h
t+θ
x˙ T (s)Z x(s)dsdθ, ˙
j=1
0
(14.33)
314
14. Stability of Nonlinear Time-Delay Systems
where P > 0, Q 0, Z > 0, and λj 0, j = 1, 2, · · · , m are to be determined. From the Newton-Leibnitz formula, the following equation holds for any matrices Ni , i = 1, 2, 3 with appropriate dimensions: 2 xT (t)N1 + xT (t − d(t))N2 + f T (σ)N3 t × x(t) − x(s)ds ˙ − x(t − d(t)) = 0. (14.34) t−d(t)
⎤
⎡ X11 X12 X13
⎥ ⎢ ⎥ ⎢ On the other hand, for any matrix X = ⎢ ∗ X22 X23 ⎥ 0, the following ⎦ ⎣ ∗ ∗ X33 holds: t η1T (t)Xη1 (t)ds 0, (14.35) hη1T (t)Xη1 (t) − t−d(t)
where η1 (t) = [xT (t), xT (t − d(t)), f T (σ)]T . Calculating the derivative of V (xt ) along the solutions of system (14.25) and using (14.34) and (14.35) yield t T ˙ η2T (t, s)Ψ η2 (t, s)ds, (14.36) V (xt ) η1 (t)Ξη1 (t) − t−d(t)
where T T T T T η2 (t, s) ⎡ = [x (t), x (t − d(t)), f (σ), x˙ (s)] ,
⎢ ⎢ Ξ=⎢ ⎣
Φ11 + hAT ZA Φ12 + hAT ZB Φ13 + hAT ZD
⎤
⎥ ⎥ Φ22 + hB T ZB Φ23 + hB T ZD ⎥. ⎦ ∗ Φ33 + hDT ZD
∗ ∗
Equation (14.26) can be written as fj (σj )(fj (σj ) − kj cT j x) 0.
(14.37)
Applying the S-procedure, we find that V˙ (xt ) < 0 for xT (t), xT (t − d(t)) = 0 and condition (14.37) if there exists T = diag {t1 , t2 , · · · , tm } 0 such that t η1T (t)Ξη1 (t) − η2T (t, s)Ψ η2 (t, s)ds −2
t−d(t) m
tj fj (σj (t))(fj (σj (t)) − kj cT j x(t)) < 0.
j=1
(14.38)
14.2 Absolute Stability of Nonlinear Systems with Time-Varying Delay
Moreover, (14.38) ⎡ 0 ⎢ ⎢ Ξ +⎢ 0 ⎣ T KC T
holds only if Ψ 0 and ⎤ 0 CKT ⎥ ⎥ 0 0 ⎥ < 0. ⎦ 0 −2T
315
(14.39)
Under these conditions, according to the Schur complement, (14.39) is equivalent to Φ < 0. This completes the proof.
On the other hand, we can write the infinite sector condition (14.27) as −fj (σj )cT j x(t) 0, j = 1, 2, · · · , m.
(14.40)
That gives us a theorem that is similar to Theorem 14.2.1. Theorem 14.2.2. Consider nominal system (14.25) with a delay, d(t), that satisfies both (14.28) and (14.29). Given scalars h > 0 and μ, the system is absolutely stable in the infinite sector (14.27) if there exist matrices P > 0, ⎡ ⎤ X11 X12 X13 ⎢ ⎥ ⎢ ⎥ Q 0, Z > 0, and X = ⎢ ∗ X22 X23 ⎥ 0, and any appropriately ⎣ ⎦ ∗ ∗ X33 dimensioned matrices Ni , i = 1, 2, 3 such that LMI (14.32) and the following one hold: ⎡ ⎤ Φ11 Φ12 Φ13 + CT hAT Z ⎢ ⎥ ⎢ ⎥ ⎢ ∗ Φ22 Φ23 hB T Z ⎥ ⎢ ⎥ < 0, (14.41) ⎢ ⎥ ⎢ ∗ ∗ Φ33 hDT Z ⎥ ⎣ ⎦ ∗ ∗ ∗ −hZ where Φij , i = 1, 2, 3, i j 3 are defined in (14.31). Note that the above two theorems are delay- and rate-dependent absolute stability conditions. We can make them delay-dependent and rate independent by setting Q = 0, as shown next. Corollary 14.2.1. Consider nominal system (14.25) with a delay, d(t), that satisfies (14.28), but not necessarily (14.29). Given a scalar h > 0, the system is absolutely stable in the finite sector (14.26) if there exist matrices P > 0,
316
14. Stability of Nonlinear Time-Delay Systems
⎡
⎤ X11 X12 X13
⎢ ⎥ ⎢ ⎥ Z > 0, and X = ⎢ ∗ X22 X23 ⎥ 0, and any appropriately dimensioned ⎣ ⎦ ∗ ∗ X33 matrices Ni , i = 1, 2, 3 such that LMI (14.32) and the following one hold: ⎡ ⎤ Φˆ11 Φ12 Φ13 + CKT hAT Z ⎢ ⎥ ⎢ ⎥ ⎢ ∗ Φˆ22 Φ23 hB T Z ⎥ ⎢ ⎥ < 0, (14.42) ⎢ T ⎥ ⎢ ∗ ∗ Φ33 − 2T hD Z ⎥ ⎣ ⎦ ∗ ∗ ∗ −hZ where Φˆ11 = AT P + P A + N1 + N1T + hX11 , Φˆ22 = −N2 − N2T + hX22 , and K, Φ12 , Φ13 , Φ23 , and Φ33 are defined in (14.31). Corollary 14.2.2. Consider nominal system (14.25) with a delay, d(t), that satisfies (14.28), but not necessarily (14.29). Given a scalar h > 0, the system is absolutely stable⎡ in the infinite sector (14.27) if there exist matrices P > 0, ⎤ X X12 X13 ⎢ 11 ⎥ ⎢ ⎥ Z > 0, and X = ⎢ ∗ X22 X23 ⎥ 0, and any appropriately dimensioned ⎣ ⎦ ∗ ∗ X33 matrices Ni , i = 1, 2, 3 such that LMI (14.32) and the following one hold: ⎡ ⎤ Φˆ11 Φ12 Φ13 + CT hAT Z ⎢ ⎥ ⎢ ⎥ ⎢ ∗ Φˆ22 Φ23 hB T Z ⎥ ⎢ ⎥ < 0, (14.43) ⎢ ⎥ ⎢ ∗ ∗ Φ33 hDT Z ⎥ ⎣ ⎦ ∗ ∗ ∗ −hZ where K, Φ12 , Φ13 , Φ23 , and Φ33 are defined in (14.31); and Φˆ11 and Φˆ22 are defined in (14.42). 14.2.3 Systems with Time-Varying Structured Uncertainties We can use Lemma 2.6.2 to extend all the conditions in Subsection 14.2.2 to systems with time-varying structured uncertainties. We have the following corollary to Theorem 14.2.1.
14.2 Absolute Stability of Nonlinear Systems with Time-Varying Delay
317
Corollary 14.2.3. Consider system (14.30) with a delay, d(t), that satisfies both (14.28) and (14.29). Given scalars h > 0 and μ, the system is absolutely stable in the if there exist matrices P > 0, Q 0, Z > 0, ⎡ finite sector (14.26) ⎤ X X12 X13 ⎢ 11 ⎥ ⎢ ⎥ and X = ⎢ ∗ X22 X23 ⎥ 0, any appropriately dimensioned matrices ⎣ ⎦ ∗ ∗ X33 Ni , i = 1, 2, 3, and a scalar ε > 0 such that LMI (14.32) and the following one hold: ⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
Φ11 +εEaT Ea Φ12 +εEaT Eb Φ13 +CKT +εEaT Ed hAT Z ∗
Φ22 + εEbT Eb
Φ23 + εEbT Ed
∗
∗
Φ33 −2T +εEdT Ed
∗
∗
∗
∗
∗
∗
⎤ PH
⎥ ⎥ ⎥ ⎥ ⎥ T T hD Z ΛC H ⎥ < 0, ⎥ ⎥ −hZ hZH ⎥ ⎦ ∗ −εI hB T Z
0
(14.44) where Φij , i = 1, 2, 3, i j 3 and K are defined in (14.31). Theorem 14.2.2 and Corollaries 14.2.1 and 14.2.2 can be extended in a similar way, although we do not give the details here. 14.2.4 Numerical Example Example 14.2.1. Consider nominal system (14.25) with ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ −1 0 −0.5 −0.1 0 −1 ⎦, B = ⎣ ⎦, D = ⎣ ⎦ , C = I. A=⎣ 1 −2 0.1 −0.5 −1 0 If the delay, d(t), is constant, which makes μ = 0, then this example turns into Example 14.1.1 above. From Theorem 14.1.3, we know that no Lyapunov-Krasovskii functional in the extended Lur’e form that guarantees the absolute stability of system (14.1) exists if any of the nonlinearities exceeds the sector bounded by k1 = 1 and k2 = 5.50. However, this does not mean the system is not absolutely stable. Since the theorems in Section 14.1 are delay-independent, those absolute-stability conditions hold for any arbitrary delay; and when the size of the delay is small, they are very conservative. If we use Theorem 14.2.1 for k1 = 1 and k2 = 6.0, we find that system (14.1)
318
14. Stability of Nonlinear Time-Delay Systems
is absolutely stable for 0 h 1.2238. This conclusion cannot be reached using the theorems in Section 14.1. This case can be extended to system (14.30), which has time-varying structured uncertainties. Let the uncertainties ΔA(t), ΔB(t), and ΔD(t) be the same as in Example 14.1.2. For Example 14.1.2, a Lyapunov-Krasovskii functional in the extended Lur’e form that guarantees the robust absolute stability of system (14.3) exists when the sector is bounded by k1 = 1 and k2 = 2.23. So, when k1 = 1 and k2 = 3, the conditions in Theorem 14.1.5 do not hold. However, from Corollary 14.2.3, we know that the system is robustly absolutely stable for 0 h 1.5789.
14.3 Stability of Systems with Interval Delay and Nonlinear Perturbations This section uses the IFWM approach and an augmented Lyapunov-Krasovskii functional to examine the robust stability of systems with nonlinear perturbations and a time-varying interval delay. 14.3.1 Problem Formulation Consider the following system with nonlinear perturbations and a timevarying delay: ⎧ ⎨ x(t) ˙ = Ax(t) + Ad x(t − d(t)) + f (x(t), t) + g(x(t − d(t)), t), t > 0, ⎩ x(t) = φ(t), t ∈ [−h , 0], 2
(14.45) where x(t) ∈ Rn is the state vector; A, Ad ∈ Rn×n are constant matrices; the delay, d(t), is a time-varying continuous function; and the initial condition, φ(t), is a continuously differentiable initial function on t ∈ [−h2 , 0]. In this section, the delay is assumed to satisfy one or both of the following conditions: 0 h1 d(t) h2 , ˙ μ, d(t) where h1 , h2 , and μ are constants.
(14.46) (14.47)
14.3 Stability of Systems with Interval Delay and Nonlinear Perturbations
319
The time-varying nonlinear functions f ∈ Rn and g ∈ Rn are unknown and represent the perturbations of the current state, x(t), and the delayed state, x(t − d(t)), respectively, of the system. We assume that the bounds on f (·, ·) and g(·, ·) can be written as f (x(t), t) αx(t),
(14.48)
g(x(t − d(t)), t) βx(t − d(t)),
(14.49)
where α 0, β 0, f (0, t) = 0, and g(0, t) = 0. 14.3.2 Stability Results In this subsection, we first choose the following Lyapunov-Krasovskii functional candidate: V (xt ) = V1 (xt ) + V2 (xt ) + V3 (xt ),
(14.50)
where V1 (xt ) = xT (t)P x(t), t t t T T x (s)Q1 x(s)ds+ x (s)Q2 x(s)ds+ xT (s)Q3 x(s)ds, V2 (xt ) = t−h1 t−h2 t−d(t) 0 t −h1 t V3 (xt ) = x˙ T (s)Z1 x(s)dsdθ ˙ + x˙ T (s)Z2 x(s)dsdθ, ˙ −h2
−h2
t+θ
t+θ
and P > 0, Qi 0, i = 1, 2, 3, and Zj > 0, j = 1, 2 are to be determined. This functional is different from the ones in [32,41–43] in two ways. First, −h t the term −h21 t+θ x˙ T (s)Z2 x(s)dsdθ ˙ includes information on the lower bound on the delay. Second, many of the functionals in those reports contain the t term t−d(t) xT (s)Q3 x(s)ds; but V2 (xt ) contains more information. On the other hand, although the functionals in the reports just cited 0 t t contain the term −h2 t+θ x˙ T (s)Z1 x(s)dsdθ, ˙ the term − t−h2 x˙ T (s)Z1 x(s)ds ˙ t T ˙ Note that in the derivative of V3 (xt ) is increased to − t−d(t) x˙ (s) Z1 x(s)ds. −
t
t−h2
x˙ T (s)Z1 x(s)ds ˙ =−
t
x˙ T (s)Z1 x(s)ds− ˙
t−d(t)
t−d(t)
x˙ T (s)Z1 x(s)ds. ˙
t−h2
(14.51) t−d(t)
Moreover, those reports ignore the term − t−h2 x˙ T (s)Z1 x(s)ds ˙ in V˙ 3 (xt ). This may lead to considerable conservativeness. In contrast, we retain that term; and we also take the important characteristic h2 = (h2 − d(t)) + d(t) into account in the estimation of the upper bound on V˙ 3 (xt ). Thus, we have the following theorem.
320
14. Stability of Nonlinear Time-Delay Systems
Theorem 14.3.1. Consider system (14.45) with a delay, d(t), that satisfies both (14.46) and (14.47). Given scalars h2 h1 0 and μ, the system is robustly stable if there exist matrices P > 0, Qi 0, i = 1, 2, 3, Zj > 0, j = 1, 2, X 0, and Y 0, any appropriately dimensioned matrices N , S, and M , and scalars εj > 0, j = 1, 2 such that the following LMIs hold: ⎤ ⎡ Φ Ξ TΛ ⎦ < 0, ⎣ (14.52) ∗ −Λ ⎡ Ψ1 = ⎣
⎤ X N ∗ Z1
⎡ Ψ2 = ⎣
⎦ 0, ⎤
Y M ∗ Z2
⎦ 0,
⎡ Ψ3 = ⎣
(14.53)
(14.54) ⎤
X +Y
S
∗
Z1 + Z2
⎦ 0,
(14.55)
where Φ = Φ1 + Φ2 + ΦT 2 + h2 X + (h2 − h1 )Y , ⎤ ⎡ Φ11 P Ad 0 0 P P ⎥ ⎢ ⎥ ⎢ 0 0 0 ⎥ ⎢ ∗ Φ22 0 ⎥ ⎢ ⎥ ⎢ ⎢ ∗ ∗ Φ33 0 0 0 ⎥ ⎥, ⎢ Φ1 = ⎢ ⎥ ⎢ ∗ 0 0 ⎥ ∗ ∗ Φ44 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ∗ ∗ ∗ ∗ −ε I 0 1 ⎦ ⎣ ∗ ∗ ∗ ∗ ∗ −ε2 I Φ2 = [N S −N −M M −S 0 0], Φ11 = P A + AT P + Q1 + Q2 + Q3 + ε1 α2 I, Φ22 = −(1 − μ)Q3 + ε2 β 2 I, Φ33 = −Q1 , Φ44 = −Q2 , Ξ = [A Ad 0 0 I I], Λ = h2 Z1 + (h2 − h1 )Z2 . Proof. The derivative of V (xt ) along the solutions of system (14.45) is
14.3 Stability of Systems with Interval Delay and Nonlinear Perturbations
321
˙ V˙ 1 (xt ) = 2xT (t)P x(t) = 2xT (t)P [Ax(t) + Ad x(t − d(t)) + f (x(t), t) + g(x(t − d(t)), t)] , (14.56) V˙ 2 (xt ) = xT (t) [Q1 + Q2 + Q3 ] x(t) − xT (t − h1 )Q1 x(t − h1 ) T ˙ −xT (t − h2 )Q2 x(t − h2 ) − (1 − d(t))x (t − d(t))Q3 x(t − d(t))
xT (t) [Q1 + Q2 + Q3 ] x(t) − xT (t − h1 )Q1 x(t − h1 ) −xT (t − h2 )Q2 x(t − h2 ) − (1 − μ)xT (t − d(t))Q3 x(t − d(t)). (14.57) From (14.51) and −
t−h1
T
x˙ (s)Z2 x(s)ds ˙ =−
t−h2
t−d(t)
T
V˙ 3 (xt ) = h2 x˙ T (t)Z1 x(t) ˙ −
t
x˙ T (s)Z2 x(s)ds, ˙
t−d(t)
t−h2
we obtain
t−h1
x˙ (s)Z2 x(s)ds− ˙
x˙ T (s)Z1 x(s)ds ˙ + (h2 − h1 )x˙ T (t)Z2 x(t) ˙
t−h2
t−h1
−
x˙ T (s)Z2 x(s)ds ˙
t−h2
˙ = x˙ T (t) [h2 Z1 + (h2 − h1 )Z2 ] x(t) t−d(t) t x˙ T (s)Z1 x(s)ds ˙ − x˙ T (s)(Z1 + Z2 )x(s)ds ˙ − t−d(t)
t−h1
−
t−h2
x˙ T (s)Z2 x(s)ds. ˙
(14.58)
t−d(t)
From the Newton-Leibnitz formula, the following equations are true for any matrices N , M , and S with appropriate dimensions: 0 = 2ζ1T (t)N x(t) − x(t − d(t)) −
t
x(s)ds ˙ ,
0 = 2ζ1T (t)M x(t − h1 ) − x(t − d(t)) − 0 = 2ζ1T (t)S x(t − d(t)) − x(t − h2 ) − where
(14.59)
t−d(t)
t−h1
x(s)ds ˙ ,
t−d(t)
t−d(t)
x(s)ds ˙ ,
t−h2
(14.60)
(14.61)
322
14. Stability of Nonlinear Time-Delay Systems
ζ1 (t) = xT (t), xT (t − d(t)), xT (t − h1 ), xT (t − h2 ),
T f T (x(t), t), g T (x(t − d(t)), t) .
On the other hand, the following equalities are true for any matrices X 0 and Y 0:
t ζ1T (t)Xζ1 (t)ds − ζ1T (t)Xζ1 (t)ds t−h2 t−h2 t−d(t) T T ζ1 (t)Xζ1 (t)ds − = h2 ζ1 (t)Xζ1 (t) − t
0=
t−h2
t
t−d(t)
ζ1T (t)Xζ1 (t)ds, (14.62)
t−h1
0= t−h2
= (h2 −
ζ1T (t)Y ζ1 (t)ds −
h1 )ζ1T (t)Y ζ1 (t)
−
t−h1
ζ1T (t)Y ζ1 (t)ds
2 t−h t−d(t)
t−h2
ζ1T (t)Y ζ1 (t)ds −
t−h1
t−d(t)
ζ1T (t)Y ζ1 (t)ds. (14.63)
Furthermore, it follows from (14.48) and (14.49) that, for any ε1 ≥ 0 and ε2 ≥ 0,

0 \le \varepsilon_1\left[\alpha^2 x^T(t)x(t) - f^T(x(t),t)f(x(t),t)\right]   (14.64)

and

0 \le \varepsilon_2\left[\beta^2 x^T(t-d(t))x(t-d(t)) - g^T(x(t-d(t)),t)g(x(t-d(t)),t)\right].   (14.65)

Adding the right sides of (14.59)-(14.65) to V̇(xt) yields

\dot V(x_t) \le \zeta_1^T(t)\left[\Phi + \Xi^T(h_2 Z_1 + (h_2-h_1)Z_2)\Xi\right]\zeta_1(t)
- \int_{t-d(t)}^{t}\zeta_2^T(t,s)\Psi_1\zeta_2(t,s)\,ds - \int_{t-d(t)}^{t-h_1}\zeta_2^T(t,s)\Psi_2\zeta_2(t,s)\,ds - \int_{t-h_2}^{t-d(t)}\zeta_2^T(t,s)\Psi_3\zeta_2(t,s)\,ds,   (14.66)
where ζ2(t, s) = [ζ1^T(t), ẋ^T(s)]^T. Thus, if Φ + Ξ^T(h2 Z1 + (h2 − h1)Z2)Ξ < 0, which is equivalent to (14.52) by the Schur complement, and if Ψi ≥ 0, i = 1, 2, 3, then V̇(xt) < −ε‖x(t)‖^2 for a sufficiently small ε > 0, which means that system (14.45) is robustly stable. This completes the proof.
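The Schur-complement step used at the end of the proof is easy to check numerically. The short Python script below is not part of the original analysis; it is a minimal sketch (our own helper names, NumPy only) that generates a random Λ > 0 and verifies that Φ + Ξ^T Λ Ξ < 0 holds exactly when the block matrix of the form in (14.52) is negative definite.

import numpy as np

def is_neg_def(M, tol=1e-8):
    """True if the symmetric part of M is negative definite."""
    return np.max(np.linalg.eigvalsh((M + M.T) / 2)) < -tol

rng = np.random.default_rng(0)
n, m = 4, 2                          # Phi is n x n, Lambda is m x m

L = rng.standard_normal((m, m))
Lam = L @ L.T + 0.5 * np.eye(m)      # Lambda > 0, as required for the Schur step
Xi = 0.5 * rng.standard_normal((m, n))
Phi = -10.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
Phi = (Phi + Phi.T) / 2

direct = Phi + Xi.T @ Lam @ Xi                   # test Phi + Xi^T Lam Xi < 0
block = np.block([[Phi, Xi.T @ Lam],
                  [Lam @ Xi, -Lam]])             # block form as in (14.52)

# Since Lambda > 0, the two tests are equivalent by the Schur complement.
assert is_neg_def(direct) == is_neg_def(block)
print("Phi + Xi^T Lam Xi < 0:", is_neg_def(direct), "| block LMI < 0:", is_neg_def(block))

Whatever random data is drawn, the two answers agree; this is precisely the equivalence invoked in the proof.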
Now, consider the case h1 = 0, which is discussed in [30, 32–34]. If M = 0, Y = 0, Z2 = 0, and Q1 = 0, and if some elements of N, S, and X in Theorem 14.3.1 are chosen to be zero, we obtain a new corollary.

Corollary 14.3.1. Consider system (14.45) for the case where the lower bound, h1, on the time-varying delay is zero. Given scalars h2 > 0 and μ, the system is robustly stable if there exist matrices P > 0, Qi ≥ 0, i = 2, 3, Z1 > 0, and X̃ ≥ 0, any appropriately dimensioned matrices Ñ and S̃, and scalars εj > 0, j = 1, 2, such that the following LMIs hold:

\begin{bmatrix} \tilde\Phi & h_2\tilde\Xi^T Z_1 \\ * & -h_2 Z_1 \end{bmatrix} < 0,   (14.67)

\begin{bmatrix} \tilde X & \tilde N \\ * & Z_1 \end{bmatrix} \ge 0,   (14.68)

\begin{bmatrix} \tilde X & \tilde S \\ * & Z_1 \end{bmatrix} \ge 0,   (14.69)

where

\tilde\Phi = \tilde\Phi_1 + \tilde\Phi_2 + \tilde\Phi_2^T + h_2\tilde X,

\tilde\Phi_1 = \begin{bmatrix}
\tilde\Phi_{11} & PA_d & 0 & P & P \\
* & \tilde\Phi_{22} & 0 & 0 & 0 \\
* & * & -Q_2 & 0 & 0 \\
* & * & * & -\varepsilon_1 I & 0 \\
* & * & * & * & -\varepsilon_2 I
\end{bmatrix},

\tilde\Phi_2 = \begin{bmatrix} \tilde N & \tilde S-\tilde N & -\tilde S & 0 & 0 \end{bmatrix},

\tilde\Phi_{11} = PA + A^T P + Q_2 + Q_3 + \varepsilon_1\alpha^2 I,
\tilde\Phi_{22} = -(1-\mu)Q_3 + \varepsilon_2\beta^2 I,
\tilde\Xi = \begin{bmatrix} A & A_d & 0 & I & I \end{bmatrix}.

Remark 14.3.1. If S̃ = 0,

\tilde N = \begin{bmatrix} N_1^T & N_2^T & 0 & N_3^T & N_4^T \end{bmatrix}^T,
\tilde X = \begin{bmatrix}
X_{11} & X_{12} & 0 & X_{13} & X_{14} \\
* & X_{22} & 0 & X_{23} & X_{24} \\
* & * & 0 & 0 & 0 \\
* & * & * & X_{33} & X_{34} \\
* & * & * & * & X_{44}
\end{bmatrix},

and Q2 = εI (where ε > 0 is a sufficiently small scalar), then Corollary 14.3.1 reduces to Theorem 1 in [32].

In many cases, there is either no information on the derivative of a delay, or the derivative of a time-varying delay is greater than 1 (which means that the delay varies quickly with time). For these cases, we can use the delay-dependent and rate-independent stability criterion in the following corollary, which is for a delay that satisfies only (14.46) and which is obtained by setting Q3 = 0 in Theorem 14.3.1.

Corollary 14.3.2. Consider system (14.45) with a delay, d(t), that satisfies (14.46), but not necessarily (14.47). Given scalars h2 ≥ h1 ≥ 0, the system is robustly stable if there exist matrices P > 0, Qi ≥ 0, i = 1, 2, Zj > 0, j = 1, 2, X ≥ 0, and Y ≥ 0, any appropriately dimensioned matrices N, S, and M, and scalars εj > 0, j = 1, 2, such that LMIs (14.53)-(14.55) and the following one hold:

\begin{bmatrix} \hat\Phi & \Xi^T\Lambda \\ * & -\Lambda \end{bmatrix} < 0,   (14.70)

where

\hat\Phi = \hat\Phi_1 + \Phi_2 + \Phi_2^T + h_2 X + (h_2-h_1)Y,

\hat\Phi_1 = \begin{bmatrix}
\hat\Phi_{11} & PA_d & 0 & 0 & P & P \\
* & \hat\Phi_{22} & 0 & 0 & 0 & 0 \\
* & * & \Phi_{33} & 0 & 0 & 0 \\
* & * & * & \Phi_{44} & 0 & 0 \\
* & * & * & * & -\varepsilon_1 I & 0 \\
* & * & * & * & * & -\varepsilon_2 I
\end{bmatrix},

\hat\Phi_{11} = PA + A^T P + Q_1 + Q_2 + \varepsilon_1\alpha^2 I,
\hat\Phi_{22} = \varepsilon_2\beta^2 I,

and the other parameters are defined in Theorem 14.3.1.
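For readers who wish to experiment with criteria of this kind, the following Python sketch shows one way to pose the LMIs (14.67)-(14.69) of Corollary 14.3.1, as reconstructed above, as a semidefinite feasibility problem. It is not the authors' code: the function name corollary_14_3_1_feasible, the use of CVXPY with the SCS solver, the symmetrization helper, and the small margins standing in for strict inequalities are all our own choices.

import numpy as np
import cvxpy as cp

def sym(M):
    """Symmetrize an expression so CVXPY accepts it in a PSD/NSD constraint."""
    return (M + M.T) / 2

def corollary_14_3_1_feasible(A, Ad, alpha, beta, mu, h2, margin=1e-7):
    """Return True if LMIs (14.67)-(14.69) are (numerically) feasible for the given data."""
    n = A.shape[0]
    I, O = np.eye(n), np.zeros((n, n))

    P  = cp.Variable((n, n), symmetric=True)
    Q2 = cp.Variable((n, n), symmetric=True)
    Q3 = cp.Variable((n, n), symmetric=True)
    Z1 = cp.Variable((n, n), symmetric=True)
    Xt = cp.Variable((5 * n, 5 * n), symmetric=True)   # X~
    Nt = cp.Variable((5 * n, n))                        # N~
    St = cp.Variable((5 * n, n))                        # S~
    e1 = cp.Variable(nonneg=True)
    e2 = cp.Variable(nonneg=True)

    Phi11 = P @ A + A.T @ P + Q2 + Q3 + e1 * (alpha ** 2) * I
    Phi22 = -(1 - mu) * Q3 + e2 * (beta ** 2) * I
    Phi1 = cp.bmat([
        [Phi11,    P @ Ad, O,   P,       P      ],
        [Ad.T @ P, Phi22,  O,   O,       O      ],
        [O,        O,      -Q2, O,       O      ],
        [P,        O,      O,   -e1 * I, O      ],
        [P,        O,      O,   O,       -e2 * I],
    ])
    Zro = np.zeros((5 * n, n))
    Phi2 = cp.hstack([Nt, St - Nt, -St, Zro, Zro])      # [N~  S~-N~  -S~  0  0]
    Phi = Phi1 + Phi2 + Phi2.T + h2 * Xt

    Xi = np.hstack([A, Ad, O, I, I])                     # Xi~ = [A  Ad  0  I  I]

    lmi67 = cp.bmat([[Phi,          h2 * Xi.T @ Z1],
                     [h2 * Z1 @ Xi, -h2 * Z1      ]])
    lmi68 = cp.bmat([[Xt, Nt], [Nt.T, Z1]])
    lmi69 = cp.bmat([[Xt, St], [St.T, Z1]])

    cons = [P >> margin * I, Z1 >> margin * I,
            Q2 >> 0, Q3 >> 0, Xt >> 0,
            e1 >= margin, e2 >= margin,
            sym(lmi67) << -margin * np.eye(6 * n),
            sym(lmi68) >> 0,
            sym(lmi69) >> 0]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

if __name__ == "__main__":
    # Data of Example 14.3.1 (Subsection 14.3.4), with alpha = 0, beta = 0.1, mu = 0.5.
    A  = np.array([[-1.2, 0.1], [-0.1, -1.0]])
    Ad = np.array([[-0.6, 0.7], [-1.0, -0.8]])
    print(corollary_14_3_1_feasible(A, Ad, alpha=0.0, beta=0.1, mu=0.5, h2=0.5))

For this data one would expect feasibility for modest values of h2 (compare the h1 = 0 entries of Table 14.1 below), although exact numbers depend on the solver and the chosen margins.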
14.3.3 Further Results Obtained with Augmented Lyapunov-Krasovskii Functional

The augmented Lyapunov-Krasovskii functional in [44] is applicable to neutral systems with a constant delay. In this subsection, we use an augmented Lyapunov-Krasovskii functional that takes the delay terms into account to deal with systems with a time-varying interval delay. The use of an augmented Lyapunov-Krasovskii functional requires the following restriction on the derivative of a time-varying delay:

|\dot d(t)| \le \mu.   (14.71)
Then, we have the following theorem.

Theorem 14.3.2. Consider system (14.45) with a delay, d(t), that satisfies both (14.46) and (14.47). Given scalars h2 ≥ h1 ≥ 0 and 0 < μ < 1, the system is robustly stable if there exist matrices

\bar P = \begin{bmatrix} P_{11} & P_{12} & P_{13} \\ * & P_{22} & P_{23} \\ * & * & P_{33} \end{bmatrix} > 0,\quad
\bar Q_1 \ge 0,\quad
\bar Q_i = \begin{bmatrix} \bar Q_{11}^{(i)} & \bar Q_{12}^{(i)} \\ * & \bar Q_{22}^{(i)} \end{bmatrix} \ge 0,\; i = 2, 3,\quad
\bar Z_j > 0,\; j = 1, 2,\quad U > 0,\quad \bar X \ge 0,\quad \bar Y \ge 0,

any appropriately dimensioned matrices N̄, S̄, and M̄, and scalars εj > 0, j = 1, 2, such that the following LMIs hold:

\begin{bmatrix} \bar\Phi & \bar\Xi^T\bar\Lambda_2 & \mu\bar\Lambda_3^T \\ * & -\bar\Lambda_2 & 0 \\ * & * & -\mu U \end{bmatrix} < 0,   (14.72)

\bar\Psi_1 = \begin{bmatrix} \bar X & \bar N \\ * & \bar Z_1 \end{bmatrix} \ge 0,   (14.73)

\bar\Psi_2 = \begin{bmatrix} \bar Y & \bar M \\ * & \bar Z_2 \end{bmatrix} \ge 0,   (14.74)

\bar\Psi_3 = \begin{bmatrix} \bar X+\bar Y & \bar S \\ * & \bar Z_1+\bar Z_2 \end{bmatrix} \ge 0,   (14.75)

where
\bar\Phi = \bar\Phi_1 + \bar\Phi_2 + \bar\Phi_2^T + h_2\bar X + (h_2-h_1)\bar Y,

\bar\Phi_1 = \begin{bmatrix}
\bar\Phi_{11} & \bar\Phi_{12} & 0 & A^T P_{13} & P_{12} & P_{13} & \bar\Lambda_1 & \bar\Lambda_1 \\
* & \bar\Phi_{22} & 0 & A_d^T P_{13} & \bar\Phi_{25} & P_{23} & P_{12}^T & P_{12}^T \\
* & * & -\bar Q_1 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & -\bar Q_{11}^{(2)} & P_{23}^T & P_{33}-\bar Q_{12}^{(2)} & P_{13}^T & P_{13}^T \\
* & * & * & * & \bar\Phi_{55} & 0 & 0 & 0 \\
* & * & * & * & * & -\bar Q_{22}^{(2)} & 0 & 0 \\
* & * & * & * & * & * & -\varepsilon_1 I & 0 \\
* & * & * & * & * & * & * & -\varepsilon_2 I
\end{bmatrix},

\bar\Phi_2 = \begin{bmatrix} \bar N & \bar S-\bar N-\bar M & \bar M & -\bar S & 0 & 0 & 0 & 0 \end{bmatrix},

\bar\Phi_{11} = \bar\Lambda_1 A + A^T\bar\Lambda_1^T + \bar Q_1 + \bar Q_{11}^{(2)} + \bar Q_{11}^{(3)} + \varepsilon_1\alpha^2 I,
\bar\Phi_{12} = \bar\Lambda_1 A_d + A^T P_{12},
\bar\Phi_{22} = P_{12}^T A_d + A_d^T P_{12} - (1-\mu)\bar Q_{11}^{(3)} + \varepsilon_2\beta^2 I,
\bar\Phi_{25} = P_{22} - (1-\mu)\bar Q_{12}^{(3)},
\bar\Phi_{55} = \mu U - (1-\mu)\bar Q_{22}^{(3)},
\bar\Xi = \begin{bmatrix} A & A_d & 0 & 0 & 0 & 0 & I & I \end{bmatrix},
\bar\Lambda_1 = P_{11} + \bar Q_{12}^{(2)} + \bar Q_{12}^{(3)},
\bar\Lambda_2 = h_2\bar Z_1 + (h_2-h_1)\bar Z_2 + \bar Q_{22}^{(2)} + \bar Q_{22}^{(3)},
\bar\Lambda_3 = \begin{bmatrix} P_{12}^T & P_{22} & 0 & P_{23} & 0 & 0 & 0 & 0 \end{bmatrix}.

Proof. Choose the Lyapunov-Krasovskii functional candidate to be

\bar V(x_t) = \bar V_1(x_t) + \bar V_2(x_t) + \bar V_3(x_t),   (14.76)

where

\bar V_1(x_t) = \xi_0^T(t)\bar P\xi_0(t),

\bar V_2(x_t) = \int_{t-h_1}^{t} x^T(s)\bar Q_1 x(s)\,ds
+ \int_{t-h_2}^{t} \begin{bmatrix} x(s) \\ \dot x(s) \end{bmatrix}^T \bar Q_2 \begin{bmatrix} x(s) \\ \dot x(s) \end{bmatrix} ds
+ \int_{t-d(t)}^{t} \begin{bmatrix} x(s) \\ \dot x(s) \end{bmatrix}^T \bar Q_3 \begin{bmatrix} x(s) \\ \dot x(s) \end{bmatrix} ds,

\bar V_3(x_t) = \int_{-h_2}^{0}\int_{t+\theta}^{t} \dot x^T(s)\bar Z_1\dot x(s)\,ds\,d\theta
+ \int_{-h_2}^{-h_1}\int_{t+\theta}^{t} \dot x^T(s)\bar Z_2\dot x(s)\,ds\,d\theta.
In these equations, \bar P = \begin{bmatrix} P_{11} & P_{12} & P_{13} \\ * & P_{22} & P_{23} \\ * & * & P_{33} \end{bmatrix} > 0, \bar Q_1 \ge 0, \bar Q_i = \begin{bmatrix} \bar Q_{11}^{(i)} & \bar Q_{12}^{(i)} \\ * & \bar Q_{22}^{(i)} \end{bmatrix} \ge 0, i = 2, 3, and \bar Z_j > 0, j = 1, 2, are to be determined; and ξ0(t) = [x^T(t), x^T(t − d(t)), x^T(t − h2)]^T. Note that the derivative of V̄1(xt) is

\dot{\bar V}_1(x_t) = 2\xi_0^T(t)\bar P \begin{bmatrix} \dot x(t) \\ (1-\dot d(t))\dot x(t-d(t)) \\ \dot x(t-h_2) \end{bmatrix}
\le 2\xi_0^T(t)\bar P \begin{bmatrix} \dot x(t) \\ \dot x(t-d(t)) \\ \dot x(t-h_2) \end{bmatrix} + \mu\dot x^T(t-d(t))U\dot x(t-d(t)) + \mu\xi_0^T(t)\bar P_2^T U^{-1}\bar P_2\xi_0(t),   (14.77)

where \bar P_2 = \begin{bmatrix} P_{12}^T & P_{22} & P_{23} \end{bmatrix} and U > 0. Calculating the derivative of V̄(xt) along the solutions of system (14.45), following a procedure similar to the one in the previous subsection, and using (14.77) yield

\dot{\bar V}(x_t) \le \xi_1^T(t)\left[\bar\Phi + \bar\Xi^T\bar\Lambda_2\bar\Xi + \mu\bar\Lambda_3^T U^{-1}\bar\Lambda_3\right]\xi_1(t)
- \int_{t-d(t)}^{t}\xi_2^T(t,s)\bar\Psi_1\xi_2(t,s)\,ds - \int_{t-d(t)}^{t-h_1}\xi_2^T(t,s)\bar\Psi_2\xi_2(t,s)\,ds - \int_{t-h_2}^{t-d(t)}\xi_2^T(t,s)\bar\Psi_3\xi_2(t,s)\,ds,   (14.78)
where

\xi_1(t) = \left[x^T(t),\; x^T(t-d(t)),\; x^T(t-h_1),\; x^T(t-h_2),\; \dot x^T(t-d(t)),\; \dot x^T(t-h_2),\; f^T(x(t),t),\; g^T(x(t-d(t)),t)\right]^T,
\xi_2(t,s) = \left[\xi_1^T(t),\; \dot x^T(s)\right]^T.
The proof is completed by following an argument similar to the one for Theorem 14.3.1.
Note that Theorem 14.3.2 is not applicable when μ = 0 or μ ≥ 1. When μ = 0, which means that the delay is constant (that is, d(t) ≡ h2), Theorem 1 in [44] is a general result for neutral systems. When μ ≥ 1 or μ is
unknown, we can use the following delay-dependent and rate-independent criterion obtained directly from Theorem 14.3.2.

Corollary 14.3.3. Consider system (14.45) with a delay, d(t), that satisfies (14.46), but not necessarily (14.47). Given scalars h2 ≥ h1 ≥ 0, the system is robustly stable if there exist matrices

\check P = \begin{bmatrix} P_{11} & P_{12} \\ * & P_{22} \end{bmatrix} > 0,\quad
\check Q_1 \ge 0,\quad
\check Q_2 = \begin{bmatrix} \check Q_{11}^{(2)} & \check Q_{12}^{(2)} \\ * & \check Q_{22}^{(2)} \end{bmatrix} \ge 0,\quad
\check Z_j > 0,\; j = 1, 2,\quad \check X \ge 0,\quad \check Y \ge 0,

any appropriately dimensioned matrices Ň, Š, and M̌, and scalars εj > 0, j = 1, 2, such that the following LMIs hold:

\begin{bmatrix} \check\Phi & \check\Xi^T\check\Lambda_2 \\ * & -\check\Lambda_2 \end{bmatrix} < 0,   (14.79)

\check\Psi_1 = \begin{bmatrix} \check X & \check N \\ * & \check Z_1 \end{bmatrix} \ge 0,   (14.80)

\check\Psi_2 = \begin{bmatrix} \check Y & \check M \\ * & \check Z_2 \end{bmatrix} \ge 0,   (14.81)

\check\Psi_3 = \begin{bmatrix} \check X+\check Y & \check S \\ * & \check Z_1+\check Z_2 \end{bmatrix} \ge 0,   (14.82)

where

\check\Phi = \check\Phi_1 + \check\Phi_2 + \check\Phi_2^T + h_2\check X + (h_2-h_1)\check Y,

\check\Phi_1 = \begin{bmatrix}
\check\Phi_{11} & \check\Phi_{12} & 0 & A^T P_{12} & P_{12} & \check\Lambda_1 & \check\Lambda_1 \\
* & \check\Phi_{22} & 0 & A_d^T P_{12} & 0 & 0 & 0 \\
* & * & -\check Q_1 & 0 & 0 & 0 & 0 \\
* & * & * & -\check Q_{11}^{(2)} & P_{22}-\check Q_{12}^{(2)} & P_{12}^T & P_{12}^T \\
* & * & * & * & -\check Q_{22}^{(2)} & 0 & 0 \\
* & * & * & * & * & -\varepsilon_1 I & 0 \\
* & * & * & * & * & * & -\varepsilon_2 I
\end{bmatrix},

\check\Phi_2 = \begin{bmatrix} \check N & \check S-\check N-\check M & \check M & -\check S & 0 & 0 & 0 \end{bmatrix},

\check\Phi_{11} = \check\Lambda_1 A + A^T\check\Lambda_1^T + \check Q_1 + \check Q_{11}^{(2)} + \varepsilon_1\alpha^2 I,
\check\Phi_{12} = \check\Lambda_1 A_d,
\check\Phi_{22} = \varepsilon_2\beta^2 I,
\check\Xi = \begin{bmatrix} A & A_d & 0 & 0 & 0 & I & I \end{bmatrix},
\check\Lambda_1 = P_{11} + \check Q_{12}^{(2)},
\check\Lambda_2 = h_2\check Z_1 + (h_2-h_1)\check Z_2 + \check Q_{22}^{(2)}.

Proof. Choose the Lyapunov-Krasovskii functional candidate to be

\check V(x_t) = \check V_1(x_t) + \check V_2(x_t) + \check V_3(x_t),   (14.83)

where

\check V_1(x_t) = \begin{bmatrix} x(t) \\ x(t-h_2) \end{bmatrix}^T \check P \begin{bmatrix} x(t) \\ x(t-h_2) \end{bmatrix},

\check V_2(x_t) = \int_{t-h_1}^{t} x^T(s)\check Q_1 x(s)\,ds
+ \int_{t-h_2}^{t} \begin{bmatrix} x(s) \\ \dot x(s) \end{bmatrix}^T \check Q_2 \begin{bmatrix} x(s) \\ \dot x(s) \end{bmatrix} ds,

\check V_3(x_t) = \int_{-h_2}^{0}\int_{t+\theta}^{t} \dot x^T(s)\check Z_1\dot x(s)\,ds\,d\theta
+ \int_{-h_2}^{-h_1}\int_{t+\theta}^{t} \dot x^T(s)\check Z_2\dot x(s)\,ds\,d\theta.

In these equations, \check P > 0, \check Q_1 \ge 0, \check Q_2 \ge 0, and \check Z_j > 0, j = 1, 2, are to be determined. Calculating the derivative of V̌(xt) along the solutions of system (14.45) and following a procedure similar to the one in the previous subsection yield

\dot{\check V}(x_t) \le \eta_1^T(t)\left[\check\Phi + \check\Xi^T\check\Lambda_2\check\Xi\right]\eta_1(t)
- \int_{t-d(t)}^{t}\eta_2^T(t,s)\check\Psi_1\eta_2(t,s)\,ds - \int_{t-d(t)}^{t-h_1}\eta_2^T(t,s)\check\Psi_2\eta_2(t,s)\,ds - \int_{t-h_2}^{t-d(t)}\eta_2^T(t,s)\check\Psi_3\eta_2(t,s)\,ds,   (14.84)

where

\eta_1(t) = \left[x^T(t),\; x^T(t-d(t)),\; x^T(t-h_1),\; x^T(t-h_2),\; \dot x^T(t-h_2),\; f^T(x(t),t),\; g^T(x(t-d(t)),t)\right]^T,
\eta_2(t,s) = \left[\eta_1^T(t),\; \dot x^T(s)\right]^T.
The proof is completed by following an argument similar to the one for Theorem 14.3.1.
14.3.4 Numerical Examples

The example below demonstrates the advantages of our method.

Example 14.3.1. Consider the robust stability of system (14.45) with

A = \begin{bmatrix} -1.2 & 0.1 \\ -0.1 & -1 \end{bmatrix},\qquad
A_d = \begin{bmatrix} -0.6 & 0.7 \\ -1 & -0.8 \end{bmatrix},

which was discussed in [30, 32–34] for h1 = 0 and either α = 0 and β = 0.1 or α = 0.1 and β = 0.1.

Table 14.1 lists values of the upper bound, h2, for h1 = 0 reported in [30, 32–34] along with values obtained by the method described in this chapter. Our method gives less conservative results for systems with a time-varying delay.

Table 14.1. Upper bound, h2, for h1 = 0 (Example 14.3.1)

Bound                α = 0, β = 0.1            α = 0.1, β = 0.1
                     μ = 0.5    unknown μ      μ = 0.5    unknown μ
[30]                 0.5467     —              0.4950     —
[33]                 0.6746     —              0.5716     —
[34]                 1.1365     —              0.9952     —
[32]                 1.1424     0.7355         1.0097     0.7147
Theorem 14.3.1       1.4422     —              1.2848     —
Corollary 14.3.2     —          1.2807         —          1.2099

Table 14.2 lists values of the upper bound, h2, for various h1 and an unknown μ obtained with Corollary 14.3.2. Note that the criteria in [30, 32–34] can only handle the case h1 = 0. It can be seen that h2 is larger for h1 > 0 than for h1 = 0. That means that a criterion that does not restrict the lower bound on the delay to zero is less conservative than one that does.

Table 14.2. Upper bound, h2, for various h1 and unknown μ (Example 14.3.1)

h1                   0          0.5        1
α = 0, β = 0.1       1.2807     1.3083     1.5224
α = 0.1, β = 0.1     1.2099     1.2219     1.3912
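The bounds in Tables 14.1 and 14.2 are the largest values of h2 for which the corresponding LMIs remain feasible. The book does not spell out the search procedure; a simple bisection such as the Python sketch below (our own helper; the lambda at the end is only a stand-in for a real feasibility test such as the corollary_14_3_1_feasible sketch given after Corollary 14.3.2) is the usual way such bounds are computed.

def max_delay_bound(feasible, h2_lo=0.0, h2_hi=5.0, tol=1e-4):
    """Largest h2 for which feasible(h2) holds, assuming feasibility is monotone in h2."""
    if not feasible(h2_lo):
        raise ValueError("criterion infeasible even at h2_lo")
    for _ in range(20):                       # enlarge the bracket if needed
        if not feasible(h2_hi):
            break
        h2_lo, h2_hi = h2_hi, 2.0 * h2_hi
    while h2_hi - h2_lo > tol:
        mid = 0.5 * (h2_lo + h2_hi)
        h2_lo, h2_hi = (mid, h2_hi) if feasible(mid) else (h2_lo, mid)
    return h2_lo

if __name__ == "__main__":
    # Stand-in test that mimics the Corollary 14.3.2 entry 1.2807 in Table 14.1;
    # in practice, feasible() would call an LMI solver for each trial h2.
    print(max_delay_bound(lambda h2: h2 <= 1.2807))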
14.4 Conclusion

This chapter considers Lur'e control systems with multiple nonlinearities and a delay. Necessary and sufficient conditions for the existence of a Lyapunov-
Krasovskii functional in the extended Lur’e form that ensures the absolute stability of the system are obtained and then extended to systems with time-varying structured uncertainties. They are formulated in terms of LMIs. Moreover, the FWM approach is used to derive a delay-dependent criterion for the absolute stability of Lur’e control systems with a time-varying delay. Finally, for systems with nonlinear perturbations and a time-varying interval delay, the IFWM approach is used to estimate the upper bound on the derivative of a Lyapunov-Krasovskii functional. Less conservative delay-dependent stability criteria are established by allowing the lower bound on a delay to be non-zero and by using an augmented Lyapunov-Krasovskii functional.
References

1. A. I. Lur'e. Some Nonlinear Problems in the Theory of Automatic Control. London: H. M. Stationery Office, 1957.
2. V. M. Popov. Absolute stability of nonlinear systems of automatic control. Automation and Remote Control, 22(8): 857-875, 1962.
3. K. S. Narendra and J. H. Taylor. Frequency Domain Criteria for Absolute Stability. New York: Academic Press, 1973.
4. A. R. Gaiduk. Absolute stability of control systems with several nonlinearities. Automation and Remote Control, 37: 815-821, 1976.
5. P. Park. A revisited Popov criterion for nonlinear Lurie systems with sector-restrictions. International Journal of Control, 68(3): 461-470, 1997.
6. V. M. Popov. Sufficient criteria for global asymptotic stability for nonlinear control systems with several actuators. Studii Cercet. Energ., IX(4): 647-680, 1959. (in Romanian)
7. V. Rasvan. Popov theories and qualitative behavior of dynamic and control systems. European Journal of Control, 8(3): 190-199, 2002.
8. S. X. Zhao. On absolute stability of control systems with several executive elements. Scientia Sinica (Ser. A), 31(4): 395-405, 1988.
9. Z. X. Gan and J. Q. Han. Lyapunov function of general Lurie systems with multiple nonlinearities. Applied Mathematics Letters, 16(1): 119-126, 2003.
10. V. M. Popov and A. Halanay. About stability of nonlinear controlled systems with delay. Automation and Remote Control, 23(7): 848-851, 1962.
11. P. A. Bliman. Extension of Popov absolute stability criterion to nonautonomous systems with delays. International Journal of Control, 73(15): 1348-1361, 2000.
12. P. A. Bliman. Absolute stability criteria with prescribed decay rate for finite-dimensional and delay systems. Automatica, 38(11): 2015-2019, 2002.
13. A. Somolines. Stability of Lurie-type functional equations. Journal of Differential Equations, 26(2): 191-199, 1977.
14. Q. L. Han. Absolute stability of time-delay systems with sector-bounded nonlinearity. Automatica, 41(12): 2171-2176, 2005.
15. Q. L. Han and D. Yue. Absolute stability of Lur'e systems with time-varying delay. IET Proceedings: Control Theory & Applications, 1(3): 854-859, 2007.
16. Z. X. Gan and W. G. Ge. Lyapunov functional for multiple delay general Lur'e control systems with multiple non-linearities. Journal of Mathematical Analysis and Applications, 259(2): 596-608, 2001.
17. L. T. Grujic. Robust absolutely stable Lurie systems. International Journal of Control, 46(1): 357-368, 1987.
18. L. T. Grujic and D. Petkovski. On robustness of Lurie systems with multiple non-linearities. Automatica, 23(3): 327-334, 1987.
19. A. Tesi and A. Vicino. Robust absolute stability of Lur'e control systems in parameter space. Automatica, 27(1): 147-151, 1991.
20. M. Dahleh, A. Tesi, and A. Vicino. On the robust Popov criteria for interval Lur'e systems. IEEE Transactions on Automatic Control, 38(9): 1400-1405, 1993.
21. A. V. Savkin and I. R. Petersen. A method for robust stabilization related to the Popov stability criterion. International Journal of Control, 62(5): 1105-1115, 1995.
22. T. Wada, M. Ikeda, Y. Ohta, and D. D. Siljak. Parametric absolute stability of Lur'e systems. IEEE Transactions on Automatic Control, 43(11): 1649-1653, 1998.
23. T. Wada, M. Ikeda, Y. Ohta, and D. D. Siljak. Parametric absolute stability of multivariable Lur'e systems. Automatica, 36(9): 1365-1372, 2000.
24. H. Miyagi and K. Yamashita. Robust stability of Lur'e systems with multiple nonlinearities. IEEE Transactions on Automatic Control, 37(6): 883-886, 1992.
25. K. Konishi and H. Kokame. Robust stability of Lur'e systems with time-varying uncertainties: a linear matrix inequality approach. International Journal of Systems Science, 30(1): 3-9, 1999.
26. B. Yang and M. Chen. Delay-dependent criterion for absolute stability of Lurie type control systems with time delay. Control Theory and Applications, 18(6): 928-931, 2001.
27. B. Xu. Stability robustness bounds for linear systems with multiple time-varying delayed perturbations. International Journal of Systems Science, 28(12): 1311-1317, 1997.
28. H. Trinh and M. Aldeen. On robustness and stabilization of linear systems with delayed nonlinear perturbations. IEEE Transactions on Automatic Control, 42(7): 1005-1007, 1997.
29. A. Goubet-Bartholomeus and M. Dambrine. Stability of perturbed systems with time-varying delays. Systems & Control Letters, 31(3): 155-163, 1997.
30. Y. Y. Cao and J. Lam. Computation of robust stability bounds for time-delay systems with nonlinear time-varying perturbations. International Journal of Systems Science, 31(3): 359-365, 2000.
31. N. L. Ni and M. J. Er. Stability of linear systems with delayed perturbations: an LMI approach. IEEE Transactions on Circuits and Systems I, 49(1): 108-112, 2002.
32. Z. Zuo and Y. Wang. New stability criterion for a class of linear systems with time-varying delay and nonlinear perturbations. IEE Proceedings–Control Theory & Applications, 153(5): 623-626, 2006.
33. Q. L. Han. Robust stability for a class of linear systems with time-varying delay and nonlinear perturbations. Computers & Mathematics with Applications, 47(8-9): 1201-1209, 2004.
34. Q. L. Han and L. Yu. Robust stability of linear neutral systems with nonlinear parameter perturbations. IEE Proceedings–Control Theory & Applications, 151(5): 539-546, 2004.
35. Y. He, Q. G. Wang, L. H. Xie, and C. Lin. Further improvement of free-weighting matrices technique for systems with time-varying delay. IEEE Transactions on Automatic Control, 52(2): 293-299, 2007.
36. M. Wu, Y. He, and J. H. She. On absolute stability of Lur'e control systems with multiple non-linearities using linear matrix inequalities. Journal of Control Theory and Applications, 2(2): 131-136, 2004.
37. Y. He and M. Wu. Delay-dependent conditions for absolute stability of Lur'e control systems with time-varying delay. Acta Automatica Sinica, 31(3): 475-478, 2005.
38. Y. He and M. Wu. Absolute stability for multiple delay general Lur'e control systems with multiple nonlinearities. Journal of Computational and Applied Mathematics, 159(2): 241-248, 2003.
39. Y. He, G. P. Liu, D. Rees, and M. Wu. Improved delay-dependent stability criteria for systems with nonlinear perturbations. European Journal of Control, 13(4): 356-365, 2007.
40. S. Boyd, L. E. Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. Philadelphia: SIAM, 1994.
41. E. Fridman and U. Shaked. Delay-dependent stability and H∞ control: constant and time-varying delays. International Journal of Control, 76(1): 48-60, 2003.
42. Y. He, M. Wu, J. H. She, and G. P. Liu. Parameter-dependent Lyapunov functional for stability of time-delay systems with polytopic-type uncertainties. IEEE Transactions on Automatic Control, 49(5): 828-832, 2004.
43. M. Wu, Y. He, J. H. She, and G. P. Liu. Delay-dependent criteria for robust stability of time-varying delay systems. Automatica, 40(8): 1435-1439, 2004.
44. Y. He, Q. G. Wang, C. Lin, and M. Wu. Augmented Lyapunov functional and delay-dependent stability criteria for neutral systems. International Journal of Robust and Nonlinear Control, 15(18): 923-933, 2005.
Index
absolutely stable, 302
asymptotically stable, 20, 22, 28
augmented Lyapunov-Krasovskii functional, 110
Basic inequality, 5
BRL, 165
CCL algorithm, 131
completely LMI-based design method, 138
delay-dependent and rate-independent condition, 47
delay-dependent condition, 3
delay-independent and rate-dependent condition, 57
delay-independent condition, 3
descriptor model transformation, 7
discrete delay, 2
discrete-delay- and neutral-delay-dependent condition, 122
discrete-delay-dependent and neutral-delay-independent condition, 99
discretized Lyapunov-Krasovskii functional method, 4
DOF controller, 149
Eigenvalue problem, 35
equilibrium point, 19, 21
EVP, 35
exponential convergence rate, 28
exponentially stable, 20, 22
fixed model transformation, 5
Frequency-domain method, 2
fuzzy model, 252
FWM approach, 44
generalized eigenvalue, 36
Generalized eigenvalue problem, 36
GEVP, 36
globally asymptotically stable, 20, 22, 28
globally exponentially stable, 20, 22, 28
globally uniformly asymptotically stable, 28
H∞ control problem, 164
H∞ filtering problem, 179, 189
H∞ norm, 31
H∞ optimal control problem, 33
H∞ space, 31
H∞ suboptimal control problem, 33
ICCL algorithm, 132
IFWM approach, 59
iterative nonlinear minimization algorithm, 129
Lebesgue-measurable elements, 43
LMI, 34
LMI problem, 35
LMIP, 35
lower linear fractional transformation, 33
Lur'e control system, 301
Lyapunov function, 23
Lyapunov matrix, 3
Lyapunov stability theorem, 24, 25
Lyapunov's direct method, 22
Lyapunov-Krasovskii functional, 28
Lyapunov-Krasovskii stability theorem, 29
Markovian jump system, 284
maximum allowable delay bound, 263
maximum allowable transfer interval, 263
memoryless state-feedback controller, 128
Moon et al.'s inequality, 5
multiple delays, 74
multiple nonlinearities, 301
NCS, 263
neutral delay, 93
neutral functional differential equation, 26
neutral system, 93
norm, 31
parameter-dependent Lyapunov-Krasovskii functional, 56
parameter-tuning method, 136
parameterized model transformation, 10
Park's inequality, 5
polytopic-type uncertainties, 43
quadratic Lyapunov-Krasovskii functional, 45
Razumikhin theorem, 30
retarded functional differential equation, 26
robustly absolutely stable, 302
S-procedure, 37
Schur complement, 37
SOF controller, 148
stable, 20, 22, 28
stochastic system, 278
system matrices, 2
T-S fuzzy system, 251
time delay, 1
time-delay system, 1, 26
Time-domain method, 3
time-varying interval delay, 148
time-varying structured uncertainties, 43
uniformly asymptotically stable, 22, 28
uniformly stable, 20, 22, 28