THE ELECTRICAL ENGINEERING HANDBOOK
THE ELECTRICAL ENGINEERING HANDBOOK
Wai-Kai Chen, Editor
AMSTERDAM • BOSTON • HEIDELBERG • LONDON NEW YORK • OXFORD • PARIS • SAN DIEGO SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO Academic Press is an imprint of Elsevier
Elsevier Academic Press 200 Wheeler Road, 6th Floor, Burlington, MA 01803, USA 525 B Street, Suite 1900, San Diego, California 92101-4495, USA 84 Theobald’s Road, London WC1X 8RR, UK This book is printed on acid-free paper.
Copyright © 2004, Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Permissions may be sought directly from Elsevier’s Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail:
[email protected]. You may also complete your request on-line via the Elsevier homepage (http://elsevier.com), by selecting ‘‘Customer Support’’ and then ‘‘Obtaining Permissions.’’ Library of Congress Cataloging-in-Publication Data Application submitted British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library ISBN: 0-12-170960-4 For all information on all Academic Press publications visit our Web site at www.books.elsevier.com 04 05 06 07 08 09 9 8 7 6 5 4 3 2 1 Printed in the United States of America
Contents

Contributors .......................................................... xiv
Preface ............................................................... xv
Editor-in-Chief ....................................................... xvii

I Circuit Theory ...................................................... 1
Krishnaiyan Thulasiraman

1 Linear Circuit Analysis ............................................. 3
  P.K. Rajan and Arun Sekar
2 Circuit Analysis: A Graph-Theoretic Foundation ...................... 31
  Krishnaiyan Thulasiraman and M.N.S. Swamy
3 Computer-Aided Design ............................................... 43
  Ajoy Opal
4 Synthesis of Networks ............................................... 53
  Jiri Vlach
5 Nonlinear Circuits .................................................. 75
  Ljiljana Trajković

II Electronics ........................................................ 83
Krishna Shenai

1 Investigation of Power Management Issues for Future Generation Microprocessors ... 85
  Fred C. Lee and Xunwei Zhou
2 Noise in Analog and Digital Systems ................................. 101
  Erik A. McShane and Krishna Shenai
3 Field Effect Transistors ............................................ 109
  Veena Misra and Mehmet C. Öztürk
4 Active Filters ...................................................... 127
  Rolf Schaumann
5 Junction Diodes and Bipolar Junction Transistors .................... 139
  Michael Schröter
6 Semiconductors ...................................................... 153
  Michael Shur
7 Power Semiconductor Devices ......................................... 163
  Malay Trivedi and Krishna Shenai

III VLSI Systems ...................................................... 177
Magdy Bayoumi

1 Logarithmic and Residue Number Systems for VLSI Arithmetic .......... 179
  Thanos Stouraitis
2 Custom Memory Organization and Data Transfer: Architectural Issues and Exploration Methods ... 191
  Francky Catthoor, Erik Brockmeyer, Koen Danckaert, Chidamber Kulkani, Lode Nachtergaele, and Arnout Vandecappelle
3 The Role of Hardware Description Languages in the Design Process of Multinature Systems ... 217
  Sorin A. Huss
4 Clock Skew Scheduling for Improved Reliability ...................... 231
  Ivan S. Kourtev and Eby G. Friedman
5 Trends in Low-Power VLSI Design ..................................... 263
  Tarek Darwish and Magdy Bayoumi
6 Production and Utilization of Micro Electro Mechanical Systems ...... 281
  David J. Nagel and Mona E. Zaghloul
7 Noise Analysis and Design in Deep Submicron Technology .............. 299
  Mohamed Elgamel and Magdy Bayoumi
8 Interconnect Noise Analysis and Optimization in Deep Submicron Technology ... 311
  Mohamed Elgamel and Magdy Bayoumi

IV Digital Systems and Computer Engineering ........................... 321
Sun-Yung Kung and Benjamin W. Wah

1 Computer Architecture ............................................... 323
  Morris Chang
2 Multiprocessors ..................................................... 335
  Peter Y. K. Cheung, George A. Constantinides, and Wayne Luk
3 Configurable Computing .............................................. 343
  Wayne Luk, Peter Y. K. Cheung, and Nabeel Shirazi
4 Operating Systems ................................................... 355
  Yao-Nan Lien
5 Expert Systems ...................................................... 367
  Yi Shang
6 Multimedia Systems: Content-Based Indexing and Retrieval ............ 379
  Faisal Bashir, Shashank Khanvilkar, Ashfaq Khokhar, and Dan Schonfeld
7 Multimedia Networks and Communication ............................... 401
  Shashank Khanvilkar, Faisal Bashir, Dan Schonfeld, and Ashfaq Khokhar
8 Fault Tolerance in Computer Systems—From Circuits to Algorithms ..... 427
  Shantanu Dutt, Federico Rota, Franco Trovo, and Fran Hanchek
9 High-Level Petri Nets—Extensions, Analysis, and Applications ........ 459
  Xudong He and Tadao Murata

V Electromagnetics .................................................... 477
Hung-Yu David Yang

1 Magnetostatics ...................................................... 479
  Keith W. Whites
2 Electrostatics ...................................................... 499
  Rodolfo E. Diaz
3 Plane Wave Propagation and Reflection ............................... 513
  David R. Jackson
4 Transmission Lines .................................................. 525
  Krishna Naishadham
5 Guided Waves ........................................................ 539
  Franco De Flaviis
6 Antennas and Radiation .............................................. 553
  Nirod K. Das
  I Antenna Fundamentals .............................................. 553
  II Antenna Elements and Arrays ...................................... 569
7 Microwave Passive Components ........................................ 585
  Ke Wu, Lei Zhu, and Ruediger Vahldieck
8 Computational Electromagnetics: The Method of Moments ............... 619
  Jian-Ming Jin and Weng Cho Chew
9 Computational Electromagnetics: The Finite-Difference Time-Domain Method ... 629
  Allen Taflove, Susan C. Hagness, and Melinda Piket-May
10 Radar and Inverse Scattering ....................................... 671
  Hsueh-Jyh Li and Yean-Woei Kiang
11 Microwave Active Circuits and Integrated Antennas .................. 691
  William R. Deal, Vesna Radisic, Yongxi Qian, and Tatsuo Itoh

VI Electric Power Systems ............................................. 707
Anjan Bose

1 Three-Phase Alternating Current Systems ............................. 709
  Anjan Bose
2 Electric Power System Components .................................... 713
  Anjan Bose
3 Power Transformers .................................................. 715
  Bob C. Degeneff
4 Introduction to Electric Machines ................................... 721
  Sheppard Joel Salon
5 High-Voltage Transmission ........................................... 737
  Ravi S. Gorur
6 Power Distribution .................................................. 749
  Turan Gönen
7 Power System Analysis ............................................... 761
  Mani Venkatasubramanian and Kevin Tomsovic
8 Power System Operation and Control .................................. 779
  Mani Venkatasubramanian and Kevin Tomsovic
9 Fundamentals of Power System Protection ............................. 787
  Mladen Kezunovic
10 Electric Power Quality ............................................. 805
  Gerald T. Heydt

VII Signal Processing ................................................. 811
Yih-Fang Huang

1 Signals and Systems ................................................. 813
  Rashid Ansari and Lucia Valbonesi
2 Digital Filters ..................................................... 839
  Marcio G. Siqueira and Paulo S.R. Diniz
3 Methods, Models, and Algorithms for Modern Speech Processing ........ 861
  John R. Deller, Jr. and John Hansen
4 Digital Image Processing ............................................ 891
  Eduardo A.B. da Silva and Gelson V. Mendonça
5 Multimedia Systems and Signal Processing ............................ 911
  John R. Smith
6 Statistical Signal Processing ....................................... 921
  Yih-Fang Huang
7 VLSI Signal Processing .............................................. 933
  Surin Kittitornkun and Yu-Hen Hu

VIII Digital Communication and Communication Networks ................. 949
Vijay K. Garg and Yih-Chen Wang

1 Signal Types, Properties, and Processes ............................. 951
  Vijay K. Garg and Yih-Chen Wang
2 Digital Communication System Concepts ............................... 957
  Vijay K. Garg and Yih-Chen Wang
3 Transmission of Digital Signals ..................................... 965
  Vijay K. Garg and Yih-Chen Wang
4 Modulation and Demodulation Technologies ............................ 971
  Vijay K. Garg and Yih-Chen Wang
5 Data Communication Concepts ......................................... 983
  Vijay K. Garg and Yih-Chen Wang
6 Communication Network Architecture .................................. 989
  Vijay K. Garg and Yih-Chen Wang
7 Wireless Network Access Technologies ................................ 1005
  Vijay K. Garg and Yih-Chen Wang
8 Convergence of Networking Technologies .............................. 1011
  Vijay K. Garg and Yih-Chen Wang

IX Controls and Systems ............................................... 1017
Michael Sain

1 Algebraic Topics in Control ......................................... 1019
  Cheryl B. Schrader
2 Stability ........................................................... 1027
  Derong Liu
3 Robust Multivariable Control ........................................ 1037
  Oscar R. González and Atul G. Kelkar
4 State Estimation .................................................... 1049
  Jay Farrell
5 Cost-Cumulants and Risk-Sensitive Control ........................... 1061
  Chang-Hee Won
6 Frequency Domain System Identification .............................. 1069
  Gang Jin
7 Modeling Interconnected Systems: A Functional Perspective ........... 1079
  Stanley R. Liberty
8 Fault-Tolerant Control .............................................. 1085
  Gary G. Yen
9 Gain-Scheduled Controllers .......................................... 1107
  Christopher J. Bett
10 Sliding-Mode Control Methodologies for Regulating Idle Speed in Internal Combustion Engines ... 1115
  Stephen Yurkovich and Xiaoqiu Li
11 Nonlinear Input/Output Control: Volterra Synthesis ................. 1131
  Patrick M. Sain
12 Intelligent Control of Nonlinear Systems with a Time-Varying Structure ... 1139
  Raúl Ordóñez and Kevin M. Passino
13 Direct Learning by Reinforcement ................................... 1151
  Jennie Si
14 Software Technologies for Complex Control Systems .................. 1161
  Bonnie S. Heck

Index ................................................................ 1171
Contributors

Rashid Ansari, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Faisal Bashir, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Magdy Bayoumi, The Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, Louisiana, USA
Christopher J. Bett, Raytheon Integrated Defense Systems, Tewksbury, Massachusetts, USA
Anjan Bose, College of Engineering and Architecture, Washington State University, Pullman, Washington, USA
Erik Brockmeyer, IMEC, Leuven, Belgium
Francky Catthoor, IMEC, Leuven, Belgium
Morris Chang, Department of Electrical and Computer Engineering, Iowa State University, Ames, Iowa, USA
Peter Y. K. Cheung, Department of Electrical and Electronic Engineering, Imperial College of Science, Technology, and Medicine, London, UK
Weng Cho Chew, Center for Computational Electromagnetics, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
George A. Constantinides, Department of Electrical and Electronic Engineering, Imperial College of Science, Technology, and Medicine, London, UK
Koen Danckaert, IMEC, Leuven, Belgium
Tarek Darwish, The Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, Louisiana, USA
Nirod K. Das, Department of Electrical and Computer Engineering, Polytechnic University, Brooklyn, New York, USA
Eduardo A.B. da Silva, Program of Electrical Engineering, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
William R. Deal, Northrop Grumman Space Technologies, Redondo Beach, California, USA
Franco De Flaviis, Department of Electrical and Computer Engineering, University of California at Irvine, Irvine, California, USA
Bob C. Degeneff, Department of Computer, Electrical, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, New York, USA
John R. Deller, Jr., Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, USA
Rodolfo E. Diaz, Department of Electrical Engineering, Ira A. Fulton School of Engineering, Arizona State University, Tempe, Arizona, USA
Paulo S. R. Diniz, Program of Electrical Engineering, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
Shantanu Dutt, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Mohamed Elgamel, The Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, Louisiana, USA
Jay Farrell, Department of Electrical Engineering, University of California, Riverside, California, USA
Eby G. Friedman, Department of Electrical and Computer Engineering, University of Rochester, Rochester, New York, USA
Vijay K. Garg, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Turan Gönen, College of Engineering and Computer Science, California State University, Sacramento, Sacramento, California, USA
Oscar R. González, Department of Electrical and Computer Engineering, Old Dominion University, Norfolk, Virginia, USA
Ravi S. Gorur, Department of Electrical Engineering, Arizona State University, Tempe, Arizona, USA
Susan C. Hagness, Department of Electrical and Computer Engineering, University of Wisconsin, Madison, Wisconsin, USA
Fran Hanchek, Intel Corporation, Portland, Oregon, USA
John Hansen, Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, USA
Xudong He, School of Computer Science, Florida International University, Miami, Florida, USA
Bonnie S. Heck, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
Gerald T. Heydt, Department of Electrical Engineering, Arizona State University, Tempe, Arizona, USA
Yu-Hen Hu, Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA
Yih-Fang Huang, Department of Electrical Engineering, University of Notre Dame, Notre Dame, Indiana, USA
Sorin A. Huss, Integrated Circuits and Systems Laboratory, Computer Science Department, Darmstadt University of Technology, Darmstadt, Germany
Tatsuo Itoh, Department of Electrical Engineering, University of California, Los Angeles, Los Angeles, California, USA
David R. Jackson, Department of Electrical and Computer Engineering, University of Houston, Houston, Texas, USA
Gang Jin, Ford Motor Company, Dearborn, Michigan, USA
Jian-Ming Jin, Center for Computational Electromagnetics, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
Atul G. Kelkar, Department of Mechanical Engineering, Iowa State University, Ames, Iowa, USA
Mladen Kezunovic, Department of Electrical Engineering, Texas A & M University, College Station, Texas, USA
Shashank Khanvilkar, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Ashfaq Khokhar, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Yean-Woei Kiang, Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
Surin Kittitornkun, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand
Ivan S. Kourtev, Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
Chidamber Kulkani, IMEC, Leuven, Belgium
Sun-Yung Kung, Department of Electrical Engineering, Princeton University, Princeton, New Jersey, USA
Fred C. Lee, Center for Power Electronics Systems, The Bradley Department of Electrical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA
Hsueh-Jyh Li, Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
Xiaoqiu Li, Cummins Engine, Columbus, Indiana, USA
Stanley R. Liberty, Academic Affairs, Bradley University, Peoria, Illinois, USA
Yao-Nan Lien, Department of Computer Science, National Chengchi University, Taipei, Taiwan
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Wayne Luk, Department of Electrical and Electronic Engineering, Imperial College of Science, Technology, and Medicine, London, UK
Erik A. McShane, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Gelson V. Mendonça, Department of Electronics, COPPE/EE/Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
Veena Misra, Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, North Carolina, USA
Tadao Murata, Department of Computer Science, University of Illinois at Chicago, Chicago, Illinois, USA
Lode Nachtergaele, IMEC, Leuven, Belgium
David J. Nagel, Department of Electrical and Computer Engineering, The George Washington University, Washington, D.C., USA
Krishna Naishadham, Massachusetts Institute of Technology, Lincoln Laboratory, Lexington, Massachusetts, USA
Ajoy Opal, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
Raúl Ordóñez, Department of Electrical and Computer Engineering, University of Dayton, Dayton, Ohio, USA
Mehmet C. Öztürk, Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, North Carolina, USA
Kevin M. Passino, Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio, USA
Melinda Piket-May, Department of Electrical and Computer Engineering, University of Colorado, Boulder, Colorado, USA
Yongxi Qian, Department of Electrical Engineering, University of California, Los Angeles, Los Angeles, California, USA
Vesna Radisic, Microsemi Corporation, Los Angeles, California, USA
P.K. Rajan, Department of Electrical and Computer Engineering, Tennessee Technological University, Cookeville, Tennessee, USA
Federico Rota, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA; Politecnico di Torino, Italy
Michael Sain, Department of Electrical Engineering, University of Notre Dame, Notre Dame, Indiana, USA
Patrick M. Sain, Raytheon Company, El Segundo, California, USA
Sheppard Joel Salon, Department of Electrical Power Engineering, Rensselaer Polytechnic Institute, Troy, New York, USA
Rolf Schaumann, Department of Electrical and Computer Engineering, Portland State University, Portland, Oregon, USA
Dan Schonfeld, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Cheryl B. Schrader, College of Engineering, Boise State University, Boise, Idaho, USA
Michael Schröter, Institute for Electro Technology and Electronics Fundamentals, University of Technology, Dresden, Germany
Arun Sekar, Department of Electrical and Computer Engineering, Tennessee Technological University, Cookeville, Tennessee, USA
Yi Shang, Department of Computer Science, University of Missouri-Columbia, Columbia, Missouri, USA
Krishna Shenai, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Nabeel Shirazi, Xilinx, Inc., San Jose, California, USA
Michael Shur, Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, New York, USA
Jennie Si, Department of Electrical Engineering, Arizona State University, Tempe, Arizona, USA
Marcio G. Siqueira, Cisco Systems, Sunnyvale, California, USA
John R. Smith, IBM T. J. Watson Research Center, Hawthorne, New York, USA
Thanos Stouraitis, Department of Electrical and Computer Engineering, University of Patras, Rio, Greece
M.N.S. Swamy, Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada
Allen Taflove, Department of Electrical and Computer Engineering, Northwestern University, Chicago, Illinois, USA
Krishnaiyan Thulasiraman, School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA
Kevin Tomsovic, School of Electrical Engineering and Computer Science, Washington State University, Pullman, Washington, USA
Ljiljana Trajković, School of Engineering Science, Simon Fraser University, Vancouver, British Columbia, Canada
Malay Trivedi, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Franco Trovo, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA; Politecnico di Torino, Italy
Ruediger Vahldieck, Laboratory for Electromagnetic Fields and Microwave Electronics, Swiss Federal Institute of Technology, Zurich, Switzerland
Lucia Valbonesi, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Arnout Vandecappelle, IMEC, Leuven, Belgium
Mani Venkatasubramanian, School of Electrical Engineering and Computer Science, Washington State University, Pullman, Washington, USA
Jiri Vlach, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
Benjamin W. Wah, Computer and Systems Research Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
Yih-Chen Wang, Lucent Technologies, Naperville, Illinois, USA
Keith W. Whites, Department of Electrical and Computer Engineering, South Dakota School of Mines and Technology, Rapid City, South Dakota, USA
Chang-Hee Won, Department of Electrical Engineering, University of North Dakota, Grand Forks, North Dakota, USA
Ke Wu, Department of Electrical and Computer Engineering, Ecole Polytechnique, Montreal, Quebec, Canada
Hung-Yu David Yang, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Gary G. Yen, Intelligent Systems and Control Laboratory, School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, Oklahoma, USA
Stephen Yurkovich, Center for Automotive Research, The Ohio State University, Columbus, Ohio, USA
Mona E. Zaghloul, Department of Electrical and Computer Engineering, The George Washington University, Washington, D.C., USA
Xunwei Zhou, Center for Power Electronics Systems, The Bradley Department of Electrical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA
Lei Zhu, Department of Electrical and Computer Engineering, Ecole Polytechnique, Montreal, Quebec, Canada
Preface

Purpose

The purpose of The Electrical Engineering Handbook is to provide a comprehensive reference work covering the broad spectrum of electrical engineering in a single volume. It is written and developed for practicing electrical engineers in industry, government, and academia. The goal is to provide the most up-to-date information in the classical fields of circuits, electronics, electromagnetics, electric power systems, and control systems, while covering the emerging fields of VLSI systems, digital systems, computer engineering, computer-aided design and optimization techniques, signal processing, digital communications, and communication networks. This handbook is not an all-encompassing digest of everything taught within an electrical engineering curriculum. Rather, it is the engineer's first choice when looking for a solution; full references to other sources of contributions are therefore provided. The ideal reader is a B.S.-level engineer who needs a one-source reference to keep abreast of new techniques and procedures as well as to review standard practices.
Background

The handbook stresses the fundamental theory behind professional applications and reinforces it with frequent examples. Extensive development of theory and details of proofs have been omitted; the reader is assumed to have a certain degree of sophistication and experience. However, brief, deliberately concise reviews of the theories, principles, and mathematics of some subject areas are given. The handbook is not a textbook replacement, but rather a reinforcement and reminder of material learned as a student. Therefore, important advancements and traditional as well as innovative practices are included. Since the majority of professional electrical engineers graduated before powerful personal computers were widely available, many computational and design methods may be new to them. Therefore, the use of computers and software is thoroughly covered. Not only does the handbook use traditional references to cite sources for the contributions, but it also contains
relevant sources of information and tools that would assist the engineer in performing his/her job. This may include sources of software, databases, standards, seminars, conferences, and so forth.
Organization

Over the years, the fundamentals of electrical engineering have evolved to include a wide range of topics and a broad range of practice. To encompass such a wide range of knowledge, the handbook focuses on the key concepts, models, and equations that enable the electrical engineer to analyze, design, and predict the behavior of electrical systems. While design formulas and tables are listed, emphasis is placed on the key concepts and theories underlying the applications. The information is organized into nine major sections, which encompass the field of electrical engineering. Each section is divided into chapters. In all, there are 72 chapters involving 108 authors, each chapter written by leading experts in the field to enlighten and refresh the knowledge of the mature engineer and educate the novice. Each section contains introductory material leading to the appropriate applications. To help the reader, each article includes two important and useful categories: defining terms and references. Defining terms are key definitions, and the first occurrence of each term defined is indicated in boldface in the text. The references provide a list of useful books and articles for further reading.
Locating Your Topic

Numerous avenues of access to information contained in the handbook are provided. A complete table of contents is presented at the front of the book. In addition, an individual table of contents precedes each of the nine sections. The reader is urged to look over these tables of contents to become familiar with the structure, organization, and content of the book. For example, see Section VII: Signal Processing, then Chapter 7: VLSI Signal Processing, and then Chapter 7.3: Hardware Implementation. This tree-like structure enables the reader to move up the tree to locate information on the topic of interest. The Electrical Engineering Handbook is designed to provide answers to most inquiries and to direct the inquirer to further sources and references. We trust that it will meet your needs.
Acknowledgments

The compilation of this book would not have been possible without the dedication and efforts of the section editors, the publishers, and most of all the contributing authors. I particularly wish to acknowledge my wife, Shiao-Ling, for her patience and support.

Wai-Kai Chen
Editor-in-Chief
Editor-in-Chief

Wai-Kai Chen is Professor and Head Emeritus of the Department of Electrical Engineering and Computer Science at the University of Illinois at Chicago. He received his B.S. and M.S. in electrical engineering at Ohio University, where he was later recognized as a Distinguished Professor. He earned his Ph.D. in electrical engineering at the University of Illinois at Urbana-Champaign. Professor Chen has extensive experience in education and industry and is very active professionally in the fields of
circuits and systems. He has served as visiting professor at Purdue University, University of Hawaii at Manoa, and Chuo University in Tokyo, Japan. He was Editor-in-Chief of the IEEE Transactions on Circuits and Systems, Series I and II, President of the IEEE Circuits and Systems Society, and is the Founding Editor and Editor-in-Chief of the Journal of Circuits, Systems and Computers. He received the Lester R. Ford Award from the Mathematical Association of America,
the Alexander von Humboldt Award from Germany, the JSPS Fellowship Award from the Japan Society for the Promotion of Science, the National Taipei University of Science and Technology Distinguished Alumnus Award, the Ohio University Alumni Medal of Merit for Distinguished Achievement in Engineering Education, the Senior University Scholar Award and the 2000 Faculty Research Award from the University of Illinois at Chicago, and the Distinguished Alumnus Award from the University of Illinois at Urbana-Champaign. He is the recipient of the Golden Jubilee Medal, the Education Award, and the Meritorious Service Award from the IEEE Circuits and Systems Society, and the Third Millennium Medal from the IEEE. He has also received more than a dozen honorary professorship awards from major institutions in Taiwan and China.
A fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the American Association for the Advancement of Science (AAAS), Professor Chen is widely known in the profession for his Applied Graph Theory (North-Holland), Theory and Design of Broadband Matching Networks (Pergamon Press), Active Network and Feedback Amplifier Theory (McGraw-Hill), Linear Networks and Systems (Brooks/Cole), Passive and Active Filters: Theory and Implementations (John Wiley), Theory of Nets: Flows in Networks (Wiley-Interscience), The Circuits and Filters Handbook (CRC Press), and The VLSI Handbook (CRC Press).
I CIRCUIT THEORY
Krishnaiyan Thulasiraman School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA
Circuit theory is an important and perhaps the oldest branch of electrical engineering. A circuit is an interconnection of electrical elements. These include passive elements, such as resistances, capacitances, and inductances, as well as active elements and sources (or excitations). Two variables, namely voltage and current variables, are associated with each circuit element. There are two aspects to circuit theory: analysis and design. Circuit analysis involves the determination of current and voltage values in different elements of the circuit, given the values of the sources or excitations. On the other hand, circuit design focuses on the design of circuits that exhibit a certain prespecified voltage or current characteristics at one or more
parts of the circuit. Circuits can also be broadly classified as linear or nonlinear circuits. This section consists of five chapters that provide a broad introduction to the most fundamental principles and techniques in circuit analysis and design:

. Linear Circuit Analysis
. Circuit Analysis: A Graph-Theoretic Foundation
. Computer-Aided Design
. Synthesis of Networks
. Nonlinear Circuits
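As a concrete illustration of circuit analysis (a minimal sketch, not an example from the handbook itself), consider one of its simplest instances: a 10 V source behind a 1 kΩ resistor feeds a node that is tied to ground through a 2 kΩ resistor. Writing Kirchhoff's current law at that node and solving the resulting conductance-matrix equation G v = i determines the unknown node voltage; the component values here are chosen purely for illustration.

```python
import numpy as np

# Nodal analysis of a resistive divider.
# KCL at node 1: (V1 - Vs)/R1 + V1/R2 = 0
R1, R2, Vs = 1e3, 2e3, 10.0  # ohms, ohms, volts

# Conductance-matrix form G @ v = i (a single unknown node here,
# so G is 1x1; the same setup scales to circuits with many nodes).
G = np.array([[1/R1 + 1/R2]])
i = np.array([Vs / R1])

v = np.linalg.solve(G, i)
print(v[0])  # node voltage in volts (20/3 ≈ 6.667 V)
```

The same pattern, stamping each element's conductance into G and each source's contribution into i, then solving the linear system, is the basis of the node voltage method treated in Chapter 1.
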
1 Linear Circuit Analysis

P.K. Rajan and Arun Sekar
Department of Electrical and Computer Engineering, Tennessee Technological University, Cookeville, Tennessee, USA

1.1 Definitions and Terminology .......... 3
1.2 Circuit Laws .......... 6
    1.2.1 Kirchhoff's Current Law . 1.2.2 Kirchhoff's Voltage Law
1.3 Circuit Analysis .......... 6
    1.3.1 Loop Current Method . 1.3.2 Node Voltage Method (Nodal Analysis)
1.4 Equivalent Circuits .......... 9
    1.4.1 Series Connection . 1.4.2 Parallel Connection . 1.4.3 Star–Delta (Wye–Delta or T–Pi) Transformation . 1.4.4 Thevenin Equivalent Circuit . 1.4.5 Norton Equivalent Circuit . 1.4.6 Source Transformation
1.5 Network Theorems .......... 12
    1.5.1 Superposition Theorem . 1.5.2 Maximum Power Transfer Theorem
1.6 Time Domain Analysis .......... 13
    1.6.1 First-Order Circuits . 1.6.2 Second-Order Circuits . 1.6.3 Higher Order Circuits
1.7 Laplace Transform .......... 16
    1.7.1 Definition . 1.7.2 Laplace Transforms of Common Functions . 1.7.3 Solution of Electrical Circuits Using the Laplace Transform . 1.7.4 Network Functions
1.8 State Variable Analysis .......... 20
    1.8.1 State Variables for Electrical Circuits . 1.8.2 Matrix Representation of State Variable Equations . 1.8.3 Solution of State Variable Equations
1.9 Alternating Current Steady State Analysis .......... 22
    1.9.1 Sinusoidal Voltages and Currents . 1.9.2 Complex Exponential Function . 1.9.3 Phasors in Alternating Current Circuit Analysis . 1.9.4 Phasor Diagrams . 1.9.5 Phasor Voltage–Current Relationships of Circuit Elements . 1.9.6 Impedances and Admittances in Alternating Current Circuits . 1.9.7 Series Impedances and Parallel Admittances . 1.9.8 Alternating Current Circuit Analysis . 1.9.9 Steps in the Analysis of Phasor Circuits . 1.9.10 Methods of Alternating Current Circuit Analysis . 1.9.11 Frequency Response Characteristics . 1.9.12 Bode Diagrams
1.10 Alternating Current Steady State Power .......... 26
    1.10.1 Power and Energy . 1.10.2 Power in Electrical Circuits . 1.10.3 Power Calculations in AC Circuits
Copyright © 2004 by Academic Press. All rights of reproduction in any form reserved.

1.1 Definitions and Terminology

An electric charge is a physical property of the electrons and protons in the atoms of matter that gives rise to forces between atoms. Charge is measured in coulombs [C]. The charge of a proton is arbitrarily chosen as positive and has the value 1.602 × 10^−19 C, whereas the charge of an electron is chosen as negative with a value of −1.602 × 10^−19 C. Like charges repel while unlike charges attract each other. Electric charges obey the principle of conservation (i.e., charges can be neither created nor destroyed). A current is a flow of electric charge measured by its flow rate in coulombs per second, with the unit of ampere [A]. An ampere is defined as the flow of charge at the rate of one coulomb per second (1 A = 1 C/s). In other words, the current i(t) through a cross section at time t is given by dq/dt, where
q(t) is the charge that has flowed through the cross section up to time t:

i(t) = dq(t)/dt [A]. (1.1)
Knowing i, the total charge Q transferred during the time from t1 to t2 can be calculated as:

Q = ∫_{t1}^{t2} i dt [C]. (1.2)
The voltage or potential difference VAB between two points A and B is the amount of energy required to move a unit positive charge from B to A. If this energy is positive, that is, if work is done by external sources against forces on the charges, then VAB is positive and point A is at a higher potential with respect to B. Voltage is measured in volts [V]. The voltage between two points is 1 V if 1 J (joule) of work is required to move 1 C of charge. If the voltage v between two points is constant, then the work w done in moving q coulombs of charge between the two points is given by:

w = vq [J]. (1.3)
Power (p) is the rate of doing work, or the energy flow rate. When a charge of dq coulombs is moved from point A to point B with a potential difference of v volts, the energy supplied to the charge is v dq joules [J]. If this movement takes place in dt seconds, the power supplied to the charge is v dq/dt watts [W]. Because dq/dt is the charge flow rate defined earlier as the current i, the power supplied to the charge can be written as:

p = vi [W]. (1.4)

The energy supplied over the duration t1 to t2 is then given by:

w = ∫_{t1}^{t2} vi dt [J]. (1.5)

A lumped electrical element is a model of an electrical device with two or more terminals through which current can flow in or out; the flow can pass only through the terminals. In a two-terminal element, current flows through the element, entering via one terminal and leaving via the other. The voltage, on the other hand, is present across the element and is measured between the two terminals. In a multiterminal element, current flows in through one set of terminals and leaves through the remaining set. The relation between the voltage and current in an element, known as the v–i relation, defines the element's characteristic. A circuit is made up of electrical elements.

An element's v–i relation is linear if it satisfies the homogeneity property and the superposition principle. The homogeneity property refers to proportionality: if i gives a voltage of v, then ki gives a voltage of kv for any arbitrary constant k. The superposition principle implies additivity: if i1 gives a voltage of v1 and i2 gives a voltage of v2, then i1 + i2 gives a voltage of v1 + v2. It is easily verified that v = Ri and v = L di/dt are linear relations. Elements that possess such linear relations are called linear elements, and a circuit made up of linear elements is called a linear circuit.

Sources, also known as active elements, are electrical elements that provide power to a circuit. There are two types of sources: (1) independent sources and (2) dependent (or controlled) sources. An independent voltage source provides a specified voltage irrespective of the elements connected to it. In a similar manner, an independent current source provides a specified current irrespective of the elements connected to it. Figure 1.1 shows representations of independent voltage and independent current sources. It may be noted that the value of an independent voltage or current source may be constant in magnitude and direction (called a direct current [dc] source) or may vary as a function of time (called a time-varying source). If the variation is sinusoidal in nature, the source is called an alternating current (ac) source. Values of dependent sources depend on the voltage or current of some other element or elements in the circuit. There are four classes of dependent sources: (1) voltage-controlled voltage source, (2) current-controlled voltage source, (3) voltage-controlled current source, and (4) current-controlled current source. The representations of these dependent sources are shown in Table 1.1.

[FIGURE 1.1 Independent Voltage and Current Sources: (A) general voltage source; (B) dc voltage source; (C) ac voltage source; (D) general current source]

Passive elements consume power. Names, symbols, and characteristics of some commonly used passive elements are given in Table 1.2. The v–i relation of a linear resistor, v = Ri,
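Equations (1.2) and (1.5) are straightforward to evaluate numerically. The sketch below integrates a hypothetical constant current and voltage with the trapezoidal rule; the 2-A, 3-V, 4-s values are illustrative only, not from the text.

```python
# Illustration of equations (1.2) and (1.5): Q = integral of i dt and
# w = integral of v*i dt, for an assumed constant i(t) = 2 A, v(t) = 3 V
# over 0..4 s.
dt = 1e-3
ts = [k * dt for k in range(4001)]      # 0 to 4 s in 1-ms steps
i_of_t = [2.0 for _ in ts]              # current samples
v_of_t = [3.0 for _ in ts]              # voltage samples

def trapz(ys, dt):
    """Trapezoidal-rule integral of uniformly sampled values."""
    return dt * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

q = trapz(i_of_t, dt)                                      # total charge
w = trapz([v * i for v, i in zip(v_of_t, i_of_t)], dt)     # total energy
print(round(q, 6), round(w, 6))   # 8.0 24.0
```

For constant values the integrals reduce to Q = 2 A × 4 s = 8 C and w = 6 W × 4 s = 24 J, which the numerical result reproduces.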
TABLE 1.1 Dependent Sources and Their Representation

Element                            | Voltage and current relation
Voltage-controlled voltage source  | v2 = a v1, where a is the voltage gain
Voltage-controlled current source  | i2 = gt v1, where gt is the transfer conductance
Current-controlled voltage source  | v2 = rt i1, where rt is the transfer resistance
Current-controlled current source  | i2 = b i1, where b is the current gain
TABLE 1.2 Some Passive Elements and Their Characteristics

Name of the element    | The v–i relation                                           | Unit
Resistance: R          | v = Ri                                                     | ohm [Ω]
Inductance: L          | v = L di/dt                                                | henry [H]
Capacitance: C         | i = C dv/dt                                                | farad [F]
Mutual inductance: M   | v1 = M di2/dt + L1 di1/dt; v2 = M di1/dt + L2 di2/dt       | henry [H]
is known as Ohm's law, and the linear relations of other passive elements are sometimes called generalized Ohm's laws. It may be noted that in a passive element, the polarity of the voltage is such that current flows from the positive to the negative terminal. This polarity marking is said to follow the passive sign convention.

A circuit is formed by an interconnection of circuit elements at their terminals. A node is a junction point where the terminals of two or more elements are joined. Figure 1.2 shows A, B, C, D, and E as nodes. A loop is a closed path in a circuit such that each node is traversed only once when tracing the loop. In Figure 1.2, ABCEA is a loop, and ABCDEA is also a loop. A mesh is a special class of loop that is associated with a window of a circuit drawn in a plane (planar circuit). In the same figure, ABCEA is a mesh, whereas ABCDEA is not considered a mesh for the circuit as drawn. A network is defined as a circuit that has a set of terminals available for external connections (i.e., accessible from outside of the circuit). A pair of terminals of a network to which a source, another network, or a measuring device can be connected is called a port of the network. A network containing such a pair of terminals is called a one-port network. A network containing two pairs of externally accessible terminals is called a two-port network, and one with multiple pairs of externally accessible terminals is called a multiport network.

[FIGURE 1.2 Example Circuit Diagram]

1.2 Circuit Laws

Two important laws are based on the physical properties of electric charges, and these laws form the foundation of circuit analysis. They are Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL). While Kirchhoff's current law is based on the principle of conservation of electric charge, Kirchhoff's voltage law is based on the principle of energy conservation.

1.2.1 Kirchhoff's Current Law

At any instant, the algebraic sum of the currents (i) entering a node in a circuit is equal to zero. In the circuit in Figure 1.2, application of KCL at node C yields:

i1 + i2 + i3 = 0. (1.6)

Similarly, at node D, KCL yields:

i4 − i3 − i5 = 0. (1.7)

1.2.2 Kirchhoff's Voltage Law

At any instant, the algebraic sum of the voltages (v) around a loop is equal to zero. In going around a loop, a useful convention is to take a voltage drop (going from positive to negative) as positive and a voltage rise (going from negative to positive) as negative. In Figure 1.2, application of KVL around the loop ABCEA gives:

vAB + vBC + vCE + vEA = 0. (1.8)

1.3 Circuit Analysis

Analysis of an electrical circuit involves the determination of voltages and currents in its various elements, given the element values and their interconnections. In a linear circuit, the v–i relations of the circuit elements and the equations generated by applying KCL at the nodes and KVL around the loops provide a sufficient number of simultaneous linear equations that can be solved for the unknown voltages and currents. The steps involved in the analysis of linear circuits are as follows:

1. For all the elements except the current sources, assign a current variable with arbitrary polarity. For the current sources, the current values and polarities are given.
2. For all elements except the voltage sources, assign a voltage variable with polarities based on the passive sign convention. For voltage sources, the voltages and their polarities are known.
3. Write KCL equations at N − 1 nodes, where N is the total number of nodes in the circuit.
4. Write expressions for the voltage variables of the passive elements using their v–i relations.
5. Apply KVL for E − N + 1 independent loops, where E is the number of elements in the circuit. In the case of planar circuits, which can be drawn on a plane without edges crossing over one another, the meshes form a set of independent loops. For nonplanar circuits, use special methods that employ topological techniques to find independent loops.
6. Solve the 2E equations to find the E currents and E voltages.

The following example illustrates the application of these steps.

Example 1.1. For the circuit in Figure 1.3, determine the voltages across the various elements. Following step 1, assign the currents I1, I2, I3, and I4 to the elements. Then apply KCL to the nodes A, B, and C to get I4 − I1 = 0, I1 − I2 = 0, and I2 − I3 = 0. Solving these equations produces I1 = I2 = I3 = I4. Applying the v–i relation characteristics of the nonsource elements, you get VAB = 2 I1, VBC = 3 I2, and VCD = 5 I3.
[FIGURE 1.3 Circuit for Example 1.1]
[FIGURE 1.4 Circuit for Example 1.2]

Applying
the KVL to the loop ABCDA, you determine VAB + VBC + VCD + VDA = 0. Substituting for the voltages in terms of currents, you get 2 I1 + 3 I1 + 5 I1 − 12 = 0, which simplifies to 10 I1 = 12, so I1 = 1.2 A. The end results are VAB = 2.4 V, VBC = 3.6 V, and VCD = 6.0 V.

In the above circuit analysis method, 2E equations are first set up and then solved simultaneously. For large circuits, this process can become very cumbersome. Techniques exist to reduce the number of unknowns that must be solved for simultaneously. The two most commonly used methods are the loop current method and the node voltage method.
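Example 1.1 reduces to a single series loop, so its numbers are easy to verify. A minimal Python sketch (values taken from the example):

```python
# Example 1.1: a 12-V source in series with 2-, 3-, and 5-ohm resistors.
# KVL collapses to (2 + 3 + 5) * I1 = 12.
resistances = [2.0, 3.0, 5.0]   # ohms, for elements A-B, B-C, C-D
vs = 12.0                       # source voltage in volts

i1 = vs / sum(resistances)                        # loop current, amperes
drops = [round(r * i1, 10) for r in resistances]  # VAB, VBC, VCD

print(i1)      # 1.2
print(drops)   # [2.4, 3.6, 6.0]
```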
1.3.1 Loop Current Method

In this method, one distinct current variable is assigned to each independent loop. The element currents are then expressed in terms of the loop currents. Using the element currents and values, the element voltages are calculated. Kirchhoff's voltage law is then applied to each of the loops, and the resulting equations are solved for the loop currents. Using the loop currents, the element currents and voltages are then determined. Thus, in this method, the number of simultaneous equations to be solved is equal to the number of independent loops. As noted above, it can be shown that this number is equal to E − N + 1. It may be noted that in the case of planar circuits, the meshes can be chosen as the independent loops. Example 1.2 illustrates the technique.

Example 1.2. In the circuit in Figure 1.4, find the voltage across the 3-Ω resistor. First, note that there are two independent loops, which are the two meshes in the circuit, and that loop currents I1 and I2 are assigned as shown in the diagram. Then calculate the element currents as IAB = I1, IBC = I2, ICD = I2, IBD = I1 − I2, and IDA = I1. Calculate the element voltages as VAB = 2 IAB = 2 I1, VBC = 1 IBC = 1 I2, VCD = 4 I2, and VBD = 3 IBD = 3(I1 − I2). Applying KVL to loops 1 (ABDA) and 2 (BCDB) and substituting the voltages in terms of loop currents results in:
5 I1 − 3 I2 = 12
−3 I1 + 8 I2 = 0.

Solving the two equations, you get I1 = 96/31 A and I2 = 36/31 A. The voltage across the 3-Ω resistor is 3(I1 − I2) = 3(96/31 − 36/31) = 180/31 V.

Special case 1: When one of the elements in a loop is a current source, the voltage across it cannot be written using the v–i relation of the element. In this case, the voltage across the current source should be treated as an unknown variable to be determined. If a current source is present in only one loop and is not common to more than one loop, then the current of the loop in which the current source is present is equal to the value of the current source and hence is known. To determine the remaining currents, there is no need to write the KVL equation for the current source loop. However, to determine the voltage across the current source, a KVL equation for the current source loop needs to be written. This is illustrated in Example 1.3.

Example 1.3. Analyze the circuit shown in Figure 1.5 to find the voltage across the current source. The loop currents are assigned as shown. It is easily seen that I3 = −2 A. Writing KVL equations for loops 1 and 2, you get:

Loop 1: 2(I1 − I2) + 4(I1 − I3) − 14 = 0 => 6 I1 − 2 I2 = 6.
Loop 2: I2 + 3(I2 − I3) + 2(I2 − I1) = 0 => −2 I1 + 6 I2 = −6.

Solving the two equations simultaneously, you get I1 = 3/4 A and I2 = −3/4 A. To find the voltage VCD across the current source, write the KVL equation for loop 3 as:

4(I3 − I1) + 3(I3 − I2) + VCD = 0 => VCD = 4 I1 + 3 I2 − 7 I3 = 14.75 V.
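The two loop equations of Example 1.3 form a small linear system; a sketch using numpy, with the coefficients copied from the example:

```python
import numpy as np

# Loop equations from Example 1.3, with I3 fixed at -2 A by the current source:
#    6*I1 - 2*I2 =  6
#   -2*I1 + 6*I2 = -6
A = np.array([[6.0, -2.0],
              [-2.0, 6.0]])
b = np.array([6.0, -6.0])
i1, i2 = np.linalg.solve(A, b)
i3 = -2.0

# Voltage across the current source from the loop-3 KVL equation.
vcd = 4 * i1 + 3 * i2 - 7 * i3
print(round(i1, 2), round(i2, 2), round(vcd, 2))   # 0.75 -0.75 14.75
```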
[FIGURE 1.5 Circuit for Example 1.3]
[FIGURE 1.6 Circuit for Example 1.4]
[FIGURE 1.7 Circuit in Figure 1.6 with the Super Loop Shown as a Dotted Line]
Special case 2: This case concerns a current source that is common to more than one loop. The solution to this case is illustrated in Example 1.4.

Example 1.4. In the circuit shown in Figure 1.6, the 2-A current source is common to loops 1 and 2. One method of writing KVL equations is to treat VBE as an unknown and write three KVL equations. In addition, you can write the current of the current source as I2 − I1 = 2, giving a fourth equation. Solving the four equations simultaneously, you determine the values of I1, I2, I3, and VBE. The equations are the following:

Loop 1: 2(I1 − I3) + VBE + 2 I1 − 12 = 0 => 4 I1 − 2 I3 + VBE = 12.
Loop 2: 3(I2 − I3) + 4 I2 + I2 − VBE = 0 => 8 I2 − 3 I3 − VBE = 0.
Loop 3: I3 + 3(I3 − I2) + 2(I3 − I1) = 0 => −2 I1 − 3 I2 + 6 I3 = 0.
Current source relation: −I1 + I2 = 2.

Solving the above four equations results in I1 = 0.13 A, I2 = 2.13 A, I3 = 1.11 A, and VBE = 13.70 V.

Alternative method for special case 2 (super loop method): This method eliminates the need to add the voltage variable as an unknown. When a current source is common to loops 1 and 2, KVL is applied around a new loop called the super loop, obtained by combining loops 1 and 2 (after deleting the common elements), as shown in Figure 1.7. For the circuit considered in Example 1.4, the loop ABCDEFA is the super loop obtained by combining loops 1 and 2. KVL is applied around this super loop instead of around loops 1 and 2 separately. The KVL equation for super loop ABCDEFA is:

2(I1 − I3) + 3(I2 − I3) + 4 I2 + I2 + 2 I1 − 12 = 0 => 4 I1 + 8 I2 − 5 I3 = 12.

The KVL equation around loop 3 is:

−2 I1 − 3 I2 + 6 I3 = 0.

The current source constraint is:

−I1 + I2 = 2.

Solving the above three equations simultaneously produces I1 = 0.13 A, I2 = 2.13 A, and I3 = 1.11 A.
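The three super-loop equations can likewise be solved with numpy; the matrix below is assembled from the equations of Example 1.4:

```python
import numpy as np

# Super-loop formulation of Example 1.4: three equations in I1, I2, I3.
#    4*I1 + 8*I2 - 5*I3 = 12   (KVL around the super loop ABCDEFA)
#   -2*I1 - 3*I2 + 6*I3 = 0    (KVL around loop 3)
#     -I1 +   I2        = 2    (current-source constraint)
A = np.array([[4.0, 8.0, -5.0],
              [-2.0, -3.0, 6.0],
              [-1.0, 1.0, 0.0]])
b = np.array([12.0, 0.0, 2.0])
i1, i2, i3 = np.linalg.solve(A, b)
print(round(i1, 2), round(i2, 2), round(i3, 2))   # 0.13 2.13 1.11
```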
1.3.2 Node Voltage Method (Nodal Analysis) In this method, one node is chosen as the reference node whose voltage is assumed as zero, and the voltages of other nodes are expressed with respect to the reference node. For example, in Figure 1.8, the voltage of node G is chosen as the
[FIGURE 1.8 Circuit with Node Voltages Marked]
reference node, and the voltage of node A is VA = VAG, that of node B is VB = VBG, and so on. Then, for every element between two nodes, the element voltage may be expressed as the difference between the two node voltages. For example, the voltage of element RAB is VAB = VA − VB. Similarly, VBC = VB − VC, and so on. The current through the element RAB can then be determined using the v–i characteristic of the element as IAB = VAB/RAB. Once the currents of all elements are known in terms of node voltages, KCL is applied at each node except the reference node, giving a total of N − 1 equations, where N is the total number of nodes.

Special case 1: In branches where voltage sources are present, the v–i relation cannot be used to find the current. Instead, the current is left as an unknown. Because the voltage of the element is known, another equation can be used to solve for the added unknown. When the element is a current source, the current through the element is known, and there is no need to use the v–i relation. The calculation is illustrated in the following example.

Example 1.5. In Figure 1.9, solve for the voltages VA, VB, and VC with respect to the reference node G. At node A, VA = 12. At node B, KCL yields:

IBA + IBG + IBC = 0 => (VB − VA)/1 + VB/4 + (VB − VC)/5 = 0 => −VA + (1 + 1/4 + 1/5) VB − VC/5 = 0.

Similarly, at node C, KCL yields:

−VA/2 − VB/5 + (1/5 + 1/2) VC = 2.

Solving the above three equations simultaneously results in VA = 12 V, VB = 10.26 V, and VC = 14.36 V.

[FIGURE 1.9 Circuit for Example 1.5]

Super node: When a voltage source is present between two nonreference nodes, a super node may be used to avoid introducing an unknown variable for the current through the voltage source. Instead of applying KCL to each of the two nodes of the voltage source element, KCL is applied to an imaginary node consisting of both nodes together. This imaginary node is called a super node. In Figure 1.10, the super node is shown by a dotted closed shape. KCL on this super node gives:

IBA + IBG + ICG + ICA = 0 => (VB − VA)/1 + VB/3 + VC/4 + (VC − VA)/2 = 0.

In addition to this equation, the two voltage constraint equations, VA = 10 and VB − VC = 5, are used to solve for VB and VC as VB = 9 V and VC = 4 V.
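The nodal equations of Example 1.5 can be solved the same way; a numpy sketch with VA pinned to 12 V by the voltage source:

```python
import numpy as np

# Node equations from Example 1.5 (reference node G):
#    VA = 12                                 (voltage source)
#   -VA + (1 + 1/4 + 1/5)*VB - VC/5 = 0      (KCL at node B)
#   -VA/2 - VB/5 + (1/5 + 1/2)*VC  = 2       (KCL at node C)
A = np.array([[1.0, 0.0, 0.0],
              [-1.0, 1 + 1/4 + 1/5, -1/5],
              [-1/2, -1/5, 1/5 + 1/2]])
b = np.array([12.0, 0.0, 2.0])
va, vb, vc = np.linalg.solve(A, b)
print(round(va, 2), round(vb, 2), round(vc, 2))   # 12.0 10.26 14.36
```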
1.4 Equivalent Circuits Two linear circuits, say circuit 1 and circuit 2, are said to be equivalent across a specified set of terminals if the voltage– current relations for the two circuits across the specified terminals are identical. Now consider a composite circuit
[FIGURE 1.10 Circuit with Super Node]
[FIGURE 1.11 Equivalent Circuit Application: (A) composite circuit with circuit 1; (B) composite circuit with circuit 2]
[FIGURE 1.12 Resistances Connected in Series: (A) N resistors connected in series; (B) equivalent circuit]
consisting of circuit 1 connected to another circuit, circuit 3, at the specified terminals, as shown in Figure 1.11(A). The voltages and currents in circuit 3 are not altered if circuit 2 replaces circuit 1, as shown in Figure 1.11(B). If circuit 2 is simpler than circuit 1, then the analysis of the composite circuit is simplified. A number of techniques for obtaining two-terminal equivalent circuits are outlined in the following sections.
1.4.1 Series Connection

Two two-terminal elements are said to be connected in series if the connection is such that the same current flows through both elements, as shown in Figure 1.12. When two resistances R1 and R2 are connected in series, they can be replaced by a single element having an equivalent resistance equal to the sum of the two resistances, Req = R1 + R2, without affecting the voltages and currents in the rest of the circuit. In a similar manner, if N resistances R1, R2, ..., RN are connected in series, their equivalent resistance is given by:

Req = R1 + R2 + ... + RN. (1.9)

Voltage division: When a voltage VT is present across N resistors connected in series, the total voltage divides across the resistors in proportion to their resistance values. Thus:

V1 = VT R1/Req, V2 = VT R2/Req, ..., VN = VT RN/Req, (1.10)

where Req = R1 + R2 + ... + RN.
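Equation (1.10) translates directly into a small helper function; a sketch (the 12-V source and 2/3/5-Ω values are illustrative):

```python
# Voltage division across series resistors, per equation (1.10).
def voltage_division(v_total, resistances):
    """Return the voltage across each resistor in a series string."""
    r_eq = sum(resistances)
    return [v_total * r / r_eq for r in resistances]

print(voltage_division(12.0, [2.0, 3.0, 5.0]))   # [2.4, 3.6, 6.0]
```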
1.4.2 Parallel Connection

Two-terminal elements are said to be connected in parallel if the same voltage exists across all the elements and if they share two distinct common nodes, as shown in Figure 1.13. In the case of a parallel connection, conductances, which are the reciprocals of resistances, add to give an equivalent conductance Geq:

Geq = G1 + G2 + ... + GN, (1.11)

or equivalently:

1/Req = 1/R1 + 1/R2 + ... + 1/RN. (1.12)

Current division: In a parallel connection, the total current IT of the parallel combination divides in proportion to the conductance of each element. That is, the current in each element is proportional to its conductance and is given by:

I1 = IT G1/Geq, I2 = IT G2/Geq, ..., IN = IT GN/Geq, (1.13)

where Geq = G1 + G2 + ... + GN.

1.4.3 Star–Delta (Wye–Delta or T–Pi) Transformation

It can be shown that the star subnetwork connected as shown in Figure 1.14 can be converted into an equivalent delta subnetwork. The element values of the two subnetworks are related as shown in Table 1.3. It should be noted that the star subnetwork has four nodes, whereas the delta subnetwork has only three nodes. Hence, the star subnetwork can be replaced in a circuit without affecting the voltages and currents in the rest

TABLE 1.3 Relations Between the Element Values in Star and Delta Equivalent Circuits

Star in terms of delta resistances   | Delta in terms of star resistances
R1 = Rb Rc / (Ra + Rb + Rc)          | Ra = (R1 R2 + R2 R3 + R3 R1) / R1
R2 = Rc Ra / (Ra + Rb + Rc)          | Rb = (R1 R2 + R2 R3 + R3 R1) / R2
R3 = Ra Rb / (Ra + Rb + Rc)          | Rc = (R1 R2 + R2 R3 + R3 R1) / R3
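The formulas of Table 1.3 can be sketched as a pair of functions; a round trip star → delta → star should recover the original values (the 1/2/3-Ω star is illustrative):

```python
# Star-delta conversion formulas from Table 1.3.
def star_to_delta(r1, r2, r3):
    p = r1 * r2 + r2 * r3 + r3 * r1
    return p / r1, p / r2, p / r3            # Ra, Rb, Rc

def delta_to_star(ra, rb, rc):
    s = ra + rb + rc
    return rb * rc / s, rc * ra / s, ra * rb / s   # R1, R2, R3

# Round trip: star -> delta -> star recovers the original resistances.
ra, rb, rc = star_to_delta(1.0, 2.0, 3.0)
print(tuple(round(x, 9) for x in delta_to_star(ra, rb, rc)))   # (1.0, 2.0, 3.0)
```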
[FIGURE 1.13 Resistances Connected in Parallel: (A) N resistors connected in parallel; (B) equivalent circuit]
[FIGURE 1.14 Star and Delta Equivalent Circuits: (A) star-connected circuit; (B) delta-connected circuit]
of the circuit only if the central node in the star subnetwork is not connected to any other circuit node.
1.4.4 Thevenin Equivalent Circuit

A network consisting of linear resistors and dependent and independent sources with a pair of accessible terminals can be represented by an equivalent circuit with a voltage source and a series resistance, as shown in Figure 1.15. VTH is equal to the open-circuit voltage across the two terminals A and B, and RTH is the resistance measured across nodes A and B (also called the looking-in resistance) when the independent sources in the network are deactivated. RTH can also be determined as RTH = Voc/Isc, where Voc is the open-circuit voltage across terminals A and B and Isc is the short-circuit current that will flow from A to B through an external zero-resistance connection (short circuit) if one is made.
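The RTH = Voc/Isc recipe can be illustrated on a hypothetical one-port: a 10-V source with a 2-Ω series resistor feeding a 3-Ω resistor to the lower terminal, with the port taken across the 3-Ω resistor (these values are assumptions for illustration, not from the text):

```python
# Thevenin parameters of an assumed one-port via Voc/Isc.
vs, r1, r2 = 10.0, 2.0, 3.0   # source, series resistor, shunt resistor (ohms)

voc = vs * r2 / (r1 + r2)     # open-circuit voltage: divider rule
isc = vs / r1                 # shorting the port bypasses r2 entirely
rth = voc / isc               # equals r1 in parallel with r2
print(voc, rth)               # 6.0 1.2
```

Note that rth matches the deactivated-source calculation: with the 10-V source replaced by a short, the port sees 2 Ω in parallel with 3 Ω, i.e., 1.2 Ω.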
1.4.5 Norton Equivalent Circuit

A two-terminal network consisting of linear resistors and independent and dependent sources can be represented by an equivalent circuit with a current source and a parallel resistor, as shown in Figure 1.16. In this figure, IN is equal to the short-circuit current across terminals A and B, and RN is the looking-in resistance measured across A and B after the independent sources are deactivated. The following relation holds between the Thevenin and Norton equivalent circuit parameters:

RN = RTH and IN = VTH/RTH. (1.14)

1.4.6 Source Transformation

Using a Norton equivalent circuit, a voltage source with a series resistor can be converted into an equivalent current source with a parallel resistor. In a similar manner, using the Thevenin theorem, a current source with a parallel resistor can be represented by a voltage source with a series resistor. These conversions are called source transformations. The two sources in Figure 1.17 are equivalent between nodes B and C.

1.5 Network Theorems

A number of theorems that simplify the analysis of linear circuits have been proposed. The following section presents, without proof, two such theorems: the superposition theorem and the maximum power transfer theorem.

1.5.1 Superposition Theorem

For a circuit consisting of linear elements and sources, the response (voltage or current) in any element of the circuit is the algebraic sum of the responses in that element obtained by applying one independent source at a time. When one independent source is applied, all other independent sources are deactivated. It may be noted that a deactivated voltage source behaves as a short circuit, whereas a deactivated current source behaves as an open circuit. It should also be noted that the dependent sources in the circuit are not deactivated. Further, any initial condition in the circuit is treated as an appropriate independent source. That is, an initially charged
[FIGURE 1.15 Thevenin Equivalent Circuit: (A) linear network with two terminals; (B) equivalent circuit across the terminals]
[FIGURE 1.16 Norton Equivalent Circuit: (A) linear network with two terminals; (B) equivalent circuit across AB]
[FIGURE 1.17 Example of Source Transformation]
capacitor is replaced by an uncharged capacitor in series with an independent voltage source. Similarly, an inductor with an initial current is replaced by an inductor without any initial current in parallel with an independent current source. The following example illustrates the application of superposition in the analysis of linear circuits.

Example 1.6. For the circuit in Figure 1.18(A), determine the voltage across the 3-Ω resistor. The circuit has two independent sources: one voltage source and one current source. Figure 1.18(B) shows the circuit when the voltage source is activated and the current source is deactivated (replaced by an open circuit). Let V31 be the voltage across the 3-Ω resistor in this circuit. Figure 1.18(C) shows the circuit when the current source is activated and the voltage source is deactivated (replaced by a short circuit). Let V32 be the voltage across the 3-Ω resistor in this circuit. The voltage across the 3-Ω resistor in the given complete circuit is then V3 = V31 + V32.
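Superposition is simply the linearity of the circuit equations. The sketch below demonstrates this on a hypothetical nodal system G v = b; the conductance matrix and source-injection vectors are illustrative, not the Figure 1.18 circuit:

```python
import numpy as np

# Superposition on an assumed linear resistive circuit: because the nodal
# equations G v = b are linear, the response with both sources applied
# equals the sum of the responses with each source applied alone.
G = np.array([[1.5, -0.5],
              [-0.5, 0.95]])               # example nodal conductance matrix (S)
b_voltage_src = np.array([12.0, 0.0])      # injection due to the voltage source
b_current_src = np.array([0.0, 2.0])       # injection due to the current source

v_both = np.linalg.solve(G, b_voltage_src + b_current_src)
v_sum = np.linalg.solve(G, b_voltage_src) + np.linalg.solve(G, b_current_src)
print(np.allclose(v_both, v_sum))          # True
```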
1.5.2 Maximum Power Transfer Theorem In the circuit shown in Fig. 1.19, power supplied to the load is maximum when the load resistance is equal to the source resistance. It may be noted that the application of the maximum power transfer theorem is not restricted to simple circuits only. The theorem can also be applied to complicated circuits as long as the circuit is linear and there is one variable load. In such cases, the complicated circuit across the variable load is replaced by its equivalent Thevenin circuit. The maximum power transfer theorem is then applied to find the load resistance that leads to maximum power in the load.
1.6 Time Domain Analysis When a circuit contains energy storing elements, namely inductors and capacitors, the analysis of the circuit involves the solution of a differential equation.
1.6.1 First-Order Circuits

A circuit with a single energy-storing element yields a first-order differential equation, as shown below for the circuits in Figure 1.20. Consider the RC circuit in Figure 1.20(A). For t > 0, writing KVL around the loop gives:

Ri + vc(0) + (1/C) ∫_0^t i dt = vs. (1.15)
Differentiating with respect to t yields:

R di/dt + i/C = 0. (1.16)
The solution of the above homogeneous differential equation is:

i(t) = K e^(−t/RC). (1.17)
The value of K can be found by using the initial voltage vc(0) on the capacitor:

K = i(0) = (vs − vc(0))/R. (1.18)
Substituting this value in the expression for i(t) gives the final solution:

i(t) = ((vs − vc(0))/R) e^(−t/RC). (1.19)
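Equation (1.19) can be checked numerically; a sketch with hypothetical values R = 1 kΩ, C = 1 μF, vs = 5 V, and vc(0) = 0:

```python
import math

# RC charging current of equation (1.19):
#   i(t) = (vs - vc0)/R * exp(-t/(R*C)), with assumed component values.
R, C, vs, vc0 = 1e3, 1e-6, 5.0, 0.0   # 1 kohm, 1 uF, 5 V, uncharged capacitor

def i(t):
    return (vs - vc0) / R * math.exp(-t / (R * C))

tau = R * C                 # time constant, 1 ms
print(i(0))                 # 0.005  (initial current vs/R)
print(i(tau) / i(0))        # ~0.3679: one time constant leaves e^-1 of the start
```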
This exponentially decreasing response i(t) is shown in Figure 1.21(A); it has a time constant of τ = RC. The voltage vc(t) is shown in Figure 1.21(B). In a similar manner, the differential equation for i(t) in the RL circuit shown in Figure 1.20(B) can be obtained for t > 0 as:
P.K. Rajan and Arun Sekar

FIGURE 1.18 Circuits for Example 1.6: (A) Original Circuit; (B) Circuit When Voltage Source Is Activated and Current Source Is Deactivated; (C) Circuit When Current Source Is Activated and Voltage Source Is Deactivated
Ri + L di/dt = vs.    (1.20)

FIGURE 1.19 Circuit with a Variable Load Excited by a Thevenin Source

Because this is a nonhomogeneous differential equation, its solution consists of two parts:

i(t) = in(t) + if(t),    (1.21)

where in(t), the natural response (also called the complementary function) of the circuit, is the solution of the homogeneous differential equation:

L din/dt + R in = 0.    (1.22)
The forced response (also called the particular integral) of the circuit, if(t), is given by the solution of the nonhomogeneous differential equation corresponding to the particular forcing function vs.

FIGURE 1.20 Circuits with a Single Energy-Storing Element: (A) RC Circuit; (B) RL Circuit

FIGURE 1.21 Response of the Circuit in Figure 1.20(A): (A) i(t); (B) vC(t)

If vs is a constant, the forced response in general is also a constant. In this case, the natural and forced responses and the total response are given by:

in(t) = K e^(−(R/L)t),  if(t) = vs/R,  and  i(t) = K e^(−(R/L)t) + vs/R.    (1.23)
K is found using the initial condition in the inductor, i(0) = I0, as i(0) = K + vs/R, so K = I0 − vs/R. Substituting for K in the total response yields:

i(t) = (I0 − vs/R) e^(−(R/L)t) + vs/R.    (1.24)

FIGURE 1.22 Response of the Circuit Shown in Figure 1.20(B)
The current waveform, shown in Figure 1.22, has an exponential characteristic with a time constant of L/R [s].
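The closed-form responses of equations 1.19 and 1.24 can be evaluated directly. A minimal sketch, with illustrative component values not taken from the text:

```python
import math

# First-order transients evaluated from the closed-form solutions.
# Component values are illustrative.

def rc_current(t, R, C, vs, vc0):
    # i(t) = ((vs - vc(0))/R) * exp(-t/(R C))        (equation 1.19)
    return (vs - vc0) / R * math.exp(-t / (R * C))

def rl_current(t, R, L, vs, i0):
    # i(t) = (I0 - vs/R) exp(-(R/L) t) + vs/R        (equation 1.24)
    return (i0 - vs / R) * math.exp(-(R / L) * t) + vs / R

R, C, L, vs = 1e3, 1e-6, 0.5, 5.0
tau_rc, tau_rl = R * C, L / R        # time constants RC and L/R

# After one time constant, the decaying part shrinks by a factor exp(-1).
assert abs(rc_current(tau_rc, R, C, vs, 0.0) - (vs / R) * math.exp(-1)) < 1e-12

# For t much larger than L/R, the RL current settles to vs/R.
assert abs(rl_current(20 * tau_rl, R, L, vs, 0.0) - vs / R) < 1e-6
```

The two assertions restate the textbook facts: the RC current decays with time constant RC, and the RL current approaches its forced value vs/R with time constant L/R.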
1.6.2 Second-Order Circuits
If the circuit contains two energy-storing elements, L and/or C, the equation connecting a voltage or current in the circuit is a second-order differential equation. Consider, for example, the circuit shown in Figure 1.23. Writing KVL around the loop and substituting i = C dvC/dt results in:
FIGURE 1.23 Circuit with Two Energy-Storing Elements

LC d²vC/dt² + RC dvC/dt + vC = vs.    (1.25)

This equation can be solved using either the Laplace transform or a conventional technique. This section illustrates the conventional technique. Assuming a solution of the form vn(t) = K e^(st) for the homogeneous equation yields the characteristic equation:

LC s² + RC s + 1 = 0.    (1.26)

Solving for s results in the characteristic roots:

s1,2 = −R/2L ± √((R/2L)² − 1/LC).    (1.27)

Four cases should be considered.

Case 1: (R/2L)² > 1/LC. The result is two real negative roots s1 and s2, for which the solution is an overdamped response of the form:

vn(t) = K1 e^(s1 t) + K2 e^(s2 t).    (1.28)

Case 2: (R/2L)² = 1/LC. In this case, the result is a double root at s0 = −R/2L. The natural response is a critically damped response of the form:

vn(t) = (K1 t + K2) e^(s0 t).    (1.29)

Case 3: 0 < (R/2L)² < 1/LC. This case yields a pair of complex conjugate roots:

s1,2 = −R/2L ± j√(1/LC − (R/2L)²) = −σ ± jωd.    (1.30)

The corresponding natural response is an underdamped oscillatory response of the form:

vn(t) = K e^(−σt) cos(ωd t + θ).    (1.31)

Case 4: R/2L = 0. In this case, a pair of imaginary roots results:

s1,2 = ±j√(1/LC) = ±jω0.    (1.32)

The corresponding natural response is an undamped oscillation of the form:

vn(t) = K cos(ω0 t + θ).    (1.33)

The forced response can be obtained as vf(t) = Vs. The total solution is obtained as:

vC(t) = vn(t) + Vs.    (1.34)

The unknown coefficients K1 and K2 in cases 1 and 2 and K and θ in cases 3 and 4 can be calculated using the initial values of the current in the inductor and the voltage across the capacitor. Typical responses for the four cases when Vs equals zero are shown in Figure 1.24. For circuits containing energy-dissipating elements, namely resistors, the natural response in general dies down to zero as t goes to infinity. The component of the response that goes to zero as time t goes to infinity is called the transient response. The forced response depends on the forcing function. When the forcing function is a constant or a sinusoidal function, the forced response continues to be present even as t goes to infinity. The component of the total response that continues to exist for all time is called the steady-state response. In the next section, computation of steady-state responses for sinusoidal forcing functions is considered.

1.6.3 Higher Order Circuits
When a circuit has more than two energy-storing elements, say n, the analysis of the circuit in general results in a differential equation of order n. The solution of such an equation follows steps similar to the second-order case. The characteristic equation will be of degree n and will have n roots. The natural response will have n exponential terms. The forced response will in general have the same shape as the forcing function. The Laplace transform is normally used to solve such higher order circuits.

1.7 Laplace Transform
In the solution of linear time-invariant differential equations, it was noted that a forcing function of the form Ki e^(st) yields an output of the form Ko e^(st), where s is a complex variable. The function e^(st) is a complex sinusoid of exponentially varying amplitude, often called a damped sinusoid. Because linear equations obey the superposition principle, the solution of a linear differential equation to any forcing function can be found by superposing solutions to component damped sinusoids if the forcing function is expressed as a sum of damped sinusoids. With this objective in mind, the Laplace transform is defined. The Laplace transform decomposes a given time function into an integral of complex damped sinusoids.

FIGURE 1.24 Typical Second-Order Circuit Responses: (A) Overdamped Response; (B) Critically Damped Response; (C) Underdamped Response; (D) Undamped Response (ω0 = 4 rad/s, period T = 1.5708 s, f = 0.6366 Hz)

1.7.1 Definition
The Laplace transform of f(t) is defined as:

F(s) = ∫₀^∞ f(t) e^(−st) dt.    (1.35)

The inverse Laplace transform is defined as:

f(t) = (1/(2πj)) ∫ from σ0 − j∞ to σ0 + j∞ of F(s) e^(st) ds.    (1.36)

F(s) is called the Laplace transform of f(t), and σ0 is included in the limits to ensure the convergence of the improper integral. Equation 1.36 shows that f(t) is expressed as a sum (integral) of infinitely many exponential functions of complex frequencies (s) with complex amplitudes (phasors) {F(s)}. The complex amplitude F(s) at any frequency s is given by the integral in equation 1.35. The Laplace transform defined by the integral extending from zero to infinity is called the single-sided Laplace transform, as opposed to the double-sided Laplace transform, whose integral extends from −∞ to +∞. Because transient response calculations start from some initial time, single-sided transforms are sufficient in the time-domain analysis of linear electric circuits. Hence, this discussion considers only single-sided Laplace transforms.

1.7.2 Laplace Transforms of Common Functions
Consider

f(t) = A e^(−at) for 0 ≤ t < ∞;    (1.37)

then:
F(s) = ∫₀^∞ A e^(−at) e^(−st) dt = [A e^(−(a+s)t)/(−(a + s))]₀^∞ = (A e^(−∞) − A e^0)/(−(a + s)) = A/(s + a).    (1.38)

In this equation, it is assumed that Re(s) > −Re(a). In the region of the complex s-plane where s satisfies the condition Re(s) > −Re(a), the integral converges; this region is called the region of convergence of F(s). When a = 0 and A = 1, the above f(t) becomes u(t), the unit step function. Substituting these values in equation 1.38, the Laplace transform of u(t) is obtained as 1/s. In a similar way, letting a = −jω, the Laplace transform of A e^(jωt) is obtained as A/(s − jω). Expressing cos(ωt) = (e^(jωt) + e^(−jωt))/2, we get the Laplace transform of A cos(ωt) as A s/(s² + ω²). In a similar way, the Laplace transform of A sin(ωt) is obtained as A ω/(s² + ω²). Transforms for some commonly occurring functions are given in Table 1.4. This table can be used for finding forward as well as inverse transforms of functions.

TABLE 1.4 Laplace Transforms of Common Functions
f(t), for t ≥ 0     | F(s)
A                   | A/s
A e^(−σt)           | A/(s + σ)
A t                 | A/s²
sin(ωt)             | ω/(s² + ω²)
cos(ωt)             | s/(s² + ω²)
e^(−σt) cos(ωt)     | (s + σ)/((s + σ)² + ω²)
e^(−σt) sin(ωt)     | ω/((s + σ)² + ω²)

As mentioned at the beginning of this section, the Laplace transform can be used to solve linear time-invariant differential equations. This is illustrated next in example 1.7.

Example 1.7. Consider the second-order differential equation and use the Laplace transform to find a solution:

d²f/dt² + 6 df/dt + 8f = 4e^(−t),    (1.39)

with initial conditions f(0) = 2 and df(0)/dt = 3. Taking the Laplace transform of both sides of the differential equation produces:

s²F(s) − sf(0) − df(0)/dt + 6(sF(s) − f(0)) + 8F(s) = 4/(s + 1).    (1.40)

Substituting the initial values, you get:

s²F(s) − 2s − 3 + 6sF(s) − 12 + 8F(s) = 4/(s + 1).    (1.41)

(s² + 6s + 8)F(s) = 2s + 15 + 4/(s + 1) = ((2s + 15)(s + 1) + 4)/(s + 1) = (2s² + 17s + 19)/(s + 1).    (1.42)

F(s) = (2s² + 17s + 19)/((s² + 6s + 8)(s + 1)) = (2s² + 17s + 19)/((s + 2)(s + 4)(s + 1)).    (1.43)

Applying partial fraction expansion, you get:

F(s) = (7/2)/(s + 2) − (17/6)/(s + 4) + (4/3)/(s + 1).    (1.44)

Taking the inverse Laplace transform using Table 1.4, you get:

f(t) = (7/2)e^(−2t) − (17/6)e^(−4t) + (4/3)e^(−t) for t > 0.    (1.45)
It may be noted that the total solution is obtained in a single step while taking the initial conditions along the way.
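Example 1.7 can be cross-checked with plain Python: the partial-fraction residues are recovered by the cover-up rule, and the resulting f(t) is tested against the differential equation by finite differences. The helper names below are ours, not from the text.

```python
import math

# Cover-up residues of F(s) = (2s^2 + 17s + 19)/((s+2)(s+4)(s+1)),
# then a numerical check that f(t) satisfies f'' + 6f' + 8f = 4 e^{-t}.

def num(s):
    return 2 * s * s + 17 * s + 19

def residue(pole, other_poles):
    d = 1.0
    for p in other_poles:
        d *= (pole - p)
    return num(pole) / d

r2 = residue(-2.0, [-4.0, -1.0])   # coefficient of 1/(s+2)
r4 = residue(-4.0, [-2.0, -1.0])   # coefficient of 1/(s+4)
r1 = residue(-1.0, [-2.0, -4.0])   # coefficient of 1/(s+1)
assert abs(r2 - 7/2) < 1e-12 and abs(r4 + 17/6) < 1e-12 and abs(r1 - 4/3) < 1e-12

def f(t):
    return r2 * math.exp(-2 * t) + r4 * math.exp(-4 * t) + r1 * math.exp(-t)

# Central differences approximate f' and f''; the ODE residual is tiny.
t, h = 0.7, 1e-4
d1 = (f(t + h) - f(t - h)) / (2 * h)
d2 = (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)
assert abs(d2 + 6 * d1 + 8 * f(t) - 4 * math.exp(-t)) < 1e-5

# Initial conditions from the example: f(0) = 2 and f'(0) = 3.
assert abs(f(0) - 2) < 1e-12
assert abs((f(h) - f(-h)) / (2 * h) - 3) < 1e-5
```

The cover-up rule is just the residue formula for simple poles, evaluating the numerator over the product of the remaining factors at each pole.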
1.7.3 Solution of Electrical Circuits Using the Laplace Transform
There are two ways to apply the Laplace transform to the solution of electrical circuits. In one method, the differential equations for the circuit are first obtained, and these differential equations are then solved using the Laplace transform. In the second method, the circuit elements are converted into s-domain equivalents, and KCL and KVL are applied to the s-domain circuit to obtain the needed current or voltage in the s-domain. The time-domain current or voltage is then obtained using the inverse Laplace transform. The second method is simpler and is illustrated here. Let the Laplace transform of v(t) be V(s) and the Laplace transform of i(t) be I(s). The s-domain voltage-current relations of the R, L, and C elements are obtained as follows. Consider a resistor with the v-i relation:

v(t) = R i(t).    (1.46)

Taking the Laplace transform on both sides yields:

V(s) = R I(s).    (1.47)
TABLE 1.5 Properties of Laplace Transforms
Operation                 | f(t)                    | F(s)
Addition                  | f1(t) + f2(t)           | F1(s) + F2(s)
Scalar multiplication     | A f(t)                  | A F(s)
Time differentiation      | d/dt f(t)               | sF(s) − f(0)
Time integration          | ∫₀ᵗ f(τ)dτ              | F(s)/s
Convolution               | ∫₀ᵗ f1(t − τ)f2(τ)dτ    | F1(s)F2(s)
Frequency shift           | f(t) e^(−at)            | F(s + a)
Time shift                | f(t − a)u(t − a)        | e^(−as) F(s)
Frequency differentiation | t f(t)                  | −dF/ds
Frequency integration     | f(t)/t                  | ∫ₛ^∞ F(s)ds
Scaling                   | f(at), a > 0            | (1/a) F(s/a)
Initial value             | f(0+)                   | lim (s→∞) sF(s)
Final value               | f(∞)                    | lim (s→0) sF(s)
Note: u(t) is the unit step function defined by u(t) = 0 for t < 0 and u(t) = 1 for t > 0.

Defining the impedance of an element as V(s)/I(s) = Z(s) produces Z(s) = R for a resistance. For an inductance, v(t) = L di/dt. Taking the Laplace transform of this relation yields V(s) = sL I(s) − L i(0), where i(0) represents the initial current in the inductor and Z(s) = sL is the impedance of the inductance. For a capacitance, i(t) = C dv/dt and I(s) = sC V(s) − C v(0), where v(0) represents the initial voltage across the capacitance and 1/sC is the impedance of the capacitance. Equivalent circuits that correspond to the s-domain relations for R, L, and C are shown in Table 1.6 and are suitable for writing KVL equations (initial condition as a voltage source) as well as for writing KCL equations (initial condition as a current source). With these equivalent circuits, a linear circuit can be converted to an s-domain circuit, as shown in example 1.8. It is important first to show that the KCL and KVL relations can also be converted into s-domain relations. The KCL relation in the s-domain is obtained as follows. At any node, KCL states that:

i1(t) + i2(t) + i3(t) + ... + in(t) = 0.    (1.48)

Applying the Laplace transform on both sides, the result is:

I1(s) + I2(s) + I3(s) + ... + In(s) = 0,    (1.49)

which is the KCL relation for s-domain currents at a node. In a similar manner, the KVL around a loop can be written in the s-domain as:

V1(s) + V2(s) + ... + Vn(s) = 0,    (1.50)

where V1(s), V2(s), ..., Vn(s) are the s-domain voltages around the loop. In fact, the various time-domain theorems considered earlier, such as the superposition, Thevenin, and Norton theorems, series and parallel equivalent circuits, and voltage and current division, are also valid in the s-domain. The loop current method and node voltage method can likewise be applied for analysis in the s-domain.

Example 1.8. Consider the circuit given in Figure 1.25(A) and convert this linear circuit into an s-domain circuit. You can obtain the s-domain circuit shown in Figure 1.25(B) by replacing each element by its equivalent s-domain element. As noted previously, the differential relations of the elements become algebraic relations on application of the Laplace transform. Applying KVL around the loop, you can obtain the following equations:

2I(s) + 0.2s I(s) − 0.4 + (6/s)I(s) + 3/s = 10/s.    (1.51)

(2 + 0.2s + 6/s)I(s) = 10/s − 3/s + 0.4 = (7 + 0.4s)/s.    (1.52)

I(s) = (2s + 35)/(s² + 10s + 30) = (2(s + 5) + 25)/((s + 5)² + (√5)²).    (1.53)

i(t) = e^(−5t){2 cos(√5 t) + 5√5 sin(√5 t)} for t > 0.    (1.54)

From equations 1.51 to 1.54, you can see that the solution using the Laplace transform determines the natural response and forced response at the same time. In addition, the initial conditions are taken into account when the s-domain circuit is set up. A limitation of the Laplace transform is that it can be used only for linear circuits.
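As an independent check of Example 1.8, the circuit equations of Figure 1.25(A) can be integrated numerically and compared with equation 1.54. The forward-Euler step below is a rough sketch, not part of the text's method.

```python
import math

# Cross-check of Example 1.8: integrate the loop equations of the circuit
# in Figure 1.25(A) (R = 2 ohm, L = 0.2 H, C = 1/6 F, vs = 10 V,
# i(0) = 2 A, vC(0) = 3 V) with forward Euler; compare with equation 1.54.

R, L, C, vs = 2.0, 0.2, 1.0 / 6.0, 10.0
i, vc = 2.0, 3.0                     # initial conditions
dt, t_end = 1e-5, 0.3

t = 0.0
while t < t_end - 1e-12:
    di = (vs - R * i - vc) / L       # KVL: L di/dt = vs - Ri - vC
    dvc = i / C                      # capacitor: C dvC/dt = i
    i += di * dt
    vc += dvc * dt
    t += dt

root5 = math.sqrt(5.0)
i_exact = math.exp(-5 * t_end) * (2 * math.cos(root5 * t_end)
                                  + 5 * root5 * math.sin(root5 * t_end))
assert abs(i - i_exact) < 2e-2       # Euler agrees with the closed form
print(i, i_exact)
```

The same check can be repeated at any time instant; the closed-form expression and the simulation track each other to within the Euler discretization error.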
1.7.4 Network Functions
For a one-port network, voltage and current are the two variables associated with the input port, also called the driving port. Two driving point functions can be defined under zero initial conditions:

Driving point impedance: Z(s) = V(s)/I(s).
Driving point admittance: Y(s) = I(s)/V(s).

In the case of two-port networks, one of the ports may be considered an input port with input signal X(s) and the other considered the output port with output signal Y(s). The transfer function is then defined as:
TABLE 1.6 s-Domain Equivalent Circuits for R, L, and C Elements
Element       | Time domain     | Laplace domain KVL form          | Laplace domain KCL form
Resistance R  | v(t) = R i(t)   | V(s) = R I(s)                    | I(s) = G V(s), G = 1/R
Inductance L  | v(t) = L di/dt  | V(s) = sL I(s) − L i(0+)         | I(s) = (1/sL) V(s) + i(0+)/s
Capacitance C | i(t) = C dv/dt  | V(s) = (1/sC) I(s) + v(0+)/s     | I(s) = sC V(s) − C v(0+)
Note: In the KVL form, the initial condition appears as a series voltage source; in the KCL form, it appears as a parallel current source.
H(s) = Y(s)/X(s), under zero initial conditions.

In an electrical network, both Y(s) and X(s) can be either voltage or current variables. Four transfer functions can be defined:

Transfer voltage ratio: Gv(s) = V2(s)/V1(s), under the condition I2(s) = 0.
Transfer current ratio: Gi(s) = I2(s)/I1(s), under the condition V2(s) = 0.
Transfer impedance: Z21(s) = V2(s)/I1(s), under the condition I2(s) = 0.
Transfer admittance: Y21(s) = I2(s)/V1(s), under the condition V2(s) = 0.

1.8 State Variable Analysis
State variable analysis, or state space analysis as it is sometimes called, is a matrix-based approach used for the analysis of circuits containing time-varying elements as well as nonlinear elements. The state of a circuit or a system is defined as a set of a minimum number of variables associated with the circuit; knowledge of these variables, along with knowledge of the input, enables the prediction of the currents and voltages in all system elements at any future time.
1.8.1 State Variables for Electrical Circuits
As was mentioned earlier, only capacitors and inductors are capable of storing energy in a circuit, and so only the variables associated with them are able to influence the future condition of the circuit. The voltages across the capacitors and the currents through the inductors may serve as state variables. If no loops are made up solely of capacitors and voltage sources, then the voltages across all the capacitors are independent
FIGURE 1.25 Circuit for Example 1.8: (A) Time-Domain Circuit (2 Ω in series with 0.2 H and 1/6 F, vs(t) = 10 V applied at t = 0, i(0) = 2 A, vC(0) = 3 V); (B) s-Domain Equivalent Circuit
variables and may be taken as state variables. In a similar way, if there are no sets of inductors and current sources that separate the circuit into two or more parts, then the currents associated with the inductors are independent variables and may be taken as state variables. The following examples assume that all the capacitor voltages and inductor currents are independent variables and will form the set of state variables for the circuit.
1.8.2 Matrix Representation of State Variable Equations
Because matrix techniques are used in state variable analysis, the state variables are commonly expressed as a vector x, and the input source variables are expressed as a vector r. The output variables are denoted as y. Once the state variables x are chosen, KVL and KCL are used to determine the derivatives ẋ of the state variables and the output variables y in terms of the state variables x and source variables r. They are expressed as:

ẋ = Ax + Br,
y = Cx + Dr.    (1.55)

The ẋ equation is called the state dynamics equation, and the y equation is called the output equation. A, B, C, and D are appropriately dimensioned coefficient matrices. This set of equations, in which the derivatives of the state variables are expressed as a linear combination of state variables and forcing functions, is said to be in normal form.

Example 1.9. Consider the circuit shown in Figure 1.26 and find the state variable equations in matrix form. Taking iL and vC as state variables and applying KVL around loop ABDA, you get:

diL/dt = (1/L)vL = (1/L)(va − vC) = (1/L)va − (1/L)vC.    (1.56)

Similarly, by applying KCL at node B, you get:

dvC/dt = (1/C)iC = (1/C)(iL + i1).    (1.57)

FIGURE 1.26 Circuit for Example 1.9

The current i1 can be found either by writing the node equation at node C or by applying superposition as:

i1 = (R2/(R1 + R2))ib − (1/(R1 + R2))vC.    (1.58)

Substituting for i1 in equation 1.57, you get:

dvC/dt = (1/C)iL + (R2/(C(R1 + R2)))ib − (1/(C(R1 + R2)))vC.    (1.59)

The output vo can be obtained as i1 R1 and can be expressed in terms of the state variables and sources by employing equation 1.58 as:

vo = (R1R2/(R1 + R2))ib − (R1/(R1 + R2))vC.    (1.60)

Equations 1.56, 1.59, and 1.60 can be expressed in matrix form as:

d/dt [iL; vC] = [0, −1/L; 1/C, −1/(C(R1 + R2))] [iL; vC] + [1/L, 0; 0, R2/(C(R1 + R2))] [va; ib].    (1.61)

[vo] = [0, −R1/(R1 + R2)] [iL; vC] + [0, R1R2/(R1 + R2)] [va; ib].    (1.62)

The ordering of the variables in the state variable vector x and the input vector r is arbitrary. Once the ordering is chosen, however, it must remain the same for all the variables in every place of occurrence. In large circuits, topological methods may be employed to systematically select the state variables and write the KCL and KVL equations. For want of space, these methods are not described in this discussion. Next, this chapter briefly goes over the method for solving state variable equations.

1.8.3 Solution of State Variable Equations
There are many methods for solving state variable equations: (1) computer-based numerical solution, (2) conventional differential equation time-domain solution, and (3) Laplace transform s-domain solution. This chapter presents only the Laplace transform domain solution. Consider the state dynamics equation:

ẋ = Ax + Br.    (1.63)

Taking the Laplace transform on both sides yields:

sX(s) − x(0) = AX(s) + BR(s),    (1.64)

where X(s) and R(s) are the Laplace transforms of x(t) and r(t), respectively, and x(0) represents the initial conditions. Rearranging equation 1.64 results in:

(sI − A)X(s) = x(0) + BR(s),
X(s) = (sI − A)⁻¹x(0) + (sI − A)⁻¹BR(s).    (1.65)

Taking the inverse Laplace transform of X(s) yields:

x(t) = φ(t)x(0) + φ(t) * Br(t),    (1.66)

where φ(t), the inverse Laplace transform of (sI − A)⁻¹, is called the state transition matrix and * represents time-domain convolution. Expanding the convolution, x(t) can be written as:

x(t) = φ(t)x(0) + ∫₀ᵗ φ(t − τ)Br(τ)dτ.    (1.67)

Once x(t) is known, y(t) may be found using the output equation.
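The zero-input part of the state solution, x(t) = φ(t)x(0), can be sketched numerically by summing the Taylor series of e^(At). The matrix A below is an illustrative 2x2 example, not the matrix of Example 1.9.

```python
# Zero-input response x(t) = phi(t) x(0), with the state transition
# matrix phi(t) = exp(At) built from its Taylor series. A is illustrative.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=30):
    # exp(At) = I + At + (At)^2/2! + (At)^3/3! + ...
    At = [[a * t for a in row] for row in A]
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = mat_mul(term, At)
        term = [[v / n for v in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

A = [[0.0, -4.0], [1.0, -2.0]]       # made-up stable state matrix
x0 = [1.0, 0.5]
t = 0.4

phi = expm(A, t)
x_t = [phi[0][0] * x0[0] + phi[0][1] * x0[1],
       phi[1][0] * x0[0] + phi[1][1] * x0[1]]

# Cross-check with forward Euler on dx/dt = A x.
x, dt = x0[:], 1e-5
for _ in range(int(t / dt)):
    dx0 = A[0][0] * x[0] + A[0][1] * x[1]
    dx1 = A[1][0] * x[0] + A[1][1] * x[1]
    x = [x[0] + dx0 * dt, x[1] + dx1 * dt]

assert abs(x[0] - x_t[0]) < 1e-3 and abs(x[1] - x_t[1]) < 1e-3
```

The series form mirrors the fact that φ(t) is the inverse Laplace transform of (sI − A)⁻¹; in practice library routines (or the s-domain method of the text) are used instead of a hand-rolled series.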
1.9 Alternating Current Steady State Analysis

1.9.1 Sinusoidal Voltages and Currents
Standard forms of writing sinusoidal voltages and currents are:

v(t) = Vm cos(ωt + α) [V].    (1.68a)
i(t) = Im cos(ωt + β) [A].    (1.68b)

Vm and Im are the maximum values of the voltage and current, ω is the frequency of the signal in radians per second, and α and β are called the phase angles of the voltage and current, respectively. Vm, Im, and ω are positive real values, whereas α and β are real and can be positive or negative. If α is greater than β, the voltage is said to lead the current, or the current to lag the voltage. If α is less than β, the voltage is said to lag the current, or the current to lead the voltage. If α equals β, the voltage and current are in phase.
1.9.2 Complex Exponential Function
Define Xm e^(j(ωt+φ)) as a complex exponential function. By Euler's theorem,

e^(jθ) = cos θ + j sin θ.
Xm e^(j(ωt+φ)) = Xm[cos(ωt + φ) + j sin(ωt + φ)].
x(t) = Xm cos(ωt + φ) = Real[Xm e^(j(ωt+φ))] = Real[(Xm e^(jφ)) e^(jωt)].    (1.69)

The term (Xm e^(jφ)) is called the phasor of the sinusoidal function x(t). For linear RLCM circuits, the forced response is sinusoidal at the input frequency. Since the natural response decays exponentially in time, the forced response is also the steady-state response.

1.9.3 Phasors in Alternating Current Circuit Analysis
Consider voltage and current waves of the same frequency:

v(t) = Vm cos(ωt + α) [V];  i(t) = Im cos(ωt + β) [A].

An alternative representation uses complex exponentials:

v(t) = Real{[Vm e^(jα)] e^(jωt)} [V];  i(t) = Real{[Im e^(jβ)] e^(jωt)} [A].

Phasor voltage and current are defined as:

V = Vm e^(jα) [V].    (1.70a)
I = Im e^(jβ) [A].    (1.70b)

Since a phasor is a complex number, other representations of a complex number can be used to specify the phasor; these are listed in Table 1.7. Addition of two voltages or two currents of the same frequency is accomplished by adding the corresponding phasors.

1.9.4 Phasor Diagrams
Since a phasor is a complex number, it can be represented on a complex plane as a vector in Cartesian coordinates. The length of the vector is the magnitude of the phasor, and the direction is the phase angle. The projection of the vector on the x-axis is the real part of the phasor, and the projection on the y-axis is the imaginary part of the phasor in rectangular form. The graphical representation is called the phasor diagram, and the vector is called the phasor.

FIGURE 1.27 Phasor Diagram

1.9.5 Phasor Voltage-Current Relationships of Circuit Elements
Voltage v(t) and current i(t) are sinusoidal signals at a frequency of ω rad/s, whereas V and I are the phasor voltage and current, respectively. The v-i and V-I relations for the R, L, and C elements are given in Table 1.8.

1.9.6 Impedances and Admittances in Alternating Current Circuits
Impedance Z is defined as the ratio of phasor voltage to phasor current at a pair of terminals in a circuit. The unit of impedance is ohms. Admittance Y is defined as the ratio of phasor current to phasor voltage at a pair of terminals of a circuit. The unit of admittance is siemens. Z and Y are complex numbers and reciprocals of each other. Note that phasors are also complex numbers, but phasors represent time-varying sinusoids; impedance and admittance are time invariant and frequency dependent. Table 1.9 shows the impedances and admittances of the R, L, and C elements. The phasor impedance of an element is obtained by substituting s = jω in the s-domain impedance Z(s) of the corresponding element.
TABLE 1.7 Representation of Phasor Voltages and Currents
Phasor | Exponential form | Rectangular form        | Polar form
V      | Vm e^(jα)        | Vm cos α + j Vm sin α   | Vm ∠α [V]
I      | Im e^(jβ)        | Im cos β + j Im sin β   | Im ∠β [A]

TABLE 1.8 Element Voltage-Current Relationships
Element        | Time domain  | Frequency domain
Resistance: R  | v = Ri       | V = RI
Inductance: L  | v = L di/dt  | V = jωL I
Capacitance: C | i = C dv/dt  | I = jωC V
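Phasor addition maps directly onto complex arithmetic. A small sketch using Python's cmath, with made-up amplitudes and phases:

```python
import cmath
import math

# Adding two same-frequency sinusoids by adding their phasors:
# v1 = 3 cos(wt + 30 deg), v2 = 4 cos(wt - 45 deg). Values are arbitrary.

V1 = 3 * cmath.exp(1j * math.radians(30))
V2 = 4 * cmath.exp(1j * math.radians(-45))
V = V1 + V2                              # phasor of the sum

Vm = abs(V)                              # amplitude of the resulting sinusoid
alpha = math.degrees(cmath.phase(V))     # phase angle in degrees

# Spot check against the direct time-domain sum at one instant (wt = 0.7 rad).
wt = 0.7
direct = 3 * math.cos(wt + math.radians(30)) + 4 * math.cos(wt + math.radians(-45))
via_phasor = Vm * math.cos(wt + math.radians(alpha))
assert abs(direct - via_phasor) < 1e-12
print(Vm, alpha)
```

The check holds at every instant wt because both sinusoids share the same frequency, which is exactly the condition Table 1.7 and the phasor-addition rule require.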
TABLE 1.9 Impedances and Admittances of Circuit Elements
Element       | Z [ohm]                                     | Y [siemens]
Resistance: R | R                                           | G = 1/R (G is the conductance.)
Inductor: L   | jX_L, X_L = ωL (inductive reactance)        | −jB_L, B_L = 1/(ωL) (inductive susceptance)
Capacitor: C  | −jX_C, X_C = 1/(ωC) (capacitive reactance)  | jB_C, B_C = ωC (capacitive susceptance)
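Table 1.9 translates directly into complex arithmetic. A sketch with illustrative element values, checking that Y is the reciprocal of Z:

```python
# Element impedances at a chosen frequency, following Table 1.9.
# Component values are illustrative.

w = 2 * 3.141592653589793 * 60.0     # 60 Hz in rad/s
R, L, C = 10.0, 25e-3, 100e-6

Z_R = complex(R, 0.0)
Z_L = complex(0.0, w * L)            # j X_L with X_L = wL
Z_C = complex(0.0, -1.0 / (w * C))   # -j X_C with X_C = 1/(wC)

for Z in (Z_R, Z_L, Z_C):
    Y = 1.0 / Z
    assert abs(Y * Z - 1.0) < 1e-12  # admittance is the reciprocal of impedance

# Series RLC equivalent (Section 1.9.7): series impedances simply add.
Z_eq = Z_R + Z_L + Z_C
print(Z_eq)
```

The imaginary part of Z_eq shows whether the series combination is net inductive (positive) or net capacitive (negative) at the chosen frequency.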
1.9.7 Series Impedances and Parallel Admittances
If n impedances are in series, their equivalent impedance is given by Zeq = Z1 + Z2 + ... + Zn. Similarly, the equivalent admittance of n admittances in parallel is given by Yeq = Y1 + Y2 + ... + Yn.

1.9.8 Alternating Current Circuit Analysis
Before the steady-state analysis is made, a given time-domain circuit is replaced by its phasor-domain circuit, also called the phasor circuit. The phasor circuit of a given time-domain circuit at a specified frequency is obtained as follows:
• Voltages and currents are replaced by their corresponding phasors.
• Circuit elements are replaced by the impedances or admittances at the specified frequency given in Table 1.9.
All circuit analysis techniques are now applicable to the phasor circuit.

1.9.9 Steps in the Analysis of Phasor Circuits
• Select mesh or nodal analysis for solving the phasor circuit.
• Mark the phasor mesh currents or phasor nodal voltages.
• Use impedances for mesh analysis and admittances for nodal analysis.
• Write KVL around the meshes (loops) or KCL at the nodes. KVL around a mesh: the algebraic sum of phasor voltage drops around a mesh is zero. KCL at a node: the algebraic sum of phasor currents leaving a node is zero.
• Solve the mesh or nodal equations with complex coefficients and obtain the complex phasor mesh currents or nodal voltages. The solution can be obtained by variable elimination or Cramer's rule. Remember that the arithmetic is complex number arithmetic.
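The complex-coefficient equations that mesh analysis produces can be solved exactly as described. The sketch below applies Cramer's rule to a hypothetical two-mesh phasor circuit; all impedance values are made up.

```python
# Mesh analysis with complex arithmetic on a hypothetical two-mesh
# phasor circuit: Z11 I1 + Z12 I2 = E1, Z21 I1 + Z22 I2 = E2,
# solved by Cramer's rule. Impedance and source values are illustrative.

Z11, Z12 = complex(4, 3), complex(0, -2)
Z21, Z22 = complex(0, -2), complex(5, -1)
E1, E2 = complex(10, 0), complex(0, 0)

det = Z11 * Z22 - Z12 * Z21
I1 = (E1 * Z22 - E2 * Z12) / det
I2 = (Z11 * E2 - Z21 * E1) / det

# Residual check: both mesh equations must be satisfied.
assert abs(Z11 * I1 + Z12 * I2 - E1) < 1e-12
assert abs(Z21 * I1 + Z22 * I2 - E2) < 1e-12
print(I1, I2)
```

Python's built-in complex type performs the complex number arithmetic the last step calls for; no special library is needed for small systems.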
1.9.10 Methods of Alternating Current Circuit Analysis
All methods of circuit analysis are applicable to alternating current (ac) phasor circuits. Phasor voltages and currents are the variables, and impedances and admittances describe the element voltage-current relationships. Some of the methods are described in this section.

Method of superposition: Circuits with multiple sources of the same frequency can be solved by using the mesh or nodal analysis method on the phasor circuit at the source frequency. Alternatively, the principle of superposition in linear circuits can be applied: first solve the phasor circuit for each independent source separately, then add the response voltages and currents from each source to get the total response. Since the responses are at the same frequency, phasor addition is valid.

Voltage and current source equivalence in ac circuits: An ac voltage source E in series with an impedance Z can be replaced across the same terminals by an equivalent ac current source Is = E/Z of the same frequency in parallel with an admittance Y = 1/Z, as shown in Figure 1.28. Similarly, an ac current source Is in parallel with an admittance Y can be replaced across the same terminals by an equivalent ac voltage source E = Is/Y of the same frequency in series with an impedance Z = 1/Y.

FIGURE 1.28 Source Transformations

Current and voltage division in ac circuits: For two impedances in series:

V1 = (Z1/(Z1 + Z2))V,  V2 = (Z2/(Z1 + Z2))V.
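The source equivalence of Figure 1.28 can be verified numerically: both forms must drive the same current into any load. The values below are illustrative.

```python
# Terminal equivalence of the ac source transformation:
# (E in series with Z) versus (Is = E/Z in parallel with Y = 1/Z).
# Source and load values are made up.

E, Z = complex(10, 0), complex(3, 4)
Is, Y = E / Z, 1 / Z

for ZL in (complex(5, 0), complex(2, -7), complex(0.5, 1)):
    i_series = E / (Z + ZL)              # voltage-source form: one loop
    YL = 1 / ZL
    i_shunt = Is * YL / (Y + YL)         # current division in the shunt form
    assert abs(i_series - i_shunt) < 1e-12
print("equivalent for all tested loads")
```

Algebraically, Is·YL/(Y + YL) reduces to E/(Z + ZL), which is why the two forms agree for every load; the loop applies the current division rule stated just above.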
For n impedances in series:

Vk = (Zk/(Z1 + Z2 + ... + Zn))V,  k = 1, 2, ..., n.

For two admittances in parallel:

I1 = (Y1/(Y1 + Y2))I,  I2 = (Y2/(Y1 + Y2))I.

For n admittances in parallel:

Ik = (Yk/(Y1 + Y2 + ... + Yn))I,  k = 1, 2, ..., n.

Thevenin theorem for ac circuits: A phasor circuit across a pair of terminals is equivalent to an ideal phasor voltage source Voc in series with an impedance ZTh, where Voc is the open-circuit voltage across the terminals and ZTh is the equivalent impedance of the circuit across the specified terminals.

Norton theorem for ac circuits: A phasor circuit across a pair of terminals is equivalent to an ideal phasor current source Isc in parallel with an admittance YN, where Isc is the short-circuit current across the terminals and YN is the equivalent admittance of the circuit across the terminals. Thevenin and Norton equivalent circuits of a linear phasor circuit are shown in Figure 1.29.

FIGURE 1.29 Thevenin and Norton Equivalent Circuits

1.9.11 Frequency Response Characteristics
The voltage gain Gv at a frequency ω of a two-port network is defined as the ratio of the output voltage phasor to the input voltage phasor. In a similar manner, the current gain Gi is defined as the ratio of the output current phasor to the input current phasor. Because the phasors are complex quantities that depend on frequency, the gains, voltages, and currents are written as G(jω), V(jω), and I(jω). Frequency response is then defined as the variation of the gain as a function of frequency. The above gain functions are also called transfer functions and are written as H(jω). H(jω) is a complex number and can be written in polar form as:

H(jω) = A(ω)∠φ(ω),    (1.71)

where A(ω) is the gain magnitude function |H(jω)| and φ(ω) is the phase function given by the argument of H(jω). In addition to the above gain functions, a number of other useful ratios (network functions) among the voltages and currents of two-port networks can be defined. These definitions and their nomenclature are given in Figure 1.30.

FIGURE 1.30 Frequency Response Functions: V2(jω)/V1(jω) = voltage ratio; V2(jω)/I1(jω) = transfer impedance; I2(jω)/V1(jω) = transfer admittance; I2(jω)/I1(jω) = current ratio; V1(jω)/I1(jω) = input impedance; I1(jω)/V1(jω) = input admittance. Note: The frequency response ratios are called network functions; when the input and output are at different terminals, a frequency response function is also called a transfer function.

1.9.12 Bode Diagrams
Bode diagrams are graphical representations of frequency responses and are used in solving design problems. Magnitude and phase functions are shown on separate graphs using a logarithmic frequency scale along the x-axis. The logarithm of the frequency to base 10 is used for the x-axis, so zero frequency corresponds to negative infinity on the logarithmic scale and does not show on the plots. Because the x-axis is graduated in log10 ω, every decade of frequency (e.g., ..., 0.001, 0.01, 0.1, 1, 10, 100, ...) is equally spaced. The gain magnitude, expressed in decibels as 20 log10[A(ω)], is plotted on the y-axis of the magnitude plot. Since A(ω) in dB can be both positive and negative, the y-axis has both positive and negative values; zero dB corresponds to a magnitude function of unity. The y-axis for the phase function uses a linear scale in radians or degrees. Semilog graph paper makes it convenient to sketch Bode plots. Bode plots are easily sketched by making asymptotic approximations first. The frequency response function H(jω) is a rational function, and its numerator and denominator are factorized into first-order terms and second-order terms with complex roots. The factors are then written in standard form as follows:

First-order terms: (jω + ω0) → ω0(1 + jω/ω0).

Second-order terms: [(ω0² − ω²) + j(2ζω0ω)] → ω0²[(1 − (ω/ω0)²) + j 2ζ(ω/ω0)].

Here ζ is the damping ratio, with a value of less than 1. The magnitude and phase Bode diagrams are drawn by adding the individual plots of the first- and second-order terms. Asymptotic plots are sketched first using approximations: the ratio ω/ω0 is assumed to be either much smaller than one or much larger than one, so the Bode plots become straight line segments called asymptotic approximations. The normalizing frequency ω0 is called the corner frequency. The asymptotic approximations are corrected at the corner frequencies by calculating the exact values of the magnitude and phase functions at the corner frequencies. The first- and second-order terms can occur in the numerator or the denominator of the rational function H(jω). Normalized plots for these terms are shown in Figures 1.31 and 1.32; Bode diagrams for any function can be made using the normalized plots as building blocks. Figure 1.31 shows the Bode diagrams for a first-order term based on the following equations: (i) H(jω) =

1.10 Alternating Current Steady State Power

1.10.1 Power and Energy
v 1þj : v0
(ii) H(jv) ¼
1þj
v v0
1 :
Power is the rate at which energy E is transferred, such as in this equation: p(t) ¼
dE dt
Energy is measured in Joules [J]. A unit of power is measured in watts [W]: 1W ¼ 1J=s. When energy is transferred at a constant rate, power is constant. In general, energy transferred over time t is the integral of power. ðT
E ¼ p(t)dt 0
1.10.2 Power in Electrical Circuits Power in a two-terminal circuit at any instant is obtained by multiplying the voltage across the terminals by the current through the terminals: p(t) ¼ v(t)i(t):
The magnitude and phase plots of a denominator secondorder term with complex roots are given in Figure 1.32. If the term is in the numerator, the figures are flipped about the x-axis, and the sign of the y-axis calibration is reversed.
(1:72)
If the voltage is in volts [V] and the current in amperes [A], power is in watts. In direct current (dc) circuits under steady state, the voltage V and current I are constant and power is also constant, given by P ¼ VI. In ac circuits under steady state, V
[Figure 1.31: Bode Diagrams of First-Order Terms. (A) Magnitude plots: A(ω) in dB versus normalized frequency ω/ω0 from 0.01 to 100, showing actual and asymptotic curves for (i) the first-order numerator term and (ii) the first-order denominator term. (B) Phase plots: φ(ω) in radians, rising from 0 to +π/2 for the numerator term and falling from 0 to −π/2 for the denominator term.]
and I are sinusoidal functions of the same frequency, and power in ac circuits is a function of time. For v(t) = Vm cos(ωt + a) [V] and i(t) = Im cos(ωt + b) [A]:

p(t) = [Vm cos(ωt + a)][Im cos(ωt + b)]
     = 0.5 Vm Im {cos(a − b) + cos(2ωt + a + b)}.   (1.73)
Average power P is defined as the energy transferred per second and can be calculated by integrating p(t) over 1 s. Since the voltage and current are periodic signals of the same frequency, the average power is the energy per cycle multiplied by the frequency f. Energy per cycle is calculated by integrating p(t) over one cycle, that is, one period T of the voltage or current wave:

P = f [∫₀ᵀ p(t) dt] = (1/T) ∫₀ᵀ p(t) dt = (1/T) ∫₀ᵀ 0.5 Vm Im {cos(a − b) + cos(2ωt + a + b)} dt.

Evaluating the integral yields the average power:

P = 0.5 Vm Im cos(a − b) [W].   (1.74)
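Equation (1.74) is easy to confirm numerically: averaging p(t) = v(t)i(t) over one full period reproduces 0.5 Vm Im cos(a − b). A minimal sketch in Python, with peak values, phase angles, and frequency chosen as example values of ours:

```python
# Numerical check of (1.74): the time average of p(t) over one period
# equals 0.5 * Vm * Im * cos(a - b). All numbers are example values.
import math

Vm, Im = 10.0, 2.0                         # peak voltage and current
a, b = math.radians(30), math.radians(-15) # phase angles
w = 2 * math.pi * 50                       # 50 Hz
T = 2 * math.pi / w                        # one period
N = 20000                                  # midpoint samples over one period
P_num = sum(Vm * math.cos(w * t + a) * Im * math.cos(w * t + b)
            for t in ((k + 0.5) * T / N for k in range(N))) / N
P_formula = 0.5 * Vm * Im * math.cos(a - b)
print(round(P_num, 6), round(P_formula, 6))
```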
p(t) is referred to as the instantaneous power, and P is the average power or ac power. When a sinusoidal voltage of peak value Vm [V] is applied to a 1 Ω resistor, the average power
ζ = 0.05 0.10 0.15 0.20 0.25
10
A(ω) [dB]
0
0.3 0.4 0.5 0.6 0.8 1.0
−10 −20 −30 −40 0.1
0.2
0.3 0.4
0.6 0.8 1.0
2
3
4 5 6
8 10
Normalized Frequency (ω/ω0) (A) Magnitude Plot
Phase Function [degrees]
0 −20 −40
0.3 0.4 0.5
−60 −80
0.6 0.8 1.0
ζ = 0.05 0.10 0.15 0.20 0.25
−100 −120 −140 −160 −180 0.1
1 Normalized Frequency (ω/ω0)
10
(B) Phase Plot
FIGURE 1.32
Bode Diagrams of H(jv) ¼ [(v20 v2 ) þ j(2zv0 v)]1
dissipated in the resistor is 0.5 Vm² [W]. Similarly, when a sinusoidal current of peak value Im [A] passes through a 1 Ω resistor, the average power dissipated in the resistor is 0.5 Im² [W]. The root mean square (RMS) or effective value of an ac voltage or current is defined as the equivalent dc voltage or current that would dissipate the same amount of power in a 1 Ω resistor:

VRMS = √0.5 Vm = 0.707 Vm.
IRMS = √0.5 Im = 0.707 Im.   (1.75)
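The RMS relation (1.75) can be checked the same way; the amplitude and frequency below are example values of ours:

```python
# Numerical check of (1.75): the RMS value of v(t) = Vm*cos(wt) over one
# period is sqrt(0.5)*Vm. Amplitude and frequency are example values.
import math

Vm = 10.0                       # 10 V peak
w = 2 * math.pi * 60            # 60 Hz
T = 2 * math.pi / w
N = 10000
vrms = math.sqrt(sum((Vm * math.cos(w * (k + 0.5) * T / N)) ** 2
                     for k in range(N)) / N)
print(round(vrms, 4), round(Vm / math.sqrt(2), 4))
```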
For determining whether power is dissipated (consumed) or supplied (delivered), source or load power conventions are used. For a load, the current flowing into the positive terminal of the voltage is used in calculating the dissipated power. For a source, the current leaving the positive terminal of the voltage is used in evaluating the supplied power.
1.10.3 Power Calculations in AC Circuits

The average power of a resistance R in an ac circuit is 0.5 Vm²/R [W] or 0.5 R Im² [W], where Vm is the peak voltage across the resistance and Im is the peak current through it. The average power in an inductor or a capacitor in an ac circuit is zero. Phasor voltages and currents are used in ac calculations, and average power in ac circuits can be expressed in terms of the phasor voltage and current. Using the effective (RMS) values V and I for the phasor magnitudes, with the notation V = V∠a [V], I = I∠b [A], and the conjugate of I denoted by I* = I∠−b [A], the following definitions are given:
TABLE 1.10 AC Power in Circuit Elements

Element | Voltage–current relationship | Complex power S = V I* | Average power P | Reactive power Q | Power factor
Resistance R [Ω] | V = R I | V I∠0° | I²R | 0 | Unity
Inductive reactance XL = ωL [Ω] | V = j XL I | V I∠90° | 0 | I²XL | Zero-lagging
Capacitive susceptance BC = ωC [S] | I = j BC V | V I∠−90° | 0 | −V²BC | Zero-leading
Impedance Z = (R + jX) = Z∠θ [Ω] | V = Z I | V I∠θ = I²Z | I²R | I²X | cos θ (lagging for θ > 0, leading for θ < 0)
Admittance Y = (G + jB) = Y∠φ [S] | I = Y V | V I∠−φ = V²Y* | V²G | −V²B | cos φ (leading for φ > 0, lagging for φ < 0)

Notes: Bold letters refer to phasors or complex numbers. Units: voltage in volts, current in amperes, average power in watts, reactive power in volt-amperes reactive, and complex power in volt-amperes.
Average power: P = V I cos(a − b) = Real{V I*} [W].   (1.76a)
Reactive power: Q = V I sin(a − b) = Imaginary{V I*} [VAR].   (1.76b)
Apparent power: S = V I [VA].   (1.76c)
Complex power: S = (P + jQ) = V I* [VA].   (1.76d)

In this set of equations, [W] stands for watts, [VAR] for volt-amperes reactive, and [VA] for volt-amperes. Average power is also called active power, real power, or simply power; reactive power is also referred to as imaginary power. Reactive power is a useful concept in power systems because the system voltage is affected by the reactive power flow. The average power in an inductor or capacitor is zero; by definition, the reactive power taken by an inductor is positive, and the reactive power taken by a capacitor is negative. The complex power representation is useful in calculating the power supplied by the source to a number of loads connected in the system.

Power factor is defined as the ratio of the average power in an ac circuit to the apparent power, which is the product of the voltage and current magnitudes:

power factor = average power / apparent power = P / S.   (1.77)
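Definitions (1.76a)–(1.76d) and (1.77) map directly onto complex arithmetic: with RMS phasors stored as complex numbers, S = V I* yields P, Q, apparent power, and PF in one product. A sketch (the phasor values are our own example):

```python
# Complex-power bookkeeping: S = V * conj(I) gives P = Re(S), Q = Im(S),
# apparent power |S|, and PF = P/|S|. RMS phasor values are example values.
import cmath, math

V = cmath.rect(120.0, math.radians(0))    # 120 V (RMS) at 0 degrees
I = cmath.rect(10.0, math.radians(-30))   # 10 A (RMS), current lagging by 30 degrees

S = V * I.conjugate()                     # complex power, equation (1.76d)
P, Q = S.real, S.imag                     # equations (1.76a) and (1.76b)
S_apparent = abs(S)                       # equation (1.76c)
pf = P / S_apparent                       # equation (1.77); lagging here, Q > 0
print(round(P, 1), round(Q, 1), round(S_apparent, 1), round(pf, 3))
```

Since a − b = 30° > 0, the PF is lagging and Q comes out positive, consistent with the sign conventions below.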
Power factor (PF) has a value between zero and unity. The nature of the power factor depends on the relationship between the current and voltage phase angles:

(a − b) > 0: PF is lagging.
(a − b) = 0: PF is unity (UPF).
(a − b) < 0: PF is leading.
(a − b) = π/2: PF is zero-lagging.
(a − b) = −π/2: PF is zero-leading.
As reactive power Q = V I sin(a − b), its sign depends on the nature of the PF: Q is positive for lagging PF, negative for leading PF, and zero for UPF. The PF of an inductive load is lagging, and that of a capacitive load is leading. A pure inductor has a zero-lagging power factor and absorbs positive reactive power. A pure capacitor has a zero-leading power factor and absorbs negative reactive power, that is, it delivers positive reactive power. Table 1.10 summarizes the expressions for the various power quantities in ac circuit elements.

Loads, such as motors and household appliances, are usually specified in terms of P and PF at a rated voltage; alternatively, the apparent power S and the PF can be specified. PF plays an important role in power systems. The investment cost of a utility depends on the voltage level and the current carried by the conductors: higher current needs larger, more expensive conductors, and higher voltage means higher insulation costs. The revenue of a utility is generally based on the amount of energy, in kilowatt-hours, sold. At low power factors, the revenue is low, since the power P, and hence the energy sold, is less. To get the full benefit of its investment, the utility would like to sell the highest possible energy, that is, to operate at unity power factor all the time. Utilities can have a tariff structure that penalizes the customer for low power factor. Most electromagnetic equipment, such as motors, has a low lagging PF and absorbs large reactive power. By connecting capacitors across the terminals of the motor, part or all of the reactive power absorbed by the motor can be supplied by the capacitors; the reactive power drawn from the supply is then reduced, and the supply PF improved. The cost of the capacitors is balanced against the savings accrued due to the PF improvement.
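The capacitor-sizing calculation described above can be sketched as follows; all numbers (load power, voltage, frequency, and power factors) are assumed example values, not figures from the text. The capacitor must supply the reactive-power difference Qc = P(tan θ1 − tan θ2), and a shunt capacitor supplies Qc = ωCV²:

```python
# Power-factor correction sketch (assumed values): size a shunt capacitor
# to raise a load's PF from 0.70 lagging to 0.95 lagging.
import math

P = 50e3                    # 50 kW motor load (example)
V = 480.0                   # supply voltage, RMS (example)
w = 2 * math.pi * 60        # 60 Hz system (example)

th1 = math.acos(0.70)       # present power-factor angle
th2 = math.acos(0.95)       # target power-factor angle
Qc = P * (math.tan(th1) - math.tan(th2))   # VAR the capacitor must supply
C = Qc / (w * V * V)        # from Qc = w * C * V^2 for a shunt capacitor

print(round(Qc / 1e3, 1), "kVAR,", round(C * 1e6, 0), "uF")
```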
2 Circuit Analysis: A Graph-Theoretic Foundation Krishnaiyan Thulasiraman School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA
M.N.S. Swamy Department of Electrical and Computer Engineering, Concordia University, Montreal, Quebec, Canada
2.1 Introduction ................................................ 31
2.2 Basic Concepts and Results ................................... 31
    2.2.1 Graphs . 2.2.2 Basic Theorems . 2.2.3 Cuts, Circuits, and Orthogonality . 2.2.4 Incidence, Circuit, and Cut Matrices of a Graph
2.3 Graphs and Electrical Networks ............................... 36
    2.3.1 Loop and Cutset Transformations
2.4 Loop and Cutset Systems of Equations ......................... 38
    2.4.1 Loop Method of Network Analysis . 2.4.2 Cutset Method of Network Analysis
2.5 Summary ...................................................... 41
2.1 Introduction

The theory of graphs has played a fundamental role in discovering structural properties of electrical circuits. This should not be surprising because graphs, as the reader shall soon see, are good pictorial representations of circuits and capture all of their structural characteristics. This chapter develops most of the results that form the foundation of the graph-theoretic study of electrical circuits. A comprehensive treatment of these developments may be found in Swamy and Thulasiraman (1981). All theorems in this chapter are stated without proofs. The development of graph theory in this chapter is self-contained, except for the definitions of standard and elementary results from set theory and matrix theory.
2.2 Basic Concepts and Results

2.2.1 Graphs

A graph G = (V, E) consists of two sets: a finite set V = {v1, v2, ..., vn} of elements called vertices and a finite set E = {e1, e2, ..., em} of elements called edges. If the edges of G are identified with ordered pairs of vertices, then G is called a directed or an oriented graph; otherwise, it is called an undirected or an unoriented graph.
Graphs permit easy pictorial representations. In a pictorial representation, each vertex is represented by a dot, and each edge is represented by a line joining the dots associated with the edge. In directed graphs, an orientation or direction is assigned to each edge. If the edge is associated with the ordered pair (vi, vj), then this edge is oriented from vi to vj. If an edge e connects vertices vi and vj, then it is denoted by e = (vi, vj); in a directed graph, (vi, vj) refers to an edge directed from vi to vj. An undirected graph and a directed graph are shown in Figure 2.1. Unless explicitly stated, the term graph may refer to a directed graph or an undirected graph.

The vertices vi and vj associated with an edge are called the end vertices of the edge. All edges having the same pair of end vertices are called parallel edges. In a directed graph, parallel edges refer to edges connecting the same pair of vertices vi and vj the same way, either both from vi to vj or both from vj to vi. For instance, in the graph of Figure 2.1(A), the edges connecting v1 and v2 are parallel edges. In the directed graph of Figure 2.1(B), the edges connecting v3 and v4 are parallel edges. However, the edges connecting v1 and v2 are not parallel edges because they are not oriented in the same way. If the end vertices of an edge are not distinct, then the edge is called a self-loop. The graph of Figure 2.1(A) has one self-loop, and the graph of Figure 2.1(B) has two self-loops. An edge is said to be incident on its end vertices. In a directed
[Figure 2.1: Graphs: Easy Pictorial Representations. (A) An undirected graph on vertices v1–v5. (B) A directed graph on vertices v1–v4.]
graph, the edge (vi, vj) is said to be incident out of vi and incident into vj. Vertices vi and vj are adjacent if an edge connects vi and vj. The number of edges incident on a vertex vi is called the degree of vi and is denoted by d(vi). In a directed graph, din(vi) refers to the number of edges incident into the vertex vi and is called the in-degree; dout(vi) refers to the number of edges incident out of the vertex vi, the out-degree. If d(vi) = 0, then vi is said to be an isolated vertex; if d(vi) = 1, then vi is said to be a pendant vertex. A self-loop at a vertex vi is counted twice while computing d(vi). As an example, in the graph of Figure 2.1(A), d(v1) = 3, d(v4) = 3, and v5 is an isolated vertex. In the directed graph of Figure 2.1(B), din(v1) = 3 and dout(v1) = 2. Note that in a directed graph, for every vertex vi,

d(vi) = din(vi) + dout(vi).

2.2.2 Basic Theorems

Theorem 2.1: (1) The sum of the degrees of the vertices of a graph G is equal to 2m, where m is the number of edges of G; and (2) in a directed graph with m edges, the sum of the in-degrees and the sum of the out-degrees are both equal to m.

The following theorem is known to be the first major result in graph theory.

Theorem 2.2: The number of vertices of odd degree in any graph is even.

Consider a graph G = (V, E). The graph G′ = (V′, E′) is a subgraph of G if V′ ⊆ V and E′ ⊆ E. As an example, a graph G and a subgraph of G are shown in Figure 2.2.

In a graph G, a path P connecting vertices vi and vj is an alternating sequence of vertices and edges starting at vi and ending at vj, with all vertices except vi and vj being distinct. In a directed graph, a path P connecting vertices vi and vj is called a directed path from vi to vj if all the edges in P are oriented in the same direction when traversing P from vi toward vj. If a path starts and ends at the same vertex, it is called a circuit. In a directed graph, a circuit in which all the edges are oriented in the same direction is called a directed circuit. It is often convenient to represent paths and circuits by the sequences of edges representing them. For example, in the undirected graph of Figure 2.3(A), P: e1, e2, e3, e4 is a path connecting v1 and v5, and C: e1,
[Figure 2.2: Graphs and Subgraphs. (A) A graph G on vertices V1–V5. (B) A subgraph of G.]
[Figure 2.3: Graphs Showing Paths. (A) An undirected graph on vertices v1–v6 with edges e1–e9. (B) A directed graph on vertices v1–v6 with edges e1–e9.]
e2, e3, e4, e5, e6 is a circuit. In the directed graph of Figure 2.3(B), P: e1, e2, e7, e5 is a directed path, and C: e1, e2, e7, e6 is a directed circuit. Note that C: e7, e5, e4, e1, e2 is a circuit in this directed graph, although it is not a directed circuit. Similarly, P: e9, e6, e3 is a path but not a directed path.

A graph is connected if there is a path between every pair of vertices in the graph; otherwise, the graph is not connected. For example, the graph in Figure 2.2(A) is a connected graph, whereas the graph in Figure 2.2(B) is not a connected graph.

A tree is a graph that is connected and has no circuits. Consider a connected graph G. A subgraph of G is a spanning tree of G if the subgraph is a tree and contains all the vertices of G. A tree and a spanning tree of the graph of Figure 2.4(A) are shown in Figures 2.4(B) and (C), respectively. The edges of a spanning tree T are called the branches of T. Given a spanning tree T of a connected graph G, the cospanning tree relative to T is the subgraph of G induced by the edges that are not present in T. For example, the cospanning tree relative to the spanning tree T of Figure 2.4(C) consists of the edges e3, e6, and e7. The edges of a cospanning tree are called chords. It can be easily verified that in a tree, exactly one path connects any two vertices. It should also be noted that a tree is minimally connected, in the sense that removing any edge from the tree results in a disconnected graph.

Theorem 2.3: A tree on n vertices has n − 1 edges.

If a connected graph G has n vertices and m edges, then the rank r and nullity μ of G are defined as follows:

r(G) = n − 1.
μ(G) = m − n + 1.

The concepts of rank and nullity have parallels in other branches of mathematics such as matrix theory. Clearly, if G is connected, then any spanning tree of G has r = n − 1 branches and μ = m − n + 1 chords.
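Theorem 2.3 and the rank/nullity counts can be illustrated with a few lines of code. The sketch below (plain Python; the example graph is ours, not one of the chapter's figures) grows a spanning tree by accepting an edge only when it joins two different components, then checks that it has r = n − 1 branches and leaves μ = m − n + 1 chords:

```python
# Grow a spanning tree of a connected graph: accept an edge only if it
# joins two different components (i.e., it creates no circuit).

def spanning_tree(vertices, edges):
    comp = {v: v for v in vertices}      # simple union-find forest

    def find(v):
        while comp[v] != v:
            v = comp[v]
        return v

    tree = []
    for name, (u, w) in edges.items():
        ru, rw = find(u), find(w)
        if ru != rw:                     # joins two components: no circuit
            comp[ru] = rw
            tree.append(name)
    return tree

# A small example graph of our own: edge name -> (end vertex, end vertex).
edges = {'e1': ('a', 'b'), 'e2': ('b', 'c'), 'e3': ('c', 'd'),
         'e4': ('a', 'd'), 'e5': ('b', 'd'), 'e6': ('a', 'c')}
t = spanning_tree('abcd', edges)
n, m = 4, len(edges)
print(t, len(t) == n - 1, m - len(t) == m - n + 1)
```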
2.2.3 Cuts, Circuits, and Orthogonality

This section introduces the notions of a cut and a cutset and develops certain results that bring out the dual nature of circuits and cutsets. Consider a connected graph G = (V, E) with n vertices and m edges. Let V1 and V2 be two mutually disjoint nonempty subsets of V such that V = V1 ∪ V2. Thus V2 is the complement of V1 in V and vice versa; V1 and V2 are also said to form a partition of V. The set of all those edges that have one end vertex in V1 and the other in V2 is called a cut of G and is denoted by <V1, V2>. As an example, a graph G and a cut <V1, V2> of G are shown in Figure 2.5. The graph that results after removing the edges in a cut will not be connected.

A cutset S of a connected graph G is a minimal set of edges of G such that removal of S disconnects G. Thus a cutset is also a cut. Note that the minimality property of a cutset implies that no proper subset of a cutset is a cutset.

Consider a spanning tree T of a connected graph G, and let b denote a branch of T. Removal of branch b disconnects T into two trees, T1 and T2. Let V1 and V2 denote the vertex sets of T1 and T2, respectively. Note that V1 and V2 together contain all the vertices of G. It can be verified that the cut <V1, V2> is a cutset of G; it is called the fundamental cutset of G with respect to branch b of T. Thus, for a given graph G and a spanning tree T of G, we can construct n − 1 fundamental cutsets, one for each branch of T. As an example, for the graph shown in Figure 2.5, the fundamental cutsets with respect to the spanning tree T = (e1, e2, e6, e8) are the following:

Branch e1: (e1, e3, e4).
Branch e2: (e2, e3, e4, e5).
Branch e6: (e6, e4, e5, e7).
Branch e8: (e8, e7).

Note that the fundamental cutset with respect to branch b contains b. Furthermore, branch b is not present in any other fundamental cutset with respect to T.
[Figure 2.4: Examples of Tree Graphs. (A) A graph G on vertices V1–V5 with edges e1–e7. (B) A tree of graph G. (C) A spanning tree of G.]
This section next identifies a special class of circuits of a connected graph G. Again let T be a spanning tree of G. Because exactly one path exists between any two vertices of T, adding a chord c to T produces a unique circuit. This circuit is called the fundamental circuit of G with respect to chord c of T. Note again that the fundamental circuit with respect to chord c contains c, and that the chord c is not present in any other fundamental circuit with respect to T. As an example, the set of fundamental circuits with respect to the spanning tree T = (e1, e2, e6, e8) of the graph shown in Figure 2.5 is the following:

Chord e3: (e3, e1, e2).
Chord e4: (e4, e1, e2, e6).
Chord e5: (e5, e2, e6).
Chord e7: (e7, e8, e6).
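The fundamental-cutset construction can be mechanized directly from its definition: removing a branch b splits the spanning tree into two vertex sets, and the cutset is every graph edge crossing the split. A sketch in plain Python (the example graph and tree are ours, not the handbook's Figure 2.5):

```python
# Fundamental cutsets of a connected undirected graph with respect to a
# spanning tree T: removing branch b splits T into two vertex sets
# (V1, V2); the cutset is every graph edge with one end in each.

def components(vertices, edge_list):
    """Return the connected components (as sets) of an undirected graph."""
    adj = {v: [] for v in vertices}
    for u, v in edge_list:
        adj[u].append(v)
        adj[v].append(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x])
        seen |= comp
        comps.append(comp)
    return comps

def fundamental_cutsets(vertices, edges, tree):
    """Map each tree branch to its fundamental cutset (a set of edge names)."""
    cutsets = {}
    for b in tree:
        rest = [edges[t] for t in tree if t != b]
        v1, v2 = components(vertices, rest)    # removing b splits T in two
        cutsets[b] = {e for e, (u, w) in edges.items()
                      if (u in v1) != (w in v1)}   # edges crossing the cut
    return cutsets

# A small example graph of our own: edge name -> (end vertex, end vertex).
edges = {'e1': ('a', 'b'), 'e2': ('b', 'c'), 'e3': ('c', 'd'),
         'e4': ('a', 'd'), 'e5': ('b', 'd')}
tree = ['e1', 'e2', 'e3']                      # a spanning tree of the graph
cuts = fundamental_cutsets('abcd', edges, tree)
for b in tree:
    print(b, sorted(cuts[b]))
```

Each printed cutset contains its defining branch and no other branch of the tree, as the text asserts.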
2.2.4 Incidence, Circuit, and Cut Matrices of a Graph

The incidence, circuit, and cut matrices are coefficient matrices of Kirchhoff's voltage and current laws that describe an electrical network. This section defines these matrices and presents some of their properties that are useful in studying electrical networks.

Incidence Matrix

Consider a connected directed graph G with n vertices and m edges and with no self-loops. The all-vertex incidence matrix Ac = [aij] of G has n rows, one for each vertex, and m columns, one for each edge. The element aij of Ac is defined as follows:

aij = 1 if the jth edge is incident out of the ith vertex;
aij = −1 if the jth edge is incident into the ith vertex;
aij = 0 if the jth edge is not incident on the ith vertex.

As an example, a graph and its Ac matrix are shown in Figure 2.6. From the definition of Ac, it should be clear that each column of this matrix has exactly two nonzero entries, one +1 and one −1; therefore, any row of Ac can be obtained from the remaining rows. Thus,
[Figure 2.5: A Graph and a Cut. (A) A graph G on vertices v1–v5 with edges e1–e8. (B) A cut <V1, V2> of G.]

[Figure 2.6: A directed graph on vertices v1–v5 with edges e1 = (v1, v2), e2 = (v2, v3), e3 = (v3, v4), e4 = (v5, v4), e5 = (v3, v5), e6 = (v1, v4), and e7 = (v5, v1), together with its all-vertex incidence matrix:]

         e1   e2   e3   e4   e5   e6   e7
v1 [      1    0    0    0    0    1   −1 ]
v2 [     −1    1    0    0    0    0    0 ]
v3 [      0   −1    1    0    1    0    0 ]
v4 [      0    0   −1   −1    0   −1    0 ]
v5 [      0    0    0    1   −1    0    1 ]
rank(Ac) ≤ n − 1.

An (n − 1)-rowed submatrix of Ac is referred to as an incidence matrix A of G. The vertex corresponding to the row of Ac that is not in A is called the reference vertex of A.

Cut Matrix

Consider a cut <Va, Vb> in a connected directed graph G with n vertices and m edges. Recall that <Va, Vb> consists of all those edges connecting vertices in Va to vertices in Vb. This cut may be assigned an orientation from Va to Vb or from Vb to Va. Suppose the orientation of <Va, Vb> is from Va to Vb. Then the orientation of an edge (vi, vj) is said to agree with the cut orientation if vi ∈ Va and vj ∈ Vb.
The cut matrix Qc = [qij] of G has m columns, one for each edge, and one row for each cut. The element qij is defined as follows:

qij = 1 if the jth edge is in the ith cut and its orientation agrees with the cut orientation;
qij = −1 if the jth edge is in the ith cut and its orientation does not agree with the cut orientation;
qij = 0 if the jth edge is not in the ith cut.
Each row of Qc is called a cut vector. The edges incident on a vertex form a cut; hence, the matrix Ac is a submatrix of Qc. Next, another important submatrix of Qc is identified.
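Building Ac and checking its basic properties takes only a few lines. In the sketch below, the edge list follows the orientations of Figure 2.6 as reconstructed here (vertices v1–v5 mapped to indices 0–4); treat that listing as our reading of the figure:

```python
# All-vertex incidence matrix Ac of a directed graph: a[i][j] = +1 if edge j
# is incident out of vertex i, -1 if incident into it, and 0 otherwise.

def incidence_matrix(n, edges):
    """n vertices (indexed 0..n-1); edges is a list of (tail, head) pairs."""
    a = [[0] * len(edges) for _ in range(n)]
    for j, (u, v) in enumerate(edges):
        a[u][j] = 1     # edge j is incident out of u
        a[v][j] = -1    # edge j is incident into v
    return a

# Edge orientations of Figure 2.6 as reconstructed here (v1..v5 -> 0..4).
edges = [(0, 1), (1, 2), (2, 3), (4, 3), (2, 4), (0, 3), (4, 0)]
Ac = incidence_matrix(5, edges)

# Each column holds exactly one +1 and one -1, so the rows of Ac sum to the
# zero vector: any one row is redundant and rank(Ac) <= n - 1.
column_sums = [sum(col) for col in zip(*Ac)]
print(column_sums)
```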
Recall that each branch of a spanning tree T of a connected graph G defines a fundamental cutset. The submatrix of Qc corresponding to the n − 1 fundamental cutsets defined by T is called the fundamental cutset matrix Qf of G with respect to T.

Let b1, b2, ..., b(n−1) denote the branches of T. Assume that the orientation of a fundamental cutset is chosen to agree with that of its defining branch. Arrange the rows of Qf so that the ith row corresponds to the fundamental cutset defined by bi, and arrange the columns so that the ith column corresponds to bi. Then the matrix Qf can be displayed in the convenient form

Qf = [U | Qfc],

where U is the unit matrix of order n − 1, and its columns correspond to the branches of T. As an example, the fundamental cutset matrix of the graph in Figure 2.6 with respect to the spanning tree T = (e1, e2, e5, e6) is:

            e1   e2   e5   e6    e3   e4   e7
e1-cutset [  1    0    0    0    −1   −1   −1 ]
e2-cutset [  0    1    0    0    −1   −1   −1 ]
e5-cutset [  0    0    1    0     0   −1   −1 ]
e6-cutset [  0    0    0    1     1    1    0 ]
It is clear that the rank of Qf is n − 1. Hence,

rank(Qc) ≥ n − 1.

Circuit Matrix

Consider a circuit C in a connected directed graph G with n vertices and m edges. This circuit can be traversed in one of two directions, clockwise or counterclockwise. The direction chosen for traversing C is called the orientation of C. If an edge e = (vi, vj) directed from vi to vj is in C, and if vi appears before vj in traversing C in the direction specified by the orientation of C, then the orientation of C agrees with the orientation of e.

The circuit matrix Bc = [bij] of G has m columns, one for each edge, and one row for each circuit in G. The element bij is defined as follows:

bij = 1 if the jth edge is in the ith circuit and its orientation agrees with the circuit orientation;
bij = −1 if the jth edge is in the ith circuit and its orientation does not agree with the circuit orientation;
bij = 0 if the jth edge is not in the ith circuit.
The submatrix of Bc corresponding to the fundamental circuits defined by the chords of a spanning tree T is called the fundamental circuit matrix Bf of G with respect to T. Let c1, c2, ..., c(m−n+1) denote the chords of T. Arrange the rows of Bf so that the ith row corresponds to the fundamental circuit defined by the chord ci, and arrange the columns so that the ith column corresponds to ci. Orient each fundamental circuit to agree with the orientation of its defining chord. The result is that Bf can be written as

Bf = [U | Bft],

where U is the unit matrix of order m − n + 1, and its columns correspond to the chords of T. As an example, the fundamental circuit matrix of the graph shown in Figure 2.6 with respect to the tree T = (e1, e2, e5, e6) is:

             e3   e4   e7    e1   e2   e5   e6
e3-circuit [  1    0    0     1    1    0   −1 ]
e4-circuit [  0    1    0     1    1    1   −1 ]
e7-circuit [  0    0    1     1    1    1    0 ]
It is clear from this example that the rank of Bf is m − n + 1. Hence,

rank(Bc) ≥ m − n + 1.

The following results constitute the foundation of the graph-theoretic application to electrical circuit analysis.

Theorem 2.4 (orthogonality relationship): (1) A circuit and a cutset in a connected graph have an even number of common edges. (2) If a circuit and a cutset in a directed graph have 2k common edges, then k of these edges have the same relative orientation in the circuit and the cutset, and the remaining k edges have one orientation in the circuit and the opposite orientation in the cutset.

Theorem 2.5: If the columns of the circuit matrix Bc and the columns of the cut matrix Qc are arranged in the same edge order, then

Bc Qc^t = 0.

Theorem 2.6:

rank(Bc) = m − n + 1.
rank(Qc) = n − 1.

Note that it follows from the above theorem that the rank of the circuit matrix is equal to the nullity of the graph, and the rank of the cut matrix is equal to the rank of the graph. This result, in fact, motivated the definitions of the rank and nullity of a graph.
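Theorem 2.5 can be checked numerically for the Figure 2.6 example. The Qf and Bf entries below use one consistent choice of signs (our reconstruction of the worked example); with both matrices written in the same edge order e1, e2, e5, e6, e3, e4, e7, the product Bf Qf^t vanishes:

```python
# Orthogonality check Bf * Qf^t = 0 for the Figure 2.6 example, with all
# rows written in the common edge order e1, e2, e5, e6, e3, e4, e7.
# Entries follow our sign reconstruction of the worked example.
Qf = [[1, 0, 0, 0, -1, -1, -1],   # fundamental cutset of branch e1
      [0, 1, 0, 0, -1, -1, -1],   # fundamental cutset of branch e2
      [0, 0, 1, 0,  0, -1, -1],   # fundamental cutset of branch e5
      [0, 0, 0, 1,  1,  1,  0]]   # fundamental cutset of branch e6

Bf = [[1, 1, 0, -1, 1, 0, 0],     # fundamental circuit of chord e3
      [1, 1, 1, -1, 0, 1, 0],     # fundamental circuit of chord e4
      [1, 1, 1,  0, 0, 0, 1]]     # fundamental circuit of chord e7

product = [[sum(b * q for b, q in zip(brow, qrow)) for qrow in Qf]
           for brow in Bf]
print(product)  # a 3x4 matrix of zeros if the orthogonality relation holds
```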
2.3 Graphs and Electrical Networks

An electrical network is an interconnection of electrical network elements such as resistances, capacitances, inductances, and voltage and current sources. Each network element is associ-
[Figure 2.7: A network element with reference convention: the current i(t) and the voltage v(t) are assigned associated reference directions.]

[Figure 2.8: Graph examples of KVL and KCL equations. (A) An electrical network N with elements 1–6 connected among nodes a, b, c, and d. (B) The directed graph representation of N.]
ated with two variables: the voltage variable v(t) and the current variable i(t). Reference directions are also assigned to the network elements, as shown in Figure 2.7, so that i(t) is positive whenever the current is in the direction of the arrow and v(t) is positive whenever the voltage drop in the network element is in the direction of the arrow. Replacing each element and its associated reference direction by a directed edge results in the directed graph representing the network. For example, a simple electrical network and the corresponding directed graph are shown in Figure 2.8.

The physical relationship between the current and voltage variables of a network element is specified by Ohm's law. For voltage and current sources, the voltage and current variables are required to have specified values. The linear dependence among the voltage variables in the network and the linear dependence among the current variables are governed by Kirchhoff's voltage and current laws:

Kirchhoff's voltage law (KVL): The algebraic sum of the voltages around any circuit is equal to zero.

Kirchhoff's current law (KCL): The algebraic sum of the currents flowing out of a node is equal to zero.

As examples, the KVL equation for the circuit formed by elements 1, 3, and 5 and the KCL equation for vertex b in the graph of Figure 2.8 are the following:

Circuit 1, 3, 5: V1 + V3 + V5 = 0.
Vertex b: I1 + I2 + I3 = 0.

It can be easily seen that the KVL and KCL equations for an electrical network N can be conveniently written as:
Ac Ie = 0.
Bc Ve = 0.

Here Ac and Bc are, respectively, the incidence and circuit matrices of the directed graph representing N, and Ie and Ve are the column vectors of element currents and element voltages in N. Because each row in the cut matrix Qc can be expressed as a linear combination of the rows of the incidence matrix, Ac can be replaced by Qc in the above. Thus:

KCL: Qc Ie = 0.
KVL: Bc Ve = 0.

Hence, KCL can also be stated as: the algebraic sum of the currents in any cut of N is equal to zero. If a network N has n vertices and m elements and its graph is connected, then there are only n − 1 linearly independent cuts and only m − n + 1 linearly independent circuits. Thus, in writing the KVL and KCL equations, only Bf and Qf, respectively, need to be used:

KCL: Qf Ie = 0.
KVL: Bf Ve = 0.

Note that the KCL and KVL equations depend only on the way the network elements are interconnected and not on the nature of the network elements. Thus, several results in electrical network theory are essentially graph-theoretic in nature. Some of those results of interest in electrical network analysis are presented in the remainder of this chapter. Note that for these results, a network N and its directed graph representation are both denoted by N.
2.3.1 Loop and Cutset Transformations

Let T be a spanning tree of an electrical network. Let Ic and Vt be the column vectors of chord currents and branch voltages with respect to T.

1. Loop transformation: Ie = Bf^t Ic.
2. Cutset transformation: Ve = Qf^t Vt.

If, in the cutset transformation, Qf is replaced by the incidence matrix A, then we get the node transformation given below:

Ve = A^t Vn,

where the elements in the vector Vn can be interpreted as the voltages of the nodes with respect to the reference node r. (Note: The matrix A does not contain the row corresponding to the node r.) The above transformations have been extensively employed in developing different methods of network analysis. Two of these methods are described in the following section.
2.4 Loop and Cutset Systems of Equations

As observed earlier, the problem of network analysis is to determine the voltages and currents associated with the elements of an electrical network. These voltages and currents can be determined from Kirchhoff's equations and the element voltage-current (v-i) relations given by Ohm's law. However, these equations involve a large number of variables. As can be seen from the loop and cutset transformations, not all these variables are independent. Furthermore, in place of the KCL equations, the loop transformation can be used, which involves only chord currents as variables. Similarly, the KVL equations can be replaced by the cutset transformation, which involves only branch voltage variables. Taking advantage of these transformations enables the establishment of different systems of network equations known as the loop and cutset systems. In deriving the loop system, the loop transformation is used in place of KCL; in this case, the loop variables (chord currents) serve as the independent variables. In deriving the cutset system, the cutset transformation is used in place of KVL, and the cutset variables (tree branch voltages) serve as the independent
variables in this case. It is assumed that the electrical network N is connected and that N consists of only resistances (R), capacitances (C), and inductances (L), including mutual inductances, and independent voltage and current sources. It is also assumed that all initial inductor currents and initial capacitor voltages have been replaced by appropriate sources. Further, the voltage and current variables are all Laplace transforms in the complex frequency variable s.

In N, there can be no circuit consisting of only independent voltage sources, for if such a circuit of sources were present, then by KVL, there would be a linear relationship among the corresponding voltages; this would violate the independence of the voltage sources. For the same reason, in N, there can be no cutset consisting of only independent current sources. Hence, there exists in N a spanning tree containing all the voltage sources but no current sources. Such a tree is the starting point for the development of both the loop and cutset systems of equations.

Let T be a spanning tree of the given network such that T contains all the voltage sources but no current sources. Partition the element voltage vector Ve and the element current vector Ie as follows:

Ve = [ V1 ]           [ I1 ]
     [ V2 ]  and Ie = [ I2 ]
     [ V3 ]           [ I3 ]

The subscripts 1, 2, and 3 refer to the vectors corresponding to the current sources, RLC elements, and voltage sources, respectively. Let Bf be the fundamental circuit matrix of N, and let Qf be the fundamental cutset matrix of N with respect to T. The KVL and the KCL equations can then be written as follows:

KVL: Bf Ve = [ U  B12  B13 ] [ V1 ]
             [ 0  B22  B23 ] [ V2 ] = 0.
                             [ V3 ]

KCL: Qf Ie = [ Q11  Q12  0 ] [ I1 ]
             [ Q21  Q22  U ] [ I2 ] = 0.
                             [ I3 ]
2.4.1 Loop Method of Network Analysis

Step 1: Solve the following for the vector Il. Note that Il is the vector of currents in the nonsource chords of T:

Zl Il = -B23 V3 - B22 Z2 B12^t I1,    (2.1)

where Z2 is the impedance matrix of the RLC elements and

Zl = B22 Z2 B22^t.

Equation 2.1 is called the loop system of equations.
2 Circuit Analysis: A Graph-Theoretic Foundation
Step 2: Calculate I2 using:

I2 = B12^t I1 + B22^t Il.    (2.2)

Then

V2 = Z2 I2.    (2.3)

Step 3: Determine V1 and I3 using the following:

V1 = -B12 V2 - B13 V3.    (2.4)
I3 = B13^t I1 + B23^t Il.    (2.5)

Note that I1 and V3 have specified values because they correspond to current and voltage sources, respectively.

2.4.2 Cutset Method of Network Analysis

Step 1: Solve the following for the vector Vb. Note that Vb is the vector of voltages in the nonsource branches of T:

Yb Vb = -Q11 I1 - Q12 Y2 Q22^t V3,    (2.6)

where Y2 is the admittance matrix of the RLC elements and

Yb = Q12 Y2 Q12^t.

Equation 2.6 is called the cutset system of equations.

Step 2: Calculate V2 using:

V2 = Q12^t Vb + Q22^t V3.    (2.7)

Then

I2 = Y2 V2.    (2.8)

Step 3: Determine V1 and I3 using the following:

V1 = Q11^t Vb + Q21^t V3.    (2.9)
I3 = -Q21 I1 - Q22 I2.    (2.10)
Note that I1 and V3 have specified values because they correspond to current and voltage sources. This completes the cutset method of network analysis.

The following discussion illustrates the loop and cutset methods of analysis on the network shown in Figure 2.9(A). The graph of the network is shown in Figure 2.9(B). The chosen spanning tree T consists of the edges 4, 5, and 6. Note that T contains the voltage source and has no current source. The fundamental circuit and the fundamental cutset matrices with respect to T are given below in the required partitioned form:

        1   2   3   4   5   6
Bf = [  1   0   0  -1   1   1
        0   1   0   1   0  -1
        0   0   1   1  -1   0 ]

        1   2   3   4   5   6
Qf = [  1  -1  -1   1   0   0
       -1   0   1   0   1   0
       -1   1   0   0   0   1 ]

[FIGURE 2.9 A Network and Its Graph. (A) Network: a 1 u(t) A current source (element 1), a 3-ohm resistor (element 2), three 1-ohm resistors (elements 3, 4, and 5), and a 2 u(t) V voltage source (element 6). (B) Graph: the corresponding directed graph with edges 1 through 6.]
From these matrices results the following:

B12 = [ 0  0  -1  1 ]
B13 = [ 1 ]
B22 = [ 1  0  1   0
        0  1  1  -1 ]
B23 = [ -1
         0 ]
Q11 = [  1
        -1 ]
Q12 = [ -1  -1  1  0
         0   1  0  1 ]
Q21 = [ -1 ]
Q22 = [ 1  0  0  0 ]

The following also results:

Z2 = diag(3, 1, 1, 1),  Y2 = diag(1/3, 1, 1, 1).

Loop Method
Edges 2 and 3 are the nonsource chords. So,

Il = [ i2 ]
     [ i3 ]

and substituting

Zl = B22 Z2 B22^t = [ 4  1 ]
                    [ 1  3 ]

in equation 2.1 yields the following loop system of equations:

[ 4  1 ] [ i2 ]   [ 3 ]
[ 1  3 ] [ i3 ] = [ 2 ]

Solving for i2 and i3 yields:

Il = [ i2 ] = (1/11) [ 7 ]
     [ i3 ]          [ 5 ]

Using equation 2.2 results in:

I2 = [ i2, i3, i4, i5 ]^t = (1/11) [ 7, 5, 1, 6 ]^t.

Then using V2 = Z2 I2 yields:

V2 = [ v2, v3, v4, v5 ]^t = (1/11) [ 21, 5, 1, 6 ]^t.

Finally, equations 2.4 and 2.5 yield the following:

V1 = [ v1 ] = -27/11.
I3 = [ i6 ] = 4/11.

Cutset Method
Edges 4 and 5 are the nonsource branches. So,

Vb = [ v4 ]
     [ v5 ]

and substituting

v6 = 2 V,  i1 = 1 A,

and

Yb = Q12 Y2 Q12^t = [ 7/3  -1 ]
                    [ -1    2 ]

in equation 2.6 yields the following cutset system of equations:

[ 7/3  -1 ] [ v4 ]   [ -1/3 ]
[ -1    2 ] [ v5 ] = [  1   ]

Solving for Vb yields:

Vb = [ v4 ] = (1/11) [ 1 ]
     [ v5 ]          [ 6 ]

From equation 2.7, the result is:

V2 = [ v2, v3, v4, v5 ]^t = (1/11) [ 21, 5, 1, 6 ]^t.
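The two methods can be cross-checked numerically. The sketch below (plain Python; solve2, a Cramer's-rule helper, is an illustrative name, not a library routine) takes the partitioned matrices of this example, forms the loop and cutset systems, and solves each 2x2 system; both must reproduce the same element currents and voltages:

```python
# Cross-check of the loop and cutset methods for the example network.
# Partitioned matrices as derived above; Z2 = diag(3, 1, 1, 1), Y2 = Z2^-1.
B12 = [0, 0, -1, 1]
B22 = [[1, 0, 1, 0], [0, 1, 1, -1]]
B23 = [-1, 0]
Q11 = [1, -1]
Q12 = [[-1, -1, 1, 0], [0, 1, 0, 1]]
Q22 = [1, 0, 0, 0]
z2 = [3.0, 1.0, 1.0, 1.0]
i1, v6 = 1.0, 2.0                 # source values: 1 A and 2 V

def solve2(a, b, c, d, p, q):     # Cramer's rule for [[a, b], [c, d]] x = [p, q]
    det = a * d - b * c
    return ((p * d - b * q) / det, (a * q - p * c) / det)

# Loop system (equation 2.1): Zl Il = -B23 V3 - B22 Z2 B12^t I1.
Zl = [[sum(B22[r][e] * z2[e] * B22[s][e] for e in range(4)) for s in range(2)]
      for r in range(2)]
rhs = [-B23[r] * v6 - sum(B22[r][e] * z2[e] * B12[e] for e in range(4)) * i1
       for r in range(2)]
i2, i3 = solve2(Zl[0][0], Zl[0][1], Zl[1][0], Zl[1][1], rhs[0], rhs[1])

# Cutset system (equation 2.6): Yb Vb = -Q11 I1 - Q12 Y2 Q22^t V3.
Yb = [[sum(Q12[r][e] / z2[e] * Q12[s][e] for e in range(4)) for s in range(2)]
      for r in range(2)]
rhs = [-Q11[r] * i1 - sum(Q12[r][e] / z2[e] * Q22[e] for e in range(4)) * v6
       for r in range(2)]
v4, v5 = solve2(Yb[0][0], Yb[0][1], Yb[1][0], Yb[1][1], rhs[0], rhs[1])

print(i2, i3)  # 7/11 and 5/11, as above
print(v4, v5)  # 1/11 and 6/11, as above
```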
Then using I2 = Y2 V2 yields:

I2 = [ i2, i3, i4, i5 ]^t = (1/11) [ 7, 5, 1, 6 ]^t.

Finally, using equations 2.9 and 2.10, the end results are:

V1 = [ v1 ] = -27/11.
I3 = [ i6 ] = 4/11.

This completes the illustration of the loop and cutset methods of circuit analysis.

Node Equations
Suppose a network N has no independent voltage sources. A convenient description of N with the node voltages as independent variables can be obtained as follows. Let A be the incidence matrix of N with vertex vr as the reference. Let A be partitioned as A = [A11, A12], where the columns of A11 and A12 correspond, respectively, to the RLC elements and the current sources. If I1 and I2 denote the column vectors of RLC element currents and current source currents, then the KCL equations for N become:

A11 I1 = -A12 I2.

By Ohm's law:

I1 = Y1 V1,

where V1 is the column vector of voltages of the RLC elements and Y1 is the corresponding admittance matrix. Furthermore, the node transformation yields:

V1 = A11^t Vn,

where Vn is the column vector of node voltages. So the result from the KCL equations is the following:

(A11 Y1 A11^t) Vn = -A12 I2.

The above equations are called node equations. The matrix A11 Y1 A11^t is called the node admittance matrix of N.
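As a small illustration (an assumed two-node resistive network, not one of this chapter's figures): a 1 A source feeds node 1, 1-ohm resistors connect each node to ground, and a 2-ohm resistor connects nodes 1 and 2. The node admittance matrix and right-hand side are formed exactly as above:

```python
# Node equations (A11 Y1 A11^t) Vn = -A12 I2 for a small resistive network.
# Nodes 1, 2 (ground is the reference). RLC elements (all resistors):
#   e1: 1 ohm from node 1 to ground, e2: 2 ohm from node 1 to node 2,
#   e3: 1 ohm from node 2 to ground.  The 1 A source into node 1 is
# modeled as an element directed from node 1 to ground with I2 = [-1].
A11 = [[1, 1, 0],      # rows: nodes 1, 2; columns: elements e1, e2, e3
       [0, -1, 1]]
A12 = [[1],            # current-source column
       [0]]
Y1 = [1.0, 0.5, 1.0]   # diagonal of the element admittance matrix
I2 = [-1.0]

# Yn = A11 Y1 A11^t (2x2), rhs = -A12 I2.
Yn = [[sum(A11[r][e] * Y1[e] * A11[s][e] for e in range(3)) for s in range(2)]
      for r in range(2)]
rhs = [-A12[r][0] * I2[0] for r in range(2)]

det = Yn[0][0] * Yn[1][1] - Yn[0][1] * Yn[1][0]
v1 = (rhs[0] * Yn[1][1] - Yn[0][1] * rhs[1]) / det
v2 = (Yn[0][0] * rhs[1] - rhs[0] * Yn[1][0]) / det
print(v1, v2)  # node voltages in volts
```

A hand check confirms the result: 1 A splits between the 1-ohm resistor at node 1 and the series 2-ohm/1-ohm path, giving v1 = 0.75 V and v2 = 0.25 V.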
2.5 Summary

This chapter has presented an introduction to certain basic results in graph theory and their application in the analysis of electrical circuits. For a more comprehensive treatment of other developments in circuit theory that are primarily of a graph-theoretic nature, refer to the works by Swamy and Thulasiraman (1981) and Chen (1972). The publication by Seshu and Reed (1961) is an early work that first discussed many of the fundamental results presented in this chapter. The review by Watanabe and Shinoda (1999) is the most recent reference summarizing the graph-theoretic contributions of Japanese circuit theory researchers.
References
Chen, W.K. (1972). Applied graph theory. Amsterdam: North-Holland.
Seshu, S., and Reed, M.B. (1961). Linear graphs and electrical networks. Reading, MA: Addison-Wesley.
Swamy, M.N.S., and Thulasiraman, K. (1981). Graphs, networks, and algorithms. New York: Wiley Interscience.
Watanabe, H., and Shinoda, S. (1999). Soul of circuit theory: A review on research activities of graphs and circuits in Japan. IEEE Transactions on Circuits and Systems 45, 83-94.
3 Computer-Aided Design

Ajoy Opal
Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
3.1 Introduction
3.2 Modified Nodal Analysis
3.3 Formulation of MNA Equations of Nonlinear Circuits
3.4 A Direct Current Solution of Nonlinear Circuits
3.5 Transient Analysis of Nonlinear Circuits
References
3.1 Introduction

Hand analysis of large circuits, especially if they contain nonlinear elements, can be tedious and prone to error. In such cases, computer-aided design (CAD) is necessary to speed up the analysis procedure and to obtain accurate results. This chapter describes the algorithms used in CAD programs. Such programs are available from commercial companies, such as Mentor Graphics, Cadence, and Avant!. Typically, CAD programs are based on formulating the equations using matrix methods, and the equations are then solved using numerical methods. For ease in programming, methods that are general and straightforward are preferred. This chapter assumes that the reader is familiar with methods for solving a set of simultaneous linear equations.
3.2 Modified Nodal Analysis

The first step in circuit analysis is the formulation of equations. This section modifies the nodal analysis method described in Chapter 1 of this book's Section 1 to include different types of elements in the formulation. Recall that in nodal analysis, Kirchhoff's current law (KCL) is used to write equations at each node in terms of the nodal voltages and element values. The equations are written for one node at a time, and the node voltages are the unknown variables. In CAD, it is useful to consider
one element at a time and to develop the matrix equations on this basis. Assume that the nodes in a circuit are labeled by consecutive integers from 1 to N such that there are N nodal voltages in the circuit. The reference node is usually labeled zero. This presentation uses the lowercase letters j, k, l, and m to denote nodes in the circuit. Voltages are denoted by V and branch currents by I.

Consider a linear resistor R connected between nodes j and k as in Figure 3.1. This element affects the KCL equations at nodes j and k, and the contributions are:

KCL at node j: ... + G Vj - G Vk ... = 0.
KCL at node k: ... - G Vj + G Vk ... = 0.

Here Vj and Vk are the nodal voltages at the two nodes, V = Vj - Vk and G = 1/R, and the ellipses (...) denote contributions from other elements connected to that node. Thus, the entries in the nodal admittance matrix due to the resistor are in four locations:

          Col. Vj   Col. Vk
Row j:       G        -G
Row k:      -G         G

Row and Col. denote the row and the column in the matrix, respectively. If one end of the resistor, for example node k, is connected to the reference or ground node, then the
[FIGURE 3.1 Linear Resistor. A resistor R between nodes j and k, with voltage V (+ at node j) and current I flowing from j to k.]
corresponding column and row are not present in the admittance matrix, and there is only one entry in the matrix at location (j, j). The above is called the circuit stamp for the element and can be used for all elements that can be written in admittance form (i.e., the current through the element can be written in terms of nodal voltages and the element value).

Next, consider an independent voltage source, for which the current through it cannot be written in terms of nodal voltages, such as in Figure 3.2.

[FIGURE 3.2 Independent Voltage Source. A source E between nodes j and k (+ at node j), with voltage V and current IE.]

In this case, the KCL equations at nodes j and k are listed as:

KCL at node j: ... + IE ... = 0.
KCL at node k: ... - IE ... = 0.    (3.1)

The current IE is included in the formulation. This increases the number of unknown variables by one. Hence, another equation is needed so that the number of unknowns and the number of equations are the same and can be solved. The additional equation for the voltage source is the following:

Vj - Vk = E.    (3.2)

The circuit stamp for this element includes both equations 3.1 and 3.2 and becomes:

           Col. Vj   Col. Vk   Col. IE     RHS
Row j:                             1
Row k:                            -1
Row IE:       1        -1                   E

Row IE denotes the additional equation 3.2 added to nodal analysis, and Col. IE is the additional column in the matrix due to the current through the voltage source. The vector RHS denotes the right-hand side and contains the known values of the sources. This method is called modified nodal analysis (MNA) because of the modifications made to nodal analysis to allow different types of elements in the formulation. The circuit stamps for other common elements are given in Table 3.1 and can be found in CAD books. In this table, rows labeled with the lowercase letters j, k, l, and m represent KCL equations at that node, and Vj, Vk, Vl, and Vm represent nodal voltages at the corresponding nodes. All rows and columns labeled by a current I represent additional rows and columns required for MNA.

As an example, consider the MNA equations for the linear circuit in Figure 3.3. Scanning of the circuit shows that it has three nodal voltages and that it needs three branch currents for the MNA formulation. The branch currents are required, one each, for the voltage source, the inductor in impedance form, and the VVT. The total number of variables is six and can be computed before the matrices are formulated. The MNA equations are as follows:

[  G  -G   0   1   0   0 ] [ V1   ]     [ 0   0   0   0   0   0 ]    [ V1   ]   [ 0 ]
[ -G   G   0   0   1   0 ] [ V2   ]     [ 0   C  -C   0   0   0 ]    [ V2   ]   [ 0 ]
[  0   0   0   0   0   1 ] [ V3   ]  +  [ 0  -C   C   0   0   0 ] d  [ V3   ] = [ 0 ] E,
[  1   0   0   0   0   0 ] [ IE   ]     [ 0   0   0   0   0   0 ] dt [ IE   ]   [ 1 ]
[  0   1   0   0   0   0 ] [ IL   ]     [ 0   0   0   0  -L   0 ]    [ IL   ]   [ 0 ]
[  0  -µ   1   0   0   0 ] [ IVVT ]     [ 0   0   0   0   0   0 ]    [ IVVT ]   [ 0 ]

with the initial condition vector

v(0) = [ V1(0), V2(0), V3(0), IE(0), IL(0), IVVT(0) ]^t.

The first three rows represent the KCL equations at the three nodes. The last three rows represent the characteristic equations of the independent voltage source, the inductor in impedance form, and the VVT, respectively. The equations for linear time-invariant circuits are of the form:

G v(t) + C (d/dt) v(t) = d u(t),  v(0) = v0,    (3.3)

where all conductances and constants arising in the formulation are stored in the matrix G and all capacitor and inductor values are stored in the matrix C. In addition, the connection of the input source to the circuit is in d, the vector of nodal voltages and some branch currents needed for MNA is v(t), and the initial condition vector is v0.
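The system above can be assembled programmatically, one stamp per element. The sketch below (plain Python; node 0 is ground, the unknown ordering [V1, V2, V3, IE, IL, IVVT] follows the system as written above, and stamp_conductance/solve are illustrative helper names, not a CAD-tool API) builds G, C, and d and, as a spot check, evaluates the phasor system (G + jwC)V = d at w = 1 rad/s, anticipating the sinusoidal steady-state analysis discussed later in this chapter:

```python
import math

# Assemble the MNA matrices G, C, d of the Figure 3.3 example by element stamps.
# Unknown ordering: [V1, V2, V3, IE, IL, IVVT]; node 0 is ground.
N = 6
G = [[0.0] * N for _ in range(N)]
C = [[0.0] * N for _ in range(N)]
d = [0.0] * N
Gval, Cval, Lval, mu = 1.0, 1.0, 1.0, 0.5       # element values used in the text

def stamp_conductance(M, j, k, y):              # admittance stamp of Table 3.1
    for a, b, s in ((j, j, 1), (k, k, 1), (j, k, -1), (k, j, -1)):
        if a and b:                             # skip entries touching ground
            M[a - 1][b - 1] += s * y

stamp_conductance(G, 1, 2, Gval)                # conductance G between nodes 1, 2
stamp_conductance(C, 2, 3, Cval)                # capacitor C between nodes 2, 3
G[0][3] += 1.0; G[3][0] += 1.0; d[3] = 1.0      # source E at node 1, current IE
G[1][4] += 1.0; G[4][1] += 1.0; C[4][4] -= Lval # inductor at node 2: V2 - L IL' = 0
G[2][5] += 1.0; G[5][1] -= mu; G[5][2] += 1.0   # VVT: V3 - mu*V2 = 0, current IVVT

def solve(M, b):                                # Gaussian elimination with pivoting
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    n = len(M)
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

w = 1.0                                         # rad/s; phasor spot check
V = solve([[G[r][c] + 1j * w * C[r][c] for c in range(N)] for r in range(N)], d)
V2_dB = 20 * math.log10(abs(V[1]))              # magnitude of V2 in decibels
print(round(abs(V[1]), 4), round(V2_dB, 2))     # -> 0.8944 -0.97
```

Sweeping w over a range of frequencies with the same matrices generates the kind of response plotted in Figure 3.4.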
TABLE 3.1 Circuit Stamps for MNA

Current source (J flowing from node j to node k):
                               RHS
  Row j:                       -J
  Row k:                        J

Voltage source (Vj - Vk = E, current IE added as a variable):
            Vj   Vk   IE      RHS
  Row j:               1
  Row k:              -1
  Row IE:    1   -1            E

Admittance (I = Y (Vj - Vk)):
            Vj   Vk
  Row j:     Y   -Y
  Row k:    -Y    Y

Impedance (Vj - Vk = Z IZ, current IZ added as a variable):
            Vj   Vk   IZ
  Row j:               1
  Row k:              -1
  Row IZ:    1   -1   -Z

Voltage-controlled current source, VCT (J = g (Vj - Vk), output from l to m):
            Vj   Vk
  Row l:     g   -g
  Row m:    -g    g

Voltage-controlled voltage source, VVT (Vl - Vm = µ (Vj - Vk), current IE added):
            Vj   Vk   Vl   Vm   IE
  Row l:                          1
  Row m:                         -1
  Row IE:   -µ    µ    1   -1

Current-controlled current source, CCT (J = b Ijk, controlling current Ijk added):
            Vj   Vk   Ijk
  Row j:                1
  Row k:               -1
  Row l:                b
  Row m:               -b
  Row Ijk:   1   -1

Current-controlled voltage source, CVT (Vl - Vm = r Ijk; currents Ijk and IE added):
            Vj   Vk   Vl   Vm   Ijk   IE
  Row j:                          1
  Row k:                         -1
  Row l:                                 1
  Row m:                                -1
  Row Ijk:   1   -1
  Row IE:              1   -1    -r
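To see the stamps in action, the sketch below (plain Python, with an assumed example that is not one of the chapter's figures: a 10 V source at node 1 driving two 1 kOhm resistors in series; stamp_g and solve are illustrative names) stamps a voltage source and two conductances, then solves the 3x3 MNA system by Gaussian elimination:

```python
# MNA by stamps: E = 10 V at node 1, R1 = 1k from 1 to 2, R2 = 1k from 2 to 0.
# Unknowns: [V1, V2, IE].
n = 3
A = [[0.0] * n for _ in range(n)]
rhs = [0.0] * n

def stamp_g(j, k, g):                      # admittance stamp (node 0 = ground)
    for a, b, s in ((j, j, 1), (k, k, 1), (j, k, -1), (k, j, -1)):
        if a and b:
            A[a - 1][b - 1] += s * g

stamp_g(1, 2, 1e-3)                        # R1 = 1 kOhm
stamp_g(2, 0, 1e-3)                        # R2 = 1 kOhm
A[0][2] += 1.0                             # voltage-source stamp: IE into node 1
A[2][0] += 1.0                             # row IE: V1 = E
rhs[2] = 10.0

def solve(M, b):                           # Gaussian elimination, partial pivoting
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    m = len(M)
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (M[r][m] - sum(M[r][c] * x[c] for c in range(r + 1, m))) / M[r][r]
    return x

V1, V2, IE = solve(A, rhs)
print(V1, V2, IE)   # V1 ~ 10 V, V2 ~ 5 V (divider), IE ~ -5 mA
```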
For the sinusoidal steady-state response of a linear time-invariant circuit, the input is:

u(t) = e^(jωt).

Here, the initial conditions v0 are ignored, and phasor analysis is used for computing the results. In this case, equation 3.3 becomes:

(G + jωC) V(jω) = d,    (3.4)

where V(jω) is the phasor representation of v(t). The matrix equation 3.4 is a set of simultaneous linear equations that are solved numerically for V(jω). The solution, in general, is a complex quantity with magnitude |V(jω)| and phase angle ∠V(jω):

V(jω) = |V(jω)| ∠V(jω).

The magnitude is also plotted in decibels [dB] and computed as:

|V(jω)|dB = 20 log |V(jω)|.

Using the values L = 1 H, G = 1 S, C = 1 F, and µ = 0.5, the magnitude of V2 is plotted in Figure 3.4.

[FIGURE 3.3 Linear Circuit Example. A voltage source E with current IE at node 1; conductance G between nodes 1 and 2; capacitor C (initial voltage VC(0)) between nodes 2 and 3; inductor L (initial current IL(0)) at node 2; and VVT µV2 with current IVVT at node 3.]

[FIGURE 3.4 Frequency Response of a Linear Circuit. Magnitude of V2 in dB versus frequency from 0.001 Hz to 100 Hz.]

3.3 Formulation of MNA Equations of Nonlinear Circuits

Formulation of the MNA equations of nonlinear circuits follows the same steps as equation formulation for linear circuits. Consider the circuit in Figure 3.5 with a nonlinear resistor defined by IR = g(VR) = 0.001 (VR)^3 and a nonlinear capacitor defined by QC = q(VC) = 0.001 (VC)^3. The current through the capacitor is given by IC = (d/dt) QC. Both nonlinear elements are called voltage-controlled elements because the current through the resistor and the charge on the capacitor are nonlinear functions of the voltage across the individual elements. Let the input be a DC source E(t) = 1 V. The MNA equations for the circuit in Figure 3.5 are the following:

(V1 - V2)/1000 + IE = 0 = f1.
(V2 - V1)/1000 + (V2)^3/1000 + (d/dt) QC(V2) = 0 = f2.
V1 - E = 0 = f3.    (3.5)
[FIGURE 3.5 Nonlinear Circuit. A voltage source E with current IE at node 1; a 1 kOhm linear resistor between nodes 1 and 2; and the nonlinear resistor (current IR, voltage VR) and nonlinear capacitor (current IC, voltage VC) from node 2 to ground.]

The first two equations are the KCL equations at nodes 1 and 2, respectively, and the last equation is the characteristic equation for the voltage source. In general, all the equations are nonlinear and are collectively written in the form:

f(v', v, t) = [ f1(v', v, t) ]
              [ f2(v', v, t) ] = 0,    (3.6)
              [ f3(v', v, t) ]

where the unknown vector and initial conditions are the following:

v = [ V1 ]
    [ V2 ]   and  v(0) = v0.
    [ IE ]

3.4 A Direct Current Solution of Nonlinear Circuits

For a direct current (dc) solution of nonlinear circuits, the derivatives of the network variables and time are set to zero, and equation 3.6 reduces to:

f(0, v, 0) = x(v) = 0.    (3.7)

For example, the equations for the circuit in Figure 3.5 are the following:

x(v) = [ x1(V1, V2, IE) ]   [ (V1 - V2)/1000 + IE           ]   [ 0 ]
       [ x2(V1, V2, IE) ] = [ (V2 - V1)/1000 + 0.001 (V2)^3 ] = [ 0 ] = 0.
       [ x3(V1, V2, IE) ]   [ V1 - E                        ]   [ 0 ]

The nonlinear equation 3.7 is solved numerically using an iterative method called the Newton-Raphson (NR) method. Let v^0 denote the initial guess and v^i the result of the ith iteration for the solution of equation 3.7. The calculation of the next iteration value v^(i+1) is attempted such that x(v^(i+1)) ~ 0. Expanding x(v^(i+1)) in a Taylor series around the point v^i gives equation 3.8, in which only the first two terms in the series have been used and the other, higher-order terms have been ignored:

x(v^(i+1)) ~ x(v^i) + (∂x/∂V1)(V1^(i+1) - V1^i) + (∂x/∂V2)(V2^(i+1) - V2^i) + (∂x/∂IE)(IE^(i+1) - IE^i) ~ 0.    (3.8)

Equation 3.8 can be rewritten in the form:

J^i Δv^i = -x(v^i),    (3.9)

where

J^i = [ ∂x1/∂V1  ∂x1/∂V2  ∂x1/∂IE ]   [  1/1000   -1/1000                 1 ]
      [ ∂x2/∂V1  ∂x2/∂V2  ∂x2/∂IE ] = [ -1/1000   (1 + 3 (V2^i)^2)/1000   0 ]
      [ ∂x3/∂V1  ∂x3/∂V2  ∂x3/∂IE ]   [  1         0                      0 ]

and

Δv^i = [ V1^(i+1) - V1^i ]
       [ V2^(i+1) - V2^i ]
       [ IE^(i+1) - IE^i ]

J^i is called the Jacobian of the system, and Δv^i is the change in the solution at the ith iteration. From the solution of equation 3.9, Δv^i is obtained, and v^(i+1) = v^i + Δv^i is computed. Note that equation 3.9 is a linear system of equations, and the Jacobian can be created using circuit stamps, as done
previously for linear circuits. The MNA circuit stamp for a nonlinear voltage-controlled resistor IR = g(VR), connected as shown in Figure 3.1, is as follows:

            Col. Vj        Col. Vk         RHS
Row j:      ∂g/∂Vj         ∂g/∂Vk       -g(VR^i)
Row k:     -∂g/∂Vj        -∂g/∂Vk        g(VR^i)

Here VR = Vj - Vk, and the linear resistor is a special case of the nonlinear resistor. The difference from the MNA formulation of linear circuits is that, for nonlinear circuits, there exists an entry in the RHS for all elements. For a current-controlled nonlinear resistor in the form VR = r(IR), the impedance form of the MNA formulation is used, and the current through the resistor, IR, is added to the vector of unknowns. The additional equation is Vj - Vk - r(IR) = 0.

The Newton-Raphson method is summarized in the following steps:

1. Set the iteration count i = 0, and estimate the initial guess v^0.
2. Calculate the Jacobian J^i and the right-hand side of equation 3.9, which is -x(v^i).
3. Solve equation 3.9 for Δv^i.
4. Update the solution vector v^(i+1) = v^i + Δv^i.
5. Calculate the error in the solution as the norm of the vector ||x(v^(i+1))||. A common method is to use the L2 norm, defined as ||x(v^(i+1))||2 = sqrt(Σj (xj)^2). If the error is small, then stop; otherwise set i = i + 1 and go to step 2.
Under fairly general conditions, it can be shown that if the initial guess is close to the solution, then the Newton-Raphson method converges quadratically to the solution. For the circuit in Figure 3.5, if the initial guess v^0 = [0 0 0]^t is used, then the iterations for the nodal voltage V2 are given in Table 3.2. Note that the last column in the table shows that the right-hand side of equation 3.9 approaches zero with each iteration, indicating that the final value is being reached; the iterations can be stopped when this value is sufficiently close to zero. Also note that with each iteration the change in the solution becomes smaller, indicating that one is getting closer to the exact solution. Since the circuit is nonlinear, the Jacobian must be computed for each iteration, and this is usually the most expensive portion of the entire solution method. If a better initial guess is made, such as v^0 = [1.0 0.7 0.003]^t, then the NR method converges to the final value in fewer iterations. The algorithm just described is the basic Newton-Raphson method, which works well in most, but not all, cases. Modifications to this method, as well as other methods for solving nonlinear equations, can be found in the references listed at the end of this chapter.
TABLE 3.2 Newton-Raphson Iterations

Iteration i    V2^(i+1)    ΔV2^i      ||x(v^(i+1))||2
0              1.0000       1.0000     0.0316
1              0.7500      -0.2500     0.0131
2              0.6860      -0.0640     0.0030
3              0.6823      -0.0037     0.0002
4              0.6823       0.0000     0.0000
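The V2 column of Table 3.2 can be reproduced with the Newton-Raphson update. Since the third equation pins V1 = E = 1 and IE then follows from the first, the iteration reduces to a scalar Newton step on the node-2 equation; the sketch below (plain Python) carries it out:

```python
# Scalar Newton-Raphson for the node-2 equation of the Figure 3.5 circuit:
# x2(V2) = (V2 - 1)/1000 + 0.001*V2**3 = 0  (after substituting V1 = E = 1).
def x2(v):
    return (v - 1.0) / 1000.0 + 0.001 * v**3

def dx2(v):                       # Jacobian entry dx2/dV2 = (1 + 3 V2^2)/1000
    return 1.0 / 1000.0 + 0.003 * v**2

v = 0.0                           # initial guess, as in Table 3.2
iterates = []
for _ in range(5):
    v = v - x2(v) / dx2(v)        # v_(i+1) = v_i + dv_i with J dv = -x
    iterates.append(round(v, 4))

print(iterates)                   # -> [1.0, 0.75, 0.686, 0.6823, 0.6823]
```

The quadratic shrinking of the correction from one iterate to the next is the convergence behavior described above.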
[FIGURE 3.6 Time Response of a Nonlinear Circuit. V2 [V] versus time [s].]
3.5 Transient Analysis of Nonlinear Circuits

Another very useful analysis is the time response of nonlinear circuits. This requires the solution of the differential equation 3.6, and once again numerical methods are used. Usually the solution of the network at time t = 0 is known, and the response is calculated after a time step of h, at t + h. All numerical solution methods replace the time derivative in equation 3.6 with a suitable approximation, and then the resulting equations are solved. Expanding the solution in a Taylor series, the following is obtained:

v(t + h) = v(t) + h (d/dt) v(t) + ... higher-order terms.

Assuming that the time step h is small and ignoring the higher-order terms as being small gives the method called the Euler forward method. The method approximates the derivative with:

v'(t) = (d/dt) v(t) ~ [v(t + h) - v(t)]/h.    (3.10)

Substituting equation 3.10 in equation 3.6 yields:

f([v(t + h) - v(t)]/h, v(t), t) = 0 = x(v(t + h)),

which is a nonlinear equation with the unknown value v(t + h). It can be written in the form x(v(t + h)) = 0 and solved using the Newton-Raphson method given previously. Experience has shown that this method works well with small step sizes; however, large step sizes give results that diverge from the true solution of the network. Such methods are called unstable and are generally not used in the integration of differential equations.

A better solution method is to use the Euler backward method, for which the derivative is approximated by a Taylor series expansion around the time point t + h:

v(t) = v(t + h) + (-h) (d/dt) v(t + h) + ... higher-order terms.

This equation can be rewritten in the form:

v'(t + h) = (d/dt) v(t + h) ~ [v(t + h) - v(t)]/h.    (3.11)

Substituting equation 3.11 in equation 3.6 results in:

f([v(t + h) - v(t)]/h, v(t + h), t + h) = 0 = x(v(t + h)),    (3.12)

which is a nonlinear equation in the variable v(t + h) and is solved using the Newton-Raphson method. The last step is called the discretization of the differential equation, and this step changes it into an algebraic equation. For example, substituting equation 3.11 in equation 3.5 results in:

[ (V1(t + h) - V2(t + h))/1000 + IE(t + h)                                          ]
[ (V2(t + h) - V1(t + h))/1000 + (V2(t + h))^3/1000 + [QC(V2(t + h)) - QC(V2(t))]/h ] = 0,
[ V1(t + h) - E(t + h)                                                              ]

where the Jacobian will be:

J^i = [  1/1000   -1/1000                                       1 ]
      [ -1/1000   (1 + 3 (V2^i)^2)/1000 + 3 (V2^i)^2/(1000 h)   0 ]
      [  1         0                                            0 ]

The explicit dependence on time has been dropped for simplicity. The Jacobian can be created using circuit stamps for the nonlinear elements, as done in the case of the dc solution of nonlinear circuits. For a nonlinear voltage-controlled capacitor connected between nodes j and k, with voltage VC = Vj - Vk and current I, defined by the equation QC = q(VC) = 0.001 (VC)^3, the circuit stamp is obtained by writing the KCL equations at nodes j and k:

KCL at node j: ... + (d/dt) QC(t + h) ... = 0.
KCL at node k: ... - (d/dt) QC(t + h) ... = 0.

Using the Euler backward method, the differentials are discretized by:

KCL at node j: ... + [QC(t + h) - QC(t)]/h ... = 0.
KCL at node k: ... - [QC(t + h) - QC(t)]/h ... = 0.

The Jacobian and RHS entries become:
            Col. Vj             Col. Vk                    RHS
Row j:   (1/h) ∂QC/∂Vj      (1/h) ∂QC/∂Vk      -[QC(VC^i(t + h)) - QC(VC(t))]/h
Row k:  -(1/h) ∂QC/∂Vj     -(1/h) ∂QC/∂Vk       [QC(VC^i(t + h)) - QC(VC(t))]/h
                                                                          (3.13)

Note that in deriving equation 3.13, we have avoided using the chain rule of differentiation,

(d/dt) QC = (dQC/dVC) (dVC/dt),

and then discretizing (d/dt) VC. Discretization applied directly to the charge of the capacitor (and to the flux in the case of inductors) leads to better results that maintain charge (and flux) conservation in the numerical solution of electrical circuits. Additional details can be found in the work by Kundert (1995).

For the circuit in Figure 3.5, let the capacitor be initially discharged such that V2(0) = 0; the correct initial condition vector is then v(0) = [1.0 0.0 -0.001]^t. For this circuit, it is expected that the capacitor will charge up from 0 V to a finite value with time, and all nodal voltages in the circuit will remain less than the source voltage of 1 V. If a step of h = 0.1 s is taken and the initial guess is v^0(h) = v(0), the Newton-Raphson iteration results are obtained as shown in Table 3.3. Once the solution at t = 0.1 s is obtained, the values of the circuit conditions at that time are used, another time step is taken, and the response at t = 2h = 0.2 s is calculated. This procedure is repeated until the end time of interest for computing the response is reached. Although the circuit has a continuous response, the numerical methods compute the response at discrete instants of time. It is expected that if small time steps are taken, then no interesting features of the response will be missed. Note that at each time step, the Newton-Raphson method is used to solve a set of nonlinear equations. Moreover, since the circuit is nonlinear, the Jacobian has to be recomputed at each iteration and each time step. The complete time domain response of the circuit is given in
TABLE 3.3 Newton-Raphson Iterations for t = 0 and h = 0.1 s

Iteration i    V2^(i+1)    ΔV2^i
0              1.0000       1.0000
1              0.6765      -0.3235
2              0.4851      -0.1914
3              0.4006      -0.0845
4              0.3835      -0.0171
5              0.3828      -0.0007
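The stepping procedure can be condensed to a few lines. As in the dc case, V1 = E = 1 V, so each backward Euler step reduces to one scalar Newton solve for V2(t + h); the sketch below (plain Python) reproduces the first step of Table 3.3 and the final value plotted in Figure 3.6:

```python
# Backward Euler transient analysis of the Figure 3.5 circuit (V1 = E = 1 V).
# Per step, solve x(V2) = (V2 - 1)/1000 + 0.001*V2**3
#                          + (0.001*V2**3 - q_prev)/h = 0 by Newton-Raphson.
h = 0.1

def step(v2_prev):
    q_prev = 0.001 * v2_prev**3          # capacitor charge at time t
    v = v2_prev                          # initial guess: previous time point
    for _ in range(50):
        x = (v - 1.0) / 1000.0 + 0.001 * v**3 + (0.001 * v**3 - q_prev) / h
        J = 1.0 / 1000.0 + 0.003 * v**2 + 0.003 * v**2 / h
        dv = -x / J
        v += dv
        if abs(dv) < 1e-12:
            break
    return v

v2, history = 0.0, [0.0]                 # V2(0) = 0 (discharged capacitor)
for _ in range(100):                     # simulate 10 s in 0.1 s steps
    v2 = step(v2)
    history.append(v2)

print(round(history[1], 4), round(history[-1], 4))   # -> 0.3828 0.6823
```

The first step matches the last row of Table 3.3, and the long-time value matches the dc solution of Section 3.4, as it should.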
Figure 3.6. The final value V2 = 0.6823 V is reached at about t = 8 s. Note that the final value is the same as that obtained in the dc solution above. Choosing a large step size of h = 10 s and calculating the time response gives the results V2(10) = 0.6697 V, V2(20) = 0.6816 V, V2(30) = 0.6823 V, . . . . Note that even though the result at t = 10 s is not accurate, the error does not grow with each time step, and the correct value is reached after a few more steps. Such time domain integration methods are called stable and are preferred over the Euler forward method, which may not give correct results for large time steps.

Finally, this section discusses some issues concerning the use of numerical methods for the solution of differential equations and gives some recommendations for obtaining correct results:

1. The discretization equation 3.11 is an approximation, since only the first two terms in the Taylor series expansion are included. As a result of this approximation, the numerically computed results contain an error at each time step, called the local truncation error (LTE). In general, the smaller the time step used, the smaller the LTE. The normal expectation is that if the error is reduced at each time step, then the overall error in the numerical results will also be small.
2. The discussion concerning the LTE suggests that a discretization formula that matches more than two terms in the Taylor series expansion will give more accurate results. This is correct, provided the same step size is used for both discretization methods; alternatively, the higher-order method allows larger step sizes for a given error.
3. The discussion also suggests that it is not necessary to use the same time step when determining the time domain solution of a circuit over a long period of time. It is possible to change the time step during the computation based on error and accuracy requirements.
4. The Newton-Raphson method is used at each time step to solve a set of nonlinear equations. For this method, a good initial guess gives the solution in fewer iterations. Such formulae, called predictor formulae, are available and reduce the overall computation needed for the solution.
5. The issue of stability of the response with large step sizes has already been mentioned. A stable integration method is preferred. Stable methods ensure that the error at each time step does not accumulate over time and that reasonably accurate results are obtained.

Additional reasons for these recommendations, as well as other integration methods for the time domain solution of nonlinear circuits, can be found in the references at the end of this chapter.
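The stability contrast in item 5 can be seen on a scalar test problem (an assumed linear decay dv/dt = -v, not one of this chapter's circuits). With a step larger than the forward Euler stability limit, the forward iterates grow while the backward iterates decay:

```python
# Forward vs. backward Euler on dv/dt = -v, v(0) = 1 (exact solution decays to 0).
# With h = 3 (beyond the forward Euler stability limit h = 2 for this problem),
# forward Euler produces a growing oscillation while backward Euler decays.
h, steps = 3.0, 20
vf = vb = 1.0
for _ in range(steps):
    vf = vf + h * (-vf)          # forward Euler:  v_(n+1) = (1 - h) v_n
    vb = vb / (1.0 + h)          # backward Euler: v_(n+1) = v_n / (1 + h)

print(abs(vf), abs(vb))          # forward blows up; backward goes to ~0
```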
3 Computer-Aided Design
References

Kundert, K. (1995). The designer's guide to Spice and Spectre. Boston: Kluwer Academic Publishers.
Ruehli, A. (Ed.). (1986). Circuit analysis, simulation, and design vol. 3: Advances in CAD for VLSI parts 1 and 2. Amsterdam: North-Holland.
Vlach, J., and Singhal, K. (1994). Computer methods for circuit analysis and design. New York: Van Nostrand Reinhold.
4 Synthesis of Networks Jiri Vlach Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
4.1 Introduction
4.2 Elementary Networks
4.3 Network Functions
4.4 Frequency Domain Responses
4.5 Normalization and Scaling
4.6 Approximations for Low-Pass Filters
4.7 Transformations of Inductor Capacitor Low-Pass Filters
    4.7.1 Low-Pass into a High-Pass Filter . 4.7.2 Low-Pass into a Band-Pass Filter . 4.7.3 Low-Pass into a Band-Stop Filter
4.8 Realizability of Functions
4.9 Synthesis of LC One-Ports
4.10 Synthesis of LC Two-Port Networks
    4.10.1 Transfer Zeros at Infinity . 4.10.2 Transfer Zeros on the Imaginary Axis
4.11 All-Pass Networks
4.12 Summary
References
4.1 Introduction

Synthesis of electrical networks is an area of electrical engineering in which one attempts to find the network from given specifications. In most cases, this is applied to filters constructed with various elements. This chapter begins with a few words about history. Before the Second World War, the main communication products were radios working with amplitude modulation. Stations transmitted on various frequencies, and it was necessary to get only the desired one and suppress all the others. This led to the development of various filters. Two men contributed fundamentally to filter design theory: Darlington, from the United States, and Cauer, from Germany. As time went by, it was discovered that inductor capacitor (LC) filters were not suitable for many applications, especially in low frequency regions where inductors are large and heavy. An idea occurred: replace the inductors by active networks composed of amplifiers with feedback by means of resistors and/or capacitors. This was the era of active networks, approximately the 1960s to 1980s of the 20th century. Technological advances and miniaturization led to integrated circuits, with the attempt to get complete filters on a chip. Here it turned out that using resistors was not convenient, and a new idea emerged of switched capacitor networks. In these networks, the resistors are replaced by rapidly switched capacitors that act approximately as resistors. In such networks, we have only capacitors and transistors, which work as amplifiers or switches. Capacitors and transistors are suitable for integration. New developments continue to find their way. This chapter limits explanations mostly to LC filters. Switched capacitor networks and later theoretical developments are too complex to cover in this short contribution.

Returning now to filter synthesis, this discussion must first clarify the appropriate theoretical tools. Filters are linear devices, which allows the use of linear network theory. Specifications of filters are almost always given as frequency domain responses, which leads to the use of the Laplace transform. This chapter assumes that the reader is at least somewhat familiar with the concept of the transformation because the discussion deals with the complex plane, the complex variable s, frequency domain responses, poles, and zeros. A sufficient amount of information is given so that a reader can refresh his or her knowledge.
4.2 Elementary Networks

Filters can be built with passive elements that do not need any power supply to retain their properties and with active elements that work only when electrical power is supplied from a battery or from a power supply. The passive elements are inductors (L), capacitors (C), and resistors (R). Resistance of the resistor is measured in ohms. Its inverse value is a conductance, G = 1/R. For inductors and capacitors, we use the Laplace transform; in such a case, we speak about impedances of these elements: ZL = sL and ZC = 1/sC. The inverse of the impedance is the admittance, YC = sC and YL = 1/sL. The subscripts used here are for clarification only and are usually not used. Symbols for these elements are in Figure 4.1.

FIGURE 4.1 Symbols for a Resistor, Capacitor, and Inductor

Such networks have two terminals, sometimes called a port. Notice the voltage signs and positive directions of the currents. This is an important convention because it is also valid for independent voltage and current sources, shown in Figure 4.2. We use the letter E for the independent voltage source and the letter J for the independent current source to distinguish them from voltages and currents anywhere inside the network.

FIGURE 4.2 Symbols for Independent Voltage (E) and Independent Current (J) Sources

From the above description of a one-port network, we can refer to the concept of a two-port network, usually drawn so that the input port is on the left and the output port is on the right. We can introduce a general symbol for the two-port as shown in Figure 4.3. Notice that in such a case we speak about the input voltage across the left one-port, V1, and the output voltage across the right one-port, V2. The plus sign is on top and the minus sign is at the bottom. We also indicate the currents I1 and I2 as shown. Notice how the currents flow into the two-port and how they leave it. It is very important to know these directions.

FIGURE 4.3 Symbol for a Two-Port

There exist four elementary two-ports, shown in Figure 4.4. They are the most simplified forms of amplifiers. The voltage-controlled voltage source, VV, has its output controlled by the input voltage:

V2 = μV1.  (4.1)

The μ is a dimensionless constant. Another elementary two-port is a voltage-controlled current source, VC, described by the equation:

I2 = gV1,  (4.2)

where g is transconductance and has the dimension of a conductance. The third elementary two-port is the current-controlled voltage source, CV, defined by:

V2 = rI1,  (4.3)

where r represents transresistance and has the dimension of a resistor. The last such two-port is a current-controlled current source, CC, defined by:

I2 = αI1,  (4.4)

where α is a dimensionless constant.

Any number of elements can be variously connected; if we consider on such a network an input port and an output port, we will have a general two-port. This two-port has four variables, V1, V2, I1, and I2, as in Figure 4.3. Any two can be selected as independent variables, and the other two will be dependent variables. For instance, we can consider both currents as independent variables, which will make the voltages dependent variables. To couple them in a most general form, we can write the following:

V1 = z11 I1 + z12 I2.
V2 = z21 I1 + z22 I2.  (4.5)
FIGURE 4.4 The Simplest Two-Ports: (A) VV, (B) VC, (C) CV, (D) CC

In equation 4.5, the zij have dimensions of impedances, and we speak about the impedance description of the network. The equations are usually cast into a matrix equation:

| V1 |   | z11  z12 | | I1 |
| V2 | = | z21  z22 | | I2 |   (4.6)

Another way of expressing the dependences is to select the voltages as independent variables and the currents as dependent variables and present them in the form of two equations:

I1 = y11 V1 + y12 V2.
I2 = y21 V1 + y22 V2.  (4.7)

This is an admittance description of a two-port, and in matrix form it is:

| I1 |   | y11  y12 | | V1 |
| I2 | = | y21  y22 | | V2 |   (4.8)

There exist additional possibilities on how to couple the variables. They can be found in any textbook on network theory. For demonstration, we use the network in Figure 4.5 and derive its zij parameters. We attach a voltage source V1 on the left and nothing on the right. This means that I2 = 0, and the voltage V2 appears in the middle of the network as indicated. We see that V1 = (Z1 + Z3)I1 and z11 = V1/I1 = Z1 + Z3. In addition, since the voltage in the middle of the network is V2, we can write V2 = I1 Z3 and z21 = V2/I1 = Z3. In the next step, we place the voltage source on the right and proceed similarly. The result is:

| z11  z12 |   | Z1 + Z3   Z3      |
| z21  z22 | = | Z3        Z2 + Z3 |   (4.9)

FIGURE 4.5 Finding Z Parameters for the Network

In the next example, we consider the connection of a two-port to a loading resistor, as sketched in Figure 4.6. We wish to find the input impedance of the combination. After we connect them, as indicated by the dashed line, there is the same voltage, V2, across the second port and across the resistor. The current flowing into the resistor will be −I2, and thus V2 = −I2 R. Inserting into the second equation of the set (4.5), we obtain this equation:

I2 = −z21 I1 / (R + z22).

Replacing I2 in the first equation by this result, we get:

Zin = V1/I1 = (z11 R + z11 z22 − z12 z21) / (R + z22).  (4.10)

FIGURE 4.6 Obtaining Input Impedance of a Two-Port Loaded by a Resistor
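As a numeric sanity check of the z-parameter results, the short sketch below builds the z-matrix of the T-network of Figure 4.5 and compares the loaded input impedance of equation 4.10 against a direct series–parallel reduction of the same ladder. The element values (Z1 = 2, Z2 = 4, Z3 = 6, R = 5, all resistive) are arbitrary assumptions chosen only for the check.

```python
# z-parameters of the T-network of Figure 4.5 (equation 4.9) and the
# input impedance of a two-port terminated in a resistor R (equation
# 4.10). The element values are assumed purely for illustration.

def t_network_z(Z1, Z2, Z3):
    """z-matrix of a T-network with series arms Z1, Z2 and shunt arm Z3."""
    return ((Z1 + Z3, Z3),
            (Z3, Z2 + Z3))

def z_in_loaded(z, R):
    """Equation 4.10: input impedance of a z-parameter two-port loaded by R."""
    (z11, z12), (z21, z22) = z
    return (z11 * R + z11 * z22 - z12 * z21) / (R + z22)

z = t_network_z(2.0, 4.0, 6.0)       # z11 = 8, z12 = z21 = 6, z22 = 10
Zin = z_in_loaded(z, 5.0)

# Independent check: looking into the T-network, Zin = Z1 + Z3 || (Z2 + R).
direct = 2.0 + 1.0 / (1.0 / 6.0 + 1.0 / (4.0 + 5.0))
assert abs(Zin - direct) < 1e-12
print(Zin)                           # 5.6
```

Both routes give Zin = 84/15 = 5.6 Ω, confirming that the two-port formula and ordinary ladder reduction agree.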
We will conclude this section by introducing two more two-ports, namely the ideal and the technical transformer. The ideal transformer, often used in the synthesis theory, is described by the equations:

V2 = nV1.
I2 = (1/n) I1.  (4.11)

We note that we cannot put equation 4.11 into any of the two matrix forms that we introduced (impedance or admittance form). A technical transformer is realized by magnetically coupled coils. The device is described by the equations:

V1 = sL1 I1 + sM I2.
V2 = sM I1 + sL2 I2.  (4.12)

The L1 and L2 are the primary and secondary inductances, and M is the mutual inductance. These variables can be expressed in the impedance form as:

| V1 |   | sL1  sM  | | I1 |
| V2 | = | sM   sL2 | | I2 |   (4.13)

Figure 4.7 shows the technical transformer.

FIGURE 4.7 Symbol for a Technical Transformer

4.3 Network Functions

In the second section, we introduced two-port networks with their input and output variables. We now introduce the concept of networks with only one source and one output. As an example, take the network in Figure 4.8. The output voltage is:

Vout = [2/(s + 2)] E,  (4.14)

where E is any independent signal voltage. If we do not consider a specific signal but rather divide the equation by E, we get the voltage transfer function:

TV = Vout/E = 2/(s + 2).  (4.15)

FIGURE 4.8 Introducing a Network Function (L = 1, R = 2)

This is one of the possible network functions. Had we chosen a larger network, the ratio would be in the form:

TV = Vout/E = N(s)/D(s),  (4.16)

where N(s) and D(s) are polynomials in the variable s. One network may have many network functions. The number depends on where we apply the independent source, which source we select, and where we take the output. To be able to define the network function, we must satisfy two conditions: (1) the network is linear, and (2) there are zero initial conditions on capacitors and inductors. If these conditions are satisfied, then we define a general network function as the ratio:

Network function = output/input.  (4.17)

Depending on the type of independent input source (voltage or current) and the type of the output (voltage or current), we can define the following network functions:

1. Voltage transfer: TV = Vout/E.
2. Current transfer: TI = Iout/J.
3. Transfer impedance: ZTR = Vout/J.
4. Transfer admittance: YTR = Iout/E.
5. Input impedance: Zin = Vin/J.
6. Input admittance: Yin = Iin/E.

The input impedance is the inverse of the input admittance, but this is not true for the transfer impedances and admittances. As an example, we take the network in Figure 4.9. Using nodal analysis, we can write the equations:

V1(G1 + G2 + sC) − G2 V2 = J.
−G2 V1 + (G2 + 1/sL) V2 = 0.
FIGURE 4.9 Finding Two Network Functions of One Network
FIGURE 4.10 Pole and Zero Positions of Some Low-Pass Filter

They can also be written in matrix form:

| G1 + G2 + sC   −G2        | | V1 |   | J |
| −G2            G2 + 1/sL  | | V2 | = | 0 |

Using Cramer's rule, we obtain:

Vout = G2 J / D.

The denominator of the transfer function is the determinant of the system matrix:

D = sCG2 + (G1G2 + C/L) + (G1 + G2)/sL.

Since Iout = Vout/sL, we also get:

Iout = (G2/sL) J / D.

We remove the complicated fractions by multiplying the numerator and the denominator by sL, with the result:

ZTR = sLG2 / [s²LCG2 + s(LG1G2 + C) + G1 + G2].  (4.18)

TI = G2 / [s²LCG2 + s(LG1G2 + C) + G1 + G2].

This example demonstrated that the network functions are ratios of polynomials in the variable s. If the elements are given by their numerical values, we can find roots of such polynomials. Roots of the numerator polynomial are called zeros, and roots of the denominator are called poles. From mathematics, we know that a polynomial has as many zeros as the highest power of the s variable. In the above example, the denominator is a polynomial of second degree, and the network has two poles. Since the numerator in this example is only a constant, the network will not have any (finite) zeros. In general, the poles (or zeros) can be real or can appear in complex conjugate pairs. We can draw them in the complex plane, and we can use crosses to mark the poles and use small circles to mark the zeros. Figure 4.10 shows the pole-zero plot of a network with one real and two complex conjugate poles and with two purely imaginary zeros.

4.4 Frequency Domain Responses

Let us now consider any one of the above network functions and denote it by the letter F. The function will be the ratio of two polynomials in the variable s, N(s) and D(s). If we use a root-finding method, we can also get the poles, pj, and zeros, zi. As a result, for a general network function, we can use one of the two following forms:

F(s) = N(s)/D(s) = K ∏(s − zi) / ∏(s − pj).  (4.19)

To obtain various network responses, we substitute:

s = jω,  ω = 2πf = 2π/T.  (4.20)

In equations 4.19 and 4.20, f is the frequency of the signal we applied, and f = 1/T. Using one frequency and substituting it into equation 4.19, we obtain one complex number with a real and an imaginary part:

F = x + jy = R e^jφ.  (4.21)

Here

R = √(x² + y²)  (4.22)

and

tan φ = y/x.  (4.23)

This is sketched in Figure 4.11. If we evaluate the absolute value or the angles for many frequencies and plot the frequency on the horizontal axis and the resulting values on the vertical axis, we obtain the amplitude and the phase responses of the network. Calculation of the frequency responses is always done by a computer. We assume that the reader knows how this calculation is done, but we show the responses for the filter in Figure 4.12.
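Such a computer evaluation can be sketched in a few lines. The example below uses the simple transfer function TV = 2/(s + 2) of Figure 4.8 rather than the filter of Figure 4.12 (whose polynomials would be handled identically); the frequency grid is an arbitrary assumption.

```python
# Amplitude and phase responses of T_V(s) = 2/(s + 2), evaluated on the
# j-omega axis; the group delay is formed numerically as the negative
# derivative of the unwrapped phase.

import numpy as np

def T(s):
    return 2.0 / (s + 2.0)

w = np.linspace(0.01, 10.0, 1000)
F = T(1j * w)

mag = np.abs(F)                    # R = sqrt(x^2 + y^2)
mag_db = 20.0 * np.log10(mag)      # the decibel (logarithmic) scale
phase = np.unwrap(np.angle(F))     # phi = arctan(y/x), with jumps removed
tau = -np.gradient(phase, w)       # group delay as -d(phi)/d(omega)

# Spot checks: unit gain at dc; at omega = 2 rad/s the single pole at -2
# gives |T| = 1/sqrt(2) (about -3 dB) and a phase of -45 degrees.
assert abs(abs(T(0j)) - 1.0) < 1e-12
assert abs(abs(T(2j)) - 1.0 / np.sqrt(2.0)) < 1e-12
assert abs(np.angle(T(2j)) + np.pi / 4.0) < 1e-12
```

Plotting `mag` (or `mag_db`) and `tau` against `w` on linear or logarithmic axes reproduces the kinds of response plots discussed here.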
FIGURE 4.11 Finding the Absolute Value and Phase in the Complex Plane

We have several possibilities for plotting them: the horizontal axis can be linear or logarithmic, and the amplitude responses can be in absolute values, |Fi| = |F(jωi)|, or |Fi|dB = 20 log |Fi| (in decibels, which is a logarithmic scale). The four possible plots of the same responses are in Figures 4.13(A) through 4.13(D).

FIGURE 4.12 Example of a Low-Pass Filter with Transfer Zeros (element values 1.25752, 1.16658, 0.05226, 0.13975, 1.41201, 2.1399, 1.33173; source and load resistors 1 Ω)

FIGURE 4.13 (A) LC Filter with Transmission Zeros: AC Analysis, Magnitude of V4, Linear Frequency Scale

Clearly, selecting a correct scale and plot is very important for proper presentation and understanding. The phase responses can also be plotted, but we have a couple of problems: the tangent function is periodic, and the plots experience jumps. For this reason, the phase response is rarely plotted, and we prefer to use the group delay, defined by:

τ = −dφ/dω.  (4.24)

A plot of the group delay for the filter in Figure 4.12 is in Figure 4.14. We see that the amplitude response approximates fairly well a constant in the passband, but the group delay has a large peak. Not every program has a built-in evaluation of the group delay, but there is a remedy: every computer has a polynomial root-finding routine. We can calculate the poles and zeros and use the second formula in equation 4.19. Let there be M zeros
FIGURE 4.13 (continued) (B) Same Filter, dB of V4, Linear Frequency Scale. (C) Same Filter, Magnitude of V4, Logarithmic Frequency Scale. (D) Same Filter, dB of V4, Logarithmic Frequency Scale

FIGURE 4.14 LC Filter, Group Delay of V4, in ms, Logarithmic Frequency Scale
zi = αi + jβi and N poles pi = γi + jδi. Substituting s = jω and taking the absolute value, we arrive at the formula:

|F| = K { ∏_{i=1}^{M} [αi² + (ω − βi)²] / ∏_{i=1}^{N} [γi² + (ω − δi)²] }^{1/2}.  (4.25)

The phase response will be as follows:

φ = Σ_{i=1}^{M} arctan[(βi − ω)/αi] − Σ_{i=1}^{N} arctan[(δi − ω)/γi].  (4.26)

Differentiating equation 4.26 with respect to ω provides the formula for the group delay:

τ(ω) = Σ_{i=1}^{M} αi/[αi² + (ω − βi)²] − Σ_{i=1}^{N} γi/[γi² + (ω − δi)²].  (4.27)

All formulas are easily programmed and plotted.

4.5 Normalization and Scaling

Normalization is one of the most useful concepts in linear networks. It allows us to reduce one frequency and the network impedance to unit values without losing any information. Consider first the scaling of impedances. If we increase (decrease) the impedance of every element of a filter described by its voltage or current transfer function, the input–output relationship should not change. It is thus customary to apply scaling, which reduces one (any) resistor to the value of 1. We can also scale the frequency so that the cutoff frequency of a low-pass filter or the center frequency of a band-pass filter is reduced to 1 rad/s. To derive the necessary formulas, we will denote the scaled values by the subscript S and leave the values to be used for realization without subscript. Impedance scaling means that the impedance of every network element must be scaled by the same constant, k. This immediately leads to RS = R/k. For frequency normalization, we introduce the formula:

ωS = ω/ω0,  (4.28)

where ω0 is a constant by which we wish to scale the frequency. Now consider the impedance of a scaled inductor by writing:

ZL,S = jωL/k = j(ω/ω0)(ω0L/k) = jωS(ω0L/k).

Similarly, for a capacitor we obtain:

ZC,S = 1/(jωCk) = 1/[j(ω/ω0)(ω0Ck)] = 1/(jωS CS).

The results can be collected in the following formulas:

RS = R/k,  GS = Gk.
LS = ω0L/k.
CS = ω0Ck.  (4.29)

If the network has amplifiers, then:

- VV and CC remain unchanged;
- VC transconductance g is multiplied by k; and
- CV transresistance r is divided by k.

The scaling makes it possible to provide numerous tables of filters, all normalized so that one resistor is equal to 1 Ω and some frequency is normalized to ωS = 1. From such tables, the reader selects a suitable filter and modifies the values to suit the frequency band and impedance level.

4.6 Approximations for Low-Pass Filters

Before we go into the details of the approximations, we first pose a question: what kind of properties should a low-pass filter have to transfer, without distortion, a certain frequency band and suppress completely all other frequencies? The thought comes immediately to mind that all components should be amplified equally. This is true but not sufficient. In addition, the phase response must be a straight line; thus, the group delay must be a constant for all frequencies of interest. The conditions are sketched in Figure 4.15. We will see that this is impossible to achieve and that we have to accept some approximations when comparing with Figures 4.13(A) through 4.13(D) and Figure 4.14. A low-pass filter, as indicated by the name, passes some frequency band starting from zero frequency and suppresses higher frequencies. One type of low-pass filter, called the polynomial filter, is described by transfer functions having a general form:

T(s) = K / (polynomial in s).  (4.30)

The polynomial must have all its roots (poles of the filter) in the left half of the complex plane to make the filter stable. One such filter was suggested by Butterworth (Schaumann, 1990). He wanted to determine what the coefficients of the polynomial in equation 4.30 should be to get a maximally flat amplitude low-pass transfer at ω = 0. To make the problem unique,
he also requested that the output of the filter at ω = 1 should be 3 dB less than at ω = 0. We will not go into detail about how the polynomials were and are found; the steps are in any book on filters. Instead, we show the responses in Figure 4.16 for orders n = 2 to 10. We have used a logarithmic horizontal scale, and the responses start at ω = 0.1. As we see, the approximation of a constant at low frequencies is reasonable for a high n, but the group delay turns out to have a peak at approximately ω = 1, and the peak grows with the order n.

FIGURE 4.15 Ideal Responses for Amplitude, Phase, and Group Delay
FIGURE 4.16 (A) Selectivity Curves of Maximally Flat Filters. (B) Group Delay of Maximally Flat Filters

The poles of the normalized Butterworth (or maximally flat) filter lie on a circle with a radius r = 1. Figure 4.17 shows the situation for n = 3 and n = 4. Extension to higher degrees is easy to extrapolate from these two figures: all odd orders will have one real pole, and all even orders will have only complex conjugate poles. The poles of a few filters are in Table 4.1.

FIGURE 4.17 Poles' Positions for Maximally Flat (Butterworth) Filters, n = 3 and n = 4

Another well-known type of polynomial filter is the Chebychev filter; these filters use special polynomials discovered by the Russian scientist Chebychev. They are also described by the general formula 4.30, but the requirements are different. In the passband, the amplitude response oscillates between two limits, as in Figure 4.18. The ripple is normally expressed in decibels. At ω = 1, the response always drops by the amount of specified ripple, as in the figure. Compared with the maximally flat filters, Chebychev filters have better attenuation outside the passband and approximate a constant better in the frequency band from 0 to 1. Table 4.1 gives pole positions of Chebychev filters with 0.5 dB ripple.

TABLE 4.1 Poles of Filters

Order   Butterworth                     Chebychev, 0.5 dB ripple
2       −0.7071068 ± j0.7071068         −0.7128122 ± j1.0040425
3       −1.0000000                      −0.6264565
        −0.5000000 ± j0.8660254         −0.3132282 ± j1.0219275
4       −0.3826834 ± j0.9238795         −0.1753531 ± j1.0162529
        −0.9238795 ± j0.3826834         −0.4233398 ± j0.4209457
5       −1.0000000                      −0.3623196
        −0.3090170 ± j0.9510565         −0.1119629 ± j1.0115574
        −0.8090170 ± j0.5877852         −0.2931227 ± j0.6251768
6       −0.2588190 ± j0.9659258         −0.0776501 ± j1.0084608
        −0.7071068 ± j0.7071068         −0.2121440 ± j0.7382446
        −0.9659258 ± j0.2588190         −0.2897940 ± j0.2702162
7       −1.0000000                      −0.2561700
        −0.2225209 ± j0.9749279         −0.0570032 ± j1.0064085
        −0.6234898 ± j0.7818315         −0.1597194 ± j0.8070770
        −0.9009689 ± j0.4338837         −0.2308012 ± j0.4478939

FIGURE 4.18 Amplitude Response of a Chebychev Filter

If the requirements on the suppression of signals outside the passband are more severe, polynomial filters are not sufficient. Much better results are obtained with filters having transfer zeros placed on the imaginary axis. Because a pair of imaginary axis complex conjugate zeros is expressed by the terms (s + jωi)(s − jωi) = s² + ωi², the transfer function will have the form:

T(s) = K ∏(s² + ωi²) / (polynomial in s).  (4.31)

A sketch of the pole-zero plot of one such low-pass filter was shown in Figure 4.10. As always, the roots of the polynomial must be in the left half of the plane. There are many possibilities for selecting the zeros and poles, all beyond the scope of this presentation. Fortunately, there exist numerous tables of filters in the literature. Best known are the Cauer-parameter filters, with equiripple responses in the passband and with equal minimal suppressions in the stopband. We have analyzed one such filter: see Figure 4.12 and its amplitude responses in Figures 4.13(A) through 4.13(D). Before closing this section, we provide a general network for low-pass filters described by formula 4.30. The elements are like those in Figure 4.19. Depending on the order of the filter, the last element before the load resistor will be either an inductor in series or a capacitor in parallel. We can have filters with different values of the resistors and also a design with only one resistor, RL, but that does not change the general structure of the filter. Table 4.2 gives element values for normalized Butterworth filters with RE = RL = 1.
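The normalized Butterworth poles in Table 4.1 are easy to regenerate: for order n they sit on the unit circle in the left half-plane at angles θk = (2k − 1)π/2n measured from the positive imaginary axis. A short sketch reproducing the n = 3 entries:

```python
# Poles of the normalized maximally flat (Butterworth) filter:
# p_k = -sin(theta_k) + j cos(theta_k), theta_k = (2k - 1) pi / (2n),
# i.e., unit-circle poles in the left half of the complex plane.

import math

def butterworth_poles(n):
    poles = []
    for k in range(1, n + 1):
        theta = (2 * k - 1) * math.pi / (2 * n)
        poles.append(complex(-math.sin(theta), math.cos(theta)))
    return poles

p = butterworth_poles(3)

# Table 4.1, order 3: -1.0000000 and -0.5000000 +/- j0.8660254.
assert any(abs(q + 1.0) < 1e-7 for q in p)
assert any(abs(q - complex(-0.5, 0.8660254)) < 1e-7 for q in p)
assert all(abs(abs(q) - 1.0) < 1e-12 for q in p)   # all on the unit circle
```

The same loop with larger n regenerates the remaining Butterworth rows of Table 4.1; the Chebychev poles follow a similar (elliptical) pattern but require the ripple parameter.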
FIGURE 4.19 Realization of Polynomial Low-Pass Filters

TABLE 4.2 Element Values of Butterworth Filters

Order   C1      L2      C3      L4      C5      L6      C7
2       1.4142  1.4142
3       1.0000  2.0000  1.0000
4       0.7654  1.8478  1.8478  0.7654
5       0.6180  1.6180  2.0000  1.6180  0.6180
6       0.5176  1.4142  1.9319  1.9319  1.4142  0.5176
7       0.4450  1.2470  1.8019  2.0000  1.8019  1.2470  0.4450

4.7 Transformations of Inductor Capacitor Low-Pass Filters

We spoke about low-pass filters and normalization previously because many normalized LC low-pass filters can be found in
the literature, such as in Zverev (1976). One selects a suitable scaled filter and transforms it to the desired frequency and impedance level. There are, however, additional possibilities for inductor capacitor (LC) low-pass filters. They can be transformed into band-pass, high-pass, or band-stop filters. We introduce these filters now.

4.7.1 Low-Pass into a High-Pass Filter

Consider the transformation:

z = ω0/s,  (4.32)

where z describes the complex frequency variable of the original scaled low-pass filter, possibly taken from tables, and s is the variable of the network to be realized. The meaning of ω0 will become clear later. Multiply both sides of equation 4.32 by L. On the left is the impedance of an inductor. Simple arithmetic operations lead to:

zL = ω0L/s = 1/(s/ω0L) = 1/sC′,

where on the right is an impedance of a capacitor with the value C′ = 1/ω0L. Considering multiplication of equation 4.32 by C, we get:

zC = ω0C/s = 1/(s/ω0C) = 1/sL′.

On the left is the admittance of a capacitor, and on the right is the admittance of an inductor: L′ = 1/ω0C. The transformation changes each capacitor into an inductor and vice versa; the filter is transformed into a high-pass. We must still understand the meaning of ω0. When evaluating the frequency domain responses, we always substitute s = jω. The frequency of interest is the cutoff frequency of the original low-pass filter, z = ±j. Inserting into equation 4.32, we get ω0 = ∓ω. The ∓ sign only means that negative frequencies, existing in mathematics but not in reality, transform into positive frequencies and vice versa. The low-pass cutoff frequency of ωS = 1 transforms into the high-pass cutoff frequency ω0.

4.7.2 Low-Pass into a Band-Pass Filter

Consider next the transformation:

z = s/Δ + ω0²/(sΔ) = (s² + ω0²)/(sΔ).  (4.33)

In equation 4.33, z belongs to the original low-pass normalized filter. Multiply both sides by L to represent the impedance of an inductor in the z variable:

zL = sL/Δ + ω0²L/(sΔ) = sL/Δ + 1/[sΔ/(ω0²L)] = sL′ser + 1/(sC′ser).

This means that the original inductor impedance was transformed into a series connection of two impedances. One of the elements is an inductor L′ser = L/Δ, and the other is a capacitor C′ser = Δ/(ω0²L). Proceeding similarly for the admittance of a capacitor, we get:

zC = sC/Δ + ω0²C/(sΔ) = sC/Δ + 1/[sΔ/(ω0²C)] = sC′par + 1/(sL′par).

The admittance of the capacitor is changed into the sum of two admittances, indicating a parallel connection. One of the elements is a capacitor C′par = C/Δ, and the other is an inductor
4 Synthesis of Networks
65
4.7.3 Low-Pass into a Band-Stop Filter
0 Lpar ¼ D=v20 C:
To find additional properties of the transformation, we first form the product: 0 0 0 0 Lser Cser ¼ Lpar Cpar
1 ¼ 2: v0
This equation shows that the resonant frequencies of the parallel and series tuned circuits are the same, ω₀. Next, multiply equation 4.33 by the denominator to get:

s² − zΔs + ω₀² = 0.

This is a quadratic equation with two solutions:

s₁,₂ = zΔ/2 ± √(z²Δ²/4 − ω₀²).

Now consider special points of the original filter. For z = 0, we get s₁,₂(z = 0) = ±jω₀: zero frequency of the original filter is transformed into the ±jω₀ points on the imaginary axis. Taking next the cutoff frequency of the low-pass filter, z = ±j, we get four points:

s₁,₂,₃,₄ = ±jΔ/2 ± j√(Δ²/4 + ω₀²).

Only two of these points will be on the positive imaginary axis:

jω₁ = −jΔ/2 + j√(Δ²/4 + ω₀²),
jω₂ = +jΔ/2 + j√(Δ²/4 + ω₀²).

Their difference is the following:

ω₂ − ω₁ = Δ. (4.34)

Their product is:

ω₁ω₂ = ω₀². (4.35)

Thus, ω₀ is the (geometric) center of the two frequencies, and Δ is the passband of the transformed filter.

Consider a transformation similar to equation 4.33:

1/z = s/Δ + ω₀²/(sΔ) = (s² + ω₀²)/(sΔ). (4.36)

It transforms a low-pass into a band-stop. All the above steps remain valid; only the elements will be transformed differently. Divide both sides by L to get:

1/(zL) = s/(ΔL) + ω₀²/(sΔL).

On the left is an admittance, and on the right is the sum of two admittances, indicating that the elements will be in parallel: C′par = 1/(ΔL) and L′par = ΔL/ω₀². A similar division by C will result in:

1/(zC) = s/(ΔC) + ω₀²/(sΔC).

On the left side is an impedance, and on the right side is the sum of two impedances, indicating that the elements will be in series: L′ser = 1/(ΔC) and C′ser = ΔC/ω₀². All three transformations are summarized in Figure 4.20 for easy understanding. As an example, we will take the third-order maximally flat filter from Table 4.2 and its realization in Figure 4.21(A). We then transform it into a band-pass filter with center frequency ω₀ = 1 rad/s and with the bandwidth Δ = 0.1 rad/s. The transformed band-pass filter is in Figure 4.21(B) and its amplitude response in Figure 4.22. Using equations from section 4.5, the band-pass filter can be transformed to any impedance level and any center frequency with a bandwidth equal to one tenth of the center frequency.

4.8 Realizability of Functions
Synthesis is a process in which we have a given function, and we try to find a network whose properties will be described by that function. In most cases, we try to do so for a voltage transfer function. The first question that comes to mind is whether any function can be realized as a network composed of passive elements: capacitors, inductors, resistors, and transformers. Rather obviously, the answer is no. To make the problem treatable, we must start with a simple network function and establish conditions that the function must satisfy. The simplest network function is an impedance (or admittance), but since one is the inverse of the other, we can restrict ourselves to only one of them: the impedance. A large amount of work went into establishing the necessary and sufficient conditions for an impedance function composed of
Jiri Vlach
FIGURE 4.20 Transformations of Low-Pass Filters

FIGURE 4.21 Transformation of a Butterworth Low-Pass Filter into a Band-Pass
L, C, and R elements, possibly with an ideal transformer. We will explain the conditions as a set of rules without trying to establish the reasons for these rules. A rational function in the variable s can be realized as an impedance (admittance) if it satisfies the following rules:
1. The degree of the numerator and denominator may differ by at most one.
2. Z(s) may not have any poles or zeros in the right half of the plane.
3. Poles and zeros in the left half of the plane may be multiple.
4. Poles on the imaginary axis must be simple and must have positive residues.
5. The function must satisfy the condition:

Re Z(s) ≥ 0 if Re s ≥ 0. (4.37)

The first condition is easy to establish by inspection. The second condition is a standard requirement for stability. Because the impedance and the admittance are the inverse of each other, the same must apply for the zeros. Stability is not destroyed by multiple poles in the left half plane, as stated in point 3. Point 4 would call for partial fraction expansion of the function. Unfortunately, all these steps are only necessary but not sufficient. The only necessary and sufficient condition is point 5, which is difficult to test. All of this information may seem very discouraging, but, fortunately, synthesis of completely arbitrary impedances is almost never needed. In most cases, we need to realize LC impedances (admittances), and the rules are considerably simplified here. Again, without trying to provide a proof, let us state that any LC impedance can be realized in the form of the circuit in Figure 4.23(A) and any LC admittance in the form of the network in Figure 4.23(B). Consider the network in Figure 4.23(A). The tuned circuits have the impedance:

Zᵢ = (1/Cᵢ) · s/(s² + 1/(LᵢCᵢ)). (4.38)

We usually define:

kᵢ = 1/(2Cᵢ), (4.39)

and

ωᵢ² = 1/(LᵢCᵢ). (4.40)
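Condition 4.37 can be refuted, though never proven, by sampling the closed right half-plane numerically. A short Python sketch (the two test impedances are our own illustrative choices, not examples from the tables in this chapter):

```python
def is_positive_real_sampled(Z, sigmas, omegas, tol=-1e-12):
    """Sample Re Z(s) over a grid in the closed right half-plane.

    Rule 5 requires Re Z(s) >= 0 whenever Re s >= 0.  Sampling can
    only refute the property (one bad point suffices), never prove it.
    """
    for sigma in sigmas:
        for omega in omegas:
            if Z(complex(sigma, omega)).real < tol:
                return False
    return True

grid = [k * 0.25 for k in range(40)]            # sigma >= 0
omegas = [k * 0.25 for k in range(-40, 41)]

# Z(s) = (s + 1)/(s + 2) is a well-known positive-real impedance:
assert is_positive_real_sampled(lambda s: (s + 1) / (s + 2), grid, omegas)

# (s - 1)/(s + 2) has a right half-plane zero; it violates rules 2 and 5:
assert not is_positive_real_sampled(lambda s: (s - 1) / (s + 2), grid, omegas)
```

The sampling test is only a screening device; a definitive answer requires the algebraic positive-real tests found in the references at the end of this chapter.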
FIGURE 4.22 Amplitude Response of the Band-Pass Filter in Figure 4.21 (linear frequency scale, 0.9 to 1.1 rad/s)
The variable kᵢ represents the residues of the poles on the imaginary axis, pᵢ = ±jωᵢ, and ωᵢ is the resonant frequency of the circuit. This gives us the possibility of writing a general LC impedance in the form:

Z_LC = sL + 1/(sC) + Σᵢ 2kᵢs/(s² + ωᵢ²). (4.41)
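For a concrete case, the impedance Z(s) = (s² + 1)(s² + 9)/(s(s² + 4)) has the form of equation 4.41 with L = 1, C = 4/9 (k₀ = 9/4), and one finite pole pair at ωᵢ = 2 with 2kᵢ = 15/4; the decomposition was worked out by hand here, and a short numerical spot-check confirms it:

```python
import cmath

def Z_rational(s):
    # Z(s) = (s^2 + 1)(s^2 + 9) / (s (s^2 + 4))
    return (s**4 + 10 * s**2 + 9) / (s**3 + 4 * s)

def Z_foster(s):
    # Equation 4.41 with L = 1 (pole at infinity), C = 4/9 (pole at the
    # origin, k0 = 9/4), and 2k_i = 15/4 at omega_i = 2.
    return s + 9 / (4 * s) + (15 / 4) * s / (s**2 + 4)

# The two forms agree at any point away from the poles.
for s in (0.5j, 1.7j, 1 + 2j, 3.0, -0.3 + 0.9j):
    assert cmath.isclose(Z_rational(s), Z_foster(s))
```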
Any of the components in equation 4.41 may be missing. A similar expression could be derived for the network in Figure 4.23(B). We could now use a computer, plot various impedances, and study the results. We will summarize them for you:
1. Z_LC or Y_LC have only simple poles on the imaginary axis.
2. The residues of the poles are positive.
3. The zeros and poles on the imaginary axis must alternate.
4. Z_LC or Y_LC are expressed as a ratio of an even and an odd polynomial.
5. A pole or a zero may appear at the origin or at infinity, but the alternating nature of the poles and zeros must be preserved.
These results can be summarized by the simple sketches in Figure 4.24. Synthesis of filters is usually based on the two-port theory. Using overall expressions for the whole filter, we separate the
LC two-port from the loading resistors and apply synthesis to find the elements of the LC two-port. Again, as could be expected, some restrictions apply for the overall network, but the conditions are much more relaxed than for the impedances. A voltage or current transfer function can be realized as an LC two-port with loading resistors if the following statements are true:
1. The degrees of the numerator, M, and denominator, N, satisfy M ≤ N.
2. Zeros are usually on the imaginary axis, but theoretically they can be anywhere.
3. Poles must be in the left half of the plane.
Networks composed of passive elements always have z₁₂ = z₂₁. In addition, z₁₁ and z₂₂ will have the same poles as z₁₂ = z₂₁. Both z₁₁ and z₂₂ can have additional poles (called private poles), as we will show later. If the network components are only L and C, then z₁₁ and z₂₂ must satisfy the conditions on LC impedances discussed previously. We now indicate the simplest method to extract the LC two-port from the network function. Consider the network in Figure 4.25. We wish to find the transfer impedance V₂/I₁. To do so, we need the second equation from the equation set 4.5:

V₂ = z₂₁I₁ + z₂₂I₂. (4.42)
FIGURE 4.23 A General Form of an LC: (A) Impedance and (B) Admittance
After the indicated connection, we have:

I₂ = −V₂/R.

Inserting into equation 4.42 and eliminating I₂, we obtain:

Z_TR = V₂/I₁ = z₂₁/(1 + z₂₂/R). (4.43)

As we explained in section 4.5, we can simplify the expression by normalizing the impedance level to R = 1. Now consider a function having the form:

f = g/(m + n), (4.44)

where m collects all the even terms of the denominator and n collects all the odd terms. All roots of the denominator must be in the left half of the plane. We can rewrite equation 4.44 in one of the two forms:

f = (g/m)/(1 + n/m), (4.45)

or

f = (g/n)/(1 + m/n). (4.46)
The ratios in the denominator, n/m or m/n, satisfy the conditions imposed on z₂₂ of an LC network. The problem is reduced to the synthesis of z₂₂, taking into account the properties of z₂₁. We will describe such synthesis in section 4.10. We have chosen this primitive case for demonstration only. Practical synthesis steps usually require voltage transfer with a loading resistor and with an input resistor. The steps that extract properties of the LC two-port are more complicated but are available in books focusing on synthesis of filters.

FIGURE 4.24 Possible Positions of LC Network Poles and Zeros

FIGURE 4.25 Deriving Z_TR for a Loaded Two-Port
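The even/odd split behind equations 4.44 through 4.46 is purely mechanical; a short Python sketch (the third-order Butterworth denominator is used as the example):

```python
def even_odd_split(coeffs):
    """Split a polynomial, given as ascending coefficients, into its
    even part m and its odd part n, so the polynomial is m + n as in
    equation 4.44."""
    m = [c if i % 2 == 0 else 0 for i, c in enumerate(coeffs)]
    n = [c if i % 2 == 1 else 0 for i, c in enumerate(coeffs)]
    return m, n

# Denominator of the third-order Butterworth filter:
# s^3 + 2s^2 + 2s + 1, ascending coefficients [1, 2, 2, 1].
m, n = even_odd_split([1, 2, 2, 1])
assert m == [1, 0, 2, 0]          # m = 1 + 2 s^2
assert n == [0, 2, 0, 1]          # n = 2 s + s^3
# The ratio n/m (or m/n) is then treated as the z22 of an LC two-port.
```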
4.9 Synthesis of LC One-Ports
For LC impedances or admittances, we stated the rules that the function is represented by the ratio of an even and an odd polynomial and that the poles and zeros must interchange, be simple, and have positive residues. To find the partial fractions, we must use a root-finding routine and actually get the zeros and poles first. To calculate the residues of the poles, we use the following formulas:

Pole at the origin:

k₀ = sZ(s)|s=0. (4.47)

Pole at infinity:

k∞ = Z(s)/s|s=∞. (4.48)

Two poles at ±jωᵢ:

2kᵢ = [(s² + ωᵢ²)/s]·Z(s)|s²=−ωᵢ². (4.49)

In formulas 4.47 through 4.49, it is first necessary to cancel the additional terms against equal terms in Z(s) and then substitute the indicated values for s. Suppose now that we consider an admittance in the form:

Y(s) = k∞s + k₀/s + Σᵢ₌₁ⁿ 2kᵢs/(s² + ωᵢ²). (4.50)

The first term is C = k∞, and the second is L = 1/k₀. The remaining terms can be written as:

Yᵢ(s) = 2kᵢs/(s² + ωᵢ²) = 1/(s/(2kᵢ) + ωᵢ²/(2kᵢs)), (4.51)

where

Lᵢ = 1/(2kᵢ), (4.52)

and

Cᵢ = 2kᵢ/ωᵢ². (4.53)

The admittance realized this way was shown in Figure 4.23(B). Should we consider the function in equation 4.50 as an impedance, similar steps would lead to the network in Figure 4.23(A). These networks have the name Foster canonical forms. Another method, originally due to Cauer, gives impedances or admittances in the form of a ladder. As an example, consider the impedance:

Z = (s⁴ + 10s² + 9)/(s³ + 4s) = (s² + 1)(s² + 9)/(s(s² + 4)). (4.54)

Subtract from equation 4.54 an impedance sL (a pole at infinity). This gives the following:

Z₂ = (s⁴ + 10s² + 9)/(s³ + 4s) − sL = (s⁴(1 − L) + s²(10 − 4L) + 9)/(s³ + 4s).

We could now select L to be anywhere between zero and one, but if we select L = 1, the function simplifies to:

Z₂ = (6s² + 9)/(s³ + 4s).

The subtraction means that we have realized the first element in series, L = 1, and removed a pole at infinity. The remaining function can now be inverted to:

Y₂ = (s³ + 4s)/(6s² + 9).

This equation has a pole at infinity. We can continue by subtracting from it the admittance of a capacitor:

Y₃ = Y₂ − sC = (s³(1 − 6C) + s(4 − 9C))/(6s² + 9).

If we select C = 1/6, we remove again a pole at infinity and realize a capacitor in parallel, C = 1/6, with the result:

Y₃ = (5s/2)/(6s² + 9).

Y₃ can be inverted again:

Z₃ = 12s/5 + 18/(5s).

The first element is an inductor L = 12/5, the second a capacitor C = 5/18, both connected in series. The resulting network is in Figure 4.26. The above expansion started from the highest powers. There exists another possibility by starting from the lowest power. Rewrite the impedance from 4.54 in the form:

Z = (9 + 10s² + s⁴)/(4s + s³), (4.55)

and subtract 1/sC, a pole at the origin:

Z₂ = Z − 1/(sC) = ((9 − 4/C) + s²(10 − 1/C) + s⁴)/(4s + s³).

If we select C = 4/9 as a capacitor in series, the first term disappears, and we remove the pole at s = 0 to get:

Z₂ = (31s/4 + s³)/(4 + s²).
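The expansion about infinity is a continued fraction obtained by repeated polynomial division, and it is easily mechanized; a Python sketch with exact rational arithmetic, applied to equation 4.54:

```python
from fractions import Fraction

def cauer_expansion_at_infinity(num, den):
    """Continued-fraction expansion of Z = num/den about s = infinity.

    num, den: descending coefficient lists with deg(num) = deg(den) + 1
    at every step (the regular LC case).  For an impedance, the values
    returned alternate series L, shunt C, series L, shunt C, ...
    """
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    elements = []
    while den:
        q = num[0] / den[0]                            # leading term q*s
        elements.append(q)
        q_s_den = [q * c for c in den] + [Fraction(0)]  # q*s*den
        rem = [a - b for a, b in zip(num, q_s_den)]
        while rem and rem[0] == 0:                     # strip leading zeros
            rem.pop(0)
        num, den = den, rem
    return elements

# Z = (s^4 + 10 s^2 + 9)/(s^3 + 4 s), equation 4.54:
elements = cauer_expansion_at_infinity([1, 0, 10, 0, 9], [1, 0, 4, 0])
# L = 1, C = 1/6, L = 12/5, C = 5/18: exactly the values derived above.
assert elements == [Fraction(1), Fraction(1, 6),
                    Fraction(12, 5), Fraction(5, 18)]
```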
FIGURE 4.26 Synthesis of Equation 4.54 by Extracting Poles at Infinity

The remainder can be inverted and the admittance of an inductor subtracted again:

Y₃ = Y₂ − 1/(sL) = ((4 − 31/(4L)) + s²(1 − 1/L))/(31s/4 + s³).

The choice L = 31/16, connected in parallel, removes the first term and another pole at the origin. The process can be continued, with the resulting network shown in Figure 4.27.

FIGURE 4.27 Synthesis of Equation 4.55 by Extracting Poles at Zero

It is not necessary to continue one type of the expansion to the end. It is always possible, at any step, to rewrite the remaining function like we did going from equation 4.54 to 4.55 and continue. In addition, we need not remove one of the terms completely. All this indicates that synthesis of LC impedances or admittances is far from a unique procedure. Let us return to equation 4.54, where we subtracted L = 1; we subtract now only L = 0.5. The result will be as follows:

Z₂ = (0.5s⁴ + 8s² + 9)/(s³ + 4s) = 0.5(s² + 1.22)(s² + 14.78)/(s³ + 4s). (4.56)

We see that the partial removal did not simplify the function but shifted zeros of the original function into new positions. Normally, if no other conditions are imposed, we would always remove the full value because then the resulting number of elements is minimal. Partial removal is used in the synthesis of two-port networks, as is shown later in this chapter.

4.10 Synthesis of LC Two-Port Networks

4.10.1 Transfer Zeros at Infinity

In this section, we indicate synthesis steps for two-ports. It is a fairly complicated procedure, and we will try to explain it directly with examples. An LC two-port is described by its impedance or admittance parameters, equations 4.5 or 4.7. We will use the impedance parameters and assume that the functions have been decomposed into partial fractions. Because for passive networks z₁₂ = z₂₁, we need for full description only:

z₁₁ = k₀⁽¹¹⁾/s + Σ 2kᵢ⁽¹¹⁾s/(s² + ωᵢ²) + k∞⁽¹¹⁾s, possibly plus some LC impedance;

z₂₂ = k₀⁽²²⁾/s + Σ 2kᵢ⁽²²⁾s/(s² + ωᵢ²) + k∞⁽²²⁾s, possibly plus some LC impedance;

z₁₂ (= z₂₁) = k₀⁽¹²⁾/s + Σ 2kᵢ⁽¹²⁾s/(s² + ωᵢ²) + k∞⁽¹²⁾s.

The words possibly plus some LC impedance indicate that additional terms may exist in z₁₁ and/or z₂₂. These are called private impedances and do not influence z₁₂. They would appear as in Figure 4.28(A) for impedance parameters and as in Figure 4.28(B) for admittance parameters. After their removal, all zᵢⱼ must have the same poles, the residues of z₁₁ and z₂₂ must be positive, and residues of z₁₂ may be positive or negative. Before we start with the synthesis example, let us state that a removal of an element that is supposed to influence z₁₂ (or y₁₂) must be connected as in Figure 4.29(A) or 4.29(B). Let us now have the following two functions:

z₁₁ = s + 9/(4s) + (15/4)·s/(s² + 4),
z₁₂ = 1/(4s) − (s/4)/(s² + 4) = 1/(s(s² + 4)). (4.57)

The first term in z₁₁ is a private impedance (the pole at infinity does not appear in z₁₂) and can be removed as an inductor L = 1 in series. The remaining functions are these:

Z₁ = z₁₁ − s = 9/(4s) + (15/4)·s/(s² + 4) = (6s² + 9)/(s(s² + 4)),
z₁₂ = 1/(s(s² + 4)).

FIGURE 4.28 Removing Private (A) Impedances and (B) Admittances

FIGURE 4.29 Removals Affecting Transfer Immittances and Connection of the Last Element for (A) Impedance Description and (B) Admittance Description

The poles of both functions are the same; z₁₂ has only a constant in the numerator, and three powers of s are in the denominator. If we insert infinity for s in the denominator, the function will become zero, and we say that z₁₂ has three zeros at infinity. As was shown in the previous section, we cannot remove zeros as elements, but we can remove them as poles of inverted functions. Subtracting the admittance sC from Y₁ = 1/Z₁ leads to:

Y₂ = (s³ + 4s)/(6s² + 9) − sC = (s³(1 − 6C) + s(4 − 9C))/(6s² + 9).

Selecting C = 1/6 (realized as a capacitor in parallel) reduces the admittance to:

Y₂ = 5s/(12s² + 18),

and takes care of one of the transfer zeros of z₁₂. The remaining function, Y₂, has a zero at infinity. We can invert it and remove a pole at infinity:

Z₃ = 1/Y₂ − sL = (s²(12 − 5L) + 18)/(5s).

We realize L = 12/5 as an inductor in series. This has taken care of the second transfer zero of z₁₂. The remaining function is Z₄ = 18/(5s). By inverting it, we have Y₄ = 5s/18 and remove the last transfer zero as a pole at infinity by connecting a capacitor C = 5/18 in parallel. The whole network is in Figure 4.30. Let us now check our result logically. Inductors in series obstruct transfer of high frequencies, and capacitors in parallel represent short circuits at high frequencies. Hence, each of the capacitors in parallel and the middle inductor in series will indeed help in suppressing high frequencies and together create three zeros at infinity. We have used the above more complicated procedure to prepare the reader for synthesis of two-ports with finite transfer zeros. We could have achieved the same by dividing the numerator by the denominator in each of the steps. In fact, if the reader knows expansions into continued fractions, he or she should try it on 1/Z₁. It is the fastest way for the removal of transfer zeros at infinity.
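The result can be double-checked by cascading the ABCD (chain) matrices of the four elements of Figure 4.30. For an ABCD matrix, z₁₁ = A/C and z₂₁ = 1/C; the ladder reproduces z₁₁ exactly, while z₂₁ comes out as 9/(s(s² + 4)), that is, proportional to the prescribed z₁₂ (a constant multiplier is the usual free scale factor in this type of synthesis). A sketch with exact polynomial arithmetic (all helper functions are ours):

```python
from fractions import Fraction

# Polynomials in s as ascending coefficient lists of Fractions.
def padd(p, q):
    r = [Fraction(0)] * max(len(p), len(q))
    for i, c in enumerate(p): r[i] += c
    for i, c in enumerate(q): r[i] += c
    return r

def pmul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def mmul(A, B):
    """2x2 matrix product with polynomial entries."""
    return [[padd(pmul(A[i][0], B[0][j]), pmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

ONE, ZERO = [Fraction(1)], [Fraction(0)]
def series(L):   # series inductor, impedance Z = L*s
    return [[ONE, [Fraction(0), Fraction(L)]], [ZERO, ONE]]
def shunt(C):    # shunt capacitor, admittance Y = C*s
    return [[ONE, ZERO], [[Fraction(0), Fraction(C)], ONE]]

# The ladder of Figure 4.30: L = 1, C = 1/6, L = 12/5, C = 5/18.
T = series(1)
for stage in (shunt(Fraction(1, 6)), series(Fraction(12, 5)),
              shunt(Fraction(5, 18))):
    T = mmul(T, stage)

A, C_param = T[0][0], T[1][0]
# z11 = A/C_param should equal (s^4 + 10 s^2 + 9)/(s^3 + 4 s);
# cross-multiplied: A*(s^3 + 4s) == C_param*(s^4 + 10 s^2 + 9).
lhs = pmul(A, [Fraction(0), Fraction(4), Fraction(0), Fraction(1)])
rhs = pmul(C_param, [Fraction(9), Fraction(0), Fraction(10),
                     Fraction(0), Fraction(1)])
assert padd(lhs, [-c for c in rhs]) == [Fraction(0)] * len(lhs)
# z21 = 1/C_param has all its zeros at infinity: C_param is a constant
# multiple of s^3 + 4s (here (s^3 + 4s)/9).
assert [c * 9 for c in C_param] == [Fraction(0), Fraction(4),
                                    Fraction(0), Fraction(1)]
```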
4.10.2 Transfer Zeros on the Imaginary Axis

As we have demonstrated in equation 4.56, partial removal does not reduce the degree of the function but shifts the zeros to different places. This feature is used in the synthesis. We will indicate the method in an example. Consider:

z₁₁ = (s² + 2)(s² + 6)/(s(s² + 3)) = (s⁴ + 8s² + 12)/(s(s² + 3)) = s + 4/s + s/(s² + 3),
z₁₂ = (s² + 1)(s² + 4)/(s(s² + 3)) = s + 4/(3s) + 2s/(3(s² + 3)). (4.58)

The transfer impedance has transfer zeros at s = ±j1 and s = ±j2. Both z₁₁ and z₁₂ have the same poles. There are no private poles in z₁₁; the functions are ratios of even and odd polynomials, and the zeros and poles of z₁₁ interchange. Thus, all the conditions for realizability are satisfied. In the first step, we wish to take into consideration the transfer zero at ±j1. We substitute this value into z₁₁ to obtain:

z₁₁(j1) = 5/(2j). (4.59)
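Equations 4.58 and 4.59 are easy to confirm with complex arithmetic, as is the zero-shifting step used next; a short sketch:

```python
import cmath

def z11(s):
    # Equation 4.58: z11 = (s^2 + 2)(s^2 + 6) / (s (s^2 + 3))
    return (s**2 + 2) * (s**2 + 6) / (s * (s**2 + 3))

# z11(j1) = 5/(2j), equation 4.59: at omega = 1 the function looks
# like a capacitance 1/(j omega C) with C = 2/5.
assert cmath.isclose(z11(1j), 5 / 2j)

# Removing the impedance 5/(2s) of that capacitance shifts a zero
# pair of z11 exactly onto the transfer zeros at s = +/- j1:
Z2 = lambda s: z11(s) - 5 / (2 * s)
assert abs(Z2(1j)) < 1e-12 and abs(Z2(-1j)) < 1e-12
```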
FIGURE 4.30 Synthesis of Equation 4.57

Equation 4.59 behaves as a capacitance; at ω₁ = 1, we have 5/(2j) = 1/(jω₁C) = 1/(j1·C), from which C = 2/5. The capacitance and z₁₁ have a pole at the origin, and we subtract:

Z₂ = z₁₁ − 5/(2s) = (s⁴ + 11s²/2 + 9/2)/(s(s² + 3)). (4.60)

The intention is to get the numerator polynomial with zeros at s = ±j1. We can now divide the numerator of equation 4.60 by the term s² + 1 and obtain the decomposition:

Z₂ = (s² + 1)(s² + 9/2)/(s(s² + 3)). (4.61)

Notice the neat trick we used to shift the zeros by first evaluating equation 4.59. Using the above steps, we have realized the left capacitor in Figure 4.31. Z₂ now has the same number of zeros as the transfer function. We remove them as poles of the inverted function:

Y₂ = s(s² + 3)/((s² + 1)(s² + 9/2)). (4.62)

To be removed is an admittance of a series-tuned circuit of the form 2kᵢs/(s² + ωᵢ²). This is done by the formula 4.49:

2k₁ = [(s² + 1)/s]·s(s² + 3)/((s² + 1)(s² + 9/2))|s²=−1 = 4/7. (4.63)

The element values of the tuned circuit are:

Y_tc = (4s/7)/(s² + 1) = 1/(7s/4 + 7/(4s)), (4.64)

with the result L = 7/4 and C = 4/7. The series tuned circuit, connected in parallel, is on the left of Figure 4.31. In the next step, we remove the expression for the tuned circuit from Y₂:

Y₃ = s(s² + 3)/((s² + 1)(s² + 9/2)) − (4s/7)/(s² + 1) = 3s/(7(s² + 9/2)).

Then we return to the impedance:

Z₃ = (7/3)·(s² + 9/2)/s.

There is another transfer zero to be removed, s = ±j2. The procedure is repeated by first evaluating Z₃(j2) = 7/(j12). It behaves as a capacitance, 7/(j12) = 1/(jω₂C₂) = 1/(j2C₂), with the result C₂ = 6/7. The impedance corresponding to the capacitance has a pole at the origin, and Z₃ has the same pole, so a partial removal of the pole is possible:

Z₄ = Z₃ − 1/(sC₂) = (7/3)·(s² + 9/2)/s − 7/(6s) = 7(s² + 4)/(3s).

Because now Z₄ has a zero at the proper place, we can remove it as a pole of a tuned circuit:

Y₄ = 3s/(7(s² + 4)) = 3s/(7s² + 28),

with the result L = 7/3 and C = 3/28; see Figure 4.31.

FIGURE 4.31 Synthesis of Equation 4.58

4.11 All-Pass Networks

In filter design, we place the transfer zeros almost always on the imaginary axis because this secures the largest suppression of the signal in the stopband. In most cases, a low-pass filter should have approximately constant transfer of low frequencies and rapid suppression of frequencies beyond the specified passband. This may result in almost constant transfer of the desired frequencies, but the group delay response is then far from ideal. We have analyzed one such filter, Figure 4.12, with its amplitude responses in Figure 4.13(A) through Figure 4.13(D) and with the group delay in Figure 4.14. In special cases, it may be necessary to compensate for the nonideal group delay response, and this can be done with all-pass networks. They modify only the group delay and pass all frequencies without attenuation.
Consider the symmetrical pole and zero positions sketched in Figure 4.32(A) and Figure 4.32(B) and the formulas for amplitude and group delay responses, equations 4.25 and 4.27. Symmetrical positions with respect to both axes make aᵢ = gᵢ and bᵢ = dᵢ. When these are inserted into equation 4.25, the numerator cancels against the denominator, and the expression in large brackets becomes equal to one. Inserted into equation 4.27, the two terms add, and:

τ_all-pass = 2 Σᵢ₌₁ᴹ aᵢ/(aᵢ² + (ω − bᵢ)²). (4.65)

FIGURE 4.32 Possible Pole-Zero Positions for All-Pass Networks

For further study, we will use the two-port theory and the impedance parameters. Consider the network in Figure 4.33, and remove the dotted resistor. Without it, we have two voltage dividers:

V₂ = V_B − V_A = [(Z₂ − Z₁)/(Z₂ + Z₁)]·V₁.

The current flowing from the source into the two branches will be:

I₁ = 2V₁/(Z₁ + Z₂),

and the entries of the two-port impedance matrix are:

Z = | (Z₂ + Z₁)/2   (Z₂ − Z₁)/2 |
    | (Z₂ − Z₁)/2   (Z₂ + Z₁)/2 |.  (4.66)

FIGURE 4.33 Deriving Properties of an All-Pass Network

If we connect the dotted resistor, the situation changes:

Z_TR = z₂₁/(1 + z₂₂), (4.67)

derived already in equation 4.43, and the input impedance becomes:

Z_in = (z₁₁ + z₁₁z₂₂ − z₁₂z₂₁)/(1 + z₂₂), (4.68)

obtained in equation 4.10. Let us now impose the condition:

Z₁Z₂ = 1. (4.69)

It changes the matrix of equation 4.66 into:

Z = | (1 + Z₁²)/(2Z₁)   (1 − Z₁²)/(2Z₁) |
    | (1 − Z₁²)/(2Z₁)   (1 + Z₁²)/(2Z₁) |.  (4.70)

Inserting these entries into equation 4.68, we get, after a few simple algebraic steps, the surprising result:

Z_in = 1. (4.71)

The input impedance of the combination is equal to the loading resistor, irrespective of the value of Z₁, as long as we maintain the condition that Z₂ = 1/Z₁. Inserting this condition into equation 4.67 results in:

Z_TR = (1 − Z₁)/(1 + Z₁). (4.72)

Let us now take the special case Z₁ = sL. The transfer impedance becomes Z_TR = (1 − sL)/(1 + sL), and we get the design parameters shown in Figure 4.34. The figure uses an accepted practice to draw only two of the impedances and to indicate the presence of the other two by dashed lines. Suppose we take for Z₁ a parallel tuned circuit with elements L and C. In such a case:

Z₁ = sL/(s²LC + 1). (4.73)

Inserting into equation 4.72, we get:
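The defining properties, unit magnitude |Z_TR(jω)| = 1 at every frequency and the group delay of equation 4.65, can be verified numerically for the first-order case Z₁ = sL; a sketch (L = 2 is an arbitrary choice):

```python
import cmath
import math

L = 2.0                                   # any positive value; a = 1/L
H = lambda s: (1 - s * L) / (1 + s * L)   # Z_TR of equation 4.72, Z1 = sL

# All-pass: unit magnitude at every real frequency.
for w in (0.1, 0.5, 1.0, 3.0, 10.0):
    assert math.isclose(abs(H(1j * w)), 1.0)

# Group delay from equation 4.65: one real pole at -a, b = 0.
a = 1 / L
tau = lambda w: 2 * a / (a**2 + w**2)

def tau_numeric(w, h=1e-6):
    """Numerical group delay, -d(arg H)/d(omega)."""
    return -(cmath.phase(H(1j * (w + h)))
             - cmath.phase(H(1j * (w - h)))) / (2 * h)

for w in (0.2, 1.0, 2.5):
    assert math.isclose(tau(w), tau_numeric(w), rel_tol=1e-4)
```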
Z_TR = (s² − s/C + 1/LC)/(s² + s/C + 1/LC). (4.74)

Comparing the denominator with

(s + a + jb)(s + a − jb) = s² + 2as + (a² + b²),

we conclude that C = 1/(2a) and L = 2a/(a² + b²). The network and its design values are in Figure 4.35. Remember that the loading resistor was normalized to unit value, and for other values of R, we must also scale the impedances of these elements. The above networks have one considerable disadvantage: the output is taken between two nodes and not with respect to ground. Connection to filters would have to be done through a transformer. All-pass networks with input and output taken with respect to ground also exist but require a transformer in the all-pass.

FIGURE 4.34 Position of One All-Pass Pole-Zero Pair and Network Realization (L = 1/a, C = 1/a, R = 1)

FIGURE 4.35 Positions of Two All-Pass Pole-Zero Pairs and Network Realization (L₁ = 2a/(a² + b²), C₁ = 1/(2a), L₂ = 1/(2a), C₂ = 2a/(a² + b²), R = 1)

4.12 Summary

We have presented simplified steps for the synthesis of LC filters. The intention was to provide enough information for a general understanding and for encouraging the reading of more advanced books. The material should be sufficient for practical situations when tables of filters can be used. We have intentionally skipped synthesis of RC networks. The theory is available but is almost never used for designs, and it is in many respects similar to what has been presented. Synthesis of RC networks can be found in specialized books. All references except Mitra (1969), which is an early text concerning active network synthesis, are classic texts about passive network synthesis.

References

Balabanian, N. (1958). Network synthesis. Englewood Cliffs, NJ: Prentice Hall.
Guillemin, E.A. (1957). Synthesis of passive networks. New York: John Wiley & Sons.
Mitra, S.K. (1969). Analysis and synthesis of linear active networks. New York: John Wiley & Sons.
Schaumann, R., Ghausi, M.S., and Laker, K.R. (1990). Design of analog filters: Passive, active RC, and switched capacitor. Englewood Cliffs, NJ: Prentice Hall.
Temes, G.C., and Mitra, S.K. (1973). Modern filter theory and design. New York: Wiley-Interscience.
Van Valkenburg, M.E. (1960). Introduction to modern network synthesis. New York: John Wiley & Sons.
Weinberg, L. (1962). Network analysis and synthesis. New York: McGraw-Hill.
Zverev, A.I. (1976). Handbook of filter synthesis. New York: John Wiley & Sons.
5 Nonlinear Circuits

Ljiljana Trajković, School of Engineering Science, Simon Fraser University, Vancouver, British Columbia, Canada
5.1 Introduction
5.2 Models of Physical Circuit Elements
    5.2.1 Two-Terminal Nonlinear Resistive Elements . 5.2.2 Two-Terminal Nonlinear Dynamical Elements . 5.2.3 Three-Terminal Nonlinear Resistive Elements . 5.2.4 Multiterminal Nonlinear Resistive Elements . 5.2.5 Qualitative Properties of Circuit Elements
5.3 Voltages and Currents in Nonlinear Circuits
    5.3.1 Graphical Method for Analysis of Simple Nonlinear Circuits . 5.3.2 Computer-Aided Tools for Analysis of Nonlinear Circuits . 5.3.3 Qualitative Properties of Circuit Solutions
5.4 Open Problems
References
5.1 Introduction

Nonlinearity plays a crucial role in many systems and qualitatively affects their behavior. For example, most circuits emanating from the designs of integrated electronic circuits and systems are nonlinear. It is not an overstatement to say that the majority of interesting electronic circuits and systems are nonlinear. Circuit nonlinearity is often a desirable design feature. Many electronic circuits are designed to employ the nonlinear behavior of their components. The nonlinearity of the circuits' elements is exploited to provide functionality that could not be achieved with linear circuit elements. For example, nonlinear circuit elements are essential building blocks in many well-known electronic circuits, such as bistable circuits, flip-flops, static shift registers, static RAM cells, latch circuits, oscillators, and Schmitt triggers: they all require nonlinear elements to function properly. The global behavior of these circuits differs from the behavior of amplifiers or logic gates in a fundamental way: they must possess multiple, yet isolated, direct current (dc) operating points (also called equilibrium points). This is possible only if a nonlinear element (such as a transistor) is employed. A nonlinear circuit or a network (a circuit with a relatively large number of components) consists of at least one nonlinear
element, not counting the voltage and current independent sources. A circuit element is called nonlinear if the constitutive relationship between its voltage (established across it) and its current (flowing through it) is a nonlinear function or a nonlinear relation. All physical circuits are nonlinear. When analyzing circuits, we often simplify the behavior of circuit elements and model them as linear elements. This simplifies the model of the circuit to a linear system, which is then described by a set of linear equations that are easier to solve. In many cases, this simplification is not possible, and the nonlinear elements govern the behavior of the circuit. In such cases, approximation to a linear circuit is possible only over a limited range of circuit variables (voltages and currents), over a limited range of circuit parameters (resistance, capacitance, or inductance), or over a limited range of environmental conditions (temperature, humidity, or aging) that may affect the behavior of the circuit. If global behavior of a circuit is sought, linearization is often not possible, and the designer has to deal with the circuit's nonlinear behavior (Chua et al., 1987; Hasler and Neirynck, 1986; Mathis, 1987; Willson, 1975). As in the case of a linear circuit, Kirchhoff's voltage and current laws and Tellegen's theorem are valid for nonlinear circuits. They deal only with a circuit's topology (the manner in which the circuit elements are connected together). Kirchhoff's voltage and current laws express linear relationships
between a circuit's voltages or currents. The nonlinear relationships between circuit variables stem from the elements' constitutive relationships. As a result, the equations governing the behavior of a nonlinear circuit are nonlinear. In cases where a direct current response of the circuit is sought, the governing equations are nonlinear algebraic equations. In cases where transient behavior of a circuit is examined, the governing equations are nonlinear differential-algebraic equations. Analysis of nonlinear circuits is more difficult than the analysis of linear circuits. Various tools have been used to understand and capture nonlinear circuit behavior. Some approaches employ quantitative (numerical) techniques and the use of circuit simulators for finding the distribution of a circuit's currents and voltages for a variety of waveforms supplied by sources. Other tools employ qualitative analysis methods, such as those dealing with the theory for establishing the number of dc operating points a circuit may possess or dealing with the analysis of stability of a circuit's operating point.
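As a minimal concrete example of a dc operating-point computation, consider a voltage source Vs in series with a resistor R and an exponential diode. This is a sketch: the element values are illustrative, and production circuit simulators use Newton-type iterations rather than the bisection used here for its guaranteed convergence:

```python
import math

def diode_circuit_dc(Vs, R, Is=1e-12, VT=0.02585):
    """DC operating point of a series loop: source Vs, resistor R, and
    an exponential diode i = Is (exp(v/VT) - 1).

    The nodal equation f(v) = (Vs - v)/R - Is (exp(v/VT) - 1) = 0 is a
    nonlinear algebraic equation; f is strictly decreasing in v, so
    bisection on [0, Vs] converges for any Vs > 0.
    """
    f = lambda v: (Vs - v) / R - Is * (math.exp(v / VT) - 1)
    lo, hi = 0.0, Vs                 # f(lo) > 0, f(hi) < 0
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

v = diode_circuit_dc(Vs=5.0, R=1000.0)
assert 0.4 < v < 0.8                 # a silicon-like diode drop
i = (5.0 - v) / 1000.0               # resistor and diode currents agree
assert math.isclose(i, 1e-12 * (math.exp(v / 0.02585) - 1), rel_tol=1e-6)
```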
5.2 Models of Physical Circuit Elements

Circuit elements are models of physical components. Every physical component is nonlinear. The models that capture the behavior of these components over a wide range of values of voltages and currents are nonlinear. To some degree of simplification, circuit elements may be modeled as linear elements that allow for an easier analysis and prediction of a circuit's behavior. These linear models are convenient simplifications that are often valid only over a limited range of the element's voltages and currents. Another simplification, often used when seeking to calculate the distribution of a circuit's voltages and currents, is a piecewise-linear approximation of the nonlinear element's characteristics. The piecewise-linear approximation represents a nonlinear function with a set of nonoverlapping linear segments closely resembling the nonlinear characteristic. For example, this approach is used when calculating circuit voltages and currents for rather simple nonlinear electronic circuits or when applying certain computer-aided software design tools. Nonlinear circuit elements may be classified based on their constitutive relationships (resistive and dynamical) and based on the number of the element's terminals.
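The payoff of a piecewise-linear model is that each segment turns the circuit into a linear one that can be solved in closed form; a small sketch with a two-segment diode model (the breakpoint Von and slope resistance Ron are illustrative fitting values, not standard constants):

```python
def diode_pwl(v, Von=0.6, Ron=1.0):
    """Two-segment piecewise-linear diode: off (i = 0) below Von, a
    straight line of slope 1/Ron above it."""
    return 0.0 if v < Von else (v - Von) / Ron

def solve_series_pwl(Vs, R, Von=0.6, Ron=1.0):
    """Source Vs in series with R and the PWL diode.  On each linear
    segment the circuit is linear, so each segment is solved in closed
    form and the self-consistent one is kept."""
    # "On" segment: v = Von + Ron*i together with i = (Vs - v)/R.
    i_on = (Vs - Von) / (R + Ron)
    if i_on >= 0:
        return Von + Ron * i_on, i_on
    return Vs, 0.0                    # "off" segment: no current, v = Vs

v, i = solve_series_pwl(5.0, 1000.0)
assert abs(i - 4.4 / 1001.0) < 1e-12  # on-segment closed-form current
assert 0.6 < v < 0.61
v2, i2 = solve_series_pwl(0.3, 1000.0)
assert i2 == 0.0 and v2 == 0.3        # below the breakpoint: diode off
```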
5.2.1 Two-Terminal Nonlinear Resistive Elements

The most common two-terminal nonlinear element is a nonlinear resistor. Its constitutive relationship is:

f(i, v) = 0, (5.1)

where f is a nonlinear mapping. In special cases, the mapping can be expressed as:

i = g(v), (5.2)

when it is called a voltage-controlled resistor, or:

v = h(i), (5.3)
when it is a current-controlled resistor. Examples of nonlinear characteristics of two-terminal nonlinear resistors are shown in Figure 5.1(A). Nonlinear resistors commonly used in electronic circuits are diodes: exponential, zener, and tunnel (Esaki) diodes. They are built from semiconductor materials and are often used in the design of electronic circuits along with linear elements that provide appropriate biasing for the nonlinear components. A circuit symbol for an exponential diode and its characteristic are given in Figure 5.1(B). The diode model is often simplified, and the current flowing through the diode in the reverse-biased region is assumed to be zero. Further simplification leads to a model of a switch: the diode is either on (voltage across the diode terminals is zero) or off (current flowing through the diode is zero). These simplified diode models are also nonlinear. Three such characteristics are shown in Figure 5.1(B). The zener diode, shown in Figure 5.1(C), is another commonly used electrical component. Tunnel diodes have a more complex nonlinear characteristic, as shown in Figure 5.1(D). This nonlinear element is qualitatively different from exponential diodes. It permits multiple values of voltages across the diode for the same value of current flowing through it. Its characteristic has a region where its derivative (differential resistance) is negative. The silicon-controlled rectifier, used in many switching circuits, is a two-terminal nonlinear element with similar behavior. It is a current-controlled resistor that permits multiple values of currents flowing through it for the same value of voltage across its terminals. A symbol for the silicon-controlled rectifier and its current-controlled characteristic are shown in Figure 5.1(E). Two simple nonlinear elements, called the nullator and the norator, have been introduced mainly for theoretical analysis of nonlinear circuits.
The nullator is a two-terminal element defined by the constitutive relations:

v = i = 0.  (5.4)
The current and voltage of a norator are not subject to any constraints. Combinations of nullators and norators have been used to model multiterminal nonlinear elements, such as transistors and operational amplifiers. Independent voltage and current sources are also considered to belong to the family of nonlinear resistors. In contrast to the nonlinear resistors described above, which are all passive for the choice of characteristics presented here, independent voltage and current sources are active circuit elements.
5 Nonlinear Circuits
FIGURE 5.1 Two-Terminal Nonlinear Circuit Elements: (A) Nonlinear resistor and examples of three nonlinear constitutive characteristics (a. general, b. voltage-controlled, and c. current-controlled). (B) Exponential diode and its three simplified characteristics. (C) Zener diode and its characteristic with a breakdown region for negative values of the diode voltage. (D) Tunnel diode and its voltage-controlled characteristic. (E) Silicon-controlled rectifier and its current-controlled characteristic.
5.2.2 Two-Terminal Nonlinear Dynamical Elements

Nonlinear resistors have a common characteristic: their constitutive relationships are described by nonlinear algebraic equations. Dynamical nonlinear elements, such as nonlinear inductors and capacitors, are described by nonlinear differential equations. The constitutive relation of a nonlinear capacitor is:

f(v, q) = 0,  (5.5)

where the charge q and the current i are related by:

i = dq/dt.  (5.6)

The constitutive relation for a nonlinear inductor is:

f(φ, i) = 0,  (5.7)

where the flux φ and the voltage v are related by:

v = dφ/dt.  (5.8)

A generalization of the nonlinear resistor that has memory, called a memristor, is a one-port element defined by a constitutive relationship:

f(φ, q) = 0,  (5.9)

where the flux φ and charge q are defined in the usual sense. In special cases, the element may model a nonlinear resistor, capacitor, or inductor.

5.2.3 Three-Terminal Nonlinear Resistive Elements

Ljiljana Trajković

FIGURE 5.2 Three-Terminal Nonlinear Resistive Element: A Circuit Symbol and Characteristics of an n–p–n Bipolar Junction Transistor

A three-terminal nonlinear circuit element commonly used in the design of electronic circuits is the bipolar junction transistor (Ebers and Moll, 1954; Getreu, 1976). It is a current-controlled nonlinear element used in the design of various integrated circuits. It is used in circuit designs ranging from those that
rely on the linear region of the element's characteristic (logic gates and arithmetic circuits) to designs that rely on the element's nonlinear behavior (flip-flops, static memory cells, and Schmitt triggers). A circuit symbol of the bipolar junction transistor and its simplified characteristics are shown in Figure 5.2. The large-signal behavior of a bipolar junction transistor is usually specified graphically by a pair of graphs with families of curves. A variety of field-effect transistors (the most popular being the metal-oxide-semiconductor type) are voltage-controlled nonlinear elements used in a similar fashion to bipolar junction transistors (Massobrio and Antognetti, 1993).
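The Ebers–Moll large-signal equations referred to above can be evaluated directly; this sketch uses the injection form of the model with illustrative parameter values (αF, αR, IES, ICS), not values from the text:

```python
import math

VT = 0.0259                    # thermal voltage (V)
ALPHA_F, ALPHA_R = 0.99, 0.5   # forward/reverse current gains (illustrative)
IES, ICS = 1e-14, 2e-14        # junction saturation currents (A), illustrative

def ebers_moll(vbe, vbc):
    """Ebers-Moll model of an n-p-n BJT (injection form).
    Returns (ie, ic): emitter and collector currents."""
    f = math.exp(vbe / VT) - 1.0   # base-emitter junction term
    r = math.exp(vbc / VT) - 1.0   # base-collector junction term
    ie = IES * f - ALPHA_R * ICS * r
    ic = ALPHA_F * IES * f - ICS * r
    return ie, ic

# Forward-active region: vbe > 0, vbc < 0, so ic ≈ ALPHA_F * ie.
ie, ic = ebers_moll(0.65, -2.0)
print(ic / ie)   # ≈ 0.99
```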
5.2.4 Multiterminal Nonlinear Resistive Elements

A commonly used multiterminal circuit element is the operational amplifier (op-amp). It is usually built from linear resistors and bipolar junction or field-effect transistors. The behavior of an operational amplifier is often simplified, and its circuit is modeled as a linear multiterminal element. Over a limited range of the op-amp input voltage, the relationship between the op-amp input and output voltages is assumed to be linear. The nonlinearity of the op-amp circuit elements is employed in such a way that the behavior of the op-amp can be approximated with a simple linear characteristic. Nevertheless, even with such simplification, the saturation of the op-amp characteristic for larger values of input voltage needs to be taken into account. This makes the op-amp a nonlinear element. The op-amp circuit symbol and its simplified characteristic are shown in Figure 5.3.
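The simplified op-amp behavior described here, linear over a limited input range and saturating at the supply rails, amounts to a clipped linear gain; a minimal sketch with an assumed open-loop gain A and rail voltage Vcc (both illustrative):

```python
A = 1e5      # open-loop gain (illustrative)
VCC = 12.0   # supply rail (V), illustrative

def opamp_vo(vp, vn):
    """Saturating op-amp model: vo = A*(vp - vn), clipped to [-Vcc, +Vcc]."""
    vo = A * (vp - vn)
    return max(-VCC, min(VCC, vo))

print(opamp_vo(1e-5, 0.0))   # ≈ 1.0: linear region
print(opamp_vo(0.1, 0.0))    # 12.0: saturated at the positive rail
```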
5.2.5 Qualitative Properties of Circuit Elements

Two simple albeit fundamental attributes of circuit elements are passivity and no-gain. They have both proven useful
in applying certain modern mathematical techniques for analyzing and solving nonlinear circuit equations. A circuit element is passive if it absorbs power (i.e., if at any operating point, the net power delivered to the element is nonnegative). A nonlinear resistor is passive if for any pair (v, i) of voltage and current, its constitutive relationship satisfies:

vi ≥ 0.  (5.10)
Otherwise, the element is active. Hence, a nonlinear resistor is passive if its characteristic lies in the first and third quadrants. A circuit is passive if all its elements are passive. A multiterminal element is a no-gain element if every connected circuit containing that element, positive resistors, and independent sources possesses the no-gain property. A circuit possesses the no-gain property if, for each solution to the dc network equations, the magnitude of the voltage between any pair of nodes is not greater than the sum of the magnitudes of the voltages appearing across the network's independent sources. In addition, the magnitude of the current flowing through any element of the network must not be greater than the sum of the magnitudes of the currents flowing through the network's independent sources. For example, when considering a multiterminal element's large-signal dc behavior, Ebers-Moll modeled bipolar junction transistors, field-effect transistors, silicon-controlled rectifiers, and many other three-terminal devices are passive and incapable of producing voltage or current gains (Gopinath and Mitra, 1971; Willson, 1975). Clearly, the no-gain criterion is more restrictive than passivity: passivity always follows as a consequence of no-gain, although it is quite possible to cite examples (e.g., the ideal transformer) of passive components capable of providing voltage or current gains.
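The passivity condition vi ≥ 0 can be tested numerically by sampling a characteristic and checking that it stays in the first and third quadrants; the tanh characteristic below is a hypothetical passive resistor, and the shifted version is an active one (both chosen for illustration only):

```python
import math

def is_passive(g, vmin=-10.0, vmax=10.0, n=2001):
    """Sample a voltage-controlled characteristic i = g(v) and check
    the passivity condition v * i >= 0 at every sample point."""
    for k in range(n):
        v = vmin + k * (vmax - vmin) / (n - 1)
        if v * g(v) < 0.0:
            return False
    return True

print(is_passive(math.tanh))                     # True: quadrants I and III only
print(is_passive(lambda v: math.tanh(v) - 0.5))  # False: the shifted characteristic is active
```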
FIGURE 5.3 Multiterminal Nonlinear Circuit Element: An Op-Amp with Five Terminals, its Simplified Circuit Symbol, and an Ideal Characteristic
5.3 Voltages and Currents in Nonlinear Circuits

Circuit voltages and currents are solutions to the nonlinear differential-algebraic equations that describe circuit behavior. Of particular interest are circuit equilibrium points, which are solutions to the associated nonlinear algebraic equations. Hence, circuit equilibrium points are related to dc operating points of a resistive circuit with all capacitors and inductors removed. If capacitor voltages and inductor currents are chosen as state variables, there is a one-to-one correspondence between the circuit's equilibrium points and dc operating points (Chua et al., 1987). Analyzing nonlinear circuits is often difficult. Only a few simple circuits are adequately described by equations that have a closed-form solution. Contrary to linear circuits, which consist of linear elements only (excluding the independent current and voltage sources), nonlinear circuits may possess multiple solutions or may not possess a solution at all (Willson, 1994). A trivial example is a circuit consisting of a current source and an exponential diode, where the value of the dc current supplied by the current source is more negative than the asymptotic value of the current permitted by the diode characteristic when the diode is reverse-biased. It is also possible to construct circuits employing the Ebers-Moll modeled (Ebers and Moll, 1954) bipolar junction transistors whose dc equations have no solution (Willson, 1995).
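The no-solution example can be made concrete: a dc current source I0 driving an exponential diode requires Is(e^(v/Vt) − 1) = I0, which has a solution only when I0 > −Is. A sketch with illustrative diode parameters:

```python
import math

IS, VT = 1e-14, 0.0259   # illustrative diode parameters

def diode_voltage_for_current(i0):
    """Invert i = Is*(exp(v/Vt) - 1).  Returns the diode voltage,
    or None when no dc solution exists (i0 <= -Is)."""
    if i0 <= -IS:
        return None
    return VT * math.log(i0 / IS + 1.0)

print(diode_voltage_for_current(1e-3))    # ≈ 0.66 V: a valid operating point
print(diode_voltage_for_current(-2e-14))  # None: the source demands more than -Is
```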
5.3.1 Graphical Method for Analysis of Simple Nonlinear Circuits

Voltages and currents in circuits containing only a few nonlinear circuit elements may be found using graphical methods for solving the nonlinear equations that describe the behavior of the circuit. A simple nonlinear circuit consisting
FIGURE 5.4 Simple Nonlinear Circuit and a Graphical Approach for Finding its DC Operating Point (OP): The circuit’s load line is obtained by applying Kirchhoff ’s voltage law. The intersection of diode’s exponential characteristic and the load line provides the circuit’s dc operating point.
of a constant voltage source, a linear resistor, and an exponential diode is shown in Figure 5.4. The circuit equations can be solved using a graphical method. The solution is the circuit's dc operating point, found as the intersection of the diode characteristic and the "load line." The load line is obtained by applying Kirchhoff's voltage law to the circuit's single loop. Another simple nonlinear circuit, shown in Figure 5.5, is used to rectify a sinusoidal signal. If the diode is ideal, the sinusoidal signal propagates unaltered and the current flowing through the diode is an ideally rectified sinusoidal signal. The circuit's steady-state response can be found graphically. The diode is off for values vs(t) ≤ Vdd and on for values vs(t) > Vdd. Hence:

id(t) = 0 for vs(t) ≤ Vdd,
id(t) = (vs(t) − Vdd)/R for vs(t) > Vdd.  (5.11)
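The operating point in Figure 5.4 can also be computed numerically. Kirchhoff's voltage law gives Vdd = R·id + vd, with id = Is(e^(vd/Vt) − 1); a minimal Newton iteration, using illustrative values for Vdd, R, and the diode parameters (not values from the text):

```python
import math

IS, VT = 1e-14, 0.0259   # diode parameters (illustrative)
VDD, R = 5.0, 1000.0     # source and resistor (illustrative)

def operating_point(v=0.6, tol=1e-12, max_iter=100):
    """Solve f(v) = Is*(exp(v/Vt) - 1) - (Vdd - v)/R = 0 by Newton's method.
    Returns the diode voltage and current at the dc operating point."""
    for _ in range(max_iter):
        i_d = IS * (math.exp(v / VT) - 1.0)
        f = i_d - (VDD - v) / R          # mismatch between diode curve and load line
        df = (IS / VT) * math.exp(v / VT) + 1.0 / R
        step = f / df
        v -= step
        if abs(step) < tol:
            break
    return v, (VDD - v) / R

vd, id_ = operating_point()
print(vd, id_)   # vd ≈ 0.69 V, id ≈ 4.3 mA
```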
FIGURE 5.5 A Simple Rectifier Circuit with an Ideal Diode: The steady-state solution can be found graphically.
5.3.2 Computer-Aided Tools for Analysis of Nonlinear Circuits

In most cases, a computer program is used to numerically solve the nonlinear differential-algebraic circuit equations. The most popular circuit analysis program is SPICE (Massobrio and Antognetti, 1993; Quarles et al., 1994; Vladimirescu, 1994). The original SPICE code has been modified and enhanced in numerous electronic design automation tools that employ computer programs to analyze complex integrated circuits containing thousands of nonlinear elements, such as bipolar junction and field-effect transistors. One of the most important problems when designing a transistor circuit is to find its dc operating point(s) (i.e., the voltages and currents in the circuit when all sources are dc sources). The mathematical problem of finding a nonlinear circuit's dc operating points is described by a set of nonlinear algebraic equations constructed by applying Kirchhoff's voltage and current laws and by employing the characteristics of the circuit elements. A common numerical approach for finding these operating points is the Newton–Raphson method and its variants. These methods require a good starting point and sometimes fail. This is known as the dc convergence problem. In the past decade, several numerical techniques based on continuation, parameter-embedding, or homotopy methods have been proposed to successfully solve convergence problems that often appear in circuits possessing more than one dc operating point (Melville et al., 1993; Trajković, 1999; Wolf and Sanders, 1996; Yamamura and Horiuchi, 1990; Yamamura et al., 1999).
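The continuation idea behind these homotopy methods can be illustrated on a scalar equation: embed f(x) = 0 in h(x, λ) = (1 − λ)(x − x0) + λf(x), whose solution at λ = 0 is the chosen starting point x0, and track the solution as λ is swept to 1. A toy sketch, not the circuit-level formulation of the cited papers:

```python
def solve_by_homotopy(f, df, x0=0.0, steps=50, newton_iters=20, tol=1e-12):
    """Track the zero of h(x, t) = (1 - t)*(x - x0) + t*f(x)
    from t = 0 (where the solution is x0) to t = 1 (where h = f),
    applying Newton corrections at each continuation step."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            h = (1.0 - t) * (x - x0) + t * f(x)
            dh = (1.0 - t) + t * df(x)
            if dh == 0.0:
                break          # singular point; a real tracker would retry with a smaller step
            step = h / dh
            x -= step
            if abs(step) < tol:
                break
    return x

# Plain Newton on f(x) = x**3 - 8 fails from x = 0 (f'(0) = 0);
# the homotopy walks the solution from x0 = 0 to the root x = 2.
root = solve_by_homotopy(lambda x: x**3 - 8.0, lambda x: 3.0 * x**2)
print(root)   # ≈ 2.0
```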
5.3.3 Qualitative Properties of Circuit Solutions

Many fundamental issues arise in the case of nonlinear circuits concerning a solution's existence, uniqueness, continuity, boundedness, and stability. Mathematical tools used to address these issues and examine properties of nonlinear circuits range from purely numerical to geometric (Smale, 1972). Briefly addressed here are solutions to a circuit's equations and several fundamental results concerning their properties.
Existence and Uniqueness of Solutions
Circuits with nonlinear elements may have multiple discrete dc operating points (equilibria). In contrast, circuits consisting of positive linear resistors possess either one dc operating point or, in special cases, a continuous family of dc operating points. Many resistive circuits consisting of independent voltage sources and voltage-controlled resistors, whose v–i characteristics are continuous strictly monotone-increasing functions, have at most one solution (Duffin, 1947; Minty, 1960; Willson, 1975). Many transistor circuits possess the same property based on their topology alone. Other circuits, such as flip-flops and memory cells, possess feedback structures (Nielsen and Willson, 1980). These circuits may possess multiple operating points with an appropriate choice of circuit parameters and biasing of transistors (Trajković and Willson, 1992). For example, a circuit containing two bipolar transistors possesses at most three isolated dc operating points (Lee and Willson, 1983). Estimating the number of dc operating points, or even their upper bound, for an arbitrary nonlinear circuit is still an open problem (Lagarias and Trajković, 1999).

Continuity and Boundedness of Solutions
Properties such as continuity and boundedness of solutions are often desired in well-behaved circuits. One would expect that "small" changes of the circuit's input result in "small" changes of the circuit's output and that a bounded input leads to a bounded output. These properties are intimately related to the stability of circuit solutions.

Stability of Solutions
Once a solution for a circuit's currents and voltages is found, a fundamental question arises regarding its stability. A solution may be stable or unstable, and stability is both a local and a global concept (Hasler and Neirynck, 1986).
Local stability usually refers to the solution that, with sufficiently close initial conditions, remains close to other solutions as time increases. A stronger concept of stability is asymptotic stability: if initial conditions are close, the solutions converge toward each other as time approaches infinity. Although no rate of convergence is
implied in the case of asymptotic stability, a stronger concept of exponential stability implies that the rate of convergence toward a stable solution is bounded from above by an exponential function. Note that exponential convergence to a solution does not necessarily imply its stability. Contrary to nonlinear circuits, the stability (if it exists) of linear time-invariant circuits is always exponential. Linear time-invariant circuits whose natural frequencies have negative real parts have stable solutions. Global asymptotic (complete) stability implies that no matter what the initial conditions are, the solutions converge toward each other. Hence, a completely stable circuit possesses exactly one solution. A fundamental observation can be made that there exist dc operating points of transistor circuits that are unstable: if the circuit is biased at such an operating point and if the circuit is augmented with any configuration of positive-valued shunt capacitors and/or series inductors, the equilibrium point of the resulting dynamic circuit will always be unstable (Green and Willson, 1992, 1994). A simple example is the common latch circuit that, for appropriate parameter values, possesses three dc operating points: two stable and one unstable.
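For a first-order circuit dv/dt = g(v), local stability of an equilibrium v* can be read from the sign of g′(v*). The sketch below classifies the equilibria of a hypothetical latch-like dynamics g(v) = v − v³ (chosen for illustration, not taken from the text), which, like the latch circuit mentioned above, has two stable equilibria and one unstable one:

```python
def classify_equilibria(g, dg, candidates, tol=1e-9):
    """Classify equilibria of dv/dt = g(v): a point with g(v*) = 0 is
    asymptotically stable when dg(v*) < 0 and unstable when dg(v*) > 0."""
    result = {}
    for v in candidates:
        assert abs(g(v)) < tol, "not an equilibrium"
        result[v] = "stable" if dg(v) < 0.0 else "unstable"
    return result

g = lambda v: v - v**3            # hypothetical latch-like dynamics
dg = lambda v: 1.0 - 3.0 * v**2
print(classify_equilibria(g, dg, [-1.0, 0.0, 1.0]))
# two stable equilibria and one unstable, as for the latch circuit
```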
5.4 Open Problems

In the area of analysis of resistive circuits, interesting and still unresolved issues are: Does a given transistor circuit possess exactly one, or more than one, dc operating point? How many operating points can it possess? What simple techniques can be used to distinguish between those circuits having a unique operating point and those capable of possessing more than one? What can we say about the stability of an operating point? How can circuit simulators be used to find all the solutions of a given circuit? These and other issues have been, and still are, a focal point of research in the area of nonlinear circuits. The references cited provide more detailed discussions of the many topics considered in this elementary review of nonlinear circuits.
References

Chua, L.O., Desoer, C.A., and Kuh, E.S. (1987). Linear and nonlinear circuits. New York: McGraw-Hill.
Duffin, R.J. (1947). Nonlinear networks IIa. Bulletin of the American Mathematical Society 53, 963–971.
Ebers, J.J., and Moll, J.L. (1954). Large-signal behavior of junction transistors. Proceedings IRE 42, 1761–1772.
Getreu, I. (1976). Modeling the bipolar transistor. Beaverton, OR: Tektronix, 9–23.
Gopinath, B., and Mitra, D. (1971). When are transistors passive? Bell System Technical Journal 50, 2835–2847.
Green, M., and Willson, A.N., Jr. (1992). How to identify unstable dc operating points. IEEE Transactions on Circuits and Systems-I 39, 820–832.
Green, M., and Willson, A.N., Jr. (1994). (Almost) half of any circuit's operating points are unstable. IEEE Transactions on Circuits and Systems-I 41, 286–293.
Hasler, M., and Neirynck, J. (1986). Nonlinear circuits. Norwood, MA: Artech House.
Lagarias, J.C., and Trajković, Lj. (1999). Bounds for the number of dc operating points of transistor circuits. IEEE Transactions on Circuits and Systems 46(10), 1216–1221.
Lee, B.G., and Willson, A.N., Jr. (1983). All two-transistor circuits possess at most three dc equilibrium points. Proceedings of the 26th Midwest Symposium on Circuits and Systems, 504–507.
Massobrio, G., and Antognetti, P. (1993). Semiconductor device modeling with SPICE. (2d ed.). New York: McGraw-Hill.
Mathis, W. (1987). Theorie nichtlinearer Netzwerke. Berlin: Springer-Verlag.
Melville, R.C., Trajković, Lj., Fang, S.C., and Watson, L.T. (1993). Artificial parameter homotopy methods for the dc operating point problem. IEEE Transactions on Computer-Aided Design 12(6), 861–877.
Minty, G.J. (1960). Monotone networks. Proceedings of the Royal Society (London), Series A 257, 194–212.
Nielsen, R.O., and Willson, A.N., Jr. (1980). A fundamental result concerning the topology of transistor circuits with multiple equilibria. Proceedings of the IEEE 68, 196–208.
Quarles, T.L., Newton, A.R., Pederson, D.O., and Sangiovanni-Vincentelli, A. (1994). SPICE 3 version 3F5 user's manual. Berkeley: Department of EECS, University of California.
Smale, S. (1972). On the mathematical foundations of electrical circuit theory. Journal of Differential Geometry 7, 193–210.
Trajković, Lj., and Willson, A.N., Jr. (1992). Theory of dc operating points of transistor networks. International Journal of Electronics and Communications 46(4), 228–241.
Trajković, Lj. (1999). Homotopy methods for computing dc operating points. In J.G. Webster (Ed.), Encyclopedia of electrical and electronics engineering. New York: John Wiley & Sons.
Vladimirescu, A. (1994). The SPICE book. New York: John Wiley & Sons.
Willson, A.N., Jr. (1975). Nonlinear networks. New York: IEEE Press.
Willson, A.N., Jr. (1975). The no-gain property for networks containing three-terminal elements. IEEE Transactions on Circuits and Systems 22, 678–687.
Wolf, D., and Sanders, S. (1996). Multiparameter homotopy methods for finding dc operating points of nonlinear circuits. IEEE Transactions on Circuits and Systems 43, 824–838.
Yamamura, K., and Horiuchi, K. (1990). A globally and quadratically convergent algorithm for solving nonlinear resistive networks. IEEE Transactions on Computer-Aided Design 9(5), 487–499.
Yamamura, K., Sekiguchi, T., and Inoue, T. (1999). A fixed-point homotopy method for solving modified nodal equations. IEEE Transactions on Circuits and Systems 46, 654–665.
II ELECTRONICS
Krishna Shenai Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
With the emergence of the digital information age, there is an ever-increasing need to integrate various technologies to efficiently perform advanced communication, computing, and information processing functions. Efficient generation and utilization of electric energy is of paramount importance in an energy- and environmentally conscious society. This section deals with recent developments in these areas with relevant background information. In Chapter 1, an account of voltage regulation modules (VRMs) used in powering microchips is provided with a particular emphasis on circuit switching topologies. Chapter 2 deals with noise in mixed-signal electronic systems with a focus on submicron ultra-large-scale integrated (ULSI) system-on-a-chip (SOC) technologies. Noise in semiconductor devices and passive components becomes a key issue in addressing system-level signal integrity concerns. A detailed review of progress on metal-oxide-silicon field-effect transistors (MOSFETs) is presented in Chapter 3, with emphasis on important performance and reliability degradation parameters. These devices form the basic electronic building blocks in a majority of microsystems. Filters are used in a variety of signal processing circuits. Chapter 4 provides an account of the latest advances in the design and application of active filters in microsystems. Although bipolar junction transistors (BJTs) are not as prominent as MOSFETs, they are nevertheless used in a variety of mixed-signal and power circuits. Chapter 5 discusses the state-of-the-art of diodes and BJTs in these circuits. Chapter 6 provides insight into basic semiconductor material physics and how it relates to device performance and reliability. The section concludes with a detailed account of the evolution and current state-of-the-art of power semiconductor devices for discrete as well as integrated applications. Power semiconductor devices are playing an increasingly important role in microsystems, especially for performing the power management function.
1 Investigation of Power Management Issues for Future Generation Microprocessors*

Fred C. Lee and Xunwei Zhou
Center for Power Electronics Systems, The Bradley Department of Electrical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA
1.1 Introduction ........................................................................................ 85
1.2 Limitations of Today's Technologies ......................................................... 87
    1.2.1 Limitations of Present VRM Topologies . 1.2.2 Limitations from Power Devices
1.3 Advanced VRM Topologies ..................................................................... 90
    1.3.1 Low-Input-Voltage VRM Topologies . 1.3.2 High-Input-Voltage VRM Topologies . 1.3.3 A Nonisolated, High-Input-Voltage VRM Topology: The Center-Tapped Inductor VRM . 1.3.4 An Isolated High-Input-Voltage VRM Topology: The Push-Pull Forward Topology
1.4 Future VRMs ....................................................................................... 95
    1.4.1 High-Frequency and High-Power-Density VRMs
1.5 Conclusions ......................................................................................... 99
References ............................................................................................... 99
1.1 Introduction

Advances in microprocessor technology pose new challenges for supplying power to these devices. The evolution of microprocessors began when the high-performance Pentium processor was driven by a nonstandard power supply of less than 5 V instead of drawing its power from the 5-V plane on the motherboard (Goodfellow and Weiss, 1997). Low-voltage power management issues are becoming increasingly critical in state-of-the-art computing systems. The current generation of high-speed CMOS processors (e.g., Alpha, Pentium, and PowerPC) operates at above 300 MHz with 2.5- to 3.3-V output voltage. Future processors will be designed with even lower logic voltages of 1 to 1.5 V and
* This work is supported by Intel, Texas Instruments, National Semiconductor Inc., SGS-Thomson, and Delta Electronics Inc. This work made use of ERC shared facilities supported by the National Science Foundation under Award Number EEC-9731677.
increases in current demands from 13 A to 50–100 A (Zhang et al., 1996). Meanwhile, operating frequencies will increase to above 1 GHz. These demands, in turn, require special power supplies, voltage regulator modules (VRMs), to provide lower voltages with higher current capabilities for microprocessors. Table 1.1 shows the specifications for present and future VRMs. As the speed of the processors increases, the dynamic loading of the VRMs also increases significantly. Future microprocessors are expected to exhibit higher current slew rates of up to 5 A/ns. These slew rates represent a severe problem for the large load changes that are encountered when systems transfer from the sleep mode to the active mode and vice versa. In these cases, the parasitic impedance of the power supply connection to the load and the equivalent series resistance (ESR) and equivalent series inductance (ESL) of the capacitors have a dramatic effect on VRM voltage (Zhang et al., 1996). If this impedance is not low enough, the supply voltage may fall out of the required range during the transient period. Moreover, the total voltage tolerance will be much
TABLE 1.1 Specifications for Present and Future VRMs

Voltage/Current                            Present      Future
Output voltage                             2.1–3.5 V    1–1.5 V
Load current                               0.3–13 A     1–50 A
Output voltage tolerance                   5%           2%
Current slew at decoupling capacitors      1 A/ns*      5 A/ns

* Current slew rate at today's VRM output is 30 MS (1997).
tighter. Currently, the voltage tolerance is 5% (for a 3.3-V VRM output with a voltage deviation of 165 mV). In the future, the total voltage tolerance will be 2% (for a 1.1-V VRM output with a voltage deviation requirement of only 33 mV). All of these requirements pose serious design challenges and require VRMs to have very fast transient responses. Figure 1.1 shows today's VRMs and the road map of microprocessors' development. Today's VRMs are powered from the 5-V or 12-V outputs of silver boxes that are used for supplying various parts of the system, such as the memory chips, the video cards, and some sub-buses. Future VRMs will be required to provide lower voltages and higher currents with tighter voltage regulation. The traditional centralized power system, the silver box, will no longer meet the stringent requirements for VRM voltage regulation because of the distributed impedance associated with a long power bus and the parasitic ringing due to high-frequency operation. On the other hand, with much heavier loads in the future, the bus loss becomes significant. To maintain system stability, a huge silver-box output capacitance is also needed. At the same time, to avoid interaction between different outputs, a very large VRM input filter capacitance is required. Figure 1.2 shows the trend of computer power system architecture. In the future, a distributed power system (DPS) with a high-voltage bus, 12 V
FIGURE 1.1
load ( > 30). This limitation is due to the high figure of merit (FOM) of today's devices. The FOM is equal to Rds(on) times gate charge (Qg). For today's device technology, the lowest FOM value is around 300 mΩ·nC. With such a high FOM value, power devices not only limit the VRM's efficiency but also limit the VRM's ability to operate at higher operating frequencies. Most of today's VRMs operate at a switching frequency lower than 300 kHz. This low switching frequency causes slow transient responses and creates a need for very large energy storage components.
FIGURE 1.13 Quasi-Square-Wave (QSW) VRM Topology
1.3 Advanced VRM Topologies

1.3.1 Low-Input-Voltage VRM Topologies

A Fast VRM Topology: The Quasi-Square-Wave (QSW) VRM
To overcome the transient limitation occurring in conventional VRMs, a smaller output filter inductance is the most desirable option for increasing the energy transfer speed. Figure 1.13 shows the quasi-square-wave (QSW) circuit. The QSW topology keeps the VRM output inductor current touching zero in both sleep and active modes. When Q1 turns on, the inductor current is charged to positive by the input voltage. After Q1 turns off and before Q2 turns on, the inductor current flows through Q2's body diode. When Q2 turns on, the inductor current is discharged to negative. After Q2 turns off and before Q1 turns on, the inductor current flows through Q1's body diode. Compared with conventional buck and synchronous buck topologies, the output filter inductance is reduced significantly. At a 13-A load and a 300-kHz switching frequency, the QSW circuit needs only a 160-nH inductor, as compared with the 2–4 μH inductor needed in the conventional design. This small inductance makes the VRM transient response much faster. Figure 1.14 shows the transient response of the QSW topology. The third spike of the output voltage becomes insignificant, and the second spike is reduced significantly.
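The large inductor current ripple implied by this design can be checked from the buck relations; a sketch using the 160-nH, 300-kHz values above, with the 5-V input and 2-V output assumed from Figure 1.15:

```python
VIN, VO = 5.0, 2.0   # input/output voltages (from the Figure 1.15 caption)
L = 160e-9           # QSW output inductor (text value)
FS = 300e3           # switching frequency (text value)

def inductor_ripple(vin, vo, l, fs):
    """Peak-to-peak inductor current ripple of a buck-type stage:
    during the on-time D*T the current rises at (vin - vo)/l."""
    d = vo / vin                     # buck duty cycle
    return (vin - vo) * d / (l * fs)

ripple = inductor_ripple(VIN, VO, L, FS)
print(ripple)   # 25 A peak-to-peak
```

A 25-A peak-to-peak ripple is comparable to the load current itself, so the inductor current touches or crosses zero over each cycle, which is the QSW operating mode described above.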
FIGURE 1.14 Transient Response of the QSW
FIGURE 1.15 Efficiency of the QSW Compared with a Conventional VRM (Vin = 5 V, Vo = 2 V, and fs = 300 kHz)
There are two disadvantages to this fast VRM topology. The first is the large current ripple: a huge VRM output filter capacitance is needed to suppress the steady-state ripple. The smaller inductance results in a faster transient response but requires a larger bulk capacitance. The second disadvantage is its low efficiency. As shown in Figure 1.15, due to the large ripple current, the QSW switch has a larger conduction loss, and its efficiency is lower than that of a conventional VRM.

A Fast VRM with a Small Ripple: The Interleaved QSW VRM
To meet both the steady-state and transient requirements, a novel VRM topology, the interleaved QSW, is introduced in Figure 1.16. The interleaved QSW topology naturally cancels the output current ripple and still maintains the fast transient response characteristics of the QSW topology. A smaller capacitance is needed compared to both the single-module QSW VRM and the conventional VRM. The more modules in parallel, the better the ripple-canceling effect. Figure 1.17 shows a four-module interleaved QSW VRM. Figure 1.18 shows its transient response. The results show that this technique can meet future transient requirements without a large steady-state voltage ripple. Compared with the single-module QSW topology, the efficiency is improved significantly. Figure 1.19 shows the efficiency comparison results. Figure 1.20 shows a four-module interleaved QSW VRM prototype picture. In the VRM, an integrated magnetic design is used. Every two inductors use one magnetic core. As a result, in total, two magnetic cores are used for these four channel inductors. Figure 1.21 shows the integrated magnetic structure (Chen, 1999). By taking advantage of the interleaving technology, the AC flux of the two inductors is canceled out in the center leg. As a result, the core loss and the center-leg crossing area are reduced. The planar core structure
FIGURE 1.16 Current Ripple Cancelling Effect of Interleaved QSW
makes a very low-profile VRM. This kind of low-profile magnetic also has good thermal management. In the magnetic design, the PCB trace is used as the inductor winding. This approach is very cost-effective. In addition, the termination loss is eliminated. Table 1.2 compares the four-module interleaved QSW VRM design with the conventional VRM. Since the necessary capacitance is reduced, the VRM power density is dramatically increased by six times. Also, since each module handles a lower current, the circuit will be packaged more easily. The test results in Figure 1.22 show the transient response of the
Fred C. Lee and Xunwei Zhou
four-module interleaved QSW VRM compared to today's commercial VRM. The interleaved QSW topology can not only reduce the output current ripple but also the input current ripple.
1.3.2 High-Input-Voltage VRM Topologies
FIGURE 1.17 A Four-Module Interleaved QSW VRM
Most of today's VRMs draw power from the 5-V output of the silver box. This bus voltage is too low for future low-voltage, high-current processor applications. A distributed power system (DPS) with a high bus voltage can be the solution for future computer systems, such as server and workstation power systems. In a high-voltage bus-distributed system, the bus conduction loss is lower and the distribution bus is easy to
FIGURE 1.18 Transient Response of the Four-Module Interleaved QSW. (A) Total Output Current. (B) Current in Each Module. (C) Output Voltage
Investigation of Power Management Issues for Future Generation Microprocessors
FIGURE 1.19 Efficiency Comparison (conventional buck, synchronous buck, single-module QSW, and four modules in parallel, versus load current)
FIGURE 1.20 The Four-Module Interleaved QSW VRM (integrated magnetic with PCB windings and gapped core)

FIGURE 1.21 Integrated Magnetic Structure. (A) Integrated Magnetic Structure (core and PC-board windings L1 and L2). (B) Implementation of the Integrated Magnetic Structure (PC-board winding with through holes)

TABLE 1.2 Design Comparison of the Interleaved QSW VRM and the Conventional VRM

                                        Interleaved QSW      Conventional VRM
  Vin                                   5 V                  5 V
  Bulk capacitance                      1200 µF              8000 µF
  Output inductance                     320 nH (x4)          3.8 µH
  Transient voltage drop: Vo @ load     50 mV: 2 V @ 30 A    150 mV: 2 V @ 13 A
  Power-stage power density (W/in³)     30                   5
design. On the other hand, in this kind of system, the transient of a load-end converter has less effect on the bus voltage and thus less effect on the other load-end converters. As a result, for future applications, the high-input-voltage VRM's input filter size can be dramatically reduced. Figure 1.23 shows the results: when the bus voltage increases from 5 V to 48 V, the VRM input filter capacitance can be reduced from the order of 10 mF to the order of 10 µF. For high-voltage bus systems, advanced high-input-voltage VRM topologies must be developed. For example, if the buck converter shown in Figure 1.3 is used in a high-voltage bus system, the VRM's duty cycle is very small, and its transient response becomes asymmetrical. When the input voltage is 12 V and the output voltage is 2 V, the duty cycle is only about 0.17. Figure 1.24 shows the asymmetrical transient response of a synchronous buck VRM. When the load changes from a light load to a heavy load, since the inductor charging voltage (Vin − Vo) is high, the VRM's step-down voltage drop is small. When the load changes from a heavy load to a light load, since the inductor discharging voltage, Vo, is low, the VRM's step-up voltage overshoot is large. This asymmetrical transient response makes the output filter overdesigned. Besides, it is difficult to optimize efficiency in a converter with such an extreme duty cycle. Figure 1.25 shows the efficiency comparison for different input
Figure 1.28. Figure 1.29 shows the efficiency of this two-channel interleaved center-tapped inductor VRM. Its input voltage is 12 V, and its output voltage is 1.2 V. In the test, an IRL3103D1 is used as the top switch, and two MTP75N03DHs in parallel are used as the synchronous rectifier. This topology can achieve 80% efficiency at 1.2 V at a 60-A output.
FIGURE 1.22 Transient Response Test Results. (A) Transient Response of the Four-Module Interleaved QSW (50-mV deviation; 50 mV/div, 100 µs/div). (B) Transient Response of the Conventional VRM (170-mV deviation; 50 mV/div, 100 µs/div)
FIGURE 1.23 Input Filter Capacitance Versus Input Voltage
voltages. With a higher input voltage, the VRM has a lower efficiency.
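The asymmetry can be seen from the inductor slew rates alone. A quick check using the idealized buck equations at the 12-V input, 2-V output operating point from the text (the 320-nH inductance is a representative value chosen here for scale, not a figure from the text):

```python
# Idealized buck: during a load step-up the inductor current slews at
# (Vin - Vo)/L; during a load step-down it can only slew at Vo/L.
Vin, Vo, L = 12.0, 2.0, 320e-9    # L is a hypothetical value for illustration

duty = Vo / Vin                   # ideal steady-state duty cycle
up_slew = (Vin - Vo) / L          # A/s available to raise the current
down_slew = Vo / L                # A/s available to lower the current

print(f"duty cycle        : {duty:.2f}")
print(f"step-up slew rate : {up_slew:.3g} A/s")
print(f"step-down slew    : {down_slew:.3g} A/s")
print(f"asymmetry ratio   : {up_slew / down_slew:.1f}")
```

The 5:1 ratio of charging to discharging slew rate is why the unloading (step-up) voltage deviation dominates the output-capacitor sizing in this regime.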
1.3.3 A Nonisolated, High-Input-Voltage VRM Topology: The Center-Tapped Inductor VRM

With advanced inductor designs, the VRM duty cycle can be adjusted. Figure 1.26 shows a modified inductor structure, a center-tapped inductor, that can improve the VRM duty cycle. Figure 1.27 shows an improved VRM topology with the center-tapped inductor structure and a lossless snubber that can absorb the leakage inductance energy as well as reduce ripple and voltage spikes (Zhou, 1999). With this topology, VRMs do not need high-side gate drives, which improves VRM efficiency at high input voltages. To reduce input and output current ripple, an interleave technique can be used, which is shown in
1.3.4 An Isolated High-Input-Voltage VRM Topology: The Push-Pull Forward Topology
Another approach to adjusting the VRM duty cycle in high-voltage bus systems is to use transformers. By designing the transformer's turns ratio, VRMs can simply adjust their duty cycles to optimize efficiency and cancel the ripple. Figure 1.30 shows the push-pull forward VRM topology (Zhou, 1999). On the primary side, switches and transformer windings are alternately connected in a circle. A capacitor is connected between two of the interleaved terminations. The left two terminations are connected to the input and ground, respectively. The two primary windings have the same number of turns. Figure 1.31 explains the operation of this converter. Compared with conventional high-input-voltage VRM topologies, the push-pull forward converter has a higher power density, faster transient response, and higher efficiency. For example, compared with the flyback forward converter shown in Figure 1.32 and the asymmetrical half-bridge shown in Figure 1.33, which are fourth-order systems, the push-pull forward converter is a second-order system. Therefore, its control is simpler, and its transient is faster. As a result, the output filter inductance and capacitance needed are significantly reduced. Due to its reduced input current ripple, the converter's input filter can also be reduced. Compared with symmetrical and asymmetrical half-bridge converters, the push-pull forward topology has a larger transformer turns ratio. For example, if the VRM input voltage is 48 V and the output voltage is 3.3 V, the transformer's turns ratio is 3 to 1 for both the symmetrical and asymmetrical half-bridges, while for the push-pull forward converter it is 6 to 1. Therefore, the conduction loss of the primary switches in a push-pull forward converter is lower, and the converter has a higher efficiency. At heavy load, the efficiency of the push-pull forward converter is 2 to 3% higher than that of either half-bridge converter.
Figures 1.34(A) and 1.34(B) show the efficiencies of the push-pull forward converter under 12-V and 48-V input voltages, respectively. For a 12-V input, this converter can achieve 81% efficiency at an output of 1.2 V at 60 A. For a 48-V input, it can achieve 83.6% efficiency at an output of 1.2 V at 60 A.
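The turns-ratio argument can be checked with a little arithmetic. The sketch below uses idealized lossless converter equations; the 48-V/3.3-V operating point and the 3:1 and 6:1 turns ratios are from the text, while the load current is a made-up value for illustration:

```python
# Idealized duty cycles: a half-bridge applies Vin/2 to the transformer
# primary, while the push-pull forward applies the full Vin. For the same
# duty cycle the push-pull forward can therefore use twice the turns ratio,
# which halves the reflected primary current.
Vin, Vo, Io = 48.0, 3.3, 30.0     # Io is a hypothetical load current

n_hb, n_ppf = 3.0, 6.0            # turns ratios from the text
d_hb = n_hb * Vo / (Vin / 2)      # half-bridge duty cycle
d_ppf = n_ppf * Vo / Vin          # push-pull forward duty cycle
i_pri_hb = Io / n_hb              # reflected primary current, half-bridge
i_pri_ppf = Io / n_ppf            # reflected primary current, push-pull fwd

print(f"half-bridge:       D = {d_hb:.3f}, primary current = {i_pri_hb:.1f} A")
print(f"push-pull forward: D = {d_ppf:.3f}, primary current = {i_pri_ppf:.1f} A")
```

Both converters run at the same duty cycle, but the push-pull forward carries half the primary current; since conduction loss scales as I², that cuts primary-switch conduction loss by about 4x for the same on-resistance, consistent with the 2 to 3% efficiency advantage claimed above.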
FIGURE 1.24 An Asymmetrical Transient Response from Low Output Voltage and High Input Voltage. (A) Asymmetrical Transient Response. (B) Step-Down Voltage. (C) Step-Up Voltage
FIGURE 1.25 Efficiency Comparison of Buck Converters (Vo = 2 V, fs = 300 kHz, and switches: IRL3803)
1.4 Future VRMs

1.4.1 High-Frequency and High-Power-Density VRMs

To develop low-cost, high-efficiency, low-profile, high-power-density, fast-transient-response, board-mounted VRM modules for future generation microprocessor loads, high operating frequencies are desirable. Figure 1.35 shows the transient response of the interleaved QSW VRM when it operates at 1 MHz. The voltage spike is clearly reduced.
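For a ripple-limited design, the required filter inductance scales roughly as 1/fs. A back-of-the-envelope sketch, using the 320-nH, 300-kHz design point from Table 1.2 as the reference (this simple inverse-frequency scaling is an approximation, not the authors' sizing procedure):

```python
# For a fixed conversion ratio and allowed current ripple,
# di = V * D * T / L, so the required inductance is proportional to 1/fs.
L0, f0 = 320e-9, 300e3            # reference design point (Table 1.2)

for fs in (300e3, 1e6, 10e6):
    L_req = L0 * f0 / fs
    print(f"fs = {fs/1e6:5.1f} MHz -> L ~ {L_req*1e9:6.2f} nH")
```

The 9.6-nH estimate at 10 MHz lands close to the 9.25 nH quoted in the text; the required capacitance falls even faster, since the charge the capacitor must supply per switching cycle also shrinks with frequency.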
Figure 1.36 shows the inductance and capacitance needed in the interleaved QSW VRM when it operates at a high switching frequency. At 10 MHz, the inductance needed is only 9.25 nH, and the capacitance needed is only 5.26 µF. With such a small inductance and capacitance, very high-power-density VRMs can be built, and energy storage costs can be reduced dramatically. Because of today's device technology, however, most VRMs' operating frequencies are lower than 300 kHz. Even at this frequency, the VRM cannot meet efficiency requirements. When the frequency is increased, the resulting VRM efficiency levels are shown in Figure 1.37. At 10 MHz, the VRM will only
FIGURE 1.26 Center-Tapped Inductor Structure

FIGURE 1.29 Efficiency of the Two-Channel Interleaved Center-Tapped Inductor VRM
FIGURE 1.27 Improved VRM Topology with a Center-Tapped Inductor and a Lossless Snubber

FIGURE 1.30 A Novel Topology: The Push-Pull Forward Converter
FIGURE 1.28 Interleaved Center-Tapped Inductor VRM
have 40% efficiency. Such low efficiency makes thermal management and packaging very difficult. For future microprocessor applications, the power device must have a smaller figure-of-merit (FOM) value [< 100 mΩ·nC] and a lower Miller charge. With improved device technologies, such as the SOI LDDMOS technology (Huang, 1998), future VRM efficiency will be higher than 90% at operating frequencies of several megahertz. Figure 1.38 shows the difference between a vertical DMOS and the proposed LDD MOSFET on an
FIGURE 1.31 The Operation of the Push-Pull Forward Converter
SOI structure, whose power density is higher than 100 W/in³. Table 1.3 shows the VRM efficiency comparison based on today's device technology and the improved LDDMOS technology. Although advanced topologies have very fast transients and future device technologies can operate at very high frequencies,
FIGURE 1.32 Flyback Forward Converter

FIGURE 1.33 Asymmetrical Half-Bridge Converter
to minimize the effects of the interconnections, an innovative design with a possible integration of the VRM and the processor is still the key to meeting the ever-increasing demand for processor performance and speed. The integration of the VRM with the processor can take either a hybrid or a monolithic approach. In the hybrid approach, the VRM can be made as a silicon chip with all the control functions. Figure 1.39 shows an integration-packaging example. As shown in Figures 1.39(A) and (B), several VRM chips can be placed in parallel and mounted close to the microprocessor on the same cartridge. Ceramic capacitors with small ESRs and ESLs can be used as the output capacitors and placed on the PCB next to the processor. By routing the output of the VRM to the power input of the processor through a magnetic material sheet, the small output inductor can also be created. With this kind of packaging approach, interconnection parasitics can be minimized. For future applications, other advanced packaging technologies, such as flip-chip technology, can also be used.
FIGURE 1.34 Push-Pull Forward Efficiency. (A) Efficiency of the Push-Pull Forward Converter (Vin = 48 V, Vo = 1.2 V, fs = 100 kHz; secondary side uses four MTP75N03DHL, and primary side uses two IRF630). (B) Efficiency of the Push-Pull Forward Converter (Vin = 12 V, fs = 100 kHz; secondary side uses four MTP75N03DHL, and primary side uses two IRL3103D1)
FIGURE 1.35 Transient Response of the Interleaved QSW (Vin = 5 V, Vo = 2 V, and fs = 1 MHz)
FIGURE 1.36 Inductance and Capacitance Needed in the Interleaved QSW VRM Topology at a High Operating Frequency. (A) Inductance Needed Versus Frequency (9.25 nH at 10 MHz). (B) Capacitance Needed Versus Frequency (5.26 µF at 10 MHz)
FIGURE 1.37 VRM Efficiency Based on Today's Device Technology (Vin = 5 V, Vo = 2 V, and switches: 5 IRL3803 in parallel; fs = 300 kHz, 1 MHz, and 10 MHz)

FIGURE 1.38 Future Power Device Technology. (A) Today's Device: VDMOS. (B) Future Device: SOI LDDMOS
TABLE 1.3 VRM Efficiency Comparisons (Vin = 5 V, Vo = 2 V)

                    BV [V]   FOM [mΩ·nC]   Optimized efficiency for interleaved QSW VRM
                                           300 kHz    1 MHz    10 MHz
  LDDMOS            10       77            95%        91%      88%
  Today's device    30       473           87%        79%      60%

FIGURE 1.39 Hybrid Approach. (A) 3-D View (VRM chips, input and output capacitors, and processor on a common cartridge over a magnetic layer). (B) Side View (heat sink, insulation layer, capacitors, VRM, processor, PCB, and laminated magnetic layer)

1.5 Conclusions

For future microprocessor applications, there are many power management issues that need to be addressed, such as those concerning VRM topologies, power device technologies, and packaging technologies. To meet future requirements, VRMs should have high power densities, high efficiencies, and fast transient performance. To achieve this target, advanced VRM topologies, advanced power devices, and advanced packaging technologies must be developed.

References

Chen, W., Lee, F. C., Zhou, X., and Xu, P. (1999). Integrated planar inductor scheme for multimodule interleaved quasi-square-wave dc/dc converter. IEEE PESC, Vol. 2, 759–762.
Efand, T., Tasi, C., Erdeljac, J., Mitros, J., and Hutter, L. (1997). A performance comparison between new reduced surface drain RSD LDMOS and RESURF and conventional planar power devices rated at 20 V. IEEE Conference ISPSD, 185–188.
Goodfellow, S., and Weiss, D. (1997). Designing power systems around processor specifications. Electronic Design, 53–57.
Hoshi, M., Shimoida, Y., Hayami, Y., and Mihara, T. (1995). Low on-resistance LDMOSFET with DSS (a drain window surrounded by source windows) pattern layout. IEEE Conference ISPSD, 63–67.
Huang, A. Q., Sun, N. X., Zhang, B., Zhou, X., and Lee, F. C. (1998). Low voltage power devices for future VRM. ISPSD, 395–398.
Matsuda, H., Hiyoshi, M., and Kawamura, N. (1997). Pressure contact assembly technology of high-power devices. Proceedings ISPSD, 17–24.
O'Connor, J. (1996). Converter optimization for powering low-voltage, high-performance microprocessors. IEEE APEC, 984–989.
Rozman, A., and Fellhoelter, K. (1995). Circuit considerations for fast, sensitive, low-voltage loads in a distributed power system. IEEE APEC, 34–42.
Von Jouanne, A. (1995). The effect of long motor leads on PWM inverter-fed ac drive systems. IEEE APEC, 592–597.
Watson, R., Chen, W., Hua, G., and Lee, F. C. (1996). Component development for a high-frequency ac distributed power system. Proceedings of the 1996 Applied Power Electronics Conference, 657–663.
Wong, P., Zhou, X., Chen, J., Wu, H., Lee, F. C., and Chen, D. Y. (1997). VRM transient study and output filter design for future processors. VPEC Seminar.
Zhang, M. T., Jovanovic, M. M., and Lee, F. C. (1996). Design considerations for low-voltage on-board dc/dc modules for next generations of data processing circuits. IEEE Transactions on Power Electronics 11(2), 328–337.
Zhou, X. (1999). Low-voltage, high-efficiency, fast-transient voltage regulator module. Ph.D. dissertation (July 1999).
Zhou, X., Yang, B., Amoroso, L., Lee, F. C., and Wong, P. (1999). A novel high-input-voltage, high-efficiency and fast-transient voltage regulator module: Push-pull forward converter. APEC.
Zhou, X., Zhang, X., Liu, J., Wong, P. L., Chen, J., Wu, H. P., Amoroso, L., Lee, F. C., and Chen, D. Y. (1998). Investigation of candidate VRM topologies for future microprocessors. IEEE APEC, 145–150.
2 Noise in Analog and Digital Systems

Erik A. McShane and Krishna Shenai
Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA

2.1 Introduction ............................................................... 101
2.2 Analog (Small-Signal) Noise ............................................... 101
    2.2.1 Noise Categories . 2.2.2 White Noise . 2.2.3 Pink Noise
2.3 Digital (Large-Signal) Noise .............................................. 105
    2.3.1 Noise Categories . 2.3.2 Series Resistance . 2.3.3 Dynamic Charge Sharing . 2.3.4 Noise Margins
Bibliography .................................................................. 108
2.1 Introduction

Noise from an analog (or small-signal) perspective is a random time-varying signal that is generated in all passive and active electronic devices. It can be represented as either a current or a voltage, in(t) or vn(t). Over a fixed time period, t, the average value of noise is zero. Analog noise, therefore, is commonly presented in terms of its mean-square value (Irms^2 or Vrms^2). It is sometimes also described by its root-mean-square value (Irms or Vrms). Noise in digital (or large-signal) circuits is the perturbation of one nonlinear signal, the noise victim, by a second nonlinear signal, the noise aggressor. The perturbation is introduced by a parasitic coupling path that is resistive, capacitive, or inductive in nature. Since digital systems are typically characterized by voltage levels, digital noise is presented in terms of voltage and voltage transients, vn(t) and dvn/dt. The impact of noise differs for analog and digital systems and leads to unique equivalent circuit models and analytical techniques. Analog noise is treated by linearized expressions that correspond to small-signal equivalent circuit parameters. Digital noise is analyzed by large-signal expressions that define logic transitions. In this chapter, the characteristics of noise are presented for both systems. Several examples are included for each type of system. For further reading, several respected books are listed in the Bibliography section of this chapter. These provide a more thorough background on analog and digital systems, network theory, and the introduction of noise to classical systems analysis.
2.2 Analog (Small-Signal) Noise

Noise in analog systems is typically characterized by its impact as a small-signal perturbation. In this form, noise is considered to be an independent alternating current (ac) source (either voltage or current). Table 2.1 lists the nomenclature for discussing analog noise. These measures are derived and defined throughout the chapter. The relationships as a function of time for noise voltage, square noise voltage, and mean-square noise voltage are illustrated in Figure 2.1; similar waveforms can also be shown for representing noise current. If multiple noise sources are present, they are considered to be independent and uncorrelated. Therefore, the cumulative noise contribution can be expressed in mean-square units by:

  in^2 = in1^2 + in2^2,  (2.1)

as the sum of the individual mean-square components. In rms terms, the total noise is represented by:

  in,rms = sqrt(in1,rms^2 + in2,rms^2).  (2.2)
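A quick numerical illustration of equations 2.1 and 2.2 (the two sample sources and their rms values are invented for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two hypothetical uncorrelated noise currents, 1.0 uA and 2.0 uA rms.
i_n1 = 1.0e-6 * rng.standard_normal(n)
i_n2 = 2.0e-6 * rng.standard_normal(n)
i_total = i_n1 + i_n2

# Mean-square values add (eq. 2.1), so rms values add in quadrature (eq. 2.2).
ms_sum = np.mean(i_n1**2) + np.mean(i_n2**2)
rms_pred = np.sqrt(1.0e-6**2 + 2.0e-6**2)      # sqrt(1^2 + 2^2) uA
rms_meas = np.sqrt(np.mean(i_total**2))

print(f"predicted rms: {rms_pred*1e6:.3f} uA")
print(f"measured rms : {rms_meas*1e6:.3f} uA")
```

Note that the rms values do not add linearly: the 1-µA and 2-µA sources combine to about 2.24 µA rms, not 3 µA, because the instantaneous samples are uncorrelated.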
TABLE 2.1 Nomenclature in Analog (Small-Signal) Noise Analysis

  Name                                Current-referred symbol     Units        Voltage-referred symbol     Units
  Noise signal                        in(t)                       A            vn(t)                       V
  Mean-square noise signal            in^2(t)                     A^2 (rms)    vn^2(t)                     V^2 (rms)
  Root-mean-square noise signal       sqrt(in^2(t)), or in,rms    A (rms)      sqrt(vn^2(t)), or vn,rms    V (rms)
  Noise power spectral density        SI(f)                       A^2/Hz       SV(f)                       V^2/Hz
  Root noise power spectral density   sqrt(SI(f))                 A/sqrt(Hz)   sqrt(SV(f))                 V/sqrt(Hz)

FIGURE 2.1 (A) Noise Voltage and (B) Square and Mean-Square Noise Voltage as Functions of Time

Some noise also exhibits a frequency dependence. If the noise mean-square value of spectral components in a narrow bandwidth (Δf) is determined, then the ratio of that value to Δf is defined as the noise power spectral density (the noise mean-square per bandwidth). Mathematically, the definition is expressed in terms of noise current as:

  SI(f) = lim(Δf→0) in^2(t)/Δf.  (2.3)

The relationship can be reversed to define the mean-square noise current in terms of the noise power spectral density by:

  in^2 = ∫ from f1 to f2 of SI(f) df.  (2.4)

Equivalent expressions can be derived for SV(f), the noise power in terms of noise voltage.

2.2.1 Noise Categories

Analog small-signal noise can be subdivided into two categories: frequency-independent (or white) noise and frequency-dependent (or pink) noise. The spectrum of the former is constant to frequencies as high as 10^14 Hz. In the latter, the noise power is a function of 1/f^n, where typically n = 1 (although some forms of noise obey a power relationship with n ≈ 2). The total noise power in an element, as shown in Figure 2.2, is the combination of both, and a clear knee frequency is observed. The terms SIw and SIp are used to refer to the specific contributions of white and pink noise, respectively, to the total noise power. Table 2.2 lists the common characteristics of small-signal noise. The expressions are developed more fully in the following subsections.

FIGURE 2.2 Noise Power Spectral Density Versus Frequency, Showing the Pink Noise SIp(f) and White Noise SIw(f) Contributions and the Knee Frequency

TABLE 2.2 Characteristics of Analog (Small-Signal) Noise

  Name                              Type    Dependence               Expression
  Thermal (or Johnson or Nyquist)   White   Temperature              vn^2 = 4kTRΔf
  Shot                              White   Current                  in^2 = 2qIΔf
  Capacitor bandwidth-limited       White   Temperature, bandwidth   vn,rms = sqrt(kT/C)
  Inductor bandwidth-limited        White   Temperature, bandwidth   in,rms = sqrt(kT/L)
  Flicker (or 1/f)                  Pink    Current                  vn^2 = KR (Rsq^2/A) V^2 (Δf/f);
                                                                     in^2 = (KD I/A)(Δf/f);
                                                                     vn,eq^2 = (KM/(W L Cox^2))(Δf/f)
  Popcorn                           Pink    Recombination            ∝ 1/f^2

  Constants:
  k = 1.38 × 10^-23 V·C/K
  q = 1.6 × 10^-19 C
  KR ≈ 5 × 10^-24 cm^2/V^2
  KD ≈ 10^-21 cm^2·A
  KM ≈ 10^-33 C^2/cm^2 (pJFET); 10^-32 C^2/cm^2 (PMOSFET); 4 × 10^-31 C^2/cm^2 (NMOSFET)

2.2.2 White Noise

White noise sources are composed of thermal (also called Johnson or Nyquist) noise and shot noise. Thermal noise is generated by all resistive elements and is associated with the random motion of carriers in an electronic element. It is a function only of temperature. Shot noise is associated with the discretization of charged particles and is always present in semiconductor pn junctions. It is a function only of dc current flow. The thermal noise of a resistance, R, can be modeled as an ideal resistor and an independent current or voltage source, as shown in Figure 2.3. The Thevenin equivalent noise voltage is given by:
  vn^2 = 4kTRΔf,  (2.5)

where k is Boltzmann's constant (1.38 × 10^-23 V·C/K), T is absolute temperature (in degrees Kelvin), and Δf is the noise bandwidth. The Norton equivalent noise current can also be obtained. It is expressed as:

  in^2 = 4kT(1/R)Δf.  (2.6)

FIGURE 2.3 Thermal Noise Models of a Resistor: (A) Noisy Resistor, (B) Ideal Resistor with Series Noise Voltage Source, (C) Ideal Resistor with Parallel Noise Current Source

A MOSFET's channel resistance also produces a thermal noise current according to equation 2.6. Since it is informative to compare the MOSFET noise current with the small-signal input stimulus, the output noise current can be referred back to the source using the amplifier gain expression:

  id = gm·vgs,  (2.7)

such that the equivalent input noise voltage is as follows:

  vn,eq^2 = (4kT/(R·gm^2))Δf.  (2.8)

In the saturation regime, the channel resistance can be approximated by 1/gm, and the channel pinch-off reduces the effective noise to about 66% of its nominal value. Therefore, equation 2.8 can be simplified to:

  vn,eq^2 = (8kT/3)(1/gm)Δf.  (2.9)

Both MOSFET channel thermal noise source configurations are shown in Figure 2.4. It should be noted that the input noise voltage representation is valid only at low frequencies. As the frequency rises, the admittance of the gate parasitic capacitances (CGD and CGS) becomes higher; vn,eq^2 is dropped partly on the gate impedance and partly on the output resistance of the preceding device. Therefore, the ideal noiseless MOSFET is stimulated by an erroneously low noise voltage, resulting in inaccurate predictions of output noise.

FIGURE 2.4 Thermal Noise Models of a MOSFET Channel: (A) Noisy MOSFET, (B) Ideal MOSFET with Input-Referred Noise Voltage Source, (C) Ideal MOSFET with Output Noise Current Source

Likewise, the shot noise of a junction diode, D, can be modeled as an ideal diode and an independent current source, as shown in Figure 2.5. The Norton equivalent noise current is given by:

  in^2 = 2qIΔf,  (2.10)

where I is the dc current conducted by the diode and Δf is the noise bandwidth. Noise is generated by the junction in both forward- and reverse-biased conditions since either a forward current or a leakage current is present.

FIGURE 2.5 Shot Noise Model of a Diode: (A) Noisy Diode, (B) Ideal Diode with Parallel Noise Current Source

Many analog circuits such as filters, sample-and-hold amplifiers, and switched-capacitor networks employ a resistor or a conductance that is bandwidth-limited by a capacitance or inductance. Just as the reactance of the inductor or capacitor limits the bandwidth of the input signal, it also affects the noise bandwidth. In the case of an RC circuit, the rms noise voltage is expressed as:

  vn,rms = sqrt(kT/C).  (2.11)

A complementary result can be obtained for RL circuits, with the rms noise current expressed as:

  in,rms = sqrt(kT/L).  (2.12)

The expressions of equations 2.11 and 2.12 are interesting in that the resulting noise voltage or current is independent of the resistance or conductance. The total noise can be reduced only by choosing larger values for the reactive elements.

2.2.3 Pink Noise

Pink noise sources consist of flicker (also called 1/f) noise and popcorn noise. Flicker noise pertains to the conductive properties of an electronic element; some theories attribute flicker noise to interface traps present at oxide-semiconductor interfaces or to fluctuations in mobility. The noise is a function of the material homogeneity, its volume, the current, and the frequency. Popcorn noise, on the other hand, is proportional to 1/f^2 and is an indication of poor semiconductor manufacturing. It is associated with distinct recombination processes and appears as a series of low-frequency noise bursts. Because it is uncommon, popcorn noise is not discussed further. Flicker noise is proportional to current and inversely proportional to volume. Since electronic devices are three-dimensional structures, planar resistors and semiconductor junctions have unique expressions and scaling factors. For example, the flicker noise voltage of a resistor, R, is given as:

  vn^2 = KR (Rsq^2/A) V^2 (Δf/f),  (2.13)

where Rsq is the sheet resistance, A is the planar area of the resistor, V is the dc voltage, and KR is a technology scaling constant (≈ 5 × 10^-24 cm^2/V^2). The flicker noise current of a junction diode is similarly found as:

  in^2 = (KD I/A)(Δf/f),  (2.14)

where A is the cross-sectional area of the diode, I is the dc current, and KD is a diode scaling constant (≈ 10^-21 cm^2·A). Finally, the input-referred flicker noise voltage in a MOSFET is found by an expression that is derived from equation 2.14. It is given by:

  vn,eq^2 = (KM/(W L Cox^2))(Δf/f),  (2.15)

where W and L are, respectively, the gate width and length, Cox is the oxide capacitance per unit area, and the technology constant KM is found experimentally. Recent empirical values for three types of FET are:

  p-type JFET   ≈ 10^-33 C^2/cm^2
  PMOSFET       ≈ 10^-32 C^2/cm^2
  NMOSFET       ≈ 4 × 10^-31 C^2/cm^2
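The white-noise magnitudes above are easy to evaluate numerically; a small sketch of equations 2.5 and 2.11 (the component values are arbitrary examples, not from the text):

```python
import math

k = 1.38e-23          # Boltzmann's constant, V*C/K (i.e., J/K)
T = 300.0             # absolute temperature, K

# Thermal noise of a resistor (eq. 2.5): vn^2 = 4*k*T*R*df.
R, df = 1e3, 1e6      # a 1-kOhm resistor over a 1-MHz bandwidth
vn_rms = math.sqrt(4 * k * T * R * df)
print(f"thermal noise of 1 kOhm over 1 MHz: {vn_rms*1e6:.2f} uV rms")

# Bandwidth-limited 'kT/C' noise (eq. 2.11) for a 1-pF sampling capacitor.
C = 1e-12
vc_rms = math.sqrt(k * T / C)
print(f"kT/C noise on 1 pF: {vc_rms*1e6:.1f} uV rms")
```

Note that the kT/C result (about 64 µV rms at 1 pF) is independent of the resistance, as the text observes for equations 2.11 and 2.12; only a larger capacitor reduces it.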
2.3 Digital (Large-Signal) Noise

Noise in digital systems is typically characterized by the integrity of the voltages that represent the logic 0 and logic 1 states. Noise can manifest itself either during switching (when it may result in switching times that exceed theoretical results) or in static conditions. Even in CMOS logic, which nominally provides full rail-to-rail logic levels, dynamic noise events can momentarily perturb a voltage level. Deep-submicron technologies, which typically employ supply voltages of under 1.8 V, may be especially susceptible to logic glitches or errors that are induced by random coupled noise. Table 2.3 lists the nomenclature used for discussing digital noise. These terms are derived and defined throughout the chapter.
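The noise margins defined in Tables 2.3 and 2.4 can be computed directly from the logic thresholds; a sketch using the CMOS expressions from Table 2.4 (the supply and threshold voltages chosen here are illustrative, not from the text):

```python
# CMOS logic thresholds (Table 2.4): VOL = 0, VOH = Vdd,
# VIL = (3*Vdd + 3*VTP + 5*VTN)/8, VIH = (5*Vdd + 5*VTP + 3*VTN)/8.
Vdd, VTN, VTP = 1.8, 0.4, -0.4    # example values; VTP is negative for PMOS

VOL, VOH = 0.0, Vdd
VIL = (3 * Vdd + 3 * VTP + 5 * VTN) / 8
VIH = (5 * Vdd + 5 * VTP + 3 * VTN) / 8

NML = VIL - VOL                   # low-level noise margin
NMH = VOH - VIH                   # high-level noise margin
print(f"VIL = {VIL:.3f} V, VIH = {VIH:.3f} V")
print(f"NML = {NML:.3f} V, NMH = {NMH:.3f} V")
```

With symmetric thresholds the two margins come out equal, as expected for a matched CMOS inverter; any coupled noise smaller than these margins cannot flip a static logic level.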
of noise on either the input logic level (VIL or VIH ) or the output logic level (VOL or VOH ). Through these four parameters, noise indirectly affects the low and high noise margins.
2.3.2 Series Resistance The effect of series resistance is to degrade output voltage levels (push them away from the rails). Series resistance can be present in the supply rail and/or the ground rail. Each series resistance has a different effect on the output characteristics. Ground rail resistance degrades VOL and supply rail resistance degrades VOH . For example, consider the CMOS inverter shown in Figure 2.6(A) that includes a ground resistance Rseries . The output voltage, VO , can be modeled [see Figure 2.6(B)] as a resistor– divider network such that:
2.3.1 Noise Categories
VO ¼
Several categories of noise are present in digital circuits. These include series resistance, dynamic charge sharing of hard and soft nodes, and I/O integrity, which are the types discussed in this chapter. They can appear in any combination, but for simplicity, the analysis below considers each noise source independently. All are strongly dependent on the specifications of the semiconductor manufacturing process, the physical arrangement of gates and interconnections on an integrated circuit, the switching frequency, and the relative activity level of adjacent gates or interconnections. Table 2.4 lists the fundamental characteristics of large-signal noise. The expressions derived below demonstrate the impact TABLE 2.3
RNMOS þ Rseries Vdd, RNMOS þ RPMOS þ Rseries
(2:16)
where RNMOS and RPMOS are the output resistances of the inverter gates and Rseries is the ground resistance. If the inverter input is a logic 1, the NMOS is strongly conducting and therefore RNMOS 0. Then equation 2.16 can be simplified to: VO ¼
Rseries Vdd: RPMOS þ Rseries
(2:17a)
The output voltage is increased by the presence of the series resistance. A complementary result can be obtained for a supply rail resistance and an input of logic 0, which yields:
Nomenclature in Digital (Large-Signal) Noise Analysis
Name
Symbol
Units
Input low (highest voltage acceptable as a logic ‘0’) Input high (lowest voltage acceptable as a logic ‘1’) Output low (nominal voltage produced as a logic ‘0’) Output high (nominal voltage produced as a logic ‘1’) Low-level noise margin High-level noise margin
VIL VIH VOL VOH NML NMH
V V V V V V
TABLE 2.4
Characteristics of Digital (Large-Signal) Noise
Name
Expression (CMOS)
Expression (NMOS)
VOL
0
2 1 VTD k 2(Vdd VTN )
VOH
Vdd
Vdd
VIL
3Vdd þ 3VTP þ 5VTN 8
VTD VTN pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi k(1 þ k)
VIH
5Vdd þ 5VTP þ 3VTN 8
2VTD VTN pffiffiffiffiffi 3k
NML NMH
VIL VOL VOH VIH
106
Erik A. McShane and Krishna Shenai VO ¼
RNMOS Vdd: RNMOS þ Rseries
(2:17b)
Supply and ground rails are designed to have minimal resistance to alleviate such series-resistance effects. This is accomplished by using low-resistivity materials, increasing the cross-sectional area of the conductors, using shorter interconnection lengths, and applying other techniques. Two conditions, however, may still produce large voltage deviations: high-current I/O buffers and the simultaneous switching of many parallel gates. The latter situation is illustrated in Figure 2.6(C). If n gates (inverters in this example) with a common series resistance switch simultaneously, the expressions of equations 2.17a and 2.17b can be modified as:

VO = (nRseries/(RPMOS + nRseries)) Vdd    (2.18a)

and

VO = (RNMOS/(RNMOS + nRseries)) Vdd.    (2.18b)
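The voltage-divider model of equations 2.17a and 2.18a is easy to evaluate numerically. The sketch below uses assumed, illustrative resistance and supply values (they are not taken from the text) to show how the degraded output low level rises as more gates switch through a shared ground resistance:

```python
# Output-low degradation from a shared ground-rail resistance
# (equations 2.17a and 2.18a). All component values are
# illustrative assumptions, not values from the text.

def v_out_low(n, r_series, r_pmos, vdd):
    """Output 'low' voltage of n simultaneously switching inverters
    sharing one ground resistance (equation 2.18a)."""
    return n * r_series / (r_pmos + n * r_series) * vdd

VDD = 3.3         # supply voltage (V), assumed
R_SERIES = 2.0    # shared ground-rail resistance (ohms), assumed
R_PMOS = 10e3     # PMOS output resistance (ohms), assumed

for n in (1, 10, 100, 1000):
    print(f"n = {n:4d}: VOL = {v_out_low(n, R_SERIES, R_PMOS, VDD):.3f} V")
```

With these values a single switching gate disturbs the output low level by well under a millivolt, while a thousand simultaneously switching gates raise it by several hundred millivolts, consistent with the text's warning about simultaneous switching noise.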
The effect of simultaneous switching becomes more pronounced as device density on integrated circuits rises and die sizes increase.

2.3.3 Dynamic Charge Sharing

The phenomenon of dynamic charge sharing occurs when two functionally unrelated nodes become coupled through a capacitance or mutual inductance. Coupling appears between any two interconnection lines that are adjacent or that overlap. In general, noise coupling is proportional to the cross-sectional area that is shared by the two lines. Coupling is also proportional to frequency and inversely proportional to the separation. Therefore, as integrated circuits are scaled to smaller dimensions and operate at higher frequencies, the detrimental effects of charge sharing become more severe. Until recently, coupling was exclusively capacitive in nature. As integrated circuits reach frequencies of 1 GHz and above, however, inductive coupling also becomes significant. The results in this chapter, though, are limited to conventional capacitive coupling.
Consider, as shown in Figure 2.7(A), the two nodes vn (the noise aggressor, or source) and vv (the noise victim) that are coupled by a capacitor Cc. Node vv is also tied to ground through a generic admittance Y. In digital circuits, vv can be either a floating node (such as found in dynamic logic, pass-transistor logic, or memory cells) or a driven node (such as found in static logic). The admittance Y is therefore equivalent to either a second coupling capacitance Cp (from the floating node to the ground plane) or a resistance ra (representing the output impedance of the gate driving the node). Both cases are illustrated in Figure 2.7. A floating node's change in voltage at vv that is produced by a signal at vn is expressed as:
vv = vn Cc/(Cc + Cp).    (2.19)

FIGURE 2.6

Because Cp is usually just the input capacitance of a logic gate, it is very small, and any value of Cc can cause significant charge sharing to occur. Actively driven nodes are much less susceptible to disruption by noise coupling, even though the equivalent output resistance, ra, of the driving gate may be large (>10 kΩ). The noise coupled to the victim node is written as:
Noise in Analog and Digital Systems

FIGURE 2.7 Noise coupling between an aggressor node vn and a victim node vv through Cc: (A) victim loaded by a generic admittance Y, (B) floating victim node loaded by Cp, and (C) driven victim node loaded by ra.
vv = (vn ra Cc s)/(1 + ra Cc s).    (2.20)
Because typically |ra Cc s| << 1, equation 2.20 can be reduced to:

vv = vn ra Cc s.    (2.21)
Clearly for an actively driven node, a much larger coupling capacitance is necessary to produce a significant perturbation of the victim node.
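The contrast between equations 2.19 and 2.21 can be made concrete with a small numerical sketch. All component values below are illustrative assumptions (a few femtofarads of coupling, a 10 kΩ driver, a 100 MHz aggressor), not values from the text:

```python
import math

# Coupled-noise estimates for a floating victim node (equation 2.19)
# and an actively driven victim node (equation 2.21). All component
# values are illustrative assumptions.

def floating_node_noise(vn, cc, cp):
    """Capacitive divider onto a floating victim node (eq. 2.19)."""
    return vn * cc / (cc + cp)

def driven_node_noise(vn, ra, cc, freq):
    """|vv| for a driven victim node, valid when |ra*Cc*s| << 1
    (eq. 2.21), evaluated at |s| = 2*pi*freq."""
    return vn * ra * cc * 2 * math.pi * freq

VN = 1.0       # aggressor voltage swing (V), assumed
CC = 10e-15    # coupling capacitance, 10 fF, assumed
CP = 20e-15    # floating-node capacitance, 20 fF, assumed
RA = 10e3      # driver output resistance, 10 kOhm, assumed
FREQ = 100e6   # aggressor switching frequency, 100 MHz, assumed

print(f"floating node: {floating_node_noise(VN, CC, CP):.3f} V")
print(f"driven node:   {driven_node_noise(VN, RA, CC, FREQ):.3f} V")
```

For these assumed values the floating node picks up roughly a third of the aggressor swing, while the driven node sees only tens of millivolts, illustrating why dynamic (floating) nodes are the more vulnerable case.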
2.3.4 Noise Margins

The I/O noise margins, NML and NMH, refer to the ability of a logic gate to accommodate input noise without producing a faulty logic output. The input noise threshold levels, VIL and VIH, are by convention defined as the input voltages at which the slope of the dVO/dVI response equals −1. This is shown in Figure 2.8. As is clear from Table 2.4, the noise margins of CMOS logic gates are larger than those of comparable NMOS technologies. This is because CMOS delivers rail-to-rail outputs, whereas a nonzero VOL is a circuit constraint in NMOS.
The noise margins of a CMOS gate can be found by first examining the dc transfer curve shown in Figure 2.8. From graphical analysis, VIL occurs when the PMOS is in its linear regime and the NMOS is in its saturation regime. Since a CMOS gate is complementary in operation, VIH by symmetry occurs when the PMOS is in its saturation regime and the NMOS is in its linear regime. Considering first the CMOS VIL, begin by equating the NMOS and PMOS currents:
FIGURE 2.8 Inverter dc transfer curve (VO versus VI), showing VOH, VOL, and the unity-gain points (dVO/dVI = −1) that define VIL and VIH.
kn (Wn/Ln) (VI − VTn)²/2 = kp (Wp/Lp) [(VI − Vdd − VTp) − (VO − Vdd)/2] (VO − Vdd).    (2.22)

Assuming that the inverter is designed to have a balanced transfer curve such that:

kn (Wn/Ln) = kp (Wp/Lp),    (2.23)

then equation 2.22 reduces to a simpler form such that:

VIL = (3Vdd + 3VTP + 5VTN)/8.    (2.24)

Considering next the CMOS VIH, equating the NMOS and PMOS currents results in:

kn (Wn/Ln) [(VI − VTn) − VO/2] VO = kp (Wp/Lp) (VI − Vdd − VTp)²/2.    (2.25)

Again using the assumption of equation 2.23, this expression can also be reduced and rearranged, yielding the form:

VIH = (5Vdd + 5VTP + 3VTN)/8.    (2.26)

The noise margins of an NMOS inverter can be found using similar methods. The derivations are not shown here, but the steps are identified. Beginning with VIH and examining the output characteristics through graphical techniques, the NMOS inverter is found to be equivalent to the CMOS case; that is, the driver (enhancement mode) is in the linear regime and the load (depletion mode) is in the saturation regime. Assuming that the inverter pull-up:pull-down ratio is k, then:

VIH = VTN − 2VTD/√(3k).    (2.27)

Considering the NMOS VIL, the driver and load bias regimes are exchanged (as in CMOS), and the result is as follows:

VIL = VTN − VTD/√(k(1 + k)).    (2.28)

Note that the NMH of NMOS and CMOS inverters are similar since both achieve VOH ≈ Vdd. Because an NMOS inverter VOL is not zero (100 mV to 500 mV are typical values), however, the NML of NMOS is considerably lower than that of a CMOS inverter.
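The CMOS entries of Table 2.4 translate directly into numbers. The sketch below evaluates equations 2.24 and 2.26 and the noise-margin definitions for an assumed 3.3 V supply with symmetric threshold voltages (values chosen for illustration only):

```python
# CMOS inverter switching thresholds and noise margins from
# equations 2.24 and 2.26, with VOL = 0 and VOH = Vdd as in
# Table 2.4. Supply and threshold values are assumed for
# illustration.

def cmos_noise_margins(vdd, vtn, vtp):
    v_il = (3 * vdd + 3 * vtp + 5 * vtn) / 8   # equation 2.24
    v_ih = (5 * vdd + 5 * vtp + 3 * vtn) / 8   # equation 2.26
    nml = v_il - 0.0      # NML = VIL - VOL, with VOL = 0 for CMOS
    nmh = vdd - v_ih      # NMH = VOH - VIH, with VOH = Vdd
    return v_il, v_ih, nml, nmh

# Assumed example: Vdd = 3.3 V, VTN = 0.5 V, VTP = -0.5 V
vil, vih, nml, nmh = cmos_noise_margins(3.3, 0.5, -0.5)
print(f"VIL = {vil:.4f} V, VIH = {vih:.4f} V")
print(f"NML = {nml:.4f} V, NMH = {nmh:.4f} V")
```

For symmetric thresholds the two margins come out equal, which is the expected behavior of the balanced inverter assumed in equation 2.23.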
Bibliography

Chen, W.K. (2000). The VLSI handbook. Boca Raton, FL: CRC Press.
Geiger, R.L., Allen, P.E., and Strader, N.R. (1990). VLSI design techniques for analog and digital circuits. New York: McGraw-Hill.
Laker, K.R., and Sansen, W.M.C. (1994). Design of analog integrated circuits and systems. New York: McGraw-Hill.
Tsividis, Y. (1996). Mixed analog-digital VLSI devices and technology: An introduction. New York: McGraw-Hill.
Uyemura, J.P. (1988). Fundamentals of MOS digital integrated circuits. Reading, MA: Addison-Wesley.
3 Field Effect Transistors

Veena Misra and Mehmet C. Öztürk
Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, North Carolina, USA
3.1 Introduction ............................................................ 109
3.2 Metal-Oxide-Silicon Capacitor ........................................... 109
    3.2.1 The Ideal MOS Capacitor . 3.2.2 Deviations from the Ideal Capacitor . 3.2.3 Small-Signal Capacitance . 3.2.4 Threshold Voltage
3.3 Metal-Oxide-Silicon Field Effect Transistor ............................. 113
    3.3.1 Device Operation . 3.3.2 Effective Channel Mobility . 3.3.3 Nonuniform Channels . 3.3.4 Short Channel Effects . 3.3.5 MOSFET Scaling . 3.3.6 Modern MOSFETs
3.4 Junction Field Effect Transistor ........................................ 122
3.5 Metal-Semiconductor Field Effect Transistor ............................. 123
3.6 Modulation-Doped Field Effect Transistor ................................ 124
References .................................................................. 126
3.1 Introduction

The concept of modulating the electrical current in a semiconductor by an external electric field was first proposed by Julius Lilienfeld in the 1930s. Years later, William Shockley, a scientist at Western Electric, led a research program to create a semiconductor device based on this "field effect" concept to replace the bulky vacuum tubes then used in telephone switching. Two scientists on Shockley's team, W. Brattain and J. Bardeen, invented the point-contact transistor in 1947. Subsequently, Shockley invented the bipolar junction transistor (BJT). Shockley later developed the junction field effect transistor (JFET); however, the JFET was not able to challenge the dominance of the BJT in many applications. The development of field effect devices continued with moderate progress until the early 1960s, when the metal-oxide-silicon field effect transistor (MOSFET) emerged as a prominent device. This device quickly became popular in semiconductor memories and digital integrated circuits. Today, the MOSFET dominates integrated circuit technology, and it is responsible for the computer revolution of the 1990s.
The first half of this chapter is dedicated to the basic theory of the MOSFET, beginning with its fundamental building block, the MOS capacitor. The second half of the chapter
presents an overview of less common field effect devices used only in specific applications. Their coverage is limited to a qualitative explanation of the operating principles and basic equations. The reader is referred to other books available in the literature for detailed information.
3.2 Metal-Oxide-Silicon Capacitor

The metal-oxide-silicon (MOS) capacitor is at the core of complementary metal-oxide-silicon (CMOS) technology. MOSFETs rely on the extremely high quality of the interface between SiO2, the standard gate dielectric, and silicon (Si). Before presenting the MOSFET in detail, it is essential to achieve a satisfactory understanding of MOS capacitor fundamentals. The simplified schematic of the MOS capacitor is shown in Figure 3.1. The structure is similar to a parallel-plate capacitor in which the bottom electrode is replaced by a semiconductor. When Si is used as the substrate material, the top electrode, known as the gate, is typically made of polycrystalline silicon (polysilicon), and the dielectric is thermally grown silicon dioxide. In MOS terminology, the substrate is commonly referred to as the body.
FIGURE 3.1 Simplified schematic of the MOS capacitor: gate electrode, gate dielectric, semiconductor, and substrate contact.
3.2.1 The Ideal MOS Capacitor

The energy band diagram of an ideal MOS capacitor on a p-type substrate at zero bias is shown in Figure 3.2. In an ideal MOS capacitor, the metal work function, φm, is equal to the semiconductor work function, φs. Therefore, when the Fermi level of the semiconductor, EFs, is aligned with the Fermi level of the gate, EFm, there is no band bending in any region of the MOS capacitor. Furthermore, the gate dielectric is assumed to be free of any charges, and the semiconductor is uniformly doped.

FIGURE 3.2 Energy band diagram of the ideal MOS capacitor at zero bias (vacuum level, qφm, qφs, Ec, EFs, EFm, Ev).

Figure 3.3 shows the energy band diagram and the charge distribution in an ideal MOS structure for different gate-to-body voltages (VGB). With a negative gate bias (Figure 3.3(A)), the gate charge, QG, is negative. The source of this negative charge is electrons supplied by the voltage source. In an MOS capacitor, charge neutrality is always preserved. This requires:

QG + QC = 0,    (3.1)

where QC is the charge induced in the semiconductor. Therefore, a net positive charge, QC, must be generated in the silicon substrate to counterbalance the negative charge on the gate. This is achieved by accumulation of positively charged holes under the gate. This condition, where the majority carrier concentration is greater near the Si-SiO2 interface than in the bulk, is called accumulation.
Under an applied negative gate bias, the Fermi level of the gate is raised with respect to the Fermi level of the substrate by an amount equal to qVGB. The energy bands in the semiconductor bend upward, bringing the valence band closer to the Fermi level, which is indicative of a higher hole concentration under the dielectric. It is important to note that the Fermi level in the substrate remains invariant even under an applied bias since no current can flow through the device due to the presence of the insulator. The applied gate voltage is shared between the gate dielectric and the semiconductor such that:

VG = Vox + Vc,    (3.2)

where Vox and Vc are the voltages that drop across the oxide and the semiconductor, respectively. The band bending in the oxide is equal to qVox. The electric field in the oxide can be expressed as:

Eox = Vox/tox,    (3.3)

where tox is the oxide thickness. The amount of band bending in the semiconductor is equal to qψs, where ψs is the surface potential; ψs is negative when the band bending is upward.
Figure 3.3(B) shows the energy band diagram and the charge distribution for a positive gate bias. To counterbalance the positive gate charge, the holes under the gate are pushed away, leaving behind ionized, negatively charged acceptor atoms, which creates a depletion region. The charge in the depletion region is exactly equal to the charge on the gate to preserve charge neutrality. With a positive gate bias, the Fermi level of the gate is lowered with respect to the Fermi level of the substrate. The bands bend downward, resulting in a positive surface potential. Under the gate, the valence band moves away from the Fermi level, indicative of hole depletion. When the band bending at the surface is such that the intrinsic level coincides with the Fermi level, the surface resembles an intrinsic material. The surface potential required to reach this condition is given by:

ψs = φF = (1/q)(Ei − EF),    (3.4)

where

φF = (kT/q) ln(NA/ni).    (3.5)
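Equation 3.5 is straightforward to evaluate. The sketch below computes the Fermi potential for a few assumed p-type doping levels, using the commonly quoted room-temperature intrinsic carrier concentration of silicon (about 1e10 cm^-3):

```python
import math

# Fermi potential of a uniformly doped p-type substrate (equation 3.5).
# Doping values are illustrative assumptions; ni is the commonly used
# room-temperature value for silicon.

K_OVER_Q = 8.617e-5   # Boltzmann constant over electron charge, V/K
NI_SI = 1.0e10        # intrinsic carrier concentration of Si at 300 K, cm^-3

def fermi_potential(na, temp=300.0):
    """phi_F = (kT/q) ln(NA/ni), equation 3.5, in volts."""
    return K_OVER_Q * temp * math.log(na / NI_SI)

for na in (1e15, 1e16, 1e17):   # substrate doping, cm^-3
    print(f"NA = {na:.0e} cm^-3 -> phi_F = {fermi_potential(na):.3f} V")
```

The logarithmic dependence is evident: each decade of doping adds only about 60 mV to the Fermi potential at room temperature.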
Under a larger positive gate bias, the positive charge on the gate increases further, and the oxide field begins to collect thermally generated electrons under the gate. With these electrons, the intrinsic surface begins to change into an n-type inversion layer. The negative charge in the semiconductor is then comprised of ionized acceptor atoms in the depletion region and free electrons in the inversion layer. As noted above, at this point the electron concentration at the surface is still less than the
hole concentration in the neutral bulk. Thus, this condition is referred to as weak inversion, and ψs = φF is defined as the onset of weak inversion, as shown in Figure 3.3(B). As the gate bias is increased further, the band bending increases. The depletion region becomes wider, and the electron concentration in the inversion layer increases. When the electron concentration is equal to the hole concentration in the bulk, a strong inversion layer is said to form. The surface potential required to achieve strong inversion is equal to:¹

ψs = 2φF = (2/q)(Ei − EF).    (3.6)

FIGURE 3.3 Energy band diagrams and charge distributions of the ideal MOS capacitor for (A) negative gate bias (QG < 0, QC > 0) and (B) positive gate bias (QG > 0, QC < 0), showing the ionized acceptors and inversion-layer electrons.

The inversion and depletion charge variation with ψs is shown in Figure 3.4. The electron concentration in the inversion layer is an exponential function of the surface potential and is given by:

ninv ≈ NA e^(q(ψs − 2φF)/kT).    (3.7)

¹ In some texts, ψs = 2φF is taken as the onset of moderate inversion. For strong inversion, the surface potential is required to be 6 kT/q above this level.
On the other hand, the charge density in the depletion region is written as:

QD = −qNA WD,    (3.8)

where WD, the depletion region width, is given by:

WD = √(2εs/(qNA)) √ψs.    (3.9)
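Equations 3.8 and 3.9 can be checked numerically. The sketch below (with an assumed doping level and the standard silicon permittivity) shows the square-root dependence: doubling the surface potential increases the depletion width and charge by only √2.

```python
import math

# Depletion width and depletion charge versus surface potential
# (equations 3.8 and 3.9). The doping level is an illustrative
# assumption; the permittivity is the standard value for silicon.

Q = 1.602e-19        # electronic charge, C
EPS_SI = 1.04e-12    # permittivity of silicon, F/cm
NA = 1e17            # substrate doping, cm^-3, assumed

def depletion_width(psi_s):
    """W_D = sqrt(2*eps_s*psi_s/(q*NA)), equation 3.9, in cm."""
    return math.sqrt(2 * EPS_SI * psi_s / (Q * NA))

def depletion_charge_mag(psi_s):
    """|Q_D| = q*NA*W_D, magnitude of equation 3.8, in C/cm^2."""
    return Q * NA * depletion_width(psi_s)

for psi in (0.4, 0.8):
    print(f"psi_s = {psi} V: W_D = {depletion_width(psi) * 1e7:.1f} nm, "
          f"|Q_D| = {depletion_charge_mag(psi):.2e} C/cm^2")
```

This weak square-root dependence is exactly why, as discussed next, the depletion charge cannot easily absorb additional gate charge beyond strong inversion.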
Therefore, the charge density in the depletion region is a weak function of the surface potential. Consequently, when the gate bias is increased beyond the value required to reach strong inversion, the extra positive charge on the gate can be easily compensated by new electrons in the inversion layer. This eliminates the need to uncover additional acceptor atoms in the depletion region. After this point, the depletion region width and, hence, the band bending can increase only negligibly. The surface potential is pinned to its maximum value, which is a few kT/q over 2φF. For simplicity, 2φF is generally taken as the maximum value for the surface potential, ψs.² The maximum depletion layer width, WD,max, reached at the onset of strong inversion is given by:

WD,max = √(2εs/(qNA)) √(2φF).    (3.10)

FIGURE 3.4 Inversion layer charge and depletion region charge as a function of the surface potential ψs, with the onsets of weak and strong inversion at φF and 2φF.

² A more accurate definition for the onset of strong inversion is the surface potential at which the inversion layer charge is equal to the depletion region charge, which is 6 kT/q above the onset of moderate inversion.

3.2.2 Deviations from the Ideal Capacitor

In practical devices, the work function of the metal, φm, is not equal to that of the semiconductor, φs. Consider a MOS capacitor with φm < φs at zero gate bias (gate connected to the substrate). Electrons in the metal reside at energy levels above the conduction band of the semiconductor. As a result, electrons flow from the metal into the semiconductor until a potential that counterbalances the difference in the work functions is built up between the two plates. This induces a negative charge under the gate dielectric accompanied by a downward band bending and, hence, a positive surface potential. If an external voltage equal to this difference is applied to the gate, the net charge in the semiconductor disappears and the bands return to their flat position. This voltage is defined as the flat-band voltage, VFB, and can be expressed as:

VFB = φms = φm − φs.    (3.11)

Another modification to the ideal picture comes from charges that reside in the dielectric. These charges may originate from processing, from defects in the bulk, or from charges that exist at the interface between Si and SiO2. The charges in the oxide or at the interface induce opposite charges in the semiconductor and the gate. This can cause the bands to bend up or down. Therefore, the flat-band voltage has to be adjusted to take this into account. Assuming that all oxide fixed charges reside at the Si-SiO2 interface, the flat-band voltage can be expressed as:

VFB = φms − Qo/Cox,    (3.12)

where Qo is the oxide charge and Cox is the oxide capacitance. In addition, defects located at the Si-SiO2 interface may not be fixed in their charge state and may vary with the surface potential of the substrate. These defects are referred to as fast interface traps and have an impact on the switching characteristics of MOSFETs, as will be discussed later. Based on the value of ψs, different regions of operation can be defined; they are shown in Table 3.1.

TABLE 3.1    Regions of Operation of the MOS Capacitor for a p-Substrate

Region          ψs          VGB            Qc
Accumulation    ψs < 0      VGB < VFB      Qc > 0
Depletion       ψs > 0      VGB > VFB      Qc < 0
Inversion       ψs > 0      VGB >> VFB     Qc < 0

3.2.3 Small-Signal Capacitance

If the gate voltage is changed by a small positive amount, a small positive charge will flow into the gate. To preserve charge neutrality, an equal amount of charge must flow out of the semiconductor. The relation of the small change in charge to the small change in voltage is defined as the small-signal capacitance and can be written as:

Cgb ≡ dQG/dVGB.    (3.13)

The equivalent circuit for the total gate capacitance is shown in Figure 3.5. The total gate capacitance consists of the oxide and semiconductor capacitances, where the semiconductor capacitance is the sum of the depletion capacitance, the inversion capacitance, and the interface states capacitance.

FIGURE 3.5 Equivalent circuit for the total gate capacitance Cgate: Cox in series with the parallel combination of Cdep, Cinv, and Cit.

3.2.4 Threshold Voltage

The threshold voltage of an MOS capacitor is the gate voltage, VGB, required to create strong inversion (i.e., ψs = 2φF) under the gate. Figure 3.6 shows the inversion charge as a function of VGB. The straight-line extrapolation of this charge to the x-axis is called the extrapolated threshold voltage, VTO. When the semiconductor (body) is at ground potential (VB = 0), VTO is given by:

VTO = VFB + 2φF + γ √(2φF),    (3.14)

where γ is referred to as the body-effect coefficient and can be expressed as:

γ = √(2qεS NA)/Cox.    (3.15)

FIGURE 3.6 Inversion charge QI versus VGB; the straight-line extrapolation (slope = Cox) intercepts the x-axis at VTO.

With a body bias, the threshold voltage is given by:

VT = VFB + 2φF + γ √(2φF + VB).    (3.16)

As shown in the above equation, the threshold voltage increases when a back bias is applied. A positive bias on the substrate results in a wider depletion region and assists in balancing the gate charge. This causes the electron concentration in the inversion layer to decrease. Thus, a higher gate voltage is needed to achieve the onset of inversion, resulting in an increase of the threshold voltage. In addition, the doping concentration and oxide thickness have an impact on the back-bias dependence: lower doping concentrations and thinner oxides result in a weaker dependence of the threshold voltage on back bias.

3.3 Metal-Oxide-Silicon Field Effect Transistor

The MOSFET is the building block of current ultra large-scale integrated circuits (ICs). The growth and complexity of MOS ICs have been continually increasing over the years to enable faster and denser circuits. Shown in Figure 3.7 is the simplified schematic of a MOSFET. The device consists of a MOS gate stack formed between two pn junctions called the source and drain junctions. The region under the gate is often referred to as the channel region. The length and the width of this region are called the channel length and channel width, respectively. An inversion layer under the gate creates a conductive path (i.e., channel) from source to drain and turns the transistor on. When the channel forms right under the gate dielectric, the MOSFET is referred to as a surface channel MOSFET. Buried channel MOSFETs, in which the channel forms slightly beneath the surface, are also used, but they are becoming less popular with the continuous downscaling of MOSFETs. A fourth contact to the substrate is also formed and is referred to as the body contact. The standard gate dielectric used in the Si integrated circuit industry is SiO2. The gate electrode is heavily doped n-type or p-type polysilicon. The source and drain regions are formed by ion implantation. We shall first consider MOSFETs with uniform doping in the channel region. Later in the chapter, however, we shall learn how nonuniform doping profiles are used to enhance the performance of MOSFETs.

FIGURE 3.7 Simplified schematic of an n-channel MOSFET: n+ source and drain in a p-type substrate, with a gate of length L and width W over a silicon dioxide dielectric.

The MOSFET shown in Figure 3.7 is an n-channel MOSFET, in which electrons flow from source to drain in the channel induced under the gate oxide. Both n-channel and p-channel MOSFETs are extensively used. In fact, CMOS IC technology relies on the ability to use both devices on the same chip. Table 3.2 shows the dopant types used in each region of the two structures.

TABLE 3.2    Dopant Types for Different Regions of n-Channel and p-Channel MOSFETs

Region                  n-channel MOSFET    p-channel MOSFET
Substrate (channel)     p                   n
Gate electrode          n+                  p+
Source and drain        n+                  p+
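The threshold-voltage expressions of equations 3.14 through 3.16 can be evaluated numerically. The sketch below assumes illustrative device parameters (the flat-band voltage, doping, oxide thickness, and Fermi potential below are not values from the text) and shows the body-effect increase of VT with substrate bias:

```python
import math

# Threshold voltage with body bias (equations 3.15 and 3.16) for a
# uniformly doped n-channel MOS structure. All device parameters
# passed in below are illustrative assumptions.

Q = 1.602e-19        # electronic charge, C
EPS_SI = 1.04e-12    # permittivity of silicon, F/cm
EPS_OX = 3.45e-13    # permittivity of SiO2, F/cm

def threshold_voltage(vfb, na, tox, phi_f, vb=0.0):
    """VT = VFB + 2*phi_F + gamma*sqrt(2*phi_F + VB), equation 3.16."""
    cox = EPS_OX / tox                              # oxide capacitance, F/cm^2
    gamma = math.sqrt(2 * Q * EPS_SI * na) / cox    # body-effect coefficient, eq. 3.15
    return vfb + 2 * phi_f + gamma * math.sqrt(2 * phi_f + vb)

# Assumed: VFB = -0.9 V, NA = 1e17 cm^-3, tox = 10 nm, phi_F = 0.42 V
for vb in (0.0, 1.0, 2.0):
    vt = threshold_voltage(-0.9, 1e17, 10e-7, 0.42, vb)
    print(f"VB = {vb:.1f} V -> VT = {vt:.3f} V")
```

With these assumptions the zero-bias threshold lands near 0.4 V and rises by several hundred millivolts under a 2 V back bias, the trend described in the text.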
3.3.1 Device Operation

Figure 3.8 shows an n-channel MOSFET with voltages applied to its four terminals. Typically, VS = VB and VD > VS. Shown in Figure 3.9 are the typical IDS-VDS characteristics of such a device. For simplicity, we assume that the body and the source terminals are tied to ground (i.e., VSB = 0). This yields:

VGS = VGB − VSB = VGB = VG;  VDS = VDB − VSB = VDB = VD.    (3.17)
As shown, at low VDS, the drain current increases almost linearly with VDS, resulting in a series of straight lines with slopes increasing with VGS. At high VDS, the drain current saturates and becomes independent of VDS. In this section, we present a description of the device operation complete with the important equations valid in the different regions of operation. We shall refer to Figure 3.10, which shows the inversion layer and the depletion regions that form under the inversion layer and around the source and drain junctions. Now we can begin to discuss the MOSFET regions of operation in detail.

FIGURE 3.8 An n-channel MOSFET with terminal voltages VGB = VG − VB, VDB = VD − VB, and VSB = VS − VB.

FIGURE 3.9 Typical IDS-VDS characteristics, showing the linear and saturation regions for increasing VGS.

Linear Region
With a small positive voltage on the drain and no bias on the gate (i.e., VDS > 0 and VGS = 0), the drain is a reverse-biased pn junction. Conduction band electrons in the source region encounter a potential barrier determined by the built-in potential of the source junction. As a result, electrons cannot enter the channel region and, hence, no current flows from the source to the drain. This is referred to as the off state. With a small positive bias on the gate, band bending in the channel region (ψs > 0) brings the conduction band in the channel region closer to the conduction band in the source region, thus reducing the height of the potential barrier to electrons. Electrons can now enter the channel, and a current flow from source to drain is established.
In the low-drain-bias regime, the drain current increases almost linearly with drain bias. Indeed, here the channel resembles an ideal resistor obeying Ohm's law. The channel resistance is determined by the electron concentration in the channel, which is a function of the gate bias. Therefore, the channel acts like a voltage-controlled resistor whose resistance is determined by the applied gate bias. As shown in Figure 3.9, as the gate bias is increased, the slope of the I-V characteristic gradually increases due to the increasing conductivity of the channel. We obtain different slopes for different gate biases. This region, where the channel behaves like a resistor, is referred to as the linear region of operation. The drain current in the linear regime is given by:

ID,lin = (W/L) μC′ox [(VGS − VT)VDS − (a/2)VDS²],    (3.18)

where VT is the threshold voltage, C′ox is the gate capacitance per unit area, a is a constant, and μ is the effective channel mobility (which differs from the bulk mobility). We shall deal with the concept of effective channel mobility later in this chapter. The threshold voltage in the above equation is defined as:

VT = VFB + 2φF + γ √(2φF + VSB).    (3.19)

For small VDS, the second term in the brackets can be ignored, and the expression for the drain current reduces to:

ID,lin = (W/L) μC′ox (VGS − VT)VDS,    (3.20)

which is a straight line with a slope equal to the channel conductance:

σc = (W/L) μC′ox (VGS − VT).    (3.21)

FIGURE 3.10 Cross-section of an n-channel MOSFET showing the inversion layer, the depletion region (ionized acceptors), and pinch-off near the drain.

Saturation Region
For larger drain biases, the drain current saturates and becomes independent of the drain bias. Naturally, this region is referred to as the saturation region. The drain current in saturation is derived from the linear region current shown in equation 3.18, which is a parabola with a maximum occurring at VD,sat given by:³

VD,sat = (VGS − VT)/a.    (3.22)

³ The VDS at which the linear drain current parabola reaches its maximum can be found by setting ∂IDS/∂VDS equal to zero.

To obtain the drain current in saturation, this VD,sat value can be substituted into the linear region expression, which gives:

ID,sat = (W/L) μC′ox (VGS − VT)²/(2a).    (3.23)

As VDS increases, the number of electrons in the inversion layer decreases near the drain. This occurs for two reasons. First, because both the gate and the drain are positively biased, the potential difference across the oxide is smaller near the drain end. Because the positive charge on the gate is determined by the potential drop across the gate oxide, the gate charge is smaller near the drain end. This implies that the amount of negative charge in the semiconductor needed to preserve charge neutrality will also be smaller near the drain. Consequently, the electron concentration in the inversion layer drops. Second, increasing the voltage on the drain increases the depletion width around the reverse-biased drain junction. Since more negative acceptor ions are uncovered, fewer inversion layer electrons are needed to balance the gate charge. This implies that the electron density in the inversion layer near the drain would decrease even if the charge density on the gate were constant. The reduced number of carriers causes a reduction in the channel conductance, which is reflected in the smaller slope of the IDS-VDS characteristics as VDS approaches VD,sat, and the MOSFET enters the saturation region. Eventually, the inversion layer completely disappears near the drain. This condition is called pinch-off, and the channel conductance becomes zero.
As shown in Figure 3.9, VD,sat increases with gate bias. This results because a larger gate bias requires a larger drain bias to reduce the voltage drop across the oxide near the drain end. As given in equation 3.22, VD,sat increases linearly with VGS.
As VDS is increased beyond VD,sat, the width of the pinch-off region increases. However, the voltage that drops across the inversion layer remains constant and equal to VD,sat. The portion of the drain bias in excess of VD,sat appears across the pinch-off region. In a long-channel MOSFET, the width of the pinch-off region is assumed small relative to the length of the channel. Thus, neither the length nor the voltage across the inversion layer changes beyond pinch-off, resulting in a drain current independent of drain bias. Consequently, the drain current saturates. In smaller devices, this assumption falls apart and leads to channel length modulation, which is discussed later in the chapter.
From the above discussion, it is also evident that the electron distribution is highest near the source and lowest near the drain. To keep a constant current throughout the channel, the electrons travel slower near the source and speed up near the drain. In fact, in the pinch-off region, the electron density is negligibly small. Therefore, in this region, to maintain the same current level, the electrons have to travel at much higher speeds to transport the same magnitude of charge.
An important figure of merit for MOSFETs is the transconductance, gm, in the saturation regime, which is defined as:

gm = ∂ID,sat/∂VGS = (W/L)(μC′ox/a)(VGS − VT).    (3.24)
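The long-channel current equations 3.18, 3.22, and 3.23 combine into a simple piecewise model. The sketch below implements it with assumed device parameters (μC′ox, W/L, VT, and a are illustrative values, not taken from the text); the subthreshold current is neglected here for simplicity:

```python
# Piecewise long-channel NMOS drain current: linear region
# (equation 3.18), VD,sat (equation 3.22), and saturation
# (equation 3.23). Device parameters are illustrative assumptions,
# and subthreshold conduction is neglected.

def drain_current(vgs, vds, vt=0.5, k=200e-6, w_over_l=10.0, a=1.0):
    """ID in amperes; k = mu * C'ox in A/V^2, a as in equation 3.18."""
    if vgs <= vt:
        return 0.0                       # off state (subthreshold neglected)
    vd_sat = (vgs - vt) / a              # equation 3.22
    if vds < vd_sat:                     # linear region, equation 3.18
        return w_over_l * k * ((vgs - vt) * vds - (a / 2) * vds ** 2)
    return w_over_l * k * (vgs - vt) ** 2 / (2 * a)   # saturation, eq. 3.23

# The current rises with VDS and then saturates once VDS > VD,sat = VGS - VT:
for vds in (0.2, 0.5, 1.0, 2.0):
    print(f"VDS = {vds:.1f} V: ID = {drain_current(1.0, vds) * 1e3:.3f} mA")
```

Note that the two branches agree at VDS = VD,sat, so the model is continuous there, which mirrors the derivation of equation 3.23 by substituting VD,sat into equation 3.18.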
Transconductance is a measure of the responsiveness of the drain current to variations in gate bias.

Subthreshold Region: MOSFET in Weak Inversion
When the surface potential at the source end is sufficient to form an inversion layer but the band bending is less than what is needed to reach strong inversion (i.e., φF < ψs < 2φF), the MOSFET is said to operate in weak inversion. This region of operation is commonly called the subthreshold region and plays an important role in determining the switching characteristics of logic circuits. When an n-channel MOSFET is in weak inversion, the drain current is determined by diffusion of electrons from the source to the drain. This is because the drift current is negligibly small due to the low lateral electric field and the small electron concentration in weak inversion. Even though the electron concentration in weak inversion is small, it increases exponentially with gate bias. Consequently, the drain current in weak inversion also rises exponentially, and it can be expressed as:
IDS = (W/L) I′ e^(q(VGS − VT)/nkT) (1 − e^(−qVDS/kT)),    (3.25)
where I′ and n are constants defined as:

I′ = μ (kT/q)² √(2qεS NA)/(2√(2φF + VSB))    (3.26)

and

n = 1 + γ/(2√(2φF + VSB)).    (3.27)
When the drain bias is larger than a few kT/q, the dependence on the drain bias can be neglected, and the above equation reduces to:

IDS = (W/L) I′ e^(q(VGS − VT)/nkT),    (3.28)
which yields an exponential dependence on gate bias. When log IDS is plotted against gate bias, we obtain:

log IDS = log[(W/L) I′] + (q/kT)(VGS − VT)/n,    (3.29)
which is a straight line with slope:

1/S = (1/n)(q/kT).    (3.30)
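The swing S defined by equation 3.30, expressed through device parameters in equation 3.31 below, can be evaluated directly. The capacitance values in this sketch are illustrative assumptions:

```python
import math

# Subthreshold swing from equation 3.31:
# S = (kT/q) * (1 + (C'dep + C'it)/C'ox) * ln(10), in V/decade.
# Per-area capacitance values are illustrative assumptions.

KT_OVER_Q = 0.02585    # thermal voltage at 300 K, V

def subthreshold_swing(c_dep, c_it, c_ox):
    """Subthreshold swing in V/decade (equation 3.31)."""
    return KT_OVER_Q * (1 + (c_dep + c_it) / c_ox) * math.log(10)

# Assumed per-area capacitances, F/cm^2:
C_OX = 3.45e-7    # roughly a 10 nm SiO2 gate oxide
C_DEP = 1.0e-7    # depletion capacitance, assumed
C_IT = 0.0        # interface traps neglected

s = subthreshold_swing(C_DEP, C_IT, C_OX)
print(f"S = {s * 1e3:.1f} mV/decade")
```

The result falls in the 60 to 100 mV/decade range quoted in the text, and setting C′dep = C′it = 0 recovers the ideal room-temperature limit of about 60 mV/decade.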
The parameter S in the above equation is the MOSFET subthreshold swing, which is one of the most critical performance figures of MOSFETs in logic applications. It is highly desirable to have a subthreshold swing as small as possible, since this is the parameter that determines the amount of voltage swing necessary to switch a MOSFET from its off state to its on state. This is especially important for modern MOSFETs with supply voltages approaching 1.0 V.
Figure 3.11 shows typical subthreshold characteristics of a MOSFET. As predicted by the above model, log IDS increases linearly with gate bias up to VT. When VGS = VT, log IDS deviates from linearity; in practice, this is a commonly used method to measure the threshold voltage. In strong inversion, the subthreshold model is no longer valid, and either equation 3.20 or 3.23 must be used depending on the drain bias. In terms of device parameters, the subthreshold swing can be expressed as:

S ≅ (kT/q) [1 + (C′dep + C′it)/C′ox] ln(10),    (3.31)
0 is depletion region capacitance per unit area of the where Cdep MOS gate determined by the doping density in the channel region, and Cit0 is the interface trap capacitance. Lower
3 Field Effect Transistors
channel doping densities yield wider depletion region widths and hence smaller values of C′_dep. Another critical parameter is the gate oxide thickness, which determines C′_ox; to minimize S, the thinnest possible oxide must be used. The most widely used unit for S is mV/decade, and typical values range from 60 to 100 mV/decade.

[FIGURE 3.11 Typical subthreshold characteristics: log I_DS versus V_GS, with slope 1/S in weak inversion, deviation from the straight line in strong inversion, and the off-state leakage floor.]

The lower limit of the subthreshold current is determined by the leakage current that flows from source to drain when the transistor is off. This current, which is referred to as the off-state leakage, is determined by the source-to-channel potential barrier as well as the leakage currents of the source and drain junctions. For this reason, it is critical to be able to form low-leakage junctions.

3.3.2 Effective Channel Mobility
The mobility term used in the previous equations differs from the bulk mobility due to additional scattering mechanisms associated with the surface, including interface charge scattering, coulombic scattering, and surface roughness scattering. These mechanisms tend to lower the carrier mobility in the channel and also tend to exhibit a dependence on the vertical electric field, E_y. At low fields, interface and coulombic scattering are the dominant mechanisms; at high fields, surface roughness scattering dominates. Since the vertical field varies along the channel, the mobility also varies. The dependence on the vertical field is generally expressed as:

μ = μ_o/(1 + αE_y), (3.32)

where μ_o is roughly half of the bulk mobility and α is roughly 0.025 μm/V at room temperature. This mobility equation is not directly useful in device equations because E_y varies along the channel. To be able to use the standard equations, the field dependence is lumped into a constant termed the effective channel mobility, μ_eff. A simple yet intuitive model for μ_eff is:

μ_eff = μ_o/(1 + θ(V_GS − V_T)), (3.33)

where θ is a constant inversely proportional to the gate oxide thickness, t_ox. A typical plot of effective mobility as a function of gate bias is shown in Figure 3.12(A); the effect of mobility degradation on the I_DS−V_GS characteristics is shown in Figure 3.12(B).

[FIGURE 3.12 (A) Effective mobility μ_eff versus V_GS; (B) I_DS−V_GS characteristics with and without mobility degradation at high gate field.]

3.3.3 Nonuniform Channels
Typical channels in today's MOSFETs are nonuniform, consisting of a higher-doped region placed underneath the Si-SiO₂ interface. This is done primarily to optimize the threshold voltage value while keeping the substrate concentration low. The new threshold voltage can be expressed as:
Veena Misra and Mehmet C. Öztürk
V_T = V′_FB + 2φ_F + γ√(2φ_F + V_B), (3.34)

where V′_FB is given by:
V′_FB = V_FB + qM/C_ox. (3.35)
The variable M is the implant dose. This equation assumes that the threshold adjust implant is very shallow and behaves as a sheet of fixed charge located under the gate oxide. In some cases, additional doped layers are placed in the channel to suppress punchthrough, a term that will be discussed later.
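Under the sheet-charge assumption of equation 3.35, the threshold shift contributed by the implant is simply qM/C_ox. A minimal sketch, with an illustrative dose and oxide thickness that are assumptions rather than values from the text:

```python
def implant_vt_shift(dose_cm2, tox_cm):
    """Threshold shift qM/Cox (eq. 3.35) for a threshold-adjust implant of
    dose M (cm^-2) treated as a sheet of charge under an oxide of thickness tox (cm)."""
    q = 1.602e-19             # elementary charge, C
    eps_ox = 3.9 * 8.854e-14  # SiO2 permittivity, F/cm
    c_ox = eps_ox / tox_cm    # oxide capacitance per unit area, F/cm^2
    return q * dose_cm2 / c_ox

# A 1e12 cm^-2 dose under a 10 nm (1e-6 cm) oxide shifts VT by roughly 0.46 V:
print(round(implant_vt_shift(1e12, 1e-6), 2))  # 0.46
```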
3.3.4 Short Channel Effects
MOSFETs are continually downscaled for higher packing density, higher device speed, and lower power consumption. The scaling methods are covered later in this chapter in a dedicated subsection. When the physical dimensions of MOSFETs are reduced, the equations for drain current have to be modified to account for the so-called short channel effects. The three primary short channel effects included in this chapter are the following:

1. Velocity saturation: limits the maximum carrier velocity in the channel to the saturation velocity in Si.
2. Channel length modulation: causes the drain current to increase with drain bias in the saturation region.
3. Drain-induced barrier lowering (DIBL): causes the threshold voltage to change from its long channel value, with a dependence on device geometry as well as drain bias.

Velocity Saturation
As discussed earlier, in long channel MOSFETs, the drain current saturates for V_DS larger than V_D,sat. The potential drop across the inversion layer remains at V_D,sat, and the horizontal electric field, E_x, along the channel is fixed at:

E_x = V_D,sat/L. (3.36)
The electron drift velocity as a function of applied field is shown in Figure 3.13. At low fields, the drift velocity is given by:

v_d = μE_x, (3.37)
where μ is generally referred to as the low-field mobility. At high fields, the velocity saturates due to phonon scattering. In a short channel device, the electric field across the channel can become sufficiently high that the carriers suffer from velocity saturation. The effect of velocity saturation on MOSFET drain current can be severe, and in short channel MOSFETs it is impossible to avoid. Thus, it is
[FIGURE 3.13 Electron drift velocity v_d versus lateral field E_x: v_d = μE_x at low fields, saturating at v_d,max beyond the critical field E_c.]
safe to say that all devices used in modern logic circuits suffer from velocity saturation to some extent. To account for this phenomenon, I_DS has to be obtained using the velocity dependence on the electric field. A simple yet intuitive model for I_DS contains the original long channel drain current expression in the linear regime (equation 3.18) divided by a factor that accounts for velocity saturation:

I_DS(with velocity saturation) = I_DS(without velocity saturation)/(1 + V_DS/(L·E_c)), (3.38)
where E_c is the critical field above which velocity saturation occurs (Figure 3.13) and is given by:

E_c = v_sat/μ. (3.39)
The V_D,sat value obtained using the modified current expression is smaller than the long channel V_D,sat value. The drain current in saturation can be obtained from the linear region expression by assuming the drain current saturates for V_DS > V_D,sat:

I_D,sat ≈ W μC_ox(V_GS − V_T)E_C. (3.40)
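Equations 3.38 and 3.39 can be exercised directly. The sketch below uses silicon-like illustrative numbers (v_sat ≈ 1e7 cm/s, μ = 400 cm²/V·s, a 0.1 μm channel) that are assumptions, not values from the text:

```python
def critical_field(v_sat, mu):
    """Critical field for velocity saturation, eq. 3.39: Ec = v_sat/mu."""
    return v_sat / mu

def ids_with_velocity_saturation(ids_long_channel, vds, length_cm, ec):
    """Eq. 3.38: divide the long-channel linear-region current by (1 + VDS/(L*Ec))."""
    return ids_long_channel / (1.0 + vds / (length_cm * ec))

ec = critical_field(1e7, 400.0)
print(ec)  # 25000.0 V/cm

# For a 0.1 um (1e-5 cm) device at VDS = 1 V, the correction factor is 5x:
print(round(ids_with_velocity_saturation(1.0, 1.0, 1e-5, ec), 3))  # 0.2
```

Note how V_DS/(L·E_c) grows as L shrinks, which is why short channel devices cannot escape this effect.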
It is important to note that with velocity saturation, the dependence of drain current on V_GS is linear instead of quadratic, as is the case for the long channel MOSFET.

Channel Length Modulation
After pinch-off occurs at the drain end, the length of the inversion layer and, hence, the channel resistance continually decrease as the drain bias is raised above V_D,sat. In long channel devices, the length of the pinch-off region is negligibly small; since the voltage drop across the channel is pinned at V_D,sat, the drain current increases only negligibly. However, in short channel devices, this pinch-off region can become a significant fraction of the total channel length. This reduction of channel length manifests itself as a finite slope in the
[FIGURE 3.14 I_DS−V_DS characteristics for increasing V_GS, showing the finite slope λ beyond V_D,sat due to channel length modulation.]
I_DS−V_DS characteristics beyond V_D,sat, as shown in Figure 3.14. A commonly used expression for the MOSFET drain current in saturation with channel length modulation is:

I_D,sat = (W/L) μC′_ox [(V_GS − V_T)²/(2a)] (1 + λV_DS), (3.41)
where λ is known as the channel length modulation factor, expressed as:

λ = 1/(E_o L), (3.42)

and E_o is the magnitude of the electric field at the pinch-off point. If both velocity saturation and channel length modulation are taken into account, the saturation drain current can be written as:

I_D,sat ≈ W μC_ox(V_GS − V_T)E_C(1 + λV_DS). (3.43)

Drain-Induced Barrier Lowering
Another major short channel effect deals with the reduction of the threshold voltage as the channel length is reduced. In long channel devices, the influence of the source and drain on the channel depletion layer is negligible. However, as channel lengths are reduced, the overlapping source and drain depletion regions start having a large effect on the channel depletion region, causing the depletion region under the inversion layer to widen. The wider depletion region is accompanied by a larger surface potential, which makes the channel more attractive to electrons. Therefore, a smaller amount of charge on the gate is needed to reach the onset of strong inversion, and the threshold voltage decreases. This effect is worsened when there is a larger bias on the drain, since the depletion region becomes even wider. This phenomenon is called drain-induced barrier lowering (DIBL). Figure 3.15 shows the variation of surface potential from source to drain for a short channel and a long channel MOSFET.

[FIGURE 3.15 Surface potential between source (x = 0) and drain (x = L) for long and short channel MOSFETs, showing the long channel barrier φ_BL, the barrier reduction Δφ_s for the short channel device, and the effect of drain bias V_D.]

As shown, for the long channel device, the potential is nearly constant throughout the channel and is determined only by the gate bias.⁴

⁴The channel potential actually increases gradually toward the drain due to the small yet finite resistance of the inversion layer. When pinch-off occurs at the drain end, the majority of the drain-to-source bias appears across the pinch-off region due to its much larger resistivity.

The height of the potential barrier for electrons at the source end is given by:

φ_BL ≅ 2φ_F. (3.44)

For the smaller device, the surface potential is larger due to additional band bending. In addition, the surface potential is
no longer constant throughout the channel. With a larger surface potential, the barrier for electrons between the source and the channel is reduced and is given by:

φ_B = φ_BL − Δφ_B. (3.45)
The change in threshold voltage due to DIBL has been modeled by Liu et al. (1993) as:

ΔV_T ≈ [3(φ_bi − 2φ_F) + V_DS] e^{−L/λ_DIBL}, (3.46)

where φ_bi is the built-in potential of the drain junction and:

λ_DIBL = √(ε_Si t_ox W_D/(ε_ox β)). (3.47)
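The exponential length dependence of equation 3.46 is easy to see numerically. The values below (φ_bi = 0.9 V, φ_F = 0.4 V, λ_DIBL = 20 nm) are illustrative assumptions, not values from the text:

```python
import math

def delta_vt_dibl(l_cm, lambda_dibl_cm, phi_bi, phi_f, vds):
    """DIBL threshold shift, eq. 3.46:
    dVT ~ [3*(phi_bi - 2*phi_F) + VDS] * exp(-L / lambda_DIBL)."""
    return (3.0 * (phi_bi - 2.0 * phi_f) + vds) * math.exp(-l_cm / lambda_dibl_cm)

lam = 20e-7  # characteristic length lambda_DIBL: 20 nm expressed in cm
short = delta_vt_dibl(100e-7, lam, 0.9, 0.4, vds=1.0)   # L = 100 nm
long_ = delta_vt_dibl(1000e-7, lam, 0.9, 0.4, vds=1.0)  # L = 1 um
print(short > 1e3 * long_)  # True: the shift dies off exponentially with L
```

This is why the model pushes toward minimizing λ_DIBL through thinner oxides and higher channel doping.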
In equation 3.47, W_D is the depletion region width under the inversion layer, and β is a fitting parameter. The model indicates that ΔV_T is a strong function of the λ_DIBL term in the exponential, which must therefore be minimized. This requires a higher doping density in the channel to reduce W_D and downscaling of the gate oxide thickness, t_ox. The model does not include the well-known dependence of ΔV_T on source/drain junction depths, although some modifications have been suggested. The impact of DIBL on V_T can be measured by plotting the subthreshold characteristics for increasing values of V_DS, as shown in Figure 3.16. The subthreshold slope degrades, and the impact of drain bias on ΔV_T increases, as the channel length is reduced. This methodology is valid only as long as the subthreshold slope remains intact. A very large subthreshold swing implies that the device cannot be turned off. This phenomenon is called punchthrough and occurs roughly when the source and drain depletion regions meet. Since the depletion region becomes wider at larger drain biases, the onset of punchthrough is reached sooner. Punchthrough can occur
either at or below the surface depending on the doping profile in the channel region. Surface punchthrough occurs for uniformly doped substrates and results in the loss of gate control of the channel region, causing the device to fail. Bulk punchthrough occurs for ion-implanted channels that have a higher doping concentration at the surface. In this condition, there still exists a channel that is gate controlled, but the background leakage becomes very high and is strongly dependent on VDS . The common solution is to add a shallow punchthrough implant typically placed immediately below the threshold adjust implant. This raises the doping density under the channel while keeping the bulk doping density as low as possible. The goal is to minimize junction depletion region capacitance, which has an impact on device switching speed. Similarly, for a fixed drain bias, a larger body bias increases the reverse bias on the drain junction and, thus, has the same effect as increasing the drain bias. Lowering of the threshold voltage due to DIBL is demonstrated in Figure 3.17. Note that the effect of body bias on threshold voltage for large channel MOSFETs is determined by equation 3.15. Here we are interested in lowering of the threshold voltage due to body bias only for short channel lengths. In addition to the short channel effects described above, the scaling of the channel width can also have a large effect on the threshold voltage, and its nature depends on the isolation technology (Tsividis, 1999). In the case of LOCOS isolation, the threshold voltage increases as the channel width decreases. On the other hand, for shallow trench isolation, the threshold voltage decreases as the channel width decreases. In today’s technologies, shallow trench isolation is the predominant isolation technique.
3.3.5 MOSFET Scaling Scaling of MOSFETs is necessary to achieve (a) low power, (b) high speed, and (c) high packing density. Several scaling methodologies have been proposed and used, which are
[FIGURE 3.16 Subthreshold characteristics (log I_DS versus V_GS) at V_DS = 100 mV and V_DS = V_DD for decreasing channel lengths.]

[FIGURE 3.17 Threshold voltage versus channel length at V_SB = 0 V and V_SB = 1 V, showing threshold voltage roll-off at short channel lengths.]
derivatives of constant field scaling. In constant-field scaling, the supply voltage and the device dimensions (both lateral and vertical) are scaled by the same factor, k, such that the electric field remains unchanged. This results in scaling of the drain current, or current drive. At the same time, the gate capacitance is also scaled due to the reduced device size. This provides a reduction in the gate charging time, which directly translates into higher speed. Furthermore, the power dissipation per transistor is also reduced. Constant field scaling thus provides a good framework for CMOS scaling without degrading reliability. However, there are several parameters, such as kT/q and the energy gap, that do not scale with reduced voltages or dimensions and present challenges in device design. Since the junction built-in potential, φ_bi, and the surface potential, ψ_s, are both determined by the bandgap, they do not scale either. Consequently, depletion region widths do not scale as much as other parameters, which results in worsened short channel effects. Another parameter that does not scale easily is the threshold voltage; it sets the lower limit for the power-supply voltage, since a safe margin between the two parameters is required for reliable device operation. Other parameters that do not scale well include the off-state current and the subthreshold slope. Although constant field scaling provides a reasonable guideline, scaling the voltages by the same factor as the physical dimensions is not always practical. This is due to the inability to scale the subthreshold slope properly as well as the need to remain compatible with the standardized voltage levels of prior generations. This is the basis for constant voltage scaling and other modified scaling methodologies. One problem with constant voltage scaling is that the oxide field increases, since t_ox is also scaled by k. To reduce this problem, the oxide thickness is instead reduced by a smaller factor k′, where k′ < k.
Modifications of the constant voltage and constant field scaling have also been tried to avoid high-field problems. For example, in quasi-constant voltage scaling, physical dimensions are scaled by k, whereas the voltages are scaled by a different factor, k′. Furthermore, since depletion layers do not scale proportionately, the doping concentration, N_A, must be increased by more than what constant-field scaling suggests. This is called generalized scaling, and its doping scaling factor is greater than k. The scaling rules discussed above are summarized in Table 3.3. It should be noted that there are other effects that can limit the performance of scaled devices. These include (a) reduction
of the effective oxide capacitance due to the finite thickness of the inversion layer, (b) depletion in the polysilicon gate, (c) quantum effects in the inversion layer that increase V_T, and (d) quantum mechanical tunneling of carriers through the thin oxide, which is a critical issue for t_ox < 50 Å.
3.3.6 Modern MOSFETs
Today's MOSFET has evolved through many modifications to provide solutions for parasitic problems that emerge during scaling. A modern n-channel MOSFET structure is shown in Figure 3.18. The gate dielectric is SiO₂ and is grown by thermal oxidation. As a result of continued scaling, the gate oxide has thinned down to the extent that direct tunneling has become a serious concern. Currently, alternate high-k dielectrics are being considered as replacements for SiO₂; the goal is to achieve higher capacitance without compromising gate leakage. The standard gate electrode is heavily doped polysilicon defined by anisotropic reactive ion etching. Extension junctions are formed by ion-implantation using the polysilicon gate as an implant mask. Because a separate masking step is not needed to define the junction regions, the approach is named self-aligned polysilicon gate technology. At the same time, the junction sheet resistance must be sufficiently low not to increase the device series resistance. Extension junctions must be as shallow as possible to reduce DIBL. When formed by ion-implantation, shallow junctions require low doses and low energies; unfortunately, a low dose results in higher series resistance. Sidewall spacers are formed by deposition of a dielectric (SiO₂ or Si₃N₄) followed by anisotropic reactive ion etching. The spacers must be as thin as possible to minimize the series resistance contribution of the extension junctions. Deep source and drain junctions are formed by ion-implantation following spacer formation. Because they are separated from the channel by the spacers, their contribution to DIBL is negligible. These junctions must be sufficiently deep to allow formation of good quality silicide contacts. Both Ti and Co silicides are used as source/drain contact materials. Silicides are formed by an approach referred to as self-aligned silicide (SALICIDE), which selectively forms the silicide on the junctions as well as the polysilicon.
Silicide formation consumes the Si substrate in the deep source/drain regions and can lead to excessive junction leakage. To avoid this, the deep source/ drain regions are required to be about 50 nm deeper than the
TABLE 3.3 Scaling Rules for Four Different Methodologies (1 < k′ < k)

Parameter   | Constant field | Constant voltage | Quasi-constant voltage | Generalized
W, L        | 1/k            | 1/k              | 1/k                    | 1/k
t_ox        | 1/k            | 1/k′             | 1/k                    | 1/k
N_A         | k              | k                | k                      | k²/k′
V_dd, V_T   | 1/k            | 1                | 1/k′                   | 1/k′
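The table can be applied mechanically to a device parameter set. The sketch below encodes the rules as read from the reconstructed table (treat the column assignments as an interpretation of the garbled original), and the example device values are illustrative assumptions:

```python
def scale_device(params, k, k_prime, method="constant_field"):
    """Apply the Table 3.3 scaling rules to a dict with keys
    'WL', 'tox', 'NA', 'V'. k and k_prime are scaling factors with 1 < k' < k."""
    rules = {
        "constant_field":         {"WL": 1/k, "tox": 1/k,       "NA": k,            "V": 1/k},
        "constant_voltage":       {"WL": 1/k, "tox": 1/k_prime, "NA": k,            "V": 1.0},
        "quasi_constant_voltage": {"WL": 1/k, "tox": 1/k,       "NA": k,            "V": 1/k_prime},
        "generalized":            {"WL": 1/k, "tox": 1/k,       "NA": k**2/k_prime, "V": 1/k_prime},
    }
    r = rules[method]
    return {name: value * r[name] for name, value in params.items()}

device = {"WL": 1.0, "tox": 10e-9, "NA": 1e17, "V": 2.5}
print(scale_device(device, k=2.0, k_prime=1.4)["V"])  # constant-field: 1.25
```

Constant-voltage scaling leaves V unchanged, which is exactly the source of its high-field problem: dimensions shrink while the supply does not.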
[FIGURE 3.18 Modern n-channel MOSFET: an n⁺ polysilicon gate of length L and width W over a thin gate oxide, SiO₂ or Si₃N₄ sidewall spacers, self-aligned silicide (TiSi₂ or CoSi₂) on the gate and junctions, shallow n⁺ extension junctions, and deep n⁺ source/drain contact junctions in a p-type substrate.]

[FIGURE 3.19 Simplified schematic of a JFET: an n-type channel sandwiched between p⁺ gate regions, with n⁺ source and drain contacts.]
amount consumed. An important advantage of the SALICIDE process is that the entire junction area up to the sidewall spacer is used for contact formation, which translates into low contact resistance. The channel doping profile typically consists of the threshold adjust and punchthrough stop implants. The gate electrode is very heavily doped to avoid polysilicon depletion. The MOSFET continues to maintain its dominance in today’s digital ICs. To ensure continued benefits from MOSFET scaling, several structural and material changes are required. These include new gate dielectric materials to reach equivalent SiO2 thicknesses less than 1 nm, new doping technologies that provide ultra-low resistance and ultra-shallow junctions, as well as metal gate electrodes that do not suffer from depletion effects.
3.4 Junction Field Effect Transistor

The junction field effect transistor (JFET) was first proposed by William Shockley in 1952; the first demonstration was made the following year by Dacey and Ross. Due to the large popularity of the bipolar junction transistor (BJT) at the time, major advancements in JFET fabrication did not occur until the 1970s, and with the introduction of the MOSFET, the use of the JFET remained limited to specific applications.

Figure 3.19 shows a simplified schematic of the JFET. The device consists of an n-type channel region sandwiched between two p⁺n junctions that serve as the gate. The p⁺ regions can be tied together to form a dual-gate JFET. The heavily doped n⁺ source and drain regions serve as low-resistivity ohmic contacts to the n-type channel. As such, a conductive path exists between the source and drain contacts; the device is turned off by depleting the channel of free carriers. The I_DS−V_DS characteristics of a JFET are very similar to those of a MOSFET.

Figure 3.20 illustrates the operation of a JFET in its different regions of operation. The depletion region widths of the two p⁺n junctions are determined by the gate bias and the potential variation along the channel. The widths of the depletion regions at the source end are determined mainly by the built-in potential of the junction; the depletion regions are wider at the drain end due to the large positive bias applied to the drain terminal.

[FIGURE 3.20 JFET operation: (A) the linear region, with gate depletion regions of width W_d narrowing the channel of depth 2a; (B) saturation, with the pinch-off point at the drain end moving toward the source as V_D is raised.]

Figure 3.20(A) corresponds to the linear region of the JFET. Even though the channel depth is reduced at the drain end, the n-type channel extends from the source to the drain. The channel resembles a resistor, and the current is a linear function of the drain bias; the depth and, hence, the resistance of the channel are modulated by the gate bias. The drain current in the linear region can be expressed as:

I_D,lin = (G_o/(2V_P))(V_G − V_T)V_D, (3.48)

where V_P is the pinch-off voltage, corresponding to the potential drop across the gate junction when the two depletion regions meet at the drain end, and G_o is the full channel conductance with a channel depth equal to 2a. These can be expressed as:

G_o = 2aqμ_n N_D(W/L) (3.49)

and

V_P = qN_D a²/(2ε_s). (3.50)
When the two depletion regions meet at the drain end, the JFET enters the saturation region. As shown in Figure 3.20(B), the pinch-off point moves toward the source as the drain bias is raised. The potential at the pinch-off point, however, remains pinned at:

V_D,sat = V_P − ψ_bi + V_G = V_G − V_T. (3.51)

The built-in potential ψ_bi is a function of the doping concentrations on both sides of the junction. For large devices, the reduction in channel length due to pinch-off can be negligibly small. Hence, the channel conductance remains approximately the same, with a fixed voltage equal to V_D,sat across the conductive channel. The drain current no longer increases with drain bias, and the JFET is said to operate in the saturation region. The drain current after pinch-off can be expressed as:

I_D,sat = (G_o V_P/4)(1 + V_G/V_P)². (3.52)

In JFETs with short channel lengths, the electric field along the channel can be large enough to cause carrier velocity saturation even before pinch-off. This requires a drain bias of:

V_D = E_c L, (3.53)

where E_c is the critical field for velocity saturation.
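As a numerical sketch of equations 3.48 through 3.52 (with V_T = ψ_bi − V_P following from equation 3.51), using illustrative values of G_o, V_P, and ψ_bi that are assumptions rather than values from the text:

```python
def jfet_id_lin(g0, vp, vg, vt, vd):
    """JFET linear-region drain current, eq. 3.48: ID = G0*(VG - VT)*VD/(2*VP)."""
    return g0 * (vg - vt) * vd / (2.0 * vp)

def jfet_id_sat(g0, vp, vg):
    """Drain current after pinch-off as reconstructed in eq. 3.52:
    ID,sat = (G0*VP/4)*(1 + VG/VP)^2."""
    return g0 * vp / 4.0 * (1.0 + vg / vp) ** 2

# Illustrative device (assumed values): G0 = 1 mS, VP = 4 V, psi_bi = 0.8 V.
g0, vp, psi_bi = 1e-3, 4.0, 0.8
vt = psi_bi - vp        # threshold voltage, from eq. 3.51
vd_sat = 0.0 - vt       # at VG = 0, saturation begins at VD,sat = VG - VT
print(round(vd_sat, 2))  # 3.2
```

Note that V_T is negative for this normally-on device: a conducting channel exists at zero gate bias, and a reverse gate bias of V_T is needed to deplete it fully.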
3.5 Metal-Semiconductor Field Effect Transistor

The MESFET is very similar to the metal-oxide-semiconductor field effect transistor (MOSFET); the main difference is that in a MESFET, the MOS gate is replaced by a metal-semiconductor (Schottky) junction. A self-aligned GaAs MESFET structure is shown in Figure 3.21. The heavily doped source and drain junctions are formed in an n-type epitaxial layer grown on semi-insulating GaAs, which provides low parasitic capacitance. The n-type channel region has a thickness, a, which is typically less than 200 nm. The source
[FIGURE 3.21 Self-aligned GaAs MESFET: a metal gate over an n-type channel of thickness a, with n⁺ source and drain regions on semi-insulating GaAs.]
and drain junctions are formed by ion-implantation followed by annealing. Common source and drain contact materials are AuGe alloys. Popular Schottky gate metals include Al, Ti-Pt-Au layered structures, Pt, W, and WSi₂. However, in a self-aligned MESFET, since the source and drain implantation and implant annealing must be performed after gate formation, refractory metals that can withstand the annealing temperatures are preferred. The MESFET is better suited to materials that do not have a good dielectric with which to form a high-quality MOS gate. Today, MESFETs are commonly fabricated on compound semiconductors, predominantly GaAs. Because of the high mobility of carriers in GaAs and the low capacitance provided by the semi-insulating GaAs substrate, MESFETs have a speed advantage over MOSFETs. They are used in microwave applications that require high-speed devices; application areas include communication, high-speed computing, and military systems. Although the MESFET structure resembles the MOSFET, its operation is much closer to that of the JFET. Figure 3.22 shows the MESFET cross-section in different regions of operation. The MESFET has a conductive path between the source and the drain, since the channel is of the same conductivity type as the junctions. Therefore, to turn off a MESFET, a sufficiently high gate bias must be applied to deplete the channel. The depletion region under the gate modulates the width and the resistance of the conductive channel from source to drain; as such, in a MESFET, the Schottky gate plays exactly the same role that the pn junction gate plays in a JFET. The width of the depletion region is wider at the drain end, since the source is grounded and a finite bias is applied to the drain. At small drain biases, the channel is conductive from source to drain; however, the width of the channel is smaller at the drain end due to the wider depletion region.
The depletion region width under the gate is given by:

W_d(x) = √(2ε_s[ψ_bi + ψ(x) − V_G]/(qN_D)), (3.54)
Finally, V_T is the threshold voltage, given by:

V_T = ψ_bi − V_P. (3.58)

In saturation, the drain current is given by:

I_D,sat ≈ G_i(V_G − V_T)²/(4V_P), (3.59)

yielding a transconductance of:

g_m,sat = G_i[1 − √((ψ_bi − V_G)/V_P)]. (3.60)

[FIGURE 3.22 MESFET cross-sections: the linear region, with the gate depletion region of width W_d narrowing the channel of depth a, and saturation, with pinch-off at the drain end.]
where ψ_bi is the band bending in the semiconductor due to the metal-semiconductor work function difference, and ψ(x) is the channel potential with respect to the source. In this regime, the MESFET channel resembles a voltage-controlled resistor. The drain current varies linearly with the drain bias, and the gate bias changes the width of the depletion region to modulate the width and the resistance of the channel. This is the linear region of operation for a MESFET. For a long channel MESFET (L ≫ a), the drain current in the linear region is approximately equal to:

I_D,lin ≈ (G_i/(2V_P))(V_G − V_T)V_D, (3.55)
When pinch-off occurs, the voltage at the pinch-off point is pinned at V_P even as the channel length is continually reduced at higher drain biases. In long channel MESFETs, the length of the pinch-off region is negligibly small and can be ignored; this results in saturation of the drain current for voltages beyond V_P. In smaller devices, however, the length and the resistance of the channel decrease as the pinch-off region becomes wider. Similar to channel length modulation in MOSFETs, this results in a gradual increase in drain current with applied drain bias. In short channel devices, the horizontal field in the channel is high enough to reach the velocity saturation regime, which degrades the drain current as well as the transconductance; velocity saturation can be reached even before pinch-off. With velocity saturation, the drain current in the saturation region can be expressed as:

I_D,sat = qv_s W N_D a[1 − √((ψ_bi − V_G)/V_P)], (3.61)
which gives a transconductance of:

g_m,sat ≈ qv_s WaN_D/(2√(V_P(ψ_bi − V_G))). (3.62)

In the above equations,

G_i = qμ_n N_D a(W/L) (3.56)

is the channel conductance without depletion under the Schottky gate, and V_P is the pinch-off voltage, the net potential across the gate junction when the depletion region width under the gate is exactly equal to the channel depth, a. The pinch-off voltage is given by:⁵

V_P = qN_D a²/(2ε_s). (3.57)

⁵This comes from the depletion region expression of equation 3.54 by replacing the potential term with V_P and the depletion region width with the channel depth.
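A short numerical sketch of equations 3.57 and 3.62; the doping, channel depth, gate width, saturation velocity, and GaAs permittivity used below are illustrative assumptions, not values from the text:

```python
import math

def mesfet_pinch_off(nd_cm3, a_cm):
    """MESFET pinch-off voltage, eq. 3.57: VP = q*ND*a^2/(2*eps_s)."""
    q = 1.602e-19
    eps_s = 13.1 * 8.854e-14  # GaAs permittivity, F/cm (assumed value)
    return q * nd_cm3 * a_cm ** 2 / (2.0 * eps_s)

def mesfet_gm_sat(vs, w_cm, nd_cm3, a_cm, vp, psi_bi, vg):
    """Velocity-saturated transconductance, eq. 3.62:
    gm ~ q*vs*W*a*ND / (2*sqrt(VP*(psi_bi - VG)))."""
    q = 1.602e-19
    return q * vs * w_cm * a_cm * nd_cm3 / (2.0 * math.sqrt(vp * (psi_bi - vg)))

# A 100 nm (1e-5 cm) channel doped 1e17 cm^-3 pinches off near 0.69 V:
print(round(mesfet_pinch_off(1e17, 100e-7), 2))  # 0.69
```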
3.6 Modulation-Doped Field Effect Transistor

The modulation-doped field effect transistor (MODFET) is also known as the high-electron mobility transistor (HEMT). The device relies on the ability to form a high-quality heterojunction between a wide bandgap material lattice-matched to a narrow bandgap material. The preferred material system is AlGaAs-GaAs; however, MODFETs have also been demonstrated in other material systems, including Si-Si_xGe_{1-x}. The device was developed in the 1970s, and enhanced mobility was first demonstrated by Dingle in 1978.

Figure 3.23 shows the simplified schematic of a self-aligned MODFET.

[FIGURE 3.23 Simplified schematic of a self-aligned MODFET: a metal gate of length L and width W over n-type AlGaAs on undoped (i-)AlGaAs and i-GaAs layers grown on semi-insulating GaAs, with n⁺ source and drain regions.]

The current conduction takes place in the undoped GaAs layer. The n-type AlGaAs layer located under the metal (Schottky barrier) gate is separated from the undoped GaAs by a thin undoped AlGaAs layer that acts as a buffer. A two-dimensional (2-D) electron gas is formed in the GaAs immediately under the AlGaAs, and high mobility results from the absence of ionized impurity scattering in the undoped layer. The thickness of the undoped AlGaAs buffer layer is critical: it must be sufficiently thin to allow electrons to diffuse from the n-type AlGaAs into the GaAs, yet sufficiently thick to place the 2-D electron gas far away from the ionized impurities in the doped AlGaAs layer. The thickness of this layer is typically around 80 Å. The semiconductor layers are typically grown by molecular beam epitaxy (MBE); however, metal-organic chemical vapor deposition (MOCVD) has also been shown to be feasible.

The energy band diagram at the onset of threshold is shown in Figure 3.24.

[FIGURE 3.24 Energy band diagram of the metal/AlGaAs/GaAs structure at threshold, showing the Schottky barrier qφ_bn, the drop qV_P across the doped AlGaAs of thickness x_d, the undoped spacer of thickness x_ud, the conduction-band offset ΔE_C, qV_T, and the two-dimensional electron gas at the heterointerface.]

The threshold is typically taken as the gate bias at which the conduction band of the GaAs layer coincides with the Fermi level. The 2-D electron gas forms at the heterointerface immediately under the undoped AlGaAs buffer layer and resembles the inversion layer that forms under the gate dielectric of a MOSFET. The threshold voltage can be expressed as:

V_T = φ_bn − V_P − ΔE_C/q, (3.63)

where V_P is the pinch-off voltage for the AlGaAs layer, given by:

V_P = qN_D x_d²/(2ε_s). (3.64)

The drain current in the linear regime is given by:

I_D,lin ≅ μ_n C_o (W/L)(V_G − V_T)V_D, (3.65)

where

C_o = ε_S/(x_d + x_ud + Δd). (3.66)
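Equations 3.63 and 3.66 can be sketched numerically; the barrier height, pinch-off voltage, band offset, layer thicknesses, and AlGaAs permittivity below are illustrative assumptions, not values from the text:

```python
def modfet_vt(phi_bn, vp, delta_ec_ev):
    """MODFET threshold voltage, eq. 3.63: VT = phi_bn - VP - dEc/q.
    delta_ec_ev is the conduction-band offset in eV, so dEc/q is in volts."""
    return phi_bn - vp - delta_ec_ev

def modfet_co(xd_cm, xud_cm, dd_cm):
    """Gate capacitance per unit area, eq. 3.66: Co = eps_s/(xd + xud + dd)."""
    eps_s = 12.2 * 8.854e-14  # AlGaAs permittivity, F/cm (assumed value)
    return eps_s / (xd_cm + xud_cm + dd_cm)

# Illustrative numbers (assumed): phi_bn = 1.0 V, VP = 0.6 V, dEc = 0.25 eV.
print(round(modfet_vt(1.0, 0.6, 0.25), 2))  # 0.15
```

Because V_T depends on V_P, which scales with x_d², thinning or thickening the doped AlGaAs layer moves the threshold positive or negative, which is how both enhancement- and depletion-mode devices are obtained.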
In equations 3.63 through 3.66, x_d and x_ud are the doped and undoped AlGaAs thicknesses, and Δd is the thickness of the 2-D electron gas, which is typically less than 100 Å. The threshold voltage can be made positive or negative by adjusting x_d; hence, both enhancement- and depletion-mode devices are possible. The transconductance in the linear region is given by:

g_m,lin = dI_D,lin/dV_G ≈ μ_n C_o WV_D/L. (3.67)

When the drain bias is sufficiently large, the electron concentration in the 2-D gas is reduced to zero at the drain end, and the drain current saturates with V_D. This occurs at a drain bias of:

V_D,sat = V_G − V_T. (3.68)

The drain current in saturation is given by:

I_D,sat = μ_n C_o W(V_G − V_T)²/(2L), (3.69)

yielding a transconductance of:

g_m,sat = dI_D,sat/dV_G = μ_n C_o W(V_G − V_T)/L. (3.70)

In practical devices, the lateral electric field can be high enough that the carriers suffer from velocity saturation even before the drain current saturates. This is especially an issue for MODFETs due to the high mobilities achieved in the 2-D electron gas, since high mobility also implies a low E_c. With velocity saturation, the drain current saturates at a drain bias of:

V_D,sat ≈ E_c L. (3.71)

The drain current is then given by:

I_D,sat = WC_o(V_G − V_T)v_s, (3.72)

where v_s is the saturated drift velocity. This yields a transconductance of:

g_m,sat = dI_D,sat/dV_G = WC_o v_s. (3.73)

It is interesting to note that the saturation current is independent of channel length, and the transconductance is independent of both L and V_G. The device, however, benefits from a velocity overshoot that provides higher drive current.

References
Arora, N. (1983). MOSFET models for VLSI circuits simulation: Theory and practice. New York: Springer-Verlag Wien. Grove, A.S. (1967) Physics and technology of semiconductor devices. New York: John Wiley & Sons. Ng, K.K. (1995). Complete guide to semiconductor devices. New York: McGraw-Hill. Tsividis, Y.P. (1999). Operation and modeling of the MOS transistor. New York: McGraw-Hill. Z.H. Liu, et al., Threshold voltage model for deep submicrometer MOSFETs. IEEE Transactions on Electron Devices 40, 86–95.
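The MODFET current–voltage relations above (equations 3.65–3.73) are easy to evaluate numerically. The sketch below does so in Python; every device parameter in it is an illustrative assumption for the example, not a value from the text.

```python
# Numeric sketch of the MODFET equations 3.65-3.73.
# All device parameters below are illustrative assumptions, not values from the text.

EPS_ALGAAS = 12.2 * 8.854e-12   # AlGaAs permittivity eps_s (F/m), assumed
XD   = 30e-9      # doped AlGaAs thickness x_d (m), assumed
XUD  = 3e-9       # undoped AlGaAs spacer thickness x_ud (m), assumed
DD   = 8e-9       # 2-D gas thickness Delta-d (m), below 100 Angstrom
MU_N = 0.8        # electron mobility (m^2/V*s), assumed
W    = 50e-6      # gate width (m), assumed
L    = 1e-6       # gate length (m), assumed
VS   = 1.5e5      # saturated drift velocity v_s (m/s), assumed
VG, VT, VD = 0.6, 0.1, 0.05     # bias and threshold voltages (V), assumed

# Equation 3.66: gate capacitance per unit area
Co = EPS_ALGAAS / (XD + XUD + DD)

# Equation 3.65: linear-region drain current
ID_lin = MU_N * Co * W / L * (VG - VT) * VD

# Equations 3.69/3.70: square-law saturation current and transconductance
ID_sat = MU_N * Co * W * (VG - VT) ** 2 / (2 * L)
gm_sat = MU_N * Co * W * (VG - VT) / L

# Equations 3.72/3.73: velocity-saturated current; here gm is independent of L and VG
ID_vsat = W * Co * (VG - VT) * VS
gm_vsat = W * Co * VS

print(f"Co      = {Co:.3e} F/m^2")
print(f"ID,lin  = {ID_lin:.3e} A")
print(f"ID,sat  = {ID_sat:.3e} A (square law), {ID_vsat:.3e} A (velocity saturated)")
print(f"gm,sat  = {gm_sat:.3e} S (square law), {gm_vsat:.3e} S (velocity saturated)")
```

Note how the velocity-saturated transconductance depends only on W, Co, and vs, exactly as equation 3.73 states.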
4 Active Filters

Rolf Schaumann
Department of Electrical and Computer Engineering, Portland State University, Portland, Oregon, USA

4.1 Introduction ....................................................................................... 127
4.2 Realization Methods ............................................................................. 128
    4.2.1 Cascade Design • 4.2.2 Realization of Ladder Simulations • 4.2.3 Transconductance-C (OTA-C) Filters
References .......................................................................................... 138
4.1 Introduction

Electrical filters are circuits designed to shape the magnitude and/or phase spectrum of an input signal to generate a desired response at the output. Thus, a frequency range can be defined as the passband, where the input signal should be transmitted through the filter undistorted and unattenuated (or possibly amplified); stopbands require transmission to be blocked or the signal to be at least highly attenuated. In the frequency domain, the transmission characteristic is described by the transfer function:

T(s) = Vout/Vin = Nm(s)/Dn(s) = (bm s^m + bm−1 s^(m−1) + ⋯ + b1 s + b0)/(s^n + an−1 s^(n−1) + ⋯ + a1 s + a0). (4.1)

The nth-order function T(s) is a ratio of two polynomials in the frequency s = jω. In a passband, the transfer function should be approximately constant, typically unity (i.e., |T(jω)| ≈ 1), so that the attenuation a = −20 log|T(jω)| ≈ 0 dB; in a stopband, we need |T(jω)| ≈ 0, that is, a high attenuation. Assume, for example, that the design requirements call for the fifth-order elliptic function (Zverev, 1967):

T(s) = 9.04(s⁴ + 2,718.9s² + 1,591,230.9)/(s⁵ + 40.92s⁴ + 897.18s³ + 28,513.8s² + 400,103.6s + 3,411,458.7), (4.2)

where the frequency is normalized with respect to 1 krad/s. A sketch of the function is shown in Figure 4.1.

The denominator polynomial of T(s) in equation 4.1 can be factored to display its n roots, the poles, which are restricted to the left half of the s-plane. Assuming that n is even and keeping complex conjugate terms together, the factored expression is:

Dn(s) = s^n + an−1 s^(n−1) + ⋯ + a1 s + a0 = ∏ (i = 1 to n/2) (s² + s ω0i/Qi + ω0i²), (4.3)

where ω0i are the pole frequencies and Qi the pole quality factors. We have ω0i > 0 and Qi > 0 for stable circuits with poles in the left half of the s-plane, and the poles are complex if Qi > 1/2. If n is odd, the product in equation 4.3 contains one first-order term. Similarly, Nm(s) may be factored to give:

Nm(s) = bm s^m + bm−1 s^(m−1) + ⋯ + b1 s + b0 = ∏ (j = 1 to m/2) (k2j s² + k1j s + k0j), (4.5)

where a first-order term (i.e., k2j = 0) will appear if m is odd. The roots of Nm(s) are the transmission zeros of T(s) and may lie anywhere in the s-plane; thus, the signs of the kij are unrestricted. For example, k1j = 0 for transmission zeros on the jω axis. Assuming now m = n, the following is true:

T(s) = Nm(s)/Dn(s) = ∏ (j = 1 to m/2) (k2j s² + k1j s + k0j) / ∏ (i = 1 to n/2) (s² + s ω0i/Qi + ω0i²) = ∏ (i = 1 to n/2) (k2i s² + k1i s + k0i)/(s² + s ω0i/Qi + ω0i²) = ∏ (i = 1 to n/2) Ti(s). (4.6)

In case m < n, the numerator of equation 4.6 has (n − m)/2 factors equal to unity. The objective is now to design filter circuits that realize this function. For the fifth-order example function in equation 4.2, factoring results in three sections:

T(s) = T1(s)T2(s)T3(s) = [38.73/(s + 16.8)] · [0.498(s² + 29.2²)/(s² + 19.4s + 20.01²)] · [0.526(s² + 43.2²)/(s² + 4.72s + 22.52²)], (4.7)

where the gain constants are determined to equalize¹ the signal level throughout the filter and to realize a 12.5-dB passband gain. Obtained were two second-order low-pass functions, each with a finite transmission zero (at 29.2 kHz and at 43.2 kHz), and one first-order low-pass.

¹ Since the signal level that an op-amp can handle without distortion is finite and the circuits generate noise, dynamic range is limited. To maximize dynamic range, the first- and second-order blocks in equation 4.7 are cascaded in Figure 4.4 in the order of increasing Q values, and the gain constants are chosen such that the signal level throughout the cascade stays constant.

4.2 Realization Methods

Typically, filter applications have sharp transitions between passbands and stopbands and have complex poles close to the jω axis; that is, their quality factors, Qi, are large. The realization of such high-Q poles requires circuit impedances that change their values with frequency very rapidly. This problem has traditionally been solved with resonance: the implementation requires inductors, L, and capacitors, C, because RC circuits have poles only on the negative real axis in the s-plane, where Q ≤ 1/2. Because inductors are large and bulky, LC circuits cannot easily be miniaturized (except at the highest frequencies), so other approaches are needed for filters in modern communications and control systems. A popular and widely used solution that avoids bulky inductors makes use of the fact that complex high-Q pole pairs can be implemented by combining RC circuits with gain. Gain in active circuits is most commonly provided by the operational amplifier, the op-amp, or sometimes by the operational transconductance amplifier (OTA).

It is very easy to see that complex poles can indeed be obtained from active RC circuits. Consider an inverting lossy integrator, as shown in Figure 4.2(A), and a noninverting lossless (ideal) integrator, as shown in Figure 4.2(B). Using conductances, G = 1/R, and assuming the op-amps are ideal, the lossy and lossless integrators, respectively, realize the functions:

VB/V1 = −G/(sC + Gq) = −1/(sτ + q) and VL/VB = G/(sC) = 1/(sτ). (4.8)

FIGURE 4.2 Integrators. (A) Lossy Inverting Integrator. (B) Lossless Noninverting Integrator.

The integrator time constant is τ = RC, and the loss term is q = R/Rq. If these two integrators are connected in the two-integrator loop shown in Figure 4.3, the function realized is:

VL = −[1/(sτ + q)](KVin + VL)[1/(sτ)],

or

VL/Vin = −K/(s²τ² + sτq + 1). (4.9a)

FIGURE 4.3 Two-Integrator Loop Realizing a Second-Order Active Filter. (Note that one of the blocks is inverting so that the loop gain is negative for stability.)

If we compare the denominator of equation 4.9a with a pole-pair factor in equation 4.3, we notice that the pole frequency is implemented as ω0i = 1/τ = 1/(RC) and the pole quality factor as Qi = 1/q = Rq/R. Clearly, we may choose Rq larger than R, so that Qi > 1 and the poles are complex. In addition, a band-pass function is realized at the output VB because, according to Figure 4.3, we have VB = sτVL, which gives with equation 4.9a:

VB/Vin = −sτK/(s²τ² + sτq + 1). (4.9b)

After this demonstration that inductors are not required for high-Q filters, we need to consider next how to implement practical high-order active filters as described in equation 4.6. We shall discuss the two methods that are widely used in practice: cascade design and ladder simulation.

When analyzing the circuits in Figure 4.2 to obtain equation 4.8, we assumed ideal op-amps with infinite gain A and infinite bandwidth. The designer is cautioned to investigate very carefully the validity of such an idealized model. Filters designed with ideal op-amps will normally not function correctly except at the lowest frequencies and for moderate values of Q. A more appropriate op-amp model that is adequate for most filter applications uses the finite and frequency-dependent gain:

A(s) = ωt/(s + σ) ≈ ωt/s. (4.10)

That is, the op-amp is modeled as an integrator. The op-amp's 3-dB frequency, σ (typically less than 100 Hz), can be neglected for most filter applications, as indicated on the right-hand side of equation 4.10; ωt is the op-amp's gain-bandwidth product (greater than 1 MHz). To achieve predictable performance, the operating frequencies of active filters designed with op-amps should normally not exceed about 0.1ωt. The Ackerberg-Mossberg and GIC second-order sections discussed below have been shown to be optimally insensitive to the finite value of ωt and to behave very well even when designed with real operational amplifiers.

4.2.1 Cascade Design

In cascade design, a transfer function is factored into low-order blocks or sections as indicated in equation 4.6. The low-order blocks are then connected in a chain, or cascaded as in Figure 4.4, such that the individual transfer functions multiply:

Vout/Vin = (V1/Vin)(V2/V1)(V3/V2) ⋯ (Vout/Vn/2−1) = T1 T2 T3 ⋯ Tn/2 = ∏ (i = 1 to n/2) Ti, (4.11)

as required by equation 4.6.

FIGURE 4.4 Realizing a High-Order Filter as a Cascade Connection of Low-Order Sections.

This simple process is valid provided that the individual sections do not interact. Specifically, section Ti+1 must not load section Ti. In active filters, this is generally not a problem because the output of a filter section is taken from an op-amp output; the output impedance of an op-amp is low, ideally zero, and can therefore drive the next stage in the cascade without loading effects. A major advantage of cascade design is that transfer functions with zeros anywhere in the s-plane can be obtained; thus, arbitrary transfer functions can be implemented. Ladder simulations have somewhat lower passband sensitivities to component tolerances, but their transmission zeros are restricted to the jω axis.

To build a cascade filter, we need to identify suitable second-order (and first-order, if present) filter sections that realize the factors in equation 4.6. The first- and second-order functions, respectively, are the following:

T1(s) = (as + b)/(s + σ) (4.12a)

and

T2(s) = (k2s² + k1s + k0)/(s² + sω0/Q + ω0²). (4.12b)

In the first-order function, T1, a and b may be positive, negative, or zero, but σ must be positive. Similarly, in the second-order function, the coefficients ki can be positive, negative, or zero, depending on where T2 is to have transmission zeros. For example, T2 can realize a low-pass (k2 = k1 = 0), a high-pass (k1 = k0 = 0), a band-pass (k2 = k0 = 0), a band-rejection or "notch" filter (k1 = 0) with transmission zeros at ±j√(k0/k2), and an all-pass or delay equalizer (k2 = 1, k0 = ω0², k1 = −ω0/Q). An example of the choice of coefficients for a first-order low-pass and two second-order notch circuits was presented in equation 4.7.

First-Order Filter Sections
The function T1 in equation 4.12a is realized by the circuit in Figure 4.5(A) as:

T1(s) = V2/V1 = −(sC1 + G1)/(sC2 + G2) = −[(C1/C2)s + 1/(C2R1)]/[s + 1/(C2R2)], (4.13)

from which we can identify:

a = C1/C2, b = 1/(R1C2), σ = 1/(R2C2). (4.14)

Note, however, that the circuit is inverting and causes a (normally unimportant) phase shift of 180°. The example of the first-order low-pass in equation 4.7 is realized by the circuit in Figure 4.5(A) with C1 = 0, R2 = 2.31R1, and C2R2 = 1/(16.8 krad/s). The components can be determined once C2 is chosen (e.g., as 1 nF): we obtain R1 = 4.10 kΩ and R2 = 9.47 kΩ.

Often, a zero in the right half of the s-plane is needed (i.e., b/a < 0). For this case, the circuit in Figure 4.5(B) can be used. It realizes, with σ = 1/(RC):

T1(s) = V2/V1 = [s − (RF/R)σ]/(s + σ). (4.15)

For RF = R, the circuit in Figure 4.5(B) realizes a first-order all-pass function that changes the phase but not the magnitude of an input signal.

FIGURE 4.5 Active Circuits Realizing the Bilinear Function. (A) Realization for Equation 4.13. (B) Realization for Equation 4.15.

Second-Order Filter Sections
The literature on active filters contains a large number of second-order sections, the so-called biquads, which can be used to realize T2 in equation 4.12b. Among those, we shall present only three circuits that have proven themselves in practice because of their versatility and their low sensitivities to component tolerances and to the finite ωt values of the op-amps. The first is the four-amplifier Ackerberg-Mossberg biquad, which may be called a universal filter because it can implement any kind of second-order transfer function. Then we shall consider the more restrictive two-amplifier GIC section, which is derived from an RLC prototype and has been found to have excellent performance. Finally, we shall present the single-amplifier Delyiannis-Friend biquad, which is more sensitive than the previous two circuits but may be used in applications where considerations of cost and power consumption are critical.

The Ackerberg-Mossberg Biquad
The Ackerberg-Mossberg biquad is a direct implementation of the two-integrator loop of Figure 4.3 with the two integrators of Figure 4.2. Adding further a summer that combines all amplifier output voltages weighted by different coefficients, we obtain the circuit in Figure 4.6. Using routine analysis,² we can derive the transfer function of the circuit as:

Vout/Vin = {as² + s(ω0/Q)[a − b(kQ)] + ω0²[a − (c − d)k]}/(s² + sω0/Q + ω0²). (4.16)

The parameters a, b, c, d, and k and the quality factor Q are set by the resistors shown in Figure 4.6, and the pole frequency equals ω0 = 1/(RC).

² The op-amps are again assumed ideal. This assumption is justified because the combination of the two particular integrators in the Ackerberg-Mossberg circuit causes cancellation of most errors induced by finite ωt.

FIGURE 4.6 The Ackerberg-Mossberg Biquad with Output Summer.

If we compare the function with T2 in equation 4.12b, it becomes apparent that an arbitrary set of numerator coefficients can be realized by the Ackerberg-Mossberg circuit. For example, a notch filter with a transmission zero on the jω axis requires that a = b(kQ) in equation 4.16. Notice that the general configuration with the summer is not required for low-pass and band-pass functions: a low-pass function is obtained directly at the output VL and a band-pass at VB, as we saw in equations 4.9a and 4.9b.

As a design exercise, let us realize T2 and T3 in our example function in equation 4.7. We equate formula 4.16 with:

T2(s) = 0.498(s² + 29.2²)/(s² + 19.4s + 20.01²)

and

T3(s) = 0.526(s² + 43.2²)/(s² + 4.72s + 22.52²).

With ω02 = 20.01 (normalized) and Q2 = 20.01/19.4 = 1.03 for T2, we find by comparing coefficients that a = 0.498, b = a/(kQ) = 0.498/(1.03k), and a − (c − d)k = 1.06. Choosing for convenience k = 1 and c = 0 yields b = 0.483 and d = 0.562. The choice of C = 1 nF leads to R = 7.95 kΩ. The value r for the inverter is uncritical, as is R0; let us pick r = R0 = 5 kΩ; the remaining resistor values are then determined. Similarly, we have for T3: ω03 = 22.52 (normalized) and Q3 = 4.77. We again choose k = 1 and c = 0 to get a = 0.526, b = a/Q = 0.110, and d = 1.41. The choice of C = 1 nF results in R = 7.07 kΩ, and r = R0 = 5 kΩ settles the remaining resistor values. If these two second-order sections and the first-order block determined earlier are cascaded according to Figure 4.4, the resulting filter realizes the prescribed function of equation 4.2.

The GIC Biquad
A very successful second-order filter is based on the RLC band-pass circuit in Figure 4.7(A), which realizes:

Vout/Vin = G/[G + sC + 1/(sL)] = (sG/C)/[s² + sG/C + 1/(LC)]. (4.17)

Since inductors have to be avoided, circuits were developed that use only capacitors, resistors, and gain but whose input impedance looks inductive. The concept is shown in Figure 4.7(B), where the box labeled GIC is used to convert the resistor RL to the inductive impedance ZL. The general impedance converter (GIC) generates an input impedance ZL = sTRL, where T is a time constant. For the inductor L in the RLC circuit, we may then substitute a GIC loaded by a resistor RL = L/T. One way to develop a GIC is to construct a circuit that satisfies the following two-port equations (refer to Figure 4.7(B)):

V1 = V2, I1 = I2/(sT). (4.18)

In that case, we have:

ZL = V1/I1 = sT(V2/I2) = sTRL, (4.19)

exactly as desired. A circuit that accomplishes this feat is shown in the dashed box in Figure 4.7(C). Routine analysis yields V2 = V1 and I2 = s(C2R1R3/R4)I1 = sTI1, to give the inductive impedance:

ZL(s) = V1/I1 = sC2(R1R3/R4)RL = sTRL. (4.20)

Normally one chooses identical capacitors in the GIC band-pass filter to save costs: C = C2. Further, it can be shown that the best choice of resistor values (to make the simulated inductor optimally insensitive to the finite gain-bandwidth product, ωt, of the op-amps) is:

R1 = R3 = R4 = RL = 1/(ω0C2), (4.21)

where ω0 is a frequency critical for the application, such as the center frequency in our band-pass case of Figure 4.7 or the passband corner in a low-pass filter. The simulated inductor is then equal to L = C2R1². Finally, the design is completed by choosing R = QR1 to determine the quality factor.

A further small problem exists because the output voltage in the RLC circuit of Figure 4.7(A) is taken at the capacitor node. In the active circuit, this node (V1 in Figure 4.7(C)) is not an op-amp output and may not be loaded without disturbing the filter parameters. A solution is readily obtained. The voltage Vout in Figure 4.7(C) is evidently proportional to V1 because V2 = V1: Vout/V1 = 1 + R4/RL = 2. Therefore, we may take the filter output at Vout for a preset gain of Vout/V1 = 2. The band-pass transfer function realized by the circuit is then:

Vout/Vin = [2s/(CR)]/[s² + s/(CR) + 1/(C²R1²)] = (2sω0/Q)/(s² + sω0/Q + ω0²). (4.22)

FIGURE 4.7 (A) RLC Prototype Band-Pass Filter. (B) Inductor Simulation with a General Impedance Converter. (C) The Final GIC Band-Pass Filter with the GIC Enclosed.

Transfer functions other than a band-pass can also be obtained by generating additional inputs to the GIC band-pass kernel in Figure 4.7(C).

The Single-Amplifier Biquad
On occasion, considerations of cost and power consumption necessitate using a single op-amp per second-order section. In that case, the single-amplifier biquad (SAB) of Figure 4.8 may be used. Depending on the choice of components, the circuit can realize a variety of, but not all, transfer functions. The function realized by this circuit is:

T(s) = −(a/b){s² + s(ω0/Q0)[1 + 2Q0²K/(1 − K)] + ω0²}/{s² + s(ω0/Q0)[1 − 2Q0²K/(1 − K)] + ω0²}. (4.23)

The pole frequency equals ω0 = 1/(C√(R1R2)). To optimize performance, R2 = 9R1 and Q0 = 1.5 are chosen. The realized pole quality factor then becomes:

Q = 1.5/[1 − 4.5K/(1 − K)]. (4.24)

Evidently, Q is very sensitive to the tap position, K, of the resistor R and must be adjusted carefully. This filter is particularly useful for building low-Q gain or delay equalizers to improve the performance of a transmission channel.

FIGURE 4.8 The Delyiannis-Friend Single-Amplifier Biquad (SAB).

4.2.2 Realization of Ladder Simulations

Because of their low passband sensitivity to component tolerances, LC ladders have been found to be among the best-performing filters that can be built. Therefore, a tremendous effort has gone into designing active filters that retain the excellent performance of LC ladders without having to build and use inductors. A ladder filter consists of alternating series and shunt immittance branches that, as the name implies, are lossless and contain only inductors and capacitors. A typical structure of an LC ladder is shown in Figure 4.9. The circuit consists of five ladder arms (C1, L2∥C2, C3, L4∥C4, and C5) and source and load resistors R1 and R2. It can realize the fifth-order low-pass function of equation 4.2 with the transfer behavior sketched in Figure 4.1. The two transmission zeros at 29.2 kHz and 43.2 kHz are realized by the parallel resonance frequencies of L2∥C2 and L4∥C4.

FIGURE 4.9 Typical (Low-Pass) LC Ladder Filter: This circuit is of the form required to realize the transfer function of equation 4.2 with the attenuation curve of Figure 4.1.

The question to be addressed is how to implement such a circuit without the use of inductors. The problem is attacked in two different ways. The element-replacement method eliminates the inductors by electronic circuits whose input impedance is inductive over the relevant frequency range. We encountered this method in Figure 4.7, where the need for an inductor was avoided when L was replaced by a GIC terminated in a resistor. The method of operational simulation makes use of the fact that inductors and capacitors fundamentally perform the function of integration, 1/s. Specifically, the voltage V across an inductor generates the current I = V/(sL), and the current I through a capacitor is integrated to produce the voltage V = I/(sC). Thus, we need only employ electronic integrators, as in Figure 4.2, to arrive at an inductorless simulation of an LC ladder.

The Element-Replacement Method
In complete analogy to the GIC biquad discussed earlier, we now create the inductive impedance ZL at the input of a GIC that is loaded by a resistor R [see Figure 4.10(A)]. The GIC circuit used here is shown in Figure 4.10(B). Notice that, compared to Figure 4.7(C), we have interchanged the capacitor and the resistor in positions 2 and 4. From equation 4.20, the time constant equals T = C4R1R3/R2. This change is immaterial as far as ZL is concerned, but the selection is optimal for this case. Note that we have attached the label 1:sT to the GIC boxes to help keep track of their orientation in the following development: the sT side faces the resistive load. It was shown in equation 4.18 that the GIC implements the two-port equations for the element choice in Figure 4.10(B):

V1 = V2, I2 = sC4(R1R3/R2)I1 = sTI1. (4.25)
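As a quick numerical check of the GIC relations above (equations 4.19–4.21 and 4.25), the sketch below builds a simulated inductor from the equal-resistor design rule and verifies that its impedance is that of an ideal inductor. The design frequency and capacitor value are illustrative assumptions, not values from the text.

```python
import cmath
import math

# Illustrative check of the GIC equations; component values are assumed, not from the text.
f0 = 10e3                    # design frequency (Hz), assumed
w0 = 2 * math.pi * f0
C2 = 10e-9                   # GIC capacitor (F), assumed

# Equation 4.21: optimum equal-resistor choice
R1 = R3 = R4 = RL = 1 / (w0 * C2)

# Equation 4.20: time constant and simulated inductance
T = C2 * R1 * R3 / R4        # seconds
L = T * RL                   # equals C2 * R1**2 for the equal-resistor choice

# Equation 4.19: verify ZL(jw) = sT*RL = jwL over a range of frequencies
for f in (1e3, f0, 100e3):
    w = 2 * math.pi * f
    ZL = 1j * w * T * RL
    assert cmath.isclose(ZL, 1j * w * L)

print(f"R1 = {R1:.1f} ohm, simulated L = {L * 1e3:.3f} mH")
```

The simulated inductance scales as C2·R1², so for a given ω0 a larger capacitor allows smaller (more IC-friendly) resistors.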
Consequently, the input impedance of the circuit in Figure 4.10(A) is that of a grounded inductor L = TR:

ZL = V1/I1 = sT(V2/I2) = sTR = sL, (4.26)

and this scheme can be used to replace any grounded inductor in a filter. A floating inductor can be implemented by using two GICs, as shown in Figure 4.10(C). This is a simple extension of the circuit in Figure 4.10(A), obtained by noting that the resistor is grounded as in Figure 4.10(A) for V2 = 0 and that, according to equation 4.25, the input and output voltages of the GIC are the same; the same is true for V1 = 0. Because the voltage ΔV = V1 − V2 appears across the series resistor R, we have I = (V1 − V2)/R. Observing the orientation of the two GIC boxes, we find further I2 = I/(sT) and:

I1 = I2 = (V1 − V2)/(sTR) = (V1 − V2)/(sL). (4.27)

Evidently, this equation describes a floating inductor of value L = TR. This method affords us a simple way to implement floating inductors L by embedding resistors of value L/T between two GICs. Specifically, for the filter in Figure 4.9, we obtain the circuit in Figure 4.11. We remark that this method is relatively expensive in that it requires two GICs (four op-amps) for each floating inductor and one GIC (two op-amps) for each grounded inductor.

FIGURE 4.10 Circuits for the Element-Replacement Method. (A) Grounded inductor simulated by a GIC loaded with a resistor R. (B) The GIC circuit. (C) Floating inductor implemented with two GICs. (D) Resistive network "R" embedded in GICs.

FIGURE 4.11 Inductors in the LC Ladder of Figure 4.9: The inductors are simulated by resistors of value Li/T, i = 2, 4, which are embedded between two GICs.

A simplification is obtained by the circuit illustrated in Figure 4.10(D), which shows a resistive network R with a 1:sT GIC inserted in each input lead. The network R is described by the equation:

V2k = R I2k, (4.28)

where V2k and I2k are the vectors of the input voltages and currents, respectively, identified in Figure 4.10(D), and R is the resistance matrix describing the network and linking the two vectors. By equation 4.25, we have from the GICs the relationships V1k = V2k and I2k = sT I1k, so that with equation 4.28 we obtain:

V1k = V2k = R I2k = R sT I1k = sT R I1k, (4.29)

or

V1k = sT R I1k = sL I1k. (4.30)

This equation implies that a resistive network embedded in GICs appears like an inductive network of the same topology with the inductance matrix L = T R; that is, each inductor L is replaced by a resistor of value L/T. This procedure is the Gorski-Popiel method. Its significance is that it permits complete inductive subnetworks to be replaced by resistive subnetworks embedded in GICs, rather than requiring each inductor to be treated separately. Depending on the topology of the LC ladder, Gorski-Popiel's method may save a considerable number of GICs, resulting in reduced cost, power consumption, and noise. The simulation of the band-pass ladder in Figure 4.12(A) provides an example of the efficiencies afforded by Gorski-Popiel's procedure. Converting each inductor individually would require six GICs, whereas first identifying the inductive subnetwork consisting of all four inductors leads to only three GICs for the active simulation of the ladder shown in Figure 4.12(B).

FIGURE 4.12 (A) LC Band-Pass Ladder. (B) Realization by Gorski-Popiel's Element-Replacement Method.

Operational Simulation or Signal-Flow Graph Implementation of LC Ladders
As mentioned earlier, in the SFG simulation of an LC ladder filter, each inductor and capacitor is interpreted as a signal-flow integrator that can be implemented by an RC/op-amp circuit. The method is completely general in that it permits arbitrary ladder arms to be realized, but in this chapter we shall illustrate the procedure only on low-pass ladders. Consider the general section of a ladder shown in Figure 4.13(A). It can be represented by the equations:

V1 = (I0 − I2)/Y1, V3 = (I2 − I4)/Y3, V5 = (I4 − I6)/Y5, (4.31a)

I2 = (V1 − V3)/Z2, I4 = (V3 − V5)/Z4. (4.31b)

For example, in all-pole low-pass filters, all branches Yi and Zj are single capacitors and inductors, respectively, so that the five expressions in equations 4.31a and 4.31b represent integrations. Generally, however, Yi and Zj may be arbitrary LC immittances. Equations 4.31a and 4.31b indicate that differences of voltages and currents need to be formed, which requires more complicated circuitry than simple summing. To achieve operation with only summers, we recast equations 4.31a and 4.31b as follows by introducing a number of multiplications by (−1):

V1 = [I0 + (−I2)]/Y1, (−V3) = [(−I2) + I4]/Y3, V5 = [I4 + (−I6)]/Y5, (4.32a)

(−I2) = −[V1 + (−V3)]/Z2, I4 = −[(−V3) + V5]/Z4. (4.32b)

Figure 4.13(B) shows the resulting block diagram. To be able to sum only voltages, we next convert the currents in equations 4.32a and 4.32b into voltages. This step is accomplished by multiplying all currents by a scaling resistor R and, simultaneously, multiplying the admittances by R to obtain so-called "transmittances" tY = 1/(RY) or tZ = R/Z, as shown in equation 4.33:

V1 = [RI0 + (−RI2)]/(RY1) ⇒ v1 = tY1[vI0 + (−vI2)],
(−RI2) = −[V1 + (−V3)]/(Z2/R) ⇒ (−vI2) = −tZ2[v1 + (−v3)],
(−V3) = [(−RI2) + RI4]/(RY3) ⇒ (−v3) = tY3[(−vI2) + vI4],
RI4 = −[(−V3) + V5]/(Z4/R) ⇒ vI4 = −tZ4[(−v3) + v5],
V5 = [RI4 + (−RI6)]/(RY5) ⇒ v5 = tY5[vI4 + (−vI6)]. (4.33)

FIGURE 4.13 (A) Section of a General Ladder. (B) Signal-Flow Graph Representation for Equation 4.32. (C) Signal-Flow Graph Representation for Equation 4.33.
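The equivalence between the ladder equations 4.31 and their scaled signal-flow-graph form 4.33 can be verified numerically. The sketch below solves both sets of equations for a small all-pole section at one frequency; the branch values and the boundary currents I0 and I6 are illustrative assumptions, not values from the text.

```python
# Numerical cross-check: the scaled SFG equations 4.33 reproduce the ladder
# equations 4.31 exactly. Branch values and boundary currents are assumed
# for illustration, not taken from the text.
import cmath

def solve(A, b):
    """Gaussian elimination with partial pivoting for small complex systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

s = 2j * cmath.pi * 1e3                  # evaluate at 1 kHz
c1, l2, c3, l4, c5 = 20e-9, 30e-3, 20e-9, 30e-3, 20e-9   # assumed branch values
Y1, Y3, Y5 = s * c1, s * c3, s * c5      # shunt admittances
Z2, Z4 = s * l2, s * l4                  # series impedances
I0, I6 = 1e-3, 0.2e-3                    # assumed boundary currents (A)

# Equations 4.31, unknowns [V1, I2, V3, I4, V5]:
A = [[Y1, 1, 0, 0, 0],                   # Y1*V1 + I2 = I0
     [1, -Z2, -1, 0, 0],                 # I2 = (V1 - V3)/Z2
     [0, -1, Y3, 1, 0],                  # Y3*V3 = I2 - I4
     [0, 0, 1, -Z4, -1],                 # I4 = (V3 - V5)/Z4
     [0, 0, 0, -1, Y5]]                  # Y5*V5 = I4 - I6
V1, I2, V3, I4, V5 = solve(A, [I0, 0, 0, 0, -I6])

# Equations 4.33 after scaling by R, unknowns [v1, -vI2, -v3, vI4, v5]:
R = 1e3
tY1, tY3, tY5 = 1 / (R * Y1), 1 / (R * Y3), 1 / (R * Y5)
tZ2, tZ4 = R / Z2, R / Z4
As = [[1, -tY1, 0, 0, 0],                # v1 = tY1*(vI0 + (-vI2)), vI0 = R*I0
      [tZ2, 1, tZ2, 0, 0],               # (-vI2) = -tZ2*(v1 + (-v3))
      [0, -tY3, 1, -tY3, 0],             # (-v3) = tY3*((-vI2) + vI4)
      [0, 0, tZ4, 1, tZ4],               # vI4 = -tZ4*((-v3) + v5)
      [0, 0, 0, -tY5, 1]]                # v5 = tY5*(vI4 + (-vI6)), -vI6 = -R*I6
v1, nvI2, nv3, vI4, v5 = solve(As, [tY1 * R * I0, 0, 0, 0, -tY5 * R * I6])

# SFG signals equal the ladder voltages and R-scaled currents:
assert cmath.isclose(v1, V1, rel_tol=1e-6) and cmath.isclose(v5, V5, rel_tol=1e-6)
assert cmath.isclose(-nv3, V3, rel_tol=1e-6)
assert cmath.isclose(-nvI2, R * I2, rel_tol=1e-6) and cmath.isclose(vI4, R * I4, rel_tol=1e-6)
print("SFG signals match the ladder solution.")
```

Because the second system is just the first one scaled by R, the SFG signals agree with the ladder voltages and scaled currents to within floating-point error, which is the point of the operational-simulation method.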
4 Active Filters
135
For consistency, we have labeled all voltages, the signals of the SFG implementation, by lowercase symbols and have used the subscripts I, Z, and Y to be reminded of the origin of the dimensionless signals, v, and transmittances, t. Observe that all equations have the same format, vk ¼ tk (vk1 þ vkþ1 ), that the signs of all signals are consistent, and (for low-pass ladders) that all t Z transmittances are inverting and the t Y transmittances are noninverting integrators: t Y ¼ 1=(RY ) ¼ 1=(sRC), t Z ¼ R=Z ¼ R=(sL). Note also that this process may lead to a normally unimportant sign inversion in the realization. Equations 4.33 are represented by the signal-flow graph in Figure 4.13(C) (Martin and Sedra, 1978). Also observe that inverting and noninverting integrators alternate so that all loops have negative feedback (the loop-gain is negative) and are stable. In low-pass filters, the boxes, together with the summing junctions, form summing integrators that now have to be implemented. The integrators can be obtained from the circuits n Figure 4.2 by adding a second input. As drawn in Figure 4.13(C), the inverting integrators point upward and the noninverting integrators downward. We used this orientation for the circuits in Figure 4.14. Apart from the sign, both circuits realize the same two-input lossy integrator functions, V2 ¼
a1 V11 þ a2 V12 , st þ q
(4:34)
where the minus sign is for the inverting integrator in Figure 4.14(A), and the plus sign for the noninverting integrator in Figure 4.14(B). We used a scaling resistor Ra to define t ¼ CA Ra , q ¼ Ra =Rq , the gain constants a1 ¼ Ra =R1 , and a2 ¼ Ra =R2 that multiply the two input voltages V11 and V12 . These gain constants can be adjusted to maximize the dynamic range of the active filter, but the treatment goes beyond the scope of this chapter. We will for simplicity set a1 ¼ a2 ¼ 1 by selecting Ra ¼ R1 ¼ R2 ¼ R to get t ¼ CA R
and q ¼ R=Rq . Remember still that the LC ladder is lossless. All internal integrators are, therefore, lossless (i.e. q ¼ 0, Rq ¼ 1). A finite value for Rq is needed only for the first and last ladder arms where Rq serves to realize source and load resistors. As an example for the signal-flow graph technique, consider the low-pass ladder in Figure 4.15(A). The component values are RS ¼ RL ¼ 1kV, C1 ¼ C3 ¼ 20 nF, and L2 ¼ 30 mH. The filter has a flat passband; the 1-dB passband corner is at 9.6 kHz. The first step in the realization is a Norton source transform to convert the voltage source into a current source, so that the source resistor is placed in parallel to C1 . This transformation removes one mesh from the circuit and simplifies the design as in Figure 4.15(B). Let us write the describing equations to understand how the components in the active circuit are derived from the ladder. Choosing as the scaling resistor RS, we obtain RS (Vin =RS ) þ ( IL )RS , RS GS þ sC1 RS V1 þ ( V2 ) IL RS IL RS ¼ , V2 ¼ : sL2 =RS RS GL þ sC3 RS
V1 ¼
These equations need to be compared with those of the integrators, equation 4.34, to determine the element values. Since C1 ¼ C3 and RS ¼ RL , the first and last lossy integrators are the same. The comparison yields C1 RS ¼ C3 RS ¼ t ¼ CA Ra , q1 ¼ q3 ¼ GS =RS ¼ 1, C2 RS ¼ t2 ¼ L2 =RS , and q2 ¼ 0. If we select3 Ra ¼ RS ¼ 1 kV, we have C1 ¼ C3 ¼ 20 nF and C2 ¼ L2 =RS2 ¼ 30 nF. The value of r is unimportant; let us choose r ¼ 1 kV so that all resistors have the same value. The final circuit is shown in Figure 4.15(C). Notice that each loop combines the inverting and noninverting integrators, which we used earlier to construct the Ackerberg-Mossberg RS V1
FIGURE 4.14 (A) Inverting Lossy Summing Integrator. (B) Noninverting Lossy Summing Integrator.

FIGURE 4.15 Realizing a Low-Pass Ladder by the Signal-Flow Graph Technique. (A) The original ladder. (B) Source transformation of the ladder. (C) Final active circuit with R = 1 kΩ, C_1 = C_3 = 20 nF, C_2 = L_2/R² = 30 nF.

³Alternatively, we could choose different resistor values and arrive at identical capacitors.
Rolf Schaumann
biquad (see Figure 4.6 without the summer). As a result, we conclude that the performance of this simulated ladder can be expected to be similarly insensitive to the op-amps' finite gain-bandwidth products. Using 741-type op-amps with f_t ≈ 1.5 MHz, the performance of the active circuit is acceptable to over 300 kHz, and the performance of the two circuits in Figures 4.15(A) and 4.15(C) is experimentally indistinguishable until about 90 kHz, when the op-amp effects begin to be noticeable. As a final comment, observe that the output V_out in the active realization is inverting. If this phase shift of 180° is not acceptable, the output may be taken at the node labeled V_2.
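As a quick numerical check of the element-value mapping derived above for the ladder of Figure 4.15, the following sketch (not part of the original text) recomputes the integrator values from the ladder components:

```python
# Sketch: verify the element values for the signal-flow-graph realization
# of the low-pass ladder (RS = RL = 1 kOhm, C1 = C3 = 20 nF, L2 = 30 mH),
# with the scaling resistor chosen as Ra = RS.
R_S = 1e3        # source/load resistance, ohms
C1 = C3 = 20e-9  # ladder shunt capacitors, farads
L2 = 30e-3       # ladder series inductor, henries

tau1 = C1 * R_S   # time constant of the first lossy integrator, seconds
tau2 = L2 / R_S   # time constant replacing the series inductor, seconds
C2 = tau2 / R_S   # integrator capacitor realizing L2, i.e., L2/RS**2

assert abs(C2 - 30e-9) < 1e-15   # 30 nF, as stated in the text
```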
4.2.3 Transconductance-C (OTA-C) Filters
The finite value of the op-amp's gain-bandwidth product ω_t causes the frequency range of the active filters discussed so far to be limited to approximately 0.1·ω_t. Using inexpensive op-amps in which f_t is of the order of 1.5 to 3 MHz is not sufficient because the frequency range is too low for most communications applications. Although relatively economical wideband amplifiers are available with f_t values up to about 100 MHz, a different solution is available that is especially attractive for filter circuits compatible with integrated-circuit (IC) technology. For these cases, gain is obtained not from operational voltage amplifiers but from operational transconductance amplifiers, also labeled OTAs or transconductors. These circuits are voltage-to-current converters characterized by their transconductance parameter g_m, which relates the output current to the (normally differential) input voltage:

I_out = g_m·(V_in+ − V_in−). (4.35)
Simple transconductors with bandwidths in the GHz range have been designed in standard CMOS technology (Szczepanski et al., 1997); in turn, they permit communication filters to be designed with operating frequencies in the hundreds of megahertz. There are several additional reasons for the popularity of transconductors in filter design. Among them are the small OTA circuitry, the particularly easy filter design techniques that normally permit all OTA cells to be identical, and the compatibility with digital (CMOS) integrated-circuit technology: "OTA-C" filters use no resistors. A small-signal model of the OTA is shown in Figure 4.16(A), including parasitic input and output capacitors (of the order of 0.01 to 0.05 pF) and the finite output resistance (of the order of 0.1 to 2 MΩ). We have shown the lower output terminal grounded because, for simplicity, we shall assume for this discussion that designs are single-ended. In practice, IC analog circuitry is designed in differential form for better noise immunity, linearity, and dynamic range; conversion to differential circuitry is straightforward. The customary circuit symbol for the OTA is shown in Figure 4.16(B).
FIGURE 4.16 (A) Small-Signal Model of a Transconductor: I_out = g_m·(V_in+ − V_in−). (B) Circuit Symbol.
Apart from minor differences having to do largely with IC implementation, active filter design with OTAs is the same as filter design with op-amps. As before, we have available the cascade method with first- and second-order sections, or we can use ladder simulations. For the latter, it can be shown that element replacement and the signal-flow graph technique lead to identical OTA-C filter structures, so only one of the two methods needs to be addressed.

Cascade Design
The only difference between this cascade design method and the one presented earlier (in Section 4.2.1) is the use of only OTAs and capacitors. In this case, the filter sections do not normally have low output impedance, but the input impedance⁴ is as a rule very large, so cascading remains possible. General first- and second-order sections are shown in Figure 4.17. The circuit in Figure 4.17(A) realizes the function:

T_1(s) = V_out/V_in = (s·aC + g_m1)/(sC + g_m2), (4.36)

and the second-order section in Figure 4.17(B) implements:

T_2(s) = V_out/V_in = [s²·bC_1C_2 + s·(bC_2·g_m2 − aC_1·g_m3) + g_m1·g_m3] / [s²·C_1C_2 + s·C_2·g_m2 + g_m3·g_m4]. (4.37)
Normally, one selects g_m3 = g_m4 = g_m; the pole frequency and pole quality factor are then:

ω_0 = g_m/√(C_1·C_2), Q = ω_0·C_1/g_m2 = √(C_1/C_2)·(g_m/g_m2). (4.38)
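The relations of equation 4.38 can be evaluated numerically. The following sketch uses assumed component values (not from the text) to illustrate how g_m and the capacitor ratio set ω_0 and Q:

```python
import math

# Sketch (values assumed for illustration): pole frequency and quality
# factor of the second-order OTA-C section per equation 4.38.
gm = 100e-6    # gm3 = gm4 = gm, siemens
gm2 = 20e-6    # damping transconductance, siemens
C1 = 2e-12     # farads
C2 = 2e-12     # farads

w0 = gm / math.sqrt(C1 * C2)         # pole frequency, rad/s
Q = math.sqrt(C1 / C2) * gm / gm2    # dimensionless quality factor

f0 = w0 / (2 * math.pi)
print(f0 / 1e6, Q)   # f0 ≈ 7.96 MHz, Q ≈ 5 for these assumed values
```

Because Q depends only on ratios of like quantities (a capacitor ratio and a transconductance ratio), it is insensitive to absolute process variations, which is the point made in the text below.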
Since ω_0 is proportional to g_m, which in turn depends on a bias current or voltage, the pole frequency can be tuned by varying the circuit's bias. Automatic electronic tuning techniques have been developed for this purpose. In addition, note that Q, as a dimensionless parameter, is set by ratios of like components

⁴The input of an OTA-C filter is normally a parasitic capacitor (≈ 0.05 pF) of a CMOS transistor. Thus, even at 100 MHz, the input impedance is still larger than 30 kΩ. Furthermore, as the topology of the first- and second-order circuits in Figure 4.17 indicates, these input capacitors can be absorbed in the circuit capacitors of the previous filter section.
FIGURE 4.17 General OTA-C Filter Sections. (A) First-Order. (B) Second-Order.
and, therefore, is designable quite accurately. Difficulties arise only for large values of Q, where parasitic effects can cause large deviations and tuning must be used. Evidently, a variety of transfer functions can be realized by choice of the components, the coefficients a and b, and the values and signs of the transconductances g_mi. For instance, comparing equation 4.37 with T_2 in equation 4.12 shows that the second-order functions can be realized with arbitrary coefficients. Negative values of g_mi are obtained by simply inverting the polarity of their input connections; this must be done with caution, of course, to make certain that the transfer function's denominator coefficients stay positive.
An interesting observation can be made about the two circuits in Figure 4.17. Notice that all internal nodes of the circuits are connected to a circuit capacitor. This means that the input and output capacitors of all transconductance cells (see Figure 4.16(A)) can be absorbed in a circuit capacitor and do not generate parasitic poles or zeros, which could jeopardize the transfer function. The designer must, however, "predistort" the capacitors by reducing their nominal design values by the sum of all parasitic capacitors appearing at the capacitor nodes. Thus, if a high-frequency design calls for C = 1.1 pF and the sum of all relevant parasitics equals 0.2 pF, for example, the circuit should be designed with C = (1.1 − 0.2) pF = 0.9 pF. The unavoidable parasitics will restore the effective capacitor to its nominal value of 1.1 pF.
An additional interesting observation relates to the structure of the second-order circuit in Figure 4.17(B). We have labeled the circuit in the dashed box a gyrator. A gyrator is a circuit whose input impedance is proportional to the reciprocal of the load impedance. Figure 4.18(A) shows the situation for a load capacitor. From this circuit, we derive that the input impedance equals:

Z_in = V_1/I_1 = (1/g_m²)·sC = s·(C/g_m²) ⇒ sL.
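As a small numerical illustration (with assumed values, not from the text), the capacitor-loaded gyrator indeed presents an inductive input impedance:

```python
import math

# Sketch (assumed values): a capacitor-loaded gyrator simulates a grounded
# inductor of value L = C/gm**2, so |Z_in| = omega*C/gm**2 = omega*L.
gm = 100e-6    # transconductance of both OTAs, siemens (assumed)
C = 10e-12     # load capacitor, farads (assumed)

L = C / gm**2                   # simulated inductance: 1e-3 H = 1 mH
omega = 2 * math.pi * 1e6       # evaluate at 1 MHz
Zin_mag = omega * C / gm**2     # magnitude of Z_in = sC/gm**2

# The gyrator input looks inductive: |Z_in| equals omega*L.
assert abs(Zin_mag - omega * L) < 1e-9
```

Note how a small on-chip capacitor (10 pF) and a modest g_m yield a millihenry-scale inductance, which would be impractical to integrate directly.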
FIGURE 4.18 Capacitor-Loaded Gyrators and their Symbols. (A) Grounded Inductor. (B) Floating Inductor.
(4.39)

That is, it realizes an inductor of value L = C/g_m². Consequently, we recognize that the second-order OTA-C circuit is derived from the passive RLC band-pass stage in Figure 4.7(A) in the same way as the GIC filter in Figure 4.7(C). Assume⁵ for the following discussion that a = b = 0 in the circuit in Figure 4.17(B). We observe then that g_m1 converts the input voltage V_in into a current that flows through C_1 and the resistor⁶ 1/g_m2. In parallel with the capacitor C_1 is the inductor L = C_2/(g_m3·g_m4). The parallel RLC connection of 1/g_m2, C_1, and L = C_2/(g_m3·g_m4) is driven by the current g_m1·V_in. The OTA-based grounded inductor can be used in any filter application in the same way as the GIC-based inductor of Figure 4.10(A), except that the OTA-based circuit can be employed at much higher frequencies.

Ladder Simulation
The analogy of the present treatment with the one summarized in Figure 4.10 should lead us to suspect that floating inductors can be simulated as well by appropriate gyrator connections. Indeed, we only need to connect two gyrators with identical g_m values to a grounded capacitor, as shown in Figure 4.18(B), to obtain a floating inductor of value L = C/g_m². For example,
⁵The fractions a and b of the capacitors C_1 and C_2 are connected to the input to generate a general numerator polynomial from the band-pass core circuit. The method is analogous to the one used in Figure 4.8 for the resistors R_1 and KR.
⁶It is easy to show that a transconductor g_m with its output current fed back to the inverting input terminal acts like a resistor 1/g_m.
FIGURE 4.19 OTA-C Realization of the LC Ladder in Figure 4.9. Note the similarity with the circuit in Figure 4.11. Each gyrator symbol represents the circuit enclosed in the dashed box in Figure 4.17(B).
using this approach on the ladder in Figure 4.9 results in the circuit in Figure 4.19, in complete analogy with the GIC circuit in Figure 4.11. Notice that all g_m cells in the simulated ladder are the same, a convenience for matching and design automation. The circuits in Figures 4.11 and 4.19 have identical performance in all respects, except that the useful frequency range of the OTA-based design is much larger. Even when using fast amplifiers with, say, f_t ≈ 100 MHz, the useful operating frequencies of the GIC filter in Figure 4.11 will be less than about 10 MHz, whereas it is not difficult to achieve operation at several hundred megahertz with the OTA-based circuit. A signal-flow graph method need not be discussed here because, as we stated, the resulting circuitry is identical to the one obtained with the element replacement method.
The treatment of transconductance-C filters in this chapter has necessarily been brief and sketchy. Many topics, in particular the important issue of automatic tuning, could not be addressed for lack of space. The reader interested in this important modern signal processing topic is referred to the literature in the References section for further details.
References
Martin, K., and Sedra, A.S. (1978). Design of signal-flow-graph (SFG) active filters. IEEE Transactions on Circuits and Systems 25, 185-195.
Schaumann, R. (1998). Simulating lossless ladders with transconductance-C circuits. IEEE Transactions on Circuits and Systems II 45(3), 407-410.
Schaumann, R., Ghausi, M.S., and Laker, K.R. (1990). Design of Analog Filters: Passive, Active RC, and Switched Capacitor. Englewood Cliffs, NJ: Prentice-Hall.
Szczepanski, S., Jakusz, J., and Schaumann, R. (1997). A linear fully balanced CMOS OTA for VHF filtering applications. IEEE Transactions on Circuits and Systems II 44, 174-187.
Schaumann, R., and Valkenburg, M.V. (2001). Modern Active Filter Design. New York: Oxford University Press.
Tsividis, Y., and Voorman, J.A. (Eds.). (1993). Integrated Continuous-Time Filters: Principles, Design, and Implementations. IEEE Press.
Zverev, A. (1967). Handbook of Filter Synthesis. New York: John Wiley & Sons.
5 Junction Diodes and Bipolar Junction Transistors
Michael Schröter
Institute for Electro Technology and Electronics Fundamentals, University of Technology, Dresden, Germany

5.1 Junction Diodes ................................................................................... 139
    5.1.1 Basic Equations . 5.1.2 Equivalent Circuit
5.2 Bipolar Junction Transistor .................................................................... 142
    5.2.1 Basic Equations . 5.2.2 Internal Transistor and Base Resistance . 5.2.3 Emitter Periphery Effects . 5.2.4 Extrinsic Transistor Regions . 5.2.5 Other Effects . 5.2.6 Equivalent Circuits . 5.2.7 Some Typical Characteristics and Figures of Merit . 5.2.8 Other Types of Bipolar Transistors
References .......................................................................................... 151
5.1 Junction Diodes
Junction diodes are used as switches, demodulators, rectifiers, limiters, (variable) capacitors, nonlinear resistors, and level shifters, and for frequency generation. Special diode types exist that are optimized for each particular application. The cross-section of a discrete diode is shown in Figure 5.1(A); diode structures in integrated circuits also contain a parasitic junction.
5.1.1 Basic Equations
The pn-junction and pn-diode can be treated theoretically by applying the basic semiconductor equations (Sze, 1981; Selberherr, 1984), consisting of Poisson's equation as well as the continuity equations for electrons and holes; these are complemented by material equations for carrier transport, recombination, and generation. Sufficiently simple analytical equations describing the direct current (dc) and small-signal high-frequency terminal behavior are then obtained by simplifications, such as the regional approach (Schilling, 1969). The latter corresponds to a partitioning of the device structure into space-charge regions (SCR) and neutral regions (NR), in which only a subset of the basic equations needs to be solved. V and V_j are the applied dc terminal voltage and the dc voltage across the junction, respectively; a (possible) time dependence is indicated by lowercase letters (i.e., v and v_j).
For the sake of clarity, the regional approach is applied here to the one-dimensional (1-D) structure of a pn-junction, assuming an abrupt doping profile as shown in Figure 5.1(B), which also contains the respective dimensions used in the subsequent analysis. The lateral (y-) dimensions of the junction define the diode area A shown in Figure 5.1(A).

Space Charge Region
Contacting a p- and an n-doped region leads to a large gradient of the carrier densities at the junction. The result is a diffusion current across the junction and an electric field in the opposite direction, due to the ionized doping atoms left behind, which form the SCR. Therefore, in thermal equilibrium (V = 0), the corresponding drift current equals the diffusion current, resulting in zero net current through the junction. The resulting electric field leads to a built-in voltage:

V_D = V_T·ln(N_D·N_A/n_i0²), (5.1)

with V_T as the thermal voltage and n_i0 as the intrinsic carrier density. An applied voltage across the junction (positive in the direction defined in Figure 5.1) either decreases (V > 0) or increases (V < 0) the internal junction voltage, which is then given by superposition as V_D − V_j (with V_j = V here).
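Equation 5.1 is easy to evaluate; the following sketch (with assumed room-temperature silicon values, not from the text) computes a typical built-in voltage:

```python
import math

# Sketch (assumed Si values at ~300 K): built-in voltage of an abrupt
# pn-junction per equation 5.1.
VT = 0.02585    # thermal voltage kT/q, volts
ni0 = 1e10      # intrinsic carrier density of Si, cm^-3 (approximate)
NA = 1e17       # acceptor doping, cm^-3 (assumed)
ND = 1e15       # donor doping, cm^-3 (assumed)

VD = VT * math.log(ND * NA / ni0**2)
print(round(VD, 3))   # ≈ 0.714 V for these assumed dopings
```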
FIGURE 5.1 Junction Diodes. (A) This figure shows a cross-section of a discrete pn-diode with a guard-ring for preventing perimeter breakdown. (B) Illustrated here is a schematic doping profile (assuming an abrupt junction) along a one-dimensional (1-D) cut through the center with definitions of neutral region (NR), space-charge region (SCR), and the respective dimensions.
Important properties of the junction can be calculated from solving Poisson's equation in the SCR (Lindmayer and Wrigley, 1965; Sze, 1981). From this and overall space-charge neutrality, one obtains the width of the SCR:

w = √[(2ε/q)·(N_A + N_D)/(N_A·N_D)·(V_D − V_j)] = w_0·√(1 − V_j/V_D), (5.2)

and the maximum electric field, which determines the breakdown characteristics of a diode:

E_m = √[(2q/ε)·(N_A·N_D)/(N_A + N_D)·(V_D − V_j)]. (5.3)

At a forward bias that is not too high (V_j < 0.7·V_D), the depletion capacitance can be calculated by considering the SCR as two plates with opposite charges located a distance w apart:

C_j(V) = εA/w = A·√[(qε/2)·(N_A·N_D)/(N_A + N_D)·1/(V_D − V_j)] = C_j0/(1 − V_j/V_D)^z, (5.4)

with z = 0.5 in this case of an abrupt doping profile and with C_j0 as the zero-bias capacitance. For the often occurring case of highly nonsymmetrical profiles (e.g., N_A ≫ N_D), w_0 = √[2εV_D/(qN_D)] and C_j0 = A·√[εqN_D/(2V_D)]. The corresponding depletion charge can be described by:

Q_j(V) = ∫₀^{V_j} C_j dV = [C_j0·V_D/(1 − z)]·[1 − (1 − V_j/V_D)^{1−z}]. (5.5)

FIGURE 5.2 Normalized Depletion Capacitance Characteristics. (A) Forward Bias. (B) Reverse Bias.

At high forward bias, the depletion charge eventually becomes completely neutralized and disappears, leading to a drop of C_j(V_j); a typical curve is shown in Figure 5.2(A),
containing the numerical solution of Poisson's equation as reference. At high reverse bias, the SCR of the lightly doped side of the junction extends to the highly doped access region; this punchthrough effect results in an (almost) constant depletion capacitance C_j,PT = εA/W_n (for N_A ≫ N_D) and is easily visible in the log-plot shown in Figure 5.2(B); V_PT = qN_D·W_n²/(2ε) is the punchthrough voltage.

Current-Voltage Relationship (Direct Current)
Consider, for instance, the NR of the n-side of the junction. With applied bias V > 0, holes are injected into the NR at x_n. Using the law of mass action and the neutrality condition (for low injection, i.e., V_j = V), the injected excess minority (hole) carrier density is Δp_n(x_n) = p_n0·[exp(V_j/V_T) − 1], with p_n0 as the equilibrium carrier density. Combining the continuity and transport equations of holes yields a second-order differential equation for calculating the spatial dependence of Δp_n (Lindmayer and Wrigley, 1965; Sze, 1981). Under the assumption of a finite width w_n (= W_n − x_n) of the NR, the solution is:

Δp_n = p_n0·[exp(V_j/V_T) − 1]·sinh[(W_n − x)/L_p] / sinh(w_n/L_p). (5.6)
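Equations 5.2 and 5.4 above can be checked numerically. The following sketch (all device values assumed, not from the text) evaluates the zero-bias SCR width and depletion capacitance for a one-sided abrupt silicon junction:

```python
import math

# Sketch (assumed one-sided abrupt Si junction, NA >> ND): zero-bias SCR
# width and depletion capacitance from equations 5.2 and 5.4.
q = 1.602e-19     # elementary charge, C
eps = 1.04e-12    # permittivity of silicon, F/cm (approx. 11.7 * 8.85e-14)
ND = 1e16         # doping of the lightly doped n-side, cm^-3 (assumed)
VD = 0.8          # built-in voltage, V (assumed)
A = 1e-4          # junction area, cm^2 (assumed)

w0 = math.sqrt(2 * eps * VD / (q * ND))        # zero-bias SCR width, cm
Cj0 = A * math.sqrt(eps * q * ND / (2 * VD))   # zero-bias capacitance, F

# Consistency check: Cj0 should equal the plate-capacitor value eps*A/w0.
assert abs(Cj0 - eps * A / w0) / Cj0 < 1e-12

# Reverse bias widens the SCR and reduces Cj (z = 0.5 for abrupt profile):
Vj = -5.0
Cj = Cj0 / (1 - Vj / VD) ** 0.5
assert Cj < Cj0
```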
Because there is no electric field in the considered case of abrupt doping profiles, the hole current density equals the diffusion component; the current density injected at x_n is then given by:

J_p(x_n) = −qD_p·(dp/dx)|_{x_n} = (qD_p·p_n0/L_p)·coth(w_n/L_p)·[exp(V_j/V_T) − 1]. (5.7)
A similar expression can be derived for the electron component. Combination yields the well-known diode equation:

I_d = I_S·[exp(V_j/(m·V_T)) − 1], (5.8)
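The diode equation 5.8 is simple enough to sketch directly; the parameter values below are assumed for illustration and are not from the text:

```python
import math

# Sketch (assumed parameter values): dc diode current per equation 5.8.
IS = 1e-14      # saturation current, A (assumed)
m = 1.0         # ideality factor
VT = 0.02585    # thermal voltage, V

def diode_current(Vj):
    """Ideal diode equation: Id = IS*(exp(Vj/(m*VT)) - 1)."""
    return IS * (math.exp(Vj / (m * VT)) - 1.0)

print(diode_current(0.6))    # roughly 0.12 mA for these parameters
print(diode_current(-1.0))   # saturates near -IS in reverse bias
```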
FIGURE 5.3 Typical Current-Voltage Relationship. V is the terminal voltage applied from anode to cathode.
with an ideality factor m (= 1 here) and the saturation current:

I_S = qA·[(D_p·p_n0/L_p)·coth(w_n/L_p) + (D_n·n_p0/L_n)·coth(w_p/L_n)] (5.9)

as parameters. For real devices, m is usually slightly larger than one due to recombination loss and nonabrupt doping profiles, for example. A typical characteristic is shown in Figure 5.3, where the various bias regions are indicated. At very low forward bias, recombination dominates and results in m = 2. At high current densities and high injection, respectively, deviations from the ideal I-V behavior of equation 5.8 occur because of conductivity modulation, resulting in m = 2, as well as because of series resistances (see later sections in this chapter). At a certain reverse bias V < 0, the current starts to increase again due to avalanche breakdown at lower doping concentrations or tunneling at higher doping concentrations (e.g., Zener diodes) (Sze, 1981).

Diffusion Charge and Capacitance
Minority carriers injected into a NR result in a diffusion charge Q_d and an associated diffusion capacitance C_d. These are strong functions of bias and can become much larger than Q_j and C_j, respectively, at high forward bias. Combining the continuity and transport equations, assuming the small-signal case, with transformation into the frequency domain (∂/∂t → jω) again yields a second-order differential equation that enables the calculation of the frequency- and spatially dependent injected excess carrier densities as well as the corresponding injected small-signal terminal current densities. Adding the hole and electron current components yields the total small-signal terminal current I_d(ω). Division by the applied small-signal voltage V(ω) gives the small-signal admittance Y_d(ω) = G_d + jωC_d of the pn-diode. For the most interesting case of a strongly nonsymmetrical junction (e.g., N_A ≫ N_D) with a short NR width w_n and frequencies that are not too high (i.e., |ωτ_n| ≪ 1), one obtains for the case of low injection (Lindmayer and Wrigley, 1965), assuming V > 4V_T, with the dc bias current I_d:

G_d = I_d/(m·V_T) and C_d = (w_n²/(3D_p))·G_d. (5.10)
The diffusion charge is Q_d = τ_d·I_d if the storage time τ_d = w_n²/(3D_p) is current independent.
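A short numerical sketch of equation 5.10 (all values assumed for illustration, not from the text):

```python
# Sketch (assumed values): low-injection small-signal conductance and
# diffusion capacitance of a short-base pn-diode per equation 5.10.
Id = 1e-3       # dc bias current, A (assumed)
m = 1.0         # ideality factor
VT = 0.02585    # thermal voltage, V
wn = 1e-4       # neutral n-region width: 1 um expressed in cm (assumed)
Dp = 12.0       # hole diffusion coefficient in Si, cm^2/s (approximate)

Gd = Id / (m * VT)             # small-signal conductance, siemens
Cd = wn**2 * Gd / (3 * Dp)     # diffusion capacitance, farads
td = wn**2 / (3 * Dp)          # storage (transit) time, seconds

# Qd = td*Id with a current-independent td is equivalent to Cd = td*Gd:
assert abs(Cd - td * Gd) < 1e-24
```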
FIGURE 5.4 Equivalent Circuits. (A) This figure shows a small-signal equivalent circuit of a pn-diode. (B) Illustrated in this figure is a large-signal equivalent circuit. A and C denote the anode and cathode terminals; i_Qd = dQ_d/dt and i_Qj = dQ_j/dt.
Series Resistances
Both the neutral regions (see Figure 5.1) and the contacts themselves have a finite resistivity, which results in additional parasitic series resistances. Physically, these can be expressed by the sheet resistances of the (access) regions as well as the area-specific contact resistances and the dimensions of the corresponding regions and contacts, respectively.
5.1.2 Equivalent Circuit
A generic small-signal diode equivalent circuit (EC) is given in Figure 5.4. C_d and the junction conductance G_d were defined in equation 5.10, and C_j is given by equation 5.4. R_s is the series resistance, a lumped representation of all contributing components discussed before. The voltage v_j across the junction is smaller than the applied terminal voltage v due to the voltage drop across R_s. In a realistic diode structure, additional parasitic elements have to be taken into account, such as a sidewall capacitance and, in integrated circuits, a parasitic pn-junction that is connected to a different terminal. For tunneling diodes, the effect of a negative differential resistance is usually taken into account in the small-signal EC by an inductance in series with R_s.
5.2 Bipolar Junction Transistor
The generic term bipolar transistor is used for various types of this device. The most important type is the vertical npn bipolar transistor, which is available as (a) a silicon-based homojunction version (npn-BJT) and (b) a SiGe or III-V material-based heterojunction version (HBT). Less often employed versions are the lateral pnp (LPNP) and vertical pnp (VPNP) transistors. In this chapter, emphasis is placed on the npn-BJT to present a basic theory that can be applied to the other transistor versions with few modifications. Bipolar transistors are employed wherever speed and drive capability are required. Major application areas include high-speed data links, such as in fiber-optic communications; RF front-ends, such as mixers and amplifiers in wireless communications; and drivers, bias circuitry, and high-speed logic
in general, and in particular in BiCMOS technologies. Both discrete and integrated solutions can be found. An example of a standard integrated BJT implemented in a self-aligned double-polysilicon process is shown in Figure 5.5 (Klose et al., 1993; Yamaguchi et al., 1994). The self-alignment scheme allows submicron emitter window widths b_E0 with µm-lithography, yielding transit frequencies f_T of 25 GHz and beyond for present production processes. Due to the recent trend to integrate BJTs and silicon-germanium (SiGe) into BiCMOS processes, bipolar transistors benefit from the advanced CMOS lithography tools. The tools enable a significant reduction in parasitics and size and, as a result, the transistors increase in speed, ranging from f_T ≈ 30 GHz (Racanelli et al., 1999) up to 90 GHz (Freeman et al., 1999), thus possibly even obviating the need for self-alignment schemes. The structure in Figure 5.5 can be divided into the internal transistor Ti (see the blow-up), the peripheral transistor Tp, and the external transistor regions. Figure 5.6(A) contains the typical doping profile under the emitter for the transistor in Figure 5.5. Two versions are possible: constant collector doping N_C, which yields a high-voltage device, and a selectively implanted collector (SIC), which yields a high-speed device. The transistor speed can be further increased by (a) employing an epitaxial base and (b) adding a graded Ge profile. Both measures significantly reduce the base transit time. A further improvement is achieved by forming a Ge step at the BE junction, as in Figure 5.6(B), which prevents holes from being back-injected into the emitter and results in a "real" HBT. Here, the BE profile can be inverted without affecting the current gain despite a much higher base doping. The latter allows a significant reduction in base resistance. The basic operation principle can already be derived for the 1-D structure under the emitter.
Classical transistor theory is discussed first; however, it becomes invalid at medium to high current densities because in all practical transistors N_C ≪ N_B, so more general equations are also presented. To allow a generic representation of the typical characteristics and a comparison of technologies, the collector current density J_C = I_C/A_E is used in the following considerations, where A_E is the (effective) emitter area, which is larger than the emitter window area A_E0. The controlling terminal voltages of the 1-D transistor are denoted V_B'E' and V_B'C'.
FIGURE 5.5 Schematic Cross-Section of a Self-Aligned Double-Polysilicon Bipolar Transistor Structure. The emitter window area is A_E0 = b_E0·l_E0; the effective (electrical) internal transistor and its effective emitter width b_E are defined later.
FIGURE 5.6 Typical Doping Profiles Under the Emitter (of the Internal Transistor). (A) This graph shows the standard Si and SiGe-drift transistor (typical Ge content 7-15%). (B) Shown here is a SiGe HBT (typical Ge content 20-30%). The location x = 0 defines the surface of the monosilicon and the E' terminal; w_Bm and w_C are the metallurgical widths of the base and collector, respectively; x_c is the C' terminal. Scales are approximate. The SIC does not exist in the external transistor region. The collector-substrate junction is not shown here.
5.2.1 Basic Equations
Based on the 1-D structure, the basic BJT action can be explained as follows. For normal forward operation (V_B'E' > 0 and V_B'C' < 0), electrons are injected from the emitter across the BE SCR into the neutral base. The carriers traverse the base by a combination of drift and diffusion, the partition of which depends on the electric field in the base, and then enter the BC SCR, where they are pulled toward the collector with high velocity. Since in today's processes recombination of minorities (electrons in this case) in the base can be neglected, the (1-D) current I_C at the collector (x = x_c) equals the current injected into the base and, for useful bias conditions V_B'E', even the current entering the BE SCR at the emitter side; this current component is often called
144 forward transfer current ITf . Similarly, holes are injected from the base across the BE SCR into the neutral emitter, constituting the base current IB ; a portion of the associated holes recombine in the emitter (Auger recombination), while the remaining portion arrives at the emitter contact. Under practically useful bias conditions VB0 E 0 , recombination in the BE SCR is negligible. The emitter current is then IE ¼ IC þ IB , and the dc common-emitter current gain is B ¼ IC =IB , which becomes bias dependent at low and high injection (or JC ). For the theoretical treatment of the BJT, the basic semiconductor equations can be employed (cf., pn-diode). Once again, a regional approach is useful for arriving at reasonably simple analytical equations that describe the electrical behavior. Results of classical transistor theory are presented first to provide a feeling for the basic analytical formulations of various characteristics (Lindmayer and Wrigley, 1965; Pritchard, 1967; Philips, 1962). Direct Current Behavior of a BJT It is a fair assumption in present technologies to neglect recombination in the base of a ‘‘useful’’ bipolar transistor. According to equation 5.6, the excess electron density injected into the base of a transistor with a simplified box doping profile N B reads, after series expansion of the sinh: n2iB (wB x) VB0 E 0 exp 1 VT NB w B x ¼ nBe 1 : wB
iTf
nB 0 E 0 ¼ IS exp mf VT
1 ,
IS ¼ (qAE )2 VT
qAE Dn n2iB : NB wB
(5:14)
ð xe
Under low injection, the electron density injected into the neutral base decreases linearly from its value nBe at the emitter side:

n(x) = nBe (1 − x/wB).   (5.11)

The corresponding (quasistatic) electron current flowing through the base to the collector is represented by:

iTf = IS [exp(vB'E'/(mf VT)) − 1],   (5.12)

with an ideality coefficient mf = 1 for the considered assumptions. The saturation current is as follows:

IS = q AE Dn ni² / (N̄B wB).   (5.13)

The current depends on base doping and neutral base width. The variation of the latter with the applied voltages vB'C' and vB'E' is known as the forward and reverse Early effect. In most transistor models, mf > 1 is supposed to account for the reverse Early effect (variation of wB with vB'E'), whereas the vB'C' dependence is approximated analytically, such as wB = wB0 (1 + vB'C'/VEf), with wB0 as the neutral base width in equilibrium and VEf as the forward Early voltage. At high injection, a solution for iTf exists that is similar to the one above but with 2Dn and mf = 2 rather than Dn and mf = 1. Unfortunately, such a discontinuous description of the transfer current is not very useful for compact models and circuit simulation. In classical theory, similar equations as above are also derived for reverse operation, leading to iTr = IS [exp(vBCi/(mr VT)) − 1], which is superimposed with iTf to yield iT. Since in practically useful transistors NC ≪ NB, the control voltage vBCi differs from the terminal voltage vB'C' by a voltage drop that results from the strongly nonlinear, bias-dependent internal epicollector resistance. Similarly, the base charge and transit time are controlled by vBCi, which is very difficult to model accurately. This, together with the high-injection problem mentioned before, requires an improvement of classical theory, which is described next.

"Modern" bipolar transistor models employ, one way or the other, the integral charge-control relation (ICCR), which was first proposed by Gummel (1970). Under the assumption of a negligible time dependence ∂n/∂t in the 1-D electron continuity equation, the (electron) transfer current from the emitter contact x = 0 to the internal collector contact at xc can be written in a simple form (Rein et al., 1985):

iT = IS [exp(vB'E'/VT) − exp(vB'C'/VT)] / (Qp/Qp0) = iTf − iTr,   (5.14)

with the saturation current:

IS = (q AE)² μ̄n ni² VT / Qp0.   (5.15)

The hole charge at equilibrium (VB'E' = VB'C' = 0) is:

Qp0 = q AE ∫0^wB0 p0 dx ≈ q AE ∫0^wB0 NB dx = q AE N̄B wB0.   (5.16)

The last two terms, containing the base doping concentration NB and its average value N̄B, respectively, and the zero-bias width wB0 of the neutral base, show the correspondence to the classical solution of equation 5.13. The bias-dependent total hole charge in the (1-D) transistor is as follows:

Qp = Qp0 + QjEi + QjCi + Qf + Qr.   (5.17)
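As a numerical illustration of how the ICCR captures both the Early effect and high-injection roll-off in a single-piece equation, the sketch below evaluates equations 5.14 and 5.17 directly. All parameter and charge values are assumed for illustration only, not taken from a real process.

```python
import math

VT = 0.02585  # thermal voltage at T = 300 K, in V

def transfer_current(vbe, vbc, IS=1e-17, Qp0=1e-13,
                     QjE=0.0, QjC=0.0, Qf=0.0, Qr=0.0):
    """ICCR transfer current (eq. 5.14) with Qp from eq. 5.17.
    IS (in A) and the charges (in C) are illustrative, not process data."""
    Qp = Qp0 + QjE + QjC + Qf + Qr
    return IS * (math.exp(vbe / VT) - math.exp(vbc / VT)) / (Qp / Qp0)

# With all bias-dependent charges zero, the ideal exponential is recovered:
i_ideal = transfer_current(0.8, -1.0)
# Additional stored minority charge (high injection) doubles Qp here and
# therefore halves the transfer current:
i_hi = transfer_current(0.8, -1.0, Qf=1e-13)
print(i_ideal, i_hi)
```

A negative QjCi at reverse BC bias likewise reduces Qp and raises iT, which is exactly the bias-dependent Early effect discussed above.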
This equation 5.17 can be expressed by several components, namely the base–emitter and base–collector depletion charge, respectively:

QjEi = ∫0^{VB'E'} CjEi dv  and  QjCi = ∫0^{VB'C'} CjCi dv,   (5.18)

as well as the "forward" and "reverse" minority charge:
5 Junction Diodes and Bipolar Junction Transistors
Qf = ∫0^{VB'E'} CdE dv = ∫0^{ITf} τf di  and  Qr = ∫0^{VB'C'} CdC dv = ∫0^{ITr} τr di.   (5.19)
In the above integrands, CjEi and CjCi are the BE and BC depletion capacitances, CdE and CdC are the BE and BC diffusion capacitances, and τf and τr are the minority storage times, which are often called transit times. The latter are strongly bias-dependent small-signal quantities, which will be discussed later in more detail. From a comparison of equations 5.14 and 5.12, it becomes obvious that there is no single Early voltage (i.e., the Early voltage is bias-dependent and modeled by QjCi). At this point, it is important to note that the ICCR permits a self-consistent description of nonideal effects via those small-signal quantities that can (a) be measured quite easily with standard methods and equipment and (b) also determine the dynamic (i.e., frequency and large-signal transient) behavior of the transistor. Moreover, from a circuit simulation and numerical point of view, the ICCR provides the basis of a continuous description of the transfer current via a single-piece equation if the charges are modeled in a continuously differentiable way. For Si-based HBTs, certain extensions of equation 5.17 are required, which are discussed in Schröter et al. (1993) and Friedrich and Rein (1999). Figure 5.7(A) shows the schematic dependence of the forward dc transfer current density on VB'E'. In a "useful" process, IT/AE behaves almost ideally at low current densities and is often described in compact models in the form:

iT ≈ IS exp(vB'E'/(mf VT)),   (5.20)
where mf is an ideality factor close to 1. For Si bipolar processes, however, this equation is only valid over a fairly small bias range because mf changes with bias according to the ICCR. At high current densities, the curve bends downward due to so-called high-current effects, which are discussed later in more detail.

The base current ijBEi across the BE junction of a modern bipolar transistor can be described by two components:

ijBEi = IBEiS [exp(vB'E'/(mBEi VT)) − 1] + IREiS [exp(vB'E'/(mREi VT)) − 1],   (5.21)

which are shown in Figure 5.7(A). The first component represents the back injection into the (neutral) emitter, with an ideality factor mBEi that is usually slightly larger than one. This component dominates at low- to high-current densities and remains unaffected by high-current effects for many processes; that is, it keeps increasing almost ideally with VB'E'. The second component represents the recombination loss in the BE SCR, with an ideality factor mREi that is close to two. This component dominates for Si BJTs only at very low current densities. The component ijBCi across the BC junction can be described similarly. The total internal dc base current is then IBi = IjBEi + IjBCi.

The dc current gain B is (obviously) a derived quantity rather than a basic physical parameter of a transistor. Therefore, for modeling purposes, it is easier to describe the collector and base current separately rather than B, as is still often attempted in compact models. For circuit design purposes, only an average value of B is usually required and can be readily provided.

FIGURE 5.7 Sketch of the dc Characteristics and Definition of Various Bias Regions. (A) This schematic diagram shows forward I–V curves with total base current IB (see later). (B) This diagram shows current gain B. The arrow indicates an increase in VCE. Typical units: IC/AE in mA/μm².

Michael Schröter

Depletion Capacitances
For reverse and forward biases, the capacitances CjEi and CjCi follow the classical voltage dependence given in equation 5.4. For typical capacitance curves as a function of bias, see Figure 5.2. In BJTs, the correct description of CjEi is of most importance for the forward bias region up to the peak value (≈ aj CjEi0), since CjEi strongly determines the charge storage and dynamic behavior in that region. At higher bias, the diffusion capacitance starts to dominate, and the exact modeling of CjEi becomes academic. CjEi is modeled differently in the various transistor models (Schröter, 1998). For the above-mentioned reasons, a voltage-independent approximation after the peak (shown by the dashed line in Figure 5.2(A)) is sufficient and numerically efficient as well as consistent with the determination of τf from fT. An accurate description of CjCi is usually most important at reverse and low forward bias of the BC junction. For transistors with a lightly doped collector (around 10^16 cm^−3), the SCR can already punch through the collector at quite low reverse bias, leading to an almost voltage-independent value of CjCi and, therefore, a deviation from the classical equation (refer to Figure 5.2(B)). At high collector current densities, the mobile carriers start to compensate for the ionized doping in the collector. As a result, the electric field distribution in the BC SCR changes, and CjCi becomes dependent on iT in a complicated way. Since in this bias region the minority charge storage dominates the dynamic behavior, an accurate modeling of the above effect on CjCi is usually of little importance for practical applications.

Minority Charge and Transit Time
Classical theory assumes that minority charge storage occurs only in the neutral base; according to equation 5.11, the injected electron density decreases linearly, leading to the stored minority charge QnB = q AE nBe (wB/2). Its derivative with respect to ITf gives the base transit time of a diffusion transistor (i.e., a transistor with a spatially constant base doping) operated at low current densities:

τnB = wB² / (2 DnB).   (5.22)
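For a feel for the magnitudes in equation 5.22, the sketch below evaluates the diffusion-transistor base transit time; the base width and diffusivity used are assumed, illustrative values, not numbers from the text:

```python
def base_transit_time(w_b, d_nb):
    """Eq. 5.22: tau_nB = w_B**2 / (2*D_nB) for a spatially constant base
    doping at low current densities (w_b in cm, d_nb in cm^2/s)."""
    return w_b ** 2 / (2.0 * d_nb)

# assumed values: wB = 100 nm = 100e-7 cm, DnB = 10 cm^2/s
tau_nB = base_transit_time(100e-7, 10.0)
print(tau_nB * 1e12, "ps")   # about 5 ps
```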
In advanced transistors, τnB is about two to three times smaller due to a built-in drift field and, thus, is only a portion of the total transit time τf0. The result of a more detailed analysis, which additionally takes into account both the Early effect and the delay time through the BC SCR, yields (Schröter, 1998):

τf0 = τ0 + Δτ0h (c − 1) + τBfvl (1 − 1/c),   (5.23)

with c = CjCi0/CjCi(VB'C') and τ0, Δτ0h, and τBfvl as time constants. Figure 5.8(A) shows the possible voltage dependence of τf0 for the two different cases mentioned previously. With increasing collector current density, so-called high-current effects (e.g., high injection in the collector region) occur, leading to additional charge storage (Kirk, 1962; Schröter and Lee, 1999). Figure 5.8(B) contains typical characteristics of the transit time τf = τf0(VC'B') + Δτf(IC, VC'E') as a function of collector current density. The sharp increase of τf due to high-current effects can be described by a critical current ICK (Schröter and Walkey, 1996) that also serves as a useful aid for designing high-speed switching circuits (Rein, in press). A rough approximation of the corresponding maximum value of the τf increase is Δτf,max ≈ wC (wBm + wC/2)/(2 DnC) (Kirk, 1962). Beyond ICK, a drop in current gain and transit frequency fT also occurs. The latter can be used to determine τf(IC, VC'E') (Rein, 1983). The total minority charge for forward operation is given by Qf(IC, VC'E') = ∫0^{ITf} τf di (= area under the τf curve), from which the BE diffusion capacitance can be calculated as CdE = ∂Qf/∂VB'E' = τf gm, with the transconductance gm = ∂IT/∂VB'E' at constant VC'E'.

FIGURE 5.8 Sketch of the Transit Time Behavior. (A) Possible voltage dependence of τf0 versus VCE: either the delay through the BC SCR dominates (Δτ0h > 0) or the Early effect dominates (Δτ0h < 0). (B) Transit time τf versus collector current density, with the sharp increase beyond ICK/AE. Typical units: τf in ps; Qf in fC/μm².

5.2.2 Internal Transistor and Base Resistance
The base region under the emitter has a finite resistivity that is characterized by the internal base sheet resistance rSBi. The bias dependence of the latter can be reasonably well approximated by:

rSBi = rSBi0 Qp0/Qp,   (5.24)
with rSBi0 as the zero-bias value. This automatically includes changes due to the changes of the SCRs and conductivity modulation at high collector current densities (Rein and Schröter, 1991). The base current entering the internal base region horizontally from the side at y = bE/2 (see Figure 5.5) causes a voltage drop in the direction of the center (y = 0) that is called emitter current crowding. Because the hole injection into the emitter is distributed laterally, the associated lumped internal base resistance rBi has to be calculated from a differential equation (Ghosh, 1965; Rein and Schröter, 1989). The formulation for rBi depends on both the mode of operation (e.g., dc, transient, and small-signal low- or high-frequency) and the emitter geometry (e.g., rectangular, square, and circular). The various cases are discussed below.

Direct Current Operation (ω = 0)
A set of equations describing the geometry dependence and emitter current crowding is given in Schröter (1991). The relations are based on analytical solutions for both a long rectangular emitter stripe (used in high-speed applications, for example) and a small square emitter geometry (used in low-power applications, for example):

rBi = rSBi (bE/lE) gi(bE, lE) c(Z),   (5.25)
where the basic expressions for the dc current crowding function and factor, respectively, are taken from the square-emitter case for simplicity reasons:

c(Z) = ln(1 + Z)/Z  and  Z = [rSBi/gZ(bE, lE)] (bE/lE) (IBi/VT).   (5.26)
The functions gi = 1/12 − (1/12 − 1/28.6) bE/lE and gZ = 21.8 (bE/lE)² − 13.5 bE/lE + 20.3 describe in a smooth form the geometry dependence of the various possible cases. The results are valid for transistors with nE emitter and nB = nE + 1 base contacts (then IBi → IBi/nB). For example, gi = 1/12 for a long stripe and gi = 1/28.6 for a square emitter window. An extension to the case of a single base contact (i.e., nE = nB = 1) is given in Schröter (1992). Note that in advanced high-speed transistors with narrow emitter widths, dc current crowding becomes negligible.

Small-Signal High-Frequency Operation
At sufficiently high frequencies (hf), the ac base current shunts the center region under the emitter; this effect is called hf emitter current crowding (Pritchard, 1958). The small-signal resistance at low frequencies is the following:

rbi = rBi + IBi (drBi/dIBi).   (5.27)
Assuming negligible dc emitter current crowding and lE ≫ bE, the distributed charging of the capacitances can be represented by a distributed RC network. The solution of the corresponding linear differential equation yields an infinite series for the admittance as a function of frequency. Truncating after the first frequency term leads to the impedance:

zbi = rBi (1 − jω rBi CrBi) ≈ rBi (1 + jω rBi CrBi)^−1.   (5.28)
The last expression corresponds to a parallel circuit with a lumped representation of hf emitter current crowding that is modeled by a capacitance CrBi = (CjEi + CjCi + CdE + CdC)/5 and the dc resistance rBi.

Large-Signal Transient Operation
During (fast) switching, the base resistance can change significantly over time. At the beginning of a switching process, only the portion at the emitter edge is charged or discharged, leading initially to a small value of the effective internal base resistance zBi(t). With a continuing turn-on process, the region toward the center starts to get charged, too, and zBi increases with time until it approaches the dc value at the end of the turn-on process. For the turn-off process, the discharge of the emitter edge region continues, thus making this region high-ohmic compared to the center region that acts as a conducting transistor. Therefore, zBi not only starts to become larger than rBi but also keeps increasing during the remainder of the turn-off process. Overall, for switching:

zBi(t)|turn-on ≤ rBi ≤ zBi(t)|turn-off.   (5.29)
In circuit simulators, usually the dc value is employed for rBi, which has turned out to be a reasonable approximation for differential-type switching circuits (Schröter and Rein, 1995; Rein, in press).
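The lumped parallel-RC representation behind equation 5.28 is easy to check numerically. In the sketch below, the element values are assumed for illustration, with CrBi taken as one fifth of the summed capacitances as described above:

```python
import cmath

def z_bi(f, r_bi, c_rbi):
    """Lumped hf form of eq. 5.28: z_bi = r_Bi / (1 + j*omega*r_Bi*C_rBi)."""
    omega = 2 * cmath.pi * f
    return r_bi / (1 + 1j * omega * r_bi * c_rbi)

r_bi = 50.0                                      # Ohm, assumed dc value
c_rbi = (20e-15 + 5e-15 + 100e-15 + 5e-15) / 5   # F; illustrative CjEi, CjCi, CdE, CdC
z10 = z_bi(10e9, r_bi, c_rbi)                    # at 10 GHz
print(abs(z10))  # hf emitter current crowding: |z_bi| drops below the dc value
```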
5.2.3 Emitter Periphery Effects
The lateral outdiffusion of the emitter doping forms the emitter periphery junction with an associated depletion capacitance CjEp. In addition, a portion of the base current, IjBEp, coming from the base contact is back-injected into the emitter already across the peripheral junction. Electrons injected across the emitter periphery junction pass through the external base region under the BE spacer before they enter the external BC depletion region (refer to Figure 5.5). The associated carrier transport and charge storage lead to a perimeter transfer current ITp and a corresponding diffusion capacitance. Therefore, in most cases, the emitter periphery acts like a second transistor almost in parallel to the internal one but with lower performance. To avoid a two-transistor model, one can combine ITi and ITp into a single transfer current source, IT = ITi + ITp = I′Ti AE0 + I′Tp PE0, by defining
an effective electrical emitter width (for lE0 ≫ bE0) and area, respectively (Rein, 1984):

bE = bE0 + 2γC  and  AE = AE0 (1 + γC PE0/AE0),   (5.30)
with the emitter window perimeter PE0. The ratio γC = I′Tp/I′Ti is bias-independent at low current densities. Employing the effective emitter dimensions in all geometry-dependent equations of the internal transistor defines a procedure for constructing a consistent one-transistor model (Schröter and Walkey, 1996). Merging ITi and ITp together with the corresponding minority charges into single elements IT and CdE also requires a modification of rBi into r*Bi to obtain the correct time constants (Koldehoff et al., 1993).
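The effective-dimension construction of equation 5.30 is a simple geometry transformation; the sketch below uses an assumed γC and emitter window size. For a long stripe (lE0 ≫ bE0) the resulting effective area comes out close to bE·lE0, which is the consistency the one-transistor model relies on.

```python
def effective_emitter(bE0, lE0, gamma_c):
    """Eq. 5.30: effective electrical emitter width and area that absorb
    the perimeter injection into a single transfer current source."""
    PE0 = 2 * (bE0 + lE0)       # emitter window perimeter
    AE0 = bE0 * lE0             # emitter window area
    bE = bE0 + 2 * gamma_c
    AE = AE0 * (1 + gamma_c * PE0 / AE0)
    return bE, AE

# assumed values: 0.4 um x 10 um window, gamma_c = 0.05 um
bE, AE = effective_emitter(0.4e-6, 10e-6, 0.05e-6)
print(bE * 1e6, "um,", AE * 1e12, "um^2")
```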
5.2.4 Extrinsic Transistor Regions
The regions outside of the (effective) internal transistor are usually described by lumped elements, which will be discussed briefly (see Figures 5.5 and 5.9). For dc operation, the series resistances of the emitter (rE) as well as of the external base (rBx) and collector (rCx) have to be taken into account besides possible current components across the junctions (such as ijBCx); rCx results from the buried layer and sinker, and in advanced bipolar transistors, rBx is dominated by the component under the BE spacer (refer to the blow-up in Figure 5.5). For hf operation, the junction capacitances of the external BC region (CjCx) and the collector–substrate region (CjS) are required. With shrinking device dimensions, bias-independent isolation (partially oxide) capacitances caused by the BE spacer (CEox) as well as by the B-contacting region over the epicollector (CCox) are of increasing importance. As frequencies increase, a substrate coupling resistance (rsu) and, for lightly doped substrate material, the capacitance (Csu) that is associated with the substrate permittivity become important (Lee et al., 1999; Pfost et al., 1996). In some processes, a parasitic substrate transistor can turn on; however, only a simple transport model (iTS, ijSC, CdS) is usually required to indicate this undesired action to the designer. From a processing point of view, the goal is to reduce the dimensions of the parasitic regions as much as possible to minimize their impact on electrical performance.
5.2.5 Other Effects
Depending on the operation, a number of other effects can occur that may have to be taken into account for certain applications. Increasing the speed of bipolar transistors comes partially at the expense of lower breakdown voltages in the BC and BE junctions due to larger doping concentrations and shallower junction depths. Therefore, the corresponding effect of BC avalanche breakdown (iAVL in Figure 5.9), which usually occurs in the internal transistor, often needs to be taken into account (Maes et al., 1990; Rickelt and Rein, 1999; Schröter et al., 1998). For some applications, such as varactors, a BE breakdown current (iBEt), which in high-speed transistors is mostly determined by tunneling across the BE periphery junction, can also become important for circuit design (Schröter et al., 1998). Temperature variations or self-heating in the device cause most characteristics to change more or less significantly with temperature. The corresponding shift in dc I–V characteristics and depletion capacitances is mostly determined by the temperature dependence of the bandgap. For instance, typical values for ∂VBE/∂T at constant IC and for the temperature coefficient (TC) of built-in voltages are in the range of −1.5 to −2.5 mV/K (for silicon). The TC of the current gain is positive for most Si BJTs, about zero for SiGe bipolar transistors, and negative for III–V (e.g., AlGaAs) HBTs. Changes in series resistances and transit time are predominantly determined by the temperature dependence of the respective mobilities and the thermal voltage VT. Self-heating occurs at bias points where a high power P ≈ IC VCE is generated and can be described to first order by a separate network consisting of a thermal resistance, Rth, and a thermal capacitance, Cth. For many applications, such as low-noise amplifiers and oscillators, noise plays an important role. Fundamental noise mechanisms are thermal noise in series resistances, shot noise in junction and transfer currents, and flicker noise. The latter is a phenomenological model for taking into account (physically usually not well described) low-frequency noise sources at surfaces and interfaces (Markus and Kleinpenning, 1995; Deen et al., 1999; Chen et al., 1998). The high-frequency (hf) noise performance of modern bipolar transistors is mostly determined by the noise in rB (= rBi + rBx), IjBE, and IT as well as by the parasitic capacitances.
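The first-order thermal network translates into a one-line estimate of the junction temperature rise at a bias point; the thermal resistance below is an assumed, illustrative number, and the −2 mV/K coefficient sits in the middle of the range quoted above.

```python
def self_heating(ic, vce, r_th):
    """First-order static self-heating: dTj = Rth * P with P ~ IC * VCE."""
    return r_th * ic * vce

dT = self_heating(ic=2e-3, vce=3.0, r_th=3000.0)  # Rth in K/W, assumed
dVbe = -2e-3 * dT   # VBE shift at constant IC, using about -2 mV/K
print(dT, "K,", dVbe * 1e3, "mV")
```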
5.2.6 Equivalent Circuits
Figure 5.9 shows a physics-based large-signal equivalent circuit (EC) of a bipolar transistor, containing the elements that were discussed before (Koldehoff et al., 1993; Schröter and Rein, 1995; Schröter and Lee, 1998). The BC capacitance CBCx = CjCx + CCox has been split across rBx to account for distributed hf effects in the external BC region in as simple as possible an EC topology (π-representation). In practice, the full complexity of the EC is often not required, and certain effects (i.e., their associated elements, such as those for the substrate transistor or iBEt) can be turned off. In many cases, the transistors are even operated outside any critical bias regions, such as the high-current and breakdown regions, so that a simplified EC [refer to Figure 5.9(B)] and a set of model equations are satisfactory and enable a better analytical insight into the circuit performance for the designer. The simplified model can be constructed from the physics-based model by merging elements and simplifying equations at the expense of a smaller bias, geometry, and temperature validity range. A scalable modeling approach is described in Schröter et al. (1999).
FIGURE 5.9 (A) Shown here is a large-signal physics-based EC of an advanced compact model for circuit design; (B) In comparison, here is a simplified large-signal EC obtained by merging elements and simplifying equations.
5.2.7 Some Typical Characteristics and Figures of Merit
This chapter contains a brief overview and discussion of the typical behavior of some of the most important characteristics of a bipolar transistor. Due to the lack of space, the overview is by no means complete, and the reader is referred to the literature in the Reference list at the end of the chapter. To facilitate an easier understanding for circuit designers in practice and a meaningful comparison of processes, it is recommended to choose the collector current density JC = IC/AE and the voltage VCE as independent variables for defining the bias points of a figure of merit (FoM).

The forward dc ("Gummel") characteristics and the current gain were already shown in Figure 5.7. For most processes, the voltage drop ΔVIB is caused by only rE and rB, whereas ΔVIC contains in addition the influence of high-current effects, causing B to drop at higher JC. Therefore, IC(VBE) should not be used for determining the series resistances! Figure 5.10 shows typical forward output characteristics for constant IB (with linear spacing ΔIB). The thin solid line (labeled with ICK/AE) indicates the boundary between the quasisaturation region and the normal forward operation. The slope of the curves determines the dc output conductance. At sufficiently high VCE, avalanche breakdown in the BC SCR causes JC to increase rapidly; this effect is more pronounced in JC(VCE) if IB rather than VBE is held constant. Often the CE breakdown voltage for an open base, BVCEO, is used to characterize the breakdown performance of a process. At high JC and VCE, self-heating occurs that causes JC to increase (in Si) or decrease (in GaAs) already before the onset of breakdown.

FIGURE 5.10 Sketch of the Output Characteristics: (A) Si BJT; (B) AlGaAs HBT. The impact of self-heating is shown by the dashed lines. Si-based transistors possess a much smaller BVCEO value than GaAs-based transistors of comparable performance. Typical units: IC/AE in mA/μm²; VCE in V.

The most important FoMs characterizing the high-frequency performance of a transistor are discussed next. For those frequencies f that are not too high, the small-signal current gain in common-emitter (CE) configuration (i.e., E-grounded with the signal applied to either B or C) can be described quite accurately as a function of frequency by:

β = β0 / [1 + j(f/fβ)] = β0 / [1 + j(β0 f/fT)],   (5.31)

with fβ as the 3-dB corner frequency, fT as the (quasistatic) transit frequency, and β0 as the low-frequency current gain. The above equation allows a simple method for determining fT (Gummel, 1969):

fT = f / Im[1/β(f)],   (5.32)

which, in addition, can be performed at fairly low frequencies (down to f ≈ fβ), thus avoiding sophisticated de-embedding methods (Koolen et al., 1991). The following is a quite accurate analytical expression for fT:

fT = (1/2π) gm / {CBE + CBC + gm [τf + rCx CBC + (rE + rB/β0)(CBC + CEox)]},   (5.33)
where the transconductance gm is proportional to IC at lowcurrent densities. The maximum frequency of oscillation, fmax , depends on the power gain chosen for its extraction. According to theory, though, the magnitude of only the unilateral power gain, GU , is expected to show a 20 dB per decade decrease in frequency in a certain frequency range. In practice, a deviation of GU (and also of jbj) from this ideal behavior occurs at higher frequencies due to parasitics, such as CBC and nonquasistatic effects (TeWinkel, 1973; Schro¨ter and Rein, 1995). From GU , the following equation can be derived for CE configuration (Armstrong and French, 1995):
fmax
vffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 3ffi u 2sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi u 2 fT (1 þ CL =CBC ) u 4 t2 1þ 15: (5:34) ¼ 1 þ CL =CBC 8pfT CBC (rB þ 1=gm )
CL is the capacitive load at the transistor output and includes CjS . Forpsmall ratios CL =CBC , the simple classical equation ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi fmax ¼ fT =(8pCBC rB ) is obtained. The typical bias dependence of fT and fmax is shown in Figure 5.11(A). Although fT shows a moderate dependence on emitter geometry, fmax is a strong function of the emitter dimensions and is usually evaluated for transistors with lE0 bE0 . An important FoM characterizing the noise behavior of a transistor is the noise factor F
that can also be calculated as a function of frequency if the EC topology and element noise sources are given (Vendelin, 1990). F depends on a number of transistor parameters as well as on the generator impedance ZG and increases at high frequencies as shown in Figure 5.11(B). Toward low frequencies, F increases again due to flicker noise. For Si and SiGe bipolar transistors, typical values of the corner frequency ffn are in the order of 1 to 10 kHz, which is much lower than in MOS transistors, for example. Of particular interest for circuit applications is the minimum noise factor Fmin at high frequencies, which is obtained by determining the minimum of F with respect to real and imaginary parts of ZG . At high frequencies, Fmin can be expressed analytically as a function of dc bias, transistor parameters, and frequency dependent y-parameters (Voinigescu et al., 1997); a reasonable simplification is as follows:
Fmin
mf f ffi1þ þ b 0 fT
sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi m2f fT2 2IC f2 : (rE þ rB ) 1 þ T 2 þ VT b0 f b0 f 2
(5:35)
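Equations 5.31 through 5.34 can be bundled into a small figure-of-merit calculator. The sketch below first verifies that the extraction rule of equation 5.32 recovers fT exactly from the single-pole gain of equation 5.31 (which is what makes low-frequency extraction possible), and then evaluates fmax; all element values are assumed for illustration only.

```python
import math

def beta(f, beta0, fT):
    """Single-pole CE current gain (eq. 5.31)."""
    return beta0 / (1 + 1j * beta0 * f / fT)

def fT_from_beta(f, b):
    """Gummel's extraction (eq. 5.32): fT = f / Im(1/beta)."""
    return f / (1.0 / b).imag

def f_max(fT, cbc, rb, gm, cl=0.0):
    """Eq. 5.34; for small CL/CBC it approaches sqrt(fT/(8*pi*cbc*rb))."""
    a = 1 + cl / cbc
    y = a / (8 * math.pi * fT * cbc * (rb + 1 / gm))
    return (fT / a) * math.sqrt(2 * (math.sqrt(1 + y) - 1))

fT0, beta0 = 50e9, 100.0
# the extraction already works at a low measurement frequency (here 1 GHz):
fT_x = fT_from_beta(1e9, beta(1e9, beta0, fT0))

gm = 2e-3 / 0.02585            # transconductance for IC = 2 mA, assumed
fmax = f_max(fT0, cbc=10e-15, rb=50.0, gm=gm)
fmax_loaded = f_max(fT0, cbc=10e-15, rb=50.0, gm=gm, cl=20e-15)
print(fT_x / 1e9, fmax / 1e9, fmax_loaded / 1e9)  # all in GHz
```

The last two lines also show the degradation of fmax with a capacitive load CL at the output, which is the point of the Armstrong and French (1995) extension.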
FIGURE 5.11 Typical Characterization of the (Small-Signal) High-Frequency Performance of Transistors. (A) This figure shows the bias dependence of the transit frequency fT and the maximum oscillation frequency fmax. (B) Shown here is a sketch of the noise figure [= 10 log(F)] versus frequency, with the flicker-noise corner frequency ffn. Typical units: f, fT, fmax in GHz; NF in dB.

5.2.8 Other Types of Bipolar Transistors
Heterojunction bipolar transistors (HBTs) show excellent high-frequency behavior and are in many respects superior to BJTs. Conventionally, HBTs have been fabricated in III–V materials such as GaAs (Ali and Gupta, 1991); major applications of those HBTs are power amplifiers. More recently, SiGe HBTs have also started entering applications that used to be dominated by III–V semiconductors; so far, however, the majority of those "HBTs" are actually transistors with an increased drift field in the base (Harame et al., 1995) rather than "true" HBTs (Schüppen et al., 1996), as shown in Figure 5.6(B). Besides vertical npn transistors, variants of pnp transistors are offered in processes, since they are required for certain circuit applications, such as bias networks or push-pull stages. Only a few truly complementary bipolar processes exist (Yamaguchi et al., 1994); that is, in most cases the pnp
transistors have a much lower performance than their npn counterparts. Lateral pnp transistors are usually quite slow (fT around 0.2 to 1 GHz) and consume significant space on a chip. Vertical pnp transistors can be about a factor of 10 faster than LPNPs but often suffer from a large collector series resistance and associated high-current effects already at quite low collector current densities. In all cases, the fundamental transistor action is the same, so that the same basic modeling approach can be used. For accurate circuit simulation, however, modifications or extensions of both the EC and some of the element equations are required.
References
Ali, F., and Gupta, A. (Eds.). (1991). HEMTs and HBTs: Devices, fabrication, and circuits. Boston: Artech House.
Armstrong, G., and French, W. (1995). A model for the dependence of maximum oscillation frequency on collector to substrate capacitance in bipolar transistors. Solid-State Electron. 38, 1505–1510.
Chen, X.Y., Deen, M.J., Yan, Z.X., and Schröter, M. (1998). Effects of emitter dimensions on low-frequency noise in double-polysilicon BJTs. Electronics Letters 34(2), 219–220.
Deen, M.J., Rumyantsev, S.L., and Schröter, M. (1999). On the origin of 1/f noise in polysilicon emitter bipolar transistors. J. Appl. Phys. 85, 1192–1195.
Freeman, G., et al. (1999). A 0.18-μm 90-GHz fT SiGe HBT BiCMOS, ASIC-compatible, copper interconnect technology for RF and microwave applications. IEDM, Washington, DC.
Friedrich, M., and Rein, H.M. (1999). Analytical current–voltage relations for compact SiGe HBT models. IEEE Trans. Electron Dev. 46, 1384–1401.
Ghosh, H.N. (1965). A distributed model of the junction transistor and its application in the prediction of the emitter-base diode characteristic, base impedance, and pulse response of the device. IEEE Trans. Electron Dev. 12, 513–531.
Gummel, H.K. (1970). A charge-control relation for bipolar transistors. Bell Syst. Tech. J. 49, 115–120.
Gummel, H.K. (1969). On the definition of the cutoff frequency fT. Proc. IEEE 57, 2159.
Harame, D., et al. (1995). Si/SiGe epitaxial base transistors. IEEE Trans. Electron Dev. 40, 455–482.
Kirk, C.T. (1962). A theory of transistor cutoff frequency fall-off at high current densities. IEEE Trans. Electron Dev. 9, 914–920.
Klose, H., et al. (1993). B6HF: A 0.8-micron 25-GHz/25-ps bipolar technology for mobile radio and ultrafast data link IC products. Proc. BCTM, Minn., 125–128.
Koldehoff, A., Schröter, M., and Rein, H.M. (1993). A compact bipolar transistor model for very high-frequency applications with special regard to narrow stripes and high current densities. Solid-State Electron. 36, 1035–1048.
Koolen, M., et al. (1991). An improved de-embedding technique for on-wafer high-frequency characterization. Proc. BCTM, 188–191.
Lee, T.Y., Fox, R., Green, K., and Vrotsos, T. (1999). Modeling and parameter extraction of BJT substrate resistance. Proc. BCTM, 101–103.
Lindmayer, J., and Wrigley, C.Y. (1965). Fundamentals of semiconductor devices. Princeton, NJ: Van Nostrand.
Maes, W., DeMeyer, K., and van Overstraaten, R. (1990). Impact ionization in silicon: A review and update. Solid-State Electron. 33, 705–718.
Markus, and Kleinpenning. (1995). Low-frequency noise in polysilicon bipolar transistors. IEEE Trans. Electron Dev. 42, 720–727.
Pfost, M., Rein, H.M., and Holzwarth, T. (1996). Modeling substrate effects in the design of high-speed Si bipolar ICs. IEEE J. Solid-State Circuits 31, 1493–1502.
Philips, A.B. (1962). Transistor engineering. New York: McGraw-Hill.
Pritchard, R.L. (1967). Transistor characteristics. New York: McGraw-Hill.
Pritchard, R.L. (1958). Two-dimensional current flow in junction transistors at high frequencies. Proc. IEEE 46, 1152–1160.
Racanelli, M., et al. (1999). BC35: A 0.35-μm, 30-GHz production RF BiCMOS technology. Proc. BCTM, 125–128.
Rein, H.M. (1984). A simple method for separation of the internal and external (peripheral) currents of bipolar transistors. Solid-State Electron. 27, 625–632.
Rein, H.M. (1983). Proper choice of the measuring frequency for determining fT of bipolar transistors. Solid-State Electron. 26, 75–82 and 929.
Rein, H.M. (In press). Si and SiGe bipolar ICs for 10–40 Gb/s optical-fiber TDM links. Int. J. High-Speed Electronics and Systems.
Rein, H.M., and Schröter, M. (1989). Base spreading resistance of square emitter transistors and its dependence on current crowding. IEEE Trans. Electron Dev. 36, 770–773.
Rein, H.M., and Schröter, M. (1991). Experimental determination of the internal base sheet resistance of bipolar transistors under forward-bias conditions. Solid-State Electron. 34, 301–308.
Rein, H.M., Stübing, H., and Schröter, M. (1985). Verification of the integral charge-control relation for high-speed bipolar transistors at high current densities. IEEE Trans. Electron Dev. 32, 1070–1076.
Rickelt, M., and Rein, H.M. (1999). Impact-ionization induced instabilities in high-speed bipolar transistors and their influence on the maximum usable output voltage. Proc. BCTM, 54–57.
Schilling, R.B. (1969). A regional approach for computer-aided transistor design. IEEE Trans. Education 12, 152–161.
Schröter, M. (1998). Bipolar transistor modeling for the design of high-speed integrated circuits. Monterey/Lausanne: MEAD.
Schröter, M. (1992). Modeling of the low-frequency base resistance of single base contact bipolar transistors. IEEE Trans. Electron Dev. 39, 1966–1968.
Schröter, M. (1991). Simulation and modeling of the low-frequency base resistance of bipolar transistors in dependence on current and geometry. IEEE Trans. Electron Dev. 38, 538–544.
Schröter, M., et al. (1999). Physics- and process-based bipolar transistor modeling for integrated circuit design. IEEE J. Solid-State Circuits 34, 1136–1149.
Schröter, M., Friedrich, M., and Rein, H.M. (1993). A generalized integral charge control relation and its application to compact models for silicon-based HBTs. IEEE Trans. Electron Dev. 40, 2036–2046.
Schröter, M., and Lee, T.Y. (1998). HICUM: A physics-based scaleable compact bipolar transistor model. Presentation to the Compact Model Council. www.eigroup.org/cmc
Schröter, M., and Lee, T.Y. (1999). A physics-based minority charge and transit time model for bipolar transistors. IEEE Trans. Electron Dev. 46, 288–300.
Schröter, M., and Rein, H.M. (1995). Investigation of very fast and high-current transients in digital bipolar circuits by using a new compact model and a device simulator. IEEE J. Solid-State Circuits 30, 551–562.
Schröter, M., and Walkey, D.J. (1996). Physical modeling of lateral scaling in bipolar transistors. IEEE J. Solid-State Circuits 31, 1484–1491.
Schröter, M., Yan, Z., Lee, T.Y., and Shi, W. (1998). A compact tunneling current and collector breakdown model. Proc. BCTM, 203–206.
Schüppen, A., et al. (1996). SiGe technology and components for mobile communication systems. Proc. BCTM, 130–133.
Selberherr, S. (1984). Semiconductor device modeling. Wien: Springer.
Sze, S. (1981). Physics of semiconductor devices. New York: John Wiley & Sons.
TeWinkel, J. (1973). Extended charge-control model for bipolar transistors. IEEE Trans. Electron Dev. 20, 389–394.
Vendelin, G., Pavio, A., and Rohde, U. (1990). Microwave circuit design. Toronto: John Wiley & Sons.
Voinigescu, S., et al. (1997). A scaleable high-frequency noise model for bipolar transistors and its application to optimal transistor sizing for low-noise amplifier design. IEEE J. Solid-State Circuits 32, 1430–1439.
Yamaguchi, T., et al. (1994). Process investigations for a 30-GHz fT submicrometer double poly-Si bipolar technology. IEEE Trans. Electron Dev. 41, 321–328.
6 Semiconductors

Michael Shur
Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, New York, USA

6.1 History of Semiconductors .................................................... 153
6.2 Dielectrics, Semiconductors, and Metals ...................................... 153
    6.2.1 Chemical Bonds and Crystal Structure . 6.2.2 Electrons and Holes . 6.2.3 Band Diagrams
6.3 Electron and Hole Velocities and Mobilities .................................. 157
    6.3.1 Impact Ionization . 6.3.2 Optical Properties
6.4 Important Semiconductor Materials ............................................ 162
References ....................................................................... 162

6.1 History of Semiconductors

In 1821, the German physicist Thomas Seebeck first noticed unusual properties of semiconductor materials, such as lead sulfide (PbS). In 1833, the English physicist Michael Faraday reported that the resistance of semiconductors decreased as temperature increased (opposite to what happens in metals, in which resistance increases with temperature). In 1873, the British engineer Willoughby Smith discovered that the resistivity of selenium, a semiconductor material, is very sensitive to light. In 1875, Werner von Siemens invented a selenium photometer, and, in 1878, Alexander Graham Bell used this device for a wireless telephone communication system. The invention of the bipolar junction transistor by the American scientists John Bardeen, Walter Houser Brattain, and William Bradford Shockley in 1947 led to a revolution in electronics, and, soon after, semiconductor devices largely replaced electronic tubes.

6.2 Dielectrics, Semiconductors, and Metals

When atoms form solids, their electron energy states split into many close energy levels that form allowed energy bands. Within each allowed band, the levels are so close to each other that the electron energy can vary nearly continuously. These bands are separated by forbidden energy gaps, as shown in Figure 6.1. The positions and extents of the allowed and forbidden energy bands determine the properties of solids. An important property of electrons is captured by the Pauli exclusion principle: no more than two electrons (with opposite spins) can occupy each energy state. Electrons occupy the lowest energy levels first. In semiconductors and dielectrics, almost all the states in the lowest energy bands are filled by electrons, whereas the energy states in the higher energy bands are, by and large, empty. The lower energy bands with mostly filled energy states are called the valence bands. The higher energy bands with mostly empty energy states are called conduction bands. The difference between the highest valence band and the lowest conduction band is called the energy band gap, or simply the energy gap. An electron in a valence band needs an energy equal to or higher than the energy gap to experience a transition from the valence band to the conduction band. In a dielectric, the energy gap, Eg, is so large that the valence bands are completely filled and the conduction bands are totally devoid of electrons; typically, Eg is larger than 5 to 6 eV. For semiconductors, energy band gaps vary between 0.1 eV and 3.5 eV. The energy gap of silicon (Si), the most important semiconductor material, is approximately 1.12 eV at room temperature. The energy gap of silicon dioxide, the most widely used dielectric material in microelectronics, is 9 eV. In a metal, the lowest conduction band is partially filled with many electrons (usually one electron per atom), and the metal's resistance is very small.

FIGURE 6.1 Band Structures of a Dielectric, a Semiconductor, and a Metal. The shaded regions represent energy levels filled with electrons (two per state to satisfy the Pauli exclusion principle).

6.2.1 Chemical Bonds and Crystal Structure

The most important semiconductor is silicon, which belongs to the fourth column of the Periodic Table. The elements important for semiconductors, such as Si, Ge, Ga, As, and N, have only s and p electrons in their valence shells (the shells with the largest value of the principal quantum number, n, for a given atom). It takes eight valence electrons to fill all the states in these two valence subshells (two s electrons and six p electrons); in the Periodic Table, the elements with completely filled s and p valence subshells are the inert gases. When atoms are combined in a solid, they may share or exchange valence electrons, forming chemical bonds. In silicon, germanium, and related compound semiconductors, these bonds are formed in such a way that neighboring atoms share their valence electrons, having (on average) completed s and p subshells in the valence shell. In these semiconductors, each atom forms four bonds with four other atoms (four nearest neighbors) and shares two valence electrons with each of them (i.e., an atom shares eight valence electrons with all four nearest neighbors). Since each atom has four nearest neighbors, it is tetrahedrally coordinated, as shown in Figure 6.2. If all atoms in a crystal are identical (as in Si or Ge, for example), the electrons shared in a bond must spend, on average, the same time on each atom. This corresponds to a purely homopolar (covalent) bond. In a compound semiconductor, such as GaAs, bonding electrons spend a greater fraction of time on the anions (i.e., the negatively charged atoms). This situation corresponds to a partially heteropolar (partially ionic) bond. The quantitative measure of the type of a chemical bond is the ionicity, fi. The ionicity is zero for a purely covalent bond and unity for a purely heteropolar bond; ionicity values 0 < fi < 1 can be assigned to any binary chemical compound. Elemental semiconductors, such as Si, Ge, or diamond, have zero ionicity. According to the ionicity scale proposed by Phillips in 1973, fi = 0.177 for silicon carbide (SiC) and fi = 0.31 for GaAs.

There are very few elemental semiconductors (e.g., Si, Ge, and C). Many compound materials (such as GaAs, InP, AlGaAs, and GaN), however, also have the tetrahedral bond configuration seen in Figure 6.3 and exhibit semiconducting properties. Some of these semiconductors, such as GaAs and AlGaAs or GaN and AlGaN, can be grown as films on top of each other to form heterostructures. Hence, many different material combinations with different properties are available to a semiconductor device designer.

As can be seen from Figure 6.2, each silicon atom is located at the center of the tetrahedron formed by four other silicon atoms. In Si, such an arrangement is formed by two interpenetrating face-centered cubic sublattices of silicon atoms, shifted with respect to each other by one fourth of the body diagonal. A diamond crystal (one of the crystalline modifications of carbon) has the same crystal structure, which is traditionally called the diamond crystal structure (see Figure 6.4). Another important semiconductor, germanium (Ge), also has the diamond crystal structure. Most compound semiconductors have the zinc blende crystal structure seen in Figure 6.5, which is very similar to the diamond structure. This structure contains two kinds of atoms, A and B, with each species forming a face-centered cubic lattice. These mutually penetrating face-centered cubic (fcc) lattices of element A and element B are shifted relative to each other by a quarter of the body diagonal of the unit cell cube.
FIGURE 6.2 Tetrahedral Bond Configuration in Si; α = 109°28′.

FIGURE 6.3 Tetrahedral Bond Configuration in GaAs; α = 109°28′.
FIGURE 6.4 The Face-Centered Cubic (fcc) and Diamond Structure. This structure is formed by two interpenetrating fcc lattices shifted by one quarter of the body diagonal. Thick lines show the bonds formed between an atom and its four nearest neighbors (tetrahedral bond configuration). Figure adapted from Shur (1996).
FIGURE 6.5 The Face-Centered Cubic (fcc) and Zinc Blende Structure. This structure is formed by two interpenetrating fcc lattices shifted by one quarter of the body diagonal (similar to the diamond structure). Thick lines show the bonds between an atom and its four nearest neighbors (tetrahedral bond configuration). Here a is the lattice constant. Figure adapted from Shur (1996).
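The quarter-body-diagonal shift quoted in the captions fixes the nearest-neighbor (bond) distance at d = a√3/4. A quick numeric check in Python (a sketch; the lattice constants are the Table 6.1 values):

```python
import math

def bond_length(a):
    """Nearest-neighbor distance in the diamond or zinc blende
    structure: one quarter of the body diagonal, a*sqrt(3)/4."""
    return a * math.sqrt(3) / 4

# Lattice constants a in angstroms (values from Table 6.1)
for name, a in [("Si", 5.43), ("Ge", 5.66), ("GaAs", 5.65)]:
    print(f"{name}: a = {a} A, nearest-neighbor distance = {bond_length(a):.2f} A")
```

For Si this gives about 2.35 Å, the well-known Si–Si bond length.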
6.2.2 Electrons and Holes

In the absence of an electric field, electrons in a semiconductor move randomly because of thermal motion. The average velocity of this motion is called the thermal velocity. An electric field accelerates electrons and causes a component of the electron velocity to go in the direction of the electric field. This velocity of the directed electronic motion in the electric field is called the drift velocity, and it results in an electric current. To the first order, the difference between the motion of conduction band electrons and free electrons can be approximately described by introducing an electron effective mass, mn, which is different from the free electron mass, me = 9.11 × 10^-31 kg. For example, in gallium arsenide (GaAs), mn ≈ 0.067 me.

A valence band electron can change its energy and velocity if and only if it can be promoted to an empty energy level. The vacant levels in the valence band allow electrons in the valence band to move, as shown in Figure 6.6. The motion of a vacant energy level in the valence band can be visualized as the motion of a fictitious particle called a hole. The absence of a negative electronic charge, −q, corresponds to a positive charge q = −(−q). Hence, the hole has a positive charge with a magnitude equal to the electronic charge. To the first order, holes can be treated as particles with a certain effective mass, mp.

FIGURE 6.6 Motion of Electrons in an Electric Field. Solid circles represent electrons occupying energy levels; open circles represent energy levels available for electrons.

6.2.3 Band Diagrams

FIGURE 6.7 Band Diagram of a Semiconductor in an Electric Field.

Usually, conducting electrons occupy states in the conduction band that are close to the lowest energy in the conduction band, Ec, which is called the conduction band edge or the bottom of the conduction band. Holes occupying states in the valence band are close to the highest energy in the valence band, Ev, which is called the valence band edge or the top of the valence band. The dependencies of Ec and Ev on position in a semiconductor device are called band diagrams. Band diagrams are very useful for illustrating properties and understanding behavior of semiconductor materials and devices. As an example, Figure 6.7 shows a band diagram of a semiconductor in an electric field. In an electric field, F, the bands are tilted, with the slope qF. The arrows in Figure 6.7 represent the directions of forces exerted on the electrons and holes by the electric field. (Since electrons are charged negatively and holes are charged positively, they move in opposite directions.) In free space, the electron energy is:

    E = me v^2 / 2,    (6.1)

where me is the free electron mass and v is the electron velocity. According to the basics of quantum mechanics, an electron has both particle-like and wave-like properties, and its momentum, p = me v, can be related to its wave vector, k:

    p = ħk,    (6.2)

where ħ = 1.055 × 10^-34 J·s is the reduced Planck constant. Hence, equation 6.1 can be rewritten as:

    E = ħ²k² / (2me).    (6.3)

In semiconductor materials, the E(k) dependence is more complex (see Figure 6.8). Near the lowest point of the conduction band, the dependence of the electron energy on the wave vector can still be approximated by a parabolic function similar to that for an electron in free space (see equation 6.3 and Figure 6.8). The curvature of this dependence, however, is usually quite different from that for an electron in free space. Moreover, different crystallographic directions are not equivalent, and this curvature may depend on direction. These features can be accounted for by introducing an inverse effective mass tensor with components defined as follows:

    (1/m)ij = (1/ħ²) ∂²En / (∂ki ∂kj),    (6.4)

where ki and kj are the projections of the wave vector k. When E(k) depends only on the magnitude of k, and not on the direction of k, this tensor reduces to a scalar inverse effective mass, 1/mn. This is the case for GaAs, where the dependence of the electron energy on the wave vector near the lowest point of the conduction band can be approximated by the following function:

    En(k) = Ec + ħ²k² / (2mn).    (6.5)

FIGURE 6.8 Qualitative Energy Spectra for Electrons in Free Space, Si, and GaAs. Shaded states in the valence bands are filled with valence electrons. Figure adapted from Shur (1996).

In Si, however, the lowest minimum of the conduction band is quite anisotropic. In this case, the simplest equation for E(k) is obtained by expressing k in terms of two components: kl, and kt, which is perpendicular to kl. These components are called the longitudinal and transverse components of k, respectively. (This takes into account the crystal symmetry, which makes all possible directions of kt equivalent.) Now the inverse effective mass tensor reduces to two components, 1/ml and 1/mt, where ml and mt are called the longitudinal effective mass and transverse effective mass, respectively. In this case, the equation for E(k) is given by:
    En(k) = Ec + (ħ²/2)(kl²/ml + kt²/mt).    (6.6)
For Si, ml ≈ 0.98 me and mt ≈ 0.189 me, where me is the free electron mass, so that these two effective masses are quite different indeed! For holes, an equation similar to equation 6.5 is written as:

    Ep(k) = Ev − ħ²k² / (2mp).    (6.7)
Equation 6.7 can be used for the valence bands in cubic semiconductors. Here Ev is the energy corresponding to the top of the valence band, and mp is called the hole effective mass.

Doping

In a pure semiconductor, the electron concentration in the conduction band and the hole concentration in the valence band are usually very small compared to the number of available energy states. These concentrations can be changed by many orders of magnitude by doping, which means adding to a semiconductor impurity atoms that can "donate" electrons to the conduction band (such impurities are called donors) or "accept" electrons from the valence band, creating holes (such impurities are called acceptors). Both donors and acceptors are referred to as dopants. A donor atom has more electrons available for bonding with neighboring atoms than is required for an atom in the host semiconductor. For example, a silicon atom has four electrons available for bonding and forms four bonds with the four nearest silicon atoms. A phosphorus atom has five electrons available for bonding. Hence, when a phosphorus atom replaces a silicon atom in a silicon crystal, it can bond with the four nearest neighbors and "donate" one extra electron to the conduction band. An acceptor atom has fewer electrons than are needed for chemical bonds with neighboring atoms of the host semiconductor. For example, a boron atom has only three electrons available for bonding. Hence, when a boron atom replaces a silicon atom in silicon, it "accepts" one missing electron from the valence band, creating a hole. A semiconductor doped with donors is called an n-type semiconductor. A semiconductor doped with acceptors is called a p-type semiconductor. Often, especially at room temperature or elevated temperatures, each donor in an n-type semiconductor supplies one electron to the conduction band, and the electron concentration, n, in the conduction band is approximately equal to the donor concentration, Nd.
In a similar way, at room temperature or elevated temperatures, each acceptor creates one hole in the valence band, and the hole concentration, p, in the valence band of a p-type semiconductor is approximately equal to the acceptor concentration, Na . If both donors and acceptors are added to a semiconductor, they ‘‘compensate’’ each other, since electrons
supplied by donors occupy the vacant levels in the valence band created by the acceptor atoms. In this case, the semiconductor is called compensated. In a compensated semiconductor, the largest impurity concentration "wins": if Nd > Na, the compensated semiconductor is n-type with the effective donor concentration Ndeff = Nd − Na; if Nd < Na, the compensated semiconductor is p-type with the effective acceptor concentration Naeff = Na − Nd.
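The compensation rule can be expressed in a few lines of code. A minimal sketch (the function name and the example concentrations are illustrative, not from the text):

```python
def classify(Nd, Na):
    """Apply the compensation rule: the larger impurity
    concentration 'wins' (concentrations in cm^-3)."""
    if Nd > Na:
        return "n-type", Nd - Na   # effective donor concentration Ndeff
    elif Na > Nd:
        return "p-type", Na - Nd   # effective acceptor concentration Naeff
    return "fully compensated", 0.0

print(classify(1e17, 2e16))  # n-type, Ndeff = 8e16 cm^-3
```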
6.3 Electron and Hole Velocities and Mobilities

Electrons and holes experience chaotic random thermal motion. The average kinetic energy of thermal motion per electron is 3kB T/2, where T is the temperature in kelvin and kB is the Boltzmann constant. The electron thermal velocity, vthn, is found by equating the electron kinetic energy to 3kB T/2:

    mn vthn² / 2 = 3kB T / 2,    (6.8)

where mn is the electron effective mass. From equation 6.8:

    vthn = (3kB T / mn)^(1/2).    (6.9)
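Equation 6.9 is straightforward to evaluate. A minimal numeric sketch (the physical constants are standard values; the effective mass is the Si entry from Table 6.1):

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
ME = 9.11e-31       # free electron mass, kg

def thermal_velocity(m_eff, T=300.0):
    """Electron thermal velocity from equation 6.9: vth = sqrt(3*kB*T/m)."""
    return math.sqrt(3 * KB * T / m_eff)

# Si density-of-states effective mass mn = 1.08 me (Table 6.1)
v = thermal_velocity(1.08 * ME)
print(f"vthn(Si, 300 K) ~ {v:.3g} m/s")  # about 1.1e5 m/s
```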
Thermal velocities vthn and vthp represent the average magnitudes of the electron and hole velocities caused by the thermal motion. Since the directions of this thermal motion are random, the same number of carriers crosses any device cross-section in either direction, so the electric current is equal to zero. In an electric field, the electrons and holes acquire an additional drift velocity caused by the electric field, which is superimposed on the chaotic thermally induced velocities. In a uniform semiconductor and in a weak electric field, F, the drift velocities, vn and vp, of the electrons and holes are proportional to the electric field:

    vn = μn F,    (6.10)

and

    vp = μp F.    (6.11)

Here μn and μp are called the electron and hole mobilities, respectively. (The direction of vp coincides with the direction of the electric field, and the direction of vn is opposite to the direction of the electric field, since holes are positively charged and electrons are negatively charged.) The total charge of the electrons crossing a unit area of a semiconductor per second is qnvn, where n is the electron concentration in the conduction
band. The charge of the holes crossing a unit area of a semiconductor per second is qpvp, where p is the hole concentration in the valence band. Hence, using equations 6.10 and 6.11, we obtain the following expressions for the current density, j, and conductivity, σ, of a semiconductor:

    j = σF,    (6.12)

where

    σ = qμn n + qμp p.    (6.13)

Often, the hole contribution to the conductivity of an n-type semiconductor can be neglected because the hole concentration is many orders of magnitude smaller than the electron concentration. Likewise, the electron contribution to the conductivity of a p-type semiconductor can often be neglected. Equation 6.12 is called Ohm's law. We can rewrite this equation in a more familiar form:

    I = V/R,    (6.14)

where I = jS is the total current, S is the sample cross-section, V = FL is the potential difference across the sample (here a constant electric field is assumed), and L is the sample length. The sample resistance is:

    R = L / (σS).    (6.15)

Remember that, in semiconductors, Ohm's law is only valid when the electric field is weak, which is rarely the case in modern semiconductor devices. To the first order, equation 6.10 can be interpreted by using the second law of motion for an electron in the conduction band:

    mn dvn/dt = qF − mn vn/τnp.    (6.16)

The first term on the right-hand side of equation 6.16 represents the force from the electric field. The second term represents the loss of the electron momentum, mn vn, due to electron collisions with impurities and/or lattice vibrations; τnp is called the momentum relaxation time. From equation 6.16, in a steady state:

    vn = μn F,

where

    μn = qτnp / mn.    (6.17)

τnp is an average time between the random collisions interrupting these free electron flights. The average distance that an electron travels between two collisions is called the mean free path. In relatively weak electric fields, when the electron drift velocity is much smaller than the thermal velocity, the mean free path is given by:

    ln = vthn τnp.    (6.18)

The momentum relaxation time, τnp, can be approximately expressed as:

    1/τnp = 1/τii + 1/τni + 1/τlattice + ...,    (6.19)

where the terms on the right-hand side represent momentum relaxation times due to different scattering processes, such as ionized impurity scattering (τii), neutral impurity scattering (τni), and lattice vibration scattering (τlattice). In a low electric field, τnp and μn are independent of the electric field, and the electron drift velocity, vn, is proportional to the electric field. However, in high electric fields, when electrons may gain considerable energy from the electric field, τnp and, in certain cases, mn become strong functions of the electron energy and, hence, of the electric field. In a high electric field, electrons gain energy from the field, and their average energy exceeds the thermal energy (they become "hot"). Hot electrons transfer energy into thermal vibrations of the crystal lattice. Such vibrations can be modeled as harmonic oscillations with a certain frequency, ωl. The energy levels of a harmonic oscillator are equidistant, with the energy difference between the levels equal to El = ħωl. Hence, the scattering process for a hot electron can be represented as follows. The electron accelerates in the electric field until it gains enough energy to excite lattice vibrations:

    mn vnmax² / 2 = En − Eo ≈ ħωl,    (6.20)

where vnmax is the maximum electron drift velocity. Then the scattering process occurs, and the electron loses all the excess energy and all the drift velocity. Hence, the electron drift velocity varies between zero and vnmax, and the average electron drift velocity (vn = vnmax/2) becomes nearly independent of the electric field:

    vn = [ħωl / (2mn)]^(1/2) = vsn.    (6.21)

vsn is called the electron saturation velocity. Figure 6.9 shows the dependencies of the electron drift velocity on the electric field for several semiconductors.
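Equations 6.12 through 6.15 chain together naturally. A sketch computing the conductivity and resistance of a hypothetical n-type Si bar (the doping and dimensions are illustrative, not from the text; the mobility is the Table 6.1 value converted to SI units):

```python
Q = 1.602e-19  # elementary charge, C

def conductivity(n, mu_n, p=0.0, mu_p=0.0):
    """Equation 6.13: sigma = q*mu_n*n + q*mu_p*p (SI units)."""
    return Q * (mu_n * n + mu_p * p)

def resistance(sigma, L, S):
    """Equation 6.15: R = L / (sigma * S)."""
    return L / (sigma * S)

# n-type Si bar: n = 1e22 m^-3 (1e16 cm^-3), mu_n = 0.135 m^2/Vs (1350 cm^2/Vs);
# the hole contribution is neglected, as the text suggests for n-type material.
sigma = conductivity(n=1e22, mu_n=0.135)
R = resistance(sigma, L=1e-3, S=1e-6)   # 1 mm long, 1 mm^2 cross-section
print(f"sigma = {sigma:.3g} S/m, R = {R:.3g} ohm")
```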
FIGURE 6.9 Electron Velocity Versus Electric Field for Several Semiconductors (Si, GaAs, InP, and InGaAs) at T = 300 K. Figure adapted from Shur (1996).

As can be seen from Figure 6.9, in Si, the drift velocity saturates in high electric fields, as expected. In many compound semiconductors, such as GaAs, InP, and InGaAs, the electron velocity decreases with the electric field in a certain range of high electric fields. (In this range of electric fields, the electron differential mobility, μdn = dvn/dF, is negative.) In these semiconductors, the central valley of the conduction band (Γ) is the lowest, as shown in Figure 6.8. In high electric fields, hot electrons acquire enough energy to transfer from the central valley of the conduction band (where the effective mass and, hence, the density of states is relatively small) into the satellite valleys, where electrons have a higher effective mass and, hence, a larger density of states but a smaller drift velocity. As can be seen from Figure 6.9, compound semiconductors have a potential for a higher speed of operation than silicon because electrons in these materials may move faster. The negative differential mobility observed in many compound semiconductors can be used for generating microwave oscillations at very high frequencies.

Modern semiconductor devices are often so small that their dimensions are comparable to or even smaller than the electron mean free path. In this case, electrons may acquire higher velocities than those shown in Figure 6.9, since transient effects associated with the acceleration of carriers become important. In the limiting case of very short devices, the electron transit time may become so small that most electrons do not experience any collisions during the transit. Such a mode of electron transport is called ballistic transport. Figure 6.10 shows the dependence of the electron and hole mobilities in Si on temperature for different electron concentrations.

6.3.1 Impact Ionization

In high electric fields, electron-hole pairs in a semiconductor are often generated by impact ionization. In this process, an electron (or a hole) acquires enough energy from the electric field to break a bond and promote another electron from the valence band into the conduction band. The electron impact ionization generation rate, Gni, is proportional to the electron concentration, n, to the electron velocity, vn, and to the impact ionization coefficient, αni:

    Gni = αni n vn,    (6.22)

where αni is given by the following empirical expression:

    αni = αno exp[−(Fin/F)^min].    (6.23)

In a similar fashion, when the impact ionization is caused by holes:

    Gpi = αpi p vp,    (6.24)

    αpi = αpo exp[−(Fip/F)^mip].    (6.25)

Here F is the electric field, and αno, αpo, Fin, Fip, min, and mip are constants that depend on the semiconductor material and, in certain cases, even on the direction of the electric field. The impact ionization coefficients for Si calculated by using
TABLE 6.1 The Parameters of Some Important Semiconductors

Material   a [Å]   εr     ρ [g/cm³]   Eg [eV]   mn [me]   mp [me]   μn [cm²/Vs]   μp [cm²/Vs]
Si         5.43    11.8   2.33        1.12      1.08      0.56      1350          480
Ge         5.66    16.0   5.32        0.67      0.55      0.37      3900          1900
GaAs       5.65    13.2   5.31        1.42      0.067     0.48      8500          400
InP        5.87    12.1   4.79        1.35      0.080     –         4000          100

Note: a: lattice constant; εr: dielectric constant; ρ: density; Eg: energy band gap; mn: electron effective mass (density-of-states effective mass for Si and Ge and central valley effective mass for GaAs and InP); mp: hole effective mass (density-of-states effective mass); μn: electron mobility; μp: hole mobility. All values are at 300 K. Table adapted from Fjeldly et al. (1998).
equations 6.23 and 6.25 are shown in Figure 6.11.

FIGURE 6.10 Electron and Hole Mobilities. These mobilities are shown in (A) n-type and (B) p-type silicon, respectively, versus temperature for impurity concentrations from 10^15 to 10^18 cm^-3. Figure adapted from Fjeldly et al. (1998).

FIGURE 6.11 Impact Ionization Coefficients αi (electrons) and βi (holes) for Si Versus Electric Field; αno = 3,318 cm^-1, Fno = 1,174 kV/cm, min = 1. Figure adapted from Shur (1996).

6.3.2 Optical Properties

Generation of electron-hole pairs in a semiconductor can be achieved by illuminating a semiconductor sample with light with photon energies larger than the energy gap of the semiconductor. The light is partially reflected from the semiconductor surface and partially absorbed in the semiconductor (see Figure 6.12). The light intensity Pl (measured in W/m²) is given by:

    Pl = Po exp(−αx),    (6.26)

where Po is the light intensity at the surface. The distance 1/α is called the light penetration depth (see Figure 6.12(B)). If 1/α ≫ L, where L is the sample dimension, the generation rate of electron-hole pairs is nearly uniform in the sample. The absorption coefficient, α, is a strong function of the wavelength. (Typically, α is in the range from 10^2 to 10^6 cm^-1.) The number of generated electron-hole pairs is proportional to the number of absorbed photons. Since the energy of each photon is hν, the generation rate is given by:

    G = Qe α Pl / (hν),    (6.27)

where Qe is equal to the average number of electron-hole pairs produced by one photon. Qe is called the quantum efficiency.

FIGURE 6.12 Light Effects on Semiconductors. (A) Light Shining onto a Semiconductor Sample; (B) Relative Light Intensity Versus Distance for α = 10^5 cm^-1. Figure adapted from Shur (1996).
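Equations 6.26 and 6.27 can be evaluated directly. A sketch with illustrative values (the surface intensity and photon frequency are assumptions, not from the text; the absorption coefficient matches the 10^5 cm^-1 used in Figure 6.12):

```python
import math

H = 6.626e-34  # Planck constant, J*s

def intensity(P0, alpha, x):
    """Equation 6.26: light intensity at depth x, Pl = P0*exp(-alpha*x)."""
    return P0 * math.exp(-alpha * x)

def generation_rate(Qe, alpha, Pl, nu):
    """Equation 6.27: pair generation rate G = Qe*alpha*Pl/(h*nu)."""
    return Qe * alpha * Pl / (H * nu)

alpha = 1e7   # absorption coefficient, m^-1 (10^5 cm^-1, as in Figure 6.12)
P0 = 100.0    # assumed surface intensity, W/m^2
nu = 5e14     # assumed photon frequency, Hz (h*nu ~ 2.1 eV, above Eg of Si)
x = 1 / alpha  # one penetration depth

print(intensity(P0, alpha, x) / P0)   # relative intensity at x = 1/alpha is exp(-1)
print(f"G at surface: {generation_rate(1.0, alpha, P0, nu):.3g} m^-3 s^-1")
```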
FIGURE 6.13 Recombination Processes According to Electron-Hole Pair Generation. The figure shows (A) direct (band-to-band) radiative recombination (ħω = Eg), (B) radiative band-to-impurity recombination (ħω = Eg − Et), (C) nonradiative recombination via impurity (trap) levels, and (D) surface recombination. Figure adapted from Shur (1996).
For hν > Eg, the quantum efficiency Qe is often fairly close to unity. Under equilibrium conditions, G = R, where R is the recombination rate; that is, the generation of electron-hole pairs is balanced by different recombination processes, which include direct (band-to-band) radiative recombination, radiative band-to-impurity recombination, nonradiative recombination via impurity (trap) levels, Auger recombination, and surface recombination. These processes (except for the Auger recombination discussed below) are schematically shown in Figure 6.13. In the radiative recombination processes, recombining electron-hole pairs emit light, and the energy released during the recombination process is transformed into the energy of photons. Since, during band-to-band recombination, the energy released by each recombining electron-hole pair is either equal to the energy gap or slightly higher, the frequency of the emitted radiation can be estimated as:

    hν = Eg.    (6.28)
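Equation 6.28 fixes the emission wavelength of a band-to-band transition, λ = hc/Eg, which is worth evaluating for the Table 6.1 band gaps (a sketch; the physical constants are standard values):

```python
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
Q = 1.602e-19    # elementary charge (converts eV to J), C

def emission_wavelength_nm(Eg_eV):
    """Equation 6.28 (h*nu = Eg) rewritten as lambda = h*c/Eg, in nm."""
    return H * C / (Eg_eV * Q) * 1e9

for name, Eg in [("Si", 1.12), ("GaAs", 1.42), ("InP", 1.35)]:
    print(f"{name}: Eg = {Eg} eV -> lambda ~ {emission_wavelength_nm(Eg):.0f} nm")
```

GaAs comes out near 870 nm, which is why GaAs-based emitters work in the near infrared.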
The radiative band-to-band recombination rate is proportional to the np product:

R = A(np − ni²),    (6.29)
since both electrons and holes are required for band-to-band recombination (just as the number of marriages in a city is proportional both to the number of eligible men and to the number of eligible women). In practical light-emitting semiconductor devices, radiative band-to-impurity recombination may be more important than radiative band-to-band recombination. The radiative band-to-impurity recombination rate is given by:

R = p/τr,    (6.30)

where p is the concentration of the electron-hole pairs, τr = 1/(Br Nt), Nt is the concentration of impurities involved in this radiative recombination process, and Br is a constant. In many cases, the dominant recombination mechanism is recombination via traps, especially in indirect semiconductors such as silicon. Shockley and Read were the first to derive the famous (but somewhat long) expression for the recombination rate related to traps:

R = (pn − ni²) / [τpl(n + nl) + τnl(p + pl)].    (6.31)
Here, ni is the intrinsic carrier concentration, and τnl, τpl are the electron and hole lifetimes given by:

τnl = 1/(vthn σn Nt)  and  τpl = 1/(vthp σp Nt),    (6.32)
where vthn and vthp are the electron and hole thermal velocities, σn and σp are the effective capture cross-sections for electrons and holes, and Nt is the trap concentration. The carrier concentrations corresponding to the position of the Fermi level coinciding with the trap level can be written as follows:

nl = Nc exp[(Et − Ec)/(kB T)]  and  pl = Nv exp[(Ev − Et)/(kB T)].    (6.33)
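The Shockley-Read expression of equation 6.31 and its low-level-injection limit (equation 6.34 below) can be checked numerically. The sketch uses illustrative Si numbers; the trap parameters (a near-mid-gap trap with nl ≈ pl ≈ ni, equal lifetimes) are assumptions for the example, not values from the text:

```python
# Shockley-Read recombination via traps, equation 6.31, and its
# low-level-injection limit, equation 6.34. Illustrative p-type Si numbers;
# the trap is assumed near mid-gap (n_l ~ p_l ~ ni) with equal lifetimes.
ni = 1.45e10                    # intrinsic concentration, cm^-3
Na = 1e16                       # acceptor doping, cm^-3
tau_nl = tau_pl = 1e-6          # electron/hole lifetimes, s
n_l = p_l = 1e10                # trap-level concentrations, cm^-3

def srh_rate(n, p):
    """Recombination rate of equation 6.31, cm^-3 s^-1."""
    return (p * n - ni**2) / (tau_pl * (n + n_l) + tau_nl * (p + p_l))

n_po = ni**2 / Na               # equilibrium minority electron concentration
n_p = n_po + 1e12               # inject a small excess of electrons
R_full = srh_rate(n_p, Na)      # full expression
R_low = (n_p - n_po) / tau_nl   # equation 6.34
print(R_full, R_low)            # agree to within ~0.1% in this regime
```

Because p ≈ Na dominates the denominator, the full expression collapses onto the simple minority-carrier form, which is exactly the reduction the text performs next.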
Here, Nc and Nv are the effective densities of states in the conduction and valence bands, respectively, and Et is the energy of the trap level. When electrons are minority carriers (n = np ≪ p ≈ Na = pp, where Na is the concentration of shallow ionized acceptors, p ≫ pl, p ≫ nl), equation 6.31 reduces to:

R = (np − npo)/τnl,    (6.34)

where npo = ni²/Na. When holes are minority carriers (p = pn ≪ n ≈ Nd = nn, where Nd is the concentration of shallow ionized donors, n ≫ nl, n ≫ pl):

R = (pn − pno)/τpl,    (6.35)
where pno = ni²/Nd. The electron lifetime, τnl, in p-type silicon and the hole lifetime, τpl, in n-type silicon decrease with increasing doping. At low doping levels, this decrease may be explained by higher trap concentrations in the doped semiconductor. We can crudely estimate electron and hole lifetimes in Si as follows:

τnl(s) = 10¹²/Na(cm⁻³)  and  τpl(s) = 3 × 10¹²/Nd(cm⁻³).    (6.36)

At high doping levels, however, the lifetimes decrease faster than the inverse doping concentration. The reason is that a different recombination mechanism, called Auger recombination, becomes important. In an Auger process, an electron and a hole recombine without involving trap levels, and the released energy (of the order of the energy gap) is transferred to another carrier (a hole in p-type material and an electron in n-type material). Auger recombination is the inverse of impact ionization, where energetic carriers cause the generation of electron-hole pairs. Since two electrons (in n-type material) or two holes (in p-type material) are involved in the Auger recombination process, the recombination lifetime associated with this process is inversely proportional to the square of the majority carrier concentration; that is:

τnl = 1/(Gp Na²)  and  τpl = 1/(Gn Nd²),    (6.37)

for p-type material and for n-type material, respectively, where Gp = 9.9 × 10⁻³² cm⁶/s and Gn = 2.28 × 10⁻³¹ cm⁶/s for silicon.

As stated previously, in many cases the recombination rate can be presented as R = (pn − pno)/τpl (in n-type material, in which holes are minority carriers). In this case, the diffusion of the minority carriers can be described by the diffusion equation:

Dp ∂²pn/∂x² − (pn − pno)/τpl = 0.    (6.38)

The solution of this second-order linear differential equation is given by:

Δp = A exp(x/Lp) + B exp(−x/Lp),    (6.39)

where Δp = pn − pno, A and B are constants to be determined from the boundary conditions, and

Lp = √(Dp τpl)    (6.40)

is called the hole diffusion length. A similar diffusion equation applies to electrons in p-type material, where electrons are minority carriers. The electron diffusion length is then given by:

Ln = √(Dn τnl).    (6.41)
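The crossover between trap-limited and Auger-limited lifetimes (equations 6.36 and 6.37) and the resulting diffusion length (equation 6.41) can be sketched as follows; the electron diffusivity Dn is an assumed textbook value for lightly doped Si, not from the text:

```python
import math

# Crude Si minority-carrier lifetimes: trap-limited (equation 6.36)
# vs. Auger-limited (equation 6.37), for p-type material.
Gp = 9.9e-32                    # Auger coefficient, cm^6/s

def tau_trap(Na):
    """Trap-limited electron lifetime in p-type Si, s (equation 6.36)."""
    return 1e12 / Na

def tau_auger(Na):
    """Auger-limited electron lifetime, s (equation 6.37)."""
    return 1.0 / (Gp * Na**2)

for Na in (1e16, 1e18, 3e19):   # doping, cm^-3
    print(f"Na = {Na:.0e}: trap {tau_trap(Na):.1e} s, Auger {tau_auger(Na):.1e} s")
# Traps dominate at low doping; above roughly 1e19 cm^-3 Auger takes over,
# which is the "faster than inverse doping" behavior described in the text.

# Electron diffusion length, equation 6.41: Ln = sqrt(Dn * tau_nl)
Dn = 36.0                       # assumed electron diffusivity, cm^2/s
Ln_um = math.sqrt(Dn * min(tau_trap(1e16), tau_auger(1e16))) * 1e4
print(f"Ln at Na = 1e16 cm^-3: {Ln_um:.0f} um")
```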
6.4 Important Semiconductor Materials

Today, most semiconductor devices are made of silicon. However, submicron devices made of compound semiconductors, such as gallium arsenide or indium phosphide, successfully compete for applications in microwave and ultra-fast digital circuits. Other semiconductors, such as mercury cadmium telluride, are utilized in infrared detectors. Silicon carbide, aluminum nitride, and gallium nitride promise to be suitable for power devices operating at elevated temperatures and in harsh environments. Hydrogenated amorphous silicon and related compounds have found important applications in the display industry and in photovoltaics. Ternary compounds, quaternary compounds, and heterostructure materials provide another important dimension to semiconductor technology. Developing new semiconductor materials and exploiting their unique properties in the electronics industry is a major challenge that will persist for many years to come.
References

Fjeldly, T., Ytterdal, T., and Shur, M.S. (1998). Introduction to device and circuit modeling for VLSI. New York: John Wiley & Sons.
Levinshtein, M.E., and Simin, G.S. (1992). Getting to know semiconductors. Singapore: World Scientific.
Pierret, R.F. (1988). Semiconductor fundamentals (2d ed.), Modular series on solid state devices, Vol. 1. Reading, MA: Addison-Wesley.
Shur, M.S. (1996). Introduction to electronic devices. New York: John Wiley & Sons.
Streetman, B.G. (1995). Solid state electronic devices (4th ed.). Englewood Cliffs, NJ: Prentice Hall.
Wolfe, C.M., Holonyak, N.J., and Stillman, G.E. (1989). Physical properties of semiconductors. Englewood Cliffs, NJ: Prentice Hall.
7 Power Semiconductor Devices

Maylay Trivedi and Krishna Shenai
Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
7.1 Introduction ..................................................................................... 163
7.2 Breakdown Voltage ............................................................................ 163
7.3 P-i-N Diode ..................................................................................... 164
    7.3.1 Switching Characteristics
7.4 Schottky Diode ................................................................................. 165
7.5 Power Bipolar Transistor .................................................................... 166
    7.5.1 Switching Characteristics
7.6 Thyristor ......................................................................................... 167
    7.6.1 Transient Operation
7.7 Gate Turn-Off Thyristor ..................................................................... 169
7.8 Metal-Oxide-Semiconductor Field Effect Transistor ................................ 169
    7.8.1 Switching Performance . 7.8.2 Limits of Operation . 7.8.3 Improved Structures
7.9 Insulated Gate Bipolar Transistor ......................................................... 172
    7.9.1 Transient Operation
7.10 Other MOS-Gate Devices .................................................................. 174
7.11 Smart Power Technologies ................................................................. 175
7.12 Other Material Technologies .............................................................. 175
Bibliography ......................................................................................... 176
7.1 Introduction
Power semiconductor devices are used as switches in power electronics applications. Ideal switches block arbitrarily large forward and reverse voltages with zero current flow in the off-state, conduct arbitrarily large currents with zero voltage drop in the on-state, and have negligible switching time and power loss. Material and design limitations prevent semiconductor devices from operating as ideal switches, and it is important to understand the operation of these devices to determine how far the device characteristics can be idealized. Available semiconductor devices are either controllable or uncontrollable. In an uncontrollable device, such as the diode, the on- and off-states are controlled by circuit conditions. Devices like BJTs, MOSFETs, and their combinations can be turned on and off by control signals and, hence, are controllable devices. Thyristors belong to an intermediate category in which latch-on is determined by a control signal while turn-off is governed by external circuit conditions. This chapter presents a summary of the design equations, terminal characteristics, and circuit performance of contemporary and emerging power devices.

7.2 Breakdown Voltage

Figure 7.1(A) shows the cross-section of a typical power diode. A p-n junction is formed when an n-type region in a semiconductor crystal abuts a p-type region in the same crystal. The rectification properties of the device result from the presence of two types of carriers in a semiconductor: electrons and holes. Power devices are required to block high voltage when the anode is biased negative with respect to the cathode. Application of a reverse bias removes carriers from the junction and creates a depletion region devoid of majority carriers. The width of the depletion layer depends on the applied bias, VR, and the doping of the region, ND, as:
WD = √[2ε(VR + Vbi)/(qND)],    (7.1)
where Vbi is the built-in potential. A high electric field is formed in the depletion layer. This field rises as the reverse voltage is increased until a critical electric field, EC, is reached,
at which point the device breaks down. The critical electric field for Si is roughly 2 × 10⁵ V/cm. A nonpunchthrough diode has drift region doping (ND) and width (WD,pp) parameters such that the critical electric field is reached before the entire region is depleted. In such cases, the breakdown voltage, VBD, and the depletion width at breakdown, WD,pp, for a given doping level are expressed as:

VBD = 5.34 × 10¹³ ND^(−3/4)    (7.2a)

and

WD,pp = 2.67 × 10¹⁰ ND^(−7/8).    (7.2b)

FIGURE 7.1 Typical Power Diode (P-i-N): (A) Cross-Section; (B) Circuit Representation

In some power devices, it is preferable to use a punchthrough structure to support the voltage. In this case, the entire drift region is depleted at the time of breakdown. The breakdown voltage of a punchthrough device with a certain doping (ND) and width WD (< WD,pp) is given as:

VBD = EC WD − qND WD²/(2ε).    (7.3)

The thickness, WD, of the punchthrough structure is smaller than that of the nonpunchthrough device with the same breakdown voltage. A thinner drift region is preferable for bipolar power devices, such as P-i-N diodes operating under high-level injection conditions.

Planar diodes are formed by implantation and diffusion of impurities through a rectangular diffusion window. A cylindrical junction is formed at the straight edges of the diffusion window, and a spherical junction is formed at each of the corners. The breakdown voltage of practical devices can be significantly lower than the ideal value because of the high electric fields that occur at the device edges. Relative to the parallel-plane breakdown voltage, VBD,PP, the breakdown voltages of cylindrical (VBD,CYL) and spherical (VBD,SP) junctions with a junction depth xj are given by the following two equations:

VBD,CYL/VBD,PP = (1/2)[(xj/WD)² + 2(xj/WD)^(6/7)] ln[1 + 2(WD/xj)^(8/7)] − (xj/WD)^(6/7),    (7.4a)

VBD,SP/VBD,PP = (xj/WD)² + 2.14(xj/WD)^(6/7) − [(xj/WD)³ + 3(xj/WD)^(13/7)]^(2/3).    (7.4b)

Example 7.1. The asymmetrically doped p+n diode shown in Figure 7.1 should be designed to support a reverse voltage of 500 V by forming a p+ junction 5 µm deep into the n-type material. For avalanche breakdown at the parallel-plane junction, the drift region doping and maximum depletion width at breakdown are obtained from equations 7.2a and 7.2b as 5 × 10¹⁴ cm⁻³ and 36 µm, respectively. Because the p+ junction is 5 µm deep into the n-type drift region, the diode forms a cylindrical junction at the edge, with a radius of 5 µm. This gives a ratio xj/WD of 0.14 with the ideal parallel-plane breakdown depletion width. Then, using equation 7.4a, the finite device has a cylindrical breakdown voltage that is 40% of the ideal parallel-plane breakdown, or about 200 V.

Special device termination structures have been devised to reduce the electric field strength at the device edge and to raise the breakdown voltage close to the ideal value. These include floating field rings, field plates, bevelled edges, ion-implanted edge terminations, and so forth.

7.3 P-i-N Diode
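Example 7.1 can be reproduced directly from the empirical fits of equations 7.2a, 7.2b, and 7.4a. A minimal sketch in plain Python (Si only; doping in cm⁻³, lengths in cm):

```python
import math

# Avalanche breakdown design fits for Si, equations 7.2a, 7.2b, and 7.4a.
def vbd_pp(Nd):
    """Parallel-plane breakdown voltage, V; Nd in cm^-3 (equation 7.2a)."""
    return 5.34e13 * Nd ** (-3.0 / 4.0)

def wd_pp(Nd):
    """Depletion width at breakdown, cm (equation 7.2b)."""
    return 2.67e10 * Nd ** (-7.0 / 8.0)

def cyl_factor(xj, wd):
    """VBD,CYL / VBD,PP for a cylindrical edge junction (equation 7.4a)."""
    r = xj / wd
    return (0.5 * (r**2 + 2 * r ** (6.0 / 7.0))
            * math.log(1 + 2 * (1 / r) ** (8.0 / 7.0)) - r ** (6.0 / 7.0))

Nd = 5e14                        # drift doping chosen in Example 7.1
print(f"VBD,pp = {vbd_pp(Nd):.0f} V")         # ~505 V
print(f"WD,pp  = {wd_pp(Nd) * 1e4:.0f} um")   # ~37 um
frac = cyl_factor(5e-4, wd_pp(Nd))            # 5-um-deep edge junction
print(f"edge-limited breakdown: {100 * frac:.0f}% of ideal")  # ~40%
```

The computed values match the 500 V / 36 µm / 40% figures quoted in the example to within rounding.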
The P-i-N diode has a punchthrough structure as described earlier, with an n+ region at the end of the drift region opposite the rectifying junction. The n+ region provides an ohmic cathode contact and injects electrons into the drift region under high-level injection. When the diode is forward biased, the p+ anode injects minority holes into the drift region, resulting in current flow. The forward bias current flow is determined by recombination of the minority carriers injected into the drift region. The device conducts in the low-level injection regime when the injected minority carrier density is
much lower than background doping. The magnitude of this current is written as:

JF = (q Dp pon/Lp)(e^(qVF/kT) − 1),    (7.5)

where VF is the voltage drop across the rectifying junction. Away from the junction in the drift region, current flow is dominated by majority carriers under the influence of an electric field. This causes a resistive voltage drop. During on-state current flow, when the injected minority carrier density exceeds the background doping concentration, the device is said to be conducting in high-level injection. The current flow in high-level injection is given by:

JF = (2q La ni/τa) tanh[WD/(2La)] (e^(qVF/2kT) − 1),    (7.6)

where τa and La are the ambipolar carrier lifetime and diffusion length, respectively.

7.3.1 Switching Characteristics

A power diode requires a finite time to switch from the blocking state (reverse bias) to the on-state (forward bias) and vice versa because of the finite time required to establish and remove the steady-state carrier distribution in the drift region. During turn-on, the terminal voltage, VA, first rises to a value VFP that is much higher than the static forward drop before decaying to the steady-state value, VF, determined by the static characteristics. This process is called forward recovery, and VFP is known as the forward recovery voltage. When the diode is turned off, the current changes at a rate dI/dt determined by the external circuit conditions. The diode current displays a negative peak before decaying to zero; correspondingly, the device voltage rises to the off-state voltage, VR. The process of negative current flow during device turn-off is referred to as the reverse recovery process. The current Irr is the reverse recovery current peak, and trr is the reverse recovery time. These quantities are expressed as:

Irr = √[2Qrr (diR/dt)/(S + 1)]    (7.7a)

and

trr = √[2Qrr (S + 1)/(diR/dt)],    (7.7b)

where S = tb/ta is the "snappiness" factor defined in Figure 7.2. The variable Qrr represents the reverse charge extracted out of the diode and is indicated by the shaded area in Figure 7.2. The extracted charge, Qrr, is a fraction of the total charge QF (= τF IF) stored in the diode during forward bias.

FIGURE 7.2 Reverse and Forward Biases of a Power Diode: (A) Forward I-V; (B) Reverse Recovery of the P-i-N Diode

Example 7.2. The diode designed in Example 7.1 has a high-level carrier lifetime of 500 ns. At a current density of 100 A/cm², assuming conduction in high-level injection, the voltage drop across the diode is roughly given by equation 7.6, where La = √(Da τa). The ambipolar diffusion constant, Da, for a diode at room temperature is 9 cm²/s. Then, for a lifetime of 500 ns, the ambipolar diffusion length, La, is 21 µm, and the forward voltage of the diode is 0.82 V.

This diode is turned off from an on-state current density of 100 A/cm² with a turn-off dJ/dt of 100 A/cm²·µs. Assuming that all the excess charge is removed from the device, Qrr ≈ QF = τF JF. Thus, a charge of 50 µC/cm² is removed from the device. For a diode with S = 1, the reverse recovery current has a magnitude of 71 A/cm² and reaches its peak about 0.7 µs after the current crosses zero.
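A sketch reproducing Example 7.2 from equations 7.6, 7.7a, and 7.7b; q, kT/q, and ni are assumed room-temperature Si values not restated in the example:

```python
import math

# P-i-N forward drop in high-level injection (equation 7.6) and reverse
# recovery (equations 7.7a/7.7b) for the diode of Example 7.2.
q = 1.6e-19                     # electron charge, C
kT_q = 0.0259                   # thermal voltage, V
ni = 1.45e10                    # intrinsic concentration of Si, cm^-3
Da, tau_a = 9.0, 500e-9         # ambipolar diffusivity (cm^2/s) and lifetime (s)
WD = 36e-4                      # drift region width, cm
La = math.sqrt(Da * tau_a)      # ambipolar diffusion length (~21 um)

JF = 100.0                      # on-state current density, A/cm^2
J0 = (2 * q * La * ni / tau_a) * math.tanh(WD / (2 * La))
VF = 2 * kT_q * math.log(JF / J0 + 1)           # invert equation 7.6
print(f"La = {La*1e4:.0f} um, VF = {VF:.2f} V")  # ~21 um, ~0.82 V

# Turn-off with dJ/dt = 100 A/cm^2-us and snappiness S = 1:
Qrr = tau_a * JF                # extracted charge, ~50 uC/cm^2
dJdt = 100.0 / 1e-6             # A/cm^2-s
S = 1.0
Irr = math.sqrt(2 * Qrr * dJdt / (S + 1))       # equation 7.7a
trr = math.sqrt(2 * Qrr * (S + 1) / dJdt)       # equation 7.7b
ta = Irr / dJdt                 # time from zero-crossing to the peak
print(f"Irr = {Irr:.0f} A/cm^2, ta = {ta*1e6:.2f} us, trr = {trr*1e6:.2f} us")
```

The peak of 71 A/cm² is reached about 0.7 µs after zero-crossing; the full recovery time trr from equation 7.7b is twice that for S = 1.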
7.4 Schottky Diode

A P-i-N diode displays large reverse recovery transients because of the charge accumulated in the drift region by bipolar current flow. A majority carrier diode may instead be formed by making a rectifying junction between a metal and the semiconductor. Such a device, shown in Figure 7.3, is referred to as a Schottky diode. The rectifying characteristics of a Schottky diode are similar to those of a P-i-N diode, although the fundamental physics of operation is quite different. The forward current of a Schottky diode is expressed as:

JF = A* T² e^(−qφB0/kT) (e^(qVF/kT) − 1),    (7.8)

where A* is the effective Richardson constant and φB0 is the metal-semiconductor barrier height, defined as the difference between the metal and semiconductor work functions. Current flow is dominated by majority carriers. The Schottky diode has negligible stored charge in the drift region because of the lack of minority carriers, so the switching transients do not include forward and reverse recovery. A finite recovery transient is observed because of the capacitance associated with the depletion region. However, the reverse leakage current is considerably higher than for P-i-N diodes. Furthermore, at higher voltage ratings, the Schottky diode has a considerable voltage drop in the drift region, expressed as:

Vdr = JF WD/(q µn ND).    (7.9)

FIGURE 7.3 A Schottky Diode: (A) Cross-Section; (B) Typical Reverse Recovery

Example 7.3. A Schottky diode is designed with the same drift region parameters indicated in Example 7.1. The Schottky contact has a barrier height of 0.8 eV. Conduction in a Schottky diode takes place by majority carriers. The forward voltage drop of the diode is obtained from equations 7.8 and 7.9 as:

VF = (kT/q) ln[JF/(A* T² e^(−qφB0/kT))] + Rs JF = 0.49 + 3.1 = 3.59 V,

where Rs = WD/(q µn ND) is the specific series resistance of the drift region and A* = 146 A/cm²·K² for Si. A 200-V Schottky diode requires a drift region with a doping of 1.7 × 10¹⁵ cm⁻³ and a thickness of 12.5 µm. For a barrier height identical to the previous case, the forward voltage of this diode is VF = 0.49 + 0.32 = 0.81 V.

The voltage drop across the series resistance of a Schottky diode rises rapidly with the breakdown voltage rating. This restricts the operating voltage range of Schottky diodes, even as reverse recovery limits high-frequency operation of P-i-N diodes. Hybrid devices such as the junction barrier Schottky diode and merged P-i-N-Schottky diodes have been developed to expand the operating range of power diodes by taking advantage of the properties of both Schottky and P-i-N diodes.

7.5 Power Bipolar Transistor

The vertical power bipolar junction transistor (BJT) has a four-layer structure of alternating p-type and n-type doping, as shown in Figure 7.4. The transistor has three terminals labeled collector, base, and emitter. The need for a large off-state blocking voltage and high on-state current-carrying capability is responsible for the changes in structure over the logic-level counterpart. The transistor is commonly operated in the common emitter mode. The steady-state static I-V characteristics of the power BJT are shown in Figure 7.5. The BJT is turned on by injecting current into the base terminal and is therefore a current-controlled device; the base current causes electrons to flow from the emitter to the collector. The maximum collector current for a given base current is determined by the current gain, IC,max = βIB, with:

IC = [q A Dn ni²/(WB NB)] exp(qVBE/kT).    (7.10)

FIGURE 7.4 Power Bipolar Junction Transistor: (A) Cross-Section; (B) Circuit Representation

FIGURE 7.5 Output Characteristics of Power BJT
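Example 7.3 follows from equations 7.8 and 7.9. In the sketch below, the electron mobility µn ≈ 1350 cm²/V-s is an assumed value (the text does not quote one), so the computed series drop comes out near 3.3 V rather than the quoted 3.1 V:

```python
import math

# Schottky forward drop, equations 7.8 and 7.9 (Example 7.3).
kT_q = 0.0259                   # thermal voltage, V
T = 300.0                       # temperature, K
A_star = 146.0                  # effective Richardson constant for Si, A/cm^2-K^2
phi_b = 0.8                     # barrier height, eV
q = 1.6e-19                     # electron charge, C
mu_n = 1350.0                   # assumed electron mobility, cm^2/V-s

Js = A_star * T**2 * math.exp(-phi_b / kT_q)    # saturation current density
JF = 100.0                                      # forward current density, A/cm^2

# Drift region of the 500-V design (Example 7.1): Nd = 5e14 cm^-3, WD = 36 um
Rs = 36e-4 / (q * mu_n * 5e14)                  # specific resistance, ohm-cm^2
V_junction = kT_q * math.log(JF / Js)
V_series = Rs * JF
print(f"VF = {V_junction:.2f} + {V_series:.1f} = {V_junction + V_series:.2f} V")
# The series term dominates at 500 V: this is why high-voltage unipolar
# Schottky diodes are impractical in silicon.
```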
The static output characteristics of the BJT have three distinct regimes of conduction. From its off-state, the transistor enters the forward active region when the base-emitter (B-E) junction is forward biased by a base current. With the base-collector (B-C) junction reverse biased, a collector current of magnitude βF IB flows through the device. For the same base current, if the collector voltage is reduced such that the B-C junction is barely forward biased, the device enters the quasisaturation mode of operation. The boundary for this transition is VCE = VBE,on + IC RD, where RD is the resistance of the drift region. Forward biasing the B-C junction results in minority carrier injection into the collector drift region. The increase in effective base width reduces the transistor current gain and, hence, the collector current. The device conducts in hard saturation when the entire collector drift region is in high-level injection. Any additional reduction in collector voltage drives the transistor further into hard saturation. The transistor has very little on-state voltage drop in hard saturation and quasisaturation and is operated in these regimes in practical applications.

Since BJTs are controlled by current, base current must be supplied continuously to maintain conduction, resulting in power loss. Power BJTs have a low current gain (β ≈ 5-20). The gain can be enhanced by connecting transistors in the Darlington configuration, as shown in Figure 7.6: the output of one transistor drives the base of another, resulting in a higher overall current gain. The added current capability, however, is accompanied by a deterioration in forward voltage drop and switching speed.
FIGURE 7.6 Darlington Configuration

7.5.1 Switching Characteristics

The clamped inductive load circuit, a typical switching application of a controllable power switch, is shown in Figure 7.7. The switching waveforms of the power BJT, used as a switch in this circuit, are shown in Figure 7.8. A large inductor represents a typical inductive load driven by the circuit; the inductor is approximated as a constant current source IO.

FIGURE 7.7 Clamped Inductive Load Circuit

For a given collector current, the base current should be high enough to ensure conduction in hard or quasisaturation. After an initial delay caused by the B-E capacitance, the charge in the base begins to build up, and the collector current rises to its on-state value with the voltage clamped to Vd by the diode. The voltage begins to fall rapidly once the device attains the on-state current. As charge is injected into the drift region, the gain of the transistor drops and slows the rate of fall of the voltage. This phase increases the overall device turn-on time.

The turn-off process involves removal of all the stored charge in the transistor. The base current is driven negative during turn-off to remove the excess charge rapidly. The base current reversal is usually achieved gradually, because abrupt reversal leads to a long current tail and thus increased switching loss. When the base current is reversed, the collector current remains constant for a storage time, ts, until the drift region comes out of high-level injection at one of its ends. After this interval, the device enters quasisaturation, and the voltage starts rising slowly. Once the charge distribution reduces to zero at the collector-base junction, the voltage rises rapidly until the diode clamps it to the bus voltage. The collector current then falls toward zero, and the device is cut off.

For higher current- or voltage-handling capability, devices can be connected in parallel or series. Connecting BJTs in parallel can lead to instabilities because of the negative temperature coefficient of the on-state resistance. During conduction, if the temperature of one device rises, its resistance falls, causing it to pass more current than the other transistors. This, in turn, leads to a further rise in current, eventually resulting in a thermal runaway condition that destroys the device.

7.6 Thyristor
FIGURE 7.8 Switching Waveforms of a Power BJT During Clamped Inductive Turn-Off

FIGURE 7.9 Diagram of a Thyristor: (A) Cross-Section; (B) Circuit Representation

The vertical cross-section and circuit symbol of a thyristor are shown in Figure 7.9. The thyristor is a three-junction p-n-p-n structure and may be viewed as two bipolar transistors connected back to back, with the collector current of each transistor providing the base current of the other. In the reverse direction, the thyristor operates similarly to a reverse-biased diode. When a forward polarity voltage is applied, the junction J2 (between the gate layer and the drift region) is reverse biased, and the thyristor is in the forward blocking mode. The device can be triggered into latch-up by injecting a hole current from the gate electrode. Taking ICO to be the leakage current of each transistor in the cut-off condition, the anode current can be expressed in terms of the gate current as:

IA = (α2 IG + ICO1 + ICO2)/[1 − (α1 + α2)],    (7.11)
where α is the common base current gain of each transistor (α = IC/IE). From equation 7.11, the anode current becomes arbitrarily large as (α1 + α2) approaches unity. As the anode-cathode voltage increases, the depletion region expands and reduces the neutral base widths of the n1 and p2 regions. This causes a corresponding increase in the α of the two transistors. If a positive gate current of sufficient magnitude is applied to the thyristor, a significant number of electrons will be injected across the forward-biased junction, J3, into the base of the n1-p2-n2 transistor. The resulting collector current provides base current to the p1-n1-p2 transistor. The combination of the positive feedback connection of the npn and pnp BJTs and the current-dependent base transport factors eventually turns the thyristor on by regenerative action. Among power semiconductor devices, the thyristor shows the lowest forward voltage drop at large current densities. The large current flow between the anode and cathode keeps both transistors in saturation, and gate control is lost once the thyristor latches on.
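The regenerative condition of equation 7.11 is easy to visualize numerically; the gains and leakage currents below are purely illustrative, not values from the text:

```python
# Thyristor turn-on by regenerative action, equation 7.11:
#   IA = (alpha2*IG + ICO1 + ICO2) / (1 - (alpha1 + alpha2))
def anode_current(IG, alpha1, alpha2, ICO1=1e-6, ICO2=1e-6):
    """Anode current of the two-transistor thyristor model, in amperes."""
    return (alpha2 * IG + ICO1 + ICO2) / (1.0 - (alpha1 + alpha2))

IG = 1e-3                                # gate drive, A
for a in (0.20, 0.40, 0.45, 0.49):       # alpha rises with anode-cathode bias
    print(f"alpha1 + alpha2 = {2*a:.2f}: IA = {anode_current(IG, a, a):.4f} A")
# As alpha1 + alpha2 approaches unity, IA grows without bound: the device
# latches, and gate control is lost.
```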
7.6.1 Transient Operation

When gate current is injected into the thyristor, a finite time is required for the regenerative action to turn the thyristor on (see Figure 7.10). A time delay is observed before the current starts rising. After the delay time, the anode current rapidly rises toward its on-state value at a rate limited only by external circuit elements, and the device voltage collapses. The finite time required for the charge plasma to spread through the device causes a tail in the on-state voltage.

The thyristor cannot be turned off by the gate; it turns off naturally when the anode current is forced to change direction. A thyristor exhibits turn-off reverse recovery characteristics just like a diode. Excess charge is removed once the current crosses zero and attains a negative value at a rate determined by external circuit elements. The reverse recovery peak is reached when either junction J1 or J3 becomes reverse biased. The reverse recovery current then starts decaying, and the anode-cathode voltage rapidly attains its off-state value. Because of the finite time required for spreading or collecting the charge plasma during the turn-on and turn-off stages, the maximum dI/dt and dV/dt that may be imposed across the device are limited in magnitude. Further, device manufacturers
FIGURE 7.10 Typical Characteristics of a Thyristor: (A) Static; (B) Transient
specify a circuit-commutated recovery time, tq , for the thyristor, which represents the minimum time for which the thyristor must remain in its reverse blocking mode before forward voltage is reapplied. Thyristors come in various forms depending on the application requirements. They can be made to handle large voltage and current with low on-state forward drop. Such thyristors typically find applications in circuits requiring rectification of line frequency voltage and currents. Faster recovery thyristors can be made at the expense of increased on-state voltage drop.
7.7 Gate Turn-Off Thyristor

The gate turn-off thyristor (GTO) adds gate-controlled turn-off capability to the thyristor. In a GTO, modifications are made to ensure that the base current of the transistor n1-p2-n2 can briefly be made less than the value needed to maintain saturation (IB2 < IC/β). This forces the transistor into the active region. The regenerative action then pulls both transistors out of saturation into the active region and eventually into turn-off. A GTO requires a complex interdigitated layout of the gate and cathode, and anode short structures further suppress regeneration. The turn-off capability of a GTO is thus gained at the expense of on-state performance. Other variations on the structure, such as the reverse-conducting thyristor (RCT), the asymmetrical silicon-controlled rectifier (ASCR), and the light-triggered thyristor (LTT), are applied in many modern power electronic applications.
7.8 Metal-Oxide-Semiconductor Field Effect Transistor

The power metal-oxide-semiconductor field effect transistor (MOSFET) is perhaps the most thoroughly investigated and optimized power semiconductor device. Figure 7.11 shows the cross-section of a vertical n-channel power MOSFET. The MOSFET is a voltage-controlled device, as opposed to the current-controlled BJT. In the off-state, the depletion layer expands in the drift region and supports the blocking voltage. As the depletion layer grows, it pinches off the region between the p-wells and isolates the gate oxide from the high voltage appearing at the drain. The channel region is formed by implantation and diffusion of p-type impurities through the window that also defines the source region; this p-type region isolates the source and the drain. Application of a gate voltage higher than a threshold value creates an inversion channel under the gate oxide that supports current flow from the source to the drain. For an ideal metal-oxide-semiconductor structure, the threshold voltage is expressed as:

VTH = √[4ε kT NA ln(NA/ni)]/(εox/tox) + (2kT/q) ln(NA/ni),    (7.12)

where NA is the doping of the p-body region. The maximum current supported per unit cell for a given gate voltage is limited by the threshold voltage and the channel length.

FIGURE 7.11 Power MOSFET: (A) Cross-Section; (B) Circuit Representation
Several unit cells of the kind shown in Figure 7.11 are connected in parallel to achieve the desired current rating. The cells are connected in a variety of geometrical layouts, such as stripes, squares, and hexagonal shapes. The source n+ region and p-body region are shorted. This creates a parasitic diode connected from the drain to the source of the MOSFET. This integral diode is useful in several power electronics applications, such as half-bridge and full-bridge PWM converters. The MOSFET is a majority carrier device. In the case of the n-channel MOSFET shown in Figure 7.11, the electrons traveling from source to drain have to overcome the resistance offered by the channel, the accumulation region at the surface, the neck or JFET region between the p-wells, the drift region, and the substrate, as indicated in Figure 7.12(A). The overall resistance of the device is determined by the contributions of the individual components. Thus, the following is true:

R_{on} = R_{ch} + R_{acc} + R_{neck} + R_{drift} + R_{sub}.   (7.13a)
The individual contributions may be expanded as:

R_{on} = \frac{L_{ch}}{2 Z C_{ox} \mu_{ch}(V_{GS} - V_{TH})} + \frac{L_G}{8 Z C_{ox} \mu_{acc}(V_{GS} - V_{TH})} + \frac{x_D}{q \mu_n Z L_G N_D} + \frac{\ln[1 + 2(W_D/L_G)\tan\alpha]}{q \mu_n Z N_D \tan\alpha} + \frac{W_{sub}}{q \mu_n Z L_{cell} N_{sub}}.   (7.13b)

The resistance of low-voltage MOSFETs is dominated by the resistance of the channel and neck regions. Higher voltage MOSFETs, however, have thicker drift regions with low doping. The resistance is then dominated by the drift region. This limits the application of MOSFETs at higher voltage levels. The I–V characteristics of the power MOSFET are shown in Figure 7.12(B). Increasing the gate voltage raises the saturation current level as given by:

I_{D,sat} = \mu_{ch} C_{ox} Z (V_{GS} - V_{TH})^2 / L_{ch}.   (7.14)

Example 7.4. A power MOSFET has a p-base region that is uniformly doped at a concentration of 1 × 10^17 cm^-3 with a gate oxide thickness of 1000 Å. Using equation 7.12, the threshold voltage of this device is 5.65 V at room temperature. A fixed charge density at the semiconductor–oxide interface introduces additional charge and shifts the threshold voltage of the device by:

\Delta V_{TH} = -\frac{Q_{ox}}{C_{ox}}.

Thus, a charge density of 1 × 10^11 cm^-2 would cause a shift of −0.47 V in the threshold voltage. The saturation current of the MOSFET is determined by the channel. For a MOSFET with a channel length of 2 μm, the channel width required to have a saturation current of 1 A at a gate voltage of 7 V can be obtained with the help of equation 7.14 to be 6.4 cm. The channel on-resistance is 0.68 Ω.
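The numbers in Example 7.4 can be spot-checked directly from equations 7.12 and 7.14. The sketch below is illustrative only: the channel mobility value (μ_ch = 500 cm²/V·s) and the room-temperature constants are assumptions of mine, not parameters given in the text, so the results land close to but not exactly on the quoted figures.

```python
import math

# Physical constants (cm-based units)
q = 1.602e-19               # electron charge, C
kT = 0.0259 * q             # thermal energy at 300 K, J
kT_q = 0.0259               # thermal voltage at 300 K, V
eps_ox = 3.9 * 8.854e-14    # SiO2 permittivity, F/cm
eps_si = 11.7 * 8.854e-14   # Si permittivity, F/cm
ni = 1.5e10                 # Si intrinsic carrier concentration, cm^-3

# Example 7.4 parameters
Nd = 1e17                   # p-base doping, cm^-3
tox = 1000e-8               # 1000 Angstrom gate oxide, in cm
Cox = eps_ox / tox          # oxide capacitance per unit area, F/cm^2

# Equation 7.12: ideal MOS threshold voltage
ln_term = math.log(Nd / ni)
Vth = math.sqrt(4 * eps_si * kT * Nd * ln_term) / Cox + 2 * kT_q * ln_term
print(f"V_TH = {Vth:.2f} V")          # close to the 5.65 V quoted in the example

# Threshold shift from fixed oxide charge: dV_TH = -Qox/Cox
Qox = q * 1e11                        # fixed charge density of 1e11 cm^-2
print(f"dV_TH = {-Qox / Cox:.2f} V")  # about -0.46 V

# Equation 7.14 solved for the channel width Z (mu_ch = 500 is an assumption)
mu_ch = 500.0               # inversion-layer mobility, cm^2/Vs
Lch = 2e-4                  # 2 um channel length, in cm
Id_sat, Vgs = 1.0, 7.0      # target saturation current and gate voltage
Z = Id_sat * Lch / (mu_ch * Cox * (Vgs - Vth) ** 2)
print(f"Z = {Z:.1f} cm")              # same order as the 6.4 cm quoted
```

With a different assumed mobility the computed width shifts proportionally, which is why the text's 6.4 cm is only approximately reproduced.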
7.8.1 Switching Performance

MOSFETs require continuous application of a gate-source voltage above the threshold voltage to remain in the on-state. The gate is isolated from the semiconductor by the oxide. Gate current flows only during transitions from on- to off-state and vice versa, when the gate capacitance charges or discharges. This implies reduced power loss in the control circuit and simple gate control. The switching transients are determined by the time required to charge and discharge various capacitances in the MOSFET structure. These capacitances appear where the gate oxide overlaps the semiconductor and as junction capacitances associated with the depletion layers, as indicated in Figure 7.12.
FIGURE 7.12 MOSFET Device: (A) Parasitics in a Power MOSFET; (B) Typical Output Characteristics
FIGURE 7.13 Switching Waveforms of Power MOSFET

Important switching waveforms of the MOSFET are indicated in Figure 7.13 for an inductive load, similar to the one in Figure 7.7. The gate pulse makes transitions between V_GP and V_GN through the gate resistor, R_G. Initially, the device is off. When the gate pulse goes high, the capacitors C_GD and C_GS charge exponentially toward V_GP through R_G. The MOSFET turns on when the gate-source voltage crosses V_TH. The device goes from the cut-off to the saturation region. As the gate voltage rises, the MOSFET current also increases, as determined by equation 7.14, until it is conducting I_O. The drain voltage remains at V_d as long as the free-wheeling diode is conducting. Once the MOSFET attains the current I_O, the drain current is clamped. Since the device is conducting in saturation, the gate-source voltage is also clamped. The capacitor C_GD now starts discharging through the gate resistor. The gate current is given as:

I_G = \frac{V_{GP} - V_{GS,on}}{R_G} = C_{GD}\frac{dV_{GD}}{dt} = -C_{GD}\frac{dV_{DS}}{dt},   (7.15)
where V_GS,on is the gate-source voltage required to maintain a drain current I_O in saturation. The gate voltage starts rising exponentially again when the drain voltage reduces to its on-state value. The turn-off of the MOSFET involves the inverse sequence of the events that occur during turn-on, and the turn-off waveforms are also shown in Figure 7.13. During the switching transient, the maximum power loss occurs during the crossover between voltage and current states. This power loss originates from the charging and discharging of capacitances. Since these capacitances do not vary with temperature, the switching power loss in a MOSFET is independent of temperature. The on-state resistance and, hence, the conduction loss, however, do vary with temperature.
Example 7.5. The MOSFET designed in example 7.4 has parasitic capacitances of C_ISS = 2.5 nF, C_RSS = 200 pF, and C_OSS = 600 pF. A gate pulse of 10 V is applied through a gate resistor of 10 Ω to turn the MOSFET on from an off-state voltage of 100 V to an on-state current of 500 mA. Based on example 7.4, the V_GS,on of the MOSFET at this current level is about 7 V. During the turn-on phase, the current rapidly rises to its on-state value. The device voltage then reduces from 100 V to the on-state value during the Miller plateau phase. According to equation 7.15, the time required for the device voltage to drop is 67 ns. Accordingly, the turn-on energy loss is as follows:

E_{on} = \frac{1}{2} I_{on} V_{Bus} t_{on} = 1.7 μJ.
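Example 7.5's timing number follows from equation 7.15 if C_GD is approximated by the fixed C_RSS value. The following sketch reproduces that arithmetic; treating C_GD as constant is an approximation, since in a real device it is strongly voltage dependent.

```python
# Turn-on timing sketch for Example 7.5, assuming C_GD ~ C_RSS = constant.
Vgp = 10.0      # gate drive voltage, V
Vgs_on = 7.0    # gate plateau voltage at the 500 mA level, V
Rg = 10.0       # gate resistance, ohm
Cgd = 200e-12   # reverse-transfer (Miller) capacitance, F
Vbus = 100.0    # off-state drain voltage, V
Ion = 0.5       # on-state current, A

# Equation 7.15: the gate current is constant during the Miller plateau
Ig = (Vgp - Vgs_on) / Rg          # 0.3 A of gate current

t_fall = Vbus * Cgd / Ig          # time for V_DS to fall through 100 V
Eon = 0.5 * Ion * Vbus * t_fall   # approximate turn-on switching energy
print(f"t = {t_fall * 1e9:.0f} ns, Eon = {Eon * 1e6:.1f} uJ")
# prints: t = 67 ns, Eon = 1.7 uJ
```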
7.8.2 Limits of Operation

Static breakdown of power MOSFETs occurs when the reverse voltage exceeds the static breakdown voltage. As explained by equation 7.13, the on-resistance of the power MOSFET increases rapidly with the device blocking voltage because of the increased drift region resistance. The dependence of resistance on breakdown voltage is nearly cubic (∝ V_BD^{2.5–2.7}). The on-resistance has a positive temperature coefficient because of a reduction in carrier mobility at higher temperature. This permits paralleling of MOSFETs without risk of instability. The device conducting higher current heats up and is forced to share its current equitably with the MOSFETs that are in parallel with it. The MOSFET structure has a parasitic npn bipolar transistor, as can be identified from Figure 7.11. The base of this transistor is shorted to the source of the MOSFET through a
172
Maylay Trivedi and Krishna Shenai
resistor that is the lateral resistance of the p-base. The equivalent circuit model of the MOSFET, including the parasitic transistor, is shown in Figure 7.14. Under rapid voltage transients at the drain, a displacement current is induced through the parasitic capacitances of the MOSFET along two paths, as shown. The current I1 may raise V_GS above V_TH, causing spurious turn-on of the MOSFET. Likewise, the current I2 may cause a resistive drop across R_B that is high enough to turn on the parasitic bipolar transistor. This places a limit on the maximum dV/dt across the MOSFET. Reliability of the MOSFET is also affected by power dissipation under transient conditions. This power dissipation leads to a temperature rise in the device. The temperature range of operation and the highest current and voltage stress are determined by the extent of device self-heating.

FIGURE 7.14 Circuit Representation of MOSFET with Parasitic Elements

7.8.3 Improved Structures
Apart from the channel resistance, the on-resistance of the MOSFET has contributions from the neck region between the p-wells and from the spreading resistance in the drift region. Various structures have been designed to reduce the on-resistance toward the ideal limit. Two notable device structures are the trench MOSFET and the lateral MOSFET, shown in Figure 7.15. The trench MOSFET ensures a nearly one-dimensional current flow and eliminates the neck region. This structure also achieves a much lower cell size and higher packing density. Two of the major markets where power MOSFETs are finding increasing use are portable electronics and automotive applications. In such applications, the power MOSFET must be integrated with other circuit elements for logic-controlled or analog applications. This requires all terminals to be available at the top surface for integration. The lateral MOSFET, shown in Figure 7.15(B), satisfies this requirement.
7.9 Insulated Gate Bipolar Transistor

Although power BJTs have low on-state resistance, their operation involves circuit complexity and significant losses in the control circuitry because of the current-controlled nature of device operation. On the other hand, MOSFETs involve relatively simple gate drive circuits but have significant losses during forward conduction at higher blocking voltages. Improved overall performance is achieved by merging the advantages of bipolar conduction with MOS-gate control. Several hybrid MOS-gate transistors have been introduced. Among
FIGURE 7.15 Cross-Sections of MOSFETs: (A) Trench MOSFET; (B) Lateral MOSFET
FIGURE 7.16 Insulated Gate Bipolar Transistor: (A) Cross-Section; (B) Circuit Representation
these devices, the insulated gate bipolar transistor (IGBT) is the most popular and widely used device. A vertical cross-section of the IGBT is shown in Figure 7.16(A). The structure is similar to a vertical diffused MOSFET. The main difference is the p+ layer at the back of the device that forms the collector of the device. This layer forms a pn junction, which injects minority carriers into the drift region under normal conduction. The structure at the top of the device is identical to a MOSFET in all respects, including processing and cell geometry. The forward blocking operation of the IGBT is identical to a power MOSFET. The presence of a rectifying junction at the backside also provides the device with reverse voltage blocking capability. A nonpunchthrough (NPT) drift region design has symmetrical forward and reverse blocking capability. The on-state operation of the IGBT can be understood by an equivalent representation of a MOSFET and a power diode connected in series. When a positive gate voltage is applied, an inversion layer is formed under the gate that facilitates the injection of electrons from the emitter into the drift region. Positive voltage at the collector simultaneously induces hole injection into the drift region from the p+ backside layer. Double injection into the drift region gives the device its excellent on-state properties. Just like a power diode, the drift region conductivity is modulated significantly during conduction, which helps reduce the on-state resistance compared to the power MOSFET. The NPT IGBT has a thick drift region to support the high voltage. The same voltage blocking capability can be achieved by use of a much thinner drift region if an n+ buffer layer is introduced between the drift region and the p+ substrate. These punchthrough (PT) IGBTs exhibit stronger conductivity modulation by virtue of a thinner drift region. The on-state characteristics of the IGBT are depicted in Figure 7.17.
For a given gate voltage, the saturation current is determined by the MOS channel, as expressed in equation 7.14. The device is operated in the linear region, where the contributions to the voltage drop are from the MOS channel, the drift region, and the backside junction drop:

V_{CE,on} = V_{pn} + V_{MOS} + V_{drift}.   (7.16)

Neglecting the voltage drop in the drift region because of conductivity modulation yields:

V_{CE,on} = \frac{kT}{q} \ln\left(\frac{I_C \tau}{q W L_{cell} Z n_i}\right) + \frac{I_C L_{ch}}{\mu_{ch} C_{ox} Z (V_{GS} - V_{TH})}.   (7.17)
A more detailed and accurate analysis can be done by treating the IGBT as a power BJT whose base current is provided by a MOSFET.

FIGURE 7.17 Output Characteristics of IGBT

A parasitic thyristor is formed in the IGBT by the junctions between the p+ substrate, n-type drift, p-base, and n+ emitter regions. Under normal operation, a fraction of the holes injected into the drift region by the p+ substrate are collected by the p-base. These holes travel laterally in the p-base before being collected at the emitter contact. The lateral voltage drop may be sufficient to turn on the p-base n+ emitter junction, eventually latching the parasitic thyristor. Several improvements have been made in the structure of present-day IGBTs to incorporate very high latch-up immunity.
7.9.1 Transient Operation

The switching waveforms of an IGBT in the clamped inductive circuit are shown in Figure 7.18. The turn-on waveforms are similar to those of the MOSFET in Figure 7.13. Two distinct phases are observed while the IGBT turns on. The first is the rapid reduction in voltage as the capacitor C_GD discharges, similar to a MOSFET. A finite time is then required for high-level injection conditions to set in within the drift region. The gate voltage starts rising again only after the transistor comes out of its saturation region into the linear region. During turn-off, the current is held constant while the collector voltage rises. This is followed by a sharp decline in collector current as the MOS channel turns off. Up to this phase, the turn-off transient is similar to that of a MOSFET. However, the IGBT has excess charge in the drift region during on-state conduction that must be removed to turn the device off. This high concentration of minority carriers stored in the n-drift region supports current flow even after the MOS channel is cut off. As the minority carrier density decays due to recombination, it leads to a gradual reduction in the collector current and results in a current tail. For an on-state current of I_ON, the magnitude of the current tail and the turn-off time are roughly expressed as:
I_C(t) = \alpha_{pnp} I_{on} e^{-t/\tau_{HL}}.   (7.18a)

t_{off} = \tau_{HL} \ln(10 \alpha_{pnp}).   (7.18b)
In these equations, α_pnp = sech(W_D/L_a) is the gain of the bipolar transistor. The PT IGBT has a thin drift region and better on-state performance than the NPT IGBT. The high gain of the pnp transistor, however, can lead to potential thyristor latch-up. For this reason, the PT IGBT is subjected to lifetime control techniques to reduce the gain of the bipolar transistor. On the other hand, the NPT IGBT has a wide drift region, and the gain is low even without lifetime control. The low lifetime and thin drift region make the PT IGBT more sensitive to temperature variation than the NPT IGBT. Further, being bipolar in nature, IGBTs are vulnerable to thermal runaway in certain conditions, depending on the relative importance of MOSFET and bipolar transistor parameters. Hence, paralleling IGBTs is a challenge. The delay in turn-on and the current tail during turn-off are significant sources of power loss in an IGBT. Several structures have been designed for faster removal of charge from the drift region during turn-off. Controlling the carrier lifetime within a tightly controlled band in the drift region has also been used to obtain a better trade-off between conduction and switching characteristics.

Example 7.6. An IGBT is designed with the same parameters as the MOSFET in example 7.4 by replacing the n+ substrate with a p+ substrate. A symmetrical structure is designed for both devices with a voltage blocking capability of 500 V. The drift region has a minority carrier lifetime of 1 μs. We can compare the on-state performance of the IGBT and the MOSFET. From Example 7.3, the voltage drop across the drift region at a current density of 100 A/cm² is 3.1 V. With technological limitations and following the design guidelines, the unit cell dimension of a 500 V MOSFET is about 8 μm. Then, considering that the channel has an on-state resistance of 0.68 Ω (example 7.4), the channel drop is 0.35 V. Thus, the overall MOSFET voltage drop is 3.45 V. The IGBT will have the same channel drop.
However, the voltage across the diode in the equivalent representation of the IGBT is given by equation 7.17 to be 0.43 V. Thus, the overall voltage drop of the IGBT is 0.78 V. This illustrates the superior on-state performance of IGBTs over MOSFETs at higher voltage levels.
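The current-tail expressions 7.18a and 7.18b are easy to evaluate numerically. In the sketch below, the ratio W_D/L_a and the on-state current are illustrative assumptions of mine (they are not given in the text); only the 1 μs lifetime comes from Example 7.6.

```python
import math

tau_hl = 1e-6        # high-level lifetime, s (Example 7.6 value)
Ion = 1.0            # on-state current, A (illustrative assumption)
wd_over_la = 0.5     # drift width / ambipolar diffusion length (assumed)

# Gain of the internal pnp transistor: alpha_pnp = sech(W_D / L_a)
alpha_pnp = 1.0 / math.cosh(wd_over_la)

# Equation 7.18a: the collector current tail at a few time points
for t in (0.0, 1e-6, 2e-6):
    Ic = alpha_pnp * Ion * math.exp(-t / tau_hl)
    print(f"I_C({t * 1e6:.0f} us) = {Ic:.3f} A")

# Equation 7.18b: time for the tail to decay to about 10% of its start
t_off = tau_hl * math.log(10 * alpha_pnp)
print(f"t_off = {t_off * 1e6:.2f} us")
```

With these assumed numbers the gain is about 0.89 and the tail persists for roughly two lifetimes, which illustrates why the current tail dominates IGBT turn-off loss.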
FIGURE 7.18 Typical Switching Waveforms During IGBT Clamped Inductive Load Turn-Off

7.10 Other MOS-Gate Devices
In the on-state, the IGBT operates as a bipolar transistor driven by a MOSFET. The on-state characteristics can be significantly improved by thyristor-like operation. This is achieved by the MOS-controlled thyristor (MCT) shown in Figure 7.19(A). Application of a negative gate voltage creates a p-channel
FIGURE 7.19 Cross-Section of Thyristors: (A) MOS-Controlled Thyristor; (B) Emitter-Switched Thyristor
under the gate. This channel provides the gate current to the vertical n-p-n-p thyristor, which latches due to regenerative feedback and conducts a large forward current with a low forward voltage drop. During on-state conduction, the p-channel loses control over MCT performance once the thyristor latches on, and the I–V characteristics are independent of gate voltage beyond the threshold voltage V_TH of the channel. MCT turn-off is accomplished by applying a high voltage of reverse polarity at the gate. This creates an n-channel under the gate. If the resistance of the channel is small enough, it will divert the electron current from the thyristor. This effectively raises the holding current of the thyristor, thus forcing it to turn off. Although MCTs have significantly improved current-handling capability, they do not match the switching performance of IGBTs. This is due to the increased resistance in the device during turn-on and the higher current density. During the turn-off stage, the thyristor may fail to turn off if the resistance of the diverting channel is high. In addition, the channel has to be formed uniformly and abruptly to effectively cut off the feedback in the thyristor. The MCT has an uncontrolled on-state performance; hence, its forward-biased safe operating area (FBSOA) cannot be defined. The emitter-switched thyristor (EST) has been proposed to achieve control over the thyristor on-state conduction. The cross-section of an EST is shown in Figure 7.19(B). The EST has an n+ floating emitter region in addition to the basic IGBT structure. When the inversion channel is formed on application of a gate voltage, device performance is identical to an IGBT. At higher current levels, hole flow in the p-base region under the floating region forward biases the p-n+ junction, causing latch-up of the vertical thyristor. The thyristor current flows through the MOS channel before reaching the emitter terminal.
This provides control over the thyristor current, as the resistance of the channel can be varied by changing the gate voltage. On-state thyristor operation of the EST results in a much higher current density than an IGBT. The switching mechanism in an EST is identical to an IGBT. Because of the higher level of drift region conductivity modulation, however, a larger turn-off power loss results. Several other MOS-gated structures, such as the base resistance-controlled thyristor (BRT), the integrated gate-commutated thyristor (IGCT), the injection-enhanced gate transistor (IEGT), the MOS turn-off thyristor (MTO), and so forth, have been introduced in the recent past to further optimize the on-state and switching performance at high voltage and current levels.
7.11 Smart Power Technologies

The development of MOS-gate-driven power devices has greatly simplified gate drive circuits. These devices have made it possible to integrate the gate drive circuit into a monolithic chip. With current technology, it is also possible to add other functions, such as protection against adverse operating conditions and logic circuits to interface with microprocessors. In a typical smart power control chip, the sensing and protection circuits are usually implemented using analog circuits with high-speed bipolar transistors. These circuits must sense adverse temperatures, currents, and voltages. This circuitry helps detect situations like thermal runaway, impact ionization, insufficient gate drive, and so forth. Present-day smart power chips are manufactured using a junction isolation technology. Efforts are in progress to make lateral structures with high breakdown voltages and thin epitaxial layers. Dielectric isolation technology is being perfected to replace junction isolation to achieve fewer parasitics, compactness, and a higher degree of integration.
7.12 Other Material Technologies

Numerous power circuit applications exist in which the device temperature can rise significantly above room temperature. Further, several applications require devices that can handle
large blocking voltages exceeding 5 kV. Inherent material limitations make it impossible to use silicon-based devices beyond 200°C. These limitations have led to the search for new materials that can handle large voltages. Wide band gap materials with high carrier mobilities are ideally suited for such applications. Presently, SiC appears to be the most promising material to replace Si for high-voltage, high-temperature applications. With the commercial availability of high-quality silicon carbide wafers with lightly doped epitaxial layers, the fabrication of power devices has become viable. Power switches and rectifiers fabricated from silicon carbide offer tremendous promise for reduction of power losses in the device. All the devices discussed in this chapter have been demonstrated using SiC technology. However, the predicted superior performance of
SiC devices has not been achieved on a large scale because of limitations in wafer quality and processing technology. Much work needs to be done before a manufacturable technology can be commercialized.
Bibliography

Baliga, B.J. (1996). Power semiconductor devices. Boston: PWS Publishing.
Grant, D.A., and Gowar, J. (1989). Power MOSFETs—Theory and applications. New York: John Wiley & Sons.
Sze, S.M. (1981). Physics of semiconductor devices. New York: John Wiley & Sons.
Mohan, N., Undeland, T.M., and Robbins, W.P. (1996). Power electronics—Design, converters, and applications. New York: John Wiley & Sons.
III VLSI SYSTEMS
Magdy Bayoumi The Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, Louisiana, USA
This section covers the broad spectrum of VLSI arithmetic, custom memory organization and data transfer, the role of hardware description languages, clock scheduling, low-power design, microelectromechanical systems, and noise analysis and design. It has been written and developed for practicing electrical engineers in industry, government, and academia. The goal is to provide the most up-to-date information in the field. Over the years, the fundamentals of the field have evolved to include a wide range of topics and a broad range of practice. To encompass such a wide range of knowledge, the section focuses on the key concepts, models, and equations that enable the
design engineer to analyze, design, and predict the behavior of large-scale systems. While design formulas and tables are listed, emphasis is placed on the key concepts and the theories underlying the processes. In order to do so, the material is reinforced with frequent examples and illustrations. The compilation of this section would not have been possible without the dedication and efforts of the section editor and contributing authors. I wish to thank them all. Wai-Kai Chen Editor
1 Logarithmic and Residue Number Systems for VLSI Arithmetic Thanos Stouraitis Department of Electrical and Computer Engineering, University of Patras, Greece
1.1 Introduction ............................................. 179
1.2 LNS Basics .............................................. 179
    1.2.1 LNS and Linear Representations . 1.2.2 LNS Operations . 1.2.3 LNS and Power Dissipation
1.3 The Residue Number System .............................. 185
    1.3.1 RNS Basics . 1.3.2 RNS Architectures . 1.3.3 Error Tolerance in RNS Systems . 1.3.4 RNS and Power Dissipation
References .................................................. 190
1.1 Introduction Very large-scale integrated circuit (VLSI) arithmetic units are essential for the operations of the data paths and/or the addressing units of microprocessors, digital signal processors (DSPs), as well as data-processing application-specific integrated circuits (ASICs) and programmable integrated circuits. Their optimized realization, in terms of power or energy consumption, area, and/or speed, is important for meeting demanding operational specifications of such devices. In modern VLSI design flows, the design of standard arithmetic units is available from design libraries. These units employ binary encoding of numbers, such as one’s or two’s complement, or sign magnitude encoding to perform additions and multiplications. If nonstandard operations are required, or if high performance components are needed, then the design of special arithmetic units is necessary. In this case, the choice of arithmetic system is of utmost importance. The impact of arithmetic in a digital system is not only limited to the definition of the architecture of arithmetic circuits. Arithmetic affects several levels of the design abstraction because it may reduce the number of operations, the signal activity, and the strength of the operators. The choice of arithmetic may lead to substantial power savings, reduced area, and enhanced speed.
This chapter describes two arithmetic systems that employ nonstandard encoding of numbers. The logarithmic number system (LNS) and the residue number system (RNS) are singled out because they have been shown to offer important advantages in the efficiency of their operation and may be at the same time more power- or energy-efficient, faster, and/or smaller than other systems. Although a detailed comparison of the performance of these systems to their counterparts is not offered here, one must keep in mind that such comparisons are only meaningful when the systems under question cover the same dynamic range and present the same precision of operations. This necessity usually translates into certain data word lengths, which, in turn, affect the operating characteristics of the systems.
1.2 LNS Basics

Traditionally, LNS has been considered as an alternative to floating-point representation (Koren, 1993; Stouraitis, 1986). The organization of an LNS word is shown in Figure 1.1. The LNS maps a linear real number X to a triplet as follows:
X \xrightarrow{LNS} (z_x, s_x, x = \log_b |X|),   (1.1)

where s_x is the sign of X, b is the base of the logarithmic representation, and z_x is a single-bit flag which, when asserted, denotes that X is zero. A zero flag is required because log_b X is not a finite number for X = 0. Similarly, since the logarithm of a negative number is not a real number, the sign information of X is stored in the flag s_x. The logarithm x = log_b |X| is encoded as a binary number, and it may comprise a number of k integer and l fractional bits. The inverse mapping of a logarithmic triplet (z_x, s_x, x) to a linear number X is defined by:

(z_x, s_x, x) \xrightarrow{LNS^{-1}} X: \quad X = (1 - z_x)(-1)^{s_x} b^x.   (1.2)

FIGURE 1.1 The Organization of an (n + 1)-bit LNS Digital Word (bit n: zero flag z_x; bit n − 1: sign s_x; bits n − 2 … 0: x = log_b X)
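A minimal sketch of the forward and inverse mappings of equations 1.1 and 1.2 follows; the base b = 2 and the l = 8 fractional bits are illustrative defaults of mine, not values fixed by the text, and the function names are hypothetical.

```python
import math

def lns_encode(X, b=2.0, l=8):
    """Map a real X to an LNS triplet (zx, sx, x) per equation 1.1.
    The logarithm is rounded to l fractional bits to model a finite word."""
    if X == 0:
        return (1, 0, 0.0)            # zero flag asserted
    sx = 1 if X < 0 else 0            # sign flag
    x = math.log(abs(X), b)           # x = log_b |X|
    x = round(x * 2 ** l) / 2 ** l    # finite-word-length logarithm
    return (0, sx, x)

def lns_decode(triplet, b=2.0):
    """Inverse mapping of equation 1.2: X = (1 - zx) * (-1)^sx * b^x."""
    zx, sx, x = triplet
    return (1 - zx) * (-1) ** sx * b ** x

print(lns_decode(lns_encode(8.0)))    # exactly representable: 8.0
print(lns_decode(lns_encode(-0.25)))  # -0.25
print(lns_decode(lns_encode(0.0)))    # zero flag in action: 0.0
```

Values whose logarithm is not a multiple of 2^-l round-trip only approximately, with a relative error bounded by equation 1.5 below.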
1.2.1 LNS and Linear Representations

Two important issues in a finite word length number system are the range of numbers that can be represented and the precision of the representation (Koren, 1993). Let (k, l, b)-LNS denote an LNS of integer and fractional word lengths k and l, respectively, and of base b. These three parameters determine the properties of the LNS and can be computed so that the LNS meets certain specifications. For example, for a (k, l, b)-LNS to be considered as equivalent to an n-bit linear fixed-point system, the following two restrictions may be posed:

1. The two representations should exhibit equal average representational error.
2. The two representations should cover equivalent data ranges.

The average representational error, e_ave, is defined as:

e_{ave} = \frac{\sum_{X=X_{min}}^{X_{max}} e_{rel}(X)}{X_{max} - X_{min} + 1},   (1.3)

where X_min and X_max define the range of representable numbers in each system and where e_rel(X) is the relative representational error of a number X encoded in a number system. This error is, in general, a function of the value of X, and it is defined as:

e_{rel}(X) = \frac{|X - \hat{X}|}{X},   (1.4)

in which X is the actual value and X̂ is the corresponding value representable in the system. Notice that X ≠ X̂ due to the finite length of the words. Assuming that the logarithm of X is represented as a two's complement number, the relative representational error e_rel,LNS for a (k, l, b)-LNS is independent of X and, therefore, is equal to the average representational error. It is given by [refer to Koren (1993) for the case b = 2]:

e_{ave,LNS} = e_{rel,LNS} = b^{2^{-l}} - 1.   (1.5)

Due to formula 1.3, the average representational error for the n-bit linear fixed-point case is given by:

e_{ave,FXP} = \frac{1}{2^n - 1} \sum_{i=1}^{2^n - 1} \frac{1}{i},   (1.6)

which, by computing the sum on the right-hand side, can be written as:

e_{ave,FXP} = \frac{\psi(2^n) + \gamma}{2^n - 1},   (1.7)

where γ is the Euler gamma constant and the function ψ is defined through:

\psi(x) = \frac{d}{dx} \ln \Gamma(x),   (1.8)

where Γ(x) is the Euler gamma function. In the following, the maximum number representable in each number system is computed and used to compare the ranges of the representations. Notice that different figures of merit could also have been used for range comparison, such as the ratio X_max/X_min (Stouraitis, 1986). The maximum number representable by an n-bit linear integer is 2^n − 1; therefore, the upper bound of the fixed-point range is given by:

X_{max}^{FXP} = 2^n - 1.   (1.9)
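The closed form of equation 1.5 can be checked empirically. The sketch below truncates base-2 logarithms to l = 6 fractional bits (arbitrary illustrative values) and measures the relative error against the stored value X̂; under this truncation model, the observed worst case should approach, but never exceed, b^(2^-l) − 1.

```python
import math

b, l = 2.0, 6                  # illustrative base and fractional word length
step = 2.0 ** -l               # quantization step of the logarithm
bound = b ** step - 1          # equation 1.5: b^(2^-l) - 1

worst = 0.0
X = 1.0
while X < 100.0:
    x = math.log(X, b)
    x_hat = math.floor(x / step) * step   # truncate logarithm to l bits
    X_hat = b ** x_hat                    # value actually stored
    worst = max(worst, abs(X - X_hat) / X_hat)
    X += 0.0137                           # irregular sweep step
print(f"bound = {bound:.5f}, worst observed = {worst:.5f}")
```

The error is constant in relative (not absolute) terms across the whole range, which is the key difference from fixed-point quantization captured by equations 1.5 through 1.7.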
The maximum number representable by a (k, l, b)-LNS encoding 1.1 is as follows: k
l
LNS ¼ b2 þ12 : Xmax
(1:10)
Therefore, according to the equivalence restrictions posed above, to make an LNS equivalent to an n-bit linear fixedpoint representation, the following inequalities should be simultaneously satisfied:
1 Logarithmic and Residue Number Systems for VLSI Arithmetic LNS FXP Xmax Xmax :
eave,
LNS
eave,
(1:11) (1:12)
FXP :
Hence, from equations 1.5 and 1.7 through 1.10 the following equations are obtained:
c(2n ) þ g ) : l ¼ log2 logb (1 þ n 2 1 k ¼ dlog2 logb (2n 1) þ 2l 1 e:
(1:13) (1:14)
Values of k and l that correspond to various values of n for various values of b can be seen in Table 1.1, where for each base there is a third column (explained next). Although the word lengths k and l computed via equations 1.13 and 1.14 meet the posed equivalence specifications of equations 1.11 and 1.12, the LNS is capable of covering a significantly larger range than the equivalent fixed-point representation. Let n_eq denote the word length of a fixed-point system that can cover the range offered by an LNS defined through equations 1.13 and 1.14. Equivalently, let n_eq be the smallest integer that satisfies:

2^n_eq - 1 ≥ b^(2^k + 1 - 2^-l).   (1.15)

From equation 1.15, it follows that:

n_eq = ⌈(2^k + 1 - 2^-l) log_2 b⌉.   (1.16)

It should be stressed that when n_eq > n, the precision of the particular fixed-point system is better than that of the LNS derived by equations 1.13 and 1.14. Equation 1.16 reveals that the particular LNS, while meeting the precision of an n-bit linear representation, in fact covers the range provided by an n_eq-bit linear system.

Of course, the average (relative) error is not the only way to compare the accuracy of computing systems. Especially for signal processing systems, one may use the signal-to-noise ratio (SNR), assuming that quantization errors represent noise, to compare the precision of two systems. In that case, by equating the SNRs of the LNS and of the fixed-point system that covers the required dynamic range, the integer and fractional word lengths of the LNS may be computed.

TABLE 1.1   Correspondence of n, k, l, and n_eq for Various Bases b

          b = 1.5          b = 2            b = 2.5
 n     k   l   n_eq     k   l   n_eq     k   l   n_eq
 5     3   2    6       3   3    9       2   3    7
 6     4   3   10       3   4    9       2   4    7
 7     4   4   10       3   5    9       3   5   12
 8     4   5   10       3   5    9       3   6   12
 9     4   5   10       4   6   17       3   7   12
10     5   6   20       4   7   17       3   7   12
11     5   7   20       4   8   17       3   8   12
12     5   8   20       4   9   17       4   9   23
13     5   9   20       4  10   17       4  10   23
14     5  10   20       4  11   17       4  11   23
15     5  11   20       4  12   17       4  12   23

1.2.2 LNS Operations

Mapping of equation 1.1 is of practical interest because it can simplify certain arithmetic operations (i.e., it can reduce the implementation complexity, also called strength, of several operators). For example, due to the properties of the logarithm function, the multiplication of two linear numbers, X = b^x and Y = b^y, is reduced to the addition of their logarithmic images, x and y. The basic arithmetic operations and their LNS counterparts are summarized in Table 1.2, where, for simplicity and without loss of generality, the zero flag z_x is omitted and it is assumed that X > Y. Table 1.2 reveals that, while the complexity of most operations is reduced, the complexity of LNS addition and LNS subtraction is significant. In particular, for d = |x - y|, LNS addition requires the computation of the nonlinear function:

s_a(d) = log_b (1 + b^-d),   (1.17)

and subtraction requires the computation of the nonlinear function:

s_s(d) = log_b (1 - b^-d).   (1.18)

Equations 1.17 and 1.18 substantially limit the data word lengths for which LNS can offer efficient VLSI implementations.
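Equations 1.14 and 1.16 are easy to check against Table 1.1. The short sketch below (Python; the function names are ours, not from the text) recomputes k and n_eq for a few (n, b) pairs, given the fractional length l taken from the table:

```python
import math

def k_bits(n, l, b):
    # Equation 1.14: integer bits so that the LNS range covers 2**n - 1.
    return math.ceil(math.log2(math.log(2 ** n - 1, b) + 2 ** -l - 1))

def n_eq(k, l, b):
    # Equation 1.16: linear word length matching the LNS range of eq. 1.10.
    return math.ceil((2 ** k + 1 - 2 ** -l) * math.log2(b))

# Spot checks against Table 1.1 (b, n -> k, n_eq):
print(k_bits(5, 3, 2), n_eq(3, 3, 2))        # b = 2,   n = 5:  3, 9
print(k_bits(9, 6, 2), n_eq(4, 6, 2))        # b = 2,   n = 9:  4, 17
print(k_bits(5, 2, 1.5), n_eq(3, 2, 1.5))    # b = 1.5, n = 5:  3, 6
print(k_bits(5, 3, 2.5), n_eq(2, 3, 2.5))    # b = 2.5, n = 5:  2, 7
```

The same loop over all (n, b) pairs reproduces the full table.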
Thanos Stouraitis

TABLE 1.2   Basic Linear Arithmetic Operations and Their LNS Counterparts

Linear operation                                            Logarithmic operation
Multiply   W = X·Y = b^x·b^y = b^(x+y)                      w = x + y, s_w = s_x XOR s_y
Divide     W = X/Y = b^x/b^y = b^(x-y)                      w = x - y, s_w = s_x XOR s_y
Root       W = X^(1/m) = (b^x)^(1/m) = b^(x/m)              w = x/m, m integer, s_w = s_x
Power      W = X^m = (b^x)^m = b^(mx)                       w = mx, m integer, s_w = s_x
Add        W = X + Y = b^x + b^y = b^x(1 + b^(y-x))         w = x + log_b(1 + b^(y-x)), s_w = s_x
Subtract   W = X - Y = b^x - b^y = b^x(1 - b^(y-x))         w = x + log_b(1 - b^(y-x)), s_w = s_x

FIGURE 1.2 The Organization of a Basic LNS Processor: the processor comprises an adder, two multiplexers, a sign-inversion unit, a look-up table, and a final adder. It may perform the four operations of addition, subtraction, multiplication, or division.
The organization of an LNS processor that can perform the four basic operations of addition, subtraction, multiplication, and division is shown in Figure 1.2. Note that to implement LNS subtraction (i.e., the addition of two quantities of opposite sign), a different memory look-up table (LUT) is required. The main complexity of an LNS processor is the implementation of the LUTs for storing the values of the functions s_a(d) and s_s(d). A straightforward implementation is only feasible for small word lengths. A different technique, based on partitioning an LUT into an assortment of smaller LUTs, can be used for larger word lengths. The particular partitioning becomes possible due to the nonlinear behavior of the addition and subtraction functions, log_b(1 + b^-d) and log_b(1 - b^-d), respectively, which are depicted in Figure 1.3 for b = 2. By exploiting the different minimal word lengths required by groups of function samples, the overall size of the LUT is compressed, leading to the LUT organization of Figure 1.4. In addition to the above techniques, a reduction of the memory size can be achieved by proper selection of the base of the logarithms. It turns out that the same bases that yield minimum power consumption for the LNS arithmetic unit by reducing the bit activity, as mentioned in the next section, also result in minimum LUT sizes.
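As a concrete illustration of the straightforward LUT approach (a sketch under assumed parameters, not the partitioned design of Figure 1.4), the fragment below tabulates s_a(d) = log_2(1 + 2^-d) with d quantized to l = 5 fractional bits and uses the table for one LNS addition:

```python
import math

L = 5                  # fractional bits (an assumed word length)
STEP = 2.0 ** -L
D_MAX = 8              # for d >= 8, s_a(d) quantizes to 0 at this precision

# Straightforward (unpartitioned) LUT for s_a(d) = log2(1 + 2**-d),
# with both d and the stored samples quantized to L fractional bits.
lut_sa = [round(math.log2(1 + 2.0 ** -(i * STEP)) / STEP) * STEP
          for i in range(int(D_MAX / STEP))]

def lns_add(x, y):
    # w = max(x, y) + s_a(|x - y|), with s_a read from the LUT.
    d = abs(x - y)
    idx = int(round(d / STEP))
    sa = lut_sa[idx] if idx < len(lut_sa) else 0.0
    return max(x, y) + sa

x, y = math.log2(2.75), math.log2(5.65)
print(2 ** lns_add(x, y))   # close to the exact sum 2.75 + 5.65 = 8.4
```

With only 5 fractional bits the 256-entry table already reproduces the sum to within about 1%; the partitioning of Figure 1.4 attacks exactly this table size for larger word lengths.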
To use the benefits of LNS, a conversion overhead is incurred in most cases to perform the forward LNS mapping defined by equation 1.1. Note that the conversions of equations 1.1 and 1.2 are required whenever an LNS processor receives input or transmits output linear data in digital format. Since all arithmetic operations can be performed in the logarithmic domain, only an initial conversion is imposed; therefore, as the amount of processing implemented in LNS grows, the contribution of the conversion overhead to power dissipation and to area–time complexity becomes negligible because it remains constant. In stand-alone DSP systems, a different solution to the conversion problem is possible. In particular, the LNS forward and inverse mapping overhead can be mitigated by converting the analog data directly into digital logarithms.

LNS Arithmetic Example
Let X = 2.75, Y = 5.65, and b = 2. Perform the operations X·Y, X + Y, √X, and X² using the LNS. Initially, the data are transferred to the logarithmic domain as implied by equation 1.1:
FIGURE 1.3 The Functions s_a(d) and s_s(d): Approximations required for LNS addition and subtraction.

FIGURE 1.4 The Partitioning of the LUT: The partitioning stores the addition and subtraction functions into a set of smaller LUTs, which leads to memory compression.

X →LNS (z_x, s_x, x = log_2 |X|) = (0, 0, log_2 2.75) = (0, 0, 1.4594).   (1.19)

Y →LNS (z_y, s_y, y = log_2 |Y|) = (0, 0, log_2 5.65) = (0, 0, 2.4983).   (1.20)

Using the LNS images from equations 1.19 and 1.20, the required arithmetic operations are performed as follows. The logarithmic image w of the product W = X·Y is given by:

w = x + y = 1.4594 + 2.4983 = 3.9577.   (1.21)

As both operands are of the same sign (i.e., s_x = s_y = 0), the sign of the product is s_w = 0. In addition, because z_x ≠ 1 and z_y ≠ 1, the result is nonzero (i.e., z_w = 0). To retrieve the actual result W from equation 1.21, the inverse conversion of equation 1.2 is used as follows:

W = (1 - z_w)·(-1)^s_w·2^w = 2^3.9577 = 15.5377.   (1.22)

By directly multiplying X by Y, it is found that W = 15.5375. The difference of 0.0002 is due to round-off error during the conversion from the linear to the LNS domain. The calculation of the logarithmic image w of W = √X is performed as follows:

w = x/2 = 1.4594/2 = 0.7297.   (1.23)

The actual result is retrieved as follows:

W = 2^0.7297 = 1.6583.   (1.24)

The calculation of the logarithmic image w of W = X² can be done as:

w = 2·x = 2·1.4594 = 2.9188.   (1.25)

Again, the actual result is obtained as:

W = 2^2.9188 = 7.5622.   (1.26)

The operation of logarithmic addition is rather awkward, and its realization is usually based on a memory LUT operation. The logarithmic image w of the sum W = X + Y is as follows:

w = max(x, y) + log_2 (1 + 2^(min(x, y) - max(x, y)))   (1.27)
  = 2.4983 + log_2 (1 + 2^-1.0389)   (1.28)
  = 3.0704.   (1.29)

The actual value of the sum W = X + Y is obtained as:

W = 2^3.0704 = 8.4001.   (1.30)
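The worked example can be replayed numerically. The sketch below rounds the logarithmic images to four decimals, as the text does, and reproduces the results of equations 1.21 through 1.30:

```python
import math

b = 2
x = round(math.log(2.75, b), 4)    # 1.4594, as in equation 1.19
y = round(math.log(5.65, b), 4)    # 2.4983, as in equation 1.20

w_mul = x + y                      # product image, equation 1.21
w_sqrt = x / 2                     # square-root image, equation 1.23
w_sq = 2 * x                       # square image, equation 1.25
w_add = max(x, y) + math.log(1 + b ** (min(x, y) - max(x, y)), b)

print(b ** w_mul)    # ~15.5377 (direct product: 15.5375; round-off error)
print(b ** w_sqrt)   # ~1.6583
print(b ** w_sq)     # ~7.5622
print(b ** w_add)    # ~8.4001
```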
1.2.3 LNS and Power Dissipation

Power dissipation minimization is sought at all levels of design abstraction, ranging from software and hardware partitioning down to technology-related issues. The average power dissipation in a circuit is computed via the relationship:

P_ave = a·f_clk·C_L·V_dd²,   (1.31)
where f_clk is the clock frequency, C_L is the total switching capacitance, V_dd is the supply voltage, and a is the average activity in a clock period. LNS is applicable to low-power design because it reduces the complexity of certain arithmetic operators and the bit activity.

Power Dissipation and LNS Architecture
LNS exploits properties of the logarithm function to reduce the strength of several arithmetic operations; thus, it leads to complexity savings. By reducing the area complexity of operations, the switching capacitance C_L of equation 1.31 can be reduced. Furthermore, a reduction in latency allows for a further reduction in supply voltage, which also reduces power dissipation (Chandrakasan and Brodersen, 1995). A study of the impact of the choice of the number system on the QRD-RLS algorithm revealed that LNS offers accuracy comparable to that of floating-point operations at only a fraction of the switched capacitance per iteration of the algorithm (Sacha and Irwin, 1998). The reduction of the average switched capacitance of LNS systems stems from the simplification of the basic arithmetic operations, shown in Table 1.2. It can be seen that n-bit multiplication and division are reduced to (k + l)-bit addition and subtraction, respectively, while the computation of roots and powers is reduced to division and multiplication by a constant, respectively. For the common cases of square root and square, the operation is reduced to a right or left shift, respectively. For example, assume that an n-bit carry-save array multiplier, which has a complexity of n^2 - n one-bit full adders (FAs), is replaced by an n-bit adder, which, assuming k + l = n, has a complexity of n FAs for a ripple-carry implementation (Koren, 1993). Multiplication complexity is therefore reduced by the factor r_CL, given as:

r_CL = (n^2 - n)/n = n - 1.   (1.32)
Equation 1.32 reveals that the reduction factor r_CL grows with the word length n.

Addition and subtraction, however, are complicated in LNS because they require an LUT operation for the evaluation of log_b(1 ± b^(y-x)), although different approaches have been proposed in the literature (Orginos et al., 1995; Paliouras and Stouraitis, 1996). An LUT operation requires a ROM of n·2^n bits, a size that can inhibit the use of LNS for large values of n. In an attempt to solve this problem, efficient table reduction techniques have been proposed (Taylor et al., 1988). As a result of the above analysis, applications with a computational load dominated by operations of simple LNS implementation can be expected to gain power dissipation reduction due to the LNS impact on architecture complexity. Since multiplication–additions are important in DSP applications, the power requirements of an LNS and of a linear fixed-point adder–multiplier have been compared. It has been reported that approximately a two-times reduction in power dissipation is possible for operations with word sizes of 8 to 14 bits (Paliouras and Stouraitis, 2001). Given a sufficient number of consecutive multiplication–additions, the LNS implementation becomes more efficient from the low-power dissipation viewpoint, even when a constant conversion overhead is taken into consideration.

Power Dissipation and LNS Encoding
The encoding of data through logarithms of various bases implies variations in the bit activity (i.e., the a factor of equation 1.31) and, therefore, in the power dissipation (Paliouras and Stouraitis, 1996, 2001). Assuming a uniform distribution of linear n-bit input numbers, the distribution of bit assertions of the corresponding LNS words reveals that LNS can be exploited to reduce the average activity. Let p_{0→1}(i) be the bit assertion probability (i.e., the probability of the ith bit making a transition from 0 to 1). Assuming that data are temporally independent, it holds that:

p_{0→1}(i) = p_0(i)·p_1(i) = (1 - p_1(i))·p_1(i),   (1.33)

where p_0(i) and p_1(i) are the probabilities of the ith bit being 0 or 1, respectively. Due to the assumption of uniform data distribution, it holds that:

p_0(i) = p_1(i) = 1/2,   (1.34)

which, due to equation 1.33, gives:

p_{0→1}(i) = 1/4.   (1.35)
Therefore, all bits in the linear fixed-point representation exhibit an equal p_{0→1}(i), i = 0, 1, ..., n - 1.

The activities of the bits in an LNS-encoded word are quantified under similar assumptions. Since there is a one-to-one correspondence of linear fixed-point values to their LNS images defined by equation 1.1, the LNS values follow a probability function identical to that of the fixed-point case. In fact, the LNS mapping can be considered as a continuous transformation of the discrete random variable X, a word in the linear representation, to the discrete random variable x, an LNS word. Hence, the two discrete random variables follow the same probability function (Peebles, 1987). The probabilities p_{0→1}^{LNS}(i) of bit assertions in LNS words, however, are not constant like the p_{0→1}(i) of equation 1.35; they depend on the significance i of the bit. To evaluate the probabilities p_{0→1}^{LNS}(i), the following experiment is performed. For all possible values of X in an n-bit system, the corresponding ⌊log_b X⌋ values in a (k, l, b)-LNS format are derived, and the probabilities p_1(i) for each bit are computed. Then, p_{0→1}^{LNS}(i) is computed as in equation 1.33. The actual assertion probabilities for the bits in an LNS word, p_{0→1}^{LNS}(i), are depicted in Figure 1.5. It can be seen that p_{0→1}^{LNS}(i) for the more significant bits is substantially lower than p_{0→1}(i) for the less significant bits. Moreover, it can be seen that p_{0→1}^{LNS}(i) depends on b. This behavior, which is due to the inherent data compression property of the logarithm function, leads to a reduction of the average activity in the entire word. The average activity savings percentage, S_ave, is computed as:
S_ave = (1 - (Σ_{i=0}^{k+l-1} p_{0→1}^{LNS}(i)) / (0.25 n)) × 100%,   (1.36)
where p_{0→1}^{FXP}(i) = 1/4 for i = 0, 1, ..., n - 1; the word lengths k and l are computed via equations 1.13 and 1.14; and n denotes the word length of the fixed-point system. The savings percentage S_ave is demonstrated in Figure 1.6(A) for various values of n and b, and it is found to be more than 15% in certain cases. As implied by the definition of n_eq in equation 1.16, however, the linear system that provides a range equivalent to that of a (k, l, b)-LNS requires n_eq bits. If the reduced precision of a (k, l, b)-LNS, compared to an n_eq-bit fixed-point system, is acceptable for a particular application, S'_ave is used instead of equation 1.36 to describe the relative efficiency of LNS, where:

S'_ave = (1 - (Σ_{i=0}^{k+l-1} p_{0→1}^{LNS}(i)) / (0.25 n_eq)) × 100%.   (1.37)
The savings percentage S'_ave is demonstrated in Figure 1.6(B) for various values of n and b. Savings are found to exceed 50% in some cases. Notice that Figure 1.6 reveals that, for a particular word length n, the proper selection of the logarithm base b can significantly affect the average activity. Therefore, the choice of b is important in designing a low-power LNS-based system. Finally, it should be noted that an overhead is imposed for linear-to-logarithmic and logarithmic-to-linear conversion. The conversion overhead contributes additional area and time complexity as well as power dissipation. As the number of operations grows, however, the conversion overhead remains constant; therefore, its contribution to the overall budget becomes negligible.
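The experiment behind Figures 1.5 and 1.6 can be sketched as follows. The parameters (n, k, l, b) = (8, 3, 5, 2) match the n = 8 row of Table 1.1, and the truncated fixed-point logarithm is one plausible reading of the encoding step, so treat this as an illustration rather than the authors' exact procedure:

```python
import math

n, k, l, b = 8, 3, 5, 2.0     # n-bit linear data; (k, l, b)-LNS per Table 1.1

kl = k + l
counts = [0] * kl
total = 0
for X in range(1, 2 ** n):
    # Truncated fixed-point logarithmic image with l fractional bits.
    code = int(math.log(X, b) * 2 ** l)
    for i in range(kl):
        counts[i] += (code >> i) & 1
    total += 1

# Equation 1.33 per bit; a uniformly distributed linear bit has activity 1/4.
p01 = [(1 - c / total) * (c / total) for c in counts]
lns_activity = sum(p01)
fxp_activity = 0.25 * n
print(lns_activity, fxp_activity)   # the LNS total is the smaller one
```

The more significant bits of the exponent are asserted for most inputs, so their 0-to-1 activity collapses, which is exactly the compression effect the text attributes to the logarithm.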
FIGURE 1.5 Activities Against Bit Significance i (in an LNS Word for n = 8 and n = 12) and Various Values of the Base b. The horizontal dashed line is the activity of the corresponding n-bit fixed-point system.

1.3 The Residue Number System
A concept different from the nonlinear logarithmic transformation is the mapping of data to appropriately selected finite fields. This may be achieved through the use of one of the many available versions of the residue number system (RNS) (Szabó and Tanaka, 1967). RNS arithmetic faces difficulties with sign detection, division, and magnitude comparison. These difficulties may outweigh the benefits it presents for addition, subtraction, and multiplication as far as general computing is concerned. Its use in specialized computations, like those for signal processing, offers many advantages, however. RNS has been used to offer superior fault tolerance capabilities as well as high-speed, small-area, and/or significant power-dissipation savings in the design of signal processing architectures for FIR filters (Freking and Parhi, 1997) and other circuits (Chren, 1998). RNS may even reduce the computational load in complex-number processing (Taylor et al., 1985), thus
FIGURE 1.6 Percentage of Average Activity Reduction from Use of LNS. The percentage is compared to an n-bit (A) and to an n_eq-bit (B) linear fixed-point system for various bases b of the logarithm. The diagram reveals that the optimal selection of b depends on n, and it can lead to significant power dissipation reduction.
providing speed and power savings at the algorithmic level of the design abstraction.
1.3.1 RNS Basics

The RNS maps a natural number X in the range [0, M - 1], with M = Π_{i=1}^{N} m_i, to an N-tuple of residues x_i:

X →RNS {x_1, x_2, ..., x_N},   (1.38)

where x_i = ⟨X⟩_{m_i}, ⟨·⟩_{m_i} denotes the mod m_i operation, and m_i is a member of the set of co-prime integers B = {m_1, m_2, ..., m_N} called moduli. Co-prime integers have pairwise greatest common divisors gcd(m_i, m_j) = 1, i ≠ j. The set of RNS moduli is called the base of the RNS. The modulo operation ⟨X⟩_m returns the integer remainder of the integer division X div m (i.e., the integer k, 0 ≤ k < m, such that X = m·l + k, where l is an integer). RNS is of interest because basic arithmetic operations can be performed in a digit-parallel, carry-free manner, as in:

z_i = ⟨x_i ∘ y_i⟩_{m_i},   (1.39)

where i = 1, 2, ..., N and the symbol ∘ stands for addition, subtraction, or multiplication. Every integer in the range 0 ≤ X < Π_{i=1}^{N} m_i has a unique RNS representation. Inverse conversion may be accomplished by means of the Chinese remainder theorem (CRT) or the mixed-radix conversion (Soderstrand et al., 1986). The CRT retrieves an integer from its RNS representation as:
X = ⟨ Σ_{i=1}^{N} m̂_i ⟨m̂_i^-1 x_i⟩_{m_i} ⟩_M,   (1.40)

where m̂_i = M/m_i and m̂_i^-1 is the multiplicative inverse of m̂_i modulo m_i (i.e., an integer such that ⟨m̂_i m̂_i^-1⟩_{m_i} = 1). Using an associated mixed radix system, inverse conversion may also be performed by translating the residue representations to a mixed radix representation. By choosing the RNS moduli to be the weights of the mixed radix representation, the inverse mapping is facilitated by associating the mixed radix system with the RNS. Specifically, an integer 0 < X < M can be represented by N mixed radix digits (x'_1, ..., x'_N) as:

X = x'_N (m_{N-1} m_{N-2} ··· m_1) + ··· + x'_3 (m_2 m_1) + x'_2 m_1 + x'_1,   (1.41)

where 0 ≤ x'_i < m_i, i = 1 ... N, and the x'_i can be generated sequentially from the x_i using only residue arithmetic, such as in:

x'_1 = ⟨X⟩_{m_1} = x_1,
x'_2 = ⟨m_1^-1 (X - x'_1)⟩_{m_2},
x'_3 = ⟨m_2^-1 (m_1^-1 (X - x'_1) - x'_2)⟩_{m_3},   (1.42)

and so on, or as in the following:

x'_1 = x_1,
x'_2 = ⟨(x_2 - x'_1)⟨m_1^-1⟩_{m_2}⟩_{m_2},
...
x'_N = ⟨(···((x_N - x'_1)⟨m_1^-1⟩_{m_N} - x'_2)⟨m_2^-1⟩_{m_N} - ··· - x'_{N-1})⟨m_{N-1}^-1⟩_{m_N}⟩_{m_N}.   (1.43)

The digits x'_i can be generated sequentially through residue subtraction and multiplication by the fixed m_i^-1. The sequential nature of the calculation increases the latency of the conversion of residues to binary numbers.

The set of RNS moduli is often chosen so that the implementation of the various RNS operations (e.g., addition, multiplication, and scaling) becomes efficient. A common choice is the set of moduli {2^n - 1, 2^n, 2^n + 1}, which may also form a subset of the base of the RNS.

RNS Arithmetic Example
Consider the base B = {3, 5, 7} and two integers X = 10 and Y = 5. The RNS images of X and Y are as written here:

X →RNS {x_1, x_2, x_3} = {⟨10⟩_3, ⟨10⟩_5, ⟨10⟩_7} = {1, 0, 3}.   (1.44)

Y →RNS {y_1, y_2, y_3} = {⟨5⟩_3, ⟨5⟩_5, ⟨5⟩_7} = {2, 0, 5}.   (1.45)

The RNS image of the sum Z = X + Y is obtained as:

Z →RNS {z_1, z_2, z_3} = {⟨1 + 2⟩_3, ⟨0 + 0⟩_5, ⟨3 + 5⟩_7} = {0, 0, 1}.   (1.46)

To retrieve the integer that corresponds to the RNS representation {0, 0, 1} by applying the CRT of equation 1.40, the following quantities are precomputed: M = 3·5·7 = 105, m̂_1 = 105/3 = 35, m̂_2 = 105/5 = 21, m̂_3 = 105/7 = 15, m̂_1^-1 = 2, m̂_2^-1 = 1, and m̂_3^-1 = 1. The value of the sum in integer form is obtained by applying equation 1.40:

Z = X + Y = ⟨35⟨2·0⟩_3 + 21⟨1·0⟩_5 + 15⟨1·1⟩_7⟩_105 = ⟨15⟩_105 = 15.   (1.47)

To verify the result of equation 1.46, notice that X + Y = 10 + 5 = 15 and that:

15 →RNS {⟨15⟩_3, ⟨15⟩_5, ⟨15⟩_7} = {0, 0, 1} = {z_1, z_2, z_3},   (1.48)

which is the result obtained in equation 1.46. The same integer may be retrieved by using the associated mixed radix system defined by equation 1.41 as:

Z = z'_3·15 + z'_2·3 + z'_1,

with 0 ≤ z'_1 < 3, 0 ≤ z'_2 < 5, 0 ≤ z'_3 < 7, and the following:

z'_1 = z_1 = 0,
z'_2 = ⟨3^-1 (z_2 - z'_1)⟩_5 = ⟨2 z_2⟩_5 = 0,
z'_3 = ⟨5^-1 [3^-1 (z_3 - z'_1) - z'_2]⟩_7 = ⟨((1 - 0)·5 - 0)·3⟩_7 = 1,   (1.49)

so that Z = 1·15 + 0·3 + 0 = 15.
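The example is easy to replay in code. The sketch below (the helper names are ours) performs the forward conversion of equation 1.38, the digit-parallel addition of equation 1.39, and the CRT reconstruction of equation 1.40:

```python
from math import prod

B = [3, 5, 7]                    # pairwise co-prime moduli
M = prod(B)                      # dynamic range [0, 104]

def to_rns(X):
    return [X % m for m in B]    # forward mapping, equation 1.38

def rns_add(xs, ys):
    # Digit-parallel, carry-free addition, equation 1.39.
    return [(x + y) % m for x, y, m in zip(xs, ys, B)]

def crt(xs):
    # Equation 1.40; pow(a, -1, m) gives the multiplicative inverse mod m.
    total = 0
    for x, m in zip(xs, B):
        mh = M // m
        total += mh * ((pow(mh, -1, m) * x) % m)
    return total % M

xs, ys = to_rns(10), to_rns(5)   # [1, 0, 3], [2, 0, 5]
zs = rns_add(xs, ys)
print(zs, crt(zs))               # [0, 0, 1] 15
```

Replacing the `+` in `rns_add` with `-` or `*` gives residue subtraction and multiplication with no other change, which is precisely the carry-free property claimed above.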
1.3.2 RNS Architectures

The basic architecture of an RNS processor in comparison to a binary counterpart is depicted in Figure 1.7. This figure shows that the word length n of the binary counterpart is partitioned into N subwords, the residues, that can be processed
FIGURE 1.7 Basic Architectures: (A) Structure of a Binary Architecture; (B) Corresponding RNS Processor.
independently and are of word length significantly smaller than n. The architecture in Figure 1.7 assumes, without loss of generality, that the moduli are of equal word length. The ith residue channel performs arithmetic modulo m_i. Most implementations of arithmetic units for RNS consist of an accumulator and a multiplier and are based on ROMs or PLAs. Bayoumi et al. (1983) have analyzed the efficiency of various VLSI implementations of RNS adders. Moreover, implementations of arithmetic units that operate in a finite integer ring R(m), called AU_m's, are offered in the literature (Stouraitis et al., 1993). They are less costly, requiring less area and exhibiting lower hardware complexity and power consumption. They are based on continuously decomposing the residue bits that correspond to powers of 2 that are larger than or equal to 2^n, until they are reduced to a set of bits that correspond to a sum of powers of 2 that is less than 2^n, where n = ⌈log_2 m⌉. This decomposition is implemented by using full adder (FA) arrays. For all moduli, the FA-based AU_m's are shown to execute much faster, as well as to have much smaller hardware complexity and time-complexity products, than ROM-based general multipliers. Since the AU_m's use full adders as their basic units, they lead to modular and regular designs, which are inexpensive and easy to implement in VLSI.
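For the popular modulus 2^n - 1 mentioned above, residue addition needs no general modulo circuit: the carry out of an n-bit adder is simply fed back into the least-significant position (end-around carry). A minimal sketch of this well-known trick, not taken from the cited implementations:

```python
def add_mod_2n_minus_1(a, b, n):
    """Addition modulo 2**n - 1 via end-around carry (residues a, b < 2**n - 1)."""
    mask = (1 << n) - 1
    s = a + b
    s = (s & mask) + (s >> n)       # feed the carry back into bit 0
    s = (s & mask) + (s >> n)       # corner case: s == 2**n after first fold
    return 0 if s == mask else s    # the all-ones word is a second zero

# Modulo 15 (n = 4):
print(add_mod_2n_minus_1(9, 8, 4))    # 2, since 17 mod 15 = 2
print(add_mod_2n_minus_1(14, 1, 4))   # 0
```

In hardware the two folds collapse into a single adder whose carry-out drives its own carry-in, which is why {2^n - 1, 2^n, 2^n + 1} bases map so cheaply onto binary adder cells.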
1.3.3 Error Tolerance in RNS Systems

Because there is no interaction among digits (channels) in residue arithmetic, any errors generated at one digit cannot propagate and contaminate other channels during subsequent operations, provided that no conversion from the RNS to a weighted representation has occurred. In addition, because no weight is associated with the RNS residues (digits), if any digit becomes corrupted, the associated channel may be easily identified and dealt with.

Based on the amount of redundancy built into an RNS processor, the faulty channels may be replaced or just isolated, with the rest of the system operating in a "soft failure" mode, being allowed to gracefully degrade into accurate operation over a reduced dynamic range. Provided that the remaining dynamic range contains the results, this degradation poses no problem. The more redundant an RNS is, the easier it is to identify and correct errors. A redundant RNS (RRNS) uses a number r of moduli in addition to the N standard moduli that are necessary for covering the desired dynamic range. All N + r moduli must be relatively prime. In an RRNS, a number X is represented by a total of N nonredundant residue digits {X_1, ..., X_N} plus r redundant residue digits {X_{N+1}, ..., X_{N+r}}. The total number of states represented by the RRNS is M_R = Π_{i=1}^{N+r} m_i. The first M = Π_{i=1}^{N} m_i states constitute its "legitimate range," while any number that lies in the range [M, M_R) is called "illegitimate." Any single error moves a legitimate number X to an illegitimate number X'. Once it is verified that the number being tested is illegitimate, its digits are discarded one by one until a legitimate representation is found. The discarded digit whose omission results in the legitimate representation is the erroneous one. A correct digit can then be produced by extending the base of the reduced RNS that produced the legitimate representation. The above error-locating-and-correcting procedure can be implemented in a variety of ways. Assuming that the mixed radix representations of all the reduced RNS representations can be efficiently generated, the legitimate one can be easily identified by checking the highest-order mixed radix digit against zero. If it is zero, the representation is legitimate. Mixed radix representations associated with the RNS numbers can be used to detect overflows as well as to detect and correct errors in redundant RNS systems.
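The detection step can be sketched as follows, with one redundant modulus appended to the base {3, 5, 7} of the running example (enough to detect a single-digit error; reliably locating and correcting it needs more redundancy). For this choice of moduli, a single-digit error adds a nonzero multiple of M_R/m_i ≥ M to the represented value, pushing it out of the legitimate range [0, M):

```python
from math import prod

moduli = [3, 5, 7, 11]           # {3, 5, 7} nonredundant, 11 redundant
M = prod(moduli[:3])             # legitimate range: [0, 105)
MR = prod(moduli)                # total range: [0, 1155)

def crt(res, ms):
    # CRT reconstruction over an arbitrary co-prime base.
    P = prod(ms)
    return sum(P // m * pow(P // m, -1, m) * r for r, m in zip(res, ms)) % P

X = 52
digits = [X % m for m in moduli]
assert crt(digits, moduli) == X          # a clean word reconstructs below M

digits[1] = (digits[1] + 2) % 5          # corrupt the mod-5 channel
Y = crt(digits, moduli)
print(Y, Y >= M)                         # 514 True: illegitimate, error caught
```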
For example, to detect overflows, a redundant modulus m_{N+1} is added to the base, and the corresponding highest-order mixed radix digit a_{N+1} is found and compared to zero. Assuming that the number being tested for overflow is not large enough to overflow the augmented range of the redundant system, overflow occurs whenever a_{N+1} is not zero.

1.3.4 RNS and Power Dissipation

RNS may reduce power dissipation because it reduces the hardware cost, the switching activity, and the supply voltage (Freking and Parhi, 1997). By employing binary-like RNS filter structures (Ibrahim, 1994), it has been reported that RNS reduces the bit activity by up to 38% in (4 × 4)-bit multipliers. As the critical path in an RNS architecture increases logarithmically with the equivalent binary word length, RNS can tolerate a larger reduction in the supply voltage than the corresponding binary architecture while achieving a particular delay specification. To demonstrate the overall impact of the RNS on the power budget of an FIR filter, Freking and Parhi (1997) report that a filter unit with 16-bit coefficients and 32-bit dynamic range, operating at 50 MHz, dissipates 26.2 mW on average for a two's complement implementation, while the equivalent RNS architecture dissipates 3.8 mW. Power dissipation reduction becomes more significant as the number of filter taps increases, and a 3-fold reduction is possible for filters with more than 100 taps.

Low power may also be achieved via a different RNS implementation. It has been suggested to one-hot encode the residues in an RNS-based architecture, thus defining the one-hot RNS (OHR) (Chren, 1998). Instead of encoding a residue value x_i in conventional positional notation, an (m - 1)-bit word is employed, in which the assertion of the ith bit denotes the residue value x_i. The one-hot approach allows for a further reduction in bit activity and power-delay products using residue arithmetic. OHR is found to require simple circuits for processing. The power reduction is rendered possible because all basic operations (i.e., addition, subtraction, and multiplication) as well as the RNS-specific operations of scaling (i.e., division by a constant), modulus conversion, and index computation are performed using transpositions of bit lines and barrel shifters. The performance of the obtained residue architectures is demonstrated through the design of a direct digital frequency synthesizer that exhibits a power-delay product reduction of 85% over the conventional approach (Chren, 1998).

RNS Signal Activity for Gaussian Input
The bit activity in an RNS architecture with positionally encoded residues has been experimentally studied for the encoding of 8-bit data using the base {2, 151}, which provides a linear fixed-point dynamic range of approximately 8.24 bits. Assuming data sampled from a Gaussian process, the bit assertion activities of the particular RNS, of an 8-bit sign-magnitude, and of an 8-bit two's-complement system were measured and compared. The results are depicted in Figure 1.8 for 100 Monte Carlo runs. It is observed that RNS performs better than the two's complement representation for anticorrelated data and slightly worse than the sign-magnitude and two's complement representations for uncorrelated and correlated sequences.
FIGURE 1.8 Number of Low-to-High Transitions for 100 Monte Carlo Runs. (A) Strongly anticorrelated (r = -0.99) Gaussian data for the two's complement, RNS, and sign-magnitude number systems. (B) Uncorrelated (r = 0) Gaussian data. (C) Strongly correlated (r = 0.99) Gaussian data.
References

Bayoumi, M.A., Jullien, G.A., and Miller, W.C. (1983). Models of VLSI implementation of residue number system arithmetic modules. Proceedings of the 6th Symposium on Computer Arithmetic, 412–415.
Chandrakasan, A.P., and Brodersen, R.W. (1995). Low power digital CMOS design. Boston: Kluwer Academic Publishers.
Chren, W.A., Jr. (1998). One-hot residue coding for low delay-power product CMOS design. IEEE Transactions on Circuits and Systems—Part II 45, 303–313.
Freking, W.L., and Parhi, K.K. (1997). Low-power FIR digital filters using residue arithmetic. Proceedings of the Thirty-First Asilomar Conference on Signals, Systems, and Computers, 739–743.
Ibrahim, M.K. (1994). Novel digital filter implementations using hybrid RNS-binary arithmetic. Signal Processing 40, 287–294.
Koren, I. (1993). Computer arithmetic algorithms. Englewood Cliffs, NJ: Prentice Hall.
Orginos, I., Paliouras, V., and Stouraitis, T. (1995). A novel algorithm for multioperand logarithmic number system addition and subtraction using polynomial approximation. Proceedings of the International Symposium on Circuits and Systems, III.1992–III.1995.
Paliouras, V., and Stouraitis, T. (1996). A novel algorithm for accurate logarithmic number system subtraction. Proceedings of the International Symposium on Circuits and Systems 4, 268–271.
Paliouras, V., and Stouraitis, T. (2001). Signal activity and power consumption reduction using the logarithmic number system. Proceedings of the IEEE International Symposium on Circuits and Systems, II.653–II.656.
Paliouras, V., and Stouraitis, T. (2001). Low-power properties of the logarithmic number system. Proceedings of the 15th Symposium on Computer Arithmetic (ARITH15), 229–236.
Peebles, P.Z., Jr. (1987). Probability, random variables, and random signal principles. New York: McGraw-Hill.
Sacha, J.R., and Irwin, M.J. (1998). The logarithmic number system for strength reduction in adaptive filtering. Proceedings of the International Symposium on Low-Power Electronics and Design, 256–261.
Soderstrand, M.A., Jenkins, W.K., Jullien, G.A., and Taylor, F.J. (1986). Residue number arithmetic: Modern applications in digital signal processing. New York: IEEE Press.
Stouraitis, T. (1986). Logarithmic number system: Theory, analysis and design. Ph.D. diss., University of Florida.
Stouraitis, T., Kim, S.W., and Skavantzos, A. (1993). Full adder-based units for finite integer rings. IEEE Transactions on Circuits and Systems—Part II 40, 740–745.
Szabó, N., and Tanaka, R. (1967). Residue arithmetic and its applications to computer technology. New York: McGraw-Hill.
Taylor, F., Gill, R., Joseph, J., and Radke, J. (1988). A 20-bit logarithmic number system processor. IEEE Transactions on Computers 37, 190–199.
Taylor, F.J., Papadourakis, G., Skavantzos, A., and Stouraitis, T. (1985). A radix-4 FFT using complex RNS arithmetic. IEEE Transactions on Computers C-34, 573–576.
2 Custom Memory Organization and Data Transfer: Architectural Issues and Exploration Methods Francky Catthoor, Erik Brockmeyer, Koen Danckaert, Chidamber Kulkani, Lode Nachtergaele, and Arnout Vandecappelle IMEC, Leuven, Belgium
2.1 Introduction ....................................................................................... 191
2.2 Custom Memory Components ............................................................... 192
    2.2.1 General Principles and Storage Classification . 2.2.2 Register Files and Local Memory Organization . 2.2.3 RAM Organization
2.3 Off-Chip and Global Hierarchical Memory Organization ........................... 195
    2.3.1 The External Data Access Bottleneck . 2.3.2 Power Consumption Issues . 2.3.3 Synchronization and Access Times for High-Speed Off- and On-Chip RAMs . 2.3.4 High-Density Issues
2.4 Code Rewriting Techniques to Improve Data Reuse and Access Locality ........ 198
    2.4.1 Methodology . 2.4.2 Illustration on Cavity Detection Application
2.5 How to Meet Real-Time Bandwidth Constraints ....................................... 201
    2.5.1 The Cost of Data Transfer Bandwidth . 2.5.2 Energy Cost Versus Cycle Budget Trade-Off . 2.5.3 Exploiting the Use of the Pareto Curves
2.6 Custom Memory Organization Design .................................................... 206
    2.6.1 Impact . 2.6.2 Difficulties . 2.6.3 Storage Cycle Budget Distribution . 2.6.4 Memory Allocation and Assignment . 2.6.5 Results
2.7 Data Layout Reorganization for Reduced Memory Size .............................. 210
    2.7.1 Example Illustration . 2.7.2 Data Layout Reorganization Methodology . 2.7.3 Voice Coder Algorithm and Experimental Results . 2.7.4 Summary
References .......................................................................................... 214
2.1 Introduction

Because of the eternal push for more complex applications with correspondingly larger and more complicated data types, storage technology is taking center stage in more and more applications (Lawton, 1999), especially in the information technology area, including multimedia processing, network protocols, and telecom terminals. In addition, the access speed, size, and power consumption associated with the available storage devices form a severe bottleneck in these systems, especially in an embedded context.
In this chapter, several building blocks for memory storage will be investigated first, with the emphasis on their internal architectural organization. Section 2.2 presents a general classification of the main memory components for customized organizations, including register files and on-chip SRAMs and DRAMs. Next, Section 2.3 explains off-chip and global hierarchical memory organization issues. Apart from the storage architecture itself, the way data are mapped to these architecture components is important for a good overall memory management solution. Actually, these issues are gaining in importance in the current age of deep submicron technologies, where technology and circuit solutions are not sufficient on their own to solve the system design bottlenecks. In current practice, designers usually go for the highest speed implementation for most submodules of a complex system, even when real-time constraints apply for the global design. Moreover, the design tools for exploration support (e.g., compilers and system synthesis tools) focus mainly on the performance aspect. The system cost, however, can often be significantly reduced by system-level code transformations
or trading-off cycles spent in different submodules. Therefore, the last four sections of this chapter are devoted to different aspects of data transfer and storage exploration: code rewriting techniques to improve data reuse and access locality (Section 4), how to meet real-time bandwidth constraints (Section 5), custom memory organization design (Section 6), and data layout reorganization for reduced memory size (Section 7). The main emphasis lies on custom processor contexts. Realistic multimedia and telecom applications are used to demonstrate the impressive effects of such techniques.
2.2 Custom Memory Components

In a custom processing context, many different options are open to the designer for the organization of the data storage and the access to it. Before going into the usually hierarchical organization of this storage (see Section 2.3), this section presents the different components that make up the primitives of the hierarchy. The goal of a storage device is in general to store a number of n-bit data words for a short- or long-term period. Under control of the address unit(s), these data words are transferred to the custom processing units (called processors further on) at the appropriate point in time (cycle), and the results of the operations are then written back in the storage device for future use. Due to the different characteristics of storage and access, different styles of devices have been developed.
execution units. This is typically called foreground memory. The devices for long-term storage are in general meant for (much) larger capacities (from 64 to 16 M words) and take a separate cycle for read or write access (Figure 2.2). Both categories are described in more detail in the next subsections. Six other important distinctions between short-term and long-term storage devices can be made using the tree-like "genealogy" of storage devices presented in Figure 2.3:
1. Read-only and read or write (R/W) access: Some memories are used only to store constant data, such as read-only memories or ROMs. Good alternatives include programmable logic arrays (PLA) or multilevel logic circuits, especially when the amount of data is relatively small. In most cases, data needs to be overwritable at high speeds, which means that read and write are treated with the same priority (R/W access),
A very important distinction between memories (for frequent and repetitive use) concerns short-term versus long-term storage. Short-term storage devices are in general located very close to the operators and require a very small access time. Consequently, they should be limited to a relatively small capacity (less than 32 words typically) and are usually taken up in feedback loops over the operators (e.g., RegfA-BufA-BusA-EXU-RegfA in Figure 2.1) or at the input of
FIGURE 2.1 Register File in Feedback Loop of Custom Data Path

FIGURE 2.2 Large Capacity Background Memory Communicating with Custom Data Path
2.2.1 General Principles and Storage Classification
FIGURE 2.3 Storage Classification (tree: read-only (ROM, PLA) vs. write-few (EAROM, PROM) vs. read/write; volatile vs. nonvolatile; sequential addressing (FIFO, LIFO, PAM) vs. direct addressing; single-port, dual-port, or multiport; register file vs. RAM; static vs. dynamic (SDRAM))
such as in random-access memories or RAMs. In some cases, the ROMs can be made electrically alterable (= write-few) with high energies (EAROM) or programmable by means of fuses (PROM), for example. Only the R/W memories will be discussed further on.
2. Volatile or not: For R/W memories, the data are usually removed once the power goes down. In some cases, this can be avoided, but these nonvolatile options are expensive and slow. Examples are magnetic media and tapes that are intended for slow access of mass data. This discussion is restricted to the most common case on chip, namely, the volatile one.
3. Address mechanism: Some devices require only sequential addressing, such as the first-in first-out (FIFO) queue or the first-in last-out (FILO) stack structures, which put a severe restriction on the order in which the data are read out. Still, this restriction is acceptable for many applications. A more general but still sequential access order is available in a pointer-addressed memory (PAM). In most cases, however, the address sequence should be random (including repetition), which requires a direct addressing scheme. An important requirement in this case is that the access time should be independent of the address selected.
4. The number of independent addresses and corresponding gateways (busses) for access: This parameter can be one (single-port), two (dual-port), or even more (multiport) ports. Any of these ports can be for reading only, writing only, or R/W. Of course, the area occupied will increase considerably with the number of ports.
5. Internal organization of the memories: The memory can be meant for capacities that remain small or that can become large. Here, a trade-off is usually involved between speed and area efficiency. The register files in subsection 2.2.2 constitute an example of the fast small-capacity organizations that are usually also dual-ported or even multiported. The queues and stacks are meant for medium-sized capacities. The RAMs in subsection 2.2.3 can become extremely large (up to 256 Mb for the state-of-the-art RAM) but are also much slower in random access.
6. Static or dynamic: For R/W memories, the data can remain valid as long as Vdd is on (static cell in SRAM), or the data should be refreshed every 1.2 ms (dynamic cell in DRAM). In the dynamic class, high-throughput synchronous DRAMs (SDRAM) are also available. Circuit-level issues are discussed in overview articles concerning SRAMs, such as in Evans (1995), and for DRAMs, such as in Itoh (1997) and Prince (1994).
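The addressing-mechanism distinction above (sequential FIFO/stack order versus direct random access) can be made concrete with a small behavioral sketch; the class names are invented for illustration and are not from the chapter:

```python
from collections import deque

class Fifo:
    """Sequential addressing: words leave in the order they entered."""
    def __init__(self):
        self._q = deque()
    def write(self, word):
        self._q.append(word)
    def read(self):
        return self._q.popleft()          # oldest word first, no other order

class Stack:
    """FILO/stack structure: the newest word is read first."""
    def __init__(self):
        self._s = []
    def write(self, word):
        self._s.append(word)
    def read(self):
        return self._s.pop()

class Ram:
    """Direct addressing: any address sequence, including repetition."""
    def __init__(self, n_words):
        self._cells = [0] * n_words
    def write(self, addr, word):
        self._cells[addr] = word
    def read(self, addr):
        return self._cells[addr]

f, s, r = Fifo(), Stack(), Ram(8)
for w in (10, 11, 12):
    f.write(w)
    s.write(w)
r.write(5, 99)
print(f.read(), s.read(), r.read(5))      # 10 12 99
```

The read-out order is hard-wired for the FIFO and the stack, while the RAM accepts an arbitrary address per access — which is exactly why direct addressing demands an access time independent of the address selected.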
2.2.2 Register Files and Local Memory Organization This subsection discusses the register file and local memory organization. An illustrative organization for a dual port register file with two address busses, where the separate read and write addresses are generated from an address calculation unit (ACU), is shown in Figure 2.4. In this case, two data busses (A and B) are used but only in one direction, so the write and read addresses directly control the port access. In general, the number of different address words can be smaller than the number of port(s) when they are shared (e.g., for either read or write), and the busses can be bidirectional. Additional control signals decide whether to write or read and for which port the address applies. The number of address bits per word is log2 (N ). The register file of Figure 2.4 can be used very efficiently in the feedback loop of a data path as already illustrated in Figure 2.1. In general, the file is used only for the storage of temporary variables in the application running on the data path (sometimes also referred to as execution unit). Such register files (regfiles) are also used heavily in most modern general-purpose RISCs and especially for modern multimedia-oriented signal processors that have regfiles up to 128 locations.1 For multimedia-oriented VLIW processors or recent super-scalar processors, regfiles with a very large access bandwidth, up to 17 ports, are provided (Jolly, 1991). Application-specific instruction-set processors (ASIPs) and custom processors make heavy use of regfiles for the same purpose. It should be noted that although it has the clear advantage of very fast access, the number of data words to be stored should be minimized as much as possible due to the power- and area-intensive structure of such register files (both due to the decoder and the cell overhead). Detailed circuit issues will not be discussed here (for review, see Weste and Eshraghian [1993]). 
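The organization just described can be sketched behaviorally as follows (a hypothetical model, not from the chapter; the port naming mimics Figure 2.4 with DataIn on BusB and DataOut on BusA):

```python
import math

class DualPortRegfile:
    """Behavioral sketch of a register file with separate read and write
    ports, each driven by its own address as in Figure 2.4."""
    def __init__(self, n_words, word_bits=32):
        self.addr_bits = int(math.log2(n_words))   # log2(N) address bits/port
        self._mask = (1 << word_bits) - 1
        self._cells = [0] * n_words

    def cycle(self, write_addr, write_data, read_addr):
        """One clock cycle: read port and write port operate concurrently."""
        data_out = self._cells[read_addr]                   # read port (BusA)
        self._cells[write_addr] = write_data & self._mask   # write port (BusB)
        return data_out

rf = DualPortRegfile(16)                 # 16 locations -> 4 address bits
rf.cycle(write_addr=3, write_data=0xABCD, read_addr=0)
print(rf.addr_bits)                      # 4
print(hex(rf.cycle(0, 0, 3)))            # 0xabcd
```

Because both ports fire in the same cycle, control only has to steer the two addresses — the model mirrors why adding ports grows the decoder and cell overhead that makes large regfiles area- and power-hungry.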
After this brief discussion of the local foreground memories, we will now proceed with on- and off-chip background memories of the random access type.

FIGURE 2.4 Regfile with Both R and W Addresses (read and write addresses generated by an address calculation unit (ACU) feeding the address decoders; DataIn on BusB, DataOut on BusA)
In the following subsections, the most important R/W-type memories and their characteristics are investigated in more detail.
1 In this case, it becomes doubtful whether a register file is really a good option due to the very high area and power penalty.
2.2.3 RAM Organization
A large variety of possible types of RAMs for use as main memories has been proposed in the literature; research on RAM technology is still very active, as demonstrated by the results presented at the International Solid-State Circuits Conference (ISSCC) and Custom Integrated Circuits Conference (CICC). Summary articles are available by Comerford and Watson (1992), Prince (1994), Oshima et al. (1997), and Lawton (1999). The general organization depends on the number of ports, but usually single-port structures are encountered in the large-density memories. The discussion here is likewise restricted to single-port structures. Most other distinguishing characteristics are related to the circuit design (and the technological issues) of the RAM cell, the decoder, and the auxiliary R/W devices. In this subsection, only the general principles are discussed. For a B-bit organized RAM with a capacity of 2^k words, the floor plan in Figure 2.5 is the basis for everything. Notice the presence of read and write amplifiers that are necessary to drive or sense the (very) long bit-lines in the vertical direction. Notice also the presence of a write-enable (WE) signal for controlling the R/W option and a chip select (CS) control signal that is mainly used to save power. For large-capacity RAMs, the basic floor plan of Figure 2.5 leads to a very slow realization because of bit-lines that are too long. For this purpose, postdecoding is applied. This leads to the use of an X and a Y decoder (Figure 2.6); the flexibility of the floor plan shape is now used to end up with a near square (dimensions x by y), which makes use of the chip area in the most optimal way and reduces the access time and power (wire-length related). To achieve this, the following equations can be applied for a k-bit total address: x + y = k and x = y + log2(B), leading to x = (log2(B) + k)/2 and y = k - x for maximal "squareness." This breakup of a large memory plane into several subplanes is also very important for low-power memories.
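The "squareness" equations above can be checked with a few lines (a sketch; the helper name is mine):

```python
import math

def postdecode_split(k, B):
    """Split a k-bit word address for a RAM of 2**k words of B bits into
    x row bits (X decoder) and y column bits (Y postdecoder) so that the
    bit array is square: x + y = k and x = y + log2(B)."""
    x = (math.log2(B) + k) / 2
    y = k - x
    return int(x), int(y)

# The k = 8, B = 4 example of Figure 2.6: 5 row bits and 3 column bits,
# giving a 2**5 x (2**3 * 4) = 32 x 32 bit array.
print(postdecode_split(8, 4))    # (5, 3)
```

For the Figure 2.6 example this reproduces the X = 5 and Y = 3 decoder widths shown there: 32 rows against 8 words of 4 bits, i.e. 32 columns.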
In that case, however, care should be taken to enable only the memory
FIGURE 2.5 Basic Floor Plan for B-Bit RAM with 2^k Words (memory cell matrix, address decoder, read/write amplifiers, I/O buffer, and CS/WE control)

FIGURE 2.6 Example of Postdecoding (X decoder with X = 5 address lines and Y decoder with Y = 3, for the k = 8, B = 4 example)

FIGURE 2.7 Partitioning of Memory Matrix Combined with Postdecoding
plane that contains data needed in a particular address cycle. If possible, the data used in successive cycles should also come from the same plane because activating a new plane takes up a significant amount of extra precharge power. An example floor plan in Figure 2.6 is drawn for k = 8 and B = 4. The memory matrix can be split up into two or more parts to reduce the length of the word lines by a factor of 2 or more. This results in a typically heavily partitioned floor plan (Itoh, 1997), as shown in a simplified form in Figure 2.7. It should also be noted that the word length of data stored in the RAM is usually matched to the requirement of the application in the case of an on-chip RAM embedded in an ASIC. For RAM chips, this word organization normally has been
FIGURE 2.8 Specialized Storage Units (video RAM, frame buffer RAM, dual-port RAM). Thin arrows are serial ports; wide arrows represent parallel ports.
standardized to a few choices only. Most large RAMs are 1-bit RAMs. However, with the push for more application-specific hardware, 4-bit (nibble), 8-bit (byte), and 16-bit RAMs are now commonly produced. For on-chip RAMs, clearly more options can be made available in a memory generator. In addition to the mainstream evolution of these SRAMs, DRAMs, and SDRAMs, more customized large-capacity, high-bandwidth memories have been proposed that are intended for more specialized purposes. Examples are video RAMs, very wide DRAMs, and SRAMs with more than two ports (see the eight-port SRAM described by Takayanagi et al., 1996, and review Prince, 1994). See also Figure 2.8.
2.3 Off-Chip and Global Hierarchical Memory Organization

Custom processors used in a data-dominated application context are driven from a customized memory organization. In programmable instruction-set processors, this organization is usually constructed according to a rigorous hierarchy that can be considered almost like a bidirectional pipeline from disk² over main memory and L2/L1 cache to the multiport register file. Still, the pipeline gets saturated and blocked more and more due to the large latencies that are introduced compared to the CPU clock cycles in current process technologies. This happens especially for the off-chip memories in the pipeline. In a custom processor context, more options are open to increase the on-chip bandwidth to the data, but similar restrictions apply off-chip. These issues are discussed in more detail in Subsection 2.3.1. Next, a brief literature overview is presented of some interesting evolutions of current and future stand-alone RAM memory circuit organizations in different contexts. In particular, three different "directions" will be highlighted describing the apparent targets of memory vendors. They differ, namely, in terms of their most emphasized cost functions: power, speed, or density. Some of these directions can be combined, but several of them are incompatible. One conclusion of this literature study is that the main emphasis in almost all directions is on the access structure and not on the way the actual
storage takes place in the ‘‘cell.’’ This access structure, which includes the ‘‘selection’’ part and the ‘‘input/output’’ part, dominates the overall cost in terms of power, speed, and throughput. When power-sensitive memories are constructed, the area contribution also becomes more balanced between the heavily distributed memory matrices and the access structures.
2.3.1 The External Data Access Bottleneck

Most older DRAMs are of the pipelined page-mode type. Pipelined memory implementations improve the throughput of a memory, but they don't improve the latency. For instance, in an extended data output (EDO) RAM, the address decoding and the actual access to the storage matrix (of the previous memory access) are done in parallel. In that case, the access sequence to the DRAM is also important. For example, the data sheet of an EDO memory chip specifies the sequences 10-2-2-2 and 3-2-2-2, where the numbers represent clock cycles. Each four-number sequence corresponds to four bus cycles of 64 bits each, that is, one cache line. The first sequence, 10-2-2-2, specifies the timing if the page is first opened and accessed four times. The second sequence, 3-2-2-2, specifies the timing if the page was already open and accessed four additional times; that means no other memory page was opened and accessed in between those times. The last sequence repeats as long as memory is accessed in the same page. The data sheet in question relates to a memory bus running at 66 MHz. Taking another processing speed, say a 233-MHz processor, the timing becomes 35-7-7-7 and 11-7-7-7 in processor clocks (refer to Table 2.1). Synchronous DRAMs (SDRAMs) can have a larger pipeline that provides a huge theoretical bandwidth. Basically, the internal state machine enables the pipeline to be enlarged. An SDRAM can sustain this high throughput rate by interleaving data accesses over several banks. However, the address needs to be known several cycles prior to when the data are actually needed. Otherwise the data path will stall, canceling the advantage of the pipelined memory. As already mentioned, modern stand-alone DRAM chips, which are often of this SDRAM type, also offer low-power solutions, but this comes at a price.
Internally they contain banks and a small cache with a (very) wide width connected to the external high-speed bus

TABLE 2.1 Memory Architecture and Timing for a System Using the Pentium II Processor and EDO Memory

                Bus clocks    CPU clocks at 233 MHz    Total CPU clocks (bandwidth)
  L1 cache      1-1-1-1       1-1-1-1                  4 (1864 MB/s)
  L2 cache      5-1-1-1       10-2-2-2                 16 (466 MB/s)
  EDO memory    10-2-2-2      35-7-7-7                 56 (133 MB/s)
                3-2-2-2       11-7-7-7                 32 (233 MB/s)
  SDRAM         11-1-1-1      39-4-4-4                 51 (146 MB/s)
                2-1-1-1       7-4-4-4                  19 (392 MB/s)
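The numbers in Table 2.1 can be reproduced with a short sketch. The 32-byte cache line (4 bus cycles of 64 bits) follows from the text; the round-to-nearest clock conversion is an assumption made here to match the table:

```python
def cpu_timing(bus_seq, bus_mhz, cpu_mhz):
    """Convert a per-beat bus-clock sequence into CPU clocks."""
    return [round(beats * cpu_mhz / bus_mhz) for beats in bus_seq]

def bandwidth_mb_s(cpu_clocks, cpu_mhz, line_bytes=32):
    """Sustained bandwidth when one cache line takes sum(cpu_clocks) cycles."""
    return line_bytes * cpu_mhz / sum(cpu_clocks)

edo_miss = cpu_timing([10, 2, 2, 2], bus_mhz=66, cpu_mhz=233)
print(edo_miss)                                # [35, 7, 7, 7]
print(round(bandwidth_mb_s(edo_miss, 233)))    # 133
print(cpu_timing([11, 1, 1, 1], 66, 233))      # [39, 4, 4, 4] (SDRAM row)
```

The same conversion yields the 11-7-7-7 page-hit row from 3-2-2-2, and the bandwidth column follows directly from the total CPU clocks per line.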
2 This is possibly with a disk cache to hide the long latency.
FIGURE 2.9 External Data Access Bottleneck Illustration with SDRAM (a client with an on-chip cache hierarchy connected over a 128-1024 bit data bus to SDRAM banks with local latches and a global bank select/control)
(see Figure 2.9) (Kim et al., 1998; Kirihata et al., 1998). So the low-power operation per bit is only feasible when the chips operate in burst mode with entire (parts of) memory column(s) transferred over the external bus. This is not directly compatible with the actual use of the data in the processor data paths, so without a buffer to the processors, most of the data words that are exchanged would be useless (and discarded). Obviously, the effective energy consumption per useful bit becomes very high in that case, and the effective bandwidth is low. Therefore, a hierarchical and typically much more power-hungry intermediate memory organization is needed to match the central DRAM to the data ordering and bandwidth requirements of the processor data paths.
The reduction of the power consumption in fast random-access memories is not as advanced as in DRAMs; moreover, progress for this type of memory is saturating because many circuit- and technology-level tricks have already been applied in SRAMs as well. As a result, fast SRAMs keep on consuming on the order of watts for high-speed operation around 500 MHz. So the memory-related system power bottleneck remains a very critical issue for data-dominated applications. From the process technology point of view, this problem is not so surprising, especially for submicron technologies. The relative power cost of interconnections is increasing rapidly compared to the transistor (active circuit) related components. Clearly, local data paths and controllers themselves contribute little to this overall interconnect compared to the major data/instruction busses and the internal connections in the large memories. Hence, if all other parameters remain constant, the energy consumption (and also the delay or area) in the storage and transfer organization will become even more dominant in the future, especially for deep submicron technologies.
The remaining basic limitation lies in transporting the data and the control (like addresses and internal signals) over large on-chip distances and in storing them. One last technological recourse to try to alleviate the energy-delay bottleneck is to embed the memories as much as possible on-chip. This has been the focus of several recent activities, such as the Mitsubishi announcement of an SIMD processor with a large distributed DRAM in 1996 (Tsuruda et al., 1996), followed by the offering of "embedded DRAM" technology by several other vendors, and the IRAM initiative of Dave Patterson's group at U.C. Berkeley (Patterson et al., 1997).
The results show that the option of embedding logic on a DRAM process leads to a reduced power cost and an increased bandwidth between the central DRAM and the rest of the system. This is indeed true for applications where the increased processing cost is allowed (Wehn et al., 1998). It is a one-time drop, however, after which the widening energy-delay gap between the storage and the logic will keep on progressing due to the unavoidable evolution of the relative interconnect contributions (see previous discussion). So in the longer term, the bottleneck should also be broken by other means. In Sections 2.4 to 2.7, it is shown that breaking this bottleneck is feasible, with quite spectacular effects, at the level of the system-design methodology. The price paid will be increased design complexity that can, however, be offset with appropriate design methodology support tools.
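The burst-mode inefficiency described in this section can be quantified with a back-of-the-envelope sketch; all numbers below (burst length, word width, energy, and time per burst) are illustrative assumptions, not figures from the chapter:

```python
def effective_cost(useful_words, word_bits, energy_per_burst_nj, burst_time_ns):
    """Energy per *useful* bit (pJ/bit) and effective bandwidth (Gbit/s).
    The whole burst costs the same energy and time no matter how many of
    its words the data path actually consumes."""
    useful_bits = useful_words * word_bits
    energy_pj_per_bit = 1000.0 * energy_per_burst_nj / useful_bits
    eff_bw_gbit_s = useful_bits / burst_time_ns    # bits per ns = Gbit/s
    return energy_pj_per_bit, eff_bw_gbit_s

# Assumed: a 16-word burst of 32-bit words costing 10 nJ and 40 ns.
print(effective_cost(16, 32, 10.0, 40.0))   # all 16 words useful
print(effective_cost(2, 32, 10.0, 40.0))    # only 2 words useful
```

Under these assumed numbers, dropping from 16 useful words to 2 makes the energy per useful bit and the effective bandwidth each a factor of 8 worse — the mechanism behind the intermediate buffering argument above.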
2.3.2 Power Consumption Issues

In many cases, a processor requires one or more large external memories, mostly of the (S)DRAM type, to store long-term data. For data-dominated applications, the total system power cost in the past was for the most part attributed to the presence of these external memories on the board. Because of the heavy push toward lower power solutions to keep the package costs low, and also for mobile applications or due to reliability issues, the power consumption of such external DRAMs has been reduced significantly. This has also been a very active research topic. The main concepts are a hierarchical organization with:
. Usually more than 32 divisions for RAM sizes above 16 Mb (Itoh et al., 1995).
. Wide memory words to reduce the access speed (Amrutur and Horowitz, 1994).
. Multidivided arrays (both for word line and data line) with up to 1024 divisions in a single matrix (Sugibayashi et al., 1993).
. Low on-chip Vdd up to 0.5 V (Yamagata et al., 1995; Yamauchi et al., 1996), with Vdd/2 precharge (Yamagata et al., 1995).
. Negative word line drive (NWD) (Itoh et al., 1995) or over-VCC grounded data storage (OVGS) (Yamauchi et al., 1996) to reduce the leakage.³
3 But both of these concepts require several on-chip voltages to be fabricated, some of them negative.
. Level-controllable (Yamagata et al., 1995) or boosted (Morimura and Shibata, 1996) word line for further reducing leakage.
. Multi-input selectively precharged NAND decoder (Morimura and Shibata, 1996).
. Driving level-controllable (Yamagata et al., 1995) or boosted (Morimura and Shibata, 1996) word line instead of the purely dynamic decoder of the past.
. Reduced bit/word line swing.⁴
. Differential bus drivers.
. Charge recycling in the I/O buffers (Morimura and Shibata, 1996).
Because these principles distribute the power consumption from a few "hot spots" to all parts of the architecture (Itoh et al., 1997), the end result is indeed a very optimized design for power where every piece consumes about an equal amount. An explicit power distribution is provided by Seki et al. (1993): 47% for everything to do with address selection and buffering (several subparts), 15% in the cells, 15% in the data line and sensing, and 23% in the output buffer. The end result is a very low-power SRAM component (e.g., 5 mW for a 1-Mb SRAM operating at 10 MHz in a 1-V 0.5-μm CMOS technology [Morimura and Shibata, 1996], and 10 mW for a 1-Mb SRAM operating at 100 MHz in a longer-term future 0.5-V 0.35-μm CMOS technology [Yamauchi et al., 1996]). Combined with the advance in process technology, all this has also led to a remarkable reduction of the DRAM-related power: from several watts for the 16-32-Mb generation to about 100 mW for 100-MHz operation in a 256-Mb DRAM. It is expected, however, that not much more can be gained because the "bag of tricks" now contains only the more complex solutions with a smaller return on investment. Note that the combination of all these approaches indicates a very advanced circuit technology, which still outperforms the current state-of-the-art in data path and logic circuits for low-power design (at least in industry). Hence, it can be expected that the relative power in the nonstorage parts can be more drastically reduced in the future (on condition that similar investments are made).
nearly only DRAMs. In that category, the most important subclass is becoming the SDRAMs (see previous section). In that case, the bus protocol is fully synchronous, but the internal operation of the DRAM is still partly asynchronous. For the more conventional DRAMs that also have an asynchronous interface, the internal organization involves special flags, which signal the completion of a read or write and thus the "readiness" for a new data and/or address word. In this case, a distinction has to be made between the address access delay tAA, from the moment when the address changes until the actual data are available at the output buffer, and the chip (RAM) access delay tACS, from the moment when the chip select (CS) goes up until the data are available. Ideally, tACS = tAA, but in practice tACS is the larger. So special "tricks" have to be applied to approach this ideal. For on-chip (embedded) RAMs as used in ASICs, a clocked RAM is typically preferred as it is embedded in the rest of the (usually synchronous) architecture. These are nearly always SRAMs. Sometimes they are also used as stand-alone L2 cache, but the trend is to embed that cache level as well on the processor chip. A possible timing (clock) diagram for such a RAM is illustrated in Figure 2.10. The highest speeds can be achieved with embedded SRAMs. These memories are used in the so-called L1 cache of modern microprocessors, where hundreds of megahertz are required to keep up with the speed of the CPU core. Representative examples of this category are a 2-ns (333-MHz) 256-kb (semi) set-associative cache (Covino et al., 1996) that requires 8.7 W in a 2.5-V 0.5-μm CMOS technology and a 500-MHz 288-kb directly mapped cache (Furumochi et al., 1996) requiring 1 W in a 2.5-V 0.25-μm CMOS technology.
2.3.4 High-Density Issues

The evolution at the process technology side has been the most prominent in the DRAM, leading to new device structures like the recessed-array stacked capacitor technology (Horiguchi et al., 1995), resulting in a 1-Gb version in 0.16-μm TLM CMOS, and the single-electron Coulomb cell (Yano et al., 1996) for even larger sizes. Multilevel logic is also being
2.3.3 Synchronization and Access Times for High-Speed Off- and On-Chip RAMs

An important aspect of RAMs is the access times, both for read and write. In principle, these should be balanced as much as possible, as the worst case determines the maximum clock rate from the point of view of the periphery. It should be noted that the RAM access itself can be either "asynchronous" or "clocked." Most individual RAM chips are of the asynchronous type. The evolution of putting more and more RAM on chip has led to a state defined by stand-alone memories that are
FIGURE 2.10 Timing Diagram for Synchronous Clocked RAM (phase 1: read address and data in, precharge RAM; phase 2: decode address, evaluate RAM or write; word line valid, then output valid within the access period T)

4 The reduction equals up to only 10% of the Vdd (Itoh et al., 1995), which means less than 0.1 V for low-voltage RAMs.
Case A: Linearly Polarized Wave
In case A, Δφ = 0 or π, or Im(ρφ ρθ*) = 0:

a = √(|ρθ|^2 + |ρφ|^2) = 1, b = 0, AR = ∞, α = αθ;
γ = arctan(|ρφ|/|ρθ|) if Δφ = 0; γ = -arctan(|ρφ|/|ρθ|) if Δφ = π;
ζ = undefined. (6.34)

In this case, the polarization ellipse becomes a straight line, with b = 0 and AR = ∞, and the rotational parameter ζ becomes irrelevant.

Case B: Circularly Polarized Wave
In case B, Δφ = ±π/2 (or equivalently Re(ρφ ρθ*) = 0) and |ρθ| = |ρφ|:

a = b = |ρθ| = |ρφ|, γ = 0, α = αθ, ζ = Im(ρφ/ρθ), AR = 1 (0 dB). (6.35)

In this case, the polarization ellipse becomes a circle with a = b and axial ratio AR = 1 (0 dB). The wave is called a circularly polarized (CP) wave. The tilt angle is arbitrary and is chosen to be zero. When the wave is RHP (ζ = -1), it is called right-hand circularly polarized (RHCP), and, on the other hand, when the wave is LHP (ζ = +1), it is called left-hand circularly polarized (LHCP).

Case C: Elliptically Polarized Wave with γ = 0 or π/2
In case C, Δφ = ±π/2 (or equivalently Re(ρφ ρθ*) = 0) but |ρθ| ≠ |ρφ|:

a = |ρθ|, b = |ρφ|, γ = 0, α = αθ, if |ρθ| > |ρφ|;
ζ = +1 if Δφ = +π/2; ζ = -1 if Δφ = -π/2. (6.36)

a = |ρφ|, b = |ρθ|, γ = +π/2, α = αφ, if |ρθ| < |ρφ|;
ζ = +1 if Δφ = +π/2; ζ = -1 if Δφ = -π/2. (6.37)

Case C is the special situation of an elliptical wave with the major and minor axes coinciding with the original coordinates. Depending on the relative magnitudes of ρθ and ρφ, however, one needs to choose the tilt angle as 0 or π/2, as indicated.

Case D: General Elliptically Polarized Wave
Case D includes all cases other than A, B, and C, when Δφ ≠ 0, ±π/2, π (irrespective of |ρθ| and |ρφ|):

ζ = +1 (LHP) if Im(ρφ ρθ*) > 0 (or sin Δφ > 0);
ζ = -1 (RHP) if Im(ρφ ρθ*) < 0 (or sin Δφ < 0). (6.38)

When ζ = +1, the wave is called left-hand elliptically polarized (LHEP), and when ζ = -1, the wave is called right-hand elliptically polarized (RHEP):

tan 2γ = 2Re(ρφ ρθ*)/(|ρθ|^2 - |ρφ|^2) = 2|ρθ||ρφ| cos Δφ/(|ρθ|^2 - |ρφ|^2); -π/2 ≤ γ ≤ +π/2. (6.39)

Note that there can be two values of -π/2 ≤ γ ≤ π/2 satisfying equation 6.39. One needs to eliminate one value as follows:

0 < γ < π/2 if Re(ρφ ρθ*) > 0 (or cos Δφ > 0);
-π/2 < γ < 0 if Re(ρφ ρθ*) < 0 (or cos Δφ < 0). (6.40)

a = √(|ρθ|^2 cos^2 γ + |ρφ|^2 sin^2 γ + 2|ρθ||ρφ| sin γ cos γ cos Δφ)
  = √((1/2)[|ρθ|^2 + |ρφ|^2 + {|ρθ|^4 + |ρφ|^4 + 2|ρθ|^2 |ρφ|^2 cos(2Δφ)}^(1/2)]). (6.41)

b = √(|ρθ|^2 sin^2 γ + |ρφ|^2 cos^2 γ - 2|ρθ||ρφ| sin γ cos γ cos Δφ)
  = √((1/2)[|ρθ|^2 + |ρφ|^2 - {|ρθ|^4 + |ρφ|^4 + 2|ρθ|^2 |ρφ|^2 cos(2Δφ)}^(1/2)]). (6.42)

α = arg(ρθ cos γ + ρφ sin γ) = αθ + arctan[|ρφ| sin γ sin Δφ/(|ρφ| sin γ cos Δφ + |ρθ| cos γ)]. (6.43)
It should be remembered that all the parameters a, b, γ, α, ζ, and AR, as well as the unit vectors θ̂′ and φ̂′, in equations 6.40 to 6.43 are functions of (θ, φ). Figure 6.4 shows a four-quadrant chart for Δφ that may be used to determine the type of polarization (linear, circular, or elliptical; left-handed or right-handed) and the corresponding tilt angle γ. The other parameters a, b, AR, and α can then be determined from cases A through D as appropriate.
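As a numerical companion to cases A through D, the sketch below (Python; the function name and tolerances are ours) computes the tilt angle, ellipse semi-axes, axial ratio, and rotation sense directly from the complex components ρθ and ρφ, using equations 6.38 through 6.42. The quadrant rule of equation 6.40 is absorbed by the two-argument arctangent.

```python
import cmath
import math

def polarization_params(rho_th, rho_ph):
    """Polarization-ellipse parameters of a wave with complex components
    (rho_th, rho_ph); returns (gamma, a, b, AR, zeta) per eqs. 6.38-6.42."""
    dphi = cmath.phase(rho_ph) - cmath.phase(rho_th)  # Δφ = arg(ρφ) - arg(ρθ)
    # Equation 6.39 with the quadrant rule of 6.40: atan2 keeps the sign
    # of gamma equal to the sign of cos(Δφ).
    gamma = 0.5 * math.atan2(
        2.0 * abs(rho_th) * abs(rho_ph) * math.cos(dphi),
        abs(rho_th) ** 2 - abs(rho_ph) ** 2)
    # Closed forms 6.41 and 6.42 for the major and minor semi-axes
    # (max(0, ...) guards against tiny negative rounding residues).
    s = math.sqrt(max(0.0, abs(rho_th) ** 4 + abs(rho_ph) ** 4
                  + 2.0 * abs(rho_th) ** 2 * abs(rho_ph) ** 2 * math.cos(2.0 * dphi)))
    a = math.sqrt(max(0.0, 0.5 * (abs(rho_th) ** 2 + abs(rho_ph) ** 2 + s)))
    b = math.sqrt(max(0.0, 0.5 * (abs(rho_th) ** 2 + abs(rho_ph) ** 2 - s)))
    # Equation 6.38: rotation sense from Im(ρφ ρθ*); 0 flags the linear case A.
    im = (rho_ph * rho_th.conjugate()).imag
    zeta = 0 if abs(im) < 1e-12 else (1 if im > 0 else -1)
    ar = a / b if b > 1e-12 else math.inf
    return gamma, a, b, ar, zeta
```

For equal components in phase quadrature (case B) this returns a = b and AR = 1; for Δφ = 0 it reproduces the linear case A with γ = arctan(|ρφ|/|ρθ|) and b = 0.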
6.2.8 Total Radiated Power and Radiation Efficiency

From equations 6.10 and 6.12, the total radiated power can be determined by integrating the power density function Pd(r, θ, φ) over a closed spherical surface:

P_rad = ∫(θ=0 to π) ∫(φ=0 to 2π) Pd(r, θ, φ) r^2 sin θ dφ dθ = ∫(θ=0 to π) ∫(φ=0 to 2π) pd(θ, φ) sin θ dφ dθ. (6.44)
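Equation 6.44 is straightforward to evaluate numerically. The sketch below (Python; the sin²θ test pattern and grid sizes are our illustrative choices, not from the text) integrates pd(θ, φ) sin θ over the sphere with a midpoint rule and recovers the known value 8π/3 for a Hertzian-dipole pattern pd = sin²θ.

```python
import math

def radiated_power(pd, n_th=400, n_ph=200):
    """Equation 6.44: P_rad = ∫∫ p_d(θ, φ) sin θ dφ dθ (midpoint rule)."""
    dth = math.pi / n_th
    dph = 2.0 * math.pi / n_ph
    total = 0.0
    for i in range(n_th):
        th = (i + 0.5) * dth
        for j in range(n_ph):
            ph = (j + 0.5) * dph
            total += pd(th, ph) * math.sin(th) * dth * dph
    return total

# Hertzian-dipole angular pattern (up to a constant): p_d = sin²θ.
p_rad = radiated_power(lambda th, ph: math.sin(th) ** 2)
```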
6 Antennas and Radiation

FIGURE 6.4 Different Quadrants of the Phase Difference Δφ = arg(ρφ) - arg(ρθ) Determine the Tilt Angle γ and Rotational Parameter ζ. In quadrant I (0 < Δφ < π/2), ζ = +1 (LHEP) with 0 < γ < π/2; in quadrant II (π/2 < Δφ < π), ζ = +1 (LHEP) with -π/2 < γ < 0; in quadrant III (-π < Δφ < -π/2), ζ = -1 (RHEP) with -π/2 < γ < 0; in quadrant IV (-π/2 < Δφ < 0), ζ = -1 (RHEP) with 0 < γ < π/2. At Δφ = 0, the wave is linearly polarized with γ = arctan(|ρφ|/|ρθ|); at Δφ = ±π, linearly polarized with γ = -arctan(|ρφ|/|ρθ|). At Δφ = ±π/2, the wave is LHCP/RHCP if |ρφ| = |ρθ|, or LHEP/RHEP with γ = 0 if |ρθ| > |ρφ| and γ = +π/2 if |ρθ| < |ρφ|.
The total radiated power, Prad, is less than or equal to the input power, Pin, supplied at the input of the antenna. A part of the input power, Ploss, is dissipated as heat in the material (conductor or dielectric) of the antenna itself:

P_rad = P_in - P_loss = P_in (1 - P_loss/P_in). (6.45)

η_r = 1 - P_loss/P_in. (6.46)

The parameter η_r is called the radiation efficiency. A part of Ploss is dissipated as dielectric loss (P_ld), a part as conductor loss (P_lc), and a part is sometimes lost as unwanted surface waves in the antenna substrate material (P_sw), with efficiencies associated with the individual loss mechanisms of η_d, η_c, and η_sw, respectively:

P_loss = P_ld + P_lc + P_sw. (6.47)

η_d = 1 - P_ld/P_in, η_c = 1 - P_lc/P_in, η_sw = 1 - P_sw/P_in. (6.48)

If the individual losses are small, the total radiation efficiency can be approximated as the product of the individual parts:

η_r = 1 - (P_ld + P_lc + P_sw)/P_in ≈ (1 - P_ld/P_in)(1 - P_lc/P_in)(1 - P_sw/P_in) = η_c η_d η_sw. (6.49)
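A quick numerical check of the small-loss product approximation in equation 6.49; the loss fractions below are our assumed example values, not from the text.

```python
# Assumed example loss budget; P_in normalized to 1 W.
P_in = 1.0
P_ld, P_lc, P_sw = 0.02, 0.03, 0.01   # dielectric, conductor, surface-wave losses (W)

eta_r_exact = 1.0 - (P_ld + P_lc + P_sw) / P_in   # equations 6.46-6.47
eta_d = 1.0 - P_ld / P_in                          # equation 6.48
eta_c = 1.0 - P_lc / P_in
eta_sw = 1.0 - P_sw / P_in
eta_r_product = eta_d * eta_c * eta_sw             # equation 6.49, product form
```

With these numbers the exact efficiency is 0.94 and the product form gives about 0.9411, an error of roughly 0.1%, consistent with the small-loss assumption behind equation 6.49.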
6.2.9 Antenna Input Impedance and Radiation Resistance

An antenna is seen by the input source as an equivalent impedance Zin (see Figure 6.1), with real and imaginary parts Rin and Xin, respectively. The imaginary part is produced due to stored energy in the antenna field. The real part Rin consists of two parts, Rrad and Rloss, respectively contributed due to the radiated power Prad and the power loss Ploss in the antenna:

Z_in = R_in + jX_in = R_rad + R_loss + jX_in, R_in = R_rad + R_loss. (6.50)

P_rad = |I_in|^2 R_rad/2, P_loss = |I_in|^2 R_loss/2,
P_in = |I_in|^2 (R_rad + R_loss)/2, η_r = R_rad/(R_rad + R_loss). (6.51)

The Rrad is called the radiation resistance, and Rloss is called the loss resistance of the antenna.
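The power partition in equation 6.51 can be checked with assumed values (the 72 Ω radiation resistance, 8 Ω loss resistance, and 1.5 A drive are illustrative numbers of ours, not from the text):

```python
# Assumed example values.
R_rad, R_loss = 72.0, 8.0      # ohms
I_in = 1.5                     # amperes (peak input current)

P_rad = abs(I_in) ** 2 * R_rad / 2.0            # equation 6.51
P_loss = abs(I_in) ** 2 * R_loss / 2.0
P_in = abs(I_in) ** 2 * (R_rad + R_loss) / 2.0
eta_r = R_rad / (R_rad + R_loss)                # radiation efficiency
```

Here P_rad + P_loss reproduces P_in exactly, and η_r = 72/80 = 0.9 agrees with P_rad/P_in, as equation 6.51 requires.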
Nirod K. Das

6.2.10 Resonant Frequency and Bandwidth

The input impedance Zin of the antenna is a function of the frequency of operation. Figure 6.5 shows the magnitude of the input impedance of an example antenna as a function of frequency.

FIGURE 6.5 The frequency dependence of the impedance magnitude |Zin| of an example antenna, showing the resonant frequency fr (where |Zin| = Rin, with Xin = 0) and the bandwidth Δfr of the antenna (where |Zin| falls to Rin/2).

In this case, the antenna impedance looks like that of a parallel RLC resonant circuit. The frequency, fr, for which the impedance magnitude is maximum, or equivalently the reactance is zero, is often defined as the resonant frequency. It is desirable to operate the antenna at this resonant frequency so that it can be easily matched to an input transmission line with a real characteristic impedance. The frequency span Δfr, beyond which the impedance magnitude falls below half of the resonant value, may be defined as the impedance bandwidth of the antenna. The antenna may be usable in this frequency band. Outside the antenna's bandwidth, the input power to the antenna would be significantly reflected due to impedance mismatch, resulting in poor radiation. It may be noted that Figure 6.5 shows only an example case. Depending on the type of the antenna and its application, the resonant frequency and bandwidth may be defined based on the performance parameter one desires for the particular application, such as the gain, directivity, or axial ratio. In such cases, the resonant frequency is defined as the frequency of best performance and the bandwidth as the frequency span over which the performance can be tolerated, as dictated by the application. It is meaningful to call the resonant frequency and the bandwidth of an antenna with an appropriate parameter prefix, such as the impedance-resonant frequency, the impedance-bandwidth, the gain-resonant frequency, the gain-bandwidth, the axial-ratio-resonant frequency, or the axial-ratio-bandwidth. Often, the definitions based on the different performance parameters have strong correlations with each other.

6.2.11 Directivity

How well an antenna directs the radiation in a particular angle is quantified by comparing the radiation intensity with that of an ideal isotropic radiator. An isotropic radiator is defined as an antenna that radiates uniformly in all angles. Let the total radiated power from a given antenna, Prad, also be radiated from the ideal isotropic radiator. The resulting power density, Pd(iso), at a given distance r is independent of θ and φ:

P_d(iso) = P_rad/(4πr^2). (6.52)

The ratio of the power density Pd(r, θ, φ) from the actual antenna, in a given direction (θ, φ), to that from the isotropic radiator, Pd(iso), is called the directive gain function Dg(θ, φ):

D_g(θ, φ) = P_d(r, θ, φ)/P_d(iso) = 4πr^2 P_d(r, θ, φ)/P_rad = 4π p_d(θ, φ)/P_rad. (6.53)

P_d(r, θ, φ) = [P_rad/(4πr^2)] D_g(θ, φ), p_d(θ, φ) = [P_rad/(4π)] D_g(θ, φ). (6.54)

The maximum value of Dg(θ, φ), occurring along the main-beam direction (θmax, φmax), is called the directivity (D):

Directivity = D = D_g(θ, φ)|max = D_g(θmax, φmax). (6.55)

Now, using equation 6.44 in equation 6.53, Dg(θ, φ) can be expressed using either pd(θ, φ) or the power pattern P̄d(θ, φ):

D_g(θ, φ) = 4π p_d(θ, φ)/[∫(φ=0 to 2π) ∫(θ=0 to π) p_d(θ, φ) sin θ dθ dφ]
          = 4π P̄_d(θ, φ)/[∫(φ=0 to 2π) ∫(θ=0 to π) P̄_d(θ, φ) sin θ dθ dφ]. (6.56)

Dg, Rrad, and e Relationship: Combining equations 6.22, 6.51, and 6.54, a useful relationship results between the field vector e(θ, φ), the directive gain function Dg(θ, φ), and the radiation resistance Rrad:

|I_in|^2 |e(θ, φ)|^2/(2Z_0) = [|I_in|^2 R_rad/(8π)] D_g(θ, φ), |e(θ, φ)| = √(Z_0 R_rad D_g(θ, φ)/(4π)). (6.57)
6.2.12 Gain

It may also be useful to find the ratio of the power density Pd(r, θ, φ) from a given antenna to that (= P′d(iso)) from an ideal isotropic radiator when supplied by the same input power Pin in both cases (unlike the same Prad considered in the last section). The ratio of Pd(r, θ, φ) to P′d(iso) is called the gain function G(θ, φ):

P′_d(iso) = P_in/(4πr^2). (6.58)

G(θ, φ) = P_d(r, θ, φ)/P′_d(iso) = 4πr^2 P_d(r, θ, φ)/P_in = 4π p_d(θ, φ)/P_in. (6.59)

P_d(r, θ, φ) = [P_in/(4πr^2)] G(θ, φ) = [P_rad/(4πr^2 η_r)] G(θ, φ); p_d(θ, φ) = [P_in/(4π)] G(θ, φ). (6.60)

Comparing equations 6.59 and 6.53, and using the efficiency relationship of equation 6.45, a simple relationship is obtained between G(θ, φ) and Dg(θ, φ):

G(θ, φ) = η_r D_g(θ, φ). (6.61)

The maximum value of the gain function is referred to as the maximum gain or simply the gain G:

Gain = G = G(θ, φ)|max = G(θmax, φmax) = η_r D. (6.62)

G, Rin, and e Relationship: Combining equations 6.22, 6.51, and 6.60 leads to a useful relationship between the field vector e(θ, φ), the gain function G(θ, φ), and the antenna input resistance Rin:

|I_in|^2 |e(θ, φ)|^2/(2Z_0) = [|I_in|^2 R_in/(8π)] G(θ, φ), |e(θ, φ)| = √(Z_0 R_in G(θ, φ)/(4π)). (6.63)
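A small worked example of equations 6.61 through 6.63. The half-wave-dipole values D = 1.64 and Rrad = 73 Ω are standard textbook numbers, and the 2 Ω loss resistance is our assumed example:

```python
import math

Z0 = 376.73                 # free-space wave impedance (ohms)
D = 1.64                    # directivity of a half-wave dipole
R_rad, R_loss = 73.0, 2.0   # radiation and assumed loss resistances (ohms)

R_in = R_rad + R_loss
eta_r = R_rad / R_in                              # equation 6.51
G = eta_r * D                                     # equations 6.61-6.62
# Equation 6.63: field-vector magnitude in the main beam, per ampere.
e_mag = math.sqrt(Z0 * R_in * G / (4.0 * math.pi))
```

Note that Z0 Rin G = Z0 Rin ηr D = Z0 Rrad D, so this value of |e| agrees with equation 6.57, as it must.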
6.2.13 Beam Efficiency

The beam efficiency η_beam of an antenna is defined as the ratio of the radiated power, Pmainbeam, confined in the antenna main lobe to the total radiated power Prad. This quantifies how well the antenna uses the total radiated power in the desired main beam:

η_beam = P_mainbeam/P_rad = [∫∫(mainbeam) p_d(θ, φ) sin θ dθ dφ]/[∫(φ=0 to 2π) ∫(θ=0 to π) p_d(θ, φ) sin θ dθ dφ]. (6.64)
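A numerical sketch of equation 6.64 for a φ-independent pattern; the cos²θ pattern and the θ < π/3 main-lobe boundary are our assumed examples, and the 2π from the φ integration cancels in the ratio.

```python
import math

def beam_efficiency(pd, th_edge, n=2000):
    """Equation 6.64 for a pattern pd(θ) with no φ dependence; the main
    lobe is taken as the region θ < th_edge."""
    dth = math.pi / n
    main = total = 0.0
    for i in range(n):
        th = (i + 0.5) * dth
        w = pd(th) * math.sin(th) * dth   # common 2π factor omitted (cancels)
        total += w
        if th < th_edge:
            main += w
    return main / total

# Hypothetical broadside pattern pd ∝ cos²θ, main lobe bounded at θ = π/3.
eta_beam = beam_efficiency(lambda th: math.cos(th) ** 2, math.pi / 3)
```

For this pattern the integrals are elementary (∫cos²θ sinθ dθ = -cos³θ/3), giving the exact value η_beam = (7/24)/(2/3) = 7/16 = 0.4375.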
6.3 Antenna as a Receiver

This section treats the antenna as a receiving element. Before the receiving antenna can be modeled, a uniform plane wave incident on the antenna, which is placed at the origin of a spherical coordinate system, must first be modeled (see Figure 6.6).

6.3.1 Incident Plane Wave

The incident plane wave has both θ̂ and φ̂ components with arbitrary amplitudes and phases. Unlike a radiating wave propagating radially away from the origin, the incident plane wave propagates toward the origin, with an e^{+jk0 r} phase factor:

E_i = θ̂ E_iθ + φ̂ E_iφ = e_i e^{+jk0 r} = (θ̂ e_iθ + φ̂ e_iφ) e^{+jk0 r}. (6.65)

e_i = θ̂ e_iθ + φ̂ e_iφ = |e_i| ρ̂_i = |e_i|(θ̂ ρ_iθ + φ̂ ρ_iφ). (6.66)

|e_i| = √(|e_iθ|^2 + |e_iφ|^2), ρ_iθ = e_iθ/|e_i|, ρ_iφ = e_iφ/|e_i|. (6.67)
FIGURE 6.6 A General Antenna System in a Receiving Mode of Operation, Showing an Incident Plane Wave (with power density Pdi = Pdiθ + Pdiφ) and a Thevenin Equivalent Model (open-circuit voltage VR in series with Zin, delivering load voltage VL, current IL, and power PL to the load ZL) at the Input Terminals
|ρ̂_i| = √(|ρ_iθ|^2 + |ρ_iφ|^2) = √(|e_iθ|^2 + |e_iφ|^2)/|e_i| = 1. (6.68)

P_dinc = |E_i|^2/(2Z_0) = |e_i|^2/(2Z_0). (6.69)

The ρ̂_i is the polarization unit vector of the incident wave. Like the radiating polarization unit vector ρ̂, the incident polarization unit vector ρ̂_i may be transformed to a new coordinate system (θ̂′_i, φ̂′_i) aligned with the major and minor axes of the incident polarization ellipse, respectively. The angle between the θ̂ and θ̂′_i axes is the tilt angle γ_i:

ρ̂_i = e^{jα_i} ρ̂′_i = e^{jα_i}(a_i θ̂′_i - ζ_i jb_i φ̂′_i); a_i, b_i ≥ 0, a_i ≥ b_i, θ̂′_i × φ̂′_i = r̂. (6.70)

ζ_i = +1 for an LHP wave; ζ_i = -1 for an RHP wave. (6.71)

The equations in Section 6.2.7 may be used here to determine the various parameters of the incident wave. The major and minor axes a_i and b_i, the axial ratio AR_i, the tilt angle γ_i, the rotation parameter ζ_i, and the phase angle α_i should be substituted for the corresponding variables in Section 6.2.7: a, b, AR, γ, ζ, and α, respectively. However, it may be noted that, in contrast with equation 6.29, equation 6.70 has a negative sign before ζ_i. This takes into account the propagation of the incident wave in the -r̂ direction instead of the +r̂ direction of propagation for the outgoing radiated field. The change in the direction of propagation, by definition, automatically switches the "handedness" (right- or left-hand) of the wave.
6.3.2 Receive Voltage Vector

As shown in Figure 6.6, the receiving antenna is modeled as a Thevenin equivalent circuit at its output terminals. The open-circuit voltage VR can be written as a linear combination of the θ and the φ components of the incident fields:

V_R(θ, φ) = V_Rθ(θ, φ) + V_Rφ(θ, φ) = e_iθ v_Rθ(θ, φ) + e_iφ v_Rφ(θ, φ) = e_i · v_R(θ, φ). (6.72)

v_R(θ, φ) = θ̂ v_Rθ(θ, φ) + φ̂ v_Rφ(θ, φ) = |v_R(θ, φ)| ρ̂_R(θ, φ). (6.73)

ρ̂_R(θ, φ) = θ̂ ρ_Rθ(θ, φ) + φ̂ ρ_Rφ(θ, φ). (6.74)

|v_R| = √(|v_Rθ|^2 + |v_Rφ|^2), ρ_Rθ = v_Rθ/|v_R|, ρ_Rφ = v_Rφ/|v_R|. (6.75)

|ρ̂_R| = √(|ρ_Rθ|^2 + |ρ_Rφ|^2) = √(|v_Rθ|^2 + |v_Rφ|^2)/|v_R| = 1. (6.76)

The v_R(θ, φ) is called the receive vector, and the unit vector ρ̂_R(θ, φ) is called the receive polarization unit vector. Like the radiation and incident polarization unit vectors ρ̂ and ρ̂_i, the receive polarization unit vector may be transformed to a new coordinate system (θ̂′_R, φ̂′_R) aligned with the major and minor axes of the receive polarization ellipse. The angle between the θ̂ and θ̂′_R axes is the tilt angle γ_R:

ρ̂_R = e^{jα_R} ρ̂′_R = e^{jα_R}(a_R θ̂′_R + ζ_R jb_R φ̂′_R); a_R, b_R ≥ 0, a_R ≥ b_R, θ̂′_R × φ̂′_R = r̂. (6.77)

ζ_R = +1 for an LHP receiver; ζ_R = -1 for an RHP receiver. (6.78)

Like the incident polarization unit vector in equation 6.70, the equations in Section 6.2.7 may also be used here to determine the various parameters for the receive polarization unit vector ρ̂_R. It may be noted that, in contrast with equation 6.70, equation 6.77 has a positive sign before ζ_R. The rotational parameter ζ_R as a rule follows the same convention as for ζ, which is different from the rule for the incident wave ζ_i.
6.3.3 Received Power and Effective Area

Once the Thevenin voltage VR of the receiving antenna is known, the load power PL delivered to a load impedance ZL can be easily obtained (see Figure 6.6):

V_R = e_i · v_R = |e_i||v_R| ρ̂_i · ρ̂_R. (6.79)

V_L = V_R Z_L/(Z_L + Z_in), I_L = V_R/(Z_L + Z_in), P_L = (1/2)|V_R/(Z_L + Z_in)|^2 R_L. (6.80)

The effective area Ae of a receiving antenna is defined as the ratio of the load power PL to the incident power density Pdinc. The receiving antenna is thus seen as an equivalent capture area for the incident wave. The effective area Ae of an antenna must be clearly distinguished from the physical area of the antenna because, in general, the two can be significantly different from each other. For example, a commonly used wire antenna has a nonzero effective area, allowing a fairly good reception, whereas its physical area is very small, practically zero:

A_e = P_L/P_dinc = (R_L Z_0/|e_i|^2)|V_R/(Z_L + Z_in)|^2. (6.81)

A_e = R_L Z_0 |v_R|^2 |e_i|^2 |ρ̂_i · ρ̂_R|^2/(|e_i|^2 |Z_L + Z_in|^2) = R_L Z_0 |v_R|^2 |ρ̂_i · ρ̂_R|^2/|Z_L + Z_in|^2. (6.82)

The effective area can be maximized in two ways: (1) by matching the load impedance for maximum power transfer, which requires Z_L = Z_in*, or (2) by adjusting the incident and receive polarization vectors such that |ρ̂_i · ρ̂_R| = 1, whenever possible. Under these conditions, the maximum effective area Aem can be derived from equation 6.82:

A_em(θ, φ) = Z_0 |v_R(θ, φ)|^2/(4R_in); Z_L = Z_in*, |ρ̂_i · ρ̂_R| = 1 = |ρ̂′_i · ρ̂′_R|. (6.83)

The factor |ρ̂_i · ρ̂_R| is called the polarization match factor between the incident wave and the receiving antenna. It can be shown that the polarization match factor is at its maximum possible value (unity) only when γ_i = γ_R, ζ_i = ζ_R, and AR_i = AR_R (a_i = a_R, b_i = b_R). From equations 6.70 and 6.77, one can see that this condition is established when ρ̂_i = ρ̂_R*. Other possible practical situations may be considered. First, for a given receiving antenna (given ζ_R and AR_R), a given incident plane wave (given ζ_i and AR_i), and a given load impedance Z_L, the received power P_L can be maximized by just turning the antenna about the r̂ axis, such that the major axes of the incident and receive polarization ellipses are aligned (γ_i = γ_R). Second, with every other parameter remaining the same, the received power is larger if the rotational parameters are matched (ζ_i = ζ_R) than if they are mismatched (ζ_i ≠ ζ_R).
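The chain from equation 6.79 to equation 6.83 can be sketched numerically (Python; the impedance, effective-length, and field values below are our assumed examples, not from the text). The check confirms that a conjugate-matched, polarization-matched load realizes the maximum effective area of equation 6.83.

```python
Z0 = 376.73                      # free-space wave impedance (ohms)
# Assumed example values.
Z_in = complex(50.0, 10.0)       # antenna input impedance (ohms)
v_R = 0.12                       # |v_R|, receive-vector magnitude (m)
e_i = 1.0                        # |e_i|, incident field magnitude (V/m)
pol = 1.0                        # |rho_i . rho_R|, polarization match factor

def load_power(Z_L):
    """Equations 6.79-6.80: power delivered to a load Z_L."""
    V_R = e_i * v_R * pol                 # open-circuit (Thevenin) voltage
    I_L = V_R / (Z_L + Z_in)
    return 0.5 * abs(I_L) ** 2 * Z_L.real

P_dinc = e_i ** 2 / (2.0 * Z0)                   # equation 6.69
A_e = load_power(Z_in.conjugate()) / P_dinc      # equation 6.81, conjugate match
A_em = Z0 * v_R ** 2 / (4.0 * Z_in.real)         # equation 6.83, closed form
```

Under the conjugate match Z_L = Z_in*, the ratio P_L/P_dinc reproduces the closed-form A_em exactly; any other load yields less power and hence a smaller effective area.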
6.3.4 Received Noise Temperature

Figure 6.7(A) shows the noise-equivalent circuit model of an antenna. Like an ohmic resistance, the input resistance of the antenna, Rin, is associated with a noise source vna, with equivalent noise temperature Tea (Pozar, 1998). Unlike an ohmic resistance, however, Tea is not the physical temperature of the antenna. The vna consists of two parts, vnL and vnr, with noise temperatures Ta and Ter, respectively associated with the loss resistance Rloss and the radiation resistance Rrad of the antenna. The vnL is generated due to the thermal process in the antenna itself, and therefore Ta is equal to the physical temperature of
FIGURE 6.7 Noise Model for a Receiving Antenna. (A) Noise-equivalent circuit of an antenna, showing the noise voltage vnr received through radiation (associated with Rrad) and vnL produced by the antenna itself due to the material-loss resistance Rloss; the combined source vna (temperature Tea) acts in series with Zin = Rin + jXin, loaded by ZL = RL + jXL. (B) The antenna placed in a closed "perfectly absorbing" chamber in thermal equilibrium at temperature T °K, with Te(θ, φ) = T(θ, φ) = T °K (pdn = noise power per unit solid angle, as seen by an ideal isotropic antenna).
the antenna. In comparison, Ter is associated with the incoming radiation noise, which depends on the temperature distribution of the surrounding objects that generate it:

v̄_nL^2 = 4kT_a R_loss, v̄_nr^2 = 4kT_er R_rad, v̄_na^2 = 4kT_ea R_in; v̄_na^2 = v̄_nL^2 + v̄_nr^2. (6.84)

The bar over the v represents root-mean-square (rms) values per unit bandwidth. Let the total noise power per unit bandwidth, delivered by the antenna to a conjugate-matched load ZL = Z_in*, be Pna. The respective constituent parts delivered by vnL and vnr are PnL and Pnr. These noise powers can be expressed using the respective noise temperatures and the antenna radiation efficiency η_r (from equation 6.51), which then provides a relationship between Tea, Ta, and Ter:

P_nL = v̄_nL^2/(4R_in) = kT_a R_loss/R_in = kT_a(1 - η_r). (6.85)

P_nr = v̄_nr^2/(4R_in) = kT_er R_rad/R_in = kT_er η_r. (6.86)

P_na = v̄_na^2/(4R_in) = kT_ea = P_nL + P_nr; T_ea = η_r T_er + (1 - η_r)T_a. (6.87)

Relationship Between Ter and Radiation-Noise Density
Let pdn(θ, φ) be the noise-power density of the incoming radiation per unit solid angle and per unit bandwidth, as seen by an ideal, matched "isotropic" antenna (gain function Giso(θ, φ) = 1, matched effective area Aem(iso)). The matched radiation noise Pnr received by an arbitrary antenna can be expressed in terms of pdn(θ, φ) and the antenna's effective area Aem(θ, φ) relative to Aem(iso):

P_nr = ∫(Ω) p_dn(θ, φ)[A_em(θ, φ)/A_em(iso)] dΩ = ∫(Ω) p_dn(θ, φ)[G(θ, φ)/G_iso] dΩ = ∫(Ω) p_dn(θ, φ) G(θ, φ) dΩ. (6.88)

It may be noted that equation 6.88 uses the G(θ, φ) ∝ Aem(θ, φ) relationship, to be developed in equation 6.110. Then, using equations 6.86 and 6.61 in 6.88, we can relate the radiation noise temperature Ter with pdn(θ, φ) and the directive gain function Dg(θ, φ):

η_r kT_er = η_r ∫(Ω) p_dn(θ, φ) D_g(θ, φ) dΩ; T_er = ∫(Ω) [p_dn(θ, φ)/k] D_g(θ, φ) dΩ. (6.89)

Relationship Between Ter and Spatial Temperature Distribution Te(θ, φ)
Consider first the special situation when the antenna is surrounded by perfect absorbers, and the entire system is in thermal equilibrium at the temperature T = Ta, as shown in Figure 6.7(B). Under this condition, the noise temperature of the antenna should be equal to the surrounding temperature. Consequently, using equation 6.87, the radiation-noise temperature of the antenna can be shown to be the temperature of the surrounding objects T:

T_ea = T = T_a = η_r T_er + (1 - η_r)T_a; T_er = T. (6.90)

Due to symmetry of the situation, pdn(θ, φ) is independent of the angle of arrival (θ, φ). With this assumption, and using equations 6.56 and 6.90 in equation 6.89, a relationship between pdn and the absorber temperature T results:

T_er = T = (p_dn/k) ∫(Ω) D_g(θ, φ) dΩ = 4π p_dn/k; p_dn = kT/(4π). (6.91)

Equation 6.91 can be extended to arbitrary situations when the surrounding objects are at different physical temperatures T(θ, φ), or the objects can be modeled with noise-equivalent temperatures Te(θ, φ). This will relate pdn(θ, φ) to T(θ, φ) or Te(θ, φ). Then, using equation 6.89, Ter can be related to the surrounding temperature profile Te(θ, φ) via Dg(θ, φ):

p_dn(θ, φ) = kT_e(θ, φ)/(4π), T_er = (1/(4π)) ∫(Ω) T_e(θ, φ) D_g(θ, φ) dΩ. (6.92)

6.4 Transmit-Receive Communication Link

Figure 6.8(A) shows a communication link between two antennas, with their antenna parameters indicated by a subscript 1 or 2. Each antenna is described with respect to its own spherical coordinate system, represented with subscript 1 or 2, having an origin at the center of the corresponding antenna. First, antenna 1 can be treated as the transmit antenna. The transmit field, E, radiated from antenna 1 is viewed as the incident wave, e_i, for the receiving antenna 2:

E_1(r_1, θ_1, φ_1) = I_1 (e^{-jk0 r1}/r_1) e_1(θ_1, φ_1) = E_i. (6.93)

|e_i| = |I_1||e_1(θ_1, φ_1)|/r_1, ρ̂_i = e^{j(arg(I_1) - k0 r1)} ρ̂_1(θ_1, φ_1). (6.94)
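The noise relations above can be worked for a simple hypothetical scene: an isotropic pattern (Dg = 1) seeing a cold sky over the upper hemisphere and warm ground over the lower, folded into equation 6.87. All numbers below are assumed examples of ours.

```python
# Assumed scene temperatures (kelvin) and antenna parameters.
T_sky, T_ground = 10.0, 290.0
eta_r = 0.9          # radiation efficiency
T_a = 290.0          # physical antenna temperature (K)

# Equation 6.92 with D_g = 1: T_er = (1/4π) ∫ T_e dΩ; each hemisphere
# contributes half of the 4π solid angle.
T_er = 0.5 * T_sky + 0.5 * T_ground

# Equation 6.87: total antenna noise temperature.
T_ea = eta_r * T_er + (1.0 - eta_r) * T_a
```

Here T_er = 150 K, and the 10% ohmic loss at 290 K raises the total to T_ea = 164 K, illustrating how antenna loss pulls the noise temperature toward the physical temperature.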
FIGURE 6.8 A Communication Link Between Two Antennas, Shown with Antenna 1 Used as the Transmitter and Antenna 2 as the Receiver. (A) Communication link: each antenna (parameters D1, G1, ρ̂1, ρ̂R1 and D2, G2, ρ̂2, ρ̂R2) is described in its own spherical coordinate system, with r = r1 = r2 the distance between the two antennas. (B) Two-port circuit: V1 = I1 Zin1 + VR1 and V2 = I2 Zin2 + VR2, with Z11 = Zin1, Z22 = Zin2, Z12 = VR1/I2, and Z21 = VR2/I1.
Note that the incident polarization unit vector ρ̂_i, as seen by receiving antenna 2, is equal to the transmit polarization unit vector ρ̂_1 of antenna 1 multiplied by the phase factor e^{j(arg(I1) - k0 r1)} due to propagation over the distance r1 and the phase of the input current I1. The magnitude of the incident wave in equation 6.94 may be expressed in terms of the input resistance Rin1 of antenna 1 (or, alternately, the input power Pin1 of antenna 1) using equation 6.63:

|e_i| = |I_1| √(Z_0 R_in1 G_1(θ_1, φ_1)/(4πr_1^2)) = √(2Z_0 P_in1 G_1(θ_1, φ_1)/(4πr_1^2)). (6.95)

The Thevenin voltage, or the open-circuit voltage VR2, at the terminals of antenna 2 can now be written in different forms using equations 6.79, 6.93, 6.94, and 6.95:

V_R2 = I_1 (e^{-jk0 r1}/r_1) e_1(θ_1, φ_1) · v_R2(θ_2, φ_2). (6.96)

V_R2 = [|I_1||e_1(θ_1, φ_1)||v_R2(θ_2, φ_2)|/r_1] e^{j(arg(I_1) - k0 r1)} ρ̂_1(θ_1, φ_1) · ρ̂_R2(θ_2, φ_2). (6.97)

V_R2 = [√(2Z_0 P_in1 G_1(θ_1, φ_1)/(4πr_1^2))] |v_R2(θ_2, φ_2)| e^{j(arg(I_1) - k0 r1)} ρ̂_1(θ_1, φ_1) · ρ̂_R2(θ_2, φ_2). (6.98)
6.4.1 Transmit-Receive Link as a Two-Port Circuit

If antenna 2 is also excited by an input current I2, antenna 1 will receive a Thevenin terminal voltage VR1. Repeating the procedure described in the previous paragraphs, VR1 may be expressed by interchanging the subscripts 1 and 2:

V_R1 = I_2 (e^{-jk0 r2}/r_2) e_2(θ_2, φ_2) · v_R1(θ_1, φ_1). (6.99)

The link between antennas 1 and 2 may be seen as a two-port circuit, as illustrated in Figure 6.8(B). The total input voltages V1 and V2, respectively across the terminals of antennas 1 and 2, may be expressed as a superposition of the receive voltage and the transmit voltage:

V_1 = I_1 Z_in1 + V_R1, V_2 = I_2 Z_in2 + V_R2. (6.100)

Z_12 = V_R1/I_2 = (e^{-jk0 r2}/r_2) e_2(θ_2, φ_2) · v_R1(θ_1, φ_1). (6.101)

Z_21 = V_R2/I_1 = (e^{-jk0 r1}/r_1) e_1(θ_1, φ_1) · v_R2(θ_2, φ_2); r_1 = r_2. (6.102)

If the circuit is assumed to be reciprocal, then Z12 = Z21 (Pozar, 1998). Hence, the following is true:

e_1(θ_1, φ_1) · v_R2(θ_2, φ_2) = e_2(θ_2, φ_2) · v_R1(θ_1, φ_1). (6.103)

Equation 6.103 shows an important relationship in antenna theory called the antenna reciprocity relationship.
6.4.2 Relationship Between Transmit and Receive Vectors of an Arbitrary Antenna

Using equation 6.103, one can relate the transmit and receive vectors of any arbitrary antenna (substituted for antenna 1), once the transmit and receive vectors of one specific known antenna can be substituted for antenna 2. A small dipole antenna with unit input current and length l can be used as the reference antenna 2. Let this dipole current be oriented in the -θ̂_1 direction, such that the radiation from the dipole antenna 2 is maximum toward antenna 1. Alternately, one may visualize the current of the dipole antenna 2 as oriented along the ẑ_2 axis (see Figure 6.8(A)); then let the ẑ_2 axis be turned to align with the -θ̂_1 direction, so that in the line of communication θ̂_1 = θ̂_2 and θ_2 = 90°. Under this situation, using the results from Das (2004) in Section 6.2.1, the transmit and receive vectors of the dipole antenna 2 are found as:

I = I_2 ẑ_2 = -θ̂_1, I_2 = 1. (6.104)

e_2(θ_2, φ_2) = (jZ_0 l/(2λ_0)) θ̂_2 = (jZ_0 l/(2λ_0)) θ̂_1, v_R2(θ_2, φ_2) = l θ̂_2 = l θ̂_1; θ̂_1 = θ̂_2. (6.105)

Now, using equation 6.105 in equation 6.103, the relationship between the θ̂ components of the transmit and receive vectors of the arbitrary antenna 1 is established as follows:

e_1(θ_1, φ_1) · θ̂_1 l = (jZ_0 l/(2λ_0)) θ̂_1 · v_R1(θ_1, φ_1), e_1θ(θ_1, φ_1) = (jZ_0/(2λ_0)) v_R1θ(θ_1, φ_1). (6.106)

One may repeat the above procedure by reorienting the dipole antenna 2 such that the current is in the -φ̂_1 direction. This would provide the relationship between the φ̂ components of the transmit and receive vectors:

e_1φ(θ_1, φ_1) = (jZ_0/(2λ_0)) v_R1φ(θ_1, φ_1). (6.107)

Combining equations 6.106 and 6.107, the relationship between the complete transmit and receive vectors becomes:

e_1(θ_1, φ_1) = (jZ_0/(2λ_0)) v_R1(θ_1, φ_1). (6.108)

Because an arbitrary antenna is being considered for antenna 1, and the direction of consideration (θ_1, φ_1) is arbitrary, it can be generalized that for any antenna and any direction of radiation-reception the following is true:

e(θ, φ) = (jZ_0/(2λ_0)) v_R(θ, φ), |e(θ, φ)| = (Z_0/(2λ_0))|v_R(θ, φ)|, ρ̂(θ, φ) = j ρ̂_R(θ, φ). (6.109)

Equation 6.109 is a useful universal relationship between the transmit and receive vectors of any antenna.

G-Aem Relationship
Consider now the relationship between the magnitudes of the transmit and receive vectors in equation 6.109. Then use equation 6.63 to relate |e| to G and equation 6.83 to relate |v_R| to the maximum effective area Aem. Thus, an equivalent universal relationship between G(θ, φ) and Aem(θ, φ) results:

|e(θ, φ)|^2/|v_R(θ, φ)|^2 = (Z_0/(2λ_0))^2 = Z_0^2 G(θ, φ)/(16π A_em(θ, φ)) ⇒ G(θ, φ)/A_em(θ, φ) = 4π/λ_0^2. (6.110)

6.4.3 Friis Transmission Formula

Using the universal e-v_R relationship of equation 6.109 in equation 6.96, the received Thevenin voltage at the antenna 2 terminals can be expressed using the transmit vectors of the two antennas:

V_R2 = I_1 (e^{-jk0 r1}/r_1)(2λ_0/(jZ_0)) e_1(θ_1, φ_1) · e_2(θ_2, φ_2). (6.111)

|V_R2| = (|I_1|/r_1)(2λ_0/Z_0)|e_1(θ_1, φ_1)||e_2(θ_2, φ_2)| |ρ̂_1(θ_1, φ_1) · ρ̂_2(θ_2, φ_2)|. (6.112)

Equation 6.63 may be used to relate the multiple |e|s of equation 6.112 to the corresponding Gs:

|V_R2|^2 = (λ_0/(4πr_1))^2 4|I_1|^2 R_in1 R_in2 G_1(θ_1, φ_1) G_2(θ_2, φ_2) |ρ̂_1(θ_1, φ_1) · ρ̂_2(θ_2, φ_2)|^2. (6.113)

|V_R2|^2 = (λ_0/(4πr_1))^2 8P_in1 R_in2 G_1(θ_1, φ_1) G_2(θ_2, φ_2) |ρ̂_1(θ_1, φ_1) · ρ̂_2(θ_2, φ_2)|^2. (6.114)
6 Antennas and Radiation
567
Once V_R2 is known, the load voltage V_L2 and load power P_L2, when antenna 2 is loaded by an arbitrary load impedance Z_L2, are found from equation 6.80:

P_L2 = (λ0/(4πr1))² (8 R_in2 R_L2/(2|Z_L2 + Z_in2|²)) P_in1 G1(θ1, φ1) G2(θ2, φ2) |ρ̂1(θ1, φ1) · ρ̂2(θ2, φ2)|². (6.115)

P_L2,max = (λ0/(4πr1))² P_in1 G1(θ1, φ1) G2(θ2, φ2); Z_L2 = Z*_in2, |ρ̂1 · ρ̂2| = 1. (6.116)

P_L2 = P_L2,max (4 R_in2 R_L2/|Z_L2 + Z_in2|²) |ρ̂1(θ1, φ1) · ρ̂2(θ2, φ2)|². (6.117)

Equations 6.115, 6.116, and 6.117 are called the Friis transmission formula, which relates the received load power to the input power through the antenna and load parameters.

6.4.4 Polarization Match Factor
The term ρ̂1(θ1, φ1) · ρ̂2(θ2, φ2) in the above Friis formulas is the polarization match factor Mp:

Mp = ρ̂1(θ1, φ1) · ρ̂2(θ2, φ2). (6.118)

Using equation 6.29 (also see Figure 6.3), Mp may be alternately represented in the coordinates of the respective polarization ellipses:

Mp = e^(jα1) ρ̂′1(θ1, φ1) · ρ̂′2(θ2, φ2) e^(jα2). (6.119)

It should be remembered, however, that equation 6.118 applies for general conditions, where the polarization unit vectors are referenced to the coordinate systems of the corresponding antennas (see Figure 6.8), which in general can have arbitrary relative orientation with respect to each other. It may be useful to express Mp for a simpler situation, when the axes (x̂1, ŷ1, ẑ1) and (x̂2, ŷ2, ẑ2) are respectively aligned with each other (with different origins). In this situation, in the direction of communication between the antennas, θ̂1 · θ̂2 = 1, but φ̂1 · φ̂2 = -1:

Mp = ρ̂1 · ρ̂2 = (ρ1θ θ̂1 + ρ1φ φ̂1) · (ρ2θ θ̂2 + ρ2φ φ̂2) = ρ1θ ρ2θ - ρ1φ ρ2φ. (6.120)

Mp Using ρ̂ Vectors in Common Coordinates
Consider again the just-described situation of aligned coordinates for the two antennas of a communication link. However, consider that the polarization unit vectors of the two antennas, ρ̂1 and ρ̂2, are expressed with the two antennas placed in a common coordinate system (r, θ, φ) with origin at the antenna center:

ρ̂1 = ρ1θ θ̂ + ρ1φ φ̂, ρ̂2 = ρ2θ θ̂ + ρ2φ φ̂. (6.121)

In this situation, one may see that the polarization match factor, Mp = ρ1θ ρ2θ - ρ1φ ρ2φ, is no longer equal to ρ̂1 · ρ̂2. The correct expression for Mp is as follows:

Mp = ρ1θ ρ2θ - ρ1φ ρ2φ = ρ̂1s · ρ̂2 = ρ̂1 · ρ̂2s; ρ̂1s = ρ̂1|φ̂→-φ̂, ρ̂2s = ρ̂2|φ̂→-φ̂. (6.122)

|Mp| = |ρ̂1s · ρ̂2| = |ρ̂1 · ρ̂2s|, (6.123)

where the unit vectors with subscript s refer to the original vectors with φ̂ switched for -φ̂. As may be noted, this essentially switches the "handedness" of the coordinates.
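As a numeric illustration of the maximum-power form of the Friis relation (equation 6.116), the short sketch below computes the received load power for a conjugate-matched, polarization-aligned link. The gain 1.64 is the half-wave dipole value from the text; the frequency, distance, and input power are illustrative assumptions, not values from the text.

```python
import math

def friis_max_load_power(p_in1, g1, g2, wavelength, r):
    """Maximum received load power (eq. 6.116): conjugate-matched load,
    polarization-aligned antennas, free-space separation r."""
    return (wavelength / (4 * math.pi * r)) ** 2 * p_in1 * g1 * g2

# Assumed example: 1 W into a half-wave dipole (G = 1.64) at 300 MHz
# (wavelength 1 m), received by an identical dipole 1 km away.
p_rx = friis_max_load_power(p_in1=1.0, g1=1.64, g2=1.64, wavelength=1.0, r=1000.0)
```

Any polarization mismatch or load mismatch only reduces this value, per equation 6.117.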
6.5 Antenna as a Scatterer
Figure 6.9 shows a plane wave, Ei, being scattered from an arbitrary antenna. Ei is incident from the angle (θi, φi) and in general has both θ̂ and φ̂ components of the electric field:

Ei = e^(jk0·r) (e_iθ θ̂ + e_iφ φ̂), P_di = P_diθ + P_diφ; P_diθ = |e_iθ|²/(2Z0), P_diφ = |e_iφ|²/(2Z0). (6.124)

The scattered field Es(r, θ, φ; θi, φi) in the far-field region can be decomposed into two parts. The first part, Eso(r, θ, φ; θi, φi), is the scattering with the antenna input terminals open-circuited (IL = 0). The second part, Er(r, θ, φ; θi, φi), is the reradiated field due to the load current IL = -Iin when the antenna is loaded with a finite impedance:

Es = Eso + Er = (e^(-jk0 r)/r) [e_iθ ē_sθ(θ, φ; θi, φi) + e_iφ ē_sφ(θ, φ; θi, φi)]. (6.125)

The scattered field is a linear combination of the incident field components e_iθ and e_iφ. As for a radiating antenna, the scattered field in the far-field region also has the e^(-jk0 r)/r dependence, which accounts for the propagation phase and the free-space loss. The ē_sθ and ē_sφ may respectively be referred to as the θ- and φ-scattering vectors; they are the r-independent parts of the scattered field due to one unit of e_iθ and e_iφ, respectively. The part due to the open-circuited antenna may be separately expressed in the following form, where the subscript o refers to the corresponding values in the open-circuited condition:

Eso = (e^(-jk0 r)/r) [e_iθ ē_soθ(θ, φ; θi, φi) + e_iφ ē_soφ(θ, φ; θi, φi)]. (6.126)

Using equations 6.20, 6.72, and 6.80, the reradiated part can be expressed in terms of the input current Iin = -IL, the transmit vector e(θ, φ), and the receive vector vR(θi, φi). The
568
Nirod K. Das
FIGURE 6.9 A Plane Wave Incident from the Angle (θi, φi) Is Scattered from an Antenna. The scattered signal Es(r, θ, φ) is observed at a point (r, θ, φ). The scattered signal is affected by the load impedance ZL.
input/load current is induced by the incident field and is related to it by the receive vector vR:

Er = (e^(-jk0 r)/r) Iin(θi, φi) e(θ, φ) = -(e^(-jk0 r)/r) (ei · vR(θi, φi)/(Zin + ZL)) e(θ, φ). (6.127)

Now the individual parts in equations 6.126 and 6.127 may be combined with equation 6.125 to obtain the following relationships:

ē_sθ = ē_soθ - (vRθ(θi, φi)/(Zin + ZL)) e(θ, φ), ē_sφ = ē_soφ - (vRφ(θi, φi)/(Zin + ZL)) e(θ, φ). (6.128)
6.5.1 Intercepted Power and Radar Cross Section
First consider a θ-polarized incident field (e_iφ = 0). With this condition, the power density P_ds^θ of the scattered field at a given location (r, θ, φ) can be expressed using the resulting scattered E-field (from equations 6.124 and 6.125). For brevity of presentation, the angle of incidence (θi, φi) and the observation point (r, θ, φ) are treated as implicit variables, as appropriate:

P_ds^θ(r, θ, φ; θi, φi) = |e_iθ|² |ē_s^θ|²/(2Z0 r²) = P_di^θ |ē_s^θ|²/r² = P_di^θ [|ē_sθ^θ|² + |ē_sφ^θ|²]/r². (6.129)

An effective intercept power, P_int^θ, is defined as associated with a scatterer. The P_int^θ is the equivalent power intercepted by the scattering antenna that, when reradiated by an isotropic antenna, provides the same power density at the observation location as the scattered wave. The P_int^θ is a function of the angle of observation (θ, φ). The corresponding radar cross section, σ^θ, is defined as the ratio of the intercept power to the incident power density P_di^θ:

P_ds^θ = P_int^θ(θ, φ; θi, φi)/(4πr²) = P_di^θ σ^θ/(4πr²); σ^θ(θ, φ; θi, φi) = 4πr² P_ds^θ/P_di^θ = P_int^θ/P_di^θ. (6.130)

Using equation 6.129 in 6.130 establishes the relationship between the θ-radar cross section σ^θ and the θ-scattering vector ē_s^θ. The σ^θ is further decomposed into σ^θθ and σ^θφ, which respectively correspond to the θ and φ components of ē_s^θ = θ̂ ē_sθ^θ + φ̂ ē_sφ^θ:

σ^θ(θ, φ; θi, φi) = 4π|ē_s^θ|² = 4π[|ē_sθ^θ|² + |ē_sφ^θ|²] = σ^θθ(θ, φ; θi, φi) + σ^θφ(θ, φ; θi, φi). (6.131)
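Equation 6.131 can be exercised numerically. The complex scattering-vector components used below are illustrative placeholders, not values from the text:

```python
import math

def radar_cross_section(e_s_theta, e_s_phi):
    """Total theta-polarized RCS (eq. 6.131) from the two components of the
    theta-scattering vector; also returns the decomposed parts
    sigma_theta_theta and sigma_theta_phi."""
    s_tt = 4 * math.pi * abs(e_s_theta) ** 2
    s_tp = 4 * math.pi * abs(e_s_phi) ** 2
    return s_tt + s_tp, s_tt, s_tp

# Assumed scattering-vector components (dimensionless placeholders).
sigma, sigma_tt, sigma_tp = radar_cross_section(0.1 + 0.05j, 0.02j)
```

The same function applies to the φ-polarized case of equation 6.133, with the superscripts exchanged.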
Similarly, the radar cross section may be defined for the φ-polarized incident field, with the θ superscripts exchanged for φ:

P_ds^φ(r, θ, φ; θi, φi) = P_int^φ(θ, φ; θi, φi)/(4πr²) = P_di^φ |ē_s^φ|²/r² = P_di^φ [|ē_sθ^φ|² + |ē_sφ^φ|²]/r². (6.132)

σ^φ(θ, φ; θi, φi) = 4πr² P_ds^φ/P_di^φ = 4π[|ē_sθ^φ|² + |ē_sφ^φ|²] = σ^φθ(θ, φ; θi, φi) + σ^φφ(θ, φ; θi, φi). (6.133)
References
Balanis, C.A. (1997). Antenna theory: Analysis and design (2d ed.). New York: John Wiley & Sons.
Cheng, D.K. (1993). Fundamentals of engineering electromagnetics. Reading, MA: Addison-Wesley.
Collin, R.E., and Zucker, F.J. (Eds.). (1969). Antenna theory: Parts I, II. New York: McGraw-Hill.
Das, N.K. (2004). Antennas and radiation: Antenna elements and arrays. EE handbook. Boston: Academic Press.
Jasik, J. (Ed.). (1961). Antenna engineering handbook. New York: McGraw-Hill.
Kraus, J.D. (1950). Antennas. New York: McGraw-Hill.
Kraus, J.D., and Fleisch, D.A. (1999). Electromagnetics with applications. New York: McGraw-Hill.
Pozar, D.M. (1998). Microwave engineering. New York: John Wiley & Sons.
Ramo, S., Whinnery, J.R., and van Duzer, T. (1993). Fields and waves in communication electronics (3rd ed.). New York: John Wiley & Sons.
Rao, N.N. (2000). Elements of engineering electromagnetics. New Jersey: Prentice Hall.
Schelkunoff, S.A., and Friis, H.T. (1952). Antennas, theory, and practice. New York: John Wiley & Sons.
II Antenna Elements and Arrays

6.6 Introduction
One can envision a literally infinite number of possible antenna geometries designed to achieve a variety of performance features. Among these diverse configurations, one may identify a few basic antenna elements that are often used for general-purpose wireless applications. The following sections analyze the radiation/reception mechanism in selected basic antenna geometries: (1) dipole antenna, (2) monopole antenna, (3) wire-loop antenna, (4) slot antenna, and (5) microstrip antenna. Selected other types of antennas in use will also be introduced.

An antenna array is made of a number of individual antenna elements, each located at a different position in space, with independent excitation at each input. Such an array may provide many practical advantages over a single antenna element. By properly designing the individual locations and input excitations, one can achieve radiation characteristics that might not normally be feasible with a single antenna. If one can also control the input excitations of the individual elements in real time, the radiation characteristics of the array, such as the pointing direction or radiation pattern, can be changed dynamically. This allows significant flexibility for adjusting to a changing situation. This chapter will analyze the basic theory of a general antenna array and derive results for the specific situations of a one-dimensional array and a two-dimensional array.
6.7 Antenna Elements
This section presents a number of basic antenna elements, starting with the small dipole antenna, which is considered the most fundamental radiating structure. Behavior in the far-field region is emphasized, and the various radiation and impedance parameters of the specific antennas are derived from it. For the general definitions and notations used in this study, the reader is referred to Das (2004). Some useful parameters of the antenna elements covered in this section are summarized in Table 6.1.
6.7.1 Dipole Antennas
Small Dipole Antenna
A small dipole antenna may be visualized as a small length of current I = ẑ Iin, excited by connecting a pair of small metal wires or rods to a radio frequency source. This is shown in Figure 6.10. The antenna is considered small when the length l is significantly smaller than the wavelength of operation (l ≪ λ0). This basic element is often used in many radio communication systems as a whip antenna. For communication in a relatively
TABLE 6.1 Summary of Antenna Parameters of Basic Antenna Types

Antenna type | Polarization | Power pattern p̄d(θ, φ) | Directivity D | Radiation resistance Rrad (Ω)

Dipole antennas (current along ẑ):
  Short dipole (length = l) | Eθ (TMz) | sin²θ | 1.5 | 80π²(l/λ0)²
  Half-wave dipole (length = λ0/2) | Eθ (TMz) | cos²((π/2)cos θ)/sin²θ | 1.64 | 73
  Half-wave folded dipole (length = λ0/2) | Eθ (TMz) | cos²((π/2)cos θ)/sin²θ | 1.64 | 292

Monopole antennas (current along ẑ):
  Short monopole (length = l) | Eθ (TMz) | sin²θ (half space only, 0 ≤ θ ≤ π/2) | 3.0 | 160π²(l/λ0)²
  Half-wave monopole (length = λ0/4) | Eθ (TMz) | cos²((π/2)cos θ)/sin²θ (half space only, 0 ≤ θ ≤ π/2) | 3.28 | 36.5
  Half-wave folded monopole (length = λ0/4) | Eθ (TMz) | cos²((π/2)cos θ)/sin²θ (half space only, 0 ≤ θ ≤ π/2) | 3.28 | 146

Loop antennas (loop on x–y plane):
  Small loop antenna (loop area = S) | Eφ (TEz) | sin²θ | 1.5 | 320π⁴(S/λ0²)²

Slot antennas (slot length along ẑ):
  Short slot antenna (length = l) | Eφ (TEz) | sin²θ | 1.5 | 45(λ0/l)²
  Half-wave slot antenna (length = λ0/2) | Eφ (TEz) | cos²((π/2)cos θ)/sin²θ | 1.64 | 485

Microstrip antenna (antenna current along x̂, patch on the x–y plane):
  Rectangular, edge fed (length = λ0/(2√εr)) | Both Eθ and Eφ (TMx) | See Section 6.2.5 | No analytical results; depends on several parameters: d, L, W, εr, λ0.

Refer to the appropriate sections in the text for the geometry of the specific antennas.
FIGURE 6.10 (A) A Basic Dipole Antenna. The antenna is excited by an input current Iin, and the figure shows the directions of the electric (E) and magnetic (H) fields in the far-field region. (B) The Power Radiation Pattern. The pattern p̄d(θ, φ) in the elevation plane (θ) is shown for a given φ; the pattern is uniform in the azimuth (φ) plane.
low frequency range (such as a short-wave radio), where the wavelength of radiation is quite long, a whip antenna of any reasonable length can safely be approximated as a small dipole antenna. Ideally, the metal wire or rod is a perfect conductor, in which case the ohmic loss in the metal is zero. In practical situations, when the wire is made from a very good conductor and is not too thin (a very thin wire has a high ohmic resistance), only a small fraction of the input power is dissipated as ohmic loss. Most of the input power is thus radiated into free space, resulting in good radiation efficiency.
6 Antennas and Radiation
571
Transmitting Mode of Operation
The rigorous electric and magnetic fields of this dipole antenna can be derived using Maxwell's equations (Cheng, 1993; Ramo et al., 1993). The final expressions in the far-field region are provided here and are used to derive the various basic parameters. The radiated far-fields of the dipole antenna can be expressed in the following form:

E(r, θ, φ) = θ̂ Iin (e^(-jk0 r)/r) (jZ0 l/(2λ0)) sin θ = (e^(-jk0 r)/r) Iin e(θ, φ). (6.134)

H(r, θ, φ) = (r̂ × E(r, θ, φ))/Z0 = φ̂ Iin (e^(-jk0 r)/r) (jl/(2λ0)) sin θ = Iin (e^(-jk0 r)/r) (jl/(2λ0)) (ẑ × r̂). (6.135)

e(θ, φ) = θ̂ (jZ0 l/(2λ0)) sin θ, ρ̂(θ, φ) = θ̂. (6.136)

The dipole does not radiate in the vertical directions (θ = 0, π) and produces maximum radiation perpendicular to the current (θ = π/2), with uniform variation in the azimuth. The sin θ factor gives the variation of the radiation fields in elevation, in proportion to the projection of the input current in the particular direction. The magnetic field H is in the φ̂ direction and thus revolves around the wire, in analogy to the magnetostatic field due to a linear current. The electric field E, and accordingly the polarization vector of the dipole antenna, is in the θ̂ direction. Once the far-field expressions are known, the power density Pd and the power per unit solid angle pd of the radiation can be derived using the theory of Das (2004):

Pd(r, θ, φ) = |E|²/(2Z0) = (|Iin|² Z0/(8r²)) (l/λ0)² sin²θ, pd(θ, φ) = r² Pd(r, θ, φ) = (|Iin|² Z0/8) (l/λ0)² sin²θ. (6.137)

The power radiation pattern p̄d(θ, φ) of the dipole antenna is a sin²θ function in elevation and uniform in the azimuth:

p̄d(θ, φ) = pd(θ, φ)/pd,max = sin²θ. (6.138)

The total radiated power from the dipole antenna for a given input current Iin can be obtained by integrating pd(θ, φ) over the entire sphere:

Prad = ∫φ=0..2π ∫θ=0..π pd(θ, φ) sin θ dθ dφ = (|Iin|² Z0 π/3) (l/λ0)² = (|Iin|²/2) Rrad. (6.139)

Rrad = (2Z0 π/3) (l/λ0)² = 80π² (l/λ0)², Rin = Rrad; ηr = 1. (6.140)

The directive gain and directivity of the dipole antenna are obtained from pd(θ, φ) or p̄d(θ, φ) (Das, 2004):

Dg(θ, φ) = 4π pd(θ, φ)/∫φ=0..2π ∫θ=0..π pd(θ, φ) sin θ dθ dφ = 4π p̄d(θ, φ)/∫φ=0..2π ∫θ=0..π p̄d(θ, φ) sin θ dθ dφ. (6.141)

Dg(θ, φ) = 4π sin²θ/∫φ=0..2π ∫θ=0..π sin³θ dθ dφ = (3 sin²θ)/2; D = 1.5. (6.142)

In an ideal situation, when the ohmic loss is zero and the radiation efficiency is one, the antenna gain is equal to the directivity. In practical cases using good conductors, the ohmic loss can be calculated from the metal conductivity, the shape of the wire, and the current distribution:

G(θ, φ) = ηr (3 sin²θ)/2 ≈ (3 sin²θ)/2, G ≈ 1.5; ηr ≈ 1. (6.143)

Receiving Mode of Operation
Figure 6.11 shows the dipole antenna in the receiving mode of operation. The Thevenin voltage excited at the antenna terminals can be obtained by taking the tangential component of the incident electric field and multiplying it by the antenna length:

VR = l E_z^inc = l e_iθ sin θ = vR(θ, φ) · ei = vRθ(θ, φ) e_iθ + vRφ(θ, φ) e_iφ. (6.144)

vRθ(θ, φ) = l sin θ, vRφ(θ, φ) = 0; vR(θ, φ) = θ̂ l sin θ, ρ̂R(θ, φ) = θ̂. (6.145)
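The directivity integral of equation 6.142 and the radiation resistance of equation 6.140 can be checked numerically. This is only a sketch; the quadrature step count and the example length are arbitrary choices:

```python
import math

def small_dipole_directivity(n=20000):
    """Numerically evaluate D = 4*pi*sin^2(theta_max) / Int(sin^2 dOmega)
    (eq. 6.142); the phi integral contributes a factor 2*pi."""
    dth = math.pi / n
    # midpoint rule for Int_0^pi sin^3(theta) dtheta (exact value 4/3)
    integral = sum(math.sin((k + 0.5) * dth) ** 3 for k in range(n)) * dth
    return 4 * math.pi / (2 * math.pi * integral)

def small_dipole_rrad(l_over_lambda):
    """Radiation resistance of a short dipole (eq. 6.140), in ohms."""
    return 80 * math.pi ** 2 * l_over_lambda ** 2

directivity = small_dipole_directivity()   # approaches 1.5
rrad = small_dipole_rrad(0.1)              # about 7.9 ohms for l = 0.1 lambda0
```

The small numerical value of Rrad for l ≪ λ0 is why electrically short dipoles are hard to match efficiently.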
FIGURE 6.11 (A) A Small Dipole Antenna in Its Receiving Mode of Operation. The incident field E^inc = ei e^(+jk0·r) excites the antenna terminals, driving the load current IL through ZL = RL + jXL. (B) Thevenin's Equivalent Circuit of the Receiving Dipole Antenna, with source VR = l e_iθ sin θ (vRθ(θ, φ) = l sin θ, vRφ(θ, φ) = 0) behind the input impedance Zin = Rin + jXin.
572
Nirod K. Das
The receiving vector vR is linearly polarized, with the receiving polarization unit vector in the θ̂ direction. From the magnitude of the receiving vector, the maximum effective area Aem can be found using the theory of Das (2004):

Aem(θ, φ) = Z0 |vR(θ, φ)|²/(4Rin) = (3λ0²/(8π)) sin²θ. (6.146)

Notice that the ratio of Aem(θ, φ) to G(θ, φ) is equal to λ0²/(4π), which is a constant independent of (θ, φ). This is the universal constant applicable to any arbitrary antenna, as derived in Das (2004):

Aem(θ, φ)/G(θ, φ) = (3λ0² sin²θ/(8π))/(1.5 sin²θ) = λ0²/(4π). (6.147)
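The universal ratio of equation 6.147 can be verified numerically for the short dipole; the wavelength and angles below are arbitrary test values:

```python
import math

def short_dipole_aem(theta, wavelength):
    """Maximum effective area of a short dipole (eq. 6.146)."""
    return 3 * wavelength ** 2 * math.sin(theta) ** 2 / (8 * math.pi)

def short_dipole_gain(theta):
    """Gain of a lossless short dipole (eq. 6.143)."""
    return 1.5 * math.sin(theta) ** 2

wl = 0.3  # assumed wavelength in meters (1 GHz)
ratio = short_dipole_aem(1.0, wl) / short_dipole_gain(1.0)
# ratio equals wl**2 / (4*pi), independent of the angle chosen
```

Evaluating the ratio at any other angle gives the same constant, as equation 6.147 asserts.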
Dipole Antenna of Arbitrary Length
If the length of the dipole antenna is not electrically small, one can derive the radiation fields by slicing the antenna into small pieces and then superposing (integrating) the individual fields, provided the current distribution is known. This is shown in Figure 6.12. Each piece can be modeled as a small dipole with a different excitation current and a different location.
FIGURE 6.12 A Linear Dipole Antenna of Arbitrary Length L. This antenna can be treated as a number of "slices" of small dipole elements. The total field can be obtained by superposing the fields due to the individual small dipole slices, each having current I(z) and length Δz and being located at distance z from the center.

The elemental field ΔE produced by a current element of length Δz, located at distance z from the center, with current I(z), can be written using equations 6.134 through 6.136:

ΔE = θ̂ I(z) (e^(-jk0 r′)/r′) (jZ0/(2λ0)) sin θ Δz; r′ = r - z cos θ. (6.148)

Assuming that z ≪ r in the far field, equation 6.148 can be simplified by approximating the r′ in the denominator as r:

ΔE ≈ θ̂ I(z) (e^(-jk0 r) e^(jk0 z cos θ)/r) (jZ0/(2λ0)) sin θ Δz. (6.149)

The total field can be obtained by integrating equation 6.149 with respect to z, from z = -L/2 to z = +L/2:

E = θ̂ (e^(-jk0 r)/r) (jZ0/(2λ0)) sin θ ∫ from z=-L/2 to +L/2 of I(z) e^(jk0 z cos θ) dz. (6.150)

If the current distribution I(z) is known or can be approximated by a known function, the far-field of the total antenna can be obtained using equation 6.150. One would expect the current distribution to exhibit a standing-wave pattern, with zero current at the ends of the line (z = ±L/2) and I = Iin at the center. This behavior is similar to the current distribution along an open-circuited transmission line having propagation constant k0:

I(z) = Iin sin[k0(L/2 - |z|)]/sin(k0 L/2); I(z = 0) = Iin. (6.151)

Using the current distribution of equation 6.151 in 6.150 and then performing the integration over z, one can obtain a closed-form expression for the far-field distribution:

E(r, θ, φ) = θ̂ (e^(-jk0 r)/r) (jZ0/(2λ0)) sin θ ∫ from z=-L/2 to +L/2 of Iin (sin[k0(L/2 - |z|)]/sin(k0 L/2)) e^(jk0 z cos θ) dz
= θ̂ (e^(-jk0 r)/r) (jZ0 Iin/(2π)) [cos((k0 L/2) cos θ) - cos(k0 L/2)]/(sin(k0 L/2) sin θ). (6.152)
Once the electric field is obtained, then basic antenna parameters can be derived using steps similar to those used for a small dipole antenna (Das, 2004). The radiated power is obtained by integrating the power-density function over the sphere, as usual, from which the radiation resistance can be derived. The necessary spherical integration involving equation 6.152 cannot be easily expressed in closed form, requiring numerical integration.
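The numerical integration just mentioned can be sketched directly from equation 6.152. The step count is arbitrary, and lengths near integer multiples of λ0 (where sin(k0 L/2) → 0) are excluded:

```python
import math

def dipole_rrad(L_over_lambda, n=4000):
    """Radiation resistance (ohms) of a center-fed dipole of length L (in
    wavelengths), by numerically integrating the pattern of eq. 6.152 over
    the sphere: Rrad = 60 * Int_0^pi F(theta)^2 sin(theta) dtheta, with
    F = [cos(k0*L/2 cos th) - cos(k0*L/2)] / (sin(k0*L/2) sin th)."""
    kl2 = math.pi * L_over_lambda            # k0 * L / 2
    s = math.sin(kl2)
    dth = math.pi / n
    total = 0.0
    for k in range(n):                       # midpoint rule avoids theta = 0, pi
        th = (k + 0.5) * dth
        f = (math.cos(kl2 * math.cos(th)) - math.cos(kl2)) / (s * math.sin(th))
        total += f * f * math.sin(th) * dth
    return 60.0 * total

r_half = dipole_rrad(0.5)   # half-wave dipole: close to 73 ohms
```

For very short lengths the result approaches 20π²(L/λ0)², the known value for a triangular current distribution, which is a useful sanity check on the quadrature.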
Half-Wave Dipole Antenna
A dipole antenna with L = λ0/2, called a half-wave dipole, is often useful because it operates close to a resonant condition of its input impedance. This results in an almost purely real input impedance, which can easily be matched to a source having a real source impedance for maximum power transfer. For this case, equations 6.151 and 6.152 can be simplified:

I(z) = Iin cos(k0 z) = Iin cos(2πz/λ0). (6.153)

E(r, θ, φ) = θ̂ (e^(-jk0 r)/r) (jZ0 Iin/(2π)) cos((π/2) cos θ)/sin θ = θ̂ j60 Iin (e^(-jk0 r)/r) cos((π/2) cos θ)/sin θ. (6.154)

The polarization vector is in the θ̂ direction, which is the same as that for a small dipole. The power density Pd and power per unit solid angle pd can be expressed using equation 6.154 (Das, 2004):

Pd(r, θ, φ) = |Eθ|²/(2Z0) = (15|Iin|²/(π r²)) [cos((π/2) cos θ)/sin θ]², pd(θ, φ) = (15|Iin|²/π) [cos((π/2) cos θ)/sin θ]². (6.155)

The direction of maximum radiation is along θ = π/2, which is perpendicular to the direction of the antenna current. The power radiation pattern p̄d(θ, φ) has the following form with respect to θ, with uniform variation in the azimuth:

p̄d(θ, φ) = pd(θ, φ)/pd,max = [cos((π/2) cos θ)/sin θ]². (6.156)

Now, by following steps similar to those used for a small dipole, the radiated power Prad, radiation resistance Rrad, directive gain Dg(θ, φ), and directivity D of a half-wave dipole antenna can be derived. This requires integrations involving the pattern function of equation 6.156:

Prad = ∫φ=0..2π ∫θ=0..π pd(θ, φ) sin θ dθ dφ = 30|Iin|² ∫θ=0..π [cos²((π/2) cos θ)/sin θ] dθ. (6.157)

Rrad = 2Prad/|Iin|² = 60 ∫θ=0..π [cos²((π/2) cos θ)/sin θ] dθ. (6.158)

Dg(θ, φ) = 4π p̄d(θ, φ)/∫φ=0..2π ∫θ=0..π p̄d(θ, φ) sin θ dθ dφ = 2 [cos((π/2) cos θ)/sin θ]²/∫θ=0..π [cos²((π/2) cos θ)/sin θ] dθ. (6.159)

D = Dg,max = 2/∫θ=0..π [cos²((π/2) cos θ)/sin θ] dθ. (6.160)

The θ integral that appears identically in equations 6.157 through 6.160 may be computed numerically and has an approximate value of 1.22. Using this value, the antenna parameters can be approximated as follows:

Prad = 36.6|Iin|², Dg(θ, φ) = 1.64 [cos((π/2) cos θ)/sin θ]², D = 1.64, Rrad = 73.2 Ω. (6.161)

Notice that the directivity D of a half-wave dipole (= 1.64) is not significantly different from that of a small dipole (= 1.5).

Folded Dipole Antenna
Figure 6.13 shows the geometry of a folded dipole antenna. It consists of a source (Vin) connected to the ends of a wire of total length 2L, folded once into a rectangular loop of height L and width d ≪ L. The length L can have any value but is often designed to be about λ0/2 for resonant operation. This antenna configuration can be viewed as a superposition of an odd mode and an even mode, having dual excitations at the centers of the two arms of the antenna with odd and even symmetry, respectively.

The odd mode represents a transmission-line mode of operation without any radiation. Here, the top and bottom sections are two identical transmission-line stubs of length L/4 connected in series, as shown in Figure 6.13. The input current It in this mode can then be related to Vin as follows:

It = Vin/(2Zt) = Vin/(2jZ0 tan(k0 L/2)), (6.162)

where Z0 is the characteristic impedance of a two-wire transmission line having the same wire diameter as that of the folded dipole and a line-to-line separation d.

In contrast, the even mode represents an antenna mode of operation, with I = 0 at the top and bottom ends to satisfy the symmetry condition. The antenna mode operates very much as two dipole antennas of length L placed close to each other. The current distribution along the wire is the same as that of an equivalent dipole antenna and, therefore, has the same radiation pattern and gain. The input current Ia for the antenna mode can be related to the input voltage Vin using the input impedance Zdipole of an equivalent dipole antenna of length L:

Zdipole = (Vin/2)/(2Ia), Ia = Vin/(4 Zdipole). (6.163)

Now, using superposition, the input impedance of the folded dipole can be expressed as a parallel combination of two parts:

Iin = Ia + It = Vin/(4 Zdipole) + Vin/(2jZ0 tan(k0 L/2)). (6.164)

[FIGURE 6.13 The folded dipole antenna (folded metal wire of height L excited by Vin), decomposed into a transmission-line (odd) mode with current It and an antenna (even) mode with current Ia; in the antenna mode the current is zero (open) at the top and bottom ends.]
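Equation 6.164 can be turned into a quick numeric sketch. The dipole impedance and two-wire line impedance used below are illustrative assumptions (73 Ω for a resonant half-wave dipole, 300 Ω for the stub line), not values taken from the text:

```python
import math

def folded_dipole_zin(z_dipole, z0_line, L_over_lambda):
    """Input impedance of a folded dipole from eq. 6.164:
    Iin/Vin = 1/(4*Zdipole) + 1/(2j*Z0*tan(k0*L/2))."""
    kl2 = math.pi * L_over_lambda                      # k0 * L / 2
    y = 1 / (4 * z_dipole) + 1 / (2j * z0_line * math.tan(kl2))
    return 1 / y

# Just below L = lambda0/2 the tan term is very large, the stub branch
# carries almost no current, and Zin -> 4 * Zdipole (about 292 ohms).
zin = folded_dipole_zin(73 + 0j, 300.0, 0.4999)
```

This fourfold impedance step-up near resonance is the usual reason for choosing a folded dipole, and it matches the 292 Ω entry for the half-wave folded dipole in Table 6.1.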
x ≤ r. Then, the flux relationship is the following:

dφ = μ (I x/(2π r²)) dx. (7.11)

For x > r, the flux linked up to some radial distance R, per unit length, is simply:

λ_external = ∫ from r to R of μ0 I/(2πx) dx = (μ0 I/(2π)) ln(R/r). (7.12)

FIGURE 7.3 Simplified Transformer Circuit Model Under Per Unit System

764
Mani Venkatasubramanian and Kevin Tomsovic

For x < r, only the enclosed current will be linked. Continuing with the assumption of evenly distributed current, the flux linked is as written here:

λ_internal = ∫ from 0 to r of μc (I x³/(2π r⁴)) dx = μc I/(8π). (7.13)
0
D¼ For simplicity, assume the permeability of the conductor is that of free space, and then the total flux linkage is as follows: l¼
m0 I m0 I R m0 I R þ ln ¼ ln 1=4 : 8p 2p r 2p re
(7:14)
Typically, re 1=4 is written as r 0 . Consider a three-phase transmission line of phase currents Ia , Ib , and Ic , with each line spaced equally by the distance D. Flux from each of the currents will link with each of the other conductors. The flux linkage for phase a out to some point R a far distance away from the conductors is approximately: la ¼
m0 R R R Ia ln 0 þ Ib ln þ Ic ln : r D D 2p
E¼
(7:16)
In practice, the phase conductors may not be equally spaced as they are in Figure 7.5. This results in unbalanced conditions due to the imbalance in mutual inductance. High-voltage transmission lines with such a layout can be transposed so that, on average, the distance between phases is equal canceling out the imbalance. The equivalent distance of separation between phases can then be found as the geometric mean of this spacing. Similarly, if several conductors are used per phase, an equivalent conductor radius can be found as the geometric mean. Transmission lines also exhibit capacitive effects. That is, whenever a voltage is applied to a pair of conductors separated by a nonconducting medium, charge accumulates, which leads to capacitance. Similar to the previous development for inductance, the capacitance can be determined based on Gauss’s
D
D
FIGURE 7.5
D
End View of Equally Spaced Phase Conductors
q : 2pe0 x
(7:18)
Integrating E over some path (a radial path is chosen for simplicity) yields the voltage difference between the two end points:
V12 ¼
R ð2 R1
m D L~a ¼ 0 ln 0 : 2p r
(7:17)
Assuming a homogeneous medium, the electric field density E is related to D by the permitivity e of the dielectric, which, in this case, will be assumed to be that of free space:
(7:15)
Assuming balanced currents (i.e., Ia þ Ib þ Ic ¼ 0) and recalling that inductance is simply the ratio of flux linkage to current, the series inductance in phase a per unit length of line will be the following:
q : 2px
q q R1 dx ¼ ln : 2pe0 x 2pe0 R2
(7:19)
Now consider a three-phase transmission line again with each line spaced equally by the distance D. Superposition holds so that the voltage arising from each of the charges can be added. To find the voltage from phase to ground arising from each of the conductors, assume a balanced system with qa þ qb þ qc ¼ 0 and a neutral system located at some far distance R from phase a: Van ¼
1 R R R 1 D qa ln þ qb ln þ qc ln qa ln : (7:20) ¼ 2pe0 r D D 2pe0 r
Now since the capacitance is the ratio of charge to voltage, the capacitance from phase a to ground per unit length of line will be: ~ an ¼ qa ¼ 2pe0 : C Van ln D=r
(7:21)
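Equations 7.16 and 7.21 can be evaluated for a sample geometry. The 1-cm conductor radius and 3-m spacing below are illustrative assumptions, not values from the text:

```python
import math

MU0 = 4e-7 * math.pi     # permeability of free space (H/m)
EPS0 = 8.854e-12         # permittivity of free space (F/m)

def phase_inductance(D, r):
    """Series inductance per meter of phase a (eq. 7.16), with the
    effective radius r' = r * exp(-1/4)."""
    return MU0 / (2 * math.pi) * math.log(D / (r * math.exp(-0.25)))

def phase_capacitance(D, r):
    """Phase-to-ground capacitance per meter (eq. 7.21)."""
    return 2 * math.pi * EPS0 / math.log(D / r)

L = phase_inductance(3.0, 0.01)   # on the order of 1.2e-6 H/m
C = phase_capacitance(3.0, 0.01)  # on the order of 1e-11 F/m
```

Note that the inductance uses the modified radius r′ (to fold in internal flux linkage), while the capacitance uses the physical radius r, exactly as in the two derivations above.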
If the conductors are not evenly spaced, transposition results in an equivalent geometric mean distance, and bundled conductors per phase can also be accommodated by using a geometric mean. Finally, conductors have finite resistances that depend on the temperature, the frequency of the current, the conductor material, and other such factors. For most systems-analysis problems, these can be based on values provided by manufacturers or on tables for commonly used conductors and typical ambient conditions.

Transmission Line Circuit Models
Transmission lines may be classified according to their total length. If the line is around 50 miles or less, a so-called short line, capacitance can be neglected, and the series inductance
7 Power System Analysis
765

FIGURE 7.6 Short Line Model (series resistance R and inductance L)

and resistance can be modeled as lumped parameters. Figure 7.6 depicts the short-line model per phase. The series resistance and inductance are simply found by multiplying the per-unit-distance parameters by the line length; therefore, at 60 Hz, for line length l, the line impedance is the following:

Z = R̃l + j120πL̃l = R + jX. (7.22)

For lines longer than 50 miles, up to around 150 miles, capacitance can no longer be neglected. A reasonable circuit model is to simply split the total capacitance evenly, with each half represented as a shunt capacitor at each end of the line. This is depicted as the π-circuit model in Figure 7.7. Again, the total capacitance is simply the per-unit-distance capacitance times the line length:

Y = j120πC̃an l = jB. (7.23)

FIGURE 7.7 Medium Line Model (series R and X, with shunt admittance Y/2 at each end)

For line lengths longer than 150 miles, the lumped-parameter model may not provide sufficient accuracy. To see this, note that at 60 Hz, for a low-loss line, the wavelength is around 3000 miles. Thus, a 150-mile line begins to cover a significant portion of the wave, and the well-known wave equations must be used. The relationship of the voltage and current at a point x (i.e., distance along the line) to the receiving-end voltage, Vr, and current, Ir, is seen through the following equations:

V(x) = Vr cosh γx + Ir Zc sinh γx. (7.24)

I(x) = Ir cosh γx + (Vr/Zc) sinh γx. (7.25)

Here, Zc = √(z/y) is the characteristic impedance of the line, and γ = √(zy) is the propagation constant. It would be useful, and it turns out to be possible, to continue to use the π-model for the transmission line and simply modify the circuit parameters to represent the distributed-parameter effects. The relationship between the sending-end voltage, Vs, and current, Is, and the receiving-end voltage and current in a π-model can be found as:

Vs = Vr (1 + YZ/2) + Ir Z. (7.26)

Is = Vr Y (1 + YZ/4) + Ir (1 + YZ/2). (7.27)

Now, equating 7.24 and 7.25 for a line of length l to equations 7.26 and 7.27 and solving shows the equivalent shunt admittance and series impedance for a long line to be:

Z′ = Zc sinh γl = Z (sinh γl)/(γl). (7.28)

Y′/2 = (1/Zc) tanh(γl/2) = (Y/2) (tanh(γl/2))/(γl/2). (7.29)

In these equations, the prime indicates the modified circuit values arising from a long line.

Generators
Three-phase synchronous generators produce the overwhelming majority of electricity in modern power systems. Synchronous machines operate by applying a dc excitation to a rotor that, when mechanically rotated, induces a voltage in the armature windings due to the changing flux linkage. The per-phase flux for a balanced connection can be written as:

λ = Kf If sin θm, (7.30)

where If is the field current, θm is the angle of the rotor relative to the armature, and Kf is a constant that depends on the number of windings and the physical properties of the machine. The machine may have several poles, so that the armature "sees" multiple rotations for each turn of the rotor. For example, a four-pole machine appears electrically to be rotating twice as fast as a two-pole machine. For a machine rotating at ωm radians per second with p poles, the electrical frequency is as follows:

ωs = (p/2) ωm, (7.31)

with ωs as the desired synchronous frequency. If the machine is rotated at a constant speed, Faraday's law indicates that the induced voltage can be written as:

V = dλ/dt = Kf If ωs sin(ωs t + θ0). (7.32)

FIGURE 7.8 Simple Synchronous Generator Model (internal voltage source behind the synchronous reactance Xs)
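Returning to the long-line correction of equations 7.28 and 7.29, a short numeric sketch follows. The 300-mile length and the per-mile series impedance and shunt admittance are illustrative assumptions, not values from the text:

```python
import cmath

def long_line_pi(z_per_mile, y_per_mile, length):
    """Equivalent pi-model of a long line (eqs. 7.28 and 7.29): the lumped
    series impedance and shunt admittance are corrected by sinh/tanh factors."""
    gamma_l = cmath.sqrt(z_per_mile * y_per_mile) * length   # gamma * l
    zc = cmath.sqrt(z_per_mile / y_per_mile)                 # characteristic impedance
    z_prime = zc * cmath.sinh(gamma_l)                       # = Z * sinh(gl)/(gl)
    y_prime = 2 * cmath.tanh(gamma_l / 2) / zc               # = Y * tanh(gl/2)/(gl/2)
    return z_prime, y_prime

# Assumed 300-mile line: series 0.1 + j0.8 ohm/mile, shunt j5e-6 S/mile.
z_p, y_p = long_line_pi(0.1 + 0.8j, 5e-6j, 300.0)
```

For a line of this electrical length the corrections are a few percent: the effective series impedance shrinks (sinh γl / γl < 1 for a mostly reactive γl), while the effective shunt admittance grows.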
If a load is applied to the armature windings, then current will flow, and the armature flux will link with the field. This effectively puts a mechanical load on the rotor, and the power input must be matched to this load to maintain the desired constant frequency. Some of the armature flux "leaks" and does not link with the field. In addition, there are winding resistive losses, but these are commonly neglected. The circuit model shown in Figure 7.8 is a good representation of the synchronous generator in the steady state. Note that most generators are operated at some fixed terminal voltage with a constant power output. Thus, for steady-state studies, the generator is often referred to as a PV bus, since the terminal node has fixed power P and voltage V.

Loads
Modeling power system loads remains a difficult problem. The large number of different devices that could be connected to the network at any given time renders precise modeling intractable. Broadly speaking, loads may vary with voltage and frequency. In the steady state, frequency is constant, so the only concern is voltage. For most steady-state analysis, a fixed (i.e., constant over an allowable voltage range) power consumption model can be used. Still, some analysis requires consideration of voltage effects to be useful, and then the traditional exponential model can be used to represent the real power consumption P and reactive power consumption Q as:

P = P0 V^a. (7.33)

Q = Q0 V^b. (7.34)

In these equations, the voltage V is normalized to some rated voltage. The exponents a and b can be 0, 1, or 2, representing constant power, constant current, or constant impedance loads, respectively. Alternatively, they can represent composite loads, with a generally ranging between 0.5 and 1.8 and b ranging between 1.5 and 6.0.
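The exponential load model of equations 7.33 and 7.34 in a short sketch; the base powers and the composite-load exponents chosen here are illustrative values within the ranges quoted above:

```python
def load_power(v_pu, p0, q0, a=1.0, b=3.0):
    """Exponential voltage-dependent load model (eqs. 7.33, 7.34).
    v_pu is the bus voltage normalized to rated voltage; a and b are
    assumed composite-load exponents."""
    return p0 * v_pu ** a, q0 * v_pu ** b

# At rated voltage the load draws exactly (p0, q0); a 5% voltage dip
# reduces Q much more than P whenever b > a.
p, q = load_power(0.95, p0=100.0, q0=50.0)
```

Setting a = b = 2 recovers a constant-impedance load, and a = b = 0 recovers the fixed-power model used in most steady-state studies.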
7 Power System Analysis

7.2.2 Power Flow Analysis

Power flow equations represent the fundamental balancing of power as it flows from the generators to the loads through the transmission network. Both real and reactive power flows play equally important roles in determining the power flow properties of the system. Power flow studies are among the most significant computational studies carried out in power system planning and operations in the industry. Power flow equations allow the computation of the bus voltage magnitudes and their phase angles as well as the transmission line current magnitudes. In actual system operation, both the voltage and current magnitudes need to be maintained within strict tolerances for meeting consumer power quality requirements and for preventing overheating of the transmission lines, respectively. The difficulty in computing the power flow solutions arises from the fact that the equations are inherently nonlinear because of the balancing of power quantities. Moreover, the large size of the power network implies that power flow studies involve solving a very large number of simultaneous nonlinear equations. Fortunately, the sparse interconnected nature of the power network reflects itself in the computational process, facilitating the computational algorithms.

In this section, we first study a simple power flow problem to gain insight into the nonlinear nature of the power flow equations. We then formulate the power flow problem for the large power system. A classical power flow solution method based on the Gauss-Seidel algorithm is studied. The popular Newton-Raphson algorithm, which is the most commonly used power flow method in the industry today, is introduced. We then briefly consider the fast decoupled power flow algorithm, which is a heuristic method that has proved quite effective for quick power flow computations. Finally, we will discuss the dc power flow solution, a highly simplified algorithm for computing approximate linear solutions of the power flow problem that is becoming widely used for electricity market calculations.

Simple Example of a Power Flow Problem

Let us consider a single generator delivering the load P + jQ through the transmission line with the reactance x, as shown in Figure 7.9. The generator bus voltage is assumed to be at the rated voltage, and it is at 1 per unit (pu). The generator bus angle is defined as the phasor reference, and hence, the generator bus voltage phase angle is set to be zero. The load bus voltage has magnitude V and phase angle δ. Because the line has been assumed to be lossless, note that the generator real power output must be equal to the real power load P. However, the reactive power output of the generator will be the sum of the reactive load Q and the reactive power "consumed" by the transmission line reactance x.

FIGURE 7.9 A Simple Power System

Let us write down the power flow equations for this problem. Given a loading condition P + jQ, we want to solve for the unknown variables, namely, the bus voltage magnitude V and the phase angle δ. For simplicity, we will assume that the load is at unity power factor, that is, Q = 0. The line current phasor I from the generator bus to the load bus is easily calculated as:
I = (1∠0 − V∠δ) / (jx). (7.35)

FIGURE 7.10 Qualitative Plot of the Power-Flow Solutions
Next, the complex power S delivered to the load bus can be calculated as:

S = VI* = (V/x)∠(δ + π/2) − (V²/x)∠(π/2). (7.36)
Therefore, we get the real and reactive power balance equations:

P = −(V sin δ)/x and Q = (−V² + V cos δ)/x. (7.37)
After setting Q = 0 in equation 7.37, we can simplify equation 7.37 into a quadratic equation in V² as follows:

V⁴ − V² + x²P² = 0. (7.38)
Therefore, given any real power load P, the corresponding power flow solution for the bus voltage V can be solved from equation 7.38. We note that for nominal load values, there are two solutions for the bus voltage V, and they are the positive roots of V² in the next equation:

V² = (1 ± √(1 − 4x²P²)) / 2. (7.39)
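A quick numeric check of equation 7.39 can be sketched as follows; the line reactance and load values are made up for illustration.

```python
import math

def pf_solutions(P, x):
    """Return the (high, low) voltage roots of V^4 - V^2 + x^2 P^2 = 0,
    or None when P exceeds Pmax = 1/(2x) and no steady-state solution exists."""
    disc = 1.0 - 4.0 * x**2 * P**2
    if disc < 0:
        return None
    r = math.sqrt(disc)
    return math.sqrt((1.0 + r) / 2.0), math.sqrt((1.0 - r) / 2.0)

x = 0.2                      # line reactance, pu
print(1.0 / (2.0 * x))       # Pmax = 2.5 pu
print(pf_solutions(1.0, x))  # two solutions: one near 1 pu, one much lower
print(pf_solutions(3.0, x))  # load beyond Pmax -> None
```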
Equation 7.39 implies that there exist two power flow solutions for load values P < Pmax, where Pmax = 1/(2x), and there exist no power-flow solutions for P > Pmax. A qualitative plot of the power flow solutions for the bus voltage V in terms of different real power loads P is shown in Figure 7.10. From the plot and from the analysis thus far, we can make the following observations:

1. The dependence of the bus voltage V on the load P is very much nonlinear. It has been possible for us to compute the power flow solutions analytically for this simple system. In the large power system with hundreds of generators delivering power to thousands of loads, we have to solve for thousands of bus voltages and their phase angles from large coupled sets of nonlinear power flow equations, and the computation is a nontrivial task.

2. Multiple power flow solutions can exist for a specified loading condition. In Figure 7.10, there exist two solutions for any load P < Pmax. Among the two solutions, the solution on the upper locus with voltage V near 1 pu is considered the nominal solution. For the solutions on the lower locus, the bus voltage V may be unacceptably low for normal operation. The lower voltage solution also requires higher line current to deliver the specified load P, and the line current values can become unacceptably high. In general, for any specified loading condition, we would like to locate the power flow solution that has the most acceptable values of voltages and currents among the multiple power flow solutions. In this example of a single generator delivering power to a single load, there exist two power flow solutions. In a large power system, there may exist a very large number of possible power flow solutions.

3. Once the bus voltage V has been computed from equation 7.39, the bus voltage phase angle δ can be computed from equation 7.37. Then, the line current phasor I can be solved from equation 7.35. Specifically, we would like to ensure that the magnitude of the line current I stays below the thermal limit of the transmission line for preventing potential damage to the expensive transmission line.

4. Power flow solutions may fail to exist at high loading conditions, such as when P > Pmax in Figure 7.10. The loading value Pmax beyond which power flow solutions do not exist is called the static limit in the power literature. Because power flow solutions denote the steady-state operating conditions in our formulation, lack of power flow solutions implies that it is not possible to transfer power from the generator to the load in a steady-state fashion, and the dynamic interactions of the generators and the loads become significant. Operating the power system at loading conditions
FIGURE 7.11 Complex Power Balance at Bus i
beyond the static limit may lead to catastrophic failure of the system.

Power Flow Problem Formulation

In this subsection, we will construct the power flow equations in a structured manner using the admittance matrix Y_bus representation of the transmission network. The admittance matrix Y_bus is assumed to be known for the system under consideration. Let us first look at the complex power balance at any bus, say bus i, in the network (see Figure 7.11). The power balance equation is given by:

S_i = V_i I_i* = S_Gi − S_Li. (7.40)
Let us denote the vector of bus voltages as V_bus and the vector of bus injection currents as I_bus. By definition, the admittance matrix Y_bus provides the relationship I_bus = Y_bus V_bus. Suppose the (i, j)th entry Y_ij of the Y_bus matrix has the magnitude Y_ij and the phase γ_ij. Then, we can simplify the current injection I_i as:

I_i = Σ_j Y_ij V_j = Σ_j Y_ij V_j ∠(δ_j + γ_ij). (7.41)
Then, combining equations 7.40 and 7.41, we get the complex power balance equations for the network as:

S_i = S_Gi − S_Li = Σ_j Y_ij V_i V_j ∠(δ_i − δ_j − γ_ij). (7.42)
Taking the real and imaginary parts of the complex equation 7.42 gives us the real and reactive power flow equations for the network:

P_i = P_Gi − P_Li = Σ_j Y_ij V_i V_j cos(δ_i − δ_j − γ_ij). (7.43)

Q_i = Q_Gi − Q_Li = Σ_j Y_ij V_i V_j sin(δ_i − δ_j − γ_ij). (7.44)
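Equations 7.43 and 7.44 are the real and imaginary parts of S_i = V_i (Σ_j Y_ij V_j)*, so they can be evaluated numerically straight from the complex admittance matrix. The sketch below does this for a made-up two-bus system (one line of reactance 0.1 pu); the voltages and angles are illustrative.

```python
import numpy as np

def injections(Ybus, V, delta):
    """Return the bus injection vectors (P, Q) of equations 7.43 and 7.44."""
    Vph = V * np.exp(1j * delta)        # bus voltage phasors V_i * exp(j delta_i)
    S = Vph * np.conj(Ybus @ Vph)       # S_i = V_i * conj(I_i), with I = Ybus V
    return S.real, S.imag

y = 1.0 / 0.1j                          # series admittance of the single line
Ybus = np.array([[y, -y], [-y, y]])
P, Q = injections(Ybus, np.array([1.0, 0.95]), np.array([0.0, -0.1]))
# P[0] + P[1] is ~0 here because the line is a pure (lossless) reactance.
```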
Generally speaking, our objective in this section is to solve for the bus voltage magnitudes V_i and the phase angles δ_i when the power generations and loads are specified. For a power system with N buses, there are 2N power flow equations. At each bus, there are six variables: P_Gi, Q_Gi, P_Li, Q_Li, V_i, and δ_i. Depending on the nature of the bus, four of these variables will be specified at each bus, leaving two unknown variables at each bus. We will end up with 2N unknown variables related by the 2N equations 7.43 and 7.44, and our aim in the rest of this section is to develop algorithms for solving this problem.

Let us consider a purely load bus first, that is, one with P_Gi = Q_Gi = 0. In this case, the loads P_Li and Q_Li are assumed to be known either from measurements or from load estimates, and the bus voltage variables V_i and δ_i are the unknown variables. Purely load busses with no generation support are called PQ busses in power flow studies because both the real power injection P_i and the reactive power injection Q_i have been specified at these busses.

Typically, every generator in the system consists of two types of internal controls, one for maintaining the real power output of the generator and the other for regulating the bus voltage magnitude. In power flow studies, we usually assume that both these control mechanisms are operating perfectly, so the real power output P_Gi and the voltage V_i are maintained at their specified values. Again, the load variables P_Li and Q_Li are also assumed to be known. This leaves the generator reactive output Q_Gi and the voltage phase angle δ_i as the two unknown variables for the bus. In terms of injections, the real power injection P_i and the bus voltage V_i are then the specified variables; thus, the generator busses are normally denoted PV busses in power flow studies.

In reality, the generator voltage control for keeping the bus voltage magnitude at a specified value becomes inactive when the control is pushed to the extremes, such as when the reactive output of the generator becomes either too high or too low.
This voltage control limitation of the generator can be represented in power flow studies by keeping track of the reactive output Q_Gi. When the reactive generation Q_Gi becomes larger than a prespecified maximum value Q_Gi,max or goes lower than a prespecified minimum value Q_Gi,min, the reactive output is assumed to be fixed at the limiting value Q_Gi,max or Q_Gi,min, respectively, and the voltage control is disabled in the formulation; that is, the reactive power Q_Gi becomes a known variable, either at Q_Gi,max or Q_Gi,min, and the voltage V_i then becomes the unknown variable for bus i. In power flow terminology, we say that the generator at bus i has "reached its reactive limits" and, hence, bus i has changed from a PV bus to a PQ bus. Owing to space limitations, we will not discuss generator reactive limits in any more detail in this section.

In addition to PQ busses and PV busses, we also need to introduce the notion of a slack bus in the power flow formulation. Note that power conservation demands that the real power generated from all the generators in the network must equal the sum of the total real power loads and the line losses on the transmission network:
Σ_i P_Gi = Σ_i P_Li + Σ_i Σ_j P_losses,ij. (7.45)
The line losses associated with any transmission line in turn depend on the line resistance and the line current magnitude. As stated earlier, one of the main objectives of power flow studies is to compute the line currents, and as such, the line current values are not known at the beginning of a power flow computation. Therefore, we do not know the actual values for the line losses in the transmission network. Looking at equation 7.45, at least one of the variables P_Gi or P_Li must be a free variable for satisfying the real power conservation. Traditionally, we assume that one of the generations is a "slack" variable, and such a generator bus is denoted the slack bus. At the slack bus, we specify both the voltage V_i and the angle δ_i; the power injections P_i and Q_i are the unknown variables. Again, by tradition, we set the voltage at the slack bus to be the rated voltage of 1 pu and the phase angle to be zero. As in standard textbooks, the slack bus is defined in this section to be the first bus in the network, with V_1 = 1 and the angle δ_1 = 0. Assuming the number of generators to be N_G, the busses 2 through N_G + 1 are set to be the PV busses. The remaining busses N_G + 2 through N are then the PQ busses.
Gauss-Seidel Algorithm

Let us consider a set of simultaneous linear equations of the form Ax = b, where A is an n × n matrix and x and b are n × 1 vectors. Clearly, there exists a unique solution to the problem when the matrix A is invertible, and the solution is given by x = A⁻¹b. When the matrix size is very large, it may not be possible to compute the inverse of the matrix A for finding the solution; there exist other numerical techniques. The Gauss-Seidel algorithm is one such classical algorithm that tries to arrive at the solution x = A⁻¹b iteratively by starting from an approximate initial condition x⁰. The iteration for the solution x^{k+1} from the previous iterate x^k proceeds as follows:

x_i^{k+1} = (1/a_ii) [ b_i − Σ_{j<i} a_ij x_j^{k+1} − Σ_{j>i} a_ij x_j^k ] for i = 1, 2, …, n. (7.46)

Here, a_ij denotes the (i, j)th entry of the matrix A as usual. It can be shown that the iterative solution x^k converges to the exact solution A⁻¹b for any initial condition x⁰, provided the matrix A satisfies certain "diagonal dominance" properties. The details are omitted here to save space, and they can be found in standard numerical analysis textbooks.

In the previous section, we formulated the power flow problem as the set of simultaneous nonlinear equations 7.42, and as such, it is not obvious how the Gauss-Seidel algorithm can be applied for solving these equations. The trick here is to visualize the power balance equations as arising from the network admittance equations Y_bus V_bus = I_bus. The matrix Y_bus takes over the role of the matrix A in the linear equations. We will be solving for the bus voltage vector V_bus. The current injections I_bus are not known per se; they are in fact dependent on the bus voltages. As we see next, they can also be computed iteratively from the power injections S_i by using the relationship I_i = S_i*/V_i*. For a PQ bus, the injection S_i is a specified variable and, hence, is known. For PV busses, only the real power injection P_i is known, while the reactive injection Q_i is evaluated first using the latest estimate of the bus voltages V_bus.

An outline of the Gauss-Seidel algorithm for solving the power flow equations of 7.42 is presented next. Let us start with an initial condition for the bus voltages V_bus⁰, and we would like to compute the iterate V_bus^{k+1} from the previous iterate V_bus^k. Recall that bus 1 is a slack bus and, hence, V_1^k = 1∠0 for all iterations. Also, busses 2 through N_G + 1 are PV busses; therefore, we need to keep V_i^k at the specified values V_i,specified = V_i⁰ for all iterations for the PV busses.

Gauss-Seidel Iterations
• Slack bus (i = 1): V_1^{k+1} = 1∠0.
• PV busses (i = 2, …, N_G + 1):
(1) Compute the reactive power generation at bus i:

Q_i^{k+1} = Σ_{j<i} Y_ij V_i^k V_j^{k+1} sin(δ_i^k − δ_j^{k+1} − γ_ij) + Σ_{j≥i} Y_ij V_i^k V_j^k sin(δ_i^k − δ_j^k − γ_ij). (7.47)

(2) Update the bus voltage phasor V_i^{k+1}:

V_i^{k+1} = (1/Y_ii) [ (P_i − jQ_i^{k+1})/(V_i^k)* − Σ_{j<i} Y_ij V_j^{k+1} − Σ_{j>i} Y_ij V_j^k ]. (7.48)

(3) Normalize the magnitude of V_i^{k+1} to be V_i⁰:

V_i^{k+1} = V_i^{k+1} V_i⁰ / |V_i^{k+1}|. (7.49)

• PQ busses (i = N_G + 2, …, N):
(4) Update the bus voltage phasor V_i^{k+1}:

V_i^{k+1} = (1/Y_ii) [ (P_i − jQ_i)/(V_i^k)* − Σ_{j<i} Y_ij V_j^{k+1} − Σ_{j>i} Y_ij V_j^k ]. (7.50)
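The PQ-bus update of equation 7.50 can be sketched for a toy two-bus case: a slack bus feeding one PQ bus over a 0.2 pu line, with a 1 pu unity-power-factor load. All values here are made up for illustration.

```python
import numpy as np

y = 1.0 / 0.2j                           # line admittance
Ybus = np.array([[y, -y], [-y, y]])
S2 = -1.0 + 0.0j                         # PQ bus: 1 pu load -> negative injection

V = np.array([1.0 + 0.0j, 1.0 + 0.0j])   # flat start; bus 1 is the slack (held fixed)
for k in range(50):
    # Equation 7.50: V2 <- (1/Y22) [ (P2 - jQ2)/conj(V2) - Y21 V1 ]
    V[1] = (np.conj(S2) / np.conj(V[1]) - Ybus[1, 0] * V[0]) / Ybus[1, 1]

# |V[1]| converges to the high-voltage root of equation 7.39 (about 0.979 pu).
```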
At the end of the (k+1)th iteration, we have the updated values of the bus voltages V_bus^{k+1}. The values of V_bus^{k+1} can be compared with the previous set of values V_bus^k to check whether the solution has converged. For the power flow problems, the Gauss-Seidel algorithm has been known to converge even from poor initial conditions, which is one of its main strengths. The algorithm is typically used for "flat starts" when all the initial voltage magnitudes are set to be at their rated values (V_i⁰ = 1 pu for all the PV busses), and the bus voltage angles are set to be zero (δ_i⁰ = 0 for all
busses). The relative simplicity of the computations involved in each iteration step in equations 7.46 to 7.49 implies that the algorithm is very fast to implement. On the other hand, the error convergence rate is typically linear in the sense that the ratios of the error norm from one iteration to the previous iteration tend to be constant. Therefore, while the Gauss-Seidel algorithm can converge to the proximity of the actual solution in tens of iterations, it typically takes a large number of iterations to get to an accurate solution estimate.

Newton-Raphson Algorithm

Unlike the Gauss-Seidel algorithm, which was originally developed for solving simultaneous linear equations, the Newton-Raphson (NR) algorithm is specifically designed for solving nonlinear equations. The algorithm proceeds iteratively by linearizing the nonlinear equations into linear equations at each step and by solving the linearized equations exactly. Suppose we want to solve the nonlinear equations F(x) = 0, where x is an n × 1 vector and where F : R^n → R^n is a smooth nonlinear function, and we have been given an initial condition x⁰. Then, for computing the estimate x^{k+1} from x^k, we first linearize the function F(x) at x^k as follows:

0 = F(x) ≈ F(x^k) + (∂F/∂x)|_{x^k} (x − x^k). (7.51)
The solution to the linearized equation 7.51 is defined as the iteration estimate x^{k+1}:

x^{k+1} = x^k − [ (∂F/∂x)|_{x^k} ]⁻¹ F(x^k). (7.52)
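As a generic illustration of the iteration in equation 7.52 (not the explicit power flow Jacobian formulas discussed below), the sketch uses a finite-difference Jacobian and a made-up two-equation test function:

```python
import numpy as np

def newton_raphson(F, x0, tol=1e-10, max_iter=20, h=1e-7):
    """Solve F(x) = 0 by the Newton-Raphson iteration of equation 7.52."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):          # finite-difference Jacobian, column j
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - Fx) / h
        x = x + np.linalg.solve(J, -Fx)  # solve J dx = -F(x^k); then x <- x + dx
    return x

# Made-up test system: x0^2 + x1^2 = 2 and x0 = x1, with solution (1, 1).
sol = newton_raphson(lambda z: np.array([z[0]**2 + z[1]**2 - 2.0, z[0] - z[1]]),
                     [2.0, 0.5])
```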
Note that the linearization of equation 7.51 will be a good approximation if the estimate x^k is close to the true solution x*, since F(x*) = 0. The NR algorithm stated in equation 7.52 can be proved to converge to the true solution x* when the initial condition x⁰ is sufficiently close to x*. On the other hand, for initial conditions away from x*, the approximation of equation 7.51 becomes poorly justified, and the iterations can quickly diverge away from x*. When the iterations converge, owing to the linearized nature of the algorithm, the norm of the error decreases to zero in a "quadratic" fashion. Roughly speaking, the ratios of the error norm in one iteration to the square of the error norm in the previous iteration tend to be a constant. An example would be that the error norms decrease from 0.1 in one iteration, to 0.01 in the next iteration, to 0.0001 in the following iteration. Therefore, given good initial conditions, the NR algorithm can typically get to an accurate solution estimate within a few iterations.

Let us apply the NR algorithm for solving the power flow equations 7.43 and 7.44. We will solve for the unknown variables among the bus voltage magnitudes V_i and angles δ_i first. That is, we define the vector x as consisting of all the PV and PQ bus angles and all the PQ bus voltages. PV bus voltages are known and, hence, they are not included in x:

x = (δ_2, …, δ_{N_G+1}, δ_{N_G+2}, …, δ_N, V_{N_G+2}, …, V_N)^T. (7.53)
The corresponding power flow equations are as follows:

F(x) = (P_2 − p_2(x), …, P_{N_G+1} − p_{N_G+1}(x), P_{N_G+2} − p_{N_G+2}(x), …, P_N − p_N(x), Q_{N_G+2} − q_{N_G+2}(x), …, Q_N − q_N(x))^T,

where the computed injections are:

p_i(x) = Σ_j Y_ij V_i V_j cos(δ_i − δ_j − γ_ij) and q_i(x) = Σ_j Y_ij V_i V_j sin(δ_i − δ_j − γ_ij). (7.54)

The entries of the F function in equation 7.54 are the differences between the specified power injections and the computed power injections from the current power flow solution, and these are usually denoted as the real and reactive power mismatches at the different busses. In the power flow problem, we want to find a solution that makes the power mismatches in equation 7.54 equal zero. Suppose an initial condition x⁰ has been specified. Then, the NR algorithm for solving the power flow equations 7.54 proceeds iteratively as follows.

Newton-Raphson Iterations
(1) Compute the power mismatches F(x^k) for step k from equation 7.54. If the mismatches are within desired tolerance values, the iterations stop.
(2) Compute the power flow Jacobian:

J^k = (∂F/∂x)|_{x^k}. (7.55)
Owing to the nice structure of the equations in 7.54, explicit formulas can be derived for the entries of the Jacobian matrix, and the Jacobian for step k can be evaluated by substituting the current values of x^k into these formulas.
(3) Compute the correction factors Δx^k from equation 7.52 by solving a set of simultaneous linear equations:

J^k Δx^k = −F(x^k). (7.56)

The Jacobian matrix J^k is extremely sparse even for very large power systems, which facilitates the solution of Δx^k in equation 7.56.
(4) Evaluate x^{k+1} from x^k by adding the correction factors Δx^k:

x^{k+1} = x^k + Δx^k. (7.57)

As compared with the Gauss-Seidel algorithm, each iteration step in the NR algorithm is computationally much more intensive because of (a) evaluating the Jacobian and (b) solving the linear equations 7.56. On the other hand, the error convergence rate of the NR algorithm is spectacularly faster, and hence, the NR algorithm requires far fewer iterations to reach comparable solution accuracies. In usual practice, the Gauss-Seidel algorithm is used only for flat starts with poorly known initial conditions. In most other situations, the NR algorithm is the preferred choice.

Fast Decoupled Power Flow Algorithm

Both the Gauss-Seidel algorithm and the Newton-Raphson algorithm are general methods for solving linear and nonlinear equations, respectively, that were tailored toward solving the power flow problem. In this section, on the other hand, we study a method specific to power systems called the fast decoupled power flow algorithm, which is a heuristic method derived by exploiting specific properties of a power system. The fast decoupled power flow algorithm is essentially a highly simplified and approximated version of the Newton-Raphson algorithm of the previous subsection. We recall that the NR iteration steps are computation intensive because of evaluating J^k and solving the Jacobian equation 7.56. In this subsection, we will proceed to simplify this by replacing the iteration-specific J^k with a constant matrix.

It is a well-known property of power systems that variations in the bus voltage magnitudes mostly affect the reactive power injections under nominal operating conditions. Similarly, the variations in the bus voltage phase angles mostly influence the real power injections. By idealizing this property, we assume that all the Jacobian entries of the form ∂p_i/∂V_j and ∂q_i/∂δ_j are identically zero. Next, if we assume that the bus voltage magnitudes are all close to 1 pu, and the voltage phase angle differences on the two ends of any transmission line are all close to zero, the NR algorithm greatly simplifies to the fast decoupled algorithm stated in equations 7.60 to 7.63.

Let us split the power flow state vector x in equation 7.53 into the voltage magnitude V and angle δ counterparts:

δ = (δ_2, …, δ_{N_G+1}, δ_{N_G+2}, …, δ_N)^T, V = (V_{N_G+2}, …, V_N)^T. (7.58)

Similarly, we separate the real and reactive power mismatches in equation 7.54:

ΔP = (P_2 − p_2(x), …, P_{N_G+1} − p_{N_G+1}(x), P_{N_G+2} − p_{N_G+2}(x), …, P_N − p_N(x))^T and ΔQ = (Q_{N_G+2} − q_{N_G+2}(x), …, Q_N − q_N(x))^T. (7.59)

Suppose initial conditions for the voltage magnitude vector V⁰ and angle vector δ⁰ have been specified. The fast decoupled algorithm then proceeds iteratively as follows.

Fast Decoupled Iterations
(1) Compute the real and reactive power mismatches ΔP(x^k) and ΔQ(x^k). If the mismatches are within desirable tolerance, the iterations end.
(2) Normalize the mismatches by dividing each entry by its respective bus voltage magnitude:

ΔP̃^k = (ΔP_2^k/V_2^k, …, ΔP_N^k/V_N^k)^T and ΔQ̃^k = (ΔQ_{N_G+2}^k/V_{N_G+2}^k, …, ΔQ_N^k/V_N^k)^T. (7.60)

(3) Solve for the voltage magnitude and angle correction factors ΔV^k and Δδ^k by using the constant matrices B′ and B″, which are extracted from the bus admittance matrix Y_bus:

B′ Δδ^k = ΔP̃^k and B″ ΔV^k = ΔQ̃^k. (7.61)

Define B_ij = imag(Y_ij). The matrices B′ and B″ are constructed as follows:

B′ = [ B_{2,2} … B_{2,N} ; … ; B_{N,2} … B_{N,N} ] and B″ = [ B_{N_G+2,N_G+2} … B_{N_G+2,N} ; … ; B_{N,N_G+2} … B_{N,N} ]. (7.62)

(4) Update the voltage magnitude and angle vectors:

δ^{k+1} = δ^k + Δδ^k, V^{k+1} = V^k + ΔV^k. (7.63)
By using the constant matrices B′ and B″ in equation 7.61, the time taken for evaluating one iteration of the fast decoupled algorithm is considerably less than that of the Newton-Raphson algorithm. In repeated power flow runs of the same power system, the inverses of the matrices B′ and B″ can be stored directly, which enables the implementation to be very fast. However, the convergence speed of the fast decoupled algorithm is not quadratic, and it takes considerably more iterations to converge to an accurate power flow solution. The fast decoupled algorithm is used in applications where quick approximate estimates of power flow solutions are required.

Direct Current Power Flow Algorithm

A further simplification of the fast decoupled algorithm is the highly approximate dc power flow algorithm, which completely transforms the nonlinear power flow equations into linear equations by using drastic assumptions. In addition to the assumptions used in deriving the fast decoupled method, we also assume that all the voltage magnitudes are at 1 pu and all the transmission lines are lossless. With these assumptions, the voltage correction factors ΔV^k become irrelevant, and the angle variables can be solved explicitly from the linear equations:

B′δ = P, (7.64)

where P is the vector of bus power injections. The resulting solution for the bus voltage phase angles is called the dc power flow model. It gives approximate values for the phase angles across the power system. The phase angles can be used to approximate the real power flow on any transmission line by dividing the phase angle difference between the two ends of the transmission line by the line reactance. The advantage of the dc power flow model is its extreme simplicity in finding a power flow solution. The limitations of the solution need to be kept in mind, however, in light of the drastic assumptions that were used to simplify the nonlinear power flow equations into linear equations.
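A minimal sketch of the dc power flow of equation 7.64 on a made-up three-bus network (bus 1 is the slack with angle zero; the line reactances and loads are illustrative):

```python
import numpy as np

lines = [(0, 1, 0.1), (1, 2, 0.2), (0, 2, 0.25)]   # (from, to, reactance in pu)
n = 3
B = np.zeros((n, n))
for i, j, x in lines:                  # assemble the B matrix from 1/x terms
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

P = np.array([0.0, -1.0, -0.5])        # injections: two loads; the slack balances
theta = np.zeros(n)                    # slack angle stays 0
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])   # B' theta = P with slack row removed

# Approximate line flow = angle difference divided by line reactance.
flow_01 = (theta[0] - theta[1]) / 0.1
```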
7.3 Dynamic Analysis

The power system in practice is constantly undergoing changes because of such factors as changing loads, planned outages of equipment for maintenance, disturbances (e.g., equipment failures, line faults, and lightning strikes), or any number of other events that cause outages. During disturbances, the precise balance between generation and load is not maintained. These disturbances may lead to oscillations, and the system must be able to damp these out and reach a viable steady-state operating condition. Extremely fast electromagnetic transients, such as those that arise from lightning strikes or switching actions, are not considered from a system point of view; hence, the network models introduced in the previous sections are still valid. Dynamic models for generator units must still be introduced to understand system response. Load dynamics are also important, particularly those of large induction motors, but such details are beyond the scope here. This section focuses on the transient response of the system, from fractions of a second, to oscillations over several seconds, and up to slow dynamics that occur with voltage problems over several minutes.
7.3.1 Modeling Electric Generators

To understand modeling generators for dynamic analysis, more details on the physical construction are needed. Most generators are three-phase synchronous machines, which means they are designed to operate at a constant frequency. The machine rotor is driven mechanically by the prime mover governing system to control power output and speed. Synchronous machine rotors can be classified as either cylindrical or salient pole. Cylindrical rotors have an even air gap at all points between the stator (i.e., the stationary part of the machine) and the rotor. This construction is used for machines that rotate at high speed, typically steam-driven generators. Steam generators generally are two-pole or four-pole machines and so rotate, in North America, at 3600 RPM or 1800 RPM, respectively, to produce the desired 60 Hz signal. Hydro generators, conversely, may have numerous pole pairs, as it is more efficient to drive them at lower speeds. These generators have a salient pole construction that leads to a variable air gap between the stator and the rotor.

For modeling purposes, the pole construction is important because it determines the mechanical speed of rotation and also because the transfer of power from the rotor to the stator through mutual inductance depends on the size of the air gap (i.e., the reluctance of the air gap and, thus, the effective coupling). During disturbances, these variations are most evident, and the effective circuit inductances must be modeled accurately. In modern power system modeling for dynamic analysis, a rotating frame of reference is chosen to represent these effects, and the circuit equations are then written in terms of direct and quadrature axes. For simplicity, the more involved rotating frame of reference models are not developed here; instead, a simple model approximating the variable inductances is presented. To begin, there are primarily two sets of windings of concern:

• Armature windings: The windings on the stator are referred to as the armature windings. The armature is the source from which the power generated will be drawn. A voltage is induced in the armature from the rotation of the field generated by currents on the rotor. For purposes of modeling, the self-inductance, including leakage, and the mutual inductance between phases of the armature windings describe the circuit performance as current flows from the terminals. The armature windings are distributed in slots around the stator to produce a high-quality sinusoidal signal. Winding resistance may also be included but is small and often neglected in simple studies.
• Field windings: These windings reside on the rotor and provide the primary excitation for the machine. The windings are supplied with a dc current so that, with rotation, they induce a voltage in the armature windings. The current is controlled by the exciter to provide the desired voltage across the armature windings. This current must be constrained to avoid overheating of the windings; the modeling of these limits and of the excitation control circuit is critical for analysis.

FIGURE 7.12 A Simple Model of a Synchronous Generator Armature for Dynamic Studies
In addition, in salient pole machines, there are solid conducting bars in the pole faces called damper windings, or amortisseur windings, that influence the effective inductance. These windings serve to damp out higher frequency oscillations. Similar effects are also seen in the cylindrical case through eddy current flows in the rotor even though damper windings may not be present. These details are not pursued further here.

The armature will be modeled as a voltage controlled by the rotor current behind a simple transient reactance, as illustrated in Figure 7.12. The induced voltage E′∠θ, referred to as the voltage behind the reactance, is connected to the generator terminal through the transient reactance x′_d, where the subscript indicates a direct axis quantity and the superscript a transient quantity. To control the terminal voltage, the field current is controlled by the exciter, which in turn varies the voltage behind the reactance. A simple high-gain exciter with limits can be
modeled as in Figure 7.13. For the field circuit, there is a time constant associated with the winding inductance and resistance. Together, these describe the basic electromagnetic time constants for a simple generator model. There is often a supplementary stabilization control on large units referred to as a power system stabilizer (PSS). The interested reader is referred to the literature.

[FIGURE 7.13: Basic Components of a Simple Voltage Regulation and Excitation System — the terminal voltage is subtracted from the reference voltage at a summing junction Σ, amplified by the high gain K_A subject to a field voltage limiter, and applied through the exciter as the field voltage]

Mechanically, the generator is similar to a mass-spring system with some frictional damping. If there is a net imbalance of torque acting on the rotor, then, neglecting damping, the machine will accelerate according to the well-known swing equations:

dω_m/dt = ω̇_m = (1/J)(T_m − T_e),   (7.65)

where J is the rotational inertia, T_m is the mechanical torque input, and T_e the electromagnetic torque that arises from producing an electric power output. Recall that power equals torque times angular velocity, so multiplying both sides of equation 7.65 by ω_m yields:

ω̇_m ω_m = (1/J)(P_m − P_e).   (7.66)

Typically, engineers normalize the machine inertia based on the machine rating and use the per unit inertia constant H as:

H = (1/2) J (ω_{0m})² / S_B,   (7.67)

with ω_{0m} the synchronous mechanical speed. If in addition the speed and the real powers are expressed on a per unit basis, then substituting equation 7.67 into equation 7.66 gives:

ω̇ = (ω_s/2H)(P_m − P_e)  or  ω̇ = (π f_s/H)(P_m − P_e),   (7.68)

where ω̇ is now the acceleration relative to synchronous speed. The electrical power output is a function of the rotor angle θ, and so we need to write the simple differential equation that relates rotor angle to speed:

θ̇ = ω − ω_s.   (7.69)

Mani Venkatasubramanian and Kevin Tomsovic
Thus, the mechanical equations can be expressed as a second-order dynamic system. In the next section, the analysis of this model in a connected system is discussed.
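As a small numerical illustration of equations 7.65 through 7.68, the sketch below computes the per unit inertia constant H for assumed machine data and the resulting per unit acceleration. All of the numbers (J, the machine rating, and the power imbalance) are made up for illustration, not values from the text:

```python
import math

def inertia_constant(J, omega_0m, S_B):
    """Per unit inertia constant H of equation 7.67: kinetic energy stored
    at synchronous speed divided by the machine MVA rating S_B."""
    return 0.5 * J * omega_0m ** 2 / S_B

def speed_deriv(omega_s, H, P_m, P_e):
    """Acceleration relative to synchronous speed (equation 7.68),
    with P_m and P_e in per unit."""
    return omega_s / (2.0 * H) * (P_m - P_e)

# Illustrative, made-up machine data: a 2-pole, 60-Hz, 500-MVA unit.
f_s = 60.0
omega_0m = 2.0 * math.pi * f_s   # synchronous mechanical speed, rad/s
J = 35000.0                      # rotational inertia, kg*m^2 (assumed)
S_B = 500e6                      # machine rating, VA

H = inertia_constant(J, omega_0m, S_B)   # comes out in seconds

# With P_m = 0.9 pu and P_e = 0.8 pu, the rotor accelerates:
omega_dot = speed_deriv(omega_0m, H, 0.9, 0.8)   # rad/s^2
```

For this assumed data, H is on the order of a few seconds, which is typical of large turbine generators.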
7.3.2 Power System Stability Analysis

The power system is a nonlinear dynamic system consisting of generators, loads, and control devices, which are coupled together by the transmission network. The dynamic equations of the generator have been discussed in the previous section. The interactions of the generators with the loads and the control devices can result in diverse nonlinear phenomena. Specifically, we are interested in understanding whether the system response can return to an acceptable operating condition following disturbances. Normally, we distinguish between two types of disturbances in the power system context:

• Minor disturbances, such as normal random load fluctuations, are denoted small disturbances, and the ability of the power system to damp out such small disturbances is called small-signal stability. In the next section, we will learn that small-signal stability can be understood by computing the eigenvalues of the system Jacobian matrix evaluated at an equilibrium point.

• Major disturbances, such as transmission line trippings and generator outages, are denoted large disturbances, and the capability of the power system to return to acceptable operating conditions following a large disturbance is called transient stability. In the following paragraphs, we will also study techniques for verifying the transient stability of a power system following a specific large disturbance.

Small-Signal Stability
Consider a nonlinear system described by the following ordinary differential equation:

dx/dt = f(x),  x ∈ R^n,  f : R^n → R^n,  f smooth.   (7.70)

Suppose that x* is an equilibrium point for the system; that is, f(x*) = 0. We define the Jacobian matrix A for the equilibrium to be the matrix of partial derivatives ∂f/∂x evaluated at x*. Then, classical analysis in nonlinear dynamical system theory tells us that the equilibrium x* is locally stable if all the eigenvalues of the matrix A have negative real parts. Recall that the eigenvalues of a matrix A are defined as the solutions λ_i of the polynomial characteristic equation det(λ I_n − A) = 0, where I_n denotes the n × n identity matrix. In addition, the equilibrium will be locally unstable if any one of the eigenvalues has a positive real part. In the power system context, the concept of local stability is known as small-signal stability or small-disturbance stability. We will look at a simple power system example next for studying the concept in more detail.

[FIGURE 7.14: A Simple Power System — a generator with internal voltage E′∠θ behind the transient reactance jx′_d, connected through a transmission line of reactance jx to an infinite bus at 1∠0]

A single generator that is connected to an ideal generator bus through a transmission line is shown in Figure 7.14. The ideal generator maintains its bus voltage at 1 pu irrespective of the external dynamics and is referred to as an infinite bus. It also defines the reference angle for this power system, and its angle is set at zero. The other generator is represented by the classical machine model with a voltage source of E′∠θ connected to the generator terminal through the transient reactance x′_d. In the classical machine model, the internal induced voltage E′ is assumed to remain constant, and the rotor angle θ follows the second-order swing equations as in equation 7.68 but here includes a damping component P_d:

(2H/ω_s) θ̈ = P_m − P_e − P_d.   (7.71)

This can be rewritten into the standard form of:

d/dt [θ; ω] = [ω − ω_s; (ω_s/2H)(P_m − P_e − P_d)].   (7.72)

For the power system in Figure 7.14, the electrical power output P_e is easily calculated as:

P_e = (E′/(x′_d + x)) sin θ = P_max sin θ,   (7.73)

where P_max = E′/(x′_d + x) is a constant. The remaining term, the damping power P_d, is usually defined as P_d = D(ω − ω_s), where D is known as the damping constant for the generator, which is a positive constant. Substituting the entries for P_e and P_d into equation 7.72, we get the dynamic equations for the generator as:

d/dt [θ; ω] = [ω − ω_s; (ω_s/2H)(P_m − P_max sin θ − D(ω − ω_s))].   (7.74)

The equilibrium points for equation 7.74 can be easily solved by setting the derivatives to zero. The point (θ*, ω_s)^T is an equilibrium for this system if P_m = P_max sin θ*. The equilibrium points can be identified visually by plotting P_e and P_m as shown in Figure 7.15.

[FIGURE 7.15: Plot of P_e and P_m — the curve P_max sin θ intersects the constant line P_m at the angles θ_s and θ_u]

The intersection of the
constant line P_m with the curve P_max sin θ, highlighted by the black marks, depicts the two possible equilibrium points when P_m < P_max. Let us denote the equilibrium point between 0 and π/2 by θ_s and the other equilibrium between π/2 and π by θ_u. We will show that the equilibrium (θ_s, ω_s)^T is small-signal stable, while the other equilibrium (θ_u, ω_s)^T is small-signal unstable. For assessing the small-signal stability, we need to evaluate the Jacobian matrix A at the respective equilibrium point. The Jacobian is first derived as:

∂f/∂x = [0, 1; (ω_s/2H)(−P_max cos θ), (ω_s/2H)(−D)].   (7.75)

The eigenvalues of the system matrix A can easily be computed at the equilibrium point (θ_s, ω_s)^T as the solutions of the second-order polynomial:

λ² + (ω_s/2H) D λ + (ω_s/2H) P_max cos θ_s = 0.   (7.76)

Since θ_s by definition lies between 0 and π/2, cos θ_s is positive. Therefore, the two roots of the characteristic equation 7.76 have negative real parts, and the equilibrium (θ_s, ω_s)^T is small-signal stable. Therefore, the system will return to the equilibrium condition following any small perturbations away from this equilibrium point. On the other hand, when the equilibrium point (θ_u, ω_s)^T is considered, note that cos θ_u is negative because θ_u lies between π/2 and π. Then, the characteristic equation 7.76 will have one positive real eigenvalue and one negative real eigenvalue when θ_s is replaced by θ_u. Therefore, the equilibrium (θ_u, ω_s)^T is small-signal unstable. Normally, the power system is operated only at the stable equilibrium point (θ_s, ω_s)^T. The unstable equilibrium point (θ_u, ω_s)^T also plays an important role in determining the large disturbance response of the system that we study in the next subsection. Unlike this simple system, assessing the small-signal stability properties of a realistic large power system is an extremely challenging task computationally.
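The eigenvalue test above can be sketched numerically. The fragment below (machine data assumed for illustration) solves the characteristic polynomial of equation 7.76 at both equilibria and checks the signs of the real parts:

```python
import cmath, math

def swing_jacobian_eigs(H, D, P_max, theta_eq, omega_s):
    """Roots of the characteristic polynomial (equation 7.76):
    lambda^2 + (omega_s/2H) D lambda + (omega_s/2H) P_max cos(theta_eq) = 0."""
    b = omega_s / (2.0 * H) * D
    c = omega_s / (2.0 * H) * P_max * math.cos(theta_eq)
    disc = cmath.sqrt(b * b - 4.0 * c)
    return (-b + disc) / 2.0, (-b - disc) / 2.0

# Illustrative, made-up data: H = 5 s, D = 2, P_max = 2 pu, P_m = 1 pu.
omega_s = 2.0 * math.pi * 60.0
H, D, P_max, P_m = 5.0, 2.0, 2.0, 1.0

theta_s = math.asin(P_m / P_max)   # stable equilibrium, in (0, pi/2)
theta_u = math.pi - theta_s        # unstable equilibrium, in (pi/2, pi)

eig_s = swing_jacobian_eigs(H, D, P_max, theta_s, omega_s)
eig_u = swing_jacobian_eigs(H, D, P_max, theta_u, omega_s)

# At theta_s both eigenvalues have negative real parts (small-signal stable);
# at theta_u one eigenvalue is real and positive (small-signal unstable).
stable = all(l.real < 0 for l in eig_s)
unstable = any(l.real > 0 for l in eig_u)
```

For a realistic system, the same test would be applied to the full (and very large) Jacobian matrix rather than this 2 × 2 block.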
Transient Stability
Assume that the power system is operating at a small-signal stable equilibrium point. Suppose the system is suddenly subject to a large disturbance such as a line fault. Then, the power system protective relays will detect the fault, and they will isolate the faulty portion of the network by possibly tripping some lines. The occurrence of the fault and the subsequent line trippings will cause the system response to move away from the equilibrium condition. After the fault has been cleared, the ability of the system to return to the nominal equilibrium condition is called the transient stability for that fault scenario. We will study a transient stability example for the simple system in Figure 7.16.

[FIGURE 7.16: Prefault Power System — the generator E′∠θ behind jx′_d connected to the infinite bus 1∠0 through two parallel transmission lines, each of reactance jx]

Suppose we want to study the occurrence of a solid three-phase-to-ground fault at the middle point of the lower transmission line in Figure 7.16. We usually assume that the system is operating at a nominal equilibrium point before the fault occurs. For this prefault system, the effective transmission line reactance is x/2, since there are two transmission lines in parallel, each with reactance x. The dynamic equations for the prefault system are then given by:

d/dt [θ; ω] = [ω − ω_s; (ω_s/2H)(P_m − P_max^pre sin θ − D(ω − ω_s))],   (7.77)

where P_max^pre = E′/(x′_d + x/2). The equilibrium points can be solved by setting the derivatives to zero in equation 7.77. Let us denote the stable equilibrium by (θ_s^pre, ω_s)^T, where θ_s^pre is the equilibrium solution of the rotor angle between 0 and π/2.

Next, let us say that the fault occurs at time t = 0. When the solid fault is present at the middle of the lower transmission line, the system in Figure 7.16 changes to the configuration shown in Figure 7.17.

[FIGURE 7.17: Fault-On Power System — the upper line of reactance jx remains in service, while the faulted lower line presents a reactance of jx/2 from the generator terminal to the fault point]

The computation of the generator electrical power output P_e for the fault-on system in Figure 7.17 requires a little more work. Looking from the generator terminal bus, the effective Thevenin voltage is 1∠0 · [(x/2)/(x/2 + x)] = (1/3)∠0. Next, the effective Thevenin reactance is the parallel equivalent of the reactance x and the reactance x/2, which is x/3. Therefore, the electrical power P_e during the fault-on period is given by:

P_e^fault = P_max^fault sin θ = [(E′/3)/(x′_d + x/3)] sin θ.   (7.78)

The dynamic equations for the fault-on system are then given by:

d/dt [θ; ω] = [ω − ω_s; (ω_s/2H)(P_m − P_max^fault sin θ − D(ω − ω_s))].   (7.79)

Let us assume that the relays clear the fault at time t = t_c by opening the lower transmission line at both the sending and receiving ends. Then, the system configuration becomes as shown in Figure 7.14, and the dynamic equations for the postfault system are given by:

d/dt [θ; ω] = [ω − ω_s; (ω_s/2H)(P_m − P_max^post sin θ − D(ω − ω_s))],   (7.80)

where P_max^post = E′/(x′_d + x). If the system settles down to the stable equilibrium of the postfault system of equation 7.80 after the fault is cleared, we say that the system is transient stable for the fault under study. To verify this numerically, we proceed as follows.

Transient Stability Study by Numerical Integration
(1) The system is at (θ_s^pre, ω_s)^T before the fault occurs. Therefore, we start the simulation with (θ, ω)^T = (θ_s^pre, ω_s)^T at time t = 0.
(2) During the fault-on period, the system dynamics are governed by equation 7.79, and the fault-on period is from time t = 0 through time t = t_c. Therefore, we integrate equation 7.79 starting from the initial condition (θ_s^pre, ω_s)^T at time t = 0 for the period from t = 0 to t = t_c. Let us denote the system state at the end of the fault-on numerical integration at time t = t_c by the clearing state (θ_c, ω_c)^T.
(3) At time t = t_c, the system equations change to the postfault equation 7.80. Therefore, we integrate equation 7.80 starting from the clearing state (θ_c, ω_c)^T at time t = t_c for a period of several seconds to assess whether the system response converges to the stable equilibrium (θ_s^post, ω_s)^T. If the postfault response settles down to the nominal equilibrium, we can determine the system to be transient stable for the disturbance. If the postfault system response diverges away, the system is not transient stable.

The steps involved in the numerical integration procedure for a realistic large system are similar to the outline above for the simple system. The models are of very large dimensions for real-size power systems. Hence, the formulation of the prefault, fault-on, and postfault models becomes a nontrivial task, and the numerical integration becomes highly time-consuming.
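The three-step procedure can be sketched with a hand-rolled fourth-order Runge-Kutta integrator. All of the per unit data below (E′, x′_d, x, H, D, P_m, and the clearing time) are assumptions chosen for illustration, not values from the text:

```python
import math

def swing_rhs(state, P_max, P_m, H, D, omega_s):
    """Right-hand side of equations 7.77/7.79/7.80 (classical machine swing)."""
    theta, omega = state
    dtheta = omega - omega_s
    domega = omega_s / (2.0 * H) * (P_m - P_max * math.sin(theta)
                                    - D * (omega - omega_s))
    return dtheta, domega

def rk4_step(state, dt, *args):
    """One fourth-order Runge-Kutta step of the swing dynamics."""
    def shift(s, k, h):
        return (s[0] + h * k[0], s[1] + h * k[1])
    k1 = swing_rhs(state, *args)
    k2 = swing_rhs(shift(state, k1, dt / 2.0), *args)
    k3 = swing_rhs(shift(state, k2, dt / 2.0), *args)
    k4 = swing_rhs(shift(state, k3, dt), *args)
    return (state[0] + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Illustrative, made-up per unit data: E' = 1.2, x'_d = 0.3, x = 0.4.
omega_s = 2.0 * math.pi * 60.0
H, D, P_m, E, xd, x = 4.0, 1.0, 0.9, 1.2, 0.3, 0.4
P_pre = E / (xd + x / 2.0)              # prefault: two parallel lines
P_fault = (E / 3.0) / (xd + x / 3.0)    # fault-on Thevenin equivalent
P_post = E / (xd + x)                   # postfault: one line

# Step 1: start from the prefault equilibrium.
state = (math.asin(P_m / P_pre), omega_s)
dt, t_c = 1e-3, 0.1                     # clearing time t_c = 0.1 s (assumed)

# Step 2: integrate the fault-on dynamics from t = 0 to t = t_c.
for _ in range(int(round(t_c / dt))):
    state = rk4_step(state, dt, P_fault, P_m, H, D, omega_s)

# Step 3: integrate the postfault dynamics for several seconds.
for _ in range(int(round(5.0 / dt))):
    state = rk4_step(state, dt, P_post, P_m, H, D, omega_s)

theta_post = math.asin(P_m / P_post)    # postfault stable equilibrium angle
```

For this assumed data and clearing time, the postfault response settles at the stable equilibrium, so the case is transient stable; increasing t_c sufficiently would make it unstable.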
Equal Area Criterion
To verify the transient stability, it is possible to derive analytical conditions for the power system in Figure 7.16. For simplicity, we assume that the damping constant D = 0 and that the fault clearing is instantaneous (t_c = 0). We recall that the dynamics of the prefault system are described by:

(2H/ω_s) θ̈ = P_m − P_max^pre sin θ,   (7.81)

where P_max^pre = E′/(x′_d + x/2). Similarly, the equations for the postfault system are described by:

(2H/ω_s) θ̈ = P_m − P_max^post sin θ,   (7.82)

where P_max^post = E′/(x′_d + x). Clearly, it follows that P_max^pre > P_max^post because x/2 < x. Intuitively, more power can be transferred in the prefault configuration with two parallel lines compared to the one transmission line present in the postfault configuration.

As stated earlier, the system is operating at the prefault equilibrium (θ_s^pre, ω_s)^T before the fault occurs, where sin θ_s^pre = P_m/P_max^pre, and θ_s^pre lies between 0 and π/2. When the fault is cleared instantaneously at time t = 0, we would like to know whether the transient starting from (θ_s^pre, ω_s)^T will settle to the postfault system equilibrium (θ_s^post, ω_s)^T or whether it will diverge away. Convergence to (θ_s^post, ω_s)^T or divergence implies transient stability or instability, respectively.

Let us start with the analysis. First, we note that θ_s^pre < θ_s^post because P_max^pre > P_max^post. For visualization, let us plot P_e and P_m for the postfault system as shown in Figure 7.18.

[FIGURE 7.18: Power-Angle Curve for the Postfault System — the curve P_max^post sin θ, the constant line P_m, and the angles θ_s^pre, θ_s^post, and θ_u^post]

The dynamics of the transient starting from (θ_s^pre, ω_s)^T are governed by the second-order equation:

(2H/ω_s) θ̈ = (2H/ω_s) ω̇ = P_m − P_max^post sin θ.   (7.83)

Therefore, the sign of the term P_m − P_max^post sin θ determines whether the speed derivative ω̇ is positive or negative. Inspecting the plot in Figure 7.18, we conclude that the rotor frequency ω increases whenever the rotor angle θ is below the P_m line because then P_m − P_max^post sin θ will be positive. Similarly, the rotor speed decreases in value when the angle θ is above the P_m line because then P_m − P_max^post sin θ will be negative.

Let us recall that the postfault system response starts at (θ_s^pre, ω_s)^T at time t = 0. As noted earlier, the initial rotor angle θ_s^pre lies beneath the P_m line in Figure 7.18; hence, the speed ω increases from the initial value ω = ω_s at time t = 0 as soon as the fault is cleared. When ω increases above ω_s, the rotor angle starts to increase from θ_s^pre because θ̇ = ω − ω_s for the machine dynamics. Therefore, the rotor angle moves up on the power-angle curve in Figure 7.18, and the rotor speed keeps increasing until the rotor angle reaches the value θ_s^post at the intersection point with the P_m line in Figure 7.18. Let us say that the angle θ takes t_1 seconds to increase from the initial value θ_s^pre at time t = 0 to the value θ_s^post. During this time period from t = 0 to t = t_1, the speed ω has increased from ω_s to some higher value, say ω_1. The dynamic state of the postfault system at time t = t_1 is then given by (θ_s^post, ω_1)^T. Note that even though the rotor angle θ equals θ_s^post at time t = t_1, the system is not in the equilibrium condition because the speed ω equals ω_1 at time t_1, and ω_1 is greater than the equilibrium speed value ω_s. By construction, the speed value ω_1 is defined as follows by the dynamics of equation 7.83:

ω_1 − ω_s = ∫_{t=0}^{t=t_1} ω̇ dt = (ω_s/2H) ∫_{θ_s^pre}^{θ_s^post} (P_m − P_max^post sin θ) dθ = (ω_s/2H) A_a,   (7.84)

where A_a is the shaded area shown in Figure 7.19. Because the rotor acceleration θ̈ has been positive during this time period from t = 0 to t = t_1, the angle has been accelerating in the area shown; hence, the area A_a is called the acceleration area.

[FIGURE 7.19: Acceleration Area for the Postfault System — the area A_a between the P_m line and the curve P_max^post sin θ, from θ_s^pre to θ_s^post]

When the transient reaches (θ_s^post, ω_1)^T at time t = t_1, the rotor angle keeps on increasing because θ̇ = ω_1 − ω_s > 0 at time t_1. However, for time t > t_1, the rotor angle moves above the P_m line in Figure 7.18, and hence, the derivative of speed becomes negative. That is, the speed ω starts to decrease from the value ω_1 as time increases from t = t_1. Only after the speed ω has decreased below the synchronous speed ω_s can the rotor angle start to decrease. Until then, the rotor angle will keep increasing, and the rotor speed keeps decreasing as time increases from t = t_1.

Looking at the power-angle plot in Figure 7.19, the rotor angle stays above the P_m line only up to the unstable equilibrium value θ_u^post. If the rotor angle were to increase above θ_u^post, then the speed derivative ω̇ becomes positive again, and the speed will start to increase. In this case, there is no scope for the speed ω to decrease to the synchronous speed ω_s, and transient instability results. Therefore, for any chance of transient stability, and for the response to settle down around (θ_s^post, ω_s)^T, we need the speed ω to decrease below ω_s before the rotor angle reaches the critical value θ_u^post. Graphically, this implies that the maximum deceleration area A_d^max shown in Figure 7.20 needs to be larger than the acceleration area A_a shown in Figure 7.19.

[FIGURE 7.20: Maximum Deceleration Area for the Postfault System — the area A_d^max between the curve P_max^post sin θ and the P_m line, from θ_s^post to θ_u^post]
When A_d^max > A_a, as the rotor angle decelerates past θ_s^post from time t = t_1, the deceleration area will become equal to the acceleration area at some intermediate value of θ between θ_s^post and θ_u^post at some time t = t_2. For time t > t_2, the speed ω falls below ω_s, and the rotor angle θ starts to decrease back toward θ_s^post. The alternating scenarios of rotor angle acceleration and deceleration will continue until the angle swings are eventually damped out by the rotor damping effects that have been ignored thus far. Therefore, we say that the system is transient stable whenever A_d^max > A_a.

On the contrary, when A_d^max < A_a, the rotor speed stays above ω_s, and the rotor angle reaches θ_u^post. Then, the rotor speed starts to increase away from ω_s monotonically. In this case, the rotor speed never recovers below ω_s, and the rotor angle continuously keeps increasing. The transient diverges away, thus resulting in transient instability.

The analytical criterion presented in this section for the simple system can be extended to multimachine models using Lyapunov theory, based on the concepts of energy functions. Development of analytical criteria for checking the transient stability of large representative dynamic models remains a research area. Numerical integration procedures outlined in the previous sections are commonly used by the power industry for studying the transient stability properties of large power systems.
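For instantaneous clearing (t_c = 0) with D = 0, both areas have closed-form integrals, so the equal area criterion can be checked directly. The per unit data below are assumed for illustration:

```python
import math

def accel_area(P_m, P_pre, P_post):
    """Acceleration area A_a of equation 7.84: integral of
    (P_m - P_post sin(theta)) from theta_s^pre to theta_s^post."""
    th_pre = math.asin(P_m / P_pre)
    th_post = math.asin(P_m / P_post)
    return (P_m * (th_post - th_pre)
            + P_post * (math.cos(th_post) - math.cos(th_pre)))

def max_decel_area(P_m, P_post):
    """Maximum deceleration area: integral of (P_post sin(theta) - P_m)
    from theta_s^post to the unstable angle theta_u^post = pi - theta_s^post."""
    th_post = math.asin(P_m / P_post)
    th_u = math.pi - th_post
    return (P_post * (math.cos(th_post) - math.cos(th_u))
            - P_m * (th_u - th_post))

# Illustrative, made-up data: E' = 1.2 pu, x'_d = 0.3 pu, x = 0.4 pu, P_m = 0.9 pu.
E, xd, x, P_m = 1.2, 0.3, 0.4, 0.9
P_pre = E / (xd + x / 2.0)
P_post = E / (xd + x)

A_a = accel_area(P_m, P_pre, P_post)
A_d_max = max_decel_area(P_m, P_post)
transient_stable = A_d_max > A_a   # the equal area criterion
```

For this data the acceleration area is small compared to the available deceleration area, so the instantaneously cleared fault is transient stable.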
7.4 Conclusion
This chapter has introduced the reader to the basic concepts in power system analysis, namely modeling issues, power flow studies, and dynamic stability analysis. The concepts have been illustrated on simple power system representations. In real power systems, power flow studies and system stability studies are routinely carried out for ensuring the reliability and security of the electric grid. While the basic concepts have been summarized in this chapter using simple examples, real power systems are large-scale nonlinear systems. The large interconnected nature of electric networks makes the computational aspects highly challenging. We have highlighted some of these issues in this chapter, and readers are encouraged to refer to advanced power system analysis textbooks for additional details.
References
Bergen, A., and Vittal, V. (2000). Power systems analysis (2nd ed.). New Jersey: Prentice Hall.
Chapman, S. (2002). Electric machinery and power system fundamentals. New York: McGraw-Hill.
Glover, J., and Sarma, M. (1994). Power system analysis and design (2nd ed.). Boston: PWS Publishing.
Grainger, J.J., and Stevenson, W.D. (1994). Power system analysis. New York: McGraw-Hill.
Kundur, P. (1994). Power system stability and control. New York: McGraw-Hill.
Saadat, H. (2002). Power system analysis (2nd ed.). New York: McGraw-Hill.
8 Power System Operation and Control
Mani Venkatasubramanian and Kevin Tomsovic
School of Electrical Engineering and Computer Science, Washington State University, Pullman, Washington, USA
8.1 Introduction ....................................................................................... 779
8.2 Generation Dispatch ............................................................................ 779
      8.2.1 Classical Lossless Generation Dispatch • 8.2.2 Lossy Economic Dispatch • 8.2.3 Optimal Power Flow Formulation
8.3 Frequency Control ............................................................................... 782
      8.3.1 AGC
8.4 Conclusion: Contemporary Issues ........................................................... 785
8.1 Introduction
The primary objective of power system operation is delivering power to consumers while meeting strict tolerances on voltage magnitude and frequency. Accordingly, the operation control problems naturally divide into the control of voltage magnitudes (the voltage control problem) and the control of system frequency (the frequency control problem). Because a power system is a large interconnected system spread over a geographically wide network, operation of the system is complex. The controls are built to exploit the inherent time-scale and structural properties of the system. In this chapter, we focus on the frequency control problem as an example of power system controls. In fact, the automatic frequency control in the North American electric power grid was the first instance of a successful implementation of a large-scale network-based control scheme. The frequency control includes two subproblems. First, we need to determine optimal values of generations that minimize the total generation costs while meeting the load demands. This problem is denoted the economic dispatch problem and is discussed in Section 8.2. It can be shown that differences between the total active power that is generated and the total active power that is consumed lead to frequency drift. Because the load fluctuations themselves are random, it is not possible to exactly match the total generation with the power consumption at all times. Therefore, the system frequency will tend to drift on its own. In North American grids, there exists a central closed-loop controller that samples system-wide power flows and
frequency to maintain the system frequency within tight tolerances while also maintaining economically dispatched generations. A brief introduction to this control, called automatic generation control or load frequency control, is presented in Section 8.3. In addition to the topics in Sections 8.2 and 8.3, the voltage control problem is another important and complex topic. It is mostly handled by distributed automatic local controls in the North American grid. However, some European countries, such as France, do have automatic coordinated voltage control schemes for the large network. The operation of the power system also has to meet regulations on security and reliability. Roughly speaking, the system is required to continue normal operation even with the loss of any one component. These studies are grouped under the framework of power system security, which is a broad topic in itself.
8.2 Generation Dispatch
A power system must generate sufficient power at all times to meet the load demand from the grid. The amount of load connected to the system varies significantly based on seasonal and time-of-day considerations. The cost of producing power at different generators also varies from plant to plant, depending on the efficiency of the plant design and fuel costs at the generator location. Therefore, it is not economical to divide the required generation capacity arbitrarily among the available generators. The problem of determining how the
total load requirement is to be divided among the generators in service is denoted the generation dispatch problem. The problem is to optimize the total generation costs in producing the required amount of active power. We discuss this problem first in Section 8.2.1 under a classical formulation. Moreover, there are also active power losses involved in transmitting real power from generators to loads. Some generators, such as those near coal mines with low fuel costs, may incur large transmission losses in transferring the generated power to load centers. An optimization formulation that includes some simple consideration of transmission losses together with generation costs is presented in Section 8.2.2. Section 8.2.3 discusses a general framework for posing detailed optimization problems in the form of the optimal power flow formulation.
8.2.1 Classical Lossless Generation Dispatch
The cost C of generating power P at a thermal plant can be roughly stated by the nonlinear function:

C(P) = α + βP + γP².

Let us assume that there are N thermal generators in the system that share the total load demand of P_D. The economic dispatch problem tries to minimize the total generation costs for generating power at the N generators while meeting the load demand P_D. In the classical lossless formulation, we also assume that there are no transmission losses involved, which simplifies the optimization considerably. For the lossless case, the power conservation equation, or the power balance equation, is simply stated as:

Σ_{i=1}^{N} P_i = P_D,

where P_i denotes the power generated at plant i. Then, the economic dispatch reduces to the constrained optimization problem for minimizing the total generation cost C_T:

Min_{P_i} C_T = Σ_{i=1}^{N} C_i(P_i),

subject to

Σ_{i=1}^{N} P_i = P_D.

The problem can be easily solved using the Lagrangian formulation by defining the Lagrangian L(P_1, P_2, ..., P_N, λ) as:

L(P_1, P_2, ..., P_N, λ) = Σ_{i=1}^{N} C_i(P_i) + λ (P_D − Σ_{i=1}^{N} P_i).

By setting the partial derivatives of the Lagrangian L with respect to P_i and λ equal to zero, the optimal solution can be described by the conditions:

IC_i = dC_i/dP_i = λ  for i = 1, 2, ..., N,   (8.1)

Σ_{i=1}^{N} P_i = P_D.   (8.2)

In other words, the generation dispatch becomes optimal when the incremental generation costs IC_i become equal at all the generators. Since the generation cost C_i is a quadratic function of P_i, the incremental cost IC_i is a linear function of P_i. Therefore, the optimality problem reduces to solving the N + 1 linear equations 8.1 and 8.2 stated above in terms of the N + 1 unknown variables P_1 through P_N and λ. The unique solution is easily solved as:

λ = (P_D + Σ_{i=1}^{N} β_i/(2γ_i)) / (Σ_{i=1}^{N} 1/(2γ_i))  and  P_i = (λ − β_i)/(2γ_i).   (8.3)

Thus far, we have considered no limits on the generation capacity at individual plants. In reality, there are lower and upper limits, P_{min,i} and P_{max,i}, on the generation output P_i. These limits can be easily incorporated into the optimization problem above by appending the inequality constraints:

P_{min,i} ≤ P_i ≤ P_{max,i},   (8.4)

to the conditions of equations 8.1 and 8.2. Since these equations are linear, the optimal solution can be computed by simply freezing P_i at either P_{min,i} or P_{max,i} whenever the limit is reached while finding the optimal solution. Details can be found in any standard textbook on power system analysis.

The discussion in this section has thus far been focused on thermal generation plants. The operation costs of a hydroelectric plant are fundamentally different because there are no fuel costs involved. Here, the concern is to maintain an adequate level of water storage while maintaining the required water flow through the generator turbines. Clearly, the stored water capacity depends on the water in-flow and out-flow rates, and these issues are typically studied over longer time horizons as compared to the dispatch of thermal power plants. The problem of coordinating the generation outputs of hydro generators and thermal plants for meeting load demands while minimizing generation costs is called hydrothermal coordination. Again, the formulation and solution details can be found in standard power system textbooks.
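The closed-form solution of equation 8.3 takes only a few lines to implement. The sketch below uses made-up cost coefficients for three units and ignores the limits of equation 8.4:

```python
def lossless_dispatch(beta, gamma, P_D):
    """Equal incremental cost dispatch (equations 8.1-8.3), no limits.
    Cost curves are C_i(P) = alpha_i + beta_i P + gamma_i P^2, so the
    incremental cost is IC_i = beta_i + 2 gamma_i P_i."""
    lam = (P_D + sum(b / (2.0 * g) for b, g in zip(beta, gamma))) \
          / sum(1.0 / (2.0 * g) for g in gamma)
    P = [(lam - b) / (2.0 * g) for b, g in zip(beta, gamma)]
    return lam, P

# Illustrative, made-up data for three units sharing P_D = 500 MW.
beta = [8.0, 7.5, 9.0]         # $/MWh
gamma = [0.004, 0.006, 0.008]  # $/(MW^2 h)
lam, P = lossless_dispatch(beta, gamma, 500.0)

# All incremental costs come out equal to lambda, and generation
# exactly balances the load.
ic = [b + 2.0 * g * p for b, g, p in zip(beta, gamma, P)]
```

Adding the limits of equation 8.4 would amount to clamping each P_i at its bound and re-dispatching the remaining units.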
8.2.2 Lossy Economic Dispatch
In the real system, there will always be transmission losses associated with sending power from the generation facilities to the load busses. In the previous section, these losses were ignored, which resulted in simple solutions for the generation dispatch problem. In this section, we modify the power
balance equation 8.2 to include the power losses P_L. For a given set of generations P_i, the line losses have to be computed from the line currents, which in turn requires solving the corresponding power flow equations. Such a problem of optimizing the total generation costs while solving nonlinear power flow equations is called the optimal power flow formulation, and it will be discussed in the next section. In this section, we approximate the total line losses P_L as a direct function of the generations P_i using the equation:

P_L = Σ_{i=1}^{N} Σ_{j=1}^{N} B_ij P_i P_j + Σ_{i=1}^{N} B_i0 P_i + B_00.   (8.5)

Here, the term B_00 is constant, while the coefficients B_ij and B_i0 summarize the quadratic and linear dependence of the line losses P_L on the generations. Equation 8.5 is called the B matrix loss formula, and the B coefficients in equation 8.5 are called the loss factors. There exists a rich history in the literature on the computation of loss factors for a given economic dispatch problem. In this section, let us address the economic dispatch problem when the losses are represented by equation 8.5. Then, the optimization changes to:

Min_{P_i} C_T = Σ_{i=1}^{N} C_i(P_i),   (8.6)

subject to:

Σ_{i=1}^{N} P_i = P_D + P_L(P_1, P_2, ..., P_N).

We redefine the Lagrangian L(P_1, P_2, ..., P_N, λ) as:

L(P_1, P_2, ..., P_N, λ) = Σ_{i=1}^{N} C_i(P_i) + λ (P_D + P_L(P_1, P_2, ..., P_N) − Σ_{i=1}^{N} P_i).   (8.7)

The optimal solution can be determined by solving these two equations:

IC_i = dC_i/dP_i = λ (1 − ∂P_L/∂P_i)  for i = 1, 2, ..., N.   (8.8)

Σ_{i=1}^{N} P_i = P_D + P_L(P_1, P_2, ..., P_N).   (8.9)

Whereas equations 8.1 and 8.2 for the lossless case were linear in the variables P_i and λ, equations 8.8 and 8.9 for the lossy case are quadratic. The solution from the lossless formulation provides an excellent initial condition, and the lossy economic dispatch solution can be computed by iterative solution strategies such as the Newton-Raphson algorithm discussed in the power system analysis chapter. Equation 8.8 provides the relationship between the power generations P_i and the Lagrangian multiplier λ; that is, we can restate equation 8.8 as λ = IC_i / (1 − ∂P_L/∂P_i). The term 1/(1 − ∂P_L/∂P_i) is the penalty factor that multiplies the incremental cost IC_i to account for the contribution of the generation P_i to the line losses P_L. Moreover, once the multiplier λ is specified, the power generations P_i can be uniquely determined from equation 8.8, since this equation is linear in P_i. Therefore, it follows that the optimal solution of equations 8.8 and 8.9 essentially reduces to the problem of finding the Lagrangian multiplier λ. There exist excellent iterative techniques in the literature for finding the optimal solution by iterating on λ.
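One possible λ-iteration can be sketched as a simple bisection on λ with an inner fixed-point solve of equation 8.8. The bracket, the two-unit data, and the iteration counts below are assumptions chosen for illustration, not one of the industry algorithms referenced in the text:

```python
def losses(P, B, B0, B00):
    """Transmission losses from the B matrix loss formula (equation 8.5)."""
    n = len(P)
    quad = sum(B[i][j] * P[i] * P[j] for i in range(n) for j in range(n))
    return quad + sum(B0[i] * P[i] for i in range(n)) + B00

def lossy_dispatch(beta, gamma, P_D, B, B0, B00, iters=100):
    """Bisection on the multiplier lambda (equations 8.8 and 8.9): for each
    trial lambda, solve the coordination equations for the P_i by a short
    fixed-point pass, then tighten the bracket until generation balances
    the load plus losses.  The bracket is an assumption that suits the
    illustrative data below, not a general-purpose choice."""
    n = len(beta)
    P = [P_D / n] * n
    lo, hi = 0.0, 2.0 * max(beta)
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        # Inner pass: P_i = (lam (1 - dPL/dPi) - beta_i) / (2 gamma_i),
        # with dPL/dPi evaluated from the B matrix at the current dispatch.
        for _ in range(30):
            for i in range(n):
                dPL = 2.0 * sum(B[i][j] * P[j] for j in range(n)) + B0[i]
                P[i] = (lam * (1.0 - dPL) - beta[i]) / (2.0 * gamma[i])
        mismatch = sum(P) - P_D - losses(P, B, B0, B00)
        if mismatch > 0.0:
            hi = lam   # too much generation: lambda is too high
        else:
            lo = lam
    return lam, P

# Illustrative, made-up two-unit data; powers in MW.
beta = [8.0, 7.5]
gamma = [0.004, 0.006]
B = [[1e-4, 2e-5], [2e-5, 1.5e-4]]
B0 = [0.0, 0.0]
B00 = 0.0
lam, P = lossy_dispatch(beta, gamma, 300.0, B, B0, B00)
P_L = losses(P, B, B0, B00)
```

At the solution, total generation equals load plus losses, and λ is slightly above its lossless value because of the penalty factors.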
8.2.3 Optimal Power Flow Formulation In previous subsection, we simplified the network power balance equations into a single power conservation equation by either ignoring the losses (equation 8.2) or by approximating the line losses (equation 8.9). However, in both earlier approaches, the network nature of the power transmission was completely ignored. In this section, we treat the power transmission in earnest by stating the power balance equations in full, which summarizes the transfer of power from the generators to the loads through the transmission network. The optimization itself has the same objective of minimizing the total cost of generation. Let us assume that the system has N generators like before, and the total number of busses including generator and load busses is M. To simplify the notation, bus number 1 is assumed to be the slack bus. Busses 2 to N are the PV busses, and busses numbered N þ 1 through M are PQ busses. The slack bus generation PGI becomes a dependent variable, while the remaining generations PG2 through PGN are the control variables for the minimization. Real and reactive loads are denoted by PDi and QDi, respectively. Suppose the ith or jth entry Y ij of the Y bus matrix has the magnitude Yij and the phase gij . The basic optimal power flow problem can then be stated as: Min CT ¼ PG i
(8:9)
i¼1
Whereas equations 8.1 and 8.2 for the lossless case were linear in the variables Pi and l, the equations 8.8 and 8.9 for the lossy
N X
Ci (PGi ),
(8:10)
i¼1
which is subject to the real power flow equations:

\[
P_i = P_{Gi} - P_{Di} = \sum_j Y_{ij} V_i V_j \cos(\delta_i - \delta_j - \gamma_{ij}), \tag{8.11}
\]

for i = 2, \ldots, M, and the reactive power flow equations:

\[
Q_i = Q_{Gi} - Q_{Di} = \sum_j Y_{ij} V_i V_j \sin(\delta_i - \delta_j - \gamma_{ij}), \tag{8.12}
\]
for i = N + 1, \ldots, M. The power flow variables, namely the bus voltages V_{N+1}, \ldots, V_M and phase angles \delta_2, \ldots, \delta_M, become the dependent variables in the optimization procedure. In general, there may exist inequality constraints on the control variables as well as the state variables in the optimal power flow formulation. The optimal power flow problem can be formally stated as:

\[
\min_u f(x, u) \tag{8.13}
\]

subject to

\[
g(x, u) = 0 \tag{8.14}
\]

and

\[
h_i(x, u) \le 0 \quad \text{for } i = 1, 2, \ldots, k. \tag{8.15}
\]
Here, u denotes the optimization control variables, and x denotes the network state variables that are dependent on u through equation 8.14. The function f(x, u) is the objective function to be minimized. Equation 8.14 denotes the equality constraints, and the number n of equations in 8.14 matches the dimension of x. Equation 8.15 represents k different inequality constraints. A solution to the general constrained optimization problem in equations 8.13 to 8.15 can be found by using the celebrated Kuhn-Tucker theorem. We first define a generalized Lagrangian L(x, u, \lambda, \mu) for the problem as:

\[
L(x, u, \lambda, \mu) = f(x, u) + \lambda^T g(x, u) + \mu^T h(x, u), \tag{8.16}
\]

where \lambda \in R^n and \mu \in R^k. The Kuhn-Tucker theorem provides the conditions that must be satisfied at the optimal solution:

\[
\frac{\partial f}{\partial x} + \lambda^T \frac{\partial g}{\partial x} + \mu^T \frac{\partial h}{\partial x} = 0. \tag{8.17}
\]

\[
\frac{\partial f}{\partial u} + \lambda^T \frac{\partial g}{\partial u} + \mu^T \frac{\partial h}{\partial u} = 0. \tag{8.18}
\]

\[
g(x, u) = 0. \tag{8.19}
\]

\[
h_i(x, u)\,\mu_i = 0, \text{ with either } \mu_i = 0 \text{ and } h_i(x, u) < 0, \text{ or } \mu_i > 0 \text{ and } h_i(x, u) = 0. \tag{8.20}
\]
Therefore, the optimal power flow problem becomes the solution of equations 8.17 through 8.20. In large-scale power system formulations, it is very difficult to find an exact solution to the Kuhn-Tucker conditions. It is quite often sufficient to find a suboptimal solution using heuristic optimization algorithms. In recent history, excellent progress has been made in the development of such heuristic algorithms.
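Returning to the lossy economic dispatch of equations 8.8 and 8.9 (no inequality constraints active), the iteration on λ is easy to sketch. The following is a minimal illustration, not a production algorithm: it assumes quadratic costs C_i(P_i) = a_i + b_i P_i + c_i P_i², a purely diagonal B matrix (P_L = Σᵢ B_ii P_i²), and invented coefficients; it solves equation 8.8 for each P_i given λ and bisects on λ until the power balance of equation 8.9 is met.

```python
# Lambda-iteration sketch for lossy economic dispatch (equations 8.8-8.9).
# Costs: C_i(P) = a_i + b_i P + c_i P^2, so IC_i = b_i + 2 c_i P_i.
# Losses: P_L = sum_i B_ii P_i^2, so dP_L/dP_i = 2 B_ii P_i.
# All coefficients below are invented for the example.

b = [8.0, 9.0]       # linear cost coefficients, $/MWh
c = [0.004, 0.006]   # quadratic cost coefficients, $/MW^2h
Bd = [1e-4, 2e-4]    # diagonal loss factors, 1/MW
PD = 500.0           # total demand, MW

def dispatch(lam):
    # Equation 8.8 with the models above:
    # b_i + 2 c_i P_i = lam (1 - 2 B_ii P_i)  =>  solve for P_i.
    return [(lam - bi) / (2.0 * (ci + lam * Bi))
            for bi, ci, Bi in zip(b, c, Bd)]

def mismatch(lam):
    P = dispatch(lam)
    PL = sum(Bi * Pi ** 2 for Bi, Pi in zip(Bd, P))
    return sum(P) - PL - PD          # residual of equation 8.9

lo, hi = 9.5, 20.0                   # bracket for lambda, $/MWh
for _ in range(60):                  # bisection on the multiplier
    lam = 0.5 * (lo + hi)
    if mismatch(lam) > 0.0:
        hi = lam
    else:
        lo = lam

P = dispatch(lam)
print(round(lam, 2), [round(p, 1) for p in P])
```

The loop converges because the total generation, net of losses, grows monotonically with λ over the bracket; in practice, Newton-type updates on λ converge in far fewer iterations.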
8.3 Frequency Control

The power system load is continually undergoing changes as individual loads fluctuate while others are energized or de-energized. Generation must precisely match these changes to maintain system frequency, a function called load frequency control (LFC), and at the same time must follow an appropriate economic dispatch of the units, as discussed in the previous section. Together, these functions are referred to as automatic generation control (AGC). Accordingly, all modern power systems have centralized control centers that run software called energy management systems (EMS) that, in addition to other functions, monitor frequency and generator outputs. Units that are on AGC will receive raise and lower signals to adjust their set points.
8.3.1 AGC

Ensuring the power balance is commonly referred to as regulation and can be sensed by changes in the system frequency. If load (including losses) exceeds the generation input, then energy must be leaving the system over time. This energy will be drawn from the kinetic energy stored in the rotating masses of the generators. Hence, the generators will begin to rotate more slowly, and the system frequency will decrease. Conversely, if generation input exceeds the load, then frequency will increase. It is the responsibility of the governor on a generator to sense these speed deviations and adjust the power input (say, through the opening or closing of valves on a steam unit) as appropriate. Specifically, the governor will change the power input in proportion to the speed deviation. This is referred to as the droop, or speed regulation, R, and can be expressed as:

\[
\Delta P = -\frac{1}{R} \Delta f, \tag{8.21}
\]
where \Delta f is the change in frequency and \Delta P is the resulting change in power input. R is measured in hertz per megawatt or, alternatively, as a percentage of the rated capacity of a unit. Typical droops in practice are on the order of 5 to 10%. If no further action is taken, the system will then operate at this new frequency. Figure 8.1(A) illustrates a standard scenario. Assume a simple system with a nominal frequency of 60 Hz in which a 100-MW unit is operating with a 5% droop. The regulation constant is calculated to be R = 0.05(60/100) = 0.03 Hz/MW. If the generator unit senses a drop in frequency of 0.6 Hz, then this corresponds to a 20-MW increase in power output.

In addition to governor actions, many loads are frequency sensitive (e.g., motors). For these loads, the load will decrease as the frequency drops, so the needed increase from the generators is less. This can be expressed as:

\[
\Delta P = -\left(\frac{1}{R} + D\right) \Delta f, \tag{8.22}
\]

where D is the damping. Thus, the effective droop is slightly less (i.e., the frequency drop will be less) when load damping is considered. This load damping is often approximated as a 1% decrease in load for every 1% decrease in frequency.

Now, as would normally be true, if there are several generators interconnected and on regulation, then each will see the same frequency change (after any initial transients die out) and respond. Assuming the same percentage droop on each unit, the load will be picked up according to the relative capacities. This is depicted in Figure 8.1(B) with units of 100 MW, 150 MW, and 200 MW, respectively, all operating with a 5% droop. Analytically, with R on a per-unit basis, we can express this for n units as:

\[
\Delta P = -\left(\frac{1}{R_1} + \frac{1}{R_2} + \cdots + \frac{1}{R_n} + D\right) \Delta f. \tag{8.23}
\]

FIGURE 8.1 Speed Droop Characteristics: (A) Illustration of 5% Speed Droop on a 100-MW Unit; (B) Illustration of Three Units with 5% Droop
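The proportional sharing implied by equation 8.23 can be checked numerically. A minimal sketch (load damping D neglected, nominal frequency 60 Hz, and an arbitrarily chosen 45-MW load step) for the three units of Figure 8.1(B):

```python
# Load pickup among units on droop control (equation 8.23 with D = 0).
# Each unit has a 5% droop on its own MW base, as in Figure 8.1(B).

f0 = 60.0                                     # nominal frequency, Hz
capacities = [100.0, 150.0, 200.0]            # unit ratings, MW
R = [0.05 * f0 / cap for cap in capacities]   # droop of each unit, Hz/MW

dP_load = 45.0                                # assumed load increase, MW

# Steady-state frequency deviation: dP_load = -(sum of 1/R_i) * df
df = -dP_load / sum(1.0 / r for r in R)
shares = [-df / r for r in R]                 # pickup of each unit, MW

print(round(df, 4))                   # -> -0.3 (a 0.3-Hz drop)
print([round(s, 1) for s in shares])  # -> [10.0, 15.0, 20.0]
```

As expected, the pickup is proportional to the unit ratings, with the 200-MW unit carrying the largest share.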
FIGURE 8.2 Multiple Control Areas Interconnected by Tie Lines (areas A and B joined by a tie line whose flow is monitored)
The control system as described in the previous paragraphs suffers from a serious drawback: frequency will never return to the nominal (i.e., desired) value. Supplemental control is needed to make the appropriate adjustments. One may think of this as adjusting the generator set points to compensate for the kinetic energy lost from the rotating masses when the load change first occurred. This action must be coordinated among the units. An additional issue arises in the coordination of this supplemental control. With relatively few exceptions (e.g., some islands), utilities are interconnected with their neighboring systems. Not only does this provide additional security and reliability in the case of outages, but it allows for more economic operation through judicious energy trades. Still, the utilities
must coordinate their operation to maintain energy schedules and meet demand. If a load change occurs in a neighboring system, both systems will see the frequency change and respond. From the viewpoint of the generator, there is no difference between the loads in the different areas. Clearly, each utility wishes only to supply loads for which it is responsible and will receive compensation. Systems are generally broken into separate control areas reflecting the different responsibilities of the utilities, as shown in Figure 8.2. The flow on tie lines between these control areas is monitored. The generator set point adjustments are made to maintain these scheduled tie flows. Note that while the first function of AGC is load following, these adjustments to generator set points may over time lead the units away from the most economic dispatch. Thus, the supplemental control may include additional adjustments for economic operation of the units. As summarized in the block diagram of Figure 8.3, the supplemental control serves several functions, including the following:

- Restoration of the nominal frequency
- Maintenance of the scheduled interchanges
- Provision for the economic dispatch of units
The coordination among areas is achieved by defining the so-called area control error (ACE) as a measure of the needed adjustments to the generator set points:

\[
ACE = \Delta P_{Tie} - 10\, b_f\, \Delta f, \tag{8.24}
\]

where \Delta P_{Tie} is the deviation of the tie line flow from the scheduled exchange and b_f \Delta f is called the frequency bias, which is by tradition negative and measured in MW/0.1 Hz (thus, the multiplier 10 in equation 8.24). If the ACE for an area is negative, then the area generation is too small and the unit set points should be increased.

Considering the system of Figure 8.2, notice the effect of a sudden load increase in area A on the ACE in both areas A and B. First, the frequency in both areas will decrease and, accordingly, the power output of all regulated units will increase according to their respective droop settings. Since the load change occurs in area A, there will be an additional unscheduled in-flow from B. The ACE in area A will have two negative terms: the schedule error \Delta P_{Tie} and the frequency bias, or control, error. Conversely, the ACE in area B will have a positive schedule error but a negative frequency bias. It may seem that ideally the ACE in B would be zero and that in A it would precisely match the load change; however, this is not practical or particularly necessary. Instead, the ACE is integrated over time, and this signal is used to determine the generator set points. Integrating the error ensures that the actual energy exchange between areas is precisely maintained over time.

Since the control center must calculate the ACE after gathering data from the system, the set point controls are discrete. In North America, the AGC signals are fed to the units typically about once every 2 to 4 seconds. In many parts of the world, such frequent adjustment is considered excessive, and supplementary control signals are sent over much less frequent time intervals. In each country, regulating agencies determine the required performance of the AGC so that utilities do not "lean" too hard on their neighbors. For example, the traditional performance criterion in North America is that the ACE has to return to zero or its predisturbance level within 15 minutes following the start of the disturbance.

FIGURE 8.3 Block Diagram of AGC System (droop and frequency bias feed the ACE, which is integrated and combined with the economic dispatch and tie line error to set the desired power output of the machine model)
8.4 Conclusion: Contemporary Issues

The supplementary control system as described above reflects AGC operation more or less as it has existed since the advent of computer controls in the late 1960s. Recent moves to deregulate the power system and open the system to competition greatly complicate the traditional control philosophy. In an open market, the load responsibilities are not so clearly defined or geographically restricted. For example, an independent generator may have a contract to serve a load in a neighboring control area, and this impacts the scheduled flows on the intertie. A number of methods have been proposed to facilitate this control, including the establishment of a separate market for a "load-following" service. Consumers would pay for this service along with their energy fee. No regions in the world have fully opened AGC control to market competition, but many incremental steps have been taken. So while the supply of energy has proven to be amenable to economic competition, the vagaries of control are more difficult to formulate as an open, transparent market.
9 Fundamentals of Power System Protection

Mladen Kezunovic
Department of Electrical Engineering, Texas A & M University, College Station, Texas, USA

9.1 Fundamentals of Power System Protection .................. 787
    9.1.1 Power System Faults . 9.1.2 Power System Components . 9.1.3 Relay Connections and Zones of Protection
9.2 Relaying Systems, Principles, and Criteria of Operation .................. 791
    9.2.1 Components of a Relaying System . 9.2.2 Basic Relaying Principles . 9.2.3 Criteria for Operation
9.3 Protection of Transmission Lines .................. 796
    9.3.1 Overcurrent Protection of Distribution Feeders . 9.3.2 Distance Protection of Transmission Lines . 9.3.3 Directional Relaying Schemes for High-Voltage Transmission Lines . 9.3.4 Differential Relaying of High-Voltage Transmission Lines
9.4 Protection of Power Transformers .................. 799
    9.4.1 Operating Conditions: Misleading Behavior of Differential Current . 9.4.2 Implementation Impacts Causing Misleading Behavior of Differential Currents . 9.4.3 Current Differential Relaying Solutions
9.5 Protection of Synchronous Generators .................. 801
    9.5.1 Requirements for Synchronous Generator Protection . 9.5.2 Protection Principles Used for Synchronous Generators
9.6 Bus Protection .................. 802
    9.6.1 Requirements for Bus Protection . 9.6.2 Protection Principles Used for Bus Protection
9.7 Protection of Induction Motors .................. 803
    9.7.1 Requirements for Induction Motor Protection . 9.7.2 Protection Principles Used for Induction Motor Protection
References .................. 803
9.1 Fundamentals of Power System Protection

This chapter defines power system faults, the role of protective relaying, and the basic concepts of relaying. The discussion is a rather general overview. More specific issues are discussed in several excellent textbooks (Horowitz and Phadke, 1992; Blackburn, 1998; Ungrad et al., 1995).
9.1.1 Power System Faults

Power systems are built to allow continuous generation, transmission, and consumption of energy. Most power system operation is based on a three-phase system that operates in a balanced mode, often described by a set of symmetrical phasors of currents and voltages equal in magnitude and having phase shifts between phases equal to 120° (Blackburn, 1993). The voltages and currents behave according to Kirchhoff's laws of electrical circuits, which state that the sum of all currents entering and leaving a network node is equal to zero and that the sum of all voltage drops and gains in a given loop is also equal to zero. In addition, the voltages and currents generate electric power, which, integrated over a period of time, produces energy. For three-phase systems, a variety of definitions of the delivered, consumed, or transmitted power may be established: instantaneous, average, active, reactive, and complex. The power system consists of components that are put together with the goal of matching each other regarding power ratings, dielectric insulation levels, galvanic isolation, and a number of other design goals. In normal operating conditions, currents, voltages, power, and energy are matched to meet the design constraints.
As such, the system is capable of sustaining a variety of environmental and operating impacts that resemble normal operating conditions. The abnormal operating conditions that the system may experience are rare but do happen. They include lightning striking the transmission lines during severe weather storms, excessive loading and environmental conditions, deterioration or breakdown of the equipment insulation, and intrusions by humans and/or animals. As a result, power systems may experience occasional faults. The faults may be defined as events that have contributed to a violation of the design limits for the power system components regarding insulation, galvanic isolation, voltage and current level, power rating, and other such requirements. The faults occur randomly and may be associated with any component of the power system. As a result, the power component experiences an exceptional stress, and unless disconnected or de-energized, the component may be damaged beyond repair. In general, the longer the duration of a fault, the larger is the damage. The fault conditions may affect the overall power system operation since the faulted component needs to be removed, which in turn may contribute to violation of the stability and/or loading limits. Last, but not least, the faults may present a life threat to humans and animals, since the damage caused by the faults may reduce safety limits otherwise satisfied for normal operating conditions.

Protective relaying was introduced in practice as early as the first power systems were invented to make sure that faults are detected and damaged components are taken out of service quickly. To facilitate graphic representation of different types of faults, circuit diagrams shown in Figure 9.1 are used. The example is related to faults on transmission lines and covers the eleven most common types of transmission line ground and/or phase faults. To facilitate the presentation, multiple fault types are shown on the same diagram.

FIGURE 9.1 Eleven Types of Most Common Transmission Line Faults (three-phase, three-phase-to-ground, phase-to-phase, phase-to-phase-to-ground, and phase-to-ground faults involving phases a, b, and c)

The rest of the discussion in this section describes the basic power system components and how different power system components are used to facilitate implementation of the protection concept. The basic requirements for the protection system solution are outlined, pointing out the most critical implementation criteria.

9.1.2 Power System Components

The most basic power system components are generators, transformers, transmission lines, busses, and loads. They allow for power to be generated (generators), transformed from one voltage level to another (transformers), transmitted from one location to another (transmission lines), distributed among a number of transmission lines and power transformers (busses), and used by consumers (loads). In the course of doing this, the power system components are being switched or connected in a variety of different configurations using circuit breakers and associated switches (Horowitz and Phadke, 1992; Blackburn, 1998; Ungrad et al., 1995). The circuit breakers are capable of interrupting the flow of power at a high energy level and, hence, may also be used to disconnect the system components on an emergency basis, such as in the case when the component experiences a fault (Flurscheim, 1985). Because the power systems are built to cover a large geographical area, the power system components are scattered across the area and interconnected with transmission lines. The groupings of the components associated with generation, switching, transformation, or consumption are called power plants (generation and transformation), substations (transformation and switching), and load centers (switching, transformation, and consumption). In turn, the related monitoring, control, protection, and communication gear is also located at the mentioned facilities. To facilitate the description of power systems, a graphical representation of the power system components as shown in Figure 9.2 is used. Such a representation is called a one-line diagram. It reduces the presentation complexity of the three-phase connections to a single-line connection. This is sufficiently detailed when the normal system operation is considered, since the solutions of voltages and currents are symmetrical and one-line representations resemble very closely the single-phase system representation used to obtain the solution. The solution for the faulted systems requires a more detailed three-phase representation, but the one-line diagram is still sufficient to discuss the basic relaying concepts. In that case, a detailed representation of the faults shown in Figure 9.1 is not used, but a single symbol representing all fault types is used instead.
FIGURE 9.2 One-Line Representation of the Power System Components and Connection (generator, circuit breakers, busses, power transformer, transmission line, and load)
The next level of detail is the representation of the points where the components merge, shown in Figure 9.2 as busses. A good example of such a point is a substation, where a number of lines may come together and a transformation of the voltage level may also take place. Figure 9.3 shows a one-line representation of a substation. Substations come in a variety of configurations, and the one selected in Figure 9.3 is called a breaker-and-a-half. This configuration is used in high-voltage substations containing a number of transmission lines and transformers as well as different voltage levels and associated busses. This representation also includes circuit breakers and busses as the principal means of switching and/or connecting the power system components in a substation. The protective relaying role is to disconnect the components located or terminated in the substation when a fault occurs. In the case shown in Figure 9.3, the transmission line is connected to the rest of the system through two breakers marked "L," the bus is surrounded with several breakers connected to the bus and marked "B," and the power transformer is connected between the two voltage-level busses with four breakers marked "T." In the common relaying terminology, all the breakers associated with a given relaying function are referred to as a bay; hence the terminology of "protection bays" for a transmission line, a bus, and a transformer. It may be observed in the high-voltage substation example given in Figure 9.3 that each breaker serves at least two protection bays. In Figure 9.3, each breaker box designated as "L" or "T" also acts as a breaker designated with "B." This property will be used later when introducing the concept of overlapping protection zones.

FIGURE 9.3 Breaker-and-a-Half Substation Connection (lines 1 and 2 terminating on bus 1 at voltage X and bus 2 at voltage Y, with line breakers L, bus breakers B, and transformer breakers T)

9.1.3 Relay Connections and Zones of Protection

Protective relays are devices that are connected to instrument transformers to receive input signals and to circuit breakers to issue control commands for opening or closing. In some instances, the relays are also connected to communication channels to exchange information with other relays. Electronic relays always require a power supply, which is commonly provided through a connection to the station dc battery. Often, relays are connected to some auxiliary monitoring and control equipment to allow for coordination with other similar equipment and supervision by the operators. In high-voltage power systems, relays are located in substations and, most frequently, in a control house. The connections to the instrument transformers and circuit breakers located in the substation switchyard are done through standard wiring originating from the substation switchyard and terminating in the control house. To achieve effective protection solutions, the entire relaying problem is built around the concept of relaying zones. The zone is defined to include the power system component that
has to be protected and to include the circuit breakers needed to disconnect the component from the rest of the system. A typical allocation of protective relaying zones for the power system shown in Figure 9.2 is outlined in Figure 9.4.

FIGURE 9.4 Allocation of Zones of Protection for Different Power System Components (overlapping zones covering the generator, power transformer, busses, transmission line, and load)

Several points regarding the zone selection and allocation are important. First, the zones are selected to ensure that a multiple usage of the breakers associated with each power system component is achieved. Each circuit breaker may be serving at least two protection functions. This enables separation of the neighboring components in case either one is faulted. At the same time, the number of breakers used to connect the components is minimized. When overlapping at least two zones of protection around each circuit breaker, it is important to make sure that there is no part of the power system left unprotected. This includes the short connection between circuit breakers or between circuit breakers and busses. Such an overlap is only possible if instrument transformers exist on both sides of the breaker, which is the case with so-called dead-tank breakers that have current transformers located in the breaker bushings.

Another important notion of the zones is to define a backup coverage. This is typical for transmission line protection, where multiple zones of protection are used for different sections of the transmission line. Figure 9.5 shows how the zones may be selected in case three zones of protection are used by each relay. The zones of protection are selected by determining the settings of the relay reach and the time associated with relay operation. Each zone of protection is set to cover a specific length of the transmission line, which is termed the relay reach. Typical selection of the zones in transmission line protection is to cover 80 to 90% of the line in zone 1, 120 to 130% in zone 2,
and 240 to 250% in zone 3. This protection is selected by locating a relay at a given line terminal and determining the length corresponding to the relay coverage as a percentage of the line length between the relay terminal and the adjacent relay terminals. When doing this, the selected direction is down the transmission line, starting from the terminal where the relay is located. The length of the transmission line originating from the location of the relay and ending at the next terminal is taken as 100%. A reach of 120% means that the entire transmission line is covered as well as an additional 20% of the line originating from the adjacent terminal. The times of operation associated with the zones are different: zone 1 operation is instantaneous, zone 2 is delayed to allow zone 1 relays to operate first, and zone 3 times allow the corresponding relays closer to the fault to operate first in either zone 1 or zone 2. With this time-step approach selected for the different zones of protection, the relays closest to the fault are allowed to operate first. If they fail to operate, the relays located at the remote terminals, which "see" the same fault in zone 2, will still disconnect the failed component. If zone 2 relay operation fails, relays located further away from the faulted line will operate next with the zone 3 settings. The advantage of this approach is redundant coverage of each line section: each section is also covered by multiple relay zones of the relays located on the adjacent lines, ensuring that the faulted component will eventually be removed even if the relay closest to the fault fails. The disadvantage is that each time a backup relay operates, a larger section of the system is removed from service, because the relays operating in zone 2 (sometimes) or zone 3 (always) are connected to circuit breakers that are remote from the ends of the transmission line experiencing the fault.
In addition, the time to remove faulted sections from service increases as the zone coverage responsible for the relay action increases, due to the time delays associated with the zone 2 and zone 3 settings.

FIGURE 9.5 Selection of the Overlapping Zones for Transmission Line Protection (zones 1, 2, and 3 reaching progressively farther from relay R)
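The reach-and-delay coordination described above can be sketched as a small lookup for a single relay. The reach values follow the typical percentages quoted in the text; the time delays are assumed round numbers, not settings from any particular relay:

```python
# Stepped-distance zone selection for one relay. Fault distance is given
# as a fraction of this relay's own line length (1.0 = remote terminal).
# Reaches follow the typical 80-90% / 120-130% / 240-250% ranges quoted
# in the text; the delays (in seconds) are assumed for illustration.

ZONES = [
    (1, 0.85, 0.0),   # zone 1: instantaneous
    (2, 1.25, 0.3),   # zone 2: delayed so zone 1 relays act first
    (3, 2.45, 1.0),   # zone 3: remote backup, longest delay
]

def responding_zone(fault_distance):
    """Return (zone, delay_s) for the closest zone covering the fault,
    or None if the fault is beyond the zone 3 reach."""
    for zone, reach, delay in ZONES:
        if fault_distance <= reach:
            return (zone, delay)
    return None

print(responding_zone(0.5))   # mid-line fault -> (1, 0.0)
print(responding_zone(1.10))  # just past the remote terminal -> (2, 0.3)
print(responding_zone(2.0))   # fault on an adjacent line -> (3, 1.0)
```

The increasing delays implement the time-step backup described in the text: a fault on an adjacent line is cleared locally first, and this relay trips only if those closer relays fail.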
9.2 Relaying Systems, Principles, and Criteria of Operation

This section describes the elements of a relaying system and defines the basic concept of a relaying scheme. In addition, the basic principles of protective relaying operation are discussed. More detailed discussions of the relaying solutions using these principles, aimed at protecting different power system components, are given in subsequent sections.
9.2.1 Components of a Relaying System

Each relaying system consists, as a minimum, of an instrument transformer, a relay, and a circuit breaker. A typical connection for protection of high-voltage transmission lines using distance relays is shown in Figure 9.6. In the case of Figure 9.6, since the relay measures the impedance (which is proportional to distance), both current and voltage instrument transformers are used. The relay is used to protect the transmission line, and it is connected to a circuit breaker at one end of the line. The other end of the line has another relay protecting the same line by operating the breaker at that end. In case of a fault, both relays need to operate, causing the corresponding breakers to open and resulting in the transmission line being removed from service.

FIGURE 9.6 Protective Relaying System Consisting of Instrument Transformers, a Relay, and a Breaker (a distance relay R connected to a current transformer and a voltage transformer, issuing a trip command to circuit breaker CB on the transmission line)

The role of instrument transformers is to provide galvanic isolation and transformation of the signal energy levels between the relay connected to the secondary side and the voltages and currents connected to the primary side. The original current and voltage signal levels experienced at the terminals of the power system components are typically much higher than the levels used at the input of a relay. To accommodate the needed transformation, instrument transformers with different ratios are used (IEEE, 1996; Ungrad, 1995). Next, a brief discussion of the options and characteristics of the most typical instrument transformer types is given.

Current Transformer
Current transformers (CTs) are used to reduce the current levels from thousands of amperes down to a standard output of either 5 A or 1 A for normal operation. During faults, the current levels at the transformer terminals can go up several orders of magnitude. Most of the current transformers in use today are simple magnetically coupled iron-core transformers. They are input/output devices operating with a hysteresis of the magnetic circuit and, as such, are prone to saturation. The selection of instrument transformers is critical for ensuring correct protective relaying operation. They need to be sized appropriately to prevent saturation. If there is no saturation, instrument transformers operate in a linear region, and their basic function may be represented via a simple turns ratio. Even though this is an ideal situation, it can be assumed to be true for computing simple relaying interfacing requirements. If a remanent magnetism is present in an instrument transformer core, the hysteresis may affect the time needed to saturate the next time the transformer is exposed to excessive fault signals. Current transformers come as free-standing solutions or as a part of the circuit breaker or power transformer design. If they come preinstalled with the power system apparatus, they are located in the bushings of that piece of equipment.
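Under the unsaturated (linear) assumption just described, a CT is a pure turns ratio. A minimal sketch with an assumed 2000:5 ratio shows the scaling for a normal load current and for a much larger fault current:

```python
# Ideal (unsaturated) CT scaling: the secondary current is the primary
# current divided by the turns ratio. The 2000:5 ratio is an assumed
# example, giving a standard 5-A secondary at rated primary current.

CT_RATIO = 2000.0 / 5.0          # effective turns ratio = 400

def ct_secondary(primary_amps):
    return primary_amps / CT_RATIO

print(ct_secondary(1200.0))      # load current -> 3.0 A
print(ct_secondary(16000.0))     # fault current -> 40.0 A; a real CT may
                                 # saturate well before this level
```

This ideal scaling is exactly the "simple turns ratio" used for interfacing calculations; sizing the CT so it stays linear at the largest expected fault current is the saturation check mentioned above.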
Voltage Transformer

Voltage transformers come in two basic forms: the potential transformer (PT), with an iron-core construction, and the capacitor coupling voltage transformer (CVT), which uses a capacitor divider to lower the voltage level first and then an iron-core transformer to obtain a further reduction in voltage. Both transformer types are typically free-standing. PTs are used frequently to measure voltages at substation busses, whereas CVTs may be used for the same measurement purpose on individual transmission lines. Since the voltage levels in the power system range well beyond kilovolt values, the transformers are used to bring the voltages down to an acceptable level used by protective relays. They come in standard solutions regarding the secondary voltage, typically 69.3 V or 120 V, depending on whether the line-to-ground or the line-to-line quantity is measured, respectively. In an ideal case, both types of instrument transformers are assumed to operate as voltage dividers, and the transformation is proportional to their turns ratio. In practice, both designs may experience specific deviations from the ideal case. In PTs, this may manifest as nonlinear behavior caused by the effects of hysteresis. In CVTs, the abnormalities include various ringing effects
Mladen Kezunovic
at the output when the voltage collapses at the input due to a close-in fault, as well as impacts of the stray capacitances in the inductive transformer, which may affect the frequency response.

Relays

Another component shown in Figure 9.6 is the relay itself. Relays are controllers that measure input quantities and compare them to thresholds, commonly called relay settings, which in turn define operating characteristics. The relay characteristics may be quite different depending on the relaying quantity used and the relaying principle employed. A few operating characteristics of various relays are shown in Figure 9.7. The first is for a relay operating on an overcurrent principle with an inverse time delay applied for different levels of input current. The second is used by transmission line relays that operate on an impedance (distance) principle with a time-step zone implementation. Further discussion of the specific relaying principles is given later. In general, the relay action is based on a comparison between the measured quantity and the operating characteristic. Once the characteristic thresholds (settings) are exceeded, the relay assumes that this is caused by a fault affecting the measured quantity, and it issues a command to operate the associated circuit breaker(s). This action is commonly termed relay tripping, meaning opening a circuit breaker. Relays come in different designs and implementation technologies. In the early days of relaying, the number of different designs was rather small, and the main technology was electromechanical. Today's design options are much wider, with a number of different
relay implementation approaches being possible. This is due to the great flexibility of the microprocessor-based technology almost exclusively used to build relays today. Since microprocessor-based relays use very low-level voltage signals at the inputs to the signal measurement circuitry, all these relays have auxiliary transformers at the front end to scale down the input signal levels even further from what is available at the secondary of an instrument transformer. To accommodate specific needs and provide different levels of the relay input quantities, some relay designs come with multiple connections of the auxiliary transformers called taps. Selecting an appropriate tap determines a specific turns ratio for the auxiliary transformer that allows more precise selection of the level of the relay input signal. Besides the voltage and/or current signals as inputs and trip signals as outputs, relays have a number of input/output connections aimed at other functions: coordination with other relays, communication with relays and operators, and monitoring. For an electronic relay, a connection to the power supply also exists.

Circuit Breaker

The last component in the basic relaying system is the circuit breaker. Breakers allow interruption of the current flow, which is needed if a fault is detected and a tripping command is issued by the relay. Circuit breakers operate on different principles associated with the physical means of interrupting the flow of power (Flurscheim, 1985). As a result, vacuum, air-blast, and oil-filled breakers are commonly used depending on the voltage level and required speed of operation. All breakers try to detect the zero crossing of the current and interrupt the flow at that time, since the energy level to be
FIGURE 9.7 Typical Relay Operating Characteristics: (A) Overcurrent for Inverse-Time Relay (operating time versus current); (B) Distance (Impedance) for Three-Zone MHO Relay (zones 1 to 3 in the R-X plane)
9 Fundamentals of Power System Protection

interrupted is at a minimum. Breakers often do not succeed in making the interruption during the first attempt, and, as a result, several cycles of the fundamental frequency current signal may be needed to completely interrupt the current flow. This affects the speed of breaker operation. The fastest breakers used at the high-voltage levels are one-cycle breakers, whereas a typical breaker used at the lower voltage levels may take 20 to 50 cycles to open. Circuit breakers are initiated by the relays to disconnect a power system component when a fault is present on that component. In the case of transmission lines, many faults are temporary in nature. To distinguish between permanent and temporary faults on transmission lines, the concept of breaker autoreclosing is used. It assumes that once the breaker is tripped (opened) by the relay, it will stay open for a while and then automatically reclose. This action allows the relays to verify whether the fault is still present and, if so, to trip the breaker again. If the fault has disappeared, the relays will not act, and the transmission line will stay in service. The autoreclosing function may be implemented quite differently depending on the particular needs. The main options are to have single or multiple reclosing attempts and to operate either a single pole or all three poles of the breaker. Circuit breakers are also quite often equipped with auxiliary relays called breaker failure (BF) relays. If the breaker fails to open when called upon, the BF relay will initiate operation of other circuit breakers that will disconnect the faulted element, quite often at the expense of disconnecting some additional healthy components.
This may be observed in Figure 9.3: if a transmission line relay operates the two breakers in the transmission line bay and one of the breakers fails to open, the BF relay will disconnect all the breakers on the bus side where the failed breaker is connected, making sure the faulted line is disconnected from the bus.
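The single-shot autoreclosing sequence described above can be sketched as a simple decision procedure. The dead time and the action labels are illustrative assumptions, not values from the text.

```python
def autoreclose_sequence(fault_present_after_reclose, dead_time_s=0.5):
    """Single-shot autoreclose logic: trip, hold the breaker open for a
    dead time, reclose; if the fault persists, trip again and lock out.
    Returns the ordered list of actions taken (labels are illustrative).
    """
    actions = ["trip", f"hold open {dead_time_s}s", "reclose"]
    if fault_present_after_reclose:
        actions += ["trip", "lockout"]       # permanent fault
    else:
        actions.append("stay in service")    # temporary fault has cleared
    return actions

print(autoreclose_sequence(False)[-1])  # stay in service
print(autoreclose_sequence(True)[-1])   # lockout
```

A multiple-shot scheme would simply repeat the hold/reclose pair a configured number of times before locking out.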
9.2.2 Basic Relaying Principles

When considering protection of the most common power system components, namely generators, power transformers, transmission lines, busses, and motors, only a few basic relaying principles are used. They include overcurrent, distance, directional, and differential. In the case of transmission line relaying, communication channels may also be used to provide exchange of information between relays located at the two ends of the line. The following discussion is aimed at explaining the generic properties of the above-mentioned relaying principles. Many other relaying principles are also in use today; the details may be found in a number of excellent references on the subject (Horowitz and Phadke, 1992; Blackburn, 1998; Ungrad et al., 1995; Blackburn, 1993). Overcurrent protection is based on the very simple premise that in most instances of a fault, the level of fault current dramatically increases from the prefault value. If one establishes a threshold well above the nominal load current, as soon
as the current exceeds the threshold, it may be assumed that a fault has occurred, and a trip signal may be issued. A relay based on this principle is called an instantaneous overcurrent relay, and it is in wide use for protection of radial low-voltage distribution lines, ground protection of high-voltage transmission lines, and protection of machines (motors and generators). The main issue in applying this relaying principle is to understand the behavior of the fault current well, in particular when compared to the variation in the load current caused by significant changes in the connected load. A typical example where it may become difficult to distinguish fault levels from normal operating levels is the overcurrent protection of distribution lines with heavy fluctuations of the load. To accommodate this difficulty, a variety of overcurrent protection applications have been developed using the basic principle described previously combined with a time delay. One approach is to provide a fixed time delay; in other instances, the time delay is made proportional to the current level. One possible relationship is an inverse one, where the time delay is small for high currents and long for smaller ones. The example shown earlier in Figure 9.7 describes an inverse time characteristic, which may also be of the "very inverse" or "extremely inverse" type. Further variations of the overcurrent relay are associated with the use of the directional element, which is discussed later. The issues of coordinating overcurrent relays and protecting various segments of a distribution line are also discussed later.

Distance relaying belongs to the principle of ratio comparison. The ratio is between the voltage and current, which in turn produces impedance. The impedance is proportional to the distance in transmission lines, hence the "distance relaying" designation for the principle. This principle is primarily used for protection of high-voltage transmission lines.
In this case, the overcurrent principle cannot easily cope with the change in the direction of the flow of power, simultaneous with variations in the level of the current flow, which is common in transmission but not so common in radial distribution lines. Computing the impedance in a three-phase system is a bit involved because each type of fault produces a different impedance expression (Lewis, 1947). Because of these differences, the settings of a distance relay need to be selected to distinguish between the ground and phase faults. In addition, fault resistance may create problems for distance measurements because the value of the fault resistance may be difficult to predict. It is particularly challenging for distance relays to measure the correct fault impedance when a current in-feed from the other end of the line creates an unknown voltage drop across the fault resistance. This may contribute to an erroneous computation of the impedance, called the apparent impedance, "seen" by the relay located at one end of the line and using the current and voltage measurements from that end only. Once the impedance is computed, it is compared to the settings that define the operating characteristic of the relay. Based on the comparison, a decision is made whether a fault has occurred and, if so, in which zone. As mentioned earlier, the
FIGURE 9.8 Operating Characteristics of a Distance Relay (offset MHO, quadrilateral, and blinder shapes in the R-X plane)
impedance relay may be set to recognize multiple zones of protection. For a variety of application reasons, the operating characteristics of a distance relay may have different shapes, the quadrilateral and the MHO being the most common. The different operating characteristic shapes are shown in Figure 9.8 (Blackburn, 1998). The characteristics dictate relay performance for specific application conditions, such as changes in the loading levels, different values of fault resistance, effects of power swings, presence of mutual coupling, and reversals of fault direction. Distance relays may be used to protect a transmission line by taking the input measurements from only one end of the line. Another approach is to connect two distance relays to perform the relaying as a system by exchanging data between the relays at the two ends through a communication link. In this case, it is important to decide what information gets exchanged and how it is used. The logic that describes the approach is called a relaying scheme. The most common relaying schemes are based on signals sent by the relay at one end either blocking or unblocking, or facilitating and accelerating, the trip of the relay at the other end. In all of the mentioned applications, directionality of a distance relay is an important feature; it is discussed in more detail later on. As an example of relaying scheme operation, Figure 9.9 shows the relaying zones used for implementation of a blocking scheme. The zones for each relay are forward overreaching (FO) and backward reverse (BR). The FO setting is selected so
that the relay can "see" the faults occurring in a forward direction, looking from the relay position toward the adjacent line terminal and beyond. The BR setting is selected so that the relay can "see" the faults occurring in the backward direction, causing a reversal of the power flow. If the relay R1 (at location A) has "seen" a fault (at location X1) in its zone FO that lies behind the relay R2 positioned at the other end (at location B) of the line, relay R1 is blocked from operating by relay R2. Relay R2 has "seen" the fault at location X1 in its BR zone and, hence, can "tell" relay R1 not to trip by sending a blocking signal. Should a fault occur in between the two relays (location X2), the blocking signal is not sent, and both relays operate instantaneously since both relays "see" the fault in zone FO and neither sees it in zone BR.

A relaying concept widely used for protection of busses, generators, power transformers, and transmission lines is the current differential. It assumes that the currents entering and leaving the power system component are measured and compared to each other. If the input and output currents are the same, then the protected component is "healthy," and no relaying action is taken. If the current comparison indicates that there is a difference caused by a fault, the relay action is called upon. The difference has to be significant enough to be attributed to a fault, since some normal operating conditions and inaccuracies in the instrument transformers may also produce a difference that is not attributable to a fault. More discussion about the
FIGURE 9.9 The Blocking Principle of Relaying with Fault Directionality Discrimination
criteria for establishing the type and level of the difference that are typical for a fault event is presented later, when this relaying principle is discussed in more detail. In the case of differential protection of transmission lines, the measured quantities need to be sent over a communication channel to compute the difference. Since the speed and reliability of communication channels may play a role in the overall performance of the concept, a slightly different philosophy is established regarding the type of measurements taken and the comparison criteria for transmission lines versus other, more local, applications.

Directionality of a relay operation is quite important in many applications. It is established based on the angle of the fault current with respect to the line voltage. It can be established for current alone or for impedance. In the latter case, the directionality is detected by looking at the angle between the reference voltage and the fault current. In the impedance plane, the directionality is detected by the quadrant in which the impedance falls: whether the calculated fault impedance is in a forward (first-quadrant) or reverse (third-quadrant) zone determines the direction of the fault. Typically, the zone associated with the reverse direction is called a reverse zone and is numbered in sequence after the forward zones are numbered first. The directionality may also be based on a power calculation, which in turn determines the direction of the power flow to/from a given power system component. Besides being used for implementation of transmission line relaying schemes, directionality is quite important when applying the overcurrent principle and is often used when implementing various approaches to ground protection.
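The current-differential comparison described above is commonly implemented as a percentage (restrained) differential: the relay operates only when the operating current exceeds a fraction of the through (restraint) current plus a minimum pickup. The slope and pickup values below are illustrative assumptions, not settings from the text.

```python
def differential_trip(i_in, i_out, slope=0.3, min_pickup=0.5):
    """Percentage differential check on phasor currents (complex amperes),
    with both currents referenced as flowing INTO the protected zone.
    Operate quantity: |I_in + I_out|; restraint: average of magnitudes.
    slope and min_pickup are illustrative settings.
    """
    operate = abs(i_in + i_out)
    restraint = (abs(i_in) + abs(i_out)) / 2.0
    return operate > slope * restraint + min_pickup

# Healthy component: current enters one end and leaves the other,
# so the referenced currents cancel and no trip is issued.
print(differential_trip(100 + 0j, -100 + 0j))  # False
# Internal fault: both ends feed current into the zone.
print(differential_trip(100 + 0j, 80 + 0j))    # True
```

The restraint term is what tolerates the normal-condition differences mentioned above (load flow, CT inaccuracies) without desensitizing the relay to genuine internal faults.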
9.2.3 Criteria for Operation

A number of different criteria for operation may be established, but the three most common ones are speed, dependability/security, and selectivity (WD G5, 1997a, b). All the criteria need to be combined in a sound engineering solution to produce the desired performance, but for the sake of clarity, each of the criteria is now discussed separately.

Speed

The speed of operation is the most critical protective relay operating criterion. The relays have to be fast enough to allow clearing of a fault in the minimum time needed to ensure reliable and safe power system operation. The minimum operating time of a relay is achieved when the relay operates without any intentional time-delay settings. An example is the time of operation of a distance relay in a direct (instantaneous) trip in zone 1. The operating time may vary from the theoretical minimum possible to the time that a practical implementation of a relaying algorithm may take to produce a decision. The operating time is dependent on the algorithm and technology used to implement the relay design. Because the relays respond to fault transients, the relay operating time
may vary slightly for the same relay if subjected to transients coming from different types of fault. The minimum acceptable operating time is often established to make sure that the relay will operate fast enough to meet other time-critical criteria. For the transmission line protection example, the overall time budget for clearing faults in a system is expressed in terms of the number of cycles of the fundamental frequency voltage and current signals. This time is computed from the worst-case fault type persisting and potentially causing an instability in the overall power system. To prevent the instability from occurring, the fault needs to be cleared well before this critical time is reached, hence the definition of the minimum fault clearance time. The relay operating time is only a portion of this time-budget allocation. The rest is related to the operation of circuit breakers and possible multiple reclosing actions that need to be taken. The consideration also includes the breaker failure action taken in the case a breaker fails to open and other breakers get involved in clearing the fault. The relay operating time is a critical criterion even though it is allocated a very small portion of the mentioned fault-clearing time budget. As an example, the expected average operating time of transmission line relays in zone 1 is around one to two fundamental frequency cycles, where one cycle lasts 16.67 ms in a 60-Hz system.

Dependability/Security

Another important operating criterion for protective relays is dependability/security. The two are often mentioned as a pair since dependability and security are selected in a trade-off mode. Dependability is defined as the relay's ability to respond to a fault by recognizing it each time it occurs. Security is defined as the ability of a relay not to act if a disturbance is not a fault.
In almost all the relaying approaches used today, the relays are selected with a bias toward dependability or security in such a way that one affects the other. A more dependable approach will cause the relays to overtrip, the term used to designate that the relay will operate whenever there is a fault but at the expense of possibly tripping even for nonfault events. An emphasis on security will cause relays not to trip for nonfault conditions but at the risk of not operating correctly when a fault occurs. The mentioned trade-off when selecting the relaying approach is made by choosing different types of relaying schemes and related settings to support one or the other aspect of the relay operation.

Selectivity

One criterion often used to describe how reliable a relaying scheme is relates to the relay's ability to differentiate among the variety of operating options it is designed for. This criterion is called relay selectivity. It may be attributed to the relay accuracy, the relay settings, or, in some instances, the measuring capabilities of the relay. In all cases, it designates how well the relay has recognized the fault conditions that it is designed or set to operate for. An example of the selectivity problem is an
inability of a relay to correctly decide whether it should operate in zone 1 or zone 2 for a fault that occurs in the region close to the set point between the zones. In this case, the relay operation may be termed "overreaching" or "underreaching," depending on whether the relay has mistakenly judged the fault to be inside the selected zone when it was actually outside, or vice versa.
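A zone decision like the one in this selectivity example reduces to testing whether the measured impedance lies inside an operating characteristic. For a self-polarized MHO circle passing through the origin with reach point Z_reach, a point Z lies inside when |Z - Z_reach/2| <= |Z_reach|/2. The reach values in the sketch below are illustrative assumptions.

```python
def in_mho_zone(z, z_reach):
    """MHO circle through the origin with reach set point z_reach:
    a circle of diameter |z_reach| centered at z_reach/2 in the
    impedance plane."""
    return abs(z - z_reach / 2) <= abs(z_reach) / 2

def classify_zone(z, zone1_reach, zone2_reach):
    if in_mho_zone(z, zone1_reach):
        return 1        # instantaneous trip
    if in_mho_zone(z, zone2_reach):
        return 2        # time-delayed trip
    return None         # no operation

# Illustrative reaches: zone 1 = 80%, zone 2 = 120% of a 1 + j1 ohm line.
z1, z2 = 0.8 * (1 + 1j), 1.2 * (1 + 1j)
print(classify_zone(0.4 + 0.4j, z1, z2))  # 1
print(classify_zone(1.0 + 1.0j, z1, z2))  # 2
```

A fault impedance landing near the zone 1 reach point is exactly the boundary case where measurement error produces the over/underreaching behavior described above.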
9.3 Protection of Transmission Lines

The protection of transmission lines varies in the principles used and the implementation approaches taken, depending on the voltage level. The main principles and associated implementation issues are discussed for the following most frequent transmission line protective relaying cases: overcurrent protection of radial distribution feeders, distance protection of transmission lines and associated relaying schemes, and differential protection of transmission lines. Detailed treatment may be found in an excellent recent IEEE survey of the subject (IEEE, 1999).
9.3.1 Overcurrent Protection of Distribution Feeders

Protection of distribution feeders is most adequately accomplished using the overcurrent relaying principle. In the radial configuration of the feeder shown in Figure 9.10, the fault
current distribution is such that the fault currents are highest for faults closest to the source, and the current decays as the fault moves farther away from the source. This property allows the use of the fault current magnitude as the main criterion for the relaying action. The overcurrent relaying principle combined with the inverse-time operating characteristic, as shown in Figure 9.11, represents the relaying solution for radial distribution feeders in most cases. The inverse-time property designates the relationship between the current magnitude and the relay operating time (the higher the fault current magnitude, the shorter the relay operating time). Further discussion relates to the setting procedures for overcurrent relays with inverse-time characteristics. To embark on the determination of relay settings, the following data have to be made available for an overcurrent relay, either through calculation or through simple selection of design options: time dial, tap (pick-up setting), and operating characteristic type; current transformer ratio for each relay location; and extreme (minimum and maximum) short-circuit values for fault currents. These data, applied to the relaying system shown in Figure 9.10, are provided in Table 9.1. The values are determined only for relays R2 and R3 as an example; similar approaches will yield corresponding values for relay R1. The next step is to establish the criteria for setting coordination. The criteria selected as examples for relays R3 and R2 shown in Figure 9.10 are stated in the following rules:
FIGURE 9.10 Protection of a Radial Distribution Feeder (source, busses 1 to 4, breakers B12, B23, and B34, relays R1, R2, and R3, and loads)
FIGURE 9.11 Inverse-Time Operating Characteristic of an Overcurrent Relay (operating time, on a log scale, versus the ratio of fault current to pick-up current, for time dials 0.05, 0.2, and 0.3)
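Inverse-time curves like those in Figure 9.11 are commonly expressed analytically; for instance, IEEE Standard C37.112 uses the form t = TD * (A / (M^p - 1) + B), where M is the multiple of pick-up current and TD is the time dial. The constants below are the commonly cited values for the three curve families; verify them against the standard before relying on them.

```python
# Curve constants (A, B, p) commonly cited for IEEE C37.112 families.
CURVES = {
    "moderately_inverse": (0.0515, 0.1140, 0.02),
    "very_inverse":       (19.61,  0.491,  2.0),
    "extremely_inverse":  (28.2,   0.1217, 2.0),
}

def operate_time(multiple_of_pickup, time_dial, curve="very_inverse"):
    """Relay operating time in seconds for a current at the given
    multiple of pick-up; the relay does not operate below pick-up."""
    a, b, p = CURVES[curve]
    if multiple_of_pickup <= 1.0:
        raise ValueError("relay does not operate below pick-up")
    return time_dial * (a / (multiple_of_pickup ** p - 1.0) + b)

# The inverse property: a higher fault current operates faster.
assert operate_time(10.0, 1.0) < operate_time(2.0, 1.0)
```

The "very" and "extremely" inverse families differ only in how steeply the time falls off with current, which is what the text refers to when mentioning the alternative characteristic types.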
1. R2 must pick up for a value exceeding one-third of the minimum fault current (rule of thumb) seen by relay R3 (assuming this value will never be below the maximum load current).
2. R2 must pick up for the maximum fault current seen by R3 but no sooner than 0.5 sec (rule of thumb) after R3 should have picked up for that current.

TABLE 9.1 Data Needed for Setting Determination for the Case in Figure 9.10

Bus/relay                 R2       R3
Max fault current [A]     306      200
Min fault current [A]     250      156
CT ratio                  50:5     50:5
Pick-up setting           5        5
Time-dial setting         0.2      0.05

Based on the mentioned criteria and the data provided in Table 9.1, the setting calculation steps are as follows.

Step 1: Settings for Relay R3
The relay has to operate for all currents above 156 A. For reliability, one-third of the minimum fault current is selected. This yields a primary pick-up current of 156/3 = 52 A. Based on this, a CT ratio of 50/5 = 10 is selected, which yields a secondary current of 52/10 = 5.2 A. To match this, the relay tap (pick-up value) is selected to be 5.0 A. To ensure the fastest tripping for faults downstream from relay R3, a time dial of 0.05 is selected (see Figure 9.11).

Step 2: Settings for Relay R2
• Selection of CT ratio and relay tap: Relay R2 must act as a backup for relay R3 and, hence, has to operate for the smallest fault current seen by relay R3, which is 156 A. Therefore, the selection of the CT ratio and relay tap is the same for relay R2 as it was for relay R3.
• Time dial selection: Based on rule 2, relay R2, acting as a backup for relay R3, has to operate 0.5 sec after relay R3 should have operated. This means that relay R2 has to have a delay of 0.5 sec for the highest fault current seen by relay R3 to meet the above-mentioned criteria. Let us assume that the highest fault current seen by relay R3 is for a fault located next to breaker B34, looking downstream from breaker B34, and that this current is equal to 306 A. In that case, the secondary fault current is equal to 306/10 = 30.6 A. The selected relay tap produces a ratio of fault current to pick-up current of 30.6/5 = 6.12. From Figure 9.11, the time delay corresponding to this multiple of pick-up is 0.2 sec. Hence, if relay R3 fails to operate, relay R2 will operate as a backup with a time delay of 0.2 sec + 0.5 sec = 0.7 sec. According to Figure 9.11, the time delay of 0.7 sec requires the selection of a time dial of 0.2. It also becomes obvious that selection of a smaller current for calculation of the time delay of relay R3 will not allow the criterion in rule 2 to be met.
Overcurrent relaying of distribution feeders is a very reliable relaying principle as long as the time coordination can be achieved for the selected levels of fault current as well as the given circuit breaker operating times and instrument transformer ratios. Problems start occurring if the feeder loading changes significantly during a given period of time and/or the levels of fault currents are rather small. This may affect a proper selection of settings that will accommodate wide-ranging fluctuations in the load and fault currents. Some other, less likely, phenomena that may affect the relaying performance are current transformer saturation, selection of inadequate auxiliary transformer taps, and large variations in the circuit breaker opening times.
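The setting procedure of Steps 1 and 2 can be reproduced numerically. The rules of thumb (one-third of the minimum fault current, 0.5-sec coordination margin) and the fault currents come from the worked example, while the 0.2-sec delay is simply the value read off the inverse-time curve in Figure 9.11 rather than computed here.

```python
# Step 1: CT ratio and tap for R3 (primary pick-up = 1/3 of min fault).
min_fault_r3, max_fault_r3 = 156.0, 306.0   # amperes, from Table 9.1
pickup_primary = min_fault_r3 / 3           # 52 A
ct_ratio = 50 / 5                           # 10
pickup_secondary = pickup_primary / ct_ratio  # 5.2 A -> tap of 5 A chosen
tap = 5.0

# Step 2: coordination delay for R2 backing up R3.
i_secondary = max_fault_r3 / ct_ratio       # 30.6 A
multiple = i_secondary / tap                # 6.12 times pick-up
r3_delay = 0.2                              # seconds, read from Figure 9.11
margin = 0.5                                # seconds, coordination rule of thumb
r2_required_delay = r3_delay + margin       # 0.7 s -> time dial of 0.2

print(round(pickup_secondary, 2), round(multiple, 2), round(r2_required_delay, 2))
```

Running the script reproduces the 5.2 A secondary pick-up, the 6.12 multiple of pick-up, and the 0.7-sec backup delay from the text.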
9.3.2 Distance Protection of Transmission Lines

As explained earlier, the distance protection principle is based on calculation of the impedance "seen" by the relay. This impedance is defined as the apparent impedance of the protected line calculated using the voltages and currents measured at the relaying point. Once the impedance is calculated, it is compared to the relay operating characteristic. If it is identified as falling into one of the zones, the impedance is considered as corresponding to a fault, and as a result, the relay issues a trip. The concept of the impedance being proportional to the length of the transmission line, and the idea of using relay settings that correspond to the line length, are the reason for calling this relaying principle distance relaying.

To illustrate the process of selecting distance relay settings, a simple network configuration, with data given in Table 9.2, is considered in Figure 9.12. Table 9.2 also contains additional information about the instrument transformer ratios.

FIGURE 9.12 A Sample System for Distance Relaying Application

TABLE 9.2 Data Needed for Calculation of Settings for a Distance Relay

Data                 Line 1-2        Line 2-3
Line impedance       1.0 + j1.0      1.0 + j1.0
Max load current     50 MVA          50 MVA
Line length          50 miles        50 miles
Line voltages        138 kV          138 kV

The setting selection and coordination for the example given in Figure 9.12 can be formulated as follows.

Step 1: Determination of Maximum Load Current and Selection of CT and CVT Ratios
From the data given in Table 9.2, the maximum load current is computed as:

I_load = 50(10^6) / [(√3)(138)(10^3)] = 418.4 A.    (9.1)

The CT ratio is 400/5 = 80, which produces about 5 A in the secondary winding. If one assumes that the CVT secondary phase-to-ground voltage needs to be close to 69 V, then computing the actual primary voltage and selecting the ratio to produce a secondary voltage close to 69 V allows one to calculate the CVT ratio. The primary phase-to-ground voltage is equal to 138/√3 kV = 79.67(10^3) V. If we allow the secondary voltage to be 69.3 V, then the CVT ratio can be selected as 79.67(10^3)/69.3 = 1150/1.

Step 2: Determination of the Secondary Impedance "Seen" by the Relay
The CT and CVT ratios are used to compute the impedance as follows:

(V_p/1150) / (I_p/80) = (V_p/I_p)(0.07) = Z_line(0.07).    (9.2)

Therefore, the secondary impedance "seen" by the relay is, for both lines, equal to 0.07 + j0.07 Ω.

Step 3: Computation of Apparent Impedance
Apparent impedance is the impedance seen by the relay under specific loading conditions. If we select a power factor of 0.8 lagging for the selected CT and CVT ratios as well as the computed maximum load current, the apparent impedance is equal to:

Z_load = [69.3 / (418.4 × 5/400)] (0.8 + j0.6) = 10.6 + j8.0 Ω.    (9.3)

Step 4: Selection of Zone Settings
Finally, the zone settings can now be selected by multiplying the line's secondary impedance by a safety factor. This factor is arbitrarily determined to be 0.8 for zone 1 and 1.2 for zone 2. As a result, the settings for zone 1 and zone 2, respectively, are calculated as:

Zone 1: 0.8(0.07 + j0.07) = (0.056 + j0.056) Ω.
Zone 2: 1.2(0.07 + j0.07) = (0.084 + j0.084) Ω.    (9.4)

Distance relaying of transmission lines is not free from inherent limitations and application ambiguities. The most relevant inherent limitation is the influence of the current in-feed from the other end, as described earlier. Another source of possible error is the fault resistance, which cannot be measured online. Therefore, it has to be assumed at the time of the relay setting computation, when a value different from what is actually present during the fault may be picked. The anticipated value of the fault resistance is selected arbitrarily and may be a cause of gross error when computing the impedance for a ground fault. Yet another source of error is the mutual coupling between adjacent transmission lines or phases, which, if significant, can adversely affect relay operation if not adequately taken into account when computing the fault impedance. Distance relaying becomes particularly difficult, and sometimes unreliable, if applied to special protection cases such as multiterminal lines, lines with series compensation, very short lines, and parallel lines. In all of these cases, selecting relay settings is quite involved and prone to errors due to arbitrary assumptions about the value of the fault impedance.
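The four setting steps can be collected into a short numerical sketch. It follows the text's own figures, including the stated 418.4 A maximum load current, which is taken as given (note that 50 MVA at 138 kV evaluates to about 209 A by the same formula), and the 0.8/1.2 zone safety factors; variable names are illustrative.

```python
import math

# Step 1: maximum load current and instrument transformer ratios.
i_load = 418.4                     # amperes, as stated in the text
ct_ratio = 400 / 5                 # 80
v_phase = 138e3 / math.sqrt(3)     # primary phase-to-ground, about 79.67 kV
cvt_ratio = v_phase / 69.3         # about 1150

# Step 2: overall secondary impedance scaling factor.
k = ct_ratio / cvt_ratio           # about 0.07
z_line = 1.0 + 1.0j                # ohms, from Table 9.2
z_secondary = z_line * k           # about 0.07 + j0.07 ohms

# Step 3: apparent (load) impedance at 0.8 power factor lagging.
i_secondary = i_load / ct_ratio    # about 5.2 A
z_load = (69.3 / i_secondary) * (0.8 + 0.6j)  # about 10.6 + j8.0 ohms

# Step 4: zone settings with the 0.8 and 1.2 safety factors.
zone1 = 0.8 * z_secondary          # about 0.056 + j0.056 ohms
zone2 = 1.2 * z_secondary          # about 0.084 + j0.084 ohms
```

Comparing z_load with zone1 and zone2 shows why the settings are safe: the load impedance is two orders of magnitude larger than either zone reach in secondary ohms.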
9.3.3 Directional Relaying Schemes for High-Voltage Transmission Lines Due to the inherent shortcoming of distance relays not being able to recognize the effect of the current in-feed from the other end of the line, which can possibly lead to a wrong decision, the concept of a relaying scheme was developed. The concept assumes that the relays at the two ends of a transmission line coordinate their actions by exchanging the information that they detect for the same fault. To be able to perform the coordination, the relays have to use a communication channel to exchange the information. In an earlier discussion, illustrated in Figure 9.9, one of the principles for scheme protection of transmission lines was explained. A more comprehensive summary is given now. The choices among basic relaying scheme principles are influenced by two factors: the approach to using the communication channels and the type of information sent over the channel. Regarding the approach to using the channels, one option is for the channels to be active all the time, which means sending the information continuously and making sure that the communication has not failed. The other option is for the communication channels to be activated only when a command is to be transmitted and not activated during other intervals or cases related to the fault. Regarding the type of information sent, the channels are used for issuing either a
blocking or a tripping signal. In the case a blocking signal is issued, the relay at one end, after making a decision about a fault, sends a blocking signal to the other relay. If a trip signal is used, the relay that first detects the fault sends a signal to the other end. This will make the other relay perform a trip action immediately after it has detected a fault as well. An additional consideration in selecting the scheme relates to the type of detectors used in the relays for making the decisions about the existence of a fault and the location of the fault with respect to the zones. Specific choices in setting up the zones are made for each scheme operation. Typical choices for the detector types are overcurrent and/or directional, with the zone I and/or zone II settings being involved in the decision making. A summary of the above considerations is given in Table 9.3. As with any other relaying approach, the scheme implementations also have different performance criteria established. If one takes the dependability/security criterion (WG D5, 1997; WG, 1981) as the guiding factor in making the decisions, the properties of the various relaying scheme solutions can be classified as follows:

1. Blocking/unblocking: The blocking solution tends to provide higher dependability than security. Failure to establish a blocking signal from a remote end can result in overtripping for external faults. The unblocking solution offers a good compromise of both high dependability (channel not required to trip) and high security (blocking is continuous).

2. Transfer trip (overreaching/underreaching): The transfer trip solution offers higher security than dependability. A failure to receive the channel signal results in a failure to trip for internal faults. The transfer trip systems require extra logic for internal-trip operation at a local terminal when the remote terminal breaker is open or for a ‘‘weak infeed’’ when the fault contribution is too low to send a trip signal. This is not a problem with blocking and unblocking systems.

The options for scheme protection implementation are much more involved than what has been discussed here. Further details may be found in Blackburn (1993, 1998).

9 Fundamentals of Power System Protection    799

TABLE 9.3 Summary of Basic Characteristics of Most Common Relaying Schemes

Scheme type (directional comparison) | Use of communication channel | Type of signal sent
Blocking | Blocking a signal (use of power line carrier) | Block
Unblocking | Sending either block or unblock signal all the time (use of frequency shift keying channel) | Block/unblock
Overreaching transfer trip / Underreaching transfer trip | Sending a trip signal (use of audio tones) | Trip/guard

9.3.4 Differential Relaying of High-Voltage Transmission Lines In the cases when the above-mentioned relaying schemes are not sufficiently effective, a differential scheme may be used to protect high-voltage (HV) transmission lines. In this type of relaying, the measurements from two (or multiple) ends of a transmission line are compared. Transmitting the measurement from one end to the other enables the comparison. The decision to trip is made once the comparison indicates a significant difference between the measurements taken at the transmission line end(s). The main reason for introducing differential relaying is the ability to provide 100% protection of transmission lines; at the same time, the influence of the rest of the system is minimal. The scheme has primarily been used for high-voltage transmission lines that have a strategic importance or have some difficult application requirements such as series compensation or multiterminal configuration. The use of a communication system is needed for implementation of this approach. This may be considered a disadvantage due to the increased cost and possibility for the channel malfunctioning. The classification of the existing approaches can be based on two main design properties: the type of the communication media used and the type of the measurements compared. The communication media most commonly used are metallic wire, also known as pilot-wire; leased telephone lines; microwave radio; and fiber-optic cables. The measurements typically used for the scheme are composite values obtained by combining several signal measurements at a given end (IEEE, 1999) and sample-by-sample values of the phase currents (IEEE, 1999). A summary of the most common approaches for the differential relaying principle for transmission lines is given in Table 9.4. In the past, the most common approach was to use metallic wire to compare the sequence values. The sequence values are a particular representation of the three-phase original values obtained through a symmetrical component transformation (Blackburn, 1993). Due to a number of practical problems caused by the ground potential rise and limited length of the physical wire experienced with the metallic wire use, these schemes have been substituted by other approaches where different media such as fiber-optic or microwave links are used. Most recently, as the wideband communication channels have become more affordable, the differential schemes are being implemented using either dedicated fiber-optic cables or a high-speed leased wideband communication system.
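The dependability/security trade-off between the blocking and transfer-trip schemes can be made concrete with a minimal boolean sketch. The function names and the two-input framing are illustrative assumptions, not a vendor scheme implementation:

```python
def pott_trip(local_zone2_pickup, remote_trip_received):
    """Permissive (overreaching) transfer trip: trip only when the local
    overreaching element picks up AND permission arrives from the remote end."""
    return local_zone2_pickup and remote_trip_received

def blocking_trip(local_zone2_pickup, remote_block_received):
    """Blocking scheme: trip when the local element picks up UNLESS the
    remote end sends a block (it sees the fault as external)."""
    return local_zone2_pickup and not remote_block_received
```

Note how a silent (failed) channel makes the blocking scheme trip anyway (dependable but less secure), while the transfer-trip scheme stays quiet (secure but less dependable), matching points 1 and 2 above.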
9.4 Protection of Power Transformers For power transformers, the current differential relaying principle is the most common one used. In addition, other types of protection are implemented, such as a sudden pressure relay on the units with large ratings. As much as the current
differential relaying has been a powerful approach in the past, it has some inherent limitations that pose difficulties for special applications or practical design considerations. Further discussion concentrates on the mentioned limitations and their impact on the current differential approaches. The other relaying principles applied for protecting power transformers are not discussed here. Further details can be found in references (Horowitz and Phadke, 1992; Blackburn, 1998; Ungrad et al., 1995).

TABLE 9.4 Summary of Most Common Differential Relaying Principles

Differential principle | Signals used for comparison | Properties
Pilot-wire | Three-phase currents converted into sequence voltage | Use of sequence filters due to direct use of wires prone to transients caused by interference
Phase comparison | Three-phase currents converted into a single current waveform | Composite waveform converted into binary string used for phase comparison
Segregated phase comparison | Phase currents in each phase directly compared at two ends | Currents converted into square waves used for phase comparison
Current differential | Samples of each current transmitted to the other end for comparison | Currents from both ends directly compared
Composite-waveform differential | Each current converted into sequence current and a combined composite signal is transmitted | Composite signals from both ends directly compared

9.4.1 Operating Conditions: Misleading Behavior of Differential Current Power transformers are energy storage devices that experience transient behavior of the terminal conditions when the stored energy is abruptly changed. Such conditions may be seen during transformer energization, energization of a parallel transformer, removal of a nearby external fault, and a sudden increase in the terminal voltage. The following is a discussion of the mentioned phenomena and the impact they have on the terminal currents. Energizing a transformer causes a transient behavior of the currents at the transformer primary due to so-called magnetizing inrush current. As a voltage is applied to an unloaded transformer, the nonlinear nature of the magnetizing inductance of the transformer causes the magnetizing current to experience an initial value as high as 8–30 times the full-load
current, which may appear to the differential scheme as a difference caused by a fault. An example of the inrush wave shape for the magnetizing current is given in Figure 9.13. Fortunately, the inrush current has a rich harmonic content, which can be used as the basis for distinguishing between the high currents caused by a fault and the ones caused by the inrush. Since the magnetizing inrush is a function of both the prior history of remanent magnetism as well as the type of the transformer connection, selecting the scheme for recognizing proper levels of the harmonics needs to be carried out carefully. Similar transient behavior of the primary current is seen in a transformer connected in parallel to a transformer that is being energized. The inrush created on the transformer being energized changes the magnetizing current, and thereby the primary current, of the parallel transformer. This phenomenon is called sympathetic inrush. Sudden removal of an external fault and a sudden increase in the transformer voltage also cause the inrush phenomenon, again well recognized by the occurrence of particular harmonics in the primary current.
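The harmonic-content test described above can be sketched by comparing the second-harmonic magnitude of the differential current against the fundamental. The single-bin DFT and the 15% ratio threshold are illustrative assumptions, not settings taken from the text:

```python
import math

def harmonic_magnitude(samples, harmonic, samples_per_cycle):
    """Magnitude of one harmonic of a sampled waveform via a single DFT bin.
    Assumes the record covers a whole number of fundamental cycles."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * harmonic * k / samples_per_cycle)
             for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * harmonic * k / samples_per_cycle)
             for k, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n

def looks_like_inrush(samples, samples_per_cycle, threshold=0.15):
    """Flag inrush when the 2nd-harmonic content is large relative to the
    fundamental (illustrative threshold)."""
    h1 = harmonic_magnitude(samples, 1, samples_per_cycle)
    h2 = harmonic_magnitude(samples, 2, samples_per_cycle)
    return h2 / h1 > threshold
```

A fault current dominated by the fundamental passes the test quietly, while a waveform with substantial second harmonic is flagged as inrush.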
9.4.2 Implementation Impacts Causing Misleading Behavior of Differential Currents Current differential relaying for power transformer applications may be affected by practical implementation constraints.
FIGURE 9.13 Magnetizing Inrush Affecting Primary Current (waveform of current versus time)
FIGURE 9.14 Phase Mismatch (power transformer with primary-side current Iaprim, secondary-side current Iasec, and relay-circuit currents (Ia − Ib)prim and (Ia − Ib)sec)
One of the common problems is to have a mismatch between ratios of the instrument transformers located at the two power transformer terminals. This is called a ratio mismatch, and it is corrected by selecting appropriate taps on the auxiliary transformers located at the inputs of the transformer differential relay. Yet another obstacle may be a phase mismatch, where the instrument transformer connection may cause a phase shift between the two currents seen at the transformer terminals. This is because the connection of the power transformer may introduce a phase shift, and if the instrument transformer does not correct for this, a phase mismatch will occur at the terminals of the instrument transformer. The phase mismatch is illustrated in Figure 9.14, where the currents in the relaying circuit are not correctly selected. The mismatch can be avoided if the instrument transformer at the Y side of the power transformer is of a D type. Besides the mentioned constraints, other constraints include the mismatch due to a changing tap position on the load tap changer as well as the mismatch caused by the errors in current transformers located at two terminals.
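The (Ia − Ib) combinations indicated in Figure 9.14 can be illustrated numerically: forming a delta-style difference of two wye-side phase currents reproduces the √3 magnitude scaling and the 30° phase advance that the connection compensation must account for. The phasor values below are illustrative, balanced quantities only:

```python
import cmath
import math

def delta_combination(ia, ib):
    """Replicate a delta-side line current from two wye-side phase currents."""
    return ia - ib

a = cmath.exp(2j * math.pi / 3)   # 120-degree rotation operator
ia, ib = 1 + 0j, a**2             # balanced set: Ib lags Ia by 120 degrees
i_comp = delta_combination(ia, ib)
# |Ia - Ib| = sqrt(3)*|Ia|, advanced by 30 degrees: the classic delta shift
```

If the instrument transformer connection does not perform this compensation, the two relay currents differ by exactly this factor and angle, producing the phase mismatch described above.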
9.4.3 Current Differential Relaying Solutions The straightforward solution for differential relaying is to take the difference of currents I1 and I2 at the two ends and compare it to a threshold IT, as shown in the equation:

|I1 − I2| ≥ IT.    (9.5)

This solution will have a problem accommodating an error due to a mismatch discussed earlier. Hence, a different equation may be used to make sure that a higher error is allowed for higher current levels:

|I1 − I2| ≥ |I1 + I2|/2.    (9.6)

Finally, to distinguish the case of the inrush condition mentioned earlier, a harmonic restraint scheme is used, as represented by the equation:

|I1 − I2| ≥ k |I1 + I2|/2.    (9.7)
This type of operating characteristic will recognize that the current difference is caused by an event that is not an internal fault, and it will block the relay from operating. The criterion for recognizing a nonfault event is the presence of a particular harmonic content in the differential current. This knowledge is used to restrain the relay operation by relating the factor k to the presence of the harmonic content, hence the ‘‘harmonic restraint’’ terminology.
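Equations 9.5 through 9.7 can be sketched directly as magnitude comparisons on phasor currents. The restraint form |I1 + I2|/2 follows the equations above, with the convention that under through (load) current the two end currents are equal (I1 ≈ I2) and an internal fault drives them apart; all thresholds are illustrative:

```python
def plain_differential(i1, i2, i_threshold):
    """Equation 9.5: fixed-threshold differential."""
    return abs(i1 - i2) >= i_threshold

def percentage_differential(i1, i2):
    """Equation 9.6: restraint grows with the through current."""
    return abs(i1 - i2) >= abs(i1 + i2) / 2

def harmonic_restraint_differential(i1, i2, k):
    """Equation 9.7: k grows with harmonic content, raising the restraint
    during inrush so the relay does not trip on a nonfault event."""
    return abs(i1 - i2) >= k * abs(i1 + i2) / 2
```

With equal end currents the percentage element stays restrained, while a reversal of one current (an internal fault) operates it; a large k blocks operation even for a sizable mismatch.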
9.5 Protection of Synchronous Generators Synchronous generators are commonly used in high-voltage power systems to generate electric power. They are also protected using the current differential relaying principle. In addition, the generators require a number of other special operating conditions to be met. This leads to the use of a number of other relaying principles. This section reviews some basic requirements for generator protection and discusses the basic relaying principles used. A much more comprehensive coverage of the subject may be found in an IEEE Tutorial (1995).
9.5.1 Requirements for Synchronous Generator Protection Generators need to be protected from the internal faults as well as the abnormal operating conditions. Since generators consist of two parts, namely the stator and the rotor, protection of both is required. The stator is protected from both phase and ground faults, while the rotor is protected against ground faults and loss of field excitation. Due to the particular conditions required for the synchronous machine to operate, a number of operating conditions that represent either a power system disturbance or operational hazard need to be avoided. The conditions associated with network disturbances are overvoltage or undervoltage, unbalanced currents, network
frequency deviation, and subsynchronous oscillations. The conditions of hazardous operation are loss of prime mover (better known as generator motoring), inadvertent energization causing a nonsynchronized connection, overload, out-of-step operation or loss of synchronism, and operation at unallowed frequencies.
9.5.2 Protection Principles Used for Synchronous Generators The current differential protection principle is most commonly used to protect against phase faults on the stator, which are the most common faults. Other conditions require other principles to be used. The loss of field and the ground protection of the rotor are quite complex and depend on the type of the grounding and current sensing arrangement used. This subject is well beyond the basic considerations and is treated in a variety of specialized literature (IEEE, 1995). Protection from abnormal generator operating conditions requires the use of relaying principles based on detection of the changes in voltage, current, power, or frequency. A reverse power relay is used to protect against loss of a prime mover, known as generator motoring, which is a dangerous condition since it can cause damage to the turbine and turbine blades. In addition, synchronous generators should not be subjected to an overvoltage. With normal operation near the knee of the iron saturation curve, small overvoltages result in excessive flux densities and abnormal flux patterns, which can cause extensive structural damage in the machine. A relaying principle based on the ratio between voltage and frequency, called a volt-per-hertz principle, is used to detect this condition. An inadvertent connection of the generator to the power system not meeting the synchronization requirements can also cause damage. The overcurrent relaying principle in combination with the reverse power principle is used to detect such conditions. The overload conditions as well as over- and under-frequency operations can cause damage from overheating, and thermal relays in combination with frequency relays are used. Undervoltage and overvoltage protections are used for detecting loss of synchronism and overvoltage conditions.
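The volt-per-hertz principle reduces to monitoring a per-unit ratio. A minimal sketch, with the 1.05 per-unit limit as an illustrative (not standard-mandated) setting:

```python
def volts_per_hertz_pickup(v_pu, f_pu, limit=1.05):
    """Flag overexcitation when the per-unit V/Hz ratio exceeds its limit.
    Overfluxing occurs either from high voltage or from low frequency."""
    return (v_pu / f_pu) > limit
```

Note that the element also picks up at nominal voltage if the frequency drops far enough, which is exactly why the ratio, rather than voltage alone, is monitored.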
9.6 Bus Protection Protecting substation busses is a very important task because operation of the entire substation depends on availability of the busses. Bus faults are rare, but in open-air substations they occasionally happen. They have to be cleared very fast due to the high fault-current flow that can damage the bus. This section briefly discusses the requirements and basic principles used for bus protection. Detailed treatment of the subject is given in several classical references (Horowitz and Phadke, 1992; Blackburn, 1998; Ungrad et al., 1995).
Mladen Kezunovic
9.6.1 Requirements for Bus Protection High-voltage substations typically have at least two busses: one at one voltage level and the other at a different voltage level, with power transformer(s) connecting them. In high-voltage substations, the breaker-and-a-half arrangement, shown earlier in Figure 9.3, is used to provide a redundant bus connection at each voltage level. In addition, in some substation arrangements, there is a provision for separating one portion of the bus from another, allowing for independent operation of the two segments. All of those configurations are important when defining the bus protection requirements (Blackburn, 1998). The first and foremost requirement for bus protection is the speed of relay operation. There are several important reasons for this, all relating to the fact that the fault currents are quite high. First, the current transformers used to measure the currents may get saturated; hence, a fast operation will allow for the relaying decision to be made before this happens. Next, due to the high currents, the equipment damage caused by a sustained fault may be quite severe. Hence, the need to accomplish an isolation of the faulted bus from the rest of the system in the shortest time possible is paramount. Last, but not least, due to the various possibilities in reconfiguring busses, it is very important that the bus protection scheme is developed and implemented in a flexible way. It needs to allow for various parts of the bus to be isolated for maintenance purposes related to the breakers on the connected lines, and yet the protection for the rest of the bus needs to stay intact.
9.6.2 Protection Principles Used for Bus Protection The most common bus protection principle is the current differential approach. All connections to the bus are monitored through current transformers to detect current imbalance. If an internal bus fault occurs, the balance between incoming and outgoing currents is drastically disturbed, and that becomes a criterion for tripping the bus breakers and isolating the bus from the rest of the system. This relaying principle would be very simple to implement if there were no problems with CT saturation. Due to high fault currents, one of the CTs may see extraordinarily high currents during a close-in external fault. This may cause the CT to saturate, making it very difficult to distinguish whether the CT was saturated by high currents from an internal or an external fault. If air-gap or air-core CTs are used, the saturation may not be an issue, and the current differential relaying is easily applied. This solution is more costly and is used far less frequently than the solution using standard iron-core CTs. To cope with the CT saturation, two types of relaying solutions are most commonly used (Blackburn, 1998). The first one is a multirestraint current differential, and the other one is a high-impedance voltage differential. The multirestraint
current differential scheme provides the restraint winding connection to each circuit that is a major source of the fault current. These schemes are designed to restrain correctly for heavy faults just outside the differential zone, with maximum offset current, as long as the CTs do not saturate for the maximum symmetrical current. These schemes are more difficult to apply and require use of auxiliary CTs to match the current transformer ratios. The high-impedance voltage differential principle uses the property that CTs, when loaded with high impedance, will be forced to take the error differential current, and, hence, this current will not flow through the relay operating coil. This principle translates the current differential sensitivity problem into a voltage differential sensitivity problem for distinguishing close-in external faults from bus faults. The voltage differential scheme is easier to implement since the worst-case voltage for external faults can be determined much more precisely knowing the open-circuit voltage of the CT.
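The imbalance criterion underlying both bus differential solutions can be sketched as a Kirchhoff current sum over all CT secondaries: a residual beyond a pickup indicates an internal bus fault. The sign convention (currents into the bus positive) and the pickup value are illustrative:

```python
def bus_differential_trip(feeder_currents, pickup=0.5):
    """Trip when current into the bus does not balance current out of it.
    feeder_currents: per-feeder CT secondary currents, into-bus positive."""
    imbalance = abs(sum(feeder_currents))
    return imbalance > pickup
```

For a through or external fault the sum is near zero (all current entering also leaves), while for an internal fault every feeder contributes into the bus and the residual is large. CT saturation, as discussed above, corrupts exactly this sum and motivates the multirestraint and high-impedance refinements.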
9.7 Protection of Induction Motors Induction motors are a very common type of load. The main behavioral patterns of the induction motor are representative of the patterns of many loads; their operation depends on the conditions of the network that is supplying the power as well as the loading conditions. This section gives a brief discussion on the most important relaying requirements as well as the most common relaying principles that apply to the induction motor protection. Further details can be found in (Horowitz and Phadke, 1992; Blackburn, 1998; Ungrad et al., 1995).
9.7.1 Requirements for Induction Motor Protection The requirements may be divided into three categories: protecting the motor from faults, avoiding thermal damage, and sustaining abnormal operating conditions. The protection from faults has to detect both phase and ground faults. The thermal damage may come from an overload or a locked rotor and has to be detected by correlating the rise in the temperature to the occurrence of the excessive currents. The abnormal operating conditions that need to be detected are unbalanced operation, undervoltage or overvoltage, reversed phases, high-speed reclosing, unusual ambient temperature or voltage, and incomplete starting sequence. The protection principle used has to differentiate the causes of problems that may result in damage to the motor. Once the causes are detected and linked to potential problems, the motor needs to be quickly disconnected to avoid any damage.
9.7.2 Protection Principles Used for Induction Motor Protection The most common relaying principles used for induction motor protection are the overcurrent protection and thermal protection. The overcurrent protection needs to be properly set to differentiate between various changes in the currents caused either by faults or excessive starting conditions. A variety of current-based principles can be used to implement different protection tasks that fit the motor design properties. Both the phase and ground overcurrents, as well as the differential current relays, are commonly used for detecting the phase and ground faults. The thermal relays are available in several forms: a ‘‘replica’’ type, where the motor-heating characteristics are approximated closely with a bimetallic element behavior; resistance temperature detectors embedded in the motor winding; and relays that operate on a combination of current and temperature changes.
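A ‘‘replica’’-type thermal element can be sketched as a discrete heating/cooling integrator driven by the square of the per-unit current, mimicking the bimetallic behavior described above. All constants here are illustrative, not motor data:

```python
def thermal_replica(current_samples_pu, dt, heat_gain=1.0, cool_rate=0.1,
                    trip_level=10.0):
    """Integrate I^2 heating minus first-order cooling over sampled current.
    Returns (tripped, final thermal state). All parameters illustrative."""
    theta = 0.0
    for i_pu in current_samples_pu:
        theta += dt * (heat_gain * i_pu**2 - cool_rate * theta)
        if theta >= trip_level:
            return True, theta
    return False, theta
```

A locked-rotor current of several per-unit drives the accumulator to the trip level within a few samples, while rated current leaves it far below the limit, which is the inverse-time behavior a replica relay approximates.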
References
Blackburn, J.L. (1993). Symmetrical components for power systems engineering. Marcel Dekker.
Blackburn, J.L. (1998). Protective relaying—Principles and applications. Marcel Dekker.
Flurscheim, C.H. (1985). Power circuit breaker theory and design. Peter Peregrinus.
Horowitz, S.H., and Phadke, A.G. (1992). Power system relaying. Research Studies Press.
IEEE Standard C37.102-1995. IEEE guide for generator protection.
IEEE Standard C37.110-1996. IEEE guide for the application of current transformers used for protective relaying.
IEEE Standard C37.113-1999. IEEE guide for protective relay applications of transmission lines.
Lewis, W.A., and Tippett, L.S. (1947). Fundamental basis for distance relaying on three-phase systems. AIEE Transactions 66, 694–708.
Ungrad, H., Winkler, W., and Wiszniewski, A. (1995). Protection techniques in electrical energy systems. Marcel Dekker.
WG D5, Line Protection Subcommittee, Power System Relaying Committee. (1997a). Proposed statistical performance measures for microprocessor-based transmission-line protective relays: Part I—Explanation of the statistics. IEEE Transactions on Power Delivery 12(1).
WG D5, Line Protection Subcommittee, Power System Relaying Committee. (1997b). Proposed statistical performance measures for microprocessor-based transmission-line protective relays: Part II—Collection and uses of data. IEEE Transactions on Power Delivery 12(1).
WG of the Relay Input Sources Subcommittee, Power System Relaying Committee. (1981). Transient response of coupling capacitor voltage transformers. IEEE Transactions on Power Apparatus and Systems PAS-100(2).
10 Electric Power Quality Gerald T. Heydt Department of Electrical Engineering, Arizona State University, Tempe, Arizona, USA
10.1 Definition ......................................................................................... 805
10.2 Types of Disturbances ......................................................................... 805
10.3 Measurement of Electric Power Quality ................................................. 805
10.4 Instrumentation Considerations ........................................................... 808
     10.4.1 Advanced Instrumentation
10.5 Analysis Techniques ............................................................................ 809
10.6 Nomenclature .................................................................................... 809
References ......................................................................................... 810
10.1 Definition There is no single acceptable definition of the term electric power quality. The term generally applies to the goodness of the electric power supply, its voltage regulation, its frequency, voltage wave shape, current wave shape, level of impulses and noise, and the absence of momentary outages. Some engineers include reliability considerations in electric power quality studies, some consider electromagnetic compatibility, and some perform generation supply studies. Narrower definitions of electric power quality generally focus on the bus voltage wave shape. Electric power quality studies generally span the entire electrical system, but the main points of emphasis are in the primary and secondary distribution systems. This is the case because loads generally cause distortion of bus voltage wave shape, and this distortion is mainly noted near the source of the difficulty—namely near the load—in the secondary distribution system. Because the primary distribution system is closely coupled to the secondary system and the load, which is sometimes served directly by the primary distribution system, the primary network is also a point of focus. Transmission and generation systems are also studied in certain types of power quality evaluation and analyses. Table 10.1 shows the main points in electric power quality.
10.2 Types of Disturbances There are two main classes of electric power quality disturbances: the steady-state disturbance that lasts for a long period
of time (and is often periodic) and the transient. The latter generally exists for a few milliseconds, and then decays to zero. Controversy surrounds which is more important, and a case could be made for either type of disturbance being more problematic as far as cost. The steady-state type of disturbance is generally less evident in its appearance, often at lower voltage and current levels, and less harmful to the operation of the system. Because steady-state phenomena last for a long period of time, the integrated effects of active power losses (low or high voltage) and inaccurate timing signals may be quite costly. Transient effects tend to be higher level in amplitude and are often quite apparent in harmful effects as well as occasionally spectacular in cost (e.g., causing loss of a manufactured product or causing long-term outages). The cost of transient power quality problems has been estimated in the 100 million to 3 billion dollar range annually in the United States. Table 10.2 lists some types of steady-state and transient power quality problems. The transient problems are often termed events.
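Distinguishing a transient event from normal operation often starts from per-cycle RMS voltage. A minimal sag-detection sketch, using the commonly quoted (but here merely illustrative) 0.9 per-unit threshold:

```python
def detect_sag(rms_per_cycle_pu, threshold=0.9):
    """Return (start_cycle, end_cycle) of the first sag, or None if no cycle
    drops below the threshold. Input: per-cycle RMS voltage in per-unit."""
    start = None
    for k, v in enumerate(rms_per_cycle_pu):
        if v < threshold and start is None:
            start = k                         # sag begins
        elif v >= threshold and start is not None:
            return (start, k)                 # sag ends at cycle k
    return (start, len(rms_per_cycle_pu)) if start is not None else None
```

The duration recovered this way (a few cycles for a sag, sustained for a steady-state problem) is what separates the "event" category from the steady-state category discussed above.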
10.3 Measurement of Electric Power Quality Many indices have been developed for the specification of electric power quality. A few are listed in Table 10.3. The most widely used index of power quality is the total harmonic distortion, which is an index that compares the intensity of harmonic signals in voltages and currents to the fundamental component. The main indices are as follows:
TABLE 10.1 Electric Power Quality Considerations

Consideration | Focus | Comments
Region of analysis | Distribution systems; Points of utilization of electric power; Transmission systems | The main region of analysis is the primary and secondary distribution system. This is where nonsinusoidal waves are most prevalent and of greatest amplitude.
Types of problems | Electromagnetic compatibility; Harmonics; Momentary outages and low voltages (sags) | There is a controversy concerning whether momentary low voltages (sags) or harmonics are the most problematic in terms of cost of the problem.
Analysis methods | Circuit analysis programs; Harmonic power flow studies; Focused studies on particular events using circuit theory to obtain solution; Pspice | A range of commercial software is available for both smaller and larger studies. Many software tools are linked to elaborate graphics. Most methods are data intensive and approximate.
Mitigation techniques | Filters; Capacitors; Problematic loads; Higher pulse order (e.g., twelve-pulse rather than six-pulse systems) | Mitigation techniques are often customized to the particular problem and application. In general, higher pulse order systems give much less problem than single-phase and six-pulse, three-phase systems.

TABLE 10.2 Power Quality Problems
Type | Problem | Appearance | Causes
Transient system problems | Impulses (surges, pulses) | High-voltage impulse for a short time, typically in the microsecond to 1 ms range | Lightning; Switching surges; Rejection of inductive loads
 | Momentary outages | Collapse of ac supply voltage for up to a few (e.g., 20) cycles | Circuit breaker operations
 | Phase shift | Sinusoidal supply voltage proportional to a sine function whose phase angle suddenly shifts by an angle φ | Faults
 | Sags (low voltage) | Momentary low voltage caused by faults in the supply | Faults
 | Ringing | Damped sinusoidal voltages impressed on the ac wave | Capacitor switching
Steady-state system problems | Harmonics | Integer multiples of the ac supply frequency (e.g., 60 Hz) of (usually) lower amplitude signals impressed on the power frequency wave | Nonlinear loads; Adjustable speed drives; Rectifiers; Inverters; Fluorescent lamps
 | Voltage notches | Momentary low voltages of duration much shorter than one cycle caused by commutated loads | Adjustable speed drives
 | Noise | Noise impressed on the power frequency | Static discharge and corona; Arc furnaces
 | Radio frequency | High-frequency (e.g., f > 500 kHz) sinusoidal signals of typically low amplitude impressed on the power frequency | Radio transmitters
 | Interharmonics and fractional harmonics | Components of noninteger multiples of the power frequency | Cycloconverters; Kramer drives; Certain types of adjustable speed drives
.
Power factor: The relationship of power factor to IEEE Standard 519-1992, especially with regard to the failure of the power factor to register the harmful and undesirable effects of high-frequency harmonics in power distribution systems, is well known. Many electric utilities place limits on the power factor of consumer loads, but there may not be a clear definition of power factor for the nonsinusoidal case. In particular, many electric utilities
10 Electric Power Quality
do not distinguish between the displacement power factor (i.e., the cosine of the angle between the voltage and current phasors at the fundamental frequency) and the power factor defined in Table 10.4. The term true power factor has been used by some to refer to $P/(|V||I|)$, but IEEE Standard 100 uses the term power factor for this ratio, and this simpler term is used in this chapter.
Total harmonic distortion: The total harmonic distortion (THD) is perhaps the most widespread power quality index, and many electric utilities have adopted a THD-based measure of the limits of customer load currents.

TABLE 10.3 Voltage Measurement

Potential transformer (PT)
  Configuration: Energized at full line potential on the high side and typically in the 100-V range on the low side
  Application considerations: Bandwidth of the PT; safety (e.g., upon failure of the PT); turns ratio, accuracy
Voltmeters
  Configuration: Energized directly or through a probe
  Application considerations: Isolation from the line; bandwidth; true RMS reading questioned; accuracy
Voltage divider
  Configuration: Energized at full line potential at the upper resistor; resistive voltage divider used to give low voltage at the low end of the string
  Application considerations: Safety (e.g., opening of low end of string); heat loss; bandwidth
Capacitively coupled voltage transformer
  Configuration: Capacitive voltage divider energized at full line potential at upper end; instrument transformer used to obtain some isolation
  Application considerations: Resonance of the capacitors and transformer; accuracy; frequency dependence
K-factor: The K-factor is used to derate transformers that are expected to carry nonsinusoidal load currents; an alternative calculation of this index in the time domain has also been shown. Flicker factor: The flicker factor has been used in connection with electric arc furnaces to quantify the load impact on the power system.
Perhaps the main application of power quality indices has been in guides, recommended practices, and standards. As an example, the IEEE Standard 519-1992 contains an often cited limit to harmonic load currents and voltages. The ANSI
TABLE 10.4 Common Power Quality Indices

Total harmonic distortion (THD)
  Definition: $\sqrt{\sum_{i=2}^{\infty} I_i^2}\,/\,I_1$
  Main applications: General purpose; standards
Power factor (PF)
  Definition: $P_{tot}/(|V_{rms}||I_{rms}|)$
  Main applications: Potentially in revenue metering
Telephone influence factor (TIF)
  Definition: $\sqrt{\sum_{i=2}^{\infty} w_i^2 I_i^2}\,/\,I_{rms}$
  Main applications: Audio circuit interference
C-message index
  Definition: $\sqrt{\sum_{i=2}^{\infty} c_i^2 I_i^2}\,/\,I_{rms}$
  Main applications: Communications interference
IT product
  Definition: $\sqrt{\sum_{i=1}^{\infty} w_i^2 I_i^2}$
  Main applications: Audio circuit interference, shunt capacitor stress
VT product
  Definition: $\sqrt{\sum_{i=1}^{\infty} w_i^2 V_i^2}$
  Main applications: Voltage distortion index
K factor
  Definition: $\left(\sum_{h=1}^{\infty} h^2 I_h^2\right) / \sum_{h=1}^{\infty} I_h^2$
  Main applications: Transformer derating
Crest factor
  Definition: $V_{peak}/V_{rms}$
  Main applications: Dielectric stress
Unbalance factor
  Definition: $|V_-|/|V_+|$
  Main applications: Three-phase circuit balance
Flicker factor
  Definition: $\Delta V/|V|$
  Main applications: Incandescent lamp operation, bus voltage regulation, sufficiency of short circuit capacity
Total demand distortion (TDD)
  Definition: THD × (fundamental current/circuit rating)
  Main applications: In IEEE Standard 519
Gerald T. Heydt
Standard 368 contains an often-quoted guide on limits of the IT product. Underwriters Laboratories applies the K-factor to the specification of transformers that carry nonsinusoidal load currents.
10.4 Instrumentation Considerations

Because power quality is often stated in terms of voltages and currents, the main instrumentation needed to assess power quality relates to bus voltages and line and load currents. In terms of voltages, bus voltages are usually measured using potential transformers because isolation from the power circuit is desirable and because power system voltages are usually too high to measure directly. Typical potential transformers are capable of bringing circuit voltages (e.g., in the 440–7200 V range for distribution circuits and up to 40 kV for subtransmission circuits) to about 110 V. These potential transformers must have the proper bandwidth to "see" the desired voltages to be instrumented (e.g., harmonic voltages), they must have the proper dynamic range to allow measurement of the voltage, and they must have isolation suitable for safety. Some voltage measurement instruments are listed in Table 10.3.

Current instrumentation is analogous to voltage measurement, with the current transformer replacing the potential transformer. Main considerations are appropriateness of the current ratio in the current transformer, isolation for safety, loading with the proper current transformer burden, and bandwidth and dynamic range of the current transformer. Table 10.5 shows some of the basic instruments used in current instrumentation.

Both current and voltage transformers are available in several grades. General purpose transformers are not generally usable for power quality measurements because the bandwidth of these devices is often not much larger than 60 Hz (in 60-Hz systems). Harmonics may be attenuated by the current or potential transformer, and this adds intolerable error to the measurement. Relaying-grade current and potential transformers are usually not much better because of limited bandwidth. Revenue-meter-grade current and voltage transformers are generally not designed for high-bandwidth applications. Laboratory-grade transformers are usually the best choice. Modern power quality instruments generally have companion current and voltage transformers for use with a given instrument, and the manual for the instrument will contain the bandwidth measurement of the transformers. It is noted that for many voltage and current instruments used in power quality tests, current and voltage transformers are the most costly components of the instrumentation system.

Modern power quality assessment often involves more than voltage and current measurement. The following are also part of the test regimen in most cases:
- Event measurement (i.e., measurement of three-phase voltages and currents, plus neutral-ground voltage and neutral current versus time)
- On-board evaluation of harmonics (of voltages and currents, often plotted versus time)
- Measurement of active and reactive power
- Measurement of total harmonic distortion
- Measurement of active power loss in a system component
- Assessment of high-frequency effects
- Measurement of rise time
- Oscillograph capability (often written digitally to a disk for subsequent analysis and report writing)
- Energy measurement
Table 10.6 illustrates a few of these capabilities in commercial instruments in use for power quality assessment.
TABLE 10.5 Current Instrumentation

Current transformer (CT)
  Configuration: Placed around the conductor to be instrumented
  Application considerations: Bandwidth; safety; operation with correct CT burden
Resistive shunt for current measurement
  Configuration: Resistor in series with load to be instrumented
  Application considerations: No ohmic isolation; accuracy; heating of the shunt
Ammeter
  Configuration: Placed in series with circuit to be instrumented
  Application considerations: Accuracy; bandwidth; safety (no ohmic isolation for circuit)
Optical instruments
  Configuration: Rotation of plane of polarized light in a fiber-optic cable around or near the instrumented circuit
  Application considerations: Accuracy; vibration sensitivity; cost
Hall-effect device
  Configuration: Measurement of magnetic field near instrumented conductor
  Application considerations: Accuracy; bandwidth; vibration and mechanical placement sensitivity; linearity
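Two of the application considerations listed for Table 10.5 can be made concrete with a line of arithmetic each: shunt heating follows from $P = I^2 R$, and a CT scales the primary current by its turns ratio. All of the numeric values below are hypothetical, chosen only for illustration:

```python
# Hypothetical 1-milliohm shunt carrying 200 A: the I^2*R loss is the
# "heating of the shunt" consideration in Table 10.5.
shunt_ohms = 0.001
load_amps = 200.0
heat_watts = load_amps ** 2 * shunt_ohms      # 40 W dissipated in the shunt

# A CT with an assumed 400:5 ratio brings 200 A down to instrument range.
ct_secondary_amps = load_amps * 5.0 / 400.0   # 2.5 A on the secondary

print(heat_watts, ct_secondary_amps)
```

The 40 W figure shows why shunts are impractical at high currents, while the CT both scales the current and (unlike the shunt) provides ohmic isolation, provided it drives the correct burden.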
TABLE 10.6 Power Quality Assessment Instruments

Digital fault recorder (DFR)
  Typical capability: Measures three-phase voltage and current; performs basic analysis on these data; measures digital output
  Bandwidth: Typically to about 3000 Hz
  Dynamic range: At least 80 dB, possibly 100 dB
Power quality node
  Typical capability: Commercialized event recorder for distribution system instrumentation; field rugged; supports telephone interrogation
  Bandwidth: Typically to about 3000 Hz
  Dynamic range: About 80 dB
Event recorder, digital oscilloscope
  Typical capability: Oscilloscope functions, digital readout, basic analysis functions, signal triggers, recording
  Bandwidth: Varies widely, possibly into the megahertz range
  Dynamic range: About 100 dB
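The dynamic-range figures in Table 10.6 are decibel ratios of the largest to smallest resolvable amplitude; 80 dB corresponds to an amplitude ratio of $10^{80/20} = 10{,}000$. The conversion is a one-liner:

```python
def db_to_amplitude_ratio(db):
    # Amplitude (voltage or current) ratio corresponding to a dB figure.
    return 10 ** (db / 20.0)

print(db_to_amplitude_ratio(80))    # 10000.0
print(db_to_amplitude_ratio(100))   # 100000.0
```

So an instrument with 80 dB of dynamic range can resolve signals spanning four orders of magnitude, and 100 dB spans five.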
10.4.1 Advanced Instrumentation

In the preceding section, several basic voltage and current measurement instruments were discussed. Because power quality measurements are often demanding in bandwidth and dynamic range, several types of advanced instrumentation techniques have been studied and developed for this application. A few are listed in Table 10.7.
10.5 Analysis Techniques

The main analytical techniques for power quality studies are the following:
- Power flow studies
- Injection current analysis
- Simulation methods, such as Pspice and EMTP
- Direct circuit analysis
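Injection current analysis, in its simplest form, treats each distorting load as a harmonic current source injected into the network, so the harmonic bus voltage follows from the system impedance at that harmonic, $V_h = Z(h)\,I_h$. A minimal single-bus sketch of that idea, with all impedance and current values hypothetical:

```python
# System impedance seen at the bus, modeled as R + j*h*X at harmonic h
# (the inductive reactance scales with harmonic order).
R, X = 0.05, 0.30   # ohms at the fundamental (assumed values)

def harmonic_bus_voltage(h, injected_amps):
    """Harmonic bus voltage V_h = Z(h) * I_h for an injected harmonic current."""
    z_h = complex(R, h * X)
    return z_h * injected_amps

# A rectifier load injecting 20 A at the 5th harmonic and 14 A at the 7th:
for h, i_h in [(5, 20.0), (7, 14.0)]:
    v_h = harmonic_bus_voltage(h, i_h)
    print(h, round(abs(v_h), 2))   # harmonic order, |V_h| in volts
```

A full harmonic power flow study does the same calculation for every bus and branch of the network simultaneously; this sketch shows only the propagation principle for one node.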
Power flow studies are software tools that rely on steady-state operation of the system to be studied. These software tools use input data such as load type, load level (P, Q), and circuit data (e.g., impedance, connection diagram). The output is typically all nodal voltages and all line currents at all frequencies. Injection current analysis is the analysis of how currents injected into a system propagate. This is a simplified form of power flow study.

Power quality engineering may be taken to be a specialized branch of signal analysis. In this regard, there are many useful formulas that may be applied to solve problems. One particularly useful set of formulas is the ideal rectifier set of formulas shown in Table 10.8. Rectifier loads are a main source of harmonic load currents. The ideal rectifier formulas give the various interrelationships of voltage and current parameters for the idealized case of no rectifier power loss and very high dc-side inductance. These formulas, while useful, should be used with caution because the assumptions made in arriving at them are all idealizations. Note that the displacement factor is the cosine of the angle between the fundamental voltage and the fundamental current. The power factor is the ratio $P/(V_{rms} I_{rms})$.
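The ideal rectifier formulas of Table 10.8 are easy to evaluate numerically. The sketch below applies the line-commutated, six-pulse entries, $V_{dc} = (3\sqrt{2}/\pi)V_{LL}$ and the commutation correction $-(3\omega L_s/\pi)I_{dc}$, to an assumed 480-V, 60-Hz supply; the voltage, current, and inductance values are illustrative only:

```python
import math

V_LL = 480.0                 # line-to-line rms supply voltage (assumed)
I_dc = 100.0                 # dc-side current in amperes (assumed)
omega = 2 * math.pi * 60.0   # supply radian frequency, 60-Hz system
L_s = 0.0001                 # supply inductance in henries (assumed)

# Zero supply inductance: Vdc = (3*sqrt(2)/pi) * V_LL   (about 648 V here)
v_dc_ideal = 3 * math.sqrt(2) / math.pi * V_LL

# Nonzero supply inductance: commutation subtracts (3*omega*Ls/pi)*Idc
v_dc_loaded = v_dc_ideal - 3 * omega * L_s / math.pi * I_dc

# Ideal six-pulse supply currents: Is,rms = sqrt(2/3)*Idc, I1 = (sqrt(6)/pi)*Idc
i_s_rms = math.sqrt(2.0 / 3.0) * I_dc
i_fundamental = math.sqrt(6.0) / math.pi * I_dc

print(round(v_dc_ideal, 1), round(v_dc_loaded, 1))
print(round(i_s_rms, 1), round(i_fundamental, 1))
```

Note how small the commutation voltage drop is for modest supply inductance; the idealizations in the table (lossless rectifier, infinite dc inductance) are what keep these expressions this simple.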
10.6 Nomenclature

c_h: C-message weight
CBEMA: Computer Business Equipment Manufacturers Association
CT: current transformer
DF: displacement factor
DPF: displacement power factor
EPRI: Electric Power Research Institute
IEC: International Electrotechnical Commission
IT: current-telephone influence factor product, read as IT product
I_h, V_h: harmonic components of current i(t) and voltage v(t)
K: K-factor
kIT: thousands of IT units
kVT: thousands of VT units
TABLE 10.7 Advanced and Unconventional Instrumentation for Electric Power Quality Assessment

Pockels effect
  Basis: Rotation of the plane of polarized light in a medium due to electric field strength
  Application: Wideband measurement of voltage
Faraday rotation effect
  Basis: Rotation of the plane of polarized light in a medium due to magnetic field strength
  Application: Wideband measurement of current
Hall effect
  Basis: Variation of the resistance of a material in a magnetic field
  Application: Wideband measurement of current
Global Positioning System (GPS) applications
  Basis: Triangulation from four or more artificial earth satellites to obtain the time and position of a receiver
  Application: Accurate time tagging of measurements, phasor measurements
TABLE 10.8 The Ideal Rectifier Formulas

Single-phase bridge rectifiers

Line commutated, infinite dc inductance, zero supply inductance:
$V_{dc} = \frac{2\sqrt{2}}{\pi} V_{ac}$
$I_{ac,\,fundamental} = \frac{2\sqrt{2}}{\pi} I_{dc}$
$I_{supply,\,h} = I_{ac,\,fundamental}/h$
$\mathrm{THD}_I = 48.4\%$
Displacement factor: $\mathrm{DF} = 1.0$
Displacement power factor: $\mathrm{DPF} = \frac{2\sqrt{2}}{\pi}$
$P_{dc} = P_{ac} = \frac{2\sqrt{2}}{\pi} V_{s,\,rms} I_{dc}$

Line commutated, infinite dc inductance, nonzero supply inductance (commutation angle $u$):
$\cos(u) = 1 - \frac{2\omega L_s I_{dc}}{\sqrt{2}\,V_s}$
$V_{dc} = \frac{2\sqrt{2}}{\pi} V_s - \frac{2\omega L_s I_{dc}}{\pi}$
$\mathrm{DF} = \cos(u/2)$
$P = V_s I_{ac,\,fundamental}\,\mathrm{DF} = V_{dc} I_{dc}$

Forced commutated, infinite dc inductance, nonzero supply inductance (firing angle $\alpha$):
$\cos(\alpha + u) = \cos(\alpha) - \frac{2\omega L_s I_{dc}}{\sqrt{2}\,V_s}$
$\mathrm{DPF} = \cos\!\left(\alpha + \frac{u}{2}\right)$
$I_{supply,\,fundamental} = \dfrac{\frac{2\sqrt{2}}{\pi} V_s I_{dc}\cos(\alpha) - \frac{2\omega L_s I_{dc}^2}{\pi}}{V_s \cos\!\left(\alpha + \frac{u}{2}\right)}$
$V_{dc} = \frac{2\sqrt{2}}{\pi} V_s \cos(\alpha) - \frac{2\omega L_s I_{dc}}{\pi}$

Three-phase bridge rectifiers (six pulse)

Line commutated, infinite dc inductance, zero supply inductance:
$V_{dc} = \frac{3\sqrt{2}}{\pi} V_{LL}$
$I_{s,\,rms} = \sqrt{2/3}\, I_{dc}$
$I_{supply,\,fundamental} = \frac{\sqrt{6}}{\pi} I_{dc}$
$\mathrm{DF} = 1.00$, $\mathrm{DPF} = 3/\pi$
$\mathrm{THD}_I = 31.1\%$ (harmonics 5, 7, 11, ...)

Line commutated, infinite dc inductance, nonzero supply inductance:
$V_{dc} = \frac{3\sqrt{2}}{\pi} V_{LL} - \frac{3\omega L_s}{\pi} I_{dc}$
$\cos(u) = 1 - \frac{2\omega L_s I_{dc}}{\sqrt{2}\,V_{LL}}$
$\mathrm{DF} = \cos(u/2)$
$P = \sqrt{3}\, V_{LL}\, I_{supply,\,fundamental}\cos(u/2) = V_{dc} I_{dc}$

Forced commutated, infinite dc inductance, nonzero supply inductance:
$V_{dc} = \frac{3\sqrt{2}}{\pi} V_{LL}\cos(\alpha) - \frac{3\omega L_s}{\pi} I_{dc}$
$\cos(\alpha + u) = \cos(\alpha) - \frac{2\omega L_s I_{dc}}{\sqrt{2}\,V_{LL}}$
$\mathrm{DPF} = \cos\!\left(\alpha + \frac{u}{2}\right)$
P_tot: total active power (for all harmonics)
PCC: point of common coupling
PF: power factor
PT: potential transformer
SCR: short circuit ratio
rms: root mean square value, $\sqrt{\frac{1}{T}\int_T f^2(t)\,dt}$
T: period of a periodic wave; time horizon under study for an aperiodic wave
TDD: total demand distortion
THD: total harmonic distortion
TIF: telephone influence factor
V_+, V_-: positive and negative sequence components of a sinusoidal three-phase voltage
VT: voltage-telephone influence factor product, read as VT product
w_h: telephone influence factor weight
References

Arrillaga, J. (1985). Power system harmonics. New York: John Wiley & Sons.
Bollen, M.H.J. (2000). Understanding power quality problems: Voltage sags and interruptions. New York: IEEE.
Dugan, R.C., McGranaghan, M.F., and Beaty, H.W. (1996). Electrical power systems quality. New York: McGraw-Hill.
Heydt, G. (1995). Electric power quality. Scottsdale, AZ: Stars in a Circle.
IEEE. (1986). IEEE Standard C57.110-1986, IEEE recommended practice for establishing transformer capability when supplying nonsinusoidal load currents. New York.
IEEE. (1977). IEEE Standard 368-1977, IEEE recommended practice for measurement of electrical noise and harmonic filter performance of high-voltage direct-current systems. New York.
IEEE. (1992a). IEEE Standard 1100, IEEE recommended practice for powering and grounding sensitive electronic equipment. New York.
IEEE. (1992b). IEEE Standard 519-1992, IEEE recommended practices and requirements for harmonic control in electrical power systems. New York.
Kennedy, B.W. (2000). Power quality primer. New York: McGraw-Hill.
Porter, G., and Van Sciver, J.A. (1998). Power quality solutions: Case studies for troubleshooters. Lilburn, GA: Fairmont Press.
Power Quality Assurance Magazine, http://industryclick.com/magazine.asp?magazineid=286&siteid=13
Shepherd, W., and Zand, P. (1979). Energy flow and power factor in nonsinusoidal circuits. Cambridge: Cambridge University Press.
VII SIGNAL PROCESSING

Yih-Fang Huang
Department of Electrical Engineering, University of Notre Dame, Notre Dame, Indiana, USA
This section includes seven chapters that address various topics of signal processing, written by leading experts on the respective topics. Signal processing is one of the most important and rapidly growing fields in electrical engineering. Over the last few decades, it has played an important role in many different fields in science and engineering. Information is most effectively represented by electronic signals or optical signals. Both types of signals can then be processed conveniently by electronic devices and systems. In the second half of the twentieth century, the successes in semiconductor device fabrication and integrated circuit technologies continued to fuel the development of powerful signal processing tools that benefit further development of various technological fields. The first chapter of this section provides an overview of basic principles of signals and systems. It presents important fundamental concepts in transforming time-domain signals into frequency-domain signals (like the Fourier transform and z-transform and their properties) and linear shift-invariant systems and their properties. The presentation of Chapter 1 lays down a nice foundation for the discussion of digital filters, with some emphasis on finite impulse response (FIR) and infinite impulse response (IIR) filters. The FIR and IIR filters are the two most commonly used digital filters, and they are realizations of linear shift-invariant systems. Properties and applications of those filters are addressed. Chapters 3 to 5 present principles of the processing of various forms of signals, namely speech signals, image signals, and multimedia signals. Each form of signal has specific properties and applications that require distinctive processing techniques. Chapter 6 presents fundamental concepts of statistical signal processing, which is an important aspect of signal processing. It provides a brief summary of the principles of mathematical statistics on which signal detection and estimation are based. Signal detection and estimation are critical in extracting information from signals in communication and control systems. The section ends with a chapter that addresses issues related to implementation of signal processing algorithms. It is concerned with application-specific VLSI architecture, including programmable digital signal processors, and dedicated signal processors implemented with VLSI technology. It also provides a good survey of recent developments in VLSI signal processing technologies. On behalf of all the authors in this section, I thank you for your interest in signal processing and hope you enjoy reading these chapters.
1 Signals and Systems
Rashid Ansari and Lucia Valbonesi
Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA

1.1 Introduction
1.2 Signals
    1.2.1 Basic Signals . 1.2.2 Signal Classification
1.3 Systems
    1.3.1 Common System Properties . 1.3.2 Linear Shift-Invariant Systems . 1.3.3 Properties of LSI Systems
1.4 Analysis in Frequency Domain
    1.4.1 Properties of the Discrete-Time Fourier Transform . 1.4.2 Frequency Response of LSI System
1.5 The z-Transform and Laplace Transform
    1.5.1 z-Transform and Its Properties . 1.5.2 The Laplace Transform and Its Properties
1.6 Sampling and Quantization
    1.6.1 Sampling . 1.6.2 Signal Reconstruction . 1.6.3 Quantization
1.7 Discrete Fourier Transform
    1.7.1 Relation Between the DFT and the Samples of DTFT . 1.7.2 Linear and Circular Convolution . 1.7.3 Use of DFT
1.8 Summary
References
1.1 Introduction

Signal processing deals with operations performed on a signal with the intent of either modifying it in some desirable way or extracting useful information from it. Signal processing encompasses the theory, design, and practice of software and hardware for converting signals produced by different sources into a form that allows a more effective utilization of the signal content. The signals might be speech, music, image, video, multimedia data, biological signals, sensor data, telemetry, electrocardiograms, or seismic data. Typical objectives of processing include manipulation, enhancement, transmission, storage, visualization, detection, estimation, diagnosis, classification, segmentation, or interpretation. Signal processing finds applications in many areas, such as in telecommunications, consumer electronics, medical imaging and instrumentation, remote sensing, space research, oil exploration, radar, sonar, and biometrics (Anastassiou, 2001; Baillet et al., 2001; Ebrahimi et al., 2003; Hanzo et al., 2000; Molina et al., 2001; Ohm, 1999; Podilchuck and Delp, 2001; Strintzis and Malassiotis,
1999; Strobel et al., 2001; Wang et al., 2000; National Science Foundation, 1994). Mathematical foundations of signal processing are mainly drawn from transform theory, probability, statistical inference, optimization, and linear algebra. The field of signal processing has grown phenomenally in the past years. This growth has been fueled by theoretical and algorithmic developments, technological advances in hardware and software, and ever widening applications that have generated an insatiable demand for signal processing tools. Theory and practice in a number of fields have been greatly influenced by signal processing (e.g., communications, controls, acoustics, music, seismology, medicine, and biomedical engineering). Signal processing finds widespread use in everyday life, though the ubiquity of signal processing may not be evident to most users. Users may be familiar with JPEG, MPEG, and MP3 without recognizing that these tools are rooted in signal processing (Ansari and Memon, 2000; Wang et al., 2000). Recording of movies on a digital video disc (DVD), one of the most successful consumer electronics products, is made possible by compression techniques based on signal processing.
DVD video is usually encoded from digital studio master tapes to MPEG-2 format, where MPEG refers to Moving Pictures Expert Group. The encoding process uses lossy compression that removes information that is either redundant or not readily perceptible by the human eye. The ultimate objective in compression is to reduce the bit rate for storage and transmission of video. The signal processing used here consists of applying suitable transforms to the data. Typically, a reduction in bit-rate requirements by a factor of 30 to 40 permits acceptable video quality. The reduced memory requirements of audio coded in MP3 format have made it a popular vehicle for exchanging and storing music by individual users. MP3 refers to the MPEG Layer 3 audio compression format. Raw digital audio signals typically consist of 16-bit samples acquired at a sampling rate that is at least twice the bandwidth of the audio signal. In compact discs (CDs), a sampling rate of 44.1 kHz is used for each channel of stereo music. As a result, more than 1.4 Mbits is needed to represent just 1 second of music in raw CD recordings. By using MP3 audio coding, the storage requirement is reduced by a factor of 12 to 14 without significant loss in sound quality. This saving in storage is realized by using a signal processing tool, called a filter bank, in conjunction with exploiting the nature of perception of sound waves by the human ear. In some applications, both speech and visual information may be jointly processed (Chen, 2001; Strobel et al., 2001). This joint processing is referred to as multimedia or multimodal signal processing where data of multiple media like text, audio, images, and video are processed in an integrated manner, and interaction among the different media or modalities is investigated and exploited.
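The CD figures quoted above follow from a line of arithmetic: two channels of 16-bit samples at 44.1 kHz give 1,411,200 bits for each second of raw stereo audio, and an MP3 reduction factor of 12 brings that to under 120 kbit/s:

```python
# Raw CD bit rate = sampling_rate * bits_per_sample * channels
sampling_rate = 44_100        # Hz, per channel
bits_per_sample = 16
channels = 2

raw_bits_per_second = sampling_rate * bits_per_sample * channels
print(raw_bits_per_second)              # 1411200, i.e., just over 1.4 Mbits

# The chapter quotes an MP3 reduction factor of 12 to 14:
print(raw_bits_per_second // 12)        # 117600 bits per second
```

This is exactly the "more than 1.4 Mbits per second of music" stated in the text, and it is why uncompressed audio dominated early storage budgets.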
Person recognition and authentication systems that have recently acquired increased importance in security and surveillance applications can be made more reliable by combining inputs from different modalities, such as voice and visual information. Progress in the signal processing field has in recent years been largely driven by specific applications. Signal processing often figures in multidisciplinary research efforts. A noteworthy success in the application of signal processing has been in the fields of medicine and healthcare (Baillet et al., 2001; Ebrahimi et al., 2003). A number of imaging tools have been developed to aid diagnosis and recognition of disease. Computer tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) are being widely used. These imaging technologies are testimony to the success of multidisciplinary interaction among researchers in fields such as signal processing, medicine, biology, and physics. The multidisciplinary effort can be witnessed in the collaboration on new challenges in modeling and processing in the emerging modalities for medical imaging such as electro/magneto-encephalography (E/MEG) and electrical impedance tomography (EIT). In this chapter, we provide an overview of signals and systems that is largely based on signal representation framework, linear system theory, and transform theory. We begin by
describing some basic notions of discrete-time and continuous-time signals and systems. The emphasis is on discrete-time signals and systems, but notions for the two cases are presented in parallel. The exposition of the continuous-time case is deliberately brief because the similarity in the notions should be evident to the reader. The concepts of sampling and quantization that relate continuous- and discrete-time signals are briefly discussed toward the end of the chapter. Fundamentals of discrete-time systems and properties such as linearity, time-invariance, causality, and stability are reviewed first. Frequency domain analysis of signals and Fourier transforms are then presented followed by a discussion of frequency-selective filtering. Basics of z-transforms are covered next with a discussion of the z-domain interpretation of system properties such as causality and stability. Operations of sampling and quantization are then presented. Finally, the discrete Fourier transform is presented, and its importance in implementing signal processing algorithms is described. Our discussion is confined to the description and processing of deterministic signals.
1.2 Signals

Notions of signals and systems are encountered in a wide variety of disciplines. Signal processing formalizes these notions in a unified framework. It enables us to deal with diverse phenomena whose behavior is captured in variations of functions that characterize the phenomena. Signals are an abstraction for representing these variations. The signal may represent amplitude changes in speech or audio, concentration of a solute, biologically generated voltages, temperature inside an engine, pressure in a urinary bladder, noise in a communication receiver, and other such variables. A signal is a mathematical function of one or more independent variables that usually represent time and/or space. An example of a signal as a function of time is a speech signal that represents a variation of acoustic pressure with time. An example of a signal as a function of space is an image: it is a function of light intensity and color in two spatial coordinates. Some signals may depend both on space and time. Examples are video signals or the measurements of temperature in a room that depend on the three spatial coordinates defining each point in the room and that also vary with time. In this section, we focus on signals that are functions of a single independent variable, and we will generally refer to this variable as time. The independent variable of a signal can be either continuous or discrete, depending on the set of values for which the function is defined. In the first case, the signal is continuous-time, and the function is defined over a continuum of values of the independent variable; in the second case, the signal is discrete-time, and the function is defined over a countable set of values of the independent variable.
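The continuous/discrete distinction just described can be sketched in a few lines: a continuous-time signal is modeled as a function of a real variable t, sampling it at an interval T yields a discrete-time signal x[n] = x_c(nT), and rounding the sample amplitudes yields a digital signal. The particular sinusoid and sampling interval below are illustrative choices, not taken from the chapter:

```python
import math

def x_c(t):
    """A continuous-time signal, modeled as a function defined for every real t."""
    return 2.0 * math.sin(math.pi * t)

T = 0.25                      # sampling interval (illustrative)
n_values = list(range(-8, 9))

# Discrete-time (sampled) signal: x[n] = x_c(n*T)
x = [x_c(n * T) for n in n_values]

# Digital signal: the samples further quantized to integer amplitudes
x_digital = [round(v) for v in x]

# Sample at n = 1 (list index 9): x_c(0.25) = 2*sin(pi/4) = sqrt(2)
print(round(x[9], 3), x_digital[9])   # 1.414 1
```

Sampling discretizes the independent variable while quantization discretizes the amplitude; applying both produces the fourth category, a digital signal.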
.
.
Continuous-time or analog signals: A continuoustime or an analog signal xc is a mapping xc : R ! R , where the domain is R and the range is a subset of the real line. A continuous-time signal will be denoted as xc (t) and not xc , using the engineering convention of attaching the time variable as argument. This may be a source of confusion at times, but this aspect is clarified later. An example of a continuous-time signal is a speech signal: the acoustic pressure assumes a continuum of values, and pressure is defined at values of the independent variable, time, on the entire real line. Figure 1.1 shows an example of a continuous-time or an analog signal. Quantized continuous-time signals: A quantized continuous-time (boxcar) signal is a special case of a continuous-time signal, where the range of the signal is finite or countably infinite (i.e. it is a function defined over a continuum of values of the independent variable, while the range of values the signal assumes is discrete). An example of a quantized continuous-time (boxcar)
2 1.5 1 0.5 f(t)
To distinguish between continuous-time and discrete-time signals, we will denote the independent variable as t in the continuous-time case and n in the discrete-time case. The amplitude of a signal may also assume either continuous or discrete set of values. Thus, the signals can be classified into four categories depending on the nature of their independent and dependent variables. It is convenient to introduce some notation here: Z denotes the set of integers, C the set of complex numbers, and R the set of real numbers:
0 −0.5 −1 −1.5 −2 −5
.
3 2 1
f(t)
0 −1 −2 .
−4 −5 −5
−4
−3
−2
−1
0 t
1
2
3
4
5
FIGURE 1.1 Example of Continuous-Time or Analog Signal. Both the independent and the dependent variables assume a continuum of values.
−3
−2
−1
0 t
1
2
3
4
5
FIGURE 1.2 Example of Quantized Continuous-Time (or Boxcar) Signal. The independent variable is ‘‘continuous,’’ whereas the dependent variable is ‘‘discrete’’.
4
−3
−4
signal is the control signal of a switch. At every instant of time, the switch is either open or closed depending on the control signal: if it is positive, the switch is open, whereas if it is negative, the switch is closed. Figure 1.2 shows an example of the control signal. Discrete-time or sampled data signals: A discrete-time signal x is a mapping x: Z ! R , where the domain is Z and the range is a subset of the real line. A discrete-time signal will be denoted as x[n], and not x, again using the engineering convention of attaching the time variable as argument. Note that the independent variable is integer-valued. Sampling a continuous-time signal at equally spaced instants is commonly used to produce a discrete-time signal. The distance, T , in time between two adjacent sampling instants is called the sampling interval, and its inverse is called sampling frequency. Upon sampling, x[n] ¼ xc (nT ). Figure 1.3 shows an example of a discrete-time signal. This is the sampled but not necessarily quantized version of the signal in Figure 1.1. Digital signals: A digital signal is a special case of a discrete-time signal, where the range of the signal is finite or countably infinite (i.e., it is a function defined over integer values of the independent variable and the range of values the signal assumes is also discrete). Figure 1.4 shows an example of a digital signal. This is the same signal shown in Figure 1.1, sampled and quantized. In this case, we note that the function assumes only integer values (i.e., it is discrete-valued).
816    Rashid Ansari and Lucia Valbonesi

FIGURE 1.3 Example of a Sampled Signal. The independent variable is discrete, whereas the dependent variable is continuous.

FIGURE 1.4 Example of Digital Signal. Both the independent and the dependent variables are discrete.

1.2.1 Basic Signals

Some basic signals are worthy of special attention because they play a particularly important role in the representation of other more general signals, either because other signals can be written as their weighted sum or integral or because they have useful properties. Signals in this category include the unit sample or unit impulse, the unit step, and the exponential signal. Another signal that plays an important role is a time-shifted version of a given signal. In the discrete-time case, this may be viewed as a transformation of a signal x[n] given by y = D_{n0} x and defined by:

y[n] = D_{n0} x[n] = x[n − n0].    (1.1)

In the case of a continuous-time signal x_c(t), the time-shifted signal is defined as:

y_c(t) = D_{t0} x_c(t) = x_c(t − t0).    (1.2)

We will now define some common basic signals in continuous-time and discrete-time cases.

Unit Sample or Unit Impulse
The discrete unit sample or unit impulse is defined as:

δ[n] = { 1,  n = 0,
       { 0,  n ≠ 0.    (1.3)

The continuous-time counterpart of δ[n] is the unit impulse or the Dirac delta-function δ_c(t). This function is zero-valued for t ≠ 0, and ∫_{−∞}^{∞} δ_c(t)dt = 1. The discrete-time and the continuous-time unit impulses are shown in Figure 1.5. The unit impulse has the property that:

x[n0] = Σ_{n=−∞}^{∞} x[n]δ[n − n0],    (1.4a)

x_c(t0) = ∫_{−∞}^{∞} x_c(t)δ_c(t − t0)dt.    (1.4b)

Moreover, every signal can be written as a weighted sum or integral of unit impulses:

x[n] = Σ_{k=−∞}^{∞} x[k]δ[n − k].    (1.5a)

x_c(t) = ∫_{−∞}^{∞} x_c(τ)δ_c(t − τ)dτ.    (1.5b)

Unit Step
The discrete-time unit step is defined as:

u[n] = { 1,  n ≥ 0,
       { 0,  n < 0.    (1.6)

The continuous-time unit step is defined as:

u_c(t) = { 1,  t ≥ 0,
         { 0,  t < 0.    (1.7)

The discrete-time and the continuous-time unit steps are shown in Figure 1.6.
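A minimal numerical check of the definitions above; a finite-support signal stands in for the infinite sum in equation 1.5a:

```python
def delta(n):
    # Discrete unit impulse, equation 1.3.
    return 1 if n == 0 else 0

def u(n):
    # Discrete unit step, equation 1.6.
    return 1 if n >= 0 else 0

# A finite-support test signal on n = -3..3 (values chosen for illustration).
x = {n: n * n - 2 for n in range(-3, 4)}

def resynth(n):
    # Equation 1.5a: x[n] as a weighted sum of shifted unit impulses.
    return sum(x[k] * delta(n - k) for k in x)

ok = all(resynth(n) == x[n] for n in x)
print(ok)
```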
1 Signals and Systems    817

FIGURE 1.5 Discrete-Time and Continuous-Time Unit Impulses. (A) Discrete-Time Unit Impulse; (B) Continuous-Time Unit Impulse (area = 1).

FIGURE 1.6 Discrete-Time and Continuous-Time Unit Steps. (A) Discrete-Time Unit Step; (B) Continuous-Time Unit Step.
The unit impulse and the unit step functions are related by:

u[n] = Σ_{k=−∞}^{n} δ[k],    δ[n] = u[n] − u[n − 1].    (1.8a)

u_c(t) = ∫_{−∞}^{t} δ_c(τ)dτ,    δ_c(t) = (d/dt) u_c(t).    (1.8b)

Exponential Signal
The continuous-time exponential signal is defined as:

f(t) = K e^{st},    (1.9)

where K is a constant, t is the independent variable, and s is in general a complex number with s = σ + jω. When σ = 0 and K = 1, the real part of the signal is cos(ωt), and the imaginary part is sin(ωt). Figure 1.7 shows the real part of complex exponential functions for different values of σ. This is a periodic signal with period Δt given by:

ωΔt = 2π,    (1.10)

from which we have:

Δt = 2π/ω.    (1.11)

The parameter ω is expressed in radians per second, and the frequency f = ω/2π is expressed in cycles per second, or hertz. The role of the parameter σ is to amplify (σ > 0) or reduce (σ < 0) the amplitude of the signal as the independent variable increases. In a similar way, the discrete-time exponential signal is defined as:

x[n] = K z^n,    (1.12)

where K is a constant and z is in general a complex variable z = γe^{jω}. The discrete-time exponential signal is shown in Figure 1.8, with z = 3/4 and K = 1.

FIGURE 1.7 Real Part of the Complex Exponential Function for Different Values of the Parameter σ (σ = 0, σ = −0.01, σ = 0.03, σ = −0.03).

FIGURE 1.8 Discrete-Time Exponential Signal (z = 3/4).

1.2.2 Signal Classification

Signals can be classified according to their support, energy, power, range bounds, and other characteristics. Here, we focus
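The roles of σ and of |z| can be checked numerically. The sketch below uses K = 1 and illustrative values of σ, ω, and z:

```python
import cmath, math

# Continuous-time exponential f(t) = K e^{st}, s = sigma + j*omega (equation 1.9),
# evaluated at integer instants t = 0..100 as in Figure 1.7.
K, sigma, omega = 1.0, -0.03, 2.0 * math.pi * 0.05
f = [K * cmath.exp(complex(sigma, omega) * t) for t in range(101)]

# Discrete-time exponential x[n] = K z^n with z = 3/4, as in Figure 1.8.
z = 0.75
xd = [K * z ** n for n in range(11)]

# |f(t)| = e^{sigma t}: a negative sigma shrinks the envelope as t grows.
print(abs(f[0]), abs(f[100]), xd[1])
```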
on classification according to support. Support denotes the subset of the domain where the signal is nonzero (i.e., the extent of the independent variable or "time axis" where the signal is nonzero). The nature of the support bears a relation to the types of convergence regions of the z-transform or Laplace transform of a signal, as seen later.

Signal Duration
Signals can be classified according to their duration or support (i.e., the extent of the domain over which the signal is nonzero).
Finite-extent signal: A signal is of finite extent or finite duration if its support is bounded. In the discrete-time case, there exist integers N1 and N2 such that the signal x[n] satisfies the constraint:

x[n] = { 0,  n < N1,
       { 0,  n > N2.    (1.13)

The corresponding requirement on a finite-extent continuous-time signal x_c(t) is that there exist real numbers T1 and T2 such that:

x_c(t) = { 0,  t < T1,
         { 0,  t > T2.    (1.14)
Infinite-extent signal: A signal is of infinite extent or infinite duration if its support is not bounded. Infinite-extent signals are further categorized into:
(1) Left-sided infinite-extent signal: there exists an integer N1 such that x[n] = 0 for n > N1.
(2) Right-sided infinite-extent signal: there exists an integer N2 such that x[n] = 0 for n < N2.
(3) Two-sided infinite-extent signal: the infinite-extent signal is neither left-sided nor right-sided.
Causal Support
Causal Signal: Causality is a system property that is described later in this chapter. For linear time-invariant systems, causality constrains the support of the system impulse response to nonnegative values of the domain. The notion is extended to define a general "causal" signal. A discrete-time signal x[n] is causal if:

x[n] = 0,  n < 0.    (1.15)

A continuous-time signal is causal if:

x_c(t) = 0,  t < 0.    (1.16)

A signal is noncausal if it is not causal. An infinite-duration causal signal is a special case of an infinite-duration right-sided signal.
Anticausal Signal: A discrete-time signal is anticausal if:

x[n] = 0,  n > 0.    (1.17)

A continuous-time signal is anticausal if:

x_c(t) = 0,  t > 0.    (1.18)

An infinite-duration anticausal signal is a special case of a left-sided infinite-duration signal.
1.3 Systems

A system is an entity that transforms an input signal into an output signal according to a specified rule. Systems can be classified into discrete-time systems or continuous-time systems according to whether the pair of input and output signals is discrete-time or continuous-time, respectively, as shown in Figure 1.9. Systems can also be hybrid (e.g., analog-to-digital converters), where the input is continuous-time and the output discrete-time. A discrete-time or continuous-time input signal is denoted by x[n] or x_c(t), respectively, while the corresponding output signal is denoted by y[n] or y_c(t). The system (i.e., the transformation that maps the input signal into the output signal) is denoted by T, so that y[n] = T(x[n]) or y_c(t) = T(x_c(t)). We will often refer to the system itself as T.
FIGURE 1.9 Continuous-Time and Discrete-Time Systems
Unfortunately, the engineering notation here could be misleading. The relation y_c(t) = T(x_c(t)) does not mean that the output at instant t depends on the input only at time t. Rather, the equation should be understood in the mathematical sense of y_c = T(x_c).
Time-Shift Operator
We had noted earlier that a signal that plays an important role in signal processing is a time-shifted version of a given signal. This signal can be viewed as the output of a time-shifting system. In the discrete-time case, this may be viewed as a transformation of a signal x[n] given by y = D_{n0} x and defined by:

y[n] = D_{n0} x[n] = x[n − n0].    (1.19)

In the case of a continuous-time signal x_c(t), the time-shifted signal is as follows:

y_c(t) = D_{t0} x_c(t) = x_c(t − t0).    (1.20)

1.3.1 Common System Properties

To characterize system behavior, we need to impose some structure on a system. This structure takes the form of system properties that provide useful mathematical descriptions as well as insights into analysis and design of systems. We consider a few of these properties with a focus on linear shift-invariant systems.

Linearity
A linear system is one where the response (output) of a system to the sum of scaled stimuli (inputs) equals the sum of its scaled responses to each individual stimulus. Consider a discrete-time system T such that the output corresponding to an input x1[n] is y1[n] = T(x1[n]), and the output corresponding to an input x2[n] is y2[n] = T(x2[n]). The system is said to be linear if:

T(a x1[n] + b x2[n]) = a y1[n] + b y2[n],    (1.21)

where a and b are scalars (real or complex numbers depending on the mapping T). The above rule can be extended to the sum of a finite set of inputs by induction. We will assume that the system allows the extension of this property to a countably infinite sum of inputs.

Example
The system:

y[n] = (1/2)(x[n] + x[n − 1]),

is linear, since:

y1[n] = (1/2)(x1[n] + x1[n − 1]),
y2[n] = (1/2)(x2[n] + x2[n − 1]),

⇒ T(a x1[n] + b x2[n]) = (1/2)(a x1[n] + b x2[n] + a x1[n − 1] + b x2[n − 1])
= a (1/2)(x1[n] + x1[n − 1]) + b (1/2)(x2[n] + x2[n − 1])
= a y1[n] + b y2[n].

Shift Invariance (or Time Invariance)
A shift-invariant or time-invariant system is one where the response of a system to a shifted version of an original input is the same as the shifted version of the system's response to the original input. Consider a discrete-time system T such that the output corresponding to an input x[n] is y[n] = T(x[n]). The system T is shift invariant if:

T(x[n − n0]) = y[n − n0].    (1.22)

Example
The system of the previous example, with output given by:

y[n] = (1/2)(x[n] + x[n − 1]),

is shift invariant. If the input to the system is x1[n] = x[n − n0], then:

y1[n] = (1/2)(x1[n] + x1[n − 1]) = (1/2)(x[n − n0] + x[n − n0 − 1]) = y[n − n0].

On the other hand, the system with output y[n] = n x[n] is not shift invariant. If x1[n] = x[n − n0], then:

y1[n] = n x1[n] = n x[n − n0] ≠ y[n − n0] = (n − n0) x[n − n0].
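Both examples can be replayed numerically. The sketch below uses short zero-padded test sequences as an illustrative stand-in for infinite-length signals: the moving-average system passes the linearity and shift-invariance checks, while y[n] = n x[n] fails shift invariance:

```python
def mov_avg(x):
    # y[n] = (x[n] + x[n-1]) / 2, with x[-1] taken as 0.
    return [(x[n] + (x[n - 1] if n > 0 else 0)) / 2 for n in range(len(x))]

def times_n(x):
    # y[n] = n * x[n]
    return [n * x[n] for n in range(len(x))]

def shift(x, n0):
    # x[n - n0]: prepend n0 zeros and drop the tail.
    return [0] * n0 + x[:len(x) - n0]

x1 = [1, 2, 3, 4, 5, 0, 0]
x2 = [5, -1, 2, 0, 1, 0, 0]
a, b = 2, -3

# Linearity: T(a x1 + b x2) == a T(x1) + b T(x2).
lhs = mov_avg([a * p + b * q for p, q in zip(x1, x2)])
rhs = [a * p + b * q for p, q in zip(mov_avg(x1), mov_avg(x2))]
linear = lhs == rhs

# Shift invariance: T(x[n - n0]) == y[n - n0].
si_mov = mov_avg(shift(x1, 2)) == shift(mov_avg(x1), 2)
si_n = times_n(shift(x1, 2)) == shift(times_n(x1), 2)
print(linear, si_mov, si_n)
```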
Causality and Stability
A discrete-time system T is causal if its output at any time instant depends only on the input samples at that time or prior to it. In other words, the output value y[n0] depends only on input samples (x[n0], x[n0 − 1], x[n0 − 2], . . .) and not on future input samples (x[n0 + 1], x[n0 + 2], . . .). Stability of a system is defined in many ways, but a common characterization is in terms of bounds on the system response when the input range is confined to some specified bounds. A discrete-time system T is said to be stable in the bounded-input
bounded-output (BIBO) sense if, given any input x[n] bounded for all n (i.e., |x[n]| ≤ B_I < ∞), the output is also bounded for all n (i.e., |y[n]| ≤ B_O < ∞) for some B_O.

Examples
The system y[n] = e^{x²[n]} is BIBO stable. If |x[n]| ≤ 10 = B_I, then |y[n]| ≤ e^{100} = B_O.
The two systems with outputs y[n] = n x[n] and y[n] = Σ_{k=1}^{∞} x[n − k] are not BIBO stable because their outputs are not bounded (i.e., one cannot find a finite positive number B_O such that |y[n]| ≤ B_O for all n).

Invertibility
An invertible system is one in which an arbitrary input to the system can be uniquely inferred from the corresponding output of the system. The cascade of the system and its inverse, if it exists, is an identity system (i.e., the output of the cascade is identical to the input).
All properties so far were defined for discrete-time systems. Equivalent properties can be defined for continuous-time systems by requiring the output to satisfy the following conditions, with notation analogous to that in the discrete-time case:

- Linearity: T(a x1(t) + b x2(t)) = a y1(t) + b y2(t).
- Shift invariance: T(x(t − t0)) = y(t − t0).
- Causality: y(t) is a function of x(τ) for −∞ < τ ≤ t.
- BIBO stability: |x(t)| ≤ B_I for all t ⇒ |y(t)| ≤ B_O for all t.
1.3.2 Linear Shift-Invariant Systems

Linearity and shift invariance are often used in combination to model the behavior of practical systems. This chapter is largely devoted to developing descriptions and tools for the analysis and design of linear shift-invariant (LSI) systems. These are often referred to as linear time-invariant (LTI) systems because the independent variable is commonly time.
As pointed out before, a discrete-time signal can be expressed as a weighted sum of shifted discrete-time unit impulses. Similarly, a continuous-time signal can be expressed as the integral of weighted and shifted continuous-time unit impulses. This representation, together with the system properties of linearity and shift invariance, allows us to create a practical and useful characterization of a system in terms of a function called the impulse response of the system.
To arrive at the characterization of a linear time-invariant discrete-time system in terms of its response to an impulse input, the input signal x[n] to a digital system is expressed as a weighted sum of discrete-time unit impulses:

x[n] = Σ_{k=−∞}^{∞} x[k]δ[n − k].    (1.23)
Here, a comment on the engineering notation is in order. Equation 1.23 should be understood as x = Σ_{k=−∞}^{∞} x[k] D_k δ, so that x[k] in this equation is a scalar and δ[n − k] is a discrete-time signal. The output y[n] for the input in equation 1.23 is expressed as:

y[n] = T(x[n]) = T( Σ_{k=−∞}^{∞} x[k]δ[n − k] ).    (1.24)

The linearity of the system allows us to rewrite equation 1.24 as:

y[n] = Σ_{k=−∞}^{∞} x[k] T(δ[n − k]),    (1.25)

where we have assumed the system behavior allows us to extend the linearity property to the case of an infinite sum of inputs. Here, T(δ[n]) denotes the output signal when a unit impulse is applied as input. This signal is called the impulse response of the system, and it is denoted by h[n]. Next, we invoke the shift invariance property to assert that T(δ[n − k]) = h[n − k]. The output signal can therefore be rewritten as:

y[n] = Σ_{k=−∞}^{∞} x[k]h[n − k].    (1.26)

The last expression in equation 1.26 is called the convolution sum of the signals x[n] and h[n], which is denoted by x[n] * h[n]. The convolution sum of h[n] and x[n] can also be expressed as:

y[n] = x[n] * h[n] = Σ_{k=−∞}^{∞} x[k]h[n − k] = Σ_{m=−∞}^{∞} x[n − m]h[m] = h[n] * x[n].    (1.27)

By proceeding in an analogous manner in the case of continuous-time systems, one arrives at equivalent relations. As pointed out before, each continuous-time signal can be written as the integral of weighted and shifted unit impulses:

x_c(t) = ∫_{−∞}^{∞} x_c(τ)δ_c(t − τ)dτ.    (1.28)

We assume that h_c(t) is the impulse response of the continuous-time system (i.e., it is the output of the system when a unit impulse is applied at the input). By invoking the linearity and shift invariance properties of the system, the output signal y_c(t) can be written as:
y_c(t) = T(x_c(t)) = T( ∫_{−∞}^{∞} x_c(τ)δ_c(t − τ)dτ )
= ∫_{−∞}^{∞} x_c(τ) T(δ_c(t − τ))dτ
= ∫_{−∞}^{∞} x_c(τ)h_c(t − τ)dτ = x_c(t) * h_c(t).    (1.29)

Therefore, for a continuous-time system, the output signal is given by the convolution integral of the input signal and the system impulse response. An important observation is that both discrete-time and continuous-time systems that are linear and time-invariant are completely characterized by their impulse responses under suitable assumptions.

Example
An example of the convolution of two continuous-time signals is considered now. It illustrates the property that the duration of the convolved signal is equal to the sum of the durations of the two signals that are convolved. Consider a signal x_c(t):

x_c(t) = { 1/2,  −1 ≤ t ≤ 1,
         { 0,    otherwise,

which is applied as input to a system with impulse response h_c(t):

h_c(t) = { 1,  0 ≤ t ≤ 2,
         { 0,  otherwise.

Figure 1.10 shows the two signals x_c(t) and h_c(t) and their convolution y_c(t). We notice that the duration of the convolution signal is L_y = L_x + L_h = 2 + 2 = 4.
For discrete-time convolution, the duration of the signal obtained as a convolution sum is equal to the sum of the durations of the convolved signals less one. Let x[n] and h[n] be two finite-duration discrete-time signals such that x[n] = 0 for n < N1 or n > N2 ≥ N1, and h[n] = 0 for n < M1 or n > M2 ≥ M1. We assume that x[N1], x[N2], h[M1], and h[M2] are nonzero, so that the duration of x[n] is N = N2 − N1 + 1 and the duration of h[n] is M = M2 − M1 + 1. Then, the convolved signal y[n] = x[n] * h[n] has duration L = M + N − 1. The signal y[n] is zero for n < L1 = M1 + N1 or n > L2 = M2 + N2. Note that L = L2 − L1 + 1.
FIGURE 1.10 Two Signals and Their Convolution. (A) Signal x(t); (B) Signal h(t); (C) Signal y(t).
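For finite discrete-time signals, the convolution sum of equation 1.27 and the duration rule L = M + N − 1 can be sketched as follows (the test sequences are illustrative):

```python
def conv(x, h):
    # Convolution sum y[n] = sum_k x[k] h[n-k] for finite-length sequences
    # indexed from 0; the result has length len(x) + len(h) - 1.
    y = [0] * (len(x) + len(h) - 1)
    for k, xv in enumerate(x):
        for m, hv in enumerate(h):
            y[k + m] += xv * hv
    return y

x = [1, 2, 3]        # duration N = 3
h = [1, 1, 1, 1]     # duration M = 4
y = conv(x, h)

duration_ok = len(y) == len(x) + len(h) - 1   # L = M + N - 1
commutes = conv(x, h) == conv(h, x)           # x * h == h * x
print(y, duration_ok, commutes)
```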
To conclude this section, we list three properties of the convolution operator that are valid both in the continuous-time and in the discrete-time case. Their proofs follow directly from properties of the convolution sum and integral operators.
1. Commutative property: x1 * x2 = x2 * x1.
2. Associative property: x1 * [x2 * x3] = [x1 * x2] * x3.
3. Distributive property: x1 * [x2 + x3] = x1 * x2 + x1 * x3.
In expressions 1 through 3, if x1 is assumed to represent the system input and x2 and x3 represent system impulse responses, then these have useful interpretations for an interconnected system. The associative property implies that a cascade connection of two LSI systems is equivalent to a single system with impulse response equal to the convolution of the two individual system impulse responses. The distributive property implies that a parallel connection of two LSI systems can be represented by a single system with impulse response equal to the sum of the two individual system impulse responses.
1.3.3 Properties of LSI Systems

An important observation is that both discrete-time and continuous-time systems that are linear and time-invariant are completely characterized by their impulse responses. Other properties of LSI systems, such as causality, stability, and invertibility, can then be characterized in terms of a system's impulse response. The characterization of the LSI system properties of causality, stability, and invertibility will first be examined for the case of discrete-time systems.

Causality
If a discrete-time system is causal, then the current output sample depends only on the current and past input samples and not on future samples. This definition is now examined in the case of an LSI system with an impulse response h[n]. The output y[n] of an LSI system can be expressed as:

y[n] = h[n] * x[n] = Σ_{k=−∞}^{∞} x[n − k]h[k]
= . . . + x[n + 2]h[−2] + x[n + 1]h[−1] + x[n]h[0] + x[n − 1]h[1] + x[n − 2]h[2] + . . . .    (1.30)

The expression of equation 1.30 immediately suggests the constraint on the impulse response of a causal LSI system. If the output is required to be independent of future samples of an arbitrary input, then the coefficients h[−1], h[−2], . . . in the convolution sum should each be equal to zero. Therefore, an LSI system is causal if and only if h[n] = 0, n < 0.

BIBO Stability
A system was previously defined to be BIBO stable if a bounded input produces a bounded output. It will be shown that an LSI system is BIBO stable if and only if Σ_{n=−∞}^{∞} |h[n]| < ∞.

Proof
We first assume that S_h = Σ_{n=−∞}^{∞} |h[n]| < ∞. Now suppose the system input x[n] is bounded (i.e., |x[n]| ≤ B_I) for all n. The output satisfies:

|y[n]| ≤ Σ_{k=−∞}^{∞} |x[n − k]||h[k]| ≤ B_I Σ_{k=−∞}^{∞} |h[k]| = B_I S_h.    (1.31)

We conclude that |y[n]| ≤ B_O = B_I S_h, and therefore the system is stable.
We now assume that the LSI system is BIBO stable with an impulse response that is not identically zero. Consider a bounded input x[n] defined by:

x[n] = { B_I h*[−n]/|h[−n]|,  h[−n] ≠ 0,
       { 0,                   h[−n] = 0,    (1.32)

where h*[n] is the complex conjugate of h[n]. The output y[n] for n = 0 is given by:

|y[0]| = | Σ_{k=−∞}^{∞} x[−k]h[k] | = B_I Σ_{k=−∞}^{∞} |h[k]| = B_I S_h.    (1.33)

For the system to be stable, we require that y[n] be bounded for all n. In particular, |y[0]| above should be finite. This implies that S_h = Σ_{n=−∞}^{∞} |h[n]| < ∞.

Invertibility
Let an LSI system with impulse response h[n] be invertible. It can then be shown that the inverse system is LSI. If the inverse system has impulse response h_i[n], then h[n] * h_i[n] = δ[n].

In the case of continuous time, we can obtain similar conditions on the impulse response h_c(t) of an LSI system, as summarized here:

- An LSI system is causal ⇔ h_c(t) = 0, t < 0.
- An LSI system is BIBO stable ⇔ ∫_{−∞}^{∞} |h_c(t)|dt < ∞.
- If an inverse system exists and has impulse response h_ci(t), then h_c(t) * h_ci(t) = δ_c(t).
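The absolute-summability criterion can be sketched with a hypothetical first-order impulse response h[n] = aⁿu[n] (a chosen for illustration): for |a| < 1, Σ|h[n]| converges to the geometric-series limit 1/(1 − |a|), so the system is BIBO stable:

```python
a = 0.9
h = [a ** n for n in range(200)]           # h[n] = a^n u[n]; h[n] = 0 for n < 0, so causal

S_partial = sum(abs(v) for v in h)         # partial sum of sum_n |h[n]|
S_limit = 1.0 / (1.0 - abs(a))             # geometric-series limit for |a| < 1

stable = abs(S_partial - S_limit) < 1e-3   # the partial sum has essentially converged
print(S_partial, S_limit, stable)
```

For |a| ≥ 1 the same partial sums would grow without bound, consistent with the instability of, for example, a running accumulator.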
A special class of LSI systems commonly used in modeling and implementation is the class that is described by a linear constant-coefficient difference or differential equation in the case of discrete and continuous time, respectively. It should be pointed out that, depending on the specified auxiliary (or "initial") conditions, a system described by a linear constant-coefficient difference or differential equation may not be LSI
(Oppenheim et al., 1999; Oppenheim and Willsky, 1997). We will assume that the auxiliary conditions are consistent with the LSI requirement. A condition of initial rest is assumed for the systems, so that we will be dealing only with linear shift-invariant systems described by the difference or differential equation.
The LSI systems can be further classified according to the duration of the impulse response. An LSI system is called a finite-duration impulse response (FIR) system if its impulse response has finite support. An infinite-duration impulse response (IIR) LSI system has an impulse response with infinite support. LSI systems are often referred to as filters in a generalized sense, though the term filter is commonly understood in terms of passing or rejecting signal content in a frequency-selective manner. Hence, the labels FIR and IIR filters are commonly used even when frequency selectivity is not the objective. We now consider frequency domain analysis of signals and systems.
1.4 Analysis in Frequency Domain

So far, we have examined discrete-time and continuous-time signals as well as LSI systems in the time domain. We now consider signal and LSI system descriptions in the frequency domain, motivated by the fact that analysis and design may be more conveniently done in the frequency domain. Central to frequency-domain description and analysis is the fact that signals (and impulse responses) can be conveniently represented in terms of sinusoids and complex exponentials that are eigenfunctions of LSI systems.
Let T denote the system operator for an LSI system. We are interested in finding the eigenfunctions of this operator, that is, the input signals x[n] or x(t) such that:

T x = λx,    (1.34)

where λ is a scalar. Equation 1.34 implies that for certain input signals x, the system output is a scaled version of the input. The scalar multiplier λ is called the eigenvalue.
It has already been mentioned that the exponential signal plays an important role in signal representation. We now consider the response of a linear shift-invariant system to an exponential signal applied as input. We again focus on the discrete-time case first. Consider an LSI discrete-time system with impulse response h[n]. If the input to the system is x[n] = e^{jωn} for −∞ < n < ∞, then the output signal is as written here:

y[n] = Σ_{k=−∞}^{∞} x[n − k]h[k] = Σ_{k=−∞}^{∞} e^{jω(n−k)} h[k]
= e^{jωn} Σ_{k=−∞}^{∞} e^{−jωk} h[k] = H(e^{jω}) e^{jωn}.    (1.35)

From equation 1.35, we observe that e^{jωn} is an eigenfunction of the linear shift-invariant system. The LSI system output is equal to the input multiplied by a scalar factor H(e^{jω}), which is the eigenvalue given by:

H(e^{jω}) = Σ_{k=−∞}^{∞} e^{−jωk} h[k].    (1.36)

The H(e^{jω}) represents the complex gain as a function of the frequency variable ω and is called the frequency response of the system and also the discrete-time Fourier transform (DTFT) of h[n]. In general, the DTFT of a signal x[n] is defined as:

X(e^{jω}) = Σ_{n=−∞}^{∞} e^{−jωn} x[n].    (1.37)

The DTFT of a signal is a complex function that can be expressed as |X(e^{jω})| e^{jφ(ω)}. Here |X(e^{jω})| is the magnitude, and φ(ω) is the phase of the DTFT. The magnitude |X(e^{jω})| can be viewed as a measure of the strength of the signal content at different frequencies, and φ(ω) represents the phase of the frequency components.

Example
Consider the computation of the DTFT of the rectangular window signal shown in Figure 1.11 and expressed as:

x[n] = { A,  −n0 ≤ n ≤ n0,
       { 0,  otherwise.

FIGURE 1.11 Rectangular Window

For ω = 0, we note that X(e^{jω}) = A(2n0 + 1). For ω ≠ 0:
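The eigenfunction relation of equation 1.35 lends itself to a direct numerical check: feeding x[n] = e^{jωn} through a finite impulse response (an arbitrary three-tap h, chosen purely for illustration) returns the same exponential scaled by H(e^{jω}):

```python
import cmath

h = [0.5, 0.25, 0.25]   # an illustrative FIR impulse response, h[k] for k = 0, 1, 2
w = 0.7                 # a test frequency in radians

# Frequency response H(e^{jw}) = sum_k h[k] e^{-jwk} (equation 1.36).
H = sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

def y(n):
    # Output sample y[n] = sum_k x[n-k] h[k] with input x[n] = e^{jwn}.
    return sum(cmath.exp(1j * w * (n - k)) * hk for k, hk in enumerate(h))

# Eigenfunction property: y[n] = H(e^{jw}) e^{jwn} for every n.
ok = all(abs(y(n) - H * cmath.exp(1j * w * n)) < 1e-12 for n in range(-5, 6))
print(ok)
```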
X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn} = Σ_{n=−n0}^{n0} A e^{−jωn}
= A (e^{jωn0} − e^{−jω(n0+1)}) / (1 − e^{−jω}) = A sin((2n0 + 1)ω/2) / sin(ω/2).

The DTFT is shown in Figure 1.12.

FIGURE 1.12 Discrete-Time Fourier Transform of the Rectangular Window.

1.4.1 Properties of the Discrete-Time Fourier Transform

The discrete-time Fourier transform (DTFT) was defined as a complex series for an arbitrary discrete-time signal. The issue of its convergence is briefly examined. In the description here, a sufficient condition for the existence of the DTFT is assumed. This is done by considering the signal x[n] to be absolutely summable (i.e., Σ_{n=−∞}^{∞} |x[n]| < ∞). In this case:

|X(e^{jω})| = | Σ_{n=−∞}^{∞} e^{−jωn} x[n] | ≤ Σ_{n=−∞}^{∞} |e^{−jωn}| |x[n]| = Σ_{n=−∞}^{∞} |x[n]| < ∞.

Therefore, if the signal x[n] is absolutely summable, the series converges and the DTFT exists.

Inverse DTFT
Given the DTFT of a signal, the signal can be recovered from it using the so-called inverse DTFT given by:

x[n] = (1/2π) ∫_0^{2π} X(e^{jω}) e^{jωn} dω.    (1.38)

The validity of the above relation can be seen from:

(1/2π) ∫_0^{2π} X(e^{jω}) e^{jωn} dω = (1/2π) ∫_0^{2π} ( Σ_{m=−∞}^{∞} e^{−jωm} x[m] ) e^{jωn} dω
= Σ_{m=−∞}^{∞} x[m] (1/2π) ∫_0^{2π} e^{jω(n−m)} dω = Σ_{m=−∞}^{∞} x[m] δ[n − m] = x[n],

where we have used the fact that:

(1/2π) ∫_0^{2π} e^{jω(n−m)} dω = { 1,  n = m,
                                 { 0,  n ≠ m.

Some useful properties of the DTFT are now examined.

Periodicity
The DTFT is a periodic function in ω with period 2π, since:

X(e^{j(ω+2π)}) = Σ_{n=−∞}^{∞} x[n] e^{−j(ω+2π)n} = Σ_{n=−∞}^{∞} x[n] e^{−jωn} e^{−j2πn} = X(e^{jω}).

Linearity
A property that follows from the definition is that the DTFT of a linear combination of two or more signals is equal to the same linear combination of the DTFTs of each signal:

y[n] = a x1[n] + b x2[n] ⇔ Y(e^{jω}) = a X1(e^{jω}) + b X2(e^{jω}).

Symmetry of DTFT for Real Signals
If the signal x[n] is real (x[n] = x*[n]), then X(e^{−jω}) = X*(e^{jω}). This follows from:

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn},

X*(e^{jω}) = ( Σ_{n=−∞}^{∞} x[n] e^{−jωn} )* = Σ_{n=−∞}^{∞} x*[n] e^{jωn} = Σ_{n=−∞}^{∞} x[n] e^{jωn} = X(e^{−jω}).
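The closed form derived above for the rectangular window can be verified against the defining sum of equation 1.37 (A = 1 and n0 = 4 are illustrative values):

```python
import cmath, math

A, n0 = 1.0, 4

def dtft_sum(w):
    # X(e^{jw}) = sum_n x[n] e^{-jwn}, equation 1.37, over the window support.
    return sum(A * cmath.exp(-1j * w * n) for n in range(-n0, n0 + 1))

def dtft_closed(w):
    # A sin((2 n0 + 1) w / 2) / sin(w / 2), valid for w != 0.
    return A * math.sin((2 * n0 + 1) * w / 2) / math.sin(w / 2)

ws = [0.1 * k for k in range(1, 30)]
ok = all(abs(dtft_sum(w) - dtft_closed(w)) < 1e-9 for w in ws)
peak = abs(dtft_sum(0.0) - A * (2 * n0 + 1)) < 1e-12   # X(e^{j0}) = A(2 n0 + 1)
print(ok, peak)
```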
If X(e^{jω}) = |X(e^{jω})| e^{jφ(ω)}, then X*(e^{jω}) = |X(e^{jω})| e^{−jφ(ω)}. Therefore, for real x[n], the magnitude of the DTFT is an even function of ω, and the phase of the DTFT is an odd function of ω.

Time Shift of Signals
A shift of the signal x[n] in the time domain does not affect the magnitude of the DTFT but produces a phase shift that is linear in ω and in proportion to the time shift:
y[n] = x[n − n0] ⇔ Y(e^{jω}) = X(e^{jω}) e^{−jωn0}.

This follows from:

Y(e^{jω}) = Σ_{n=−∞}^{∞} y[n] e^{−jωn} = Σ_{n=−∞}^{∞} x[n − n0] e^{−jωn}
= e^{−jωn0} Σ_{m=−∞}^{∞} x[m] e^{−jωm} = e^{−jωn0} X(e^{jω}).

Convolution
The DTFT of the convolution sum of two signals x1[n] and x2[n] is the product of their DTFTs, X1(e^{jω}) and X2(e^{jω}). That is:

y[n] = x1[n] * x2[n] ⇔ Y(e^{jω}) = X1(e^{jω}) X2(e^{jω}).

The DTFT of y[n] = x1[n] * x2[n] = Σ_{k=−∞}^{∞} x1[k] x2[n − k] is given by:

Y(e^{jω}) = Σ_{n=−∞}^{∞} y[n] e^{−jωn} = Σ_{n=−∞}^{∞} Σ_{k=−∞}^{∞} x1[k] x2[n − k] e^{−jωn}
= Σ_{k=−∞}^{∞} x1[k] e^{−jωk} Σ_{n=−∞}^{∞} x2[n − k] e^{−jω(n−k)} = X1(e^{jω}) X2(e^{jω}).

Modulation
A modulation of the signal in the time domain by a complex exponential signal corresponds to a shift in the frequency domain. That is:

y[n] = e^{jω0 n} x[n] ⇔ Y(e^{jω}) = X(e^{j(ω−ω0)}).

This is seen from:

Y(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{jω0 n} e^{−jωn} = Σ_{n=−∞}^{∞} x[n] e^{−j(ω−ω0)n} = X(e^{j(ω−ω0)}).

Product of Signals
The DTFT of the product of two signals x1[n] and x2[n] is the periodic convolution of the two DTFTs, X1(e^{jω}) and X2(e^{jω}), as shown on the right-hand side of the equation:

y[n] = x1[n] x2[n] ⇔ Y(e^{jω}) = (1/2π) ∫_{−π}^{π} X1(e^{ju}) X2(e^{j(ω−u)}) du.

Computing the DTFT of the product:

Y(e^{jω}) = Σ_{n=−∞}^{∞} x1[n] x2[n] e^{−jωn}
= Σ_{n=−∞}^{∞} [ (1/2π) ∫_{−π}^{π} X1(e^{ju}) e^{jun} du ] x2[n] e^{−jωn}
= (1/2π) ∫_{−π}^{π} X1(e^{ju}) [ Σ_{n=−∞}^{∞} x2[n] e^{−jn(ω−u)} ] du
= (1/2π) ∫_{−π}^{π} X1(e^{ju}) X2(e^{j(ω−u)}) du.

Parseval's Relation
The signal energy can equivalently be expressed in the time or in the frequency domain as:

Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(e^{ju})|² du.

Defining x1[n] = x[n], x2[n] = x*[n], and y[n] = x1[n] x2[n] = |x[n]|², and then using the product relation, we obtain:

Y(e^{j0}) = Σ_{n=−∞}^{∞} x[n] x*[n] = Σ_{n=−∞}^{∞} |x[n]|²
= (1/2π) ∫_{−π}^{π} X(e^{ju}) X*(e^{ju}) du = (1/2π) ∫_{−π}^{π} |X(e^{ju})|² du.

For continuous-time signals, the Fourier transform and its inverse are given by:

X_c(jΩ) = ∫_{−∞}^{∞} x_c(t) e^{−jΩt} dt,    (1.39)

x_c(t) = (1/2π) ∫_{−∞}^{∞} X_c(jΩ) e^{jΩt} dΩ.    (1.40)

Properties analogous to those for the DTFT can be obtained for the continuous-time Fourier transform (CTFT), and these are summarized in Table 1.1. The subscript c used for continuous-time signals is dropped in the table.

1.4.2 Frequency Response of LSI System
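Two of these properties lend themselves to a quick numerical sketch: the convolution property and Parseval's relation, with the integral approximated by a Riemann sum over one period (the test sequences are illustrative):

```python
import cmath, math

def dtft(x, w):
    # Finite-support DTFT: X(e^{jw}) = sum_n x[n] e^{-jwn}, indices starting at 0.
    return sum(v * cmath.exp(-1j * w * n) for n, v in enumerate(x))

def conv(x, h):
    # Convolution sum for finite-length sequences.
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xv in enumerate(x):
        for m, hv in enumerate(h):
            y[k + m] += xv * hv
    return y

x1, x2 = [1.0, -2.0, 3.0], [0.5, 0.5, 1.0, -1.0]
y = conv(x1, x2)

# Convolution property Y = X1 X2, checked at a few frequencies.
conv_ok = all(
    abs(dtft(y, w) - dtft(x1, w) * dtft(x2, w)) < 1e-9
    for w in (0.3, 1.1, 2.5)
)

# Parseval: sum |x[n]|^2 == (1/2pi) * integral over one period of |X|^2,
# approximated by an N-point Riemann sum, which reduces to (1/N) sum_k |X(w_k)|^2.
N = 4000
energy_t = sum(v * v for v in x1)
energy_f = sum(
    abs(dtft(x1, -math.pi + 2 * math.pi * k / N)) ** 2 for k in range(N)
) / N
print(conv_ok, energy_t, energy_f)
```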
An LSI system is characterized in the time domain by its impulse response. It can be equivalently characterized in the frequency domain by the system frequency response, which is the Fourier transform of the impulse response. We first examine the discrete-time case.
The frequency response of a discrete-time system is defined as the discrete-time Fourier transform (DTFT) of its impulse response h[n], as shown in equation 1.36. If an input x[n] is applied to the system, the output is the convolution sum y[n] = x[n] * h[n]. From the property of the DTFT of convolution sums discussed above, the DTFT, Y(e^{jω}), of the output,
TABLE 1.1 Properties of the Continuous-Time Fourier Transform

Property              Continuous-time signal         Continuous-time Fourier transform
Linearity             y(t) = a x1(t) + b x2(t)       Y(jΩ) = a X1(jΩ) + b X2(jΩ)
Time shift            y(t) = x(t − t0)               Y(jΩ) = X(jΩ) e^{−jΩt0}
Frequency shift       y(t) = x(t) e^{jΩ0 t}          Y(jΩ) = X(j(Ω − Ω0))
Convolution           y(t) = x1(t) * x2(t)           Y(jΩ) = X1(jΩ) X2(jΩ)
Product               y(t) = x1(t) x2(t)             Y(jΩ) = (1/2π) X1(jΩ) * X2(jΩ) = (1/2π) ∫_{−∞}^{∞} X1(jΘ) X2(j(Ω − Θ)) dΘ
Area under X(jΩ)      x(0) = (1/2π) ∫_{−∞}^{∞} X(jΩ) dΩ
Parseval's relation   ∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(jΩ)|² dΩ
Conjugate function    y(t) = x*(t)                   Y(jΩ) = X*(−jΩ)
Real part             Re[x(t)]                       (1/2)[X(jΩ) + X*(−jΩ)]
Imaginary part        Im[x(t)]                       (1/2j)[X(jΩ) − X*(−jΩ)]
y[n], is the product of the DTFTs H(e jv ) and X(e jv ) of h[n] and x[n], respectively: Y (e jv ) ¼ H(e jv )X(e jv ):
(1:41)
If the DTFTs are expressed in terms of magnitude and phase, (i.e., Y (e jv ) ¼ jY (e jv )je jfg (v) , X(e jv ) ¼ jX(e jv )je jfx (v) , and H(e jv ) ¼ jH(e jv )je jfH (v) , then the following two equation apply: Y (e jv ) ¼ H(e jv ):X(e jv ): (1:42) fY (v) ¼ fH (v) þ fX (v):
(1:43)
|H(e^{jω})| and φH(ω) are referred to as the magnitude response and the phase response of the system, respectively. In equations 1.42 and 1.43, the system magnitude response represents the gain or attenuation that the input signal content at frequency ω experiences when the system maps it to the output. The system phase response represents the corresponding phase shift at frequency ω. If the system magnitude response |H(e^{jω0})| is unity, then the system is said to ideally pass the input signal content at frequency ω0 to the output. The set of all such frequencies is called the passband of the system. If |H(e^{jω0})| is zero, then the system is said to ideally stop the input signal content at frequency ω0 from appearing at the output. The set of all such frequencies is called the stopband of the system. If the set of all frequencies |ω| ≤ π is partitioned into passbands and stopbands, then the system is called an ideal filter that passes or stops the signal content in a frequency-selective manner. A filter is said to be ideal low-pass if its frequency response is the following:

H(e^{jω}) = { 1,  |ω| < ωc;
              0,  ωc < |ω| ≤ π. }  (1.44)
The filter passes all low-frequency input content in the passband |ω| < ωc and stops all high-frequency input content in the stopband ωc < |ω| ≤ π. The impulse response of the ideal low-pass filter can be computed as the inverse DTFT of H(e^{jω}):

h[n] = { ωc/π,            n = 0;
         sin(ωc n)/(πn),  n ≠ 0. }  (1.45)
This filter can be shown to be unstable in the BIBO sense. A practical low-pass filter can only approximate the ideal low-pass response. In practice, filters have passband magnitude within a tolerance of unity and have stopband magnitude within a tolerance of zero. When practical filters are specified, transition bands are allowed between adjacent passbands and stopbands so that the magnitude response in the transition bands may swing between the allowed passband and stopband magnitudes. The impulse response h[n] of a practical filter can vary depending on given filter specifications. The task of determining h[n] to satisfy the specifications is called filter design. The frequency response of a continuous-time LSI system is the continuous-time Fourier transform Hc (jV) of the impulse response hc (t):
Hc(jΩ) = ∫_{−∞}^{∞} hc(t) e^{−jΩt} dt,  (1.46)

hc(t) = (1/2π) ∫_{−∞}^{∞} Hc(jΩ) e^{jΩt} dΩ.  (1.47)
The frequency response of continuous-time filters is specified for frequencies over the range −∞ < Ω < ∞.
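Returning to the ideal low-pass filter of equation 1.45: although the ideal response is unrealizable, truncating h[n] to a finite length already gives a usable approximation, which is the essence of the window method of filter design mentioned above. The sketch below (Python with NumPy, an illustrative choice; the chapter itself presents no particular language) builds a truncated ideal low-pass impulse response and evaluates its magnitude response at one passband and one stopband frequency.

```python
import numpy as np

def ideal_lowpass(wc, M):
    # Truncated ideal low-pass impulse response of eq. 1.45, support n = -M..M.
    n = np.arange(-M, M + 1)
    # np.sinc(x) = sin(pi x)/(pi x), so (wc/pi)*sinc(wc n/pi) = sin(wc n)/(pi n),
    # with the correct limit wc/pi at n = 0.
    h = (wc / np.pi) * np.sinc(wc * n / np.pi)
    return n, h

def dtft(n, h, w):
    # Direct evaluation of H(e^{jw}) = sum_n h[n] e^{-jwn}.
    return np.sum(h * np.exp(-1j * w * n))

wc = np.pi / 4                            # cutoff frequency (illustrative value)
n, h = ideal_lowpass(wc, M=200)
H_pass = abs(dtft(n, h, 0.10 * np.pi))    # well inside the passband |w| < wc
H_stop = abs(dtft(n, h, 0.60 * np.pi))    # well inside the stopband
```

With 401 taps the magnitude is already close to the ideal values 1 and 0 away from the cutoff; the Gibbs ripple near ωc is what practical specifications bound with passband and stopband tolerances.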
Rashid Ansari and Lucia Valbonesi

1.5 The z-Transform and Laplace Transform

Fourier transforms are useful, but they exist for a restricted class of signals. They can be generalized so that the generalization exists for a larger class of signals. These generalizations provide additional insights into system analysis and design. If time is discrete, the generalization is called the z-transform. The definition of the DTFT is modified by replacing e^{jω} by a complex variable z = re^{jω}. In the case of continuous time, the definition of the CTFT is modified by replacing jΩ by a complex variable s = σ + jΩ.

1.5.1 z-Transform and Its Properties

The discrete-time Fourier transform of a signal x[n], expressed as:

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn},  (1.48)

can be viewed as a power series in the complex variable z = e^{jω} but with the variable restricted to values on the unit circle in the z-plane. Consider allowing z to assume any complex value z = re^{jω}, with r real and r ≥ 0. The z-transform of a discrete-time signal x[n] is defined as:

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}.  (1.49)

Note the relation of the z-transform to a Laurent series. The DTFT of x[n] is identical to the values that the z-transform assumes on the unit circle (i.e., the circle with unit radius centered at the origin in the z-plane). With z = re^{jω} viewed as a position vector, r is the distance from the origin, and ω is the angle between the vector z and the real axis. The z-transform may converge only over some region in the complex z-plane. In general, the region of convergence of the z-transform, if not empty, is either the interior, |z| < R, of a circle or an annular region in the z-plane:

R1 < |z| < R2,  (1.50)

where 0 ≤ R1 < R2 ≤ ∞. If the region of convergence includes the unit circle, then the DTFT of the signal exists.

Example 1
Consider the discrete-time signal:

x[n] = a^n u[n] = { a^n,  n ≥ 0;
                    0,    n < 0. }  (1.51)

The z-transform of x[n] is the following:

X(z) = Σ_{n=0}^{∞} a^n z^{−n} = Σ_{n=0}^{∞} (az^{−1})^n = 1/(1 − az^{−1}),  (1.52)

provided |az^{−1}| < 1 (i.e., |z| > |a|). For the DTFT of the signal to exist, we require that |a| < 1.

Example 2
Consider a signal x[n] given by:

x[n] = b^n u[−n − 1] = { b^n,  n < 0;
                         0,    n ≥ 0. }

The z-transform of x[n] is expressed as:

X(z) = Σ_{n=−∞}^{−1} b^n z^{−n} = Σ_{m=1}^{∞} (b^{−1}z)^m = b^{−1}z/(1 − b^{−1}z),

provided |b^{−1}z| < 1 (i.e., |z| < |b|). In this case, the existence of the DTFT requires |b| > 1.

Example 3
The z-transform of the signal:

x[n] = a^{|n|} = { a^n,   n ≥ 0;
                   a^{−n}, n < 0, }

is expressed as:

X(z) = Σ_{n=0}^{∞} a^n z^{−n} + Σ_{n=1}^{∞} a^n z^n = z/(z − a) − z/(z − a^{−1}).

The region of convergence ROC = {|z| > |a|} ∩ {|z| < |1/a|} is the annulus {|a| < |z| < |1/a|}, provided |a| < 1. One notices that in these examples, the z-transforms are rational functions in z, with pole locations defining the boundaries of the region of convergence. In Example 3, the two poles are located at z = a and z = 1/a. When signals are sums of left-sided or right-sided complex exponential sequences, of the form a^n u[n − na] or b^n u[−(n − nb)], the z-transforms are rational functions in z. Given the z-transform of a signal x[n], the signal can be recovered with the inverse z-transform:

x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz,  (1.53)

where the integral is evaluated in the counterclockwise direction over a contour within the region of convergence that
1 Signals and Systems
encloses the origin. To compute the inverse z-transform, we need information not only about the functional form of the z-transform but also about the region of convergence. In fact, two identical functions with different regions of convergence have different inverse transforms. Let us consider the case of a z-transform X(z) with two poles, located at z = a and z = b, with |a| < |b|. There are three possible regions of convergence:

. ROC1, the region |z| < |a|
. ROC2, the annulus |a| < |z| < |b|
. ROC3, the region |z| > |b|

Figure 1.13 shows the three possible regions of convergence for a = 1/2 and b = 2. To select the correct region of convergence, we need further information about the signal. If, for example, we know that the DTFT of the signal exists (i.e., the signal is absolutely summable), then we know that the unit circle is included in the region of convergence, and among the three previous regions we choose the one that includes the unit circle. In our example, a = 1/2 and b = 2, and therefore the ROC is the annulus 1/2 < |z| < 2 (i.e., region ROC2). If the system is causal, then the z-transform converges in the region outside the largest pole, which is region ROC3 in the example. Depending on the selected region of convergence, the signal obtained through the inverse transform is different. For example, the inverse z-transform of X(z) = z/(z − a) is equal to x[n] = a^n u[n] if the ROC is |z| > |a|, whereas it is equal to x[n] = −a^n u[−n − 1] if the ROC is |z| < |a|. Table 1.2 summarizes the properties of the z-transform. The derivation of these properties is similar to that carried out for the discrete-time Fourier transform and is not presented here.

FIGURE 1.13 Region of Convergence with Poles at z = 2 and z = 1/2

Example
Consider the z-transform of a finite-duration sequence:

x[n] = a^n, 0 ≤ n ≤ N − 1.

The z-transform of this sequence is:

X(z) = Σ_{n=0}^{N−1} x[n] z^{−n} = { (1 − (az^{−1})^N)/(1 − az^{−1}) = (1/z^{N−1})(z^N − a^N)/(z − a),  z ≠ a;
                                      N,                                                               z = a. }

Zeros of the z-transform are located at:

z_k = a e^{j2πk/N}, 1 ≤ k ≤ N − 1.

There are N − 1 poles at z = 0 and N − 1 zeros, as shown in Figure 1.14. In general, the z-transform of a finite-duration sequence can possibly have poles only at z = 0 or z = ∞, while the zeros may be located anywhere in the z-plane. This means that for finite sequences, the ROC is the whole z-plane except perhaps the origin and/or z = ∞.

TABLE 1.2 Properties of the z-Transform

Property               Discrete-time signal         z-Transform
Linearity              y[n] = ax1[n] + bx2[n]       Y(z) = aX1(z) + bX2(z)
Time shift             y[n] = x[n − n0]             Y(z) = X(z)z^{−n0}
Exponential weighting  y[n] = x[n]a^n               Y(z) = X(a^{−1}z)
Linear weighting       y[n] = nx[n]                 Y(z) = −z dX(z)/dz
Convolution            y[n] = x1[n] * x2[n]         Y(z) = X1(z)X2(z)
Product                y[n] = x1[n]x2[n]            Y(z) = (1/2πj) ∮_C X1(v) X2(z/v) v^{−1} dv
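The geometric-series manipulations in the examples above can be checked numerically: inside the ROC the partial sums of the power series converge to the closed form, and outside it they diverge. A sketch for Example 1 (Python with NumPy assumed; the test points are arbitrary):

```python
import numpy as np

def ztransform_partial(a, z, terms):
    # Partial sum of X(z) = sum_{n>=0} a^n z^{-n} for x[n] = a^n u[n].
    n = np.arange(terms)
    return np.sum((a / z) ** n)

a = 0.5
z_inside = 2.0 * np.exp(1j * 0.3)          # |z| = 2 > |a|: inside the ROC
closed_form = 1.0 / (1.0 - a / z_inside)   # 1/(1 - a z^{-1})
partial = ztransform_partial(a, z_inside, 200)

z_outside = 0.25                           # |z| = 0.25 < |a|: outside the ROC
diverging = abs(ztransform_partial(a, z_outside, 200))
```

Inside the ROC the 200-term partial sum agrees with 1/(1 − az^{−1}) to machine precision; outside it the terms (a/z)^n grow geometrically and the partial sums blow up.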
FIGURE 1.14 Location of Zeros and Poles for a Finite Sequence with N = 10 and a = 0.5
The magnitude and the phase of the z-transform and the DTFT of the finite-duration sequence are shown in Figure 1.15.
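The zero locations of the finite-duration example can also be checked numerically. Multiplying X(z) by z^{N−1} gives an ordinary polynomial with coefficients a^0, a^1, ..., a^{N−1} in descending powers of z, whose N − 1 roots should all have magnitude a, with none falling at z = a itself (that zero is cancelled by the pole). A sketch (Python with NumPy assumed):

```python
import numpy as np

N, a = 10, 0.5
# X(z) = sum_{n=0}^{N-1} a^n z^{-n}. Multiplying by z^{N-1} gives the polynomial
# a^0 z^{N-1} + a^1 z^{N-2} + ... + a^{N-1} = (z^N - a^N)/(z - a).
coeffs = a ** np.arange(N)        # coefficients in descending powers of z
zeros = np.roots(coeffs)          # the N-1 zeros of X(z)
magnitudes = np.abs(zeros)        # all should equal a
```

The computed roots sit at a e^{j2πk/N} for k = 1, ..., N − 1, exactly the zero pattern plotted in Figure 1.14.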
1.5.2 The Laplace Transform and Its Properties
For continuous-time signals, the Fourier transform integral of equation 1.39 can be generalized as:

X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt,  (1.54)

where s = σ + jΩ. Equation 1.54 is the Laplace transform of the signal x(t). The Laplace transform can be represented in the two-dimensional s-plane, with σ along the real axis and the frequency Ω on the imaginary axis. The Laplace transform evaluated for σ = 0 (i.e., for s = jΩ) is identical to the Fourier transform X(jΩ). If a continuous-time signal x(t) is bounded by:

|x(t)| ≤ K1 e^{σ1 t} u_c(t) + K2 e^{σ2 t} u_c(−t),

then one can determine that the Laplace transform converges for:

σ1 < Re{s} < σ2.

The signal x(t) can be recovered using the inverse Laplace transform:

x(t) = (1/2πj) ∫_Γ X(s) e^{st} ds,  (1.55)

where Γ is a contour within the region of convergence that goes from s = σ − j∞ to s = σ + j∞. The following rules are useful in determining the region of convergence:

. If x(t) is causal and of infinite duration, then the ROC lies to the right of all poles.
. If x(t) is anticausal and of infinite duration, then the ROC lies to the left of all the poles.
. If x(t) is a two-sided infinite-duration signal, then the ROC is the section of the plane included between the poles of the causal part and the poles of the anticausal part.
. If x(t) is of finite duration and there exists at least one value of s for which the Laplace transform exists, then the ROC is the whole plane.

The properties of the Laplace transform are similar to the properties of the z-transform and are summarized in Table 1.3.
FIGURE 1.15 Phase and Log-Magnitude of z-Transform and of DTFT of Finite Sequence with N = 10 and a = 0.5
1.6 Sampling and Quantization

Until this point in our discussion, continuous-time and discrete-time signals have been considered separately. We did not consider how the signals arise in practice or how they are linked in some cases. Now we examine a mechanism, the sampling process, by which a discrete-time signal is obtained from a continuous-time signal.
1.6.1 Sampling

Sampling refers to the process of obtaining a discrete-time signal by extracting values of a continuous-time signal at predetermined instants that are usually equally spaced. This process maps a function with an uncountably infinite domain to a function with a countable domain. One would like the mapping to be such as to allow an exact recovery of the continuous-time signal from the sampled signal. The conditions that allow a continuous-time signal to be reconstructed from its samples can be obtained from examining the signal Fourier transform. The main requirements are that the
signal be bandlimited and that an adequate sampling rate be used. Consider a signal sampled at equally spaced instants. The time separation between two adjacent samples is called sampling period (T), and its inverse is the sampling rate or sampling frequency (fs ). Because signal values are retained only at sampling instants, the signal information can be captured in the product of the continuous-time signal with a train, s(t), of impulses spaced apart by T :
s(t) = Σ_{n=−∞}^{∞} δ(t − nT).  (1.56)

A sampled continuous-time signal is just the product of the original signal xc(t) and the sampling signal s(t):

xs(t) = s(t)xc(t) = Σ_{n=−∞}^{∞} xc(nT) δ(t − nT).  (1.57)
The Fourier transform of the signal xs (t) is as follows:
TABLE 1.3 Properties of the Laplace Transform

Property               Continuous-time signal       Laplace transform
Linearity              y(t) = ax1(t) + bx2(t)       Y(s) = aX1(s) + bX2(s)
Time shift             y(t) = x(t − t0)             Y(s) = X(s)e^{−st0}
Exponential weighting  y(t) = x(t)e^{at}            Y(s) = X(s − a)
Differentiation in s   tx(t)                        −dX(s)/ds
Convolution            y(t) = x1(t) * x2(t)         Y(s) = X1(s)X2(s)
Product                y(t) = x1(t)x2(t)            Y(s) = X1(s) * X2(s)
Xs(jΩ) = ∫_{−∞}^{∞} ( Σ_{n=−∞}^{∞} xc(nT) δ(t − nT) ) e^{−jΩt} dt = Σ_{n=−∞}^{∞} xc(nT) e^{−jΩnT}.  (1.58)

The samples define a discrete-time signal x[n] = xc(nT), so that equation 1.58 becomes:

Xs(jΩ) = Σ_{n=−∞}^{∞} x[n] e^{−jΩnT} = X(e^{jΩT}) = X(e^{jω}),  (1.59)

where ω = ΩT is the normalized frequency. In Figure 1.16, a continuous-time signal xc(t) is shown, and we assume that its Fourier transform is bandlimited. If xc(t) is multiplied with an impulse train, s(t), we obtain the sampled signal xs(t). Multiplication in the time domain yields convolution in the frequency domain; this is another way of expressing the Fourier transform of the sampled signal xs(t), as a convolution integral of the Fourier transforms Xc(jΩ) and S(jΩ). If s(t) is a train of impulses with spacing T, its Fourier transform is a train of impulses with spacing 2π/T (i.e., 1/T in hertz):

S(jΩ) = (2π/T) Σ_{k=−∞}^{∞} δ(Ω − k 2π/T).  (1.60)

The convolution of this train of impulses with Xc(jΩ) results in periodic repetition of the Fourier transform at the locations of the impulses:

Xs(jΩ) = (1/2π) Xc(jΩ) * S(jΩ) = (1/T) Σ_{k=−∞}^{∞} Xc(j(Ω − k 2π/T)).  (1.61)

Equating the Fourier transform of xs(t) in equation 1.59 to equation 1.61, we get:

X(e^{jΩT}) = (1/T) Σ_{k=−∞}^{∞} Xc(j(Ω − k 2π/T)).  (1.62)

If we substitute the discrete frequency ω = ΩT, equation 1.62 becomes:

X(e^{jω}) = (1/T) Σ_{k=−∞}^{∞} Xc(j(ω − k2π)/T).  (1.63)

FIGURE 1.16 Graphic Development of Sampling Concepts. Panels: (A) Continuous-Time Signal x(t); (B) Fourier Transform X(Ω); (C) Sampling Function s(t); (D) FT of Sampling Function S(Ω); (E) x(t)s(t); (F) X(Ω)*S(Ω)
Therefore, sampling in the time domain corresponds to adding shifted replicas of the Fourier transform in the frequency domain. To avoid losing information, we need to avoid any overlap in the replicas in the frequency domain.
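The time-domain symptom of overlapping replicas is that two sinusoids whose frequencies differ by a multiple of the sampling rate are indistinguishable after sampling. A minimal illustration (Python with NumPy assumed; the sampling rate is an arbitrary example value):

```python
import numpy as np

fs = 8000.0                  # sampling rate in hertz (illustrative value)
T = 1.0 / fs
n = np.arange(64)
f1 = 1000.0
f2 = f1 + fs                 # differs from f1 by exactly the sampling rate
x1 = np.cos(2 * np.pi * f1 * n * T)
x2 = np.cos(2 * np.pi * f2 * n * T)
# cos(2*pi*(f1 + fs)*n*T) = cos(2*pi*f1*n*T + 2*pi*n): identical samples,
# so the 9-kHz tone aliases onto the 1-kHz tone when sampled at 8 kHz.
```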
1.6.2 Signal Reconstruction

To reconstruct the original signal (i.e., to extract the central replica from Xs(jΩ)), we need to apply an ideal low-pass filter. In the time domain, this means that we need to convolve the signal with a sinc function, shown in Figure 1.17 and given by:

h(t) = sin(πt/T)/(πt/T) = sinc(πt/T).  (1.64)

The reconstructed signal, xr(t), is the sum of shifted sinc functions weighted by the signal samples:

xr(t) = Σ_{n=−∞}^{∞} x[n] sin((π/T)(t − nT))/((π/T)(t − nT)) = Σ_{n=−∞}^{∞} x[n] sinc((π/T)(t − nT)).  (1.65)

The reconstructed signal xr(t) equals the original signal only if there is no overlapping of the replicas in the frequency domain. In Figure 1.16, replicas of the Fourier transform do not overlap, so the original signal can be recovered by simply using a low-pass filter; the sampling rate has been chosen adequately to avoid loss of information. Suppose that the Fourier transform of the signal xc(t) is bandlimited to |Ω| < Ω0 = 2πW so that:

Xc(jΩ) = 0,  |Ω| ≥ Ω0 = 2πW.  (1.66)

We need to choose a sampling rate such that:

fs = 1/T ≥ 2W.  (1.67)
In the example of Figure 1.17, the sampling period is T = 1/(2W). Figure 1.18 shows the case of undersampling for the signal with the Fourier transform shown in Figure 1.16. Undersampling means that the sampling rate is below the so-called Nyquist rate of 2W, resulting in frequency content above 2πW that cannot be recovered by applying the low-pass reconstruction filter. The content is folded back to lower frequencies, a phenomenon called aliasing. Finally, Figure 1.19 illustrates the case of oversampling, where fs > 2W; the requirements on the low-pass reconstruction filter can then be more relaxed. In many applications, there is a need to increase or decrease the sampling rate of a discrete-time signal directly in the discrete-time domain, without reconstructing the continuous-time signal. These tasks are referred to as interpolation and decimation and are basic operations in multirate signal processing, filter banks, and sub-band/wavelet analysis (Akansu et al., 1996; Strang and Nguyen, 1996; Vaidyanathan, 1993; Vetterli and Kovacevic, 1995).
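Equation 1.65 can be exercised directly: sampling a bandlimited signal above the Nyquist rate and summing shifted sinc functions reproduces the signal between the sample instants. The sketch below (Python with NumPy assumed) necessarily truncates the infinite sum to a finite number of samples, so the check is made well inside the sampled interval:

```python
import numpy as np

W = 100.0                    # bandwidth in hertz (illustrative value)
fs = 4 * W                   # comfortably above the Nyquist rate 2W
T = 1.0 / fs
n = np.arange(-400, 401)     # finite (truncated) set of sample indices

def x(t):
    return np.cos(2 * np.pi * 60.0 * t)   # a 60-Hz tone, inside the band

def reconstruct(t):
    # Eq. 1.65 with np.sinc(u) = sin(pi u)/(pi u):
    # x_r(t) = sum_n x(nT) sinc((pi/T)(t - nT)).
    return np.sum(x(n * T) * np.sinc((t - n * T) / T))

t0 = 0.0123                  # an instant between sampling points
error = abs(reconstruct(t0) - x(t0))
```

At a sampling instant t = kT all sinc terms but one vanish and the sum returns the sample exactly; between samples the truncated sum matches the signal to within the (small) truncation error.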
1.6.3 Quantization
FIGURE 1.17 Impulse Response of an Ideal Low-Pass Filter
At the beginning of this chapter, signals were classified into four categories according to the values assumed by the independent and the dependent variables. So far, we have focused our attention only on the independent variable, namely time, to distinguish between continuous-time and discrete-time signals, and we assumed that the dependent variable can assume a continuum of values. However, when processing data with a computer or with a digital signal processor, both variables can assume only discrete values. This means that the signal is discrete not only in time but also in amplitude. Such signals are referred to as digital signals. Therefore, if a signal has to be processed in digital form, first its independent variable and then its dependent variable are converted from continuous to discrete form, through sampling and quantization, respectively. Quantization is the process that converts data from infinite or high precision to finite or lower precision.
FIGURE 1.18 Graphic Development of Undersampling Concepts. Panels: (A) Continuous-Time Signal x(t); (B) Fourier Transform X(Ω); (C) Sampling Function s(t); (D) FT of Sampling Function S(Ω); (E) x(t)s(t); (F) X(Ω)*S(Ω)
Each digital variable is stored in a finite length register and is represented by a finite number of bits, B. There are two possible ways of representing data in binary form: fixed point and floating point. In fixed point representation, the binary point is fixed at a specific location, and the implementation of arithmetic operations takes into account this property. On the other hand, in the floating point representation, each number is represented by two parameters: a mantissa,
represented with BM bits, and an exponent E, represented with BE bits, with B = BM + BE. The error introduced in the processing of digital signals due to quantization depends on the format of the numbers, the quantization method (truncation or rounding), the number of bits used to represent signal values and filter coefficients, and so on. Details can be found in Mitra (1998) and Oppenheim et al. (1999).
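As a concrete sketch of fixed-point quantization, the function below rounds or truncates values in [−1, 1) to a grid with a sign bit and B − 1 fractional bits. This is an illustrative model, not a construction from the handbook; the rounding error is bounded by half a quantization step and the truncation error by a full step.

```python
import numpy as np

def quantize(x, B, mode="round"):
    # Uniform fixed-point quantizer: sign bit plus B-1 fractional bits,
    # so the quantization step is delta = 2^{-(B-1)}; output clipped to [-1, 1-delta].
    delta = 2.0 ** -(B - 1)
    scaled = x / delta
    q = np.round(scaled) if mode == "round" else np.floor(scaled)
    return np.clip(q * delta, -1.0, 1.0 - delta)

x = np.array([0.30, -0.42, 0.77])
rounded = quantize(x, 8)                   # |error| <= delta/2
truncated = quantize(x, 8, mode="trunc")   # 0 <= x - truncated < delta
```

The choice between rounding and truncation is exactly the "quantization method" the text mentions; the two produce error sequences with different means and therefore different noise behavior in a filter.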
FIGURE 1.19 Graphic Development of Oversampling Concepts. Panels: (A) Continuous-Time Signal x(t); (B) Fourier Transform X(Ω); (C) Sampling Function s(t); (D) FT of Sampling Function S(Ω); (E) x(t)s(t); (F) X(Ω)*S(Ω)
1.7 Discrete Fourier Transform
For general discrete-time signals, we examined the discrete-time Fourier transform (DTFT) as a useful analysis and design tool for frequency-domain representation. We now consider Fourier analysis that is expressly formulated for finite-duration discrete-time signals and that serves as a computational aid in many signal processing tasks. Consider a sequence x[n] that is zero for n other than 0 ≤ n ≤ N − 1. The discrete Fourier transform (DFT) of the signal x[n] is defined as:

X[k] = { Σ_{n=0}^{N−1} x[n] e^{−j2πnk/N},  0 ≤ k ≤ N − 1;
         0,                                otherwise. }  (1.68)

With the following two equations:

W_N = e^{−j2π/N},  (1.69)

R_N[n] = Σ_{m=0}^{N−1} δ[n − m],  (1.70)

the DFT can be rewritten as:

X[k] = Σ_{n=0}^{N−1} x[n] W_N^{nk} R_N[k].  (1.71)

Note that the DFT maps sequences to sequences. In other words, the frequency variable (k) is discrete, unlike that for the DTFT, where ω assumes all real values. The signal x[n] can be recovered from X[k] through the inverse discrete Fourier transform (IDFT) relation:

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{j(2π/N)nk} R_N[n] = (1/N) Σ_{k=0}^{N−1} X[k] W_N^{−nk} R_N[n],  (1.72)

which differs from the forward transform in the normalization factor 1/N and in the sign of the exponent.

1.7.1 Relation Between the DFT and the Samples of DTFT

In equation 1.37, we defined the DTFT of a signal x[n] as:

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}.  (1.73)

If the signal x[n] has finite duration and is limited to the interval 0 ≤ n ≤ N − 1, then the previous expression becomes:

X(e^{jω}) = Σ_{n=0}^{N−1} x[n] e^{−jωn}.  (1.74)

Clearly X[k], the DFT of x[n], consists of samples of X(e^{jω}) at ω = 2πk/N for 0 ≤ k ≤ N − 1. Therefore:

X[k] = X(e^{jω})|_{ω=2πk/N} R_N[k] = X(e^{j2πk/N}) R_N[k].  (1.75)

Because a sequence of duration N can be recovered from its DFT X[k], x[n] of duration N samples can be recovered from N samples of X(e^{jω}) at ω = 2πk/N, 0 ≤ k ≤ N − 1, by computing the inverse DFT of these samples. We conclude that a sequence x[n] of duration N can be completely recovered from N equally spaced samples of its DTFT by computing the inverse DFT. But what happens in the case of a general sequence that is not necessarily of duration N? That is, what is the inverse DFT of the samples of X(e^{jω}) at ω = 2πk/N, 0 ≤ k ≤ N − 1, when the sequence x[n] is not necessarily of duration N? Let X(e^{jω}) be the DTFT of x[n], which may be of duration greater than N. Then the following applies:

. Denote the samples of X(e^{jω}) as X_N[k] = X(e^{j2πk/N}) for k = 0, 1, ..., N − 1.
. Compute the inverse DFT of X_N[k] and call the result x_N[n].
. Determine x_N[n] such that:

X_N[k] = Σ_{n=0}^{N−1} x_N[n] e^{−j2πkn/N}.  (1.76)

It can be shown that:

x_N[n] = Σ_{r=−∞}^{∞} x[n + rN], n = 0, 1, ..., N − 1.  (1.77)

Proof:

X_N[k] = X(e^{j2πk/N}) = Σ_{n=−∞}^{∞} x[n] e^{−j2πkn/N}
       = Σ_{n=0}^{N−1} Σ_{r=−∞}^{∞} x[n + rN] e^{−j2πk(n+rN)/N}
       = Σ_{n=0}^{N−1} ( Σ_{r=−∞}^{∞} x[n + rN] ) e^{−j2πkn/N}.
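Equation 1.77 is easy to verify numerically: take N samples of the DTFT of a sequence longer than N, invert them with an N-point inverse DFT, and compare the result with the time-aliased sum Σ_r x[n + rN]. A sketch (Python with NumPy; np.fft supplies the inverse DFT):

```python
import numpy as np

x = np.arange(1.0, 13.0)     # a length-12 sequence x[n], n = 0..11
N = 8                        # sample its DTFT at only N = 8 points

n = np.arange(len(x))
# Samples of the DTFT at w = 2*pi*k/N, k = 0..N-1.
XN = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

xN = np.fft.ifft(XN).real    # N-point inverse DFT of the DTFT samples

# Eq. 1.77 predicts time aliasing: x_N[n] = x[n] + x[n + 8] for n = 0..3,
# and x_N[n] = x[n] for n = 4..7.
expected = x[:N].copy()
expected[: len(x) - N] += x[N:]
```

Because the sequence is longer than N, the inverse DFT does not return x[n] itself but its periodic overlap, which is why zero-padding matters in the convolution application that follows.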
1.7.2 Linear and Circular Convolution

In implementing discrete-time LSI systems, we need to compute the convolution sum, otherwise called linear convolution, of the input signal x[n] and the impulse response h[n] of the system. For finite-duration sequences, this convolution can be carried out using DFT computation.
Let x[n] and h[n] be of finite duration. Assume x[n] is zero outside the interval 0 ≤ n ≤ N − 1 and h[n] is zero outside the interval 0 ≤ n ≤ M − 1. The sequence y[n] is the linear convolution of x[n] and h[n]:

y[n] = x[n] * h[n] = Σ_{k=−∞}^{∞} x[k] h[n − k],  (1.78)

and it is of duration M + N − 1. If we wish to recover y[n] from its DTFT Y(e^{jω}), we need samples of it at the (M + N − 1) points ω = 2πk/(M + N − 1), k = 0, 1, ..., M + N − 2. To get these samples, we observe that Y(e^{jω}) = X(e^{jω})H(e^{jω}). Therefore, if we know the values of the samples of X(e^{jω}) and H(e^{jω}) at these frequencies, then the samples of Y(e^{jω}) can be obtained as a product of these samples. But the samples of X(e^{jω}) at these points are the same as the (M + N − 1)-point DFT of x[n]. Similarly, the samples of H(e^{jω}) at these points are obtained by computing the (M + N − 1)-point DFT of h[n]. The product X[k]H[k] is therefore equal to Y[k], the DFT of y[n]. However, the relationship Y[k] = X[k]H[k] is valid only if Y[k] is of length (M + N − 1), which is the length of the linear convolution y[n]. To obtain this result, we need to zero-pad the sequences to the desired length before computing the DFTs and multiplying them together. Let L ≥ max{N, M}. If we compute the product X[k]H[k] using an L-point DFT, possibly without the necessary zero-padding, then the resulting sequence can be denoted as YL[k], with inverse DFT given by:

yL[n] = Σ_{r=−∞}^{∞} y[n + rL] = x[n] ⊛ h[n],  (1.79)

where ⊛ denotes circular convolution of the sequences x[n] and h[n]. The samples of the circular convolution, yL[n], are obtained from the samples of the linear convolution, y[n], by wrapping around all samples that exceed the index n = L − 1, as shown in equation 1.79. From the definitions of linear and circular convolution, we observe that if L ≥ (N + M − 1), then the two expressions coincide and yL[n] = y[n] as determined previously.
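The wrap-around relationship of equation 1.79 can be demonstrated with DFTs of different lengths. With L = N + M − 1 the DFT product reproduces the linear convolution; with L = max{N, M} it yields the circular convolution, whose samples are the wrapped-around samples of y[n]. A sketch (Python with NumPy; np.convolve supplies the linear convolution as reference):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])     # N = 4
h = np.array([1.0, -1.0, 2.0])         # M = 3
y = np.convolve(x, h)                  # linear convolution, length N + M - 1 = 6

# Zero-padding both DFTs to L = N + M - 1 recovers the linear convolution.
L = len(x) + len(h) - 1
y_dft = np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)).real

# With L = max(N, M) = 4 the inverse DFT is the circular convolution:
# samples of y[n] beyond index L - 1 wrap around (eq. 1.79).
L4 = max(len(x), len(h))
y_circ = np.fft.ifft(np.fft.fft(x, L4) * np.fft.fft(h, L4)).real
y_wrapped = y[:L4].copy()
y_wrapped[: len(y) - L4] += y[L4:]
```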
1.7.3 Use of DFT

Many fast algorithms have been developed to implement the DFT, and these are referred to as fast Fourier transform (FFT) algorithms (Oppenheim et al., 1997). The FFT allows an efficient computation of the DFT, with a number of operations proportional to N log N instead of N². DFT-based algorithms are commonly used in implementing signal processing tasks:
1.8 Summary In this chapter, we provided an overview of discrete-time and continuous-time signals and systems. The emphasis was on discrete time, but notions for the two cases were presented in parallel. Fundamentals of discrete-time systems and properties such as linearity, time-invariance, causality, and stability were presented. Fourier analysis of signals was presented followed by a discussion of frequency-selective filtering. Basics of ztransforms were covered next along with a discussion of the z-domain interpretation of system properties, such as causality and stability. The operations of sampling and quantization that relate continuous and discrete time signals were briefly discussed. Finally, the discrete Fourier transform was presented and its importance in implementing signal processing algorithms described. Our discussion was largely confined to the description and processing of deterministic signals. A discussion of issues in implementation and the use and design of digital signal processors (Lapsley et al., 1997) is not within the scope of this chapter. Many excellent books cover the topic of signal processing, such as those by Haykin and Van Veen (1999), Lathi (1998), Mitra (1998), Oppenheim (1999, 1997), Proakis and Manolakis (1995) Strang and Nguyen (1996), Vaidyanathan (1993), and Vetterli and Kovacevic (1995). Introductory articles on new developments and research in signal processing appear in IEEE Signal Processing Magazine. Most of the research articles describing the advances in signal processing appear in IEEE Transactions on Signal Processing, IEEE Transactions on Image Processing, IEEE Transactions on Circuits and Systems, Electronics Letters; International Conference on Acoustics, Speech, and Signal Processing (ICASSP), and International Symposium on Circuits and Systems. Several information resources are available on the World Wide Web. 
The impact of signal processing on information technology is described in a workshop report by the National Science Foundation (1994).
References

Akansu, A.N., and Smith, M.J.T. (1996). Sub-band and wavelet transforms: Design and applications. Norwell, MA: Kluwer Academic.
Anastassiou, D. (2001). Genomic signal processing. IEEE Signal Processing Magazine 18 (4), 8–20.
Ansari, R., and Memon, N. (2000). The JPEG lossy image compression standard. In A. Bovik (Ed.), The handbook of image and video processing. Burlington, MA: Academic Press.
Baillet, S., Mosher, J.C., and Leahy, R.M. (2001). Electromagnetic brain mapping. IEEE Signal Processing Magazine 18 (6).
Chen, T. (2001). Audiovisual speech processing. IEEE Signal Processing Magazine 18 (1), 9–21.
Ebrahimi, T., Vesin, J.M., and Garcia, G. (2003). Brain-computer interface multimedia communication. IEEE Signal Processing Magazine 20 (1), 14–29.
Hanzo, L., Wong, C.H., and Cherriman, P. (2000). Channel-adaptive wideband wireless video telephony. IEEE Signal Processing Magazine 17 (4), 10–30.
Haykin, S., and Van Veen, B. (1999). Signals and systems. New York: John Wiley & Sons.
Lapsley, P., Bier, J., Shoham, A., and Lee, E.A. (1997). DSP processor fundamentals: Architectures and features. New York: IEEE Press.
Lathi, B.P. (1998). Signal processing and linear systems. New York: Oxford University.
Mitra, S.K. (1998). Digital signal processing: A computer-based approach. New York: McGraw-Hill.
Molina, R., Nunez, J., Cortijo, F.J., and Mateos, J. (2001). Image restoration in astronomy. IEEE Signal Processing Magazine 18 (2), 11–29.
Ohm, J.R. (1999). Encoding and reconstruction of multiview video objects. IEEE Signal Processing Magazine 16 (3), 47–54.
Oppenheim, A.V., Schafer, R.W., and Buck, J.R. (1999). Discrete-time signal processing. Englewood Cliffs, NJ: Prentice Hall.
Oppenheim, A.V., and Willsky, A.S. (1997). Signals and systems. Englewood Cliffs, NJ: Prentice Hall.
Podilchuk, C.I., and Delp, E.J. (2001). Digital watermarking, algorithms, and applications. IEEE Signal Processing Magazine 18 (4), 33–46.
Proakis, J., and Manolakis, M. (1995). Digital signal processing. Englewood Cliffs, NJ: Prentice Hall.
Strang, G., and Nguyen, T. (1996). Wavelets and filter banks. Wellesley, MA: Wellesley-Cambridge.
Strintzis, M.G., and Malassiotis, S. (1999). Object-based coding of stereoscopic and 3D image sequences. IEEE Signal Processing Magazine 16 (3), 14–28.
Strobel, N., Spors, S., and Rabenstein, R. (2001). Joint audio-video object localization and tracking. IEEE Signal Processing Magazine 18 (1), 22–31.
Vaidyanathan, P.P. (1993). Multirate systems and filter banks. Englewood Cliffs, NJ: Prentice Hall.
Vetterli, M., and Kovacevic, J. (1995). Wavelets and sub-band coding. Englewood Cliffs, NJ: Prentice Hall.
Wang, Y., Wenger, S., Wen, J., and Katsaggelos, A.K. (2000). Error resilient video coding techniques. IEEE Signal Processing Magazine 18 (4), 61–82.
National Science Foundation. (1994). Signal processing and the national information infrastructure. Report of workshop organized by the National Science Foundation, Ballston, Virginia. http://www-isl.stanford.edu/gray/iii.pdf
2 Digital Filters Marcio G. Siqueira Cisco Systems, Sunnyvale, California, USA
Paulo S.R. Diniz Program of Electrical Engineering, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil,
2.1  Introduction ..................................................................... 839
2.2  Digital Signal Processing Systems .......................................... 840
2.3  Sampling of Analog Signals .................................................. 840
     2.3.1 Sampling Theorem
2.4  Digital Filters and Linear Systems ......................................... 841
     2.4.1 Linear Time-Invariant Systems . 2.4.2 Difference Equation
2.5  Finite Impulse Response (FIR) Filters ..................................... 844
2.6  Infinite Impulse Response Filters .......................................... 845
2.7  Digital Filter Realizations .................................................. 846
     2.7.1 FIR Filter Structures . 2.7.2 IIR Filter Realizations
2.8  FIR Filter Approximation Methods .......................................... 848
     2.8.1 Window Method . 2.8.2 Saramäki Window
2.9  FIR Filter Design by Optimization ......................................... 849
     2.9.1 Problem Formulation . 2.9.2 Chebyshev Method . 2.9.3 Weighted Least-Squares Method . 2.9.4 Minimax Filter Design Employing WLS . 2.9.5 The WLS-Chebyshev Method
2.10 IIR Filter Approximations ................................................... 854
     2.10.1 Analog Filter Transformation Methods . 2.10.2 Bilinear Transformation by Pascal Matrix
2.11 Quantization in Digital Filters ............................................. 856
     2.11.1 Coefficient Quantization . 2.11.2 Quantization Noise . 2.11.3 Overflow Limit Cycles . 2.11.4 Granularity Limit Cycles
2.12 Real-Time Implementation of Digital Filters ............................... 859
2.13 Conclusion ..................................................................... 860
     References ...................................................................... 860
2.1 Introduction

The rapid development of integrated circuit technology led to the development of very powerful digital machines able to perform a very high number of computations in a very short period of time. In addition, digital machines are flexible, reliable, reproducible, and relatively cheap. As a consequence, several signal processing tasks originally performed in the analog domain have been implemented in the digital domain. In fact, several signal processing tools are only feasible to implement in the digital domain. As most real-life signals are continuous functions of one or more independent variables, such as time, temperature, and position, it would be natural to represent and manipulate these signals in their original analog domain. In this chapter we will consider time as the single independent variable. If sources of
information are originally continuous in time, which is usually the case, and they occupy a limited range of frequencies, it is possible to transform continuous-time signals into discrete-time signals without losing any significant information. By interpolation, the discrete-time signal can be mapped back into the original continuous-time signal. It is worth mentioning that some signals are originally discrete in time, such as the monthly earnings of a worker or the average yearly temperature of a city. As a result, the current technological trend is to sample and quantize signals as early as possible so that most operations are performed in the digital domain. The sampling process consists of transforming a continuous signal into a discrete signal (a sequence). This transformation is necessary because digital machines are suited to manipulating sequences of quantized numbers at high speed. Therefore, quantization
Marcio G. Siqueira and Paulo S.R. Diniz
of the continuous-amplitude samples generates digital signals tailored to be processed by digital machines. A digital filter is one of the basic building blocks in digital signal processing systems. Digital filtering consists of mapping a discrete-time sequence into another discrete-time sequence that highlights the desired information while reducing the importance of the undesired information. Digital filters are present in various digital signal processing applications related to speech, audio, image, video, and multirate processing systems as well as in communication systems, CD players, digital radio, television, and control systems. Digital filters can have either fixed (Antoniou, 1993; Jackson, 1996; Oppenheim and Schafer, 1989; Mitra, 2001; Diniz, 2002) or adaptive coefficients (Diniz, 2002). In this chapter, we will describe:

• The most widely used types of digital filter transfer functions;
• How to design these transfer functions (i.e., how to calculate the coefficients of digital filter transfer functions);
• How to map a transfer function into a digital filter structure;
• The main concerns in the actual implementation in a finite-precision digital machine; and
• The current trends in the hardware and software implementations of digital filters.
In particular, we focus on digital filters that implement linear systems represented by difference equations. We will show how to design digital filters based on magnitude specifications. In addition, we present efficient digital filter structures that are often required for implementation with minimal cost, taking into consideration that real-time implementation leads to nonlinear distortions caused by finite word-length representations of internal signals and coefficients. Finally, the most widely used approaches to implementing a digital filter are briefly discussed.
2.2 Digital Signal Processing Systems

A typical digital signal processing (DSP) system comprises the modules shown in Figure 2.1:

• Analog-to-digital (A/D) converter: Assuming that the input signal is band-limited to frequency components below half of the sampling frequency, the A/D converter obtains signal samples at equally spaced time intervals and converts the level of these samples into a numeric representation that can be used by a digital filter.
• Digital filter: The digital filter uses the discrete-time numeric representation obtained by the A/D converter to perform arithmetic operations that lead to the filtered output signal.
• Digital-to-analog (D/A) converter: The D/A converter converts the digital filter output into analog samples that are equally spaced in time.
• Low-pass filter: The low-pass filter converts the analog samples of the digital filter output into a continuous-time signal.
2.3 Sampling of Analog Signals

Figure 2.2 shows the mechanism behind sampling of continuous-time signals and its effect on the frequency representation. Figure 2.2(A) shows that a continuous signal x(t) is multiplied in the time domain by a train of unitary impulses x_i(t). The resulting signal from the multiplication operation is x*(t), which contains samples of x(t). Figure 2.2(B) shows the frequency representation of the original signal x(t) before sampling. Figure 2.2(C) shows the spectrum of the impulse train x_i(t). Finally, Figure 2.3 shows the spectrum of the sampled signal x*(t). According to the convolution theorem, the Fourier transform of the sampled signal x*(t) should be the convolution, in the frequency domain, of the Fourier transform of the impulse train and the Fourier transform of the original signal before sampling. Therefore, it can be shown that:

X_D(e^{jω}) = (1/T) Σ_{l=−∞}^{∞} X(jω/T + j2πl/T) = (1/T) Σ_{l=−∞}^{∞} X(jω_a + jω_s l).  (2.1)
From the above equation and from Figure 2.2, it can be seen that the spectrum of the sampled signal is repeated at every ω_s = 2π/T interval. An important consequence of this spectrum repetition is the sampling theorem, shown next.
2.3.1 Sampling Theorem

If a continuous signal is band-limited [i.e., X(jω_a) = 0 for |ω_a| > ω_c], x(t) can be reconstructed from x_D(n) for −∞ < n < ∞ if ω_s > 2ω_c, where ω_s is the sampling frequency in radians per second.

FIGURE 2.1 Architecture of a Complete DSP System Used to Filter an Analog Signal

Figure 2.4 shows sampling effects on a continuous-time signal. Figure 2.4(A) shows the continuous-time signal spectrum, and Figures 2.4(B), 2.4(C), and 2.4(D) show the sampled signal spectra for the cases ω_c = ω_s/2, ω_c > ω_s/2, and ω_c < ω_s/2. It can be seen from these figures that only Figure 2.4(D) does not produce distortion in the frequencies between −ω_c and ω_c. Figure 2.4(C) shows that when the sampled signal has frequency components above ω_s/2 (half of the sampling frequency), the sampled signal spectrum will be distorted in the frequencies around ω_s/2. In the case of Figure 2.4(B), no distortion occurred only because the signal had no power at ω_s/2. This distortion is known as aliasing and is undesirable in most cases. As a consequence, only case 2.4(D)¹ can be used to recover the spectrum of the original continuous signal shown in Figure 2.4(A). The recovery of the continuous signal spectrum from the sampled signal can be theoretically accomplished by using an analog filter with flat frequency response to retain only the components between −ω_s/2 and ω_s/2 of the spectra shown in Figure 2.4(A). The frequency response of an analog filter with these characteristics is shown in Figure 2.5. The impulse response of this filter is as follows:

h(t) = T sin(ω_LP t) / (πt).  (2.2)

In equation 2.2, ω_LP is the cutoff frequency of the designed low-pass filter; ω_LP must be chosen to guarantee no aliasing (i.e., ω_c < ω_LP < ω_s/2). The impulse response of equation 2.2 is depicted in Figure 2.5. It is easy to verify that a causal filter with this exact impulse response cannot be implemented. In practice, it is possible to design analog filters that are good approximations of the frequency response in Figure 2.5.

FIGURE 2.2 Sampling of Continuous-Time Signals: (A) Signal Sampling; (B) Signal Spectrum; (C) Impulse Train Spectrum

2.4 Digital Filters and Linear Systems

An example of a discrete-time signal is shown in Figure 2.6. In a digital filter, the input signal x(n) is a sequence of numbers indexed by the integer n that can assume only a finite number of amplitude values. Such a sequence might originate from a continuous-time signal x(t) by periodically sampling it at the time instants t = nT, where T is the sampling interval. The output sequence y(n) arises by applying x(n) to the input of the digital filter, with the relationship between x(n) and y(n) represented by the operator F as:

y(n) = F[x(n)].  (2.3)
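The aliasing effect described in Section 2.3 can be checked numerically: when sampled at rate f_s, a sinusoid at frequency f and one at f + f_s produce identical sample sequences. A minimal sketch (the frequencies and sampling rate below are illustrative, not taken from the text):

```python
import math

def sample(freq_hz, fs_hz, n_samples):
    """Sample a unit-amplitude cosine of the given frequency at rate fs_hz."""
    return [math.cos(2 * math.pi * freq_hz * n / fs_hz) for n in range(n_samples)]

fs = 1000.0                       # sampling rate in Hz (illustrative)
x_low = sample(100.0, fs, 8)      # 100 Hz: below fs/2, sampled without aliasing
x_alias = sample(1100.0, fs, 8)   # 1100 Hz = 100 Hz + fs: aliases onto 100 Hz

# The two sequences are numerically identical: after sampling, the 1100-Hz
# component cannot be distinguished from the 100-Hz one.
same = all(abs(a - b) < 1e-9 for a, b in zip(x_low, x_alias))
```

This is exactly why the anti-aliasing low-pass filter must precede the A/D converter in Figure 2.1.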
FIGURE 2.3 Spectrum of Sampled Signal
¹ In the case of Figure 2.4(B), distortion could occur only at ω_s/2.
FIGURE 2.4 Sampling Effects: (A) Continuous-Time Signal Spectrum; (B) Sampled Signal Spectrum, ω_c = ω_s/2; (C) Sampled Signal Spectrum, ω_c > ω_s/2; (D) Sampled Signal Spectrum, ω_c < ω_s/2
2.4.1 Linear Time-Invariant Systems

The main classes of digital filters are the linear, time-invariant (LTI) and causal filters. A linear digital filter responds to a weighted sum of input signals with the same weighted sum of the corresponding individual responses; that is:

F[a_1 x_1(n) + a_2 x_2(n)] = a_1 F[x_1(n)] + a_2 F[x_2(n)],  (2.4)

for any sequences x_1(n) and x_2(n) and for any arbitrary constants a_1 and a_2. A digital filter is said to be time-invariant when its response to an input sequence remains the same, irrespective of the time instant at which the input is applied to the filter. That is, if F[x(n)] = y(n), then:

F[x(n − n_0)] = y(n − n_0),  (2.5)

for all integers n and n_0. A causal digital filter is one whose response does not anticipate the behavior of the excitation signal. Therefore, for any two input sequences x_1(n) and x_2(n) such that x_1(n) = x_2(n) for n ≤ n_0, the corresponding responses of the digital filter are identical for n ≤ n_0; that is:

F[x_1(n)] = F[x_2(n)], for n ≤ n_0.  (2.6)
An initially relaxed linear time-invariant digital filter is characterized by its response to the unit sample, or impulse, sequence δ(n). The filter response when excited by such a sequence is denoted by h(n) and is referred to as the impulse response of the digital filter. Observe that if the digital filter is causal, then h(n) = 0 for n < 0. An arbitrary input sequence
FIGURE 2.6 Discrete-Time Signal Representation

FIGURE 2.5 Ideal Low-Pass Filter

can be expressed as a sum of delayed and weighted impulse sequences:

x(n) = Σ_{k=−∞}^{∞} x(k)δ(n − k),  (2.7)

and the response of an LTI digital filter to x(n) can then be expressed by:

y(n) = F[Σ_{k=−∞}^{∞} x(k)δ(n − k)]
     = Σ_{k=−∞}^{∞} x(k)F[δ(n − k)]
     = Σ_{k=−∞}^{∞} x(k)h(n − k)
     = x(n) * h(n).  (2.8)

The summation in the last two lines of the above expression, called the convolution sum, relates the output sequence of a digital filter to its impulse response h(n) and to the input sequence x(n). Equation 2.8 can be rewritten as:

y(n) = Σ_{k=−∞}^{∞} h(k)x(n − k).  (2.9)

The z-transform of a sequence x(n) is defined as:

X(z) = Σ_{n=−∞}^{∞} x(n)z^{−n}.  (2.10)

The transfer function of a digital filter is the ratio of the z-transform of the output sequence to the z-transform of the input signal:

H(z) = Y(z)/X(z).  (2.11)

Taking the z-transform of both sides of the convolution expression of equation 2.9 yields:

Σ_{n=−∞}^{∞} y(n)z^{−n} = Σ_{n=−∞}^{∞} Σ_{k=−∞}^{∞} h(k)x(n − k)z^{−n},  (2.12)

and substituting variables (l = n − k):

Σ_{n=−∞}^{∞} y(n)z^{−n} = Σ_{k=−∞}^{∞} h(k)z^{−k} Σ_{l=−∞}^{∞} x(l)z^{−l}.  (2.13)

The following relation among the z-transforms of the output Y(z), the input X(z), and the impulse response H(z) of a digital filter is then obtained:

Y(z) = H(z)X(z).  (2.14)

Hence, the transfer function of an LTI digital filter is the z-transform of its impulse response.

2.4.2 Difference Equation

A general digital filter can be described by the following difference equation:

Σ_{i=0}^{N} a_i y(n − i) − Σ_{l=0}^{M} b_l x(n − l) = 0.  (2.15)
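The convolution sum of equation 2.9 translates directly into code. The following sketch (illustrative, not from the text) computes y(n) = Σ_k h(k)x(n − k) for finite-length sequences:

```python
def convolve(x, h):
    """Convolution sum y(n) = sum_k h(k) x(n - k) for finite sequences
    (equation 2.9, with both sequences zero outside their stored range)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

# Filtering a unit impulse reproduces the impulse response h(n).
h = [1.0, 0.5, 0.25]
y = convolve([1.0], h)   # -> [1.0, 0.5, 0.25]
```

Because convolution in time corresponds to multiplication of z-transforms (equation 2.14), this loop is the time-domain counterpart of Y(z) = H(z)X(z).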
Most digital filters can be described by difference equations that are suitable for implementation in digital machines. It is important to guarantee that the difference equation represents a system that is linear, time-invariant, and causal. This is guaranteed if the auxiliary conditions for the underlying difference equation correspond to its initial conditions and the system is initially relaxed. Nonrecursive digital filters are a function of past input samples only. In general, they can be described by equation 2.15 with a_0 = 1 and a_i = 0 for i = 1, …, N:

y(n) = Σ_{l=0}^{M} b_l x(n − l).  (2.16)

It can be easily shown that these filters have finite impulse response and are known as FIR filters. For a_0 = 1, equation 2.15 can be rewritten as:

y(n) = −Σ_{i=1}^{N} a_i y(n − i) + Σ_{l=0}^{M} b_l x(n − l).  (2.17)

It can be shown that these filters have, in most cases, infinite impulse response [i.e., y(n) ≠ 0 when n → ∞] and are therefore known as IIR filters.

2.5 Finite Impulse Response (FIR) Filters

The general transfer function of a finite impulse response (FIR) filter is given by:

H(z) = Σ_{l=0}^{M} b_l z^{−l} = H_0 z^{−M} Π_{l=0}^{M} (z − z_l).  (2.18)

A filter with the transfer function of equation 2.18 will always be stable because it has no poles outside the unit circle. FIR filters can be designed to have a linear phase. A linear phase is a desirable feature in a number of signal processing applications (e.g., image processing). FIR filters can be designed by using optimization packages or by using approximations based on different types of windows. The main drawback is that, to satisfy demanding magnitude specifications, an FIR filter requires a relatively high number of multiplications, additions, and storage elements. These facts make FIR filters potentially more expensive than IIR filters in applications where the number of arithmetic operations or storage elements is expensive or limited. However, FIR filters are widely used because they are suitable for designing linear-phase filters. An FIR filter has a linear phase if and only if its impulse response is symmetric or antisymmetric; that is, h(n) = ±h(M − n).

Let us consider in detail a particular case of linear-phase FIR filters. For M even:

H(z) = Σ_{n=0}^{M/2−1} h(n)z^{−n} + h(M/2)z^{−M/2} + Σ_{n=M/2+1}^{M} h(n)z^{−n}
     = Σ_{n=0}^{M/2−1} h(n)(z^{−n} + z^{−(M−n)}) + h(M/2)z^{−M/2}.  (2.19)

At the unit circle: evaluating equation 2.19 (M even) at z = e^{jω}, we have:

H(e^{jω}) = Σ_{n=0}^{M/2−1} h(n)e^{−jωn} + h(M/2)e^{−jωM/2} + Σ_{n=M/2+1}^{M} h(n)e^{−jωn}.

Considering the case of a symmetric impulse response [i.e., h(n) = h(M − n) for n = 0, 1, …, M/2], we have:

H(e^{jω}) = e^{−jωM/2} [h(M/2) + Σ_{n=0}^{M/2−1} 2h(n) cos(ω(M/2 − n))].

By replacing n by (M/2 − m), the equation above becomes:

H(e^{jω}) = e^{−jωM/2} [h(M/2) + Σ_{m=1}^{M/2} 2h(M/2 − m) cos(ωm)].

Defining a_0 = h(M/2) and a_m = 2h(M/2 − m), for m = 1, 2, …, M/2, we have:

H(e^{jω}) = e^{−jωM/2} Σ_{m=0}^{M/2} a_m cos(ωm).  (2.20)

In summary, we have four possible cases of FIR filters with a linear phase:
• Case A: M Even and Symmetric Impulse Response
(1) Characteristics: h(n) = h(M − n) for n = 0, 1, …, M/2;
H(e^{jω}) = e^{−jωM/2} Σ_{m=0}^{M/2} a_m cos(ωm), where a_0 = h(M/2) and a_m = 2h(M/2 − m) for m = 1, 2, …, M/2.
(2) Phase: Θ(ω) = tan^{−1}{Im[H(e^{jω})]/Re[H(e^{jω})]} = −ωM/2.
(3) Group delay: τ = −∂Θ(ω)/∂ω = M/2.

• Case B: M Odd and Symmetric Impulse Response
(1) Characteristics: h(n) = h(M − n) for n = 0, 1, …, (M/2 − 0.5);
H(e^{jω}) = e^{−jωM/2} Σ_{m=1}^{M/2+0.5} b_m cos[ω(m − 1/2)], where b_m = 2h(M/2 + 0.5 − m) for m = 1, 2, …, (M/2 + 0.5).
(2) Phase: Θ(ω) = −ωM/2.
(3) Group delay: τ = −∂Θ(ω)/∂ω = M/2.

• Case C: M Even and Antisymmetric Impulse Response
(1) Characteristics: h(n) = −h(M − n) for n = 0, 1, …, (M/2 − 1) and h(M/2) = 0;
H(e^{jω}) = e^{−j(ωM/2 − π/2)} Σ_{m=1}^{M/2} c_m sin(ωm), where c_m = 2h(M/2 − m) for m = 1, 2, …, M/2.
(2) Phase: Θ(ω) = −ωM/2 + π/2.
(3) Group delay: τ = −∂Θ(ω)/∂ω = M/2.

• Case D: M Odd and Antisymmetric Impulse Response
(1) Characteristics: h(n) = −h(M − n) for n = 0, 1, …, (M/2 − 0.5);
H(e^{jω}) = e^{−j(ωM/2 − π/2)} Σ_{m=1}^{M/2+0.5} d_m sin[ω(m − 1/2)], where d_m = 2h(M/2 + 0.5 − m) for m = 1, 2, …, (M/2 + 0.5).
(2) Phase: Θ(ω) = −ωM/2 + π/2.
(3) Group delay: τ = −∂Θ(ω)/∂ω = M/2.
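The linear-phase property of Case A can be verified numerically: for a symmetric impulse response, removing the delay factor e^{−jωM/2} from H(e^{jω}) must leave a purely real amplitude function. A small sketch (the coefficients are illustrative):

```python
import cmath

def freq_response(h, w):
    """H(e^{jw}) = sum_n h(n) e^{-jwn} for an FIR impulse response h."""
    return sum(h[n] * cmath.exp(-1j * w * n) for n in range(len(h)))

# Case A example: M = 4 (even) and h(n) = h(M - n).
h = [1.0, 2.0, 3.0, 2.0, 1.0]
M = len(h) - 1

w = 0.3
H = freq_response(h, w)
# Linear phase means H(e^{jw}) = e^{-jwM/2} A(w) with A(w) real, so
# e^{+jwM/2} H(e^{jw}) should have a (numerically) zero imaginary part.
A = H * cmath.exp(1j * w * M / 2)
linear_phase = abs(A.imag) < 1e-9
```

The same check with an antisymmetric response (Cases C and D) leaves a purely imaginary amplitude, consistent with the extra π/2 phase term.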
As an illustration, Figure 2.7 depicts typical impulse responses for the linear-phase FIR filter cases. Equation 2.19 for linear-phase FIR filters can be rewritten as:

H(z) = z^{−M/2} Σ_{l=0}^{L} g_l (z^l ± z^{−l}),  (2.21)

where the g_l's are related to h(l), with L = M/2 − 0.5 for M odd and L = M/2 for M even. If there is a zero of H(z) at z_0, then z_0^{−1} is also a zero of H(z). This means that all complex zeros not located on the unit circle occur in conjugate and reciprocal quadruples. Zeros on the real axis outside the unit circle occur in reciprocal pairs. Zeros on the unit circle occur in conjugate pairs. It is possible to verify that the case B linear-phase transfer function has a zero at ω = π, so high-pass and band-stop filters cannot be approximated using this case. For case C, there are zeros at ω = π and ω = 0; therefore, this case is not suitable for low-pass, high-pass, or band-stop filter designs. Case D has a zero at ω = 0 and is not suitable for low-pass and band-stop designs.
FIGURE 2.7 Linear-Phase FIR Filter: Typical Impulse Response, (A) Case A; (B) Case B; (C) Case C; (D) Case D

2.6 Infinite Impulse Response Filters
The general transfer function for infinite impulse response (IIR) filters is:

H(z) = Y(z)/X(z) = Σ_{l=0}^{M} b_l z^{−l} / (1 + Σ_{i=1}^{N} a_i z^{−i}).  (2.22)

The above equation can be rewritten in the following alternative forms:

H(z) = H_0 Π_{l=0}^{M} (1 − z^{−1}z_l) / Π_{i=0}^{N} (1 − z^{−1}p_i) = H_0 z^{N−M} Π_{l=0}^{M} (z − z_l) / Π_{i=0}^{N} (z − p_i).  (2.23)

Equation 2.23 shows that, unlike FIR filters, IIR filters have poles, located at the p_i. Therefore, it is required that |p_i| < 1 to guarantee stability of the IIR transfer function.
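The stability condition |p_i| < 1 can be checked directly for a second-order section by computing its poles with the quadratic formula. A sketch (the resonator coefficients are illustrative):

```python
import math

def biquad_poles(m1, m2):
    """Poles of H(z) = 1 / (1 + m1 z^-1 + m2 z^-2): roots of z^2 + m1 z + m2."""
    d = complex(m1 * m1 - 4 * m2, 0.0) ** 0.5   # complex sqrt handles conjugate pairs
    return ((-m1 + d) / 2, (-m1 - d) / 2)

def is_stable(m1, m2):
    """Stable iff every pole lies strictly inside the unit circle (|p_i| < 1)."""
    return all(abs(p) < 1.0 for p in biquad_poles(m1, m2))

# Illustrative resonator with poles at 0.9 e^{+-j pi/4}: stable.
r, theta = 0.9, math.pi / 4
stable = is_stable(-2 * r * math.cos(theta), r * r)
```

For higher-order denominators the same test applies pole by pole after factoring, which is one reason cascade and parallel second-order realizations (Section 2.7.2) are convenient in practice.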
2.7 Digital Filter Realizations

From equation 2.15, it can be seen that digital filters use three basic types of operations:

• Delay: used to store previous samples of the input and output signals, besides internal states of the filter.
• Multiplier: used to implement the multiplication operations of a particular digital filter structure; also used to implement divisions.
• Adder: used to implement the additions and subtractions of the digital filter.

2.7.1 FIR Filter Structures
General FIR filter structures are shown in Figures 2.8 and 2.9. Figure 2.8 shows the direct-form nonrecursive structure for FIR filters, while Figure 2.9 depicts an alternative transposed² structure. These structures are equivalent when implemented in software. Since the impulse responses of linear-phase FIR filters are either symmetric or antisymmetric [i.e., h(n) = ±h(M − n)], this property can be exploited to reduce the overall number of multiplications of the FIR filter realization. Figure 2.10 shows the resulting realizations for the case of symmetric h(n). For the antisymmetric case, the structures are straightforward to obtain.
FIGURE 2.8 Direct-Form Nonrecursive Structure

FIGURE 2.9 Alternative Form Nonrecursive Structure

FIGURE 2.10 Linear-Phase FIR Filter Realization With Symmetric Impulse Response: (A) Odd Order; (B) Even Order

2.7.2 IIR Filter Realizations

A general IIR transfer function can be written as in equation 2.22. The numerator in this transfer function can be implemented by using an FIR filter. The denominator entails the use of a recursive structure. The cascade of these realizations for the numerator and denominator is shown in Figure 2.11. If the recursive part is implemented first, the realization in Figure 2.11 can be transformed into the structure in Figure 2.12 for the case N = M. The transpose of the structure in Figure 2.12 is shown in Figure 2.13. In general, IIR transfer functions can be implemented as a cascade or as a summation of lower order IIR structures. In case an IIR transfer function is implemented as a cascade of second-order sections of order m, it is possible to write:

H(z) = Π_{k=1}^{m} (γ_{0k} + γ_{1k}z^{−1} + γ_{2k}z^{−2}) / (1 + m_{1k}z^{−1} + m_{2k}z^{−2})
     = Π_{k=1}^{m} (γ_{0k}z² + γ_{1k}z + γ_{2k}) / (z² + m_{1k}z + m_{2k})
     = H_0 Π_{k=1}^{m} (z² + γ′_{1k}z + γ′_{2k}) / (z² + m_{1k}z + m_{2k}) = Π_{k=1}^{m} H_k(z).  (2.24)

² This structure is characterized by the branches being reversed, such that the sinks turn to summations and the summations become distribution nodes.
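The cascade form of equation 2.24 can be sketched as a chain of second-order sections. The direct-form II update below (two delay states per section) is one common realization; the coefficient names follow equation 2.24, and the test values are illustrative:

```python
def biquad_df2(x, g0, g1, g2, m1, m2):
    """One second-order section of equation 2.24,
    H_k(z) = (g0 + g1 z^-1 + g2 z^-2) / (1 + m1 z^-1 + m2 z^-2),
    realized in direct form II with two delay states."""
    w1 = w2 = 0.0
    y = []
    for xn in x:
        w0 = xn - m1 * w1 - m2 * w2            # recursive (denominator) part
        y.append(g0 * w0 + g1 * w1 + g2 * w2)  # nonrecursive (numerator) part
        w1, w2 = w0, w1
    return y

def cascade(x, sections):
    """Cascade realization: the output of each section feeds the next one."""
    for sec in sections:
        x = biquad_df2(x, *sec)
    return x

# Sanity check: identity sections (g0 = 1, everything else 0) pass the signal through.
out = cascade([1.0, 2.0, 3.0], [(1.0, 0.0, 0.0, 0.0, 0.0)] * 2)
```

Each section needs only two delays regardless of how its numerator and denominator are paired, which is part of why second-order sections are favored for finite-precision implementations.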
FIGURE 2.11 Recursive Structure in Direct Form

FIGURE 2.12 Direct-Form Structure for N = M

FIGURE 2.13 Alternative Direct-Form Structure for N = M

Equation 2.24 is depicted in Figure 2.14(A). In case an IIR transfer function is implemented as the addition of lower order sections of order m, it is possible to write:

H(z) = Σ_{k=1}^{m} (γ^p_{0k}z² + γ^p_{1k}z + γ^p_{2k}) / (z² + m_{1k}z + m_{2k})
     = h′_0 + Σ_{k=1}^{m} (γ^{p′}_{0k}z² + γ^{p′}_{1k}z) / (z² + m_{1k}z + m_{2k})
     = h′_0 + Σ_{k=1}^{m} (γ^{p″}_{1k}z + γ^{p″}_{2k}) / (z² + m_{1k}z + m_{2k}) = Σ_{k=1}^{m} H_k(z).  (2.25)

FIGURE 2.14 Realization with Second-Order Sections: (A) Cascade; (B) Parallel
Equation 2.25 is depicted in Figure 2.14(B). In general, for implementations with high signal-to-quantization-noise ratios, sections of order two are preferred. Different types of structures can be used for the second-order sections, such as the direct-form structures shown in Figures 2.15(A) and 2.15(B) or alternative biquadratic structures like state-space sections (Diniz and Antoniou, 1986).
FIGURE 2.15 Basic Second-Order Sections: (A) Type 1; (B) Type 2

2.8 FIR Filter Approximation Methods

There are three basic approximation methods for FIR filters satisfying given specifications. The first one, the window method, is very simple and well established but generally leads to transfer functions of higher order than those obtained by optimization methods. The second method, known as the minimax method, calculates a transfer function of minimum order to satisfy prescribed specifications. The minimax method is iterative and is available in most filter design packages. The third method, known as the weighted least-squares-Chebyshev³ (WLS-Chebyshev) method, is reliable but not widely known. The WLS-Chebyshev method is particularly suitable for multirate systems because the resulting transfer functions might have decreasing energy at a prescribed range of frequencies.

2.8.1 Window Method

The window method starts by obtaining the impulse response of ideal prototype filters. These responses for standard filters are shown in Figure 2.16. In general, the impulse response is calculated according to:

h(n) = (1/2π) ∫_{−π}^{π} H(e^{jω}) e^{jωn} dω,  (2.26)

for a desired filter.

FIGURE 2.16 Ideal Frequency Responses: (A) Low-Pass; (B) Band-Pass; (C) Band-Reject; (D) High-Pass

³ The WLS-Chebyshev method is also known as the peak constrained method.
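The window method can be sketched as follows: evaluate equation 2.26 for the ideal low-pass prototype (which gives a shifted sin(ω_c m)/(πm) response), truncate to M + 1 samples, and multiply by a window. The Hamming window is used here purely as an illustration, and the cutoff and order are arbitrary:

```python
import math

def ideal_lowpass(wc, M):
    """Ideal low-pass impulse response from equation 2.26, delayed by M/2
    and truncated to M + 1 samples."""
    h = []
    for n in range(M + 1):
        m = n - M / 2
        h.append(wc / math.pi if m == 0 else math.sin(wc * m) / (math.pi * m))
    return h

def hamming(M):
    """Hamming window of length M + 1."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / M) for n in range(M + 1)]

def fir_lowpass_window(wc, M):
    """Window method: the ideal response multiplied by a finite window."""
    return [hi * wi for hi, wi in zip(ideal_lowpass(wc, M), hamming(M))]

h = fir_lowpass_window(math.pi / 4, 20)   # illustrative cutoff and order
```

The resulting h(n) is symmetric about n = M/2, so the designed filter falls into linear-phase Case A of Section 2.5.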
In the case of a band-pass filter, the prototype filter can be described by the following transfer function:

g_n = (1/2)[1 − cos(πn/P)],  0 ≤ n ≤ P;
    = cos[π(n − P) / (2(K − P))],  P ≤ n ≤ K;
    = 0,  otherwise,  (3.3)

where:
P = time of the pulse peak.
K = final sample before complete closure.

Equation 3.3 is popular because it can flexibly represent many realistic pulse shapes by adjusting its parameters. The Rosenberg pulse, however, cannot be approximated well using a model with only poles, so we add another concern to the discussion of zeros below. The radiation component, R(z), is a low-impedance load that terminates the vocal tract and converts the volume velocity at the lips to a pressure wave in the far field. The radiation load has been observed to have a high-pass filtering effect that is well modeled by a simple differencer:
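Equation 3.3 can be sketched directly; the parameter values below (P, K, and the period length N) are illustrative, not from the text:

```python
import math

def rosenberg_pulse(P, K, N):
    """One period (length N) of the Rosenberg glottal pulse of equation 3.3:
    rising phase for 0 <= n <= P, closing phase for P < n <= K, zero after K."""
    g = []
    for n in range(N):
        if n <= P:
            g.append(0.5 * (1.0 - math.cos(math.pi * n / P)))
        elif n <= K:
            g.append(math.cos(math.pi * (n - P) / (2.0 * (K - P))))
        else:
            g.append(0.0)
    return g

g = rosenberg_pulse(P=25, K=35, N=50)   # illustrative parameter values
```

The pulse rises smoothly from zero to its peak at n = P, then falls to zero at n = K, which is the shape the parameters P and K control.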
3 Methods, Models, and Algorithms for Modern Speech Processing

R(z) = Z_lips(z) = 1 − z_o z^{−1},  z_o < 1, z_o ≈ 1.  (3.4)
We must finally come to terms with the apparent contradiction between the need for zeros in the model and the desire to avoid them. Indeed, R(z) itself is composed of a single zero and no poles. Some of the early writings on this subject argued that if equation 3.2 is a good model for the glottal dynamics, then one of the poles of G(z) will approximately cancel the zero z_o in R(z). This does not, however, resolve the question of whether G(z) should contain zeros, or whether, in certain cases, T(z) should include zeros to appropriately model a phoneme. The answer to these questions depends largely on what aspect of speech we are trying to model. In most applications, correct spectral magnitude information is all that is required of the model. Generally speaking, this is because the human auditory system is "phase deaf" (Milner, 1970), so information gleaned from the speech is extracted from its magnitude spectrum. While a detailed discussion is beyond our current scope (Deller et al., 2000), the critical fact that justifies the all-pole model is that a magnitude (but not a phase) spectrum for any rational system function can be modeled to an arbitrary degree of accuracy with a sufficient number of stable poles. Therefore, the all-pole model can exactly preserve the magnitude spectral dynamics (the "information") in the speech but might not retain the phase characteristics. In fact, for the speech model to be stable and all-pole, it must necessarily have a minimum-phase spectrum, regardless of the true characteristics of the signal being encoded. Ideally, the all-pole model will have the correct magnitude spectrum but minimum-phase characteristic with respect to the "true" model. If the objective is to code, store, resynthesize, and perform other such tasks on the magnitude spectral characteristics but not necessarily on the temporal dynamics, the all-pole model is perfectly adequate.
One should not, however, anticipate that the all-pole model can preserve time-domain features of speech waveforms because such features depend explicitly on the phase spectrum. Let us summarize these important results. Ignoring the technicalities of z-transform existence, we assume that the output (pressure wave) of the speech production system is the result of filtering the appropriate excitation by two (in the unvoiced case) or three (voiced) linear, separable filters. Ignoring the developments above momentarily, let us suppose that we know "exact" or "true" linear models of the various components. By this we mean that we (somehow) know models that will exactly produce the speech waveform under consideration. These models are only constrained to be linear and stable and are otherwise unrestricted. In the unvoiced case, S(z) = E(z)T(z)R(z), where E(z) represents a partial realization of a white noise process. In the voiced case, S(z) = E(z)G(z)T(z)R(z), where E(z) represents a DT impulse train of period P, the pitch period of the utterance. Accordingly, the true overall system function is as follows:
H(z) = S(z)/E(z) = T(z)R(z) in the unvoiced case, or G(z)T(z)R(z) in the voiced case.  (3.5)
With enough painstaking experimental work, we could probably deduce reasonable "true" models for any stationary utterance of interest. In general, we would expect these models to require zeros as well as poles in their system functions. Yet, an all-pole model exists that will at least produce a model/speech waveform with the correct magnitude spectrum (Deller et al., 2000), and a waveform with correct spectral magnitude is frequently sufficient for coding, recognition, and synthesis. We henceforth assume, therefore, that during a stationary frame of speech, the speech production system can be characterized by a z-domain system function of the form:

H(z) = H_0 / (1 − Σ_{i=1}^{M} a_i z^{−i}),  with H_0 > 0,  (3.6)
which is driven by an excitation sequence:

e_n = Σ_{q=−∞}^{∞} δ_{n−qP} in the voiced case, or zero-mean, unity-variance, uncorrelated noise in the unvoiced case.  (3.7)

In equation 3.6, 0 < M < ∞. A block diagram for this system is shown in Figure 3.6.
FIGURE 3.6 Block Diagram for z-Domain System Function [a DT impulse train generator (voiced) and a noise generator (unvoiced), each with gain H_0, feed the all-pole system model H(z)/H_0 through a voiced/unvoiced switch]
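The system of Figure 3.6 can be sketched as follows: the excitation of equation 3.7 (impulse train or white noise, chosen by the voiced/unvoiced switch) drives the all-pole recursion implied by equation 3.6. The filter coefficients and pitch period below are illustrative:

```python
import random

def synthesize(a, H0, n_samples, voiced, pitch_period=80, seed=0):
    """Drive the all-pole model of equation 3.6 with the excitation of
    equation 3.7: s_n = sum_i a_i s_{n-i} + H0 e_n."""
    rng = random.Random(seed)
    s = []
    for n in range(n_samples):
        if voiced:
            e = 1.0 if n % pitch_period == 0 else 0.0   # DT impulse train
        else:
            e = rng.gauss(0.0, 1.0)                     # zero-mean, unit-variance noise
        sn = H0 * e
        for i in range(1, len(a) + 1):
            if n - i >= 0:
                sn += a[i - 1] * s[n - i]
        s.append(sn)
    return s

# Illustrative one-pole "vocal tract" excited by a period-4 impulse train.
v = synthesize([0.5], H0=1.0, n_samples=10, voiced=True, pitch_period=4)
```

Each impulse launches a decaying response that is re-excited every pitch period, which is the essence of voiced-speech synthesis in this model.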
3.3 Fundamental Methods and Algorithms Used in Speech Processing

3.3.1 Parametric and Feature-Based Representations of the Speech Signal

Speech processing algorithms rarely work directly with the (sampled) speech waveform but rather with a sequence of quantified features extracted from the signal, usually at regularly spaced intervals. In a typical scenario, features are computed periodically in time over 256-point frames that are intentionally overlapped by 128 samples. For a 10-kHz sampling rate, this represents feature computations using 25.6-ms segments of speech that overlap by 12.8 ms as the processing moves through time. The frame duration is chosen using the rule of thumb that speech signal dynamics remain stationary over blocks of 10 to 20 ms, balanced against the need to represent the waveform with the smallest possible number of feature computations (for economy of computation and storage, adherence to bandwidth requirements, and other factors). Increased frame duration results in enhanced spectral resolution of the feature being computed, while sequences of features computed over shorter windows provide better time resolution (Deller et al., 2000). Overlapping frames are used to smooth the transitions of features from one stationary region to the next. Speech features are often vector-valued. An example would be (estimates of) the set of filter coefficients {a_i} associated with the DT model of speech in equation 3.6. As we detail in Section 3.3.2, these coefficient estimates, typically 14 in number, are the result of "linear prediction" analysis of speech. Such a set of features is called a parametric representation of the frame because it consists of values for the parameters associated with a predetermined model. In the present example, the set of 14 model parameters might be complemented by an estimate of the system gain (H_0 in equation 3.6) and a binary-valued feature to indicate a voiced or unvoiced frame.
Each frame, therefore, would be represented by a real vector of dimension 16. Features that have been used to characterize speech usually convey information about spectral properties. These include direct spectral measures (FFT based), mappings or transformations of spectral information (see the cepstrum in Section 3.3.3), correlation information, zero-crossing measures (crude frequency-content information), model parameter estimates (as in the linear-prediction example, or the "reflection coefficients" discussed in Section 3.3.2), and various transformations of such parameters. Energy measures are also used as conveyors of information about, for example, the presence of voicing or for "silence/speech" decisions.
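The frame-based analysis described above (256-sample frames overlapped by 128 samples) can be sketched as:

```python
def frame_signal(x, frame_len=256, overlap=128):
    """Split a signal into overlapping analysis frames, e.g., 256-sample
    frames overlapped by 128 samples as in the typical scenario above."""
    step = frame_len - overlap
    frames = []
    start = 0
    while start + frame_len <= len(x):
        frames.append(x[start:start + frame_len])
        start += step
    return frames

frames = frame_signal(list(range(1024)))
```

Each frame would then be reduced to its feature vector (LP coefficients, gain, voicing flag, and so on) before further processing.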
3.3.2 Linear Prediction Analysis

LP Model and the Normal Equations

The time-domain implication of the all-pole system function is that the difference equation governing the system includes
John R. Deller, Jr. and John Hansen

memorized values for the output only; given these output lags, only the present sample of the input is needed to determine s_n to within a gain factor at each n. In particular:

s_n = a_1 s_{n−1} + a_2 s_{n−2} + ⋯ + a_M s_{n−M} + H_0 e_n = aᵀs_n + H_0 e_n,  (3.8)

where we have defined the vectors a ≜ [a_1 a_2 ⋯ a_M]ᵀ and s_n ≜ [s_{n−1} s_{n−2} ⋯ s_{n−M}]ᵀ. A model of the form of equation 3.8, driven by a zero-mean, uncorrelated (white) sequence, is called an autoregressive (AR) model by time-series analysts. The output of an AR model of order M, denoted AR(M), is said to "(linearly) regress on itself," meaning that the present value s_n is a linear combination of (or a prediction based on) its past M values plus a component, H_0 e_n, which is not predictable from any data, past or future. Let us assume for the moment that {e_n} is indeed stationary white noise, so {s_n} is a stationary random process. In light of the difference equation 3.8, a natural way to estimate the parameters of a signal believed to follow an AR(M) model is to design an FIR filter that attempts to "predict" s_n from its past M values. Let us denote the estimated filter parameters by {â_i}, i = 1, …, M, and the prediction at time n by ŝ_n:

ŝ_n = â_1 s_{n−1} + â_2 s_{n−2} + ⋯ + â_M s_{n−M} = âᵀs_n,  (3.9)
in which the vector a^ 2 RM is defined similarly to a in equation 3.8. In accordance with the AR model, the parameters, a^, should be chosen to account for all but an unpredictable component, {H0 en }, in {sn }. That is, we seek a^ such that:11 ^en ¼ sn a^T sn H0 en
(a white-noise sequence of variance H02 ):
(3:10) The sequence {^en }, which represents the error in prediction at each n, is called the prediction residual. Using the orthogonality principle from optimization theory (Haykin, 1996), it can be shown that selecting parameters {^ai } such that {^en } has minimum power (i.e., minimum E{^en2 } in which E denotes the statistical expectation) will also result in the desired spectral whiteness of {^en }. Therefore, we seek a^ that minimizes the mean squared error (MSE) in prediction; that is: a^ ¼ argmin E{^en2 } ¼ argmin E(sn a T sn )2 a
a
¼ argmin E sn {ai }
M X
!2
(3:11)
ai sni
i¼1
11 The approximation should be taken to mean that {^en } and {H0 en } are approximately the ‘‘same stochastic process.’’
3 Methods Models and Algorithms for Modern Speech Processing

The well-known solution is the unique parameter vector solving the system of normal equations:

$$\bar{\mathbf{R}}_s \hat{\mathbf{a}} = \bar{\mathbf{r}}_s, \qquad (3.12)$$

in which $\bar{\mathbf{R}}_s \in \mathbb{R}^{M \times M}$ is the autocorrelation matrix for the process $\{s_n\}$ [i.e., a matrix whose (i, k) element is $\bar{r}_{s;k-i} = E\{s_{n-i} s_{n-k}\}$] and $\bar{\mathbf{r}}_s \in \mathbb{R}^M$ is an auxiliary vector of autocorrelations, $\bar{\mathbf{r}}_s = [\bar{r}_{s;1} \cdots \bar{r}_{s;M}]^T$. The overbars are to distinguish the correlation quantities as statistical expectations rather than time averages. In the voiced case, where $\{e_n\}$ is modeled as a periodic DT impulse train, the normal equations are used without modification except that the statistical autocorrelations must be replaced by long-term time averages because the speech waveform is deterministic. This approach can be shown to produce unbiased results as the pitch period becomes infinite. Generally speaking, the analysis works satisfactorily for voiced speech for most pitch ranges encountered in applications, although performance predictably degrades for very high-pitched utterances that are more typical of women's and children's voices (Deller et al., 2000).

Practical Solutions of the Normal Equations

Frame-Wise Normal Equations

In practice, the parameters must be identified using finite signal records with nominally constant dynamics, typically 10 to 20 ms. Let us denote the block of samples over which the analysis takes place by $n \in \{0, \ldots, N-1\}$. There are two methods conventionally used to estimate the LP parameters in speech processing. In the autocorrelation method, the solution takes the form of equation 3.12, with the statistical autocorrelations estimated by values of the autocorrelation sequence:

$$r_{s;k} = \sum_{n=0}^{N-1} s_n^w s_{n-k}^w, \qquad k = 0, 1, \ldots, M, \qquad (3.13)$$

in which $s_n^w = s_n w_n$, where $w_n$ is a data window (e.g., Hamming) of duration $N > M$ beginning at the time origin, $n = 0$. For future purposes, let us write the resulting system of equations in vector–matrix form as:

$$\mathbf{R}_s \hat{\mathbf{a}} = \mathbf{r}_s, \qquad (3.14)$$

in which the overbars have been dropped with respect to equation 3.12 to indicate the short-term temporal solution. In the so-called covariance method, the same solution form applies, but the statistical autocorrelations are estimated by values of the covariance sequence:

$$\varphi_{s;i,k} = \sum_{n=0}^{N-1} s_{n-i} s_{n-k}, \qquad i, k = 0, 1, \ldots, M. \qquad (3.15)$$

The $\bar{r}_{s;k-i}$ is estimated by $\varphi_{s;i,k}$. The data are not windowed in this case. Let us denote the corresponding vector–matrix equation by:

$$\boldsymbol{\Phi}_s \hat{\mathbf{a}} = \boldsymbol{\varphi}_s. \qquad (3.16)$$
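As a concrete illustration, the autocorrelation method of equations 3.13 and 3.14 can be sketched in a few lines of NumPy. This is a minimal sketch, not the chapter's code: the function name `lp_autocorrelation`, the Hamming window spanning the whole frame, and the synthetic AR(2) test signal are our own assumptions.

```python
import numpy as np

def lp_autocorrelation(s, M):
    """Autocorrelation-method LP: window the frame, estimate r_{s;k}
    (eq. 3.13), and solve the Toeplitz normal equations R_s a = r_s
    (eq. 3.14) for the predictor coefficients a_hat."""
    sw = s * np.hamming(len(s))                        # {s_n^w}: windowed frame
    r = np.array([np.dot(sw[k:], sw[:len(sw) - k]) for k in range(M + 1)])
    R = np.array([[r[abs(i - k)] for k in range(M)] for i in range(M)])
    return np.linalg.solve(R, r[1:M + 1])              # a_hat

# Synthetic check on a known AR(2) signal: s_n = 1.5 s_{n-1} - 0.9 s_{n-2} + e_n
rng = np.random.default_rng(0)
s = np.zeros(4000)
e = rng.standard_normal(4000)
for n in range(2, 4000):
    s[n] = 1.5 * s[n - 1] - 0.9 * s[n - 2] + e[n]
a_hat = lp_autocorrelation(s, 2)                       # close to [1.5, -0.9]
```

A general solver is used here only for clarity; the Toeplitz structure of the correlation matrix is what the Levinson-Durbin recursion discussed below exploits.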
In addition to its interpretation as a temporally estimated version of the stochastic normal equations, each LP result can be derived as the solution to a short-term optimization problem (Deller et al., 2000).

Autocorrelation Method and Lattice Structures

In either the autocorrelation or covariance solution, obtaining the final LP parameter estimate, $\hat{\mathbf{a}}$, ultimately requires the solution of a matrix–vector equation of the form $\mathbf{A}\mathbf{x} = \mathbf{b}$, in which $\mathbf{A}$ is a square matrix that is reasonably assumed to be of full rank, M. This is a widely studied problem for which numerous methods exist (Golub and van Loan, 1989). Solutions to general problems of this form tend to be of $O(M^3)$ computational complexity. In the autocorrelation case, the symmetric and Toeplitz structure of the correlation matrix $\mathbf{R}_s$ is exploited to obtain numerical and computational [$O(M^2)$] benefits as well as an alternative view of the solution that leads to the so-called lattice methods. The basis for the more efficient solution is the Levinson-Durbin (L-D) recursion (Deller et al., 2000; Haykin, 1996; Golub and van Loan, 1989), a recursive-in-model-order solution (see Figure 3.7). By this we mean that, using the fixed block of windowed data $\{s_n^w\}_{n=0}^{N-1}$ at each iteration, the solution for the desired order-M model is successively built up from lower order models, beginning with the "zero-order predictor," which is no predictor at all ($\hat{s}_n = 0$ for every n). Associated with the ith iteration, for each $i \in [0, M]$, is a sequence, say $\{\hat{e}_n^i\}$, that can be interpreted as the error in prediction associated with the ith-order filter, as well as a "backward" prediction error, say $\{\hat{b}_n^i\}$, corresponding to an attempt to predict the waveform on a reversed time axis (i.e., "backward prediction" in time).
Each stage is also characterized by a parcor (short for partial correlation) coefficient or reflection coefficient, the ith one of which, say $k_i$, can be interpreted as the proportion of the "forward error" from stage i that is reflected back as backward error as the forward error propagates into the (i+1)st stage. Figure 3.8 illustrates the analysis lattice, so named because it takes the speech as input and sequentially analyzes its dynamics to return only the unpredictable error sequence as output. It is a simple matter to reverse the flow of the lattice to create a synthesis lattice that reconstructs the speech from the (forward) error signal (Deller et al., 2000). As the name suggests, the reflection coefficients, $\{k_i\}_{i=1}^{M}$, bear a strong formal relationship to the acoustic reflection coefficients associated with the vocal-tract acoustic-tube model briefly discussed in Section 3.2.2, with the forward and backward errors analogous to similar flow quantities in the model. This
Initialization: For $\ell = 0$: $\xi_0$ = scaled total energy in the "error" from an "order-0" predictor = average energy in the speech frame $\{s_n^w\}_{n=0}^{N-1}$ = $r_{s;0}$.

Recursion: For $\ell = 1, 2, \ldots, M$:

1. Compute the $\ell$th reflection coefficient, $k_\ell = \dfrac{1}{\xi_{\ell-1}}\left[ r_{s;\ell} - \sum_{i=1}^{\ell-1} \hat{a}_i^{\ell-1} r_{s;\ell-i} \right]$.
2. Generate the order-$\ell$ set of LP parameters: $\hat{a}_\ell^\ell = k_\ell$; $\hat{a}_i^\ell = \hat{a}_i^{\ell-1} - k_\ell\, \hat{a}_{\ell-i}^{\ell-1}$, $i = 1, \ldots, \ell-1$.
3. Compute the error energy associated with the order-$\ell$ solution, $\xi_\ell = \xi_{\ell-1}\left(1 - k_\ell^2\right)$.
4. Return to step 1 with $\ell$ replaced by $\ell + 1$ if $\ell < M$.

FIGURE 3.7 The Levinson-Durbin Recursion. The recursion is applied to the time frame $n = 0, 1, \ldots, N-1$.
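The recursion in Figure 3.7 translates almost line for line into code. The sketch below is ours (the function name and the toy autocorrelation sequence are assumptions); it solves the same Toeplitz system as a general linear solver, but in $O(M^2)$ operations and with the reflection coefficients as a byproduct.

```python
import numpy as np

def levinson_durbin(r, M):
    """Levinson-Durbin recursion of Figure 3.7.  r[0..M] are the frame
    autocorrelations r_{s;k}; returns the order-M LP parameters, the
    reflection coefficients k_1..k_M, and the final error energy xi_M."""
    a = np.zeros(M + 1)                   # a[i] holds a_hat_i of the current order
    k = np.zeros(M + 1)
    xi = r[0]                             # xi_0: energy of the order-0 "error"
    for l in range(1, M + 1):
        # Step 1: l-th reflection coefficient.
        k[l] = (r[l] - np.dot(a[1:l], r[l - 1:0:-1])) / xi
        # Step 2: order update of the LP parameters.
        a_prev = a.copy()
        a[l] = k[l]
        for i in range(1, l):
            a[i] = a_prev[i] - k[l] * a_prev[l - i]
        # Step 3: error energy of the order-l solution.
        xi *= 1.0 - k[l] ** 2
    return a[1:], k[1:], xi

# Agreement with a direct solve of the normal equations:
r = np.array([2.0, 1.0, 0.5, 0.25])
a_ld, k_refl, xi = levinson_durbin(r, 3)
R = np.array([[r[abs(i - j)] for j in range(3)] for i in range(3)])
# a_ld should match np.linalg.solve(R, r[1:4]); all |k_i| < 1 here.
```

Checking $|k_\ell| < 1$ inside the loop is exactly the built-in stability monitor mentioned later in the text.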
FIGURE 3.8 Analysis Lattice Structure. [Figure: the lattice takes $\varepsilon_n^0 = s_n$ as input; stage i applies the reflection coefficient $k_i$ and a unit delay $z^{-1}$ to produce the forward errors $\varepsilon_n^i$ and backward errors $\beta_n^i$, terminating in $\varepsilon_n^M = \hat{e}_n$.]
is a manifestation of the physics of the all-pole system in the mathematics of the DT model. The set $\{k_i\}_{i=1}^{M}$ contains information equivalent to the set of LP coefficients, as is evident from the fact that the reflection coefficients can be used to reconstruct the speech from the error signal $\{\hat{e}_n\}$. The reflection coefficients are often used as alternatives to the LP parameters in compression and coding schemes because of their desirable numerical property of having a restricted dynamic range (in theory), $|k_i| < 1$ for all i (Deller et al., 2000). Checking the reflection coefficients for this property on an iteration-by-iteration basis in the L-D recursion offers a convenient check for numerical stability. Further topics related to lattice structures and reflection coefficients include formulations that compute reflection coefficients independently of the LP parameters (Itakura and Saito, 1969; Burg, 1967); sets of parameters that are related to, or are transformations of, reflection coefficients (Deller et al., 2000; Viswanathan and Makhoul, 1975; Itakura, 1975); and more efficient lattice structures (Makhoul, 1977; Strobach, 1991).

Decomposition Methods for the Covariance Solution

The covariance method equations to be solved are of the form of equation 3.16. Like the autocorrelation matrix $\mathbf{R}_s$, the covariance matrix $\boldsymbol{\Phi}_s$ is symmetric and positive definite. Unlike $\mathbf{R}_s$, however, $\boldsymbol{\Phi}_s$ is not Toeplitz, so there is less structure to exploit in the solution. The most common solution
methods are based on the decomposition of the covariance matrix into lower and upper triangular matrices, say L and U, such that $\boldsymbol{\Phi}_s = \mathbf{L}\mathbf{U}$. Given this decomposition, equation 3.16 can be solved by sequentially solving $\mathbf{L}\mathbf{y} = \boldsymbol{\varphi}_s$ and $\mathbf{U}\hat{\mathbf{a}} = \mathbf{y}$, in each case using simple algorithms (Golub and van Loan, 1989). The most efficient algorithms for accomplishing the LU decomposition are based on two methods from linear algebra (for symmetric matrices): the $\mathbf{L}\mathbf{D}\mathbf{L}^T$ decomposition and the Cholesky or square-root decomposition. For details, see Golub and van Loan (1989). Specific algorithms are found in Deller et al. (2000) and Golub and van Loan (1989).

Comparisons and Further Notes

When approached as "batch" (i.e., nonlattice) solutions to the $\mathbf{A}\mathbf{x} = \mathbf{b}$ problem according to the methods described above, the solution methods are of low polynomial complexity in M, with the autocorrelation method being slightly, but insignificantly, more efficient. Variations exist on the general algorithms discussed above, some being more efficient than others (Markel and Gray, 1976), but all commonly used methods are of comparable complexity. Factors other than computational burden must be considered in choosing between the two general methods. Stability of the solution, for example, can be an important consideration. The autocorrelation solution is theoretically guaranteed to represent a stable filter if infinite-precision arithmetic is used. In practice, the L-D recursion contains a built-in check
($|k_i| < 1$) for instabilities arising from finite-precision computations. No such theoretical guarantee or numerical check is available in the covariance case. The lattice approaches are the most computationally expensive of the conventional methods, but they are among the most popular because they offer the inherent ability to generate the parcor coefficients directly, to monitor stability, and to deduce appropriate model orders "online." Finally, the popularity of the autocorrelation method notwithstanding, the covariance method has an important property that can be very significant in certain applications. In fact, the covariance approach is precisely equivalent to the classical least-square-error (LSE) solution of an overdetermined system of equations (Golub and van Loan, 1989). This fact can be exploited to develop alternative solutions (Deller et al., 2000; McWhirter, 1983; Deller and Odeh, 1993).

Estimating the Model Order and Gain

In practice, the "true" order of the system of equation 3.6 is unknown. A generally accepted rule of thumb is to allocate one pole pair per kilohertz of Nyquist bandwidth; then, if the segment is voiced, one can add an additional four or five poles (whichever makes the total even) (Markel and Gray, 1976). Since the order of the model would ordinarily not be changed from frame to frame, the "extra" four or five would be included for unvoiced speech frames as well. Hence, we choose:

$$M = F_s + \{4 \text{ or } 5\} \qquad (F_s = \text{sampling frequency in kHz}). \qquad (3.17)$$
The pole-pair-per-kHz rule is based on the fact that speech spectra tend to have about one formant (which can ordinarily be modeled by a resonant pole pair) per kHz (Fant, 1956). The supplemental poles are needed in the voiced case to account for the glottal shaping filter and to generally allow some flexibility in fitting the model to the speech spectrum. An experimental technique for determining an appropriate value for M is to study the sum of squared errors, either $\xi_{\mathrm{autoc}} = \sum_{n=0}^{N+M-1} (\hat{s}_n - s_n^w)^2$ (autocorrelation method) or $\xi_{\mathrm{cov}} = \sum_{n=0}^{N-1} (\hat{s}_n - s_n)^2$ (covariance method), as a function of M. Each is a nonincreasing function of M, and each will tend to reach a point of diminishing returns in the neighborhood of the value of M suggested in equation 3.17. For unvoiced speech, the asymptote will be reached before the four or five supplemental poles in this equation are added. In the unvoiced case, $\xi_{\mathrm{autoc}}$ and $\xi_{\mathrm{cov}}$ will also tend to level off at a higher final value than the respective values in the voiced case, suggesting that the LP model is not as accurate for the former. Note that $\xi_{\mathrm{autoc}}$ as a function of M is a byproduct of the L-D recursion, so the solution method itself can be used to ascertain an appropriate model size. Finally, to have a completely specified model, we need a means for estimating the gain parameter, $H_0$. The relative
gain from frame to frame is clearly important for proper resynthesis of a speech waveform, for example. By some straightforward manipulations involving the difference equations 3.8 and 3.9, two sets of approximations are obtained (Deller et al., 2000). These are as follows:

$$\hat{H}_0 = \begin{cases} \sqrt{\, r_{s;0} - \sum_{i=1}^{M} \hat{a}_i r_{s;i} \,}, & \text{unvoiced case}; \\[4pt] \sqrt{\, P\left[ r_{s;0} - \sum_{i=1}^{M} \hat{a}_i r_{s;i} \right] \,}, & \text{voiced case } (P = \text{pitch period}). \end{cases} \qquad (3.18)$$

$$\hat{H}_0 = \begin{cases} \sqrt{r_{\hat{e};0}} = \sqrt{\xi_{\mathrm{autoc}}}, & \text{unvoiced case}; \\[4pt] \sqrt{P\, r_{\hat{e};0}} = \sqrt{P\, \xi_{\mathrm{autoc}}}, & \text{voiced case } (P = \text{pitch period}). \end{cases} \qquad (3.19)$$

Each of these expressions assumes that the autocorrelation method is used. For the covariance method, the values $\varphi_{s;0,i}$ are inserted in place of the $r_{s;i}$ in equation 3.18. In equation 3.19, $\varphi_{\hat{e};0,0}$ and $\xi_{\mathrm{cov}}$ are similarly substituted.
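Two remaining computational pieces of the LP analysis above, the covariance solution of equations 3.15 and 3.16 via the Cholesky route, and the gain estimate of equation 3.18, can be sketched as follows. This is an illustrative sketch, not the chapter's code: the function names are ours, and running the covariance sums over $n = M, \ldots, N-1$ (so that only in-frame samples are needed) is our assumption about the analysis range.

```python
import numpy as np

def lp_covariance(s, M):
    """Covariance-method LP: form phi_{s;i,k} (eq. 3.15) over the
    unwindowed frame and solve Phi a = psi (eq. 3.16) by Cholesky
    factorization, Phi = L L^T, followed by two triangular solves."""
    N = len(s)
    phi = np.array([[np.dot(s[M - i:N - i], s[M - k:N - k])
                     for k in range(M + 1)] for i in range(M + 1)])
    Phi, psi = phi[1:, 1:], phi[1:, 0]
    L = np.linalg.cholesky(Phi)           # Phi is symmetric positive definite
    y = np.linalg.solve(L, psi)           # forward solve  L y = psi
    return np.linalg.solve(L.T, y)        # backward solve L^T a = y

def lp_gain(r, a_hat, P=None):
    """Gain estimate of eq. 3.18: H0 = sqrt(prediction-error energy
    r_0 - sum_i a_i r_i), scaled by the pitch period P when voiced."""
    err = r[0] - np.dot(a_hat, r[1:len(a_hat) + 1])
    return np.sqrt(err if P is None else P * err)

# Example: recover a known AR(2) model, s_n = 1.5 s_{n-1} - 0.9 s_{n-2} + e_n
rng = np.random.default_rng(1)
s = np.zeros(4000)
e = rng.standard_normal(4000)
for n in range(2, 4000):
    s[n] = 1.5 * s[n - 1] - 0.9 * s[n - 2] + e[n]
a_hat = lp_covariance(s, 2)               # close to [1.5, -0.9]
```

The two triangular solves here use a general solver for brevity; a dedicated forward/back-substitution routine would exploit the triangular structure as the text describes.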
3.3.3 Cepstral Analysis

The Cepstrum

As its name suggests, cepstral analysis represents a variation on speech spectral analysis. The nominal goal of cepstral processing is a representation of a voiced speech signal, the cepstrum, in which the excitation information is separated, or deconvolved, from the dynamics of the speech-system impulse response. With such a representation, the speech-system dynamics and the excitation signal can be studied, coded, and manipulated separately. Applications of the cepstrum have included pitch detection (using the voicing component) and "homomorphic" vocoding (using the speech-production-system component of the cepstrum to derive an efficient representation of the speech dynamics) (Deller et al., 2000; Rabiner and Juang, 1993). The history of the cepstrum is interesting, and the details of the theory are extensive. The objective of this treatment is to present an overview of the main concepts and then focus on a modern application of the cepstrum in which most of the connections to the original motivations and theoretical underpinnings are obscured. Cepstral analysis is a special case within a class of methods collectively known as homomorphic signal processing. Historically, the discovery and use of the cepstrum by Bogert, Healy, and Tukey (1963) and Noll (1967) predate the formulation of the more general homomorphic SP approach by Oppenheim and Schafer (1968). In fact, the earlier cepstrum is a special case of a more general cepstrum, which, in turn, is a special instance of homomorphic processing. When it is not clear from context, the cepstrum derived from homomorphic processing is called the complex cepstrum (even though the version of it used is ordinarily real-valued!), while the original Bogert–Tukey–Healy cepstrum is called the real cepstrum.
FIGURE 3.9 Computation of the Real Cepstrum. [Block diagram: voiced speech $s_n$ → DTFT → $\log|\cdot|$ (giving $\log|S(\omega)| = C_s(\omega)$) → IDTFT → cepstrum $c_{s;n}$.]
We use the term cepstrum and restrict attention to the simpler real cepstrum. In fact, there are various definitions of the real cepstrum, but all are equivalent to the even part of the complex cepstrum to within a scale factor. In spite of its connection to the spectrum, the cepstrum is actually a sequence indexed by time, say $\{c_{s;n}\}$, representing the speech sequence $\{s_n\}$. The set of operations leading to $\{c_{s;n}\}$ is depicted in Figure 3.9. The first two operations can be interpreted as an attempt to transform the signal $\{s_n\}$ into a "linear" domain in which the two parts of the signal that are convolved in the unaltered signal have representatives that are added in the new domain. Let us denote by $Q$ the operation corresponding to the first two boxes in the figure. Then:

$$C_s(\omega) = Q\{s_n\} = \log|S(\omega)| = \log|E(\omega)H(\omega)| = \log|E(\omega)| + \log|H(\omega)| = C_e(\omega) + C_h(\omega). \qquad (3.20)$$

In the linear domain, we are able to apply familiar linear techniques to the new signal, $C_s(\omega)$. In particular, we might wish to apply Fourier analysis to view the frequency-domain properties of the new signal. Because $C_s(\omega)$ is a periodic function of $\omega$, it is appropriate to compute the Fourier series coefficients for the "harmonics" of the signal. These would take the form:$^{12}$

$$a_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} C_s(\omega)\, e^{-j\omega n}\, d\omega. \qquad (3.21)$$
Now, according to the definition:

$$c_{s;n} = \frac{1}{2\pi} \int_{-\pi}^{\pi} C_s(\omega)\, e^{j\omega n}\, d\omega, \qquad (3.22)$$

but $C_s(\omega)$ is a real, even function of $\omega$, so that equations 3.21 and 3.22 produce equivalent results.$^{13}$ Therefore, the cepstrum can be interpreted as the Fourier series "line spectrum" of the "signal" $C_s(\omega)$.
$^{12}$ Note that the function $C_s(\omega)$ is periodic on the $\omega$ axis, where its fundamental "period" is $\omega_s = 2\pi$. The role of the fundamental "frequency" in this formulation is played by the quantity $T_s = \omega_s/2\pi = 1$. Hence, the harmonic frequencies are the numbers $nT_s = n$, $n \geq 0$.

$^{13}$ In fact, since $C_s(\omega)$ is even, the IDTFT in the definition can be replaced by a cosine transform:

$$c_{s;n} = \frac{1}{2\pi} \int_{-\pi}^{\pi} C_s(\omega)\cos(\omega n)\, d\omega = \frac{1}{\pi} \int_{0}^{\pi} C_s(\omega)\cos(\omega n)\, d\omega.$$
In practice, the cepstrum would be computed over a frame of, say, length N, using a DFT–IDFT pair. Suppose that we seek the cepstrum of the frame (possibly windowed) $\{s_n\}_{n=0}^{N-1}$. Then, we would compute:

$$C_{s;k} = \log\left| \sum_{n=0}^{N-1} s_n\, e^{-j\frac{2\pi}{N}kn} \right|, \quad k = 0, 1, \ldots, N-1 \quad \text{(log magnitude spectrum)}; \qquad (3.23)$$

$$c_{s;n} = \frac{1}{N} \sum_{k=0}^{N-1} C_{s;k}\, e^{j\frac{2\pi}{N}kn}, \quad n = 0, 1, \ldots, N-1 \quad \text{(cepstrum)}. \qquad (3.24)$$

Because the cepstrum is an infinitely long sequence, care must be taken to avoid aliasing in the cepstral domain (by zero padding, often unnecessary) and to account for the artificial periodicity of the result.

In the introduction above, quotation marks appear around terms that are used in an unusual manner. The signal that is being transformed into the frequency domain is, in fact, already in what we ordinarily consider the frequency domain. Therefore, the "new" frequency domain was dubbed the quefrency domain by Tukey in the early work on the cepstrum, and the cepstrum was so named because it plays the role of a spectrum in the quefrency domain. The index of the cepstrum (which actually represents DT) is called the quefrency axis. The harmonic frequencies of $C_s(\omega)$, which are actually time indices of the cepstrum, are called rahmonics. There is an entire vocabulary of amusing terms that accompanies cepstral analysis [see the title of the paper by Bogert et al. (1963)], but only the terms cepstrum and quefrency (and sometimes liftering, a concept discussed next) are widely used by speech processing engineers.

The stated purpose of the cepstrum is to resolve the two convolved pieces of the speech, $\{e_n\}$ and $\{h_n\}$, into two additive components, and then to analyze those components with spectral (cepstral) analysis. From equation 3.20:

$$c_{s;n} = c_{e;n} + c_{h;n}, \qquad (3.25)$$

and if the nonzero parts of $c_{e;n}$ and $c_{h;n}$ occupy different parts of the quefrency axis, we should be able to examine them as separate entities, something that we are unable to do when they are convolved in $\{s_n\}$. In fact, the excitation component, $\{c_{e;n}\}$, tends to have most of its energy at relatively large (and "rahmonically spaced") values of n, since its quefrency-domain counterpart manifests fast, periodic variations. The use of a time window to select out the component due to the excitation, $\{c_{e;n}\}$, has been called high-time liftering to stress the analogy to high-pass filtering. For similar reasons, the vocal
system component, $\{c_{h;n}\}$, appears at very low quefrencies and can be extracted using a low-time lifter.
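The DFT–IDFT computation of equations 3.23 and 3.24, together with low-time liftering, can be sketched as follows with NumPy's FFT. This is an illustrative sketch, not the chapter's code: the zero-padding factor, the small floor added before the log (to keep it finite at spectral zeros), and the function names are our choices.

```python
import numpy as np

def real_cepstrum(frame, pad=4):
    """Real cepstrum via the DFT-IDFT pair (eqs. 3.23-3.24):
    c_{s;n} = IDFT{ log|DFT{s_n}| }.  Zero padding to pad * len(frame)
    points reduces aliasing in the cepstral domain."""
    S = np.fft.fft(frame, n=pad * len(frame))
    return np.fft.ifft(np.log(np.abs(S) + 1e-12)).real  # real for real input

def low_time_lifter(c, cutoff):
    """Low-time lifter: keep quefrencies |n| < cutoff (the slowly varying
    vocal-system component c_{h;n}) and zero the rest.  Assumes cutoff >= 2;
    the mirrored tail preserves the even symmetry of the real cepstrum."""
    out = np.zeros_like(c)
    out[:cutoff] = c[:cutoff]
    out[-(cutoff - 1):] = c[-(cutoff - 1):]
    return out

# Sanity check: a unit impulse has |S(k)| = 1 for all k, so its cepstrum
# is (numerically) zero everywhere.
c = real_cepstrum(np.r_[1.0, np.zeros(63)])
```

A high-time lifter for pitch work would simply keep the complementary region of the quefrency axis.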
Mel Cepstrum

In the 1980s, the cepstrum began to supplant the direct use of the LP parameters as the premier features in speech recognition because of two convenient enhancements that improve performance (Davis and Mermelstein, 1980). The first is the ability to easily smooth the spectrum using the liftering process described above. This process removes the inherent variability of the LP-based spectrum due to the excitation. A second improvement over direct use of the LP parameters can be obtained by use of the so-called mel-cepstrum. In principle, the mel-cepstrum represents a set of features that are more similar (than those derived uniformly over the frequency spectrum) to those used by the human auditory system in sound perception. A mel is a unit of measure of perceived pitch or frequency of a tone. It does not correspond linearly to the physical frequency of the tone because the human auditory system does not perceive pitch in this linear manner (Stevens and Volkman, 1940; Koenig, 1949). The mapping between real frequency, say $F_{\mathrm{Hz}}$, and perceived frequency, say $F_{\mathrm{mel}}$, is approximately (Fant, 1959):

$$F_{\mathrm{mel}} \approx \frac{1000}{\log 2}\, \log\left(1 + \frac{F_{\mathrm{Hz}}}{1000}\right), \qquad (3.26)$$

in which $F_{\mathrm{mel}}$ ($F_{\mathrm{Hz}}$) is the perceived (real) frequency in mels (Hz). The cepstrum is particularly well-suited to working with a warped frequency axis since it is based directly on the magnitude spectrum of the speech. Suppose one desires to compute the cepstral parameters at frequencies distributed linearly on the range 0–1 kHz and logarithmically above 1 kHz. One approach is to "oversample" the frequency axis with the DFT–IDFT pair by using, say, an $N' = 1024$ or 2048 point DFT, then selecting those frequency components in the DFT that represent the appropriate distribution. The remaining components can be set to zero, or, more commonly, a second psychoacoustic principle is invoked. Loosely speaking, it has been found that the perception of a particular frequency, say $F'$, by the auditory system is influenced by energy in a critical band of frequencies around $F'$ (O'Shaughnessy, 1987). Further, the bandwidth of these critical bands varies with frequency, beginning at about 100 Hz for frequencies below 1 kHz, then increasing logarithmically above 1 kHz. Therefore, rather than simply using the mel-distributed log magnitude frequency components to compute the cepstrum, it has become common to use the log total energy in critical bands around the mel frequencies as inputs to the final DFT. The energy components in each critical band are weighted by a critical-band filter to account for their relative influences in the band (Davis and Mermelstein, 1980; Dautrich et al., 1983). A detailed description of this computation is found in Chapter 6 of Deller et al. (2000).

Delta Cepstrum

Another popular feature in speech recognition is the delta cepstrum. If $\{c_{s;n}^m\}$ denotes the (mel-)cepstrum for the mth frame of the signal $\{s_n\}$, then the delta or differenced cepstrum at frame m is defined as:

$$\Delta c_{s;n}^m \stackrel{\mathrm{def}}{=} c_{s;n}^{m+\delta} - c_{s;n}^{m-\delta}, \qquad (3.27)$$

for all n. The parameter $\delta$ is chosen to smooth the estimate and typically takes a value of 1 or 2 (look forward and backward one or two frames). A vector of such features at relatively low n's (quefrencies) intuitively provides information about spectral changes that have occurred since the previous frame, although the precise meaning of $\Delta c_{s;n}^m$ for a particular n and m is difficult to ascertain. A generalization and enhancement of the delta cepstrum is represented by the shifted delta cepstrum (SDC) (Bielefeld, 1994), in which a set of flexible parameters is used to construct enhanced cepstral vectors. The SDC has been shown to be beneficial in limited study (Bielefeld, 1994; Torres-Carrasquillo et al., 2002). Typically, 8 to 14 cepstral coefficients and their differences are used as features for speech recognition (Deller et al., 2000; Rabiner and Juang, 1993; Juang et al., 1987).

Log Energy

The measures $c_{s;0}$ and $\Delta c_{s;0}$ are often used as relative measures of spectral energy and its change. For the cepstrum, inserting $\log|S(\omega)|$ for $C_s(\omega)$ in equation 3.22, then evaluating at $n = 0$, we see that $c_{s;0}$ differs by a factor of two from the area under the log energy density spectrum. In the practical case in which the IDFT is used in the final operation, this becomes $c_{s;0} = \frac{1}{N}\sum_{k=0}^{N-1} \log|S_k|$, which has a similar interpretation. In the case of the mel-cepstrum, the sum of all critical-band energies provides a measure of total spectral energy.
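Equation 3.27 amounts to a frame-differencing operation over a sequence of cepstral vectors. A minimal sketch follows; the function name is ours, and replicating the edge frames so that the output keeps the input's shape is one common convention among several, not the chapter's prescription.

```python
import numpy as np

def delta_cepstrum(C, d=2):
    """Delta cepstrum (eq. 3.27): row m of the result is C[m + d] - C[m - d],
    where C is an (n_frames x n_coeffs) array of (mel-)cepstral vectors.
    The first and last frames are replicated so the output keeps C's shape."""
    Cp = np.vstack([np.repeat(C[:1], d, axis=0), C, np.repeat(C[-1:], d, axis=0)])
    return Cp[2 * d:] - Cp[:-2 * d]

# With linearly increasing cepstra, interior deltas are the constant 2 * d * slope.
C = np.arange(5.0).reshape(5, 1)          # five frames, one coefficient each
D = delta_cepstrum(C, d=1)                # rows: [1, 2, 2, 2, 1]
```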
3.3.4 Traditional Search and Pattern Matching Algorithms

Introduction

Pattern recognition and matching and related problems in optimal search play key roles in speech processing. In this subsection, we examine some critical algorithms related to these tasks. The main objective of this material is to introduce the hidden Markov model (HMM), a critical technology that serves as a foundation for many aspects of modern speech recognition systems. The HMM can be considered to be a stochastic form of the widely used (across many disciplines and applications) search method known as dynamic programming (DP). After treating some general issues in pattern
recognition, we begin by studying the DP foundation, then progress to the specific DP-based speech algorithms.

Similarity Measures and Clustering

Underlying all statistical pattern-matching problems is the need to have objective measures of similarity among objects and the related problem of grouping objects into clusters whose elements are similar in some sense. Here, we touch on four interrelated classes of pattern-matching problems that occur in specific contexts in speech processing. We return to the clustering problem at the end of the section. The first problem involves the relatively straightforward task of determining a measure of similarity between sets of features resulting from the analysis of two frames of speech. When the features composing the feature vectors represent uncorrelated pieces of information with similar scales (e.g., outcomes of uncorrelated random variables with equal variances), the widely used Euclidean distance is an appropriate measure of similarity between the vectors. Such is the case, for example, with cepstral coefficients, because the parameters are coefficients of a Fourier series and therefore represent an expansion on an orthogonal set of basis functions. Determining the "distance" between (models represented by) two sets of LP parameters, however, is a common task for which the Euclidean distance is not appropriate because the LP parameters are highly correlated. For this problem, some variation of the Itakura distance measure (Itakura, 1975) (discussed next) is often used. The LP parameter-matching problem can be considered an example of the first class of pattern-matching problem, that of assessing the similarity between two feature vectors. It also, however, belongs to a second class of problems important to speech processing, that of assessing similarity between two models.$^{14}$ The Itakura distance is used to measure intermodel distance reflected in LP vectors.
To understand the Itakura distance, consider two LP models, say $\hat{\mathbf{a}}$ and $\hat{\mathbf{b}}$, derived using two different windowed sequences of data, say $X = \{x_n^w\}_{n=0}^{N-1}$ and $Y = \{y_n^w\}_{n=0}^{N-1}$. The Itakura distance is given by:

$$d_I(\hat{\mathbf{a}}, \hat{\mathbf{b}}) \stackrel{\mathrm{def}}{=} \log \frac{\tilde{\mathbf{b}}^T \tilde{\mathbf{R}}_x \tilde{\mathbf{b}}}{\tilde{\mathbf{a}}^T \tilde{\mathbf{R}}_x \tilde{\mathbf{a}}}, \qquad (3.28)$$

where

$$\tilde{\mathbf{R}}_x \stackrel{\mathrm{def}}{=} \begin{bmatrix} r_{x;0} & \mathbf{r}_x^T \\ \mathbf{r}_x & \mathbf{R}_x \end{bmatrix}, \qquad \tilde{\mathbf{a}} \stackrel{\mathrm{def}}{=} \begin{bmatrix} 1 \\ -\hat{\mathbf{a}} \end{bmatrix}, \qquad \tilde{\mathbf{b}} \stackrel{\mathrm{def}}{=} \begin{bmatrix} 1 \\ -\hat{\mathbf{b}} \end{bmatrix}, \qquad (3.29)$$

in which $\mathbf{R}_x$ and $\mathbf{r}_x$ have as elements the short-term autocorrelations of the data (i.e., X) used to estimate $\hat{\mathbf{a}}$ (Schafer

$^{14}$ A second important example of model-to-model distance occurs in the assessment of the similarity of HMMs. These models are discussed later in this chapter.
and Markel, 1979). If we let $\xi_{\hat{a}}$ be the total squared error incurred in predicting X using $\hat{\mathbf{a}}$ and $\xi_{\hat{b}}$ be the total squared error in predicting X using $\hat{\mathbf{b}}$, then it is not difficult to show that $d_I(\hat{\mathbf{a}}, \hat{\mathbf{b}}) = \log(\xi_{\hat{b}}/\xi_{\hat{a}})$; hence, the measure gives an indication of how "far" from optimal the estimate $\hat{\mathbf{b}}$ is. The Itakura distance is always positive because the denominator contains the smallest possible value of $\xi$. The sense of the Itakura distance is to measure similarity between models, but it is based, in fact, on measurements of the distances from sequences to models. These measurements indirectly provide the intermodel distance.

Assessment of sequence-to-model similarity is a third important class of pattern-matching problems occurring in speech processing. Such problems figure prominently in an array of modern speech-processing technologies, including speech and speaker recognition, speaker identification, speaker verification, language identification, and accent recognition (Rabiner and Juang, 1993). In nearly every case, the model is a statistical representation of the production of sequences associated with that model's class or entity (e.g., a word, a language, a speaker). The "distance" between a sequence and a model consists of some likelihood measure that the model in question is coherent with the given sequence. The discussion later in this chapter of speech recognition using the HMM provides a prime example of this strategy.

A fourth important class of pattern-matching problems occurring in speech processing is that of assessing similarity between two sequences. The line between this class and the one immediately preceding is not distinct, because one may consider a particular sequence as a "model" for a given class or entity (e.g., a word), or several sequences may be combined in some manner to produce a "model sequence" for a particular entity.
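The computation in equations 3.28 and 3.29 is a pair of quadratic forms. A sketch follows (the naming is ours); note that the augmented vectors are written with the prediction-error filter $[1, -\hat{\mathbf{a}}^T]^T$, our reading of equation 3.29 under the sign convention of equation 3.8.

```python
import numpy as np

def itakura_distance(a_hat, b_hat, R_aug):
    """Itakura distance d_I(a, b) = log[(b~^T R~ b~) / (a~^T R~ a~)]
    (eqs. 3.28-3.29).  R_aug is the (M+1) x (M+1) augmented autocorrelation
    matrix of the data used to estimate a_hat; a~ = [1, -a_hat]."""
    at = np.concatenate(([1.0], -np.asarray(a_hat)))
    bt = np.concatenate(([1.0], -np.asarray(b_hat)))
    return np.log((bt @ R_aug @ bt) / (at @ R_aug @ at))

# When a_hat solves the normal equations for R_aug, the denominator is the
# minimum error energy, so d_I is zero for b = a and positive otherwise.
r = np.array([2.0, 1.0, 0.5])
R_aug = np.array([[r[abs(i - j)] for j in range(3)] for i in range(3)])
a_opt = np.linalg.solve(R_aug[1:, 1:], R_aug[1:, 0])
```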
Older techniques in speech recognition and related problems, in particular "dynamic time warping" (discussed later), rely on such direct measures of intersequence distance. Coding and decoding problems in speech processing routinely deploy such techniques as well. The upcoming discussions will provide ample illustration.

Finally, we return to the clustering problem. Clustering refers to the act of categorizing J objects into $K \leq J$ classes or categories according to strategies that maximize intraclass similarity and minimize interclass similarity. Clustering methods are numerous and varied, depending on the application, the types of measurements available, and the similarity measures used, as well as other factors (Devijver and Kittler, 1982). Although cluster formation often employs feature-to-feature matching (i.e., "objects" represented by features or feature vectors), the "objects" can be sequences or models as well. Thus, any of the pattern-matching problems already discussed may figure into the clustering procedure. The need for clustering arises in many speech processing applications. Again, the HMM provides two important examples. These include clustering procedures for "codebook" formation for the so-called discrete-observation HMM and
clustering for Gaussian mixture model formation for continuous-observation HMMs. In each case, the K-means algorithm (Deller et al., 2000; Devijver and Kittler, 1982) is ordinarily used. In this popular method, the initial measurements are subdivided into K distinct groups. The average feature vector, or centroid, of each group is used as the exemplar feature for that group. Each measurement in the population is then assigned to the group whose exemplar is closest according to the similarity measure used. The centroids are recomputed, and the process is repeated until no measurement changes classes.

Dynamic Programming

Dynamic programming (DP) has a rich and varied history in mathematics (Silverman and Morgan, 1990; Bellman, 1957). DP as we discuss it here is actually a special class of DP problems concerned with discrete sequential decisions. Let us first view DP in a general framework. Consider a "grid" in the plane where discrete points, or nodes, of interest are, for convenience, indexed by ordered pairs of non-negative integers, as though they were points in the first quadrant of the Cartesian plane. The basic problem is to find a "shortest distance" or "least cost" path through the grid that begins at a designated origin node, (0, 0), and ends at a designated terminal node, (I, J). A path from node $(i_0, j_0)$ to node $(i_N, j_N)$ is an ordered set of nodes (index pairs) of the form $(i_0, j_0), (i_1, j_1), (i_2, j_2), (i_3, j_3), \ldots, (i_N, j_N)$, where the intermediate $(i_k, j_k)$ pairs are not, in general, restricted. Distances, or costs, may be assigned to nodes, to transitions (arcs connecting nodes) along a path in the grid, or to both. Suppose that we focus on a node with indices $(i_k, j_k)$. Let us define the notation:
D[(i_0, j_0), (i_1, j_1), ..., (i_N, j_N)] ≜ distance associated with the path (i_0, j_0), (i_1, j_1), ..., (i_N, j_N);

D_min(i_k, j_k) ≜ distance from (0, 0) to (i_k, j_k) over the best path; and

d[(i_k, j_k)|(i_{k−1}, j_{k−1})] ≜ transition cost from node (i_{k−1}, j_{k−1}) to node (i_k, j_k) ⊗ node cost incurred upon entering node (i_k, j_k),   (3.30)

where ⊗ indicates the rule (usually addition or multiplication) for combining these costs. This total cost is first-order Markovian in its dependence on the immediate predecessor node only. The distance measure d is ordinarily a non-negative quantity, and any transition originating at (0, 0) is usually costless. Let us further define the notation:

(i_0, j_0) → (i_N, j_N) : best path (in the sense of optimal cost) from (i_0, j_0) to (i_N, j_N);   (3.31)

(i_0, j_0) →^{(i′, j′)} (i_N, j_N) : best path from (i_0, j_0) to (i_N, j_N) that also passes through (i′, j′).   (3.32)
In these terms, the Bellman Optimality Principle (BOP) implies the following (Deller et al., 2000; Bellman, 1957):

[(i_0, j_0) →^{(i′, j′)} (i_N, j_N)] = [(i_0, j_0) → (i′, j′)] ⊕ [(i′, j′) → (i_N, j_N)],   (3.33)

for any i_0, j_0, i′, j′, i_N, and j_N such that 0 ≤ i_0, i′, i_N ≤ I and 0 ≤ j_0, j′, j_N ≤ J; the ⊕ denotes concatenation of the path segments. In words, the BOP asserts that the best path from node (i_0, j_0) to node (i_N, j_N) that includes node (i′, j′) is obtained by concatenating the best paths from (i_0, j_0) to (i′, j′) and from (i′, j′) to (i_N, j_N). Because it is consistent with most path searches encountered in speech processing, let us assume that a viable search path is always "eastbound" in the sense that, for sequential pairs of nodes in the path, say (i_{k−1}, j_{k−1}), (i_k, j_k), it is true that i_k = i_{k−1} + 1; that is, each transition involves a move by one positive unit along the abscissa in the grid. DP searches generally have local path constraints similar to this assumption (Deller et al., 2000). This constraint, in conjunction with the BOP, implies a simple, sequential update algorithm for searching the grid for the optimal path. To find the best path to a node (i, j) in the grid, it is simply necessary to try extensions of all paths ending at nodes with the previous abscissa index, that is, extensions of nodes (i − 1, p) for p = 1, 2, ..., J, and then to choose the extension to (i, j) with the least cost. The cost of the best path to (i, j) is:

D_min(i, j) = min_{p ∈ [1, J]} {D_min(i − 1, p) ⊗ d[(i, j)|(i − 1, p)]}.   (3.34)

Ordinarily, there will be some restriction on the allowable transitions in the vertical direction, so that the index p above will be restricted to some subset of the indices [1, J]. For example, if the optimal path is not permitted to go "southward" in the grid, then the restriction p ≤ j applies. Ultimately, node (I, J) is reached in the search process through the sequential updates, and the cost of the globally optimal path is D_min(I, J). To actually locate the optimal path, it is necessary to use a backtracking procedure.
Once the optimal path to (i, j) is found, we record the immediate predecessor node, nominally at a memory location attached to (i, j). Let us define:

c(i, j) ≜ index of the predecessor node to (i, j) on the minimum-cost path from (0, 0) to (i, j).   (3.35)
John R. Deller, Jr. and John Hansen
Initialization: "Origin" of all paths is node (0, 0).
  For j = 1, 2, ..., J:
    D_min(1, j) = d[(1, j)|(0, 0)].
    c(1, j) = (0, 0).
  Next j
Recursion:
  For i = 2, 3, ..., I:
    For j = 1, 2, ..., J:
      Compute D_min(i, j) according to equation 3.34.
      Record c(i, j) according to equation 3.35.
    Next j
  Next i
Termination:
  Distance of the optimal path from (0, 0) to (I, J) is D_min(I, J).
  The best path is found by backtracking.
FIGURE 3.10 Example Dynamic Programming Algorithm for the Eastbound Salesperson Problem.
If we know the predecessor to any node on the path from (0, 0) to (i, j), then the entire path segment can be reconstructed by recursive backtracking beginning at (i, j). As an example of how structured paths can yield simple DP algorithms, consider the problem of a salesperson trying to traverse a shortest path through a grid of cities. The salesperson is required to drive eastward (in the positive i direction) by exactly one unit with each city transition. To locate the best route into a city (i, j), only knowledge of the optimal paths ending in the column just to the west of (i, j), that is, those ending at (i − 1, p) for p = 1, ..., J, is required. A DP algorithm that finds the optimal path for this problem is shown in Figure 3.10. There are many application-dependent constraints that govern the path search region in the DP grid. We mentioned the possibility of local path constraints that govern the local trajectory of a path extension. These local constraints imply global constraints on the allowable region in the grid through which the optimal path may pass. Further, in searching the DP grid, it is often the case that relatively few partial paths sustain sufficiently low costs to be considered candidates for extension to the optimal path. To cut down on what can be an extraordinary number of paths and computations, a pruning procedure is frequently employed that terminates consideration of unlikely paths. This procedure is called a beam search (Deller et al., 2000; Rabiner and Juang, 1993; Lowerre and Reddy, 1980).
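The eastbound-salesperson algorithm of Figure 3.10 is easy to realize in code. The following is a minimal sketch; the cost function d(i, j, p), giving the combined transition-plus-node cost of moving from city (i − 1, p) into city (i, j) (with p = 0 denoting the origin), is a hypothetical stand-in for whatever costs an application assigns, and addition is assumed as the combining rule.

```python
def eastbound_shortest_path(I, J, d):
    """Least-cost eastbound path from (0, 0) to (I, J), per Figure 3.10.

    d(i, j, p) is the cost of the transition (i - 1, p) -> (i, j) combined
    with the cost of entering node (i, j); p = 0 denotes the origin (0, 0).
    """
    D = {}    # D[(i, j)] = cost of the best path from (0, 0) to (i, j)
    c = {}    # c[(i, j)] = predecessor of (i, j) on that path (equation 3.35)

    # Initialization: all paths originate at node (0, 0).
    for j in range(1, J + 1):
        D[(1, j)] = d(1, j, 0)
        c[(1, j)] = (0, 0)

    # Recursion: extend the best paths column by column (equation 3.34,
    # with "+" as the combining rule).
    for i in range(2, I + 1):
        for j in range(1, J + 1):
            p = min(range(1, J + 1), key=lambda q: D[(i - 1, q)] + d(i, j, q))
            D[(i, j)] = D[(i - 1, p)] + d(i, j, p)
            c[(i, j)] = (i - 1, p)

    # Termination: backtrack from (I, J) to recover the optimal path.
    path, node = [(I, J)], (I, J)
    while node != (0, 0):
        node = c[node]
        path.append(node)
    return D[(I, J)], path[::-1]
```

With a purely illustrative cost such as d = lambda i, j, p: abs(j − p), the optimal route is simply the one that minimizes total vertical movement.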
3.4 Specialized Speech Processing Methods and Algorithms

3.4.1 Applied Dynamic Programming Methods

Dynamic Time Warping
Time alignment is the process by which temporal regions of a test utterance are locally matched with appropriate regions of a reference utterance. The need for time alignment arises not only because different utterances of the same word generally have different durations but also because phones within words
are of different durations across utterances. In this section, we will focus on discrete utterances as the unit to be recognized. For the sake of discussion, we will assume the usual case in which these discrete utterances are words, although there is nothing that precludes the use of these principles on subword or superword units. Dynamic time warping (DTW) is a feature-matching scheme that uses DP to achieve time alignment between sequences of reference and test features. The brief description of DTW presented here is mostly of historical interest and for its value in introducing the HMM (Deller et al., 2000; Rabiner and Juang, 1993; Silverman and Morgan, 1990). DTW has been successfully employed in simple applications requiring relatively straightforward algorithms and minimal hardware. The technique had its genesis in isolated-word recognition (IWR), in which words are deliberately spoken in isolation, but has also been applied in a limited way to continuous-speech recognition (CSR), in which speech is produced naturally without concern for the recognition process. The approach to CSR using DTW has been called connected-speech recognition, a sort of hybrid approach in which the continuous speech (spoken naturally or with minimal "cooperation" by the talker) is assumed to have been uttered as isolated words. The DTW process essentially concatenates isolated-word templates against which to match the flowing speech feature stream. Details and algorithms used in connected-speech recognition are found in Chapter 11 of Deller et al. (2000). Because DTW requires a template (or concatenation of templates) to be available for any utterance to be recognized, the method does not generalize well to accommodate the numerous sources of variation in speech. Therefore, DTW is not used for complex tasks involving large vocabularies.
Generally speaking, DTW is used to match an incoming test word (represented by a string of features) with numerous reference words (also represented by feature strings). The reference word with the best match score is declared the "recognized" word. In fairness to the correct reference string, the test features should be aligned with it in the manner that gives the best matching score to prevent time differences from unduly influencing the match. On the other hand, constraints on the matching process prevent unreasonable alignments from taking place. The procedure by which a test word is assigned an optimal match score with respect to a reference string is straightforward. Both the test and reference utterances are reduced to strings of features extracted from the acoustic speech waveform. Let us denote the test utterance feature vectors by {t_i}, i = 1, ..., I, and those of the reference utterance by {r_j}, j = 1, ..., J, where i and j index the frames as we progress in time. The test and reference features are (conceptually, at least) laid out on the abscissa and ordinate of a grid much like that discussed previously in Section 3.3.4. Although the indices i and j enumerate frame numbers rather than absolute sample times, it is customary to refer to the i and j axes as "time" axes. Costs are assigned to
each node in the grid, say node (i, j), according to some measure of distance between the feature vectors t_i and r_j. With an appropriate set of transition costs for the grid, the feature mapping problem can clearly be posed as a minimum-cost path search through the (i, j) plane, and we can solve the problem using DP. Transition costs are sometimes explicitly or implicitly used to control the general trend of the candidate paths as well as the local trajectories at each step along a path.¹⁵ For example, a horizontal transition into a node corresponds to the association of two sequential test frames with a single reference frame, whereas a unity-slope transition into a node corresponds to the pairing of sequential frames in both the test and reference strings. The latter may be given a lower cost, for example, to promote a more linear path. Such local costs are often dependent on the short-term history of the path (e.g., one or two steps back) so that, for example, excessive numbers of test string features are not associated with a single reference feature vector. In addition to the local trajectory costs, hard constraints (effectively infinite transition costs) are placed on the search. These may include, for example, strict endpoint-matching constraints [nodes (1, 1) and (I, J) must appear on any valid path] and a monotonicity requirement that a path may never take a "westward" trajectory at any step. The latter requirement ensures proper time sequencing of events in the utterance. These "soft" and "hard" local constraints on path trajectories imply global constraints on the overall search space. In turn, the restricted search space implies computational savings, as certain node costs need not be computed. For further details on local and global constraints and conditions, see Deller et al. (2000), Rabiner and Juang (1993), and Myers et al. (1980).
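A minimal DTW sketch along these lines follows. The frame distance and the particular local constraint (each step advances the test index by one while the reference index may advance by 0, 1, or 2) are illustrative choices rather than the only ones used in practice; frames are scalars here for brevity (real systems use feature vectors with, e.g., Euclidean distance), and path-length normalization is omitted.

```python
def dtw_score(test, ref, dist=lambda a, b: abs(a - b)):
    """Minimum-cost alignment score between a test frame string {t_i}
    and a reference frame string {r_j}.

    Node cost is dist(t_i, r_j); each path step advances the test index
    by one while the reference index advances by 0, 1, or 2 (one common
    local constraint); endpoints (1, 1) and (I, J) must lie on the path.
    """
    I, J = len(test), len(ref)
    INF = float("inf")
    D = [[INF] * (J + 1) for _ in range(I + 1)]
    D[1][1] = dist(test[0], ref[0])        # endpoint constraint at (1, 1)
    for i in range(2, I + 1):
        for j in range(1, J + 1):
            # Extend the best admissible predecessor in column i - 1.
            prev = min(D[i - 1][p] for p in range(max(j - 2, 0), j + 1))
            if prev < INF:
                D[i][j] = prev + dist(test[i - 1], ref[j - 1])
    return D[I][J]                         # endpoint constraint at (I, J)
```

Recognition then declares the reference template with the smallest score, e.g. min(templates, key=lambda r: dtw_score(test, r)).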
A final issue of significance in the use of DTW is the development of an appropriate set of reference strings: the so-called training problem. DTW training is a problem to which there is no simple or well-formulated solution, and this fact represents one of the central contrasts between the DTW method and the pervasive HMM approach. Unlike the DTW reference models, the HMM can be trained in a "supervised" paradigm in which the model can learn the statistical makeup of the exemplars. The application of the HMM to speech recognition in the 1980s was a revolutionary development, largely because of the supervised training aspect of the model. For more information on DTW training, see Deller et al. (2000).

Viterbi Decoding
Many important engineering tasks present a problem that can be solved by the DP search procedure when it is modified to accommodate stochastic node costs, transition costs, or both.
15. Normalization of path lengths (i.e., of the sum of the transition costs along a path) is required to prevent undue penalty to longer reference strings. In a DP algorithm, where optimization is done locally, a path-dependent normalization is clearly inappropriate, and we must resort to the use of an arbitrary normalization factor (Myers et al., 1980).
Such problems arise when the test or reference sequence, or both, are modeled by stochastic processes, so that the search costs are random variables whose values depend on the outcomes of the stochastic processes to be matched. Given random test and reference sequences, the search costs can only be modeled as random variables. Once realizations of the test and reference sequences are available, however, the search can proceed as in the deterministic case, with costs determined "online." Such a modified search procedure, called the Viterbi algorithm (Viterbi, 1967), plays a key role in the training and decoding of the HMM, to which we now turn.
3.4.2 The Hidden Markov Model

Introduction
In the 1970s and 1980s, speech researchers began to turn to stochastic approaches to the modeling of speech in an effort to address the problem of variability, particularly in large-scale systems (Baker, 1975; Jelinek et al., 1975). The term stochastic approach is used to indicate that models are employed that inherently characterize some of the variability in the speech. This is to be contrasted with the straightforward "deterministic" use of the speech data in template-matching (e.g., DTW) approaches, in which no probabilistic modeling of variability is present. Two very different types of stochastic methods have been researched. The first, the hidden Markov model (HMM), is amenable to computation on conventional sequential computing machines. Speech research has driven a majority of the engineering interest in the HMM in the last three decades, and the HMM has been the basis for several successful large-scale laboratory and commercial speech recognition systems. In contrast, the second class of stochastic techniques in speech recognition, based on the artificial neural network (ANN), has been a small part of a much more general research effort to explore alternative computing architectures with some superficial resemblances to the massively parallel computing of biological neural systems. ANNs have had relatively little impact on the speech field, and we will not pursue this topic here. Interested readers can refer to Morgan and Scofield (1991).

HMM Structure and Formulation

The HMM as an Automaton
An HMM can be viewed as a stochastic finite-state automaton, a type of abstract "machine" used to model the generation of a sequence of events or observations (Hopcroft and Ullman, 1979), in this case representing a speech utterance. The utterance may be a word, a subword unit, or, in principle, a sentence or paragraph.
In small-vocabulary systems, the HMM tends to be used to model words, whereas in larger-vocabulary systems, the HMM is used for subword units like phonemes and for more abstract language information (see Section 3.4.3). To introduce the operation of the HMM,
however, it is sufficient to assume that the speech unit is a word. The HMM models an utterance in the form of a string of features (or feature vectors) to which the corresponding analog waveform has been reduced. In the HMM literature, it is customary to refer to the string of test features as the observations or observables because these features represent information that is "observed" from the speech utterance to be modeled. Let us denote a string of observations by {y_t}, t = 1, ..., T. During the training phase, an HMM is "taught" the statistical makeup of the observation strings for its dedicated word. Training an HMM can be thought of as an attempt to create the automaton (the HMM) that "produces" a given word, in the sense that the HMM will tend to generate strings of features that are similar (statistically) to those produced by a human speaker uttering the given word. In producing the word, the human's vocal tract, roughly speaking, progresses through a sequence of "states" (corresponding to phonemes, for example) that are not directly observable to a listener who must recognize the utterance. The listener can observe the speech features but generally does not know the exact state of the speaker's vocal system that produced a given observation. The HMM is likewise composed of a set of interconnected states, a sequence through which the HMM progresses in the production of a feature string. As in the case of the human generator, the states of the HMM are "hidden" from the observer of the feature string, and the number of discrete states composing the HMM is usually very small compared with the number of feature vectors in a string of observables. In modeling a word, for example, the HMM might have six states corresponding to a like number of phonemes that compose the word being modeled, whereas an acoustic utterance of the word could result in a hundred or more feature vectors.
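To make this generative picture concrete, the sketch below draws an observation string from a toy discrete-observation, state-emitting HMM. The parameters (initial probabilities, state-transition probabilities, and per-state symbol probabilities) follow the notation formalized later in this section; the model itself is hypothetical.

```python
import random

def sample_hmm(p1, A, B, T, seed=0):
    """Generate an observation string of length T from a discrete-observation,
    state-emitting (Moore-form) HMM.

    p1[i]   : probability that state i is the initial state
    A[j][i] : probability of a transition from state j to state i
    B[i][k] : probability that state i emits symbol k
    The visited states stay hidden; only the emitted symbols are returned.
    """
    rng = random.Random(seed)
    states = range(len(p1))
    x = rng.choices(states, weights=p1)[0]                   # hidden initial state
    obs = [rng.choices(range(len(B[x])), weights=B[x])[0]]   # emit upon entry
    for _ in range(T - 1):
        x = rng.choices(states, weights=A[x])[0]             # hidden transition
        obs.append(rng.choices(range(len(B[x])), weights=B[x])[0])
    return obs
```

A two-state left-to-right model with deterministic transitions and emissions, p1 = [1, 0], A = [[0, 1], [0, 1]], B = [[1, 0], [0, 1]], always produces the string [0, 1, 1] for T = 3; with genuinely random parameters, repeated calls yield statistically similar but differing strings, just as repeated human utterances of one word do.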
During the recognition phase, given an incoming observation string, it is imagined that one of the trained HMMs produced the observation string. Then, for each HMM, the question is asked: How likely (in some sense) is it that this HMM produced this incoming observation string? The word associated with the HMM of highest likelihood is declared to be the recognized word. Note carefully that it is not the purpose of an HMM to generate observation strings. We imagine, however, that one of the HMMs did generate the observation string, as a gimmick for performing the recognition.
The "Hidden" and Observable Random Processes Associated with an HMM
A diagram of a typical HMM with six states is shown in Figure 3.11. The states are labeled by integers. The structure, or topology, of the HMM is determined by its allowable state transitions, and speech-related HMMs almost always have a left-to-right, or Bakis (Bakis, 1976), topology in keeping with the natural temporal ordering of events in an utterance. Two slightly different forms of the HMM are used. The one usually (but not always) adopted for acoustic processing (modeling the signal, as we are doing here) emits an observation upon arrival at each successive state. (It also emits one from the initial state at the outset.) The alternative form, generally employed in language processing, emits an observation during the transition. The state-emitter form of the model is called a Moore machine in automata theory, while the transition-emitter form is a Mealy machine (Hopcroft and Ullman, 1979). Throughout the rest of this discussion, we will use the Moore form. The differences in the algorithms associated with the two forms are largely a matter of interpretations of quantities. The HMM is imagined to "produce" strings of features as follows. At each observation time (corresponding to the times at which we extract observations), a state transition is assumed to occur in the HMM. Upon entering the next state, an observation is emitted. Accordingly, there are two main stochastic processes for which the characterizations must be learned (inferred from data) during training. The first random process models the (hidden) sequence of states that occurs en route to generating a given observation sequence. The second models the generation of observations by the various states. The state-sequence random process is governed by the state-transition probabilities, which appear as labels on the arcs connecting the states. Suppose that we call the state random process x.
The associated random variables are {x_t}, t = 1, 2, .... We then have that:

a(i|j) ≜ P(x_t = i | x_{t−1} = j),   (3.36)

for arbitrary t. By assumption, the state transition at time t does not depend on the history of the state sequence prior to time t − 1; hence, the random sequence is called a (first-order) Markov process (Leon-Garcia, 1989). The matrix of state-transition probabilities is the state-transition matrix A ∈ R^{S×S},
FIGURE 3.11 Typical HMM with Six States
that has a(i|j) as its (i, j) element, where S is the total number of states in the model. Since the random variables of the Markov process take only discrete values (integers), the state process is called a Markov chain. Further, since the state-transition probabilities do not depend on t, the Markov chain is said to be homogeneous in time. The second stochastic process characterized in the learning phase models the generation of observations by the HMM. Let us denote this (generally vector-valued) random process by y = {y_t}, t = 1, 2, ..., and let Y be the assumed set of all possible observations (e.g., features, feature vectors, or some mapping of features) that can be produced by a human talker. If Y is a finite discrete set, then the model is called a discrete-observation HMM, whereas if Y is a continuum, then the model is a continuous-distribution HMM. Associated with state i in the HMM, for i ∈ [1, S], is a probability distribution (discrete) or density (continuous) over the set Y, f_{y_t|x_t}(x|i). (We henceforth use the abbreviation pdf for this function.) Both y_t and x are, in general, M-dimensional vectors, where M is the dimension of the feature vector extracted from the speech. For mathematical tractability, it is customary to make the unrealistic assumption that the random process y has conditionally independent and identically distributed random variables, y_t, where the conditioning is upon the state sequence. This means that f_{y_t|x_t}(x|i) is not dependent on t, and we write:
f_{y|x}(x|i) ≜ f_{y_t|x_t}(x|i) for arbitrary t.   (3.37)
Two auxiliary pieces of information are needed to complete the characterization of the HMM: the initial probability vector, say p_1, and the number of states, S. The ith element of p_1 ∈ R^S is the probability that state i is the initial state in any path that produces an observation string. In the left-to-right model, the starting state is naturally taken to be the "leftmost" one, ordinarily numbered "state 1." Accordingly, p_1 = [1 0 0 ... 0]. Formally, then, a Moore-form HMM, say M, comprises the set of elements:

M = {S, p_1, A, {f_{y|x}(x|i), 1 ≤ i ≤ S}}.   (3.38)
The Discrete-Observation Model
The discrete-observation HMM, a special case of model set 3.38, is restricted to the production of a finite set of observations. In this case, the naturally occurring observation vectors are quantized to one of a permissible set of vectors using a technique known as vector quantization (VQ). In the VQ process, a large population of continuous observation vectors, assumed to be statistically representative of all the features to be encountered in a speech-recognition task, is partitioned into a fixed number of clusters, say K, typically using the K-means algorithm (Deller et al., 2000; Devijver and Kittler, 1982). The centroids of the clusters are assumed to be representative of the sounds associated with the respective clusters. Only the centroids are kept, and collectively they are called a codebook for the vector quantizer. Subsequently, any observation vector used for either training or recognition is quantized (assigned to the nearest code) using this codebook; hence, each feature vector suffers some degree of quantization in the feature space. If there are K possible vectors (observations) in the codebook, then it is sufficient to assign to an observation a single integer, say k, where 1 ≤ k ≤ K. Formally, the vector random process y is replaced by a scalar random process, say y, where each of the random variables y_t may take only integer values in [1, K]. For the discrete-observation HMM, the quantized observation pdf for state i takes the form of K impulses on the real line. In this case, it is sufficient to know the probability distribution over the K symbols for each state (weights on the impulses), which we shall denote as:
b(k|i) ≜ P(y_t = k | x_t = i), for k ∈ [1, K], i ∈ [1, S].   (3.39)

These observation probabilities are clearly defined to be dependent on the state but are assumed independent of time t. In general, we will not know the value assumed by a particular observation, y_t, and we will write:

b(y_t|i) ≜ P(y_t = y_t | x_t = i).   (3.40)
Similar to the definition of the transition probability matrix, A, we define the observation probability matrix, B ∈ R^{K×S}, with (k, i) element b(k|i). The general mathematical specification of the HMM of equation 3.38 can be modified to reflect the discrete observations:

M = {S, p_1, A, B, Y_VQ},   (3.41)

where Y_VQ ≜ {y_k, 1 ≤ k ≤ K}, the set of K vectors in the VQ codebook.

Recognition Using the HMM

Discrete-Observation Case
We assume that we have a population of M HMMs, say {M_ℓ}, ℓ = 1, ..., M, each of which represents a word and each of which has been trained (we will discuss how in subsequent paragraphs) with appropriate probabilities. Every HMM represents a word in a vocabulary, V. Given a (vector-quantized) observation sequence, say y ≜ {y_1, y_2, ..., y_t, ..., y_T}, which is a realization of the random process y, we seek to deduce which of the words in V is represented by y. That is, we want to determine the likelihood with which each of the M_ℓ produces y. Available data will generally not allow us to characterize the more intuitive a posteriori probability P(M|y = y) to use as a likelihood measure, so the measure P(y = y|M), the probability that the observation sequence y is produced given the
model M, is used instead. If the model probabilities are equal, then Bayes' rule implies that the two approaches will produce the same result. First, we consider an approach based on the likelihood that the observations could have been produced using any state sequence (path) through a given model. The most common technique applied to this problem is the so-called forward–backward (F–B) algorithm, also called Baum-Welch reestimation. The full F–B algorithm was developed in the series of papers by Baum and colleagues (Baum, 1972). The underlying principles described in these papers are well beyond the scope of this treatment. The interested reader is referred, for example, to Deller et al. (2000) and Rabiner and Juang (1993). We simply note that the method represents an enormous savings in computation with respect to the effort required by exhaustive evaluation of all paths in each model. For S = 5 and T = 100, for example, the F–B method yields an improvement of 69 orders of magnitude with respect to the same computation carried out directly. The second approach evaluates the likelihood based only on the best state sequence through M. A Viterbi search provides a slightly more cost-efficient alternative to the F–B method, with typically 10 to 25% fewer computations required. Recalling that the Viterbi algorithm amounts to a DP search with stochastic costs, it is not surprising that the Viterbi search returns a total cost based on the single best state path through the HMM. Formally, given an observation string y of length T and an HMM M, the Viterbi search returns the likelihood P(y, I*|M), where I* ≜ arg max_{I ∈ S_T} P(y, I|M), in which S_T represents the set of legal state sequences (paths) of duration T. The formulation of the Viterbi problem as a DP grid search is straightforward. In principle, the observations (indexed by time, t) are laid out along the abscissa of the search grid and the states along the ordinate.
Each point in the grid is indexed by a (time, state) pair (t, i). Two natural restrictions are imposed on the search. The first involves sequential grid points along any path, which must be of the form (t, i), (t + 1, j), where 1 ≤ i, j ≤ S. This says that every path must advance in time by one, and only one, time step for each path segment. The second refers to the final grid point on any path, which must be of the form (T, i_f), where i_f is a legal final state in the model. For a left-to-right HMM, this means that the ordinate of the grid point at time T must be the final ("rightmost") state in the model. The stochastic cost assigned to node (t, i) in the grid is simply b(y_t|i), and the transition cost, given that the state occupied at time t − 1 is j, is taken to be a(i|j) for any i and j and for arbitrary t > 1. In addition, to account for initial-state probabilities, all paths are assumed to originate at a fictitious and costless node (0, 0), which makes a transition of cost P(x_1 = i) to any initial node of the form (1, i). Upon arriving at the initial node, the path will also incur a node cost of the form b(y_1|i). The total cost associated with any
Initialization: "Origin" of all paths is node (0, 0).
  For i = 1, 2, ..., S:
    D_min(1, i) = −ln [a(i|0) b(y_1|i)] = −[ln P(x_1 = i) + ln b(y_1|i)].
    c(1, i) = 0.
  Next i
Recursion:
  For t = 2, 3, ..., T:
    For i_t = 1, 2, ..., S:
      Compute D_min(t, i_t) = min_{p ∈ [1, S]} {D_min(t − 1, p) − ln a(i_t|p) − ln b(y_t|i_t)}.
      Record backtracking information at the present node, c(t, i_t).
    Next i_t
  Next t
Termination:
  Distance of the optimal path from (0, 0) to (T, i*_T) is D_min(T, i*_T).
  The best state sequence, I*, is found as follows: the terminal state is i*_T; then:
  For t = T − 1, T − 2, ..., 0:
    i*_t = c(t + 1, i*_{t+1}).
  Next t
FIGURE 3.12 Computation of P(y, I*|M) Using the Viterbi Algorithm.
transition, say (t − 1, j) to (t, i), is therefore a(i|j) b(y_t|i) for t > 1 and P(x_1 = i) b(y_1|i) when t = 1. We note that multiplication is the natural operation (⊗ in equation 3.30) by which node and transition costs are combined in this case. Also note that a large "distance" (i.e., probability) is desirable, so we will want to seek a maximum-cost path if such measures are used. With these cost assignments, and a minor adjustment to seek the maximum rather than the minimum cost path, the Viterbi procedure is applied in the usual way. To reduce the computational load and to mitigate the numerical problems arising from many products of probabilities, it is common to sum negative log probabilities rather than to multiply probabilities directly. If this strategy is used, the problem reverts to a minimal-cost path search because high probabilities result in small negative logs. The steps of a Viterbi search algorithm based on this cost strategy are shown in Figure 3.12. Note that a provision for state backtracking is included in the algorithm.

Continuous-Observation Cases
The F–B and Viterbi methods already described are applicable to the continuous-observation case with a simple modification. We define the likelihood of generating observation y_t in state i as b(y_t|i) ≜ f_{y|x}(y_t|i). With this substitution for the observation probabilities, the methods developed for the discrete-observation case can be applied directly. The resulting measure computed for a model, say M, and an observation string, say y ≜ {y_1, ..., y_T}, will be P(y|M) for the F–B approach and P(y, I*|M) in the Viterbi case. These measures will no longer be proper probabilities, but they provide meaningful likelihood measures with which appropriate relative comparisons across models can be made.
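The negative-log-cost search of Figure 3.12 can be sketched as follows. This is a minimal illustration with 0-based state and symbol indices; for simplicity it takes the lowest-cost state at time T as the terminal state rather than enforcing a designated final state.

```python
import math

def viterbi(p1, A, B, y):
    """Return (-ln P(y, I*|M), I*): the negative-log cost of the best state
    path for observation string y, and the path itself.

    p1[i] = P(x_1 = i); A[j][i] = a(i|j); B[i][k] = b(k|i).
    """
    S, T = len(p1), len(y)
    INF = float("inf")
    nlog = lambda p: -math.log(p) if p > 0 else INF

    # Initialization: cost into each initial node (1, i).
    D = [[INF] * S for _ in range(T)]
    c = [[0] * S for _ in range(T)]
    for i in range(S):
        D[0][i] = nlog(p1[i]) + nlog(B[i][y[0]])

    # Recursion: extend best paths one observation time at a time,
    # summing negative log probabilities.
    for t in range(1, T):
        for i in range(S):
            costs = [D[t - 1][p] + nlog(A[p][i]) for p in range(S)]
            c[t][i] = min(range(S), key=costs.__getitem__)
            D[t][i] = costs[c[t][i]] + nlog(B[i][y[t]])

    # Termination and backtracking.
    last = min(range(S), key=D[T - 1].__getitem__)
    path = [last]
    for t in range(T - 1, 0, -1):
        path.append(c[t][path[-1]])
    return D[T - 1][last], path[::-1]
```

When the cost does not underflow, exp(−cost) recovers the likelihood P(y, I*|M); the log-domain sums are what make long observation strings numerically tractable.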
HMM Training

Discrete-Observation Case
Next we address the question of training a particular HMM to correctly represent its designated word or other utterance.¹⁶ We assume that we have one or more feature strings of the form y = {y_1, ..., y_T} extracted from training utterances of the word (this assumes that we already have the codebook that will be used to deduce symbols), and the problem is to use these strings to find an appropriate model of form 3.38. In particular, we must find the matrices A and B and the initial state probability vector p_1. There is no known way to analytically compute these quantities from the observations in any optimal sense. However, an extension to the F–B algorithm provides an iterative estimation procedure for computing a model, M, corresponding to a local maximum of the likelihood P(y|M). This method is the most widely used to estimate the HMM parameters; for algorithm details, the reader is referred to Deller et al. (2000) and Rabiner and Juang (1993). Repeated reestimation of the model (by repeated iterations of the F–B algorithm) is guaranteed to converge to an HMM corresponding to a local maximum of P(y|M) (Baum and Sell, 1968). The model improves with each iteration unless its parameters already represent a local maximum. F–B training, therefore, does not necessarily produce the globally best possible model. Accordingly, it is common practice to run the algorithm several times with different sets of initial parameters and to take as the trained model the M that yields the largest value of P(y|M). Training using the Viterbi algorithm is much simpler and more computationally efficient, though equally effective. Beginning with an initial model estimate, we implement the recognition procedure on the training sequence.
Once the optimal path is discovered, backtracking information is used to determine the number of transitions into and out of each state and the number of times each symbol is produced in each state. The tallies are used to reestimate the state and observation probabilities, and then the process is repeated. The algorithm can be shown to converge to a proper characterization of the underlying probabilities (Fu, 1982).

Continuous-Observation Case
The state-transition and initial-state probabilities in the continuous-observation case have exactly the same meaning and structure as in the discrete-observation case and can be found in exactly the same manner. The pdf most widely used to model continuous observation outcomes is the Gaussian mixture density, which is of the form:
16 In addition to the two most prominent training methods described here, gradient search techniques (Levinson et al., 1983) and techniques based on the concept of statistical discrimination (Ephraim et al., 1989; Bahl et al., 1988) have been used for training. The latter works include algorithms for corrective training, which reinforce model parameters when they achieve successful detection and ‘‘negatively train’’ models with data that cause false recognition.
f_{y|x}(x|i) = \sum_{m=1}^{M} c_{im} N(x; \mu_{im}, C_{im}),          (3.42)
in which c_{im} is the mixture coefficient for the mth component for state i, and N(·) denotes a multivariate Gaussian pdf with mean vector \mu_{im} and covariance matrix C_{im}. The mixture coefficients must be non-negative and satisfy the constraint \sum_{m=1}^{M} c_{im} = 1, 1 ≤ i ≤ S. For a sufficiently large number of mixture densities, M, equation 3.42 can be used to approximate any continuous pdf arbitrarily accurately. Modified Baum–Welch reestimation formulas have been derived for the three quantities c_{im}, \mu_{im}, and C_{im} for mixture density m in state i (Liporace, 1982; Juang et al., 1986). As in the discrete-observation case, the use of these formulas will ultimately lead to a model, M, that represents a local maximum of the likelihood P(y|M). However, finding a good local maximum depends rather critically on a reasonable initial estimate of the vector and matrix parameters \mu_{im} and C_{im} for each i and m. We now describe a convenient procedure for deriving good separation of the mixture components for use with a Viterbi procedure. In the Viterbi approach, the mean vectors and covariance matrices for the observation densities are reestimated by simple averaging, similarly to what is done in the discrete-observation procedure. This is easily described when there is only one mixture component per state. In this case, for a given M, the recognition experiment is performed on the observation sequence. Each observation vector is then assigned to the state that produced it on the optimal path by examining the backtracking information. The vectors assigned to each state are then used to update the mean vector and covariance matrix for that state, and the process is repeated. When M > 1 mixture components appear in a state, the observation vectors assigned to that state must be subdivided into M subsets prior to averaging. This is done by clustering, using, for example, the K-means algorithm with K = M.
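The subdivision-and-averaging step can be sketched as follows; the helper is hypothetical, and the K-means cluster labels for the vectors assigned to one state are assumed to be given.

```python
import numpy as np

def reestimate_state_mixtures(vectors, labels, M):
    """Reestimate the mixture parameters for one state from the observation
    vectors assigned to it by Viterbi backtracking. `labels` holds the
    K-means cluster index (0..M-1) of each vector; the weight of mixture m
    is c_im = N_im / N_i, and the mean and covariance are sample averages."""
    N_i = len(vectors)
    d = vectors.shape[1]
    c = np.empty(M)
    mu = np.empty((M, d))
    C = np.empty((M, d, d))
    for m in range(M):
        members = vectors[labels == m]
        c[m] = len(members) / N_i                      # c_im = N_im / N_i
        mu[m] = members.mean(axis=0)                   # sample mean of the subset
        C[m] = np.cov(members, rowvar=False, bias=True)  # sample covariance
    return c, mu, C
```

As in the single-component case, this reestimation alternates with the recognition pass until the parameters settle.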
If there are N_{im} vectors assigned to the mth mixture in state i, then the mixture coefficient c_{im} is reestimated as c_{im} = N_{im}/N_i, where N_i = \sum_{m=1}^{M} N_{im} is the total number of vectors assigned to state i.

Practical Considerations in the Use of the HMM

Features
Thus far, we have discussed the HMM in principally abstract terms. Here, we briefly consider some of the practical aspects of the use of the HMM, turning first to the fundamental acoustical analysis. Many features have been used as observations, but the most prevalent are the LP parameters, cepstral parameters, and related quantities. These are frequently supplemented by short-term time differences that capture the dynamics of the signal, as well as by energy measures such as short-term energy and differenced energy. A comparative discussion is found in Bocchieri and Doddington (1986) and Nocerino et al. (1985).
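The delta-and-energy supplementation just described can be sketched as follows; the helper is hypothetical, and simple first differences stand in for the ‘‘dynamic’’ terms.

```python
import numpy as np

def add_dynamics(cepstra, frames):
    """Supplement a (T, d) matrix of per-frame cepstral vectors with
    first-difference ('delta') cepstra, per-frame log energy, and a
    differenced log energy, as is common for HMM observation vectors.
    `frames` is the (T, frame_len) matrix of windowed speech samples."""
    deltas = np.diff(cepstra, axis=0, prepend=cepstra[:1])   # short-term differences
    energy = np.log((frames ** 2).sum(axis=1) + 1e-12)       # per-frame log energy
    denergy = np.diff(energy, prepend=energy[:1])            # differenced energy
    return np.hstack([cepstra, deltas, energy[:, None], denergy[:, None]])
```

With 12 cepstral coefficients per frame this yields 12 + 12 + 1 + 1 = 26 features per observation, matching the typical configuration described next.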
In a typical application, the speech would be sampled at 8 kHz and analyzed on frames of 256 points with a 156-point overlap. The analysis frame end-times, say 100, 200, and so on, become observation times t = 1, 2, . . . , T. For a typical word utterance lasting 1 sec, T = 80 observations. On each frame, 8 to 10 LP coefficients are computed, which are then converted to 12 cepstral coefficients. Alternatively, 12 mel-cepstral coefficients might be computed directly from the data. To add dynamic information, 12 differenced cepstral coefficients are also included in the vector. Finally, a short-term energy measure and a differenced energy measure are included for each frame, for a total of 26 features in the observation. A related issue is to decide upon the number of symbols (centroids) in the VQ codebook if a discrete-observation model is used. One measure of the quality of a codebook is the average distance of a vector observation in the training data from its corresponding symbol. This figure is often called the codebook distortion. The smaller the distortion, the more accurately the centroids represent the vectors they replace. Typically, recognizers use codebooks of size 32–256, with the larger sizes being more common. Since a larger codebook implies increased computation, there is an incentive to keep the codebook as small as possible without decreasing performance. Like the other parametric decisions that must be made about an HMM, this issue is principally guided by experimental evidence.

Model Topology and Size
Some variation of the Bakis topology is almost uniformly used in speech applications because it most appropriately represents the acoustic process. Interestingly, and perhaps not surprisingly, when used with speech, an HMM with no state-transition constraints will often train so that it essentially represents a sequential structure (backward transition probabilities turn out to be zero).
Experimental evidence suggests that states frequently represent identifiable acoustic phenomena.17 Therefore, the number of states is often chosen to roughly correspond to the expected number of such phenomena in the utterance. If words are being modeled with discrete observations, for example, 5 to 10 states are typically used to capture the phones in the utterances. Continuous-observation HMMs sometimes use more states. If HMMs are used to model discrete phones or phonemes, three states are often used: one each for the onset and exit transitions and one for the steady-state portion of the phone.

The Generality of the HMM and a First View of Language Models
We have generally discussed the HMM above as though it were a model for a single word but stressed that much of what
17 Perhaps more accurately, assuming that states represent acoustic phenomena seems to provide a good rule of thumb for creating HMMs with successful performance. However, the relationship between the number of states and the performance of the HMM is very imprecise, and in practice, it is often necessary to experiment with different model sizes.
John R. Deller, Jr. and John Hansen

was developed applies to other uses of the model. For large vocabulary systems, it is necessary to use HMMs to model sub-word acoustic units (e.g., phones or phonemes), since appropriate training of thousands of isolated-word models is prohibitive. Regardless of whether we view speech as constructed of phones or words (or some other acoustic unit), the structure of human language is, of course, much more complex than that conveyed by isolated acoustic units. A major part of the information in the speech code is conveyed by the way in which the units are ordered. Still more information is carried in other acoustic cues such as intonation, in nonspeech cues like hand gestures and facial expressions, and in abstract notions like the physical environment of the conversation or the meaning of a phrase in the context of the overall message. Generally speaking, a set of rules by which fundamental acoustic units of speech ultimately become complete utterances is called a language model (LM). From the earliest efforts at large vocabulary speech recognition, it has been clear to researchers that a good LM is critical to successful performance (Lowerre and Reddy, 1980). For large vocabulary systems, acoustic processing will almost never provide satisfactory recognition as a stand-alone technology. LMs have been a very active area of research in speech processing for many years. We raise the issue of LMs in the present context because the HMM is, in fact, formally equivalent to one of the most prominent LMs used in practice—the so-called ‘‘regular’’ or ‘‘finite state’’ grammar (Hopcroft and Ullman, 1979; Fu, 1982). Because of its simplicity, the regular grammar is often used to model the speech production code at several levels of linguistic and acoustic processing. This is why the HMM figures so prominently in the continuous speech recognition (CSR) problem.
The general idea is straightforward: an HMM for syntax (word ordering) in a speech recognizer would have dynamics (states and transitions) very much like the HMMs we described for the word model. The difference would be that the observations ‘‘emitted’’ by the model would be words, which, in turn, would be represented by an acoustic HMM (probably further decomposed into phonetic HMMs). It is possible, therefore, to envision this type of speech recognizer as a large graph (or grid) consisting of HMMs embedded inside of HMMs several levels ‘‘deep.’’ Not unexpectedly, the training procedures for such an embedded system of models require some enhancement and reconsideration of methods applied to the simple word models. Nevertheless, the basic ideas are the same, and the same F–B and Viterbi reestimation algorithms are applicable. One remarkable property of the HMM is evident in the training process. In working with continuous speech, we must face the problem of not knowing where the temporal boundaries (i.e., phonetic or word) are in the acoustic observation strings used for training (or recognition). The creation of sufficiently large databases marked according to phonetic time boundaries is generally impractical. An important
property of the HMM comes to our aid in this situation. Researchers have discovered that HMMs can be trained in context (i.e., hooked together in the language graph) as long as reasonable ‘‘seed’’ models are used to initiate the estimation procedure. This means that the entire observation sequence for a sentence can be presented to the appropriate string of HMMs, and the models will tend to ‘‘soak up’’ the parts of the observation sequence corresponding to their words or phones, for example. This ability of the HMM has revolutionized the CSR field because it obviates the time-consuming procedure of temporally marking a database.
3.4.3 Language Modeling

Introduction
The concept of an LM18 was introduced in the just-completed material on HMMs because these stochastic models play such a prominent role in representing language structure in contemporary speech recognition systems. More generally, however, LM research has a long and interesting history in which numerous and varied systems have been produced. The field remains very active, with many challenging unsolved problems. Given this vast history and level of current activity, the goal of this section is very modest: to provide an overview of some of the basic operating principles and terminology of LM’ing, particularly those related to the HMM-like formalisms discussed above. Inasmuch as LMs ultimately imply structure that assists in pattern-matching and search, some of the techniques and algorithms discussed in earlier sections are directly applicable to the LM’ing problem. The field of LM’ing, however, has developed largely outside of the SP discipline—principally in certain areas of computer science, linguistics, and related fields—so the vocabulary and concepts of LM’ing can seem unfamiliar to the traditional SP engineer, even when the techniques are similar to SP methods. Further, the abstract nature of some of the problems encountered in LM’ing (e.g., the meaning of an utterance) defies the construction of rigorously quantified models, which is another break from the SP engineer’s typical approach to problem solving.

The Purpose and Structure of a Language Model
Whether we view phones, phonemes, syllables, or words as the basic unit of speech, LMs are generally concerned with how fundamental speech units may be concatenated: in what order, in what context, and with what intended meaning. This characterization immediately suggests that LMs have both similarities to, and differences from, the sets of ‘‘grammatical rules’’ of a natural language that are typically studied in primary education.
‘‘Primary school’’ grammar is indeed concerned with the rules by which speech components are combined. Such studies
rarely, if ever, are concerned with basic sub-word phonetic elements, and seldom do they deal with higher-level abstractions like subtle nuances of meaning. In contrast, an LM often reaches deep into the acoustic structure of speech and sometimes high into the more abstract aspects of intention and meaning. In essence, what primary school students know as ‘‘grammar’’ is the natural-language version of only one of the ingredients of a formal LM used in speech recognition. An LM is a broader framework, consisting of the (often quantified) formalization of some or all of the structure that grammarians, linguists, psychologists, and others who study language can ascribe to a natural language. Indeed, an LM need not conform to the grammatical rules of the natural language it is modeling. Peirce (Liszka, 1996) identifies four components of the natural-language code: symbolic, grammatical,19 semantic, and pragmatic. Implicit or explicit banks of linguistic knowledge resident in speech recognizers, sometimes called knowledge sources (Reddy, 1976), can usually be associated with a component of Peirce’s model. The symbols of a language are defined to be the most fundamental units from which all messages are ultimately composed. In the spoken form of a language, for example, the symbols might be words or phonemes, whereas in the written form, the alphabet might serve as the symbols. For the purposes of this discussion, let us consider the phonemes to be the symbols of a natural language. The grammar of the language is concerned with how symbols are related to one another to form message units. If we consider the sentence to be the ultimate message unit, then how words are formed from phonemes is part of Peirce’s grammar, as is the manner in which words form sentences. How phonemes form words is governed by lexical constraints, and how words form sentences by syntactic constraints. Lexical and syntactic constraints are both components of the grammar.
The grammar of a language is, in principle, arbitrary in the sense that any rule for combining symbols may be posed. On the other hand, semantics is concerned with the way in which symbols are combined to form meaningful communication. Systems imbued with semantic knowledge sources straddle the line between speech recognition and speech understanding and draw heavily on artificial intelligence research. Beyond binary meaningful-or-unmeaningful decisions about symbol strings, semantic processors can be used to impose meaning upon incomplete, ambiguous, noisy, or otherwise hard-to-understand speech. Finally, the pragmatics component of the LM is concerned with the relationship of the symbols to their users and the environment of the discourse. This aspect of language is very difficult to formalize. To understand the nature of pragmatic knowledge, consider the phrase ‘‘rocking chair.’’ Depending on the nature of the conversation, the word ‘‘rocking’’ could

18 In the following, please read ‘‘LM’’ as ‘‘language model’’ and ‘‘LM’ing’’ as ‘‘language modeling.’’
19 We have replaced Peirce’s word syntax with grammar for more consistency with our ‘‘engineering’’ formalisms in the following. We will reserve the word syntax to refer to the rules that govern how words may combine.
describe either a type of chair or the motion of a chair or both. A pragmatic knowledge source in a recognizer must be able to discern among the various meanings of symbol strings and, hence, find the correct decoding. In the 1950s and 1960s, Chomsky further formalized the structure of an LM (Chomsky, 1959), and he and others (Fu, 1982) generalized the concept of language and grammar to any phenomenon or process that can be viewed as generating structured entities by building them from primitive patterns (‘‘symbols’’) according to certain ‘‘production’’ rules. The most restrictive type of grammar in Chomsky’s hierarchy, and the one that is of most interest in practical application, is the regular, or finite-state, grammar. In the parlance of the HMM, Chomsky’s regular grammar contains production rules that allow any ‘‘state’’ to produce a ‘‘symbol’’ plus a new ‘‘state,’’ or to produce a sole ‘‘symbol.’’ In Chomsky’s formalism, a stochastic grammar is one whose production rules have probabilities associated with them, so each of the strings in the stochastic language occurs with a probability that depends on the probabilities of its productions. Clearly, when probabilities are added to the production rules of the regular grammar, the regular grammar can be used as an alternative description of the operation of the Mealy-form HMM. In each ‘‘production’’ (observation time), the HMM produces a symbol and a new state with a certain probability. It is very important to generalize our thinking about the HMM, however, in the present context. Our fundamental descriptions of the HMM involved acoustic modeling in which states nominally represented physiologic acoustic states of the vocal system and in which symbols represented acoustic features. In the LM context, the ‘‘HMM’’ can represent higher level structures as well. For example, the states may represent words while the observations are phonemes, so the model embodies syntactic and lexical information.
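The correspondence can be made concrete: sampling from a Mealy-form HMM is the same as repeatedly applying probabilistic productions of the form state_i → symbol state_j. A toy sketch, with all tables illustrative assumptions:

```python
import random

def sample_mealy_hmm(pi, A, B_trans, n_symbols, seed=1):
    """Sample a symbol string from a Mealy-form HMM viewed as a stochastic
    regular grammar: at each 'production' a next state j is drawn from A[i]
    and a symbol is drawn from B_trans[i][j], mirroring probabilistic rules
    of the form  state_i -> symbol state_j."""
    rng = random.Random(seed)
    def draw(dist):
        # Draw an index according to the (possibly unnormalized) weights.
        return rng.choices(range(len(dist)), weights=dist)[0]
    state = draw(pi)
    out = []
    for _ in range(n_symbols):
        nxt = draw(A[state])                 # choose the production's new state
        out.append(draw(B_trans[state][nxt]))  # emit the production's symbol
        state = nxt
    return out
```

With deterministic tables the grammar generates exactly one string, which makes the production-by-production correspondence easy to trace by hand.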
As noted earlier, the HMM is usually known as a finite-state automaton (FSA) in formal language theory. ‘‘HMM’’ is almost invariably used to refer to the model at the acoustic level, while ‘‘FSA’’ is more likely to be used for the linguistic models. Regardless of what we call the model, the critical point is the one-to-one correspondence between the model and a regular stochastic grammar.

n-Gram and Other Language Models
While FSA and FSA-like models are prominent among those deployed in practice, other important models have been applied. Notable among these are the n-gram LMs, in which the occurrence of a string element (usually a word) is characterized by an (n − 1)th-order Markov dependence on past string elements. For example, in a 2-gram or bigram LM for syntax, the probability that a particular word appears as the tth word in a string is conditioned on the (t − 1)st word. In the 3-gram or trigram LM, the dependency is on the two previous elements. For most vocabularies, the use of n-gram models for n > 3 is prohibitive.
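A bigram LM of this kind can be sketched in a few lines; the helper is hypothetical, uses simple relative-frequency counts, and omits the smoothing of unseen bigrams that any practical recognizer would need.

```python
import math
from collections import Counter

def bigram_logprob(sentence, corpus):
    """Score a word string with a bigram (2-gram) LM: each word's probability
    is conditioned on the immediately preceding word, with probabilities
    estimated by relative frequency from `corpus` (a list of token lists)."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent                      # sentence-start context
        unigrams.update(toks[:-1])                 # count conditioning contexts
        bigrams.update(zip(toks[:-1], toks[1:]))   # count word pairs
    lp = 0.0
    toks = ["<s>"] + sentence
    for prev, word in zip(toks[:-1], toks[1:]):
        lp += math.log(bigrams[(prev, word)] / unigrams[prev])
    return lp
```

For example, trained on the two toy sentences "the dog ran" and "the cat ran", the string "the dog ran" scores P(the|&lt;s&gt;) · P(dog|the) · P(ran|dog) = 1 · 1/2 · 1 = 1/2.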
Clearly, the more constrained the rules of language in the recognizer, the less freedom of expression the user has in constructing spoken messages. The challenge of LM’ing is to balance the need for maximally constraining the ‘‘pathways’’ that messages may take in the recognizer against the degree to which freedom of expression is thereby diminished. A measure of the extent to which an LM constrains permissible discourse is given by the perplexity. This term roughly means the average number of branches at any decision point when the decoding of messages is viewed as a search through a graph of permissible utterances.

Training and Searching the Language Model
Training methods akin to those studied in previous sections are available for constructing LMs (e.g., see Chapter 13 of Deller et al., 2000). Both F–B-like and Viterbi-like approaches exist for the inference of the probabilities of the production rules of a stochastic grammar, given the characteristic grammar (the rules without probabilities) (Fu, 1982; Levinson, 1985). For a regular grammar, this task is equivalent to the problem of finding the probabilities associated with an HMM or an FSA with a fixed structure. Consequently, the fact that these training algorithms exist is not surprising. In fact, any stochastic grammar may be shown to have a correlative doubly stochastic (HMM-like) process. A discussion of this issue and related references are found in Levinson (1985).
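Perplexity is conventionally computed from the log probability an LM assigns to a held-out word string: it is the geometric-mean inverse probability per word, so a uniform choice among V equally likely words gives perplexity V. A minimal sketch:

```python
import math

def perplexity(logprob, n_words):
    """Perplexity of an n_words-long test string whose natural-log
    probability under the LM is `logprob`: exp(-logprob / n_words), roughly
    the average branching factor the recognizer faces per word."""
    return math.exp(-logprob / n_words)
```

For a 10-word string drawn from a uniform 64-word vocabulary, logprob = 10·log(1/64) and the perplexity is exactly 64, illustrating the branching-factor interpretation.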
3.5 Summary and Conclusions

The focus of this chapter has been on fundamental ideas and algorithms underlying the DT processing of speech. Indeed, it is difficult to find a modern application of speech processing that does not employ one or more of the algorithms discussed here, and virtually all contemporary speech technology begins with some variation of the linear DT model to which we devoted much initial attention in this chapter. As a technology with vast economic significance, the amount of research and development that has been, and continues to be, devoted to speech processing is extraordinary. The commercial, military, forensic, and other driving forces for speech technologies are centered primarily in the broad areas of communications and recognition. Modern communication networks, in their many traditional and emerging forms, have created vast interest in speech processing technologies, particularly with regard to coding, compression, and security. At the same time, modern communications systems have engendered a merging of subspecialties in signal processing as speech technologists find themselves increasingly working with image (and other data) processing engineers, specialists in detection and estimation theory, and computer hardware and software systems developers. Further, the global nature of communications has caused some aspects of speech communications and recognition to merge as well. ‘‘Recognition’’ was once envisioned as a quest
3 Methods Models and Algorithms for Modern Speech Processing for a typewriter that takes dictation, and the field has now broadened to include such issues as language and accent recognition—motivated, at least in part, as a facilitating technology for multilanguage communications systems and other computer-assisted telephony applications (Waibel et al., 2000). Both as stand-alone technologies and also as part of the communications revolution, security and forensics applications such as speaker identification and speaker verification have also expanded the activities of ‘‘recognition’’ engineers (Campbell, 1997). Although the commercial interests have played dominant roles in setting the speech processing agenda, many other applications of speech processing continue to be explored. Biomedicine represents one area, for example, in which speech engineering is playing an important role in the development of diagnostic software, techniques for assessment of speech and hearing therapies, and assistive aids for persons with speech, hearing, and motor disabilities. At the same time, funding agencies continue to support new theories and applications of speech processing, including models of human discourse, theories of hearing and psychoacoustics, and other basic research activities. To explore these emerging technologies and well as the wealth of information about modern applications, the reader is encouraged to explore the literature cited here, the further citations to which it leads, and the everincreasing amount of information available on the World Wide Web.
References

Bahl, L.R., Brown, P.F., DeSouza, P.V., and Mercer, R.L. (1988). A new algorithm for the estimation of hidden Markov model parameters. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing 1, 493–496.
Baker, J.K. (1975). Stochastic modeling for automatic speech understanding. In D.R. Reddy (Ed.), Speech recognition. New York: Academic Press.
Bakis, R. (1976). Continuous speech word recognition via centisecond acoustic states. Proceedings of the 91st Annual Meeting of the Acoustical Society of America.
Baum, L.E. (1972). An inequality and associated maximization technique in statistical estimation for probabilistic functions of Markov processes. Inequalities 3, 1–8.
Baum, L.E., and Sell, G.R. (1968). Growth functions for transformations on manifolds. Pacific Journal of Mathematics 27, 211–227.
Bellman, R.E. (1975). Dynamic programming. Princeton, NJ: Princeton University Press.
Bielefeld, B. (1994). Language identification using shifted delta cepstrum. Proceedings of the 14th Annual Speech Research Symposium.
Bocchieri, E.L., and Doddington, G.R. (1986). Frame-specific statistical features for speaker-independent speech recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing 34, 755–764.
Bogert, B.P., Healy, M.J.R., and Tukey, J.W. (1963). The quefrency alanysis of time series for echoes: Cepstrum, pseudo-autocovariance, cross-cepstrum, and saphe cracking. In M. Rosenblatt (Ed.),
Proceedings of the Symposium on Time Series Analysis. New York: John Wiley & Sons.
Burg, J.P. (1967). Maximum entropy spectral analysis. Proceedings of the 37th Meeting of the Society of Exploration Geophysicists.
Campbell, Jr., J.P. (1997). Speaker recognition: A tutorial. Proceedings of the IEEE 85, 1437–1462.
Chomsky, N. (1959). On certain formal properties of grammars. Information and Control 2, 137–167.
Chomsky, N., and Halle, M. (1968). The sound pattern of English. New York: Harper and Row. Reprinted (1991). Boston: MIT Press.
Dautrich, B., Rabiner, L.R., and Martin, T.B. (1983). On the effects of varying filter bank parameters on isolated word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing 31, 793–807.
Davis, S.B., and Mermelstein, P. (1980). Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustics, Speech, and Signal Processing 28, 357–366.
Deller, Jr., J.R., and Odeh, S.F. (1993). Adaptive set-membership identification in O(m) time for linear-in-parameters models. IEEE Transactions on Signal Processing 41, 1906–1924.
Deller, Jr., J.R., Hansen, J.H.L., and Proakis, J.G. (2000). Discrete-time processing of speech signals. (2d ed.). New York: IEEE Press.
Devijver, P.A., and Kittler, J. (1982). Pattern recognition: A statistical approach. London: Prentice Hall.
Dudley, H. (1939). The vocoder. Bell Labs Record 17, 122–126.
Ephraim, Y., Dembo, A., and Rabiner, L.R. (1989). A minimum discrimination information approach for hidden Markov modeling. IEEE Transactions on Information Theory 35, 1001–1013.
Fant, C.G.M. (1956). On the predictability of formant levels and spectrum envelopes from formant frequencies. In Halle, M., Lunt, H., MacLean, H. (Eds.), For Roman Jakobson. The Hague: Mouton.
Fant, C.G.M. (1960). Acoustic theory of speech production. The Hague: Mouton.
Fant, C.G.M. (1973). Speech sounds and features. Cambridge, MA: MIT Press.
Flanagan, J.L. (1972). Speech analysis, synthesis, and perception. (2nd ed.). New York: Springer-Verlag.
Fu, K.S. (1982). Syntactic pattern recognition and applications. Englewood Cliffs, NJ: Prentice Hall.
Golub, G.H., and van Loan, C.F. (1989). Matrix computations. (2nd ed.). Baltimore: Johns Hopkins University Press.
Haykin, S. (1996). Adaptive filter theory. (3d ed.). Englewood Cliffs, NJ: Prentice Hall.
Hopcroft, J.E., and Ullman, J.D. (1979). Introduction to automata theory, languages, and computation. Reading, MA: Addison-Wesley.
Itakura, F. (1975). Line spectrum representation of linear prediction coefficients of speech signals. Journal of the Acoustical Society of America 57, 535.
Itakura, F. (1975). Minimum prediction residual principle applied to speech recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing 23, 67–72.
Itakura, F., and Saito, S. (1969). Speech analysis-synthesis system based on the partial autocorrelation coefficient. Proceedings of the Acoustical Society of Japan Meeting.
Jelinek, F., Bahl, L.R., and Mercer, R.L. (1975). Design of a linguistic statistical decoder for the recognition of continuous speech. IEEE Transactions on Information Theory 21, 250–256.
Juang, B.H., Levinson, S.E., and Sondhi, M.M. (1986). Maximum likelihood estimation for multivariate mixture observations of Markov chains. IEEE Transactions on Information Theory 32, 307–309.
Juang, B.H., Rabiner, L.R., and Wilpon, J.G. (1987). On the use of bandpass liftering in speech recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing 35, 947–954.
Klatt, D. (1987). Review of text-to-speech conversion for English. Journal of the Acoustical Society of America 82, 737–793.
Koenig, W. (1949). A new frequency scale for acoustic measurements. Bell Telephone Laboratory Record 27, 299–301.
Koenig, W., Dunn, H.K., and Lacy, L.Y. (1946). The sound spectrograph. Journal of the Acoustical Society of America 17, 19–49.
Ladefoged, P. (1975). A course in phonetics. New York: Harcourt Brace Jovanovich.
Leon-Garcia, A. (1989). Probability and random processes for electrical engineering. Reading, MA: Addison-Wesley.
Levinson, S.E. (1985). Structural methods in automatic speech recognition. Proceedings of the IEEE 73, 1625–1650.
Levinson, S.E., Rabiner, L.R., and Sondhi, M.M. (1983). An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition. Bell System Technical Journal 62, 1035–1074.
Lieberman, P. (1967). Intonation, perception, and language. Cambridge, MA: MIT Press.
Liporace, L.A. (1982). Maximum likelihood estimation for multivariate observations of Markov sources. IEEE Transactions on Information Theory 28, 729–734.
Liszka, J.J. (1996). A general introduction to the semeiotic of Charles Sanders Peirce. Bloomington: Indiana University Press.
Lowerre, B.T., and Reddy, D.R. (1980). The HARPY speech understanding system. In W.A. Lea (Ed.), Trends in speech recognition. Englewood Cliffs, NJ: Prentice Hall.
Makhoul, J. (1977). Stable and efficient lattice methods for linear prediction. IEEE Transactions on Acoustics, Speech, and Signal Processing 25, 423–428.
Markel, J.D., and Gray, A.H. (1976).
Linear prediction of speech. New York: Springer-Verlag.
McWhirter, J.G. (1983). Recursive least squares solution using a systolic array. Proceedings of the Society of Photooptical Instrumentation Engineers (Real-Time Signal Processing IV) 431, 105–112.
Milner, P.M. (1970). Physiological psychology. New York: Holt, Rinehart and Winston.
Morgan, D.P., and Scofield, C.L. (1991). Neural networks and speech processing. Boston: Kluwer Academic.
Morgan, N., and Gold, B. (1999). Speech and audio signal processing: Processing and perception of speech and music. New York: John Wiley & Sons.
Myers, C.S., Rabiner, L.R., and Rosenberg, A.E. (1980). Performance trade-offs in dynamic time warping algorithms for isolated word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing 28, 622–635.
Nocerino, N., Soong, F.K., Rabiner, L.R., and Klatt, D.H. (1985). Comparative study of several distortion measures for speech recognition. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing 1, 25–28.
Noll, A.M. (1967). Cepstrum pitch determination. Journal of the Acoustical Society of America 41, 293–309.
O’Shaughnessy, D. (1987). Speech communication: Human and machine. Reading, MA: Addison-Wesley.
Oppenheim, A.V., and Schafer, R.W. (1968). Homomorphic analysis of speech. IEEE Transactions on Audio and Electroacoustics 16, 221–226.
Potter, R.G., Kopp, G.A., and Kopp, H.G. (1966). Visible speech. New York: Van Nostrand.
Rabiner, L.R., and Juang, B.-H. (1993). Fundamentals of speech recognition. Englewood Cliffs, NJ: Prentice Hall.
Rabiner, L.R., and Schafer, R.W. (1978). Digital processing of speech signals. Englewood Cliffs, NJ: Prentice Hall.
Reddy, D.R. (1976). Speech recognition by machine: A review. Proceedings of the IEEE 64, 501–531.
Rosenberg, A.E. (1971). Effects of glottal pulse shape on the quality of natural vowels. Journal of the Acoustical Society of America 49, 583–590.
Schafer, R.W., and Markel, J.D. (Eds.). (1979). Speech analysis. New York: John Wiley & Sons.
Silverman, H.F., and Morgan, D.P. (1990). The application of dynamic programming to connected speech recognition. IEEE Acoustics, Speech, and Signal Processing Magazine 7, 6–25.
Stevens, S.S., and Volkman, J. (1940). The relation of pitch to frequency. American Journal of Psychology 53, 329.
Strobach, P. (1991). New forms of Levinson and Schur algorithms. IEEE Signal Processing Magazine, 12–36.
Torres-Carrasquillo, P.A., et al. (2002).
An approach to language identification using Gaussian mixture models and shifted delta cepstra. Proceedings of the International Conference on Spoken Language Processing. Viswanathan, R., and Makhoul, J. (1975). Quantization properties of the transmission parameters in linear predictive systems. IEEE Transactions on Acoustics Speech, and Signal Processing 23, 309–321. Viterbi, A.J. (1967). Error bounds for convolutional codes and an asymptotically optimal decoding algorithm. IEEE Transactions on Information Theory 13, 260–269. Waibel, A., Geutner, P., Tomokiyo, L.M., Schultz, T., and Woszczyna, M. (2000). Multilinguality in speech and spoken language systems. Proceedings of the IEEE 88, 1181–1200.
4 Digital Image Processing

Eduardo A.B. da Silva, Program of Electrical Engineering, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
Gelson V. Mendonça, Department of Electronics, COPPE/EE/Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
4.1 Introduction
4.2 Image Sampling
    4.2.1 Nonrectangular Grid Sampling
4.3 Image Quantization
4.4 Image Enhancement
    4.4.1 Histogram Manipulation
    4.4.2 Linear Filtering
4.5 Image Restoration
    4.5.1 Wiener Filter
4.6 Image Coding
    4.6.1 DPCM
    4.6.2 Transform Coding
4.7 Image Analysis
4.8 Summary
References
4.1 Introduction

Digital image processing is the manipulation of images using digital computers. Its use has grown exponentially in recent decades, with applications ranging from medicine and entertainment to geological processing and remote sensing. Multimedia systems, one of the pillars of the modern information society, rely heavily on digital image processing.

The discipline of digital image processing is a vast one, encompassing digital signal processing techniques as well as techniques that are specific to images. An image can be regarded as a function f(x, y) of two continuous variables x and y. To be processed digitally, it has to be sampled and transformed into a matrix of numbers. Since a computer represents numbers with finite precision, these numbers have to be quantized to be represented digitally. Digital image processing is the manipulation of those finite-precision numbers.

The processing of digital images can be divided into several classes: image enhancement, image restoration, image analysis, and image compression. In image enhancement, an image is manipulated, mostly by heuristic techniques, so that a human viewer can extract useful information from it. Image restoration techniques aim at processing corrupted images for which a statistical or mathematical description of the degradation is available, so that the degradation can be reverted. Image analysis techniques process an image so that information can be automatically extracted from it. Examples of image analysis are image segmentation, edge extraction, and texture and motion analysis.

An important characteristic of images is the huge amount of information required to represent them. Even a grayscale image of moderate resolution, say 512 × 512, needs 512 × 512 × 8 ≈ 2 × 10^6 bits for its representation. Therefore, for the storage and transmission of digital images to be practical, one needs to perform some sort of image compression, whereby the redundancy of the images is exploited to reduce the number of bits needed in their representation.

In what follows, we provide a brief description of digital image processing techniques. Section 4.2 deals with image sampling, and Section 4.3 describes image quantization. In Section 4.4, some image enhancement techniques are given. Section 4.5 analyzes image restoration. Image compression, or coding, is presented in Section 4.6. Finally, Section 4.7 introduces the main issues involved in image analysis.
4.2 Image Sampling

An analog image can be seen as a function f(x, y) of two continuous space variables, x and y. The function f(x, y) represents a variation of gray level along the spatial coordinates. Since each color can be represented as a combination of three primaries (R, G, and B), a color image can be represented by three functions, f_R(x, y), f_G(x, y), and f_B(x, y), one for each color component. A natural way to translate an analog image into the discrete domain is to sample each variable; that is, we make $x = n_x T_x$ and $y = n_y T_y$. This way, we get a digital image $f_D(n_x, n_y)$ such that:

$$f_D(n_x, n_y) = f(n_x T_x, n_y T_y). \quad (4.1)$$
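As a quick numerical check of equation 4.1, the sketch below (plain NumPy; the sampling intervals and sinusoid frequencies are arbitrary illustrative values) samples a two-dimensional sinusoid on a rectangular grid and verifies that two continuous-space frequencies differing by $2\pi/T_x$ produce identical samples, which is the aliasing phenomenon discussed later in this section:

```python
import numpy as np

def sample_2d(f, Tx, Ty, Nx, Ny):
    """Rectangular-grid sampling of equation 4.1: fD[nx, ny] = f(nx*Tx, ny*Ty)."""
    x = np.arange(Nx)[:, None] * Tx   # x = nx * Tx (column vector)
    y = np.arange(Ny)[None, :] * Ty   # y = ny * Ty (row vector)
    return f(x, y)

Tx = Ty = 0.05                        # assumed sampling intervals
Wy = 2 * np.pi                        # assumed spatial frequencies, Wx = 2*Wy as in Figure 4.1
Wx = 2 * Wy

fD = sample_2d(lambda x, y: np.cos(Wx * x + Wy * y), Tx, Ty, 64, 64)

# A sinusoid whose x-frequency is shifted by 2*pi/Tx yields exactly the same
# samples: after sampling, the two continuous frequencies are indistinguishable.
fD_alias = sample_2d(lambda x, y: np.cos((Wx + 2 * np.pi / Tx) * x + Wy * y),
                     Tx, Ty, 64, 64)
assert np.allclose(fD, fD_alias)
```

The assertion holds because $\cos((\Omega_x + 2\pi/T_x) n_x T_x + \Omega_y n_y T_y) = \cos(\Omega_x n_x T_x + 2\pi n_x + \Omega_y n_y T_y)$ for integer $n_x$.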
As in the one-dimensional case, one has to devise conditions under which an analog image can be recovered from its samples. We will first generalize the notions of frequency content and Fourier transform to two-dimensional signals. One can represent an image in the frequency domain by using a two-dimensional Fourier transform. It is a straightforward extension of the one-dimensional Fourier transform, as can be seen in these equations:

$$F(\Omega_x, \Omega_y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x, y)\, e^{-j(\Omega_x x + \Omega_y y)}\,dx\,dy. \quad (4.2)$$

$$f(x, y) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F(\Omega_x, \Omega_y)\, e^{j(\Omega_x x + \Omega_y y)}\,d\Omega_x\,d\Omega_y. \quad (4.3)$$

FIGURE 4.1 Two-Dimensional Sinusoid with $\Omega_x = 2\Omega_y$. Its expression is $f(x, y) = \frac{1}{2}\left(e^{j(\Omega_x x + \Omega_y y)} + e^{-j(\Omega_x x + \Omega_y y)}\right)$
The function $e^{j(\Omega_x x + \Omega_y y)}$ is a complex sinusoid in two dimensions. It represents a two-dimensional pattern that is sinusoidal both along the x (horizontal) and y (vertical) directions. The $\Omega_x$ and $\Omega_y$ are the spatial frequencies along the x and y directions, respectively. An example can be seen in Figure 4.1, where the two-dimensional sinusoid $\cos(\Omega_x x + \Omega_y y)$ is depicted, with $\Omega_x = 2\Omega_y$. One can note that there are twice as many cycles in the x direction as in the y direction. The period in the x direction is $2\pi/\Omega_x$, and in the y direction it is $2\pi/\Omega_y$. Therefore, equation 4.3 means that any two-dimensional signal is equivalent to an infinite sum of two-dimensional complex sinusoids. The term that multiplies the sinusoid of frequency $(\Omega_x, \Omega_y)$ is its Fourier transform, $F(\Omega_x, \Omega_y)$, given by equation 4.2.

Having defined the meaning of the frequency content of two-dimensional signals, we are now ready to state conditions under which an image can be recovered from its samples. It is well known that most useful digital images have the significant part of their amplitude spectrum concentrated at low frequencies. Therefore, an image can in general be modeled as a bandlimited two-dimensional signal; that is, its Fourier transform $F(\Omega_x, \Omega_y)$ is approximately zero outside a bounded region, as shown in Figure 4.2.

FIGURE 4.2 Support Region of the Fourier Transform of a Typical Image

When the sampling process is performed according to equation 4.1, it is equivalent to sampling the continuous signal using a rectangular grid, as in Figure 4.3. As in the one-dimensional case (Oppenheim and Schafer, 1989), one can show that the spectrum of the digital image is composed of replicas of the analog image spectrum, repeated with a period of $2\pi/T_x$ along the horizontal frequency axis and $2\pi/T_y$ along the vertical frequency axis, as depicted in Figure 4.4.

FIGURE 4.3 Sampling a Two-Dimensional Signal Using a Rectangular Grid

FIGURE 4.5 Aliased Spectrum of the Two-Dimensional Discrete Signal Generated by Sampling Using a Rectangular Grid

The mathematical expression for the spectrum of the sampled signal is then:

$$F_s(\Omega_x, \Omega_y) = \frac{1}{T_x T_y} \sum_{k_x=-\infty}^{\infty} \sum_{k_y=-\infty}^{\infty} F\!\left(\Omega_x - \frac{2\pi k_x}{T_x},\; \Omega_y - \frac{2\pi k_y}{T_y}\right). \quad (4.4)$$
FIGURE 4.4 Spectrum of the Two-Dimensional Discrete Signal Generated by Sampling Using a Rectangular Grid
From equation 4.4 and Figure 4.4, we note that if the sampling intervals do not satisfy the Nyquist conditions $2\Omega_1 \le 2\pi/T_x$ and $2\Omega_2 \le 2\pi/T_y$, the analog signal cannot be reconstructed from its samples. This is due to the overlap of the replicated frequency spectra, which, as in the one-dimensional case, is known as aliasing. Aliasing is illustrated in Figure 4.5. Provided that aliasing is avoided, the sampled signal has to be processed by a two-dimensional low-pass filter to bring the image signal back into its analog form. This filter should ideally have a transfer function with an amplitude spectrum equal to one in the support region of the Fourier transform of the original image and zero outside that region. In Figure 4.6(A), we see the original LENA image; it has been sampled without aliasing. Figure 4.6(B) shows the LENA image sampled at a rate much smaller than the minimum sampling frequencies in the horizontal and vertical directions necessary to avoid aliasing; the aliasing can be clearly noted. In Figure 4.6(C), the bandwidth of the LENA image has been reduced prior to sampling so that aliasing is avoided after sampling. Figure 4.6(D) shows the filtered image after sampling. We can notice that, despite the blur due to the lower bandwidth, there is indeed less aliasing, as indicated by the smoother edges.
4.2.1 Nonrectangular Grid Sampling
A two-dimensional signal does not necessarily need to be sampled using a rectangular grid like the one described by equation 4.1 and Figure 4.3.
FIGURE 4.6 LENA Image: (A) original; (B) sampled below the minimum sampling frequencies; (C) low-pass filtered; (D) low-pass filtered and sampled
For example, a nonrectangular grid as in Figure 4.7 can be used. Nonrectangular grid sampling is best described using vector and matrix notation. Let:

$$\mathbf{t} = \begin{bmatrix} t_x \\ t_y \end{bmatrix}, \quad \mathbf{n} = \begin{bmatrix} n_x \\ n_y \end{bmatrix}, \quad V = \begin{bmatrix} v_{xx} & v_{xy} \\ v_{yx} & v_{yy} \end{bmatrix} = \begin{bmatrix} \mathbf{v}_x & \mathbf{v}_y \end{bmatrix}, \quad \mathbf{\Omega} = \begin{bmatrix} \Omega_x \\ \Omega_y \end{bmatrix}. \quad (4.5)$$

The sampling process then becomes:

$$f_D(\mathbf{n}) = f_D(n_x, n_y) = f(n_x \mathbf{v}_x + n_y \mathbf{v}_y) = f(V\mathbf{n}). \quad (4.6)$$

In the case of Figure 4.7:

$$V = \begin{bmatrix} T_x & T_x \\ T_y & -T_y \end{bmatrix}, \quad (4.7)$$

and it can be shown that the spectrum of the sampled signal becomes (Mersereau and Dudgeon, 1984):

$$F_s(\mathbf{\Omega}) = \frac{1}{|\det(V)|} \sum_{\mathbf{k}} F\!\left(\mathbf{\Omega} - 2\pi (V^{-1})^{t}\,\mathbf{k}\right). \quad (4.8)$$
FIGURE 4.7 Sampling a Two-Dimensional Signal Using a Nonrectangular Grid
The matrix $2\pi(V^{-1})^{t} = U$ gives the periodicity, in the two-dimensional frequency plane, of the spectrum repetitions. For the sampling grid defined in equation 4.7, the periodicity matrix in the frequency plane is:

$$U = \begin{bmatrix} \dfrac{\pi}{T_x} & \dfrac{\pi}{T_x} \\[6pt] \dfrac{\pi}{T_y} & -\dfrac{\pi}{T_y} \end{bmatrix}. \quad (4.9)$$
Then, the spectrum becomes like the one shown in Figure 4.8. One particular nonrectangular grid of interest arises when we make $T_y = \sqrt{3}\,T_x$. We refer to it as hexagonal sampling. One advantage of using such a sampling grid instead of the rectangular one is that, for signals with isotropic frequency response, hexagonal sampling needs 13.4% fewer samples than rectangular sampling to represent them without aliasing (Mersereau and Dudgeon, 1984). Such a sampling grid is also useful when the continuous signals have a baseband as in Figure 4.2, as can be observed from Figures 4.4, 4.5, and 4.8. In fact, given the support region of a baseband signal, one can always determine the best sampling grid so that aliasing is avoided with the smallest possible density of samples.
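The vector-matrix description above can be checked numerically. The sketch below (NumPy; the matrix V follows the form reconstructed in equation 4.7, and $T_y = \sqrt{3}\,T_x$ gives the hexagonal grid) builds the sample positions and verifies the periodicity matrix of equation 4.9:

```python
import numpy as np

Tx = 1.0
Ty = np.sqrt(3) * Tx      # the choice that yields hexagonal sampling

# Sampling matrix V = [v_x v_y] for the grid of Figure 4.7 (equation 4.7).
V = np.array([[Tx,  Tx],
              [Ty, -Ty]])

# Sample locations are t = V @ n over integer vectors n (equation 4.6).
grid = np.mgrid[-2:3, -2:3].reshape(2, -1)   # all integer pairs (nx, ny) in [-2, 2]
points = V @ grid                            # columns are the (x, y) sample positions

# Periodicity matrix of the spectrum replicas (equation 4.9): U = 2*pi*(V^{-1})^T.
U = 2.0 * np.pi * np.linalg.inv(V).T
expected = np.pi * np.array([[1.0 / Tx,  1.0 / Tx],
                             [1.0 / Ty, -1.0 / Ty]])
assert np.allclose(U, expected)
```

The final assertion confirms that the matrix of equation 4.9 is indeed $2\pi(V^{-1})^{t}$ for this choice of V.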
FIGURE 4.8 Spectrum of the Two-Dimensional Discrete Signal Generated by Sampling Using a Nonrectangular Grid

4.3 Image Quantization

To be represented in a digital computer, every signal must be quantized once sampled. A digital image, for example, is a matrix of numbers represented with the finite precision of the machine being used to process it. More generally, a quantizer is a function $Q(\cdot)$ such that:

$$Q(x) = r_l \quad \text{for } t_{l-1} < x \le t_l, \quad l = 1, \ldots, L. \quad (4.10)$$
The numbers $r_l$, $l = 1, \ldots, L$, are referred to as the reconstruction levels of the quantizer, and $t_l$, $l = 0, \ldots, L$, are referred to as its decision levels. A typical quantizer is depicted in Figure 4.9. In many cases, the decision and reconstruction levels of a quantizer are optimized so that the mean-squared error between the quantizer's input and output is minimized; this is usually referred to as a Lloyd-Max quantizer (Jain, 1989). In digital image processing, one often uses linear quantizers. For a linear quantizer, the reconstruction and decision levels satisfy the following relations:

$$t_0 = -\infty. \quad (4.11)$$
$$t_{l+1} - t_l = q, \quad \text{for } l = 1, \ldots, L. \quad (4.12)$$
$$r_{l+1} = t_l + \frac{q}{2}, \quad \text{for } l = 1, \ldots, L. \quad (4.13)$$
$$t_L = \infty. \quad (4.14)$$
The parameter q is referred to as the quantizer step size. The mean square error of the linear quantizer, assuming that the quantization error $e(n) = x(n) - Q[x(n)]$ is uniformly distributed, is given by (Jain, 1989):

$$E[e^2(n)] = \frac{q^2}{12}. \quad (4.15)$$
FIGURE 4.9 Typical Quantizer
Since the human eye cannot in general distinguish more than around 256 gray levels, 8 bits per pixel are usually enough for representing a grayscale image. In the case of color images, one uses 8 bits for each color component; for RGB images, this gives 24 bits/pixel. On many occasions, one needs to quantize an image with fewer than 8 bits/pixel. Figure 4.10(A) shows the image LENA quantized with 4 bits/pixel. One can notice the effect of false contours. One way to overcome this problem is to perform dithering (Jain, 1989), which consists of adding pseudo-random noise to the image prior to quantization. Before display, this noise may or may not be subtracted from the image. This is illustrated in Figures 4.10(A) to 4.10(C). Figure 4.10(B) shows the LENA image with pseudo-random noise uniformly distributed in the interval [−25, 25] added, quantized with 8 levels. Figure 4.10(C) shows the image in Figure 4.10(B) with the pseudo-random noise subtracted from it. One can notice that in both cases there is a great improvement in visual quality compared with Figure 4.10(A): the false contours almost disappear.

FIGURE 4.10 LENA Image: (A) quantized with 8 levels; (B) quantized with 8 levels with noise added prior to quantization; (C) the image in (B) with the noise also subtracted after quantization
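The dithering procedure can be sketched as follows (a toy example on a synthetic gray ramp; the 8-level quantizer and the dither width of one quantizer step are assumed choices, not taken from the text). Averaged over many rows, the dithered quantizer output tracks the ramp, while the plain quantizer shows the staircase bias that produces false contours:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(img, levels=8):
    """Map gray levels in [0, 255] to `levels` reconstruction values."""
    q = 256.0 / levels
    return np.clip(q * np.floor(img / q) + q / 2, 0, 255)

ramp = np.tile(np.linspace(0.0, 255.0, 256), (64, 1))  # smooth ramp: worst case for contours

plain = quantize(ramp)                        # coarse steps -> visible false contours
noise = rng.uniform(-16.0, 16.0, ramp.shape)  # uniform dither spanning one quantizer step
dithered = quantize(ramp + noise)             # contours broken into fine-grained texture
displayed = dithered - noise                  # optionally subtract the noise before display

# Column-averaged, the dithered output is nearly unbiased, unlike the staircase.
bias_plain = np.mean(np.abs(plain.mean(axis=0) - ramp[0]))
bias_dith = np.mean(np.abs(dithered.mean(axis=0) - ramp[0]))
assert bias_dith < bias_plain
```

The design point is that uniform dither of exactly one step width makes the quantizer unbiased in expectation, trading large-scale contouring for perceptually less objectionable noise.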
4 Digital Image Processing
897
4.4 Image Enhancement

Image enhancement is the name given to a class of procedures by which an image is processed so that it is better displayed or improved for a specific analysis. This may come at the expense of other aspects of the image. There are many enhancement techniques that can be applied to an image. To give a flavor of them, we present two examples: histogram manipulation and linear filtering.
4.4.1 Histogram Manipulation

The histogram of an image is a function that maps each gray level of an image to the number of times it occurs in the image. For example, the image in Figure 4.11(A) has the histogram shown in Figure 4.11(B). One should note that the pixels have, in general, gray levels in the integer range [0, 255].

Histogram Equalization
By looking at Figure 4.11(A), one notices that the image is too dark. This can be confirmed by the image's histogram in Figure 4.11(B), where one can see that the most frequent gray levels have low values. To enhance the appearance of the image, one needs to re-map the image's gray levels so that they become more uniformly distributed. Ideally, one would apply a transformation that makes the histogram of the image look uniform. If $F_U(u) = \int_0^u p_U(x)\,dx$ is the distribution function of the image, then this transformation would be $y = F_U(x)$, suitably scaled to the gray-level range (Gonzalez and Wintz, 1977). In practice, since the pixels can attain only integer values, this operation cannot be performed exactly, and some sort of quantization must be carried out (Jain, 1989). Figure 4.11(C) shows the image with equalized histogram, and Figure 4.11(D) shows its histogram. Note that the quality of the image is far superior to the original one, and the histogram is much more uniform than the one in Figure 4.11(B).

FIGURE 4.11 Histogram of an Image: (A) original image; (B) histogram of the original image; (C) image with equalized histogram; (D) histogram of the equalized image

Another similar histogram manipulation technique is histogram specification, where we try to make the histogram of an image as similar as possible to a given one (Gonzalez and Wintz, 1977; Jain, 1989). Some texts refer to histogram matching, a kind of histogram specification in which the histogram of an image is matched to that of another image. It can be used, for example, when there are two images of the same scene taken from two different sensors.
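A minimal sketch of histogram equalization along the lines described above (the mapping is the scaled empirical distribution function, rounded to integer gray levels; the dark test image is synthetic):

```python
import numpy as np

def equalize(img):
    """Histogram equalization of an 8-bit grayscale image: map each gray level
    through the scaled empirical distribution function, then round."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size             # empirical F_U(u)
    lut = np.round(255 * cdf).astype(np.uint8)   # quantized gray-level mapping
    return lut[img]

# A dark synthetic image: all gray levels concentrated in the low range [0, 64).
rng = np.random.default_rng(2)
dark = rng.integers(0, 64, size=(128, 128)).astype(np.uint8)
eq = equalize(dark)
# The equalized image spreads the levels over the full [0, 255] range.
```

After equalization, the mean gray level rises from roughly 32 to roughly 128, and the output occupies nearly the full dynamic range.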
4.4.2 Linear Filtering

Linear filtering is one of the most powerful image enhancement methods. It is a process in which part of the signal frequency spectrum is modified by the transfer function of the filter. In general, the filters under consideration are linear and shift-invariant, and thus the output image is characterized by the convolution sum between the input image and the filter impulse response; that is:

$$y(m, n) = \sum_{i=0}^{M} \sum_{j=0}^{N} h(m - i, n - j)\,x(i, j) = h(m, n) * x(m, n), \quad (4.16)$$

where $y(m, n)$ is the output image, $h(m, n)$ is the filter impulse response, and $x(m, n)$ is the input image.
For example, low-pass filtering has the effect of smoothing an image, as can be observed in Figure 4.6(C). High-pass filtering, on the other hand, usually sharpens the edges of an image; it can even be used for edge detection, which is used in image analysis algorithms. Image filtering can be carried out either in the spatial domain, as in equation 4.16, or in the frequency domain, using the discrete Fourier transform (DFT) (Mersereau and Dudgeon, 1984; Oppenheim and Schafer, 1989). For filtering using the DFT, we use the well-known property that the DFT of the circular convolution of two sequences is equal to the product of the DFTs of the two sequences. That is, for y(m, n) defined as in equation 4.16, provided that a DFT of sufficient size is used, we have:

$$\mathrm{DFT}\{y(m, n)\} = \mathrm{DFT}\{h(m, n)\} \cdot \mathrm{DFT}\{x(m, n)\}. \quad (4.17)$$
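The property in equation 4.17 can be verified numerically. The sketch below (NumPy; the 3×3 averaging filter is an arbitrary example) compares frequency-domain filtering against a brute-force circular convolution:

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=(32, 32))      # input "image"
h = np.zeros((32, 32))
h[:3, :3] = 1.0 / 9.0              # 3x3 averaging filter, zero-padded to the image size

# Frequency-domain filtering: multiply the DFTs and invert (equation 4.17).
y_fft = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(x)))

# Spatial-domain circular convolution, brute force over the nonzero filter taps:
# y(m, n) = sum_{i,j} h(i, j) x(m - i, n - j), with the indices wrapping around.
y_ref = np.zeros_like(x)
for i in range(3):
    for j in range(3):
        y_ref += h[i, j] * np.roll(x, (i, j), axis=(0, 1))

assert np.allclose(y_fft, y_ref)
```

Note that this computes a *circular* convolution; to obtain the linear convolution of equation 4.16 via the DFT, both arrays would have to be zero-padded to at least the sum of their supports, which is the "DFT of sufficient size" proviso in the text.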
Therefore, one can perform image filtering in the frequency domain by conveniently modifying the DFT of the image and taking the inverse transform. This is exemplified in Figures 4.12(A) through 4.12(D). In Figure 4.12(A), we see an image contaminated with periodic noise. Figure 4.12(B) shows the DFT of this image; one can clearly see the periodic noise as two well-defined points on the DFT, to which arrows are pointing for convenience. In Figure 4.12(C), one can see the DFT of the image with the periodic noise removed; the frequency locations corresponding to the periodic noise were set to zero. Figure 4.12(D) shows the inverse DFT of the image in Figure 4.12(C). One can notice that the noise has been effectively removed, improving the quality of the image a great deal. Image enhancement is an extensive topic; in this text, we aim at giving a flavor of it by providing typical examples. For a more detailed treatment, see the works by Gonzalez and Wintz (1977) and Jain (1989).
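The periodic-noise example can be reproduced in miniature (a synthetic stand-in image and a single noise sinusoid at assumed frequencies; for a real image the noise bins would be located by inspecting the DFT, as in Figure 4.12(B)):

```python
import numpy as np

rng = np.random.default_rng(3)
clean = rng.normal(128.0, 20.0, size=(128, 128))      # stand-in for a natural image

# Contaminate with a 2-D sinusoid (periodic noise): it appears as two
# conjugate-symmetric peaks in the DFT.
m, n = np.meshgrid(np.arange(128), np.arange(128), indexing="ij")
noisy = clean + 40.0 * np.cos(2 * np.pi * (20 * m + 30 * n) / 128)

F = np.fft.fft2(noisy)
F[20, 30] = 0                                         # zero the noise bin...
F[-20, -30] = 0                                       # ...and its conjugate partner
restored = np.fft.ifft2(F).real

# Zeroing two bins removes the sinusoid almost entirely (at the cost of a tiny
# amount of image content at those same frequencies).
assert np.abs(restored - clean).max() < np.abs(noisy - clean).max()
```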
4.5 Image Restoration

In image restoration, one is usually interested in recovering an image from a degraded version of it. It is essentially different from image enhancement, which is concerned with the accentuation or extraction of image features. Moreover, in image restoration, one usually has mathematical models of the degradation and a statistical description of the ensemble of images used. In this section, we study one type of restoration problem, namely the recovery of images degraded by a linear system and contaminated with additive noise. A linear model for degradations is depicted in Figure 4.13. The matrix u(m, n) is the original image, h(m, n) is a linear system representing the degradation, $\eta(m, n)$ is additive noise, and v(m, n) is the observed degraded image. We can write:

$$v(m, n) = u(m, n) * h(m, n) + \eta(m, n), \quad (4.18)$$

where $*$ represents two-dimensional convolution (see equation 4.16). An illustrative case is when $\eta(m, n) = 0$. In this situation, we can recover u(m, n) from v(m, n) by using a linear filter g(m, n) such that $g(m, n) * h(m, n) = \delta(m, n)$. In the frequency domain, this is equivalent to having:

$$G(\omega_x, \omega_y) = \frac{1}{H(\omega_x, \omega_y)}. \quad (4.19)$$

This procedure is called inverse filtering. The main problem with the inverse filter is that it is not defined when there is a pair $(\omega_1, \omega_2)$ such that $H(\omega_1, \omega_2) = 0$. A solution to this problem is the pseudo-inverse filter, defined as:
FIGURE 4.12 Image Filtering: (A) image contaminated with periodic noise; (B) DFT of (A) with the points corresponding to the noise highlighted; (C) DFT of (A) with the periodic noise removed; (D) recovered image
$$G(\omega_x, \omega_y) = \begin{cases} \dfrac{1}{H(\omega_x, \omega_y)}, & H(\omega_x, \omega_y) \neq 0; \\[6pt] 0, & H(\omega_x, \omega_y) = 0. \end{cases} \quad (4.20)$$

As an example, we now analyze the restoration of images degraded by blur. Image blur caused by excessive film exposure time in the presence of camera motion in the horizontal direction can be modeled, in the absence of additive noise, as:

$$v(x, y) = \frac{1}{T} \int_0^T u(x - ct, y)\,dt, \quad (4.21)$$

where c is the relative speed between the camera and the scene and T is the camera exposure time. In the digital domain, this is equivalent to the following operation:

$$v(m, n) = \frac{1}{L} \sum_{l=0}^{L-1} u(m - l, n). \quad (4.22)$$

FIGURE 4.13 Image Degradation Model
Figure 4.14(A) shows the ZELDA image, of dimensions 360 × 288, blurred using equation 4.22 with L = 48. Applying the pseudo-inverse filter (see equation 4.20) in the frequency domain (using a DFT), we obtain Figure 4.14(B). We notice that the recovery can be quite good. It is important to note that, in this case, since the process is supposed to be noiseless, the degradation had to be performed using full machine precision. If we quantize the degraded image using 256 levels, as is usual in image processing applications, a small quantization noise is superimposed on the degraded image. As explained in Section 4.3, this quantization noise is invisible to the human eye. Nevertheless, applying the pseudo-inverse filter to the quantized degraded image, we obtain the image in Figure 4.14(C): the restoration process has failed completely. The excessive noise present in the restored image results from the fact that, since the quantization noise is relatively white, there is noise information even around the frequencies where $H(\omega_1, \omega_2)$ is very small; as a consequence, $G(\omega_1, \omega_2)$ is very large there. This greatly amplifies the noise around these frequencies, producing an image like the one in Figure 4.14(C). In fact, one would need a filter that is more or less equivalent to the pseudo-inverse at frequencies where the signal-to-noise ratio is high but has little effect in regions where the signal-to-noise ratio is low. One way to accomplish this is the Wiener filter, which is optimal for given image and noise statistics. We describe it in what follows.

FIGURE 4.14 (A) image degraded by blur; (B) image restored using the pseudo-inverse filter; (C) image in (A), quantized with 256 levels, after application of the pseudo-inverse filter; (D) image in (A), quantized with 256 levels, after application of the Wiener filter
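The contrast between the two filters can be sketched numerically before the derivation (synthetic image, circular horizontal blur, white noise, and flat assumed spectra; the Wiener form used is the one derived in Section 4.5.1). In the noiseless case the pseudo-inverse recovers the image essentially exactly; with noise added, the Wiener filter is far more robust:

```python
import numpy as np

N = 256
rng = np.random.default_rng(4)

# Horizontal motion blur of length L (equation 4.22), applied circularly via the DFT.
L = 15
h = np.zeros((N, N))
h[0, :L] = 1.0 / L
H = np.fft.fft2(h)

u = rng.normal(0.0, 30.0, size=(N, N))               # stand-in "original" image
v = np.fft.ifft2(np.fft.fft2(u) * H).real            # blurred image (noiseless)
v_noisy = v + rng.normal(0.0, 5.0, size=(N, N))      # blurred image plus white noise

# Pseudo-inverse filter (equation 4.20): 1/H where H is nonzero, else 0.
eps = 1e-8
nonzero = np.abs(H) > eps
G_pinv = np.where(nonzero, 1.0, 0.0) / np.where(nonzero, H, 1.0)

# Wiener filter for flat (white) spectra: S_nn/S_uu = 25/900 (the true ratio here).
G_wien = np.conj(H) / (np.abs(H) ** 2 + 25.0 / 900.0)

def apply_filter(G, img):
    return np.fft.ifft2(np.fft.fft2(img) * G).real

# Noiseless: the pseudo-inverse inverts the blur essentially exactly.
assert np.allclose(apply_filter(G_pinv, v), u, atol=1e-6)

# Noisy: the pseudo-inverse amplifies noise near the zeros of H; Wiener does not.
mse_pinv = np.mean((apply_filter(G_pinv, v_noisy) - u) ** 2)
mse_wien = np.mean((apply_filter(G_wien, v_noisy) - u) ** 2)
assert mse_wien < mse_pinv
```

The blur length L = 15 was chosen so that no DFT bin of H is exactly zero, making the noiseless inversion exact while leaving near-zero bins that amplify noise, which is precisely the failure mode of Figure 4.14(C).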
4.5.1 Wiener Filter

Given the image degradation model in Figure 4.13, suppose that u(m, n) and $\eta(m, n)$ are zero-mean, stationary random processes (Papoulis, 1984). The image restoration problem can be formally stated as: given v(m, n), find the best estimate $\hat{u}(m, n)$ of u(m, n) such that the mean squared estimation error $E[e^2(m, n)]$ is minimum, where:

$$e(m, n) = u(m, n) - \hat{u}(m, n). \quad (4.23)$$
The best estimate $\hat{u}(m, n)$ is known to be (Papoulis, 1984; Jain, 1989):

$$\hat{u}(m, n) = E[u(m, n) \mid v(k, l),\ \forall (k, l)]. \quad (4.24)$$

In our case, we are looking for linear estimates, that is, $\hat{u}(m, n)$ of the form:

$$\hat{u}(m, n) = v(m, n) * g(m, n). \quad (4.25)$$

According to the orthogonality principle (Papoulis, 1984), we have the best estimate when:

$$E[e(m_1, n_1)\,v^*(m_2, n_2)] = 0, \quad \forall m_1, n_1, m_2, n_2, \quad (4.26)$$

where $^*$ represents complex conjugation. From equation 4.23, the above equation is equivalent to:

$$E[\{u(m_1, n_1) - \hat{u}(m_1, n_1)\}\,v^*(m_2, n_2)] = 0, \quad \forall m_1, n_1, m_2, n_2. \quad (4.27)$$

For stationary processes, equation 4.27 implies that:

$$r_{uv}(m, n) = r_{\hat{u}v}(m, n), \quad (4.28)$$

where $r_{ab}(m, n)$ is the cross-correlation (Papoulis, 1984) between signals a(m, n) and b(m, n). If $S_{ab}(\omega_x, \omega_y)$ is the cross power spectral density (Papoulis, 1984) between signals a(m, n) and b(m, n), the above equation implies that:

$$S_{uv}(\omega_x, \omega_y) = S_{\hat{u}v}(\omega_x, \omega_y). \quad (4.29)$$

Since, from equation 4.25, $\hat{u}(m, n) = v(m, n) * g(m, n)$, equation 4.29 is equivalent to:

$$S_{\hat{u}v}(\omega_x, \omega_y) = G(\omega_x, \omega_y)\,S_{vv}(\omega_x, \omega_y). \quad (4.30)$$

From equation 4.29, this implies that:

$$G(\omega_x, \omega_y) = \frac{S_{uv}(\omega_x, \omega_y)}{S_{vv}(\omega_x, \omega_y)}. \quad (4.31)$$

Since, from Figure 4.13 and equation 4.18, $v(m, n) = u(m, n) * h(m, n) + \eta(m, n)$, and u(m, n) and $\eta(m, n)$ are considered uncorrelated, we have:

$$S_{vv}(\omega_x, \omega_y) = S_{uu}(\omega_x, \omega_y)\,|H(\omega_x, \omega_y)|^2 + S_{\eta\eta}(\omega_x, \omega_y). \quad (4.32)$$

$$S_{uv}(\omega_x, \omega_y) = S_{uu}(\omega_x, \omega_y)\,H^*(\omega_x, \omega_y). \quad (4.33)$$

Then, from equation 4.31, the above equations imply that the optimum linear estimation filter is:

$$G(\omega_x, \omega_y) = \frac{S_{uu}(\omega_x, \omega_y)\,H^*(\omega_x, \omega_y)}{S_{uu}(\omega_x, \omega_y)\,|H(\omega_x, \omega_y)|^2 + S_{\eta\eta}(\omega_x, \omega_y)}. \quad (4.34)$$

This is known as the Wiener filter. Note that the implementation of the Wiener filter requires, besides knowledge of the degradation process $H(\omega_x, \omega_y)$, knowledge of the power spectral density of the ensemble of original inputs, $S_{uu}(\omega_x, \omega_y)$, and of the noise, $S_{\eta\eta}(\omega_x, \omega_y)$.

Figure 4.14(D) shows the image recovered from the blurred and quantized image. In the implementation of the Wiener filter, it was supposed that the ensemble of inputs could be modeled by a first-order Gauss-Markov process [AR(1)] with correlation coefficient equal to 0.95. Since the image has a dynamic range of 256 and was quantized with quantization interval 1, the noise was supposed to be white and uniform with power spectral density 1/12 (see equation 4.15). We note that the recovery is far better than the one obtained with the pseudo-inverse filter.

We can rewrite equation 4.34 as:

$$G(\omega_x, \omega_y) = \frac{H^*(\omega_x, \omega_y)}{|H(\omega_x, \omega_y)|^2 + S_{\eta\eta}(\omega_x, \omega_y)/S_{uu}(\omega_x, \omega_y)}. \quad (4.35)$$

A couple of interesting conclusions can be drawn from equation 4.35. The first is that, in regions of the frequency plane where the signal-to-noise ratio is high ($S_{\eta\eta}(\omega_x, \omega_y)/S_{uu}(\omega_x, \omega_y) \to 0$), equation 4.35 reduces to equation 4.20, the pseudo-inverse filter. On the other hand, in regions of the frequency plane where the signal-to-noise ratio is low, the Wiener filter depends on the power spectral densities of both the noise and the input.

In this section, we attempted to provide the reader with a general notion of the image restoration problem, which is still an active area of research (Banham and Katsaggelos, 1997). For a deeper treatment of the subject, the reader is referred to Gonzalez and Wintz (1977), Jain (1989), Lim (1990), Pratt (1978), and Rosenfeld and Kak (1982).

4.6 Image Coding
One of the main difficulties in image processing applications concerns the large amount of data required to represent an image. For example, one frame of a standard-definition digital television image has dimensions 720 × 480 pixels. Therefore, using 256 gray levels (8 bits) for each color component, it requires 3 × 8 × 720 × 480 = 8,294,400 bits to be represented. Considering that digital video applications use 30 frames/sec, one second of video needs more than 240 Mbits! Thus, for the processing, storage, and transmission of images to be feasible, one needs somehow to reduce this number of bits. Image compression, or coding, is the discipline that deals with ways of achieving this.

To better understand the image compression problem, consider the 256 × 256 image in Figure 4.15. If we represent this image as it is, we need 8 bits/pixel, giving a total of 8 × 256 × 256 = 524,288 bits. Alternatively, if we know that the image in Figure 4.15 is a circle, it is enough to say that it represents a circle of radius 64, with gray level 102, centered at coordinates x = 102 and y = 80, superimposed on a background of gray level 255. Since the radius, x coordinate, and y coordinate are between 0 and 255, one needs 8 bits to specify each; since the gray levels of the circle and of the background need 8 bits each, the whole image in Figure 4.15 needs only 40 bits to be specified. This is equivalent to a compression ratio of more than 13,000:1!

FIGURE 4.15 Illustration of the Image Compression Problem

The vast majority of images, however, are not as simple as the one in Figure 4.15, and such an amount of compression cannot be easily obtained. Fortunately, most useful images are redundant; that is, their pixels carry a lot of superfluous information. This can be better understood with the images in Figure 4.16. Looking at Figure 4.16(A), we note that if we take a pixel, for example at the girl's face, the pixels around it have on average similar values. Certainly there are pixels for which this does not hold, such as at the boundary of the girl's skin and hair, but it is true of most of the image's pixels. Examining the images in Figures 4.16(B) and 4.16(C), we see that the same reasoning applies to them. In other words, if we know something about one pixel, it is likely that we can guess, with a good probability of success, many things about the pixels around it.
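The arithmetic of the circle example, for reference (the five 8-bit parameters are those listed above):

```python
# Bit counts for the circle image of Figure 4.15: raw 8-bit pixels versus the
# five 8-bit parameters (radius, x center, y center, circle gray level,
# background gray level).
raw_bits = 8 * 256 * 256
described_bits = 5 * 8
ratio = raw_bits / described_bits
assert raw_bits == 524_288
assert ratio > 13_000          # the "more than 13,000:1" compression ratio
```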
This implies that we do not need all the pixels to represent the images, only the ones carrying information that cannot be guessed. At this point, a question arises: are all images redundant? The answer is no, as one can see in Figure 4.16(D). This image
represents uniformly distributed white noise between 0 and 255. Knowing something about a given pixel does not imply knowing anything about the pixels around it. Such an image, however, is not useful for carrying information and is not of the kind frequently present in nature. Indeed, one can state that most natural images are redundant and therefore amenable to compression. How to exploit this redundancy to create a compact representation of an image is the main objective of image compression techniques.

In general, an image compression method can be split into three basic steps, as depicted in Figure 4.17: transformation, quantization, and coding. In the transformation step, a mathematical transformation is applied to the image pixels x(m, n), generating a set of coefficients c(k, l) with less correlation among themselves; its main objective is to exploit the correlation between the image pixels. For these coefficients c(k, l) to be represented with a limited number of bits, they have to be mapped to a finite number of symbols, $\hat{c}(k, l)$. This mapping is commonly referred to as quantization; Section 4.3 introduced some forms of it. After the coefficients are mapped to a finite number of symbols, these symbols $\hat{c}(k, l)$ must be mapped to a string of bits b(s) to be either transmitted or stored. This operation is commonly referred to as coding. We have compression only when the number of bits in b(s), $s = 0, 1, \ldots, N$, is smaller than the number of bits needed to represent the pixels x(m, n) themselves. These three steps are clearly illustrated in the next example, differential pulse-code modulation (DPCM).
4.6.1 DPCM

When looking at the redundant images in Figures 4.16(A) through 4.16(C), one notices that if we know the value of a pixel x(m, n−1), a good guess for the value of the pixel on its right, x(m, n), would be the value of x(m, n−1) itself. A way to reduce the redundancy of those images is to transmit or encode only the ''part'' of each pixel that cannot be guessed. This is the difference between the actual value of the pixel and the guess x̂(m, n); that is, one would encode c(m, n) as:

c(m, n) = x(m, n) − x̂(m, n) = x(m, n) − x(m, n−1).  (4.36)
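Equation 4.36 amounts to replacing each pixel by its difference from the pixel on its left; a minimal sketch (the first pixel of each row, which has no left neighbor, is kept as-is, an edge-handling choice made here for illustration):

```python
def dpcm_transform(image):
    """Horizontal prediction differences c(m, n) = x(m, n) - x(m, n-1).

    The first pixel of each row has no left neighbor, so it is kept
    unchanged (equivalently, predicted from 0).
    """
    return [[row[n] - (row[n - 1] if n > 0 else 0) for n in range(len(row))]
            for row in image]

def dpcm_inverse(coeffs):
    """Recover the pixels by accumulating the differences along each row."""
    out = []
    for row in coeffs:
        rec, acc = [], 0
        for c in row:
            acc += c
            rec.append(acc)
        out.append(rec)
    return out

# A smooth row yields small differences, and the inverse recovers it exactly.
print(dpcm_transform([[100, 102, 101]]))  # [[100, 2, -1]]
```

On redundant images the differences cluster near zero, which is exactly what makes them cheaper to encode.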
Because the images are redundant, the values of c(m, n) tend to be small. This is illustrated in Figure 4.18, where we can see the histogram of the image LENA [see Figure 4.6(A)] and the histogram of the coefficients generated according to equation 4.36. One can see that the histogram of the original image in Figure 4.18(A) has nothing special about it; it has a more or less random shape. On the other hand, the histogram of the coefficients c(m, n) has a well-defined shape, tending to a Laplacian distribution. One can also see that the small differences c(m, n) are indeed much more likely than the larger differences, confirming the redundancy of the LENA
4 Digital Image Processing
FIGURE 4.16 Redundant and Nonredundant Images: Differences in Pixels. (A), (B), (C) Redundant Images; (D) Nonredundant Image
image. A remarkable fact is that the histogram of the differences has a similar distribution for almost any type of natural image. The transformation we performed, consisting of taking differences between adjacent pixels, has thus effectively reduced the redundancy of the original image. In addition, because the smaller values are much more probable than the larger ones, if we use a variable-length code to represent the coefficients, where the more likely values are represented with a smaller number of bits and the less likely values with a larger number of bits, we can effectively reduce the number of bits used.

FIGURE 4.17 Three Basic Steps Involved in Image Compression: x(m, n) → Transformation → c(k, l) → Quantization → ĉ(k, l) → Coding → b(s)

It is well known from information theory (Cover and Thomas, 1991) that, by properly choosing a code, one can encode a source using a number of bits arbitrarily close to its entropy, defined as:
Eduardo A.B. da Silva and Gelson V. Mendonça

FIGURE 4.18 Redundant Images with Small Coefficient Values. (A) Histogram of the Image LENA; (B) Histogram of the Coefficients c(m, n) Generated According to Equation 4.36

H(S) = − Σ_{i=1}^{L} p_i log_2 p_i,  (4.37)
where the source S consists of L symbols {s_1, . . . , s_L}, with symbol s_i having probability of occurrence p_i. Examples of such codes are Huffman codes (Cover and Thomas, 1991) and arithmetic codes (Bell et al., 1990). A discussion of them is beyond the scope of this article; for a thorough treatment, the reader is referred to Bell et al. (1990) and Cover and Thomas (1991). For example, the image LENA has the histogram in Figure 4.18(A). It corresponds to an entropy of 7.40 bits/pixel, and thus one needs essentially 8 bits to represent each pixel of the image LENA. On the other hand, the entropy of the coefficients, whose histogram is in Figure 4.18(B), is 3.18 bits/coefficient. Therefore, one can find a Huffman code that needs only about 4 bits to represent each of them. This is equivalent to a compression ratio of 2:1. This compression was achieved by the combination of the transformation (taking differences) and coding (Huffman codes) operations. It is noteworthy that the compression obtained in the above example was achieved without any loss; that is, the input image can be recovered, without error, from its compressed version. This is referred to as lossless compression. In some circumstances, one is willing to give up error-free recovery to obtain larger compression ratios. This is referred to as lossy compression. In the case of images, lossy compression is used a great deal, especially when one can introduce distortions with just a low degree of visibility (Watson, 1993). These higher compression ratios can be obtained through quantization, as described in Section 4.2. In the example above, the differences c(m, n) have a dynamic range from 0 − 255 = −255 to 255 − 0 = 255. Therefore, there are 511
such differences. By applying quantization as in Figure 4.9, the number of possible differences can be decreased at the expense of some distortion being introduced. Figure 4.19 describes a practical way of achieving lossy compression using DPCM. Notice that, in this case, due to quantization, the decoder cannot recover the original pixels without loss. Therefore, instead of c(m, n) being computed as in equation 4.36, with x(m, n−1) being the prediction for the pixel x(m, n), the prediction for x(m, n) is the ''recoverable'' value of x(m, n−1) at the decoder. Note that, to achieve this, there must be a perfect copy of the decoder inside the encoder. Details of this kind of coder/decoder can be found in Jayant and Noll (1984). For the LENA image, if one uses a 16-level quantizer of the same type as the one in Figure 4.9, then the histogram of the quantized differences Q[c(m, n)] is as in Figure 4.20(A). The bit rate necessary to represent them can be made arbitrarily close to the entropy by means of conveniently designed Huffman codes. In this case, the entropy is 1.12 bits/pixel; therefore, the reconstructed image in Figure 4.20(B) is the result of a compression ratio of more than 7:1. As can be seen, despite the high compression ratio achieved, the quality of the reconstructed image is quite acceptable. This example highlights the nature and effect of the three basic operations described in Figure 4.17: transformation, for reducing the redundancy between the pixels; quantization, for reducing the number of symbols to encode; and coding, to associate with the generated symbols the smallest possible average number of bits.
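The entropy of equation 4.37, which bounds the average bit rate achievable by the coding step, can be computed directly from a histogram; a small sketch in Python:

```python
from math import log2

def entropy(histogram):
    """H(S) = -sum_i p_i * log2(p_i), with p_i taken as normalized bin counts."""
    total = sum(histogram)
    return -sum((c / total) * log2(c / total) for c in histogram if c > 0)

# A uniform 256-level source attains the maximum of 8 bits/symbol; a heavily
# peaked (Laplacian-like) histogram of differences has much lower entropy.
print(entropy([1] * 256))      # 8.0
print(entropy([97, 1, 1, 1]))  # well below log2(4) = 2
```

This is the quantity behind the 7.40 bits/pixel and 1.12 bits/pixel figures quoted in the text.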
4.6.2 Transform Coding

In the DPCM encoder/decoder already discussed, the transformation consisted of taking the differences between a pixel
4 Digital Image Processing
905
FIGURE 4.19 DPCM Encoder/Decoder. Encoder: the difference c(m, n) = x(m, n) − x̂(m, n) between the input pixel and the prediction x̂(m, n) = x_q(m, n−1) is quantized to Q[c(m, n)] and sent over the channel; the prediction is formed by delaying the locally reconstructed pixel x_q(m, n) = x̂(m, n) + Q[c(m, n)]. Decoder: the same accumulation x_q(m, n) = x̂(m, n) + Q[c(m, n)] recovers the pixels.
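The closed-loop structure of Figure 4.19, in which the encoder forms its prediction from the same reconstructed pixel the decoder will have, can be sketched per row as follows (a uniform quantizer with step 16 is a hypothetical stand-in for the 16-level quantizer mentioned in the text):

```python
def quantize(c, step=16):
    # Uniform quantizer: maps c to the nearest multiple of step.
    return step * round(c / step)

def dpcm_encode_row(row, step=16):
    """Closed-loop DPCM: predict each pixel from its *reconstructed* left
    neighbor, so encoder and decoder predictions never drift apart."""
    symbols, prediction = [], 0
    for x in row:
        q = quantize(x - prediction, step)
        symbols.append(q)
        prediction += q          # x_q(m, n) = prediction + Q[c(m, n)]
    return symbols

def dpcm_decode_row(symbols):
    out, prediction = [], 0
    for q in symbols:
        prediction += q
        out.append(prediction)
    return out

row = [100, 103, 101, 140, 139]
decoded = dpcm_decode_row(dpcm_encode_row(row))
# Each reconstructed pixel stays within half a quantizer step of the input.
print(decoded)
```

Because the prediction uses the reconstructed value rather than the original, the quantization error never accumulates along the row.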
FIGURE 4.20 (A) Histogram of the Quantized Differences Q[c(m, n)] in Figure 4.19 for the Case of the LENA Image; (B) Recovered Image
and the adjacent pixel to its left. It indeed reduced the redundancy existing in the image. However, only the redundancy between two adjacent pixels was exploited. By looking at images like the ones in Figures 4.16(A) through 4.16(C), we notice that a pixel has a high correlation with many pixels surrounding it. This implies that not only the redundancy between adjacent pixels but also the redundancy in a whole area of the image can be exploited. By exploiting this, one expects to obtain even higher compression ratios. Transform coding is the generic name given to compression methods in which this type of redundancy reduction is carried out by means of a linear transform (Clarke, 1985). By far the most widely used linear transform in compression schemes to date is the discrete cosine transform (DCT). It is part of the old JPEG standard (Pennebaker and Mitchell, 1993) as well as most modern video compression standards, like H.261 (ITU-T, 1990), H.263 (ITU, 1996), MPEG-1 (ISO/IEC, 1992), MPEG-2 (ISO/IEC, 1994), and MPEG-4 (ISO/IEC, 1997). The DCT of a length-N signal x(n) consists of the coefficients c(k) such that (Ahmed et al., 1974):

c(k) = a(k) Σ_{n=0}^{N−1} x(n) cos(π(2n + 1)k / 2N),  (4.38)

where a(0) = √(1/N) and a(k) = √(2/N) for 1 ≤ k ≤ N − 1.
Given the DCT coefficients, the original signal can be recovered using:

x(n) = Σ_{k=0}^{N−1} a(k) c(k) cos(π(2n + 1)k / 2N).  (4.39)
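Equations 4.38 and 4.39 translate directly into code; a naive O(N²) sketch that follows the formulas as written (fast DCT algorithms exist but are not shown):

```python
from math import cos, pi, sqrt

def dct(x):
    """Forward DCT, equation 4.38: c(k) = a(k) sum_n x(n) cos(pi(2n+1)k / 2N)."""
    N = len(x)
    a = lambda k: sqrt(1 / N) if k == 0 else sqrt(2 / N)
    return [a(k) * sum(x[n] * cos(pi * (2 * n + 1) * k / (2 * N))
                       for n in range(N)) for k in range(N)]

def idct(c):
    """Inverse DCT, equation 4.39: x(n) = sum_k a(k) c(k) cos(pi(2n+1)k / 2N)."""
    N = len(c)
    a = lambda k: sqrt(1 / N) if k == 0 else sqrt(2 / N)
    return [sum(a(k) * c[k] * cos(pi * (2 * n + 1) * k / (2 * N))
                for k in range(N)) for n in range(N)]

x = [100, 102, 104, 103, 60, 61, 59, 58]
c = dct(x)
# The round trip recovers the signal up to floating-point error, and for
# smooth inputs the energy concentrates in the low-frequency coefficients.
assert all(abs(u - v) < 1e-9 for u, v in zip(idct(c), x))
```

A constant signal, for instance, yields a single nonzero coefficient c(0), the extreme case of the energy compaction the text describes.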
For images, the DCT can be calculated by first computing the one-dimensional DCT of all the rows of the image and then computing the one-dimensional DCT of all the columns of the result. The DCT is very effective in reducing the redundancy of real images, as it tends to concentrate the energy in very few transform coefficients. In other words, the coefficients with most of the energy are in general the ones with low values of k in equation 4.38. Since, in this equation, k is proportional to the frequency of the cosine function, one usually refers to them as low-frequency coefficients. To illustrate this, Figure 4.21 shows the 256 × 256 DCT of the 256 × 256 image LENA. Actually, the logarithm of the absolute value of the coefficients has been plotted: black corresponds to zero, and the whiter the pixel, the higher its value. We can see clearly that the energy is highly concentrated in the low-frequency coefficients. Therefore, it is natural to think of a lossy image compression system in which only the coefficients having higher energies are encoded. To get a feeling for this, one can look at Figure 4.22. There, the LENA image has been divided into 8 × 8 blocks, and the DCT of each block was
FIGURE 4.21 The 256 × 256 DCT of the 256 × 256 LENA Image
computed. In Figures 4.22(A) to 4.22(D), we can see the image recovered using, respectively, 1, 3, 15, and all 64 coefficients of each block. We note, for example, that the difference between the original image and the one recovered using 15 coefficients is almost imperceptible. Because the DCT highly concentrates the energy in few coefficients, a large number of the DCT coefficients are zero after quantization. Therefore, a coding method that avoids explicitly encoding the zeros can be very efficient. In practice, this is achieved by scanning the quantized DCT coefficients as in Figure 4.23 prior to encoding. Then, instead of encoding each coefficient, one encodes the pairs (run, level): level indicates the quantized value of a nonzero coefficient, while run indicates the number of zeros that come before it. The encoding is performed using an entropy coder, like a Huffman or arithmetic coder. High compression ratios can be achieved using such schemes. For example, the JPEG standard (Pennebaker and Mitchell, 1993) is essentially based on the above scheme; the same applies to the H.261, H.263, MPEG-1, MPEG-2, and MPEG-4 standards. Figure 4.24 shows the 256 × 256 LENA image encoded using JPEG at 0.5 bits/pixel, a compression ratio of 16:1. One of the main drawbacks of DCT-based image compression methods is the blocking effect that occurs at low rates (see Figure 4.22). One way to solve this is to use a wavelet transform in the transformation stage. Essentially, a wavelet transform is an octave-band decomposition in which each band is subsampled according to its bandwidth (Mallat, 1998; Vetterli and Kovačević, 1995). A way of achieving this is to first divide a signal into low- and high-pass bands, with each band being subsampled by two. Then, only the low-pass channel is again low- and high-pass filtered, and each band is subsampled by two. This process is recursively repeated until a predetermined number of stages is reached.
For an image, the same process is performed for
FIGURE 4.22 LENA Image Recovered from an 8 × 8 DCT Using (A) 1, (B) 3, (C) 15, and (D) 64 Coefficients
both its rows and columns. Figure 4.25 shows an image and its wavelet transform. Figure 4.25(B) shows that the wavelet transform contains mostly coefficients with small magnitudes. In addition, the transform's bands are directional; that is, besides the low-frequency band, one can see horizontal, vertical, and diagonal bands. From an image compression point of view, it is important to notice that the coefficients with small values, which are set to zero after quantization, tend to be clustered. This facilitates their efficient coding, thereby producing high compression ratios. Image coding methods based on the wavelet transform have the advantage of the absence of blocking effects at low rates. Figure 4.24(B) shows a 256 × 256 LENA image compressed at a ratio of 16:1 using a wavelet-based encoder. One can notice the superior image quality compared to the DCT-coded one in Figure 4.24(A). Indeed, the JPEG 2000 standard (ISO/IEC, 2000) is based on the wavelet transform. References to state-of-the-art wavelet image encoding methods can be found in works by Andrew (1997), Lan and Tewfik (1999), Said and Pearlman (1996), Shapiro (1993), and Taubman (1999).
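The recursive low-/high-pass splitting with subsampling described above can be illustrated with the simplest filter pair, the Haar averages and differences; this choice of filter is for illustration only:

```python
def haar_step(x):
    # Haar analysis: pairwise averages (low-pass) and differences
    # (high-pass), each subsampled by two.
    low = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return low, high

def wavelet_decompose(x, stages):
    """Octave-band decomposition: only the low-pass branch is split again."""
    bands = []
    for _ in range(stages):
        x, high = haar_step(x)
        bands.append(high)
    bands.append(x)   # coarsest low-pass band last
    return bands

print(wavelet_decompose([10, 12, 10, 12, 50, 52, 50, 52], 2))
# [[-1.0, -1.0, -1.0, -1.0], [0.0, 0.0], [11.0, 51.0]]
```

Note how the high-pass bands of this piecewise-smooth signal are small or zero, the clustering of negligible coefficients that wavelet coders exploit.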
FIGURE 4.23 Scanning Order of the Quantized DCT Coefficients
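The (run, level) pairing applied to the scanned coefficients can be sketched in a few lines; a minimal illustration on an already-scanned sequence (the end-of-block convention, marked here with None, is an assumption made for this sketch):

```python
def run_level_encode(scanned):
    """Encode a scanned coefficient sequence as (run, level) pairs:
    'run' zeros followed by a nonzero 'level'. Trailing zeros are
    replaced by a single end-of-block marker, here None."""
    pairs, run = [], 0
    for c in scanned:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    if run:
        pairs.append(None)  # end-of-block: only zeros remain
    return pairs

print(run_level_encode([35, -6, 0, 0, 4, 0, 0, 0, 1, 0, 0, 0]))
# [(0, 35), (0, -6), (2, 4), (3, 1), None]
```

Twelve coefficients collapse to five symbols here; the pairs would then be fed to a Huffman or arithmetic coder as the text describes.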
4.7 Image Analysis

The discipline of image analysis is a vast one. It deals with the automatic extraction of information from an image or sequence of images and is an essential part of computer vision systems. In this section, we provide a brief overview of its main issues. For a more detailed treatment, the reader is referred to some excellent texts on image processing and computer vision, such as Castleman (1979), Faugeras (1993), Gonzalez and Wintz (1977), Haralick and Shapiro (1992, 1993), Jain (1989), Low (1991), Pratt (1978), Rosenfeld and Kak (1982), and Russ (1995).

FIGURE 4.24 LENA 256 × 256 Image with a Compression Ratio of 16:1: (A) Using the JPEG Standard; (B) Using Wavelet Transforms

For an image to be analyzed, its main features have to be isolated. Spatial features like edges, central moments, and entropy are used a great deal. However, frequency-domain features obtained in the Fourier transform domain may also be useful. Since, for analysis purposes, a scene must be segmented into objects, there must be ways of extracting the objects and representing them. Edge extraction systems are some of the first steps for isolating the different image objects. They are usually based on some form of gradient-based approach, where edges are associated with large gray-level variations. Once the edges are extracted, one can use contour-following algorithms to identify the different objects. Texture is also a useful feature. Once the features are extracted, one uses image segmentation algorithms to separate the image into different regions. The simplest are the ones based on amplitude thresholding, in which the features used for region determination are the amplitudes of the pixels themselves; each region is defined by a range of amplitudes. If the features used are the edges, one can use boundary representation algorithms to define the regions in an image. One can also use clustering algorithms to define a region as the set of pixels having similar features. Quad-trees are a popular approach for defining regions. Template or texture matching algorithms can be used to determine which pixels represent the same shape or pattern, thereby defining regions based on them. These belong to the class of pattern recognition algorithms that have, by themselves, a large number of applications. In the case of image sequences, motion detection can be used to define the regions as the pixels of the image having the same kind of movement. For each region, one can also perform measurements, such as perimeter, area, and Euler number. Mathematical morphology (Serra, 1982) is a discipline commonly employed for performing such measurements.
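Amplitude thresholding followed by region labeling, two of the segmentation steps described above, can be sketched as follows (the threshold value and the tiny test image are hypothetical; the labeling is a standard 4-connected flood fill, not a method prescribed by the text):

```python
def threshold(image, t):
    # Amplitude thresholding: each region is a range of gray levels.
    return [[1 if p >= t else 0 for p in row] for row in image]

def label_regions(mask):
    """Label 4-connected components of a binary mask; background stays 0."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] and not labels[i][j]:
                next_label += 1
                stack = [(i, j)]          # flood fill from this seed
                while stack:
                    r, c = stack.pop()
                    if (0 <= r < rows and 0 <= c < cols
                            and mask[r][c] and not labels[r][c]):
                        labels[r][c] = next_label
                        stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return labels, next_label

img = [[200, 200, 10, 10],
       [200,  10, 10, 180],
       [ 10,  10, 10, 180]]
labels, n = label_regions(threshold(img, 128))
print(n)  # 2 bright regions
```

The two bright components here play the role of the separate objects (nuts, bolt) discussed in the hardware example that follows.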
FIGURE 4.25 An Image and Its Wavelet Transform: (A) Octagon Image; (B) Wavelet Transform. White corresponds to positive values, black to negative values, and gray to zero values.
Having the different image regions defined, one must group them into objects by labeling the regions so that all the regions belonging to an object have the same label and regions belonging to different objects have different labels. To achieve this, one can use some kind of clustering algorithm. Once the objects are identified, they must be classified into different categories. The classification can be based, for example, on their shapes. Then, image understanding techniques can be used to identify high-level relations among the different objects. The image analysis tools mentioned above can be exemplified by Figure 4.26, which shows one bolt and two nuts. The features to be extracted are, for example, the contours of the nuts and the bolt. The regions can be defined as the pixels inside the different contours; alternatively, they can be represented as the pixels having gray levels within predetermined ranges. Depending on the application, we may know the shape of one object (e.g., the bolt) and perform pattern matching to determine the position of the bolt in the image. One could also use mathematical morphology to compute the Euler number to count the ''number of holes'' of each object and identify the nuts as the objects having just one hole. Then, the inner perimeter of the nuts can be determined to check whether they comply with manufacturing specifications. By closely observing the bolt, one can notice that it is composed of several different regions (e.g., its head clearly belongs to a region different from its body). Therefore, after the segmentation algorithm, one should perform region labeling to determine which regions should be united to compose the bolt. Once it has been established that the image is composed of three objects, image classification and understanding techniques should be used to figure out what kind of objects they are. In this case, one must determine that there are two nuts and one bolt from the data extracted in the previous steps. Finally, a robot could use this information, extracted automatically, to screw the bolt into the round nut. As mentioned previously, this section provides only a very brief overview of this vast subject; the references listed at the beginning of this section provide a thorough treatment of it.
FIGURE 4.26 Hardware Image

4.8 Summary
This chapter presented an introduction to image processing techniques. First, the issues of image sampling and quantization were dealt with. Then, image enhancement and
restoration techniques were analyzed. Image compression methods were described, and the chapter finished with a very brief description of image analysis techniques.
References

Ahmed, N., Natarajan, T., and Rao, K.R. (1974). Discrete cosine transform. IEEE Transactions on Computers C-23, 90–93.
Andrew, J. (1997). A simple and efficient hierarchical image coder. IEEE International Conference on Image Processing 3, 658–664.
Banham, M.R., and Katsaggelos, A.K. (1997). Digital image restoration. IEEE Signal Processing Magazine 14(2), 24–41.
Bell, T.C., Cleary, J.G., and Witten, I.H. (1990). Text compression. Englewood Cliffs, NJ: Prentice Hall.
Castleman, K.R. (1979). Digital image processing. Englewood Cliffs, NJ: Prentice Hall.
Clarke, R.J. (1985). Transform coding of images. London: Academic Press.
Cover, T.M., and Thomas, J.A. (1991). Elements of information theory. New York: John Wiley & Sons.
Faugeras, O. (1993). Three-dimensional computer vision—A geometric viewpoint. Cambridge, MA: MIT Press.
Gonzalez, R.C., and Wintz, P. (1977). Digital image processing. London: Addison-Wesley.
Haralick, R.M., and Shapiro, L.G. (1992). Computer and robot vision, Vol. 1. Reading, MA: Addison-Wesley.
Haralick, R.M., and Shapiro, L.G. (1993). Computer and robot vision, Vol. 2. Reading, MA: Addison-Wesley.
International Telecommunication Union (ITU-T). (1990). Recommendation H.261, Video codec for audiovisual services at p × 64 kbit/s.
International Telecommunication Union (ITU-T). (1996). Recommendation H.263, Video codec for low bitrate communication.
ISO/IEC. (1992). JTC1/CD 11172, Coding of moving pictures and associated audio for digital storage media up to 1.5 Mbit/s.
ISO/IEC. (1994). JTC1/CD 13818, Generic coding of moving pictures and associated audio.
ISO/IEC. (1997). JTC1/SC29/WG11, MPEG-4 video verification model version 8.0, July.
ISO/IEC. (1999). JTC1/SC29/WG1 (ITU-T SG8), JPEG 2000 verification model 5.3.
Jain, A.K. (1989). Fundamentals of digital image processing. Englewood Cliffs, NJ: Prentice Hall.
Jayant, N.S., and Noll, P. (1984). Digital coding of waveforms. Englewood Cliffs, NJ: Prentice Hall.
Lan, T.H., and Tewfik, A.H. (1999). Multigrid embedding (MGE) image coding. Proceedings of the 1999 International Conference on Image Processing 3, 24–28.
Lim, J.S. (1990). Two-dimensional signal and image processing. Englewood Cliffs, NJ: Prentice Hall.
Low, A. (1991). Introductory computer vision and image processing. Berkshire, England: McGraw-Hill.
Mallat, S.G. (1998). A wavelet tour of signal processing. San Diego, CA: Academic Press.
Mersereau, R.M., and Dudgeon, D.E. (1984). Multidimensional digital signal processing. Englewood Cliffs, NJ: Prentice Hall.
Oppenheim, A.V., and Schafer, R.W. (1989). Discrete-time signal processing. Englewood Cliffs, NJ: Prentice Hall.
Papoulis, A. (1984). Probability, random variables, and stochastic processes. New York: McGraw-Hill.
Pennebaker, W.B., and Mitchell, J.L. (1993). JPEG still image data compression standard. New York: Van Nostrand Reinhold.
Pratt, W.K. (1978). Digital image processing. New York: John Wiley & Sons.
Rosenfeld, A., and Kak, A.C. (1982). Digital picture processing, Vol. 1. New York: Academic Press.
Rosenfeld, A., and Kak, A.C. (1982). Digital picture processing, Vol. 2. New York: Academic Press.
Russ, J.C. (1995). The image processing handbook. Boca Raton, FL: CRC Press.
Said, A., and Pearlman, W.A. (1996). A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Transactions on Circuits and Systems for Video Technology 6(3), 243–250.
Serra, J. (1982). Image analysis and mathematical morphology. New York: Academic Press.
Shapiro, J.M. (1993). Embedded image coding using zerotrees of wavelet coefficients. IEEE Transactions on Signal Processing 41(12), 3445–3462.
Taubman, D. (1999). High performance scalable image compression with EBCOT. 1999 IEEE International Conference on Image Processing.
Vetterli, M., and Kovačević, J. (1995). Wavelets and subband coding. Englewood Cliffs, NJ: Prentice Hall.
Watson, A.B. (Ed.). (1993). Digital images and human vision. Cambridge, MA: MIT Press.
5 Multimedia Systems and Signal Processing

John R. Smith
IBM, T. J. Watson Research Center, Hawthorne, New York, USA

5.1 Introduction .......... 911
    5.1.1 MPEG-7/-21 and UMA . 5.1.2 UMA Architecture . 5.1.3 Outline
5.2 MPEG-7 UMA .......... 912
    5.2.1 MPEG-7 Transcoding Hints . 5.2.2 MPEG-7 Variations
5.3 MPEG-21 Digital Item Adaptation .......... 914
    5.3.2 Digital Item Configuration . 5.3.3 Media Resource Adaptation
5.4 Transcoding Optimization .......... 916
    5.4.1 Image Transcoding . 5.4.2 Image Transcoding Examples
5.5 Multimedia Content Selection .......... 917
    5.5.1 Resource Selection . 5.5.2 Variation Selection . 5.5.3 Selection Optimization . 5.5.4 Example Selection
5.6 Summary .......... 919
References .......... 919
5.1 Introduction

With the growing ubiquity and portability of multimedia-enabled devices, universal multimedia access (UMA) is emerging as one of the important applications for the next generation of multimedia systems (Smith, 2000). The basic concept of UMA is the adaptation, summarization, and personalization of multimedia content according to the usage environment. The different dimensions for adaptation include rate and quality reduction (Kuhn et al., 2001), adaptive spatial and temporal sampling (Smith, 1999), and semantic summarization of the multimedia content (Tseng et al., 2002). The relevant dimensions of the user environment include device capabilities, bandwidth, user preferences, usage context, and spatial and temporal awareness. UMA facilitates the scalable or adaptive delivery of multimedia to users and terminals regardless of bandwidth or capabilities of terminal devices and their support for media formats. In addition, UMA allows the content to be customized for users at a high semantic level through summarization and personalization according to user preferences and usage context (Tseng et al., 2002). UMA is relevant for emerging applications that involve delivery of multimedia for pervasive computing (e.g., handheld computers, palm devices, portable
media players), consumer electronics (e.g., television set-top boxes, digital video recorders, television browsers, Internet appliances), and mobile applications (e.g., cell phones, wireless computers) (Bickmore and Schlit, 1997; Smith et al., 1999, 1998b; Han and Smith, 2000). UMA is partially addressed by scalable or layered encoding, progressive data representation, and object- and scene-based encodings (such as MPEG-4) that inherently provide different embedded levels of content quality (Vetro et al., 1999). From the network perspective, UMA involves important concepts related to the growing variety of communication channels, dynamic bandwidth variation, and perceptual quality of service (QoS) (Mohan and Smith, 1999, 1998). UMA involves different preferences of the user (recipients of the content) and the content publisher in choosing the form, quality, and personalization of the content. UMA promises to integrate these different perspectives into a new class of content-adaptive applications that allows users to access multimedia content without concern for specific encodings, terminal capabilities, or network conditions.
5.1.1 MPEG-7/-21 and UMA

The emerging MPEG-7 and MPEG-21 standards address UMA in a number of ways. The overall goal of MPEG-7 is to enable
fast and efficient searching, filtering, and adaptation of multimedia content (Salembier and Smith, 2001; Smith, 2003). In MPEG-7, the application of UMA was conceived to allow the scalable or adaptive delivery of multimedia by providing tools for describing transcoding hints and content variations (van Beek et al., 2003; Smith and Reddy, 2001). MPEG-21 addresses the description of user environment, which includes terminals and networks (Smith, 2002). Furthermore, in MPEG-21, digital item adaptation facilitates the adaptation of digital items and media resources for usage environment. The MPEG-21 digital item adaptation provides tools for describing user preferences and usage history in addition to tools for digital item and media resource adaptation (Salembier and Smith, 2001).
5.1.2 UMA Architecture

The basic problem of UMA is allowing users to access multimedia content anytime, anywhere, and without concern for particular formats, devices, networks, and so forth. Consider the server–proxy–client network architecture shown in Figure 5.1. UMA can be enabled in a number of ways. First, it is possible to store multiple variations of the content at the server and select the best variation for each particular request (Li et al., 1998). Given multiple media resources in a presentation, it may be necessary to make the selection of the multiple variants jointly to meet the overall delivery constraints (Smith et al., 1999). As an alternative, it is possible to perform content adaptation at the server or in the network, such as at a proxy device. For example, consider a scenario in which a user
allocates 10 min for receiving a video summary of a sports game, the server or proxy can manipulate the video content to produce a highlighted video that is personalized for the user’s particular preference, such as a focus on a favorite team or player or preferred types of plays in the game (e.g., for a baseball game, this could be strikeouts, hits, homeruns, etc.).
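The server-side selection among stored variations described above can be sketched as a filter-then-maximize step; the variation names, sizes, and fidelity scores below are hypothetical stand-ins for MPEG-7 variation attributes:

```python
def select_variation(variations, bit_budget):
    """Pick the highest-fidelity variation that fits the delivery budget.

    Each variation is a (name, bits, fidelity) tuple; fidelity plays the
    role of a content-value score between 0.0 and 1.0.
    """
    feasible = [v for v in variations if v[1] <= bit_budget]
    if not feasible:
        return None
    return max(feasible, key=lambda v: v[2])

variations = [
    ("full-video", 5_000_000, 1.0),
    ("summary",      800_000, 0.6),
    ("keyframes",    120_000, 0.3),
    ("text-only",     10_000, 0.1),
]
print(select_variation(variations, 1_000_000))  # ('summary', 800000, 0.6)
```

Selecting jointly across several media resources, as the text notes, turns this single maximization into a constrained optimization over combinations of variations.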
5.1.3 Outline

This chapter describes UMA as one of the emerging applications of multimedia systems and signal processing. In particular, it describes how different multimedia systems and signal processing tools use MPEG-7 and MPEG-21 to enable UMA. Section 5.2 describes the MPEG-7 support for UMA, which includes transcoding hints and variations descriptions. Section 5.3 describes the MPEG-21 support for UMA, which includes digital item adaptation. Section 5.4 describes how transcoding of images can be optimized using MPEG-7 transcoding hints. Section 5.5 describes a method for the optimized selection of media resources, which applies to MPEG-7 variations and MPEG-21 digital items.
5.2 MPEG-7 UMA

MPEG-7 addresses the requirements of UMA by providing a number of different tools for content adaptation (Smith and Reddy, 2001; Kuhn and Suzuki, 2001). For one, MPEG-7 defines description schemes that describe different abstraction
FIGURE 5.1 Universal Multimedia Access. UMA involves the adaptive delivery of multimedia content according to usage environment, which includes devices and networks, user preferences, and usage context.
levels and variations of multimedia content. For example, the different abstraction levels include the composition of objects from sub-objects, the extraction of plot structure of a video, summaries of multimedia programs, and different variations of the multimedia data with different resource requirements. In addition, MPEG-7 supports the transcoding, translation, summarization, and adaptation of multimedia material according to the capabilities of the client devices, network resources, and user and author preferences. For example, adaptation hints may be provided that indicate how a photograph should be compressed to adapt it for a handheld computer or how a video should be summarized to speed up browsing over a low-bandwidth network (Smith et al., 1998a).
5.2.1 MPEG-7 Transcoding Hints

The MPEG-7 Media Transcoding Hint Description Scheme gives information that can be used to guide the transcoding of multimedia, including descriptions of importance, priority, and content value and descriptions of transcoding behavior based on transcoding utility functions and network scaling profiles, as shown in Figure 5.1. In addition, media coding tools give information about the multimedia data, including the image and video frame size (width and height), frame rates of video, data sizes for image, video, and audio download and storage, formats, and MIME types. The MPEG-7 media transcoding hints allow content servers, proxies, or gateways to adapt image, video, audio, and multimedia content to different network conditions, user and publisher preferences, and capabilities of terminal devices with limited communication, processing, storage, and display capabilities. MPEG-7 provides the following types of transcoding hint information as part of its Media Transcoding Hints:

Importance: MPEG-7 specifies the relative importance of resources, segments, regions, objects, or media resources. The importance takes values from 0.0 to 1.0, where 0.0 indicates the lowest importance and 1.0 indicates the highest importance.

Spatial resolution: MPEG-7 specifies the maximum allowable spatial resolution reduction factor for perceptibility. This feature takes values from 0.0 to 1.0, where 0.5 indicates that the resolution can be reduced by half, and 1.0 indicates the resolution cannot be reduced.
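How a transcoder might act on these hint values can be sketched as a simple mapping from the importance attribute to a compression quality; the mapping itself is a hypothetical illustration, not something defined by MPEG-7:

```python
def quality_for_region(importance, q_min=20, q_max=95):
    """Map an importance hint (0.0..1.0) to a hypothetical JPEG-style
    quality factor: important regions receive higher quality."""
    return round(q_min + importance * (q_max - q_min))

# Text and face regions keep high quality; background is compressed hard.
for region, imp in [("text", 1.0), ("face", 0.8), ("background", 0.1)]:
    print(region, quality_for_region(imp))
```

A real transcoder would combine this with the spatial resolution hint, never downsampling a region below its stated reduction factor.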
Importance Hints
The Importance Hints can be used to annotate different regions with information that denotes the importance of each region. The Importance Hint descriptions can then be used to transcode the images for adaptive delivery according to the constraints of client devices and bandwidth limitations. For example, text regions and face regions can be compressed with a lower compression factor than the factors for the remaining parts of the image. The other parts of the image can be blurred and compressed with a higher factor to greatly reduce the overall size of the compressed image. MPEG-7 UMA enables more intelligent transcoding capability using the MPEG-7 Importance Hint information than otherwise would be provided. The MPEG-7 Importance Hint information has an advantage over methods for automatic extraction of regions from images in the transcoder in that the Importance Hints can be provided by the content authors or publishers directly. This allows greater control in the adaptation and delivery of content.

Spatial Resolution Hints
The Spatial Resolution Hints can be used to denote the minimum resolution with which a segment should be displayed. For example, if an image region contains a face or textual information, then it may be necessary to preserve a minimum resolution for the contents to be discernible.

Example Transcoding Hints Annotation
The following example shows an MPEG-7 description of transcoding hints for an image in which three regions have been annotated (R1, R2, and R3). Values are assigned for the Importance and Spatial Resolution Hint attributes of each region:

<Mpeg7 xmlns="urn:mpeg:mpeg7:schema:2001" xml:lang="en">
  <MultimediaContent xsi:type="ImageType">
    <SpatialDecomposition gap="true" overlap="false">
      <StillRegion id="R1">
        <MediaInformation>
          <MediaProfile>
            <MediaTranscodingHints importance="1.0" spatialResolutionHint="0.5"/>
          </MediaProfile>
        </MediaInformation>
        <SpatialLocator>16 34 64 64</SpatialLocator>
      </StillRegion>
      <StillRegion id="R2">
        <MediaInformation>
          <MediaProfile>
            <MediaTranscodingHints importance="0.5" spatialResolutionHint="0.35"/>
          </MediaProfile>
        </MediaInformation>
        <SpatialLocator>384 256 128 128</SpatialLocator>
      </StillRegion>
      <StillRegion id="R3">
        <MediaInformation>
          <MediaProfile>
            <MediaTranscodingHints importance="0.35" spatialResolutionHint="0.25"/>
          </MediaProfile>
        </MediaInformation>
        <SpatialLocator>512 384 64 128</SpatialLocator>
      </StillRegion>
    </SpatialDecomposition>
  </MultimediaContent>
</Mpeg7>
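As a small illustration of how a transcoder might consume these hints, the sketch below records the three regions' attributes from the example annotation and derives two simple quantities. The helper names are hypothetical (not part of MPEG-7); the logic follows the hint semantics stated above, where a spatialResolutionHint of 0.5 means the region tolerates at most a twofold resolution reduction.

```python
# Region hints from the example annotation above:
# (region id, importance, spatialResolutionHint).
HINTS = [("R1", 1.0, 0.5), ("R2", 0.5, 0.35), ("R3", 0.35, 0.25)]

def smallest_safe_rescale(hints):
    """Smallest global rescaling factor that keeps every region at or
    above its minimum allowable resolution."""
    return max(s for _, _, s in hints)

def regions_by_importance(hints):
    """Regions ordered from most to least important, e.g., to assign
    lower compression factors to the most important regions first."""
    return [rid for rid, _, _ in sorted(hints, key=lambda h: -h[1])]
```

Here R1's hint of 0.5 dominates: rescaling the whole image below half its original resolution would violate that region's minimum.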
5.2.2 MPEG-7 Variations

The MPEG-7 Variation Description addresses a general framework for managing and selecting multimedia content for adaptive delivery. As depicted in Figure 5.2, the Variation DS describes different variations of media resources with different modalities (e.g., video, image, text, and audio) and fidelities (e.g., summarized, compressed, scaled, etc.) (Li et al., 1998). The MPEG-7 Variation DS describes the relationships among the different variations of a media resource. For example, Figure 5.2 shows the variations of a video resource A. The variations differ in fidelity and modality. The vertical axis shows lower fidelity variations of each resource. The horizontal axis shows different modality variations (e.g., image, text, and audio) of each resource.

[Figure 5.2 depicts a source video A with variations B through I arranged along a vertical fidelity axis and a horizontal modality axis.]

FIGURE 5.2 The MPEG-7 Variation DS. This feature allows the selection and adaptive delivery of different variations of multimedia content.

The following example description gives the source video and three of its variations: a key frame extracted from the video, a spatially, temporally, and rate-reduced low-resolution version, and an extracted audio track:

<Mpeg7>
  <Source xsi:type="VideoType">
    <MediaLocator>
      <MediaUri>file://video-high-res.mpg</MediaUri>
    </MediaLocator>
  </Source>
  <Variation>
    <Content>
      <MediaLocator>
        <MediaUri>file://key-frame.jpg</MediaUri>
      </MediaLocator>
    </Content>
    <VariationRelationship>extraction</VariationRelationship>
  </Variation>
  <Variation>
    <Content>
      <MediaLocator>
        <MediaUri>file://video-low-res.mpg</MediaUri>
      </MediaLocator>
    </Content>
    <VariationRelationship>spatialReduction</VariationRelationship>
    <VariationRelationship>temporalReduction</VariationRelationship>
    <VariationRelationship>rateReduction</VariationRelationship>
  </Variation>
  <Variation>
    <Content>
      <MediaLocator>
        <MediaUri>file://audio-track.mp3</MediaUri>
      </MediaLocator>
    </Content>
    <VariationRelationship>extraction</VariationRelationship>
  </Variation>
</Mpeg7>
5.3 MPEG-21 Digital Item Adaptation

MPEG-21 specifies an XML Schema-based language for describing media resource adaptability, user environment, and other concepts related to adapting and configuring digital items and adapting media resources to user environments. While MPEG-7 has an important role in describing media resource adaptability, user preferences, and controlled terms, MPEG-21 further describes media resource adaptability, usage environment, and adaptation rights. MPEG-21 is also investigating a language for describing and manipulating bit streams (BSDL). The Digital Item Declaration part of MPEG-21 specifies an XML Schema-based language for declaring digital items, which are packages of media resources and metadata. The following example shows a digital item that contains music plus images and gives its digital item declaration.

Digital Item Example (Music and Images)
Figure 5.3 shows an example digital item that contains two descriptors and three media resource items. One of the descriptors gives creation information about the digital item and defines a title. The other descriptor gives a purely text-based description of the digital item. The digital item contains one
[Figure 5.3 depicts an example MPEG-21 digital item (descriptors plus items for music.mp3, photo1.jpg, and photo2.jpg) entering a digital item adaptation (DIA) engine. The engine takes an input content digital item (CDI) together with context digital items (XDI), such as usage environment descriptions (UED) and bitstream syntax descriptions (BSDL), and produces an adapted output content digital item.]

FIGURE 5.3 MPEG-21 Digital Item Adaptation. This feature addresses the configuration of digital items and adaptation of their media resources.

music file and two photos. The media resources are referenced by a locator. The following MPEG-21 XML description gives the declaration for the digital item depicted in Figure 5.3. As shown next, the digital item declaration provides a container at the top level, which forms the logical package for the digital item. It contains a descriptor that holds the XML statement giving an MPEG-7 description and an item that contains further descriptor and sub-item elements. The descriptor contained by the item holds a plain-text statement giving a textual description of the item. The three sub-items each contain one component that gives a media resource. The first one gives an audio resource (music.mp3); the latter two each give an image (photo1.jpg and photo2.jpg).

<DIDL>
  <Container>
    <Descriptor>
      <Statement type="text/xml">
        <mpeg7:Mpeg7>
          <mpeg7:DescriptionUnit xsi:type="CreationInformationType">
            <mpeg7:Creation>
              <mpeg7:Title>Musical experience package</mpeg7:Title>
            </mpeg7:Creation>
          </mpeg7:DescriptionUnit>
        </mpeg7:Mpeg7>
      </Statement>
    </Descriptor>
    <Item>
      <Descriptor>
        <Statement type="text/plain">Music package (one song plus two photos)</Statement>
      </Descriptor>
      <Item>
        <Component>
          <Resource ref="music.mp3"/>
        </Component>
      </Item>
      <Item>
        <Component>
          <Resource ref="photo1.jpg"/>
        </Component>
      </Item>
      <Item>
        <Component>
          <Resource ref="photo2.jpg"/>
        </Component>
      </Item>
    </Item>
  </Container>
</DIDL>
5.3.2 Digital Item Configuration

MPEG-21 accommodates digital item adaptation in a number of ways, such as through digital item configuration or through media resource adaptation or transcoding, as shown in Figure 5.3. Digital item configuration is the mechanism by which selections are made among the elements of the digital items. The digital item declaration includes a choice element that describes a set of related selections that configure an item. A selection describes a specific decision affecting the inclusion of conditional elements from the item. As a result, the digital item declaration can specify items within the package that are optional or subject to conditions. For example, the declaration can describe two video files and indicate that one is conditionally included subject to display on a high-resolution terminal, whereas the other is included subject to display on a low-resolution terminal.
5.3.3 Media Resource Adaptation

Alternatively, MPEG-21 supports digital item adaptation through media resource adaptation. In this case, the media resources themselves are adapted or transcoded. The media resources in the digital item declarations can be manipulated in an adaptation engine according to the usage environment. The adaptation engine can take as input a user environment description that includes user preferences, usage context, device capabilities, and so forth.
5.4 Transcoding Optimization

Since media-rich content often contains multiple media resources, there is much opportunity for optimizing the adaptation of this content by trading off the different transcoding operations for each of the component media resources. This chapter now investigates how this transcoding optimization can be applied to images consisting of multiple annotated regions.

5.4.1 Image Transcoding

Given that multiple regions in the image can be annotated, the transcoding of images needs to consider the individual importance and spatial resolution hint for each region (Smith and Reddy, 2001). Overall, this can be approached as an optimization problem in which the image needs to be manipulated, such as through cropping and rescaling, to produce an output image that maximizes the overall content value given the constraints on its delivery. The optimization problem is expressed as follows: the device imposes a constraint on the size of the image (i.e., the size of the screen), and the transcoding engine seeks to maximize the benefit derived from the content value of the transcoded image. The problem, thus, is to maximize the total content value given the constraints. Following the particular structure provided by the MPEG-7 transcoding hints, consider that each region Ri has an importance Ii and a spatial resolution hint Si. Consider also that each region has a content value score Vi after rescaling the image globally using rescaling factor L, where Vi is a function of the importance Ii and spatial resolution hint Si:

• Importance Hint (Ii): indicates the relative importance of region Ri, where 0 ≤ Ii ≤ 1.
• Spatial Resolution Hint (Si): indicates the minimum resolution of the region for preserving details, where 0 ≤ Si ≤ 1.
• Rescaling factor (L): indicates the global rescaling of the image.
• Content Value (Vi): indicates the value of the transcoded region, Vi = f(Ii, Si, L), as follows:

Vi = Ii              if 1 ≤ L/Si;
Vi = Ii · (L/Si)     if 0 ≤ L/Si < 1;
Vi = 0               otherwise.

Then, the problem can be stated as max( Σi Vi ) such that Fs(Ri) ≤ Ds, where:

• Ds gives the size of the device screen;
• Ri gives a selected rescaled region; and
• Fs(Ri) gives the spatial size of the minimum bounding rectangle that encapsulates the selected rescaled regions.

One way to solve this problem is by using an exhaustive search over all possible combinations of the rescaled regions. In this case, for each unique combination of regions, the image is cropped to include only those selected regions, and the cropped image is then rescaled to fit the device screen size. Then, the regions are evaluated in terms of the image's content value under the rescaling and selection, and the total content value is computed. Finally, the combination of regions with maximum benefit is selected as the optimal transcoding of the image. The complexity of this search is not great considering that each image will typically have only a handful of annotated regions.

5.4.2 Image Transcoding Examples

The following examples illustrate the transcoding of an image (shown in Figure 5.4) both with and without the MPEG-7 transcoding hints for importance and spatial resolution. This example considers four different display devices: personal computer (PC), television (TV) browser, handheld computer, and personal digital assistant (PDA) or mobile phone. Figure 5.5 shows the transcoding of the image without using transcoding hints. In this case, the original image is adapted to the screen sizes by globally rescaling the image to fit the screen. The drawback of this type of adaptation is that the image details are lost when the size of the output image is small. For example, it is difficult to discern any details from the image displayed on the PDA screen (far right). The result is an equal loss of detail for all regions, including the important regions, which results in a lower overall content value of the image for a display size. Figure 5.6 shows the transcoding of the image using transcoding hints. In this case, the original image is adapted to the screen sizes by using a combination of cropping and rescaling to fit a set of selected image regions on the screen. The advantage of this type of adaptation is that the important image details are preserved when the size of the output image is small. For example, it is possible to discern important details from

FIGURE 5.4 Image with Four Regions Annotated Using MPEG-7 Transcoding Hints
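The exhaustive search described above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the function names are hypothetical, regions are axis-aligned boxes, and the content-value function follows the piecewise Vi = f(Ii, Si, L) form as reconstructed above (regions left out of the crop simply contribute no value).

```python
from itertools import combinations

def content_value(importance, res_hint, scale):
    # Piecewise utility of a kept region under global rescaling `scale`:
    # full value above the minimum resolution, linearly discounted below it.
    ratio = scale / res_hint
    return importance if ratio >= 1.0 else importance * ratio

def best_transcoding(regions, screen_w, screen_h):
    """Exhaustive search over region subsets: crop to the subset's
    bounding rectangle, rescale to fit the screen, and keep the
    subset with the maximum total content value.
    Each region is (x, y, w, h, importance, res_hint)."""
    best_value, best_subset = 0.0, ()
    for k in range(1, len(regions) + 1):
        for subset in combinations(regions, k):
            x0 = min(x for x, y, w, h, i, s in subset)
            y0 = min(y for x, y, w, h, i, s in subset)
            x1 = max(x + w for x, y, w, h, i, s in subset)
            y1 = max(y + h for x, y, w, h, i, s in subset)
            # Global rescaling factor L that fits the crop on the screen.
            L = min(1.0, screen_w / (x1 - x0), screen_h / (y1 - y0))
            value = sum(content_value(i, s, L) for x, y, w, h, i, s in subset)
            if value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset
```

For instance, with one crucial face region and one distant low-importance region, a 200 × 200 screen is best served by cropping to the face alone (so L = 1) rather than shrinking the whole frame.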
FIGURE 5.5 Example Transcoded Image Output. The image was globally rescaled to fit each screen: PC (SCGA), 1280 ⫻ 960; TV browser, 640 ⫻ 480; handheld, 320 ⫻ 240; PDA/phone, 160 ⫻ 120.
FIGURE 5.6 Example Transcoded Image Output Using the MPEG-7 Transcoding Hints: PC (SCGA), 1280 ⫻ 960; TV browser, 640 ⫻ 480; handheld, 320 ⫻ 240; PDA/phone, 160 ⫻ 120. The hints are for image region importance and minimum allowable spatial resolution. The transcoding uses a combination of cropping and rescaling to maximize the overall content value.
the image displayed on the handheld and PDA screens. The result is an adaptive loss of detail across regions, with cropping used to preserve the important regions, which results in a higher overall content value of the image for a given display size.
5.5 Multimedia Content Selection

Considering that media-rich content is composed of multiple constituent media resources, this section investigates how different variations of the media resources can be selected based on the overall device and delivery constraints.
5.5.1 Resource Selection

A procedure is now described for optimizing the selection of the different variations of media resources in a multimedia document that maximizes the total content value given the constraints of the client devices (Smith et al., 1999). A multimedia presentation M = [D, L] can be defined as a tuple consisting of a multimedia document D and a document layout L. The multimedia document D is the set of media resources Oij, written as follows:

D = {(Oij)n}  (multimedia document),

where (Oij)n gives the nth media resource, which has modality i and fidelity j. The document layout L gives the relative spatial and temporal location and size of each media resource. The variation description V of a media resource describes the collection of the different variations of the media resource Oij, as follows:

V = {Oij}  (variation description).

We then define a variation document VD as a multimedia document consisting of resources described by variation descriptions {Oij}, written as follows:

VD = {Vn} = {{Oij}n}  (variation document).
Content Value Scores
To optimize the selection, the variation descriptions include content value scores V((Oij)n) for each of the media resources Oij, as shown in Figure 5.7. The content value scores can be based on automatic measures, such as entropy, or on the loss in fidelity that results from translating or summarizing the content. For example, the content value scores can be linked to the distortion introduced from compressing the images or audio. Otherwise, the content value scores can be tied directly to the methods that manipulate the content or be assigned manually. Figure 5.7 illustrates examples of the relative reciprocal content value scores of different variations of a video media resource. In this example, the original video (lower left) has the highest content value. The manipulation of the video along the dimension of fidelity or modality reduces the content value. For example, converting the video to a sequence of images results in a small reduction in content value. Converting the video to a highly compressed audio track produces a higher reduction in the content value.

[Figure 5.7 shows a grid of reciprocal content value scores indexed by modality (video, image, text, audio) on the horizontal axis and fidelity on the vertical axis.]

FIGURE 5.7 Example of the Reciprocal Content Value Scores. These scores are assigned for different video media resource variations.

Given a multimedia document with N media resources, let {Oij}n give the variation description of the nth media resource. Let V((Oij)n) give the relative content value score of the variation of the nth media resource with modality i and fidelity j, and let D((Oij)n) give its data size. Let DT give the total maximum data size allocated for the multimedia document by the client device. The total maximum data size may, in practice, be derived from the user's specified maximum load time and the network conditions or from the device constraints in storage or processing.

5.5.2 Variation Selection

Maximum Content Value
The content selection process selects media resource variations O*ij from each variation description to maximize the total content value for a target data size DT as follows:

Σn V((O*ij)n) = max( Σn V((Oij)n) )  and  Σn D((O*ij)n) ≤ DT,   (5.1)

where (O*ij)n gives for each n the optimal variation of the media resource, which has modality i and fidelity j.

Minimum Load Time
Alternatively, given a minimum acceptable total content value VT, the content selection process selects media resource variations O*ij from each variation description to minimize the total data size as follows:

Σn D((O*ij)n) = min( Σn D((Oij)n) )  and  Σn V((O*ij)n) ≥ VT,   (5.2)

where, as above, (O*ij)n gives for each n the optimal variation of the media resource, which has modality i and fidelity j.

Device Constraints and Preferences
By extending the selection process, other constraints of client devices can be considered. For example, the content selection system can incorporate the device screen size ST as follows: let S((Oij)n) give the spatial size of the variation of the nth media resource with modality i and fidelity j. Then, we add the following constraint to the optimization process:

Σn S((O*ij)n) ≤ ST.   (5.3)

In the same way, we can include additional device constraints such as color depth, streaming bandwidth, and processing power.
5.5.3 Selection Optimization

Given the variation descriptions for describing different variations of the media resources, the overall number of different configurations of each variation document is combinatorial in the number of media resources (N) and the number of variations of each media resource (M) and is given by M^N. To solve the optimization problems of equations 5.1 and 5.2, convert the constrained optimization problems into the equivalent unconstrained Lagrangian problems, as described in Mohan et al. (1998). The optimization solution is based on the resource allocation technique proposed in Shoham and Gersho (1990) for arbitrary discrete functions. This is illustrated by converting the problem in equation 5.2 to the following unconstrained problem:
min over {(Oij)n} of { Σn D((Oij)n) + λ( VT − Σn V((Oij)n) ) }.   (5.4)

The optimal solution gives that for all n, the selected variations of the media resources (O*ij)n operate at the same constant trade-off λ in content value V((Oij)n) versus data size D((Oij)n). To solve the optimization problem, it is only necessary to search over values of λ.

5.5.4 Example Selection

Illustrated here is the content selection for an example multimedia document, as shown in Figure 5.8. The multimedia document has two media resources: a video resource (O00)0 and a text resource (O20)1. For each media resource (Oij)n, where n ∈ {0, 1}, a variation document {Oij}n can be constructed, which gives the different variations of the media resource. The selection process selects the variations (O*ij)0 and (O*ij)1, respectively, to maximize the total content value. Consider the four variations of each media resource with the content values and data sizes given in Table 5.1. By iterating over values of the trade-off λ between content value and data size, the content selection results of Table 5.2 are obtained, which show the media resource variations that maximize the total content value max( Σn V((Oij)n) ) for different total maximum data sizes DT, as given in equation 5.1.

[Figure 5.8 shows (A) the document with the (O00)0 video and (O20)1 text resources, (B) the variation documents {Oij}0 and {Oij}1 of the media resources, and (C) the selected variations (O*ij)0 and (O*ij)1.]

FIGURE 5.8 Example Content Selection for a Multimedia Document D Consisting of Two Media Resources

TABLE 5.1 Summary of Different Variations of Two Media Resources: (Oij)0 and (Oij)1

(Oij)n     V((Oij)n)    D((Oij)n)    Modality
(O00)0     1.0          1.0          Video
(O01)0     0.75         0.25         Video
(O10)0     0.5          0.10         Image
(O11)0     0.25         0.05         Image
(O20)1     1.0          0.5          Text
(O21)1     0.5          0.10         Text
(O32)1     0.75         0.25         Audio
(O33)1     0.25         0.05         Audio
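The λ search can be sketched directly from the Table 5.1 data. For a fixed λ, the Lagrangian of equation 5.4 decouples across the resources, so each resource independently picks the variation minimizing D − λ·V; sweeping λ traces out operating points such as those in Table 5.2. A brute-force enumeration of all M^N combinations serves as a reference for equation 5.1. The function names here are illustrative, and ties between equal-value combinations may resolve differently than in the table.

```python
from itertools import product

# Variations of the two media resources from Table 5.1:
# (name, content value V, data size D).
R0 = [("(O00)0", 1.0, 1.0), ("(O01)0", 0.75, 0.25),
      ("(O10)0", 0.5, 0.10), ("(O11)0", 0.25, 0.05)]
R1 = [("(O20)1", 1.0, 0.5), ("(O21)1", 0.5, 0.10),
      ("(O32)1", 0.75, 0.25), ("(O33)1", 0.25, 0.05)]

def lagrangian_select(resources, lam):
    """At trade-off lam, each resource independently minimizes
    D - lam * V (the decoupled form of equation 5.4)."""
    return [min(variations, key=lambda o: o[2] - lam * o[1])
            for variations in resources]

def brute_force_select(resources, DT):
    """Reference solution of equation 5.1: enumerate all M^N combinations
    and keep the maximum total value with total data size <= DT."""
    best = None
    for combo in product(*resources):
        V = sum(o[1] for o in combo)
        D = sum(o[2] for o in combo)
        if D <= DT + 1e-9 and (best is None or V > best[0]):
            best = (V, D)
    return best
```

At λ = 2, for example, the decoupled choice is (O01)0 and (O20)1, which corresponds to the Table 5.2 operating point with ΣV = 1.75 and ΣD = 0.75.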
TABLE 5.2 Summary of the Selected Variations of the Two Media Resources (O*ij)0 and (O*ij)1*

DT      (O*ij)0    (O*ij)1    Σn V((O*ij)n)    Σn D((O*ij)n)
1.5     (O00)0     (O20)1     2.0              1.5
1.25    (O00)0     (O32)1     1.75             1.25
1.0     (O01)0     (O20)1     1.75             0.75
0.6     (O10)0     (O20)1     1.5              0.6
0.35    (O01)0     (O21)1     1.25             0.35
0.35    (O10)0     (O32)1     1.25             0.35
0.2     (O10)0     (O21)1     1.0              0.2
0.1     (O11)0     (O33)1     0.5              0.1

*These selections are under different total maximum data size constraints DT.

5.6 Summary

Universal Multimedia Access (UMA) is one of the important emerging applications for multimedia systems and signal processing. The basic idea of UMA is to adapt media-rich content according to the usage environment. The emerging MPEG-7 and MPEG-21 standards address different aspects of UMA by standardizing metadata descriptions for adapting this content. In this chapter, we described how UMA is supported in multimedia systems through media resource transcoding and multimedia content selection.
References

Bickmore, T.W., and Schilit, B.N. (1997). Digestor: Device-independent access to the World Wide Web. Proceedings of the Sixth International WWW Conference.
Björk, N., and Christopoulos, C. (2000). Video transcoding for universal multimedia access. Proceedings of the ACM International Conference on Multimedia (ACMMM), 75–79.
Han, R., and Smith, J.R. (2000). Transcoding of Internet multimedia for universal access. Multimedia communications: Directions and innovations. San Diego, CA: Academic Press.
Li, C.S., Mohan, R., and Smith, J.R. (1998). Multimedia content description in the InfoPyramid. IEEE Proceedings of the International Conference on Acoustics, Speech, and Signal Processing.
Kuhn, P.M., and Suzuki, T. (2001). MPEG-7 metadata for video transcoding: Motion and difficulty hints. SPIE Storage and Retrieval for Media Databases 4315, 352–361.
Kuhn, P.M., Suzuki, T., and Vetro, A. (2001). MPEG-7 transcoding hints for reduced complexity and improved quality. International Packet Video Workshop, 276–285.
Mohan, R., Smith, J.R., and Li, C.S. (1999). Adapting multimedia Internet content for universal access. IEEE Transactions on Multimedia 1(1), 104–114.
Mohan, R., Smith, J.R., and Li, C.S. (1998). Multimedia content customization for universal access. SPIE Photonics East—Multimedia Storage and Archiving Systems III.
Salembier, P., and Smith, J.R. (2001). MPEG-7 multimedia description schemes. IEEE Transactions on Circuits and Systems for Video Technology 11(6), 748–759.
Shoham, Y., and Gersho, A. (1990). Efficient bit allocation for an arbitrary set of quantizers. IEEE Transactions on Acoustics, Speech, and Signal Processing 36(9), 289–296.
Smith, J.R. (2003). MPEG-7 multimedia content description standard. Multimedia Information Retrieval and Management. Springer.
Smith, J.R. (2002). Multimedia content management in the MPEG-21 framework. SPIE ITCOM, Internet Multimedia Management Systems III.
Smith, J.R. (2000). Universal multimedia access. Proceedings of the SPIE Multimedia Networking Systems III.
Smith, J.R. (1999). VideoZoom spatio-temporal video browser. IEEE Transactions on Multimedia 1(2), 157–171.
Smith, J.R., Mohan, R., and Li, C.S. (1999). Scalable multimedia delivery for pervasive computing. Proceedings of the ACM International Conference on Multimedia (ACMMM).
Smith, J.R., Mohan, R., and Li, C.S. (1998a). Content-based transcoding of images in the Internet. IEEE Proceedings of the International Conference on Image Processing.
Smith, J.R., Mohan, R., and Li, C.S. (1998b). Transcoding Internet content for heterogeneous client devices. Proceedings of the IEEE International Symposium on Circuits and Systems.
Smith, J.R., and Reddy, V. (2001). An application-based perspective on universal multimedia access using MPEG-7. Proceedings of the SPIE Multimedia Networking Systems IV.
Tseng, B., Lin, C.Y., and Smith, J.R. (2002). Video personalization and summarization system. Proceedings of the IEEE Multimedia Signal Processing Conference.
van Beek, P., Smith, J.R., Ebrahimi, T., Suzuki, T., and Askelof, J. (2003). Metadata-driven multimedia access. IEEE Signal Processing Magazine 52(11), 969–979.
Vetro, A., Sun, H., and Wang, Y. (1999). MPEG-4 rate control for multiple video objects. IEEE Transactions on Circuits and Systems for Video Technology 9(1), 186–199.
6 Statistical Signal Processing

Yih-Fang Huang
Department of Electrical Engineering, University of Notre Dame, Notre Dame, Indiana, USA
6.1 Introduction
6.2 Bayesian Estimation
    6.2.1 Minimum Mean-Squared Error Estimation · 6.2.2 Maximum a Posteriori Estimation
6.3 Linear Estimation
6.4 Fisher Statistics
    6.4.1 Likelihood Functions · 6.4.2 Sufficient Statistics · 6.4.3 Information Inequality and Cramér-Rao Lower Bound · 6.4.4 Properties of MLE
6.5 Signal Detection
    6.5.1 Bayesian Detection · 6.5.2 Neyman-Pearson Detection · 6.5.3 Detection of a Known Signal in Gaussian Noise
6.6 Suggested Readings
References
6.1 Introduction

Statistical signal processing is an important subject in signal processing that enjoys a wide range of applications, including communications, control systems, medical signal processing, and seismology. It plays an important role in the design, analysis, and implementation of adaptive filters, such as adaptive equalizers in digital communication systems. It is also the foundation of multiuser detection, which is considered an effective means for mitigating multiple-access interference in spread-spectrum-based wireless communications. The fundamental problems of statistical signal processing are those of signal detection and estimation, which aim to extract information from the received signals and help make decisions. The received signals (especially those in communication systems) are typically modeled as random processes due to the existence of uncertainties. The uncertainties usually arise in the process of signal propagation or due to noise in measurements. Since the signals are considered random, statistical tools are needed. As such, the techniques used to solve those problems are derived from the principles of mathematical statistics, namely, hypothesis testing and estimation. This chapter presents a brief introduction to some basic concepts critical to statistical signal processing. To begin
with, the principles of Bayesian and Fisher statistics relevant to estimation are presented. In the discussion of Bayesian statistics, emphasis is placed on two types of estimators: the minimum mean-squared error (MMSE) estimator and the maximum a posteriori (MAP) estimator. An important class of linear estimators, namely the Wiener filter, is also presented as a linear MMSE estimator. In the discussion of Fisher statistics, emphasis will be placed on the maximum likelihood estimator (MLE), the concept of sufficient statistics, and information inequality that can be used to measure the quality of an estimator. This chapter also includes a brief discussion of signal detection. The signal detection problem is presented as one of binary hypothesis testing. Both Bayesian and Neyman-Pearson optimum criteria are presented and shown to be implemented with likelihood ratio tests. The principles of hypothesis testing also find a wide range of applications that involve decision making.
6.2 Bayesian Estimation

Bayesian estimation methods are generally employed to estimate a random variable (or parameter) based on another
random variable that is usually observable. Derivations of Bayesian estimation algorithms depend critically on the a posteriori distribution of the underlying signals (or signal parameters) to be estimated. Those a posteriori distributions are obtained by employing Bayes’ rules, thus the name Bayesian estimation. Consider a random variable X that is some function of another random variable S. In practice, X is what is observed and S is the signal (or signal parameter) that is to be estimated. Denote the estimate as S^(X), and the error as e ¼ S S^(X). Generally, the cost is defined as a function of the estimation error that clearly depends on both X and S, thus J (e) ¼ J (S, X). The objective of Bayesian estimation is to minimize the Bayes’ risk R, which is defined as the ensemble average (i.e., the expected value) of the cost. In particular, the Bayes’ risk is defined by:
and the maximum a posteriori (MAP) estimator. These two estimators are discussed in more detail below.
6.2.1 Minimum Mean-Squared Error Estimation When the cost function is defined to be the mean-squared error (MSE), as in equation 6.4a, the Bayesian estimate can be derived by substituting J (e) ¼ jej2 ¼ (s ^s (x))2 into equation 6.2. Hence the following is true: RMS ¼
1
where fS, X (s, x) is the joint probability density function (pdf) of the random variables S and X. In practice, the joint pdf is not directly obtainable. However, by Bayes’ rule: (6:1)
the a posteriori pdf can be used to facilitate the derivation of Bayesian estimates. With the a posteriori pdf, the Bayes’ risk can now be expressed as: ð 1 ð 1 R¼ J (s, x)fSjX (sjx)ds fX (x)dx: (6:2) 1
1
Because the cost function is, in general, non-negative and so are the pdfs, minimizing the Bayes’ risk is equivalent to minimizing: ð1 J (s, x)fSjX (sjx)ds: (6:3) 1
Depending on how the cost function is defined, the Bayesian estimation principle leads to different kinds of estimation algorithms. Two of the most commonly used cost functions are the following: JMS (e) ¼ jej2 : ( 0 JMAP (e) ¼ 1
if jej if jej >
where
D 1.
(s ^s (x))2 fSjX (sjx)ds ¼ 0:
(6:6)
1
ð1
(6:4b)
These example cost functions result in two popular estimators, namely, the minimum mean-squared error (MMSE) estimator
(s ^s MS (x))fSjX (sjx)ds ¼ 0;
1
and s^MS (x) ¼
ð1
sfSjX (sjx)ds ¼ E{sjx}:
(6:7)
1
In essence, the estimate that minimizes the MSE is the conditional-mean estimate. If the a posteriori distribution is Gaussian, then the conditional-mean estimator is a linear function of X, regardless of the functional relation between S and X. The following theorem summarizes a very familiar result. Theorem Let X ¼ [X1 , X2 , . . . , XK ]T and S be jointly Gaussian with zero means. ThePMMSE estimate of S based on X is E[SjX ] K and E[SjX PK ] ¼ i¼1 ai xi , where ai is chosen such that E[(S i¼1 ai Xi )Xj ] ¼ 0 for any j ¼ 1, 2, . . . , K.
6.2.2 Maximum a Posteriori Estimation If the cost function is defined as in equation 6.4b, the Bayes’ risk is as follows:
(6:4a) D 2 D 2
ð1
The differentiation in equation 6.6 is evaluated at s^ ¼ ^ s MS (x). Consequently:
1
fS;X (s, x) ¼ fSjX (sjx)fX (x),
(6:5)
1
d d^s
R ¼ EfJ (e)g ð1 ð1 ¼ J (s, x)fS, X (s, x)dxds,
(s ^s (x))2 fSjX (sjx)ds fX (x)dx:
Denote the resulting estimate as s^MS (x), and use the same argument that leads to equation 6.3. A necessary condition for minimizing RMS results as:
D
1
ð 1 ð 1
RMAP ¼
ð1 1
fX (x)[1
ð s^MAP þD 2
s^MAP D2
To minimize RMAP , we maximize
fSjX (sjx)ds]dx:
Ð s^MAP þD2
f (sjx)ds. s^MAP D2 SjX
When
D is extremely small (as is required), this is equivalent to
6 Statistical Signal Processing
maximizing f_{S|X}(s|x). Thus, \hat{s}_{MAP}(x) is the value of s that maximizes f_{S|X}(s|x). It is often more convenient (for algebraic manipulation) to consider \ln f_{S|X}(s|x) instead, since \ln(\cdot) is a monotone nondecreasing function. A necessary condition for maximizing \ln f_{S|X}(s|x) is:

\frac{\partial}{\partial s} \ln f_{S|X}(s|x) \Big|_{s = \hat{s}_{MAP}(x)} = 0.  (6.8)

Equation 6.8 is often referred to as the MAP equation. Employing Bayes' rule, the MAP equation can also be written as:

\left[ \frac{\partial}{\partial s} \ln f_{X|S}(x|s) + \frac{\partial}{\partial s} \ln f_S(s) \right] \Big|_{s = \hat{s}_{MAP}(x)} = 0.  (6.9)
Example (van Trees, 1968): Let X_i, i = 1, 2, \ldots, K, be a sequence of random variables modeled as follows:

X_i = S + N_i, \quad i = 1, 2, \ldots, K,

where S is a zero-mean Gaussian random variable with variance \sigma_s^2 and \{N_i\} is a sequence of independent and identically distributed (iid) zero-mean Gaussian random variables with variance \sigma_n^2. Denote X = [X_1\ X_2\ \ldots\ X_K]^T. Then:

f_{X|S}(x|s) = \prod_{i=1}^{K} \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\left[ -\frac{(x_i - s)^2}{2\sigma_n^2} \right],

f_S(s) = \frac{1}{\sqrt{2\pi}\,\sigma_s} \exp\left[ -\frac{1}{2} \frac{s^2}{\sigma_s^2} \right],

f_{S|X}(s|x) = \frac{f_{X|S}(x|s)\, f_S(s)}{f_X(x)} = \frac{1}{f_X(x)} \cdot \frac{1}{\sqrt{2\pi}\,\sigma_s} \left[ \prod_{i=1}^{K} \frac{1}{\sqrt{2\pi}\,\sigma_n} \right] \exp\left[ -\frac{1}{2} \left( \frac{\sum_{i=1}^{K} (x_i - s)^2}{\sigma_n^2} + \frac{s^2}{\sigma_s^2} \right) \right].

Completing the square in s yields:

f_{S|X}(s|x) = C(x) \exp\left[ -\frac{1}{2\sigma_p^2} \left( s - \frac{\sigma_s^2}{\sigma_s^2 + \sigma_n^2/K} \cdot \frac{1}{K} \sum_{i=1}^{K} x_i \right)^2 \right],  (6.10)

where \frac{1}{\sigma_p^2} = \frac{1}{\sigma_s^2} + \frac{K}{\sigma_n^2}, that is, \sigma_p^2 = \frac{\sigma_s^2 \sigma_n^2}{K\sigma_s^2 + \sigma_n^2}.

From equation 6.10, it can be seen clearly that the conditional-mean estimate and the MAP estimate are equivalent. In particular:

\hat{s}_{MS}(x) = \hat{s}_{MAP}(x) = \frac{\sigma_s^2}{\sigma_s^2 + \sigma_n^2/K} \left( \frac{1}{K} \sum_{i=1}^{K} x_i \right).

In this example, the MMSE estimate and the MAP estimate are equivalent because the a posteriori distribution is Gaussian. Some useful insights into Bayesian estimation can be gained through this example (van Trees, 1968).

Remarks
1. If \sigma_s^2 \ll \sigma_n^2/K, the a priori knowledge is more useful than the observed data, and the estimate is very close to the a priori mean (i.e., 0). In this case, the observed data have almost no effect on the value of the estimate.
2. If \sigma_s^2 \gg \sigma_n^2/K, the estimate is directly related to the observed data, as it is essentially the sample mean, while the a priori knowledge is of little value.
3. The equivalence of \hat{s}_{MAP}(x) to \hat{s}_{MS}(x) is not restricted to the case of a Gaussian a posteriori pdf. In fact, if the cost function is symmetric and nondecreasing, and if the a posteriori pdf is symmetric and unimodal and satisfies \lim_{s \to \infty} J(s, x) f_{S|X}(s|x) = 0, then the resulting Bayesian estimate (e.g., the MAP estimate) is equivalent to \hat{s}_{MS}(x).

6.3 Linear Estimation

Since the Bayesian estimators (MMSE and MAP estimates) presented in the previous section are usually not linear, it may be impractical to implement them. An alternative is to restrict consideration to the class of linear estimators and then find the optimum estimator in that class. As such, the notion of optimality deviates from that of Bayesian estimation, which minimizes the Bayes' risk.

One of the most commonly used optimization criteria is the MSE (i.e., minimizing the error variance). This approach leads to the class of Wiener filters, which includes the Kalman filter as a special case. In essence, the Kalman filter is a realizable Wiener filter, as it is derived with a realizable (state-space) model. These linear estimators are much more appealing in practice due to their reduced implementation complexity and the relative simplicity of their performance analysis.

The problem can be described as follows. Given a set of zero-mean random variables X_1, X_2, \ldots, X_K, it is desired to estimate a random variable S (also zero-mean). The objective is to find an estimator \hat{S} that is linear in the X_i and that is optimum in some sense, such as MMSE. Clearly, if \hat{S} is constrained to be linear in the X_i, it can be expressed as \hat{S} = \sum_{i=1}^{K} a_i X_i. This expression can be used independently of the model that governs the relation between X and S. One can see that once the coefficients a_i, i = 1, 2, \ldots, K, are determined, \hat{S} is unambiguously (uniquely) specified. As such, the problem of finding an optimum estimator becomes one of finding the optimum set of coefficients, and estimation of a random signal becomes estimation of a set of deterministic parameters.
Yih-Fang Huang
If the objective is to minimize the MSE, namely:

E\{\|S - \hat{S}\|^2\} = E\left\{ \left\| S - \sum_{i=1}^{K} a_i X_i \right\|^2 \right\},  (6.11)

a necessary condition is that:

\frac{\partial}{\partial a_i} E\{\|S - \hat{S}\|^2\} = 0 \quad \text{for all } i.  (6.12)

Equation 6.12 is equivalent to:

E\left\{ \left( S - \sum_{j=1}^{K} a_j X_j \right) X_i \right\} = 0 \quad \text{for all } i.  (6.13)

In other words, a necessary condition for obtaining the linear MMSE estimate is the uncorrelatedness between the estimation error and the observed random variables. In the context of vector space, equation 6.13 is the well-known orthogonality principle. Intuitively, the equation states that the linear MMSE estimate of S is the projection of S onto the subspace spanned by the set of random variables \{X_i\}. In this framework, the norm of the vector space is the mean-square value, while the inner product between two vectors is the correlation between the two random variables.

Let the autocorrelation coefficients of the X_i be E\{X_j X_i\} = r_{ji} and the cross-correlation coefficients of X_i and S be E\{S X_i\} = r_i. Then equation 6.13 is simply:

r_i = \sum_{j=1}^{K} a_j r_{ji} \quad \text{for all } i,  (6.14)

which is essentially the celebrated Wiener-Hopf equation. Assume that the r_{ij} and r_j are known for all i, j; then the coefficients \{a_i\} can be solved for from equation 6.14. In fact, this equation can be stacked up and put in the following matrix form:

\begin{bmatrix} r_{11} & r_{21} & \cdots & r_{K1} \\ r_{12} & r_{22} & \cdots & r_{K2} \\ \vdots & \vdots & \ddots & \vdots \\ r_{1K} & r_{2K} & \cdots & r_{KK} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_K \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_K \end{bmatrix},  (6.15)

or simply:

R a = r.  (6.16)

Thus, the coefficient vector can be solved by:

a = R^{-1} r,  (6.17)

which is sometimes termed the normal equation. The orthogonality principle of equation 6.13 and the normal equation 6.17 are the basis of the Wiener filter, one of the most studied and most commonly used linear adaptive filters, with many applications (Haykin, 2001, 1991). The matrix inversion in equation 6.17 could present a formidable numerical difficulty, especially when the vector dimension is high. In those cases, computationally efficient algorithms, like the Levinson recursion, can be employed to mitigate the impact of the numerical difficulty.

6.4 Fisher Statistics

Generally speaking, there are two schools of thought in statistics: Bayes and Fisher. Bayesian statistics and estimation were presented in the previous section, where the emphasis was on estimation of random signals and parameters. This section is focused on estimation of a deterministic parameter (or signal). A natural question that one may ask is whether the Bayesian approach presented in the previous section is applicable here, or whether the estimation of deterministic signals can be treated as a special case of estimating random signals. A closer examination shows that an alternative approach needs to be taken (van Trees, 1968), because the essential issues that govern the performance of the estimators differ significantly.

The fundamental concept underlying the Fisher school of statistics is that of the likelihood function. In contrast, Bayesian statistics is derived from conditional distributions, namely, the a posteriori distributions. This section begins with an introduction of the likelihood function and a derivation of the maximum likelihood estimation method. These are followed by the notion of sufficient statistics, which plays an important role in Fisherian statistics. Optimality properties of maximum likelihood estimates are then examined with the definition of Fisher information. The Cramér-Rao lower bound and minimum variance unbiased estimators are then discussed.

6.4.1 Likelihood Functions

Fisher's approach to estimation centers around the concept of the likelihood function (Fisher, 1950). Consider a random variable X that has a probability distribution F_X(x) with probability density function (pdf) f_X(x) parameterized by a parameter \theta. The likelihood function (with respect to the parameter \theta) is defined as:

L(x; \theta) = f_X(x|\theta).  (6.18)

It may appear, at first sight, that the likelihood function is nothing but the pdf. It is important, however, to note that the likelihood function is really a function of the parameter \theta for a fixed value of x, whereas the pdf is a function of the realization x of the random variable for a fixed value of \theta. Therefore, in a likelihood function the variable is \theta, while in a pdf the variable is x. The likelihood function is a quantitative indication of how
likely a particular realization (observation) of the random variable would have been produced from a particular distribution. The higher the value of the likelihood function, the more likely it is that the particular value of the parameter produced that realization of x. Hence, the cost function in the Fisherian estimation paradigm is the likelihood function, and the objective is to find the value of the parameter that maximizes it, resulting in the maximum likelihood estimate (MLE).

The MLE can be derived as follows. Let there be a random variable whose pdf, f_X(x), is parameterized by a parameter \theta. Define the objective function:

L(x; \theta) = f_X(x|\theta),  (6.19)

where f_X(x|\theta) is the pdf of X for a given \theta. Then, the MLE is obtained by:

\hat{\theta}_{MLE}(x) = \arg\max_{\theta} L(x; \theta).  (6.20)

Clearly, a necessary condition for L(x; \theta) to be maximized is that:

\frac{\partial}{\partial \theta} L(x; \theta) \Big|_{\theta = \hat{\theta}_{MLE}} = 0.  (6.21)

This equation is sometimes referred to as the likelihood equation. In general, it is more convenient to consider the log-likelihood function, defined as:

l(x; \theta) = \ln L(x; \theta),  (6.22)

and the log-likelihood equation:

\frac{\partial}{\partial \theta} l(x; \theta) \Big|_{\theta = \hat{\theta}_{MLE}} = 0.  (6.23)

Example: Consider a sequence of random variables X_i, i = 1, 2, \ldots, K, modeled as:

X_i = \theta + N_i,

where N_i is a sequence of iid zero-mean Gaussian random variables with variance \sigma^2. In practice, this formulation can be considered as one of estimating a DC signal embedded in white Gaussian noise. The issue here is to estimate the strength, i.e., the magnitude, of the DC signal based on a set of observations x_i, i = 1, 2, \ldots, K. The log-likelihood function as defined in equation 6.22 is given by:

l(x; \theta) = -K \ln(\sqrt{2\pi}\,\sigma) - \frac{1}{2\sigma^2} \sum_{i=1}^{K} (x_i - \theta)^2,

where x = [x_1, x_2, \ldots, x_K]^T. Solving the log-likelihood equation 6.23 yields:

\hat{\theta}_{MLE}(x) = \frac{1}{K} \sum_{i=1}^{K} x_i.

Thus, the MLE for a DC signal embedded in additive zero-mean white Gaussian noise is the sample mean. As it turns out, this sample mean is the sufficient statistic for estimating \theta. The concept of the sufficient statistic is critical to the optimum properties of the MLE and, in general, to Fisherian statistics. Generally speaking, the likelihood function is directly related to sufficient statistics, and the MLE is usually a function of sufficient statistics.

6.4.2 Sufficient Statistics

Sufficient statistics is a concept defined in reference to a particular parameter (or signal) to be estimated. Roughly speaking, a sufficient statistic is a function of the set of observations that contains all the information possibly obtainable for the estimation of a particular parameter. Given a parameter \theta to be estimated, assume that x is the vector consisting of the observed variables. A statistic T(x) is said to be a sufficient statistic if the probability distribution of X given T(x) = t is independent of \theta. In essence, if T(x) is a sufficient statistic, then all the information regarding estimation of \theta that can be extracted from the observation is contained in T(x). The Fisher factorization theorem stated below is sometimes used as a definition of the sufficient statistic.

Fisher Factorization Theorem: A function T(X) of the observation set is a sufficient statistic if the likelihood function of X can be expressed as:

L(x; \theta) = h(T(x), \theta)\, g(x).  (6.24)

In the example shown in Section 6.4.1, \sum_{i=1}^{K} x_i is a sufficient statistic, and the sample mean \frac{1}{K}\sum_{i=1}^{K} x_i is the MLE for \theta. The fact that \sum_{i=1}^{K} x_i is a sufficient statistic can be easily seen by using the Fisher factorization theorem. In particular:

L(x; \theta) = \left( \frac{1}{\sqrt{2\pi}\,\sigma} \right)^K \exp\left[ -\frac{1}{2\sigma^2} \sum_{i=1}^{K} (x_i - \theta)^2 \right].  (6.25)

The above equation can be expressed as:

L(x; \theta) = \left\{ \exp\left[ \frac{\theta}{\sigma^2} \left( \sum_{i=1}^{K} x_i \right) - \frac{K\theta^2}{2\sigma^2} \right] \right\} \left( \frac{1}{\sqrt{2\pi}\,\sigma} \right)^K \exp\left[ -\frac{1}{2\sigma^2} \sum_{i=1}^{K} x_i^2 \right].  (6.26)
Identifying equation 6.26 with 6.24, it can be easily seen that, if the following is defined:
h(T(x), \theta) = \exp\left[ \frac{\theta}{\sigma^2} \left( \sum_{i=1}^{K} x_i \right) - \frac{K\theta^2}{2\sigma^2} \right]

and

g(x) = \left( \frac{1}{\sqrt{2\pi}\,\sigma} \right)^K \exp\left[ -\frac{1}{2\sigma^2} \sum_{i=1}^{K} x_i^2 \right],

then T(x) = \sum_{i=1}^{K} x_i is clearly a sufficient statistic for estimating \theta. It should be noted that the sufficient statistic may not be unique; in fact, it is always subject to a scaling factor. From equation 6.26, it is seen that the pdf can be expressed as:

f_X(x|\theta) = \{ \exp[ c(\theta) T(x) + d(\theta) + S(x) ] \}\, I(x),  (6.27)

where c(\theta) = \theta/\sigma^2, T(x) = \sum_{i=1}^{K} x_i, d(\theta) = -K\theta^2/(2\sigma^2), S(x) = -\frac{K}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{K} x_i^2, and I(x) is an indicator function that is valued at one wherever the value of the pdf is nonzero and zero otherwise. A pdf that can be expressed in the form given in equation 6.27 is said to belong to the exponential family of distributions (EFOD). It can be verified easily that the Gaussian, Laplace (or two-sided exponential), binomial, and Poisson distributions all belong to the EFOD. When a pdf belongs to the EFOD, one can easily identify the sufficient statistic; in fact, the mean and variance of the sufficient statistic can then be easily calculated (Bickel and Doksum, 1977). As stated previously, the fact that the MLE is a function of the sufficient statistic helps to ensure its quality as an estimator. This will be discussed in more detail in the subsequent sections.
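The DC-level example can be checked numerically. In the sketch below (assuming NumPy; the parameter values are illustrative), a grid maximization of the log-likelihood recovers the sample mean, and two data sets sharing the same T(x) = \sum x_i yield log-likelihoods that differ only by a constant independent of \theta (the \ln g(x) term of the factorization):

```python
import numpy as np

rng = np.random.default_rng(3)
K, sigma, theta = 50, 1.0, 2.0
x = theta + rng.normal(0.0, sigma, K)

def loglik(th, data):
    # log-likelihood l(x; theta) for the iid Gaussian DC-level model
    return (-len(data) * np.log(np.sqrt(2.0 * np.pi) * sigma)
            - np.sum((data - th) ** 2) / (2.0 * sigma**2))

# Grid maximization of the log-likelihood lands on the sample mean.
grid = np.linspace(0.0, 4.0, 4001)
th_hat = grid[np.argmax([loglik(t, x) for t in grid])]
print(th_hat, x.mean())

# Sufficiency: perturb two samples so that sum(x) is unchanged. The
# log-likelihood difference is then constant in theta (only ln g changes).
y = x.copy()
y[0] += 0.5
y[1] -= 0.5
diffs = [loglik(t, x) - loglik(t, y) for t in (0.0, 1.0, 3.0)]
print(diffs)
```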
6.4.3 Information Inequality and Cramér-Rao Lower Bound

There are several criteria that can be used to measure the quality of an estimator. If \hat{\theta}(X) is an estimator for the parameter \theta based on the observation X, three criteria are typically used to evaluate its quality:

1. Bias: If E[\hat{\theta}(X)] = \theta, then \hat{\theta}(X) is said to be an unbiased estimator.
2. Variance: For estimation of deterministic parameters and signals, the variance of the estimator is the same as the variance of the estimation error. It is a commonly used performance measure, for it is practically easy to evaluate.
3. Consistency: This is an asymptotic property that is examined when the sample size (i.e., the dimension of the vector X) approaches infinity. If \hat{\theta}(X) converges with probability one to \theta, it is said to be strongly consistent. If it converges in probability, it is weakly consistent.
In this section, the discussion is focused on the second criterion, namely the variance, which is one of the most commonly used performance measures in statistical signal processing. In the estimation of deterministic parameters, Fisher's information is intimately related to the variance of the estimators. Fisher's information is defined as:

I(\theta) = E\left\{ \left( \frac{\partial}{\partial \theta} l(x; \theta) \right)^2 \right\}.  (6.28)

Note that Fisher's information is non-negative, and it is additive if the set of random variables is independent.

Theorem (Information Inequality): Let T(X) be a statistic such that its variance V_\theta(T(X)) < \infty for all \theta in the parameter space \Theta. Define \psi(\theta) = E_\theta\{T(X)\}. Assume that the following regularity condition holds:

E\left[ \frac{\partial}{\partial \theta} l(x; \theta) \right] = 0 \quad \text{for all } \theta \in \Theta,  (6.29)

where l(x; \theta) has been defined in equation 6.22. Assume further that \psi(\theta) is differentiable for all \theta. Then:

V_\theta(T(X)) \ge \frac{[\psi'(\theta)]^2}{I(\theta)}.  (6.30)

Remarks
1. The regularity condition defined in equation 6.29 is a restriction imposed on the likelihood function to guarantee that the order of the expectation operation and the differentiation is interchangeable.
2. If the regularity condition also holds for second-order derivatives, I(\theta) as defined in equation 6.28 can also be evaluated as:

I(\theta) = -E\left[ \frac{\partial^2}{\partial \theta^2} l(x; \theta) \right].

3. The subscript \theta of the expectation (E_\theta) and of the variance (V_\theta) indicates the dependence of the expectation and variance on \theta.
4. The information inequality gives a lower bound on the variance that any estimate can achieve. It thus reveals the best that an estimator can do as measured by the error variance.
5. If T(X) is an unbiased estimator for \theta (i.e., E_\theta\{T(X)\} = \theta), then the information inequality, equation 6.30, reduces to:

V_\theta(T(X)) \ge \frac{1}{I(\theta)},  (6.31)

which is the well-known Cramér-Rao lower bound (CRLB).
6. In statistical signal processing, closeness to the CRLB is often used as a measure of efficiency of an (unbiased) estimator. In particular, one may define the Fisherian efficiency of an unbiased estimator \hat{\theta}(X) as:

\eta(\hat{\theta}(X)) = \frac{I^{-1}(\theta)}{V_\theta\{\hat{\theta}(X)\}}.  (6.32)

Note that \eta is never greater than one, and the larger \eta is, the more efficient the estimator. In fact, when \eta = 1, the estimator achieves the CRLB and is said to be an efficient estimator in the Fisherian sense.
7. If an unbiased estimator has a variance that achieves the CRLB for all \theta \in \Theta, it is called a uniformly minimum variance unbiased estimator (UMVUE). It can be shown that a UMVUE is a consistent estimator and that the MLE is usually the UMVUE.
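For the DC-level example of Section 6.4.1, I(\theta) = K/\sigma^2, so the CRLB is \sigma^2/K, and the sample mean attains it. A quick Monte Carlo check (assuming NumPy; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
K, sigma, theta, trials = 20, 1.5, 3.0, 50000

# trials independent experiments, each with K samples of theta + noise
x = theta + rng.normal(0.0, sigma, (trials, K))
theta_hat = x.mean(axis=1)   # the MLE (sample mean) for each experiment

var_emp = theta_hat.var()    # empirical variance of the estimator
crlb = sigma**2 / K          # 1/I(theta) with I(theta) = K/sigma^2
print(var_emp, crlb)         # the two agree closely
```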
6.4.4 Properties of MLE

The MLE has many interesting properties, and it is the purpose of this section to list some of them.

1. It is easy to see that the MLE may be biased. This is because unbiasedness was not part of the objective in seeking the MLE. The MLE, however, is always asymptotically unbiased.
2. As shown in the previous section, the MLE is a function of the sufficient statistic. This can also be seen from the Fisher factorization theorem.
3. The MLE is asymptotically a minimum variance unbiased estimator. In other words, its variance asymptotically achieves the CRLB.
4. The MLE is consistent. In particular, it converges to the parameter with probability one (or in probability).
5. Under the regularity condition of equation 6.29, if there exists an unbiased estimator whose variance attains the CRLB, it is the MLE.
6. Generally speaking, the MLE of a transformation of a parameter (or signal) is the transformation of the MLE of that parameter (or signal). This is referred to as the invariance property of the MLE.

The fact that the MLE is a function of sufficient statistics means that it depends on the relevant information. This does not necessarily mean that it will always be the best estimator (in the sense of, say, minimum variance), for it may not make the best use of that information. However, when the sample size is large, all the information relevant to the unknown parameter is essentially available to the MLE. This explains why the MLE is asymptotically unbiased, is consistent, and asymptotically achieves the CRLB.

Example (Kay, 1993): Consider the problem of estimating the phase of a sinusoidal signal received with additive Gaussian noise. The problem is formulated as:

X_i = A \cos(\omega_0 i + \phi) + N_i, \quad i = 0, 1, \ldots, K-1,

where \{N_i\} is an iid sequence of zero-mean Gaussian random variables with variance \sigma^2. Employing equation 6.23, the MLE can be obtained by minimizing:

J(\phi) = \sum_{i=0}^{K-1} \left( x_i - A \cos(\omega_0 i + \phi) \right)^2.

Differentiating J(\phi) with respect to \phi and setting the result equal to zero yields:

\sum_{i=0}^{K-1} x_i \sin(\omega_0 i + \hat{\phi}_{MLE}) = A \sum_{i=0}^{K-1} \sin(\omega_0 i + \hat{\phi}_{MLE}) \cos(\omega_0 i + \hat{\phi}_{MLE}).

Assume that:

\frac{1}{K} \sum_{i=0}^{K-1} \cos(2\omega_0 i + 2\phi) \approx 0 \quad \text{for all } \phi.

Then the MLE for the phase can be approximated as:

\hat{\phi}_{MLE} \approx -\arctan\left( \frac{\sum_{i=0}^{K-1} x_i \sin(\omega_0 i)}{\sum_{i=0}^{K-1} x_i \cos(\omega_0 i)} \right).

Perceptive readers may see that the implementation of the MLE can easily become complex and numerically difficult, especially when the underlying distribution is non-Gaussian. If the parameter to be estimated is a simple scalar and its admissible values are limited to a finite interval, search algorithms can be employed to guarantee satisfactory results. If this is not the case, more sophisticated numerical optimization algorithms are needed to render good estimation results. Among others, iterative algorithms such as the Newton-Raphson method and the expectation-maximization method are often employed.
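The approximate phase MLE above is easy to exercise numerically; a sketch follows (assuming NumPy, with illustrative values of A, \omega_0, \phi, and \sigma):

```python
import numpy as np

rng = np.random.default_rng(1)
K, A, w0, phi, sigma = 200, 1.0, 0.3, 0.7, 0.5  # illustrative parameters
i = np.arange(K)
x = A * np.cos(w0 * i + phi) + rng.normal(0.0, sigma, K)

# Approximate MLE: phi_hat = -arctan( sum x_i sin(w0 i) / sum x_i cos(w0 i) )
num = np.sum(x * np.sin(w0 * i))
den = np.sum(x * np.cos(w0 * i))
phi_hat = -np.arctan2(num, den)   # arctan2 keeps the correct quadrant
print(phi_hat)                    # close to the true phase 0.7
```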
6.5 Signal Detection

The problem of signal detection can be formulated mathematically as one of binary hypothesis testing. Let X be the random variable that represents the observation, and let the observation space be denoted by \Omega (i.e., x \in \Omega). There are basically two hypotheses:

Null hypothesis H_0: X \sim F_0.  (6.33a)
Alternative H_1: X \sim F_1.  (6.33b)

Here, F_0 is the probability distribution of X given that H_0 is true, and F_1 is the probability distribution of X given that H_1 is true. In some applications (e.g., a simple radar communication system), it may be desirable to detect a signal of constant amplitude; then F_1 will simply be F_0 shifted by a mean value equal to the signal amplitude. In general, this formulation also assumes that, with probability one, either the null hypothesis or the alternative is true. Specifically, let p_0 and p_1 be the prior probabilities of H_0 and H_1 being true, respectively. Then:

p_0 + p_1 = 1.  (6.34)

The objective here is to decide whether H_0 or H_1 is true based on the observation of X = x. A decision rule, namely a detection scheme, \delta(x) essentially partitions \Omega into two subspaces \Omega_0 and \Omega_1. The subspace \Omega_0 consists of all observations that lead to the decision that H_0 is true, while \Omega_1 consists of all observations that lead to the decision that H_1 is true. For notational convenience, one may also define \delta(x) as follows:

\delta(x) = \begin{cases} 0 & \text{if } x \in \Omega_0 \\ 1 & \text{if } x \in \Omega_1 \end{cases}  (6.35)

For any decision rule \delta(x), there are clearly four possible outcomes:

1. Decide H_0 when H_0 is true.
2. Decide H_0 when H_1 is true.
3. Decide H_1 when H_0 is true.
4. Decide H_1 when H_1 is true.

Two types of error can occur:

1. Type-I error (false alarm): Decide H_1 when H_0 is true. The probability of type-I error is:

\alpha = \int_{\Omega_1} f_0(x)\,dx.  (6.36)

2. Type-II error (miss): Decide H_0 when H_1 is true. The probability of type-II error is:

P_m = \int_{\Omega_0} f_1(x)\,dx.  (6.37)

Another quantity of concern is the probability of detection, which is often termed the power in the signal detection literature, defined by:

\beta = 1 - P_m = \int_{\Omega_1} f_1(x)\,dx.  (6.38)

Among the various decision criteria, Bayes and Neyman-Pearson are the most popular. Detection schemes may also be classified into parametric and nonparametric detectors. The discussion here focuses on the Bayes and Neyman-Pearson criteria, which are considered parametric. Before any decision criteria can be derived, the following constraints need to be stated first:

Prob\{\delta(x) = 0 | H_0\} + Prob\{\delta(x) = 1 | H_0\} = 1.  (6.39a)
Prob\{\delta(x) = 0 | H_1\} + Prob\{\delta(x) = 1 | H_1\} = 1.  (6.39b)

The above constraints of equations 6.39a and 6.39b are simply outcomes of the assumptions that \Omega_0 \cup \Omega_1 = \Omega and \Omega_0 \cap \Omega_1 = \emptyset (i.e., for every observation, an unambiguous decision must be made).

6.5.1 Bayesian Detection

The objective of a Bayes criterion is to minimize the so-called Bayes' risk, which is, again, defined as the expected value of the cost. To derive a Bayes' detection rule, the costs of making decisions need to be defined first. Let C_{ij} denote the cost of choosing H_i when H_j is true. In particular:

C_{01} = cost of choosing H_0 when H_1 is true.
C_{10} = cost of choosing H_1 when H_0 is true.

In addition, assume that the prior probabilities p_0 and p_1 are known. The Bayes' risk is then evaluated as follows:

R = E\{C\} = p_0 \left[ Prob\{\delta(x) = 0|H_0\} C_{00} + Prob\{\delta(x) = 1|H_0\} C_{10} \right] + p_1 \left[ Prob\{\delta(x) = 0|H_1\} C_{01} + Prob\{\delta(x) = 1|H_1\} C_{11} \right].  (6.40)

Substituting equations 6.34, 6.39a, and 6.39b into equation 6.40 yields:

R = p_0 C_{00} \int_{\Omega_0} f_0(x)\,dx + p_0 C_{10} \int_{\Omega_1} f_0(x)\,dx + p_1 C_{01} \int_{\Omega_0} f_1(x)\,dx + p_1 C_{11} \int_{\Omega_1} f_1(x)\,dx
= p_0 C_{10} + (1 - p_0) C_{11} + \int_{\Omega_0} \left\{ (1 - p_0)(C_{01} - C_{11}) f_1(x) - p_0 (C_{10} - C_{00}) f_0(x) \right\} dx.

Note that the sum of the first two terms is a constant. In general, it is reasonable to assume that:

C_{01} - C_{11} > 0 \quad \text{and} \quad C_{10} - C_{00} > 0.

In other words, the costs of making correct decisions are less than those of making incorrect decisions. Define:

I_1(x) = (1 - p_0)(C_{01} - C_{11}) f_1(x).
I_2(x) = p_0 (C_{10} - C_{00}) f_0(x).

It can be seen easily that I_1(x) \ge 0 and I_2(x) \ge 0. Thus, R can be rewritten as:

R = \text{constant} + \int_{\Omega_0} [ I_1(x) - I_2(x) ]\,dx.

To minimize R, the observation space \Omega needs to be partitioned such that x \in \Omega_1 whenever I_1(x) \ge I_2(x). In other words, decide H_1 if:

(1 - p_0)(C_{01} - C_{11}) f_1(x) \ge p_0 (C_{10} - C_{00}) f_0(x).  (6.41)
So, the Bayes' detection rule essentially evaluates the likelihood ratio, defined by:

\Lambda(x) \triangleq \frac{f_1(x)}{f_0(x)},  (6.42)

and compares it to the threshold:

\lambda \triangleq \frac{p_0}{1 - p_0} \cdot \frac{C_{10} - C_{00}}{C_{01} - C_{11}}.  (6.43)

In particular:

\Lambda(x) = \frac{f_1(x)}{f_0(x)} \;\begin{cases} \ge \lambda \Rightarrow H_1 \\ < \lambda \Rightarrow H_0 \end{cases}  (6.44)

A decision rule characterized by the likelihood ratio and a threshold as in equation 6.44 is referred to as a likelihood ratio test (LRT). A Bayes' detection scheme is always an LRT. Depending on how the a posteriori probabilities of the two hypotheses are defined, the Bayes' detector can be realized in different ways. One typical example is the so-called MAP detector, which renders the minimum probability of error by choosing the hypothesis with the maximum a posteriori probability. Another class of detectors is the minimax detectors, which can be considered an extension of Bayes' detectors. The minimax detector is also an LRT. It assumes no knowledge of the prior probabilities (i.e., p_0 and p_1) and selects the threshold by choosing the prior probability that renders the maximum Bayes' risk. The minimax detector is a robust detector because its performance does not vary with the prior probabilities.

6.5.2 Neyman-Pearson Detection

The principle of the Neyman-Pearson criterion is founded on the Neyman-Pearson lemma stated below.

Neyman-Pearson Lemma: Let \delta_\lambda(x) be a likelihood ratio test with a threshold \lambda as defined in equation 6.44, and let \alpha and \beta be the false-alarm rate and power, respectively, of \delta_\lambda(x). Let \delta_{\lambda^*}(x) be another arbitrary likelihood ratio test with threshold \lambda^* and with false-alarm rate and power, respectively, \alpha^* and \beta^*. If \alpha^* \le \alpha, then \beta^* \le \beta.

The Neyman-Pearson lemma shows that if one desires to increase the power of an LRT, one must also accept the consequence of an increased false-alarm rate. As such, the Neyman-Pearson detection criterion aims to maximize the power under the constraint that the false-alarm rate be upper bounded by, say, \alpha_0. The Neyman-Pearson detector can be derived by first defining a cost function:

J = (1 - \beta) + \lambda (\alpha - \alpha_0).  (6.45)

It can be shown that:

J = \lambda (1 - \alpha_0) + \int_{\Omega_0} [ f_1(x) - \lambda f_0(x) ]\,dx,  (6.46)

and an LRT will minimize J for any positive \lambda. In particular:

\Lambda(x) = \frac{f_1(x)}{f_0(x)} \;\begin{cases} \ge \lambda \Rightarrow H_1 \\ < \lambda \Rightarrow H_0 \end{cases}

To satisfy the constraint and to maximize the power, choose \lambda so that \alpha = \alpha_0, namely:

\alpha = \int_{\lambda}^{\infty} f_{\Lambda|H_0}(l|H_0)\,dl = \alpha_0,

where f_{\Lambda|H_0}(l|H_0) is the pdf of the likelihood ratio under H_0. The threshold is determined by solving the above equation. The Neyman-Pearson detector is known to be the most powerful detector for the problem of detecting a constant signal in noise. One advantage of the Neyman-Pearson detector is that its implementation does not require explicit knowledge of the prior probabilities and the costs of decisions. However, as is the case for the Bayes' detector, evaluation of the likelihood ratio still requires exact knowledge of the pdf of X under both hypotheses.

6.5.3 Detection of a Known Signal in Gaussian Noise

Consider a signal detection problem formulated as follows:

H_0: X_i = N_i.
H_1: X_i = N_i + S,  (6.47)

for i = 1, 2, \ldots, K. Assume that S is a deterministic constant and that the N_i, i = 1, 2, \ldots, K, are iid zero-mean Gaussian random variables with a known variance \sigma^2. The likelihood ratio is then:

\Lambda(x) = \frac{f_1(x_1, x_2, \ldots, x_K)}{f_0(x_1, x_2, \ldots, x_K)} = \prod_{i=1}^{K} \frac{f_1(x_i)}{f_0(x_i)}.

Taking the logarithm of \Lambda(x) yields the log-likelihood ratio:

\ln \Lambda(x) = \sum_{i=1}^{K} \frac{2 x_i S - S^2}{2\sigma^2}.  (6.48)

Straightforward algebraic manipulations show that the LRT is characterized by comparing a test statistic

T(x) \triangleq \sum_{i=1}^{K} x_i  (6.49)

with a threshold \lambda_0. If T(x) \ge \lambda_0, then H_1 is said to be true; otherwise, H_0 is said to be true.

It is interesting to note that T(x) = \sum_{i=1}^{K} x_i turns out to be the sufficient statistic for estimating S, as shown in Section 6.4.2. This should not be surprising, as detection of a constant signal in noise is a dual problem of estimating the mean of the observations. Under the iid Gaussian assumption, the test statistic is clearly a Gaussian random variable. Furthermore, E\{T(X)|H_0\} = 0, E\{T(X)|H_1\} = KS, and Var\{T(X)|H_0\} = Var\{T(X)|H_1\} = K\sigma^2. The pdfs of T(x), plotted in Figure 6.1, can be written as:

f_{T|H_0}(t|H_0) = \frac{1}{\sqrt{2\pi K \sigma^2}}\, e^{-t^2/(2K\sigma^2)}.
f_{T|H_1}(t|H_1) = \frac{1}{\sqrt{2\pi K \sigma^2}}\, e^{-(t - KS)^2/(2K\sigma^2)}.

The false alarm rate and the power are given by:

\alpha = \int_{\lambda_0}^{\infty} f_{T|H_0}(t|H_0)\,dt.
\beta = \int_{\lambda_0}^{\infty} f_{T|H_1}(t|H_1)\,dt.

The above two equations can be further specified as:

\alpha = \int_{\lambda_0}^{\infty} \frac{1}{\sqrt{2\pi K\sigma^2}}\, e^{-t^2/(2K\sigma^2)}\,dt = 1 - \Phi\left( \frac{\lambda_0}{\sigma\sqrt{K}} \right).  (6.50)

\beta = \int_{\lambda_0}^{\infty} \frac{1}{\sqrt{2\pi K\sigma^2}}\, e^{-(t - KS)^2/(2K\sigma^2)}\,dt = 1 - \Phi\left( \frac{\lambda_0 - KS}{\sigma\sqrt{K}} \right).  (6.51)

In equations 6.50 and 6.51:

\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-t^2/2}\,dt.

For the Neyman-Pearson detector, the threshold \lambda_0 is determined by the constraint on the false alarm rate:

\lambda_0 = \sigma \sqrt{K}\, \Phi^{-1}(1 - \alpha).

Combining equations 6.50 and 6.51 yields a relation between the false alarm rate and the power, namely:

\beta = 1 - \Phi\left( \Phi^{-1}(1 - \alpha) - \sqrt{K}\, \frac{S}{\sigma} \right).  (6.52)

Figure 6.1 is a good illustration of the Neyman-Pearson lemma. The shaded area under f_{T|H_1}(t|H_1) is the value of the power, and the shaded area under f_{T|H_0}(t|H_0) is the false alarm rate. It is seen that if the threshold is moved to the left, both the power and the false alarm rate increase.

Remarks
1. It can be seen from equation 6.51 that as the sample size K increases, \alpha decreases and \beta increases. In fact, \lim_{K \to \infty} \beta = 1.
2. Define d^2 = S^2/\sigma^2, which can be taken as the signal-to-noise ratio (SNR). From equation 6.52, one can see that \lim_{d \to \infty} \beta = 1.
3. If the test statistic is defined with a scaling factor, namely T(x) = \sum_{i=1}^{K} (S/\sigma^2) x_i, the detector remains unchanged as long as the threshold is also scaled accordingly. Let h_i = S/\sigma^2; then T(x) = \sum_{i=1}^{K} h_i x_i, and the detector is essentially a discrete-time matched filter. This detector is also known as the correlation detector, as it correlates the signal with the observation. When the output of the detector is of a large numerical value, the correlation between the observation and the signal is high, and H_1 is (likely to be) true.

The Neyman-Pearson detector can be further characterized by the receiver operating characteristic (ROC) shown in Figure 6.2, which is a plot of equation 6.52 parameterized by d, the SNR. It should be noted that all continuous LRTs have ROCs that are above the line \alpha = \beta and are concave downward. In addition, the slope of an ROC curve at a particular point is the value of the threshold \lambda required to achieve the corresponding values of \alpha and \beta.
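Equations 6.50 through 6.52 can be verified by simulation. The sketch below (assuming NumPy; the standard-library NormalDist supplies \Phi and \Phi^{-1}; the values of K, S, \sigma, and \alpha_0 are illustrative) sets the Neyman-Pearson threshold and checks the empirical false alarm rate and power:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)
K, S, sigma, alpha0, trials = 25, 0.5, 1.0, 0.1, 100_000
Phi, Phi_inv = NormalDist().cdf, NormalDist().inv_cdf

# Threshold from equation 6.50 and predicted power from equation 6.52.
lam0 = sigma * np.sqrt(K) * Phi_inv(1 - alpha0)
beta_theory = 1 - Phi(Phi_inv(1 - alpha0) - np.sqrt(K) * S / sigma)

# Test statistic T(x) = sum of the K observations, under each hypothesis.
T0 = rng.normal(0.0, sigma, (trials, K)).sum(axis=1)        # H0: noise only
T1 = (S + rng.normal(0.0, sigma, (trials, K))).sum(axis=1)  # H1: signal+noise

fa = np.mean(T0 >= lam0)     # empirical false alarm rate, near alpha0
power = np.mean(T1 >= lam0)  # empirical power, near beta_theory
print(fa, power, beta_theory)
```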
6.6 Suggested Readings There is a rich body of literature on the subjects of statistical signal processing and mathematical statistics. A classic textbook on detection and estimation is by van Trees (1968). This book provides a good basic treatment of the subject, and it is easy to read. Since then, many books have been written. Textbooks written by Poor (1988) and Kay (1993, 1998) are the more popular ones. Poor’s book provides a fairly complete coverage of the subject of signal detection and estimation. Its presentation is built on the principles of mathematical statistics and includes some brief discussions of nonparametric and robust detection theory. Kay’s books are more relevant to signal processing applications though they also include a good deal of theoretical treatment in statistics. In addition, Kassam (1988)
6 Statistical Signal Processing
931 fT/H0 (t/H0)
fT/H1(t/H1)
β
α
T0
FIGURE 6.1 Probability Density Function of the Test Statistic β
offers a good understanding of the subject of signal detection in non-Gaussian noise, and Weber (1987) offers useful insights into signal design for both coherent and incoherent digital communication systems. If any reader is interested in learning more about mathematical statistics, Bickel and Doksum (1977) would be a good reference. It contains in-depth coverage of the subject. For quick references to the subject, however, a book by Silvey (1975) is a very useful one. It should be noted that studies of statistical signal processing cannot be effective without proper background in probability theory and random processes. Many reference books on that subject, such as Billingsley (1979), Kendall and Stuart (1977), Papoulis and Pillai (2002), and Stark and Woods (2002), are available.

FIGURE 6.2 An Example of Receiver Operating Curves (d2 > d1 > d0)

References
Bickel, P.J., and Doksum, K.A. (1977). Mathematical statistics: Basic ideas and selected topics. San Francisco: Holden-Day. Billingsley, P. (1979). Probability and measure. New York: John Wiley & Sons. Fisher, R.A. (1950). On the mathematical foundations of theoretical statistics. In R.A. Fisher, Contributions to mathematical statistics. New York: John Wiley & Sons. Haykin, S. (2001). Adaptive filter theory. (4th ed.) Englewood-Cliffs, NJ: Prentice Hall. Haykin, S. (Ed.). (1991). Advances in spectrum analysis and array processing. Vols. 1 and 2. Englewood Cliffs, NJ: Prentice Hall. Kassam, S.A. (1988). Signal detection in non-Gaussian noise. New York: Springer-Verlag. Kay, S.M. (1993). Fundamentals of statistical signal processing: Estimation theory. Upper Saddle River, NJ: Prentice Hall.
Kay, S.M. (1998). Fundamentals of statistical signal processing: Detection theory. Upper Saddle River, NJ: Prentice Hall. Kendall, M.G., and Stuart, A. (1977). The advanced theory of statistics, Vol. 2. New York: Macmillan Publishing. Papoulis, A., and Pillai, S.U. (2002). Probability, random variables, and stochastic processes. (4th ed.) New York: McGraw-Hill. Poor, H.V. (1988). An introduction to signal detection and estimation. New York: Springer-Verlag.
Yih-Fang Huang

Silvey, S.D. (1975). Statistical inference. London: Chapman and Hall. Stark, H., and Woods, J.W. (2002). Probability, random processes, and estimation theory. (3rd ed.) Upper Saddle River, NJ: Prentice Hall. van Trees, H.L. (1968). Detection, estimation, and modulation theory. New York: John Wiley & Sons. Weber, C.L. (1987). Elements of detection and signal design. New York: Springer-Verlag.
7 VLSI Signal Processing Surin Kittitornkun King Mongkut’s Institute of Technology Ladkrabang, Bangkok, Thailand
Yu-Hen Hu
Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA

7.1 Introduction ....................................................................................... 933
    7.1.1 VLSI Application Specific Processors . 7.1.2 VLSI and Signal Processing . 7.1.3 Chapter Overview
7.2 Algorithm to Hardware Synthesis ........................................................... 935
    7.2.1 DSP Common Characteristics . 7.2.2 Nested Do-Loop Algorithm . 7.2.3 Recurrent Algorithm . 7.2.4 SFG/DFG Optimization . 7.2.5 Bit-Level Algorithm
7.3 Hardware Implementation .................................................................... 942
    7.3.1 Implementation Technology . 7.3.2 Implementation Tools
7.4 Conclusion ......................................................................................... 946
    References .......................................................................................... 946
7.1 Introduction

The field of very large scale integrated (VLSI) signal processing concerns the design and implementation of signal processing algorithms using application-specific VLSI architecture, including programmable digital signal processors and dedicated signal processors implemented with VLSI technology. In this chapter, we survey important developments in this field, including algorithm design, architecture development, and design methodology. An implementation of a digital signal processing (DSP) algorithm consists of the computer program of that algorithm and the hardware on which the program is executed. In many signal processing applications, real-time processing is an essential requirement. Real-time implies that the results of a signal processing algorithm must be computed by a predefined deadline after the inputs are sampled. For example, in a cellular phone, the speech coding signal processing algorithm must be executed to match the speed of normal conversation. An implementation of a real-time signal processing application has three special characteristics: (1) Input signal samples are made available while the program is being executed. The computation cannot be started until the input signal samples are received. Copyright © 2004 by Academic Press. All rights of reproduction in any form reserved.
(2) Results must be computed before the prespecified deadline. When the real-time constraint is not met, the quality of service will be dramatically compromised. (3) A vast amount of operations must be computed. The data rate of a single Moving Picture Experts Group (MPEG-2) encoded video signal stream can easily exceed 20 million samples per second. On average, each sample will require tens or even hundreds of fixed-point or floating-point arithmetic operations to process. Multiple streams of video and audio signals often need to be processed simultaneously. Hence, signal processing algorithms are always computation-intensive. An efficient implementation of a real-time signal processing algorithm must be able to perform an extremely large amount of arithmetic operations within a short duration. In other words, it must sustain a high throughput rate. Signal processing is often found in embedded systems such as electrical appliances, where the user interacts with the system's main function instead of specific signal processing algorithms. For example, speech coding is regularly performed in cellular phones while users may never be aware of its existence. A signal processing algorithm can be implemented on a general purpose computer, a special purpose programmable
digital signal processor, or even dedicated hardware. The tasks of implementation involve algorithm design, code generation (programming), and architecture synthesis. With the same integrated circuit technology, a specialized hardware platform may offer better performance than general-purpose hardware by eliminating redundant operations and components. However, the design and manufacturing cost will be higher.
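The throughput requirement quoted above (20 million samples per second, tens to hundreds of operations per sample) translates into a concrete sustained-computation budget. A back-of-the-envelope sketch, where the per-sample operation count is an assumed, illustrative value:

```python
# Back-of-the-envelope real-time throughput budget for the MPEG-2 video
# example in the text. OPS_PER_SAMPLE is an assumed, illustrative figure.
SAMPLE_RATE = 20_000_000   # samples per second (from the text)
OPS_PER_SAMPLE = 100       # arithmetic operations per sample (assumed)

def required_ops_per_second(streams: int = 1) -> int:
    """Sustained arithmetic operations per second for real-time processing."""
    return streams * SAMPLE_RATE * OPS_PER_SAMPLE
```

Even a single stream under these assumptions demands 2 × 10⁹ operations per second, which is why specialized or parallel hardware is attractive despite its higher design cost.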
7.1.1 VLSI Application Specific Processors

In the early 1960s, most DSP algorithms, such as the fast Fourier transform (FFT), were implemented in Fortran programs running on a general-purpose mainframe computer. It could take hours to process a short 30-second speech. Obviously, general purpose computing systems were insufficient to meet the high throughput rate demanded by a real-time signal processing algorithm. Dedicated application-specific computing systems, however, were too expensive to be a realistic solution for most commercial signal processing applications. This situation changed in the mid-1970s. Quantum leaps in integrated circuit manufacturing technology led to the era of VLSI systems. By 1980, hundreds of thousands of transistors could be reliably and economically fabricated on a single silicon chip. With the transistor count per chip growing exponentially, it became quite clear that to manage the design complexity of VLSI circuits, IC design methodologies had to be revolutionized. In 1980, Mead and Conway championed the notion of structured VLSI design. In Mead and Conway's (1980) seminal book Introduction to VLSI Systems, the authors argued that a hierarchical design style exhibiting regularity and locality must be adopted to design millions of transistors on a single chip. A novel architecture called the systolic array was used as an example that satisfies all these requirements. This idea of structured VLSI design further inspired the concept of a silicon compiler, which, in analogy with the software compiler, would automatically generate a silicon implementation starting from a high-level description. These pioneering ideas stimulated many important developments in the IC industry, such as the proliferation of electronic design automation (EDA) tools, the popularity of semi-custom design styles (e.g., gate array and standard cell layout), and the availability of silicon foundry services.
By the mid-1980s, a new industry known as application-specific IC (ASIC) design started to thrive. Numerous chip sets for video coding, three-dimensional (3-D) audio processing, and graphic rendering have become available on the market at appealing costs.
7.1.2 VLSI and Signal Processing

The VLSI revolution impacted signal processing system architecture in a number of important ways:
• High speed: As the IC manufacturing technology evolves, the feature dimensions of transistors continue to shrink. Smaller transistors mean faster switching speed and, hence, higher clock rate. Faster processing speed means more demanding signal processing algorithms can now be implemented for real-time processing.
• Parallelism: Higher device density and larger chip area promise to pack millions of transistors on a single chip. This makes it feasible to exploit parallel processing to achieve an even higher throughput rate by processing multiple data streams concurrently. To fully exploit the benefit of parallel processing, however, the formulation of signal processing algorithms must be reexamined. Algorithm transformation techniques are also developed to exploit maximum parallelism from a given DSP algorithm formulation.
• Local communication: As device dimensions continue to decrease and chip area continues to increase, the cost of intercommunication becomes significant in terms of both chip real estate and transmission delay. Hence, pipelined operation with a local bus is preferred to broadcasting using global interconnection links. Compiler and code generation methods need to be updated to maximize the efficiency of pipelining.
• Low-power architecture: Smaller transistor feature size makes it possible to reduce the operating voltage and, thereby, significantly reduce the power consumption of an IC chip. This trend makes it possible to develop digital signal processing systems on portable or handheld mobile computers.
On the other hand, the stringent performance requirements and regular, deterministic formulation of signal processing applications also profoundly influenced the VLSI design methodology.
• High-level synthesis design methodology: The quest to streamline the process of translating a complex algorithm into a functional piece of silicon that meets the stringent performance and cost constraints has led to significant progress in the area of high-level synthesis, system compilation, and optimal code generation. Ideas such as dataflow modeling, loop unrolling, and software pipelining, which were originally developed for general purpose computing systems, have enjoyed great success when applied to aiding the synthesis of an application-specific signal processing system from a high-level behavioral description.
• Multimedia processing architecture: With the maturity and popularity of multimedia signal processing applications, general purpose microprocessors have incorporated special-purpose architecture, such as the multimedia extension instruction set (e.g., MMX). Signal processors also led the wave of novel architectural concepts such as the very long instruction word (VLIW) architecture. In fact, it is argued that incorporating multimedia features is the only way to sustain the exponential growth in performance through the next decade.
7.1.3 Chapter Overview

This chapter puts more emphasis on DSP algorithm-to-hardware synthesis and its hardware implementation. First, a DSP algorithm can be expressed as an n-level nested Do-loop, a recurrent equation, or a data flow graph (DFG). Next, one of these representations gets synthesized to its hardware counterpart. The hardware architecture is driven not only by the algorithm representation but also by the sampling rate of the input/output signals. Due to limited hardware resources, operations and data of a particular algorithm can be scheduled at the right time and assigned to the right execution unit, provided that the precedence and semantics are preserved. For a high-throughput application such as image/video processing, synthesis of a regular network of processing elements (PEs) is discussed in detail. Eventually, implementation technology and tools for a particular architecture are surveyed.
7.2 Algorithm to Hardware Synthesis

Other than a mathematical equation, a DSP algorithm has a variety of different representations, including a nested Do-loop algorithm, a recurrent algorithm, and a data/signal flow graph. A DSP algorithm can also be classified as either a terminating or a nonterminating program. Unlike the terminating program, a nonterminating one is specified to execute for an infinite amount of time.
7.2.1 DSP Common Characteristics

DSP systems implemented in either hardware or software-programmable DSP processors are characterized by the following common characteristics. First, data flow/real-time computing corresponds to its analog counterpart in that an output signal is a convolution of an input signal and the impulse response of a given analog linear time-invariant (LTI) system. The convolution is assumed to take infinitesimal time to compute. In a practical DSP system, a stream of output data is a discrete convolution sum of another stream of sampled/discretized input data and the impulse response of a discrete LTI system. Furthermore, the discrete convolution sum takes a finite amount of time to compute a useful datum (sampling time period). Hence, the notion of real-time computing is imposed. Such DSP systems may involve human interface, such as speech recognition for voice dialing or audio and video signal processing in Dolby Digital/DVD home theaters. Second, some DSP systems have an infinite run time. For instance, a particular communication system on either the transmitting or receiving end must process the transmitted/received signal to maintain a continuous data/audio/video stream, such as the backbone network of the Internet. Third, some DSP algorithms can be described in a nested Do-loop construct. This includes any vector/matrix multiplication (inner/outer product), correlation, and the like. A number of DSP algorithms can be written as a recursive equation: the infinite impulse response (IIR) filter, the least mean square/recursive least square (LMS/RLS) adaptive filter, and so on. Finally, DSP is applied in a variety of applications ranging from the seismogram to the communication system in the space shuttle/station. As a result, a variety of data types and sampling rates must be taken into consideration. Because of these DSP common characteristics, many DSP algorithms can be conveniently formulated into the form of nested Do-loops, perhaps with an infinite loop bound.
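The discrete convolution sum described above can be made concrete with a short, self-contained Python sketch — a direct O(N·M) implementation for illustration, not an optimized real-time kernel:

```python
def convolve_stream(x, h):
    """Discrete convolution sum y[n] = sum_k h[k] * x[n - k] of a sampled
    input stream x with the impulse response h of a discrete LTI system."""
    y = []
    for n in range(len(x) + len(h) - 1):
        acc = 0.0
        for k, hk in enumerate(h):
            if 0 <= n - k < len(x):   # only sum over valid input samples
                acc += hk * x[n - k]
        y.append(acc)
    return y
```

In a real-time setting each output sample y[n] must be produced within one sampling period, which is exactly the constraint that drives the hardware mapping techniques of the following sections.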
7.2.2 Nested Do-Loop Algorithm

The regular structure of nested Do-loop algorithms has been explored extensively for compilers of general-purpose parallel computing (Banerjee, 1993; Wolfe, 1996). With the help of the compiler, the algorithms can be executed according to particular scheduling and assignment to match the underlying architecture, such as SIMD, MIMD, and many others. In general, without the regular structure of loop indices, such scheduling and assignment are NP-hard. Similarly, the implementation of a nested Do-loop formulation of DSP algorithms is actually a ''mapping'' process consisting of task scheduling and assignment from abstract operations to a piece of hardware. In contrast to general-purpose computing, the underlying hardware is of the form of a programmable digital signal processor or an ASIC in an array-like structure. Since the introduction of the temporal hyperplane by Lamport (1974), the scheduling and assignment problem of nested Do-loop algorithms has been explored and developed into the so-called systolic array (Kung, 1982), exploiting regular array structure, temporal pipelining, and local communication. Along with the advance of VLSI technology, certain nested Do-loop algorithms can be scaled down and realized in a single chip. Thus, the nested Do-loop scheduling and assignment problem can be formulated as an algebraic projection. Each loop index in the index space will be projected onto the index of a regular array of processing elements (PEs), and their execution time is determined by a family of equitemporal hyperplanes, where all indices on the same hyperplane are executed simultaneously. Two different but closely related methodologies for processor array design are the dependence method (Moldovan, 1982; Rao and Kailath, 1988; Kung, 1988) and the parameter method (Li and Wah, 1985).
Both the dependence method and parameter method are based on the assumption that an algorithm is written in a uniform recurrence nested Do-loop, which can be depicted as a shift-invariant dependence graph (DG). In this section, we mainly focus on the dependence method design methodology because it is easy to visualize in multidimensional representation. Despite the fact that the parameter method establishes the relationship between velocity, distance,
and period of each variable throughout the course of parallel execution, it is not as popular as the dependence method. Nonetheless, the relationship between these two design methodologies has been elaborated by O'Keefe et al. (1992).

Space-Time Mapping
The dependence method can be illustrated by the notion of space-time mapping of this matrix–matrix multiplication. Multiplication of two 3 × 3 matrices, a and b, results in a 3 × 3 matrix c.
Example 1: Matrix–Matrix Multiplication

c = ab.

First, the multiplication is formulated as a three-level nested Do-loop as follows:

Do i = 1 to 3:
  Do j = 1 to 3:
    c[i, j] = 0
    Do k = 1 to 3:
      c[i, j] = c[i, j] + a[i, k] * b[k, j]
    EndDo k
  EndDo j
EndDo i

It can then be reformulated in single assignment format (Kung, 1982) as the following:

Do i = 1 to 3:
  Do j = 1 to 3:
    Do k = 1 to 3:
      a3[i, j, k] = a[i, k]          if j = 0
                  = a3[i, j-1, k]    if j > 0
      b3[i, j, k] = b[j, k]          if i = 0
                  = b3[i-1, j, k]    if i > 0
      c3[i, j, k] = 0                                           if k = 0
                  = c3[i, j, k-1] + a3[i, j, k] * b3[i, j, k]   if k > 0
      c[i, j] = c3[i, j, k],  k = 3
    EndDo k
  EndDo j
EndDo i

The a3, b3, and c3 are the 3-D versions of a, b, and c, respectively. □

From this example, a three-level nested Do-loop algorithm can be written in a single assignment format and a perfect loop nest. The innermost loop body is composed of a set of input/propagation, computing/initialization, and output statements, as shown in Figure 7.1. We denote J^n as the computing index space.

Do i1 = l1 to u1
  ...
  Do in = ln to un
    Input/Propagation Statements:
      vj^n[I] = vj[G_vj(I)]        if I ∈ I^I_vj
              = vj^n[I - d_vj^n]   if I ∈ I^P_vj
    Computation/Initialization Statements:                      (7.1)
      vk^n[I] = vk^n,Init                                if I ∈ I^N_vk
              = F_vk(vk^n[I - d_vk^n], vj^n[I], ...)     if I ∈ I^C_vk
    Output Statements:
      vk[G_vk(I)] = vk^n[I]        if I ∈ I^O_vk
  EndDo in
  ...
EndDo i1

FIGURE 7.1 n-Level Nested Do-Loop Algorithm

Definition 1 Computing Index Space J^n
The computing index space is a set of n-dimensional integer-valued vectors such that J^n = { I = (i1, i2, ..., in)^t | i1, i2, ..., in ∈ Z; l1 ≤ i1 ≤ u1, l2 ≤ i2 ≤ u2, ..., ln ≤ in ≤ un }, where a^t denotes the transpose of a. In other words, J^n ⊂ Z^n is an n-D polyhedron with integer-valued vertices. J^3 = { (i, j, k)^t | 1 ≤ i, j, k ≤ 3 } in the matrix–matrix multiplication example.

Definition 2 Generic n-Level Nested Do-Loop
An n-level nested Do-loop model is shown in Figure 7.1. Each iteration is indexed by a computing index (vector) I = (i1, i2, ..., in)^t. The loop body consists of three different kinds of statements:
• Input/Propagation: Let vj be an input variable, where vj^n is its corresponding n-D variable. At a particular input index I ∈ I^I_vj, vj[G_vj(I)] is assigned to vj^n by the index transformation function G_vj. Other than for this exception, vj^n is propagated along the direction of the propagation dependence vector (DV) d_vj^n at any index I ∈ I^P_vj.
• Computation/Initialization: The vk^n, an n-D version of variable vk, is computed as a recurrent function F_vk of itself and other input variables (e.g., vj^n and so on). The vector d_vk^n represents the operation dependence of the current vk^n. Most of the time, the computation is carried on at index I ∈ I^C_vk ⊂ J^n. During the initialization phase, vk^n is assigned to vk^n,Init at index I ∈ I^N_vk.
• Output: vk^n is output as vk by the index transformation function G_vk at index I ∈ I^O_vk.
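The input/propagation, computation, and output statements above can be exercised directly. A small Python sketch of Example 1's single-assignment recurrences, written with 0-based loop bounds so that the j = 0, i = 0, and k = 0 input/initialization cases actually fire; dictionary keys stand in for the 3-D variables:

```python
def matmul_single_assignment(a, b):
    """Single-assignment form of c = a*b: every a3/b3/c3 entry is written
    exactly once, mirroring the propagation/computation recurrences."""
    n = len(a)
    a3, b3, c3 = {}, {}, {}
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # input at the boundary, then propagate along j (for a) or i (for b)
                a3[i, j, k] = a[i][k] if j == 0 else a3[i, j - 1, k]
                b3[i, j, k] = b[k][j] if i == 0 else b3[i - 1, j, k]
                # initialize at k = 0, then accumulate along k
                acc = 0.0 if k == 0 else c3[i, j, k - 1]
                c3[i, j, k] = acc + a3[i, j, k] * b3[i, j, k]
    # output statement: c[i, j] is read out at the last k index
    return [[c3[i, j, n - 1] for j in range(n)] for i in range(n)]
```

Note the text loads the b entries in transposed order (b[j, k]); this sketch indexes b[k][j] directly so that plain row-major matrices multiply correctly.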
Note that each variable in the loop body follows the single assignment format. □

Based on the definition of an n-level nested Do-loop, there are three different kinds of variables: input, output, and intermediate.

Definition 3 Input Variable vj
For each input variable vj, its n-D correspondence is vj^n. It is inputted at I ∈ I^I_vj and propagated at I ∈ I^P_vj, where I^I_vj ∩ I^P_vj = ∅ and ∅ is the empty set.

Definition 4 Output Variable vk
For each output variable vk, its n-D correspondence is vk^n. It is initialized at I ∈ I^N_vk and computed at I ∈ I^C_vk, where I^C_vk ∩ I^N_vk = ∅. Its output value is defined at I ∈ I^O_vk. □

Definition 5 Intermediate Variable vk
For each intermediate variable vk, its n-D correspondence is vk^n. It is initialized at I ∈ I^N_vk and computed at I ∈ I^C_vk, where I^C_vk ∩ I^N_vk = ∅. Its output value is consumed only at I ∈ I^C_vk. □

In matrix–matrix multiplication, a and b are input variables, while c is the output variable. The propagation dependence vectors of a and b are d_a3 = (0, 1, 0)^t and d_b3 = (1, 0, 0)^t, respectively. On the other hand, the true dependence vector of c is (0, 0, 1)^t. If all dependence vectors are plotted in the index space J^3, a 3-D DG is yielded in Figure 7.2(A). Each node represents a data flow graph (DFG) of multiplication and addition, as shown in Figure 7.2(B).

Definition 6 Dependence Graph
A dependence graph (DG) (Kung, 1988) is a directed graph G_DG = (V_DG, L_DG), where V_DG is a set of nodes in index space J^n and where L_DG is a set of edges connecting a pair of nodes i and j, i, j ∈ V_DG, if and only if the computing of node j depends on the result from node i. Note that these edges are also called dependence vectors, either due to propagation or computation. In other words, a DG is a graphical representation of an n-level nested Do-loop defined in Definition 2. Furthermore, any DG is computable if and only if it contains no loops or cycles. □

A DG with its detailed DFG in each node is considered a data flow graph, as it is defined here.

Definition 7 Data Flow Graph
A data flow graph is a directed graph G_DFG = (V_DFG, L_DFG), where V_DFG is a set of nodes representing operations or functions and where L_DFG is a set of edges connecting a pair of nodes,
FIGURE 7.2 (A) A 3 × 3 Matrix-Matrix Multiplication in a 3-D Dependence Graph. (B) Node (Loop Body) as a Data Flow Graph
i and j, i, j ∈ V_DFG. This indicates the flow of data, with or without delay element(s), as an output from node i to node j. □

The notion of linear space-time mapping is a transformation of the index space J^n onto an (n−1)-D space index space and a 1-D time index space. The space-time mapping matrix T consists of a scheduling vector s ∈ Z^n and a processor allocation matrix P ∈ Z^(n×(n−1)), where Z is the integer number space. In other words, an algorithm described as a multiple perfect loop nest can be projected onto an (n−1)-D array of PEs and a scalar-valued execution schedule with respect to the dependencies. The computing takes place in parallel at the allocated PEs according to the time (clock) schedule.
formed via space–time mapping to interprocessor links or edges and registers or delays, respectively. Interprocessor communication is carried out via an ~ evi edge that is pipelined with a number of delay elements or shift registers, r(~ evi ):
[ r(e_V) ; e_V ] = [ s^t ; P^t ] D_V = T D_V.
(7:7)
In equation 7.7, the delay vector is as follows:

r(e_V) = [ r(e_v1)  r(e_v2)  . . . ].

(7.8)
The edge matrix is as follows:

e_V = [ e_v1  e_v2  . . . ].
ev1 ~ e v2 eV ¼ ½~ (7:2)
where ~ s and P are linearly independent. In other words, Rank (T ) ¼ n. The schedule-allocation matrix is another representation of the original index space in the computing schedule (time) and processor allocation (space) domains. The computing schedule t(J n ) Z n and processor allocation J 1 Z can be obtained by:
t n ~ t(J n ) sJ ¼ ¼ T J n, J n1 Pt J n
(7:3)
where J n1 is the (n 1) D index spaces, respectively. In other words, the matrix T maps the execution index ~ q 2 J n to t processor P ~ q at time step (clock) t(~ q). Due to the shift-invariant DGj all dependence vectors associated with any DG node can be represented in a dependence matrix, DV : DV ¼ [Dv1
D v2
. . . ],
(7:4)
where variables vi 2 V and V represent a set of all variables in the algorithm. The Dvi is recursively defined as a set of dependence vectors associated with variable vi : h i di2 . . . , D vi ¼ ~ di1 ~ (7:5) where ~ dij 2 Z n is a dependence vector j of variable vi . It is obvious that all the dependencies in every node in Figure 7.2(A) are captured by this dependence matrix:
DV ¼ ~ da3 ~ dc3 db3 ~
2
1 ¼ 40 0
0 1 0
3 0 0 5: 1
(7:6)
Similarly, the mapping can be applied to the dependence matrix to obtain the delay-edge matrix. The DV is trans-
. . . :
(7:9)
The delay-edge matrix describes the global structure of the resulting PE array regardless of the functionality inside each PE. This structure is called a signal flow graph and is mainly used to explain algorithms in DSP. Definition 9 Signal Flow Graph A signal flow graph (SFG) (Kung, 1988) shown by G ¼ (V, L) is a directed graph consisting of a set of vertices, (PEs) V and a set of edges (links) L. Each variable v 2 V has its own set of edges Lv such that an edge lv 2 Lv associated with r(ev ) delay elements of direction ev connect two PEs vi and vj , where vi , vj 2 V. Based on shift-invariant DG, the resulting SFG after space–time mapping is considered structurally time-invariant in which only edge structures remain invariant. & Partitioning If the number of processors is limited and much less than the number of PEs required for a particular algorithm, the DG must be partitioned and mapped to fit that fixed size array. Hwang and Hu (1992) have proposed a two-level locally sequential globally parallel (LSGP) scheduling to reduce the overall computing time. The DG is first partitioned into a set of disjoint blocks. Those blocks will be allocated to available processors and scheduled with a two-level schedule. The firstlevel sequential schedule is obtained from applying permissible loop transformations. The second-level parallel schedule is formulated as an integer programming problem subject to the total computation latency. An interblock schedule is applied to determine the relative timing of each block’s computation and satisfy the precedence constraints imposed by dependence vectors across the block boundaries. Partitioning the same DG to several blocks is considered a special case of multistage systolic mapping MSSM (Hwang and Hu, 1992), where each stage is characterized by a different DG of a different nested Do-loop algorithm. It can be viewed as an locally parallel globally parallel (LPGP). 
This is a scheduling problem for parallel processing of a complex operation chain. After mapping consecutive stages, it may result in mismatched I/O operations.
Therefore, a systematic approach is proposed to reduce data communication overhead and overlap part of the computation in successive stages to minimize performance degradation.

Lower-D Mapping
In this section, the dependence method or space–time mapping will be extended to map an n-D DG to the final 1-D or 2-D SFG. The earliest attempt to map an n-D DG to an (n−2)-D clock schedule and a 2-D PE array using only a single space–time mapping matrix was proposed by Wong and Delosme (1985). The resulting multidimensional clock must be linearized to a scalar value to reduce implementation cost. This idea of mapping to a lower dimension is similar to multiprojection (Kung, 1988), which is equivalent to a sequence of consecutive space–time mappings. Later efforts to extend space–time mapping to a k-D array, where 1 ≤ k < n, are reported by Lee and Kedem (1988, 1990), Shang and Fortes (1992), and Zimmerman (1996). Similarly, the generalization of the parameter method, called the general parameter method (Ganapathy and Wah, 1992), is formulated for a lower-D array as well.

Definition 10 One-Dimensional Space–Time Mapping
Contrary to the traditional space–time mapping, the transformation matrix consists of a scheduling vector s and a linear allocation vector P1 as follows:

T1 = [ s^t ; P1^t ] = [ s1 s2 ... sn ; p1 p2 ... pn ],    (7.10)

where T1 ∈ Z^(2×n) and si, pi ∈ Z are relatively prime. Furthermore, s and P1 must be linearly independent so that Rank(T1) = 2. □

Similar to traditional mapping, the schedule–allocation matrix can also be obtained from the low-D space–time mapping matrix T1. The computation schedule t(J^n) ⊂ Z and processor allocation J^1 ⊂ Z are as follows:

[ t(J^n) ; J^1 ] = [ s^t J^n ; P1^t J^n ] = T1 J^n.    (7.11)

In other words, the matrix T1 maps the execution of index q ∈ J^n at processor P1^t q at time step (clock tick) t(q). This 1-D array mapping can be applied to obtain the delay-edge matrix as shown earlier in equation 7.7.
Design Objectives
Unlike the parameter method and the general parameter method, the space–time mapping has closed-form relationships
of design objective functions. For instance, the total execution time, te, can be obtained from:

te = tcycle × Ncycle,
(7:12)
where tcycle is the propagation delay of the critical path in each SFG node and where Ncycle is the number of clock cycles. The Ncycle is the number of steps between the first and the last computation indices (Kung, 1988), which is given by:

Ncycle = max{ s^t (p − q) : p, q ∈ J^n } + 1.
(7:13)
In other words, Ncycle is the number of cuts by the temporal hyperplane. In addition to minimizing only Ncycle, Wong and Delosme (1992) have proposed to minimize both tcycle and Ncycle. Although Ncycle seems to depend on the scheduling vector s only, the questions of how many PEs are used and how the data are delivered to the right PE still remain. In the 1-D or linear array mapping, the number of PEs, NPE, can be obtained explicitly as follows:

PEmax = max{ P1^t q | q ∈ J^n },    (7.14)

PEmin = min{ P1^t q | q ∈ J^n }.    (7.15)
As a result, the number of PEs, NPE, is expressed as:

NPE = PEmax − PEmin + 1.
(7:16)
In other words, NPE is the number of distinct projections of the index space J^n on the vector P1.

Example 2: 4 × 4 Matrix–Matrix Multiplication
A 4 × 4 matrix–matrix multiplication (Lee and Kedem, 1988) DG is scheduled and allocated to a linear or 1-D array of PEs by this 1-D space–time mapping matrix:

T1 = [ s^t ; P1^t ] = | 2  1  3 |
                      | 1  1  1 |
(7:17)
As a result, the array is shown in Figure 7.3. It takes 10 cycles for the data to fill the pipeline and Ncycle = 19 cycles to finish the execution. The maximum PE utilization is 60%. This is due to the constraint that I/O is allowed on either end of the array. □
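Example 2's numbers can be checked mechanically. A sketch assuming a 1-based 4 × 4 × 4 index space and the mapping s = (2, 1, 3)^t, P1 = (1, 1, 1)^t from equation 7.17 (the figure numbers its PEs from −3 to 6, the same count under a shifted allocation):

```python
from itertools import product

# 1-D space-time mapping from Example 2 (equation 7.17).
s = (2, 1, 3)      # scheduling vector s^t
P1 = (1, 1, 1)     # linear allocation vector P1^t

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# 4 x 4 x 4 computing index space J^3 (1-based loop bounds assumed).
J3 = list(product(range(1, 5), repeat=3))
times = [dot(s, q) for q in J3]          # t(q) = s^t q
pes = [dot(P1, q) for q in J3]           # PE(q) = P1^t q

N_cycle = max(times) - min(times) + 1    # equation 7.13
N_PE = max(pes) - min(pes) + 1           # equation 7.16
```

This reproduces Ncycle = 19 from the text and NPE = 10 processing elements.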
D 2-D 3-D
c = c+ a*b 2
D 2-D 3-D
c = c+ a*b 3
D 2-D 3-D
c = c+ a*b 4
D 2-D 3-D
c = c+ a*b 5
D 2-D 3-D
c = c+ a*b 6
FIGURE 7.3 Signal Flow Graph of 4 4 Matrix–Matrix Multiplication. PE number ranges are from 3 to 6 (Lee and Kedem, 1988).
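The cost measures of equations 7.13 through 7.16 can be checked numerically. The sketch below is a hypothetical illustration: it assumes the index space of the 4 × 4 matrix–matrix multiplication is J^3 = {0, …, 3}^3 and uses the scheduling and allocation vectors s = (2, 1, 3), P1 = (1, 1, −1) as reconstructed from equation 7.17.

```python
from itertools import product

# Assumed mapping vectors (reconstructed from equation 7.17).
s = (2, 1, 3)
P1 = (1, 1, -1)

def dot(v, q):
    return sum(vi * qi for vi, qi in zip(v, q))

J = list(product(range(4), repeat=3))   # all computation indices q in J^3
times = [dot(s, q) for q in J]          # t(q)  = s^T q   (equation 7.11)
pes   = [dot(P1, q) for q in J]         # PE(q) = P1^T q  (equation 7.11)

# Ncycle = max over p, q of s^T (p - q) + 1   (equation 7.13)
N_cycle = max(times) - min(times) + 1
# NPE = PEmax - PEmin + 1                      (equations 7.14-7.16)
PE_max, PE_min = max(pes), min(pes)
N_PE = PE_max - PE_min + 1

print(N_cycle, N_PE, PE_min, PE_max)    # 19 10 -3 6
```

This reproduces the figures quoted in Example 2: 19 cycles and a linear array of 10 PEs numbered −3 through 6.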
940
Surin Kittitornkun and Yu-Hen Hu
7.2.3 Recurrent Algorithm
In the previous section, an n-level nested Do-loop can be regarded as a terminating program. In this section, a nonterminating recurrent program is taken into consideration. Examples of this kind of algorithm include IIR and adaptive filters. These algorithms are characterized by a current output that is defined by a given function of the previous outputs and the current input. As such, the validity of the output depends on how often its computation is initiated, called the initiation period. Two optimization techniques, loop unfolding and look-ahead transformation, are discussed next.

Loop Unfolding
Loop unfolding exploits the interiteration precedence, or so-called loop-carried dependence. After the loop is unfolded, the new initiation period may be reduced. For example, a nonterminating program is shown in Algorithm 1.

Algorithm 1: A Nonterminating Recurrent Program
Do n = 1 to ∞:
  aa(n) = faa[aa(n−1), ca(n−1), ba(n)]
  ab(n) = fab[aa(n−1), ca(n−1), ba(n)]
  ba(n) = fba[ab(n−1)]
  bc(n) = fbc[ab(n−1)]
  ca(n) = fca[bc(n)]
EndDo n

Here xy(n) denotes an arc (data) flowing from node x to node y at iteration n, and fxy denotes a definition of xy(n). The corresponding DFG is illustrated in Figure 7.4. After the loop body is unfolded by a factor of two, the resulting program is listed here:

Do n = 1 to ∞:
  aa(2n+1) = faa[aa(2n), ca(2n), ba(2n+1)]
  ab(2n+1) = fab[aa(2n), ca(2n), ba(2n+1)]
  ba(2n+1) = fba[ab(2n)]
  bc(2n+1) = fbc[ab(2n)]
  ca(2n+1) = fca[bc(2n+1)]
  aa(2n+2) = faa[aa(2n+1), ca(2n+1), ba(2n+2)]
  ab(2n+2) = fab[aa(2n+1), ca(2n+1), ba(2n+2)]
  ba(2n+2) = fba[ab(2n+1)]
  bc(2n+2) = fbc[ab(2n+1)]
  ca(2n+2) = fca[bc(2n+2)]
EndDo n

As discussed by Parhi (1989), loop unfolding can reduce the iteration period in a multiprocessor implementation. For a perfect-rate DFG, however, the iteration period is bounded and cannot be reduced further. The schedule associated with a perfect-rate DFG is called a rate-optimal schedule.

Look-Ahead Transformation
Loop unfolding can only improve the iteration period but cannot achieve an iteration period less than the iteration bound of the algorithm. To lower the iteration bound, the look-ahead transformation is devised such that the nth iteration's output can be expressed as a function of the previous n − k iteration outputs and the necessary inputs. The program shown here is a terminating first-order recurrent program:

Do n = 0 to 7
  y(n+1) = a(n)y(n) + x(n)
EndDo n

The look-ahead transformation can be illustrated by a back substitution as follows:

    y(8) = a(7)y(7) + x(7)
         = a(7)[a(6)y(6) + x(6)] + x(7)
         = y(6)a(7)a(6) + x(6)a(7) + x(7).        (7.18)

For example, the last element y(8) can be obtained directly from y(6), x(6), and x(7). The term y(6)a(7)a(6) + x(6)a(7) is called the overhead of this transformation. As we keep substituting, the final result becomes:

    y(8) = y(0) ∏_{i=0}^{7} a(7−i) + x(0) ∏_{i=0}^{6} a(7−i) + … + x(5) ∏_{i=0}^{1} a(7−i) + x(6)a(7) + x(7).        (7.19)

After the look-ahead transformation, a new DFG of this algorithm is obtained. The iteration period of the new DFG may be less than that of the original DFG without look-ahead.
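The look-ahead identity of equation 7.19 can be verified numerically. The minimal sketch below (hypothetical coefficient and input values) compares the step-by-step recursion y(n+1) = a(n)y(n) + x(n) against the back-substituted closed form:

```python
from math import prod  # Python 3.8+

# Hypothetical data for the terminating first-order recurrence.
a = [0.9, 1.1, 0.8, 1.2, 0.95, 1.05, 0.7, 1.3]
x = [1.0, -2.0, 0.5, 3.0, -1.5, 2.5, 0.25, -0.75]
y0 = 4.0

# Direct recursion: y(n+1) = a(n)*y(n) + x(n), n = 0..7.
y = y0
for n in range(8):
    y = a[n] * y + x[n]
y_direct = y

# Back-substituted closed form (equation 7.19):
# y(8) = y(0)*a(7)...a(0) + sum_k x(k)*a(7)...a(k+1).
# Note prod(a[8:]) == 1, so the k = 7 term is just x(7).
y_lookahead = y0 * prod(a) + sum(x[k] * prod(a[k + 1:]) for k in range(8))

assert abs(y_direct - y_lookahead) < 1e-9
```

All the coefficient products in the closed form can be evaluated in parallel, which is how look-ahead lowers the iteration bound at the cost of the extra overhead terms.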
FIGURE 7.4 A Data Flow Graph of Algorithm 1 (nodes a, b, and c, with delay elements D on the feedback arcs).

7.2.4 SFG/DFG Optimization
In this subsection, the relationship between multiprocessor implementation of a recurrent algorithm and VLSI implementation of the space–time mapping is established. After the SFG/DFG is obtained, some optimization techniques, such as cut-set/retiming and geometric transformation, can be applied.

DG/SFG Versus DFG
As in Definition 6, the DG with its functional DFG in each DG node can be perceived as a complete DG (Kung, 1988) or a DFG of a nested Do-loop algorithm. On the other hand, each DFG node describing a recurrent algorithm in Section 7.2.3 corresponds to a set of operations or functions in a processor. During the space–time mapping, each DG node or computation index is allocated and scheduled to execute at a specific SFG node. In parallel, a multiprocessor implementation of a recurrent DFG can be obtained by scheduling and assignment strategies, such as the one proposed by Wang and Hu (1995). As a result, several DG nodes correspond to an SFG node (PE), while several DFG nodes can be assigned to the same processor.

In a fully systolic SFG, each edge is associated with register(s). The clock cycle time of each SFG node is determined by its critical path. The delay of the critical path is determined by the maximum computation delay on a zero-register path. On the other hand, a recurrent algorithm is constrained by its iteration period, analogous to a clock period. Other than minimizing the iteration period or cycle time, both space–time mapping and multiprocessor implementation try to minimize the number of PEs/processors, respectively.

Due to some undesirable properties of the SFG (e.g., long zero-delay links, broadcasting links, dimensionality or geometry), the following strategies have been exploited to reorganize the SFG in such a way that the result is more suitable for VLSI implementation.

Cut-Set and Retiming
The primary methods of SFG optimization employed are cut-set and retiming (Kung, 1988). The objective is to convert an SFG into a fully pipelined form so that all the edges between modular sections have at least one delay element.
A cut-set is a minimal set of edges including the target edge, non-zero-delay edges going in either direction, and zero-delay edges going in the same direction as the target edge. The procedure consists of these three steps:

1. Partitioning cuts the DFG/SFG nodes into two sets to achieve a cut-set.
2. Delay elements associated with a cut-set are all scaled by a positive integer.
3. A number of delay elements are transferred between inbound and outbound edges with respect to the partition in such a way that there remains at least one delay element.
Retiming was first proposed by Leiserson et al. (1983). It is equivalent to moving the delays around in the DFG such that the total number of delays in any loop remains unaltered. Retiming can be applied to a single-rate DFG to reduce the iteration period in a programmable multiprocessor implementation. A rate-optimal schedule cannot be guaranteed, however, and retiming cannot reduce the iteration period of a perfect-rate DFG (Parhi et al., 1989). Very similar to retiming, a delay distribution methodology is presented for FIR and IIR digital filters (Chen and Moricz, 1991). The optimization is performed on a DFG/SFG that represents a 1-D algorithm and is then generalized to an n-D one. The strategies consist of balancing, rescaling, distribution, factorization, and inversion.

Geometric Transformation
A recent effort to transform a 1-D SFG to a 2-D SFG was exhibited in Yeo and Hu (1995) and Kittitornkun and Hu (2001) by making use of a spiral and long interconnect. Both arrays were proposed for full-search block-matching motion estimation, which is one of the most computation-intensive tasks in real-time digital video coding.
7.2.5 Bit-Level Algorithm
In digital communication, a huge amount of information is transformed into a series of data bits by source coding (e.g., audio and video coding) and various forms of channel coding (e.g., error correction) prior to transmission. Therefore, every bit of received data is so important that many algorithms have been proposed and developed to save communication bandwidth and improve error resilience. Such algorithms include variable-length coding at the source coding level and Viterbi decoding at the channel coding level.

Variable Length Coding
The most well-known and widely used variable-length coding (VLC) is the Huffman code. The coding part can be easily implemented as a look-up table to achieve a high output rate. On the other hand, the decoding end can be implemented in a number of VLSI architectures to satisfy different input/output data rates, as surveyed by Chang and Messerschmitt (1992). The VLSI architectures can be classified as tree-based and programmable logic array-based (PLA-based). The tree-based architecture is a direct map of the Huffman binary tree. The average decoding throughput of the pipelined version is in the range of [Lmin, Lmax] bits/cycle, where Lmin and Lmax are the minimum and maximum code lengths, respectively. This rate is apparently higher than the 1 bit/cycle of the sequential tree-based architecture. A PLA in the PLA-based architecture functions as a look-up table storing the states of a finite state machine (FSM). Each state corresponds to either an internal node of the Huffman tree or a decoded output symbol. The design can be constrained to the following conditions:

• Constant input rate at N bits/cycle: An N < Lmax-bit input is buffered and used to look up the next state, the decoded symbol, as well as the shift amount, if it is found from a PLA.
• Constant output rate of one symbol per clock cycle: The architecture resembles the constant-input-rate one. A longer bit string, N ≥ Lmax, however, is used to look up a decoded symbol and its shift amount. The bit string will be shifted by a barrel shifter by the corresponding shift amount.
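The sequential tree-based decoder described above (1 bit/cycle) amounts to walking the Huffman tree one bit per cycle, emitting a symbol at each leaf and restarting at the root. A minimal sketch with a hypothetical code table:

```python
# Hypothetical prefix-free (Huffman-style) code table.
code = {"a": "0", "b": "10", "c": "110", "d": "111"}

# Build the binary decoding tree: internal nodes are dicts, leaves are symbols.
root = {}
for sym, bits in code.items():
    node = root
    for bit in bits[:-1]:
        node = node.setdefault(bit, {})
    node[bits[-1]] = sym

def decode(bitstream):
    """One bit per 'cycle': descend the tree; emit at a leaf, reset to root."""
    out, node = [], root
    for bit in bitstream:
        node = node[bit]
        if isinstance(node, str):   # leaf reached: a decoded symbol
            out.append(node)
            node = root
    return "".join(out)

encoded = "".join(code[s] for s in "abcdba")
assert decode(encoded) == "abcdba"
```

The pipelined and PLA-based architectures trade this one-bit-at-a-time walk for wider look-ups, which is where the [Lmin, Lmax] bits/cycle throughput comes from.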
Viterbi Algorithm
Convolutional coding has been one of the most widely used error corrections in digital wireless communication. Therefore, the Viterbi decoding algorithm must be implemented efficiently in a pipelined/systolic fashion. A (K, R) convolutional coding scheme can be described by an FSM, where K is the constraint length and R is the code rate. Similar to a Mealy FSM, there are a total of 2^K coding states, where the output bits and the next state depend on the current state and an input bit. Among many VLSI implementations, a modified trace-back Viterbi algorithm (Truong, 1992) has been proposed to save register storage. The survivor path is traced back through the trellis diagram and used to determine the decoded information bit in a window of length 5K. Besides, a 1-Gb/s K = 2 convolutional code sliding block Viterbi decoder has been reported (Black and Meng, 1997). It is designed for high-throughput magnetic and optical storage channels. Both backward and forward processing exist, with a total window length of M = 2L, where L = 5K. However, L = 3 is chosen as a performance/complexity trade-off. Thus, each processing unit uses an L-stage trace-back decoding unit. Its fully systolic architecture results in a very high throughput of 2Lfclk = 2 × 6 × 83 ≈ 1 Gb/s, where fclk = 83 MHz. Each pipeline stage is an add-compare-select (ACS) unit. The fabricated chip size is 9.21 × 8.77 mm² in a 1.2-μm double-metal CMOS technology.
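The ACS recursion at the heart of such decoders can be sketched in software. The toy below uses a hypothetical K = 3, R = 1/2 code with generators 7 and 5 (octal) — not the K = 2 code of the cited chip — encodes a bit stream, and recovers it by hard-decision Viterbi decoding with full trace-back:

```python
K = 3                       # constraint length; 2**(K-1) trellis states
NS = 1 << (K - 1)

def step(state, b):
    """One encoder step: (output bit pair, next state) for generators 7, 5."""
    s1, s0 = (state >> 1) & 1, state & 1
    out = (b ^ s1 ^ s0, b ^ s0)          # g0 = 111, g1 = 101 (binary)
    return out, (b << 1) | s1

def encode(bits):
    state, out = 0, []
    for b in bits + [0] * (K - 1):       # append flush bits
        o, state = step(state, b)
        out.extend(o)
    return out

def viterbi_decode(rx):
    INF = float("inf")
    metric = [0.0] + [INF] * (NS - 1)    # start in state 0
    paths = [[] for _ in range(NS)]
    for i in range(0, len(rx), 2):
        r0, r1 = rx[i], rx[i + 1]
        new_metric, new_paths = [INF] * NS, [None] * NS
        for s in range(NS):
            if metric[s] == INF:
                continue
            for b in (0, 1):             # add-compare-select over both branches
                (o0, o1), nxt = step(s, b)
                m = metric[s] + (o0 != r0) + (o1 != r1)
                if m < new_metric[nxt]:
                    new_metric[nxt], new_paths[nxt] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(NS), key=lambda s: metric[s])
    return paths[best][:-(K - 1)]        # drop the flush bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
assert viterbi_decode(coded) == msg      # error-free channel
coded[3] ^= 1                            # single channel bit error
assert viterbi_decode(coded) == msg      # corrected by the decoder
```

In hardware, each trellis state maps to one ACS unit per pipeline stage, and the list-based survivor paths would be replaced by the register-exchange or trace-back storage discussed above.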
7.3 Hardware Implementation
DSP application-specific integrated circuits (ASICs) are mostly implemented in digital CMOS circuit technology due to its several advantages and the maturity of the fabrication technology. Other than an ASIC design in a large volume, field-programmable gate array (FPGA) and reconfigurable computing (RC) have been invented to amortize the cost in low to medium volumes. As applications advance toward multiprotocol mobile/wireless computing and communication, RC may emerge as an alternative to hard-wired ASIC. In this section, we put the emphasis on current state-of-the-art FPGA and RC implementation technology and tools.

7.3.1 Implementation Technology
FPGA
A custom computing system (CCS) is an array of interconnected FPGA chips hosted by a general-purpose computer. A CCS can be perceived as a dynamically reconfigurable data path for special-purpose tasks that cannot be efficiently implemented in software. There have been several examples of CCSs, including CHAMP (Patriquin and Gurevich, 1995), the national adaptive processing architecture (NAPA) (Rupp et al., 1998), and Splash-2 (Buell et al., 1996).

• CHAMP is a system of 8 PEs interconnected in a 32-bit ring topology. Each PE consists of two Xilinx XC4013 FPGAs with a dual-port memory system.
• NAPA's aim is to develop a teraoperations/second-class computing system. Its emphasis is on algorithmic data path and function generation.
• Splash-2 is a large system of up to 16 array boards, each containing 17 Xilinx XC4010 FPGA chips. Splash-2 is hosted by a SPARCstation 2 and connected through an SBus interface board.

Besides custom computing systems, the following are some DSP algorithms implemented in FPGA:

• A 1-D discrete Fourier transform (DFT) (Dick, 1996) systolic array of PEs with coordinate rotation digital computer (CORDIC) arithmetic is reported. Each PE consumes about 44 configurable logic blocks (CLBs) of the Xilinx XC4010PG191-4. As a result, each FPGA chip can accommodate 10 PEs. With a 15.3-MHz operating clock frequency, a 1000-point 1-D DFT takes only 51.45 ms to compute.
• A CORDIC arithmetic processor performing polar to Cartesian coordinate transformation is reported by Andraka (1998). The 52-MHz CORDIC processor features a pipelined 14-bit, 5-iterations-per-transformation design and occupies only 50% of a Xilinx XC4013E-2 FPGA.
• An 8 × 8-pixel 2-D discrete cosine transform (DCT) implementation is reported (Woods et al., 1996). The 2-D DCT is separated into two passes of 1-D DCT, each consisting of 12 bit-serial arithmetic multipliers and 32 adders/subtractors. The 2-D DCT consumes about 70% of a Xilinx XC6216 FPGA. At a maximum clock frequency of 38 MHz, it can achieve 30 frames/second at 720 × 480 pixels.
• A direct sequence spread spectrum (DSSS) RAKE receiver (Shankiti and Leeser, 2000) has been attempted on a Xilinx XC4028EX-3HQ240 FPGA chip. Each finger occupies 1000 CLBs, which is approximately one FPGA chip. With a 1-MHz clock, the system throughput can reach up to 4000 symbols/sec.
• A 1-D discrete wavelet transform (DWT) realization on FPGA is presented in Stone and Manolakos (1998). Both the high-pass and low-pass filters are six-tap biorthogonal binary filters. The system consumes 90% of an Altera EPF10K100GC503, whose capacity is equivalent to 100,000 gates. The operating clock frequency is 20 MHz.
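The CORDIC rotation used in the Andraka design above relies only on shifts and adds: each iteration rotates by ±atan(2^−i), and the accumulated step gain is compensated once at the end. The following is a floating-point software model of that idea — a sketch, not the fixed-point FPGA datapath:

```python
import math

def cordic_rotate(x, y, theta, iterations=14):
    """Rotate (x, y) by theta using shift-and-add CORDIC micro-rotations."""
    gain = 1.0
    for i in range(iterations):
        d = 1.0 if theta >= 0 else -1.0          # rotation direction
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        theta -= d * math.atan(2.0**-i)
        gain *= math.sqrt(1.0 + 2.0**(-2 * i))   # accumulated step gain
    return x / gain, y / gain                    # compensate the gain once

# Polar to Cartesian: r = 1, angle 45 degrees.
cx, cy = cordic_rotate(1.0, 0.0, math.pi / 4)
assert abs(cx - math.cos(math.pi / 4)) < 1e-3
assert abs(cy - math.sin(math.pi / 4)) < 1e-3
```

In an FPGA realization, the gain compensation is typically folded into a constant multiply (≈0.6073 for many iterations), and each iteration becomes one pipeline stage of adders and hard-wired shifts.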
TABLE 7.1 Summary of DSP Algorithms on FPGA Implementation

Design            fclk (MHz)   Space Util. (%)   Device
CORDIC            52.0         50                Xilinx XC4013
1-D DFT           15.3         10                Xilinx XC4010
2-D 8 × 8 DCT     38.0         70                Xilinx XC6216
1-D six-tap DWT   20.0         90                Altera EPF10K100
RAKE Receiver     1.0          90                Xilinx XC4028
In addition to the examples summarized in Table 7.1, several efficient forms of finite impulse response (FIR) filter implementation using distributed arithmetic (White, 1989) were reported by Goslin (1995). As discussed earlier, LUTs are one of the fundamental constructs in FPGA. There are a few more advantages of implementing DSP algorithms on FPGA:

• Efficient bit-level logic operations
• Pipelining using flip-flops and shift registers
• ROM- or LUT-based algorithms, such as the Huffman code or Viterbi decoder
• Bit-parallel arithmetic support and a dedicated multiplier unit in Xilinx Virtex II FPGA (Xilinx, 2000)

TABLE 7.2 Granularity Versus Architecture of Reconfigurable Computing

Granularity       Architecture
Fine-grained      DRLE
Multigrain        Pleiades
Coarse-grained    Garp, MATRIX, PipeRench, MorphoSys, REMARC
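The distributed-arithmetic FIR mentioned above replaces multipliers with a LUT indexed by one bit from each tap per cycle; the products are recovered by shift-and-accumulate over the bits of the input word. A sketch for unsigned samples (hypothetical coefficients; two's-complement sign handling omitted):

```python
def da_fir(coeffs, samples, bits=8):
    """One FIR output via distributed arithmetic (unsigned inputs)."""
    n = len(coeffs)
    # LUT holds every possible sum of a subset of the coefficients:
    # address bit j selects whether coefficient j is included.
    lut = [sum(c for j, c in enumerate(coeffs) if (addr >> j) & 1)
           for addr in range(1 << n)]
    acc = 0
    for b in range(bits):                 # bit-serial, LSB first
        addr = 0
        for j, x in enumerate(samples):   # gather bit b of every tap
            addr |= ((x >> b) & 1) << j
        acc += lut[addr] << b             # shift-and-accumulate
    return acc

coeffs, samples = [3, 1, 2], [5, 7, 2]
direct = sum(c * x for c, x in zip(coeffs, samples))
assert da_fir(coeffs, samples) == direct  # 26
```

In an FPGA, the 2^n-entry LUT maps directly onto the logic-cell LUTs, which is why this structure is singled out in Goslin's application note.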
Reconfigurable Computing
Reconfigurable computing (RC) has a variety of hardware architectures as well as applications. In the next two sections, existing RC architectures and their algorithm-to-hardware mapping are surveyed. Inspired by the 90/10 locality rule, in which a program executes about 90% of its instructions in 10% of its code, HW/SW partitioning is influenced by the following facts:

1. The higher the percentage of run time of specialized computation, the more improvement in cost/performance if it is implemented in hardware.
2. The more the specialized computation dominates the application, the more closely the specialized processor should be coupled with the host processor.

These rules of thumb have been successfully applied to the floating-point unit as well as to RC. The basic principle is that the hardware programmability of RC is combined with the general-purpose processing capability of a RISC microprocessor. The interface between these two can be either closely or loosely coupled depending on the application. How frequently an RC's functionality should be reconfigured must be determined based on the performance criteria of the particular HW/SW architecture. Unlike the long configuration latency of the FPGA, RC is attributed by its low-latency dynamic configurability.

Existing RC architecture models are mostly of the form of a 1-D or 2-D array of configurable logic cells interconnected by programmable links and switches. The array communicates with the outside world through peripheral I/O cells. The architecture can be classified by the processing capacity of each logic cell at different granularity levels. The architecture of a configurable cell, either fine-grained or coarse-grained, is mainly driven by either general-purpose or signal/image processing computing. Its configuration or context can be stored locally as in SRAM-based FPGA. Instead of being off-chip and loosely coupled to the host processor, RC is moving toward an on-chip coprocessor as the technology advances to SOC. The following are some of the existing RC systems, classified according to their granularity and application in Table 7.2.

• A fine-grained configurable cell performs simple logic functions along with some more complex functions (e.g., a fast carry look-ahead adder). A perfect example of this architecture is the dynamically reconfigurable logic engine (DRLE) (Nishitani, 1999). It is capable of real-time reconfiguration with several layers of configuration tables. An experimental chip is composed of a 4 × 12 array of configurable cells. Each cell can realize two different logic operations equivalent to 4-bit input to 1-bit output or 3-bit input to 2-bit output. Up to eight different configurations can be locally stored in each memory cell. With 0.25-micron CMOS technology, the chip contains 5.1 million transistors in a 10 × 10-mm² die area and consumes 500 mW at 70 MHz.
• A multigrain configurable cell, such as Pleiades (Zhang et al., 2000; Rabaey et al., 1997; Abnous and Rabaey, 1996; Wan et al., 1999), is a multigrain heterogeneous system partitioned between control-flow computing on a microprocessor and data-flow computing on RC for future wireless embedded devices. An RC array is composed of satellite (configurable) processors and a programmable interconnection to the main microprocessor. Data-flow computing is implemented using globally asynchronous and locally synchronous clocking to reduce overhead. Therefore, an operation starts only when all input data are ready.
• A coarse-grained configurable cell performs more complex word-parallel arithmetic functions, such as addition and multiplication, along with simple bit-level logic functions. Mostly, this RC architecture is organized as a 2-D array of nanoprocessors with an arithmetic and logic unit (ALU), a multiplier, an instruction RAM, a data RAM, data registers, input registers, and an operand register. Reported architecture examples include Garp (Callahan et al., 2000), PipeRench (Goldstein et al., 1999), the Reconfigurable Multimedia Array Coprocessor (REMARC) (Miyamori and Olukotun, 1998), MATRIX (Mirsky and DeHon, 1996), and MorphoSys (Lu et al., 1999).
7.3.2 Implementation Tools
In this subsection, we focus on architecture-driven synthesis, in particular regular array synthesis. Such array architecture is suitable for both FPGA and reconfigurable computing platforms.

Architecture-Driven Synthesis
One of the well-known DSP synthesis tools is Cathedral I/II/III (De Man et al., 1990, 1988; Note et al., 1991). A DSP algorithm is described as a graph with operators as vertices and with arcs representing the flow of data in a language called SILAGE. The bit-true specification of all signal formats is specified down to the bit level, including quantization and overflow characteristics. Since Cathedral is characterized as an architecture-driven synthesis tool, it supports the following architectural styles. Each architectural style is categorized by sample rate.

1. Hard-wired bit-serial architecture, supported in Cathedral I, is suitable for low-sample-rate filters in audio, speech, and telecommunication applications. Because signals are processed bit by bit, least significant bit (LSB) first, the amount of logic is independent of the word length.
2. Microcoded processor architecture, supported by Cathedral II, is appropriate for low- to medium-rate algorithms with heavy decision making and intensive computations. Each processor is a dedicated interconnection of bit-parallel execution units controlled by branch controllers (e.g., highly complex, block-oriented DSP algorithms in the audio- to near-video-frequency range).
3. Bit-sliced multiplexed data paths, supported in Cathedral III, are suitable for medium-level image, video, and radar processing with 1–10 MHz data rates and irregular and recursive high-speed algorithms. The data path consists of interconnected dedicated application-specific units such as registers, adders, multipliers, and so on.
4. Regular array architecture, which would be supported in Cathedral IV, is suited for front-end modules of image, video, and radar processing. A regular network of PEs features mostly localized communication and distributed storage. If it is fully pipelined and modular, the array corresponds to the so-called systolic architecture.

In the next subsection, the focus is shifted to a more specific architecture (i.e., the regular array architecture or systolic array mapped from an n-level nested Do-loop).

Regular Array Architecture Synthesis
The design flow of traditional systolic synthesis consists of the following steps, as illustrated in Figure 7.5.

1. DG capturing: A DG is captured by means of either a description language or graphical input. It is composed of nodes representing computation and edges representing dependencies between connected nodes. Nodes are organized in such a way that they are located on integer grids. In addition, information such as bit width and variable type can be associated with the graph.
2. Space–time mapping: This is a process of allocating and scheduling a number of DG nodes to a (usually smaller) number of PEs with respect to their dependencies. The allocation and scheduling can be either a linear or a nonlinear function of the node indices. The result of space–time mapping is an SFG with the desired objective cost function subject to design criteria.
3. SFG optimization: An SFG is composed of nodes representing PEs and edges representing wires or busses associated with delay elements or shift registers. SFG optimization includes cut-set and retiming (Kung, 1988) according to additional design criteria and better VLSI realization.
4. Intermediate representation (IR) generation: An IR is generated from a given SFG with the help of a predesigned component library. The bit width of each variable is used to determine the size of each register and bus.

The final result is usually an HDL behavioral description for further behavioral-to-register-transfer HDL synthesis.

FIGURE 7.5 Systolic Space–Time Mapping Design Flow: DG capturing → space–time mapping → SFG optimization → IR generation.

As summarized in Table 7.3, these synthesis tools are discussed in alphabetical order:
TABLE 7.3 Summary of Systolic Synthesis Tools

Tool       DG capturing   Space–time mapping   SFG optimization   IR generation
ARREST     ✓              ✓                    ✓                  ✓
DG2VHDL                   ✓                    ✓                  ✓
IRIS¹                     ✓                    ✓                  ✓
Khoros²    ✓              ✓                    ✓                  ✓
VACS       ✓              ✓                    ✓                  ✓
VASS       ✓              ✓                    ✓                  ✓

¹ Woods et al. (1997).
² Moorman and Cates (1999).
• ARREST (Burleson and Jung, 1992) extracts dependency vectors from an affine recurrence equation (ARE) description and generates the corresponding DG. Users are allowed to explore the design space using a number of provided DG transformations. An SFG is obtained from an affine transformation according to user-specified scheduling and projection vectors (Kung, 1988). ARREST permits the user to verify the functionality via a Verilog simulator and to evaluate the cost/performance trade-offs of different design decisions.
• DG2VHDL (Stone and Manolakos, 2000, 1998) is proposed for general systolic mapping from a net list of the DG to a synthesizable behavioral VHDL description of the input DSP algorithm. In addition, a corresponding test bench is generated for design validation.
• The VLSI array compiler system (VACS) (Kung and Jean, 1988) is developed for systolic and wavefront array design. The DG is captured and displayed in graphical format. Other than the scheduling and projection vectors required in the space–time mapping process, processor axes and the local memory size in each processor are optional inputs from the user. The optimal SFG is obtained by enumerating different criteria over a bounded search space.
• VASS captures the DG in graphical format. It features multiprojection and the two-level pipeline technique for space–time mapping. An approach to transport all interior I/O ports to the array boundary is implemented. The output is the register transfer (RT) level data path synthesis of the PE and associated control signals in net list format.

Even though systolic array design methodology was explored in the early 1980s, a deficit of automatic systolic design tools has inhibited traditional systolic design methodology from becoming useful for RC. To the best of both authors' knowledge, no such automatic design tool yet exists, which may be due to the following reasons:

• The design space of systolic arrays must be explored with human interaction (Kung and Jean, 1988; Burleson and Jung, 1992; Stone and Manolakos, 2000) to achieve an optimal trade-off between different design objectives and to satisfy design constraints.
• The exploration of the final design space is limited because the array dimension is decreased by one after each projection according to the multiprojection procedure.
• I/O port allocation is cumbersome because it is allocated only to array boundaries. The resultant array may not be applicable due to its large number of I/O ports and the massive amount of memory bandwidth required.

Regular Array to FPGA/RC Synthesis
RC emerges to exploit inherent and distributed storage, the regularity and local communication of VLSI layout, dynamic reconfiguration, and spatial parallelism. As a consequence, more computational power as well as data-flow computing with high-throughput demand for multimedia and communication applications can be realized at low power consumption (Zhang et al., 2000). Meanwhile, advances in semiconductor technology allow faster and larger FPGA/RC to become common practice. This leads to a demand for efficient and automatic algorithm mapping tools.

Prior to algorithm-to-hardware synthesis, a system specification undergoes HW/SW partitioning. The success of FPGA/RC lies in how effectively the algorithm is mapped to its hardware counterpart and how both hardware and software interact with one another. The latter part, however, is beyond the scope of this chapter. Despite different levels of granularity of logic cells, the algorithm-to-hardware mapping procedures are performed similarly. Starting from algorithm compilation as shown in Figure 7.6, an algorithm mapping methodology for FPGA/RC can be decomposed into the following phases.

FIGURE 7.6 Algorithm-to-Hardware Mapping of Reconfigurable Computing: algorithm compilation → high-level synthesis → HDL synthesis → PMPR → configuration generation.

1. Algorithm compilation: Compilation takes one of several forms of high-level languages (HLLs) to a graph representation and then converts it into a form suitable for algorithm transformation and optimization. Such HLLs include C/C++, NAPA C (Rupp et al., 1998), single-assignment C (SA-C) (Moorman and Cates, 1999), and Matlab. The tool to accomplish this task comes in various forms of compiler, such as GNU C/C++ (Isshiki and Dai, 1996), NAPA C (Rupp et al., 1998), SUIF (Callahan et al., 2000), and Cameron (Hammes et al., 1999; Moorman and Cates, 1999). The most recent one is the System Generator from Xilinx and Mathworks (Xilinx, 2000). The suite of tools for algorithm development to hardware prototyping consists of an algorithm mapper, a net lister, a test bench generator, a synthesizer, a control designer, a core generator, an FPGA placer and router, and a logic simulator.
2. High-level synthesis: Synthesis converts the graph representation from the algorithm compilation phase into a behavioral description in a hardware description language (HDL), such as VHDL, Verilog HDL, or Lola (Woods et al., 1997). The available tools include ARREST (Burleson and Jung, 1992), DG2VHDL (Stone and Manolakos, 2000), IRIS (Woods et al., 1997), Khoros (Moorman and Cates, 1999), VACS (Kung and Jean, 1988), and VASS (Yen et al., 1996).
3. HDL synthesis: This synthesis is responsible for synthesizing a behavioral description of a given algorithm into its corresponding structural or RT-level representation using, for example, the Synopsys Behavioral Compiler, Design Compiler, and other such tools.
4. Partitioning, mapping, placing, and routing (PMPR): Partitioning is required if a multichip implementation is necessary. After the design is partitioned for the individual FPGA chips, the logic formulation is mapped to the underlying logic cells and placed according to the desired floor plan. Logic cells are interconnected by an automatic routing tool. A mapping, placing, and routing tool is provided by the manufacturer or locally implemented and integrated into the design flow, such as IRIS (Woods et al., 1997), for faster execution time.
5. Configuration generation: A configuration word (bit stream) is generated according to both the logic cell formulation and the routing information.
Table 7.4 summarizes some existing tools/methodologies and their corresponding design phases.
7.4 Conclusion
DSP is characterized by its data-flow and real-time computing nature. This dictates the implementation of a high-throughput DSP application as a dedicated chip called an ASIC. A broad range of DSP applications includes seismic/radar/sonar/radio signal processing, human/computer/machine interfaces, storage and communication, and security/military operations. As such, a variety of data types, algorithm constructs, and sampling rates are involved. The data type can be either bit-level or bit-parallel (fixed-point/floating-point number). The algorithm can be described as a nonterminating/terminating nested
TABLE 7.4 Summary of Algorithm Mapping Methodologies

Methodology   Algorithm compilation   High-level synthesis   HDL synthesis   PMPR   Configuration generation
CHAMP¹                                ✓                      ✓               ✓      ✓
Garp²         ✓                       ✓                      ✓               ✓      ✓
IRIS³         ✓                       ✓                      ✓               ✓      ✓
MorphoSys⁴    ✓                       ✓                      ✓               ✓      ✓
NAPA⁵         ✓                       ✓                      ✓               ✓      ✓
Pleiades⁶     ✓                       ✓                      ✓               ✓      ✓
Xilinx⁷       ✓                       ✓                      ✓               ✓      ✓

¹ Patriquin and Gurevich (1995). ² Callahan et al. (2000). ³ Woods et al. (1997). ⁴ Lu et al. (1999). ⁵ Rupp et al. (1998). ⁶ Wan et al. (1999). ⁷ Turney et al. (2000).
Do-loop or recurrent equation. The sampling rate of the input signals influences the architectural style of algorithm-to-hardware synthesis. As the sampling rate increases, more parallelism and computation power are demanded from the hardware architecture. Consequently, processing power can be obtained from an implementation in either a network of interconnected execution units or a regular array of PEs. Other than CMOS ASIC realization, which needs high product volume to amortize the prototyping cost, FPGA is one of the alternatives, provided that the timing requirement can be satisfied. In addition, reconfigurable computing has emerged with an architecture similar to FPGA, where a 2-D array of reconfigurable logic cells is interconnected by a programmable interconnect and surrounded by input/output cells. To satisfy dynamic reconfigurability on the same platform, the context or configuration is stored locally in each cell. This emergence of dynamically configurable hardware may soon influence algorithm-to-hardware synthesis. Hence, the time to market can be shortened.
References

Abnous, A., and Rabaey, J. (1996). Ultra-low-power domain-specific multimedia processors. VLSI Signal Processing IX, 461–470.
Andraka, R. (1998). A survey of CORDIC algorithms for FPGA-based computers. Proceedings of the ACM/SIGDA Sixth International Symposium on Field-Programmable Gate Arrays, 191–200.
Banerjee, U. (1993). Loop transformations for restructuring compilers: The foundations. Norwell, MA: Kluwer Academic.
Black, P. J., and Meng, T. H. Y. (1997). A 1-Gb/s, four-state, sliding block Viterbi decoder. IEEE Journal of Solid-State Circuits 32(6), 797–805.
Buell, D., Arnold, J. M., and Kleinfelder, W. J. (1996). Splash 2: FPGAs in a custom computing machine. Los Alamitos, CA: IEEE Computer Society.
Burleson, W., and Jung, B. (1992). ARREST: An interactive graphic analysis tool for VLSI arrays. Proceedings of the International Conference on Application Specific Array Processors, 149–162.
7 VLSI Signal Processing
Callahan, T. J., Hauser, J. R., and Wawrzynek, J. (2000). The Garp architecture and C compiler. IEEE Computer 33(4), 62–69.
Chang, S. F., and Messerschmitt, D. G. (1992). Designing a high-throughput VLC decoder, Part I: Concurrent VLSI architectures. IEEE Transactions on Circuits and Systems for Video Technology 2(2), 187–196.
Chen, C. Y. R., and Moricz, M. Z. (1991). A delay distribution methodology for the optimal systolic synthesis of linear recurrence algorithms. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 10(6), 685–697.
Dick, C. (1996). Computing the discrete Fourier transform on FPGA-based systolic arrays. Proceedings of the 1996 ACM Fourth International Symposium on Field-Programmable Gate Arrays, 129–135.
Ganapathy, K. G., and Wah, B. (1992). Optimal design of lower dimensional processor arrays for uniform recurrences. International Conference on Application Specific Array Processors, 636–648.
Goldstein, S. C., Schmit, H., Moe, M., Budiu, M., Cadambi, S., Taylor, R. R., and Laufer, R. (1999). PipeRench: A coprocessor for streaming multimedia acceleration. Proceedings of the 26th International Symposium on Computer Architecture, 28–39.
Goslin, G. R. (1995). A guide to using field-programmable gate arrays (FPGAs) for application-specific digital signal processing performance. San Jose, CA: Xilinx.
Hammes, J., Rinker, B., Bohm, W., Najjar, W., Draper, B., and Beveridge, R. (1999). Cameron: High-level language compilation for reconfigurable systems. 236–244.
Hwang, Y. T., and Hu, Y. H. (1992). MSSM: A design aid for multistage systolic mapping. Journal of VLSI Signal Processing 4(2/3), 125–146.
Hwang, Y. T., and Hu, Y. H. (1992). Novel scheduling scheme for systolic array partitioning problem. VLSI Signal Processing 5, 355–364.
Xilinx. (2000). Virtex-II datasheet. San Jose, CA: Xilinx.
Isshiki, T., and Dai, W. W. M. (1996). Bit-serial pipeline synthesis for multi-FPGA systems with C++ design capture. 
Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines, 38–47.
Kittitornkun, S., and Hu, Y. H. (2001). Frame-level pipelined motion estimation array processor. IEEE Transactions on Circuits and Systems for Video Technology 11(2), 248–251.
Kung, H. T. (1982). Why systolic architectures? IEEE Computer 15(1), 37–45.
Kung, S. Y. (1988). VLSI array processors. Englewood Cliffs, NJ: Prentice Hall.
Kung, S. Y., and Jean, S. N. (1988). A VLSI array compiler system. VLSI Signal Processing 3, 495–508.
Lamport, L. (1974). The parallel execution of Do loops. Communications of the ACM 17(2), 83–93.
Lee, P., and Kedem, Z. M. (1988). Synthesizing linear array algorithms from nested for loop algorithms. IEEE Transactions on Computers 37(12), 1578–1598.
Lee, P., and Kedem, Z. M. (1990). Mapping nested loop algorithms into multidimensional systolic arrays. IEEE Transactions on Parallel and Distributed Systems 1(1), 64–76.
Leiserson, C. E., Rose, F., and Saxe, J. (1983). Optimizing synchronous circuitry by retiming. Third Caltech Conference on VLSI, 87–116.
Li, G. J., and Wah, B. W. (1985). The design of optimal systolic arrays. IEEE Transactions on Computers 34(1), 66–77.
Lu, G., Singh, H., Lee, M. H., Bagherzadeh, N., Kurdahi, F. J., Filho, E. M. C., and Castro-Alves, V. (1999). The MorphoSys dynamically reconfigurable system-on-chip. Proceedings of the First NASA/DoD Workshop on Evolvable Hardware, 152–160.
De Man, H., Catthoor, F., Goossens, G., Vanhoof, J., Meerbergen, J. V., Note, S., and Huisken, J. (1990). Architecture-driven synthesis techniques for VLSI implementation of DSP algorithms. Proceedings of the IEEE 78(2), 319–334.
De Man, H., Rabaey, J., Vanhoof, J., Goossens, G., Six, P., and Claesen, L. (1988). Cathedral-II: A computer-aided synthesis system for digital signal processing VLSI systems. 55–66.
Mead, C., and Conway, L. (1980). Introduction to VLSI systems. Reading, MA: Addison-Wesley.
Mirsky, E., and DeHon, A. (1996). MATRIX: A reconfigurable computing architecture with configurable instruction distribution and deployable resources. Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines, 157–166.
Miyamori, T., and Olukotun, K. (1998). A quantitative analysis of reconfigurable co-processors for multimedia applications. Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines, 2–11.
Moldovan, D. I. (1982). On the analysis and synthesis of VLSI algorithms. IEEE Transactions on Computers 31(11), 1121–1126.
Moorman, A. C., and Cates, D. M., Jr. (1999). A complete development environment for image processing applications on adaptive computing systems. IEEE ICASSP 4, 2159–2162.
Nishitani, T. (1999). An approach to a multimedia system on a chip. IEEE Workshop on Signal Processing Systems, 13–21.
Note, S., Geurts, W., Catthoor, F., and De Man, H. (1991). Cathedral-III: Architecture-driven high-level synthesis for high-throughput DSP applications. Proceedings of the 28th ACM/IEEE Design Automation Conference, 597–602.
O'Keefe, M. T., Fortes, J. A. B., and Wah, B. W. (1992). On the relationship between two systolic array design methodologies. 
IEEE Transactions on Computers 41(12), 1589–1593.
Parhi, K. K. (1989). Algorithm transformation techniques for concurrent processors. Proceedings of the IEEE 77(12), 1879–1895.
Patriquin, R., and Gurevich, I. (1995). An automated design process for the CHAMP module. Proceedings of the IEEE National Aerospace and Electronics Conference 1, 417–424.
Rabaey, J., Abnous, A., Ichikawa, Y., Seno, K., and Wan, M. (1997). Heterogeneous reconfigurable systems. IEEE Workshop on Signal Processing Systems, 24–34.
Rao, S. K., and Kailath, T. (1989). Regular iterative algorithms and their implementation on processor arrays. Proceedings of the IEEE 76(3), 259–269.
Rupp, C. R., Landguth, M., Garverick, T., Gomersall, E., Holt, H., Arnold, J. M., and Gokhale, M. (1998). The NAPA adaptive processing architecture. IEEE Symposium on FPGAs for Custom Computing Machines, 28–37.
Shang, W., and Fortes, J. A. B. (1992). On time mapping of uniform dependence algorithms into lower dimensional processor arrays. IEEE Transactions on Parallel and Distributed Systems 3(3), 350–363.
Shankiti, A. M., and Leeser, M. (2000). Implementing a rake receiver for wireless communications on an FPGA-based computer system. Proceedings of the ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 145–151.
Stone, A., and Manolakos, E. S. (1998). Using DG2VHDL to synthesize an FPGA 1-D discrete wavelet transform. IEEE Workshop on Signal Processing Systems, 489–498.
Stone, A., and Manolakos, E. S. (2000). DG2VHDL: To facilitate the high-level synthesis of parallel processing array architectures. Journal of VLSI Signal Processing 24(1), 99–120.
Truong, T. K., Shih, M. T., Reed, I. S., and Satorius, E. H. (1992). A VLSI design for a trace-back Viterbi decoder. IEEE Transactions on Communications 40(3), 616–624.
Turney, R. D., Dick, C., Parlour, D. B., and Hwang, J. (2000). Modeling and implementation of DSP FPGA solutions. San Jose, CA: Xilinx.
Wan, M., Zhang, H., Benes, M., and Rabaey, J. (1999). A low-power reconfigurable data-flow driven DSP system. IEEE Workshop on Signal Processing Systems, 191–200.
Wang, D. J., and Hu, Y. H. (1995). Multiprocessor implementation of real-time DSP algorithms. IEEE Transactions on Very Large Scale Integration (VLSI) Systems 3(3), 393–403.
White, S. A. (1989). Applications of distributed arithmetic to digital signal processing: A tutorial review. IEEE ASSP Magazine, 4–19.
Wolfe, M. (1996). High performance compilers for parallel computing. Redwood City, CA: Addison-Wesley.
Wong, Y., and Delosme, J. M. (1985). Optimization of computation time for systolic arrays. IEEE International Conference on Computer Design: VLSI in Computers, 618–621.
Wong, Y., and Delosme, J. M. (1992). Optimization of computation time for systolic arrays. IEEE Transactions on Computers 41(2), 159–177.
Surin Kittitornkun and Yu-Hen Hu
Woods, R., Cassidy, A., and Gray, J. (1996). VLSI architectures for field programmable gate arrays: A case study. Proceedings of the IEEE Symposium on FPGAs for Custom Computing Machines, 2–9.
Woods, R., Ludwig, S., Heron, J., Trainor, D., and Gehring, S. (1997). FPGA synthesis on the XC6200 using IRIS and TRIANUS/HADES (or from heaven to hell and back again). Proceedings of the 5th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, 155–164.
Xilinx. (2000). System Generator v.1.0 product datasheet.
Yeh, J. W., Cheng, W. J., and Jen, C. W. (1996). VASS: A VLSI array system synthesizer. Journal of VLSI Signal Processing 12(2), 135–158.
Yeo, H., and Hu, Y. H. (1995). A novel modular systolic array architecture for full-search block-matching motion estimation. IEEE Transactions on Circuits and Systems for Video Technology 5(5), 407–416.
Zhang, H., Prabhu, V., George, V., Wan, M., Benes, M., Abnous, A., and Rabaey, J. M. (2000). A 1-V heterogeneous reconfigurable processor IC for baseband wireless applications. Digest of Technical Papers, IEEE International Solid-State Circuits Conference, 68–69.
Zimmermann, K. H. (1996). Linear mapping of n-dimensional uniform recurrences onto k-dimensional systolic arrays. Journal of VLSI Signal Processing 12, 187–202.
VIII DIGITAL COMMUNICATION AND COMMUNICATION NETWORKS

Vijay K. Garg
Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Yih-Chen Wang
Lucent Technologies, Naperville, Illinois, USA
This section is concerned with digital communication and data communication networks. In the first part of the section we focus on the essentials of digital communication, and in the second part we discuss important aspects of data communication networks. In a digital communication system, the information is processed so that it can be represented by a sequence of discrete messages. The digital source may be the result of sampling and quantizing an analog source, or it may represent a digital source such as the contents of a computer memory. When the source is binary, the two possible values, 0 and 1, are called bits. A digital signal can be processed independently of whether it represents a discrete data source or a digitized analog source. This implies that an essentially unlimited range of signal conditioning and processing options is available to the designer. Depending on the origination and destination of the information being conveyed, this processing may include source coding, encryption, pulse shaping for spectral control, forward error correction (FEC) coding, special modulation to spread the signal spectrum, and equalization to compensate for channel distortion. In most communication system designs, a general objective is to use as efficiently as possible the resources of bandwidth
and transmitted power. Thus, we are interested in both bandwidth efficiency, defined as the ratio of data rate to signal bandwidth, and power efficiency, characterized by the probability of making a reception error as a function of the signal-to-noise ratio (SNR). Modulation produces a continuous-time waveform suitable for transmission through the communication channel, whereas demodulation extracts the data from the received signal. We discuss various modulation schemes. Spread spectrum refers to any modulation scheme that produces a transmitted-signal spectrum much wider than the bandwidth of the information-bearing signal, independently of that bandwidth. We discuss the code division multiple access (CDMA) spread spectrum scheme. In the second part of the section, our focus is on data communication networks, which usually involve a collection of computers and other devices operating autonomously and interconnected to provide computing or data services. The interconnection of computers and devices can be established with either wired or wireless transmission media. This part starts with an introduction to fundamental concepts of data communications and network architecture. These concepts
help readers understand how the different system components work together to build a data communication network. Many data communication and networking terms are explained, which also helps readers understand advanced networking technologies. Two important network architectures are described: Open System Interconnection (OSI) and the TCP/IP architecture. OSI is a network reference model that is referenced by all types of networks, while the TCP/IP architecture provides the framework upon which the Internet has been built. The network evolution could not have been so successful without the widespread use
of local networking. Local network technology, including the wireless local area network (WLAN), is then introduced, followed by various wireless network access technologies, such as FDMA, TDMA, CDMA, and WCDMA. The convergence of a variety of networks under the Internet has begun. One important driver has been the introduction of the Session Initiation Protocol (SIP), which could provide a framework for Integrated Multimedia Network Service (IMS). The convergence will not be possible without the extensive use of softswitches, the creation of an all-IP architecture, and broadband access provided over optical networking technologies.
1 Signal Types, Properties, and Processes

Vijay K. Garg
Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Yih-Chen Wang
Lucent Technologies, Naperville, Illinois, USA

1.1 Signal Types ........................................................................................ 951
1.2 Energy and Power of a Signal ................................................................. 951
1.3 Random Processes ............................................................................... 952
    1.3.1 Statistical Average of a Random Process . 1.3.2 Stationary Process . 1.3.3 Properties of an Autocorrelation Function . 1.3.4 Crosscorrelation Functions . 1.3.5 Ergodic Processes
1.4 Transmission of a Random Signal Through a Linear Time-Invariant Filter .... 953
1.5 Power Spectral Density ......................................................................... 954
    1.5.1 Properties of psd
1.6 Relation Between the psd of Input Versus the psd of Output....................... 955
1.1 Signal Types

A signal is a function that carries some information. Mathematically, a signal can be classified as deterministic or random. For a deterministic signal, there is no uncertainty with respect to its value at any time; deterministic signals are modeled by explicit mathematical expressions. For a random signal, there is some degree of uncertainty before the signal occurs, and a probabilistic model is used to characterize it. Noise in a communication system is an example of a random signal. A signal x(t) is periodic in time if there exists a constant T_0 > 0 such that:

x(t) = x(t + T_0)   for −∞ < t < ∞,   (1.1)
where t is the time. The smallest value of T0 that satisfies equation 1.1 is called the period of x(t). The period T0 defines the duration of one complete cycle of x(t). A signal for which there is no such value of T0 that satisfies equation 1.1 is referred to as a nonperiodic signal.
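Equation 1.1 can be checked numerically on a grid of sample times. A small sketch in pure Python (the 50-Hz signal, grid, and helper name are illustrative assumptions, not from the text):

```python
import math

def is_periodic_with(x, T0, ts, tol=1e-9):
    """Check x(t) == x(t + T0) at every time in ts (equation 1.1, on a grid)."""
    return all(abs(x(t) - x(t + T0)) < tol for t in ts)

# A 50-Hz sinusoid: the smallest T0 satisfying equation 1.1,
# i.e., its period, is 1/50 = 0.02 s.
x = lambda t: math.sin(2.0 * math.pi * 50.0 * t)
grid = [k * 1.0e-4 for k in range(200)]
```

With these definitions, `is_periodic_with(x, 0.02, grid)` holds, as does any integer multiple of the period, while an arbitrary shift such as 0.017 s fails the test.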
1.2 Energy and Power of a Signal

The instantaneous power of an electrical signal is given as:

p(t) = x²(t),   (1.2)

where x(t) is either a voltage or a current of the signal. The energy dissipated during the time interval (−T/2, T/2) by a real signal with instantaneous power given by equation 1.2 is written as:

E_x^T = ∫_{−T/2}^{T/2} x²(t) dt.   (1.3)

The average power dissipated by the signal during the interval T is:

P_x^T = (1/T) ∫_{−T/2}^{T/2} x²(t) dt.   (1.4)

In analyzing communication signals, it is often desirable to deal with waveform energy. We refer to x(t) as an energy signal only if it has nonzero but finite energy, 0 < E_x < ∞, where:

E_x = lim_{T→∞} ∫_{−T/2}^{T/2} x²(t) dt = ∫_{−∞}^{∞} x²(t) dt.   (1.5)

A signal is defined to be a power signal only if it has finite but nonzero power, 0 < P_x < ∞, where:
P_x = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt.   (1.6)

1.3 Random Processes

A random process X(t, s) can be viewed as a function of two variables: a sample point s and time t. Figure 1.1 shows N sample functions of time, {X_k(t)}; each of the sample functions can be considered as the output of a different noise generator.

FIGURE 1.1 Random Noise Process

For a fixed sample point s_k, the graph of the function X(t, s_k) versus time t is called a realization or sample function of the random process. To simplify the notation, we denote this sample function as:

X_k(t) = X(t, s_k).   (1.7)

We have an indexed ensemble (family) of random variables {X(t, s)}, which is called a random process. To simplify the notation, we suppress the s and use X(t) to denote a random process. A random process X(t) is an ensemble of time functions together with a probability rule that assigns a probability to any meaningful event associated with an observation of one sample function of the random process.

1.3.1 Statistical Average of a Random Process

A random process whose distribution functions are continuous can be described statistically with a probability density function (pdf). We define the mean of the random process X(t) as:

E{X(t_k)} = ∫_{−∞}^{∞} x p_{X(t_k)}(x) dx = m_x(t_k),   (1.8)

where X(t_k) is the random variable obtained by observing the process at time t_k and p_{X(t_k)}(x) is its pdf. We define the autocorrelation function of the random process X(t) to be a function of two variables, t_1 and t_2:

R_x(t_1, t_2) = E{X(t_1) X(t_2)},   (1.9)

where X(t_1) and X(t_2) are random variables obtained by observing X(t) at times t_1 and t_2, respectively. The autocorrelation function is a measure of the degree to which two time samples of the same random process are related.

1.3.2 Stationary Process

A random process X(t) is said to be stationary in the strict sense if none of its statistics is affected by a shift in the time origin. A random process is said to be wide-sense stationary (WSS) if two of its statistics, the mean and the autocorrelation function, do not vary with a shift in the time origin. Therefore, a random process is WSS if:

E{X(t)} = m_x = constant,   (1.10)

and

R_x(t_1, t_2) = R_x(t_1 − t_2) = R_x(τ),   (1.11)

where τ = t_1 − t_2. Stationarity in the strict sense implies WSS, but WSS does not imply strict-sense stationarity.

1.3.3 Properties of an Autocorrelation Function

The autocorrelation function of a WSS process reads:

R_x(τ) = E[X(t + τ) X(t)]   for all t.   (1.12)

The mean square value of a random process can be obtained from R_x(τ) by substituting τ = 0 in equation 1.12:

R_x(0) = E[X²(t)].   (1.13)

The autocorrelation function R_x(τ) is an even function of τ; that is:

R_x(τ) = R_x(−τ).   (1.14)

The autocorrelation function R_x(τ) has its maximum magnitude at τ = 0; that is:

|R_x(τ)| ≤ R_x(0).   (1.15)
To prove this property, we consider the non-negative quantity:

E[{X(t + τ) ± X(t)}²] ≥ 0.   (1.16)

Expanding the square and taking expectations:

E[X²(t + τ)] ± 2E[X(t + τ)X(t)] + E[X²(t)] ≥ 0.   (1.17)

2R_x(0) ± 2R_x(τ) ≥ 0.   (1.18)

−R_x(0) ≤ R_x(τ) ≤ R_x(0).   (1.19)

|R_x(τ)| ≤ R_x(0).   (1.20)

1.3.4 Crosscorrelation Functions

We consider two random processes X(t) and Y(t) with autocorrelation functions R_X(t, u) and R_Y(t, u), respectively. The two crosscorrelation functions of X(t) and Y(t) are defined as:

R_XY(t, u) = E[X(t)Y(u)],   (1.21)

and

R_YX(t, u) = E[Y(t)X(u)].   (1.22)

Collecting these in a correlation matrix:

R(t, u) = | R_X(t, u)    R_XY(t, u) |
          | R_YX(t, u)   R_Y(t, u)  |,   (1.23)

or, when the processes are jointly wide-sense stationary:

R(τ) = | R_X(τ)    R_XY(τ) |
       | R_YX(τ)   R_Y(τ)  |,   (1.24)

where τ = t − u. The crosscorrelation function is not generally an even function of τ, and it does not have a maximum value at the origin. However, it obeys the symmetry relationship:

R_XY(τ) = R_YX(−τ).   (1.25)

1.3.5 Ergodic Processes

We consider the sample function x(t) of a stationary process X(t), with observation interval −T/2 ≤ t ≤ T/2. The time average is:

m_x(T) = (1/T) ∫_{−T/2}^{T/2} x(t) dt.   (1.26)

The time average m_x(T) is a random variable, and its value depends on the observation interval and on which particular sample function of the random process X(t) is selected for use in equation 1.26. Since X(t) is assumed to be stationary, the mean of the time average is given as:

E[m_x(T)] = (1/T) ∫_{−T/2}^{T/2} E[x(t)] dt = (1/T) ∫_{−T/2}^{T/2} m_x dt = m_x,   (1.27)

where m_x is the mean of the process X(t). We say that the process X(t) is ergodic in the mean if two conditions are satisfied:

(1) The time average m_x(T) approaches the ensemble average m_x in the limit as the observation time T goes to infinity:

lim_{T→∞} m_x(T) = m_x.   (1.28)

(2) The variance of m_x(T), treated as a random variable, approaches zero in the limit as the observation interval T goes to infinity:

lim_{T→∞} var[m_x(T)] = 0.   (1.29)

Likewise, the process X(t) is ergodic in the autocorrelation function if the following two limiting conditions are fulfilled:

(1) lim_{T→∞} R_x(τ, T) = R_x(τ).   (1.30)

(2) lim_{T→∞} var[R_x(τ, T)] = 0.   (1.31)

Example 1. A random process is described by x(t) = A cos(2πft + φ), where φ is a random variable uniformly distributed on (0, 2π). Find the autocorrelation function and the mean square value.

Solution:

R_x(t_1, t_2) = E[A cos(2πft_1 + φ) A cos(2πft_2 + φ)]
             = (A²/2) E{cos[2πf(t_1 − t_2)]} + (A²/2) E{cos[2πf(t_1 + t_2) + 2φ]}
             = (A²/2) cos 2πf(t_1 − t_2).

With τ = t_1 − t_2:

R_x(τ) = (A²/2) cos 2πfτ,

and the mean square value is:

R_x(0) = A²/2.
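The closed-form result of Example 1 lends itself to a quick Monte Carlo check. A sketch in pure Python (the parameter values, sample count, and helper name are illustrative assumptions): averaging X(t + τ)X(t) over many realizations of the uniform phase should approach (A²/2) cos 2πfτ, independently of t.

```python
import math
import random

def ensemble_autocorr(A, f, tau, n_trials=100000, t=0.3, seed=1):
    """Monte Carlo estimate of R_x(tau) = E[X(t + tau) X(t)] for
    X(t) = A*cos(2*pi*f*t + phi), with phi uniform on (0, 2*pi).
    The average is taken over realizations of the random phase phi."""
    rng = random.Random(seed)
    w = 2.0 * math.pi * f
    acc = 0.0
    for _ in range(n_trials):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        acc += A * math.cos(w * (t + tau) + phi) * A * math.cos(w * t + phi)
    return acc / n_trials

A, f, tau = 2.0, 5.0, 0.03
estimate = ensemble_autocorr(A, f, tau)
theory = (A ** 2 / 2.0) * math.cos(2.0 * math.pi * f * tau)  # Example 1 result
```

Repeating the estimate at a different observation time t gives the same value within sampling error, consistent with the process being wide-sense stationary.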
1.4 Transmission of a Random Signal Through a Linear Time-Invariant Filter

Figure 1.2 shows a random input signal X(t) passing through a linear time-invariant filter with impulse response h(t).

FIGURE 1.2 Random Signal Through a Linear Time-Invariant Filter

The output signal Y(t) can be related to the input through the impulse response function h(τ₁) as:

Y(t) = ∫_{−∞}^{∞} h(τ₁) X(t − τ₁) dτ₁.   (1.32)

The mean of Y(t) is as follows:

m_Y(t) = E[Y(t)] = E[ ∫_{−∞}^{∞} h(τ₁) X(t − τ₁) dτ₁ ].   (1.33)

The system is stable provided that the expectation E[X(t)] is finite for all t. We may then interchange the order of expectation and integration in equation 1.33, with the following result:

m_Y(t) = ∫_{−∞}^{∞} h(τ₁) E[X(t − τ₁)] dτ₁ = ∫_{−∞}^{∞} h(τ₁) m_X(t − τ₁) dτ₁.   (1.34)

For a stationary process X(t), the mean m_X(t) is a constant m_X, so we may simplify equation 1.34 as:

m_Y = m_X ∫_{−∞}^{∞} h(τ₁) dτ₁ = m_X H(0),   (1.35)

where H(0) is the zero-frequency (dc) response of the system. Equation 1.35 states that the mean of the random process Y(t) produced at the output of a linear time-invariant system in response to X(t) acting as the input process is equal to the mean of X(t) multiplied by the dc response of the system, which is intuitively satisfying. The autocorrelation function of the output random process Y(t) is as follows:

R_Y(t, u) = E[ ∫_{−∞}^{∞} h(τ₁) X(t − τ₁) dτ₁ · ∫_{−∞}^{∞} h(τ₂) X(u − τ₂) dτ₂ ].   (1.36)

Assuming that the mean square E[X²(t)] is finite for all t and that the system is stable:

R_Y(t, u) = ∫∫ h(τ₁) h(τ₂) E[X(t − τ₁) X(u − τ₂)] dτ₁ dτ₂ = ∫∫ h(τ₁) h(τ₂) R_X(t − τ₁, u − τ₂) dτ₁ dτ₂.   (1.37)

When the input X(t) is a stationary process, the autocorrelation function of X(t) is only a function of the difference between the observation times t − τ₁ and u − τ₂. Thus, with τ = t − u, we have:

R_Y(τ) = ∫∫ h(τ₁) h(τ₂) R_X(τ − τ₁ + τ₂) dτ₁ dτ₂,   (1.38)

and

R_Y(0) = E[Y²(t)] = ∫∫ h(τ₁) h(τ₂) R_X(τ₂ − τ₁) dτ₁ dτ₂,   (1.39)

which is a constant.

1.5 Power Spectral Density

Let H(f) denote the frequency response of the system; then:

h(τ₁) = ∫_{−∞}^{∞} H(f) e^{j2πfτ₁} df.   (1.40)

Substituting equation 1.40 into equation 1.39:

E[Y²(t)] = ∫∫ [ ∫_{−∞}^{∞} H(f) e^{j2πfτ₁} df ] h(τ₂) R_X(τ₂ − τ₁) dτ₁ dτ₂
         = ∫_{−∞}^{∞} H(f) df ∫_{−∞}^{∞} h(τ₂) dτ₂ ∫_{−∞}^{∞} R_X(τ₂ − τ₁) e^{j2πfτ₁} dτ₁.   (1.41)

Let τ = τ₂ − τ₁, so that e^{j2πfτ₁} = e^{j2πfτ₂} e^{−j2πfτ}, and the following results:

E[Y²(t)] = ∫_{−∞}^{∞} H(f) df ∫_{−∞}^{∞} h(τ₂) e^{j2πfτ₂} dτ₂ ∫_{−∞}^{∞} R_X(τ) e^{−j2πfτ} dτ
         = ∫_{−∞}^{∞} |H(f)|² [ ∫_{−∞}^{∞} R_X(τ) e^{−j2πfτ} dτ ] df,

since ∫ h(τ₂) e^{j2πfτ₂} dτ₂ = H*(f) for a real impulse response and H(f)H*(f) = |H(f)|², where |H(f)| is the magnitude response of the filter. We define:

S_X(f) = ∫_{−∞}^{∞} R_X(τ) e^{−j2πfτ} dτ.   (1.42)
S_X(f) is called the power spectral density, or power spectrum, of the stationary process X(t). We can now rewrite the mean square value of the output as:

E[Y²(t)] = ∫_{−∞}^{∞} |H(f)|² S_X(f) df.   (1.43)

The mean square value of the output of a stable linear time-invariant filter in response to a stationary process is equal to the integral over all frequencies of the power spectral density of the input process multiplied by the squared magnitude response of the filter.
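A discrete-time analogue of this statement can be checked in a few lines. This is a sketch, not the continuous-time formula of the text: the FIR taps and DFT length below are illustrative assumptions. For white input noise of variance σ², the output power of an FIR filter with taps h equals σ² Σ h[n]², which by Parseval's relation is the same as σ² times the average of |H(f_k)|² over the DFT frequencies, the discrete counterpart of equation 1.43.

```python
import cmath
import math

h = [0.5, 0.3, -0.2, 0.1]      # illustrative FIR filter taps
sigma2 = 1.0                   # variance of the white input noise

# Output power computed in the time domain: sigma2 * sum of squared taps.
power_time = sigma2 * sum(c * c for c in h)

# The same power via the frequency response: sigma2 * mean_k |H(f_k)|^2,
# where H is the zero-padded DFT of the taps.
N = 1024
H = [sum(c * cmath.exp(-2j * math.pi * k * n / N) for n, c in enumerate(h))
     for k in range(N)]
power_freq = sigma2 * sum(abs(Hk) ** 2 for Hk in H) / N
```

Both computations agree to numerical precision, mirroring the statement that output power is the input psd weighted by |H(f)|² and integrated over frequency.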
1.5.1 Properties of psd

We use the following definitions:

S_X(f) = ∫_{−∞}^{∞} R_X(τ) e^{−j2πfτ} dτ,

and

R_X(τ) = ∫_{−∞}^{∞} S_X(f) e^{j2πfτ} df.

Using the above expressions, we derive the following properties.

Property 1. For f = 0, we get:

S_X(0) = ∫_{−∞}^{∞} R_X(τ) dτ.   (1.44)

Property 2. The mean square value of a stationary process equals the total area under the graph of the psd:

R_X(0) = E[X²(t)] = ∫_{−∞}^{∞} S_X(f) df.   (1.45)

Property 3. The psd of a stationary process is always non-negative:

S_X(f) ≥ 0   for all f.   (1.46)

Property 4. The psd of a real-valued random process is an even function of frequency:

S_X(−f) = S_X(f).   (1.47)

This follows because S_X(−f) = ∫ R_X(τ) e^{j2πfτ} dτ, and since R_X(τ) = R_X(−τ), substituting τ → −τ gives ∫ R_X(τ) e^{−j2πfτ} dτ = S_X(f).

Property 5. The psd, appropriately normalized, has the properties usually associated with a probability density function:

p_x(f) = S_X(f) / ∫_{−∞}^{∞} S_X(f) df ≥ 0   for all f.   (1.48)

1.6 Relation Between the psd of Input Versus the psd of Output

The psd of the output process is:

S_Y(f) = ∫_{−∞}^{∞} R_Y(τ) e^{−j2πfτ} dτ = ∫∫∫ h(τ₁) h(τ₂) R_X(τ − τ₁ + τ₂) e^{−j2πfτ} dτ₁ dτ₂ dτ.

Let τ₀ = τ − τ₁ + τ₂. Then e^{−j2πfτ} = e^{−j2πfτ₀} e^{−j2πfτ₁} e^{j2πfτ₂}, and the triple integral factors into H(f), H*(f), and S_X(f):

S_Y(f) = H(f) H*(f) S_X(f) = |H(f)|² S_X(f).   (1.49)

The psd of the output process Y(t) equals the power spectral density of the input X(t) multiplied by the squared magnitude response of the filter.

Example 2. Numbers 1 and 0 are represented by pulses of amplitude +A and −A volts, respectively, and duration T sec. The pulses are not synchronized, so the starting time t_d of the first complete pulse for positive time is equally likely to lie anywhere between 0 and T sec (see Figures 1.3 and 1.4). That is, t_d is the sample value of a uniformly distributed random variable T_d with probability density function:

f_{T_d}(t_d) = 1/T   for 0 ≤ t_d ≤ T;   0 elsewhere.

FIGURE 1.3 Pulse of Amplitude A

FIGURE 1.4 Shifted Pulse of Amplitude A

Averaging over the random starting time, the autocorrelation function is the triangle:

R_X(τ) = A²(1 − |τ|/T)   for |τ| < T;   R_X(τ) = 0   for |τ| ≥ T,

and R_X(0) = A² is the total average power. The psd is:

S_X(f) = ∫_{−T}^{T} A²(1 − |τ|/T) e^{−j2πfτ} dτ = A²T (sin πfT / πfT)² = A²T sinc²(fT) = ε_g(f)/T,

where ε_g(f) = A²T² sinc²(fT), and ∫_{−∞}^{∞} S_X(f) df equals the total average power.

FIGURE 1.5 Autocorrelation Function
These example equations show that, for a random binary wave in which binary 1 and 0 are represented by pulses g(t) and −g(t), respectively, the psd S_X(f) is equal to the energy spectral density ε_g(f) of the symbol-shaping pulse g(t) divided by the symbol duration T. The autocorrelation function is shown in Figure 1.5.
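The triangular autocorrelation of the random binary wave can be estimated from a single long realization, invoking ergodicity. A sketch in pure Python (the parameters and helper name are illustrative; for simplicity the random start offset T_d of the text is omitted, which does not change the expected result):

```python
import random

def binary_wave_autocorr(lag_frac, T_sps=8, n_symbols=20000, A=1.0, seed=7):
    """Time-average estimate of R_X(tau) for a random +/-A binary wave with
    T_sps samples per symbol of duration T; lag_frac is tau/T."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_symbols):
        level = A if rng.random() < 0.5 else -A
        samples.extend([level] * T_sps)
    lag = int(round(lag_frac * T_sps))
    n = len(samples) - lag
    return sum(samples[i] * samples[i + lag] for i in range(n)) / n
```

The estimates track R_X(τ) = A²(1 − |τ|/T): roughly A² at τ = 0, A²/2 at τ = T/2, and about zero beyond one symbol duration, where the wave is decorrelated.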
2 Digital Communication System Concepts

Vijay K. Garg
Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Yih-Chen Wang
Lucent Technologies, Naperville, Illinois, USA
2.1 Digital Communication System .............................................................. 957
2.2 Messages, Characters, and Symbols ......................................................... 957
2.3 Sampling Process ................................................................................. 957
2.4 Aliasing .............................................................................................. 959
2.5 Quantization ....................................................................................... 960
2.6 Pulse Amplitude Modulation ................................................................. 960
2.7 Sources of Corruption .......................................................................... 961
2.8 Voice Communication .......................................................................... 963
2.9 Encoding ............................................................................................ 964
    2.9.1 Encoding Schemes
2.1 Digital Communication System

Figure 2.1 shows a block diagram of a typical digital communication system. We focus primarily on the formatting and transmission of the baseband signal. Data already in a digital format bypass the formatting procedure. Textual information is transformed into binary digits by use of a coder. Analog information is formatted using three separate processes:

• Sampling
• Quantization
• Encoding

In all cases, the formatting steps result in a sequence of binary digits. These digits are transmitted through a baseband channel, such as a pair of wires or a coaxial cable. However, before we transmit the digits, we must transform them into waveforms that are compatible with the channel. For baseband channels, compatible waveforms are pulses. The conversion from binary digits to pulse waveforms takes place in a waveform encoder, also called a baseband modulator. The output of the waveform encoder is typically a sequence of pulses with characteristics that correspond to the binary digits being sent. After transmission through the channel, the received waveforms are detected to produce an estimate of the transmitted digits, and the final step is (reverse) formatting to recover an estimate of the source information.

2.2 Messages, Characters, and Symbols

When digitally transmitted, the characters are first encoded into a sequence of bits, called a bit stream or baseband signal. Groups of n bits can be combined to form a finite symbol set, or alphabet, of M = 2^n such symbols. A system using a symbol set of size M is called an M-ary system. The value of n or M represents an important initial choice in the design of any digital communication system. For n = 1, the system is referred to as binary, the size of the symbol set is M = 2, and the modulator uses two different waveforms to represent the binary 1 and the binary 0. In this case, the symbol rate and the bit rate are the same. For n = 2, the system is called quaternary or 4-ary (M = 4). At each symbol time, the modulator uses one of four different waveforms to represent the symbol (see Figure 2.2).
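The grouping of bits into M-ary symbols can be sketched in a few lines (pure Python; the helper name and the bit-string input are illustrative assumptions):

```python
def bits_to_symbols(bits, n):
    """Group a bit string into n-bit words; each word indexes one of the
    M = 2**n waveforms of an M-ary modulator."""
    assert len(bits) % n == 0, "bit stream must divide evenly into n-bit groups"
    return [int(bits[i:i + n], 2) for i in range(0, len(bits), n)]

# Quaternary case (n = 2, M = 4): eight bits become four symbols,
# so the symbol rate is half the bit rate.
symbols = bits_to_symbols("00011011", 2)   # -> [0, 1, 2, 3]
```

More generally, the symbol rate is the bit rate divided by n, which is why larger alphabets trade bandwidth for waveform complexity.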
2.3 Sampling Process

Analog information must be transformed into a digital format. The process starts with sampling the waveform to produce a discrete pulse-amplitude-modulated waveform (see Figure 2.3). The sampling process is usually described in the time domain. This is an operation that is basic to digital signal processing and digital communication. Using the sampling process, we convert the analog signal into a corresponding sequence of samples that are usually spaced uniformly in time. The sampling process can be implemented in several ways, the most popular being the sample-and-hold operation. In this operation, a switch and storage mechanism (such as a transistor and a capacitor, or a shutter and a film strip) form a sequence of samples of the continuous input waveform. The output of the sampling process is called pulse amplitude modulation (PAM) because the successive output intervals can be described as a sequence of pulses with amplitudes derived from the input waveform samples. The analog waveform can be approximately retrieved from a PAM waveform by simple low-pass filtering, provided we choose the sampling rate properly.

FIGURE 2.1 Block Diagram of a Typical Digital Communication System (sampler, quantizer, coder, and waveform encoder feeding the transmitter and channel; receiver, waveform decoder, decoder, and low-pass filter recovering the digital, textual, and analog information)

FIGURE 2.2 Binary and Quaternary Systems

The ideal form of sampling is called instantaneous sampling. We sample the signal g(t) instantaneously at a uniform rate f_s, once every T_s seconds. Thus, we can write:

g_δ(t) = Σ_{n=−∞}^{∞} g(nT_s) δ(t − nT_s),   (2.1)

where g_δ(t) is the ideal sampled signal and δ(t − nT_s) is the delta function positioned at time t = nT_s. A delta function is closely approximated by a rectangular pulse of duration Δt and amplitude g(nT_s)/Δt; the smaller we make Δt, the better the approximation:

g_δ(t) ⇌ f_s Σ_{m=−∞}^{∞} G(f − mf_s),   (2.2)

where G(f) is the Fourier transform of the original signal g(t) and f_s is the sampling rate. Equation 2.2 states that the process of uniformly sampling a continuous-time signal of finite energy results in a periodic spectrum with a period equal to the sampling rate. Taking the Fourier transform of both sides of equation 2.1, and noting that the Fourier transform of the delta function δ(t − nT_s) is equal to e^{−j2πnfT_s}:

G_δ(f) =
where G(f ) is the Fourier transform of the original signal g(t) and fs is sampling rate. Equation 2.2 states that the process of uniformly sampling a continuous-time signal of finite energy results in a periodic spectrum with a period equal to the sampling rate. Taking the Fourier transform of both side, of Equation 2.1 and noting that the Fourier transform of the delta function d(t nTs ) is equal to e j2pnfTs : Gd (f ) ¼
1 X
g(nTs )e j2pnfTs :
(2:3)
n¼1
Equation 2.3 is called the discrete-time Fourier transform. It is the complex Fourier series representation of the periodic frequency function Gd (t), with the sequence of samples g(nTs ) defining the coefficients of the expansion. We consider any continuous-time signal g(t) of finite energy and infinite duration. The signal is strictly band-limited with no frequency component higher than W Hz. This implies that the Fourier transform G(f ) of the signal g(t) has the property that G(f ) is zero for jf j W. If we choose the
2 Digital Communication System Concepts
959 gδ(t )
g(t )
Ts t
FIGURE 2.3 Sampling Process
sampling period Ts ¼ 1=2W , then the corresponding spectrum is given as: Gd (f ) ¼
1 X n¼1
g
1 n jpnf X e W ¼ fs G(f ) þ fs G(f mfs ) 2W m¼1, m6¼0
(2:4) Consider the following two conditions: (1) G(f ) ¼ 0 for jf j W . (2) fs ¼ 2W . We find from equation 2.4 by applying these conditions, 1 Gd (f ) G(f ) ¼ 2W ;G(f ) ¼
2.4 Aliasing
W < f < W:
1 n jpnf 1 X g e ð W Þ 2W n¼1 2W
width W Hz is called the Nyquist rate and 1=2 W sec is called the Nyquist interval. We discuss the sampling theorem by assuming that signal g(t) is strictly band-limited. In practice, however, an information-bearing signal is not strictly band-limited, with the result that some degree of under sampling is encountered. Consequently, some aliasing is produced by the sampling process. Aliasing refers to the phenomenon of a high-frequency component in the spectrum of the signal seemingly taking on the identity of a lower frequency in the spectrum of its sampled version.
(2:5) W < f < W:
Thus, if the sample value g(n=2W ) of a signal g(t) is specified for all n, then the Fourier transform G(f ) of the signal is uniquely determined by using the discrete-time Fourier transform of equation 2.5. Because g(t) is related to G(f ) by the inverse Fourier transform, it follows that the signal g(t) is itself uniquely determined by the sample values g(n=2W ) for 1 < n < 1. In other words, the sequence {g(n=2W )} has all the information contained in g(t). We state the sampling theorem for band-limited signals of finite energy in two parts that apply to the transmitter and receiver of a pulse modulation system, respectively. (1) A band-limited signal of finite energy with no frequency components higher than W Hz is completely described by specifying the values of signals at instants of time separated by 1=2 W sec. (2) A band-limited signal of finite energy with no frequency components higher than W Hz may be completely recovered from a knowledge of its samples taken at the rate of 2 W samples/sec. This is also known as the uniform sampling theorem. The sampling rate of 2 W samples per second for a signal band-
FIGURE 2.4 Sampled Signal Spectrum
FIGURE 2.5 Higher Sampling Rate to Eliminate Aliasing
FIGURE 2.6 Prefiltering to Eliminate Aliasing
FIGURE 2.7 Postfiltering to Eliminate the Aliased Portion of the Spectrum

Figure 2.4 shows the part of the spectrum that is aliased due to undersampling. The aliased spectral components represent ambiguous data that can be retrieved only under special conditions. In general, the ambiguity is not resolved, and ambiguous data appear in the frequency band between (f_s − f_m) and f_m. In Figure 2.5, we show a higher sampling rate f_s′ that eliminates the aliasing by separating the spectral replicas. Figures 2.6 and 2.7 show two ways to eliminate aliasing using antialiasing filters. In Figure 2.6, the analog signal is prefiltered so that the new maximum frequency f_m′ is less than or equal to f_s/2; thus, there are no aliasing components because f_s > 2f_m′. Eliminating the aliasing terms prior to sampling is good engineering practice. When the signal structure is well known, the aliased terms can be eliminated after sampling with a low-pass filter (LPF) operating on the sampled data. In this case, the aliased components are removed by postfiltering after sampling. The postfilter cutoff frequency f_m′ removes the aliased components; f_m′ needs to be less than (f_s − f_m). It should be noted that filtering techniques for eliminating the aliased portion of the spectrum will result in a loss of some signal information. For this reason, the sample rate, cutoff bandwidth, and filter type selected for a particular signal bandwidth are all interrelated.

Realizable filters require a nonzero bandwidth for the transition between the passband and the required out-of-band attenuation. This is called the transition bandwidth. To minimize the system sample rate, we desire that the antialiasing filter have a small transition bandwidth. Filter complexity and cost rise sharply with narrower transition bandwidth, so a trade-off is needed between the cost of a small transition bandwidth and the costs of the higher sampling rate, which are those of more storage and higher transmission rates. In many systems, the answer has been to make the transition bandwidth between 10 and 20% of the signal bandwidth. If we account for a 20% transition bandwidth of the antialiasing filter, we have an engineering version of the Nyquist sampling rate: f_s ≥ 2.2 f_m.

Example 3
We want to produce a high-quality digitization of a 20-kHz bandwidth music signal. A sampling rate greater than or equal to 44 ksps should therefore be used. The sampling rate for a compact disc digital audio player is 44.1 ksps, and the standard sampling rate for studio-quality audio is 48 ksps.

2.5 Quantization

In Figure 2.8, each pulse is expressed as a level from a finite number of predetermined levels; each such level can be represented by a symbol from a finite alphabet. The pulses in Figure 2.8 are called quantized samples. When the sample values are quantized to a finite set, this format can interface with a digital system. After quantization, the analog waveform can still be recovered, but not precisely; improved reconstruction fidelity of the analog waveform can be achieved by increasing the number of quantization levels (requiring increased system bandwidth).
2.6 Pulse Amplitude Modulation

There are two operations involved in the generation of the pulse amplitude modulation (PAM) signal:
(1) Instantaneous sampling of the message signal m(t) every T_s sec, where f_s = 1/T_s is selected according to the sampling theorem.
(2) Lengthening the duration of each sample obtained to some constant value T.
FIGURE 2.8 Flattop Quantization
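A minimal uniform quantizer of the kind described in Section 2.5 can be sketched as follows (the level count and input values are illustrative):

```python
# Map each sample to the nearest of L predetermined levels; the
# quantization error never exceeds half a step, q/2.
def make_quantizer(L=8, vmax=1.0):
    q = 2 * vmax / L                                   # step size
    levels = [-vmax + q / 2 + k * q for k in range(L)]
    return (lambda x: min(levels, key=lambda lv: abs(lv - x))), q

quant, q = make_quantizer()
for x in (-0.93, -0.2, 0.0, 0.41, 0.99):
    assert abs(quant(x) - x) <= q / 2 + 1e-12          # bounded error
print([quant(x) for x in (-0.93, 0.0, 0.41)])  # → [-0.875, -0.125, 0.375]
```

Doubling L halves q and thus halves the worst-case error, which is the bandwidth-for-fidelity trade mentioned above.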
FIGURE 2.9 Rectangular Pulse and Its Spectrum
These two operations are jointly referred to as sample and hold. One important reason for intentionally lengthening the duration of each sample is to avoid the use of an excessive channel bandwidth, because bandwidth is inversely proportional to pulse duration. The Fourier transform of the rectangular pulse h(t) is given as (see Figure 2.9):

H(f) = T sinc(fT) e^{−jπfT}.   (2.6)
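As a numerical check of the aperture-effect magnitude |H(f)| = T |sinc(fT)| implied by Equation 2.6 (normalized units assumed):

```python
# In-band amplitude rolls off as |sinc(fT)|. For T/Ts = 0.1, the worst-case
# loss at the band edge f = fs/2 stays under 0.5%, matching the text's claim
# that equalization can then be omitted.
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

Ts = 1.0                              # normalized sampling period
T = 0.1 * Ts                          # flattop pulse width, T/Ts = 0.1
loss = 1.0 - sinc((0.5 / Ts) * T)     # attenuation at the band edge fs/2
assert loss < 0.005                   # under 0.5% amplitude distortion
print(f"band-edge aperture loss: {loss:.4%}")
```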
We observe that by using flattop samples to generate a PAM signal, we introduce amplitude distortion as well as a delay of T/2. This effect is similar to the variation in transmission frequency that is caused by the finite size of the scanning aperture in television. The distortion caused by the use of PAM to transmit an analog signal is called the aperture effect. This distortion may be corrected by using an equalizer (see Figure 2.10). The equalizer decreases the in-band loss of the filter as the frequency increases, in such a manner as to compensate for the aperture effect. For T/T_s ≤ 0.1, the amplitude distortion is less than 0.5%, in which case equalization may be omitted altogether.

Example 4
Twenty-four voice signals are sampled uniformly and then time-division multiplexed. The sampling operation uses flattop samples with 1 μs duration. The multiplexing operation includes provision for synchronization by adding an extra pulse of sufficient amplitude, also of 1 μs duration. The highest frequency component of each voice signal is 3.4 kHz.
(1) Assuming a sampling rate of 8 kHz, calculate the spacing between successive pulses of the multiplexed signal.
(2) Repeat your calculations using the engineering version of the Nyquist rate (f_s = 2.2 f_m).

Solution:

T_s = 10^6/8000 = 125 μs.

For 25 channels (24 voice channels + 1 sync), the time allocated to each channel is 125/25 = 5 μs. Since the pulse duration is 1 μs, the time between pulses is (5 − 1) = 4 μs.

The engineering Nyquist rate is 7.48 kHz (2.2 × 3.4). In addition:

T_s = 10^6/7480 ≈ 134 μs.
T_c = 134/25 = 5.36 μs.

The time between pulses is 4.36 μs.
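The arithmetic of Example 4 can be reproduced with a short sketch (all quantities are those of the example):

```python
# Pulse spacing in a TDM frame: 24 voice channels + 1 sync pulse, 1-us pulses.
def pulse_spacing(fs_hz, channels=25, pulse_us=1.0):
    frame_us = 1e6 / fs_hz            # one sampling frame Ts, in microseconds
    slot_us = frame_us / channels     # time allocated per channel
    return slot_us - pulse_us         # gap between successive pulses

print(pulse_spacing(8000))   # 125/25 - 1 = 4.0 us
print(pulse_spacing(7480))   # ~4.35 us (the text rounds Ts to 134 us, giving 4.36)
```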
2.7 Sources of Corruption

The sources of corruption include sampling and quantization effects as well as channel effects, as described in the following list.

• Quantization noise: The distortion inherent in quantization is a roundoff or truncation error. The process of encoding the PAM waveform into a quantized waveform involves discarding some of the original analog information. This distortion is called quantization noise; the amount of such noise is inversely proportional to the number of levels used in the quantization process.
• Quantizer saturation: The quantizer allocates L levels to the task of approximating the continuous range of inputs with a finite set of outputs (see Figure 2.11). The range of inputs for which the difference between the input and output is small is called the operating range of the converter. If the input exceeds this range, the difference between the input and output becomes large, and we say that the converter is operating in saturation. Saturation errors are more objectionable than quantizing noise. Generally, saturation is avoided by use of automatic gain control (AGC), which effectively extends the operating range of the converter.
• Timing jitter: If there is a slight jitter in the position of the sample, the sampling is no longer uniform. The effect of the jitter is equivalent to frequency modulation (FM) of the baseband signal. If the jitter is random, a low-level wideband spectral contribution is induced whose properties are very close to those of the quantizing noise. Timing jitter can be controlled with very good power supply isolation and a stable clock reference.
• Channel noise: Thermal noise, interference from other users, and interference from circuit-switching transients can cause errors in detecting the pulses carrying the digitized samples. Channel-induced errors can degrade the reconstructed signal quality quite quickly. The rapid degradation of the output signal quality with channel-induced errors is called a threshold effect.
• Intersymbol interference: The channel is always band-limited. A band-limited channel spreads a pulse waveform passing through it. When the channel bandwidth is much greater than the pulse bandwidth, the spreading of the pulse will be slight. When the channel bandwidth is close to the signal bandwidth, the spreading will exceed a symbol duration and cause signal pulses to overlap. This overlapping is called intersymbol interference (ISI). ISI causes system degradation (higher error rates); it is a particularly insidious form of interference because raising the signal power to overcome the interference will not improve the error performance.

FIGURE 2.10 An Equalizer Application
For a uniform quantizer with step size q, the average quantization noise power is:

σ² = ∫_{−q/2}^{q/2} e² p(e) de = (1/q) ∫_{−q/2}^{q/2} e² de = q²/12   (2.7)
   = average quantization noise power.

The peak signal power is:

V_p² = (V_pp/2)² = (Lq/2)² = L²q²/4.

The ratio of peak signal power to average quantization noise power is:

S/N = (L²q²/4)/(q²/12) = 3L².   (2.8)

In the limit as L → ∞, the signal approaches the PAM format (with no quantization error), and the signal-to-quantization-noise ratio is infinite. In other words, with an infinite number of quantization levels, there is zero quantization error. Typically, L = 2^R, R = log₂ L, and q = 2V_p/L = 2V_p/2^R.

∴ σ² = (2V_p/2^R)²/12 = (1/3) V_p² 2^{−2R}.

Let P denote the average power of the message signal m(t); then:

(SNR)_o = P/σ² = (3P/V_p²) 2^{2R}.   (2.9)

The output SNR of the quantizer increases exponentially with an increasing number of bits per sample, R. An increase in R requires a proportionate increase in the channel bandwidth.

Example 5
We consider a full-load sinusoidal modulating signal of amplitude A that uses all the representation levels provided. The average signal power is (assuming a 1-Ω load):

P = A²/2.
σ² = (1/3) A² 2^{−2R}.
(SNR)_o = (A²/2) / ((1/3) A² 2^{−2R}) = (3/2) 2^{2R} = 1.8 + 6R dB.

FIGURE 2.11 Uniform Quantization
L     R [bits]   SNR [decibels]
32    5          31.8
64    6          37.8
128   7          43.8
256   8          49.8
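The tabulated values follow from (SNR)_o = 1.8 + 6R dB. A brute-force sketch that quantizes a full-load sine directly confirms the rule (quantizer structure and sample count are our own choices):

```python
# Measure the SNR of a full-load sine quantized with R bits by a uniform
# mid-rise quantizer (L = 2**R levels), and compare to 1.8 + 6R dB.
import math

def quantized_snr_db(R, N=100_000):
    L, A = 2 ** R, 1.0
    q = 2 * A / L
    sig = err = 0.0
    for n in range(N):
        x = A * math.sin(2 * math.pi * n / N)
        xq = math.floor(x / q) * q + q / 2           # mid-rise quantizer
        xq = min(max(xq, -A + q / 2), A - q / 2)     # clip to the L levels
        sig += x * x
        err += (x - xq) ** 2
    return 10 * math.log10(sig / err)

for R in (5, 6, 7, 8):
    assert abs(quantized_snr_db(R) - (1.8 + 6 * R)) < 0.6
print("1.8 + 6R dB rule confirmed for R = 5..8")
```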
2.8 Voice Communication

For most voice communication, very low speech volumes predominate; about 50% of the time, the voltage characterizing detected speech energy is less than 1/4 of the root-mean-square (rms) value. Large amplitude values are relatively rare; only 15% of the time does the voltage exceed the rms value. The quantization noise depends on the step size. When the steps are uniform in size, the quantization is called uniform quantization. Such a system would be wasteful for speech signals; many of the quantizing steps would rarely be used. In a system that uses equally spaced quantization levels, the quantization noise is the same for all signal magnitudes. Thus, with uniform quantization, the signal-to-noise ratio (SNR) is worse for low-level signals than for high-level signals. Nonuniform quantization can provide fine quantization of the weak signals and coarse quantization of the strong signals. Thus, in the case of nonuniform quantization, quantization noise can be made proportional to signal size. The overall SNR is improved by reducing the noise for the predominant weak signals, at the expense of increased noise for the rarely occurring strong signals. Nonuniform quantization can be used to make the SNR a constant for all signals within the input range. For voice, the signal dynamic range is 40 dB.

Nonuniform quantization is achieved by first distorting the original signal with a logarithmic compression characteristic and then using a uniform quantizer. For small-magnitude signals, the compression characteristic has a much steeper slope than for large-magnitude signals. Thus, a given signal change at small magnitudes will carry the uniform quantizer through more steps than the same change at large magnitudes. The compression characteristic effectively changes the distribution of the input signal magnitude so there is no preponderance of low-magnitude signals at the output of the compressor. After compression, the distorted signal is used as an input to a uniform quantizer. At the receiver, an inverse compression characteristic, called expansion, is used so that the overall transmission is not distorted. The whole process (compression and expansion) is called companding.

• The μ-law, used in North America, is as follows:

y/y_max = [ln(1 + μ|x|/x_max) / ln(1 + μ)] sgn x.   (2.10)

In equation 2.10, μ is a constant, x and y are the input and output voltages, μ = 0 represents uniform quantization, and μ = 255 is the standard value used in North America.

• The A-law, used in Europe, is as follows:

y/y_max = [A(|x|/x_max) / (1 + ln A)] sgn x,   0 ≤ |x|/x_max ≤ 1/A.   (2.11a)
y/y_max = [(1 + ln(A|x|/x_max)) / (1 + ln A)] sgn x,   1/A ≤ |x|/x_max ≤ 1.   (2.11b)

Here sgn x = +1 for x ≥ 0 and sgn x = −1 for x < 0. A is a positive constant, and A = 87.6 is the standard value used in Europe.

Example 6
The information in an analog waveform with maximum frequency f_m = 3 kHz is transmitted over an M-ary PCM system, where the number of pulse levels is M = 32. The quantization distortion is specified not to exceed ±1% of the peak-to-peak analog signal.
(1) What is the minimum number of bits/sample or bits/PCM word that should be used?
(2) What is the minimum sampling rate, and what is the resulting bit transmission rate?
(3) What is the PCM pulse or symbol transmission rate?

Solution: |e| ≤ pV_pp, where p is the fraction of the peak-to-peak analog voltage. The maximum quantization error is:

|e_max| = q/2 = V_pp/(2L).

Therefore:

V_pp/(2L) ≤ pV_pp, so 2^R = L ≥ 1/(2p) = 1/(2 × 0.01) = 50.
∴ R ≥ 5.64; use R = 6 bits/sample.

f_s = 2f_m = 6000 samples/sec. The bit transmission rate is 6 × 6000 = 36 kbps.
M = 2^b = 32, so b = 5 bits/symbol.
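The Example 6 numbers can be reproduced in a few lines; the final symbol rate (bit rate divided by bits per M-ary symbol) is the natural completion of part (3), not stated explicitly above:

```python
# Re-deriving Example 6: PCM word size, sampling rate, bit rate, and the
# implied M-ary symbol rate for fm = 3 kHz, M = 32, distortion <= 1% of Vpp.
import math

p = 0.01                                  # allowed fraction of Vpp
L_min = 1 / (2 * p)                       # quantization levels needed: 50
R = math.ceil(math.log2(L_min))           # bits per PCM word
fs = 2 * 3000                             # Nyquist sampling rate, samples/s
bit_rate = R * fs                         # bits/s
symbol_rate = bit_rate / math.log2(32)    # M-ary pulses/s

assert R == 6 and bit_rate == 36_000 and symbol_rate == 7200
print(R, bit_rate, symbol_rate)  # → 6 36000 7200.0
```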
Each symbol has length T. Therefore, the following is true:

E_b = V²T / log₂ M,   (4.19)

where E_b is the energy in either s₀(t) or s₁(t), that is, the bit energy:

P_e = (1/2) erfc(√(E_b/N_o)) = Q(√(2E_b/N_o)).   (4.14)

FIGURE 4.2 BPSK Modulator
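Equation 4.14 can be evaluated directly with the standard error function; the two forms coincide, and the E_b/N_o operating points below are illustrative:

```python
# BPSK bit error probability: Pe = 0.5*erfc(sqrt(Eb/No)) = Q(sqrt(2 Eb/No)).
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))   # Gaussian tail function

def pe_bpsk(ebno):                             # Eb/No as a linear ratio
    return 0.5 * math.erfc(math.sqrt(ebno))

for ebno_db in (0, 4, 8):
    ebno = 10 ** (ebno_db / 10)
    assert abs(pe_bpsk(ebno) - Q(math.sqrt(2 * ebno))) < 1e-12
    assert pe_bpsk(ebno) < 0.5                 # always better than guessing
print(f"Pe at Eb/No = 8 dB: {pe_bpsk(10 ** 0.8):.2e}")
```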
and the RMS noise σ as:

N_o = 2σ²T,   (4.20)

for noise in a Nyquist bandwidth. Also, the bandwidth efficiency of M-ary PSK is given as:

R/B_w = (log₂ M)/2,

where R is the data rate and B_w is the bandwidth. We will now examine several variations of phase shift keying.

4.4 Quadrature Phase Shift Keying

If we define four signals, each with a phase shift differing by 90°, then we have quadrature phase shift keying (QPSK). We have previously calculated the error rates for a general phase shift keying signal with M signal points. For QPSK, M = 4, so substituting M = 4 in equation 4.17, we get:

P_e = (1/log₂ 4) erfc[sin(π/4) ((log₂ 4) E_b/N_o)^{1/2}].
P_e = (1/2) erfc(√(E_b/N_o)) = Q(√(2E_b/N_o)).   (4.21)

The input binary bit stream {b_k}, b_k = ±1, k = 0, 1, 2, . . ., arrives at the modulator input at a rate 1/T bits/sec and is separated into two data streams, a_I(t) and a_Q(t), containing even and odd bits, respectively (Figure 4.3). The modulated QPSK signal s(t) is given as:

s(t) = (1/√2) a_I(t) cos(2πft + π/4) + (1/√2) a_Q(t) sin(2πft + π/4).   (4.22)
s(t) = A cos[2πft + π/4 + u(t)].   (4.23)

In equations 4.22 and 4.23, the following apply:

A = √[(1/2)(a_I² + a_Q²)] = 1.
u(t) = arctan[a_Q(t)/a_I(t)].

FIGURE 4.3 QPSK Modulator
In Figure 4.4, we show an NRZ data stream of 01110001. We then show the I (0100) and Q (1101) signals that are generated from the NRZ data stream. Notice that the I and Q signals have bit lengths that are twice as long as the NRZ data bits. In the figure, there is no delay between the NRZ data and the I and Q data. In a real implementation, there would be a 1- to 2-bit delay before the I and Q signals were generated; this delay accounts for the time needed for two bits to be received and decoded into the I and Q signals. Finally, we show the QPSK signal that is generated. To make the figure clearer, we chose a carrier frequency that is four times higher than the data rate. In real systems, the carrier frequency would be many times the data rate. The values u(t) = 0, ±π/2, π represent the four combinations of a_I(t) and a_Q(t). On the I/Q plane, QPSK represents four equally spaced points separated by π/2 (see Figure 4.5). Each of the four possible carrier phases represents two bits of data; thus, there are two bits per symbol. Since the symbol rate for QPSK is half the bit rate, twice the information can be carried in the same amount of channel bandwidth as compared with binary phase shift keying. This is possible because the two signals I and Q are orthogonal to each other and can be transmitted without interfering with each other. In QPSK, the carrier phase can change only once every 2T seconds. If, from one 2T interval to the next, neither bit stream changes sign, the carrier phase remains the same. If one component, a_I(t) or a_Q(t), changes sign, a phase shift of ±π/2 occurs. If both components change sign, however, then a phase shift of π (180°) occurs. When this 180° phase shift is filtered by the transmitter and receiver filters, it generates a change in amplitude of the detected signal and causes additional errors. Notice the 180° shift at the end of bit interval 4 in Figure 4.4.
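The mapping just described can be sketched in a few lines, using the same NRZ stream 01110001 (helper names are ours):

```python
# QPSK mapping: even bits feed aI, odd bits feed aQ (0 -> -1, 1 -> +1),
# and each (aI, aQ) pair selects one of four phases spaced by pi/2.
import math

bits = [0, 1, 1, 1, 0, 0, 0, 1]            # the NRZ stream from the text
lvl = [2 * b - 1 for b in bits]            # NRZ levels: 0 -> -1, 1 -> +1
aI, aQ = lvl[0::2], lvl[1::2]              # even-index and odd-index bits
phases = [math.atan2(q, i) for i, q in zip(aI, aQ)]   # u(t), one per symbol

assert len(phases) == len(bits) // 2       # two bits per symbol
assert all(round(math.degrees(p)) in (45, 135, -45, -135) for p in phases)
print([round(math.degrees(p)) for p in phases])  # → [135, 45, -135, 135]
```

The −135° → 135° step between symbols 3 and 4 is a ±π/2 change, while a jump such as 45° → −135° would be the troublesome 180° transition discussed above.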
FIGURE 4.4 QPSK Signals
4 Modulation and Demodulation Technologies

FIGURE 4.5 Signal Constellation for QPSK

If the two bit streams, I and Q, are offset by 1/2 a bit interval, then the amplitude fluctuations are minimized because the phase never changes by 180° (see Figures 4.6 and 4.7). This modulation scheme, offset quadrature phase shift keying (OQPSK), is obtained from conventional QPSK by delaying the odd-bit stream by a half-bit interval with respect to the even-bit stream. Thus, the range of phase transitions is 0° and ±90°, and a transition occurs twice as often but with half the intensity of the QPSK system. Comparing Figure 4.4 with Figure 4.7, notice that the Q signal is the same for both QPSK and OQPSK, but the I signal is delayed by 1/2 bit. Thus, the 180° phase change at the end of bit interval 4 of the QPSK signal is replaced by a 90° phase change at the end of bit interval 4 of the OQPSK signal. Also notice that phase changes occur more frequently with OQPSK. Although the phase changes will still cause amplitude fluctuations in the transmitter and receiver, they have a smaller magnitude. The bit error rate and bandwidth efficiency of QPSK and OQPSK are the same as for BPSK. In theory, quadrature (or offset quadrature) phase shift keying systems can improve the spectral efficiency of mobile communication. They do, however, require a coherent detector, and in a multipath fading environment, the use of coherent detection is difficult and often results in poor performance over noncoherently based systems. The coherent detection problem can be overcome by using a differential detector, but then OQPSK is subject to intersymbol interference that results in poor system performance. The spectrum of offset QPSK (OQPSK) is the following (Proakis, 1989):

P_QPSK(f) = T [sin(πfT)/(πfT)]².   (4.24)

FIGURE 4.6 OQPSK Encoding
FIGURE 4.7 OQPSK Signals

4.5 The π/4 Differential Phase Shift Keying

We can design a phase shift keying system to be inherently differential and thus solve the detection problems. The π/4 differential quadrature phase shift keying (π/4-DQPSK) is a compromise modulation method because the phase is restricted to fluctuate between ±π/4 and ±3π/4 rather than the ±π/2 phase changes of OQPSK. The method has a spectral efficiency about 20% greater than that of the Gaussian minimum shift keying modulation (see the upcoming section) used for DECT and GSM. The π/4-DQPSK is essentially a π/4-shifted QPSK with differential encoding of symbol phases. The differential encoding mitigates loss of data due to phase slips. However, differential encoding results in the loss of a pair of symbols when channel errors occur. This can be translated to approximately a 3-dB loss in E_b/N_o relative to coherent π/4-QPSK.

A π/4-shifted QPSK signal constellation (Figure 4.8) consists of symbols corresponding to eight phases. These eight phase points can be considered to be formed by superimposing two QPSK signal constellations, offset by 45 degrees relative to each other. During each symbol period, a phase angle from only one of the two QPSK constellations is transmitted. The two constellations are used alternately to transmit every pair of bits (di-bits). Thus, successive symbols have a relative phase difference that is one of the four phases shown in Table 4.1.

TABLE 4.1 Phase Transitions of π/4-DQPSK

Symbol    π/4-DQPSK phase transition
00        45°
01        135°
10        −45°
11        −135°

When the phase angles of π/4-shifted QPSK symbols are differentially encoded, the resulting modulation is π/4-shifted DQPSK. This can be done either by differential encoding of the source bits and mapping them onto absolute phase angles or, alternately, by directly mapping the pairs of input bits onto relative phases (±π/4, ±3π/4), as shown in Figure 4.8. The binary data stream entering the modulator, b_M(t), is converted by a serial-to-parallel converter into two binary streams, b_o(t) and b_e(t), before the bits are differentially encoded (see Figure 4.9):

I_k = I_{k−1} cos Δφ_k − Q_{k−1} sin Δφ_k.   (4.25)
Q_k = I_{k−1} sin Δφ_k + Q_{k−1} cos Δφ_k.   (4.26)

FIGURE 4.8 The π/4-DQPSK Modulation
FIGURE 4.9 Differential Encoding of π/4-DQPSK

The I_k and Q_k are the in-phase and quadrature components of the π/4-shifted DQPSK signal corresponding to the kth symbol. The amplitudes of I_k and Q_k are ±1, 0, and ±1/√2. Because the absolute phase of the (k−1)th symbol is φ_{k−1}, the in-phase and quadrature components can be expressed as:

I_k = cos φ_{k−1} cos Δφ_k − sin φ_{k−1} sin Δφ_k = cos(φ_{k−1} + Δφ_k).   (4.27)
Q_k = cos φ_{k−1} sin Δφ_k + sin φ_{k−1} cos Δφ_k = sin(φ_{k−1} + Δφ_k).   (4.28)

These component signals (I_k, Q_k) are then passed through baseband filters having a raised-cosine frequency response:

|H(f)| = 1,                                              0 ≤ |f| ≤ (1 − α)/(2T_s);
|H(f)| = √{(1/2)[1 − sin((πT_s/α)(|f| − 1/(2T_s)))]},    (1 − α)/(2T_s) ≤ |f| ≤ (1 + α)/(2T_s);
|H(f)| = 0,                                              |f| ≥ (1 + α)/(2T_s).   (4.29)

In equation 4.29, α is the roll-off factor, and T_s is the symbol duration. If g(t) is the response to the pulses I_k and Q_k at the filter input, then the resultant transmitted signal is given as:

s(t) = Σ_k g(t − kT_s) cos φ_k cos ωt − Σ_k g(t − kT_s) sin φ_k sin ωt.   (4.30)
s(t) = Σ_k g(t − kT_s) cos(ωt + φ_k),   (4.31)

where ω is the angular carrier frequency of transmission. The phase φ_k results from the differential encoding (i.e., φ_k = φ_{k−1} + Δφ_k). Depending on the detection method (coherent detection or differential detection), the error performance of π/4-DQPSK can either be the same as or 3 dB worse than QPSK.
4.6 Minimum Shift Keying

We previously showed that OQPSK is derived from QPSK by delaying the Q data stream by 1 bit, or T sec, with respect to the corresponding I data stream. This delay has no effect on the error rate or bandwidth. Minimum shift keying (MSK) is derived from OQPSK by replacing the rectangular pulse in amplitude with a half-cycle sinusoidal pulse. Notice that the in-phase and quadrature signals are delayed by the interval T from each other. The MSK signal is defined as:

s(t) = a_I(t) cos[π(t − 2nT)/(2T)] cos 2πft + a_Q(t) sin[π(t − 2nT)/(2T)] sin 2πft.   (4.32)
s(t) = cos[2πft + b_k(t) π(t − 2nT)/(2T) + φ_k].   (4.33)

In equations 4.32 and 4.33, the following apply:
• n = 0, 1, 2, 3, . . .
• b_k = +1 for a_I a_Q = 1.
• b_k = −1 for a_I a_Q = −1.
• φ_k = 0 for a_I = 1.
• φ_k = π for a_I = −1.

Note that since the I and Q signals are delayed by one bit interval, the cosine and sine pulse shapes in equation 4.32 are actually both in the shape of a sine pulse. Minimum shift keying has the following properties:
(1) For a modulation bit rate of R, the high tone is f_H = f + 0.25R when b_k = +1, and the low tone is f_L = f − 0.25R when b_k = −1.
(2) The difference between the high tone and the low tone is Δf = f_H − f_L = 0.5R = 1/(2T), where T is the bit interval of the NRZ signal.
(3) The signal has a constant envelope.

The error probability for an ideal minimum shift keying system is:

P_e = (1/2) erfc(√(E_b/N_o)) = Q(√(2E_b/N_o)),   (4.34)

which is the same as for QPSK/OQPSK. The minimum shift keying modulation makes the phase change linear and limited to ±π/2 over a bit interval T. This enables MSK to provide a significant improvement over QPSK. Because of the effect of the linear phase change, the power spectral density has low side lobes that help to control adjacent channel interference. However, the main lobe becomes wider than that of quadrature shift keying (see Figure 4.11). Thus, it becomes difficult to satisfy the CCIR-recommended value of −60 dB side-lobe power levels. The power spectral density for MSK can be shown (Proakis, 1989) to be:

P_MSK(f) = (16T/π²) [cos 2πfT / (1 − 16f²T²)]².   (4.35)

Figure 4.11 shows the spectral density for both MSK and QPSK. Notice that while the first null in the side lobes occurs at a data rate of R for MSK and R/2 for QPSK, the overall side lobes are lower for MSK. Thus, MSK is more spectrally efficient than QPSK.

The detector for MSK (Figure 4.10) is slightly different than for PSK. We must generate the matched filter equivalent to the two transmitted in-phase and quadrature signals. These two reference signals are as follows:

i(t) = cos[πt/(2T)] cos ωt.   (4.36)
q(t) = sin[πt/(2T)] sin ωt.   (4.37)

We multiply the received signal by i(t) and q(t) and perform an integration with detection at the end of the bit interval; we then dump the integrator output. This is the standard integrate-and-dump matched filter with the reference signals i(t) and q(t) matched to the received waveform. At the end of the bit interval, we make a decision on the state of the bit (+1 or −1) and output the decision as our detected bit. We do this for both the I channel and the Q channel, with I and Q out of phase by T sec to account for the differential nature of MSK.
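Property 2 above (tone spacing Δf = 0.5R) in a few lines; the bit rate is the GSM figure quoted later in the text, and the carrier is illustrative:

```python
# MSK tone spacing is exactly half the bit rate: fH - fL = 0.5R = 1/(2T).
R = 270_800              # bit rate, bits/s (GSM channel rate, 270.8 kbps)
T = 1.0 / R              # bit interval, s
f = 900e6                # illustrative carrier frequency, Hz

f_high = f + 0.25 * R    # tone for bk = +1
f_low = f - 0.25 * R     # tone for bk = -1
spacing = f_high - f_low
assert abs(spacing - 0.5 * R) < 1e-3
assert abs(spacing - 1.0 / (2 * T)) < 1e-3
print(f"MSK tone spacing = {spacing:.1f} Hz")
```

This is the minimum tone separation for which the two FSK waveforms remain orthogonal over one bit interval, which is where the name "minimum" shift keying comes from.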
4.7 Gaussian Minimum Shift Keying

In minimum shift keying, we replace the rectangular data pulse with a sinusoidal pulse. Obviously, other pulse shapes are possible. A Gaussian-shaped impulse-response filter generates a signal with low side lobes and a narrower main lobe than the rectangular pulse. Since the filter theoretically has output before input, it can only be approximated by a delayed and shaped impulse response that has a Gaussian-like shape. This modulation is called Gaussian minimum shift keying (GMSK). The relationship between the premodulation filter bandwidth B and the bit period T defines the bandwidth of the system. If B > 1/T, then the waveform is essentially minimum shift keying. When B < 1/T, intersymbol interference occurs because the signal cannot reach its next position within the symbol time. However, intersymbol interference can be traded for bandwidth reduction if the system has sufficient
Vijay K. Garg and Yih-Chen Wang

FIGURE 4.10 Optimum MSK Detector

FIGURE 4.12 GMSK Modulator Using Frequency Modulator
signal-to-noise ratio. GSM designers use a BT of 0.3 with a channel data rate of 270.8 kbps. DECT designers adopted BT = 0.5 with a data rate of 1.152 Mbps. The choice of BT = 0.3 in GSM is a compromise between bit error rate and out-of-band interference, since the narrow filter increases the intersymbol interference and reduces the signal power. When GMSK was first proposed (Murota and Hirade, 1981), the modulator was based on using frequency modulation (FM), illustrated by Figure 4.12. Since newer integrated circuits are available that enable an I and Q modulator to be easily constructed, a more modern method to generate the GMSK signal is shown in Figure 4.13. Filtering of the NRZ data by a Gaussian low-pass filter generates a signal that is no longer constrained to one bit interval; intersymbol interference is thus introduced by the modulator.
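The effect of the BT product on the premodulation pulse can be sketched as below. The impulse-response form used here (with δ = √(ln 2)/(2πB) for a 3-dB bandwidth B) is the commonly cited Gaussian-filter expression; the span and oversampling factor are arbitrary illustration choices.

```python
import numpy as np

def gaussian_pulse(bt, sps=16, span=4):
    """Truncated Gaussian premodulation filter for bandwidth-bit-period product bt.

    Time is normalized to the bit period T = 1; the response is truncated to
    +/- span bit periods and normalized to unit DC gain.
    """
    delta = np.sqrt(np.log(2.0)) / (2.0 * np.pi * bt)  # since T = 1, B = bt
    t = np.arange(-span, span, 1.0 / sps)
    h = np.exp(-t**2 / (2.0 * delta**2))
    return h / h.sum()

h_gsm, h_dect = gaussian_pulse(0.3), gaussian_pulse(0.5)

def half_max_width(h):
    return int((h > h.max() / 2).sum())  # crude pulse-width measure in samples

# The narrower GSM filter (BT = 0.3) produces a longer pulse, hence more ISI.
```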
FIGURE 4.13 GMSK Modulator Using Phase Modulator

The bit error rate performance of GMSK can be expressed (Murota and Hirade, 1981) as:

Pe = (1/2) erfc(√(aEb/2N0)) = Q(√(aEb/N0)), (4.38)

where a depends on the BT product. For GSM, where BT = 0.3, a ≈ 0.9; for DECT, where BT = 0.5, a ≈ 0.97. For BT = ∞, a = 2, which is the case of MSK.

FIGURE 4.11 Frequency Spectral Density of QPSK and MSK
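Equation 4.38 is easy to evaluate numerically. The sketch below checks that its erfc and Q-function forms agree and compares the a-factors quoted in the text; the Eb/N0 value is an arbitrary example, not one from the text.

```python
import math

def q_func(x):
    # Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def gmsk_pe(a, eb_no):
    # Pe = (1/2) erfc(sqrt(a * Eb / (2 * N0)))   (equation 4.38)
    return 0.5 * math.erfc(math.sqrt(a * eb_no / 2.0))

eb_no = 10.0                    # example Eb/N0 (10 dB), an assumed value
pe_gsm = gmsk_pe(0.9, eb_no)    # GSM, BT = 0.3
pe_msk = gmsk_pe(2.0, eb_no)    # MSK limit, a = 2
```

As expected, the smaller a-factor of the narrower GSM filter costs some error-rate performance relative to MSK.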
4 Modulation and Demodulation Technologies
FIGURE 4.14 GMSK Demodulator

FIGURE 4.15 Carrier Recovery for PSK
Demodulation of GMSK, shown in Figure 4.14, requires multiplication by the in-phase and quadrature carrier signals followed by a low-pass filter with a Gaussian shape. At the end of the bit interval, we make a decision on the state of the bit (+1 or −1) and output the decision as our detected bit. As with MSK, we do this for both the I channel and the Q channel, with I and Q out of phase by T seconds to account for the differential nature of GMSK. We ignore the intersymbol interference caused by the longer-than-one-bit-interval nature of the Gaussian transmitted pulse.
4.8 Synchronization

The demodulation of a signal requires that the receiver be synchronized with the transmitted signal, as received at the input of the receiver. The synchronization must be for:

. Carrier synchronization: The receiver is exactly on the same frequency that was transmitted, adjusted for the effects of Doppler shifts.
. Bit synchronization: The receiver is aligned with the beginning and end of each bit interval.
. Word synchronization: The receiver is aligned with the beginning and end of each word in the transmitted signal.
If the synchronization in the receiver is not precise for any of the above operations, then the bit error rate of the receiver will not be as described by the equations in the previous sections. The design of a receiver is an area that standards have traditionally not specified; it is usually an art that enables one company to offer better performance in its equipment than a competitor's. The methods of achieving synchronization discussed in this section are the traditional methods. A particular receiver may or may not use any of these methods; many companies use proprietary methods.

For PSK, the carrier signal changes phase every bit interval. If we multiply the received signal by a factor N, an integer, we can convert all of the phase changes in the multiplied signal to multiples of 360°. The new signal then has no phase changes, and we can recover it using a narrowband phase-locked loop (PLL). After the PLL recovers the multiplied carrier signal, it is divided by N to recover the carrier at the proper frequency. By suitable choice of digital dividing circuits, it is possible to get a precise 90° difference in the outputs of two dividers and thus generate both the cos ωt and the sin ωt signals needed by the receiver. There are also some down-converter integrated circuits that contain a precise phase shift network. The carrier recovery is typically performed at some lower intermediate frequency rather than directly at the received frequency. For BPSK, we would need an N of 2, but an N of 4 would be used to enable both the sine and cosine terms to be generated. For QPSK and its derivatives, an N of 4 is necessary, and for π/4-DQPSK, an N of 8 would be needed.

After we recover the carrier, we must reestablish the carrier phase to determine the values of the received bits. Somewhere in the transmitted signal there must be a known bit pattern that we can use to determine the carrier phase.
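The multiply-by-N idea can be demonstrated numerically for BPSK with N = 2: squaring the received signal turns the 0°/180° phase changes into 0°/360°, leaving a clean spectral line at twice the carrier. The sample rate, carrier, and bit rate below are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f, rb = 8_000, 1_000, 500          # assumed sample rate, carrier, bit rate (Hz)
sps = fs // rb                         # samples per bit
bits = rng.integers(0, 2, 64)
phase = np.repeat(bits * np.pi, sps)   # BPSK: phase 0 or pi in each bit interval
t = np.arange(bits.size * sps) / fs
s = np.cos(2.0 * np.pi * f * t + phase)

sq = s * s                             # "multiply by N" with N = 2 (squaring)
spectrum = np.abs(np.fft.rfft(sq))
spectrum[0] = 0.0                      # ignore the DC term of cos^2
freqs = np.fft.rfftfreq(sq.size, 1.0 / fs)
recovered = freqs[np.argmax(spectrum)]  # strong line at 2f; divide by 2 for carrier
```

A narrowband PLL locked to this 2f line, followed by a divide-by-2, recovers the carrier as described above.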
The bit pattern can be alternating zeros and ones that we use to determine bit timing, or it could be some other known pattern. The advantage of differential keying (e.g., π/4-DQPSK) is that knowledge of the absolute carrier phase is not important; only the change in carrier phase from one symbol to the next is important.

MSK is a form of frequency modulation; therefore, a different method of carrier recovery is needed. In Figure 4.16, the MSK signal has frequency f and deviation ∆f = 1/2T, with frequency components at f + ∆f/2 and f − ∆f/2. We first multiply the signal by 2, thus doubling the deviation and generating strong frequency components (Pasupathy, 1979) at 2f + ∆f and 2f − ∆f. We use two PLLs to recover these two signals:
FIGURE 4.16 Carrier Recovery for MSK
s1(t) = cos(2πft + π∆f t). (4.39)

s2(t) = cos(2πft − π∆f t). (4.40)

We then take the sum and difference of s1(t) and s2(t) to generate the desired i(t) and q(t) signals:

i(t) = s1(t) + s2(t) = cos(2πft + π∆f t) + cos(2πft − π∆f t) = 2 cos 2πft cos π∆f t. (4.41)

q(t) = s1(t) − s2(t) = cos(2πft + π∆f t) − cos(2πft − π∆f t) = −2 sin 2πft sin π∆f t. (4.42)

The identical circuit can also be used to recover a carrier for a GMSK system (de Buda, 1972; Murota and Hirade, 1981; Pasupathy, 1979).

The next step is to recover data timing or bit synchronization. Most communication systems transmit a sequence of zeros and ones in an alternating pattern to enable the receiver to maintain bit synchronization. A PLL operating at the bit rate is used to maintain timing. Once the PLL is synchronized on the received 101010 . . . pattern (Figure 4.17), it will remain synchronized on any other pattern except for long sequences of all zeros or all ones.

FIGURE 4.17 Generalized Data Timing Recovery Circuit

MSK uses an additional circuit to achieve bit timing (Figure 4.18). The s1(t) and s2(t) signals are multiplied together and low-pass filtered:

s1(t)s2(t) = cos(2πft + π∆f t) cos(2πft − π∆f t) = 0.5 cos 4πft + 0.5 cos 2π∆f t. (4.43)

low-pass filtered[s1(t)s2(t)] = 0.5 cos 2π∆f t = 0.5 cos(πt/T). (4.44)

FIGURE 4.18 MSK Data Timing Recovery Circuit

The output of the low-pass filter is a clock signal at one-half the bit rate, which is the correct rate for demodulation of the signal because the I and Q signals are at one-half of the bit rate. Word synchronization, or framing, is determined by correlating on a known bit pattern being transmitted. The receiver performs an autocorrelation to determine when the bit pattern is received and outputs a framing pulse (Figure 4.19).

FIGURE 4.19 Generalized Framing Recovery Circuit

4.9 Equalization

The received signal in a mobile radio environment travels from the transmitter to the receiver over many paths. The signal thus fades in and out and undergoes distortion because of the multipath nature of the channel. For a transmitted signal s(t) = a(t) cos(ωt + θ(t)), we can represent the received signal, r(t), as:

r(t) = Σ_{i=0}^{n} [x_i(t − t_i) a(t − t_i) cos(ω(t − t_i) + θ(t − t_i)) + y_i(t − t_i) a(t − t_i) sin(ω(t − t_i) + θ(t − t_i))]. (4.45)
The received signal has Rayleigh fading statistics. But what are the characteristics of the x and y terms in equation 4.45? If the transmitted signal is narrow enough compared to the fine multipath structure of the channel, then the individual fading components, x_i(t) and y_i(t), will also have Rayleigh statistics. If a particular path is dominated by a reflection off a mountain, hill, building, or similar structure, then the statistics of that path may be Rician rather than Rayleigh. If the range of t_i is small compared to the bit interval, then little distortion of the received signal occurs. If the range of t_i is greater than a bit interval, then the transmissions from one bit will interfere with the transmissions of another bit. This effect is called intersymbol interference. Spread spectrum systems transmit wide-bandwidth signals and attempt to recover the signals in each of the paths and add them together in a diversity receiver. In this section, we focus on the transmission of narrowband signals. Therefore, the multipath signals are interference to the desired signal. We need a receiver that removes the effects of the multipath signal or cancels the undesired multipaths.
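The Rayleigh claim can be checked with a quick Monte Carlo sketch: summing independent in-phase (x) and quadrature (y) path components, as in equation 4.45 for an unmodulated carrier, produces an envelope whose mean-square value matches the Rayleigh model. The path count and trial count are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, trials = 8, 20_000
x = rng.normal(0.0, 1.0, (trials, n_paths))    # in-phase path components x_i
y = rng.normal(0.0, 1.0, (trials, n_paths))    # quadrature path components y_i
comp_i, comp_q = x.sum(axis=1), y.sum(axis=1)  # composite in-phase / quadrature
envelope = np.hypot(comp_i, comp_q)            # Rayleigh-distributed envelope
mean_sq = float(np.mean(envelope**2))          # expect 2 * n_paths for unit-variance paths
```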
FIGURE 4.20 Block Diagram of Equalizer
Another method to describe the multipath channel is to characterize the channel by an impulse response h(t). The received signal is then written as:

r(t) = ∫_{−∞}^{∞} s(τ)h(t − τ)dτ. (4.46)

We can then recover s(t) if we can determine a transfer function h⁻¹(t), the inverse of h(t). One difficulty in performing the inverse function is that it is time varying. The circuit that performs the inverse transfer function is called an equalizer, as shown in Figure 4.20. Generally, we are interested in minimizing the intersymbol interference at the time when we do our detection (the sample time in a sample-and-hold circuit). Thus, we can model the equalizer as a series of equal time delays (rather than random delays, as in the general case), with the shortest delay interval being a bit interval. We then construct a receiver that determines r_eq, the equalized signal:

r_eq(t) = Σ_{i=0}^{n} a_i(t − t_i) r(t − t_i). (4.47)
We use our equalized signal r_eq as the input to our detector to determine the value of the kth transmitted bit. We must adjust the values of a_i to achieve some measure of performance of the receiver. A typical measure is to minimize the mean square error between the value of the detected bit at the output of the summer and the output of the detector. Other measures for the equalizer are possible. For more details, see Proakis (1989).
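A minimal sketch of adjusting the tap gains a_i by minimizing the mean square error is the least-mean-squares (LMS) rule below. The three-tap channel, step size, and training length are assumed illustration values, and this sketch adapts against known training bits rather than detector decisions.

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.9, 0.3])            # assumed multipath channel (causes ISI)
bits = rng.choice([-1.0, 1.0], 5_000)    # transmitted +/-1 symbols
r = np.convolve(bits, h)[:bits.size]     # received signal with intersymbol interference

n_taps, mu = 7, 0.02                     # equalizer length and LMS step size
w = np.zeros(n_taps); w[0] = 1.0         # tap gains a_i, initialized to pass-through

errors, count = 0, 0
for k in range(n_taps - 1, bits.size):
    x = r[k - n_taps + 1 : k + 1][::-1]  # tap-delay-line contents
    y = float(w @ x)                     # equalized sample r_eq
    w += mu * (bits[k] - y) * x          # LMS: adjust taps to reduce mean square error
    if k >= 3_000:                       # measure after convergence
        errors += int(np.sign(y) != bits[k]); count += 1

raw_ber = float(np.mean(np.sign(r[2:]) != bits[2:]))  # decisions without equalization
eq_ber = errors / count
```

For this channel, hard decisions on the raw signal fail often, while the converged equalizer essentially removes the ISI.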
4.10 Summary of Modulation and Demodulation Processes

In this section, we studied the modulation and demodulation processes that are applicable to GSM and DECT (and its North American variant, PWT). Since baseband signals can only be transmitted over short distances with wires and require very long antennas to transmit them without wires, the baseband signals are modulated onto radio frequency carriers. First, we studied amplitude shift keying (ASK) to determine its probability of error and methods for demodulation. From ASK, we studied phase modulation and the specific form (π/4-DQPSK) used for PWT. We showed that π/4-DQPSK has the same shaped error rate curve as ASK, but the signal-to-noise ratio definition is different. We then studied MSK, a form of frequency modulation, as a first step toward GMSK, which is used for DECT and GSM. Based on the literature, the error performance for GMSK also has the same shaped curve as ASK with the proper definition of a correcting factor a. For GSM, where BT = 0.3, a ≈ 0.9; for DECT, where BT = 0.5, a ≈ 0.97. We then presented methods and block diagrams for recovery of the clocks used for the carrier frequency and phase, the bit timing, and the framing for various modulation methods. While our block diagrams are based on hardware approaches, they could just as easily be implemented in software in many cases. We also explained that many manufacturers have proprietary methods for determining clock recovery techniques. Finally, we studied equalization techniques.
5 Data Communication Concepts

Vijay K. Garg, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Yih-Chen Wang, Lucent Technologies, Naperville, Illinois, USA

5.1 Introduction to Data Networking: 5.1.1 Fundamental Concepts and Architecture of Data Communication and Networking . 5.1.2 Advantages of Digital Transmission . 5.1.3 Asynchronous and Synchronous Transmission . 5.1.4 Sharing of Network Nodes and Transmission Facility Resources . 5.1.5 Switching
5.1 Introduction to Data Networking

The explosive growth of the Internet and its applications has made the use of data networks essential to many people's daily lives. Advanced data networking technologies will soon combine data, voice, and multimedia applications on a single platform for economic reasons. We introduce fundamental concepts of data communication and networking and also describe important data networking technologies that will shape the future of communication networks.
5.1.1 Fundamental Concepts and Architecture of Data Communication and Networking

The changing face of the communication industry began with the marriage of computer and communication technologies in 1984, when AT&T was divided into seven Regional Bell Operating Companies (RBOCs). The change has shaped the fabric of society: paper mail has given way to electronic mail, in-store shopping to online shopping, and libraries to digital libraries. All these changes are the results of advanced communication and networking technologies. In this section, we explain the concepts and terms that will help the reader understand the different communication and networking technologies described in the later subsections.

Transmission, Data, and Signals

Transmission is the communication of data that is propagated and processed from one point in a network to another point in the network by means of electrical, electromagnetic, or optical signals. The terms digital and analog are often used in the contexts of transmission, data, and signals. This leads to the following six definitions:

. Digital data is a source entity whose meaning has discrete values, such as text or integers.
. Analog data is a source entity whose meaning has continuous values, such as voice or video.
. Digital signal is a sequence of voltage pulses that can only be transmitted over certain wired media.
. Analog signal is a continuously varying electromagnetic wave that can be transmitted over a variety of wired and wireless media.
. Digital transmission is a transmission system that is concerned with the content of the data it transmits or receives. It can carry either digital data or analog data with either a digital signal or an analog signal.
. Analog transmission is a transmission system that is not concerned with the content of the data it transmits or receives. It can carry either analog data or digital data with an analog signal only.
Combinations of Data and Signals

There are four possible combinations of data and signals. For each combination, this discussion provides examples of applications and techniques to encode data. Note that a digital transmission system can use each of the four combinations for its applications, but an analog transmission system can only use an analog signal to carry either digital data or analog data.
Digital Data and Analog Signal

One example of this combination is a PC or terminal that connects to a remote host via a telephone network with a voice-grade modem. The modem modulates digital data onto the characteristics of an analog carrier signal. These characteristics are the frequency, amplitude, and phase of the carrier signal. The modem uses one or any combination of the following three basic techniques, as shown in Figure 5.1:

. Amplitude-shift keying (ASK): In this technique, binary values are represented by two different amplitudes of a carrier signal. The binary value 0 could be represented by a lower amplitude, while a binary 1 could be represented by a higher amplitude.
. Frequency-shift keying (FSK): In this technique, the two binary values are represented by two different frequencies. In full-duplex FSK, one set of frequencies can be designated for one direction of the communication, while the other set is designated for the opposite direction. For example, a set of frequencies (1070 Hz, 1270 Hz) is used for one direction of the communication, and the other set (2025 Hz, 2225 Hz) is used for the other direction.
. Phase-shift keying (PSK): In this technique, the phase of the carrier signal is shifted by a certain amount to represent the binary value. In the simplest case, no shift can represent binary 0, while a shift of 180° can represent binary 1. If a carrier signal can produce four phases (no shift, 90°, 180°, and 270°), then each phase can represent two bits: no shift represents 00, 90° represents 01, 180° represents 10, and 270° represents 11.

Analog Data and Digital Signal

An example of this combination is a T1 carrier carrying a voice signal from one local switching office to another via a tandem switch. The voice is converted into a stream of digital bits that are carried by a digital signal with some encoding scheme. The receiving system then converts the stream of digital bits back to the original analog voice. Another example of this combination is compact disc audio. The basic technique for this combination is called pulse code modulation (PCM). PCM takes samples of the analog data at a sampling rate that is twice the highest signal frequency and then assigns each sample to a binary code. In Figure 5.2, the analog data is quantized into 8 steps (0 to 7), and each binary code has 3 bits assigned to it. According to Nyquist's sampling theorem, the samples contain all the information of the original signal, which can be reconstructed from these samples. The generated bit stream will be 011110111111110100011100110.

Digital Data and Digital Signal

One example of this combination is a text file transfer via a baseband Ethernet local area network, where a digital signal is used. There are many different digital encoding techniques for this combination. The techniques are summarized below:
. Nonreturn to zero level (NRZ-L)
. Nonreturn to zero inverted (NRZI)
. Manchester encoding
. Differential Manchester encoding
. Bipolar with 8-zero substitution (B8ZS)
. High-density bipolar 3 zeros (HDB3)
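As a sketch of one of these techniques, Manchester encoding represents each bit by a mid-bit transition. The polarity convention below (0 as high-to-low, 1 as low-to-high) follows IEEE 802.3; some texts use the opposite assignment.

```python
def manchester_encode(bits):
    """Encode bits as pairs of half-bit levels with a guaranteed mid-bit transition."""
    out = []
    for b in bits:
        out.extend([0, 1] if b else [1, 0])  # 1: low->high, 0: high->low
    return out

encoded = manchester_encode([1, 0, 1, 1])
# Every bit yields a transition, so the receiver can recover its clock from the signal.
```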
FIGURE 5.1 Three Basic Techniques: (A) Amplitude Shift Keying, (B) Frequency Shift Keying, (C) Phase Shift Keying

FIGURE 5.2 Pulse Code Modulation
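A 3-bit PCM quantizer like the one in Figure 5.2 can be sketched as follows. The sample values below are made-up illustration inputs, not the waveform from the figure.

```python
def pcm_encode(samples, n_bits=3, vmin=0.0, vmax=7.0):
    """Quantize each sample to 2**n_bits uniform steps and emit its binary code."""
    top = 2 ** n_bits - 1
    codes = []
    for s in samples:
        step = round((s - vmin) * top / (vmax - vmin))  # nearest quantization step
        step = min(top, max(0, step))                   # clamp to the 0..7 range
        codes.append(format(step, f"0{n_bits}b"))
    return "".join(codes)

bit_stream = pcm_encode([3.2, 6.1, 7.0, 5.8])  # hypothetical sample values
```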
Analog Data and Analog Signal

One example of this combination is analog CATV or the traditional telephone network. The techniques used in this combination are similar to the modulations of a modem. The modulation is required to provide a higher frequency for more efficient transmission or to allow multiple-user transmission using frequency division multiplexing. Three modulation techniques are used: amplitude modulation (AM), phase modulation (PM), and frequency modulation (FM).
5.1.2 Advantages of Digital Transmission

The industry has been gradually converting analog transmission systems into digital transmission systems because of the cost, quality, and capacity of digital transmission. The evolution of the integrated services digital network (ISDN) and the digital subscriber line (DSL) to replace the analog local loop, and the conversion of analog CATV into digital CATV, are examples of conversion to a digital transmission system. The following is a list of benefits of a digital transmission system:

. The digital transmission system is cheaper. The techniques for designing and manufacturing digital circuitry are more advanced and cheaper. The very large scale integration (VLSI) technique allows mass production of circuitry at lower cost.
. Transmission quality is higher. With a digital transmission system, transmission noise is not accumulated with the use of a repeater. A repeater recovers an incoming signal based on its content and regenerates a fresh signal; an amplifier used by an analog transmission system amplifies the noise component as well, so noise accumulates in an analog transmission system.
. A digital transmission system has higher capacity. Time division multiplexing (TDM) instead of frequency division multiplexing (FDM) is used in a digital transmission system. TDM shares the entire bandwidth among all users, and the waste of bandwidth can be reduced to a minimum. The granularity of a time slot is also more flexible than a subfrequency of an FDM. Therefore, bandwidth utilization is much higher for a digital transmission system, allowing digital CATV to accommodate more than 100 channels.
. The system enables security and privacy. The most important tool for transmission security and privacy is encryption. Encryption uses a cipher algorithm to replace information with ciphertext (the output of a cipher algorithm) using some mathematical algorithm. A digital transmission system makes the manipulation of digital information much easier.
. The system enables easier integration of multiple data types. The digital transmission system deals with digital information only. This makes for easier control and integration of multimedia applications.
Bit Rate Per Second Versus Baud Rate

Many people use bits per second (bps) and baud rate interchangeably, which is not correct. The bps is defined as the number of binary bits transmitted per second, while the baud rate is defined as the number of signal elements or states transmitted per second. A frequency, amplitude, or phase can be considered a signaling element. If a modem uses only two frequency signaling elements in the transmission, the baud rate of the modem is equal to the bps, where the higher frequency can represent a binary 1 and the lower frequency can represent a binary 0. If a modem uses four frequencies as signaling elements, then each frequency can represent two binary bits. For example, 00 corresponds to the first frequency, 01 corresponds to the second frequency, 10 corresponds to the third frequency, and 11 corresponds to the last frequency. In this case, the bps is equal to twice the baud rate.

Maximum Data Rate

Some factors can limit the maximum transmission rate of a transmission system. Nyquist's theorem specifies the maximum data rate for a noiseless condition, whereas the Shannon theorem specifies the maximum data rate under a noise condition. The Nyquist theorem states that a signal with bandwidth B can be completely reconstructed if 2B samples per second are used. The theorem further states that:

Rmax = 2B log2 M, (5.1)

where Rmax is the maximum data rate and M is the number of discrete levels of the signal. For example, if a transmission system like the telephone network has 3000 Hz of bandwidth, then the maximum data rate = 2 × 3000 × log2 2 = 6000 bits/sec (bps). The Shannon theorem states the maximum data rate as follows:

Rmax = B log2(1 + S/N), (5.2)

where S is the signal power and N is the noise power. For example, if a system has bandwidth B = 3 kHz with a 30-dB signal-to-noise ratio, then the maximum data rate = 3000 log2(1 + 1000) ≈ 29,902 bps.
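Both limits are one-line computations; the sketch below reproduces the two worked examples.

```python
import math

def nyquist_max_rate(bandwidth_hz, levels):
    # Rmax = 2 * B * log2(M)   (equation 5.1, noiseless channel)
    return 2.0 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz, snr_db):
    # Rmax = B * log2(1 + S/N) (equation 5.2); S/N converted from decibels
    return bandwidth_hz * math.log2(1.0 + 10.0 ** (snr_db / 10.0))

nyquist_rate = nyquist_max_rate(3_000, 2)   # binary signaling over 3000 Hz
shannon_rate = shannon_capacity(3_000, 30)  # 3 kHz with a 30-dB S/N
```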
5.1.3 Asynchronous and Synchronous Transmission

To determine which bits constitute an octet (character), two transmission techniques are used:

. Asynchronous transmission: In this technique, each character is enclosed by a start bit, 7 or 8 data bits, an optional parity bit, and stop bits. The gap between character transmissions is not necessarily fixed in length of time. The technique is cheaper but less efficient. For example, assume that there are 1 stop bit and 1 start bit in an asynchronous transmission with 8 bits of character; the efficiency is only 80% (fixed).
. Synchronous transmission: In this technique, all characters are blocked together and transmitted without a gap between two characters. It requires more complicated hardware to handle buffering and blocking, but it is more efficient. For example, assume that there are 240 characters and 3 SYN characters in one block. Then the number of data bits = 240 × 8 = 1920 bits, the number of SYN character bits = 3 × 8 = 24 bits, and the efficiency = 1920/1944 ≈ 99% (variable); the larger the block, the higher the efficiency.
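The two efficiency examples above can be reproduced directly:

```python
def async_efficiency(data_bits=8, start_bits=1, stop_bits=1, parity_bits=0):
    # Fraction of transmitted bits that carry data for one framed character.
    return data_bits / (data_bits + start_bits + stop_bits + parity_bits)

def sync_efficiency(chars=240, syn_chars=3, bits_per_char=8):
    # Fraction of the block's bits that are data rather than SYN overhead.
    data = chars * bits_per_char
    return data / (data + syn_chars * bits_per_char)

async_eff = async_efficiency()  # 8 data bits out of 10 transmitted
sync_eff = sync_efficiency()    # 1920 data bits out of 1944
```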
Transmission Impairments

The signal received may differ from the signal transmitted. The effect degrades the signal quality for analog signals and introduces bit errors for digital signals. There are three types of transmission impairments: attenuation, delay distortion, and noise.

(1) Attenuation: This impairment is caused by the strength of a signal degrading with distance over a transmission link. Three factors are related to attenuation:
. The received signal must have sufficient strength to be intelligibly interpreted by a receiver. An amplifier or a repeater is needed to boost the strength of the signal.
. A signal should be maintained at a level higher than the noise so that errors will not be generated. Again, an amplifier or a repeater can be used.
. Attenuation is an increasing function of frequency, with more attenuation at higher frequencies than at lower frequencies. An equalizer can smooth out the effect of attenuation across frequency bands, and an amplifier can amplify high frequencies more than low frequencies.

(2) Delay distortion: The velocity of propagation of a signal through a guided medium varies with frequency; it is fastest at the center of the frequency band and falls off at the two band edges. Equalization techniques can be used to smooth out the delay distortion. Delay distortion is a major reason for the timing jitter problem, where the receiver clock deviates from the incoming signal in a random fashion so that an incoming signal might arrive early or late.

(3) Noise: This impairment occurs when an unwanted signal is inserted between transmission and reception. There are four types of noise:
. Thermal noise: This noise is a function of temperature and bandwidth. It cannot be eliminated. The thermal noise is proportional to the temperature and bandwidth, as shown in the equation: thermal noise = K (constant) × temperature × bandwidth.
. Intermodulation noise: This noise is caused by nonlinearity in the transmission system; frequencies f1 and f2 could produce a signal at f1 + f2 or |f1 − f2| and affect the frequencies at f1 + f2 or |f1 − f2|.
. Cross talk: This type of noise is caused by electrical coupling in a nearby twisted pair or by an unwanted signal picked up by microwave antennas. For example, sometimes when you are on the telephone, you might hear someone else's conversation due to the cross talk problem.
. Impulse noise: Irregular pulses of short duration and relatively high amplitude cause impulse noise. This noise is also caused by lightning and faults in the communication system. It is a minor annoyance for analog data, but it is a serious problem for digital data. For example, a 0.01-sec noise burst at 4800 bps destroys about 48 bits.
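The thermal-noise relation N = kTB can be evaluated directly; the 290 K temperature and 1-MHz bandwidth below are conventional example values, not figures from the text.

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann's constant, J/K

def thermal_noise_watts(temp_k, bandwidth_hz):
    # N = k * T * B
    return BOLTZMANN * temp_k * bandwidth_hz

noise_w = thermal_noise_watts(290.0, 1e6)      # ~4.0e-15 W
noise_dbm = 10.0 * math.log10(noise_w / 1e-3)  # ~ -114 dBm
```

Doubling the bandwidth doubles the noise power, which is why receiver noise floors are quoted per unit bandwidth.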
Error Detection and Recovery

Due to transmission impairments, a message arriving at the receiving station might be in error. The receiving station will detect the error and might ask the sending station for a retransmission; it is very rare that the receiving station would correct errors itself. The approach of requesting a retransmission when an error is detected is called backward error correction (automatic repeat request, ARQ), as opposed to forward error correction, in which the receiver corrects the errors. Error detection is mostly done at the data link layer, the second layer of the network architecture, although some error detection schemes might be done at the first layer, the physical layer. The error detection schemes that have been used are described below. The cyclic redundancy check (CRC) is the most commonly used error detection scheme at the data link layer.

. Single parity checking: This scheme appends a parity bit to the character to be transmitted to make the total number of binary ones in one character either odd or even. Odd or even parity checking has to be agreed upon between the transmitter and receiver. The problem with this approach is that if an even number of bits are changed, the count of ones remains odd or even as expected, and the error cannot be detected.
. Two-coordinate parity checking: This scheme checks the parities for a block of characters and not just for a single character. The parities are computed horizontally and vertically within a block of characters to produce a block check character (BCC). The BCC is appended to the end of the block to be transmitted. Table 5.1 illustrates the error detection scheme; assume that odd parity checking is used.
TABLE 5.1 Error Detection Scheme

Data bits    Parity
0010010      1
1001001      0
0100101      0
0000001      0

In Table 5.1, the last row, 00000010 (including the parity bit), is the BCC. The scheme will not work if an even number of bits are changed in both the horizontal and vertical directions (for example, the four corners of a rectangle of bits): the data will be wrong, yet every row and column still checks correctly for odd parity.

Cyclic Redundancy Check

For a cyclic redundancy check, the scheme selects a standard-defined 8, 10, 12, 16, or 32 bits of constant check data. The constant check data is normally represented as a polynomial. For example, the polynomial x³ + x² + 1 corresponds to the check data bits 1101. A data message that could be thousands of bytes long is divided by the polynomial constant with an exclusive-or, bit-by-bit operation. The remainder of the division is the CRC, which is appended at the end of a frame or block. With the above example, the CRC will be 3 bits long. The receiving station takes the data it receives and performs the same computation with the same polynomial constant. If the computed CRC is equal to the CRC in the received message, the frame is accepted; otherwise, an error has occurred. The CRC error detection scheme cannot detect all errors at all times: if the error pattern is a multiple of the polynomial constant, the error cannot be detected. Therefore, it is very important to select a polynomial constant that has a low probability of being generated in the transmission environment.
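The division described above is a bit-by-bit exclusive-or; a minimal sketch with the text's polynomial x³ + x² + 1 (check bits 1101) follows. The message bits are arbitrary example data, not values from the text.

```python
def crc_remainder(msg_bits, poly_bits):
    """Modulo-2 long division: returns the CRC (len(poly_bits) - 1 bits)."""
    crc_len = len(poly_bits) - 1
    work = list(msg_bits) + [0] * crc_len  # append zeros to make room for the remainder
    for i in range(len(msg_bits)):
        if work[i]:                        # XOR the divisor in wherever a leading 1 remains
            for j, p in enumerate(poly_bits):
                work[i + j] ^= p
    return work[-crc_len:]

poly = [1, 1, 0, 1]                  # x^3 + x^2 + 1
data = [1, 0, 1, 1, 0, 0, 1, 0, 1]   # arbitrary example message
crc = crc_remainder(data, poly)      # 3-bit CRC appended by the transmitter
```

The receiver's check is the same division: a frame with its CRC appended leaves remainder zero.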
5.1.4 Sharing of Network Nodes and Transmission Facility Resources It is very economically infeasible to have one dedicated physical link between any two computers on a network. Networking nodes and the transmission facility must be shared. Multiplexing allows for a physical link to be shared by multiple users to fully use the link and reduce the number of input/ output (I/O) ports required for a computer. Switching techniques avoid a creation of the mesh or complete topology where two computers are directly connected in a network. Direct connection increase the cost of I/O ports on each computer and transmission links. Multiplexing is also very inflexible without the use of switching if the direct link between two computers is broken. Multiplexing As the list below indicates, there are three multiplexing techniques.
- Time-division multiplexing (TDM), synchronous TDM: Multiple digital signals, or analog signals carrying digital data, are carried on a single transmission path by interleaving portions of each signal in time. The interleaving interval can be one bit, one octet, or one block of a fixed number of octets. Each signal or connection path takes a fixed time slot but uses the whole bandwidth of the link. "Dummy" information is sent in the slot even if the connection has no data, which wastes transmission bandwidth.
- Frequency-division multiplexing (FDM): A number of signals are carried simultaneously on the same transmission link, each signal modulated onto a separate frequency. The bandwidth of each signal is reduced by the division into multiple channels. Both digital and analog data can be supported; CATV is an example of the use of FDM.
- Statistical TDM (asynchronous TDM or intelligent TDM): Time slots are not preassigned to a signal or connection path; they are allocated on demand, so no bandwidth is wasted. A terminal number is included in each message to identify its source. The output rate is designed to be less than the sum of the data rates of all inputs; at peak load the total input may exceed the output capacity, and a backlog builds up. The trade-off is between the size of the buffers and the total input rate to be supported.
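As a toy illustration (not from the text), synchronous TDM can be modeled by interleaving one-octet slots from each input stream; the dummy fill makes the wasted bandwidth visible:

```python
def synchronous_tdm(streams, slot_size=1):
    """Interleave fixed time slots from each stream (synchronous TDM).

    Every connection owns one slot per frame whether or not it has data;
    idle slots carry dummy fill, which is exactly the wasted bandwidth
    described above. `streams` is a list of byte strings (illustrative).
    """
    fill = b"-"  # dummy information sent in an idle slot
    slots = []
    longest = max(len(s) for s in streams)
    for i in range(0, longest, slot_size):
        for s in streams:
            slots.append(s[i:i + slot_size] or fill)
    return b"".join(slots)

# Three connections share one link; streams B and C are mostly idle,
# yet they still occupy a slot in every frame.
link = synchronous_tdm([b"AAAA", b"BB", b"C"])
assert link == b"ABCAB-A--A--"
```

A statistical multiplexer would instead skip the idle slots and tag each slot with its terminal number, at the cost of buffering under peak load.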
5.1.5 Switching
Switching techniques avoid direct connections between every pair of computers on a network. They not only save the cost of transmission links but also provide flexible and reliable connections between any two computers on the network. There are two major switching techniques: one is used mainly in the telephone network, and the other is used in data and other telecommunication networks. Message switching is rarely used, but it is described here for completeness.

Circuit Switching
A connection between the calling station and the called station is established on demand for exclusive use of the physical connection. A circuit-switched connection has three phases: (1) the connection establishment phase, which requires a separate setup signal to be sent and an acknowledgment signal to be received; (2) the data transfer phase, during which data flow without interruption; and (3) the termination phase, which releases the resources so that other connections can use them. The technique requires no buffers or queues and is free of congestion, but a long setup time may cause undesirable delay, and the circuit is rather inefficient if no data are being transferred. Appropriate applications have a relatively continuous flow, such as voice, sensor, and telemetry inputs.
Message Switching
An entire message, such as an e-mail or a file, is transmitted to an intermediate point, stored for a period of time, and then transmitted again toward its destination. The address of the destination is included in the message; no dedicated path is established between the two stations. The delay at each intermediate point is the time to wait for the entire message to arrive plus the queuing delay in waiting for the opportunity to retransmit to the next node. Line efficiency is greater than for circuit switching because the transmission line is shared over time among multiple messages. Simultaneous availability of sender and receiver is not required, and message priorities can be established. Because no connection is needed, routing is flexible, and the same message can be sent to multiple destinations. With buffering capability,
congestion problems are reduced. However, any detected error requires retransmission of the entire message: if the message is a multi-megabyte file, the whole file must be retransmitted, and no overlap of transmission is possible. Message switching is generally not suitable for interactive traffic because of its long delay.

Packet Switching
A message is divided into a number of smaller packets, and the packets are sent through the network one at a time. There are two approaches to packet switching: (1) in the datagram approach, no connection path is established, and each packet is treated independently; (2) in the virtual circuit approach, a connection is established first, and all packets travel over that connection path. Retransmission overhead is lower because a packet is smaller than a whole message, and overlap of transmission is possible. The virtual circuit approach is more suitable for interactive traffic because its delay is smaller.
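A minimal sketch of the datagram approach (the field names are illustrative, not from any standard): the message is split into numbered packets that each carry the destination address, so packets routed independently can still be reassembled in sequence order:

```python
def packetize(message: bytes, mtu: int, dest: str):
    """Split a message into independent datagrams (packet switching).

    Each packet carries the destination address and a sequence number so
    that independently routed packets can be reassembled at the receiver.
    """
    payloads = [message[i:i + mtu] for i in range(0, len(message), mtu)]
    return [
        {"dest": dest, "seq": n, "total": len(payloads), "data": p}
        for n, p in enumerate(payloads)
    ]

packets = packetize(b"hello packet switching", mtu=8, dest="host-b")
# Datagrams may arrive out of order; sorting by sequence number
# restores the original message at the receiver.
packets.reverse()
message = b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))
assert message == b"hello packet switching"
```

A virtual circuit would instead carry a connection identifier negotiated at setup, and packet order would be preserved by the fixed path.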
6 Communication Network Architecture

Vijay K. Garg
Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA

Yih-Chen Wang
Lucent Technologies, Naperville, Illinois, USA

6.1 Computer Network Architecture ............................................................ 989
    6.1.1 Open System Interconnection . 6.1.2 TCP/IP Network Architecture
6.2 Local Networking Technologies .............................................................. 991
    6.2.1 Technologies of Local Networks . 6.2.2 Local Network Reference Model . 6.2.3 IEEE 802.2—Logical Link Control . 6.2.4 IEEE 802.3—Carrier Sense Multiple Access Collision Detection . 6.2.5 IEEE 802.4—Token Passing Bus Protocol . 6.2.6 IEEE 802.5—Token Passing Ring Protocol . 6.2.7 IEEE 802.11—Wireless LANs . 6.2.8 ANSI X3T9.5—Fiber Data Distribution Interface
6.3 Local Network Internetworking Using Bridges or Routers........................... 1001
    6.3.1 Performance of Local Networks
6.4 Conclusion ......................................................................................... 1003
Glossary ............................................................................................. 1003
References .......................................................................................... 1003
6.1 Computer Network Architecture
Computer network architecture refers to the set of rules, also called communication protocols, that allows connectivity among a large number of computers. To manage the complexity of network design, the communication functions are divided into several levels of abstraction. Each level, or layer, of the protocol is designed so that a change to one layer normally does not affect adjacent layers; the services of higher layers are implemented using the services provided by lower layers. There are two interfaces at each layer: the peer-to-peer protocol between two computers and the service interface to the adjacent layers on the same computer. The peer-to-peer protocol between two computers is mostly indirect communication; direct communication occurs only at the lowest layer, the hardware level. Each layer adds its own header information to the data message it receives from the layer above before passing the message to the layer below. This process is called encapsulation. The receiving system reverses the process, called decapsulation, by removing the header at each layer before passing the data message to its upper layer. Two prevalent network architectures are described in this section.
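Encapsulation and decapsulation can be sketched with hypothetical fixed headers (the bracketed tags below are placeholders, not real protocol headers):

```python
def encapsulate(payload: bytes, headers) -> bytes:
    """Prepend each layer's header, from top layer down (encapsulation)."""
    for hdr in headers:
        payload = hdr + payload
    return payload

def decapsulate(frame: bytes, headers) -> bytes:
    """Strip each header in reverse order (decapsulation)."""
    for hdr in headers:
        assert frame.startswith(hdr), "malformed frame"
        frame = frame[len(hdr):]
    return frame

# Illustrative headers for three successive layers, applied top-down:
layers = [b"[TCP]", b"[IP]", b"[ETH]"]
frame = encapsulate(b"GET /index", layers)
assert frame == b"[ETH][IP][TCP]GET /index"
# Each lower layer treats everything above it as opaque data; the
# receiver peels the headers off in the opposite order.
assert decapsulate(frame, list(reversed(layers))) == b"GET /index"
```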
6.1.1 Open System Interconnection
Open System Interconnection (OSI) is an International Organization for Standardization (ISO) standard that defines a computer communication network architecture. It is a well-defined network architecture, but implementations of its network protocols are very rare. When its draft standard came out in 1985, many predicted that implementations of its protocols would predominate in the industry. The prediction proved incorrect because the Internet protocols were already in wide use. Nevertheless, OSI remains a very powerful network reference model to which all communication technologies, including the most popular TCP/IP architecture, refer their own architectures (see Figure 6.1). OSI divides the communication functions into seven layers, whose functions are described in the following list:

- Physical layer: This layer is responsible for activating and deactivating physical connections upon request from the data link layer and for transmitting bits over a physical connection in a synchronous or asynchronous mode. It also handles very limited error control, such as single-character parity checking.
- Data link layer: This layer is responsible for establishing and releasing data link connections for use by the network layer.
[FIGURE 6.1 OSI Network Reference Model: two end hosts implement all seven layers (application, presentation, session, transport, network, data link, physical); the intermediate network nodes between them implement only the network, data link, and physical layers.]
The responsibilities of this layer also include providing data integrity over a point-to-point connection so that data are neither lost nor duplicated. The layer accomplishes this by maintaining the sequential order of frames transmitted over a data link connection and by detecting and correcting transmission errors with retransmission of frames, if necessary. The other important function of the data link layer is flow control, a way for the receiving station to tell the sending station to stop transmitting for a moment so that the receiver's buffers do not overflow.
- Network (packet) layer: Two major functions of the network layer are routing control and congestion control. Routing control is the process of maintaining a routing table and determining optimum routing over a network connection. Congestion control is needed when too many packets are queued in a system and there is no space to store them; this normally happens in a datagram type of connection, where the network resources for a connection are not preallocated. The network layer is also responsible for multiplexing multiple network connections over a data link connection to maximize its use. Flow control is provided at the network layer as well.
- Transport layer: This layer provides error-free end-to-end transmission. To improve utilization, it multiplexes multiple transport connections over a network connection. It controls data flow to prevent overloading network resources, just like the flow control provided at the data link and network layers. This layer and the layers above it are end-to-end, peer-to-peer protocols, whose protocol data units (PDUs) are processed between two end systems. The layers below the transport layer are point-to-point, peer-to-peer protocols, whose PDUs are processed only between two directly connected computer systems.
- Session layer: Providing management activities for transaction-based applications, this layer ties application streams together to form an integrated application. For example, a multimedia application may consist of the transport of data, fax, and video streams that are all managed at the session layer as a single application.
- Presentation layer: This layer performs any required text formatting or text compression and negotiates the choice of syntax to be used for data transfer.
- Application layer: One task of this layer is to provide an entry point for using OSI protocols, accomplished by providing either an Application Programming Interface (API) or the standard UNIX I/O functions, such as open(), close(), read(), and write(). The layer also performs common application functions, such as connection management, and provides specific application functions, such as file transfer using File Transfer and Access Management (FTAM), Electronic Mail (X.400), and the Virtual Terminal Protocol (VTP).
6.1.2 TCP/IP Network Architecture
The TCP/IP network architecture is also referred to as the Internet architecture. The Transmission Control Protocol (TCP) is a transport layer protocol, and the Internet Protocol (IP) is a network layer protocol. Both protocols evolved from an earlier packet switching network called ARPANET, which was funded by the U.S. Department of Defense. The TCP/IP network has been at the center of many networking technologies and applications, and many network protocols and applications run on top of the TCP/IP protocols. For example, Voice over IP (VoIP) and video conferencing using the MBONE are applications running over TCP/IP networks. The TCP/IP network has five layers corresponding to those of the OSI reference model. Figure 6.2 shows the
[FIGURE 6.2 TCP/IP Network Architecture and Applications: application layer (FTP, HTTP, TFTP, SNMP, Ping); transport layer (TCP, UDP); network layer (Internet Protocol, ICMP, ARP); link layer (Logical Link Control or HDLC); physical layer (physical transmission medium control protocol).]
layers of the TCP/IP network and some applications that might exist on TCP/IP networks. The standards organization for TCP/IP-related standards is the Internet Engineering Task Force (IETF), which issues Request for Comments (RFC) documents. Normally, the IETF requires that a prototype implementation be completed before an RFC can be submitted for comments.
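As a small, concrete view of this stack, the standard socket API exposes UDP, the connectionless transport shown in Figure 6.2, directly; this sketch exchanges one datagram over the loopback interface:

```python
import socket

# A "server" socket bound to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
port = server.getsockname()[1]

# A "client" socket sends a single datagram; no connection is set up,
# which is exactly the datagram model described in Section 5.1.5.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", port))

data, addr = server.recvfrom(1024)
assert data == b"ping"
client.close()
server.close()
```

TCP differs in that a connection must be established first (`connect`/`accept`), after which the stack provides sequencing, acknowledgment, and flow control.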
6.2 Local Networking Technologies
This section describes local area network (LAN) and metropolitan area network (MAN) technologies. The dramatic, continuing decrease in computer hardware cost and increase in computer capability have increased the use of single-function systems and workstations, and there is a natural desire to interconnect these systems for a variety of reasons: to exchange data between systems, to share expensive resources, to improve performance for real-time applications, and to improve employee productivity. A LAN is a communication network that supports a variety of devices within a small area, such as a single building or a campus. A MAN spans a larger geographic area than a LAN, from a few blocks of buildings to a large metropolitan area. The characteristics of a LAN are a high data rate, short distances, a low error rate, a shared medium, and (typically) single-organization ownership. This section describes the general technologies used by a LAN, internetworking with bridges and routers, and the performance of local networks. The general technologies include the transmission media, topologies, and the Medium Access Control (MAC) protocol.
6.2.1 Technologies of Local Networks
The nature of a local network is closely tied to the technologies it uses; the key technologies are the topology, the transmission medium, and the MAC protocol. Unlike a wide area network (WAN), a LAN is a shared-medium network, and the choice of a transmission medium affects the topology it can use. These key technologies are closely related to each other; for example, a twisted pair medium is more suitable for the star topology. Each technology is described in a separate subsection of this chapter.
Local Network Topologies
Topology is the manner in which network interface devices and transmission links are interconnected. There are two reasons to study topologies, for the LAN as well as for the WAN. One is that the mesh (complete) topology, which directly connects every pair of devices via a dedicated link, is economically infeasible. The other is that different topologies provide different flexibility for applications and reliability for the connections. For each topology, one should consider the characteristics, reliability, expandability, and performance. The characteristics of a topology cover the connectivity of devices, the components required (e.g., taps or splitters), and the transmission. Reliability addresses how a node failure or cable break affects the operation of the network. Expandability addresses the ease of adding a new station to the network, and performance concerns the delay characteristics of the topology. The topologies used in local networks are the star, the ring, and the bus/tree. Table 6.1 summarizes these topologies.

Transmission Media
The transmission medium is the physical transmission path between two devices. A transmission medium can be guided (hard-wired), such as the twisted pair cable used in the telephone network, the coaxial cable used by CATV, or optical fiber. It can also be unguided, passing through the atmosphere as microwave, laser, or infrared. Most LANs use a guided transmission medium within a building. An unguided medium is used when two LANs in different buildings must be connected and digging the ground between the buildings is very difficult. Recently, wireless LANs using microwave radio within the same building or nearby buildings have become a very popular way to connect portable devices to a network, with data rates up to 10 Mbps. Of the three hard-wired media (twisted pair, coaxial cable, and optical fiber), coaxial cable has started to phase out of local networks because of its heavier weight and inflexibility to bend. Twisted pair is popular at the lower speeds of a local network (100 Mbps or less), whereas optical fiber is more suitable at higher speeds (hundreds of megabits per second or more). Of the two unguided media used in local networks, microwave is used to connect two LANs in different buildings, and both microwave and infrared can be used for wireless LANs. In choosing a transmission medium for a local network, one should consider the following factors:

- Cost of installation of the transmission medium
- Connectivity
- Transmission characteristics
- Topology constraints
- Distance coverage
- Environmental constraints
TABLE 6.1 Various Topologies for LAN

Star:
- Characteristics: Stations are connected to a central hub or star coupler, which serves as a relay point.
- Reliability: A station failure does not cause a failure of the network, but a failure of the hub or coupler disables network operation. A break of a cable isolates only one station from the network.
- Expandability: Adding a new station involves only wiring from the station to the hub or coupler. The number of ports on hubs or couplers limits the capacity of the network.
- Performance: The delay in a hub or coupler is very small.

Ring:
- Characteristics: Each station is connected to a repeater, and the repeaters are connected to form a closed loop or ring. The signal travels unidirectionally along the ring, and there is no buffering at each repeater.
- Reliability: A station failure does not affect the network, but a repeater failure may disable the network unless repeater bypass logic is implemented. A cable break disables network transmission unless a self-configuring dual-ring network is installed.
- Expandability: If a multiple access unit (MAU) is equipped, adding a new station involves only the wiring between the station and the MAU.
- Performance: There is at least a 1-bit delay at each repeater. The round-trip delay includes propagation delay, transmission delay, and repeater delay; the number of stations on a ring network is limited for performance reasons.

Bus/Tree:
- Characteristics: More than one station is connected to a shared medium. A signal transmitted on the medium travels in both directions and is absorbed at both ends by terminators.
- Reliability: A station failure does not affect network operation, but a cable break disconnects the network and produces reflections that can interfere with transmission.
- Expandability: Adding a new station is easier and does not require bringing down the network.
- Performance: The performance of a bus/tree LAN depends on the bandwidth of the network, the number of active stations, and the kind of medium access protocol being used.

Medium Access Control Protocol for Local Networks
In the local network environment, multiple stations on the network share a transmission medium. If more than one station transmits at the same time, the transmissions are corrupted, so a Medium Access Control (MAC) protocol is needed to regulate access to the common medium. There are two approaches to controlling the multiple transmissions. In the centralized approach, one station on the network is designated the primary station, and the rest are secondary stations. All communications are polled and selected by the primary station: the primary polls a secondary station for transmission and then selects a specific secondary station to receive the message. There is no direct transmission between two secondary stations. In the distributed approach, each station has a MAC to decide when it can transmit, and direct transmission occurs between any two stations. The distributed approach is currently used by most local networks. The advantages and disadvantages of the centralized approach are listed below; note that their opposites are, respectively, the disadvantages and advantages of the distributed approach.

The advantages of the centralized approach are as follows:
- Greater control over access for providing priorities, overrides, and bandwidth allocation
- Simple logic at each station
- Avoidance of the coordination problem

The disadvantages of the centralized approach are as follows:
- A single point of failure can disable the entire network
- The overhead imposed on the primary station reduces the efficiency of network operation

The MAC protocols for local networks are specified in the IEEE 802 series of standards. The fiber data distribution interface (FDDI) for MANs is specified by the ANSI ASC X3T9.5 committee. The popular local network standards are as follows:
- IEEE 802.2—Logical Link Control
- IEEE 802.3—Carrier Sense Multiple Access/Collision Detection (CSMA/CD)
- IEEE 802.4—Token Passing Bus
- IEEE 802.5—Token Passing Ring
- IEEE 802.11—Wireless LAN
- ANSI X3T9.5—Fiber Data Distribution Interface
Before we describe each local network standard, we describe the local network architecture. The following subsections address the architecture and each standard.
6.2.2 Local Network Reference Model
As mentioned before, Open System Interconnection (OSI) is the ISO standard that defines computer communication network architecture, and all networking technologies describe their architectures within the OSI reference model. How does a local network fit into the OSI reference model? We need to determine which layers a local network requires. A MAC protocol exists for each local network type and is considered part of the data link layer. The MAC protocol has the following deficiencies:

- The protocol only guarantees the transmission of a frame onto the medium.
- There is no frame sequencing, and most MAC protocols do not acknowledge the receipt of a frame.
- There is no flow control between two stations.
- There is no retransmission of discarded frames.
- It is a datagram type of protocol, and frames can be lost.
With these deficiencies, we need a data link layer above the MAC protocol to make up for them. Each frame in a local network carries the source and destination addresses, and all stations on the network receive the frame through the shared transmission medium; therefore, the routing function of the network layer is not required within a local network. Figure 6.3 depicts the local network architecture.

[FIGURE 6.3 Local Network Reference Model: the application, presentation, session, transport, and network layers sit above a data link layer that is split into logical link control and medium access control sublayers, with the physical layer below.]

6.2.3 IEEE 802.2—Logical Link Control
Logical Link Control (LLC) provides conventional data link protocol functions, such as error control and flow control. LLC is very similar to well-known data link protocols such as Synchronous Data Link Control (SDLC) and High-Level Data Link Control (HDLC). The main difference is that LLC includes service access point (SAP) information in the frame to allow multiple applications (programs) on one station to communicate simultaneously with applications on other stations in the network. LLC provides the following three services for a local network application:
(1) Unacknowledged connectionless service: This is a datagram type of service, so there is no overhead to establish a connection, but there is no acknowledgment to ensure the delivery of a frame.
(2) Acknowledged connectionless service: This service acknowledges each frame received, relieving the higher layer of that burden.
(3) Connection-oriented service: This service provides flow control, sequencing, and error recovery between SAPs. It also allows multiplexing of logical endpoints over a single physical link.

6.2.4 IEEE 802.3—Carrier Sense Multiple Access/Collision Detection
Ethernet LAN, initially developed by Xerox, Digital, and Intel, has used Carrier Sense Multiple Access/Collision Detection (CSMA/CD) technology for many years. There has been a long evolution path for the CSMA/CD medium access protocol; the following describes the precursors of CSMA/CD technology.

Pure ALOHA
Pure ALOHA technology was developed at the University of Hawaii to interconnect a packet radio network across multiple islands in Hawaii. The protocol has the following procedure:
(1) Frames are transmitted at will.
(2) The sending station listens for a time equal to the maximum possible round-trip delay.
(3) The station that receives the frame must send an acknowledgment. If the acknowledgment is not received before the timer expires, the frame is retransmitted.
(4) Errors are checked at the receiving station, which ignores all erroneous frames.
The successful transmission rate for Pure ALOHA is only 18%.

Slotted ALOHA
Improvement of Pure ALOHA led to the creation of Slotted ALOHA. The procedure of the algorithm is as follows:
(1) Time on the channel is organized into uniform slots, and each slot size is equal to one frame transmission time.
(2) A frame is transmitted only at the beginning of a time slot.
(3) If the frame does not go through, it is sent again after a random amount of time.
The performance of Slotted ALOHA is 37% empty slots, 37% successes, and 26% collisions, which yields a successful transmission rate of 37%.

There are three problems with the ALOHA schemes. The first is that the station does not listen to the transmission medium before sending a frame. The second is that the station does not listen to the medium while sending the frame. The last is that the station does not take advantage of a propagation delay that is much shorter than the frame transmission time; the shorter propagation delay provides fast feedback about the state of the current transmission. Various versions of CSMA have been proposed to solve only the first problem. The variants of CSMA differ in what to do when the transmission medium is sensed to be busy. These CSMA schemes are described next.
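The 18% and 37% figures quoted above are the classical throughput maxima of the two schemes: S = G·e^(−2G) for Pure ALOHA and S = G·e^(−G) for Slotted ALOHA, where G is the offered load in frames per frame time. A quick check:

```python
import math

def pure_aloha(G: float) -> float:
    """Throughput of Pure ALOHA at offered load G (frames/frame time)."""
    return G * math.exp(-2 * G)

def slotted_aloha(G: float) -> float:
    """Throughput of Slotted ALOHA at offered load G."""
    return G * math.exp(-G)

# The maxima occur at G = 0.5 and G = 1.0, respectively:
print(round(pure_aloha(0.5), 3))     # 0.184 -> the ~18% quoted above
print(round(slotted_aloha(1.0), 3))  # 0.368 -> the ~37% quoted above
```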
Nonpersistent CSMA
In nonpersistent CSMA, a station with a frame waiting for transmission checks whether the medium is idle and transmits the frame right away if it is. If the medium is busy, the station waits a random amount of time and then repeats the procedure. The algorithm reduces the possibility of collisions, but capacity is wasted during the idle waiting time.

1-Persistent CSMA
The 1-persistent CSMA algorithm is used as part of CSMA/CD. The procedure is as follows:
(1) A station checks whether the transmission medium is idle and transmits the frame if it is.
(2) If the medium is busy, the station continues to listen until the channel is idle and then transmits right away.
(3) If no acknowledgment is received, the station waits a random amount of time and repeats the procedure.
The scheme guarantees a collision whenever more than one station is waiting to transmit frames.

P-Persistent CSMA
The p-persistent CSMA algorithm takes a moderate approach between nonpersistent and 1-persistent CSMA. It specifies a value P, the probability of transmission after the medium is detected idle. The station first checks whether the medium is idle; if it is, the station transmits a frame with probability P and delays one time unit (the maximum propagation delay) with probability 1 − P. If the medium is busy, the station continues to listen until the channel is idle and then repeats the procedure. In general, under heavier load, decreasing P reduces the number of collisions; under lighter load, increasing P avoids delay and improves utilization. The value of P can be adjusted dynamically based on the traffic load of the network.

CSMA/CD is the result of the evolution of these earlier protocols plus the addition of two capabilities to CSMA.
The first capability is listening during transmission; the second is a minimum frame size that ensures the transmission time is longer than the propagation delay, so that the state of the transmission can be determined. CSMA/CD detects a collision and thus avoids the useless transmission of damaged frames. The procedure of CSMA/CD is as follows:
(1) If the medium is idle, the frame is transmitted.
(2) The medium is monitored during the transmission; if a collision is detected, a special jamming signal is sent to inform all stations of the collision.
(3) After a random back-off delay, the station attempts to transmit again with 1-persistent CSMA.
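Step (3)'s random delay is, in the IEEE 802.3 tradition, a truncated binary exponential back-off; a sketch in slot-time units (details simplified):

```python
import random

def backoff_slots(attempt: int) -> int:
    """Truncated binary exponential back-off (IEEE 802.3 style).

    After the n-th collision, wait a random number of slot times drawn
    uniformly from 0 .. 2**min(n, 10) - 1; after 16 failed attempts,
    the frame is dropped.
    """
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    return random.randrange(2 ** min(attempt, 10))

# The waiting window doubles with each collision, up to 1023 slots,
# which spreads the retries of colliding stations apart in time.
random.seed(1)
waits = [backoff_slots(n) for n in range(1, 17)]
assert all(0 <= w <= 1023 for w in waits)
```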
The back-off algorithm draws a delay of 0 to 2^k − 1 time units, where k is the number of the attempt (capped at 10), so the delay range grows up to 0 to 1023 time units for the later attempts; the transmitting station gives up at the 16th attempt. The algorithm is unfair in a last-in, first-out sense, and it requires imposing a minimum frame size for the purpose of collision detection. In principle, the minimum frame size is based on the signal propagation delay of the network and differs between baseband and broadband networks. A baseband network uses digital signaling, and only one channel is used for transmission; a broadband network uses analog signaling and can have more than one channel, one used for transmitting and another for receiving. For a baseband network, the worst case is twice the propagation delay between the two farthest stations in the network; for a broadband network, it is four times the propagation delay from a station to the headend, with two stations close to each other but as far as possible from the headend. This delay is the minimum transmission time, which can be converted into the minimum frame size. The comparison of baseband and broadband CSMA/CD schemes is as follows:

- Different carrier sense (CS) techniques: Baseband detects the presence of transitions between binary 1 and binary 0 on the channel, whereas broadband performs actual carrier sensing, just like the technique used in the telephone network.
- Different collision detection (CD) techniques: Baseband compares the received signal with a collision detection threshold; if the received signal exceeds the threshold, a collision is declared. This may fail to detect a collision because of signal attenuation. Broadband performs a bit-by-bit comparison or lets the headend perform collision detection by checking whether higher signal strength is received at the headend; if the headend detects a collision, it sends a jamming signal on the outbound channel.
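The conversion from round-trip delay to minimum frame size is a one-line calculation; with the classical 10-Mbps baseband Ethernet numbers (a 51.2-µs worst-case round trip, assumed here for illustration):

```python
# A frame must still be transmitting when the worst-case round-trip
# feedback arrives, so its length in bits must be at least
# data_rate * round_trip_time.
data_rate = 10_000_000      # bits per second (10-Mbps baseband Ethernet)
round_trip = 51.2e-6        # seconds: twice the worst-case one-way delay

min_frame_bits = round(data_rate * round_trip)
print(min_frame_bits, "bits =", min_frame_bits // 8, "octets")
# 512 bits = 64 octets, the familiar IEEE 802.3 minimum frame size
```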
High-Speed Ethernet-Like LAN When an ethernet LAN has a speed of 100 Mbps or higher, it is classified as a high-speed ethernet LAN. The high-speed ethernet-like LAN is a natural evolution of the low-speed traditional 10-Mbps LAN. IEEE specified two standards: 100BASE-T and 100VG-AnyLAN. Table 6.2 provides a comparison between 100BASE-T and 100VG-AnyLAN.

TABLE 6.2 Comparison Between 100BASE-T and 100VG-AnyLAN

                              100BASE-T            100VG-AnyLAN
Standard                      Part of IEEE 802.3   IEEE 802.12
Access protocol               CSMA/CD              Round robin
Frame format                  10BASE               10BASE or Token Ring
Maximum number of stations    1024                 Unspecified
Frame size                    1500 octets          1500 or 4500 octets
Distance coverage             210 m                2.5 km
Topology                      Star wired           Hierarchical star/tree
6 Communication Network Architecture
The 100BASE-T network has several cabling options, listed as follows:
. 100BASE-TX: Each station uses two pairs of Category 5 Unshielded Twisted Pair (UTP) or Shielded Twisted Pair (STP). One pair is for transmitting, and the other pair is for receiving.
. 100BASE-FX: Each station uses two optical fibers. One is for transmitting, and the other is for receiving.
. 100BASE-T4: Each station uses four pairs of Category 3 UTP: three pairs for transmission and one pair for collision detection.
Each station is connected to a multipoint repeater, and each input to the repeater is repeated on every output link. If two inputs overlap, a jam signal is transmitted on all output links. A class 1 repeater can support mixtures of transmission media types; a class 2 repeater can support only one transmission media type, but it can interconnect with another class 2 repeater. Each network can be configured as either full duplex or half duplex. Full duplex operation requires a switched hub rather than a multipoint repeater; in full duplex operation, each station owns the entire bandwidth of the network, whereas in half duplex operation, each station shares the bandwidth of the transmission medium. To support a mixture of speeds on a network, each device can send link integrity pulses to indicate the speeds it supports; this process is called autonegotiation. The 100BASE-T autonegotiation scheme selects the highest common denominator of the two stations' abilities, using the priority order in the technology ability field. The 100BASE-T4 medium has higher priority than 100BASE-TX because 100BASE-T4 provides for operation over a wider range of twisted pair cables and is therefore more flexible. The 10BASE-T is the lowest common denominator and has the lowest priority. In general, a full-duplex-capable device can also operate in half duplex mode and, hence, the full duplex modes have higher priority. The priority is assigned in the following order from highest to lowest:
. 100BASE-TX full duplex
. 100BASE-T4
. 100BASE-TX
. 10BASE-T full duplex
. 10BASE-T
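The highest-common-denominator selection can be sketched as follows, using the priority list above (the function and list names are illustrative):

```python
# Priority list from the text, highest first
PRIORITY = [
    "100BASE-TX full duplex",
    "100BASE-T4",
    "100BASE-TX",
    "10BASE-T full duplex",
    "10BASE-T",
]

def negotiate(local_abilities, remote_abilities):
    """Pick the highest-priority ability that both ends advertise."""
    common = set(local_abilities) & set(remote_abilities)
    for mode in PRIORITY:            # scan from highest priority down
        if mode in common:
            return mode
    return None                      # no common mode

print(negotiate(["100BASE-TX", "10BASE-T"],
                ["100BASE-TX full duplex", "100BASE-TX", "10BASE-T"]))
# -> 100BASE-TX
```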
Gigabit Ethernet Due to the popularity of ethernet, the IEEE 802 committee has also defined a standard for gigabit ethernet, and 10-gigabit ethernet has been made commercially available. Some of the requirements for gigabit ethernet are as follows. This ethernet: (1) needs high speed for the backbone network; (2) should provide a smooth migration from 100-Mbps ethernet to gigabit ethernet; (3) is an alternative to ATM and FDDI for backbone connection; and (4) should work over copper wires and optical fiber. Gigabit ethernet uses the same CSMA/CD protocol and the same frame format and can carry 100-Mbps traffic loads as a backbone network. There are, however, some differences from the previous versions of ethernet, listed as follows: 1. A separate Gigabit Medium Independent Interface (GMII) defines an independent 8-bit parallel synchronous transmit and receive data interface. GMII is optional for all transmission media except UTP. It is a chip-to-chip interface between the MAC and the physical hardware. 2. Two encoding schemes are used: 8B/10B for fiber and shielded copper, and 4-Dimensional 5-Level Pulse Amplitude Modulation (4D-PAM5) for UTP. 3. A carrier extension enhancement appends a set of symbols to a short MAC frame so that the resulting block is at least 4096 bits and the transmission time is longer than the propagation time.
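The carrier-extension rule can be illustrated by a small helper that computes how many extension bits a short frame needs (names are illustrative):

```python
MIN_BURST_BITS = 4096   # minimum block size after carrier extension

def extension_bits(frame_octets):
    """Number of extension bits appended to a short MAC frame so that
    the transmitted block is at least 4096 bits long."""
    frame_bits = frame_octets * 8
    return max(0, MIN_BURST_BITS - frame_bits)

print(extension_bits(64))    # minimum-size 64-octet frame -> 3584
print(extension_bits(1500))  # full-size frame needs no extension -> 0
```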
6.2.5 IEEE 802.4—Token Passing Bus Protocol With the IEEE 802.4 Token Passing Bus Protocol, each station is assigned a logical position in an ordered sequence: the topology is a physical bus but a logical ring. Each station knows the addresses of its logical predecessor and successor. A token frame regulates access to the medium. When a station finishes a transmission or its holding time elapses, the token is passed to the next station in the logical ordering. A station can only transmit a frame when it holds the token, except when a non-token-holding station responds to a poll or a request for acknowledgment. There are several token maintenance events, described in the following paragraphs. Ring Initialization This event occurs when the logical ring is broken or the network has just powered up and the system needs to determine which station holds the token first. The event triggers the following steps at each station: (1) A claim token whose length (in multiples of time slots) is based on the first two bits of the station's address field is issued; 00 maps to one time slot, 01 to two time slots, 10 to three time slots, and 11 to four time slots. (2) A station drops its claim if, after transmitting its claim token, it hears anything else transmitted on the bus; otherwise, it tries to claim the token at the next iteration using its next two address bits. This continues until a station has used all of its address bits; the station that succeeds on the last iteration becomes the token holder.
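The claim-token slot mapping, and a toy model of the iterative contention it drives, might look like the following. This is a simplified sketch: real stations detect other transmissions on the bus rather than comparing claim lengths centrally, and the function names are illustrative.

```python
def claim_slots(address_bits_pair):
    """Map two address bits to a claim-token length in time slots
    (00 -> 1 slot, 01 -> 2, 10 -> 3, 11 -> 4)."""
    return int(address_bits_pair, 2) + 1

def contend(addresses):
    """Toy resolution of the ring-initialization contention: at each
    iteration, stations whose claim token is shorter than another
    station's hear that station still transmitting and drop out."""
    survivors = list(addresses)
    width = len(addresses[0])
    for i in range(0, width, 2):
        longest = max(claim_slots(a[i:i + 2]) for a in survivors)
        survivors = [a for a in survivors
                     if claim_slots(a[i:i + 2]) == longest]
        if len(survivors) == 1:
            break
    return survivors[0]   # unique addresses guarantee a single winner

print(contend(["0010", "1001", "1011"]))  # -> 1011
```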
Vijay K. Garg and Yih-Chen Wang
Addition to Ring This event is used for the expansion of the network: it grants a nonparticipating station the opportunity to join the network. The event follows these steps: 1. Each station periodically issues a solicit-successor frame to invite any nonparticipating station to join the network. 2. If there is no response, the station transfers the token to its successor. If there is only one response, the station sets its successor to the requesting station and passes the token to the requesting node; it also tells the requesting station who its successor is. 3. If there are multiple responses, the station sends a contention resolution frame and waits for a valid response. Each station can only respond in one of four time slot windows based on two of its address bits, and it refrains from its claim if it hears anything before its window slot arrives. If the token holder receives a valid set-successor frame, it passes the token to the demander; otherwise, it tries another iteration, in which only the stations that responded the previous time are allowed to respond. 4. If an invalid response is received, the station goes into an idle, listening state to avoid the conflict situation in which another station thinks it holds the token. Deletion of a Station This event occurs when a station wants to remove itself from the network. It triggers the following procedure: 1. The station waits for the token to arrive and then sends a set-successor frame to its preceding station to splice in its successor. 2. The token holder station passes the token to its current successor as usual. 3. The station is then deleted from the network on the next go-around. A duplicated address or a broken logical ring triggers error recovery in the token bus protocol. Table 6.3 illustrates the error recovery procedures. Comparison of CSMA/CD and Token Bus An ethernet using CSMA/CD is much more popular than a Token Bus network.
The main reasons for this popularity are the cheaper cost of installation and the ease of administration. This section compares the pros and cons of the protocols, not the products.

TABLE 6.3 Token Bus Error Recovery Procedures

Condition                                                Action
Multiple tokens                                          Drops tokens to 1 or 0
Unaccepted token (no valid frame is received)            Retries sending the token two times
Failed station after two tries of sending token          Sends the who-follows-the-failed-station frame
Failed receiver after another two tries of sending       Enters listen state
  the who-follows frame
No token activity                                        Times out, becomes inactive, and initializes the ring

Advantages of CSMA/CD
. Simple algorithm
. Widely used, with favorable cost and reliability
. Fair access
. Good performance at low to medium load
Disadvantages of CSMA/CD
. Collision detection issues/requirements
. Limit on minimum packet size (72 bytes minimum)
. Nondeterministic delay
Advantages of Token Bus
. Excellent throughput performance
. No special requirements for signal strength
. Deterministic delay
. Class of services with priority schemes
. Stability at higher loads
Disadvantages of Token Bus
. Complicated algorithms
. Overhead for passing the token even with light traffic
6.2.6 IEEE 802.5 Token Passing Ring Protocol The IEEE 802.5 Token Passing Ring protocol uses the ring topology, where repeaters are connected via a transmission medium to form a closed path. Data are transmitted serially, bit by bit, through the transmission media and are regenerated at each repeater before being sent to the next repeater on the ring. Data are transmitted in packets. A repeater performs the packet reception, transmission, and removal functions and can be in one of three states: listen, transmit, and bypass. A user station is connected to the repeater and implements the medium access protocol (MAP), which is summarized below: (1) A station can only transmit data when it holds a free token. (2) A free token turns into a busy token followed by the packet to be transmitted. (3) The originating station purges the packet and releases a free token. (4) To determine when to release the free token, there are three approaches. The Single Token protocol does not
release the free token unless the leading edge of the busy token has returned. The Multiple Token protocol releases the free token as soon as it has completed transmitting (the release occurs even if the busy token has not returned). The Single Frame protocol waits until all of the data has been purged and then releases a free token. In general, if the frame length is smaller than the ring length measured in bit times, the Multiple Token protocol has better throughput. Regardless of which protocol is used, the ring should be long enough to hold a free token. The Token Passing Ring protocol uses two subfields in the frame to recognize whether the frame was copied successfully by the destination and whether a duplicated address was detected. When a frame is returned to the sender, the subfield C indicates that the frame was copied to the destination buffer, and the subfield A indicates whether the duplicated address condition has been detected. A destination station detects the duplicated address condition when it finds that some other station has already marked the C bit. Unlike CSMA/CD, the Token Passing Ring protocol provides priority and reservation algorithms in the protocol. The following steps outline the algorithm, with Pm denoting the priority of the message to transmit, Pr the priority of the token, and Rr the received reservation priority. 1. A station waits for a free token with a Pr less than or equal to Pm and then seizes it. If the free token has higher priority (i.e., Pm < Pr), the station can set the Rr field to Pm only if Rr is less than Pm. 2. A station makes a reservation on a busy token by setting the Rr field to Pm if Rr is less than Pm. 3. After seizing a token, the token indicator bit is set to 1, the Rr field is set to 0, and Pr is unchanged. 4. When releasing a free token, the Pr field is set to max(Pr, Rr, Pm), and the Rr field is set to max(Rr, Pm). 5. Each station that upgraded the priority of a free token later downgrades it to the former level stored in a stack. Just like the Token Passing Bus protocol, the Token Passing Ring protocol has a token maintenance problem: a station may detect no token activity or a persistently circulating busy token. When a station detects no token activity for a period of time, the network enters the initialization procedure to claim a new token. A monitor station on the network sets the monitor bit in each passing frame; if the monitor station sees a frame with the monitor bit already set, it removes the frame and issues a new token. The ring protocol enables fair access with a round-robin scheme, but it is inefficient under a lightly loaded system because a station that has a frame to transmit has to wait for the token to arrive; with CSMA/CD, in contrast, a station can transmit a frame right away as long as no other station is transmitting. A ring network, however, has many advantages over a bus network. Some major advantages are less noise and larger distance coverage resulting from the use of repeaters, and a high-speed link through the use of optical fiber. The major problem with a ring network is the accumulation of timing jitter along the ring and across multiple repeaters. This effect significantly limits the size of a ring network.
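The priority and reservation rules above can be modeled with a few small functions (a sketch of steps 1 through 4; the priority stack of step 5 is omitted, and the function names are illustrative):

```python
# Pr = token priority, Rr = reservation field, Pm = priority of the
# message waiting at the station.

def on_free_token(Pr, Rr, Pm):
    """Step 1 (and 3): seize a free token, raise a reservation, or pass."""
    if Pr <= Pm:
        return ("seize", Pr, 0)      # step 3: Rr cleared, Pr unchanged
    if Rr < Pm:                      # Pm < Pr here
        return ("reserve", Pr, Pm)   # raise the reservation to Pm
    return ("pass", Pr, Rr)

def on_busy_token(Rr, Pm):
    """Step 2: reserve on a busy token if our priority beats Rr."""
    return max(Rr, Pm)

def on_release(Pr, Rr, Pm):
    """Step 4: fields of the released free token."""
    return max(Pr, Rr, Pm), max(Rr, Pm)

print(on_free_token(Pr=3, Rr=0, Pm=5))   # -> ('seize', 3, 0)
print(on_free_token(Pr=6, Rr=2, Pm=5))   # -> ('reserve', 6, 5)
```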
6.2.7 IEEE 802.11—Wireless LANs The increased demands for mobility and flexibility in daily life have led development from wired LANs to wireless LANs (WLANs). Today a WLAN can offer users high bit rates to meet the requirements of bandwidth-consuming services, such as video conferences and streaming video. With this in mind, a user of a WLAN has high demands on the system and will not accept much degradation in performance to achieve mobility and flexibility. This in turn puts high demands on the design of future WLANs. In this section, we first discuss the WLAN architecture and then deployment and security issues. Finally, future trends in wireless technology are discussed. Architecture of WLAN A WLAN consists of access points and terminals that have WLAN connectivity. Finding the optimal locations for access points is important and can be achieved by measuring the relative signal strength of the access points. Placing access points in a corporate network opens an access way to the resources of the intranet. With wired LANs, an intruder must first gain physical access to the building before he or she can plug a computer into the network and eavesdrop on the traffic. The intranet is typically considered secure even though employees can cause security breaches and data are transmitted unencrypted. If the information transmitted in the corporate network is extremely valuable to the corporation, the WLAN interface should be protected from unauthorized users and eavesdropping. The obvious way to extend the intranet with a wireless LAN is to connect the access points directly to the intranet, as illustrated in Figure 6.4. The WLAN standard is defined in the IEEE 802.11 WLAN protocol, the wireless counterpart of the IEEE 802.3 standard for wired LANs. The term WLAN applies to networks based on 802.11a, 802.11b, or 802.11g. A new standard, IEEE 802.11n, is evolving to provide data rates of hundreds of megabits per second.
Another name for WLAN is wireless fidelity (Wi-Fi). Hotspots are another big development in the Wi-Fi field. Hot spots are usually public spaces such as libraries, malls, parks, beaches, airports, hotels, coffee shops, and restaurants where Wi-Fi Internet access is offered for a price or, even in some cases, offered for free. It is predicted that the number of hot spots could peak at 150,000 by 2005 before eventually declining because of unprofitable hot spots being deactivated. Of course, sometimes these hot spots can be
FIGURE 6.4 Access Points of WLAN (access points and an application server on the intranet, separated from the Internet by a firewall)
somewhat confining because one needs to make payment arrangements and also obtain a password for each use. Different hot spots are also often run by different Internet service providers, so paying just one provider will not always guarantee service at all hot spot zones. In addition, most hot spots offer minimal security, if any. Most clerks at these places have no idea how to turn on the security features of their deployed WLAN. An IEEE 802.11 network architecture can be configured in two different ways. One way is the independent basic service set (IBSS), also often referred to as an ad hoc network or peer-to-peer mode (see Figure 6.5). In an ad hoc network, 802.11 wireless stations in the same cell, or within transmission range of each other, communicate directly with one another without using any connection to a wired network. This kind of network is usually only formed on a temporary basis and is put up and taken down rather quickly. These networks are usually small and have a focused purpose. A business meeting in a conference room where employees bring laptops to share information is a good example of this type of network.
The second way and the most frequent way an IEEE 802.11 network architecture can be configured is by the basic service set (BSS), also often referred to as an infrastructure network (see Figure 6.6). In an infrastructure network, one station, called an access point (AP), acts as a central station for each cell, and many overlapping cells can make up a network. If there is more than one cell, the network is actually called an extended service set (ESS), and each cell is actually considered a basic service set (BSS) (see Figure 6.7). In an ESS, each BSS is connected together by means of a distribution system (DS). So these types of networks can have one or more access points. These access points serve as an ethernet bridge between the wireless LAN and a wired network infrastructure. These access points are not mobile and are considered part of the wired network infrastructure. The wireless stations can be mobile and can roam from one cell to the next, allowing for seamless coverage throughout the whole service area of the ESS. WLAN Security and Deployment Issues WLAN technology provides tremendous convenience to many mobile users, but the initial deployment rate was not as expected. Security has represented the major obstacle to the
FIGURE 6.5 Example Ad Hoc Network (clients A, B, and C communicating directly)

FIGURE 6.6 Basic Service Set: Example Infrastructure Network (clients A and B communicating through an access point)
FIGURE 6.7 Extended Service Set—Infrastructure Mode: multiple single-cell basic service sets (BSS), each with stations and an access point (AP), joined into an extended service set (ESS) by a distribution system (DS)
widespread adoption of WLAN. About 50% of IT managers still consider WLAN security a major concern in deploying the technology throughout the company. Two popular security standards have been implemented in WLAN networks: WEP (Wired Equivalent Privacy) and WPA (Wi-Fi Protected Access). WEP is implemented in the 802.11b protocol and works at Layer 2 to encrypt all over-the-air transmission using 40/64-bit keys. Unfortunately, these keys are not strong and are easy to break. Furthermore, WEP only authenticates a device, not a user. WPA is a security enhancement that tries to solve two problems of WEP: weak data encryption and lack of user authentication capability. To provide a stronger encryption algorithm, WPA employs the Temporal Key Integrity Protocol (TKIP), which provides per-packet keys to make the hacker's job more difficult. WPA also implements the Extensible Authentication Protocol (EAP), which works with a RADIUS authentication server to authenticate and authorize a user of the WLAN. A Virtual Private Network (VPN) can also be used to regulate access to a company's corporate network from the WLAN, whether a private network or a public access network; the VPN is placed behind the wireless access points to authenticate and authorize the user. WLAN deployment can be either centralized or distributed. In the centralized approach, the access points are mainly dumb devices (just radio stations), with the intelligence residing in a central switch. In the distributed approach, each access point has intelligence and is configured with access-control information, user authentication, and policy administration. The centralized approach may save operational cost but may not be as efficient as the distributed approach.
Future WLAN Technological Developments Although it is very risky to speculate on future developments in a field that is changing as fast as wireless communications, there does seem to be a certain amount of consensus about broad trends. Communicating Anytime, Anywhere, and in Any Mode There seems to be wide acceptance of the notion of being able to communicate anywhere, anytime, and in any mode. Software programmable radios allow a subscriber unit to adapt its modulation, multiple-access method, and other characteristics so that it can communicate with a wide range of different systems and more efficiently support diverse voice, data, image, and video requirements. Likewise, a single base station unit will be able to adapt its characteristics to support older generations of subscriber equipment while retaining the ability to support future developments through software changes. Extending Multimedia and Broadband Services to Mobile Users Providing for the electronic transport of multimedia services is challenging because of the inherent differences among voice, data, image, and video traffic, as explained earlier. Extending multimedia and bandwidth-on-demand capabilities into the wireless environment presents a number of challenges. In the mobile, wireless world, available bandwidths (and, hence, transmission rates) are typically limited by spectrum scarcity, and the transmission links between the mobile and base stations are often characterized by high error rates and rapidly changing performance. In addition, a mobile terminal that is connected to one port on the network at one moment may be connected to another one the next moment. Despite these
FIGURE 6.8 Frame Formats of IEEE 802.5 and FDDI

(A) IEEE 802.5 frame (field lengths in octets):
SD(1) | AC(1) | FC(1) | DA(6) | SA(6) | Data unit(≥0) | FCS(4) | ED(1) | FS(1)

(B) FDDI frame (field lengths in bits):
Preamble(64) | SD(8) | FC(8) | DA(16 or 48) | SA(16 or 48) | Info(≥0) | FCS(32) | ED(4) | FS(1)

SD = starting delimiter; AC = access control; FC = frame control; DA = destination address; SA = source address; FCS = frame check sequence; ED = ending delimiter; FS = frame status
challenges, there is a considerable amount of research interest in wireless ATM, and an entire issue of IEEE Personal Communications was devoted to the topic. Embedded Radios There is also a long-term trend toward, for want of a better term, embedded radios. This trend is comparable to the trend in microelectronics that is resulting in computer chips being installed in such ordinary items as automobiles, washing machines, sewing machines, and other consumer appliances and office equipment. Radios are even being installed in humans in the form of pacemakers and heart monitors. These devices are less visible than personal computers, computer workstations, and mainframe computers, but their impact on everyday life is significant nevertheless.
6.2.8 ANSI X3T9.5—Fiber Distributed Data Interface Another type of local network is the metropolitan area network (MAN). A MAN is characterized by high capacity, supporting speeds of at least 100 Mbps and more than 500 stations on the network. It has a larger geographic scope than a LAN, provides support for integrated data types, and provides dual cables to improve throughput and reliability. The fiber distributed data interface (FDDI) is an example of a MAN; it is an ANSI X3T9.5 standard, proposed by the ANSI X3T9.5 study group. Each field in the MAC frame can be represented either as symbols (4 bits, for nondata) or in bit format (data or address). The IEEE 802.5 token ring frame format is similar to the FDDI frame format. The FDDI frame includes a preamble to help in clocking, which is desirable for high-speed communication. FDDI does not include an access control (AC) field, which is used in IEEE 802.5 for the priority reservation scheme; FDDI uses a capacity allocation scheme instead of priority reservation. Figure 6.8 shows the frame formats of IEEE 802.5 and FDDI. The FDDI MAC protocol is very similar to the IEEE 802.5 Token Passing Ring protocol, with the following differences:
. After absorbing the free token, an FDDI station starts to transmit data frames. In the IEEE 802.5 Token Passing Ring MAC, the token type bit is changed to a busy token type, and the data are appended to the token.
. An FDDI station frees the token right after transmitting a data frame and does not wait for the data frame to return. In IEEE 802.5, a station does not free the token until the leading edge of the data frame returns.
. FDDI MAC uses the timed-token protocol (TTP) to provide both synchronous and asynchronous services to all stations. If the token arrives early according to the rotation timer, a station can optionally send asynchronous data; if the token is late, only synchronous data can be sent. The IEEE 802.5 MAC protocol is instead based on explicit priority reservation. Both protocols allow the network to respond to changes in traffic load, but FDDI supports a steadier load because lower-priority traffic has more opportunity to send when a token arrives early.
. The use of a restricted token allows two stations a multiframe dialog capability to interchange long sequences of data frames and acknowledgments. This improves the performance of applications that use the capability.
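The timed-token rule for asynchronous traffic reduces to a simple comparison against the target token rotation time (TTRT); the function name is illustrative:

```python
def may_send_async(ttrt, measured_rotation):
    """FDDI timed-token rule: asynchronous traffic may be sent only if
    the token arrived early, i.e., the last rotation took less than the
    target token rotation time (TTRT).  Synchronous traffic may always
    be sent within its allocation."""
    return measured_rotation < ttrt

print(may_send_async(ttrt=8.0, measured_rotation=5.5))  # early -> True
print(may_send_async(ttrt=8.0, measured_rotation=9.2))  # late  -> False
```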
The data-encoding scheme used by FDDI is called 4B/5B, which encodes 4 bits of data within a 5-bit cell. There are no more than three consecutive zero bits in a cell, and at least two transitions occur in each 5-bit cell. The binary bit values are represented with nonreturn to zero inverted (NRZI), where a transition at the beginning of the bit time denotes a binary 1 for that bit time, and no transition indicates a binary 0. Only 16 of the 32 code patterns are used for data; other patterns are used for control symbols. Timing jitter is one of the transmission impairments in data communication: deviation of the recovered clock occurs when a receiver attempts to recover clocking as well as data. Due to the high transmission speed, clock deviation is more severe for FDDI than for IEEE 802.5. The centralized clocking used by the IEEE 802.5 network is inappropriate at 100 Mbps because it would require complicated and expensive phase lock loop circuitry; distributed clocking is therefore used by FDDI. With distributed clocking, each station recovers a clock from its incoming signal and transmits out at its own station clock speed. Each station also maintains its own elastic buffer, unlike the IEEE 802.5 network, which designates only one station to have an elastic buffer. FDDI specifies three reliability requirements: providing an automatic bypass for a bad or powered-off station, using dual rings for easy reconfiguration when one ring is broken, and installing a wiring concentrator. Table 6.4 summarizes the comparison between the IEEE 802.5 Token Passing Ring and ANSI X3T9.5 FDDI. FDDI is more reliable than other local network systems, but it is discouraged due to its high cost. There is a trend to migrate FDDI into a switched backbone fast ethernet for the following additional reasons:
. More bandwidth to the desktops requires increasing the backbone capacity.
. Because of the continued evolution of the ethernet architecture and its speed, it makes business and technical sense to migrate an FDDI backbone into switched fast ethernet.
Table 6.5 shows the comparison of FDDI and fast ethernet.
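The 4B/5B and NRZI schemes described above can be demonstrated directly; the dictionary below lists the 16 standard data code groups of the 4B/5B scheme:

```python
# Standard 4B/5B data symbols (4-bit nibble -> 5-bit code group).
FOUR_B_FIVE_B = {
    "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
    "0100": "01010", "0101": "01011", "0110": "01110", "0111": "01111",
    "1000": "10010", "1001": "10011", "1010": "10110", "1011": "10111",
    "1100": "11010", "1101": "11011", "1110": "11100", "1111": "11101",
}

def encode_4b5b(nibbles):
    """Map each 4-bit nibble to its 5-bit code group."""
    return "".join(FOUR_B_FIVE_B[n] for n in nibbles)

def nrzi(bits, level=0):
    """NRZI: a binary 1 is a transition at the start of the bit time,
    a binary 0 is no transition.  Returns the resulting signal levels."""
    out = []
    for b in bits:
        if b == "1":
            level ^= 1
        out.append(level)
    return out

code = encode_4b5b(["0000", "0101"])
print(code)         # -> 1111001011 (never more than 3 consecutive zeros)
print(nrzi(code))
```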
TABLE 6.5 Comparing FDDI and Fast Ethernet

                     FDDI                               Fast ethernet
Reliability          Self-healing dual rings            Can provide one backup connection
Maximum frame size   4500 octets                        1518 octets
Performance          Sustained performance with         Whole network owned by each user
                       increasing number of stations      with switched ethernet
Distance             Up to 32 km with fiber             Up to 32 km with fiber
Price                Close to $1000/port                $100–$150/port
6.3 Local Network Internetworking Using Bridges or Routers A bridge is a device that operates at the MAC level to connect local networks of a similar type, and a router is a device that operates at the third protocol layer to connect dissimilar types of local networks, including a WAN. Internetworking using bridges or routers is required for communication among multiple LANs in an organization, communication with a LAN in a different location, and communication with the Internet. A simple bridge contains just the forwarding function, but a more sophisticated bridge offers additional functions, such as dynamic routing, priority, and congestion control. There are several reasons to use a bridge for connecting multiple LANs:
. Reliability: To avoid a single point of failure, a network can be divided into several self-contained units.
. Performance: Placing users into several smaller LANs rather than one large LAN reduces the contention time and improves performance.
. Security: It is desirable to keep different types of traffic separate to improve security (e.g., work group communication can be kept on the same LAN).
. Location: Different floors or buildings require a bridge to connect the locations.

TABLE 6.4 Comparing IEEE 802.5 and ANSI X3T9.5 FDDI

ANSI X3T9.5 FDDI                                 IEEE 802.5 Token Passing Ring
Fiber or twisted pair as transmission medium     Twisted pair or optical fiber as transmission medium
100 Mbps                                         4, 16, or 100 Mbps
Reliability specification                        No reliability specification
Maximum of 1000 stations                         Maximum of 250 stations
4B/5B encoding                                   Differential Manchester encoding
Distributed clocking                             Centralized clocking
Maximum frame size of 4500 octets                Maximum of 4550 octets for a 4-Mbps network and
                                                   18200 octets for 16- and 100-Mbps networks
Timed token rotation                             Priority reservation
Token released right after transmission          Token released after busy token comes back
The communication path between two LANs/MANs can be provided either by a single bridge or by multiple bridges. When connecting with multiple bridges, a point-to-point protocol, such as High-Level Data Link Control (HDLC) or an X.25 connection, can be used between two bridges. Bridges do not modify the content of the frames they receive but must contain addressing and routing intelligence. A bridge can also act as a multiple-port switch connecting more than two LANs. The routing approaches of bridges are as complicated as those of routers. The basic routing requirement is to avoid forwarding frames in a closed loop. Each bridge must decide whether a frame should be forwarded and onto which LAN it should be forwarded next. The following approaches are used for routing with bridges:
. Fixed routing: A central routing directory is created at the network control computer. The directory contains the source–destination pairs of LANs and the bridge ID for each pair of LANs. Each bridge on the network has a table that contains the source–destination pairs of LANs for each of the LANs attached to the bridge.
. Transparent bridge: The algorithm was developed by IEEE 802.1 and is intended to interconnect similar or dissimilar LANs (IEEE 802.3, IEEE 802.4, and IEEE 802.5) and to learn on which side of a LAN a MAC station resides by examining source and destination addresses. The main idea is for the bridges to select the ports over which they will forward frames. The algorithm uses graph theory to construct a spanning tree so that frames are not forwarded in a closed loop. The algorithm selects one root bridge for the network, one designated port for each bridge, and one designated bridge for each LAN. All frames are forwarded toward the root bridge via the designated bridge and the designated port. The root bridge then forwards the frames via all of its ports. The root bridge has the smallest ID, and each bridge computes the shortest path to the root and notes which port is on that path. The designated port is the preferred path to the root. Each LAN selects a designated bridge that is responsible for forwarding frames toward the root bridge. The winner is the bridge that is closer to the root and has the smaller ID.
. Source routing: The routing algorithm was developed by IEEE 802.5. Each source station determines the route that the frame will take and includes a sequence of LAN and bridge identifiers in the route information field (RIF). The route information is obtained by discovery broadcast frames sent by a source station. The approach requires changes (additional bits) to the MAC frame format.
. Source routing transparent (SRT): This approach allows for the interconnection of LANs by a mixture of source routing and transparent bridging. The route information indicator (RII) bit in the MAC source address determines which algorithm will be used. If the RII equals 0, the frame is handled by the transparent bridge logic; if it equals 1, it is handled by the source routing logic.
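The learning behavior of a transparent bridge described above can be sketched as a table that maps each source MAC address to the port it was last seen on; the two-port bridge, station names, and port numbers below are made up for illustration:

```python
# Minimal sketch of transparent-bridge address learning (assumed two-port bridge).
forwarding_table: dict[str, int] = {}

def handle_frame(src: str, dst: str, in_port: int, ports=(1, 2)):
    """Learn the source address, then forward or flood the frame."""
    forwarding_table[src] = in_port             # learning: src was seen on in_port
    if dst in forwarding_table:
        out = forwarding_table[dst]
        return [] if out == in_port else [out]  # filter frames already on the right LAN
    return [p for p in ports if p != in_port]   # unknown destination: flood

# Station A (port 1) sends to unknown B: the frame is flooded out port 2.
assert handle_frame("A", "B", 1) == [2]
# B replies from port 2; the bridge has learned A, so it forwards only to port 1.
assert handle_frame("B", "A", 2) == [1]
```

This is only the learning/forwarding step; in a real bridged network the spanning tree algorithm described above first disables redundant ports so that flooding cannot loop.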
Routers are used to connect LANs to each other and to the Internet. The addressing scope of a router is at the network layer (e.g., IP). A router makes decisions about how to route packets. The routing protocol can be either internal, within a domain network, or external, between two domain networks.
6.3.1 Performance of Local Networks
The objectives of studying the performance of a LAN are to understand the factors that affect the performance of a local network, to understand the relative performance of various local network protocols, and to apply this knowledge in the design and configuration of a local network. The characteristics common to LANs and MANs are that a shared medium is used, the packet switching technique is used, and a medium access control (MAC) protocol is used. The common measurable parameters for LAN and MAN are described as follows:
. Delay (D): The time period between when a frame is ready for transmission from a station and the completion of the successful transmission
. Throughput (S): The total rate of data being successfully transmitted between stations
. Utilization (U): The fraction of total network capacity being used
. Offered load (G): The total rate of data presented to the network for transmission
. Input load (I): The rate of data generated by the stations attached to a local network
How are these parameters related? S and U are proportional to G, but they flatten once the network capacity is exceeded. D increases as the number of active users or I increases. A local network distinguishes itself by short distance; the small propagation delay therefore provides nearly instant feedback about the state of a transmission. The frame size determines the transmission time; it plays an important role in designing the network and affects the performance of a local network. The parameter a is obtained by dividing the propagation time by the transmission time and is the single most important parameter that affects the performance of a local network. The smaller a is, the better the performance of the network. If the transmission time is normalized to 1, a indicates the propagation delay of the network. The parameter a is also a good indication of effects on utilization and throughput. In general, the utilization varies inversely with a (U = 1/(1 + a)) regardless of which MAC protocol is used. Throughput is affected by the parameter a regardless of the offered load (G): the larger a is, the lower the throughput of the network. Different MAC protocols produce different types of overhead. For contention protocols, like ALOHA, CSMA, and CSMA/CD, the overheads are collisions, acknowledgments, and wasted slot time. For the token bus protocol, the overheads are passing the token (even when the network is idle), token transmission, and acknowledgment. For the token ring protocol, the overhead is just waiting for the token when the other stations have no data to send. The factors that affect the performance of LAN/MAN are listed as follows:
. Capacity: Affects the parameter a
. Propagation delay: Affects the parameter a
. Frame size: Affects the parameter a
. MAC protocols: Affect throughput and delay
. Offered load: Affects throughput and utilization
. Number of stations: Affects delay, input load, offered load, and throughput
. Error rate: Not quite an issue for LAN/MAN performance
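The role of a can be made concrete with a quick calculation; the link length, bit rate, and frame size below are illustrative assumptions, not values from the text:

```python
PROPAGATION_SPEED = 2e8   # m/s, a typical value for copper or fiber media
link_length_m = 1_000     # assumed 1-km LAN
bit_rate = 10e6           # assumed 10 Mb/s network capacity
frame_bits = 1_000        # assumed frame size

propagation_time = link_length_m / PROPAGATION_SPEED   # 5 microseconds
transmission_time = frame_bits / bit_rate              # 100 microseconds

a = propagation_time / transmission_time               # a = 0.05
utilization_bound = 1 / (1 + a)                        # U = 1/(1 + a)
print(a, utilization_bound)
```

With these numbers, U is about 0.95; doubling the capacity or halving the frame size doubles a and lowers the achievable utilization, which is why capacity, propagation delay, and frame size all appear in the list above.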
The number of active users in a local network is an important parameter that bounds the performance. Understanding these bounds helps to determine the size of a LAN. A LAN can be in a state of low delay, where the capacity is greater than the offered load; of high delay, where more time is spent controlling access to the network than on actual data transmission; or of unbounded delay, where the offered load is greater than the capacity. The state of unbounded delay must be avoided in the design. In general, delay is bounded by the number of users (N) multiplied by the transmission time Tmsg. The throughput is bounded by N/(Tidle + N Tmsg), where Tidle is the mean idle time. The number of active stations can be determined by the following equation: N = Tidle/Tmsg. Generally, the study of LAN performance made by the IEEE 802 Committee suggests that smaller frame sizes produce greater differences in throughput between token passing and CSMA/CD; that CSMA/CD strongly depends on the value of a; that token ring is the least sensitive to the workload; and that CSMA/CD has the shortest delay under light load and is the most sensitive to heavy load.
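The delay and throughput bounds above can be checked numerically. The station count and timing values below are illustrative assumptions (with throughput expressed in frames per second):

```python
n_stations = 20    # assumed number of active users (N)
t_msg = 1e-3       # assumed frame transmission time Tmsg, in seconds
t_idle = 5e-3      # assumed mean idle time Tidle, in seconds

delay_bound = n_stations * t_msg                              # N * Tmsg = 20 ms
throughput_bound = n_stations / (t_idle + n_stations * t_msg)  # N/(Tidle + N*Tmsg)
print(delay_bound, throughput_bound)
```

Here the delay bound is 20 ms and the throughput bound is 800 frames/s; growing N raises the delay bound linearly, which is why the number of active users limits the practical size of a LAN.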
6.4 Conclusion
The International Organization for Standardization's (ISO) Open Systems Interconnection (OSI) model continues to be the de facto network reference model. All existing networks and future evolving networks continue to be modeled on the OSI reference model. This chapter introduced a general network architecture using OSI and provided a more detailed description of local networks and wireless local area networks. WLANs are extensively used in home and office networks, and there are increasing numbers of public hotspots worldwide. Seamless interoperability between WLANs and mobile cellular networks is under way. It is expected that using IP networks to deliver various types of applications will become the norm; the IP Multimedia Subsystem (IMS) is intended to deliver an all-IP network architecture for all types of applications.
Glossary
4B/5B: Encoding scheme to encode 4 bits of data into a 5-bit cell
ANSI: American National Standards Institute
ATM: Asynchronous Transfer Mode
CSMA: Carrier Sense Multiple Access
CSMA/CD: Carrier Sense Multiple Access with Collision Detection
FDDI: Fiber Distributed Data Interface
IEEE 802: An IEEE committee that specifies LAN standards
IMS: IP Multimedia Subsystem
LAN: Local Area Network
LLC: Logical Link Control
MAC: Medium Access Control
MAN: Metropolitan Area Network
NRZI: Non-Return-to-Zero Inverted
TTP: Timed-Token Protocol
WEP: Wired Equivalent Privacy
WPA: Wi-Fi Protected Access
7 Wireless Network Access Technologies

Vijay K. Garg, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Yih-Chen Wang, Lucent Technologies, Naperville, Illinois, USA

7.1 Access Technologies
7.1.1 FDMA . 7.1.2 TDMA . 7.1.3 CDMA
7.2 Comparisons of FDMA, TDMA, and CDMA
7.1 Access Technologies
The problem of a cellular system boils down to choosing how to share a common pool of radio channels among users. Users can gain access to any channel (each user is not always assigned to the same channel). A channel can be thought of as merely a portion of the limited radio resource that is temporarily allocated for a specific purpose, such as someone's phone call. A multiple access method is a definition of how the radio spectrum is divided into channels and how channels are allocated to the many users of the system. The sharing of spectrum is required to achieve high capacity by simultaneously allocating the available bandwidth to multiple users. There are three possible multiple access methods: frequency-division multiple access (FDMA), time-division multiple access (TDMA), and code-division multiple access (CDMA).
7.1.1 FDMA
FDMA assigns individual channels to individual users. As can be seen from Figure 7.1, each user is allocated a unique frequency channel. These channels are assigned on demand to subscribers who request service. Guard bands are maintained between adjacent signal spectra to minimize cross talk between channels. During the period of the call, no other user can share the same frequency band. In frequency-division duplex (FDD) systems, each user is assigned a channel as a pair of frequencies; one frequency is used for the uplink, while the other is used for the downlink.
FDMA is used by analog systems, such as AMPS, NMT, or Radiocom 2000. The advantages of FDMA are the following:
. The complexity of FDMA systems is lower when compared to TDMA and CDMA systems, though this is changing as digital signal processing methods improve for TDMA and CDMA.
. FDMA is technically simple to implement.
. A capacity increase can be obtained by reducing the information bit rate and using efficient digital codes.
. Since FDMA is a continuous transmission scheme, fewer bits are needed for overhead purposes as compared to TDMA.

The disadvantages of FDMA include the following:
. Only modest capacity improvements can be expected from a given spectrum allocation.
. FDMA wastes bandwidth. If an FDMA channel is not in use, it sits idle and cannot be used by other users to increase or share capacity.
. FDMA systems have higher cell site system costs compared to TDMA systems because of the need to use costly bandpass filters to eliminate spurious radiation at the base station.
. The FDMA mobile unit requires a duplexer since the transmitter and receiver operate at the same time. A duplexer adds weight, size, and cost to a radio transmitter and can limit the minimum size of a subscriber unit.
. FDMA requires tight RF filtering to minimize adjacent channel interference.
. The maximum bit rate per channel is fixed and small, inhibiting the flexibility in bit-rate capability that may be a requirement for 3G applications in the future.

[FIGURE 7.1 Frequency-Division Multiple Access (FDMA)]

[FIGURE 7.3 Multipath Interference]
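To make the channelization concrete, the number of FDMA channels in an allocation follows from the channel spacing and guard bands. The sketch below uses AMPS-like numbers; the 12.5-MHz allocation and 10-kHz edge guard bands are illustrative assumptions, not values from the text:

```python
allocation_hz = 12.5e6  # assumed spectrum per operator, one direction
guard_hz = 10e3         # assumed guard band at each band edge
channel_hz = 30e3       # AMPS-style channel spacing

usable_hz = allocation_hz - 2 * guard_hz
n_channels = int(usable_hz // channel_hz)  # whole channels that fit
print(n_channels)
```

With these assumptions 416 channels fit, and each one sits idle whenever its user is silent, which is the bandwidth-wasting disadvantage listed above.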
7.1.2 TDMA TDMA relies on the fact that the audio signal has been digitized (i.e., divided into a number of millisecond-long packets). It allocates a single frequency channel for a short period of time and then moves to another channel. The digital samples from a single transmitter occupy different time slots in several bands at the same time, as shown in Figure 7.2. In a TDMA system, the radio spectrum is divided into time slots, and in each slot, only one user is allowed to either transmit or receive. It can be seen from Figure 7.2 that each user occupies a cyclically repeating time slot, so a channel may be thought of as a particular time slot that reoccurs every frame, where several time slots make up a frame. Since the transmission for any user is noncontinuous, digital modulation must be used with TDMA. The transmission from various users is interlaced into a repeating frame structure as shown in Figure 7.3. Each frame is made up of a preamble, information bits addressed to various stations, and trail bits. The function of the preamble is to provide identification and incidental information and to allow
synchronization of the slot at the intended receiver. Guard times are used between each user's transmission to minimize cross talk between channels. In a TDMA/TDD system, half of the time slots in the frame information message would be used for the forward link channels and half for the reverse link channels. In TDMA/FDD systems, an identical or similar frame structure would be used for either forward or reverse transmission, but the carrier frequencies would be different for the forward and reverse links. GSM uses a TDMA technique, where the carrier is 200 kHz wide and supports eight full-rate channels. A channel consists (roughly) of a 0.58-ms time slot recurring every 4.6 ms.

[FIGURE 7.2 Time-Division Multiple Access (TDMA)]

The advantages of TDMA are the following:
. TDMA permits a flexible bit rate, not only multiples of a basic single-channel rate but also submultiples for low bit-rate multicast traffic.
. The handoff process in TDMA is much simpler for a subscriber unit because it is able to listen for other base stations during idle time slots.
. TDMA potentially integrates into VLSI without narrowband filters, giving a low cost floor in volume production.
. TDMA uses different time slots for transmission and reception; thus, duplexers are not required.
. TDMA enables allocation of different numbers of time slots per frame to different users. Thus, bandwidth can be supplied on demand to different users by concatenating or reassigning time slots based on priority.
. TDMA can be easily adapted to the transmission of data as well as voice communication.
. TDMA offers the ability to carry data rates of 64 kbps to 120 Mbps (expandable in multiples of 64 kbps). This enables operators to offer personal communication-like services including fax, voice-band data, and short message services (SMSs) as well as bandwidth-intensive applications, such as multimedia and videoconferencing.
. Unlike spread-spectrum techniques, which can suffer from interference among users, all of whom are on the same frequency band and transmitting at the same time, TDMA separates users in time, ensuring that they will not experience interference from other simultaneous transmissions.
. TDMA provides the user with extended battery life and talk time because the mobile transmits only a portion of the time (from one-third to one-tenth) during conversations.
. TDMA installations offer substantial savings in base-station equipment, space, and maintenance, an important factor as cell sizes grow ever smaller.
. TDMA is the most cost-effective technology for upgrading a current analog system to digital.
. TDMA is the only technology that offers an efficient use of hierarchical cell structures (HCSs) offering pico-, micro-, and macrocells. HCSs allow system coverage to be tailored to support specific traffic and service needs. By using this approach, system capacities of more than 40 times AMPS can be achieved in a cost-efficient way.
. Because of its inherent compatibility with FDMA analog systems, TDMA allows service compatibility with the use of dual-mode handsets.
The disadvantages of TDMA are listed here:
. TDMA requires a substantial amount of signal processing for matched filtering and correlation detection for synchronizing with a time slot.
. Each user has a predefined time slot. However, users roaming from one cell to another are not allotted a time slot. Thus, if all the time slots in the next cell are already occupied, a call might well be disconnected. Likewise, if all the time slots in the cell in which a user happens to be are already occupied, the user will not receive a dial tone.
. TDMA is subject to multipath distortion. A signal traveling from a tower to a handset might arrive from any one of several directions, having bounced off several different buildings along the way (see Figure 7.3), which can cause interference.
One way of getting around this interference is to put a time limit on the system. The system is designed to receive, treat, and process a signal within a certain time limit; after the time limit has expired, it ignores signals. The sensitivity of the system depends on how far out it processes the multipath frequencies. Even at thousandths of seconds, these multipath signals cause problems. All cellular architectures, whether microcell- or macrocell-based, have a unique set of propagation problems. Macrocells are particularly affected by multipath signal loss, a phenomenon usually occurring at the cell fringes where reflection and refraction may weaken or cancel a signal. Frequency- and time-division multiplexing can be combined (i.e., a channel can use a certain frequency band for a certain amount of time). This scheme is more robust against frequency-selective interference (i.e., interference in a certain small frequency band). In addition, this scheme provides some (weak) protection against tapping. GSM uses this combination of frequency- and time-division multiplexing for transmission between a mobile phone and a base station.
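The GSM figures quoted above (a time slot of roughly 0.58 ms recurring every 4.6 ms, eight full-rate channels per 200-kHz carrier) follow from simple arithmetic; the 4.615-ms frame length used below is the standard GSM value behind the text's rounded "4.6 ms":

```python
frame_ms = 4.615          # GSM TDMA frame duration (the "4.6 ms" in the text)
slots_per_frame = 8       # full-rate channels per 200-kHz carrier

slot_ms = frame_ms / slots_per_frame   # per-user slot, about 0.577 ms ("0.58 ms")
duty_cycle = 1 / slots_per_frame       # a mobile transmits only 1/8 of the time
print(round(slot_ms, 3), duty_cycle)
```

The 1/8 transmit duty cycle is what underlies the extended-battery-life advantage listed earlier.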
7.1.3 CDMA
In CDMA, each user is assigned a unique code sequence for encoding an information-bearing signal. The receiver, knowing the code sequence of the user, decodes the received signal and recovers the original data. This is possible because the cross-correlations between the code of the desired user and the codes of the other users are small. Because the bandwidth of the code signal is chosen to be much larger than the bandwidth of the information-bearing signal, the encoding process enlarges (spreads) the spectrum of the signal and is therefore also known as spread-spectrum modulation. The resulting signal is also called a spread-spectrum signal. CDMA is commonly explained by the "cocktail party image," where groups of people speaking different languages can communicate simultaneously, despite the surrounding noise. For a group of people speaking the same language, the rest of the people in the room are perceived as noise. Knowing the language being spoken allows them to filter out this noise and understand each other. If someone records the "noise" and knows the different languages, he or she will be able, playing the tape several times, to extract the various conversations taking place in the various languages. With sufficient processing capacity, all conversations can be extracted simultaneously. CDMA receivers separate communication channels by means of a pseudo-random modulation that is applied and removed in the digital domain (with these famous codes), not on the basis of frequency. Multiple users occupy the same frequency band. CDMA averages interference among all users, which avoids having to dimension the network for the worst case and thereby optimizes spectrum efficiency. It also efficiently supports variable bit-rate services. This multiple access method reduces the peak transmitter power, reducing power amplifier consumption and thereby increasing battery efficiency.
CDMA reduces the average transmitted power and increases battery efficiency. It allows a reuse factor of 1 (i.e., the same frequency is used in adjacent cells), thus avoiding the need for frequency planning. On the other hand, code planning is required, but this is less difficult than frequency planning because the code reuse pattern is much larger than the frequency reuse pattern commonly encountered in FDMA systems. However, CDMA requires complex and very accurate power control, which is a key factor for the system capacity and its proper operation. In direct-sequence CDMA (DS-CDMA), the data signal is directly modulated by a digital, discrete-time, discrete-valued code signal. The data signal can be either analog or digital; in most cases, it is digital. In the case of a
digital signal, the data modulation is often omitted, and the data signal is directly multiplied by the code signal; the resulting signal modulates the wideband carrier. Basic DS-CDMA elements include the RAKE receiver, power control, soft handover, interfrequency handover, and multiuser detection. Key CDMA system attributes include increased frequency reuse, efficient variable-rate speech compression, enhanced RF power control, lower average transmit power, the ability to simultaneously receive and combine several signals to increase service reliability, seamless handoff, extended battery life, and advanced features.

IS-95 CDMA
The IS-95 CDMA air interface standard, after the first revision in 1995, was termed IS-95A; it specifies the air interface for the cellular 800-MHz frequency band. ANSI J-STD-008 specifies the air interface for 1900-MHz PCS. It differs from IS-95A primarily in the frequency plan and in call processing related to subscriber station identity, such as paging and call origination. TSB74 specifies the Rate Set 2 standard. IS-95B merges the IS-95A, ANSI J-STD-008, and TSB74 standards. In addition, it specifies high-speed data operation using up to eight parallel codes, resulting in a maximum bit rate of 115.2 Kb/s. Table 7.1 lists the main parameters of the IS-95 CDMA air interface.

TABLE 7.1 IS-95 CDMA Air Interface Parameters
Bandwidth: 1.25 MHz
Chip rate: 1.2288 Mc/s
Frequency band uplink: 824–849 MHz (cellular); 1850–1910 MHz (PCS)
Frequency band downlink: 869–894 MHz (cellular); 1930–1980 MHz (PCS)
Frame length: 20 ms
Bit rates: Rate set 1: 9.6 kb/s; Rate set 2: 14.4 kb/s; IS-95B: 115.2 kb/s
Speech codec: QCELP 8 kb/s; ACELP 13 kb/s
Soft handover: Yes
Power control: Uplink: open loop + fast closed loop; Downlink: slow quality loop
Number of RAKE fingers: Four
Spreading codes: Walsh + long M-sequence

Wideband CDMA
The 3G air interface standardization for the schemes based on CDMA focuses on two main types of wideband CDMA: network asynchronous and network synchronous. In network asynchronous schemes, the base stations are not synchronized; in network synchronous schemes, the base stations are synchronized to each other within a few microseconds. A network asynchronous CDMA proposal is WCDMA in the European Telecommunications Standards Institute (ETSI) and the Association of Radio Industries and Businesses (ARIB). A network synchronous wideband CDMA scheme is CDMA2000. The main technical differences between the WCDMA and CDMA2000 systems are the chip rate, downlink channel structure, and network synchronization. CDMA2000 uses a chip rate of 3.6864 Mc/s for the 5-MHz band allocation with the direct-spread downlink and a 1.2288 Mc/s chip rate for the multicarrier downlink. WCDMA uses direct spread with a chip rate of 4.096 Mc/s.

WCDMA
The WCDMA scheme was developed as a joint effort between ETSI and ARIB in 1997. The ETSI WCDMA scheme was developed from the FMA2 scheme in Europe and the ARIB WCDMA from the Core-A scheme in Japan. The uplink of the WCDMA scheme is based mainly on the FMA2 scheme and the downlink on the Core-A scheme. Table 7.2 lists the main parameters of WCDMA.

TABLE 7.2 Parameters of WCDMA
Channel bandwidth: 1.25, 5, 10, 20 MHz
Downlink RF channel structure: Direct spread
Chip rate: 1.024/4.096/8.192/16.384 Mc/s
Roll-off factor for chip shaping: 0.22
Frame length: 10 ms/20 ms (optional)
Spreading modulation: Balanced QPSK (downlink); dual-channel (uplink); complex spreading circuit
Coherent detection: User-dedicated time-multiplexed pilot (downlink and uplink); no common pilot in downlink
Channel multiplexing in uplink: Control and pilot channel time multiplexed; I and Q multiplexing for data and control channel
Multirate: Variable spreading and multicode
Spreading factors: 4–256
Power control: Open- and fast-closed loop (1.6 kHz)
Spreading (downlink): Variable-length orthogonal sequences for channel separation; Gold sequence 2^18 for cell and user separation (truncated cycle 10 ms)
Spreading (uplink): Variable-length orthogonal sequences for channel separation; Gold sequence 2^41 for user separation (different time shifts in I and Q channels, truncated cycle 10 ms)
Handover: Soft handover; interfrequency handover

CDMA2000
Within standardization committee TIA TR45.4, the subcommittee TR45.5.4 was responsible for the selection of the basic CDMA2000 concept. As for all the other wideband CDMA schemes, the goal has been to provide data rates meeting the IMT-2000 performance requirements of at least 144 Kb/s in a vehicular environment, 384 Kb/s in a pedestrian environment, and 2.048 Mb/s in an indoor office environment. The main focus of standardization has been providing 144 Kb/s and 384 Kb/s with approximately 5-MHz bandwidth. The main parameters of CDMA2000 are listed in Table 7.3.

TABLE 7.3 CDMA2000 Parameter Summary
Channel bandwidth: 1.25, 5, 10, 20 MHz
Downlink RF channel structure: Direct spread or multicarrier
Chip rate: 1.2288/3.6864/7.3728/11.0593/14.7456 Mc/s for direct spread; n × 1.2288 Mc/s (n = 1, 3, 6, 9, 12) for multicarrier
Roll-off factor: Similar to IS-95
Frame length: 20 ms for data and control; 5 ms for control information on the fundamental and dedicated control channel
Spreading modulation: Balanced QPSK (downlink); dual-channel QPSK (uplink); complex spreading circuit
Coherent detection: Pilot time multiplexed with PC and EIB (uplink); common continuous pilot channel and auxiliary pilot (downlink)
Channel multiplexing in uplink: Control, pilot, fundamental, and supplemental code multiplexed; I and Q multiplexing for data and control channel
Multirate: Variable spreading and multicode
Spreading factors: 4–256
Power control: Open- and fast-closed loop (800 Hz, higher rates under study)
Spreading (downlink): Variable-length Walsh sequences for channel separation; M-sequence 2^15 (same sequence with time shift used in different cells, different sequence in I and Q channels)
Spreading (uplink): Variable-length orthogonal sequences for channel separation; M-sequence 2^15 (same sequence for all users, different sequences in I and Q channels); M-sequence 2^41 − 1 for user separation (different time shifts for different users)
Handover: Soft handover; interfrequency handover

TABLE 7.4 Comparison of FDMA, TDMA, and CDMA
Idea:
  FDMA: Segment the frequency band into disjoint sub-bands
  TDMA: Segment sending time into disjoint time slots, demand driven or fixed patterns
  CDMA: Spread the spectrum using orthogonal codes
Terminals:
  FDMA: Every terminal has its own frequency, uninterrupted
  TDMA: All terminals are active for short periods of time on the same frequency
  CDMA: All terminals can be active at the same place at the same time, uninterrupted
Signal separation:
  FDMA: Filtering in the frequency domain
  TDMA: Synchronization in the time domain
  CDMA: Code plus RAKE receivers
Advantages:
  FDMA: Simple, established, robust
  TDMA: Established, fully digital, flexible
  CDMA: Flexible, less planning needed, soft handover
Disadvantages:
  FDMA: Inflexible, wasteful of spectrum
  TDMA: Guard space needed, synchronization difficult
  CDMA: Complex receivers, need for more complicated power control for senders
7.2 Comparisons of FDMA, TDMA, and CDMA
To conclude this section, we compare the three basic access technologies in Table 7.4. The table shows the MAC schemes without combination with other schemes. However, in real systems, the MAC schemes always occur in combinations, such as those used in GSM and IS-95. The primary advantage of DS-CDMA is its ability to tolerate a fair amount of interfering signals, compared to FDMA and TDMA, which typically cannot tolerate any such interference. With DS-CDMA, adjacent microcells share the same frequencies, whereas with FDMA and TDMA, it is not feasible for adjacent microcells to share the same frequencies because of interference.
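The code-separation arithmetic behind DS-CDMA can be sketched in a few lines; the length-4 Walsh codes and bit patterns below are illustrative, not any standard's actual spreading codes:

```python
import numpy as np

# Two orthogonal Walsh codes of length 4, chips in +/-1 form.
walsh = np.array([[1, 1, 1, 1],
                  [1, -1, 1, -1]])

# Each user's data bits, mapped to +/-1.
bits = np.array([[1, -1, 1],    # user 0
                 [-1, -1, 1]])  # user 1

# Spread: each bit is repeated over one code period and multiplied by the code;
# both users transmit on the same band, so the channel carries the superposition.
tx = sum(np.kron(bits[u], walsh[u]) for u in range(2))

# Despread user 0: correlate each 4-chip window with code 0 and take the sign.
rx = tx.reshape(-1, 4)
recovered = np.sign(rx @ walsh[0])
print(recovered)  # user 0's bits, despite user 1 occupying the same band
```

Because the codes are orthogonal, user 1's contribution correlates to zero and user 0's bits come out cleanly; with merely low-cross-correlation (pseudo-random) codes, other users appear as residual noise instead, which is why accurate power control matters.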
8 Convergence of Networking Technologies

Vijay K. Garg, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Yih-Chen Wang, Lucent Technologies, Naperville, Illinois, USA

8.1 Convergence
8.1.1 Session Initiation Protocol . 8.1.2 Softswitch . 8.1.3 All IP Architecture
8.2 Optical Networking
8.2.1 Dense Wavelength Division Multiplexing
8.1 Convergence
The next generation of wireless networks will be flexible, open, and standards-based. These networks will facilitate the convergence of wireless networks, the PSTN, and the Internet and will provide voice, data, and video services to platforms such as cell phones, PDAs, laptops, PCs, digital cameras, and a plethora of new portable devices. Advances in optical networking will increase the capacity of a core network by packing vastly more information into each fiber. Next-generation media interface protocols, such as SIP and MEGACO, have already proven more effective than SS7 and H.323 in a converged network of wireless systems, wireline systems, and the Internet. The key to wireless convergence is IP and the softswitch. This software-on-a-switch is the heart of the next-generation wireless networks, providing integration of the wireless networks, the Internet, and the PSTN. This is the elusive Holy Grail of communications: any-to-any end point interoperability, enhanced services, and flexible billing, all on one network.
8.1.1 Session Initiation Protocol
Session initiation protocol (SIP) is an application-layer signaling protocol for creating, modifying, and terminating sessions with one or more participants. SIP sessions include Internet multimedia conferences, Internet telephone calls, and multimedia distribution. SIP invitations create sessions carrying session descriptions, which allow participants to agree on a set of
compatible media types. SIP supports user mobility by proxying and redirecting requests to the user's current location. Users can register their current locations. SIP makes minimal assumptions about the underlying transport protocol. It can be extended easily with additional capabilities and is an enabling technology for providing innovative new services that integrate multimedia with Internet services such as the World Wide Web, e-mail, instant messaging, and presence. The core SIP protocol, specified in Internet Engineering Task Force (IETF) Request for Comments (RFC) 2543, is modeled on foundational Internet protocols, such as HTTP and the Simple Mail Transfer Protocol (SMTP). As such, it supports a request/response transaction model that is text-based, similar to e-mail, and self-describing; in addition, SIP inherited many features from standard Internet protocols. For example, it uses many of the HTTP/1.1 header fields, supports Uniform Resource Identifiers/Universal Resource Locators (URI/URL) for addressing, and employs the Multipurpose Internet Mail Extensions (MIME) protocol for message payload description. It is lightweight, as the baseline SIP consists of only six methods. Extensions are being proposed by adding new methods, new headers, and new message body types to support additional applications such as interworking with legacy systems, cable, or wireless applications. An SIP initiation scenario is shown in Figure 8.1. The standard SIP architectural components are the following:
• An SIP client is an end system with the SIP User Agent (UA) residing in it. The user agent consists of two
Vijay K. Garg and Yih-Chen Wang
FIGURE 8.1 Session Initiation in SIP [figure: numbered request/response message flow (1-14) among two SIP clients, SIP proxy servers, an SIP redirect server, and a location service]

components: the User Agent Client (UAC), which is responsible for sending SIP requests, and the User Agent Server (UAS), which listens for incoming requests and prompts a user or executes a program to determine responses.
• A proxy server is responsible for routing and delivering messages to the called party. It receives requests and forwards them to another server (called a next-hop server), which may be another proxy server, a UAS, or a redirect server. It can fork a request, sending copies to multiple next-hop servers at once; this allows a call setup request to try many different locations simultaneously. It can also forward the invitation to a multicast group. A proxy server can be call-stateful, stateful (transaction-stateful), or stateless.
• A redirect server also receives requests and determines next-hop server(s). Instead of forwarding the request there, however, it returns the address(es) of the next-hop server(s) to the sender.
• A location service is used by an SIP server to obtain information about a callee's possible locations. It is outside the scope of SIP and can be anything: LDAP, whois, whois++, a corporate database, a local file, or the result of program execution (IN).
• A registration server receives updates on the current locations of users. It is typically colocated with a proxy or redirect server. The server may make its information available through the location server.
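The text-based, HTTP-like transaction model described above is easy to see in a concrete message. Below is a minimal sketch, in Python rather than anything from this chapter, that assembles an RFC 2543-style INVITE request line and a few headers; all addresses, hosts, and the Call-ID value are hypothetical examples, and a real SIP stack would add Via, CSeq retransmission handling, an SDP body, and more.

```python
# Sketch: build a minimal RFC 2543-style SIP INVITE as plain text.
# All URIs, hosts, and identifier values below are hypothetical examples.

BASELINE_METHODS = ["INVITE", "ACK", "OPTIONS", "BYE", "CANCEL", "REGISTER"]

def make_invite(caller: str, callee: str, call_id: str) -> str:
    """Assemble a bare-bones INVITE; real stacks add Via, full CSeq handling, SDP, etc."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        f"From: <sip:{caller}>",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Type: application/sdp",
        "",  # blank line separates headers from the (omitted) session description
    ]
    return "\r\n".join(lines)

msg = make_invite("alice@example.com", "bob@example.com", "1234@example.com")
method = msg.split(" ", 1)[0]
print(method in BASELINE_METHODS)  # the request method is one of the six baseline methods
```

The six-element method list mirrors the "baseline SIP consists of only six methods" remark in the text; extensions add further methods beyond this set.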
Often hailed as more flexible than H.323, SIP is an application-layer control protocol that can establish, modify, and terminate sessions or calls. SIP is text-based and lightweight, and it uses a simple invitation–acceptance message structure. Other benefits of SIP over H.323 include scalability, service richness, lower latency, faster speed, and the ability to distribute for carrier-grade reliability. SIP is a new protocol currently under development. The core SIP application specified in IETF RFC 2543 is being implemented and tested in SIP bake-offs. Many extensions of SIP are being proposed and are under discussion. Holes and gaps exist for system-level deployment; for example, QoS, security, services, and operations issues are still being debated. Interworking mappings of SIP with H.323 and ISUP/BICC are not yet finalized (Internet draft status).

FIGURE 8.2 The Softswitch Architecture [figure: third-party applications and services (unified messaging, debit prepaid, e-commerce, customer care, Web services) accessed through a high-level API (Java, C++, JTAPI); software-based call and service control (the softswitch); standards-based control protocols (IPDC, H.248, H.323); and signaling and media gateways]

8 Convergence of Networking Technologies
8.1.2 Softswitch
Softswitch is an all-encompassing term for the next-generation communications systems that employ open standards to create integrated networks with a decoupled service intelligence capable of carrying voice, video, and data traffic more efficiently and with far greater value-added service potential than is possible with existing circuit-switched networks. Softswitch-based networks will enable service providers to support new, enhanced voice service features as well as offer new types of multimedia applications, in addition to integrating existing wireline and wireless voice services with advanced data and video services. The separation of call control and services (Figure 8.2) from the underlying transport network is a key enabling feature of softswitch-based networks. Indeed, the very approach of building switched networks with extracted service intelligence represents a major breakthrough when compared to the circuit-switched approach of combining the transport hardware, call control, and service logic together into a single, proprietary piece of equipment.
Softswitches can be used in 3G and 4G core networks to provide call control, mobility management, and an open service creation environment for carriers. A softswitch can deliver foundation mobility functions of roaming, location updates, subscriber profile management, intersystem handover, and RAN interworking to wireless service providers.
8.1.3 All IP Architecture
The Internet is unprecedented in its impact on the world community of industries, institutions, and individuals. In some way, the Internet has touched most of our lives in terms of how we communicate, how we promote our products, how we teach our children, and how we invest our time. No media adoption curve has been faster than the Internet's. In the United States alone, it took almost 40 years for 50 million people to use radio and 15 years for 50 million people to use television and cellular communications. Internet users reached the 50-million mark in just 5 years. During that time, the world became increasingly mobile, defined by the "take-it-with-you" philosophy we have developed regarding information and our access to it. For the wireless cellular industry, that shift in attitude has created the opportunity to add mobility to Internet accessibility—effectively allowing subscribers to carry the power of the Internet with them anywhere at any time. The convergence of wireless and Internet usage is already underway. Globally, Internet users as a whole are projected to increase from about 200 million at present to almost 1 billion by the year 2005. During the same period of time, global wireless subscribers are expected to increase from 300 million to over a billion. With these market dynamics in mind, several industry-leading businesses have agreed that next-generation wireless networks will leverage the packet-based technology of IP. This strategy provides operators with the unique opportunity to deliver a multitude of new services to mobile cellular subscribers in a manner more customizable than previously possible (see Figure 8.3).

FIGURE 8.3 Projected Wireless Internet Convergence [figure: wireless subscribers growing from 300 million to 1 billion and Internet users from 200 million to 1 billion]
As the industry continues to invest heavily in advancing IP technology to support real-time applications such as voice with reliable service and toll quality, the introduction of new network capabilities defined within IP standards for network implementations is expected to accelerate further.
8.2 Optical Networking
FIGURE 8.4 DWDM Systems and Optical Amplifiers [figure: multiple OC-48 and OC-192 signals combined onto and split from a single fiber by DWDM terminals and optical amplifiers]
As networks face increasing bandwidth demand and diminishing fiber availability, network providers are moving toward a crucial milestone in network evolution: the optical network. Optical networks, based on the emergence of the optical layer in transport networks, provide higher capacity and reduced costs for new applications such as the Internet, video and multimedia interaction, and advanced digital services. Optical networks began with wavelength division multiplexing (WDM), which arose to provide additional capacity on existing fibers. The components of the optical network are defined according to how the wavelengths are transmitted, groomed, or implemented in the network. Viewing the network from a layered approach, the optical network requires the addition of an optical layer. To help define network functionality, networks are divided into several different physical or virtual layers. The first layer, the services layer, is where the services— such as data traffic—enter the telecommunications network. The next layer, SONET, provides restoration, performance monitoring, and provisioning that is transparent to the first layer.
FIGURE 8.5 In-Fiber Bragg Grating Technology: Optical A/D Multiplexer [figure: transmission in dB versus wavelength (1545-1548 nm) from port A to port D and from port A to port C]
8.2.1 Dense Wavelength Division Multiplexing
As optical filters and laser technology improved, the ability to combine more than two signal wavelengths on a fiber became a reality. Dense wavelength division multiplexing (DWDM) combines multiple signals on the same fiber, ranging up to 40 or 80 channels. By implementing DWDM systems and optical amplifiers, networks can provide a variety of bit rates (i.e., OC-48 or OC-192) and a multitude of channels over a single fiber (see Figure 8.4). The wavelengths used are all in the range in which optical amplifiers perform optimally, typically from about 1530 nm to 1565 nm.

Optical Amplifiers
The performance of optical amplifiers has improved significantly—with current amplifiers providing much lower noise and flatter gain—which is essential to DWDM systems. The total power of amplifiers also has steadily increased, with amplifiers approaching +20 dBm outputs, which is many orders of magnitude more powerful than the first amplifiers.

Narrow Band Lasers
Without a narrow, stable, and coherent light source, none of the optical components would be of any value in the optical network. Advanced lasers with narrow bandwidths provide the narrow wavelength source that is the individual channel in optical networks. Typically, long-haul applications use externally modulated lasers, while shorter applications can use integrated laser technologies.

Fiber Bragg Gratings
Commercially available fiber Bragg gratings have been important components for enabling WDM and optical networks. A fiber Bragg grating is a small section of fiber that has been modified to create periodic changes in the index of refraction. Depending on the space between the changes, a certain frequency of light—the Bragg resonance wavelength—is reflected back, while all other wavelengths pass through (see Figure 8.5). The wavelength-specific properties of the grating make fiber Bragg gratings useful in implementing optical add/drop multiplexers. Bragg gratings are also being developed to aid in dispersion compensation and signal filtering.

Thin-Film Substrates
Another essential technology for optical networks is the thin-film substrate. By coating a thin glass or polymer substrate with a thin interference film of dielectric material, the substrate can be made to pass through only a specific wavelength and reflect all others. By integrating several of these components, many optical network devices are created, including multiplexers, demultiplexers, and add/drop devices.
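The Bragg resonance condition mentioned above is commonly written λ_B = 2·n_eff·Λ, where n_eff is the effective refractive index of the fiber core and Λ is the grating period. A minimal sketch in Python (not from this chapter; the index and period values are assumed, typical numbers used only for illustration):

```python
# Sketch: Bragg resonance wavelength of a fiber grating, lambda_B = 2 * n_eff * period.
# n_eff = 1.447 is an assumed, typical effective index for a silica fiber core;
# the 534 nm grating period is likewise a hypothetical illustrative value.

def bragg_wavelength_nm(n_eff: float, period_nm: float) -> float:
    """Wavelength (in nm) reflected back by a grating of the given period."""
    return 2.0 * n_eff * period_nm

n_eff = 1.447
period = 534.0  # grating period in nm
lam = bragg_wavelength_nm(n_eff, period)
print(round(lam, 1))  # 1545.4 -- inside the 1530-1565 nm amplifier window
```

A sub-micron change in the grating period shifts the reflected channel, which is why these gratings can select one DWDM channel while passing the rest.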
IX CONTROLS AND SYSTEMS Michael Sain Department of Electrical Engineering, University of Notre Dame, Notre Dame, Indiana, USA
The question of feedback in circuits and systems can be viewed in a denumerable number of different ways. It is, therefore, no longer possible to give a coherent, yet brief, overview of the subject. I am reminded of the story of the hotel with an infinite number of rooms. In such a hotel, there is always room for another guest. To accommodate the next guest, even if all the rooms are occupied, one has only to ask each room occupant to move to the room with the next higher number. When this is done, all the current occupants will again be in a room, and room one will be available for the new guest! Nonetheless, one can set up some equivalence classes on feedback in circuits and systems. In one of these constructions, it is possible to focus upon whether the feedback was introduced deliberately by the designer, or whether it just happened to be present in a circuit or system that was not designed with any particular notion of feedback in mind. A great many readers will likely identify with the first of these two classes, in which the feedback has been incorporated by specific choice, with sensors and actuators and processors that transform measurements into commands to achieve a given set of goals. Indeed, this way of looking at things was apparent early in the modern literature. Circuit persons will also recognize the second class of possibilities, inasmuch as feedback in electronic amplifiers was being discussed almost at the same time that the systems viewpoint was being put into place. By now we realize that feedback is a question of relationships. When we have, for instance, two equations in two unknowns, the age-old process of eliminating one of the two unknowns can be interpreted as a feedback construction. The more interrelationships that exist, the more "feedback" that exists. Feedback, then, is present whether we explicitly plan for it or not. It would seem provident, therefore, to become better
acquainted with some of the features and uses of feedback, so that we can understand more clearly what it is doing in our circuit or system, even if we did not have feedback in mind. To obtain this type of insight, we shall focus primarily upon the first class above, in which the feedback is deliberately put into play. In such situations, one big variable is the nature of the process which is being controlled by the use of feedback. One may be controlling speed, or pressure, or temperature, or voltage, or current, or just about any variable that one can imagine. To capture at least a bit of this flavor of the subject, the chapter addresses a wide range of different processes which are being placed under control. A second big variable is the design purpose for which the feedback is applied. Perhaps it is for stabilization, tracking, or amplification. Perhaps it is for coping with process model errors which arise because the process is too complicated to model exactly—assuming that we have the knowledge to do so. Perhaps it is for coping with disturbances, such as noise interfering with the transfer of data from one portal of a circuit to another. Or perhaps it is because we cannot get good estimates of key circuit and system parameters, or we cannot obtain a big enough budget to make all the measurements we want, or to install all the actuators we wish. In the face of all these sorts of issues, we approach the task here by creating a diverse sample of the sorts of issues which can arise. One can create entire handbooks on controls; and that has been done. In fact, one can create encyclopedias on controls; and that has been done. Here we have only a section, and we approach the question by taking samples. This is entirely in the spirit of the subject; and the task of the reader is to try to master the behavior of the whole body of knowledge from these samples! Welcome to the experience of feedback circuits and systems!!!
1 Algebraic Topics in Control Cheryl B. Schrader College of Engineering, Boise State University, Boise, Idaho, USA
1.1 Introduction
1.2 Vector Spaces Over Fields and Modules Over Rings
1.3 Matrices and Matrix Algebra
1.3.1 Standard Matrix Notions and Examples
1.4 Square Matrix Functions: Determinants and Inverses
1.4.1 Determinants
1.4.2 Inverses
1.5 The Algebra of Polynomials
1.6 Characteristic and Singular Values
1.6.1 Characteristic Values
1.6.2 Singular Values
1.7 Nonassociative Algebras
1.8 Biosystems Applications
References
1.1 Introduction
Control engineers often are referred to as mathematicians in disguise. Indeed, a firm foundation in mathematics is essential for success in the control arena. Many practicing control engineers delve into the intricacies of one or more particular fields of mathematics. Consequently, many of the sections in this chapter on controls rely heavily on algebraic considerations or have algebraic roots. This chapter purports to provide a lively introduction for such investigations by discussing common algebraic topics that occur in control. Along the way, examples of engineering applications are pointed out to the interested reader. From its very first introduction in elementary school, the term algebra conveys a certain level of abstraction. Most people associate algebra with a collection of arithmetic operations combined with representative symbols such as x and y. As undergraduate students, engineers become comfortable with a discussion of fields and vector spaces along with their axiomatic relationships, and this chapter begins here. In the true sense, vector spaces do not form an algebra without the additional concept of vector multiplication (tensor product), as mentioned later in Section 1.3. However, square
matrices of dimension n × n do form an algebra, although it is not commutative. How the associated operations are defined in this algebra, along with its interesting and important characteristics, is illustrated in some detail. This discussion provides a firm foundation for an understanding of algebra for the engineer. Concluding this chapter are examples of other types of algebras and applications of interest to the control engineer. Control theory and the tools used in control theory are particularly useful in the study of biomedical engineering and medicine, and the applications extend from the "head to the toes." Implantable organs, such as the heart, ear, kidney, and retina, all require exacting control systems that are designed with a significant number of parameters and need to work reliably 100% of the time. Other capabilities under development, such as remote surgery, also require extensive control systems because a minor slip could be life-threatening. The controller needs to be able to differentiate between intentional hand movement and a tremor on the part of the surgeon. Standard algebraic notation is employed and should not be unfamiliar to the informed reader. MATLAB, the tool many engineers use for testing and development (and that is used here for numerical examples), was first developed in the late
1970s and written in Fortran. The name originated from the term matrix laboratory. The examples are generated using Version 5.3 (Release 11), and the same vectors and matrices are carried throughout this chapter. MATLAB is a powerful problem-solving environment with many built-in debugging tools. It can generate world-class graphics that can be imported into almost any file format for inclusion in reports and presentations. As the reader will no doubt observe, MATLAB is quickly assimilable; with a little knowledge of history and intuition, actual MATLAB commands are easy to identify. For a detailed introduction to MATLAB, see Higham and Higham (2000).
1.2 Vector Spaces Over Fields and Modules Over Rings
It is assumed that the reader already has some knowledge and experience with vector spaces and fields. What may not be as readily apparent is that the reader, subsequently, also has experience with modules and rings. This section briefly describes the essence of these ideas in a mathematical sense. Real numbers, complex numbers, and binary numbers all are fields. Specifically, a field F is a nonempty set F and two binary operations, addition (+) and multiplication, that together satisfy the following properties for all a, b, c ∈ F:

1. Associativity: (a + b) + c = a + (b + c); (ab)c = a(bc).
2. Commutativity: a + b = b + a; ab = ba.
3. Distributivity: a(b + c) = (ab) + (ac).
4. Additive identity: ∃ 0 ∈ F such that a + 0 = a.
5. Multiplicative identity: ∃ 1 ∈ F such that a·1 = a.
6. Additive inverse: For every a ∈ F, ∃ b ∈ F such that a + b = 0. [Notation: b = −a.]
7. Multiplicative inverse: For every nonzero a ∈ F, ∃ b ∈ F such that ab = 1. [Notation: b = a^(−1).]
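Beyond the familiar real and complex numbers, the integers modulo a prime also satisfy these axioms. A quick sketch (in Python, not part of the chapter's MATLAB examples) that exhaustively checks properties 1 through 7 for arithmetic modulo 5:

```python
# Sketch: verify the seven field properties for GF(5) = {0, 1, 2, 3, 4}
# under addition and multiplication modulo 5.
P = 5
F = range(P)
add = lambda a, b: (a + b) % P
mul = lambda a, b: (a * b) % P

ok = all(
    add(add(a, b), c) == add(a, add(b, c))              # 1: additive associativity
    and mul(mul(a, b), c) == mul(a, mul(b, c))          # 1: multiplicative associativity
    and add(a, b) == add(b, a) and mul(a, b) == mul(b, a)  # 2: commutativity
    and mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # 3: distributivity
    for a in F for b in F for c in F
)
ok = ok and all(add(a, 0) == a and mul(a, 1) == a for a in F)         # 4, 5: identities
ok = ok and all(any(add(a, b) == 0 for b in F) for a in F)            # 6: additive inverses
ok = ok and all(any(mul(a, b) == 1 for b in F) for a in F if a != 0)  # 7: multiplicative inverses
print(ok)  # True: the integers mod 5 form a field
```

Replacing the prime 5 with a composite such as 4 makes property 7 fail (2 has no multiplicative inverse mod 4), leaving a commutative ring rather than a field — exactly the distinction drawn in the text.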
It is commonly known with real numbers that multiplication distributes over addition, and both additive and multiplicative inverses exist. In the case where a multiplicative inverse does not exist but properties 1 through 6 hold (such as with the integers), the set does not form a field but is categorized as a commutative ring. If property 2 also does not hold, then the correct terminology is a ring. To speak of an additive group, a single operation is used (addition) along with a nonempty set G satisfying additive properties 1, 4, and 6. If in addition the operation is commutative (as described in property 2), then the additive group is abelian. To discuss an F-vector space V, one simply requires a nonempty set V and a field F that together with binary operations + : V × V → V and · : F × V → V satisfy the following axioms for all elements v, w ∈ V and a, b ∈ F:
1. V and + form an additive abelian group.
2. a · (v + w) = (a · v) + (a · w).
3. (a + b) · v = (a · v) + (b · v).
4. (ab) · v = a · (b · v).
5. 1 · v = v.
Vectors are elements of V, and scalars are elements of F. Often, one uses the terminology vector space V over the field F. What form vectors may take will be examined more closely in Section 1.3. For the purposes of this treatise, and for the following MATLAB examples, the field of real numbers, R, will be used most often. The reader is urged to remember that any choice of field is allowed. As a direct generalization of a vector space, a module M replaces the underlying field by a ring R. Technically speaking, M is a left module because the scalar appears left of the module element. In an analogous fashion, a module M over the ring R is an R-module M. From this discussion, it is apparent that if R is also a field, R-modules are merely vector spaces. In working with modules, it is important to remember to avoid any vector space results relying on division by a nonzero scalar. It is precisely this notion that leads to the extremely powerful application of modules in system analysis, controllability, and observability.
1.3 Matrices and Matrix Algebra
1.3.1 Standard Matrix Notions and Examples
By the time an engineer completes his or her undergraduate studies, familiarity exists with matrices and matrix operations through multivariable calculus or linear algebra. Moreover, a graduate should have a firm understanding of the importance of regarding matrices as representations of linear operators. For a detailed discussion of linear operators and their related matrix representations, see Schrader and Sain (2000). Of course, standard matrix notions can be generalized and expressed using rings and fields, which is the approach used here. Consider the set of arrays (or matrices):

A = [ a11  a12  ...  a1n
      a21  a22  ...  a2n
      ...  ...  ...  ...
      am1  am2  ...  amn ],   (1.1)
with m rows, n columns, and elements aij from a field F (or, alternatively, A ∈ F^(m×n)). Define matrix addition (A + B = C) element-wise (aij + bij = cij) for A, B, and C ∈ F^(m×n); i = 1, 2, ..., m; and j = 1, 2, ..., n. An additive identity is the zero matrix, an m × n matrix whose every element is the additive identity in F. For example, if F is the ring of integers or the field of real or complex numbers, then every element is simply 0. An additive inverse of a matrix A with elements aij ∈ F is found element by element by solving for bij ∈ F in aij + bij = 0. Thus, bij = −aij, and the notation generalizes to the matrix level; that is, B = −A. Such a discussion leads naturally to the concept of scalar multiplication. Consider a field element r ∈ F and a matrix over the same field, A ∈ F^(m×n). The product P = rA ∈ F^(m×n) is calculated element-wise:

pij = r aij;  i = 1, 2, ..., m;  j = 1, 2, ..., n.   (1.2)
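These element-wise definitions carry over directly to array software. A short sketch, using Python/NumPy here rather than the chapter's MATLAB:

```python
# Sketch: matrix addition and scalar multiplication are element-wise (equations 1.1-1.2).
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

C = A + B      # c_ij = a_ij + b_ij
P = 3 * A      # p_ij = r * a_ij, with the scalar r = 3
negA = -1 * A  # additive inverse: B = -A is scalar multiplication by -1

print(C.tolist())           # [[6, 8], [10, 12]]
print(P.tolist())           # [[3, 6], [9, 12]]
print((A + negA).tolist())  # [[0, 0], [0, 0]] -- the zero matrix
```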
It is simple to see that the additive inverse B = −A is merely the scalar multiplication of the matrix A by −1.

Example 1
>> a = [1 2 3]; b = [4; 5; 6]; c = [7 8 9];
>> a + b
??? Error using ==> +
Matrix dimensions must agree.
>> a + c
ans =
     8    10    12
>> -b
ans =
    -4
    -5
    -6

To discuss the multiplication of two matrices A and B over the same field F, represented by AB, one is restricted to choose matrices where the number of columns of A equals the number of rows of B. Such matrices are said to be conformable. The resultant matrix, C = AB, has its number of rows equal to the number of rows of A and its number of columns equal to the number of columns of B. Hence:

A ∈ F^(m×n) and B ∈ F^(n×p) ⇒ C ∈ F^(m×p).   (1.3)

Each element in C is found using one row of A and one column of B at a time:

cij = Σ_{k=1}^{n} aik bkj.   (1.4)

It is important to note that matrix multiplication is not commutative, in general, although it is associative and distributive with respect to matrix addition.

Example 2
Consider the same a, b, and c from Example 1.
>> a * b
ans =
    32
>> b * a
ans =
     4     8    12
     5    10    15
     6    12    18
>> a * c
??? Error using ==> *
Inner matrix dimensions must agree.

For the special case where A has the same number of rows as columns (A is square) and:

A = [ 1  0  ...  0
      0  1  ...  0
      ...
      0  0  ...  1 ],   (1.5)

AB = B regardless of the choice of B. In such a case, A is said to be the identity matrix and is most often represented by the symbol I. The identity matrix is an example of a diagonal matrix, where all aij = 0 when i ≠ j, written as A = diag{1, 1, ..., 1}. In MATLAB, the n × n identity I is generated with the command eye(n). Taken all together, the set of square matrices over F, matrix addition, matrix multiplication, and the zero and identity matrices satisfy the six properties of a ring. A discussion of a multiplicative inverse in the context of matrices is left for Section 1.4.

Example 3
>> d = [1 1 1; 1 2 3; 1 3 6]; e = [1 1 1; 4 2 1; 9 3 1];
>> d * e
ans =
    14     6     3
    36    14     6
    67    25    10
>> e * d
ans =
     3     6    10
     7    11    16
    13    18    24
>> d * eye(3)
ans =
     1     1     1
     1     2     3
     1     3     6
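Equation 1.4 can be implemented directly and checked against the products in Example 3. A sketch in Python (with NumPy only as a cross-check, in place of the chapter's MATLAB):

```python
# Sketch: matrix multiplication from equation 1.4, c_ij = sum over k of a_ik * b_kj,
# checked against the d*e and e*d products of Example 3.
import numpy as np

def matmul(A, B):
    m, n = len(A), len(A[0])
    n2, p = len(B), len(B[0])
    assert n == n2, "Inner matrix dimensions must agree."  # conformability (equation 1.3)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

d = [[1, 1, 1], [1, 2, 3], [1, 3, 6]]
e = [[1, 1, 1], [4, 2, 1], [9, 3, 1]]

print(matmul(d, e))  # [[14, 6, 3], [36, 14, 6], [67, 25, 10]], as in Example 3
print(matmul(d, e) == np.matmul(d, e).tolist())  # True: agrees with NumPy
print(matmul(d, e) == matmul(e, d))              # False: multiplication is not commutative
```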
A matrix with one column or one row is commonly referred to as a column vector or row vector, respectively. It can be shown easily that the set of all n-tuples for n > 0 satisfies the axioms of a vector space over the field of real numbers. Although many engineers tend to think of this n-tuple example as synonymous with the definition of a vector space, this interpretation is limiting. For example, the set of polynomials of degree less than m > 0 with real coefficients satisfies all axioms of a vector space, taking into account standard polynomial addition and multiplication. Section 1.5 will examine polynomials in greater detail. The most interesting observation when considering n-tuple vectors is that while one can multiply scalars together or multiply vectors by scalars, one simply cannot multiply two vectors together. Example 2 illustrates this dilemma. The concept of vector multiplication together with a vector space extends to the concept of an algebra. This is the additional mathematical understanding one must master to incorporate the common manifestation of vector multiplication found in the tensor product.
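The "cannot multiply two vectors" dilemma of Example 2 splits into several distinct products once extra structure is added. A sketch in Python/NumPy (not from the chapter) contrasting three such notions for the vectors of Example 1:

```python
# Sketch: for n-tuples there is no single "vector times vector" -- the inner product,
# the outer (tensor) product, and the element-wise product are all different notions.
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(int(np.dot(a, b)))        # inner product: 4 + 10 + 18 = 32
print(np.outer(b, a).tolist())  # outer (tensor) product: a 3x3 matrix
print((a * b).tolist())         # element-wise (Hadamard) product: [4, 10, 18]
```

Note that the inner and outer products reproduce the two MATLAB results a * b and b * a of Example 2, where the row/column shapes selected which product was formed.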
1.4 Square Matrix Functions: Determinants and Inverses
Square matrices (n × n) have many special properties and functions associated with them. In this section, determinants will be examined first because they are important in the study of matrices and their control applications. Following a discussion of determinants, this section will examine more fully the concept of a matrix multiplicative inverse.

1.4.1 Determinants
Determinants can be viewed quite naturally using a detailed description of vector multiplication and the tensor product. For details regarding this approach, the reader is referred to Sain and Schrader (2000). In the current examination, determinants will be approached from the matrix algebra perspective. Originally attributed to Vandermonde, a determinant is a scalar value or single algebraic expression resulting from a square matrix. It is, therefore, appropriate to begin with square matrices that are also scalars, A ∈ F^(1×1). In such a case, the determinant of A, more commonly written as det(A) or |A|, is the scalar itself:

det(A) = det([a11]) = a11.   (1.6)

Now consider A ∈ F^(2×2) and introduce the concept of the minor of aij, which is defined as the determinant of the submatrix that results when row i and column j are removed. For a general 2 × 2 matrix:

A = [ a11  a12
      a21  a22 ],   (1.7)

it is obvious that:

• minor of a11 = det(a22) = a22.
• minor of a12 = det(a21) = a21.
• minor of a21 = det(a12) = a12.
• minor of a22 = det(a11) = a11.

The actual calculation of the determinant of equation 1.7 relies on minors according to the following algorithm:
Step 1. Choose any row i (or column j).
Step 2. Multiply each element aik (or akj) in that row (or column) by its minor and by (−1)^(i+k) (or (−1)^(k+j)).
Step 3. Add the results from step 2.
The multiplication of a minor by ±1, as appropriate, is termed a cofactor. Following the previous algorithm, the choice of column 2 (j = 2) yields:

a12 a21 (−1) + a22 a11 (+1).   (1.8)

The value of the determinant is independent of the choice of row or column; it makes no difference whatsoever. Determinants of 2 × 2 matrices are fairly straightforward in that the two elements along the main diagonal are multiplied together and then the product of the remaining two elements is subtracted. An explanation of determinants for matrices larger than 2 × 2 is best understood first with 3 × 3 matrices to help grasp the notation, which can then be expanded to n × n matrices in general. Consider first:

det(A) = det[ a11 a12 a13; a21 a22 a23; a31 a32 a33 ],   (1.9)

and expand about row 1 according to the three-step algorithm. Then:

det(A) = a11 det[ a22 a23; a32 a33 ] − a12 det[ a21 a23; a31 a33 ] + a13 det[ a21 a22; a31 a32 ]   (1.10)
       = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31) + a13 (a21 a32 − a22 a31)   (1.11)
       = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a12 a21 a33 − a11 a23 a32 − a13 a22 a31.   (1.12)

Similar to the 2 × 2 case, there exists a relatively simple way to find the determinant of equation 1.9. Write the first two columns next to the original columns as follows:

a11  a12  a13  a11  a12
a21  a22  a23  a21  a22
a31  a32  a33  a31  a32   (1.13)

Form the first three products of equation 1.12 by starting with a11 in equation 1.13 and drawing a diagonal downward to the right. Repeat with a12 and with a13, forming two more diagonals. The last three products of equation 1.12 are formed by subtracting left diagonals. Begin with a12, draw a diagonal down to the left, and then repeat with a11 and a13. For n × n matrices where n > 3, such simple characterizations do not exist. Thus, one is forced to define determinants in general using symbols to describe the expansions illustrated in equations 1.10 through 1.12. In general, determinants of n × n matrices can be computed using cofactors, which in turn require finding determinants of (n−1) × (n−1) matrices, and so on. Such is the case for n = 2 and n = 3. Define Aij as the cofactor formed by (1) deleting the ith row and the jth column of A, (2) calculating the determinant of the (n−1) × (n−1) submatrix, and (3) multiplying by (−1)^(i+j). Expanding about any row i produces:

det(A) = Σ_{k=1}^{n} aik Aik,   (1.14)

or about any column j:

det(A) = Σ_{k=1}^{n} akj Akj.   (1.15)

1.4.2 Inverses
Any square matrix A ∈ F^(n×n) with a nonzero determinant has a unique inverse A^(−1) ∈ F^(n×n), which satisfies:

A A^(−1) = A^(−1) A = I.   (1.16)

Matrix inverses can be determined by forming the cofactor matrix:

Ã = [ A11  A12  ...  A1n
      A21  A22  ...  A2n
      ...  ...  ...  ...
      An1  An2  ...  Ann ],   (1.17)

taking its transpose, Ã^T (by interchanging the rows and columns of Ã), and scaling:

A^(−1) = Ã^T / det(A).   (1.18)

The matrix Ã^T is termed the adjoint of A and is calculated in MATLAB using adj(A). Matrices with nonzero determinants are said to be nonsingular or of full rank. For nonsquare matrices A ∈ F^(m×n), the rank is the number of linearly independent columns of A over the field F and is at most min(m, n). Related commands in MATLAB are rank and null. Simultaneous linear equations may have a different number of solutions, and how many solutions exist is related to the rank of A. To solve simultaneous equations, MATLAB provides the backslash command (\) to determine a vector x ∈ F^n satisfying Ax = y for A ∈ F^(m×n) and y ∈ F^m. Given A ∈ F^(m×n) and Y ∈ F^(m×p), the backslash operator can be used to solve for X ∈ F^(n×p) in AX = Y. Directly solving for the matrix inverse, assuming one exists, can be avoided by solving for X in AX = I. This is usually faster and more accurate because Cholesky or LU factorizations are employed. More advanced factorizations contribute to solving the sparse system problem (A is large and has few nonzero elements) that occurs, for example, in electrical power systems. All is not lost in terms of inverses in the case of nonsquare matrices A ∈ F^(m×n) or rank-deficient square matrices. Define the pseudoinverse A⁺ ∈ F^(n×m) by:

A⁺ = (A^T A)^(−1) A^T.   (1.19)

Note that A⁺A = I, but AA⁺ may not be. Hence, A⁺ is also termed a left inverse. There exists, as one immediately assumes, an analogous derivation for a right inverse. An interesting recent study uses pseudoinverses in cryptanalysis to prove insecurity.

Example 4
Use the same d and e as in Example 3.
>> det(d)
ans =
     1
>> det(e)
ans =
    -2
>> inv(d)
ans =
     3    -3     1
    -3     5    -2
     1    -2     1
>> inv(d) * d
ans =
     1     0     0
     0     1     0
     0     0     1
>> pinv(e)
ans =
    0.5000   -1.0000    0.5000
   -2.5000    4.0000   -1.5000
    3.0000   -3.0000    1.0000
>> f = [1 1 1; 1 2 3; 2 4 6];
>> inv(f)
Warning: Matrix is singular to working precision.
ans =
   Inf   Inf   Inf
   Inf   Inf   Inf
   Inf   Inf   Inf
>> det(f)
ans =
     0

1.5 The Algebra of Polynomials
Polynomials in the indeterminate λ with coefficients from a field F, usually represented by F[λ], form a commutative ring. The set of polynomials of degree less than some integer m > 0 with real coefficients, along with the standard addition and multiplication of polynomials, forms a vector space. Such polynomials can be described by:

p(λ) = am λ^m + am−1 λ^(m−1) + ⋯ + a1 λ + a0 = Σ_{i=0}^{m} ai λ^i.

Finally, a ratio of two polynomials is a rational function:

f(λ) = n(λ) / d(λ).   (1.22)
m X
ak lk , (1:20)
k¼0
for ai 2 R and am 6¼ 0. The degree (or order) of the polynomial in equation 1.20 is m. Further, if am ¼ 1, the polynomial is called monic. One of the most important results of classical algebra is that polynomials with real coefficients can be factored into real linear factors and real quadratic factors. Any polynomial R[l] has at most m roots (or zeros). If the same matrix is considered over the field of complex numbers C, then it has exactly m roots. Nonreal complex roots must occur in complex conjugate pairs. Note that there are essentially three problems related to polynomials: evaluation, root finding, and curve fitting—and MATLAB functions polyval, roots, and polyfit address all three. Now this section presents at a concrete discussion of algebras. An (associative) algebra over a field F is a set S, which is both a vector space and a ring over F such that the additive group structures are the same, and for all s, t 2 S, and a 2 F, the following axiom holds:
Rational functions occur most often in control as transfer functions for single-input and single-output systems. Systems with more than one input and more than one output can be described by transfer function matrices. These matrices can be realized as matrix polynomial fractions or as a polynomial matrix ‘‘divided’’ by a polynomial. Much research has been accomplished on this topic, particularly in its relationship to multivariable system analysis and control. Inequalities involving polynomial matrices, ‘‘inverses,’’ and optimization problems are also becoming increasingly important.
1.6 Characteristic and Singular Values 1.6.1 Characteristic Values Characteristic values—also called eigenvalues, characteristic or latent roots, proper or spectral values—describe a square matrix representation of a linear operator that is independent of the basis chosen. The characteristic polynomial of A 2 Cnn is p(l) ¼ det(lI A), whose roots are the characteristic values of A. Here, matrices are considered over the complex field to admit the possibility of complex roots. The characteristic equation, p(l) ¼ 0, is of degree n and has n roots. Characteristic values depend on special matrix properties of A. Another important result known as the CayleyHamilton theorem is that a matrix A satisfies its own characteristic equation; that is, p(A) ¼ 0. The idea behind characteristic values is to describe the action of a matrix on a vector by a single scalar value: Ax ¼ lx,
where x 2 Cn and l 2 C. Equation 1.23 can be written as: (lI A)x ¼ 0,
a(st) ¼ (as)t ¼ s(at):
(1:23)
(1:24)
(1:21)
The relationship of equation 1.21 combines the concepts of a ring with those of a vector space. Consider now the ring F[l], which has also a vector space structure. Addition and the additive identity, 0, are the same for both structures. Moreover, for two polynomials p(l), q(l) 2 F[l] and a 2 F, equation 1.21 is satisfied. Thus, polynomials form an associative algebra. The question may arise as to why the terminology matrix algebra was used previously. It is easy to show that the set of n n matrices over F is an associative algebra by recognizing that multiplication of a matrix A by a scalar r can be written rA ¼ (rI)A.
which has nonzero solutions x (characteristic vectors or eigenvectors) only if l is a characteristic value of A. Characteristic values are often used in control to determine system stability. Following what is known about polynomials, complex characteristic values occur in complex conjugate pairs, as do their associated characteristic vectors. Characteristic values are unique although characteristic vectors are not. This makes their actual computation (e.g., using MATLAB) somewhat arbitrary. The function poly(A) returns the characteristic polynomial for a matrix A. Then the roots command can be utilized to find the characteristic values although this is not a
numerically reliable process. Alternatively, one can call the function eig, which can return characteristic vectors as well. The same function can be used to solve the generalized eigenvector problem in the case where not all eigenvalues are distinct. Eigenvectors associated with distinct eigenvalues are linearly independent; however, eigenvectors associated with repeated eigenvalues may be linearly dependent. The generalized eigenvector problem produces linearly independent eigenvectors by recognizing that one can be rather clever in eigenvector choice. The result is that the non-uniqueness of eigenvectors can be used as an advantage. Complete eigenstructure assignment can be approached through the feedback control problem, which has been applied, for example, in active control of structures. What makes the linear independence of eigenvectors important is that a matrix of such vectors can be used as a similarity transformation Q to relate a matrix A to its normal or Jordan canonical form:

    Â = diag{J1, J2, ..., Jk},        (1.25)

where each Ji is a Jordan block of the form:

    Ji = [ λ  1  0  ...  0 ]
         [ 0  λ  1  ...  0 ]
         [ .  .  .       . ]
         [ 0  0  ...  λ  1 ]
         [ 0  0  ...  0  λ ].        (1.26)

For the distinct eigenvalue case, Â is purely diagonal. For the repeated eigenvalue case, Â is as close to diagonal as possible. The matrices A and Â are said to be similar and are related by:

    Â = Q⁻¹ A Q.        (1.27)

Additional details on eigenvectors and linear transformations in general can be found in Schrader and Sain (2000). Canonical forms such as the companion, controllable, and modal or observable forms are commonly used for control purposes. The key is in always remembering, through the appropriate linear transformation, how a canonical form is related to the original matrix. This is much like dropping breadcrumbs behind you as you move from one place to another so that you can always find your way home.

1.6.2 Singular Values

Unique scalars are associated with nonsquare matrices A ∈ F^{m×n}, and these are termed singular values. In general, a matrix A of rank r can be described by its singular value decomposition (SVD):

    A = U Σ V*,        (1.28)

where Σ ∈ F^{m×n} is diag{σ1, σ2, ..., σr, 0, ..., 0}, and U ∈ F^{m×m} and V ∈ F^{n×n} are unitary (UU* = U*U = I and V*V = VV* = I). Here the notation * is used to represent the complex conjugate (or Hermitian) transpose. The elements σi are ordered by:

    σ1 ≥ σ2 ≥ ... ≥ σr > 0,        (1.29)

and are the singular values of A; the columns of U are the left singular vectors, and the columns of V are the right singular vectors. Singular values are unique although singular vectors are not. The SVD is quite useful and reliable in determining the rank of a constant matrix or its pseudoinverse and in the realization of transfer function matrices. By construction, singular values are the positive square roots of the eigenvalues of A*A.

Example 5
Use the same e as before.

>> [V, D] = eig(e)
V =                         % eigenvectors in columns
    0.2738    0.1162    0.2014
    0.3487    0.8213    0.7710
    0.5006    0.9300    0.6042
D =                         % eigenvalues along the diagonal
    5.8284         0         0
         0    2.0000         0
         0         0    0.1716

>> [U, S, V] = svd(e)
U =                         % left singular vectors
    0.1324    0.4852    0.5833
    0.8014    0.8948    0.7634
    0.4264    0.3498    0.2775
S =                         % singular values along the diagonal
   10.6496         0         0
         0    1.2507         0
         0         0    0.1502
V =                         % right singular vectors
    0.9288    0.5777    0.1793
    0.3244    0.1365    0.7400
    0.3446    0.7490    0.6483

>> [V, J] = jordan(e)
V =                         % linear transformation
    0.5294    0.6394    0.1209
    0.3497    1.4118    0.4629
    0.1765    1.0490    0.3627
J =                         % Jordan form
    2.0000    0.0000    0.0000
    0.0000    5.8284    0.0000
    0.0000    0.0000    0.1716

>> V\e*V
ans =
    2.0000    0.0000    0.0000
    0.0000    5.8284    0.0000
    0.0000    0.0000    0.1716
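The characteristic polynomial and the Cayley-Hamilton theorem of section 1.6.1 can be checked in a few lines. The following pure-Python sketch (not from the handbook; the 3×3 matrix A is an arbitrary example) computes the coefficients of p(λ) = det(λI − A) with the Faddeev-LeVerrier recursion and then verifies numerically that p(A) = 0:

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly(A):
    """Coefficients [1, c1, ..., cn] of det(lambda*I - A) (Faddeev-LeVerrier)."""
    n = len(A)
    coeffs = [1.0]
    M = [[0.0] * n for _ in range(n)]
    for k in range(1, n + 1):
        M = mat_mul(A, M)                    # M_k = A*M_{k-1} + c_{k-1}*I
        for i in range(n):
            M[i][i] += coeffs[-1]
        AM = mat_mul(A, M)
        coeffs.append(-sum(AM[i][i] for i in range(n)) / k)
    return coeffs

def poly_of_matrix(coeffs, A):
    """Evaluate p(A) by Horner's scheme on matrices."""
    n = len(A)
    R = [[coeffs[0] if i == j else 0.0 for j in range(n)] for i in range(n)]
    for c in coeffs[1:]:
        R = mat_mul(R, A)
        for i in range(n):
            R[i][i] += c
    return R

A = [[2.0, 1.0, 0.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 5.0]]                        # triangular: eigenvalues 2, 3, 5

p = char_poly(A)                             # [1, -10, 31, -30]
residual = max(abs(v) for row in poly_of_matrix(p, A) for v in row)
```

The residual of p(A) is zero up to roundoff, which is exactly the Cayley-Hamilton statement in the text.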
1.7 Nonassociative Algebras

Thus far, this chapter has considered only associative algebras, so it now turns its attention to other algebras for the sake of completeness. Other types of algebras may evolve by generalizing the concept of associative algebras (e.g., alternative algebras) or by redefining the associative product (e.g., Lie algebras). Consider an associative algebra A over a field F, and replace the standard associative product of two elements x, y ∈ A, xy, with the commutator product xy − yx. Note that associativity then holds only in specific cases. By redefining the associative product in this way, we obtain a nonassociative algebra called a Lie algebra. Similarly, if one chooses the anticommutator product, (1/2)(xy + yx), we obtain the Jordan algebra, also nonassociative. Nonassociative algebras are becoming increasingly important in control, in quantum computing, and in autonomous vehicle motion planning.

The reader is most likely familiar with any number of algebras or classes of algebras, such as exterior, multilinear, Lie, Jordan, composition, alternative, differential, nonassociative, Boolean, commutative, or division algebras. Some of these may be misnomers according to the strict definition of an algebra. Of course, the point is that algebra is much more than operations. It is an axiomatic framework governed by a global view of the whole construct. It is a way of describing how a basic element behaves. And that, after all, is control in its most rudimentary sense.
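The two products described above are easy to experiment with directly. A minimal Python sketch (the 2×2 matrices are arbitrary choices) confirming that the commutator satisfies the Jacobi identity, which takes the place of associativity in a Lie algebra, and that the Jordan product is commutative:

```python
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def neg(X):
    return [[-v for v in row] for row in X]

def comm(X, Y):
    """Lie bracket (commutator) [X, Y] = XY - YX."""
    return add(mul(X, Y), neg(mul(Y, X)))

def jordan_prod(X, Y):
    """Jordan (anticommutator) product (1/2)(XY + YX)."""
    S = add(mul(X, Y), mul(Y, X))
    return [[v / 2 for v in row] for row in S]

x = [[1, 2], [3, 4]]
y = [[0, 1], [1, 0]]
z = [[2, 0], [0, 5]]

# Jacobi identity: [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
jacobi = add(add(comm(x, comm(y, z)), comm(y, comm(z, x))),
             comm(z, comm(x, y)))
```

The Jacobi identity holds for any matrices over an associative ring, which is why the commutator turns matrix algebra into a Lie algebra.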
1.8 Biosystems Applications

Biosystems present many potential applications for the use of algebra, including the areas of biomechanics, tissue engineering, image processing (e.g., CAT and MRI), and biosignal processing (e.g., EMG and EEG). The determination of which variables are dependent and which are independent is
important in the design of gait analysis and prostheses. For example, in gait analysis, is the angle between the hip and femur on the opposite side of the body important for proper ankle movement? Is the angle on the same side important? These questions can be answered by taking multiple measurements and solving for dependent and independent variables.

Tissue engineering can involve the interaction of materials (e.g., implants, lasers, etc.) with tissues of the body, such as implants using feedback to identify the amount of sugar in the blood for diabetics. The application of control theory to quantum systems, including molecular control, lasing without inversion, NMR spectroscopy, and quantum information systems, has become of great interest in recent years. Medical image analysis can be highly dependent on fundamental algebraic concepts and system-theoretic techniques, and bilinear models are widely used for nonlinear biomedical signal and image processing. In medical technology development, microelectromechanical and nanoelectromechanical phenomena and processes produce new challenges for the control engineer. For noninvasive data systems, such as multisensorial electrophysiology and biomagnetism, sensor array analysis depends on advanced algebraic techniques.

The interested reader should look in the more recent journals and conference proceedings to see other exciting topics that address algebraic notions. For biomedical engineering, the prominent resources are the IEEE Transactions on Biomedical Engineering, the IEEE Transactions on Neural Systems and Rehabilitation Engineering, and the annual International Conference of the IEEE Engineering in Medicine and Biology Society. The American Society of Mechanical Engineers (ASME) has a bioengineering division with an active publications and conference schedule.
References

Higham, D.J., and Higham, N.J. (2000). MATLAB guide. Philadelphia: SIAM.
Sain, M.K., and Schrader, C.B. (2000). Bilinear operators and matrices. In Wai-Kai Chen (Ed.), Mathematics for circuits and filters. New York: CRC Press.
Schrader, C.B., and Sain, M.K. (2000). Linear operators and matrices. In Wai-Kai Chen (Ed.), Mathematics for circuits and filters. New York: CRC Press.
2 Stability

Derong Liu
Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA

2.1  Introduction ................................................................ 1027
2.2  Stability Concepts .......................................................... 1027
     2.2.1 Bounded-Input Bounded-Output Stability . 2.2.2 Zero-Input Stability
2.3  Stability Criteria .......................................................... 1028
     2.3.1 Routh's Stability Criterion . 2.3.2 Nyquist Stability Criterion . 2.3.3 Phase Margin . 2.3.4 Gain Margin
2.4  Lyapunov Stability Concepts ................................................. 1031
     2.4.1 Equilibrium Point . 2.4.2 Isolated Equilibrium Point . 2.4.3 Stability . 2.4.4 Global Properties of an Equilibrium
2.5  Lyapunov Stability of Linear Time-Invariant Systems ......................... 1033
2.6  Lyapunov Stability Results .................................................. 1034
     2.6.1 Definiteness of a Function . 2.6.2 Principal Lyapunov Stability Theorems
References ....................................................................... 1035
2.1 Introduction

For a given control system, stability is usually the most fundamental question to ask. If the system is linear and time-invariant, many stability criteria are available, among them Routh's stability criterion and the Nyquist stability criterion. If the system is nonlinear, or linear but time-varying, such stability criteria do not apply.

Stability is mostly a concern for dynamical systems, and it is a fundamental requirement in most engineering applications that the systems designed be stable. In control engineering applications, the design of a control system must first guarantee the stability of the overall system; that is, stability is one of the design objectives that must be guaranteed in a controller design for a given system. Stability is always a requirement independent of the technique used in the controller design, be it linear control, adaptive control, optimal control, intelligent control, or the like.
2.2 Stability Concepts

Let r(t), y(t), and h(t) be the input, output, and impulse response of a linear time-invariant system, respectively. For an nth-order system, its initial conditions are given by:

    y^(k)(0) = d^k y(t)/dt^k at t = 0,   k = 0, 1, ..., n − 1.

Zero initial conditions of an nth-order system means that:

    y^(k)(0) = 0,   k = 0, 1, ..., n − 1.

Finite initial conditions of an nth-order system imply that:

    |y^(k)(0)| ≤ η,   k = 0, 1, ..., n − 1,

for some positive number η < ∞.

Copyright © 2004 by Academic Press. All rights of reproduction in any form reserved.
2.2.1 Bounded-Input Bounded-Output Stability

With zero initial conditions, a linear time-invariant system is said to be bounded-input bounded-output (BIBO) stable, or simply stable, if its output y(t) is bounded in response to every bounded input r(t). A system is said to be unstable if it is not BIBO stable.

The stability of a linear system can be determined from the location of the poles of the closed-loop transfer function in the s plane. If any of these poles lie in the right half of the s plane, with increasing time they give rise to the dominant mode, and
the transient response increases monotonically or oscillates with increasing amplitude. This represents an unstable system. For such a system, as soon as the power is turned on, the output may increase with time. If no saturation takes place in the system and no mechanical stop is provided, the system may eventually be subjected to damage and fail, since the response of a real physical system cannot increase indefinitely. Hence, closed-loop poles in the right half of the s plane are not permissible in the usual linear control systems. If all closed-loop poles lie to the left of the jω axis, any transient response eventually reaches equilibrium. This represents a stable system. Therefore, for BIBO stability, the roots of the characteristic equation, or the poles of the transfer function H(s), must all lie in the left half of the s plane. Note that when a system has roots on the jω axis, it is unstable by this definition.
2.2.2 Zero-Input Stability

If the zero-input response y(t), subject to the finite initial conditions, reaches zero as time t approaches infinity, the system is said to be zero-input stable, or stable; otherwise, the system is unstable.

The zero-input stability defined here can also be stated equivalently as follows. A linear time-invariant system is zero-input stable if, for any set of finite initial conditions, there exists a positive number M that depends on the initial conditions, such that:

    (1) |y(t)| ≤ M < ∞ for all t ≥ 0; and
    (2) lim_{t→∞} |y(t)| = 0.

Because the second condition requires that the magnitude of y(t) reach zero as time approaches infinity, zero-input stability is also known as asymptotic stability. For linear time-invariant systems, BIBO stability and zero-input stability have the same requirement: the roots of the characteristic equation must be located in the left half of the s plane. Thus, if a system is BIBO stable, it must also be zero-input or asymptotically stable. For this reason, the present discussion will simply refer to the stability of linear time-invariant systems without mentioning BIBO or zero-input. This chapter also often refers to the situation when the characteristic equation has simple roots on the jω axis and none in the right half of the s plane as marginally stable or marginally unstable.

Example 1
A system with the (closed-loop) transfer function:

    H(s) = 1 / ((s + 1)(s + 2)(s + 3))

will be stable. A system with the transfer function:

    H(s) = (s + 1) / ((s − 1)(s^2 + 2s + 3))

will be unstable. A system with the transfer function:

    H(s) = 3(s + 3) / ((s + 2)(s^2 + 5))

will be unstable or marginally stable.

2.3 Stability Criteria

Whether a linear system is stable or unstable is a property of the system itself and does not depend on the input or driving function of the system. The poles of the input or driving function do not affect the property of stability of the system; they contribute only to steady-state response terms in the solution. Thus, the problem of closed-loop stability can be solved readily by choosing no closed-loop poles in the right half of the s plane, including the jω axis. The most common representation of a linear system is given by its closed-loop transfer function in the form of:

    B(s)/A(s) = (b0 s^m + b1 s^(m−1) + ... + b_(m−1) s + b_m) / (a0 s^n + a1 s^(n−1) + ... + a_(n−1) s + a_n),

where the a's and b's are constants and m ≤ n. A simple criterion, known as Routh's stability criterion, enables the determination of the number of closed-loop poles that lie in the right half of the s plane without having to factor the polynomial A(s).
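The pole test of Example 1 can also be reproduced numerically: find the roots of the denominator polynomial and check their real parts. The sketch below uses a basic Durand-Kerner iteration so that it stays dependency-free; it is illustrative, not a robust root finder:

```python
def poly_roots(coeffs, iters=200):
    """All roots of coeffs[0]*s^n + ... + coeffs[n] (Durand-Kerner iteration)."""
    n = len(coeffs) - 1
    c = [x / coeffs[0] for x in coeffs]          # make the polynomial monic
    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # standard distinct starts
    for _ in range(iters):
        for i in range(n):
            p = c[0]
            for a in c[1:]:
                p = p * roots[i] + a             # Horner evaluation at roots[i]
            q = 1.0 + 0j
            for j in range(n):
                if j != i:
                    q *= roots[i] - roots[j]
            roots[i] = roots[i] - p / q
    return roots

def is_bibo_stable(denominator):
    """True when every pole lies strictly in the left half of the s plane."""
    return all(r.real < 0 for r in poly_roots(denominator))

# H(s) = 1/((s+1)(s+2)(s+3)): denominator s^3 + 6s^2 + 11s + 6 -> stable
stable_example = is_bibo_stable([1, 6, 11, 6])
# H(s) = (s+1)/((s-1)(s^2+2s+3)): denominator s^3 + s^2 + s - 3 -> unstable
unstable_example = is_bibo_stable([1, 1, 1, -3])
```

Routh's criterion, described next, reaches the same conclusions without computing the roots at all.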
2.3.1 Routh's Stability Criterion

1) Write the polynomial in s in the following form:

    a0 s^n + a1 s^(n−1) + ... + a_(n−1) s + a_n = 0,        (2.1)

where the coefficients are real quantities. Assume that a_n ≠ 0; that is, any zero root has been removed.

2) If any of the coefficients are zero or negative in the presence of at least one positive coefficient, there is a root or roots that are imaginary or have positive real parts. In such a case, the system is not stable.

3) If all coefficients are positive, arrange the coefficients of the polynomial in rows and columns according to the following pattern:

    s^n     | a0  a2  a4  a6  ...
    s^(n−1) | a1  a3  a5  a7  ...
    s^(n−2) | b1  b2  b3  b4
    s^(n−3) | c1  c2  c3  c4
    s^(n−4) | d1  d2  d3  d4
     ...
    s^2     | e1  e2
    s^1     | f1
    s^0     | g1
The coefficients b1, b2, b3, and so on, are evaluated as follows:

    b1 = (a1 a2 − a0 a3) / a1,
    b2 = (a1 a4 − a0 a5) / a1,
    b3 = (a1 a6 − a0 a7) / a1,
    ...

The evaluation of the b's is continued until the remaining ones are all zero. The same pattern of cross-multiplying the coefficients of the two previous rows is followed in evaluating the c's, d's, and so on. That is:

    c1 = (b1 a3 − a1 b2) / b1,
    c2 = (b1 a5 − a1 b3) / b1,
    c3 = (b1 a7 − a1 b4) / b1,
    ...
    d1 = (c1 b2 − b1 c2) / c1,
    d2 = (c1 b3 − b1 c3) / c1,
    ...

This process is continued until the nth row has been completed. The complete array of coefficients is triangular. Note that in developing the array, an entire row may be divided or multiplied by a positive number to simplify the subsequent numerical calculation without altering the stability conclusion.

4) The number of roots of equation 2.1 with positive real parts is equal to the number of changes in sign of the coefficients in the first column of the array. Note that the exact values of the terms in the first column need not be known; only the signs are needed. The necessary and sufficient conditions that all roots of equation 2.1 lie in the left half of the s plane are that all the coefficients of equation 2.1 be positive and all terms in the first column of the array have positive sign.

Example 2
Consider the following polynomial:

    s^4 + 2s^3 + 3s^2 + 4s + 5 = 0.

According to the procedure, the following array can be formed:

    s^4 |  1   3   5
    s^3 |  2   4   0
    s^2 |  1   5
    s^1 | −6
    s^0 |  5

In this example, the number of changes in the sign of the coefficients in the first column is two (from 1 to −6 and from −6 to 5). This means that there are two roots with positive real parts. Note that the result is unchanged when the coefficients of any row are multiplied or divided by a positive number to simplify the computation. If the polynomial in the present example is the denominator of a closed-loop transfer function, the corresponding system will be unstable. Routh's stability criterion is also known as the Routh-Hurwitz criterion.

2.3.2 Nyquist Stability Criterion

For the system shown in Figure 2.1, the closed-loop transfer function is given by:

    Y(s)/R(s) = G(s) / (1 + G(s)H(s)).

For stability, all roots of the following characteristic equation must lie in the left half of the s plane:

    1 + G(s)H(s) = 0.

FIGURE 2.1 A Closed-Loop System. R(s) enters a summing junction (+/−), drives G(s) to produce Y(s), and H(s) closes the feedback path.

The Nyquist stability criterion applies to cases when G(s)H(s) has neither poles nor zeros on the jω axis. In the system shown in Figure 2.1, if the open-loop transfer function G(s)H(s) has P poles in the right half of the s plane and lim_{s→∞} G(s)H(s) = constant, then for stability, the G(jω)H(jω) locus, as ω varies from −∞ to ∞, must encircle the −1 + j0 point P times in the counterclockwise direction. This criterion can be expressed as:

    Z = N + P,
where Z = number of zeros of 1 + G(s)H(s) in the right half of the s plane, N = number of clockwise encirclements of the −1 + j0 point, and P = number of poles of G(s)H(s) in the right half of the s plane. If P is not zero, then for a stable control system Z = 0, that is, N = −P, which means that there must be P counterclockwise encirclements of the −1 + j0 point.

Example 3
Consider a closed-loop system whose open-loop transfer function is given by:

    G(s)H(s) = K / ((T1 s + 1)(T2 s + 1)),

where T1 > 0 and T2 > 0. A plot of G(jω)H(jω) is shown in Figure 2.2. Because G(s)H(s) does not have any poles in the right half of the s plane and the −1 + j0 point is not encircled by the G(jω)H(jω) locus, this system is stable for any positive values of K, T1, and T2.

FIGURE 2.2 Polar Plot of G(jω)H(jω). The α corresponds to ω = +∞, and β corresponds to ω = −∞.

Note that in the preceding criterion, it is assumed that G(s)H(s) has neither poles nor zeros on the jω axis. In this case, the criterion is derived by considering the Nyquist contour in Figure 2.3(A), which encloses the entire right half of the s plane. When G(s)H(s) has poles and/or zeros on the jω axis, the contour must be modified as in Figure 2.3(B). The Nyquist stability criterion also applies to cases when G(s)H(s) has poles and/or zeros on the jω axis: in the system shown in Figure 2.1, if the open-loop transfer function G(s)H(s) has P poles in the right half of the s plane, then for stability, the G(s)H(s) locus (as a representative point s traces the modified Nyquist path in the clockwise direction) must encircle the −1 + j0 point P times in the counterclockwise direction.

Example 4
A closed-loop system has the following open-loop transfer function:

    G(s)H(s) = K(T2 s + 1) / (s^2 (T1 s + 1)),

whose stability depends on the relative magnitudes of T1 and T2. Plots of the locus of G(s)H(s) for the three cases T1 < T2, T1 = T2, and T1 > T2 are shown in Figure 2.4. For T1 < T2, the locus of G(s)H(s) does not encircle the −1 + j0 point, and the closed-loop system is stable. For T1 = T2, the locus of G(s)H(s) passes through the −1 + j0 point, which indicates that the closed-loop poles are located on the jω axis. For T1 > T2, the locus of G(s)H(s) encircles the −1 + j0 point twice in the clockwise direction. Thus, the closed-loop system has two closed-loop poles in the right half of the s plane, and the system is unstable.
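The encirclement count in the criterion can be approximated numerically by sweeping ω and accumulating the change in the argument of G(jω)H(jω) + 1 (the winding of the locus about −1 + j0). A sketch for the stable loop of Example 3, with arbitrary values of K, T1, and T2 (this simple sweep assumes no open-loop poles on the jω axis, so it does not cover the modified contour of Example 4):

```python
import cmath
import math

def clockwise_encirclements_of_minus_one(L, w_min=1e-4, w_max=1e4, m=10000):
    """Winding count of the locus L(jw) about -1 + j0 as w sweeps -inf..inf."""
    ws = [w_min * (w_max / w_min) ** (k / m) for k in range(m + 1)]
    grid = [-w for w in reversed(ws)] + ws       # negative then positive w
    total = 0.0
    prev = cmath.phase(L(1j * grid[0]) + 1)      # argument of L(jw) - (-1)
    for w in grid[1:]:
        cur = cmath.phase(L(1j * w) + 1)
        d = cur - prev
        if d > math.pi:                           # unwrap the phase step
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return round(-total / (2 * math.pi))          # N > 0 means clockwise

K, T1, T2 = 2.0, 1.0, 0.5
GH = lambda s: K / ((T1 * s + 1) * (T2 * s + 1))
N = clockwise_encirclements_of_minus_one(GH)      # N = 0 and P = 0, so Z = 0
```

With N = 0 and P = 0, the criterion gives Z = N + P = 0 closed-loop poles in the right half plane, matching the conclusion of Example 3.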
FIGURE 2.3 The Nyquist Contour. (A) The Nyquist path, enclosing the entire right half of the s plane. (B) The modified Nyquist path, which detours around poles or zeros of G(s)H(s) on the jω axis along small semicircles of radius ε.
FIGURE 2.4 Polar Plots in the GH Plane. (A) T1 < T2, stable. (B) T1 = T2: G(jω)H(jω) passes through the −1 + j0 point. (C) T1 > T2, unstable. In all of the figures, α corresponds to ω = +∞ and β corresponds to ω = −∞.
2.3.3 Phase Margin

The phase margin is the amount of additional phase lag at the gain crossover frequency required to bring the system to the verge of instability. The gain crossover frequency is the frequency at which |G(jω)|, the magnitude of the open-loop transfer function, is unity. The phase margin γ is 180° plus the phase angle φ of the open-loop transfer function at the gain crossover frequency, or:

    γ = 180° + φ.
FIGURE 2.5 Phase and Gain Margin of a Stable System

2.3.4 Gain Margin

The gain margin is defined as the reciprocal of the magnitude |G(jω)| at the frequency at which the phase angle is −180°. Defining the phase crossover frequency ω1 to be the frequency at which the phase angle of the open-loop transfer function equals −180° gives the gain margin Kg:

    Kg = 1 / |G(jω1)|.

In terms of decibels:

    Kg dB = 20 log Kg = −20 log |G(jω1)|.

Phase margin and gain margin are illustrated in Figures 2.5 and 2.6. For a stable minimum-phase system, the gain margin indicates how much the gain can be increased before the system becomes unstable. For an unstable system, the gain margin indicates how much the gain must be decreased to make the system stable.
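Both margins can be located numerically by bisection on the magnitude and phase of the open-loop frequency response. The loop G(s) = 1/(s(s + 1)(s + 5)) below is a common textbook example, not one from this chapter; for it, the phase crossover is at ω1 = √5 and the gain margin is 30 (about 29.5 dB):

```python
import cmath
import math

def G(w):
    s = 1j * w
    return 1.0 / (s * (s + 1) * (s + 5))

def bisect(f, lo, hi, iters=100):
    """Find a sign change of f in [lo, hi] by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Phase crossover w1: Im G(jw) = 0 with Re G(jw) < 0, i.e. phase = -180 deg.
w1 = bisect(lambda w: G(w).imag, 0.5, 10.0)
Kg = 1.0 / abs(G(w1))                        # gain margin
Kg_dB = -20.0 * math.log10(abs(G(w1)))       # = 20 log10(Kg)

# Gain crossover wc: |G(jwc)| = 1; phase margin = 180 + angle(G(jwc)) in deg.
wc = bisect(lambda w: abs(G(w)) - 1.0, 0.01, 10.0)
pm_deg = 180.0 + math.degrees(cmath.phase(G(wc)))
```

Bisecting on Im G(jω) rather than on the wrapped phase angle avoids the ±180° discontinuity of the arctangent.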
2.4 Lyapunov Stability Concepts

When the system in question is nonlinear, or linear but time-varying, the preceding stability criteria no longer apply.
FIGURE 2.6 Phase and Gain Margin of an Unstable System
The alternative, Lyapunov stability theory, is introduced as follows. Use ||x|| to denote the norm of x ∈ R^n. One example of a vector norm is given by:

    ||x|| = ( Σ_{i=1}^{n} x_i^2 )^(1/2),

which is usually called the Euclidean norm or the l2-norm. A neighborhood of the origin specified by h > 0 will be denoted by B(h) = {x ∈ R^n : ||x|| < h}. Consider continuous-time dynamical systems described by differential equations of the form:

    ẋ = f(x),  t ≥ 0,        (2.2)

where x ∈ R^n and ẋ represents the derivative of x with respect to time t. It is assumed that f : R^n → R^n or f : B(h) → R^n for some h > 0. It is also assumed that f(x) is continuous in x.

FIGURE 2.7 The Two Equilibrium Points of a Pendulum
2.4.1 Equilibrium Point

An equilibrium point is the point in the state space at which the dynamical system will stay if it starts from that point. For systems described by equation 2.2, an equilibrium point xe satisfies the condition f(xe) = 0. Other terms for equilibrium point include stationary point, singular point, critical point, and rest position.

Example 5
Consider the simple pendulum system described by the equations:

    ẋ1 = x2,
    ẋ2 = −k sin x1,        (2.3)

where k > 0. Equation 2.3 describes a dynamical system in R^2. The x1 denotes the angle of the pendulum from the vertical position, and x2 denotes the angular velocity. This system has two equilibrium points: one is located as shown in Figure 2.7 (left), and the other is located as shown in Figure 2.7 (right). Mathematically, these two equilibrium points can be represented by (2kπ, 0) and (2kπ + π, 0), respectively, for k = 0, ±1, ±2, ....

2.4.2 Isolated Equilibrium Point

An equilibrium point xe is called isolated if there is an h > 0 such that B(xe, h) = {x ∈ R^n : ||x − xe|| < h} contains no equilibrium point other than xe itself, where ||·|| represents any equivalent vector norm on R^n. Both equilibrium points in example 5 are isolated equilibrium points in R^2.

Example 6
Consider a system described by the equations:

    ẋ1 = a x1 + b x1 x2,
    ẋ2 = b x1 x2,        (2.4)

where a > 0 and b > 0 are constants. This example does not have any isolated equilibrium because every point on the x2 axis (i.e., x1 = 0) is an equilibrium point of equation 2.4.
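The equilibrium and isolation tests are straightforward to check numerically for the pendulum of Example 5, using the nonsingular-Jacobian condition made precise in Example 8 below (the value k = 2.0 is an arbitrary choice):

```python
import math

k = 2.0

def f(x1, x2):
    """Pendulum vector field of Example 5: (x1', x2') = (x2, -k sin x1)."""
    return (x2, -k * math.sin(x1))

def jacobian(x1, x2):
    # For this system, J = [[0, 1], [-k*cos(x1), 0]].
    return [[0.0, 1.0], [-k * math.cos(x1), 0.0]]

def det2(J):
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]

down = f(0.0, 0.0)            # hanging-down equilibrium: f = (0, 0)
up = f(math.pi, 0.0)          # inverted equilibrium: f = (0, ~0)

# Nonsingular Jacobians confirm that both equilibria are isolated.
d_down = det2(jacobian(0.0, 0.0))       # det = k
d_up = det2(jacobian(math.pi, 0.0))     # det = -k
```

The determinant changes sign between the two equilibria, which is consistent with the hanging position being a center and the inverted position a saddle of the linearization.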
Example 7
The linear system described by:

    ẋ = A(t)x

has a unique equilibrium, at the origin, if A(t) is nonsingular for all t ≥ 0.

Example 8
Assume that in equation 2.2, f is continuously differentiable with respect to all of its arguments, and let:

    J(xe) = ∂f(x)/∂x evaluated at x = xe,

where ∂f/∂x is the n×n Jacobian matrix defined by (∂f/∂x)_ij = ∂f_i/∂x_j. If f(xe) = 0 and J(xe) is nonsingular, then xe is an isolated equilibrium of system 2.2.

In stability theory, it is usually assumed that a given equilibrium point is an isolated equilibrium. Stability concepts will be introduced for equilibrium at the origin (i.e., for xe = 0). If the equilibrium xe of equation 2.2 is not at the origin, one can always transform equation 2.2 by letting z = x − xe. After such a transformation, the new system will have an equilibrium at the origin. Equation 2.2 will then become:

    dz/dt = F(z),

where F(z) = f(z + xe). Therefore, without loss of generality, one can assume that system 2.2 has an isolated equilibrium point at the origin.

2.4.3 Stability

The equilibrium xe = 0 of equation 2.2 is stable if for every ε > 0 there exists a δ(ε) > 0 such that:

    ||x(t)|| < ε for all t ≥ 0,

whenever ||x(0)|| < δ(ε). The behavior of a stable equilibrium point is depicted in Figure 2.8 for the case x ∈ R^2. By choosing the initial points in a sufficiently small neighborhood of the origin specified by δ, the trajectory of the system can be forced to lie entirely inside a neighborhood of the origin specified by ε.

The equilibrium xe = 0 of equation 2.2 is asymptotically stable if the following statements are true:
1) The equilibrium is stable.
2) There exists an η > 0 such that lim_{t→∞} x(t) = 0 whenever ||x(0)|| < η.

The set of all x(0) ∈ R^n such that x(t) → 0 as t → ∞ is called the domain of attraction of the equilibrium xe = 0 of equation 2.2.

The equilibrium xe = 0 of equation 2.2 is exponentially stable if there exists an α > 0 and, for every ε > 0, a δ(ε) > 0 such that:

    ||x(t)|| ≤ ε e^(−αt) for all t ≥ 0,

whenever ||x(0)|| ≤ δ(ε). The equilibrium xe = 0 of equation 2.2 is unstable if the equilibrium is not stable.

The preceding concepts pertain to local properties of an equilibrium. The following concepts characterize some global properties of an equilibrium.

2.4.4 Global Properties of an Equilibrium

The equilibrium xe = 0 of equation 2.2 is asymptotically stable in the large if it is stable and if every solution of the equation tends to zero as t → ∞. The other term often used for asymptotic stability in the large is global stability. The equilibrium xe = 0 of equation 2.2 is exponentially stable in the large (or globally exponentially stable) if there exists an α > 0 and, for any β > 0, a k(β) > 0 such that:

    ||x(t)|| < k(β) ||x(0)|| e^(−αt) for all t ≥ 0,

whenever ||x(0)|| < β. The preceding concepts are referred to as stability (and instability) in the sense of Lyapunov.
2.5 Lyapunov Stability of Linear TimeInvariant Systems Consider linear systems described by:
x2
x_ ¼ Ax, t 0
d x1 e
FIGURE 2.8 Typical Behavior of the Trajectory in the Vicinity of a Stable Equilibrium. This is true whenever x(0) d(e).
if A is nonsingular.
(2:5)
System 2.5 has a unique equilibrium xe ¼ 0. The equilibrium xe ¼ 0 of equation 2.5 is stable if all eigenvalues of A have nonpositive real parts and if every eigenvalue of A that has a zero real part is a simple zero of the characteristic polynomial of A. The equilibrium xe ¼ 0 of equation 2.5 is asymptotically stable if and only if all eigenvalues of A have negative real parts. A real n n matrix A is called stable or a Hurwitz matrix if all of its eigenvalues have negative real parts. If at least one of the eigenvalues has a positive real part, then A is called unstable. Thus, the equilibrium xe ¼ 0 of equation 2.5 is asymptotically stable if and only if A is stable. If A is unstable, then xe ¼ 0 is unstable.
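The eigenvalue conditions for linear time-invariant systems translate directly into a numerical check. A minimal sketch (the helper name and example matrices are my own) classifying a matrix as Hurwitz:

```python
import numpy as np

def is_hurwitz(A):
    """True if every eigenvalue of A has a negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
A_unstable = np.array([[0.0, 1.0], [2.0, 1.0]])   # eigenvalues 2, -1

assert is_hurwitz(A_stable)        # x = 0 asymptotically stable
assert not is_hurwitz(A_unstable)  # x = 0 unstable
```

The same routine decides asymptotic stability of xe = 0 for any ẋ = Ax, since that property is equivalent to A being Hurwitz.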
Derong Liu

2.6 Lyapunov Stability Results

The most important feature of Lyapunov stability theory is that it allows the stability properties of an equilibrium of system 2.2 to be determined without having to solve equation 2.2.

2.6.1 Definiteness of a Function

A continuous function v: Rⁿ → R [resp., v: B(h) → R] is said to be positive definite if:
1) v(0) = 0, and
2) v(x) > 0 for all x ≠ 0 [resp., for 0 < ‖x‖ ≤ r for some r > 0].

A continuous function v is said to be negative definite if −v is a positive definite function.

A continuous function v: Rⁿ → R [resp., v: B(h) → R] is said to be positive semidefinite if:
1) v(0) = 0, and
2) v(x) ≥ 0 for all x ∈ B(r) for some r > 0.

A continuous function v is said to be negative semidefinite if −v is a positive semidefinite function.

A continuous function v: Rⁿ → R is said to be radially unbounded if:
1) v(0) = 0,
2) v(x) > 0 for all x ≠ 0, and
3) v(x) → ∞ as ‖x‖ → ∞.

Example 9
The function v: R³ → R given by:

v(x) = x1² + x2² + x3²,

is positive definite and radially unbounded.

Example 10
The function v: R³ → R given by:

v(x) = x1² + (x2 − x3)²,

is positive semidefinite. It is not positive definite because it is zero for all x ∈ R³ such that x1 = 0 and x2 = x3.

Example 11
The function v: R³ → R given by:

v(x) = x1² + x2² + x3² − (x1² + x2² + x3²)²,

is positive definite in the interior of the ball given by x1² + x2² + x3² < 1. It is not radially unbounded since v(x) < 0 when x1² + x2² + x3² > 1.

Example 12
The function v: R³ → R given by:

v(x) = x1⁴/(1 + x1⁴) + x2⁴ + x3⁴,

is positive definite but not radially unbounded.

2.6.2 Principal Lyapunov Stability Theorems

· If there exists a continuously differentiable and positive definite function v whose derivative (with respect to t) along the solutions of equation 2.2, given by:

v̇(2.2) = Σ_{i=1}^{n} (∂v/∂xi) fi(x) = ∇v(x)ᵀ f(x),

is negative semidefinite (or identically zero), then the equilibrium xe = 0 of equation 2.2 is stable.
· If there exists a continuously differentiable and positive definite function v with a negative definite derivative v̇(2.2), then the equilibrium xe = 0 of equation 2.2 is asymptotically stable.
· If there exists a continuously differentiable, positive definite, and radially unbounded function v with a negative definite derivative v̇(2.2), then the equilibrium xe = 0 of equation 2.2 is asymptotically stable in the large.

Example 13
Consider the simple pendulum in example 5. If the following is chosen:

v(x) = k(1 − cos x1) + (1/2)x2²,

then this function is continuously differentiable, v(0) = 0, and v is positive definite. Along the solutions of equation 2.3, the following is true:

v̇(2.3) = (k sin x1)ẋ1 + x2ẋ2 = (k sin x1)x2 + x2(−k sin x1) = 0.

Therefore, the equilibrium xe = 0 of the simple pendulum system is stable.

Example 14
Consider the system:

ẋ1 = −(x1 − c2x2)(x1² + x2² − 1),
ẋ2 = −(c1x1 + x2)(x1² + x2² − 1). (2.6)

In these equations, c1 > 0 and c2 > 0. If the following is chosen:

v(x) = c1x1² + c2x2²,

it can be verified that:

v̇(2.6) = −2(c1x1² + c2x2²)(x1² + x2² − 1).

Because v is positive definite and v̇(2.6) is negative definite for x1² + x2² < 1, the equilibrium xe = 0 of equation 2.6 is asymptotically stable.

Example 15
Consider the system:

ẋ1 = x2 − cx1(x1² + x2²),
ẋ2 = −x1 − cx2(x1² + x2²). (2.7)

In these equations, c > 0. If the following is chosen:

v(x) = x1² + x2²,

it can be verified that:

v̇(2.7) = −2c(x1² + x2²)².

Because v is positive definite and radially unbounded and v̇(2.7) is negative definite, the equilibrium xe = 0 of equation 2.7 is asymptotically stable in the large.

Lyapunov instability theorem: The equilibrium xe = 0 of equation 2.2 is unstable if there exists a continuously differentiable function v such that v̇(2.2) is negative definite (positive definite) and if in every neighborhood of the origin there are points x such that v(x) < 0 [v(x) > 0].

Example 16
Consider the system:

ẋ1 = c1x1 + x1x2,
ẋ2 = −c2x2 + x1². (2.8)

The c1 > 0 and c2 > 0 are constants. If the following is chosen:

v(x) = x2² − x1²,

then the following is true:

v̇(2.8) = −2(c1x1² + c2x2²).

Because v̇(2.8) is negative definite and in every neighborhood of the origin v(x) < 0 when |x1| > |x2|, the equilibrium xe = 0 of equation 2.8 is unstable.
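The global asymptotic stability claim of example 15 can be probed numerically: integrating system 2.7 from a nontrivial initial condition, v(x) = x1² + x2² should decrease monotonically along the trajectory, consistent with v̇(2.7) = −2c(x1² + x2²)². A minimal fixed-step sketch (the step size, horizon, and initial condition are my own choices):

```python
import numpy as np

c = 1.0  # any c > 0

def f(x):
    # System 2.7: x1' = x2 - c*x1*(x1^2 + x2^2), x2' = -x1 - c*x2*(x1^2 + x2^2)
    r2 = x[0]**2 + x[1]**2
    return np.array([x[1] - c*x[0]*r2, -x[0] - c*x[1]*r2])

def v(x):
    return x[0]**2 + x[1]**2

x = np.array([1.0, -0.5])
dt = 1e-3
values = [v(x)]
for _ in range(20000):          # 20 s of forward-Euler integration
    x = x + dt * f(x)
    values.append(v(x))

# v decreases along the solution toward 0, as the Lyapunov argument predicts.
assert all(b <= a + 1e-9 for a, b in zip(values, values[1:]))
assert values[-1] < 0.05
```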
3 Robust Multivariable Control

Oscar R. González
Department of Electrical and Computer Engineering, Old Dominion University, Norfolk, Virginia, USA

Atul G. Kelkar
Department of Mechanical Engineering, Iowa State University, Ames, Iowa, USA

3.1 Introduction .............................. 1037
3.2 Modeling .................................. 1037
3.3 Performance Analysis ...................... 1039
    3.3.1 MIMO Frequency Response and System Gains · 3.3.2 Performance Measures
3.4 Stability Theorems ........................ 1042
    3.4.1 Nyquist Criteria · 3.4.2 Small Gain Criteria
3.5 Robust Stability .......................... 1042
3.6 Linear Quadratic Regulator and Gaussian Control Problems ... 1043
    3.6.1 Linear Quadratic Regulator Formulation · 3.6.2 Linear Quadratic Gaussian Formulation
3.7 H∞ Control ................................ 1044
3.8 Passivity-Based Control ................... 1045
    3.8.1 Passivity of Linear Systems · 3.8.2 State-Space Characterization of PR Systems · 3.8.3 Stability of PR Systems · 3.8.4 Passification Methods
3.9 Conclusion ................................ 1047
    References ................................ 1047
3.1 Introduction

Control system design and analysis require a mathematical model of the physical process to be controlled. This model may be derived from first principles, obtained via system identification, or created by one engineer and then given to another. Regardless of the origin of the model, the control engineer needs to understand it and its limitations completely. For some processes, it is possible to consider models with a single input and a single output (SISO), for which a vast literature is available. In many other cases, it is necessary to consider models with multiple inputs and/or outputs (MIMO). This chapter presents a selection of modeling, analysis, and design topics for multivariable, finite-dimensional, causal, linear, time-invariant systems that generalize the results for SISO systems. The reader is encouraged to consult the references at the end of this chapter for additional topics in multivariable control systems and for the proofs of the lemmas and theorems presented.
Copyright © 2004 by Academic Press. All rights of reproduction in any form reserved.

3.2 Modeling

The derivation of a mathematical model for a linear, time-invariant system typically starts by writing differential equations relating the inputs to a standard set of variables, such as loop currents and the configuration variables (e.g., displacements and velocities). In the time domain, the differential equations can be written as follows:

P(D)ξ(t) = Q(D)u(t), (3.1)

where D = d/dt is a differential operator, u(t) ∈ R^m is a vector of inputs, ξ(t) ∈ R^l is a vector of the standard variables, called the partial state variables, and P(D) and Q(D) are differential operator matrices of compatible dimensions. The vector of output variables is, in general, represented by a linear differential combination of the partial state variables and the inputs:

y(t) = R(D)ξ(t) + W(D)u(t), (3.2)
where y(t) ∈ R^p is the vector of outputs and where R(D) and W(D) are differential operator matrices of compatible dimensions. The system representation introduced in equations 3.1 and 3.2 is commonly referred to as the polynomial matrix description (PMD) and is the most natural representation for many engineering processes. For analysis and design, a state-space representation that is an equivalent representation of
proper¹ PMDs is more appropriate. The state-space representation of a system with input u(t) and output y(t) is as follows:

ẋ(t) = Ax(t) + Bu(t),
y(t) = Cx(t) + Du(t), (3.3)
where x(t) ∈ R^n is a vector of state variables: any minimal set of variables at time t0 that, together with the input u(t), t ≥ t0, completely characterizes the response of the system for t ≥ t0. The state-space representation defined in equation 3.3 will be denoted by the four-tuple (A, B, C, D). The study of the fundamental properties of the state-space representation is covered, for example, in a first graduate course in linear systems (Antsaklis and Michel, 1997). Because any state-space and PMD representation of a given system are equivalent, they have similar properties. For example, the eigenvalues of A are the roots of |P(λ)|, where P(λ) is the polynomial matrix in λ found by replacing D with λ. Furthermore, the state-space representation is state controllable (observable) if and only if the equivalent PMD is controllable (observable).

The generalization of an SISO transfer function is a transfer function matrix, which is found by taking the Laplace transform of equation 3.3 with zero initial conditions. This operation yields:

G(s) = C(sI − A)⁻¹B + D, (3.4)

where G(s) ∈ R(s)^{p×m} is a proper rational transfer function matrix consisting of p × m SISO transfer function entries. The transfer function matrix can also be written in terms of an equivalent PMD. Because a transfer function matrix, as in the SISO case, represents only the controllable and observable dynamics of a system, consider the following controllable and observable PMD:

DL(s)ξ(s) = NL(s)u(s), y(s) = ξ(s), (3.5)

where DL(s) ∈ R[s]^{p×p} and NL(s) ∈ R[s]^{p×m} are left coprime polynomial matrices in s. The PMD in equation 3.5 is controllable since DL(s) and NL(s) are left coprime; otherwise, it would only be observable. The PMD is a realization of the transfer function matrix in equation 3.4 since G(s) = DL(s)⁻¹NL(s). The poles of a transfer function matrix G(s) are the roots of the pole polynomial of G(s), where the pole polynomial is given by the least common denominator of all nonzero minors of G(s). Thus, every pole appears as the pole of at least one SISO transfer function entry of G(s). The MIMO poles can also be found as the roots of |DL(s)|. As in SISO systems, the set of poles is a subset of the set of eigenvalues of A if A and G(s) satisfy equation 3.4. There are several definitions of multivariable zeros (Schrader and Sain, 1989).

¹ A PMD is proper if lim_{λ→∞} (R(λ)P⁻¹(λ)Q(λ) + W(λ)) exists and is finite. In fact, if the PMD in equations 3.1 and 3.2 is equivalent to the state-space in equation 3.3, then lim_{λ→∞} (R(λ)P⁻¹(λ)Q(λ) + W(λ)) = D.
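Equation 3.4 translates directly into a numerical routine: given (A, B, C, D), the transfer function matrix can be evaluated at any complex frequency s. A sketch (the function name and the diagonal example system are my own):

```python
import numpy as np

def tfm(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a complex frequency s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Two decoupled first-order lags: G(s) = diag(1/(s+1), 2/(s+5))
A = np.diag([-1.0, -5.0])
B = np.eye(2)
C = np.diag([1.0, 2.0])
D = np.zeros((2, 2))

G0 = tfm(A, B, C, D, 0.0)   # dc gain: diag(1, 0.4)
assert np.allclose(G0, np.diag([1.0, 0.4]))
```

Using `np.linalg.solve` instead of forming the inverse of (sI − A) explicitly is the usual numerically preferred choice.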
The transmission zero definition states that s = sz is a zero if the rank of NL(sz) is less than the normal rank of NL(s). An important difference between SISO zeros and transmission zeros is that a transmission zero does not have to be a zero of any SISO transfer function entry of G(s). Another difference is that it is possible for G(s) to have a pole equal to a zero. To understand other properties of poles and transmission zeros, such as multiplicities, the Smith-McMillan form of G(s) can be used.

To understand the effect of the poles of G(s) on a particular output, consider yi(s), the ith response, given by:

yi(s) = Σ_{j=1}^{m} Gij(s)uj(s), (3.6)

where Gij(s) is the ijth entry of G(s) and where uj(s) is the Laplace transform of the jth input. Equation 3.6 shows that in MIMO systems, the outputs are linear combinations of the inputs, where the weights are rational transfer functions. Only the poles of the entries in the ith row of G(s) can affect yi(s), depending on which inputs are nonzero. More insight can be gained by solving for yi(s) in a state-space representation of the system. Because the poles correspond only to the controllable and observable eigenvalues of A, let equation 3.3 be a minimal, that is, a controllable and observable, realization of G(s) with n states. The Laplace transform of the ith output is given by:

yi(s) = (Ci(sI − A)⁻¹B + Di)u(s), (3.7)

where Ci and Di are the ith rows of C and D, respectively. If A has distinct eigenvalues, the partial fraction expansion of the response is given by:

yi(s) = ( Σ_{ℓ=1}^{n} (1/(s − λℓ)) Ci uℓ wℓᴴ B + Di ) u(s), (3.8)

where uℓ, wℓ ∈ Cⁿ are right and left eigenvectors of A, respectively, and H denotes the complex-conjugate transpose. The effect of the ℓth pole on the ith output is determined by the 1 × m residue matrix Ci uℓ wℓᴴ B and the product of this residue with the vector of inputs u(s). If each entry in the residue matrix is small, then this pole will have little effect on yi(t). It is also possible for some entries of the residue matrix not to be small but the product of the residue with the vector of inputs to still be small. This indicates that for some directions of the input vector, a pole may have a more significant effect than in other directions. The concept of input directions is unique to MIMO systems, and it will be discussed again in the next subsection.

Stability is an important system property. This section is mostly interested in two types: asymptotic and bounded-input, bounded-output (BIBO) stability. Asymptotic stability is a property of an internal representation, such as a state-space. The representation in 3.3 is said to be asymptotically stable if the solutions of ẋ(t) = Ax(t) approach the origin
[FIGURE 3.1 Classical Unity Feedback Configuration. The controller Gc(s) drives the plant Gp(s) in a negative feedback loop: the reference r and the fed-back output y form the controller input uc; the controller output yc and the input disturbance di form the plant input up; the plant output yp and the output disturbance do form the measured output y.]
for all initial conditions x(0). The test for asymptotic stability is that the eigenvalues of A have negative real parts. Asymptotic stability implies BIBO stability, which is a property of an external representation. For BIBO stability, only the poles of a transfer function matrix need to have negative real parts.

Consider now the classical unity feedback configuration in Figure 3.1, where r(t) ∈ R^p denotes a vector of reference inputs and where di(t) ∈ R^m and do(t) ∈ R^p denote vectors of disturbance inputs at the input and output of the plant, respectively. Assume that the state-space representations (transfer function matrices) of the plant and controller are given by (Ap, Bp, Cp, Dp) (Gp(s)) and (Ac, Bc, Cc, Dc) (Gc(s)), respectively. The representation of the closed-loop system will be well-formed if the dimensions (Gp(s) ∈ Rp(s)^{p×m} and Gc(s) ∈ Rp(s)^{m×p}) are compatible and if |I + DpDc| ≠ 0. The state representation of the closed-loop system is of the form:

ẋcl(t) = Acl xcl(t) + Bcl w(t),
y(t) = Ccl xcl(t) + Dcl w(t), (3.9)

where xcl(t) = [xp(t), xc(t)]ᵀ and w(t) = [r(t), do(t), di(t)]ᵀ. The properties of the closed-loop system are determined by analyzing (Acl, Bcl, Ccl, Dcl). For example, the closed-loop system is asymptotically stable if the eigenvalues of Acl have negative real parts. The closed-loop system is then said to be internally stable.

The analysis and design of complex feedback configurations is simplified by using the general closed-loop block diagram in Figure 3.2, where w(t) is the vector of all exogenous inputs, yK(t) is the vector of controller outputs, z(t) is the vector of performance variables, and uK(t) is the vector of inputs to the controller. The top block is called the two-input, two-output plant, P, and the bottom one corresponds to the matrix of controllers, denoted by K, that is formed after all the controllers have been pulled out of the closed-loop system. As a
trivial example, the classical unity feedback is represented in the general block diagram with the following transfer function matrices:

[z(s); uK(s)] = P(s)[w(s); yK(s)], yK(s) = K(s)uK(s),

where:

P(s) = [P11(s) P12(s); P21(s) P22(s)] =
[ I  −I  −Gp(s) | −Gp(s)
  0   0    0    |    I
  I  −I  −Gp(s) | −Gp(s) ].

[FIGURE 3.2 General Closed-Loop Block Diagram. The two-input, two-output plant P(s) maps (w, yK) to (z, uK), and the controller K(s) maps uK back to yK.]

In this example, K(s) = Gc(s); the exogenous inputs vector is w(t) = [r(t), do(t), di(t)]ᵀ; the vector of performance variables has been taken to be z(t) = [r(t) − y(t), yK(t)]ᵀ; and the vector of the controller's inputs and outputs is simply given by uK(t) = uc(t) and yK(t) = yc(t), respectively. The first performance variable is the tracking error, e(s) = r(s) − y(s). A state-space representation of the two-input, two-output plant is the following:

ẋP(t) = Ap xP(t) + [0  0  Bp | Bp] [w(t); yK(t)]. (3.10)

[z(t); uK(t)] = [−Cp; 0; −Cp] xP(t) + [ I  −I  −Dp | −Dp ; 0  0  0 | I ; I  −I  −Dp | −Dp ] [w(t); yK(t)]. (3.11)

The closed-loop transfer function matrix from the exogenous inputs to the performance variables is as follows:

z(s) = Tzw(s)w(s).

The closed-loop transfer function matrix is in fact a lower linear fractional transformation (LFT) of P(s) with respect to K(s), as given by:

Tzw(s) = Fℓ(P(s), K(s)) = P11(s) + P12(s)K(s)(I − P22(s)K(s))⁻¹P21(s). (3.12)

Based on this LFT, the general closed-loop system is well-formed if |I − P22(∞)K(∞)| ≠ 0. In the trivial example of the classical unity feedback system, this reduces to |I − P22(∞)K(∞)| = |I + DpDc| ≠ 0.

3.3 Performance Analysis
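Equation 3.12 can be evaluated numerically at a fixed frequency once P is partitioned. A sketch of the lower LFT (the function name, partition arguments, and scalar sanity check are my own):

```python
import numpy as np

def lft_lower(P, K, nz, nw):
    """F_l(P, K) = P11 + P12 K (I - P22 K)^{-1} P21,
    where P is partitioned so that P11 is nz x nw."""
    P11, P12 = P[:nz, :nw], P[:nz, nw:]
    P21, P22 = P[nz:, :nw], P[nz:, nw:]
    M = np.eye(P22.shape[0]) - P22 @ K
    return P11 + P12 @ K @ np.linalg.solve(M, P21)

# Scalar sanity check: P = [[0, 1], [1, 0]] wires K straight through,
# so F_l(P, K) = K.
P = np.array([[0.0, 1.0], [1.0, 0.0]])
K = np.array([[0.5]])
assert np.allclose(lft_lower(P, K, 1, 1), [[0.5]])
```

The well-formedness condition |I − P22K| ≠ 0 of the text shows up here as the requirement that M be nonsingular.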
The purpose of the controller is to make the closed-loop system meet the desired specifications regardless of the uncertainty present. Examples of specifications are that the closed-loop system should be asymptotically stable, the steady-state error
for each output channel should be small, and these stability and performance specifications should be maintained not only for the nominal plant model but also for all models in a specified uncertainty set. In this case, the closed-loop system is said to have the properties of robust stability and robust performance.

Consider the classical unity feedback configuration in Figure 3.1 with an additional sensor noise vector η(s) so that uc(s) = r(s) − (η(s) + y(s)). The performance analysis depends on the following three types of transfer function matrices:

· Return ratio: Lo(s) = Gp(s)Gc(s).
· Sensitivity: So(s) = (I + Lo(s))⁻¹.
· Complementary sensitivity: To(s) = Lo(s)(I + Lo(s))⁻¹.
These transfer function matrices are defined when the loop is broken at the output of the plant. Similar definitions follow when the loop is broken at the plant's input. The input/output and input/error relations can be written as shown next:

y(s) = So(s)do(s) + So(s)Gp(s)di(s) + To(s)r(s) − To(s)η(s). (3.13)
e(s) = −So(s)do(s) − So(s)Gp(s)di(s) + (I − To(s))r(s) + To(s)η(s). (3.14)

Notice that the mapping from the reference inputs, r(s), to the errors contributed by them, er(s), is given by I − To(s). In this case, if the closed-loop system is internally stable, the steady-state tracking error contributed by a vector of step reference inputs r(s) = R0(1/s), R0 ∈ R^p, is found using Laplace's final value theorem to be er(∞) = (I − To(0))R0. Thus, the error contributions to each output channel will be zero if, in addition to internal stability, To(0) = I. Because the sensitivity and complementary sensitivity transfer function matrices satisfy:

So(s) + To(s) = I, (3.15)

then So(0) = 0_{p×p} results in er(∞) = 0. A sufficient condition for the dc gain of the sensitivity matrix to vanish is that every entry of Lo(s) have at least one pole at the origin. This is the generalization of system type to MIMO systems. Furthermore, if So(0) = 0_{p×p}, the steady-state error contributed by a vector of step functions at the output disturbances, do(s), will also be zero. An additional condition is needed for zero steady-state error contributed by a vector of step functions at the input disturbances, di(s). A sufficient condition is that the plant have no entries with poles at the origin. Making To(0) = I results in the desired zero steady-state errors for vectors of step functions at the reference inputs and at the input and output disturbances. This choice, however, has the undesirable effect of leaving the error contributed by the dc component of the sensor noise unattenuated. This is a common trade-off in control systems, and it does not affect the desired performance as long as the signal-to-noise ratio at low frequencies is made sufficiently high.

Zero steady-state error is possible for step inputs by appropriately including a model of the exosystem in the feedback loop (Gonzalez and Antsaklis, 1991). If zero steady-state error is not needed, then it will be necessary to make the mappings from the four exogenous inputs to the tracking error in equation 3.14 small. Three ways to determine the size of a transfer function matrix are presented in the following subsection. To simplify the presentation, assume from now on that the plant is square with p = m. In addition, since the physical units used for input and output signals may lead to errors of different orders of magnitude, it is useful to normalize or scale the magnitudes of the plant's inputs and outputs. Procedures for scaling MIMO systems are presented in, for example, Skogestad and Postlethwaite (1996). One approach is to normalize the plant's inputs and outputs so that the magnitude of each error is less than one. An alternative and common choice is to include the normalization in the frequency-dependent weights to be introduced for control system design.
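The identity of equation 3.15 holds pointwise in s and is easy to verify numerically for any loop. A sketch with an arbitrary 2×2 return ratio evaluated at one frequency (the numerical values are illustrative only):

```python
import numpy as np

# Return ratio Lo(jw) at one frequency (arbitrary illustrative values)
Lo = np.array([[2.0 + 1.0j, 0.3],
               [0.1, 1.5 - 0.5j]])

I = np.eye(2)
So = np.linalg.inv(I + Lo)       # sensitivity
To = Lo @ np.linalg.inv(I + Lo)  # complementary sensitivity

assert np.allclose(So + To, I)   # equation 3.15
```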
3.3.1 MIMO Frequency Response and System Gains

To determine the frequency response of a BIBO stable transfer function matrix, G(s), let its input be the vector of complex exponentials u(t) = ū e^{jωt}, ū ∈ C^p; then the steady-state response is a complex exponential vector of the same frequency, with amplitudes and phases changed by G(s)|_{s=jω}. Let the steady-state response be given by yss(t) = ȳ e^{jωt}, ȳ ∈ C^p; then the complex vectors ū and ȳ are related by:

ȳ = G(jω)ū. (3.16)

The complex matrix G(jω) ∈ C^{p×p} is called the frequency response matrix, and it can be used to determine the size of G(s) at a particular frequency ω. In general, the size of G(s) is defined as the gain from an input to its corresponding output. If the input is u(t) = ū e^{jωt}, the gain of G(s) at ω can be defined at steady-state as the ratio of the Euclidean vector norms ‖ȳ‖/‖ū‖. This concept of gain is bounded as follows:

σ̲(ω) = min_{‖ū‖≠0} ‖G(jω)ū‖/‖ū‖ ≤ ‖G(jω)ū‖/‖ū‖ ≤ max_{‖ū‖≠0} ‖G(jω)ū‖/‖ū‖ = σ̄(ω), (3.17)

where σ̄(ω) and σ̲(ω) are the largest and smallest singular values of G(jω). These bounds are used to define the size of a TFM as follows:

· Large: G(s) is said to be large at ω if σ̲(G(jω)) is large.
· Small: G(s) is said to be small at ω if σ̄(G(jω)) is small.

The application and the scaling of the model determine what is meant by large or small singular values. In general, σ̲(G(jω)) ≫ 1 indicates that G(s) is large at ω. Similarly, σ̄(G(jω)) ≪ 1 indicates that G(s) is small at ω. A graphical representation of the frequency response consists of plotting the maximum and minimum singular values of G(jω) versus log(ω). This is the generalization of Bode's magnitude plot to MIMO systems. To get a better understanding of the role of singular values, substitute the singular value decomposition (SVD) of G(jω) in equation 3.16:

ȳ = Y(ω)Σ(ω)U(ω)ᴴū, (3.18)

where Σ(ω) ∈ C^{p×p} is diag{σ̄(ω) = σ1(ω), σ2(ω), …, σp(ω) = σ̲(ω)} and where Y(ω) and U(ω) ∈ C^{p×p} are unitary matrices (see Section 9, Chapter 1). The diagonal entries of Σ(ω) are called the singular values of G(s), and they depend on frequency. They are ordered from the largest to the smallest. Now, rewrite equation 3.18 in terms of the columns of Y(ω) and U(ω), and let Y(ω) = [y1(ω) ⋯ yp(ω)] and U(ω) = [u1(ω) ⋯ up(ω)]. Since the columns of U(ω) form an orthonormal basis of C^p, let a be the representation of ū with respect to {u1(ω), …, up(ω)}. Substituting ū = U(ω)a in equation 3.18 gives the following:

ȳ = Y(ω)Σ(ω)a = Σ_{i=1}^{p} σi(ω)ai(ω)yi(ω), (3.19)

where a = [a1(ω) ⋯ ap(ω)]ᵀ. This equation shows that the representation of ȳ in terms of the columns of Y(ω) (also an orthonormal basis) is:

Σ(ω)a = [σ̄(ω)a1(ω), σ2(ω)a2(ω), …, σ̲(ω)ap(ω)]ᵀ.

The columns of U(ω) can be called the principal input directions, and the columns of Y(ω) are the principal output directions. If the input is parallel to a principal input direction, then the steady-state response will be along the corresponding principal output direction, scaled by the corresponding singular value. In this sense, the singular values quantify the size of the effect of a transfer function matrix on particular input directions. This characterization of the size of a TFM will be used to develop quantitative measures of performance when the inputs are sinusoids in a specified frequency band. Two other measures will also be useful, and they can be defined as induced gains when the inputs and outputs belong to specified signal spaces.

Consider first the set of energy signals that are vectors of functions u mapping R into R^p. The signal u is said to have finite energy if:

‖u‖₂ = ( ∫₀^∞ ‖u(t)‖² dt )^{1/2} < ∞, (3.20)

where ‖u‖₂ is the norm of the vector signal and ‖u(t)‖ is the Euclidean norm of a vector in R^p. To pose and solve optimal control problems, it is useful to consider the most general set of energy signals. The desired set is the Lebesgue space of all square integrable functions, denoted by L2+ = L2[0, ∞). For convenience, the spatial dimension of vectors with entries in L2+ will not be included. A useful measure of the size of the transfer function matrix is the induced system gain. If the system is represented by the mapping G: L2 → L2, then the induced system gain is as follows:

sup_{u≠0} ‖Gu‖₂/‖u‖₂ = sup_{‖u‖₂=1} ‖Gu‖₂ = ‖G‖∞, (3.21)

where ‖G‖∞ is the induced system norm. This is the H∞ norm of the system, which gives the maximum gain of the system to a vector of sinusoids in the worst possible direction and at the worst possible frequency. If G(s) is proper and stable, then ‖G‖∞ = sup_ω σ̄{G(jω)} is finite. Another popular measure of a system is the H2 norm. One interpretation of the H2 norm is as the gain from a white noise input with unit variance to the power of the output. The power is defined as:

‖y‖pow = ( lim_{T→∞} (1/2T) ∫_{−T}^{T} ‖y(t)‖² dt )^{1/2},

where ‖y‖pow is only a seminorm.
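The singular-value bounds of equation 3.17 and the H∞ norm of equation 3.21 can be approximated by gridding the frequency axis. A sketch for the diagonal example G(s) = diag(1/(s+1), 2/(s+5)) (the example system and the grid are my own; a grid only lower-bounds the true supremum):

```python
import numpy as np

def G(w):
    """Frequency response matrix of G(s) = diag(1/(s+1), 2/(s+5)) at s = jw."""
    s = 1j * w
    return np.diag([1.0 / (s + 1.0), 2.0 / (s + 5.0)])

ws = np.logspace(-2, 2, 400)
sig_max = [np.linalg.svd(G(w), compute_uv=False)[0] for w in ws]   # sigma_bar
sig_min = [np.linalg.svd(G(w), compute_uv=False)[-1] for w in ws]  # sigma_underbar

hinf = max(sig_max)              # grid estimate of ||G||_inf = sup_w sigma_bar
assert abs(hinf - 1.0) < 1e-2    # peak gain here is the dc gain of 1/(s+1)
```

Plotting `sig_max` and `sig_min` against `ws` on log axes gives exactly the MIMO generalization of the Bode magnitude plot described above.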
3.3.2 Performance Measures

Consider again the classical unity feedback configuration in Figure 3.1 with an additional sensor noise vector η(s) so that uc(s) = r(s) − (η(s) + y(s)). If r(t), di(t), and do(t) are vectors of sinusoids up to a frequency ωlow, then adequate steady-state performance for these inputs requires that σ̄(So(jω)) ≪ 1 and σ̄(So(jω)Gp(jω)) ≪ 1 for ω < ωlow. The former requirement is met if and only if σ̲(Lo(jω)) ≫ 1 for ω < ωlow. If Gc(s) is invertible, the second requirement is met if σ̲(Gc(jω)) ≫ 1 for ω < ωlow. If the signal-to-noise ratio becomes poor for ω > ωhigh, then acceptable performance at steady-state requires attenuating the high-frequency components of the noise by making σ̄(To(jω)) ≪ 1 for ω > ωhigh. This is accomplished by making σ̄(Lo(jω)) ≪ 1 for ω > ωhigh. These are just some of the design guidelines that lead to acceptable designs. There are additional trade-offs and performance limitations that need to be taken into account, including guidelines for roll-off during the mid-frequencies.

A convenient way to combine the low-, mid-, and high-frequency requirements is to introduce frequency-dependent weights. These weights are included in P(s) when the general block diagram in Figure 3.2 is formed. The weights are typically added on the exogenous input channels and the performance output channels. The former weights serve to include in the model spectral information about the inputs. The latter weights are important in design to emphasize the frequency bands where the performance outputs need to be minimized. A possible design problem that results is to find a proper compensator Gc(s) so that ‖Tzw‖∞ < 1. This and other control problems will be discussed in the following sections.
3.4 Stability Theorems

In this section, a brief introduction is given to various stability criteria that can be used to determine the stability of a closed-loop control system.
3.4.1 Nyquist Criteria

Consider the negative feedback interconnection of a plant Gp(s) and controller Gc(s) in Figure 3.1. Let nLo+ denote the number of unstable poles of the return ratio Lo(s), and let nGp+ and nGc+ denote the numbers of unstable poles of Gp(s) and Gc(s), respectively. Then the Nyquist stability criterion is given by the following theorem:

Stability Theorem: The closed-loop system consisting of the negative feedback interconnection of Gp(s) and Gc(s) is internally stable if and only if nLo+ = nGp+ + nGc+ and the Nyquist plot of |I + Lo(s)| encircles the origin nLo+ times in the anticlockwise direction and does not pass through the origin.

The first condition guarantees that the closed-loop system has no unstable hidden modes. The second condition gives a MIMO generalization of the Nyquist plot in terms of the determinant of I + Lo(s). Because of the determinant, the Nyquist plot of a scalar times I + Lo(s) is not simply a scaled version of the Nyquist plot of |I + Lo(s)|.
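The MIMO Nyquist test can be explored numerically by evaluating det(I + Lo(jω)) as ω sweeps the imaginary axis. A sketch that merely samples the determinant on a grid (the 2×2 loop is illustrative; counting encirclements is left to inspection of the resulting curve):

```python
import numpy as np

def Lo(w):
    """An illustrative stable 2x2 return ratio evaluated at s = jw."""
    s = 1j * w
    return np.array([[2.0 / (s + 1.0), 0.5 / (s + 2.0)],
                     [0.0, 1.0 / (s + 1.0)]])

ws = np.linspace(-100.0, 100.0, 4001)
curve = np.array([np.linalg.det(np.eye(2) + Lo(w)) for w in ws])

# This Lo is stable (nLo+ = 0), so internal stability requires the curve to
# avoid the origin and produce zero net encirclements of it.
assert np.min(np.abs(curve)) > 0.0
```

For this triangular loop, det(I + Lo(s)) = (s + 3)(s + 2)/(s + 1)², which has no imaginary-axis zeros, so the sampled curve stays well away from the origin.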
3.4.2 Small Gain Criteria
Another criterion that is often used to determine internal stability of the feedback interconnection is the small gain theorem, which is based on limiting the loop gain of the system. This theorem is central to the analysis of robust stability. The small gain theorem states that if the feedback interconnection of two proper and stable systems has a loop-gain product less than unity, then the closed-loop system is internally stable. Several versions of this theorem exist. One version is given next.

Theorem: Consider the feedback system in Figure 3.1, where the systems Gp and Gc are proper and stable. Then the feedback system is internally stable if:

‖Gp‖∞ ‖Gc‖∞ < 1.

The stability theorems already given can be used in the analysis and synthesis of control systems. These theorems are also useful to determine conditions for robust stability and performance of the closed-loop system with the real plant, as discussed in the following section.

3.5 Robust Stability
Controller design uses a nominal plant model. The error between the nominal model and the real plant arises primarily from two sources: unmodeled dynamics and parametric uncertainties. If the controller design does not take these errors into account, it can guarantee neither the performance of the closed-loop system with the real plant nor the stability of the closed-loop system. Therefore, it is important to design controllers that maintain closed-loop stability in spite of erroneous design models and uncertainties in parameter values. Controllers so designed are said to impart stability robustness to the closed-loop system.

To analyze stability robustness, consider the unity feedback system in Figure 3.1. A basic plant uncertainty representation is Gp(s) = Gpo(s) + Δa(s), where Gpo(s) is the nominal plant model and Δa(s) is the additive uncertainty. Other uncertainty representations include the output multiplicative uncertainty, Gp(s) = (I + Δo(s))Gpo(s), and the input multiplicative one, Gp(s) = Gpo(s)(I + Δi(s)). For design purposes, it helps to normalize the uncertainty representations. For example, consider that the real plant is represented with an output multiplicative uncertainty, and let Δo(s) = Wo(s)Δ̃o(s), where Δ̃o(s) and Wo(s) are proper and stable, with ‖Δ̃o(s)‖∞ ≤ 1 and with Wo(s) a frequency-dependent scaling matrix. In this case, the unity feedback system in Figure 3.1 will be robustly stable if and only if ‖To Wo‖∞ < 1. A more general result is possible that is independent of the particular types of uncertainties needed to represent the real plant. Consider the general block diagram in Figure 3.2: if all the normalized uncertainty blocks are pulled out, the result is the new general block diagram shown in Figure 3.3.

FIGURE 3.3 Closed-Loop System with Uncertainties Pulled Out
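The multiplicative robust stability test is straightforward to evaluate on a frequency grid. The sketch below uses an illustrative scalar To and weight Wo (not from the text) and checks the peak of |To(jω)Wo(jω)|:

```python
import numpy as np

# Robust stability check |To(jw) Wo(jw)| < 1 on a frequency grid.
# To and Wo below are illustrative scalar examples, not from the text.

w = np.logspace(-3, 3, 2000)
To = 1.0 / (1j * w + 1.0)                      # nominal complementary sensitivity
Wo = 0.5 * (1j * w + 1.0) / (0.5j * w + 1.0)   # uncertainty weight
peak = np.max(np.abs(To * Wo))                 # grid estimate of ||To Wo||_inf

print(peak, peak < 1.0)  # condition holds for this uncertainty family
```

Since the peak is 0.5, stability is maintained for every uncertainty in the family ‖Δ̃o‖∞ ≤ 1; the margin would shrink as the weight grows.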
∼
∆(s)
x_ (t) A ¼ p_ (t) Q
with initial conditions:
BR 1 B T AT
x(0) ¼ x0 ;
x(t) : p(t)
p(tf ) ¼ Sx(tf ): (3:24)
The optimal controller uses full-state feedback and is given by: u(t) ¼ R1 B T P(t)x(t) ¼ K (t)x(t),
M(s) z
w
where P(t) is the solution of the matrix Riccati equation: FIGURE 3.4 Pulled Out
Simplified Closed-Loop System with Uncertainties
P_ (t) ¼ P(t)A AT P(t) þ P(t)BR 1 B T P(t) Q,
~ (s) is the block useful for robustness analysis. In Figure 3.3, D diagonal matrix of all the normalized uncertainty blocks ~ (s)k 1), and P o (s) is the augmented nominal plant, (kD 1 including the uncertainty frequency weights. P 0 (s) is assumed to be stabilized by K (s). Using a lower LFT, the bottom loop in Figure 3.3 can be closed resulting in Figure 3.4, where M(s) is proper and stable. If M(s) is partitioned corresponding to the two vector inputs and outputs and if kM11 k1 < 1, then the closed-loop system is robustly stable. These results that make use of the unstructured uncertainties lead to very conservative results. One way to reduce the conservativeness is to take advantage of the structure in the block diagonal matrix ~ (s) as done with the structured singular values. D
3.6 Linear Quadratic Regulator and Gaussian Control Problems 3.6.1 Linear Quadratic Regulator Formulation The linear quadratic regulator (LQR) is a classical optimal control problem used by many control engineers. Solutions to LQR are easy to compute and can typically be used to compute a baseline design useful for comparison. A formulation of the LQR problem considers the state equation of the plant: x_ (t) ¼ Ax(t) þ Bu(t):
(3:22)
The formulation also considers the following quadratic cost function of the states and control input: ðtf 1 T 1 T J ¼ x (tf )Sx(tf ) þ x (t)Qx(t) þ uT (t)Ru(t)dt, (3:23) 2 2 0
where S ¼ S T 0, Q ¼ Q T 0 and R ¼ RT > 0. To minimize the cost function, consider the Hamiltonian system with state and costate (p(t)) dynamics given by:
which is solved backward in time starting at P(tf ) ¼ S. The optimal feedback gain matrix K(t) is given by: K ¼ R1 B T P(t), and the optimal cost is as follows: 1 Jopt ¼ x T (0)P(0)x(0): 2
3.6.2 Linear Quadratic Gaussian Formulation
The linear quadratic Gaussian (LQG) control problem is an optimal control problem in which a quadratic cost function is minimized when the plant has random initial conditions, white noise disturbance input, and white measurement noise. The typical implementation of the LQR solution requires that the plant states be estimated, which can be posed as an LQG problem. The plant is described by the following state and output equations:

ẋ(t) = Ax(t) + Bu u(t) + Bw w(t).
ym(t) = Cm x(t) + v(t) (measurement output).     (3.25)
yp(t) = Cp x(t) (performance output).

The v(t) and w(t) are uncorrelated zero-mean Gaussian white noise processes with covariances satisfying:

E⟨[w(t); v(t)] [wᵀ(t + τ), vᵀ(t + τ)]⟩ = diag(W, V) δ(τ).

The quadratic cost function that is to be minimized is given by:

J = E[ (1/2) xᵀ(tf)S x(tf) + (1/2) ∫₀^tf (xᵀ(t)Q x(t) + uᵀ(t)R u(t)) dt ],     (3.26)
where S = Sᵀ ≥ 0, Q = Qᵀ ≥ 0, and R = Rᵀ > 0. The optimal controller is a full-state feedback controller and is given by:

u(t) = −K(t)x̂(t),

where x̂(t) is the Kalman state estimate. The closed-loop state equation can then be written as:

[ ẋ(t) ]          [ x(t) ]          [ w(t) ]
[ ė(t) ] = Acl(t) [ e(t) ] + Bcl(t) [ v(t) ] ,

where:

      [ A − Bu K(t)    Bu K(t)   ]         [ Bw     0    ]
Acl = [     0        A − G(t)Cm  ] ,  Bcl = [ Bw   −G(t) ] .

The closed-loop state covariance matrix satisfies:

dΣx̃(t)/dt = Acl(t)Σx̃(t) + Σx̃(t)Aclᵀ(t) + Bcl(t) diag(W, V) Bclᵀ(t),

where x̃(t) = [xᵀ(t), eᵀ(t)]ᵀ and:

          [ x(0)xᵀ(0)   x(0)eᵀ(0) ]
Σx̃(0) = E [ e(0)xᵀ(0)   e(0)eᵀ(0) ] .

The performance output covariance is given by:

Σyp(t) = [Cp  0] Σx̃(t) [Cp  0]ᵀ = Cp Σx(t) Cpᵀ,

and the input covariance is given by:

Σu(t) = [−K(t)  K(t)] Σx̃(t) [−K(t)  K(t)]ᵀ.

The cost is as follows:

J = (1/2) Tr{ [ S  0 ; 0  0 ] Σx̃(tf) + ∫₀^tf [ Q + Kᵀ(t)RK(t)   −Kᵀ(t)RK(t) ; −Kᵀ(t)RK(t)   Kᵀ(t)RK(t) ] Σx̃(t) dt }.

Remarks
For convenience of implementation, the steady-state solution obtained in the limit as tf → ∞ is used. The LQG control design is optimized to reject white noise disturbances; however, it can be modified to handle constant disturbances via feed-forward and integral control. The selection of gains in the feed-forward case can be done in a similar way as in the case of tracking system design. Prior to implementation, the robustness of LQG designs needs to be evaluated since there is no guarantee that any useful robustness will be obtained. For constant disturbance rejection via integral control, one needs to use an integral Kalman filter and integral state feedback. The integral LQG can be used both for disturbance rejection and tracking. Similar modifications are possible to handle tracking of time-varying reference inputs.

3.7 H∞ Control
The LQR and LQG problems can also be posed as two-norm optimization problems (referred to as H₂ control problems). If the optimization problem is posed using the H∞ norm as the cost function, the H∞ formulation results. The H∞ control problem can be defined in terms of the general closed-loop block diagram in Figure 3.2, where the exogenous signals are included in the vector v(s) and an appropriate choice of performance variables is given by the vector z(s). The TFM P(s) includes frequency-dependent weights and appropriate normalization as described previously. Consider the following realization of P(s):

ẋP(t) = A xP(t) + [B1  B2] [ w(t) ; yK(t) ],     (3.27)

[ z(t)  ]   [ C1 ]          [  0    D12 ] [ w(t)  ]
[ uK(t) ] = [ C2 ] xP(t) +  [ D21    0  ] [ yK(t) ] ,     (3.28)

where:

D21 B1ᵀ = 0.     (3.29)
D21 D21ᵀ = I.     (3.30)
D12ᵀ C1 = 0.     (3.31)
D12ᵀ D12 = I.     (3.32)

The control objective is to design a feedback controller that internally stabilizes the closed-loop system such that the ∞-norm of the mapping Tzw is bounded:

‖Tzw‖∞ = sup_{‖w(t)‖₂ ≠ 0} ‖z(t)‖₂ / ‖w(t)‖₂ < γ.     (3.33)

A suboptimal controller satisfying the above objective exists if positive semidefinite solutions to the following two Riccati equations exist:

−Ṗ(t) = P(t)A + AᵀP(t) − P(t)(B2B2ᵀ − γ⁻²B1B1ᵀ)P(t) + C1ᵀC1.     (3.34)

−Q̇(t) = AQ(t) + Q(t)Aᵀ − Q(t)(C2ᵀC2 − γ⁻²C1ᵀC1)Q(t) + B1B1ᵀ.     (3.35)

Moreover, the solutions P(t) and Q(t) satisfy the following:

ρ(P(t)Q(t)) < γ².     (3.36)
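The norm bound of equation 3.33 can be estimated by gridding the frequency axis. The fragment below evaluates an illustrative second-order Tzw (not from the text) and tests it against a candidate γ:

```python
import numpy as np

# Grid estimate of ||Tzw||_inf for the bound of equation 3.33. The
# closed-loop transfer function Tzw(s) = 1/(s^2 + 0.8 s + 1) is illustrative.

w = np.logspace(-2, 2, 4000)
Tzw = 1.0 / ((1j * w) ** 2 + 0.8 * (1j * w) + 1.0)
hinf = np.max(np.abs(Tzw))   # resonant peak near w = 0.82 rad/s

gamma = 1.5
print(hinf, hinf < gamma)    # the gamma = 1.5 bound is satisfied
```

In practice, γ is reduced iteratively (a γ-iteration) until the Riccati conditions of equations 3.34 through 3.36 fail, which locates the smallest achievable closed-loop norm.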
The Hamiltonian systems corresponding to the two Riccati equations can be obtained similarly to how they were obtained for the LQG case. The H∞ controller, which satisfies the bound of equation 3.33, is given by:

ẋc(t) = Ac(t)xc(t) + Bc(t)uK(t).
yK(t) = Cc(t)xc(t).     (3.37)

The matrices Ac(t), Bc(t), and Cc(t) are given as follows:

Ac(t) = A + γ⁻²B1B1ᵀP(t) − B2B2ᵀP(t) − [I − γ⁻²Q(t)P(t)]⁻¹Q(t)C2ᵀC2.
Bc(t) = [I − γ⁻²Q(t)P(t)]⁻¹Q(t)C2ᵀ.     (3.38)
Cc(t) = −B2ᵀP(t).

In practice, the steady-state solution of the H∞ control problem is often desired. The steady-state solution not only simplifies the controller implementation but also renders the closed-loop system time-invariant, simplifying the robustness and performance analysis. For the steady-state H∞ control problem, a suboptimal solution exists if and only if the following conditions are satisfied. The first condition is that the algebraic Riccati equation:

0 = PA + AᵀP − P(B2B2ᵀ − γ⁻²B1B1ᵀ)P + C1ᵀC1

has a positive semidefinite solution P such that [A − (B2B2ᵀ − γ⁻²B1B1ᵀ)P] is stable. The second condition is that the algebraic Riccati equation:

0 = AQ + QAᵀ − Q(C2ᵀC2 − γ⁻²C1ᵀC1)Q + B1B1ᵀ

has a positive semidefinite solution Q such that [A − Q(C2ᵀC2 − γ⁻²C1ᵀC1)] is stable. The third condition is ρ(PQ) < γ².

3.8 Passivity-Based Control
Passivity is an important property of dynamic systems. A large class of physical systems, such as flexible space structures with collocated and compatible actuators and sensors, can be classified as being naturally passive. A passive system can be robustly stabilized by any strictly passive controller despite unmodeled dynamics and parametric uncertainties. This important stability characteristic has attracted much attention of researchers in the control of passive systems. This section presents selected definitions and stability theorems for passive linear systems.

3.8.1 Passivity of Linear Systems
For finite-dimensional linear, time-invariant (LTI) systems, passivity is equivalent to positive realness of the transfer function (Newcomb, 1966; Desoer and Vidyasagar, 1975). The concept of strict positive realness has also been defined in the literature and is closely related to strict passivity. Let G(s) denote a p × p matrix whose elements are proper rational functions of the complex variable s. G(s) is said to be stable if all its elements are analytic in Re(s) ≥ 0. Let the conjugate transpose of a complex matrix H be denoted by H^H.

Definition 1: A p × p rational matrix G(s) is said to be positive real (PR) if:
- all elements of G(s) are analytic in Re(s) > 0, and
- G(s) + G^H(s) ≥ 0 in Re(s) > 0;
or, equivalently, if:
- all poles on the imaginary axis are simple and have nonnegative-definite residues, and
- G(jω) + G^H(jω) ≥ 0 for ω ∈ (−∞, ∞).

Various definitions of strictly positive real (SPR) systems are found in the literature (Kelkar and Joshi, 1996). Given below is the definition of one class of SPR systems: marginally strictly positive real (MSPR) systems.

Definition 2: A p × p rational matrix G(s) is said to be marginally strictly positive real (MSPR) if it is positive real and the following is true:

G(jω) + G^H(jω) > 0 for ω ∈ (−∞, ∞).

Definition 2 (Joshi and Gupta, 1996) gives the least restrictive class of SPR systems. If G(s) is MSPR, it can be expressed as G(s) = G1(s) + G2(s), where G2(s) is weakly SPR (WSPR) (Kelkar and Joshi, 1996) and where all the poles of G1(s) are purely imaginary (Joshi and Gupta, 1996).

3.8.2 State-Space Characterization of PR Systems
For LTI systems, the state-space characterization of positive real (PR) conditions results in the Kalman-Yakubovich-Popov (KYP) lemma. In Lozano-Leal and Joshi (1990), the KYP lemma was extended to WSPR systems; in Joshi and Gupta (1996), it was extended to MSPR systems. These extensions are given next. Let (A, B, C, D) denote an nth-order minimal realization of the p × p transfer function matrix G(s). The following lemma gives the state-space characterization of WSPR systems.

Lozano-Leal and Joshi (1990) Lemma: G(s) is WSPR if and only if there exist real matrices P = Pᵀ > 0, P ∈ R^(n×n), L ∈ R^(p×n), and W ∈ R^(p×p) such that:

AᵀP + PA = −LᵀL.
C = BᵀP + WᵀL.     (3.39)
WᵀW = D + Dᵀ.

In these equations, (A, B, L, W) is controllable and observable (i.e., minimal), and F(s) = W + L(sI − A)⁻¹B is minimum phase. If G(s) is MSPR, it can be expressed as G(s) = G1(s) + G2(s), where G2(s) is WSPR and all the poles of G1(s) are purely imaginary (Joshi and Gupta, 1996). Let (A2, B2, C2, D) denote an n2th-order minimal realization of G2(s), the stable part of G(s). The following lemma is an extension of the KYP lemma to the MSPR case.

Joshi and Gupta (1996) Lemma: If G(s) is MSPR, there exist real matrices P = Pᵀ > 0, P ∈ R^(n×n), L ∈ R^(p×n2), and W ∈ R^(p×p) such that equation 3.39 holds with:

P = Pᵀ > 0,   L = [0_(p×n1), L_(p×n2)],     (3.40)

where (A2, B2, L, W) is minimal and F(s) = W + L(sI − A)⁻¹B = W + L(sI − A2)⁻¹B2 is minimum phase.
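The frequency-domain PR condition of Definition 1 can be checked numerically by sampling G(jω) + G^H(jω) and testing that its smallest eigenvalue is nonnegative. The 2 × 2 diagonal transfer matrix below is an illustrative example, not from the text:

```python
import numpy as np

# Sampling the PR condition of Definition 1: G(jw) + G^H(jw) >= 0.
# The diagonal 2x2 transfer matrix below is an illustrative example.

def G(s):
    return np.array([[(s + 2.0) / (s + 1.0), 0.0],
                     [0.0, 1.0 / (s + 3.0)]])

ok = True
for w in np.logspace(-3, 3, 200):
    M = G(1j * w) + G(1j * w).conj().T          # Hermitian by construction
    ok = ok and np.min(np.linalg.eigvalsh(M)) >= -1e-12

print(ok)  # True: the sampled PR condition holds at every grid frequency
```

A frequency sweep like this is only a spot check; the state-space lemmas above give conditions that hold at all frequencies at once.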
3.8.3 Stability of PR Systems
The stability theorem for a feedback interconnection of a PR and an MSPR system is given next. First, an alternate form of the KYP lemma can be given in terms of the following linear matrix inequality (LMI).

LMI Form of the PR Lemma: A system (A, B, C, D) is said to be PR if it satisfies:

[ AᵀP + PA   PB ]   [ C  D ]ᵀ [ U   W ] [ C  D ]
[   BᵀP      0  ] + [ 0  I ]  [ Wᵀ  V ] [ 0  I ] < 0,     (3.41)

where:

U = 0,  V = 0,  and  W = −I.     (3.42)

This LMI condition is convenient for checking the PR-ness of MIMO systems. This LMI is a special case of the dissipativity LMI (Kelkar and Joshi, 1996).

Stability Theorem: The closed-loop system consisting of the negative feedback interconnection of Gp(s) and Gc(s) (Figure 3.1) is globally asymptotically stable if Gp(s) is PR, Gc(s) is MSPR, and none of the purely imaginary poles of Gc(s) is a transmission zero of Gp(s) (Joshi and Gupta, 1996). Note that in the theorem the systems Gp(s) and Gc(s) can be interchanged. Some nonlinear extensions of these results are obtained in Isidori et al. (1999).

Passivity-based controllers based on these fundamental stability results have proven to be highly effective in robustly controlling inherently passive linear and nonlinear systems. Most physical systems, however, are not inherently passive, and passivity-based control methods cannot be extended directly to such systems. For example, unstable systems and acoustic systems are not passive. One possible method of making these nonpassive systems amenable to passivity-based control is to passify them using suitable compensation. If the compensated system is ensured to be robustly passive despite plant uncertainties, it can be robustly stabilized by any MSPR controller. In Kelkar and Joshi (1997), various passification techniques are presented, and some numerical examples are given, demonstrating the use of such techniques. A brief review of these methods is given next.

FIGURE 3.5 Methods of Passification: (A) Series; (B) Feedback; (C) Feed-Forward; (D) Hybrid
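For a concrete check of the LMI, consider G(s) = 1/(s + 1) with A = −1, B = 1, C = 1, D = 0 and the candidate P = 1. This example is illustrative, and since it sits on the PR boundary, the nonstrict form of the inequality is tested:

```python
import numpy as np

# PR LMI check for G(s) = 1/(s+1) with A = -1, B = 1, C = 1, D = 0 and the
# candidate P = 1 (illustrative; the nonstrict inequality <= 0 is used since
# this example sits on the PR boundary). With U = 0, V = 0, W = -I, the LMI
# of equation 3.41 reduces to:
#   [[A'P + PA, PB - C'], [B'P - C, -(D + D')]] <= 0.

A, B, C, D = -1.0, 1.0, 1.0, 0.0
P = 1.0
lmi = np.array([[A * P + P * A, P * B - C],
                [B * P - C, -(D + D)]])
evals = np.linalg.eigvalsh(lmi)   # eigenvalues in ascending order
print(evals)                      # [-2.  0.] -> negative semidefinite, so G is PR
```

For MIMO systems, the same matrix inequality would be handed to a semidefinite programming solver rather than verified with a hand-picked P.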
3.8.4 Passification Methods
The four passification methods shown in Figure 3.5 (series, feedback, feed-forward, and hybrid passification) are given in Kelkar and Joshi (1997) for finite-dimensional, linear time-invariant, nonpassive systems. Once passified, the system can be controlled by any MSPR or weakly SPR (WSPR) controller (Isidori et al., 1999). In Figure 3.5, the system with input u(t) and output ym(t), denoted G(s), represents the passified system. The type of passification to be used depends on the dynamic characteristics of the unpassified plant. For example, a system having unstable poles will require feedback passification, whereas a system having nonminimum phase zeros (i.e., having unstable zero dynamics) will require feed-forward passification. Some systems may require a combination of the basic passification methods. For SISO systems, the passification process is easier than for MIMO systems: for SISO systems, only the phase plot needs to be checked to determine passivity, whereas for MIMO systems the KYP lemma conditions have to be checked. One numerical technique that can be used to check the KYP lemma is the LMI-based PR condition, which can be solved using the LMI toolbox in MATLAB or another semidefinite programming package. One important point is that, in the case of inherently passive systems, the use of an MSPR controller guarantees stability robustness to unmodeled dynamics and parametric uncertainties; however, in the case of nonpassive systems that are rendered passive using passifying compensation, stability robustness depends on the robustness of the passification. That is, the problem of robust stability is transformed into the problem of robust passification. In Kelkar and Joshi (1998), a number of sufficient conditions are derived to check the robustness of the passification.
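A minimal numerical illustration of feed-forward passification (the example system is not from the text): G(s) = (1 − s)/(1 + s) is stable but nonminimum phase and not PR, while adding a constant feed-forward term restores positive realness:

```python
import numpy as np

# Feed-forward passification sketch (example not from the text):
# G(s) = (1 - s)/(1 + s) is stable but nonminimum phase and not PR, since
# Re G(jw) = (1 - w^2)/(1 + w^2) approaches -1. Adding the constant
# feed-forward compensator Gff(s) = 1 makes Re[G(jw) + 1] >= 0 everywhere.

w = np.logspace(-3, 3, 500)
G = (1.0 - 1j * w) / (1.0 + 1j * w)
print(np.min(G.real))                 # close to -1: G alone is not PR
print(np.min((G + 1.0).real) >= 0.0)  # True: passified by the feed-forward term
```

The stability guarantee for the passified loop then rests on how robust this passification is to errors in the plant model, as discussed above.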
3.9 Conclusion
This chapter has presented some of the fundamental tools in the analysis and design of linear, time-invariant, continuous-time, robust, multivariable control systems. The chapter began with an introduction to modeling. The analysis tools include basic measures of performance, frequency response, and stability theorems. Linear quadratic, H∞, and passivity-based control synthesis techniques were also introduced.
References
Antsaklis, P.J., and Michel, A.N. (1997). Linear systems. New York: McGraw-Hill.
Desoer, C.A., and Vidyasagar, M. (1975). Feedback systems: Input-output properties. New York: Academic Press.
Gonzalez, O.R., and Antsaklis, P.J. (1991). Internal models in regulation, stabilization, and tracking. International Journal of Control 53(2), 411-430.
Green, M., and Limebeer, D.J.N. (1995). Linear robust control. Englewood Cliffs, NJ: Prentice Hall.
Isidori, A., Joshi, S.M., and Kelkar, A.G. (1999). Asymptotic stability of interconnected passive nonlinear systems. International Journal of Robust and Nonlinear Control 9, 261-273.
Joshi, S.M., and Gupta, S. (1996). On a class of marginally stable positive-real systems. IEEE Transactions on Automatic Control 41(1), 152-155.
Kelkar, A.G., and Joshi, S.M. (1996). Control of nonlinear multibody flexible space structures. Lecture notes in control and information sciences, Vol. 221. New York: Springer-Verlag.
Kelkar, A.G., and Joshi, S.M. (1997). Robust control of nonpassive systems via passification. Proceedings of the American Control Conference 5, 2657-2661.
Kelkar, A.G., and Joshi, S.M. (1998). Robust passification and control of nonpassive systems. Proceedings of the American Control Conference, 3133-3137.
Lozano-Leal, R., and Joshi, S.M. (1990). Strictly positive real functions revisited. IEEE Transactions on Automatic Control 35(11), 1243-1245.
Lozano-Leal, R., and Joshi, S.M. (1990). On the design of dissipative LQG-type controllers. In P. Dorato and R.K. Yedavalli (Eds.), Recent advances in robust control. New York: IEEE Press.
Maciejowski, J.M. (1989). Multivariable feedback design. Reading, MA: Addison-Wesley.
Newcomb, R.W. (1966). Linear multiport synthesis. New York: McGraw-Hill.
Schrader, C.B., and Sain, M.K. (1989). Research on system zeros: A survey. International Journal of Control 50(4), 1407-1433.
Skogestad, S., and Postlethwaite, I. (1996). Multivariable feedback control. West Sussex, England: John Wiley & Sons.
4 State Estimation Jay Farrell Department of Electrical Engineering, University of California, Riverside, California, USA
4.1 Introduction ............................................. 1049
4.2 State-Space Representations .............................. 1049
4.3 Recursive State Estimation ............................... 1050
4.4 State Estimator Design Approaches ........................ 1051
    4.4.1 Luenberger Observer . 4.4.2 Kalman Filter
4.5 Performance Analysis ..................................... 1053
    4.5.1 Error Budgeting . 4.5.2 Covariance Analysis
4.6 Implementation Issues .................................... 1054
    4.6.1 Minimizing Latency Due to Computation . 4.6.2 Scalar Processing . 4.6.3 Complementary Filtering
4.7 Example: Inertial Navigation System Error Estimation ..... 1056
4.8 Further Reading .......................................... 1058
References ................................................... 1058
4.1 Introduction
The state of a system is the minimal set of information required to completely summarize the status of the system at an initial time t0. Because the state is a complete summary of the status of the system, it is immaterial how the system managed to get to its state at t0. In addition, knowledge of the state at t0, of the inputs to the system for t > t0, and of the dynamics of the system allows the state to be determined for all t > t0. Therefore, the concept of state allows time to be divided into past (t < t0), present (t = t0), and future (t > t0), with the state at t0 summarizing the information from the past that is necessary (together with knowledge of the system inputs) to predict the future evolution of the system.

The goal of the control system designer is to specify the inputs to a dynamic system to force the system to perform some useful purpose. This task can be interpreted as forcing the system from its state at t0 to a desired state at a future time t1 > t0. Obtaining accurate knowledge of the state at t0 and for all t between t0 and t1 is often critical to the completion of this control objective. This process is referred to as state estimation.

4.2 State-Space Representations
For dynamic systems described by a finite number of ordinary differential equations, the state of the dynamic system can be represented as a vector x(t) in continuous time. For dynamic systems described by a finite number of ordinary difference equations, the state of the dynamic system can be represented as a vector x(k) in discrete time. In either case, x ∈ Rⁿ. The vector representation of the state allows numerous tools from linear algebra to be applied to the analysis and design of state estimation and control systems. The state-space model in continuous-time representation is the following:
ẋ(t) = F(t)x(t) + Gu u(t) + Gv v(t).
y(t) = H(t)x(t) + n(t).     (4.1)

The u ∈ R^m is the deterministic control input, v ∈ R^q is a stochastic (process noise) vector, y ∈ R^p is the measured output, and n ∈ R^p is a stochastic (measurement noise) vector. The process and measurement noise vectors are each assumed to be Gaussian white noise processes:
E⟨v(t)⟩ = 0,   E⟨v(t1)vᵀ(t2)⟩ = Q(t1)δ(t1 − t2).
E⟨n(t)⟩ = 0,   E⟨n(t1)nᵀ(t2)⟩ = R(t1)δ(t1 − t2).     (4.2)
The δ(t) is the Dirac delta function. The power spectral density matrices Q(t) and R(t) are symmetric positive definite for all t. State estimation may be desired for two reasons. First, when the measurement matrix H ≠ I, the state is not directly measured; however, knowledge of the full state may be desirable (e.g., for control). Second, even if H = I, it may be beneficial to filter the measurements to decrease the effects of the measurement noise. Filtering can be considered as a state estimation process.

State estimation and control processes are often implemented via a computer. Therefore, for implementation purposes, it is usually convenient to work with the discrete-time state-space representation:

x(k + 1) = Φx(k)x(k) + Γu u(k) + Γv vd(k).
y(k) = H(k)x(k) + n(k).     (4.3)

In equation 4.3, x(k) = x(kT), k is an integer, and T is a fixed sample period. If F, Gu, and Gv are time-invariant over the interval (kT, (k + 1)T], then Φx(k), Γu, Γv, and the statistics of the discrete-time Gaussian white noise process vd(k) can be determined so that the solutions to equation 4.1 at the sample times t = kT have the same first- and second-order statistics as the solution of equation 4.3. Both n(k) and vd(k) are discrete-time Gaussian white noise sequences with covariance matrices defined by:

E⟨n(k)nᵀ(j)⟩ = R(k) if k = j, and 0 otherwise;
E⟨vd(k)vdᵀ(j)⟩ = Qd(k) if k = j, and 0 otherwise.
The fact that linear operations on Gaussian random variables yield Gaussian random variables, together with the assumptions that u(k) is deterministic and that vd(k) and n(k) are Gaussian random variables, allows x(k) and y(k) to be modeled as Gaussian random variables. Therefore, the distributions of x(k) and y(k) are completely characterized by their first- and second-order statistics, which may be time-varying. The mean and variance of the state [i.e., x̄(k) = E⟨x(k)⟩ and P(k) = E⟨(x(k) − x̄(k))(x(k) − x̄(k))ᵀ⟩] can be propagated through time using the model of equation 4.3 as:

x̄(k + 1) = Φ(k)x̄(k) + Γu u(k).     (4.4)

P(k + 1) = Φ(k)P(k)Φᵀ(k) + Γv Qd(k)Γvᵀ.     (4.5)

The superscript T denotes a transpose. Note that when the initial conditions x̄(k0) and P(k0), the signal u(k), the noise statistics Qd(k), and the state-space model Φ(k), Γu, Γv are known for k ∈ [k0, kf], these equations allow propagation of the mean and variance of the state for all k ∈ [k0, kf]. At any time instant of interest, the mean and variance of the output (i.e., ȳ and Py) can be computed as:

ȳ(k) = H(k)x̄(k).     (4.6)
Py(k) = H(k)P(k)Hᵀ(k) + R(k).     (4.7)
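Equations 4.4 through 4.7 can be exercised directly. The sketch below propagates the mean and covariance of an illustrative two-state system (all numerical values are hypothetical, not from the text):

```python
import numpy as np

# Propagating state mean and covariance (equations 4.4 and 4.5) and forming
# the output statistics (equations 4.6 and 4.7). All values are illustrative.

Phi = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition matrix
Gam_u = np.array([[0.005], [0.1]])         # deterministic input matrix
Gam_v = np.array([[0.0], [1.0]])           # process noise input matrix
H = np.array([[1.0, 0.0]])                 # measurement matrix
Qd = np.array([[0.01]])                    # discrete process noise covariance
R = np.array([[0.04]])                     # measurement noise covariance

x_bar = np.array([[0.0], [1.0]])           # initial mean
P = np.eye(2)                              # initial covariance
u = np.array([[1.0]])                      # constant input

for _ in range(10):
    x_bar = Phi @ x_bar + Gam_u @ u                 # equation 4.4
    P = Phi @ P @ Phi.T + Gam_v @ Qd @ Gam_v.T      # equation 4.5

y_bar = H @ x_bar                                   # equation 4.6
P_y = H @ P @ H.T + R                               # equation 4.7
print(y_bar[0, 0], P_y[0, 0])
```

Note that the covariance grows without bound here because no measurements are incorporated; the next section introduces the measurement update that counteracts this growth.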
4.3 Recursive State Estimation
Equations 4.4 through 4.5 provide the mean and the covariance of the state through time based only on the initial mean state vector and its error covariance matrix. When measurements y(k) are available, they can be used to improve the accuracy of an estimate of the state vector at time k. The symbols x̂⁻(k) and x̂⁺(k) are used to denote the estimate of x(k) before and after incorporating the measurement, respectively. Similarly, the symbols Px̂⁻(k) and Px̂⁺(k) are used to denote the error covariance matrices corresponding to x̂⁻(k) and x̂⁺(k), respectively. This section presents the time propagation and measurement update equations for both the state estimate and its error covariance. The equations are presented in a form that is valid for any linear unbiased measurement correction. These equations contain a gain matrix K that determines the estimator performance. The choice of K to minimize the error covariance Px̂⁺(k) will be of interest.

For an unbiased linear measurement, the update will have the form:

x̂⁺(k) = x̂⁻(k) + K(k)(y(k) − ŷ⁻(k)),     (4.8)

where ŷ⁻(k) = H(k)x̂⁻(k). The error covariance of x̂⁺(k) is the following:

Px̂⁺(k) = (I − K(k)H(k))Px̂⁻(k)(I − K(k)H(k))ᵀ + K(k)R(k)Kᵀ(k).     (4.9)

K(k) is a possibly time-varying state estimation gain to be designed. If no measurement is available at time k, then K(k) = 0, which yields x̂⁺(k) = x̂⁻(k) and Px̂⁺(k) = Px̂⁻(k). If a measurement is available and the state estimator is designed well, then Px̂⁺(k) ≤ Px̂⁻(k). In either case, the time propagation of the state estimate and its error covariance matrix is achieved by:

x̂⁻(k + 1) = Φ(k)x̂⁺(k) + Γ̂u u(k).     (4.10)

Px̂⁻(k + 1) = Φ(k)Px̂⁺(k)Φᵀ(k) + Γv Qd(k)Γvᵀ.     (4.11)
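The measurement update of equations 4.8 and 4.9 is valid for any gain K; the sketch below (illustrative values, not from the text) compares K = 0 against the covariance-minimizing gain developed in Section 4.4.2:

```python
import numpy as np

# Measurement update (equations 4.8 and 4.9) in the Joseph form, which is
# valid for ANY gain K. Comparing K = 0 against the trace-minimizing gain
# shows how a good gain shrinks the error covariance. Values illustrative.

P_minus = np.diag([4.0, 1.0])     # prior covariance
H = np.array([[1.0, 0.0]])        # scalar measurement of the first state
R = np.array([[1.0]])             # measurement noise covariance

def P_plus(K):
    A = np.eye(2) - K @ H
    return A @ P_minus @ A.T + K @ R @ K.T      # equation 4.9

K_zero = np.zeros((2, 1))                       # measurement ignored
K_opt = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)

t0 = np.trace(P_plus(K_zero))   # = trace(P_minus) = 5.0
t1 = np.trace(P_plus(K_opt))    # = 1.8, a large reduction
print(t0, t1)
```

Only the uncertainty of the measured state (and states correlated with it) shrinks; the second state, unobserved and uncorrelated here, keeps its prior variance.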
At least two issues are of interest relative to the state estimation problem. First, does there exist a state estimation gain vector K(k) such that x̂ is guaranteed to converge to x regardless of the initial condition and the sequence u(k)? Second, how should the designer select the gain vector K(k)? The first issue raises the question of observability. A linear time-invariant system is observable if the following matrix has rank n:

[Hᵀ, ΦᵀHᵀ, ..., (Φᵀ)ⁿ⁻¹Hᵀ].

When a system is observable, it is guaranteed that a stabilizing gain vector K exists. Assuming that the system of interest is observable, the remainder of this chapter discusses the design and analysis of state estimators.

Figure 4.1 portrays the state estimator in conjunction with the system of interest. The system of interest is depicted in the upper left. The state estimator is superimposed on a gray background in the lower right. This interconnected system will be referred to as the state estimation system. The figure motivates several important comments. First, although the state estimator has only n states, the state estimation system has 2n states. Second, the inputs to the state estimation system are the deterministic input u and the stochastic inputs v and n. Third, the inputs to the state estimator are the deterministic input u and the measured plant output y. The state-space model for the state estimation system is the following:

[ x(k+1)  ]   [ Φx      0     ] [ x(k)  ]   [ Γu ]          [ Γv  0 ] [ v(k) ]
[ x̂⁻(k+1) ] = [ LHx   Φ − LH  ] [ x̂⁻(k) ] + [ Γ̂u ] u(k) +  [ 0   L ] [ n(k) ] ,     (4.12)

where L = ΦK. Based on this state-space model, with the assumption that the system is time-invariant, the transfer function from n to ŷ⁻ is:

Gn(z) = H[zI − (Φ − LH)]⁻¹L,     (4.13)

where z is the discrete-time unit advance operator. Assuming that H = Hx, the transfer function from u to the residual r is:

Gu(z) = H[zI − (Φ − LH)]⁻¹[(zI − Φ)(zI − Φx)⁻¹Γu − (zI − Φx)(zI − Φx)⁻¹Γ̂u].     (4.14)

Therefore, if Γu = Γ̂u and Φx = Φ, then this transfer function is identically zero. Assuming again that H = Hx, the transfer function from v to r is:

Gv(z) = H[zI − (Φ − LH)]⁻¹[zI − Φ][zI − Φx]⁻¹Γv.     (4.15)

In the special case where, in addition, Φx = Φ, the transfer function Gv(z) has n identical poles and zeros. This transfer function is often stated as:

Gv(z) = H[zI − (Φ − LH)]⁻¹Γv,     (4.16)

where n pole-zero cancellations have occurred. These pole-zero cancellations, and therefore the validity of equation 4.16, depend on the exact modeling assumptions and the stability of the canceled poles.

FIGURE 4.1 State Estimation System Block Diagram

4.4 State Estimator Design Approaches
Two categories of design approaches are commonly discussed: Luenberger observers and Kalman filters.
4.4.1 Luenberger Observer

The time propagation and measurement update equations 4.8 through 4.11 can be combined. The combined equations for the estimate and its error covariance prior to the measurement correction at time k+1 are as follows:

x̂⁻(k+1) = F(I − KH) x̂⁻(k) + G_u u(k) + FK y(k).   (4.17)

P⁻_x̂(k+1) = F[(I − KH) P⁻_x̂(k) (I − KH)ᵀ + KRKᵀ]Fᵀ + G_v Q_d G_vᵀ.   (4.18)

The time indices have been dropped for convenience of notation. The combined equations for the estimate and its error covariance posterior to the measurement update are the following:

x̂⁺(k+1) = (I − KH)F x̂⁺(k) + (I − KH)G_u u(k) + K y(k+1).   (4.19)

P⁺_x̂(k+1) = (I − KH)[F P⁺_x̂(k) Fᵀ + G_v Q_d G_vᵀ](I − KH)ᵀ + KRKᵀ.   (4.20)

Note that even if the system is time invariant (i.e., F, K, H, G_u, G_v all constant), both the state estimate and its error covariance matrix still change with time. Assuming perfect modeling (i.e., F = F_x, etc.) and a time-invariant system, the discrete-time dynamics of the state estimation error x̃ = x − x̂ can be computed using equations 4.3 and 4.17:

x̃(k+1) = F x̃(k) + G_v v(k) − FK (y(k) − ŷ(k))   (4.21)
        = F(I − KH) x̃(k) + G_v v(k) − FK n(k).   (4.22)
Equation 4.22 shows that stability of the state estimate requires that the eigenvalues of (F − LH) have a magnitude less than one, where L = FK. The Luenberger observer uses a time-invariant gain vector L to place the eigenvalues of the observer state transition matrix (F − LH) at specified locations in the unit circle |z| < 1. When the output y is a scalar, there exists a unique L that will place the n eigenvalues at their specified locations. When y is a vector of measurements, then portions of the eigenvectors of (F − LH) can also be specified by the designer. The observer pole placement problem is dual to the controller pole placement problem.
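Because of this duality, an observer gain can be computed with a standard controller pole-placement routine applied to the transposed system: eig(F − LH) = eig(Fᵀ − HᵀLᵀ). A minimal sketch using SciPy (the system matrices and pole locations are illustrative assumptions):

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative observable discrete-time pair (F, H).
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])

# Duality: place the poles of (F^T - H^T L^T), then transpose the gain.
desired = [0.5, 0.6]                     # inside the unit circle |z| < 1
res = place_poles(F.T, H.T, desired)
L = res.gain_matrix.T                    # observer gain, shape (n, 1)

print(np.sort(np.linalg.eigvals(F - L @ H)))   # ≈ [0.5, 0.6]
```

The resulting (F − LH) has exactly the requested eigenvalues, confirming the observer/controller duality stated above.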
4.4.2 Kalman Filter

Equations 4.8 and 4.9 illustrate the trade-offs that are implicit in the selection of the gain matrix K. The state estimation error after the measurement has two contributions. The first is the state estimation error propagated from the previous time instant:

(I − K(k)H(k)) P⁻_x̂(k) (I − K(k)H(k))ᵀ.

The second is the state estimation error due to measurement noise at the current time:

K(k) R Kᵀ(k).

The Kalman filter algorithm produces the time-varying gain sequence K(k) that minimizes trace(P⁺_x̂(k)) at every measurement instant k. (If D = [d_ij] is an n × n matrix, then trace(D) = Σᵢ₌₁ⁿ dᵢᵢ.) The Kalman filter gain sequence is computed as:

K(k) = P⁻_x̂(k) H(k)ᵀ [H(k) P⁻_x̂(k) H(k)ᵀ + R(k)]⁻¹.   (4.23)

For a time-varying linear system driven by Gaussian white noise, the Kalman filter gain sequence can be derived as the optimal estimator from the least-squares, maximum-likelihood, and mean-squares perspectives. The estimator discussion of this chapter was initiated with the assumed "linear update" of equation 4.8. This assumption is not restrictive in the sense that it can be shown, for linear systems with additive Gaussian white measurement and process noise, that a linear measurement update is optimal (i.e., a nonlinear estimator will not achieve better performance). The Kalman filter also has the property that the measurement residual r(k) is a white noise sequence.

Even if the system is time invariant (i.e., F, G_u, G_v, H, R, and Q constant), the Kalman filter gain may be time varying. When a steady-state Kalman filter exists, the steady-state Kalman filter gain is:

K = P_ss Hᵀ [H P_ss Hᵀ + R]⁻¹,   (4.24)

where P_ss is the steady-state solution of equation 4.18, which is the discrete-time algebraic Riccati equation:

P_ss = F (P_ss − P_ss Hᵀ [H P_ss Hᵀ + R]⁻¹ H P_ss) Fᵀ + G_v Q_d G_vᵀ.   (4.25)

This form of the equation is particular to the Kalman filter and does not hold for general estimators. In steady state, P⁻_x̂ is constant with value P_ss, and P⁺_x̂ is also constant with value (I − KH)P_ss. Note that P⁻_x̂ and P⁺_x̂ are not equal in steady state.

Let ẑ(k) = C x̂(k) be a linear combination of the states that is of interest. When the steady-state Kalman filter exists, the transfer function from y to ẑ is the following:

C[zI − F(I − KH)]⁻¹ K.   (4.26)
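When the steady-state gain exists, it can be approximated by simply iterating the covariance recursion of equation 4.18 to convergence; a minimal sketch with assumed system matrices:

```python
import numpy as np

# Illustrative scalar-output system; all numeric values are assumptions.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Gv = np.array([[0.5], [1.0]])
H = np.array([[1.0, 0.0]])
Qd = np.array([[0.1]])           # process noise covariance
R = np.array([[1.0]])            # measurement noise covariance

P = np.eye(2)                    # prior covariance P-; any PSD start works
for _ in range(500):             # iterate equation 4.18 to steady state
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # equation 4.23
    Pp = (np.eye(2) - K @ H) @ P @ (np.eye(2) - K @ H).T + K @ R @ K.T
    P = F @ Pp @ F.T + Gv @ Qd @ Gv.T                       # next P-

K_ss = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)             # equation 4.24
# Residual of the DARE, equation 4.25 (should be ~0 at steady state).
res = F @ (P - P @ H.T @ np.linalg.inv(H @ P @ H.T + R) @ H @ P) @ F.T \
      + Gv @ Qd @ Gv.T - P
print(np.max(np.abs(res)) < 1e-9)    # True
```

In practice a dedicated discrete algebraic Riccati solver would be used instead of fixed-point iteration, but the iteration makes the connection between equations 4.18 and 4.25 explicit.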
4 State Estimation
Equation 4.26 is the infinite impulse response Wiener filter for estimating z from y. Finite impulse response Wiener filters could be constructed from the Markov parameters of this steady-state Kalman filter.
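Such an FIR truncation can be sketched directly from the Markov parameters of equation 4.26; the matrices and truncation length below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Truncate the IIR filter C[zI - F(I-KH)]^-1 K to an FIR filter.
F = np.array([[0.9, 0.1],
              [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
K = np.array([[0.4], [0.2]])
C = np.array([[0.0, 1.0]])

A = F @ (np.eye(2) - K @ H)      # closed-loop estimator dynamics F(I - KH)
# Markov parameters h_i = C A^(i-1) K of the transfer function from y to z-hat.
taps = [(C @ np.linalg.matrix_power(A, i) @ K).item() for i in range(20)]

def fir_zhat(y_hist):
    """FIR estimate of z(k) from y(k-1), y(k-2), ...; y_hist[0] is newest."""
    return sum(h * y for h, y in zip(taps, y_hist))

print(abs(taps[-1]) < abs(taps[0]))   # True: taps decay because |eig(A)| < 1
```

The truncation is justified exactly when the steady-state filter is stable, so the Markov parameters decay geometrically.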
4.5 Performance Analysis

This chapter has presented two methods for the design of state estimators; however, the task is not complete. For example, the Kalman filter methodology would require the designer to develop and use a full stochastic model of the system, process noise, actuators, and sensors. Because the Kalman filter approach assumes a linear model driven by white noise, any colored noise process will necessitate the augmentation of additional states to the model so that the colored noise process is represented as a linear system with white noise inputs. The construction of such models is not straightforward. Often, due to data and experimental limitations, the modeling process does not yield a single exact model but a range of possible models. The full model may also be too complex to allow a full Kalman filter implementation with online covariance propagation. Even when the full implementation is possible, additional analysis may show that nearly equivalent and possibly more robust performance can be achieved with fewer computations for an implementation using a reduced-order model. The goal of this section is to present methods for the analysis of suboptimal or reduced-order filters. The idea of the analysis approach is not difficult, but the implementation of computationally efficient algorithms can be time consuming. The interested reader should consult the references at the end of this chapter.

4.5.1 Error Budgeting

Error budgeting is a methodology for determining how much each error source contributes to the overall estimation error. The methodology uses covariance propagation equations 4.9 and 4.11, which are repeated below for convenience:

P⁺_x̂(k) = (I − K(k)H(k)) P⁻_x̂(k) (I − K(k)H(k))ᵀ + K(k) R Kᵀ(k).   (4.27)

P⁻_x̂(k+1) = F(k) P⁺_x̂(k) Fᵀ(k) + G_v Q_d(k) G_vᵀ.   (4.28)

Note that if the gain sequence K(k) is known, then equations 4.27 and 4.28 are linear in P, Q_d, and R. Therefore, the superposition principle can be applied. The error budgeting procedure can therefore be divided into three steps, where the second step has several subparts. The error budgeting approach is presented here assuming that the Kalman filtering approach is used and that the system model is known and correct. Suboptimal filtering is discussed in the next subsection.

Assuming that F, P_x̂(0), G_v, Q_d, H, and R are known for each k, the procedure is as follows:

Step 1: Iterate equations 4.27 and 4.28 for the time duration of interest. Compute K(k) over the time interval using equation 4.23. This gain sequence is stored and used in each repetition of step 2. It is critical that the same gain sequence be used throughout step 2.

Step 2: Divide the error sources into q mutually exclusive subgroups. For simplicity of this discussion, let q = 3 and the subgroups be (1) P_x̂(0), (2) Q_d, and (3) R. In general, each component of P_x̂(0), Q_d, and R could be in its own subgroup. The main constraint is that each error component must be in exactly one subgroup. Equations 4.27 and 4.28 are now iterated over the time duration of interest, for q repetitions. Each repetition uses the estimator gain sequence determined in step 1. For the i-th repetition, (q − 1) of the error-source subgroups are set to zero, with only the i-th subgroup of error sources being nonzero. Denote the sequence of covariance matrices that results from the i-th repetition as P⁻_x̂(k, i) and P⁺_x̂(k, i).

Step 3: The covariance matrices P⁻_x̂(k, i) and P⁺_x̂(k, i) (usually just the diagonal elements) are compared to identify which of the q error sources is dominant at any time k. Identification of the dominant sources allows the sensor or model development efforts to focus on the most effective directions. Identification of insignificant error sources motivates areas of possible cost savings through either the elimination of sensors or the purchase of lower cost sensors.

In step 1, it is useful as a check to also save the covariance matrices. The check is that the sum of the covariance matrices from each repetition of step 2 should equal the corresponding covariance matrix from step 1:

P⁻_x̂(k) = Σᵢ₌₁q P⁻_x̂(k, i)   and   P⁺_x̂(k) = Σᵢ₌₁q P⁺_x̂(k, i),   (4.29)

for all k in the time span of interest.
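The three-step procedure can be sketched compactly; the scalar model and all numeric values below are illustrative assumptions:

```python
import numpy as np

# Error-budget sketch (Section 4.5.1) for a scalar random-walk state.
F = H = I = np.eye(1)
Qd, R, P0 = 0.04 * I, 1.0 * I, 2.0 * I
N = 50

def covariance_run(P0, Qd, R, gains):
    """Iterate equations 4.27 and 4.28 with a fixed gain sequence."""
    P, hist = P0.copy(), []
    for K in gains:
        Pp = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T   # eq 4.27
        hist.append(Pp)
        P = F @ Pp @ F.T + Qd                                # eq 4.28
    return hist

# Step 1: full model; compute and store the Kalman gain sequence (eq 4.23).
gains, P = [], P0.copy()
for _ in range(N):
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    gains.append(K)
    Pp = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T
    P = F @ Pp @ F.T + Qd
full = covariance_run(P0, Qd, R, gains)

# Step 2: one repetition per error subgroup, reusing the same gain sequence.
budget = {
    "P(0)": covariance_run(P0, 0 * I, 0 * I, gains),
    "Qd":   covariance_run(0 * I, Qd, 0 * I, gains),
    "R":    covariance_run(0 * I, 0 * I, R, gains),
}

# Step 3 / equation 4.29: per-source covariances sum to the full covariance.
total = sum(b[-1] for b in budget.values())
print(np.allclose(total, full[-1]))   # True
```

Because equations 4.27 and 4.28 are linear in P, Q_d, and R for a fixed gain sequence, the superposition check of equation 4.29 holds exactly.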
4.5.2 Covariance Analysis Covariance analysis is concerned with the iteration of equations 4.27 and 4.28 to characterize the expected filter performance. These equations are applicable for any estimator gain sequence. The error budgeting analysis of the previous section is a form of covariance analysis. A second form of covariance analysis is the iteration of the same equations for suboptimal gain sequences. Suboptimal gain sequences are often of interest to avoid the need for online computation of the error covariance equations, which are typically the most computationally expensive portion of the Kalman algorithm. Examples of suboptimal gain sequences
are the Luenberger gain, the steady-state Kalman filter gain, or a curve fit to the time-varying Kalman gain sequence. Iteration of equations 4.27 and 4.28 once for the optimal gain sequence and once for a suboptimal gain sequence allows analysis of the amount of performance degradation expected from the suboptimal approach. Note that this paragraph has only considered the case where the design model matches the actual system, but a suboptimal gain sequence is used. The case where the filter model is distinct from the system model is discussed in the remainder of this section. Consider a system described by:

x(k+1) = F_x x(k) + G_v v_d(k)
y(k) = H_x(k) x(k) + n(k),   (4.30)

and an estimator design model described by:

x̂(k+1) = F x̂(k) + G v̂_d(k)
ŷ(k) = H(k) x̂(k) + n̂(k).   (4.31)

The time propagation and measurement update of the state estimate are as follows:

x̂⁻(k+1) = F x̂⁺(k)
x̂⁺(k) = x̂⁻(k) + K̂ (y(k) − ŷ⁻(k)),   (4.32)
where P(0) and the properties of n̂ and v̂_d are accounted for in the design of the estimator gain sequence K̂(k). The remainder of this section is concerned with calculation of the error variance of z = (Vx − x̂) in the actual implemented system, where the design model of the estimator does not match the "truth model" for the actual system. In this model structure, dim(x) is usually larger than dim(x̂). The matrix V is defined to select the appropriate linear combination of the actual system states that correspond to the estimator states. To analyze the performance of the coupled system, the joint state space representation for the system of equation 4.30 and the implemented estimator of equation 4.32 will be required:

x⁻(k+1) = F_x(k) x⁺(k) + G_v(k) v_d(k)
x̂⁻(k+1) = F(k) x̂⁺(k).   (4.33)

z(k) = [V, −I] [x(k); x̂(k)].   (4.34)

y⁻(k) = [H_x(k), 0] [x⁻(k); x̂⁻(k)] + n(k).   (4.35)

ŷ⁻(k) = [0, H(k)] [x⁻(k); x̂⁻(k)].   (4.36)

K(k) = [0; K̂(k)].   (4.37)

[x⁺(k); x̂⁺(k)] = [x⁻(k); x̂⁻(k)] + K(k)(y⁻(k) − ŷ⁻(k)).   (4.38)
This expression represents the actual time propagation of the real system and the estimate. Since the estimate is calculated exactly (quantization error can be modeled by a second additive noise source affecting the estimate but is dropped in this analysis for convenience; in floating-point processors, the effect is small), no process driving noise is represented in the corresponding rows of equation 4.33. In addition, the zeros concatenated into the estimation gain vector account for the fact that the estimator corrections do not affect the state of the actual system. For this coupled system, the covariance propagates between sampling times according to:

P⁻₁₁(k+1) = F_x P⁺₁₁ F_xᵀ + G_v Q_d G_vᵀ.   (4.39)

P⁻₁₂(k+1) = F_x P⁺₁₂ Fᵀ.   (4.40)

P⁻₂₂(k+1) = F P⁺₂₂ Fᵀ.   (4.41)
The system is altered by measurement updates according to:

P⁺₁₁(k) = P⁻₁₁.   (4.42)

P⁺₁₂(k) = P⁻₁₂ (I − HᵀK̂ᵀ) + P⁻₁₁ H_xᵀ K̂ᵀ.   (4.43)

P⁺₂₂(k) = (I − K̂H) P⁻₂₂ (I − HᵀK̂ᵀ) + K̂ H_x P⁻₁₁ H_xᵀ K̂ᵀ + K̂ R K̂ᵀ
          + K̂ H_x P⁻₁₂ (I − HᵀK̂ᵀ) + (I − K̂H) P⁻₂₁ H_xᵀ K̂ᵀ.   (4.44)

In equations 4.39 through 4.44, P₁₁ = cov(x, x), P₂₁ᵀ = P₁₂ = cov(x, x̂), and P₂₂ = cov(x̂, x̂). The error variance of the variable of interest, z, is defined at any instant by:

cov(z) = P_z = V P₁₁ Vᵀ + P₂₂ − V P₁₂ − P₂₁ Vᵀ.   (4.45)

Because the matrix P_z depends on K, the covariance analysis can be repeated for different gain sequences to compare the predicted performance of the alternative implementations. For given gain sequences, the covariance analysis can be repeated for different values of F_x, Q_d, and R to determine the sensitivity of the performance to the design assumptions.
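Equations 4.39 through 4.45 can be sketched directly. In the example below, the truth model, filter model, gain, and selection matrix V are all illustrative assumptions: a 2-state truth model (a state plus a slowly decaying bias) is estimated with a 1-state filter model that ignores the bias.

```python
import numpy as np

Fx = np.array([[1.0, 1.0],
               [0.0, 0.99]])         # truth: state driven by a slow bias
Gv = np.array([[1.0], [0.0]])
Hx = np.array([[1.0, 0.0]])
Qd = np.array([[0.01]]); R = np.array([[0.1]])

Ff = np.array([[1.0]])               # filter design model ignores the bias
Hf = np.array([[1.0]])
Khat = np.array([[0.5]])             # assumed fixed suboptimal gain
V = np.array([[1.0, 0.0]])           # selects the truth state matching x-hat

P11 = np.eye(2); P12 = np.zeros((2, 1)); P22 = np.zeros((1, 1))
for _ in range(200):
    # Measurement update, equations 4.42 through 4.44.
    IKH = np.eye(1) - Khat @ Hf
    P12p = P12 @ IKH.T + P11 @ Hx.T @ Khat.T
    P22p = (IKH @ P22 @ IKH.T + Khat @ (Hx @ P11 @ Hx.T + R) @ Khat.T
            + Khat @ Hx @ P12 @ IKH.T + IKH @ P12.T @ Hx.T @ Khat.T)
    P11p = P11
    # Time propagation, equations 4.39 through 4.41.
    P11 = Fx @ P11p @ Fx.T + Gv @ Qd @ Gv.T
    P12 = Fx @ P12p @ Ff.T
    P22 = Ff @ P22p @ Ff.T

# Equation 4.45: variance of z = V x - x-hat.
Pz = V @ P11 @ V.T + P22 - V @ P12 - P12.T @ V.T
print(Pz.item() >= 0.0)   # True: Pz is a variance
```

Repeating this loop for different gains K̂ or different truth-model parameters is exactly the sensitivity study described in the paragraph above.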
4.6 Implementation Issues

4.6.1 Minimizing Latency Due to Computation

Because the Kalman filter covariance equations and gain computation do not depend on the data, the equations can be reorganized to minimize the latency between the time that the measurements become available and the time that the measurement-corrected state estimates are generated. Assume that F(k), H(k+1), Q_d(k), and R(k+1) are known at time k. Also, assume that x̂⁺(k) has just been computed and posted at time k:

1. Compute the error covariance corresponding to x̂⁺(k) using equation 4.9:
   P⁺(k) = [I − K(k)H(k)] P⁻(k) [I − K(k)H(k)]ᵀ + K(k) R(k) K(k)ᵀ.
2. Propagate the state estimate covariance to the next time instant using equation 4.11:
   P⁻(k+1) = F(k) P⁺(k) F(k)ᵀ + G_v Q_d(k) G_vᵀ.
3. Precompute the Kalman gain vector for the upcoming measurement using equation 4.23:
   K(k+1) = P⁻(k+1) H(k+1)ᵀ [H(k+1) P⁻(k+1) H(k+1)ᵀ + R(k+1)]⁻¹.
4. As soon as u(k) is available, propagate the state estimate to the next time instant using equation 4.10:
   x̂⁻(k+1) = F(k) x̂⁺(k) + G_u u(k).
5. When y(k+1) becomes available, the only computation required for the corrected state estimate is as follows:
   x̂⁺(k+1) = x̂⁻(k+1) + K(k+1)[y(k+1) − H(k+1) x̂⁻(k+1)].

Step 5 is completed with high priority. Immediately after completion of step 5, x̂⁺(k+1) is posted for use by other systems. Then, k is incremented by 1, and as CPU time becomes available, steps 1 through 4 are completed as low-priority processes, as long as they are complete before y(k+1) becomes available (again). In step 1, equation 4.9 is used because the equation is applicable to any estimation gain vector, while the alternative form P⁺(k) = [I − K(k)H(k)]P⁻(k) is applicable only for the Kalman gain vector. Also, equation 4.9 has better numeric properties. Note that the portions of step 5 involving x̂⁻(k+1) could also be precomputed to allow additional latency reduction. If F(k), H(k), Q_d(k), and R(k) can be determined off-line, then steps 1 through 3 can be computed off-line, so that only steps 4 and 5 occur online. If a Luenberger estimator is used, steps 1 and 2 are eliminated, and K = L is fixed; therefore, only the time propagation and measurement updates of steps 4 and 5 are required online.

4.6.2 Scalar Processing

The Kalman filter algorithms have been presented, for notational convenience, in a vector form with m measurements. The portion of the measurement update that requires the most computing operations (i.e., flops) is the covariance update and gain vector calculation. For example, the following standard algorithms can be programmed to require ((3/2)n²m + (3/2)nm² + nm + m³ + (1/2)m² + (1/2)m) flops:

K = P⁻ Hᵀ (R + H P⁻ Hᵀ)⁻¹.   (4.46)

P⁺ = (I − KH) P⁻.   (4.47)

Alternatively, when the measurements are independent, the measurements can be equivalently treated as m sequential measurements with a zero-width time interval between measurements. Define the following:

P⁻₁ = P⁻(k),   x̂⁻₁ = x̂⁻(k),   H = [H₁; . . . ; H_m],   R = diag(R₁, . . . , R_m).   (4.48)

Then, the equivalent scalar measurement processing algorithm is, for i = 1 to m:

Kᵢ = P⁻ᵢ Hᵢᵀ / (Rᵢ + Hᵢ P⁻ᵢ Hᵢᵀ).   (4.49)

x̂ᵢ₊₁ = x̂ᵢ + Kᵢ (yᵢ − Hᵢ x̂ᵢ).   (4.50)

Pᵢ₊₁ = (I − Kᵢ Hᵢ) Pᵢ.   (4.51)

In addition, the state and error covariance matrix posterior to the set of m measurements are defined by:

x̂⁺(k) = x̂ₘ₊₁.   (4.52)

P⁺(k) = Pₘ₊₁.   (4.53)

The total number of computations for the m scalar measurement updates is m((3/2)n² + (5/2)n) flops plus m scalar divisions. Thus, it can be seen that the m scalar updates are computationally cheaper for any m. At the completion of the m scalar measurements, x̂⁺(k) and P⁺(k) will be identical to the values that would have been computed by the corresponding vector measurement update. The state feedback measurement vectors Kᵢ corresponding to the scalar updates are not equal to the columns of the state feedback gain matrix that would result from the corresponding vector update. This is due to the different ordering of the updates affecting the error covariance matrix Pᵢ at the intermediate steps during the scalar updates.
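The equivalence of the vector update (equations 4.46 and 4.47) and the m scalar updates (equations 4.49 through 4.51) can be checked directly; the system below uses assumed random matrices with a diagonal (independent) R:

```python
import numpy as np

n, m = 3, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
P = A @ A.T + n * np.eye(n)            # prior covariance P-(k), SPD
H = rng.standard_normal((m, n))
R = np.diag([0.5, 2.0])                # independent measurements
x = rng.standard_normal(n)             # prior estimate x-(k)
y = rng.standard_normal(m)             # measurement vector

# Vector update, equations 4.46 and 4.47.
K = P @ H.T @ np.linalg.inv(R + H @ P @ H.T)
x_vec = x + K @ (y - H @ x)
P_vec = (np.eye(n) - K @ H) @ P

# Equivalent m scalar updates, equations 4.49 through 4.51.
x_s, P_s = x.copy(), P.copy()
for i in range(m):
    Hi = H[i:i+1, :]                                  # 1 x n row of H
    Ki = (P_s @ Hi.T) / (R[i, i] + Hi @ P_s @ Hi.T)   # scalar division
    x_s = x_s + (Ki * (y[i] - Hi @ x_s)).ravel()
    P_s = (np.eye(n) - Ki @ Hi) @ P_s

print(np.allclose(x_s, x_vec), np.allclose(P_s, P_vec))   # True True
```

As the text notes, the intermediate gains Kᵢ differ from the columns of K, but the final state and covariance agree exactly.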
4.6.3 Complementary Filtering

In applications, it often happens that multiple noisy measurements of a signal are available, and it is of interest to
FIGURE 4.2 Block Diagrams for the Estimation of a Signal s from Noisy Measurements. (A) Generic Filter Structure. (B) Complementary Filter Constrained to Not Distort s. (C) Feed-Forward Complementary Filter.
combine the measurements to obtain an improved or even optimal estimate of the signal. Such a situation is illustrated in Figure 4.2(A), where two measurements of the signal s are available:

s₁ = s + n₁   and   s₂ = s + n₂.   (4.54)
The measurement noise processes n₁ and n₂ are random independent processes with known spectral densities, and s can be either random or deterministic; however, the signal s is not known, so it cannot be interpreted as the u term in equation 4.54. The objective is to design filters G(z) and H(z) such that ŝ = G(z)s₁ + H(z)s₂. If the estimator is designed with the constraint that the signal s cannot be distorted by the estimation process, then G(z) = 1 − H(z). This constrained estimator, referred to as a complementary filter, is shown in Figure 4.2(B). Restructuring the complementary filter block diagram as shown in Figure 4.2(C) has two advantages. First, only a single filter is required. Second, the input to H(z) is a random signal with known spectral density. Therefore, the filter design problem is properly structured for the Kalman or Wiener filter design techniques. The objective of the filter is to accurately estimate n₁. Additional computational advantages result that will be stated in the following example.
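The feed-forward structure of Figure 4.2(C) can be sketched with a simple first-order low-pass H(z); the filter choice and all noise parameters below are illustrative assumptions, not from the text. Note that the true signal s cancels out of the filter input (s₂ − s₁ = n₂ − n₁), so s is never distorted:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
s = np.sin(0.002 * np.arange(N))                 # true signal
n1 = np.cumsum(0.002 * rng.standard_normal(N))   # slow drift on measurement 1
n2 = 0.5 * rng.standard_normal(N)                # broadband noise on measurement 2
s1, s2 = s + n1, s + n2

# s_hat = s1 + H(z)(s2 - s1) = s + (1 - H)n1 + H n2.
a = 0.99                                         # low-pass pole of H(z)
h_out = np.zeros(N)
for k in range(1, N):
    h_out[k] = a * h_out[k-1] + (1 - a) * (s2[k] - s1[k])
s_hat = s1 + h_out

err_raw = np.mean((s1 - s)**2)
err_filt = np.mean((s_hat - s)**2)
print(err_filt < err_raw)
```

The low-pass H(z) removes the low-frequency drift n₁ while admitting only a small fraction of the broadband noise n₂, so the combined estimate improves on either measurement alone.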
4.7 Example: Inertial Navigation System Error Estimation

As an illustrative example of the estimation concepts presented in this chapter, consider the following example of the calibration of a simplified inertial navigation system (INS) using an independent measurement of position: p_m = p + ν, where ν is assumed to be Gaussian white noise with cov(ν) = σ_ν² = (0.01 m)² and where p is the true position. The inertial navigation system computations, which are depicted by the left block diagram in Figure 4.3, are the following:

ṗ_c = v_c,   v̇_c = a_m,   (4.55)

where a_m is the measured acceleration that is corrupted by a bias plus Gaussian white noise, v_c is the computed velocity, and p_c is the computed position. The bias is assumed to be a random walk: ḃ = ω, where ω is a Gaussian white noise process. Note that the INS computation block diagram can be decomposed into signal

FIGURE 4.3 Inertial Navigation System Block Diagrams. (A) Block Diagram of the Actual INS Computations. (B) Block Diagram Decomposing the INS into Signal and Error Channels.
and noise channels as illustrated in Figure 4.3(B). This block diagram illustrates that the position p_c computed by the INS can be considered as the sum of the true position p and a colored noise process δp. The state-space model for δp is as seen here:

δṗ = δv
δv̇ = b + n
ḃ = ω
δp = [1, 0, 0] (δp, δv, b)ᵀ.   (4.56)

The model specifies the F, H, and G_ω matrices. Note that G_u = 0. The spectral densities of ω and n are denoted σ_ω² = 4 × 10⁻¹⁰ m²/s⁵ and σ_n² = 10⁻⁴ m²/s³, which correspond to an inexpensive solid-state device. The feed-forward complementary filter to combine p_c and p_m to estimate p is shown in Figure 4.4. In this figure, all signals are illustrated in continuous time to clearly show the integrals involved. Both the INS and the filter would typically be implemented via computer in discrete-time representation. The analysis that follows will also be performed in discrete time. This feed-forward complementary filter implementation has several useful features in addition to those previously noted. First, the inputs to the filter H(s) are stationary stochastic quantities unaffected by p, v, or a. In typical INS applications, this independence is not true. Second, in the discrete-time implementation, the INS update rate can be much higher than the rate at which the position measurements are made. For Kalman filter implementations, this fact significantly reduces the computational load because the time propagation and measurement updates of the estimation error covariance matrices, which are the most computationally expensive portion of the algorithm, occur at the lower rate of the position measurement.

FIGURE 4.4 Feed-Forward Complementary Filter Implementation of the Simplified Position-Aided Inertial Navigation System

Consider the design of the complementary filter as a Kalman filter, where H = [1, 0, 0], R = σ_ν², and:

F = [1   T   T²/2
     0   1   T
     0   0   1]   (4.57)

Q_d = [σ_n²T³/3 + σ_ω²T⁵/20    σ_n²T²/2 + σ_ω²T⁴/8    σ_ω²T³/6
       σ_n²T²/2 + σ_ω²T⁴/8     σ_n²T + σ_ω²T³/3       σ_ω²T²/2
       σ_ω²T³/6                σ_ω²T²/2               σ_ω²T]   (4.58)

The left column of Figure 4.5 shows the covariance analysis using the method of Section 4.5.2 for a scenario where the position aiding is available at 1 Hz for the first 60 sec but not available for the subsequent 15 sec. In the first 60 sec, the position and velocity error standard deviations (STDs) have effectively reached their steady-state accuracy. These plots clearly show that error accumulates during each 1 sec of INS integration and is reduced by each 1-Hz position update. The accelerometer bias error is still being slowly corrected at the end of the 60 sec of position corrections. Two natural questions relevant to the system design are what are the dominant error sources causing the growth of the INS errors between the 1-Hz corrections during the first 60 sec, and what are the dominant error sources that cause the INS error growth during the period when the position aiding is no longer available. Both of these questions are answerable by the error budgeting analysis described in Section 4.5.1. The error budget analysis plot for the position estimate is shown in Figure 4.5(D). This figure shows the error contributions from ω, n, ν, and the total error from all sources. The plot for the error contribution from P(0) has not been included because it is insignificant for the time period shown. Note that the figure plots the standard deviations (STDs). To perform the superposition check that the errors from the individual sources add up to the total error at every time instant, the plotted quantities must be squared prior to the addition. The figure clearly shows that by 62 sec, 2 sec after the position aiding has been removed, the acceleration measurement noise n has become the dominant error source. The accelerometer bias drift term is rather insignificant. Therefore, if the design objective were to maintain position accuracy less than 0.3 m for a longer period of time, then either an independent velocity aiding measurement or an accelerometer with less measurement noise would be required.

FIGURE 4.5 Covariance and Error Budget Analysis for the Position-Aided INS Example. (A) Position Error STD in m Versus Time. (B) Velocity Error STD in m/s Versus Time. (C) Accelerometer Bias Error STD in m/s² Versus Time. (D) Position Error Budget Analysis.

This example has been designed as a simplified analysis of a carrier phase differential global positioning system-aided INS. A full error analysis is significantly more complicated for a few reasons. First, the INS equations are nonlinear. The implementation therefore requires a more advanced estimator implementation, such as an extended Kalman filter. Second, the INS error equations are functions of the acceleration and angular rate of the inertial measurement unit. In addition, the linearized measurement matrix is a function of the satellite positions and is therefore time varying. Therefore, the state-space model is time varying, and the full error analysis would be dependent on assumed vehicle maneuvers and satellite configurations.
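The covariance analysis behind Figure 4.5 can be sketched directly from equations 4.57 and 4.58 using the noise densities given above (T = 1 s, σ_n² = 10⁻⁴ m²/s³, σ_ω² = 4 × 10⁻¹⁰ m²/s⁵, R = (0.01 m)²); only the initial covariance P(0) below is an assumed value:

```python
import numpy as np

T = 1.0
sn2, sw2 = 1e-4, 4e-10
F = np.array([[1, T, T**2/2],
              [0, 1, T],
              [0, 0, 1]])                      # equation 4.57
Qd = np.array([[sn2*T**3/3 + sw2*T**5/20, sn2*T**2/2 + sw2*T**4/8, sw2*T**3/6],
               [sn2*T**2/2 + sw2*T**4/8,  sn2*T + sw2*T**3/3,      sw2*T**2/2],
               [sw2*T**3/6,               sw2*T**2/2,              sw2*T]])  # eq 4.58
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.01**2]])

P = np.diag([1.0, 0.1, 1e-4])                  # assumed initial uncertainty P(0)
pos_std = []
for k in range(75):
    if k < 60:                                  # 1-Hz position aiding for 60 s
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        IKH = np.eye(3) - K @ H
        P = IKH @ P @ IKH.T + K @ R @ K.T       # equation 4.27
    P = F @ P @ F.T + Qd                        # equation 4.28
    pos_std.append(np.sqrt(P[0, 0]))

# STD settles while aiding is available, then grows after aiding is removed.
print(pos_std[59] < 0.05, pos_std[74] > pos_std[59])
```

Plotting pos_std over the 75 s reproduces the qualitative behavior of Figure 4.5(A): small, bounded error during the aided interval and rapid growth during the 15 s coast.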
4.8 Further Reading

Various aspects of linear algebra and the state-space methodology are discussed in Brogan (1991). The Luenberger observer is presented in Luenberger (1966). The Kalman filter is presented in Kalman (1960) and Kalman and Bucy (1961). Estimation theory and estimator design are discussed in several texts (Brown and Hwang, 1992; Gelb et al., 1974; Maybeck, 1979). Practical aspects of estimation theory, including state augmentation, suboptimal filter analysis, and applications, are discussed in Farrell and Barth (1999), Gelb et al. (1974), and Maybeck (1979). Markov parameters and FIR approximations to IIR filters are discussed in Mendel (1995) and Moore (1981). Efficient computation of the discrete-time state transition matrix and process noise covariance matrix are discussed in Van Loan (1978). The full implementation of the GPS/INS system corresponding to the example is described in Farrell et al. (2000).
References

Brogan, W.L. (1991). Modern control theory. Englewood Cliffs, NJ: Prentice Hall.
Brown, R.G., and Hwang, P.Y.C. (1992). Introduction to random signals and applied Kalman filtering. (2d ed.). New York: John Wiley & Sons.
Farrell, J.A., Givargis, T., and Barth, M. (2000). Real-time differential carrier phase GPS-aided INS. IEEE Transactions on Control Systems Technology 8(4), 709–721.
Farrell, J.A., and Barth, M. (1999). The Global Positioning System and inertial navigation: Theory and practice. New York: McGraw-Hill.
Gelb, A., Kasper, J.F., Jr., Nash, R.A., Jr., Price, C.F., and Sutherland, A.A. (1974). Applied optimal estimation. Cambridge, MA: MIT Press.
Kalman, R.E. (1960). A new approach to linear filtering and prediction problems. ASME Journal of Basic Engineering, Ser. D 82, 35–45.
Kalman, R.E., and Bucy, R.S. (1961). New results in linear filtering and prediction theory. ASME Journal of Basic Engineering, Ser. D 83, 95–108.
Luenberger, D. (1966). Observers for multivariable systems. IEEE Transactions on Automatic Control 11, 190–197.
Maybeck, P.S. (1979). Stochastic models, estimation, and control. Vol. 1–3. San Diego, CA: Academic Press.
Mendel, J.M. (1995). Lessons in estimation theory for signal processing, communications, and control. Englewood Cliffs, NJ: Prentice Hall.
Moore, B. (1981). Principal component analysis in linear systems: Controllability, observability, and model reduction. IEEE Transactions on Automatic Control AC-26, 17–32.
Van Loan, C. (1978). Computing integrals involving the matrix exponential. IEEE Transactions on Automatic Control AC-23, 395–404.
5 Cost-Cumulants and Risk-Sensitive Control

Chang-Hee Won
Department of Electrical Engineering, University of North Dakota, Grand Forks, North Dakota, USA

5.1 Introduction   1061
5.2 Linear-Quadratic-Gaussian Control   1061
5.3 Cost-Cumulant Control   1062
    5.3.1 Minimal Cost Variance Control
5.4 Risk-Sensitive Control   1063
5.5 Relationship Between Risk-Sensitive and Cost-Cumulant Control   1064
5.6 Applications   1065
    5.6.1 Risk-Sensitive Control Applied to Satellite Attitude Maneuver
    5.6.2 MCV Control Applied to Seismic Protection of Structures
5.7 Conclusions   1067
References   1068
5.1 Introduction

Cost-cumulant control, also known as statistical control, is an optimal control method that minimizes a linear combination of quadratic cost cumulants. Risk-sensitive control is an optimal control method that minimizes the exponential of the quadratic cost criterion. This is equivalent to optimizing a denumerable sum of all the cost cumulants. Optimal control theory deals with the optimization, either minimization or maximization, of a given cost criterion. Linear-quadratic-Gaussian control, minimum cost-variance control, and risk-sensitive control are discussed in terms of cost cumulants. Figure 5.1 presents an overview of optimal control and the relationships among different optimal control methods.

5.2 Linear-Quadratic-Gaussian Control

The linear quadratic Gaussian (LQG) control method optimizes the mean, which is the first cumulant, of a quadratic cost criterion (Anderson and Moore, 1989; Davis, 1977; Kwakernaak and Sivan, 1972). Typical system dynamics for LQG control are given by the stochastic equations:

dx(t) = Ax(t)dt + Bk(t, x)dt + E(t)dw(t).
y(t)dt = Cx(t)dt + dv(t).   (5.1)

Here w(t) and v(t) are vector Brownian motions. The meaning of a Brownian motion, such as w(t), can be given directly or in terms of its differential dw(t). In the latter case, the dw(t) is a Gaussian random process with zero mean, covariance matrix W dt, and independent increments. A similar description applies to dv(t), with covariance V dt. It is assumed that dw(t) and dv(t) are independent. The matrices A, B, C, and E are of compatible size. It should be remarked that the formalism of equation 5.1 is that of a stochastic differential equation. Intuitively, one thinks of dividing both sides of this equation by dt to obtain the more colloquial form. But the formal derivative of a Brownian motion (known as white noise) is not a well-defined random process, and this motivates an alternate way of thinking. The quadratic cost criterion is given by:

J(k) = ∫₀^{t_F} (x′Qx + k′Rk) dt.   (5.2)

The weighting matrix Q is a symmetric and positive semidefinite matrix, and R is a symmetric and positive definite matrix.
The LQG control problem then becomes a minimization of the mean quadratic cost over the feedback controller k:

J* = min_k E{J(k)}. (5.3)

The full-state feedback control problem is to choose the control k as a function of the state x so that the cost criterion of equation 5.3 is minimized. The general partial observation, or output feedback, control problem is to choose the control k as a function of the observation y so that the cost of equation 5.3 is minimized. Assume now that the problem has a solution of the quadratic form (1/2)x'Πx. The matrix Π can be found from the Riccati equation:

0 = Π̇(t) + Q + A'Π(t) + Π(t)A − Π(t)BR^{-1}B'Π(t), (5.4)

where Π(tF) = 0. Then the full-state feedback optimal controller is given by (Davis, 1977):

k(t, x) = −R^{-1}B'Π(t)x(t). (5.5)

The solution of the output feedback LQG problem is found using the certainty equivalence principle. The optimal control is found using a Kalman filter, in which an optimal estimate x̂ is obtained such that E{(x − x̂)'(x − x̂)} is minimized. This estimate is then used as if it were an exact measurement of the state to solve the deterministic control problem. For the output feedback case, the estimated states are given by:

dx̂/dt = Ax̂ + Bk + PC'V^{-1}(y − Cx̂), (5.6)

where P satisfies the forward Riccati equation:

Ṗ(t) = W + AP(t) + P(t)A' − P(t)C'V^{-1}CP(t). (5.7)

In equation 5.7, the initial condition is P(0) = cov(x₀). Finally, the optimal output feedback controller is given as:

k(t, x) = −R^{-1}B'Π(t)x̂(t). (5.8)

[FIGURE 5.1: diagram relating deterministic control, stochastic control, and cost-cumulant control to LQG, minimal cost variance, risk-sensitive, H-infinity, and game-theoretic methods.]

FIGURE 5.1 Relationship Between Various Optimal and Robust Control Methods. (Sain et al., 2000; Jacobson, 1973; Won and Sain, 1995; Glover and Doyle, 1988; Whittle, 1990; Rhee and Speyer, 1992; Runolfsson, 1994; Uchida, 1989; Basar and Bernhard, 1991.)

5.3 Cost-Cumulant Control

5.3.1 Minimal Cost Variance Control

Minimum cost variance (MCV) control is a special case of cost-cumulant, or statistical, control in which the second cumulant, the variance, is minimized while the first cumulant, the mean, is kept at a prespecified level. Here, open-loop MCV and full-state feedback MCV control laws are discussed. An open-loop control law is a function u: [0, tF] → U, where U is some specified allowable set of control values. A closed-loop, or feedback, control law is a function that depends on time and the past evolution of the process [i.e., u(t, x(s); 0 ≤ s ≤ t)].

Open-Loop MCV Control
Consider a linear system (Sain and Liberty, 1971):

dx(t) = A(t)x(t)dt + B(t)u(t)dt + E(t)dw(t), (5.9)

and the performance measure:

J = ∫_0^{tF} [x'(t)Qx(t) + u'(t)Ru(t)]dt + x'(tF)QF x(tF), (5.10)

where w(t) is zero mean with white characteristics relative to the system, tF is the fixed final time, x(t) ∈ R^n is the state of the system, and u(t) ∈ R^m is the control action. Note that:

E{dw(t)dw'(t)} = W dt. (5.11)

The fundamental idea behind minimal cost variance control is to minimize the variance of the cost criterion J:

J_MV = VAR_k{J}, (5.12)
5 Cost-Cumulant and Risk-Sensitive Control
while satisfying a constraint:

E_k{J} = M, (5.13)

where J is the cost criterion and where the subscript k on E denotes the expectation based on a control law k generating the control action u(t) from the state x(t) or from a measurement history arising from that state. By means of a Lagrange multiplier μ corresponding to the constraint of equation 5.13, one can form the function:

J_MV = μ(E_k{J} − M) + VAR_k{J}, (5.14)

the minimization of which is equivalent to minimizing:

Ĵ_MV = μE_k{J} + VAR_k{J}. (5.15)

A Riccati solution to the minimization of Ĵ_MV is developed for the open-loop case:

u(t) = k(t, x(0)). (5.16)

The solution is based on the differential equations:

dz(t)/dt = A(t)z(t) − (1/2)B(t)R^{-1}B'(t)r̂(t). (5.17)
dr̂(t)/dt = −A'(t)r̂(t) − 2Qz(t) − 8μQν(t). (5.18)
dν(t)/dt = A(t)ν(t) + E(t)WE'(t)λ(t). (5.19)
dλ(t)/dt = −A'(t)λ(t) − Qz(t). (5.20)

These equations have the boundary conditions:

z(0) = x(0). (5.21)
r̂(tF) = 2QF z(tF) + 8μQF ν(tF). (5.22)
ν(0) = 0. (5.23)
λ(tF) = QF z(tF). (5.24)

The control action is then given by the relationship:

u(t) = −(1/2)R^{-1}B'(t)r̂(t). (5.25)

The variable z(t) is the mathematical expectation of x(t). The variable r̂(t) corresponds to the costate variable of optimal control theory because it is the variable that enforces the differential equation constraint between z(t) and ν(t). The variables ν(t) and λ(t) are introduced to reduce the integro-differential equation.

Full-State Feedback Minimal Cost Variance Control
Consider the Ito-sense stochastic differential equation (SDE) with control (Sain et al., 2000):

dx(t) = [A(t)x(t) + B(t)k(t, x)]dt + E(t)dw(t),

with the cost criterion:

J(t, x(t), k) = ∫_t^{tF} [x(s)'Qx(s) + k'(s, x)R(s)k(s, x)]ds + x'(tF)QF x(tF). (5.26)

In MCV control, we define a class of admissible controllers, and the cost variance is then minimized within that class. Define V₁(t, x; k) = E{J(t, x(t), k) | x(t) = x} and V₂(t, x; k) = E{J²(t, x(t), k) | x(t) = x}. A function M is an admissible mean cost criterion if there exists an admissible control law k such that:

V₁(t, x; k) = M(t, x), (5.27)

for all t ∈ [0, tF] and x ∈ R^n. A minimal mean cost control law k_M satisfies V₁(t, x; k_M) = V₁*(t, x) ≤ V₁(t, x; k) for t ∈ T, x ∈ R^n, and for k an admissible control law. An MCV control law k_{V|M} satisfies V₂(t, x; k_{V|M}) = V₂*(t, x) ≤ V₂(t, x; k) for t ∈ T, x ∈ R^n whenever k is admissible. The corresponding minimal cost variance is given by V*(t, x) = V₂*(t, x) − M²(t, x) for t ∈ T, x ∈ R^n. Here, the full-state feedback solution of the MCV control problem is presented for a linear system and a quadratic cost criterion. The linear optimal MCV controller is then given by (Sain et al., 2000):

k_{V|M}(t, x) = −R^{-1}(t)B'(t)[M(t) + γV(t)]x,

where M and V are the solutions of the coupled Riccati-type equations (suppressing the time argument):

0 = Ṁ + A'M + MA + Q − MBR^{-1}B'M + γ²VBR^{-1}B'V. (5.28)
0 = V̇ + 4MEWE'M + A'V + VA − MBR^{-1}B'V − VBR^{-1}B'M − 2γVBR^{-1}B'V, (5.29)

with boundary conditions M(tF) = QF and V(tF) = 0. Once again, as γ approaches zero, the classic LQG results are obtained. This MCV idea can be generalized to minimize any cost cumulant. Viewing the cost function as a random variable and optimizing any cost cumulant is called cost-cumulant, or statistical, control.
5.4 Risk-Sensitive Control

A large class of control systems can be described in state-variable form by the stochastic equations (Anderson and Moore, 1989; Whittle, 1996):
dx(t) = Ax(t)dt + Bk(t, x)dt + dw(t).
dy(t) = Cx(t)dt + dv(t). (5.30)

Here, x(t) is an n-dimensional state vector, k(t, x) is an m-dimensional input vector, w(t) is a q-dimensional disturbance vector of Brownian motions, y(t) is a p-dimensional vector of output measurements, and v(t) is an r-dimensional output noise vector of Brownian motions that affects the measurements being taken. The risk-sensitive cost criterion is given by:

J_RS(θ) = −θ^{-1} log E_k{e^{−θJ}}, (5.31)

where J is the classical quadratic cost criterion:

J = ∫_0^{tF} (x'Qx + k'Rk)dt. (5.32)

The RS control problem then becomes a minimization of the cost J_RS(θ) over the feedback controller k:

J*_RS(θ) = min_k J_RS(θ). (5.33)

Assume a solution of the quadratic form (1/2)x'Πx − s'x + (terms independent of x). The matrix Π can be found from the Riccati-type equation:

0 = Π̇(t) + Q + A'Π(t) + Π(t)A − Π(t)(BR^{-1}B' + θW)Π(t), (5.34)

where Π(tF) = 0. Then, the full-state feedback optimal controller is given by (Whittle, 1996):

k(t, x) = −R^{-1}B'Π(t)x(t) + R^{-1}B's(t), (5.35)

where ṡ(t) + (A − (BR^{-1}B' + θW)Π)'s(t) = 0 is a backward linear equation. The matrix P satisfies the forward Riccati-type equation:

Ṗ(t) = W + AP(t) + P(t)A' − P(t)(C'V^{-1}C + θQ)P(t), (5.36)

where P(0) = cov(x₀). The updating equation for the risk-sensitive Kalman filter is given by:

dx̊/dt = Ax̊ + Bk + PC'V^{-1}(y − Cx̊) − θPQx̊, (5.37)

where x̊(0) = 0, and x̊ denotes the mean of x conditional on the initial information, current observation history, and previous control history. Finally, the optimal output feedback controller is given as (Whittle, 1996):

k(t, x) = −R^{-1}B'Π(t)x̂(t) + R^{-1}B's(t), (5.38)

where x̂ is the minimal-stress estimate of x, given by:

x̂(t) = (I + θP(t)Π(t))^{-1}(x̊(t) + θP(t)s(t)). (5.39)

As θ approaches zero, the cost criterion of equation 5.31 becomes E_k{J}, and the matrices Π and P are obtained from the Riccati equations:

0 = Π̇(t) + Q + A'Π(t) + Π(t)A − Π(t)BR^{-1}B'Π(t). (5.40)
Ṗ(t) = W + AP(t) + P(t)A' − P(t)C'V^{-1}CP(t). (5.41)

Thus, the classic LQG result is obtained as θ approaches zero.

5.5 Relationship Between Risk-Sensitive and Cost-Cumulant Control

To see the relationship between RS and cost-cumulant control, consider a cost criterion:

J = ∫_0^{tF} [x(s)'Qx(s) + k'(s, x)R(s)k(s, x)]ds + x'(tF)QF x(tF). (5.42)

Classical LQG control minimizes the first cumulant, or the mean, of the cost criterion of equation 5.42. In MCV control, the second cumulant of equation 5.42 is minimized while the mean is kept at a prespecified level. Furthermore, RS control minimizes an infinite linear combination of the cost cumulants. To see this, consider an RS cost criterion:

J_RS = −θ^{-1} log(E{exp(−θJ)}), (5.43)

where θ is a real parameter and E denotes expectation. Then, the moment-generating function, or first characteristic function, is given by:

φ(s) = E exp(−sJ). (5.44)

The cumulant-generating function ψ(s) is defined by:

ψ(s) = log φ(s) = Σ_{i=1}^{∞} ((−1)^i / i!) β_i s^i, (5.45)

in which the {β_i} are known as the cumulants, or sometimes the semi-invariants, of J. Now, by comparing equations 5.43, 5.44, and 5.45, it is noted that:

J_RS = −θ^{-1} { Σ_{i=1}^{∞} ((−1)^i / i!) β_i(J) θ^i }, (5.46)

where β_i(J) denotes the ith cumulant of J with respect to the control law k. Thus, it is important to note that the RS cost criterion is an infinite linear combination of the cost cumulants. Moreover, approximating to the second order:
J_RS = β₁(J) − (θ/2)β₂(J) + O(θ²) = E{J} − (θ/2)VAR{J} + O(θ²). (5.47)
Therefore, the minimal cost mean and minimal cost variance problems can be viewed as first- and second-order approximations of the RS control problem, respectively. Minimizing VAR{J} under the restriction that the first cumulant E{J} is kept at a prespecified level is called the minimal cost variance (MCV) problem. Moreover, minimizing any linear combination of cost cumulants under certain restrictions is called cost-cumulant, or statistical, control. Thus, classical LQG control (optimization of the first cumulant), MCV control (optimization of the second cumulant), and RS control (optimization of an infinite number of cumulants) are all special cases of the cost-cumulant control problem.
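The second-order approximation in equation 5.47 can be checked by Monte Carlo sampling. In the following sketch (a toy cost distribution, not from the chapter), the sampled RS criterion is compared against the mean-minus-half-theta-variance approximation:

```python
import numpy as np

# Monte Carlo check of equation 5.47: for small theta,
#   -theta^{-1} log E{exp(-theta J)}  ~=  E{J} - (theta/2) VAR{J}.
# The cost samples below come from a toy distribution (illustrative only).
rng = np.random.default_rng(1)
J = rng.gamma(shape=4.0, scale=0.5, size=200_000)  # toy nonnegative cost J

theta = 0.01
j_rs = -np.log(np.mean(np.exp(-theta * J))) / theta
second_order = J.mean() - 0.5 * theta * J.var()
```

For small θ the two values agree to within the neglected third-cumulant term of the expansion.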
5.6 Applications

An application of risk-sensitive control to a satellite attitude maneuver is given in this section, along with an application of minimal cost variance control to earthquake protection of a structure. For linear-quadratic-Gaussian applications, see Anderson and Moore (1989), Fleming and Rishel (1975), and Kwakernaak and Sivan (1972). For more risk-sensitive control examples, refer to Bensoussan (1992) and Whittle (1996).
5.6.1 Risk-Sensitive Control Applied to Satellite Attitude Maneuver

This subsection shows simulation results for a model of a geostationary satellite equipped with a bias momentum wheel on the third axis of the body frame. The model assumes that the disturbance torque is Gaussian white noise, and a stochastic RS controller is applied. For this model, small attitude angles are assumed, and the roll/yaw dynamics are assumed to be decoupled from the pitch dynamics. The roll/yaw attitude model of the geostationary satellite is simplified to the following linear differential equation for h_w ≫ max{I_ii ω_c}:

dx(t) = [ 0, 0, 1, 0;
          0, 0, 0, 1;
          h_w ω_c /I11, 0, 0, h_w /I11;
          0, −h_w ω_c /I22, −h_w /I22, 0 ] x(t)dt
        + [ 0; 0; B_e cos(α)/I11; B_e sin(α)/I22 ] m(t)dt
        + [ 0; 0; 1/I11; 1/I22 ] dw(t). (5.48)
dy(t) = I₄ x(t)dt + dv(t). (5.49)

Here, dw/dt is Gaussian white noise representing the disturbance torque; dv/dt is Gaussian white noise representing the measurement noise; h_w is the wheel momentum; α is the angle that the positive roll axis makes with the magnetic torquer; ω_c is the orbital rate; I_ii is the moment of inertia of the ith axis; x = [γ, r, γ̇, ṙ]' is the state, with yaw (γ) and roll (r); m is the dipole moment of the magnetic torquer (the control); B_e = 1.07 × 10⁻⁷ tesla is the nominal magnetic field strength; and I₄ is an identity matrix of dimension four. The expected value of dw/dt is zero with E{(dw/dt)(dw/dt)'} = 0.7B_e, and the expected value of dv/dt is zero with E{(dv/dt)(dv/dt)'} = 1 × 10⁻⁷. Here θ = 5 × 10⁻² was chosen for demonstration purposes, but this risk-sensitivity parameter θ should be viewed as another design parameter, just like the weighting matrices Q and R. By varying θ, different performance and stability results can be obtained; theoretically, any θ that gives a solution to the Riccati equation 5.36 is admissible. This example shows how the risk-sensitivity parameter can be chosen to obtain a larger stability margin. The constants for the operational mode are given as I11 = 1988 kg·m², I22 = 1876 kg·m², I12 = I21 = 0, h_w = 55 kg·m²/s, ω_c = 0.00418 deg/s, and α = 60 deg. These values are actual parameters of the geostationary satellite. The initial condition is [0.5 deg, 0, 0, 0.007 deg/sec]. Finally, the weighting matrices are chosen to be Q = I₄ and R = 1 × 10⁻¹⁰. In this model, the states are measured with sensor noise dv/dt, and a Kalman filter is used to estimate the states. The simulations were performed using MATLAB. The RS controller is found using equation 5.35. Both yaw and roll angles are driven close to the origin: Figure 5.2 shows the roll and yaw angles versus time, and after about 3 hours both angles stay below 0.1 degree.
Initially, large control action is needed, but after 3 hours or so, less than 300 A·m² of magnetic torque is required. It is important to note that despite the external disturbances, the RS control law produces good performance.

FIGURE 5.2 Roll (Dark) and Yaw (Light) Versus Time, RS Control

To compare the results with the well-known LQG controller, the system was also simulated with an LQG controller, obtained by letting θ approach zero in equation 5.36; note that equation 5.36 becomes a classical Riccati equation in that limit. This is shown in Figure 5.3. Note that in the LQG case, it takes longer for the yaw and roll angles to fall below 0.1 degree, and the variation in the angles is larger than in the RS case. Thus, in this sense, the RS controller outperforms the LQG controller.

FIGURE 5.3 Roll (Dark) and Yaw (Light) Versus Time, LQG Control

5.6.2 MCV Control Applied to Seismic Protection of Structures

A 3DOF, single-bay structure with an active tendon controller, as shown in Figure 5.4, is considered here. The structure is subject to a one-dimensional earthquake excitation. If a simple shear-frame model for the structure is assumed, the governing equations of motion in state-space form can be written as:

dx(t) = [ 0, I; −Ms^{-1}Ks, −Ms^{-1}Cs ] x(t)dt + [ 0; Ms^{-1}Bs ] u(t)dt + [ 0; Gs ] dw(t),

where the following apply:

Ms = [ m1, 0, 0; 0, m2, 0; 0, 0, m3 ],  Bs = [ 4kc cos α; 0; 0 ].
Cs = [ c1+c2, −c2, 0; −c2, c2+c3, −c3; 0, −c3, c3 ],  Gs = −[ 1; 1; 1 ].
Ks = [ k1+k2, −k2, 0; −k2, k2+k3, −k3; 0, −k3, k3 ].

FIGURE 5.4 Schematic Diagram for Three Degree-of-Freedom Structure

The mi, ci, and ki are the mass, damping, and stiffness, respectively, associated with the ith floor of the building, and kc is the stiffness of the tendon. The Brownian motion w(t) satisfies E{dw(t)} = 0 and E{dw(t)dw'(t)} = W dt; in this example, W = 1.00(2π) in²/sec³.

FIGURE 5.5 Optimal Variance (Var{J} Versus γ): Full-State Feedback, MCV, 3DOF

The parameters were chosen to
FIGURE 5.6 RMS Displacements (σ_x1 Through σ_x3) and Velocities (σ_x4 Through σ_x6) Versus γ: Full-State Feedback, MCV, 3DOF
match modal frequencies and dampings of an experimental structure. The cost criterion is given by:
J = ∫_0^{tF} [z'(t)Ks z(t) + kc u²(t)] dt,
together with R = kc, where z is the vector of floor displacements and x = [z' ż']'. Figure 5.5 shows that the variance of the cost criterion decreases as γ increases. Note that the γ = 0 point corresponds to the classical LQG case.
Figure 5.6 shows the RMS displacement responses of the first (σ_x1), second (σ_x2), and third (σ_x3) floors and the RMS velocity responses of the first (σ_x4), second (σ_x5), and third (σ_x6) floors, respectively, versus the MCV parameter γ. It is important to note that both the third-floor RMS displacement and velocity responses can be decreased by choosing a large γ.
5.7 Conclusions

This chapter describes linear-quadratic-Gaussian (LQG), minimal cost variance (MCV), and risk-sensitive (RS) control in terms of cost cumulants. Cost-cumulant control, which is also called statistical control, views the optimization criterion as a random variable and minimizes any cumulant of that criterion. LQG, MCV, and RS are then all special cases of cost-cumulant control, in which the mean (LQG), the variance (MCV), and all cumulants (RS) of the cost function are optimized. This chapter provides the optimal controllers for the LQG, MCV, and RS methods. Finally, a satellite attitude control application using an RS controller and a building control application using an MCV controller are described.
References

Anderson, B.D.O., and Moore, J.B. (1989). Optimal control: Linear quadratic methods. Englewood Cliffs, NJ: Prentice Hall.
Basar, T., and Bernhard, P. (1991). H∞-optimal control and related minimax design problems. Boston: Birkhauser.
Bensoussan, A. (1992). Stochastic control of partially observable systems. Cambridge: Cambridge University Press.
Davis, M.H.A. (1977). Linear estimation and stochastic control. London: Halsted Press.
Fleming, W.H., and Rishel, R.W. (1975). Deterministic and stochastic optimal control. New York: Springer-Verlag.
Glover, K., and Doyle, J.C. (1988). State-space formulae for all stabilizing controllers that satisfy an H∞-norm bound and relations to risk sensitivity. Systems and Control Letters 11, 167–172.
Jacobson, D.H. (1973). Optimal stochastic linear systems with exponential performance criteria and their relationship to deterministic differential games. IEEE Transactions on Automatic Control AC-18, 124–131.
Kwakernaak, H., and Sivan, R. (1972). Linear optimal control systems. New York: John Wiley & Sons.
Rhee, I., and Speyer, J. (1992). Application of a game theoretic controller to a benchmark problem. Journal of Guidance, Control, and Dynamics 15(5), 1076–1081.
Runolfsson, T. (1994). The equivalence between infinite-horizon optimal control of stochastic systems with exponential-of-integral performance index and stochastic differential games. IEEE Transactions on Automatic Control 39(8), 1551–1563.
Sain, M.K., and Liberty, S.R. (1971). Performance measure densities for a class of LQG control systems. IEEE Transactions on Automatic Control AC-16(5), 431–439.
Sain, M.K., Won, C.H., and Spencer, B.F., Jr. (1992). Cumulant minimization and robust control. In T.E. Duncan and B. Pasik-Duncan (Eds.), Stochastic theory and adaptive control, Lecture Notes in Control and Information Sciences 184. Berlin: Springer-Verlag, 411–425.
Sain, M.K., Won, C.H., Spencer, B.F., Jr., and Liberty, S.R. (2000). Cumulants and risk-sensitive control: A cost mean and variance theory with application to seismic protection of structures. In J.A. Filar, V. Gaitsgory, and K. Mizukami (Eds.), Advances in dynamic games and applications, Annals of the International Society of Dynamic Games, Vol. 5. Boston: Birkhauser.
Uchida, K., and Fujita, M. (1989). On the central controller: Characterizations via differential games and LEQG control problems. Systems and Control Letters 15(1), 9–13.
Whittle, P. (1996). Optimal control: Basics and beyond. New York: John Wiley & Sons.
Won, C.H. (1995). Cost cumulants in risk-sensitive and minimal cost variance control. Ph.D. dissertation, University of Notre Dame.
6 Frequency Domain System Identification
Gang Jin
Ford Motor Company, Dearborn, Michigan, USA
6.1 Introduction ............................................. 1069
6.2 Frequency Domain Curve-Fitting ........................... 1069
    6.2.1 Matrix Fraction Parameterization . 6.2.2 Polynomial Matrix Parameterization . 6.2.3 Least Squares Optimization Algorithms
6.3 State-Space System Realization ........................... 1074
    6.3.1 Markov Parameters Generation . 6.3.2 The ERA Method
6.4 Application Studies ...................................... 1075
    6.4.1 Identification of a 16-Story Structure . 6.4.2 Identification of the Seismic-Active Mass Driver Benchmark Structure
6.5 Conclusion ............................................... 1078
    References ............................................... 1078
6.1 Introduction

A general procedure for the frequency domain identification of multiple-input/multiple-output (MIMO) linear time-invariant systems is illustrated in Figure 6.1. Typically, one starts with the experimental frequency response function (FRF) of the test system. These FRF data may either be computed from saved input/output measurement data or measured directly online by a spectrum analyzer. Based on these data, the matrix fraction (MF) or the polynomial matrix (PM) curve-fitting technique is applied to find a transfer function matrix (TFM) that closely fits the FRF data. Detailed algorithms for the curve-fitting are introduced in Section 6.2. Frequently, for the purposes of simulation and control, one needs a state-space realization of the system. This may be achieved by various linear system realization algorithms. In particular, the eigensystem realization algorithm (ERA) is presented in Section 6.3 for this purpose, thanks to its many successes in previous application studies. The Markov parameters, from which the state-space model will be derived, can be easily generated from the identified transfer function matrix. Finally, as a measure of performance, the model FRF is computed and compared to the experimental FRF. This is illustrated in Section 6.4 by means of two experimental application examples.

6.2 Frequency Domain Curve-Fitting

Frequency domain curve-fitting is a technique to fit a TFM closely to the observed FRF data. Like other system identification techniques, this is a two-step procedure: model structure selection and model parameter optimization. In this context, the first step is to parameterize the TFM in some special form. Two such forms are introduced in the following: the matrix fraction (MF) parameterization and the polynomial matrix (PM) parameterization. This is always a critical step in the identification because it will generally lead to quite different parameter optimization algorithms and resulting model properties. In particular, this section shows that for the MF form, the parameters can be optimized by means of linear least squares (LLS) solutions. As for the PM parameterization, one has to resort to nonlinear techniques; specifically, this section introduces the celebrated Gauss-Newton (GN) method. On the other hand, the PM parameterization offers more flexibility in the sense that it allows the designer to specify certain properties of the identified model (e.g., fixed zeros in any input/output channel). This feature may be quite desirable, as shown by the application studies in Section 6.4. Before starting the discussion, it is important to make clear the notations that will be used throughout this section. Assume the test system has r input excitation channels and
[FIGURE 6.1 flow: experimental input-output measurement → FRF calculation → experimental FRF data → MF/PM curve-fitting → transfer function matrix → Markov parameters generation → Markov parameters → ERA system realization → discrete-time state-space model → model FRF generation → model FRF data → model evaluation]

FIGURE 6.1 General Procedure of Frequency Domain System Identification. Bolded components imply critical steps; dashed components imply steps that may be excluded.
m output measurement channels. Use {G(ω_i)}_{i=1,...,l} to denote the observed FRF data, based on which the TFM G(z^{-1}) will be estimated. To evaluate G(z^{-1}) at the discrete frequencies ω_i, use the map z(ω_i) = e^{j2πω_i/ω_s}, with ω_s being the sampling frequency. The curve-fitting error is measured by the Frobenius norm ‖·‖_F for matrices and by the 2-norm ‖·‖₂ for vectors. I_n denotes an identity matrix of dimensions n × n.
6.2.1 Matrix Fraction Parameterization

The matrix fraction (MF) parameterization of a TFM takes the following form:

G(z^{-1}) = Q^{-1}(z^{-1})R(z^{-1}), (6.1)

where:

Q(z^{-1}) = I_m + Q₁z^{-1} + Q₂z^{-2} + ··· + Q_q z^{-q}. (6.2)
R(z^{-1}) = R₀ + R₁z^{-1} + R₂z^{-2} + ··· + R_p z^{-p}. (6.3)

The constant matrices Q₁, ..., Q_q and R₀, R₁, ..., R_p are referred to as observer Markov parameters in Juang (1994). From the same reference, the reader may find more detailed material on the MF curve-fitting discussed here and on the ERA realization algorithm discussed in a later section. To simplify the notation, without loss of generality, assume p = q. To fit the TFM G(z^{-1}) of equation 6.1 to the observed FRF data {G(ω_i)}_{i=1,...,l}, one may solve the following parameter optimization problem:

G*(z^{-1}) = arg min_{G=Q^{-1}R} Σ_{i=1}^{l} ‖Q(z^{-1}(ω_i))G(ω_i) − R(z^{-1}(ω_i))‖²_F. (6.4)

Substituting the Q(z^{-1}) and R(z^{-1}) polynomials of equations 6.2 and 6.3 and vectorizing the summation, equation 6.4 is changed into the form:

G*(z^{-1}) = arg min_{G=Q^{-1}R} ‖F Q̄ + C‖_F, (6.5)

where Q̄ stacks the unknown observer Markov parameters and:

F = [ z^{-1}(ω₁)G^T(ω₁) ··· z^{-p}(ω₁)G^T(ω₁)  −I_r  −z^{-1}(ω₁)I_r ··· −z^{-p}(ω₁)I_r ;
      ⋮ ;
      z^{-1}(ω_l)G^T(ω_l) ··· z^{-p}(ω_l)G^T(ω_l)  −I_r  −z^{-1}(ω_l)I_r ··· −z^{-p}(ω_l)I_r ]. (6.6)

C^T = [ G(ω₁) ··· G(ω_l) ]. (6.7)

Q̄^T = [ Q₁ ··· Q_p  R₀ R₁ ··· R_p ]. (6.8)

Thus, the MF curve-fitting has been reduced to a standard LLS problem, which can be solved by various efficient algorithms (e.g., the QR factorization approach quoted in the algorithm of equations 6.29 through 6.31).
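For a SISO system, the reduction of equation 6.4 to an LLS problem can be sketched directly. In the following (a hypothetical first-order system; q1, r0, r1 are the scalar observer Markov parameters), stacking real and imaginary parts keeps the least-squares problem real-valued:

```python
import numpy as np

# SISO sketch of the MF curve-fit of equation 6.4: with Q and R first-order
# scalar polynomials, Q(z^-1) G(w_i) - R(z^-1) = 0 is linear in the unknown
# coefficients q1, r0, r1 (illustrative values below).
q1_true, r0_true, r1_true = -0.5, 1.0, 0.3
ws = 100.0                                      # sampling frequency
freqs = np.linspace(1.0, 40.0, 50)
z_inv = np.exp(-1j * 2 * np.pi * freqs / ws)    # z^{-1}(w_i)

G = (r0_true + r1_true * z_inv) / (1.0 + q1_true * z_inv)  # exact FRF data

# Rearranged: G = -q1 z^{-1} G + r0 + r1 z^{-1}, so the regressors are:
Phi = np.column_stack([-z_inv * G, np.ones_like(z_inv), z_inv])
A = np.vstack([Phi.real, Phi.imag])             # make the LLS problem real
y = np.concatenate([G.real, G.imag])
q1, r0, r1 = np.linalg.lstsq(A, y, rcond=None)[0]
```

With noise-free FRF data the true coefficients are recovered exactly; with measured data, the same solve gives the best fit in the sense of equation 6.4.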
6.2.2 Polynomial Matrix Parameterization

The polynomial matrix (PM) parameterization of a TFM has the form:

G(z^{-1}) = B(z^{-1}) / a(z^{-1}), (6.9)

where:

B(z^{-1}) = B₀ + B₁z^{-1} + B₂z^{-2} + ··· + B_p z^{-p}. (6.10)
a(z^{-1}) = 1 + a₁z^{-1} + a₂z^{-2} + ··· + a_q z^{-q}. (6.11)
These equations are the numerator polynomial matrix and the minimal polynomial of the TFM, respectively. Assume that the orders of the numerator and denominator are equal (i.e., p = q). The goal of parameter optimization is to find G*(z^{-1}) with a prespecified order p such that the estimation error is minimized:

G*(z^{-1}) = arg min_G Σ_{i=1}^{l} w²(ω_i)‖G(ω_i) − G(z^{-1}(ω_i))‖²_F. (6.12)

Here, an additional term w(·) is included to allow a desirable frequency weighting on the estimation error. In the following, G(z^{-1}) in equation 6.12 will be parameterized in a way that allows the inclusion of fixed zeros. This is done first for the single-input/single-output (SISO) case; then, it is generalized to the MIMO case. Let the numerator polynomial of the SISO system be the following:

B(z^{-1}) = B̄(z^{-1})B̃(z^{-1}) (6.13)
         = (c̄^T b̄)(c̃^T b̃), (6.14)

where b̄ = [b̄₀, b̄₁, ..., b̄_{p_s}]^T is the numerator parameter vector corresponding to the fixed zeros; b̃ = [b̃₀, b̃₁, ..., b̃_{p−p_s}]^T is the to-be-estimated numerator parameter vector; and c̄ = [1, z^{-1}, ..., z^{-p_s}]^T and c̃ = [1, z^{-1}, ..., z^{-(p−p_s)}]^T are the corresponding z-vectors. Similarly, the denominator polynomial is:

a(z^{-1}) = 1 + φ^T a, (6.15)

where a = [a₁, ..., a_p]^T and φ = [z^{-1}, ..., z^{-p}]^T. The estimation error to be minimized is then:

F = Σ_{i=1}^{l} ( w(ω_i) ( G(ω_i) − c̄^T(ω_i)b̄ c̃^T(ω_i)b̃ / (1 + φ^T(ω_i)a) ) )² (6.16)
  = Σ_{i=1}^{l} ( w(ω_i)/(1 + φ^T(ω_i)a) ( G(ω_i) − [ c̄^T(ω_i)b̄ c̃^T(ω_i)  −G(ω_i)φ^T(ω_i) ] [ b̃ ; a ] ) )², (6.17)
Finally, the right-hand side of equation 6.18 is vectorized for standard optimizations:

F(u) = ‖W(a)(y − Hu)‖₂², (6.19)

where u = [b̃₁₁^T, b̃₁₂^T, ..., b̃_mr^T, a^T]^T, y is a vector containing the measured FRF data, and W(a) is a weighting function with variate a. Readers should have no difficulty deriving the detailed expressions of W(a) and H. For the special case when there are no fixed zeros (i.e., c̄^T b̄ = 1 in equation 6.14), the results are given in Bayard (1992). It is important to point out that W(a) and H have the following structure:
W(a) = blkdiag[ W̃(a), W̃(a), ..., W̃(a) ]. (6.20)

H = [ C₁₁  0    ···  0     F₁₁ ;
      0    C₁₂  ···  0     F₁₂ ;
      ⋮          ⋱         ⋮  ;
      0    0    ···  C_mr  F_mr ]. (6.21)

Here, blkdiag(·) denotes a block-diagonal arrangement of mr identical diagonal weighting blocks W̃(a).
These structures enable the design of an efficient optimization algorithm. This will be discussed in the next section.
6.2.3 Least Squares Optimization Algorithms (6:16)
(6:17)
where for simplicity c̄ and c̃ are written as functions of ω_i. For the MIMO case, equation 6.17 is generalized to (recall that m and r denote the numbers of output and input channels, respectively):

F = Σ_{j=1}^{m} Σ_{k=1}^{r} Σ_{i=1}^{l} ( w(ω_i)/(1 + φ^T(ω_i)a) ( G_jk(ω_i) − [ c̄_jk^T(ω_i)b̄_jk c̃^T(ω_i)  −G_jk(ω_i)φ^T(ω_i) ] [ b̃_jk ; a ] ) )². (6.18)
This section serves two purposes. First, it gives a brief (but general) account on a few of the most important parameter optimization algorithms, namely the linear least squares (LLS) method, the Newton’s method, and the Gauss-Newton’s (GN) method. Second, the discussion applies some of the methods to the parameter optimization problems arising from the curvefitting process. In particular, this section presents a fast algorithm based on the GN method to the minimization of equation 6.19. Excellent textbooks in this field are abundant, and this discussion only refers the reader to a few of them: Gill et al., (1981), Dennis and Schnabel (1996), and Stewart (1973). General Algorithm Development Let F(u) be the scalar-valued multivariate objective function to be minimized. If the first and second derivatives of F are available, a local quadratic model of the objective function may be obtained by taking the first three terms of the Taylorseries expansion about a point uk in the parameter vector space:
1072
Gang Jin 1 F(uk þ p) Fk þ gkT p þ pT Gk p, 2
(6:22)
where p denotes a step in the parameter space and where F_k, g_k, and G_k denote the value, gradient, and Hessian of the objective function at u_k. Equation 6.22 indicates that to find a local minimum of the objective function, an iterative searching procedure is required. The celebrated Newton's method is defined by choosing the step p = p_k so that u_k + p_k is a stationary point of the quadratic model in 6.22. This amounts to solving the following linear equation: G_k p_k = −g_k.
(6:23)
In the system identification case, the objective function F(u) is often in the sum-of-squares form, as in equation 6.19:

    F(u) = (1/2) Σ_{i=1}^{n} f_i(u)^2 = (1/2) ||f(u)||_2^2,        (6.24)

where f_i is the ith component of the vector f. To implement Newton's method, the gradient and Hessian of F are calculated as:

    g(u) = J(u)^T f(u).        (6.25)

    G(u) = J(u)^T J(u) + Q(u),        (6.26)

where J(u) is the Jacobian matrix of f and where Q(u) = Σ_{i=1}^{n} f_i(u) G_i(u), with G_i(u) being the Hessian of f_i(u). The Newton equation 6.23 thus becomes:

    (J(u_k)^T J(u_k) + Q(u_k)) p_k = -J(u_k)^T f(u_k).        (6.27)

When ||f(u_k)|| is small, ||Q(u_k)|| is usually small and is often omitted from equation 6.27. If this is the case, then solving for p_k from 6.27 is equivalent to solving the following linear least squares (LLS) problem:

    p_k = arg min_p (1/2) ||J(u_k) p + f(u_k)||_2^2.        (6.28)

Equation 6.28 gives the Gauss-Newton method. The LLS problem is often solved by the QR factorization method given in the following algorithm.

Algorithm 1 (QR Factorization and LLS Solution). Let A ∈ R^{m×n} have full column rank. Then A can be uniquely factorized into the form:

    A = QR,        (6.29)

where Q has orthonormal columns and R is upper triangular with positive diagonal elements. The unique solution of the LLS problem

    min_{x ∈ R^n} ||Ax - y||_2^2        (6.30)

is given by:

    x = R^{-1} Q^T y.        (6.31)

Application to the Curve-Fitting Problems

This discussion now returns to the PM curve-fitting problem in equation 6.19. By letting f = W(a)(y - Hu), the Gauss-Newton method in equation 6.28 may be applied. It turns

FIGURE 6.2
Picture of the 16-Story Structure
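Algorithm 1 translates directly into code. In the sketch below the overdetermined data are assumed for illustration; note that numpy's qr routine may normalize signs differently from the positive-diagonal convention of equation 6.29, which does not affect the LLS solution.

```python
import numpy as np

def lls_via_qr(A, y):
    """Algorithm 1: solve min_x ||A x - y||_2^2 via A = QR, x = R^{-1} Q^T y."""
    Q, R = np.linalg.qr(A)               # thin QR factorization (equation 6.29)
    return np.linalg.solve(R, Q.T @ y)   # equation 6.31

# Assumed data: a consistent 8 x 3 system, so the residual is exactly zero.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true
x = lls_via_qr(A, y)
```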
6 Frequency Domain System Identification

TABLE 6.1 Key Identification Parameters
    min_{x_2 ∈ R^q} ||Ã_2 x_2 - y||_2^2.        (6.32)

    min_{x_1 ∈ R^p} ||A_1 x_1 - (y - A_2 x_2)||_2^2.        (6.33)
                          Curve-fitting      System realization
Test structure    Type      p       q         a      b      n
16-Story          MF        16      16        32     32     10
Benchmark         PM        8       8         12     30     8
out that the Jacobian matrix J(u_k) of f has a structure identical to that of H in equation 6.21. Thus, instead of solving equation 6.28 directly, the following decomposition of LLS problems may be applied.

Algorithm 2 (Decomposition of LLS Problem). Let A ∈ R^{m×n} have full column rank. Decompose A into A = [A_1, A_2], with A_1 ∈ R^{m×p} and A_2 ∈ R^{m×q}. Let the unique QR factorization of A_1 be A_1 = Q_1 R_1. Then the unique solution x of the LLS problem of equation 6.30 has the form x = [x_1^T, x_2^T]^T, with x_2 and x_1 being the unique solutions of the LLS problems of equations 6.32 and 6.33, respectively.
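Algorithm 2 can be sketched numerically as two applications of the QR-based LLS solver of Algorithm 1, one per subproblem; the test matrices below are assumed random data for illustration.

```python
import numpy as np

def decomposed_lls(A1, A2, y):
    """Algorithm 2: solve min ||[A1, A2] x - y||_2^2 by two smaller LLS solves."""
    Q1, R1 = np.linalg.qr(A1)                       # A1 = Q1 R1
    A2t = A2 - Q1 @ (Q1.T @ A2)                     # projected A2 (the tilde matrix)
    Q2, R2 = np.linalg.qr(A2t)
    x2 = np.linalg.solve(R2, Q2.T @ y)              # equation 6.32
    x1 = np.linalg.solve(R1, Q1.T @ (y - A2 @ x2))  # equation 6.33
    return x1, x2

rng = np.random.default_rng(1)
A1 = rng.standard_normal((10, 3))
A2 = rng.standard_normal((10, 2))
y = rng.standard_normal(10)
x1, x2 = decomposed_lls(A1, A2, y)
# Stacking [x1; x2] reproduces the solution of the full LLS problem.
x_direct, *_ = np.linalg.lstsq(np.hstack([A1, A2]), y, rcond=None)
```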
In equations 6.32 and 6.33, Ã_2 = (I - Q_1 Q_1^T) A_2. In practice, J(u_k) is divided into J(u_k) = [J_1(u_k), J_2(u_k)], with J_1(u_k) corresponding to the block-diagonal terms and J_2(u_k) the last block matrix column in J(u_k). Solving the LLS problems with J_2(u_k) and J_1(u_k) thus corresponds to updating the estimates of the denominator a and the numerators b_jk, respectively. Moreover, due to its block-diagonal structure, the LLS problem with J_1(u_k) should be further decomposed (i.e., the LLS solutions for the b_jk are solved independently for each j and k). Thus, the computational cost of the optimization algorithm is significantly reduced. To complete the discussion of the algorithm, note that the initial values for the Gauss-Newton iteration may be generated by the classical Sanathanan-Koerner (SK) iteration, composed of a sequence of reweighted LLS problems:
FIGURE 6.3 Comparison of Experimental and Model FRF for the 16-Story Structure: Magnitude Plot. fj denotes the input force on the jth floor; xj denotes the output displacement of the jth floor; dotted lines are for measurement data; solid lines are for model output.
    u_{k+1} = arg min_u ||W(a_k)(y - Hu)||_2^2,        (6.34)

with initial condition u_0 = 0.
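The SK iteration is a short loop of reweighted LLS solves. In the sketch below the weight function is a placeholder argument (the actual W(a_k) comes from the curve-fitting setup of equation 6.19, which is not reproduced here); with a constant weight the loop reduces to an ordinary LLS solve, which the example exploits as a check.

```python
import numpy as np

def sk_iteration(H, y, weight_fn, n_iter=4):
    """SK iteration: u_{k+1} = argmin_u ||W(a_k)(y - H u)||_2^2, with u_0 = 0."""
    u = np.zeros(H.shape[1])
    for _ in range(n_iter):
        W = weight_fn(u)            # reweighting built from the current estimate
        u, *_ = np.linalg.lstsq(W @ H, W @ y, rcond=None)
    return u

# Placeholder problem with an identity weight (assumed data).
rng = np.random.default_rng(2)
H = rng.standard_normal((20, 4))
y = rng.standard_normal(20)
u_sk = sk_iteration(H, y, lambda u: np.eye(20))
u_ls, *_ = np.linalg.lstsq(H, y, rcond=None)   # plain LLS for comparison
```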
6.3 State-Space System Realization

System realization is a technique to determine an internal state-space description for a system given an external description, typically its TFM or impulse response. The name reflects the fact that, if a state-space description is available, an electronic circuit can be built in a straightforward manner to realize the system response. There is a great amount of literature on this subject, both from a system-theoretic point of view (Antsaklis and Michel, 1997) and from a practical system identification point of view (Juang, 1994). In the following, a well-developed method in the second category, the eigensystem realization algorithm (ERA), is selected to construct a model in state-space form. First presented are the formulas to generate the Markov parameters from the TFM, which are the starting point for the ERA method.

6.3.1 Markov Parameters Generation
To calculate the Markov parameters Y_0, Y_1, Y_2, ... from the system TFM, first note that:

    G(z^{-1}) = Σ_{i=0}^{∞} Y_i z^{-i}.        (6.35)
For the case when G(z^{-1}) is parameterized in the MF form (i.e., G(z^{-1}) = Q^{-1}(z^{-1}) R(z^{-1})), the system Markov parameters can be determined from:
FIGURE 6.4 Comparison of Experimental and Model FRF for the 16-Story Structure: Phase Plot. Same notations are used as in Figure 6.3.
TABLE 6.2 PM Iteration Record

Iteration type    Iteration index    FRF residue    a step norm    b step norm
SK                1                  651.47         100%           100%
SK                2                  159.46         1.91%          14.8%
SK                3                  156.75         0.16%          1.36%
SK                4                  156.74         0.24%          0.84%
GN                1                  137.03         10.9%          21.8%
GN                2                  133.47         4.23%          8.31%
GN                3                  128.03         0.56%          2.15%
GN                4                  123.81         0.21%          0.50%
GN                5                  123.34         0.40%          1.07%
GN                6                  123.48         0.15%          0.37%
    ( Σ_{i=0}^{p} Q_i z^{-i} ) ( Σ_{i=0}^{∞} Y_i z^{-i} ) = Σ_{i=0}^{p} R_i z^{-i},        (6.36)

by the following iterative calculations starting from Y_0 = R_0:

    Y_k = R_k - Σ_{i=1}^{k} Q_i Y_{k-i},     for k = 1, ..., p;
    Y_k = -Σ_{i=1}^{p} Q_i Y_{k-i},          for k = p+1, ..., ∞.        (6.37)

If the TFM is parameterized in the PM form, the derivation of the system Markov parameters is almost the same: one starts with Y_0 = B_0 and continues with the following iterative procedure:

    Y_k = B_k - Σ_{i=1}^{k} a_i Y_{k-i},     for k = 1, ..., p;
    Y_k = -Σ_{i=1}^{p} a_i Y_{k-i},          for k = p+1, ..., ∞.        (6.38)

FIGURE 6.5 Picture of the Seismic-AMD Benchmark Structure

6.3.2 The ERA Method

To solve for a state-space model (A, B, C, D) using the ERA method, first form the generalized Hankel matrices:

    H(k - 1) = [ Y_k        Y_{k+1}    ...    Y_{k+b-1}   ]
               [ Y_{k+1}    Y_{k+2}    ...    Y_{k+b}     ]
               [ ...        ...        ...    ...         ]
               [ Y_{k+a-1}  Y_{k+a}    ...    Y_{k+a+b-2} ]        (6.39)

Note that in general, a and b are chosen to be the smallest numbers such that H(k) has as large row and column ranks as possible. Additional suggestions to determine their optimal values are given in Juang (1994). Let the singular value decomposition of H(0) be H(0) = U Σ V^T, and let n denote the index where the singular values have the largest drop in magnitude. Then, H(0) can be approximated by:
    H(0) ≈ U_n Σ_n V_n^T,        (6.40)

where U_n and V_n are the first n columns of U and V, respectively, and Σ_n is the diagonal matrix containing the largest n singular values of H(0). Finally, an nth-order state-space realization (A, B, C, D) can be calculated by:

    A = Σ_n^{-1/2} U_n^T H(1) V_n Σ_n^{-1/2},     B = Σ_n^{1/2} V_n^T E_r,
    C = E_m^T U_n Σ_n^{1/2},                      D = Y_0,        (6.41)
where E_r and E_m^T are the elementary matrices that pick out the first r (the number of system inputs) columns and the first m (the number of system outputs) rows of their multiplicands, respectively.
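Equations 6.39 through 6.41 can be sketched compactly. The first-order test system below is an assumption made purely for illustration (it is not one of the structures identified in this chapter); its Markov parameters Y_0 = D and Y_k = C A^{k-1} B are generated directly and then fed to the ERA routine.

```python
import numpy as np

def era(Y, n, alpha, beta):
    """ERA (equations 6.39-6.41): realize (A, B, C, D) from Markov parameters Y."""
    m, r = Y[1].shape
    # Generalized Hankel matrices H(0) and H(1), equation 6.39.
    H0 = np.block([[Y[1 + i + j] for j in range(beta)] for i in range(alpha)])
    H1 = np.block([[Y[2 + i + j] for j in range(beta)] for i in range(alpha)])
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    Un, Vn = U[:, :n], Vt[:n, :].T
    sh = np.sqrt(s[:n])
    A = (Un / sh).T @ H1 @ (Vn / sh)     # Sn^{-1/2} Un^T H(1) Vn Sn^{-1/2}
    B = (sh[:, None] * Vn.T)[:, :r]      # Sn^{1/2} Vn^T E_r
    C = (Un * sh)[:m, :]                 # E_m^T Un Sn^{1/2}
    return A, B, C, Y[0]

# Assumed scalar system: x_{k+1} = 0.5 x_k + u_k, y_k = 2 x_k + u_k.
a, b, c, d = 0.5, 1.0, 2.0, 1.0
Y = [np.array([[d]])] + [np.array([[c * a**k * b]]) for k in range(12)]
A, B, C, D = era(Y, n=1, alpha=4, beta=4)
Y1_hat = (C @ B)[0, 0]        # realized first Markov parameter
Y2_hat = (C @ A @ B)[0, 0]    # realized second Markov parameter
```

Here the singular values of H(0) show their "largest drop" after the first one, which is why n = 1 is the right order for this example; the realized Markov parameters reproduce the originals.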
6.4 Application Studies

This section presents two experimental application studies conducted in the Structural Dynamics and Control/
FIGURE 6.6 Comparison of Experimental and Model FRF for the Seismic-AMD Benchmark Structure: Magnitude Plot. um denotes the input command to the AMD; xg ddot denotes the input ground acceleration to the structure; xj ddot denotes the output acceleration of the jth floor; xm ddot denotes the output acceleration of the AMD; xm denotes the output displacement of the AMD; dotted lines are for measurement data; solid lines are for model output.
Earthquake Engineering Laboratory (SDC/EEL) at University of Notre Dame. This section only presents the results pertinent to identification studies discussed so far in this chapter. For detailed information about these experiments, including experimental setups and/or control developments, the reader may refer to Jin et al. (2000), Jin (2002), and Dyke et al. (1994), respectively.
6.4.1 Identification of a 16-Story Structure

The first identification target is a 16-story steel structure model shown in Figure 6.2. The system is excited by an impulse force produced by a PCB hammer, applied individually at the 16th, 12th, 8th, and 4th floors. The accelerations of these floors are selected as the system measurement outputs and are sensed by PCB accelerometers. The goal of the identification is to capture accurately the first five pairs of complex poles of the structure. For this purpose, a DSPT Siglab spectrum analyzer is used to measure the FRF data. The sampling rate is set at 256 Hz, and the frequency resolution is set at 0.125 Hz. The experimental FRF is preconditioned to eliminate the second-order direct current (dc) zeros from the acceleration measurement. The MF parameterization is chosen for the curve-fitting, which is complemented by the ERA method for state-space realization. The key identification parameters are given in Table 6.1. The final discrete-time state-space realization has 10 states. The magnitude and phase plots of its transfer functions are compared to the experimental FRF data in Figures 6.3 and 6.4. Excellent agreement is found in all but the very high frequency range. The mismatch there is primarily due to unmodeled high-frequency dynamics.
FIGURE 6.7 Comparison of Experimental and Model FRF for the Seismic-AMD Benchmark Structure: Phase Plot. Same notations are used as in Figure 6.6.
6.4.2 Identification of the Seismic-Active Mass Driver Benchmark Structure

The second identification target of this discussion is a three-story steel structure model, with an active mass driver (AMD) installed on the third floor to reduce the vibration of the structure due to simulated earthquakes. A picture of the system is given in Figure 6.5. This system has been used recently as the ASCE first-generation seismic-AMD benchmark study. The benchmark structure has two input excitations: the voltage command sent to the AMD by the control computer and the ground acceleration generated by a seismic shaker. The system responses are measured by four accelerometers for the three floors and the AMD and one linear variable differential transformer (LVDT) for the displacement of the AMD. The sampling rate is 256 Hz, and the frequency resolution is 0.0625 Hz. Due to noise and nonlinearity, only the frequency range of 3 to 35 Hz of the FRF data is considered to be accurate and, thus, this range is used for the identification. A preliminary curve-fitting is carried out using the MF parameterization. The identified model matches the experimental data accurately in all but the low-frequency range of channels corresponding to the AMD command input and the acceleration outputs. A detailed analytical modeling of the system reveals that there are four (respectively two) fixed dc zeros from the AMD command input to the structure (respectively AMD) acceleration outputs:

    lim_{s→0} G_{ẍ_i u_m}(s) / s^4 = k_i,    i = 1, 2, 3.        (6.42)

    lim_{s→0} G_{ẍ_m u_m}(s) / s^2 = k_m.        (6.43)
The G_{ẍ_i u_m} and G_{ẍ_m u_m} are the transfer functions from the AMD command input u_m to the structure and AMD acceleration outputs, respectively. The k_i and k_m are the static gains of these transfer functions with the fixed dc zeros removed. These fixed zeros dictate the use of the PM curve-fitting technique to explicitly include such a priori information. Again, key identification parameters are presented in Table 6.1. The outputs of the parameter optimization iterations are
documented in Table 6.2. The final discrete-time state-space realization has eight states, as predicted by the analytical modeling. The magnitude and phase plots of its transfer functions are compared to the experimental FRF data in Figures 6.6 and 6.7. All the input-output channels are identified accurately except for the (5,2) element, which corresponds to the ground acceleration input and the displacement output of the AMD. The poor fitting there is caused by the extremely low signal-to-noise ratio.
6.5 Conclusion

This chapter discusses the identification of linear dynamic systems using frequency domain measurement data. After outlining a general modeling procedure, the two major computation steps, frequency domain curve-fitting and state-space system realization, are illustrated with detailed numerical routines. The algorithms employ TFM models in the form of matrix fraction or polynomial matrix and require, respectively, linear or nonlinear parameter optimizations. Finally, the proposed identification schemes are validated through the modeling of two experimental test structures.
Gang Jin
References

Antsaklis, P., and Michel, A. (1997). Linear systems. New York: McGraw-Hill.
Bayard, D. (1992). Multivariable frequency domain identification via two-norm minimization. Proceedings of the American Control Conference, 1253–1257.
Dennis, Jr., J.E., and Schnabel, R.B. (1996). Numerical methods for unconstrained optimization and nonlinear equations. Philadelphia: SIAM Press.
Dyke, S., Spencer, Jr., B., Belknap, A., Ferrell, K., Quast, P., and Sain, M. (1994). Absolute acceleration feedback control strategies for the active mass driver. Proceedings of the World Conference on Structural Control 2, TP1:51–TP1:60.
Gill, P.E., Murray, W., and Wright, M.H. (1981). Practical optimization. New York: Academic Press.
Jin, G., Sain, M.K., and Spencer, Jr., B.F. (2000). Frequency domain identification with fixed zeros: First generation seismic-AMD benchmark. Proceedings of the American Control Conference, 981–985.
Jin, G. (2002). System identification for controlled structures in civil engineering application: Algorithm development and experimental verification. Ph.D. Dissertation, University of Notre Dame.
Juang, J.N. (1994). Applied system identification. Englewood Cliffs, NJ: Prentice Hall.
Stewart, G.W. (1973). Introduction to matrix computations. New York: Academic Press.
7 Modeling Interconnected Systems: A Functional Perspective

Stanley R. Liberty
Academic Affairs, Bradley University, Peoria, Illinois, USA
7.1 Introduction ................................................ 1079
7.2 The Component Connection Model ................ 1079
    7.2.1 Detailed Insight to Component Connection Philosophy
7.3 System Identification .................................... 1082
7.4 Simulation .................................................. 1082
7.5 Fault Analysis ............................................. 1082
7.6 Concluding Remarks .................................... 1083
References ...................................................... 1083

7.1 Introduction

This chapter, adapted from Liberty and Saeks (1973, 1974), contains a particular viewpoint on the mathematical modeling of systems that focuses on an explicit algebraic description of system connection information. The resulting system representation is called the component connection model. The component connection approach to system modeling provides useful ways to tackle a variety of system problems and has significant conceptual value. The general applicability of this viewpoint to some of these system problems is presented here at a conceptual level. Readers interested in specific examples illustrating the component connection model should first refer to DeCarlo and Saeks (1981). Additional references are provided on specific applications of the component connection philosophy to system problems. Readers with a clear understanding of the general concept of function should have no difficulty comprehending the material in this section.

7.2 The Component Connection Model

A system, in simplistic mathematical terms, is a mapping from a set of inputs (signals) to a set of outputs (signals) (i.e., an input/output relation with a unique output corresponding to each input). (In using this external system description, the context of an internal system state has been deliberately suppressed for simplicity of presentation in this chapter.) This may be abstractly notated by:

    y = Su,
(7.1)
where S is the symbol for the input/output relation (the system). On the other hand, one may think of the physical system symbolized by S as an interconnection of components. Engineering experience explains that a particular S is determined by two factors: the types of components in the system and the way in which the components are interconnected. This latter observation on the physical structure of a system gives rise to the component connection viewpoint. Again, thinking of a physical system, the system's connection structure can be held fixed while the types of system components or their values change. Experience explains that for each distinct set of components or component values, a unique input/output relation is determined. From a mathematical perspective, a given connection description determines a mapping of the internal component parameters to the input/output relation. Now, one can generalize this slightly by thinking of the physical system as an interconnection of subsystems, each with its own input/output relation. Note that a subsystem may be either a discrete component, an interconnection of components, or an interconnection of ``smaller'' subsystems. This hierarchical structure is germane to the component connection philosophy and is precisely the fundamental concept of large-scale system theory. Thinking in this way, one notes that a given connection description determines a system-valued function of a system-valued variable. Because this function is entirely determined by the connections, it is called the connection function:

    S = f(Z),
(7.2)
where Z symbolizes the internal subsystem descriptions or component parameters and where f symbolizes the connection function. In the special case where the subsystems represented by Z are linear and time-invariant, Z may be interpreted as a matrix of component transfer functions, gains, or immittances (the precise physical interpretation is unimportant), which is also the case for S. One should not, however, restrict the thinking process to linear time-invariant systems. In addition, it is important to point out that, even when all of the subsystems are linear, the connection function f is nonlinear. What may be surprising is that for most commonly encountered systems, the nonlinear function f can be completely described by four matrices that permit the connection function to be manipulated quite readily. Indeed, feasible analytical and computational techniques based on this point of view have been developed and applied to problems in system analysis (Singh and Trauboth, 1973; Prasad and Reiss, 1970; Trauboth and Prasad, 1970), sensitivity studies (Ransom, 1973, 1972; Ransom and Saeks, 1975; DeCarlo, 1983), synthesis (Ransom, 1973), stability analysis (Lu and Liu, 1972), optimization (Richardson, 1969; Richardson et al., 1969), well-posedness (Ransom, 1972; Singh, 1972), and fault analysis (Ransom, 1973, 1972; Saeks, 1970; Saeks et al., 1972; Ransom and Saeks, 1973a, 1973b; DeCarlo and Rapisarda, 1988; Reisig and DeCarlo, 1987; Rapisarda and DeCarlo, 1983; Garzia, 1971). The reason for this core of linearity in the f description is that connection information is generally contained in ``conservation'' laws that, upon the proper choice of system variables, yield linear ``connection'' equations. To obtain an intuitive feel for what types of manipulations one performs on f to solve the systems problems previously mentioned, consider, for example, a classical passive network synthesis problem.
In this problem, one is given the input/output relation S, and one knows what type of network structure is desired. For example, a ladder synthesis problem statement consists of a transfer function specification and the ``connection information'' of ladder structure. The S and f are specified, and it is desired to determine Z (the components). Clearly, to find Z, one must invert f. In this case, one seeks a right inverse of f because the uniqueness of Z is not a concern and f is viewed as ``onto'' the class of transfer functions of interest. If, on the other hand, one is attempting to identify the value of a component in an interconnected system of components given external measurements and knowledge of the connections, then one seeks a left inverse of f. Here, the concern may be with uniqueness, and f is viewed as one-to-one and into the set of input/output relations. Another classical system problem encountered in design analysis is that of determining the sensitivity of the input/output relation S to variations in certain component values. In this case, differentiation of the connection function f with respect to the component parameters of interest is the major constituent of the sensitivity analysis. Specific applications of the component connection model to sensitivity analysis are contained in Ransom (1973, 1972), Ransom and Saeks (1975), and DeCarlo (1983). The conceptual description just described should provide the reader with a ``feel'' for the component connection philosophy. This chapter now provides more detailed insight.
7.2.1 Detailed Insight to Component Connection Philosophy

Although an abstract functional interpretation of the connections in a system is conceptually valid, it is of no practical value unless one has a specific and computationally viable representation of the function. Fortunately, there is such a representation, and the conceptual problem of inverting or differentiating the connection function may actually be carried out. Classically, in circuit and system theory, connections are represented by some type of graph (e.g., linear graph, bond graph, signal flow graph) or block diagram. This graphical depiction of connection information can readily be converted into a set of linear algebraic constraints on the circuit or system variables for subsequent analysis. As such, it is natural to adopt an algebraic model of the connections from the start. The precise form of the component connection model, with its algebraic description of a system's connection information, may be seen by examining Figure 7.1. Figure 7.1(A) shows a box depiction of a system with inputs u and outputs y. The inner box labeled Z represents the system components, and the outer donut-shaped area represents the system connections (e.g., scalers, adders, Kirchhoff law constraints, etc.). Now, if Z represents a matrix of component input/output relations, the following may be abstractly written:

    b = Za,        (7.3)
where b denotes the vector of component output variables and a the vector of component input variables. The component connection model can be obtained by redrawing the system of Figure 7.1(A) as in Figure 7.1(B), where the components and connections have been separated. Thus, the overall system is composed of two interconnected boxes. One box contains the components as described by equation 7.3, and the second donut-shaped box contains the connections. The donut-shaped box has inputs u and b and outputs a and y. Finally,
since the connections are characterized entirely by linear algebraic constraints, the donut-shaped box (i.e., the connections) may be mathematically modeled by the matrix equation:

    [ a ]   [ L11  L12 ] [ b ]
    [ y ] = [ L21  L22 ] [ u ].        (7.4)

FIGURE 7.1 A System as an Interconnection of Components: (A) System with Inputs/Outputs; (B) Separated Components and Connections.

The component connection model is thus a pair of equations, 7.3 and 7.4, with the matrix Z representing the components and the set of four L matrices representing the connections. An interesting observation can be made here. In the very special case where each component is an integrator, the following is true:

    ḃ = a.        (7.5)

Substituting equation 7.5 into 7.4 yields:

    ḃ = L11 b + L12 u.
    y = L21 b + L22 u.        (7.6)
The reader will recognize these equations as the familiar ‘‘state model’’ of linear dynamical system theory. Thus, intuitively, the component connection model may be viewed as a generalization of the linear state model. In fact, the L matrices describe how to interconnect subsystems to form a large-scale system S, just as integrators used to be patched together on an analog computer to simulate a system. If one views the L matrices as a generalization of the state model, where the integrators have been replaced by general components, a beneficial cross fertilization between component connection theory and classical system theory results. It is possible and indeed useful to talk about controllable or observable connections (Singh, 1972). The component connection model was first used intuitively by Prasad and Trauboth (Singh and Trauboth, 1973; Prasad and Reiss, 1970; Trauboth and Prasad, 1970) and formalized by Saeks (Saeks, 1970; Saeks et al., 1972) for application in the
fault isolation problem. Existence conditions for the L matrices were first studied by Prasad (Prasad and Reiss, 1970) and later by Singh (1972), who have shown that the existence of L matrices is a reasonable assumption. This is essentially the same type of assumption that one makes when assuming that the state equations of a system exist. As seen above, if the components in the system are integrators, then the L matrices are precisely the state matrices. The advantage of the component connection model over the classical graphical and diagrammatic connection models is that, being inherently algebraic, the component connection model is readily manipulated and amenable to numerical techniques. Moreover, this model unifies the various graphical and diagrammatic models. In fact, electronic circuits, which are commonly described by hybrid block diagram-linear graph models, are handled as readily as passive circuits or analog computer diagrams [see DeCarlo and Saeks (1981) for some simple examples]. Using the component connection model, the desired representation for the connection function is obtained. This is illustrated by a linear example since the concept can be more intuitively understood in the linear case. It is important to point out again, however, that more generality is actually present. Viewing Z as a matrix of transfer functions, the overall system transfer function is desired. Simultaneous solution of equations 7.1, 7.3, and 7.4 yields:

    S = L22 + L21 (1 - ZL11)^{-1} Z L12 = f(Z),        (7.7)
and we have a representation of the connection function in terms of the four L matrices even though the connection function is nonlinear in Z. This matrix representation facilitates manipulation of the connection function as shown later (Ransom, 1973, 1972; Saeks, 1970; Ransom and Saeks, 1973b, 1973a). The first observation that the connection function could be so represented was made by Saeks (1970), while the first exploitation of the concept is attributable to Ransom (1973, 1972) and Saeks (1973b, 1973a).
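With the four L matrices in hand, equation 7.7 is directly computable. A small numerical sketch follows (all matrices below are assumed, randomly generated test data), cross-checked by eliminating a and b from equations 7.3 and 7.4 for one particular input:

```python
import numpy as np

def connection_function(Z, L11, L12, L21, L22):
    """Evaluate S = f(Z) = L22 + L21 (I - Z L11)^{-1} Z L12 (equation 7.7)."""
    I = np.eye(Z.shape[0])
    return L22 + L21 @ np.linalg.solve(I - Z @ L11, Z) @ L12

# Assumed sizes: 3 component signals, 2 external inputs, 2 external outputs.
rng = np.random.default_rng(3)
Z = 0.3 * rng.standard_normal((3, 3))
L11 = 0.3 * rng.standard_normal((3, 3))
L12 = rng.standard_normal((3, 2))
L21 = rng.standard_normal((2, 3))
L22 = rng.standard_normal((2, 2))
S = connection_function(Z, L11, L12, L21, L22)

# Cross-check: solve b = Z a with a = L11 b + L12 u, then form y = L21 b + L22 u.
u = np.array([1.0, -1.0])
b = np.linalg.solve(np.eye(3) - Z @ L11, Z @ L12 @ u)
y = L21 @ b + L22 @ u   # must equal S @ u
```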
7.3 System Identification

In a system identification problem, one is normally asked to identify the mathematical input/output relation S from input/output data. In theory, this is straightforward if the system is truly linear (Chang, 1973), but in practice this may be difficult due to noisy data and finite precision arithmetic. The nonlinear case is difficult in general because series approximations must be made that may not converge rapidly enough for large signal modeling (Bello et al., 1973). In both linear and nonlinear identification techniques, random inputs are commonly used as probes (Bello et al., 1973; Cooper and McGillem, 1971; Lee and Schetzen, 1961). The identification of S is then carried out by mathematical operations on the system input and output autocorrelations and the cross-correlations between input and output. This section does not contain a discussion of such techniques, but some observations are presented. It should be noted that if only input/output information is assumed known (as is the case in standard identification techniques), then system identification does not supply any information on the system structure or the types or values of components. In such cases, the component connection model cannot be used. In many applications, however, connection information and component type information are available. In such cases, the component connection model may be applicable. Indeed, for identification schemes using random inputs, it is easily shown that the component correlation functions and the overall system correlation functions are related by:

    R_yy = L21 R_bb L21^T + R_yu L22^T + L22 R_uy - L22 R_uu L22^T.        (7.8)

    R_aa = L11 R_bb L11^T + L11 R_bu L12^T + L12 R_ub L11^T + L12 R_uu L12^T.        (7.9)

    L21 R_bu = R_yu - L22 R_uu.        (7.10)

    R_ab = L11 R_bb + L12 R_ub.        (7.11)
The R_jk is the cross-correlation of signal j and signal k. If the left inverse of L21 exists, then the subsystem correlation functions can be determined and used to identify the subsystems via standard identification techniques. However, in general, this left inverse does not exist, and either the type information or the pseudo-inverse of L21 must be used to yield subsystem models.
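A sketch of the observation around equation 7.10: when L21 has a left inverse, the component cross-correlation R_bu can be recovered from external correlation data. Everything below is assumed, synthetic test data, with pinv standing in for the left inverse (or pseudo-inverse) of L21.

```python
import numpy as np

# Assumed sizes: 5 outputs, 3 component signals, 2 inputs; a tall Gaussian L21
# has full column rank almost surely, so a left inverse exists here.
rng = np.random.default_rng(4)
L21 = rng.standard_normal((5, 3))
L22 = rng.standard_normal((5, 2))
R_bu = rng.standard_normal((3, 2))        # "true" component cross-correlation
R_uu = rng.standard_normal((2, 2))
R_yu = L21 @ R_bu + L22 @ R_uu            # external data per equation 7.10

# Recover the component-level correlation from the external one.
R_bu_hat = np.linalg.pinv(L21) @ (R_yu - L22 @ R_uu)
```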
7.4 Simulation

There are numerical challenges in computer simulation of large-scale systems. These difficulties are due to the simultaneous solution of large numbers of differential equations. The efficiency in solving a single differential equation is entirely determined by the numerical techniques employed. However, in analyzing a large-scale system from the component connection viewpoint, the starting point is a specified collection of decoupled component differential equations and a set of coupled algebraic connection equations. Thus, it is possible to take advantage of the special form of the equations characterizing a large-scale dynamical system and to formulate analysis procedures that are more efficient than those that might result from a purely numerical study of the coupled differential equations characterizing the entire system. Such a procedure, namely a relaxation scheme, wherein one integrates each of the component differential equations by separately iterating through the connection equations to obtain the solution of the overall system, was implemented over 30 years ago at the Marshall Space Flight Center (Prasad and Reiss, 1970; Trauboth and Prasad, 1970; Saeks, 1969). The feasibility of the scheme was verified, and significant computer memory savings resulted. The key to this approach to the numerical analysis of a large-scale system is that numerical routines are applied to the decoupled composite component differential equations rather than to an overall equation for the entire system. Thus, the complexity of the numerical computations is determined by the complexity of the largest component in the system. In contrast, if the numerical procedures are applied directly to a set of differential equations characterizing the entire system, the complexity of the numerical computations is determined by the complexity of the entire system (the sum of the complexities of all the components). Since 1970, computer simulation software has evolved substantially, and packages that deal with large-scale systems either explicitly or implicitly exploit the component connection philosophy. One of the more recent software developments for modeling and simulation of large-scale heterogeneous systems is an object-oriented language called Modelica; see Mattson et al. (1998) for additional information.
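The relaxation idea — integrate each component differential equation on its own and couple the components only through the algebraic connection equations — can be sketched with forward-Euler steps. The two first-order components and the L matrices below are assumptions chosen purely for illustration.

```python
import numpy as np

# Two assumed components, each a first-order lag: xdot_i = -x_i + a_i, b_i = x_i.
# Connections (equation 7.4): a = L11 b + L12 u, y = L21 b + L22 u.
L11 = np.array([[0.0, 0.5], [0.5, 0.0]])   # each component drives the other
L12 = np.array([[1.0], [0.0]])             # external input enters component 1
L21 = np.array([[0.0, 1.0]])               # output is component 2's response
L22 = np.array([[0.0]])

def simulate(u_fn, t_end=20.0, dt=1e-3):
    x = np.zeros(2)
    t = 0.0
    while t < t_end:
        a = L11 @ x + L12 @ u_fn(t)        # algebraic connection equations
        x = x + dt * (-x + a)              # each component stepped independently
        t += dt
    return (L21 @ x + L22 @ u_fn(t))[0]

y_final = simulate(lambda t: np.array([1.0]))
# Steady state by hand: x1 = 0.5 x2 + 1 and x2 = 0.5 x1 give x2 = 2/3.
```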
7.5 Fault Analysis

Fault analysis is similar to system identification in that the system is re-identified to determine whether the component input/output relations have changed. It should be clear that connection information is essential to the detection of a faulty component. As before, note that fault analysis is not restricted to linear systems; to illustrate the technique, however, the discussion will focus on the linear time-invariant situation. To attack the fault analysis problem, assume that a set of external system parameters measured at a finite set of frequencies, S(ωᵢ), i = 1, 2, . . . , n, is given along with the connection matrices L11, L12, L21, and L22. The goal is to compute or approximate Z(ω). For simplicity of presentation, assume that L22 = 0. There is no loss of generality in this assumption since one can always replace S(ωᵢ) by S̃(ωᵢ) = S(ωᵢ) − L22 and then work with the system whose measured external parameters are given by S̃ and whose connection matrices are L̃11 = L11, L̃12 = L12, L̃21 = L21, and L̃22 = 0.
7 Modeling Interconnected Systems: A Functional Perspective

In the most elementary form of the fault analysis problem, assume that S(ω) is given at only a single frequency and compute Z(ω) exactly. This form of the problem was first studied by Saeks (1970) and then extended by Saeks and colleagues (Ransom and Saeks, 1975; Saeks et al., 1972). Its solution is based on the observation that the connection function f can be decomposed into two functions (under the assumption that L22 = 0) as:

f = h ∘ g. (7.12)
Here, g is a nonlinear function that maps the component parameter matrix Z into an intermediary matrix R via:

R = g(Z) = (1 − Z L11)^{−1} Z, (7.13)
and h is a linear function that maps R into the external parameter matrix S via:

S = h(R) = L21 R L12. (7.14)
The left inverse of f is given in terms of g and h via:

f^L = g^L ∘ h^L. (7.15)
A little algebra will reveal that g^L always exists and is given by the formula:

Z = g^L(R) = (1 + R L11)^{−1} R. (7.16)
Thus, the fault isolation problem is reduced to that of finding the left inverse of the linear function h. This is most easily done by working with the matrix representation of h as:

h = [L12^T ⊗ L21], (7.17)
where ⊗ is the matrix tensor (Kronecker) product (Ransom, 1972; Saeks et al., 1972; Rao and Mitra, 1971). Standard matrix algebraic techniques can be used to compute h^L. Indeed, the rows of h can each be identified with external input/output "gains." Thus, the problem of choosing a minimal set of external parameters for fault analysis is reduced to the problem of choosing a minimal set of rows of h that renders its columns linearly independent (Singh, 1972). There are difficulties with this technique in certain cases, but fortunately, a number of approaches have been developed for alleviating them (Ransom, 1972, 1973; Ransom and Saeks, 1973b).
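Equations 7.12 through 7.17 can be exercised numerically. The connection matrices and component parameter matrix below are hypothetical illustrations (with L22 = 0 as assumed in the text), and the left inverse h^L is computed with a pseudo-inverse, which reduces to the exact inverse here because h is square and nonsingular.

```python
import numpy as np

# Hypothetical connection matrices with L22 = 0.
L11 = np.array([[0.0, 0.2],
                [0.1, 0.0]])
L12 = np.array([[1.0, 0.0],
                [0.3, 1.0]])
L21 = np.array([[1.0, 0.5],
                [0.0, 1.0]])

def external_params(Z):
    """f = h o g: map component parameters Z to external parameters S."""
    R = np.linalg.solve(np.eye(2) - Z @ L11, Z)   # g: R = (1 - Z L11)^-1 Z
    return L21 @ R @ L12                          # h: S = L21 R L12

# "True" (possibly faulty) component parameter matrix and its
# measured external parameters.
Z_true = np.array([[2.0, 0.4],
                   [0.1, 1.5]])
S_meas = external_params(Z_true)

# Matrix representation of h: with column-major vec ordering,
# vec(L21 R L12) = (L12^T kron L21) vec(R), equation 7.17.
h = np.kron(L12.T, L21)
hL = np.linalg.pinv(h)                            # left inverse h^L

R_est = (hL @ S_meas.flatten(order='F')).reshape(2, 2, order='F')
Z_est = np.linalg.solve(np.eye(2) + R_est @ L11, R_est)  # g^L, eq. 7.16

print(np.allclose(Z_est, Z_true))                 # → True
```

The pseudo-inverse plays the role of h^L; with more measured external gains than unknown entries of R, the same code selects a least-squares solution, which is in the spirit of the minimal-row selection problem mentioned above.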
7.6 Concluding Remarks

Although this chapter has not provided a complete or detailed explanation of the component connection philosophy, the reader should now have a fundamental understanding of the power and utility of the philosophy. Even though the component connection approach to modeling large-scale systems originated over three decades ago, its full potential has likely not yet been realized. There are several areas of research in which the component connection approach might lead to new results; one of these is optimal decentralized control of large-scale systems.
References

Bello, P.A. et al. (1973). Nonlinear system modeling and analysis. Report RADC-TR-73-178. Rome Air Development Center, AFSC, Griffiss AFB, New York.
Chang, R.R. (1973). System identification from input/output data. M.S. Thesis, Department of Electrical Engineering, University of Notre Dame.
Cooper, G.R., and McGillem, C.D. (1971). Probabilistic methods of signal and system analysis. New York: Holt, Rinehart, and Winston.
DeCarlo, R.A. (1983). Sensitivity calculations using the component connection model. International Journal of Circuit Theory and Applications 12(3), 288–291.
DeCarlo, R.A., and Rapisarda, L. (1988). Fault diagnosis under a limited-fault assumption and limited test-point availability. IEEE Transactions on Circuits Systems and Signal Processing 4(4), 481–509.
DeCarlo, R.A., and Saeks, R.E. (1981). Interconnected dynamical systems. New York: Marcel Dekker.
Garzia, R.F. (1971). Fault isolation computer methods. NASA Technical Report CR-1758. Marshall Space Flight Center, Huntsville.
Lee, Y.W., and Schetzen, M. (1961). Quarterly progress report 60. Research Laboratory of Electronics. Cambridge: MIT.
Liberty, S.R., and Saeks, R.E. (1973). The component connection model in system identification, analysis, and design. Proceedings of the Joint EMP Technical Meeting. 61–76.
Liberty, S.R., and Saeks, R.E. (1974). The component connection model in circuit and system theory. Proceedings of the European Conference on Circuit Theory and Design. 141–146.
Lu, F., and Liu, R.W. (1972). Stability of large-scale dynamical systems. Technical Memorandum EE-7201, University of Notre Dame.
Mattson, S.E., Elmqvist, H., and Otter, M. (1998). Physical system modeling with Modelica. Journal of Control Engineering Practice 6, 501–510.
Prasad, N.S., and Reiss, J. (1970). The digital simulation of interconnected systems. Proceedings of the Conference of the International Association of Cybernetics.
Ransom, M.N. (1972). A functional approach to large-scale dynamical systems. Proceedings of the 10th Allerton Conference on Circuits and Systems. 48–55.
Ransom, M.N. (1972). On state equations of RLC large-scale dynamical systems. Technical Memorandum EE-7214, University of Notre Dame.
Ransom, M.N. (1973). A functional approach to the connection of large-scale dynamical systems. Ph.D. Thesis, University of Notre Dame.
Ransom, M.N., and Saeks, R.E. (1973a). Fault isolation with insufficient measurements. IEEE Transactions on Circuit Theory CT-20(4), 416–417.
Ransom, M.N., and Saeks, R.E. (1973b). Fault isolation via term expansion. Proceedings of the Third Pittsburgh Symposium on Modeling and Simulation. 224–228.
Ransom, M.N., and Saeks, R.E. (1975). The connection function: theory and application. IEEE Transactions on Circuit Theory and Applications 3, 5–21.
Rao, C.R., and Mitra, S.K. (1971). Generalized inverse of matrices and its applications. New York: John Wiley & Sons.
Rapisarda, L., and DeCarlo, R.A. (1983). Analog multifrequency fault diagnosis. IEEE Transactions on Circuits and Systems CAS-30(4), 223–234.
Reisig, D., and DeCarlo, R.A. (1987). A method of analog-digital multiple fault diagnosis. IEEE Transactions on Circuit Theory and Applications 15, 1–22.
Richardson, M.H. (1969). Optimization of large-scale discrete-time systems by a component problem method. Ph.D. Thesis, University of Notre Dame.
Richardson, M.H., Leake, R.J., and Saeks, R.E. (1969). A component connection formulation for large-scale discrete-time system optimization. Proceedings of the Third Asilomar Conference on Circuits and Systems. 665–670.
Saeks, R.E. (1969). Studies in system simulation. NASA Technical Memorandum 53868. George Marshall Space Flight Center, Huntsville.
Saeks, R.E. (1970). Fault isolation, component decoupling, and the connection groupoid. NASA-ASEE Summer Faculty Fellowship Program Research Reports. 505–534. Auburn University.
Saeks, R.E., Singh, S.P., and Liu, R.W. (1972). Fault isolation via component simulation. IEEE Transactions on Circuit Theory CT-19, 634–660.
Singh, S.P. (1972). Structural properties of large-scale dynamical systems. Ph.D. Thesis, University of Notre Dame.
Singh, S.P., and Trauboth, H. (1973). MARSYAS. IEEE Circuits and Systems Society Newsletter 7.
Trauboth, H., and Prasad, N.S. (1970). MARSYAS: A software system for the digital simulation of physical systems. Proceedings of the Spring Joint Computing Conference. 223–235.

Stanley R. Liberty
8 Fault-Tolerant Control

Gary G. Yen
Intelligent Systems and Control Laboratory, School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, Oklahoma, USA
8.1 Introduction ....................................................................................... 1085
8.2 Overview of Fault Diagnosis and Accommodation .................................... 1085
8.3 Problem Statement ............................................................................... 1088
8.4 Online Fault Accommodation Control .................................................... 1089
      8.4.1 Theorem 1 · 8.4.2 Online Learning of the Failure Dynamics
8.5 Architecture of Multiple Model-Based Fault Diagnosis and Accommodation ... 1094
      8.5.1 The Dilemma of Online Fault Detection and Diagnosis
8.6 Simulation Study and Discussions .......................................................... 1096
      8.6.1 Online Fault Accommodation Technique for Unanticipated System Failures · 8.6.2 Intelligent FDA Framework · 8.6.3 False Alarm Situations · 8.6.4 Comments and Discussions
8.7 Conclusion ......................................................................................... 1103
References .............................................................................................. 1103
8.1 Introduction

While most research attention has been focused on fault detection and diagnosis, much less effort has been dedicated to "general" failure accommodation, mainly because of the lack of well-developed control theory and techniques for general nonlinear systems. Because of the inherent complexity of nonlinear systems, most model-based analytical redundancy fault diagnosis and accommodation studies deal with linear systems subject to simple additive or multiplicative faults. This assumption has limited their effectiveness and usefulness in practical applications. In this research work, the online fault accommodation control problems under catastrophic system failures are investigated. The main interest is dealing with unanticipated system component failures in the most general formulation. Through discrete-time Lyapunov stability theory, the necessary and sufficient conditions to guarantee online system stability and performance under failures are derived, and a systematic procedure and technique for proper fault accommodation under unanticipated failures are developed. The approach is to combine the control technique derived from discrete-time Lyapunov theory with a modern intelligent technique capable of self-optimization and online adaptation for real-time failure estimation. A complete architecture of fault diagnosis and accommodation is also presented by incorporating the developed intelligent fault-tolerant control (FTC) scheme with a cost-effective fault-detection scheme and a multiple model-based failure diagnosis process to efficiently handle false alarms and the accommodation of both anticipated and unanticipated failures in online situations.
8.2 Overview of Fault Diagnosis and Accommodation

Because of the increasing demands for system safety and reliability in modern engineering design, research issues dealing with failure diagnosis and accommodation have attracted significant attention in the control community, as the first page of references at the end of this chapter indicates. System failures caused by unexpected interference or the aging of system components may result in changes to the system dynamics. Thus, the original design under the fault-free condition is no longer reliable and may lead to instability. More serious problems, such as the survivability of the system, arise when instability may cause the loss of human life. The research work dealing with these underlying problems is usually referred to as fault diagnosis and accommodation (FDA). The major objectives of FDA, or FTC, are to detect and isolate the encountered failures and to take the necessary actions to prevent the system from becoming unstable and to maintain the control mission. Traditional FDA approaches are based on so-called hardware redundancy, where extra components are used as backups in case of failures. Because of the additional cost, space, weight, and complexity of incorporating redundant hardware, model-based methods (in the spirit of analytical redundancy) with inexpensive and high-performance microprocessors have dominated FDA research activities. The major reason for the prevalence of this approach is that information processing techniques using powerful computing devices and memory systems can establish the necessary redundancy without additional hardware instrumentation in the system (Polycarpou and Helmicki, 1995). Under model-based analytical redundancy, system behaviors are compared with the values obtained analytically through a mathematical model; the resulting differences are the so-called residuals (Gertler, 1988). In the ideal situation, the residuals will be zero in the fault-free system, and any deviation will be interpreted as an indication of faults. In practice, however, this is rarely true in the presence of measurement noise, disturbances, and modeling errors: the deviation can be the combined result of noise, disturbances, modeling errors, and faults. Naturally, in the presence of significant noise, statistical analysis of the residuals becomes a reasonable procedure to generate a logical pattern, called the signature of the failure, for
the proper fault detection and isolation. Many research activities have also been dedicated to investigating proper residual generation to facilitate the fault isolation process (Garcia and Frank, 1999; Garcia et al., 1998; Gertler and Hu, 1999; Gertler et al., 1995; Gertler and Kunwer, 1995). Generally speaking, residual generation, statistical testing, and logical analysis are usually combined as the three stages of fault detection and isolation (Gertler, 1988).

Due to the inherent complexity of nonlinear systems, most model-based analytical redundancy fault diagnosis studies deal with linear systems subject to simple additive or multiplicative faults. This assumption has limited their effectiveness and usefulness in practical applications (Polycarpou and Helmicki, 1995; Polycarpou and Vemuri, 1995). A series of research works devoted to more general failure cases is reported in Polycarpou and Helmicki (1995), Polycarpou and Vemuri (1995), Zhang et al. (1999, 2000), Vemuri and Polycarpou (1997), and Trunov and Polycarpou (1999). Although significant progress has been made in the theoretical analysis of fault detection and sensitivity conditions, more practical and complicated fault detection and diagnosis (FDD) problems, such as the detection and diagnosis of possible multiple failures, remain to be solved. The representative fault diagnosis methods are shown in Figure 8.1.

FIGURE 8.1 Typical FD Methods. (The figure classifies fault diagnosis approaches into model-free methods (limit checking, special sensors, multiple sensors, and frequency analysis); knowledge-based approaches (expert systems that combine analytical redundancy with heuristic knowledge); hardware redundancy approaches; and model-based methods, which replace hardware redundancy with a mathematical model and powerful computing devices: residual generation, statistical testing, and logical analysis, also referred to as model-based analytical redundancy, and the interactive multiple-model (IMM) approach.)

Similar to the fault detection and diagnosis research work, most fault accommodation schemes are designed based on the powerful and well-developed linear design methodology to obtain the desired objectives. Typical failure accommodation techniques include the pseudo-inverse or model-following method (Gao and Antsaklis, 1991, 1992), eigenstructure assignment (Jiang, 1994), LQC (Sauter et al., 1998), additive compensation for sensor and actuator failures (Noura et al., 2000; Theilliol et al., 1998), reconfigurable control with parameter identification (Rauch, 1995; Bodson and Groszkiewicz, 1997), state-space pole placement (Jiang and Zhao, 1998, 1999), and the (interactive) multiple model method (IMM) (Maybeck and Stevens, 1991; Zhang and Li, 1998; Zhang and Jiang, 1999), as shown in Figure 8.2.

FIGURE 8.2 Typical FA Approaches. (The figure classifies fault accommodation techniques into hardware redundancy (switching to a backup system or component) and model-based analytical redundancy. The latter covers linear systems (the pseudo-inverse method, parameter-identification reconfigurable control, LQC, additive compensation for sensor and actuator failures, state-space pole placement, the EA technique, and IMM) and nonlinear systems, where learning approaches and intelligent DSMC address systems linear in control with simple failures; for general cases, no technique is available.)

However, linearity is rarely the case in practice since all systems are inherently nonlinear, and the system dynamics under failure situations are more likely to be nonlinear and time varying. Failure situations can be further categorized into anticipated and unanticipated faults: the anticipated ones are known faults based on prior knowledge of the system or the history of system behavior, and the unanticipated ones are unexpected failure situations that have to be identified online. In general, the recognition and accommodation of anticipated failures are considered relatively easier to solve because of the availability of prior information and sufficient time for the development of solutions (i.e., off-line). The details of the systematic procedure for proper failure isolation and accommodation of anticipated faults, such as the generation of residuals, fault signature identification, and the selection logic
of the proper control actions, however, are still left unanswered. For unanticipated failures in their most general form, online identification and accommodation are even more difficult and have rarely been addressed. When a dynamic system encounters failures caused by unexpected interference, it is not reasonable to assume particular types of dynamic change caused by those failures. Of course, the faulty system may be uncontrollable if the failure is serious and fatal; if the failed system is still controllable, it is crucial to take early action to properly control the system behavior in time to prevent the failure from causing more serious loss. Because successful fault accommodation relies on precise failure diagnosis, and both processes have to be achieved online and in real time for unanticipated failures, it has been almost impossible to guarantee system safety based solely on contemporary control techniques with insufficient and/or imprecise information about the failure. Nevertheless, a technology breakthrough will not happen overnight; it occurs with slow progress, one step at a time. An early work (Yen and Ho, 2000), where two online control laws were developed and reported for a special nonlinear system with unanticipated component failures, focuses on dealing with the general unanticipated system component failures for
the general nonlinear system. Through discrete-time Lyapunov stability theory, the necessary and sufficient conditions to guarantee the system online stability and performance under failures were derived, and a systematic procedure and technique for proper fault accommodation under unanticipated failures were developed. The approach is to combine the control technique derived from discrete-time Lyapunov stability theory with the modern intelligent technique that is capable of self-optimization and online adaptation for real-time failure estimation. A more complete architecture of fault diagnosis and accommodation has also been presented by incorporating the developed intelligent fault-tolerant control technique with a cost-effective fault detection scheme and a multiple modelbased failure diagnosis process to efficiently handle the false alarms, to accommodate the anticipated failures, and to reduce the unnecessary control effort and computational complexity in online situations. This chapter is organized as follows. In Section 8.3, the online fault accommodation control problems of interest are defined. In Section 8.4, through theoretical analysis, the necessary and sufficient conditions to maintain the system’s online stability are derived together with the proposed intelligent fault accommodation technique. The complete multiple model-based FDA architecture is presented in Section 8.5 with the suggested cost-effective fault detection and diagnosis scheme. An online simulation study is provided in Section 8.6 to demonstrate the effectiveness of the proposed online accommodation technique in various failure scenarios. The conclusion is included in Section 8.7 with the discussion of the current and future research directions.
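The residual idea at the heart of the model-based analytical redundancy reviewed in this section can be sketched minimally. The first-order plant, the fault (a drifted pole), the noise level, and the alarm threshold below are all hypothetical choices for illustration, not from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nominal model: y(k+1) = 0.9 y(k) + 0.1 u(k).
a_nom, b_nom = 0.9, 0.1

def simulate(a, b, u, noise_std=0.01):
    """Simulate a noisy first-order plant with parameters (a, b)."""
    y = np.zeros(len(u) + 1)
    for k in range(len(u)):
        y[k + 1] = a * y[k] + b * u[k] + rng.normal(0.0, noise_std)
    return y

def residuals(y, u):
    """Residual = measured output minus model-predicted output."""
    return y[1:] - (a_nom * y[:-1] + b_nom * u)

u = np.ones(200)
y_ok = simulate(a_nom, b_nom, u)          # fault-free run
y_faulty = simulate(0.7, b_nom, u)        # fault: plant pole drifts to 0.7

threshold = 0.04                          # set above the noise level
alarm_rate_ok = np.mean(np.abs(residuals(y_ok, u)) > threshold)
alarm_rate_faulty = np.mean(np.abs(residuals(y_faulty, u)) > threshold)
print(alarm_rate_ok, alarm_rate_faulty)   # near 0 vs. persistently high
```

In the fault-free run the residual is essentially the sensor noise, so alarms are rare; under the fault the residual carries the term (0.7 − 0.9)·y(k), so a simple statistical test on the residual reveals the signature of the failure.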
8.3 Problem Statement

A general n-input, m-output dynamic system can be described by equation 8.1:

y_l(k + d) = f_l(y_1, y_2, . . . , y_m, u_1, u_2, . . . , u_n), (8.1)

where:

y_i = {y_i(k + d − 1), y_i(k + d − 2), . . . , y_i(k + d − p_i)};
u_j = {u_j(k), u_j(k − 1), . . . , u_j(k − q_j)};
p_i, q_j ∈ ℤ+.

For S(k) > 0:

(Y(k) + S(k))(a + 1/Δt)^{−1} > N ŷ(k + 1) + N ỹ(k + 1) + nf y(k + 1) + nf ỹ(k + 1) > (Y(k) − S(k))(a + 1/Δt)^{−1}. (8.11)
For S(k) < 0, the inequalities become:

(Y(k) − S(k))(a + 1/Δt)^{−1} > N ŷ(k + 1) + N ỹ(k + 1) + nf y(k + 1) + nf ỹ(k + 1) > (Y(k) + S(k))(a + 1/Δt)^{−1}. (8.12)
Modern intelligent optimization techniques, such as genetic algorithms, immune algorithms, simulated annealing, and reinforcement learning, have been exploited in a variety of areas and applications (Chun et al., 1997, 1998; Marchesi et al., 1994; Tanaka and Yoshida, 1999; Marchesi, 1998; Juang et al., 2000; Offner, 2000; Ho and Huang, 2000). Although their effectiveness in achieving optimization objectives has been demonstrated, most are still applied in off-line situations because of the time-consuming iterative process. From the computational complexity point of view, the well-known and efficient gradient descent algorithm will be used in the rest of this chapter because of its popularity and effectiveness in online applications. The optimization procedure is as follows. The desired point at every time-step is written as:
Desire(k) = [(Y(k) + S(k))(a + 1/Δt)^{−1} + (Y(k) − S(k))(a + 1/Δt)^{−1}]/2
= Y(k)(a + 1/Δt)^{−1}. (8.13)
Define the error as:

Error(k) = Desire(k) − N ŷ(k + 1) − N ỹ(k + 1) − nf y(k + 1) − nf ỹ(k + 1). (8.14)
The effective control input can be searched for with the gradient descent algorithm applied to the squared error:

∂Error(k)²/∂u(k) = 2 Error(k) ∂Error(k)/∂u(k)
= −2 Error(k) [∂N ŷ(k + 1)/∂u(k) + ∂N ỹ(k + 1)/∂u(k) + ∂nf y(k + 1)/∂u(k) + ∂nf ỹ(k + 1)/∂u(k)]. (8.15)
The resulting control input will be updated by: u(k)new ¼ u(k)old a
q Error(k)2 , qu(k)
(8:16)
where a is the learning rate parameter. The searching procedure is repeated until the inequalities of equations 8.11 and 8.12 hold, the control input converges, or the maximum number of iterations is reached. Of course, nf ỹ(k + 1), the remaining uncertainty of the failure dynamics, and N ỹ(k + 1), the remaining uncertainty of the nominal system, are unknown; the terms ∂nf ỹ(k + 1)/∂u(k) and ∂N ỹ(k + 1)/∂u(k) cannot be computed either. So, the actual searching procedure is based on the approximated values:

Êrror(k) = Desire(k) − N ŷ(k + 1) − nf y(k + 1). (8.17)

∂Êrror(k)²/∂u(k) = −2 Êrror(k) [∂N ŷ(k + 1)/∂u(k) + ∂nf y(k + 1)/∂u(k)]. (8.18)

At every time-step, the desired point Y(k)(a + 1/Δt)^{−1} is computed, and the effective control signal is searched for to bring the actual result as close to the desired point as possible through the realizations of the nominal system dynamics and the failure dynamics. Observing the inequalities of equations 8.11 and 8.12 closely, if nf ỹ(k + 1) and N ỹ(k + 1), the remaining uncertainties of the failure dynamics and the nominal system, are bounded, then combining these results with equations 8.13 and 8.14 shows that the desired dynamics, represented by the S function, are also bounded. This is proven and summarized in Theorem 8.1.
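The per-time-step search of equations 8.13 through 8.18 can be sketched with simple differentiable stand-ins for the nominal model N ŷ and the failure estimate nf y (in the chapter the latter comes from an online neural network estimator); the scalar models, gains, and learning rate below are hypothetical choices for illustration.

```python
# Scalar sketch of the gradient-descent control search, eqs. 8.13-8.18.
a_s, dt = 2.0, 0.1                 # sliding-surface gain a and interval Δt

def N_hat(y, u):                   # stand-in nominal model N y^(k+1)
    return 0.8 * y + 0.5 * u

def nf_hat(y, u):                  # stand-in failure-dynamics estimate
    return 0.1 * y - 0.3 * u

def search_control(Y, y, u0, lr=2.0, iters=500, tol=1e-9):
    """Search u(k) so that N y^(k+1) + nf y(k+1) reaches the desired
    point Y(k)(a + 1/Δt)^-1 of equation 8.13."""
    desire = Y / (a_s + 1.0 / dt)
    u = u0
    for _ in range(iters):
        err = desire - N_hat(y, u) - nf_hat(y, u)       # eq. 8.17
        if abs(err) < tol:
            break
        grad = -2.0 * err * (0.5 + (-0.3))              # eq. 8.18
        u = u - lr * grad                               # update, eq. 8.16
    return u

u = search_control(Y=1.2, y=0.4, u0=0.0)
achieved = N_hat(0.4, u) + nf_hat(0.4, u)
print(abs(achieved - 1.2 / (a_s + 1.0 / dt)) < 1e-6)    # → True
```

Here the model derivatives with respect to u are known in closed form (0.5 and −0.3); with a neural network estimator they would be obtained by backpropagating through the network instead.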
8.4.1 Theorem 1

Assume the following: (1) the nominal system model is accurate and precise enough that N ỹ(k + 1), the remaining uncertainty of the nominal system, is bounded by its least upper bound, sup_{∀k>T_f}{|N ỹ(k + 1)|}; (2) the online estimator is accurate enough that nf ỹ(k + 1), the remaining uncertainty of the failure dynamics, is bounded by its least upper bound, sup_{∀k>T_f}{|nf ỹ(k + 1)|}; and (3) ΔError(k), the error after the searching effort of the optimization algorithm, is finite (i.e., bounded by sup_{∀k>T_f}{|ΔError(k)|}). Then the system stability after time-step T_f under arbitrary unanticipated system failures is guaranteed in an online situation, and the sliding surface function S, defined by the system performance error, is also bounded as:

−S̄ ≤ S(k + 1) ≤ S̄,

where:

S̄ = [sup_{∀k>T_f}{|N ỹ(k + 1)|} + sup_{∀k>T_f}{|nf ỹ(k + 1)|} + sup_{∀k>T_f}{|ΔError(k)|}](a + 1/Δt).

The sliding surface function S is defined as:

S(k) = [y_d(k) − y_d(k − 1)]/Δt − [y(k) − y(k − 1)]/Δt + a(y_d(k) − y(k)), a > 0,

where y_d(k) is the desired trajectory at time-step k.

Proof
Let ΔError(k) represent the error after the searching effort of the optimization algorithm. Then:

ΔError(k) = Desire(k) − N ŷ(k + 1) − nf y(k + 1).

By equation 8.13:

Y(k)(a + 1/Δt)^{−1} − ΔError(k) = N ŷ(k + 1) + nf y(k + 1). (8.19)

For S(k) > 0, plugging equation 8.19 into inequality 8.11 gives:

(Y(k) + S(k))(a + 1/Δt)^{−1} > Y(k)(a + 1/Δt)^{−1} − ΔError(k) + N ỹ(k + 1) + nf ỹ(k + 1) > (Y(k) − S(k))(a + 1/Δt)^{−1}.

Simplifying the inequality yields these two conditions:

S(k) > [N ỹ(k + 1) + nf ỹ(k + 1) − ΔError(k)](a + 1/Δt);
−S(k) < [N ỹ(k + 1) + nf ỹ(k + 1) − ΔError(k)](a + 1/Δt). (8.20)

By assumptions 1, 2, and 3, the following inequality holds in the worst condition:

S(k) > −[sup_{∀k>T_f}{|N ỹ(k + 1)|} + sup_{∀k>T_f}{|nf ỹ(k + 1)|} + sup_{∀k>T_f}{|ΔError(k)|}](a + 1/Δt) = −S̄. (8.21)

Apparently, −S̄ = inf_{∀k>T_f}{S(k)} is the greatest lower bound of S(k), and S̄ = sup_{∀k>T_f}{S(k)} is the least upper bound of S(k). Since S(k) > S(k + 1) and −S(k) < S(k + 1), the following inequalities always hold:

−S̄ ≤ S(k + 1); (8.22)
S̄ ≥ S(k + 1). (8.23)

These hold for both situations, S(k + 1) > 0 and S(k + 1) < 0, which implies −S̄ ≤ S(k + 1) ≤ S̄.

For S(k) < 0, plugging equation 8.19 into inequality 8.12 gives:

(Y(k) − S(k))(a + 1/Δt)^{−1} > Y(k)(a + 1/Δt)^{−1} − ΔError(k) + N ỹ(k + 1) + nf ỹ(k + 1) > (Y(k) + S(k))(a + 1/Δt)^{−1}.

Simplifying the inequality yields:

−S(k) > [N ỹ(k + 1) + nf ỹ(k + 1) − ΔError(k)](a + 1/Δt);
S(k) < [N ỹ(k + 1) + nf ỹ(k + 1) − ΔError(k)](a + 1/Δt). (8.24)

Since S(k) < 0, S(k) < S̄ is always true, and by assumptions 1, 2, and 3, the following inequality holds in the worst condition:

−S(k) > −S̄, (8.25)

so that −S̄ is the greatest lower bound of −S(k) and S̄ is its least upper bound. Since −S(k) > S(k + 1) and S(k) < S(k + 1), the following inequalities always hold:

−S̄ ≤ S(k + 1); (8.26)
S̄ ≥ S(k + 1). (8.27)

These hold for both situations, S(k + 1) > 0 and S(k + 1) < 0, which again implies −S̄ ≤ S(k + 1) ≤ S̄, exactly the same result as for equation 8.23. Thus, the sliding surface function S is bounded by the value defined by the least upper bounds of the remaining uncertainties of the nominal system, the failure dynamics, and the error of the optimization algorithm.

FIGURE 8.4 The Bound of the Sliding Surface Function. (The figure shows the prediction N ŷ(k + 1) + nf y(k + 1) held within the bounded area between [Y(k) + S(k)](a + 1/Δt)^{−1} and [Y(k) − S(k)](a + 1/Δt)^{−1} around the desired point Y(k)(a + 1/Δt)^{−1}, with ΔE(k) = sup_{∀k>T_f}{|ΔError(k)|}, Δnf ỹ(k + 1) = sup_{∀k>T_f}{|nf ỹ(k + 1)|}, and ΔN ỹ(k + 1) = sup_{∀k>T_f}{|N ỹ(k + 1)|}.)

The discrete-time Lyapunov stability theory indicates that the control problem can be solved as long as the numerical value of the failure dynamics is realized at each time-step; this value is a measure of how far the failure drives the system dynamics away from the desired dynamics. Based on the above theoretical analysis, a system under unexpected catastrophic failures can be stabilized online, and its performance recovered, provided there is an effective online estimator for the unknown failure dynamics such that the necessary and sufficient condition in Theorem 1 is satisfied. Moreover, since the online estimator provides the approximated numerical value of the failure dynamics at each time-step based on the most recent measurements (i.e., the failure may be time varying), no specific structure or dynamics is required for the estimator. In other words, only a static function approximator that approximates the most recent behavior of the failure is needed for control purposes. Figure 8.4 indicates how the S function is bounded by the upper bounds of the nominal model uncertainty, ΔN ỹ(k + 1), the optimization error, ΔE(k), and the prediction error of the failure dynamics, Δnf ỹ(k + 1).
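The sliding surface function of Theorem 1 can be evaluated directly from sampled desired and measured trajectories; the trajectories, gain, and sampling interval below are hypothetical choices for illustration.

```python
import numpy as np

def sliding_surface(yd, y, dt, a):
    """S(k) = [yd(k)-yd(k-1)]/Δt - [y(k)-y(k-1)]/Δt + a(yd(k)-y(k)),
    evaluated for k = 1, ..., len-1."""
    e = np.asarray(yd) - np.asarray(y)      # tracking error yd - y
    return np.diff(e) / dt + a * e[1:]      # error rate plus a times error

dt, a = 0.1, 2.0
t = np.arange(0.0, 2.0, dt)
yd = np.sin(t)                              # desired trajectory
S_perfect = sliding_surface(yd, yd, dt, a)  # perfect tracking gives S = 0
S_lagged = sliding_surface(yd, np.sin(t - 0.2), dt, a)
print(np.max(np.abs(S_perfect)), np.max(np.abs(S_lagged)) > 0.1)
```

Perfect tracking drives S identically to zero, while a lagging response leaves S away from zero; the theorem bounds how far S can drift when the model, estimator, and search errors are bounded.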
8.4.2 Online Learning of the Failure Dynamics With the universal approximation capability for any piecewise continuous function to any degree of accuracy (Hordik et al., 1989), artificial neural networks become one of the most promising candidates for the online control problems of our interest. In this research work, neural network is exploited and used as the online estimator for the unknown failure dynamics. Some important features of the online learning using neural networks should first be addressed here. The structure of the online estimator needs to be decided (i.e., in neural networks,
the number of layers, the number of neurons in each layer, and the neuron transfer functions have to be specified). It is known that neural networks are sensitive to the number of neurons in the hidden layers. Too few neurons can result in underfitting (poor approximation), and too many neurons may contribute to overfitting, where all the training patterns are fit well but the fitting curve may oscillate wildly between the training data points (Demuth and Beale, 1998). The criterion for stopping the training process is another important issue in real applications. If the mean square error of the estimator is forced to reach a very small value, the estimator may perform poorly on new input data even slightly away from the training patterns. This is the well-known generalization problem. Besides, in real applications, the training patterns may contain some noise since they are measurements from real sensors, and the estimator may adjust itself to fit the noise instead of the real failure dynamics. Methods proposed to address these problems, such as early stopping criteria and regularization-based network training algorithms, may be useful to remedy these situations (Demuth and Beale, 1998; MacKay, 1992). In the online situation, the number of input/output data used for the training process becomes a very important design parameter. The system dynamics may keep changing because of different fault situations (i.e., incipient faults, abrupt faults, and multiple faults). Clearly, using all input/output measurements to train the online estimator makes little sense, since invalid training patterns could mislead the estimator, and it is also unrealistic for online applications. In other words, only a finite and limited number of data sets should be used as training patterns to adjust the parameters of the estimator. A reasonable choice is to use the most recent input/output measurements.
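The sliding-window, early-stopping scheme described above can be sketched with a small single-hidden-layer network in plain numpy. This is an illustrative sketch rather than the chapter's implementation: the network size, learning rate, validation split, and patience values are assumptions chosen for the example.

```python
import numpy as np

def train_on_window(patterns, targets, n_hidden=8, lr=0.05,
                    max_epochs=500, patience=10):
    """Fit a one-hidden-layer network to the most recent data window.

    A validation split drawn from the window implements a simple
    early-stopping criterion, limiting overfitting to sensor noise.
    All hyperparameter values here are illustrative assumptions.
    """
    P, t = np.asarray(patterns, float), np.asarray(targets, float)
    n_val = max(1, len(P) // 5)               # hold out ~20% for validation
    P_tr, t_tr = P[:-n_val], t[:-n_val]
    P_va, t_va = P[-n_val:], t[-n_val:]

    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.5, (P.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, n_hidden); b2 = 0.0

    def forward(X):
        h = np.tanh(X @ W1 + b1)
        return h, h @ W2 + b2

    best = (np.inf, None); worse = 0
    for _ in range(max_epochs):
        h, y = forward(P_tr)
        e = y - t_tr                           # training residual
        # backpropagate the mean-squared error through both layers
        gW2 = h.T @ e / len(e); gb2 = e.mean()
        dh = np.outer(e, W2) * (1 - h**2)
        gW1 = P_tr.T @ dh / len(e); gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

        val_mse = np.mean((forward(P_va)[1] - t_va) ** 2)
        if val_mse < best[0]:
            best = (val_mse, (W1.copy(), b1.copy(), W2.copy(), b2)); worse = 0
        else:
            worse += 1
            if worse >= patience:              # early stopping criterion
                break
    return best[1]
```

In an online setting, this routine would be re-invoked at each time step on the refreshed data window, and the returned weights would define the static approximator of the most recent failure behavior.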
A set, B, that contains the most recent measurements in a fixed length of a
1094
Gary G. Yen
time-shifting data window is used to collect the training patterns: B = {(p(m), t(m)) | p ∈ ℝ^S, t ∈ ℝ^L}.

K =
  K₁,                                              θ ≤ θ₁;
  K_i + [(θ − θ_i)/(θ_{i+1} − θ_i)](K_{i+1} − K_i), θ_i < θ ≤ θ_{i+1}, i = 1, …, N − 1;
  K_N,                                             θ > θ_N.
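A linear interpolation between design-point gains, holding the end gains outside [θ₁, θ_N] (no extrapolation), can be sketched as follows; scalar gains are used here, and for matrix gains the same element-wise interpolation applies. Function and parameter names are illustrative.

```python
import bisect

def scheduled_gain(theta, design_points, gains):
    """Linearly interpolate controller gains between design points.

    Outside [theta_1, theta_N] the nearest design gain is held,
    i.e., no extrapolation is performed.
    """
    if theta <= design_points[0]:
        return gains[0]
    if theta >= design_points[-1]:
        return gains[-1]
    # locate the interval [theta_i, theta_{i+1}] containing theta
    i = bisect.bisect_right(design_points, theta) - 1
    frac = (theta - design_points[i]) / (design_points[i + 1] - design_points[i])
    return gains[i] + frac * (gains[i + 1] - gains[i])
```

For example, with design points [0, 1, 2] and gains [10, 20, 40], a scheduling value of 1.5 yields the interpolated gain 30.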
where P(θ) is the matrix of Jacobians

P(θ) = [ ∂f/∂x (x, w, u, θ); ∂g/∂x (x, w, u, θ); ∂h/∂x (x, w, u, θ) ],

evaluated at the operating point. The input, output, and state vectors for the linearized system in equation 9.2 are the deviations from the operating point for a fixed value of θ (equation 9.3).
Note that no extrapolation is performed for θ < θ₁ or θ ≥ θ_N. If the parameter is known to vary significantly beyond (θ₁, θ_N), then additional design points should be chosen. In discrete scheduling, the parameters of the gain-scheduled controller are switched when the parameter trajectory evolves out of a predefined subset of the parameter set, Θ_i ⊂ Θ. This is the case, for instance, when the distance between the current operating condition and the design point associated with the current controller parameters exceeds a design threshold. Expanding on the preceding example, a simple discrete scheduling law is given by:
9 Gain-Scheduled Controllers
1109
FIGURE 9.1 Hysteresis in Discrete Scheduling. The switch from K_i to K_{i+1} is delayed until θ moves beyond a boundary layer encompassing the switching boundary.
K =
  K₁,   θ ≤ ½(θ₁ + θ₂);
  K_i,  ½(θ_{i−1} + θ_i) < θ ≤ ½(θ_i + θ_{i+1}), i = 2, …, N − 1;
  K_N,  θ > ½(θ_{N−1} + θ_N).   (9.6)

Often, the switching law will employ a hysteresis to prevent chattering (fast switching) if the scheduling parameter evolves near a switching boundary. The hysteresis simply delays the switch to new controller parameters until the parameter has moved beyond a boundary layer encompassing the switching boundary, as illustrated in Figure 9.1.
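The midpoint switching boundaries and the hysteresis boundary layer can be sketched together in a small scheduler; the design points, gains, and layer width below are illustrative assumptions.

```python
class HysteresisScheduler:
    """Discrete gain scheduling with a hysteresis boundary layer.

    The active gain index changes only when theta moves past the
    midpoint between adjacent design points by more than half the
    boundary-layer width, which prevents chattering when theta
    dwells near a switching boundary.
    """
    def __init__(self, design_points, gains, layer=0.1):
        self.pts, self.gains, self.layer = design_points, gains, layer
        self.i = 0                        # index of currently active gain

    def update(self, theta):
        # switch up: theta beyond the upper boundary plus half the layer
        while (self.i + 1 < len(self.pts) and
               theta > 0.5 * (self.pts[self.i] + self.pts[self.i + 1])
                       + self.layer / 2):
            self.i += 1
        # switch down: theta below the lower boundary minus half the layer
        while (self.i > 0 and
               theta < 0.5 * (self.pts[self.i - 1] + self.pts[self.i])
                       - self.layer / 2):
            self.i -= 1
        return self.gains[self.i]
```

With design points [0, 1, 2] and a layer of 0.2, the boundary between the first two gains sits at 0.5: the scheduler switches up only above 0.6 and back down only below 0.4, so a trajectory hovering between 0.4 and 0.6 never chatters.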
9.2.3 Discussion

It is important to remember that the gain-scheduling methods just described are, in general, justified through repeated use and do not have rigorous theoretical justification. The methods are generally implemented in an ad hoc manner appropriate to the problem at hand; achievable performance may be sensitive to the form of the scheduling law and the form of the controller matrices. In some cases, scheduling parameters are chosen to be functions of the plant variables to make the problem more tractable from a design standpoint. Because the scheduling parameter is then no longer independent of the plant variables (e.g., ∂θ/∂x ≠ 0), the closed-loop system formed by the linearized plant and linear controllers does not match the linearized closed-loop plant. The differences in the closed-loop realizations are due to hidden coupling modes whose presence may degrade or limit achievable performance. Although there is no comprehensive theory for removing these modes from the gain-scheduled design, examples are present in the literature (Nichols et al., 1993). As a rule, the designer must exercise additional caution in ensuring that the scheduling parameter does not vary too rapidly.
For discrete scheduling designs, abrupt changes in the controller output can occur at switching instants. These changes may excite unmodeled plant dynamics, degrading performance and possibly leading to catastrophic failure. To mitigate these risks, one must resort to techniques such as bumpless transfer (Hanus et al., 1987) or ensure that the design can tolerate the switching through other methods (Bett and Lemmon, 1999). In both the continuous and discrete gain-scheduling designs, stability and performance of the gain-scheduled system can, in general, be guaranteed only in a neighborhood of the current operating condition. Specifically, the gain-scheduled design is not guaranteed to be stable or to exhibit the desired performance properties for all θ ∈ Θ or for all θ ∈ F_Θ. It is the latter of these two conditions that is the most difficult to guarantee. Further assurances of stability and performance must be derived from extensive simulation and testing. Successful designs typically involve scheduling on a slowly changing variable that captures the nonlinear behavior of the plant. The difficulties associated with applying the gain-scheduling techniques just described stem from the local nature of the results and the lack of formal methods for deriving the scheduling law. These difficulties manifest themselves in increased simulation, test, and redesign effort. For the so-called linear parameter varying plants, these difficulties can be alleviated: modern linear robust control techniques can be applied to the design of a gain-scheduled controller without the need for ad hoc interpolation methods.
9.3 Gain Scheduling for Linear Parameter Varying Systems

9.3.1 LPV Systems

A linear parameter varying system may be viewed as a linear system whose parameters depend on an exogenous variable, as stated in the following definition.

Definition 1 (LPV System) Given a compact set Θ ⊂ ℝ^s and continuous functions A: ℝ^s → …

… for i > 1, E_i(s₁, …, s_i) = T_i(s₁, …, s_i). A direct relation between the Volterra kernels of the operators T, M, and G can be found if the design is well-posed on the first-order, or linear, level, implying that E₁(s) has an inverse. As T is usually open to choice, the latter requirement is reasonable.
A nice feature of these equations is their recursive nature. Once M₁(s) has been found, it can be used to obtain an expression for M₂(s₁, s₂), and so forth.

11.5 Simplified Partial Linearization Controller Design

Consider designing a feedback system for a given nonlinear system so that T_i = 0, 1 < i ≤ n, where T_i is the ith Volterra kernel of the closed loop. Such a design is called partial linearization, and it has the desirable attribute of greatly lessening the complexity of the controller design equations. Both the general controller design and the partial linearization controller design, as derived using the equations above, yield realizations with repeated structures. Collecting terms with like coefficients yields the simplified controller design. The result is presented here for the partial linearization controller, with the general result given by Sain (1997). Suppose for T that the second- and higher-order kernels are zero. Then, if P₁(s) inverts, the following results:

M₁(s) = P₁⁻¹(s) T₁(s).   (11.30)

M₂(s₁, s₂) = −P₁⁻¹(s₁ + s₂) P₂(s₁, s₂)[M₁(s₁) ⊗ M₁(s₂)].   (11.31)

M₃(s₁, s₂, s₃) = −P₁⁻¹(s₁ + s₂ + s₃){P₂(s₁, s₂ + s₃)[M₁(s₁) ⊗ M₂(s₂, s₃)] + P₂(s₁ + s₂, s₃)[M₂(s₁, s₂) ⊗ M₁(s₃)] + P₃(s₁, s₂, s₃)[M₁(s₁) ⊗ M₁(s₂) ⊗ M₁(s₃)]}.   (11.32)

In general, the following is true:

M_i(s₁, …, s_i) = −P₁⁻¹(s₁ + ⋯ + s_i) { Σ_{j=2}^{i} Σ_{k₁=1}^{i−j+1} Σ_{k₂=1}^{i−j−k₁+2} ⋯ Σ_{k_{j−1}=1}^{i−j−k₁−⋯−k_{j−2}+j−1} P_j(s₁ + ⋯ + s_{σ₁}, s_{σ₁+1} + ⋯ + s_{σ₂}, …, s_{σ_{j−1}+1} + ⋯ + s_i) [M_{k₁}(s₁, …, s_{σ₁}) ⊗ M_{k₂}(s_{σ₁+1}, …, s_{σ₂}) ⊗ ⋯ ⊗ M_{i−σ_{j−1}}(s_{σ_{j−1}+1}, …, s_i)] },   (11.33)

where σ_m = k₁ + ⋯ + k_m.

Under partial linearization, the operators E are represented by E_k = 0, k > 1. Assuming E₁(s) and P₁(s) to be invertible, the kernels of the controller G are given by:

G₁(s) = P₁⁻¹(s) T₁(s) E₁⁻¹(s).   (11.34)

G₂(s₁, s₂) = −P₁⁻¹(s₁ + s₂) P₂(s₁, s₂)[G₁(s₁) ⊗ G₁(s₂)].   (11.35)

G₃(s₁, s₂, s₃) = −P₁⁻¹(s₁ + s₂ + s₃){P₂(s₁, s₂ + s₃)[G₁(s₁) ⊗ G₂(s₂, s₃)] + P₂(s₁ + s₂, s₃)[G₂(s₁, s₂) ⊗ G₁(s₃)] + P₃(s₁, s₂, s₃)[G₁(s₁) ⊗ G₁(s₂) ⊗ G₁(s₃)]}.   (11.36)

In general:

G_i(s₁, …, s_i) = −P₁⁻¹(s₁ + ⋯ + s_i) { Σ_{j=2}^{i} Σ_{k₁=1}^{i−j+1} Σ_{k₂=1}^{i−j−k₁+2} ⋯ Σ_{k_{j−1}=1}^{i−j−k₁−⋯−k_{j−2}+j−1} P_j(s₁ + ⋯ + s_{σ₁}, s_{σ₁+1} + ⋯ + s_{σ₂}, …, s_{σ_{j−1}+1} + ⋯ + s_i) [G_{k₁}(s₁, …, s_{σ₁}) ⊗ G_{k₂}(s_{σ₁+1}, …, s_{σ₂}) ⊗ ⋯ ⊗ G_{i−σ_{j−1}}(s_{σ_{j−1}+1}, …, s_i)] }.   (11.37)

Again, note the recursive nature of the controller design equations. Once G₁(s) has been computed, it can be used to find G₂(s₁, s₂) and so on. The next step in the realization process is to substitute the expression for the plant kernels P_i in the frequency domain. Given Ḡᵢʲ = G_i(s_j, s_{j+1}, …, s_{j+i−1}), then:

Ḡ₂¹ = −(C₁P̄₁²)⁻¹ C₂P̄₂² [P̄₁¹Ḡ₁¹ ⊗ P̄₁²Ḡ₁²].   (11.38)

Ḡ₃¹ = −(C₁P̄₁³)⁻¹ { C₂P̄₂³ [P̄₁¹Ḡ₁¹ ⊗ P̄₂²Ḡ₂²] + C₂P̄₂³ [P̄₂¹Ḡ₂¹ ⊗ P̄₁³Ḡ₁³] + C₃P̄₃³ [P̄₁¹Ḡ₁¹ ⊗ P̄₁²Ḡ₁² ⊗ P̄₁³Ḡ₁³] }.   (11.39)

Ḡᵢ¹ = −(C₁P̄₁ⁱ)⁻¹ { Σ_{j=2}^{i} Σ_{k₁=1}^{i−j+1} Σ_{k₂=1}^{i−j−k₁+2} ⋯ Σ_{k_{j−1}=1}^{i−j−k₁−⋯−k_{j−2}+j−1} C_jP̄_jⁱ [P̄_{k₁}¹Ḡ_{k₁}¹ ⊗ P̄_{k₂}^{σ₁+1}Ḡ_{k₂}^{σ₁+1} ⊗ ⋯ ⊗ P̄_{i−σ_{j−1}}^{σ_{j−1}+1}Ḡ_{i−σ_{j−1}}^{σ_{j−1}+1}] }.   (11.40)
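For a single-input, single-output plant, the tensor products in the kernel recursion of equations 11.30 through 11.32 reduce to ordinary products of complex numbers, so the recursion can be evaluated directly. The example plant and closed-loop kernels below are illustrative assumptions, not taken from the chapter.

```python
def closed_loop_kernels(P1, P2, P3, T1):
    """Evaluate the partial-linearization kernels M1, M2, M3 of
    equations 11.30-11.32 for scalar (SISO) plant kernels.

    For scalars, the tensor product of kernel values is an ordinary
    product, and P1^{-1}(s) is simply 1/P1(s).
    """
    def M1(s):
        return T1(s) / P1(s)                              # equation 11.30

    def M2(s1, s2):                                       # equation 11.31
        return -(P2(s1, s2) * M1(s1) * M1(s2)) / P1(s1 + s2)

    def M3(s1, s2, s3):                                   # equation 11.32
        return -(P2(s1, s2 + s3) * M1(s1) * M2(s2, s3)
                 + P2(s1 + s2, s3) * M2(s1, s2) * M1(s3)
                 + P3(s1, s2, s3) * M1(s1) * M1(s2) * M1(s3)) / P1(s1 + s2 + s3)

    return M1, M2, M3
```

As the recursion promises, only M1 must be found from the design data; M2 is then expressed through M1, and M3 through M1 and M2.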
Each of equations 11.38 through 11.40 can be realized as a set of interconnected linear subsystems. These realizations possess a number of interesting features that are nicely illustrated using block diagrams. In doing so, certain notational conveniences are employed. First, the superscripts on the quantities P̄ᵢʲ and Ḡᵢʲ in equations 11.38 through 11.40 are redundant and can be neglected without loss (Sain, 1997). The proof lies in observing that the ordering of terms in a controller design equation, together
11
Nonlinear Input/Output Control: Volterra Synthesis
with the established use of parentheses and brackets, is sufficient to determine what the superscripts should be. Second, if e ∈ Y represents the output error of the closed loop and d₁₁ is its Laplace transform, then let d₁₁⁽ⁱ⁾ denote the i-fold tensor product of d₁₁ with itself, and define:

β_{i1} = Σ_{k=1}^{i} Ḡ_k(d₁₁⁽ᵏ⁾),   (11.41)

making β_{i1} the Laplace transform of the ith-order controller output. Finally, the layout of the block diagrams follows certain conventions. If the controller design equations are fully expanded and each written on one line, then their addends, as read from left to right, appear in the block diagrams proceeding from top to bottom.

The fundamental observation behind the simplification process is that quantities present in the realization for Ḡᵢ reappear in the realizations for Ḡⱼ, j > i. The motivation behind the simplification process is that in computing the output of a given controller component Ḡᵢ, the most efficient algorithm is to compute these repeated quantities only once, store them in memory, and then reuse them as needed. The reduction process is thus a problem of identifying a set Γ of elements common to a sequence of controller components Ḡᵢ, i = 1, 2, 3, …, n, and determining which ones to save for later reuse.

In computing the n components Ḡᵢ(d₁₁⁽ⁱ⁾), 1 ≤ i ≤ n, of the output of an nth-order controller, many quantities are computed repeatedly. Therefore, a system of notation is now introduced with the primary goal of eliminating redundant calculations by identifying members of the common set Γ and the secondary goal of further simplifying the controller output calculation. The identification of Γ presented here is not unique. Therefore, the notation Γ₀ will be used to denote the particular identification described herein and to distinguish it from the general notion of the common set represented by Γ. Here and in the sequel, superscripts are suppressed. The first members of Γ₀ have already been introduced, namely β_{i1}. Given α_{ij} ∈ Γ₀ and 2 ≤ j ≤ i defined by α_{ij} = P̄_{j−1} ⊗ β_{(i−1)(j−1)}, then:

β_{ij} = Σ_{k=1}^{i−j+1} α_{(i−k+1)j} ⊗ β_{k1}.   (11.42)

The simplified partial linearization nth-order controller output can now be calculated as follows. First, compute the output of the first-order controller, β₁₁ = P₁⁻¹ T₁ (I − T₁)⁻¹ d₁₁, and then compute the output of the nth-order controller for n ≥ 2 using:

β_{n1} = β₁₁ − Σ_{i=2}^{n} Σ_{j=2}^{i} P₁⁻¹ C_j P̄_j β_{ij}.   (11.43)

The members of the common set Γ₀ (namely, the α_{ij} and β_{ij}) are computed and stored the first time they are encountered and then recalled as needed. In the sequel, the equation g_{ij} = −P₁⁻¹ C_j P̄_j β_{ij} and the following identifications are convenient:

g_i = Σ_{j=2}^{i} g_{ij}.   (11.44)

β_{n1} = β₁₁ + Σ_{i=2}^{n} g_i.   (11.45)

The above algorithm for calculating the output of the simplified partial linearization controller takes full advantage of the recursive nature of the design equations, reducing the number of states and floating-point computations. Furthermore, the simplified form is amenable to the construction of general software routines capable of computing the controller output for an arbitrary value of the controller order n. The simplified block diagrams for the second- and third-order controller components are shown in Figures 11.3 and 11.4. A general form for the ith-order component is shown in Figure 11.5.

FIGURE 11.3 Simplified G₂ Realization

FIGURE 11.4 Simplified G₃ Realization

Patrick M. Sain

FIGURE 11.5 General Form for Simplified Realization

11.6 SDOF Base-Isolated Structure Example

Consider a single degree of freedom (SDOF) structure sitting on a base isolation system consisting of hysteretically damped bearings, and let the structure be subject to ground acceleration ẍ_g(t) and a control force f(t) supplied by a hydraulic actuator. Such a system is shown in Figure 11.6. The forces exerted by the individual bearings are modeled collectively.

FIGURE 11.6 A Base-Isolated SDOF Structure of Mass m₁. The structure is supported by hysteretically damped bearings b₁ and b₂ that are subject to ground acceleration ẍ_g and control force f.

The equation of motion for the horizontal displacement of the mass m of the structure is as follows:
f(t) = m[ẍ_p(t) + ẍ_g(t)] + c ẋ_p(t) + k x_p(t) + Q[x_p(t), ẋ_p(t)].   (11.46)

Descriptions of the variables and parameter values are given in Table 11.1. Both the bearings and the actuator affect the stiffness and damping coefficients c and k, and it should be understood that the values identified for these parameters in Table 11.1 are meant to represent the behavior of the system with the actuator and bearings in place. The hysteretic restoring force Q generated by the base-isolating bearings is given by:

Q[x_p(t), ẋ_p(t)] = α (F_y/Y) x_p(t) + (1 − α) F_y z(t),   (11.47)
where z(t) is a dimensionless quantity representing hysteretic displacement (Fan et al., 1988; Sain et al., 1997) satisfying the first-order differential equation:

Y ż(t) = −γ |ẋ_p(t)| |z(t)|^{n−1} z(t) − β ẋ_p(t) |z(t)|ⁿ + A ẋ_p(t).   (11.48)

TABLE 11.1 Parameters for the SDOF System
Variable | Description | Value
c | Damping coefficient | 600π kg/s
f | Force applied by actuator | (N)
k | Stiffness coefficient | 3 × 10⁶ π² kg/s²
m | Mass of structure | 3000 kg
Q | Hysteretic restoring force of base-isolation bearing | (N)
ẍ_g | Horizontal ground acceleration | (m/s²)
x_p | Horizontal displacement of the mass m | (m)
– | Natural undamped frequency | 5 Hz
– | Damping ratio | 1%

TABLE 11.2 Parameters for the Hysteretically Damped Base-Isolation System
Variable | Description | Value
A | Hysteresis loop-shaping parameter | 1
F_y | Force of equivalent hysteretic damper | 40 kN
n | Parameter controlling the smoothness of the transition from elastic to plastic response | 2
x_p | Horizontal displacement | (m)
Y | Yield displacement of equivalent hysteretic damper | 1 mm
z | Hysteretic displacement | –
α | Post-yielding to pre-yielding stiffness ratio | 0.2
β | Hysteresis loop-shaping parameter | −0.25
γ | Hysteresis loop-shaping parameter | 0.75

TABLE 11.3 Parameters for the Actuator
Variable | Description | Value
A_f | Cross-sectional area of actuator | (m²)
B | Bulk modulus of hydraulic fluid | (N/m²)
f | Force applied to SDOF structure by actuator | (N)
γ_f | Proportional feedback gain | 2.5
k_c | Flow–pressure coefficient | (m⁴·s/kg)
k_q | Flow gain | (m²/s)
x_p | Horizontal displacement | (m)
V | Characteristic hydraulic fluid volume | (m³)
2BA_f k_q γ_f/V | Displacement coefficient | 3.6484 × 10⁸ N/m/s
2Bk_c/V | Force feedback coefficient | 66.67 1/s
2BA_f²/V | Velocity feedback coefficient | 2.1891 × 10⁷ N/m

Table 11.2 lists descriptions of the variables and parameter values. The actuator force f applied to the SDOF structure is governed by the equation (Dyke et al., 1995; DeSilva, 1989):

ḟ(t) = (2BA_f k_q γ_f / V)[u(t) − x_p(t)] − (2Bk_c / V) f(t) − (2BA_f² / V) ẋ_p(t).   (11.49)

Here, u(t) is the control signal generated by the controller. Descriptions of the quantities in equation 11.49 are provided in Table 11.3. Defining A_z = A/Y, β_z = β/Y, and γ_z = γ/Y and using the fact that |z| = z sgn z yields:

ż(t) = [A_z − (β_z + γ_z sgn ẋ_p sgn z) zⁿ(t)] ẋ_p(t)   (11.50)
     = [A_z + σ(t) zⁿ(t)] ẋ_p(t),   (11.51)

where σ(t) = −[β_z + γ_z sgn ẋ_p(t) sgn z(t)]. Substitution of the expression for Q into the equation of motion of the structure yields the following system of equations. For convenience, the explicit dependence on time is suppressed:

ẍ_p = −(k/m + α F_y/(Ym)) x_p − (c/m) ẋ_p + (1/m) f − (1 − α)(F_y/m) z − ẍ_g.   (11.52)

ḟ = −(2BA_f k_q γ_f / V) x_p − (2BA_f² / V) ẋ_p − (2Bk_c / V) f + (2BA_f k_q γ_f / V) u.   (11.53)

ż = (A_z + σ zⁿ) ẋ_p.   (11.54)

Given n = 2, with z₀ = z(t_s) and x_{p0} = x_p(t_s), where t_s represents the most recent point in time at which σ(t) switched value, closed-form expressions for z (for σ ≥ 0 and for σ < 0) are readily obtained (Sain, 1997). An example application of a third-order VFS partial linearization regulator design for the SDOF structure described previously is simulated using equations 11.55 to 11.57 and the simplified controller design derived above. Choosing the state vector [x₁ x₂ x₃]ᵀ = [x_p ẋ_p f]ᵀ, where z can be expressed as previously noted depending on the sign of σ, the resulting state-space description has output y = x₁ and the following form:

ẋ₁ = x₂.   (11.55)

ẋ₂ = −(k/m + α F_y/(Ym)) x₁ − (c/m) x₂ + (1/m) x₃ − (1 − α)(F_y/m) z.   (11.56)

ẋ₃ = −(2BA_f k_q γ_f / V) x₁ − (2BA_f² / V) x₂ − (2Bk_c / V) x₃ + (2BA_f k_q γ_f / V) u.   (11.57)
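The open-loop behavior of the state equations 11.54 through 11.57 can be sketched with a simple forward-Euler integration. The parameter values below are taken from Tables 11.1 through 11.3 as extracted here (with F_y read as 40 kN); they are assumptions of this sketch, and a production simulation would use a stiff or adaptive-step integrator instead.

```python
import math

# Parameter values from Tables 11.1-11.3 (assumed; F_y read as 40 kN).
m, c, k = 3000.0, 600.0 * math.pi, 3e6 * math.pi**2
Fy, Y, alpha, A, beta, gamma, n = 40e3, 1e-3, 0.2, 1.0, -0.25, 0.75, 2
a_disp, a_force, a_vel = 3.6484e8, 66.67, 2.1891e7   # actuator coefficients
Az, bz, gz = A / Y, beta / Y, gamma / Y              # A_z, beta_z, gamma_z

def simulate(x1=0.004, dt=1e-5, steps=20000, u=0.0):
    """Forward-Euler integration of equations 11.54-11.57 with a
    constant control signal u; returns the displacement history x1(t)."""
    x2 = x3 = z = 0.0
    hist = []
    for _ in range(steps):
        sgn = (1 if x2 >= 0 else -1) * (1 if z >= 0 else -1)
        sigma = -(bz + gz * sgn)                          # equation 11.50
        dz = (Az + sigma * z**n) * x2                     # equation 11.54
        dx2 = (-(k/m + alpha*Fy/(Y*m)) * x1 - (c/m) * x2
               + x3/m - (1 - alpha)*(Fy/m) * z)           # equation 11.56
        dx3 = (-a_disp * x1 - a_vel * x2
               - a_force * x3 + a_disp * u)               # equation 11.57
        x1, x2, x3, z = (x1 + dt*x2, x2 + dt*dx2,
                         x3 + dt*dx3, z + dt*dz)
        hist.append(x1)
    return hist
```

Starting from a 4 mm offset with u = 0, the hysteretic restoring force and the actuator feedback terms damp the response toward a small residual displacement, consistent with the stable open-loop behavior assumed in the regulator example.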
This example represents a preliminary test of the control system to demonstrate that the actuator can be commanded to move the first floor of the structure to a specific location. Under the hypothesis that the VFS control designs work to make small errors smaller, a command request r = x₁ = 0.004 m was specified, and a constant offset of y₀ = 0.001 m, representing a biased position sensor, was added to the measured output of the plant. Regulation about a nonzero set point
was chosen because the higher-order Taylor series coefficients of the control model are zero if x₁ = 0. Primarily for computational convenience, the desired closed-loop map T₁(s) was given four poles at s = −1000 in this example. A result is shown in Figure 11.7. The initial transient response was omitted so as not to obscure the effect of the higher-order controller. The response of the closed-loop system, meaning the position of the first floor, x₁, is shown for the linear controller design and the third-order controller design. Note that the magnitude of the oscillations in the response as the output tends toward the commanded position is much smaller for the third-order design than for the linear controller design.

FIGURE 11.7 Position of First Floor Following Initial Transient for Linear Controller Design (Dashed) and Third-Order Controller Design (Solid)

11.7 Conclusion

Volterra feedback synthesis is shown to provide a capability for specifying nonlinear input/output behavior for a closed-loop feedback system. The procedure is developed for the class of linear analytic plants that admit a Volterra series representation, and a simplified means of systematically realizing the Volterra kernels and computing the output of the resulting controller is provided, up to an arbitrary order. The simplified form is recursive in nature and lends itself to an automated design and simulation procedure. The procedure is applied to a regulator design for a base-isolated SDOF structure with hysteretic damping.

References

Al-Baiyat, S.A., and Sain, M.K. (1986). Control design with transfer functions associated to higher order Volterra kernels. Proceedings of the 25th IEEE Conference on Decision and Control, 1306–1311.
Al-Baiyat, S.A., and Sain, M.K. (1989). A Volterra method for nonlinear control design. Preprints, IFAC Symposium on Nonlinear Control System Design, 76–81.
Bussgang, J.J., Ehrman, L., and Graham, J. (1974). Analysis of nonlinear systems with multiple inputs. Proceedings of the IEEE 62, 1088–1119.
DeSilva, C.W. (1989). Control sensors and actuators. Englewood Cliffs, NJ: Prentice Hall.
Doyle, F.J., III, Ogunnaike, B.A., and Pearson, R.K. (1995). Nonlinear model-based control using second-order Volterra models. Automatica 31, 697–714.
Dyke, S.J., Spencer, B.F., Jr., Quast, P., and Sain, M.K. (1995). Role of control-structure interaction in protective system design. ASCE Journal of Engineering Mechanics 121, 322–338.
Fan, F.G., Ahmadi, G., and Tadjbakhsh, I.G. (1988). Base isolation of a multi-story building under a harmonic ground motion—A comparison of performances of various systems. Technical Report NCEER-88-0010. National Center for Earthquake Engineering Research, State University of New York at Buffalo.
Rugh, W.J. (1981). Nonlinear system theory: The Volterra/Wiener approach. Baltimore, MD: Johns Hopkins University Press.
Sain, P.M. (1997). Volterra control synthesis, hysteresis models, and magnetorheological structure protection. Ph.D. dissertation. Department of Electrical Engineering, University of Notre Dame, Notre Dame, Indiana.
Sain, P.M., Sain, M.K., and Michel, A.N. (1990). On coordinated feedforward excitation of nonlinear servomechanisms. Proceedings of the American Control Conference, 1695–1700.
Sain, P.M., Sain, M.K., and Michel, A.N. (1991). Nonlinear model-matching design of servomechanisms. Proceedings of the First IFAC Symposium on Design Methods of Control Systems, 594–599.
Sain, P.M., Sain, M.K., and Spencer, B.F., Jr. (1997). Models for hysteresis and application to structural control. Proceedings of the American Control Conference, 16–20.
Sain, P.M., Sain, M.K., and Spencer, B.F., Jr. (1997). Volterra feedback synthesis: A systematic algorithm for simplified Volterra controller design and realization. Proceedings of the Thirty-Fifth Annual Allerton Conference on Communication, Control, and Computing, 1053–1062.
12 Intelligent Control of Nonlinear Systems with a Time-Varying Structure

Raúl Ordóñez, Department of Electrical and Computer Engineering, University of Dayton, Dayton, Ohio, USA
Kevin M. Passino, Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio, USA
12.1 Introduction .......................................................... 1139
12.2 Direct Adaptive Control ............................................... 1140
     12.2.1 Direct Adaptive Control Theorem · 12.2.2 Performance Analysis: L₂ Bounds and Transient Design
12.3 Application: Direct Adaptive Wing Rock Regulation with Varying Angle of Attack ... 1143
12.4 Conclusion ............................................................ 1148
References ................................................................. 1149
12.1 Introduction

The field of nonlinear adaptive control developed rapidly in the 1990s. Work by Polycarpou and Ioannou (1991) and others gave birth to an important branch of adaptive control theory: nonlinear online function approximation-based control, which includes neural (Polycarpou, 1996) and fuzzy (Su and Stepanenko, 1994) approaches (note that several other relevant works focus on neural and fuzzy control, many of them cited in the references within the above publications). The neural and fuzzy approaches are most of the time equivalent, differing only in the structure of the approximator chosen (Spooner and Passino, 1996). Most of these publications deal with indirect adaptive control, first trying to identify the dynamics of the system and then generating a control input according to the certainty equivalence principle (with some modification to add robustness to the control law), whereas very few authors (Spooner and Passino, 1996; Rovithakis and Christodoulou, 1995) use the direct approach, in which the controller directly generates the control input to guarantee stability.

Plants whose dynamics can be expressed in the so-called strict feedback form have been considered, and techniques like back-stepping and adaptive back-stepping (Krstić et al., 1995) have emerged for their control. Publications of Polycarpou (1996) and Polycarpou and Mears (1998) present an extension of the tuning functions approach in which the nonlinearities of the strict feedback system are not assumed to be parametric uncertainties but are rather completely unknown nonlinearities to be approximated online with nonlinearly parameterized function approximators. Both the adaptive methods in Krstić et al. (1995) and those in Polycarpou (1996) and Polycarpou and Mears (1998) attempt to approximate the dynamics of the plant online, so they may be classified as indirect adaptive schemes.

In this chapter, we combine an extension of the class of strict feedback systems considered in Polycarpou (1996) and Polycarpou and Mears (1998) with the concept of a dynamic structure that depends on time; this combination allows us to propose a class of nonlinear systems with a time-varying structure, for which we develop a direct adaptive control approach. This class of systems is a generalization of the class of strict feedback systems traditionally considered in the literature. Moreover, the direct adaptive control developed here is, to our knowledge, the first of its kind in this context, and it presents several advantages with respect to indirect adaptive methods, including the fact that it needs less plant information to be implemented.
12.2 Direct Adaptive Control Consider the class of continuous time nonlinear systems given by: x_ i ¼
R X
j j rj (y) fi (Xi ) þ c i (Xi )xiþ1 :
j¼1
x_ n ¼
R X
rj (y)
fnj (Xn )
þ
c nj (Xn )u
(12:1)
:
be placed on the state transient. Furthermore, as will be indicated, the stability can be made global by using bounding control terms. For each vector Xi, we will assume the existence of a compact set S xi Ri specified by the designer. We will consider trajectories in the compact sets S xi , i ¼ 1, . . . , n, where the sets are constructed such that S xi S xiþ1 , for i ¼ 1, . . . , n 1. We assume the existence of bounds cci , c 2 R, and cc 2 R, i ¼ 1, . . . , n (not necessarily known), c i id such that for all y 2 Rq and Xi 2 S xi , i ¼ 1, . . . , n:
j¼1 >
n
The i ¼ 1, 2, . . . , n 1, Xi ¼ [x1 , . . . , xi ] , Xn 2 R is the state vector, which we assume measurable, and u 2 R is the control input. The variable y 2 Rq may be an additional input or a possibly exogenous scheduling variable. We assume that y and its derivatives up to and including the (n 1)th one are bounded and available for measurement, which may imply that y is given by an external dynamical system. The functions rj , j ¼ 1, . . . , R may be considered to be interpolating functions that produce the time-varying structural nature of system (Polycarpou and Ioannou, 1991) because they combine j j R systems in strict feedback form (given by the fi and ci functions, i ¼ 1, . . . , n, j ¼ 1 . . . , R), and the combination depends on time through the variable y (thereby, the dynamics of the plant may be different at each time point depending on the scheduling variable). Here, we assume that the functions rj are n times continuously differentiable and that they PR satisfy, for all y 2 Rq , r (y) < 1P and qi rj (y)=qy i < 1. j¼1 j j c Denote for P convenience fi (Xi , y) ¼ Rj¼1 rj (y)fi (Xi ) and j R c ci (Xi , y) ¼ j¼1 rj (y)ci (Xi ). We will assume that fci and cci are sufficiently smooth in their arguments and that they satisfy (for all Xi 2 Ri and y 2 Rq ) i ¼ 1, . . . , n, fci (0, y) ¼ 0 and cci (Xi , y) 6¼ 0. Here, we will develop a direct adaptive control method for the class of systems in equation 12.1. We assume that the j interpolation functions rj are known, but the functions fi j and ci (which constitute the underlying time-varying dynamics of the system) are unknown. In an indirect adaptive methodology, one would attempt to identify the unknown functions and then construct a stabilizing control law based on the approximations to the plant dynamics. 
Here, however, we will postulate the existence of an ideal control law (based on the assumption that the plant belongs to the class of systems of equation 12.1) that possesses some desired stabilizing properties, and then we devise adaptation laws that attempt to approximate the ideal control equation. This approximation will be performed within a compact set S xn Rn of arbitrary size that contains the origin. In this manner, the results obtained are semiglobal, in the sense that they are valid as long as the state remains within S xn ; this set can be made as large as desired by the designer. In particular, with enough plant information, the set can be made large enough so that the state never exits it because, as will be shown, a bound can
c < 1: 0 < cci cci (Xi , y) c i ! j X R qrj (y) j (12:2) qc (X ) i c i X_ i ccid : jc_ i j ¼ y_ ci (Xi ) þ rj (y) j¼1 qXi qy
This assumption implies that the affine terms in the plant dynamics have a bounded gain and a bounded rate of change. Since the functions cci are assumed continuous, they are therefore bounded in S x . Similarly, note that even though the term jX_ i j may not necessarily be globally bounded, it will have a constant bound in S x due to the continuity assumptions we make. Therefore, the assumption of equation 12.2 will always be satisfied in S xn . Moreover, in the simplest of cases, the first part of assumption of equation 12.2 is satisfied globally when the j functions ci are constant or sector bounded for all Xi 2 Ri . The class of plants of equation 12.1 is, to our knowledge, the most general class of systems considered so far within the context of adaptive control based on back-stepping. In particular, in Krstic´ et al. (1995), which are indirect adaptive j approaches, the input functions c i are assumed to be constant for i ¼ 1, . . . , n. This assumption allows the authors of those works to perform a simpler stability analysis, which becomes more complex in the general case (Ordon˜ez and Passino, 2001). The addition of the interpolation functions rj , j ¼ 1, . . . , R also extends the class of strict feedback systems to one including systems with a time-varying structure (Ordon˜ez and Passino, 2000a) as well as systems falling in the domain of gain scheduling (where the plant dynamics are identified at different operating points and then interpolated between using a scheduling variable). Note that if we let R ¼ 1 and r1 (y) ¼ 1 for all y, together with cci ¼ 1, i ¼ 1, . . . , n, we have the particular case considered in Polycarpou (1996) and Polycarpou and Mears (1998). The direct approach presented here has several advantages with respect to indirect approaches such as in Krstic´ et al. (1995), Polycarpou (1996), and Polycarpou and Mears j (1998). 
12 Intelligent Control of Nonlinear Systems with a Time-Varying Structure

In particular, bounds on the input functions $\psi_i^{c_j}$ are only assumed to exist; they need neither be known nor estimated, because the ideal control law is formulated so that there is no explicit need to include information about the bounds in the actual control law. Moreover, although the assumption of equation 12.2 appears more restrictive than what is needed in the indirect adaptive case, it is in fact not, because the stability results are semiglobal: since we operate in the compact sets $S_{x_n}$, continuity of the affine terms automatically implies satisfaction of the second part of assumption 12.2.
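To make the class of plants concrete, a minimal member of equation 12.1 (first state equation only, $R = 2$ regimes) can be sketched as follows. Every function below is invented purely for illustration and is not part of the chapter's development:

```python
import math

# Toy member of the class of equation 12.1 (first state equation only,
# R = 2 regimes); all functions here are invented for illustration.
def rho(nu):
    """Scheduling weights: smooth, nonnegative, and summing to one."""
    w1 = 1.0 / (1.0 + math.exp(-4.0 * nu))   # smooth switch around nu = 0
    return [1.0 - w1, w1]

def f1(x1, nu):
    """Interpolated drift f_1^c = rho_1 f_1^{c_1} + rho_2 f_1^{c_2}."""
    r = rho(nu)
    return r[0] * (-x1) + r[1] * (x1 ** 3)

def psi1(x1, nu):
    """Interpolated input gain psi_1^c; bounded in [1, 3], as the first
    part of assumption 12.2 requires."""
    r = rho(nu)
    return r[0] * 1.0 + r[1] * (2.0 + math.sin(x1))
```

As the scheduling variable moves from one regime to the other, the plant blends smoothly between the two local strict-feedback systems.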
12.2.1 Direct Adaptive Control Theorem

Next, we state our main result and then show its proof. For convenience, we use the notation $\nu_i = [\nu, \dot\nu, \ldots, \nu^{(i-1)}]^\top \in \mathbb{R}^{qi}$, $i = 1, \ldots, n$.

Theorem 1
Consider the system of equation 12.1 with the state vector $X_n$ measurable and the scheduling vector $\nu_{n+1}$ measurable and bounded, together with the above-stated assumptions on $f_i^c$, $\psi_i^c$, $\rho_j$, and equation 12.2. Assume also that $\nu_i(0) \in S_{\nu_i} \subset \mathbb{R}^{qi}$ and $X_i(0) \in S_{x_i} \subset \mathbb{R}^i$, $i = 1, \ldots, n$, where $S_{\nu_i}$ and $S_{x_i}$ are compact sets specified by the designer and large enough that $\nu_i$ and $X_i$ do not exit them. Consider the diffeomorphism $z_1 = x_1$, $z_i = x_i - \hat\alpha_{i-1} - \alpha_{i-1}^s$, $i = 2, \ldots, n$, with

$$\hat\alpha_i(X_i, \nu_i) = \sum_{j=1}^{R} \rho_j(\nu)\, \hat\theta_{a_i^j}^{\top} \zeta_{a_i^j}(X_i, \nu_i), \qquad \alpha_i^s(z_i, z_{i-1}) = -k_i z_i - z_{i-1},$$

with $k_i > 0$ and $z_0 = 0$. Assume the functions $\zeta_{a_i^j}(X_i, \nu_i)$ to be at least $n - i$ times continuously differentiable and to satisfy, for $i = 1, \ldots, n$, $j = 1, \ldots, R$,

$$\left| \frac{\partial^{\,n-i} \zeta_{a_i^j}}{\partial [X_i, \nu_i]^{\,n-i}} \right| < \infty. \tag{12.3}$$

Consider the adaptation laws for the parameter vectors $\hat\theta_{a_i^j} \in \mathbb{R}^{N_{a_i^j}}$, $N_{a_i^j} \in \mathbb{N}$,

$$\dot{\hat\theta}_{a_i^j} = -\rho_j\, \gamma_{a_i^j}\, \zeta_{a_i^j}\, z_i - \sigma_{a_i^j}\, \hat\theta_{a_i^j},$$

where $\gamma_{a_i^j} > 0$ and $\sigma_{a_i^j} > 0$, $i = 1, \ldots, n$, $j = 1, \ldots, R$, are design parameters. Then the control law $u = \hat\alpha_n + \alpha_n^s$ guarantees boundedness of all signals and convergence of the states to the residual set

$$D_d = \left\{ X_n \in \mathbb{R}^n : \sum_{i=1}^{n} z_i^2 \le \frac{2\,\bar c_m W_d}{b_d} \right\}, \tag{12.4}$$

where $\bar c_m = \max_{1 \le i \le n} \bar\psi_i^c$, $b_d$ is a constant, and $W_d$ measures approximation errors and ideal parameter sizes; its magnitude can be reduced through the choice of the design constants $k_i$, $\gamma_{a_i^j}$, and $\sigma_{a_i^j}$.

Proof
The proof requires $n$ steps and is performed inductively. First, let $z_1 = x_1$ and $z_2 = x_2 - \hat\alpha_1 - \alpha_1^s$, where $\hat\alpha_1$ is the approximation to an ideal signal $\alpha_1$ ("ideal" in the sense that if we had $\hat\alpha_1 = \alpha_1$, we would have a globally asymptotically stable closed loop without need for the stabilizing term $\alpha_1^s$), and $\alpha_1^s$ will be given below. Let $c_1 > 0$ be a constant such that $c_1 > \psi_{1_d}^c/(2\underline\psi_1^c)$, and let $\alpha_1(x_1, \nu) = (1/\psi_1^c)(-f_1^c - c_1 z_1)$. Because the ideal control $\alpha_1$ is smooth, it may be approximated with arbitrary accuracy for $\nu$ and $x_1$ within the compact sets $S_{\nu_1} \subset \mathbb{R}^q$ and $S_{x_1} \subset \mathbb{R}$, respectively, as long as the size of the approximator can be made arbitrarily large. For approximators of finite size, let $\alpha_1(x_1, \nu) = \sum_{j=1}^{R} \rho_j(\nu)\, \theta_{a_1^j}^{*\top} \zeta_{a_1^j}(\nu, x_1) + \delta_{a_1}(\nu, x_1)$, where the parameter vectors $\theta_{a_1^j}^* \in \mathbb{R}^{N_{a_1^j}}$, $N_{a_1^j} \in \mathbb{N}$, are optimum in the sense that they minimize the representation error $\delta_{a_1}$ over the set $S_{x_1} \times S_{\nu_1}$ and suitable compact parameter spaces $\Omega_{a_1^j}$; in addition, the $\zeta_{a_1^j}(x_1, \nu)$ are defined via the choice of the approximator structure [see Ordóñez and Passino (2000b) for an example of such a choice]. The parameter sets $\Omega_{a_1^j}$ are simply mathematical artifacts: as a result of the stability proof, the approximator parameters are bounded using the adaptation laws in theorem 1, so $\Omega_{a_1^j}$ does not need to be defined explicitly, and no parameter projection (or any other "artificial" means of keeping the parameters bounded) is required. The representation error $\delta_{a_1}$ arises because the sizes $N_{a_1^j}$ are finite, but it may be made arbitrarily small within $S_{x_1} \times S_{\nu_1}$ by increasing $N_{a_1^j}$ (i.e., we assume the chosen approximator structures possess the universal approximation property). In this way, there exists a constant bound $\bar\delta_{a_1} > 0$ such that $|\delta_{a_1}| \le \bar\delta_{a_1} < \infty$. To make the proof logically consistent, however, we need to assume that some knowledge about this bound and a bound on $\theta_{a_1^j}^*$ is available (since, in this case, it becomes possible to guarantee a priori that $S_{x_1} \times S_{\nu_1}$ is large enough). In practice, some amount of redesign may be required because these bounds are typically guessed by the designer. Let $\Phi_{a_1^j} = \hat\theta_{a_1^j} - \theta_{a_1^j}^*$ denote the parameter error, and approximate $\alpha_1$ with $\hat\alpha_1(x_1, \nu, \hat\theta_{a_1^j};\, j = 1, \ldots, R) = \sum_{j=1}^{R} \rho_j(\nu)\, \hat\theta_{a_1^j}^{\top} \zeta_{a_1^j}(x_1, \nu)$. Hence, we have a linear-in-the-parameters approximator with parameter vectors $\hat\theta_{a_1^j}$. Note that the structural dependence on time of the system of equation 12.1 is reflected in the controller, because $\hat\alpha_1$ can be viewed as using the functions $\rho_j(\nu)$ to interpolate between "local" controllers of the form $\hat\theta_{a_1^j}^{\top}\zeta_{a_1^j}(x_1, \nu)$.
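The role of the $\rho_j$ as interpolators of local controllers can be seen in a small numeric sketch; the regressor and parameter values below are invented for illustration:

```python
import math

# alpha_hat_1 as an interpolation of R = 2 "local" controllers
# theta_j^T zeta(x1); all numbers here are illustrative.
def zeta(x1):
    return [1.0, x1]                       # simple two-term regressor

theta = [[-0.2, -1.0], [0.1, -3.0]]        # local parameter vectors

def rho(nu):
    """Normalized Gaussian interpolation weights (sum to one)."""
    w = [math.exp(-(nu - c) ** 2) for c in (0.0, 1.0)]
    s = sum(w)
    return [wi / s for wi in w]

def alpha_hat(x1, nu):
    r = rho(nu)
    return sum(r[j] * sum(t * z for t, z in zip(theta[j], zeta(x1)))
               for j in range(2))
```

Far inside one regime (e.g., $\nu$ well below the first center), $\hat\alpha_1$ essentially reduces to that regime's local control law.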
Notice that since the functions $\rho_j$ are assumed continuous and $\nu$ is bounded, the signal $\hat\alpha_1$ is well defined for all $\nu \in S_{\nu_1}$. Consider the dynamics of the transformed state:

$$\dot z_1 = f_1^c + \psi_1^c(z_2 + \hat\alpha_1 + \alpha_1^s) + \psi_1^c(\alpha_1 - \alpha_1)
= -c_1 z_1 + \psi_1^c z_2 + \psi_1^c(\hat\alpha_1 - \alpha_1) + \psi_1^c\alpha_1^s
= -c_1 z_1 + \psi_1^c z_2 + \psi_1^c\sum_{j=1}^{R}\rho_j\,\Phi_{a_1^j}^\top\zeta_{a_1^j} - \psi_1^c\delta_{a_1} + \psi_1^c\alpha_1^s.$$

Let $V_1 = \frac{1}{2\psi_1^c}z_1^2 + \frac{1}{2}\sum_{j=1}^{R}\Phi_{a_1^j}^\top\Phi_{a_1^j}/\gamma_{a_1^j}$, and examine its derivative, $\dot V_1 = \left(2\psi_1^c(2z_1\dot z_1) - 2z_1^2\dot\psi_1^c\right)/(4\psi_1^{c\,2}) + \sum_{j=1}^{R}\Phi_{a_1^j}^\top\dot\Phi_{a_1^j}/\gamma_{a_1^j}$. Using the expression for $\dot z_1$:

$$\dot V_1 = -\frac{c_1 z_1^2}{\psi_1^c} + z_1 z_2 + z_1\sum_{j=1}^{R}\rho_j\,\Phi_{a_1^j}^\top\zeta_{a_1^j} - z_1\delta_{a_1} + z_1\alpha_1^s - \frac{1}{2}z_1^2\,\frac{\dot\psi_1^c}{\psi_1^{c\,2}} + \sum_{j=1}^{R}\Phi_{a_1^j}^\top\dot\Phi_{a_1^j}/\gamma_{a_1^j}.$$

Choose the adaptation law $\dot{\hat\theta}_{a_1^j} = \dot\Phi_{a_1^j} = -\rho_j\gamma_{a_1^j}\zeta_{a_1^j}z_1 - \sigma_{a_1^j}\hat\theta_{a_1^j}$, with design constants $\gamma_{a_1^j} > 0$, $\sigma_{a_1^j} > 0$, $j = 1, \ldots, R$ (we think of $-\sigma_{a_1^j}\hat\theta_{a_1^j}$ as a "leakage" term). Also note that for any constant $k_1 > 0$, $-z_1\delta_{a_1} \le |z_1|\bar\delta_{a_1} \le k_1 z_1^2 + \bar\delta_{a_1}^2/(4k_1)$. We pick $\alpha_1^s = -k_1 z_1$.
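For a scalar plant ($n = 1$, regulation, $z_1 = x_1$), the first step of this construction — the control $u = \hat\alpha_1 + \alpha_1^s$ with the adaptation law just chosen — can be sketched in simulation. The plant functions, gains, regressor, and time grid below are all illustrative choices, not part of the chapter's development:

```python
import math

# Illustrative scalar plant xdot = f(x, nu) + psi(x, nu) * u with R = 2 regimes;
# f and psi are unknown to the controller and invented for this sketch.
def rho(nu):
    w = [math.exp(-(nu - c) ** 2) for c in (0.0, 2.0)]
    s = sum(w)
    return [wi / s for wi in w]

def f(x, nu):
    r = rho(nu)
    return r[0] * (x + 1.0) + r[1] * (2.0 * math.sin(x) - 1.0)

def psi(x, nu):
    return 1.0 + 0.5 * math.cos(x) ** 2           # 1 <= psi <= 1.5

CENTERS = [-2.0, -1.0, 0.0, 1.0, 2.0]             # Gaussian regressor zeta
def zeta(x):
    return [math.exp(-(x - c) ** 2) for c in CENTERS]

k1, gamma, sigma = 5.0, 5.0, 0.01                 # design constants
theta = [[0.0] * len(CENTERS) for _ in range(2)]  # theta_hat_{a_1^j}, j = 1, 2

x, dt = 1.5, 1e-3
for step in range(20000):
    nu = 1.0 + math.sin(2.0 * math.pi * 1e-4 * step)  # scheduling variable
    r, zt = rho(nu), zeta(x)
    z1 = x                                            # z_1 = x_1 (regulation)
    alpha_hat = sum(r[j] * sum(t * zi for t, zi in zip(theta[j], zt))
                    for j in range(2))
    u = alpha_hat - k1 * z1                           # u = alpha_hat_1 + alpha_1^s
    # adaptation with leakage: theta_dot = -rho_j gamma zeta z_1 - sigma theta
    for j in range(2):
        for i in range(len(CENTERS)):
            theta[j][i] += dt * (-r[j] * gamma * zt[i] * z1
                                 - sigma * theta[j][i])
    x += dt * (f(x, nu) + psi(x, nu) * u)
# x ends up in a small residual set around the origin
```

The leakage term keeps the parameters bounded without projection, at the price of a nonzero residual set, exactly as in the analysis above.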
Raúl Ordóñez and Kevin M. Passino

Notice also that, in completing squares, $-\Phi_{a_1^j}^\top\hat\theta_{a_1^j} = -\Phi_{a_1^j}^\top(\Phi_{a_1^j} + \theta_{a_1^j}^*) \le -|\Phi_{a_1^j}|^2/2 + |\theta_{a_1^j}^*|^2/2$. Finally, observe that $-z_1^2\left(c_1 + \dot\psi_1^c/(2\psi_1^c)\right)/\psi_1^c \le -z_1^2\left(c_1 - \psi_{1_d}^c/(2\underline\psi_1^c)\right)/\psi_1^c = -\bar c_1 z_1^2/\psi_1^c$, with $\bar c_1 = c_1 - \psi_{1_d}^c/(2\underline\psi_1^c) > 0$. Then we obtain

$$\dot V_1 \le -\frac{\bar c_1 z_1^2}{\psi_1^c} - \frac{1}{2}\sum_{j=1}^{R}\sigma_{a_1^j}\frac{|\Phi_{a_1^j}|^2}{\gamma_{a_1^j}} + z_1 z_2 + \frac{\bar\delta_{a_1}^2}{4k_1} + \frac{1}{2}\sum_{j=1}^{R}\sigma_{a_1^j}\frac{|\theta_{a_1^j}^*|^2}{\gamma_{a_1^j}}.$$

This completes the first step of the proof. We may continue in this manner up to the $n$th step (we omit the intermediate steps for brevity), where we have $z_n = x_n - \hat\alpha_{n-1} - \alpha_{n-1}^s$, with $\hat\alpha_{n-1}$ and $\alpha_{n-1}^s$ defined as in theorem 1. Consider the ideal signal $\alpha_n(X_n, \nu_n) = (1/\psi_n^c)\left(-f_n^c - c_n z_n + \dot{\hat\alpha}_{n-1} + \dot\alpha_{n-1}^s\right)$ with $c_n > \psi_{n_d}^c/(2\underline\psi_n^c)$. Notice that, even though the terms $\dot{\hat\theta}_{a_{n-1}^j}$ appear in $\alpha_n$ through the partial derivatives in $\dot{\hat\alpha}_{n-1}$, $\dot{\hat\theta}_{a_{n-1}^j}$ does not need to be an input to $\alpha_n$, since the resulting product of the partial derivatives and $\dot{\hat\theta}_{a_{n-1}^j}$ can be expressed in terms of $z_1, \ldots, z_{n-1}$, $\nu$, and $\sigma_{a_{n-1}^j}\hat\alpha_{n-1}$. To simplify the notation, however, we omit the dependencies on inputs other than $X_i$ and $\nu_i$, bearing in mind that, when implementing this method, more inputs may be required to satisfy the proof. Note also that, by the assumption in equation 12.3, $|\alpha_n| < \infty$ for bounded arguments. Therefore, we may represent $\alpha_n$ as $\alpha_n(X_n, \nu_n) = \sum_{j=1}^{R}\rho_j(\nu)\,\theta_{a_n^j}^{*\top}\zeta_{a_n^j}(X_n, \nu_n) + \delta_{a_n}(X_n, \nu_n)$ for $X_n \in S_{x_n} \subset \mathbb{R}^n$ and $\nu_n \in S_{\nu_n} \subset \mathbb{R}^{qn}$. The parameter vector $\theta_{a_n^j}^* \in \mathbb{R}^{N_{a_n^j}}$, $N_{a_n^j} \in \mathbb{N}$, is an optimum within a compact parameter set $\Omega_{a_n^j}$, in a sense similar to $\theta_{a_1^j}^*$, so that for $(X_n, \nu_n) \in S_{x_n} \times S_{\nu_n}$, $|\delta_{a_n}| \le \bar\delta_{a_n}$. Let $\Phi_{a_n^j} = \hat\theta_{a_n^j} - \theta_{a_n^j}^*$, and consider the approximation $\hat\alpha_n$ as given in theorem 1. The control law $u = \hat\alpha_n + \alpha_n^s$ yields

$$\dot z_n = f_n^c + \psi_n^c(\hat\alpha_n + \alpha_n^s) - \dot{\hat\alpha}_{n-1} - \dot\alpha_{n-1}^s + \psi_n^c(\alpha_n - \alpha_n)
= -c_n z_n + \psi_n^c\sum_{j=1}^{R}\rho_j(\nu)\,\Phi_{a_n^j}^\top\zeta_{a_n^j} - \psi_n^c\delta_{a_n} + \psi_n^c\alpha_n^s.$$

Choose the Lyapunov function candidate $V = V_{n-1} + \frac{1}{2\psi_n^c}z_n^2 + \frac{1}{2}\sum_{j=1}^{R}\Phi_{a_n^j}^\top\Phi_{a_n^j}/\gamma_{a_n^j}$, and examine its derivative:

$$\dot V = \dot V_{n-1} - \frac{c_n z_n^2}{\psi_n^c} + z_n\sum_{j=1}^{R}\rho_j(\nu)\,\Phi_{a_n^j}^\top\zeta_{a_n^j} - z_n\delta_{a_n} + z_n\alpha_n^s - \frac{1}{2}z_n^2\,\frac{\dot\psi_n^c}{\psi_n^{c\,2}} + \sum_{j=1}^{R}\Phi_{a_n^j}^\top\dot\Phi_{a_n^j}/\gamma_{a_n^j}.$$

One can show inductively that

$$\dot V_{n-1} \le -\sum_{i=1}^{n-1}\frac{\bar c_i z_i^2}{\psi_i^c} - \frac{1}{2}\sum_{i=1}^{n-1}\sum_{j=1}^{R}\sigma_{a_i^j}\frac{|\Phi_{a_i^j}|^2}{\gamma_{a_i^j}} + z_{n-1}z_n + \sum_{i=1}^{n-1}\frac{\bar\delta_{a_i}^2}{4k_i} + \frac{1}{2}\sum_{i=1}^{n-1}\sum_{j=1}^{R}\sigma_{a_i^j}\frac{|\theta_{a_i^j}^*|^2}{\gamma_{a_i^j}},$$

with constants $\bar c_i = c_i - \psi_{i_d}^c/(2\underline\psi_i^c) > 0$, $i = 1, \ldots, n$. The choice of adaptation laws for $\hat\theta_{a_n^j}$ and of $\alpha_n^s$ in theorem 1, together with the observations that $-(\sigma_{a_n^j}/\gamma_{a_n^j})\Phi_{a_n^j}^\top\hat\theta_{a_n^j} \le -(\sigma_{a_n^j}/\gamma_{a_n^j})|\Phi_{a_n^j}|^2/2 + (\sigma_{a_n^j}/\gamma_{a_n^j})|\theta_{a_n^j}^*|^2/2$, $-z_n\delta_{a_n} \le k_n z_n^2 + \bar\delta_{a_n}^2/(4k_n)$ with $k_n > 0$, and $-(z_n^2/\psi_n^c)\left(c_n + \dot\psi_n^c/(2\psi_n^c)\right) \le -\bar c_n z_n^2/\psi_n^c$, imply the following:

$$\dot V \le -\sum_{i=1}^{n}\frac{\bar c_i z_i^2}{\psi_i^c} - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{R}\sigma_{a_i^j}\frac{|\Phi_{a_i^j}|^2}{\gamma_{a_i^j}} + W_d, \tag{12.5}$$

where $W_d$ contains the combined effects of representation errors and ideal parameter sizes and is given by $W_d = \sum_{i=1}^{n}\bar\delta_{a_i}^2/(4k_i) + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{R}\sigma_{a_i^j}|\theta_{a_i^j}^*|^2/\gamma_{a_i^j}$. Note that if $\sum_{i=1}^{n}\bar c_i z_i^2/\psi_i^c \ge W_d$ or $\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{R}\sigma_{a_i^j}|\Phi_{a_i^j}|^2/\gamma_{a_i^j} \ge W_d$, then $\dot V \le 0$. Furthermore, letting $\underline c_m = \min_{1\le i\le n}(\underline\psi_i^c)$, $\bar c_m = \max_{1\le i\le n}(\bar\psi_i^c)$, and defining $c_0 = \min_{1\le i\le n}(\bar c_i)$, $c_m = \underline c_m/\bar c_m$, and $s_0 = \min_{1\le i\le n,\,1\le j\le R}\sigma_{a_i^j}$, we have

$$\sum_{i=1}^{n}\frac{\bar c_i z_i^2}{\psi_i^c} \ge c_0\sum_{i=1}^{n}\frac{z_i^2}{\bar\psi_i^c} = c_0\sum_{i=1}^{n}\frac{z_i^2}{\psi_i^c}\frac{\psi_i^c}{\bar\psi_i^c} \ge c_0\sum_{i=1}^{n}\frac{z_i^2}{\psi_i^c}\frac{\underline\psi_i^c}{\bar\psi_i^c} \ge c_0 c_m\sum_{i=1}^{n}\frac{z_i^2}{\psi_i^c},$$

and $\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{R}\sigma_{a_i^j}|\Phi_{a_i^j}|^2/\gamma_{a_i^j} \ge s_0\,\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{R}|\Phi_{a_i^j}|^2/\gamma_{a_i^j}$. Then, letting $b_d = \min(2c_0 c_m, s_0)$, we have that if

$$V = \frac{1}{2}\sum_{i=1}^{n}\frac{z_i^2}{\psi_i^c} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{R}\frac{|\Phi_{a_i^j}|^2}{\gamma_{a_i^j}} \ge V_0, \tag{12.6}$$

with $V_0 = W_d/b_d$, then $\dot V \le 0$ and all signals in the closed loop are bounded. Furthermore, we have $\dot V \le -b_d V + W_d$, which implies that $0 \le V(t) \le W_d/b_d + \left(V(0) - W_d/b_d\right)e^{-b_d t}$, so both the transformed states and the parameter error vectors converge to a bounded set. Finally, we conclude from the upper bound on $V(t)$ that the state vector $X_n$ converges to the residual set of equation 12.4.

Remark 1
The representation error bounds and the sizes of the ideal parameter vectors are assumed known because they affect the size of the residual set to which the states converge. It is possible to augment the direct adaptive algorithm with "auto-tuning" capabilities similar to Polycarpou and Mears (1998), which would relax the need for these bounds. Furthermore, note that the stability result of theorem 1 is semiglobal in the sense that it is valid in the compact sets $S_{\nu_i}$ and $S_{x_i}$, $i = 1, \ldots, n$, which can be made arbitrarily large. The stability result may be made global by adding a high-gain bounding control term to the control law. Such a term may be particularly useful when, due to a complete lack of a priori knowledge, the control designer is unable to guarantee that the compact sets $S_{x_i}$, $i = 1, \ldots, n$, are large enough that the state will not exit them before the controller has time to bring the state inside $D_d$; moreover, it may also happen that, due to a poor design and poor system knowledge, $D_d$ is not contained in $S_{x_n}$. In this case, too, bounding control terms may be helpful until the design is refined and improved. Using bounding control, however, requires explicit knowledge of functional upper bounds of $|\psi_i^c(\nu, X_i)|$ and also of the lower bounds $\underline\psi_i^c$, $i = 1, \ldots, n$, knowledge we do not mandate in theorem 1.
Bounding terms may be added to the diffeomorphism in theorem 1, but we do not present the analysis since it is similar to the one presented here and algebraically tedious; we simply note that the bounding terms have to be smooth (because they need to be differentiable), so they must be defined in terms of smooth approximations to the sign, saturation, and absolute value functions typically used in this approach.

Remark 2
If the bounds $\underline\psi_i^c$, $\bar\psi_i^c$, and $\psi_{i_d}^c$ are known, it becomes possible for the designer to directly set the constants $\bar c_i$ in the control law. Notice that with knowledge of these bounds, the term $c_m$ is also known, and we can pick constants $c_i$ such that $c_i > \psi_{i_d}^c/(2\underline\psi_i^c)$. Define the auxiliary functions $Z_i = -c_i z_i$. We may explicitly set the constant $c_i$ in $\alpha_i$ if we let $Z_i$ be an input to the $i$th approximator structure (i.e., if we let $\alpha_i(X_i, \nu_i, \dot X_{r_i}, Z_i) = \sum_{j=1}^{R}\rho_j(\nu)\,\theta_{a_i^j}^{*\top}\zeta_{a_i^j}(X_i, \nu_i, \dot X_{r_i}, Z_i) + \delta_{a_i}$). Then the approximators used in the control procedure are given by $\hat\alpha_i(X_i, \nu_i, \dot X_{r_i}, Z_i) = \sum_{j=1}^{R}\rho_j(\nu)\,\hat\theta_{a_i^j}^{\top}\zeta_{a_i^j}(X_i, \nu_i, \dot X_{r_i}, Z_i)$, and the stability analysis can be carried out as expected.
12.2.2 Performance Analysis: L2 Bounds and Transient Design

The stability result of theorem 1 is useful because it indicates conditions for obtaining stable closed-loop behavior for a plant belonging to the class of equation 12.1. However, it is not immediately clear how to choose the several design constants to improve control performance. Here, we concentrate on the tracking problem and present design guidelines with respect to an $L_2$ bound on the tracking error. We are interested in having $x_1$ track the state $x_{r_1}$ of the reference model $\dot x_{r_i} = x_{r_{i+1}}$, $i = 1, 2, \ldots, n-1$, $\dot x_{r_n} = f_r(X_{r_n}, r)$, with bounded reference input $r(t) \in \mathbb{R}$. Now we need to use the diffeomorphism $z_1 = x_1 - x_{r_1}$, $z_i = x_i - \hat\alpha_{i-1} - \alpha_{i-1}^s$, $i = 2, \ldots, n$, with $\alpha_1(x_1, \nu, \dot x_{r_1}) = (1/\psi_1^c)(-f_1^c - c_1 z_1 + x_{r_2})$ and $\alpha_i(X_i, \nu_i, \dot X_{r_i}) = (1/\psi_i^c)\left(-f_i^c - c_i z_i + \dot{\hat\alpha}_{i-1} + \dot\alpha_{i-1}^s\right)$ for $i = 2, \ldots, n$. The stability proof needs to be modified accordingly, and it can be shown that the tracking error $|x_1 - x_{r_1}|$ converges to a neighborhood of size $\sqrt{2\bar c_m W_d/b_d}$. From the upper bound on $V(t)$, we can write $V(t) \le W_d/b_d + V(0)e^{-b_d t}$. From here, it follows that

$$\frac{1}{2}\sum_{i=1}^{n}\frac{z_i^2(t)}{\psi_i^c(t)} \le \frac{W_d}{b_d} + \left(\frac{1}{2}\sum_{i=1}^{n}\frac{z_i^2(0)}{\psi_i^c(0)} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{R}\frac{|\Phi_{a_i^j}(0)|^2}{\gamma_{a_i^j}}\right)e^{-b_d t}.$$

The terms $z_i(0)$ depend on the design constants in a complex manner. For this reason, rather than trying to take them into account in the design procedure, we follow the trajectory initialization approach of Krstić et al. (1995), which allows the designer to set $z_i(0) = 0$, $i = 1, \ldots, n$, by an appropriate choice of the reference model's initial conditions. In our case, in addition to assuming that it is possible to set the initial conditions of the reference model, we have to assume certain invertibility conditions on the approximators. In particular, because $z_1(0) = x_1(0) - x_{r_1}(0)$, we need to set $x_{r_1}(0) = x_1(0)$ for $z_1(0) = 0$.
For the $i$th transformed state $z_i$, $i = 2, \ldots, n$, $z_i(0) = x_i(0) - \hat\alpha_{i-1}(0) - \alpha_{i-1}^s(0)$. Notice that $\alpha_{i-1}^s(0) = \alpha_{i-1}^s(z_{i-1}(0), z_{i-2}(0))$, so that if $z_{i-1}(0) = 0$ and $z_{i-2}(0) = 0$, we have $\alpha_{i-1}^s(0) = 0$. In particular, notice that this holds for $i = 2$. In this case, to set $z_2(0) = 0$, we need to have $\hat\alpha_1(x_1(0), \nu(0), x_{r_2}(0)) = x_2(0)$. This equation can be solved analytically (or numerically) for $x_{r_2}(0)$, provided $\partial\hat\alpha_1/\partial x_{r_2}|_{t=0} \ne 0$. This is not an unreasonable condition, because it depends on the choice of approximator structure the designer makes, and the structure can be chosen so that it satisfies this condition. Granted this is the case, it clearly holds that $\alpha_2^s(0) = 0$, and the same procedure can be carried out inductively for $i = 3, \ldots, n$, with the choices $\hat\alpha_{i-1}(X_{i-1}(0), \nu_{i-1}(0), x_{r_i}(0)) = x_i(0)$. This procedure yields the simpler bound

$$\sum_{i=1}^{n} z_i^2(t) \le \frac{2\bar c_m W_d}{b_d} + \bar c_m\sum_{i=1}^{n}\sum_{j=1}^{R}\frac{|\Phi_{a_i^j}(0)|^2}{\gamma_{a_i^j}}\,e^{-b_d t}.$$

We would like to make this bound small so that the transient excursion of the tracking error is small. Notice that we do not have direct control over the size of $b_d$, because this term depends on the unknown constants $\bar c_i$, which appear through the ideal signals $\alpha_i$. Even though it is not necessary to be able to set $b_d$ to reduce the size of the bound, it is possible to do so if the bounds $\underline\psi_i^c$, $\bar\psi_i^c$, and $\psi_{i_d}^c$ are known.

At this point, it becomes clearer how to choose the constants to achieve a smaller bound. Recalling the expression for $W_d$, note that one may first want $b_d > 1$, so that $W_d$ is not made larger when divided by $b_d$ and the convergence is faster. This may be achieved by setting $c_i$ such that $2\bar c_i c_m > 1$ (if enough knowledge is available to do so) and $\sigma_{a_i^j} > 1$. However, a large $\sigma_{a_i^j}$ makes $W_d$ larger; this can be offset by also choosing the ratio $\sigma_{a_i^j}/\gamma_{a_i^j} \le 1$ or smaller. Finally, it is clear that making $k_i$ larger reduces the effect of the representation errors and, therefore, makes $W_d$ smaller. Observe that there is enough design freedom to make $W_d$ small and $b_d$ large independently of each other. Moreover, notice that the bound on $\sum_{i=1}^{n} z_i^2(t)$ makes it possible to specify the compact sets of the approximators so that, even throughout the transient, the states are guaranteed to remain within the compact sets without the need for a global bounding control term. This has been a recurrent shortcoming of many online function approximation-based methods, and the explicit bound on the transient makes it possible to overcome it. These simple guidelines may become very useful when performing a real control design; the next section illustrates them and shows their practical effect in a real application.
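A back-of-the-envelope illustration of these guidelines follows; all of the bound constants ($\bar\delta$, $|\theta^*|$, the $\psi^c$ bounds) are invented numbers, chosen only to make the arithmetic concrete:

```python
# Hypothetical numbers illustrating the transient-bound guidelines; every
# constant below (delta_bar, theta_star, c_bar, c_m, cbar_m) is made up.
def transient_bound(k, gamma, sigma, delta_bar=0.5, theta_star=2.0,
                    c_bar=2.0, c_m=0.8, cbar_m=1.5, n=1, R=1):
    # W_d = sum delta_bar^2/(4k) + (1/2) sum sigma |theta*|^2 / gamma
    W_d = n * delta_bar ** 2 / (4 * k) + 0.5 * n * R * sigma * theta_star ** 2 / gamma
    b_d = min(2 * c_bar * c_m, sigma)          # b_d = min(2 c_0 c_m, s_0)
    return 2 * cbar_m * W_d / b_d              # size of the residual/transient bound

loose = transient_bound(k=0.3, gamma=0.1, sigma=0.2)   # small k, sigma/gamma = 2
tight = transient_bound(k=3.0, gamma=1.5, sigma=0.5)   # larger k, gamma; sigma/gamma < 1
```

With the larger $k$ and $\gamma$ and a $\sigma/\gamma$ ratio below one, the computed bound shrinks by roughly an order of magnitude, mirroring the qualitative guidance above.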
12.3 Application: Direct Adaptive Wing Rock Regulation with Varying Angle of Attack

Subsonic wing rock is a nonlinear phenomenon experienced by aircraft with slender delta wings and pointed forebodies at high angles of attack, in which limit cycle roll and roll rate oscillations, or unstable behavior, appear. Wing rock may diminish flight effectiveness or even present serious danger due to potential instability of the aircraft. Here, we apply the method of local control laws of theorem 1 to the problem of wing rock regulation. Other approaches to this problem can be found in Joshi et al. (1998), Singh et al. (1995), Luo and Lan (1993), and Krstić et al. (1995), among others. In Singh et al. (1995), the authors present conventional adaptive and neural adaptive control methods for wing rock control. In Luo and Lan (1993), an optimal feedback control using Beecham-Titchener's averaging technique is applied. In Joshi et al. (1998), the authors present a single-neuron controller trained with backpropagation to regulate wing rock, and this controller is tested in a wind tunnel. In Krstić et al. (1995), the authors use the tuning functions method of adaptive backstepping to develop a wing rock regulator. It is interesting to note that all these methods are developed at a fixed angle of attack and, in some cases, are tested at another angle close to the design point, which serves to support the robustness claims of the designs. Here, the problem is considered in a more general setting, where the angle of attack is allowed to vary with time according to the evolution of an external dynamical system (which may represent the commands of the pilot together with the aircraft dynamics). As will be noted subsequently, the dynamics of the wing rock phenomenon change nonlinearly with the angle of attack, which makes the problem of developing controllers that are robust against angle of attack variations a challenging one. However, this problem fits the class of time-varying systems (Polycarpou and Ioannou, 1991) considered in this chapter, so the development of a controller that can operate at all angles of attack is greatly simplified by following theorem 1.
There exist several analytical nonlinear models that characterize the phenomenon of wing rock (Hsu and Lan, 1985; Nayfeh et al., 1989; Elzebda et al., 1989). The model we use here is the one presented in Nayfeh et al. (1989) and Elzebda et al. (1989), which has the advantage over the model in Hsu and Lan (1985) of being differentiable and, according to the authors, slightly more accurate. This model is given by:

$$\ddot\phi = -\omega_j^2\phi + \mu_1^j\dot\phi + b_1^j\dot\phi^3 + \mu_2^j\phi^2\dot\phi + b_2^j\phi\dot\phi^2 + g\delta_a, \tag{12.7}$$

where $\phi$ is the roll angle, $\delta_a$ is the output of an actuator with first-order dynamics, $g = 1.5$ is an input gain, and $\omega_j^2 = -c_1 a_1^j$, $\mu_1^j = c_1 a_2^j - c_2$, $b_1^j = c_1 a_3^j$, $\mu_2^j = c_1 a_4^j$, and $b_2^j = c_1 a_5^j$ are system coefficients that depend on the parameters $a_i^j$, which in turn are functions of the angle of attack, denoted here by $\nu$ (aircraft notation conventions dictate the use of $\alpha$ for the angle of attack; however, to avoid confusion with the notation used here, we use $\nu$ instead). From Nayfeh et al. (1989), we let $c_1 = 0.354$ and $c_2 = 0.001$, constants given by the physical parameters of a delta wing used in the wind tunnel
TABLE 12.1 Parameters for the Coefficients in the Wing Rock Model

  $\nu$     $a_1^j$     $a_2^j$     $a_3^j$     $a_4^j$     $a_5^j$
  15       -0.01026    -0.02117    -0.14181     0.99735    -0.83478
  17       -0.02007    -0.0102     -0.0837      0.63333    -0.5034
  19       -0.0298      0.000818   -0.0255      0.2692     -0.1719
  21.5     -0.04207     0.01456     0.04714    -0.18583     0.24234
  22.5     -0.04681     0.01966     0.05671    -0.22691     0.59065
  23.75    -0.0518      0.0261      0.065      -0.2933      1.0294
  25       -0.05686     0.03254     0.07334    -0.3597      1.4681
experiments in Levin and Katz (1984) to develop the analytical model of equation 12.7. In Nayfeh et al. (1989), four angles of attack are considered, at which the coefficients $a_i^j$ are given. We added three points to the table in Nayfeh et al. (1989) by assuming that the functions passing through the points $a_i^j$ are approximately piecewise linear (a reasonable assumption, considering the plots presented in that publication). Thus, the points used are given in Table 12.1, where the points at $\nu = 17$, $19$, and $23.75$ have been added to the table in question. To build a smooth, time-varying model of wing rock that depends on the angle of attack $\nu$, we consider the interpolation functions:

$$\rho_j(\nu) = \frac{e^{-\left((\nu - \nu_j)/\sigma_j\right)^2}}{\sum_{l=1}^{7} e^{-\left((\nu - \nu_l)/\sigma_l\right)^2}}, \tag{12.8}$$

where the centers $\nu_j$ and spreads $\sigma_j$, $j = 1, \ldots, 7$, are given in Table 12.2. Notice that the interpolation functions of equation 12.8 satisfy the assumptions stated in theorem 1. To test the accuracy of the interpolations, let:

$$a_i(\nu) = \sum_{j=1}^{7} \rho_j(\nu)\, a_i^j, \tag{12.9}$$
for $i = 1, \ldots, 5$. Figure 12.1 contains the plots of the interpolated coefficients $a_i(\nu)$ (solid lines) as well as the data points of Table 12.1, marked by circles. We see that the interpolations are generally close to the data points, so we may consider the resulting time-varying model accurate enough. We will assume the control input $u$ affects the wing through an actuator with linear, first-order dynamics. To express the model in the form of equation 12.1, we let $x_1 = \phi$, $x_2 = \dot\phi$, and $x_3 = \delta_a$. Then, the time-varying wing rock model is given by:

TABLE 12.2 Centers and Spreads for Wing Rock Interpolation Functions

  $j$          1      2      3      4       5      6       7
  $\nu_j$      15     17     19     21.5    22.5   23.75   25
  $\sigma_j$   1.5    1.5    1.5    2.0     1      1       1
FIGURE 12.1 Interpolated Coefficients for the Time-Varying Wing Rock Model (panels show $a_1(\nu)$ through $a_5(\nu)$ versus the angle of attack $\nu \in [15, 25]$; solid lines are the interpolations, circles the data points of Table 12.1)
$$\begin{aligned}
\dot x_1 &= x_2,\\
\dot x_2 &= \sum_{j=1}^{7} \rho_j(\nu)\left(-\omega_j^2\phi + \mu_1^j\dot\phi + b_1^j\dot\phi^3 + \mu_2^j\phi^2\dot\phi + b_2^j\phi\dot\phi^2\right) + g x_3,\\
\dot x_3 &= -\frac{1}{\tau}x_3 + \frac{1}{\tau}u.
\end{aligned} \tag{12.10}$$
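The coefficient interpolation of equations 12.8 and 12.9 and the roll-acceleration dynamics of equation 12.10 can be sketched directly from Tables 12.1 and 12.2; the coefficient sign conventions below follow the wing rock model as reconstructed above:

```python
import math

# Table 12.2: centers and spreads of the interpolation functions.
NU = [15.0, 17.0, 19.0, 21.5, 22.5, 23.75, 25.0]
SIGMA = [1.5, 1.5, 1.5, 2.0, 1.0, 1.0, 1.0]
# Table 12.1: rows j = 1..7, columns a_1^j .. a_5^j.
A = [
    [-0.01026, -0.02117, -0.14181,  0.99735, -0.83478],
    [-0.02007, -0.0102,  -0.0837,   0.63333, -0.5034 ],
    [-0.0298,   0.000818, -0.0255,  0.2692,  -0.1719 ],
    [-0.04207,  0.01456,  0.04714, -0.18583,  0.24234],
    [-0.04681,  0.01966,  0.05671, -0.22691,  0.59065],
    [-0.0518,   0.0261,   0.065,   -0.2933,   1.0294 ],
    [-0.05686,  0.03254,  0.07334, -0.3597,   1.4681 ],
]
C1, C2, G = 0.354, 0.001, 1.5

def rho(nu):
    """Interpolation functions of equation 12.8 (they sum to one)."""
    w = [math.exp(-((nu - NU[j]) / SIGMA[j]) ** 2) for j in range(7)]
    s = sum(w)
    return [wj / s for wj in w]

def wing_rock_x2dot(x1, x2, x3, nu):
    """x2 dynamics of equation 12.10 at angle of attack nu."""
    r = rho(nu)
    acc = 0.0
    for j in range(7):
        a1, a2, a3, a4, a5 = A[j]
        w2  = -C1 * a1          # omega_j^2 (positive, since a_1^j < 0)
        mu1 = C1 * a2 - C2      # mu_1^j
        b1  = C1 * a3           # b_1^j
        mu2 = C1 * a4           # mu_2^j
        b2  = C1 * a5           # b_2^j
        acc += r[j] * (-w2 * x1 + mu1 * x2 + b1 * x2 ** 3
                       + mu2 * x1 ** 2 * x2 + b2 * x1 * x2 ** 2)
    return acc + G * x3
```

Integrating this right-hand side (together with the actuator and angle-of-attack dynamics described in the surrounding text) reproduces the open-loop behavior discussed below.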
The actuator time constant is $\tau = 1/15$. We will assume that the angle of attack $\nu$ varies according to an exogenous dynamical system:

$$\begin{bmatrix}\dot\nu_1\\ \dot\nu_2\end{bmatrix} = \begin{bmatrix}0 & 25\\ -25 & -10\end{bmatrix}\begin{bmatrix}\nu_1\\ \nu_2\end{bmatrix} + \begin{bmatrix}0\\ 500\end{bmatrix} + \begin{bmatrix}0\\ 62.5\end{bmatrix} r, \tag{12.11}$$

where $\nu_1 = \nu$, $\nu_2 = \dot\nu$, and $r$ is a command input that can take values between minus one and one. System 12.11 has its poles at $-5 \pm 24.5i$ ($i = \sqrt{-1}$), and its equilibrium is at $\nu_1 = 20$ and $\nu_2 = 0$. According to the analysis performed in Nayfeh et al. (1989), the wing rock system has a stable focus at the origin for angles of attack $\nu$ less than approximately 19.5. For larger angles, the origin becomes an unstable equilibrium, and a limit cycle appears around it. In both cases, however, the system is unstable and may diverge to infinity if the initial conditions
are large enough (since we are dealing with angles, such divergence means that the wings rotate faster and faster). The problem we consider here has the angle of attack varying within the range between 15 and 25, so the qualitative behavior of equation 12.10 changes periodically as $\nu$ becomes respectively smaller or larger than 19.5. To gain better insight into how the dynamic behavior of the wing rock phenomenon changes qualitatively with $\nu$, consider Figure 12.2, where we let the system start at the initial condition $X_3(0) = [-4, 0, 0]$ and $\nu_2(0) = [20, 0]$. Initially, we set $r = 1$, so the angle of attack stabilizes at 22.5, and we let the system run in open loop for 200 sec. We observe that $x_1$ and $x_2$ approach a limit cycle, which would be reached if the system were allowed to run for a longer time; however, at $t = 200$, we let $r = -1$ (this is marked by an arrow in Figure 12.2), so the angle of attack changes and, after a short transient, stabilizes at 17.5. Not being close enough to the origin to be attracted by the local stable focus, the system starts to diverge. We will consider two designs with the purpose of illustrating how different design constants may in fact reduce or enlarge the $L_2$ bound stated in Subsection 12.2.2. In order to do so, we will let the controller track the state $x_{r_1}$ of a reference trajectory given by:
FIGURE 12.2 Qualitative Change in Wing Rock Dynamics with Varying Angle of Attack ($x_2$ versus $x_1$; the arrow marks the reference change at $t = 200$ sec)
$$\dot x_{r_1} = x_{r_2}, \qquad \dot x_{r_2} = x_{r_3}, \qquad \dot x_{r_3} = -45 x_{r_1} - 39 x_{r_2} - 11 x_{r_3}. \tag{12.12}$$
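As a quick check, the characteristic polynomial of this reference model, $s^3 + 11s^2 + 39s + 45$, factors as $(s+5)(s+3)^2$:

```python
def char_poly(s):
    """Characteristic polynomial of the reference model of equation 12.12."""
    return s ** 3 + 11 * s ** 2 + 39 * s + 45

# The claimed poles -5 and -3 (double) are indeed roots:
assert char_poly(-5) == 0
assert char_poly(-3) == 0
assert 3 * (-3) ** 2 + 22 * (-3) + 39 == 0  # derivative vanishes: -3 is a double root
```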
That is, these equations represent a stable linear system with poles at $-5$, $-3$, and $-3$. We use radial basis function neural networks (Moody and Darken, 1989) as the approximators. Since $\nu$ has its equilibrium at 20, we use $\bar\nu = \nu - 20$ instead of $\nu$ as input to the approximators. For $\hat\alpha_1(x_1, \bar\nu, x_{r_2})$, we choose, for $j = 1, \ldots, 7$:

$$\zeta_{a_1^j} = \left[\,1,\; x_1,\; \frac{\exp\!\left(-(x_1-c_{x_k})^2/\sigma_{x_1}^2\right)\exp\!\left(-(\bar\nu-c_{\nu_l})^2/\sigma_{\nu_1}^2\right)\exp\!\left(-(x_{r_2}-c_{r_m})^2/\sigma_{r_1}^2\right)}{\sum_{d=1}^{2}\sum_{p=1}^{2}\sum_{q=1}^{2}\exp\!\left(-(x_1-c_{x_d})^2/\sigma_{x_1}^2\right)\exp\!\left(-(\bar\nu-c_{\nu_p})^2/\sigma_{\nu_1}^2\right)\exp\!\left(-(x_{r_2}-c_{r_q})^2/\sigma_{r_1}^2\right)}\,\right]^{\!\top}, \tag{12.13}$$

for $k = 1, 2$, $l = 1, 2$, and $m = 1, 2$, with the centers $c_{x_k}$, $c_{\nu_l}$, and $c_{r_m}$ evenly spaced along the intervals $[-15, 15]$, $[-5, 5]$, and $[-5, 5]$, respectively, and the spreads $\sigma_{x_1} = 30$, $\sigma_{\nu_1} = 10$, and $\sigma_{r_1} = 10$ (i.e., $S_{x_1} = \{x_1 \in \mathbb{R}: -15 \le x_1 \le 15\}$ and $S_{\nu_1} = \{\nu \in \mathbb{R}: 15 \le \nu \le 25\}$; the other compact sets are defined in a similar manner). For $\hat\alpha_2(X_2, \nu_2, x_{r_3})$, we space three centers evenly on the interval $[-15, 15]$ for $x_1$ and $x_2$, two centers evenly on the interval $[-5, 5]$ for $\nu_1$ and $\nu_2$, and two centers evenly on the interval $[-10, 10]$ for $x_{r_3}$. The spreads are chosen as
$\sigma_{x_1} = \sigma_{x_2} = 15$, $\sigma_{\nu_1} = \sigma_{\nu_2} = 10$, and $\sigma_{r_2} = 20$. The functions $\zeta_{a_2^j}$ are all chosen the same for $j = 1, \ldots, 7$, with a structure similar to equation 12.13. The same is done for $\hat\alpha_3(X_3, \nu_3, \dot x_{r_3})$, where we evenly space the centers for $x_i$ and $\nu_i$, $i = 1, 2, 3$ ($\nu_3 = \dot\nu_2$), along the intervals $[-20, 20]$ and $[-5, 5]$, respectively, and along $[-100, 100]$ for $\dot x_{r_3}$, using three centers for each $x_i$, two for each $\nu_i$, and two for $\dot x_{r_3}$. The spreads are $\sigma_{x_i} = 20$, $\sigma_{\nu_i} = 10$, $i = 1, 2, 3$, and $\sigma_{r_3} = 200$. Again, for all $\zeta_{a_2^j}$ and $\zeta_{a_3^j}$, $j = 1, \ldots, 7$, we replace $\nu$ by $\bar\nu$ in $\nu_i$, $i = 2, 3$. All the coefficient vectors are initialized with zeros. Since the approximator parameters are guaranteed to be bounded, we do not need to set the parameter spaces $\Omega_{a_i^j}$ explicitly, and no parameter projection is used. We use the tracking diffeomorphism given in Subsection 12.2.2, and we also perform trajectory initialization to be able to better apply the analysis there. Because of the choice we make for the system's initial conditions ($X_3(0) = [-4, 0, 0]$) and because all parameter coefficients are initially zero, the choice $X_{r_3}(0) = X_3(0) = [-4, 0, 0]$ guarantees $z_i(0) = 0$, $i = 1, 2, 3$. We let the reference input to the angle of attack system alternate between $-1$ and $1$ every 0.5 sec, and we choose $\nu_2(0) = [20, 0]$ as the initial condition for the angle of attack system. We will consider two designs. In the first one, we let $k_1 = 0.3$, $k_2 = 0.6$, and $k_3 = 0.4$. For the adaptation laws, we pick $\gamma_{a_1^j} = 0.1$, $\sigma_{a_1^j} = 0.2$, $\gamma_{a_2^j} = 0.5$, and $\sigma_{a_2^j} = 0.1$, as well as $\gamma_{a_3^j} = 0.2$ and $\sigma_{a_3^j} = 0.3$, $j = 1, \ldots, 7$. Figures 12.3 and 12.4 show the control results. In Figure 12.3, we first observe the plant's behavior in open loop, plotted with a dashed line. The
FIGURE 12.3 Direct Adaptive Wing Rock Regulation: Roll and Roll Rate ($x_2$ versus $x_1$; open loop, first design, and second design)

FIGURE 12.4 Angle of Attack and Control Input. (A) Angle of Attack. (B) Control Input: First Design. (C) Control Input: Second Design
FIGURE 12.5 Plots of $\sum_{i=1}^{3} z_i^2(t)$ for the Two Designs
oscillations are due to the changing angle of attack; eventually, if allowed to run uncontrolled, the system diverges. The solid line in Figure 12.3 corresponds to the first design just described. Notice that $x_1$ has an overshoot beyond zero. In Figure 12.4(A), we can see the evolution of the angle of attack; in Figure 12.4(B), we observe the control input generated by this first design. Next, with the purpose of improving the controller's performance, we follow the guidelines given in Subsection 12.2.2 and change the design constants in the following manner: we let $k_1 = 1$, $k_2 = 3$, $k_3 = 2.5$, $\gamma_{a_1^j} = 0.1$, $\sigma_{a_1^j} = 0.1$, $\gamma_{a_2^j} = 1.5$, $\sigma_{a_2^j} = 0.5$, $\gamma_{a_3^j} = 1.2$, and $\sigma_{a_3^j} = 0.4$, $j = 1, \ldots, 7$. Notice that we have chosen larger constants $k_i$ as well as larger $\gamma_{a_i^j}$, which we expect should help reduce the size of $W_d$. At the same time, we have set the ratios between $\sigma_{a_i^j}$ and their respective $\gamma_{a_i^j}$ to be less than or equal to one, which was not the case in the first design. Figure 12.3 shows the resulting closed-loop behavior, plotted as a solid line with circles, and Figure 12.4(C) shows the control input generated by the second design. Although it cannot be seen from Figure 12.3, the convergence to zero is about twice as fast with the second design as with the first, taking less than 4 sec for $x_1$ and $x_2$ to become small enough, versus about 8 sec with the first design. Furthermore, there is now no overshoot for $x_1$. Note that in the second design, $x_2$ reaches a slightly larger value
than in the first design; this results from the fact that the $L_2$ bound in Subsection 12.2.2 is a measure of $\sum_{i=1}^{3} z_i^2$ and not of $\sum_{i=1}^{3} x_i^2$. This means that the only conclusion one can reach for sure is that $x_1$ will have a smaller bound in the second design. However, observe in Figure 12.5 the plots of $\sum_{i=1}^{3} z_i^2(t)$ for both designs: the solid line, corresponding to the first design, is always above the dashed line, which corresponds to the second design, as expected.
12.4 Conclusion In this chapter, we have developed a direct adaptive control method for a class of uncertain nonlinear systems with a timevarying structure using a Lyapunov approach to construct the stability proofs. The systems we considered are composed of a finite number of ‘‘pieces,’’ or dynamic subsystems, that are interpolated by functions depending on a possibly exogenous scheduling variable. We assumed that each piece is in strict feedback form and showed that the methods yield stability of all signals in the closed loop as well as show convergence of the state vector to a residual set around the equilibrium, whose size can be set by the choice of several design parameters We argued that the direct adaptive method presents several advantages over indirect methods in general, including the need for a smaller amount of information about the plant
and a simpler design. We also provided design guidelines based on L2 bounds on the transient, and we argued that this bound makes it possible to precisely determine how large the compact sets for the function approximators should be so that the states do not exit them. Finally, we applied the direct adaptive method to the problem of wing rock regulation, where the wing rock dynamics are uncertain and where the angle of attack is allowed to vary with time.
13 Direct Learning by Reinforcement*

Jennie Si†
Department of Electrical Engineering, Arizona State University, Tempe, Arizona, USA
13.1 Introduction ...................................................................................... 1151
13.2 A General Framework for Direct Learning Through Association and Reinforcement ................................................................................... 1151
      13.2.1 The Critic Network . 13.2.2 The Action Network . 13.2.3 Online Learning Algorithms
13.3 Analytical Characteristics of an Online NDP Learning Process ................... 1154
      13.3.1 Stochastic Approximation Algorithms . 13.3.2 Convergence in Statistical Average for Action and Critic Networks
13.4 Example 1 ......................................................................................... 1156
      13.4.1 The Cart–Pole Balancing Problem . 13.4.2 Simulation Results
13.5 Example 2 ......................................................................................... 1158
13.6 Conclusion ....................................................................................... 1159
      References ........................................................................................ 1159
13.1 Introduction

This chapter focuses on a systematic treatment for developing a generic online learning control system based on the fundamental principle of reinforcement learning or, more specifically, neural dynamic programming. This online learning system improves its performance over time in two aspects. First, it learns from its own mistakes through the reinforcement signal from the external environment and tries to reinforce its actions to improve future performance. Second, system states associated with positive reinforcement are memorized through a network learning process, so that in the future similar states will be more positively associated with a control action leading to positive reinforcement. This discussion also introduces a successful candidate design for online learning control. Real-time learning algorithms are derived for the individual components in the learning system, and some analytical insights are provided as guidelines on the learning process that takes place in each module of the online learning control system. The performance of the online learning controller is measured by its learning speed, its success rate of learning, and the degree to which it meets the

* Portions reprinted, with permission, from IEEE Transactions on Neural Networks 12(2), 264–276, 2001.
† Supported by NSF under grant ECS-0002098.
Copyright © 2004 by Academic Press. All rights of reproduction in any form reserved.
learning control objective. The overall learning control system performance is tested on a single cart–pole balancing problem and a pendulum swing-up and balancing task.
13.2 A General Framework for Direct Learning Through Association and Reinforcement

Consider a class of learning decision and control problems in terms of optimizing a performance measure over time, subject to the following constraints. First, a model of the environment or the system that interacts with the learner is not available a priori; the environment/system can be stochastic, nonlinear, and subject to change. Second, learning takes place "on the fly" while interacting with the environment. Third, even though measurements from the environment are available from one decision and control step to the next, a final outcome of the learning process over a generated sequence of decisions and controls comes as a delayed signal in an indicative "win or lose" format. Figure 13.1 is a schematic diagram of an online learning control scheme. The binary reinforcement signal r(t) is provided by the external environment and is either a 0 or a −1, corresponding to success or failure, respectively.
FIGURE 13.1 Schematic Diagram for Implementations of Neural Dynamic Programming as a Direct Learning Mechanism. The solid lines represent signal flow, and the dashed lines are the paths for parameter tuning. [The figure shows the state X(t) feeding the action network, whose output u(t) feeds the critic network producing J(t); the primary reinforcement r(t), the discount factor $\alpha$, and the term $J(t-1) - r(t)$ tune the critic, while $U_c(t)$ tunes the action network acting on the system.]

In our direct online learning control design, the controller is "naive" when it just starts to control; the weights and parameters of both the action network and the critic network are randomly initialized. Once a system state is observed, an action is subsequently produced based on the parameters in the action network. A "better" control value under the specific system state will lead to a more balanced equation of the principle of optimality, and this set of system operations is then reinforced through memory, or association, between states and control output in the action network. Otherwise, the control value is adjusted through tuning the weights in the action network to make the equation of the principle of optimality more balanced. To be more quantitative, consider the critic network as depicted in Figure 13.1. The output of the critic element, the J function, approximates the discounted total reward-to-go. Specifically, it approximates R(t) at time t, given by:

$$R(t) = r(t+1) + \alpha r(t+2) + \alpha^2 r(t+3) + \cdots, \quad (13.1)$$

where R(t) is the future accumulative reward-to-go value at time t and $\alpha$ is a discount factor for the infinite-horizon problem ($0 < \alpha < 1$). We have used $\alpha = 0.95$ in our implementations; r(t+1) is the external reinforcement value at time t+1.

13.2.1 The Critic Network

The critic network is used to provide J(t) as an approximation of R(t) in equation 13.1. The prediction error for the critic element is defined as:

$$e_c(t) = \alpha J(t) - [J(t-1) - r(t)], \quad (13.2)$$

and the objective function to be minimized in the critic network is:

$$E_c(t) = \frac{1}{2}e_c^2(t). \quad (13.3)$$

The weight update rule for the critic network is a gradient-based adaptation given by:

$$w_c(t+1) = w_c(t) + \Delta w_c(t), \quad (13.4)$$
$$\Delta w_c(t) = l_c(t)\left[-\frac{\partial E_c(t)}{\partial w_c(t)}\right], \quad (13.5)$$
$$\frac{\partial E_c(t)}{\partial w_c(t)} = \frac{\partial E_c(t)}{\partial J(t)}\frac{\partial J(t)}{\partial w_c(t)}. \quad (13.6)$$

In equations 13.4 to 13.6, $l_c(t) > 0$ is the learning rate of the critic network at time t, which usually decreases with time to a small value, and $w_c$ is the weight vector in the critic network.

13.2.2 The Action Network

The principle in adapting the action network is to indirectly back-propagate the error between the desired ultimate objective, denoted by $U_c$, and the approximate J function from the critic network. Since 0 is defined as the reinforcement signal for success, $U_c$ is set to 0 in this design paradigm and in the following case studies. In the action network, the state measurements are used as inputs to create a control as the output of the network; the action network can be implemented by either a linear or a nonlinear network, depending on the complexity of the problem. The weight updating in the action network can be formulated as follows. Let:

$$e_a(t) = J(t) - U_c(t). \quad (13.7)$$

The weights in the action network are updated to minimize the following performance error measure:

$$E_a(t) = \frac{1}{2}e_a^2(t). \quad (13.8)$$

The update algorithm is then similar to the one in the critic network. By a gradient descent rule:

$$w_a(t+1) = w_a(t) + \Delta w_a(t), \quad (13.9)$$
$$\Delta w_a(t) = l_a(t)\left[-\frac{\partial E_a(t)}{\partial w_a(t)}\right], \quad (13.10)$$
$$\frac{\partial E_a(t)}{\partial w_a(t)} = \frac{\partial E_a(t)}{\partial J(t)}\frac{\partial J(t)}{\partial u(t)}\frac{\partial u(t)}{\partial w_a(t)}. \quad (13.11)$$

In equations 13.9 to 13.11, $l_a(t) > 0$ is the learning rate of the action network at time t, which usually decreases with time to a small value, and $w_a$ is the weight vector in the action network.
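The relation between the discounted reward-to-go of equation 13.1 and the critic's prediction error of equation 13.2 can be checked numerically. The following is an illustrative sketch, not the chapter's code; the short reward sequence is an assumption chosen to mimic a run that ends in failure (r = −1):

```python
# Sketch: the critic prediction error e_c(t) = alpha*J(t) - [J(t-1) - r(t)]
# (equation 13.2) vanishes when J equals the true reward-to-go R(t) of 13.1.

ALPHA = 0.95  # discount factor used in the chapter

def reward_to_go(rewards, t):
    """R(t) = r(t+1) + alpha*r(t+2) + alpha^2*r(t+3) + ... (equation 13.1)."""
    return sum(ALPHA**k * rewards[t + 1 + k] for k in range(len(rewards) - t - 1))

def critic_error(J_t, J_prev, r_t):
    """e_c(t) = alpha*J(t) - [J(t-1) - r(t)] (equation 13.2)."""
    return ALPHA * J_t - (J_prev - r_t)

rewards = [0.0, 0.0, 0.0, -1.0]        # hypothetical short run ending in failure
J = [reward_to_go(rewards, t) for t in range(len(rewards))]

# When J(t) is exact, the residual of the principle of optimality is ~0:
for t in range(1, len(rewards) - 1):
    print(abs(critic_error(J[t], J[t - 1], rewards[t])) < 1e-9)
```

This uses the identity R(t−1) = r(t) + αR(t), which is exactly the balance the critic update of equations 13.4 to 13.6 drives toward.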
13.2.3 Online Learning Algorithms

The online learning configuration introduced in the previous subsections involves two major components in the learning system: the action network and the critic network. The following devises learning algorithms and elaborates on how learning takes place in each of the two modules. In this NDP design, both the action network and the critic network are nonlinear multilayer feed-forward networks, each with one hidden layer. The structure of the nonlinear, multilayer critic network is shown in Figure 13.2. In the critic network, the output J(t) is of the form:

$$J(t) = \sum_{i=1}^{N_h} w_{c_i}^{(2)}(t)\,p_i(t), \quad (13.12)$$
$$p_i(t) = \frac{1 - e^{-q_i(t)}}{1 + e^{-q_i(t)}}, \quad i = 1, \ldots, N_h, \quad (13.13)$$
$$q_i(t) = \sum_{j=1}^{n+1} w_{c_{ij}}^{(1)}(t)\,x_j(t), \quad i = 1, \ldots, N_h. \quad (13.14)$$

FIGURE 13.2 Schematic Diagram for the Implementation of a Nonlinear Critic Network Using a Feed-Forward Network with One Hidden Layer. [The figure shows the inputs $X_1, X_2, \ldots, X_n, u$ passing through the weights $W_c^{(1)}$ to the hidden layer and through $W_c^{(2)}$ to the output J.]

Here $q_i$ is the input of the ith hidden node of the critic network, and $p_i$ is the corresponding output of that hidden node; $N_h$ is the total number of hidden nodes in the critic network, and n + 1 is the total number of inputs to the critic network, including the analog action value u(t) from the action network. By applying the chain rule, the adaptation of the critic network is summarized below.

(1) For $\Delta w_c^{(2)}$ (hidden to output layer):

$$\Delta w_{c_i}^{(2)}(t) = l_c(t)\left[-\frac{\partial E_c(t)}{\partial w_{c_i}^{(2)}(t)}\right], \quad (13.15)$$
$$\frac{\partial E_c(t)}{\partial w_{c_i}^{(2)}(t)} = \frac{\partial E_c(t)}{\partial J(t)}\frac{\partial J(t)}{\partial w_{c_i}^{(2)}(t)} = \alpha e_c(t)\,p_i(t). \quad (13.16)$$

(2) For $\Delta w_c^{(1)}$ (input to hidden layer):

$$\Delta w_{c_{ij}}^{(1)}(t) = l_c(t)\left[-\frac{\partial E_c(t)}{\partial w_{c_{ij}}^{(1)}(t)}\right], \quad (13.17)$$
$$\frac{\partial E_c(t)}{\partial w_{c_{ij}}^{(1)}(t)} = \frac{\partial E_c(t)}{\partial J(t)}\frac{\partial J(t)}{\partial p_i(t)}\frac{\partial p_i(t)}{\partial q_i(t)}\frac{\partial q_i(t)}{\partial w_{c_{ij}}^{(1)}(t)} \quad (13.18)$$
$$= \alpha e_c(t)\,w_{c_i}^{(2)}(t)\left[\frac{1}{2}\left(1 - p_i^2(t)\right)\right]x_j(t). \quad (13.19)$$

Now, investigate the adaptation in the action network, which is implemented by a feed-forward network similar to the one in Figure 13.2, except that its inputs are the n measured states and its output is the action u(t). The associated equations for the action network are:

$$u(t) = \frac{1 - e^{-v(t)}}{1 + e^{-v(t)}}, \quad (13.20)$$
$$v(t) = \sum_{i=1}^{N_h} w_{a_i}^{(2)}(t)\,g_i(t), \quad (13.21)$$
$$g_i(t) = \frac{1 - e^{-h_i(t)}}{1 + e^{-h_i(t)}}, \quad i = 1, \ldots, N_h, \quad (13.22)$$
$$h_i(t) = \sum_{j=1}^{n} w_{a_{ij}}^{(1)}(t)\,x_j(t), \quad i = 1, \ldots, N_h. \quad (13.23)$$

Here v is the input to the action node, and $g_i$ and $h_i$ are the output and the input of the hidden nodes of the action network, respectively. Since the action network takes only the state measurements as inputs, there is no (n + 1)th term in equation 13.23, in contrast to the critic network (see equation 13.14 for comparison). The update rule for the nonlinear multilayer action network also contains two sets of equations.

(1) For $\Delta w_a^{(2)}$ (hidden to output layer):

$$\Delta w_{a_i}^{(2)}(t) = l_a(t)\left[-\frac{\partial E_a(t)}{\partial w_{a_i}^{(2)}(t)}\right], \quad (13.24)$$
$$\frac{\partial E_a(t)}{\partial w_{a_i}^{(2)}(t)} = \frac{\partial E_a(t)}{\partial J(t)}\frac{\partial J(t)}{\partial u(t)}\frac{\partial u(t)}{\partial v(t)}\frac{\partial v(t)}{\partial w_{a_i}^{(2)}(t)} \quad (13.25)$$
$$= e_a(t)\left[\frac{1}{2}\left(1 - u^2(t)\right)\right]g_i(t)\sum_{i=1}^{N_h}\left[w_{c_i}^{(2)}(t)\,\frac{1}{2}\left(1 - p_i^2(t)\right)w_{c_{i,n+1}}^{(1)}(t)\right]. \quad (13.26)$$

In equations 13.24 to 13.26, $\partial J(t)/\partial u(t)$ is obtained by changing variables and by the chain rule; the result is the summation term, and $w_{c_{i,n+1}}^{(1)}$ is the weight associated with the input element from the action network.
(2) For $\Delta w_a^{(1)}$ (input to hidden layer):

$$\Delta w_{a_{ij}}^{(1)}(t) = l_a(t)\left[-\frac{\partial E_a(t)}{\partial w_{a_{ij}}^{(1)}(t)}\right], \quad (13.27)$$
$$\frac{\partial E_a(t)}{\partial w_{a_{ij}}^{(1)}(t)} = \frac{\partial E_a(t)}{\partial J(t)}\frac{\partial J(t)}{\partial u(t)}\frac{\partial u(t)}{\partial v(t)}\frac{\partial v(t)}{\partial g_i(t)}\frac{\partial g_i(t)}{\partial h_i(t)}\frac{\partial h_i(t)}{\partial w_{a_{ij}}^{(1)}(t)} \quad (13.28)$$
$$= e_a(t)\left[\frac{1}{2}\left(1 - u^2(t)\right)\right]w_{a_i}^{(2)}(t)\left[\frac{1}{2}\left(1 - g_i^2(t)\right)\right]x_j(t)\sum_{i=1}^{N_h}\left[w_{c_i}^{(2)}(t)\,\frac{1}{2}\left(1 - p_i^2(t)\right)w_{c_{i,n+1}}^{(1)}(t)\right]. \quad (13.29)$$

In implementation, equations 13.16 and 13.19 are used to update the weights in the critic network, and equations 13.26 and 13.29 are used to update the weights in the action network.
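The forward computations and update rules of equations 13.12 through 13.29 can be collected into a short numerical sketch. This is an illustrative implementation, not the chapter's code: the network sizes, random initialization, fixed learning rates, and the `update` helper are assumptions of this sketch, and the internal training cycles are omitted.

```python
# Minimal numpy sketch of the one-hidden-layer critic and action networks
# of equations 13.12-13.29 (sizes and learning rates are illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, Nh, alpha = 4, 6, 0.95                    # states, hidden nodes, discount

wc1 = rng.uniform(-0.5, 0.5, (Nh, n + 1))    # critic input-to-hidden (state + action)
wc2 = rng.uniform(-0.5, 0.5, Nh)             # critic hidden-to-output
wa1 = rng.uniform(-0.5, 0.5, (Nh, n))        # action input-to-hidden (states only)
wa2 = rng.uniform(-0.5, 0.5, Nh)             # action hidden-to-output

def bipolar(z):
    """(1 - e^-z)/(1 + e^-z), the nonlinearity of 13.13, 13.20, 13.22."""
    return (1.0 - np.exp(-z)) / (1.0 + np.exp(-z))

def action_forward(x):
    g = bipolar(wa1 @ x)                     # equations 13.22-13.23
    return bipolar(wa2 @ g), g               # u(t) of 13.20-13.21

def critic_forward(x, u):
    xu = np.append(x, u)                     # state plus analog action value
    p = bipolar(wc1 @ xu)                    # equations 13.13-13.14
    return wc2 @ p, p, xu                    # J(t) of 13.12

def update(x, r, J_prev, lc=0.3, la=0.3, Uc=0.0):
    """One online step: gradients per 13.16/13.19 and 13.26/13.29."""
    global wc1, wc2, wa1, wa2
    u, g = action_forward(x)
    J, p, xu = critic_forward(x, u)
    ec = alpha * J - (J_prev - r)            # equation 13.2
    ea = J - Uc                              # equation 13.7
    dp, dg, du = 0.5 * (1 - p**2), 0.5 * (1 - g**2), 0.5 * (1 - u**2)
    dJdu = np.sum(wc2 * dp * wc1[:, -1])     # summation term of 13.26/13.29
    grad_wc2 = alpha * ec * p                            # 13.16
    grad_wc1 = np.outer(alpha * ec * wc2 * dp, xu)       # 13.19
    grad_wa2 = ea * dJdu * du * g                        # 13.26
    grad_wa1 = np.outer(ea * dJdu * du * wa2 * dg, x)    # 13.29
    wc2 = wc2 - lc * grad_wc2
    wc1 = wc1 - lc * grad_wc1
    wa2 = wa2 - la * grad_wa2
    wa1 = wa1 - la * grad_wa1
    return J, u

x = rng.standard_normal(n)                   # hypothetical state measurement
J0, u0 = update(x, r=0.0, J_prev=0.0)        # one online update step
```

In a full implementation, `update` would be called once per time-step (or for several internal cycles per step, as described in Example 1), with the state coming from the plant and r from the environment.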
13.3 Analytical Characteristics of an Online NDP Learning Process

This section is dedicated to expositions of analytical properties of the online learning algorithms in the context of neural dynamic programming (NDP). It is important to note that, in contrast to usual neural network applications, in NDP there is no readily available training set of input/output pairs to be used for approximating J in the sense of a least-squares fit. Both the control action u and the approximated J function are updated according to an error function that changes from one time-step to the next. Therefore, the convergence argument for the steepest descent algorithm does not hold for either of the two networks, action or critic. This results in a simulation approach to evaluating the cost-to-go function J for a given control action u: the online learning takes place with the aim of iteratively improving the control policies based on simulation outcomes. This creates analytical and computational difficulties that do not arise in a more typical neural network training context. The closest analytical result on approximating the J function was obtained by Tsitsiklis (1997), where a linear-in-parameter function approximator was used to approximate the J function; the limit of convergence was characterized as the solution to a set of interpretable linear equations, and a bound was placed on the resulting approximation error. It is worth pointing out that the existing implementations of NDP are usually computationally very intensive (Bertsekas and Tsitsiklis, 1996) and often require a considerable amount of trial and error, with most of the computations and experimentations conducted offline. The following paragraphs provide some analytical insight into the online learning process for the NDP designs proposed in this chapter. Specifically, the stochastic approximation argument is used to reveal the asymptotic performance of these online NDP learning algorithms in an averaged sense for the action and the critic networks under certain conditions.

13.3.1 Stochastic Approximation Algorithms

The original work on recursive stochastic approximation algorithms was introduced by Robbins and Monro (1951), who developed and analyzed a recursive procedure for finding the root of a real-valued function g(w) of a real variable w. The function is not known, but noise-corrupted observations can be taken at values of w selected by the experimenter. A function g(w) of the form g(w) = Ex[f(w)] (Ex[·] is the expectation operator) is called a regression function of f(w) and, conversely, f(w) is called a sample function of g(w). The following conditions are needed to obtain the Robbins-Monro algorithm (Robbins and Monro, 1951):

• C1: g(w) has a single root $w^*$, $g(w^*) = 0$, with:

$$g(w) < 0 \ \text{if} \ w < w^*, \qquad g(w) > 0 \ \text{if} \ w > w^*.$$

This first condition is assumed with little loss of generality, since most functions with a single root that do not satisfy it can be made to do so by multiplying the function by −1.

• C2: The variance of f(w) from g(w) is finite:

$$\sigma^2(w) = Ex\left[(g(w) - f(w))^2\right] < \infty. \quad (13.30)$$

• C3:

$$|g(w)| < B_1|w - w^*| + B_0 < \infty. \quad (13.31)$$

This third condition is very mild; the values of $B_1$ and $B_0$ need not be known to prove the validity of the algorithm, and as long as the root lies in some finite interval, the existence of $B_1$ and $B_0$ can always be assumed. If conditions C1 through C3 are satisfied, the algorithm from Robbins and Monro (1951) can be used to iteratively seek the root $w^*$ of the function g(w):

$$w(t+1) = w(t) - l(t)f[w(t)], \quad (13.32)$$

where l(t) is a sequence of positive numbers that satisfies the following conditions:

$$1)\ \lim_{t\to\infty} l(t) = 0; \qquad 2)\ \sum_{t=0}^{\infty} l(t) = \infty; \qquad 3)\ \sum_{t=0}^{\infty} l^2(t) < \infty. \quad (13.33)$$
Furthermore, w(t) will converge toward $w^*$ in the mean square error sense and with probability 1:

$$\lim_{t\to\infty} Ex\left[\|w(t) - w^*\|^2\right] = 0. \quad (13.34)$$

$$\mathrm{Prob}\left\{\lim_{t\to\infty} w(t) = w^*\right\} = 1. \quad (13.35)$$
The convergence with probability 1 in equation 13.35 is also called almost-sure convergence. In this chapter, the Robbins-Monro algorithm is applied to optimization problems (Kushner and Yin, 1997). In that setting, $g(w) = \partial E/\partial w$, where E is an objective function to be optimized. If E has a local optimum at $w^*$, g(w) will satisfy the condition C1 locally at $w^*$. If E has a quadratic form, g(w) will satisfy the condition C1 globally.
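The Robbins-Monro recursion of equation 13.32 can be sketched as follows. The regression function used here, g(w) = w − 2 with additive Gaussian noise, is an illustrative assumption (not from the chapter); the step sizes l(t) = 1/(t+1) satisfy the three conditions of equation 13.33.

```python
# Sketch of the Robbins-Monro recursion w(t+1) = w(t) - l(t) f[w(t)]
# (equation 13.32) for an assumed g(w) = w - 2, whose root is w* = 2.
import random

random.seed(1)

def f(w):
    """Noise-corrupted sample of the regression function g(w) = w - 2."""
    return (w - 2.0) + random.gauss(0.0, 0.1)

w = 0.0
for t in range(5000):
    l = 1.0 / (t + 1)     # l(t) -> 0, sum l(t) diverges, sum l(t)^2 converges
    w = w - l * f(w)

print(abs(w - 2.0) < 0.05)   # w(t) has converged toward the root w* = 2
```

With this particular g(w) and step-size sequence, the recursion reduces to a running average of the noisy samples, which illustrates why the conditions of equation 13.33 both wash out the noise and retain enough step size to reach the root.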
13.3.2 Convergence in Statistical Average for Action and Critic Networks

Neural dynamic programming is still in its early stage of development. The problem is not trivial, since several consecutive learning segments are updated simultaneously; a practically effective online learning mechanism and a step-by-step analytical guide for the learning process do not coexist at this time. This chapter is dedicated to reliable implementations of NDP algorithms for solving a general class of online learning control problems, and as demonstrated in the previous sections, experimental results in this direction are very encouraging. This section provides some asymptotic convergence results for each component of the NDP system, with the Robbins-Monro algorithm of the previous section as the main tool. Throughout this chapter, it is implied that the state measurements are samples of a continuous state-space. Specifically, the discussion assumes without loss of generality that the input $X_j \in \chi \subset R^n$ has discrete probability density $p(X) = \sum_{j=1}^{N} p_j \delta(X - X_j)$, where $\delta(\cdot)$ is the delta function. The following paragraphs analyze one component of the NDP system at a time; when one component (e.g., the action network) is under consideration, the other component (e.g., the critic network) is considered to have completed learning, so its weights do not change anymore. To examine the learning process taking place in the action network, the following objective function for the action network is defined:

$$\tilde{E}_a = \frac{1}{2}\sum_i p_i\left[J(X_i) - U_c\right]^2 = \frac{1}{2}Ex\left[(J - U_c)^2\right]. \quad (13.36)$$

It can be seen that equation 13.36 is an "averaged" error square between the estimated J and a final desired value $U_c$; in contrast, equation 13.8 is an "instantaneous" error square between the two. To obtain a (local) minimum for the averaged error measure in equation 13.36, the Robbins-Monro algorithm can be applied by first taking the derivative of this error with respect to the parameters, here the weights of the action network. Let:

$$\tilde{e}_a = J - U_c. \quad (13.37)$$

Since J is smooth in $\tilde{w}_a$, and $\tilde{w}_a$ belongs to a bounded set, the derivative of $\tilde{E}_a$ with respect to the weights of the action network is of the form:

$$\frac{\partial \tilde{E}_a}{\partial \tilde{w}_a} = Ex\left[\tilde{e}_a\frac{\partial \tilde{e}_a}{\partial \tilde{w}_a}\right]. \quad (13.38)$$

According to the Robbins-Monro algorithm, the root (possibly a local root) of $\partial\tilde{E}_a/\partial\tilde{w}_a$ as a function of $\tilde{w}_a$ can be obtained by the following recursive procedure, provided that the root exists and that the step size $l_a(t)$ meets the requirements described in equation 13.33:

$$\tilde{w}_a(t+1) = \tilde{w}_a(t) - l_a(t)\left[\tilde{e}_a\frac{\partial \tilde{e}_a}{\partial \tilde{w}_a}\right]. \quad (13.39)$$

Equation 13.37 may be considered as an instantaneous error between a sample of the J function and the desired value $U_c$. Therefore, equation 13.39 is equivalent to the update equation for the action network given in equations 13.9 to 13.11. From this viewpoint, the online action network updating rule of equations 13.9 to 13.11 actually converges to a (local) minimum of the error square between the J function and the desired value $U_c$ in a statistical average sense. In other words, even though these equations represent a reduction in the instantaneous error square at each iterative time-step, the action network updating rule asymptotically reaches a (local) minimum of the statistical average of $(J - U_c)^2$. By the same token, a similar framework can be constructed to describe the convergence of the critic network. Recall that the residual of the principle of optimality equation to be balanced by the critic network is of the form:

$$e_c(t) = \alpha J(t) - J(t-1) + r(t). \quad (13.40)$$

The instantaneous error square of this residual is given as:

$$E_c(t) = \frac{1}{2}e_c^2(t). \quad (13.41)$$

Instead of the instantaneous error square, let:

$$\tilde{E}_c = Ex[E_c], \quad (13.42)$$

and assume that the expectation is well-defined over the discrete state measurements. The derivative of $\tilde{E}_c$ with respect to the weights of the critic network is then of the form:
$$\frac{\partial \tilde{E}_c}{\partial \tilde{w}_c} = Ex\left[e_c\frac{\partial e_c}{\partial \tilde{w}_c}\right]. \quad (13.43)$$
. . . . . .
qec ~ c (t þ 1) ¼ w ~ c (t) lc (t) ec w : ~c qw
(13:44)
Therefore, equation 13.44 is equivalent to the update rule for the critic network given in equations 13.4 to 13.6. From this viewpoint, the online critic network update rule of equations 13.4 to 13.6 is actually converging to a (local) minimum of the residual square of the equation of the principle of optimality in a statistical average sense.
13.4 Example 1

The proposed NDP design has been implemented on a single cart–pole balancing problem. To begin with, the self-learning controller has no prior knowledge about the plant, only online measurements. The objective is to balance a single pole mounted on a cart, which can move either to the right or to the left on a bounded, horizontal track. The goal for the learning controller is to provide a force (applied to the cart) of a fixed magnitude in either the right or the left direction so that the pole stands balanced and avoids hitting the track boundaries. The controller receives reinforcement only after the pole has fallen. To provide the learning controller measured states as inputs to the action and the critic networks, the cart–pole system was simulated on a digital computer using a detailed model that includes all of the nonlinearities and reactive forces of the physical system, such as friction. Note that these simulated states would be the measured ones in real-time applications.
.
The nonlinear differential equations of 13.45 and 13.48 are numerically solved by a fourth-order Runge-Kutta method. This model provides four state variables: (1) x(t), position of the cart on the track; (2) u(t), angle of the pole with respect to the vertical position; (3) x_ (t), cart velocity; and (4) u(t), angular velocity. In the current study, a run consists of a maximum of 1,000 consecutive trials. It is considered successful if the last trial (trial number less than 1,000) of the run has lasted 600,000 time-steps. Otherwise, if the controller is unable to learn to balance the cart–pole within 1,000 trials (i.e., none of the 1,000 trials has lasted over 600,000 time-steps), then the run is considered unsuccessful. This chapter’s simulations have used 0.02 sec for each time-step, and a trial is a complete process from start to fall. A pole is considered fallen when the pole is outside the range of [12 , 12 ] and/or the cart is beyond the range of [2:4, 2:4] meters in reference to the central position on the track. Note that although the force F applied to the cart is binary, the control u(t) fed into the critic network as shown in Figure 13.1 is continuous.
13.4.2 Simulation Results Several experiments were conducted to evaluate the effectiveness of this chapter’s learning control designs. The parameters used in the simulations are summarized in Table 13.1 with the proper notations defined in the following: . . .
13.4.1 The Cart–Pole Balancing Problem The cart–pole system used in the current study is the same as the one in Barto et al. (1983): d 2 u g sin u þ cos u[ F ml u_ 2 sin u þ mc sgn(x_ )] ¼ dt 2 4 m cos2 u l 3 mc þ m d 2 x F þ ml[u_ 2 sin u €u cos u] mc sgn(x_ ) : ¼ dt 2 mc þ m
mp u_ ml
.
.
: (13:45)
. . .
(13:46)
g ¼ 9:8 m=s2 , acceleration due to gravity mc ¼ 1:0 kg, mass of cart m ¼ 0:1 kg, mass of pole l ¼ 0:5 m, half-pole length mc ¼ 0:0005, coefficient of friction of cart on track mp ¼ 0:000002, coefficient of friction of pole on cart F ¼ 10 Newtons, force applied to cart’s center of mass ( 1, if x > 0. sgn(x) ¼ 0, if x ¼ 0. 1, if x < 0.
.
• $l_c(0)$: initial learning rate of the critic network
• $l_a(0)$: initial learning rate of the action network
• $l_c(t)$: learning rate of the critic network at time t, which is decreased by 0.05 every 5 time-steps until it reaches 0.005 and stays at $l_c(f) = 0.005$ thereafter
• $l_a(t)$: learning rate of the action network at time t, which is decreased by 0.05 every 5 time-steps until it reaches 0.005 and stays at $l_a(f) = 0.005$ thereafter
• $N_c$: internal cycle of the critic network
• $N_a$: internal cycle of the action network
• $T_c$: internal training error threshold for the critic network
• $T_a$: internal training error threshold for the action network
• $N_h$: number of hidden nodes
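The learning-rate schedule described above can be sketched in a few lines; the helper name and defaults are assumptions matching the values in Table 13.1.

```python
# Sketch of the schedule: starting from l(0), the rate drops by 0.05 every
# 5 time-steps until it reaches and holds its final value l(f).
def learning_rate(t, l0=0.3, lf=0.005, drop=0.05, every=5):
    return max(lf, l0 - drop * (t // every))

print(round(learning_rate(0), 3))     # 0.3
print(round(learning_rate(5), 3))     # 0.25
print(round(learning_rate(40), 3))    # 0.005 from then on
```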
TABLE 13.1 Summary of Parameters Used in Obtaining the Results Given in Table 13.2

Parameter   Value    Parameter   Value
$l_c(0)$    0.3      $N_c$       50
$l_a(0)$    0.3      $N_a$       100
$l_c(f)$    0.005    $T_c$       0.05
$l_a(f)$    0.005    $T_a$       0.005
$N_h$       6
Note that the weights in the action and the critic networks were trained using their internal cycles, $N_a$ and $N_c$, respectively. That is, within each time-step, the weights of the two networks were updated at most $N_a$ and $N_c$ times, respectively, or stopped once the internal training error thresholds $T_a$ and $T_c$ had been met. To be more realistic, both sensor noise and actuator noise have been added to the state measurements and the action network output. Specifically, the actuator noise has been implemented through u(t) = u(t) + r, where r is a uniformly distributed random variable. For the sensor noise, both uniform and Gaussian random variables were added to the angle measurements $\theta$: the uniform sensor noise was implemented through $\theta$ = (1 + noise percentage) · $\theta$, and the Gaussian sensor noise was zero mean with a specified variance. The proposed configuration of neural dynamic programming has been evaluated, and the results are summarized in Table 13.2. The simulation results in Table 13.2 were obtained through averaged runs: 100 runs were performed, each initialized with random network weights. If a run is successful, the number of trials it took to balance the cart–pole is recorded; the number of trials listed in the table is the average over all of the successful runs, and the percentage of successful runs out of 100 is also recorded. A good configuration is one with a high percentage of successful runs as well as a low average number of trials needed to learn to perform the balancing task.
FIGURE 13.3 A Typical Angle Trajectory During a Successful Learning Trial for the NDP Controller when the System is Free of Noise
Figure 13.3 shows a typical trajectory of the pole angle under an NDP controller for a successful learning trial; the system under consideration is not subject to any noise. Figure 13.4 summarizes typical statistics of the learning process in a histogram of the vertical angle when the system learns to balance the cart–pole using ideal state measurements without noise corruption.
TABLE 13.2 Performance Evaluation of NDP Learning Controller when Balancing a Cart–Pole System

Noise type                      Success rate   Number of trials
Noise-free                      100%           6
Uniform 5% actuator             100%           8
Uniform 10% actuator            100%           14
Uniform 5% sensor               100%           32
Uniform 10% sensor              100%           54
Gaussian $\sigma^2$ = 0.1 sensor  100%           164
Gaussian $\sigma^2$ = 0.2 sensor  100%           193
Note: The second column represents the percentage of successful runs out of 100. The third column depicts the average number of trials it took to learn to balance the cart–pole. The average is taken over the successful runs.
FIGURE 13.4 Histogram of Angle Variations Under the Control of the NDP Online Learning Mechanism in the Single Cart–Pole Problem. The system is free of noise in this case.
13.5 Example 2
TABLE 13.3 Performance Evaluation of NDP Learning Controller to Swing Up and Then Balance a Pendulum

Reinforcement implementation   Success rate   Number of trials
Setting 1                      100%           4.2
Setting 2                      96%            3.5
This section examines the performance of the proposed NDP design in a pendulum swing-up and balancing task. The case under study is identical to the one in Santamaria et al. (1996). The pendulum is held at one end and can swing in a vertical plane, actuated by a motor that applies a torque at the hanging point. The dynamics of the pendulum are as follows:
$$\frac{d\omega}{dt} = \frac{3}{4ml^2}\left[F + mlg\sin(\theta)\right]. \quad (13.47)$$

$$\frac{d\theta}{dt} = \omega. \quad (13.48)$$
In equations 13.47 and 13.48, m = 1/3 and l = 3/2 are the mass and length of the pendulum bar, respectively, and g = 9.8 is the gravitational acceleration. The action is the angular acceleration F, bounded between −3 and 3, namely $F_{min} = -3$ and $F_{max} = 3$; a control action is applied every four time-steps. The system states are the current angle $\theta$ and the angular velocity $\omega$. This task requires the controller not only to swing up the bar but also to balance it at the top position; the pendulum initially sits still at $\theta = \pi$. The task is considered difficult in the sense that (1) there is no closed-form analytical solution for the optimal control, and complex numerical methods are required to compute it; and (2) the maximum and minimum angular accelerations are not strong enough to move the pendulum straight up from the starting state without first creating angular momentum (Santamaria et al., 1996). In this study, a run consists of a maximum of 100 consecutive trials. It is considered successful if the last trial (trial number less than 100) of the run has lasted 800 time-steps (with a step size of 0.05 sec); otherwise, if the NDP controller is unable to swing up and keep the pendulum balanced at the top within 100 trials (i.e., none of the 100 trials has lasted over 800 time-steps), the run is considered unsuccessful. In this chapter's simulations, a trial is terminated either at the end of the 800 time-steps or when the angular velocity of the pendulum exceeds $2\pi$ (i.e., $\omega > 2\pi$). The following paragraphs explain two implementation scenarios with different settings of the reinforcement signal r. In setting 1, r = 0 when the angle displacement is within 90° from the position $\theta = 0$; r = −0.4 when the angle is in the lower half of the plane; and r = −1 when the angular velocity $\omega > 2\pi$.
In setting 2, r = 0 when the angle displacement is within 10° of the position u = 0; r = −0.4 when the angle is in the remaining area of the plane; and r = −1 when the angular velocity v > 2π. The NDP configuration proposed in this chapter is then used to perform the described task. The same configuration and the same learning parameters as those in the first case study are used. NDP controller performance is summarized in Table
FIGURE 13.5 A Typical Angle Trajectory During a Successful Learning Trial for the NDP Controller in the Pendulum Swing Up and Balancing Task. (A) The entire trial, angle (degrees) versus time-steps; (B) a magnified portion of the trajectory between time-steps 400 and 800.
13.3. The simulation results summarized in the table were obtained by averaging over multiple runs. Specifically, 60 runs were performed to obtain the results reported here. Note that more runs were used than in Santamaria et al.
(1996) (which used 36) to generate the final result statistics. Every other simulation condition was kept the same as in Santamaria et al. (1996). Each run was initialized to u = π and v = 0. The number of trials listed in the table corresponds to the average over all of the successful runs. The percentage of successful runs out of 60 was also recorded in the table. Figure 13.5 shows a typical trajectory of the pendulum angle under an NDP controller for a successful learning trial. This trajectory is characteristic of both setting 1 and setting 2. The second column of the table represents the percentage of successful runs out of 60. The third column depicts the average number of trials it took to learn to successfully perform the task; the average is taken over the successful runs.
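The task dynamics of equations 13.47 and 13.48 and the setting-1 reinforcement signal can be sketched in Java as follows. The explicit Euler scheme and all class and field names are assumptions for illustration; the chapter does not state which numerical integrator was used.

```java
// Sketch of the pendulum swing-up task of equations 13.47 and 13.48 with the
// setting-1 reinforcement signal. Euler integration and all names are assumed.
public class PendulumTask {
    public static final double M = 1.0 / 3.0;   // pendulum bar mass m
    public static final double L = 3.0 / 2.0;   // pendulum bar length l
    public static final double G = 9.8;         // gravitational constant g
    public static final double DT = 0.05;       // step size, 0.05 sec
    public static final double F_MAX = 3.0;     // action bound: -3 <= F <= 3

    public double theta = Math.PI;              // angle u; starts hanging down
    public double omega = 0.0;                  // angular velocity v

    // One Euler step of equations 13.47 (angular acceleration) and 13.48.
    public void step(double f) {
        f = Math.max(-F_MAX, Math.min(F_MAX, f));                  // clip the action
        double accel = 3.0 / (4.0 * M * L * L) * (f + M * L * G * Math.sin(theta));
        omega += DT * accel;                                       // eq. 13.47
        theta += DT * omega;                                       // eq. 13.48
    }

    // Reinforcement signal r for setting 1: 0 within 90 degrees of upright,
    // -0.4 in the lower half of the plane, -1 on an angular-velocity failure.
    public double reinforcement() {
        if (Math.abs(omega) > 2.0 * Math.PI) return -1.0;
        double wrapped = Math.atan2(Math.sin(theta), Math.cos(theta)); // wrap to (-pi, pi]
        return Math.abs(wrapped) <= Math.PI / 2.0 ? 0.0 : -0.4;
    }
}
```

Note that the downward rest state u = π is itself an equilibrium of equation 13.47 with F = 0, which is why the controller must first pump angular momentum into the bar.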
13.6 Conclusion This chapter is an introduction to a learning control scheme that can be implemented in real time. It may be viewed as a model-independent approach to the adaptive critic designs. The chapter demonstrates the implementation details and learning results using two illustrative examples. The chapter also provides a view on the convergence property of the learning process under the assumption that the two network models, the critic and the action, can be learned separately. The analysis of the complete learning process remains an open issue.
References

Barto, A.G., Sutton, R.S., and Anderson, C.W. (1983). Neuron-like adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics 13, 834–847.

Bertsekas, D.P., and Tsitsiklis, J.N. (1996). Neuro-dynamic programming. Belmont, MA: Athena Scientific.

Kushner, H.J., and Yin, G.G. (1997). Stochastic approximation algorithms and applications. New York: Springer.

Robbins, H., and Monro, S. (1951). A stochastic approximation method. Annals of Mathematical Statistics 22, 400–407.

Santamaria, J.C., Sutton, R.S., and Ram, A. (1996). Experiments with reinforcement learning in problems with continuous state and action spaces. COINS Technical Report 96-88, University of Massachusetts, Amherst.

Tsitsiklis, J.N., and Van Roy, B. (1997). An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control 42(5), 674–690.

Werbos, P.J. (1990). A menu of designs for reinforcement learning over time. Neural Networks for Control, W.T. Miller III, R.S. Sutton, and P.J. Werbos (eds.), MIT Press, Cambridge, MA.

Werbos, P.J. (1992). Approximate dynamic programming for real-time control and neural modeling. Handbook of Intelligent Control, D. White and D. Sofge (eds.), Van Nostrand Reinhold, New York.
14 Software Technologies for Complex Control Systems Bonnie S. Heck School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
14.1 Introduction .......................................................... 1161
14.2 Objects and Components: Software Technologies .......... 1162
14.3 Layered Architectures .............................................. 1164
14.4 Networked Communications ..................................... 1164
14.5 Middleware ........................................................... 1165
     14.5.1 Distributed Component Object Model • 14.5.2 Common Object Request Broker Architecture • 14.5.3 Java-Based Technologies
14.6 Real-Time Applications ............................................ 1167
14.7 Software Tools for Control Applications ...................... 1168
References ................................................................... 1168
14.1 Introduction Control systems and computers enjoy a very successful partnership that began modestly in the 1940s when the theoretical framework for sampled data control systems was first established. The early implementation of the control was done with minicomputers that were rather bulky and sensitive to environmental conditions (e.g., heat, humidity, etc.) and, hence, not well suited for many applications. The popularity of digital control grew dramatically shortly after microprocessors were introduced in the early 1970s. The main considerations of these early digital controllers were speed of the processor, effect of quantization, and effect of sampling rates. The software considerations revolved around the size of the executable program (in terms of memory requirements) and the speed at which it executed (to increase the sampling rate). Many early control system programs were written in assembly code because compiled C programs and Pascal programs were often too large and too slow. As computer technology advanced, the emphasis in digital control research was placed on analysis and on the development of sophisticated algorithms. New developments in software engineering promise to make revolutionary advances in the way control systems are implemented. Moreover, advances in the implementation may inspire a new generation of digital control algorithms that take
advantage of new software technologies. The challenge is that control systems are unique among computer applications since they rely on hard real-time computations. Too slow an update rate (or an erratic one) means that the closed-loop system may experience stability problems. In other computer applications, such as telecommunications, slow update rates may simply degrade performance and irritate customers rather than destabilize the system. The need for speed of operation historically resulted in tight coupling of software. Tight coupling means that different software modules are intertwined so that they share data or control flow information. Modification to one software module may necessitate modifications to all modules that are coupled with it. Although this is advantageous for the speed of real-time applications, it makes system integration a nightmare for large projects. Consider the flight control for an aircraft: there are many different control algorithms, sensors, and actuators that need to be integrated. Multiple processors are generally used on board. Integrating all the software for the different components through a common communications network is a huge task and often is the bottleneck in any updates to the flight controls, especially when the software is tightly coupled. Tight coupling also severely complicates the flight certification of software, which is a verification and validation procedure used to prove that the software is stable and will not have run-time errors.
As computers have become faster and as communication networks have become more efficient in the flow of information, the practice of providing speed through tight coupling of control system software modules has become less necessary. Replacing the need for speed is the need for ease of system integration and system evolution. A very desirable feature is plug and play, which refers to ready interchangeability of modules and system reconfiguration with minimal software or protocol changes. For example, as new sensors become available, it would be nice to pull out the old sensors and put in the new sensors with drivers that are "plug and play" with the rest of the system. Also, for the sake of adaptability, it would be helpful to have run-time reconfiguration of the software (such as updating or changing algorithms online in real time). Another desirable feature would be interoperability among different processor platforms and different languages. For example, suppose a processor embedded in an application is running one type of operating system and a desktop computer attached to the system is running another operating system. Data transfer between these two systems is not trivial; for example, the memory byte ordering is different between Windows-based platforms and Unix platforms, and word-size differences on 32-bit versus 64-bit processor architectures cause incompatibilities. This chapter gives a brief tutorial on some software technologies that are important for control applications. While software is employed both in design and in implementation, this chapter concentrates on implementation technologies. This is a hot topic of current research, so readers should consult the latest information when using these technologies.
14.2 Objects and Components: Software Technologies One of the major advances in computer science over the last few decades has been object-oriented programming (OOP). People who learned programming before 1990 would find OOP somewhat elusive to define and even more elusive to understand. OOP, however, has taken hold in the constructs of the newer languages, such as Java and C++ (the object-oriented version of C). The motivation for using OOP is the ease of evolution of software and the reuse of code for different applications. OOP builds programs in a modular fashion, where the software modules are called objects. Objects have well-defined interfaces that are used to interact with other objects. Each interface specifies a set of object attributes that are visible to other objects and a set of functions (or "methods") that represent the behavior or activities the object can perform. The term class refers to the object type and is analogous to a generic plan for building objects (like the software version of the term blueprint), whereas the objects themselves are the specific cases. Classes tend to be organized in a hierarchical relationship: the most common features of
different classes are grouped together in one high-level class called a superclass, with specialization in subclasses. The class structure makes it easier to reuse modular portions of the software for multiple applications because changes can be localized in object classes. There are three main mechanisms used in object-oriented programs:

- Encapsulation refers to objects being self-contained modules with well-defined interfaces for interaction with other objects. The variables used in an object are generally limited in scope so that they are not accessible outside of the object (unless they are specifically declared as "public" variables). An object's interface that allows it to interact with the outside world is kept constant even as the internal workings may be modified and updated, so modification to one object does not require corresponding modifications to other objects. This encapsulation property enables software updates to be accomplished more efficiently and with less risk of error.
- Inheritance is a property relating to the hierarchy of objects. A superclass of objects has certain properties common to all subclasses that inherit from it. The subclasses can reuse predefined methods (or function calls) of the superclass without having to redefine the methods themselves. Thus, inheritance promotes software reuse. When applying the software to a new application, only certain methods need to be specialized; the rest can remain unchanged.
- Polymorphism is a property relating to the ability of a subclass of a superclass to tailor itself for a specific application or override the methods defined in the superclass. Thus, one method in a superclass might mean different things in different subclasses. The primary advantage is that one set of operations can be applied to objects, and the appropriate implementation of each operation is chosen dependent on the class of object to which it is applied.
These concepts can be made more concrete by applying the OOP paradigm to sensors used in flight controls, such as the Global Positioning System (GPS), radar altimeter, and Inertial Measurement Unit (IMU). These sensors, typically purchased commercial-off-the-shelf (COTS), include the actual sensor hardware, a digital output, and a software driver (usually in C) for a processor to read the output. Each of these sensors constitutes an object. Moreover, all sensors perform a common function and have certain common tasks: initializing hardware, opening the data port, closing the data port, and the periodic reading of data during operation. Therefore, you can create a generic class of objects called ‘‘Sensor’’ that encapsulates common attributes and procedures. The hierarchy of this structure is illustrated in Figure 14.1. The Sensor class has methods (or function calls) defined for the common sensor tasks, and the three different sensor subclasses inherit
FIGURE 14.1 An OOP Class Diagram of Sensor Objects. The Sensor superclass defines the methods readData(), initialize(double value), openPort(int value), and closePort(int value); the GPS sensor, Altimeter sensor, and IMU sensor subclasses each override readData() and initialize(double value).
these methods. These methods form the interface that all other objects can use to interact with these sensors (such as a control algorithm reading the current data). A Java example of a method call made from a control algorithm is y = gps.readData(), where gps is an object of the GPS class and readData() is the method within the object that can be accessed publicly by other objects. Because, as stated previously, encapsulation refers to very modular objects with constant interfaces (even as the object's internal code changes), a GPS from one vendor may be replaced by one from another vendor. As long as the method call is not changed, no other object in the system will have to be changed. Inheritance is reflected by the methods that are present in the Sensor class and used by the subclasses. In this example, you could define the methods openPort(int value) and closePort(int value) in the superclass where the argument is the number of the port to be used. Even though these methods are not explicitly defined in the subclasses, the method call gps.openPort(int value) has meaning because the GPS class inherits the method from the superclass. Polymorphism can be seen in the initialize and the readData methods, which can apply to any sensor object but require different implementation depending on the type of sensor. For this purpose, the subclasses all have these methods defined, which override the method defined in the superclass. Why then would you bother to define these methods in the superclass when they are defined more specifically in the subclasses? The answer lies in the way the methods can be called from the control algorithm object. As defined, the control algorithm can group all the sensor objects into one vector; the method call would only need to be made once (on the vector object), and each sensor's appropriate method would be used automatically. This demonstrates polymorphism, where the
same method call means something different to the different subclasses. Polymorphism removes the need for loops or complex case statements to determine which initialization routine to run. The desire for more extensive reusability of software than is provided by OOP gave rise to a new concept in the late 1990s termed software components, which are software modules that are designed to be reused in different software applications (Szyperski, 1998). The term arises from the engineering term component, referring to the pieces of equipment (preferably COTS equipment) that an engineer integrates together to form a larger application. The idea is that software for large applications could also be composed of individual software modules that may be available as COTS. A software component can be an object, a group of objects, or a non-object-oriented software module. In this chapter, the term component will be used to represent a generic, reusable module in a control system that may be composed entirely of software, hardware, or a combination of both. The term distributed components is often used to describe components that are linked via a communication network. As an example of these concepts, consider an object-oriented model of a typical flight control system. The flight control consists of a low-level stability augmentation control as well as a high-level control, such as guidance, and each one may have a dedicated processor. There are many sensors on board such as a GPS, altimeter, and an IMU. The actuators are the servomechanisms that drive the control surfaces (the ailerons, the rudder, and the elevator). The object-oriented modeling method subdivides the flight controls into components as shown in Figure 14.2. Each bubble represents an object (or more generally, a component). Some of the components consist of only software (such as the controllers), one component consists of physical features (the vehicle dynam-
FIGURE 14.2 Objects (or Components) of a Typical Flight Control System. The links represent communication pathways. The components are the guidance and stabilizing controllers, the GPS, altimeter, and IMU sensors, the aileron, rudder, and elevator actuators, and the vehicle dynamics.
ics), and the rest consist of both hardware and software (specifically, the software drivers for the actuators and sensors). Each of these components can be represented entirely by a software model; for example, the vehicle dynamics could be replaced by a simulation for modeling purposes. The resulting modeled software components could be applied in numerous aircraft flight control systems, demonstrating the reusability of software components. Finally, some of the links shown in the figure may be made along a communication network, so these components may be described as distributed components. While the use of OOP and software components has become well established in computer science arenas, such as desktop computing and network software for managing local and wide area networks, the use of these concepts in the field of control systems is newer and challenges the real-time aspects of the software.
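The Sensor hierarchy of Figure 14.1 can be sketched in Java roughly as follows. The method signatures follow the figure; the stored fields, the simulated readings, and the class names are illustrative assumptions.

```java
// Java sketch of the Sensor hierarchy of Figure 14.1 showing encapsulation,
// inheritance, and polymorphism. Fields and simulated readings are assumed.
abstract class Sensor {
    // Common behavior inherited unchanged by every subclass.
    public void openPort(int value)  { /* open data port number `value` */ }
    public void closePort(int value) { /* close data port number `value` */ }
    // Overridden by each subclass (polymorphism).
    public abstract void initialize(double value);
    public abstract double readData();
}

class GpsSensor extends Sensor {
    private double reading;                          // encapsulated state
    public void initialize(double value) { reading = value; }
    public double readData()             { return reading; }
}

class AltimeterSensor extends Sensor {
    private double reading;
    public void initialize(double value) { reading = value; }
    // A deliberately different implementation: an assumed calibration offset.
    public double readData()             { return reading + 0.5; }
}

public class SensorDemo {
    public static void main(String[] args) {
        // Group the sensors into one array; each element's own readData()
        // is selected automatically -- no case statement is needed.
        Sensor[] sensors = { new GpsSensor(), new AltimeterSensor() };
        for (Sensor s : sensors) {
            s.openPort(1);           // inherited from the superclass
            s.initialize(100.0);
            System.out.println(s.readData());
            s.closePort(1);
        }
    }
}
```

The loop in main is the "one method call on the vector" idea from the text: the caller never inspects the concrete sensor type.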
14.3 Layered Architectures In a layered architecture, objects are designed using a building block mentality. The bottom layer is composed of objects that perform low-level, often tedious functions. The next layer has somewhat higher functionality and makes calls to the objects in the lower layer. Each successive layer upward is more highlevel in its functionality. A common way of explaining this layering is that the details are ‘‘abstracted away,’’ meaning that some of the tedious details needed to perform the function are hidden from the higher level objects simply by delegating them to the lower levels. In a layered architecture, the object calls are all downward. This architecture is the motivation for the libraries of common function calls that are available with most high-level languages, including the application
programming interfaces (APIs) that are available for object-oriented languages such as Java. A simple analogy in hardware to layered architecture is the design of digital circuits. At the bottom layer of the design are transistors. Transistors are composed together to form Boolean logic gates at the next layer, such as NAND and OR. At the next layer are devices such as decoders and multiplexers that are composed of logic gates. These devices then become the building blocks for higher layers. A complex digital circuit could be designed at the transistor level, or it could be designed at the multiplexer/decoder level. The transistor-level design would most likely yield a more efficient implementation but would be much more difficult and time-consuming for the person designing the circuit. In a similar way, this concept of abstracting away the details is the motivation for programming using higher level languages rather than programming at the assembly code level. Programming at the assembly code level produces more efficient code but is much more tedious and harder to troubleshoot. The terms level and layer will be used throughout the rest of this chapter to reflect layers of abstraction in a layered architecture.
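The transistor/gate/device analogy can be mimicked directly in software. In the following Java sketch (all class names are illustrative assumptions), each layer calls only downward, and the top-layer multiplexer is built with no reference to transistor-level detail:

```java
// A toy layered design in the spirit of Section 14.3: each layer calls only
// downward, and higher layers hide ("abstract away") lower-level detail.
class TransistorLayer {                       // bottom layer: raw switching
    static boolean nmos(boolean gate) { return gate; }
}

class GateLayer {                             // logic gates built on transistors
    static boolean nand(boolean a, boolean b) {
        return !(TransistorLayer.nmos(a) && TransistorLayer.nmos(b));
    }
    static boolean not(boolean a)            { return nand(a, a); }
    static boolean and(boolean a, boolean b) { return not(nand(a, b)); }
    static boolean or(boolean a, boolean b)  { return nand(not(a), not(b)); }
}

class DeviceLayer {                           // devices built on gates
    // A 2-to-1 multiplexer: out = sel ? b : a, expressed only in gate calls.
    static boolean mux2(boolean a, boolean b, boolean sel) {
        return GateLayer.or(GateLayer.and(a, GateLayer.not(sel)),
                            GateLayer.and(b, sel));
    }
}
```

A designer working at DeviceLayer.mux2 never touches TransistorLayer, just as a C programmer never touches assembly.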
14.4 Networked Communications Complex control systems often have multiple processors that must be interconnected through a communication network. All communication networks have protocols, which are the specifications (or rules) by which the data is transferred. The most common model for data networks is the open systems interconnection (OSI) model (Tanenbaum, 1996), which has seven layers of protocols: the physical layer (the hardware that actually transmits and receives the data) at the bottom and the application layer (the software level that accesses mail and does FTP functions) at the top. In between are the layers that check for errors, perform routing tasks (such as IP), create rules for providing a reliable connection, including breaking information into packets (such as TCP), and handle collisions of packets (such as CSMA/CD). Typical data networks, such as Ethernet, send packets of information in a nondeterministic or "bursty" manner. The desire to have a more deterministic, steady stream of information in applications such as voice and video transmission prompted the design of the asynchronous transfer mode (ATM) model, which is an alternative to the OSI model and has the potential for good applicability to networked control systems. A layer of abstraction can be added on top of the protocol layers to build communication models. The three common types of models are the following:
- Point-to-point: In this model, all communication is one-to-one between pairs of entities. A common example for this type of communication is a telephone call between two parties.
- Client-server: This model is a many-to-one communication model where many clients can be connected simultaneously to one server. The server acts as a centralized source of data.
- Publish-subscribe: In this model, many components interact with one another in a many-to-many architecture. To achieve this, the publish-subscribe model introduces an intermediate level between the components that acts to mediate the communications. Components can be designated as publishers, which produce data, and subscribers, which require data as input. An example of a publisher is a sensor that produces measurements. An example of a subscriber is a control algorithm that requires the measurements from the sensor. In a complex control system, there may be many publishers and many subscribers. For example, a process control system may have distributed components using many processors, or at least many separate algorithms all using similar data, and numerous sensors producing data.
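A minimal, non-networked sketch of the publish-subscribe model in Java follows. The class and method names are assumptions; production middleware adds networking, discovery, and quality-of-service handling on top of this idea.

```java
// A toy publish/subscribe channel: publishers and subscribers never reference
// each other directly -- the channel mediates all delivery. Names are assumed.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.DoubleConsumer;

public class PubSubChannel {
    private final Map<String, List<DoubleConsumer>> subscribers = new HashMap<>();

    // A subscriber registers interest in a named topic.
    public void subscribe(String topic, DoubleConsumer callback) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(callback);
    }

    // A publisher pushes a value; every subscriber to the topic receives it.
    public void publish(String topic, double value) {
        for (DoubleConsumer c : subscribers.getOrDefault(topic, List.of())) {
            c.accept(value);
        }
    }
}
```

A sensor driver would call publish("altitude", reading) while a control algorithm calls subscribe("altitude", ...); neither knows the other exists, which is exactly the decoupling the text describes.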
14.5 Middleware Integrating different software applications either in a single processor or across a network requires detailed knowledge of the operating systems on the connected machines. Moreover, the use of networks requires knowledge of communication protocols that dictate how packets of data are sent and verified. Programming at the operating system level or at the network level adds a layer of complexity to the process that most control engineers would prefer to avoid. As an alternative, a new layer, called middleware, is inserted between the operating system and the application software as shown in Figure 14.3. The middleware sets up the communication protocols and interfaces with the operating systems for the machines on the network to allow different software applications on the
different machines (or on the same machines) to communicate. As such, middleware insulates application users from the lower level operations. You can see how useful middleware is by examining the interaction of different application software programs running on one machine. For example, as long as software vendors build to an interface standard, users are able to cut and paste from one window to another application running in a different window. When middleware is used as the basis for integrating applications, it can be seen as a software substrate. To be effective and easy to use as a substrate, middleware should have certain properties. First, it must have a standard interface so that all application software programs that use the standard can be integrated easily. Second, it must have heterogeneous interoperability, that is, be able to integrate application software that is written in different languages and/or running on different processor platforms. The ultimate goal for control systems would be to have middleware that has plug-and-play capability. Also, for adaptability's sake, it would be helpful to have run-time reconfiguration of the software components (such as updating or changing algorithms online in real time). The control engineering field has not yet reached this point in its use of middleware. More discussion of the available products is given in the last section of this chapter; first, more fundamental concepts on software engineering are introduced. Consider the use of object-oriented middleware to provide the distributed communication channel for the publish/subscribe method of communications. Object-oriented middleware treats the software and hardware components connected to it as objects. Ordinarily, when one object on a distributed communication network calls a method of another object, it would have to know which object had that method and be able to locate it on the network.
The middleware makes this method request transparent to the object that makes the call by using remote procedure calls (RPCs). Because all objects have a standard interface and all coupling between objects is mediated through the communication channel (that is, the objects are not required to be coupled together directly), adding or removing objects from the overall system can be done in a much simpler manner than is possible if the components are all tightly coupled. Consider three different alternatives for object-oriented middleware: DCOM, CORBA, and Java-based technologies (including Jini technology) (Szyperski, 1998).
FIGURE 14.3 Architecture Showing How Middleware Acts to Insulate the Application Software from the Low-Level Computer Functions. The layers, from top to bottom, are the application software (e.g., controllers and a sensor processing module), the middleware, the operating systems on the processors, and the hardware, including the network interface.

14.5.1 Distributed Component Object Model
Component Object Model (COM) was developed by Microsoft and is used to integrate different software applications on platforms running Microsoft Windows. Distributed Component Object Model (DCOM) extends this standard to software applications that may be running on different Windows-based platforms and are connected via a network. Applicability to other platforms is under development.
14.5.2 Common Object Request Broker Architecture Common Object Request Broker Architecture (CORBA) is a software standard developed by a consortium called the Object Management Group (OMG). OMG has approximately 800 member companies who develop and adopt products that abide by the standards set forth by OMG. A basic feature of CORBA is the Object Request Broker (ORB) that handles remote procedure calls. When an object calls a method of another object distributed elsewhere on the network, the ORB intercepts the call and directs it. In this manner, the calling object does not need to know the location of the remote object. The middleware itself is available in different languages that all abide by the CORBA standard; for example, the Java 2 Platform Standard Edition, v1.3, includes a CORBA ORB. The use of CORBA in control engineering is rather new; see Wills et al. (2000) for details on the development of a product that is based on a CORBA substrate. Some high-level features of CORBA, however, make it very attractive for future control engineering products used to integrate systems. Its main benefits are that it has a well-accepted standard for the interface, it allows for interoperability of software components written in different languages (including Java and C++) and/or running on different platforms, it handles all network communication protocols, and it provides for run-time reconfiguration of software components. Components can also be integrated that are not object-oriented (such as legacy code) by wrapping the code in a layer that provides an object-oriented interface. Services are continually being added to the ORB structure that extend the capabilities of CORBA. For example, while CORBA is implemented as a client/server communication model, a layer (called an event service) has been added that emulates the publish/subscribe behavior.
14.5.3 Java-Based Technologies The third common way of integrating components is with Java-based technologies. Java is an object-oriented language that is platform independent and is open source (meaning that the source code is not proprietary), so it has good promise for use in middleware development. Java is based on a layered architecture with new APIs being developed on a continuing basis. Some of the APIs that enable Java to be used for distributed applications are the Remote Method Invocation (RMI) API, the RMI over Internet Inter-ORB Protocol (IIOP) API, and the Java ORB API. The RMI API provides the means for distributed objects to interface through remote method calls (similar to remote procedure calls but specialized to Java objects). The RMI IIOP API provides the remote method call capability using a CORBA ORB. The Java ORB API provides a Java-based CORBA ORB. It should be noted that Java components can interact with non-Java components (such as those written in C) through the use of the Java Native Interface
(JNI). JNI works by providing a standard for writing Java code to wrap around non-Java code; the combined code then can be treated like a Java object. Although many of these Java APIs provide services useful in middleware, a complete package is available in Java: the Jini network technology. Jini is a software architecture for integrating hardware over a network. The lower layers of the architecture handle network protocol decisions, so these decisions are transparent to the user who interfaces at the higher layers. The user also has the option of performing remote method calls using the Java RMI technology, CORBA services, or XML. Thus, Jini can be used as a complementary technology to CORBA. There is a strong continuing effort in the Java community to build new APIs that are increasingly aimed at making Java the basis for integrating multiple components across a network with real-time specifications. Because this field changes rapidly, the reader is encouraged to visit the Sun Microsystems Java Web site for updates. The following example illustrates how middleware can be used to facilitate control system integration and evolution of the system as new hardware components or new software algorithms become available. Consider the flight control example introduced in Figure 14.2. This control system is shown in Figure 14.4 integrated into a publish/subscribe model. The publishing components (those that produce data) are shown with arrows pointing away from them, and the subscribing components (those that require data) are shown with arrows pointing toward them. Note that some components are both publishers and subscribers. To illustrate the communication procedure, consider the guidance module that requires altitude information. It subscribes to this data from the software substrate. When subscribing, it declares the data that it needs and the rate at which it must receive the data.
The GPS and the altimeter both publish this data, so the middleware must have a strategy for deciding which of these sensors to use in retrieving data. One strategy is to give both sensors weight values (Pardo-Castellote et al., 1999). The middleware sends data to the subscriber from the publisher that has the higher weight. These weights can be changed in real time by the middleware so that alternate sensors can be used based on current flight conditions. The subscribers themselves are not changed in this process. In a similar manner, an additional sensor might be added to the system with a prescribed weight. If that weight is higher than those of the existing publishers of that data, the subscribers will receive the data from the new component. Note that this is a simplified example; in reality, there is usually a filtering module between the sensors and controllers that may be used to fuse sensor data. This module may be integrated as a separate component that both publishes and subscribes to data.

FIGURE 14.4 Flight Control Example Implemented Using Middleware That Implements a Publish/subscribe Communication Model (components shown: the guidance module and stabilizing controller exchanging desired trajectory and sensor readings; the altimeter, GPS, and IMU publishing altitude and location; and the elevator, aileron, and rudder receiving actuator commands, all connected through the publish/subscribe middleware)

As mentioned previously, remote procedure calls may be used to access the methods of one object that is remote from the calling object. This is inefficient for real-time applications, such as a stabilizing controller needing sensor measurements at a high rate. One way that the controller can get its data quickly is for the middleware to duplicate the data and the sensor methods in the memory that is local to the controller program. The middleware can update the data at the specified rate by putting this information into a memory location that is accessed by the subscribing object. Hence, when the stabilizing controller calls a method of a sensor object, it is actually calling the duplicate (also called the replicate or the cache copy). Such a local method call is much faster than a remote method call across the communication channel.
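The two mechanisms above, weight-based arbitration among redundant publishers and a subscriber-side cache copy, can be combined in a small sketch. The class name, the arbitration policy, and all method names here are illustrative assumptions, not the mechanism of any particular middleware product:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of weight-based publisher arbitration feeding a local cache copy
 *  that the controller reads with a fast local call (illustrative names only). */
public class WeightedArbitration {
    private final Map<String, Double> weights = new HashMap<>(); // publisher -> weight
    private final Map<String, Double> samples = new HashMap<>(); // publisher -> latest sample
    private double cachedAltitude = Double.NaN;                  // replica local to the controller

    /** Weights can be changed at run-time without touching the subscriber. */
    void setWeight(String publisher, double w) { weights.put(publisher, w); refresh(); }

    void publish(String publisher, double altitude) { samples.put(publisher, altitude); refresh(); }

    /** The middleware forwards data only from the highest-weight publisher that has data. */
    private void refresh() {
        String best = null;
        for (String p : samples.keySet()) {
            if (best == null
                    || weights.getOrDefault(p, 0.0) > weights.getOrDefault(best, 0.0)) {
                best = p;
            }
        }
        if (best != null) cachedAltitude = samples.get(best);
    }

    /** Local method call: reads the cache copy, no remote invocation. */
    double readAltitude() { return cachedAltitude; }

    public static void main(String[] args) {
        WeightedArbitration mw = new WeightedArbitration();
        mw.setWeight("GPS", 0.8);
        mw.setWeight("altimeter", 0.5);
        mw.publish("GPS", 3510.0);
        mw.publish("altimeter", 3495.0);
        System.out.println(mw.readAltitude()); // GPS has the higher weight
        mw.setWeight("altimeter", 0.9);        // flight conditions change: reweight at run-time
        System.out.println(mw.readAltitude()); // altimeter now wins
    }
}
```

The subscriber's code path (`readAltitude`) is identical before and after the reweighting, which is the point of the strategy: sensor substitution is invisible to the consumers of the data.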
14.6 Real-Time Applications

As mentioned previously, digital control systems are much more sensitive to time delays than are other software applications. Three types of timed events occur in a typical control system: periodic events (such as updating control signals at a given rate), asynchronous events (such as set-point adjustments from a supervisory controller), and sporadic events (such as a fault occurrence). A long time delay incurred when processing an event could cause catastrophic behavior, such as instability or failure to recover from a fault. Making the problem more difficult to manage and to analyze is the fact that operations in typical networked computer systems occur in a nondeterministic manner, giving rise to varying time delays. Concern over how this nondeterministic behavior affects different software applications has prompted a great deal of research in the area of real-time computing. Consider the real-time operation of a complex control system in which different components are connected via a network. Time delays arise in three places: the data transfer over the network, the software applications running on the processors, and the middleware that integrates the components.
Consider first the network communications. Performance of a communication network is commonly measured in terms of time delays (e.g., access delay, message delay, and transmission delay), reliability or accuracy of data, and throughput (e.g., maximum data rate divided by the data size). For control systems, the time delay is the most critical measure due to stability and performance considerations. To address the special concerns of control systems, some specialized protocols have been written that provide for a dedicated control area network (such as DeviceNet). The alternative is to use a standard data network such as Ethernet. Data networks typically use protocols (such as TCP/IP) that result in a nondeterministic transfer of data, as opposed to the smaller but more frequent data packet transfers conducted by control area networks. Results examining the use of Ethernet for real-time applications are given in Schneider (2000), while a comparison of Ethernet (using CSMA/CD) to a token ring bus and to control area networks is given in Lian et al. (2001). Networks using ATM technology have good performance with other real-time applications, such as video and voice, and have good potential for future use in control applications.

Next, consider real-time behavior of software applications. While high-level languages do have function calls that seem to imply real time, the timings are not exact. For example, a sleep(20) command implemented in a Java thread would seem to make the thread pause for 20 msec. However, the thread may actually sleep for 19 msec or 21 msec. This can be explained by examining how an operating system on a processor handles several tasks that are running concurrently (known as multitasking or multithreading): the system must schedule dedicated processor time for each of these different tasks. As a result, tasks are typically scheduled in a nondeterministic manner, which gives rise to the resulting soft real-time behavior.
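The soft real-time behavior of sleep(20) can be observed directly. The sketch below simply measures how long the call actually blocks; the exact figures will vary from run to run and from machine to machine, which is precisely the nondeterminism at issue:

```java
/** Demonstrates soft real-time behavior: Thread.sleep(20) pauses for roughly,
 *  not exactly, 20 ms, because the OS schedules threads nondeterministically. */
public class SleepJitter {
    public static void main(String[] args) throws InterruptedException {
        long requestedMs = 20;
        long start = System.nanoTime();
        Thread.sleep(requestedMs);
        double elapsedMs = (System.nanoTime() - start) / 1.0e6;
        // elapsedMs is typically a little above 20 ms; the difference is scheduling jitter.
        System.out.printf("requested %d ms, slept %.3f ms, jitter %.3f ms%n",
                requestedMs, elapsedMs, elapsedMs - requestedMs);
    }
}
```

On a general-purpose operating system the jitter is bounded only statistically; a hard real-time operating system, discussed next, is needed to bound it deterministically.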
While the processor itself does have an absolute clock, a real-time operating system (e.g., LynxOS, VxWorks, Sun/Chorus ClassiX, and QNX) must be used to get hard real-time performance out of the applications. There are also products (or rather, kernels) that can be installed on a system with a non-real-time operating system, such as the Real-Time Linux modification of the popular Linux operating system. Here, hard real-time applications coexist with the normal Linux kernel, and hard real-time tasks are always given priority for execution; the normal Linux kernel as a whole is executed only when slack time is available. Most computers use hardware components to perform tasks that need real-time performance, such as video cards on desktop machines or DSP boards in signal processing applications. These tasks can be performed in parallel without the nondeterministic uncertainty introduced by scheduling of processor time. Another alternative is to use separate dedicated processors to perform each concurrent task, such as having one processor perform low-level control loops and a separate processor perform high-level functions such as fault detection and supervisory control. Moreover, the software application must be able to take advantage of real-time operations. While C has been used successfully in control system implementation, Java lacks good real-time applicability. This may change in the near future because there is a large effort to develop real-time Java capability, as evidenced by the release of the Real-Time Specification for Java (Bollella et al., 2000).

Finally, the middleware must run in real time. For example, the original version of CORBA did not support hard real-time operations because of overhead in the client/server implementation: the ORB intercepts the remote method calls, redirects them, and marshals or demarshals data to translate data values from their local representations to a common network protocol format. CORBA also lacked real-time scheduling capabilities.
Extensions have been made to build a real-time CORBA, such as TAO (O'Ryan et al., 2000). As described previously, using local replicas helps speed up the real-time behavior of the middleware. Further, real-time applications require the middleware to schedule events based on quality of service (QoS) measures. Typical QoS parameters include desired rates for periodic events (e.g., updates from sensors), deadlines for when a task must be completed (e.g., when the new command must be sent to the actuator), and priority values indicating the relative importance of different procedures. A real-time middleware must also time-stamp transactions and provide for efficient and fast memory usage.
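A toy dispatcher can make these QoS parameters concrete. The dispatch policy below (higher priority first, ties broken by earlier deadline) and all names are illustrative assumptions, not the scheduling service of any real-time CORBA product:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

/** Sketch of QoS-driven dispatching: each event carries a priority and a deadline;
 *  the middleware serves higher priority first, earliest deadline on ties. */
public class QosDispatcher {
    /** A schedulable event with QoS attributes (illustrative fields only). */
    static class Task {
        final String name;
        final int priority;    // relative importance of the procedure
        final long deadlineMs; // by when the task must be completed

        Task(String name, int priority, long deadlineMs) {
            this.name = name;
            this.priority = priority;
            this.deadlineMs = deadlineMs;
        }
    }

    // Higher priority first; break ties earliest-deadline-first.
    private final PriorityQueue<Task> queue = new PriorityQueue<>(
            Comparator.<Task>comparingInt(t -> -t.priority)
                      .thenComparingLong(t -> t.deadlineMs));

    void submit(Task t) { queue.add(t); }

    Task next() { return queue.poll(); }

    public static void main(String[] args) {
        QosDispatcher d = new QosDispatcher();
        d.submit(new Task("log telemetry",    1, 500)); // low priority, loose deadline
        d.submit(new Task("actuator command", 9, 20));  // high priority
        d.submit(new Task("sensor update",    9, 10));  // same priority, tighter deadline
        System.out.println(d.next().name); // sensor update
        System.out.println(d.next().name); // actuator command
        System.out.println(d.next().name); // log telemetry
    }
}
```

A production middleware would combine such a policy with the time-stamping and rate enforcement described above; the sketch shows only how priority and deadline together determine service order.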
14.7 Software Tools for Control Applications

Software tools for control systems are aimed either at design and analysis or at implementation. The most common design and analysis tool is MATLAB from The MathWorks. MATLAB is often used in conjunction with Simulink to provide a graphical user interface along with some expanded simulation capabilities. To ease system implementation, the Real-Time Workshop toolbox can be used to generate C code from a Simulink block diagram. The MathWorks has a continuing effort to increase the efficiency of code generation to improve the speed of processing and to reduce the size of the compiled program (an important feature for embedded processors).

In this chapter, emphasis has been placed on using middleware for integrating complex control systems. As mentioned previously, middleware is beneficial for its ability to hide the network communications decisions from the user, its reuse in integrating multiple control system applications, and its ease of system evolution and reconfiguration. A further benefit of some middleware is its ability to integrate software written in different languages and running on different platforms. The development of middleware for control system applications is rather new. One commercial product that uses a real-time publish and subscribe communication model is NDDS, made by Real-Time Innovations (RTI). NDDS has good performance for relatively fast periodic events, which is the case for the low-level stability augmentation controller used in flight controls. RTI also markets ControlShell, a graphical tool that allows integration of control systems in an object-oriented fashion. Consult the RTI Web site for more information on the publish and subscribe communications model for use in controls. While no commercial products currently use CORBA for control systems, a CORBA-based software substrate for controls is under development (Wills et al., 2001). Extensions to CORBA and links to CORBA-based products for other applications can be found on the OMG Web site.
Acknowledgments

The author would like to thank Raymond Garcia, Suresh Kannan, and Dr. Linda Wills for their valuable comments. The author would also like to acknowledge the support of DARPA under the Software Enabled Control Program headed by Dr. Helen Gill.
References

Bollella, G., Dibble, P., Furr, S., Gosling, J., Hardin, D., and Turnbull, M. (2000). The real-time specification for Java. New York: Addison-Wesley.

Lian, F.-L., Moyne, J., and Tilbury, D. (2001). Performance evaluation of control networks: Ethernet, ControlNet, and DeviceNet. IEEE Control Systems Magazine 21(1), 66–83.

O'Ryan, C., Schmidt, D., Kuhns, F., Spivak, M., Parsons, J., Pyarali, I., and Levine, D. (2000). Evaluating policies and mechanisms to support distributed real-time applications with CORBA 3.0. IEEE Real-Time Technology and Applications Symposium, 188–197.

Pardo-Castellote, G., Schneider, S., and Hamilton, M. (1999). NDDS: The real-time publish-subscribe middleware. Retrieved January 11, 2001: www.rti.com/products/ndds/literature.html

Schneider, S. (2000). Making Ethernet work in real-time. Sensors Online. Retrieved August 17, 2004: www.sensormag.com/articles/article_index. Also available as: Can Ethernet be real-time? Retrieved January 11, 2001: www.rti.com/products/ndds/literature.html

Szyperski, C. (1998). Component software: Beyond object-oriented programming. New York: Addison-Wesley.

Tanenbaum, A. (1996). Distributed operating systems. Englewood Cliffs, NJ: Prentice Hall.

Wills, L., Kannan, S., Sanders, S., Guler, M., Heck, B., Prasad, J.V.R., Vachtsevanos, G., and Schrage, D. (2001). An open platform for reconfigurable control. IEEE Control Systems Magazine 21(3), 49–64.
Index
A ABCs. See Absorbing boundary conditions ABFT. See Algorithm-based fault tolerance Absorbing boundary conditions (ABCs) FDTD method dynamic range limited by, 632 infinite modeled regions requiring, 631 PML, 655–663 Abstraction layered architecture using, 1164 in system design, 218–219, 219t AC analysis. See Alternating current analysis AC power. See Alternating current power Accelerated graphics port (AGP), 331 Accumulation MOS capacitor, 110 ACE. See Area control error Ackerberg-Mossberg biquad active filter cascade design using, 130–131, 131f Acoustic microwave devices, 616 Action networks ANN control system, 1152, 1153–1154 statistical average convergence for, 1155 Active constraints QP problems with bounded variables solved using, 247 Active elements. See Sources Active filters, 127–138 realization methods for, 128–138 Active integrated antennas, 705 Active microwave circuits, 691–705 technology/concepts of, 692–696 Actuators MEMS in, 292–294, 293f, 294f Adaptive control direct v. indirect, 1140–1141 time-varying nonlinear systems v. direct, 1139–1149 Adaptive DSMC advantages, 1128 controller design for nonlinear systems, 1119–1120 IC engine idle speed control using, 1126–1127, 1126f idle speed control experimental results, 1127, 1127f nonlinear input/output form systems v., 1119–1120 nonlinear system problem formulation, 1119 simulated idle speed control using, 1126–1127, 1127f SMC controller design for, 1119–1120 Adders REMOD FD/FT in, 433–434, 434t Admission control modules network resource demand v., 411, 411f
Admission criteria modules in network admission control, 411, 411f Admittances, 23, 24t LC, 68f parallel, 24 ADXL. See Analog devices accelerometer AF PHBs. See Assured forwarding PHBs AGC. See Automatic generation control AGP. See Accelerated graphics port AH. See Authenticated Header AI. See Artificial intelligence Air-gap lines, 734–735, 735f Air gaps magnetic circuit reluctances modeling, 496, 496f Algebra characteristic values in, 1024–1025 in control theory, 1019–1026 nonassociative, 1026 polynomials in, 1024 Algorithm-based fault tolerance (ABFT), 448–453 Aliasing digital communication systems v., 959–960, 959f, 960f hexagonal sampling v., 895 image sampling v., 893, 893f, 894f All-pass filters, 72–74, 73f, 74f ALOHA pure/slotted, 993 Alternating current (AC) analysis steady state, 22–26 transmission lines v., 529–534 Alternating current (AC) power steady state, 26–29 Ambiguity function, 684 Ampère's force law, 485–486, 485f Ampère's law, 482, 633, 721–722 contours filling Yee space lattice, 635, 635f in matched UPML, 661 Amplifiers. See also specific types of amplifiers microwave, 696–698 transistor, 696–697, 696f Amplitude shift keying (ASK), 972, 984, 984f Analog devices accelerometer (ADXL) MEMS in, 283, 288 Analog filters transformation methods for, 854–855 Analog noise, 101–104, 102f categories of, 102–104 characteristics, 103t power, 102, 102f
1172 Analog systems abstraction levels in, 218–219, 219t Analysis lattices, 873, 874f Angular error tracking radar v., 682–683 Animation compression schemes, 405t as media type, 405 ANNs. See Artificial neural networks ANSI X3T9.5. See Fiber Data Distribution Interface Antenna arrays, 578–582, 579f mutual-coupling effects v., 579, 582 one-dimensional, 580–581, 581f radar, 676–677 radiation/reception by, 569–582 two-dimensional, 581–582, 581f Antenna elements, 569–578 Antennas, 553. See also specific types of antennas aperture radar, 676 basic far-field relations in, 554, 554f basic parameters of, 570t communication link using transmit/receive, 564–567, 565f frequency-independent, 578 fundamentals of, 553–569 input impedance determined by frequency of, 559–560 input impedance in, 554f, 559 input impedance v. frequency of, 560f mutual-coupling effects between, 579, 582 polarization/tilt angle for, 558, 559f power per solid angle from, 555 radar, 676–677 radiation/loss resistance in, 559 radiation patterns of, 555, 556f radiation power density from, 555 radiation/reception by elements of, 569–582 received noise temperature in, 563–564, 563f received power v. effective area for, 562–563 as receivers, 561–564, 561f as scatterers, 567–569, 568f simple/complex, 553–554 transmit/receive vectors in, 566 as transmitters, 554–561 Application programming interfaces (APIs), 357 layered architecture motivating OOP language, 1164 Application-specific integrated circuits (ASICs) clock scheduling strategy for, 255 DSP, 942–946 interconnect noise analysis in, 301 Architectural level power consumption v. VLSI design, 274–276 Area control error (ACE), 784–785 Arithmetic circuits FD/FT for, 431–436 ARPAbet, 866, 867t ARPANET, 401 ARQ. 
See Automatic retransmission request Array factors antenna array configuration determining, 580 sketching, 580–581, 581f Arrays compatibility/mergeability of, 211–212 Artificial intelligence (AI), 367, 369 Artificial neural networks (ANNs) in adaptive DSMC, 1126–1127, 1128
Index direct adaptive control using, 1146 in DSMC, 1119–1120 learning in control systems based on, 1151–1159, 1152f as online failure dynamics estimators, 1093–1094 in speech recognition, 881 ASICs. See Application-specific integrated circuits ASK. See Amplitude shift keying Assured forwarding (AF) PHBs, 415 Asymptotic stability, 1033 in LTI systems, 1038–1039, 1039f SMC, 1128 Asynchronous transfer mode (ATM) in complex control systems, 1164–1165 AT attachment packet interface (ATAPI), 332 ATM. See Asynchronous transfer mode Attenuation constants in microstrip line waveguides, 547 data transmission v., 986 line-type waveguide, 551 microstrip line waveguide, 547–548 rectangular waveguide, 543–544 Yee algorithm calculating, 645–646, 646f Audio compression schemes, 404t as media type, 404–405 Auger recombination, 162 Authenticated Header (AH) IpSec protocol, 418 Authenticity multimedia data, 408 Autocorrelation functions, 952–953 Automatic generation control (AGC), 782–785, 784f Automatic retransmission request (ARQ) in multimedia network error control, 403 Autotransformers, 717, 717f
B B. See Magnetic flux density B matrix loss formula, 781 Babinet’s principle slot/strip dipole antenna complementarity in, 576–577 Back lobe level (BLL), 556, 556f Back lobes, 556, 556f Back-off algorithm, 994 Back-stepping plant control using, 1139 Balance-to-unbalance (balun) transformers, 595 Ballistic transport semiconductor electron, 159 Balun transformers. See Balance-to-unbalance transformers Band diagrams, 155–157, 156f Band-pass filters common configurations of planar microwave, 610–611, 610f equivalent circuit/frequency response of, 607f LC low-pass filters transformed into, 64–65, 66f microwave, 609–612 non-planar microwave, 611–612, 611f Band-stop filters equivalent circuit/frequency response of, 607f LC low-pass filters transformed into, 65, 66f, 67f microwave, 612, 612f Band structures dielectric/semiconductor/metal, 153f
Index Bandwidth hybrid/directional coupler, 596 memory cost v. application data transfer, 203, 203f multimedia requirements for, 402, 406–407, 407t, 410 real-time constraints on data transfer, 201–208, 208f waveguide bandwidth, 539 Bandwidth partitions PHB, 415 Bases power bipolar transistor, 166, 166f Basis edges in clock skew scheduling spanning trees, 253–255, 254f isolated/main, 253 Basis functions, 620–621, 624–626 entire-domain/sub-domain, 623 line, 623–624, 623f, 624f surface, 625–626, 625f Batteries power v. average current from, 271–272, 272f VLSI power dissipation trends v., 265 Baud rate bit rate v., 985 Bayes’ risk, 922 Bayesian approach, 371 Bayesian estimation in statistical signal processing, 921–923 Bayesian signal detection, 928–929 Beam efficiency antenna, 561 Beam width antenna, 555–556 Beh architecture in embedded systems modeling, 224 Behavior aggregates, 414 Belief theory, 371 Bellman Optimality Principle (BOP), 879 Berenger’s PML, 657–660 FDTD grid employing, 658–659, 659f reflectionless matching condition in 2D TEz case of, 658 three-dimensional case of, 659–660 two-dimensional TEz case of, 657–659 two-dimensional TMz case of, 659 Bessel function derivative roots in circular metal waveguides, 544–546, 545t Best-Effort service, 414 BIBO stability. See Bounded-input bounded-output stability Bilinear transformation method, 854–855 warping effects in, 854–855, 855f Bilinear transformations by Pascal matrix, 855–856, 856f Binary digits in digital communications, 957 Binary PSK (BPSK), 973–974, 973f Binary tree predictive coder (BTPC) custom memory organization for, 206–207, 206t, 210 memory cycle/energy trade-off in, 204–205, 204f Biosystems control theory v., 1026 Biot-Savart law, 483–485 examples with simple solutions for, 484t Bipolar transistors, 142–151, 143f base resistance v. fast switching in, 147 basic equations describing, 143–146 characteristics of, 150, 150f, 151f DC behavior of, 144–145, 145f
1173 depletion capacitances in, 140f, 145–146 doping profiles of, 142, 143f extrinsic regions influencing performance of, 148 heterojunction, 150 hybrid-p model for, 693, 694f internal resistance v. DC operation of, 147 microwave, 693–694, 693f minority charge/transit time in, 146, 146f power, 166–167, 166f self-heating, 148, 150f small-signal resistance in, 147 switching characteristics of power, 167, 167f, 168f transistor/base resistance internal to, 146–147 types of, 142 Biquads active filter cascade design using, 130–132 Bit rate baud rate v., 985 BJTs. See Bipolar transistors Blind speeds, 681–682 BLL. See Back lobe level Block-disabling technique VLSI power consumption reduced by, 274–275 Block matching algorithm (BMA), 383 Blur, 899, 900f BMA. See Block matching algorithm Bode diagrams, 25–26, 27f, 28f Body contacts MOSFET, 113 Bojarski’s identity, 687–688 Boltzmann’s constant The´venin equivalent noise voltage calculated with, 102–103 in VHDL-AMS, 223 BOP. See Bellman Optimality Principle Born approximation, 688 Boundary conditions electrostatics, 508, 508f Laplace’s equation v., 506–509, 509f Boundary nodes in DiffServ Internet services model, 414 MPLS LERs as, 415 Bounded-input bounded-output (BIBO) stability, 820–821, 1027–1028 asymptotic stability implying, 1039 BPSK. See Binary PSK Branch-line couplers. See also Quadrature 908 hybrids N-section cascaded, 597, 597f Branches spanning tree, 33 Breakdown voltage, 163–164 Breast cancer detection FDTD methods/ultrawideband microwave pulses in, 665, 666f Brewster angles, 522 Bridges LAN, 1001–1002 transparent, 1001 Bridging faults, 430 Brittle fracture nonceramic insulators v., 747 Brownian motion, 1061 BSP. See Bulk synchronous parallel BTPC. See Binary tree predictive coder Buck circuits VRMs using conventional/synchronous rectifier, 87–88, 87f
1174 Bulk micromachining MEMS produced with, 286 Bulk synchronous parallel (BSP), 340 Bus AGP with, 331 computer architecture with, 329–333, 333t CPU with, 329 EISA, 330 IEEE 1394, 332–333, 333t ISA and, 329–330 MCA, 330 PCI, 331 terminology associated with, 329–330 USB, 332 VESA, 330–331 Bus-invert encoding, 313, 313f Busses power flow using PQ/PV/slack, 768–769 protecting, 802–803 real/reactive power mismatches at, 770–771 Butterworth filters, 61–62, 63f element values for normalized, 64t microwave, 608, 608f poles of, 64f, 64t
C C++, 339 C4 flip-chip, 316, 316f C Language Integrated Production System (CLIPS), 370 Cache coherence multiprocessors with, 337–339, 338f Cache coherent nonuniform memory (CC-NUMA), 336–337, 337f CAD. See Computer-aided design CADENCE MEMS designed with, 284 Call-ID fields SIP, 422, 422f Call-length/type fields SIP, 422, 422f Capacitances, 510–512 geometrical configurations of, 510–511 VLSI intrinsic/extrinsic, 271 Capacitive coupling model interconnect noise analysis with, 301, 301f Capacitor coupling voltage transformers (CVTs), 791–792 Capacitors distribution system voltage regulation using, 758 microwave passive lumped element, 590–591, 592f, 593f modeling high speed/current, 87–88, 88f MOS, 109–113 nonlinear, 76 partially filled, 511 Care-of addresses Mobile IP, 418–419 Carrier Sense Multiple Access (CSMA) non/1-/p-persistent, 994 Token Passing Bus Protocol v., 995 Cart-pole balancing problem NDP controller implementation on, 1156–1158 simulation results for, 1156–1158, 1157f, 1157t Cartesian Yee lattices numerical dispersion in, 639–640 Cascade design active filters implemented with, 129–132
Index first-order filter sections in, 130, 130f of OTA-C filters, 136–138 second-order filter sections in, 130 Case-based reasoning (CBR), 374–375 Case statements VHDL-AMS, 222 Cavity detection algorithm, 199–200, 199f code rewriting for memory efficiency illustrated by, 199–201 custom memory organization for, 210, 210f Cavity resonators metallic, 601–603 resonant frequencies of metallic, 603–604 types of metallic, 602, 603f Cayley-Hamilton theorem, 1024 CBIR. See Content-based indexing and retrieval CC-NUMA. See Cache coherent nonuniform memory CCDs. See Charge-coupled devices CD. See Collision detection CDG. See Configuration data graph CDMA. See Code-division multiple access CDMA2000, 1008–1009 parameters, 1009t Cells FPGA, 437f, 438f FPGA static NC using cover/dependent, 438, 438f Cellular systems access technologies for, 1005–1009, 1009t Central processing unit (CPU) bus with, 329 memory with, 324, 324f Cepstra, 875–877 delta, 877 mel, 877 real, 875–876, 876f Cepstral analysis, 875–877 Certainty factor (CF), 371 CFAR. See Constant false alarm rate CFC. See Control flow checking CFGs. See Control flow graphs CFR. See Coupling-free routing Channel length modulation in MOSFETs, 118–119, 119f MOSFET, 113, 113f Channel length modulation MOSFET pinch-off v., 116 Channel width MOSFET, 113, 113f Characters in digital communication systems, 964 Charge-coupled devices (CCDs), 333–334 Chebychev filters, 62 amplitude response of, 64f microwave, 608, 608f poles of, 64t Chebychev method, 852 Checkpointing hardware-based rollback using full/incremental, 446 Checksum coefficients, 449 Chemical bonds semiconductor, 154, 154f Chords cospanning tree, 33 spanning tree, 243
Index Chromatic numbers in conflict graphs, 208 Circuit analysis AC, 24 computer-aided nonlinear, 80 definitions/terminology used in, 3–6 difficulty of linear/nonlinear, 76 graph-theoretic foundation for, 31–41 graphical nonlinear, 79, 79f, 80f Laplace transform for, 18–19, 20t, 21f linear, 3–29 open problems in resistive, 81 steps in, 6–7, 7f time domain, 13–16 Circuit breakers relaying system, 793–794 Circuit connectivity matrices, 244–245 Circuit design style VLSI power consumption reduced by, 278–279 Circuit elements, 4–5, 5f, 76. See also specific types of circuit elements qualitative properties of, 78 transmission lines as distributed, 526–529 Circuit graphs, 234 QP algorithm derived using, 242–249, 242f synchronous digital system, 234–235, 235f transformation rules for, 235, 235f Circuit laws, 6 Circuit level of VLSI design v. power consumption, 276–279 Circuit matrices, 36 fundamental, 36 Circuit stamps, 43–44, 45t Circuit switching, 987 Circuit theory, 1–81 Circuits, 5. See also specific types of circuits analogous quantities in electrical/magnetic, 495t basic passive microwave, 587–591 component connection model v., 1081 current/voltage division in AC, 24–25 equivalent, 9–12, 10f FDTD v. analog microwave/digital logic, 663–664, 665f graph theoretic, 32–33, 33f magnetic, 495–497 Norton equivalent, 12, 12f, 25f parallel-tuned, 73–74 planar, 6 power in, 26–29 s-domain equivalent, 20t series-tuned, 65, 72, 72f state variables for, 20–21 The´venin equivalent, 12, 12f, 25f time domain analysis of first-order, 13–15, 15f time domain analysis of higher order, 16 time domain analysis of second-order, 15–16, 16f, 17f transient analysis of nonlinear, 48f, 49–50 voltage/current source equivalence in AC, 24, 24f Circulators, 615–616 strip line/waveguide, 615, 616f CISCs. 
See Complex instruction set computers Clamped inductive load circuits as power bipolar transistor application, 167, 167f, 168f Classes, 1162 HPrTNs modeling inheritance relations between, 470–471, 471f HPrTNs modeling OOP, 469
1175 HPrTNs modeling reference relations between, 470, 470f CLIPS. See C Language Integrated Production System Clock distribution networks clock signals delivered by, 231 Clock gating VLSI power consumption reduced by, 275 Clock schedules, 236 algorithms for creating, 238–242, 250–253, 257–261 consistent/trivial, 236 performance maximized by, 239–240, 240t safety maximized by, 240–241, 240f Clock signals digital systems synchronized by, 231 Clock skews, 232–233 background in scheduling of, 232–238 linear dependence of, 242–246 for performance, 239–240, 240t performance/reliability improved by nonzero, 236–238, 237f permissible range of, 232–233, 233f, 241 QP in scheduling, 241–242, 246–256 for safety, 240–241, 240t scheduling algorithms for, 238–242 VLSI/ULSI reliability improved by scheduling, 231–257 Cluster systems multiprocessors using, 339 Clustering in pattern matching for speech processing, 878–879 Clustering algorithms, 908 Clutter attenuation, 681–682 radar signal, 677–678 CM. See Continuous media CMOS inverter technique, 317, 317f CMOSs. See Complementary metal oxide semiconductors CN bearer service. 
See Core network bearer service Code-division multiple access (CDMA), 1007–1009 FDMA/TDMA v., 1009, 1009t Codeword timeslots, 964, 964f Coding redundancy, 381 Coefficient quantization, 857 Collectors power bipolar transistor, 166, 166f Collision detection (CD), 994 Combined field integral equation, 623 Command-based user interface, 357–358 Commanded inputs SMC, 1125 Common Object Request Broker Architecture (CORBA) as complex control system middleware, 1166 real-time, 1168 Communication digital, 957–964, 958f, 983–988 digital voice, 963 multimedia networks for, 401–424 Companding, 963 Compatibility in arrays with disjoint lifetimes, 211–212 Compatible Time Sharing System (CTSS), 358 Complementary metal oxide semiconductors (CMOSs) low-power design important in, 263–266 MEMS and, 286 packing densities increasing, 263, 264f power consumption sources, 266–270 power dissipation trends in, 264–266, 266f, 267f
1176 Complementary-pass-transistor logic (CPL) VLSI power consumption reduced by, 279 Complex exponential functions, 23 Complex instruction set computers (CISCs) microrollback in, 448, 448f Component connection model, 1079–1081, 1081f advantages, 1081 Components complex control systems using OOP, 1163–1164 distributed, 1163 Compression data, 406 image, 901–907 Compression points microwave power transistor, 697, 697f Compression ratio multimedia storage, 380 Computer-aided design (CAD) algorithms for circuit design with, 43–51 MEMS with, 283 Computer architecture bus in, 329–333, 333t I/O in, 333–334 interfaces in, 329, 331–333, 333t memory hierarchy in, 324–329, 324f, 325f, 326f, 327f microprogramming for, 323–324, 324f RAM in, 325–326, 325f, 326f ROM in, 326–327, 326f, 327f serial-access memory in, 327–329 Computerized tomography (CT) inverse scattering v., 688–689, 688f Conductances static conduction current field, 512 Conduction band, 153–154 edge/bottom, 155–156 Conductivity temperature v., 480, 480t Configurable computing applications, 348–349, 349f approach to, 344–345, 344f, 345f architectural enhancements for, 350 architecture v. logic cell granularity, 943–944, 943t CDG, 346 concluding remarks on, 351 DISC for, 346 domain-specific compilation in, 350 DSP algorithm implementation, 943–944, 945–946, 946t FPGA, 343–351, 345f, 347f, 349f functional density, 346 future trends in, 350–351 global reconfiguration in, 346 hardware platforms, 348–349, 349f introduction to, 343–344 mapping reconfigurable regions in, 350–351 modeling run-time reconfiguration in, 350 novel implementation technology for, 351 parallel harness with, 348 pipelining for, 344–345, 345f rapid compilation in, 351 RSG for, 346 RTR, 343, 344, 344f, 345–346, 345f, 350, 351 RTR devices, 345–346, 345f RTR methods, 346 run-time management, 344, 348
sea of accelerators with, 348 SLU, 348 tools, 346–348, 347f vector dot product in, 349 wildcarding in, 346 Configuration data graph (CDG), 346 Conflict graphs in custom memory organization design, 207 memory bandwidth requirement modeled by, 208, 208f Connection function, 1080 detailed representation for, 1081 Consonants, 867 Constant false alarm rate (CFAR) radar detection, 679 Constant field scaling MOSFET, 121, 121t VLSI, 264 Constant voltage scaling MOSFET, 121, 121t Constitutive relationship for electric fields, 504–505 for magnetic fields, 487 Content-based indexing and retrieval (CBIR) BMA for, 383 camera movement compensation in, 395–396 coding redundancy in, 381 compression in, 380 DCT for, 381 DFT for, 381 encoding, 380–381 error propagation, 380 feature-based modeling in, 396–397 HDM in, 393 high-dimensionality in, 390–391 high-level semantic modeling in, 397–398 image encoding standards for, 381–382, 382f image indexing/retrieval in, 386–392, 387f intraframe coding for, 383 interframe coding for, 383 interpixel redundancy in, 381 introduction to, 379–380 JPEG for, 381–382 keyframe in, 395 low bit rate communications with, 386 low-level feature based indexing in, 386–389, 387f MPEG for, 380, 383–385, 384f multimedia systems with, 379–398 psychovisual redundancy in, 381 QBIC in, 391–392 relevance feedback in, 391 SDM in, 393 segmentation in, 390 spatial v. compressed domain processing in, 389–390 storage for, 380–381 temporal redundancy, 383 temporal segmentation, 392–394, 393f video compression standards, 385–386 video conferencing, 385–386 video encoding standards, 382–386, 383f video indexing/retrieval, 392–398, 393f video summarization, 395 wireless networks with, 386 Content value, 917–918, 918f
Continuous media (CM), 403, 403f delay tolerance, 404 Continuous-time Fourier transforms (CTFTs), 826, 827t Continuous wave (CW) radars, 679–680 Control flow checking (CFC), 1045–1047 modern processor architectures using, 444–446 WDs in, 441–446, 442f Control flow graphs (CFGs) WD CFC using, 442–443, 442f Control laws open/closed-loop, 1062 Control systems design/analysis/implementation tools for, 1168 events/delays in, 1167 networked communications in complex, 1164–1165 online learning, 1151–1159, 1152f real-time applications v. complex, 1167–1168 robust multivariable, 1037–1047 software technologies for complex, 1161–1168 Control theory algebraic topics in, 1019–1026 fault-tolerant, 1085–1103 optimal, 1061–1067, 1062f passivity in, 1045–1047 stability in, 1027–1035 Controllable devices, 163 Controlled load service, 414 Converging T-DAGs CS insertion v., 439–440, 440f depth-first search for, 440–441, 441f Convolution sums, 821 DFTs computing, 821 digital filter, 843 CORBA. See Common Object Request Broker Architecture Core-form transformers, 718–719, 719f Core network (CN) bearer service Iu edge nodes/gateways in UMTS, 424 Coronas, 739 Cost-cumulant control, 1062–1063 other optimal control methods v., 1062f risk-sensitive control v., 1064–1065 Coupled Riccati-type equations, 1063 Couplers, 595–596, 596f, 598–600. See also specific types of couplers Coupling system integration v. tight software, 1161 Coupling-driven signal encoding, 314–315, 314f, 315f Coupling-free routing (CFR) interconnect noise analysis using, 305 Courant stability factor, 639, 640 linear instability v. normalized, 655 normalization/extension in 1D/2D grids, 652 Covariance analysis INS, 1057–1058, 1058f state estimators v., 1053–1054 Cover segments (CSs) static/dynamic methods of incorporating, 438–439 in static NC of FPGAs, 438, 438f CPL. See Complementary-pass-transistor logic CPU. 
See Central processing unit Cramér-Rao lower bound (CRLB) estimator efficiency v., 926–927 Cramer’s rule, 57 CRC. See Cyclic redundancy checking Creepage extenders, 746, 746f
Critic networks ANN control system, 1152, 1153, 1153f statistical average convergence for, 1155–1156 Critical angles, 522 CRLB. See Cramér-Rao lower bound Crosscapacitance scaling noise analysis with, 305, 305f Crosscorrelation functions, 953 CS-nets dynamic NC, 439 CSD algorithm, 251–253, 252f, 253f, 254f, 260–261 experimental clock scheduling application of, 256–257, 258t, 259f implementation, 256–257 CSMA. See Carrier Sense Multiple Access CSMA/CD, 993–995 LAN performance v., 1002 CSs. See Cover segments CT. See Computerized tomography CTs. See Current transformers CTSS. See Compatible Time Sharing System Current, 3–4, 479. See also Alternating current; Direct current division in parallel resistors, 11, 11f junction diode voltage v., 141, 141f measuring, 808, 808t nonlinear circuit, 79–81 power bipolar transistor voltage v., 166–167, 166f sinusoidal, 22 standing waves in lossless transmission line, 531–533, 532f, 533f transformer inrush, 717, 800, 800f in two/three wire AC power systems, 709 waveform in lossless/lossy transmission lines, 528 Current density. See also Volume current density collector, 142 surface, 480 Current fields conductance/resistance in static conduction, 512 static conduction, 511–512 Current ripple in advanced VRM topologies, 91–92, 91f center-tapped inductor VRM interleaving v., 94, 96f VRM transformers v., 94 Current transformers (CTs), 717–718, 718f bus protection v. saturation of, 802–803 relaying system, 791 Cut matrices, 35–36 Cut vectors, 35 Cutoff frequencies rectangular waveguide, 543t TE/TM modes v., 539 Cuts, 33 Cutset system of equations, 39 Cutset transformations, 38 Cutsets, 33 CVTs. See Capacitor coupling voltage transformers CW radars. See Continuous wave radars Cyclic redundancy checking (CRC), 987
D Darlington configuration power bipolar transistor current gain enhanced by, 167, 167f Data digital/analog, 983 transmission of digital, 965–970
Data communication, 983–988 advantages of digital, 985 concepts, 983–985 maximum rate of, 985 Data compression audio, 404t image, 405t text, 404t Data flow graphs (DFGs) in algorithm optimization, 940–941 DSP matrix multiplication, 937–938, 937f Data-flow transformations in cavity detection algorithm, 200, 200f, 201t in code rewriting for memory use, 198 Data reuse cavity detection algorithm optimized by, 200–201 in custom memory hierarchies, 199 Data storage MEMS in, 295, 295f Data transfer bandwidth cost, 202–203 custom memory organization for, 191–214 real-time constraints on bandwidth of, 201–208, 208f DC. See Direct current DC machines, 725–728, 725f separately/shunt/series-excited, 727–728, 727f DC operating points, 75 nonlinear circuit, 79–81, 79f DC power flow algorithm, 772 DCMOS. See Dynamic CMOS DCOM. See Distributed Component Object Model DCTs. See Discrete cosine transforms De Ronde couplers, 599 Decapsulation data message, 989 Decomposition Cholesky, 250 coefficient matrices solved by, 250 Deep submicron technology ASIC and, 301 buffer insertion noise minimization in, 302 bus-invert encoding with, 313, 313f C4 flip-chip with, 316, 316f capacitive coupling model and, 301, 301f case study Pentium 4 with, 305–306, 305f, 306f CFR with, 305 charge leakage noise in, 311 charge sharing noise in, 311 circuit noise reduction techniques with, 315–317, 315f, 316f, 317f CMOS inverter technique with, 317, 317f coupling-driven signal encoding with, 314–315, 314f, 315f crosscapacitance noise in, 311 crosscapacitance scaling in, 305, 305f design flow in, 318, 318f distributed interconnect model in, 301–302 dual threshold technique with, 315 dynamic threshold technique with, 315 early design stage in, 304–305, 304f, 305f TO encoding with, 313 gate delay and, 300f gated-Vdd technique with, 315, 315f Gray code encoding with, 313 ILP with, 305 Intel failure criteria, 318
interconnect delay, 305 interconnect modeling issues, 302 interconnect noise analysis, 299–306, 300f, 301f, 302f, 303f, 304f, 305f, 306f introduction to, 299–301, 300f, 309–311, 310f lumped interconnect model, 301, 301f MAX-CFL with, 305 mirror technique with, 317, 317f models, 301–302, 301f, 302f mutual inductance noise, 312, 312f network ordering noise minimization, 304 NIC, 301 noise analysis algorithms, 317–318, 318f noise in, 309–310, 310f noise minimization techniques, 302–304, 302f, 303f noise reduction techniques, 312–317, 313f, 314f, 315f, 316f, 317f noise sources in, 311–312, 311f, 312f NTRS with, 304 PMOS pull-up technique with, 316–317, 317f power supply noise, 312 process variation, 312 pseudo-CMOS with, 316, 316f reliability, 310–311 repeater design methodology, 306 shield insertion noise minimization, 303–304, 303f signal encoding techniques with, 313–315, 313f, 314f, 315f small-signal unit gain failure criteria, 317–318, 318f SOC in, 309 thermal effects, 312 twin transistor technique with, 317, 317f VDSM in, 309 wire aspect ratio scaling in, 305, 305f wire design methodology, 306 wire sizing noise minimization in, 302 wire spacing noise minimization in, 303, 303f WZE with, 313–314, 314f Defense Advanced Research Projects Agency, 287 Delay, 402, 1167 jitter/skew, 410–411, 411f limits, 406 real-time multimedia v. Internet, 408–410 Delay-line canceller filters, 680–681, 681f Delay tolerance media, 403f, 404 Delayed systems continuous SMC simulation v. point-, 1123, 1123f, 1124f continuous SMC v. point-, 1121–1125, 1122f experimental results from SMC of point-, 1123–1125, 1125f problem of controlling, 1116–1117 SMC control law design for, 1118 SMC controller design for, 1117–1118 stability analysis of, 1118 Delyiannis-Friend biquad. 
See Single-amplifier biquad Demagnetization curves permanent magnet, 490, 490f Demodulation technologies, 971–981 Dempster-Shafer theory of evidence, 371 DENDRAL, 367, 368 Denormalization mantissa preserving functions v., 451 Dense WDM (DWDM), 1014f, 1015 Dependability, 428 relay operation, 795
Dependence graphs (DGs) DSP matrix multiplication, 937, 937f lower-D mapping of, 939 partitioning, 938–939 Depletion regions MOS capacitor, 110 MOSFET DIBL, 119 Depth of penetration plane wave, 517–518, 518t in seawater, 518t skin depth v., 517–518, 518t Deregulation power industry, 710–711, 710f Description schemes (DSs) MPEG-7, 912–914 MPEG-7 Variation, 914, 914f Design flow deep submicron technology, 318, 318f hardware-software codesign v. traditional VLSI, 273–274, 273f system, 218, 218f Design process black box model of, 218f technical product, 218–219 Detectors, 699, 699f DFGs. See Data flow graphs DFTs. See Discrete Fourier transforms DGs. See Dependence graphs Diamond crystal structure, 154, 155f DIBL. See Drain-induced barrier lowering Dielectric constants, 505, 510f Dielectric resonators (DRs), 604–606, 605f cylindrical, 605, 605f, 606f non-radiative, 605–606, 605f whispering-gallery mode, 605 Dielectrics in semiconductors, 153–157 Difference equation, 843–844 Differential pulse-coded modulation (DPCM), 902–904, 905f Differential relaying, 799, 800t, 801 Differentiated Services (DiffServ), 414–415 Diffusion equation minority carrier diffusion described by, 162 Digital communication systems, 957–964, 958f binary/quaternary, 957, 958f quantization in, 960, 960f Digital filters, 839–860 computational complexity of, 859–860, 860f DSPs in software implementation of, 859 linear systems as, 841–844 quantization in, 856–859, 857f real-time implementation of, 859–860 realizations of, 846–847 Digital image processing, 891–910 Digital items configuration of, 915 MPEG-21, 914–915, 915f Digital Mirror Device (DMD) MEMS in, 283, 288–289, 292–293, 293f Digital noise, 101, 105–108 categories, 105 characteristics, 105t Digital signal processors (DSPs) bit-level algorithms for, 941–942 common characteristics of, 935
digital filter implementations based on, 859 hardware implementation technologies in, 942–944 LNS conversion overhead in, 182 tools for hardware implementation of, 944–946 VLSI in, 933–946 VLSI v. architecture of, 934 Digital systems abstraction levels in, 218, 219t Digital_algorithmic architecture diving computer digital block modeled using, 227–228 in embedded systems modeling, 224 Digital_RTL architecture in embedded systems modeling, 224 Diodes. See also specific types of diodes as nonlinear resistors, 76, 77f VHDL-AMS electrical/thermal model of, 223–224, 223f, 224f voltage breakdown in power, 163–164, 164f Diplexers microwave, 613, 613f Dipole antennas, 569–574, 570f arbitrary length, 572, 572f Babinet’s principle of equivalence of slot/strip, 576–577 dual fan-fields in loop/small, 576 folded, 570t, 573–574, 574f half-wave, 570t, 573 monopole antenna equivalent structures using, 574–575, 575f receiving mode of small, 571–572, 571f small, 569–572, 570t transmitting mode of small, 571 Dipole moments induced, 509 magnetic torque from magnetic, 487 Diphthongs, 867 Dirac delta-function, 816 Direct adaptive control L2 tracking error bound/transients in, 1143 subsonic wing rock v., 1143–1148 time-varying nonlinear systems v., 1139–1149 Direct adaptive control theorem, 1141–1143 Direct (band-to-band) radiative recombination, 161, 161f Direct calls H.323, 419–420 Direct current (DC), 3–4, 479–482 Direct manipulation user interface (DMUI), 358 Direction angles plane waves characterized by, 515, 516t Directional relaying, 795, 798–799, 799t Directivity antenna, 560 hybrid/directional coupler, 596 Directory-based protocols multiprocessors with, 338–339 DISC. 
See Dynamic instruction set computer Discrete cosine transforms (DCTs), 381, 906, 906f, 908f Discrete Fourier transforms (DFTs), 381, 835–837 DTFTs v., 835–836 FFT algorithms implementing, 836–837 in image enhancement, 898, 899f inverse, 835 Discrete media (DM), 403, 403f Discrete SMC (DSMC) “chattering”/oscillation v., 1116, 1127 controller design for nonlinear systems, 1119 nonlinear systems v., 1119–1120
Discrete-time Fourier transforms (DTFTs), 824–826, 824f, 825f DFTs v., 835–836 in digital communication system sampling, 959f inverse, 825–826 properties of, 825–826 Discretization, 49–50 Dispersion laser pulses v. linear/nonlinear, 666 Distance relaying, 793–794, 794f, 797–798, 797t, 798f Distributed Component Object Model (DCOM) as complex control system middleware, 1165 Distributed electrical elements microwave passive component, 586 Distributed interconnect model interconnect noise analysis with, 301–302 Distributed power systems (DPSs), 86, 86f VRMs for, 92–94 Distribution, 749–759 power transmission v., 737 Distribution feeders overcurrent protection for, 796–797, 796f, 797t Distribution systems, 749–755, 750f backup, 754 dispersed storage/generation in, 758–759, 759f good voltage/continuity in, 754 overhead construction of, 754 primary, 752–753, 753f secondary, 753–754 underground construction of, 754–755 Disturbances steady-state/transient, 805, 806t Dithering, 896, 896f DM. See Discrete media DMD. See Digital Mirror Device DMUI. See Direct manipulation user interface Dopants n/p-channel MOSFET, 113–114, 114t n/p-type semiconductor, 157 Doping semiconductor, 157 Double clocking, 236 Double-Y baluns, 595, 596f DP. See Dynamic programming DPCM. See Differential pulse-coded modulation DPSs. See Distributed power systems Drain-induced barrier lowering (DIBL) MOSFET, 118–119, 118f, 119f Drain junctions MOSFET, 113–114, 113f DRAM. See Dynamic RAM Drift velocity electric fields v., 158–159, 159f semiconductor electron, 155 Drive systems mechanical description of, 724, 724f Driven nodes dynamic charge sharing by, 106–107, 107f Droop, 782–784, 783f Drop priorities PHB, 415 DRs. See Dielectric resonators Dry-arcing distances, 739f, 740 flashover v., 742
DSMC. See Discrete SMC DSP algorithms hardware synthesis of, 935–942 nested do-loop, 935–939, 936f DSPs. See Digital signal processors DSs. See Description schemes DTMOS. See Dual threshold MOS DTW. See Dynamic time warping Dual threshold MOS (DTMOS), 315 Dual threshold technique, 315 Duration signal, 819 Dynamic allocation vocoder, 211–212, 212f Dynamic charge sharing digital noise from, 106–107, 107f Dynamic CMOS (DCMOS) VLSI power consumption reduced by, 278–279, 278f Dynamic instruction set computer (DISC), 346 Dynamic programming (DP) in speech processing, 879–880, 880f Dynamic RAM (DRAM), 324f, 325–326 Dynamic range FDTD method predictive, 632 Dynamic time warping (DTW), 880–881 Dynamic windowed allocation vocoder, 211–212, 212f Dynamical nonlinear elements two-terminal, 76
E E. See Electric flux density Early effect in bipolar transistors, 144 Early voltage in bipolar transistors, 144 ECs. See Equivalent circuits Edges, 31 incident on/out of/into end vertices, 31–32 parallel, 31, 32f spanning tree isolated/main basis, 253–255, 254f EDPs. See Energy-delay products EF PHBs. See Expedited forwarding PHBs Effective channel mobility MOSFET, 117, 117f Effective mass hole, 157 longitudinal/transverse, 156 semiconductor electron, 155 EFIE. See Electric field integral equation Eigenfunctions Laplace’s equation solved by, 508 Eigensystem realization algorithm (ERA), 1075 Eigenvalues, 1024–1025 Eigenvectors, 1024–1025 EISA. See Enhanced industry standard architecture Elastic applications real-time constraints v., 404 Eldo architecture in embedded systems modeling, 224 Electric charge, 3 conservation, 482 surface density, 501 volume density, 482, 501
Electric field integral equation (EFIE), 623 Electric field lines, 506, 507f Electric fields, 504–505 constitutive relationship for, 504–505 transverse, 519–521 Electric flux density (E), 500 Electric power quality, 805–809, 806t analysis techniques, 809, 810t disturbances, 805, 806t instrumentation, 808–809, 809t measuring, 805–808, 807t Electromagnetics finite-difference time-domain method in computational, 629–668 method of moments in computational, 619–627 Electromotive force (emf), 505 full-pitched DC machine winding inducing, 726, 726f Electron saturation velocity, 158 Electronics, 85–176 FDTD modeling of high-speed, 663–664, 665f Electrons diffusion lengths of, 162 energy spectra for, 156, 156f mean free path of, 158 in semiconductors, 155 velocities/mobilities of, 157–162, 160f Electrostatics, 499–512 conventions of, 499–500 sources/fields in, 500–506 Element replacement method LC ladder filter inductors eliminated by, 132–134, 133, 134 ELF electromagnetic waves. See Extremely low frequency electromagnetic waves ELIZA, 367 Elliptic function response in microwave filters, 608, 608f Embedded radios, 999 emf. See Electromotive force Emitter current crowding, 147 high frequency (hf), 147 Emitter-switched thyristors (ESTs), 174–175, 175f Emitters periphery effects in bipolar transistor, 147–148 power bipolar transistor, 166, 166f Encapsulated Security Payload (ESP) IPsec protocol, 418 Encapsulation, 1162 data message, 989 Encoding digital communication system, 964, 964f VLSI power consumption v., 275–276 Encryption multimedia data, 408 Energy, 26 capacitance connected to electrostatic, 510–511 consumption, 271 electric fields storing, 506 expected increase in regional use of, 737, 738f inductors storing, 494–495 magnetic fields storing, 494, 494f power v., 271–272 system of charges having self-, 506 Energy (band) gaps, 154 silicon, 154
Energy bands, 153 Energy-delay products (EDPs) as VLSI optimization metrics, 272 Energy products maximum, 490, 490f, 491t permanent magnet, 490, 490f Enhanced industry standard architecture (EISA), 330 Entropy image, 903–904 EPROM. See Erasable programmable ROM ε. See Permittivity Equal area criterion power system transient analysis using, 776–778, 777f Equalizers, 968–970 demodulation using, 980–981, 981f MMSE, 969 PAM equalizer effect v., 961, 961f preset/adaptive, 969 tap, 969–970, 969f Equilibrium points, 1032–1033, 1032f isolated, 1032–1033 stable, 1033, 1033f Equivalent circuits (ECs) bipolar transistor, 148, 149f junction diode, 142, 142f power MOSFET, 172, 172f transmission line TEM mode represented by, 528–529, 528f Equivalent series inductors (ESLs) VRM voltage influenced by, 85, 87–88, 88f Equivalent series resistors (ESRs) VRM voltage influenced by, 85, 87–88, 88f ERA. See Eigensystem realization algorithm Erasable programmable ROM (EPROM), 326, 326f Ergodic processes, 953 Error budgeting INS, 1057–1058, 1058f state estimators v., 1053 Error detection, 429 concurrent, 429 in data transmission, 986–987, 987t metrics, 429 Errors faults/failures v., 428–429, 428f latent/detected, 429 ESLs. See Equivalent series inductors ESP. See Encapsulated Security Payload ESRs. See Equivalent series resistors ESTs. See Emitter-switched thyristors Ethernet 100BASE-T/100VG-AnyLAN, 994–995, 994t FDDI v. fast, 1000, 1001t gigabit, 995 Euler forward method Euler backward method v., 49–50 Even mode in folded dipole antennas, 573, 574f Event-discrete operation embedded system component, 217 Expedited forwarding (EF) PHBs, 415 Expert systems AI in, 367, 369 architecture, 369, 369f Bayesian approach in, 371 belief theory in, 371
Expert systems (Continued) blackboard architectures in, 370 CBR in, 374–375 certain v. uncertain model in, 372 CF in, 371 characteristics of, 368–369 CLIPS in, 370 component-based approach in, 374 constraint-based approach in, 374 continuous v. discontinuous model in, 372 declarative v. procedural model in, 372 definition of, 367 Dempster-Shafer theory of evidence in, 371 DENDRAL, 367, 368 ELIZA, 367 EMYCIN, 368, 370 explanation in, 376–377 forward/backward chaining in, 372–373, 372f fuzzy logic in, 371 HEARSAY, 368 history of, 367–368 INTERNIST, 368 Java for, 370 knowledge acquisition in, 375–376 knowledge engineer for, 375 knowledge representation in, 370–372 LISP for, 367, 368, 369, 370 logic languages in, 370 MACSYMA, 367, 368 model-based, 370, 371–372 model-based reasoning in, 374 MOLGEN, 368 MYCIN, 367–368, 370, 371, 373, 377 OO approach to, 370 overview of, 367–370, 369f process-based approach in, 374 production rules in, 370 PROLOG, 369 PROSPECTOR, 369 PUFF, 368, 369 quantitative v. qualitative model in, 372 reasoning in, 372–375, 372f rule-based, 370–371 semantic networks in, 370 shell in, 370 SMART, 375 static v. dynamic model in, 372 XCON, 369 Exponential signals continuous-time, 818, 818f discrete-time, 818, 819f LSI systems v., 824–827 Extended conflict graphs. See Conflict graphs Extremely low frequency (ELF) electromagnetic waves FDTD methods modeling propagation of, 663, 664f
F F-B algorithm. See Forward-backward algorithm Fabrication foundries MEMS produced with, 286–287, 287f Fabry-Perot cavities, 602 Factorization matrix equations solved by, 626–627
Fail-safe operation, 428 Faraday’s law, 633, 722 contours filling Yee space lattice, 635, 635f FAs. See Foreign agents Fast decoupled power flow algorithm, 771–772 Fast Fourier transform (FFT) algorithms, 836–837 Fast interface traps MOS capacitor/MOSFET, 112 Fault analysis component connection model v., 1082–1083 system identification v., 1082 Fault detection and diagnosis (FDD), 1086 feasibility of online, 1095–1096 Fault detection (FD) implementation levels, 429–430 signature of the failure in, 1086 Fault diagnosis and accommodation (FDA), 1085–1103 feasibility of online, 1103 multiple model-based, 1094–1096, 1094f, 1095f overview, 1085–1088 problem statement, 1088–1089, 1089f self-optimization/online adaptation in, 1088 simulated false alarms v., 1101–1102, 1102f, 1103f simulated intelligent framework for, 1099–1101, 1099f, 1100f, 1101f simulated unanticipated failures v. online, 1096–1099, 1097f, 1098f typical fault accommodation approaches in, 1087f typical fault diagnosis methods in, 1086f Fault models, 430–431 stuck-at, 430–431 Fault tolerance (FT), 428 computer system, 427–454 in control, 1085–1103 FPGA, 436–441 implementation levels, 429–430 microrollback in, 446–448 software-based, 428, 437–438 structural, 428 Faults accommodating online, 1089–1094 active/dormant, 429 ANN estimator training v., 1093–1094 avoidance/removal of, 429 diagnosing/accommodating, 1085–1103 failures/errors v., 428–429, 429f induction motors v., 803 internal/external, 429 permanent/transient/intermittent, 429 power system, 787–788 system performance v. detecting, 1102 FD. See Fault detection FDA. See Fault diagnosis and accommodation FDD. See Fault detection and diagnosis FDDI. See Fiber Data Distribution Interface FDMA. See Frequency-division multiple access FDTD methods. See Finite-difference time-domain methods Feature-based modeling multimedia systems with, 396–397 spatial image features in, 396 temporal motion features in, 396 FEC. See Forward error correction FECs. 
See Forward equivalence classes Feeder voltage regulators station/distribution-type, 756
Ferrite microwave components, 613–616 Ferrites in microwave passive components, 613 Ferroelectric devices microwave, 617 Ferroelectric materials, 617 FFT algorithms. See Fast Fourier transform algorithms Fiber Bragg gratings, 1014f, 1015 Fiber Data Distribution Interface (FDDI), 999–1000 Field displacement isolators, 614, 615f Field-programmable gate arrays (FPGAs) applications for, 348–349, 349f approach to, 344–345, 344f, 345f architectural enhancements for, 350 architecture-driven synthesis of, 944 CDG for, 346 concluding remarks on, 351 configurable computing with, 343–351, 345f, 347f, 349f DISC for, 346 domain-specific compilation in, 350 DSP algorithms implemented in, 942–943, 943t FPGA for, 343–351, 345f, 347f, 349f FT in, 436–441 functional density in, 346 future trends in, 350–351 global reconfiguration in, 346 hardware platforms for, 348–349, 349f introduction to, 343–344 mapping reconfigurable regions in, 350–351 modeling run-time reconfiguration in, 350 novel implementation technology for, 351 parallel harness with, 348 pipelining for, 344–345, 345f rapid compilation in, 351 regular array architecture synthesis v., 944–946, 944f, 945t, 946t reprogrammable SRAM-based, 437, 437f RSG for, 346 RTR devices for, 345–346, 345f RTR for, 343, 344, 344f, 345–346, 345f, 350, 351 RTR methods for, 346 run-time management for, 344, 348 sea of accelerators with, 348 SLU for, 348 tools for, 346–348, 347f vector dot product in, 349 wildcarding in, 346 Fields number, 1020 SDP message, 416–417, 417f Figures of merit (FOMs) bipolar transistor, 149–150 future microprocessors v. VRM, 96 VRM efficiency v. MOSFET, 89–90, 90f Files block, 357 character, 357 machine model with, 360t, 361 MS-DOS, 365 operating systems with, 356–357, 357f, 360t, 361 UNIX, 361, 363 Filter specification (filterspec), 413 Filters. See also specific types of filters ideal frequency responses of, 848f
microwave, 606–613 order of, 608–609 signal estimation using complementary, 1055–1056, 1056f, 1057, 1057f Filterspec. See Filter specification Finite-difference time-domain (FDTD) methods algorithms used by, 631–632 application examples of, 663–667 in computational electromagnetics, 629–668 expanded interest in, 630 historical alternatives to, 629–630 large problems v. scaling, 632–633, 642–643 numerical wave dispersion v., 638–648 PML ABCs in, 655–663 space-grid time-domain techniques related to, 630–631 Finite impulse response (FIR) filters, 844–845 approximation methods for, 848–853 optimization in design of, 849–853 problem formulation for optimization of, 851–852 realizations of, 846, 846f FIR filters. See Finite impulse response filters First-null beam width (FNBW), 555, 556f Fisher factorization theorem, 925–926 Fisher statistics Bayesian statistics/estimation v., 924 information inequality in, 926–927 in statistical signal processing, 924–927 Fisher’s information, 926 Five Dining Philosophers problem as HPrTN example, 468, 468f as PZ net example, 465–466, 465f as TPrTN example, 462–463 Flash memory, 326, 327f Flashovers, 741, 742f back, 739 contamination v., 742f, 744 Flexibility VLSI power v., 272 Flicker in pink noise, 104 Flicker noise active microwave circuit, 692 Flight control systems Java-based middleware in, 1166–1167, 1167f OOP paradigm v., 1162–1164, 1163f, 1164f Floating nodes dynamic charge sharing by, 106–107, 107f Floating-point checksum test in ABFT, 449–450 Flow descriptors, 413 filterspecs/flowspecs comprising, 413 Flow specification (flowspec), 413 Flowspec. See Flow specification Flux fringing magnetic circuit, 496 Flux linkage, 492 FM-CW Radar. See Frequency-modulated CW Radar FNBW. See First-null beam width FOMs. See Figures of merit Forbidden energy gaps, 153, 153f Force electrostatic, 506 magnetic, 485–487
Foreign agents (FAs) Mobile IP, 418–419 Formants, 862 Forward-backward (F-B) algorithm, 884 Forward equivalence classes (FECs) MPLS, 415–416 Forward error correction (FEC) in multimedia network error control, 403, 407 Foster canonical forms, 69 Four-quadrant operation, 724, 724f Fourier transforms, 824–826, 827t, 835–837 continuous-time signals v., 831–833, 833f image sampling using, 892–895, 892f, 893f, 895f FPGAs. See Field-programmable gate arrays Frame marker bits video/audio, 417 Free electron mass, 155 Frequency bias, 785 Frequency control, 782–785 Frequency-division multiple access (FDMA), 1005, 1006f TDMA/CDMA v., 1009, 1009t Frequency domains multiple input/output LTI, 1069–1078, 1070f seismic-active mass driver structure, 1073t, 1075f, 1075t, 1076f, 1077–1078, 1077f 16-story structure, 1072f, 1073f, 1073t, 1074f, 1076 Frequency-modulated (FM-)CW Radar, 680, 680f Frequency response AC circuit, 25, 26f AC linear circuit, 46f ideal filter, 848f LSI, 826–827 multiple input/output, 1040–1041 network, 57–61, 58f–60f Saramäki window, 849 series inductors/parallel capacitors v. high, 71 Frequency response functions (FRFs) TFMs v. data from, 1069–1074 Frequency shift keying (FSK), 972, 984, 984f FRFs. See Frequency response functions Fricatives voiced/unvoiced, 867–868 Friis transmission formula, 566–567 From fields SIP, 422, 422f FSK. See Frequency shift keying FT. See Fault tolerance FTHNs. See Fuzzy-timing HLPNs Functional requirements for multimedia data transmission, 402, 407–408 Functions. See also specific functions basis/expansion, 620–621 rooftop, 625–626, 625f, 626f testing, 626 testing/weighting, 621 Fundamental circuits, 34, 35f Fundamental cutset matrices, 36 Fundamental cutsets, 33, 35f Fuzzy logic, 371 HLPNs integrated with, 472–473 Fuzzy numbers, 472 Fuzzy-timing HLPNs (FTHNs), 472 application areas/examples of, 473 fuzzy logic/HLPNs integrated by, 472–473
Index G GaAs. See Gallium arsenide Gain antenna, 560–561 BJT short-circuit current, 693, 694f Darlington configuration v. current, 167, 167f filter miniaturization using, 128 IF amplifiers v., 675 induced-L1 , 1112–1113 induced-L2 , 1110–1111 multiple input/output system, 1040–1041 transducer, 697 Gain margin, 1031, 1031f Gain-scheduled controllers, 1107–1113 computational complexity v. synthesizing LPV, 1111 induced-L1 synthesis by, 1111–1113 induced-L2 synthesis by, 1110–1111, 1111f linearization in designing, 1107–1109 LPV synthesis by, 1109–1110, 1110f scheduling laws in, 1108–1109, 1109f Gain scheduling, 1109–1113 Gallium arsenide (GaAs) tetrahedral coordination, 154, 154f Gate-induced drain leakage (GIDL) in CMOS static power dissipation, 269f, 270 Gate turn-off thyristors (GTOs), 169 Gated-Vdd technique, 315, 315f Gatekeeper-routed calls H.323, 419–420 Gatekeepers H.323, 419 Gateways H.323, 419 Gauss-Newton method, 1072 curve-fitting problems v., 1072–1073 Gauss-Seidel algorithm power flow problems solved using, 769–770 Gaussian mixture densities, 885 Gaussian MSK (GMSK), 977–979, 978f Gaussian noise binary signal detection in, 965–967, 966f, 972–973, 972f signal detection in, 929–930, 931f Gaussian surfaces, 500–502, 502f Gauss’s law in electrostatics, 500–503, 500f, 633 integration over charge distributions v., 503–504, 503f, 504f in magnetostatics, 482–483, 633 potential function v., 502t, 504–506 symmetric charge distributions/solutions in, 500–502, 501f General impedance converters (GICs) second-order active filter sections using, 131, 131f Generalized scaling MOSFET, 121, 121t Generation dispatch, 779–782 classical lossless, 780 lossy economic, 780–781 Generators AC power system, 713 modeling armature/field windings in, 772–773, 773f modeling power system, 765–766, 766f, 772–774 protecting synchronous, 801–802 synchronous, 721 three-phase round rotor, 732–734, 732f, 733f
Genetic algorithms online fault accommodation control v., 1090 GIC biquad active filter cascade design using, 131–132, 131f element replacement method analogy with, 132–133, 133f GICs. See General impedance converters GIF. See Graphics interchange format Glint error, 683, 683f Glitching CMOS power consumption v., 271 path equalization v., 276 Global constraints DP, 880 DTW, 881 GMSK. See Gaussian MSK GOP. See Group of pictures Gorski-Popiel method LC ladder filter inductors eliminated by, 134, 134f Gradient descent algorithm online fault accommodation control v., 1090–1091 Grammar, 887 regular/finite-state v. stochastic, 888 Granularity limit cycles, 859 Graph theory basic concepts/results of, 31–36 circuit analysis with, 31–41 theorems in, 32–33, 32f Graphical user interface (GUI), 358, 359 Graphics compression schemes, 405t as media type, 405 Graphics interchange format (GIF), 405t Graphs, 31–32, 32f connected, 32f, 33 directed/oriented v. undirected/unoriented, 31, 32f Gray code encoding, 275, 313 Green’s function, 620 Green’s function approach free space of, 621 Group delay, 58–61, 60f all-pass networks v. nonideal, 72–74 flat filter, 63f low-pass filter, 61, 62f Group of pictures (GOP), 383–384, 384f Group velocities plane wave, 517, 518t waveguide, 540 GTOs. See Gate turn-off thyristors Guaranteed service, 413 GUI. See Graphical user interface Gyrators, 137, 137f
H H. See Magnetic field intensity H.323 call control models, 419–420, 421t TCP/IP multimedia support recommended by, 419–420, 419t, 420f Half-power beam width (HPBW), 556, 556f Hall effect, 486, 486f, 809t Hall voltage, 486 Hardware definition language (HDL) MEMS with, 284–285 Hardware description languages in heterogeneous system design, 217–230
Hardware-software codesign VLSI power consumption reduced by, 273–274, 273f HAs. See Home agents HDL. See Hardware definition language HDM. See Histogram distance metric HEARSAY, 368 HEMTs. See High electron mobility transistors Hexagonal sampling, 895 Hidden Markov models (HMMs), 881–887 discrete-observation/continuous-distribution, 883 LMs v. generality of, 886–887 practical features of, 885–886 recognition using continuous-observation, 884 recognition using discrete-observation, 883–884 size/topology, 886 as stochastic finite-state automaton, 881–882 structure, 882, 882f training continuous-observation, 885 training discrete-observation, 885 Hierarchical predicate transition nets (HPrTNs), 467–468, 468f hierarchical structures/PrTNs combined by, 467–471 system modeling with, 469–471 High cross-range resolution radar, 684–686 High electron mobility transistors (HEMTs) microwave, 695, 695f High-level Petri nets (HLPNs) behavior/execution of, 460–461 extension/analysis/applications of high-level, 459–473 fuzzy logic integrated with, 472–473 high-level, 459–461 syntax/semantics, 460–461 temporal logic integrated with, 461–464 transition modes/rules, 460 High-level semantic modeling intelligence-based systems in, 397–398 multimedia systems with, 397–398 multimodal probabilistic frameworks in, 397 multinet in, 397 video data and, 398 High-pass filters equivalent circuit/frequency response of, 607f LC low-pass filters transformed into, 63–64 microwave, 612, 613f High performance FORTRAN (HPF), 340 High-resolution radar, 683–684 H∞ control other optimal control methods v., 1062f H∞ control problem, 1044–1045 Histogram distance metric (HDM), 393 Histogram equalization, 897–898, 897f Histogram specification, 898 HLPNs. See High-level Petri nets HMMs. See Hidden Markov models Holes diffusion lengths of, 162 in semiconductors, 155 velocities/mobilities of, 157–162, 160f Home agents (HAs) Mobile IP, 418–419 Horn antennas, 578, 579f HPBW. See Half-power beam width HPF. See High performance FORTRAN HPrTNs. See Hierarchical predicate transition nets
Huffman codes, 904 as VLC instances, 941 Hybrids, 595–598, 596f. See also specific types of hybrids Hydrothermal coordination, 780 Hysteresis in damping structural bearings, 1135–1138, 1136f, 1137t, 1138f discrete scheduling using, 1108–1109, 1109f magnetization, 489, 489f permanent magnet, 490, 491f
I I/O. See Input/output I/O integrity in digital noise, 105, 107–108 IC design tools MEMS designed with, 284 IC engines. See Internal combustion engines ICs. See Integrated circuits IDE. See Integrated drive electronics Ideal rectifier formulas, 809, 810t Idle speed control model IC engine, 1121, 1121t IEEE 802.2, 993 IEEE 802.3, 993–995 IEEE 802.4. See Token Passing Bus Protocol IEEE 802.5. See Token Passing Ring Protocol IEEE 802.11. See Wireless LANs IETF. See Internet Engineering Task Force IF amplifiers, 675 If statements VHDL-AMS, 222 IGBTs. See Insulated gate bipolar transistors IIR filters. See Infinite impulse response filters ILP. See Integer linear programming Image coding, 901–907, 902f steps in, 902, 903f Image compression. See Image coding Image enhancement, 897–898 Image processing, 891–910 Image restoration, 898–901 Images analyzing, 908–909 features in, 908, 909f linear model for degradation of, 898, 899f nonrectangular grid sampling of, 893–895, 895f quantizing, 895–896 rectangular grid sampling of, 892–893, 893f redundant/nonredundant, 902–903, 903f, 904f sampling, 892–895 as signals, 814 transcoding, 916–917, 916f, 917f Impact ionization coefficients, 159–160, 160f semiconductor electron/hole pairs from, 159–160 Impedance bandwidth antenna, 560, 560f Impedance transformers, 591–595, 593f tapered/nonuniform, 593, 594f Impedances, 23, 24t antenna input, 554f, 559–560, 560f intrinsic, 515 LC, 65–68, 68f line-type waveguide characteristic, 551 microstrip line waveguide characteristic, 546–547
scaling, 61 series, 24 series/shunt transformer, 762–763 Smith charts determining transmission line, 534–536, 535f terminated lossless transmission line input, 530–531, 530f, 531f transformer, 716 transmission line characteristic, 526, 527f waveguide characteristic, 540 Importance hints MPEG-7 transcoding, 913 Impulse invariance method, 854 Impulse response, 821–822 analog filter, 841, 843f FIR filter, 844–845, 845f LTI, 842–843 In-place mapping intersignal, 211–212, 212f intrasignal, 211, 212f vocoder autocorrelation function, 211, 211f Incidence matrices, 34–35, 35f all-vertex, 34 Inductance, 491–494 magnet flux/flux linkage related to, 491–492 mutual, 492–494 self-, 492–494, 493t Inductor capacitor (LC) filters transformations of low-pass, 63–65 unsuitable applications of, 53 Inductors. See also Equivalent series inductors element replacement method v. grounded/floating, 133–134, 133f, 134f gyrators simulating grounded/floating, 137–138, 137f, 138f magnetic energy stored by, 494–495 microwave passive lumped element, 590, 592f nonlinear, 76 Industry standard architecture (ISA), 330 Inertial navigation systems (INSs) error estimation for, 1056–1058, 1056f Infinite impulse response (IIR) filters, 845 approximating, 854–856 realizations, 846–847, 846f, 847f, 848f Inheritance, 1162 HPrTNs modeling OOP, 470–471, 471f Input/output (I/O) CCD, 333–334 computer architecture with, 333–334 input devices, 333–334 operating systems with, 357, 358, 360t, 361 output devices, 334 registers v. clock skew scheduling, 255–256 Instruction-level optimization VLSI power consumption reduced by, 273 Instruction set architecture (ISA), 323 Instrument transformers, 791 ratio/phase mismatches in, 801, 801f settings for, 797t Insulated gate bipolar transistors (IGBTs), 172–174, 173f output characteristics of, 173–174, 173f transient operation of, 174, 174f Insulators, 739–747 contamination v. performance of, 744, 744f, 744t dimensions of, 739–740, 739f electrical/mechanical performance of, 741 grease coatings on, 745 M and E ratings of, 743–744, 743f
nonceramic/composite, 747 performance improvements in installed, 744–746 porcelain/glass/composite, 743–744 resistive/semiconducting glazed porcelain, 746–747, 746f stresses on, 740–741 types of, 739, 740f Integer checksum test in ABFT, 450–453 analytical error coverage of, 452 empirical error coverage/dynamic range in, 452–453, 453f floating point multiplier timing results, 453, 453f, 454f general theory of, 450–451 matrix multiplication with, 451–452 Integer linear programming (ILP) interconnect noise analysis using, 305 Integral equations electromagnetic, 621–624 Integrated antennas. See Planar antennas Integrated circuits (ICs) FDTD methods modeling photonic, 665–666, 666f, 667f Moore’s law describing evolution of, 263–266, 264f, 265f, 266f Integrated drive electronics (IDE), 331–332 Integrated eyeglass display MEMS in, 293, 293f Integrated Services (IntServ), 412–414 disadvantages, 414 Integrators in LC ladder filters, 135–136, 135f Integrity multimedia data, 408 Intel failure criteria, 318 Intelligent control regulators, 1094f, 1095, 1095f Intellisuite MEMS designed with, 285, 287 Inter-symbol interference (ISI), 962 Interconnect delay noise analysis with, 305 Interconnect noise analysis ASIC and, 301 buffer insertion noise minimization in, 302 capacitive coupling model in, 301, 301f case study Pentium 4 in, 305–306, 305f, 306f CFR with, 305 crosscapacitance scaling in, 305, 305f deep submicron technology and, 299–306, 300f, 301f, 302f, 303f, 304f, 305f, 306f distributed interconnect model in, 301–302 early design stage in, 304–305, 304f, 305f gate delay and, 300f ILP with, 305 interconnect delay in, 305 interconnect modeling issues for, 302 introduction to, 299–301, 300f, 309–311, 310f lumped interconnect model in, 301, 301f MAX-CFL with, 305 models, 301–302, 301f, 302f network ordering noise minimization, 304 NIC for, 301 noise minimization techniques, 302–304, 302f, 303f NTRS with, 304 repeater design methodology, 306 shield insertion noise minimization, 303–304, 303f wire aspect ratio scaling, 305, 305f
wire design methodology, 306 wire sizing noise minimization, 302 wire spacing noise minimization, 303, 303f Interfaces ATAPI, 332 computer architecture with, 329, 331–333, 333t IDE, 331–332 PS/2 port, 331 SCSI, 332 serial/parallel port, 331 Interference constructive/destructive wave, 536 Interior nodes in DiffServ Internet services model, 414 MPLS LSRs as, 415 Interior resonance problem, 623 Interleaved QSW VRM, 91–92, 91f, 92f, 93f inductance/capacitance for high frequency operation, 95, 97f integrated magnetic structure, 91, 93f Internal combustion (IC) engines idle speed control model for, 1121, 1121t nomenclature describing, 1128 SMC methodologies controlling idle speed in, 1115–1128 test cell for, 1121 International Organization for Standardization (ISO), 380 OSI, 989 International Phonetic Alphabet (IPA), 866 International Standards Organization. See International Organization for Standardization Internet adoption rates, 1013, 1013f distributed multimedia traffic requirement v., 408–416 origin, 401 proposed multimedia service models, 410–412 Internet Engineering Task Force (IETF) DiffServ Internet service model by, 414–415 IntServ Internet service model by, 412–414 TCP/IP-related standards maintained by, 991 INTERNIST, 368 Interpixel redundancy, 381 IntServ. See Integrated Services Inverse filters, 898 Inverse SAR (ISAR), 685–686 rotating object imaging in, 685–686, 685f Inverse scattering, 686–690 approximate approaches v., 687–689 problem properties/categories, 686 problem solution methods, 687–690 regularization v., 689–690 rigorous approaches v., 689–690 three-dimensional, 686–687, 687f Inverse-time property, 796 IPA. See International Phonetic Alphabet IS-95 CDMA, 1008 air interface parameters, 1008t ISA. See Industry standard architecture; Instruction set architecture ISAR. See Inverse SAR ISI. See Inter-symbol interference ISO. See International Organization for Standardization Isolators, 614, 614f Itakura distances, 878
Iteration fast decoupled power flow algorithm, 770–771 inverse scattering problem v., 689 matrix equations solved by, 627 power flow problems v. Gauss-Seidel, 769–770 power flow problems v. Newton-Raphson, 770–771
J J. See Volume current density Jacobians in DC nonlinear circuit solution, 47–48 in nonlinear circuit transient analysis, 49–50 Java in complex control system middleware, 1166–1167, 1167f expert systems using, 370 JFETs. See Junction field effect transistors Jitter, 402 limits, 406 Johnson noise. See Thermal noise Joint Photographic Experts Group (JPEG), 381–382, 405t Joshi and Gupta (1996) lemma, 1046 Joule heating, 481 Joule’s law, 481–482 JPEG. See Joint Photographic Experts Group Junction diodes, 139–142, 140f basic equations describing, 139–142 depletion capacitance of, 140–141, 140f diffusion charge/capacitance in, 141 Junction field effect transistors (JFETs), 122–123, 122f development of, 109
K K-factor, 807–808, 807t Kalman filter algorithm, 1052 scalar measurement processing equivalent for, 1055 Kalman filters as special case/realizable Wiener filters, 923 state estimator design using, 1052–1053 Kirchhoff’s Current Law (KCL), 6, 482 in electrical network graphs, 37–38, 37f in MNA, 43–44 in nonlinear circuits, 75–76 phasor circuit analysis nodes v., 24 Kirchhoff’s Voltage Law (KVL), 6 in electrical network graphs, 37–38, 37f in nonlinear circuits, 75–76 phasor circuit analysis mesh v., 24 Knowledge sources, 887
L L-D recursion. See Levinson-Durbin recursion Label edge routers (LERs) as MPLS network boundary nodes, 415 Label switching routers (LSRs) as MPLS network interior nodes, 415 Ladder simulations active filters implemented with, 129, 132–136 OTA-C filter, 138, 138f Lagrange multipliers QP problems solved using, 247, 249–250 Lange couplers, 599 Language models (LMs), 886, 887–888 n-gram, 888
purpose/structure of, 887–888 training/searching, 888 Languages natural v. formal, 862n stochastic, 888 LANs. See Local area networks Laplace transforms, 16–20, 830–831, 832t common functions having, 17–18, 18t evaluated on imaginary axis, 830, 831f properties of, 19t Laplace’s equation boundary conditions v., 506–510 eigenfunctions of, 508 as special case of Poisson’s equation, 507–508 Large-signal noise. See Digital noise Lasers optical networks using narrow band, 1015 Last-in first-out unfair algorithm, 994 Law of reflection, 521 LC filters. See Inductor capacitor filters LC ladder filters, 132, 132f LDCs. See Line-drop compensators Leakage distance, 744f increasing insulator, 746 Learning algorithms for online, 1153–1154 association/reinforcement in direct, 1151–1154, 1152f control system using online reinforcement, 1151–1159, 1152f online NDP, 1154–1156 Least squares optimization algorithms, 1071–1074 LERs. See Label edge routers Levinson-Durbin (L-D) recursion, 873, 874f Lexical constraints, 887 LFC. See Load frequency control Library cell design VLSI power consumption reduced by, 276–278 Liftering high/low-time, 876–877 Light penetration depth semiconductor, 160, 160f Likelihood functions, 924–925 Limit cycles granularity of, 859 overflow of, 858–859 Line-drop compensators (LDCs), 756–757 Linear circuit elements, 4 Linear circuits MNA equations of, 44–46, 46f relationships valid in nonlinear and, 75–76 Linear dependence clock skew, 242–246 Linear estimation in statistical signal processing, 923–924 Linear filtering, 898, 899f Linear least squares (LLS) method, 1071–1074 Linear matrix inequalities (LMIs) gain-scheduled controller synthesis v., 1111 Linear parameter varying (LPV) systems gain scheduling for, 1109–1110, 1110f Linear prediction (LP) analysis, 872–875 LP model/normal equations in, 872–873 model order/gain estimation in, 875 Linear quadratic Gaussian (LQG) control, 1061–1062 other optimal control methods v., 1062f
Linear quadratic Gaussian (LQG) problem, 1043–1044 Linear quadratic regulator (LQR) problem, 1043 Linear shift-invariant (LSI) systems, 821–824 BIBO stability in, 823 causality in, 823 frequency domain analysis of, 824–827 frequency response by, 826–827 invertibility in, 823–824 properties of, 823–824 Linear time-invariant (LTI) systems, 842–844 Lyapunov stability of, 1033 modeling, 1037–1039 passification methods for, 1046–1047, 1046f passivity in, 1045 performance analysis of, 1039–1042 random signal transmission through, 953–954, 954f stability of positive real, 1046 state-space characterization of positive real, 1045–1046 Lines of force electric field lines forming, 506 Linux, 364 LISP, 367, 368, 369 LLC. See Logical link control Lloyd-Max quantizers, 895 LLS method. See Linear least squares method LMCS-1 algorithm, 250–251, 253f, 254f, 257–260 LMCS-2 algorithm, 251, 253f, 254f, 260 LMIs. See Linear matrix inequalities LMs. See Language models LNAs. See Low-noise amplifiers LNS. See Logarithmic number system Load frequency control (LFC), 782 Load reflection coefficient, 530 Loads modeling power system, 766 Lobes. See also specific types of lobes antenna, 555–556, 556f one-dimensional array grating, 581 two-dimensional array grating, 582 Local area networks (LANs) bridges/routers in, 1001–1002 high-speed Ethernet-like, 994–995 measurable parameters for, 1002 OSI reference model for, 992–993, 993f performance of, 1001–1002 technologies in, 991–992 transmission media for, 991 wireless, 997–999 Local path constraints DP, 879 DTW, 881 Local truncation error (LTE), 50 Logarithmic number system (LNS) architecture v. power dissipation, 184 arithmetic operator complexity, 181–182, 182t, 183f encoding v. power dissipation, 184–185, 185f, 186f linear representations, 180–181 operations, 181–184 precision, 181 processor organization, 181–182, 182f for VLSI arithmetic, 179–185 word organization, 180f Logic gate level power consumption v. VLSI design at, 276
Logical link control (LLC), 993 Longest prefix matches, 415 Longitudinal/transverse components E(k) calculated using, 156 Look-ahead transformation recurrent algorithms optimized with, 940f Look-up tables (LUTs) LNS operations using partitioned, 182, 183f Loop antennas, 570t, 575–576, 575f dual fan-fields in small dipole antennas and, 576 Loop current method, 7–8, 7f, 8f Loop system of equations, 38 Loop transformations, 38 in cavity detection algorithm, 200, 200f, 201t in code rewriting for memory use, 198 Loop unfolding recurrent algorithms optimized with, 940, 940f Loops, 6 Lorentz force equation, 486 Loss factors, 781 Lossless compression, 406 data v., 406 image data v., 904 Lossy compression, 406 data v., 406 DPCM used for, 904, 905f image data v., 904 Low-level feature based indexing color in, 387–388 multimedia systems with, 386–389, 387f shape in, 389 spectral techniques for, 388 structural techniques for, 388 texture in, 388–389 Low-noise amplifiers (LNAs), 697 Low-pass filters, 61 approximating, 61–62 equivalent circuit/frequency response of, 607f ideal impulse response in, 841, 843f microwave, 609, 609f signal reconstruction using, 833, 833f transformations of LC, 63–65, 66f window method v. Saramäki windows for, 849, 850f Low-swing signaling VLSI power consumption reduced by, 276 Lozano-Leal and Joshi (1990) lemma, 1045–1046 LP analysis. See Linear prediction analysis LPV systems. See Linear parameter varying systems LQG control. See Linear quadratic Gaussian control LQG problem. See Linear quadratic Gaussian problem LQR problem. See Linear quadratic regulator problem LSI systems. See Linear shift-invariant systems LSRs. See Label switching routers LTE. See Local truncation error LTI systems. See Linear time-invariant systems Luenberger observers state estimator design using, 1052 Lumped electrical elements, 4 bipolar transistor, 148 microwave passive component, 586, 589–591, 592f Lumped interconnect model interconnect noise analysis, 301, 301f LUTs. See Look-up tables Lyapunov instability theorem, 1035
Lyapunov stability, 1031–1035 definite/unbounded continuous functions v., 1034 fault accommodation v., 1088 LTI systems v., 1033 theorems, 1034–1035 unanticipated failures v. discrete-time, 1093, 1093f, 1103
M M and E ratings. See Mechanical and electrical ratings MAA. See Memory allocation and assignment MAC protocol. See Medium Access Control protocol Machines, 721–736. See also specific types of machines classification of AC/DC, 721, 722f energy conversion by, 724–725 synchronous, 732–735 MACSYMA, 367, 368 Magic tees. See Matched hybrid tees Magnetic circuits electrical circuits’ analogous quantities v., 495t physical geometries v. equivalent, 495f, 496, 496f Magnetic disk memory, 327 Magnetic energy density, 494 Magnetic field integral equation (MFIE), 623 Magnetic field intensity (H), 482 boundary conditions, 485 Magnetic fields constitutive relationship for, 487 magnetic energy stored by, 494, 494f transverse, 519–521 Magnetic flux, 491–492 power system transformer, 762f Magnetic flux density (B), 482–483 boundary conditions, 485 Magnetic susceptibility, 487 magnetic materials having, 488–489 Magnetic tape memory, 327–328 Magnetic vector potential, 483–485 Magnetization vectors, 487 Magnetized domains, 489 Magnetizing component transformer, 716 Magnetomotive force (mmf) electric machine, 722–723, 723f electrical circuit voltage analogous to, 495t, 496 traveling waves v., 722–723 Magnetostatics, 479–497 governing equations of, 482–485 postulates, 482–483 Magnets, 479, 487–491 permanent, 489–491 Main lobes antenna, 555, 556f antenna array, 581 MANs. See Metropolitan area networks Mantissa preserving functions, 450–451 MAP equation, 923 MAP estimation. See Maximum a posteriori estimation Marchand baluns, 595, 596f Markov chains, 883 Markov parameters observer, 1070 TFMs as source of, 1074–1075 Matched filters, 930, 967–968 Matched hybrid tees, 598, 598f
Matching networks, 591–595, 593f Materials ferromagnetic, 488 hard ferromagnetic, 489 magnetic, 487–491 soft ferromagnetic, 489, 489t MATLAB, 1019–1020 Matrices, 1020–1024 addition of, 1020–1021 cofactor, 1023 column/row vector, 1021 determinants of, 1022–1023 identity/diagonal, 1021 inverses of, 626–627, 1023–1024 multiplication of, 1021 non-singular v. full rank, 1023 similarity transformations/normal forms of, 1025 singular values for, 1025–1026 Matrix algebra, 1020–1024 Matrix equations solving, 626–627 MAX-CFL. See Maximum coupling-free layout Maximum a posteriori (MAP) estimation in statistical signal processing, 922–923 Maximum coupling-free layout (MAX-CFL) interconnect noise analysis with, 305 Maximum likelihood estimate (MLE), 925 properties, 927 Maximum power transfer theorem, 13, 14f Maxwell’s equations compact form of, 639 cumulative error in grid-based algorithms for, 642 field-splitting modification of, 657–658 finite difference expressions for three-dimensional, 636–637 method of moments solving, 619–620 plane waves as simplest solution to, 513 semi-implicit approximation v., 636–637 three-dimensional case of, 633–634 in three-dimensional UPML medium, 660 time-harmonic form of, 514 TMz/TEz modes v., 634, 637–638 two-dimensional case of, 634 Maxwell’s theory, 619 MCA. See Micro channel architecture MCTs. See MOS-controlled thyristors MCU. See Minimum code units MCUs. See Multipoint control units MCV control. See Minimal cost variance control MCV problem. See Minimal cost variance problem Mean free path electron, 158 Mean square per bandwidth. See Noise power spectral density Mean square values analog noise, 101 Measurement process modules in network admission control, 411, 411f Mechanical and electrical (M and E) ratings insulator, 743–744, 743f Media resource adaptation, 915 Medium Access Control (MAC) protocol, 991–992 deficiencies, 992–993 FDDI/IEEE 802.5, 999–1000, 1000f Medium lines, 765, 765f Medusa, 359
Memory access cycles v. data path cycles, 205–206, 205f, 206f access speed increase v. Moore’s law, 202 allocation, 209, 209f caches, 324 classification, 192–193, 192f code revision exploiting organization of, 198–201 components for custom, 192–195 in computer architecture, 324–329, 324f, 325f, 326f, 327f cost model for architectures using, 202–203 costs in architectures using highly parallel, 203 custom organization impact, 206–207, 206t cycle budget v. energy cost, 203–205, 204f data layout reorganization methodology reducing, 211–212, 212f data transfer v. custom organization of, 191–214 difficulties in designing custom organization of, 207, 207f DRAM, 324f, 325–326 EPROM, 326, 326f external data access bottleneck v. organization of, 195–196, 196f flash, 326, 327f general principles of, 192, 192f hierarchical organization of off-chip/global, 195–198 high-density issues in organization of, 197–198 integration density v. time, 263–264, 265f magnetic disk, 327 magnetic tape, 327–328 main, 325 MMU with, 324 optical tape, 327–328 optimal power distribution in, 197 organization custom design, 206–210 power consumption v. hierarchical organization, 196–197 RAM, 325–326, 325f, 326f real-time data transfer bandwidth constraints in, 201–208, 208f ROM, 326–327, 326f, 327f secondary, 325 serial-access, 327–329 size reduced by data layout reorganization, 210–214 SRAM, 324f, 325, 325f Memory allocation and assignment (MAA) in custom memory organization design, 208–209 signal-to-memory assignment, 209, 209f Memory design techniques VLSI power consumption reduced by, 274 Memristors, 76 MEMS. See Micro electro mechanical systems MEMS Reliability Newsletter, 288–289, 288f MEMSCaP MEMS designed with, 284–285 Mergeability of arrays with overlapping lifetimes, 211–212 MESFETs. See Metal-semiconductor field effect transistors Meshes, 6 MESI. See Modified exclusive shared invalid (MESI) Message passing interface (MPI), 340 Metal oxide semiconductor field effect transistors (MOSFETs), 109–136 device operation, 114–117, 114f, 115f GaAs microwave, 694–695, 694f improved structures for power, 172, 172f invention of, 109 linear region in operating, 114, 115f modern, 121–122, 122f as nonlinear resistive elements, 78
operational limits of power, 171–172 power, 169–172, 169f, 170f power losses in, 89–90, 89f saturation region in operating, 115–116 scaling, 120–121, 121t short channel effects, 118–120 small signal equivalent model of GaAs, 695, 695f subthreshold region in operating, 116–117, 117f surface/buried channel, 113 switching performance of power, 170–171, 171f, 172f thermal noise currents in, 103–104, 104f in VRM buck circuits, 87, 87f Metal oxide semiconductor (MOS) capacitors, 109–113, 110f, 113f, 114f ideal, 110–112, 110f ideal v. practical, 112 inversion/depletion charge variation in, 111–112, 112f regions of operation, 112t Metal-semiconductor field effect transistors (MESFETs), 123–124, 123f, 124f Metals in semiconductors, 153–157 Method calls, 1163 Method of images conductor/source problems solved by, 507 Method of moments basic principle of, 620–621 in computational electromagnetics, 619–627 Method of superposition, 24 Methods OOP, 1162–1163 Metropolitan area networks (MANs), 991. See also Fiber Distributed Data Interface measurable parameters for, 1002 performance of, 1001–1002 MFIE. See Magnetic field integral equation Micro channel architecture (MCA), 330 Micro electro mechanical systems (MEMS) actuators with, 292–294, 293f, 294f ADXL with, 283, 288 application diversity of, 289–295, 290f, 291f, 292f, 293f, 294f, 295f applications for, 283, 283f books on, 296 bulk micromachining with, 286 CAD for, 283, 285 CADENCE with, 284 categories of, 289–290, 290f clinical medicine using, 290, 291 CMOS with, 286 commercial value of, 282, 282f data storage with, 295, 295f designing, 283–288, 284f DMD with, 283, 288–289, 292–293, 293f fabrication foundries for, 286–287, 287f HDL with, 284–285 IC design tools with, 284 integrated eyeglass display using, 293, 293f integrated systems of, 294–295, 295f Intellisuite with, 285, 287 introduction to, 281 materials in, 285–286, 285f MEMS Reliability Newsletter and, 288–289, 288f MEMSCaP with, 284–285 micrograph with, 291, 291f, 294f
Micro electro mechanical systems (MEMS) (Continued) micromachining and, 289, 617 millipede with, 295, 295f overview of, 281–283, 282f, 283f packaging, 284f, 287–288, 288f patents, 281 plastic microfluid element with, 290f pressure sensors with, 291, 291f processes in, 285–286, 285f production/utilization of, 281–296 reliability of, 288–289, 288f RF systems with, 289, 290f sensors with, 290–292, 291f, 292f simulation, 283–288, 284f structures of, 289–290, 290f structures produced for, 286 summary of, 295–296 surface micromachining with, 286 testing of, 284f, 287–288, 288f transportation industry and, 296 Micrograph electron, 291, 291f MEMS in, 291, 291f, 294f optical, 294f Micromachined components bulky/surface microwave, 617 Microprocessors power management for future generation, 85–99 voltage/current/frequency changes in, 85–86, 86f VRM integration with, 97, 99f Microprogramming ALU with, 323, 324 computer architecture with, 323–324, 324f ISA with, 323 PLA with, 323 RISC with, 323 ROM with, 323 Microrollback, 446–448 cache memory supporting, 447 in CISC processors, 448, 448f individual state register supporting, 447, 447f, 448f register file supporting, 447, 447f Microstrip patch antennas, 570t, 577–578, 578f, 700–702, 701f input impedance v. frequency for, 702, 702f Microwave circuits active, 691–705 Microwave ferroelectric devices, 617 Microwave passive components, 585–617, 586f Microwaves FDTD methods modeling penetration/coupling of, 664, 665f Middleware in complex control systems, 1165–1167, 1165f, 1168 Miller charges future microprocessors v. VRM, 96 Millipede MEMS in, 295, 295f MIMD. See Multiple instruction multiple data Minimal cost variance (MCV) control, 1062–1063 full-state feedback, 1063 open-loop, 1062–1063 other optimal control methods v., 1062f seismic protection of structures v., 1066–1067, 1066f, 1067f
Minimal cost variance (MCV) problem, 1065 Minimax method employing WLS, 852 frequency response, 854f WLS method v., 849–850, 851f Minimum code units (MCU), 381–382, 382f Minimum error criterion, 966 Minimum mean square error (MMSE) equalizers, 969 Minimum mean-squared error (MMSE) estimation in statistical signal processing, 922 Minimum phase spectra in stable speech models, 871 Minimum shift keying (MSK), 976–977, 977f data timing recovery, 980, 980f signal detector, 978f spectral density, 977, 978f synchronization, 979–980, 979f Mirror technique, 317, 317f MISD. See Multiple instruction single data Mixers, 699–700, 700f balanced, 675, 675f radar receiver, 675 MLE. See Maximum likelihood estimate mmf. See Magnetomotive force MMICs. See Monolithic microwave-ICs MMSE equalizers. See Minimum mean square error equalizers MMSE estimation. See Minimum mean-squared error estimation MNA. See Modified nodal analysis Mobility in multimedia data transmission, 402, 408 TCP/IP enhancements supporting, 418–419 Modelica, 1082 Models aberration monitoring in multinature systems, 228, 229f capacitive coupling, 301, 301f distributed interconnect, 301–302 embedded system functional unit, 225–228 functional unit environmental, 225–226 interconnect issues for, 302 interconnect noise analysis with, 301–302, 301f, 302f LTI system, 1037–1039 lumped interconnect, 301, 301f multinature systems, 224–228 partitioning system, 224 simulation results from multinature systems, 228, 229f simulators v., 219, 219f VHDL-AMS break statements v. discontinuities in, 222–223, 223f Modes cutoff frequency v. dominant, 542, 542f hybrid, 540 metallic cavity resonator degenerated, 603 reflected wave interaction, 539 waveguide degenerate, 540 waveguide fundamental, 539 waveguide operation, 540 MODFETs. See Modulation-doped field effect transistors Modified exclusive shared invalid (MESI), 337–338, 338f Modified nodal analysis (MNA), 43–46 symbols, 43f, 44f, 45t Modulation, 971–973 spread-spectrum, 1007 technologies, 971–981 Modulation-doped field effect transistors (MODFETs), 124–126, 125f
Modules vector spaces v., 1020 MOLGEN, 368 Monolithic microwave-ICs (MMICs) GaAs FETs in, 694–695 Monopole antennas, 570t, 574–575, 575f Monopoles. See Point charges Moore’s law Intel IC evolution illustrating, 263–264, 265f memory plane access speed increase v., 202 MOS capacitors. See Metal oxide semiconductor capacitors MOS-controlled thyristors (MCTs), 174–175, 175f MOSFETs. See Metal oxide semiconductor field effect transistors Motion detection, 908 Motion Picture Experts Group (MPEG), 380 compression standards provided by, 383 multimedia systems using standards from, 383–385, 384f Motors load torque v., 724, 724f Moving target indicator (MTI) radar, 680–682 MPEG. See Motion Picture Experts Group MPEG-1, 383–384, 384f GOP with, 383–384, 384f MPEG-2, 384–385 MPEG-3, 385 MPEG-7, 911–912 UMA requirements addressed by, 912–914 Variation DS, 914, 914f MPEG-21, 912 digital item adaptation, 914–915 MPI. See Message passing interface MPLS. See Multi-protocol label switching MS-DOS, 364–366, 365t, 366t batch files with, 365 introduction to, 364–365, 365t memory management, 364–365 using, 365–366, 366t MSK. See Minimum shift keying MTI radar. See Moving target indicator radar Multi-port networks, 6 Multi-protocol label switching (MPLS), 415–416 Multicasting in multimedia data transmission, 402, 407 TCP/IP enhancements supporting, 416 Multimedia, 402–408 data diversity, 402–403, 402f device constraints/preferences v., 918 network-oriented classification of, 403–404, 403f signal processing for, 911–919 synchronization, 408 traffic, 402 Multimedia networks, 401–424 error characteristics of, 410 service expectations/traffic descriptions in, 410–411 traffic/functional requirements v., 402 wireless, 999 Multimedia systems BMA for, 383 camera movement compensation in, 395–396 CBIR with, 379–398 coding redundancy in, 381 compression in, 380 DCT for, 381 DFT for, 381
  encoding in, 380–381
  error propagation in, 380
  feature-based modeling in, 396–397
  HDM in, 393
  high-dimensionality in, 390–391
  high-level semantic modeling in, 397–398
  image encoding standards for, 381–382, 382f
  image indexing/retrieval in, 386–392, 387f
  intraframe coding for, 383
  interframe coding for, 383
  interpixel redundancy in, 381
  introduction to, 379–380
  JPEG for, 381–382
  keyframe in, 395
  low bit rate communications with, 386
  low-level feature based indexing in, 386–389, 387f
  MPEG standards in, 380, 383–385, 384f
  psychovisual redundancy in, 381
  QBIC in, 391–392
  relevance feedback in, 391
  SDM in, 393
  segmentation in, 390
  spatial v. compressed domain processing in, 389–390
  storage for, 380–381
  temporal redundancy for, 383
  temporal segmentation in, 392–394, 393f
  video compression standards for, 385–386
  video conferencing, 385–386
  video encoding standards for, 382–386, 383f
  video indexing/retrieval in, 392–398, 393f
  video summarization in, 395
  wireless networks with, 386
Multinet, 397
Multiple instruction multiple data (MIMD), 336
Multiple instruction single data (MISD), 335–336
Multiplexers
  microwave, 612–613
Multiplexing, 987
Multipliers
  noise generated by, 857–858, 857f, 858f
  REMOD FD/FT in, 433–436, 434f, 435f, 436t
Multipoint control units (MCUs), 419
  H.323, 419
Multiprocessors
  architecture of, 336–337, 336f, 337f
  BSP programming for, 340
  C++ for, 339
  cache coherence in, 337–339, 338f
  CC-NUMA, 336–337, 337f
  cluster systems for, 339
  directory-based protocols for, 338–339
  energy trade-offs between concurrent tasks in, 206, 206f
  HPF programming for, 340
  introduction to, 335–336
  MESI in, 337–338, 338f
  MIMD, 336
  MISD, 335–336
  MPI for, 340
  operating systems for, 339
  recent advances in, 341
  SIMD, 335
  SISD, 335
  SMP, 336, 336f
  snoopy protocols in, 337–338, 338f
Multiprocessors (continued)
  software development for, 339–341
  summary of, 341
  tools for, 340–341
Multitasking
  soft real-time behavior from, 1167
Mutual-coupling effects
  antenna arrays v. antenna, 579, 582
MYCIN
  backward-chaining reasoning in, 373
  EMYCIN v., 368, 370
  expert systems and, 367–368, 370, 371, 373, 377
  explanations, 377
  history, 367–368
  rule-based, 371, 373, 377
N
Nasals, 868
National Electrical Manufacturers Association (NEMA)
  motor torque/speed curves, 731, 731f
National Technology Roadmap for Semiconductors (NTRS), 304
Nature
  Electrical VHDL-AMS, 223
Natures
  extending set of, 223, 223f
  VHDL-AMS terminals assigned to, 222, 222f
NC. See Node-covering
NDP. See Neural dynamic programming
NEMA. See National Electrical Manufacturers Association
NEMP. See Nuclear electromagnetic pulse
Net bumping
  dynamic NC, 439
Network functions, 19–20, 56–57, 56f, 57f
  impedance, 65–68
  realizability of, 65–68
Network theorems, 12–13
Networks. See also Nonlinear circuits; specific types of networks
  all-pass, 72–74, 73f, 74f
  architecture of computer, 989–1002
  convergence of, 1011–1014, 1013f
  elementary, 54–56
  graphs and electrical, 36–38, 37f
  impedance matching for, 591–595, 593f
  in linear circuit analysis, 6
  loop/cutset methods of analyzing, 38–41, 39f
  RC, 74
  switching techniques in, 987–988
  synthesis of, 53–74
  technologies for local, 991–1000
Neural dynamic programming (NDP)
  cart-pole balancing v., 1156–1158, 1157f, 1157t
  online learning control system based on, 1151–1159, 1152f
  pendulum swing-up/balancing v., 1158–1159, 1158f, 1158t
Neural networks. See Artificial neural networks
Newton-Raphson (NR) method
  DC solution of nonlinear circuit using, 47–49, 48t
  power flow problems solved using, 770–771
  transient analysis of nonlinear circuit using, 49–50, 50t
Newton's method, 1072
Neyman-Pearson signal detection, 929
  receiver-operation curve of, 931f
NIC. See Noise-immunity curve
No-gain circuit elements, 78
Nodal analysis. See Node voltage method
Node admittance matrices, 41
Node-covering (NC)
  FPGA FT using dynamic, 439–441
  FPGA FT using static, 438–439
Node equations, 41
Node transformations, 38
Node voltage method, 8–9, 9f
Nodes, 5–6, 6f. See also specific types of nodes
  DP original/terminal, 879
Noise. See also specific types of noise
  analog/digital system, 101–108
  bipolar transistor, 148, 150, 151f
  complementary filtering v., 1055–1056, 1056f, 1057, 1057f
  data transmission v., 986
  digital communication system, 961–963
  random processes generating, 952–953, 952f
  temperature in receiving antennas, 563–564, 563f
  temperature v. radiation-noise density, 564
  temperature v. spatial temperature distribution, 564
  transformer, 719
Noise aggressors, 101
  dynamic charge sharing by, 106–107, 107f
Noise analysis
  algorithms, 317–318, 318f
  bus-invert encoding with, 313, 313f
  C4 flip-chip with, 316, 316f
  charge leakage noise in, 311
  charge sharing noise in, 311
  circuit noise reduction techniques with, 315–317, 315f, 316f, 317f
  CMOS inverter technique with, 317, 317f
  coupling-driven signal encoding with, 314–315, 314f, 315f
  crosscapacitance noise in, 311
  deep submicron technology and, 309–318, 310f, 311f, 312f, 313f, 314f, 315f, 316f, 317f, 318f
  design flow, 318, 318f
  dual threshold technique with, 315
  dynamic threshold technique with, 315
  T0 encoding with, 313
  gated-Vdd technique with, 315, 315f
  Gray code encoding with, 313
  Intel failure criteria in, 318
  introduction to, 309–311, 310f
  mirror technique with, 317, 317f
  mutual inductance noise in, 312, 312f
  noise in, 309–310, 310f
  noise reduction techniques with, 312–317, 313f, 314f, 315f, 316f, 317f
  noise sources in, 311–312, 311f, 312f
  nomenclature in analog, 102t
  nomenclature in digital, 105t
  PMOS pull-up technique with, 316–317, 317f
  power supply noise in, 312
  process variation in, 312
  pseudo-CMOS with, 316, 316f
  reliability in, 310–311
  signal encoding techniques with, 313–315, 313f, 314f, 315f
  small-signal unit gain failure criteria in, 317–318, 318f
  SOC in, 309
  thermal effects in, 312
  twin transistor technique with, 317, 317f
  VDSM in, 309
  WZE with, 313–314, 314f
Noise-immunity curve (NIC)
  interconnect noise analysis with, 301
Noise margins
  CMOS digital, 107–108, 107f
  NMOS digital, 108
Noise power spectral density, 102
Noise victims, 101
  dynamic charge sharing by, 106–107, 107f
Nominal controllers, 1094f, 1095
Nonlinear circuit elements, 75–78
Nonlinear circuits, 75–81
  DC solution of, 47–49, 48f
  MNA equations of, 46–47, 47f
  qualitative properties of solutions to, 80–81
Nonlinear resistive elements
  multiterminal, 78, 79f
  three-terminal, 77–78, 78f
Nonradiative recombination
  via impurity (trap) levels, 161–162, 161f
Nonuniform channels
  in MOSFETs, 117–118
Norators, 76
Normal equations, 924
  autocorrelation method/lattice structures v., 873–874, 874f
  decomposition methods for covariance solution of, 874
  frame-wise, 873
  in LP analysis, 872–873
  solutions to, 873–875
Normalization
  linear network, 61
Normalized frequency, 832
Norton source transform
  voltage/current source, 135, 136f
Norton theorem
  for AC circuits, 25, 25f
NR method. See Newton-Raphson method
NTRS. See National Technology Roadmap for Semiconductors
Nuclear electromagnetic pulse (NEMP)
  FDTD methods modeling penetration/coupling of, 664
Nullators, 76
Nullity, 33
Nyquist conditions
  image aliasing v., 893
Nyquist noise. See Thermal noise
Nyquist rates, 959
Nyquist stability criterion, 1027, 1029–1031, 1029f, 1030f, 1031f, 1042
Nyquist theorem, 985
O
O-nets. See Occupying nets
Object-oriented programming (OOP)
  complex control systems using, 1162–1164
  HPrTNs modeling concepts from, 469–471
Object Request Brokers (ORBs)
  RPCs handled by, 1166
Objects
  complex control systems using OOP, 1162–1163
  HPrTNs modeling OOP, 469
  layered architecture in designing, 1164
Observation points
  integration over charge distributions from, 503, 503f
Observation probabilities
  HMM, 883
Occupied address/time domains
  in vocoder intrasignal in-place mapping, 211, 212f
Occupying nets (O-nets)
  dynamic NC, 439
  transitions to spare track segments, 439
Odd mode
  in folded dipole antennas, 573, 574f
Offset QPSK (OQPSK), 975, 975f
  signals, 975, 975f
Offstate leakage
  MOSFET, 117, 117f
OGs. See Overlap graphs
Ohm's law, 4–5, 480
  generalized, 5
  in semiconductors, 157–158
One-hot RNS (OHR)
  RNS power consumption reduced by, 189
180° hybrid rings, 597–598, 597f
One-line diagrams
  power system, 788, 789f
One-port networks, 6, 54–57
  synthesis of LC, 68f, 69–70, 70f
OOP. See Object-oriented programming
Op-amps. See Operational amplifiers
Open circuit saturation test, 734–735, 735f
Open Systems Interconnection (OSI), 989–990, 990f
  local network reference model v., 992–993, 993f
  model in complex control systems, 1164–1165
Operating systems
  API in, 357
  child process in, 356, 358
  command-based user interface in, 357–358
  concepts in, 356–358, 356f, 357f
  CTSS in, 358
  DMUI in, 358
  files in, 356–357, 357f, 360t, 361, 363, 365
  GUI in, 358, 359
  history of, 358–360
  I/O in, 357, 358, 360t, 361
  introduction to, 355–356, 356f
  Medusa, 359
  model, 360–361, 360t
  MS-DOS, 364–366, 365t
  multi-machine levels for, 360t, 361
  multiprocessor, 339
  pipe in, 357, 361
  POSIX, 359
  process in, 356, 356f, 360, 360t
  shell in, 357–358, 360t, 361, 362–363
  single machine levels for, 360–361, 360t
  StarOS, 359
  system calls in, 356
  UNIX, 358–359, 361–364, 364t
  virtual machine in, 355
  who program in, 358
Operational amplifiers (op-amps), 78, 79f
  active circuit gain from, 128–129, 128f, 129f
  dynamic range of, 128n, 129f
  idealized v. practical models of, 129
Operational simulation method
  LC ladder filter inductors eliminated by, 134–136, 134f, 135f, 136f
Operational transconductance amplifiers (OTAs)
  active filter gain obtained from, 136–138
Optical amplifiers
  in DWDM, 1014f, 1015
Optical fibers
  as dielectric circular waveguides, 549–550
Optical networks, 1014–1015
Optical tape memory, 327–328
Optonics, 667
OQPSK. See Offset QPSK
ORBs. See Object Request Brokers
Orthogonality principle, 872, 924
Oscillators
  microwave frequency, 698–699
  one-port negative resistance, 698, 698f
  two-port transistor, 698–699, 699f
OSI. See Open Systems Interconnection
OTA-C filters. See Transconductance-C filters
OTAs. See Operational transconductance amplifiers
Output equations, 21
Overflow limit cycles, 858–859
Overlap graphs (OGs)
  FPGA circuit routing represented by, 440, 440f
Oversampling, 834, 835f
P
P. See Polarization
P-i-N diodes, 164–165
  switching characteristics of, 165
Packet classifiers, 413
Packet dropping
  proposed Internet service models using, 412
Packet marking
  proposed Internet service models using, 412
Packet processing delay
  real-time multimedia v., 408–409
Packet schedulers, 413
Packet switching, 988
Packet transmission delay
  real-time multimedia v., 409
PAM. See Pulse amplitude modulation
Parallel connections, 11, 11f
Parallel harness, 348
Parallel-line couplers, 598–599, 598f
  physical realizations of, 599f
Parallel port, 331, 333t
Parallelism
  VLSI power consumption reduced by, 274, 275f
PARC. See Xerox Palo Alto Research Center
Pareto curves
  BTPC/cavity detector memory organization using, 210, 210f
  cycle budget/memory cost balanced using, 203–206, 204f
Parity checking, 986, 987t
Partial linearization
  VFS controller design using, 1134–1135, 1135f, 1136f
PAs. See Power amplifiers
Passbands, 127
  LSI, 827
Passive circuit elements, 4–5, 78
  filters built with, 54–56
  graphical representations of, 5t, 43f, 45t, 54f
Passive components, 585–586
  microwave, 585–617
Passivity
  dynamic systems controlling, 1045–1047
Patch resonators, 603–604, 604f
Patents
  MEMS, 281
Paths, 32
  directed, 32
  (directed) circuits represented by, 32–33, 33f
Pattern matching
  speech processing, 877–879
Pattern recognition algorithms, 908
Pauli exclusion principle
  energy bands determined by, 153, 153f
Pause/resume
  RTSP supporting, 418
Payload identifier
  RTP packet, 417
PCI. See Peripheral component interconnect
PCM. See Pulse code modulation
pdfs. See Probability density functions
PDPs. See Power-delay products
PECs. See Perfect electric conductors
Pendulum swing-up/balancing task
  NDP controller implementation on, 1158–1159, 1158f, 1158t
Pentium 4
  crosscapacitance scaling, 305, 305f
  interconnect delay, 305
  noise analysis case study, 305–306, 305f, 306f
  repeater design methodology, 306
  wire aspect ratio scaling, 305, 305f
  wire design methodology, 306
Per hop behaviors (PHBs), 414–415
  AF/EF, 415
Perfect electric conductors (PECs), 506
  2D/3D configurations of, 507, 507f
Perfectly matched layers (PMLs). See also specific types of PMLs
  ABCs v., 655–663
  anisotropic medium absorbing, 660–662
  continuous space v., 662
  discrete space v., 662–663
  discretization errors in, 663
  grading loss parameters of, 662–663
  history of ABCs used with, 657
  theoretical performance of, 662–663
Periods
  signal, 951
Peripheral component interconnect (PCI), 331, 333t
Permanent capacitor motors, 736
Permeability
  ferrite, 613
  free space, 487, 488f
  magnetic, 483
  magnetic material, 488–489, 488t
  relative, 487
Permittivity (ε)
  media, 504–505
  relative, 505
Pervasive computing
  UMA v., 911
Petri nets. See also High-level Petri nets
  Z integrated with, 464–467
Phase-angle regulators, 717, 718f
Phase conductors, 764, 764f
Phase margin, 1031, 1031f
Phase response, 61
  group delay preferred to, 58
Phase shift keying (PSK), 972, 973–976, 984, 984f. See also specific types of PSK
  synchronization, 979, 979f
Phase shifters
  ferrite, 615
  ferroelectric materials in, 617
Phase velocities
  anisotropy of numerical, 640–643
  lossless transmission line, 528
  plane wave, 517, 518t
  sample values of numerical, 640–643, 641f, 642f
  waveguide, 540
  Yee algorithm calculating, 645–646, 646f, 647f
Phasor circuits, 24
  analyzing, 24
  Thévenin/Norton equivalents of linear, 25, 25f
Phasor diagrams, 23, 23f
Phasors
  in AC circuit analysis, 23, 23t
  circuit element voltage-current relationships using, 23, 23t
PHBs. See Per hop behaviors
Phonemes, 864–869
  classification of, 866–868
  continuant/noncontinuant, 866
Phonemic transcription, 864
Phones, 864
Phonetics, 864
Physical level
  of VLSI design v. power consumption, 279, 279f
π/4 differential PSK, 975–976, 976f
  phase transitions, 976, 976t
  signal constellation, 975–976, 976f
Pinch-off
  in MOSFET channel length modulation, 118–119
  in MOSFET DIBL, 119n
  MOSFET inversion layer, 115–116, 115f
Pink noise, 102, 102f, 104
Pipelining
  VLSI power consumption reduced by, 274
Pitch
  speech, 862–863, 863n
PLA. See Programmable logic array
Planar antennas, 700–705, 705t
Planar resonators, 603–604, 604f
Planck's constant, 156
Plane waves, 513–514
  antenna receivers v. incident, 561–562, 561f
  antenna scatterers v. incident, 567–568, 568f
  basic properties of, 514–516, 516t
  homogeneous/uniform, 515–516, 516t
  multilayer structures v., 520f
  phase/attenuation vectors of, 516, 516t
  propagation of homogeneous, 516–518
  propagation of polarized, 518–519
  propagation/reflection of, 513–524
  properties calculation example for, 522–524, 523f
  TEM waves v., 525
  two-region reflection/transmission of, 520f, 521–522
Plants
  TSP framework/Volterra representation of, 1132–1133
Plastic microfluid element
  MEMS in, 290f
Plosives, 864, 868
PMDs. See Polynomial matrix descriptions
PMLs. See Perfectly matched layers
PMOS pull-up technique, 316–317, 317f
PNG. See Portable network graphics
Point charges
  electric field surrounding, 501, 503
Poisson's equation, 507
Polarization ellipse, 518–519, 519f, 557–558, 557f
  axial ratio, 518, 519f, 557–558
Polarization mismatch factor, 563
  in Friis transmission formula, 567
Polarization (P)
  D/E/ε v., 509–510
  example calculation of plane wave, 522–524, 523f
  linear/circular, 519
  right/left-handed elliptical, 518–519
Polarization unit vectors
  antenna radiation, 556, 557f
  antenna radiation receive, 562
Poles, 57
Polymorphism, 1162
  HPrTNs modeling OOP, 471, 471f
Polynomial matrix descriptions (PMDs), 1037
Polynomials
  algebra of, 1024
  degree/order of, 1024
Polyphase induction motors. See Three-phase induction motors
Popcorn noise
  in pink noise, 104
Portable network graphics (PNG), 405t
Ports
  hybrid/directional coupler, 595–596, 596f
  network, 6
  VHDL, 222, 222t
POSIX, 359
Possibility distributions
  fuzzy logic, 472, 472f
Post-failure control actions, 1094f, 1095, 1095f
Potential difference. See Voltage
Potential transformers (PTs), 791
Power, 4, 26, 481–482
  antennas radiating, 558–559
  antennas receiving, 562–563
  average, 27–28
  consumption, 271
  energy v., 271–272
  instantaneous, 27–28
  intercepted, 568
  quality of electric, 805–809, 806t
  radiated by antennas, 561
  reactive, 29
  rectangular waveguide, 542–543
  three-phase, 787
Power amplifiers (PAs), 697–698, 697f
Power components
  capacitance v. dynamic, 271
  dynamic/short-circuit/static, 266, 268f
Power-delay products (PDPs)
  as VLSI optimization metrics, 272
Power density
  antenna radiation, 555
Power dissipation
  dynamic, 266–268, 268f
  short circuit, 268–269, 269f
  static, 269–270, 269f
  trends in VLSI, 264–266, 266f, 267f
Power dividers/combiners, 595, 600–601
  non-Wilkinson, 601
Power factors, 29, 29t, 806–807, 807t
Power flow
  optimal formulation of, 781–782
  orthogonality of plane wave complex, 522, 522f
  plane wave complex, 515, 516t
  problem example, 766–768, 766f, 767f, 768f
  problem formulation, 768–769
  steady-state analysis of, 766–772
Power grids
  North American/West European, 711, 711f
Power management
  for future generation microprocessors, 85–99
  VLSI power consumption reduced by dynamic, 274
Power spectral densities (psds), 954–956
  input/output, 955–956, 955f, 956f
  properties of, 955
Power supplies. See Voltage regulator modules
Power systems, 707–810
  analysis of, 761–778
  architecture of, 86, 86f
  component models in analysis of, 761–766, 772–774
  components of, 713–714, 788–789
  dynamic analysis of, 772–778
  industry business structure for, 710–711, 710f
  operation/control of, 779–785
  protection/control of, 714
  protection of, 787–803
  separate control areas in, 784, 784f
  steady-state analysis of, 761–772
  three-phase AC, 709–711
  two-wire v. three-wire AC, 709
Power transformers, 800
  current differentials in, 800–801, 800f, 801f
  protecting, 799–801
Poynting vector formulation
  rectangular waveguide power calculated using, 542–543
Pragmatics, 887–888
Preambles
  TDMA, 1006
PRF. See Pulse Doppler radar
Principal value integral, 622
Probability density functions (pdfs), 952
Procedural statements
  VHDL-AMS, 222
Process, 356, 356f, 360, 360t
Processor arrays
  design methodologies for, 935–936
  implementation tools for, 944–946
  systolic, 934, 935
Programmable logic array (PLA), 323
PROLOG, 369
Propagation
  direction, 539
  plane wave, 513–524
Propagation constants
  circular metal waveguide cutoff, 544–545
  line-type waveguide, 551
Propagation delay
  real-time multimedia v., 409
PROSPECTOR, 369
Protective relays, 789
PS/2 port, 331
psds. See Power spectral densities
Pseudo-CMOS, 316, 316f
Pseudo-inverse filters
  blur v., 899, 900f
  inverse filters v., 898–899
PSK. See Phase shift keying
Psychovisual redundancy, 381
PTs. See Potential transformers
PUFF, 368, 369
Pulse amplitude modulation (PAM), 958, 960–961, 961f
Pulse code modulation (PCM), 984–985, 984f
Pulse compression
  high-resolution radar, 684
Pulse Doppler radar (PRF), 682
Pulses
  integration of radar, 678
  numerical instability of 1D Gaussian, 652–655, 653f, 654f
  switching laser, 666–667, 668f
  Yee algorithm v. propagation of rectangular, 646–647, 648f, 649f
  Yee algorithm v. radially propagating, 647–648, 650f
Punchthrough currents
  in CMOS power dissipation, 270
  junction diode, 140–141
  MOSFET, 120, 120f
PZ nets, 465–466, 465f
  analysis of, 466–467
  formalizing/proving invariant properties in, 466–467
  Petri nets/Z integrated by, 464–467
Q
QBIC. See Query by image content
QoS. See Quality of service
QP. See Quadratic programming
QPSK. See Quadrature PSK
QSW circuits. See Quasi-square-wave circuits
Quadratic programming (QP)
  algorithm derivation, 242–249
  clock scheduling algorithm CSD, 251–253, 252f, 253f, 254f, 260–261
  clock scheduling algorithm LMCS-1, 250–251, 253f, 254f, 257–260
  clock scheduling algorithm LMCS-2, 251, 253f, 254f, 260
  clock scheduling as problem in, 241–242, 246–249
  computational analysis of clock skew scheduling by, 249–253
  practical considerations in clock skew scheduling by, 249–256
  results of clock scheduling using, 256–257, 258t, 259f
Quadrature 90° hybrids, 596–597, 597f
  closed-form design equations for, 597, 597t
Quadrature PSK (QPSK), 974–975, 974f
  π/4 differential, 975–976, 976f
  signal constellation, 974, 975f
  signals, 974, 974f
  spectral density, 978f
Quality factors
  resonator loaded/external, 602
Quality of service (QoS)
  architecture for 3G cellular systems, 423–424, 423f
  distribution system, 755–759
  guarantees in communication networks, 402
  middle event scheduling v., 1168
  UMA v., 911
Quantities, 222t
  across/through, 222, 222f
  classification of VHDL-AMS, 220–222, 221f
Quantization noise, 857–858, 857f, 858f, 961–962
Quantizers
  reconstruction/decision levels of, 895, 896f
  step sizes of, 895
Quantum efficiency, 160–161
Quarter-wave impedance transformers, 531, 592–593, 594f
Quasi-constant voltage scaling
  MOSFET, 121, 121t
Quasi-square-wave (QSW) circuits
  efficiency of, 91, 91f
  in VRMs, 90–92, 90f, 91f
Quasi-Yagi antennas, 703–705, 704f
  input return loss/radiation patterns for, 703–704, 704f
Quefrency, 876
Query by image content (QBIC), 391–392
Queuing
  priority/weighted fair, 412
R
R/W access. See Read/write access
Radar cross sections (RCSs), 672–674
  enhancement/reduction of, 674
  fluctuation of, 678–679
  normalized, 677–678
  scattering antenna, 568–569
  simple object, 672–674, 673f, 673t
Radar displays, 675–676
Radar equation, 672
Radar receivers, 674–675
  noise figure/temperature in, 674–675
  noise in, 674
Radar transmitters, 674
Radars, 671–690. See also specific types of radars
  parameters of pulsed, 672
  target detection using, 678–679, 678f
Radiation efficiency
  antenna, 558–559
Radiative band-to-impurity recombination, 161, 161f
Radio frequency (RF)
  communication using antennas, 553
  MEMS switches for, 289
Radios
  embedded, 999
Rahmonics, 876
RAM. See Random-access memory
Random-access memory (RAM). See also specific types of RAM
  computer systems with, 325–326, 325f, 326f
  DRAM, 324f, 325–326
  FDTDs exploiting increasing, 630
  organization, 194–195, 194f, 195f
  specialized/application-specific, 195f
  SRAM, 324f, 325, 325f
  synchronization/access times for high-speed, 197, 197f
Random processes
  noise generation by, 952–953, 952f
Range
  LNS representation of number, 180–181, 181t
Rank, 33
Rat-race couplers. See 180° hybrid rings
RC. See Configurable computing
RC networks. See Resistor capacitor networks
RCSs. See Radar cross sections
Reactive capability curves
  synchronous machine, 733–734, 735f
Read-only memory (ROM)
  computer systems with, 326–327, 326f, 327f
  erasable programmable, 326, 326f
  flash, 326, 327f
Read/write (R/W) access
  code rewritten to optimize, 198–201
  memory, 192–193, 192f
Real-time computing
  nondeterministic events prompting research in, 1167
Real-time intolerant (RTI) applications, 405
Real-Time Protocol (RTP)
  multimedia session control using, 417
Real-Time Streaming Protocol (RTSP)
  multimedia session control using, 418
Real-time tolerant (RTT) applications, 405
Receive voltage vectors
  antenna, 561f, 562
Recognition phases
  HMM, 882
Recombination processes
  semiconductor electron-hole pair, 161–162, 161f
Reconfigurable computing. See Configurable computing
Reconfiguration state graph (RSG), 346
Rectifiers
  voltage/current interrelationships of idealized, 809, 810t
Recurrent algorithms
  DSP using, 940
Reduced instruction set computing (RISC), 323
Redundancy
  in arithmetic circuit FD/FT, 431–432
  FT/error detection based on, 429
Redundant spurious switching activity (RSSA)
  as node switching activity component, 271
  path equalization v., 276
Reflection
  circular optical fiber waveguide internal, 549–550
Reflection coefficients
  angle of, 536
  passive microwave component, 586
  Smith charts determining transmission line, 534–536, 535f
Region of convergence, 18
Regional approach
  junction diode behavior simplified by, 139
Register files (regfiles)
  local memory organization and, 193, 193f
Regulating transformers, 758
Relay reach, 790
Relaying systems, 791–796
  components of, 791–793, 791f
  principles of, 793–795
Relaying zones, 789
  forward overreaching/backward reverse, 794, 794f, 799
Relays
  breaker failure, 793
  instantaneous overcurrent, 793
  operating characteristics of power system, 792, 792f
  operating criteria for power system, 795–796
Reliability
  computer/digital system, 427–428
  in dependability, 428
  insulators v. power system, 741–742
  in multimedia data transmission, 402
Reliability regions, 711, 711f
Reluctance
  electrical circuit resistance analogous to, 495t, 496
REMOD.
See Reprocessing with micro delays
Remote procedure calls (RPCs)
  method request transparency through, 1165
Replacement sequences
  FPGA static NC, 438
Replication property
  lossless transmission line analysis simplified by, 530
Repositioning playback
  RTSP supporting, 418
Representation, 370
Reprocessing with micro delays (REMOD)
  arithmetic circuit FD/FT using, 432–436
  chip yield increased by, 436, 436f
  independent/dependent cell arrays in, 432–433, 433f
  original v. checking computations in, 432
Residue number system (RNS)
  architectures, 187–188, 188f
  operations, 186–187
  power dissipation, 189
  signal activity for Gaussian input, 189, 189f
  VLSI adder efficiency, 188
  VLSI arithmetic, 185–189
Resistances, 481, 481f. See also Equivalent series resistors; Resistors
  looking-in, 12
  nonlinear, 76, 77f
  parallel, 11, 11f
  series, 10, 10f, 105–106, 106f
  static conduction current field, 512
Resistor capacitor (RC) networks, 74
  complex poles in active, 128–129, 128f, 129f
Resistor inductor capacitor (RLC) prototype
  second-order filter design using, 131, 131f
Resistors
  microwave passive lumped element, 591, 592f
  noisy/noiseless, 692, 692f
Resolution
  in high-resolution radar, 683–684
Resonance
  in circuit impedance frequency response, 128
Resonance isolators, 614, 614f
Resonant frequency
  antenna, 559–560, 560f
Resonators, 601–606, 601f. See also specific types of resonators
  ferroelectric materials in, 617
Resource Reservation Protocol (RSVP)
  in IntServ Internet service model, 413, 413f
Reverse biased regions, 76
Reverse recovery process
  P-i-N diode, 165, 165f
RF. See Radio frequency
Riccati equation, 1062, 1110–1111
  forward, 1062
Rings
  modules over, 1020
RISC. See Reduced instruction set computing
Risk-sensitive control, 1063–1064
  cost-cumulant control v., 1064–1065
  other optimal control methods v., 1062f
  satellite attitude maneuver v., 1065–1066, 1065f, 1066f
RLC prototype. See Resistor inductor capacitor prototype
RMS values. See Root mean square values
RNS. See Residue number system
Rollback. See also Microrollback
  distance/range, 446
ROM.
See Read-only memory
Room temperature vulcanized (RTV) coatings, 745–746
  leakage current v., 745f
Root mean square (RMS) values
  AC voltage/current, 28
  analog noise, 101
Routers
  LAN, 1001–1002
Routh's stability criterion, 1027, 1028
  procedure for, 1028–1029
Routing
  fixed/source, 1001
  policy/constraint-based, 412
  QoS-based, 412, 412f
Routing/queuing delay
  real-time multimedia v., 409–410
RPCs. See Remote procedure calls
RSG. See Reconfiguration state graph
Rspec. See Service request specification
RSSA. See Redundant spurious switching activity
RSVP. See Resource Reservation Protocol
RTI applications. See Real-time intolerant applications
RTP. See Real-Time Protocol
RTR. See Runtime reconfigurability
RTSP. See Real-Time Streaming Protocol
RTT applications. See Real-time tolerant applications
RTV coatings. See Room temperature vulcanized coatings
Runtime reconfigurability (RTR)
  configurable computing with, 343, 344, 344f, 345–346, 345f, 350, 351
S
Safety
  in dependability, 428
SALICIDE. See Self-aligned silicide
Sample resistances, 158
Sampling, 831–834
  analog signals, 840–841, 840f, 841f
  continuous-time signals, 831–833, 833f, 841, 842f
  in digital communication systems, 958–959, 959f
  image, 892–895
  instantaneous, 958
  intervals/frequencies, 815, 831
  rate v. digital communication system aliasing, 959–960, 960f
  rate v. Nyquist rate, 834, 960
  under/over-, 834, 834f, 835f
Sampling theorem, 840–841
SAR. See Synthetic aperture radar
Saramäki windows, 849, 850f
Scalar equations
  electromagnetic, 621–622
Scattering
  ionized impurity/neutral impurity/lattice vibration, 158
SCBD. See Storage cycle budget distribution
Schedule error, 785
Scheduling
  in gain-scheduled controllers, 1107–1113
Schottky diodes, 165–166, 166f
Schottky junctions
  MESFET, 123–124
Schwarz's inequality, 967
SCMOS. See Static CMOS
SCRs. See Space charge regions
SCSI. See Small computer system interface
SDM. See Spatial distance metric
SDP. See Session description protocol
SDRAM. See Synchronous DRAM
Sea of accelerators, 348
Security
  in multimedia data transmission, 402, 408
  relay operation, 795
  TCP/IP enhancements supporting, 418
Segmentation algorithms, 908
Selections
  MPEG-21, 915
  multimedia content, 917–919, 919f, 919t
  optimizing multimedia, 918–919
  variation descriptions v., 918
Selectivity
  relay operation, 795–796
Self-aligned polysilicon gate technology
  in MOSFET junction region alignment, 121
Self-aligned silicide (SALICIDE)
  in modern MOSFET fabrication, 121–122
Self-conflicts
  in conflict graphs, 208
Self-loops, 31, 32f
Semantics, 887
Semiconductors, 153–162
  breakdown voltage in, 163–164
  chemical bonds/crystal structure in, 154–155, 154f, 155f
  history of, 153
  important materials for, 162
  materials for power, 175–176
  n/p-type, 157
  optical properties of, 160–162, 160f
  parameters of important, 159t
  for power electronics applications, 163–176
Sensors
  MEMS in, 290–292, 291f, 292f
  OOP paradigm v. flight control, 1162–1163, 1163f
  pressure, 291, 291f
  simple architecture of temperature, 226–227, 226f
Sequence numbers
  RTP header, 417
Sequentially adjacent pairs
  of signal path end registers, 232
Serial-access memory, 327–329
Serial port, 331, 333t
Series connections, 10, 10f
Series resistances
  in digital noise, 105–106, 106f
  junction diode, 140f, 142
Service request specification (Rspec), 413
Session description protocol (SDP)
  in TCP/IP distributed multimedia support, 416–417
Session Initiation Protocol (SIP), 421–422, 1012f, 1014–1013
  commands/responses, 421, 422t
  example, 422, 422f
  message format, 421–422, 422f, 1012–1013
  stages, 421
Session layers
  OSI model, 417
Session management
  in multimedia data transmission, 402, 407–408
  TCP/IP enhancements supporting, 416–418
SFGs. See Signal flow graphs
Shaded-pole motors, 736
Shannon theorem, 985
Shell
  expert systems, 370
  operating systems, 357–358, 360t, 361
  UNIX, 361, 362–363
Shell-form transformers, 718–719, 719f
Short circuits
  protecting AC power systems from, 714
Short lines, 764–765, 765f
Shot noise
  active microwave circuit, 692
  Norton equivalent current for, 104
  in white noise, 102, 103t
Si. See Silicon
Side lobe level (SLL)
  antenna, 556, 556f
  antenna array, 580
Side lobes
  antenna, 556, 556f
Signal attributes
  VHDL-AMS, 222, 222t
Signal detection
  Gaussian noise v., 929–930, 931f
  statistical, 927–930
  types of errors in, 928
Signal flow graphs (SFGs), 938, 939f. See also Operational simulation method
  in algorithm optimization, 940–941
  cut-sets/retiming v., 941
Signal paths, 232, 232f
Signal processing, 811–946
  homomorphic, 875
  interpolation/decimation in, 834
  in multimedia systems, 911–919
  statistical, 921–931
  VLSI, 933–946
Signal processing systems, 819–824, 840, 840f
  causality/stability in, 820–821, 823
  discrete/continuous-time, 819–820, 819f
  invertible, 821
  linear, 820
  LSI, 821–824
  LTI, 842–844
  properties of, 820–821
Signal-to-noise ratio (SNR), 963
Signals, 814–819, 951. See also specific types of signals
  baseband/bandpass, 966
  basic, 816–818
  bit error performance of unipolar/bipolar, 968, 968f
  causal/noncausal/anticausal, 819
  classification of, 818–819
  continuous-time (analog), 814–815, 815f
  convolved, 821–822, 822f
  digital, 815, 816f
  digital/analog, 983
  discrete-time (sampled data), 815, 816f
  DTFTs of, 824–826, 824f, 825f
  energy/power of, 951–952
  error probability v. detecting binary, 966–967, 966f
  finite/infinite-extent, 819
  Gaussian noise v. detecting binary, 965–967, 966f, 972–973, 972f
  periodic/non-periodic, 951
  quantization of, 834
  quantized continuous-time, 815, 815f
  random/deterministic, 951
  reconstructing, 833–834, 840–841
  signal processing v. input/output, 819
  statistical approaches v. detecting, 927–930
Signals, 814–819, 951. (Continued) transmission of digital, 965–970 types/properties/processes of, 951–956 z-transform of discrete-time, 828 Signatures, 1085–1103 CFG node assignments of assigned/derived, 443–444 WD CFC checking block, 445–446, 445f, 446f Z schema, 464 Silicon (Si) energy gap, 154 tetrahedral coordination in, 154, 154f SIMD. See Single instruction multiple data Simple architecture in embedded systems modeling, 224 temperature sensor modeled using, 226–227, 226f Simulated annealing online fault accommodation control v., 1090 Simulation component connection model v., 1082 Simulators models v., 219, 219f Single-amplifier biquad active filter cascade design using, 132, 132f Single instruction multiple data (SIMD), 335 Single instruction single data (SISD), 335 Single-mode guide operation waveguide higher operating frequency from, 539 Single-phase induction machines, 735–736 Singular value decomposition (SVD), 1025–1026 SIP. See Session Initiation Protocol SISD. See Single instruction single data Skew basis spanning tree, 243–244 unconstrained, 253–255 Sliding mode control (SMC) delayed systems v., 1116–1118 point-delayed systems v., 1121–1125, 1122f regulating engine idle speed, 1115–1128 transport delay v. practical applications, 1115–1116 Sliding surface function, 1091–1093 bound of, 1092–1093, 1093f Sliding surfaces SMC for delayed systems v., 1117–1118 SLL. See Side lobe level Slot antennas, 570t, 576–577, 576f, 702–703, 703f small, 577 SLU. See Swappable logic unit Small computer system interface (SCSI), 332 Small gain theorem, 1042 Small-signal capacitance in MOS capacitors, 112, 113f Small-signal noise. See Analog noise Small-signal unit gain failure criteria, 317–318, 318f SMART. See Support Management Automated Reasoning Technology Smart power technologies, 175 SMC. See Sliding mode control Smith charts, 534–536, 535f impedance transformer/matching network design using, 592 replication property in designing, 530 SMLs. See Specified mechanical loads SMPs. See Symmetric multiprocessors Snell’s law, 521 Snoopy protocols multiprocessors with, 337–338, 338f
SNR. See Signal-to-noise ratio SOC. See System-on-a-chip Softswitch networks, 1012f, 1013 Software development multiprocessor, 339–341 Solid angle antenna power per, 555 Solitons light switching using, 666–667, 668f Source junctions MOSFET, 113–114, 113f Source transformations, 12, 13f Sources, 4 graphical representations of dependent, 5t, 45t graphical representations of independent, 4f, 44f, 45t, 54f noise modeled using independent, 102–104, 103f, 104f nonlinear resistors including independent, 76 two-port networks representing dependent, 54–55 values of dependent, 4 Space charge regions (SCRs) junction diode, 139–141, 140f Space-time mapping DSP nested do-loop algorithm v., 936–939 one-dimensional, 939 Spatial distance metric (SDM), 393 Spatial resolution hints MPEG-7 transcoding, 913 Specified mechanical loads (SMLs), 743, 743f Spectral magnitude in speech modeling, 871 Speech low-pass pulse train in voiced, 869–870, 870f parametric/feature-based representations of, 872 prosodic features of, 868–869 Speech processing methods/models/algorithms, 861–889 security/forensics applications of, 888 specialized methods/algorithms v., 880–888 traditional search/pattern matching algorithms v., 877–880 Speech production discrete-time model of, 869–871, 869f, 871f human system for, 862, 863f modeling, 862–871 Speech recognition continuous/connected, 880 Speech sounds voiced/unvoiced/mixed, 862–864, 865f–866f, 867f–868f Speed relay operation, 795, 802 Speed factor, 445 Speed regulation, 782–784, 783f SPICE, 80 Spiral plate antennas, 579f frequency independence of, 578 Split-phase motors, 735–736, 736f performance of, 736f Spot networks, 754 Squirrel cage induction motors, 728, 728f SRAM. See Static RAM SSRCs. See Synchronization sources Stability. See also specific types of stability analysis of power system, 774–778 control system, 1027–1035
exponential, 1033 marginal, 1028 network conditional/unconditional, 697 numerical integration v. analysis of transient, 776 positive real LTI, 1046 robust, 1039f, 1042–1043, 1042f, 1043f small-signal power system, 774–775, 774f, 775f theorems, 1042 transient power system, 774f, 775–778, 775f, 776f unanticipated failures v. online system, 1089–1093 Standing wave ratios (SWRs), 532 Standing waves complete, 533 lossless transmission line, 531–533, 532f, 533f traveling v., 722–723, 723f Star-delta transformation, 11–12, 11f, 11t StarOS, 359 STARTAP, 403 FEC/UDP in, 403 State estimating system, 1049–1058 recursive estimation of, 1050–1051 State dynamics equations, 21 State estimators, 1050–1051, 1051f computational latency v., 1054–1055 design approaches for, 1051–1053 implementation issues v., 1054–1056 performance analysis for, 1053–1054 State-space representations, 1049–1050 continuous/discrete-time, 1049 in system realization, 1074–1075 State transitions HMM, 882–883 State variable analysis, 20–22 State variable equations matrix representation of, 21–22, 21f normal form of, 21 solving, 22 Static allocation vocoder, 211, 212f Static CMOS (SCMOS) VLSI power consumption reduced by, 278–279, 278f Static limits, 767–768 Static RAM (SRAM), 324f, 325, 325f power consumption by high-speed, 196 synchronous clocked, 197, 197f Static windowed allocation vocoder, 211, 212f Stationary processes, 952 Steady state response, 16 Steady-state short circuit test, 734–735, 735f Stochastic approximation algorithms recursive, 1154–1155 Stopbands, 127 LSI, 827 Storage cycle budget distribution (SCBD) in custom memory organization design, 207–208, 208f Streaming audio, 405 Strength LNS reducing arithmetic operator, 181 Strict feedback form, 1139 Strong inversion in MOS capacitors, 111, 111n, 112n Stubs
impedance matching network design with, 592, 594f transmission line, 587, 588f Stuck-at faults, 430–431 Subclasses, 1162 Suboptimal gain sequences error covariance computation v., 1053–1054 Substations, 714, 749–752, 788 breaker-and-a-half, 788, 788f schemes for, 750–752, 751f–752f Subthreshold swing MOSFET, 116, 117f Sufficient statistics, 925–926 Super loop method, 8, 8f Super loops, 8, 8f Super nodes, 9, 10f Superclasses, 1162 Superconducting components microwave, 617 Superposition theorem, 12–13, 14f Supply voltage level (Vdd) scaling v. VLSI technology advances, 279, 279f as VLSI power consumption parameter, 270, 276, 277f Support Management Automated Reasoning Technology (SMART), 375 Surface micromachining MEMS produced with, 286 Surface recombination, 161f SVD. See Singular value decomposition Swappable logic unit (SLU), 348 Switching in data communication, 987–988 Switching frequencies CMOS power consumption v., 271 Switchyards, 714 SWRs. See Standing wave ratios Symbols in digital communication systems, 957, 964 in LMs, 887 Symmetric multiprocessors (SMPs), 336, 336f Synchronization, 979–980 bit, 980, 980f MSK, 979–980, 979f PSK, 979, 979f word, 980, 980f Synchronization sources (SSRCs) multicast frame originator, 417 Synchronous digital systems, 232, 232f clock scheduling in, 235–238 graphical models of, 233–235, 234f Synchronous DRAM (SDRAM), 202–203, 202f theoretical/actual bandwidth, 195–196, 196f Syntactic constraints, 887 Syntax speech, 886 Synthetic aperture radar (SAR), 684–685, 684f motion compensation, 685 unfocused/SAR focused, 685 System calls, 356 System identification problems component connection model v., 1082 fault analysis v., 1082 System level VLSI design v. power consumption, 272–274 System-on-a-chip (SOC) noise analysis for, 309
Systems modeling interconnected, 1079–1083 signal processing, 819–824 state of, 1049
T T-DAGs. See Transition directed-acyclic graphs T-pi transformation. See Star-delta transformation Tap changers, 758 Tapered slot antennas (TSAs), 703, 703f Target delays in I/O register clock delay requirements, 256 Target skews in I/O register clock delay requirements, 256 TCP/IP. See Transport Control Protocol/Internet Protocol TDMA. See Time-division multiple access TE waves. See Transverse electric waves Technology scaling, 299, 301 Tellegen’s theorem, 75–76 TEM waves. See Transverse electromagnetic waves Temporal logic integrated with HLPNs, 461–464 linear time first-order, 461–462 Temporal predicate transition nets (TPrTNs), 462 analysis of, 463–464 deadlocks/livelocks in, 463–464 example, 462–463 HLPNs/temporal logic integrated by, 461–464 Terminals H.323, 419 network, 6 VHDL-AMS quantities between, 222, 222t Testing functions, 626 Tetrahedral coordination in chemical bonds, 154, 154f Text compression schemes, 404t as media type, 404 Texture matching algorithms, 908 TEz mode. See Transverse-electric mode with respect to z mode TFMs. See Transfer function matrices THD. See Total harmonic distortion Thermal noise active microwave circuit, 692 data transmission v., 986 in white noise, 102–104, 103t Thévenin impedance, 730 Thévenin’s theorem for AC circuits, 25, 25f in polyphase motors, 729–730, 730f Thin-film substrates optical networks using, 1015 Third-generation (3G) cellular systems QoS architecture for, 423–424, 423f Three-phase induction motors, 728–732 equivalent circuit for, 728–729, 729f no-load/locked rotor tests for, 731–732 performance calculations for, 730–731 protecting, 803 rotor/stator currents/fields in, 728, 729f Threshold voltages DIBL lowering MOSFET, 120, 120f MOS capacitor, 112–113, 113f MOS transistor, 270f
power consumption v. device, 270, 270f VLSI technology advances v. scaling of, 279, 279f Thyristors, 167–169, 168f, 169f. See also specific types of thyristors controllability of, 163 MOS-gated, 174–175, 175f transient operation of, 168–169 Tikhonov method, 689–690 Tiles FPGA, 438 Time signals as functions of continuous/discrete, 814–815 Time alignment, 880 Time-continuous operation of embedded system components, 217 semantics of combining time-discrete with, 220, 220f VHDL standard extended for, 219f, 220 Time-division multiple access (TDMA), 1006–1007, 1006f FDMA/CDMA v., 1009, 1009t multipath interference v., 1006f, 1007 Time stamps fuzzy, 472 multimedia session control using, 417 TM waves. See Transverse magnetic waves TMz mode. See Transverse-magnetic mode with respect to z mode TO encoding, 313 To fields SIP, 422, 422f Token Passing Bus Protocol, 995–996 ring initialization, 995 station deletion/error recovery, 995, 995t Token Passing Ring Protocol, 996–997 FDDI v., 999–1000, 1000f, 1000t Topologies LAN/WAN, 991, 992t Torque magnetic, 485–487 motor/load, 724, 724f slip v. polyphase induction motor, 730, 730f slip v. split-phase motor, 735f, 736 sources of electric machine, 725, 725f speed v. DC machine, 726–728, 727f speed v. NEMA motor, 731, 731f speed v. polyphase induction motor, 729–730 Total harmonic distortion (THD), 807, 807t Total synthesis problem (TSP), 1131–1132 commutativity in, 1132f regulator design, 1131–1132, 1132f TPrTNs. See Temporal predicate transition nets Track-while-scan (TWS) radar, 683 Tracking radar, 682–683 Traffic descriptors multimedia source, 411 Traffic requirements for multimedia data transmission, 402, 406–407 Traffic shaping/policing proposed Internet service models using, 411–412 Traffic specification (Tspec), 413 Training phases HMM, 882 Transcoding hint annotation in MPEG-7, 913–914, 916–917, 916f, 917f MPEG-7, 913–914 optimizing, 916–917
Transconductance-C (OTA-C) filters, 136–138 Transfer admittance, 20 Transfer current ratio, 20 Transfer currents bipolar transistor, 143–144, 145f Transfer function matrices (TFMs) matrix fraction parameterization of, 1070 observed FRF data v., 1069–1074 polynomial matrix parameterization of, 1070–1071 Transfer functions cascading factored low-order blocks of, 129, 129f digital filter, 843 filter transmission characteristic described by, 127, 127f frequency response of BIBO stable, 1040–1041 marginally/strictly/positive real, 1045 Transfer impedance, 20 Transfer voltage ratio, 20 Transfer zeros on imaginary axis, 71–72, 72f at infinity, 70–71, 71f, 72f Transform coding, 904–907 Transformers. See also specific types of transformers AC power system, 713, 715–720 acceptance tests for, 720 applications of, 717–718 cooling methods for, 717 cores/windings in, 718–719 coupled-line, 593–594, 594f K-factor v., 807–808, 807t losses in, 716, 719 lumped-element, 594–595, 595f, 716, 716f modeling power system, 761–763, 763f overcurrent v., 719 performance of, 719 phase-shifting, 713 principles of, 715–717 technical/ideal, 56, 56f two-winding, 715–716, 716f windings in, 762, 762f Transient response, 16 interleaved QSW VRM, 92f, 93t, 94f, 95, 97f low-output/high-input voltage VRM, 95 push-pull forward VRM topology, 94 QSW, 90–91, 90f Transients, 805, 806t Transistor sizing VLSI power consumption reduced by, 278, 278f Transistors. See also specific types of transistors bipolar junction, 78, 78f future VRMs using improved, 96, 98f, 99f Transition directed-acyclic graphs (T-DAGs) dynamic NC net transitions forming, 439 Transmission advantages of digital, 985 asynchronous/synchronous, 985–987 data, 983 digital/analog, 983 impairments, 986 Transmission line model microstrip patch antenna, 701–702, 702f Transmission line resonators, 603–604, 604f Transmission lines, 525–537, 526f, 737–748 AC analysis of terminated lossless, 530–533, 530f AC analysis of terminated lossy, 533–534
AC power system, 713–714 conductor types used in, 738 design considerations for overhead, 738–741 differential relaying for high-voltage, 799, 800t directional relaying schemes for high-voltage, 798–799, 799t discontinuities in, 587–589, 590f distance protection of, 797–798, 797t, 798f equivalent circuit parameters of discontinuities in, 591f faults in, 787–788, 788f filters realized from planar, 609 lossless, 526–528, 527f lossless/low-loss/distortionless, 529 lossy, 528 major construction of, 737–738, 738f matching network design with sections of, 592, 594f microwave passive components as sections of, 587, 588f, 589f modeling power system, 763–765, 763f, 765f nonplanar waveguide, 587 nonuniform, 593, 594f planar/quasiplanar structures of, 587 protection of, 796–799 stubs in, 587, 588f twisted pair, 551 underground, 747–748 zones of protection for, 790–791, 790f Transmission zeros filter, 128 Transmittances in LC ladder filter operational simulation, 134–135 Transport Control Protocol/Internet Protocol (TCP/IP) enhancements for distributed multimedia applications, 416–422 multimedia v., 401–402 network architecture, 990–991, 990f Transport mode IPsec, 418 Transverse-electric mode with respect to z (TEz) mode Maxwell’s curl equations reduced to, 634, 637–638 Transverse electric (TE) waves waveguide, 540 Transverse electromagnetic (TEM) waves transmission line, 525, 526f waveguide, 540 Transverse equivalent networks orthogonal transverse fields in, 522, 522f transverse fields as voltage current equivalents in, 521 Transverse-magnetic mode with respect to z (TMz) mode Maxwell’s curl equations reduced to, 634, 637–638 numerical dispersion in, 638–639 Transverse magnetic (TM) waves waveguide, 540 Trees, 33, 34f clock skew scheduling using spanning, 243–244 cospanning, 33, 34f linearly independent clock skews as spanning, 243–244, 244f spanning, 33, 34f Ts. See Matched hybrid tees TSAs. See Tapered slot antennas TSP. See Total synthesis problem Tspec. See Traffic specification Tunnel mode IPsec, 418 Tunnels multicast-capable network segments connected by, 416
Twin transistor technique, 317, 317f Two-port networks, 6, 54–56, 54f antenna transmit-receive link modeled using, 565–566, 565f dependent sources as, 54, 55f filter synthesis from theory of, 67–68 partial removal in synthesis of, 70–72 reciprocal, 606–607, 607f synthesis of LC, 70–72, 607–608, 608f Two-terminal devices microwave, 695–696 Two-value capacitor motors, 736, 736f TWS radar. See Track-while-scan radar
U UDSA. See Useful data switching activity Ultra large scale integrated (ULSI) circuits clock skew scheduling improving reliability of, 231–257 UMA. See Universal multimedia access UML. See Unified Modeling Language UMTS bearer service components, 423–424, 423f 3G QoS architecture, 423–424, 423f Uncontrollable devices, 163 Undersampling, 834, 834f Uniaxial PML (UPML), 660–662 FDTD implementation of, 661–662 special cases, 660 Unified Modeling Language (UML) diagrams formalized by HPrTNs, 469 Uniform sampling theorem, 959 Uniqueness theorem Laplace’s equation solved with, 508 Unit impulses discrete/continuous-time, 816, 817f Unit steps discrete/continuous time, 816–818, 817f Universal approximation property in direct adaptive control, 1141 Universal motors, 736 Universal multimedia access (UMA), 911–919 architecture, 912, 912f Universal serial bus (USB), 332 UNIX directories, 363, 363t files, 361, 363 history, 358–359 introduction to, 361–362 Linux and, 364 operating system, 358–359, 361–364, 364t pipes, 362 process tables, 361 shell, 361, 362–363 system call library, 361 user structure, 361 utility programs, 363–364, 364t UPML. See Uniaxial PML Useful data switching activity (UDSA) as node switching activity component, 271
V V curves synchronous machine, 733–734, 734f Valence band, 153
edge/top, 156 electron movement allowed by vacancies in, 155, 155f Variable-length coding (VLC) in signal processing, 941–942 Variable-voltage techniques VLSI power consumption reduced by, 274, 274f Vdd. See Supply voltage level VDSM. See Very deep submicrometer Vector dot product, 349 Vector equations, 622–624 Vector spaces, 1020 Velocity saturation MOSFET, 118, 118f Verilog, 218 Vertices, 31 edges incident on/out of/into end, 31–32 end, 31 (in/out) degree of, 32 isolated, 32 pendant, 32 Very deep submicrometer (VDSM) noise analysis for, 309 Very high-speed integrated circuit hardware description language (VHDL). See also VHDL-analog mixed signal simulator/model relationship covered by, 219, 219f system design modeling by, 218 Very large scale integrated (VLSI) circuits application-specific, 934 clock skew scheduling improving reliability of, 231–257 constant field scaling in, 264 design trends in low-power, 263–279 die size increasing for, 264, 267f, 279 I/O register clock skew scheduling in, 255–256, 255f power consumption parameters in, 270–271 power consumption reduction techniques for, 272–279 power/performance/flexibility metrics for, 272 real-time signal processing using, 933–946 signal processing applications v. design of, 934 system design cycle stages for, 272 Very low frequency (VLF) electromagnetic waves FDTD methods modeling propagation of, 663, 664f VESA. See Video electronics standards association VFS. See Volterra feedback synthesis VHDL. See Very high-speed integrated circuit hardware description language VHDL-analog mixed signal (AMS) concepts, 219–220 fundamentals, 219–224 VHDL extended by, 220, 221f Video compression schemes, 406t as media type, 405 Video electronics standards association (VESA), 330–331 Virtual machine, 355 Virtual planes, 202 Viterbi decoding, 881, 942 F-B algorithm v., 884, 884f Vivaldi antennas, 703 VLC. See Variable-length coding VLF electromagnetic waves. See Very low frequency electromagnetic waves VLSI circuits. See Very large scale integrated circuits VLSI systems, 179–318 Vocoder algorithm. See Voice coder algorithm Voice, 862
Voice coder (vocoder) algorithm autocorrelation function, 210–211, 211f data layout reorganization results, 212–214, 213f data layout reorganization v. memory requirements of, 210–214 in-place mapped autocorrelation function, 211, 211f Voice communication, 963 Volatility memory, 192f, 193 Volt-per-hertz principle, 802 Voltage, 4 AC power system, 710 distribution systems controlling, 755–756 distribution systems regulating, 757–758 division in series-connected resistors, 10, 10f generators regulating, 773, 773f junction diode current v., 141, 141f measuring, 807t nonlinear circuit, 79–81 power bipolar transistor current v., 166–167, 166f sinusoidal, 22 standing waves in lossless transmission line, 531–533, 532f, 533f transmission v. distribution, 710, 713 waveform in lossless transmission lines, 527–528, 527f waveform in lossy transmission lines, 528, 528f, 533–534 Voltage regulator modules (VRMs), 85–99 center-tapped inductor, 94, 96f dynamic loading in, 85–86 efficiency of, 88–90, 89f, 91, 93f, 96, 96f, 97f, 99f high-frequency/power-density, 95–97 high-input-voltage topologies for, 92–94, 94f low-input-voltage topologies for, 90–92 practical load model of, 87, 87f push-pull forward topology for, 94, 96f, 97f specifications for, 86t technology limitations v., 87–90 Voltage transfer characteristic (VTC) curves SCMOS circuits having ideal, 278 Voltage transformers, 717–718, 718f Volterra feedback synthesis (VFS) controller design using, 1133–1135 nonlinear input-output control using, 1131–1138 simplified partial linearization controller design using, 1134–1135, 1135f, 1136f single degree of freedom base-isolated structure v., 1135–1138, 1136f, 1137t, 1138f Volume current density (J), 480 boundary conditions for, 485 Volume-surface integral equation, 622 Vowels, 866 VRMs. See Voltage regulator modules VTC curves. See Voltage transfer characteristic curves
W Watchdog processors (WDs), 441, 442f CFC with, 441–446 Wave number vectors plane wave, 514, 516t Yee algorithm’s complex-valued, 643–648 Waveguide couplers hole/aperture-based, 599–600, 600f Waveguide latching phase shifters nonreciprocal, 615, 615f Waveguides, 539–551 circular metal, 544–546, 545f
classification of, 540, 540f coplanar, 549, 549f dielectric circular/optical fiber, 549–550 dielectric-filled metallic rectangular, 544, 544f field configuration of rectangular, 541–542 filters realized from metallic/dielectric, 609 horn antenna input from, 578, 579f line-type, 550–551, 550f, 551t metallic/dielectric/planar, 540 microstrip line, 546–548, 546f operating frequencies of, 540, 543t partially filled metallic rectangular, 544, 544f phase/group velocities v. operating frequencies of, 542, 542f rectangular, 540–544, 541f resonance isolators realized in rectangular, 614, 614f slot line, 548–549, 548f transmission lines from nonplanar, 587 Wavelength division multiplexing (WDM), 1014–1015 Wavelengths microstrip line waveguide, 546–547 plane wave, 517, 518t Wavelet transforms DCTs v., 906–907, 907f, 908f, 909f Waves. See also specific types of waves analytical (ideal) three-dimensional dispersion of, 640 constructive/destructive interference in, 536 FDTD algorithms v. numerical dispersion of, 638–648 FDTD methods v. ELF/VLF electromagnetic, 663, 664f guided, 539–551 linear/nonlinear dispersion of, 666 mmf v. traveling, 722–723 object in scalar, 621–622, 621f object in vector, 622–624, 623f Yee algorithm v. radially propagating cylindrical waves, 647–648, 650f WCDMA. See Wideband CDMA WDM. See Wavelength division multiplexing WDs. See Watchdog processors Weak inversion in MOS capacitors, 110–111, 111f MOSFET subthreshold region operation and, 116–117, 117f Weighted least-squares (WLS) method, 852 frequency response, 854f minimax method v., 849–850, 851f Weighted residual method. See Method of moments Whip antennas small dipole antennas approximating, 569–570 White noise, 102–104, 102f active microwave circuit, 692 LQG control design v., 1044 Thévenin/Norton equivalents for, 102–103 Wideband CDMA (WCDMA) network synchronous/asynchronous, 1008 parameters, 1008t Wiener filters, 900–901, 923 blur v., 900f, 901 Wiener-Hopf equation, 924 Wilkinson power dividers/combiners, 595, 600–601, 600f Window method, 848–849, 848f, 850f Wing rock attack angle qualitatively changing, 1145, 1146f coefficient parameters in time-varying, 1144, 1144t control input v. angle of attack, 1146–1148, 1147f direct adaptive control model, 1144–1146, 1144t direct adaptive control regulating subsonic, 1143–1148
Wing rock (Continued) interpolated coefficients for time-varying, 1144, 1145f model interpolation function centers/spreads, 1144, 1144t open/closed loop roll/roll rate, 1146–1148, 1147f z states, 1148, 1148f Wireless LANs (WLANs), 997–999 architecture of, 997–998, 999f future developments in, 998–999 Wireless networks access technologies for, 1005–1009, 1009t all-IP architecture in, 1013–1014 WLANs. See Wireless LANs WLS-Chebychev method, 852–853, 853f, 854f WLS method. See Weighted least-squares method Working zone encoding (WZE), 313–314, 314f Wye-delta transformation. See Star-delta transformation
X XCON, 369 Xerox Palo Alto Research Center (PARC), 359
Y Yagi-Uda arrays, 578, 579f Yee algorithm basic ideas of, 634–635 complex frequency analysis of, 649–652 complex-valued numerical wave values, 643–648 3D cubic cell lattice stability bound, 651 FDTD methods using, 634–638 finite differences/notation, 636 grid diagonal wave propagation, 644–645
grid velocity anisotropy approximation, 643 instability in 1D grids, 652–655, 653f, 654f instability in 2D grids, 655, 656f intrinsic grid velocity anisotropy, 643 leapfrog arrangement/time-stepping, 635, 636f numerical phase velocity/attenuation, 645–646, 646f, 647f numerical stability, 649–655 principal axis wave propagation, 643–644 radially propagating cylindrical waves, 647–648, 650f rectangular pulse propagation, 646–647, 648f, 649f stable/unstable complex frequency analysis, 651 time/space discretizations v., 643
Z Z overview, 464–465 Petri nets integrated with, 464–467 state/operation schemas in, 464–465 z-transform, 828–830, 829t finite-duration sequence, 829–830, 830f, 831f inverse, 828–829, 829f Zero clock skew methods, 236 Zero clocking, 236 Zero-input stability, 1028 Zero skew islands in I/O register clock delay requirements, 256 Zeros, 57. See also Transfer zeros filter transmission, 128 Zinc blende crystal structure, 154, 155f Zones of protection, 789–791, 790f