Spectral Analysis of Signals
Petre Stoica, Uppsala University
Randolph Moses, The Ohio State University
PEARSON Prentice Hall
Upper Saddle River, New Jersey 07458
Library of Congress Cataloging-in-Publication Data
Spectral Analysis of Signals / Petre Stoica and Randolph Moses
p. cm. Includes bibliographical references and index.
ISBN 0-13-113956-8
1. Spectral theory I. Moses, Randolph II. Title
QA814.G27 512'-dc21 2005 CIP 00-055035

Vice President and Editorial Director, ECS: Marcia J. Horton
Executive Managing Editor: Vince O'Brien
Managing Editor: David A. George
Production Editor: Scott Disanno
Director of Creative Services: Paul Belfanti
Creative Director: Jayne Conte
Cover Designer: Bruce Kenselaar
Art Editor: Greg Dulles
Manufacturing Buyer: Lisa McDowell
Senior Marketing Manager: Holly Stark
© 2005 Pearson Education, Inc.
Pearson Prentice Hall
Pearson Education, Inc.
Upper Saddle River, New Jersey 07458
All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.

Pearson Prentice Hall is a trademark of Pearson Education, Inc.
MATLAB is a registered trademark of The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098.

The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
ISBN 0-13-113956-8
Pearson Education Ltd., London
Pearson Education Australia Pty. Ltd., Sydney
Pearson Education Singapore, Pte. Ltd.
Pearson Education North Asia Ltd., Hong Kong
Pearson Education Canada Inc., Toronto
Pearson Educación de Mexico, S.A. de C.V.
Pearson Education-Japan, Tokyo
Pearson Education Malaysia, Pte. Ltd.
Pearson Education, Inc., Upper Saddle River, New Jersey
Contents
List of Exercises
Preface
Notational Conventions
Abbreviations

1 Basic Concepts
1.1 Introduction
1.2 Energy Spectral Density of Deterministic Signals
1.3 Power Spectral Density of Random Signals
1.3.1 First Definition of Power Spectral Density
1.3.2 Second Definition of Power Spectral Density
1.4 Properties of Power Spectral Densities
1.5 The Spectral Estimation Problem
1.6 Complements
1.6.1 Coherence Spectrum
1.7 Exercises
2 Nonparametric Methods
2.1 Introduction
2.2 Periodogram and Correlogram Methods
2.2.1 Periodogram
2.2.2 Correlogram
2.3 Periodogram Computation via FFT
2.3.1 Radix-2 FFT
2.3.2 Zero Padding
2.4 Properties of the Periodogram Method
2.4.1 Bias Analysis of the Periodogram
2.4.2 Variance Analysis of the Periodogram
2.5 The Blackman-Tukey Method
2.5.1 The Blackman-Tukey Spectral Estimate
2.5.2 Nonnegativeness of the Blackman-Tukey Spectral Estimate
2.6 Window Design Considerations
2.6.1 Time-Bandwidth Product and Resolution-Variance Tradeoffs in Window Design
2.6.2 Some Common Lag Windows
2.6.3 Window Design Example
2.6.4 Temporal Windows and Lag Windows
2.7 Other Refined Periodogram Methods
2.7.1 Bartlett Method
2.7.2 Welch Method
2.7.3 Daniell Method
2.8 Complements
2.8.1 Sample Covariance Computation via FFT
2.8.2 FFT-Based Computation of Windowed Blackman-Tukey Periodograms
2.8.3 Data- and Frequency-Dependent Temporal Windows: The Apodization Approach
2.8.4 Estimation of Cross-Spectra and Coherency Spectra
2.8.5 More Time-Bandwidth Product Results
2.9 Exercises

3 Parametric Methods for Rational Spectra
3.1 Introduction
3.2 Signals with Rational Spectra
3.3 Covariance Structure of ARMA Processes
3.4 AR Signals
3.4.1 Yule-Walker Method
3.4.2 Least-Squares Method
3.5 Order-Recursive Solutions to the Yule-Walker Equations
3.5.1 Levinson-Durbin Algorithm
3.5.2 Delsarte-Genin Algorithm
3.6 MA Signals
3.7 ARMA Signals
3.7.1 Modified Yule-Walker Method
3.7.2 Two-Stage Least-Squares Method
3.8 Multivariate ARMA Signals
3.8.1 ARMA State-Space Equations
3.8.2 Subspace Parameter Estimation-Theoretical Aspects
3.8.3 Subspace Parameter Estimation-Implementation Aspects
3.9 Complements
3.9.1 The Partial Autocorrelation Sequence
3.9.2 Some Properties of Covariance Extensions
3.9.3 The Burg Method for AR Parameter Estimation
3.9.4 The Gohberg-Semencul Formula
3.9.5 MA Parameter Estimation in Polynomial Time
3.10 Exercises
4 Parametric Methods for Line Spectra
4.1 Introduction
4.2 Models of Sinusoidal Signals in Noise
4.2.1 Nonlinear Regression Model
4.2.2 ARMA Model
4.2.3 Covariance Matrix Model
4.3 Nonlinear Least-Squares Method
4.4 High-Order Yule-Walker Method
4.5 Pisarenko and MUSIC Methods
4.6 Min-Norm Method
4.7 ESPRIT Method
4.8 Forward-Backward Approach
4.9 Complements
4.9.1 Mean-Square Convergence of Sample Covariances for Line Spectral Processes
4.9.2 The Caratheodory Parameterization of a Covariance Matrix
4.9.3 Using the Unwindowed Periodogram for Sine Wave Detection in White Noise
4.9.4 NLS Frequency Estimation for a Sinusoidal Signal with Time-Varying Amplitude
4.9.5 Monotonically Descending Techniques for Function Minimization
4.9.6 Frequency-Selective ESPRIT-Based Method
4.9.7 A Useful Result for Two-Dimensional (2D) Sinusoidal Signals
4.10 Exercises
5 Filter-Bank Methods
5.1 Introduction
5.2 Filter-Bank Interpretation of the Periodogram
5.3 Refined Filter-Bank Method
5.3.1 Slepian Baseband Filters
5.3.2 RFB Method for High-Resolution Spectral Analysis
5.3.3 RFB Method for Statistically Stable Spectral Analysis
5.4 Capon Method
5.4.1 Derivation of the Capon Method
5.4.2 Relationship between Capon and AR Methods
5.5 Filter-Bank Reinterpretation of the Periodogram
5.6 Complements
5.6.1 Another Relationship between the Capon and AR Methods
5.6.2 Multiwindow Interpretation of Daniell and Blackman-Tukey Periodograms
5.6.3 Capon Method for Exponentially Damped Sinusoidal Signals
5.6.4 Amplitude and Phase Estimation Method (APES)
5.6.5 Amplitude and Phase Estimation Method for Gapped Data (GAPES)
5.6.6 Extensions of Filter-Bank Approaches to Two-Dimensional Signals
5.7 Exercises
6 Spatial Methods
6.1 Introduction
6.2 Array Model
6.2.1 The Modulation-Transmission-Demodulation Process
6.2.2 Derivation of the Model Equation
6.3 Nonparametric Methods
6.3.1 Beamforming
6.3.2 Capon Method
6.4 Parametric Methods
6.4.1 Nonlinear Least-Squares Method
6.4.2 Yule-Walker Method
6.4.3 Pisarenko and MUSIC Methods
6.4.4 Min-Norm Method
6.4.5 ESPRIT Method
6.5 Complements
6.5.1 On the Minimum-Norm Constraint
6.5.2 NLS Direction-of-Arrival Estimation for a Constant-Modulus Signal
6.5.3 Capon Method: Further Insights and Derivations
6.5.4 Capon Method for Uncertain Direction Vectors
6.5.5 Capon Method with Noise-Gain Constraint
6.5.6 Spatial Amplitude and Phase Estimation (APES)
6.5.7 The CLEAN Algorithm
6.5.8 Unstructured and Persymmetric ML Estimates of the Covariance Matrix
6.6 Exercises
APPENDICES

A Linear Algebra and Matrix Analysis Tools
A.1 Introduction
A.2 Range Space, Null Space, and Matrix Rank
A.3 Eigenvalue Decomposition
A.3.1 General Matrices
A.3.2 Hermitian Matrices
A.4 Singular Value Decomposition and Projection Operators
A.5 Positive (Semi)Definite Matrices
A.6 Matrices with Special Structure
A.7 Matrix Inversion Lemmas
A.8 Systems of Linear Equations
A.8.1 Consistent Systems
A.8.2 Inconsistent Systems
A.9 Quadratic Minimization
B Cramér-Rao Bound Tools
B.1 Introduction
B.2 The CRB for General Distributions
B.3 The CRB for Gaussian Distributions
B.4 The CRB for Line Spectra
B.5 The CRB for Rational Spectra
B.6 The CRB for Spatial Spectra
C Model Order Selection Tools
C.1 Introduction
C.2 Maximum Likelihood Parameter Estimation
C.3 Useful Mathematical Preliminaries and Outlook
C.3.1 Maximum A Posteriori (MAP) Selection Rule
C.3.2 Kullback-Leibler Information
C.3.3 Outlook: Theoretical and Practical Perspectives
C.4 Direct Kullback-Leibler (KL) Approach: No-Name Rule
C.5 Cross-Validatory KL Approach: The AIC Rule
C.6 Generalized Cross-Validatory KL Approach: The GIC Rule
C.7 Bayesian Approach: The BIC Rule
C.8 Summary and the Multimodel Approach
C.8.1 Summary
C.8.2 The Multimodel Approach
D Answers to Selected Exercises
Bibliography
References Grouped by Subject
Index
List of Exercises
CHAPTER 1
1.1 Scaling of the Frequency Axis
1.2 Time-Frequency Distributions
1.3 Two Useful Z-Transform Properties
1.4 A Simple ACS Example
1.5 Alternative Proof that |r(k)| ≤ r(0)
1.6 A Double Summation Formula
1.7 Is a Truncated Autocovariance Sequence (ACS) a Valid ACS?
1.8 When Is a Sequence an Autocovariance Sequence?
1.9 Spectral Density of the Sum of Two Correlated Signals
1.10 Least-Squares Spectral Approximation
1.11 Linear Filtering and the Cross-Spectrum
C1.12 Computer Generation of Autocovariance Sequences
C1.13 DTFT Computations Using Two-Sided Sequences
C1.14 Relationship between the PSD and the Eigenvalues of the ACS Matrix

CHAPTER 2
2.1 Covariance Estimation for Signals with Unknown Means
2.2 Covariance Estimation for Signals with Unknown Means (cont'd)
2.3 Unbiased ACS Estimates Can Lead to Negative Spectral Estimates
2.4 Variance of Estimated ACS
2.5 Another Proof of the Equality φ̂_p(ω) = φ̂_c(ω)
2.6 A Compact Expression for the Sample ACS
2.7 Yet Another Proof of the Equality φ̂_p(ω) = φ̂_c(ω)
2.8 Linear Transformation Interpretation of the DFT
2.9 For White Noise, the Periodogram Is an Unbiased PSD Estimator
2.10 Shrinking the Periodogram
2.11 Asymptotic Maximum Likelihood Estimation of φ(ω) from φ̂_p(ω)
2.12 Plotting the Spectral Estimates in dB
2.13 Finite-Sample Variance/Covariance Analysis of the Periodogram
2.14 Data-Weighted ACS Estimate Interpretation of Bartlett and Welch Methods
2.15 Approximate Formula for Bandwidth Calculation
2.16 A Further Look at the Time-Bandwidth Product
2.17 Bias Considerations in Blackman-Tukey Window Design
2.18 A Property of the Bartlett Window
C2.19 Zero-Padding Effects on Periodogram Estimators
C2.20 Resolution and Leakage Properties of the Periodogram
C2.21 Bias and Variance Properties of the Periodogram Spectral Estimate
C2.22 Refined Methods: Variance-Resolution Tradeoff
C2.23 Periodogram-Based Estimators Applied to Measured Data
CHAPTER 3
3.1 The Minimum Phase Property
3.2 Generating the ACS from ARMA Parameters
3.3 Relationship between AR Modeling and Forward Linear Prediction
3.4 Relationship between AR Modeling and Backward Linear Prediction
3.5 Prediction Filters and Smoothing Filters
3.6 Relationship between Minimum Prediction Error and Spectral Flatness
3.7 Diagonalization of the Covariance Matrix
3.8 Stability of Yule-Walker AR Models
3.9 Three Equivalent Representations for AR Processes
3.10 An Alternative Proof of the Stability Property of Reflection Coefficients
3.11 Recurrence Properties of the Reflection Coefficient Sequence for an MA Model
3.12 Asymptotic Variance of the ARMA Spectral Estimator
3.13 Filtering Interpretation of Numerator Estimators in ARMA Estimation
3.14 An Alternative Expression for ARMA Power Spectral Density
3.15 Padé Approximation
3.16 (Non)Uniqueness of Fully Parameterized ARMA Equations
C3.17 Comparison of AR, ARMA, and Periodogram Methods for ARMA Signals
C3.18 AR and ARMA Estimators for Line Spectral Estimation
C3.19 Model Order Selection for AR and ARMA Processes
C3.20 AR and ARMA Estimators Applied to Measured Data

CHAPTER 4
4.1 Speed Measurement by a Doppler Radar as a Frequency Estimation Problem
4.2 ACS of Sinusoids with Random Amplitudes or Nonuniform Phases
4.3 A Nonergodic Sinusoidal Signal
4.4 AR Model-Based Frequency Estimation
4.5 An ARMA Model-Based Derivation of the Pisarenko Method
4.6 Frequency Estimation When Some Frequencies Are Known
4.7 A Combined HOYW-ESPRIT Method for the MA Noise Case
4.8 Chebyshev Inequality and the Convergence of Sample Covariances
4.9 More about the Forward-Backward Approach
4.10 ESPRIT and Min-Norm under the Same Umbrella
4.11 Yet Another Relationship between ESPRIT and Min-Norm
C4.12 Resolution Properties of Subspace Methods for Estimation of Line Spectra
C4.13 Model Order Selection for Sinusoidal Signals
C4.14 Line Spectral Methods Applied to Measured Data
CHAPTER 5
5.1 Multiwindow Interpretation of Bartlett and Welch Methods
5.2 An Alternative Statistically Stable RFB Estimate
5.3 Another Derivation of the Capon FIR Filter
5.4 The Capon Filter Is a Matched Filter
5.5 Computation of the Capon Spectrum
5.6 A Relationship between the Capon Method and MUSIC (Pseudo)Spectra
5.7 A Capon-Like Implementation of MUSIC
5.8 Capon Estimate of the Parameters of a Single Sine Wave
5.9 An Alternative Derivation of the Relationship between the Capon and AR Methods
C5.10 Slepian Window Sequences
C5.11 Resolution of Refined Filter-Bank Methods
C5.12 The Statistically Stable RFB Power Spectral Estimator
C5.13 The Capon Method

CHAPTER 6
6.1 Source Localization by Using a Sensor in Motion
6.2 Beamforming Resolution for Uniform Linear Arrays
6.3 Beamforming Resolution for Arbitrary Arrays
6.4 Beamforming Resolution for L-Shaped Arrays
6.5 Relationship between Beamwidth and Array-Element Locations
6.6 Isotropic Arrays
6.7 Grating Lobes
6.8 Beamspace Processing
6.9 Beamspace Processing (cont'd)
6.10 Beamforming and MUSIC under the Same Umbrella
6.11 Subspace Fitting Interpretation of MUSIC
6.12 Subspace Fitting Interpretation of MUSIC (cont'd)
6.13 Subspace Fitting Interpretation of MUSIC (cont'd)
6.14 Modified MUSIC for Coherent Signals
C6.15 Comparison of Spatial Spectral Estimators
C6.16 Performance of Spatial Spectral Estimators for Coherent Source Signals
C6.17 Spatial Spectral Estimators Applied to Measured Data
Preface
Spectral analysis considers the problem of determining the spectral content (i.e., the distribution of power over frequency) of a time series from a finite set of measurements, by means of either nonparametric or parametric techniques. The history of spectral analysis as an established discipline started more than a century ago with the work by Schuster on detecting cyclic behavior in time series. An interesting historical perspective on the developments in this field can be found in [MARPLE 1987]. This reference notes that the word "spectrum" was apparently introduced by Newton in relation to his studies of the decomposition of white light into a band of light colors, when passed through a glass prism (as illustrated on the front cover). This word appears to be a variant of the Latin word "specter", which means "ghostly apparition". The contemporary English word that has the same meaning as the original Latin word is "spectre". Despite these roots of the word "spectrum", we hope the student will be a "vivid presence" in the course that has just started!

This text, which is a revised and expanded version of Introduction to Spectral Analysis (Prentice Hall, 1997), is designed to be used with a first course in spectral analysis that would typically be offered to senior undergraduate or first-year graduate students. The book should also be useful for self-study, as it is largely self-contained. The text is concise by design, so that it gets to the main points quickly and should hence be appealing to those who would like a fast appraisal of the classical and modern approaches to spectral analysis. In order to keep the book as concise as possible without sacrificing the rigor of presentation or skipping over essential aspects, we do not cover some advanced topics of spectral estimation in the main part of the text. However, several advanced topics are considered in the complements that appear at the end of each chapter, and also in the appendices. For an introductory course, the reader can skip the complements and refer to results in the appendices without having to understand their derivation in detail.
For the more advanced reader, we have included three appendices and a number of complement sections in each chapter. The appendices provide a summary of the main techniques and results in linear algebra, statistical accuracy bounds, and model order selection, respectively. The complements present a broad range of advanced topics in spectral analysis. Many of these are current or recent research topics in the spectral analysis literature.

At the end of each chapter, we have included both analytical exercises and computer problems. The analytical exercises are, more or less, ordered from least to most difficult; this ordering also approximately follows the chronological presentation of material in the chapters. The more difficult exercises explore advanced topics in spectral analysis and provide results that are not available in the main text. Answers to selected exercises are found in Appendix D. The computer problems are designed to illustrate the main points of the text and to provide the reader with first-hand information on the behavior and performance of the various spectral analysis techniques considered. The computer exercises also illustrate the relative performance of the methods and explore other topics, such as statistical accuracy, resolution properties, and the like, that are not developed analytically in the book. We have used MATLAB¹ to minimize the programming chore and to encourage the reader to "play" with other examples. We provide a set of MATLAB functions for data generation and spectral estimation that form a basis for a comprehensive set of spectral-estimation tools; these functions are available at the text website, www.prenhall.com/stoica.

Supplementary material may also be obtained from the text website. We have prepared a set of overhead transparencies that can be used as a teaching aid for a spectral analysis course. We believe that these transparencies are useful not only to course instructors but also to other readers, because they summarize the principal methods and results in the text. For readers who study the topic on their own, it should be a useful exercise to refer to the main points addressed in the transparencies after completing the reading of each chapter.

As we mentioned earlier, this text is a revised and expanded version of Introduction to Spectral Analysis (Prentice Hall, 1997). We have maintained the conciseness and accessibility of the main text; the revision has focused primarily on expanding the complements, appendices, and bibliography. Specifically, we have expanded Appendix B to include a detailed discussion of Cramér-Rao bounds for direction-of-arrival estimation. We have added Appendix C, which covers model order selection, and have added new computer exercises on order selection. We have more than doubled the number of complements from the previous book, to 32, most of which present recent results in spectral analysis. We have also expanded the bibliography to include new topics, along with recent results on more established topics.

The text is organized as follows: Chapter 1 introduces the spectral analysis problem, motivates the definition of power spectral density functions, and reviews some important properties of autocorrelation sequences and spectral density functions. Chapters 2 and 5 consider nonparametric spectral estimation. Chapter 2 presents classical techniques, including the periodogram, the correlogram, and their modified versions to reduce variance.
We include an analysis of bias and variance of these techniques and relate them to one another. Chapter 5 considers the more recent filter-bank version of nonparametric techniques, including both data-independent and data-dependent filter design techniques. Chapters 3 and 4 consider parametric techniques; Chapter 3

¹MATLAB® is a registered trademark of The MathWorks, Inc.
focuses on continuous spectral models (Autoregressive Moving Average (ARMA) models and their AR and MA special cases), while Chapter 4 focuses on discrete spectral models (sinusoids in noise). We have placed the filter-bank methods in Chapter 5, after Chapters 3 and 4, mainly because the Capon estimator has interpretations both as an averaged AR spectral estimator and as a matched filter for line-spectral models, and we need the background of Chapters 3 and 4 to develop these interpretations. The data-independent filter-bank techniques in Sections 5.1-5.4 can equally well be covered directly following Chapter 2, if desired. Chapter 6 considers the closely related problem of spatial spectral estimation in the context of array signal processing. Both nonparametric (beamforming) and parametric methods are considered, and both are tied into the temporal spectral estimation techniques considered in Chapters 2, 4, and 5.

The bibliography contains both modern and classical references (ordered both alphabetically and by subject). We include many historical references as well, for those interested in tracing the early developments of spectral analysis. However, spectral analysis is a topic with contributions from many diverse fields, including electrical and mechanical engineering, astronomy, biomedical spectroscopy, geophysics, mathematical statistics, and econometrics, to name a few. As such, any attempt to document the historical development of spectral analysis accurately is doomed to failure. The bibliography reflects our own perspectives, biases, and limitations; there is no doubt that the list is incomplete, but we hope that it gives the reader an appreciation of the breadth and diversity of the spectral analysis field.

The background needed for this text includes a basic knowledge of linear algebra, discrete-time linear systems, and introductory discrete-time stochastic processes (or time series). A basic understanding of estimation theory is helpful, though not required. Appendix A develops most of the needed background results on matrices and linear algebra, Appendix B gives a tutorial introduction to the Cramér-Rao bound, and Appendix C develops the theory of model order selection. We have included concise definitions and descriptions of the required concepts and results where needed. Thus, we have tried to make the text as self-contained as possible.

We are indebted to Jian Li and Lee Potter for adopting a former version of the text in their spectral estimation classes, for their valuable feedback, and for contributing to this book in several other ways. We would like to thank Torsten Söderström, for providing the initial stimulus for preparation of lecture notes that led to the book, and Hung-Chih Chiang, Peter Händel, Ari Kangas, Erlendur Karlsson, and Lee Swindlehurst, for careful proofreading and comments and for many ideas on, and early drafts of, the computer problems. We are grateful to Mats Bengtsson, Tryphon Georgiou, K.V.S. Hari, Andreas Jakobsson, Erchin Serpedin, and Andreas Spanias for comments and suggestions that helped us eliminate some inadvertencies and typographical errors from the previous edition of the book. We also wish to thank Wallace Anderson, Alfred Hero, Ralph Hippenstiel, Louis Scharf, and Douglas Williams, who reviewed a former version of the book and provided us with numerous useful comments and suggestions.
It was a pleasure to work with the excellent staff at Prentice Hall, and we are particularly appreciative of Tom Robbins and Scott Disanno for their professional expertise. Many of the topics described in this book are outgrowths of our research programs in statistical signal and array processing, and we wish to thank the sponsors of this research: the Swedish Foundation for Strategic Research, the Swedish Research Council, the Swedish Institute, the
U.S. Army Research Laboratory, the U.S. Air Force Research Laboratory, and the U.S. Defense Advanced Research Projects Agency. Finally, we are indebted to Anca and to Liz, Laura, and Daniel for their continuing support and understanding throughout this project.

PETRE STOICA, Uppsala University
RANDY MOSES, The Ohio State University
Notational Conventions

R                the set of real numbers
C                the set of complex numbers
N(A)             the null space of the matrix A (p. 343)
R(A)             the range space of the matrix A (p. 342)
Dn               the nth definition in Appendix A or B
Rn               the nth result in Appendix A
‖x‖              the Euclidean norm of a vector x
∗                convolution operator
(·)^T            transpose of a vector or matrix
(·)^c            conjugate of a vector or matrix
(·)*             conjugate transpose of a vector or matrix; also used for scalars in lieu of (·)^c
A_{ij}           the (i, j)th element of the matrix A
a_i              the ith element of the vector a
x̂                an estimate of the quantity x
A > 0 (≥ 0)      A is positive definite (positive semidefinite) (p. 357)
arg max_x f(x)   the value of x that maximizes f(x)
arg min_x f(x)   the value of x that minimizes f(x)
cov{x, y}        the covariance between x and y
|x|              the modulus of the (possibly complex) scalar x
|A|              the determinant of the square matrix A
diag(a)          the square diagonal matrix whose diagonal elements are the elements of the vector a
δ_{k,l}          Kronecker delta: δ_{k,l} = 1 if k = l; δ_{k,l} = 0 otherwise
δ(t − t₀)        Dirac delta: δ(t − t₀) = 0 for t ≠ t₀; ∫_{−∞}^{∞} δ(t − t₀) dt = 1
E{x}             the expected value of x (p. 5)
f                (discrete-time) frequency: f = ω/2π, in cycles per sampling interval (p. 8)
φ(ω)             a power spectral density function (p. 6)
Im{x}            the imaginary part of x
O(x)             on the order of x (p. 35)
p(x)             probability density function
Pr{A}            the probability of event A
r(k)             an autocovariance sequence (p. 5)
Re{x}            the real part of x
t                discrete-time index
tr(A)            the trace of the matrix A (p. 346)
var{x}           the variance of x
w(k), W(ω)       a window sequence and its Fourier transform
w_B(k), W_B(ω)   the Bartlett (or triangular) window sequence and its Fourier transform (p. 31)
w_R(k), W_R(ω)   the rectangular (or Dirichlet) window sequence and its Fourier transform (p. 32)
ω                radian (angular) frequency, in radians per sampling interval (p. 3)
z⁻¹              unit-delay operator: z⁻¹ x(t) = x(t − 1) (p. 11)
Abbreviations

ACS      autocovariance sequence (p. 5)
APES     amplitude and phase estimation (p. 256)
AR       autoregressive (p. 92)
ARMA     autoregressive moving average (p. 92)
BSP      beamspace processing (p. 337)
BT       Blackman-Tukey (p. 39)
CM       Capon method (p. 232)
CCM      constrained Capon method (p. 313)
CRB      Cramér-Rao bound (p. 373)
DFT      discrete Fourier transform (p. 27)
DGA      Delsarte-Genin algorithm (p. 99)
DOA      direction of arrival (p. 276)
DTFT     discrete-time Fourier transform (p. 3)
ESP      elementspace processing (p. 337)
ESPRIT   estimation of signal parameters by rotational invariance techniques (p. 174)
EVD      eigenvalue decomposition (p. 345)
FB       forward-backward (p. 176)
FBA      filter-bank approach (p. 219)
FFT      fast Fourier transform (p. 27)
FIR      finite impulse response (p. 18)
flop     floating-point operation (p. 27)
GAPES    gapped amplitude and phase estimation (p. 259)
GS       Gohberg-Semencul (formula) (p. 128)
HOYW     high-order Yule-Walker (p. 162)
i.i.d.   independent, identically distributed (p. 331)
LDA      Levinson-Durbin algorithm (p. 99)
LS       least squares (p. 367)
MA       moving average (p. 92)
MFD      matrix fraction description (p. 143)
ML       maximum likelihood (p. 375)
MLE      maximum likelihood estimate (p. 375)
MSE      mean squared error (p. 30)
MUSIC    multiple signal classification (or characterization) (p. 166)
MYW      modified Yule-Walker (p. 101)
NLS      nonlinear least squares (p. 151)
PARCOR   partial correlation (p. 101)
PSD      power spectral density (p. 5)
QRD      Q-R decomposition (p. 368)
RCM      robust Capon method (p. 312)
RFB      refined filter bank (p. 222)
SNR      signal-to-noise ratio (p. 85)
SVD      singular value decomposition (p. 351)
TLS      total least squares (p. 369)
ULA      uniform linear array (p. 283)
YW       Yule-Walker (p. 94)
Spectral Analysis of Signals
1 Basic Concepts
1.1 INTRODUCTION

The essence of the spectral estimation problem is captured by the following informal formulation:

From a finite record of a stationary data sequence, estimate how the total power is distributed over frequency.    (1.1.1)
Spectral analysis finds applications in many diverse fields. In vibration monitoring, the spectral content of measured signals gives information on the wear and other characteristics of mechanical parts under study. In economics, meteorology, astronomy, and several other fields, the spectral analysis may reveal "hidden periodicities" in the studied data, which are to be associated with cyclic behavior or recurring processes. In speech analysis, spectral models of voice signals are useful in better understanding the speech production process and, in addition, can be used for both speech synthesis (or compression) and speech recognition. In radar and sonar systems, the spectral contents of the received signals provide information on the location of the sources (or targets) situated in the field of view. In medicine, spectral analysis of various signals measured from a patient, such as electrocardiogram (ECG) or electroencephalogram (EEG) signals, can provide useful material for diagnosis. In seismology, the spectral analysis of the signals recorded prior to and during a seismic event (such as a volcano eruption or an earthquake) gives useful information on the ground movement associated with such events and could help in predicting them. Seismic spectral estimation is also used to predict subsurface geologic structure in gas and oil exploration. In control systems, there is a resurging interest in spectral analysis methods as a
means of characterizing the dynamical behavior of a given system and ultimately synthesizing a controller for that system. The previous and other applications of spectral analysis are reviewed in [KAY 1988; MARPLE 1987; BLOOMFIELD 1976; BRACEWELL 1986; HAYKIN 1991; HAYKIN 1995; HAYES III 1996; KOOPMANS 1974; PRIESTLEY 1981; PERCIVAL AND WALDEN 1993; PORAT 1994; SCHARF 1991; THERRIEN 1992; PROAKIS, RADER, LING, AND NIKIAS 1992]. The textbook [MARPLE 1987] also contains a well-written historical perspective on spectral estimation, which is worth reading. Many of the classical articles on spectral analysis, both application-driven and theoretical, are reprinted in [CHILDERS 1978; KESLER 1986]; these excellent collections of reprints are well worth consulting.

There are two broad approaches to spectral analysis. One of these derives its basic idea directly from definition (1.1.1): The studied signal is applied to a bandpass filter with a narrow bandwidth, which is swept through the frequency band of interest, and the filter output power divided by the filter bandwidth is used as a measure of the spectral content of the input to the filter. This is essentially what the classical (or nonparametric) methods of spectral analysis do. These methods are described in Chapters 2 and 5 of this text. (The fact that the methods of Chapter 2 can be given the filter-bank interpretation is made clear in Chapter 5.) The second approach to spectral estimation, called the parametric approach, is to postulate a model for the data, which provides a means of parameterizing the spectrum, and to thereby reduce the spectral estimation problem to that of estimating the parameters in the assumed model. The parametric approach to spectral analysis is treated in Chapters 3, 4, and 6.

Parametric methods offer more accurate spectral estimates than the nonparametric ones in the cases where the data indeed satisfy the model assumed by the former methods. However, in the more likely case that the data do not satisfy the assumed models, the nonparametric methods sometimes outperform the parametric ones, because of the sensitivity of the latter to model misspecifications. This observation has motivated renewed interest in the nonparametric approach to spectral estimation.

Many real-world signals can be characterized as being random (from the observer's viewpoint). Briefly speaking, this means that the variation of such a signal outside the observed interval cannot be determined exactly, but can only be specified in statistical terms of averages. In this text, we will be concerned with estimating the spectral characteristics of random signals. In spite of this fact, we find it useful to start the discussion by considering the spectral analysis of deterministic signals (as we do in Section 1.2).

Throughout this work, we consider discrete-index signals (or data sequences). Such signals are most commonly obtained by the temporal or spatial sampling of a continuous (in time or space) signal. The main motivation for focusing on discrete signals lies in the fact that spectral analysis is most often performed by a digital computer or by digital circuitry. Chapters 2 to 5 of this text deal with discrete-time signals; Chapter 6 considers the case of discrete-space data sequences. In the interest of notational simplicity, the discrete-time variable t, as used in this text, is assumed to be measured in units of sampling interval. A similar convention is adopted for spatial signals, whenever the sampling is uniform.
Accordingly, the units of frequency are cycles per sampling interval. The signals dealt with in the text are complex valued. Complex-valued data can appear in signal processing and spectral estimation applications, for instance, as a result of a "complex demodulation" process (explained in detail in Chapter 6). It should be noted that the treatment of complex-valued signals is not always more general or more difficult than the analysis of
corresponding real-valued signals. A typical example that illustrates this claim is the case of sinusoidal signals considered in Chapter 4. A real-valued sinusoidal signal, α cos(ωt +
  H(ω) = φ_yu(ω) / φ_uu(ω)

In what follows, we will show that

  φ_εε(ω) = [1 − |C_yu(ω)|²] φ_yy(ω)    (1.6.3)

where

  C_yu(ω) = φ_yu(ω) / [φ_yy(ω) φ_uu(ω)]^{1/2}

and |C_yu(ω)| is the so-called coherence spectrum. We will deduce from (1.6.3) that the coherence spectrum shows the extent to which y(t) and u(t) are linearly related to one another, hence providing a motivation for the name given to |C_yu(ω)|. We will also show that |C_yu(ω)| ≤ 1, with equality, for all ω values, if and only if y(t) and u(t) are related as in equation (1.6.2) with ε(t) ≡ 0 (in the mean-square sense). Finally, we will show that |C_yu(ω)| is invariant to linear filtering of u(t) and y(t) (possibly by different filters); that is, if ū = g ∗ u and ȳ = f ∗ y, where f and g are linear filters and ∗ denotes convolution, then |C_ȳū(ω)| = |C_yu(ω)|.

Let x(t) = Σ_{k=−∞}^{∞} h_k u(t − k). It can be shown that u(t − k) and ε(t) are uncorrelated with one another for all k. (The reader is required to verify this claim; see also Exercise 1.11.) Hence, x(t) and ε(t) are also uncorrelated with each other. Now,

  y(t) = ε(t) + x(t),    (1.6.4)

so it follows that

  φ_yy(ω) = φ_εε(ω) + φ_xx(ω)    (1.6.5)

By using the fact that φ_xx(ω) = |H(ω)|² φ_uu(ω), we can write

  φ_εε(ω) = φ_yy(ω) − |H(ω)|² φ_uu(ω) = φ_yy(ω) − |φ_yu(ω)|² / φ_uu(ω) = [1 − |C_yu(ω)|²] φ_yy(ω)
which is (1.6.3). Since the left-hand side in (1.6.3) is nonnegative and the PSD function φ_yy(ω) is arbitrary, we must have |C_yu(ω)| ≤ 1 for all ω. It can also be seen from (1.6.3) that the closer |C_yu(ω)| is to 1, the smaller is the residual variance. In particular, if |C_yu(ω)| = 1, then ε(t) ≡ 0 (in the mean-square sense), and hence y(t) and u(t) must be linearly related, as in (1.6.2). The previous interpretation leads to calling C_yu(ω) the correlation coefficient in the frequency domain.

Next, consider the filtered signals

  ȳ(t) = Σ_{k=−∞}^{∞} f_k y(t − k)

and

  ū(t) = Σ_{k=−∞}^{∞} g_k u(t − k)
where the filters {f_k} and {g_k} are assumed to be stable. As

  r_ȳū(p) ≜ E{ȳ(t) ū*(t − p)}
         = Σ_{k=−∞}^{∞} Σ_{j=−∞}^{∞} f_k g_j* E{y(t − k) u*(t − j − p)}
         = Σ_{k=−∞}^{∞} Σ_{j=−∞}^{∞} f_k g_j* r_yu(j + p − k),

it follows that

  φ_ȳū(ω) = Σ_{p=−∞}^{∞} Σ_{k=−∞}^{∞} Σ_{j=−∞}^{∞} f_k e^{−iωk} g_j* e^{iωj} r_yu(j + p − k) e^{−iω(j+p−k)} = F(ω) G*(ω) φ_yu(ω)

Hence,

  |C_ȳū(ω)| = |F(ω)| |G(ω)| |φ_yu(ω)| / [ |F(ω)| φ_yy^{1/2}(ω) |G(ω)| φ_uu^{1/2}(ω) ] = |C_yu(ω)|

which is the desired result. Observe that the latter result is similar to the invariance of the modulus of the correlation coefficient in the time domain,

  |r_yu(k)| / [r_yy(0) r_uu(0)]^{1/2}

to a scaling of the two signals: ȳ(t) = f · y(t) and ū(t) = g · u(t).
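To make the coherence result concrete, the following MATLAB sketch evaluates |C_yu(ω)| from theoretical spectra for a known linear relation y(t) = Σ h_k u(t − k) + ε(t); the filter and the noise level below are illustrative assumptions, not taken from the text:

    % Sketch: coherence spectrum for y(t) = h * u(t) + e(t), computed from
    % theoretical spectra. H, phiu, and phie below are illustrative choices.
    w = linspace(-pi, pi, 512);
    H = 1 ./ (1 + 0.5*exp(-1i*w));            % example filter H(w)
    phiu = ones(size(w));                     % u(t): unit-variance white noise
    phie = 0.1*ones(size(w));                 % e(t): white, uncorrelated with u
    phiyu = H .* phiu;                        % cross-spectrum: phi_yu = H*phi_uu
    phiy  = abs(H).^2 .* phiu + phie;         % phi_yy = |H|^2 phi_uu + phi_ee
    C = abs(phiyu) ./ sqrt(phiy .* phiu);     % |C_yu(w)| as in (1.6.3)
    plot(w, C)        % |C| < 1 everywhere; it tends to 1 as phi_ee -> 0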
1.7 EXERCISES

Exercise 1.1: Scaling of the Frequency Axis
In this text, the time variable t has been expressed in units of the sampling interval T_s (say). Consequently, the frequency is measured in cycles per sampling interval. Assume we want the frequency units to be expressed in radians per second or in Hertz (Hz = cycles per second). Then we have to introduce the scaled frequency variables
  ω̄ = ω / T_s,  ω̄ ∈ [−π/T_s, π/T_s] rad/sec    (1.7.1)

and f̄ = ω̄/2π (in Hz). It might be thought that the PSD in the new frequency variable is obtained by inserting ω = ω̄ T_s into φ(ω), but this is wrong. Show that the PSD, as expressed in units of power per Hz, is in fact given by

  φ̄(ω̄) ≜ T_s φ(ω̄ T_s) = T_s Σ_{k=−∞}^{∞} r(k) e^{−iω̄ T_s k}    (1.7.2)
(See [MARPLE 1987] for more details on this scaling aspect.)
Exercise 1.2: Time-Frequency Distributions
Let y(t) denote a discrete-time signal, and let Y(ω) be its discrete-time Fourier transform. As explained in Section 1.2, Y(ω) shows how the energy in the whole sequence {y(t)}_{t=−∞}^{∞} is distributed over frequency. Assume that we want to characterize how the energy of the signal is distributed in time and frequency. If D(t, ω) is a function that characterizes the time-frequency distribution, then it should satisfy the so-called marginal properties:

  Σ_{t=−∞}^{∞} D(t, ω) = |Y(ω)|²    (1.7.3)

and

  (1/2π) ∫_{−π}^{π} D(t, ω) dω = |y(t)|²    (1.7.4)

Use intuitive arguments to explain why the previous conditions are desirable properties of a time-frequency distribution. Next, show that the so-called Rihaczek distribution,

  D(t, ω) = y(t) Y*(ω) e^{−iωt}    (1.7.5)

satisfies conditions (1.7.3) and (1.7.4). (For treatments of the time-frequency distributions, the reader is referred to [THERRIEN 1992] and [COHEN 1995].)
Exercise 1.3: Two Useful Z-Transform Properties
(a) Let h_k be an absolutely summable sequence, and let H(z) = Σ_{k=−∞}^{∞} h_k z^{−k} be its Z-transform. Find the Z-transforms of the following two sequences: (i) h_{−k}; (ii) g_k = Σ_{m=−∞}^{∞} h_m h*_{m−k}.
(b) Show that, if z_i is a zero of A(z) = 1 + a_1 z^{−1} + ... + a_n z^{−n}, then (1/z_i*) is a zero of A*(1/z*) (where A*(1/z*) = [A(1/z*)]*).

Exercise 1.4: A Simple ACS Example
Let y(t) be the output of a linear system, as in Figure 1.1, with filter H(z) = (1 + b_1 z^{−1})/(1 + a_1 z^{−1}), whose input is zero-mean white noise with variance σ². (The ACS of such an input is σ² δ_{k,0}.)
(a) Find r(k) and φ(ω) analytically in terms of a_1, b_1, and σ².
(b) Verify that r(−k) = r*(k) and that |r(k)| ≤ r(0). Also verify that, when a_1 and b_1 are real, r(k) can be written as a function of |k|.
Exercise 1.5: Alternative Proof that |r(k)| ≤ r(0)
We stated in the text that (1.3.4) follows from (1.3.6). Provide a proof of that statement. Also, find an alternative, simple proof of (1.3.4) by using (1.3.8).

Exercise 1.6: A Double Summation Formula
A result often used in the study of discrete-time random signals is the summation formula

  Σ_{t=1}^{N} Σ_{s=1}^{N} f(t − s) = Σ_{τ=−N+1}^{N−1} (N − |τ|) f(τ)    (1.7.6)
where f(·) is an arbitrary function. Provide a proof of this formula.

Exercise 1.7: Is a Truncated Autocovariance Sequence (ACS) a Valid ACS?
Suppose that {r(k)}_{k=−∞}^{∞} is a valid ACS; thus, Σ_{k=−∞}^{∞} r(k) e^{−iωk} ≥ 0 for all ω. Is it possible that, for some integer p, the partial (or truncated) sum

  Σ_{k=−p}^{p} r(k) e^{−iωk}

is negative for some ω? Justify your answer.
Exercise 1.8: When Is a Sequence an Autocovariance Sequence?
We showed in Section 1.3 that, if {r(k)}_{k=−∞}^{∞} is an ACS, then R_m ≥ 0 for m = 0, 1, 2, .... We also implied that the first definition of the PSD in (1.3.7) satisfies ...; thus,

  φ_y(ω) = Σ_{k=−m}^{m} r_y(k) e^{−iωk}    (1.7.8)

Equation (1.7.8), and the fact that r_y(−k) = r_y*(k), mean that ...

Exercise C1.13: DTFT Computations Using Two-Sided Sequences
(a) ... how to compute φ(ω) from r(k) on a computer. We are given an ACS:

  r(k) = (M − |k|)/M for |k| ≤ M;  r(k) = 0 otherwise    (1.7.14)

Generate r(k) for M = 10. Form, in MATLAB, a vector x of length L = 256 as

  x = [r(0), r(1), ..., r(M), 0, ..., 0, r(−M), ..., r(−1)]

Verify that xf = fft(x) gives φ(ω_k) for ω_k = 2πk/L. (Note that the elements of xf should be nonnegative and real.) Explain why this particular choice of x is needed, citing appropriate circular shift and zero-padding properties of the DTFT. Note that xf often contains a very small imaginary part due to computer roundoff error; replacing xf by real(xf) truncates this imaginary component and leads to more expected results when plotting. A word of caution: do not truncate the imaginary part unless you are sure it is negligible; the command zf = real(fft(z)) when

  z = [r(−M), ..., r(−1), r(0), r(1), ..., r(M), 0, ..., 0]

gives erroneous "spectral" values; try it and explain why it does not work.

(b) Alternatively, since we can readily derive the analytical expression for φ(ω), we can instead work backwards. Form a vector

  yf = [φ(0), φ(2π/L), φ(4π/L), ..., φ((L − 1)2π/L)]

and find y = ifft(yf). Verify that y closely approximates the ACS.
(c) Consider the ACS r(k) in Exercise C1.12; let a_1 = −0.9 and b_1 = 0, and set σ² = 1. Form a vector x as before, with M = 10, and find xf. Why is xf not a good approximation of φ(ω_k) in this case? Repeat the experiment for M = 127 and L = 256; is the approximation better for this case? Why? We can again work backwards from the analytical expression for φ(ω). Form a vector

  yf = [φ(0), φ(2π/L), φ(4π/L), ..., φ((L − 1)2π/L)]

and find y = ifft(yf). Verify that y closely approximates the ACS for large L (say, L = 256), but poorly approximates the ACS for small L (say, L = 20). By citing properties of inverse DTFTs of infinite, two-sided sequences, explain how the elements of y relate to the ACS r(k) and why the approximation is poor for small L. Based on this explanation, give a "rule of thumb" for a choice of L that gives a good approximation of the ACS.

(d) We have seen that the fft command results in spectral estimates for ω ∈ [0, 2π) instead of the more commonly used range ω ∈ [−π, π). The MATLAB command fftshift can be used to exchange the first and second halves of the fft output to make it correspond to the frequency range ω ∈ [−π, π). Similarly, fftshift can be used on the output of the ifft operation to "center" the zero lag of an ACS. Experiment with fftshift to achieve both of these results. What frequency vector w is needed so that the command plot(w, fftshift(fft(x))) gives the spectral values at the proper frequencies? Similarly, what time vector t is needed to get a proper plot of the ACS with stem(t, fftshift(ifft(xf)))? Do the results depend on whether the vectors are even or odd in length?
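As a possible starting point for part (a), the following minimal MATLAB sketch assumes the ACS (1.7.14) and the values M = 10 and L = 256 from the exercise statement:

    % Sketch for Exercise C1.13(a): evaluating phi(w_k) from a two-sided ACS.
    M = 10;  L = 256;
    k = 0:M;
    rpos = (M - k)/M;                                   % r(0), ..., r(M)
    x = [rpos, zeros(1, L-2*M-1), fliplr(rpos(2:end))]; % negative lags wrap around
    xf = real(fft(x));                                  % phi(w_k), w_k = 2*pi*k/L
    stem(2*pi*(0:L-1)/L, xf)      % the values should be nonnegative and real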
Exercise C1.14: Relationship between the PSD and the Eigenvalues of the ACS Matrix
An interesting property of the ACS matrix R in equation (1.3.5) is that, for large dimensions m, its eigenvalues are close to the values of the PSD φ(ω_k) for ω_k = 2πk/m, k = 0, 1, ..., m − 1. (See, for example, [GRAY 1972].) We verify this property here: Consider the ACS in Exercise C1.12, with the values a_1 = −0.9, b_1 = 0.8, and σ² = 1.

(a) Compute a vector phi that contains the values of φ(ω_k) for ω_k = 2πk/m, with m = 256 and k = 0, 1, ..., m − 1. Plot a histogram of these values with hist(phi). Also useful is the cumulative distribution of the values of phi (plotted on a logarithmic scale), which can be found with the command semilogy((1/m:1/m:1), sort(phi)).

(b) Compute the eigenvalues of R in equation (1.3.5) for various values of m. Plot the histogram of the eigenvalues and plot their cumulative distribution. Verify that, as m increases, the cumulative distribution of the eigenvalues approaches the cumulative distribution of the φ(ω) values. Similarly, the histograms also approach the histogram for φ(ω), but it is easier to see this result by using cumulative distributions than by using histograms.
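A possible MATLAB starting point, assuming the ARMA(1,1) PSD of Exercise C1.12 with the parameter values just given (and obtaining the ACS numerically by inverse DFT of the PSD), is sketched below:

    % Sketch for Exercise C1.14: eigenvalues of the ACS matrix versus PSD samples.
    a1 = -0.9; b1 = 0.8; s2 = 1; m = 256;
    w = 2*pi*(0:m-1)/m;
    phi = s2 * abs(1 + b1*exp(-1i*w)).^2 ./ abs(1 + a1*exp(-1i*w)).^2;
    r = real(ifft(phi));               % approximate r(0), ..., r(m-1)
    R = toeplitz(r);                   % ACS matrix of equation (1.3.5)
    lam = eig(R);                      % eigenvalues of R
    semilogy((1/m:1/m:1), sort(phi)); hold on
    semilogy((1/m:1/m:1), sort(lam), '--'); hold off
    legend('sorted PSD samples', 'sorted eigenvalues')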
2 Nonparametric Methods
2.1 INTRODUCTION
The nonparametric methods of spectral estimation rely entirely on the definitions (1.3.7) and (1.3.10) of the PSD to provide spectral estimates. These methods constitute the "classical means" for PSD estimation [JENKINS AND WATTS 1968]. The present chapter reviews the main nonparametric methods, their properties, and the Fast Fourier Transform (FFT) algorithm for their implementation. A related discussion is to be found in Chapter 5, where the nonparametric approach to PSD estimation is given a filter-bank interpretation.

We first introduce two common spectral estimators, the periodogram and the correlogram, derived directly from (1.3.10) and (1.3.7), respectively. These methods are then shown to be equivalent under weak conditions. The periodogram and correlogram methods provide reasonably high resolution for sufficiently long data lengths, but are poor spectral estimators, because their variance is high and does not decrease with increasing data length. (In Chapter 5, we provide an interpretation of the periodogram and correlogram methods as a power estimate based on a single sample of a filtered version of the signal under study; it is thus not surprising that the periodogram or correlogram variance is large.) The high variance of the periodogram and correlogram methods motivates the development of modified methods that have lower variance, at the cost of reduced resolution. Several modified methods have been introduced, and we present some of the most popular ones. We show them all to be, more or less, equivalent in their properties and performance for large data lengths.
2.2 PERIODOGRAM AND CORRELOGRAM METHODS

2.2.1 Periodogram

The periodogram method relies on the definition (1.3.10) of the PSD. Neglecting the expectation and the limit operation in (1.3.10), which cannot be performed when the only available information on the signal consists of the samples {y(t)}_{t=1}^{N}, we get

  φ̂_p(ω) = (1/N) | Σ_{t=1}^{N} y(t) e^{−iωt} |²    (Periodogram)    (2.2.1)

One of the first uses of the periodogram spectral estimator, (2.2.1), has been in determining possible "hidden periodicities" in time series, which may be seen as a motivation for the name of this method [SCHUSTER 1900].
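For concreteness, a minimal MATLAB sketch of (2.2.1), evaluated on a uniform Fourier grid via the FFT, follows; the data y and the grid size L are illustrative assumptions:

    % Sketch: periodogram (2.2.1) evaluated at w_k = 2*pi*k/L via the FFT.
    N = 256;
    y = randn(N,1);                 % example data (replace with your samples)
    L = 1024;                       % zero padding refines the frequency grid
    phip = abs(fft(y, L)).^2 / N;   % phi_p(w_k), k = 0, ..., L-1
    w = 2*pi*(0:L-1)/L;
    plot(w, 10*log10(phip))         % periodogram in dB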
2.2.2 Correlogram

The correlation-based definition (1.3.7) of the PSD leads to the correlogram spectral estimator [BLACKMAN AND TUKEY 1959]:

  φ̂_c(ω) = Σ_{k=−(N−1)}^{N−1} r̂(k) e^{−iωk}    (Correlogram)    (2.2.2)

where r̂(k) denotes an estimate of the covariance lag r(k), obtained from the available sample {y(1), ..., y(N)}. When no assumption is made on the signal under study, except for the stationarity assumption, there are two standard ways to obtain the sample covariances required in (2.2.2):

  r̂(k) = (1/(N − k)) Σ_{t=k+1}^{N} y(t) y*(t − k),  0 ≤ k ≤ N − 1    (2.2.3)

and

  r̂(k) = (1/N) Σ_{t=k+1}^{N} y(t) y*(t − k),  0 ≤ k ≤ N − 1    (2.2.4)

The sample covariances for negative lags are then constructed by using the property (1.3.3) of the covariance function:

  r̂(−k) = r̂*(k),  k = 0, ..., N − 1    (2.2.5)
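A direct MATLAB sketch of the two estimators (2.2.3)-(2.2.5) is given below; the function name and interface are illustrative assumptions:

    % Sketch: standard biased (2.2.4) and unbiased (2.2.3) ACS estimates
    % for lags k = 0, ..., N-1; negative lags follow from (2.2.5).
    function [rb, ru] = acs_estimates(y)
    y = y(:);
    N = length(y);
    rb = zeros(N,1);  ru = zeros(N,1);
    for k = 0:N-1
        s = sum(y(k+1:N) .* conj(y(1:N-k)));   % sum_t y(t) y*(t-k)
        rb(k+1) = s / N;                       % biased estimate (2.2.4)
        ru(k+1) = s / (N - k);                 % unbiased estimate (2.2.3)
    end
    end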
The estimator (2.2.3) is called the standard unbiased ACS estimate, and (2.2.4) is called the standard biased ACS estimate. The biased ACS estimate is most commonly used, for the following reasons:

• For most stationary signals, the covariance function decays rather rapidly, so that r(k) is quite small for large lags k. Comparing the definitions (2.2.3) and (2.2.4), it can be seen that r̂(k) in (2.2.4) will be small for large k (provided N is reasonably large), whereas r̂(k) in (2.2.3) could take large and erratic values for large k, as it is obtained by averaging only a few products in such a case (in particular, only one product for k = N − 1!). This observation implies that (2.2.4) is likely to be a more accurate estimator of r(k) than (2.2.3) for medium and large values of k (compared with N). For small values of k, the two estimators in (2.2.3) and (2.2.4) can be expected to behave in a similar manner.

• The sequence {r̂(k), k = 0, ±1, ±2, ...} obtained with (2.2.4) is guaranteed to be positive semidefinite (as it should be; see equation (1.3.5) and the related discussion), but this is not the case for (2.2.3). This fact is especially important for PSD estimation, since a sample covariance sequence that is not positive definite, when inserted in (2.2.2), may lead to negative spectral estimates, and this is undesirable in most applications.

When the sample covariances (2.2.4) are inserted in (2.2.2), it can be shown that the spectral estimate so obtained is identical to (2.2.1). In other words, we have the following result:

  φ̂_c(ω), when evaluated by using the standard biased ACS estimates, coincides with φ̂_p(ω).    (2.2.6)
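Before turning to the proof, the claim (2.2.6) is easy to check numerically; a minimal MATLAB sketch (with illustrative data and variable names) follows:

    % Numerical check of (2.2.6): the correlogram with biased ACS estimates
    % coincides with the periodogram on the Fourier grid w_k = 2*pi*k/N.
    N = 64;  y = randn(N,1) + 1i*randn(N,1);   % example complex-valued data
    phip = abs(fft(y)).^2 / N;                 % periodogram (2.2.1)
    rb = zeros(N,1);
    for k = 0:N-1
        rb(k+1) = sum(y(k+1:N) .* conj(y(1:N-k))) / N;   % biased ACS (2.2.4)
    end
    x = [rb; 0; conj(rb(N:-1:2))];    % lags 0..N-1, zero pad, lags -(N-1)..-1
    phic = real(fft(x));              % correlogram (2.2.2) on a 2N-point grid
    max(abs(phic(1:2:end) - phip))    % ~ machine precision, confirming (2.2.6)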
A simple proof of (2.2.6) runs as follows: Consider the signal

  x(t) = (1/√N) Σ_{k=1}^{N} y(k) e(t − k)    (2.2.7)

where {y(k)} are considered to be fixed (nonrandom) constants and e(t) is a white noise of unit variance: E{e(t) e*(s)} = δ_{t,s} (= 1 if t = s; and = 0 otherwise). Hence, x(t) is the output of a filter with the following transfer function:

  Y(ω) = (1/√N) Σ_{k=1}^{N} y(k) e^{−iωk}

Since the PSD of the input to the filter is given by φ_e(ω) = 1, it follows from (1.4.5) that

  φ_x(ω) = |Y(ω)|² = φ̂_p(ω)    (2.2.8)
On the other hand, a straightforward calculation gives (for k ≥ 0)

  r_x(k) = E{x(t) x*(t − k)}
         = (1/N) Σ_{p=1}^{N} Σ_{s=1}^{N} y(p) y*(s) E{e(t − p) e*(t − k − s)}
         = (1/N) Σ_{p=1}^{N} Σ_{s=1}^{N} y(p) y*(s) δ_{p,k+s}
         = (1/N) Σ_{p=k+1}^{N} y(p) y*(p − k)
         = r̂(k) given by (2.2.4), for k = 0, ..., N − 1;  and 0, for k ≥ N    (2.2.9)

Inserting (2.2.9) in the definition (1.3.7) of the PSD yields the following alternative expression for φ_x(ω):

  φ_x(ω) = Σ_{k=−(N−1)}^{N−1} r̂(k) e^{−iωk} = φ̂_c(ω)    (2.2.10)
Comparing (2.2.8) and (2.2.10) concludes the proof of the claim (2.2.6).

The equivalence of the periodogram and correlogram spectral estimators can be used to derive their properties simultaneously. These two methods are shown in Section 2.4 to provide poor estimates of the PSD. There are two reasons for this, and both can be explained intuitively by using φ̂_c(ω):

• The estimation errors in r̂(k) are on the order of 1/√N for large N (see Exercise 2.4), at least for |k| not too close to N. Because φ̂_c(ω) = φ̂_p(ω) is a sum that involves (2N − 1) such covariance estimates, the difference between the true and estimated spectra will be a sum of "many small" errors. Hence, there is no guarantee that the total error will die out as N increases. The spectrum estimation error is even larger than what is suggested by the preceding discussion, because errors in {r̂(k)}, for |k| close to N, are typically of an order larger than 1/√N. The consequence is that the variance of φ̂_c(ω) does not go to zero as N increases.

• In addition, if r(k) converges slowly to zero, then the periodogram estimates will be biased. Indeed, for lags |k| close to N, r̂(k) will be a poor estimate of r(k), since r̂(k) is the sum of only a few lag products that are divided by N (see equation (2.2.4)). Thus, r̂(k) will be much closer to zero than r(k) is; in fact, E{r̂(k)} = [(N − |k|)/N] r(k), and the bias is significant for |k| close to N if r(k) is not close to zero in this region. If r(k) decays rapidly to zero, the bias will be small and will not contribute significantly to the total error in φ̂_c(ω); however, the nonzero variance just discussed will still be present.

Both the bias and the variance of the periodogram are discussed more quantitatively in Section 2.4. Another intuitive explanation for the poor statistical accuracy of the periodogram and correlogram methods is given in Chapter 5, where it is shown, roughly speaking, that these methods
2.8.3 Data- and Frequency-Dependent Temporal Windows: The Apodization Approach

  φ̂(k) = (1/N) | Y(k) − (β/2) [Y(k − 1) + Y(k + 1)] |²,  k = 2, ..., N − 1    (2.8.25)

Furthermore, in (2.8.25), β is the solution to the following apodization design problem:

  min_{β ∈ [0,1]} | Y(k) − (β/2) [Y(k − 1) + Y(k + 1)] |²    (2.8.26)
The unconstrained minimizer of the preceding function is given by

  β₀ = Re[ 2Y(k) / (Y(k − 1) + Y(k + 1)) ]    (2.8.27)

Because the function in (2.8.26) is quadratic in β, it follows that the constrained minimizer of (2.8.26) is given by

  β = 0, if β₀ < 0;  β = β₀, if β₀ ∈ [0, 1];  β = 1, if β₀ > 1    (2.8.28)
Remark: It is interesting to note, from (2.8.28), that a change of the value of α in the window expression (2.8.20) will affect the apodization (optimal) window in a more complicated way than just a simple scaling. Indeed, if we change the value of α, for instance to α = 0.75, then the interval for β becomes β ∈ [0, 0.75], and this modification will affect the apodization window nonlinearly via (2.8.28). ■
The apodization-based windowed periodogram is obtained simply by using the β given by (2.8.28) in (2.8.25). Hence, despite the fact that the apodization window is both frequency- and data-dependent (via β in (2.8.27)-(2.8.28)), the implementation of the corresponding spectral estimate is only marginally more demanding computationally than is the implementation of an unwindowed periodogram. Compared with the latter, however, the apodization-based windowed periodogram has a considerably reduced leakage problem and essentially the same resolution. (See [STANKWITZ, DALLAIRE, AND FIENUP 1994; DEGRAAF 1994] for numerical examples illustrating this fact.)
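The following MATLAB sketch assembles (2.8.25)-(2.8.28) into a complete estimator; the handling of the endpoint bins and the variable names are illustrative assumptions:

    % Sketch: apodization-based windowed periodogram, (2.8.25)-(2.8.28).
    Y = fft(y(:));                     % unwindowed DFT of the data y
    N = length(Y);
    phi = abs(Y).^2 / N;               % unwindowed periodogram at the endpoints
    for k = 2:N-1
        s  = Y(k-1) + Y(k+1);
        b0 = real(2*Y(k) / s);         % unconstrained minimizer (2.8.27)
        b  = min(max(b0, 0), 1);       % constrained minimizer (2.8.28)
        phi(k) = abs(Y(k) - (b/2)*s)^2 / N;   % apodized estimate (2.8.25)
    end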
2.8.4 Estimation of Cross-Spectra and Coherency Spectra
As can be seen from Complement 1.6.1, the estimation of the cross-spectrum φ_yu(ω) of two stationary signals, y(t) and u(t), is a useful operation in the study of possible linear (dynamic) relations between y(t) and u(t). Let z(t) denote the bivariate signal

  z(t) = [y(t)  u(t)]^T

and let

  φ̂(ω) = (1/N) Z(ω) Z*(ω)    (2.8.29)

denote the unwindowed periodogram estimate of the spectral density matrix of z(t). In equation (2.8.29),

  Z(ω) = Σ_{t=1}^{N} z(t) e^{−iωt}

is the DTFT of {z(t)}_{t=1}^{N}. Partition φ̂(ω) as

  φ̂(ω) = [ φ̂_yy(ω)  φ̂_yu(ω) ;  φ̂_uy(ω)  φ̂_uu(ω) ]    (2.8.30)

As indicated by the notation previously used, estimates of φ_yy(ω), of φ_uu(ω), and of the cross-spectrum φ_yu(ω) may be obtained from the corresponding elements of φ̂(ω). We first show that the estimate of the coherence spectrum obtained from (2.8.30) is always such that

  |Ĉ_yu(ω)| = 1  for all ω    (2.8.31)

and, hence, it is useless. To see this, note that the rank of the 2 × 2 matrix in (2.8.30) is equal to one (see Result R22 in Appendix A), so we must have

  |φ̂_yu(ω)|² = φ̂_yy(ω) φ̂_uu(ω)

which readily leads to the conclusion that the coherency spectrum estimate obtained from the elements of φ̂(ω) is bound to satisfy (2.8.31) and hence is meaningless. This result is yet another indication that the unwindowed periodogram is a poor estimate of the PSD. Consider next a windowed Blackman-Tukey periodogram estimate of the cross-spectrum:
  φ̂_yu(ω) = Σ_{k=−M}^{M} w(k) r̂_yu(k) e^{−iωk}    (2.8.32)

where w(k) is the lag window, and r̂_yu(k) is some usual estimate of r_yu(k). Unlike r̂_yy(k) or r̂_uu(k), r̂_yu(k) does not necessarily peak at k = 0; moreover, it is in general not an even function.
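A minimal MATLAB sketch of (2.8.32), using the biased cross-covariance estimates given later in this complement, is shown below; the function name and the Bartlett-type lag window are illustrative assumptions:

    % Sketch: lag-windowed cross-spectral estimator (2.8.32) on an L-point grid.
    function phi = bt_cross_spectrum(y, u, M, L)
    y = y(:);  u = u(:);  N = length(y);
    r = zeros(2*M+1, 1);                 % r(M+1+k) holds r_yu(k), |k| <= M
    for k = 0:M
        r(M+1+k) = sum(y(k+1:N) .* conj(u(1:N-k))) / N;   % lags k >= 0
    end
    for k = 1:M
        r(M+1-k) = sum(y(1:N-k) .* conj(u(k+1:N))) / N;   % lags k < 0
    end
    wlag = 1 - abs(-M:M).'/(M+1);        % Bartlett-type lag window (assumed)
    rw = r .* wlag;
    x = [rw(M+1:2*M+1); zeros(L-2*M-1, 1); rw(1:M)];   % wrap negative lags
    phi = fft(x);                        % phi_yu at w_k = 2*pi*(k-1)/L
    end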
The choice of the lag window for estimating cross-spectra may hence be governed by different rules from those commonly used in the autospectrum estimation. The main task of a lag window is to retain the "essential part" of the covariance sequence in the defining equation for the spectral density. In this way, the bias is kept small, and the variance is also reduced as the noisy tails of the sample covariance sequence are weighted out. For simplicity of discussion, assume that most of the area under the plot of r_yu(k) is concentrated about k = k₀, with |k₀| « N. As r̂_yu(k) is a reasonably accurate estimate of r_yu(k), provided |k| « N, we can assume that {r̂_yu(k)} and {r_yu(k)} have similar shapes. In such a case, one can redefine (2.8.32) as

  φ̂_yu(ω) = Σ_{k=−M}^{M} w(k − k₀) r̂_yu(k) e^{−iωk}

where the lag window w(s) is of the type recommended for autospectrum estimation. The choice of an appropriate value for k₀ in the preceding cross-spectral estimator is essential; if k₀ is selected poorly, the following situations can occur:
Finally, let us look at the cross-spectrum estimators derived from (2.8.30) and (2.8.32), respectively, with a view to establishing a relation between them. Partition Z(w) as
Z(W)=[~~~] and observe that 1 -2rrN
f]'( _]'(
Y(w)U*(w)e iwk dw
1 = -2rrN 1 - N
1]'( L L _]'(
N
I N
N
t=1
$=1
. .
y(t)u*(s)e-1W(t-s) e ,wk dw
N
L L 1=1
N
y(t)U*(S)Ok,I-S
s=1
"LJ lE[I,N)n[l+k,N+k)
~r
y(t)u*(t - k) = yu (k)
(2.8.33)
70
Chapter 2 Nonparametric Methods, ~
where ryu(k) can be rewritten in the following more familiar form:
1 N
.
N
L
y(t)u*(t - k),
k = 0, 1,2, ...
t=k+l
1 N
N+k
L
y(t)u*(t - k),
k = 0, -1, -2, ...
t=1
Let
1 ~u (w) = N Y (w)U*(w) A
denote the unwindowed cross-spectral periodogram-like estimator, given by the off-diagonal element of ~(w) in (2.8.30). With this notation, (2.8.33) can be written more compactly as
">{
By using the previous equation in (2.8.32), we obtain
(2.8.34)
".:...
-.
where W(w) = L~-oo w(k)e- iwk is the spectral window. The previous equation should be compared with the similar equation, (2.5.3), that holds in the case of autospectra. For implementation purposes, one can use the discrete approximation of (2.8.34) given that
__11.
where Wk = ~ k are the Fourier frequencies. The periodogram (cross-spectral) estimate that appears in the above equation can be computed efficiently by means of an FFf algorithm. 2.8.5 More Time-Bandwidth Product Results The time (or duration)-bandwidth product result (2.6.5) relies on the assumptions that both w (t) and W(w) have a dominant peak at the origin, that they both are real valued, and that they take
Section 2.8
71
Complements
on nonnegative values only. While most window-like signals (nearly) satisfy these assumptions, many other signals do not satisfy them. In this complement, we obtain time-bandwidth product results that apply to a much broader class of signals. We begin by showing how the result (2.6.5) can be extended to a more general class of signals. Let x(t) denote a general discrete-time sequence, and let X(w) denote its DITT. Both x (t) and X (w) are allowed to take negative or complex values, and neither is required to peak at the origin. Let to and Wo denote the maximum points of Ix(t)1 and IX(w)l, respectively. The time width (or duration) and bandwidth definitions in (2.6.1) and '(2.6.2) are modified as follows:
and
Because x(t) and X(w) form a Fourier transform pair, we obtain 00
L
IX(wo)1 =
(=-00
and Ix (to) I =
00
x(t)e-
iwot
:5
L
!x(t)1
t=-oo
1_1 j7T X (w)eiW(O d wi :5 _1 jlr IX(w)1 dw 2.7l'
-7T
2.7l'
-lr
which implies that (2.8.35) This result, similar to (2.6.5), can be used to conclude the following: A sequence {x (t)} cannot be narrow in both time and frequency.
(2.8.36)
More precisely, if x(t) is narrow in one domain, it must be wide in the other domain. However, the inequality result (2.8.35), unlike (2.6.5), does not necessarily imply that ~e decreases whenever Ne increases (or vice versa). Furthermore, the result (2.8.35)-again unlike (2.6.5)-does not exclude the possibility that the signal is broad in both domains. In fact, in the general class of signals to which (2.8.35) applies, there are signals that are broad both in the time and in the frequency domain. (For such signals Ne ~e » 1; see, for example, [PAPOULIS 1977].) Evidently, the significant consequence of (2.8.35) is (2.8.36), which is precisely what makes the duration-bandwidth result an important one. The duration-bandwidth product type of result (such as (2.6.5) or (2.8.35), and later (2.8.40») has sometimes been referred to by using the generic name uncertainty principle, in an attempt
Chapter 2 Nonpa rametr ic Metho ds
72
nics. (Briefly stated, the to relate it to the Heisenberg Uncertainty Principle in quantum mecha of a particle cannot simulHeisenberg Uncertainty Principle asserts that the position and velocity one can argue as follows: taneously be specified to arbitrary precision.) To support the relationship, to Ne and that we are asked Suppose that we are given a sequence with (equivalent) duration equal t in a certain narrow to use a linear filtering device to determine the sequence's spectral conten fact, it should be (much) band. Because the filter impulse response cannot be longer than Ne (in bandwidth can be on shorter!), it follows from the time-bandwidth product result that the filter's t in fine bands on an the order of liNe, but not smaller. Hence, the sequence's spectral conten is "uncertain." This is, in order smaller than liNe cannot be ascertained exactly and therefore methods discussed in this effect, the type of limitation that applies to the nonparametric spectral to spectr al estima tion and chapter. However, this way of arguing is related to a specific appro ach we will see in later chapters of not to afund ament allimi tation associ ated with the signal itself. (As e the "high resolution" this text, there are parametric methods of spectral analysis that can provid less than liNe.) necessary to determine the spectral content in bands that are on an order product result. The Next, we present another, slightly more general form of time-bandwidth sense whenever Ix(t)1 and definitions of duration and bandwidth used to obtain (2.8.35) make full able results IX (w) I are single pulse-like waveforms, though these definitions might give reason broadness ions of the in many other instances as well. There are several other possible definit ion used next and the of a waveform in either the time or the frequency domain. The definit most general. corresponding time-bandwidth product result appear to be among the Let x(t) (2.8.37) x(t) =
JL~-oo Ix(t)1 2
and
X(w)
=
(2.8.38)
X(w)
J2~ J!:.Jr
IX(w)! 2dw
(2.8.38) are equal to each By Parseval's theorem (see (1.2.6)), the denominators in (2.8.37) and notation. Observe that other. Therefore, X(w) is the DTFf of x(t), as is already indicated by
L 00
2
Ix(t)1 = -
I
2rr
1=-00
1 Jr
2 IX(w)1 dw = 1
-Jr
2 density functions in the Hence, both {Ix (t) 12 land {IX (w) 112rr} can be interpreted as probability The means and variances sense that they are nonnegative and that they sum or integrate to one. ing equations: associated with these two "probability" densities are given by the follow
Time Domain: 00
J-L =
L
t!x(t)1
2
1=-00 00
0'2
=
L 1=-00
(t - J-L)2 Ix (t)1
2
. -~
Section 2.8
73
Complements
Frequency Domain:
\',
The values of the "standard deviations" (J and p show whether the normalized functions Hi (t) j} and {IX (a» I}, respectively, are narrow or broad. Hence, we can use (J' and p as definitions for the duration and bandwidth, respectively, of the original functions {x (t)} and {X (a»}. In what follows, we assume that fJ- = 0,
v=o
(2.8.39)
For continuous-time signals, the zero-mean assumptions can always be made to hold by translating the origin appropriately on the time and frequency axes; see, for example, [COHEN 1995]. However, doing the same in the case of the discrete-time sequences considered here does not appear to be possible. Indeed, fJ- might not be integer valued, and the support of X(a» is finite and, hence, affected by translation. Consequently, in the present case, the zero-mean assumption introduces some restriction; nevertheless, we impose it to simplify the analysis. According to this discussion and assumption (2.8.39), we define the (equivalent) time width and bandwidth of x (t) as follows:
In the remainder of this complement, we prove the following time-bandwidth-product result:
(2.8.40)
which holds true under (2.8.39) and· the weak additional assumption that (2.8.41) To prove (2.8.40), first we note that
X/(a»
A
dX(a» dw
= -i ~
L...J
t=-oo
tx(t)e-iwt
Nonparametric Metho ds
Chapter 2
74
'"
..
':-'
theorem) that Hence, iX'(w ) is the DTFT of {txU)} , which implies (by Parseval's (2.8.42)
Result R23 in Appendix A), Consequently, by the Cauchy-Schwartz inequality for functions (see
Nefie
1 IX' 7f
1 = [ 2Jr
-7f
11 111
2
]
(w)1 d W
1/2 [
(2Jr)
)dw )X'(w wX*(w -
-7f
1 = 2(2Jr)2
+
)X- , (w)dw wX*(w
Ii:
-7f
2
2
w 1X (w)1 dw
]
1/2
I
7f
-7f
7f
(2Jr)3
7f
::: - -12
1
1
.....
I (2.8.43)
WX(W)X*'(W)dWI}
a simple calculation.) Hence, (The first equality above follows from (2.8.42) and the last one from
Nefie 2: =
1
2(2Jr)
1
2(2Jr)
211 211
7f
w[X*( W)X'( w) +X(W )X*'(W )]dwl
-7f 7f
w
[IX(w)1
2
]' dwl
-7f
which, after integration by parts and use of (2.8.41), yields
I -
7f
1 - ::: 2(2Jr)2 wIX(w)1 21 -7f N- e f3e
1 7f
-
-7f
I
1 I2 dw = 2(2Jr) IX(w) .;.
and the proof is concluded. , starting from the inequality Remark: There is an alternative way to complete the proof above inequality than (2.8.40). in (2.8.43). In fact, as we will ~ee, this alternative proof yields a tighter Let .'
.;" -~-:
::~-;~
-~~~~
Section 2.9
Exercises
77 where
for unbiased ACS estimates for biased ACS estimates
f,,.
Hence, for large N, the standard dev iation of the ACS estimate is O( l/J F[) under we ak conditions on the true ACS {r(k )}. (b) For the special case that y(t ) is white Gaussian noise, show tha t cov {r(k ), r(l) } = 0 for k =1= l, and find a simple express ion for var {r(k )}.
Exercise 2.5: Another Proof of the Equality ~p(
.... ..
:'
'/::: ..
~.'
=
6) ~c(6) The pro of of the result (2.2.6) in the text introduces an auxiliary random sequence and treats the original data sequence as determinis tic (nonrandom). That pro of relies on sev eral results previously derived. A more direct proof of (2.2 .6) can be found using only (2.2.1), (2.2.2), and (2.2.4). Fin d such a pro of.
Exercise 2.6: A Compact Expressio n for the Sample ACS Show that the expressions for the sample ACS given in the text (eq uations (2.2.3) or (2.2.4) for k :::: 0 and (2.2.5) for k < 0) can be rewritten by using a single fon nul a as follows :
N
r(k ) = p
N
L Ly(P)y*(s)~s.P-k> p=1 s=1
where p =
k = 0, ±1 , ... , ±(N - 1)
(2.9.7)
Ii for (2.2.4) and p = N 21kl for (2.2.3).
Exercise 2.7: Yet Another
=
Proof of the Equality ~p(6) ~c(6) Use the compact expression for the sample ACS derived in Exercise 2.6 to obtain a very simple proof of (2.2.6). Exercise 2.8: Lin ear Transfonnatio n Interpreta
tion of the DFT Let F be the N x N matrix whose (k, t)th element is given by W let, where W is as defined in (2.3.2). The n the DFT, (2.3.3), can be written as a linear transfo nnation of the data vec tor y A [y( l) ... y(N )f, Y
A
[Y (0) ... Y (N - 1) f = Fy
(2.9.8)
Show that F is an orthogonal matrix that satisfies
~
N
FF*
=]
(2.9.9)
hods Chapter 2 Nonparametric Met
78 sform is and, as a result, that the inverse tran
(2.9.10)
* y= -1 F Y N
data vector y via an is nothing but a representation of the T DF the t tha lts resu se the from uence Deduce ns of F*) . Also, deduce that, if the seq um col the are tors vec is bas (the orthogonal basis in determines the n the Fourier coefficient vector, Y, the N, to al equ iod per a h wit ic {y (t)} is period .10) can be extended in effect, the inverse transform (2.9 , that and ; I.2 )}t= (y(t ce uen whole seq , yeN ), yeN + 0, yeN + 2), ... to include all samples y(l ),
en
::,
..' .
tor iodogram Is an Unbiased PSD Estima Per the ise, No ite Wh r Fo : 2.9 e Exercis a 2 and let with variance Let yet ) be a zero-mean white noise
(k = 0, ... , N - 1)
ted at the Fourier frequencies. denote its (normalized) DFI' evalua (a) Derive the covariances k,r =O , ... ,N -l ¢(W k) = n to conclude that the periodogram atio cul cal us vio pre the of lt resu (b) Use the of yet). IY (Wk)/2 is an unbiased estimator of the PSD intuitive ds for CJ) =I Wk as well. Present an hol ty per pro ss dne iase unb the er (c) Explain wheth explanation for your finding.
ogram by shrinking. Exercise 2.10: Shrinking the Period mean squared error (MSE) reduction on lt resu l era gen ple sim a ce odu First, we intr x. Assume that x is unbiased (i.e., of a true (and unknown) parameter Let
x be some estimate
E {x} =
x),
and let
a1
.
denote the MSE of x:
1= E {(x - x)2 }
a
(Since
x is unbiased, a1 also equals the variance of x.) For _
X
a fixed (nonrandom) p, let
A
=p x
as to make the MSE age coefficient" p can be chosen so rink "sh The x. of te ma esti r es the be ano a biased estimate of x; hence x trad is I, =I p for i, t tha te (No a1. of i (much) smaller than m value MSE of i, a], achieves its minimu the t tha w sho ly, cise pre re Mo off bias for variance.) (with respect to p),
..~
section 2.9
79
Exercises
for
Next, consider the application of the previous result to the periodogram. As we explained in the chapter, the periodogram-based spectral estimate is asymptotically unbiased and has an asymptotic MSE equal to the squared PSD value: ,_, as N
~ 00
Show that the "optimally shrunk" periodogram estimate is ;Pew)
= ¢p(w)/2
and that the MSE of ;p (w) is 1/2 the MSE of ¢P (w). Finally, comment on the general applicability of this extremely simple tool for MSE reduction.
Exercise 2.11: Asymptotic Maximum Likelihood Estimation of q, (U)) from ~p (U)) It follows from the calculations in Section 2.4 that, asymptotically in N, ¢P (w) has mean ¢ (w) and variance ¢2(w). In this exercise, we assume that ¢p(w) is (asymptotically) Gaussian distributed (which is not necessarily the case; however, the spectral estimator derived here under the Gaussian assumption may also be used when this assumption does not hold). Hence, the asymptotic probability density function of ¢p(w) is (we omit both the index p and the dependence on UJ to simplify the notation): 1
A
(¢ - ¢) 2¢2 A
P
~:.!
'.:.
'.
r ~::
~~,->.
computer Exercises
with
89
A2(Z) = 1 - 1.6408z- 1 + 2.2044z- 2 - 1.4808z- 3 + O.8145z- 4 B2(Z) = 1 + 1.5857z- 1 + O.9604z- 2
and N = 256. Repeat the experiments and comparisons in the broadband example for the narrowband process. ", Exercise C2.23: Periodogram-Based Estimators Applied to Measured Data Consider the data sets in the files sunspotdata.rnat and lynxdata.rnat. These files can be obtained from the text website www.prenhall.com/stoica. Apply periodogram-based estimation techniques (possibly after some preprocessing as in part (b)) to estimate the spectral content of these data. Try to answer the following questions:
(a) Are there sinusoidal components (or periodic structure) in the data? If so, how many components, and at what frequencies? (b) Nonlinear transformations and linear or polynomial trend removal are often applied before spectral analysis of a time series. For the lynx data, compare your spectral analysis results from the original data with those from the data transformed first by taking the logarithm of each sample and then by subtracting the sample mean of this logarithmic data. Does the logarithmic transformation make the data more sinusoidal in nature?
3 Parametric Methods for Rational Spectra
.,..- .>..
~.'
"I"'
:''';
3.1 INTRODUCTION
The principal difference between the spectral-estimation methods of Chapter 2 and those in this chapter is that, in Chapter 2, we imposed no assumption on the studied signal (except stationarity). The parametric or model-based methods of spectral estimation assume that the signal satisfies a generating model with known functional form and then proceed by estimating the parameters in the assumed model. The signal's spectral characteristics of interest are then derived from the estimated model. In those cases where the assumed model is a close approximation to the reality, it is no wonder that the parametric methods provide more accurate spectral estimates than the nonparametric techniques. The nonparametric approach to PSD estimation remains useful, though, in applications where. there is little or no information about the signal in question. Our discussion Of parametric methods for spectral estimation is divided into two parts. In this chapter, we discuss parametric methods for rational spectra, which form a dense set in the class of continuous spectra (see Section 3.2) [ANDERSON 1971; WEI 1990]; more precisely, we discuss methods for estimating the parameters in rational spectral models. The parametric methods of spectral analysis, unlike the nonparametric approaches, also require the selection of the structure (or order) of the spectral model. A review of methods that can be used to solve the structure-selection problem can be found in Appendix C. Furthermore, in Appendix B, we discuss the Cramer-Rao bound and the best accuracy achievable in the rational class of spectral models. However, we do not include detailed results on the statistical properties of the estimation methods discussed in the following sections, because (i) such results are readily available in the
.-'.
"."
90 .:~ '.~".
~..
'.,T.
91
Section 3.2 Signals with Rational Spectra
literature [KAY 1988; PRIESTLEY 1981; SODERSTROM AND STOICA 1989]; (ii) parametric methods provide consistent spectral estimates and hence (for large sample sizes, at least) the issue of statistical behavior is not so critical; and (iii) a detailed statistical analysis is beyond the scope of an introductory course. The second part of our discussion on parametric methods is contained in Chapter 4, where we consider discrete spectra, such as those associated with sinusoidal signals embedded in white noise. Mixed spectra (containing both continuous and discrete spectral components, such as in the case of sinusoidal signals corrupted by colored noise) are n~t covered explicitly in this text, but we remark that some methods in Chapter 4 can be extended to deal with such spectra as well.
3.2 SIGNALS WITH RATIONAL SPECTRA
A rational PSD is a rational function of e- iw (i.e., the ratio of two polynomials in e- iw ), (3.2.1) where Y-k = Yk* and P-k = pt· The Weierstrass theorem from calculus asserts that any continuous PSD can be approximated arbitrarily closely by a rational PSD of the form (3.2.1), provided the degrees m and n in (3.2.1) are chosen sufficiently large; that is, the rational PSDs fonn a dense set in the class ofall continuous spectra. This observation partly motivates the significant interest in the model (3.2.1) for ¢(ev) among the researchers in the "spectral estimation community." It is not difficult to show that, since ¢(ev) 2: 0, the rational spectral density in (3.2.1) can be factored as B(ev)!2 2 (1 A(ev)
¢(w) = - -
I
where
(12
(3.2.2)
is a positive scalar and A(ev) and B(ev) are the polynomials: A(ev)
(3.2.3)
B(w)
The result (3.2.2) can similarly be expressed in the Z-domain. With the notation ¢(z) = L:;=-m liZ-k / L:~=-n ~Z-k, we can factor ¢(z) as 2 B (z)B*(l/z*)
¢(z) =
(f
A(z)A*(l/z*)
where, for example, A(z) = 1 + alz- 1 + ... A*(l/z*)
= [A(l/z*)]*
+ anz- n = 1 + a~z + ... + a;zn
(3.2.4)
92
Chapter 3 Parametric Methods for Rational Spectra
Recall the notational convention in this text that we write, for example, A(z) and A(w) with the implicit understanding that, when we convert from a function of z to a function of w, we use the substitution z = e iw . We note that the zeroes and poles of 4> (z) are in symmetric pairs about the unit circle; if i9 Zi = re iO is a zero (pole) of ¢(z), then (l/zt) = (llr)e is also a zero (pole) (see Exercise 1.3). Under the assumption that ¢(z) has no pole with modulus equal to one, the region of convergence of ¢(z) includes the unit circle z = eiw . The result that (3.2.1) can be written, as in (3.2.2) and (3.2.4), is called the spectral factorization theorem. (See, for example, [SODERSTROM AND STOICA 1989; KAy 1988].) The next point of interest is to compare (3.2.2) with (1.4.9). This comparison leads to the following result: The arbitrary rational PSD in (3.2.2) can be associated with a signal obtained by filtering white noise of power a 2 through the rational filter with transfer function H(w) = B(w)IA(w).
(3.2.5)
The filtering referred to in (3.2.5) can be written in the time domain as B(z)
yet) = - e ( t )
(3.2.6)
A(z)y(t) = B (z)e(t)
(3.2.7)
A(z)
or, alternatively, as
where y (t) is the filter output and Z-l
= the unit delay operator (Z-ky(t) = yet - k»
e(t) = white noise with variance a 2
Hence, by means of the spectral factorization theorem, the parameterized model of ¢(w) is turned into a model of the signal itself. The spectral estimation problem can then be reduced to a problem of signal modeling. In the following sections, we present several methods for estimating the parameters in the signal model (3.2.7) and in two of its special cases (m = 0, and n = 0). A signal yet) satisfying the equation (3.2.6) is called an autoregressive moving average (ARMA or ARMA(n, m») signal. If B(z) = 1 in (3.2.6) (i.e., m = in (3.2.3», then y(t) is an autoregressive (AR or AR(n» signal; and yet) is a moving average (MA or MA(m)) signal if n = O. For easy reference, we summarize these naming conventions here:
°
= B(z)e(t)
ARMA:
A(z)y(t)
AR:
A(z)y(t) = e(t)
MA:
yet) = B(z)e(t)
(3.2.8)
Section 3.3
.;.
93
Covariance Structure of ARMA Processes
By assumption, ¢(w) is finite for all w values; as a result, A(z) cannot have any zero exactly on the unit circle. Furthermore, the poles and zeroes of ¢ (z) are in reciprocal pairs, so it is always possible to choose A(z) to have all its zeroes strictly inside the unit disc. The corresponding model (3.2.6) is then said to be stable. If we assume, for simplicity, that ¢(w) does not vanish at any w, then-similarly to the preceding-we can choose the polynomial B (z) so that it has all its zeroes inside the unit (open) disc. The corresponding model (3.2.6) is said to be of minimum phase. (See Exercise 3.1 for a motivation for the name "minimum '" phase.") We remark that, in the previous paragraph, we actually provided a sketch of the proof of the spectral factorization theorem. That discussion also showed that the spectral factorization problem associated with a rational PSD has multiple solutions, with the stable and minimumphase ARMA model being only one of them. In the next sections, we will consider the problem of estimating the parameters in this particular ARMA equation. When the final goal is the estimation of ¢(w), focusing on the stable and minimum-phase ARMA model is no restriction.
'
.~ ..
.
~
3.3 COVARIANCE STRUcrURE OF ARMA PROCESSES In this section, we derive an expression for the covariances of an ARMA process in terms of the parameters {ad?=l' {bdr=l' and 0"2. The expression provides a convenient method for estimating the ARMA parameters by replacing the true autocovariances with estimates obtained from data. Nearly all ARMA spectral estimation methods exploit this covariance structure either explicitly or implicitly; thus, it will be used widely in the remainder of the chapter. Equation (3.2.7) can be written as n
yet)
m
+ Laiy(t -
i)
= Lbje(t -
i=l
(b o = 1)
j),
(3.3.1)
j=O
Multiplying (3.3.1) by y*(t - k) and taking expectation yields n
r(k)
+L
m
air(k - i) =
i=l Since the filter H (z)
= B (z) / A (z)
L bjE {eU - j)y*(t -
k)}
j=o is asymptotically stable and causal, we can write 00
H(z) = B(z)/A(z) = LhkZ-k,
(ho =
k=O
which gives 00
yet) = H(z)e(t) = Lhke(t - k) k=O
1)
(3.3.2)
94
Chapter 3 Parametric Methods for Rational Spectra
Then the term E {e(t - j)y*(t - k)} becomes
l
E le(t - j)y'(t - k) = E {e(t - j)
~h;e'(t -
-.:,
k - s)}
00
= a
2
L h;8 ,k+s = a h*_k 2
j
j
s=O
where we use the convention that hk
= 0 for k
N. The vector 0 that minimizesf(O) is given (per Result R32 in Appendix A) by
e= _(y*y)-I(y*y)
(3.4.12)
where, as seen from (3.4.11), the definitions of Y and y depend on the choice of (N!, N2) considered. If N! = 1 and N2 = N + n, we have y(l)
o
o
y(2)
y(l)
0
o o
yen + 1) yen +2)
y(l) y(2)
yen) yen - 1) yen) yen + 1)
Y=
y= yeN)
(3.4.13) Y(N - 1) y (N - 2)
0 0
yeN)
0
o
o
yeN - n) yeN - n + 1)
yeN - 1) yeN)
o
yeN)
Notice the Toeplitz structure of Y, and also notice that y matches this Toeplitz structure when it is appended to the left of Y; that is, [y IY] also shares this Toeplitz structure. The two most common choices for Nl and N2 are the following: • Nl = 1, N2 = N + n (considered previously). This choice yields the so-called autocorrelation method. • Nl = n + 1, N2 = N. This choice corresponds to removing the first n and last n rows of Y and y in equation (3.4.13) and, hence, eliminates all the arbitrary zero values there. The estimate (3.4.12) with this choice of (Nl, N2) is often named the covariance method. We refer to this method as the covariance LS method or the IS method.
Chapter 3 Parametric Methods for Rational Spectra
98
le, the prewin dow metho d uses Other choices for Nl and N 2 have also been suggested. For examp and N 2 = N. Nl = 1 and N2 = N, and the postw indow metho d uses Nl = n + 1 ns to the Yule-W alker The least-squares methods can be interp reted as approx imate solutio a multiplicative constant, equations in (3.4.4) by recognizing that y*y and Y*y are, to within to show that, for the autocorfinite-sample estima tes of Rn and r n , respectively. In fact, it is easy y the biased ACS estima tes relatio n method, the eleme nts of (Y*Y) /N and (Y*y) /N are exactl ) as (2.2.4) used in the Yule-W alker AR estimate. Writing in (3.4.12
6=
-
[~ (Y'Y)
r
e
[~ (Y'y)]
'.;
we see the follow ing as a consequence: lent to the The autocorrelation metho d of least-s quares AR estimation is equiva Yule-W alker method. tion: that the matrix Y*Y Remark: We can now prove a claim made in the previous subsec sampl e {y(t)}~=l that is not in (3.4.12), with Y given by (3.4.13), is positiv e definite for any ent to show that rank(Y ) = identically equal to zero. To prove this claim, it is necess ary and suffici again we clearly have n. If y(l) =1= 0, then clearly rank(Y ) = n. If y(l) = and y(2) =1= 0, then • rank(Y ) = n, and so on.
°
unbias ed estima tes of R n For the LS estimator, (Y*Y) /(N - n) and (Y*y) /(N - n) are measu remen t data outsid e the and r n in equati ons (3.4.4) and (3.4.5), and they do not use any / (N - n) is not Toeplitz, so available interval 1 ::: t ::: N. On the other hand, the matrix (Y *Y) canno t be used (although the Levin son-D urbin or Delsar te-Gen in algorithms in the next section for example, [MARPLE 1987]). simila r fast algorithms for the LS metho d have been developed; see, estimates used by the As N increases, the difference betwe en the covari ance matrix sampl es (i.e., for N » 1), Yule-W alker and the LS methods diminishes. Consequently,_ for large the YW and LS estima tes of the AR param eters nearlY--Cillilcide. ance LS methods might For small or mediu m sampl e length s, the Yult1LWalker and covari Yule-W alker metho d is alway s behave differently. First, the estimated AR model obtain ed via the and Exerci se 3.8), where as guaran teed to be stable (see, for examp le, [STOICA ANDNEHORAI 1987] which one is interes ted in the the estima ted LS model could be unstable. For applications in model is often an impor tant AR model (and not just the AR spectral estimate), stability of the ility of the AR model provid ed requirement. It may, therefore, be though t that the potential instab estima ted LS models that are by the LS metho d is a significant drawb ack of this method. However, are simple means to "stabi lize" unstable appea r infrequently; moreover, when they do occur, there circle). Hence , to conclu de this them (for instance, by reflecting the unstab le poles inside the unit d, when compa red with the point, the lack of guaranteed stability is a drawb ack of the LS metho Yule-W alker metho d, but often not a seriou s one. Yule-Walker metho d, in Second, the LS metho d has been found to be more accurate than the e closer to the true values the sense that the estima ted parameters of the fonne r are on the averag sample statistical analysis than those of the latter [MARPLE 1987; KAy 1988]. Becau se the finitethis behav ior is not possib le at of these metho ds is underd evelop ed, a theoretical explan ation of
,.L
">r
~: '..
Section 3.5
99
Order-Recursive Solutions to the Yule-Walker Equations
this time. Only heuristic explanations are available. One such explanation is that the assumption that y (t) = 0 outside the interval 1 ~ t ~ N, and the corresponding zero elements in Y and y, result in bias in the Yule-Walker estimates of the AR parameters. When N is not much greater than n, this bias can be significant. 3.5 ORDER-RECURSIVE SOLUTIONS TO THE YULE-WALKER EQUATIONS
In most applications, a priori information about the true order n is lacking, so AR models with different orders have to be tested. Hence, the Yule-Walker system of equations, (3.4.6), has to be solved for n = 1 up to n = nmax (some prespecified maximum order); see Appendix C. By using a general solving method, this task requires O(n~ax) flops. This can be a significant computational burden if n max is large. This is, for example, the case in the applications dealing with narrowband signals, where values of 50 or even 100 for nmax are not uncommon. In such applications, it can be important to reduce the number of flops required to calculate {t9n , a';} in (3.4.6). In order to be able to do so, the special algebraic structure of (3.4.6) should be exploited, as explained next. The matrix R n+ I in the Yule-Walker system of equations is highly structured: it is Hermitian and Toeplitz. The first algorithm that exploited this fact to compute {en, a;}:~i in n~ax flops was the Levinson-Durbin algorithm (LDA) [LEVINSON 1947; DURBIN 1960]. The number of flops required by the LDA is on the order of n max times smaller than that required by a general linearequation solver to compute (enmax , a~ax)' and on the order of n~ax times smaller than that required by a general linear-equation solver to compute {t9n , O';}:~i. The LDA is discussed in Section 3.5.1. In Section 3.5.2, we present another algorithm, the Delsarte-Genin algorithm (DGA), also named the split-Levinson algorithm, which, in the case of real-valued signals, is about two times faster than the LDA [DELSARTE AND GENIN 1986]. Both the LDA and DGA solve, recursively in the order n, equation (3.4.6). The only requirement is that the matrix there be positive definite, Hermitian, and Toeplitz. Thus, the algorithms apply equally well to the Yule-Walker AR estimator (or, equivalently, the autocorrelation leastsquares AR method), in which the "true" ACS elements are replaced by estimates. Hence, to cover both cases simultaneously, in what follows, (3.5.1)
Pk is used to represent either r(k) or r(k).
By using the preceding convention, we have
Po P-I Rn+I =
PI
P-n
Po
Po P; PI
PI
Po
Po
(3.5.2)
P;
P-I Pn
p~
Pn
PI Po
The following notational convention will also be used frequently in this section. For a vector x = [Xl ••• xnf, we define X-
= [* xn •. 'XI*]T
100
Chapter 3
Parametric Methods for Rational Spectra
An important property of any Hennitian Toeplitz matrix R is that
y = Rx => Y = Rx
(3.5.3)
The result (3.5.3) follows from n
Yi
* = Yn-i+l
= '"'R* L...J n-i+l.kxk* k=l
n
=
n
L P~-i+l-kx: = L k=l
n
P;-iX:-P+I =
p=l
LRj,pxp p=l
= (RX)i
where RiJ denotes the (i ,nth element of the matrix R. 3.5.1 levinson-Durbin Algorithm
The basic idea of the LDA is to solve (3.4.6) recursively in n, starting from the solution for n = 1 (which is easily found). By using (3.4.6) and the nested structure of the R matrix, we can write
(3.5.4)
where
rn OI
= [Pl ... Pnf
n=
7
+ 'ZOn
(3.5.5) (3.5.6)
Equation (3.5.4) would be the counterpart of (3.4.6) when n is increased by one, if an in (3.5.4) could be nulled. To do so, let (3.5.7) It follows from (3.5.3) and (3.5.4) that
(3.5.8)
Section 3.5
101
Order-Recursive Solutions to the Yule-Walker Equations
which has the same structure as (3.5.9) Comparing (3.5.8) with (3.5.9) and making use of the fact that the solution to (3.4.6) is unique for any n, we reach the conclusion that
.,
(3.5.10) and (3.5.11) constitute the solution to (3.4.6) for order (n + 1). Equations (3.5.10) and (3.5.11) form the core of the LOA. The initialization of these recursivein-n equations is straightforward. The following box summarizes the LOA in a fonn that should be convenient for machine coding. The LOA has many interesting properties and uses, for which we refer to [SODERSTROM AND STOICA 1989; MARPLE 1987; KAy 1988]. The coefficients k i in the LOA are often called the reflection coefficients; -ki are also called the partial correlation (PARCOR) coefficients. The motivation for the name "partial correlation coefficient" is developed in Complement 3.9.1.
The Levinson-Durbin Algorithm
Initialization:
= ki
[l flop]
IpII 2 / Po
[1 flop]
el = -PI/Po
a} =
Po -
For n = 1, ... nmax , do: I
[n
+ 1 flops]
[2 flops] [n flops]
It can be seen from the box that the LOA requires on the order of 2n flops to compute flops is needed to compute all the
{en + I. a;+d from {en, a;}. Hence, a total of about
n;ax
102
Chapter 3 Parametric Methods for Rational Spectra
solutions to the Yule-Walker system of equations, from n = 1 to n = nmax. This confirms the claim that the LDA reduces the computational burden associated with a general solver by two orders of magnitude. 3.5.2 Delsarte-Genin Algorithm
In the real data case (i.e., whenever y (t) is real valued), the Delsarte-Genin algorithm (DGA), or the split-Levinson algorithm, exploits some further structure of the Yule-Walker problem (which is not exploited by the LDA) to decrease even more the number of flops required to solve for {(in, a;} [DELSARTE AND GENIN 1986]. In the following, we present a derivation of the DGA that is simpler than the original derivation. As already stated, we assume that the covariance elements {Pk} in the Yule-Walker equations are real valued. Let lln be defined by
(3.5.12)
where the scalar f3n is unspecified for the moment. The matrix Rn+l is positive definite, so the (n + I)-vector lln is uniquely defined by (3.5.12) (once f3n is specified; as a matter of fact, note that f3n only has a scaling effect on the components of lln)' It follows from (3.5.12) and (3.5.3) that lln is a "symmetric vector": It satisfies (3.5.13)
lln = lln
The key idea of the DGA is to introduce such symmetric vectors into the computations involved by the LDA; then only half of the elements of these vectors will need to be computed. Next, note that, by using the nested structure of Rn+l and the defining equation (3.5.12), we can write
-~/ R
n+l
[
0 ] _ lln-l -
[Po r I ] [ 0 ]_ r R lln-ln
Yn-l f3n-l
n
(3.5.14)
f3n-l
where rn is defined in (3.5.5) and Yn-l =
r:
lln-l
(3.5.15)
The systems of equations (3.5.12) and (3.5.14) can be combined linearly into a system having the structure of (3.4.6). To do so, let (3.5.16)
.. Section 3.5
103
Order-Recursive Solutions to the Yule-Walker Equations
Then, from (3.5.12), (3.5.14) and (3.5.16), we get
R'l+l
(~n -
An
[~~-l]}
= [fin -
~nYn-l]
It will be shown that fJn can always be chosen so as to make the first element of
(3.5.17) ~n
equal to 1:
",
(3.5.18) In such a case, (3.5.17) has exactly the same structure as (3.4.6) and, the solutions to these two systems of equations being unique, we are led to the following relations:
(3.5.19) (3.5.20) Furthermore, (~n)l = 1 and ~n is a symmetric vector, so we must also have (~n)n+l = 1. This observation, along with (3.5.19) and the fact that kn is the last element of en (see (3.5.10», gives the following expression for kn : (3.5.21)
a;,
The equations (3.5.19}-(3.5.21) express the LDA variables {en, k n } as functions of {~n} and {fin}. It remains to derive recursive-in-n formulas for {~n} and {fin} and to prove that (3.5.18) really holds. This is done next. Let {fin} be defined recursively by the second-order difference equation fin = 2fin-1 -
Ci n fJn-2
(3.5.22)
where Ci n
..
~.:
= (fin-l
- Yn-d/(fin-2 - Yn-2)
(3.5.23)
The initial values required to start the recursion (3.5.22) are f30 = Po and fil = Po + Pl. With this definition of {f3n}, we claim that the vectors {~n} (as defined in (3.5.12» satisfy both (3.5.18) and the following second-order recursion:
(3.5.24)
"",
"
Chapter 3
104
Parametric Methods for Rational Spectra
to (3.5.14) to get In order to prove the previous claim, we first apply the result (3.5.3)
Rn+l [
~n-l 0
.-'t:
]
=
(3.5.25) f3n-1 Yn-1
....
~
_Yo
;.'
Next, we note that Yn-2 f3n-2
T
r n-
1
Rn -
1
r-T_
(3.5.26) f3n-2 Yn-2
n 1
be combined linearly, as The right-hand sides of equations (3.5.14), (3.5.25), and (3.5.26) can described next, to get the right-hand side of (3.5.12): Yn-I f3n-l f3n-I
Yn-2 f3n-2
f3n-I
+
- exn f3n-1 Yn-l
=
P. [;]
(3.5.27)
f3n-2 Yn-2
ex . This observation, The equality in (3.5.27) follows from the defining equations of f3n and n linear equations in conjunction with (3.5.14), (3.5.25) and (3.5.26), gives the system of
(3.5.28)
) and (3.5.28) are unique, which has exactly the structure of (3.5.12). Since the solutions to (3.5.12 they must coincide; hence, (3.5.24) follows. = (~n-Ih. Hence, Next, tum to the condition (3.5.18). From (3.5.24), we see that (~nh [1 If. The initial values in order to prove that (3.5.18) holds, it suffices to show that ~1 = when inserted in (3.5.12), f30 = Po and f3I = Po + PI (purposely chosen for the sequence {f3n}), ) and (3.5.24) is finished. give ~o = 1 and ~1 = [1 I]T. With this observation, the proof of (3.5.18 equations include The DGA consists of the equations (3.5.16) and (3.5.19)-(3.5.24). These rder recursive equations second-order recursions and appear to be more complicated than the first-o more efficient computaof the LDA. In reality, the symmetry of the ~n vectors makes the DGA arized in the next box tionally than the LDA (as is shown next). The DGA equations are summ implementation. along with an approximate count of the number of flops required for
.'
Section 3.5
105
Order-Recursive Solutions to the Yule-Walker Equations
The Delsarte-Genin Algorithm
Operation count no. of (+)
DGA equations
no. of (x) Initialization:
1, 130 = Po, Yo = PI ~I = [1 If, f3I = Po + PI, YI = PI ~o =
For n (a)
'-,
2
= 2, ... , n max , do the following steps: an = (13n-I ~ Yn-I)/(f3n-z - Yn-Z) f3n = 2f3n-1 - a n f3n-z
fl.
= [ flO-I] + [
Yn = rI+I ~n (b)
+ P2
fl~_1 ] - a. [ fl~-2 ]
= (PI + Pn+d +~n.Z(P2
An = f3nl f3n-1 = f3n - AnYn-1 kn = 1 - An
a;
+ Pn) + ...
1
2
2
1
'" nl2
"'n
'" n 12
"'n
1 1
'" nl2
1 1
"'n
The DGA can be implemented in two principal modes, to suit the application at hand.
DGA-Mode 1. In most AR modeling exercises, we do not really need all {Bn}~~i. We do, however, need {ar, for the purpose of order selection (see Appendix C). Let the selected order be denoted by n max . Then the only B vector to be computed is Bnma:x.' We might also need to compute the {kn } sequence, because this bears useful information about the stability of the AR model. (See, for example, [SODERSTROM AND STOICA 1989; KAy 1988; THERRIEN 1992J.) In the modeling application we have outlined, we need to iterate only the groups (a) and (b) of equations in the previous DGA summary. The matrix equation (c) is computed only for n = nmax . This way of implementing the DGA requires the following number of multiplications and additions:
ai, ...}
no. of (+) ~ n~ax
(3.5.29)
Recall that, for LDA, no. of (x) = no. of (+) ~ n~ax' Thus, the DGA is approximately twice as fast as the LDA (on computers for which multiplication is much more time consuming than addition). We also remark that, in some parameter-estimation applications, the equations in group (b) of the DGA can also be left out, but doing so will speed up the implementation of the DGA only slightly.
Chapter 3
106
Parametric Methods for Rational Spectra
example of such an appliDGA -Mod e 2. In other applications, we need all {en}:':i. An Rn-l . (See, for example, cation is the Cholesky factorization of the inverse covariance matrix max need to iterate all equations Exercise 3.7 and [SODERSTROM AND STOICA 1989].) In such a case, we ions: in the DGA and so need the following number of arithmetic operat no. of (x) :::: 0.75n~ax
no. of (+) :::: 1.5n~ax
(3.5.30)
that the computation time This is still about 25% faster than the LDA (assuming, once again, addition). required for multiplication dominates the time corresponding to an between the DGA and In closing this section, we note that the computational comparisons if nmax is reasonably large the LDA neglected terms on the order O(n max ). This is acceptable valid and, in fact, LDA (say, n max ::: 10). If nmax is small, then these comparisons are no longer In such low-dimensional could be more efficient computationally than the DGA in such a case. recall that the LDA is the applications, the LDA is therefore to be preferred to the DGA. Also to have a computationally algorithm to use with complex-valued data; the DGA does not appear efficient extension for complex-valued data.
3.6 MA SIGNALS g white noise with an allAccording to the definition in (3.2.8), an MA signal is obtained by filterin equation to model a spectrum zero filter. This all-zero structure makes it impossible to use an MA This is to be contrasted with sharp peaks unless the MA order is chosen "sufficiently large." band spectra by using fairly with the ability of the AR (or "all-pole") equation to model narrow MA model provides a good low model orders (per the discussion in the previous sections). The sharp nulls. Such spectra are approximation for those spectra characterized by broad peaks and , so there is a somewhat encountered less frequently in applications than are narrowband spectra al estimation. Another reason limited engineering interest in using the MA signal model for spectr is basically a nonlinear one for this limited interest is that the MA parameter-estimation problem on problem. In any case, and is significantly more difficult to solve than the AR parameter-estimati problems are quite similar; the types of difficulties we must face in MA and ARMA estimation in lieu of the MA one. hence, we almost always prefer to use the more general ARMA model brief. For these reasons, our discussion of MA spectral estimation will be (i) Estimate the MA One method for estimating an MA spectrum consists of two steps: 2 from the first step in the MA parameters {bdk'=; and a ; and (ii) insert the estimated parameters PSD formula (see (3.2.2)). The result is (3.6.1) ear estimation problem. The difficulty with this approach lies in step (i), which is a nonlin of these approximate proceApproximate linear solutions to this problem do, however, exist. One is based on a two-stage dures, perhaps the method most used for MA param eter estimation, d; it will be described in least-squares methodology [DURBIN 1959]. It is called Durbi n 's metho tion. Section 3.7 in the more general context of ARMA parameter estima
Section 3.7
107
ARMA Signals
Another method to estimate an MA spectrum is based on the reparameterization of the PSD in terms of the covariance sequence. We see from (3.2.8) that, for an MA of order m, r(k) = 0
for Ikj > m
(3.6.2)
.. ('
This simple observation turns the definition of the PSD as a function of {r (k)} into a finitedimensional spectral model: '-. m
n in equation (3.7.1) and solving the overdetermined system of equation;' so obtained. If we replace the true covariances in (3.7.1) with M> n by finite-sample estimates, there will in general be no exact solution. A natural idea to overcome this problem is to solve the resultant equation
RQ
~-r
(3.7.3)
r
in a least-squares (LS) or total-least-squares (TLS) sense (see Appendix A). Here, Rand represent the ACS matrix and vector in (3.7.1) with sample ACS estimates replacing the true ACS there. For instance, the (weighted) least-squares solution to (3.7.3) is, mathematically, given byl ..
Q
= -(R*WR)-I(R*Wr)
(3.7.4)
where W is an M x M t::ve definite weighting matrix. The AR estimate derived from (3.7.3) with M > n is called the overdetennined modified YW estimate [BEEX AND SCHARF 1981; CADZOW 1982]. Some notes on the choice between (3.7.2) and (3.7.3), and on the selection of M, are in order. • Choosing M > n does not always improve the accuracy of the previous AR coefficient estimates. In fact, if the poles and zeroes are not close to the unit circle, choosing M > n can make the accuracy worse. When the ACS decays slowly to zero, however, choosing M > n generally improves the accuracy of Q [CADZOW 1982; STOICA, FRIEDLANDER. AND SODERSTROM 1987B]. A qualitative explanation for this phenomenon can be seen by thinking of a finite-sample ACS estimate as being the sum of its "signal" component r(k) and a "noise" component due to finite-sample estimation: r(k) = r(k) + n(k). If the ACS decays slowly to zero, the signal component is "large" compared to the noise component, even for relatively large values of k, and including r(k) in the estimation of Q improves accuracy. If the noise component of r(k) dominates, including r(k) in the estimation of Q could decrease the accuracy of Q. • The statistical and numerical accuracies of the solution {Qi} to (3.7.3) are quite interrelated. In more exact but still loose terms, it can be shown that the statistical accuracy of {Qi} is poor (good) if the condition number of the matrix R in (3.7.3) is large (small). (See [STOICA, FRIEDLANDER, AND SODERSTROM 1987B; SODERSTROM AND STOICA 1989] and also Appendix A.) This observation suggests that M should be selected so as to make the matrix in (3.7.3) reasonably well conditioned. In order to make a connection between this rule of 'Prom a numerical viewpoint. equation (3.7.4) is not a particularly good way to solve (3.7.3). A more numerically sound approach is to use the QR decomposition; see Section A.8.2 for details.
110
Chapter 3 Parametric Methods for Rational Spectra
thumb for selecting M and the previous explanation for the poor accuracy of (3.7.2) in the case of narrowband signals, note that, for slowly decaying covariance sequences, the columns of the matrix in (3.7.2) are nearly linearly dependent. Hence, the condition number of the covariance matrix could be quite high in such a case, and we might need to increase M in order to lower the condition number to a reasonable value. • The weighting matrix W in (3.7.4) can also be chosen to improve the accuracy of the AR coefficient estimates. A simple first choice is W = I, resulting in the regular (unweighted) least squares estimate. Some accuracy improvement can be obtained by choosing W to be diagonal with decreasing positive diagonal elements (to reflect the decreased confidence in higher ACS lag estimates). In addition, optimal weighting matrices have been derived (see [STOICA, FRIEDLANDER, AND SODERSTROM 1987A)); the optimal weight minimizes the covariance of (for large N) over all choices of W. Unfortunately, the optimal weight depends on the (unknown) ARMA parameters. Thus, to use optimally weighted methods, a two-step "bootstrap" approach is used, in which a fixed W is first chosen and initial parameter estimates are obtained; these initial estimates are used to form an optimal W, and a second estimation gives the "optimal accuracy" AR coefficients. As a general rule, the performance gain from using optimal weighting is relatively small compared to the computational overhead required to compute the optimal weighting matrix. Most of the accuracy improvement can be realized by choosing M > n and W = I for many problems. We refer the reader to [STOICA, FRIEDLANDER, AND SODERSTROM 1987A; CADZOW 1982] for a discussion on the effect of W on the accuracy of a and on optimal weighting matrices.
a
Once the AR estimates are obtained, we tum to the problem of estimating the MA part of the ARMA spectrum. Let Yk = E {[B (z)e (t)][B (z)e(t - k)]*}
(3.7.5)
denote the covariances of the MA part. Since the PSD of this part of the ARMA signal model is given by (see (3.6.1) and (3.6.3)) m
a 2IB(w)12 =
L
Yke- iwk
(3.7.6)
k=-m
it suffices to estimate {Yk} in order to characterize the spectrum of the MA part. From (3.2.7) and (3.7.5), we obtain Yk = E {[A(z)y (t)][A(z)y (t - k)]*} n
=
n
L L aja;E {yet - j)y*(t -
k - p)}
j=O p=O
n
n
= LLaja;r(k +p-j) j=O p=O
(aO t:. 1)
(3.7.7)
Section 3.7
111
ARMA Signals
for k = 0, ... , m. Inserting the previously calculated estimates of {ak} and {rk} in (3.7.7) leads to the following estimator of {Yk}:
A
_
Yk -
{ttaja;r(k+p-n, k=O, .... m (ao 61) j=O
Y~k'
'". :
(3.7.8)
p=O
k = -1,•.,. ..• -m
Finally. the ARMA spectrum is estimated as follows: m
L »e-
iwk
(3.7.9)
__
¢J(w) = _k=_-_n:~ IA(w)1 2
The MA estimate used by the ARMA spectral estimator in (3.7.9) is of the type (3.6.4) encountered in the MA context. Hence, the criticism of (3.6.4) in the previous section is still valid. In particular, the numerator in (3.7.9) is not guaranteed to be positive for all w values. so this approach could lead to negative ARMA spectral estimates. See. for example, [KINKEL, PERL, SCHARF, AND STUBBERUD 1979; MOSES AND BEEX 1986]. Since (3.7.9) relies on the modified YW method of AR parameter estimation, W~Ca11 (3.7.9) the modified YW ARMA spectral estimator. Refined versions of this ARMA spectr estimator, which improve the estimation accuracy if N is sufficiently large, were proposed in f TOICA AND NERORAI 1986; STOICA, FRIEDLANDER, AND SODERSTROM 1987A; MOSES, SIMONYTE, STOICA, AND SODERSTROM 1994]. A related ARMA spectral estimation method is outlined in Exercise 3.14. I
v
./
3.7.2 Two-Stage Least-Squares Method
If the noise sequence {e (t)} were known, then the problem of estimating the parameters in the ARMA model (3.2.7) would have been a simple input-output system parameter estimation problem, which could be solved by several methods, the simplest of which is the least-squares (LS) method. In the LS method, we express equation (3.2.7) as y(t)
+ cpT (t)e = e(t)
(3.7.10)
where
cpT (t) = [y(t - 1), ... , yet - n)! - e(t - 1) .... , -e(t - m)]
e=
[ai, ... , anlb 1 ,
Writing (3.7.10) in matrix form for t = L
••• ,
bmf
+ 1, ... , N
(for some L > max(m, n)) gives
z +ze = e
(3.7.11)
Chapter 3
112
Parametric Metho ds for Rational Spectra
where
z=
y(L) y(L + 1) yeN - 1)
y(L - n y(L - n
+ 1) + 2)
yeN - n)
-eel) -eel + 1) -e(N - 1)
-eel - m -eel - m
+ 1) + 2)
(3.7.12)
-e(N - m)
[y(L + I), y(L + 2),
, y(N) f
(3.7.13)
e = [eel + 1), eel + 2),
, e(N) f
(3.7.14)
z=
izing /I e /1 2 . This leads Assume we know Z; then we could solve for (J in (3.7.11) by minim in Section 3.4.2 (see also to a least-squares estimate similar to the AR LS estimate introduced Result R32 in Appendix A): (3.7.15) ted as described next. Of course, the feU)} in Z are not known. However, they can be estima it can alternatively be Since the ARMA model (3.2.7) is of minim um phase , by assumption, wri.tten as the infinite-order AR equation (3.7.16) 2 l converge to zero as k where the coefficients {ak} of 1 + alz- + a2z- + ... t:. A(z)j B(z) eters {ak} in (3.7.16) increases. An idea to estimate feet)} is to first estimate the AR param cannot estimate an infinite and next obtain {e(t)} by filtering {y (t)} as in (3.7.16). Of course, we In practice, the AR equation number of (independent) parameters from a finite number of samples. trunca ted AR model of yet) must be approximated by one of, say, order K. The parameters in the n 3.4. can be estimated by using either the YW or the LS procedure in Sectio the accompanying box. This discussion leads to the two-stage LS algorithm summarized in in [MAYNE AND FIROOZAN The two-stage LS parameter estimator is also discussed, for example, teed to be positive for 1982; SOOERSTRO:M: AND STOICA 1989]. The spectral estimate is guaran to truncate the AR model all frequencies, by' construction. Owing to the practical requirement small by choosing K suffi(3.7.16), the two-stage LS estimate is biased. The bias can be made N, or the accuracy of 8 in ciently large; however, K should not be too large with respect to that of ARMA signals Step 2 will decrease. The difficult case for this method is apparently ary to select a very large with zeroes close to the unit circle. In such a case, it might be necess 1 at a reasonable level. value of K in order to keep the approximation (bias) errors in Step large. It should be noted, The computational burden of Step 1 could then become prohibitively circle is a difficult one for however, that the case of ARMA signals with zeroes near the unit SODERSTROM AND STOICA all known ARMA estimation methods [KAY 1988; MARPLE 1987;
1989].
".}
.
~.
~:
.
....
".
Section 3.8
113
Multivariate ARMA Signals
The Two-Stage Least-Squares ARMA Method
Step 1. Estimate the parameters {ad in an AR(K) model of y(t) by the YW or covariance LS method. Let {adf=l denote the estimated parameters. Obtain an estimate of the noise sequence {e(t)} by \',
K
e(t) = yet)
+ L aky(t -
k)
(3.7.17)
k=l
for
t
= K
+ 1, ... , N.
Step 2. Replace the e(t) in (3.7.12) by the e(t) computed in Step 1. Obtain with L = K + m. Estimate
efrom (3.7.15) (3.7.18)
where ii• • ziJ+z is the LS error from (3.7.11).
.
j
Insert {8, (}2} mto the PSD expresslOn (3.2.2) to estimate the ARMA spectrum.
Finally, we remark that the two-stage LS algorithm may be modified to estimate the parameters in MA models, by simply skipping over the estimation of AR parameters in Step 2. This approach was suggested for the first time in [DURBIN 1959] and is often called Durbin's method.
3.8 MULTIVARIATE ARMA SIGNALS
The multivariate analog of the ARMA signal in equation (3.2.7) is A(z)y(t) = B(z)e(t)
(3.8.1)
where y(t) and e(t) are ny x 1 vectors, and A(z) and B(z) are ny x ny matrix polynomials in the unit delay operator. The task of estimating the matrix coefficients-{A i , Bj }-of the AR and MA polynomials in (3.8.1) is much more complicated than in the scalar case, for at least one reason: The representation of y(t) in (3.8.1), with all elements in {Ai, B)} assumed to be unknown, could well be nonunique, even when the orders of A(z) and B (z) have been chosen correctly. More precisely, assume that we are given the spectral density matrix of an ARMA signal yet) along with the (minimal) orders of the AR and MA polynomials in its ARMA equation. If all elements of {Ai, B)} are considered to be unknown, then, unlike in the scalar case, the previous information could be insufficient for determining the matrix coefficients {Ai, B)} uniquely.
Chapter 3
114
Parametric Methods for Rational Spectra
The lack of uniqueness (See, for example, [HANNAN ANDDEISTLER 1988] and also Exercise 3.16.) eter-estimation problem. of the representation could lead to a numerically ill-conditioned param the modified Yule-Walker For instance, this would be the case with the multivariate analog of method discussed in Section 3.7.1. ts of using a canonApparently, the only possible cure for the aforementioned problem consis amounts to setting some ical parameterization for the AR and MA coefficients. Basically, this y reducing the number of of the elements of {Ai, Bj } to known values, such as 0 or 1, thereb should be set to 0 or 1 in a unknowns. The problem, however, is that, to know which elements which are usually difficult to specific case, we need to know ny indices (called "structure indices"), difficulty in obtaining those obtain in practice [KAILATH 1980; HANNAN AND DEISTLER 1988]. The ations. For this reason, we indices has hampered the use of canonical parameterizations in applic signals. The nonuniquedo not go into any of the details of the canonical forms for ARMA receive further attention in ness of the fully parameterized ARMA equation will, however, Section 3.8.2. sed in Section 3.7.2, Concerning the other approach to ARMA parameter estimation discus it can be extended to the namely the two-stage least-squares method, it is worth noting that need for using a canonical multivariate case in a straightforward manner. In particular, there is no le, [SODERSTROM AND parameterization in either step of the extended method. (See, for examp interesting exercise for the STOICA 1989].) Working the details of the extension is left as an only real competitor to the reader. We stress that the two-stage LS approach is perhaps the subsections. subspace ARMA parameter-estimation method described in the next 3.8.1 ARMA State-Space Equations
the following state-space The difference-equation representation in (3.8.1) can be transformed into TH 1980]): representation, and vice versa (see, for example, [AOKI 1987; KAILA x(t
+ Be(t) Cx(t) + e(t)
+ 1) =
yet) =
Ax(t)
(n x 1) (ny x 1)
(3.8.2)
matrices of appropriate Thereafter, x(t) is the state vector of dimension n; A, B, and C are e(t) is white noise with dimensions (with A having all eigenvalues inside the unit circle); and zero mean and wi~ covariance matrix denoted by Q. We thus have E feet)} = 0
E {e(t)e *(s)} = Q8t ,s
(3.8.3) (3.8.4)
where Q is positive definite by assumption. g filter, is readily The transfer filter corresponding to (3.8.2), also called the ARMA shapin seen to be (3.8.5)
Section 3.8
115
Multivariate ARMA Signals
By paralleling the calculation leading to (1.4.9), it is then possible to show that the ARMA power spectral density (PSD) matrix is given by .' ::2
¢(w) = H (w)QH*(w)
(3.8.6)
The derivation of (3.8.6) is left as an exercise for the reader. In the next subsections, we will introduce a methodology for estimating the matrices A, B, C, and Q of the state-space equation (3.8.2) and, hence, the ARMA power spectral density (via (3.8.5) and (3.8.6)). In this subsection, we derive a number of results that lay the groundwork for the discussion in the next subsections. Let (3.8.7) Rk = E {yU)y*(t - k)} P = E {x (t)x*(t) }
(3.8.8)
Observe that, for k 2:: 1, Rk = E ([Cx(t
= CE {x(t
+ k) + e(t + k)][x*(t)C* + e*(t)]}
+ k)x*(t)} C* + CE {xU + k)e*(t)}
(3.8.9)
~
From equation (3.8.2), we obtain (by induction) k-l
x(t
+ k) =
Akx(t)
+L
A k- 1-
1
Be(t
+ l)
(3.8.10)
l=O
which implies that E {x(t
= A kP
(3.8.11)
= A k - 1BQ
(3.8.12)
+ k)x*(t)}
and E {x(t
+ k)e*(t)}
Inserting (3.8.11) and (3.8.12) into (3.8.9) yields (for k 2:: 1)
(3.8.13)
where
D =APC* +BQ
(3.8.14)
From the first equation in (3.8.2), we also readily obtain P = APA* +BQB*
(3.8.15)
116
Chapter 3 Parametric Methods for Rational Spectra
and, from the second equation,
Ro =
cPC*
+Q
(3.8.16)
It follows from (3.8.14) and (3.8.16) that B = (D - APC*)Q-l
(3.8.17)
Q =Ro - CPC*
(3.8.18)
and, respectively,
Finally, inserting (3.8.17) and (3.8.18) into (3.8.15) gives the following Riccati equation for P: P = APA * + (D - APC*)(Ro - CPC*) -1 (D - APC*)*
(3.8.19)
The results lead to a number of interesting observations. It is well known that a linear nonsingular transfonnation of the state vector in (3.8.2) leaves the transfer-function matrix associated with (3.8.2) unchanged. To be more precise, let the new state vector be given by
The (Non)Uniqueness Issue.
x(t) = Tx(t),
(IT\ -10)
(3.8.20)
It can be verified that the state-space equations in x(t), corresponding to (3.8.2), are x(t
+ 1) = AX(t) + Be(t)
(3.8.21)
yet) = Cx(t) +e(t) where
A = TAT-I;
B = TB;
C = CT- 1
(3.8.22)
As {Yet)} and teet)} in (3.8.21) are the same as in (3.8.2), the transfer function H (z) from e(t) to yet) must be the same for both (3.8.2) and (3.8.21). (Verifying this by direct calculation is left to the reader.) The consequence is that there exists an infinite number of triples (A, B, C) (with all matrix elements assumed unknown) that lead to the same ARMA transfer function and, hence, the same ARMA covariance sequence and PSD matrix. For the transfer-function matrix, the nonuniqueness induced by the similarity transformation (3.8.22) is the only type possible (as we know from the detenninistic system theory, for example, [KAILATH 1980]). For the covariance sequence and the PSD, however, other types of nonuniqueness are also possible. See, for example, [FAURRE 1976] and [SODERSTROM AND STOICA 1989, Problem 6.3]. Most ARMA estimation methods require the use of a uniquely parameterized representation. The previous discussion has clearly shown that letting all elements of A, B, C, and Q be unknown does not lead to such a unique representation. The latter representation is obtained only if a
-, -
~
Section 3.8
Mu ltiv aria te ARMA Signals
117 canonical fon n is used. As already explained, the ARMA parameter esti mation methods relying on canonical parameterizations are impractical. The subspace-based esti mat ion approach discussed in the next subsection circumvents the canonical-parameterization req uire ment in an interesting way: The nonuniqueness of the AR MA representation with A, B, C, and Q fully parameterized is reduced to the nonuniqueness of a certain decomposition of covariance mat rices; then by choosing a specific decomposition, a triplet (A, B, C) is isolated and is determ ined in a numerically wellposed manner. ". The Min ima lity Issue. Let, for some integer-valued m,
0=
C CA
(3.8.23)
and
C* = [D AD '" Am -1D ]
t
~
.""
(3.8.24) The similarity between the above matrices and the observability and controllability matrices, respectively, from the theory of det erministic state-space equations is evid ent. In fact, it follows from the aforementioned theory and from (3.8.13) that the triplet (A, D, C) is a minimal representation (Le., one with the minimu m possible dimension n) of the cov aria nce sequence {Rk} if and only if. (See, for example, [KA rLA TH 1980; HA NN AN AND DEI STL ER 1988].) ran k(O )
! :: \
.~-
, -~ ..
."
= rank(C) = n
(for m
~
n)
(3.8.25)
As was shown previously, the oth er matrices P, Q, and B of the stat e-space equation (3.8,2) can be obtained from A, C, and D (see equations (3.8.19), (3.8.18
), and (3.8.17), respectively). It follows that the state-space equatio n (3.8.2) is a minimal representation of the ARMA covariance sequence {Rkl if and only if the con dition (3.8.25) is satisfied. In what follows, we assume that the "minimality condition" (3.8.25 ) holds true. 3.8.2 Subspace Parameter Es tim ati on -Th eo ret ica l Aspects We begin by showing how A, C, and D can be obtained from a sequence of theoretical AR MA covariances. Let
R=
(3.8.26)
Chapter 3 Parametric Method s for Rational Spectra
118
) is due to its special denote the block-Hankel matrix of covariances. (The name given to (3.8.26 a matrix is a block extenstructure: the submatrices on its block antidiagonals are identical. Such A.) According to (3.8.13), sion to the standard Hankel matrix; see Definition D14 in Appendix we can factor R as follows:
C CA
R=
[D AD .. ·A m -
I
D] = OC*
(3.8.27)
that It follows from (3.8.25) and (3.8.27) (see Result R4 in Appendix A) (for m ::: n)
rankeR) = n
(3.8.28)
une A, C, and D, let us Hence, n could, in principle, be obtained as the rank of R. To detenr A); consider the singular value decomposition (SVD) of R (see Appendix (3.8.29) R = UI;V * where I; is a nonsingular n x n diagonal matrix, and U*U = V*V =1
(n x n)
By comparing (3.8.27) and (3.8.29), we obtain
o=
U:E 1/2 T
for some nonsingular transformation matrix T
(3.8.30)
space of R. Henceforth, because the columns of both 0 and U:E 1/2 are bases of the range By inserting (3.8.30) in the equation I;1/2 denotes a square root of I; (that is, I;1/2I; 1/2 = I;). OC* = U I; V *, we also obtain (3.8.31) Next, observe that (CT- 1) (CT- 1) (TAT- I)
OT- 1 =
and
(3.8.33)
possible bases ofthe This implies that, by identifying 0 and C with the matrices made from all alent triples (A, D, C). range spaces ofRan d R*, respectively, we obtain the set ofsimilarity-equiv
,~
....~ ~.
= the positive definite solution, if any, of the Riccati equation (3.8.19) with A, C, D, and Ro replaced by their estimates
:
and estimate Q and B
",
Q=Ro-Cf>C* iJ = (D - APC*)Q-l
'-.' :~
(3.8.48)
,
(3.8.49)
In some cases, the procedure cannot be completed, because the Riccat i equation has no positive definite solution or even no solution at all. (In the case of a real-valued ARMA signal, for instance, that equation could have no real-valued solution.) In such cases, we can estimate approximate P as discussed next (only the estimation of P has to be modified; all the other parameter estimates can be obtained as previously described). A straightforward calculation making use of (3.8.11) and (3.8.12) yields E {x (t)y*( t - k)} = A k PC*
+ A k - 1BQ
,x 1'"
+ q;T (tWJ)
.-....
j)}
aj-l
·'i'::;'·"'··'·
~:-c.:: .
(cf. (3.4.1) and (3.5.6)). By using the previous equations in (3.9.1), we obtain
~;YE.
F-~··~~.::-.t'
~;r·;r~ ~(.-:::::.
.
'.
~;>:·:!L.'··:. l~,"~-.'··· ~
.:
.
L·'::~·;:' .'
[.'?;~."
~~:·:·f>.::····
k:Y:·~~.
:(."
~:};:~~::
I
'.
which coincides with (3.5.7). 3.9.2 Some Properties of Covariance Extensions
Assume we are given a finite sequence (r(k)}~;2(m_l) with r(-k) = r*(k) and such that R m in equation (3.4.6) is positive definite. We show that the finite sequence can be extended to an infinite sequence that is a valid ACS. Moreover, there are an infinite numbe r of possible covariance extensions, and we derive an algorithm to construct these extensions. One such extension, in which the reflection coefficients km , km +1 , ..• are all zero (and thus the infinite ACS corresponds to an AR process of order less than or equal to (m - 1)), gives the so-call ed Maximum Entropy extension [BURG 1975].
124
Chapter 3 Parametric Methods for Rational Spectra
We begin by constructing the set of rem) values for which R m +1 > O. Using the result of Exercise 3.7, we have (3.9.3) From the Levinson-Durbin algorithm, (3.9.4) Combining (3.9.3) and (3.9.4) gives
I o-m-l 2 1_ IRm+l I -- IRm' [
Ir(m)
+ r*m-l ()m-l 12 ] crm4 _ 1
(3.9.5)
which shows that IR m +lI is quadratic in rem). Since cr~_l > 0 and Rm is positive definite, it follows that
IRm+l1
> 0 if and only if Ir(m)
+ r~_l em _I1 2
0 and we choose reP) inside the disk Ir(p) + r;_1{)p_d 2 < 1, then jRp+d > o. This implies 0-; > 0, and the admissible disk for rep + 1) has nonzero radius, so there are an infinite number of possible choices for r (p + 1) such that IRp +21 > O. Arguing inductively in this way for p = m, m + 1, ... shows that there are an infinite number of covariance extensions and provides a construction for them. If we choose rep) = -r;_lep - 1for p = m, m + 1, ... (i.e., rep) is chosen to be at the center of each disk in (3.9.6)), then, from (3.9.4), we see that the reflection coefficient kp = O. Thus, from the Levinson-Durbin algorithm (see equation (3.5.10)) we have
0-:_
{)p
= [ {)Pa 1 ]
(3.9.7)
2
(3.9.8)
and 0-p
2 = 0-p-l
Arguing inductively again, we find that kp = 0,
{)p
= [{)mo-l
], and cr; = cr;_l for p
= m,
m + 1, .... This extension, called the Maximum Entropy extension [BURG 1975], thus gives an ACS sequence that corresponds to an AR process of order less than or equal to (m - 1). The name maximum entropy arises because the spectrum so obtained has maximum entropy rate f::'rr; In¢(w)dw under the Gaussian assumption [BURG 1975]; the entropy rate is closely related to the numerator in the spectral-flatness measure introduced in Exercise 3.6.
Section 3,9
Complements
125 For som e rec ent results on the cov ariance-extension pro ble m and its variations, we refe r to 2oo1} and the references therein.
[BYRNES, GEORGIOU, AND LIN DQUIST
3.9.3 The Burg Me tho d for AR Parameter Estimation
,. ;. .;.
.... ,.
;
The thesis [BU RG 1975} developed a method for AR parameter esti mat ion that is based on for war d and bac kwa rd prediction errors and on direct estimation of the reflecti on coefficients in equ atio n (3.9.1). In this complement, we dev elop the Bur g estimator ~d discuss some of its pro per ties . Afisume we hav e dat a measurements {y(t )} for t = 1,2 , ... ,N. Mu ch as in Com ple men t 3.9.1, we define the forward and bac kwa rd prediction errors for a pth -or der model as p
ej,p (t) = yet )
+L
ap, iy(t - i),
t
i=l
= P + 1, .. , IN
(3.9.9)
p
eb,p (t) = yet - p) ::..
.
~
'..
"
..
+ La ;,i y(t i=l
p
+ i),
t= p+ l, ... ,N
(3.9.10)
We hav e shifted the time index in the definition of eb (t) from tha t in equation (3.9.2) to reflect that eb,p (t) is com put ed from data up to time t; also, the fact tha t the coefficients in (3.9.10) are given by {a ;) follows from Com ple men t 3.9.1. We use hats to den ote estimated quantities, and we explicitly denote the order p in bot h the prediction error sequen ces and the AR coefficients. The AR par ame ters are related to the reflection coefficient k by (see (3.5.10)) p
(3.9.11) Bur g's me tho d considers the recurs ive-in-order estimation of k given that the AR coefficients for p order p - 1 have been computed. In particular, Bur g's met hod finds k p to minimize the arit hm etic mean of the forward and backward prediction-error variance estimates, namely, (3.9.12)
and where {ap -l,df":11 are assumed to be kno wn from the recursion at the previous order.
126
Chapter 3
Parametric Methods for Rational Spectra
The prediction errors satisfy the following recursive-in-order expressions:
+ kpeb,p-I (t - 1) eb,p-I (t - 1) + k;ej,p-I (t)
ej,p (t) = ej,p-I (t)
(3.9.13)
eb,p(t) =
(3.9.14)
Equation (3.9.13) follows directly from (3.9.9)-(3.9.11) as p-I ej,p(t) = yet)
+L
(ap-I,i
+ kpa;_I,p_;) y(t -
i)
+ kpy(t -
p)
;=1
=
~(t) + ~ap-I'iy(t
= ej,p-I (t)
-
+ kpeb,p-l (t -
i)]
~(t
+kp
- p)
+
~a;_I'iy(t
- P
+
i)]
1)
Similarly, p-1 eb,p(t) = yet - p)
+ L[a;-I,; + k;ap-l,p-ily(t -
p
+ i) + k;y(t)
;=1
= eb,p-I (t - 1) + k;ej,p-1 (t)
which shows (3.9.14). We can use the previous expressions to develop a recursive-in-order algorithm for estimating the AR coefficients. Note that the quantity to be minimized in (3.9.12) is quadratic in kp , because
t
~ [!it(p) + Pb(P)] = 2(N i_ p ) t=p+I \I't,p-l (t) + kp'b,p-l (t + leb,p-I(t -
1)
2
i)1
+ k;ej ,p_I(t)1
+ 2ej,p-l (t)eb,p-l (t -
1)k;
+ 2ej,p_1 (t)eb,p-l (t -
1)kp }
2 }
Using Result R34 in Appendix A, we find that the kp that minimizes the above quantity is given by A
kp
_
-
-2 L~;::p+l ej,p-I (t)eb,p_I (t - 1)
L~=p+I
[
lej,p-1 (t)1
2
+ leb,p-I (t
- 1)1
2]
(3.9.15)
Section 3.9
127
Complements
A recursive-in-order algorithm for estimating the AR parameters, called the Burg algorithm, is as follows: The Burg Algorithm
Step 0 Initialize ef,o(t) = eb,o(t) = yet). Step 1 For p = 1, ... , n, (a) Compute ef,p-l (t) and eb.p-l (t) for t = P + 1, ... , N from (3.9.13) and (3.9.14). (b) Compute kp from (3.9.15). (c) Compute Qp,i for i = 1, ... ,p from (3.9.11). \'.
.;,.;.... };'
:\ il .'~.' ..
Then
Ikpl
e=
[Qp,l, ... , Qp,pf
is the vector of AR coefficient estimates.
Finally, we show that the resulting AR model is stable; this is accomplished by showing that :::: 1 for p = 1, ... , n. (See Exercise 3.9.) To do so, we express kp as A
-2c*d c*e + d*d
(3.9.16)
k ----P -
where e = [eb,p-l (P), ... , eb,p-l (N d = [ef,p-l (P
Then
0:::: lie - e ia d1l 2 ==? ==?
l)f
+ 1), ... , ef,p-l (N)f
+ d*d - 2Re {eiQ'c*d} for every ex E 2Re {eiae*d} :::: e*e + d*d for every ex E [-Jr, Jr] 2lc*dl :::: c*e + d*d =* Ikpl:::: I = e*e
[-Jr, Jr]
The Burg algorithm is computationally simple, and it is amenable to both order-recursive and time-recursive solutions. In addition, the Burg AR model estimate is guaranteed to be stable. On the other hand, the Burg method is suboptimal, in that it estimates the n reflection coefficients by decoupling an n -dimensional minimization problem into the n one-dimensional minimizations in (3.9.12). This is in contrast to the LS AR method in Section 3.4.2, in which the AR coefficients are found by an n-dimensional minimization. For large N, the two algorithms give very similar perfonnance; for short or medium data lengths, the Burg algorithm usually behaves somewhere between the LS method and the Yule-Walker method.
128
Chapter 3 Parametric Methods for Rational Spectra
3.9.4 The Gohberg-Semencul Formula The Hermitian Toeplitz matrix Rn+l in (3.4.6) is higWy structured. In particular, it is completely defined by its first column (or row). As was shown in Section 3.5, exploitation of the special algebraic structure of (3.4.6) makes it possible to solve this system of equations very efficiently. In this complement, we show that the Toeplitz structure of Rn+l may also be exploited to derive a closed-form expression for the inverse of this matrix. This expression is what is usually called the Gohberg-Semencul (GS) formula (or the Gohberg-Semencul-Heining formula, in recognition of the contribution also made by Heining to its discovery) [SODERSTROM AND STOICA 1989; IOHVIDOV 1982; BOTTCHER AND SILBERMANN 1983]. As will be seen, an interesting consequence of the GS formula is the fact that, even if R;;~l is not Toeplitz in general, it is still completely determined by its first column. Observe from (3.4.6) that the first column of R;;~l is given by [1 t9]T /0'2. In what follows, we drop the subscript n of t9 for notational convenience. The derivation of the GS formula requires some preparations. First, note that the following nested structures of Rn+l,
Rn+l =
[ Po Rr;] = Tn
n
[Rn -* rn] Tn
Po
along with (3.4.6) and the result (3.5.3), imply that
2
an
= Po -
1 *R-*R-1n ~n = Po - Tn n Tn
Tn
Next, make use of the above equations and a standard formula for the inverse of a partitioned matrix (see Result R26 in Appendix A) to write (3.9.17)
= Finally, introduce the (n
+ 1)
x (n
ll/a~
+ 1) matrix
o
z=
[Rr ~] + [ ~ ] [ii* o
o
o
1
o
1 0
o
(3.9.18)
Section 3.9
129
Complements
and observe that multiplication by Z of a vector or a matrix has the effects indicated here: x
Zx
o nXl nxl ",
zxz T
x nXn
~
...
0
I----G. .....
0
...............
nXn
0
Owing to these effects of the linear transformation by Z, this matrix is called a shift or displacement operator. We are now prepared to present a simple derivation of the GS formula. The basic idea of this derivation is to eliminate R;;l from the expressions for R;;~l in (3.9.17) and (3.9.18) by making use of the displacement properties of Z. Hence, using the expression (3.9.17) for R;;~l' and its "dual" (3.9.18) for calculating ZR;;~lZT, gives
o
I
a*n
[0 an."
ad
(3.9.19)
a*1
Premultiplying and postmultiplying (3.9.19) by Z and ZT, respectively, and then continuing to do so with the resulting equations, we obtain ZR- 1 ZT _ Z 2R- 1 Z2 T = n+l
1 0'2n
n+l
0 1 al
an-l
* .. an-l*] [0 1 al·
0 0 a*n
a*2
(3.9.20) [0 0 an' .. a2]
Rational Spectra Cha pter 3 Parametric Met hod s for
130
3
1
P p
o [0 ... 0 1)
o
(3.9.21)
1 er n t that Z is a nilp ote nt mat rix of ord In (3.9.21), use is made of the fac zn+ l = 0
+ 1, in the sense that
.19)-(3.9.21), we simply summing up equations (3.9 by , Now ). fied veri ily read be (which can R;;:;'I : derive the following expression for 0
1 R- 1
n+I
-
1
1 a*I
a*n
0
a*I 1
aI
a2 n
at 1
an
0
0
0 an
(3.9.22) aI
a*n an a*I
a*n 0
0
0
by its is, indeed, completely determined .22) that (3.9 from e Not ula. form OS the is This first column, as claimed earlier. AR modeling, and this ted to the Yule-Walker method of The OS formula is inherently rela ful in studying book. The OS formula is also use this in it ng udi incl for ons reas hop e is one of the ch is discussed in Cha pte r 5. The whi , hod met on Cap the as h suc other spectral estimators, inating topic of t will become interested in the fasc par this ies stud who er read s iou that the cur n. In particular, it is indeed s is another reason for its inclusio Toeplitz matrice~ and allied subject en matrix, as has cal formula for the inverse of a giv lyti ana an ive der to able be to fascinating the previous derivation may plitz matrices. The basic ideas of been shown to be the case for Toe en matrix X, the rank Let us explain this briefly. For a giv s. rice mat eral gen re mo to d nde .19), the be exte und er Z. As can be seen from (3.9 X of k ran t men lace disp the ed of X - ZXZ T is call two. Now, assume we has a displacement ran k equal to rix mat z plit Toe ian mit Her a of such that inverse able to find a nilpotent matrix Y are we ch whi for X rix mat ) red are given a (structu to have the previous form er Y; the matrix Y does not need und k ran t men lace disp low a has ive a simple X- I )-(3.9.22), we might be able to der .19 (3.9 in s ion ulat calc the ling for of Z. Then, paralle KAI LAT H, AND LJU NG 1979] I. See [FRIEDLANDER, MORF, Xfor ion ress exp m" -for sed "clo plement. more details on the topic of this com
R;;lI
".\
_ia
Section 3.9
131
Complements
3.9.5 MA Parameter Estimation in Polynomial Time The parameter estimation of an AR process via the LS method leads to a quadratic minimization problem that can be solved in closed fonn (see (3.4.11), (3.4.12». On the other hand, for an MA process, the LS criterion similar to (3.4.11), which is given by
(3.9.23)
is a highly nonlinear function of the MA parameters (and likewise for an ARMA process). A simple MA spectral estimator, one that does not require solving a nonlinear minimization problem, is given by equation (3.6.4) and is repeated here:
\
liz
¢(w)
=
L
r(k)e-
iWk
(3.9.24)
k=-m
\
where m is the assumed MA order and trek)} are the standard sample covariances. As explained in Section 3.6, the main problem associated with (3.9.24) is the fact that ~(w) is not guaranteed to be positive for all w E [0,21t]. If the final goal of the signal processing exercise is spectral analysis, then an occurrence of negative values ¢(w) < 0 (for some values of w) is not acceptable, as the true spectral density of course satisfies if>(w) ?:: 0 for all WE [0,21t]. If the goal is MA parameter estimation, then the problem induced by ¢(w) < 0 (for some values of w) is even more serious, because, in such a case, ~(w) cannot be factored as in (3.6.1), and hence, no MA parameter estimates can be obtained directly from ¢(w). In this complement, we will show how to get around the problem of ¢(w) < 0 and, hence, how to obtain MA parameter estimates from such an invalid MA spectral density estimate, using an indirect but computationally efficient method. (See [STOICA, MCKELVEY, AND MARl 2000; DUMITRESCU, TABUS, AND STOICA 2001].) Note that obtaining MA parameter estimates from the ¢(w) in (3.9.24) is of interest not only for MA estimation, but also as a step of some ARMA estimation methods. (See, for example. (3.7.9) as well as Exercise 3.12.) A sound way of tackling this problem of "factoring the unfactorable" is as follows: Let tP (w) denote the PSD of an MA process of order m; that is, m
q,(w) =
L
r(k)e- iwk ?:: 0,
wE
[0, 21t]
(3.9.25)
k=-m
We would like to find the tP(w) in (3.9.25) that is closest to ¢(w) in (3.9.24), in the following LS sense:
1 min27l'
j7r [if>(w) -q,(w)] 2 dw -7r A
\
(3.9.26)
Chapte r 3 Parametric Methods for Rational Spectra
132
). Without loss of The order m in (3.9.25) could be different from the order m in (3.9.24 the sequence trek)} generality, we can assume that m :::: m. Indeed, if m > m, we can extend g (3.9.26), we can factor with zeroes to make m :::: m. Once cjJ(w) has been obtained by solvin thms (see, for example, it by using any of a number of available spectral factorization algori this way, derive MA parameter estimates [WILS ON 1969; VOSTRY 1975; VOSTRY 1976]) and, in {bkl satisfying (3.9.27) computed in O(m 2 ) flops. (See (3.6.1).) This step for obtaining {bkl and (J2 from cjJ(w) can be r of flops. The problem that remains is to solve (3.9.26) for cjJ(w) in a similar numbe Now, m ¢(w) - cjJ(w) =
L
[r(k) - r(k)] e-
iwk
+
L r(k)e-
iwk
Ikl>m
k=-m
criterion of (3.9.26) can so it follows from Parseval's theorem (see (1.2.6» that the spectral LS be rewritten as a covariance fitting criterion: -1
271
j]f [~cjJ(w) - cjJ(w) ]2 dw = -]f
~ ~
Ir(k) -
r(k)!
2" + ~ Ir(k)1 2 Ikl>m
k=-m
Consequently, the approximation problem (3.9.26) is equivalent to .";
(3.9.28)
min liT - rll~ subject to (3.9.25)
(r(k))
where IIxlI~
= x*Wx
and r = [ reO)
T(m)
r = [r(O)
rem)
r r
o
1 2
W=
o
2
hm for solving problem Next, we will describe a computationally efficient and reliable algorit n of m (a more precise (3.9.28) (with a general W matrix) in a time that is a polynomial functio to first write the covariflop count is given below). A possible way of tackling (3.9.28) would be would guarantee that they ances trek)} as functions of the MA parameters (see (3.3.3», which t to the MA parameters. satisfy (3.9.25), and to then minimize the function in (3.9.28) with respec ), be nonlinear in the However, the minimization problem so obtained would, much like (3.9.23 which is exactly the {bk MA parameters (more precisely, the criterion in (3.9.28) is quartic in type of problem we tried to avoid in the first place.
n,
.
Section 3.9
133
Complements
As a preparation step for solving (3.9.28), we first derive a parameterization of the MA covariance sequence {r (k) }, which will turn out to be more convenient than the parameterization via {bk}. Let Ik denote the (m + 1) x (m + 1) matrix with ones on the (k + 1)st diagonal and zeroes everywhere else: k+l ..--.-...
°... ° 1 °
°
° (for k
= 0, ... ,m). Note that 10 = I.
(m
1
°
'-,
+ 1) x
(m
+ 1)
° °
Then the following result holds:
Any MA covariance sequence {r(k)}k:=O can be written as r(k) = tr(hQ) for Q is an (m + 1) x (m + 1) positive semidefinite matrix.
k = 0, ... , m, where
(3.9.29)
To prove this result, let
and observe that
a(w)a*(w) =
1
e- iw
eiw
1
e- imw m
e- iw e
imw
e
iw
-
L
he- ikw
k=-m
1
where I_k = If (for k ~ 0). Hence, for the sequence parameterized as in (3.9.29), we have that
ktm
r(k)e-
ikm
ltm
ikW
=
tr
=
tr[a(w)a*(w)Q] = a*(w)Qa(w) ~ 0,
ltQe-
]
for w
E
[0,2][']
which implies that trek)} indeed is an MA(m) covariance sequence. To show that any MA(m) covariance sequence can be parameterized as in (3.9.29), we make use of (3.3.3) to write
for Rational Spectra Chapter 3 Parametric Me tho ds
134 (for k = 0, ... , m) m
r(k ) = 0'2
L bjbj*_k =
0'2 [
o
b
j=k ..,....
(3.9.30)
ed in (3.9.29) with Evidently (3.9.30) has the form stat
(3.9.29) is complete. With this observation, the pro of of e an efficient problem, (3.9.28). We will describ in ma the to n ntio atte our tum We can now ady stated.). For a l weighting matrix W > 0 (as alre era gen a h wit ) .28 (3.9 ing solv algorithm for than the simple diagonal accurate MA parameter estimates re mo lds yie ally usu t tha W of choice MARl 2000]. Let reader to (STOICA, MCKELVEY, AND weighting in (3.9.28), we refer the
IJ- =
C( r - r)
C* C). Al so, ':i' · upp er triangular matrix and W = an is C ., (i.e W of tor fac y lesk ~. where C is the Cho ing the diagonal: in the upper triangle of Q, includ nts me ele the all ing tain con tor let a be a vec :~t
"":"
.. , Q2,m+l ; ... ; Qm +l, m+ lt a = [Ql,l Q1,2 ... QI.m+I ; Q2.2 x conjugates are either elements of a or comple Q of nts me ele the is, t tha Q; s Note that a define rite (3.9.28) in the ation and of (3.9.29), we can rew not this of use g kin Ma a. of of elements uences): following form (for real-valued seq
min p
subject to:
p.JJ."a
IIIJ-II
~ p
Q 2: 0
(3.9.31) tr[Q ]
tr
[! (JI + it) Q]
+C
-I
A
IJ- =r
.:~
~
.[
".
section 3.10
135
Exercises
Note that, to obtain the equality constraint in (3.9.31), we used the fact that (in the real-valued case; the complex-valued case can be treated similarly) r(k)
1
= tr(JkQ) = tr(QTJ[) = tr(J[ Q) = 2" tr [(Jk + J[)Q]
The reason for this seemingly artificial trick is that we need the matrices multiplying Q in (3.9.31) to be symmetric. In effect, the problem (3.9.31) has preci~ely the fonn of a semidefinite quadratic program (SQP), which can be solved efficiently by means of interior-point methods (see [STURM 1999] and also [DUMITRESCU, TABUS, AND STOICA 2001] and references therein). Specifically, it can be shown that an interior-point method (such as the ones in [STURM 1999]), when applied to the SQP in (3.9.31), requires O(m 4 ) flops per iteration; furtbennore, the number of iterations needed to achieve practical convergence of the method is typically quite small (and nearly independent of m), for instance between 10 and 20 iterations. The overall conclusion, therefore, is that (3.9.31), and hence the original problem (3.9.28), can be solved efficiently, in O(m 4 ) flops. Once the solution to (3.9.31) has been computed, we can obtain the corresponding MA covariances either as r = ;. - C- 1j.J., or as r(k) = tr(JkQ) for k = 0, ... , m. Numerical results obtained with MA parameter estimation algorithm have been reported in [DUMITRESCU, TABUS, AND STOICA 2001]; see also [STOICA, MCKELVEY, AND MARl 2000].
3.10 EXERCISES Exercise 3.1: The Minimum Phase Property As stated in the text, a polynomial A(z) is said to be of minimum phase if all its zeroes are inside the unit circle. In this exercise, we motivate the name minimum phase. Specifically, we will show that, if A(z) = 1 +alz- 1 + ... +anz- n has real-valued coefficients and has all its zeroes inside the unit circle, and if B (z) is any other polynomial in z -1 with real-valued coefficients that satisfies IB(w)1 = IA(w) I and B(w = 0) = A(w = 0) (where B(w) c. B(z)/z=eiW), then the phase lag of B(w), given by - argB(w), is greater than or equal to the phase lag of A(w): - argB(w) ::: - argA(w)
Since we can factor A(z) as n
A(z) =
TI (l -
1 Q'kZ- )
k=l and argA(w) = L~=l arg (1 first-order polynomials. Let
Q'ke-iW),
we begin by proving the minimum-phase property for
Q'
D(z) = Z-l - a* = e(z)
c.
re i9 ,
r < I
Z-l - a* I c. C(z)E(z)
1-
az-
(3.10.1)
136
Chapter 3 Parametric Methods for Rational Spectra
(a) Show that the zero of D(z) is outside the unit circle, and that ID(w)1 = IC(w)l. (b) Show that - argE(w) = w
+ 2 tan- 1 [
r sinew - e) ] I - r cos(w - e)
Also, show that this function is increasing. (c) If a is real, conclude that -argD(w) ~ -argC(w) for 0 ~ w ~ Jr, which justifies the name minimum phase for C (z) in the first-order case. (d) Generalize the first-order results proven in parts (a)-(c) to polynomials A(z) and B(z) of arbitrary order; in this case, the ak either are real valued or occur in complex-conjugate pairs.
Exercise 3.2: Generating the ACS from ARMA Parameters In this chapter, we developed equations expressing the ARMA coefficients {a 2 , ai, bj} in terms of the ACS trek) }~-oo' Find the inverse map; that is, given a 2 , ai, ... , an, bi ... , bm, find equations to determine trek) }f'=-oo'
Exercise 3.3: Relationship between AR Modeling and Forward Linear Prediction Suppose we have a zero-mean stationary process {y (t)} (not necessarily AR) with ACS {r(k)}f'=_oo' We wish to predict yet) by a linear combination of its n past values-that is, the predicted value is given by n
Yf(t) =
L
(-ak)y(t - k)
k=I
We define the forward prediction error as n
ef(t)
= yet) -
Yf(t)
=L
aky(t - k)
k=O
with ao = 1. Show that the vector ef = [al ... anf of prediction coefficients that minimizes the 2 t:. E{lef(t)1 } is the solution to (3.4.2). Show also that = (i.e., prediction-error variance in (3.4>2) is the prediction-error variance). that Furthermore, show that, if {y (t)} is an AR(p) process with p ~ n, then the prediction error is white noise and that
a;
aJ
aJ a;
for j > p where kj is thejth reflection coefficient defined in (3.5.7). Show that, as a consequence, ap+l, ..., an = O. Hint: The calculations performed in Section 3.4.2 and in Complement 3.9.2 will be useful in solving this problem.
Section 3.10
Exercises
137
Exercise 3.4: Relationship between AR Modeling and Backw ard Linea r Prediction Consider the signal {y (t)}, as in Exercise 3.3. This time, we will consider backward prediction-t hat is, we will predict Yet) from its n immediate future values : n
Yb(t) =
L
(-bk)y (t
+ k)
k=1
This equation has corresponding backward prediction etror eb(t) = yet) - Yb(t). Such backward prediction is useful in applications where noncausal processing is permi tted-f or example, when the data has been prerecorded and is stored in memory or on a tape and we want to make inferences on samples that precede the observed ones. Find an expres sion similar to (3.4.2) for the backward-prediction coefficient vector f)b = [b ... bnf. Find a relatio i nship between the f)b and the corresponding forward-prediction coefficient vector f)f. Relate the forward and backward prediction-error variances.
Exercise 3.5: Prediction Filters and Smoothing Filters The smoothing filter is a practically useful variation on the theme of linear prediction. A result of Exercises 3.3 and 3.4 should be that, for the forward and backward prediction filters n
A(z) = 1 + LakZ -k
n
and
B(z) = 1 + L
k=1
the prediction coefficients satisfy ak = Now consider the smoothing filter m
es(t)
~
.
.
":'. . ".,~
-
bZ
and the prediction-error variances are equal. m
k=1
(a) Derive a system of linear equations, similar to the forward and backw ard linear-prediction equations, that relate the smoothing filter coefficients, the smoothing predic tion-error variance = E {le s (t)1 2 }, and the ACS of yet). (b) For n = 2m, provide an example of a zero-mean stationary random process for which the minimum smoothing prediction-error variance is greater than the minimum forward prediction-error variance. Also provide a second example where the minim um smoothing filter prediction-error variance is less than the corresponding minimum forward predictionerror variance. (c) Assume m = n, but now constrain the smoothing prediction coeffic ients to be complexconjugate symmetric: Ck = d: for k = 1, ... , m. In this case, the two predic tion filters and the smoothing filter have the same number of degrees of freedom. Prove that the minimum smoothing prediction-error variance is less than or equal to the minimum (forward or backward) prediction-error variance. Hint: Show that the unconstrained minim um smoothing error variance solution (where we do not impose the constraint Ck = d:) satisfies Ck = d"k anyway.
a;
.
k=!
= LCky (t -k) +y(t) + Ldky (t +k) k=l
.~
bkZ- k
~".
".j:
l Spectra ic Methods for Rationa Chapter 3 Parametr
138
l Flatness iction Er ro r an d Spectra ed Pr um im in M n ee tw (k )} and PS D p be process, with ACS {r AR Exercise 3.6: Relationshi an y ril ssa ce ne t parameters {y (t) }, no ocess 4.6) for a; and Consider a random pr for ye t) by solving (3. l de mo ) (n AR an d ¢y (w ). We fin del, generate an AR PS D mo
en.
These
ise, we explore the ~_oo' In this exerc )} (k AR {r by te tra ns fo nn we deno ) an d ¢AR(W). whose inverse Fourier and that between 0 and Ikil < 1 for i = 1, ...• n. (A) aI, ... ,an such that > 0 and all the zeroes of A(z) are inside the unit circle.
a;.
a;
Find the mapping from each parameterization to the others. (Some of these have already been derived in the text and in the previous exercises.) .:~ : .~~. ,.",
".
?',.
Chapter 3 Parametric Methods for Rational Spectra
140
tion Coefficients Exercise 3.10: An Alternative Proof of the Stability Property of Reflec t using the expres sion that the k that minimizes (3.9.12) must be such that Ikp I :s 1, withou Prove (3.9.15) for
p
kp • Hint:
Write the criterion in (3.9.12) as
~~
":r where
N
1
EO= Z (t)
'Lo
.....
.~;
,",,,
2(N - p) t=p+I
= [
ej,p-I
(t)
eb,p-l (t
- 1)
]T
and show that if Ikpl > 1 thenf (kp) > f(l/k; ).
Sequence for an MA Exercise 3.11: Recurrence Properties of the Reflection Coefficient Model 0 for i > n (see Exerc ise 3.3) For an AR proces s of order n, the reflection coefficients satisfy ki = = 0 for k > O. Since an MA and the ACS satisfies the linear recurr ence relatio nship A(z)r (k) wonde r wheth er a recurr ence process of order m has the proper ty that rei) = 0 for i > m, we might
process. We will invest igate relationship holds for the reflection coefficients corres pondin g to a MA this "conje cture" for a simple case. I satisfies the relationship Consi der an MA proces s of order 1 with param eter bl' Show that IR n
n2:2 ce satisfies the recurr ence Show that kn = (-r(1) )n /IR n I and that the reflection coefficient sequen relationship
1
reO)
1
kn = - rO) kn-l -
r*(1)
1
(3.10.5)
r(1) k n- 2
the solution to (3.10.5) for with appropri,ate initial conditions (and state them). Show that Ibil < 1 is (3.10.6) that kn = 1/n. This sequen ce decays expon entiall y to zero. When bi = -1, show that the residu e is small It has been shown that, for large n, B (z )kn := 0, where ~ 0 means for MA proces ses of order compa red to the kn terms [GEORGIOU 1987]. This result holds even a means of estima ting the bk higher than 1. Unfor tunate ly, the result is of little practical use as coefficients, becaus e, for large n, the kn values are (very) small.
-
;-::
.
"-:1.' .~
'.
Section 3.10
c··...
141
Exercises
Exercise 3.12: Asymptotic Variance of the ARMA Spectral Estimator Consider the ARMA spectral estimator (3.2.2) with any consistent estimate of (j2 and {ai, bj }. For simplicity, assume that the ARMA parameters are real; however, the result holds for complex ARMA processes as well. Show that the asymptotic (for large data sets) variance of this spectral estimator can be written in the form (3.10.7) where C (cv) = ({JT (cv)P({J(cv). Here, P is the covariance matrix of the estimate of the parameter vector [(j2, aT, bT]T, and the vector ((J(cv) has an expression that is to be found. Deduce that (3.10.7) has the same form as the asymptotic variance of the periodogram spectral estimator but with the essential difference that, in the ARMA-estimator case, C (cv) goes to zero as the number of data samples processed increases (and that C(cv) in (3.10.7) is a function of cv). Hint: Use a Taylor series expansion of ~(cv) as a function of the estimated parameters {B2, bj}. (See, for example, equation (B .1.1) in Appendix B.)
ai,
Exercise 3.13: Filtering Interpretation of Numerator Estimators in ARMA Estimation An alternative method for estimating the MA part of an ARMA PSD is as follows: Assume we have estimated the AR coefficients (e.g., from equation (3.7.2) or (3.7.4». We filter yet) by A(z) to formf(t): n
f(t) = yet)
+L
aiy(t - i),
t
= n
+ 1, ... , N.
i=l
Then estimate the ARMA PSD as
r
"m (k)e- iWk ¢(cv) = Lk=-n:. f IA(cv) 12 A
where rf(k) are the standard ACS estimates for f(t). Show that this estimator is quite similar to (3.7.8) and (3.7.9) for large N. Exercise 3.14: An Alternative Expression for ARMA Power Spectral Density Consider an ARMA(n, m) process. Show that 2 B (z)B*(l/z*)
¢(z) = (j A(z)A*(l/z*)
can be written as
C (z)
C*(l/z*)
¢(z) = A(z)
+ A*(l/z*)
where
max(m,n)
C(z) =
L
k=O
CkZ-
k
(3.10.8)
142
Chapter 3 Parametric Methods for Rational Spectra
Show that the polynomial C (z) satisfying (3.10.8) is unique, and find an expression for Ck in terms of {a;} and {r(k)}. Equation (3.10.8) motivates an estimation procedure alternative to that in equations (3.7.8) and (3.7.9) for ARMA spectral estimation. In the alternative approach, we first estimate the AR coefficients {a j }7=1 (using, for example, equation (3.7.2)). We then estimate the Ck coefficients, using the formula found in this exercise, and finally we insert the estimates ak and Ck into the right-hand side of (3.10.8) to obtain a spectral estimate. Prove that this alternative estimator is equivalent to that in (3.7.8)-(3.7.9) under certain conditions, and find conditions on {tlk} so that they are equivalent. Also, compare (3.7.9) and (3.10.8) for ARMA(n, m) spectral estimation when m < n. Exercise 3.15: Pade Approximation A minimum-phase (or causally invertible) ARMA(n, m) model B(z)/A(z) can be represented equivalently as an ARC 00) model 1/C (z). The approximation of a ratio of polynomials by a polynomial of higher order was considered by Pade in the late 18OOs. One possible application of the Pade approximation is to obtain an ARMA spectral model by first estimating the coefficients of a high-order AR model, then solving for a (low-order) ARMA model from the estimated AR coefficients. In this exercise, we investigate the model relationships and some consequences of truncating the AR model polynomial coefficients. Define A(z) = I + alz- 1 + + anz-n B(z) = 1 + blZ- 1 + + bmz-m e(z) = 1 + CIZ- 1 + C2Z- 2 + ...
(a) Show that
k=O l:::sk:::Sn k>n
where we assume that any polynomial coefficient is equal to zero outside its defined range. (b) Using the equ~tions above, derive a procedure for computing the aj and bj parameters from a given set of {ck parameters. Assume m and n are known. (c) These equations give an exact representation using an infinite-order AR polynomial. In the Pade method, an approximation to B(z)/A(z) = l/C(z) is obtained by truncating (setting to zero) the Ck coefficients for k > m + n. Suppose a stable minimum-phase ARMA(n, m) filter is approximated by an AR(m + n) filter by using the Pade approximation. Give an example to show that the resulting AR approximation is not necessarily stable. (d) Suppose a stable AR(m + n) filter is approximated by a ratio Bm (z)/A n (z), as in part (b). Give an example to show that the resulting ARMA approximation is not necessarily stable.
l;":-o
~: ;,~.
~.
t ~.
Compu ter Exercises
143
Exercise 3.16: (Non)Uniqueness of Fully Param eteriz ed ARM A Equat ions ~~t~~Ping filter (or transfer function) of the ARMA equation (3.8.1) is given by the matrix
I
B(z)
f~ ~.
= A-1(z )B(z),
(ny x ny)
(3.10.9)
where z is a dummy variable, and ~
A(z) = I +AIZ - 1 + B(z) = 1+ BIz- 1 +
+ Apz-P + Bpz-P
(If the AR and MA orders, n and m, are different, then p = max(m, n).) Assume that A(z) and B (z) are "fully parameterized" in the sense that all elements of the matrix coefficients {Ai, Bj} are unknown. The matrix fraction description (MFD) (3.10.9) of the ARMA shaping filter is unique if and only if there exist no matrix polynomials A(z) and B(z) of degree p and no matrix polynomial ". L(z) =f. I such that A(z) = L(z)A (z)
,
B(z)
= L(z)B(z)
(3.10.10)
This can be verified by making use of (3.10.9). (See, for example, (KAILA TH 1980].) Show that the above uniqueness condition is satisfied for the fully param eterized MFD if and only if rank[Ap Bpl = ny
(3.10.11)
Comment on the character of this condition: Is it restrictive or not?
COMPUTER EXERCISES Tools for AR, MA, and ARMA Spect ral Estim ation: The text website www .pren hall. com/ stoic a contains the follow ing MATLAB functions for use in computing AR, MA, and ARMA spectral estimates and selecting the model order. For the first four functions, y is the input data vector, n is the desire d AR order, and m is the desired MA order (if applicable). The outputs are a, the vector [iiI, ... , Qnf of estimated AR parameters, b, the vector [bI, ... ,bmf of MA parameters (if applicable), and sig2, the noise variance estimate &2. Variable definitions specific to particular functio ns are given here:
• [a,si g2]= yule walk er{y ,n) The Yule-Walker AR method given by equation (3.4.2). • [a,si g2]= lsar{ y,n) The covariance least-squares AR method given by equation (3.4.12).
:i
II ;. H
j:
I ~; L
Y:>;.:I~:·
....
144
Chapter 3
Parametric Methods for Rational Spectra
• [a,gamrna]=mywarma(y,n,m,M)
"
.
,:~~ ~~
The modified Yule-Walker-based ARMA spectral estimate given by equation (3.7.9), where:~ the AR coefficients are estimated from the overdetermined set of equations (3.7.4) With,j W = I. Here, M is the number of Yule-Walker equations used in (3.7.4), and gamma is theA " " ]T . vector [ yo,···, Ym • [a,b,sig2]=lsarma(y,n,m,K) The two-stage least-squares ARMA method given in Section 3.7.2; K is the number of AR parameters to estimate in Step 1 of that algorithm. • order=armaorder(mo,sig2,N,nu) Computes the AlC, AlCc , GIC, and BIC model-order selections for general parameterestimation problems. See Appendix C for details on the derivations of these methods. Here, mo is a vector of possible model orders, sig2 is the vector of estimated residual variances corresponding to the model orders in mo, N is the length of the observed data vector, and nu is a parameter in the GIC method. The output 4-element vector order contains the model orders selected by using AlC, AlCc , GIC, and BIC, respectively.
Exercise C3.17: Comparison of AR, ARMA, and Periodogram Methods for ARMA Signals In this exercise, we examine the properties of parametric methods for PSD estimation. We will use two ARMA signals, one broadband and one narrowband, to illustrate the perfonnance of these parametric methods. Broadband ARMA Process.
Generate realizations of the broadband ARMA process
yet)
with
(]'2
Bl (z) =- e(t) Al (z)
= 1 and Al (z)
= 1 - 1.3817z- 1 + 1.5632z- 2
Bl (z)
= 1 + O.3544z- 1 + O.3508z- 2 + O.1736z- 3 + O.2401z-4
-
O.8843z- 3 + 0.4096z- 4
Choose the number of samples as N = 256. (a) Estimate the PSD of the realizations by using the four AR and ARMA estimators described above. Use AR(4), AR(8), ARMA(4,4), and ARMA(8,8); for the MYW algorithm, use both M = n and M = 2n; for the LS AR(MA) algorithms, use K = 2n. Illustrate the perfonnance by plotting ten overlaid estimates of the PSD. Also, plot the true PSD on the same diagram. In addition, plot pole or pole-zero estimates for the various methods. (For the MYW method, the zeroes can be found by spectral factorization of the numerator; comment on the difficulties you encounter, if any.) (b) Compare the two AR algorithms. How are they different in perfonnance?
compu ter Exercises
145
(c) Compare the two ARMA algorithms. How does M affect performance of the MYW algorithm? How do the accuracies of the respective pole and zero estima tes compare? (d) Use an ARMA(4,4) model for the LS ARMA algorithm, and estimate the PSD of the realizations for K = 4, 8, 12, and 16. How does K affect performance of the algorithm? (e) Compare the lower-order estimates with the higher order estima tes. In what way(s) does increasing the model order improve or degrade estimation performance ? (f) Compare the AR to the ARMA estimates. How does the AR(8) model perform with respect to the ARMA(4,4) model and the ARMA(8,8) model? (g) Compare your results with those from the periodogram method on the same process (from Exercise C2.21 in Chapter 2). Comment on the difference between the methods with respect to variance, bias, and any other relevant properties of the estimators you notice.
Narro wban d ARMA Process.
Generate realizations of the narrowband ARMA process B2(Z) yet) = - - e(t) A2(Z)
with
(J"2
= 1 and
= 1 - 1.640 8z- 1 + 2.2044z -2 B2(Z) = 1 + 1.585 7z- 1 + 0.960 4z- 2 A2(Z)
1.4808z -3
+ O.8145z- 4
(a) Repeat the experiments and comparisons in the broadband examp le for the narrowband process; this time, use the following model orders: AR(4), AR(8) , AR(12), AR(16), ARMA(4,2), ARMA(8,4), and ARMA(l2,6). (b) Study qualitatively how the algorithm performances differ for narrow band and broadband data. Comment separately on performance near the spectral peaks and near the spectral valleys.
Exercise C3.18: AR and ARMA Estimators for Line-Spectral Estimation
The ARMA methods can also be used to estimate line spectra. (Estim ation of line spectra by other methods is the topic of Chapter 4.) In this application, AR(MA) techniques are often said to provide superresolution capabilities, because they are able to resolve sinusoids spaced too closely in frequency to be resolved by periodogram-based methods. We again consider the four AR and ARMA estimators described previo usly. (a) Generate realizations of the signal y(t)
= 10 sin(0.24rrt + cpd + 5 sin(O.26rrt + ~) + e(t),
t=I, ... ,N
where e(t) is (real) white Gaussian noise with variance 0"2 and where CPI, ~ are independent random variables each uniformly distributed on [0, 2rr]. From the results in Chapter 4, we
146
Chapter 3 Parametric Methods for Rational Spectra
find the spectrum of y (t) to be
+ <S(w + 0.24n)] 0.26n) + <S(w + 0.26n)] + a 2
¢(w) = 50n [<S(w - 0.24n)
+ 12.5n [<S(w -
(b) Compute the "true" AR polynomial (using the true ACS sequence; see equation (4.1.6)), using the Yule-Walker equations for both AR(4), AR(l2), ARMA(4,4) and ARMA(l2,12) models when a 2 = 1. This experiment corresponds to estimates obtained as N ~ 00. Plot 1/IA(w)12 for each case, and find the roots of A(z). Which methodes) are able to resolve the two sinusoids? (c) Consider now N = 64, and set a 2 = 0; this corresponds to the case of finite data length but infinite SNR. Compute estimated AR polynomials, using the four spectral estimators and the AR and ARMA model orders described above; for the MYW technique, consider both M = n and M = 2n, and, for the LS ARMA technique, use both K = n and K = 2n. Plot 1/lA(w)[2, overlaid, for 50 different Monte Carlo simulations (using different values of CPI and CP2 for each). Also, plot the zeroes of A(z), overlaid, for these 50 simulations. Which methodes) are reliably able to resolve the sinusoids? Explain why. Note that as a 2 ~ 0, yet) corresponds to a (limiting) AR(4) process. How does the choice of M or K in the ARMA methods affect resolution or accuracy of the frequency estimates? (d) Obtain spectral estimates (&2IB(w)1 2/IA(w)!2 for the ARMA estimators, and &2/IA(w)j2 for the AR estimators) for the four methods when N = 64 and a 2 = 1. Plot ten overlaid spectral estimates and overlaid polynomial zeroes of the A(z) estimates. Experiment with different AR and ARMA model orders to see whether the true frequencies are estimated more accurately; note also the appearance and severity of "spurious" sinusoids in the estimates for higher model orders. Which methodes) give reliable "superresolution" estimation of the sinusoids? How does the model order influence the resolution properties? Which method appears to have the best resolution? You might want to experiment further by changing the SNR and the relative amplitudes of the sinusoids to gain a better understanding of the relative differences between the methods. Also, experiment with different model orders and parameters K and M to understand their impact on estimation accuracy. (e) Compare the estimation results with periodogram-based estimates obtained from the same signals. Discuss differences in resolution, bias, and variance of the techniques.
Exercise C3.19: Model Order Selection for AR and ARMA Processes In this exercise, we examine four methods for model order selection in AR and ARMA spectral estimation. We will experiment with both broadband and narrowband processes. As discussed in Appendix C, several important model order selection rules have the general form (see (e.8.1 )-(C.8.2» (3.10.12)
compu ter Exercises
147
with different penalt y coefficients 17(n, N) for the different methods: AIC: AICc : GIC: BIC:
17(n, N) = 2 17(n,N )
N =2 --N -n-1
17(n, N) = v (e.g., v = 4) 17(n, N) = InN ",
(3.10.13)
The tenn Inpn(y , en) is the log-likelihood of the observed data vector y, given the maxim umlikelihood (ML) estima te of the parameter vector () for a model of order n (where n is the total numbe r of estimated real-valued parameters in the model ); for the case of AR, MA. and ARMA models, a large-sample approximation for - 2In Pn (y , en) that is commonly used for order selection (see, e.g., [LJUNG 1987; SODERSTROM AND STOICA 1989]) is given by -2lnp n (Y, en) ~ N
&; + constant
(3.10.14)
&;
where is the sample estimate of (12 in (3.2.2) corresponding to the model of order n. The selected order is the value of n that minimizes (3.10.12). The order selection rules above, althou gh derived for ML estimates of (), can be used even with approx imate ML estimates of (), albeit with some loss of perfon nance.
Broadband AR Process.
Generate 100 realizations of the broadb and AR process 1
Y (t) = - - e(t) Al (z) with
(12
= 1 and Al (z)
= 1-
1.381 7z- 1 + 1.5632z- 2
-
0.8843z- 3 + 0.4096z- 4
Choose the numbe r of samples as N = 128. For each realization, (a) Estima te the model parameters, using the LS AR estima tor and using AR model orders from 1 to 12. (b) Find the model orders that minimize the AIC, Alec, GIC (with v = 4), and BIC criteri a (See Appendix C.) Note that, for an AR model of order m, n = m + 1. (c) For each of the four order selection methods, plot a histog ram of the selected orders for the 100 realizations. Comm ent on their relative perfon nance.
Repeat this experiment, using N = 256 and N = 1024 samples. Discus s the relative perfon nance of the order selection metho ds as N increases.
fL~S 4.".·.
148
Chapter 3 Parametric Methods for Rational Spectra
Narrowband AR Process.
Repeat the previous experiment, using the narrowband AR process;~ ... ~~
.::1
1
.:/~
yet) = - - e(t) A2(Z)
:: ,OJ'
with a 2 = 1 and A2(Z) = 1 - 1.6408z- 1 + 2.2044z- 2 - 1.4808z- 3 + 0.8145z- 4
Compare the narrowband AR and broadband AR order selection results and discuss the relative order selection performance for these two AR processes. Broadband ARMA Process.
Repeat the broadband AR experiment by using the broadband
ARMA process y(t)
Bl (z)
= - - e(t) Al (z)
with a 2 = 1 and Al (z) = 1 - 1.3817z- 1 + 1.5632z- z - 0.8843z- 3 Bl (z) =
+ 0.4096z-4 1 + O.3544z- 1 + 0.3508z- z + 0.1736z- 3 + 0.2401z-4
For the broadband ARMA process, use N = 256 and N = 1024 data samples. For each value of N, find ARMA(m, m) models (so n = 2m + 1 in equation (3.10.12)) for m = 1, ... ,12. Use the two-stage LS ARMA method with K = 4m to estimate parameters. Narrowband ARMA Process.
Repeat the broadband ARMA experiment, but using the narrow-
band ARMA process B2(Z) yet) = - - e(t) Az(z)
with a 2 = 1 and Az(z) = 1 - 1.6408z- 1 + 2.2044z- z - 1.4808z- 3 Bz(z)
+ 0.8145z-4
= 1 + 1. 1100z- 1 + 0.4706z- z
Find ARMA(2m, m) models for m = 1, ... ,6 (so n = 3m + 1 in equation (3.10.12)), using the two-stage LS ARMA method with K = 8m. Compare the narrowband ARMA and broadband ARMA order selection results and discuss the relative order selection performance for these two ARMA processes.
:
~ ..
·,t.
computer Exercises
149
Exercise C3.20: AR and ARMA Estimators Applied to Measured Data Consider the data sets in the files sunspotdata.mat and lynxdata.mat. These files can be obtained from the text website www.prenhall.com!stoica. Apply your favorite AR and ARMA estimator(s) (for the lynx data, use both the original data and the logarithmically transformed data, as in Exercise C2.23) to estimate the spectral content of these data. You will also need to select appropriate model orders m and n; see, for example, Exercise C3 .19. As in Exercise C2.23, try to answer the following questions: Are there sinusoidal components (or periodic structure) in the data? If so, how many components and at what frequencies? Discuss the relative strengths and weaknesses of parametric and nonparametric estimators for understanding the spectral content of these data. In particular, discuss how a combination of the two techniques can be used to estimate the spectral and periodic structure of the data.
4 Parametric Me th od s for Line Sp ec tra
4.1 INTRODUCTION
and geophysical seismology, In several applications, particularly in communications, radar, sonar, the signals dealt with can be described well by the sinusoidal model n
yet) = x(t)
+ e(t)
X
(t) =
L CYkei(Wkt+rpk)
(4.1.1)
k=!
fwd, { 2Jr/ N. It should then be possible to resolve all n sine waves in the noisy signal and to obtain reasonable approximations {Wk} to {wd by evaluating the function in (4.3.8) at the points of a grid corresponding to the sampling of each frequency variable, as in the FFT: 2Jr N
(k = 1, ... , n)
j = 0, ... ,N - 1
Wk = - j
(4.3.13)
Of course, a direct application of such a grid method for the approximate maximization of (4.3.8) would be computationally burdensome for large values of n or N. However, it can be greatly simplified, as described next. The p, k element of the matrix B*B occurring in (4.3.8), when evaluated at the points of the grid (4.3.13), is given by [B*B]p,k
=N
for p = k
(4.3.14)
and
=0
(4.3.15)
for p =1= k
which implies that the function to be minimized in (4.3.8) has, in such a case, the following form: n
1
LN k=l
N
Ly(t)e1=1
2
iwkt
(4.3.16)
Section 4.3
161
Nonlinear Least-Squares Method
The previous additive decomposition in n functions of WI, ... ,(.tJn (respectively) leads to the conclusion that {Wk } (which, by definition, maximize (4.3.16) at the points of the grid (4.3.13)) are given by the n largest peaks of the periodogram. To show this, let us write the function in (4.3.16) as n
g(WI, ... , Wn) =
L ¢p(Wk) k:;;;1
....
where ¢P (w) is once again the periodogram. Observe that
and
Hence, the maximum points of (4.3.16) satisfy
It follows that the set of maximizers of (4.3.16) is given by all possible combinations of n elements from the periodogram's peak locations. Now, recall the assumption made that {Wk}, and hence their estimates {10k }, are distinct. Under this assumption the highest maximum of g(WI, ... ,wn ) is given by the locations of the n largest peaks of ¢p(w), which is the desired result. These findings are summarized'as follows: Under the condition (4.3.12), the unmodified periodogram resolves all the n sine waves present in the noisy signal. Furthermore, the locations {Wk} of the n largest peaks in the periodogram provide O(ljN) approximations to the NLS frequency estimates {wd. In the case of n = 1, we have WI = WI exactly.
(4.3.17)
The fact that the differences {Wk - 10k } are 0 (1 j N) means, of course, that the computationally convenient estimates {Wk} (derived from the periodogram) will generally have an inflated variance compared to {wd. However, {Wk} can at least be used as initial values in a numerical implementation of the NLS estimator. In any case, this discussion indicates that, under (4.3.12), the periodogram performs quite well as a frequency estimator (which actually is the task for which it was introduced by Schuster more than a century ago!). In the next sections, we present several "high-resolution" methods for frequency estimation, which exploit the covariance matrix models. More precisely, all of these methods derive frequency estimates by exploiting the properties of the eigendecomposition of data covariance matrices and, in particular, the subspaces associated with those matrices. For this reason, these methods are
162
Chapter 4
Parametric Methods for Line Spectra
sometimes referred to by the generic name of subspace methods. However, in spite of their common subspace theme, the methods are quite different, and we will treat them in separate sections. The main features of these methods can be summarized as follows: (i) Their statistiCal performance is close to the ultimate performance corresponding to the NLS method (and given by the Cramer-Rao lower bound, (4.3.9)); (ii) unlike the NLS method, these methods are not based on multidimensional search procedures; and (iii) they do not depend on a "resolution condition," such as (4.3.12); thus, they could generally have a resolution threshold lower than that of the periodogram. The chief drawback of these methods, as compared with the NLS method, is that their performance significantly degrades if the measurement noise in (4.1.1) cannot be assumed to be white.
4.4 HIGH-ORDER YULE-WALKER METHOD
The high·order Yule-Walker (HOYW) method of frequency estimation can be derived from the ARMA model of the sinusoidal data, (4.2.3), much as can its counterpart in the rational PSD case. (See Section 3.7 and [CADZOW 1982; STOICA, SODERSTROM, AND TI 1989; STOICA, MOSES, SODERSTROM, AND LI 1991].) Actually, the HOYW method is based on an ARMA model of an order L higher than the minimal order n, for a reason that will be explained shortly. If the polynomial A(z) in (4.2.3) is multiplied by any other polynomial A(z), say of degree equal to L - n, then we obtain a higher order ARMA representation of our sinusoidal data, given by yet)
+ blY (t -
1)
+ ." + bLy(t -
L) = e(t)
+ bIe(t -
1)
+ ... + bLe(t -
L)
(4.4.1)
or B(z)y(t) = B(z)e(t)
where L
B(z) = 1 + LbIcZ- 1c
A
(4.4.2)
A(z)A(Z)
1c=1
Equation (4.4.1) can be rewritten in the following more condensed form (with obvious notation): [Yet) y(t - 1) . .. y(t - L)]
[~ ]
= e(t)
+ ... + bLe(t -
L)
(4.4.3)
Premultiplying (4.4.3) by [Y*(t - L - 1) .. . y*(t - L - M)]T and taking the expectation leads to (4.4.4)
0. The similarity of (4.4.4) to the Yule-Walker system of equations encoun tered in Chapter 3 (see equation (3.7.1)) is more readily seen if (4.4.4) is rewritten in the follow ing more detailed form:
r~1)
r(L) ... [
r(L
+:W - 1)
] b
r(L~ 1)
='_,- [
reM)
]
(4.4.5)
r(L+ M)
Owing to this analogy, the set of equations (4.4.5) associated with the noisy sinusoidal signal {y(t)} is said to form a HOYW system. The HOYW matrix equation (4.4.4) can also be obtained directly from (4.2.8). For any L :::: n and any polynomial A(z) (used in the defining equation, (4.4.2), for b), the elements of the vector (4.4.6) are equal to zero. Indeed, the kth row of (4.4.6) is
[1
e- iWk ... e- iLaJk ]
[~ ].
L
1 + Lbpe- iWkP p=l
(4.4.7) (since A(Wk) = 0, cf (4.2.3)). It follows from (4.2.8) and (4.4.7) that the vector [ c null space of r (see Definition D2 in Appendix A),
.:.';'~
>.
.;~ -.
~]
lies in the
which is the desired result, (4.4.4). The HOYW system of equations just derived can be used for freque ncy estimation in the following way: By replacing the unavailable theoretical covariances {r(k)} in (4.4.5) by the sample covariances {f(k)}, we obtain
peL) [
r(L+~ -1)
... f (1) ] " [ :. b~reM)
r(L + 1) :.
]
(4.4.8)
r(L+ M)
Owing to the estimation errors in {f(k)}, the matrix equation (4.4.8) carmot hold exactly in the general case, for any vector b, as is indicated by the use of the "approximate equality" symbol ~.
,
. . ...i.• -.
164
Chapter 4
We can solve (4.4.8) for polynomial
Parametric Methods for Line Spectra
b in a least-squares sense that is detailed in what follows, then form the/' .....
~
f~ -~*
and, finally (in view of (4.2.3) and (4.4.2», obtain frequency estimates {Wk} as the angular positions of the n roots of (4.4.9) that are located nearest the unit circle.:;~ It can be expected that increasing the values of M and L results in improved frequency'~ ... estimates. Indeed, by increasing M and L we use higher lag covariances in (4.4.8), which couldJ bear "additional information" on the data at hand. Increasing M and L also has a second, more subtle, effect that is explained next. :~ Let Q denote the M x L covariance matrix in (4.4.5) and, similarly, let Q denote the sample covariance matrix in (4.4.8). It can be seen from (4.2.8) that ..:~; ~~
~
_to:
.. ':;:
rank(Q)
=n
for M, L 2: n
(4.4.10)
On the other hand, the matrix Q has full rank (almost surely) rank(Q) = min(M , L)
(4.4.11)
owing to the random errors in {r(k)}. However, for reasonably large values of N, the matrix Q is close to the rank-n matrix Q, since the sample covariances {r(k)} converge to {r(k)} as N increases (as is shown in Complement 4.9.1). Hence, we may expect the linear system (4.4.8) to be ill conditioned from a numerical standpoint. (See the discussion in Section A.8.1 in Appendix A.) In fact, there is compelling empirical evidence that any LS procedure that estimates b directly from (4.4.8) has very poor accuracy. In order to overcome the previously described difficulty, we can make use of the a priori rank information (4.4.10). However, some preparations are required before we shall be able to do so. Let "
Q
= U~V* =A [VI
U2]
~~
n
(4.4.12)
M-n
denote the singular value decomposition (SVD) of the matrix Q. (See Section A.4 in Appendix A; also [SODERSTROM AND STOICA 1989; VAN HUFFEL AND VANDEWALLE 1991] for general discussions on the SVD.) In (4.4.12), U is an M x M unitary matrix, V is an L x L unitary matrix, and :E is an M x L diagonal matrix. Q is close to a rank-n matrix, so L2 in (4.4.12) should be close to zero, which implies that (4.4.13)
Section 4.4
165
High-Order Yule-Walker Method
n.
In fact, it can be proven that this Qn is the best (in should be a good approximation for the Frobenius-norm sense) rank-n approximation of (Result R18 in Appendix A). Hence, in accordance with the rank information (4.4.10), we can use Qn in (4.4.8) in lieu of Q. The so-obtained rank-truncated HOYW system of equations
n
(4.4.14)
can be solved in a numerically sound way by using a simple LS procedure. It is readily verified that (4.4.15) is the pseudoinverse of Qn. (See Definition 015 and Result R32.) Hence, the LS solution to (4.4.14) is given by
(4.4.16)
The additional bonus for using Qn instead of Q in (4.4.8) is an improvement in the statistical accuracy of the frequency estimates obtained from (4.4.16). This improved accuracy is explained by the fact that Qn should be closer to n than Q is; the improved covariance matrix estimate obtained by exploitation of the rank information (4.4.10), when used in the HOYW system of equations, should lead to refined frequency estimates. We remark that a total least-squares (TLS) solution for b can also be obtained from (4.4.8). (See Definition D17 and Result R33 in Appendix A.) A TLS solution makes sense, because we have errors in both Q and the right-hand-side vector in equation (4.4.8). In fact the TLS-based estimate of b is often slightly better than the estimate discussed above, which is obtained as the LS solution to the rank-truncated system of linear equations in (4.4.14). We next return to the selection of Land M. As M and L increase, the information brought into the estimation problem under study by the rank condition (4.4.10) is more and more important, and hence the corresponding increase of accuracy is more and more pronounced. (For instance, the information that a 10 x 10 noisy matrix has rank one in the noise-free case leads to more relations between the matrix elements, and hence to more "noise cleaning," than if the matrix were 2 x 2.) In fact, for M = n or L = n, the rank condition is inactive; = Q in such a case. The previous discussion gives another explanation of why the accuracy of the frequency estimates obtained from (4.4.16) may be expected to increase with increasing M and L. The next box summarizes the HOYW frequency estimation method. It should be noted that the operation in Step 3 of the HOYW method is implicitly based on the assumption that the estimated "signal roots" (i.e., the roots of A(z) in (4.4.2)) are always closer to the unit circle than the
nn
nn
166
Chapter 4
Parametric Methods for Line Spectra
estimated "noise roots" (i.e., the roots of A(z) in (4.4.2». It can be shown that as N ~ 00, all roots of A(z) are strictly inside the unit circle; see, for example, Complement 6.5.1 and [KUMARESAN AND TUFfS 1983]. This property cannot be guaranteed in finite samples, but there is empirical evidence that it holds quite often. In those rare cases where it fails to hold, the HOYW method produces spurious (or false) frequency estimates. The risk of producing spurious estimates is the price paid for the improved accuracy obtained by increasing L. (Note that, for L = n, there is no "noise root," and hence no spurious estimate can occur in such a case.) The risk of false frequency estimation is a problem that is common to all methods that estimate the frequencies from the roots of a polynomial of degree larger than n, such as the MUSIC and Min-Norm methods, to be discussed in the next two sections.
The HOYW Frequency Estimation Method
Step 1. Compute the sample covariances {r(k)}t~~. We may set L ~ M and select the
values of these integers so that L + M is a fraction of the sample length (such as N/3). Note that, if L + M is set to a value too close to N, then the higher lag covariances required in (4.4.8) cannot be estimated in a reliable way. Step 2. Compute the SVD of (4.4.12), and compute b by using (4.4.16). Step 3. Isolate the n roots of the polynomial (4.4.9) that are closest to the unit circle and obtain the frequency estimates as the angular positions of these roots.
n,
4.5 PISARENKO AND MUSIC METHODS The MUltiple SIgnal Classification (or MUltiple SIgnal Characterization) (MUSIC) method [SCHMIDT 1979; BIENVENU 1979] and Pisarenko's method [PISARENKO 1973] (a special case of MUSIC, as is explained next) are derived from the covariance model (4.2.7) with m > n. Let Ai ~ A2 ~ ... ~ Am denote the eigenvalues of R in (4.2.7), arranged in nonincreasing order, and let {Sl, ... , sn} be the orthonormal eigenvectors associated with {AI, ... , An}, and {gl' ... , gm-n} a set of orthonormal eigenvectors corresponding to {An+l, ... , Am}. (See Appendix A.) Since rank(APA*) = n
(4.5.1)
it follows that APA * has n strictly positive eigenvalues, the remaining (m - n) eigenvalues all being equal to zero. Combining this observation with the fact that (see Result R5 in Appendix A) (k = 1, ... , m)
(4.5.2)
where {~klk=l are the eigenvalues of APA* arranged in nonincreasing order, leads to the following result: Ak > a 2 for k = 1, ... , n { Ak a 2 for k n + 1, ... , m
=
=
(4.5.3)
Section 4.5
167
Pisarenko and MUSIC Methods
The set of eigenvalues of R can hence be split into two subsets. Next, we show that the eigenvectors associated with each of these subsets, as introduced previously, possess some interesting properties that can be used for frequency estimation. Let
S=
[Sl,"
"sn]
(m
x n),
G=[gl, ... ,gm-n]
(mx(m-n))
(4.5.4)
From (4.2.7) and (4.5.3), we get at once An+l
(4.5.5)
RG = G o ' "
[
The first equality in (4.5.5) follows from the definition of G and {Ak!k=n+l' the second equality follows from (4.5.3), and the third from (4.2.7). The last equality in equation (4.5.5) implies that APA*G = 0, or (as the matrix AP has full column rank) (4.5.6)
A*G =0
In other words, the columns {gk} of G belong to the null space of A *, a fact which is denoted by gk E N(A*). Since rank(A) = n, the dimension of N(A*) is equal to m - n, which is also the dimension of the range space of G, R(G). It follows from this observation and (4.5.6) that R(G) =N(A*)
(4.5.7)
In words, (4.5.7) says that the vectors {gk} span both R(G) and N(A*). Now, by definition, S*G
=0
(4.5.8)
so we also have R(G) = N(s*); hence, N(s*) = N(A*). Since R(S) and R(A) are the orthogonal complements to N(s*) and N(A*), it follows that R(S)
= R(A)
(4.5.9)
We can also derive the equality (4.5.9) directly from (4.2.7). Set
(4.5.10)
•
line Spectra Chapter 4 Parametric Methods for
168
From
o]
= APA *S
+ 13 2S
(4.5.11)
An
we obtain
(4.5.12)
e dimension (equal R(S ) and R(A ) have the sam er, wev Ho ). R(A C ) R(S that which shows R(S ) and R(G ) are to (4.5.9) and (4.5.8), the subspaces ing Ow . ows foll .9) (4.5 ce, hen n)~ to y. ce and noise subspace, respectivel sometimes called the signal subspa from (4.5.6): The following key result is obtained n are the only solutions of the equatio The true frequency values {Wk}k=l > n. a*( w)G G*a (w) = 0 for any m
(4.5.13)
to prove that ows from (4.5.6). It only remains foll n atio equ this sfy sati } {Wk t tha The fact tion, with IV :f:. Wk ). Let IV denote another possible solu .13 (4.5 to s tion solu y onl the are AA .) Hence, {Wk }~=I projector onto R(G ). (See Section nal ogo onh the is * GG ), .13 (4.5 In (k = 1, ... , n). N(G *). However, the to R(G ), which means that a (IV) E nal ogo orth is (IV) a t tha lies imp ) ndent (4.5.13 wk)}k=l' Since n + 1 linearly indepe {a( of nt nde epe ind arly line is (IV) Vandennonde vector a present case, we subspace, which is N( G* ) in the nal nsio ime n-d an to ng belo not vectors can of is finished. (4.5.13) can exist; with this, the pro conclude that no other solution iiJ to in the following result to derive frequency estimates us vio pre the s use hm orit alg SIC The MU steps: nce matrix Ste p 1. Compute the sample covaria
R= ~
N
(4.5.14)
Ly (t) y* (t) l=m
nd G denote the matrices defined similarly to Sa and its eigendecomposition. Let S and of k {Sl, ... , sn} and {81, ... , 8m -n} G, but made from the eigenvectors frequency estimates as BIE NV ENU 1979]. Detennine 9; 197 IDT HM [SC ) SIC MU al ectr Ste p 2a. (Sp of the function the locations of the n highest peaks 1
WE
[-J r, Jr]
(4.5.15)
a*( w)G G* a(w )
the presence of eudospectrum," since it indicates "ps a ed call is ) .15 (4.5 es tim me (So . This fact may ied signal, but it is not a true PSD sinusoidal components in the stud ched to this variant of MUSIC.) explain the attribute "spectral" atta
Section 4.5
169
Pisarenko and MUSIC Methods
OR Step 2b. (Root MUSIC) [BARABELL 1983]. Determine frequency estimates as the angular positions of the n (pairs of reciprocal) roots of the equation (4.5.16) ,.
which are located nearest the unit circle. In (4.5.16), a(z) stands for the vector a(w), (4.2.4), with eiClJ replaced by z, so () = [1 ,z -1 , ... ,Z -(m-l)]T az
For m = n + 1 (which is the minimum possible value), the MUSIC algorithm reduces to the Pisarenko method, which was the earliest proposal for an eigenanalysis-based (or subspace-based) method of frequency estimation [PISARENKO 1973]: The Pisarenko method is MUSIC with m = n
+ 1.
(4.5.17)
In the Pisarenko method, the estimated frequencies are computed from (4.5.16). For m = n + 1, this 2(m - 1)-degree equation can be reduced to the following equation of degree m - I = n:
(4.5.18)
"-: "
\
..
The Pisarenko frequency estimates are obtained as the angular positions of the roots of (4.5.18). The Pisarenko method is the simplest version of MUSIC from a computational standpoint. In addition, unlike MUSIC with m > n + 1, the Pisarenko procedure does not have the problem of separating the "signal roots" from the "noise roots." (See the discussion on this point at the end of Section 4.4.) However, it can be shown that the accuracy of the MUSIC frequency estimates increases significantly with increasing m. Hence, the price paid for the computational simplicity of the Pisarenko method could be a relatively poor statistical accuracy. Regarding the selection of a value for m, this parameter may be chosen as large as possible, but not too close to N, in order to still allow a reliable estimation of the covariance matrix (for example, as in (4.5.14)). In some applications, the largest possible value that may be selected for m may also be limited by computational complexity considerations. Whenever the tradeoffbetween statistical accuracy and computational complexity is an important issue, the following simple ideas can be valuable. The finite-sample statistical accuracy of MUSIC frequency estimates may be improved by modifying the covariance estimator (4.5.14). For instance, R is not Toeplitz, whereas the true covariance matrix R is. We may correct this situation by replacing the elements in each diagonal of R with their average. The so-corrected sample covariance matrix can be shown to be the best (in the Frobenius-norm sense) Toeplitz approximation of R. Another modification of R, with the same purpose of improving the finite-sample statistical accuracy, is described in Section 4.8.
170
Chapter 4
Parametric Methods for Line Spectra
The computational complexity of MUSIC, for a given m, can be reduced in various ways. Quite often, m is such that m - n > n. Then, the computational burdens associated with both Spectral and Root MUSIC can be reduced by using I - SS* in (4.5.15) or (4.5.16) in lieu of GG*. (Note that SS* + GG* = I by the very definition of the eigenvector matrices.) The computational burden of Root MUSIC can be further reduced as explained next. The polynomial in (4.5.16) is a self-reciprocal (or symmetric) one: its roots appear in reciprocal pairs (pei(jJ, ~ei(jJ). On the unit circle z = eiw , (4.5.16) is nonnegative and, hence, may be interpreted as a PSD. These properties mentioned imply that (4.5.16) can be factored as (4.5.19) where a(z) is a polynomial of degree (m - 1) with all its zeroes located within or on the unit circle. We can then find the frequency estimates from the n roots of a(z) that are closest to the unit circle. Since there are efficient numerical procedures for spectral factorization, determining a(z), as in (4.5.19), and then computing its zeroes is usually computationally more efficient than finding the (reciprocal) roots of the 2(m - I)-degree polynomial (4.5.16). Finally, we address the issue of spurious frequency estimates. As implied by the result (4.5.13), for N ~ 00 there is no risk of obtaining false frequency estimates. However, in finite samples, such a risk always exists. Usually, this risk is quite small but it could become a real problem if m takes on large values. The key result on which the standard MUSIC algorithm, (4.5.15), is based can be used to derive a modified MUSIC that does not suffer from the spurious-estimate problem. In what follows, we explain only the basic ideas leading to the modified MUSIC method, without going into details of its implementation. (For such details, the interested reader may consult [STOICA AND SHARMAN 1990].) Let {Ck}Z==1 denote the coefficients of the polynomial A(z) defined in (4.2.3); that is,
n n
A(z)
= 1+
CIZ- 1
+ ... + cnz- = n
(1 - eiWkz- 1)
(4.5.20)
(m - n) x m
(4.5.21);;
k=l
Introduce the following matrix made from {Ck}:
1
Cl
It is readily verified that C*A = 0,
(m - n) x n
(4.5.22)
where A is defined in (4.2.4). Combining (4.5.9) and (4.5.22) gives C*S = 0,
(m - n)
xn
(4.5.23)
Section 4.6
171
Min-Norm Method
which is the key property here. The matrix equation (4.5.23) can be rewritten in the fonn
¢c
(4.5.24)
=f.J,
where the (m - n)n x n matrix ¢ and the (m - n)n x 1 vector the elements of S, and where
f.J,
are entirely determined from
c = [cl ... cnf By replacing the elements of S in ¢ and sample version of (4.5.24),
f.J,
by the corresponding entries of
(4.5.25)
S,
we obtain the
(4.5.26)
c
from which an estimate of c may be obtained by an LS or TLS algorithm; see Section A.8 for details. The frequency estimates can then be derived from the roots of the estimated polynomial (4.5.20) corresponding to c. Since this polynomial has a (minimal) degree equal to n, there is no risk for false frequency estimation.
4.6 MIN-NORM METHOD MUSIC uses (m - n) linearly independent vectors in R(G) to obtain the frequency estimates. Since any vector in R(G) is (asymptotically) orthogonal to {a (CVk)}k=l (el (4.5.7)), we may think of using only one such vector for frequency estimation. By doing so, we might achieve some computational saving, hopefully without sacrificing too much accuracy. The Min-Nonn method proceeds to estimate the frequencies along these lines [KUMARESAN AND TUFfS 1983]. Let 1] [
g
=
the vector in R(G), with first element equal to one, that has minimum Euclidean norm.
(4.6.1)
Then, the Min-Norm frequency estimates are determined as (Spectral Min-Norm). The locations of the n highest peaks in the pseudospectrum 1
(4.6.2)
Chapter 4 Parametric Metho ds for Line Spectra
172
or, alternatively, (Root Min-Norm). The angular positions of the n roots of the polynomial
(4.6.3) that are located nearest to the unit circle. its first element can It remains to find the vector in (4.6.1) and, in particular, to show that the specific selection always be normalized to 1. We will later comment on the reason behind vector is denoted by II . II. (4.6.1) of a vector in 'R(G). In the following, the Euclidean norm of a Partition the matrix S as }1 } m-l As
[~
(4.6.4)
] E 'R(G), it must satisfy the equation (4.6.5)
which, by using (4.6.4), can be rewritten as (4.6.6) dix A) by The minimum-norm solution to (4.6.6) is given (see Result R31 in Appen (4.6.7) assuming that the inverse exists. Note that (4.6.8)
::.'
. :. .'
2 remaining (n - 1) eigenand also that one eige~value of I - aa* is equal to 1 - lIa 11 and the exists if and only if values of I - aa* are equal to 1; it follows that the inverse in (4.6.7)
(4.6.9) of (4.6.1) in 'R(G). We If this condition is not satisfied, there will be no vector of the form postpone the study of (4.6.9) until we obtain a final-form expression for Under the condition (4.6.9), a simple calculation shows that
g.
(4.6.10)
Section 4.6
173
Min-Norm Method
Inserting (4.6.10) in (4.6.7) gives (4.6.11) which expresses g as a function of the elements of S. We can also obtain g as a function of the entries in
G. 'To do so, partition G as
~ [ fJ*G ] G= Since
SS* =
I -
(4.6.12)
GG* by the definition of the matrices S and G, it follows that (4.6.13)
Comparing the blocks in (4.6.13) makes it possible to express IIall 2 and and fJ, which leads to the following equivalent expression for g:
g = GfJ/llfJf If m - n
> n, then it is computationally more advantageous to obtain
Sa
as functions of G
(4.6.14)
g from (4.6.11); otherwise,
(4.6.14) should be used. Next, we return to the condition (4.6.9), which is implicitly assumed to hold in the previous derivations. As already mentioned, this condition is equivalent to rank(S *S) = n which, in tum, holds if and only if rank(S) = n
;
(4.6.15)
Now, it follows from (4.5.9) that any block of S made from more than n consecutive rows should have rank equal to n. Hence, (4.6.15) must hold at least for N sufficiently large. With this observation, the derivation of the Min-Norm frequency estimator is complete. The statistical accuracy of the Min-Norm method is similar to that corresponding to MUSIC. Hence, Min-Norm achieves MUSIC's performance at a reduced computational cost. It should be noted that the selection (4.6.1) of the vector in R(G), used in the Min-Nonn algorithm, is critical in obtaining frequency estimates with satisfactory statistical accuracy. Other choices of vectors in R(G) could give rather poor accuracy. In addition, there is empirical evidence that the use of the minimum-norm vector in R(G), as in (4.6.1), can decrease the risk of spurious frequency estimates, as compared with the use of other vectors in R(G) or even with MUSIC. See Complement 6.5.1 for details on this aspect.
Chapte r 4 Parametric Methods for Line Spectra
174 4.7 ESPRIT METH OD
Let
OlA
Al = [lm-l
(m - 1) x n
(4.7.1)
(m - 1) x n
(4.7.2)
and
[lm-l 0] and [0 lm-d where Im-l is the identity matrix of dimension (m - 1) x (m - 1) and are (m - 1) x m. It is readily verified that
(4.7.3) where (4.7.4)
IT (Estimation of Signal D is a unitary matrix, so the transformation in (4.7.3) is a rotation. ESPR KAILA TH 1986; Roy AND > Parameters by Rotati onalln varian ce Techniques: [PAUL RAJ, RoY, AND , relies on the rotational transfonnation KAILA TH 1989]; see also [KUNG , ARUN , AND RAo 1983]) (4.7.3), as we detail next. Similarly to (4.7.1) and (4.7.2), define
= [lm-l O]S S2 = [0 Im-dS
(4.7.5)
S =AC
(4.7.7)
S1
(4.7.6)
From (4.5.12), we have that
where C is then x n nonsingu1ar matrix given by C = PA*S X-I
(4.7.8)
C must be nonsingular; (Observe that both S and A in (4.7.7) have full column rank, and hence, actually has no relevance see Result R2 in Appendix A.) The foregoing explicit expression for C gu1ar that count. to the present discussion. It is only (4.7.7) and the fact that C is nonsin By using (4.7.1)-(4.7.3) and (4.7.7), we can write (4.7.9)
Section 4.8
175
Forward-Backward Approach
where (4.7.10) The Vandermonde structure of A, implies that the matrices Al and A2 have full column rank (equal to n). In view of (4.7.7), SI and S2 must also have full column rank. It then follows from (4.7.9) that the matrix ¢ is given uniquely by '-, (4.7.11) This formula expresses ¢ as a function of some quantities that can be estimated from the available sample. The importance of being able to estimate ¢ stems from the fact that ¢ and D have the same eigenvalues. (This can be seen from equation (4.7.10), which is a similarity transformation relating ¢ and D, along with Result R6 in Appendix A.) ESPRIT uses the previous observations to compute frequency estimates as described here: ESPRIT estimates the frequencies {(i)k }k=l as -arg(vk), where {h lk=1 are the eigenvalues of the following (consistent) estimate of the matrix ¢: A
¢
A
A
= (S:Sd-
lA
(4.7.12)
A
S;S2
It should be noted that this estimate of ¢ is implicitly obtained by solving the linear system of equations
(4.7.13) by an LS method. It has been empirically observed that better finite-sample accuracy might be achieved if (4.7.13) is solved for ~ by a Total LS method. (See Section A.8 and [VAN HUFFEL AND V ANDEWALLE 1991] for discussions on the TLS approach.) The statistical accuracy of ESPRIT is similar to that of the previously described methods: HOYW, MUSIC, and Min-Norm. In fact, in most cases, ESPRIT may provide slightly more accurate frequency estimates than do the other methods mentioned, yet at similar computational cost. In addition, unlike these other methods, ESPRIT has no problem with separating the "signal roots" from the "noise roots," as can be seen from (4.7.12). Note that this property is shared by the modified MUSIC method (discussed in Section 4.5); however, in many cases, ESPRIT outperforms modified MUSIC in terms of statistical accuracy. All these considerations recommend ESPRIT as the first choice in a frequency estimation application.
4.8 FORWARD-BACKWARD APPROACH
the previously described eigenanalysis-based methods (MUSIC, Min-Norm, and ESPRIT) derive ~eir frequency estimates from the eigenvectors of the sample covariance matrix R. (4.5.14), which ,!
176
Chapter 4 Parametric Methods for Line Spectra
is restated here for easy reference:
R= ~ L N
yet)
]
:
[
[y * (t) ... Y *(t - m
+ 1)]
(4.8.1)
y(t-m+l)
t=m
Ris recognized to be the matrix that appears in the least-squares (LS) estimation of the coefficients {ak} of an mth-order forward linear predictor of y*(t + 1): :9*(t + 1) = aly*(t)
+ ... + amy*(t -
m
+ 1)
For this reason, the methods that obtain frequency estimates from
R are
(4.8.2) named forward (F)
approaches.
Extensive numerical experience with the aforementioned methods has shown that the corresponding frequency-estimation accuracy can be enhanced by using, in lieu of R, the modified sample covariance matrix
1.... AT R = 2(R +JR J)
(4.8.3)
where
(4.8.4)
is the so-called "exchange" (or "reversal") matrix. The second term in (4.8.3) has the following detailed form: N
JRTJ=~L t=m
[y*(t - m + 1) ]
:
[y(t-m+l) ... y(t)]
(4.8.5)
y*(t)
The matrix (4.8.5) is the one that appears in the LS estimate of the coefficients of an mth-order backward linear predictor of y (t - m):
yet -
m)
= J.LIy(t - m
+ 1) + ... + J.Lmy(t)
(4.8.6)
This observation, along with the previous remark made about R, suggests the name forwardbackward (FB) approaches for methods that obtain frequency estimates from R in (4.8.3).
Section 4.8
Forward-Backward Approach
177
The (i ,j) element of R is given by _ RiJ
1
N
= 2N L:[y( t - i)y*(t - j) + y*(t - m
+ 1 + i)y(t -
t=m
m
+ 1 + j)]
(i,j = 0, ... , m - 1)
(4.8.7)
",
Assume that i ~ j (the other case i :::: j can be similarly treated). Let r(j - i) denote the usual sample covariance: 1 r(j - i) = N
N
L
y(t)y* (t -
V-
i))
t=(j-i) +l
(4.8.8)
A straightforward calculation shows that the two terms Tl and Tz in (4.8.7) can be written as 1 N-i Tl = 2N y(P)y* (P -
L
V-
1
i)) =
"irv - i) + O(lIN )
p=m-i
(4.8.9)
and 1 N-m+j +l
Tz = 2N
'"'
L.J
1 y(P)y* (P -
V-
i)) =
p=j+l
-rv 2
i)
+ (JOIN )
(4.8.10)
where O(1/N ) denotes a term that tends to zero as liN when N increas es (it is here assumed that m « N). It follows from (4.8.7)-(4.8.10) that, for large N, the differe nce between RiJ or RiJ and the sample covariance lag r V- i) is "small." Hence, the frequency estima methods based on R or R (or on [rv - i)]) may be expected to have similar performances intion large samples. In summary, it follows from the previous discussion that the empirically observed performance superiority of the forward-backward approach over the forward-only approach should only be manifest in samples with relatively small lengths. As such, this superio rity cannot easily be established by theoretical means. Let us then argue heuristically. First, note that the transformation JOTJ is such that the following equali ties hold: A
(R)iJ
A
....
T
= (J RJ)m- i,m-j = (J R J)m-j, m-i
(4.8.11)
and '"
(R)m-j ,m-i
= (J R"'T J)iJ
(4.8.12)
This implies that the (i ,j) and (m - j, m - i) elements of R are both given by
(4.8.13)
178
Chapter 4 Parametric Methods for Line Spectra
Equations (4.8.11)-(4.8.12) imply that
R is invariant to the transformation JOT J: (4.8.14)
Such a matrix is said to be persymmetric (or centrosymmetric). In order to see the reason for this name, note that R is Hennitian (symmetric in the real-valued case) with respect to its main diagonal~ in addition, R is symmetric about its main antidiagonal. Indeed, the equal elements RiJ and Rm-j,m-i of R belong to the same diagonal as i - j = (m - j) - (m - i). They are
0)
M
[M.
E{ln(x)} = ~Pk In(xk) :::: In [E{x}] = In ~PkXk or, equivalently,
]
\0,
M
M
LPkXk ::::
Il Xfk
k=1
k=1
M
(for Xk > 0 and LPk = 1)
(4.9.38)
k=1
For Pk = 11M, (4.9.38) reduces to the well-known inequality between the arithmetic and geometric means-that is,
•
which is so easily obtained in the present framework.
After these preparations, we turn our attention to the main question of finding a majorizing function for (4.9.35). Let z be a random vector whose probability density function conditioned on y is completely determined bye, and let
gi (e)
= f (e)i -
!
Ez1y ,9 i In [p(Y,z,e)]} p(Y, z, e i )
(4.9.39)
Clearly gi (e) satisfies (4.9.40) Furthennore, it follows from Jensen's inequality (4.9.36), the concavity of the function In(·), and Bayes' rule for conditional probabilities that
>f(ei)-In[E -
zly.9
i
!p(Y,z,e?}] p(y, z, e )
'
!
i [ p(Y,z,e) }] =f(e)-In Ez \y.9 i p(zly,ei)p(y,e i ) =f(ei)-ln[
1. jP(Y,z,e)dZ]
p(y,()l)
...
v
I
p(y.9)
= f (e i) _ In [p (y , e? ]
p(y, e' )
j. .,":",
.
...... 5, . "
......
ei(m-l) w"
ei(m-l) Wl
The matrix A just introduced is the complex conjugate of the one in (4.2.4) . In this complement, we prefer to work with the type of A matrix in (4.9.93), to simplify the notation, but note that the discussion which follows applies without change to the complex conjug ate of the above A as well (or, to its extension to 2D sinusoidal signals). Let {Ck }k= 1 be defined uniquely via the equation n
1 + CIZ
+ ... + cnZ
n
==
fl (1-
ze-
iWk
(4.9.94)
)
k=l "0!'i
Then, it can be readily checked (see (4.5.21» that the matrix "-
\\
.,, "
(m - n) \
xm
(4.9.95)
204
Chapter 4 Parametric Methods for line Spectra
satisfies C*A=O
(4.9.96)
°
(To verify (4.9.96). it is enough to observe, from (4.9.94), that 1 + Cle iWt + ... + CneinWt = for k = 1, ... , n.) Furthermore, as rank(C) = m - n and dim[N(A*)] = m - n too, it follows from (4.9.96) that C is a basis for the null space of A *, N (A *)
(4.9.97)
The matrix C plays an important role in the derivation and analysis of several frequency estimators. (See, e.g., Section 4.5, [BRESLER AND MACOVSKI 1986], and [STOICA AND SHARMAN 1990].) In this complement. we will extend the result (4.9.97) to 2D sinusoidal signals. The derivation of a result similar to (4.9.97) for such signals is a rather more difficult problem than in the ID case. The solution that we will present was introduced in [CLARK AND SCHARF 1994]. (See also [CLARK. ELDEN, AND STOICA 1997].) Using the extended result, we can derive parameter estimation methods for 2D sinusoidal signals in much the same manner as for ID signals. (See the cited papers and Section 4.5.) A noise-free 2D sinusoidal signal is described by the following equation (compare with (4.9.92)): n
yet, t) = I:J3keiwkleiwkt,
t,
t=
0,1,2, '"
(4.9.98)
k=l
Let (4.9.99) Using this notation allows us to write (4.9.98) in the more compact form (4.9.100) Moreover, equation (4.9.100) (unlike (4.9.98)) also covers the case of damped (2D) sinusoidal signals, for which (4.9.101) with {JLk, ilk} being the damping parameters (/-Lk, ilk :::: 0). The following notation will be used frequently in this complement: (4.9.102)
r=[Yl o
.0] Yn
(4.9.103)
Section 4.9
Complements
205
A= [AI ... 0] o
(4.9.104)
An
(4.9.105)
for L
~
n
(4.9.106)
Using (4.9.102), (4.9.104), and (4.9.105), we can write Y (t, -t)
= gt*A t f3
(4.9.107)
Hence, similarly to (4.9.93), we can write the min x 1 data vector obtain ed from (4.9.98) for t = 0, ... , m - 1 and t = 0, ... , in - 1 as y(O, 0) yeO, in - 1)
(4.9.108) y(m - 1,0) (m - 1, in - 1)
* AD gm-I
* Am-I gm-I
The matrix A just defined, goAD
goA m- 1
A=
(min x n)
* AO g m-l
""'\
* Am-l gm-l
(4.9.109)
Chapter 4 Parametric Methods for line Spectra
206
) does for ID signals. plays the same role for 2D sinusoidal signals as the matrix A in (4.9.93 More precisely, we want Therefore, it is the null space of (4.9.109) that we want to characterize. A* in (4.9.109), similar to to find a linearly parameterized basis for the null space of the matrix the basis C for A* in (4.9.93). (See (4.9.97).) Note that, using (4.9.103), we can also write yet, t) as
- = y(t,t)
[t AI'"
Anl]
(4.9.110)
r t fJ
This means that A can also be written as follows:
(4.9.111)
A= A mr m -
1
.~t:.,. ~~ .
il;
n:
1
C * --
Cl
(m -
.. [
n) x
m
(4.9.113)
:''::i ~..
_.~'~-t.",,:,,::_,
l
.~-,; :~
.}~. :.:
o
';.:~ { .",..,.
7i::
We note (cf (4.9.96)) that C*A m = 0
(4.9.114)
It follows from (4.9.111) and (4.9.114) that C* [
o ".
0]
" i
A=0
C*
(4.9) 15)
I
v
[m(m-n )]xmm -: .
Section 4.9
207
Complements
Hence, we have found (mm - mn) vectors of the sought basis for N(A*). It remains to find (m - l)n additional (linearly independent) vectors of this basis (note that dim[N(A*)] = mm - n). To find the remaining vectors, we need an approach that is rather different from that used so far. Let us assume that (4.9.116) and let the vector b*=[bj, ... ,bn ]
be defined via the linear (interpolation) equation b*An = [YI, ... , Yn]
[
(4.9.117)
- - - - - -
(with An as defined in (4.9.106». Under (4.9.116) and/or given P.k}, there exists a one-to-one map between {bk} and {Yk}; hence, we can view the use of {bk} as a reparameterization of the problem. (Note that, if (4.9.116) does not hold, i.e., Ak = Ap , then, for identifiability reasons, we must have Yk 0:1 Yp, and therefore no vector b that satisfies (4.9.117) can exist.) From (4.9.117), we easily obtain
and hence (see also (4.9.109) and (4.9.111»
(4.9.118)
Next, we assume that m2:2n-l
(4.9.119)
which is a weak condition (typically we have m, m» n). Under (4.9.119), we can write (making use of (4.9.118»:
(4.9.120) v
S*
etric Metho C ha pt er 4 Param
ds for li ne Spectra
208
0 ... 0]
where
bI b2 .. . bn . . . . '.. ' .. .. * B [ o bI b2 .. . bn 0
(n x m )
en the (if m > 2n - 1, th ) 20 .1 .9 (4 te ri w be able to ) yields the ed m:::: 2n - 1 to .115) and (4.9.120 ne .9 (4 e w ng , ni ed bi de in om , C Note that e zeroes). columns of B* ar 1 A: 2n m t os rightm left null space of e th in lie s w ro whose following matrix,
VI 1)
0
I
I
1)
0
C*
where 1
1)
=
[~:] =
1 bn
0 bI
0
Cn
Cl
bI
0
}
m-n
CI
Cn
0
0 bn
}
0 0
0
I=
(4.9.121)
m block rows
(m x m ) n /
}
m-n
0 -1 0
0 0
(m x m )
} n
0
-1 0 .. . 0
x mm, , that is (m m - n) m m x ) n) (m of . sion [em - l) m + quently, the rows en se m on di C . of 0) is =1= ) 21 en .1 at, w rank, as The matrix in (4.9 mind the reader th (i.e., it has full ro re n e W m . A m f o to l e ua ac b bkl· r the null sp and its rank is eq sis parameters {c meterized basis fo ba ra e pa th y d rl an ea } lin Yk a , .k between p. (4.9.121) fo nn of p..k, Yk}, at a one-to-one map {ck, bkl in lieu is g e in er at th tim ), es 16 n of .1 k under (4.9 21) on the unknow ence, we can th in .1 H .9 (4 ).) 17 of .1 ce .9 en (4 d nd d on linear depe (See (4.9.112) an ation method base n we do so, the tim he es w an d, an ch e, su ag of st e e exampl extended least in a first handy. As a simpl 4.5 can easily be ite n io qu ct in Se e m in d co ne ill tli w parameters procedure ou modified MUSIC e th at th te no ). ), (4.9.121 ing use of (4.9.121 ak m by s al gn si to the case of 2D
Section 4.10
209
Exercises
Compared with the basis matrix for the 1D case (see (4.9.95)), the null space basis (4.9.121) in the 2D case is apparently much more complicated. In addition, the given 2D basis result depends on the condition (4.9.116); if (4.9.116) is even approximately violated (i.e., if there exist Ak and Ap with k '# p such that Ak ~ Ap ), then the mapping {Yk} +? {bd could become ill-conditioned and so cause a deterioration of the estimation accuracy. Finally, we remark on the fact that, for damped sinusoids, the parameterization via {b k } and {cd is parsimonious. However, for undamped sin~$oidal signals, the parameterization via {UJk' Wk} contains 2n real-valued unknowns, whereas the one based on {bk' Ck} has 4n unknowns, or 3n unknowns if a certain conjugate symmetry property of {bk} is exploited (see, e.g., [STOICA AND SHARMAN 1990]); hence, in such a case the use of {bk} and, in particular, {ck} leads to an overparameterized problem, which might also result in a (slight) accuracy degradation. The previous criticism of the result (4.9.121) is, however, minor, and, in fact, (4.9.121) is the only known basis for N(A*).
4.10 EXERCISES
Exercise 4.1: Speed Measurement by a Doppler Radar as a Frequency Estimation Problem Assume that a radar system transmits a sinusoidal signal towards an object. For the sake of simplicity, further assume that the object moves along a trajectory parallel to the wave propagation direction, at a constant velocity v. Let ae iwt denote the signal emitted by the radar. Show that the backscattered signal, measured by the radar system after reflection off the object, is given by s(t) = fJei(w-aP)t
+ e(t)
(4.10.1)
where e(t) is measurement noise, UJD is the so-called Doppler frequency,
ClP
~ 2UJv/c
and
fJ =
j.J;ae-2iwr/c
Here c denotes the speed of wave propagation, r is the object range, and j.J; is an attenuation coefficient. Conclude from (4.10.1) that the problem of speed measurement can be reduced to one of frequency detennination. The latter problem can be solved by using the methods of this chapter.
Exercise 4.2: ACS of Sinusoids with Random Amplitudes or Nonuniform Phases In some applications, it is not reasonable to assume that the amplitudes of the sinusoidal terms are fixed or that their phases are uniformly distributed. Examples are fast fading in mobile telecommunications (where the amplitudes vary), and sinusoids that have been tracked so that their phase is random, near zero, but not uniformly distributed. We derive the ACS for such cases. \, Let x(t) = aei(W01+f/J) , where a and cp are statistically independent random variables and UJ() 'is a constant. Assume that has mean it and variance
a
a;.
Chapter 4
210
ctra Parametric Methods for Line Spe
t, if ct is E {x( t)} and rx( k). Show also tha find 7l'], l', [-7 on ted ribu dist ly (a) If q; is uniform uces to equation (4.1.5). constant, the expression for rxCk) red s of the probability on [-7 l', 7l'], express E {x (t)} in term ted ribu dist ly form uni not is find cp If (b) on p (q;) such that x (t) is zero mean, ons diti con ent fici suf d Fin . (cp) p n density functio mple of such a p(q ;). rxCk) in this case, and give an exa
idal Signal Exercise 4.3: A Nonergodic Sinuso signal As shown in Complement 4.9.1, the on [0, 27l'], is q; being uniformly distributed and ts stan con dom ran non ng bei with a and w from an (infinitely mean and covariances determined the t tha se sen the in c odi erg an der second-or mean and covariances obtained from the h wit de nci coi al sign the of w are indelong) temporal realization present exercise, assume that a and the In s. tion liza rea ny) ma ely init ensemble of (inf ial-phase variable formly distributed on [0, 2Jl']; the init uni ng bei w h wit les, iab var dom ran case, pendent nonrandom). Show that, in such a be can it lar ticu par (in d ute trib q; may be arbitrarily dis *( _ k)} _ { E {a 0 E {x () txt
2
}
for k = 0 for k =1= 0
(4.10.2)
from those given, and ed by "temporal averaging" differ ain obt s nce aria cov the t tha w r the sho Also, on the behavior of such a signal ove ent mm Co c. odi erg not is al sign hence deduce tha t the h realization, respectively. ensemble of realizations and in eac quency Estimation Exercise 4.4: AR Model-Based Fre Consider the noisy sinusoidal signal y(t )
= x(t ) + e(t )
27l'D and e(t ) is white q; uniformly distributed on [0, and 0 > a th (wi +lp) wot aei( = where x(t ) d to {Yet)} by using An AR model of order n ::: 1 is fitte ce. ian var t uni and an me o zer AR h noise wit an infinitely long data sample, the of e cas g itin lim the In d. tho me to the AR the Yule-Walker or LS Show that the PSD, corresponding .4). (3.4 to tion solu the by en giv coefficients are modeling can be peak at (j) = woo Conclude that AR bal glo a has .4), (3.4 from n.ed rmi sfy model dete e of the fact that {y (t)} does not sati spit in cy, uen freq al soid sinu the estimates used in this case to find ltiple sinusoids, the AR frequency mu of e cas the (In er. ord te fini of an AR equation the height of the signal power, however, show tha2t the of tion ma esti the ing ard Reg are biased.) of a . not directly provide an "estimate" s doe m ctru spe AR the of k pea global rivation of the Pisarenko Method De sed -Ba del Mo MA AR An : 4.5 Exercise and let g be the eigenvector of R 1, + n = m h wit .7) (4.2 trix ma cies Let R denote the covariance method determines the signal frequen nko are Pis The e. alu env eig m imu associated with its min by exploiting the fact that a*( w)g = 0
for w =
Wk,
k = 1, ... , n
(4.10.3)
Section 4.10
211
Exercises
(ef (4.5.13) and (4.5.17)). Derive the property (4.10.3) directly from the ARMA model equation (4.2.3).
Exercise 4.6: Frequency Estimation when Some Frequencies Are Known Assume that yet) is known to have p sinusoidal components at known frequencies {Wk }~=l (but with unknown amplitudes and phases), plus n - p other sinusoidal components whose frequencies are unknown. Develop a modification of the HOYW method to estimate the unknown frequencies from measurements (y(t)W=l without estimating the kno~n frequencies.
Exercise 4.7: A Combined HOYW-ESPRIT Method for the MA Noise Case The HOYW method, presented in Section 4.4 for the white~noise case, is based on the matrix r in (4.2.8). Let us assume that the noise sequence (eU)} in (4.1.1) is known to be an MA process of order m and that m is given. A simple way to handle such a colored noise in the HOYW method consists of modifying the expression (4.2.8) of r as follows:
f =
yCt-L-l-m)] E
:
{[ yet -L-M -m)
} (4.10.4)
[y*(t), ... ,y*(t - L)]
Derive an expression for f similar to the one for r in (4.2.8). Furthermore, make use of that expression in an ESPRIT-like method to estimate the frequencies {(Ok}, instead of using it in an HOYW-like method (as in Section 4.4). Discuss the advantage of this so-called HOYW-ESPRIT method over the HOYW method based on f. Assuming that the noise is white (i.e., m = 0) and hence that ESPRIT is directly applicable, would you prefer using HOYW-ESPRIT (with m = 0) in lieu of ESPRIT? Why or why not?
Exercise 4.8: Chebyshev Inequality and the Convergence of Sample Covariances Let x be a random variable with finite mean J.L and variance constant c, the so-called Chebyshev inequality holds: Pr(lx -
(j2.
Show that, for any positive
III ~ ccr) ~ l/c 2
(4.10.5)
Use (4.10.5) to show that, if a sample covariance lag rN (estimated fromN data samples) converges to the true value r in the mean-square sense (4.10.6) then .'.~ :,'
~.
rN also converges to r in probability: lim
N~oo
Pr(lrN - rl
=1=
0) = 0
i~\For sinusoidal signals, the mean-square convergence of {rN(k)} to (r(k)), as N --+ il' 'l?J'!. •
-'of,
(4.10.7) 00,
has been
'proven in Complement 4.9.1. (In this exercise, we omit the argument k in rN(k) and r(k), for
212
Chapter 4
Parametric Methods for Line Spectra
notational simplicity.) Additionally, discuss the use of (4.10.5) to set bounds (which hold with a specified probability) on an arbitrary random variable with given mean and variance. Comment on the conservatism of the bounds obtained from (4.10.5) by comparing them with the bounds corresponding to a Gaussian random variable.
Exercise 4.9: More about the Forward-Backward Approach The sample covariance matrix in (4.8.3), used by the forward-backward approach, is often a better estimate of the theoretical covariance matrix than R is (as argued in Section 4.8). Another advantage of (4.8.3) is that the forward-backward sample covariance is always numerically better conditioned than the usual (forward-only) sample covariance matrix R. To understand this statement, let R be a Hennitian matrix (not necessarily a Toeplitz one, like the R in (4.2.7)). The "condition number" of R is defined as cond(R) = Amax (R) / Amin CR) where AmaxCR) and Amin (R) are the maximum and minimum eigenvalues of R, respectively. The numerical errors that affect many algebraic operations on R, such as inversion, eigendecomposition, and so on, are essentially proportional to cond(R). Hence, the smaller cond(R), the better. (See Appendix A for details on this aspect.) Next, let U be a unitary matrix (the J in (4.8.3) being a special case of sue!) a matrix). Observe that the forward-backward covariance in equation (4.8.3) is of the form R + U*RTU. Prove that cond(R) ::::: cond(R
+ U*RT u)
(4.10.8)
for any unitary matrix U. We note that the result (4.10.8) applies to any Hermitian matrix Rand unitary matrix U, and thus is valid in cases more general than the forward-backward approach in Section 4.8, in which R is Toeplitz and U = J.
Exercise 4.10: ESPRIT and Min-Norm Under the Same Umbrella ESPRIT and Min-Norm methods are seemingly quite different from one another; it might well seem unlikely that there is any strong relationship between them. It is the goal of this exercise to show that in fact ESPRIT and Min-Norm are quite related closely to each other. We will see that ESPRIT and Min-Norm are members of a well-defined class of frequency estimates. Consider the equation (4.10.9) where 51 and 52 are as defined in Section 4.7. The (m - 1) x (m - 1) matrix unknown. First, show that the asymptotic counterpart of (4.10.9),
s;w = s:
Win (4.10.9) is the (4.10.10)
has the property that any of its solutions \II has n eigenvalues equal to {e-iWk }k=1' This property, along with the fact that there is an infinite number of matrices Wsatisfying (4.10.9) (see
Compu ter Exercises
213
Section A.S in Appen dix A), implie s that (4.10.9) generates a class of frequency estima tors with an infinite numbe r of members. As a second task, show that ESPRI T and Min-N orm belong to this class of estimators. In other words, prove that there is a solution of (4.10.9) whose nonzer o eigenvalues have exactly the same arguments as the eigenvalues of the ESPRIT matrix ¢ in (4.7.12), and also that there is another solution of (4.10.9) whose eigenvalues are equal to the roots of the Min-N orm polyno mial in (4.6.3). For more details on the topic of this exercise, ~ee [HUA AND SARKAR 1990].
Exercise 4.11: Yet Another Relationship between ESPR IT and Min-N orm
Let the vector [pT, If be defined similarly to the Min-N orm vector [1, gT]T (see (4.6.1)), the only difference being that now we constrain the last eleme nt to be equal to one. Hence, ,0 is the minimum-norm solution to (see (4.6.5))
Use the Min-N orm vector
,0 to build the following matrix
Prove the somew hat curious fact that;P above is equal to the ESPRI T matrix,
- w).
H(w)
in (5.2.5),
222
Chapter 5 Filter-Bank Methods
This interpretation of ¢p(w) highlights a conclusion that is reached, in a different way, in Chapter 2: the unmodified periodogram sacrifices statistical accuracy for resolution. Indeed, ¢p (w) uses a bandpass filter with the smallest bandwidth afforded by a time aperture of length N. In this way, it achieves a good resolution (see assumption (i) in (5.1.5)). The consequence of doing so is that only one (filtered) data sample is obtained for the power-calculation stage, which explains the erratic fluctuations of ¢p (w) (owing to violation of assumption (iii) in (5.1.5)). As explained in Chapter 2, the modified periodogram methods (Bartlett, Welch, and Daniell) reduce the variance of the periodogram at the expense of increasing the bias (or equivalently, worsening the resolution). The FBA interpretation of these modified methods provides an interesting explanation of their behavior. In the filter-bank context, the basic idea behind all of these modified periodograms is to improve the power-calculation stage, which is done so poorly within the unmodified periodogram. The Bartlett and Welch methods split the available sample in several subsequences which are separately (bandpass) filtered. In principle, the larger the number of subsequences, the more samples are averaged in the power-calculation stage and the smaller the variance of the estimated PSD, but the worse the resolution (owing to the inability to design an appropriately narrow bandpass filter for a small-aperture subsequence). The Daniell method, on the other hand, does not split the sample of observations but processes it as a whole. This method improves the "power calculation" in a different way. For each value of ¢ (w) to be estimated, a number of different bandpass filters are employed, each with center frequency near w. Each bandpass filter yields only one sample of the filtered signal, but, there being several bandpass filters, we might get enough infonnation for the power-calculation stage. As the number of filters used increases, the variance of the estimated PSD decreases but the resolution becomes worse (since
Ylk-pl
(5.3.8)
Note that the matrix t~ is symmetric and Toeplitz. Also, note that this matrix has already been encountered in the window design example in Section 2.6.3. In fact, as we will shortly see, the window-design strategy in that example is quite similar to the baseband-filter design method employed here. Since the filter h must be such that the power of the filtered signal in the baseband is as large as possible relative to the total power, we are led to the following optimization problem:
maxh*rh h
subject to h*h = I
(5.3.9)
I
Section 5.3
225
Refined Filter-Bank Method
The solution to the previous problem is given in Result R13 in Appendix A: the maximizing h is equal to the eigenvector of r corresponding to its maximum eigenvalue. Hence, we have proven the following result: The impulse response h of the "most selective" baseband filter (according to the design objectives in (5.3.9» is given by the dominant eigenvector of r. (It is called the first Slepian sequence.) ,.
(5.3.10)
The matrix r played a key role in the foregoing derivation. In what follows, we look in more detail at the eigenstructure of r. In particular, we provide an intuitive explanation of why the first dominant eigenvector of r behaves like a baseband filter. We also show that, depending on the relation between f3 and N, the next dominant eigenvectors of r might also be used as baseband filters. Our discussion of these aspects will be partly heuristic. Note that the eigenvectors of r are called the Slepian sequences [SLEPIAN 1964] (as mentioned in (5.3.10». We denote these eigenvectors by {Sk If=l'
Remark: The Slepian sequences should not be computed by the eigendecomposition of r. Numerically more efficient and reliable ways for computing these sequences exist (see, e.g., [SLEPIAN 1964D, for instance, as solutions to some differential equations or as eigenvectors of certain tridiagonal matrices. • The theoretical eigenanalysis of r is a difficult problem in the case of finite N. (Of course, the eigenvectors and eigenvalues of r can always be computed, for given f3 and N; here we are interested in establishing theoretical expressions for r's eigenelements.) For N sufficiently large, however, "reasonable approximations" to the eigenelements of r can be derived. Let a (w) be defined as before: (5.3.11) Assume that
f3 is chosen larger than 1/N, and define (5.3.12)
K=Nf3~1
(To simplify the discussion, K and N are assumed to be even integers in what follows.) With these preparations, and assuming that N is large, we can approximate the integral in (5.3.6) and write r as
1 '" r-
K /2-1
L a 2Jr p=-K/2
(2Jr) * (2Jr ) 2rr -p a -p N
N
1 L (2 p) (2 p )
= N
K/2-1
p=-K/2
a
:
a*
:
N
6
ro
(5.3.13)
226
Chapter 5 Filter-Bank Methods N
The vectors {aeN7r p)/-JN} T N ' part of which appear in (5.3.13), can readily be shown to P=-T+ 1
fonn an orthonormal set:
~a* N
(21r p) a (2Jr s) = -!.. Lei N-I
N
N
N
!f(p-s)k
k=={)
1 ei27r (P-s)
-
1
-
1
2JT
N e i v(P-s) 1,
}.
= 0,
s =f:. P (5.3.14) s=p
The eigenvectors of the matrix on the right-hand side of equation (5.3.13), fo. are therefore given by {a (~ p ) I v'N} ;~~N /2+ l' with eigenvalues of 1 (with multiplicity K) and (with multiplicity N - K). The eigenvectors corresponding to the eigenvalues equal to one are {a (~p) /v1J}:~~K/2+1' By paralleling the calculations in (5.2.3)-(5.2.5), it is not hard to show that each of these dominant eigenvectors of r () is the impulse response of a narrow bandpass filter with bandwidth equal to about liN and center frequency p; the set of these filters therefore covers the interval [-,8 Jr, t3 n J. Now, the elements of r approach those of f o as N increases; more precisely, 1[r]iJ [fOJiJI = O(lIN) for sufficiently large N. However, this does not mean that llf - foil ~ 0, as N ~ 00, for any reasonable matrix norm, because f and fo are (N x N) matrices. Consequently, the eigenelements of r do not necessarily converge to the eigenelements of r 0 as N ~ 00. However, given the previous analysis, we can at least expect that the eigenelements of r are not "too different" from those of f Q. This observation of the theoretical analysis, backed up with empirical evidence from the computation of the eigenelements of r in specific cases, leads us to conclude the following:
°
'f:
The matrix r has K eigenvalues close to one and (N - K) eigenvalues close to zero, provided N is large enough, where K is given by the "time-bandwidth" product (5.3.12). The dominant eigenvectors corresponding to the K largest eigenvalues fonn a set of orthogonal impulse responses of K bandpass filters that approximately cover the baseband [-,an, ,an].
(5.3.15)
As we argue in the next subsections, in some situations (specified there) we may want to use the whole set of K Slepian baseband filters, not only the dominant Slepian filter in this set. 5.3.2 RFB Method for High~Resolution Spectral Analysis
Assume that the spectral analysis problem dealt with is one in which it is important to achieve the maximum resolution afforded by the approach at hand (such a problem appears, for instance, in the case of PSDs with closely spaced peaks). Then we set
fi = liN
{=}
K
:=:
1
(5.3.16)
..
~.
"
section 5.3
Refined Filter-Bank Met hod
227
(Note that we cannot set f3 to a valu e less than 1/N. That cho ice wou ld lead to K < 1, whi ch is meaningless. The fact that we must cho ose f3 ?: 1{N is one of the many facets of the 1/N resolution limit of the nonparame tric spectral estimation.) Bec aus e K 1, we cannot use mo re than the first Slepian sequence as a bandpass filter:
=
h =
Sl
(5.3.17) ,.
The way in which the RFB sch eme bas ed on (5.3.17) works 'is des crib ed next. First, note from (5.3.5), (5.3.9), and (5.3.16) that
(5.3.18) Hence, under the (idealizing) assu mp tion that H (w) is different from zero only in the bas eba nd where it takes a constant value, we hav e
(5.3.19) Next, consider the sample at the filter's output obtained by the con volution of the whole inp ut sequence {5i(t)}~=1 with the filter impulse response {hd : N-1 X 6
L hkY(N -
k=O
N
k)
=L
hN -tY (O
(5.3.20)
t=l
The pow er of x should be approx imately equal to the PSD val ue y(w)dw = -N jfJ rr 4>y(w + w)d w -fJrr
2rr
-{3rr
..•.~ c. c.~
l
(5.3.23)
The larger f3 is (Le., the lower is the resolution), the larger is K, and hence the larger is the reduction in variance that can be achieved. By using the result (5.3.15 ), we define K baseba nd filters as
hp = [hp,o." hp,N-JJ* = sp,
(p
= 1, ... , K)
(5.3.24)
Here, hp denotes the impUlse response vector of the pth filter and sp is the pth domin ant Slepian sequence. Note that sp is real valued (see Result R12 in Appen dix A), and thus so is hp . Accor ding to the discussion leading to (5.3.15), the set of filters (5.3.24) covers the baseband [- f3 Jr , ,8rr), with each of these filters passing (roughly speaking) 11K of this baseband. Let xp be defined similarly to x in (5.3.20), but now, for the pth filter, N-1
xp
= L hp,ky(N -
N
k)
=L
k=O
hp,N-t y(t)
(5.3.25)
t=1
The calculation (5.3.21) applies to {xp } in exactly the same way; hence, p = 1, ... ,K
(5.3.26)
In addition, a straightforward calculation gives
N-IN- 1
=L
L hp,th:'sry(s -
t)
t=O s=O
l11CN -1N-1
= -2rr
-1C
'"'" ~ "'-(w)ei(s-t)wdw L.-.J L...J hP.I h*k,s'f'y 1=0 s=O
= -1 2rr
171: Hp(w)Ht (w)¢y (w)dw -71:
", . .
i'.,"·
~
230
Chapter 5 Filter-Bank Methods
'" (w). • In the derivations above, the true data-covariance matrix R has been assumed available. In order to tum the previous PSD formulas into practical spectral estimation algorithms, we must replace R in these formulas by a sample estimate-for instance, by 1 R = N _ m
~.
A
L N
Y (t)
]
:
[
[y*(t) .. . y*(t - m)]
(5.4.18)
yet - m)
t=m+l ;
,£
Doing so, we obtain the following two spectral estimators corresponding to (5.4.12) and (5.4.17), respectively:
,~ ..
~ ~-',.'
'::
. .
.'
\\
m
+1
CM-Version 1:
(j>(w)
= --A--
(5.4.19)
CM-Version 2:
a*(w)k-1a(w) ¢(w) = ---:...- a*(w)R-2a (w)
(5.4.20)
A
A
a*(w)R-1a(w)
236
Chapter 5
There is an implicit assumption in both (5.4.19) and (5.4.20) that sets a limit on the maximum value that can be chosen for m:
R- 1 exists.
Filter-Bank Methods
This assumption
(5.4.21)
m s for 0 = Since aj- s = 0 and can truncate the (5.6.7) is zero for j < 0; hence, we in nd ma sum the re, rmo the Fur to s = m. f]. These two observations yield summation over j to the interval [0, a~+l+s-j
':'-
-...
(5.6.8)
Next, decompose few ) additivelyas
where
(The term in T2 corresponding to f
= m is zero.) Let m
fl(s )
[
A L L (Czj-Sa * - am+l-ja:a+l+s_j ) j
[=0 j=O
TI (w) as By using this notation, we can write m
TI (w) =
L
[L(s)e
iwS
s=o
w E [-J r, Jr], we must also have Sin cef (w) is real valued for any -m
T2(W) =
L
s=- 1
[L* (-s )e
iws
(5.6.9)
~
"I·;' \~':
;;:
;;;; .
;"
Section 5.6
249
Complements
As the summand in (5.6.9) does not depend on i, we readily obtain m
il(s) = ;-'.-
L
(m
+ 1-
(m
+ l-k -s) akak+s-"L
j) (Czj_sO/ - Czm+l-ja;+l+s_j)
j=O
m-s
=
L
m
A
A*
k=l
k=O
m-s
=
L (m + 1 -
•
2k - S)Clk
a:+s
k=O
which coincides with (5.6.5). Thus, the proof of (5.6.4) is concluded.
Remark: The reader might wonder what happens with the fonnulas just derived if the AR model parameters are calculated by using the same sample covariance matrix as in the Capon estimator. In such a case, the parameters {a k } in (5.6.1) and in the preceding OS formula should be replaced by {an (see (5.4.27)). Consequently, both (5.6.2)-(5.6.3) and (5.6.4)-(5.6.5) continue to hold, but with {ad replaced by {an (and {an replaced by {ak
n.
•
By comparing (5.6.2) and (5.6.4), we see that the reciprocals of both ~AR(W) and ~CM (w) have the form of a Blackman-Tukey spectral estimate associated with the "covariance sequences" {pes)} and files)}, respectively. The only difference between ~AR(W) and ¢CM(W) is that the sequence {il(s)} corresponding to ~CM (w) is a "linearly tapered" version of the sequence (p(s)} corresponding to ~AR(W). Much as with the interpretation in Section 5.4.2, the previous observation can be used to understand intuitively why the Capon spectral estimates are smoother and have poorer resolution than the AR estimates of the same order. (For more details on this aspect and other aspects related to the discussion in this complement, see [MUSICUS 1985].) We remark in passing that the name "covariance sequence" given, for example, to {pes)}, is not coincidental: The {p (s)} are so-called sample inverse covariances associated with R, and they can be shown to possess a number of interesting and useful properties. (See, for example, [CLEVELAND 1972; BHANSALl 1980].) The formula (5.6.4) can be used for the computation of ~CM (w), as we now show. Assuming that R is already available, we can use the Levinson-Durbin algorithm to compute {a k } and (,2 (and thus files)}) in O(m 2 ) flops. Then (5.6.4) can be evaluated at M Fourier frequencies (say) by using the FFT. The resulting total computational burden is on the order of O(m 2 + M log2 M) flops. For commonly encountered values of m and M, this is about m times smaller than the burden associated with the eigendecomposition-based computational procedure of Exercise 5.5. Note, however, that the latter algorithm can be applied to a general R matrix, whereas the one derived in this complement is limited to Toeplitz covariance matrices. Finally, note that the extension of the results in this complement to two-dimensional (2D) signals can be found in [JAKOBSSON, MARPLE, AND STOICA 2000].
Chapter 5 Filter-Bank Methods
250
Tukey 5.6.2 Multi wind ow Interpretation of Daniell and BlackmanPeriodograms cast into the multiwindow'-~ As stated in Exercise 5.1, the Bartlett and Welch periodograms can be ing form (see (5.7.1 )):/£' framework of Section 5.3.3. In other words, they can be written in the follow
I
1 --AiD_.....
I
I I
I I I I 11 I I .-------,
I
: -Sin(wct)x Im[Yk(t)] Lowpass 1m [h( t)J : : _
I I 1
I
filter
I I I I II
AID >----'
I I
~---------------------------~~------
Figure 6.4 A simplified block diagram of the analog processing in a receiving-array element.
281
Section 6.2 Array Model
Let Yk (t) denote the demodulated signal:
It follows from (6.2.10) and the previous discussion on the demodulation process that the Fourier transform of Yk (t) is given by ",
+ we)[S (w) + S*( -w + Ek(w + we)
Yk(W) = Hk(W
2We)]e-i(w+wc)'t"k
(6.2.11)
When Yk (t) is passed through a lowpass filter with bandwidth matched to S (w), in the filter output (say, Yk(t» the component in (6.2.11) centered at W = -2we is eliminated along with all the other frequency components that fall in the stopband of the lowpass filter. Hence, we obtain (6.2.12) where Hk(W + we) and Ek(W + we) denote the parts of Hk(W + we) and Ek(W + we) that fall within the lowpass filter's passband, n, and where the frequency W is restricted to n. We now make the following key assumption: The received signals are narrowband, so that IS(w)] decreases rapidly with increasing Iwi.
(6.2.13)
Under this assumption, (6.2.12) reduces (in an approximate way) to the following equation: for WE Q
(6.2.14)
Because Hk(W e) must be different from zero, the sensor transfer function Hk(W) should pass frequencies near W = We (as expected, since We is the center frequency of the received signal). Also, note that we do not replace Ek(W + we) in (6.2.14) by Ek(We), as this tenn might not be (nearly) constant over the signal bandwidth. (For instance, this would be the case when the noise term in (6.2.12) contains a narrowband interference with the same center frequency as the signal.)
Remark: It is sometimes claimed that (6.2.12) can be reduced to (6.2.14) even if the signals are broadband, but the sensors in the array are narrowband with center frequency W = We' Under such an assumption, IHk(W + we)1 goes quickly to zero as Iwi increases; hence, (6.2.12) becomes (6.2.15) which apparently is different from (6.2.14). In order to obtain (6.2.14) from (6.2.12) under the previous conditions, we need to make some additional assumptions. Hence, if we further assume that the sensor frequency response is flat over the passband (so that Hk(W + we) = Hk(W e and
»
282
Chapter 6 Spatial Methods
that the signal spectrum varies over the sensor passband (so that S (w) differs quite a bit from • S (0) over the passband in question), then we can still obtain (6.2.14) from (6.2.12). The model of the array is derived in a straightforward manner from equation (6.2.14). The' time-domain counterpart of (6.2.14) is the following: (6.2.16) where Yk(t) and ek(t) are the inverse Fourier transforms of the corresponding terms in (6.2.14). (By a slight abuse of notation, eke!) is associated with Ek(W + we), not EkCw).) The hardware implementation required to obtain {Yk(t)}, as defined above, is indicated in Figure 6.4. Note that the scheme in Figure 6.4 generates samples of the real and imaginary components of Yk (t). These samples are paired in the digital machine following the analog scheme of Figure 6.4 to obtain samples of the complex-valued signal Yk(t). (We stress once more that all physical analog signals are real valued.) Note that the continuous-time signal in (6.2.16) is bandlimited: According to (6.2.14) (and the related discussion), Yk (w) is approximately equal to zero for w ¥ n. Here n is the support of S (w) (recall that the filter bandwidth is matched to the signal bandwidth); hence, it is a narrow interval. Consequently, we can sample (6.2.16) with a rather low sampling frequency. The sampled version of {Yk (t)} is used by the "digital processing equipment" for the purpose of DOA estimation. Of course, the digitalform of {Yk(t)} satisfies an equation directly analogous to (6.2.16). In fact, to avoid a complication of notation by the introduction of a new discrete-time variable, from here on we consider that t in equation (6.2.16) takes discrete values: t
= 1,2, ... , N
(6.2.17)
(As usual, we choose the sampling period as the unit of the time axis.) In Figure 6.4 we sample the baseband signal, which may be done by using sampling rates lower than those needed for the bandpass signal. (See also [PROAKIS, RADER, LING, AND NIKIAS 1992].) Next, we introduce the so-called array transfer vector (or direction vector): (6.2.18) Here, e denotes the source's direction ofarrival, which is the parameter of interest in our problem. Note that, sinc~ the transfer characteristics and positions of the sensors in the array are assumed to be known, the vector in (6.2.18) is a function of only, as indicated by notation (this fact will be illustrated shortly by means of a particular form of array). By making use of (6.2.18), we can write equation (6.2.16) as
e
yet) = a(e)s(t)
+ e(t)
yet) = [YI(t)
Ym(t)f
e(t) = [el (t)
em (t)f
where
(6.2.19)
Section 6.2
283
Array Model
denote the array's output vector and the additive noise vector, respectively. It should be noted that B enters in (6.2.18) not only through Irk} but also through {Hk(W c )}. In some cases, the sensors may be considered to be omnidirectional over the DOA range of interest, and then the {Hk(WC)}k=l are independent of B. Sometimes, the sensors may also be assumed to be identical. Then, by redefining the signal (H (wc)s(t) is redefined as s(t)) and selecting the first sensor as the reference point, the expression (6.2.18) can be simplified to the following form: (6.2.20) The extension of equation (6.2.19) to the case of multiple sources is straightforward. Since the sensors in the array were assumed to be linear elements, a direct application of the superposition principle leads to the following model of the array:
y(t) _ [a(B}) ... a (Bn )]
[
s,
~t) ] + e(t)
eo
As(t)
+ e(t)
sn(t)
(6.2.21)
the DOA of the kth source the signal corresponding to the kth source It is interesting to note that the previous model equation mainly relies on the narrowband assumption (6.2.13). The planar wave assumption made in the introductory part of this chapter has not been used so far. This assumption is to be used when deriving the explicit dependence of {rk} as a function of B, as is illustrated next for an array with a special geometry.
Consider the array of m identical sensors uniformly spaced on a line, depicted in Figure 6.5. Such an array is commonly referred to as a uniform linear array (ULA). Let d denote the distance between two consecutive sensors, and let B denote the DOA of the signal illuminating the array, as measured (counterclockwise) with respect to the normal to the line of sensors. Then, under the planar wave hypothesis and the assumption that the first sensor in the array is chosen as the reference point, we find that Uniform Linear Array.
d sinB rk = (k - 1 ) - -
c
(6.2.22)
where c is the propagation velocity of the impinging waveform (for example, the speed of light, in the case of electromagnetic waves). Inserting (6.2.22) into (6.2.20) gives a(B) =
[1, e- iwcd sin8/c, ... , e- i (m-l)wcdSin8/cf
(6.2.23)
The restriction of B to lie in the interval [-900 , 90°] is a limitation of ULAs: two sources at locations symmetric with respect to the array line yield identical sets of delays {rk} and hence
284
Chapter 6 Spatial Methods· Source
.~~
.':.~
Figure 6.5
The uniform linear array scenario.
cannot be distinguished from one another. In practice, this ambiguity of ULAs is eliminated by using sensors that pass only signals whose DOAs are in [-90° 190°]. Let A denote the signal wavelength (which is the distance traveled by the waveform in one period of the carrier):
A. =
elfc,
(6.2.24)
Define
fs =fe d sine _ d sine e A
(6.2.25)
and
d sine
Ws
= 2rrfs = We - -e -
(6.2.26)
With this notation, the transfer vector (6.2.23) can be rewritten as (6.2.27) This is a Vandermonde vector that is completely analogous to the vector made from the uniform samples of the sinusoidal signal {e-iw.r(}. Let us explore this analogy a bit further. First, by the previous analogy, W s is called the spatial frequency.
".{
Section 6.3
285
Nonparametric Methods
Second, if we were to sample a continuous-time sinusoidal signal with frequency we, then, in order to avoid aliasing effects, the sampling frequency fo should satisfy (by the Nyquist sampling theorem) (6.2.28) fo > 2fe or, equivalently, Te
To < 2
",
(6.2.29)
where To is the sampling period and Te is the period of the continuous-time sinusoidal signal. Now, in the ULA case considered in this example, we see from (6.2.27) that the vector aCe) is uniquely defined (i.e., there is no "spatial aliasing") if and only if W s is constrained as follows: (6.2.30) However, (6.2.30) is equivalent to
1
lfs I < 2"
{=>
d I sin BI
mj2;":~ otherwise, they might not overlap. If the array is purposely built to meet ESPRIT's subarray" condition, then, normally, in = m /2, and the two subarrays are nonoverlapping. Mathematically, the ESPRIT requirement means that (6.4.17) where
(6.4.18)
and where r(e) denotes the time needed by a wavefront impinging upon the array from the direction to travel between (the "reference points" of) the two twin subarrays. If the angle of arrival e is measured with respect to the perpendicular of the line between the subarrays' center points, then a calculation similar to the one that led to (6.2.22) shows that
e
r(e) = d sin(e)jc
(6.4.19)
where d is the distance between the two subarrays. Hence, estimates of the DOAs can readily be derived from estimates of the diagonal elements of D in (6.4.18). Equations (6.4.17) and (6.4.18) are basically equivalent to (4.7.3) and (4.7.4) in Section 4.7; hence, the ESPRIT DOA estimation method is analogous to the ESPRIT frequency estimator. The ESPRIT DOA estimation method, like the ESPRIT frequency estimator, computes the DOA estimates by solving an n x n eigenvalue problem. There is no search involved, in contrast to the previous methods; in addition, there is no problem ofseparating the"signal DOAs" from the "noise DOAs," once again in contrast to the Yule-Walker, MUSIC, and Min-Norm methods. However, unlike these other methods, ESPRIT can only be used with the special array configuration described earlier. In particular, this requirement limits the number of resolvable sources at n < in (as both AI andA2 must have full column rank). Note that the two subarrays do not need to be calibrated, although they need to be identical, and ESPRIT can be sensitive to differences between the two subarrays in the same way that Yule-Walker, MUSIC, and Min-Norm are sensitive to imperfections in array calibration. Finally,;"note that, like the other DOA-finding algorithms presented in this section (with the exception of the NLS method), ESPRIT is not usable in the case of coherent signals. 6.5 COMPLEMENTS 6.5.1 On the Minimum-Norm Constraint As explained in Section 6.4.4, the Root Min-Norm (temporal) frequency estimator, introduced in Section 4.6, can be used without modification for DOA estimation with a uniform linear array. Using the definitions and notation in Section 4.6, let g = [l gl ... gm-If denote the vector in
Section 6.5
299
Complements
neG)
that has the first element equal to one and minimum Euclidean norm. Then, the Root Min-Norm DOA estimates are obtained from the roots of the polynomial g. . () Z = 1 +.... glZ -1
+ ... + gm-IZ .. -(m-I)
(6.5.1)
that are located nearest the unit circle. (See the description of Min-Norm in Section 4.6.) As N increases, the polynomial in (6.5.1) approaches ",
(6.5.2) where g = [1 gl '" gm_d T is the minimum-norm vector in R(G). In this complement, we show that (6.5.2) has n zeroes at {e- iWk }k=1 (the so-called "signal zeroes") and (m - n - 1) extraneous zeroes situated strictly inside the unit circle (the latter are normally called "noise zeroes"); here {{Uk lk=l either are temporal frequencies or are spatial frequencies (as in (6.2.27)). Let g = [1, gl, ... , gm-lf E R(G). Then (4.2.4) and (4.5.6) imply that
=0 gm-I
1 + gleiw/c
+ ... + gm_lei(m-I)w/c
= 0
(for k = 1, ... , n)
(6.5.3)
Hence, any polynomial g (z) whose coefficient vector belongs to R( G) must have zeroes at {e- iWk }k=l' and thus it can be factored as (6.5.4) where n
gs(z) =
TI (1- e-iWkz-
l)
k=1
The (m - n - I)-degree polynomial gn (z) in (6.5.4) contains the noise zeroes, and at this point is arbitrary. (As the coefficients of gn (z) vary, the vectors made from the corresponding coefficients of g(z) span R(G).) Next, assume that g satisfies the minimum-norm constraint: m-l
L
;~~.: ~.
Igkl 2 = min
(gO
A
1)
(6.5.5)
k=O
By using Parseval's theorem (see (1.2.6)), we can rewrite (6.5.5) as (6.5.6)
300
Chapter 6 Spatial Methods
(where, by convention, g(co) = g(z)L:=e iC1l )' Since gs(z) in (6.5.4) is fixed, the minimization in (6.5.6) is over gn (z). To proceed, some additional notation is required. Let
and let yet) be a signal whose PSD is equal to Igs(cu)1 2; hence, yet) is an nth-order MA process. By making use of (1.3.9) and (1.4.9), along with the previous notation, we can write (6.5.6) in the following equivalent form: (6.5.7) The minimizing coefficients {ak} are given by the solution to a Yule-Walker system of equations similar to (3.4.6). (To show this, parallel the calculation leading to (3.4.8) and (3.4.12).) Since the covariance matrix, of any finite dimension, associated with a moving-average signal is positive definite, it follows that
'·1
iJ
• the coefficients {ak}, and hence {gk}, are uniquely determined by the minimum-norm constraint • the polynomial gn (z) whose coefficients are obtained from (6.5.7) has all its zeroes strictly inside the unit circle (cf Exercise 3.8) which was to be proven. Thus, the choice of g in the Min-Norm algorithm makes it possible to separate the signal zeroes from the noise zeroes, at least for data samples that are sufficiently long. (For small or medium-sized samples, it might happen that noise zeroes get closer to the unit circle than signal zeroes, which would lead to spurious frequency or DOA estimates.) As a final remark, note, from (6.5.6), that there is little reason for gn (z) to have zeroes in the sectors where the signal zeroes are present (since the integrand in (6.5.6) is already quite small for co values close to {CUk}k=l)' Hence, we can expect the extraneous zeroes to be more or less uniformly distributed inside the unit circle, in sectors that do not contain signal zeroes (see, e.g., [KUMARESAN 1983].) For more details on the topic of this complement, see [TUFfS AND KUMARESAN 1982; KUMARESAN 1983].:,
"~~ ':'"~~?
,~~l
~
.••.•
". ~~1~
. ::;i .":.q
'~J1
"
..~~
6.5.2 NLS Direction-of-Arrival Estimation for a Constant-Modulus Signal .
~::-.
The NLS estimation of the DOA of a single signal impinging on an array of sensors is obtained by minimizing the criterion (6.4.4) with n = I, N
L lIy(t) t=1
a(8)s(t)1!2
(6.5.8) '.'-".
Section 6.5
301
Complements
with respect to {S(t)W=:1 and 8. The result is obtained from equation (6.4.7), which for n reduces to
=1
N
e= argmaxa*(8)Ra(8) = argmax '"" la*(8)y(t)1 ~ 9
2
9
(6.5.9)
1=:1
This, of course, is nothing but the beamforming DOA estimat! for n = 1; see (6.3.18). Hence, as expected (see the Remark following (6.4.7) and also (4.3.11»), the NLS estimate of the DOA of an arbitrary signal coincides with the beamforming estimate. In this complement, we will solve the NLS direction-of-arrival estimation problem in (6.5.8), under the assumption that {set)} is a constant-modulus signal; that is, s (t) = exeicf>(I)
(6.5.10)
where ex > 0 denotes the unknown signal amplitude and {1> (t)} is its unknown phase sequence. We assume ex > 0 to avoid a phase ambiguity in {¢(t)}. Signals of this type are often encountered in communication applications with phase-modulated waveforms. Inserting (6.5.10) in (6.5.8) yields the following criterion that is to be minimized with respect to {¢(t)}~=:I' ex, and 8: N
..
~
L
Ily(t) - exeicf>(/)a(8) 11
2
1=:1 ~
..
N
=
L {l!y(t)1!2 + ex
2
I1a(8)
uz -
2ex Re [a*(8)y(t)e- i cf>(I)]}
(6.5.11)
(=:1
It follows from (6.5.11) that the NLS estimate of {1>(t)W=:1 is given by the maximizer of the function
I
Re [a* (8)y (t)e-icf>(t)] = Re [Ia* (8)y(t) ei arg[a*(9)y(I)] e-icf>(I) ]
it-
= la*(8)y(t)1 cos [arg(a*(8)y(t») - 4>(t)]
(6.5.12)
= 1, ... ,N
(6.5.13)
which is easily seen to be
{t;"
$(t) = arg[a*(8)y(t)],
t
From (6.5.11)-(6.5.13), along with the assumption that Ila(8)1! is constant (which is also used to derive (6.5.9», we can readily verify that the NLS estimate of 8 for the constant modulus signal case is given by N
{) = argm;x
L la*(8)y(t)1 1=:1
(6.5.14)
302
Chapter 6 Spatial Methods .
Finally, the NLS estimate of ex is obtained by minimizing (6.5.11) (with {(t)} and 8 replaced by (6.5.13) and (6.5.14), respectively): N
ex'" = N lIa1(0) 11 2
""1*'" I f=: a (8)y (t)
(6.5.15)
Remark: It follows easily from the preceding derivation that, if ex is known (as could be the case when the emitted signal has a known amplitude that is not significantly distorted during • propagation), the NLS estimates of 8 and {(t)} are still given by (6.5.13) and (6.5.14).
Interestingly, the only difference between the beamformer for an arbitrary signal, (6.5.9), and the beamformer for a constant-modulus signal, (6.5.14), is that the "squaring operation" is missing in the latter. This difference is somewhat analogous to the one pointed out in Complement 4.9.4, even though the models considered there and in this complement are rather different from one another. For more details on the subject of this complement, see [STOICA AND BESSON 2000] and its references. 6.5.3 Capon Method: Further Insights and Derivations
The spatial filter (or beamformer) used in the beamfonning method is independent of data. In contrast, the Capon spatial filter is data dependent, or data adaptive; see equation (6.3.24). It is this data adaptivity that confers on the Capon method better resolution and significantly reduced leakage as compared with the beamforming method. An interesting fact about the Capon method for temporal or spatial spectral analysis is that it can be derived in several ways. The standard derivation is given in Section 6.3.2. This complement presents four additional derivations of the Capon method, each not as well known as the standard derivation. Each of the derivations presented here is based on an intuitively appealing design criterion. Collectively, they provide further insights into the features and possible interpretations of the Capon method. APES-Like Derivation
Let 8 denote a generic DOA, and consider equation (6.2.19), yet) = a(8)s(t)
+ e(t)
(6.5.16)
which describes the array output, y(t), as the sum of a possible signal component impinging from the generic DOA 8 and a term e(t) that includes noise and any other signals with DOAs different denote the power of the signal set) in (6.5.16), which is the main parameter from 8. Let we want to estimate: a s2 as a function of 8 provides an estimate of the spatial spectrum. Let us estimate the spatial filter vector, h, as well as the signal power, a s2 , by solving the following least-squares (LS) problem:
a;
minE (lh*y(t) -s(t)1 2 } h,a;
(6.5.17)
Section 6.5
0 by assumption. This observation concludes the proof that C6.5.34) is equivalent to (6.5.35). Using the equivalence of (6.5.34) and (6.5.35), we can rewrite (6.5.34) as follows: h*Rh ::: as2
for any h such that h*ace) = 1
(6.5.36)
From (6.5.36), we can see that the solution to (6.5.32) is given by a;
= minh*Rh h
subject to h*a(e) = 1
which coincides with the standard formulation of the Capon method in (6.3.24). The formulation of the Capon method in (6.5.32) will be used in Complement 6.5.4 to extend the method to the case where the direction vector a (e) is known only imprecisely. 6.5.4 Capon Method for Uncertain Direction Vectors
The Capon method has better resolution and much better interference rejection capability (i.e., much lower leakage) than the beamforming method, provided that the direction vector, ace), is
,:~
Section 6.5
307
Complements
known accurately. However, whenever the knowledge of a (e) is imprecise, the perfonnance of the Capon method could become worse than that of the beamfonning method. To see why this is so, consider a scenario in which the problem is to estimate the power coming from a source with DOA assumed to be equal to eo. Let us assume that, in actuality, the true DOA of the source is eo + .6. For the Capon beamformer pointed toward eo, the source of interest (located at eo + .6) will play the role of an interference and will be attenuated. Consequently, the power of the signal of interest will be underestimated; the larger .6 is, the larger the underestimation error. Because steeringvector errors are common in applications, it follows that a robust version of the Capon method (Le., one that is as insensitive to steering-vector errors as possible) would be highly desirable. In this complement, we will present an extension ofthe Capon method to the case ofuncertain direction vectors. Specifically, we will assume that the only knowledge we have about ace) is that it belongs to the uncertainty ellipsoid (a - ii)*C- 1(a - ii)
::s
1
(6.5.37)
where the vector ii and the positive definite matrix C are given. Note that both a and 5, as well as C, usually depend on e; however, for the sake of notational convenience, we drop the e dependence of these variables. In some applications, there will be too little available infonnation about the errors in the steering vector to make a competent choice of the full matrix C in (6.5.37). In such cases, we may simply set C = sI, so that (6.5.37) becomes (6.5.38) where 8 is a positive number. Let ao denote the true (and unknown) direction vector, and let 80 = !lao - ii f where, as before, is the assumed direction vector. Ideally, we should choose 8 = 80. However, it can be shown (see [STOICA, WANG, AND LI 2003], [LI, STOICA, AND WANG 2003]) that the perfonnance of the robust Capon method remains almost unchanged when 8 is varied in a relatively large interval around 80. As already stated, our goal here is to obtain a robust Capon method that is insensitive to errors in the direction (or steering) vector. We will do so by combining the covariance-fitting fonnulation in (6.5.32) for the standard Capon method with the steering uncertainty set in (6.5.37). Hence, we aim to derive estimates of both a s2 and a by solving the following constrained covariance-fitting problem:
a
2 maxa 2 s
a.us
subject to: R - a;aa* 2: 0 (a - ii)*C-I(a - ii) ::::: 1
(6.5.39)
a;
To avoid the trivial solution (a ~ 0, ~ (0), we assume that a = 0 does not belong to the uncertainty ellipsoid in (6.5.39), or, equivalently, that (6.5.40) i-':-"
m
1",'
.
~if
l ". V:.
~.;~
"y········· ...
(which is a regularity condition).
308
Chapter 6 Spatial Methods
Because both a} and a are considered to be free parameters in the previous fitting problem, there is a scaling ambiguity in the signal covariance term in (6.5.39), in the sense that both (o}, a) and (as2/J.L, J.LI/2a ) for any {J; > 0 give the same covariance term a}aa*. To eliminate this ambiguity, we can use the knowledge that the true steering vector satisfies the following condition (see (6.3.11»: a*a =m
(6.5.41)
However, the constraint in (6.5.41) is nonconvex and so makes the combined problem (6.5.39) and (6.5.41) somewhat more difficult to solve than (6.5.39). On the other hand, (6.5.39) (without (6.5.41» can be solved quite efficiently, as we show next. To take advantage of this fact, we can make use of (6.5.41) to eliminate the scaling ambiguity in the following pragmatic way: • Obtain the solution (o}, a) of (6.5.39). • Obtain an estimate of a that satisfies (6.5.41) by scaling
a,
vm--
a=-a A
Ilall
and a corresponding estimate of a} by scaling o}, such that the signal covariance term is left unchanged (Le., o}aa* = a-}aa*), which gives (6.5.42)
To derive the solution (o}, a) of (6.5.39), we first note that, for any fixed a, the maximizing a} is given by 1 a-2 = ---:-S
a*R-1a
(6.5.43)
(See equation (6.5.33) in Complement 6.5.3.) This simple observation allows us to eliminate from (6.5.39) and hence reduce (6.5.39) to the following problem: .:;
mina*R-1a a
subject to: (a - a)*C-I(a - a) ~ 1
a}
(6.5.44)
Under the regularity condition in (6.5.40), the solution a to (6.5.44) will occur on the boundary of the constraint set; therefore, we can reformulate (6.5.44) as the following quadratic problem with a quadratic equality constraint: mina*R-1a a
subject to: (a - a)*C-1(a - a) = I
(6.5.45)
This problem can be solved efficiently by using the Lagrange multiplier approach-see [LI, STOICA, AND WANG 2003]. In the remaining part of this complement, we derive the Lagrangemultiplier solver in [LI, STOICA, AND WANG 2003], but in a more self-contained way.
Section 6.5
309
Complements
To simplify the notation, consider (6.5.45) with C = eI as in (6.5.38):
mina*R-la
subject to:
a
(The case of C
=I=-
lIa -
af =
e
(6.5.46)
eI can be treated similarly.) Define (6.5.47)
x =a-a \',
and rewrite (6.5.46), using x in lieu of a: min [x*R-lx +x*R-la +a*R-1x] x
subject to:
IIxn 2 = e
(6.5.48)
The constraint in (6.5.48) makes the x that solves (6.5.48) also a solution to the problem min [x*(R- 1 +AI)x +x*R-la +a*R-1x] x
subject to:
lIxn 2 = e
(6.5.49)
where A is an arbitrary constant Let us consider a particular choice of A that is a solution of the equation (6.5.50) and is also such that
R- 1 + AI > 0
(6.5.51)
Then, the unconstrained minimizer of the function in (6.5.49) is given by (6.5.52) and it satisfies the constraint in (6.5.49) (cl (6.5.50)). It follows that x in (6.5.52) with A given by (6.5.50) and (6.5.51) is the solution to (6.5.49) (and, hence, to (6.5.48)). Hence, what is left to explain is how to solve (6.5.50) under the condition (6.5.51) in an efficient manner; we will do that next. Let R = UAU*
(6.5.53)
denote the eigenvalue decomposition (EVD) of R, where U* U = UU* = I and
A= [AI ... 0]; o
(6.5.54)
Am
Also, let
b=
u*a
(6.5.55)
310
Chapter 6 Spatial Methods
Using (6.5.53}-(6.5.55), we can rewrite the left-hand side of equation (6.5.50) as g(A) ~ a* [I
+ ARr2 a =
ii* [u (l
= b*(/
+ AA)-2b =
t
k=l
+ AA)U*]-2 a
Ibkl
(l
2
(6.5.56)
+ AAk)2
where bk is the kth element of the vector b. Note that m
L Ibkf = lib 11
2
= llii1l 2 >
(6.5.57)
E
k=l
(See (6.5.55) and (6.5.40).) It follows from (6.5.56) and (6.5.57) that A can be a solution of the equation g(A) = E only if (6.5.58) for some value of k. At the same time, A should (see (6.5.51)) be such that
R- 1 + Al > 0
{=::::> {=::::>
+ AR > 0 1 + AAk > 0 for k =
I
1, ... , m
(6.5.59)
It follows from (6.5.58) and (6.5.59) that 1 + AAk > 1 for at least one value of k, which implies that
A>O
(6.5.60)
This inequality sets a lower bound on the solution to (6.5.50). To refine this lower bound, and also to obtain an upper bound, first observe that g (A) is a monotonically decreasing function of A for A > O. Furthennore, for (6.5.61) we have that
g
(A»
L -
(1
1 IlbI12-_E_II-112+ ALAl)2 - lliil1 2 a - E
(6.5.62)
Similarly, for (6.5.63) we can verify that (6.5.64)
Section 6.5
311
Complements
Summarizing the previous facts, it follows that equation (6.5.50) has a unique solution for A that satisfies (6.5.51), which belongs to the interval [AL. AU] C (0, 00). With this observation, the derivation of the robust version of the Capon method is complete. The following is a step-by-step summary of the Robust Capon algorithm.
The Robust Capon Algoritht!'
Step 1. Compute the eigendecomposition R = U AU*, and set b = U*a. Step 2. Solve the equation g(A) = e for A (using, e.g., a Newton method, along with the fact that there is a unique solution in the interval [AL, AU])' Step 3. Compute (cj. (6.5.47), (6.5.52), (6.5.53» (6.5.65) and, finally, compute the power estimate (see (6.5.42) and (6.5.43» ii*ii ma*UA-IU*a
a-.2 --------.,..-S
where, from (6.5.65), U*ii
=b-
-
(I
(6.5.66)
+ AA)-lb.
The bulk of the computation in the algorithm involves computing the EVD of R, which requires O(m 3 ) arithmetic operations. Hence, the computational complexity of the Robust Capon method is comparable to that of the standard Capon method. We refer the reader to [Lr, STOICA, AND WANG 2003] and also to [STOICA, WANG, AND Lr 2003] for further computational considerations and insights, and for many numerical examples illustrating the good performance of the Robust Capon method, including its insensitivity to the choice of e in (6.5.38) or C in (6.5.37).
6.5.5 Capon Method with Noise-Gain Constraint As was explained in Complement 6.5.4, the Capon method performs poorly as a power estimator in the presence of steering-vector errors. (Yet, it could perform fairly well as a OOA estimator, provided that the SNR is reasonably large; see [Cox 1973; Lr, STOICA, AND WANG 2003] and references therein.) The same happens when the number of snapshots, N, is relatively small, such as when N is equal to or only slightly larger than the number of sensors, m. In fact, there is a close relationship between the cases of steering-vector errors and those of small-sample errors-see, for example, [FELDMAN AND GRIFFITHS 1994]. More precisely, the sampling estimation errors of the covariance matrix can be viewed as steering-vector errors in a corresponding theoretical covariance matrix, and vice versa. For example, consider a uniform linear array, and assume that the source signals are uncorrelated with one another. In this case, the theoretical covariance matrix R of the array output is Toeplitz. Assume that the sample covariance matrix R is also Toeplitz. According to the Caratheodory parameterization of Toeplitz matrices (see Complement 4.9.2), we
312
Chapter 6 Spatial Methods
can view R as being· the theoretical covariance matrix associated with a fictitious ULA on which uncorrelated signals impinge, but the powers and DOAs of the latter signals are different from those of the actual signals. Hence, the small sample estimation errors in R can be viewed as being due to steering-vector errors in a corresponding theoretical covariance matrix. The robust Capon method (RCM) presented in Complement 6.5.4 significantly outperfonns the standard Capon method (CM) in power-estimation applications in which the sample length is insufficient for accurate estimation of R or in which the steering vector is known imprecisely. The ReM was introduced in [STOICA, WANG, AND LI 2003; LI, STOICA, AND WANG 2003]. An earlier approach, whose goal is also to enhance the performance of CM in the presence of samplingestimation errors or steering-vector mismatch, is the so-called diagonal-loading approach (see, e.g., [HUDSON 1981 ; VAN TREES 2002] and references therein). The main idea of diagonal loading is to replace R in the Capon formula for the spatial filter h, (6.3.24), by the matrix R +).[
(6.5.67)
where the diagonal-loading factor). > 0 is a user-selected parameter. The filter vector h so obtained is given by h
=
(R + )..I)-la a*(R + ).I)-la
(6.5.68)
The use of the diagonally loaded matrix in (6.5.67) instead of R is the reason for the name of the approach based on (6.5.68). The symbol R in this complement refers either to a theoretical covariance matrix or to a sample covariance matrix. There have been several rules proposed in the literature for choosing the parameter ).. in (6.5.68). Most of these rules choose). in a rather ad hoc and data-independent manner. As illustrated in [LI, STOICA, AND WANG 2003] and its references, a data-independent selection of the diagonal-loading factor cannot improve the performance for a reasonably large range of SNR values. Hence, a data-dependent choice of).. is desired. One commonly used data-dependent rule selects the diagonal-loading factor). > 0 that satisfies
IIh 11 2 =
a*(R + "AI)-2 a 2 [a*(R + )..I)-la]
=c
(6.5.69)
where the constant c must be chosen by the user. Let us explain briefly why choosing "A via (6.5.69) makes sense intuitively. Assume that the array output vector contains a spatially white noise component whose covariance matrix is proportional to [ (see (6.4.1)). Then the power at the output of the spatial filter h due to the noise component is IIh 11 2 ; for this reason, llh 11 2 is sometimes called the (white) noise gain of h. In scenarios with a large number of (possibly closely spaced) source signals, the Capon spatial filter h in (6.3.24) could run out of "degrees of freedom" and hence not pay enough attention to the noise in the data (unless the SNR is very low). The result is a relatively high noise gain, IIh\l2, which may well degrade the accuracy of signal power estimation. To prevent this from happening, it makes sense to limit IIh 11 2 as in (6.5.69). By doing so, we are left with the problem of choosing c. The choice of c might be easier than the direct
Section 6.5
313
Complements
choice of A in (6.5.68), yet it is far from trivial, and, in fact, clear-cut rules for selecting c are hardly available. In particular, a "too small" value of c could limit the noise gain unnecessarily and result in decreased resolution and increased leakage. In this complement, we will show that the spatial filter of the diagonally loaded Capon method in (6.5.68), (6.5.69) is the solution to the following design problem:
minh*Rh
subject to: h*a
h
= 1 and
IIhf.~ c
(6.5.70)
Because (6.5.70) is obtained by adding the noise-gain constraint IIh 11 2 ::: c to the standard Capon problem in (6.3.23), we will call the method that follows from (6.5.70) the constrained Capon method (CCM). The fact that (6.5.68), (6.5.69) is the solution to (6.5.70) is well known from the previous literature (see, e.g., [HUDSON 1981]). However, we present a rigorous and more thorough analysis of this solution. As a by-product, the analysis that follows also suggests some guidelines for choosing the user parameter c in (6.5.69). Note that, in general, a, c, and h in (6.5.70) depend on the DOA (); to simplify notation, we will omit the functional dependence on () here. It is interesting to observe that the RCM, described in Complement 6.5.4, can also be cast into a diagonal-loading framework. To see this, first note from (6.5.47) and (6.5.52) that the steering-vector estimate used in the RCM is given by
a
=a 1
(l
+ AR)-1a = (l + AR)-l [(1 + AR) -
= ( "i,R- 1 +1
)-1 a
I]a
(6.5.71)
The RCM estimates the signal power by 1
(6.5.72)
with a as given in (6.5.71); hence, RCM does not directly use any spatial filter. However, the power estimate in (6.5.72) is equal to h*Rh, where
R- 1a h = ---,.a*R-1a
(6.5.73)
hence, (6.5.72) can be viewed as being obtained by the (implicit) use of the spatial filter in (6.5.71), (6.5.73). Inserting (6.5.71) into (6.5.73), we obtain
(6.5.74)
which, except for the scalar in the denominator, has the form in (6.5.68) of the spatial filter used by the diagonal-loading approach. Note that the diagonal-loading factor, 1/).., in (6.5.74) is data dependent. Furthermore, the selection of A in the RCM (see Complement 6.5.4 for details on this
314
Chapter 6 Spatial Methods
aspect) relies entirely on infonnation about the uncertainty set of the steering vector, as defined, for instance, by the sphere with radius £ 1/2 in (6.5.38). Such infonnation is more readily available in applications than is information that would help the user select the noise-gain constraint c in the CCM. Indeed, in many applications, we should be able to make a more competent guess about 8 than about c (for all DOAs of interest in the analysis). This appears to be a significant advantage of RCM over CCM, despite the fact that both methods can be interpreted as data-dependent diagonal-loading approaches.
Remark: The reader might have noted by now that the CCM problem in (6.5.70) is similar to the combined RCM problem in (6.5.44), (6.5.41) discussed in Complement 6.5.4. This observation has two consequences. First, it follows that the combined RCM design problem in (6.5.44), (6.5.41) could be solved by an algorithm similar to the one presented below for solving the CCM problem; indeed, this is the case, as is shown in [LI, STOlCA, AND WANG 2004]. Second, the CCM problem (6.5.70) and the combined RCM problem (6.5.44), (6.5.41) both have two constraints and are more complicated than the RCM problem (6.5.44), which has only one constraint. Hence, the CCM algorithm described below will be (slightly) more involved computationally than the RCM algorithm outlined in Complement 6.5.4. • We begin the analysis of the CCM problem in (6.5.70) by deriving a feasible range for the user parameter c. Let S denote the set of vectors h that satisfy both constraints in (6.5.70): S = {h
I h*a = 1 and I!hll 2 ::: c}
(6.5.75)
By the Cauchy-Schwartz inequality (see Result R12 in Appendix. A), we have that (6.5.76) where we also used the fact that (by assumption; see (6.3.11)), (6.5.77) The inequality in (6.5.76) sets a lower bound on c; otherwise, S is empty. To obtain an upper bound, we can argue as follows: The vector h used in the CM has the nonn (6.5.78) Because the noise gain of the CM is typically too high, we should like to choose c so that (6.5.79) Note that, if c does not satisfy (6.5.79), then the CM spatial filter h satisfies both constraints in (6.5.70) and, hence, is the solution to the eeM problem. Combining (6.5.76) and (6.5.79) yields
Section 6.5 Complements
315
the following interval for c:
CE
2
1
a*R- a ] [ m (a*R-1 a) 2 -,
As with (6.5.53), let
(6.5.80)
'-,
R = UAU*
(6.5.81)
be the eigenvalue decomposition (EVD) of R, where U*U = UU* = I and
A= [AI . 0]; o
(6.5.82)
Am
Now,
.
~ ';
(6.5.83)
ih """ ~'".-
."".
so it follows from (6.5.79) that c also satisfies
~"
(6.5.84) The above inequality will be useful later on. Next, let us define the function g(h, A, p.)
= h*Rh + A(UhU 2 -
c)
+ p.(-h* a -
a*h
+ 2)
(6.5.85)
where p. E R is arbitrary and where (6.5.86)
Remark: We note, in passing, that A and p. are the so-called Lagrange multipliers and that
g(h, A, p.) is the so-called Lagrangian function associated with the CCM problem in (6.5.70)~ however, to make the following derivation as self-contained as possib le, we will not explicitly use any result from Lagrange-multiplier theory. •
Evidently, by the definition of g(h, A, p,), we have that g(h, A, p.) ~ h*Rh
., "-..:
for any h
E
S
(6.5.87)
316
and for any J.L
Chapter 6 Spatial Methods E
R and A > O. The part of (6.5.85) that depends on h can be written as h*(R
+ AI)h -
= [h - J.L(R - J-L2 a *(R
Il-h*a - J.La*h
+ Al)-la]* (R + AI) [h
- J.L(R
+ AI)-la]
+ AI)-la
(6.5.88)
Hence, for fixed A and J.L, the unconstrained minimizer of g (h, A, J.L) with respect to h is given by (6.5.89) Let us choose J.L such that (6.5.89) satisfies the first constraint in (6.5.70): ~
h* (A~) 1 , J.L a =
¢=>
1
~
J.L = a*(R
(6.5.90)
+ Al)-la
(which is always possible, for A > 0). Also, let us choose A so that (6.5.89) also satisfies the second constraint in (6.5.70) with equality -that is, (6.5.91) We will show shortly that this equation has a unique solution i > 0 for any c satisfying (6.5.80). Before doing so, we remark on the following important fact: Inserting (6.5.90) into (6.5.89), we get the diagonally loaded version of the Capon method (see (6.5.68))-that is, "
A
~
h(A, J-L)
+ AI)- 1a = ~ a*(R + AI)-la (R
~
(6.5.92)
Because i satisfies (6.5.91), the vector h(i, [L) lies on the boundary of S; hence (see also (6.5.87)), g (h(i, {l),
i, [L)
= h*(i, {l)Rh(i, {l)
:s h*Rh
for any h
E
S
(6.5.93)
From (6.5.93), we conclude that (6.5.92) is the (unique) solution to the CCM problem. It remains to show that, indeed, equation (6.5.91) has a unique solution i > 0 under (6.5.80) and also to provide a computationally convenient way of finding t Towards that end, we use the EVD of R in (6.5.91) (with the hat on i omitted, for notational simplicity) to rewrite (6.5.91) as follows: f(A) = c
(6.5.94)
f
K r '" Section 6.5
r r~
317
Complements
where
f
..
, ,..•.
(6.5.95)
and where bk is the kth element of the vector ~..
b = U*a
f
.k ~~.
~.
(6.5.96)
Differentiation of (6.5.95) with respect to A yields
·c· {~.~-
~t
f
r:,
~
..
~~/
~L
~..
J1 1 ~:'.
;;..
"\.:
(6.5.97)
"j.,
Making use of the Cauchy-Schwartz inequality once again, we can show that
(6.5.98)
318
Chapter 6 Spatial Methods
Hence, f'(A) < 0 for any A > 0 (and Ak =1= Ap for at least one pair k
=I- p)
(6.5.99)
which means thatf(A) is a monotonically strictly decreasing function for A > O. Combining this observation with the fact thatf(O) > c (see (6.5.79)) shows that, indeed, the equationfc.}.,) = c in (6.5.91) has a unique solution for A > O. For efficiently solving the equation f(A) = c, an upper bound on A would also be useful. Such a bound can be obtained from (6.5.95) as follows: A simple calculation shows that
c =
f
(A) < (Am
+ A)2
(AI
+ .1..)2
lib 11 4
==> where we used the fact that lIb1l 2 inequality
mC(A m
+ A)2
= lIall 2 = m.
< (AI
+ A)2
(6.5.100)
From (6.5.100), we see that A must satisfy the
(6.5.101)
Note that both the numerator and the denominator in (6.5.101) are positive; see (6.5.76) and (6.5.84). The derivation of the constrained Capon method is now complete. The following is a stepby-step summary of the CCM.
The Constrained Capon Algorithm
Step 1. Compute the eigendecomposition R = U AU*, and set b = U*a. Step 2. Solve the equation f(A) = c for A (using, e.g., a Newton method, along with the fact that th~re is a unique solution that lies in the interval (0, AU )). Step 3. Compute the (diagonally loaded) spatial filter vector
where A is found in Step 2, and estimate the signal power as h *Rh .
To conclude this complement, we note that the CCM algorithm is quite similar to the ReM algorithm presented in Complement 6.5.4. The only differences are that the equation for A
,.
~.
Section 6.5
319
Complements
associated with the CCM is slightly more complicated and, more importantly, that it is harder to select the c needed in the CCM (for any DOA of interest) than it is to select e in the RCM. As we have shown, for CCM one should choose c in the interval (6.5.80). Note that for c = 11m we get A. --+ 00 and h = aim, which is the beamforming method. For c = a*R- 2a/(a*R- 1a)2, we obtain A. = 0 and h = hCM, which is the standard Capon method. Values of c between these two extremes should be chosen in an application-dependent manner. 6.5.6 Spatial Amplitude and Phase Estimation (APES) ",
As was explained in Section 6.3.2, the Capon method estimates the spatial spectrum by using a spatial filter that passes the signal impinging on the array from direction in a distortionless manner and at the same time attenuates signals with DOAs different from () as much as possible. The Capon method for temporal spectral analysis is based on exactly the same idea (see Section 5.4), as is the temporal APES method described in Complement 5.6.4. In this complement, we will present an extension of APES that can be used for spatial spectral analysis. Let () denote a generic DOA, and consider the equation (6.2.19),
e
yet) = a«()s(t)
+ e(t),
(6.5.102)
t=I, ... ,N
that describes the array output, yet), as a function of a signal, s(t), possibly impinging on the array from a DOA equal to (), and a term, e(t), that includes noise along with any other signals whose DOAs are different from (). We assume that the array is uniform and linear, in which case a «() is given by (6.5.103) where m denotes the number of sensors in the array, and W s = (Wed sin ()/c is the spatial frequency (see (6.2.26) and (6.2.27). As we will explain later, the spatial extension of APES presented in this complement appears to perform well only in the case of ULAs. This is a limitation, but it is not a serious one, because there are techniques that can be used to approximately transform the direction vector of a general array into the direction vector of a fictitious ULA (see, for example, [DORON, DoRON, AND WEISS 1993]). Such a technique performs a relatively simple DOA-independent linear transformation of the array output snapshots; these linearly transformed snapshots can then be used as the input to the spatial APES method presented here. (See [ABRAHAMSSON, JAKOBSSON, AND STOICA 2004] for details on how to use the spatial APES approach of this complement for arrays that are not uniform and linear.) Let u s2 denote the power of the signal s (t) in (6.5.102), which is the main parameter we want to estimate; note that the estimated signal power as a function of provides an estimate of the spatial spectrum. In this complement, we assume that {s(t)W=l is an unknown deterministic sequence; hence, we define as
a;,
e,
a;
1 N 2 u} = lim Is(t)1 N-+oo N
L
1=1
(6.5.104)
320
Chapter 6 Spatial Methods
An important difference between equation (6.5.102) and its temporal counterpart (see, e.g., equation (5.6.80) in Complement 5.6.6) is that, in (6.5.102), the signal s (t) is completely unknown, whereas, in the temporal case, we have set) = {3e iwt and only the amplitude is unknown. Because of this difference, the use of the APES principle for spatial spectral estimation is somewhat different from its use for temporal spectral estimation.
Remark: We remind the reader that {s (t) W=l is assumed to be an unknown deterministic sequence here. The case in which {set)} is assumed to be stochastic is considered in Complement 6.5.3. Interestingly, application of the APES principle in the stochastic signal case leads to the (standard) Capon method! • Let
m
L.(8)
= a*(8) [k - po-ra(OI)a*(81)] a(8) =
~1 (8) -
2
po-lla*(8)a(81) 1
(6.5.151)
Observe that (6.5.148) and (6.5.150) coincide with (6.5.136)-{6.5.137) and (6.5.142)-{6.5.143). Evidently, continuing the previous iterative process, for which (6.5.148) and (6.5.150) are the first two steps, leads to the CLEAN algorithm on page 328.
Chapte r 6 Spatial Methods
330
ties of this algorithm. The foregoing derivation of CLEA N sheds some light on the proper sed at each iteration of First, note that the IS covariance-fitting criterion in (6.5.146) is decrea rd calculation shows that CLEAN. For instance, consider the first iteration. A straightforwa 2
R - PO-I2 a (61da* (611) 11 A
II
A
A
2
= IIRI/2 -
2pa-fa *(8dRa (81) + m2p a-i
= IIRII 2
p(2 - p)m 2 a-i
-
(6.5.152)
2 um decrease occurs for Clearly, (6.5.152) is less than IIRII for any p E (0,2), and the maxim on in (6.5.146) monotonically p = 1 (as expected). A similar calculation shows that the criteri and that at each iteration the decreases as we continue the iterative process, for any p E (0,2), think of choosing p = 1, but maximum decrease occurs for p = 1. As a consequence, we might decrease the fitting criterion this is not advisable. The reason is that our goal is not only to residual covariance matrices (6.5.146) as much and as fast as possible, but also to ensure that the A
A
R 1 =R
(6.5.153)
a*(61k+d to Rk+l would remain positive definite for k = 1,2, ...; otherwise, fitting af+la(61k+l) on (6.5.33) of Complemake little statistical sense. By a calculation similar to that in equati to ment 6.5.3, it can be shown that the condition Rk+I > is equivalent
°
1
(6.5.154)
1, because, by the Cauch yNote that the right-hand side of (6.5.154) is bounded above by Schwartz inequality,
ction of the inequality in Also note that, depending on the scenario under consideration, satisfa less than 1. To summarize, (6.5.154) for k = 1,2, ... could require choosing a value for p much ng p < 1 (or even p « 1) the preceding discussion has provided a a precise argument for choosi in the CLEAN algorithm.
Section 6.5
331
Complements
The LS covariance-fitting derivation of CLEAN also makes the semiparametric nature of CLEAN more transparent. Specifically, the discussion has shown that CLEAN fits the semiparametric covariance model in (6.5.145) to the sample covariance matrix R. Finally, note that, although there is a significant literature on CLEAN, its statistical properties are not well understood; in fact, other than the preliminary study of CLEAN reported in [SCHWARZ 1978B], there appear to be very few statistical studies in the literature. The derivation of CLEAN based on the LS covariance-fitting criterion in (6.5.146) could aJso be useful in understanding the statistical properties of CLEAN. However, we will not attempt to provide a statistical analysis of CLEAN in this complement. 6.5.8 Unstructured and Persymmetric ML Estimates of the Covariance Matrix
Let {y(t)},=:;1.2•... be a sequence of independent and identically distributed (Li.d.) m x 1 random vectors with mean zero and covariance matrix R. The array output given by equation (6.2.21) is an example of such a sequence, under the assumption that the signal set) and the noise e(t) in (6.2.21) are temporally white. Furthermore, let yet) be circularly Gaussian distributed (see Section B.3 in Appendix B), in which case its probability density function is given by (6.5.155) Assume that N observations of {yet)} are available: (6.5.156)
{y(I), "', yeN)}
Owing to the Li.d. assumption made on the sequence {y(t)}/=:;!,2, ... , the probability density function of the sample in (6.5.156) is given by N
p(y(I), ... ,yeN)) = IIp(y(t)) 1==1
(6.5.157) The maximum likelihood (ML) estimate of the covariance matrix R, based on the sample in (6.5.156), is given by the maximizer of the likelihood function in (6.5.157) (see Section B.1 in Appendix B) or, equivalently, by the minimizer of the negative log-likelihood function: N
-lnp(y(l), ... , yeN))
= mN In(;r) + N
In IR 1+ Ly*(t)R-1y(t) 1==1
(6.5.158)
332
Chapter 6 Spatial Methods
"The part of (6.5.158) that depends on R is given (after multiplication by
1 N
In IRI
+N
Ly*(t)R- 1y(t) = In IRI
tr) by
+ tr (R- 1R)
(6.5.159)
1=1
where
R=
1 N (m x m)
N Ly(t)y*(t)
(6.5.160)
1=1
In this complement, we discuss the minimization of (6.5.159) with respect to R, which yields the ML estimate of R, under either of the following two assumptions: A: R has no assumed structure or B: R is persymmetric.
As explained in Section 4.8, R is persymmetric (or centrosymmetric) if and only if
(6.5.161) where J is the so-called reversal matrix defined in (4.8.4). Remark: If yet) is the output of an array that is unifonn and linear and the source signals are uncorrelated with one another, then the covariance matrix R is Toeplitz, and, hence, persymmetric.
•
We will show that the unstructured ML estimate of R, denoted Ru ,ML, is given by the standard sample covariance matrix in (6.5.160), RU,ML
=
R
(6.5.162)
whereas the persymmetric ML estimate of R, denoted
Rp ,ML, is given by
"T)
" ,ML = 2:1 (" Rp R + JR J
(6.5.163) I [
To prove (6.5.162), we need to show (see (6.5.159» that lnlRI
+tr(R- 1 R)
2: InlRI
+m
for any
R>
0
(6.5.164)
o S
section 6.5 Complements
333 Let
C be a square root of R (see Definition D1 2
in Appendix A) and not e tha t (6.5 .16 5)
Usi ng (6.5 .16 5) in (6.5.164), we obt ain the seri es of equ ival enc es (6.5.164)
~
tr
~
tr (C*R-1C) -In
~
L (>"k -
(C*R-1C)
.'-.
-In !R-1k! ?: m
IC*R-1C I?: m
m
In Ak - 1) ?: 0
k=l
(6.5 .16 6)
whe re {Ak} are the eige nva lues of the mat rix C*R- 1C. Nex t, wit h refe ren ce to (6.5.166), we sho w that
f
(A) A A - In A - 1 ?: 0
for any A > 0
To verify (6.5.167), obs erv e that
f
,
(A)
1 = 1- -; A
flf( A) =
(6.5 .16 7)
:2
Hen ce, the fun ctio nf( A) in (6.5.167 ) has a uni que min imu m at A = 1, and f(1 ) = 0; this pro ves (6.5.167). Wit h this observation, the pro of of (6.5.166), and ther efo re of (6.5.162), is complete. The pro of of (6.5 .16 3) is eve n sim pler. In vie w of (6.5.161), we hav e that
tr(R-1R) = tr[(JRTJ)-l R] = tr(R-TJRJ) = tr(R-1JRTJ)
(6.5 .16 8)
Hen ce, the fun ctio n to be min imi zed wit h resp ect to R (under the con stra int (6.5.161» can be wri tten as
(6.5 .16 9) As was sho wn ear lier in this com ple men t, the uns truc ture d min imi zer of (6.5.169) is giv en by (6.5 .17 0) Bec aus e (6.5 .17 0) satisfies the per sym met ry con stra int, by constructio n, it also gives the con stra ine d min imi zer of the neg ativ e log-lik elih ood function; hen ce, the pro of of (6.5 .163) is con clu ded as wel l. The rea der inte rest ed in mo re deta ils on the top ic of this com plem ent, incl udi ng a com par ison of the stat isti cal esti mat ion errors asso ciat ed wit h RU. ML and Rp, ML' can consult [JANSSON AND STO ICA 1999].
Chapte r 6 Spatial Methods
334
6.6 EXERCISES Exercise 6.1: Source Localization by Using a Sensor in Motion
can be found by using a This exercise illustrates how the directions of arrival of planar waves of DOA estimation by sensorsingle moving sensor. Conceptually, this proble m is related to that g a synthetic aperture similar array methods. Indeed, we can think of a sensor in motion as creatin sensors. to the one corresponding to a physical array of spatially distributed to v. Also, assume that Assume that the sensor has a linear motio n with constant speed equal plane as the sensor. Let Ok the sources are far-field point emitters at fixed locations in the same direction of wave propagation denote the kth DOA param eter (defined as the angle betwe en the s emit sinusoidal signals and the nonna l to the sensor trajectory). Finally, assume that the source reflections of a probing {CXteiwt}k=l with the same (center) frequency UJ. (These signals could be case it is not restrictive to sinusoidal signal from different point scatterers of a target, in which assum e that they all have the same frequency.) high-frequency compoShow that, under the previous assumptions and after elimination of the be written as nent corresponding to the frequency w, the sensor output signal can n
s(t) = L
CXkeiwft
+ e(t)
(6.6.1)
k=l
where e(t) is measurement noise and
wf is the kth Doppler frequency, defined by D wk
V,W
= - - - sinOk c
that the DOA-estimation with c denoting the velocity of signal propagation. Conclude from (6.6.1) by using the estimation proble m associated with the scenario under consideration can be solved the sensor speed v can be metho ds discussed in this chapte r and in Chapt er 4 (provided that accurately determined).
Exercise 6.2: Beamfonning Resolution for Uniform Linear Arrays
equal to d. Let A denote the Consi der a ULA comprising m sensors, with interelement spacing discussion in Chapter 2, the wavelength of the signals impinging on the array. According to the given by spatialjrequency resolution of the beamf ormin g used with this ULA is
2Jr m
!:i.w s = -
~.fs
1 = m
(6.6.2)
of beamf onning for signals Make use of the previous observation to show that the DOA resolution coming from broadside is (6.6.3)
r
335
~~
Section 6.6
~~-
where L is the array's length measured in wavelengths:
"J.
Exercises
\":'.
L = _(m_-_l)_d A
(6.6.4)
Explain how (6.6.3) approximately reduces to (6.3.20), for suf:fi~iently large L. Next, show that, for signals impinging from an arbitrary direction angle (), the DOA resolution of beamforming is approximately 1
(6.6.5)
!l() ~ - - -
Lj cos ()I
Hence, for signals coming from nearly end-fire directions, the DOA resolution is much worse than what is suggested in (6.3.20).
Exercise 6.3: Beamforming Resolution for Arbitrary Arrays The beampattem (some ()o) has the same shape as a spectral window: It has a peak at () = ()o, is symmetric about that point, and the peak is narrow (for large enough values of m). Consequently, the beamwidth of the array with direction vector aCe) can approximately be derived by using the window bandwidth formula proven in Exercise 2.15: (6.6.6) Now, the array's beamwidth and the resolution of beamforming are closely related. To see this, consider the case where the array output covariance matrix is given by (6.4.3). Let n = 2, and assume that P = I (for simplicity of explanation). The average beamforming spectral function is then given by
which clearly shows that the sources with DOAs ()I and fh are resolvable by beamforming if and only if 1()1 - ()21 is larger than the array's beamwidth. Consequently, we can approximately determine the beamforming resolution by using (6.6.6). Specialize equation (6.6.6) to a ULA, and compare with the results obtained in Exercise 6.2.
Exercise 6.4: Beamforming Resolution for L-Shaped Arrays Consider an m -element array, with m odd, shaped as an "L" with element spacing d. Thus, the array elements are located at points (0,0), (0, d), ... , (0, d(m - 1)/2) and (d, 0), ... , (d(m -
Cha pte r 6 Spatial Methods
336 mforming for signals 6.3, find the DOA resolution of bea e rcis Exe in lts resu the ng Usi ). angles 1)/ 2,0 maximum resolution, and for what and m imu min the are at Wh e. le coming from an ang ULA case in pare your results with the m-element Com d? lize rea ons luti reso al rem are these ext Exercise 6.2. cations Beamwidth and Array-Element Lo Exercise 6.5: Relationship between = I, ... , m. ents located at Tic = [Xfe, Ylc f for k elem h wit y arra nar pla ent lem m-e Consider an n (6.6.6) to show origin, so L~=l Tic = O. Use equatio the at d tere cen is y arra the t tha e Assum n eo is given by that the array beamwidth at directio
A 1 -- ~e~...fiD(e o) 2n
(6.6.7)
n y elements to the origin in the directio arra the of e anc dist e uar -sq ean t-m where D(eo) is the roo orthogonal to eo (see Figure 6.8): dk(eO) = Xk sineo - Yk cos eo
D(eo) =
ates the equation (6.6.7) slightly underestim in tion ima rox app idth mw bea the As in Exercise 2.15, tion is given by true beamwidth; a better approxima
A
1
(6.6.8)
~e ~ 1. 15 .J2 - 2n D(eo)
der an Exercise 6.6: Isotropic Arrays ctions is said to be isotropic. Consi dire all for e sam the is idth mw An array whose bea and centered at d at rk = [Xfe, Yk f for k = 1, ... , m ate loc s ent elem h wit y arra nar pla m-element y
\
"\.
.
......... '2
•.......
DO A
'1
.. /
....•...........• \
.
~ ..
\
•.........
x
angle es fro m the orig in for DOA anc dist ted jec pro nt me ele ayFigure 6.8 Arr Exercise 6.5).
eo
(see
fr.
Section 6.6
337
Exercises
the origin (L~=l rk = 0), as in Exercise 6.5. Show that the array beamwidth (as given by (6.6.7» is the same for all DOAs if and only if (6.6.9) where
R=
",
Xl Yl X2 Y2
Xm Ym
and where c is a positive constant. (See [BAYSAL AND properties of isotropic arrays.)
MOSES
2003] for additional details and
Exercise 6.7: Grating Lobes The results of Exercise 6.2 might suggest that an m-element ULA can have very high resolution simply by using a large array element spacing d. However, there is an ambiguity associated with choosing d > )./2; this drawback is sometimes referred to as the problem of grating lobes. Identify this drawback, and discuss what ambiguities exist as a function of d. (Refer to the discussion on ULAs in Section 6.2.2.) One potential remedy to this drawback is to use two ULAs: one with ml elements and element spacing d l = )./2, and another with mz elements and element spacing d2. Discuss how to choose mI, m2, and dz to both avoid ambiguities and increase resolution over a conventional ULA with element spacing d = )./2 and ml + m2 elements. Consider as an example using a lO-element ULA with dz = 3)../2 for the second ULA; find mi to resolve ambiguities in this array. Finally, discuss any potential drawbacks of the two-array approach. Exercise 6.8: Beamspace Processing Consider an array comprising many sensors (m » 1). Such an array should be able to resolve sources that are quite closely spaced (ef (6.3.20) and the discussion in Exercise 6.3). There is, however, a price to be paid for the high-resolution performance achieved by using many sensors: the computational burden associated with the elementspace processing (ESP) (i.e., the direct processing of the output of all sensors) may be prohibitively high, and the involved circuitry (A-D converters, etc.) could be quite expensive. Let B* be an m x m matrix with m < m, and consider the transformed output vector B*y(t). The latter vector satisfies the following equation (ef (6.2.21»: B*y(t) = B*As(t) + B*e(t)
(6.6.10)
The transformation matrix B* can be interpreted as a beamformer or spatial filter acting on yet). Estimation of the DOAs of the signals impinging on the array using B*y(t) is called beamspace processing (BSP). Since m < m, BSP should have a lower computational burden than ESP. The critical question is then how to choose the beamformer B so as not to significantly degrade the performance achievable by ESP.
338
Chapter 6 Spatial Methods
Assume that a certain DOA sector is known to contain the source(s) of interest (whose DOAs are designated by the generic variable 80 ). Using this information, design a matrix B* that passes the signals from direction 80 approximately undistorted. Choose B in such a way that the noise in beamspace, B*e(t), is still spatially white. For a given sector size, discuss the tradeoff between the computational burden associated with BSP and the distorting effect of the beamformer on the desired signals. Finally, use the results of Exercise 6.3 to show that the resolution of bearnforming in elementspace and beamspace are nearly the same, under the previous conditions.
Exercise 6.9: Beamspace Processing (cont'd) In this exercise, for simplicity, we consider the beamspace processing (BSP) equation (6.6.10) for the case of a single source (n = 1):
+ B*e(t)
B*y(t) = B*a(8)s(t)
(6.6.11)
The elementspace processing (ESP) counterpart of (6.6.11) (ef (6.2.19» is y(t) = a(8)s(t)
Assume that lIa(8) 11 2 = m (see (6.3.11» and that the Furthermore, assume that
+ e(t)
(6.6.12)
mx m matrix B* is unitary (i.e., B*B = I).
a(8) E R(B)
(6.6.13)
To satisfy (6.6.13), we need knowledge about a DOA sector that contains 8, which is usually assumed to be available in BSP applications; note that the narrower this sector, the smaller the value we can choose for m. As m decreases, the implementation advantages of BSP compared with ESP become more significant. However, the DOA estimation performance achievable by BSP might be expected to decrease as m decreases. As indicated in Exercise 6.8, this is not necessarily the case. In the present exercise, we lend further support to the fact that the estimation performances of ESP and BSP can be quite similar to one another, provided that condition (6.6.13) is satisfied. To be specific, define the array SNR for (6.6.12) as E {lIa(8)s(t)1l2} mP P 2 E lIe(t) 11 = ma 2 = 0'2
(6.6.14)
where P denotes the power of set). Show that the "array SNR" for the BSP equation, (6.6.11), is m/m times that in (6.6.14). Conclude that this increase in the array SNR associated with BSP might counterbalance the presumably negative impact on DOA performance due to the decrease from m to m in the number of observed output signals.
Exercise 6.10: Beamforming and MUSIC under the Same Umbrella Define the scalars t
= 1, . .. ,N.
Section 6.6
Exercises
.:;.~:.
339
I;
By using previous notation, we can write the beamforrning spatial spectru m in (6.3.18) as rl.1\:' ~. Y*(fJ)WY(fJ)
~:~-
t{
(6.6.15)
where
".. ~:"
:,( '.'
W = (lIN )]
:'..
(for beamformingp
and
Show that the MUSIC spatial pseudo spectrum a*(fJ)S S*a(fJ)
(6.6.16)
(see Sections 4.5 and 6.4.3) can also be put in the form (6.6.15), for a cer tain "weighting matrix" W. The columns of the matrix S in (6.6.16) are the n principal eig env ectors of the sample covariance matrix i< in (6.3.17).
Exercise 6.11: Subspace Fitting Int erpretation of MUSIC In
words, the result (4.5.9) (on which MUSIC for both frequency and DO A estimation is based) says that the direction vectors {a (fJk )} belong to the subspace spanne d by the columns of S. Therefore, we can think of estimating the DOAs by choosing fJ (a generic DOA variable) so that the distance between a(fJ) and the closest vector in the span of S is min imi zed -th at is, min !la(fJ) /3.0
Sfill 2
(6.6.17)
where II . II denotes the Euclidean vec tor norm. Note that the dummy vec tor variable fi in (6.6.17) is defined in such a way that Sfi is closest to a(fJ) in Euclidean norm. Show that the DOA estimation metho d derived from the subspace-fitting criterion (6.6.17) is the same as MUSIC.
Exercise 6.12: Subspace Fitting Int erpretation of MUSIC (cont'd) The result (4.5
.9) can also be invoked to arrive at the following subspace fitting criterio n: min IlA(fJ) B,O
SB II}
(6.6.18)
where II . IIF stands for the Frobenius matrix norm and fJ is now the vector of all DOA parameters. This criterion seems to be a more general version of equation (6.6.17 ) in Exercise 6.11. Show that the minimization of the multid imensional subspace fitting criterio n in (6.6.18), with respect to the DOA vector f), still leads to the one-dimensional MUSIC method . Hin t: It will be useful to refer to the type of result proven in equations (4.3.12)-(4.3.16) in Sec tion 4.3.
340
Chapter 6 Spatial Methods
, Exercise 6.13: Subspace Fitting Interpretation of MUSIC (cont~d) The subspace fitting interpretations of the previous two exercises provide some insights into the properties ofthe MUSIC estimator. Assume, for instance, that two or more source signals are coherent. Make use of the subspace fitting interpretation in Exercise 6.12 to show that MUSIC cannot be expected to yield meaningful results in such a case. Follow the line of your argument, explaining why MUSIC fails in the case of coherent signals, to suggest a subspace fitting criterion that works in such a case. Discuss the computational complexity of the method based on the latter criterion. Exercise 6.14: Modified MUSIC for Coherent Signals Consider an m-element ULA. Assume that n signals impinge on the array at angles {8k}k=1 and, also, that some signals are coherent (so that the signal covariance matrix P is singular). Derive a modified MUSIC DOA estimator for this case, analogous to the modified MUSIC frequency estimator in Section 4.5, and show that this method is capable of estimating the n DOAs even in the coherent-signal case.
COMPUTER EXERCISES Tools for Array Signal Processing: The text website www.prenhall.com/stoica contains the following MATLAB functions for use in DOA estimation: • Y=uladata(theta,P,N,sig2,m,d) Generates an m x N data matrix Y = [y(1), ... , y(N)l for a ULA with n sources arriving at angles (in degrees from -900 to 900 ) given by the elements of the n x 1 vector theta. The source signals are zero-mean Gaussian with covariance matrix P = E {s(t)s*(t)}. The noise component is spatially white Gaussian with covariance a 2 I, where a 2 =sig2. The element spacing is equal to d in wavelengths. • phi=beamform(Y,L,d) Implements the beamforming spatial spectral estimate in equation (6.3.18) for an m-element ULA with sensor spacing d in wavelengths. The m x N matrix Y is as defined above. The parameter L controls the DOA sampling, and phi is the spatial spectral estimate phi= Jr Jrk [¢(O\) •... , ¢(OL)l, where Ok = -'2 + T' • phi=capon_sp'(y, L, d) Implements the Capon spatial spectral estimator in equation (6.3.26); the input and output parameters are defined as those in beamform. • theta=root_music_doa(Y,n,d) Implements the Root MUSIC method in Section 4.5, adapted for spatial spectral estimation using a ULA. The parameters Y and d are as in beamform, and theta is the vector containing the n DOA estimates [e1, ...• en f . • theta=esprit_doa(Y,n,d) Implements the ESPRIT method for a ULA. The parameters Y and d are as in beamform, and theta is the vector containing the n DOA estimates [l}1, ... , onf. The two subarrays for ESPRIT are made from the first m - 1 and last m - I elements of the array. A
A
Com pute r Exercises
341
Exercise C6.15: Comparison of Spa tial Spectral Estimators
Simulate the following scenario: Tw o signals with wavelength ).. imping e on an array of sensors from DO As 61 1 = 0° and a (h that will be varied. The signals are mu tual ly uncorrelated complex Gaussian wit h unit power, so that P = E {s( t)s* (t)} = I. The array is a 10-element ULA wit h element spacing d = ),,/2. The mea surements are corrupted by additiv e complex Gaussian white noise with unit power. N = 100 sna pshots are collected. ."
".'
.~.
., >.
,. (a) Let fh = 15°. Compare the resu lts of the beamforming, Capon, Roo t MUSIC, and ESP RIT methods for this example. The resu lts can be shown by plotting the spatial spectrum estimates from beamfonning and Cap on for 50 Monte Carlo experiments ; for Roo t MU SIC and ESPRIT, plo t vertical lines of equal height located at the DO A esti mates from the 50 Monte Carlo experiments. How do the methods compare? Are the pro perties of the various estimators analogous to the time-se ries case for two sinusoids in noise? (b) Rep eat for e.z = 7.5°.
Exercise C6.16: Perfonnance of Spa tial Spectral Estimators for Coherent Source Signals In this exercise, we wil
l see what happens when the sou rce signals are fully correlated (Qr coherent). Use the same parameters and estimation methods as in Exercis e C6 .l5 with 612 = 15°, but with
P=(~ ~) Note that the sources are coherent as ran k(P ) = 1. Compare the results of the four met hods for this case, again by plottin g the spatial spe ctru m and "DO A line spectrum" estimates (as in Exercise C6 .l5) for 50 Monte Car lo experiments from each estimator. Which method app ears to be the bes t in this case?
Exercise C6.17: Spatial Spectral Est imators Applied to Measured Da ta Apply the fou
r DO A estimators from Exercise C6 .l5 to the real data in the file sub ma which can be found at the text web rin e. ma t, site ww w. pre nh all .co m/ sto ica . These data are underwater measurements collected by the Swedish Defense Agency in the Baltic Sea. The 6-element array of hydrophones used in the experiment can be assumed to be a ULA with inter-element spacing equal to 0.9 m. The wavelen gth of the signal is approximately 5.32 m. Can you find the "submarine(s)?"
,-.
·AppendixA
Linear Algebra and Matrix Analysis Tools
A.1 INTRODUCTION
In this appendix, we provide a review of the linear algebra terms and matrix properties used in the text. For the sake of brevity, we do not present proofs for all results stated in this appendix, nor do we discuss related results not needed in the chapters. For most of the results included, however, we do provide proofs and motivation. The reader interested in finding out more about the topic of this appendix can consult the books [STEWART 1973; HORN AND JOHNSON 1985; STRANG 1988; HORN AND JOHNSON 1989; GOLUB AND VAN LOAN 1989], to which we also refer for the proofs omitted here.
A.2 RANGE SPACE, NULL SPACE, AND MATRIX RANK
em
xn, and let Let A be an m x n matrix whose elements are complex valued in general, A E and (.)* denote the transpose and the conjugate transpose operator, respectively.
c-l
Definition Dl: The range space of A, also called the column space, is the subspace spanned by (all linear combinations of) the columns of A: R(A)
= {a
E
cm x1 la =AfJ
for
fJ E
enX1 }
The range space of AT is usually called the row space of A, for obvious reasons.
342
(A.2.1)
Section A.2
Range Space, Null Space, and Matrix Rank
343
Definition D2: The null space of A, also called the kemel, is the following subspace: N(A)
= {,8 E Cnx1 1A,8 = O}
(A.2.2)
The previous definitions are all that we need to introduce the matrix rank and its basic properties. We return to the range and null subspaces in Section A.4, where we discuss the singular-value decomposition. In particular, we derive some convenient bases and useful projectors associated with the previous matrix subspaces. \-. Definition D3: The following are equivalent definitions of the rank of A, denoted by r
A
rank(A)
(i) r is equal to the maximum number of linearly independent columns of A. The latter number is by definition the dimension of the R(A); hence r = dimR(A)
(A.2.3)
(ii) r is equal to the maximum number of linearly independent rows of A, r = dimR(A T ) = dimR(A*)
(A.2A)
(iii) r is the dimension of the nonzero detenninant of maximum size that can be built from the elements of A. The equivalence between the preceding Definitions (i) and (ii) is an important and pleasing result (without which one should have had to consider the row rank and column rank of a matrix separately!). Definition D4: A is said to be • • • •
Rank deficient whenever r < min(m, n). Full column rank if r = n ~ m. Full row rank if r = m ~ n. Nonsingular whenever r = m = n.
Result Rl: Premultiplication or postmultiplication of A by a nonsingular matrix does not change the rank of A.
Proof: This fact directly follows from the definition of rank(A), because the aforementioned • multiplications do not change the number of linearly independent columns (or rows) of A. Result R2: Let A tively. Then
E
cmxn and B
E
C nxp be two conformable matrices of rank rA and rB, respec-
(A.2.5)
Appen dix A
344
Linear Algebra and Matrix Analysis Toots
again. Indeed, independent columns of B, premultiplication of B by A cannot increase the number of linearly t increase the number of hence rank(AB) :5 rB. Similarly, postmultiplication of A by B canno • linearly independent columns of AT, which means that rank(AB) < rA.
rank once . Proof: We can prove the previous assertion by using the definition of the
Result R3: Let A
E em xm
be given by N
A=
L
Xk
Yk
k=l
where
Xk.
Yk E
em xl.
Then, rank(A) < min(m, N)
Proof: A can be rewritten as
•
so the result follows from R2.
Result R4: Let A
E em xn with
n :5 m, let BEe n xP, and let rank(A)
=n
(A.2.6)
Then rank(AB) = rank(B)
(A.2.7)
submatrix, the postmulProof: Assumption (A.2.6) implies that A contains a nonsingular n x n RI). Hence, tiplication of which by B gives a block of rank equal to rank(B) (ef rank(A B) 2: rank(B )
However, by R2, rank(AB) :5 rank(B); hence, (A.2.7) follows.
•
A.3 EIGENVALUE DECOMPOSITION
Definition D5: We say that the matrix A E such an A is said to be symmetric.
emxm is Hermitian if A* = A. In the real-valued case,
Section A.3
345
Eigenvalue Decomposition
Definition D6: A matrix U ever
E
cmxm is said to be unitary (orthogonal if U
is real valued) when-
U*U = UU* =1
If U E
cmxn , with m
> n, is such that U*U = I, then we say that U is semiunitary.
Next, we present a number of definitions and results pertaining to the matrix eigenvalu.e decomposition (EVD), first for general matrices and then for Hermitian ones.
A.3.1 General Matrices Definition D7: A scalar A E C and a (nonzero) vector x ciated eigenvector of a matrix A E em xm if
E
emx1
are an eigenvalue and its asso-
(A.3.1)
Ax=Ax
In particular, an eigenvalue A is a solution of the so-called characteristic equation of A, namely, (A.3.2)
lA-AIl =0
(where 1·1 denotes determinant) and x is a vector in N(A - AI). The pair (A,x) is called an eigenpair.
Observe that, if {(A;, x; Hf=1 are p eigenpairs of A (with p defining equations Ax; = Ax; (i = 1, ... ,p) in the compact form
:s m),
then we can write the
(A.3.3)
AX=XA
where
and
_[AI .. 0]
A-
o Result R5: Let (A, x) be an eigenpair of A is an eigenpair of B .
E
Ap
emxm . If B =
A + exI, with ex
E
C, then (A + ex, x)
Proof: The result follows from the fact that Ax = Ax
==}
(A
+ exl)x =
(A + ex)x.
•
346
Appendix A
Linear Algebra and Matrix Analysis Tools
Result R6: The matrices A and B
6 Q-IAQ, where Q is any nonsingular matrix, share the same eigenvalues. (B is said to be related to A by a similarity transformation.)
Proof: Indeed, the equation
•
is equivalent to IA - AI I = O.
In general, there is no simple relationship between the elements {Aij} of A and its eigenvalues {A'k}' However, the trace of A, which is the sum of the diagonal elements of A, is related in a
simple way to the eigenvalues, as described next.
Definition D8: The trace of a square matrix A
E em Xm
is defined as
m
tr(A)
=L
Au
(A.3.4)
i=1
Result R7: If {Adr=1 are the eigenvalues of A
E
emxm , then m
L
Ai
(A.3.5)
D(A - Ai)
(A.3.6)
tr(A) =
i=1 Proof: We can write n
IH -AI =
i=1
The right-hand side of (A.3.6) is a polynomial in A whose An - 1 coefficient is E7=1 Ai. From the definition of the determinant (see, e.g., [STRANG 1988]), we find that the left-hand side of (A.3.6) is a polynomial whose An - 1 coefficient is l:7=1 Au = tr(A). This proves the result. • Interestingly, although the matrix product is not commutative, the trace is invariant to commuting the factors in a matrix product, as shown next.
Result R8: Let A
E
em xn and BEen xm. Then tr(AB) = tr(BA)
(A.3.7)
section A.3
Eigenvalue Decomposition
347
Proof: A straightforward calculation , based on the definition of tr(·) in (A.3.4), shows that n
m
tr(A B)
=L
L Aij Bj;
i=l j=l
=
n
m
j=l
i=1
n
L L: Bj;A;j = L
[BA)D
l:o:
•
tr(B A)
j=1
We can also prove (A,3.7) by usin g Result R7. Along the way, we wil l obtain some other useful results. First, we note the foll owing: Res ult R9 : Let A, B E
emxm and let a
E
C. Then
IABI = IAIIBI laAI = amlAl Proof: The identities follow directly from the definition of the detennina nt; see, for example, [ST RAN G 1988). • Next we prove the following results:
Result RIO: Let A
E
em xn
and BE en xm. Then
II
-A BI =
II
-BA j.
(A,3.8)
Proof: It is straightforward to verify that
(A,3.9) and
[ BI I0] [- BI -AI ] [I0 A] I
-
[I0 I - 0BA ]
(A.3.1O)
Because the matrices in the left-ha nd sides of (A,3.9) and (A.3.l0) hav e the same detenninant, equal to l-~ it follows that the right-hand sides must also have the same determinan t, which concludes the proof. • Result R11: Let A E em xn and BE en xm. The nonzero eigenvalues of AB and of BA are identical.
-11.
348
Appendix A
Linear Algebra and Matrix Analysis Tools
.Proof: Let.A =1= 0 be an eigenvalue of AB. Then,
•
where the third equality follows from RiO. Hence, A. is also an eigenvalue of BA.
We can now obtain R8 as a simple corollary of Rll, by using the property (A.3.5) of the trace operator.
A.3.2 Hermitian Matrices An important property of the class of Hermitian matrices, which does not necessarily hold for general matrices, is the following: Result R12:
em
(i) All eigenvalues of A = A * E xm are real valued. (ii) The m eigenvectors of A = A* E emxm form an orthonormal set. In other words, the matrix U, whose columns are the eigenvectors of A, is unitary. It follows from (i) and (ii) and from (A.3.3) that, for a Hermitian matrix, we can write
AU =UA where U*U = UU* = I and the diagonal elements of A are real numbers. Equivalently,
A= UAU*
(A.3.11)
which is the so-called eigenvalue decomposition (EVD) of A = A *. The EVD of a Hermitian matrix is a special case of the singular value decomposition of a general matrix, discussed in the next section. The following is a :'lseful result associated with Hermitian matrices:
e
Result R13: Let A = A* E mxm and let v E C mx1 (v =1= 0). Also, let the eigenvalues of A be arranged in a nonincreasing order:
Then (A.3.l2)
Section A.3
Eigenvalue Decomposition
349
The ratio in (A.3.l2) is called the Rayleigh quotient. Because this ratio is invariant under the multiplication of v by any complex number, we can rewrite (A.3.l2) in the fonn: Am :5 v*Av ~ Al
,:....
for any v
E
emx1 with v*v =
1
(A,3.13)
The equalities in (A,3.13) are evidently achieved when v is equal to the eigenvector of A associated with Am and AI, respectively. \',
Proof: Let the EVD of A be given by (A,3.l l), and let
w =U'v =
[J
We need to prove that m
Am
~ w*Aw
=L
2
Ak !Wk 1 :5 Al
k=l
for any
W E
em x 1 satisfying m
w*w
=L
\wkl 2
= 1.
k=l
However, this is readily verified, as m
Al -
L
m
2
Akl wkl =
k=l
L
0·1 -
2
Ak)I W k1 2: 0
k=1
and m
L
m
Akl wkl
2
k=1
-
Am =
L
(Ak - Am)\wkI 2 2: 0
k=1
•
and the proof is concluded. The following result is an extension of R13.
Result R14: Let V E e mxn , with m > n, be a semiunitary matrix (i.e., V*V = 1), and let A = A* E cmxm have its eigenvalues ordered as in R13. Then m
L k=m-n +l
n
Ak:5 tr(V*A V) :5
L k=l
Ak
(A,3.l 4)
Linear Algebra and Matrix Analysis Tools
Appendix A
350
V are the eigenvectors of A .where the equalities are achieved, for instance, when the columns of The ratio corresponding to (Am-n+l, ... , Am) and, respectively, to (AI, ... , An). tr(V*A V) tr(V*V )
-
tr(V*A V) n
is sometimes called the extend ed Rayleigh quotient.
Proof: Let A = UAU * (cf (A.3.1 1», and let
.lE........ (m x n)
(hence,
s; is the kth row of S). By making use of the preceding notation, we can write m
tr(V*A V)
= tr(V*U AU*V ) =
tr(S* AS)
= tr(ASS*) = L
AkCk
(A.3.15)
k=l
where
k = l, ... m
(A.3.I 6)
k = 1, ... ,m
(A.3.17)
Clearly,
and m
LCk = tr(SS*) = tr(S*S ) = tr(V*U U*V) = tr(V*V ) = tr(l) = n
(A.3.I 8)
k=l
Furthermore, (A.3.I 9)
k = 1, ... ,m. ; and let To see this, let G E cmx{m -n) be such that the matrix (S G] is unitary row of G. Then, by construction,
(s: g;l which is (A.3.19).
[~: ] =
Ck
+ g; gk = 1 ~
Ck = 1 - g; gk
~1
g; denote the kth
Section A.4
351
Singular Value Decomposition and Projection Operators
Finally, by combining (A.3.15) with (A.3.17)-(A.3.19), we can readily verify that tr(V*AV) satisfies (A.3 .14), where the equalities are achieved for Cl
= ... = Cm - n = 0;
Cm - n +l
= ... = Cm = 1
and, respectively, Cl
= ... = Cn = 1;
Cn +l
= .,. = t m = 0
These conditions on {q} are satisfied if, for example, S is equal to [0 If and [I With this observation, the proof is concluded.
of, respectively. •
Result R13 is clearly a special case of Result R14. The only reason for considering R13 separately is that the simpler result R13 is used more often in the text than R14.
A.4 SINGULAR VALUE DECOMPOSITION AND PROJECTION OPERATORS For any matrix A E em xn , there exist unitary matrices U E em xm and V E matrix ~ E Rm xn with nonnegative diagonal elements, such that
en xn
A = U~V*
and a diagonal
(A.4.1)
By appropriate permutation, the diagonal elements of E can be arranged in a nonincreasing order: al ::: a2 ::: •.. ::: amin(m,n)
The factorization (AA.l) is called the singular value decomposition (SVD) of A, and its existence is a significant result both from a theoretical and from a practical standpoint. We reiterate that the matrices U, ~, and V in (A.4.1) satisfy the equations
U*U = UU* = I V*V " .. _ .t..J1) -
= W* =1 {
aj :::
0
0
(m x m)
(n x n)
for i =J for i ::j:. j
The following terminology is most commonly associated with the SVD: • The left singular vectors of A are the columns of U. These singular vectors are also the eigenvectors of the matrix AA *. • The right singular vedors of A are the columns of V. These vectors are also the eigenvectors of the matrix A *A. • The singular values of A are the diagonal elements {ad of E. Note that {a;} are the square roots of the largest min(m, n) eigenvalues of AA* or A*A.
Tools Linear Algebra and Ma trix Analysis
App end ix A
352
right gular value, left singular vector, and (sin le: trip ng owi foll the is A of • The singular triple of V (V) . Uk (Vk ) is the kth col um n singular vector; Uk. Uk. Vk) , where
If rank(A) = r
~
min (m, n)
then one can sho w that
0, = 0,
Uk > { Uk
k = 1, ... , r k = r + 1, ... , min (m, n)
SVD can be written as Hen ce, for a matrix of rank r, the (A.4.2)
where LI E R consequences.
rxr
important of A in (A.4.2) has a num ber of is nonsingular. The factorization
A E Result R15: Consider the SV D of
(i) (ii) (iii) (iv)
crnxn
Then in (A.4.2), where r ~ min (m, n).
VI is an orthonormal basis of'R (A) ;
*); U2 is an orthonormal basis of N(A ); (A* VI is an orthonormal basis of'R ). V2 is an orthonormal basis of N(A
prove ties (i) and (ii) as applied to A*. To per pro the from ow foll (iv) and Proof: We see that (iii) (i) and (ii), we need to show that (A.4.3) 'R(A) = 'R( UI) and, respectively, N(A *) = 'R(Uz)
To show (A.4.3), note that a E 'R(A)
=> there exists fJ such that a = AfJ => => a = UI (LI vt fJ) = UI Y :::> a E 'R( UI)
so 'R(A) C 'R( U}) . Also, a E 'R( UI) :::> there exists
fJ
such that a = UIiJ
(A.4.4)
Section A.4
353
Singular Value Decomposition and Projection Operators
From (A.4.2), UI = AVI ~lI; it follows that
which shows R(UI) Similarly,
c
R(A). Combining R(UI> C R(A) with R(A)
c R(U I )
gives (AA.3).
\',
Now, any vector a can be written as
because [U I Uz] is nonsingular. However, 0 = uta thus a = Uz{3. Thus, N(A*) c R(Uz). Finally, a E R(Uz)
=* there exists f3
= UtUIY + utuz{3 =
such that a
y, so y
= 0,
and
= Uz{3
Then
•
which leads to (A.4.4).
This result, readily derived by using the SVD, has a number of interesting corollaries that complement the discussion on range and null subspaces in Section A.2. Result R16: For any A E cmxn , the subspaces R(A) and N(A*) are orthogonal to each other, and they together span em. Consequently, we say that N(A*) is the orthogonal complement of R(A) in em, and vice versa. In particular, we have dimN(A*)
=m -
r
dimN(A) = n - r
(A.4.5) (A.4.6)
(Recall that dim R(A) = dim R(A*) = r.)
Proof: This result is a direct corollary of R15.
•
The SVD of a matrix also provides a convenient representation for the projectors onto the range and null spaces of A and A *.
354
Appendix A
Linear Algebra and Matrix Analysis Tools
Definition D9: Let y E em x 1 be an arbitrary vector. By definition, the orthogonal projector onto R(A) is the matrix n, which is such that (i) R(n) = R(A) and (ii) the Euclidean distance between y and ny E R(A) is minimum:
lIY - ny1l 2
= min
over R(A)
Hereafter, Ilx II = Jx*x denotes the Euclidean vector norm. Result R17: Let A E
cmxn • The orthogonal projector onto R(A) is given by n=
(AA.7)
UIUt
whereas the orthogonal projector onto N (A *) is
nl. = 1-
UIUt
=
U2U2*
(AA.8)
Proof: Let Y E Cmx1 be an arbitrary vector. As R(A) = R(UI), according to R15, we can find the vector in R(A) that is of minimal distance from y by solving the problem min IIY f3
UI/311 2
(AA.9)
Because
lIy - U1,811 2 = (,8* - y*UIH,8 - uty) +y*(l- u1ut)Y = 11,8 - utyl1
2
+ IIUiyll2
it readily follows that the solution to the minimization problem (A.4.9) is given by ,8 = uty. Hence, the vector Ul uty is the orthogonal projection of y onto R(A), and the minimum distance from y to R(A) is 1\ Uiy II. This proves (AA.7). Then (AA.8) follows immediately from (A.4.7) • and the fact that N(A*) = R(U2). Note, for instance, that, for the projection of y onto R(A), the error vector is y - Ul uty = U2 Uiy , which is in R(U2 ) and therefore is orthogonal to R(A) by R15. For this reason, n is given the name "orthogonal projector" in D9 and R17. As an aside, we r~mark that the orthogonal projectors in (AA.7) and (A.4.8) are idempotent matrices; see the next definition.
Definition DIO: The matrix A
E em xm
is idempotent if A 2 =A
(AA.IO)
Furthermore, observe, by making use of Rll, that the idempotent matrix in (AA.7), for example, has r eigenvalues equal to 1 and (m - r) eigenvalues equal to O. This is a general property of idempotent matrices: their eigenvalues are either 0 or 1. Finally, we present a result that, even alone, would be enough to make the SVD an essential matrix-analysis tool.
Section A.4
Singular Value Decomposition and Projection Operators
35 5
Res ult R1 8: Let A E emxn, with elements Aij. Let the SVD of A (with the singular values arranged in a nonincreasing order) be given by
(A. 4.l1 ) where p :5 min(m, n) is an integer . Let
IIAII
2
m
= tr(A*A) = L
n
L
i=l j=l
min( m,n )
lAy
2 1
=
L
(A.4.12)
k=l
denote the square of the so-called Frobenius norm. Then the best rank-p app rox ima nt of A in the Frobenius-norm metric, that is, the solution to min IIA - B 11 2 B
subject to rank(B) = p ,
(A.4.13)
is given by
(A.4.14) Furthermore, Bo is the unique solu tion to the approximation problem (A.4.13) if and only if (Jp > (Jp+ l. Pro of: It follows from R4 and (A. 4.2) that we can parameterize B in (A. 4.l3 ) as B =C D*
(A.4.15)
where C E em xp and D E en xp are full-column-rank matrices. The previous parameterization of B is of course nonunique, but, as we will see, this fact does not introduce any problem. By making use of (A. 4.l5 ), we can rew rite the problem (A.4.13) in the foll owing form: ;
min IIA - CD*1I 2 C,D
ran k(C ) = rankeD) = p
(A.4.16)
"~-
The reparameterized problem is ess entially constraint free. Indeed, the full-column-rank condition that must be satisfied by C and D can be easily handled. First, we minimize (A.4.16) with res pect to D, for a given C. To that end, observe that ~',!'
c~
7~;
ii·". f"'
f
~'.
i IIXoll2 for X
=1= Xo
•
Defin ition D1S: The matrix ,.
':.
(A.8.8) ,
1
',' ".;
in (A.8.S) is the so-called
Moo~Penrose pseudoinverse
(or generalized inverse) of A.
366
Appendix A
Linear Algebra and Matrix Analysis Tools
It can be shown that At is the unique solution to the following set of equations:
Evidently, whenever A is square and nonsingular, we have At = A-I; this observation partly motivates the name "generalized inverse" (or "pseudoinverse") given to At in the general case. The computation of a solution to (A.8.1), whenever one exists, is an important issue, which we address briefly in what follows. We begin by noting that, in the general case, there is no computer algorithm that can compute a solution to (A.8.1) exactly (i.e., without any numerical errors). In effect, the best we can hope for is to compute the exact solution to a slightly perturbed (fictitious) system of linear equations, given by (A
+ LlA)(X + !:i.x) =
B
+ LlB
(A.8.9)
where LlA and LlB are small perturbation terms, the magnitude of which depends on the algorithm and the length of the computer word, and where Llx is the solution perturbation induced. An algorithm which, when applied to (A.8.1), provides a solution to (A.8.9) corresponding to perturbation terms (LlA, LlB) whose magnitude is of the order afforded by the "machine epsilon" is said to be numerically stable. Now, assuming that (A.8.l) has a unique solution (and, hence, that A satisfies (A.8.4)), one can show that the perturbations in A and B in (A.8.9) are retrieved in Llx multiplied by a proportionality factor given by cond(A)
= ai/an
(A.8.lO) ...--./)
where al and an are the largest and smallest singular values of A, respectively, and where "cond" is short for "condition." The system (A.8.1) is said to be well conditioned if the corresponding ratio (A.8.10) is "small" (that is, not much larger than 1). The ratio in (A.8.10) is called the condition number of the matrix A and is an important parameter of a given system of linear equations. Note, from the previous discussion, that even a numerically stable algorithm (Le., one that induces quite small LlA and !:i.e) could yield an inaccurate solution X when applied to an ill-conditioned system of linear equations (Le., a system with a very large cond(A)). For more details on the topic ofthis paragraph, including specific algorithms for solving linear systems, we refer the reader to [STEWART 1973; GOLUB AND VAN LOAN 1989].
A.S.2 Inconsistent Systems The systems of linear equations that appear in applications (such as those in this book) are quite often perturbed versions of a "nominal system," and usually they do not admit any exact solution. Such systems are said to be inconsistent, and frequently they are overdetermined and have a matrix A that has full column rank: rank(A) = n
~
m
(A.8.lI)
Section A.S
Systems of Linear Equations
367
In what follows, we present two approaches to obtaining an approximate solution to an inconsistent system of linear equations
AX :::. B
(A.8.12)
under the condition (A.8.l I). ,-
Defin ition D16: The least squares (LS) approximate solution to (A.8.1 2) is given by the minimizer Xu of the following criterion:
Equivalently, XIS can be defined as follows: Obtain the minimal pertur bation system (A8.1 2) consi stent- that is,
/::;.B
that makes the
(A.8.13) Then derive XIS by solving the system in (A.8.l 3) corresponding to theopt imal perturbation
/::;.B.
The LS solution introduced above can be obtained in several ways. A simple way is as follows: Resul t R32: The LS solution to (A.8.12) is given by XIS = (A*A) -IA*B
(A.8.14)
The inverse matrix in this equation exists, in view of (A8.1 1). Proof : The matrix Bo that makes the system consistent and is of minimal distance (in the Frobenius-norm metric) from B is given by the orthogonal projec tion of (the columns of) B onto R(A): (A.S.15) To motivate (A.8.15) by using only the results proven so far in this append ix, we digress from the main proof and let VI denote an orthogonal basis of R(A). Then R17 implies that Bo = VI V B . However, VI and A span the same subspace; hence, they must be related to one another by a nonsingular linear transformation: VI = AQ (IQ I i= 0). It follow s from this observation that VI vt = AQQ *A * and also that Q *A *AQ = I, which lead to the follow ing projector formula: VI vt = A(A*A )-IA* (as used in (A.8.15)). Next, we return to the proof of (A.8.14). The unique solution to
t
AX - Bo
= A[X -
(A*A) -lA*B ]
is obviously (A8.1 4), because dimN (A) = 0 by assumption.
•
368
Appendix A linear Algebra and Matrix Analysis Tools
The IS solution XLS can be computed by means of the SVD of the m x n matrix A. The XLS can, however, be obtained in a computationally more efficient way, as is briefly described below. Note that XLS should not be computed by directly evaluating the formula in (A.8.14) as it stands. Briefly stated, the reason is as follows: Recall, from (A.8.lO), that the condition number of A is given by cond(A) = allan
(A.8.l6)
(Note that an #- 0 under (A.8.11).) When working directly on A, we find that the numerical errors made in the computation of XLS can be shown to be proportional to (A.8.l6). However, in (A.8.l4), one would need to invert the matrix A*A, whose condition number is cond(A*A) =
at/a; =
[cond(A)}2
(A.8.l7)
Working with (A *A) would therefore lead to much larger numerical errors during the computation of XLS and is thus not advisable. The algorithm sketched in what follows derives XLS by operating on A directly. For any matrix A satisfying (A.8.11), there exist a unitary matrix Q E cmxm and nonsingular upper triangular matrix R E x n such that
en
(A.8.l8) n
m-n
The previous factorization of A is called the QR decomposition (QRD). Inserting (A.8.l8) into (A.8.l4), we obtain
Hence, once the QRD of A has been performed, XLS can be obtained conveniently, as the solution of a triangular system of linear equations: (A.8.19) We note that the computation of the QRD is faster than that of the SVD (see, for example, [STEWART 1973; GoLUB AND VAN LOAN 1989]). The previous definition and derivation of XLS make it clear that the LS approach derives an approximate solution to (A.8.12) by implicitly assuming that only the right-hand-side matrix, B, is perturbed. In applications, quite frequently both A and B are perturbed versions of some nominal (and unknown) matrices. In such cases, we may think of determining an approximate solution to (A.8.12) by explicitly recognizing the fact that neither A nor B is perturbation free. An approach based on this idea is described next (see, for example, [VAN HUFFEL AND VANDEWALLE 1991]).
Section A.S
Systems of Linear Equations
369
Definition D17: The total least squares (TLS) approximate solutio n to (A.8.12) is defined as follows: First, derive the minimal perturbations ~A and ~B that make the system consi stentthat is, nun II[~A ~B]1I2
subject to
(A
+ ~A)X =
B
+ ~B
(A.8.20)
Then, obtain Xn.s by solving the system in (A.8.20) corresponding to the optimal perturbations (~A' ~B)'
A simple way to derive a more explicit formula for calculating the XTLS is as follows: Resul t R33: Let [A B] = [[It
[h]
(A.8.2 I)
'-...-' '-...-' n m-n
denote the SVD of the matrix [A B]. Furthermore, partition
v2* = [ Vi 2
V2~
n
p
V2* as
]
~~
(A.8.22)
Then -
- I Xn.s = - V21 Vi;
(A.8.23)
Proof: The optimization problem with constraints in (A.8.20) can be restate d in the following way: Find the minimal perturbation [~A ~B] and the corresponding matrix X such that (A.8.24) Because rank [ -IX] = p,
[~.4.
.6B] should be such that dimN ( [A B]
equivalently, rank( [A B]
+ [~A
According to RI8, the minimal-perturbation matrix
~B]
) :::: n
[~A ~B]
+ [.6A ~B]) ~ P
or,
(A.8.25)
that achieves (A.8.25) is given by (A.8.26)
Appen dix A
370
Linear Algebra and Matrix Analysis Tools
following matrix equation Inserting (A.8.26) along with (A.8.21) into (A.8.24), we obtain the in X:
Equivalently we have (A.8.27)
Equation (A.8.27) implies that X must satisfy (A.8.28)
for some nonsingular normalizing matrix Q. The expression (A.8.23) for from (A.8.28).
XTLS
is readily obtained •
{ak} of the matrix The TLS solution in (A.8.23) is unique if and only if the singular values is singular, the TLS solution [A B] are such that an > an+I (this follows from RI8). When V22 does not exist; see [VAN HUFF a AND VANDE WALLE 1991]. [A B]. The solution The computation of the XTLS requires the SVD of the m x (n + p) matrix d via the following XTJ..s can be rewritten in a slightly different fonn. Let VII, V12 be define partition of Vt:
Vll vt = ['-v-' n
The orthogonality condition
V12 ] '-v-" p
vt V2 = 0 can be rewritten as
which yields (A.8.29) XTLS can often be more Because p is usually (much) smaller than n, the formula (A.8.23) for on case of p = 1, (A.8.23) efficient computationally than is (A.8.29). (For example, in the comm tion of an n x n matrix does not require a matrix inversion, whereas (A.8.29) requires the calcula inverse.)
Section A.9
371
Quadratic Minimization
A.9 QUADRATJC MINIMIZATION
Several problems in this text require the solution to quadratic minimization problems. In this section, we make use of matrix-analysis techniques to derive two results: one on unconstrained minimization, the other on constrained minimization. Result R34: Let A be an (n x n) Hennitian positive defipite matrix, let X and B be (n x m) matrices, and let C be an m x m Hermitian matrix. Then the unique solution to the minimization problem minF(X),
x
F(X)
= X*AX +X*B +B*X + C
(A.9.1)
is given by (A.9.2) Here, the matrix minimization means F(Xo) ::: F(X) for every X ::f:. Xo; that is, F(X) - F(Xo) is a positive semidefinite matrix.
Proof: Let X = Xo
+ d, where d
F(X)
is an arbitrary (n x m) complex matrix. Then
= (-A-IB + d)*A(-A-IB +.1.) + (-A-IB + +B*(-A-1B
d)*B
+ d) + C
== .1. *Ad + F(Xo) Now, A is positive definite, so d *Ad is F(Xo), and the result is proven.
~
(A.9.3)
0 for all nonzero d; thus, the minimum value of F(X) •
We next present a result on linearly constrained quadratic minimization.
c
Result R35: Let A be an (n x n) Hermitian positive definite matrix, and let X E nxm , B E xk, and C E em xk. Assume that B has full column rank equal to k (hence n ~ k). Then the unique solution to the minimization problem
en
min X* AX
x
subject to
X* B
== C
(A.9.4)
is given by (A.9.S)
= C. Let X == Xo + d, where I:i. E en xm 0 (so that X also satisfies the constraint X* B = C). Then
Proof: First, note that (B *A -I B) -1 exists and that X; B satisfies d *B =
X*AX =X;AXo +X;Ad
+ d*AXo + d*A~
(A. 9.6)
Appen dix A Linear Algebra and Matrix Analysis Tools
372
.,
where the two middle terms are equal to zero:
Hence,
X*AX - XQ'AXo = d *Ad :::: 0
(A.9.7)
izing X matrix is given because A is positive definite. It follows from (A.9.7) that the minim
•
byXo. A common special case of Result R35 is m C = 1. Then
= k = 1 (so X
and B are both vectors) and
Appendix B Cramer-Rao Bound Tools
B.1 INTRODUCTION
In the text, we have kept the discussion of statistical aspects at a minimum for conciseness reasons. However, we have presented certain statistical tools and analyses that we have found useful to the understanding of the spectral analysis material discussed. In this appendix, we introduce some basic facts on an important statistical tool: the Cramer-Rao bound (abbreviated as CRB). We begin our discussion by explaining the importance of the CRB for parametric spectral analysis. Let ¢(w, (j) denote a parametric spectral model, depending on a real-valued vector (), and let ¢(w, 9) denote the spectral density estimated from N data samples. Assume that the estimate 9 of () is consistent, so that the estimation error is small for large values of N. Then, by making use of a Taylor series expansion technique, we can write the estimation error [¢(w, 8) - ¢J(w, (j)J approximately as a linear function of 8 - (), namely, [¢(w,
8) -
0
(C.3.1 8)
if and only if ).. = 1
(C.3.1 9)
and - In)..
=1-
A
408
Appendix C Model Order Selection Tools
Because en is the maximizer of lnpn (Y, en), the second term in (CA.6) is equal to zero. Hence, we can write (see also (C.2.2l)
Inpn(y):::: Inpn(Y, en) -
~(en 2
- enlJ(e n - en)
(C.4.7)
According to (C.2.11),
Eo{(On -enlJ(en _en)} =tr[JEo{(e n _en)(On -enl}] =tr[In] =n
(CA.8)
which means that, for the choice of Pn (y) in (CA.7), we have (CA.9)
An unbiased estimate of the above relative KL information is given by (C.4.1O) The corresponding order-selection rule maximizes (CA.IO), or, equivalently, minimizes (C.4.11) with respect to model order n. This no-name (NN) rule can be shown to perform better than that based on (CA.5), but worse than the rules presented in the next sections. Essentially, the problem with (CA.ll) is that it tends to overfit (i.e., to select model orders larger than the "true" order). To understand intuitively how this happens, note that the first term in (C.4.11) decreases with increasing n (for nested models), whereas the second term increases. Hence, the second term in (C.4.11) penalizes overfitting; however, it turns out that it does not penalize quite enough. The rules presented in the following sections have a form similar to (CA.!!), but with a larger penalty term, and they do have better properties than (CA.ll). Despite this fact, we have chosen to present (CA.!l) briefly in this section for two reasons: (i) the discussion here has revealed the failure of using maxn lnpn (Y, en) as an order-selection rule and. has shown that it is in effect quite easy to obtain rules with better properties; and (ii) this section has laid groundwork for the derivation of better order-selection rules based on the KL approach in the next two sections. To close this section, we motivate the multiplication by -2 in going from (C.4.1O) to (C.4.11). The reason for preferring (C.4.11) to (C.4.1O) is that for the fairly common NLS model in (C.2.2) and the associated Gaussian likelihood in (C.2.3), -2 lnpn (Y, en) takes on the following convenient form:
-2Inpn(y, en) = NIna-,;
+ constant
(CA.12)
(See (C.2.5)-(C.2.7).) Hence, in such a case, we can replace -2 Inpn(y, en) in (C.4.ll) by the scaled logarithm of the residual variance, This remark also applies to the order-selection rules presented in the following sections, which are written in a form similar to (C.4.ll).
NIno-;.
1
Section C.S
409
Cross-Validatory Kl Approach: The AIC Rule
C.S CROSS-VALIDATORY KL APPROACH: THE AIC RULE As explained in the previous section, a possible approach to model order selection consists of minimizing the KL discrepancy between the "true" pdf of the data and the pdf (or likelihood) of the model, or, equivalently, of maximizing the relative KL infonnation (see (C.4.2)): (C.5.!)
I(po,p) = Eo{lnp(y)} ~.
When using this approach, the first (and, likely the main) hurdle that we have to overcome is the choice of the model likelihood, p (y). As discussed in the previous section, we would ideally like to use the true pdf of the model as p(y) in (C.5.l), i.e.p(y) =Pn(y,8 n ), but this is not possible; Pn (Y, 8 n) is unknown. Hence, we have to choose p(y) in a different way. This choice is important; it eventually detennines the model order selection rule that we will obtain. The other issue we should consider when using the approach based on (C.5.I) is that the expectation in (C.5.I) cannot be evaluated, because the true pdf of the data is unknown. Consequently, we will have to use an estimate, i, in lieu of the unavailable I(po,p) in (C.5.!). Let x denote a fictitious data vector having the same size, N, and the same pdf as y, but such that x is independent of Also, let denote the ML estimate of the model parameter as vector that would be obtained from x if x were available. (We omit the superindex n of often as possible, to simplify notation.) In this section, we will consider the following choice of the model's pdf:
y.
ex
ex
In P(y) = Ex {In p (Y, ex) }
(C.5.2)
which, when inserted in (C.5.I), yields
I = Ey {Ex {lnP(Y, ex)}}
(C.5.3)
Hereafter, E x {'} and Ey {'} denote the expectation with respect to the pdf of x and y, respectively. The above choice of p(y), which was introduced in [AKAIKE 1974; AKAIKE 1978], has an interesting cross-validation interpretation: we use the sample x for estimation and the independent sample y for validation of the estimated model's pdf. Note that the dependence of (C.5.3) on the fictitious sample x is eliminated (as it should be, because x is unavailable) via the expectation operation see below for details. An asymptotic second-order Taylor series expansion of In P (y , around y , similar to (C.4.6}-(CA.7), yields
Ex{.};
ex)
A
lnp(y, 8x )
:::
Inp(y, 8y) + (8x A
1 + 2"(8x A
A
A
-
A
8y )
T [
-
A
8y)
T [
e
aIn Pa8(y , 8) I _
A
]
8_0y
I ]
a2 1n p (Y, 8)
1"",,
a8
a8 T
AT
8=8
A
(Bx
A
-
8y )
y A
A
~ lnp(y, By) - 2"(8x - 8y) Jy (8x - 8y )
(C.5.4)
410
Appendix C Model Order Selection Tools
where Jy is the J matrix, as defined in (C.2.20), associated with the data vector y. Using the fact that x and y have the same pdf (which implies that Jy = Jx ), along with the fact that they are independent of each other, we can show that
Ey{Ex {(Bx - By)TJy(Bx- By)} } = Ey{Ex {tr (Jy[(ex - 8) - (8y- 8)] [(ex -
8) -
(8y-
8)
JT)
II
= tr [Jy (Jx- 1 + Jy- 1)] = 2n
(C.5.5)
Inserting (C.5.5) in (C.5.4) yields the following asymptotic approximation of the relative KL infonnation in (C.5.3): (C.5.6) (where we have omitted the subindex y of 8 but reinstated the superindex n). Evidently, (C.5.6) can be estimated in an unbiased manner by (C.5.?) Maximizing (C.5.?) with respect to n is equivalent to minimizing the function of n (C.5.8) where the acronym AIC stands for Akaike Information Criterion (the reasons for multiplying (C.5.?) by -2 to get (C.5.8), and for the use of the word "infonnation" in the name given to (C.5.8) have been explained before-see the previous two sections). As an example, for the sinusoidal signal model with nc components (see Section C.2), AlC takes on the foon (see (C.2.6)-(C.2.1O» AIC = 2Ns lna~
+ 2(3nc + I)
(C.5.9)
where N s denotes the number of available complex-valued samples, {yC(t)}~l' and (C.5.1O)
Remark: AlC can also be obtained by using the following relative KL infonnation function, in lieu of (C.5.3): (C.5.1I)
~
~
:
' "i'
Section CS
411
Cross-Validatory KL Approach: The AIC Rule
Note that, in (C.S.ll), x is used for validation and y for estimation. However, the derivation of AlC from (C.S.ll) is more complicated; such a derivation, which is left as an exercise to the reader, makes use of two Taylor series expansions and of the fact that Ex {lnp(x, O)} = Ey{lnp(y, O)} .• The performance of AlC has been found to be satisfactory in many case studies and applications to real-life data reported in the literature (see, for example, [McQUARRIE AND TSAI 1998; LINHART AND ZUCCHINI 1986; BURNHAM AND ANDE,RSON 2002; SAKAMOTO, ISHIGURO, AND KITAGAWA 1986]). The performance of a model order selection rule, such as AlC, can be measured in different ways, as explained in the next two paragraphs. As a first possibility, we can consider a scenario in which the data-generating mechanism belongs to the class of models under test; thus, there is a true order. In such a case, analytical or numerical studies can be used to determine the probability with which the rule selects the true order. For AlC, it can be shown that, under quite general conditions, the probability of underfitting the probability of overfitting
---,l>
---,l>
0
constant > 0
(C.5.l2) (C.5.13)
as N ---,l> 00 (see, for example, [MCQUARRIE AND TSAI 1998; KAsHYAP 1980]). We can see from (C.5.l3) that the behavior of AlC with respect to the probability of correct detection is not entirely satisfactory. Interestingly, it is precisely this kind of behavior that appears to make AlC perform satisfactorily with respect to the other possible type of performance measure, as explained below. An alternative way of measuring the performance is to consider a more practical scenario, in which the data-generating mechanism is more complex than any of the models under test, as is usually the case in practical applications. In such a case, we can use analytical or numerical studies to detennine the performance of the model picked by the rule as an approximation of the data-generating mechanism-for instance, we can consider the average distance between the estimated and true spectral densities or the average prediction error of the model. With respect to such a performance measure, AlC performs well, partly because of its tendency to select models with relatively large orders which may be a good thing to do in a case in which the data generating mechanism is more complex than the models used to fit it. . The nonzero overfitting probability of AIC is due to the fact that the term 2n in (C.5.8) (which penalizes high-order models), while larger than the term n that appears in the NN rule, is still too small. Extensive simulation studies (see, e.g., [BHANSALI AND DOWNHAM 1977]) have found empirically that the following Generalized Information Criterion (GIC) (C.5.14) can outperform AlC with respect to various performance measures if v > 2. Specifically, depending on the considered scenario as well as the value of N and the performance measure, values of v in the interval v E [2,6] have been found to give the best performance. In the next section, we show that GIC can be obtained as a natural theoretical extension of AlC. Hence, the use of (C.5.l4) with v > 2 can be motivated on formal grounds. However, the choice of a particular v in GIC is a more difficult problem, as we will see in Section C.6,
As we will see in Section C.6, this choice cannot be solved within the current KL framework. The different framework of Section C.7 appears to be necessary to arrive at a rule having the form of (C.5.14) with a specific expression for $\nu$.

We close this section with a brief discussion of another modification of the AIC rule suggested in the literature (see, for example, [HURVICH AND TSAI 1993]). As explained before, AIC is derived by maximizing an asymptotically unbiased estimate of the relative KL information $I$ in (C.5.3). Interestingly, for linear-regression models (given by (C.2.2) where $\mu(y)$ is a linear function of $y$), the following corrected AIC rule, AICc, can be shown to be an exactly unbiased estimate of $I$:

$$\mathrm{AIC_c} = -2\ln p_n(y, \hat{\theta}^n) + \frac{2N}{N - n - 1}\, n \qquad\text{(C.5.15)}$$
(See, for example, [HURVICH AND TSAI 1993; CAVANAUGH 1997].) As $N \rightarrow \infty$, $\mathrm{AIC_c} \rightarrow \mathrm{AIC}$ (as expected). However, for finite values of $N$, the penalty term of AICc is larger than that of AIC. Consequently, in finite samples, AICc has a smaller risk of overfitting than AIC, and therefore we can say that AICc trades off a decrease of the risk of overfitting (which is rather large for AIC) for an increase in the risk of underfitting (which is quite small for AIC and hence can be slightly increased without a significant deterioration of performance). With this fact in mind, AICc can be used as an order-selection rule for models more general than just linear regressions, even though its motivation in the general case is pragmatic rather than theoretical. For other finite-sample corrections of AIC, we refer the reader to [DE WAELE AND BROERSEN 2003; BROERSEN 2000; BROERSEN 2002; SEGHOUANE, BEKARA, AND FLEURY 2003].
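As a quick numerical illustration of (C.5.15): for $N = 30$ and $n = 5$, the AICc penalty is $\frac{2 \cdot 30}{30 - 5 - 1} \cdot 5 = 12.5$, noticeably larger than the AIC penalty $2n = 10$; for $N = 1000$ and the same $n$, it is $\frac{2 \cdot 1000}{994} \cdot 5 \approx 10.06$, essentially the AIC value. This is the small-sample correction at work.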
C.6 GENERALIZED CROSS-VALIDATORY KL APPROACH: THE GIC RULE

In the cross-validatory approach of the previous section, the estimation sample $x$ has the same length as the validation sample $y$. In that approach, $\hat{\theta}_x$ (obtained from $x$) is used to approximate the likelihood of the model via $E_x\{p(y, \hat{\theta}_x)\}$. The AIC rule so obtained has a nonzero probability of overfitting (even asymptotically). Intuitively, the risk of overfitting will decrease if we let the length of the validation sample be (much) larger than that of the estimation sample; that is,

$$N \triangleq \mathrm{length}(y) = \rho \cdot \mathrm{length}(x), \qquad \rho \geq 1 \qquad\text{(C.6.1)}$$
Indeed, overfitting occurs when the model corresponding to $\hat{\theta}_x$ also fits the "noise" in the sample $x$, so that $p(x, \hat{\theta}_x)$ has a "much" larger value than the true pdf, $p(x, \theta)$. Such a model could behave reasonably well on a short validation sample $y$, but not on a long validation sample. (In the latter case, $p(y, \hat{\theta}_x)$ will take on very small values.) The simple idea in (C.6.1) of letting the lengths of the validation and estimation samples be different leads to a natural extension of AIC, as shown next.

A straightforward calculation shows that, under (C.6.1), we have

$$J_y = \rho J_x \qquad\text{(C.6.2)}$$
(See, e.g., (C.2.19).) With this small difference, the calculations in the previous section carry over to the present case, and we obtain (see (C.5.4)-(C.5.5))

$$\begin{aligned}
I &\simeq E_y\{\ln p_n(y, \hat{\theta}_y)\} - \tfrac{1}{2}\, E_y\Bigl\{E_x\Bigl\{\bigl[(\hat{\theta}_x - \theta) - (\hat{\theta}_y - \theta)\bigr]^T J_y \bigl[(\hat{\theta}_x - \theta) - (\hat{\theta}_y - \theta)\bigr]\Bigr\}\Bigr\} \\
&= E_y\{\ln p_n(y, \hat{\theta}_y)\} - \tfrac{1}{2}\,\mathrm{tr}\bigl[J_y\bigl(\rho J_y^{-1} + J_y^{-1}\bigr)\bigr] \\
&= E_y\{\ln p_n(y, \hat{\theta}_y)\} - \frac{1 + \rho}{2}\, n
\end{aligned} \qquad\text{(C.6.3)}$$
An unbiased estimate of the right side in (C.6.3) is given by

$$\ln p(y, \hat{\theta}_y) - \frac{1 + \rho}{2}\, n \qquad\text{(C.6.4)}$$
The generalized information criterion (GIC) rule maximizes (C.6.4) or, equivalently, minimizes

$$\mathrm{GIC} = -2\ln p_n(y, \hat{\theta}^n) + (1 + \rho)\, n \qquad\text{(C.6.5)}$$
As expected, (C.6.5) reduces to AIC for $\rho = 1$. Note also that, for a given $y$, the order selected by (C.6.5) with $\rho > 1$ is always smaller than the order selected by AIC (because the penalty term in (C.6.5) is larger than that in (C.5.8)); hence, as predicted by the previous intuitive discussion, the risk of overfitting associated with GIC is smaller than for AIC when $\rho > 1$. On the negative side, there is no clear guideline for choosing $\rho$ in (C.6.5). The "optimal" value of $\rho$ in the GIC rule has been shown empirically to depend on the performance measure, the number of data samples, and the data-generating mechanism itself [MCQUARRIE AND TSAI 1998; BHANSALI AND DOWNHAM 1977]. Consequently, $\rho$ should be chosen as a function of all these factors, but there is no clear rule as to how that should be done. The approach of the next section appears to be more successful than the present approach in suggesting a specific choice for $\rho$ in (C.6.5). Indeed, as we will see, that approach leads to an order-selection rule of the GIC type, but with a concrete expression for $\rho$ as a function of $N$.
C.7 BAYESIAN APPROACH: THE BIC RULE

The order-selection rule to be presented in this section can be obtained in two ways. First, let us consider the KL framework of the previous sections. Therefore, our goal is to maximize the relative KL information (see (C.5.1)):

$$I(p_0, p) = E_0\{\ln p(y)\} \qquad\text{(C.7.1)}$$
The ideal choice of $p(y)$ would be $p(y) = p_n(y, \theta^n)$. However, this choice is not possible, because the likelihood of the model, $p_n(y, \theta^n)$, is not available. Hence, we have to use a "surrogate likelihood" in lieu of $p_n(y, \theta^n)$. Let us assume, as before, that a fictitious sample $x$ is used to make inferences about $\theta$. The pdf of the estimate, $\hat{\theta}_x$, obtained from $x$ can alternatively be viewed as an a priori pdf of $\theta$; hence, it will be denoted by $p(\theta)$ in what follows (once again, we omit the superindex $n$ of $\theta$, etc., to simplify the notation whenever there is no risk of confusion). Note that we do not constrain $p(\theta)$ to be Gaussian. We only assume that

$$p(\theta) \text{ is flat around } \hat{\theta} \qquad\text{(C.7.2)}$$

where, as before, $\hat{\theta}$ denotes the ML estimate of the parameter vector obtained from the available data sample, $y$. Furthermore, we now assume that the length of the fictitious sample is a constant that does not depend on $N$; hence,

$$p(\theta) \text{ is independent of } N \qquad\text{(C.7.3)}$$
As a consequence of assumption (C.7.3), the ratio between the lengths of the validation sample and the (fictitious) estimation sample grows without bound as $N$ increases. According to the discussion in the previous section, this fact should lead to an order-selection rule having a penalty term asymptotically much larger than that of AIC or GIC (with $\rho$ = constant) and, hence, having a reduced risk of overfitting. The scenario just introduced leads naturally to the following choice of surrogate likelihood:

$$p(y) = E_\theta\{p(y, \theta)\} = \int p(y, \theta)\, p(\theta)\, d\theta \qquad\text{(C.7.4)}$$
Remark: In the previous sections, we used a surrogate likelihood given (see (C.5.2)) by

$$E_x\{\ln p(y, \hat{\theta}_x)\} \qquad\text{(C.7.5)}$$

However, we could instead have used a $p(y)$ given by

$$p(y) = E_x\{p(y, \hat{\theta}_x)\} \qquad\text{(C.7.6)}$$

The rule that would be obtained by using (C.7.6) can be shown to have the same form as AIC and GIC, but with a (slightly) different penalty term. Note that the choice of $p(y)$ in (C.7.6) is similar to the choice in (C.7.4), with the difference that for (C.7.6) the "a priori" pdf, $p(\hat{\theta}_x)$, depends on $N$. •

To obtain a simple asymptotic approximation of the integral in (C.7.4), we make use of the asymptotic approximation of $p(y, \theta)$ given by (C.4.6)-(C.4.7):

$$p(y, \theta) \simeq p(y, \hat{\theta})\, e^{-\frac{1}{2}(\theta - \hat{\theta})^T J (\theta - \hat{\theta})} \qquad\text{(C.7.7)}$$
This equation holds for $\theta$ in the vicinity of $\hat{\theta}$. Inserting (C.7.7) in (C.7.4) and using the assumption in (C.7.2), along with the fact that $p(y, \theta)$ is asymptotically much larger at $\theta = \hat{\theta}$ than at any $\theta \neq \hat{\theta}$, we obtain

$$\begin{aligned}
p(y) &\simeq p(y, \hat{\theta})\, p(\hat{\theta}) \int e^{-\frac{1}{2}(\theta - \hat{\theta})^T J (\theta - \hat{\theta})}\, d\theta \\
&= p(y, \hat{\theta})\, p(\hat{\theta})\, (2\pi)^{n/2}\, |J|^{-1/2} \underbrace{\int \frac{1}{(2\pi)^{n/2}\, |J^{-1}|^{1/2}}\, e^{-\frac{1}{2}(\theta - \hat{\theta})^T J (\theta - \hat{\theta})}\, d\theta}_{=\,1} \\
&= p(y, \hat{\theta})\, p(\hat{\theta})\, \frac{(2\pi)^{n/2}}{|J|^{1/2}}
\end{aligned} \qquad\text{(C.7.8)}$$
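Numerically, the quality of this Gaussian-integral approximation is easy to check. The sketch below is our own illustration, not from the book: the numbers are arbitrary, a broad Gaussian prior stands in for the flat $p(\theta)$ of (C.7.2), and $J$ is taken large, as it would be for large $N$.

```python
import numpy as np

theta_hat, J = 0.3, 500.0   # J grows with N, so the likelihood factor is narrow
prior_sd = 2.0              # broad prior, approximately flat near theta_hat

def prior(th):
    # Gaussian prior density, standing in for a p(theta) flat around theta_hat
    return np.exp(-0.5 * (th / prior_sd) ** 2) / np.sqrt(2 * np.pi * prior_sd ** 2)

theta = np.linspace(-10.0, 10.0, 200001)
likelihood = np.exp(-0.5 * J * (theta - theta_hat) ** 2)  # p(y, theta) / p(y, theta_hat)

dx = theta[1] - theta[0]
exact = np.sum(likelihood * prior(theta)) * dx            # left side of (C.7.8), scaled
laplace = prior(theta_hat) * np.sqrt(2 * np.pi / J)       # right side of (C.7.8), scaled
print(exact, laplace)  # agree to about three digits; the match improves as J grows
```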
(See [DJURIC 1998] and references therein for the exact conditions under which this approximation holds true.) It follows from (C.7.1) and (C.7.8) that

$$\hat{I} = \ln p(y, \hat{\theta}) + \ln p(\hat{\theta}) + \frac{n}{2}\ln 2\pi - \frac{1}{2}\ln |J| \qquad\text{(C.7.9)}$$
is an asymptotically unbiased estimate of the relative KL information. Note, however, that (C.7.9) depends on the a priori pdf of $\theta$, which has not been specified. To eliminate this dependence, we use the fact that $|J|$ increases without bound as $N$ increases. Specifically, in most cases (but not in all; see below) we have (cf. (C.2.21)) that

$$\ln|J| = \ln\Bigl|N \cdot \frac{1}{N}J\Bigr| = n\ln N + \ln\Bigl|\frac{1}{N}J\Bigr| = n\ln N + O(1) \qquad\text{(C.7.10)}$$
where we used the fact that $|cJ| = c^n |J|$ for a scalar $c$ and an $n \times n$ matrix $J$. Using (C.7.10) and the fact that $p(\theta)$ is independent of $N$ (see (C.7.3)) yields the following asymptotic approximation of the right side in (C.7.9):

$$\ln p(y, \hat{\theta}) - \frac{n}{2}\ln N \qquad\text{(C.7.11)}$$

The Bayesian information criterion (BIC) rule selects the order that maximizes (C.7.11) or, equivalently, minimizes

$$\mathrm{BIC} = -2\ln p_n(y, \hat{\theta}^n) + n\ln N \qquad\text{(C.7.12)}$$
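The size of this penalty is worth a quick calculation: $\ln N \approx 4.6$ for $N = 100$ and $\ln N \approx 9.2$ for $N = 10000$, versus the constant $2$ used by AIC. Every extra parameter must therefore buy an increasingly large improvement in the log-likelihood before BIC accepts it.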
We remind the reader that (C.7.12) has been derived under the assumption that (C.2.21) holds, but this assumption is not always true. As an example (see [DJURIC 1998] for more examples), consider once again the sinusoidal signal model with $n_c$ components (as also considered in Section C.5),
in the case of which we have (cf. (C.2.22)-(C.2.23)) that

$$\ln|J| = (2n_c + 1)\ln N_s + 3n_c \ln N_s + O(1) = (5n_c + 1)\ln N_s + O(1) \qquad\text{(C.7.13)}$$

Hence, in the case of sinusoidal signals, BIC takes on the form

$$\mathrm{BIC} = -2\ln p_{n_c}(y, \hat{\theta}^{n_c}) + (5n_c + 1)\ln N_s = 2N_s \ln \hat{\sigma}^2_{n_c} + (5n_c + 1)\ln N_s \qquad\text{(C.7.14)}$$
where $\hat{\sigma}^2_{n_c}$ is as defined in (C.5.10) and $N_s$ denotes the number of complex-valued data samples.

The attribute Bayesian in the name of the rule in (C.7.12) or (C.7.14) is motivated by the use of the a priori pdf, $p(\theta)$, in the rule derivation, a method typical of a Bayesian approach. In fact, the BIC rule can be obtained by using a full Bayesian approach, as explained next. To obtain the BIC rule in a Bayesian framework, we assume that the parameter vector $\theta$ is a random variable with a given a priori pdf denoted by $p(\theta)$. Owing to this assumption on $\theta$, we need to modify the previously used notation as follows: $p(y, \theta)$ will now denote the joint pdf of $y$ and $\theta$, and $p(y \mid \theta)$ will denote the conditional pdf of $y$ given $\theta$. Using this notation and Bayes' rule, we can write

$$p(y \mid H_n) = \int p(y \mid \theta)\, p(\theta)\, d\theta \qquad\text{(C.7.15)}$$

where $H_n$ denotes the model with order $n$. The right side of (C.7.15) is identical to that of (C.7.4). It follows from this observation and from the analysis conducted in the first part of this section that, under the assumptions (C.7.2) and (C.7.3), and asymptotically in $N$,

$$-2\ln p(y \mid H_n) \simeq \mathrm{BIC} \qquad\text{(C.7.16)}$$

Hence, maximizing $p(y \mid H_n)$ is asymptotically equivalent to minimizing BIC, independently of the prior $p(\theta)$ (as long as it satisfies (C.7.2) and (C.7.3)). The rediscovery of BIC in this Bayesian framework is important: It reveals the interesting fact that the BIC rule is asymptotically equivalent to the optimal MAP rule (see Section C.3.1) and, hence, that the BIC rule can be expected to maximize the total probability of correct detection, at least for sufficiently large values of $N$.

The BIC rule has been proposed in [SCHWARZ 1978A; KASHYAP 1982], among others. In [RISSANEN 1978; RISSANEN 1982] the same type of rule has been obtained by a different approach, one based on coding arguments and on the minimum description length (MDL) principle. The fact that the BIC rule can be derived in several different ways suggests that it might have a fundamental character. In particular, it can be shown that, under the assumption that the data-generating mechanism belongs to the model class considered, the BIC rule is consistent; that is,

$$\text{for BIC: the probability of correct detection} \rightarrow 1 \text{ as } N \rightarrow \infty \qquad\text{(C.7.17)}$$
(See, e.g., [SODERSTROM AND STOICA 1989; MCQUARRIE AND TSAI 1998].) This should be contrasted with the nonzero overfitting probability of AIC and GIC (with $\rho$ = constant); see (C.5.12)-(C.5.13). Note that the result in (C.7.17) is not surprising in view of the asymptotic equivalence between the BIC rule and the optimal MAP rule.

Finally, we note in passing that, if we remove the condition in (C.7.3) that $p(\theta)$ be independent of $N$, then the term $\ln p(\hat{\theta})$ no longer may be eliminated from (C.7.9) by letting $N \rightarrow \infty$. Consequently, (C.7.9) would lead to a prior-dependent rule, which could be used to obtain any other rule described in this appendix by suitably choosing the prior. This line of argument can serve the theoretical purpose of interpreting various order-selection rules in a common Bayesian framework, but it appears to have little practical value; in particular, it can hardly be used to derive sound new order-selection rules.
C.8 SUMMARY AND THE MULTIMODEL APPROACH
In the first part of this section, we summarize the model order selection rules presented in the previous sections. Then we briefly discuss and motivate the multimodel approach, which, as the name suggests, is based on the idea of using more than just one model for making inferences about the signal under study.
C.8.1 Summary

We begin with the observation that all the order-selection rules discussed in this appendix have the common form

$$-2\ln p_n(y, \hat{\theta}^n) + \eta(n, N)\, n \qquad\text{(C.8.1)}$$

but different penalty coefficients $\eta(n, N)$:

$$\begin{aligned}
\text{AIC:} &\quad \eta(n, N) = 2 \\
\text{AIC}_c\text{:} &\quad \eta(n, N) = \frac{2N}{N - n - 1} \\
\text{GIC:} &\quad \eta(n, N) = \nu = \rho + 1 \\
\text{BIC:} &\quad \eta(n, N) = \ln N
\end{aligned} \qquad\text{(C.8.2)}$$
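This table translates directly into code. The sketch below is our own illustration (the function names are invented): it takes the values of $-2\ln p_n(y, \hat{\theta}^n)$, whose computation is discussed in the steps that follow, and returns the order chosen by any of the four rules.

```python
import numpy as np

def penalty(rule, n, N, nu=4.0):
    """Penalty coefficient eta(n, N) from (C.8.2); nu applies to GIC only."""
    if rule == "AIC":
        return 2.0
    if rule == "AICc":
        return 2.0 * N / (N - n - 1)
    if rule == "GIC":
        return nu
    if rule == "BIC":
        return np.log(N)
    raise ValueError("unknown rule: " + rule)

def select_order(minus2loglik, N, rule="BIC", nu=4.0):
    """minus2loglik[n-1] holds -2 ln p_n(y, theta_hat^n) for n = 1, ..., nmax;
    returns the order minimizing the criterion (C.8.1)."""
    scores = [m2l + penalty(rule, n, N, nu) * n
              for n, m2l in enumerate(minus2loglik, start=1)]
    return 1 + int(np.argmin(scores))
```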
Before using any of these rules for order selection in a specific problem, we need to carry out the following steps:

(i) Obtain an explicit expression for the term $-2\ln p_n(y, \hat{\theta}^n)$ in (C.8.1). This requires the specification both of the model structures to be tested and of their postulated likelihoods. An aspect that should receive some attention here is the fact that the derivation of all previous rules assumed real-valued data and parameters. Consequently, complex-valued data and parameters must be converted to real-valued quantities in order to apply the results in this appendix.

(ii) Count the number of unknown (real-valued) parameters in each model structure under consideration. This is easily done in the parametric spectral analysis problems in which we are interested.
(iii) Verify that the assumptions that have been made to derive the rules hold true. Fortunately, most of the assumptions made are quite weak; hence, they will usually hold. Indeed, the models under test may be either nested or non-nested, and they may even be only approximate descriptions of the data-generating mechanism. However, there are two particular assumptions, made on the information matrix $J$, that do not always hold and hence must be checked. First, we assumed in all derivations that the inverse matrix, $J^{-1}$, exists; such is not always the case. Second, we made the assumption that $J$ is such that $J/N = O(1)$. For some models, this is not true; when it is not true, a different normalization of $J$ is required to make it tend to a constant matrix as $N \rightarrow \infty$. (This aspect is important for the BIC rule only.)

We have used the sinusoidal signal model as an example throughout this appendix to illustrate these steps and the involved aspects. Once these aspects have been carefully considered, we can go on to use one of the four rules in (C.8.1)-(C.8.2) for selecting the order in our estimation problem. The question of which rule should be used is not an easy one. In general, we can prefer AICc over AIC: indeed, there is empirical evidence that AICc outperforms AIC in small samples (whereas in medium or large samples the two rules are almost equivalent). We also tend to prefer BIC over AIC or AICc, on the grounds that BIC is an asymptotic approximation of the optimal MAP rule. Regarding GIC, as mentioned in Sections C.5 and C.6, GIC with $\nu \in [2, 6]$ (depending on the scenario under study) can outperform AIC and AICc. Hence, for lack of a more precise guideline, we can think of using GIC with $\nu = 4$, the value in the middle of the above interval. To summarize, then, a possible ranking of the four rules discussed in this appendix is as follows (the first being considered the best):

• BIC
• GIC with $\nu = 4$ ($\rho = 3$)
• AICc
• AIC
In Figure C.1, we show the penalty coefficients of the above rules, as functions of $N$, to further illustrate the relationship between them.

C.8.2 The Multimodel Approach
We close this section with a brief discussion of a multimodel approach. Assume that we have used our favorite information criterion, call it XIC, and have computed its values for the model orders under test:

$$\mathrm{XIC}(n); \qquad n = 1, \ldots, \bar{n} \qquad\text{(C.8.3)}$$

We can then pick the order that minimizes $\mathrm{XIC}(n)$ and hence end up using a single model; this is the single-model approach. Alternatively, we can consider a multimodel approach. Specifically, let us pick an $M \in [1, \bar{n}]$ (such as $M = 3$) and consider the model orders that give the $M$ smallest values of $\mathrm{XIC}(n)$, let us say $n_1, \ldots, n_M$.
[Figure C.1 Penalty coefficients of AIC, GIC with $\nu = 4$ ($\rho = 3$), AICc (for $n = 5$), and BIC, as functions of data length $N$.]
From the derivations presented in the previous sections of this appendix, we can see that all information criteria attempt to estimate twice the negative log-likelihood of the model:

$$-2\ln p_n(y, \theta^n) \qquad\text{(C.8.4)}$$

Hence, we can use

$$e^{-\mathrm{XIC}(n)/2} \qquad\text{(C.8.5)}$$

as an estimate of the likelihood of the model with order equal to $n$ (to within a multiplicative constant). Consequently, instead of using just one model corresponding to the order that minimizes $\mathrm{XIC}(n)$, we can think of considering a combined use of the selected models (with orders $n_1, \ldots, n_M$) in which the contribution of each model is proportional to its likelihood value:

$$w_k = \frac{e^{-\mathrm{XIC}(n_k)/2}}{\sum_{j=1}^{M} e^{-\mathrm{XIC}(n_j)/2}}, \qquad k = 1, \ldots, M \qquad\text{(C.8.6)}$$

For more details on the multimodel approach, including guidelines for choosing $M$, we refer the interested reader to [BURNHAM AND ANDERSON 2002; STOICA, SELEN, AND LI 2004].
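A direct implementation of the weighting in (C.8.6) is short; the sketch below is our own illustration. It also subtracts the smallest XIC value before exponentiating, a standard numerical safeguard that leaves the weights unchanged.

```python
import numpy as np

def model_weights(xic_values):
    """Likelihood-proportional weights exp(-XIC/2), normalized as in (C.8.6)."""
    x = np.asarray(xic_values, dtype=float)
    lik = np.exp(-0.5 * (x - x.min()))   # shift by the minimum to avoid underflow
    return lik / lik.sum()

# For instance, for the M = 3 best orders:
print(model_weights([210.3, 211.0, 214.9]))   # -> approx. [0.55, 0.39, 0.06]
```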
Appendix D

Answers to Selected Exercises

1.3(a): $\mathcal{Z}\{h_{-k}\} = H(1/z)$; $\quad \mathcal{Z}\{g_k\} = H(z)H^*(1/z^*)$

1.4(a):
$$\phi(\omega) = \sigma^2\, \frac{1 + |b_1|^2 + b_1 e^{-i\omega} + b_1^* e^{i\omega}}{(1 + a_1 e^{-i\omega})(1 + a_1^* e^{i\omega})}$$
$$r(0) = \frac{\sigma^2}{1 - |a_1|^2}\left\{|1 - a_1 b_1^*|^2 + |b_1|^2 (1 - |a_1|^2)\right\}$$
$$r(k) = \frac{\sigma^2}{1 - |a_1|^2}\left(1 - \frac{b_1}{a_1}\right)(1 - a_1 b_1^*)(-a_1)^k \quad\text{for } k \geq 1$$

1.9(a): $\phi_y(\omega) = \sigma_1^2 |H_1(\omega)|^2 + \rho\,\sigma_1\sigma_2\left[H_1(\omega)H_2^*(\omega) + H_2(\omega)H_1^*(\omega)\right] + \sigma_2^2 |H_2(\omega)|^2$

2.3: An example is $y(t) = \{1, 1.1, 1\}$, whose unbiased ACS estimate is $\hat{r}(k) = \{1.07, 1.1, 1\}$, giving $\hat{\phi}(\omega) = 1.07 + 2.2\cos(\omega) + 2\cos(2\omega)$.

2.4(b): $\mathrm{var}\{\hat{r}(k)\} = \dfrac{\sigma^4 (N - k)}{N^2}\,[1 + \delta_{k,0}]$

2.9:
(a) $E\{Y(\omega_k)Y^*(\omega_r)\} = \dfrac{\sigma^2}{N}\displaystyle\sum_{t=0}^{N-1} e^{i(\omega_r - \omega_k)t} = \begin{cases}\sigma^2, & k = r\\ 0, & k \neq r\end{cases}$
(c) $E\{\hat{\phi}(\omega)\} = \sigma^2 = \phi(\omega)$, so $\hat{\phi}(\omega)$ is an unbiased estimate.

3.2: Decompose the ARMA system as $x(t) = \frac{1}{A(z)}\, e(t)$ and $y(t) = B(z)\, x(t)$. Then $\{x(t)\}$ is an AR($n$) process. To find $\{r_x(k)\}$ from $\{\sigma^2, a_1, \ldots, a_n\}$, write the Yule-Walker equations as

$$\begin{bmatrix} 1 & & & 0\\ a_1 & 1 & & \\ \vdots & \ddots & \ddots & \\ a_n & \cdots & a_1 & 1 \end{bmatrix}\begin{bmatrix} r_x(0)\\ r_x(1)\\ \vdots\\ r_x(n) \end{bmatrix} + \begin{bmatrix} 0 & a_1 & a_2 & \cdots & a_n\\ 0 & a_2 & a_3 & \cdots & 0\\ \vdots & \vdots & \vdots & & \vdots\\ 0 & a_n & 0 & \cdots & 0\\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix} r_x^*(0)\\ r_x^*(1)\\ \vdots\\ r_x^*(n) \end{bmatrix} = \begin{bmatrix} \sigma^2\\ 0\\ \vdots\\ 0 \end{bmatrix}$$

or $A_1 r_x + A_2 r_x^* = \begin{bmatrix}\sigma^2\\ 0\end{bmatrix}$, which can be solved for $\{r_x(m)\}_{m=0}^{n}$. Then find $r_x(k)$ for $k > n$ from equation (3.3.4) and $r_x(k)$ for $k < 0$ from $r_x^*(-k)$. Finally,
$$r_y(k) = \sum_{j=0}^{m}\sum_{p=0}^{m} r_x(k + p - j)\, b_j b_p^*$$

3.4: Minimizing the backward prediction error variance $\sigma_b^2 = E\{|e_b(t)|^2\}$ gives $\theta_b = \theta_f^*$ and $\sigma_b^2 = \sigma_f^2$.

3.5(a): $R_{m+1}\begin{bmatrix}1\\ d_1\\ \vdots\\ d_m\end{bmatrix} = \begin{bmatrix}\sigma^2\\ 0\\ \vdots\\ 0\end{bmatrix} + \begin{bmatrix}0\\ c_1\\ \vdots\\ c_m\end{bmatrix}$

3.14: $c_l = \sum_{j=0}^{n} a_j\, \bar{r}(l - j)$ for $0 \leq l \leq p$, where $\bar{r}(k) = r(k)$ for $k \geq 1$, $\bar{r}(0) = r(0)/2$, and $\bar{r}(k) = 0$ for $k < 0$.

3.15(b): First solve for $b_1, \ldots, b_m$ from
$$\begin{bmatrix} c_n & c_{n-1} & \cdots & c_{n-m+1}\\ c_{n+1} & c_n & \cdots & c_{n-m+2}\\ \vdots & & & \vdots\\ c_{n+m-1} & c_{n+m-2} & \cdots & c_n \end{bmatrix}\begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_m \end{bmatrix} = -\begin{bmatrix} c_{n+1}\\ c_{n+2}\\ \vdots\\ c_{n+m} \end{bmatrix}$$

4.2:
(a) $E\{x(t)\} = 0$; $\quad r_x(k) = (\bar{\alpha}^2 + \sigma_\alpha^2)\, e^{i\omega_0 k}$
(b) Let $p(\varphi) = \sum_{k=-\infty}^{\infty} c_k e^{-ik\varphi}$ be the Fourier series of $p(\varphi)$ for $\varphi \in [-\pi, \pi]$. Then $E\{x(t)\} = \bar{\alpha}\, c_1\, e^{i\omega_0 t}$. Thus, $E\{x(t)\} = 0$ if and only if either $\bar{\alpha} = 0$ or $c_1 = 0$. In this case, $r_x(k)$ is the same as in part (a).

5.8: The height of the peak of the (unnormalized) Capon spectrum is
Bibliography
Abrahamsson, R., A. Jakobsson, and P. Stoica (2004). Spatial Amplitude and Phase Estimation Method for Arbitrary Array Geometries. Technical Report, IT Department, Uppsala University, Sweden.
Akaike, H. (1974). "A new look at the statistical model identification," IEEE Transactions on Automatic Control 19, 716-723.
Akaike, H. (1978). "On the likelihood of a time series model," The Statistician 27, 217-235.
Anderson, T. W. (1971). The Statistical Analysis of Time Series. New York: Wiley.
Aoki, M. (1987). State Space Modeling of Time Series. Berlin: Springer-Verlag.
Bangs, W. J. (1971). Array Processing with Generalized Beamformers. Ph.D. thesis, Yale University, New Haven, CT.
Barabell, A. J. (1983). "Improving the resolution performance of eigenstructure-based direction-finding algorithms," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Boston, 336-339.
Bartlett, M. S. (1948). "Smoothing periodograms for time series with continuous spectra," Nature 161, 686-687. (Reprinted in [KESLER 1986].)
Bartlett, M. S. (1950). "Periodogram analysis and continuous spectra," Biometrika 37, 1-16.
Baysal, U. and R. Moses (2003). "On the geometry of isotropic arrays," IEEE Transactions on Signal Processing 51 (6), 1469-1478.
Beex, A. A. and L. L. Scharf (1981). "Covariance sequence approximation for parametric spectrum modeling," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-29 (5), 1042-1052.
Besson, O. and P. Stoica (1999). "Nonlinear least-squares approach to frequency estimation and detection of sinusoidal signals with arbitrary envelope," Digital Signal Processing: A Review Journal 9, 45-56.
Bhansali, R. J. (1980). "Autoregressive and window estimates of the inverse correlation function," Biometrika 67, 551-566.
Bhansali, R. J. and D. Y. Downham (1977). "Some properties of the order of an autoregressive model selected by a generalization of Akaike's FPE criterion," Biometrika 64, 547-551.
Bienvenu, G. (1979). "Influence of the spatial coherence of the background noise on high resolution passive methods," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Washington, DC, 306-309.
Blackman, R. B. and J. W. Tukey (1959). The Measurement of Power Spectra from the Point of View of Communication Engineering. New York: Dover.
Bloomfield, P. (1976). Fourier Analysis of Time Series - An Introduction. New York: Wiley.
Bohme, J. F. (1991). "Array processing," in S. Haykin (ed.), Advances in Spectrum Analysis and Array Processing, Volume 2, 1-63. Englewood Cliffs, NJ: Prentice Hall.
Bottcher, A. and B. Silbermann (1983). Invertibility and Asymptotics of Toeplitz Matrices. Berlin: Akademie-Verlag.
Bracewell, R. N. (1986). The Fourier Transform and its Applications, 2d edition. New York: McGraw-Hill.
Bresler, Y. and A. Macovski (1986). "Exact maximum likelihood parameter estimation of superimposed exponential signals in noise," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-34 (5), 1081-1089.
Brillinger, D. R. (1981). Time Series - Data Analysis and Theory. New York: Holt, Rinehart and Winston.
Brockwell, R. J. and R. A. Davis (1991). Time Series - Theory and Methods, 2d edition. New York: Springer-Verlag.
Broersen, P. M. T. (2000). "Finite sample criteria for autoregressive order selection," IEEE Transactions on Signal Processing 48 (12), 3550-3558.
Broersen, P. M. T. (2002). "Automatic spectral analysis with time series models," IEEE Transactions on Instrumentation and Measurement 51 (2), 211-216.
Bronez, T. P. (1992). "On the performance advantage of multitaper spectral analysis," IEEE Transactions on Signal Processing 40 (12), 2941-2946.
Burg, J. P. (1972). "The relationship between maximum entropy spectra and maximum likelihood spectra," Geophysics 37, 375-376. (Reprinted in [CHILDERS 1978].)
Burg, J. P. (1975). Maximum Entropy Spectral Analysis. Ph.D. thesis, Stanford University, Stanford, CA.
Burnham, K. P. and D. R. Anderson (2002). Model Selection and Multi-Model Inference. New York: Springer-Verlag.
Byrnes, C. L., T. T. Georgiou, and A. Lindquist (2000). "A new approach to spectral estimation: a tunable high-resolution spectral estimator," IEEE Transactions on Signal Processing 48 (11), 3189-3205.
Byrnes, C. L., T. T. Georgiou, and A. Lindquist (2001). "A generalized entropy criterion for Nevanlinna-Pick interpolation with degree constraint," IEEE Transactions on Automatic Control 46 (6), 822-839.
Cadzow, J. A. (1982). "Spectrum estimation: An overdetermined rational model equation approach," Proceedings of the IEEE 70 (9), 907-939.
Calvez, L. C. and P. Vilbe (1992). "On the uncertainty principle in discrete signals," IEEE Transactions on Circuits and Systems 39 (6), 394-395.
Cantoni, A. and P. Butler (1976). "Eigenvalues and eigenvectors of symmetric centrosymmetric matrices," Linear Algebra and Its Applications 13 (3), 275-288.
Capon, J. (1969). "High-resolution frequency-wavenumber spectrum analysis," Proceedings of the IEEE 57 (8), 1408-1418. (Reprinted in [CHILDERS 1978].)
Cavanaugh, J. E. (1997). "Unifying the derivations for the Akaike and corrected Akaike information criteria," Statistics and Probability Letters 23, 201-208.
Childers, D. G. (ed.) (1978). Modern Spectrum Analysis. New York: IEEE Press.
Choi, B. (1992). ARMA Model Identification. New York: Springer-Verlag.
Clark, M. P., L. Elden, and P. Stoica (1997). "A computationally efficient implementation of 2-D IQML," in Proceedings of the 31st Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, 1730-1734.
Clark, M. P. and L. L. Scharf (1994). "Two-dimensional modal analysis based on maximum likelihood," IEEE Transactions on Signal Processing 42 (6), 1443-1452.
Cleveland, W. S. (1972). "The inverse autocorrelations of a time series and their applications," Technometrics 14, 277-298.
Cohen, L. (1995). Time-Frequency Analysis. Englewood Cliffs, NJ: Prentice Hall.
Cooley, J. W. and J. W. Tukey (1965). "An algorithm for the machine calculation of complex Fourier series," Math. Computation 19, 297-301.
Cornwell, T. and A. Bridle (1996). Deconvolution Tutorial. Technical report, National Radio Astronomy Observatory. (Also on the Internet at http://www.cv.nrao.edu/~abridle/deconvol/deconvol.html.)
Cox, H. (1973). "Resolving power and sensitivity to mismatch of optimum array processors," Journal of the Acoustical Society of America 54, 771-785.
Cramer, H. (1946). Mathematical Methods of Statistics. Princeton, NJ: Princeton University Press.
Daniell, P. J. (1946). "Discussion of 'On the theoretical specification and sampling properties of autocorrelated time-series,'" Journal of the Royal Statistical Society 8, 88-90.
de Waele, S. and P. M. T. Broersen (2003). "Order selection for vector autoregressive models," IEEE Transactions on Signal Processing 51 (2), 427-433.
DeGraaf, S. R. (1994). "Sidelobe reduction via adaptive FIR filtering in SAR imagery," IEEE Transactions on Image Processing 3 (3), 292-301.
Delsarte, P. and Y. Genin (1986). "The split Levinson algorithm," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-34 (3), 470-478.
Demeure, C. J. and C. T. Mullis (1989). "The Euclid algorithm and the fast computation of cross-covariance and autocovariance sequences," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-37 (4), 545-552.
Dempster, A., N. Laird, and D. Rubin (1977). "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society 39, 1-38.
Djuric, P. (1998). "Asymptotic MAP criteria for model selection," IEEE Transactions on Signal Processing 46 (10), 2726-2735.
Doron, M., E. Doron, and A. Weiss (1993). "Coherent wide-band processing for arbitrary array geometry," IEEE Transactions on Signal Processing 41 (1), 414-417.
Doroslovacki, M. I. (1998). "Product of second moments in time and frequency for discrete-time signals and the uncertainty limit," Signal Processing 67 (1), 59-76.
Dumitrescu, B., I. Tabus, and P. Stoica (2001). "On the parameterization of positive real sequences and MA parameter estimation," IEEE Transactions on Signal Processing 49 (11), 2630-2639.
Durbin, J. (1959). "Efficient estimation of parameters in moving-average models," Biometrika 46, 306-316.
Durbin, J. (1960). "The fitting of time series models," Review of the International Institute of Statistics 28, 233-244.
Faurre, P. (1976). "Stochastic realization algorithms," in R. K. Mehra and D. G. Lainiotis (eds.), System Identification: Advances and Case Studies. London: Academic Press.
Feldman, D. D. and L. J. Griffiths (1994). "A projection approach for robust adaptive beamforming," IEEE Transactions on Signal Processing 42 (4), 867-876.
Fisher, R. A. (1922). "On the mathematical foundations of theoretical statistics," Philosophical Transactions of the Royal Society of London 222, 309-368.
Friedlander, B., M. Morf, T. Kailath, and L. Ljung (1979). "New inversion formulas for matrices classified in terms of their distance from Toeplitz matrices," Linear Algebra and Its Applications 27, 31-60.
Fuchs, J. J. (1987). "ARMA order estimation via matrix perturbation theory," IEEE Transactions on Automatic Control AC-32 (4), 358-361.
Fuchs, J. J. (1988). "Estimating the number of sinusoids in additive white noise," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-36 (12), 1846-1854.
Fuchs, J. J. (1992). "Estimation of the number of signals in the presence of unknown correlated sensor noise," IEEE Transactions on Signal Processing 40 (5), 1053-1061.
Fuchs, J. J. (1996). "Rectangular Pisarenko method applied to source localization," IEEE Transactions on Signal Processing 44 (10), 2377-2383.
Georgiou, T. T. (1987). "Realization of power spectra from partial covariance sequences," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-35 (4), 438-449.
Gersh, W. (1970). "Estimation of the autoregressive parameters of mixed autoregressive moving-average time series," IEEE Transactions on Automatic Control AC-15 (5), 583-588.
Ghogho, M. and A. Swami (1999). "Fast computation of the exact FIM for deterministic signals in colored noise," IEEE Transactions on Signal Processing 47 (1), 52-61.
Gini, F. and F. Lombardini (2002). "Multilook APES for multibaseline SAR interferometry," IEEE Transactions on Signal Processing 50 (7), 1800-1803.
Golub, G. H. and C. F. Van Loan (1989). Matrix Computations, 2d edition. Baltimore: The Johns Hopkins University Press.
Gray, R. M. (1972). "On the asymptotic eigenvalue distribution of Toeplitz matrices," IEEE Transactions on Information Theory IT-18, 725-730.
Hannan, E. and B. Wahlberg (1989). "Convergence rates for inverse Toeplitz matrix forms," Journal of Multivariate Analysis 31, 127-135.
Hannan, E. J. and M. Deistler (1988). The Statistical Theory of Linear Systems. New York: Wiley.
Harris, F. J. (1978). "On the use of windows for harmonic analysis with the discrete Fourier transform," Proceedings of the IEEE 66 (1), 51-83. (Reprinted in [KESLER 1986].)
Hayes III, M. H. (1996). Statistical Digital Signal Processing and Modeling. New York: Wiley.
Haykin, S. (ed.) (1991). Advances in Spectrum Analysis and Array Processing, Volumes 1 and 2. Englewood Cliffs, NJ: Prentice Hall.
Haykin, S. (ed.) (1995). Advances in Spectrum Analysis and Array Processing, Volume 3. Englewood Cliffs, NJ: Prentice Hall.
Heiser, W. J. (1995). "Convergent computation by iterative majorization: theory and applications in multidimensional data analysis," in W. J. Krzanowski (ed.), Recent Advances in Descriptive Multivariate Analysis, 157-189. Oxford, UK: Oxford University Press.
Hogbom, J. (1974). "Aperture synthesis with a non-regular distribution of interferometer baselines," Astronomy and Astrophysics, Supplement 15, 417-426.
Horn, R. A. and C. R. Johnson (1985). Matrix Analysis. Cambridge, UK: Cambridge University Press.
Horn, R. A. and C. R. Johnson (1989). Topics in Matrix Analysis. Cambridge, UK: Cambridge University Press.
Hua, Y. and T. Sarkar (1990). "Matrix pencil method for estimating parameters of exponentially damped/undamped sinusoids in noise," IEEE Transactions on Acoustics, Speech, and Signal Processing 38 (5), 814-824.
Hudson, J. E. (1981). Adaptive Array Principles. London: Peter Peregrinus.
Hurvich, C. and C. Tsai (1993). "A corrected Akaike information criterion for vector autoregressive model selection," Journal of Time Series Analysis 14, 271-279.
Hwang, J.-K. and Y.-C. Chen (1993). "A combined detection-estimation algorithm for the harmonic-retrieval problem," Signal Processing 30 (2), 177-197.
Iohvidov, I. S. (1982). Hankel and Toeplitz Matrices and Forms. Boston: Birkhauser.
Ishii, R. and K. Furukawa (1986). "The uncertainty principle in discrete signals," IEEE Transactions on Circuits and Systems 33 (10), 1032-1034.
Jakobsson, A., L. Marple, and P. Stoica (2000). "Computationally efficient two-dimensional Capon spectrum analysis," IEEE Transactions on Signal Processing 48 (9), 2651-2661.
Jakobsson, A. and P. Stoica (2000). "Combining Capon and APES for estimation of spectral lines," Circuits, Systems, and Signal Processing 19, 159-169.
Janssen, P. and P. Stoica (1988). "On the expectation of the product of four matrix-valued Gaussian random variables," IEEE Transactions on Automatic Control AC-33 (9), 867-870.
Jansson, M. and P. Stoica (1999). "Forward-only and forward-backward sample covariances - a comparative study," Signal Processing 77 (3), 235-245.
Jenkins, G. M. and D. G. Watts (1968). Spectral Analysis and Its Applications. San Francisco: Holden-Day.
Johnson, D. H. and D. E. Dudgeon (1992). Array Signal Processing - Concepts and Methods. Englewood Cliffs, NJ: Prentice Hall.
Kailath, T. (1980). Linear Systems. Englewood Cliffs, NJ: Prentice Hall.
Kashyap, R. L. (1980). "Inconsistency of the AIC rule for estimating the order of autoregressive models," IEEE Transactions on Automatic Control 25 (5), 996-998.
Kashyap, R. L. (1982). "Optimal choice of AR and MA parts in autoregressive moving average models," IEEE Transactions on Pattern Analysis and Machine Intelligence 4 (2), 99-104.
Kay, S. M. (1988). Modern Spectral Estimation, Theory and Application. Englewood Cliffs, NJ: Prentice Hall.
Kesler, S. B. (ed.) (1986). Modern Spectrum Analysis II. New York: IEEE Press.
Kinkel, J. F., J. Perl, L. Scharf, and A. Stubberud (1979). "A note on covariance-invariant digital filter design and autoregressive-moving average spectral estimation," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-27 (2), 200-202.
Koopmans, L. H. (1974). The Spectral Analysis of Time Series. New York: Academic Press.
Kullback, S. and R. A. Leibler (1951). "On information and sufficiency," Annals of Mathematical Statistics 22, 79-86.
Kumaresan, R. (1983). "On the zeroes of the linear prediction-error filter for deterministic signals," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-31 (1), 217-220.
Kumaresan, R., L. L. Scharf, and A. K. Shaw (1986). "An algorithm for pole-zero modeling and spectral analysis," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-34 (6), 637-640.
Kumaresan, R. and D. W. Tufts (1983). "Estimating the angles of arrival of multiple plane waves," IEEE Transactions on Aerospace and Electronic Systems AES-19, 134-139.
Kung, S. Y., K. S. Arun, and D. V. B. Rao (1983). "State-space and singular-value decomposition-based approximation methods for the harmonic retrieval problem," J. Optical Soc. Amer. 73, 1799-1811.
Lacoss, R. T. (1971). "Data adaptive spectral analysis methods," Geophysics 36, 134-148. (Reprinted in [CHILDERS 1978].)
Lagunas, M., M. Santamaria, A. Gasull, and A. Moreno (1986). "Maximum likelihood filters in spectral estimation problems," Signal Processing 10 (1), 7-18.
Larsson, E., J. Li, and P. Stoica (2003). "High-resolution nonparametric spectral analysis: Theory and applications," in Y. Hua, A. Gershman, and Q. Cheng (eds.), High-Resolution and Robust Signal Processing. New York: Marcel Dekker.
Lee, J. and D. C. Munson Jr. (1995). "Effectiveness of spatially-variant apodization," in Proceedings of the International Conference on Image Processing, Washington, DC, Volume 1, 147-150.
Levinson, N. (1947). "The Wiener RMS (root mean square) criterion in filter design and prediction," Journal of Mathematics and Physics 25, 261-278.
Li, J. and P. Stoica (1996a). "An adaptive filtering approach to spectral estimation and SAR imaging," IEEE Transactions on Signal Processing 44 (6), 1469-1484.
Li, J. and P. Stoica (1996b). "Efficient mixed-spectrum estimation with applications to target feature extraction," IEEE Transactions on Signal Processing 44 (2), 281-295.
Li, J., P. Stoica, and Z. Wang (2003). "On robust Capon beamforming and diagonal loading," IEEE Transactions on Signal Processing 51 (7), 1702-1715.
Li, J., P. Stoica, and Z. Wang (2004). "Doubly constrained robust Capon beamformer," IEEE Transactions on Signal Processing 52 (9), 2407-2423.
Linhart, H. and W. Zucchini (1986). Model Selection. New York: Wiley.
Ljung, L. (1987). System Identification - Theory for the User. Englewood Cliffs, NJ: Prentice Hall.
Markel, J. D. (1971). "FFT pruning," IEEE Transactions on Audio and Electroacoustics AU-19 (4), 305-311.
Marple, L. (1987). Digital Spectral Analysis with Applications. Englewood Cliffs, NJ: Prentice Hall.
Marzetta, T. L. (1983). "A new interpretation for Capon's maximum likelihood method of frequency-wavenumber spectrum estimation," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-31 (2), 445-449.
Mayne, D. Q. and F. Firoozan (1982). "Linear identification of ARMA processes," Automatica 18, 461-466.
McCloud, M., L. Scharf, and C. Mullis (1999). "Lag-windowing and multiple-data-windowing are roughly equivalent for smooth spectrum estimation," IEEE Transactions on Signal Processing 47 (3), 839-843.
McKelvey, T. and M. Viberg (2001). "A robust frequency domain subspace algorithm for multi-component harmonic retrieval," in Proceedings of the 35th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, 68-72.
McLachlan, G. J. and T. Krishnan (1997). The EM Algorithm and Extensions. New York: Wiley.
McQuarrie, A. D. R. and C.-L. Tsai (1998). Regression and Time Series Model Selection. Singapore: World Scientific Publishing.
Moon, T. K. (1996). "The expectation-maximization algorithm," IEEE Signal Processing Magazine 13, 47-60.
Moses, R. and A. A. Beex (1986). "A comparison of numerator estimators for ARMA spectra," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-34 (6), 1668-1671.
Moses, R., V. Simonyte, P. Stoica, and T. Soderstrom (1994). "An efficient linear method for ARMA spectral estimation," International Journal of Control 59 (2), 337-356.
Mullis, C. T. and L. L. Scharf (1991). "Quadratic estimators of the power spectrum," in S. Haykin (ed.), Advances in Spectrum Analysis and Array Processing. Englewood Cliffs, NJ: Prentice Hall.
Musicus, B. (1985). "Fast MLM power spectrum estimation from uniformly spaced correlations," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-33 (6), 1333-1335.
Naidu, P. S. (1996). Modern Spectrum Analysis of Time Series. Boca Raton, FL: CRC Press.
Ninness, B. (2003). "The asymptotic CRLB for the spectrum of ARMA processes," IEEE Transactions on Signal Processing 51 (6), 1520-1531.
Onn, R. and A. O. Steinhardt (1993). "Multi-window spectrum estimation - a linear algebraic approach," International Journal on Adaptive Control and Signal Processing 7, 103-116.
Oppenheim, A. V. and R. W. Schafer (1989). Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice Hall.
Ottersten, B., P. Stoica, and R. Roy (1998). "Covariance matching estimation techniques for array signal processing applications," Digital Signal Processing 8, 185-210.
Ottersten, B., M. Viberg, P. Stoica, and A. Nehorai (1993). "Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing," in S. Haykin, J. Litva, and T. J. Shephard (eds.), Radar Array Processing, 99-151. New York: Springer-Verlag.
Papoulis, A. (1977). Signal Analysis. New York: McGraw-Hill.
Paulraj, A., R. Roy, and T. Kailath (1986). "A subspace rotation approach to signal parameter estimation," Proceedings of the IEEE 74 (7), 1044-1046.
Percival, D. B. and A. T. Walden (1993). Spectral Analysis for Physical Applications - Multitaper and Conventional Univariate Techniques. Cambridge, UK: Cambridge University Press.
Pillai, S. U. (1989). Array Signal Processing. New York: Springer-Verlag.
Pisarenko, V. F. (1973). "The retrieval of harmonics from a covariance function," Geophysical Journal of the Royal Astronomical Society 33, 347-366. (Reprinted in [KESLER 1986].)
Porat, B. (1994). Digital Processing of Random Signals - Theory and Methods. Englewood Cliffs, NJ: Prentice Hall.
Porat, B. (1997). A Course in Digital Signal Processing. New York: Wiley.
Priestley, M. B. (1981). Spectral Analysis and Time Series. London: Academic Press.
Priestley, M. B. (1997). "Detection of periodicities," in T. S. Rao, M. B. Priestley, and O. Lessi (eds.), Applications of Time Series Analysis in Astronomy and Meteorology, pp. 65-88. London: Chapman and Hall.
Proakis, J. G., C. M. Rader, F. Ling, and C. L. Nikias (1992). Advanced Digital Signal Processing. New York: Macmillan.
Rao, B. D. and K. S. Arun (1992). "Model based processing of signals: A state space approach," Proceedings of the IEEE 80 (2), 283-309.
Rao, B. D. and K. V. S. Hari (1993). "Weighted subspace methods and spatial smoothing: Analysis and comparison," IEEE Transactions on Signal Processing 41 (2), 788-803.
Rao, C. R. (1945). "Information and accuracy attainable in the estimation of statistical parameters," Bulletin of the Calcutta Mathematical Society 37, 81-91.
Riedel, K. and A. Sidorenko (1995). "Minimum bias multiple taper spectral estimation," IEEE Transactions on Signal Processing 43 (1), 188-195.
Rissanen, J. (1978). "Modeling by the shortest data description," Automatica 14 (5), 465-471.
Rissanen, J. (1982). "Estimation of structure by minimum description length," Circuits, Systems, and Signal Processing 1 (3-4), 395-406.
Roy, R. and T. Kailath (1989). "ESPRIT - Estimation of signal parameters via rotational invariance techniques," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-37 (7), 984-995.
Sakamoto, Y., M. Ishiguro, and G. Kitagawa (1986). Akaike Information Criterion Statistics. Tokyo: KTK Scientific Publishers.
Sando, S., A. Mitra, and P. Stoica (2002). "On the Cramer-Rao bound for model-based spectral analysis," IEEE Signal Processing Letters 9 (2), 68-71.
Scharf, L. L. (1991). Statistical Signal Processing - Detection, Estimation, and Time Series Analysis. Reading, MA: Addison-Wesley.
Schmidt, R. O. (1979). "Multiple emitter location and signal parameter estimation," in Proc. RADC Spectral Estimation Workshop, Rome, NY, 243-258. (Reprinted in [KESLER 1986].)
Schuster, A. (1898). "On the investigation of hidden periodicities with application to a supposed twenty-six-day period of meteorological phenomena," Terr. Mag. 3 (1), 13-41.
Schuster, A. (1900). "The periodogram of magnetic declination as obtained from the records of the Greenwich Observatory during the years 1871-1895," Trans. Cambridge Philos. Soc. 18, 107-135.
Schwarz, G. (1978a). "Estimating the dimension of a model," Annals of Statistics 6, 461-464.
Schwarz, U. J. (1978b). "Mathematical-statistical description of the iterative beam removing technique (method CLEAN)," Astronomy and Astrophysics 65, 345-356.
Seghouane, A.-K., M. Bekara, and G. Fleury (2003). "A small sample model selection criterion based on Kullback's symmetric divergence," in Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Volume 6, Hong Kong, 145-148.
Slepian, D. (1954). "Estimation of signal parameters in the presence of noise," Transactions of the IRE Professional Group on Information Theory 3, 68-89.
Slepian, D. (1964). "Prolate spheroidal wave functions, Fourier analysis and uncertainty - IV," Bell System Technical Journal 43, 3009-3057. (See also Bell System Technical Journal, Volume 40, 43-64, 1961; Volume 44, 1745-1759, 1965; and Volume 57, 1371-1429, 1978.)
Soderstrom, T. and P. Stoica (1989). System Identification. London: Prentice Hall International.
Stankwitz, H. C., R. J. Dallaire, and J. R. Fienup (1994). "Spatially variant apodization for sidelobe control in SAR imagery," in Record of the 1994 IEEE National Radar Conference, 132-137.
Stewart, G. W. (1973). Introduction to Matrix Computations. New York: Academic Press.
Stoica, P. and O. Besson (2000). "Maximum likelihood DOA estimation for constant-modulus signal," Electronics Letters 36 (9), 849-851.
Stoica, P., O. Besson, and A. Gershman (2001). "Direction-of-arrival estimation of an amplitude-distorted wavefront," IEEE Transactions on Signal Processing 49 (2), 269-276.
Stoica, P., P. Eykhoff, P. Jannsen, and T. Soderstrom (1986). "Model structure selection by cross-validation," International Journal of Control 43 (11), 1841-1878.
Stoica, P., B. Friedlander, and T. Soderstrom (1987a). "Approximate maximum-likelihood approach to ARMA spectral estimation," International Journal of Control 45 (4), 1281-1310.
Stoica, P., B. Friedlander, and T. Soderstrom (1987b). "Instrumental variable methods for ARMA models," in C. T. Leondes (ed.), Control and Dynamic Systems - Advances in Theory and Applications, Volume 25, 79-150. New York: Academic Press.
Stoica, P., A. Jakobsson, and J. Li (1997). "Cisoid parameter estimation in the colored noise case - asymptotic Cramer-Rao bound, maximum likelihood, and nonlinear least squares," IEEE Transactions on Signal Processing 45 (8), 2048-2059.
Stoica, P. and E. G. Larsson (2001). "Comments on 'Linearization method for finding Cramer-Rao bounds in signal processing'," IEEE Transactions on Signal Processing 49 (12), 3168-3169.
Stoica, P., E. G. Larsson, and A. B. Gershman (2001). "The stochastic CRB for array processing: a textbook derivation," IEEE Signal Processing Letters 8 (5), 148-150.
Stoica, P., E. G. Larsson, and J. Li (2000). "Adaptive filter-bank approach to restoration and spectral analysis of gapped data," The Astronomical Journal 120 (4), 2163-2173.
Stoica, P., H. Li, and J. Li (1999). "A new derivation of the APES filter," IEEE Signal Processing Letters 6 (8), 205-206.
Stoica, P., T. McKelvey, and J. Mari (2000). "MA estimation in polynomial time," IEEE Transactions on Signal Processing 48 (7), 1999-2012.
Stoica, P. and R. Moses (1990). "On biased estimators and the unbiased Cramer-Rao lower bound," Signal Processing 21, 349-350.
Stoica, P., R. Moses, B. Friedlander, and T. Soderstrom (1989). "Maximum likelihood estimation of the parameters of multiple sinusoids from noisy measurements," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-37 (3), 378-392.
Stoica, P., R. Moses, T. Soderstrom, and J. Li (1991). "Optimal high-order Yule-Walker estimation of sinusoidal frequencies," IEEE Transactions on Signal Processing 39 (6), 1360-1368.
Stoica, P. and A. Nehorai (1986). "An asymptotically efficient ARMA estimator based on sample covariances," IEEE Transactions on Automatic Control AC-31 (11), 1068-1071.
Stoica, P. and A. Nehorai (1987). "On stability and root location of linear prediction models," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-35, 582-584.
Stoica, P. and A. Nehorai (1989a). "MUSIC, maximum likelihood, and Cramer-Rao bound," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-37 (5), 720-741.
Stoica, P. and A. Nehorai (1989b). "Statistical analysis of two nonlinear least-squares estimators of sine-wave parameters in the colored-noise case," Circuits, Systems, and Signal Processing 8 (1), 3-15.
Stoica, P. and A. Nehorai (1990). "Performance study of conditional and unconditional direction-of-arrival estimation," IEEE Transactions on Signal Processing 38 (10), 1783-1795.
Stoica, P. and A. Nehorai (1991). "Performance comparison of subspace rotation and MUSIC methods for direction estimation," IEEE Transactions on Signal Processing 39 (2), 446-453.
Stoica, P. and B. Ottersten (1996). "The evil of superefficiency," Signal Processing 55 (1), 133-136.
Stoica, P., N. Sandgren, Y. Selen, L. Vanhamme, and S. Van Huffel (2003). "Frequency-domain method based on the singular value decomposition for frequency-selective NMR spectroscopy," Journal of Magnetic Resonance 165 (1), 80-88.
Stoica, P. and Y. Selen (2004a). "Cyclic minimizers, majorization techniques, and the expectation-maximization algorithm: A refresher," IEEE Signal Processing Magazine 21 (1), 112-114.
Stoica, P. and Y. Selen (2004b). "Model order selection: A review of the AIC, GIC, and BIC rules," IEEE Signal Processing Magazine 21 (2), 36-47.
Stoica, P., Y. Selen, and J. Li (2004). "Multi-model approach to model selection," Digital Signal Processing 14 (5), 399-412.
Stoica, P. and K. C. Sharman (1990). "Maximum likelihood methods for direction-of-arrival estimation," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-38 (7), 1132-1143.
Stoica, P., T. Soderstrom, and F. Ti (1989). "Asymptotic properties of the high-order Yule-Walker estimates of sinusoidal frequencies," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-37 (11), 1721-1734.
Stoica, P. and T. Soderstrom (1991). "Statistical analysis of MUSIC and subspace rotation estimates of sinusoidal frequencies," IEEE Transactions on Signal Processing 39 (8), 1836-1847.
Stoica, P. and T. Sundin (2001). "Nonparametric NMR spectroscopy," Journal of Magnetic Resonance 152 (1), 57-69.
Stoica, P., Z. Wang, and J. Li (2003). "Robust Capon beamforming," IEEE Signal Processing Letters 10 (6), 172-175.
Strang, G. (1988). Linear Algebra and Its Applications. Orlando, FL: Harcourt Brace Jovanovich.
Sturm, J. F. (1999). "Using SeDuMi, a Matlab toolbox for optimization over symmetric cones," Optimization Methods and Software 11-12, 625-653. (Software available online at http://fewcal.uvt.nl/sturm/software/sedumi.html.)
Therrien, C. W. (1992). Discrete Random Signals and Statistical Signal Processing. Englewood Cliffs, NJ: Prentice Hall.
Thomson, D. J. (1982). "Spectrum estimation and harmonic analysis," Proceedings of the IEEE 70 (9), 1055-1096.
Tufts, D. W. and R. Kumaresan (1982). "Estimation of frequencies of multiple sinusoids: Making linear prediction perform like maximum likelihood," Proceedings of the IEEE 70 (9), 975-989.
Umesh, S. and D. W. Tufts (1996). "Estimation of parameters of exponentially damped sinusoids using fast maximum likelihood estimation with application to NMR spectroscopy data," IEEE Transactions on Signal Processing 44 (9), 2245-2259.
Van Huffel, S. and J. Vandewalle (1991). The Total Least Squares Problem: Computational Aspects and Analysis. Philadelphia: SIAM.
van Overschee, P. and B. de Moor (1996). Subspace Identification for Linear Systems: Theory - Implementation - Methods. Boston: Kluwer Academic.
Van Trees, H. L. (1968). Detection, Estimation, and Modulation Theory, Part I. New York: Wiley.
Van Trees, H. L. (2002). Optimum Array Processing (Part IV of Detection, Estimation, and Modulation Theory). New York: Wiley.
Van Veen, B. D. and K. M. Buckley (1988). "Beamforming: A versatile approach to spatial filtering," IEEE ASSP Magazine 5 (2), 4-24.
Viberg, M. (1995). "Subspace-based methods for the identification of linear time-invariant systems," Automatica 31 (12), 1835-1851.
Viberg, M. and B. Ottersten (1991). "Sensor array processing based on subspace fitting," IEEE Transactions on Signal Processing 39 (5), 1110-1121.
Viberg, M., B. Ottersten, and T. Kailath (1991). "Detection and estimation in sensor arrays using weighted subspace fitting," IEEE Transactions on Signal Processing 39 (11), 2436-2449.
Viberg, M., P. Stoica, and B. Ottersten (1995). "Array processing in correlated noise fields based on instrumental variables and subspace fitting," IEEE Transactions on Signal Processing 43 (5), 1187-1195.
Vostry, Z. (1975). "New algorithm for polynomial spectral factorization with quadratic convergence, Part I," Kybernetika 11, 415-422.
Vostry, Z. (1976). "New algorithm for polynomial spectral factorization with quadratic convergence, Part II," Kybernetika 12, 248-259.
Walker, G. (1931). "On periodicity in series of related terms," Proceedings of the Royal Society of London 131, 518-532.
Wax, M. and T. Kailath (1985). "Detection of signals by information theoretic criteria," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-33 (2), 387-392.
Wei, W. (1990). Time Series Analysis. New York: Addison-Wesley.
Welch, P. D. (1967). "The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms," IEEE Transactions on Audio and Electroacoustics AU-15 (2), 70-76. (Reprinted in [KESLER 1986].)
Wilson, G. (1969). "Factorization of the covariance generating function of a pure moving average process," SIAM Journal on Numerical Analysis 6 (1), 1-7.
Ying, C. J., L. C. Potter, and R. Moses (1994). "On model order determination for complex exponential signals: Performance of an FFT-initialized ML algorithm," in Proceedings of IEEE Seventh SP Workshop on SSAP, Quebec City, QC, 43-46.
Yule, G. U. (1927). "On a method of investigating periodicities in disturbed series, with special reference to Wolfer's sunspot numbers," Philos. Trans. R. Soc. London 226, 267-298. (Reprinted in [KESLER 1986].)
Ziskind, I. and M. Wax (1988). "Maximum likelihood localization of multiple sources by alternating projection," IEEE Transactions on Acoustics, Speech, and Signal Processing ASSP-36 (10), 1553-1560.
References Grouped by Subject
Books on Spectral Analysis [BLOOMFIELD
1976]
[BRACEWELL
1986J
[CHILDERS
1995]
[COHEN [HAYES
1978]
III 1996]
[HAYKIN
1991]
1995J [KAy 1988] [HAYKIN
[KESLER
1986] 1974]
[KOOPMANS [MARPLE [NAIDU
1987]
1996]
[PERCIvAL AND WALDEN [PRIESTLEY
1993]
1981]
Books about Spectral Analysis and Allied Topics [AOKI
1987]
[PORAT
1994]
[PROAKIS, RADER, LING, AND NIKIAS
1992]
435
. ......
- -
References Grouped by Subject
436 1991]
[SCHARF
1989]
[SODERSTROM AND STOICA [THERRIEN
1992]
[VAN OVERSCHEE AND DE MOOR
1996]
Books on Linear Systems and Signals [HANNAN AND DEISTLER [KAILATH
1988]
1980]
[OPPENHEIM AND SCHAFER [PORAT
1989]
1997]
Books on Array Signal Processing [HA YKIN 1991] [HA YKIN
1995]
[HUDSON
1981]
[JOHNSON AND DUDGEON [PILLAI
1992]
1989]
[VAN TREES 2002]
Works on Time Series, Estimation Theory, and Statistics 1971]
[ANDERSON [BHANSALl
1980]
[BRILLINGER
1981]
[BROCKWELL AND DAVIS [CLEVELAND [CRAMER
1991]
1972]
1946]
[DEMPSTER, LAnID~ AND RUBIN [FISHER
1922]
[HEISER
1995]
[JANSSEN AND STOICA
1988]
[MCLACHLAN AND KRISHNAN [MOON [RAO
1996]
1945]
[SLEPIAN
1954]
[STOICA AND MOSES
1977]
1990]
1997]
References Grouped by Subject
[STOICA AND OTTERSTEN 1996] [STOICA AND SELEN 2004A] [VIBERG 1995] [WEI 1990]
Works on Matrix Analysis and Linear Algebra [BOTTCHER AND SILBERMANN 1983] [CANTONI AND BUTLER 1976] [GOLUB AND VAN LOAN 1989] [GRAY 1972] [HORN AND JOHNSON 1985] [HORN AND JOHNSON 1989] [IOHVIDOV 1982] [STEWART 1973] [STRANG 1988] [VAN HUFFEL AND VANDEWALLE 1991]
Works on Nonparametric Temporal Spectral Analysis (a) Historical
[BARTLETT 1948] [BARTLETT 1950] [DANIELL 1946] [SCHUSTER 1898] [SCHUSTER 1900]
(b) Classical [BLACKMAN AND TUKEY 1959] [BURG 1972] [COOLEY AND TuKEY 1965] [HARRIS 1978] [JENKINS AND WATTS 1968] [LACOSS 1971] [SLEPIAN 1964] [THoMSON 1982] [WELCH 1967]
(c) More Recent [BRONEZ 1992]
437
438
References Grouped by Subject
[CALVEZ AND VILBE 1992] [DEGRAAF 1994] [DoROSLOVACKI 1998] [ISHII AND FURUKAWA 1986] [JAKOBssON, MARPLE, AND STOICA 2000] [LAGUNAS, SANTAMARIA, GASULL, AND MORENO 1986] [LARSSON, LI, AND STOICA 2003] [LEE AND MUNSON JR. 1995] [LI AND STOICA 1996A] [MCCLOUD, SCHARF, AND MULLIS 1999] [MULLIS AND SCHARF 1991] [MUSICUS 1985] [ONN AND STEINHARDT 1993] [RIEDEL AND SIDORENKO 1995] [STANKWITZ, DALLAIRE, AND FIENUP 1994] [STOICA, LARSSON, AND LI 2000] [STOICA, LI, AND LI 1999]
Works on Parametric Temporal Rational Spectral Analysis

(a) Historical

[YULE 1927]
[WALKER 1931]

(b) Classical

[BURG 1975]
[CADZOW 1982]
[DURBIN 1959]
[DURBIN 1960]
[GERSH 1970]
[LEVINSON 1947]

(c) More Recent

[BYRNES, GEORGIOU, AND LINDQUIST 2000]
[BYRNES, GEORGIOU, AND LINDQUIST 2001]
[CHOI 1992]
[DELSARTE AND GENIN 1986]
[DUMITRESCU, TABUS, AND STOICA 2001]
[FUCHS 1987]
[KINKEL, PERL, SCHARF, AND STUBBERUD 1979]
[MAYNE AND FIROOZAN 1982]
[MOSES AND BEEX 1986]
[MOSES, SIMONYTE, STOICA, AND SODERSTROM 1994]
[STOICA, FRIEDLANDER, AND SODERSTROM 1987A]
[STOICA, FRIEDLANDER, AND SODERSTROM 1987B]
[STOICA, MCKELVEY, AND MARI 2000]
[STOICA AND NEHORAI 1986]
[STOICA AND NEHORAI 1987]
Works on Parametric Temporal Line-Spectral Analysis

(a) Classical

[BANGS 1971]
[HOGBOM 1974]
[KUMARESAN 1983]
[KUNG, ARUN, AND RAO 1983]
[PAULRAJ, ROY, AND KAILATH 1986]
[PISARENKO 1973]
[TUFTS AND KUMARESAN 1982]

(b) More Recent

[BESSON AND STOICA 1999]
[BRESLER AND MACOVSKI 1986]
[CLARK, ELDEN, AND STOICA 1997]
[CLARK AND SCHARF 1994]
[CORNWELL AND BRIDLE 1996]
[FUCHS 1988]
[HUA AND SARKAR 1990]
[JAKOBSSON, MARPLE, AND STOICA 2000]
[KUMARESAN, SCHARF, AND SHAW 1986]
[LI AND STOICA 1996B]
[MCKELVEY AND VIBERG 2001]
[SCHWARZ 1978A]
[STOICA, BESSON, AND GERSHMAN 2001]
[STOICA, JAKOBSSON, AND LI 1997]
[STOICA, MOSES, FRIEDLANDER, AND SODERSTROM 1989]
[STOICA, MOSES, SODERSTROM, AND LI 1991]
[STOICA AND NEHORAI 1989B]
[STOICA AND SODERSTROM 1991]
[STOICA, SODERSTROM, AND TI 1989]
[UMESH AND TUFTS 1996]
Works on Nonparametric Spatial Spectral Analysis

(a) Classical

[CAPON 1969]

(b) More Recent

[ABRAHAMSSON, JAKOBSSON, AND STOICA 2004]
[BANGS 1971]
[FELDMAN AND GRIFFITHS 1994]
[GINI AND LOMBARDINI 2002]
[JOHNSON AND DUDGEON 1992]
[LI, STOICA, AND WANG 2003]
[LI, STOICA, AND WANG 2004]
[MARZETTA 1983]
[STOICA, WANG, AND LI 2003]
[VAN VEEN AND BUCKLEY 1988]
Works on Parametric Spatial Spectral Analysis

(a) Classical

[BARABELL 1983]
[BIENVENU 1979]
[KUMARESAN AND TUFTS 1983]
[ROY AND KAILATH 1989]
[SCHMIDT 1979]
[WAX AND KAILATH 1985]

(b) More Recent

[BOHME 1991]
[DORON, DORON, AND WEISS 1993]
[FUCHS 1992]
[FUCHS 1996]
[OTTERSTEN, VIBERG, STOICA, AND NEHORAI 1993]
[PILLAI 1989]
[RAO AND HARI 1993]
[STOICA AND BESSON 2000]
[STOICA, BESSON, AND GERSHMAN 2001]
[STOICA AND NEHORAI 1989A]
[STOICA AND NEHORAI 1990]
[STOICA AND NEHORAI 1991]
[STOICA AND SHARMAN 1990]
[VIBERG AND OTTERSTEN 1991]
[VIBERG, OTTERSTEN, AND KAILATH 1991]
[VIBERG, STOICA, AND OTTERSTEN 1995]
[ZISKIND AND WAX 1988]
Works on Model Order Selection

(a) Classical

[AKAIKE 1974]
[AKAIKE 1978]
[BHANSALI AND DOWNHAM 1977]
[KASHYAP 1980]
[KASHYAP 1982]
[RISSANEN 1978]
[RISSANEN 1982]
[SCHWARZ 1978B]

(b) More Recent

[BROERSEN 2000]
[BROERSEN 2002]
[BURNHAM AND ANDERSON 2002]
[CAVANAUGH 1997]
[CHOI 1992]
[DE WAELE AND BROERSEN 2003]
[DJURIC 1998]
[HURVICH AND TSAI 1993]
[LINHART AND ZUCCHINI 1986]
[MCQUARRIE AND TSAI 1998]
[SAKAMOTO, ISHIGURO, AND KITAGAWA 1986]
[SEGHOUANE, BEKARA, AND FLEURY 2003]
[STOICA, EYKHOFF, JANSSEN, AND SODERSTROM 1986]
[STOICA AND SELEN 2004B]
Index

A
Akaike information criterion, 409-412
    corrected, 412
    generalized, 411-413
all-pole signals, 94
amplitude and phase estimation (APES) method, 256-259, 303
    for gapped data, 259-262
    for spatial spectra, 319-326
    for two-dimensional signals, 267-269
amplitude spectrum, 253, 258
    Capon estimates of, 254-255
angle of arrival, 276
aperture, 276
APES method, see amplitude and phase estimation method
apodization, 62-67
AR process, see autoregressive process
AR spectral estimation, see autoregressive spectral estimation
ARMA process, see autoregressive moving average process
array
    aperture of, 276
    beamforming resolution, 334, 335
    beamspace processing, 337, 338
    beamwidth, 290, 336
    broadband signals in, 281
    coherent signals in, 293, 340
    isotropic, 336
    L-shaped, 335
    narrowband, 281, 283
    planar, 276
    uniform linear, 283-285
array model, 277-285
autocorrelation function, 122
autocorrelation method, 97
autocovariance sequence
    computation using FFT, 58-60
    computer generation of, 20
    definition of, 5
    estimates, 24
    estimation variance, 76
    extensions, 123-125, 182
    for signals with unknown means, 75
    for sinusoidal signals, 151, 152
    generation from ARMA parameters, 136
    mean-square convergence of, 179
    of ARMA processes, 93-94
    properties, 5-6
autoregressive (AR) process
    covariance structure, 93
    definition of, 92
    stability of, 139
autoregressive moving average (ARMA) process
    covariance structure, 93
    definition of, 92
    multivariate, 113-122
    state-space equations, 114
autoregressive moving-average spectral estimation, 107-122
    least-squares method, 111-113
    modified Yule-Walker method, 108-111
    multivariate, 117-122
autoregressive spectral estimation, 94-99
    autocorrelation method, 97
    Burg method, 125-127
    covariance method, 94, 97
    least-squares method, 96-99
    postwindow method, 98
    prewindow method, 98
    Yule-Walker method, 94

B

backward prediction, 122, 123, 137
bandpass signal, 277
bandwidth
    approximate formula, 81
    definition of, 71
    equivalent, 43, 58, 73, 234
Bartlett method, 52-53
Bartlett window, 31, 45
baseband signal, 278
basis
    linearly parameterized, 203-209
    null space, 203-209
Bayesian information criterion, 413-417
beamforming, 288-291, 300-302
beamforming method, 307
    and CLEAN, 326-331
beamspace processing, 337, 338
beamwidth, 290, 335, 336
BIC rule, 413-417
Blackman window, 45
Blackman-Tukey method, 39-42
    computation using FFT, 60-62
    nonnegativeness property, 41
block-Hankel matrix, 118
broadband signal, 281
Burg method, 125-127
C

CAPES method, 259
Capon method, 232-242, 302-306
    as a matched filter, 270
    comparison with APES, 258-259
    constrained, 311-319
    derivation of, 232-238, 270
    for damped sinusoids, 253-256
    for DOA estimation, 291-293
    for two-dimensional signals, 267-269
    relationship to AR methods, 238-242, 246-249
    robust, 306-319
    spectrum of, 271
    stochastic signal, 302-303
Caratheodory parameterization, 311
carrier frequency, 277
Cauchy-Schwartz inequality, 270, 291, 314, 317, 330, 360-361
    for functions, 361
    for vectors, 360-361
centrosymmetric matrix, 178, 332
Chebyshev inequality, 211
Chebyshev window, 44
chi-squared distribution, 184
Cholesky factor, 134, 358
circular Gaussian distribution, 80, 331, 380, 387
circular white noise, 34, 39
CLEAN algorithm, 326-331
coherence spectrum, 67-70
coherent signals, 293, 340
column space, 342
complex demodulation, 280
complex envelope, 280
complex modulation, 279
complex white noise, 34
concave function, 192
condition number, 212
    and AR parameter estimation, 109
    and forward-backward approach, 212
    definition of, 366
confidence interval, 79
consistent estimator, 373
consistent linear equations, 364-366
constant-modulus signal, 300-302
constrained Capon method, 311-319
continuous spectra, 90
convergence
    in probability, 211
    mean-square, 179, 180, 211
    uniform, 272
corrected Akaike information criterion, 412
correlation coefficient, 15
correlogram method, 24-27
covariance
    definition of, 5
    matrix, 6
covariance fitting, 303-306, 329
    using CLEAN, 328-331
covariance-fitting criterion, 132
covariance function, see autocovariance sequence
covariance matrix
    diagonalization of, 139
    eigenvalue decomposition of, 309, 315
    persymmetric, 178, 332
    properties of, 6
covariance method, 97
covariance sequence, see autocovariance sequence
Cramer-Rao bound, 373-396, 400
    for Gaussian distributions, 378-383
    for general distributions, 376-378
    for line spectra, 383-384
    for rational spectra, 384-386
    for spatial spectra, 387-396
    for unknown model order, 376
cross covariance sequence, 19
cross-spectrum, 13, 19, 67
cyclic minimization, 189-190
cyclic minimizer, 260

D

damped sinusoidal signals, 203-209, 253-256
Daniell method, 55-58
delay operator, 11
Delsarte-Genin Algorithm, 102-106
demodulation, 280
diagonal loading, 312-319
Dirac impulse, 152
direction of arrival, 276
direction-of-arrival estimation, 275-298
    beamforming, 288-291
    Capon method, 291-293
    ESPRIT method, 297-298
    Min-Norm method, 297
    MUSIC method, 297
    nonlinear least-squares method, 294
    nonparametric methods, 285-293
    parametric methods, 293-298
    Pisarenko method, 297
    Yule-Walker method, 295
direction vector uncertainty, 306-319
Dirichlet kernel, 33
discrete Fourier transform (DFT), 27
    linear transformation interpretation, 77
discrete signals, 2
discrete spectrum, 153
discrete-time Fourier transform (DTFT), 3
discrete-time system, 11
    finite-impulse response (FIR), 18
    frequency response, 221
    minimum phase, 93, 135
    transfer function, 221
displacement operator, 129
displacement rank, 130
Doppler frequency, 334
Durbin's method, 106, 113

E

efficiency, statistical, 375
eigenvalue, 345
    of a matrix product, 347
eigenvalue decomposition, 309, 315, 344-351
eigenvector, 345
EM algorithm, 188-194
energy spectral density, 3
    Capon estimates of, 254-255
    of damped sinusoids, 253
ergodic, 179
ESPRIT method
    and min-norm, 212, 213
    combined with HOYW, 211
    for DOA estimation, 297-298
    for frequency estimation, 174-175
    frequency selective, 194-203
    statistical accuracy of, 175
estimate
    consistent, 91, 141, 154, 158, 159, 184, 185, 272, 291, 373
    statistically efficient, 375
    unbiased, 374
Euclidean vector norm, 354
exchange matrix, 362
Expectation-Maximization algorithm, 188-194
expected value, 5
exponentially damped sinusoids, 253-256
extended Rayleigh quotient, 350

F

far field, 276
fast Fourier transform (FFT), 27-29
    for two-sided sequences, 21
    pruning in, 29
    radix-two, 27-29
    two-dimensional, 264, 269
    zero padding and, 29
Fejer kernel, 32
filter-bank methods, 217-232
    and periodogram, 220-222, 242-245
    APES, 256-259, 303
    for gapped data, 259-262
    for two-dimensional signals, 265-266
    refined, 222-232
    spatial APES, 319-326
Fisher information matrix, 378
flatness, spectral, 138
forward prediction, 122, 123, 136
forward-backward approach, 175-178
frequency, 2, 3, 8
    angular, 3
    conversion, 3
    resolution, 33
    scaling, 16
    spatial, 284
frequency band, 195
frequency estimation, 153-178
    ESPRIT method, 174-175
    forward-backward approach, 175-178
    frequency-selective ESPRIT, 194-203
    FS-ESPRIT, 194-203
    high-order Yule-Walker method, 162-166
    Min-Norm method, 171-173
    modified MUSIC method, 170
    MUSIC method, 166-169
    nonlinear least squares, 157-162
    Pisarenko method, 169
    spurious estimates, 170
    two-dimensional, 203-209
frequency-selective method, 194-203
Frobenius norm, 355, 365, 367
G

GAPES method, 259-262
gapped data, 259-262
Gaussian distribution
    circular, 380, 387
Gaussian random variable
    circular, 80
    Cramer-Rao bound for, 378-383
    moment property, 35
generalized Akaike information criterion, 411
generalized inverse, 365
Gohberg-Semencul formula, 128-130
grating lobes, 337

H

Hadamard matrix product, 358, 392
Hamming window, 45
Hankel matrix, 362
    block, 118
Hanning window, 45
Heisenberg uncertainty principle, 72
Hermitian matrix, 344, 348-351
Hermitian square root, 358
hypothesis testing, 183

I

idempotent, 294, 354
impulse response, 20, 72, 220, 224-226, 277
in-phase component, 280
inconsistent linear equations, 366-370
information matrix, 378
interior-point methods, 135
inverse covariances, 249
J

Jensen's inequality, 192, 395

K

Kaiser window, 44, 45
kernel, 343
Kronecker delta, 4
Kronecker product, 265, 266
Kullback-Leibler information metric, 405-406

L

Lagrange multiplier, 308-309, 315
leading submatrix, 357
least squares, 20, 109, 171, 302, 303, 320, 329
    spectral approximation, 18
    with quadratic constraints, 308
least-squares method, 94, 96-99, 238, 255, 257, 259, 260, 263, 267, 268
least-squares solution, 367
Levinson-Durbin Algorithm, 100-101
    split, 102-106
likelihood function, 192, 375, 376, 378, 398
line spectrum, 153
linear equations
    consistent, 364-366
    inconsistent, 366-370
    least-squares solution, 367
    minimum-norm solution, 365
    systems of, 364-370
    total-least-squares solution, 369
linear prediction, 96, 122, 123, 125, 136-138
linear predictive modeling, 96
linearly parameterized basis, 203-209
low-pass signal, 278

M

MA covariance parameterization, 133
MA parameter estimation, 131-135
MA process, see moving average process
majorization, 190-191
majorizing function, 190
matrix
    centrosymmetric, 178, 332
    Cholesky factor, 358
    condition, 212, 366
    eigenvalue decomposition, 344-351
    exchange, 362
    fraction, 143
    Frobenius norm, 355, 365, 367
    Hankel, 362
    idempotent, 294, 354
    inversion lemma, 363
    Moore-Penrose pseudoinverse, 365
    orthogonal, 345
    partition, 359, 363
    persymmetric, 178, 332
    positive (semi)definite, 357-361
    QR decomposition, 368
    rank, 343
    rank deficient, 343
    semiunitary, 345, 349
    singular value decomposition, 118, 164, 351-356
    square root, 333, 358
    Toeplitz, 362
    trace, 346
    unitary, 164, 174, 212, 345, 348, 351, 360, 368
    Vandermonde, 361
matrix fraction description, 143
matrix inversion lemma, 258, 363
maximum a posteriori detection, 402-405
maximum likelihood estimate, 79, 157, 375, 382, 393, 398
    of covariance matrix, 331-333
maximum likelihood estimation, 398-402
    regularity conditions, 400
maximum likelihood method, 192
MDL principle, 416
mean-square convergence, 179
mean squared error, 30
Min-Norm method
    and ESPRIT, 212, 213
    for DOA estimation, 297
    for frequency estimation, 171-173
    root, 172
    spectral, 171
minimization
    cyclic, 189-190
    majorization, 190-191
    quadratic, 371-372
    relaxation algorithms, 190
minimum description length, 416
minimum-norm constraint, 298-300
minimum-norm solution, 365
minimum phase, 93, 135
missing data, 259-262
model order selection, 375-376, 397-419
    AIC rule, 409-412
    BIC rule, 413-417
    corrected AIC rule, 412
    generalized AIC rule, 412-413
    generalized information criterion, 411
    Kullback-Leibler metric, 405-406
    maximum a posteriori, 402-405
    MDL rule, 416
    multimodel, 418
modified MUSIC method, 170, 203-209
modified Yule-Walker method, 108-111
modulation, 279
Moore-Penrose pseudoinverse, 304, 365
moving-average noise, 211
moving-average parameter estimation, 131-135
moving-average process
    covariance structure, 93
    definition of, 92
    parameter estimation, 131-135
    reflection coefficients of, 140
moving-average spectral estimation, 106-107, 131-135
multimodel order selection, 418
multiple signal classification, see MUSIC method
multivariate systems, 113-122
MUSIC method
    for DOA estimation, 297
    modified, 170, 340
    root, 169
    spectral, 168
    subspace-fitting interpretation, 339

N

narrowband, 283
nilpotent matrix, 130
NLS method, see nonlinear least squares method
noise
    complex white, 34
noise gain, 312-314
nonlinear least-squares method
    for direction estimation, 294-295
    for frequency estimation, 157-162
nonsingular, 343
normal equations, 95
null space, 167, 343
null-space basis, 203-209
Nyquist sampling theorem
    spatial, 285
    temporal, 9

O

order selection, see model order selection
orthogonal complement, 353
orthogonal matrix, 345
orthogonal projection, 168, 198, 199, 354
overdetermined linear equations, 109, 364, 366-370

P

Pade approximation, 142
parameter estimation
    maximum likelihood, 398-402
PARCOR coefficient, 101
Parseval's theorem, 4, 132
partial autocorrelation sequence, 122
partial correlation coefficients, 101
partitioned matrix, 359, 363
periodogram
    and frequency estimation, 159
    bias analysis of, 30-34
    definition of, 24
    FFT computation of, 27-29
    for two-dimensional signals, 263-264
    properties of, 30-39
    variance analysis of, 34-39
    windowed, 50
periodogram method, 24
periodogram-based methods
    Bartlett, 52-53
    Daniell, 55-58
    refined, 52-58
    Welch, 53-55
persymmetric matrix, 178, 332
Pisarenko method
    ARMA model derivation of, 210
    for DOA estimation, 297
    for frequency estimation, 166, 169
    relation to MUSIC, 169
planar wave, 276, 283
positive (semi)definite matrices, 357-361
postwindow method, 98
power spectral density
    and linear systems, 12
    continuous, 90
    definition of, 6, 7
    properties of, 9
    rational, 91
prediction
    backward, 122
    forward, 136
    linear, 96, 123, 125, 136-138, 386
prediction error, 96, 386
prewindow method, 98
principal submatrix, 357
probability density function, 72, 79, 375, 378
projection matrix, 198, 199
projection operator, 354

Q

QR decomposition, 368
quadratic minimization, 371-372
quadratic program, 135
quadrature component, 280

R

random signals, 2
range space, 167, 342
rank, 199
rank-deficient, 343
rank of a matrix, 343
rank of a matrix product, 343
rational spectra, 91
Rayleigh quotient, 349
    extended, 350
rectangular window, 45
reflection coefficient, 101
    properties of, 140
region of convergence, 92
RELAX algorithm, 190
relaxation algorithms, 190
resolution
    and time-bandwidth product, 72
    and window design, 42
    and zero padding, 29
    for filter-bank methods, 218
    for parametric methods, 162, 214
    frequency, 33
    limit, 33
    of beamforming method, 290, 334
    of Blackman-Tukey method, 41
    of Capon method, 236, 240, 249
    of common windows, 45
    of Daniell method, 57
    of periodogram, 23, 33
    of periodogram-based methods, 88
    spatial, 290, 334-337
    superresolution, 145, 146, 154
Riccati equation, 116
Rihaczek distribution, 16
robust Capon method, 312-319
root MUSIC
    for DOA estimation, 297
    for frequency estimation, 169
row space, 342

S

sample covariance, 24, 25, 53, 58, 75-77, 80, 83
sample covariance matrix
    ML estimates of, 331-333
sampling
    Nyquist sampling theorem, 9
    spatial, 276, 285, 286
    temporal, 3
semiparametric estimation, 326
semidefinite quadratic program, 135
semiunitary matrix, 345, 349
sidelobe, 34, 44, 45
signal modeling, 92
similarity transformation, 116, 175, 346
singular value decomposition (SVD), 118, 164, 304, 351-356
sinusoidal signals
    amplitude estimation, 153
    ARMA model, 155
    covariance-matrix model, 156
    damped, 203-209, 253-256
    frequency estimation, 153-178
    models of, 150, 155-157
    nonlinear regression model, 155
    phase estimation, 153
    two-dimensional, 262-269
skew-symmetric vector, 362
Slepian sequences, 225-226
    two-dimensional, 265-266
smoothed periodogram, 231
smoothing filter, 137
spatial filter, 287, 290
spatial frequency, 284
spatial sampling, 285, 286
spatial spectral-estimation problem, 275
spectral analysis
    high-resolution, 154
    nonparametric, 2
    parametric, 2
    semi-parametric, 326
    superresolution, 154
spectral density
    energy, 3
    power, 5
spectral estimation
    definition of, 1, 13
spectral factorization, 92, 132, 170
spectral flatness, 138
spectral-line analysis, 153
spectral LS criterion, 131
spectral MUSIC
    for DOA estimation, 297
    for frequency estimation, 168
spectrum
    coherency, 13-15
    continuous, 90
    cross, 19
    discrete, 153
    rational, 91
split Levinson algorithm, 102
square root of a matrix, 333, 358
stability
    for AR models, 139
    of AR estimates, 94
    of Pade approximation, 142
    of Yule-Walker estimates, 98, 139
state-space equations
    for ARMA process, 114
    minimality, 117
    nonuniqueness of, 116
statistically efficient estimator, 375
steering-vector uncertainty, 306-319
structure indices, 114
subarrays, 298
submatrix
    leading, 357
    principal, 357
subspace
    and state-space representations, 114, 117-122
    noise, 168
    signal, 168
superresolution, 145, 154
symmetric matrix, 344
symmetric vector, 362
synthetic aperture, 334
systems of linear equations, 364-370

T

taper, 50
Taylor series expansion, 373
time width
    definition of, 71
    equivalent, 43, 53, 58, 73
time-bandwidth product, 42-44, 70-75
time-frequency distributions, 16
Toeplitz matrix, 362
total least squares, 109, 165, 171, 175, 369
trace of a matrix, 346
trace of a matrix product, 346
transfer function, 11
two-dimensional sinusoidal signals, 203-209
two-dimensional spectral analysis
    APES method, 267-269
    Capon method, 267-269
    periodogram, 263-264
    refined filter-bank method, 265-266
two-sided sequences, 21

U

unbiased estimate, 374
uncertainty principle, 71
uniform linear array, 283-285
    beamforming resolution, 334
    spatial APES, 319-326
unitary matrix, 164, 174, 212, 345, 348, 351, 360, 368

V

Vandermonde matrix, 361
vector
    skew-symmetric, 362
    symmetric, 362
vectorization, 265, 268

W

wave field, 275
    planar, 276
Welch method, 53-55
white noise
    complex, 34
    real, 39
whitening filter, 151
Wiener-Hopf equation, 20
window function
    Bartlett, 45
    Chebyshev, 44
    common, 45
    data and frequency-dependent, 62
    design of, 42
    Hamming, 45
    Hanning, 45
    Kaiser, 44
    leakage, 34
    main lobe, 33
    rectangular, 45
    resolution, 33
    resolution-variance tradeoffs, 42-44
    sidelobes, 33

Y

Yule-Walker equations, 95
Yule-Walker method
    for AR processes, 94
    for DOA estimation, 295
    for frequency estimation, 162-166
    modified, 108-111
    overdetermined, 109
    stability property, 98

Z

zero padding, 28-29
zeroes
    extraneous, 299, 300
    in ARMA model, 92, 112