WILEY SERIES IN PROBABILITY AND STATISTICS
Established by WALTER A. SHEWHART and SAMUEL S. WILKS
Editors: Vic Barnett, Ralph A. Bradley, Nicholas I. Fisher, J. Stuart Hunter, J. B. Kadane, David G. Kendall, David W. Scott, Adrian F. M. Smith, Jozef L. Teugels, Geoffrey S. Watson
A complete list of the titles in this series appears at the end of this volume.
Matrix Analysis for Statistics

JAMES R. SCHOTT
A Wiley-Interscience Publication
JOHN WILEY & SONS, INC.
New York • Chichester • Brisbane • Toronto • Singapore • Weinheim
This text is printed on acid-free paper. Copyright © 1997 by John Wiley & Sons, Inc. All rights reserved. Published simultaneously in Canada. Reproduction or translation of any part of this work beyond that permitted by Section 107 or 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012.
Library of Congress Cataloging in Publication Data:
Schott, James R., 1955-
Matrix analysis for statistics / James R. Schott.
p. cm. (Wiley series in probability and statistics. Applied probability and statistics)
"A Wiley-Interscience publication."
Includes bibliographical references and index.
ISBN 0-471-15409-1 (cloth : alk. paper)
1. Matrices. 2. Mathematical statistics. I. Title. II. Series.
QA188.S24 1996
512.9'434 dc20
96-12133
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
To Susan, Adam, and Sarah
Contents
1. A Review of Elementary Matrix Algebra    1
   1. Introduction, 1
   2. Definitions and Notation, 1
   3. Matrix Addition and Multiplication, 2
   4. The Transpose, 3
   5. The Trace, 4
   6. The Determinant, 5
   7. The Inverse, 8
   8. Partitioned Matrices, 11
   9. The Rank of a Matrix, 13
   10. Orthogonal Matrices, 14
   11. Quadratic Forms, 15
   12. Complex Matrices, 16
   13. Random Vectors and Some Related Statistical Concepts, 18
   Problems, 28

2. Vector Spaces    32
   1. Introduction, 32
   2. Definitions, 32
   3. Linear Independence and Dependence, 38
   4. Bases and Dimension, 41
   5. Matrix Rank and Linear Independence, 43
   6. Orthonormal Bases and Projections, 48
   7. Projection Matrices, 52
   8. Linear Transformations and Systems of Linear Equations, 60
   9. The Intersection and Sum of Vector Spaces, 67
   10. Convex Sets, 70
   Problems, 74

3. Eigenvalues and Eigenvectors    84
   1. Introduction, 84
   2. Eigenvalues, Eigenvectors, and Eigenspaces, 84
   3. Some Basic Properties of Eigenvalues and Eigenvectors, 88
   4. Symmetric Matrices, 93
   5. Continuity of Eigenvalues and Eigenprojections, 102
   6. Extremal Properties of Eigenvalues, 104
   7. Some Additional Results Concerning Eigenvalues, 111
   Problems, 122

4. Matrix Factorizations and Matrix Norms    131
   1. Introduction, 131
   2. The Singular Value Decomposition, 131
   3. The Spectral Decomposition and Square Root Matrices of a Symmetric Matrix, 138
   4. The Diagonalization of a Square Matrix, 144
   5. The Jordan Decomposition, 147
   6. The Schur Decomposition, 149
   7. The Simultaneous Diagonalization of Two Symmetric Matrices, 154
   8. Matrix Norms, 157
   Problems, 162

5. Generalized Inverses    170
   1. Introduction, 170
   2. The Moore-Penrose Generalized Inverse, 171
   3. Some Basic Properties of the Moore-Penrose Inverse, 174
   4. The Moore-Penrose Inverse of a Matrix Product, 180
   5. The Moore-Penrose Inverse of Partitioned Matrices, 185
   6. The Moore-Penrose Inverse of a Sum, 186
   7. The Continuity of the Moore-Penrose Inverse, 188
   8. Some Other Generalized Inverses, 190
   9. Computing Generalized Inverses, 197
   Problems, 204

6. Systems of Linear Equations    210
   1. Introduction, 210
   2. Consistency of a System of Equations, 210
   3. Solutions to a Consistent System of Equations, 213
   4. Homogeneous Systems of Equations, 219
   5. Least Squares Solutions to a System of Linear Equations, 222
   6. Least Squares Estimation for Less Than Full Rank Models, 228
   7. Systems of Linear Equations and the Singular Value Decomposition, 233
   8. Sparse Linear Systems of Equations, 235
   Problems, 241

7. Special Matrices and Matrix Operators    247
   1. Introduction, 247
   2. Partitioned Matrices, 247
   3. The Kronecker Product, 253
   4. The Direct Sum, 260
   5. The Vec Operator, 261
   6. The Hadamard Product, 266
   7. The Commutation Matrix, 276
   8. Some Other Matrices Associated with the Vec Operator, 283
   9. Nonnegative Matrices, 288
   10. Circulant and Toeplitz Matrices, 300
   11. Hadamard and Vandermonde Matrices, 305
   Problems, 309

8. Matrix Derivatives and Related Topics    323
   1. Introduction, 323
   2. Multivariable Differential Calculus, 323
   3. Vector and Matrix Functions, 326
   4. Some Useful Matrix Derivatives, 332
   5. Derivatives of Functions of Patterned Matrices, 335
   6. The Perturbation Method, 337
   7. Maxima and Minima, 344
   8. Convex and Concave Functions, 349
   9. The Method of Lagrange Multipliers, 353
   Problems, 360

9. Some Special Topics Related to Quadratic Forms    370
   1. Introduction, 370
   2. Some Results on Idempotent Matrices, 370
   3. Cochran's Theorem, 374
   4. Distribution of Quadratic Forms in Normal Variates, 378
   5. Independence of Quadratic Forms, 384
   6. Expected Values of Quadratic Forms, 390
   7. The Wishart Distribution, 398
   Problems, 409

References    416

Index    421

List of Series Titles
Preface
As the field of statistics has developed over the years, the role of matrix methods has evolved from a tool through which statistical problems could be more conveniently expressed to an absolutely essential part in the development, understanding, and use of the more complicated statistical analyses that have appeared in recent years. As such, a background in matrix analysis has become a vital part of a graduate education in statistics. Too often, the statistics graduate student gets his or her matrix background in bits and pieces through various courses on topics such as regression analysis, multivariate analysis, linear models, stochastic processes, and so on. An alternative to this fragmented approach is an entire course devoted to matrix methods useful in statistics. This text has been written with such a course in mind. It also could be used as a text for an advanced undergraduate course with an unusually bright group of students, and should prove to be useful as a reference for both applied and research statisticians.
Students beginning a graduate program in statistics often have their previous degrees in other fields, such as mathematics, and so initially their statistical backgrounds may not be all that extensive. With this in mind, I have tried to make the statistical topics presented as examples in this text as self-contained as possible. This has been accomplished by including a section in the first chapter that covers some basic statistical concepts and by having most of the statistical examples deal with applications that are fairly simple to understand; for instance, many of these examples involve least squares regression or applications that utilize the simple concepts of mean vectors and covariance matrices. Thus, an introductory statistics course should provide the reader of this text with a sufficient background in statistics.
An additional prerequisite is an undergraduate course in matrices or linear algebra, while a calculus background is necessary for some portions of the book, most notably Chapter 8. By selectively omitting some sections, all nine chapters of this book can be covered in a one-semester course. For instance, in a course targeted at students who end their educational careers with the master's degree, I typically omit Sections 2.10, 3.5-3.7, 4.8, 5.4-5.7, and 8.6, along with a few other sections.
Anyone writing a book on a subject for which other texts have already been written stands to benefit from these earlier works, and that certainly has been the case here. The texts by Basilevsky (1983), Graybill (1983), Healy (1986), and Searle (1982), all books on matrices for statistics, have helped me, in varying degrees, to formulate my ideas on matrices. Graybill's book has been particularly influential, since this is the book that I referred to extensively, first as a graduate student and then in the early stages of my research career. Other texts that have proven to be quite helpful are Horn and Johnson (1985, 1991), Magnus and Neudecker (1988), particularly in the writing of Chapter 8, and Magnus (1988).
I wish to thank several anonymous reviewers who offered many very helpful suggestions and Mark Johnson for his support and encouragement throughout this project. I am also grateful to the numerous students who have alerted me to various mistakes and typos in earlier versions of this book. In spite of their help and my diligent efforts at proofreading, undoubtedly some mistakes remain, and I would appreciate being informed of any that are spotted.

JIM SCHOTT
Orlando, Florida
CHAPTER ONE
A Review of Elementary Matrix Algebra
1. INTRODUCTION

In this chapter we review some of the basic operations and fundamental properties involved in matrix algebra. In most cases these properties will be stated without proof, but in some cases, when instructive, proofs will be presented. We end the chapter with a brief discussion of random variables and random vectors, expected values of random variables, and some important distributions encountered elsewhere in the book.
2. DEFINITIONS AND NOTATION

Except when stated otherwise, a scalar such as α will represent a real number. A matrix A of size m × n is the m × n rectangular array of scalars given by

    A = [ a_11  a_12  ...  a_1n
          a_21  a_22  ...  a_2n
           ...   ...        ...
          a_m1  a_m2  ...  a_mn ]

and sometimes simply identified as A = (a_ij). Sometimes it also will be convenient to refer to the (i, j)th element of A as (A)_ij; that is, a_ij = (A)_ij. If m = n, then A is called a square matrix of order m. An m × 1 matrix

    a = [ a_1
          a_2
          ...
          a_m ]
is called a column vector or simply a vector. The element a_i is referred to as the ith component of a. A 1 × n matrix is called a row vector. The ith row and jth column of the matrix A sometimes will be referred to by (A)_i· and (A)·_j, respectively. We will usually use capital letters to represent matrices and lowercase bold letters for vectors.
The diagonal elements of the m × m matrix A are a_11, a_22, ..., a_mm. If all other elements of A are equal to 0, A is called a diagonal matrix and can be identified as A = diag(a_11, ..., a_mm). If, in addition, a_ii = 1 for i = 1, ..., m so that A = diag(1, ..., 1), then the matrix A is called the identity matrix of order m and will be written as A = I_m or simply A = I if the order is obvious. If A = diag(a_1, ..., a_m) and b is a scalar, then we will use A^b to denote the diagonal matrix diag(a_1^b, ..., a_m^b). For any m × m matrix A, D_A will denote the diagonal matrix with diagonal elements equal to the diagonal elements of A and, for any m × 1 vector a, D_a denotes the diagonal matrix with diagonal elements equal to the components of a; that is, D_A = diag(a_11, ..., a_mm) and D_a = diag(a_1, ..., a_m). A triangular matrix is a square matrix that is either an upper triangular matrix or a lower triangular matrix. An upper triangular matrix is one which has all of its elements below the diagonal equal to 0, while a lower triangular matrix has all of its elements above the diagonal equal to 0.
The ith column of the m × m identity matrix will be denoted by e_i; that is, e_i is the m × 1 vector which has its ith component equal to 1 and all of its other components equal to 0. When the value of m is not obvious, we will make it more explicit by writing e_i as e_{i,m}. The m × m matrix whose only nonzero element is a 1 in the (i, j)th position will be identified as E_ij. The scalar zero is written 0, while a vector of zeros, called a null vector, will be denoted by 0, and a matrix of zeros, called a null matrix, will be denoted by (0).
The m × 1 vector having each component equal to 1 will be denoted 1_m or simply 1 when the size of the vector is obvious.
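The notational conventions above translate directly into NumPy; the sketch below (our own illustration, not part of the text) builds D_a, I_m, e_i, E_ij, and 1_m for a small example. The particular vector a and indices are arbitrary choices:

```python
import numpy as np

m = 4

# Diagonal matrix D_a built from the components of a vector a
a = np.array([2.0, 5.0, 7.0, 3.0])
D_a = np.diag(a)

# Identity matrix I_m and its ith column e_i (here i = 2, zero-indexed as 1)
I_m = np.eye(m)
e_2 = I_m[:, 1]

# E_13: the m x m matrix whose only nonzero entry is a 1 in position (1, 3)
# (zero-indexed as (0, 2)), formed as the outer product e_1 e_3'
E_13 = np.outer(I_m[:, 0], I_m[:, 2])

# The vector of ones, 1_m
ones_m = np.ones(m)

print(D_a, e_2, E_13, ones_m, sep="\n")
```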
3. MATRIX ADDITION AND MULTIPLICATION

The sum of two matrices A and B is defined if they have the same number of rows and the same number of columns; in this case

    A + B = (a_ij + b_ij)

The product of a scalar α and a matrix A is

    αA = Aα = (α a_ij)
The premultiplication of the matrix B by the matrix A is defined only if the number of columns of A equals the number of rows of B. Thus, if A is m × p and B is p × n, then C = AB will be the m × n matrix which has its (i, j)th element, c_ij, given by

    c_ij = (A)_i·(B)·_j = Σ_{k=1}^{p} a_ik b_kj
There is a similar definition for BA, the postmultiplication of B by A. When both products are defined, we will not have, in general, AB = BA. If the matrix A is square, then the product AA, or simply A², is defined. In this case, if we have A² = A, then A is said to be an idempotent matrix. The following basic properties of matrix addition and multiplication are easy to verify.
Theorem 1.1. Let α and β be scalars and A, B, and C be matrices. Then, when the operations involved are defined, the following properties hold.

(a) A + B = B + A.
(b) (A + B) + C = A + (B + C).
(c) α(A + B) = αA + αB.
(d) (α + β)A = αA + βA.
(e) A − A = A + (−A) = (0).
(f) A(B + C) = AB + AC.
(g) (A + B)C = AC + BC.
(h) (AB)C = A(BC).
4. THE TRANSPOSE

The transpose of an m × n matrix A is the n × m matrix A' obtained by interchanging the rows and columns of A. Thus, the (i, j)th element of A' is a_ji. If A is m × p and B is p × n, then the (i, j)th element of (AB)' can be expressed as

    ((AB)')_ij = (AB)_ji = (A)_j·(B)·_i = Σ_{k=1}^{p} a_jk b_ki = (B')_i·(A')·_j = (B'A')_ij

Thus, evidently (AB)' = B'A'. This, along with some other results involving the transpose, is summarized below.
Theorem 1.2. Let α and β be scalars and A and B be matrices. Then, when defined, the following hold.

(a) (αA)' = αA'.
(b) (A')' = A.
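The identity (AB)' = B'A' and the listed parts of Theorem 1.2 are easy to confirm on random matrices; the shapes and scalar below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # m x p
B = rng.standard_normal((4, 2))   # p x n

# (AB)' = B'A'
assert np.allclose((A @ B).T, B.T @ A.T)

# (alpha A)' = alpha A'  and  (A')' = A
alpha = 2.5
assert np.allclose((alpha * A).T, alpha * A.T)
assert np.allclose(A.T.T, A)
```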
B12 is n1 × p2, B21 is n2 × p1, B22 is n2 × p2, and p1 + p2 = p. Then the premultiplication of B by A can be expressed in partitioned form as

    AB = [ A11 B11 + A12 B21    A11 B12 + A12 B22
           A21 B11 + A22 B21    A21 B12 + A22 B22 ]
Matrices can be partitioned into submatrices in other ways besides the 2 × 2 partitioning given above. For instance, we could partition only the columns of A, yielding the expression

    A = [ A1  A2 ],

where A1 is m × n1 and A2 is m × n2. A more general situation is one in which the rows of A are partitioned into r groups and the columns of A are partitioned into c groups so that A can be written as

    A = [ A11  A12  ...  A1c
          A21  A22  ...  A2c
          ...  ...       ...
          Ar1  Ar2  ...  Arc ],

where the submatrix Aij is m_i × n_j and the integers m_1, ..., m_r and n_1, ..., n_c are such that

    Σ_{i=1}^{r} m_i = m   and   Σ_{j=1}^{c} n_j = n

The matrix A above is said to be in block diagonal form if r = c, Aii is a square matrix for each i, and Aij is a null matrix for all i and j for which i ≠ j. In this case we will write A = diag(A11, ..., Arr); that is,

    diag(A11, ..., Arr) = [ A11  (0)  ...  (0)
                            (0)  A22  ...  (0)
                            ...  ...       ...
                            (0)  (0)  ...  Arr ]
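The partitioned product formula can be verified numerically; here NumPy's `np.block` reassembles AB from the four block products. The particular partition sizes are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Partition A (4 x 5) and B (5 x 3) conformably:
# rows of A: 2 + 2; columns of A / rows of B: 3 + 2; columns of B: 2 + 1
A = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 3))
A11, A12 = A[:2, :3], A[:2, 3:]
A21, A22 = A[2:, :3], A[2:, 3:]
B11, B12 = B[:3, :2], B[:3, 2:]
B21, B22 = B[3:, :2], B[3:, 2:]

# Assemble AB from the partitioned products and compare with the direct product
AB_blocked = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
assert np.allclose(AB_blocked, A @ B)
```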
Example 1.3. Suppose we wish to compute the product AA', where the 5 × 5 matrix A is given by

    A = [  1   0   0   1   1
           0   1   0   1   1
           0   0   1   1   1
          -1  -1  -1   2   0
          -1  -1  -1   0   2 ]

The computation can be simplified by observing that A may be written in partitioned form as

    A = [  I_3         1_3 1_2'
          -1_2 1_3'    2 I_2    ]

As a result, we have

    AA' = [ I_3 + 1_3 1_2' 1_2 1_3'    -1_3 1_2' + 2·1_3 1_2'
            -1_2 1_3' + 2·1_2 1_3'     3·1_2 1_2' + 4 I_2     ]

        = [ I_3 + 2·1_3 1_3'    1_3 1_2'
            1_2 1_3'            3·1_2 1_2' + 4 I_2 ]

        = [ 3  2  2  1  1
            2  3  2  1  1
            2  2  3  1  1
            1  1  1  7  3
            1  1  1  3  7 ]
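Assuming the reconstruction of A above, the example can be checked numerically: both the direct product AA' and the partitioned expression are computed and compared:

```python
import numpy as np

I3, I2 = np.eye(3), np.eye(2)
J32 = np.ones((3, 2))   # 1_3 1_2'
J23 = np.ones((2, 3))   # 1_2 1_3'

# A assembled from its partitioned form
A = np.block([[I3, J32],
              [-J23, 2 * I2]])

# AA' computed directly and from the simplified partitioned expression
AAt = A @ A.T
AAt_blocked = np.block([[I3 + 2 * np.ones((3, 3)), J32],
                        [J23, 3 * np.ones((2, 2)) + 4 * I2]])
assert np.allclose(AAt, AAt_blocked)
```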
9. THE RANK OF A MATRIX

Our initial definition of the rank of an m × n matrix A is given in terms of submatrices. In general, any matrix formed by deleting rows or columns of A is called a submatrix of A. The determinant of an r × r submatrix of A is called a minor of order r. For instance, for an m × m matrix A, we have previously defined what we called the minor of a_ij; this is an example of a minor of order m − 1. Now the rank of a nonnull m × n matrix A is r, written rank(A) = r, if at least one of its minors of order r is nonzero while all minors of order r + 1 (if there are any) are zero. If A is a null matrix, then rank(A) = 0.
The rank of a matrix A is unchanged by any of the following operations, called elementary transformations.

(a) The interchange of two rows (or columns) of A.
(b) The multiplication of a row (or column) of A by a nonzero scalar.
(c) The addition of a scalar multiple of a row (or column) of A to another row (or column) of A.
Any elementary transformation of A can be expressed as the multiplication of A by a matrix referred to as an elementary transformation matrix. An elementary transformation of the rows of A will be given by the premultiplication of A by an elementary transformation matrix, while an elementary transformation of the columns corresponds to a postmultiplication. Elementary transformation matrices are nonsingular, and any nonsingular matrix can be expressed as the product of elementary transformation matrices. Consequently, we have the following very useful result.

Theorem 1.8. Let A be an m × n matrix, B an m × m matrix, and C an n × n matrix. Then if B and C are nonsingular matrices, it follows that

    rank(BAC) = rank(BA) = rank(AC) = rank(A)
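Theorem 1.8 can be illustrated numerically. The matrices below are arbitrary; adding a multiple of the identity is just a cheap way to make B and C (almost surely) nonsingular:

```python
import numpy as np

rng = np.random.default_rng(3)

# A 4 x 3 matrix of rank 2, built as a sum of two outer products
A = (np.outer([1., 2., 3., 4.], [1., 0., 1.])
     + np.outer([0., 1., 0., 1.], [2., 1., 0.]))
assert np.linalg.matrix_rank(A) == 2

# Nonsingular B (4 x 4) and C (3 x 3)
B = rng.standard_normal((4, 4)) + 4 * np.eye(4)
C = rng.standard_normal((3, 3)) + 4 * np.eye(3)

# Rank is unchanged by nonsingular pre- and postmultiplication
assert np.linalg.matrix_rank(B @ A) == 2
assert np.linalg.matrix_rank(A @ C) == 2
assert np.linalg.matrix_rank(B @ A @ C) == 2
```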
By using elementary transformation matrices, any matrix A can be transformed to another matrix of simpler form having the same rank as A.
Theorem 1.9. If A is an m × n matrix of rank r > 0, then there exist nonsingular m × m and n × n matrices B and C, such that H = BAC and A = B⁻¹HC⁻¹, where H is given by

(a) I_r,                      if r = m = n,
(b) [ I_r  (0) ],             if r = m < n,
(c) [ I_r
      (0) ],                  if r = n < m,
(d) [ I_r  (0)
      (0)  (0) ],             if r < m, r < n
The following is an immediate consequence of Theorem 1.9.

Corollary 1.9.1. Let A be an m × n matrix with rank(A) = r > 0. Then there exist an m × r matrix F and an r × n matrix G such that rank(F) = rank(G) = r and A = FG.
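Corollary 1.9.1 can be realized computationally. One convenient (though not unique) choice of F and G comes from a truncated singular value decomposition, which is developed in Chapter 4; this sketch uses it only as a numerical device:

```python
import numpy as np

rng = np.random.default_rng(4)
# Build a 5 x 4 matrix of rank 2
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
r = np.linalg.matrix_rank(A)

# Truncate the SVD: F absorbs the singular values, G the right singular vectors
U, s, Vt = np.linalg.svd(A)
F = U[:, :r] * s[:r]          # m x r, full column rank
G = Vt[:r, :]                 # r x n, full row rank

assert np.linalg.matrix_rank(F) == r and np.linalg.matrix_rank(G) == r
assert np.allclose(F @ G, A)
```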
10. ORTHOGONAL MATRICES

An m × 1 vector p is said to be a normalized vector, or a unit vector, if p'p = 1. The m × 1 vectors p_1, ..., p_n, where n ≤ m, are said to be orthogonal if p_i'p_j = 0 for all i ≠ j. If, in addition, each p_i is a normalized vector, then the vectors are said to be orthonormal. An m × m matrix P whose columns form an orthonormal set of vectors is called an orthogonal matrix. It immediately follows that

    P'P = I

Taking the determinant of both sides, we see that

    |P'P| = |P'||P| = |P|² = |I| = 1

Thus, |P| = +1 or −1, so that P is nonsingular, P⁻¹ = P', and PP' = I in addition to P'P = I; that is, the rows of P also form an orthonormal set of m × 1 vectors. Some basic properties of orthogonal matrices are summarized in the following theorem.
Theorem 1.10. Let P and Q be m × m orthogonal matrices and A be any m × m matrix. Then

(a) |P| = ±1,
(b) |P'AP| = |A|,
(c) PQ is an orthogonal matrix.
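The properties in Theorem 1.10 can be checked on orthogonal matrices generated by a QR factorization (an arbitrary but convenient way of producing them):

```python
import numpy as np

rng = np.random.default_rng(5)
# QR factorization of a random matrix yields an orthogonal Q
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

assert np.allclose(Q.T @ Q, np.eye(4))
assert np.allclose(Q @ Q.T, np.eye(4))       # rows are orthonormal too
assert np.isclose(abs(np.linalg.det(Q)), 1)  # |Q| = +1 or -1

# |Q'AQ| = |A| for orthogonal Q
A = rng.standard_normal((4, 4))
assert np.isclose(np.linalg.det(Q.T @ A @ Q), np.linalg.det(A))

# The product of orthogonal matrices is orthogonal
P, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.allclose((P @ Q).T @ (P @ Q), np.eye(4))
```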
An m × m matrix P is called a permutation matrix if each row and each column of P has a single element 1, while all the remaining elements are zeros. As a result, the columns of P will be e_1, ..., e_m, the columns of I_m, in some order. Note then that the (h, h)th element of P'P will be e_i'e_i = 1 for some i, and the (h, l)th element of P'P will be e_i'e_j = 0 for some i ≠ j if h ≠ l; that is, a permutation matrix is a special orthogonal matrix. Since there are m! ways of permuting the columns of I_m, there are m! different permutation matrices of order m. If A is also m × m, then PA creates an m × m matrix by permuting the rows of A, and AP produces a matrix by permuting the columns of A.
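A small permutation-matrix example, with the permutation chosen arbitrarily:

```python
import numpy as np

# A 4 x 4 permutation matrix: the columns of I_4 in the order (e_3, e_1, e_4, e_2)
perm = [2, 0, 3, 1]          # zero-indexed
P = np.eye(4)[:, perm]

# A permutation matrix is orthogonal
assert np.allclose(P.T @ P, np.eye(4))

A = np.arange(16.0).reshape(4, 4)
# AP permutes the columns of A ...
assert np.allclose((A @ P)[:, 0], A[:, 2])
# ... while PA permutes the rows of A
assert np.allclose((P @ A)[0], A[1])
```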
11. QUADRATIC FORMS

Let x be an m × 1 vector, y an n × 1 vector, and A an m × n matrix. Then the function of x and y given by

    x'Ay = Σ_{i=1}^{m} Σ_{j=1}^{n} x_i y_j a_ij

is sometimes called a bilinear form in x and y. We will be most interested in the special case in which m = n so that A is m × m and x = y. In this case, the function above reduces to the function of x,

    f(x) = x'Ax = Σ_{i=1}^{m} Σ_{j=1}^{m} x_i x_j a_ij,
which is called a quadratic form in x; A is referred to as the matrix of the quadratic form. We will always assume that A is a symmetric matrix since, if it is not, A may be replaced by B = ½(A + A'), which is symmetric, without altering f(x); that is,

    x'Bx = ½ x'(A + A')x = ½ (x'Ax + x'A'x) = ½ (x'Ax + x'Ax) = x'Ax,

since x'A'x = (x'A'x)' = x'Ax. Every symmetric matrix A and its associated quadratic form is classified into one of the following five categories.

(a) If x'Ax > 0 for all x ≠ 0, then A is positive definite.
(b) If x'Ax ≥ 0 for all x ≠ 0 and x'Ax = 0 for some x ≠ 0, then A is positive semidefinite.
(c) If x'Ax < 0 for all x ≠ 0, then A is negative definite.
(d) If x'Ax ≤ 0 for all x ≠ 0 and x'Ax = 0 for some x ≠ 0, then A is negative semidefinite.
(e) If x'Ax > 0 for some x and x'Ax < 0 for some x, then A is indefinite.

Note that the null matrix is actually both positive semidefinite and negative semidefinite.
Positive definite and negative definite matrices are nonsingular, whereas positive semidefinite and negative semidefinite matrices are singular. Sometimes the term nonnegative definite will be used to refer to a symmetric matrix that is either positive definite or positive semidefinite. An m × m matrix B is called a square root of the nonnegative definite m × m matrix A if A = BB'. Sometimes we will denote such a matrix B as A^{1/2}. If B is also symmetric, so that A = B², then B is called the symmetric square root of A.
Quadratic forms play a prominent role in inferential statistics. In Chapter 9, we will develop some of the most important results involving quadratic forms that are of particular interest in statistics.
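The five definiteness categories can be checked numerically via the signs of the eigenvalues of A, a criterion developed later in the book (Chapter 3); the helper function below is our own illustration, not the book's:

```python
import numpy as np

def classify(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)
    if np.all(w > tol):
        return "positive definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

assert classify(np.array([[2., 0.], [0., 3.]])) == "positive definite"
assert classify(np.ones((2, 2))) == "positive semidefinite"    # eigenvalues 2, 0
assert classify(np.array([[1., 0.], [0., -1.]])) == "indefinite"

# Symmetrizing the matrix of a quadratic form leaves x'Ax unchanged
rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
B = (A + A.T) / 2
x = rng.standard_normal(3)
assert np.isclose(x @ A @ x, x @ B @ x)
```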
12. COMPLEX MATRICES

Throughout this entire text we will be dealing with the analysis of vectors and matrices composed of real numbers or variables. However, there are occasions in which an analysis of a real matrix, such as the decomposition of a matrix in the form of a product of other matrices, leads to matrices that contain complex numbers. For this reason, we will briefly summarize in this section some of the basic notation and terminology regarding complex numbers.
Any complex number c can be written in the form

    c = a + ib,

where a and b are real numbers and i represents the imaginary number √−1. The real number a is called the real part of c, while b is referred to as the imaginary part of c. Thus, the number c is a real number only if b is 0. If we have two complex numbers, c1 = a1 + ib1 and c2 = a2 + ib2, then their sum is given by

    c1 + c2 = (a1 + a2) + i(b1 + b2),

while their product is given by

    c1c2 = (a1a2 − b1b2) + i(a1b2 + a2b1)
Corresponding to each complex number c = a + ib is another complex number, denoted by c̄ and called the complex conjugate of c. The complex conjugate of c is given by c̄ = a − ib and satisfies cc̄ = a² + b², so that the product of a complex number and its conjugate results in a real number.
A complex number can be represented geometrically by a point in the complex plane, where one of the axes is the real axis and the other is the imaginary or complex axis. Thus, the complex number c = a + ib would be represented by the point (a, b) in this complex plane. Alternatively, we can use the polar coordinates (r, θ), where r is the length of the line from the origin to the point (a, b) and θ is the angle between this line and the positive half of the real axis. The relationship between a and b and r and θ is then given by
0,
b = r sin 0
Writing c in terms of the polar coordinates, we have c = r cos θ + i r sin θ or, after using Euler's formula, simply c = re^{iθ}. The absolute value, also sometimes called the modulus, of the complex number c is defined to be r. This is, of course, always a nonnegative real number, and since a² + b² = r², we have

    |c| = (cc̄)^{1/2} = (a² + b²)^{1/2} = r

We also find that

    |c1c2| = |c1||c2|

Using the identity above repeatedly, we also see that for any complex number c and any positive integer n, |cⁿ| = |c|ⁿ.
A useful identity relating a complex number c and its conjugate to the absolute value of c is

    c + c̄ ≤ 2|c|
Applying this to the sum of the two complex numbers c1 and c2, and noting that c1c̄2 + c̄1c2 ≤ 2|c1||c2|, we get
    |c1 + c2|² = (c1 + c2)(c̄1 + c̄2) = c1c̄1 + c1c̄2 + c̄1c2 + c2c̄2
               ≤ |c1|² + 2|c1||c2| + |c2|² = (|c1| + |c2|)²
From this we get the important inequality |c1 + c2| ≤ |c1| + |c2|, known as the triangle inequality.
A complex matrix is simply a matrix whose elements are complex numbers. As a result, a complex matrix can be written as the sum of a real matrix and an imaginary matrix; that is, if C is an m × n complex matrix, then it can be expressed as

    C = A + iB,

where both A and B are m × n real matrices. The complex conjugate of C, denoted C̄, is simply the matrix containing the complex conjugates of the elements of C; that is,

    C̄ = A − iB
The conjugate transpose of C is C* = C̄'. If the complex matrix C is square and C* = C, so that c_ij = c̄_ji, then C is said to be Hermitian. Note that if C is Hermitian and C is a real matrix, then C is symmetric. The m × m matrix C is said to be unitary if C*C = I_m. This is the generalization of the concept of orthogonal matrices to complex matrices since if C is real then C* = C'.
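The conjugate, conjugate transpose, Hermitian, and unitary definitions map directly onto NumPy's complex arrays; this sketch uses an arbitrary Hermitian matrix and a diagonal unitary matrix:

```python
import numpy as np

C = np.array([[2. + 0j, 1. - 3j],
              [1. + 3j, 5. + 0j]])

# Conjugate transpose C* = conj(C)'
C_star = C.conj().T

# C is Hermitian: C* = C, i.e. c_ij = conj(c_ji)
assert np.allclose(C_star, C)

# c times its conjugate is real: c * conj(c) = a^2 + b^2 = |c|^2
c = 3 + 4j
assert np.isclose((c * c.conjugate()).real, 25.0)
assert np.isclose(abs(c), 5.0)

# A unitary matrix: U*U = I (here a diagonal matrix of unit-modulus entries)
theta = 0.7
U = np.array([[np.exp(1j * theta), 0], [0, np.exp(-1j * theta)]])
assert np.allclose(U.conj().T @ U, np.eye(2))
```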
13. RANDOM VECTORS AND SOME RELATED STATISTICAL CONCEPTS

In this section, we review some of the basic definitions and results in distribution theory which will be needed later in this text. A more comprehensive treatment of this subject can be found in books on statistical theory such as Casella and Berger (1990) or Lindgren (1993). To be consistent with our notation, which uses a capital letter to denote a matrix, a bold lowercase letter for a vector, and a lowercase letter for a scalar, we will use a lowercase letter instead of the more conventional capital letter to denote a scalar random variable.
A random variable x is said to be discrete if its collection of possible values, R_x, is a countable set. In this case, x has a probability function p_x(t) satisfying p_x(t) = P(x = t), for t ∈ R_x, and p_x(t) = 0, for t ∉ R_x. A continuous random variable x, on the other hand, has for its range, R_x, an uncountably infinite set.
Associated with each continuous random variable x is a density function f_x(t) satisfying f_x(t) > 0 for t ∈ R_x and f_x(t) = 0 for t ∉ R_x. Probabilities for x are obtained by integration; if B is a subset of the real line, then

    P(x ∈ B) = ∫_B f_x(t) dt

For both discrete and continuous x, we have P(x ∈ R_x) = 1. The expected value of a real-valued function of x, g(x), gives the average observed value of g(x). This expectation, denoted E[g(x)], is given by

    E[g(x)] = Σ_{t ∈ R_x} g(t) p_x(t),

if x is discrete and
if x is discrete and ,
;
,
•
E[g(x)] =
g( t)f At ) dt,
•
•
,• •
•
• •
,
if x is continuous. Properties of th e expectation operator follow di rectly from properties of sums and integrals. Fo r instance, if x and ya re random variables and c¥ and (3 are constants, then the expectation operator satisfies th e properties E(c¥) =
C¥,
an d
where g, and g2 are any realvalued functions. Th e set of expected • values of a ra nd om variable x given by E( xk ), k = 1, 2, ... , are known as the m oments of x. These are important for both descriptive and theoretical purposes. The first few m om en ts ca n be used to describe certain features of the distrib ution of x. Fo r instance, the first m om en t or mean of x, J.l.x = E(x), locates a ce ntral value of the distribution. Th e variance of x, denoted 11; or var(x), is defin ed as
so that it is a function of the first an d second moments of x. Th e varia nce gives a measure of the dispersion of the observed values of x about the ce ntral value
μ_x. Using properties of expectation, it is easily verified that

    var(α + βx) = β² var(x)

All of the moments of a random variable x are embedded in a function called the moment generating function of x. This function is defined as a particular expectation; specifically, the moment generating function of x, m_x(t), is given by
    m_x(t) = E(e^{tx}),

provided this expectation exists for values of t in a neighborhood of 0. Otherwise, the moment generating function does not exist. If the moment generating function of x does exist, then we can obtain any moment from it since

    E(x^k) = [d^k m_x(t)/dt^k] evaluated at t = 0
n io ut ib str di e th s ize ter ac ar ch n tio nc fu g tin ra ne ge t More importantly, the m om en nL ti ra n~ ge t en om m e m sa e th ve h? ns io ut ib str di nt re of x in that ~ lW~ J!iffe ' func, tion. ll wi we at th ns io ut ib str di of es ili m fa lar cu We now focus on two parti istr di al rm no a ve ha to id sa is x e bl ria va om en co un ter lat er in this text. A rand 2 ), if the de ns ity ( JI., hution with mean II and variance (12, indicated by x  N{ of x is given by 00
< t < 00 •
Th e corresponding m om en t generating function is
A special member of this family of normal distributions is the standard normal distribution N(0, 1). The importance of this distribution follows from the fact that if x ~ N(μ, σ²), then the standardizing transformation z = (x − μ)/σ yields a random variable z which has the standard normal distribution. By differentiating the moment generating function of z ~ N(0, 1), it is easy to verify that the first six moments of z, which we will need in a later chapter, are 0, 1, 0, 3, 0, and 15, respectively.
If r is a positive integer, then a random variable u has a chi-squared distribution with r degrees of freedom, written u ~ χ²_r, if its density function is

    f_u(t) = t^{(r/2)−1} e^{−t/2} / (2^{r/2} Γ(r/2)),    t > 0,

where Γ(r/2) is the gamma function evaluated at r/2. The moment generating function of u is given by m_u(t) = (1 − 2t)^{−r/2}, for t < ½. The importance of the chi-squared distribution arises from its connection to the normal distribution. If z ~ N(0, 1), then z² ~ χ²_1. Further, if z_1, ..., z_r are independent random variables with z_i ~ N(0, 1) for i = 1, ..., r, then

    Σ_{i=1}^{r} z_i² ~ χ²_r    (1.5)
. ··. "
"" ".
=1
•
•
,
,.
~?
( 1.6)
?
£.. Xi  X;(A),
I
j=1
of s ee gr de r th wi n io ut ib str di ed ar qu is ch l tra where X; (A ) denotes the noncen freedom an d non centrality pa ra m ete r
, 2. ~ A = 1 £. /Lj ' .2 i= 1
•
s nd pe de re, he ve gi t no ll wi we ich wh , ity ns de ed ar qu is ch that is, th e noncentral s ce du re .6) (1 e nc Si A. r ete m ra pa e th on o als t bu r r not on ly on the pa ra m ete to ds on sp rre co ) (A X; n io ut ib str di e th at th e se we i, to (1.5) when /Lj = 0 for all X; wh en }. =O. th wi n io ut ib str di F e th is n io ut ib str di ed ar qu is ch e A distribution related to th ity ns de e th en th 2' I,' F' Y If ' ,'2 F" by ted no de , om r1 an d r2 de gr ee s of freed . function of y is (' 1
f_y(t) = [Γ((r_1 + r_2)/2) / {Γ(r_1/2) Γ(r_2/2)}] (r_1/r_2)^{r_1/2} t^{(r_1/2)−1} (1 + r_1 t/r_2)^{−(r_1+r_2)/2},    t > 0.

The importance of this distribution arises from the fact that if v_1 and v_2 are independent random variables with v_1 ~ χ²_{r_1} and v_2 ~ χ²_{r_2}, then the ratio

(v_1/r_1) / (v_2/r_2)

has the F distribution with r_1 and r_2 degrees of freedom.

The concept of a random variable can be extended to that of a random vector. A sequence of related random variables x_1, ..., x_m is modeled by a joint or multivariate probability function p_x(t), if all of the random variables are discrete, and a multivariate density function f_x(t), if all of the random variables are continuous, where x = (x_1, ..., x_m)' and t = (t_1, ..., t_m)'. For instance, if they are continuous and D is a region in R^m, then the probability that x falls in D is

P(x ∈ D) = ∫_D f_x(t) dt,
while the expected value of the real-valued function g(x) of x is given by

E[g(x)] = ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} g(t) f_x(t) dt_1 ··· dt_m.

The random variables x_1, ..., x_m are said to be independent, a concept we have already referred to, if and only if the joint probability function or density function factors into the product of the marginal probability or density functions; that is, in the continuous case, x_1, ..., x_m are independent if and only if

f_x(t) = f_{x_1}(t_1) ··· f_{x_m}(t_m)

for all t. The mean vector of x, denoted by μ, is the vector of expected values of the x_i's; that is,

μ = E(x) = (E(x_1), ..., E(x_m))'.
A measure of the linear relationship between x_i and x_j is given by the covariance of x_i and x_j, which is denoted cov(x_i, x_j) or σ_ij and is defined by

σ_ij = cov(x_i, x_j) = E[(x_i − μ_i)(x_j − μ_j)] = E(x_i x_j) − μ_i μ_j.    (1.7)

When i = j, this covariance reduces to the variance of x_i; that is, σ_ii = σ_i² = var(x_i). When i ≠ j and x_i and x_j are independent, then cov(x_i, x_j) = 0 since E(x_i x_j) = μ_i μ_j. If α_1, α_2, β_1, and β_2 are scalars, then

cov(α_1 x_i + α_2, β_1 x_j + β_2) = α_1 β_1 cov(x_i, x_j).
The matrix Ω, which has σ_ij as its (i, j)th element, is called the variance-covariance matrix, or simply the covariance matrix, of x. This matrix will also sometimes be denoted by var(x) or cov(x, x). Clearly, σ_ij = σ_ji, so that Ω is a symmetric matrix. Using (1.7) we obtain the matrix formulation for Ω,

Ω = var(x) = E[(x − μ)(x − μ)'] = E(xx') − μμ'.
If a is an m × 1 vector of constants and we define the random variable y = a'x, then

E(y) = E(a'x) = E(Σ_{i=1}^m a_i x_i) = Σ_{i=1}^m a_i E(x_i) = Σ_{i=1}^m a_i μ_i = a'μ.
If, in addition, β is another m × 1 vector of constants and w = β'x, then

cov(y, w) = cov(a'x, β'x) = cov(Σ_{i=1}^m a_i x_i, Σ_{j=1}^m β_j x_j)
          = Σ_{i=1}^m Σ_{j=1}^m a_i β_j cov(x_i, x_j) = Σ_{i=1}^m Σ_{j=1}^m a_i β_j σ_ij = a'Ωβ.
In particular, var(y) = cov(y, y) = a'Ωa. Since this holds for any choice of a and since the variance is always nonnegative, a'Ωa ≥ 0 for every a; that is, Ω must be nonnegative definite. More generally, if A is a p × m matrix of constants and y = Ax, then

E(y) = E(Ax) = AE(x) = Aμ,    (1.8)

var(y) = E[{y − E(y)}{y − E(y)}'] = E[(Ax − Aμ)(Ax − Aμ)']
       = E[A(x − μ)(x − μ)'A'] = A{E[(x − μ)(x − μ)']}A' = AΩA'.    (1.9)
Thus, the mean vector and covariance matrix of the transformed vector Ax are Aμ and AΩA'. If v and w are random vectors, then the matrix of covariances between the components of v and the components of w is given by

cov(v, w) = E(vw') − E(v)E(w)'.

In particular, if v = Ax and w = Bx, then

cov(v, w) = A cov(x, x)B' = A var(x)B' = AΩB'.
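The identities (1.8) and (1.9) can be checked directly with a small numerical sketch; the particular mean vector, covariance matrix, and transformation A below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Hypothetical mean vector and covariance matrix for a 3 x 1 random vector x.
mu = np.array([1.0, 0.0, 2.0])
omega = np.array([[4.0, 1.0, 0.5],
                  [1.0, 2.0, 0.3],
                  [0.5, 0.3, 1.0]])

A = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, 1.0]])          # 2 x 3 transformation, y = Ax

mean_y = A @ mu                            # (1.8): E(y) = A mu
cov_y = A @ omega @ A.T                    # (1.9): var(y) = A Omega A'

# var(y) must be symmetric and nonnegative definite, like any covariance matrix.
eigvals = np.linalg.eigvalsh(cov_y)
print(mean_y, eigvals.min())
```

Here the first component of y is x_1 − x_2, so its variance is σ_11 + σ_22 − 2σ_12 = 4, which is exactly the (1, 1) element of AΩA'.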
A measure of the linear relationship between x_i and x_j that is unaffected by the measurement scales of x_i and x_j is called the correlation coefficient ρ_ij, defined by

ρ_ij = σ_ij / (σ_ii σ_jj)^{1/2}.

When i = j, ρ_ij = 1. The correlation matrix P, which has ρ_ij as its (i, j)th element, can be expressed in terms of the corresponding covariance matrix Ω and the diagonal matrix D_Ω^{1/2} = diag(σ_11^{1/2}, ..., σ_mm^{1/2}); specifically,

P = D_Ω^{−1/2} Ω D_Ω^{−1/2}.    (1.10)
For any m × 1 vector a, we have

a'Pa = a'D_Ω^{−1/2} Ω D_Ω^{−1/2} a = b'Ωb ≥ 0,

where b = D_Ω^{−1/2}a, and so P must be nonnegative definite because Ω is. In particular, if e_i is the ith column of the m × m identity matrix, then

(e_i + e_j)'P(e_i + e_j) = (P)_ii + (P)_ij + (P)_ji + (P)_jj = 2(1 + ρ_ij) ≥ 0,

and

(e_i − e_j)'P(e_i − e_j) = (P)_ii − (P)_ij − (P)_ji + (P)_jj = 2(1 − ρ_ij) ≥ 0,

from which we obtain the inequality −1 ≤ ρ_ij ≤ 1.
Typically, means, variances, and covariances are unknown and so they must be estimated from a sample. Suppose x_1, ..., x_n represents a random sample of a random variable x that has some distribution with mean μ and variance σ². These quantities can be estimated by the sample mean and sample variance given by

x̄ = (1/n) Σ_{i=1}^n x_i,    s² = {1/(n − 1)} Σ_{i=1}^n (x_i − x̄)².
In the multivariate setting we have analogous estimators for μ and Ω; if x_1, ..., x_n is a random sample of an m × 1 random vector x having mean vector μ and covariance matrix Ω, then the sample mean vector and sample covariance matrix are given by

x̄ = (1/n) Σ_{i=1}^n x_i,    S = {1/(n − 1)} Σ_{i=1}^n (x_i − x̄)(x_i − x̄)'.
The sample covariance matrix can then be used in (1.10) to obtain an estimator of the correlation matrix P; that is, if we define the diagonal matrix D_S^{1/2} = diag(s_11^{1/2}, ..., s_mm^{1/2}), then the correlation matrix can be estimated by the sample correlation matrix defined as

R = D_S^{−1/2} S D_S^{−1/2}.
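The estimators x̄, S, and R can be computed with a few matrix operations. The following sketch uses an arbitrary simulated sample (the mixing matrix is just an assumption to induce correlation) and cross-checks the results against NumPy's built-in estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 3
x = rng.standard_normal((n, m)) @ np.array([[1.0, 0.5, 0.0],
                                            [0.0, 1.0, 0.3],
                                            [0.0, 0.0, 1.0]])  # correlated sample, n x m

xbar = x.mean(axis=0)                       # sample mean vector
dev = x - xbar
S = dev.T @ dev / (n - 1)                   # sample covariance matrix
D = np.diag(1.0 / np.sqrt(np.diag(S)))      # D_S^{-1/2}
R = D @ S @ D                               # sample correlation matrix, as in (1.10)

print(np.diag(R))
```

The diagonal of R is identically 1, reflecting ρ_ii = 1.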
The one particular joint distribution that we will consider is the multivariate normal distribution. This distribution can be defined in terms of independent standard normal random variables. Let z_1, ..., z_m be independently distributed as N(0, 1) and put z = (z_1, ..., z_m)'. The density function of z is then given by

f(z) = Π_{i=1}^m (2π)^{−1/2} exp(−z_i²/2) = (2π)^{−m/2} exp(−z'z/2).

Since E(z) = 0 and var(z) = I_m, this particular m-dimensional multivariate normal distribution is denoted as N_m(0, I_m). If μ is an m × 1 vector of constants and T is an m × m nonsingular matrix, then x = μ + Tz is said to have the m-dimensional multivariate normal distribution with mean vector μ and covariance matrix Ω = TT'. This is indicated by x ~ N_m(μ, Ω). For instance, if m = 2, the vector x = (x_1, x_2)' has a bivariate normal distribution and its density, induced by the transformation x = μ + Tz, can be shown to be
f(x) = {2πσ_1σ_2(1 − ρ²)^{1/2}}^{−1} exp[ −{2(1 − ρ²)}^{−1} {((x_1 − μ_1)/σ_1)² − 2ρ((x_1 − μ_1)/σ_1)((x_2 − μ_2)/σ_2) + ((x_2 − μ_2)/σ_2)²} ]    (1.11)

for all x ∈ R², where σ_i = σ_ii^{1/2} and ρ = ρ_12 is the correlation coefficient. When ρ = 0, this density factors into the product of the marginal densities, so x_1 and x_2 are independent if and only if ρ = 0. The rather cumbersome looking density function given in (1.11) can be more conveniently expressed by utilizing matrix notation. It is straightforward to verify that this density is identical to

f(x) = {2π|Ω|^{1/2}}^{−1} exp{−(1/2)(x − μ)'Ω^{−1}(x − μ)}.    (1.12)
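The construction x = μ + Tz that induces these densities is easy to exercise numerically. In the sketch below the particular μ and T are illustrative assumptions; the sample mean and sample covariance of the simulated draws should be close to μ and Ω = TT'.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, -1.0])
T = np.array([[2.0, 0.0],
              [1.0, 1.0]])              # nonsingular, so Omega = T T'
omega = T @ T.T

z = rng.standard_normal((200_000, 2))   # independent N(0, 1) components of z
x = mu + z @ T.T                        # each row is one draw of x = mu + T z

print(x.mean(axis=0), np.cov(x, rowvar=False))
```

Here Ω = TT' has off-diagonal element 2, so the simulated x_1 and x_2 are positively correlated, as the bivariate density with ρ > 0 predicts.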
The density function of an m-variate normal random vector is very similar to the function given in (1.12). If x ~ N_m(μ, Ω), then its density is

f(x) = {(2π)^{m/2}|Ω|^{1/2}}^{−1} exp{−(1/2)(x − μ)'Ω^{−1}(x − μ)}    (1.13)

for all x ∈ R^m. If Ω is positive semidefinite, then x ~ N_m(μ, Ω) is said to have a singular normal distribution. In this case Ω^{−1} does not exist, and so the multivariate normal density cannot be written in the form given in (1.13). However, the random vector x can still be expressed in terms of independent standard normal random variables. Suppose that rank(Ω) = r and U is an m × r matrix satisfying UU' = Ω. Then x ~ N_m(μ, Ω) if x is distributed the same as μ + Uz, where now z ~ N_r(0, I_r).

An important property of the multivariate normal distribution is that a linear transformation of a multivariate normal vector yields a multivariate normal vector; that is, if x ~ N_m(μ, Ω) and A is a p × m matrix of constants, then y = Ax has a p-variate normal distribution. In particular, from (1.8) and (1.9) we know that y ~ N_p(Aμ, AΩA').

One of the most widely used procedures in statistics is regression analysis. We will briefly describe this analysis here and subsequently use regression analysis to illustrate some of the matrix methods developed in this text. Some good references on regression are Neter, Wasserman, and Kutner (1985) and Sen and Srivastava (1990). In the typical regression problem, one wishes to study the relationship between some response variable, say y, and k explanatory variables x_1, ..., x_k. For instance, y might be the yield of some product of
a manufacturing process, while the explanatory variables are conditions affecting the production process, such as temperature, humidity, pressure, and so on. A model relating the x_j's to y is given by

y = β_0 + β_1 x_1 + ··· + β_k x_k + ε,    (1.14)

where β_0, ..., β_k are unknown parameters and ε is a random error, that is, a random variable, with E(ε) = 0. In what is known as ordinary least squares regression, we also have the errors being independent random variables with common variance σ²; that is, if ε_i and ε_j are random errors associated with the responses y_i and y_j, then var(ε_i) = var(ε_j) = σ² and cov(ε_i, ε_j) = 0. The model given in (1.14) is an example of a linear model since it is a linear function of the parameters. It need not be linear in the x_j's so that, for instance, we might have x_2 = x_1². Since the parameters are unknown, they must be estimated, and this will be possible if we have some observed values of y and the corresponding x_j's. Thus, for the ith observation suppose that the explanatory variables are set to the values x_{i1}, ..., x_{ik} yielding the response y_i, and this is done for i = 1, ..., N, where N > k + 1. If model (1.14) holds, then we should have, approximately,

y_i ≈ β_0 + β_1 x_{i1} + ··· + β_k x_{ik}

for each i. This can be written as the matrix equation

y ≈ Xβ
if we define

y = (y_1, y_2, ..., y_N)',    β = (β_0, β_1, ..., β_k)',

X = [1  x_11  ···  x_1k
     1  x_21  ···  x_2k
     ⋮   ⋮          ⋮
     1  x_N1  ···  x_Nk].
One method of estimating the β_j's, which we will discuss from time to time in this text, is called the method of least squares. If β̂ = (β̂_0, β̂_1, ..., β̂_k)' is an estimate of the parameter vector β, then ŷ = Xβ̂ is the vector of fitted values, while y − ŷ gives the vector of errors or deviations of the actual responses from the corresponding fitted values, and

f(β̂) = (y − Xβ̂)'(y − Xβ̂)

gives the sum of squares of these errors. The method of least squares selects
as β̂ any vector that minimizes the function f(β̂). We will see later that any such vector satisfies the system of linear equations, sometimes referred to as the normal equations,

X'Xβ̂ = X'y.

If X has full column rank, that is, rank(X) = k + 1, then (X'X)^{−1} exists and so the least squares estimator of β is unique and is given by

β̂ = (X'X)^{−1}X'y.
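A minimal numerical sketch of the normal equations: the design, true coefficients, and noise level below are invented for illustration. Solving X'Xβ̂ = X'y directly should agree with NumPy's general least squares solver.

```python
import numpy as np

rng = np.random.default_rng(3)
N, k = 50, 2
X = np.column_stack([np.ones(N), rng.standard_normal((N, k))])  # N x (k+1) design
beta = np.array([2.0, 1.0, -0.5])                               # assumed true parameters
y = X @ beta + 0.1 * rng.standard_normal(N)

# Least squares estimator from the normal equations: solve X'X b = X'y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# np.linalg.lstsq minimizes the same sum of squared errors directly.
beta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_hat)
```

In practice one solves the linear system rather than forming (X'X)^{−1} explicitly, which is cheaper and numerically safer.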
PROBLEMS

1. Prove Theorem 1.3(e); that is, if A is an m × n matrix, show that tr(A'A) = 0 if and only if A = (0).

2. Show that if x and y are m × 1 vectors, then tr(xy') = x'y. Show that if A and B are m × m matrices and B is nonsingular, then tr(B^{−1}AB) = tr(A).
3. Prove Theorem 1.4.

4. Show that any square matrix can be written as the sum of a symmetric matrix and a skew-symmetric matrix.

5. Define the m × m matrices A, B, and C as

Prove that |A| = |B| + |C|.

6. Verify the results of Theorem 1.6.

7. Consider the 4 × 4 matrix
A = [0  1  1  1
     2  0  1  2
     2  1  1  0
     2  1  1  2].

Find the determinant of A by using the cofactor expansion formula on the first column of A.

8. Using the matrix A from the previous problem, verify equation (1.3) when i = 1 and k = 2.

9. Let λ be a variable and consider the determinant of A − λI_m, where A is an m × m matrix, as a function of λ. What type of function of λ is this?

10. Find the adjoint matrix of the matrix A given in Problem 7. Use this to obtain the inverse of A.
11. Using elementary transformations, determine matrices B and C so that BAC = I_4 for the matrix A given in Problem 7. Use B and C to compute the inverse of A; that is, take the inverse of both sides of the equation BAC = I_4 and then solve for A^{−1}.

12. Show that the determinant of a triangular matrix is the product of its diagonal elements. In addition, show that the inverse of a lower triangular matrix is a lower triangular matrix.

13. Let a and b be m × 1 vectors and D be an m × m diagonal matrix. Use Corollary 1.7.2 to find an expression for the inverse of D + αab', where α is a scalar.
14. Consider the m × m partitioned matrix

A = [A_11  (0)
     A_21  A_22],

where the m_1 × m_1 matrix A_11 and the m_2 × m_2 matrix A_22 are nonsingular. Obtain an expression for A^{−1} in terms of A_11, A_22, and A_21.

15. Let
A = [A_11  A_12
     A_21  A_22],

where A_11 is m_1 × m_1, A_22 is m_2 × m_2, and A_12 is m_1 × m_2. Show that if A is positive definite, then A_11 and A_22 are also positive definite.

16. Find the rank of the 4 × 4 matrix
A = [3  2  0  1
     1  1  2  0
     1  1  1  1
     2  0  0  2].
17. Use elementary transformations to transform the matrix A given in Problem 16 to a matrix H having the form given in Theorem 1.9. Consequently, determine matrices B and C so that BAC = H.

18. Prove parts (b) and (c) of Theorem 1.10.

19. List all permutation matrices of order 3.

20. Consider the 3 × 3 matrix

P = [1/√3  1/√2   1/√6
     1/√3    0   −2/√6
     p_31  p_32   p_33].

Find values for p_31, p_32, and p_33 so that P is an orthogonal matrix. Is your solution unique?
21. Suppose the m × m orthogonal matrix P is partitioned as P = [P_1 P_2], where P_1 is m × m_1, P_2 is m × m_2, and m_1 + m_2 = m. Show that P_1'P_1 = I_{m_1}, P_2'P_2 = I_{m_2}, and P_1P_1' + P_2P_2' = I_m.

Due to the orthogonality of the z_i's, x'y = 0, so x ∈ S⊥, and thus T ⊆ S⊥. Conversely, suppose that x ∈ S⊥. Since x is also in R^m, there exist scalars c_1, ..., c_m such that x = c_1z_1 + ··· + c_mz_m. Now if we let y = c_1z_1 + ··· + c_rz_r, then y ∈ S, and since x ∈ S⊥, we must have x'y = c_1² + ··· + c_r² = 0. But this can only happen if c_1 = ··· = c_r = 0, in which case x = c_{r+1}z_{r+1} + ··· + c_mz_m, and so x ∈ T. Thus, we also have S⊥ ⊆ T, and this establishes that S⊥ = T, the space spanned by {z_{r+1}, ..., z_m}.
7. PROJECTION MATRICES

The orthogonal projection of an m × 1 vector x onto a vector space S can be conveniently expressed in matrix form. Let {z_1, ..., z_r} be an orthonormal basis for S while {z_1, ..., z_m} is an orthonormal basis for R^m. Suppose c_1, ..., c_m are the constants satisfying the relationship

x = (c_1z_1 + ··· + c_rz_r) + (c_{r+1}z_{r+1} + ··· + c_mz_m) = u + v,
where u and v are as previously defined. Write a = (a_1', a_2')' and Z = [Z_1 Z_2], where a_1 = (c_1, ..., c_r)', a_2 = (c_{r+1}, ..., c_m)', Z_1 = (z_1, ..., z_r), and Z_2 = (z_{r+1}, ..., z_m). Then the expression for x given above can be written as

x = Za = Z_1a_1 + Z_2a_2;

that is, u = Z_1a_1 and v = Z_2a_2. Due to the orthonormality of the z_i's, we have Z_1'Z_1 = I_r and Z_1'Z_2 = (0), and so

Z_1'x = Z_1'(Z_1a_1 + Z_2a_2) = a_1.

Thus, u = Z_1a_1 = Z_1Z_1'x, and we have the following result.

Theorem 2.17. Suppose the columns of the m × r matrix Z_1 form an orthonormal basis for the vector space S, which is a subspace of R^m. If x ∈ R^m, then the orthogonal projection of x onto S is given by Z_1Z_1'x.
The matrix Z_1Z_1' appearing in Theorem 2.17 is called the projection matrix for the vector space S and will sometimes be denoted by P_S. Similarly, Z_2Z_2' is the projection matrix for S⊥, and ZZ' = I_m is the projection matrix for R^m. Since ZZ' = Z_1Z_1' + Z_2Z_2', we have the simple equation Z_2Z_2' = I_m − Z_1Z_1' relating the projection matrices of a vector subspace and its orthogonal complement. Although a vector space does not have a unique orthonormal basis, the projection matrix formed from these orthonormal bases is unique.

Theorem 2.18. Suppose the columns of the m × r matrices Z_1 and W_1 each form an orthonormal basis for the r-dimensional vector space S. Then Z_1Z_1' = W_1W_1'.
Proof. Each column of W_1 can be written as a linear combination of the columns of Z_1, since the columns of Z_1 span S and each column of W_1 is in S; that is, there exists an r × r matrix P such that W_1 = Z_1P. But W_1'W_1 = Z_1'Z_1 = I_r, since each matrix has orthonormal columns. Thus,

I_r = W_1'W_1 = P'Z_1'Z_1P = P'P,

so that P is an orthogonal matrix. Consequently, P also satisfies PP' = I_r, and so

W_1W_1' = Z_1PP'Z_1' = Z_1Z_1'. □
We will take another look at the Gram-Schmidt orthonormalization procedure, this time utilizing projection matrices. The procedure takes an initial linearly independent set of vectors {x_1, ..., x_r}, which is transformed to an orthogonal set {y_1, ..., y_r}, which is then transformed to an orthonormal set {z_1, ..., z_r}. It is very easy to verify that for i = 1, ..., r − 1, the vector y_{i+1} can be expressed as

y_{i+1} = x_{i+1} − Σ_{j=1}^i (z_j'x_{i+1})z_j;

that is, y_{i+1} = (I_m − Z_{(i)}Z_{(i)}')x_{i+1}, where Z_{(i)} = (z_1, ..., z_i). Thus, the (i + 1)th orthogonal vector y_{i+1} is obtained as the projection of the (i + 1)th original vector onto the orthogonal complement of the vector space spanned by the first i orthogonal vectors, y_1, ..., y_i. The Gram-Schmidt orthonormalization process represents one method of obtaining an orthonormal basis for a vector space S from a given basis {x_1, ..., x_r}. In general, if we define the m × r matrix X_1 = (x_1, ..., x_r), the columns of

Z_1 = X_1A    (2.6)

will form an orthonormal basis for S if A is any r × r matrix for which
Z_1'Z_1 = A'X_1'X_1A = I_r.

The matrix A must be nonsingular since we must have rank(X_1) = rank(Z_1) = r; so A^{−1} exists, and X_1'X_1 = (A^{−1})'A^{−1} or (X_1'X_1)^{−1} = AA'; that is, A is a square root matrix of (X_1'X_1)^{−1}. Consequently, we can obtain an expression for the projection matrix P_S onto the vector space S in terms of X_1 as

P_S = Z_1Z_1' = X_1AA'X_1' = X_1(X_1'X_1)^{−1}X_1'.    (2.7)

Note that the Gram-Schmidt equations given in (2.5) can be written in matrix form as Y_1 = X_1T, where Y_1 = (y_1, ..., y_r), X_1 = (x_1, ..., x_r), and T is an r × r upper triangular matrix with each diagonal element equal to 1. The normalization to produce Z_1 can then be written as Z_1 = X_1TD^{−1}, where D is the diagonal matrix with the positive square root of y_i'y_i as its ith diagonal element. Consequently, the matrix A = TD^{−1} is upper triangular with positive diagonal elements. Thus the Gram-Schmidt orthonormalization is the particular case of equation (2.6) in which the matrix A has been chosen to be the upper triangular square root matrix of (X_1'X_1)^{−1} having positive diagonal elements.
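Equation (2.7) and Theorem 2.18 can be illustrated together: the QR decomposition (used here as a stand-in for Gram-Schmidt; the basis vectors are random assumptions) produces one orthonormal basis Z_1, and the projection matrix it yields must match X_1(X_1'X_1)^{−1}X_1'.

```python
import numpy as np

rng = np.random.default_rng(4)
X1 = rng.standard_normal((5, 3))        # basis vectors as columns (full rank a.s.)

# Projection matrix onto S = span of the columns of X1, as in (2.7);
# solve with X1'X1 rather than inverting it explicitly.
P = X1 @ np.linalg.solve(X1.T @ X1, X1.T)

# QR gives an orthonormal basis Z1 for the same space, so by Theorem 2.17
# and the uniqueness in Theorem 2.18, Z1 Z1' must equal P.
Z1, _ = np.linalg.qr(X1)
print(np.allclose(P, Z1 @ Z1.T))
```

P is also symmetric and idempotent, the two properties characterized later in Theorem 2.19.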
Example 2.9. Using the basis {x_1, x_2, x_3} from Example 2.7, we form the X_1 matrix

X_1 = [ 1   1  −3
       −1   2   1
        1   1  −1
        1  −2   1],

and it is easy to verify that

(X_1'X_1)^{−1} = (1/36)[26  10  12
                        10   8   6
                        12   6   9].

Thus, the projection matrix for the vector space S spanned by {x_1, x_2, x_3} is given by

P_S = X_1(X_1'X_1)^{−1}X_1' = (1/4)[ 3  −1   1  −1
                                    −1   3   1  −1
                                     1   1   3   1
                                    −1  −1   1   3].

This, of course, is the same as Z_1Z_1', where Z_1 = (z_1, z_2, z_3) and z_1, z_2, z_3 are the vectors obtained by the Gram-Schmidt orthonormalization in the previous example. Now if x = (−1, 2, 1, 0)', then the projection of x onto S is X_1(X_1'X_1)^{−1}X_1'x = x; the projection of x is equal to x since x = x_1 + x_2 + x_3 ∈ S. On the other hand, if x = (1, 1, 2, 1)', then the projection of x is given by u = X_1(X_1'X_1)^{−1}X_1'x = (3/4, 3/4, 9/4, 3/4)'. The component of x orthogonal to S, or in other words, the orthogonal projection of x onto S⊥, is {I − X_1(X_1'X_1)^{−1}X_1'}x = x − u = (1/4, 1/4, −1/4, 1/4)'. This gives us the decomposition

x = (1, 1, 2, 1)' = (3/4, 3/4, 9/4, 3/4)' + (1/4, 1/4, −1/4, 1/4)' = u + v

of Theorem 2.14.
Example 2.10. We will generalize some of the ideas of Example 2.8 to the multiple regression model relating a response variable y to k explanatory variables x_1, ..., x_k. If we have N observations, this model can be written as

y = Xβ + ε,

where y is N × 1, X is N × (k + 1), β is (k + 1) × 1, and ε is N × 1, while the vector of fitted values is given by
ŷ = Xβ̂,

where β̂ is an estimate of β. Clearly, for any β̂, ŷ is a point in the subspace of R^N spanned by the columns of X. To be a least squares estimate of β, β̂ must be such that ŷ = Xβ̂ yields the point in this subspace closest to the vector y, since this will have the sum of squared errors,

(y − Xβ̂)'(y − Xβ̂),

minimized. Thus Xβ̂ must be the orthogonal projection of y onto the space spanned by the columns of X. If X has full column rank, then this space has projection matrix X(X'X)^{−1}X', and so the required projection is

Xβ̂ = X(X'X)^{−1}X'y.

Premultiplying this equation by (X'X)^{−1}X', we obtain the least squares estimator

β̂ = (X'X)^{−1}X'y.

In addition, we find that the sum of squared errors (SSE) for the fitted model ŷ = Xβ̂ can be written as
SSE_1 = (y − Xβ̂)'(y − Xβ̂) = (y − X(X'X)^{−1}X'y)'(y − X(X'X)^{−1}X'y)
      = y'(I_N − X(X'X)^{−1}X')²y = y'(I_N − X(X'X)^{−1}X')y,

and so this sum of squares represents the squared distance of the projection of y onto the orthogonal complement of the column space of X. Suppose now that β and X are partitioned as β = (β_1', β_2')' and X = (X_1, X_2), where the number of columns of X_1 is the same as the number of elements in β_1, and we wish to decide whether or not β_2 = 0. If the columns of X_1 are orthogonal to the columns of X_2, then X_1'X_2 = (0) and

(X'X)^{−1} = [(X_1'X_1)^{−1}       (0)
                   (0)       (X_2'X_2)^{−1}],
so β̂ can be partitioned as β̂ = (β̂_1', β̂_2')', where β̂_1 = (X_1'X_1)^{−1}X_1'y and β̂_2 = (X_2'X_2)^{−1}X_2'y. Further, the sum of squared errors for the fitted model ŷ = Xβ̂ can be decomposed as

SSE_1 = (y − Xβ̂)'(y − Xβ̂) = y'(I_N − X(X'X)^{−1}X')y
      = y'(I_N − X_1(X_1'X_1)^{−1}X_1' − X_2(X_2'X_2)^{−1}X_2')y.
On the other hand, β̂_1 = (X_1'X_1)^{−1}X_1'y is the least squares estimator of β_1 in the reduced model

y = X_1β_1 + ε,

while its sum of squared errors is given by

SSE_2 = y'(I_N − X_1(X_1'X_1)^{−1}X_1')y.

Thus, the term SSE_2 − SSE_1 = y'X_2(X_2'X_2)^{−1}X_2'y gives the reduction in the sum of squared errors attributable to the inclusion of the term X_2β_2 in the model y = Xβ + ε = X_1β_1 + X_2β_2 + ε, and so its relative size will be helpful in deciding whether or not β_2 = 0. If β_2 = 0, then the N observations of y should be randomly clustered about the column space of X_1 in R^N, with no tendency to deviate from this subspace more in one direction than in any other direction, while if β_2 ≠ 0, we would expect larger deviations in directions within the column space of X_2 than in directions orthogonal to the column space of X. Now, since the dimension of the column space of X is k + 1, SSE_1 is the sum of squared deviations in N − k − 1 orthogonal directions, while SSE_2 − SSE_1 gives the sum of squared deviations in k_2 orthogonal directions, where k_2 is the number of components in β_2. Thus, SSE_1/(N − k − 1) and (SSE_2 − SSE_1)/k_2 should be of similar magnitudes if β_2 = 0, while the latter should be larger than the former if β_2 ≠ 0. Consequently, a decision about β_2 can be based on the value of the statistic
F = {(SSE_2 − SSE_1)/k_2} / {SSE_1/(N − k − 1)}.    (2.8)
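The comparison in (2.8) can be sketched numerically. The design below is an invented example in which β_2 really is 0, so F should typically be of moderate size; the helper `sse` computes the squared length of the residual, i.e., of the projection of y onto the orthogonal complement of the fitted column space.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 40
X1 = np.column_stack([np.ones(N), rng.standard_normal(N)])   # reduced-model terms
X2 = rng.standard_normal((N, 2))                             # k2 = 2 extra terms
X = np.column_stack([X1, X2])
y = X1 @ np.array([1.0, 2.0]) + rng.standard_normal(N)       # beta2 = 0 here

def sse(M, y):
    # Squared length of y minus its projection onto the column space of M.
    fitted = M @ np.linalg.lstsq(M, y, rcond=None)[0]
    return float((y - fitted) @ (y - fitted))

sse1, sse2 = sse(X, y), sse(X1, y)
k2, df = X2.shape[1], N - X.shape[1]                         # df = N - k - 1
F = ((sse2 - sse1) / k2) / (sse1 / df)                       # statistic (2.8)
print(F)
```

Since the column space of X_1 is contained in that of X, the reduction SSE_2 − SSE_1 is always nonnegative, and so is F.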
Using results that we will develop in Chapter 9, it can be shown that F ~ F_{k_2, N−k−1} if ε ~ N_N(0, σ²I_N) and β_2 = 0.

When X_1'X_2 ≠ (0), the expression for SSE_2 − SSE_1 is not equal to y'X_2(X_2'X_2)^{−1}X_2'y since, in this case, ŷ is not the sum of the projection of y onto the column space of X_1 and the projection of y onto the column space of X_2. To properly assess the effect of the inclusion of the term X_2β_2 in the model, we must decompose ŷ into the sum of the projection of y onto the column space of X_1 and the projection of y onto the subspace of the column space of X_2 orthogonal to the column space of X_1. This latter subspace is spanned by
the columns of

(I_N − X_1(X_1'X_1)^{−1}X_1')X_2,

since I_N − X_1(X_1'X_1)^{−1}X_1' is the projection matrix of the orthogonal complement of the column space of X_1. Thus, writing X_{2·1} = (I_N − X_1(X_1'X_1)^{−1}X_1')X_2, the vector of fitted values ŷ = Xβ̂ can be written as

ŷ = X_1(X_1'X_1)^{−1}X_1'y + X_{2·1}(X_{2·1}'X_{2·1})^{−1}X_{2·1}'y.

Further, the sum of squared errors is given by

SSE_1 = y'(I_N − X_1(X_1'X_1)^{−1}X_1' − X_{2·1}(X_{2·1}'X_{2·1})^{−1}X_{2·1}')y,

and the reduction in the sum of squared errors attributable to the inclusion of the term X_2β_2 in the model y = Xβ + ε is

SSE_2 − SSE_1 = y'X_{2·1}(X_{2·1}'X_{2·1})^{−1}X_{2·1}'y.
Least squares estimators are not always unique as they have been throughout this example. For instance, let us return to the least squares estimation of β in the model y = Xβ + ε, where now X does not have full column rank. As before, ŷ = Xβ̂ will be given by the orthogonal projection of y onto the space spanned by the columns of X, but the necessary projection matrix cannot be expressed as X(X'X)^{−1}X', since X'X is singular. If the projection matrix of the column space of X is denoted by P_{R(X)}, then a least squares estimator of β is any vector β̂ satisfying

Xβ̂ = P_{R(X)}y.

Since X does not have full column rank, the dimension of its null space is at least one, and so we will be able to find a nonnull vector a satisfying Xa = 0. In this case, β̂ + a is also a least squares estimator since

X(β̂ + a) = Xβ̂ = P_{R(X)}y,

and so the least squares estimator is not unique.
We have seen that if the columns of an m × r matrix Z_1 form an orthonormal basis for a vector space S, then the projection matrix of S is given by Z_1Z_1'. Clearly this projection matrix is symmetric and, since Z_1'Z_1 = I_r, it is also idempotent; that is, every projection matrix is symmetric and idempotent. Our next
result proves the converse. Every symmetric idempotent matrix is a projection matrix for some vector space.
Theorem 2.19.
Let P be an m x m symmetric idempotent matrix of rank r. Then there is an rdimensional vector space which has P as its projection matrix.
Proof. From Corollary 1.9.1, there exist an m × r matrix F and an r × m matrix G such that rank(F) = rank(G) = r and P = FG. Since P is idempotent, we have

FGFG = FG,
which implies that

F'FGFGG' = F'FGG'.    (2.9)

Since F and G' are full column rank, the matrices F'F and GG' are nonsingular. Premultiplying (2.9) by (F'F)^{−1} and postmultiplying by (GG')^{−1}, we obtain GF = I_r. Using this and the symmetry of P = FG, we find that

F = FGF = (FG)'F = G'F'F,

which leads to G' = F(F'F)^{−1}. Thus,

P = FG = F(F'F)^{−1}F'.

Comparing this to equation (2.7), we see that P must be the projection matrix for the vector space spanned by the columns of F. This completes the proof. □
Example 2.11. Consider the 3 × 3 matrix

P = (1/6)[ 5  −1   2
          −1   5   2
           2   2   2].

Clearly, P is symmetric and it is easily verified that P is idempotent, so P is a projection matrix. We will find the vector space S associated with this projection matrix. First note that the first two columns of P are linearly independent, while the third column is the average of the first two columns. Thus, rank(P) = 2 and so the dimension of the vector space associated with P is 2. For any x ∈ R³, Px yields a vector in S. In particular, Pe_1 and Pe_2 are in S. These two vectors form a basis for S since they are linearly independent and the dimension of S is 2. Consequently, S contains all vectors of the form (5a − b, 5b − a, 2a + 2b)'.
8. LINEAR TRANSFORMATIONS AND SYSTEMS OF LINEAR EQUATIONS

If S is a vector subspace of R^m with projection matrix P_S, then we have seen that for any x ∈ R^m, u = u(x) = P_Sx is the orthogonal projection of x onto S; that is, each x ∈ R^m is transformed into a u ∈ S. The function u(x) = P_Sx is an example of a linear transformation of R^m into S.
Definition 2.11. Let u be a function defined for all x in the vector space T such that for any x ∈ T, u(x) ∈ S, where S is also a vector space. Then the transformation defined by u is a linear transformation of T into S if, for any two scalars α_1 and α_2 and any two vectors x_1 ∈ T and x_2 ∈ T,

u(α_1x_1 + α_2x_2) = α_1u(x_1) + α_2u(x_2).
We will be interested in matrix transformations of the form u = Ax, where x is in the subspace of R^n denoted by T, u is in the subspace of R^m denoted by S, and A is an m × n matrix. This defines a transformation of T into S, and the transformation is linear since, for scalars α_1, α_2 and n × 1 vectors x_1 and x_2, it follows immediately that

A(α_1x_1 + α_2x_2) = α_1Ax_1 + α_2Ax_2.    (2.10)

In fact, every linear transformation can be expressed as a matrix transformation. For the orthogonal projection described at the beginning of this section, A = P_S, so that n = m and thus we have a linear transformation of R^m into R^m, or, to be more specific, a linear transformation of R^m into S. In particular, for the multiple regression problem discussed in Example 2.10, we saw that for any N × 1 vector of observations y, the vector of estimated or fitted values was given by ŷ = X(X'X)^{−1}X'y. Thus, since y ∈ R^N and ŷ ∈ R(X), we have here a linear transformation of R^N into R(X).

It should be obvious from (2.10) that if S is actually defined to be the set {u: u = Ax; x ∈ T}, then T being a vector space guarantees that S will also be a vector space. In addition, if the vectors x_1, ..., x_r span T, then the vectors Ax_1, ..., Ax_r span S. In particular, if T is R^n, then since e_1, ..., e_n span R^n, we find that (A)_{·1}, ..., (A)_{·n} span S; that is, S is the column space or range of A, since it is spanned by the columns of A.

When the matrix A does not have full column rank, there will be vectors x, other than the null vector, satisfying Ax = 0. The set of all such vectors is called the null space of the transformation Ax, or simply the null space of the matrix A.

Theorem 2.20. Let the linear transformation of R^n into S be given by u = Ax, where x ∈ R^n and A is an m × n matrix. Then the null space of A, given by the set

N(A) = {x: Ax = 0, x ∈ R^n},

is a vector space.
Proof. Let x1 and x2 be in N(A) so that Ax1 = Ax2 = 0. Then for any scalars α1 and α2, we have

A(α1x1 + α2x2) = α1Ax1 + α2Ax2 = 0,

so that (α1x1 + α2x2) ∈ N(A) and, hence, N(A) is a vector space.  □
The null space of a matrix A is related to the concept of orthogonal complements discussed in Section 2.6. In fact, the null space of the matrix A is the same as the orthogonal complement of the row space of A. Similarly, the null space of the matrix A' is the same as the orthogonal complement of the column space of A. The following result is an immediate consequence of Theorem 2.16.
Theorem 2.21. Let A be an m × n matrix. If the dimension of the row space of A is r1 and the dimension of the null space of A is r2, then r1 + r2 = n.

Since the rank of the matrix A is equal to the dimension of the row space of A, the result above can be equivalently expressed as

rank(A) = n − dim{N(A)}                (2.11)
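The rank-nullity relation (2.11) is easy to check numerically. The sketch below is my own illustration (the matrix is an arbitrary choice, not from the text); it obtains a null-space basis from the SVD and also confirms the identity rank(A) = rank(A'A) treated in Example 2.12:

```python
import numpy as np

# A 3 x 4 matrix without full column rank: the third column is
# 2*(col 1) + (col 2) and the fourth is (col 1) + (col 2).
A = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 1.],
              [1., 1., 3., 2.]])
n = A.shape[1]

r = np.linalg.matrix_rank(A)

# The right singular vectors belonging to the zero singular values
# give an orthonormal basis for N(A).
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[r:].T                       # n x (n - r)
dim_null = null_basis.shape[1]

assert np.allclose(A @ null_basis, 0)       # basis vectors satisfy Ax = 0
assert r == n - dim_null                    # rank(A) = n - dim{N(A)}, eq. (2.11)
assert r == np.linalg.matrix_rank(A.T @ A)  # rank(A) = rank(A'A)
```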
This connection between the rank of a matrix and the dimension of the null space of that matrix can be very useful in determining the rank of a matrix in certain situations.

Example 2.12. To illustrate the utility of (2.11), we will give an alternative proof of the identity rank(A) = rank(A'A), which was given as Theorem 2.10(c). Suppose x is in the null space of A so that Ax = 0. Then, clearly, we must have A'Ax = 0, which implies that x is also in the null space of A'A, so it follows that dim{N(A)} ≤ dim{N(A'A)} or, equivalently,

rank(A) ≥ rank(A'A)                (2.12)

On the other hand, if x is in the null space of A'A, then A'Ax = 0. Premultiplying by x' yields x'A'Ax = 0, which is satisfied only if Ax = 0. Thus, x is also in the null space of A, so that dim{N(A)} ≥ dim{N(A'A)}, or

rank(A) ≤ rank(A'A)                (2.13)
Combining (2.12) and (2.13), we get rank(A) = rank(A'A).

When A is an m × m nonsingular matrix and x ∈ R^m, then u = Ax defines a one-to-one transformation of R^m onto R^m. One way of viewing this transformation is as the movement of each point in R^m to another point in R^m. Alternatively, we can view the transformation as a change of coordinate axes. For instance, if we start with the standard coordinate axes, which are given by the columns e1, ..., em of the identity matrix I_m, then, since for any x ∈ R^m, x = x1e1 + ··· + xmem, the components of x give the coordinates of the point x relative to these standard coordinate axes. On the other hand, if x1, ..., xm is another basis for R^m, then from Theorem 2.7 there exist scalars u1, ..., um so that, with u = (u1, ..., um)' and X = (x1, ..., xm), we have

x = u1x1 + ··· + umxm = Xu;

that is, u = (u1, ..., um)' gives the coordinates of the point x relative to the coordinate axes x1, ..., xm. The transformation from the standard coordinate system to the one with axes x1, ..., xm is then given by the matrix transformation u = Ax, where A = X^{-1}. Note that the squared Euclidean distance of u from the origin,
u'u = (Ax)'(Ax) = x'A'Ax,

will be the same as the squared Euclidean distance of x from the origin for every choice of x if and only if A, and hence also X, is an orthogonal matrix. In this case, x1, ..., xm forms an orthonormal basis for R^m, and so the transformation has replaced the standard coordinate axes by a new set of orthogonal axes given by x1, ..., xm.

Example 2.13. Orthogonal transformations are of two types according to whether the determinant of A is +1 or −1. If |A| = 1, then the new axes can be obtained by a rotation of the standard axes. For example, for a fixed angle θ, let
A =  [  cos θ   −sin θ    0
        sin θ    cos θ    0
          0        0      1  ],

so that |A| = cos²θ + sin²θ = 1. The transformation given by u = Ax transforms the standard axes e1, e2, e3 to the new axes x1 = (cos θ, −sin θ, 0)',
x2 = (sin θ, cos θ, 0)', x3 = e3, and this simply represents a rotation of e1 and e2 through an angle of θ. If instead we have
A =  [  cos θ   −sin θ     0
        sin θ    cos θ     0
          0        0      −1  ],

then |A| = (cos²θ + sin²θ)(−1) = −1. Now the transformation given by u = Ax transforms the standard axes to the new axes x1 = (cos θ, −sin θ, 0)', x2 = (sin θ, cos θ, 0)', and x3 = −e3; these axes are obtained by a rotation of e1 and e2 through an angle of θ followed by a reflection of e3 about the x1, x2 plane.
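These rotation and reflection matrices can be checked numerically. The following sketch is my own construction (the angle is an arbitrary choice); it verifies orthogonality, preservation of length, and the sign of the determinant in each case:

```python
import numpy as np

theta = 0.7  # an arbitrary fixed angle

# Rotation of e1, e2 through theta, leaving the third axis fixed.
A_rot = np.array([[np.cos(theta), -np.sin(theta), 0.],
                  [np.sin(theta),  np.cos(theta), 0.],
                  [0.,             0.,            1.]])

# The same rotation followed by a reflection of e3.
A_ref = A_rot @ np.diag([1., 1., -1.])

for A in (A_rot, A_ref):
    assert np.allclose(A.T @ A, np.eye(3))   # A is orthogonal
    x = np.array([1., -2., 3.])
    u = A @ x
    assert np.isclose(u @ u, x @ x)          # Euclidean length is preserved

assert np.isclose(np.linalg.det(A_rot), 1.0)   # |A| = +1: pure rotation
assert np.isclose(np.linalg.det(A_ref), -1.0)  # |A| = -1: rotation plus reflection
```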
Although orthogonal transformations are very common, there are situations in which nonsingular nonorthogonal transformations are useful.
Example 2.14. Suppose we have several three-dimensional vectors x1, ..., xr that are observations from distributions, each having the same positive definite covariance matrix Ω. If we are interested in how these vectors differ from one another, then a plot of the points in R³ may be useful. However, as discussed in Example 2.2, if Ω is not the identity matrix, then the Euclidean distance is not appropriate, and so it becomes difficult to compare and interpret the observed differences among the r points. This difficulty can be resolved by an appropriate transformation. We will see in a later chapter that since Ω is positive definite, there exists a nonsingular matrix T satisfying Ω = TT'. If we let u_i = T^{-1}x_i, then the Mahalanobis distance, which was defined in Example 2.2, between x_i and x_j is

d_Ω(x_i, x_j) = {(x_i − x_j)'Ω^{-1}(x_i − x_j)}^{1/2}
            = {(x_i − x_j)'T'^{-1}T^{-1}(x_i − x_j)}^{1/2}
            = {(T^{-1}x_i − T^{-1}x_j)'(T^{-1}x_i − T^{-1}x_j)}^{1/2}
            = {(u_i − u_j)'(u_i − u_j)}^{1/2} = d_I(u_i, u_j),

while the covariance matrix of u_i is given by

var(u_i) = var(T^{-1}x_i) = T^{-1}{var(x_i)}T'^{-1} = T^{-1}ΩT'^{-1} = I_3

That is, the transformation u_i = T^{-1}x_i produces vectors for which the Euclidean distance function is an appropriate measure of distance between points.
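A numerical sketch of this transformation follows. It uses the Cholesky factorization to obtain one possible nonsingular T with Ω = TT'; the covariance values and points are illustrative choices of mine, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# A positive definite covariance matrix (illustrative values).
Omega = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.5],
                  [0.3, 0.5, 1.0]])

# Cholesky factorization: Omega = T T' with T nonsingular.
T = np.linalg.cholesky(Omega)
Tinv = np.linalg.inv(T)

xi, xj = rng.normal(size=3), rng.normal(size=3)
ui, uj = Tinv @ xi, Tinv @ xj

d = xi - xj
mahal = np.sqrt(d @ np.linalg.inv(Omega) @ d)  # d_Omega(xi, xj)
euclid = np.linalg.norm(ui - uj)               # d_I(ui, uj)

# Mahalanobis distance of the x's equals Euclidean distance of the u's.
assert np.isclose(mahal, euclid)
```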
In our next two examples, we discuss some transformations that are sometimes useful in regression analysis.
Example 2.15. A simple transformation that is useful in some situations is one that centers a collection of numbers at the origin. For instance, if x̄ is the mean of the components of x = (x1, ..., xN)', then the components of

(x1 − x̄, x2 − x̄, ..., xN − x̄)' = (I_N − N^{-1}1_N1'_N)x

have been centered at the origin. This transformation is sometimes used in a regression analysis to center each of the explanatory variables. Thus, the multiple regression model

y = Xβ + ε = [1_N  X1]β + ε

can be reexpressed as

y = β0 1_N + {N^{-1}1_N1'_N + (I_N − N^{-1}1_N1'_N)}X1β1 + ε
  = β0* 1_N + V1β1 + ε = Vγ + ε,

where V = [1_N, V1], V1 = (I_N − N^{-1}1_N1'_N)X1, and γ = (β0*, β1')' = (β0 + N^{-1}1'_N X1β1, β1')'. Since the columns of V1 are orthogonal to 1_N, the least squares estimator of γ simplifies to

γ̂ = (V'V)^{-1}V'y = (ȳ, {(V1'V1)^{-1}V1'y}')'

Thus, γ̂0 = ȳ. The estimator β̂1 can be conveniently expressed in terms of the sample covariance matrix of the N (k + 1) × 1 vectors that form the rows of the matrix [y X1]. If we denote this covariance matrix by S and partition it as

S =  [  s11   s21'
        s21   S22  ],

then (N − 1)^{-1}V1'V1 = S22 and, since V1'1_N = 0, (N − 1)^{-1}V1'y = s21. Consequently, β̂1 = S22^{-1}s21.

Yet another adjustment to the original regression model involves the standardization of the explanatory variables. In this case, the model becomes

y = δ0 1_N + Z1δ1 + ε = Zδ + ε,

where δ = (δ0, δ1')', Z = (1_N, Z1), δ0 = γ0, Z1 = V1D^{-1/2}, and δ1 = D^{1/2}β1, with D being the diagonal matrix whose ith diagonal element is the sample variance of the ith explanatory variable. The least squares estimators are δ̂0 = ȳ and δ̂1 = s11^{1/2}R22^{-1}r21, where we have partitioned the correlation matrix R in a fashion similar to that of S.
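The identities γ̂0 = ȳ and β̂1 = S22^{-1}s21 can be verified numerically. The sketch below uses simulated data of my own choosing and relies on NumPy's np.cov, whose default divisor is N − 1, matching the sample covariance used above:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 50, 3
X1 = rng.normal(size=(N, k))
y = rng.normal(size=N)
ones = np.ones(N)

# Centering matrix and centered explanatory variables V1.
C = np.eye(N) - np.outer(ones, ones) / N
V1 = C @ X1

# beta1_hat = (V1'V1)^{-1} V1'y
beta1_hat = np.linalg.solve(V1.T @ V1, V1.T @ y)

# Sample covariance matrix of the rows of [y X1], partitioned as in the text.
S = np.cov(np.column_stack([y, X1]), rowvar=False)
s21, S22 = S[1:, 0], S[1:, 1:]
assert np.allclose(beta1_hat, np.linalg.solve(S22, s21))  # beta1_hat = S22^{-1} s21

# Fitting the centered model directly gives gamma0_hat = ybar.
V = np.column_stack([ones, V1])
gamma_hat = np.linalg.lstsq(V, y, rcond=None)[0]
assert np.isclose(gamma_hat[0], y.mean())
assert np.allclose(gamma_hat[1:], beta1_hat)
```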
The centering of the explanatory variables, discussed previously, involves a linear transformation on the columns of X1. In some situations, it is advantageous to employ a linear transformation on the rows of X1, V1, or Z1. For instance, suppose that T is a k × k nonsingular matrix, and we define W1 = Z1T, α0 = δ0, and α1 = T^{-1}δ1, so that the model

y = δ0 1_N + Z1δ1 + ε

can be written as

y = α0 1_N + W1α1 + ε = Wα + ε,

where W = (1_N, W1) and α = (α0, α1')'. This second model uses a different set of explanatory variables than the first; its ith explanatory variable is a linear combination of the explanatory variables of the first model, with the coefficients given by the ith column of T. However, the two models yield equivalent results in terms of the fitted values. To see this, let

T* =  [  1   0'
         0   T  ],

so that W = ZT*, and note that the vector of fitted values from the second model,

ŷ = Wα̂ = W(W'W)^{-1}W'y = ZT*(T*'Z'ZT*)^{-1}T*'Z'y
  = ZT*T*^{-1}(Z'Z)^{-1}T*'^{-1}T*'Z'y = Z(Z'Z)^{-1}Z'y,

is the same as that obtained from the first model.
"
f,
f.,,
y = Xil + E, ,
,
,
•
,,
66
VECTOR SPACES A
where now In this case, our previous estimator, p = (X'X)IX'y, is still the least squares estimator of p, but it doesn't possess certain optimality properties, one of which is illustrated later in Example 3.13, that hold when 2 Var(E) = a IN. In this example, we will consider the situation in which the fiS are still uncorrelated, but their variances are not all the same. Thus, var(E) = n = a 2 e, where e = diag(d, ... , c~) and the CiS are known constants. This special regression problem is sometimes referred to as weighted least squares regression. The weighted least squares estimator of P is obtained by miling a simple transfOlll1ation so that ordinary least squares regression applies to the l l 2 / = diag(ci , ... ,c,} ) and transform transfollned model. Define the matrix e the original regression problem by premultiplying the model equation by e If2 ; the new model equation is 2 Var(E):t a IN'
or, equivalently,
where y* = e I / 2y, X", = e I / 2x, and E* =
I 2 / e E. The covariance matrix of
• •
•
Thus. for the transfOlll1ed model, ordinary least squares regression applies and so the least squares estimator of P can be expressed as . A
P = (X~X*rIX*y* • •
Rewriting this in the original model tenns X and y, we get A
•
P = (X'C1/2C1/2Xrlx'C1/2C1/2y = (X'CIX)IX'CIy.
A common application related to linear transfolll1ations is one in which the matrix A and vector u consist of known constants, while x ·is a vector of variables, and we wish to deteIllline all x for which Ax = u; that is, we want to find the simultaneous solutions XI, •.. ,Xn to the system of m equations
•• "
,· •
•
,
·
THE INTERSECI'ION AND SUM OF VECTOR SPACES
67
• •
•
For instance, in Example 2.10, we saw that the least squares estimator of ,the parameter vector p in the mUltiple regression model satisfies the, equation, XJl = X(X'XrlX'y; that is, here A = X, u = X(X'X)IX'y, and x = Jl. In general, if u = 0, then this system of equations is referred to as a homogeneous system, and the set of all solutions to Ax = u, in this case, is simply given by the null space of A. Consequently, if A has full column rank, then x = 0 is the only solution, whereas there are infinitely many solutions if A has less than full column rank. A nonhomogeneous system of linear equations is one which has u :t O. While a homogeneous system always has at least one solution, x = 0, a nonhomogeneous system mayor may not have any solutions. A system of linear equations that has no solutions is called an inconsistent system of equations, while a system with solutions is referred to as a consistent system. If u :t 0 and Ax = u holds for some x, then u must be a linear combination of the columns of A; that is, the nonhomogeneous system of equations Ax = u is consistent if and only if u is in the column space of A. . The mathematics involved in solving systems of linear equations is most conveniently handled using matrix methods. For example, consider one of the simplest nonhomogeneous systems of linear equations in which the matrix A is square and nonsingular. In this case, since A I exists, we find that the system Ax = u has a solution that is unique and is given by x = A I u. Similarly, when the matrix A is singular or not even square, matrix methods can be used to detelllline whether the system is consistent, and if so, the solutions can be given as matrix expressions. These results regarding the solution of a general system of linear equations will be developed in Chapter 6.
9. THE INTERSECTION AND SUM OF VECTOR SPACES In this section, we discuss some common ways of fOllning a vector subspace from two or more given subspaces. The first of these utilizes a familiar operation from set theory. Definition 2.12. Let SI and S2 be vector subspaces of Rm. The intersection of SI and S2, denoted by SI n S2, is the vector subspace given as
Note that this definition says that the set sin S2 is a vector subspace if SI and S2 are vector subspaces. This follows from the fact that if XI and X2 are in
VECTOR SPACES
68
are S2 d an SI ce sin , us Th . S2 E X2 , S2 E XI d an SI SI n S2 , then XI E S" X2 E d an S2 d an SI in be ll wi X2 (X2 + XI (XI , (X2 d an (XI rs ala vector spaces, for any sc on hi fas s ou vi ob an in ed liz ra ne ge be n ca 2 2.1 on iti fin De . hence also in SI n S2 . S, , ..• , S) es ac sp r cto ve r the of ' S, n ... n SI to the intersection, , is the S2 d an SI of ts en m ele the s ne bi m co ich wh , on A second set operati union; that is, the union of S I and S2 is given by
ce pa bs su r cto ve a be o als ll wi S2 U SI n the s, ce pa bs su r If SI and S2 are vecto na bi m co ng wi llo fo the at th n ow sh y sil ea be n ca It I. S S;;; only if SIS;;; S2 or S2 n sio en m di le ib ss po t es all sm the th wi e ac sp r cto ve the s eld tion of SI and S2 yi containing S I U S2 · m
R of s ce pa bs su r cto ve are S2 d an I S If 3. 2.1 on iti Defin by n ve gi e ac sp r cto ve the is , S2 + SI by ted no de , SI and S2
,
then the sum of
•
r the of m su the S" + ... + SI to ed liz ra ne ge be n ca on Again our definiti as t lef en be s ha m re eo th ng wi llo fo the of f oo pr e Th . S, , vector spaces S I, ... • an exercise.
Theorem 2.22.
m
If S I and Si are vector subspaces of R
Ex am pl e 2.17. Let SI and S2 be subspaces of and {Y I'Y 2} ' respectively, where XI
= (1 ,0 ,0 , 1, 0) ',
X2
= (0 ,0 , 1, 0, 1)',
x3 YI
= (0, 1, 0, 0, 0) ', = (1 ,0 ,0 , 1, 1) ',
Y2
= (0, 1, 1, 0, 0) '
5 R
,
then
having bases
{X ),X 2, X3 }
d ne an sp is S2 + I S , ly ar cle w, No . S2 n si d an S2 + I We wish to find bases for S be n ca it d an ' YI X3 + x2 + XI = Y2 at th te No . 2} I'Y by the set {X J,X 2, X3 ,Y = (X3 = (X2 = (XI pt ce ex , (X4 , (X3 , (X2 I. (X ts tan ns co no easily verified that there are a is I} ,Y X3 2, ,X I {X , us Th O. = YI (X4 + X3 (X3 + X2 (X2 (X4 = 0, satisfying (XIXI + at th ow kn we , 22 2. m re eo Th om Fr 4. = ) S2 + I (S dim basis for SI + S2 , and so r. cto ve e on of s ist ns co S2 n si r fo sis ba y an so d an 1, = dim(S I n S2 ) = 3 + 2  4
•
ES AC SP OR Cf VE OF M SU D AN N fIO EC RS TE IN E TH
69
r; cto ve te ria op pr ap an te ca di in ll wi ys the d an xs The dependency between the ce sin , I)' I, I, I, , (1 = Y2 + y, r cto ve the by en that is, a basis for S, n S2 is giv y, + Y2 = x, +x2 +X3· d ne tai ob e ac sp r cto ve the n the }, {O = S2 n S, at th ch su e ar When S, and S2 d an S, of m su ct re di the as to d rre fe re es m eti m so is as the sum of S, and S2 e iqu un a s ha S2 EB S, E X ch ea , se ca ial ec sp is th In . S2 S2 and written S, EB ial ec sp er rth fu A . S2 E X2 d an S, E x, e er wh , X2 '+ representation as x = x, y an r fo is, at th ; es ac sp r cto ve al on og th or e ar S2 d an S, case is one in which n io tat en es pr re e iqu un the , se ca is th In O. = X2 X; ve ha we , x, E S, and X2 E S2 al on og th or the by en giv x, r cto ve the ve ha ll wi S2 EB S, x = x, + X2 for X E of n tio ec oj pr al on og th or the by n ve gi be ll wi X2 ile wh S" projection of x onto r fo d an , S.L EB S = Ern ll, Ri of S ce pa bs su r cto ve y x onto S2. For instance, for an any x E Rm,
,
x = Psx + PsJ.x ,
•
" ,S ... S" es ac sp r cto ve r the of m su the is S e ac sp r cto In general, if a ve S, , ... S" of m su ct re di the be to id sa is S n the and Sj n Sj = {OJ for all i oJ j, and is written as S = S, EB··· EB S, .
d ne an sp is Si ere wh S"" , ... S" es ac sp r cto ve the er id ns Co Example 2.18. nCo ix. atr m y tit en id m x m the of n m lu co ith by {ej} and, as usual, ej is the d ne an sp is Ti ere wh m, T , ... T" , es ac sp r cto ve of ce en sider a second sequ ws llo fo it en Th }. m "e {e by d ne an sp is Tm ile wh 1, m by {e i,e j+ d if i ~ m h ug ho alt r, ve we Ho . Tm m + ... + T, = R as l el that R = S, + ... + Sm ,a sw m is it ce sin m, T EB ... EB T, = R m at th w llo fo t R = SI EB ... EB Sm, it does no n ca ' R" in ' III) ,X ... (x" = x y an us Th j. oJ i all r fo } {O = not true that Ti n Tj es ac sp the of ch ea m fro r cto ve a of d ise pr m co be expressed uniquely as a sum S" ... , Sm; namely
• ••
,,
I
,,I , • •
•
•
i i,
I,
to ng di on sp rre co n io sit po m co de the , nd ha r he ot the where ej E Si' On by e ov ab m su e m sa the t ge n ca we e, nc sta in r Fo . ue iq T" ... , Tm is not un E e3 T" E e2 g sin oo ch by or m T E em " ," choosing e, E T" e2 E T2 ns tio ec oj pr al on og th or the of m su e th n, tio di ad In m' T T 2, ... , em E T m_ , , e lE ec oj pr al on og th or the of m su the ile wh x, s eld yi ,Sm ... of x onto the spaces S" of ce en qu se ird th a as er id ns Co . 2x s eld yi m T , ... tions of x onto the spaces T" ei· ·+ .. + e, = 'Vi d an i} {'V sis ba the s ha Vi ere wh vector spaces, V Io "" '" Vm, m s ha l Ril E x ch ea d an Vm EB ··· EB VI = R so j, . i;J Clearly, V/ n Vj = {OJ if , se ca s thi in r, ve we Ho . Vi E Xi th wi Xm + ... + XI = a unique decomposition x t no is x of n io sit po m co de is th , es ac sp r cto ve al on og th since the Vi S are not or II' VI , ... V" es ac sp the to on x of ns tio ec oj pr al on og th or given by the sum of the
70
VECTOR SPACES
10. CONVEX SETS A special type of subset of a vector space is known as a convex set. Such a set has the property that it contains any point on the line segment connecting any other two points in the set. A formal definition follows. ,,
Definition 2.14. and
Xz E
A set S
!;;
m
R is said to be a convex set if for any XI
E
S
S,
,
eXI
+ (I 
e)x2 E
,
'
S,
,,
where c is any scalar satisfying 0 < e < 1. The condition for a convex set is very similar to the condition for a vector space; for S to be a vector space, we must have for any XI E S and X2 E S, (XIXI + (X2 X Z E S for all (XI and (X2, while for S to be a convex set, this need only hold when (XI and (X2 are nonnegative and (XI + (X2 = 1. Thus, any vector space is a convex set. However, many familiar sets that are not vector spaces are, in fact, convex sets. For instance, intervals in R, rectangles in R2, and ellipsoidal m regions in R are all examples of convex sets. The linear combination of XI and Xz, (XIXI + (X2X2, is called a convex combination when (XI + (X2 = 1 and (Xj ~ 0 for each i. More generally, (XIXI + ... + (XrX, is called a convex combination of the vectors XI, .•• ,X, when (XI + ... + (X, = I and (Xj ~ 0 for each i. Thus, by a simple induction argument, we see that a set S is convex if and only if it is closed under all convex combinations of vectors in S. The following result indicates that the intersection of convex sets and the sum of convex sets are themselves convex. The proof will be left as an exercise.
, •,
'
,
,
,
Theorem 2.23.
Suppose that
SI
and
S2
are convex sets, where
Sj !;;
,
m
•
R for
,
,, "j
eac hi. Then the set
, ,•, ,
,,••
,
(a) (b)
SI
n S2
SI +S2
is convex, and
.." , ..,
.. ;
.,
= {XI +X2: XI E S"X2 E S2} is convex.
j .~
•., ,
"
For any set S, the set C(S) defined as the intersection of all convex sets containing S is called the convex hull of S. Consequently, due to a generalization of Theorem 2.23(a), C(S) is the smallest convex ~et containing S. m A point a is a limit or accumulation point of a set S ~ R if for any 0 > 0, the set So = {x: X E Rm, (x  a)'(x  a) < o} contains at least one point of S distinct from a. A closed set is one that contains all of its limit points. If S is a set, then S will denote its closure; that is, if So is the set of all limit points of S, then S = SU So. In our next theorem, we see that the convexity of S guarantees the convexity of S.
,"
. , " ., ,
,
•
'i'
,
"
, ,,
,
., ,,
'
,
,
. ,
CONVEX SETS
Theorem 2.24.
71
If S ~ R is a convex set, then its closure S is also a Convex m
set. Proof It is easily verified that the set Bn = {x: x E Rm, x'x ::;; n]} is a convex set, where n is a positive integer. Consequently, it follows from Theorem 2.23(b) that Cn = S + Bn is also convex. It also follows from a generalization of the result given in Theorem 2.23(a) that the set
A=
n
Cn
n= ]
is convex. The result now follows by observing that A = S.
o
One of the most important results regarding convex sets is a theorem known as the separating hyperplane theorem. A hyperplane in Rm is a set of the fOlln, T = {x: x e Rm, a'x = e}, where a is an m x I vector and e is a scalar. Thus, if m = 2, T represents a line in R2 and if m = 3, T is a plane in R3. We will see that the separating hyperplane theorem states that two convex sets S] and S2 are separated by a hyperplane if their intersection is empty; that is, there is a hyperplane which partitions Rm into two parts so that S] is contained in one part, while S2 is contained in the other. Before proving this result, we will need to obtain some preliminary results. Our first result is a special case of the separating hyperplane theorem in which one of the sets contains the single point O.
Theorem 2.25. Let S be a nonempty closed convex subset of Rm and suppose that 0 fJ. S. Then there exists an m x I vector a such that a'x > 0 for all xeS. Proof
Let a be a point in S satisfying •
. f X ' x, a,a = In rE
S
where inf denotes the infimum or greatest lower bound. It is a consequence of the fact that S is closed and nonemptythat such an a E S exists. In addition, a J. 0 since 0 fJ. S. Now let e be an arbitrary scalar, x any vector in S except for a, and consider the vector ex + (1  e)a. The squared length of this vector as a function of e is given by fee) = {ex + (1 e)a}' {ex + (I  e)a} = {e(x  a) +a}' {e(x  a) +a}
= e2(x  a)'(x  a) + 2ea'(x  a) +a'a
/
..
d fin we e. iv sit po is ) f(c n tio nc fu tic ra ad qu is th Since the second derivative of that it has a unique minimum at the point a' (x  a) ,c * = ,  ':  ' a) (x  a) '(x 
we so d an 1. :s; c :s; 0 en wh S Note that since S is convex. . ed fin de s wa a y wa e th to e du 1 ::; c ::; 0 r fo a a' = must have x~xc =f(c ) ~f(O) r fo ) f(O > ) f(c at th ies pl im is th ). f(c of re tu uc str tic ra But because of the quad all c > O. In other words. c* ::; O. and this leads to Xc ::= cx
+ (\  c) a
E
a' (x  a);::: O.
or
o
a'x;::: a' a > 0 
t se e th at th ch su 0 > 0 a s ist ex e er th if S of t A point x* is an interior poin x* . nd ha r he ot e th On S. of et bs su a is o} < ) S8 = {x: x E Rm. (x  xS (x  x* t in po e on st lea at s ain nt co S8 t se e th O. > 0 ch is a boundary point of S if for ea d an S ts se e th at th s ow sh lt su re xt ne r Ou S. in t no in S and at least one point . ex nv co is S if ts in po r io ter in e m sa e th ve ha S en op an is T ile wh . Rm of et bs su ex nv co a is S ppose that Th eo re m 2.26. Su subset of Rm. If T c S. then T c S. Pr oo f
Let x* be an arbitrary point in T and define the sets
d an . en op is * T . ex nv co is S* at th m re eo th e th of s It follo ws from the condition ll wi is th ce sin S* E 0 at th ow sh n ca we if ete pl m co T * c S*_ The pr oo f will be 0 > e an d fin n ca we t. se en op an is * T d an * imply that x* E S. Since 0 E T rs cto ve e es th e nc Si *. T in are lm e • ee _. _. l. ee m such that each of the vectors. 1, + ,m ... 2, 1. = i r fo ..• •• i2 l,X Xi s. ce en qu se d also must be in S*. we can fin r fo lm e 7 j xi d an , ,m .. .. l = i r fo j ee 7 such that each xi) E S* and xi) . elm 7 X at th so j) xm .. .. j lj, (X = Xj ix atr m xm i = m+ 1, as j 7 00. Define the m lar gu in ns no is Xj at th ch su N, er teg in an s ist ex e er as j 7 00 . It follows that th for all j > N I . Fo r j > N I • define (2.14)
so that
CU N Vl:.X
~l:.l ~
, all of the N > j all r fo at th ch 2 su , N ~ N2 er teg in e I m so Thus there exists components of Yj are negative. But from (2.14) we have
Xm+I,j  XjYj = [Xj •
r cto ve it un the by ' I) , yj (r cto ve e th e ac pl re This same e?:uation holds if we of ns m lu co e th of n tio na bi m co ex nv co a is 0 us Th . I)' (YfYj + 1)1 2(_Yj, 0 . S* E 0 , ex nv co is S* ce sin so , S* in is [Xj xm+ I,j ], each of which It . m re eo th e lan rp pe hy g tin or pp su the d lle ca es m eti Th e next result is som e lan rp pe hy a s ist ex e er th S, t se ex nv co a of t in po ry states that for any bounda of e sid e on on are S of ts in po e th of ne no at th ch su t . passing through that poin the hyperplane. m
•
r he eit X* at th d an R of et bs su ex nv co a is S at th Theorem 2.27. Suppose 1 x m an s ist ex e er th en Th S. in is it if S of t in po is not in S or is a boundary vector b J. 0 such that b' x ~ b' x* for all XE S. t us m or S in t no is o als X* at th m re eo th us io ev pr e Proof It follows from th of ce en qu se a s ist ex e er th , ly nt ue eq ns Co S. in is be a boundary point of S if it 00 . Co rre sp on di ng ~ i as X* ~ Xi at th ch su S fJ. Xi vectors, Xl ,X2, ... with ea ch ce sin Si fJ. 0 t tha te no d an }, S xe , Xi X = Y : {y to each Xi, define the set Si = m fro ws llo fo it 4, 2.2 m re eo Th by ex nv co d an d se clo Xi fJ. S. Thus, since Si is all r fo 0 > y a; at th ch su ai r cto ve m x m an s ist ex Theorem 2.25 that there ite wr n ca we , ely tiv na ter Al . S xe all r fo 0 > ) Xi (x Y e Si or, equival.!ntly, a; ce en qu se the 1, = bi b; ce sin w No . /2 )1 ;aj (a ai this as b; (x  Xi ) > 0, where bi = t tha e; nc ue eq bs su nt ge er nv co a s ha it so d an ce en qu se d b .. b2, ... , is a bo un de ch su b r cto ve it un I x m e m so d an ", . < i2 < il s er teg in is, there are positive 00, and ~ j as *) x (x b' ~ j) Xi (x b;. , ly nt ue eq ns Co that b ij ~ b as j ~ 00. . S E X all r fo 0 > ') XI .(x b; ce sin S e' X all r fo 0 ~ } we. must have b' (x  x* ) 0 This completes the proof.
,
. m re eo th e lan rp pe hy g tin ra pa se e th e ov pr to y ad re w no We are
0. = S2 n SI th wi R of ets bs su ex nv co be S2 d an Theorem 2.28. Le t SI E SI I X all r fo X2 b' ~ I X b' at th ch su 0 J. b r cto ve I Then there exists an m x and all X2 e S2. m
ce sin ex nv co is } S2 E x ; {x = . S2 t se the rly ea Cl Proof. t se the at th ow kn we 3 2.2 m re eo Th m fro us Th
,•.•
· •
S2
is convex.
74
VEcroR SPACES
is also convex. In addition, 0 e S since SI n S2 = 0. Consequently, using Theorem 2.27, we find that there is an m X I vector b:J. 0 for which b'x ~ 0 for all XES. But this implies that b' (XI  X2) ~ 0 for all XI E SI and all X2 E S2, as is required. 0 Suppose thatf(x) is a nonnegative function which is symmetric about X = 0 and has only one maximum, occurring at x = 0; in other words,f(x) =f(x) for all x and f(x) ~ f(cx) if 0 ~ c ~ 1. Clearly, the integral of f(x) over an interval of fixed length will be maximized when the interval is centered at O. This can be expressed as a
i
•
0
)
o•

· , · .,
,
o o •
0
• o
o o
,•
a
o
f(x + cy) dx ~
•
f(x + y) dx,
o
•o
"
a
0
•
•
,
.,~
for any y, a > 0, and 0 ~ c ~ I. This result has some important applications regarding probabilities of random variables. The following result, which is a generalization to a function f(x) of the m X 1 vector X replaces the interval in RI by a symmetric convex set in Rm. This generalization is due to Anderson (1955). For simple applications of the result to probabilities of random vectors, see Problem 2.44.
,
Let S be a convex subset of ~, symmetric about 0, so that if XES, x E S also. Let f(x) ~ 0 be a function for which f(x) = f( x), Sa = (x : f(x) ;;?: a} is convex for any positive a, and Jsf(x) dx < 00. Then
Theorem 2.29.
"• •
• o
•o
,
•
o
o
o
•
f(x+cy)dx~
s
f(x+y)dx,
o
,
,
s
· o

for 0
~
c
~
I and Y
E
.j,
Rm.
l
, o
A more comprehensive discussion of convex sets can be found in Kelly and Weiss (1979), Lay (1982), and Rockafellar (1970), while some applications of the separating hyperplane theorem to statistical decision theory can be found in Ferguson (1967).
, o
o
0'• o,

•
..
•0
o
0, 0
·o "0,
j
PROBLEMS
.,",
·...
o
I. Determine whether each of the following sets of vectors is a vector space. (a) {(a, b, a + b, I)':  0 0 < a < 00,  0 0 < b < oo}. (b) {(a,b,c,a+ b  2c)':  0 0 < a < 00,00 < b < 00,00 < c < oo}. I (c) {(a, b, c, I  a  b  c)':  0 0 < a < 00,  0 0 < b < 00,  0 0 < c < oo}.
J.
X
•• o
o
:,
.,, o
•
75
PROBLEMS
2. Consider the vector space
s=
{(a,a+b,a+ b, b)':
00
< a < 00,  0 0 < b < oo}
Determine which of the following sets of vectors are spanning sets of S. (a) {(I,0,0,1)',(1,2,2,I)'}. (b) {(t, 1,0,0),,(0,0, I,I)'}. (c) {(2, I, I, 1),,(3, I, 1,2)',(3,2,2, I)'}. (d) {(1,0, 0, 0)"(0, I, 1,0)',(0,0,0, I)'}. 3. Is the vector x = (1, I, I, I)' in the vector space S given in Problem 2? Is the vector y = (4, I, 1,3)' in S? 4. Let {XI, •.• ,xr } be a set of vectors in a vector space S and let W be the vector subspace consisting of all possible linear combinations of these vectors. Prove that W is the smallest subspace of S that contains the vectors XI, .•• ,Xr ; that is, show that if V is another vector subspace containing X" ... ,x" then W is a subspace of V. 5. Suppose that X is a random vector having a distribution with mean vector fI. and covariance matrix 0 given by
fI.=
I I
'
0=
I
0.5
0.5
I
Let XI = (2,2), and X2 = (2,0)' be two observations from this distribution. Use the Mahalanobis distance function to determine which of these two observations is closer to the mean. 6. Show that the functions vector norms.
IIxlip
and Ilxll~ defined in Section 2.2 are, in fact,
7. Prove Theorem 2.3. 8. Show that the set of vectors {(l,2,2,2)',(1,2, 1,2),,(1,1, I, I)'} is a linearly independent set. 9. Consider the set of vectors {(2, 1,4,3)', (3, 0, 5, 2)', (0, 3, 2, 5)', (4, 2, 8, 6)'}
(a) Show that this set of vectors is linearly dependent.
VE CT OR SPACES
76
a is at th rs cto ve o tw of et bs su a d fin rs cto ve ur fo of t se (b) Fr om this linearly independent set. ~? r fo s se ba are rs cto ve of ts se ng wi llo fo the 10. Which of (a) {CO, 1, 0, 1) ',( 1, 1, 0, 0) ',( 0, 0, 1, In . }. I)' 1, I, 1, ',( 1) I, 2, 3, ,,( 1) 1, 1, 2, ,,( 1) 2, 2, 2, {( ) (b . In 2, 1, 2, ',( 2) 1, 1, 2, ',( 2) 2, 1, 3, ',( 1) 1, 0, 2, {( ) (c
II . Prove the results of Th eo re m 2.8. r, cto ve ll nu the ain nt co t no es do rs cto ve al on og th or of 12. Prove that if a set it is a linearly independent set. n sio en m di e th is t ha W 2. lem ob Pr in n ve gi e ac sp r 13. Find a basis for the vecto r cto ve e m sa is th r fo sis ba nt re ffe di nd co se a nd Fi of this vector space? space. sis ba a is 4, 2. e pl am Ex in n ve gi ' m} 'Y , ... I' {'Y rs cto ve of t 14. Show that the se for Rm. at th ow Sh x. tri ma p x n an be B d an ix atr m n x m IS. Let A be an (a) R(AB) ~ R(A). (b) R(AB) = R(A) if rank(AB) = rank(A).
ix atr m n x n an s ist ex e er th at th ow Sh s. ce tri ma n x m re Ba 16. Suppose A and C satisfying AC = B if an d only if R(B) ~ R(A). 17. Prove the results of Th eo re m 2.11. e ov Pr . ely tiv ec sp re s, ce tri ma n x m d an q, x m n, x p be 18. Let A, B, and C that
rank
C A
B (0)
= rank(A) + rank(B)
. BG + FA = C at th ch su G ix atr m n x q a d an F ix if there ex ist an m X p matr ow Sh n. = ) (B nk ra th wi ix atr m p x n an B d an 19. Let A be an m x n matrix that rank(A) = rank(AB).
PROBLEMS
20. Refer to Examples 2.7 and 2.9. Find the matrix A satisfying Az₁ = x₁, where z₁ = (z₁, z₂, z₃)' and x₁ = (x₁, x₂, x₃)'. Show that AA' = x₁x₁'.

21. Let S be the vector space spanned by the vectors x₁ = (1, 2, 1, 2)', x₂ = (2, 3, 1, 2)', x₃ = (3, 4, 1, 0)', and x₄ = (3, 4, 0, 1)'.
(a) Find a basis for S.
(b) Use the Gram–Schmidt procedure on the basis found in (a) to determine an orthonormal basis for S.
(c) Find the orthogonal projection of x = (1, 0, 0, 1)' onto S.
(d) Find the component of x orthogonal to S.
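Parts (a)–(d) of Problem 21 can be checked numerically. The sketch below uses the spanning vectors as transcribed above (the scan of this problem is uncertain, so treat the specific numbers as illustrative) and a small Gram–Schmidt helper written for the occasion:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors, dropping any vector that is
    (numerically) dependent on those already processed."""
    basis = []
    for v in vectors:
        w = v - sum((v @ b) * b for b in basis)
        if np.linalg.norm(w) > 1e-10:
            basis.append(w / np.linalg.norm(w))
    return basis

# Spanning vectors of S as transcribed in Problem 21.
xs = [np.array([1., 2., 1., 2.]),
      np.array([2., 3., 1., 2.]),
      np.array([3., 4., 1., 0.]),
      np.array([3., 4., 0., 1.])]

basis = gram_schmidt(xs)
dim_S = len(basis)

# Orthogonal projection of x onto S, and the component orthogonal to S.
x = np.array([1., 0., 0., 1.])
proj = sum((x @ b) * b for b in basis)
perp = x - proj
```

The projection is computed directly from the orthonormal basis, which is exactly why the Gram–Schmidt step in part (b) is useful.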
22. Using equation (2.7), determine the projection matrix for the vector space S given in Problem 21. Use this to compute the orthogonal projection of x = (1, 0, 0, 1)' onto S.

23. Let S be the vector space spanned by the vectors x₁ = (1, 2, 3)' and x₂ = (1, 1, 1)'. Find the point in S that is closest to the point x = (1, 0, 1)'.
24. Suppose S is a vector subspace of R⁴ having the projection matrix

    P_S = (1/10) [  6   2   2  -4 ]
                 [  2   9  -1   2 ]
                 [  2  -1   9   2 ]
                 [ -4   2   2   6 ]

(a) What is the dimension of S?
(b) Find a basis for S.
25. Consider the vector space S = {u: u = Ax, x ∈ R⁴}, where A is the 4 × 4 matrix given by

    A = [ 1  2  1  1 ]
        [ 1  0  1  3 ]
        [ 0  2  4  2 ]
        [ 1  2  3  0 ]

(a) Determine the dimension of S and find a basis.
(b) Determine the dimension of the null space N(A) and find a basis for it.
(c) Is the vector (3, 5, 2, 4)' in S?
(d) Is the vector (1, 1, 1, 1)' in N(A)?
26. Let x ∈ Rⁿ and suppose that u(x) defines a linear transformation of Rⁿ into Rᵐ. Using the basis {e₁, ..., eₙ} for Rⁿ and the m × 1 vectors u(e₁), ..., u(eₙ), prove that there exists an m × n matrix A for which u(x) = Ax for every x ∈ Rⁿ.
27. Let T be a vector subspace of Rⁿ and suppose that S is the subspace of Rᵐ given by

    S = {u(x): x ∈ T},

where the transformation defined by u is linear. Show that there exists an m × n matrix A satisfying u(x) = Ax for every x ∈ T.
28. Let T be the vector space spanned by the two vectors x₁ = (1, 1, 0)' and x₂ = (0, 1, 1)'. Let S be the vector space defined as S = {u(x): x ∈ T}, where the function u defines a linear transformation satisfying u(x₁) = (2, 3, 1)' and u(x₂) = (2, 5, 3)'. Find a matrix A such that u(x) = Ax for all x ∈ T.
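A matrix A with the property required in Problem 28 is not unique off T. One convenient choice, sketched below with numpy, is A = U X⁺, where the columns of X are x₁ and x₂, the columns of U are their images, and X⁺ is the Moore–Penrose inverse; this particular construction is an illustration, not the text's own method:

```python
import numpy as np

# Columns are x1, x2 (a basis for T) and their images under u.
X = np.array([[1., 0.],
              [1., 1.],
              [0., 1.]])
U = np.array([[2., 2.],
              [3., 5.],
              [1., 3.]])

# Any A with A @ X = U agrees with u on all of T; since X has full
# column rank, the pseudoinverse yields one such A.
A = U @ np.linalg.pinv(X)
```

Because every x ∈ T is a combination Xc, the single condition AX = U forces Ax = u(x) on all of T.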
29. Consider the linear transformation defined by

    u(x) = (x₁ − x̄, x₂ − x̄, ..., xₘ − x̄)'

for all x ∈ Rᵐ, where x̄ = (1/m)Σxᵢ. Find the matrix A for which u(x) = Ax and then determine the dimension of the range and null spaces.

30. In an introductory statistics course, students must take three 100-point exams followed by a 150-point final exam. We will identify the scores on these exams with the variables x₁, x₂, x₃, and y. We want to be able to estimate the value of y once x₁, x₂, and x₃ are known. A class of 32 students produced the following set of exam scores.
Student  x₁   x₂   x₃    y      Student  x₁   x₂   x₃    y
   1     87   89   92   111       17     72   76   96   116
   2     67   85   77    99       18     73   70   52    78
   3     79   79   54    82       19     73   61   86   101
   4     60   71   68   136       20     73   83   76    82
   5     83   67   53    73       21     97   99   97   141
   6     82   84   92   107       22     84   92   86   112
   7     87   88   76   106       23     82   68   73    62
   8     88   68   91   128       24     61   59   77    56
   9     62   66   65    95       25     78   73   81   137
  10    100   68   63   108       26     84   73   68   118
  11     87  100  100   142       27     57   47   71   108
  12     72   82   80    89       28     87   95   84   121
  13     72   94   76   109       29     62   29   66    71
  14     86   92   98   140       30     66   77   81   123
  15     85   82   62   117       31     95   82   71   102
  16     62   50   71   102       32     99   52   96   130
(a) Find the least squares estimator for β = (β₀, β₁, β₂, β₃)' in the multiple regression model

    yᵢ = β₀ + β₁xᵢ₁ + β₂xᵢ₂ + β₃xᵢ₃ + εᵢ.

(b) Find the least squares estimator for β₁ = (β₀, β₁, β₂)' in the model

    yᵢ = β₀ + β₁xᵢ₁ + β₂xᵢ₂ + εᵢ.

(c) Compute the reduction in the sum of squared errors attributable to the inclusion of the variable x₃ in the model given in (a).

31. Suppose that we have independent samples of a response y corresponding to k different treatments with a sample size of nᵢ responses from the ith treatment. If the jth observation from the ith treatment is denoted yᵢⱼ, then the model
    yᵢⱼ = μᵢ + εᵢⱼ

is known as the one-way classification model. Here μᵢ represents the expected value of a response from treatment i, while the εᵢⱼ's are independent and identically distributed as N(0, σ²).
(a) If we let β = (μ₁, ..., μₖ)', write the model above in matrix form by defining y, X, and ε so that y = Xβ + ε.
(b) Find the least squares estimator of β and show that the sum of squared
errors for the corresponding fitted model is given by

    SSE₁ = Σ_{i=1}^{k} Σ_{j=1}^{n_i} (yᵢⱼ − ȳᵢ)²,

where

    ȳᵢ = Σ_{j=1}^{n_i} yᵢⱼ / nᵢ.
(c) If μ₁ = ⋯ = μₖ = μ, then the reduced model yᵢⱼ = μ + εᵢⱼ holds for all i and j. Find the least squares estimator of μ and the sum of squared errors SSE₂ for the fitted reduced model. Show that SSE₂ − SSE₁, referred to as the sum of squares for treatment and denoted SST, can be expressed as

    SST = Σ_{i=1}^{k} nᵢ(ȳᵢ − ȳ)²,

where

    ȳ = Σ_{i=1}^{k} nᵢȳᵢ / n.

(d) Show that the F statistic given in (2.8) takes the form

    F = {SST/(k − 1)} / {SSE₁/(n − k)}.

32. Suppose that we have the model y = Xβ + ε and wish to find the estimator β̂ which minimizes
    (y − Xβ)'(y − Xβ),

subject to the restriction that β satisfies Aβ = 0, where X has full column rank and A has full row rank.
(a) Show that S = {y: y = Xβ, Aβ = 0} is a vector space.
(b) Let C be any matrix whose columns form a basis for the null space of A; that is, C satisfies the identity C(C'C)⁻¹C' = I − A'(AA')⁻¹A. Using the geometrical properties of least squares estimators, show that the restricted least squares estimator of β is given by

    β̂ = C(C'X'XC)⁻¹C'X'y.

33. Let S₁ and S₂ be vector subspaces of Rᵐ. Show that S₁ + S₂ must also be a vector subspace of Rᵐ.

34. Let S₁ and S₂ be vector subspaces of Rᵐ. Show that S₁ + S₂ is the vector space of smallest dimension containing S₁ ∪ S₂; in other words, show that if T is a vector space for which S₁ ∪ S₂ ⊆ T, then S₁ + S₂ ⊆ T.

35. Prove Theorem 2.22.
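The restricted estimator in Problem 32(b) can be checked numerically. In the sketch below, the data are randomly generated stand-ins for X, y, and A, and the null-space basis C is obtained from an SVD (a convenience of this sketch, not a construction from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 4
X = rng.normal(size=(n, p))          # full column rank (almost surely)
y = rng.normal(size=n)
A = np.array([[1., 1., 0., 0.],      # full row rank restriction A beta = 0
              [0., 0., 1., -1.]])

# Columns of C form a basis for the null space of A (last right-singular
# vectors of A correspond to its zero singular values).
_, s, Vt = np.linalg.svd(A)
C = Vt[A.shape[0]:].T

# Restricted least squares estimator beta = C (C'X'XC)^{-1} C'X'y.
beta_r = C @ np.linalg.solve(C.T @ X.T @ X @ C, C.T @ X.T @ y)
```

Feasible vectors are exactly those of the form Cθ, so comparing the sum of squared errors of β̂ against random feasible competitors verifies the minimizing property.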
36. Let S₁ and S₂ be vector subspaces of Rᵐ. Suppose that {x₁, ..., xᵣ} spans S₁ and {y₁, ..., yₕ} spans S₂. Show that {x₁, ..., xᵣ, y₁, ..., yₕ} spans the vector space S₁ + S₂.

37. Let S₁ be the vector space spanned by the vectors
    x₁ = (3, 1, 3, 1)',    x₂ = (1, 1, 1, 1)',    x₃ = (2, 1, 2, 1)',

while the vector space S₂ is spanned by the vectors

    y₁ = (3, 0, 5, 1)',    y₂ = (1, 2, 3, 1)',    y₃ = (1, 4, 1, 3)'.

Find the following.
(a) Bases for S₁ and S₂.
(b) The dimension of S₁ + S₂.
(c) A basis for S₁ + S₂.
(d) The dimension of S₁ ∩ S₂.
(e) A basis for S₁ ∩ S₂.
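The dimensions asked for in Problem 37 can be computed by rank calculations, using dim(S₁ ∩ S₂) = dim S₁ + dim S₂ − dim(S₁ + S₂). The sketch below uses the spanning vectors as transcribed above (the scan of that problem is uncertain):

```python
import numpy as np

# Rows are the spanning vectors of S1 and S2 as transcribed.
X = np.array([[3., 1., 3., 1.],
              [1., 1., 1., 1.],
              [2., 1., 2., 1.]])
Y = np.array([[3., 0., 5., 1.],
              [1., 2., 3., 1.],
              [1., 4., 1., 3.]])

dim_S1 = np.linalg.matrix_rank(X)
dim_S2 = np.linalg.matrix_rank(Y)
dim_sum = np.linalg.matrix_rank(np.vstack([X, Y]))   # dim(S1 + S2)
dim_cap = dim_S1 + dim_S2 - dim_sum                  # dim(S1 ∩ S2)
```

Note that x₃ = (x₁ + x₂)/2 with these numbers, so S₁ is only two-dimensional.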
38. Let S₁ and S₂ be vector subspaces of Rᵐ with dim(S₁) = r₁ and dim(S₂) = r₂.
(a) Find expressions in terms of m, r₁, and r₂ for the smallest and largest possible values of dim(S₁ + S₂).
(b) Find the smallest and largest possible values of dim(S₁ ∩ S₂).
39. Let T be the vector space spanned by the vectors {(1, 1, 1)', (2, 1, 2)'}. Find a vector space S₁ such that R³ = T ⊕ S₁. Find another vector space S₂ such that R³ = T ⊕ S₂ and S₁ ∩ S₂ = {0}.
40. Let S₁ be the vector space spanned by {(1, 1, 2, 0)', (2, 0, 1, 3)'}, while S₂ is spanned by {(1, 1, 1, 3)', (1, 1, 1, 0)'}. Show that R⁴ = S₁ + S₂. Is this a direct sum? That is, can we write R⁴ = S₁ ⊕ S₂? Are S₁ and S₂ orthogonal vector spaces?

41. Let S₁ and S₂ be vector subspaces of Rᵐ and let T = S₁ + S₂. Show that this sum is a direct sum, that is, T = S₁ ⊕ S₂, if and only if S₁ ∩ S₂ = {0}.

42. The concept of orthogonal projections and their associated projection matrices can be extended to projections that are not orthogonal. In the case of orthogonal projections onto the vector space S ⊆ Rᵐ, we decompose Rᵐ as Rᵐ = S ⊕ S⊥. The projection matrix that projects orthogonally onto S is the matrix P satisfying Py ∈ S and (y − Py) ∈ S⊥ for all y ∈ Rᵐ, and Px = x for all x ∈ S. If S is the column space of the full rank matrix X, then S⊥ will be the null space of X', and the projection matrix described above is given by P = X(X'X)⁻¹X'. Suppose now that we decompose Rᵐ as Rᵐ = S ⊕ T, where S is as before and T is the null space of the full rank matrix Y'. Note that S and T are not necessarily orthogonal vector spaces. We wish to find a projection matrix Q satisfying Qy ∈ S and (y − Qy) ∈ T for all y ∈ Rᵐ, and Qx = x for all x ∈ S.
(a) Show that Q is a projection matrix if and only if it is an idempotent matrix.
(b) Show that Q can be expressed as Q = X(Y'X)⁻¹Y'.

43. Prove Theorem 2.23.

44. Show that if S₁ and S₂ are convex subsets of Rᵐ, then S₁ ∪ S₂ need not be convex.

45. Show that for any positive scalar n, the set Bₙ = {x: x ∈ Rᵐ, x'x ≤ n} is convex.
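The oblique projection of Problem 42(b) can be illustrated numerically. In the sketch below, randomly generated full rank matrices X and Y stand in for the matrices of the problem:

```python
import numpy as np

rng = np.random.default_rng(1)
m, r = 5, 2
X = rng.normal(size=(m, r))      # S = column space of X
Y = rng.normal(size=(m, r))      # T = null space of Y'

# Oblique projection onto S along T: Q = X (Y'X)^{-1} Y'.
Q = X @ np.linalg.solve(Y.T @ X, Y.T)
```

The checks below confirm the defining properties: Q is idempotent, it fixes every vector in S, and the residual y − Qy always lies in the null space of Y'.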
46. For any set S ⊆ Rᵐ, show that its convex hull C(S) consists of all convex combinations of the vectors in S.

47. Suppose that S is a nonempty subset of Rᵐ. Show that every vector in the convex hull of S can be expressed as a convex combination of m + 1 or fewer vectors in S.

48. Let x be an m × 1 random vector with density function f(x) such that f(x) = f(−x) and the set {x: f(x)
(b) Show that the inequality in (a) also holds if y is an m × 1 random vector distributed independently of x.
(c) Show that if x ~ Nₘ(0, Ω), its density function satisfies the conditions of this exercise.
(d) Show that if x and y are independently distributed with x ~ Nₘ(0, Ω₁) and y ~ Nₘ(0, Ω₂) such that Ω₁ − Ω₂ is nonnegative definite, then P(x ∈ S) ≥ P(y ∈ S).
CHAPTER THREE
Eigenvalues and Eigenvectors

1. INTRODUCTION

Eigenvalues and eigenvectors are special implicitly defined functions of the elements of a square matrix. In many applications involving the analysis of a square matrix, the key information from the analysis is provided by some or all of these eigenvalues and eigenvectors. This is a consequence of some of the properties of eigenvalues and eigenvectors that we will develop in this chapter. But before we get to these properties, we must first understand how eigenvalues and eigenvectors are defined and how they are calculated.
2. EIGENVALUES, EIGENVECTORS, AND EIGENSPACES

If A is an m × m matrix, then any scalar λ satisfying the equation

    Ax = λx                                                        (3.1)

for some m × 1 vector x ≠ 0 is called an eigenvalue of A. The vector x is called an eigenvector of A corresponding to the eigenvalue λ, and equation (3.1) is called the eigenvalue–eigenvector equation of A. Eigenvalues and eigenvectors are also sometimes referred to as latent roots and vectors or characteristic roots and vectors. Equation (3.1) can be equivalently expressed as
    (A − λI)x = 0                                                  (3.2)
Note that if |A − λI| ≠ 0, then (A − λI)⁻¹ would exist, and premultiplication of equation (3.2) by this inverse would lead to a contradiction of the already stated assumption that x ≠ 0. Thus, any eigenvalue λ must satisfy the determinantal equation

    |A − λI| = 0,
which is known as the characteristic equation of A. Using the definition of a determinant, we readily observe that the characteristic equation is an mth degree polynomial in λ; that is, there are scalars a₀, ..., a_{m−1} such that the characteristic equation above can be expressed equivalently as

    λᵐ + a_{m−1}λ^{m−1} + ⋯ + a₁λ + a₀ = 0.

Since an mth degree polynomial has m roots, it follows that an m × m matrix has m eigenvalues; that is, there are m scalars λ₁, ..., λₘ which satisfy the characteristic equation. When all of the eigenvalues of A are real, we will sometimes find it notationally convenient to identify the ith largest eigenvalue of the matrix A as λᵢ(A). In other words, in this case the ordered eigenvalues of A may be written as λ₁(A) ≥ ⋯ ≥ λₘ(A).

The characteristic equation can be used to obtain the eigenvalues of the matrix A. These can then be used in the eigenvalue–eigenvector equation to obtain corresponding eigenvectors.

Example 3.1. We will find the eigenvalues and eigenvectors of the 3 × 3 matrix A given by
    A = [ 5  -3   3 ]
        [ 4  -2   3 ]
        [ 4  -4   5 ]

The characteristic equation of A is

    |A − λI| = | 5−λ   −3     3  |
               |  4   −2−λ    3  |
               |  4    −4    5−λ |

             = −λ³ + 8λ² − 17λ + 10 = (5 − λ)(2 − λ)(1 − λ) = 0,

so the three eigenvalues of A are 1, 2, and 5. To find an eigenvector of A corresponding to the eigenvalue λ = 5, we must solve the equation Ax = 5x for x, which yields the system of equations

    5x₁ − 3x₂ + 3x₃ = 5x₁
    4x₁ − 2x₂ + 3x₃ = 5x₂
    4x₁ − 4x₂ + 5x₃ = 5x₃.
The first and third equations imply that x₂ = x₃ and x₁ = x₂, which when used in the second equation yields the identity x₂ = x₂. Thus, x₂ is arbitrary, and so any x having x₁ = x₂ = x₃, such as the vector (1, 1, 1)', is an eigenvector of A associated with the root 5. In a similar fashion, by solving the equation Ax = λx when λ = 2 and λ = 1, we find that (1, 1, 0)' is an eigenvector corresponding to the eigenvalue 2, and (0, 1, 1)' is an eigenvector corresponding to the eigenvalue 1.
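Example 3.1 can be verified numerically; the sketch below uses numpy with the matrix entries as reconstructed above:

```python
import numpy as np

# The matrix of Example 3.1.
A = np.array([[5., -3., 3.],
              [4., -2., 3.],
              [4., -4., 5.]])

# numpy returns eigenvalues in no particular order; sort descending.
evals, evecs = np.linalg.eig(A)
evals = np.sort(evals.real)[::-1]
```

The direct checks below also confirm the three eigenvectors found by hand.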
Note that if a nonnull vector x satisfies (3.1) for a given value of λ, then so will αx for any nonzero scalar α. Thus, eigenvectors are not uniquely defined unless we impose some scale constraint; for instance, we might only consider eigenvectors x satisfying x'x = 1. In this case, for the previous example we would obtain the three normalized eigenvectors (1/√3, 1/√3, 1/√3)', (1/√2, 1/√2, 0)', and (0, 1/√2, 1/√2)' corresponding to the eigenvalues 5, 2, and 1, respectively. These normalized eigenvectors are unique except for sign, since each of these eigenvectors, when multiplied by −1, yields another normalized eigenvector.

The following example illustrates the fact that a real matrix may have complex eigenvalues and eigenvectors.

Example 3.2. The matrix

    A = [  1   1 ]
        [ -2  -1 ]

has the characteristic equation

    |A − λI| = | 1−λ    1  | = λ² + 1 = 0,
               | −2   −1−λ |

so that the eigenvalues of A are i = √−1 and −i. To find an eigenvector corresponding to the root i, write x = (x₁, x₂)' = (y₁ + iz₁, y₂ + iz₂)' and solve for y₁, y₂, z₁,
    = (λ − μ)ᵏ |B₂ − μI_{m−k}|,

where the last equality can be obtained by repeated use of the cofactor expansion formula for a determinant. Thus, λ must be an eigenvalue of X⁻¹AX with multiplicity of at least k. The result now follows since, from Theorem 3.2(d), the eigenvalues of X⁻¹AX are the same as those of A.  □

We now prove the following theorem involving both the eigenvalues and the eigenvectors of a matrix.

Theorem 3.4. Let λ be an eigenvalue of the m × m matrix A and let x be a corresponding eigenvector. Then,
(a) If n ≥ 1 is an integer, λⁿ is an eigenvalue of Aⁿ corresponding to the eigenvector x.
(b) If A is nonsingular, λ⁻¹ is an eigenvalue of A⁻¹ corresponding to the eigenvector x.
Proof. Part (a) is proven by repeatedly using the relationship Ax = λx; that is, we have

    Aⁿx = A^{n−1}(Ax) = λA^{n−1}x = ⋯ = λⁿx.

To prove part (b), premultiply the eigenvalue–eigenvector equation

    λx'y = (λx)'y = (Ax)'y = x'A'y = x'(Ay) = x'(γy) = γx'y.

Since λ ≠ γ, we must have x'y = 0; that is, eigenvectors corresponding to different eigenvalues must be orthogonal. Thus, if the m eigenvalues of A are distinct, then the set of corresponding eigenvectors will form a group of mutually orthogonal vectors. We will show that this is still possible when A has multiple eigenvalues. Before we prove this, we will need the following result.
Theorem 3.9. Let A be an m × m symmetric matrix and let x be any nonzero m × 1 vector. Then for some r ≥ 1, the vector space spanned by the vectors x, Ax, ..., A^{r−1}x contains an eigenvector of A.
Proof. Let r be the smallest integer for which x, Ax, ..., Aʳx form a linearly dependent set. Then there exist scalars a₀, ..., aᵣ, not all of which are zero, such that

    (a₀Iₘ + a₁A + ⋯ + aᵣAʳ)x = 0,

where without loss of generality we have taken aᵣ = 1, since the way r was chosen guarantees that aᵣ is not zero. The expression in the parentheses is an rth-degree matrix polynomial in A. This can be factored in a fashion similar to the way scalar polynomials are factored; that is, it can be written as

    (A − γ₁Iₘ)(A − γ₂Iₘ) ⋯ (A − γᵣIₘ)x = 0,

where γ₁, ..., γᵣ are the roots of the polynomial satisfying a₀ = (−1)ʳγ₁γ₂⋯γᵣ, ..., a_{r−1} = −(γ₁ + γ₂ + ⋯ + γᵣ). Let
SYMMETRIC MATRICES
    y = (A − γ₂Iₘ) ⋯ (A − γᵣIₘ)x = (γ₂⋯γᵣx + ⋯ + A^{r−1}x),

and note that y ≠ 0 since, otherwise, x, Ax, ..., A^{r−1}x would be a linearly dependent set, contradicting the definition of r. Thus, y is in the space spanned by x, Ax, ..., A^{r−1}x and

    (A − γ₁Iₘ)y = 0.

Consequently, y is an eigenvector of A corresponding to the eigenvalue γ₁, and so the proof is complete.  □
Theorem 3.10. If the m × m matrix A is symmetric, then it is possible to construct a set of m eigenvectors of A such that the set is orthonormal.
Proof. We first show that if we have an orthonormal set of eigenvectors x₁, ..., xₕ, where 1 ≤ h < m, then we can find another normalized eigenvector x_{h+1} orthogonal to each of these vectors. Select any vector x which is orthogonal to each of the vectors x₁, ..., xₕ. Note that for any positive integer k, Aᵏx is also orthogonal to x₁, ..., xₕ since, if λᵢ is the eigenvalue corresponding to xᵢ, it follows from the symmetry of A and Theorem 3.4(a) that

    xᵢ'Aᵏx = (Aᵏxᵢ)'x = λᵢᵏxᵢ'x = 0.

From the previous theorem we know that, for some r, the space spanned by x, Ax, ..., A^{r−1}x contains an eigenvector, say y, of A. This vector y also must be orthogonal to x₁, ..., xₕ since it is from a vector space spanned by a set of vectors orthogonal to x₁, ..., xₕ. Thus, we can take x_{h+1} = (y'y)^{−1/2}y. The theorem now follows by starting with any eigenvector of A, and then using the previous argument m − 1 times.  □
If we let the m × m matrix X = (x₁, ..., xₘ), where x₁, ..., xₘ are the orthonormal vectors described in the proof, and Λ = diag(λ₁, ..., λₘ), then the eigenvalue–eigenvector equations Axᵢ = λᵢxᵢ can be expressed collectively as the matrix equation AX = XΛ. Since the columns of X are orthonormal vectors, X is an orthogonal matrix. Premultiplication of our matrix equation by X' yields the relationship X'AX = Λ, or equivalently

    A = XΛX',

which is known as the spectral decomposition of A. We will see in Section 4.2 that there is a very useful generalization of this decomposition, known as the singular value decomposition, which holds for any m × n matrix A; in particular,
there exist m × m and n × n orthogonal matrices P and Q and an m × n matrix D with d_ij = 0 if i ≠ j, such that A = PDQ'.

Note that it follows from Theorem 3.2(d) that the eigenvalues of A are the same as the eigenvalues of Λ, which are the diagonal elements of Λ. Thus, if λ is a multiple root of A with multiplicity r > 1, then r of the diagonal elements of Λ are equal to λ and r of the eigenvectors, say x₁, ..., xᵣ, correspond to this root λ. Consequently, the dimension of the eigenspace of A, S_A(λ), corresponding to λ, is equal to the multiplicity r. The set of orthonormal eigenvectors corresponding to this root is not unique. Any orthonormal basis for S_A(λ) will be a set of r orthonormal vectors associated with the eigenvalue λ. For example, if we let X₁ = (x₁, ..., xᵣ) and let Q be any r × r orthogonal matrix, the columns of Y₁ = X₁Q also form a set of orthonormal eigenvectors corresponding to λ.

Example 3.6. One application of an eigenanalysis in statistics involves overcoming difficulties associated with a regression analysis in which the explanatory variables are nearly linearly dependent. This situation is often referred to as multicollinearity. In this case, some of the explanatory variables are providing redundant information about the response variable. As a result, the least squares estimator of β in the model y = Xβ + ε will be imprecise since its covariance matrix

    var(β̂) = (X'X)⁻¹X'{var(y)}X(X'X)⁻¹ = (X'X)⁻¹X'(σ²I)X(X'X)⁻¹ = σ²(X'X)⁻¹

will tend to have some large elements due to the near singularity of X'X. If the near linear dependence is simply because one of the explanatory variables, say xⱼ, is nearly a scalar multiple of another, say x₁, one could simply eliminate one of these variables from the model. However, in most cases, the near linear dependence is not this straightforward. We will see that an eigenanalysis will help reveal any of these dependencies.

Suppose that we standardize the explanatory variables so that we have the model discussed in Example 2.15. Let Λ = diag(λ₁, ..., λₖ) contain the eigenvalues of Z₁'Z₁ in descending order of magnitude, and let U be an orthogonal matrix that has corresponding normalized eigenvectors of Z₁'Z₁ as its columns, so that Z₁'Z₁ = UΛU'. It was shown in Example 2.15 that the estimation of y is unaffected by a nonsingular transformation of the explanatory variables; that is, we
could just as well work with the model

    y = 1α₀ + W₁α₁ + ε,

where W₁ = Z₁T, α₁ = T⁻¹β₁, and T is a nonsingular matrix.

This method, referred to as principal components regression, deals with the problems associated with multicollinearity by utilizing the orthogonal transformations W₁ = Z₁U and α₁ = U'β₁ of the standardized explanatory variables and parameter vector. The k new explanatory variables are called the principal components; the variable corresponding to the ith column of W₁ is called the ith principal component. Since W₁'W₁ = U'Z₁'Z₁U = Λ and W₁'1 = U'Z₁'1 = U'0 = 0, the least squares estimate of α₁ is

    α̂₁ = (W₁'W₁)⁻¹W₁'y = Λ⁻¹W₁'y,
while its covariance matrix simplifies to

    var(α̂₁) = Λ⁻¹W₁'(σ²I)W₁Λ⁻¹ = σ²Λ⁻¹.

If Z₁'Z₁, and hence also W₁'W₁, is nearly singular, then at least one of the λᵢ's will be very small, while the variances of the corresponding α̂ᵢ's will be very large. Since the explanatory variables have been standardized, W₁'W₁ is N − 1 times the sample covariance matrix of the principal components computed from the N observations. Thus, if λᵢ ≈ 0, then the ith principal component is nearly constant from observation to observation, and so it contributes very little to the estimation of y. If λᵢ ≈ 0 for i = k − r + 1, ..., k, then the problems associated with multicollinearity can be avoided by eliminating the last r principal components from the model; in other words, the principal components regression model is

    y = 1α₀ + W₁₁α₁₁ + ε,
where W₁₁ and α₁₁ are obtained from W₁ and α₁ by deleting their last r columns. If we let Λ₁ = diag(λ₁, ..., λ_{k−r}), then the least squares estimate of α₁₁ can be written as

    α̂₁₁ = (W₁₁'W₁₁)⁻¹W₁₁'y = Λ₁⁻¹W₁₁'y.

Note that due to the orthogonality of the principal components, α̂₁₁ is identical to the first k − r components of α̂₁. The estimate α̂₁₁ can be used to find the principal components estimate of β₁ in the original standardized model. Recall that β₁ and α₁ are related through the identity β₁ = Uα₁. By eliminating the last r principal components, we are replacing this identity with the identity
A set of orthonormal eigenvectors of a matrix A can be used to find what are known as the eigenprojections of A.
Definition 3.1. Let λ be an eigenvalue of the m × m symmetric matrix A with multiplicity r ≥ 1. If x₁, ..., xᵣ is a set of orthonormal eigenvectors corresponding to λ, then the eigenprojection of A associated with the eigenvalue λ is given by

    P_A(λ) = Σ_{i=1}^{r} xᵢxᵢ'.

The eigenprojection P_A(λ) is simply the projection matrix for the vector space S_A(λ). Thus, for any x ∈ Rᵐ, y = P_A(λ)x gives the orthogonal projection of x onto the eigenspace S_A(λ). If we define X₁ as before, that is, X₁ = (x₁, ..., xᵣ), then P_A(λ) = X₁X₁'. Note that P_A(λ) is unique even though the set of eigenvectors x₁, ..., xᵣ is not unique; for instance, if Y₁ = X₁Q, where Q is an arbitrary r × r orthogonal matrix, then the columns of Y₁ form another set of orthonormal eigenvectors corresponding to λ, but

    Y₁Y₁' = X₁QQ'X₁' = X₁X₁' = P_A(λ).
The term spectral decomposition comes from the use of the term spectral set of A for the set of all eigenvalues of A excluding repetitions of the same value. Suppose the m × m matrix A has the spectral set {γ₁, ..., γₖ}, where k ≤ m, since some of the γᵢ may correspond to multiple eigenvalues. The set of γᵢ may be different from our set of λᵢ in that we do not repeat values for the γᵢ. Thus, if A is 4 × 4 with eigenvalues λ₁ = 3, λ₂ = 2, λ₃ = 2, and λ₄ = 1, then the spectral set of A is {3, 2, 1}. Using X and Λ as previously defined, the spectral decomposition states that

    A = XΛX' = Σ_{i=1}^{m} λᵢxᵢxᵢ' = Σ_{i=1}^{k} γᵢP_A(γᵢ),

so that A has been decomposed into a sum of terms, one corresponding to each value in the spectral set.
Example 3.7. It can be easily verified by solving the characteristic equation for the 3 × 3 symmetric matrix
    A = [  5  -1  -1 ]
        [ -1   5  -1 ]
        [ -1  -1   5 ]

that A has the simple eigenvalue 3 and the multiple eigenvalue 6, with multiplicity 2. The unique (except for sign) unit eigenvector associated with the eigenvalue 3 can be shown to equal (1/√3, 1/√3, 1/√3)', while a set of orthonormal eigenvectors associated with 6 is given by (2/√6, −1/√6, −1/√6)' and (0, 1/√2, −1/√2)'. Thus, the spectral decomposition of A is given by
    [  5  -1  -1 ]   [ 1/√3   2/√6    0   ] [ 3  0  0 ] [ 1/√3   1/√3   1/√3 ]
    [ -1   5  -1 ] = [ 1/√3  -1/√6   1/√2 ] [ 0  6  0 ] [ 2/√6  -1/√6  -1/√6 ]
    [ -1  -1   5 ]   [ 1/√3  -1/√6  -1/√2 ] [ 0  0  6 ] [  0     1/√2  -1/√2 ]
and the two eigenprojections of A are

    P_A(3) = [ 1/√3 ] [ 1/√3  1/√3  1/√3 ] = (1/3) [ 1  1  1 ]
             [ 1/√3 ]                              [ 1  1  1 ]
             [ 1/√3 ]                              [ 1  1  1 ],

    P_A(6) = [  2/√6    0   ] [ 2/√6  -1/√6  -1/√6 ] = (1/3) [  2  -1  -1 ]
             [ -1/√6   1/√2 ] [  0     1/√2  -1/√2 ]         [ -1   2  -1 ]
             [ -1/√6  -1/√2 ]                                [ -1  -1   2 ]
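The spectral decomposition and eigenprojections of Example 3.7 can be reproduced with numpy's symmetric eigensolver (a sketch; `eigh` returns eigenvalues in ascending order):

```python
import numpy as np

# The symmetric matrix of Example 3.7.
A = np.array([[ 5., -1., -1.],
              [-1.,  5., -1.],
              [-1., -1.,  5.]])

evals, X = np.linalg.eigh(A)      # ascending order: 3, 6, 6

# Eigenprojections built from the orthonormal eigenvector columns.
P3 = np.outer(X[:, 0], X[:, 0])   # projection onto the eigenspace of 3
P6 = X[:, 1:] @ X[:, 1:].T        # projection onto the eigenspace of 6
```

Note that P6 is well defined even though the individual eigenvectors for the repeated root 6 are not unique.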
The relationship between the rank of a matrix and the number of its nonzero eigenvalues becomes an exact one for symmetric matrices.
Theorem 3.11.
Suppose that the m × m matrix A has r nonzero eigenvalues. Then, if A is symmetric, rank(A) = r.
Proof. If A = XΛX' is the spectral decomposition of A, then the diagonal matrix Λ has r nonzero diagonal elements and rank(A) = rank(XΛX') = rank(Λ),
since the multiplication of a matrix by nonsingular matrices does not affect the rank. Clearly, the rank of a diagonal matrix equals the number of its nonzero diagonal elements, so the result follows.  □

Some of the most important applications of eigenvalues and eigenvectors in statistics involve the analysis of covariance and correlation matrices.

Example 3.8. In some situations, a matrix has some special structure that when recognized can be used to expedite the calculation of eigenvalues and eigenvectors. In this example we consider a structure sometimes possessed by an m × m covariance matrix. This structure is one in which we have equal variances and equal correlations; that is, the covariance matrix has the form
    Ω = σ² [ 1  ρ  ⋯  ρ ]
           [ ρ  1  ⋯  ρ ]
           [ ⋮  ⋮  ⋱  ⋮ ]
           [ ρ  ρ  ⋯  1 ]
Alternatively, Ω can be expressed as Ω = σ²{(1 − ρ)Iₘ + ρ1ₘ1ₘ'}, so that it is a function of the vector 1ₘ. This vector also plays a crucial role in the eigenanalysis of Ω since

    Ω1ₘ = σ²{(1 − ρ) + mρ}1ₘ.

Thus, 1ₘ is an eigenvector of Ω corresponding to the eigenvalue σ²{(1 − ρ) + mρ}. The remaining eigenvalues of Ω can be identified by noting that if x is any m × 1 vector orthogonal to 1ₘ, then

    Ωx = σ²(1 − ρ)x,

and so x is an eigenvector of Ω corresponding to the eigenvalue σ²(1 − ρ). Since there are m − 1 linearly independent vectors orthogonal to 1ₘ, the eigenvalue σ²(1 − ρ) is repeated m − 1 times. The order of these two distinct eigenvalues depends on the value of ρ; σ²{(1 − ρ) + mρ} will be larger than σ²(1 − ρ) only if ρ is positive.
Example 3.9. A covariance matrix can be any symmetric nonnegative definite matrix. Consequently, for a given set of m nonnegative numbers and a given set of m orthonormal m × 1 vectors, it is possible to construct an m × m covariance matrix with these numbers and vectors as its eigenvalues and eigenvectors. On the other hand, a correlation matrix has the additional constraint that its diagonal elements must each equal 1, and this extra restriction has an impact on the eigenanalysis of correlation matrices; that is, there is a much more limited set of possible eigenvalues and eigenvectors for correlation matrices. For the most extreme case, consider a 2 × 2 correlation matrix that must have the form
    P = [ 1  ρ ]
        [ ρ  1 ]

with −1 ≤ ρ ≤ 1, since P must be nonnegative definite. The characteristic equation |P − λI₂| = 0 readily admits the two eigenvalues 1 + ρ and 1 − ρ. Using these in the eigenvalue–eigenvector equation Px = λx, we find that regardless of the value of ρ, (1/√2, 1/√2)' must be an eigenvector corresponding to 1 + ρ, while (1/√2, −1/√2)' must be an eigenvector corresponding to 1 − ρ. Thus, ignoring sign changes, there is only one set of orthonormal eigenvectors possible for a 2 × 2 correlation matrix if ρ ≠ 0. This number of possible sets of orthonormal eigenvectors increases as the order m increases. In some situations, such as simulation studies of analyses of correlation matrices, one may wish to construct a correlation matrix with some particular structure with regard to its eigenvalues or eigenvectors. For example, suppose that we want to construct an m × m correlation matrix that has three distinct eigenvalues with one of them being repeated m − 2 times. Thus, this correlation matrix has the form
    P = λ₁x₁x₁' + λ₂x₂x₂' + λ Σ_{i=3}^{m} xᵢxᵢ',

where λ₁, λ₂, and λ are the distinct eigenvalues of P, and x₁, ..., xₘ are corresponding normalized eigenvectors. Since P is nonnegative definite, we must have λ₁ ≥ 0, λ₂ ≥ 0, and λ ≥ 0, while tr(P) = m implies that λ = (m − λ₁ − λ₂)/(m − 2). Note that P can be written as

    P = λIₘ + (λ₁ − λ)x₁x₁' + (λ₂ − λ)x₂x₂',

so that the constraint (P)ⱼⱼ = 1 implies that

    λ + (λ₁ − λ)x₁ⱼ² + (λ₂ − λ)x₂ⱼ² = 1,

or, equivalently,

    x₂ⱼ² = {(1 − λ) − (λ₁ − λ)x₁ⱼ²}/(λ₂ − λ).

The constraints described can then be used to construct a particular matrix. For
EIGENVALUES AND EIGENVECTORS
For instance, suppose that we want to construct a 4 x 4 correlation matrix with eigenvalues λ1 = 2, λ2 = 1, and λ = 0.5 repeated twice. If we choose x1 = (1/2, 1/2, 1/2, 1/2)', then we must have x_{2j}² = 1/4, and so, because of the orthogonality of x1 and x2, x2 can be any vector obtained from x1 by negating two of its components. For example, if we take x2 = (1/2, −1/2, 1/2, −1/2)', then
P =
1.00  0.25  0.50  0.25
0.25  1.00  0.25  0.50
0.50  0.25  1.00  0.25
0.25  0.50  0.25  1.00
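The construction can be checked numerically. A minimal sketch (not part of the text), assuming numpy is available:

```python
# Build P = lam*I + (lam1 - lam) x1 x1' + (lam2 - lam) x2 x2' and verify
# that it is a correlation matrix with the intended eigenvalues.
import numpy as np

lam1, lam2, lam = 2.0, 1.0, 0.5            # chosen eigenvalues; lam repeated m - 2 times
x1 = np.array([0.5, 0.5, 0.5, 0.5])        # first normalized eigenvector
x2 = np.array([0.5, -0.5, 0.5, -0.5])      # x1 with two components negated

m = 4
P = lam * np.eye(m) + (lam1 - lam) * np.outer(x1, x1) + (lam2 - lam) * np.outer(x2, x2)

print(np.round(P, 2))                      # unit diagonal, off-diagonals 0.25 and 0.50
print(np.sort(np.linalg.eigvalsh(P)))      # eigenvalues 0.5, 0.5, 1, 2
```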
5. CONTINUITY OF EIGENVALUES AND EIGENPROJECTIONS

Our first result of this section is one which bounds the absolute difference between eigenvalues of two matrices by a function of the absolute differences of the elements of the two matrices. A proof of this theorem can be found in Ostrowski (1973). For some other similar bounds, see Elsner (1982).
Theorem 3.12. Let A and B be m x m matrices possessing eigenvalues λ1, ..., λm and γ1, ..., γm, respectively. Define

M = max_{1≤i≤m, 1≤j≤m} (|a_ij|, |b_ij|)

and

δ(A, B) = (1/(mM)) Σ_{i=1}^m Σ_{j=1}^m |a_ij − b_ij|.

Then

max_{1≤j≤m} min_{1≤i≤m} |λ_i − γ_j| ≤ (m + 2) M δ(A, B)^{1/m}.
so that (3.6) follows from the fact that

λ_m Σ_{i=1}^m y_i² ≤ Σ_{i=1}^m λ_i y_i² ≤ λ1 Σ_{i=1}^m y_i².

Now (3.7) is verified by finding choices of x for which the bounds in (3.6) are attained; for instance, the lower bound is attained with x = x_m, while the upper bound holds with x = x1. Note that, since for any nonnull x, z = (x'x)^{−1/2}x is a unit vector, the minimization and maximization of z'Az over all unit vectors z will also yield λ_m and λ1, respectively; that is,

λ1 = max_{z'z=1} z'Az,   λ_m = min_{z'z=1} z'Az.
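These extremes of the Rayleigh quotient can be illustrated numerically. A minimal sketch (not part of the text), assuming numpy is available:

```python
# For a symmetric A, x'Ax/x'x ranges between the smallest and largest
# eigenvalues, with the extremes attained at the corresponding eigenvectors.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                          # symmetrize

vals, vecs = np.linalg.eigh(A)             # ascending eigenvalues, orthonormal eigenvectors

def rayleigh(A, x):
    return float(x @ A @ x / (x @ x))

# the quotient is maximized/minimized at the extreme eigenvectors
assert np.isclose(rayleigh(A, vecs[:, -1]), vals[-1])
assert np.isclose(rayleigh(A, vecs[:, 0]), vals[0])

# any other nonnull vector gives a value between the two extremes
z = rng.standard_normal(5)
assert vals[0] - 1e-12 <= rayleigh(A, z) <= vals[-1] + 1e-12
```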
The following theorem shows that each eigenvalue of a symmetric matrix A can be expressed as a constrained maximum or minimum of the Rayleigh quotient R(x, A).

Theorem 3.16. Let A be an m x m symmetric matrix having eigenvalues λ1 ≥ λ2 ≥ ... ≥ λm, with x1, ..., x_m being a corresponding set of orthonormal eigenvectors. For h = 1, ..., m, define S_h and T_h to be the vector spaces spanned
by the columns of X_h = (x1, ..., x_h) and Y_h = (x_h, ..., x_m), respectively. Then
λ_h = min_{x ∈ S_h} (x'Ax)/(x'x) = min_{Y'_{h+1}x = 0} (x'Ax)/(x'x),

and

λ_h = max_{x ∈ T_h} (x'Ax)/(x'x) = max_{X'_{h−1}x = 0} (x'Ax)/(x'x),
where the vector x = 0 has been excluded from the maximization and minimization processes.
Proof. We will prove the result concerning the minimum; the proof for the maximum is similar. Let X = (x1, ..., x_m) and Λ = diag(λ1, ..., λm). Note that, since X'AX = Λ and X'X = I_m, it follows that X_h'X_h = I_h and X_h'AX_h = Λ_h, where Λ_h = diag(λ1, ..., λh). Now x ∈ S_h if and only if there exists an h x 1 vector y such that x = X_h y. Consequently,
min_{x ∈ S_h} (x'Ax)/(x'x) = min_{y≠0} (y'X_h'AX_h y)/(y'X_h'X_h y) = min_{y≠0} (y'Λ_h y)/(y'y) = λ_h,
where the last equality follows from Theorem 3.15. The second version of the minimization follows immediately from the first and the fact that the null space of Y'_{h+1} is S_h.  □
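Theorem 3.16 can be tried out numerically. A minimal sketch (not part of the text), assuming numpy is available; the matrix and its eigenvalues here are made up:

```python
# Over S_h = span(x1,...,xh), the Rayleigh quotient is bounded below by
# lambda_h, with the minimum attained at x_h.
import numpy as np

rng = np.random.default_rng(1)
m, h = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))   # random orthonormal eigenvectors
lam = np.array([9.0, 7.0, 5.0, 3.0, 2.0, 1.0])     # decreasing eigenvalues (assumed)
A = Q @ np.diag(lam) @ Q.T

Xh = Q[:, :h]                                      # basis of S_h
for _ in range(1000):                              # x in S_h <=> x = Xh y
    y = rng.standard_normal(h)
    x = Xh @ y
    assert x @ A @ x / (x @ x) >= lam[h - 1] - 1e-9

xh = Q[:, h - 1]                                   # the minimum lambda_h is attained at x_h
assert np.isclose(xh @ A @ xh, lam[h - 1])
```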
The next two examples give some indication of how the extremal properties of eigenvalues make them important features in many applications.

Example 3.11. Suppose that the same m variables are measured on individuals from k different groups, with the goal being to identify differences in the means for the k groups. Let the m x 1 vectors μ1, ..., μk represent the k group mean vectors, and let μ̄ = (μ1 + ... + μk)/k be the average of these mean vectors. To investigate the differences in group means, we will utilize the deviations (μ_i − μ̄) from the average mean; in particular, we form the sum of squares and cross products matrix given by
A = Σ_{i=1}^k (μ_i − μ̄)(μ_i − μ̄)'.
Note that for a particular unit vector x, x'Ax will give a measure of the differences among the k groups in the direction x; a value of zero indicates that the groups have identical means in this direction, while increasingly large values of x'Ax indicate increasingly widespread differences in this same direction. If x1, ..., x_m are normalized eigenvectors of A corresponding to its ordered eigenvalues λ1 ≥ ... ≥ λm,

EXTREMAL PROPERTIES OF EIGENVALUES
cov(a_i'x, a_j'x) = a_i'Ωa_j = 0.

For some specific examples, first consider the 4 x 4 covariance matrix given by
Ω =
4.65 4.35 0.55 0.45
4.35 4.65 0.45 0.55
0.55 0.45 4.65 4.35
0.45 0.55 4.35 4.65
The eigenvalues of Ω are 10, 8, 0.4, and 0.2, so the first two eigenvalues account for a large proportion, actually 18/18.6 = 0.97, of the total variability of x. This means that although observations of x would appear as points in four-dimensional space, almost all of the dispersion among these points will be confined to a plane. This plane is spanned by the first two normalized eigenvectors of Ω, (0.5, 0.5, 0.5, 0.5)' and (0.5, 0.5, −0.5, −0.5)'.
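The eigenstructure of this covariance matrix is easy to confirm. A minimal sketch (not part of the text), assuming numpy is available:

```python
# Eigenanalysis of the 4 x 4 covariance matrix above.
import numpy as np

omega = np.array([[4.65, 4.35, 0.55, 0.45],
                  [4.35, 4.65, 0.45, 0.55],
                  [0.55, 0.45, 4.65, 4.35],
                  [0.45, 0.55, 4.35, 4.65]])

vals = np.sort(np.linalg.eigvalsh(omega))[::-1]    # decreasing order
assert np.allclose(vals, [10.0, 8.0, 0.4, 0.2])

# the first two eigenvalues account for 18/18.6 ~ 0.97 of the total variability
assert np.isclose(vals[:2].sum() / vals.sum(), 18.0 / 18.6)
```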
As a second illustration, consider a covariance matrix such as

Ω =
55  −5  10
−5  55  10
10  10  40

which has a repeated eigenvalue; specifically, the eigenvalues are 60 and 30 with multiplicities 2 and 1, respectively. Since the largest eigenvalue of Ω is repeated, there is no one direction that maximizes var(a'x). Instead, the dispersion of x observations is the same in all directions in the plane given by the eigenspace S_Ω(60), which is spanned by the vectors (1, 1, 1)' and (2, 0, 1)'. Consequently, a scatter plot of x observations would produce a circular pattern of points in this plane.

Our final result, known as the Courant–Fischer min–max theorem, gives alternative expressions for the intermediate eigenvalues of A as constrained minima and maxima of the Rayleigh quotient R(x, A).

Theorem 3.17. Let A be an m x m symmetric matrix having eigenvalues λ1 ≥ λ2 ≥ ... ≥ λm. For h = 1, ..., m, let B_h be any m x (h − 1) matrix and C_h be any m x (m − h) matrix satisfying B_h'B_h = I_{h−1} and C_h'C_h = I_{m−h}. Then

λ_h = min_{B_h} max_{B_h'x = 0} (x'Ax)/(x'x),    (3.9)

as well as
λ_h = max_{C_h} min_{C_h'x = 0} (x'Ax)/(x'x),    (3.10)

where the vector x = 0 has been excluded.
Proof. We first prove the min–max result given by (3.9). Let X_h = (x1, ..., x_h), where x1, ..., x_h is a set of orthonormal eigenvectors of A corresponding to the eigenvalues λ1, ..., λh. Since X_{h−1} is an m x (h − 1) matrix satisfying X'_{h−1}X_{h−1} = I_{h−1}, it follows that

min_{B_h} max_{B_h'x = 0} (x'Ax)/(x'x) ≤ max_{X'_{h−1}x = 0} (x'Ax)/(x'x) = λ_h,    (3.11)

where the equality follows from Theorem 3.16. Now for arbitrary B_h satisfying B_h'B_h = I_{h−1}, the matrix B_h'X_h is (h − 1) x h, so that the columns must be linearly dependent. Consequently, we can find an h x 1 nonnull vector y such that B_h'X_h y = 0. Since X_h y is one choice for x, we find that

max_{B_h'x = 0} (x'Ax)/(x'x) ≥ (y'X_h'AX_h y)/(y'X_h'X_h y) = (y'Λ_h y)/(y'y) ≥ λ_h,    (3.12)

where Λ_h = diag(λ1, ..., λh) and the last inequality follows from (3.6). Minimizing (3.12) over all choices of B_h gives

min_{B_h} max_{B_h'x = 0} (x'Ax)/(x'x) ≥ λ_h.

This, along with (3.11), proves (3.9). The proof of (3.10) is along the same lines. Let Y_h = (x_h, ..., x_m), where x_h, ..., x_m is a set of orthonormal eigenvectors of A corresponding to the eigenvalues λ_h, ..., λm. Since Y_{h+1} is an m x (m − h) matrix satisfying Y'_{h+1}Y_{h+1} = I_{m−h}, it follows that

max_{C_h} min_{C_h'x = 0} (x'Ax)/(x'x) ≥ min_{Y'_{h+1}x = 0} (x'Ax)/(x'x) = λ_h,    (3.13)

where the equality follows from Theorem 3.16. For an arbitrary C_h satisfying C_h'C_h = I_{m−h}, the matrix C_h'Y_h is (m − h) x (m − h + 1), so the columns of C_h'Y_h must be linearly dependent. Thus, there exists an (m − h + 1) x 1 nonnull vector
y satisfying C_h'Y_h y = 0. Since Y_h y is one choice for x, we have

min_{C_h'x = 0} (x'Ax)/(x'x) ≤ (y'Y_h'AY_h y)/(y'Y_h'Y_h y) = (y'Δ_h y)/(y'y) ≤ λ_h,    (3.14)

where Δ_h = diag(λh, ..., λm) and the last inequality follows from (3.6). Maximizing (3.14) over all choices of C_h yields

max_{C_h} min_{C_h'x = 0} (x'Ax)/(x'x) ≤ λ_h.

This together with (3.13) establishes (3.10).  □
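The inner optimization in (3.9), with the optimal choice B_h = X_{h−1}, can be checked numerically. A minimal sketch (not part of the text), assuming numpy is available; the eigenvalues are made up:

```python
# With B_h = X_{h-1}, the set {x : B_h'x = 0} is span(x_h,...,x_m); for a
# positive definite A, the max of x'Ax/x'x there is the largest eigenvalue
# of P A P, where P projects onto that subspace -- and it equals lambda_h.
import numpy as np

rng = np.random.default_rng(2)
m, h = 6, 3
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
lam = np.array([9.0, 7.0, 5.0, 3.0, 2.0, 1.0])     # positive, decreasing (assumed)
A = Q @ np.diag(lam) @ Q.T

Bh = Q[:, :h - 1]                                  # B_h = X_{h-1}
P = np.eye(m) - Bh @ Bh.T                          # projector onto {x : B_h'x = 0}
assert np.isclose(np.linalg.eigvalsh(P @ A @ P)[-1], lam[h - 1])
```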
Corollary 3.17.1. Let A be an m x m symmetric matrix having eigenvalues λ1 ≥ λ2 ≥ ... ≥ λm. For h = 1, ..., m, let B_h be any m x (h − 1) matrix and C_h be any m x (m − h) matrix. Then

max_{B_h'x = 0} (x'Ax)/(x'x) ≥ λ_h

and

min_{C_h'x = 0} (x'Ax)/(x'x) ≤ λ_h.
Proof. If B_h'B_h = I_{h−1} and C_h'C_h = I_{m−h}, then the two inequalities follow directly from Theorem 3.17. We need to establish them for arbitrary B_h and C_h. When B_h'B_h = I_{h−1}, the set S_{B_h} = {x : x ∈ R^m, B_h'x = 0} is the orthogonal complement of the vector space which has the columns of B_h as an orthonormal basis. Thus, the first inequality holds when maximizing over all x ≠ 0 in any (m − h + 1)-dimensional vector subspace of R^m. Consequently, this inequality also will hold for any m x (h − 1) matrix B_h since, in this case, rank(B_h) ≤ h − 1 guarantees that the maximization is over a vector subspace of dimension at least m − h + 1. A similar argument applies to the second inequality.  □

The proof of the following result is left to the reader as an exercise.
Theorem 3.18. Suppose that A and B are m x m symmetric matrices and A − B is nonnegative definite. Then λ_i(A) ≥ λ_i(B) for i = 1, ..., m.
Some additional extremal properties of eigenvalues can be found in Bellman (1970) and Horn and Johnson (1985).
7. SOME ADDITIONAL RESULTS CONCERNING EIGENVALUES
Let A be an m x m symmetric matrix and H, an m x h matrix satisfying H'H = I_h. In some situations it is of interest to compare the eigenvalues of A to those of H'AH. Some comparisons follow immediately from Theorem 3.17. For instance, it is easily verified that from (3.9), we have

λ1(H'AH) ≤ λ1(A),

and from (3.10) we have

λ_h(H'AH) ≥ λ_m(A).
The following result, known as the Poincaré separation theorem [Poincaré (1890); see also Fan (1949)], provides some inequalities involving the eigenvalues of A and H'AH in addition to the two given above.

Theorem 3.19. Let A be an m x m symmetric matrix and H be an m x h matrix satisfying H'H = I_h. Then, for i = 1, ..., h, it follows that

λ_{m−h+i}(A) ≤ λ_i(H'AH) ≤ λ_i(A).

Proof. To establish the lower bound on λ_i(H'AH), let Y_n = (x_n, ..., x_m), where n = m − h + i + 1, and x1, ..., x_m is a set of orthonormal eigenvectors of A corresponding to the eigenvalues λ1(A) ≥ ... ≥ λm(A). Then it follows that
λ_{m−h+i}(A) = min_{Y'_n x = 0} (x'Ax)/(x'x) ≤ min_{Y'_n x = 0, x = Hy} (x'Ax)/(x'x) = min_{Y'_n Hy = 0} (y'H'AHy)/(y'y) ≤ λ_{n−m+h−1}(H'AH) = λ_i(H'AH),

where the first equality follows from Theorem 3.16. The last inequality follows from Corollary 3.17.1, after noting that the order of H'AH is h and Y'_nH is (m − n + 1) x h. To prove the upper bound for λ_i(H'AH), let X_{i−1} = (x1, ..., x_{i−1}), and note that
λ_i(A) = max_{X'_{i−1}x = 0} (x'Ax)/(x'x) ≥ max_{X'_{i−1}x = 0, x = Hy} (x'Ax)/(x'x) = max_{X'_{i−1}Hy = 0} (y'H'AHy)/(y'y) ≥ λ_i(H'AH),

where the first equality follows from Theorem 3.16 and the final inequality follows from Corollary 3.17.1.  □

Theorem 3.19 can be used to prove the following useful result.

Theorem 3.20. Let A be an m x m symmetric matrix and let A_k be its leading k x k principal submatrix; that is, A_k is the matrix obtained by deleting the last m − k rows and columns of A. Then, for i = 1, ..., k,

λ_{m−i+1}(A) ≤ λ_{k−i+1}(A_k) ≤ λ_{k−i+1}(A).
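The interlacing in Theorem 3.20 (the special case of Theorem 3.19 with H' = [I_k 0]) is easy to observe numerically. A minimal sketch (not part of the text), assuming numpy is available:

```python
# Eigenvalues of a leading principal submatrix interlace those of the
# full symmetric matrix.
import numpy as np

rng = np.random.default_rng(3)
m, k = 6, 4
A = rng.standard_normal((m, m))
A = (A + A.T) / 2
Ak = A[:k, :k]                                     # leading k x k principal submatrix

lamA = np.sort(np.linalg.eigvalsh(A))[::-1]        # decreasing order
lamAk = np.sort(np.linalg.eigvalsh(Ak))[::-1]

# lambda_{m-k+i}(A) <= lambda_i(Ak) <= lambda_i(A), for i = 1,...,k
for i in range(1, k + 1):
    assert lamA[m - k + i - 1] - 1e-9 <= lamAk[i - 1] <= lamA[i - 1] + 1e-9
```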
In Chapter 1, the conditions for a symmetric matrix A to be a positive definite or positive semidefinite matrix were given in terms of the possible values of the quadratic form x'Ax. We now show that these conditions also can be expressed in terms of the eigenvalues of A.

Theorem 3.21. Let A be an m x m symmetric matrix with eigenvalues λ1, ..., λm. Then
(a) A is positive definite if and only if λ_i > 0 for all i.
(b) A is positive semidefinite if and only if λ_i ≥ 0 for all i and λ_i = 0 for at least one i.
Proof. Let the columns of X = (x1, ..., x_m) be a set of orthonormal eigenvectors of A corresponding to the eigenvalues λ1, ..., λm, so that A = XΛX', where Λ = diag(λ1, ..., λm). If A is positive definite, then x'Ax > 0 for all x ≠ 0, so in particular, choosing x = x_i, we have

x_i'Ax_i = λ_i > 0.

Conversely, if λ_i > 0 for all i, then for any x ≠ 0 define y = X'x, and note that

x'Ax = x'XΛX'x = y'Λy = Σ_{i=1}^m λ_i y_i²    (3.15)

has to be positive because the λ_i are positive and at least one of the y_i² is positive since y ≠ 0. This proves (a). By a similar argument, we find that A is nonnegative definite if and only if λ_i ≥ 0 for all i. Thus, to prove (b) we only need to prove that x'Ax = 0 for some x ≠ 0 if and only if at least one λ_i = 0. It follows from (3.15) that if x'Ax = 0, then λ_i = 0 for every i for which y_i² > 0. On the other hand, if for some i, λ_i = 0, then x_i'Ax_i = λ_i = 0.  □
Since a square matrix is singular if and only if it has a zero eigenvalue, it follows immediately from Theorem 3.21 that positive definite matrices are nonsingular, while positive semidefinite matrices are singular.
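Theorem 3.21 gives a practical way to classify a symmetric matrix. A minimal sketch (not part of the text), assuming numpy is available; the helper name and tolerance are illustrative choices:

```python
# Classify a symmetric matrix by the signs of its eigenvalues.
import numpy as np

def classify(A, tol=1e-10):
    vals = np.linalg.eigvalsh(A)
    if np.all(vals > tol):
        return "positive definite"
    if np.all(vals > -tol):
        return "positive semidefinite"
    return "indefinite or negative"

assert classify(np.array([[2.0, 1.0], [1.0, 2.0]])) == "positive definite"      # eigenvalues 1, 3
assert classify(np.array([[1.0, 1.0], [1.0, 1.0]])) == "positive semidefinite"  # eigenvalues 0, 2
assert classify(np.array([[1.0, 2.0], [2.0, 1.0]])) == "indefinite or negative" # eigenvalues -1, 3
```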
Example 3.13. Consider the ordinary least squares estimator β̂ = (X'X)⁻¹X'y of β in the model

y = Xβ + ε,

where E(ε) = 0 and var(ε) = σ²I_N. For an arbitrary (k + 1) x 1 vector c, we will prove that c'β̂ is the best linear unbiased estimator of c'β; an estimator t is an unbiased estimator of c'β if E(t) = c'β. Clearly, c'β̂ is unbiased since E(ε) = 0 implies that

E(c'β̂) = c'(X'X)⁻¹X'E(y) = c'(X'X)⁻¹X'Xβ = c'β.

To show that it is the best linear unbiased estimator, we must show that it has variance at least as small as the variance of any other linear unbiased estimator of c'β. Let a'y be an arbitrary linear unbiased estimator of c'β, so that

c'β = E(a'y) = a'E(y) = a'Xβ,

regardless of the value of the vector β. But this implies that a'X = c'. Now

var(c'β̂) = c'{var(β̂)}c = c'{σ²(X'X)⁻¹}c = σ²a'X(X'X)⁻¹X'a,

while

var(a'y) = a'{var(y)}a = a'{σ²I_N}a = σ²a'a.

Thus, the difference in their variances is
var(a'y) − var(c'β̂) = σ²a'a − σ²a'X(X'X)⁻¹X'a = σ²a'(I_N − X(X'X)⁻¹X')a.

But

(I_N − X(X'X)⁻¹X')(I_N − X(X'X)⁻¹X') = I_N − X(X'X)⁻¹X',

and so, using Theorem 3.4, we find that each of the eigenvalues of I_N − X(X'X)⁻¹X' must be 0 or 1. Thus, from Theorem 3.21, we see that I_N − X(X'X)⁻¹X' is nonnegative definite and so

var(a'y) − var(c'β̂) ≥ 0,

as is required.
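The key facts about the residual projector in Example 3.13 can be verified directly. A minimal sketch (not part of the text), assuming numpy is available; the design matrix is random:

```python
# I - X(X'X)^{-1}X' is idempotent, so its eigenvalues are 0 or 1 and it
# is nonnegative definite.
import numpy as np

rng = np.random.default_rng(4)
N, p = 10, 3
X = rng.standard_normal((N, p))                    # full column rank (almost surely)

M = np.eye(N) - X @ np.linalg.inv(X.T @ X) @ X.T
assert np.allclose(M @ M, M)                       # idempotent

vals = np.linalg.eigvalsh(M)
assert np.allclose(np.sort(vals), [0.0] * p + [1.0] * (N - p))  # p zeros, N - p ones
assert np.all(vals > -1e-10)                       # nonnegative definite
```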
Symmetric matrices are often obtained as the result of a transpose product; that is, if T is an m x n matrix, then both T'T and TT' are symmetric matrices. The following two theorems show that their eigenvalues are nonnegative and their positive eigenvalues are equal.

Theorem 3.22. Let T be an m x n matrix with rank(T) = r. Then T'T has r positive eigenvalues. It is positive definite if r = n and positive semidefinite if r < n.

Proof. For any nonnull n x 1 vector x, let y = Tx. Then clearly

x'T'Tx = y'y = Σ_{i=1}^m y_i²

is nonnegative, so T'T is nonnegative definite and, thus, by Theorem 3.21 all of its eigenvalues are nonnegative. If x is an eigenvector of T'T corresponding to a zero eigenvalue, then the equation above must equal zero, and this can only happen if y = Tx = 0. Since rank(T) = r, we can find a set of n − r linearly independent x's satisfying Tx = 0, that is, any basis of the null space of T, and so the number of zero eigenvalues of T'T is equal to n − r. The result now follows.  □

Theorem 3.23. Let T be an m x n matrix with rank(T) = r. Then the positive eigenvalues of T'T are equal to the positive eigenvalues of TT'.
Proof. Let λ > 0 be an eigenvalue of T'T with multiplicity h. Since the n x n matrix T'T is symmetric, we can find an n x h matrix X, whose columns
are orthonormal, satisfying T'TX = λX. Let Y = TX and observe that

TT'Y = TT'TX = T(λX) = λTX = λY,

so that λ is also an eigenvalue of TT'. Its multiplicity is also h since

rank(Y) = rank(TX) = rank((TX)'TX) = rank(X'T'TX) = rank(λX'X) = rank(λI_h) = h.  □
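Theorems 3.22 and 3.23 can be observed on a random rectangular matrix. A minimal sketch (not part of the text), assuming numpy is available:

```python
# The positive eigenvalues of T'T and TT' coincide; TT' has m - r extra zeros.
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((5, 3))                    # m = 5, n = 3, rank 3 (almost surely)

v1 = np.linalg.eigvalsh(T.T @ T)                   # 3 eigenvalues, all positive here
v2 = np.linalg.eigvalsh(T @ T.T)                   # 5 eigenvalues, two of them zero

pos1 = np.sort(v1[v1 > 1e-10])
pos2 = np.sort(v2[v2 > 1e-10])
assert np.allclose(pos1, pos2)
assert np.sum(np.abs(v2) < 1e-8) == 2              # m - r = 5 - 3 zero eigenvalues of TT'
```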
Next we will use the Courant–Fischer min–max theorem to prove the following important monotonicity property of the eigenvalues of symmetric matrices.

Theorem 3.24. Let A be an m x m symmetric matrix and B be an m x m nonnegative definite matrix. Then, for h = 1, ..., m, we have

λ_h(A + B) ≥ λ_h(A),

where the inequality is strict if B is positive definite.
Proof. For an arbitrary m x (h − 1) matrix B_h satisfying B_h'B_h = I_{h−1}, we have

max_{B_h'x = 0} (x'(A + B)x)/(x'x) = max_{B_h'x = 0} {(x'Ax)/(x'x) + (x'Bx)/(x'x)}
    ≥ max_{B_h'x = 0} (x'Ax)/(x'x) + min_{x≠0} (x'Bx)/(x'x)
    = max_{B_h'x = 0} (x'Ax)/(x'x) + λ_m(B) ≥ max_{B_h'x = 0} (x'Ax)/(x'x),

where the last equality follows from Theorem 3.15. The final inequality above is strict if B is positive definite since, in this case, λ_m(B) > 0. Now minimizing both sides of the equation above over all choices of B_h satisfying B_h'B_h = I_{h−1} and using (3.9) of Theorem 3.17, we get
λ_h(A + B) = min_{B_h} max_{B_h'x = 0} (x'(A + B)x)/(x'x) ≥ min_{B_h} max_{B_h'x = 0} (x'Ax)/(x'x) = λ_h(A).

This completes the proof.  □
Note that there is no general bounding relationship between λ_h(A + B) and λ_h(A) + λ_h(B). For instance, if A = diag(1, 2, 3, 4) and B = diag(8, 6, 4, 2), then

λ2(A + B) = 8 < λ2(A) + λ2(B) = 9,

while

λ3(A + B) = 7 > λ3(A) + λ3(B) = 6.
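Both the monotonicity of Theorem 3.24 and the diagonal counterexample can be checked in a few lines. A minimal sketch (not part of the text), assuming numpy is available:

```python
# lambda_h(A + B) >= lambda_h(A) for nonnegative definite B, yet
# lambda_h(A + B) is not ordered against lambda_h(A) + lambda_h(B).
import numpy as np

def eig_desc(M):
    return np.sort(np.linalg.eigvalsh(M))[::-1]

A = np.diag([1.0, 2.0, 3.0, 4.0])
B = np.diag([8.0, 6.0, 4.0, 2.0])                  # positive definite

assert np.all(eig_desc(A + B) > eig_desc(A))       # strict monotonicity

s = eig_desc(A) + eig_desc(B)                      # [12, 9, 6, 3]
t = eig_desc(A + B)                                # [9, 8, 7, 6]
assert t[1] < s[1] and t[2] > s[2]                 # 8 < 9 while 7 > 6
```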
In Example 3.11 we discussed a situation in which the eigenvalues and eigenvectors of

A = Σ_{i=1}^k (μ_i − μ̄)(μ_i − μ̄)'

were utilized in analyzing differences among the group means μ1, ..., μk. For instance, an eigenvector x1, corresponding to the largest eigenvalue of A, gives the direction of maximum dispersion among the group means; that is, x1 is a vector for which

(x1'Ax1)/(x1'x1)

is maximized. The division here by x1'x1, which removes the effect of scale, may not be appropriate if the groups have covariance matrices other than the identity matrix. Suppose, for example, that each group has the same covariance matrix B. If y is a random vector with covariance matrix B, then the variability of y in the direction given by x will be var(x'y) = x'Bx. Since the differences among the groups in a direction with high variability will not be as important as similar differences in another direction with low variability, we will adjust for these differences in variability by constructing the ratio

(x'Ax)/(x'Bx).

The vector x1 that maximizes this ratio will then identify the one-dimensional subspace of R^m in which the group means differ the most, when adjusting for
differences in variability. The next step after finding x1 would be to find the vector x2 that maximizes this ratio but has x2'y uncorrelated with x1'y; this would be the vector x2 that maximizes the ratio in the equation above subject to the constraint that x1'Bx2 = 0. Continuing in this fashion, we would determine the vectors x1, ..., x_m that yield the extremal values λ1, ..., λm of the ratio. These extremal values are identified in the following theorem.

Theorem 3.25. Let A and B be m x m matrices, with A being nonnegative definite and B positive definite. For h = 1, ..., m, define X_h = (x1, ..., x_h) and Y_h = (x_h, ..., x_m), where x1, ..., x_m are linearly independent eigenvectors of B⁻¹A corresponding to the eigenvalues λ1(B⁻¹A) ≥ ... ≥ λm(B⁻¹A). Then
λ_h(B⁻¹A) = max_{X'_{h−1}Bx = 0} (x'Ax)/(x'Bx),

and

λ_h(B⁻¹A) = min_{Y'_{h+1}Bx = 0} (x'Ax)/(x'Bx),

where x = 0 is excluded, and the min and max are over all x ≠ 0 when h = m and h = 1, respectively.

Proof. We will prove the result involving the minimum; the proof for the maximum is similar. Let B = PDP' be the spectral decomposition of B, so that D = diag(d1, ..., dm), where the eigenvalues of B, d1, ..., dm, are all positive due to Theorem 3.21. If we let T = PD^{1/2}P', where D^{1/2} = diag(d1^{1/2}, ..., dm^{1/2}), then B = TT = T² and T, like B, is symmetric and nonsingular. Putting y = Tx, we find that

min_{Y'_{h+1}Bx = 0} (x'Ax)/(x'Bx) = min_{Y'_{h+1}TTx = 0} (x'Ax)/(x'TTx) = min_{Y'_{h+1}Ty = 0} (y'T⁻¹AT⁻¹y)/(y'y).    (3.16)

Note that if we write λ_i = λ_i(B⁻¹A), then B⁻¹Ax_i = λ_i x_i, so that

T⁻¹Ax_i = λ_i Tx_i,

which implies

(T⁻¹AT⁻¹)Tx_i = λ_i Tx_i.

Thus, Tx_i is an eigenvector of T⁻¹AT⁻¹ corresponding to the eigenvalue
λ_i = λ_i(T⁻¹AT⁻¹); that is, the eigenvalues of B⁻¹A are the same as those of T⁻¹AT⁻¹. Since the rows of Y'_{h+1}T are the transposes of the eigenvectors Tx_{h+1}, ..., Tx_m, it follows from Theorem 3.16 that (3.16) equals λ_h(T⁻¹AT⁻¹), which we have already established as being the same as λ_h(B⁻¹A).  □
Since x_i is an eigenvector of B⁻¹A corresponding to the eigenvalue λ_i = λ_i(B⁻¹A), we know that

B⁻¹Ax_i = λ_i x_i,

or, equivalently,

Ax_i = λ_i Bx_i.    (3.17)
Equation (3.17) is similar to the eigenvalue–eigenvector equation of A, except for the multiplication of x_i by B on the right-hand side of the equation. The eigenvalues satisfying (3.17) are sometimes referred to as the eigenvalues of A in the metric of B. Note that if we premultiply (3.17) by x_i' and then solve for λ_i, we get

λ_i = (x_i'Ax_i)/(x_i'Bx_i);
that is, the extremal values given in Theorem 3.25 are attained at the eigenvectors of B⁻¹A. The proof of the previous theorem suggests a way of simultaneously diagonalizing the matrices A and B. Since T⁻¹AT⁻¹ is symmetric, it can be expressed in the form QΛQ', where Q is orthogonal and Λ = diag(λ1(T⁻¹AT⁻¹), ..., λm(T⁻¹AT⁻¹)). The matrix C = Q'T⁻¹ is nonsingular since Q and T⁻¹ are nonsingular, and
CAC' = Q'T⁻¹AT⁻¹Q = Q'QΛQ'Q = Λ,

CBC' = Q'T⁻¹TTT⁻¹Q = Q'Q = I_m.
Equivalently, if G = C⁻¹ we have A = GΛG' and B = GG'. This simultaneous diagonalization is useful in proving our next result. For some other related results see Olkin and Tomsky (1981).
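The simultaneous diagonalization via T = B^{1/2} can be carried out numerically. A minimal sketch (not part of the text), assuming numpy is available; the matrices here are random:

```python
# Build C = Q'T^{-1} so that C A C' is diagonal and C B C' = I.
import numpy as np

rng = np.random.default_rng(6)
m = 4
G0 = rng.standard_normal((m, m))
B = G0 @ G0.T + m * np.eye(m)                      # positive definite
A0 = rng.standard_normal((m, m))
A = A0 @ A0.T                                      # nonnegative definite

d, P = np.linalg.eigh(B)                           # B = P diag(d) P'
Tinv = P @ np.diag(d ** -0.5) @ P.T                # T^{-1} = B^{-1/2}

lam, Q = np.linalg.eigh(Tinv @ A @ Tinv)           # T^{-1} A T^{-1} = Q diag(lam) Q'
C = Q.T @ Tinv

assert np.allclose(C @ A @ C.T, np.diag(lam))
assert np.allclose(C @ B @ C.T, np.eye(m))
# the lam's are also the eigenvalues of B^{-1}A
assert np.allclose(np.sort(lam), np.sort(np.linalg.eigvals(np.linalg.inv(B) @ A).real))
```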
Theorem 3.26. Let A be an m x m nonnegative definite matrix and B be an m x m positive definite matrix. If F is any m x h matrix with full column rank, then for i = 1, ..., h,

λ_i((F'AF)(F'BF)⁻¹) ≤ λ_i(B⁻¹A),

and further

max_F λ_i((F'AF)(F'BF)⁻¹) = λ_i(B⁻¹A).
Proof. Note that the second equation implies the first, so our proof simply involves the verification of the second equation. Let the nonsingular m x m matrix G be such that B = GG' and A = GΛG', where Λ = diag(λ1(B⁻¹A), ..., λm(B⁻¹A)). Then

max_F λ_i((F'AF)(F'BF)⁻¹) = max_F λ_i((F'GΛG'F)(F'GG'F)⁻¹) = max_E λ_i((E'ΛE)(E'E)⁻¹),

where this last maximization is also over all m x h matrices of rank h, since E = G'F must have the same rank as F. Note that since E has rank h, the h x h matrix E'E is a nonsingular symmetric matrix. As was seen in the previous proof, such a matrix can be expressed as E'E = TT for some nonsingular symmetric h x h matrix T. It then follows that

max_E λ_i((E'ΛE)(E'E)⁻¹) = max_E λ_i((E'ΛE)(TT)⁻¹) = max_E λ_i(T⁻¹E'ΛET⁻¹),

where this last equality follows from Theorem 3.2(d). Now if we define the m x h rank h matrix H = ET⁻¹, then H'H = T⁻¹E'ET⁻¹ = T⁻¹TTT⁻¹ = I_h. Thus,

max_E λ_i((E'ΛE)(E'E)⁻¹) = max_H λ_i(H'ΛH) = λ_i(Λ) = λ_i(B⁻¹A),

where the final equality follows from Theorem 3.19 and the fact that equality is actually achieved with the choice of H' = [I_h (0)].  □
Example 3.14. Many multivariate analyses are simply generalizations or extensions of corresponding univariate analyses. In this example, we begin with what is known as the univariate one-way classification model in which we have independent samples of a response y from k different populations or treatments, with a sample size of n_i from the ith population. The jth observation from the ith sample can be expressed as

y_ij = μ_i + ε_ij,
where the μ_i's are constants and the ε_ij's are independent and identically distributed as N(0, σ²). Our goal is to determine whether or not the μ_i's are all the same; that is, we wish to test the null hypothesis H0: μ1 = ... = μk against the alternative hypothesis H1: at least two of the μ_i's differ. An analysis of variance (see Problem 2.31) compares the variability between treatments,

SST = Σ_{i=1}^k n_i(ȳ_i − ȳ)²,
to the variability within treatments, k
SSE =
nj
L L i~
I
j~
(Yij 
yi,
I
where

ȳ_i = Σ_{j=1}^{n_i} y_ij/n_i,   ȳ = Σ_{i=1}^k n_i ȳ_i/n,   n = Σ_{i=1}^k n_i.
SST is referred to as the sum of squares for treatment, while SSE is called the sum of squares for error. The hypothesis H0 is rejected if the statistic

F = {SST/(k − 1)} / {SSE/(n − k)}

exceeds the appropriate quantile of the F distribution with k − 1 and n − k degrees of freedom.
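The univariate computation can be sketched in a few lines. A minimal example (not part of the text), with made-up data, assuming numpy is available:

```python
# Compute SST, SSE, and the F statistic for a small one-way classification.
import numpy as np

groups = [np.array([5.1, 4.9, 5.4]),               # hypothetical samples from k = 3 treatments
          np.array([6.0, 6.2, 5.8, 6.1]),
          np.array([5.0, 5.3, 5.2])]

k = len(groups)
n = sum(len(g) for g in groups)
ybar_i = np.array([g.mean() for g in groups])      # treatment means
ybar = sum(g.sum() for g in groups) / n            # grand mean

sst = sum(len(g) * (m_i - ybar) ** 2 for g, m_i in zip(groups, ybar_i))
sse = sum(((g - m_i) ** 2).sum() for g, m_i in zip(groups, ybar_i))

F = (sst / (k - 1)) / (sse / (n - k))
print(round(F, 3))                                 # 24.679 for this data; compare with F(k-1, n-k)
```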
Now suppose that instead of obtaining the value of one response variable for each observation, we obtain the values of m different response variables for each observation. If y_ij is the m x 1 vector of responses obtained as the jth observation from the ith treatment, then we have the multivariate one-way classification model given by

y_ij = μ_i + ε_ij,

where μ_i is an m x 1 vector of constants and ε_ij ~ N_m(0, Ω), independently. Measures of the between treatment variability and within treatment variability are now given by the matrices
B = Σ_{i=1}^k n_i(ȳ_i − ȳ)(ȳ_i − ȳ)',   W = Σ_{i=1}^k Σ_{j=1}^{n_i} (y_ij − ȳ_i)(y_ij − ȳ_i)'.
One approach to testing the null hypothesis H0: μ1 = ... = μk against the alternative hypothesis H1: at least two of the μ_i's differ, is by a method called the union–intersection procedure. This technique is based on the following decomposition of the hypotheses H0 and H1 into univariate hypotheses. If c is any m x 1 vector, and we define the hypothesis H0(c): c'μ1 = ... = c'μk, then the intersection of H0(c) over all c ∈ R^m is the hypothesis H0. In addition, if we define the hypothesis H1(c): at least two of the c'μ_i differ, then the union of the hypotheses H1(c) over all c ∈ R^m is the hypothesis H1. Thus, we should reject the hypothesis H0 if and only if we reject H0(c) for at least one c. Now the null hypothesis H0(c) involves the univariate one-way classification model in which c'y_ij is the response, and so we would reject H0(c) for large values of the F statistic

F(c) = {SST(c)/(k − 1)} / {SSE(c)/(n − k)},
where SST(c) and SSE(c) are the sums of squares for treatments and errors, respectively, computed for the responses c'y_ij. Since H0 is rejected if H0(c) is rejected for at least one c, we will reject H0 if F(c) is sufficiently large for at least one c or, equivalently, if

max_{c≠0} F(c)

is sufficiently large. Omitting the constants (k − 1) and (n − k), the sums of squares SST(c) and SSE(c) can be expressed using B and W as

SST(c) = c'Bc,   SSE(c) = c'Wc,

and so we find that we reject H0 for large values of

max_{c≠0} (c'Bc)/(c'Wc) = λ1(W⁻¹B),    (3.18)
where the right-hand side follows from Theorem 3.25. Thus, if u_{1−α} is the (1 − α)th quantile of the distribution of the largest eigenvalue λ1(W⁻¹B) [see, for example, Morrison (1990)], so that

P(λ1(W⁻¹B) ≤ u_{1−α}) = 1 − α,    (3.19)

then we would reject H0 if λ1(W⁻¹B) > u_{1−α}. One advantage of the union–intersection procedure is that it naturally leads to simultaneous confidence intervals. It follows immediately from (3.18) and (3.19) that for any mean
vectors μ1, ..., μk, with probability 1 − α, the inequality

[Σ_{i=1}^k n_i c'{(ȳ_i − ȳ) − (μ_i − μ̄)}{(ȳ_i − ȳ) − (μ_i − μ̄)}'c] / (c'Wc) ≤ u_{1−α}    (3.20)

holds for all m x 1 vectors c, where

μ̄ = Σ_{i=1}^k n_i μ_i/n.
Scheffé's method [see Scheffé (1953) or Miller (1981)] can then be used on (3.20) to yield the inequalities

Σ_{i=1}^k a_i c'ȳ_i − √(u_{1−α}(c'Wc)(Σ_{i=1}^k a_i²/n_i)) ≤ Σ_{i=1}^k Σ_{j=1}^m a_i c_j μ_ij ≤ Σ_{i=1}^k a_i c'ȳ_i + √(u_{1−α}(c'Wc)(Σ_{i=1}^k a_i²/n_i)),

which hold with probability 1 − α, for all m x 1 vectors c and all k x 1 vectors a satisfying a'1_k = 0.
PROBLEMS

1. Consider the 3 x 3 matrix

A =
9  12  8
4   4  6
3   3  3

(a) Find the eigenvalues of A.
(b) Find a normalized eigenvector corresponding to each eigenvalue.
2. Find the eigenvalues of A', where A is the matrix given in Problem 1. Determine the eigenspaces for A' and compare these to those of A.
3. Let the 3 x 3 matrix A be given by

A =
1  2  4
0  1  0
0  0  2
(a) Find the eigenvalues of A.
(b) For each different value of λ, determine the associated eigenspace S_A(λ).
(c) Describe the eigenspaces obtained in part (b).

4. If the m x m matrix A has eigenvalues λ1, ..., λm and corresponding eigenvectors x1, ..., x_m, show that the matrix (A + γI) has eigenvalues λ1 + γ, ..., λm + γ and corresponding eigenvectors x1, ..., x_m.

5. In Example 3.6, we discussed the use of principal components regression as a way of overcoming the difficulties associated with multicollinearity. Another approach, called ridge regression, replaces the ordinary least squares estimator in the standardized model, δ̂1 = (Z1'Z1)⁻¹Z1'y, by δ̂1γ = (Z1'Z1 + γI)⁻¹Z1'y, where γ is a small positive number. This adjustment will reduce the impact of the near singularity of Z1'Z1 since the addition of γI increases each of the eigenvalues of Z1'Z1 by γ.
(a) Show that if N > 2k + 1, there is an N x k matrix W such that δ̂1γ is the ordinary least squares estimate of δ1 in the model

y = (Z1 + W)δ1 + ε;

that is, δ̂1γ can be viewed as the ordinary least squares estimator of δ1 after we have perturbed the matrix of values for the explanatory variables Z1 by W.
(b) Show that there exists a k x k matrix U such that δ̂1γ is the ordinary least squares estimate of δ1 in the model

(y)   (Z1)        (ε )
(0) = (U ) δ1  +  (ε*),

where 0 is a k x 1 vector of zeros and ε* ~ N_k(0, σ²I), independently of ε. Thus, the ridge regression estimator also can be viewed as the least squares estimator obtained after adding k observations, each having zero for the response variable and the small values in U as the values for the explanatory variables.
6. Refer to Example 3.6 and the previous exercise.
(a) Find the expected values of the principal components regression estimator δ̂1* and the ridge regression estimator δ̂1γ, thereby showing that each is a biased estimator of δ1.
(b) Find the covariance matrix of δ̂1* and show that var(δ̂1) − var(δ̂1*) is a nonnegative definite matrix, where δ̂1 is the ordinary least squares estimator of δ1.
(c) Find the covariance matrix of δ̂1γ and show that tr{var(δ̂1) − var(δ̂1γ)} is nonnegative.
7. If A and B are m x m matrices and at least one of them is nonsingular, show that the eigenvalues of AB and BA are the same.
8. If λ is a real eigenvalue of the m × m real matrix A, show that there exist real eigenvectors of A corresponding to the eigenvalue λ.

9. Prove the results given in Theorem 3.2.
10. Suppose that λ is a simple eigenvalue of the m × m matrix A. Show that rank(A − λI) = m − 1.
11. If A is an m × m matrix and rank(A − λI) = m − 1, show that λ is an eigenvalue of A with multiplicity of at least one.

12. Consider the m × m matrix

A = ( 1 1 0 ... 0 0 )
    ( 0 1 1 ... 0 0 )
    ( . . .       . )
    ( 0 0 0 ... 1 1 )
    ( 0 0 0 ... 0 1 )

which has each element on and directly above the diagonal equal to 1. Find the eigenvalues and eigenvectors of A.
13. Let A be an m × m nonsingular matrix with eigenvalues λ₁, ..., λ_m and corresponding eigenvectors x₁, ..., x_m. If I + A is nonsingular, find the eigenvalues and eigenvectors of (a) (I + A)⁻¹, (b) A + A⁻¹, (c) (I + A⁻¹).
PROBLEMS
14. Let the m × m nonsingular matrix A be such that I + A is nonsingular, and define

B = (I + A)⁻¹ + (I + A⁻¹)⁻¹

(a) Show that if x is an eigenvector of A corresponding to the eigenvalue λ, then x is an eigenvector of B corresponding to the eigenvalue 1.
(b) Use Theorem 1.7 to show that B = I.

15. Consider the 2 × 2 matrix

A = ( 4 2 )
    ( 3 5 )

(a) Find the characteristic equation of A.
(b) Illustrate Theorem 3.7 by substituting A for λ in the characteristic equation obtained in (a) and then showing that the resulting matrix is the null matrix.
(c) Rearrange the polynomial equation in (b) to obtain an expression for A² as a linear combination of A and I.
(d) In a similar fashion, write A³ and A⁻¹ as linear combinations of A and I.

16. Consider the general 2 × 2 matrix

A = ( a₁₁ a₁₂ )
    ( a₂₁ a₂₂ )

(a) Find the characteristic equation of A.
(b) Obtain expressions for the two eigenvalues of A in terms of the elements of A.
(c) When will these eigenvalues be real?

17. Find the eigenvalues and eigenvectors of the matrix 1_m1_m'.

18. Consider the m × m matrix A = αI_m + β1_m1_m', where α and β are scalars.
(a) Find the eigenvalues and eigenvectors of A.
(b) Determine the eigenspaces and associated eigenprojections of A.
(c) For which values of α and β will A be nonsingular?
(d) Using (a), show that when A is nonsingular, then

A⁻¹ = α⁻¹{I_m − β(α + mβ)⁻¹1_m1_m'}
(e) Show that the determinant of A is α^{m−1}(α + mβ).
19. Consider the m × m matrix A = αI_m + βcc', where α and β are scalars and c ≠ 0 is an m × 1 vector.
(a) Find the eigenvalues and eigenvectors of A.
(b) Find the determinant of A.
(c) Give conditions for A to be nonsingular and find an expression for the inverse of A.
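The spectra asked for in problems 18 and 19 can be checked numerically: for A = αI + βcc', the eigenvalues are α + βc'c (with eigenvector c) and α repeated m − 1 times, and det(A) = α^{m−1}(α + βc'c). The sketch below (numpy, with illustrative sizes, scalars, and a random c — none of these come from the text) verifies both facts.

```python
import numpy as np

# illustrative parameters, not from the text
m, alpha, beta = 5, 2.0, 0.7
rng = np.random.default_rng(1)
c = rng.standard_normal(m)                      # c != 0
A = alpha * np.eye(m) + beta * np.outer(c, c)   # A = alpha*I + beta*cc'

# predicted spectrum: alpha (m-1 times) and alpha + beta*c'c
evals = np.sort(np.linalg.eigvalsh(A))
expected = np.sort(np.r_[np.full(m - 1, alpha), alpha + beta * (c @ c)])
print(np.allclose(evals, expected))             # True

# predicted determinant: alpha^(m-1) * (alpha + beta*c'c)
print(np.isclose(np.linalg.det(A), alpha ** (m - 1) * (alpha + beta * (c @ c))))  # True
```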
20. Let A be the 3 × 3 matrix given by

A = ( 1 0 1 )
    ( 0 1 1 )
    ( 1 1 2 )

(a) Find the eigenvalues and associated normalized eigenvectors of A.
(b) What is the rank of A?
(c) Find the eigenspaces and associated eigenprojections of A.

21. Construct a 3 × 3 symmetric matrix having eigenvalues 18, 21, and 28, and corresponding eigenvectors (1, 1, 2)', (−4, 2, 1)', and (1, 3, −2)'.

22. Show that if A is an m × m symmetric matrix with eigenvalues λ₁, ..., λ_m, then

Σ_{i=1}^m Σ_{j=1}^m a_{ij}² = Σ_{i=1}^m λ_i²

23. Show that if A is an m × m symmetric matrix with its eigenvalues equal to its diagonal elements, then A must be a diagonal matrix.

24. Use Theorem 3.17 to prove Theorem 3.18. Show that the converse is not true; that is, find symmetric matrices A and B for which λ_i(A) ≥ λ_i(B) for i = 1, ..., m, yet A − B is not nonnegative definite.

25. Let A be an m × n matrix with rank(A) = r. Use the spectral decomposition of A'A to show that there exists an n × (n − r) matrix X such that

AX = (0)   and   X'X = I_{n−r}

In a similar fashion, show that there exists an (m − r) × m matrix Y such that

YA = (0)   and   YY' = I_{m−r}
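Identities like the one in problem 22 are easy to spot-check numerically. The sketch below (numpy and a random symmetric matrix are illustrative assumptions) verifies that the sum of squared entries of a symmetric matrix equals the sum of its squared eigenvalues, which follows from Σa_{ij}² = tr(A²).

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2.0                 # a random symmetric matrix

lam = np.linalg.eigvalsh(A)
print(np.isclose((A ** 2).sum(), (lam ** 2).sum()))  # True
```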
26. Let A be the 2 × 3 matrix given by

A = ( 6 4 4 )
    ( 3 2 2 )

Find matrices X and Y satisfying the conditions given in the previous exercise.

27. An m × m matrix A is said to be nilpotent if A^k = (0) for some positive integer k.
(a) Show that all of the eigenvalues of a nilpotent matrix are equal to 0.
(b) Find a matrix, other than the null matrix, that is nilpotent.

28. Complete the details of Example 3.10 by showing that

P_{1,n} → ( 1 0 0 )         P_{2,n} → ( 0 0 0 )
          ( 0 0 0 )   and             ( 0 1 0 )
          ( 0 0 0 )                   ( 0 0 1 )
29. Let A and B be m × m symmetric matrices. Show that

λ₁(A + B) ≤ λ₁(A) + λ₁(B),   λ_m(A + B) ≥ λ_m(A) + λ_m(B)
30. Prove Theorem 3.20.

31. Our proof of Theorem 3.24 utilized (3.9) of Theorem 3.17. Obtain an alternative proof of Theorem 3.24 by using (3.10) of Theorem 3.17.

32. Let A be an m × m nonnegative definite matrix and B be an m × m positive definite matrix. If F is any m × h matrix with full column rank, then show the following:
(a) λ_{h−i+1}((F'AF)(F'BF)⁻¹) ≥ λ_{m−i+1}(AB⁻¹), for i = 1, ..., h.
(b) min_F λ₁((F'AF)(F'BF)⁻¹) = λ_{m−h+1}(AB⁻¹).
(c) min_F λ_h((F'AF)(F'BF)⁻¹) = λ_m(AB⁻¹).
33. Suppose A is an m × m matrix with eigenvalues λ₁, ..., λ_m and associated eigenvectors x₁, ..., x_m, while B is n × n with eigenvalues γ₁, ..., γ_n and eigenvectors y₁, ..., y_n. What are the eigenvalues and eigenvectors of the (m + n) × (m + n) matrix

C = ( A   (0) ) ?
    ( (0)  B  )

Generalize this result by giving the eigenvalues and eigenvectors of the matrix

C = ( C₁  (0) ... (0) )
    ( (0) C₂  ... (0) )
    (  .   .        . )
    ( (0) (0) ... C_r )

in terms of the eigenvalues and eigenvectors of the matrices C₁, ..., C_r.
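The block diagonal result in problem 33 can be illustrated numerically: the spectrum of C is the union of the spectra of the blocks. The sketch below (numpy and the random symmetric blocks are illustrative assumptions) checks this for two blocks.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)); A = A + A.T    # symmetric 3x3 block
B = rng.standard_normal((2, 2)); B = B + B.T    # symmetric 2x2 block
C = np.block([[A, np.zeros((3, 2))],
              [np.zeros((2, 3)), B]])

# eigenvalues of the block diagonal matrix = union of the blocks' eigenvalues
eig_C = np.sort(np.linalg.eigvalsh(C))
eig_union = np.sort(np.r_[np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)])
print(np.allclose(eig_C, eig_union))            # True
```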
34. Let

T = ( 1 2 )
    ( 2 1 )

(a) Find the eigenvalues and corresponding eigenvectors of TT'.
(b) Find the eigenvalues and corresponding eigenvectors of T'T.

35. Show that if A is a nonnegative definite matrix and a_{ii} = 0 for some i, then a_{ij} = a_{ji} = 0 for all j.

36. Suppose that A is an m × m symmetric matrix with eigenvalues λ₁, ..., λ_m and associated eigenvectors x₁, ..., x_m, while B is an m × m symmetric matrix with eigenvalues γ₁, ..., γ_m and the same associated eigenvectors x₁, ..., x_m; that is, A and B have common eigenvectors.
(a) Find the eigenvalues and eigenvectors of A + B.
(b) Find the eigenvalues and eigenvectors of AB.
(c) Show that AB = BA.

37. Suppose that x₁, ..., x_r is a set of orthonormal eigenvectors corresponding to the r largest eigenvalues γ₁, ..., γ_r of the m × m symmetric matrix A, and assume that γ_r > γ_{r+1}. Let P be the total eigenprojection of A associated
with the eigenvalues γ₁, ..., γ_r; that is,

P = Σ_{i=1}^r x_i x_i'

Let B be another m × m symmetric matrix with its r largest eigenvalues given by μ₁, ..., μ_r, where μ_r > μ_{r+1}, and a corresponding set of orthonormal eigenvectors given by y₁, ..., y_r. Let Q be the total eigenprojection of B associated with the eigenvalues μ₁, ..., μ_r, so that

Q = Σ_{i=1}^r y_i y_i'

(a) Show that P = Q if and only if

Σ_{i=1}^r {γ_i + μ_i − λ_i(A + B)} = 0

(b) Let X = (x₁, ..., x_m), where x_{r+1}, ..., x_m is a set of orthonormal eigenvectors corresponding to the m − r smallest eigenvalues of A. Show that if P = Q, then X'BX has the block diagonal form

( U   (0) )
( (0)  V  )

where U is r × r and V is (m − r) × (m − r). Show that the converse is not true.

38. Let λ₁ ≥ ... ≥ λ_m be the eigenvalues of the m × m symmetric matrix A and let x₁, ..., x_m be a set of corresponding orthonormal eigenvectors. For some k, define the total eigenprojection associated with the eigenvalues λ_k, ..., λ_m as

P = Σ_{i=k}^m x_i x_i'

Show that λ_k = ... = λ_m = λ if and only if P(A − λI)P = (0).
39. Let A₁, ..., A_k be m × m symmetric matrices and let τ_i be one of the eigenvalues of A_i. Let x₁, ..., x_r be a set of orthonormal m × 1 vectors, and define

P = Σ_{i=1}^r x_i x_i'

Show that if each of the eigenvalues τ_i has multiplicity r and has x₁, ..., x_r as associated eigenvectors, then

P {Σ_{i=1}^k (A_i − τ_iI)²} P = (0)
CHAPTER FOUR

Matrix Factorizations and Matrix Norms

1. INTRODUCTION

In this chapter, we take a look at some useful ways of expressing a given matrix A in the form of a product of other matrices having some special structure or canonical form. In many applications such a decomposition of A may reveal to us the key features of A that are of interest to us. These factorizations are particularly useful in multivariate distribution theory in that they can expedite the mathematical development and often simplify the generalization of results from a special case to a more general situation. Our focus here will be on conditions for the existence of these factorizations as well as mathematical properties and consequences of the factorizations. Details on the numerical computation of the component matrices in these factorizations can be found in texts on numerical methods. Some useful references are Golub and Van Loan (1989) and Press, Flannery, Teukolsky, and Vetterling (1992).
2. THE SINGULAR VALUE DECOMPOSITION

The first factorization that we consider, the singular value decomposition, could be described as the most useful because it is a factorization for a matrix of any size; the subsequent decompositions will apply only to square matrices. We will find this decomposition particularly useful in the next chapter when we generalize the concept of an inverse of a nonsingular square matrix to any matrix.

Theorem 4.1. If A is an m × n matrix of rank r > 0, there exist orthogonal m × m and n × n matrices P and Q, such that A = PDQ' and D = P'AQ, where the m × n matrix D is given by

(a) D = Δ, if r = m = n,

(b) D = ( Δ (0) ), if r = m < n,

(c) D = (  Δ  ), if r = n < m,
        ( (0) )

(d) D = (  Δ  (0) ), if r < m, r < n,
        ( (0) (0) )

and Δ is an r × r diagonal matrix with positive diagonal elements. The diagonal elements of Δ² are the positive eigenvalues of A'A and AA'.

Proof. We will prove the result for the case r < m and r < n. The proofs of (a)–(c) only require notational changes. Let Δ² be the r × r diagonal matrix whose diagonal elements are the r positive eigenvalues of A'A, which are identical to the positive eigenvalues of AA' by Theorem 3.23. Define Δ to be the diagonal matrix whose diagonal elements are the positive square roots of the corresponding diagonal elements of Δ². Since A'A is an n × n symmetric matrix, we can find an n × n orthogonal matrix Q such that
Q'A'AQ = ( Δ²  (0) )
         ( (0) (0) )

Partitioning Q as Q = [Q₁ Q₂], where Q₁ is n × r, the identity above implies

Q₁'A'AQ₁ = Δ²   (4.1)

and

Q₂'A'AQ₂ = (0)   (4.2)

Note that from (4.2) it follows that

AQ₂ = (0)   (4.3)

Now let P = [P₁ P₂] be an m × m orthogonal matrix, where the m × r matrix P₁ = AQ₁Δ⁻¹ and the m × (m − r) matrix P₂ is any matrix which makes P orthogonal. Consequently, we must have P₂'P₁ = P₂'AQ₁Δ⁻¹ = (0), or equivalently,

P₂'AQ₁ = (0)   (4.4)

By using (4.1), (4.3), and (4.4), we find that

P'AQ = ( P₁'AQ₁  P₁'AQ₂ ) = ( Δ⁻¹Q₁'A'AQ₁  Δ⁻¹Q₁'A'AQ₂ ) = ( Δ⁻¹Δ²  (0) ) = (  Δ  (0) ) = D
       ( P₂'AQ₁  P₂'AQ₂ )   (     (0)          (0)     )   (  (0)   (0) )   ( (0) (0) )

The diagonal elements of Δ, that is, the positive square roots of the positive eigenvalues of A'A and AA', are called the singular values of A. It is obvious from the proof of Theorem 4.1 that the columns of Q form an orthonormal set of eigenvectors of A'A, and so A'A = QD'DQ'. It is important to note also that the columns of P form an orthonormal set of eigenvectors of AA', since AA' = PDQ'QD'P' = PDD'P'. If we again partition P and Q as P = [P₁ P₂] and Q = [Q₁ Q₂], where P₁ is m × r and Q₁ is n × r, then the singular value decomposition can be restated as follows.

Corollary 4.1.1. If A is an m × n matrix of rank r > 0, then there exist m × r and n × r matrices P₁ and Q₁, such that P₁'P₁ = Q₁'Q₁ = I_r and A = P₁ΔQ₁', where Δ is an r × r diagonal matrix with positive diagonal elements.

Quite a bit of information about the structure of a matrix A can be obtained from its singular value decomposition. The number of singular values gives the rank of A, while the columns of P₁ and Q₁ are orthonormal bases for the column space and row space of A, respectively. Similarly, the columns of P₂ span the null space of A' and the columns of Q₂ span the null space of A. Theorem 4.1 and Corollary 4.1.1 are related to Theorem 1.9 and its corollary, Corollary 1.9.1, which we stated as consequences of the properties of elementary transformations. It is easily verified that Theorem 1.9 and Corollary 1.9.1 also follow directly from Theorem 4.1 and Corollary 4.1.1.
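Numerically, the two forms above correspond to the full and truncated singular value decompositions computed by numpy (numpy and the random test matrix below are illustrative assumptions, not part of the text): the full factorization gives the P, D, Q of Theorem 4.1, while keeping only the r columns attached to nonzero singular values gives the P₁, Δ, Q₁ of Corollary 4.1.1.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # 5x4, rank 3

P, s, Qt = np.linalg.svd(A)              # full form: A = P D Q'
D = np.zeros(A.shape)
np.fill_diagonal(D, s)
print(np.allclose(A, P @ D @ Qt))        # True

r = int(np.sum(s > 1e-10))               # numerical rank
P1, delta, Q1t = P[:, :r], s[:r], Qt[:r, :]
print(np.allclose(A, P1 @ np.diag(delta) @ Q1t))  # True (Corollary 4.1.1)
```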
Example 4.1. We will find a singular value decomposition for the 4 × 3 matrix

A = (  2  0  1 )
    (  3 -1  1 )
    ( -2  4  1 )
    (  1  1  1 )

First, an eigenanalysis of the matrix

A'A = (  18 -10  4 )
      ( -10  18  4 )
      (   4   4  4 )

reveals that it has eigenvalues 28, 12, and 0, with associated normalized eigenvectors (1/√2, -1/√2, 0)', (1/√3, 1/√3, 1/√3)', and (1/√6, 1/√6, -2/√6)', respectively. Let these be the columns of the 3 × 3 orthogonal matrix Q. Clearly, rank(A) = 2, and the two singular values of A are √28 and √12. Thus, the 4 × 2 matrix P₁ is given by
P₁ = AQ₁Δ⁻¹ = (  2  0  1 ) ( 1/√2  1/√3 )
              (  3 -1  1 ) (-1/√2  1/√3 ) ( 1/√28    0   )
              ( -2  4  1 ) (   0   1/√3 ) (   0    1/√12 )
              (  1  1  1 )

            = (  1/√14  1/2 )
              (  2/√14  1/2 )
              ( -3/√14  1/2 )
              (    0    1/2 )
The 4 × 2 matrix P₂ can be any matrix satisfying P₁'P₂ = (0) and P₂'P₂ = I₂; for instance, we can take (1/√12, 1/√12, 1/√12, -3/√12)' and (5/√42, -4/√42, -1/√42, 0)' as the columns of P₂. Then our singular value decomposition of A is given by

A = (  1/√14  1/2  1/√12   5/√42 ) ( √28   0   0 )
    (  2/√14  1/2  1/√12  -4/√42 ) (  0   √12  0 ) ( 1/√2  -1/√2    0   )
    ( -3/√14  1/2  1/√12  -1/√42 ) (  0    0   0 ) ( 1/√3   1/√3  1/√3  )
    (    0    1/2 -3/√12     0   ) (  0    0   0 ) ( 1/√6   1/√6 -2/√6  )

or, in the form of Corollary 4.1.1,

A = (  1/√14  1/2 )
    (  2/√14  1/2 ) ( √28   0  ) ( 1/√2  -1/√2    0  )
    ( -3/√14  1/2 ) (  0   √12 ) ( 1/√3   1/√3  1/√3 )
    (    0    1/2 )

Alternatively, we could have determined the matrix P by using the fact that its columns are eigenvectors of the matrix

AA' = (  5   7  -3  3 )
      (  7  11  -9  3 )
      ( -3  -9  21  3 )
      (  3   3   3  3 )

However, when constructing P this way, one will have to check the decomposition A = P₁ΔQ₁' to determine the correct sign for each of the columns of P₁.
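The decomposition of Example 4.1 can be verified numerically (numpy is an illustrative tool choice, and the entries below are the matrices as reconstructed above): P₁ΔQ₁' reproduces A, and the squared singular values returned by a library SVD match the eigenvalues 28 and 12 of A'A.

```python
import numpy as np

A = np.array([[ 2,  0, 1],
              [ 3, -1, 1],
              [-2,  4, 1],
              [ 1,  1, 1]], dtype=float)

P1 = np.column_stack([np.array([1.0, 2.0, -3.0, 0.0]) / np.sqrt(14),
                      np.full(4, 0.5)])
Delta = np.diag([np.sqrt(28), np.sqrt(12)])
Q1 = np.column_stack([np.array([1.0, -1.0, 0.0]) / np.sqrt(2),
                      np.array([1.0, 1.0, 1.0]) / np.sqrt(3)])

print(np.allclose(A, P1 @ Delta @ Q1.T))                                   # True
print(np.allclose(np.linalg.svd(A, compute_uv=False)[:2] ** 2, [28, 12]))  # True
```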
The singular value decomposition of a vector is very easy to construct. We illustrate this in the next example.

Example 4.2. Let x be an m × 1 nonnull vector. Its singular value decomposition will be of the form

x = Pdq,

where P is an m × m orthogonal matrix, d is an m × 1 vector having only its first component nonzero, and q is a scalar satisfying q² = 1. The single singular value of x is given by λ^{1/2}, where λ = x'x. If we define x* = λ^{-1/2}x, note that x*'x* = 1, and

xx'x* = xx'(λ^{-1/2}x) = (λ^{-1/2}x)x'x = λx*,

so that x* is a normalized eigenvector of xx' corresponding to its single positive eigenvalue λ. Any vector orthogonal to x* is an eigenvector of xx' corresponding to the repeated eigenvalue 0. Thus, if we let d = (λ^{1/2}, 0, ..., 0)', q = 1, and P = [x*, p₂, ..., p_m] be any orthogonal matrix with x* as its first column, then

Pdq = λ^{1/2}x* = x,

as is required.

When A is m × m and symmetric, the singular values of A are directly related to the eigenvalues of A. This follows from the fact that AA' = A², and the eigenvalues of A² are the squares of the eigenvalues of A. Thus, the singular values of A will be given by the absolute values of the eigenvalues of A. If we let the columns of P be a set of orthonormal eigenvectors of A, then the Q matrix in Theorem 4.1 will be identical to P except that any column of Q that is associated with a negative eigenvalue will be -1 times the corresponding column of P. If A is nonnegative definite, then the singular values of A will be the same as the positive eigenvalues of A and, in fact, the singular value decomposition of A is simply the spectral decomposition of A discussed in the next section. This nice relationship between the eigenvalues and singular values of a symmetric matrix does not carry over to general square matrices.
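The relationship just described can be checked numerically (numpy and the random symmetric matrix below are illustrative assumptions): for a symmetric, generally indefinite matrix, the singular values coincide with the absolute values of the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2.0                       # symmetric, generally indefinite

sing = np.sort(np.linalg.svd(A, compute_uv=False))
eig = np.sort(np.abs(np.linalg.eigvalsh(A)))
print(np.allclose(sing, eig))             # True
```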
Example 4.3. Consider the 2 × 2 matrix

A = (  6  6 )
    ( -1  1 )

which has

AA' = ( 72  0 ),   A'A = ( 37  35 )
      (  0  2 )          ( 35  37 )

Clearly, the singular values of A are √72 = 6√2 and √2. Normalized eigenvectors corresponding to 72 and 2 are (1, 0)' and (0, 1)' for AA', while A'A has (1/√2, 1/√2)' and (-1/√2, 1/√2)'. Thus, the singular value decomposition of A can be written as

A = ( 1  0 ) ( 6√2   0  ) (  1/√2  1/√2 )
    ( 0  1 ) (  0   √2  ) ( -1/√2  1/√2 )
On the other hand, an eigenanalysis of A yields the eigenvalues 4 and 3, with associated normalized eigenvectors (3/√10, -1/√10)' and (2/√5, -1/√5)'.

We end this section with an example which illustrates an application of the singular value decomposition to least squares regression. For more discussion of this and other uses of the singular value decomposition in statistics, the reader is referred to Mandel (1982), Eubank and Webster (1985), and Nelder (1985).

Example 4.4. In this example, we will take a closer look at the multicollinearity problem, which we first discussed in Example 3.6. Suppose we have the standardized regression model

y = δ₀1_N + Z₁δ₁ + ε

We have seen that the least squares estimator of δ₁ is δ̂₁ = (Z₁'Z₁)⁻¹Z₁'y. The fitted model ŷ = ȳ1_N + Z₁δ̂₁ gives points on a hyperplane in R^{k+1}, where the (k + 1) axes correspond to the k standardized explanatory variables and the fitted response variable. Now let Z₁ = UDV' be the singular value decomposition of the N × k matrix Z₁. Thus, U is an N × N orthogonal matrix, V is a k × k orthogonal matrix, and D is an N × k matrix that has the square roots of the eigenvalues of Z₁'Z₁ as its diagonal elements and zeros elsewhere. We can rewrite the model y = δ₀1_N + Z₁δ₁ + ε as we did in Example 2.15 by defining α₀ = δ₀, α₁ = V'δ₁, and W₁ = UD, so that y = α₀1_N + W₁α₁ + ε. Suppose that exactly r of the diagonal elements of D, specifically the last r diagonal elements, are zeros, and so by partitioning U, D, and V appropriately, we get Z₁ = U₁D₁V₁', where D₁ is a (k − r) × (k − r) diagonal matrix. This means that the row space of Z₁ is a (k − r)-dimensional subspace of R^k, and this subspace is spanned by the columns of V₁; that is, the points on the fitted regression hyperplane described above, when projected onto the k-dimensional standardized explanatory variable space, are actually confined to a (k − r)-dimensional subspace. Also, the model y = α₀1_N + W₁α₁ + ε simplifies to

y = α₀1_N + W₁₁α₁₁ + ε   (4.5)

where W₁₁ = U₁D₁ and α₁₁ = V₁'δ₁, and the least squares estimator of α₁₁ is given by α̂₁₁ = (W₁₁'W₁₁)⁻¹W₁₁'y = D₁⁻¹U₁'y. This can be used to find a least squares estimator of δ₁ since we must have α̂₁₁ = V₁'δ̂₁. Partitioning δ̂₁' = (δ̂₁₁', δ̂₁₂') and V₁' = (V₁₁', V₂₁'), where δ̂₁₁ is (k − r) × 1, we obtain the relationship

α̂₁₁ = V₁₁'δ̂₁₁ + V₂₁'δ̂₁₂

Premultiplying this by (V₁₁')⁻¹ (if V₁₁ is nonsingular) and rearranging, we find that

δ̂₁₁ = (V₁₁')⁻¹α̂₁₁ − (V₁₁')⁻¹V₂₁'δ̂₁₂;

that is, the least squares estimator of δ₁ is not unique, since δ̂₁ = (δ̂₁₁', δ̂₁₂')' is a least squares estimator for any choice of δ̂₁₂, as long as δ̂₁₁ satisfies the identity given. Now suppose that we wish to estimate the response variable corresponding to an observation that has the standardized explanatory variables at the values given in the k × 1 vector z. Using a least squares estimate δ̂₁, we obtain the estimate ŷ = ȳ + z'δ̂₁. This estimated response, like δ̂₁, may not be unique since, if we partition z as z' = (z₁', z₂') with z₁ (k − r) × 1,

ŷ = ȳ + z'δ̂₁ = ȳ + z₁'δ̂₁₁ + z₂'δ̂₁₂ = ȳ + z₁'(V₁₁')⁻¹α̂₁₁ + {z₂' − z₁'(V₁₁')⁻¹V₂₁'}δ̂₁₂

Thus, ŷ does not depend on the arbitrary δ̂₁₂, and is therefore unique, only if

z₂' − z₁'(V₁₁')⁻¹V₂₁' = 0'   (4.6)

in which case the unique estimated value is given by ŷ = ȳ + z₁'(V₁₁')⁻¹α̂₁₁. It is easily shown that the set of all vectors z = (z₁', z₂')' satisfying (4.6) is simply the column space of V₁. Thus, ŷ = ȳ + z'δ̂₁ is uniquely estimated only if the vector of standardized explanatory variables z falls within the space spanned by the collection of all vectors of standardized explanatory variables available to compute δ̂₁.

In the typical multicollinearity problem, Z₁ is full rank, so that the matrix D has no zero diagonal elements but instead has r of its diagonal elements very small relative to the others. In this case, the row space of Z₁ is all of R^k, but
the points corresponding to the rows of Z₁ all lie very close to a (k − r)-dimensional subspace S of R^k, specifically, the space spanned by the columns of U₁. Small changes in the values of the response variables corresponding to these points can substantially alter the position of the fitted regression hyperplane ŷ = ȳ + z'δ̂₁ for vectors z lying outside of, and in particular far from, S. For instance, if k = 2 and r = 1, the points corresponding to the rows of Z₁ all lie very close to S, which in this case is a line in the z₁, z₂ plane, and ŷ = ȳ + z'δ̂₁ will be given by a plane in R³ extended over the z₁, z₂ plane. The fitted regression plane ŷ = ȳ + z'δ̂₁ can be identified by the line formed as the intersection of this plane and the plane perpendicular to the z₁, z₂ plane and passing through the line S, along with the tilt of the fitted regression plane. Small changes in the values of the response variables will produce small changes in both the location of this line of intersection and the tilt of the plane. However, even a slight change in the tilt of the regression plane will yield large changes on the surface of this plane for vectors z far from S. The adverse effect of this tilting can be eliminated by the use of principal components regression. As we saw in Example 3.6, principal components regression utilizes the regression model (4.5), and so an estimated response will be given by ŷ = ȳ + z'V₁D₁⁻¹U₁'y. Since this regression model technically holds only for z ∈ S, by using this model for z ∉ S we will introduce bias into our estimate of y. The advantage of principal components regression is that this may be compensated for by a large enough reduction in the variance of our estimate so as to reduce the mean squared error (see Problem 4.9). However, it should be apparent that the predicted values of y obtained from both ordinary least squares regression and principal components regression will be poor if the vector z is far from S.
3. THE SPECTRAL DECOMPOSITION AND SQUARE ROOT MATRICES OF A SYMMETRIC MATRIX

The spectral decomposition of a symmetric matrix, briefly discussed in the previous chapter, is nothing more than a special case of the singular value decomposition. We summarize this result in the following theorem.

Theorem 4.2. Let A be an m × m symmetric matrix with eigenvalues λ₁, ..., λ_m, and suppose that x₁, ..., x_m is a set of orthonormal eigenvectors corresponding to these eigenvalues. Then, if Λ = diag(λ₁, ..., λ_m) and X = (x₁, ..., x_m), it follows that

A = XΛX'

We can use the spectral decomposition of a nonnegative definite matrix A to find a square root matrix of A; that is, we wish to find an m × m matrix A^{1/2} for which A = A^{1/2}A^{1/2}. If Λ and X are defined as in the theorem above, and
we let Λ^{1/2} = diag(λ₁^{1/2}, ..., λ_m^{1/2}) and A^{1/2} = XΛ^{1/2}X', then, since X'X = I_m,

A^{1/2}A^{1/2} = XΛ^{1/2}X'XΛ^{1/2}X' = XΛ^{1/2}Λ^{1/2}X' = XΛX' = A,

as is required. Note that (A^{1/2})' = (XΛ^{1/2}X')' = XΛ^{1/2}X' = A^{1/2}; consequently, XΛ^{1/2}X' is referred to as the symmetric square root of A. Note also that if we did not require A to be nonnegative definite, then A^{1/2} would be a complex matrix if some of the eigenvalues of A are negative.
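The symmetric square root is straightforward to construct from an eigendecomposition, as the following sketch shows (numpy and the random positive definite matrix are illustrative assumptions, not part of the text).

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((4, 4))
A = B @ B.T + np.eye(4)                 # positive definite

lam, X = np.linalg.eigh(A)              # spectral decomposition A = X diag(lam) X'
A_half = X @ np.diag(np.sqrt(lam)) @ X.T

print(np.allclose(A_half @ A_half, A))  # True
print(np.allclose(A_half, A_half.T))    # True: the symmetric square root
```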
en th ts, en m ele al on ag di ve ati eg nn no th wi ix atr If A1/2 is a lo we r triangular m 2 A. I of 2 n io sit I po m co de ky es ol Ch e th as n ow kn is ' / th e factorization A = A / A . n io sit po m co de a ch su of ce en ist ex e th s he lis tab es m re eo Th e following th e er th en Th x. tri ma ite fin de ve ati eg nn no m x m an be Theorem 4.3. Le t A ts en m ele al on ag di ve ati eg nn no ng vi ha T ix atr m lar gu an ex ist s an m Xm lo we r tri d an ue iq un is T ix atr m e th , ite fin de e iv sit po is A if , er su ch th at A = TT '. Furth ha s positive diagonal elements. •
is f oo pr r Ou s. ce tri ma ite fin de e iv sit po r fo lt su re e th e ov pr Proof We will e iv sit po a is A se ca is th in ce sin I, = m if s ld ho by induction. Th e re su lt clearly A. of ot ro re ua sq e iv sit po e th by n ve gi be d ul wo T ue iq scalar, an d so th e un 1) ~. (m x 1) (m ite fin de e iv sit po all r fo s ld ho No w as su m e that th e result matrices. Partition A as
is, A if ite fin de e iv sit po be t us m l Al e nc Si . wh er e All is (m  1) x (m  1) l Ti ix atr m lar gu an tri r we lo 1) (m x 1) (m we kn ow there eJtists a unique ll wi f oo pr r Ou I' T~ II T = II A g in fy tis sa d an ts ha vi ng positive diagonal elemen a d an 112 r cto ve 1 x 1) (m ue iq un a is e er th at th be co m pl ete if we can show un iq ue positive sc ala r 122 such that

o
112
_
t22
t22

TI IT ;I 1;2 T ; I
140
MATRIX FACTORIZATIONS AND MATRIX NORMS
be t us m l Ti e nc Si . t~2 + /12 1;2 = 2 a2 d an 2 I/1 TI = al2 that is, we must have t us m t~2 so d an , l2 ia Ti = 112 by en giv is 112 of ce oi ch nonsingular, the unique satisfy
if , ce sin e iv sit po be ll wi l2 iia 2A a; lin , ite fin de e iv sit Note that since A is po we let x = (x;, 1 )' = (a;2Aii, 1 )', then
. /2 )1 I2 j}a 2A a; 2 (a2 = t22 by n ve gi is 0 > t22 ue Consequently, the uniq
0
n ca , on ati riz cto fa QR e th as n ow kn ly on m m co n, io sit po The following decom e iv sit po r fo 4.3 m re eo Th of on ati riz cto fa lar gu an tri be used to establish the semidefinite matrices. n x n an ist ex e er Th n. ~ m e er wh , ix atr m n x m Theorem 4.4. Let A be an at th ch su , In = Q Q' g in fy tis sa Q ix atr m n x m an upper triangular matrix R and A = QR.
e iv sit po a is A If . 5) 98 (1 n so hn Jo d an m Ho e se 4.4 m re For a proof of Theo 2 I of on ati riz cto fa lar gu an tri e th n the )', / (A /2 AI = A d an semidefinite matrix QR the g in us by en ov pr be n ca s ice atr m ite in ef id m se e Theorem 4.3 for positiv factorization of (A 1/2)'. fL r cto ve n ea m s ha x r cto ve om nd ra 1 x m the at th e os Example 4.5. Supp of ix atr m ot ro re ua sq a g in us By . {} ix atr m e nc ria va co and the positive definite om nd ra ed rm fo ns tra e th at th so x of on ati rm fo ns tra r {} , we can deteliuine a linea If . 1m ix atr m e nc ria va co d an 0 r cto ve n ea m s ha it vector is standardized; that is, , fL) x 2( 1/ (}= z t pu d an )' 1/2 ({} 1/2 {} = {} g we let {} 1/2 be any matrix satisfyin d fin we 3, 1.1 on cti Se of ) .9 (1 d an .8) (1 g in us by en where {} 1/ 2 = ({) 1/ 2t l, th that
and
141
THE SP EC fR AL DECOMPOSITION
2 I )' / W )}( fL r(x va 2{ 1/ (}= )} fL (x /2 }1 r{{ va = ) var(z = (} 1/ 2{ va r(x )}( {} 1/ 2) ' = (} 1/ 2{ }({ }1 /2 )' = 1m
,
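The standardization of Example 4.5 can be sketched numerically (numpy and the random Ω, μ, and observations below are illustrative assumptions): with z = T⁻¹(x − μ) for a Cholesky factor T of Ω, the Euclidean distance between standardized points equals the Mahalanobis distance between the original points, since (T⁻¹)'T⁻¹ = Ω⁻¹.

```python
import numpy as np

rng = np.random.default_rng(8)
m = 3
B = rng.standard_normal((m, m))
Omega = B @ B.T + np.eye(m)             # positive definite covariance
mu = rng.standard_normal(m)

# any square root works; here the Cholesky factor T with Omega = TT'
T = np.linalg.cholesky(Omega)
x1 = rng.standard_normal(m) + mu
x2 = rng.standard_normal(m) + mu
z1 = np.linalg.solve(T, x1 - mu)        # standardized observations
z2 = np.linalg.solve(T, x2 - mu)

d_euclid = np.linalg.norm(z1 - z2)
d_mahal = np.sqrt((x1 - x2) @ np.linalg.solve(Omega, x1 - x2))
print(np.isclose(d_euclid, d_mahal))    # True
```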
Since the covariance matrix of z is the identity matrix, the Euclidean distance function will give a meaningful measure of the distance between observations from this distribution. By making use of the linear transformation defined above, we can relate distances between z observations to distances between x observations. For example, the Euclidean distance between an observation z and its expected value 0 is

d_I(z, 0) = {(z − 0)'(z − 0)}^{1/2} = (z'z)^{1/2}
          = {(x − μ)'(Ω^{-1/2})'Ω^{-1/2}(x − μ)}^{1/2}
          = {(x − μ)'Ω⁻¹(x − μ)}^{1/2} = d_Ω(x, μ),
where d_Ω is the Mahalanobis distance function defined in Section 2.2. Similarly, if x₁ and x₂ are two observations from the distribution of x and z₁ and z₂ are the corresponding transformed vectors, then d_I(z₁, z₂) = d_Ω(x₁, x₂). This relationship between the Mahalanobis distance and the Euclidean distance makes the construction of the Mahalanobis distance function more apparent. It is nothing more than a two-stage computation of distance; the first stage transforms the points so as to remove the effect of correlations and differing variances, while the second stage simply computes the Euclidean distance for these transformed points.

Example 4.6. In Example 2.16, we obtained the weighted least squares estimator of β in the multiple regression model
y = Xβ + ε,

where var(ε) = σ² diag(c₁², ..., c_N²) and c₁, ..., c_N are known constants. We now consider a more general regression problem, sometimes referred to as generalized least squares regression, in which var(ε) = σ²Ω, where Ω is a known N × N positive definite matrix. Thus, the random errors not only may have different variances but also may be correlated, and weighted least squares regression is simply a special case of generalized least squares regression. As with weighted least squares regression, the approach here is to transform the problem to ordinary least squares regression; that is, we wish to transform the model so that the vector of random errors in the transformed model has σ²I_N as its covariance matrix. This can be done by utilizing any square root matrix of Ω. Let T be any N × N matrix satisfying TT' = Ω or, equivalently, T⁻¹Ω(T⁻¹)' = I_N. Now transform our original regression model to the model

y* = X*β + ε*,

where y* = T⁻¹y, X* = T⁻¹X, and ε* = T⁻¹ε, and note that E(ε*) = T⁻¹E(ε) = 0 and

var(ε*) = var(T⁻¹ε) = T⁻¹{var(ε)}(T⁻¹)' = T⁻¹(σ²Ω)(T⁻¹)' = σ²T⁻¹TT'(T')⁻¹ = σ²I_N

Thus, the generalized least squares estimator β̂* of β in the model y = Xβ + ε is given by the ordinary least squares estimator of β in the model y* = X*β + ε*, and so can be expressed as

β̂* = (X*'X*)⁻¹X*'y* = (X'(T⁻¹)'T⁻¹X)⁻¹X'(T⁻¹)'T⁻¹y = (X'Ω⁻¹X)⁻¹X'Ω⁻¹y
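The equivalence just derived can be checked numerically (numpy and the randomly generated X, y, and Ω are illustrative assumptions, not from the text): ordinary least squares on the transformed data T⁻¹y, T⁻¹X reproduces the direct generalized least squares formula.

```python
import numpy as np

rng = np.random.default_rng(9)
N, p = 12, 3
X = rng.standard_normal((N, p))
y = rng.standard_normal(N)
W = rng.standard_normal((N, N))
Omega = W @ W.T + np.eye(N)             # known positive definite error covariance

# direct generalized least squares formula: (X' Omega^{-1} X)^{-1} X' Omega^{-1} y
beta_gls = np.linalg.solve(X.T @ np.linalg.solve(Omega, X),
                           X.T @ np.linalg.solve(Omega, y))

# transformation approach: with TT' = Omega, regress T^{-1}y on T^{-1}X by OLS
T = np.linalg.cholesky(Omega)
Xs, ys = np.linalg.solve(T, X), np.linalg.solve(T, y)
beta_ols, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

print(np.allclose(beta_gls, beta_ols))  # True
```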
In some situations a matrix A can be expressed in the form of the transpose product BB', where the m × r matrix B has r < m, so that, unlike a square root matrix, B is not square. This is the subject of our next theorem, the proof of which will be left to the reader as an exercise.
Theorem 4.5. Let A be an m × m nonnegative definite matrix with rank(A) = r. Then there exists an m × r matrix B, having rank r, such that A = BB'.

The transpose product form A = BB' of the nonnegative definite matrix A is not unique. However, if C is another matrix of order m × n, where n ≥ r and A = CC', there is an explicit relationship between the matrices B and C. This is established in the next theorem.
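Theorem 4.5 can be illustrated numerically. The sketch below (illustrative, not the book's construction) obtains B from the spectral decomposition of a rank-r nonnegative definite A, by scaling the eigenvectors belonging to the positive eigenvalues.

```python
import numpy as np

# A numerical illustration of Theorem 4.5 (a sketch).
rng = np.random.default_rng(1)
m, r = 5, 2
G = rng.standard_normal((m, r))
A = G @ G.T                               # nonnegative definite, rank r = 2

w, V = np.linalg.eigh(A)                  # eigenvalues in ascending order
pos = w > 1e-10                           # keep the r positive eigenvalues
B = V[:, pos] * np.sqrt(w[pos])           # m x r: eigenvectors scaled by sqrt(eigenvalue)
assert B.shape == (m, r)
assert np.allclose(B @ B.T, A)
assert np.linalg.matrix_rank(B) == r
```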
Theorem 4.6. Suppose that B is an m × h matrix and C is an m × n matrix, where h ≤ n. Then BB' = CC' if and only if there exists an h × n matrix Q such that QQ' = I_h and C = BQ.
Proof
If C = BQ with QQ' = I_h, then clearly

CC' = BQ(BQ)' = BQQ'B' = BB'.

Conversely, now suppose that BB' = CC'. We will assume that h = n, since if h < n we can form the matrix B* = [B (0)], so that B* is m × n and B*B*' = BB'; then proving that there exists an n × n orthogonal matrix Q* such that C = B*Q* will yield C = BQ if we take Q to be the first h rows of Q*.
Now since BB' is symmetric, there exists an orthogonal matrix X such that

BB' = CC' = X [Λ (0); (0) (0)] X',

where rank(BB') = r and the r × r diagonal matrix Λ contains the positive eigenvalues of the nonnegative definite matrix BB'. Here X has been partitioned as X = [X_1 X_2], where X_1 is m × r. Form the matrices
E = [Λ^{-1/2} (0); (0) I_{m-r}] X'B,    (4.7)

F = [Λ^{-1/2} (0); (0) I_{m-r}] X'C,    (4.8)
so that
EE' = FF' = [I_r (0); (0) (0)];
that is, E_1E_1' = F_1F_1' = I_r and E_2E_2' = F_2F_2' = (0), and so E_2 = F_2 = (0). Now let E_3 and F_3 be any (h - r) × h matrices such that E* = [E_1' E_3']' and F* = [F_1' F_3']' are both orthogonal matrices. Consequently, if Q = E*'F*, then QQ' = E*'F*F*'E* = E*'E* = I_h, so Q is orthogonal. Since E* is orthogonal, we have E_1E_3' = (0), and so

EQ = EE*'F* = [I_r (0); (0) (0)] F* = [F_1; F_2] = F.
But using (4.7) and (4.8), EQ = F can be written as

[Λ^{-1/2} (0); (0) I_{m-r}] X'BQ = [Λ^{-1/2} (0); (0) I_{m-r}] X'C.
The result now follows by premultiplying this equation by

X [Λ^{1/2} (0); (0) I_{m-r}],

since XX' = I_m.
4. THE DIAGONALIZATION OF A SQUARE MATRIX

From the spectral decomposition theorem, we know that every symmetric matrix can be transformed to a diagonal matrix by postmultiplying by an appropriately chosen orthogonal matrix and premultiplying by its transpose. This result gives us a very useful and simple relationship between a symmetric matrix and its eigenvalues and eigenvectors. In this section, we investigate a generalization of this relationship to square matrices in general. We begin with the following definition.

Definition 4.1. The m × m matrices A and B are said to be similar matrices if there exists a nonsingular matrix C such that A = CBC^{-1}.

It follows from Theorem 3.2(d) that similar matrices have identical eigenvalues. However, the converse is not true. For instance, if we have

A = [0 1; 0 0],    B = [0 0; 0 0],

then A and B have identical eigenvalues, since each has 0 with multiplicity 2. Clearly, however, there is no nonsingular matrix C satisfying A = CBC^{-1}.

The spectral decomposition theorem, given as Theorem 4.2, tells us that every symmetric matrix is similar to a diagonal matrix. Unfortunately, the same statement does not hold for all square matrices. If the diagonal elements of the diagonal matrix Λ are the eigenvalues of A, and the columns of X are corresponding eigenvectors, then the eigenvalue-eigenvector equation AX = XΛ immediately leads to the identity X^{-1}AX = Λ if X is nonsingular; that is, the diagonalizability of an m × m matrix simply depends on the existence of a set of m linearly independent eigenvectors. Consequently, we have the following result, previously mentioned in Section 3.3, which follows immediately from Theorem 3.6.

Theorem 4.7. Suppose that the m × m matrix A has the eigenvalues λ_1, ..., λ_m, which are distinct. If Λ = diag(λ_1, ..., λ_m) and X = (x_1, ..., x_m), where x_1, ..., x_m are eigenvectors of A corresponding to λ_1, ..., λ_m, then

X^{-1}AX = Λ.    (4.9)
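Equation (4.9) is easy to check numerically when the eigenvalues are distinct; the sketch below (not from the text) uses NumPy, whose eig routine returns the eigenvectors as the columns of X.

```python
import numpy as np

# Theorem 4.7 numerically (a sketch): with distinct eigenvalues, the matrix
# X of eigenvectors is nonsingular and X^{-1} A X = Lambda as in (4.9).
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])              # eigenvalues 3 and -1
lam, X = np.linalg.eig(A)               # columns of X are eigenvectors
assert np.allclose(np.linalg.inv(X) @ A @ X, np.diag(lam))
```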
The theorem above gives a sufficient but not a necessary condition for the diagonalization of a general square matrix; that is, some nonsymmetric matrices that have multiple eigenvalues are similar to a diagonal matrix. The next theorem gives a necessary and sufficient condition for a matrix to be diagonalizable.

Theorem 4.8. Suppose the eigenvalues λ_1, ..., λ_m of the m × m matrix A consist of h distinct values μ_1, ..., μ_h having multiplicities r_1, ..., r_h, so that r_1 + ... + r_h = m. Then A has a set of m linearly independent eigenvectors and, thus, is diagonalizable if and only if rank(A - μ_i I_m) = m - r_i for i = 1, ..., h.

Proof. First, suppose that A is diagonalizable, so that, using the usual notation, we have X^{-1}AX = Λ or, equivalently, A = XΛX^{-1}. Thus,

rank(A - μ_i I_m) = rank(XΛX^{-1} - μ_i I_m) = rank{X(Λ - μ_i I_m)X^{-1}} = rank(Λ - μ_i I_m),

where the last equality follows from the fact that the rank of a matrix is unaltered by its multiplication by a nonsingular matrix. Now, since μ_i has multiplicity r_i, the diagonal matrix (Λ - μ_i I_m) has exactly m - r_i nonzero diagonal elements, which then guarantees that rank(A - μ_i I_m) = m - r_i. Conversely, now suppose that rank(A - μ_i I_m) = m - r_i for i = 1, ..., h. This implies that the dimension of the null space of (A - μ_i I_m) is m - (m - r_i) = r_i, and so we can find r_i linearly independent vectors satisfying the equation

(A - μ_i I_m)x = 0.

But any such x is an eigenvector of A corresponding to the eigenvalue μ_i. Consequently, we can find a set of r_i linearly independent eigenvectors associated with the eigenvalue μ_i. From Theorem 3.6, we know that eigenvectors corresponding to different eigenvalues are linearly independent. As a result, any set of m eigenvectors of A which has r_i linearly independent eigenvectors corresponding to μ_i for each i will also be linearly independent. Therefore, A is diagonalizable, and so the proof is complete.

We saw in Chapter 3 that the rank of a symmetric matrix is equal to the number of its nonzero eigenvalues. The diagonal factorization given in (4.9) immediately yields the following generalization of this result.

Theorem 4.9. Let A be an m × m matrix. If A is diagonalizable, then the rank of A is equal to the number of nonzero eigenvalues of A.

The converse of Theorem 4.9 is not true; that is, a matrix need not be diagonalizable for its rank to equal the number of its nonzero eigenvalues.
Example 4.7. Let A, B, and C be the 2 × 2 matrices given by

A = [1 1; 4 1],    B = [0 1; 0 0],    C = [1 1; 0 1].

The characteristic equation of A simplifies to (λ - 3)(λ + 1) = 0, so its eigenvalues are λ = 3, -1. Since the eigenvalues are simple, A is diagonalizable. Eigenvectors corresponding to these two eigenvalues are x_1 = (1, 2)' and x_2 = (1, -2)', so the diagonalization of A is given by

[1/2 1/4; 1/2 -1/4] [1 1; 4 1] [1 1; 2 -2] = [3 0; 0 -1].

Clearly, the rank of A is 2, which is the same as the number of nonzero eigenvalues of A. The characteristic equation of B reduces to λ² = 0, so B has the eigenvalue λ = 0 with multiplicity r = 2. Since rank(B - λI) = rank(B) = 1 ≠ 0 = m - r, B will not have two linearly independent eigenvectors. The equation Bx = λx = 0 has only one linearly independent solution for x, namely, vectors of the form (a, 0)'. Thus, B is not diagonalizable. Note also that the rank of B is 1, which is greater than the number of its nonzero eigenvalues, 0. Finally, turning to C, we see that it has the eigenvalue λ = 1 with multiplicity r = 2, since its characteristic equation simplifies to (1 - λ)² = 0. This matrix is not diagonalizable since rank(C - λI_2) = rank(C - I_2) = 1 ≠ 0 = m - r. Any eigenvector of C is a scalar multiple of the vector x = (1, 0)'. However, notice that even though C is not diagonalizable, it has rank of 2, which is the same as the number of its nonzero eigenvalues.

The next result shows that the connection between the rank and the number of nonzero eigenvalues of a matrix A hinges on the dimension of the eigenspace associated with the eigenvalue 0.

Theorem 4.10. Let A be an m × m matrix, and let k be the dimension of the eigenspace associated with the eigenvalue 0 if 0 is an eigenvalue of A, and k = 0 otherwise. Then rank(A) = m - k.
Proof. From Theorem 2.21, we know that rank(A) = m - dim{N(A)}, where N(A) is the null space of A. But since the null space of A consists of all vectors x satisfying Ax = 0, we see that N(A) is the same as S_A(0), and so the result follows.

We have seen that the number of nonzero eigenvalues of a matrix A equals the rank of A if A is similar to a diagonal matrix; that is, A being diagonalizable is a sufficient condition for this exact relationship between rank and the number of nonzero eigenvalues. The following necessary and sufficient condition for this relationship to exist is an immediate consequence of Theorem 4.10.
Corollary 4.10.1. Let A be an m × m matrix, and let m_0 denote the multiplicity of the eigenvalue 0. Then the rank of A is equal to the number of nonzero eigenvalues of A if and only if dim{S_A(0)} = m_0.
Example 4.8. We saw in Example 4.7 that the two matrices

B = [0 1; 0 0],    C = [1 1; 0 1]

are not diagonalizable, since each has only one linearly independent eigenvector associated with its single eigenvalue, which has multiplicity two. This eigenvalue is 0 for B, so

rank(B) = 2 - dim{S_B(0)} = 2 - 1 = 1.

On the other hand, since 0 is not an eigenvalue of C, dim{S_C(0)} = 0, and so the rank of C equals the number of its nonzero eigenvalues, 2.
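The rank test of Theorem 4.8 and the rank identity of Theorem 4.10 can both be checked on the matrices of Examples 4.7 and 4.8; a sketch (not from the text):

```python
import numpy as np

# B has eigenvalue 0 with multiplicity 2, C has eigenvalue 1 with multiplicity 2.
B = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Theorem 4.8: diagonalizable iff rank(A - mu_i I) = m - r_i for each mu_i.
# For B: m - r = 0, but rank(B - 0*I) = 1, so B is not diagonalizable.
assert np.linalg.matrix_rank(B) == 1
# For C: m - r = 0, but rank(C - I) = 1, so C is not diagonalizable.
assert np.linalg.matrix_rank(C - np.eye(2)) == 1

# Theorem 4.10: rank(B) = 2 - dim S_B(0) = 1, although B has no nonzero
# eigenvalues; rank(C) = 2 equals C's number of nonzero eigenvalues.
assert np.linalg.matrix_rank(C) == 2
```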
5. THE JORDAN DECOMPOSITION

Our next factorization of a square matrix A is one that could be described as an attempt to find a matrix similar to A which, if not diagonal, is as diagonal as possible. We begin with the following definition.

Definition 4.2. For h > 1, the h × h matrix J_h(λ) is said to be a Jordan block matrix if it has the form
J_h(λ) = [λ 1 0 ... 0 0; 0 λ 1 ... 0 0; ... ; 0 0 0 ... λ 1; 0 0 0 ... 0 λ] = λI_h + Σ_{i=1}^{h-1} e_i e_{i+1}',

where e_i is the ith column of I_h. If h = 1, J_1(λ) = λ.
The matrices B and C from Examples 4.7 and 4.8 are both 2 × 2 Jordan block matrices; in particular, B = J_2(0) and C = J_2(1). We saw that neither of these matrices is similar to a diagonal matrix. This is true for Jordan block matrices in general; if h > 1, then J_h(λ) is not diagonalizable. To see this, note that since J_h(λ) is a triangular matrix, its diagonal elements are its eigenvalues, and so it has the one value, λ, repeated h times. However, the solution to J_h(λ)x = λx has x_1 arbitrary while x_2 = ... = x_h = 0; that is, J_h(λ) has only one linearly independent eigenvector, which is of the form x = (x_1, 0, ..., 0)'.
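The one-eigenvector property of a Jordan block is easy to confirm numerically; the sketch below (not from the text) builds J_h(λ) from Definition 4.2 and checks that the eigenspace for λ has dimension h - rank(J_h(λ) - λI_h) = 1.

```python
import numpy as np

# Constructing J_h(lambda) as in Definition 4.2 (a sketch).
def jordan_block(lam, h):
    # lam * I_h plus ones on the first superdiagonal
    return lam * np.eye(h) + np.diag(np.ones(h - 1), k=1)

J = jordan_block(2.0, 4)
# dim of the eigenspace for lambda is h - rank(J - lambda I) = 4 - 3 = 1
assert np.linalg.matrix_rank(J - 2.0 * np.eye(4)) == 3
```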
We now state the Jordan decomposition theorem. For a proof of this result, see Horn and Johnson (1985).

Theorem 4.11. Let A be an m × m matrix. Then there exists a nonsingular matrix B such that

B^{-1}AB = J = diag(J_{h_1}(λ_1), ..., J_{h_r}(λ_r)) = [J_{h_1}(λ_1) (0) ... (0); (0) J_{h_2}(λ_2) ... (0); ... ; (0) (0) ... J_{h_r}(λ_r)],

where h_1 + ... + h_r = m and λ_1, ..., λ_r are not necessarily distinct eigenvalues of A.

The matrix J will be diagonal if h_i = 1 for all i. Since the h_i × h_i matrix J_{h_i}(λ_i) has only one linearly independent eigenvector, it follows that the Jordan canonical form J = diag(J_{h_1}(λ_1), ..., J_{h_r}(λ_r)) has r linearly independent eigenvectors. Thus, if h_i > 1 for at least one i, then J will not be diagonal; in fact, J will not be diagonalizable. The vector x_i is an eigenvector of J corresponding to the eigenvalue λ_i if and only if the vector y_i = Bx_i is an eigenvector of A corresponding to λ_i; for instance, if x_i satisfies Jx_i = λ_i x_i, then

Ay_i = (BJB^{-1})Bx_i = BJx_i = λ_i Bx_i = λ_i y_i.
Thus, r also gives the number of linearly independent eigenvectors of A, and A is diagonalizable only if J is diagonal.

Example 4.9. Suppose that A is a 4 × 4 matrix with the eigenvalue λ having multiplicity 4. Then A will be similar to one of the following five Jordan canonical forms:

diag(J_1(λ), J_1(λ), J_1(λ), J_1(λ)) = [λ 0 0 0; 0 λ 0 0; 0 0 λ 0; 0 0 0 λ],

diag(J_2(λ), J_1(λ), J_1(λ)) = [λ 1 0 0; 0 λ 0 0; 0 0 λ 0; 0 0 0 λ],

diag(J_2(λ), J_2(λ)) = [λ 1 0 0; 0 λ 0 0; 0 0 λ 1; 0 0 0 λ],

diag(J_3(λ), J_1(λ)) = [λ 1 0 0; 0 λ 1 0; 0 0 λ 0; 0 0 0 λ],

J_4(λ) = [λ 1 0 0; 0 λ 1 0; 0 0 λ 1; 0 0 0 λ].
The first form given is diagonal, so it corresponds to the case in which A has four linearly independent eigenvectors associated with the eigenvalue λ. The second and last forms correspond to A having three and one linearly independent eigenvectors, respectively. If A has two linearly independent eigenvectors, then it will be similar to either the third or the fourth matrix given.

6. THE SCHUR DECOMPOSITION

Our next result can be viewed as another generalization of the spectral decomposition theorem to any square matrix A. The diagonalization theorem and the Jordan decomposition were generalizations of the spectral decomposition in which our goal was to obtain a diagonal or "nearly" diagonal matrix. Now,
instead, we focus on the orthogonal matrix employed in the spectral decomposition theorem. Specifically, if we restrict attention only to orthogonal matrices X, what is the simplest structure that we can get for X'AX? It turns out that, for the general case of any real square matrix A, we can find an X such that X*AX is a triangular matrix, where we have broadened the choice of X to include all unitary matrices. Recall that a real unitary matrix is an orthogonal matrix and that, in general, X is unitary if X*X = I, where X* is the transpose of the complex conjugate of X. This decomposition, sometimes referred to as the Schur decomposition, is given in the following theorem.
Theorem 4.12. Let A be an m × m matrix. Then there exists an m × m unitary matrix X such that

X*AX = T,

where T is an upper triangular matrix with the eigenvalues of A as its diagonal elements.
Proof. Let λ_1, ..., λ_m be the eigenvalues of A, and let y_1 be an eigenvector of A corresponding to λ_1, normalized so that y_1*y_1 = 1. Let Y be any m × m unitary matrix having y_1 as its first column. Writing Y in partitioned form as Y = [y_1 Y_2], we see that, since Ay_1 = λ_1 y_1 and Y_2*y_1 = 0,

Y*AY = [λ_1 y_1*AY_2; 0 B],

where the (m - 1) × (m - 1) matrix B = Y_2*AY_2. Using the identity above and the cofactor expansion formula for a determinant, it follows that the characteristic equation of Y*AY is

(λ_1 - λ)|B - λI_{m-1}| = 0,

and, since by Theorem 3.2(d) the eigenvalues of Y*AY are the same as those of A, the eigenvalues of B must be λ_2, ..., λ_m. Now if m = 2, then the scalar B must equal λ_2 and Y*AY is upper triangular, so the proof is complete. For m > 2, we proceed by induction; that is, we show that if our result holds for (m - 1) × (m - 1) matrices, then it must also hold for m × m matrices. Since B is (m - 1) × (m - 1), we may assume that there exists a unitary matrix W such that W*BW = T_2, where T_2 is an upper triangular matrix with diagonal elements λ_2, ..., λ_m. Define the m × m matrix U by
U = [1 0'; 0 W],

and note that U is unitary since W is. If we let X = YU, then X is also unitary and

X*AX = U*Y*AYU = [1 0'; 0 W*] [λ_1 y_1*AY_2; 0 B] [1 0'; 0 W] = [λ_1 y_1*AY_2W; 0 W*BW] = [λ_1 y_1*AY_2W; 0 T_2],

where this final matrix is upper triangular with λ_1, ..., λ_m as its diagonal elements. Thus, the proof is complete.
If all of the eigenvalues of A are real, then there exist corresponding real eigenvectors. In this case, a real matrix X satisfying the conditions of Theorem 4.12 can be found. Consequently, we have the following result.
Corollary 4.12.1. If the m × m matrix A has real eigenvalues, then there exists an m × m orthogonal matrix X such that X'AX = T, where T is an upper triangular matrix.
Example 4.10. Consider the 3 × 3 matrix given by

A = [5 -3 3; 4 -2 3; 4 -4 5].

In Example 3.1, the eigenvalues of A were shown to be λ_1 = 1, λ_2 = 2, and λ_3 = 5, with eigenvectors x_1 = (0, 1, 1)', x_2 = (1, 1, 0)', and x_3 = (1, 1, 1)', respectively. We will find an orthogonal matrix X and an upper triangular matrix T so that A = XTX'. First, we construct an orthogonal matrix Y having a normalized version of x_1 as its first column; for instance, by inspection we set

Y = [0 0 1; 1/√2 1/√2 0; 1/√2 -1/√2 0].

Thus, our first stage yields
Y'AY = [1 -7 4√2; 0 2 0; 0 -3√2 5].

The 2 × 2 matrix

B = [2 0; -3√2 5]

has a normalized eigenvector (1/√3, √2/√3)', and so we can construct an orthogonal matrix

W = [1/√3 -√2/√3; √2/√3 1/√3],

for which

W'BW = [2 3√2; 0 5].
Putting it all together, we have

X = Y [1 0'; 0 W] = [0 2/√6 1/√3; 1/√2 1/√6 -1/√3; 1/√2 -1/√6 1/√3],

and

T = X'AX = [1 1/√3 11√2/√3; 0 2 3√2; 0 0 5].
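The decomposition just obtained can be verified numerically; the sketch below (not from the text) restates A, X, and T with explicit square roots and checks orthogonality and the factorization.

```python
import numpy as np

# Numerical check of the Schur decomposition A = X T X' of Example 4.10 (a sketch).
A = np.array([[5.0, -3.0, 3.0],
              [4.0, -2.0, 3.0],
              [4.0, -4.0, 5.0]])
s2, s3, s6 = np.sqrt([2.0, 3.0, 6.0])
X = np.array([[0.0,   2/s6,  1/s3],
              [1/s2,  1/s6, -1/s3],
              [1/s2, -1/s6,  1/s3]])
T = np.array([[1.0, 1/s3, 11*s2/s3],
              [0.0, 2.0,  3*s2],
              [0.0, 0.0,  5.0]])
assert np.allclose(X.T @ X, np.eye(3))   # X is orthogonal
assert np.allclose(X @ T @ X.T, A)       # A = X T X'
assert np.allclose(np.diag(T), [1.0, 2.0, 5.0])  # eigenvalues on the diagonal
```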
The matrices X and T in the Schur decomposition are not unique; that is, if A = XTX* is a Schur decomposition of A, then A = X_0T_0X_0* is also, where X_0 = XP and P is any unitary matrix for which P*TP = T_0 is upper triangular. The triangular matrices T and T_0 must have the same diagonal elements, possibly ordered differently. Otherwise, however, the two matrices T and T_0 may be
quite different. For example, it can be easily verified that the matrices

X_0 = [1/√3 2/√6 0; 1/√3 -1/√6 1/√2; 1/√3 -1/√6 -1/√2],    T_0 = [5 8/√2 -20/√6; 0 1 1/√3; 0 0 2]

give another Schur decomposition of the matrix A of Example 4.10.

In Chapter 3, by utilizing the characteristic equation of the m × m matrix A, we were able to prove that the determinant of A equals the product of its eigenvalues, while the trace of A equals the sum of its eigenvalues. These results are also very easily proven using the Schur decomposition of A. If the eigenvalues of A are λ_1, ..., λ_m and A = XTX* is a Schur decomposition of A, then it follows that
|A| = |XTX*| = |X*X| |T| = |T| = Π_{i=1}^{m} λ_i,

since |X*X| = 1 follows from the fact that X is a unitary matrix, and the determinant of a triangular matrix is the product of its diagonal elements. Also, using properties of the trace of a matrix, we have

tr(A) = tr(XTX*) = tr(X*XT) = tr(T) = Σ_{i=1}^{m} λ_i.
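Both identities are easy to confirm numerically on a random matrix; a sketch (not from the text):

```python
import numpy as np

# det(A) = product of eigenvalues, tr(A) = sum of eigenvalues (a sketch).
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
lam = np.linalg.eigvals(A)               # possibly complex eigenvalues
# for a real A the imaginary parts cancel in the product and the sum
assert np.isclose(np.linalg.det(A), np.prod(lam).real)
assert np.isclose(np.trace(A), np.sum(lam).real)
```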
The Schur decomposition also provides a method of easily establishing the fact that the number of nonzero eigenvalues of a matrix serves as a lower bound for the rank of that matrix. This is the subject of our next theorem.
Theorem 4.13. Suppose the m × m matrix A has r nonzero eigenvalues. Then rank(A) ≥ r.

Proof. Let X be a unitary matrix and T an upper triangular matrix such that A = XTX*. Since the eigenvalues of A are the diagonal elements of T, T must have exactly r nonzero diagonal elements. The r × r submatrix of T, formed by deleting the columns and rows occupied by the zero diagonal elements of T, will be upper triangular with nonzero diagonal elements. This submatrix will be nonsingular, since the determinant of a triangular matrix is the product of its diagonal elements, and so we must have rank(T) ≥ r. The result then follows from the fact that, since X is unitary, it must be nonsingular, so

rank(A) = rank(XTX*) = rank(T) ≥ r.
7. THE SIMULTANEOUS DIAGONALIZATION OF TWO SYMMETRIC MATRICES

We have already discussed in Section 3.7 one manner in which two symmetric matrices can be simultaneously diagonalized. We restate this result in the following theorem.

Theorem 4.14. Let A and B be m × m symmetric matrices, with A being nonnegative definite and B positive definite. Let Λ = diag(λ_1, ..., λ_m), where λ_1, ..., λ_m are the eigenvalues of B^{-1}A. Then there exists a nonsingular matrix C such that

CBC' = I_m,    CAC' = Λ.
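One standard way to construct such a C numerically (a sketch; the symmetric square root route used here is one convenient choice, not necessarily the book's) is to orthogonally diagonalize B^{-1/2}AB^{-1/2} and set C = U'B^{-1/2}.

```python
import numpy as np

# Constructing the C of Theorem 4.14 (a sketch with made-up A and B).
rng = np.random.default_rng(3)
m = 4
G = rng.standard_normal((m, m))
B = G @ G.T + m * np.eye(m)              # positive definite
H = rng.standard_normal((m, 2))
A = H @ H.T                              # nonnegative definite

w, V = np.linalg.eigh(B)
B_inv_half = V @ np.diag(1 / np.sqrt(w)) @ V.T   # symmetric B^{-1/2}
lam, U = np.linalg.eigh(B_inv_half @ A @ B_inv_half)
C = U.T @ B_inv_half

assert np.allclose(C @ B @ C.T, np.eye(m))       # CBC' = I
assert np.allclose(C @ A @ C.T, np.diag(lam))    # CAC' = Lambda
# the diagonal elements are the eigenvalues of B^{-1} A
assert np.allclose(np.sort(lam),
                   np.sort(np.linalg.eigvals(np.linalg.inv(B) @ A).real))
```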
Example 4.11. One application of the simultaneous diagonalization described in the theorem above is in a multivariate analysis commonly referred to as canonical variate analysis (see Krzanowski, 1988, or Mardia, Kent, and Bibby, 1979). This analysis involves data from the multivariate one-way classification model discussed in Example 3.14, so that we have independent random samples from k different groups or treatments, with the ith sample of m × 1 vectors given by y_{i1}, ..., y_{in_i}. The model is

y_{ij} = μ_i + ε_{ij},

where μ_i is an m × 1 vector of constants and ε_{ij} ~ N_m(0, Ω). In Example 3.14, we saw how the matrices

B = Σ_{i=1}^{k} n_i(ȳ_i - ȳ)(ȳ_i - ȳ)',    W = Σ_{i=1}^{k} Σ_{j=1}^{n_i} (y_{ij} - ȳ_i)(y_{ij} - ȳ_i)',
where

ȳ_i = Σ_{j=1}^{n_i} y_{ij} / n_i,    ȳ = Σ_{i=1}^{k} n_i ȳ_i / n,    n = Σ_{i=1}^{k} n_i,
could be used to test the hypothesis H_0: μ_1 = ... = μ_k. Canonical variate analysis is an analysis of the differences in the mean vectors, performed when this hypothesis is rejected. This analysis is particularly useful when the differences between the vectors μ_1, ..., μ_k are confined, or nearly confined, to some lower-dimensional subspace of R^m. Note that if these vectors span an r-dimensional subspace of R^m, then the population version of B,
Ψ = Σ_{i=1}^{k} n_i(μ_i - μ̄)(μ_i - μ̄)',

where μ̄ = Σ n_i μ_i / n, will have rank r; in fact, the eigenvectors of Ψ corresponding to its positive eigenvalues will span this r-dimensional space. Thus, a plot of the projections of μ_1, ..., μ_k onto this subspace will yield a reduced-dimension diagram of the population means. Unfortunately, if Ω ≠ I_m, it will be difficult to interpret the differences in these mean vectors, since Euclidean distance would not be appropriate. This difficulty can be resolved by analyzing the transformed data Ω^{-1/2}y_{ij}, where Ω^{1/2}Ω^{1/2} = Ω, since Ω^{-1/2}y_{ij} ~ N_m(Ω^{-1/2}μ_i, I_m). Thus, we would plot the projections of Ω^{-1/2}μ_1, ..., Ω^{-1/2}μ_k onto the subspace spanned by the eigenvectors of Ω^{-1/2}ΨΩ^{-1/2} corresponding to its r positive eigenvalues; that is, if the spectral decomposition of Ω^{-1/2}ΨΩ^{-1/2} is given by P_1Λ_1P_1', where P_1 is an m × r matrix satisfying P_1'P_1 = I_r and Λ_1 is an r × r diagonal matrix, then we could simply plot the vectors P_1'Ω^{-1/2}μ_1, ..., P_1'Ω^{-1/2}μ_k in R^r. The r components of the vector ν_i = P_1'Ω^{-1/2}μ_i in this r-dimensional space are called the canonical variate means for the ith population. Note that in obtaining these canonical variates we have essentially used the simultaneous diagonalization of Ψ and Ω, since if C' = (C_1', C_2') satisfies
CΨC' = [Λ_1 (0); (0) (0)],    CΩC' = I_m,
then we can take C_1 = P_1'Ω^{-1/2}. When μ_1, ..., μ_k are unknown, the canonical variate means can be estimated by the sample canonical variate means, which are computed using the sample means ȳ_1, ..., ȳ_k and the corresponding simultaneous diagonalization of B and W.
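The canonical variate construction can be sketched numerically. Everything below (group sizes, the means μ_i, and Ω) is made up for illustration; the code forms Ψ, projects the whitened means onto the eigenvectors for the positive eigenvalues, and checks that C_1 = P_1'Ω^{-1/2} satisfies C_1ΩC_1' = I_r.

```python
import numpy as np

# Canonical variate means, Example 4.11 style (a sketch with invented data).
rng = np.random.default_rng(4)
m, k = 3, 4
n_i = np.array([10.0, 12.0, 9.0, 11.0])
mus = rng.standard_normal((k, m))        # hypothetical population means
G = rng.standard_normal((m, m))
Omega = G @ G.T + m * np.eye(m)          # hypothetical covariance

w, V = np.linalg.eigh(Omega)
Om_inv_half = V @ np.diag(1 / np.sqrt(w)) @ V.T  # symmetric Omega^{-1/2}
mu_bar = (n_i[:, None] * mus).sum(axis=0) / n_i.sum()
Psi = sum(n * np.outer(mu - mu_bar, mu - mu_bar) for n, mu in zip(n_i, mus))

lam, P = np.linalg.eigh(Om_inv_half @ Psi @ Om_inv_half)
r = int((lam > 1e-8).sum())              # rank of Psi
P1 = P[:, -r:]                           # eigenvectors for positive eigenvalues
nus = mus @ Om_inv_half @ P1             # row i holds the canonical variate means nu_i'
assert nus.shape == (k, r)

C1 = P1.T @ Om_inv_half                  # C_1 = P_1' Omega^{-1/2}
assert np.allclose(C1 @ Omega @ C1.T, np.eye(r))
```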
The matrix C that diagonalizes A and B in Theorem 4.14 is nonsingular but not necessarily orthogonal. Further, the diagonal elements of the two diagonal matrices are not the eigenvalues of A nor of B. The sort of diagonalization we consider next will be useful in our study of quadratic forms in normal random vectors in Chapter 9; we would like to know whether or not there exists an orthogonal matrix that diagonalizes both A and B. The following result gives a necessary and sufficient condition for such an orthogonal matrix to exist.
Theorem 4.15. Suppose that A and B are m × m symmetric matrices. Then there exists an orthogonal matrix P such that P'AP and P'BP are both diagonal if and only if A and B commute; that is, if and only if AB = BA.

Proof. First suppose that such an orthogonal matrix does exist; that is, there is an orthogonal matrix P such that P'AP = Λ_1 and P'BP = Λ_2, where Λ_1 and Λ_2 are diagonal matrices. Then, since Λ_1 and Λ_2 are diagonal matrices, clearly Λ_1Λ_2 = Λ_2Λ_1, so we have

AB = PΛ_1P'PΛ_2P' = PΛ_1Λ_2P' = PΛ_2Λ_1P' = PΛ_2P'PΛ_1P' = BA,

and, hence, A and B do commute. Conversely, now assuming that AB = BA, we need to show that such an orthogonal matrix P does exist. Let μ_1, ..., μ_h be the distinct values of the eigenvalues of A, having multiplicities r_1, ..., r_h, respectively. Since A is symmetric, there exists an orthogonal matrix Q satisfying

Q'AQ = Λ_1 = diag(μ_1 I_{r_1}, ..., μ_h I_{r_h}).
Performing this same transformation on B, and partitioning the resulting matrix in the same way that Q'AQ has been partitioned, we get
Q'BQ = C = [C_11 C_12 ... C_1h; C_21 C_22 ... C_2h; ... ; C_h1 C_h2 ... C_hh],

where C_ij is r_i × r_j.
Note that since AB = BA, we must have

Λ_1C = Q'AQQ'BQ = Q'ABQ = Q'BAQ = Q'BQQ'AQ = CΛ_1.
Equating the (i, j)th submatrix of Λ_1C to the (i, j)th submatrix of CΛ_1 yields the identity μ_iC_ij = μ_jC_ij. Since μ_i ≠ μ_j if i ≠ j, we must have C_ij = (0) if i ≠ j; that is, the matrix C = diag(C_11, ..., C_hh) is block diagonal. Now since C is symmetric, so also is C_ii for each i, and, thus, we can find an r_i × r_i orthogonal matrix X_i satisfying
X_i'C_iiX_i = Δ_i,

where Δ_i is diagonal. Let P = QX, where X is the block diagonal matrix X = diag(X_1, ..., X_h), and note that

P'P = X'Q'QX = X'X = diag(X_1'X_1, ..., X_h'X_h) = diag(I_{r_1}, ..., I_{r_h}) = I_m,

so that P is orthogonal. Finally, the matrix Δ = diag(Δ_1, ..., Δ_h) is diagonal and
P'AP = X'Q'AQX = X'Λ_1X = diag(X_1', ..., X_h') diag(μ_1 I_{r_1}, ..., μ_h I_{r_h}) diag(X_1, ..., X_h) = diag(μ_1 X_1'X_1, ..., μ_h X_h'X_h) = diag(μ_1 I_{r_1}, ..., μ_h I_{r_h}) = Λ_1,

and

P'BP = X'Q'BQX = X'CX = diag(X_1', ..., X_h') diag(C_11, ..., C_hh) diag(X_1, ..., X_h) = diag(X_1'C_11X_1, ..., X_h'C_hhX_h) = diag(Δ_1, ..., Δ_h) = Δ,

and so the proof is complete.
The columns of the matrix P are eigenvectors of A as well as of B; that is, A and B commute if and only if the two matrices have common eigenvectors. Also, note that since A and B are symmetric, (AB)' = B'A' = BA, and so AB = BA if and only if AB is symmetric. The previous theorem easily generalizes to a collection of symmetric matrices.
Theorem 4.16. Let A_1, ..., A_k be m × m symmetric matrices. Then there exists an orthogonal matrix P such that P'A_iP = Λ_i is diagonal for each i if and only if A_iA_j = A_jA_i for all pairs (i, j).

The two previous theorems involving symmetric matrices are special cases of more general results regarding diagonalizable matrices. For instance, Theorem 4.16 is a special case of the following result. The proof, which is similar to that given for Theorem 4.15, is left as an exercise.

Theorem 4.17. Suppose that each of the m × m matrices A_1, ..., A_k is diagonalizable. Then there exists a nonsingular matrix X such that X^{-1}A_iX = Λ_i is diagonal for each i if and only if A_iA_j = A_jA_i for all pairs (i, j).
8. MATRIX NORMS

In Chapter 2, we saw that vector norms can be used to measure the size of a vector. Similarly, we may be interested in measuring the size of an m × m matrix A or measuring the closeness of A to another m × m matrix B. Matrix norms will provide the means to do this. In a later chapter, we will need to apply some of our results on matrix norms to matrices that are possibly complex. Consequently, throughout this section, we will not be restricting attention only to real matrices.
Definition 4.3. A function ||A|| defined on all m × m matrices A, real or complex, is a matrix norm if the following conditions hold for all m × m matrices A and B:

(a) ||A|| ≥ 0.
(b) ||A|| = 0 if and only if A = (0).
(c) ||cA|| = |c| ||A|| for any complex scalar c.
(d) ||A + B|| ≤ ||A|| + ||B||.
(e) ||AB|| ≤ ||A|| ||B||.
Any vector norm defined on m² × 1 vectors, when applied to the m² × 1 vector formed by stacking the columns of A, one on top of the other, will satisfy conditions (a)-(d), since these are the conditions of a vector norm. However, condition (e), which relates the sizes of A and B to that of AB, will not necessarily hold for vector norms; that is, not all vector norms can be used as matrix norms.

We now give examples of some commonly encountered matrix norms. We will leave it to the reader to verify that these functions, in fact, satisfy the conditions of Definition 4.3. The Euclidean matrix norm is simply the Euclidean vector norm computed on the stacked columns of A, and so is given by

||A||_E = (Σ_{i=1}^{m} Σ_{j=1}^{m} |a_ij|²)^{1/2} = {tr(A*A)}^{1/2}.
The maximum column sum matrix norm is given by

||A||_1 = max_{1≤j≤m} Σ_{i=1}^{m} |a_ij|,

while the maximum row sum matrix norm is given by

||A||_∞ = max_{1≤i≤m} Σ_{j=1}^{m} |a_ij|.
The spectral norm utilizes the eigenvalues of A*A; in particular, if μ_1, ..., μ_m are the eigenvalues of A*A, then the spectral norm is given by

||A||_2 = (max_{1≤i≤m} μ_i)^{1/2}.
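The four norms above correspond to `numpy.linalg.norm` with orders 'fro', 1, inf, and 2; a quick check (a sketch, with a made-up matrix):

```python
import numpy as np

# Euclidean (Frobenius), column-sum, row-sum and spectral norms (a sketch).
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A.T @ A)))
assert np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())       # max column sum = 6
assert np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())  # max row sum = 7
mu = np.linalg.eigvalsh(A.T @ A)          # eigenvalues of A*A
assert np.isclose(np.linalg.norm(A, 2), np.sqrt(mu.max()))
```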
We will find the following theorem useful. The proof, which simply involves the verification of the conditions of Definition 4.3, is left to the reader as an exercise.

Theorem 4.18. Let ||A|| be any matrix norm defined on m × m matrices. If C is an m × m nonsingular matrix, then the function defined by

||A||_C = ||C^{-1}AC||
is also a matrix norm.

The eigenvalues of a matrix A play an important role in the study of matrix norms of A. Particularly important is the maximum modulus of this set of eigenvalues.

Definition 4.4. Let λ_1, ..., λ_m be the eigenvalues of the m × m matrix A. The spectral radius of A, denoted ρ(A), is defined to be

ρ(A) = max_{1≤i≤m} |λ_i|.
Although ρ(A) does give us some information about the size of A, it is not a matrix norm itself. To see this, consider the case in which m = 2 and

A = [0 1; 0 0].

Both of the eigenvalues of A are 0, so ρ(A) = 0 even though A is not the null matrix; that is, ρ(A) violates condition (b) of Definition 4.3. The following result shows that ρ(A) actually serves as a lower bound for any matrix norm of A.

Theorem 4.19. For any m × m matrix A and any matrix norm ||A||, ρ(A) ≤ ||A||.
IIA II· Suppose that > is an eigenvalue of A for which IA I = p(A), and let x be a corresponding eigenvector, so that Ax = AX. Then xl~ is an m x m matrix satisfying Ax1~ = >x1~, and so using properties (c) and (e) of matrix nOlIns, we find that
Proof
p(A) IIxl~ II = 1>llIxl~ II =
IIAxl;1I11 = IIAxt;1I11
$
IIA IIl1xl~ II.
The result now follows by dividing the equation above by •
IIxl;lIl1.
0
•
MATRIX FACTORIZATIONS AND MATRIX NORMS
160
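The nilpotent matrix used above makes the failure of p(A) as a norm concrete; a minimal pure-Python check is sketched below, with the eigenvalues obtained from the 2×2 characteristic polynomial.

```python
# Sketch: for A = [[0,1],[0,0]], both eigenvalues are 0, so p(A) = 0, yet A
# is not the null matrix and any matrix norm of A (e.g. Euclidean) is positive.
import math

A = [[0.0, 1.0],
     [0.0, 0.0]]

# Eigenvalues of a 2x2 matrix from lam^2 - tr*lam + det = 0 (real case here).
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr ** 2 - 4 * det
roots = [(tr + math.sqrt(disc)) / 2, (tr - math.sqrt(disc)) / 2]
rho = max(abs(r) for r in roots)            # spectral radius p(A)

frob = math.sqrt(sum(x * x for row in A for x in row))   # one matrix norm
```

Here rho comes out 0 while frob is 1, illustrating the strict inequality p(A) < ||A|| that Theorem 4.19 permits.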
Although the spectral radius of A is at least as small as every norm of A, our next result shows that we can always find a matrix norm so that ||A|| is arbitrarily close to p(A).

Theorem 4.20. For any m × m matrix A and any scalar ε > 0, there exists a matrix norm, ||A||_{A,ε}, such that

  ||A||_{A,ε} − p(A) ≤ ε.

Proof. Let A = XTX* be the Schur decomposition of A, so that X is a unitary matrix and T is an upper triangular matrix with the eigenvalues of A, λ_1, ..., λ_m, as its diagonal elements. For any scalar c > 0, define the matrix D_c = diag(c, c², ..., c^m) and note that the diagonal elements of the upper triangular matrix D_cTD_c^{-1} are also λ_1, ..., λ_m. Further, the ith column sum of D_cTD_c^{-1} is given by

  |λ_i| + Σ_{j=1}^{i-1} c^{-(i-j)} |t_ji|.

Clearly, by choosing c large enough, we can guarantee that

  Σ_{j=1}^{i-1} c^{-(i-j)} |t_ji| < ε.
GENERALIZED INVERSES

Since rank(A) = r > 0, we know from Corollary 4.1.1 that there exist m × r and n × r matrices P and Q such that P'P = Q'Q = I_r and A = PΔQ', where Δ is a diagonal matrix with positive diagonal elements. Define A+ = QΔ^{-1}P', and note that

  AA+A = PΔQ'QΔ^{-1}P'PΔQ' = PΔQ' = A,
  A+AA+ = QΔ^{-1}P'PΔQ'QΔ^{-1}P' = QΔ^{-1}P' = A+,
  AA+ = PΔQ'QΔ^{-1}P' = PP' is symmetric,
  A+A = QΔ^{-1}P'PΔQ' = QQ' is symmetric.

Thus, A+ = QΔ^{-1}P' is a Moore-Penrose inverse of A, and so we have established the existence of the Moore-Penrose inverse. Next, suppose that B and C are any two matrices satisfying conditions (5.1)-(5.4) for A+. Then using these four conditions we find that

  AB = (AB)' = B'A' = B'(ACA)' = B'A'C'A' = (AB)'(AC)' = ABAC = AC,
  BA = (BA)' = A'B' = (ACA)'B' = A'C'A'B' = (CA)'(BA)' = CABA = CA.

Now using these two identities, we see that

  B = BAB = BAC = CAC = C.

Since B and C are identical, the Moore-Penrose inverse is unique.
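The construction in the existence proof can be checked directly on a small case. The sketch below uses a hypothetical rank-1 matrix A = [1 1; 1 1], for which one may take P = Q = (1/√2, 1/√2)' and Δ = [2], giving A+ = QΔ^{-1}P' = (1/4)[1 1; 1 1]; exact rational arithmetic then verifies all four Penrose conditions.

```python
# Sketch: verify conditions (5.1)-(5.4) for A+ built as Q Δ^{-1} P' from the
# full-rank factorization of a hypothetical rank-1 matrix A = [[1,1],[1,1]].
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

A = [[F(1), F(1)], [F(1), F(1)]]
Ap = [[F(1, 4), F(1, 4)], [F(1, 4), F(1, 4)]]   # Q Δ^{-1} P' worked out by hand

AAp, ApA = matmul(A, Ap), matmul(Ap, A)
cond1 = matmul(AAp, A) == A          # (5.1)  A A+ A = A
cond2 = matmul(ApA, Ap) == Ap        # (5.2)  A+ A A+ = A+
cond3 = AAp == transpose(AAp)        # (5.3)  A A+ symmetric
cond4 = ApA == transpose(ApA)        # (5.4)  A+ A symmetric
```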
We saw in the proof of Theorem 5.1 that the Moore-Penrose inverse of a matrix A is explicitly related to the singular value decomposition of A; that is, this inverse is nothing more than a very simple function of the component matrices making up the singular value decomposition of A.

Definition 5.1 is the definition of a generalized inverse given by Penrose (1955). The following alternative definition, which we will find useful on some occasions, is the original definition given by Moore (1935). This definition utilizes the concept of projection matrices that we discussed in Chapter 2. Recall that if S is a vector subspace of R^m and P_S is its projection matrix, then for any x in R^m, P_Sx gives the orthogonal projection of x onto S, while x − P_Sx is the component of x orthogonal to S; further, this unique matrix P_S is given by x_1x_1' + ... + x_rx_r', where {x_1, ..., x_r} is any orthonormal basis for S.

Definition 5.2. Let A be an m × n matrix. Then the Moore-Penrose inverse of A is the unique n × m matrix A+ satisfying

  AA+ = P_R(A),   A+A = P_R(A+),

where P_R(A) and P_R(A+) are the projection matrices of the range spaces of A and A+, respectively.

The equivalence of Definitions 5.1 and 5.2 is not immediately obvious. Consequently, we will establish it in the next theorem.

Theorem 5.2. Definition 5.2 is equivalent to Definition 5.1.

Proof. We first show that a matrix A+ satisfying Definition 5.2 must also satisfy Definition 5.1. Conditions (5.3) and (5.4) follow immediately since, by definition, a projection matrix is symmetric, while (5.1) and (5.2) follow since the columns of A are in R(A), which implies that

  AA+A = P_R(A)A = A,

and the columns of A+ are in R(A+), which implies that

  A+AA+ = P_R(A+)A+ = A+.

Conversely, now suppose that A+ satisfies Definition 5.1. Premultiplying (5.2) by A yields the identity

  (AA+)(AA+) = AA+,

which along with (5.3) shows that AA+ is idempotent and symmetric and thus by Theorem 2.19 is a projection matrix. To show that it is the projection matrix of the range space of A, note that for any matrices B and C for which BC is defined, R(BC) ⊆ R(B). Using this twice along with (5.1), we find that

  R(A) = R(AA+A) ⊆ R(AA+) ⊆ R(A),

so that R(AA+) = R(A). This proves that P_R(A) = AA+. A proof of P_R(A+) = A+A is obtained in a similar fashion using (5.1) and (5.4).
3. SOME BASIC PROPERTIES OF THE MOORE-PENROSE INVERSE

In this section, we will establish some of the basic properties of the Moore-Penrose inverse, while in some of the subsequent sections, we will look at some more specialized results. First, we have the following theorem.

Theorem 5.3. Let A be an m × n matrix. Then

(a) (aA)+ = a^{-1}A+, if a ≠ 0 is a scalar,
(b) (A')+ = (A+)',
(c) (A+)+ = A,
(d) A+ = A^{-1}, if A is square and nonsingular,
(e) (A'A)+ = A+A+' and (AA')+ = A+'A+,
(f) (AA+)+ = AA+ and (A+A)+ = A+A,
(g) A+ = (A'A)+A' = A'(AA')+,
(h) A+ = (A'A)^{-1}A' and A+A = I_n, if rank(A) = n,
(i) A+ = A'(AA')^{-1} and AA+ = I_m, if rank(A) = m,
(j) A+ = A', if the columns of A are orthogonal, that is, A'A = I_n.

Proof. Each part is proven by simply verifying that the stated inverse satisfies conditions (5.1)-(5.4). Here, we will only verify that (A'A)+ = A+A+', given in (e), and leave the remaining proofs to the reader. Since A+ satisfies the four conditions of a Moore-Penrose inverse, we find that

  A'A(A+A+')A'A = A'AA+(AA+)'A = A'AA+AA+A = A'AA+A = A'A,
  (A+A+')A'A(A+A+') = (A+A)(A+AA+)A+' = A+AA+A+' = A+A+',

so that A+A+' satisfies conditions (5.1) and (5.2) of (A'A)+. In addition, note that

  (A'A)(A+A+') = A'(AA+)'A+' = (AA+A)'A+' = A'A+' = (A+A)',

and A+A must be symmetric by definition, so condition (5.3) is satisfied for (A'A)+ = A+A+'. Likewise, condition (5.4) holds since

  (A+A+')(A'A) = A+(AA+)'A = A+AA+A = A+A.

This then proves that (A'A)+ = A+A+'.
Example 5.1. Properties (h) and (i) of Theorem 5.3 give useful ways of computing the Moore-Penrose inverse of matrices that have full column rank or full row rank. We will demonstrate this by finding the Moore-Penrose inverses of

  a = (1, 1)'   and   A = [1 2 1; 2 1 0].

From property (h), for any vector a ≠ 0, a+ will be given by (a'a)^{-1}a', so here we find that

  a+ = [0.5  0.5].

For A, we can use property (i) since rank(A) = 2. Computing AA' and (AA')^{-1}, we get

  AA' = [6 4; 4 5],   (AA')^{-1} = (1/14)[5 −4; −4 6],

and so

  A+ = A'(AA')^{-1} = [1 2; 2 1; 1 0](1/14)[5 −4; −4 6] = (1/14)[−3 8; 6 −2; 5 −4].

Our next result establishes a relationship between the rank of a matrix and the rank of its Moore-Penrose inverse.

Theorem 5.4. For any m × n matrix A, rank(A+) = rank(A).

Proof. Using condition (5.1) and the fact that the rank of a matrix product cannot exceed the rank of any of the matrices in the product, we find that

  rank(A) = rank(AA+A) ≤ rank(A+).   (5.5)

In a similar fashion, using condition (5.2), we get

  rank(A+) = rank(A+AA+) ≤ rank(A).   (5.6)

The result follows immediately from (5.5) and (5.6).
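The computations of Example 5.1 can be reproduced exactly with rational arithmetic; the sketch below applies properties (h) and (i) of Theorem 5.3 to the same a and A, with a small hand-coded 2×2 inverse helper introduced just for this illustration.

```python
# Check of Example 5.1: a+ = (a'a)^{-1} a' and A+ = A'(AA')^{-1} for the
# full-row-rank matrix A = [1 2 1; 2 1 0], using exact fractions.
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):  # helper: inverse of a nonsingular 2x2 matrix
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

a = [[F(1)], [F(1)]]
a_plus = [[F(1, 2), F(1, 2)]]                    # (a'a)^{-1} a' = [0.5  0.5]

A = [[F(1), F(2), F(1)],
     [F(2), F(1), F(0)]]
At = [list(r) for r in zip(*A)]
AAt = matmul(A, At)                              # [[6, 4], [4, 5]]
A_plus = matmul(At, inv2(AAt))                   # (1/14)[[-3,8],[6,-2],[5,-4]]
```

Since A has full row rank, property (i) also guarantees AA+ = I_2, which the test below confirms.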
We have seen through Definition 5.2 and Theorem 5.2 that A+A is the projection matrix of the range of A+. It will also be the projection matrix of the range of any matrix B satisfying rank(B) = rank(A+) and A+AB = B. For instance, from Theorem 5.4 we have rank(A') = rank(A+) and

  A+AA' = (A+A)'A' = (AA+A)' = A',

so A+A is also the projection matrix of the range of A'; that is, P_R(A') = A+A.

Our next result summarizes some of the special properties possessed by the Moore-Penrose inverse of a symmetric matrix.

Theorem 5.5. Let A be an m × m symmetric matrix. Then

(a) A+ is also symmetric,
(b) AA+ = A+A,
(c) A+ = A, if A is idempotent.

Proof. Using Theorem 5.3(b) and the fact that A = A', we have

  A+ = (A')+ = (A+)',

which then proves (a). To prove (b), note that it follows from condition (5.3) of the Moore-Penrose inverse of a matrix, along with the symmetry of both A and A+, that

  AA+ = (AA+)' = A+'A' = A+A.

Finally, (c) is established by verifying the four conditions of the Moore-Penrose inverse for A+ = A, when A² = A. For instance, both conditions (5.1) and (5.2) hold since

  AAA = A²A = AA = A² = A,

while conditions (5.3) and (5.4) hold because

  (AA)' = A'A' = AA.
In the proof of Theorem 5.1, we saw that the Moore-Penrose inverse of any matrix can be conveniently expressed in terms of the components involved in the singular value decomposition of that matrix. Likewise, in the special case of a symmetric matrix, we will be able to write the Moore-Penrose inverse in terms of the components of the spectral decomposition of that matrix; that is, in terms of its eigenvalues and eigenvectors. Before identifying this relationship, we first consider the Moore-Penrose inverse of a diagonal matrix. The proof of this result, which simply involves the verification of conditions (5.1)-(5.4), is left to the reader.

Theorem 5.6. Let Λ be the m × m diagonal matrix diag(λ_1, ..., λ_m). Then the Moore-Penrose inverse of Λ, Λ+, is the diagonal matrix diag(λ_1+, ..., λ_m+), where

  λ_i+ = λ_i^{-1}, if λ_i ≠ 0,
  λ_i+ = 0,        if λ_i = 0.

Theorem 5.7. Let x_1, ..., x_m be a set of orthonormal eigenvectors corresponding to the eigenvalues λ_1, ..., λ_m of the m × m symmetric matrix A. If we define Λ = diag(λ_1, ..., λ_m) and X = (x_1, ..., x_m), then A+ = XΛ+X'.

Proof. Let r = rank(A), and suppose that we have ordered the λ_i's so that λ_{r+1} = ... = λ_m = 0. Partition X as X = [X_1, X_2], where X_1 is m × r, and partition Λ in block diagonal form as Λ = diag(Λ_1, (0)), where Λ_1 = diag(λ_1, ..., λ_r). Then, the spectral decomposition of A is given by A = XΛX' = X_1Λ_1X_1', and similarly the expression for A+ reduces to A+ = X_1Λ_1^{-1}X_1'. Thus, since X_1'X_1 = I_r, we have

  AA+ = X_1Λ_1X_1'X_1Λ_1^{-1}X_1' = X_1X_1',

which is clearly symmetric, so condition (5.3) is satisfied. Similarly, A+A = X_1X_1', and so (5.4) also holds. Conditions (5.1) and (5.2) hold since

  AA+A = X_1X_1'X_1Λ_1X_1' = X_1Λ_1X_1' = A,
  A+AA+ = X_1X_1'X_1Λ_1^{-1}X_1' = X_1Λ_1^{-1}X_1' = A+,

and so the proof is complete.
Example 5.2. Consider the symmetric matrix

  A = [32 16 16; 16 14 2; 16 2 14].

It is easily verified that an eigenanalysis of A reveals that it can be expressed as

  A = X diag(48, 12, 0) X',

where the columns of X are the orthonormal eigenvectors

  x_1 = (2/√6, 1/√6, 1/√6)',  x_2 = (0, −1/√2, 1/√2)',  x_3 = (1/√3, −1/√3, −1/√3)'.

Thus, using Theorem 5.7, we find that

  A+ = X diag(1/48, 1/12, 0) X' = (1/288)[4 2 2; 2 13 −11; 2 −11 13].
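The value of A+ obtained in Example 5.2 can be verified without any eigen-routines: exact fraction arithmetic confirms the four Penrose conditions, and, since A is symmetric, Theorem 5.5(b) additionally requires AA+ = A+A.

```python
# Check of Example 5.2: A+ = (1/288)[[4,2,2],[2,13,-11],[2,-11,13]] satisfies
# conditions (5.1)-(5.4) for the symmetric matrix A, and AA+ = A+A.
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[F(32), F(16), F(16)],
     [F(16), F(14), F(2)],
     [F(16), F(2), F(14)]]
Ap = [[F(v, 288) for v in row]
      for row in [[4, 2, 2], [2, 13, -11], [2, -11, 13]]]

AAp, ApA = matmul(A, Ap), matmul(Ap, A)
ok = (matmul(AAp, A) == A and matmul(ApA, Ap) == Ap
      and AAp == [list(r) for r in zip(*AAp)]
      and ApA == [list(r) for r in zip(*ApA)])
commutes = (AAp == ApA)     # Theorem 5.5(b) for symmetric A
```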
In Section 2.7, we saw that if the columns of an m × r matrix X form a basis for a vector space S, then the projection matrix of S is given by X(X'X)^{-1}X'. Definition 5.2 indicates how this can be generalized to the situation in which X is not of full column rank. Thus, using Definition 5.2 and Theorem 5.3(g), we have

  P_R(X) = XX+ = X(X'X)+X'.   (5.7)

Example 5.3. We will utilize (5.7) to obtain the projection matrix of the range of

  X = [4 3 1; 4 1 3; 0 −2 2].
The Moore-Penrose inverse of

  X'X = [32 16 16; 16 14 2; 16 2 14]

was obtained in the previous example. Using this, we find that

  P_R(X) = X(X'X)+X'
         = (1/288)[4 3 1; 4 1 3; 0 −2 2][4 2 2; 2 13 −11; 2 −11 13][4 4 0; 3 1 −2; 1 3 2]
         = (1/3)[2 1 −1; 1 2 1; −1 1 2].

This illustrates the use of (5.7). Actually, P_R(X) can be computed without ever formally computing any Moore-Penrose inverse, since P_R(X) is the total eigenprojection corresponding to the positive eigenvalues of XX'. Here we have

  XX' = [26 22 −4; 22 26 4; −4 4 8],

which has the normalized eigenvectors z_1 = (1/√2, 1/√2, 0)' and z_2 = (−1/√6, 1/√6, 2/√6)' corresponding to its two positive eigenvalues. Thus, if we let Z = (z_1, z_2), then

  P_R(X) = ZZ' = (1/3)[2 1 −1; 1 2 1; −1 1 2].
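Formula (5.7) for Example 5.3 can be checked exactly: the product X(X'X)+X' should reproduce the projection matrix above, and a projection matrix must be idempotent and leave the columns of X fixed.

```python
# Check of Example 5.3: P = X (X'X)+ X' equals (1/3)[[2,1,-1],[1,2,1],[-1,1,2]]
# for X = [[4,3,1],[4,1,3],[0,-2,2]], with (X'X)+ taken from Example 5.2.
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

X = [[F(4), F(3), F(1)],
     [F(4), F(1), F(3)],
     [F(0), F(-2), F(2)]]
Xt = [list(r) for r in zip(*X)]
XtX_plus = [[F(v, 288) for v in row]
            for row in [[4, 2, 2], [2, 13, -11], [2, -11, 13]]]

P = matmul(matmul(X, XtX_plus), Xt)
expected = [[F(v, 3) for v in row] for row in [[2, 1, -1], [1, 2, 1], [-1, 1, 2]]]
idempotent = (matmul(P, P) == P)
fixes_X = (matmul(P, X) == X)    # P projects onto the range (column space) of X
```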
Example 5.4. The Moore-Penrose inverse is useful in constructing quadratic forms in normal random vectors so that they have chi-squared distributions. This is a topic that we will investigate in more detail in Chapter 9; here we will look at a simple illustration. A common situation encountered in inferential statistics is one in which one has a sample statistic t ~ N_m(θ, Ω), and it is desired to determine whether or not the m × 1 parameter vector θ = 0; formally, we want to test the null hypothesis H_0: θ = 0 against the alternative hypothesis H_1: θ ≠ 0. One approach to this problem, if Ω is positive definite, is to base the decision between H_0 and H_1 on the statistic

  v_1 = t'Ω^{-1}t.

Now if T is any m × m matrix satisfying TT' = Ω, and we define u = T^{-1}t, then E(u) = T^{-1}θ and

  var(u) = T^{-1}{var(t)}T^{-1}' = T^{-1}(TT')T^{-1}' = I_m,

so u ~ N_m(T^{-1}θ, I_m). Consequently, u_1, ..., u_m are independently distributed normal random variables, and so

  v_1 = u'u = Σ_{i=1}^m u_i² = t'Ω^{-1}t

has a chi-squared distribution with m degrees of freedom. This chi-squared distribution is central if θ = 0 and noncentral if θ ≠ 0, so we would choose H_1 over H_0 if v_1 is sufficiently large. When Ω is positive semidefinite, the construction of v_1 above can be generalized by using the Moore-Penrose inverse of Ω. In this case, if rank(Ω) = r, and we write Ω = X_1Λ_1X_1' and Ω+ = X_1Λ_1^{-1}X_1', where the m × r matrix X_1 and the r × r diagonal matrix Λ_1 are defined as in the proof of Theorem 5.7, then w = Λ_1^{-1/2}X_1't ~ N_r(Λ_1^{-1/2}X_1'θ, I_r), since

  var(w) = Λ_1^{-1/2}X_1'ΩX_1Λ_1^{-1/2} = Λ_1^{-1/2}Λ_1Λ_1^{-1/2} = I_r.

Thus, since the w_i's are independently distributed normal random variables,

  v_2 = w'w = Σ_{j=1}^r w_j² = t'Ω+t

has a chi-squared distribution, which is central if Λ_1^{-1/2}X_1'θ = 0, with r degrees of freedom.
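The algebraic identity underlying the generalized statistic, v_2 = t'Ω+t = w'w, can be illustrated numerically by taking Ω to be the rank-2 matrix of Example 5.2; the vector t below is an arbitrary illustrative choice, not part of the text's example.

```python
# Sketch: with Ω the rank-2 symmetric matrix of Example 5.2, the statistic
# v2 = t' Ω+ t coincides with w'w, where w = Λ1^{-1/2} X1' t.
import math

omega_plus = [[v / 288 for v in row]
              for row in [[4, 2, 2], [2, 13, -11], [2, -11, 13]]]
# Positive eigenpairs of Ω (see Example 5.2).
lam = [48.0, 12.0]
x1 = [[2 / math.sqrt(6), 1 / math.sqrt(6), 1 / math.sqrt(6)],
      [0.0, -1 / math.sqrt(2), 1 / math.sqrt(2)]]

t = [1.0, 2.0, -1.0]      # arbitrary illustrative sample statistic
w = [sum(x1[j][i] * t[i] for i in range(3)) / math.sqrt(lam[j])
     for j in range(2)]
v2_from_w = sum(wj ** 2 for wj in w)
v2_direct = sum(t[i] * omega_plus[i][j] * t[j]
                for i in range(3) for j in range(3))
```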
4. THE MOORE-PENROSE INVERSE OF A MATRIX PRODUCT

If A and B are each m × m nonsingular matrices, then it follows that (AB)^{-1} = B^{-1}A^{-1}. This property of the matrix inverse does not immediately generalize to the Moore-Penrose inverse of a matrix; that is, if A is m × p and B is p × n, then we cannot, in general, be assured that (AB)+ = B+A+. In this section, we look at some results regarding this sort of factorization of the Moore-Penrose inverse of a product.

Example 5.5. Here we look at a very simple example, given by Greville (1966), that illustrates a situation in which the factorization does not hold. Define the 2 × 1 vectors

  a = (1, 0)'   and   b = (1, 1)',

so that a'b = 1 and thus

  (a'b)+ = 1.

Thus, we have

  b+(a')+ = (b'b)^{-1}b'a(a'a)^{-1} = (1/2)(1, 1)(1, 0)' = 1/2 ≠ 1 = (a'b)+,

so the identity (a'b)+ = b+(a')+ fails.
Actually, in the previous section we have already given a few situations in which the identity (AB)+ = B+A+ does hold. For example, in Theorem 5.3 we saw that

  (A'A)+ = A+(A')+

and

  (AA')+ = (A')+A+.

The next theorem gives yet another situation.

Theorem 5.8. Let A be an m × n matrix, while P and Q are matrices satisfying P'P = I_m and QQ' = I_n. Then

  (PA)+ = A+P'

and

  (AQ)+ = Q'A+,

so that, in particular, (PAQ)+ = Q'A+P'.

The proof of Theorem 5.8, which we leave to the reader, simply involves the verification of conditions (5.1)-(5.4). Note that Theorem 5.7, regarding the Moore-Penrose inverse of a symmetric matrix, is a special case of the theorem above.

Our next result gives a sufficient condition on the matrices A and B to guarantee that (AB)+ = B+A+.
Theorem 5.9. Let A be an m × p matrix and B be a p × n matrix. If rank(A) = rank(B) = p, then (AB)+ = B+A+.

Proof. Since A has full column rank and B has full row rank, we know from Theorem 5.3 that A+ = (A'A)^{-1}A' and B+ = B'(BB')^{-1}, so that A+A = I_p and BB+ = I_p. Consequently, we find that

  AB(B+A+)AB = A(BB+)(A+A)B = AB,
  (B+A+)AB(B+A+) = B+(A+A)(BB+)A+ = B+A+,
  AB(B+A+) = A(BB+)A+ = AA+,
  (B+A+)AB = B+(A+A)B = B+B.

Since AA+ and B+B are symmetric, conditions (5.1)-(5.4) all hold, and so B+A+ is the Moore-Penrose inverse of AB.
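Theorem 5.9 can be tried out numerically: the sketch below uses hypothetical matrices A (full column rank) and B (full row rank) of my own choosing, forms B+A+ via the closed forms of Theorem 5.3(h),(i), and confirms that it satisfies the four Penrose conditions for AB.

```python
# Check of Theorem 5.9 on hypothetical matrices: A is 3x2 of full column rank,
# B is 2x3 of full row rank; B+A+ then obeys the four Penrose conditions of AB.
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

def inv2(M):  # helper: inverse of a nonsingular 2x2 matrix
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

A = [[F(1), F(0)], [F(0), F(1)], [F(1), F(1)]]
B = [[F(1), F(2), F(0)], [F(0), F(1), F(1)]]

A_plus = matmul(inv2(matmul(transpose(A), A)), transpose(A))  # (A'A)^{-1} A'
B_plus = matmul(transpose(B), inv2(matmul(B, transpose(B))))  # B'(BB')^{-1}

AB = matmul(A, B)
G = matmul(B_plus, A_plus)          # candidate (AB)+
AG, GA = matmul(AB, G), matmul(G, AB)
penrose = (matmul(AG, AB) == AB and matmul(GA, G) == G
           and AG == transpose(AG) and GA == transpose(GA))
```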
While Theorem 5.9 is useful, its major drawback is that it only gives a sufficient condition for the factorization of (AB)+. The following result, due to Greville (1966), gives several necessary and sufficient conditions for this factorization to hold.

Theorem 5.10. Let A be an m × p matrix and B be a p × n matrix. Then each of the following conditions is necessary and sufficient for (AB)+ = B+A+:

(a) A+ABB'A' = BB'A' and BB+A'AB = A'AB.
(b) A+ABB' and A'ABB+ are symmetric matrices.
(c) A+ABB'A'ABB+ = BB'A'A.
(d) A+AB = B(AB)+AB and BB+A' = A'AB(AB)+.

Proof. We will prove that the conditions given in (a) are necessary and sufficient; the proofs for (b)-(d) will be left to the reader as an exercise. First assume that the conditions of (a) hold. Premultiplying the first identity by B+ while postmultiplying by (AB)'+ yields

  B+A+AB(AB)'(AB)'+ = B+BB'A'(AB)'+.   (5.8)

Now for any matrix C,

  C+CC' = (C+C)'C' = C'C+'C' = C'(CC+) = C',   (5.9)

since C+C and CC+ are symmetric and CC+C = C. Using this identity, when C = B, on the right-hand side of (5.8), and its transpose, when C = AB, on the left-hand side, we obtain the equation

  B+A+AB = (AB)'(AB)'+,

which, due to condition (5.4), is equivalent to

  B+A+AB = (AB)+AB = P_R((AB)').   (5.10)

The final equality in (5.10) follows from the definition of the Moore-Penrose inverse in terms of projection matrices, as given in Definition 5.2. In a similar fashion, if we take the transpose of the second identity in (a), which yields

  B'A'ABB+ = B'A'A,

and premultiply this by (AB)'+ and postmultiply by A+, then, after simplifying by using (5.9) on the left-hand side with C = AB, and the transpose of (5.9) on the right-hand side with C = A', we obtain the equation

  ABB+A+ = (AB)(AB)+ = P_R(AB).   (5.11)

But from Definition 5.2, (AB)+ is the only matrix satisfying both (5.10) and (5.11). Consequently, we must have (AB)+ = B+A+.

Conversely, now suppose that (AB)+ = B+A+. Using this in (5.9), when C = AB, gives

  B+A+ABB'A' = B'A'.

Premultiplying this by ABB'B, we obtain

  ABB'BB+A+ABB'A' = ABB'BB'A',

which, after using the transpose of (5.9) with C = B' and then rearranging, simplifies to

  ABB'(I − A+A)BB'A' = (0).

Note that since D = (I − A+A) is symmetric and idempotent, the equation above is in the form E'D'DE = (0), where E = BB'A'. This then implies that DE = (0); that is,

  (I − A+A)BB'A' = (0),

which is equivalent to the first identity in (a). In a similar fashion, using (AB)+ = B+A+ in (5.9) with C = (AB)' yields

  A+'B+'(AB)'AB = AB.

This, when premultiplied by B'A'AA', can be simplified to an equation that is equivalent to the second identity of (a).
Our next step is to find a general expression for (AB)+ which holds for all A and B for which the product AB is defined. Our approach involves transforming A to a matrix A_1 and B to a matrix B_1, such that AB = A_1B_1 and (AB)+ = (A_1B_1)+ = B_1+A_1+. The result, due to Cline (1964a), is given in the next theorem.

Theorem 5.11. Let A be an m × p matrix and B be a p × n matrix. If we define B_1 = A+AB and A_1 = AB_1B_1+, then AB = A_1B_1 and (AB)+ = B_1+A_1+.

Proof. Note that

  A_1B_1 = AB_1B_1+B_1 = AB_1 = AA+AB = AB,

so the first result holds. To verify the second statement, we will show that the two conditions given in Theorem 5.10(a) are satisfied for A_1 and B_1. First note that

  A_1B_1B_1+ = AB_1B_1+B_1B_1+ = AB_1B_1+ = A_1,   (5.12)

and, since A+AB_1 = A+AA+AB = A+AB = B_1,

  A+A_1 = A+AB_1B_1+ = B_1B_1+.   (5.13)

Taking the transpose of (5.13) and using (5.12), along with conditions (5.3) and (5.4), we find that R(A_1') = R(B_1), and so

  A_1+A_1B_1 = B_1,

which, postmultiplied by B_1'A_1', gives the first identity in Theorem 5.10(a). The second identity can be obtained by noting that, by (5.12) and condition (5.3),

  B_1B_1+A_1' = (A_1B_1B_1+)' = A_1',

and then postmultiplying this identity by A_1B_1.
Note that in Theorem 5.11, B was transformed to B_1 by the projection matrix of the range space of A+, while A was transformed to A_1 by the projection matrix of the range space of B_1 and not that of B. Our next result indicates that the range space of B can be used instead of that of B_1, if we do not insist that AB = A_1B_1. A proof of this result can be found in Campbell and Meyer (1979).

Theorem 5.12. Let A be an m × p matrix and B a p × n matrix. If we define B_1 = A+AB and A_1 = ABB+, then (AB)+ = B_1+A_1+.
5. THE MOORE-PENROSE INVERSE OF PARTITIONED MATRICES

Suppose that the m × n matrix A has been partitioned as A = [U V], where U is m × n_1 and V is m × n_2. In some situations, it may be useful to have an expression for A+ in terms of the submatrices U and V. We begin with the general case, in which no assumptions can be made regarding U and V.

Theorem 5.13. Let the m × n matrix A be partitioned as A = [U V], where U is m × n_1, V is m × n_2, and n = n_1 + n_2. Then

  A+ = [U V]+ = [ U+ − U+V(C+ + W) ; C+ + W ],

where the two blocks on the right-hand side are stacked, C = (I_m − UU+)V, M = {I_{n_2} + (I_{n_2} − C+C)V'U+'U+V(I_{n_2} − C+C)}^{-1}, and W = (I_{n_2} − C+C)MV'U+'U+(I_m − VC+).

The proof of Theorem 5.13, which is rather lengthy, will be omitted. The interested reader should refer to Cline (1964b), Boullion and Odell (1971), or Pringle and Rayner (1971). The proofs of the following consequences of Theorem 5.13 can also be found in these references.
Corollary 5.13.1. Let A and C be defined as in Theorem 5.13, and let K = (I_{n_2} + V'U+'U+V)^{-1}. Then

(a) A+ = [ U+ − U+VKV'U+'U+ ; C+ + KV'U+'U+ ]
    if and only if C+CV'U+'U+V = (0),

(b) A+ = [ U+ − U+VKV'U+'U+ ; KV'U+'U+ ]
    if and only if C = (0),

(c) A+ = [ U+ − U+VC+ − U+V(I_{n_2} − C+C)V'U+'U+ ; C+ + (I_{n_2} − C+C)V'U+'U+ ]
    if and only if C+CV'U+'U+V = V'U+'U+V,

(d) A+ = [ U+ ; V+ ]
    if and only if U'V = (0).
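Part (d) of Corollary 5.13.1 is easy to exercise numerically; the sketch below uses a hypothetical partition with orthogonal blocks U and V and verifies that stacking U+ on V+ satisfies the four Penrose conditions for A = [U V].

```python
# Check of Corollary 5.13.1(d) on a hypothetical partition: U'V = (0), so
# A = [U V] has A+ obtained by stacking U+ over V+.
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

# U and V are 2x1 with U'V = (0).
U = [[F(1)], [F(1)]]
V = [[F(1)], [F(-1)]]
U_plus = [[F(1, 2), F(1, 2)]]        # (U'U)^{-1} U'
V_plus = [[F(1, 2), F(-1, 2)]]       # (V'V)^{-1} V'

A = [[U[i][0], V[i][0]] for i in range(2)]   # A = [U V]
A_plus = [U_plus[0], V_plus[0]]              # stack U+ over V+

AAp, ApA = matmul(A, A_plus), matmul(A_plus, A)
penrose = (matmul(AAp, A) == A and matmul(ApA, A_plus) == A_plus
           and AAp == transpose(AAp) and ApA == transpose(ApA))
```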
Our final theorem involves the Moore-Penrose inverse of a partitioned matrix that has the block diagonal form. This result can be easily proven by simply verifying that the conditions of the Moore-Penrose inverse are satisfied.

Theorem 5.14. Let the m × n matrix A be given by the block diagonal form

  A = [ A_11 (0) ... (0) ; (0) A_22 ... (0) ; ... ; (0) (0) ... A_rr ],

where A_ii is m_i × n_i, m_1 + ... + m_r = m, and n_1 + ... + n_r = n. Then

  A+ = [ A_11+ (0) ... (0) ; (0) A_22+ ... (0) ; ... ; (0) (0) ... A_rr+ ].
6. THE MOORE-PENROSE INVERSE OF A SUM

Theorem 1.7 gave an expression for (A + CBD)^{-1} when the matrices A and B are both square and nonsingular. Although a generalization of this formula to the case of the Moore-Penrose inverse is not available, there are some specialized results for the Moore-Penrose inverse of a sum of matrices. Some of these results are presented in this section. The proofs of our first two results utilize the results of the previous section regarding partitioned matrices. These proofs can be found in Cline (1965) or Boullion and Odell (1971).

Theorem 5.15. Let U be an m × n_1 matrix and V an m × n_2 matrix. Then

  (UU' + VV')+ = (I_m − C+'V')U+'KU+(I_m − VC+) + (CC')+,

where K = I_{n_1} − U+V(I_{n_2} − C+C)M(U+V)', and C and M are as defined in Theorem 5.13.
Theorem 5.16. Suppose U and V are both m × n matrices. If UV' = (0), then

  (U + V)+ = U+ + (I_n − U+V)(C+ + W),

where C and W are as given in Theorem 5.13.

Theorem 5.16 gives an expression for (U + V)+ that holds when the rows of U are orthogonal to the rows of V. If, in addition, the columns of U are orthogonal to the columns of V, this expression greatly simplifies. This special case is summarized in the following theorem.

Theorem 5.17. If U and V are m × n matrices satisfying UV' = (0) and U'V = (0), then (U + V)+ = U+ + V+.

Proof. Using Theorem 5.3(g), we find that

  V+U = (V'V)+V'U = (0)  and  UV+ = UV'(VV')+ = (0),

and similarly U+V = (0) and VU+ = (0). As a result,

  (U + V)(U+ + V+) = UU+ + VV+,   (5.14)
  (U+ + V+)(U + V) = U+U + V+V,   (5.15)

which are both symmetric, so that conditions (5.3) and (5.4) are satisfied. Postmultiplying equation (5.14) by (U + V) and (5.15) by (U+ + V+) yields conditions (5.1) and (5.2), so the result follows.

Theorem 5.17 can be easily generalized to more than two matrices.

Corollary 5.17.1. Let U_1, ..., U_k be m × n matrices satisfying U_i'U_j = (0) and U_iU_j' = (0) for all i ≠ j. Then

  (U_1 + ... + U_k)+ = U_1+ + ... + U_k+.
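Theorem 5.17 can be illustrated with a pair of hypothetical diagonal matrices, whose Moore-Penrose inverses follow from Theorem 5.6; the orthogonality assumptions and the conclusion are checked exactly.

```python
# Check of Theorem 5.17 with hypothetical U, V satisfying UV' = (0) and
# U'V = (0): then (U + V)+ = U+ + V+.
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

U = [[F(2), F(0)], [F(0), F(0)]]
V = [[F(0), F(0)], [F(0), F(3)]]
assert matmul(U, transpose(V)) == [[0, 0], [0, 0]]   # UV' = (0)
assert matmul(transpose(U), V) == [[0, 0], [0, 0]]   # U'V = (0)

# By Theorem 5.6, a diagonal matrix's Moore-Penrose inverse just inverts
# the nonzero diagonal entries.
U_plus = [[F(1, 2), F(0)], [F(0), F(0)]]
V_plus = [[F(0), F(0)], [F(0), F(1, 3)]]

S = [[U[i][j] + V[i][j] for j in range(2)] for i in range(2)]
S_plus = [[U_plus[i][j] + V_plus[i][j] for j in range(2)] for i in range(2)]
SSp, SpS = matmul(S, S_plus), matmul(S_plus, S)
penrose = (matmul(SSp, S) == S and matmul(SpS, S_plus) == S_plus
           and SSp == transpose(SSp) and SpS == transpose(SpS))
```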
7. THE CONTINUITY OF THE MOORE-PENROSE INVERSE

It is very useful to establish the continuity of a function since continuous functions enjoy many nice properties. In this section, we will give conditions under which the elements of A+ are continuous functions of the elements of A. But before doing this, let us first consider the determinant of a square matrix A and the inverse of a nonsingular matrix A. Recall that the determinant of an m × m matrix A can be expressed as the sum of terms, where each term is +1 or −1 times the product of m of the elements of A. Thus, due to the continuity of sums and the continuity of scalar products, we immediately have the following.

Theorem 5.18. Let A be an m × m matrix. Then the determinant of A, |A|, is a continuous function of the elements of A.

Suppose that A is an m × m nonsingular matrix so that |A| ≠ 0. Recall that the inverse of A can be expressed as

  A^{-1} = |A|^{-1}A#,   (5.16)

where A# is the adjoint matrix of A. If A_1, A_2, ... is a sequence of matrices such that A_i → A as i → ∞, then, due to the continuity of the determinant function, |A_i| → |A|, and so there must exist an N such that |A_i| ≠ 0 for all i > N. Since each element of an adjoint matrix is +1 or −1 times a determinant, it also follows from the continuity of the determinant function that if A_i# is the adjoint of A_i, then A_i# → A# as i → ∞. As a result, equation (5.16) has allowed us to establish the following.

Theorem 5.19. Let A be an m × m nonsingular matrix. Then the inverse of A, A^{-1}, is a continuous function of the elements of A.

The continuity of the Moore-Penrose inverse is not as straightforward as the continuity of the inverse of a nonsingular matrix. If A is an m × n matrix and A_1, A_2, ... is an arbitrary sequence of m × n matrices satisfying A_i → A as i → ∞, then we are not assured that A_i+ → A+. A simple example will illustrate the potential problem.
Example 5.6. Consider the sequence of 2 × 2 matrices A_1, A_2, ..., where

  A_i = [1/i 0; 0 1].

Clearly, A_i → A, where

  A = [0 0; 0 1].

However, note that rank(A_i) = 2 for all i, while rank(A) = 1. For this reason, we do not have A_i+ → A+. In fact,

  A_i+ = [i 0; 0 1]

does not converge to anything since its (1,1)th element, i, converges to ∞ as i → ∞. On the other hand,

  A+ = [0 0; 0 1].

If we have a sequence of matrices A_1, A_2, ... for which rank(A_i) = rank(A) for all i larger than some integer, say N, then we will not encounter the difficulty observed in the example above; that is, as A_i gets closer to A, A_i+ will get closer to A+. This continuity property of A+ is summarized below. A proof of this important result can be found in Penrose (1955) or Campbell and Meyer (1979).

Theorem 5.20. Let A be an m × n matrix and A_1, A_2, ... a sequence of m × n matrices such that A_i → A, as i → ∞. Then

  A_i+ → A+, as i → ∞,

if and only if there exists an integer N such that

  rank(A_i) = rank(A) for all i > N.
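The rank condition of Theorem 5.20 can be demonstrated on the diagonal sequence of Example 5.6: the pseudoinverse entries blow up along the original sequence, but a rank-adjusted sequence (zeroing the vanishing eigenvalue, as in the estimator adjustment discussed next) converges to A+. The threshold used below is a hypothetical illustrative choice.

```python
# Sketch of Example 5.6 / Theorem 5.20: A_i = diag(1/i, 1) -> A = diag(0, 1),
# but pinv(A_i) = diag(i, 1) diverges because the rank drops in the limit;
# fixing the rank by truncating the small entry restores convergence.
def pinv_diag(d):
    # Theorem 5.6: invert the nonzero diagonal entries, keep the zeros.
    return [0.0 if x == 0 else 1.0 / x for x in d]

blowup = [pinv_diag([1.0 / i, 1.0])[0] for i in (10, 100, 1000)]

def truncate(d, tol=1e-2):
    # Rank adjustment: zero out entries below a (hypothetical) threshold.
    return [0.0 if abs(x) < tol else x for x in d]

adjusted = [pinv_diag(truncate([1.0 / i, 1.0]))[0] for i in (1000, 10**6)]
A_plus = pinv_diag([0.0, 1.0])     # the true limit diag(0, 1)
```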
se ro en P re oo M the of ty ui in nt co the r fo s Example 5.7. The condition ob pr g tin tes is es th po hy d an on ati tim es in ns inverse have important implicatio as to d rre fe re y, ert op pr a s us sc di ll wi we e, pl lems. In particular, in this exam a m fro ted pu m co t, r ato tim es An s. es ss po rs consistency, that some estimato nco r if (J r ete m ra pa a of r ato tim es t en ist ns co sample of size n, is said to be a
verges in probability to (J; that is, if lim
n+~
• •
P(lt(JI~E)=O
190
GENERALIZED INVERSES
for any ε > 0. An important result associated with the property of consistency is that continuous functions of consistent estimators are consistent; that is, if t is a consistent estimator of θ, and g(t) is a continuous function of t, then g(t) is a consistent estimator of g(θ). We will now apply some of these ideas to a situation involving the estimation of the Moore–Penrose inverse of a matrix of parameters. For instance, let Ω be an m × m positive semidefinite covariance matrix having rank r < m. Suppose that the elements of the matrix Ω are unknown and are, therefore, to be estimated. Suppose, in addition, that our sample estimate of Ω, which we will denote by Ω̂, is positive definite with probability one, so that rank(Ω̂) = m with probability one, and Ω̂ is a consistent estimator of Ω; that is, each element of Ω̂ is a consistent estimator of the corresponding element of Ω. However, since rank(Ω) = r < m, Ω̂+ is not a consistent estimator of Ω+. Intuitively, the problem here is obvious. If Ω̂ = X̂Λ̂X̂' is the spectral decomposition of Ω̂, so that Ω̂+ = Ω̂^{-1} = X̂Λ̂^{-1}X̂', then the consistency of Ω̂ implies that, as n increases, the m − r smallest diagonal elements of Λ̂ are converging to zero, while the m − r largest diagonal elements of Λ̂^{-1} are increasing without bound. The difficulty here can be easily avoided if the value of r is known. In this case, Ω̂ can be adjusted to yield an estimator of Ω having rank r. For example, if Ω̂ has eigenvalues λ̂1 ≥ λ̂2 ≥ ··· ≥ λ̂m and corresponding normalized eigenvectors x̂1, ..., x̂m, and P̂r is the eigenprojection

    P̂r = Σ_{i=1}^{r} x̂i x̂i',

then

    Ω̂* = P̂r Ω̂ P̂r = Σ_{i=1}^{r} λ̂i x̂i x̂i'

will be an estimator of Ω having rank r. It can be shown then that, due to the continuity of eigenprojections, Ω̂* is also a consistent estimator of Ω. More importantly, since rank(Ω̂*) = rank(Ω) = r, Theorem 5.20 guarantees that Ω̂*+ is a consistent estimator of Ω+.
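As a hedged numerical sketch of this truncation device (the matrix, rank, and perturbation size below are invented for illustration), the rank-r adjustment keeps the inverted estimate bounded even though the naive Moore–Penrose inverse of the full-rank estimate blows up:

```python
import numpy as np

# Illustrative sketch (invented numbers): the parameter matrix has rank
# r = 2 < m = 3; the sample estimate Omega_hat is nonsingular, so
# pinv(Omega_hat) blows up, while the rank-2 truncation stays bounded.
eps = 1e-6                              # stands in for sampling error
Omega_hat = np.diag([0.0, 1.0, 2.0]) + eps * np.eye(3)

naive = np.linalg.pinv(Omega_hat)       # entries of order 1/eps

# Keep only the r largest eigenvalues and their eigenvectors.
r = 2
vals, vecs = np.linalg.eigh(Omega_hat)  # ascending eigenvalues
idx = np.argsort(vals)[::-1][:r]        # indices of the r largest
Omega_star = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
stable = np.linalg.pinv(Omega_star)     # bounded as eps -> 0
```
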
8. SOME OTHER GENERALIZED INVERSES

The Moore–Penrose inverse is just one of many generalized inverses that have been developed in recent years. In this section, we will briefly discuss two other generalized inverses that have applications in statistics. Both of these inverses can be defined by utilizing some of the four conditions (5.1)–(5.4) or, for simplicity, 1–4 of the Moore–Penrose inverse. In fact, we can define a different class of inverses corresponding to each different subset of the conditions 1–4 that the inverse must satisfy.

Definition 5.3. For any m × n matrix A, let the n × m matrix denoted A^(i1,...,ir) be any matrix satisfying conditions i1, ..., ir from among the four conditions 1–4; A^(i1,...,ir) will be called a {i1, ..., ir}-inverse of A.

Thus, the Moore–Penrose inverse of A is the {1,2,3,4}-inverse of A; that is, A+ = A^(1,2,3,4). Note that for any proper subset {i1, ..., ir} of {1,2,3,4}, A+ will also be a {i1, ..., ir}-inverse of A, but it may not be the only one. Since in many cases there are many different {i1, ..., ir}-inverses of A, it may be easier to compute a {i1, ..., ir}-inverse of A than to compute the Moore–Penrose inverse. The rest of this section will be devoted to the {1}-inverse of A and the {1,3}-inverse of A, which have special applications that will be discussed in the next chapter. Discussion of other useful {i1, ..., ir}-inverses can be found in Ben-Israel and Greville (1974), Campbell and Meyer (1979), and Rao and Mitra (1971).

In the next chapter, we will see that in solving systems of linear equations, we will only need an inverse matrix satisfying the first condition of the four Moore–Penrose conditions. We will refer to any such {1}-inverse of A as simply a generalized inverse of A, and we will write it using the fairly common notation A^-; that is, A^(1) = A^-. One useful way of expressing a generalized inverse of a matrix A makes use of the singular value decomposition of A. The following result, which is stated for a matrix A having less than full rank, can easily be modified for matrices having full row rank or full column rank.

Theorem 5.21. Suppose that the m × n matrix A has rank r > 0 and the singular value decomposition given by

            | Δ    (0) |
    A = P   |          | Q',
            | (0)  (0) |

where P and Q are m × m and n × n orthogonal matrices, respectively, and Δ is an r × r nonsingular diagonal matrix. Let

            | Δ^{-1}   E |
    B = Q   |            | P',
            |   F      G |

where E is r × (m − r), F is (n − r) × r, and G is (n − r) × (m − r). Then for all choices of E, F, and G, B is a generalized inverse of A, and any generalized inverse of A can be expressed in the form of B for some E, F, and G.
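A quick numerical check of this construction (the matrix and the all-zeros/all-ones choices of E, F, and G below are arbitrary illustrations, not taken from the text):

```python
import numpy as np

# Sketch of Theorem 5.21: build generalized inverses of A from its SVD
# A = P [[D, 0], [0, 0]] Q' by choosing arbitrary blocks E, F, G in
# B = Q [[D^-1, E], [F, G]] P'.
A = np.array([[1., 0., 0.5],
              [1., 0., 0.5],
              [0., 1., 0.5],
              [0., 1., 0.5]])
m, n = A.shape
P, s, Qt = np.linalg.svd(A)          # full SVD: P is m x m, Qt is n x n
r = int(np.sum(s > 1e-10))           # rank of A

def g_inverse(E, F, G):
    """Assemble B = Q [[D^-1, E], [F, G]] P' for given blocks."""
    top = np.hstack([np.diag(1.0 / s[:r]), E])
    bottom = np.hstack([F, G])
    return Qt.T @ np.vstack([top, bottom]) @ P.T

B0 = g_inverse(np.zeros((r, m - r)), np.zeros((n - r, r)),
               np.zeros((n - r, m - r)))   # E = F = G = 0 gives A+
B1 = g_inverse(np.ones((r, m - r)), np.ones((n - r, r)),
               np.ones((n - r, m - r)))    # any other choice works too
```
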
Proof. Note that

    ABA = P [Δ, (0); (0), (0)] Q'Q [Δ^{-1}, E; F, G] P'P [Δ, (0); (0), (0)] Q'
        = P [Δ, (0); (0), (0)] [Δ^{-1}, E; F, G] [Δ, (0); (0), (0)] Q'
        = P [ΔΔ^{-1}Δ, (0); (0), (0)] Q' = P [Δ, (0); (0), (0)] Q' = A,

and so the matrix B above is a generalized inverse of A regardless of the choice of E, F, and G. On the other hand, if we write Q = [Q1 Q2], P = [P1 P2], where Q1 is n × r and P1 is m × r, then, since PP' = Im and QQ' = In, any generalized inverse B* of A can be expressed as

    B* = Q [Q1'B*P1, Q1'B*P2; Q2'B*P1, Q2'B*P2] P',

which is in the required form if we can show that Q1'B*P1 = Δ^{-1}. Since B* is a generalized inverse of A, AB*A = A, or, equivalently, P'AB*AQ = P'AQ. Writing this last identity in partitioned form and equating the (1,1)th submatrices on both sides, we find that

    Δ Q1'B*P1 Δ = Δ,

from which it immediately follows that Q1'B*P1 = Δ^{-1}, and so the proof is complete.  □
Example 5.8. The 4 × 3 matrix

        | 1   0   0.5 |
    A = | 1   0   0.5 |
        | 0   1   0.5 |
        | 0   1   0.5 |

has rank r = 2 and singular value decomposition A = P [Δ, (0); (0), (0)] Q' with

               | 1   1   1   1 |
        1      | 1   1  -1  -1 |            | √3   0  |
    P = ---    | 1  -1   1  -1 | ,    Δ =   |         | ,
        2      | 1  -1  -1   1 |            |  0   √2 |

           | 1/√3    1/√3    1/√3  |
    Q' =   | 1/√2   -1/√2     0    | .
           | 1/√6    1/√6   -2/√6  |
If we take E, F, and G as null matrices and use the equation for B given in Theorem 5.21, we obtain as a generalized inverse of A the matrix

           |  5    5   -1   -1 |
    (1/12) | -1   -1    5    5 | .
           |  2    2    2    2 |

Actually, from the proof of Theorem 5.21, we know that the matrix above is the Moore–Penrose inverse. Different generalized inverses of A may be constructed through different choices of E, F, and G; for example, if we again take E and F as null matrices but now use

    G = [ 1/√6   0 ],

then we obtain the generalized inverse

          | 3    2   0   -1 |
    (1/6) | 0   -1   3    2 | .
          | 0    2   0    2 |
Note that this matrix has rank 3, while the Moore–Penrose inverse has its rank equal to that of A, which is 2.

The following theorem summarizes some of the basic properties of {1}-inverses.

Theorem 5.22. Let A be an m × n matrix and let A^- be a generalized inverse of A. Then
(a) (A^-)' is a generalized inverse of A',
(b) if α is a nonzero scalar, α^{-1}A^- is a generalized inverse of αA,
(c) if A is square and nonsingular, A^- = A^{-1} uniquely,
(d) if B and C are nonsingular, C^{-1}A^-B^{-1} is a generalized inverse of BAC,
(e) rank(A) = rank(AA^-) = rank(A^-A) ≤ rank(A^-),
(f) rank(A) = m if and only if AA^- = Im,
(g) rank(A) = n if and only if A^-A = In.
Proof. Properties (a)–(d) are easily proven by simply verifying that the one condition of a generalized inverse holds. To prove (e), note that since AA^-A = A, we can use Theorem 2.10 to get

    rank(A) = rank(AA^-A) ≤ rank(AA^-) ≤ rank(A)

and

    rank(A) = rank(AA^-A) ≤ rank(A^-A) ≤ rank(A),

so that rank(A) = rank(AA^-) = rank(A^-A). In addition, rank(A^-A) ≤ rank(A^-), so the result follows. It follows from (e) that rank(A) = m if and only if AA^- is nonsingular. Premultiplying the equation AA^-AA^- = AA^- by (AA^-)^{-1} yields (f). Similarly, rank(A) = n if and only if A^-A is nonsingular, and so premultiplying A^-AA^-A = A^-A by (A^-A)^{-1} yields (g).  □

Example 5.9. Some of the properties possessed by the Moore–Penrose inverse do not carry over to the {1}-inverse. For instance, we have seen that A is the Moore–Penrose inverse of A+; that is, (A+)+ = A. However, in general, we are not guaranteed that A = (A^-)^-, where A^- is an arbitrary generalized inverse of A. For example, consider the diagonal matrix A = diag(0, 2, 4). One choice of a generalized inverse of A is A^- = diag(1, 0.5, 0.25). Here A^- is nonsingular, so it has only one generalized inverse, namely, (A^-)^{-1} = diag(1, 2, 4), and, thus, A is not a generalized inverse of A^- = diag(1, 0.5, 0.25).
All of the generalized inverses of a matrix A can be expressed in terms of any one particular generalized inverse. This relationship is given below.

Theorem 5.23. Let A^- be any generalized inverse of the m × n matrix A. Then, for any n × m matrix C,

    A^- + C - A^-ACAA^-

is a generalized inverse of A, and each generalized inverse of A can be expressed in this form for some C.

Proof. Since AA^-A = A,

    A(A^- + C - A^-ACAA^-)A = AA^-A + ACA - AA^-ACAA^-A = A + ACA - ACA = A,

so A^- + C - A^-ACAA^- is a generalized inverse of A regardless of the choice of A^- and C. Now let B be any generalized inverse of A and define C = B - A^-. Then, since ABA = A, we have

    A^- + C - A^-ACAA^- = A^- + (B - A^-) - A^-A(B - A^-)AA^-
                        = B - A^-ABAA^- + A^-AA^-AA^- = B - A^-AA^- + A^-AA^- = B,

and so the proof is complete.  □

We will find the following result useful in a later chapter.
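The construction in Theorem 5.23 is easy to check numerically (a hedged sketch; the rank-deficient matrix below is an arbitrary stand-in):

```python
import numpy as np

# Sketch of Theorem 5.23: starting from one generalized inverse Am of A,
# the matrix Am + C - Am A C A Am is again a generalized inverse for ANY
# n x m matrix C, and C = 0 recovers Am itself.
A = np.array([[2., 2., 4.],
              [2., 4., 2.],
              [2., 4., 2.]])        # rank 2
Am = np.linalg.pinv(A)              # one particular generalized inverse

def g_inverse_from(C):
    return Am + C - Am @ A @ C @ A @ Am

B = g_inverse_from(np.ones((3, 3)))
```
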
Theorem 5.24. Let A, B, and C be matrices of sizes p × m, m × n, and n × q, respectively. If rank(ABC) = rank(B), then C(ABC)^-A is a generalized inverse of B.

Proof. Our proof follows that of Srivastava and Khatri (1979). Using Theorem 2.10, we have

    rank(B) = rank(ABC) ≤ rank(AB) ≤ rank(B)

and

    rank(B) = rank(ABC) ≤ rank(BC) ≤ rank(B),

so that evidently

    rank(AB) = rank(BC) = rank(B) = rank(ABC).                        (5.17)

Using Theorem 2.12 along with the identity

    A(BC){Iq - (ABC)^-ABC} = (0),

we find that

    rank(ABC) + rank(BC{Iq - (ABC)^-ABC}) - rank(BC) ≤ rank{(0)} = 0,

so that

    rank(BC{Iq - (ABC)^-ABC}) ≤ rank(BC) - rank(ABC) = 0,

where the equality follows from (5.17). But this can be true only if

    BC{Iq - (ABC)^-ABC} = {Im - BC(ABC)^-A}BC = (0).

Again applying Theorem 2.12, this time to the middle expression above, we obtain

    rank({Im - BC(ABC)^-A}B) + rank(BC) - rank(B) ≤ rank{(0)} = 0,

or, equivalently,

    rank({Im - BC(ABC)^-A}B) ≤ rank(B) - rank(BC) = 0,

where, again, the equality follows from (5.17). This implies that

    {Im - BC(ABC)^-A}B = B - B{C(ABC)^-A}B = (0),

and so the result follows.  □
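As a hedged numerical illustration of Theorem 5.24 (all three matrices are invented examples satisfying the rank condition):

```python
import numpy as np

# When rank(ABC) = rank(B), C (ABC)^- A is a generalized inverse of B.
# Sizes: A is p x m, B is m x n, C is n x q; pinv is one choice of (ABC)^-.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])            # 3 x 2, full column rank
B = np.array([[1., 2.],
              [2., 4.]])            # 2 x 2, rank 1
C = np.array([[1., 0., 1.],
              [0., 1., 1.]])        # 2 x 3, full row rank
ABC = A @ B @ C
G = C @ np.linalg.pinv(ABC) @ A     # candidate generalized inverse of B
```
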
We will see in the next chapter that the {1,3}-inverse is useful in finding least squares solutions to an inconsistent system of linear equations. Consequently, this inverse is commonly called the least squares inverse. We will denote the {1,3}-inverse of A by A^L; that is, A^(1,3) = A^L. Since a least squares inverse of A is also a {1}-inverse of A, the properties given in Theorem 5.22 also apply to A^L. Some additional properties of least squares inverses are given below.

Theorem 5.25. Let A be an m × n matrix. Then
(a) for any least squares inverse A^L of A, AA^L = AA+,
(b) (A'A)^-A' is a least squares inverse of A for any generalized inverse (A'A)^- of A'A.
Proof. Since AA^LA = A and (AA^L)' = AA^L, we find that

    AA^L = AA+AA^L = (AA+)'(AA^L)' = A+'A'A^L'A' = A+'(AA^LA)' = A+'A' = (AA+)' = AA+,

and so (a) holds. To prove (b), first note that, since A = AA+A = (AA+)'A = A+'A'A,

    A(A'A)^-A'A = A+'A'A(A'A)^-A'A = A+'(A'A)(A'A)^-(A'A) = A+'A'A = A,

so that (A'A)^-A' satisfies condition 1. To verify that condition 3 holds, observe that

    A(A'A)^-A' = A(A'A)^-A'AA+ = AA+,

where the last equality uses the identity A(A'A)^-A'A = A, just proven, and the first uses A' = A'AA+. Thus, the symmetry of A(A'A)^-A' follows from the symmetry of AA+.  □
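Both parts of Theorem 5.25 can be checked numerically (a hedged sketch; the rank-deficient matrix is an arbitrary example, and pinv(A'A) stands in for a generalized inverse of A'A):

```python
import numpy as np

# AL = (A'A)^- A' is a {1,3}-inverse (least squares inverse) of A, and
# A AL equals A A+ for every least squares inverse.
A = np.array([[1., 1., 2.],
              [1., 0., 1.],
              [1., 1., 2.],
              [2., 0., 2.]])        # rank 2
AL = np.linalg.pinv(A.T @ A) @ A.T
```
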
9. COMPUTING GENERALIZED INVERSES

In this section we review some computational formulas for generalized inverses. The emphasis here is not on the development of formulas best suited for the numerical computation of generalized inverses on a computer. For instance, the most common method of computing the Moore–Penrose inverse of a matrix is through the computation of its singular value decomposition; that is, if A = P1ΔQ1' is the singular value decomposition of A as given in Corollary 4.1.1, then A+ can be easily computed via the formula A+ = Q1Δ^{-1}P1'. The formulas provided here and in the problems are ones that, in some cases, may be useful for the computation of the generalized inverses of matrices of small size but, in most cases, are primarily useful for theoretical purposes.

Greville (1960) obtained an expression for the Moore–Penrose inverse of a matrix partitioned in the form [B c], where, of course, the matrix B and the vector c have the same number of rows. This formula can then be used recursively to compute the Moore–Penrose inverse of an m × n matrix A. To see this, let aj denote the jth column of A and define Aj = (a1, ..., aj), so that Aj is the m × j matrix containing the first j columns of A. Greville has shown that if we write Aj = [Aj-1  aj], then

           | Aj-1+ - dj bj' |
    Aj+ =  |                | ,                                       (5.18)
           |      bj'       |

where dj = Aj-1+ aj,

    bj' = cj+                          if cj ≠ 0,
    bj' = (1 + dj'dj)^{-1} dj'Aj-1+    if cj = 0,

and cj = aj - Aj-1 dj. Thus, A+ = An+ can be computed by successively computing A1+, A2+, ..., An+.
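The recursion (5.18) can be sketched as a short routine (the tolerance is an arbitrary choice, and exact arithmetic is assumed adequate for small examples):

```python
import numpy as np

# A sketch of Greville's column-by-column recursion for A+.
def greville_pinv(A, tol=1e-12):
    A = np.asarray(A, dtype=float)
    a1 = A[:, :1]
    # Moore-Penrose inverse of the first column (zero column -> zero row).
    Aplus = a1.T / (a1.T @ a1) if np.linalg.norm(a1) > tol else a1.T
    for j in range(1, A.shape[1]):
        aj = A[:, j:j + 1]
        d = Aplus @ aj                      # d_j = A_{j-1}^+ a_j
        c = aj - A[:, :j] @ d               # c_j = a_j - A_{j-1} d_j
        if np.linalg.norm(c) > tol:
            b = c.T / (c.T @ c)             # b_j' = c_j^+
        else:
            b = (d.T @ Aplus) / (1.0 + d.T @ d)
        Aplus = np.vstack([Aplus - d @ b, b])
    return Aplus
```
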
Example 5.10. We will use the procedure above to compute the Moore–Penrose inverse of the matrix

        | 1   2   3   4 |
    A = | 1   0   1   2 | .
        | 1   2   3   4 |

We begin by computing the inverse of A2 = [a1 a2] = [A1 a2]. We find that

    A1+ = a1+ = [1/3  1/3  1/3],    d2 = A1+ a2 = 4/3,

    c2 = a2 - A1 d2 = (1/3)(2, -4, 2)'.

Since c2 ≠ 0, we get

    b2' = c2+ = (c2'c2)^{-1} c2' = (1/4)[1  -2  1],

and, thus,

          | A1+ - d2 b2' |          | 0    4    0 |
    A2+ = |              |  = (1/4) |             | .
          |     b2'      |          | 1   -2    1 |

The inverse of A3 = [A2 a3] now can be computed by using A2+ and

    d3 = A2+ a3 = (1, 1)',    c3 = a3 - A2 d3 = (3, 1, 3)' - (3, 1, 3)' = 0.

Since c3 = 0, we find that

    b3' = (1 + d3'd3)^{-1} d3'A2+ = (1/3)(1/4)[1  2  1] = (1/12)[1  2  1],

and so

          | A2+ - d3 b3' |           | -1   10   -1 |
    A3+ = |              |  = (1/12) |  2   -8    2 | .
          |     b3'      |           |  1    2    1 |

Finally, to obtain the Moore–Penrose inverse of A = A4, we compute

    d4 = A3+ a4 = (1, 0, 1)',    c4 = a4 - A3 d4 = (4, 2, 4)' - (4, 2, 4)' = 0,

and, since c4 = 0,

    b4' = (1 + d4'd4)^{-1} d4'A3+ = (1/3)[0  1  0] = (1/12)[0  4  0].

Consequently, the Moore–Penrose inverse of A is given by

          | A3+ - d4 b4' |           | -1    6   -1 |
    A4+ = |              |  = (1/12) |  2   -8    2 | .
          |     b4'      |           |  1   -2    1 |
                                     |  0    4    0 |
A common method of computing a generalized inverse, that is, a {1}-inverse, of a matrix is based on the row reduction of that matrix to Hermite form.

Definition 5.4. An m × m matrix H is said to be in Hermite form if the following four conditions hold.
(a) H is an upper triangular matrix.
(b) hii equals 0 or 1 for each i.
(c) If hii = 0, then hij = 0 for all j.
(d) If hii = 1, then hji = 0 for all j ≠ i.

Before applying this concept of Hermite forms to find a generalized inverse of a matrix, we will need a couple of results regarding matrices in Hermite form. The first of these two results says that any square matrix can be transformed to a matrix in Hermite form through its premultiplication by a nonsingular matrix. Details of the proof are given in Rao (1973).

Theorem 5.26. Let A be an m × m matrix. Then there exists a nonsingular m × m matrix C such that CA = H, where H is in Hermite form.

The proof of the next result will be left to the reader as an exercise.

Theorem 5.27. Suppose the m × m matrix H is in Hermite form. Then H is idempotent; that is, H² = H.

The connection between a generalized inverse of a square matrix A and matrices in Hermite form is established in the following theorem. This result says that any matrix C satisfying the conditions of Theorem 5.26 will be a generalized inverse of A.

Theorem 5.28. Let A be an m × m matrix and C be an m × m nonsingular matrix for which CA = H, where H is a matrix in Hermite form. Then the matrix C is a generalized inverse of A.

Proof. We need to show that ACA = A. Now from Theorem 5.27 we know that H is idempotent, and so

    CACA = H² = H = CA.

The result then follows by premultiplying this equation by C^{-1}.  □

The matrix C can be obtained by transforming A, through elementary row transformations, to a matrix in Hermite form. This process is illustrated in the following example.
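The row-reduction recipe behind Theorems 5.26–5.28 can be sketched as follows: Gauss–Jordan steps accumulate a nonsingular C, and a final row permutation puts each pivot on the diagonal so that CA is in Hermite form. (This naive pivoting is for illustration only, not a numerically robust scheme.)

```python
import numpy as np

def hermite_g_inverse(A, tol=1e-10):
    """Return a nonsingular C with C @ A in Hermite form, so that C is a
    generalized inverse of the square matrix A (Theorem 5.28)."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    M = np.hstack([A.copy(), np.eye(m)])    # row-reduce [A | I] together
    pivots, row = [], 0
    for col in range(m):
        if row == m:
            break
        k = row + int(np.argmax(np.abs(M[row:, col])))
        if abs(M[k, col]) < tol:
            continue                        # no pivot in this column
        M[[row, k]] = M[[k, row]]
        M[row] /= M[row, col]
        for i in range(m):
            if i != row:
                M[i] -= M[i, col] * M[row]
        pivots.append(col)
        row += 1
    # Permute rows so each pivot row sits at its pivot column's index;
    # the remaining (zero) rows fill the non-pivot positions.
    perm = np.full(m, -1, dtype=int)
    for i, p in enumerate(pivots):
        perm[p] = i
    spare = [i for i in range(m) if i not in perm.tolist()]
    for j in range(m):
        if perm[j] == -1:
            perm[j] = spare.pop(0)
    return M[perm, m:]
```
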
Example 5.11. We will find a generalized inverse of the 3 × 3 matrix

        | 2   2   4 |
    A = | 2   4   2 | .
        | 2   4   2 |

First, we perform row transformations on A so that the resulting matrix has its first diagonal element equal to one, while the remaining elements in the first column are all equal to zero. This can be achieved via the matrix equation C1A = A1, where

         | 1/2   0   0 |           | 1   1    2 |
    C1 = | -1    1   0 | ,    A1 = | 0   2   -2 | .
         | -1    0   1 |           | 0   2   -2 |

Next we use row transformations on A1 so that the resulting matrix has its second diagonal element equal to one, while each of the remaining elements in the second column is zero. This can be written as C2A1 = A2, where

         | 1  -1/2   0 |           | 1   0    3 |
    C2 = | 0   1/2   0 | ,    A2 = | 0   1   -1 | .
         | 0   -1    1 |           | 0   0    0 |

The matrix A2 satisfies the conditions of Definition 5.4, and so it is in Hermite form. Thus, we have C2C1A = A2, so by Theorem 5.28 a generalized inverse of A is given by

              |  1    -1/2   0 |
    C2C1  =   | -1/2   1/2   0 | .
              |  0     -1    1 |

Not only is a generalized inverse not necessarily unique, but this particular method of producing a generalized inverse does not, in general, yield a unique matrix. For instance, in the second transformation given above, C2A1 = A2, we could have chosen

         | 1   0   -1/2 |
    C2 = | 0   0    1/2 | .
         | 0   1    -1  |

In this case, we would have obtained the generalized inverse

         |  1     0   -1/2 |
         | -1/2   0    1/2 | .
         |  0     1    -1  |
The method of finding a generalized inverse of a matrix by transforming it to a matrix in Hermite form can be easily extended from square matrices to rectangular matrices. The following result indicates how such an extension is possible.

Theorem 5.29. Let A be an m × n matrix, where m < n. Define the matrix A* as

          | A   |
    A* =  |     | ,
          | (0) |

so that A* is n × n, and let C be any n × n nonsingular matrix for which CA* is in Hermite form. If we partition C as C = [C1 C2], where C1 is n × m, then C1 is a generalized inverse of A.

Proof. We know from Theorem 5.28 that C is a generalized inverse of A*. Hence, A*CA* = A*. Simplifying the left-hand side of this identity, we find that

            | A   |              | AC1   AC2 | | A   |     | AC1A |
    A*CA* = |     | [C1  C2]  =  |           | |     |  =  |      | .
            | (0) |              | (0)   (0) | | (0) |     | (0)  |

Equating this to A*, we get AC1A = A, and so the proof is complete.  □
Clearly, an analogous result holds for the case in which m > n.

Example 5.12. Suppose that we wish to find a generalized inverse of the matrix

        | 1   1   2 |
    A = | 1   0   1 | .
        | 1   1   2 |
        | 2   0   2 |

Since m = 4 > n = 3, we consider the augmented matrix

                    | 1   1   2   0 |
    A* = [A (0)] =  | 1   0   1   0 | .
                    | 1   1   2   0 |
                    | 2   0   2   0 |

Proceeding as in the previous example, we obtain a nonsingular matrix C so that CA* is in Hermite form. One such matrix is given by

        |  0    1   0   0 |
    C = |  1   -1   0   0 | .
        | -1    0   1   0 |
        |  0   -2   0   1 |

Thus, partitioning this matrix as C = [C1' C2']', we find that a generalized inverse of A is given by

         |  0    1   0   0 |
    C1 = |  1   -1   0   0 | .
         | -1    0   1   0 |
A least squares generalized inverse of a matrix A can be computed by first computing a generalized inverse of A'A and then using the relationship A^L = (A'A)^-A', established in Theorem 5.25(b).

Example 5.13. To find a least squares inverse of the matrix A from Example 5.12, we first compute

           | 7   2    9 |
    A'A =  | 2   2    4 | .
           | 9   4   13 |

By transforming this matrix to Hermite form, we find that a generalized inverse of A'A is given by

                       |  2   -2   0 |
    (A'A)^- = (1/10)   | -2    7   0 | .
                       |  0    0   0 |

Hence, a least squares inverse of A is given by

                             | 0    2   0    4 |
    A^L = (A'A)^-A' = (1/10) | 5   -2   5   -4 | .
                             | 0    0   0    0 |
PROBLEMS

1. Prove results (a)–(d) of Theorem 5.3.

2. Use Theorem 5.3(h) to find the Moore–Penrose inverse of

        | 1   1   1   0 |
    A = | 1   0   0   1 | .
        | 1   2   0   1 |

3. Find the Moore–Penrose inverse of the vector

        | 2 |
    a = | 1 | .
        | 3 |
        | 2 |

4. Provide the proofs for (f)–(j) of Theorem 5.3.

5. Prove Theorem 5.6.

6. Use the spectral decomposition of the matrix …
205
PROBLEMS ,,. .,,
) (g 5.3 m re eo Th e us en th d an ', AA of rse ve in se ro en P re (a) Fi nd th e M oo to find A+. ec oj pr the d an A of e ng ra the r fo x tri ma n tio (b ) Us e A+ to find the projec tion matrix for the row sp ac e of A.
,
:.,.," ,
,·. •
,
, ,
8. Le t A be an m c = tr(A' A).
,
X II
matrix with rank(A)
= 1. Show that A+ = c iA ',
where
I
•I
ch ea th wi r cto ve I x III the be 1 let d an rs, 9. Le t x an d y be m x 1 vecto s rse ve in se ro en P re oo M e th r fo ns sio es pr ex n el em en t eq ua l to one. Obtai of i '. xy ) (d ' xx ) (c ' 11 m 1m ) (b ' 11 ) (a
,, \
I,
, ,
I, :
+, A+ A, (lm AA s, ce tri ma e th of ch ea at th ow Sh x. tri 10. Le t A be an m x II ma AA +) , and (In  A+ A) is id em po ten t.
I
. ies tit en id ng wi llo fo the h lis tab Es . rix ma II x m an 11. Le t A be (a) A' AA += A+ AA '= A' . (b) A' A+ ' A+ = A+ A+ ' A' = A+. (c) A( A' A) +A 'A = AA '(A A' tA = A.
i i,, II
, •
I •I I
I
12. Le t A be an m x II matrix. Sh ow that x. tri ma p x II an is B e er wh ), (0 = + A Ir if ly on d an if (a) AB = (0) x. tri ma p x m an is B e er wh ), (0 = B A' if ly on d an if ) (b) A+ B = (0
•
·
,
, ,
, , •
,I
e on s ha A if at th ow Sh r. nk ra ng vi ha ix atr m ic etr rn rn sy m 13. Le t A be an m X A. no nz er o ei ge nv al ue " of multiplicity r, then A+ =
,,2
,
•
,, ,
,
'
,
, ,,,
I,
, ,,.
,, ,
,,
14. Let A be an m x ro w rank, then
II
matrix an d B be an
/I
x p matrix. Sh ow that if B has full
15. Let A be an m x m syrnrnetric matrix. Sh ow that (a) if A is no nn eg ati ve definite, then so is A +, o. als 0 = x A+ en th x, r cto ve e m so r fo 0 = Ax (b) if l tra ec sp e th e Us r. = A) k( ran th wi ix atr m ic etr m 16. Le t A be an m X m sy m th wi x tri ma ic etr m m sy III x III y an is B if at th ow de co m po sit io n of A to sh . 1m = B s+ + A A+ en th ), (0 = AB at th ch su r rank(B) = m 
I,
I
I
t tha e os pp Su x. tri ma m X n an be B d an ix atr m 17_ Le t A be an m cve en eig the by d ne an sp e ac sp e th at th , er rank,(A) = rank(B) and, furth X II
•
GENERALIZED INVERSES
206
at th as e m sa e th is A A' of es lu va en eig e iv sit po e th tors corresponding to of es lu va en eig e iv sit po e th to ng di on sp rre co spanned by the eigenvectors BB '. Sh ow that (A B t = B+ A +. 18. Prove Th eo re m 5.8. 19. Prove (b )( d) of Th eo re m 5.10. +. A W = + B) (A er eth wh e in m ter de to 0 5.1 m re eo Th e us 20. Fo r each ca se below
(a) A =
(b) A =
0 0 0 1 0 0 , 0 1 0 1 1 0 0 1 0 0 0 0
B=
,
B=
1 0 0
0 0 0
0 0
0 0 0
0
0
21. Let A be an m X /I matrix and B be an if A' AB B' = BB ' A'A.
2
1 1 1 0 /IX m
matrix. Show that (A B t = W A +
22. Prove Th eo re m 5.14. 23. Find the M oo re P en ro se inverse of the matrix
A=
21 0 0 0 1 1 0 0 0 0 0 1 2 0 00120 0 0 0 4
o
ix atr m e th of rse ve in se ro en P re oo M the d fin to d) 24. Use Corollary 5. 13 .I( A ~ ru Vl. where •
U=
I I I
1 I I
1 I , I
1 2 V=
1
o
1 1 •
ix atr m e th of rse ve in se ro en P re oo M e th d fin to c) .I( 13 25. Use Corollary 5. A = [U V], where
,"
,I I
207
PROBLEMS
u=
26. Let the vectors w,
W=
1 1 , 1 1
1 1 1 1 0 1 0 1 0 0
2 2
,
V=
x, y, and z be given
X=
1 1
2 0
,
I I
2 0 0 2
0
1
by
Y=
1 I 0 0
,
Z=
1 1 I
3
Use Theorem 5.17 to find the MoorePenrose inverse of the matrix A = WX' + yz'. 27. Find a generalized inverse, different from the MoorePenrose inverse, of the vector given in Problem 3. 28. Consider the diagonal matrix A = diag(O, 2, 3). (a) Find a generalized inverse of A having rank of 2. (b) Find a generalized inverse of A that has rank of 3 and is diagonal. (c) Find a generalized inverse of A that is not diagonal. 29. Let A be an m X n matrix and B be an n x p matrix. Show that B Awi 11 be a generalized inverse of (AB) for any choice of Aand B if rank(B) = II. 30. Let A be an m X n matrix and B be an n x p matrix. Show that for any choice of A and B, B A will be a generalized inverse of (AB) if and only if A  ABB is idempotent. 31. Let A, P, and Q by m x n, p x m, and n x q matrices, respectively. Show that if P has full column rank and Q has full row rank, then Q A  p" is a generalized inverse of PAQ. 32. Let (A' At be any generalized inverse of the matrix A' A, where A is m x n. Establish the following. (a) A(A'A)A' does not depend on the choice of (A' A) . (b) A(A'A) A' is symmetric even if (A' A) is not symmetric. 33. Suppose that the m x n matrix A is partitioned as A = [A I A 2 ], where A I is m X r, and rank(A) = rank(A 1) = r. Show that A(A' At A' = AI (A;AI fA;.
GENERALIZED INVERSES
208
e th n tai ob to 9 on cti Se in ed rib sc de e ur ed 34. Us e th e re cu rsi ve pr oc M oo re P en ro se in ve rse of th e ma tri x.
A=
I  I 2
I 1 1 1 1 I
, l,
'.
" "
•
"
dfin by e cis er ex us io ev pr e th in A x tri ma e th of rse ve in 35. Fi nd a ge ne ra liz ed ite nn He ng vi ha x tri ma a to in it s rm fo ns tra at th x tri ma i ng a no ns in gu lar fOIlIl.
·
.
""
.,: •
".,,• •, ,
,I ' ,
\,
36" Fi nd a ge ne ra liz ed in ve rse of th e ma tri x
,
,, •
,t ~ • '
A=
1 2 I
I 4 I
I 2 32 1 3
e. cis er ex us io ev pr e th in n ve gi A x tri ma e th r fo 37. Fi nd a lea st sq ua re s in ve rse
·h l' , ..
"
, ,• ,i I I, , ,'
•
, ,,
"
••
be n ca A of rse ve in ed liz ra ne ge a at th 28 5. m re eo Th in 38. It wa s sh ow n ite nn He to A s ce du re w ro at th x tri ma lar gu in ns no a g din ob tai ne d by fin ite rm He to on cti du re n m lu co r fo lt su re r ila sim a is e er th at form. Sh ow th H. = AC at th ch su x tri ma lar gu in ns no a is C if at th ow sh fo nn ; th at is. A. of rse ve in ed liz ra ne ge a is C en th l. IlI fO ite nn He in wh er e H is
,
39. Pr ov e Th eo re m 5.27.
,•
•
• ,•
, •,
•
•
g in lat lcu ca r fo od eth m ve rsi cu re ng wi llo fo e th 40" Pe nr os e (1 95 6) ob tai ne d e lat lcu ca ely siv es cc Su A. x tri ma n x m an of rse th e M oo re P en ro se in ve B2 • B3 •. ..• wh er e
,• ••
,
r
I !
I I
B; + I =
i I tr(B;A' A)I"
 B;A' A
I, ,
•
,•
,
en th r. = ) (A nk ra If x. tri ma y tit en id n x n e th be to ed an d BI is de fin B,+ IA 'A = (0) an d
·• •
·• • •
• ••
• ,•
of A x tri ma e th of rse ve in se ro en P re oo M e th te Us e th is m eth od to co m pu
Example 5.10. t Le x. tri ma n x m an is A e er wh ', AA of e lu va 41. Let 'l\ be th e lar ge st eig en l ae Isr nBe A'. OI = I X e fin de d an A 2/ < 0/ < 0 g in fy tis 0/ be an y co ns ta nt sa
•
209
PROBLEMS
(1966) has shown that if we define X/+ I = X, {2 Im

AX/)
to e ur ed oc pr e tiv ra ite s thi e Us 00. 7 i as A+ 7 Xi n for i = 1, 2, ... the on 0 5.1 e pl am Ex of A ix atr m the of rse ve in se ro en P compute the Moore a computer. Stop the iterative process when tr {( X i + 1 X, )' (Xi + I

X, )}
ve ha st mu we ce sin ted pu m co be to ed ne t no es gets small. Note that 'l\ do
2
2
tr( AA ') < 'l\
the r fo ) .18 (5 in n ve gi n sio es pr ex the n tai ob to 5 on 42. Use the results of Secti MoorePenrose inverse of the matrix Aj = [Aj _ I aj ]' ,
,,
,
,r ! I I
,I,, ,,
, ,
,,
,
,
, ,,
, ,
,, ,, ,
•
C H A P T E R S IX
s n o ti a u q E r a e in L f o s m Syste
1. IN TR O D U CT IO N nge of ns tio ica pl ap e th of e on 5, ter ap Ch of g in nn gi As mentioned at the be e th of ns tio ua eq r ea lin of m ste sy a to ns tio lu so g din eralized inverses is in fin form Ax = c,
(6.1)
ts, tan ns co of r cto ve 1 x m an is c ts, tan ns co of where A is an m x n matrix is th In . ed ed ne e ar ns tio lu so ich wh r fo es bl ria va and x is an n x 1 vector of rm fo e th , .1) (6 to ns tio lu so of ce en ist ex e th as s ue chapter, we discuss such iss e W . ns tio lu so t en nd pe de in rly ea lin of r be m nu e th d of a general solution, an st lea g din fin of n tio ica pl ap ial ec sp e th at ok lo a g in tak conclude the ch ap ter by . ist ex t no es do n tio lu so t ac ex an en wh , .1) (6 to ns squares solutio
S ON TI UA EQ F O EM ST SY A F O Y NC 2. CO NS IS TE isex e th r fo s on iti nd co nt cie ffi su d an y ar ss ce ne n In this section, we will obtai rs cto ve ch su e or m or e on n he W . .1) (6 n tio ua eq g tence of a vector x satisfyin is m ste sy e th , ise rw he ot t; en ist ns co be to id sa is ns tio exist, the system of equa ind co nt cie ffi su d an y ar ss ce ne st fir r Ou . m ste sy t en ist referred to as an incons vui eq , or A of e ac sp n m lu co e th in is c r cto ve e th tion for consistency is that nk ra e th as e m sa e th is c] [A ix atr m ted en gm au the of alently, that the rank of A. if ly on d an if t en ist ns co is c, = Ax , ns tio ua eq of m Theorem 6.1. Th e syste rank([A cD = rank(A). •
Proof written as 210
be n ca c = Ax n tio ua eq e th en th A, of ns m lu co If a" ... ,a" are the
•
211
CONSISTENCY OF A SYSTEM OF EQUATIONS
n
• • •
Ax = [a) '" all] •
=
L,
Xiai
=
c
i= )
XII
Clearly, this holds for some x if and only if c is a linear combination of the 0 columns of A, in which case rank[A c] = rank(A).
Example 6.1.
Consider the system of equations which has 1 A=
2 1
2 1 ,
1 c=
o
5 3
Clearly, the rank of A is 2 while
I[A
I c] I = 2 I
2 1
o
I 5 = 0, 3
so that the rank of [A c] is also 2. Thus, we know from Theorem 6.1 that the system of equations Ax = c is consistent. Although Theorem 6.1 is useful in detellllining whether a given system of linear equations is consistent, it does not tell us how to lind a solution to the system when it is consistent. Our next result gives an alternative necessary and sufficient condition for consistency utilizing a generalized inverse, A, of A. An obvious consequence of this result is that when the system Ax = c is consistent, then a solution will be given by x = Ac. 
Theorem 6.2. The system of equations Ax = c is consistent if and only if for some generalized inverse, A , of A, AA c = c.
Proof First, suppose that the system is consistent and x * is a solution, so that c = Ax*. Premultiplying this identity by AA , where Ais any generalized inverse of A, yields
as is required. Conversely, now suppose that there is a generalized inverse of A satisfying AAc = c. Define x* = Ac and note that
212
SYSTEMS OF LINEAR EQUATIONS ,
,
Thus, since x* = A c is a solution, the system is consistent and so the proof is complete. 0
,,
Suppose that A I and A2 are any two generalized inverses of A so that AAIA = AA2A = A. In addition, suppose that AI satisfies the condition of Theorem 6.2; that is, AAlc = c. Then A2 satisfies the same condition since
Thus, in applying Theorem 6.2, one will need to check the given condition for only one generalized inverse of A, and it doesn't matter which generalized inverse is used. In particular, we can use the MoorePenrose inverse A+, of A. The following results involve some special cases regarding the matrix A.
Corollary 6.2.1.
•,
,,I
,,,
If' A is an m x m nonsingular matrix and c is an m x I
,I
vector of constants, then the system Ax = c is consistent.
•
• ,,
•
Corollary 6.2.2. If the mX n matrix A has rank equal to m, then the system Ax = c is consistent.
, ,
, •
,
!
Proof Since A has full row rank, it follows from Theorem 5.22(f) that AA  = 1m. As a result, AA  c = c, and so from Theorem 6.2, the system must 0
be consistent.
,
,
Example 6.2.
Consider the system of equations Ax = c, where
, •
I
A=
1
I I
0
2
1
I I 2
2 0 , 2
c=
,,
3 2 5
, "
f. .
A generalized inverse of the transpose of A was given in Example 5.12. Using
I
this, we find that
,• •
I
AAc=
I 1 0 2 I
2 2
0 0 0 I 0 I 1 0 I

I 2 1 0
1
1
1 I
0 0
0 0
1
0
3 2 5
0 0
,
•,
3 2 5
,
3 2 5
Since this is c, the system of equations is consistent, and a solution is given by
"
,
,
213
SOLUTIONS TO A CONSISTENT SYSTEM OF EQUATIONS
1
1
1 1
0 0
0 0
I 0
0 A c = •
0
3
3 2 5

I 5
0
l ra ne ge re mo the of se ca ial ec sp a is c = Ax ns tio ua eq r ea Th e system of lin C q, x P is B II, x m is A e er wh C, = B AX by n system of linear equations give ce en ist ex the r fo on iti nd co nt cie ffi su d an y ar ss is m x q, and X is n x p. A nece . m re eo th ng wi llo fo the in n ve gi is m ste sy is th g of a solution matrix X satisfyin x m is A e er wh ts, tan ns co of s ice atr m be C d an Theorem 6.3. Let A, B, ns tio ua eq of m ste sy the en Th q. x m is C d an q, x p is B
11,
AX B = C, , Bd an A s rse ve in ed liz ra ne ge e m so r fo if ly on d an if is consistent
(6.2)
•
n. tio lu so a is X* ix atr m the d an t en ist ns co is m ste sy Pr oo f Suppose that the ere wh B, Sby g in ly tip ul stm po d an AA by g in ly so that C = AX*B. Premultip t tha d fin we B, d an A of s rse ve in ed liz ra ne ge A and B are any
e fin de , .2) (6 fy tis sa Bnd a A if , nd ha r he ot e th On s. ld ho and so equation (6.2) X* = A CB , and note that X* is a solution since
y rif ve n ca we , 6.2 m re eo Th ter af n ve gi at th to r ila Using an argument sim ll wi it n the , Bnd a A of ce oi ch lar cu rti pa ne yo an r fo that if (6.2) is satisfied m re eo Th of n tio ica pl ap e th , ly nt ue eq ns Co . Bd an A hold for all choices of B. d an A r fo s rse ve in ed liz ra ne ge of s ce oi ch e th 6.3 is not dependent upon
3. SOLUTIONS TO A CONSISTENT SYSTEM OF EQUATIONS

We have seen that if the system of equations Ax = c is consistent, then x = A⁻c is a solution regardless of the choice of the generalized inverse A⁻. Thus, if A⁻c is not the same for all choices of A⁻, then our system of equations has more than one solution. In fact, we will see that even when A⁻c does not depend
on the choice of A⁻, which is the case if c = 0, our system of equations may have many solutions. The following theorem gives a general expression for all solutions to the system.

Theorem 6.4. Suppose that Ax = c is a consistent system of equations, and let A⁻ be any generalized inverse of the m × n matrix A. Then, for any n × 1 vector y,

    x_y = A⁻c + (I_n − A⁻A)y    (6.3)

is a solution, and for any solution x*, there exists a vector y such that x* = x_y.
Proof. Since Ax = c is a consistent system of equations, we know from Theorem 6.2 that AA⁻c = c, and so

    Ax_y = AA⁻c + A(I_n − A⁻A)y = c + (A − AA⁻A)y = c,

since AA⁻A = A. Thus, x_y is a solution regardless of the choice of y. On the other hand, if x* is an arbitrary solution, so that Ax* = c, it follows that A⁻Ax* = A⁻c. Consequently,

    x* = A⁻c + x* − A⁻Ax* = A⁻c + (I_n − A⁻A)x*,

so that x* = x_y when y = x*. This completes the proof. □
The set of solutions given in Theorem 6.4 is expressed in terms of a fixed generalized inverse A⁻ and an arbitrary n × 1 vector y. Alternatively, this set of all solutions can be expressed in terms of an arbitrary generalized inverse of A.

Corollary 6.4.1. Suppose that Ax = c is a consistent system of equations, where c ≠ 0. If B is a generalized inverse of A, then x = Bc is a solution, and for any solution x*, there exists a generalized inverse B such that x* = Bc.

Proof. Theorem 6.4 was not dependent upon the choice of the generalized inverse, so by choosing A⁻ = B and y = 0 in (6.3), we prove that x = Bc is a solution. All that remains to be shown is that for any particular A⁻ and y, we can find a generalized inverse B such that the expression in (6.3) equals Bc. Now since c ≠ 0, it has at least one component, say c_i, not equal to 0. Define the n × m matrix C as C = c_i⁻¹ye_i', so that Cc = y. Since the system of equations Ax = c is consistent, we must have AA⁻c = c, and so
    x_y = A⁻c + (I_n − A⁻A)y = A⁻c + (I_n − A⁻A)Cc
        = A⁻c + Cc − A⁻ACc = A⁻c + Cc − A⁻ACAA⁻c
        = (A⁻ + C − A⁻ACAA⁻)c.

But it follows from Theorem 5.23 that A⁻ + C − A⁻ACAA⁻ is a generalized inverse of A for any choice of the n × m matrix C, and so the proof is complete. □
Our next theorem gives a result, analogous to Theorem 6.4, for the system of equations AXB = C. The proof will be left to the reader as an exercise.

Theorem 6.5. Let AXB = C be a consistent system of equations, where A is m × n, B is p × q, and C is m × q. Then for any generalized inverses A⁻ and B⁻, and any n × p matrix Y,

    X_Y = A⁻CB⁻ + Y − A⁻AYBB⁻

is a solution, and for any solution X*, there exists a matrix Y such that X* = X_Y.
Example 6.3. For the consistent system of equations discussed in Example 6.2, we have

                    [-1   0  0]                 [ 2   1   1   2]
    I₄ − A⁻A = I₄ − [ 1  -1  0] [1  1  1  2]    [ 0   0   0  -2]
                    [ 1   1  0] [1  0  1  0] =  [-2  -1  -1  -2].
                    [ 0   0  0] [2  1  2  2]    [ 0   0   0   1]
Consequently, a general solution to this system of equations is given by
    x_y = A⁻c + (I₄ − A⁻A)y = (−3 + 2y₁ + y₂ + y₃ + 2y₄, 1 − 2y₄, 5 − 2y₁ − y₂ − y₃ − 2y₄, y₄)',
where y is an arbitrary 4 x 1 vector.
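The mechanics of Theorem 6.4 are easy to verify numerically. The sketch below (assuming NumPy is available) uses the Moore–Penrose inverse as one convenient choice of generalized inverse for the system of Example 6.2, checks the consistency condition of Theorem 6.2, and confirms that every choice of y in (6.3) yields a solution.

```python
import numpy as np

# The 3 x 4 system from Example 6.2 (rank(A) = 2, consistent).
A = np.array([[1., 1., 1., 2.],
              [1., 0., 1., 0.],
              [2., 1., 2., 2.]])
c = np.array([3., 2., 5.])

# Any generalized inverse works in (6.3); the Moore-Penrose
# inverse is one convenient choice.
G = np.linalg.pinv(A)

# Consistency check of Theorem 6.2: A G c = c.
consistent = np.allclose(A @ G @ c, c)

def solution(y):
    """General solution (6.3): x_y = G c + (I - G A) y."""
    n = A.shape[1]
    return G @ c + (np.eye(n) - G @ A) @ y

# Every choice of y yields a solution of Ax = c.
rng = np.random.default_rng(0)
all_solve = all(np.allclose(A @ solution(rng.standard_normal(4)), c)
                for _ in range(5))
```

Different y vectors produce different solutions, but all of them satisfy the system exactly (up to rounding).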
In some applications, it may be important to know whether a consistent system of equations yields a unique solution; that is, under what conditions will (6.3) yield the same solution for all choices of y?

Theorem 6.6. If Ax = c is a consistent system of equations, then the solution x* = A⁻c is a unique solution if and only if A⁻A = I_n, where A⁻ is any generalized inverse of the m × n matrix A.

Proof. Note that x* = A⁻c is a unique solution if and only if x* = x_y for all choices of y, where x_y is as defined in (6.3). In other words, the solution is unique if and only if

    (I_n − A⁻A)y = 0

for all y, and clearly this is equivalent to the condition (I_n − A⁻A) = (0), or

    A⁻A = I_n. □
We saw in Theorem 5.22(g) that rank(A) = n if and only if A⁻A = I_n. As a result, we can restate the necessary and sufficient condition of Theorem 6.6 as follows.

Corollary 6.6.1. Suppose that Ax = c is a consistent system of equations. Then the solution x* = A⁻c is a unique solution if and only if rank(A) = n.
Example 6.4. We saw in Example 6.1 that the system of equations Ax = c, where

    A = [1  2]          [1]
        [2  1],     c = [5],
        [1  0]          [3]

is consistent. The Moore–Penrose inverse of the transpose of A was obtained in Example 5.1. Using this, we find that

                 [-3   6   5] [1  2]          [14   0]
    A⁺A = (1/14) [ 8  -2  -4] [2  1] = (1/14) [ 0  14] = I₂.
                              [1  0]
Thus, the system of equations Ax = c has the unique solution given by
    x = A⁺c = (1/14) [-3   6   5] [1]          [ 42]   [ 3]
                     [ 8  -2  -4] [5] = (1/14) [-14] = [-1].
                                  [3]
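Corollary 6.6.1 can be checked directly on Example 6.4. The sketch below (assuming NumPy) verifies that rank(A) = n, that A⁺A = I_n, and that A⁺c recovers the unique solution.

```python
import numpy as np

# Full-column-rank system from Example 6.4: rank(A) = n = 2,
# so by Corollary 6.6.1 the solution A+c is unique.
A = np.array([[1., 2.],
              [2., 1.],
              [1., 0.]])
c = np.array([1., 5., 3.])

Ap = np.linalg.pinv(A)
rank_is_n = np.linalg.matrix_rank(A) == A.shape[1]
projector_is_identity = np.allclose(Ap @ A, np.eye(2))  # A+A = I_n
x = Ap @ c   # the unique solution (3, -1)'
```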
Suppose that a system of linear equations has more than one solution, and let x₁ and x₂ be two different solutions. Then, since Axᵢ = c for i = 1 and 2, it follows that for any scalar α

    A{αx₁ + (1 − α)x₂} = αAx₁ + (1 − α)Ax₂ = αc + (1 − α)c = c.

Thus, x = αx₁ + (1 − α)x₂ is also a solution. Since α was arbitrary, we see that if a system has more than one solution, then it has infinitely many solutions. However, the number of linearly independent solutions to a consistent system of equations having c ≠ 0 must be between 1 and n; that is, there exists a set of linearly independent solutions, {x₁, ..., x_r}, such that every solution can be expressed as a linear combination of the solutions x₁, ..., x_r. In other words, any solution x can be written as x = a₁x₁ + ··· + a_r x_r, for some coefficients a₁, ..., a_r. Note that since Axᵢ = c for each i, we must have
    Ax = A Σᵢ₌₁ʳ aᵢxᵢ = Σᵢ₌₁ʳ aᵢAxᵢ = Σᵢ₌₁ʳ aᵢc = (Σᵢ₌₁ʳ aᵢ)c,
and so if x is a solution, the coefficients must satisfy the identity a₁ + ··· + a_r = 1. Our next result tells us exactly how to determine this number of linearly independent solutions when c ≠ 0. We will delay the discussion of the situation in which c = 0 until the next section.
Theorem 6.7. Suppose that the system Ax = c is consistent, where A is m × n and c ≠ 0. Then each solution can be expressed as a linear combination of r linearly independent solutions, where r = n − rank(A) + 1.

Proof. Using (6.3) with the particular generalized inverse A⁺, we begin with the n + 1 solutions, x₀ = A⁺c, x.₁ = A⁺c + (I_n − A⁺A)e₁, ..., x.ₙ = A⁺c + (I_n − A⁺A)eₙ, where, as usual, eᵢ denotes the n × 1 vector whose only nonzero element is 1 in the ith position. Now every solution can be expressed as a linear combination of these solutions since, for any y = (y₁, ..., yₙ)',
    x_y = (1 − Σᵢ₌₁ⁿ yᵢ)x₀ + Σᵢ₌₁ⁿ yᵢx.ᵢ.
Thus, if we define the n × (n + 1) matrix X = (x₀, x.₁, ..., x.ₙ), the proof will be complete if we can show that rank(X) = n − rank(A) + 1. Note that we can write X as X = BC, where B and C are the n × (n + 1) and (n + 1) × (n + 1)
matrices given by B = (A⁺c, I_n − A⁺A) and

    C = [1  1'_n]
        [0   I_n].
Clearly, C is nonsingular since it is triangular and the product of its diagonal elements is 1. Consequently, from Theorem 1.8, we know that rank(X) = rank(B). Note also that
    (I_n − A⁺A)'A⁺c = (I_n − A⁺A)A⁺c = (A⁺ − A⁺AA⁺)c = (A⁺ − A⁺)c = 0,

so that the first column of B is orthogonal to the remaining columns. This implies that

    rank(B) = rank(A⁺c) + rank(I_n − A⁺A) = 1 + rank(I_n − A⁺A),
since the consistency condition AA⁺c = c and c ≠ 0 guarantee that A⁺c ≠ 0. All that remains is to show that rank(I_n − A⁺A) = n − rank(A). Now since A⁺A is the projection matrix of R(A⁺') = R(A'), it follows that I_n − A⁺A is the projection matrix of the orthogonal complement of R(A') or, in other words, the null space of A, N(A). Since dim{N(A)} = n − rank(A), we must have rank(I_n − A⁺A) = n − rank(A). □

Since x₀ = A⁺c is orthogonal to the columns of (I_n − A⁺A), when constructing a set of r linearly independent solutions, one of these solutions always will be x₀, with the remaining solutions given by x_y for r − 1 different choices of y ≠ 0. This statement is not dependent upon the choice of A⁺ as the generalized inverse in (6.3), since A⁻c and (I_n − A⁻A)y are linearly independent regardless of the choice of A⁻ if c ≠ 0, y ≠ 0. The proof of this linear independence is left as an exercise.

Example 6.5. We saw that the system of equations Ax = c of Examples 6.2 and 6.3 has the set of solutions consisting of all vectors of the form
    x_y = A⁻c + (I₄ − A⁻A)y = (−3 + 2y₁ + y₂ + y₃ + 2y₄, 1 − 2y₄, 5 − 2y₁ − y₂ − y₃ − 2y₄, y₄)'.
Since the last row of the 3 × 4 matrix

    A = [1  1  1  2]
        [1  0  1  0]
        [2  1  2  2]
is the sum of the first two rows, rank(A) = 2. Thus, the system of equations possesses
n − rank(A) + 1 = 4 − 2 + 1 = 3 linearly independent solutions. Three linearly independent solutions can be obtained through appropriate choices of the y vector. For instance, since A⁻c and (I₄ − A⁻A)y are linearly independent, the three solutions

    A⁻c,   A⁻c + (I₄ − A⁻A)eᵢ,   A⁻c + (I₄ − A⁻A)eⱼ

will be linearly independent if the ith and jth columns of (I₄ − A⁻A) are linearly independent. Looking back at the matrix (I₄ − A⁻A) given in Example 6.3, we see that its first and fourth columns are linearly independent. Thus, three linearly independent solutions of Ax = c are given by
    A⁻c = (−3, 1, 5, 0)',

    A⁻c + (I₄ − A⁻A)e₁ = (−3, 1, 5, 0)' + (2, 0, −2, 0)' = (−1, 1, 3, 0)',

    A⁻c + (I₄ − A⁻A)e₄ = (−3, 1, 5, 0)' + (2, −2, −2, 1)' = (−1, −1, 3, 1)'.
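The count in Theorem 6.7 can also be confirmed numerically for Example 6.5. The sketch below (assuming NumPy, and using A⁺ as the generalized inverse) builds three solutions as in the proof and checks that they are linearly independent and all solve the system.

```python
import numpy as np

# Example 6.5: a consistent system with c != 0 has
# r = n - rank(A) + 1 linearly independent solutions (Theorem 6.7).
A = np.array([[1., 1., 1., 2.],
              [1., 0., 1., 0.],
              [2., 1., 2., 2.]])
c = np.array([3., 2., 5.])

G = np.linalg.pinv(A)
n = A.shape[1]
r = n - np.linalg.matrix_rank(A) + 1        # 4 - 2 + 1 = 3

# Stack x0 = Gc with solutions built from columns 1 and 4 of I - GA.
N = np.eye(n) - G @ A
sols = np.column_stack([G @ c, G @ c + N[:, 0], G @ c + N[:, 3]])
indep = np.linalg.matrix_rank(sols) == r
```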
4. HOMOGENEOUS SYSTEMS OF EQUATIONS

The system of equations Ax = c is called a nonhomogeneous system of equations when c ≠ 0, while Ax = 0 is referred to as a homogeneous system of equations. In this section, we obtain some results regarding homogeneous systems of equations. One obvious distinction between homogeneous and nonhomogeneous systems is that a homogeneous system of equations must be consistent since it will always have the trivial solution, x = 0. A homogeneous system will then have a unique solution only when the trivial solution is the only solution. Conditions for the existence of nontrivial solutions, which we state in the next theorem, follow directly from Theorem 6.6 and Corollary 6.6.1.
Theorem 6.8. Suppose that A is an m × n matrix. The system Ax = 0 has nontrivial solutions if and only if A⁻A ≠ I_n, or equivalently if and only if rank(A) < n.

If the system Ax = 0 has more than one solution, and {x₁, ..., x_r} is a set of r solutions, then x = α₁x₁ + ··· + α_r x_r is also a solution regardless of the choice of α₁, ..., α_r, since

    Ax = A Σᵢ₌₁ʳ αᵢxᵢ = Σᵢ₌₁ʳ αᵢAxᵢ = Σᵢ₌₁ʳ αᵢ0 = 0.
In fact, we have the following.
Theorem 6.9. If A is an m × n matrix, then the set of all solutions to the system of equations Ax = 0 forms a vector subspace of Rⁿ having dimension n − rank(A).

Proof. The result follows immediately from the fact that the set of all solutions of Ax = 0 is the null space of A. □
In contrast to Theorem 6.9, the set of all solutions to a nonhomogeneous system of equations will not form a vector subspace. This is because, as we have seen in the previous section, a linear combination of solutions to a nonhomogeneous system yields another solution only if the coefficients sum to one. Additionally, a nonhomogeneous system cannot have 0 as a solution.

The general form of a solution given in Theorem 6.4 applies to both homogeneous and nonhomogeneous systems. Thus, for any n × 1 vector y,

    x_y = (I_n − A⁻A)y

is a solution to the system Ax = 0, and for any solution, x*, there exists a vector y such that x* = x_y. The following result shows that the set of solutions of Ax = c can be expressed in terms of the set of solutions to Ax = 0.
Theorem 6.10. Let x* be any solution to the system of equations Ax = c. Then

(a) if x₀ is a solution to the system Ax = 0, then x = x* + x₀ is a solution of Ax = c, and
(b) for any solution x to the equation Ax = c, there exists a solution x₀ to the equation Ax = 0 such that x = x* + x₀.

Proof. Note that if x₀ is as defined in (a), then

    A(x* + x₀) = Ax* + Ax₀ = c + 0 = c,

and so x = x* + x₀ is a solution to Ax = c. To prove (b), define x₀ = x − x*, so that x = x* + x₀. Then since Ax = c and Ax* = c, it follows that

    Ax₀ = A(x − x*) = Ax − Ax* = c − c = 0. □

Our next result, regarding the number of linearly independent solutions possessed by a homogeneous system of equations, follows immediately from Theorem 6.9.
Theorem 6.11. Each solution of the homogeneous system of equations Ax = 0 can be expressed as a linear combination of r linearly independent solutions, where r = n − rank(A).
Example 6.6. Consider the system of equations Ax = 0, where

    A = [1  2]
        [2  1].
        [1  0]

We saw in Example 6.4 that A⁺A = I₂. Thus, the system only has the trivial solution x = 0.
Example 6.7. Since the matrix

    A = [1  1  1  2]
        [1  0  1  0]
        [2  1  2  2]

from Example 6.5 has rank 2, the homogeneous system of equations Ax = 0 has r = n − rank(A) = 4 − 2 = 2 linearly independent solutions. Any set of two linearly independent columns of the matrix (I₄ − A⁻A) will be a set of linearly independent solutions; for example, the first and fourth columns,

    (2, 0, −2, 0)'   and   (2, −2, −2, 1)',

are linearly independent solutions.
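Example 6.7's two homogeneous solutions can be checked directly. The sketch below (assuming NumPy) verifies that both vectors lie in the null space of A and that the null space has dimension n − rank(A) = 2, as Theorem 6.9 states.

```python
import numpy as np

# Homogeneous system of Example 6.7: the solution set is the null
# space of A, of dimension n - rank(A) = 2 (Theorem 6.9).
A = np.array([[1., 1., 1., 2.],
              [1., 0., 1., 0.],
              [2., 1., 2., 2.]])

x1 = np.array([2., 0., -2., 0.])   # first column of I4 - A-A
x2 = np.array([2., -2., -2., 1.])  # fourth column of I4 - A-A

solves = np.allclose(A @ x1, 0) and np.allclose(A @ x2, 0)
dim = A.shape[1] - np.linalg.matrix_rank(A)   # 4 - 2 = 2
indep = np.linalg.matrix_rank(np.column_stack([x1, x2])) == 2
```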
5. LEAST SQUARES SOLUTIONS TO A SYSTEM OF LINEAR EQUATIONS

In some situations in which we have an inconsistent system of equations Ax = c, it may be desirable to find the vector or set of vectors which comes "closest" to satisfying the equations. If x* is one choice for x, then x* will approximately satisfy our system of equations if Ax* − c is close to 0. One of the most common ways of measuring the closeness of Ax* − c to 0 is through the computation of the sum of squares of the components of the vector Ax* − c. Any vector minimizing this sum of squares is referred to as a least squares solution.

Definition 6.1. The n × 1 vector x* is said to be a least squares solution to the system of equations Ax = c if the inequality

    (Ax* − c)'(Ax* − c) ≤ (Ax − c)'(Ax − c)    (6.4)

holds for every n × 1 vector x.

Of course, we have already utilized the concept of a least squares solution in many of our examples on regression analysis. In particular, we have seen that if the matrix X has full column rank, then the least squares solution for β in the fitted regression equation, ŷ = Xβ̂, is given by β̂ = (X'X)⁻¹X'y. The generalized inverses that we have discussed in this chapter will enable us to obtain a unified treatment of this problem, including cases in which X is not of full rank. In Section 5.8, we briefly discussed the {1, 3}-inverse of a matrix A, that is, any matrix satisfying the first and third conditions of the Moore–Penrose inverse. We referred to this inverse as the least squares inverse of A. The following result motivates this description.

Theorem 6.12. Let Aᴸ be any {1, 3}-inverse of a matrix A. Then the vector x* = Aᴸc is a least squares solution to the system of equations Ax = c.

Proof. We must show that (6.4) holds when x* = Aᴸc. The right-hand side of (6.4) can be written as
    (Ax − c)'(Ax − c) = {(Ax − AAᴸc) + (AAᴸc − c)}'{(Ax − AAᴸc) + (AAᴸc − c)}
        = (Ax − AAᴸc)'(Ax − AAᴸc) + (AAᴸc − c)'(AAᴸc − c) + 2(Ax − AAᴸc)'(AAᴸc − c)
        ≥ (AAᴸc − c)'(AAᴸc − c) = (Ax* − c)'(Ax* − c),

where the inequality follows from the fact that (Ax − AAᴸc)'(Ax − AAᴸc) ≥ 0 and

    (Ax − AAᴸc)'(AAᴸc − c) = (x − Aᴸc)'A'(AAᴸc − c)
        = (x − Aᴸc)'A'((AAᴸ)'c − c) = (x − Aᴸc)'(A'Aᴸ'A'c − A'c)
        = (x − Aᴸc)'(A'c − A'c) = 0.    (6.5)

This completes the proof. □
Corollary 6.12.1. The vector x* is a least squares solution to the system Ax = c if and only if

    (Ax* − c)'(Ax* − c) = c'(I_m − AAᴸ)c.

Proof. From the previous theorem, Aᴸc is a least squares solution for any choice of Aᴸ, and its sum of squared errors is given by

    (AAᴸc − c)'(AAᴸc − c) = c'(AAᴸ − I_m)'(AAᴸ − I_m)c = c'(AAᴸ − I_m)²c
        = c'(AAᴸAAᴸ − 2AAᴸ + I_m)c = c'(AAᴸ − 2AAᴸ + I_m)c = c'(I_m − AAᴸ)c.

The result now follows since, by definition, a least squares solution minimizes the sum of squared errors, and so any other vector x* will be a least squares solution if and only if its sum of squared errors is equal to this minimum sum of squares, c'(I_m − AAᴸ)c. □
Example 6.8. Let the system of equations Ax = c have A and c given by

    A = [1  1  2]          [4]
        [1  0  1],     c = [1].
        [1  1  2]          [6]
        [2  0  2]          [5]

In Example 5.13 we computed the least squares inverse

    Aᴸ = (1/10) [0   2  0   4]
                [5  -2  5  -4].
                [0   0  0   0]
Since

                  [5  0  5  0] [4]   [5  ]
    AAᴸc = (1/10) [0  2  0  4] [1] = [2.2]
                  [5  0  5  0] [6]   [5  ]
                  [0  4  0  8] [5]   [4.4]

is not equal to c, it follows from Theorem 6.2 that the system of equations is inconsistent. A least squares solution is then given by
                      [0   2  0   4] [4]   [2.2]
    x* = Aᴸc = (1/10) [5  -2  5  -4] [1] = [2.8].
                      [0   0  0   0] [6]   [0  ]
                                     [5]

Since (AAᴸc − c)' = (5, 2.2, 5, 4.4) − (4, 1, 6, 5) = (1, 1.2, −1, −0.6), the sum of squared errors for the least squares solution is

    (1)² + (1.2)² + (−1)² + (−0.6)² = 3.8.
In general, a least squares solution is not unique. For instance, the reader can easily verify that the matrix

    B = [-2    -0.8  -2    -1.6]
        [-1.5  -1.2  -1.5  -2.4]
        [ 2     1     2     2  ]

is also a least squares inverse of A. Consequently,

         [-2    -0.8  -2    -1.6] [4]   [-28.8]
    Bc = [-1.5  -1.2  -1.5  -2.4] [1] = [-28.2]
         [ 2     1     2     2  ] [6]   [ 31  ]
                                  [5]
is another least squares solution. However, (ABc − c)' = (5, 2.2, 5, 4.4) − (4, 1, 6, 5) = (1, 1.2, −1, −0.6), and so the sum of squared errors for this least squares solution is, as it must be, identical to that of the previous solution.

The following result will be useful in establishing the general form of a least squares solution. It indicates that while a least squares solution x* may not be unique, the vector Ax* will be unique.
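The invariance of Ax* across least squares solutions can be seen numerically on Example 6.8. The sketch below (assuming NumPy) computes one least squares solution via A⁺, a second one from the solution family, and checks that both give the same fitted vector and the same sum of squared errors, 3.8.

```python
import numpy as np

# Rank-deficient least squares problem from Example 6.8.
A = np.array([[1., 1., 2.],
              [1., 0., 1.],
              [1., 1., 2.],
              [2., 0., 2.]])
c = np.array([4., 1., 6., 5.])

# x = A+c is one least squares solution (A+ is a {1,3}-inverse).
x_star = np.linalg.pinv(A) @ c
fitted = A @ x_star                       # unique across LS solutions
sse = np.sum((fitted - c) ** 2)           # 3.8

# Another member of the family (2.2 - t, 2.8 - t, t), here t = 31,
# which matches the solution Bc computed in the text.
x_other = np.array([2.2, 2.8, 0.]) + 31. * np.array([-1., -1., 1.])
same_fit = np.allclose(A @ x_other, fitted)
```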
Theorem 6.13. The vector x* is a least squares solution to the system Ax = c if and only if

    Ax* = AAᴸc.    (6.6)
Proof. Using Theorem 6.2, we see that the system of equations given in (6.6) is consistent since AA⁻(AAᴸc) = AAᴸc. The sum of squared errors for any vector x* satisfying (6.6) is

    (Ax* − c)'(Ax* − c) = (AAᴸc − c)'(AAᴸc − c) = c'(AAᴸ − I_m)²c = c'(I_m − AAᴸ)c,

so by Corollary 6.12.1, x* is a least squares solution. Conversely, now suppose that x* is a least squares solution. Then from Corollary 6.12.1 we must have

    (Ax* − c)'(Ax* − c) = c'(I_m − AAᴸ)c
        = c'(I_m − AAᴸ)'(I_m − AAᴸ)c = (AAᴸc − c)'(AAᴸc − c),    (6.7)

where we have used the fact that (I_m − AAᴸ) is symmetric and idempotent. However, we also have

    (Ax* − c)'(Ax* − c) = {(Ax* − AAᴸc) + (AAᴸc − c)}'{(Ax* − AAᴸc) + (AAᴸc − c)}
        = (Ax* − AAᴸc)'(Ax* − AAᴸc) + (AAᴸc − c)'(AAᴸc − c),    (6.8)

since (Ax* − AAᴸc)'(AAᴸc − c) = 0, as shown in (6.5). Now (6.7) and (6.8) imply that

    (Ax* − AAᴸc)'(Ax* − AAᴸc) = 0,

which can be true only if

    Ax* − AAᴸc = 0,
and this establishes (6.6). □
We now give an expression for a general least squares solution to a system of equations.

Theorem 6.14. Let Aᴸ be any {1, 3}-inverse of the m × n matrix A. Define the vector

    x_y = Aᴸc + (I_n − AᴸA)y,

where y is an arbitrary n × 1 vector. Then, for each y, x_y is a least squares solution to the system of equations Ax = c, and for any least squares solution x*, there exists a vector y such that x* = x_y.
Proof. Since

    A(I_n − AᴸA)y = (A − AAᴸA)y = 0,

we have Ax_y = AAᴸc, and so by Theorem 6.13, x_y is a least squares solution. Conversely, if x* is an arbitrary least squares solution, then by using Theorem 6.13 again, we must have Ax* = AAᴸc, which, when premultiplied by Aᴸ, implies that AᴸAx* = AᴸAAᴸc. Adding x* to both sides of this identity and then rearranging we get

    x* = Aᴸc + (x* − Aᴸc) − AᴸA(x* − Aᴸc) = Aᴸc + (I_n − AᴸA)(x* − Aᴸc),

so that x* = x_y, where y = (x* − Aᴸc). This completes the proof. □

We saw in Example 6.8 that least squares solutions are not necessarily
unique. Theorem 6.14 can be used to obtain a necessary and sufficient condition for the solution to be unique.

Theorem 6.15. If A is an m × n matrix, then the system of equations Ax = c has a unique least squares solution if and only if rank(A) = n.
Proof. It follows immediately from Theorem 6.14 that the least squares solution is unique if and only if (I_n − AᴸA) = (0), or equivalently, AᴸA = I_n. The result now follows from Theorem 5.22(g). □

Even when the least squares solution to a system is not unique, certain linear combinations of the elements of least squares solutions may be unique. This is the subject of our next theorem.

Theorem 6.16. Let x* be a least squares solution to the system of equations Ax = c. Then a'x* is unique if and only if a is in the row space of A.
Proof. Using Theorem 6.14, if a'x* is unique regardless of the choice of the least squares solution x*, then

    a'x_y = a'Aᴸc + a'(I_n − AᴸA)y

is the same for all choices of y. But this implies that

    a'(I_n − AᴸA) = 0'.    (6.9)

Now if (6.9) holds, then a' = a'AᴸA = b'A, where b' = a'Aᴸ, and so a is in the row space of A. On the other hand, if a is in the row space of A, then there exists some vector b such that a' = b'A. This implies that

    a'x_y = b'Ax_y = b'AAᴸc,

which does not depend on y, and so a'x* must be unique. □
Example 6.9. We will obtain the general least squares solution to the system of equations presented in Example 6.8. First note that
    AᴸA = [1  0  1]
          [0  1  1],
          [0  0  0]
so that

    x_y = Aᴸc + (I₃ − AᴸA)y = [2.2]   [0  0  -1] [y₁]   [2.2 − y₃]
                              [2.8] + [0  0  -1] [y₂] = [2.8 − y₃]
                              [0  ]   [0  0   1] [y₃]   [y₃      ]
is a least squares solution for any choice of y₃. The quantity a'x does not depend on the choice of y₃ as long as a is in the row space of A; in this case, that corresponds to a being orthogonal to the vector (1, 1, −1)'.
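Theorem 6.16 and this example are easy to illustrate numerically. The sketch below (assuming NumPy) takes one vector a in the row space of A and one that is not, and checks that a'x is invariant over the least squares solution family exactly in the first case.

```python
import numpy as np

# For Example 6.9, a'x is invariant over the least squares
# solutions x_y = (2.2 - y3, 2.8 - y3, y3) exactly when a lies in
# the row space of A, i.e. a is orthogonal to (1, 1, -1)'.
A = np.array([[1., 1., 2.],
              [1., 0., 1.],
              [1., 1., 2.],
              [2., 0., 2.]])

def x_y(y3):
    return np.array([2.2 - y3, 2.8 - y3, y3])

a_good = np.array([1., 0., 1.])    # in the row space (second row of A)
a_bad = np.array([1., 0., 0.])     # not in the row space

good_invariant = np.isclose(a_good @ x_y(0.), a_good @ x_y(7.))
bad_varies = not np.isclose(a_bad @ x_y(0.), a_bad @ x_y(7.))
```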
6. LEAST SQUARES ESTIMATION FOR LESS THAN FULL RANK MODELS

In all of our previous examples of least squares estimation for a model of the form
    y = Xβ + ε,    (6.10)

where y is N × 1, X is N × m, β is m × 1, and ε is N × 1, we have assumed that rank(X) = m. In this case, the normal equations,

    X'Xβ = X'y,    (6.11)

yield a unique solution, the unique least squares estimator of β,
given by β̂ = (X'X)⁻¹X'y. However, in many applications, the matrix X has less than full rank.
Example 6.10. Consider the univariate one-way classification model, which was written as

    y_ij = μ + τᵢ + ε_ij,    i = 1, ..., k,  j = 1, ..., nᵢ,

in Example 3.14. This model can be written in the form of (6.10), where β =
from the definition above, the function a'β is estimable if and only if a is in the row space of X, while it follows from Theorem 6.16 that a'β̂ is unique
if and only if a is in the row space of X. In addition, since X'(XX')⁻X is the projection matrix for the row space of X, we get the more practical condition for estimability of a'β given by

    X'(XX')⁻Xa = a.    (6.13)

It follows from Theorems 5.3 and 5.25 that

    X'(XX')⁺X = X'X'⁺ = X'X'ᴸ = X'(XX')⁻X,

and so equation (6.13) is not dependent upon the Moore–Penrose inverse as the choice of the generalized inverse of XX'. Finally, we will demonstrate the invariance of the vector of fitted values ŷ = Xβ̂ and its sum of squared errors (y − ŷ)'(y − ŷ) to the choice of the least squares solution β̂. Since XX⁺X = X,
    ŷ = Xβ̂ = X{X⁺y + (I − X⁺X)u} = XX⁺y + (X − XX⁺X)u = XX⁺y,

which does not depend on the vector u. Thus, ŷ is unique, while the uniqueness of

    (y − ŷ)'(y − ŷ) = y'(I − XX⁺)y

follows immediately from the uniqueness of ŷ.

Example 6.11. Let us return to the one-way classification model of Example 6.10, where β* = (μ, τ₁, ..., τ_k)' and
         [1_{n1}  1_{n1}  0       ···  0      ]
    X* = [1_{n2}  0       1_{n2}  ···  0      ]
         [  ⋮       ⋮       ⋮      ⋱     ⋮    ].
         [1_{nk}  0       0       ···  1_{nk} ]
Since the rank of the n × (k + 1) matrix X*, where n = Σ nᵢ, is k, the least squares solution for β* is not unique. To find the form of the general solution, note that
    X*'X* = [n  n']
            [n  D_n],

while a generalized inverse is given by
    (X*'X*)⁻ = [ n⁻¹      0'   ]
               [−n⁻¹1_k   D_n⁻¹],

where n = (n₁, ..., n_k)' and D_n = diag(n₁, ..., n_k). Thus, using (6.12) we have the general solution
    β̂* = [ n⁻¹      0'   ] [ nȳ ]   [0  −n⁻¹n'  ]     [   ȳ     ]   [0  −n⁻¹n'  ]
         [−n⁻¹1_k   D_n⁻¹] [D_nȳ] + [0  n⁻¹1_kn'] u = [ȳ − ȳ1_k ] + [0  n⁻¹1_kn'] u,

where ȳ = (ȳ₁, ..., ȳ_k)' and ȳ = Σᵢ nᵢȳᵢ/n. Choosing u = 0, we get the particular least squares solution that has μ̂ = ȳ and τ̂ᵢ = ȳᵢ − ȳ for i = 1, ..., k. Since a'β* is estimable only if a is in the row space of X*, we find that the k quantities, μ + τᵢ, i = 1, ..., k, as well as any linear combinations of these quantities, are estimable. In particular, since μ + τᵢ = aᵢ'β*, where aᵢ = (1, eᵢ')', its estimator is given by aᵢ'β̂* = ȳ + (ȳᵢ − ȳ) = ȳᵢ.
The vector of fitted values is
    ŷ = X*β̂* = (ȳ₁1'_{n1}, ȳ₂1'_{n2}, ..., ȳ_k1'_{nk})',
while the sum of squared errors is given by

    (y − ŷ)'(y − ŷ) = Σᵢ₌₁ᵏ Σⱼ₌₁^{nᵢ} (y_ij − ȳᵢ)².
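The one-way fit of Example 6.11 can be reproduced numerically. The sketch below (assuming NumPy, with hypothetical data for k = 2 groups) fits the less-than-full-rank design via A⁺ and checks that the fitted values are the group means, as derived above.

```python
import numpy as np

# One-way classification fit as in Example 6.11, sketched for k = 2
# groups with hypothetical data: fitted values equal group means.
y = np.array([3., 5., 4., 10., 12.])        # n1 = 3, n2 = 2
X = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [1., 1., 0.],
              [1., 0., 1.],
              [1., 0., 1.]])                # columns: intercept, groups

beta = np.linalg.pinv(X) @ y                # one least squares solution
fitted = X @ beta                           # invariant to the choice
group_means = np.array([4., 4., 4., 11., 11.])
sse = np.sum((y - fitted) ** 2)             # sum_i sum_j (y_ij - ybar_i)^2
```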
7. SYSTEMS OF LINEAR EQUATIONS AND THE SINGULAR VALUE DECOMPOSITION

When A is square and nonsingular, the solution to the system of equations Ax = c can be conveniently expressed in terms of the inverse of A, as x = A⁻¹c. For this reason, it has seemed somewhat natural to deal with the solutions for the more general case in terms of the generalization of A⁻¹, A⁺. This is the approach that we have taken throughout this chapter. Alternatively, we can attack this problem by directly using the singular value decomposition, an approach that may offer more insight. In this case, we will always be able to transform our system to a simpler system of equations of the form

    Dy = b,    (6.14)

where y is an n × 1 vector of variables, b is an m × 1 vector of constants, and D is an m × n matrix such that d_ij = 0 if i ≠ j. In particular, D will have one of the four forms, as given in Theorem 4.1,
    (a) Δ     (b) [Δ  (0)]     (c) [ Δ ]     (d) [ Δ   (0)]
                                   [(0)]         [(0)  (0)],
where Δ is an r × r nonsingular diagonal matrix and r = rank(A). Now if D has the form given in (a), then the system (6.14) is consistent with the unique solution given by y = Δ⁻¹b. For (b), if we partition y as y = (y₁', y₂')', where y₁ is r × 1, then equation (6.14) reduces to

    Δy₁ = b.

Thus, (6.14) is consistent and has solutions of the form

    y = [Δ⁻¹b]
        [ y₂ ],

where the (n − r) × 1 vector y₂ is arbitrary. Since we then have n − r linearly independent choices for y₂, the number of linearly independent solutions is n − r if b = 0 and n − r + 1 if b ≠ 0. When D has the form given in (c), the system in (6.14) takes the form

    [ Δ ] y = [b₁]
    [(0)]     [b₂],

where b₁ is r × 1 and b₂ is (m − r) × 1, and so it is consistent only if b₂ = 0. If
this is the case, the system then has a unique solution given by

    y = Δ⁻¹b₁.

For
the final form given in (d), the system of equations in (6.14) appears as

    [ Δ   (0)] [y₁]   [b₁]
    [(0)  (0)] [y₂] = [b₂],

where y and b have been partitioned as before. As in the case of form (c), this system is consistent only if b₂ = 0, and as in the case of form (b), when consistent, it has n − r linearly independent solutions if b = 0 and n − r + 1 linearly independent solutions if b ≠ 0. The general solution is given by

    y = [Δ⁻¹b₁]
        [  y₂ ],

where the (n − r) × 1 vector y₂ is arbitrary. All of the above can now be readily applied to the general system of equations
A x: c
(6.15)
in as (1 PD : A by n ve gi A of n io sit po m co de e lu va by utilizing the sin gu lar e th es uc od pr P' by ns tio ua eq of m ste sy is th of on ati lic tip Th eo re m 4.1. Premul x (1 : y by n ve gi is es bl ria va of r cto ve e th e er system of equations in (6.14), wh ), .14 (6 to n tio lu so a is y if , ly nt ue eq ns Co c. p' : and the vector of constants is b ), (b d an ) (a s rm fo of se ca e th in , us Th ). .15 (6 then x = Q y will be a solution to (6.15) is consistent with the unique solution gi ve n by
is n tio lu so l ra ne ge e th ) (b rm fo r fo ile wh D, of when (a) is the f01'111
x = 0 '= lOl s ha Y2 Q2 rm te e Th r. to ec l·v x r) (n ry tra bi ar an where QI is n x r an d Y2 is Q2 ix atr m r) (n x n e th of ns m lu co e th ce sin Ax no effect on the value of m ste sy e th ), (d d an ) (c s rm fo of se ca the In A. of e ac sp ll f01111 a basis for the nu d an 2) hP (P = P e er wh 0, : b2 P2 at th so bl Pl : C (6.15) is consistent only if A, of e ng ra the r fo sis ba a rm fo PI of ns m lu co the ce PI is m x r; that is, sin c n tio rti pa we if , us Th A. of e ac sp n m lu co e th in is c if the system is consistent n tio lu so ue iq un e th s, ld ho ) (c rm fo en wh en th 1, rx is as c = (c ;,c ;)' where CI will be given by ,
•
235
SPARSE LINEAR SYSTEMS OF EQUATIONS
f
I
In the caSe of form (d), the general solution is
x = Qy = [QI
8. SPARSE LINEAR SYSTEMS OF EQUATIONS The typical approach to the numerical computation of solutions to a consistent system of equations Ax = c, or least squares ~olutions when the system is inconsistent, utilizes some factorization of A such as the QR factorization, the singular value decomposition, or the LV decomposition, which factors A into the product of a lower triangular matrix and upper triangular matrix. Any method of this type is referred to as a direct method. One situation in which direct methods may not be appropriate is when our system of equations is large and sparse; that is, m and n are large and a relatively large number of the elements of the m X n matrix A are equal to zero. Thus, although the size of A may be quite large, its storage will not require an enormous amount of computer memory since we only need store the nonzero values and their location. However, when A is sparse, the factors in its decompositions need not be sparse, so if A is large enough, the computation of these factorizations may easily require more memory than is available. If there is some particular structure to the sparsity of A, then it may be possible to implement a direct method that exploits this structure. A simple example of such a situation is one in which A is m x m and tridiagonal; that is, A has the form VI
WI
0
U2
V2
W2
•
• •
• •
•
• • •
0 0
0 0
0 0
A=
•
• •
• • •
• •
•
• • •
0 0
0 0
0 0
• • •
• • •
• • •
Urn  I
VrnI
Wm  I
0
Urn
Vm
In this case, if we define '1 U2
L=
•
0
• • •
'2
• ••
• •
• • •
0 0
0 0
0 0
0 0
1 0
•
•
•
• •
• •
• • •
'mI
0
• • •
Um
'rn
,
V=
SI
1
•
•
• • •
0 0
0 0
• • • •
•
•
0 0
0 0
•
•
• •
• •
•
• • •
1 0
•
•
Sm I
1
,
SYSTEMS OF LINEAR EQUATIUNS
236
where r_1 = v_1, s_i = w_i/r_i, and r_i = v_i - u_i s_{i-1} for i = 2, ..., m, then A can be factored as A = LU as long as each r_i ≠ 0. Thus, the two factors L and U are also sparse. The system Ax = c can then be easily solved by first solving the system Ly = c and then solving the system Ux = y. For more details on this and adaptations of direct methods for other structured matrices, such as banded matrices and block tridiagonal matrices, see Duff, Erisman, and Reid (1986) and Golub and Van Loan (1989).

A second approach to the solution of sparse systems of equations utilizes iterative methods. In this case, a sequence of vectors, x_0, x_1, ..., is generated, with x_0 being some initial vector, while x_j for j = 1, 2, ... is a vector that is computed using the previous vector x_{j-1}, with the property that x_j → x as j → ∞, where x is the true solution to Ax = c. Typically, the computation in these methods involves A only through its product with vectors, and this is an operation that will be easy to handle if A is sparse. Two of the oldest and simplest iterative schemes are the Jacobi and Gauss-Seidel methods. If A is m x m with nonzero diagonal elements and D denotes the diagonal matrix having the same diagonal elements as A, then the system Ax = c can be written as

Dx + (A - D)x = c,

which yields the identity

x = D^{-1}{c - (A - D)x}.

This is the motivation for the Jacobi method, which computes x_j as

x_j = D^{-1}{c - (A - D)x_{j-1}}.

On the other hand, the Gauss-Seidel method utilizes the splitting of A as A = A_1 + A_2, where A_1 is lower triangular and A_2 is upper triangular with each of its diagonal elements equal to zero. In this case, Ax = c can be rearranged as

A_1x = c - A_2x,

and this leads to the iterative scheme

A_1x_j = c - A_2x_{j-1},

which is easily solved for x_j since the system is triangular.
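The two classical iterations just described can be sketched in a few lines of NumPy. This is our own illustrative code, not from the text; the small test matrix and the fixed count of 50 iterations are arbitrary choices (the matrix is diagonally dominant, so both schemes converge).

```python
import numpy as np

def jacobi_step(A, c, x):
    """One Jacobi update: x_new = D^{-1}{c - (A - D)x}, D = diag of A."""
    d = np.diag(A)
    return (c - A @ x + d * x) / d

def gauss_seidel_step(A, c, x):
    """One Gauss-Seidel update: solve A1 x_new = c - A2 x, where A1 is the
    lower triangular part of A (diagonal included) and A2 = A - A1."""
    A1 = np.tril(A)
    A2 = A - A1
    # A1 is triangular, so this solve amounts to forward substitution
    return np.linalg.solve(A1, c - A2 @ x)

# small diagonally dominant example, so both iterations converge
A = np.array([[4.0, 1.0], [2.0, 5.0]])
c = np.array([1.0, 2.0])
x_j = np.zeros(2)
x_gs = np.zeros(2)
for _ in range(50):
    x_j = jacobi_step(A, c, x_j)
    x_gs = gauss_seidel_step(A, c, x_gs)
```

Note that each update touches A only through products with vectors (Jacobi) or a triangular solve (Gauss-Seidel), which is what makes these schemes attractive for sparse A.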
In recent years, some other more sophisticated iterative methods, requiring less computation and having better convergence properties, have been developed. We will briefly discuss a method for solving a system of equations which utilizes an algorithm known as the Lanczos algorithm [Lanczos (1950)].
For more information on this procedure, including convergence properties, generalizations to a general m x n matrix and to the problem of finding least squares solutions, as well as other iterative methods, the reader is referred to Young (1971), Hageman and Young (1981), and Golub and Van Loan (1989).

Consider the function

J(x) = \tfrac{1}{2}x'Ax - x'c,

where x is an m x 1 vector and A is an m x m positive definite matrix. The vector of partial derivatives of J(x) given by

∇J(x) = \left( \frac{∂J}{∂x_1}, ..., \frac{∂J}{∂x_m} \right)' = Ax - c

is sometimes referred to as the gradient of J(x). Setting this equal to the zero vector, we find that the vector minimizing J(x), x = A^{-1}c, is also the solution to the system Ax = c. Thus, a vector which approximately minimizes J will also be an approximate solution to Ax = c. One iterative method for finding the minimizer x involves successively finding minimizers x_j of J over a j-dimensional subspace of R^m, starting with j = 1 and continually increasing j by 1. In particular, for some set of orthonormal m x 1 vectors, q_1, ..., q_m, we will define the jth subspace as the space with the columns of the m x j matrix Q_j = (q_1, ..., q_j) as its basis. Consequently, for some j x 1 vector y_j,
x_j = Q_jy_j,     (6.16)

and

J(x_j) = J(Q_jy_j) = g(y_j),

where

g(y) = \tfrac{1}{2}y'(Q_j'AQ_j)y - y'Q_j'c.

Thus, the gradient of g(y_j) must be equal to the null vector, and so

(Q_j'AQ_j)y_j = Q_j'c.     (6.17)
To obtain x_j, we can first use (6.17) to calculate y_j and then use this in (6.16) to
get x_j. The final x_m will be the solution to Ax = c, but the goal here is to stop the iterative process before j = m with a sufficiently accurate solution x_j.

The iterative scheme described above will work with different sets of orthonormal vectors q_1, ..., q_m, but we will see that by a judicious choice of this set, we may guarantee that the computation involved in computing the x_j's will be fairly straightforward even when A is large and sparse. These vectors are also useful in an iterative procedure for obtaining a few of the largest and smallest eigenvalues of A. We will derive these vectors in the context of this eigenvalue problem and then later return to our discussion of the system of equations Ax = c. Let λ_1 and λ_m denote the largest and smallest eigenvalues of A, while λ_{1j} and λ_{jj} denote the largest and smallest eigenvalues of the j x j matrix Q_j'AQ_j. Now we have seen in Chapter 3 that λ_{1j} ≤ λ_1, λ_{jj} ≥ λ_m, and that λ_1 and λ_m are the maximum and minimum values of the Rayleigh quotient

R(x, A) = \frac{x'Ax}{x'x}.
Suppose that we have the j columns of Q_j, and we wish to find an additional vector q_{j+1} so as to form the matrix Q_{j+1} and have λ_{1,j+1} and λ_{j+1,j+1} as close to λ_1 and λ_m as possible. If u_j is a vector in the space spanned by the columns of Q_j and satisfying R(u_j, A) = λ_{1j}, then since the gradient ∇R(u_j, A) gives the direction in which R(u_j, A) is increasing most rapidly, we would want to choose q_{j+1} so that ∇R(u_j, A) is in the space spanned by the columns of Q_{j+1}. On the other hand, if v_j is a vector in the space spanned by Q_j and satisfying R(v_j, A) = λ_{jj}, then since R(v_j, A) is decreasing most rapidly in the direction given by -∇R(v_j, A), we would want to make sure that ∇R(v_j, A) is also in the space spanned by the columns of Q_{j+1}. Both of these objectives can be satisfied if the columns of Q_j are spanned by the vectors q_1, Aq_1, ..., A^{j-1}q_1 and we select q_{j+1} so that the columns of Q_{j+1} are spanned by q_1, Aq_1, ..., A^jq_1, since both ∇R(u_j, A) and ∇R(v_j, A) are of the form aAx + bx for some vector x spanned by the columns of Q_j. Thus, we start with an initial unit vector q_1, while for j ≥ 2, q_j is selected as a unit vector orthogonal to q_1, ..., q_{j-1} and such that the columns of Q_j are spanned by the vectors q_1, Aq_1, ..., A^{j-1}q_1. These particular q_j vectors are known as the Lanczos vectors. The calculation of the q_j's can be facilitated by the use of the tridiagonal factorization A = PTP', where P is orthogonal and T has the tridiagonal form
T = \begin{bmatrix} \alpha_1 & \beta_1 & 0 & \cdots & 0 & 0 \\ \beta_1 & \alpha_2 & \beta_2 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \beta_{m-2} & \alpha_{m-1} & \beta_{m-1} \\ 0 & 0 & \cdots & 0 & \beta_{m-1} & \alpha_m \end{bmatrix}.
Using this factorization, we find that if we choose P and q_1 so that Pe_1 = q_1, then

(q_1, Aq_1, ..., A^{j-1}q_1) = P(e_1, Te_1, ..., T^{j-1}e_1).
Since (e_1, Te_1, ..., T^{j-1}e_1) has upper triangular structure, this means that the first j columns of P span the column space of (q_1, Aq_1, ..., A^{j-1}q_1); that is, the q_j's can be obtained by calculating the factorization A = PTP', or in other words, we can take Q = (q_1, ..., q_m) = P. Thus, since AQ = QT, we have
Aq_1 = α_1q_1 + β_1q_2     (6.18)

and

Aq_j = β_{j-1}q_{j-1} + α_jq_j + β_jq_{j+1}     (6.19)
for j = 2, ..., m - 1. Using these equations and the orthonormality of the q_j's, it is easily shown that α_j = q_j'Aq_j for all j, and as long as p_j = (A - α_jI_m)q_j - β_{j-1}q_{j-1} ≠ 0, then β_j² = p_j'p_j and q_{j+1} = p_j/β_j for j = 1, ..., m - 1, if we define q_0 = 0. Thus, we can continue calculating the q_j's until we encounter a p_j = 0. To see the significance of this event, let us suppose that the iterative procedure has proceeded through the first j - 1 steps with p_i ≠ 0 for each i = 2, ..., j - 1, and so we have obtained the matrix Q_j whose columns form a basis for the column space of (q_1, Aq_1, ..., A^{j-1}q_1). Note that it follows immediately from the relationship AQ = QT that

AQ_j = Q_jT_j + p_je_j',
where T_j is the j x j submatrix of T consisting of its first j rows and j columns. This leads to the equation Q_j'AQ_j = T_j + Q_j'p_je_j'. But q_i'Aq_i = α_i, while it follows from (6.18) and (6.19) that q_{i+1}'Aq_i = β_i and q_k'Aq_i = 0 if k > i + 1. Thus, Q_j'AQ_j = T_j, and so we must have Q_j'p_j = 0. Now if p_j ≠ 0, then q_{j+1} = p_j/β_j is orthogonal to the columns of Q_j. Further, it follows from the fact that q_{j+1} is a linear combination of Aq_j, q_j, and q_{j-1} that the columns of Q_{j+1} = (Q_j, q_{j+1}) form a basis for the column space of (q_1, Aq_1, ..., A^jq_1). If, on the other hand, p_j = 0, then AQ_j = Q_jT_j. From this we see that the vectors A^jq_1, ..., A^{m-1}q_1 are in the space spanned by the columns of Q_j, that is, the space spanned by the
vectors q_1, Aq_1, ..., A^{j-1}q_1. Consequently, the iterative procedure is complete since there are only j q_i's.

In the iterative procedure described above, the largest and smallest eigenvalues of T_j serve as approximations to the largest and smallest eigenvalues of A. In practice, the termination of this iterative process is usually due not to the encounter of a p_j = 0, but to sufficiently accurate approximations of the eigenvalues of A.

Now let us return to the problem of solving the system of equations Ax = c through the iterative procedure based on the calculation of y_j in (6.17) and then x_j in (6.16). We will see that the choice of the Lanczos vectors as the columns of Q_j will simplify the computations involved. For this choice of Q_j, we have already seen that Q_j'AQ_j = T_j, so that the system in (6.17) is a special case of the tridiagonal system of equations discussed at the beginning of this section, special in that T_j is symmetric. As a result, the matrix T_j can be factored as T_j = L_jD_jL_j', where D_j = diag(d_1, ..., d_j) and
L_j = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ l_1 & 1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & l_{j-2} & 1 & 0 \\ 0 & \cdots & 0 & l_{j-1} & 1 \end{bmatrix},
with d_1 = α_1 and, for i = 2, ..., j, l_{i-1} = β_{i-1}/d_{i-1} and d_i = α_i - β_{i-1}l_{i-1}. Thus, the solution for y_j in (6.17) can be easily found by first solving L_jw_j = Q_j'c, then D_jz_j = w_j, and finally L_j'y_j = z_j. Even as j increases, the computation required is not extensive, since D_{j-1} and L_{j-1} are submatrices of D_j and L_j, and so in the jth iteration we only need to calculate d_j and l_{j-1} to obtain D_j and L_j from D_{j-1} and L_{j-1}.

The next step is to compute x_j from y_j using (6.16). We will see that this also may be done with a small amount of computation. Note that if we define the m x j matrix B_j = (b_1, ..., b_j) so that B_jL_j' = Q_j, then, since L_j'y_j = z_j, (6.16) gives

x_j = Q_jy_j = B_jL_j'y_j = B_jz_j,     (6.20)

where z_j is as previously defined. It will be easier to compute x_j from (6.20) than from (6.16), since B_j and z_j are simple to compute after B_{j-1} and z_{j-1} have already been calculated. For instance, from the definition of B_j, we see that b_1 = q_1 and b_i = q_i - l_{i-1}b_{i-1} for i > 1, and consequently, B_j = (B_{j-1}, b_j). Using the defining equations for w_j and z_j, we find that

L_jD_jz_j = Q_j'c.     (6.21)
If we partition z_j as z_j = (γ_{j-1}', γ_j)', where γ_{j-1} is a (j - 1) x 1 vector, then by using the fact that

L_jD_j = \begin{bmatrix} L_{j-1}D_{j-1} & 0 \\ β_{j-1}e_{j-1}' & d_j \end{bmatrix},

we see that (6.21) implies that L_{j-1}D_{j-1}γ_{j-1} = Q_{j-1}'c. But this means that γ_{j-1} = z_{j-1}, and so to compute z_j, we only need to compute γ_j, which is given by

γ_j = (q_j'c - β_{j-1}ζ_{j-1})/d_j,

where ζ_{j-1} is the last component of z_{j-1}. Thus, (6.20) becomes

x_j = B_jz_j = B_{j-1}z_{j-1} + γ_jb_j = x_{j-1} + γ_jb_j,

and so we have a simple formula for computing the jth iterative solution from b_j, γ_j, and the (j - 1)th iterative solution x_{j-1}.
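The pieces of this section can be assembled into a compact NumPy sketch: Lanczos steps produce Q_j and the tridiagonal T_j, and the LDL' recurrences then give y_j and x_j = Q_jy_j. This is our own illustrative code, not the book's; for clarity it computes the LDL' factors afresh at the given j rather than updating D_{j-1} and L_{j-1}, and it assumes no breakdown (each β_i ≠ 0 before step j).

```python
import numpy as np

def lanczos_solve(A, c, j):
    """Sketch of the section's iterative solver for symmetric positive
    definite A: j Lanczos steps give Q_j and the tridiagonal T_j = Q_j'AQ_j
    (diagonal alpha, off-diagonal beta); T_j y_j = Q_j'c is then solved via
    T_j = L_j D_j L_j', and x_j = Q_j y_j as in (6.16)."""
    m = A.shape[0]
    Q = np.zeros((m, j))
    alpha = np.zeros(j)
    beta = np.zeros(j)
    Q[:, 0] = c / np.linalg.norm(c)        # initial Lanczos vector q1
    for i in range(j):
        alpha[i] = Q[:, i] @ A @ Q[:, i]
        p = A @ Q[:, i] - alpha[i] * Q[:, i]
        if i > 0:
            p = p - beta[i - 1] * Q[:, i - 1]
        beta[i] = np.linalg.norm(p)
        if i + 1 < j:
            Q[:, i + 1] = p / beta[i]      # assumes no breakdown (beta != 0)
    # LDL' factors of T_j: d_1 = alpha_1, l_{i-1} = beta_{i-1}/d_{i-1},
    # d_i = alpha_i - beta_{i-1} l_{i-1}
    d = np.zeros(j)
    l = np.zeros(max(j - 1, 0))
    d[0] = alpha[0]
    for i in range(1, j):
        l[i - 1] = beta[i - 1] / d[i - 1]
        d[i] = alpha[i] - beta[i - 1] * l[i - 1]
    rhs = Q.T @ c
    w = np.zeros(j)                        # solve L_j w = Q_j'c
    w[0] = rhs[0]
    for i in range(1, j):
        w[i] = rhs[i] - l[i - 1] * w[i - 1]
    z = w / d                              # solve D_j z = w
    y = np.zeros(j)                        # solve L_j' y = z
    y[-1] = z[-1]
    for i in range(j - 2, -1, -1):
        y[i] = z[i] - l[i] * y[i + 1]
    return Q @ y                           # x_j = Q_j y_j

# with j = m the subspace is all of R^m, so x_m solves Ax = c exactly
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A_spd = M @ M.T + 6.0 * np.eye(6)
c_vec = rng.standard_normal(6)
x_m = lanczos_solve(A_spd, c_vec, 6)
```

In practice one would stop at some j < m once the residual is small enough, and update the factors incrementally as the text describes.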
PROBLEMS

1. Consider the system of equations Ax = c, where A is the 4 x 3 matrix given in Problem 5.2 and

c = \begin{bmatrix} 1 \\ 3 \\ 1 \\ 0 \end{bmatrix}.

(a) Show that the system is consistent.
(b) Find a solution to this system of equations.
(c) How many linearly independent solutions are there?

2. The system of equations Ax = c has A equal to the 3 x 4 matrix given in Problem 5.36 and
c = \begin{bmatrix} 1 \\ 1 \\ 4 \end{bmatrix}.
(a) Show that the system of equations is consistent.
(b) Give the general solution.
(c) Find r, the number of linearly independent solutions.
(d) Give a set of r linearly independent solutions.

3. Suppose the system of equations Ax = c has
A = \begin{bmatrix} 5 & 1 & 1 \\ 2 & 1 & 0 \\ 3 & 1 & 2 \\ 2 & 1 & 3 \end{bmatrix}.
For each c given below, determine whether or not the system of equations is consistent.
(a) c = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \qquad (b) c = \begin{bmatrix} 3 \\ 2 \\ 1 \\ 1 \end{bmatrix}, \qquad (c) c = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}.
4. Consider the system of equations Ax = c, where

A = \begin{bmatrix} 1 & 1 & 1 & 0 & 1 \\ 2 & 1 & 1 & 2 & 1 \end{bmatrix}, \qquad c = \begin{bmatrix} 3 \\ 1 \end{bmatrix}.

(a) Show that the system of equations is consistent.
(b) Give the general solution.
(c) Find r, the number of linearly independent solutions.
(d) Give a set of r linearly independent solutions.
5. Prove Theorem 6.5.

6. Consider the system of equations AXB = C, where X is a 3 x 3 matrix of variables and
A = \begin{bmatrix} 1 & 3 & 1 \\ 3 & 2 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 0 \end{bmatrix}, \qquad C = \begin{bmatrix} 4 & 2 \\ 2 & 1 \end{bmatrix}.

(a) Show that the system of equations is consistent.
(b) Find the form of the general solution to this system.
7. The general solution of a consistent system of equations was given in Theorem 6.4 as A^-c + (I_n - A^-A)y. Show that the two vectors A^-c and (I_n - A^-A)y are linearly independent if c ≠ 0 and y ≠ 0.

8. Suppose the m x n matrix A and m x 1 vector c ≠ 0 are such that A^-c is the same for all choices of A^-. Use Theorem 5.23 to show that, if Ax = c is a consistent system of equations, then it has a unique solution.

9. For the homogeneous system of equations Ax = 0 in which
A = \begin{bmatrix} 2 & 3 & 3 & 1 \\ 1 & 2 & 0 & 2 \end{bmatrix},
determine r, the number of linearly independent solutions, and find a set of r linearly independent solutions.

10. Show that if the system of equations AXB = C is consistent, then the solution is unique if and only if A has full column rank and B has full row rank.

11. Let
A = \begin{bmatrix} 1 & 2 & 0 & 1 \\ 2 & 1 & 1 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 2 & 1 & 3 & 1 \\ 1 & 1 & 2 & 1 \end{bmatrix}, \qquad c = \begin{bmatrix} 2 \\ 4 \end{bmatrix}, \qquad d = \begin{bmatrix} 2 \\ 2 \end{bmatrix}.
(a) Show that the system Ax = c is consistent and has three linearly independent solutions.
(b) Show that the system Bx = d is consistent and has three linearly independent solutions.
(c) Show that the systems Ax = c and Bx = d have a common solution and that this common solution is unique.

12. Consider the systems of equations AX = C and XB = D, where A is m x n, B is p x q, C is m x p, and D is n x q.
(a) Show that the two systems of equations have a common solution X if and only if each system is consistent and AD = CB.
(b) Show that the general common solution is given by

X = A^-C + DB^- - A^-ADB^- + (I_n - A^-A)Y(I_p - BB^-),

where Y is an arbitrary n x p matrix.
13. In Exercise 5.37, a least squares inverse was found for the matrix

A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 3 & 1 & 2 \\ 1 & 1 & 1 & 1 \end{bmatrix}.

(a) Use this least squares inverse to show that the system of equations Ax = c is inconsistent, where c' = (2, 1, 5).
(b) Find a least squares solution.
(c) Compute the sum of squared errors for a least squares solution to this system of equations.

14. Consider the system of equations Ax = c, where
A = \begin{bmatrix} 0 & 1 & 2 \\ 1 & 2 & 2 \\ 2 & 1 & 5 \\ 1 & 2 & 0 \end{bmatrix}, \qquad c = \begin{bmatrix} 2 \\ 3 \\ 0 \\ 3 \end{bmatrix}.

(a) Find a least squares inverse of A.
(b) Show that the system of equations is inconsistent.
(c) Find a least squares solution.
(d) Is this solution unique?
15. Show that x* is a least squares solution to the system of equations Ax = c if and only if

A'Ax* = A'c.
16. Let A be an m x n matrix, and let x*, y*, and c be n x 1, m x 1, and m x 1 vectors, respectively. Suppose that x* and y* are such that the system of equations

\begin{bmatrix} I_m & A \\ A' & (0) \end{bmatrix} \begin{bmatrix} y* \\ x* \end{bmatrix} = \begin{bmatrix} c \\ 0 \end{bmatrix}

holds. Show that x* must then be a least squares solution to the system Ax = c.
17. The balanced two-way classification model with interaction is of the form

y_{ijk} = μ + τ_i + γ_j + η_{ij} + ε_{ijk},
where i = 1, ..., a, j = 1, ..., b, and k = 1, ..., n. The parameter μ represents an overall effect, τ_i is an effect due to the ith level of factor one, γ_j is an effect due to the jth level of factor two, and η_{ij} is an effect due to the interaction of the ith and jth levels of factors one and two; as usual, the ε_{ijk}'s represent independent random errors, each distributed as N(0, σ²).
(a) Set up the vectors y, β, and ε and the matrix X so that the two-way model above can be written in the matrix form y = Xβ + ε.
(b) Find the rank r of X. Determine a set of r linearly independent estimable functions of the parameters μ, τ_i, γ_j, and η_{ij}.
(c) Find a least squares solution for the parameter vector β.
18. Consider the regression model

y = Xβ + ε,

where X is N x m, ε ~ N_N(0, σ²C), and C is a known positive definite matrix. In Example 4.6, for the case in which X has full column rank, we obtained the generalized least squares estimator β̂ = (X'C^{-1}X)^{-1}X'C^{-1}y, which minimizes

(y - Xβ)'C^{-1}(y - Xβ).     (6.22)

Show that if X is less than full column rank, then the generalized least squares estimator of β which minimizes (6.22) is given by

β̂ = (X'C^{-1}X)^-X'C^{-1}y + (I_m - (X'C^{-1}X)^-X'C^{-1}X)u,

where u is an arbitrary m x 1 vector.
19. Restricted least squares obtains the vectors β that minimize (y - Xβ)'(y - Xβ) subject to the restriction that β satisfies Bβ = b, where B is p x m and b is p x 1 such that BB^-b = b. Use Theorem 6.4 to find the general solution β_u to the consistent system of equations Bβ = b, where β_u depends on an arbitrary vector u. Substitute this expression for β into (y - Xβ)'(y - Xβ), and then use Theorem 6.14 to obtain the general least squares solution u_w for u, where u_w depends on an arbitrary vector w. Substitute u_w for u in β_u to show that the general restricted least squares solution for β is given by

β̂_w = B^-b + (I - B^-B){[X(I - B^-B)]^L(y - XB^-b) + (I - [X(I - B^-B)]^LX(I - B^-B))w}.
20. In the previous exercise, show that if we use the Moore-Penrose inverse as the least squares inverse of [X(I - B^-B)] in the expression given for β̂_w, then it simplifies to

β̂_M = B^-b + [X(I - B^-B)]^+(y - XB^-b) + (I - B^-B){I - [X(I - B^-B)]^+X(I - B^-B)}w.
21. Consider the iterative procedure, based on the Lanczos vectors, for solving the system of equations Ax = c. Suppose that for the initial Lanczos vector q_1 we use (c'c)^{-1/2}c.
(a) Show that if for some j, p_j = (A - α_jI_m)q_j - β_{j-1}q_{j-1} = 0, then Ax_j = c.
(b) Show that for any j, the procedure easily yields a measure of the adequacy of the jth iterative solution, since

Ax_j - c = β_jy_{jj}q_{j+1},

where y_{jj} is the jth component of the vector y_j in (6.16).
CHAPTER SEVEN
Special Matrices and Matrix Operators
1. INTRODUCTION

The concept of partitioning matrices was first introduced in Chapter 1, and we have subsequently used partitioned matrices throughout this text. In this chapter we develop some specialized formulas for the determinant and inverse of partitioned matrices. In addition to partitioned matrices, we will look at some other special types of structured matrices that we have not previously discussed. In this chapter, we will also introduce and develop properties of some special matrix operators. In many situations, a seemingly complicated matrix expression can be written in a fairly simple form by making use of one or more of these matrix operators.
2. PARTITIONED MATRICES

Up to this point, most of our applications involving partitioned matrices have utilized only the simple operations of matrix addition and multiplication. In this section, we will obtain expressions for the inverse and determinant of an m x m matrix A that is partitioned into the 2 x 2 block form given by
A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix},     (7.1)
where A_{11} is m_1 x m_1, A_{12} is m_1 x m_2, A_{21} is m_2 x m_1, and A_{22} is m_2 x m_2. We wish to obtain expressions for the inverse and determinant of A in terms of its submatrices. We begin with the inverse of A.
Theorem 7.1. Let the m x m matrix A be partitioned as in (7.1), and suppose that A, A_{11}, and A_{22} are nonsingular matrices. For notational convenience, write B = A^{-1} and partition B as
B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix},

where the submatrices of B are of the same sizes as the corresponding submatrices of A. Then we have
(a) B_{11} = (A_{11} - A_{12}A_{22}^{-1}A_{21})^{-1} = A_{11}^{-1} + A_{11}^{-1}A_{12}B_{22}A_{21}A_{11}^{-1},
(b) B_{22} = (A_{22} - A_{21}A_{11}^{-1}A_{12})^{-1} = A_{22}^{-1} + A_{22}^{-1}A_{21}B_{11}A_{12}A_{22}^{-1},
(c) B_{12} = -A_{11}^{-1}A_{12}B_{22},
(d) B_{21} = -A_{22}^{-1}A_{21}B_{11}.
Proof. The matrix equation

AB = \begin{bmatrix} I_{m_1} & (0) \\ (0) & I_{m_2} \end{bmatrix}

yields the four equations

A_{11}B_{11} + A_{12}B_{21} = I_{m_1},     (7.2)
A_{21}B_{12} + A_{22}B_{22} = I_{m_2},     (7.3)
A_{11}B_{12} + A_{12}B_{22} = (0),     (7.4)
A_{21}B_{11} + A_{22}B_{21} = (0).     (7.5)

Solving (7.4) and (7.5) for B_{12} and B_{21}, respectively, immediately leads to the expressions given in (c) and (d). Substituting these solutions for B_{12} and B_{21} into (7.2) and (7.3) and solving for B_{11} and B_{22} yields the first expressions given for B_{11} and B_{22} in (a) and (b). The second expressions in (a) and (b) follow immediately from the first after using Theorem 1.7. □
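The four formulas of Theorem 7.1 are easy to verify numerically. The following is our own NumPy sketch (the random test matrix and the added multiple of the identity, used to keep every block and Schur complement nonsingular, are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2 = 2, 3
A = rng.standard_normal((m1 + m2, m1 + m2)) + 5.0 * np.eye(m1 + m2)
A11, A12 = A[:m1, :m1], A[:m1, m1:]
A21, A22 = A[m1:, :m1], A[m1:, m1:]

B11 = np.linalg.inv(A11 - A12 @ np.linalg.inv(A22) @ A21)   # part (a)
B22 = np.linalg.inv(A22 - A21 @ np.linalg.inv(A11) @ A12)   # part (b)
B12 = -np.linalg.inv(A11) @ A12 @ B22                       # part (c)
B21 = -np.linalg.inv(A22) @ A21 @ B11                       # part (d)
B = np.block([[B11, B12], [B21, B22]])
```

Assembling the four blocks reproduces the inverse computed directly from A.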
Example 7.1. Consider the regression model

y = Xβ + ε,

where y is N x 1, X is N x (k + 1), β is (k + 1) x 1, and ε is N x 1. Suppose that β and X are partitioned as β = (β_1', β_2')' and X = (X_1, X_2) so that the product X_1β_1 is defined, and we are interested in comparing the complete regression model given above to the reduced regression model

y = X_1β_1 + ε.

If X has full column rank, then the least squares estimators for the two models are β̂ = (X'X)^{-1}X'y and β̂_1 = (X_1'X_1)^{-1}X_1'y, respectively, and the difference in the sums of squared errors for the two models,

(y - X_1β̂_1)'(y - X_1β̂_1) - (y - Xβ̂)'(y - Xβ̂)
  = y'(I - X_1(X_1'X_1)^{-1}X_1')y - y'(I - X(X'X)^{-1}X')y
  = y'X(X'X)^{-1}X'y - y'X_1(X_1'X_1)^{-1}X_1'y,     (7.6)

gives the reduction in the sum of squared errors attributable to the inclusion of the term X_2β_2 in the complete model. By using the geometrical properties of least squares regression, in Example 2.10 we showed that this reduction in the sum of squared errors simplifies to

y'X_{2*}(X_{2*}'X_{2*})^{-1}X_{2*}'y,

where X_{2*} = (I - X_1(X_1'X_1)^{-1}X_1')X_2. An alternative way of showing this, which we illustrate here, uses Theorem 7.1. Now X'X can be partitioned as

X'X = \begin{bmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{bmatrix},

and so if we let C = (X_{2*}'X_{2*})^{-1} = (X_2'X_2 - X_2'X_1(X_1'X_1)^{-1}X_1'X_2)^{-1}, we find from a direct application of Theorem 7.1 that

(X'X)^{-1} = \begin{bmatrix} (X_1'X_1)^{-1} + (X_1'X_1)^{-1}X_1'X_2CX_2'X_1(X_1'X_1)^{-1} & -(X_1'X_1)^{-1}X_1'X_2C \\ -CX_2'X_1(X_1'X_1)^{-1} & C \end{bmatrix}.

Substituting this into (7.6) and then simplifying, we get y'X_{2*}(X_{2*}'X_{2*})^{-1}X_{2*}'y, as required.

Before obtaining an expression for the determinant of A in the general case, we will first consider some special cases.

Theorem 7.2. Let the m x m matrix A be partitioned as in (7.1). If A_{22} = I_{m_2} and A_{12} = (0) or A_{21} = (0), then |A| = |A_{11}|.
Proof. Suppose that A_{12} = (0). To find the determinant

|A| = \begin{vmatrix} A_{11} & (0) \\ A_{21} & I_{m_2} \end{vmatrix},

first use the cofactor expansion formula for a determinant on the last column of A to obtain

|A| = \begin{vmatrix} A_{11} & (0) \\ B & I_{m_2 - 1} \end{vmatrix},

where B is the (m_2 - 1) x m_1 matrix obtained by deleting the last row from A_{21}. Repeating this process another (m_2 - 1) times yields |A| = |A_{11}|. In a similar fashion, we obtain |A| = |A_{11}| when A_{21} = (0) by repeatedly expanding along the last row. □
Clearly we have a result analogous to Theorem 7.2 when A_{11} = I_{m_1} and A_{12} = (0) or A_{21} = (0). Also, Theorem 7.2 can be generalized to the following.
Theorem 7.3. Let the m x m matrix A be partitioned as in (7.1). If A_{12} = (0) or A_{21} = (0), then |A| = |A_{11}||A_{22}|.
Proof. Suppose that A_{12} = (0). Observe that

|A| = \begin{vmatrix} A_{11} & (0) \\ A_{21} & A_{22} \end{vmatrix} = \begin{vmatrix} A_{11} & (0) \\ A_{21} & I_{m_2} \end{vmatrix} \begin{vmatrix} I_{m_1} & (0) \\ (0) & A_{22} \end{vmatrix} = |A_{11}||A_{22}|,

where the last equality follows from Theorem 7.2. A similar proof yields |A| = |A_{11}||A_{22}| when A_{21} = (0). □
We are now ready to find an expression for the determinant of A in the general case.

Theorem 7.4. Let the m x m matrix A be partitioned as in (7.1). Then
(a) |A| = |A_{22}||A_{11} - A_{12}A_{22}^{-1}A_{21}|, if A_{22} is nonsingular, and
(b) |A| = |A_{11}||A_{22} - A_{21}A_{11}^{-1}A_{12}|, if A_{11} is nonsingular.

Proof.
Suppose that A22 is nonsingular. Note that in this case the identity
\begin{bmatrix} I_{m_1} & -A_{12}A_{22}^{-1} \\ (0) & I_{m_2} \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} I_{m_1} & (0) \\ -A_{22}^{-1}A_{21} & I_{m_2} \end{bmatrix} = \begin{bmatrix} A_{11} - A_{12}A_{22}^{-1}A_{21} & (0) \\ (0) & A_{22} \end{bmatrix}

holds. After taking the determinant of both sides of this identity and using the previous theorem, we immediately get (a). The proof of (b) is similar. □
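Both determinant formulas of Theorem 7.4 can be checked numerically; the following is our own NumPy sketch (the test matrix is arbitrary, shifted by a multiple of the identity so that the diagonal blocks are nonsingular):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

# |A| = |A22| |A11 - A12 A22^{-1} A21|  and  |A| = |A11| |A22 - A21 A11^{-1} A12|
det_a = np.linalg.det(A22) * np.linalg.det(A11 - A12 @ np.linalg.inv(A22) @ A21)
det_b = np.linalg.det(A11) * np.linalg.det(A22 - A21 @ np.linalg.inv(A11) @ A12)
```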
Example 7.2. We will find the determinant and inverse of the 2m x 2m matrix A given by

A = \begin{bmatrix} aI_m & 1_m1_m' \\ 1_m1_m' & bI_m \end{bmatrix},

where a and b are nonzero scalars. Using (a) of Theorem 7.4, we find that

|A| = |bI_m| \left| aI_m - 1_m1_m'(bI_m)^{-1}1_m1_m' \right| = b^m \left| aI_m - \frac{m}{b}1_m1_m' \right| = a^{m-1}b^{m-1}(ab - m^2),

where we have used the result of Problem 3.18(e) in the last step. The matrix A will be nonsingular if |A| ≠ 0 or, equivalently, if ab ≠ m². In this case, using Theorem 7.1, we find that

B_{11} = \left( aI_m - \frac{m}{b}1_m1_m' \right)^{-1} = a^{-1}\left( I_m + \frac{m}{ab - m^2}1_m1_m' \right),

where this last expression follows from Problem 3.18(d). In a similar fashion, we find that

B_{22} = b^{-1}\left( I_m + \frac{m}{ab - m^2}1_m1_m' \right).

The remaining submatrices of B = A^{-1} are given by

B_{12} = -A_{11}^{-1}A_{12}B_{22} = -\frac{1}{ab - m^2}1_m1_m'

and, since A is symmetric, B_{21} = B_{12}' = B_{12}. Putting this all together, we have

A^{-1} = \begin{bmatrix} a^{-1}(I_m + mc1_m1_m') & -c1_m1_m' \\ -c1_m1_m' & b^{-1}(I_m + mc1_m1_m') \end{bmatrix},

where c = (ab - m^2)^{-1}.
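A quick numerical check of Example 7.2 (our own NumPy sketch; the values m = 3, a = 5, b = 2 are arbitrary, chosen so that ab ≠ m²):

```python
import numpy as np

m, a, b = 3, 5.0, 2.0                 # ab != m^2, so A is nonsingular
J = np.ones((m, m))                   # J = 1m 1m'
A = np.block([[a * np.eye(m), J], [J, b * np.eye(m)]])

cval = 1.0 / (a * b - m ** 2)         # c = (ab - m^2)^{-1}
B11 = (np.eye(m) + m * cval * J) / a
B22 = (np.eye(m) + m * cval * J) / b
B12 = -cval * J
A_inv = np.block([[B11, B12], [B12, B22]])
det_A = (a * b) ** (m - 1) * (a * b - m ** 2)
```

Multiplying A by the assembled inverse returns the identity, and det_A matches the determinant computed directly.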
We will use Theorem 7.4 to establish the following useful result.
Theorem 7.5. Let A and B be m x n and n x m matrices, respectively. Then

|I_m + AB| = |I_n + BA|.
Proof. Note that

\begin{bmatrix} I_m & A \\ -B & I_n \end{bmatrix} \begin{bmatrix} I_m & (0) \\ B & I_n \end{bmatrix} = \begin{bmatrix} I_m + AB & A \\ (0) & I_n \end{bmatrix},

so that by taking the determinant of both sides and using Theorem 7.4, we obtain the identity

\begin{vmatrix} I_m & A \\ -B & I_n \end{vmatrix} = |I_m + AB|.     (7.7)

Similarly, observe that

\begin{bmatrix} I_m & (0) \\ B & I_n \end{bmatrix} \begin{bmatrix} I_m & A \\ -B & I_n \end{bmatrix} = \begin{bmatrix} I_m & A \\ (0) & I_n + BA \end{bmatrix},

so that

\begin{vmatrix} I_m & A \\ -B & I_n \end{vmatrix} = |I_n + BA|.     (7.8)

The result now follows by equating (7.7) and (7.8). □
Our final result follows directly from Theorem 7.5 if we replace A by λ^{-1}A.
Corollary 7.5.1. Let A and B be m x n and n x m matrices, respectively. Then the nonzero eigenvalues of AB are the same as the nonzero eigenvalues of BA.
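Theorem 7.5 and Corollary 7.5.1 are easily illustrated numerically. This is our own NumPy sketch (random 2 x 4 and 4 x 2 matrices; the 1e-8 threshold for "nonzero" is an arbitrary tolerance):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 4))
B = rng.standard_normal((4, 2))

det_m = np.linalg.det(np.eye(2) + A @ B)     # |I_m + AB|
det_n = np.linalg.det(np.eye(4) + B @ A)     # |I_n + BA|

# BA has two extra zero eigenvalues; the nonzero ones match those of AB
eig_AB = np.sort_complex(np.linalg.eigvals(A @ B))
eig_BA = np.linalg.eigvals(B @ A)
eig_BA_nz = np.sort_complex(eig_BA[np.abs(eig_BA) > 1e-8])
```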
3. THE KRONECKER PRODUCT be to em th its lm pe at th re tu uc str of pe ty ial ec So m e matrices possess a sp of t, uc od pr r ke ec on Kr e th as to d rre fe re ly on m expressed as a product, com the n the , ix atr m q x f? a is B d an ix atr m n x m an two ot he r matrices. If A is' IIl I matrix x /lip the is B, ® A by ted no de B, d an A Kronecker product of
ai lB a21 B
al2 B a22 B
• •
• •
•
•
amlB
am 2B
•
• •
•
• •
al"B a2"B
(7 .9)
• •
•
•
•
•
alll"B
ht rig the as n ow kn ly ise ec pr re mo is e ov ab Jh e Kronecker product defined od pr r ke ec on Kr of on iti fin de on m m co t os m the Kronecker product, and it is ll bi ay Gr e, pl am ex r [fo s or th au e m so r, ve we uc t appearing in the literature. Ho s ha ich wh t, uc od pr r ke ec on Kr t lef e th as t uc (1983)] define the Kronecker prod the to e nc re fe re y an , ok bo is th ut ho ug ro Th . .9) (7 B ® A as the matrix given in esp e Th t. uc od pr r ke ec on Kr ht rig e th to ng ni fe Kronecker product will be re the r fo s ula llll fO ed ifi pl sim to ds lea ) .9 (7 in n ve cial structure of the matrix gi is th In . es lu va en eig d an t, an in ln tel de , rse ve in its as computation of such things re mo the of e m so as ll we as as lul lll fO e es th of e m so section, we will develop basic properties of the Kronecker product. is B ® A t uc od pr r ke ec on Kr e th , on ati lic tip ul m ix atr Unlike ordinary m x tri ma ry na di or th wi as r, ve we Ho B. d an A defined regardless of the sizes of is as ve ati ut m m co l, ra ne ge in t, no is t uc od multiplication, the Kronecker pr demonstrated in the following example.
Example 7.3. Let A and B be the 1 x 3 and 2 x 2 matrices given by

A = [0 \;\; 1 \;\; 2], \qquad B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}.

Then we find that

A ⊗ B = [0B \;\; 1B \;\; 2B] = \begin{bmatrix} 0 & 0 & 1 & 2 & 2 & 4 \\ 0 & 0 & 3 & 4 & 6 & 8 \end{bmatrix},

while

B ⊗ A = \begin{bmatrix} 1A & 2A \\ 3A & 4A \end{bmatrix} = \begin{bmatrix} 0 & 1 & 2 & 0 & 2 & 4 \\ 0 & 3 & 6 & 0 & 4 & 8 \end{bmatrix}.
Some of the basic properties of the Kronecker product, which are easily proven from its definition, are summarized below. The proofs are left to the reader as an exercise.

Theorem 7.6. Let A, B, and C be any matrices and a and b be any two vectors. Then
(a) α ⊗ A = A ⊗ α = αA, for any scalar α,
(b) (αA) ⊗ (βB) = αβ(A ⊗ B), for any scalars α and β,
(c) (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C),
(d) (A + B) ⊗ C = (A ⊗ C) + (B ⊗ C), if A and B are of the same size,
(e) A ⊗ (B + C) = (A ⊗ B) + (A ⊗ C), if B and C are of the same size,
(f) (A ⊗ B)' = A' ⊗ B',
(g) ab' = a ⊗ b' = b' ⊗ a.

We have the following very useful property involving the Kronecker product and ordinary matrix multiplication.
Theorem 7.7. Let A, B, C, and D be matrices of sizes m x h, p x k, h x n, and k x q, respectively. Then

(A ⊗ B)(C ⊗ D) = AC ⊗ BD.     (7.10)
Proof. The left-hand side of (7.10) is

\begin{bmatrix} a_{11}B & \cdots & a_{1h}B \\ \vdots & & \vdots \\ a_{m1}B & \cdots & a_{mh}B \end{bmatrix} \begin{bmatrix} c_{11}D & \cdots & c_{1n}D \\ \vdots & & \vdots \\ c_{h1}D & \cdots & c_{hn}D \end{bmatrix} = \begin{bmatrix} F_{11} & \cdots & F_{1n} \\ \vdots & & \vdots \\ F_{m1} & \cdots & F_{mn} \end{bmatrix},

where

F_{il} = \left( \sum_{k=1}^{h} a_{ik}c_{kl} \right) BD.
The result now follows since
AC ⊗ BD = \begin{bmatrix} (AC)_{11}BD & \cdots & (AC)_{1n}BD \\ \vdots & & \vdots \\ (AC)_{m1}BD & \cdots & (AC)_{mn}BD \end{bmatrix},

where (AC)_{il} = \sum_{k=1}^{h} a_{ik}c_{kl}. □
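The mixed-product property (7.10) can be verified directly with NumPy's `kron`. This is our own sketch; the dimensions m = 2, h = 3, p = 4, k = 2, n = 5, q = 3 are arbitrary choices consistent with the theorem:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 3))   # m x h
B = rng.standard_normal((4, 2))   # p x k
C = rng.standard_normal((3, 5))   # h x n
D = rng.standard_normal((2, 3))   # k x q

lhs = np.kron(A, B) @ np.kron(C, D)   # (A ⊗ B)(C ⊗ D)
rhs = np.kron(A @ C, B @ D)           # AC ⊗ BD
```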
Our next result demonstrates that the trace of the Kronecker product A ⊗ B can be expressed in terms of the trace of A and the trace of B when A and B are square matrices.
Theorem 7.8. Let A be an m x m matrix and B be a p x p matrix. Then

tr(A ⊗ B) = tr(A)tr(B).
Proof. Using (7.9) when n = m, we see that

tr(A ⊗ B) = \sum_{i=1}^{m} a_{ii}\,tr(B) = \left( \sum_{i=1}^{m} a_{ii} \right) tr(B) = tr(A)\,tr(B),

so that the result holds. □
Theorem 7.8 gives a simplified expression for the trace of a Kronecker product. There is an analogous result for the determinant of a Kronecker product. But before we get to that, let us first consider the inverse of A ⊗ B and the eigenvalues of A ⊗ B when A and B are square matrices.
Theorem 7.9. Let A be an m x n matrix and B be a p x q matrix. Then
(a) (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}, if m = n, p = q, and A ⊗ B is nonsingular,
(b) (A ⊗ B)^+ = A^+ ⊗ B^+,
(c) (A ⊗ B)^- = A^- ⊗ B^-, for any generalized inverses, A^- and B^-, of A and B.
Proof. Using Theorem 7.7, we find that

(A ⊗ B)(A^{-1} ⊗ B^{-1}) = AA^{-1} ⊗ BB^{-1} = I_m ⊗ I_p = I_{mp},

so (a) holds. We will leave the verification of (b) and (c) as an exercise for the reader. □
Theorem 7.10. Let λ_1, ..., λ_m be the eigenvalues of the m x m matrix A, and let δ_1, ..., δ_p be the eigenvalues of the p x p matrix B. Then the set of mp eigenvalues of A ⊗ B is given by {λ_iδ_j: i = 1, ..., m; j = 1, ..., p}.
Proof. It follows from Theorem 4.12 that there exist nonsingular matrices P and Q such that P^{-1}AP = T_1 and Q^{-1}BQ = T_2, where T_1 and T_2 are upper triangular matrices with the eigenvalues of A and B, respectively, as diagonal elements. The eigenvalues of A ⊗ B are the same as those of

(P ⊗ Q)^{-1}(A ⊗ B)(P ⊗ Q) = (P^{-1} ⊗ Q^{-1})(A ⊗ B)(P ⊗ Q) = P^{-1}AP ⊗ Q^{-1}BQ = T_1 ⊗ T_2,
which must be upper triangular since T_1 and T_2 are upper triangular. The result now follows since the eigenvalues of T_1 ⊗ T_2 are its diagonal elements, and these are clearly given by {λ_iδ_j: i = 1, ..., m; j = 1, ..., p}. □

A simplified expression for the determinant of A ⊗ B, when A and B are square matrices, is most easily obtained by using the fact that the determinant of a matrix is given by the product of its eigenvalues.
Theorem 7.11. Let A be an m x m matrix and B be a p x p matrix. Then

|A ⊗ B| = |A|^p|B|^m.
Proof. Let λ_1, ..., λ_m be the eigenvalues of A, and let δ_1, ..., δ_p be the eigenvalues of B. Then we have

|A| = \prod_{i=1}^{m} λ_i, \qquad |B| = \prod_{j=1}^{p} δ_j,

and from the previous theorem

|A ⊗ B| = \prod_{i=1}^{m} \prod_{j=1}^{p} λ_iδ_j = \left( \prod_{i=1}^{m} λ_i^p \right) \left( \prod_{j=1}^{p} δ_j \right)^m = |A|^p|B|^m. □
Our final result on Kronecker products identifies a relationship between rank(A ⊗ B) and rank(A) and rank(B).
Theorem 7.12. Let A be an m × n matrix and B be a p × q matrix. Then

rank(A ⊗ B) = rank(A) rank(B)

Proof. Our proof utilizes Theorem 3.11, which states that the rank of a symmetric matrix equals the number of its nonzero eigenvalues. Although A ⊗ B as given is not necessarily symmetric, the matrix (A ⊗ B)(A ⊗ B)' is symmetric, as well as AA' and BB'. Now from Theorem 2.10 we have

rank(A ⊗ B) = rank{(A ⊗ B)(A ⊗ B)'} = rank(AA' ⊗ BB')

Since AA' ⊗ BB' is symmetric, its rank is given by the number of its nonzero eigenvalues. Now if η_1, ..., η_m are the eigenvalues of AA' and δ_1, ..., δ_p are the eigenvalues of BB', then, by Theorem 7.10, the eigenvalues of AA' ⊗ BB' are given by {η_i δ_j: i = 1, ..., m, j = 1, ..., p}. Clearly, the number of nonzero values in this set is the number of nonzero η_i's times the number of nonzero δ_j's. But, since AA' and BB' are symmetric, the number of nonzero η_i's is given by rank(AA') = rank(A), and the number of nonzero δ_j's is given by rank(BB') = rank(B). The proof is now complete. □
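Theorem 7.12 can likewise be checked numerically. The sketch below (NumPy, illustrative only) builds deliberately rank-deficient factors and confirms that the rank of the Kronecker product is the product of the ranks.

```python
import numpy as np

rng = np.random.default_rng(1)
# Rank-deficient factors: A is 4 x 5 with rank 2, B is 3 x 4 with rank 2
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))
B = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 4))

rank = np.linalg.matrix_rank
assert rank(A) == 2 and rank(B) == 2
assert rank(np.kron(A, B)) == rank(A) * rank(B)   # Theorem 7.12
```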
Example 7.4. The computations involved in an analysis of variance are sometimes particularly well suited for the use of the Kronecker product. For example, consider the univariate one-way classification model

y_ij = μ + τ_i + ε_ij,

which was discussed in Examples 3.14, 6.10, and 6.11. Suppose that we have the same number of observations available from each of the k treatments, so that j = 1, ..., n for each i. In this case, the model may be written as

y = Xβ + ε,

where X = (1_k ⊗ 1_n, I_k ⊗ 1_n), β = (μ, τ_1, ..., τ_k)', y = (y_1', ..., y_k')', and y_i = (y_i1, ..., y_in)'. Consequently, a least squares solution for β is easily computed as
β̂ = (X'X)⁻X'y = [ nk     n1_k' ]⁻ [ (1_k ⊗ 1_n)'y ]
                [ n1_k   nI_k  ]  [ (I_k ⊗ 1_n)'y ]

  = [ (nk)⁻¹(1_k' ⊗ 1_n')y                          ]
    [ {n⁻¹(I_k ⊗ 1_n') − (nk)⁻¹(1_k1_k' ⊗ 1_n')}y ]

This yields

μ̂ = ȳ  and  τ̂_i = ȳ_i − ȳ,

where

ȳ = (nk)⁻¹ Σ_{i=1}^k Σ_{j=1}^n y_ij,   ȳ_i = n⁻¹ Σ_{j=1}^n y_ij
Note that this solution is not unique since X is not of full rank, and hence the solution depends on the choice of the generalized inverse of X'X. However, for each i, μ + τ_i is estimable and its estimate is given by μ̂ + τ̂_i = ȳ_i. In addition, the sum of squared errors for the model is always unique and is given by

(y − Xβ̂)'(y − Xβ̂) = y'{I_nk − X(X'X)⁻X'}y = y'{I_nk − n⁻¹(I_k ⊗ 1_n1_n')}y
  = Σ_{i=1}^k y_i'(I_n − n⁻¹1_n1_n')y_i = Σ_{i=1}^k Σ_{j=1}^n (y_ij − ȳ_i)²
Since {(1_k ⊗ 1_n)'(1_k ⊗ 1_n)}⁻¹(1_k ⊗ 1_n)'y = ȳ, the reduced model

y_ij = μ + ε_ij

has the least squares estimate μ̂ = ȳ, while its sum of squared errors is

{y − ȳ(1_k ⊗ 1_n)}'{y − ȳ(1_k ⊗ 1_n)} = Σ_{i=1}^k Σ_{j=1}^n (y_ij − ȳ)²
The difference in the sums of squared errors for these two models, the so-called sum of squares for treatments (SST), is then

SST = Σ_{i=1}^k Σ_{j=1}^n (y_ij − ȳ)² − Σ_{i=1}^k Σ_{j=1}^n (y_ij − ȳ_i)² = Σ_{i=1}^k n(ȳ_i − ȳ)²
Example 7.5. In this example, we will illustrate some of the computations involved in the analysis of the two-way classification model with interaction, which is of the form

y_ijk = μ + τ_i + γ_j + η_ij + ε_ijk,

where i = 1, ..., a, j = 1, ..., b, and k = 1, ..., n (see Problem 6.17). Here μ can be described as an overall effect, while τ_i is an effect due to the ith level of factor A, γ_j is an effect due to the jth level of factor B, and η_ij is an effect due to the interaction of the ith and jth levels of factors A and B. If we define the parameter vector, β = (μ, τ_1, ..., τ_a, γ_1, ..., γ_b, η_11, η_12, ..., η_{a,b−1}, η_ab)', and the response vector, y = (y_111, ..., y_11n, y_121, ..., y_1bn, y_211, ..., y_abn)', then the model above can be written as

y = Xβ + ε,

where X = (1_a ⊗ 1_b ⊗ 1_n, I_a ⊗ 1_b ⊗ 1_n, 1_a ⊗ I_b ⊗ 1_n, I_a ⊗ I_b ⊗ 1_n).
Now it is easily verified that the matrix

X'X = [ abn            bn1_a'          an1_b'         n(1_a' ⊗ 1_b') ]
      [ bn1_a          bnI_a           n(1_a ⊗ 1_b')  n(I_a ⊗ 1_b')  ]
      [ an1_b          n(1_a' ⊗ I_b)   anI_b          n(1_a' ⊗ I_b)  ]
      [ n(1_a ⊗ 1_b)   n(I_a ⊗ 1_b)    n(1_a ⊗ I_b)   n(I_a ⊗ I_b)   ]
has as a generalized inverse a matrix that can be expressed in terms of the matrix

C = n⁻¹I_a ⊗ I_b − (bn)⁻¹I_a ⊗ 1_b1_b' − (an)⁻¹1_a1_a' ⊗ I_b + (abn)⁻¹1_a1_a' ⊗ 1_b1_b'

Using this generalized inverse, we find that a least squares solution for β is given by
β̂ = (ȳ.., ȳ_1. − ȳ.., ..., ȳ_a. − ȳ.., ȳ._1 − ȳ.., ..., ȳ._b − ȳ.., ȳ_11 − ȳ_1. − ȳ._1 + ȳ.., ..., ȳ_ab − ȳ_a. − ȳ._b + ȳ..)',

where

ȳ.. = (abn)⁻¹ Σ_{i=1}^a Σ_{j=1}^b Σ_{k=1}^n y_ijk,   ȳ_i. = (bn)⁻¹ Σ_{j=1}^b Σ_{k=1}^n y_ijk,

ȳ._j = (an)⁻¹ Σ_{i=1}^a Σ_{k=1}^n y_ijk,   ȳ_ij = n⁻¹ Σ_{k=1}^n y_ijk
Clearly, μ + τ_i + γ_j + η_ij is estimable, and its estimate, which is the fitted value for y_ijk, is μ̂ + τ̂_i + γ̂_j + η̂_ij = ȳ_ij. We will leave the computation of some of the sums of squares associated with the analysis of this model for the reader as an exercise.
4. THE DIRECT SUM

The direct sum is a matrix operator that transforms several square matrices into one block diagonal matrix with these matrices appearing as the submatrices along the diagonal. Recall that a block diagonal matrix is of the form

diag(A_1, ..., A_r) = [ A_1   (0)   ···   (0) ]
                      [ (0)   A_2   ···   (0) ]
                      [  ⋮     ⋮     ⋱     ⋮  ]
                      [ (0)   (0)   ···   A_r ],

where A_i is an m_i × m_i matrix. This block diagonal matrix is said to be the direct sum of the matrices A_1, ..., A_r and is sometimes written as

A_1 ⊕ A_2 ⊕ ··· ⊕ A_r

Clearly, the commutative property does not hold for direct sums, since, for instance,

A_1 ⊕ A_2 ≠ A_2 ⊕ A_1,

unless A_1 = A_2. Direct sums of a matrix with itself can be expressed as Kronecker products; that is, if A_1 = ··· = A_r = A, then

A_1 ⊕ ··· ⊕ A_r = I_r ⊗ A

Some of the basic properties of the direct sum are summarized in the following theorem. The proofs, which are fairly straightforward, are left to the reader.

Theorem 7.13. Let A_1, ..., A_r be matrices, where A_i is m_i × m_i. Then

(a) tr(A_1 ⊕ ··· ⊕ A_r) = tr(A_1) + ··· + tr(A_r),
(b) |A_1 ⊕ ··· ⊕ A_r| = |A_1| ··· |A_r|,
(c) A = A_1 ⊕ ··· ⊕ A_r is nonsingular if each A_i is nonsingular, and A⁻¹ = A_1⁻¹ ⊕ ··· ⊕ A_r⁻¹,
(d) rank(A_1 ⊕ ··· ⊕ A_r) = rank(A_1) + ··· + rank(A_r),
(e) if the eigenvalues of A_i are denoted by λ_i1, ..., λ_{i m_i}, then the eigenvalues of A_1 ⊕ ··· ⊕ A_r are given by {λ_ij: i = 1, ..., r; j = 1, ..., m_i}.
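A direct sum is easy to form numerically as a block diagonal matrix; the sketch below (NumPy, not from the text) checks parts (a), (b), and (d) of Theorem 7.13. The helper direct_sum is our own illustrative function, not a library routine.

```python
import numpy as np

def direct_sum(*mats):
    """Form the block diagonal matrix A1 (+) ... (+) Ar."""
    total = sum(m.shape[0] for m in mats)
    out = np.zeros((total, total))
    pos = 0
    for m in mats:
        k = m.shape[0]
        out[pos:pos + k, pos:pos + k] = m
        pos += k
    return out

rng = np.random.default_rng(3)
A1 = rng.standard_normal((2, 2))
A2 = rng.standard_normal((3, 3))
D = direct_sum(A1, A2)

# Theorem 7.13(a), (b), (d)
assert np.isclose(np.trace(D), np.trace(A1) + np.trace(A2))
assert np.isclose(np.linalg.det(D), np.linalg.det(A1) * np.linalg.det(A2))
assert (np.linalg.matrix_rank(D)
        == np.linalg.matrix_rank(A1) + np.linalg.matrix_rank(A2))
```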
5. THE VEC OPERATOR

There are situations in which it is useful to transform a matrix to a vector that has as its elements the elements of the matrix. One such situation in statistics involves the study of the distribution of the sample covariance matrix S. It is usually more convenient mathematically in distribution theory to express density functions and moments of jointly distributed random variables in terms of the vector with these random variables as its components. Thus, the distribution of the random matrix S is usually given in terms of the vector formed by stacking the columns of S, one underneath the other. The operator that transforms a matrix to a vector is known as the vec operator. If the m × n matrix A has a_j as its jth column, then vec(A) is the mn × 1 vector given by

vec(A) = (a_1', ..., a_n')'
Example 7.6. If A is the 2 × 3 matrix given by

A = [ 2 0 5 ]
    [ 8 1 3 ],

then vec(A) is the 6 × 1 vector given by

vec(A) = (2, 8, 0, 1, 5, 3)'
In this section, we develop some of the basic algebra associated with this operator. For instance, if a is m × 1 and b is n × 1, then ab' is m × n and

vec(ab') = vec([b_1 a, b_2 a, ..., b_n a]) = (b_1 a', ..., b_n a')' = b ⊗ a

Our first theorem gives this result and some others that follow directly from the definition of the vec operator.
Theorem 7.14. Let a and b be any two vectors, while A and B are two matrices of the same size. Then

(a) vec(a) = vec(a') = a,
(b) vec(ab') = b ⊗ a,
(c) vec(αA + βB) = α vec(A) + β vec(B), where α and β are scalars.
The trace of a product of two matrices can be expressed in terms of the vecs of those two matrices. This result is given next.
Theorem 7.15. Let A and B both be m × n matrices. Then

tr(A'B) = {vec(A)}' vec(B)
Proof. As usual, let a_1, ..., a_n denote the columns of A and b_1, ..., b_n denote the columns of B. Then

tr(A'B) = Σ_{i=1}^n (A'B)_ii = Σ_{i=1}^n a_i'b_i = [a_1', ..., a_n'](b_1', ..., b_n')' = {vec(A)}' vec(B) □
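Theorem 7.15 amounts to the statement that the Frobenius inner product of A and B equals the ordinary inner product of their vecs. A quick NumPy check (not from the text; column-major reshaping implements the vec operator):

```python
import numpy as np

def vec(M):
    # Stack the columns of M into a single vector (column-major order)
    return M.reshape(-1, order="F")

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

# Theorem 7.15: tr(A'B) = vec(A)'vec(B)
assert np.isclose(np.trace(A.T @ B), vec(A) @ vec(B))
```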
A generalization of Theorem 7.14(b) to the situation involving the vec of the product of three matrices is our next result.

Theorem 7.16. Let A, B, and C be matrices of sizes m × n, n × p, and p × q, respectively. Then

vec(ABC) = (C' ⊗ A)vec(B)
Proof. Note that if b_1, ..., b_p are the columns of B, then B can be written as

B = Σ_{j=1}^p b_j e_j',

where e_j is the jth column of I_p. Thus,

vec(ABC) = vec(A(Σ_{j=1}^p b_j e_j')C) = Σ_{j=1}^p vec(A b_j e_j' C) = Σ_{j=1}^p vec{(A b_j)(C'e_j)'}
  = Σ_{j=1}^p (C'e_j ⊗ A b_j) = (C' ⊗ A) Σ_{j=1}^p (e_j ⊗ b_j),

where the second last equality follows from Theorem 7.14(b). The result now follows since, by again using Theorem 7.14(b), we find that

Σ_{j=1}^p (e_j ⊗ b_j) = Σ_{j=1}^p vec(b_j e_j') = vec(Σ_{j=1}^p b_j e_j') = vec(B) □
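Theorem 7.16 is the workhorse identity for converting matrix equations to vector form; a numerical spot-check in NumPy (not part of the original text):

```python
import numpy as np

def vec(M):
    return M.reshape(-1, order="F")  # column-stacking vec operator

rng = np.random.default_rng(5)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 5))

# Theorem 7.16: vec(ABC) = (C' kron A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))
```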
Example 7.7. In Chapter 6, we discussed systems of linear equations of the form Ax = c, as well as systems of equations of the form AXB = C. Using the vec operator and Theorem 7.16, this second system of equations can be equivalently expressed as vec(AXB) = (B' ⊗ A)vec(X) = vec(C); that is, this is an equation of the form Ax = c, where in place of A, x, and c, we have (B' ⊗ A), vec(X), and vec(C). As a result, Theorem 6.4, which gives the general form of a solution to Ax = c, can be used to prove Theorem 6.5, which gives the general form of a solution to AXB = C. The details of this proof are left to the reader.

Theorem 7.15 also can be generalized to a result involving the product of more than two matrices.
Theorem 7.17. Let A, B, C, and D be matrices of sizes m × n, n × p, p × q, and q × m, respectively. Then

tr(ABCD) = {vec(A')}'(D' ⊗ B)vec(C)

Proof. Using Theorem 7.15, it follows that

tr(ABCD) = tr{A(BCD)} = {vec(A')}' vec(BCD)

But from the previous theorem, we know that vec(BCD) = (D' ⊗ B)vec(C), and so the proof is complete. □

The proofs of the following consequences of Theorem 7.17 are left to the reader as an exercise.
Corollary 7.17.1. Let A and C be matrices of sizes m × n and n × m, respectively, while B and D are n × n. Then

(a) tr(ABC) = {vec(A')}'(I_m ⊗ B)vec(C),
(b) tr(AD'BDC) = {vec(D)}'(A'C' ⊗ B)vec(D).

Other transformations of a matrix, A, to a vector may be useful when the matrix A has some special structure. One such transformation for an m × m
matrix, denoted by v(A), is defined so as to produce the m(m + 1)/2 × 1 vector obtained from vec(A) by deleting from it all of the elements that are above the diagonal of A. Thus, if A is a lower triangular matrix, v(A) contains all of the elements of A except for the zeros in the upper triangular portion of A. Yet another transformation of an m × m matrix A to a vector will be denoted by ṽ(A) and yields the m(m − 1)/2 × 1 vector formed from v(A) by deleting from it all of the diagonal elements; that is, ṽ(A) is the vector obtained by stacking only the portion of the columns of A that are below the diagonal. If A is a skew-symmetric matrix, then A can be reconstructed from ṽ(A) since the diagonal elements of A must be zero, while a_ji = −a_ij if i ≠ j. The notation we use here, that is, v(A) and ṽ(A), corresponds to that used by Magnus (1988). Others [see, for example, Henderson and Searle (1979)] use the notation vech(A) and veck(A). In Section 8, we will discuss some transformations which relate the v and ṽ operators to the vec operator.
Example 7.8. The v and ṽ operators are particularly useful when dealing with covariance and correlation matrices. For instance, suppose that we are interested in the distribution of the sample covariance matrix or the distribution of the sample correlation matrix computed from a sample of observations on three different variables. The resulting sample covariance and correlation matrices would be of the form

S = [ s_11 s_12 s_13 ]        R = [ 1    r_12 r_13 ]
    [ s_12 s_22 s_23 ],           [ r_12 1    r_23 ]
    [ s_13 s_23 s_33 ]            [ r_13 r_23 1    ],

so that

vec(S) = (s_11, s_12, s_13, s_12, s_22, s_23, s_13, s_23, s_33)',
vec(R) = (1, r_12, r_13, r_12, 1, r_23, r_13, r_23, 1)'

Since both S and R are symmetric, there are redundant elements in vec(S) and vec(R). The elimination of these results in v(S) and v(R) given by

v(S) = (s_11, s_12, s_13, s_22, s_23, s_33)',
v(R) = (1, r_12, r_13, 1, r_23, 1)'

Finally, by eliminating the nonrandom 1's from v(R), we obtain

ṽ(R) = (r_12, r_13, r_23)',

which contains all of the random variables in R.
SPECIAL MATRICES AND MATRIX OPERATORS
6. THE HADAMARD PRODUCT

A matrix operator that is a little more obscure than our other matrix operators, but one which is finding increasing applications in statistics, is known as the Hadamard product. This operator, which we will denote by the symbol ⊙, simply performs the elementwise multiplication of two matrices; that is, if A and B are each m × n, then

A ⊙ B = [ a_11 b_11  ···  a_1n b_1n ]
        [    ⋮        ⋱       ⋮     ]
        [ a_m1 b_m1  ···  a_mn b_mn ]

Clearly, this operation is only defined if the two matrices involved are of the same size.

Example 7.9.
If A and B are the 2 × 3 matrices given by

A = [ 1 4 2 ]        B = [ 3 1 3 ]
    [ 0 2 3 ],           [ 6 5 1 ],

then

A ⊙ B = [ 3  4  6 ]
        [ 0 10  3 ]
One of the situations in which the Hadamard product finds application in statistics is in expressions for the covariance structure of certain functions of the sample covariance and sample correlation matrices. We will see examples of this later in Section 9.7. In this section, we will investigate some of the properties of this operator. For a more complete treatment, along with some other examples of applications of the operator in statistics, the reader is referred to Styan (1973) and Horn and Johnson (1991). We begin with some elementary properties that follow directly from the definition of the Hadamard product.
Theorem 7.18. Let A, B, and C be m × n matrices. Then

(a) A ⊙ B = B ⊙ A,
(b) (A ⊙ B) ⊙ C = A ⊙ (B ⊙ C),
(c) (A + B) ⊙ C = A ⊙ C + B ⊙ C,
(d) (A ⊙ B)' = A' ⊙ B',
(e) A ⊙ (0) = (0),
(f) A ⊙ 1_m1_n' = A,
(g) A ⊙ I_m = D_A = diag(a_11, ..., a_mm), if m = n,
(h) C(A ⊙ B) = (CA) ⊙ B = A ⊙ (CB) and (A ⊙ B)C = (AC) ⊙ B = A ⊙ (BC), if m = n and C is diagonal,
(i) ab' ⊙ cd' = (a ⊙ c)(b ⊙ d)', where a and c are m × 1 vectors and b and d are n × 1 vectors.

We will now show how A ⊙ B is related to the Kronecker product A ⊗ B; specifically, A ⊙ B is a submatrix of A ⊗ B. To see this, define the m × m² matrix Ψ_m as

Ψ_m = Σ_{i=1}^m e_{i,m}(e_{i,m} ⊗ e_{i,m})',

where e_{i,m} is the ith column of the identity matrix I_m. Note that if A and B are m × n, then Ψ_m(A ⊗ B)Ψ_n' forms the m × n submatrix of the m² × n² matrix A ⊗ B, containing rows 1, m + 2, 2m + 3, ..., m² and columns 1, n + 2, 2n + 3, ..., n². Taking a closer look at this submatrix, we find that
Ψ_m(A ⊗ B)Ψ_n' = Σ_{i=1}^m Σ_{j=1}^n e_{i,m}(e_{i,m} ⊗ e_{i,m})'(A ⊗ B)(e_{j,n} ⊗ e_{j,n})e_{j,n}'
  = Σ_{i=1}^m Σ_{j=1}^n e_{i,m}(e_{i,m}'A e_{j,n} ⊗ e_{i,m}'B e_{j,n})e_{j,n}'
  = Σ_{i=1}^m Σ_{j=1}^n a_ij b_ij e_{i,m}e_{j,n}' = A ⊙ B
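The selection-matrix identity Ψ_m(A ⊗ B)Ψ_n' = A ⊙ B can be checked directly. In the NumPy sketch below (not from the text), psi is an illustrative construction of Ψ_m from its definition.

```python
import numpy as np

def psi(m):
    # m x m^2 selection matrix: row i is (e_i kron e_i)'
    P = np.zeros((m, m * m))
    for i in range(m):
        e = np.zeros(m)
        e[i] = 1.0
        P[i] = np.kron(e, e)
    return P

rng = np.random.default_rng(6)
m, n = 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))

# The Hadamard product is a submatrix of the Kronecker product
assert np.allclose(psi(m) @ np.kron(A, B) @ psi(n).T, A * B)
```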
Although the rank of A ⊙ B is not determined, in general, by the rank of A and the rank of B, we do have the following bound.
Theorem 7.19. Let A and B be m × n matrices. Then

rank(A ⊙ B) ≤ rank(A) rank(B)
Proof. Let r_A = rank(A) and r_B = rank(B). It follows from the singular value decomposition theorem (Theorem 4.1 and Corollary 4.1.1) that there exist m × r_A and n × r_A matrices U = (u_1, ..., u_{r_A}) and V = (v_1, ..., v_{r_A}), and m × r_B and n × r_B matrices W = (w_1, ..., w_{r_B}) and X = (x_1, ..., x_{r_B}), such that A = UV' and B = WX'. Then
A ⊙ B = UV' ⊙ WX' = (Σ_{i=1}^{r_A} u_i v_i') ⊙ (Σ_{j=1}^{r_B} w_j x_j')
  = Σ_{i=1}^{r_A} Σ_{j=1}^{r_B} (u_i v_i' ⊙ w_j x_j') = Σ_{i=1}^{r_A} Σ_{j=1}^{r_B} (u_i ⊙ w_j)(v_i ⊙ x_j)',

where we have used Theorem 7.18(c) and (i). The result follows since we have now expressed A ⊙ B as the sum of r_A r_B matrices, each having rank at most one. □
Example 7.10. While Theorem 7.19 gives an upper bound for rank(A ⊙ B) in terms of rank(A) and rank(B), there is no corresponding lower bound. In other words, it is possible that both A and B have full rank while A ⊙ B has rank equal to 0. For instance, each of the matrices

A = [ 1 0 0 ]        B = [ 0 1 0 ]
    [ 0 1 0 ],           [ 0 0 1 ]
    [ 0 0 1 ]            [ 1 0 0 ],

clearly has rank 3, yet A ⊙ B has rank 0 since A ⊙ B = (0).
The following result shows that a bilinear form in a Hadamard product of two matrices may be written as a trace.

Theorem 7.20. Let A and B be m × n matrices, and let x and y be m × 1 and n × 1 vectors, respectively. Then

(a) 1_m'(A ⊙ B)1_n = tr(AB'),
(b) x'(A ⊙ B)y = tr(D_x A D_y B'),

where D_x = diag(x_1, ..., x_m) and similarly for D_y.

Proof. (a) follows since

1_m'(A ⊙ B)1_n = Σ_{i=1}^m Σ_{j=1}^n a_ij b_ij = Σ_{i=1}^m (A)_i.(B')._i = Σ_{i=1}^m (AB')_ii = tr(AB')
To prove (b), note that x' = 1_m'D_x and y = D_y 1_n, so that by using Theorem 7.20(a) and Theorem 7.18(h), we find that

x'(A ⊙ B)y = 1_m'D_x(A ⊙ B)D_y 1_n = 1_m'{(D_x A D_y) ⊙ B}1_n = tr(D_x A D_y B') □
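A quick numerical check of Theorem 7.20(b) in NumPy (not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
x = rng.standard_normal(m)
y = rng.standard_normal(n)

# Theorem 7.20(b): x'(A o B)y = tr(D_x A D_y B')
lhs = x @ (A * B) @ y
rhs = np.trace(np.diag(x) @ A @ np.diag(y) @ B.T)
assert np.isclose(lhs, rhs)
```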
The following result can be helpful in determining whether the Hadamard product of two symmetric matrices is nonnegative definite or positive definite.

Theorem 7.21. Let A and B each be an m × m symmetric matrix. Then

(a) A ⊙ B is nonnegative definite if A and B are nonnegative definite,
(b) A ⊙ B is positive definite if A and B are positive definite.

Proof. Clearly, if A and B are symmetric, then so also is A ⊙ B. Let B = XΛX' be the spectral decomposition of B, so that b_ij = Σ_{k=1}^m λ_k x_ik x_jk, where λ_k ≥ 0 for all k since B is nonnegative definite. Then we find that for any m × 1 vector y,

y'(A ⊙ B)y = Σ_{i=1}^m Σ_{j=1}^m a_ij b_ij y_i y_j = Σ_{k=1}^m λ_k Σ_{i=1}^m Σ_{j=1}^m (y_i x_ik) a_ij (y_j x_jk)
  = Σ_{k=1}^m λ_k (y ⊙ x_k)'A(y ⊙ x_k),   (7.11)

where x_k represents the kth column of X. Since A is nonnegative definite, the sum in (7.11) must be nonnegative, and so A ⊙ B is also nonnegative definite. This proves (a). Now if A is positive definite, then (7.11) will be positive for any y ≠ 0 that satisfies y ⊙ x_k ≠ 0 for at least one k for which λ_k > 0. But if B is also positive definite, then λ_k > 0 for all k, and if y has its hth component y_h ≠ 0, then y ⊙ x_k = 0 for all k only if the hth row of X has all zeros. This is not possible since X is nonsingular. Consequently, there is no y ≠ 0 for which (7.11) equals zero, and so (b) follows. □
Example 7.11. Consider the 2 × 2 matrices

A = [ 1 1 ]        B = [ 4 2 ]
    [ 1 1 ],           [ 2 2 ]

The matrix B is positive definite since, for instance, B = VV', where

V = [ 2 0 ]
    [ 1 1 ],

and rank(V) = 2. Clearly, A ⊙ B is also positive definite since A ⊙ B = B. However, A is not positive definite since rank(A) = 1.

A sufficient condition for the positive definiteness of A ⊙ B, weaker than that given in Theorem 7.21(b), is given in our next theorem.

Theorem 7.22. Let A and B each be an m × m symmetric matrix. If B is positive definite and A is nonnegative definite with positive diagonal elements, then A ⊙ B is positive definite.

Proof. We need to show that for any x ≠ 0, x'(A ⊙ B)x > 0. Since B is positive definite, there exists a nonsingular matrix T such that B = TT'. It follows then from Theorem 7.20(b) that

x'(A ⊙ B)x = tr(T'D_x A D_x T)   (7.12)

Since A is nonnegative definite, so is D_x A D_x. In addition, if x ≠ 0, and A has no diagonal elements equal to zero, then D_x A D_x ≠ (0); that is, D_x A D_x has rank of at least one, and so it has at least one positive eigenvalue. Since T is nonsingular, rank(D_x A D_x) = rank(T'D_x A D_x T), and so T'D_x A D_x T is also nonnegative definite with at least one positive eigenvalue. The result now follows since (7.12) implies that x'(A ⊙ B)x is the sum of the eigenvalues of T'D_x A D_x T. □
Theorem 7.23.
If A is an m X m positive definite matrix, th en m
IA I ::;
au,
•
. ix atr m al on ag di a is A if ly on d an if y lit ua eq with
Proof
Ou r pr oo f is by induction. If m = 2, th en •
•
271
THE HADAMARD PRODUCT
with equality if and only if al2 = 0, and so the result clearly holds when m = 2.
For general m, use the cofactor expansion formula for the determinant of A to obtain

|A| = a_11 | a_22 ··· a_2m |  +  | 0  a'  |
           |  ⋮   ⋱    ⋮   |     | a  A_1 |,   (7.13)
           | a_m2 ··· a_mm |

where A_1 is the (m − 1) × (m − 1) submatrix of A formed by deleting the first row and column of A and a' = (a_12, ..., a_1m). Since A is positive definite, A_1 also must be positive definite. Consequently, we can use Theorem 7.4(a) to simplify the second term in the right-hand side of (7.13), leading to the equation

|A| = a_11|A_1| − a'A_1⁻¹a |A_1|

Since A_1 and A_1⁻¹ are positive definite, it follows that

|A| ≤ a_11|A_1|,

with equality if and only if a = 0. Thus, the result holds for the m × m matrix A if the result holds for the (m − 1) × (m − 1) matrix A_1, and so our induction proof is complete. □
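Hadamard's inequality (Theorem 7.23) is easily checked for a randomly generated positive definite matrix (a NumPy sketch, not from the text):

```python
import numpy as np

rng = np.random.default_rng(11)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)           # positive definite by construction

# Theorem 7.23: |A| <= product of the diagonal elements of A
assert np.linalg.det(A) <= np.prod(np.diag(A))
```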
Corollary 7.23.1. Let B be an m × m nonsingular matrix. Then

|B|² ≤ ∏_{i=1}^m Σ_{j=1}^m b_ij²,

with equality if and only if the rows of B are orthogonal.
Proof. Since B is nonsingular, the matrix A = BB' is positive definite. Note that

|A| = |BB'| = |B||B'| = |B|²

and

a_ii = (BB')_ii = (B)_i.(B')._i = (B)_i.{(B)_i.}' = Σ_{j=1}^m b_ij²,

and so the result follows immediately from Theorem 7.23. □
Theorem 7.23 also holds for positive semidefinite matrices except that in this case A need not be diagonal for equality, since one or more of its diagonal elements may equal zero. Likewise, Corollary 7.23.1 holds for singular matrices except for the statement concerning equality.

Hadamard's inequality given in Theorem 7.23 can be expressed, using the Hadamard product, as

|A| ∏_{i=1}^m (I_m)_ii ≤ |A ⊙ I_m|,   (7.14)

where the term ∏_{i=1}^m (I_m)_ii = 1 corresponds to the product of the diagonal elements of I_m. Theorem 7.25 will show that the inequality in (7.14) holds for other matrices besides the identity. But first we will need the following result.
Theorem 7.24. Let A be an m × m positive definite matrix and define A_α = A − αe_1e_1', where α = |A|/|A_1| and A_1 is the (m − 1) × (m − 1) submatrix of A formed by deleting its first row and column. Then A_α is nonnegative definite.

Proof. Let A be partitioned as

A = [ a_11  a'  ]
    [ a     A_1 ],

and note that since A is positive definite, so is A_1. Thus, using Theorem 7.4, we find that

|A| = |A_1|(a_11 − a'A_1⁻¹a),

and so α = |A|/|A_1| = (a_11 − a'A_1⁻¹a). Consequently, A_α may be written as
A_α = A − (a_11 − a'A_1⁻¹a)e_1e_1' = [ a'A_1⁻¹a  a'  ]
                                     [ a         A_1 ]

Since A_1 is positive definite, there exists an (m − 1) × (m − 1) matrix T such that A_1 = TT'. If we let V' = T'[A_1⁻¹a  I_{m−1}], then A_α = VV', and so A_α is nonnegative definite. □
Theorem 7.25. Let A and B be m × m nonnegative definite matrices. Then

|A| ∏_{i=1}^m b_ii ≤ |A ⊙ B|
Proof. The result follows immediately if A is singular since |A| = 0, while |A ⊙ B| ≥ 0 is guaranteed by Theorem 7.21. For the case in which A is positive definite, we will prove the result by induction. The result holds when m = 2 since in this case

|A ⊙ B| = a_11 a_22 b_11 b_22 − (a_12 b_12)² = (a_11 a_22 − a_12²)b_11 b_22 + a_12²(b_11 b_22 − b_12²)
  = |A| b_11 b_22 + a_12²|B| ≥ |A| b_11 b_22
To prove the result for general m, assume that it holds for m − 1, so that

|A_1| ∏_{i=2}^m b_ii ≤ |A_1 ⊙ B_1|,   (7.15)

where A_1 and B_1 are the submatrices of A and B formed by deleting their first row and first column. From Theorem 7.24 we know that (A − αe_1e_1') is nonnegative definite, where α = |A|/|A_1|. Thus, by using Theorem 7.21(a), Theorem 7.18(c), and the expansion formula for determinants, we find that

0 ≤ |(A − αe_1e_1') ⊙ B| = |A ⊙ B − αe_1e_1' ⊙ B| = |A ⊙ B − αb_11 e_1e_1'|
  = |A ⊙ B| − αb_11 |(A ⊙ B)_1|,

where (A ⊙ B)_1 denotes the (m − 1) × (m − 1) submatrix of A ⊙ B formed by
deleting its first row and column. But (A ⊙ B)_1 = A_1 ⊙ B_1, so that the inequality above, along with (7.15) and the identity α|A_1| = |A|, implies that

|A ⊙ B| ≥ αb_11 |A_1 ⊙ B_1| ≥ αb_11 |A_1| ∏_{i=2}^m b_ii = |A| ∏_{i=1}^m b_ii

The proof is now complete. □
Our final results on Hadamard products involve their eigenvalues. First we obtain bounds for each eigenvalue of the matrix A ⊙ B when A and B are symmetric.

Theorem 7.26. Let A and B be m × m symmetric matrices. If B is nonnegative definite, then the ith largest eigenvalue of A ⊙ B satisfies

λ_m(A) min_{1≤j≤m} b_jj ≤ λ_i(A ⊙ B) ≤ λ_1(A) max_{1≤j≤m} b_jj
Proof. Since B is nonnegative definite, there exists an m × m matrix T such that B = TT'. Let t_h be the hth column of T, while t_ih denotes the (i, h)th element of T. For any m × 1 vector x ≠ 0, we find that

x'(A ⊙ B)x = Σ_{i=1}^m Σ_{j=1}^m a_ij b_ij x_i x_j = Σ_{i=1}^m Σ_{j=1}^m a_ij (Σ_{h=1}^m t_ih t_jh) x_i x_j
  = Σ_{h=1}^m Σ_{i=1}^m Σ_{j=1}^m (x_i t_ih) a_ij (x_j t_jh) = Σ_{h=1}^m (x ⊙ t_h)'A(x ⊙ t_h)
  ≤ λ_1(A) Σ_{h=1}^m (x ⊙ t_h)'(x ⊙ t_h) = λ_1(A) Σ_{h=1}^m Σ_{i=1}^m x_i² t_ih² = λ_1(A) Σ_{i=1}^m b_ii x_i²
  ≤ λ_1(A) {max_{1≤j≤m} b_jj} x'x,

where the first inequality arises from the relation

λ_1(A) = max_{y≠0} y'Ay/y'y   (7.16)

given in Theorem 3.15. Using this same relationship for A ⊙ B, along with (7.16), we find that for any i, 1 ≤ i ≤ m,

λ_i(A ⊙ B) ≤ λ_1(A ⊙ B) = max_{y≠0} y'(A ⊙ B)y/y'y ≤ λ_1(A) max_{1≤j≤m} b_jj,

which is the required upper bound on λ_i(A ⊙ B). The lower bound is obtained in a similar fashion by using the identity

λ_m(A) = min_{y≠0} y'Ay/y'y □
Our final result provides an alternative lower bound for the eigenvalues of A ⊙ B. The derivation of this bound will make use of the following result.

Theorem 7.27. Let A be an m × m positive definite matrix. Then the matrix (A ⊙ A⁻¹) − I_m is nonnegative definite.

Proof. Let Σ_{i=1}^m λ_i x_i x_i' be the spectral decomposition of A, so that A⁻¹ = Σ_{j=1}^m λ_j⁻¹ x_j x_j'. Then, since Σ_{i=1}^m Σ_{j=1}^m (x_i x_i' ⊙ x_j x_j') = I_m ⊙ I_m = I_m, we have

A ⊙ A⁻¹ − I_m = (Σ_{i=1}^m λ_i x_i x_i') ⊙ (Σ_{j=1}^m λ_j⁻¹ x_j x_j') − I_m
  = Σ_{i=1}^m Σ_{j=1}^m (λ_i λ_j⁻¹ − 1)(x_i x_i' ⊙ x_j x_j')
  = Σ_{i<j} (λ_i λ_j⁻¹ + λ_j λ_i⁻¹ − 2)(x_i ⊙ x_j)(x_i ⊙ x_j)' = XDX',

since the terms with i = j vanish and x_i x_i' ⊙ x_j x_j' = (x_i ⊙ x_j)(x_i ⊙ x_j)', where X is the m × m(m − 1)/2 matrix having (x_i ⊙ x_j), i < j, as its columns, while D is the diagonal matrix with its corresponding diagonal elements given by (λ_i λ_j⁻¹ + λ_j λ_i⁻¹ − 2), i < j. Since A is positive definite, λ_i > 0 for all i, and so

λ_i λ_j⁻¹ + λ_j λ_i⁻¹ − 2 = (λ_i − λ_j)² λ_i⁻¹ λ_j⁻¹ ≥ 0

Thus, D is nonnegative definite and consequently so is XDX'. □

Theorem 7.28.
Let A and B be m × m nonnegative definite matrices. Then

λ_i(A ⊙ B) ≥ λ_m(AB),  i = 1, ..., m

Proof. Due to Theorem 7.21, A ⊙ B is nonnegative definite, and so the inequality is obvious if either A or B is singular since in this case AB will have a zero eigenvalue. Suppose that A and B are positive definite, and let T be any matrix such that TT' = B. Note that T'AT − λ_m(AB)I_m is nonnegative definite since its ith largest eigenvalue is λ_i(T'AT) − λ_m(AB), and λ_i(T'AT) = λ_i(ATT') = λ_i(AB). As a result,

A − λ_m(AB)B⁻¹ = T'⁻¹{T'AT − λ_m(AB)I_m}T⁻¹

is also nonnegative definite. Thus, {A − λ_m(AB)B⁻¹} ⊙ B is nonnegative definite due to Theorem 7.21, while λ_m(AB){(B⁻¹ ⊙ B) − I_m} is nonnegative definite due to Theorem 7.27, and so the sum of these two matrices, which is given by

{(A − λ_m(AB)B⁻¹) ⊙ B} + λ_m(AB){(B⁻¹ ⊙ B) − I_m}
  = A ⊙ B − λ_m(AB)(B⁻¹ ⊙ B) + λ_m(AB)(B⁻¹ ⊙ B) − λ_m(AB)I_m
  = A ⊙ B − λ_m(AB)I_m,

is also nonnegative definite. Consequently, for any x, x'(A ⊙ B)x ≥ λ_m(AB)x'x, and so the result follows from Theorem 3.15. □
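Theorem 7.28 can be spot-checked numerically (a NumPy sketch, not from the text). When both factors are nonnegative definite, AB has real, nonnegative eigenvalues, so taking the real part below only discards rounding noise.

```python
import numpy as np

rng = np.random.default_rng(12)
m = 5
A = rng.standard_normal((m, m))
A = A @ A.T                        # nonnegative definite
B = rng.standard_normal((m, m))
B = B @ B.T                        # nonnegative definite

lam_min_AB = np.linalg.eigvals(A @ B).real.min()
# Theorem 7.28: every eigenvalue of A o B is at least lambda_min(AB)
assert np.linalg.eigvalsh(A * B).min() >= lam_min_AB - 1e-8
```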
7. THE COMMUTATION MATRIX

An m × m permutation matrix was defined in Section 1.10 to be any matrix that can be obtained from I_m by permuting its columns. In this section, we discuss a special class of permutation matrices, known as commutation matrices, which are very useful when computing the moments of the multivariate normal and related distributions. We will establish some of the basic properties of commutation matrices. A more complete treatment of this subject can be found in Magnus and Neudecker (1979) and Magnus (1988).
Definition 7.1. Let H_ij be the m × n matrix that has its only nonzero element, a one, in the (i, j)th position. Then the mn × mn commutation matrix, denoted by K_mn, is given by

K_mn = Σ_{i=1}^m Σ_{j=1}^n (H_ij ⊗ H_ij')   (7.17)
The matrix H_ij can be conveniently expressed in terms of columns from the identity matrices I_m and I_n. If e_{i,m} is the ith column of I_m and e_{j,n} is the jth column of I_n, then H_ij = e_{i,m}e_{j,n}'.
K23 =
K32 =
0 0
I 0 0 0 0 0
0 0 0 1 0 0
0 I 0 0 0 0
0 0 0 0
1 0 0 0 0 0
0 0 1 0 0 0
0 0 0 0 1 0
0 0 1 0 0 0 0 1 0 0 0 0
I
0 1 0 0 0
0 0 0 0 0
,
1
0 0 0 0 0 1
ty er op pr l ra ne ge a is is th ce sin , ce en cid in co a t no is 3 Th e fact that K 32 =. K; that follows from the definition of Kmn.
Theorem 7.29. The commutation matrix satisfies the properties

(a) K_m1 = K_1m = I_m,
(b) K_mn' = K_nm,
(c) K_mn⁻¹ = K_nm.

Proof. When H_ij is m × 1, then H_ij = e_{i,m} and so

K_m1 = Σ_{i=1}^m (e_{i,m} ⊗ e_{i,m}') = I_m = Σ_{i=1}^m (e_{i,m}' ⊗ e_{i,m}) = K_1m,

proving (a). To prove (b), note that

K_mn' = Σ_{i=1}^m Σ_{j=1}^n (H_ij ⊗ H_ij')' = Σ_{i=1}^m Σ_{j=1}^n (H_ij' ⊗ H_ij) = K_nm

Finally, (c) follows since

H_ij H_kl' = e_{i,m}e_{j,n}'e_{l,n}e_{k,m}' = { e_{i,m}e_{k,m}',  if j = l,
                                             { (0),             if j ≠ l,

H_ij'H_kl = e_{j,n}e_{i,m}'e_{k,m}e_{l,n}' = { e_{j,n}e_{l,n}',  if i = k,
                                             { (0),             if i ≠ k,

and so

K_mn K_nm = (Σ_{i=1}^m Σ_{j=1}^n H_ij ⊗ H_ij')(Σ_{k=1}^m Σ_{l=1}^n H_kl' ⊗ H_kl)
  = Σ_{i=1}^m Σ_{j=1}^n Σ_{k=1}^m Σ_{l=1}^n (H_ij H_kl' ⊗ H_ij'H_kl)
  = Σ_{i=1}^m Σ_{j=1}^n (e_{i,m}e_{i,m}' ⊗ e_{j,n}e_{j,n}') = I_m ⊗ I_n = I_mn □

Commutation matrices have important relationships with the vec operator and the Kronecker product. For an m × n matrix A, the two vectors vec(A) and vec(A') are related since they contain the same elements arranged in a different order; that is, an appropriate reordering of the elements of vec(A) will produce vec(A'). The commutation matrix K_mn is the matrix multiplier which transforms vec(A) to vec(A').
Theorem 7.30. For any m × n matrix A,

K_mn vec(A) = vec(A')
Proof. Clearly, since a_ij H_ij' is the n × m matrix whose only nonzero element, a_ij, is in the (j, i)th position, we have

A' = Σ_{i=1}^m Σ_{j=1}^n a_ij H_ij' = Σ_{i=1}^m Σ_{j=1}^n (e_{i,m}'A e_{j,n})e_{j,n}e_{i,m}'
  = Σ_{i=1}^m Σ_{j=1}^n (e_{j,n}e_{i,m}')A(e_{j,n}e_{i,m}') = Σ_{i=1}^m Σ_{j=1}^n H_ij'A H_ij'

The result now follows by taking the vec of both sides and using Theorem 7.16, since

vec(A') = vec(Σ_{i=1}^m Σ_{j=1}^n H_ij'A H_ij') = Σ_{i=1}^m Σ_{j=1}^n vec(H_ij'A H_ij')
  = Σ_{i=1}^m Σ_{j=1}^n (H_ij ⊗ H_ij')vec(A) = K_mn vec(A) □
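The commutation matrix is straightforward to construct from its definition, and Theorems 7.29–7.31 can then be checked numerically. In the NumPy sketch below (not from the text), K builds K_mn directly from the positions of the entries of vec(A) and vec(A').

```python
import numpy as np

def vec(M):
    return M.reshape(-1, order="F")            # column-stacking vec

def K(m, n):
    """Commutation matrix of order mn: K(m, n) @ vec(A) = vec(A')."""
    Kmn = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # a_ij sits at position j*m + i in vec(A), i*n + j in vec(A')
            Kmn[i * n + j, j * m + i] = 1.0
    return Kmn

rng = np.random.default_rng(9)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((2, 5))

assert np.allclose(K(3, 4) @ vec(A), vec(A.T))          # Theorem 7.30
assert np.allclose(K(3, 4).T, K(4, 3))                  # Theorem 7.29(b)
# Theorem 7.31(b): K_pm (A kron B) K_nq = B kron A
assert np.allclose(K(2, 3) @ np.kron(A, B) @ K(4, 5), np.kron(B, A))
```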
The term commutation arises from the fact that commutation matrices provide the factors that allow a Kronecker product to commute. This property is summarized in Theorem 7.31.
Theorem 7.31. Let A be an m × n matrix, B be a p × q matrix, x be an m × 1 vector, and y be a p × 1 vector. Then

(a) K_pm(A ⊗ B) = (B ⊗ A)K_qn,
(b) K_pm(A ⊗ B)K_nq = B ⊗ A,
(c) K_pm(A ⊗ y) = y ⊗ A,
(d) K_mp(y ⊗ A) = A ⊗ y,
(e) K_pm(x ⊗ y) = y ⊗ x,
(f) tr{(B ⊗ A)K_mn} = tr(BA), if p = n and q = m.

Proof. If X is a q × n matrix, then by using Theorems 7.16 and 7.30, we find that
SP EC IA L MATRICES AND MATRIX OPERATORS
280
K_pm(A ⊗ B) vec(X) = K_pm vec(BXA') = vec{(BXA')'} = vec(AX'B')
  = (B ⊗ A) vec(X') = (B ⊗ A)K_qn vec(X)

Thus, if X is chosen so that vec(X) equals the ith column of I_qn, we observe that the ith column of K_pm(A ⊗ B) must be the same as the ith column of (B ⊗ A)K_qn, so (a) follows. Postmultiplying (a) by K_nq and then using Theorem 7.29(c) yields (b). Properties (c)-(e) follow from (a) and Theorem 7.29(a) since

K_pm(A ⊗ y) = (y ⊗ A)K_1n = y ⊗ A,
K_mp(y ⊗ A) = (A ⊗ y)K_n1 = A ⊗ y,
K_pm(x ⊗ y) = (y ⊗ x)K_11 = y ⊗ x

To prove (f), use the definition of the commutation matrix to get

tr{(B ⊗ A)K_mn} = Σ_{i=1}^m Σ_{j=1}^n tr{(B ⊗ A)(H_ij ⊗ H'_ij)}
  = Σ_{i=1}^m Σ_{j=1}^n {tr(BH_ij)}{tr(AH'_ij)}
  = Σ_{i=1}^m Σ_{j=1}^n (e'_{j,n} B e_{i,m})(e'_{i,m} A e_{j,n})
  = Σ_{i=1}^m Σ_{j=1}^n b_ji a_ij = Σ_{j=1}^n (BA)_jj = tr(BA) □
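Parts (a), (b), and (f) of Theorem 7.31 can be spot-checked with random matrices. This NumPy fragment, including its helper function, is our own sketch rather than anything from the text.

```python
import numpy as np

def commutation_matrix(m, n):
    # K_mn sends vec(X) to vec(X') for any m x n matrix X
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            H = np.zeros((m, n)); H[i, j] = 1.0
            K += np.kron(H, H.T)
    return K

rng = np.random.default_rng(0)
m, n, p, q = 2, 3, 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
Kpm = commutation_matrix(p, m)
Kqn = commutation_matrix(q, n)
Knq = commutation_matrix(n, q)
# (a) K_pm (A kron B) = (B kron A) K_qn
assert np.allclose(Kpm @ np.kron(A, B), np.kron(B, A) @ Kqn)
# (b) K_pm (A kron B) K_nq = B kron A
assert np.allclose(Kpm @ np.kron(A, B) @ Knq, np.kron(B, A))
# (f) tr{(B kron A) K_mn} = tr(BA) when B is n x m
B2 = rng.standard_normal((n, m))
Kmn = commutation_matrix(m, n)
assert np.isclose(np.trace(np.kron(B2, A) @ Kmn), np.trace(B2 @ A))
```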
The commutation matrix can also be utilized to obtain a relationship between the vec of a Kronecker product and the Kronecker product of the corresponding vecs.
Theorem 7.32. Let A be an m × n matrix and B be a p × q matrix. Then

vec(A ⊗ B) = (I_n ⊗ K_qm ⊗ I_p){vec(A) ⊗ vec(B)}

Proof. Our proof follows that given by Magnus (1988). Let a_1, ..., a_n be the columns of A and b_1, ..., b_q be the columns of B. Then since A and B can be written as
A = Σ_{i=1}^n a_i e'_{i,n},   B = Σ_{j=1}^q b_j e'_{j,q},

we have

vec(A ⊗ B) = Σ_{i=1}^n Σ_{j=1}^q vec(a_i e'_{i,n} ⊗ b_j e'_{j,q})
  = Σ_{i=1}^n Σ_{j=1}^q vec{(a_i ⊗ b_j)(e_{i,n} ⊗ e_{j,q})'}
  = Σ_{i=1}^n Σ_{j=1}^q (e_{i,n} ⊗ e_{j,q}) ⊗ (a_i ⊗ b_j)
  = Σ_{i=1}^n Σ_{j=1}^q (I_n ⊗ K_qm ⊗ I_p)(e_{i,n} ⊗ a_i ⊗ e_{j,q} ⊗ b_j)
  = (I_n ⊗ K_qm ⊗ I_p)[ { Σ_{i=1}^n vec(a_i e'_{i,n}) } ⊗ { Σ_{j=1}^q vec(b_j e'_{j,q}) } ]
  = (I_n ⊗ K_qm ⊗ I_p){vec(A) ⊗ vec(B)} □
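The identity of Theorem 7.32 can be confirmed numerically as well; the code below is our own NumPy check under the same conventions as the earlier sketches.

```python
import numpy as np

def commutation_matrix(m, n):
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            H = np.zeros((m, n)); H[i, j] = 1.0
            K += np.kron(H, H.T)
    return K

def vec(A):
    return A.reshape(-1, 1, order='F')

rng = np.random.default_rng(1)
m, n, p, q = 2, 3, 3, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
# Theorem 7.32: vec(A kron B) = (I_n kron K_qm kron I_p)(vec(A) kron vec(B))
L = np.kron(np.eye(n), np.kron(commutation_matrix(q, m), np.eye(p)))
assert np.allclose(vec(np.kron(A, B)), L @ np.kron(vec(A), vec(B)))
```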
Our next theorem establishes some results for the special commutation matrix K_mm. Corresponding results for the general commutation matrix K_mn can be found in Magnus and Neudecker (1979) or Magnus (1988).

Theorem 7.33. The commutation matrix K_mm has the eigenvalue +1 repeated m(m + 1)/2 times and the eigenvalue -1 repeated m(m - 1)/2 times. In addition,

tr(K_mm) = m   and   |K_mm| = (-1)^{m(m-1)/2}
Proof. Since K_mm is real and symmetric, we know from Theorem 3.8 that its eigenvalues are also real. Further, since K_mm is orthogonal, the square of each eigenvalue must be 1, so it has eigenvalues +1 and -1 only. Let p be the number of eigenvalues equal to -1, implying that the number of eigenvalues equal to +1 is m^2 - p. Since the trace equals the sum of the eigenvalues, we must have tr(K_mm) = (m^2 - p)(1) + p(-1) = m^2 - 2p. But by using basic properties of the trace, we also find that

tr(K_mm) = tr( Σ_{i=1}^m Σ_{j=1}^m (e_i e'_j) ⊗ (e_j e'_i) ) = Σ_{i=1}^m Σ_{j=1}^m tr(e_i e'_j) tr(e_j e'_i)
  = Σ_{i=1}^m Σ_{j=1}^m (e'_j e_i)^2 = Σ_{i=1}^m 1 = m

Evidently, m^2 - 2p = m, so that p = m(m - 1)/2 as claimed. Finally, the formula given for the determinant follows directly from the fact that the determinant equals the product of the eigenvalues. □
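Theorem 7.33 is straightforward to check for a small m; the following NumPy snippet is our illustration, reusing the commutation-matrix construction from the earlier sketches.

```python
import numpy as np

def commutation_matrix(m, n):
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            H = np.zeros((m, n)); H[i, j] = 1.0
            K += np.kron(H, H.T)
    return K

m = 4
K = commutation_matrix(m, m)
eig = np.linalg.eigvalsh(K)                 # K_mm is real and symmetric
assert np.sum(np.isclose(eig, 1.0)) == m * (m + 1) // 2
assert np.sum(np.isclose(eig, -1.0)) == m * (m - 1) // 2
assert np.isclose(np.trace(K), m)
assert np.isclose(np.linalg.det(K), (-1.0) ** (m * (m - 1) // 2))
```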
We will see later that the commutation matrix K_mm appears in some important matrix moment formulas through the term N_m = (1/2)(I_{m^2} + K_mm). Consequently, we will establish some basic properties of N_m.
Theorem 7.34. Let N_m = (1/2)(I_{m^2} + K_mm), and let A and B be m × m matrices. Then

(a) N_m = N'_m = N_m^2,
(b) N_m K_mm = N_m = K_mm N_m,
(c) N_m vec(A) = (1/2) vec(A + A'),
(d) N_m(A ⊗ B)N_m = N_m(B ⊗ A)N_m.
Proof. The symmetry of N_m follows from the symmetry of I_{m^2} and K_mm, while

N_m^2 = (1/4)(I_{m^2} + K_mm)^2 = (1/4)(I_{m^2} + 2K_mm + K_mm^2) = (1/2)(I_{m^2} + K_mm) = N_m,

since K_mm^2 = I_{m^2} follows from the fact that K'_mm = K_mm. Similarly, (b) follows from the fact that K_mm^2 = I_{m^2}. Part (c) is an immediate consequence of

I_{m^2} vec(A) = vec(A),   K_mm vec(A) = vec(A')

Finally, to prove (d), note that, by using Theorem 7.31(b) and Theorem 7.34(b),

N_m(A ⊗ B)N_m = N_m K_mm(A ⊗ B)K_mm N_m = N_m(B ⊗ A)N_m □
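All four parts of Theorem 7.34 can be verified directly for a small m; the NumPy check below is our own sketch, not the book's.

```python
import numpy as np

m = 3
K = np.zeros((m * m, m * m))
for i in range(m):
    for j in range(m):
        H = np.zeros((m, m)); H[i, j] = 1.0
        K += np.kron(H, H.T)
N = 0.5 * (np.eye(m * m) + K)               # N_m = (1/2)(I + K_mm)

rng = np.random.default_rng(4)
A = rng.standard_normal((m, m))
B = rng.standard_normal((m, m))
vecA = A.reshape(-1, order='F')
# (a) N_m is symmetric and idempotent
assert np.allclose(N, N.T) and np.allclose(N, N @ N)
# (b) N_m K_mm = N_m = K_mm N_m
assert np.allclose(N @ K, N) and np.allclose(K @ N, N)
# (c) N_m vec(A) = (1/2) vec(A + A')
assert np.allclose(N @ vecA, 0.5 * (A + A.T).reshape(-1, order='F'))
# (d) N_m (A kron B) N_m = N_m (B kron A) N_m
assert np.allclose(N @ np.kron(A, B) @ N, N @ np.kron(B, A) @ N)
```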
The proof of our final result will be left to the reader as an exercise.
Theorem 7.35. Let A and B be m × m matrices such that A = BB'. Then

(a) N_m(B ⊗ B)N_m = (B ⊗ B)N_m = N_m(B ⊗ B),
(b) (B ⊗ B)N_m(B' ⊗ B') = N_m(A ⊗ A).
8. SOME OTHER MATRICES ASSOCIATED WITH THE VEC OPERATOR
In this section, we introduce several other matrices that, like the commutation matrix, have important relationships with the vec operator. However, each of the matrices we discuss here is useful in working with vec(A) when the matrix A is square and has some particular structure. A more thorough discussion of this and other related material can be found in Magnus (1988). When the m × m matrix A is symmetric, then vec(A) contains redundant elements since a_ij = a_ji for i ≠ j. For this reason, we previously defined v(A) to be the m(m + 1)/2 × 1 vector formed by stacking the columns of the lower triangular portion of A. The matrix that transforms v(A) into vec(A) is called the duplication matrix; that is, if we denote this duplication matrix by D_m, then for any m × m symmetric matrix A,

D_m v(A) = vec(A)   (7.18)
For instance, the duplication matrix D3 is given by
D3 =
1 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 1 0 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0
0 0 1 0 0 0
0 0 0 0 1 0
0 0 0 0 0 1
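The pattern visible in D3 generalizes directly, and (7.18) can be checked in code. The following NumPy construction is our own sketch; the helper names duplication_matrix and v are assumptions.

```python
import numpy as np

def duplication_matrix(m):
    # column col of D_m corresponds to element (i, j), i >= j, of v(A)
    D = np.zeros((m * m, m * (m + 1) // 2))
    col = 0
    for j in range(m):
        for i in range(j, m):
            D[j * m + i, col] = 1.0          # a_ij sits at row j*m + i of vec(A)
            if i != j:
                D[i * m + j, col] = 1.0      # its mirror a_ji sits at row i*m + j
            col += 1
    return D

def v(A):
    # stack the columns of the lower triangular portion of A
    return np.concatenate([A[j:, j] for j in range(A.shape[0])])

m = 3
S = np.arange(1.0, m * m + 1).reshape(m, m)
S = S + S.T                                  # a symmetric test matrix
D = duplication_matrix(m)
# (7.18): D_m v(A) = vec(A) for symmetric A
assert np.allclose(D @ v(S), S.reshape(-1, order='F'))
# Theorem 7.36(a): D_m has full column rank m(m + 1)/2
assert np.linalg.matrix_rank(D) == m * (m + 1) // 2
```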
For an explicit expression for the m^2 × m(m + 1)/2 duplication matrix D_m, refer to Magnus (1988) or Problem 7.54. Some properties of the duplication matrix and its Moore-Penrose inverse are summarized in Theorem 7.36.

Theorem 7.36. Let D_m be the m^2 × m(m + 1)/2 duplication matrix and D_m^+ its Moore-Penrose inverse. Then

(a) rank(D_m) = m(m + 1)/2,
(b) D_m^+ = (D'_m D_m)^{-1} D'_m,
(c) D_m^+ D_m = I_{m(m+1)/2},
(d) D_m^+ vec(A) = v(A) for every m × m symmetric matrix A.

Proof. Clearly, for every m(m + 1)/2 × 1 vector x there exists an m × m symmetric matrix A such that x = v(A). But if, for some symmetric A, D_m v(A) = 0, then from the definition of D_m, vec(A) = 0, which then implies that v(A) = 0. Thus, D_m x = 0 only if x = 0, and so D_m has full column rank. Parts (b) and (c) follow immediately from (a) and Theorem 5.3, while (d) is obtained by premultiplying (7.18) by D_m^+ and then using (c). □

The duplication matrix and its Moore-Penrose inverse have some important relationships with K_mm and N_m.
Theorem 7.37. Let D_m be the m^2 × m(m + 1)/2 duplication matrix and D_m^+ its Moore-Penrose inverse. Then

(a) K_mm D_m = N_m D_m = D_m,
(b) D_m^+ K_mm = D_m^+ N_m = D_m^+,
(c) D_m D_m^+ = N_m.

Proof. For any m × m symmetric matrix A, it follows that

K_mm D_m v(A) = K_mm vec(A) = vec(A') = vec(A) = D_m v(A)   (7.19)

Similarly, we have

N_m D_m v(A) = N_m vec(A) = (1/2) vec(A + A') = vec(A) = D_m v(A)   (7.20)

Since {v(A): A m × m and symmetric} is all of m(m + 1)/2-dimensional space, (7.19) and (7.20) establish (a). To prove (b), take the transpose of (a), premultiply all three sides by (D'_m D_m)^{-1}, and then use Theorem 7.36(b). We will prove (c) by showing that for any m × m matrix A,
D_m D_m^+ vec(A) = N_m vec(A)

If we define A* = (1/2)(A + A'), then A* is symmetric and

N_m vec(A) = (1/2)(I_{m^2} + K_mm) vec(A) = (1/2){vec(A) + vec(A')} = vec(A*)

Using this and (b), we find that

D_m D_m^+ vec(A) = D_m D_m^+ N_m vec(A) = D_m D_m^+ vec(A*) = D_m v(A*) = vec(A*) = N_m vec(A),

and so the proof is complete. □
We will need the following result in the next chapter.

Theorem 7.38. If A is an m × m nonsingular matrix, then D_m^+(A ⊗ A)D_m is nonsingular and its inverse is given by D_m^+(A^{-1} ⊗ A^{-1})D_m.
Proof. To prove the result we simply show that the product of the two matrices given above yields I_{m(m+1)/2}. Using Theorem 7.35(a), Theorem 7.36(c), and Theorem 7.37(a) and (c), we have

D_m^+(A ⊗ A)D_m D_m^+(A^{-1} ⊗ A^{-1})D_m = D_m^+(A ⊗ A)N_m(A^{-1} ⊗ A^{-1})D_m
  = D_m^+ N_m(A ⊗ A)(A^{-1} ⊗ A^{-1})D_m
  = D_m^+ N_m D_m = D_m^+ D_m = I_{m(m+1)/2} □
We next consider the situation in which the m × m matrix A is lower triangular. In this case, the elements of vec(A) are identical to those of v(A) except that vec(A) has some additional zeros. We will denote by L_m the m(m + 1)/2 × m^2 matrix whose transpose transforms v(A) into vec(A); that is, L'_m satisfies

L'_m v(A) = vec(A)   (7.21)
Thus, for instance, for m = 3,

L'_3 =
1 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 1
Note that L'_m can be obtained from D_m by replacing m(m - 1)/2 of the rows of D_m with rows of zeros. The following properties of the matrix L_m can be proven directly from its definition given in (7.21).
Theorem 7.39. The m(m + 1)/2 × m^2 matrix L_m satisfies

(a) rank(L_m) = m(m + 1)/2,
(b) L_m L'_m = I_{m(m+1)/2},
(c) L_m^+ = L'_m,
(d) L_m vec(A) = v(A), for every m × m matrix A.
Proof. Note that if A is lower triangular, then vec(A)'vec(A) = v(A)'v(A), and so (7.21) implies

v(A)'L_m L'_m v(A) - v(A)'v(A) = v(A)'(L_m L'_m - I_{m(m+1)/2})v(A) = 0

for all lower triangular matrices A. But this can be true only if (b) holds since {v(A): A m × m and lower triangular} = R^{m(m+1)/2}. Part (a) follows immediately from (b), as does (c) since L_m^+ = L'_m(L_m L'_m)^{-1}. To prove (d), note that every matrix A can be written A = A_L + A_U, where A_L is lower triangular and A_U is upper triangular with each diagonal element equal to zero. Clearly,
0 = vec(A_L)'vec(A_U) = v(A_L)'L_m vec(A_U),

and since, for fixed A_U, this must hold for all choices of the lower triangular matrix A_L, it follows that

L_m vec(A_U) = 0
Thus, using this along with (7.21), (b), and the fact that v(A_L) = v(A), we have

L_m vec(A) = L_m vec(A_L) + L_m vec(A_U) = L_m L'_m v(A_L) = v(A_L) = v(A) □
We see from property (d) in Theorem 7.39 that L_m is the matrix that eliminates the zeros in vec(A) coming from the upper triangular portion of A so as to yield v(A). For this reason, L_m is sometimes referred to as the elimination matrix. Our next result gives some relationships between L_m and the matrices D_m and N_m. We will leave the proofs of these results as an exercise for the reader.
Theorem 7.40. The elimination matrix L_m satisfies

(a) L_m D_m = I_{m(m+1)/2},
(b) D_m L_m N_m = N_m,
(c) D_m^+ = L_m N_m.
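The three parts of Theorem 7.40 can be confirmed numerically; this NumPy check is our own sketch, combining the duplication and elimination constructions used in the earlier fragments.

```python
import numpy as np

def elimination_matrix(m):
    # L_m picks the on-and-below-diagonal elements of vec(A), giving v(A)
    L = np.zeros((m * (m + 1) // 2, m * m))
    row = 0
    for j in range(m):
        for i in range(j, m):
            L[row, j * m + i] = 1.0
            row += 1
    return L

def duplication_matrix(m):
    D = np.zeros((m * m, m * (m + 1) // 2))
    col = 0
    for j in range(m):
        for i in range(j, m):
            D[j * m + i, col] = 1.0
            if i != j:
                D[i * m + j, col] = 1.0
            col += 1
    return D

m = 3
L = elimination_matrix(m)
D = duplication_matrix(m)
K = np.zeros((m * m, m * m))
for i in range(m):
    for j in range(m):
        H = np.zeros((m, m)); H[i, j] = 1.0
        K += np.kron(H, H.T)
N = 0.5 * (np.eye(m * m) + K)
# Theorem 7.40: (a) L_m D_m = I, (b) D_m L_m N_m = N_m, (c) D_m^+ = L_m N_m
assert np.allclose(L @ D, np.eye(m * (m + 1) // 2))
assert np.allclose(D @ L @ N, N)
assert np.allclose(np.linalg.pinv(D), L @ N)
```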
The last matrix related to vec(A) that we will discuss is another sort of elimination matrix. Suppose now that the m × m matrix A is a strictly lower triangular matrix; that is, it is lower triangular and all of its diagonal elements are zero. In this case, v(A) contains all of the relevant elements of A. We denote by L̃_m the m(m - 1)/2 × m^2 matrix whose transpose transforms v(A) into vec(A); that is, L̃'_m satisfies

L̃'_m v(A) = vec(A)

Thus, for m = 3 we have

L̃'_3 =
0 0 0
1 0 0
0 1 0
0 0 0
0 0 0
0 0 1
0 0 0
0 0 0
0 0 0
Since L̃_m is very similar to L_m, some of its basic properties parallel those of L_m. For instance, the following results are analogous to those in Theorem 7.39. The proofs, which we omit, are similar to those of Theorem 7.39.

Theorem 7.41. The m(m - 1)/2 × m^2 matrix L̃_m satisfies

(a) rank(L̃_m) = m(m - 1)/2,
(b) L̃_m L̃'_m = I_{m(m-1)/2},
(c) L̃_m^+ = L̃'_m,
(d) L̃_m vec(A) = v(A), for every m × m matrix A.
Our final theorem gives some relationships between L̃_m, L_m, D_m, K_mm and N_m. The proof is left to the reader as an exercise.

Theorem 7.42. The m(m - 1)/2 × m^2 matrix L̃_m satisfies

(a) L̃_m K_mm L̃'_m = (0),
(b) L_m K_mm L̃'_m = (0),
(c) L̃_m D_m = L̃_m L'_m,
(d) L'_m L_m L̃'_m = L̃'_m,
(e) D_m L_m L̃'_m = 2N_m L̃'_m,
(f) L̃_m L'_m L_m L̃'_m = I_{m(m-1)/2}.
9. NONNEGATIVE MATRICES

The topic of this section, nonnegative and positive matrices, should not be confused with nonnegative definite and positive definite matrices, which we have discussed earlier on several occasions. An m × n matrix A is a nonnegative matrix, indicated by A ≥ (0), if each element of A is nonnegative. Similarly, A is a positive matrix, indicated by A > (0), if each element of A is positive. We will write A ≥ B and A > B to mean that A - B ≥ (0) and A - B > (0), respectively. Any matrix A can be transformed to a nonnegative matrix by replacing each of its elements by its absolute value. This will be denoted by abs(A); that is, if A is an m × n matrix, then abs(A) is also an m × n matrix with (i, j)th element given by |a_ij|. We will investigate some of the properties of nonnegative square matrices as well as indicate some of their applications in stochastic processes. For a more exhaustive coverage of this topic the reader is referred to the texts on nonnegative matrices by Berman and Plemmons (1994), Minc (1988), and Seneta (1973), as well as the books by Gantmacher (1959) and Horn and Johnson (1985). Most of the proofs that we present here follow along the lines of the derivations, based on matrix norms, given in Horn and Johnson (1985). We begin with some results regarding the spectral radius of nonnegative and positive matrices.
Theorem 7.43. Let A be an m × m matrix and x be an m × 1 vector. If A ≥ (0) and x > 0, then

min_{1≤i≤m} Σ_{j=1}^m a_ij ≤ p(A) ≤ max_{1≤i≤m} Σ_{j=1}^m a_ij   (7.22)

and

min_{1≤i≤m} x_i^{-1} Σ_{j=1}^m a_ij x_j ≤ p(A) ≤ max_{1≤i≤m} x_i^{-1} Σ_{j=1}^m a_ij x_j,   (7.23)

with similar inequalities holding when minimizing and maximizing over columns instead of rows.
" ~. \~
J
289
NONNEGATIVE MATRICES
'.'
.. :' .
•" ,
:;: 
Pr oo f
.
. ,
•,
Let
"
1/1 ,
•
a= mm 
,
1 0 and bil, = 0 if a = ies pl im n the s thi d an II ~ Ak k, er teg in e iv sit po y an r Clearly, it follows that fo that /lAk /I~ ~ /lBk /I~ or, equivalently,
I
.
t Bu . B) p( ~ A) p( at th 4 4.2 m re eo Th m fro ws llo fo it Taking the limit as k ~ t tha t fac the m fro ws llo fo a = B) p( ce sin ) .22 (7 in d un bo r this proves the lowe p(B) ~ a since 00,
Bl m = aI m
a in en ov pr is d un bo r pe up e Th 9. 4.1 m re eo Th to e du a and p( B) ::;; IIBII~ = . ing us similar fashion •
m
a = max
1 O.
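Both pairs of bounds in Theorem 7.43 are easy to test numerically on a random nonnegative matrix; this NumPy fragment is our own illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(5, 5))       # a nonnegative matrix
rho = np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius p(A)
row_sums = A.sum(axis=1)
# (7.22): the minimum and maximum row sums bound p(A)
assert row_sums.min() - 1e-12 <= rho <= row_sums.max() + 1e-12
# (7.23): the weighted version, for an arbitrary positive vector x
x = rng.uniform(0.5, 2.0, size=5)
w = (A @ x) / x
assert w.min() - 1e-12 <= rho <= w.max() + 1e-12
```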
Theorem 7.45. Let A be an m × m positive matrix and suppose that x is an eigenvector of A corresponding to an eigenvalue λ for which |λ| = p(A). Then there exists an angle θ such that e^{-iθ}x > 0.

Proof. Note that

abs(Ax) = abs(λx) = p(A) abs(x),   (7.24)

while it follows from Corollary 7.44.1 that

A abs(x) = p(A) abs(x)   (7.25)
Now by using (7.24) and (7.25), we find that

abs( Σ_{k=1}^m a_jk x_k ) = Σ_{k=1}^m a_jk abs(x_k)

holds for each j. Evidently,

| Σ_{k=1}^m a_jk x_k | = Σ_{k=1}^m |a_jk x_k|,

and this can happen only if the, possibly complex, numbers a_jk x_k = r_k e^{iθ_k} = r_k(cos θ_k + i sin θ_k), for k = 1, ..., m, have identical angles; that is, there exists some angle θ such that each a_jk x_k, for k = 1, ..., m, can be written in the form a_jk x_k = r_k e^{iθ} = r_k(cos θ + i sin θ). In this case, e^{-iθ} a_jk x_k = r_k > 0, which implies that e^{-iθ}x_k > 0 since a_jk > 0. □
The following result not only indicates that the eigenspace corresponding to p(A) has dimension one, but also that p(A) is the only eigenvalue of A having modulus equal to p(A).

Theorem 7.46. If A is an m × m positive matrix, then the dimension of the eigenspace corresponding to the eigenvalue p(A) is one. Further, if λ is an eigenvalue of A and λ ≠ p(A), then |λ| < p(A).
Proof. The first statement will be proven by showing that if u and v are nonnull vectors satisfying Au = p(A)u and Av = p(A)v, then there exists some scalar c such that v = cu. Now from Theorem 7.45, we know there exist angles θ_1 and θ_2 such that s = e^{-iθ_1}u > 0 and t = e^{-iθ_2}v > 0. Define w = t - ds, where

d = min_{1≤j≤m} t_j/s_j,

so that w is nonnegative with at least one component equal to 0. If w ≠ 0, then clearly Aw > 0 since A is positive. This leads to a contradiction since

Aw = At - dAs = p(A)t - p(A)ds = p(A)w

then implies that w > 0. Thus, we must have w = 0, so t = ds and v = cu, where c = de^{i(θ_2 - θ_1)}. To prove the second statement of the theorem, first note that from the definition of the spectral radius, |λ| ≤ p(A) for any eigenvalue λ of A. Now if x is an eigenvector corresponding to λ and |λ| = p(A), then it follows from Theorem 7.45 that there exists an angle θ such that u = e^{-iθ}x > 0. Clearly, Au = λu. Premultiplying this identity by D_u^{-1}, where D_u = diag(u_1, ..., u_m), we get D_u^{-1}Au = λ1_m, so that

u_i^{-1} Σ_{j=1}^m a_ij u_j = λ

holds for each i. Now applying Theorem 7.43, we get λ = p(A). □
We will see that the first statement in the previous theorem actually can be replaced by the stronger condition that p(A) must be a simple eigenvalue of A. But first we have the following results, the last of which is a very useful limiting result for A.
Theorem 7.47. Suppose that A is an m × m positive matrix, and x and y are positive vectors satisfying Ax = p(A)x, A'y = p(A)y, and x'y = 1. Then the following hold.

(a) (A - p(A)xy')^k = A^k - p(A)^k xy', for k = 1, 2, ....
(b) Each nonzero eigenvalue of A - p(A)xy' is an eigenvalue of A.
(c) p(A) is not an eigenvalue of A - p(A)xy'.
(d) p(A - p(A)xy') < p(A).
(e) lim_{k→∞} {p(A)^{-1}A}^k = xy'.
Proof. (a) is easily established by induction, since it clearly holds for k = 1, and if it holds for k = j - 1, then

(A - p(A)xy')^j = (A - p(A)xy')^{j-1}(A - p(A)xy') = (A^{j-1} - p(A)^{j-1}xy')(A - p(A)xy')
  = A^j - p(A)A^{j-1}xy' - p(A)^{j-1}xy'A + p(A)^j xy'xy'
  = A^j - p(A)^j xy' - p(A)^j xy' + p(A)^j xy' = A^j - p(A)^j xy'

Next, suppose that λ ≠ 0 and u are an eigenvalue and eigenvector of (A - p(A)xy'), so that

(A - p(A)xy')u = λu

Premultiplying this equation by xy' and observing that xy'(A - p(A)xy') = (0), we see that we must have xy'u = 0. Consequently,

Au = (A - p(A)xy')u = λu,

and so λ is also an eigenvalue of A, as is required for (b). To prove (c), suppose that λ = p(A) is an eigenvalue of A - p(A)xy' with a corresponding eigenvector u. But we have just seen that this would imply that u is also an eigenvector of A corresponding to the eigenvalue p(A). Thus, from Theorem 7.46, u = cx for some scalar c and

p(A)u = (A - p(A)xy')u = (A - p(A)xy')cx = p(A)cx - p(A)cx = 0

But this is impossible since p(A) > 0 and u ≠ 0, and so (c) holds. Now (d) follows directly from (b), (c), and Theorem 7.46. Finally, to prove (e), note that by dividing both sides of the equation given in (a) by p(A)^k and rearranging, we get

{p(A)^{-1}A}^k - xy' = [p(A)^{-1}{A - p(A)xy'}]^k

Take the limit, as k → ∞, of both sides of this equation and observe that from (d),

p[p(A)^{-1}{A - p(A)xy'}] = p(A)^{-1} p{A - p(A)xy'} < 1,

and so

lim_{k→∞} {p(A)^{-1}A}^k - xy' = (0)

follows from Theorem 4.23. □
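The limit in part (e) can be observed numerically for a random positive matrix. This NumPy sketch is our own; it relies on the fact that for a positive matrix the Perron root is the unique eigenvalue of maximum modulus.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4
A = rng.uniform(0.1, 1.0, size=(m, m))       # a positive matrix
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(np.abs(eigvals))
rho = eigvals[k].real                        # Perron root p(A)
x = np.abs(eigvecs[:, k].real)               # right eigenvector, taken positive
wl, vl = np.linalg.eig(A.T)
y = np.abs(vl[:, np.argmax(np.abs(wl))].real)
y = y / (x @ y)                              # scale so that x'y = 1
# Theorem 7.47(e): {p(A)^{-1} A}^k converges to the rank-one matrix xy'
P = np.linalg.matrix_power(A / rho, 100)
assert np.allclose(P, np.outer(x, y), atol=1e-6)
```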
Theorem 7.48. Let A be an m × m positive matrix. Then the eigenvalue p(A) is a simple eigenvalue of A.

Proof. Let A = XTX* be the Schur decomposition of A, so that X is a unitary matrix and T is an upper triangular matrix with the eigenvalues of A as its diagonal elements. Write T = T_1 + T_2, where T_1 is diagonal and T_2 is upper triangular with each diagonal element equal to 0. Suppose that we have chosen X so that the diagonal elements of T_1 are ordered as T_1 = diag(p(A), ..., p(A), λ_{r+1}, ..., λ_m), where r is the multiplicity of the eigenvalue p(A) and |λ_j| < p(A) for j = r + 1, ..., m, due to Theorem 7.46. We need to show that r = 1. Note that, for any upper triangular matrix U with ith diagonal element u_ii, U^k is also upper triangular with its ith diagonal element given by u_ii^k. Using this, we find that

lim_{k→∞} {p(A)^{-1}A}^k = X[ lim_{k→∞} diag(1, ..., 1, {λ_{r+1}/p(A)}^k, ..., {λ_m/p(A)}^k) + T_3 ]X*
  = X{diag(1, ..., 1, 0, ..., 0) + T_3}X*,
where this last diagonal matrix has r 1s and T_3 is an upper triangular matrix with each diagonal element equal to 0. Clearly, this limiting matrix has rank at least r. But from Theorem 7.47(e), we see that the limiting matrix must have rank 1. This proves the result. □

To this point, we have concentrated on positive matrices. Our next step is to extend some of the results above to nonnegative matrices. We will see that many of these results generalize to the class of irreducible nonnegative matrices.

Definition 7.2. An m × m matrix A, with m ≥ 2, is called a reducible matrix if there exist some integer r and m × m permutation matrix P such that
PAP' = [ B    C ]
       [ (0)  D ],
where B is r x r, C is r x (m  r), and D is (m  r) x (m  r). If A is not
reducible, then it is said to be irreducible. We will need the following result regarding irreducible nonnegative matrices.
Theorem 7.49. An m × m nonnegative matrix A is irreducible if and only if (I_m + A)^{m-1} > (0).

Proof. First suppose that A is irreducible. We will show that if x is an m × 1 nonnegative vector with r positive components, 1 ≤ r ≤ m - 1, then (I_m + A)x has at least r + 1 positive components. Repeated use of this result verifies that (I_m + A)^{m-1} > (0) since each column of I_m + A has at least one positive component. Since A ≥ (0), (I_m + A)x = x + Ax must have at least r positive components. If it has exactly r positive components, then the jth component of Ax must be 0 for every j for which x_j = 0. Equivalently, for any permutation matrix P, the jth component of PAx must be 0 for every j for which the jth component of Px is 0. If we choose a permutation matrix for which y = Px has its m - r 0s in the last m - r positions, then we find that the jth component of PAx = PAP'y must be 0 for j = r + 1, ..., m. Since PAP' ≥ (0) and the first r components of y are positive, PAP' would have to be of the form
PAP' = [ B    C ]
       [ (0)  D ]
Since this contradicts the fact that A is irreducible, the number of positive components in the vector (I_m + A)x must exceed r. Conversely, now suppose that (I_m + A)^{m-1} > (0) so that, clearly, (I_m + A)^{m-1} is irreducible. Now A cannot be reducible since, if for some permutation matrix P,
PAP' = [ B    C ]
       [ (0)  D ],
then

P(I_m + A)^{m-1}P' = [ I_r + B       C       ]^{m-1}
                     [ (0)      I_{m-r} + D  ],

and the matrix on the right-hand side of this last equation has the upper triangular form given in Definition 7.2. □

We will generalize the result of Theorem 7.44 by showing that p(A) is positive, is an eigenvalue of A, and has a positive eigenvector when A is an irreducible nonnegative matrix. But first we need the following result.
Theorem 7.50. Let A be an m × m irreducible nonnegative matrix, x be an m × 1 nonnegative vector, and define the function

f(x) = min_{x_j ≠ 0} x_j^{-1}(A)_{j·}x = min_{x_j ≠ 0} x_j^{-1} Σ_{i=1}^m a_ji x_i

Then there exists an m × 1 nonnegative vector b such that b'1_m = 1 and f(b) ≥ f(x) holds for any nonnegative x.
Proof. Define the set

S = {y: y = (I_m + A)^{m-1}x, x ≥ 0, x'1_m = 1}

Since S is a closed and bounded set, and f is a continuous function on S due to the fact that y > 0 if y ∈ S, there exists a c ∈ S such that f(c) ≥ f(y) for all y ∈ S. Define b = c/(c'1_m), and note that f is unaffected by scale changes, so f(b) = f(c). Let x be an arbitrary nonnegative vector and define x* = x/(x'1_m) and y = (I_m + A)^{m-1}x*. Now it follows from the definition of f that

Ax* - f(x*)x* ≥ 0

Premultiplying this equation by (I_m + A)^{m-1} and using the fact that (I_m + A)^{m-1}A = A(I_m + A)^{m-1}, we find that

Ay - f(x*)y ≥ 0

But α = f(y) is the largest value for which Ay - αy ≥ 0 since at least one component of Ay - f(y)y is 0; that is, for some k, f(y) = y_k^{-1}(A)_{k·}y and, consequently, the kth component of Ay - f(y)y will be 0. Thus, we have shown that f(y) ≥ f(x*) = f(x). The result then follows from the fact that f(y) ≤ f(c) = f(b). □

Theorem 7.51.
x.
•
Proof
We first show thatf(b) is a positive eigenvalue of A, wheref(b) is defined as in Theorem 7.50, and b is a nonnegative vector satisfying b'lm = 1 and maximizing f. Since b maximizes f(x) over all nonnegative x, we have
297
S E IC R T A M E IV T A G E N N O N
m •
= mm
a ij
I SiSm
> 0,
j= 1
, A f o e lu a v n e ig e n a is ) b ( tf a th e v ro p o T . le ib c u d re ir d n a e v s in c e A is n o n n e g a ti b ) b f( b A f I . O ~ )b b f( b A t a th s w o ll fo it f f o n io it n fi e d e recall th a t fr o m th " )' A + m (1 e c in s n e th t, n e n o p m o c e iv it s o p e n o t s a le t a has have
I
> (0), we m u s t
, 0 > y ) b f( y A = ) )b b f( b (lm + A )m I (A
•
, 0 ~ y ex y A h ic h w r fo e lu a v t s e rg la e th is ) y fe = ex t u B . b w h e re y = (lm+A)m1 ) y ( f s e iz m .i x a m b e c in s e u tr e b t o n n a c h ic h w ) b fe > s o w e w o u ld h a v e f ( y ) b d n a A f o e lu a v n e ig e n a is ) b fe so d n a 0 = b ) b f( b A o v e r a ll y ~ O. T h u s , y b ) A e p = ) b fe t a th w o h s to is p te s .t x e n r u O r. to c e v n is a c o rr e s p o n d in g eige is u if w o N . A f o e lu a v n e ig e ry ra it rb a n a is i A re e h w , d s h o w in g th a tf ( b ) ~ lA r o iU A = u A n e th i, A to g in d n o p s e rr o c A f o r to c e v n a n e ig e m
j= 1
, y tl n e u q e s n o C . m , . .. , 1 = h for m
j= 1
ly p im s r o ,m . .. , 1 = h r fo A a b s (u ) 
lAd a b s (u ) ~ 0,
e iv it s o p a d n fi t s u m e w , y ll a in F ). b fe ; S » (u s b a f( ; S il IA a n d this im p li e s th a t d n u fo y d a e lr a e v a h e W ). b fe = ) A e p e lu a v n e ig e e th h it i w d te ia c o s s a r to c e v n e ig e = b II )I A + ", (1 t a th s e li p im b ) b f( = b A t a th te o N . b r, to c e a n o n n e g a ti v e e ig e n v { I + f( b )} m 1 b , a n d s o
b=
(1 + A)I':II::_I;b
:n;,.:,.' : :
{ I +f(b)}m
I
. e iv it s o p y ll a tu c a is b t a th d n fi e w , 9 .4 7 m re o e h T g in Thus, us
o
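Theorems 7.49 and 7.51 can be illustrated together in a few lines of NumPy; the function name is_irreducible and the example matrices below are our own choices.

```python
import numpy as np

def is_irreducible(A):
    # Theorem 7.49: a nonnegative A is irreducible iff (I + A)^(m-1) > (0)
    m = A.shape[0]
    return bool(np.all(np.linalg.matrix_power(np.eye(m) + A, m - 1) > 0))

# A cyclic permutation matrix: irreducible although most entries are zero
C = np.roll(np.eye(4), -1, axis=0)
assert is_irreducible(C)
# A triangular matrix with a zero block is reducible
R = np.array([[1.0, 1.0], [0.0, 1.0]])
assert not is_irreducible(R)

# Theorem 7.51: an irreducible nonnegative matrix has the positive
# eigenvalue p(A) with an associated positive eigenvector
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0],
              [4.0, 0.0, 0.0]])
assert is_irreducible(A)
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # the Perron root is the real eigenvalue
rho = eigvals[k].real
b = np.abs(eigvecs[:, k])
assert rho > 0 and np.allclose(A @ b, rho * b)
assert np.all(b > 0)
```

Note that this weighted-cycle example is irreducible but not primitive: all three of its eigenvalues share the modulus p(A), which is why the eigenvalue is selected by largest real part rather than largest modulus.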
The proof of the following result will be left to the reader as an exercise.

Theorem 7.52. If A is an m × m irreducible nonnegative matrix, then p(A) is a simple eigenvalue of A.

Although p(A) is a simple eigenvalue of an irreducible nonnegative matrix A, there may be other eigenvalues of A that have absolute value p(A). Consequently, Theorem 7.47(e) does not immediately extend to irreducible nonnegative matrices. This leads us to the following definition.

Definition 7.3. An m × m nonnegative matrix A is said to be primitive if it is irreducible and has only one eigenvalue satisfying |λ_i| = p(A).

Clearly, the result of Theorem 7.47(e) does extend to primitive matrices and this is summarized below.

Theorem 7.53.
Let A be an m × m primitive nonnegative matrix and suppose that the m × 1 vectors x and y satisfy Ax = p(A)x, A'y = p(A)y, x > 0, y > 0, and x'y = 1. Then

lim_{k→∞} {p(A)^{-1}A}^k = xy'
Our final theorem of this section gives a general limit result that holds for all irreducible nonnegative matrices. A proof of this result can be found in Horn and Johnson (1985).

Theorem 7.54. Let A be an m × m irreducible nonnegative matrix and suppose that the m × 1 vectors x and y satisfy Ax = p(A)x, A'y = p(A)y, and x'y = 1. Then

lim_{N→∞} N^{-1} Σ_{k=1}^N {p(A)^{-1}A}^k = xy'
Nonnegative matrices play an important role in the study of stochastic processes. We will illustrate some of their applications to a particular type of stochastic process known as a Markov chain. Additional information on Markov chains, and stochastic processes in general, can be found in texts such as Bhattacharya and Waymire (1990), Medhi (1994), and Taylor and Karlin (1984).
Example 7.12. Suppose that we are observing some random phenomenon over time, and at any one point in time our observation can take on any one of m values, sometimes referred to as states, 1, ..., m. In other words, we have a sequence of random variables X_t, for time periods t = 0, 1, ..., where each random variable can be equal to any one of the numbers 1, ..., m. If the probability that X_t is in state i depends only on the state that X_{t-1} is in and not on the states of prior time periods, then this process is said to be a Markov chain. If this probability also does not depend on the value of t, then the Markov chain is said to be homogeneous. In this case, the state probabilities for any time period can be computed from the initial state probabilities and what are known as the transition probabilities. We will write the initial state probability vector p^(0) = (p_1^(0), ..., p_m^(0))', where p_i^(0) gives the probability that the process starts out at time 0 in state i. The matrix of transition probabilities is the m × m matrix P whose (i, j)th element, p_ij, gives the probability of X_t being in state i given that X_{t-1} is in state j. Thus, if p^(t) = (p_1^(t), ..., p_m^(t))' and p_i^(t) is the probability that the system is in state i at time t, then, clearly,
p^(1) = Pp^(0),   p^(2) = Pp^(1) = PPp^(0) = P^2 p^(0),

or for general t,

p^(t) = P^t p^(0)
If we have a large population of individuals subject to the random process discussed above, then p_i^(t) could be described as the proportion of individuals in state i at time t, while p_i^(0) would be the proportion of individuals starting out in state i. A natural question then is what is happening to these proportions as t increases? That is, can we determine the limiting behavior of p^(t)? Note that this depends on the limiting behavior of P^t, and P is a nonnegative matrix since each of its elements is a probability. Thus, if P is a primitive matrix, we can apply Theorem 7.53. Now, since the jth column of P gives the probabilities of the various states for time period t when we are in state j at time period t - 1, the column sum must be 1; that is, 1'_m P = 1'_m or P'1_m = 1_m, so P has an eigenvalue equal to 1. Further, a simple application of Theorem 7.43 assures us that p(P) ≤ 1, so we must have p(P) = 1. Consequently, if P is primitive and π is the m × 1 positive vector satisfying Pπ = π and π'1_m = 1, then

lim_{t→∞} {p(P)^{-1}P}^t = lim_{t→∞} P^t = π1'_m
Using this, we see that

lim_{t→∞} p^(t) = lim_{t→∞} P^t p^(0) = π1'_m p^(0) = π,

where the last step follows from the fact that 1'_m p^(0) = 1. Thus, the system approaches a point of equilibrium in which the proportions for the various states are given by the components of π, and these proportions do not change from time period to time period. Further, this limiting behavior is not dependent upon the initial proportions in p^(0). As a specific example, let us consider the problem of social mobility that involves the transition between social classes over successive generations in a
family. Suppose that each individual is classified according to his occupation as being upper, middle, or lower class, and these have been labelled as states 1, 2, and 3, respectively. Suppose that the transition matrix relating a son's class to his father's class is given by
P = [ 0.45  0.05  0.05 ]
    [ 0.45  0.70  0.50 ]
    [ 0.10  0.25  0.45 ],
so that, for instance, the probabilities that a son will have an upper, middle, or lower class occupation when his father has an upper class occupation are given by the entries in the first column of P. Since P is positive, the limiting result just discussed applies. A simple eigenanalysis of the matrix P reveals that the positive vector π, satisfying Pπ = π and π'1_m = 1, is given by π = (0.083, 0.620, 0.297)'. Thus, if this random process satisfies the conditions of a homogeneous Markov chain, then after many generations, the male population would consist of 8.3% in the upper class, 62% in the middle class, and 29.7% in the lower class.
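The eigenanalysis for this example takes only a few lines of NumPy; the following fragment is our own illustration of the computation, not part of the text.

```python
import numpy as np

# Transition matrix of the social-mobility example (columns sum to one)
P = np.array([[0.45, 0.05, 0.05],
              [0.45, 0.70, 0.50],
              [0.10, 0.25, 0.45]])
assert np.allclose(P.sum(axis=0), 1.0)

# The equilibrium vector pi satisfies P pi = pi and pi'1_m = 1
eigvals, eigvecs = np.linalg.eig(P)
pi = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
pi = pi / pi.sum()
assert np.allclose(P @ pi, pi)
assert np.allclose(pi, [0.083, 0.620, 0.297], atol=1e-3)

# Iterating the chain from any starting distribution approaches pi
p0 = np.array([1.0, 0.0, 0.0])
pt = np.linalg.matrix_power(P, 100) @ p0
assert np.allclose(pt, pi, atol=1e-8)
```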
10. CIRCULANT AND TOEPLITZ MATRICES

In this section, we briefly discuss some structured matrices that have applications in stochastic processes and time series analysis. For a more comprehensive treatment of the first of these classes of matrices, the reader is referred to Davis (1979). An m × m matrix A is said to be a circulant matrix if each row of A can be obtained from the previous row by a circular rotation of elements; that is, if we shift each element in the ith row over one column, with the element in the last column being shifted back to the first column, we get the (i + 1)th row, unless i = m, in which case we get the first row. Thus, if the elements of the first row of A are a_1, a_2, ..., a_m, then to be a circulant matrix, A must have the form
    A =
        a_1      a_2      a_3      ...  a_{m-1}  a_m
        a_m      a_1      a_2      ...  a_{m-2}  a_{m-1}
        a_{m-1}  a_m      a_1      ...  a_{m-3}  a_{m-2}
        .        .        .             .        .
        a_2      a_3      a_4      ...  a_m      a_1
                                                          (7.26)
We will sometimes use the notation A = circ(a_1, a_2, ..., a_m) to refer to the circulant matrix in (7.26). One special circulant matrix, which we will denote by Π_m, is circ(0, 1, 0, ..., 0). This matrix, which also can be written as
    Π_m = (e_2, e_3, ..., e_m, e_1)',

is a permutation matrix, so that Π_m Π'_m = Π'_m Π_m = I_m. Note that if we use a_1, ..., a_m to denote the columns and b'_1, ..., b'_m to denote the rows of an arbitrary m x m matrix A, then

    A Π_m = (a_1, a_2, ..., a_m)(e_m, e_1, ..., e_{m-1}) = (a_m, a_1, ..., a_{m-1}),    (7.27)

and

    Π_m A = (b_2, b_3, ..., b_m, b_1)',    (7.28)

and (7.27) equals (7.28) if and only if A is of the form given in (7.26). Thus, we have the following result.
Theorem 7.55. The m x m matrix A is a circulant matrix if and only if A = Π_m A Π'_m.
Our next theorem gives an expression for an m x m circulant matrix in terms of a sum of m matrices.
Theorem 7.56. The circulant matrix A = circ(a_1, ..., a_m) can be expressed as

    A = a_1 I_m + a_2 Π_m + a_3 Π_m^2 + ... + a_m Π_m^{m-1}

Proof. Using (7.26), we see that

    A = a_1 I_m + a_2 (e_m, e_1, ..., e_{m-1}) + a_3 (e_{m-1}, e_m, e_1, ..., e_{m-2}) + ... + a_m (e_2, e_3, ..., e_m, e_1)

Since the postmultiplication of any m x m matrix by Π_m shifts the columns of
that matrix one place to the right, we find that

    Π_m = (e_m, e_1, ..., e_{m-1}),  Π_m^2 = (e_{m-1}, e_m, e_1, ..., e_{m-2}),  ...,  Π_m^{m-1} = (e_2, e_3, ..., e_m, e_1),

and so the result follows.    □
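Theorem 7.56 is straightforward to verify numerically. The sketch below is our own illustration (numpy assumed; the entries of a are arbitrary example values): it builds circ(a_1, ..., a_m) both directly and as the polynomial a_1 I + a_2 Π + ... + a_m Π^{m−1} in the shift matrix Π_m.

```python
import numpy as np

def circ(first_row):
    """Circulant matrix circ(a1, ..., am): each row is the previous row
    shifted one place to the right, wrapping around, as in (7.26)."""
    a = np.asarray(first_row, dtype=float)
    return np.array([np.roll(a, i) for i in range(len(a))])

m = 4
Pi = circ([0.0, 1.0, 0.0, 0.0])      # the special circulant Pi_m = circ(0,1,0,...,0)
a = np.array([3.0, 1.0, 4.0, 1.5])

# Theorem 7.56: circ(a1, ..., am) = a1 I + a2 Pi + a3 Pi^2 + ... + am Pi^(m-1).
A = sum(a[k] * np.linalg.matrix_power(Pi, k) for k in range(m))

print(np.allclose(A, circ(a)))        # True
print(np.allclose(Pi @ A @ Pi.T, A))  # True: the Theorem 7.55 characterization
```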
Certain operations on circulant matrices produce another circulant matrix. Some of these are given in the following theorem.
Theorem 7.57. Let A and B be m x m circulant matrices. Then

(a) A' is circulant,
(b) for any scalars α and β, αA + βB is circulant,
(c) for any positive integer r, A^r is circulant,
(d) A^{-1} is circulant, if A is nonsingular,
(e) AB is circulant.
Proof. If A = circ(a_1, ..., a_m) and B = circ(b_1, ..., b_m), it follows directly from (7.26) that A' = circ(a_1, a_m, a_{m-1}, ..., a_2) and

    αA + βB = circ(αa_1 + βb_1, ..., αa_m + βb_m),

which proves (a) and (b). Since A is circulant, we must have A = Π_m A Π'_m. But Π_m is an orthogonal matrix, so

    A^r = (Π_m A Π'_m)^r = Π_m A^r Π'_m,

and so by Theorem 7.55, A^r is also a circulant matrix. In a similar fashion, we find that if A is nonsingular, then

    A^{-1} = (Π_m A Π'_m)^{-1} = Π_m A^{-1} Π'_m,

and so A^{-1} is circulant. Finally, to prove (e), note that we must have both A = Π_m A Π'_m and B = Π_m B Π'_m, implying that

    AB = Π_m A Π'_m Π_m B Π'_m = Π_m AB Π'_m,

and so the proof is complete.    □
The representation of a circulant matrix given in Theorem 7.56 provides a simple way of proving the following result.

Theorem 7.58. Suppose that A and B are m x m circulant matrices. Then their product commutes; that is, AB = BA.

Proof. If A = circ(a_1, ..., a_m) and B = circ(b_1, ..., b_m), then it follows from Theorem 7.56 that

    A = Σ_{i=1}^{m} a_i Π_m^{i-1},    B = Σ_{j=1}^{m} b_j Π_m^{j-1},
where Π_m^m = I_m. Consequently,
    AB = (Σ_{i=1}^{m} a_i Π_m^{i-1})(Σ_{j=1}^{m} b_j Π_m^{j-1}) = Σ_{i=1}^{m} Σ_{j=1}^{m} a_i b_j Π_m^{i+j-2}
       = (Σ_{j=1}^{m} b_j Π_m^{j-1})(Σ_{i=1}^{m} a_i Π_m^{i-1}) = BA,

and so the proof is complete.    □
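A quick numerical check of Theorem 7.58 (our own sketch, with arbitrary example entries; numpy assumed): two circulant matrices of the same order commute, and by Theorem 7.57(e) their product is again circulant.

```python
import numpy as np

def circ(first_row):
    # Build a circulant matrix from its first row, as in (7.26).
    a = np.asarray(first_row, dtype=float)
    return np.array([np.roll(a, i) for i in range(len(a))])

A = circ([2.0, 0.5, 1.0, 3.0])
B = circ([1.0, 4.0, 0.0, 2.0])

print(np.allclose(A @ B, B @ A))             # True: AB = BA (Theorem 7.58)
print(np.allclose(A @ B, circ((A @ B)[0])))  # True: AB is circulant (7.57(e))
```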
All circulant matrices are diagonalizable. We will show this by determining the eigenvalues and the eigenvectors of a circulant matrix. But first let us find the eigenvalues and eigenvectors of the special circulant matrix Π_m.

Theorem 7.59. Let λ_1, ..., λ_m be the m solutions to the polynomial equation λ^m − 1 = 0; that is, λ_j = θ^{j-1}, where θ = exp(2πi/m) = cos(2π/m) + i sin(2π/m) and i = √−1. Define Λ to be the diagonal matrix diag(1, θ, ..., θ^{m-1}) and let

    F = (1/√m) ×
        1  1          1            ...  1
        1  θ          θ^2          ...  θ^{m-1}
        1  θ^2        θ^4          ...  θ^{2(m-1)}
        .  .          .                 .
        1  θ^{m-1}    θ^{2(m-1)}   ...  θ^{(m-1)(m-1)}
Then the diagonalization of Π_m is given by Π_m = FΛF*, where F* is the
conjugate transpose of F; that is, the diagonal elements of Λ are the eigenvalues of Π_m, while the columns of F are corresponding eigenvectors.
Proof
The eigenvalueeigenvector equation, IImx = Ax, yields the equa
tions
for j = I, ... , m  I, and
After repeated substitution, we obtain for any j, Xj = Am Xj. Thus, Am = 1, and so the eigenvalues of IIm are I, (), ... ,()m  I. Substituting the eigenvalue ()j  I and I 2 / into the equations above, we find that an eigenvector corresponding XI = mto the eigenvalue ()jI is given by x = m I/ 2(1,()jI, ... ,()(mllUI»)'. Thus, we have shown that the diagonal elements of A are the eigenvalues of IIm and the columns of F are corresponding eigenvectors. The remainder of the proof, I which simply involves the verification that F = F*, is left to the reader as an ex.ercise. D The matrix. F given in Theorem 7.59 is sometimes referred to as the Fourier matrix. of order m. The diagonalization of an arbitrary circulant matrix., which follows directly from Theorems 7.56 and 7.59, is given in our nex.t theorem.
Theorem 7.60.
Let A be the m x m circulant matrix. circ(al,' .. , am). Then A = FM'*,
where ~ = diag(oJ, ... ,Dill), OJ = al + a2A) + ... + amA'J'I, and Aj and F are defined as in Theorem 7.59.
. * Since IIm = FA.F* and FF* = I"" we have II{" = FNF , for
Proof j = 2, ... , m  1, and so by using Theorem 7.56, we find that
,
A = allm + a2II m + a3II~ + ... + amII: I 1 m = aIFF* + a2FAI F* + a3FA2F* + ... + amFA  F* = F(all m + a2AI + a3A2 + ... + aIllAmI)F* = FM'*
D
The class of circulant matrices is a subclass of a larger class of matrices known as Toeplitz matrices. The elements of an mX m Toeplitz matrix. A satisfy au = aj  i for scalars am + J, a,n+ 2, ... ,am _ I; that is, A has the fOlln
.
305
HADAMARD AND VANDERMONDE MATRICES
al ao a_I
ao a_I a 2
A=
al ao
•
am :2
a", _ I
• •
•
am  3 a m 4
am  '2 am  3
• •
• • •
• • •
•
• •
• • •
•
•
a2
•
• •
•
a m +2 a m +3 a_ m +4 a m + I a m +2 a m +3 •
•
•
•
ao
al
•
•
•
a_I
ao
z lit ep To ic etr m m sy a is A x tri ma the en th 1, m , ... If aj = a_j fo r j = 1, t tha e on is x tri ma z lit ep To ic etr m m sy e pl sim matrix. On e im po rta nt an d fairly ha s aj = a_j = 0 for j = 2, ... , m  1, so th at
A=
ao al
al
0
ao
al
•
• •
0
al
ao
•
• •
•
•
•
•
•
•
0 0
0 0
0 0
•
•
• • •
•
0 0 0
0 0 0
• •
• •
•
•
al ao
•
• •
ao
•
• •
al
(7.29)
the r fo as \ul lll fO d an es lu va en eig r fo as ul rm fo as ch su , lts So m e sp ec ial ize d resu d an r de an en Gr in d un fo be n ca x, tri ma z lit ep To a of rse co m pu tat io n of the in ve Sz eg o (1 98 4) an d Heinig and Ro st (1984).
11. HADAMARD AND VANDERMONDE MATRICES s ea ar the in ns tio ica pl ap ve ha at th s ice atr m e m In this section, we discuss so a th wi n gi be e W y. og ol od th me ce rfa su se on sp re of de sig n of ex pe rim en ts an d be to id sa is H x tri ma m X m An s. ce tri ma d ar m da class of matrices kn ow n as Ha , nd co se d an I, or I + r he eit is H of t en m ele ch a Ha da m ar d m atr ix if first, ea H satisfies
H ' H = H It = mIm ;
(7.30)
III for ws ro e th d an rs, cto ve of t se al on og th or an I1I f01 H that is, the co lu m ns of by n ve gi is ix atr m d ar m da Ha 2 x 2 a e, nc sta in r Fo an or th og on al se t as well.
H=
1
1
1 1
while a 4 x 4 Ha da m ar d matrix is gi ve n by
,
306
SPECIAL MAIRlCES AND MATRIX OPERATORS
H=
1
1
1
1
1
1
1
1
1 I I 1
1 1 1 1
Some of the basic properties of Hadamard matrices are given in the following theorem.
Theorem 7.61.
Let Hm denote any m x m Hadamard matrix. Then
(a) m I/ 2 H m is an m X m orthogonal matrix, (b) IHml = ±mm/2, (c) Hm ® Hn is an mn x mn Hadamard matrix.
Proof
(a) follows directly from (7.30). Also using (7.30), we find that
But
and so (b) follows. To prove (c), note that each element of Hm ® Hn is +1 or  1 since each element is the product of an element from Hm and an element from H II , and
Hadamard matrices which have all of the elements of the first row equal to +1 are called normalized Hadamard matrices. Our next result addresses the existence of nOllnaIized Hadamard matrices.
Theorem 7.62. an
1/1
x
1/1
If there exists an m x m Hadamard matrix, then there exists nOli nal ized Hadamard matrix. •
Proof
Suppose that H is an m x m Hadamard matrix. Let D be the diagonal matrix with the elements of the first row ofB as its diagonal elements; that is, D = diag(h ll , ... , hIm). Note that D2 = 1m since each diagonal element of D is + 1 or 1. Consider the m x m matrix H * = HD. Each column of H* is the corresponding column of H multiplied by either +1 or 1, so clearly each element of H * is +1 or 1. The jth element in the first row of H * is h7j = I, so H * has all of its elements of the first row equal to + 1. In
•
HADAMARD AND V ANDERMONDE MATRICES
307
addition,
H;H * = (HD)' HD Thus, H * is an m plete.
X m
= D' H' HD = D(mIm)D = mIY = mIm
nOllualized Hadamard matrix and so the proof is comrl
Hadamard matrices of size m x m do not exist for every choice of m. We have already given an example of a 2 x 2 Hadamard matrix, and this matrix can be used repeatedly in Theorem 7.61 (c) to obtain a 2" x 2n Hadamard matrix for any n ~ 2. However, m x m Hadamard matrices do exist for some values of m J 2n. Our next result gives a necessary condition on the order m so that Hadamard matrices of order m exist.
Theorem 7.63. If H is an m x m Hadamard matrix, where m > 2, then m is a multiple of 4 .
.Proof .The result can be proven by using the fact that any three rows of
•
H are orthogonal to one another. Consequently, we will refer to the first three rows of H, and, due to Theorem 7.62, we may assume that H is a nOllualized Hadamard matrix, so that all of the elements in the first row are+l. Since the second and third rows are orthogonal to the first row, they must each have r + I s and r I s, where r = n/2; thus clearly, n = 2r,
(7.31 )
or in other words, n is a multiple of 2. Let n+_ be the number of columns in which row 2 has a +1 and row 3 has a 1. Similarly, define n_+, n++, and n __ . Note that the value of anyone of these ns detenuines the others since n++ + n+_ = r, n++ + n_+ = r, and n __ + n+_ = r. For instance, if n++ = s, then n+_ = (r  s), n_+ = (r  s), and n __ = s. But the orthogonality of rows 2 and 3 guarantee that n++ + n __ = n_+ + n+_, which yields the relationship 2s=2(rs)
Thus, r = 2s, and so using (7.31) we get n = 4s, which completes the proof.
o Some additional results on Hadamard matrices can be found in Hedayat and Wallis (1978) and Agaian (1985). An m x m matrix A is said to be a Vandelluonde matrix if it has the form
SPECIAL MATRICES AN D MATRIX OPERATORS
308
I
I
•
• •
a2, ai
a3, a3
•
•
• • •
•
a2'  I
/n
I al, A=
aj • •
•
tTl 
al
I
•
• • •
•
am 2 am
(7.32)
• • •
• •
a,
I
1
• •
•
m I
am
en th 10 7. on cti Se in d se us sc di ix atr m ier ur Fo m x For instance, if F is the m al fin r Ou . ,m ... I, = i r fo I, j () = aj th wi ix atr m e nd l1o el1 A = /Ill /2 F is a Vand e nd no eu nd Va a of t an in ln tel de the r fo n sio es pr ex an result of this ch ap ter gives matrix. ). .31 (7 in n ve gi ix atr m de on Ill er nd Va m x m the be Theorem 7.64. Let A Then its deterIllinant is given by (7.33)
IAI = l:S i<j <m
Pr oo f
at th d fin we 2, = m r Fo . on cti du in by is f oo pr r Ou I
I
r fo s ld ho ) .33 (7 at th e m su as we xt Ne 2. x 2 is A en and so (7.33) holds wh r fo ld ho o als t us m it en th at th ow sh d an I m r de or of VanderIllonde matrices g in let de by A m fro d ne tai ob ix atr m I) (m x 1) (m e or de r m. Th us , if B is th r de or of ix atr m de on lfl er nd Va a is B ce sin , en th n, m lu its last row an d first co m  1, we m us t have
IBI = Define the m x m matrix
•
309 PROBLEMS
rte e d a r fo la u n ll fO n io s n a p x e r to c fa o c e th g in s u ly d te a e and note that by rep y il s a e is it t u B I. A C I = I IA , s u h T . 1 = ! C I t a th d n fi e w , minant o n the first row verified that CA = E, where E= •
I
o
,
, y tl n e u q e s n o C . ,» a m (a , . .. , ,) a 3 (a , d a 2 a « g ia d and D =
= I D II IB = I D IB = I lE IAI =

( aJ'  aI·) , '! .i < j! .m
•
la lU m fo n io s n a p x e r to c fa o c e th g in s u y b d e in ta b o s a w ty where the second equali u f. o ro p e th s te le p m o c is h T . o n the first c o lu m n o f E
PROBLEMS ix tr a m m 2 X m 2 e th r e id s n 1. Co
A=
•
bIm dIm '
. rs la a c s rO e z n o n re a d d n where a, b, c, a . A f o t n a in il ll te e d e th r fo n (a) G iv e an expressio r? la u g in s n o n e b A l il w d d n a , c , b , a f o s e lu a v t a h w r o F (b) '. A r fo n io s s re p x e n a d in F ) (c
2. L e t A b e o f the fOllll
A=
A '2 (0)
•
and A21 are nonsinA~I d n a , 2 A I" A 1 f o s n B te in A f o e rs e v in e th r fo n io s s re p x e gular. Find an ). .S 7 H .2 (7 s n o ti a u q e g in z by utili
12 A s e ic tr a m e th d n a m X where e a c h submatrix is m
•
310
SPECIAL MATRICES AND MATRIX OPERATORS
3. Generalize Example 7.2 by obtaining the detelluinant, conditions for non
singularity, and the inverse of the 2m x 2m matrix ,
A=
where a, b, c, and d are nonzero scalars. 4. Let the matrix G be given by ABC (0) D E , (0) (0) F
G=
where each of the matrices A, D, and F is square and nonsingular. Find the inverse of G. 5. Use Theorems 7.1 and 7.4 to find the detelluinant and inverse of the matrix
A=
40012 03012 0 0 2 2 3 o 0 123 1 I 0 1 2
•
6. Let A be an m x n matrix partitioned as
•
A=
where All is r x rand rank(A) = rank(AI I) = r. (a) Show that A22 = A2IAiiAI2. (b) Use the result of part (a) to show that . •
B=
AI II
(0)
(0)
(0)
is a generalized inverse of A. (c) Show that the MoorePenrose inverse of A is given by
311
PROBLEMS
C[A~ I
7. Use Theorem 7.4 to show that if
is nonsingular and All is nonsingular, then A22 A 2I AjlA I2 is nonsingular. 8. Let A be an mX m positive definite matrix and let B be its inverse. Partition A and B as
A=
B=
where All and BII are r x r matrices. Show that the matrix •
All  Bjl A;2
AI2 A22
is positive semidefinite with rank of m  r. 9. Consider the m x m matrix
a
where the (m  1) x (m  I) matrix All is positive definite. (a) Prove that IAI ~ ammlAlI1 with equality if and only if a = O. (b) Use part (a) to obtain an alternative proof of Theorem 7.23; that is, generalize the result of part (a) by proving that if all, ... ,amm are the diagonal elements of a positive definite matrix A, then IA I ~ a II ... a""11 with equality if and only if A is a diagonal matrix. 10. Let A be an m x m matrix and define Ai to be the i x i matrix obtained by deleting the last m  i rows and columns of A. The leading principal minors of A are given by the detellninants, IAII, ... , IAml, where Am = A.
312
SPECIAL MATRICES AND MATRIX OPERATORS
Show that if A is a symmetric matrix, then it is positive definite if and only if all of its leading principal minors are positive.
11. Let the 2 x 2 matrices A and B be given by
A=
2
3
I
2
'
5 3
B=
3 2
(a) Compute A® Band B ® A. (b) Find tr(A ® B). (c) Compute IA ® BI. (d) Give the eigenvalues of A ® B. (e) Find (A ® B) I .
12. Give a simplified expression for I", ® I". 13. Prove the properties given in Theorem 7.6. 14. Prove results (b) and (c) of Theorem 7.9.
•
•
15. Show that if A and B are symmetric matrices, then A ® B is also symmetric. 16. Find the rank of A ® B, where
2 6 A=
1 4 3
I
,
B=
524 2 I 1 102
17. For matrices A and B of any size, show that A ® B = (0) if and only if A = (0) or B = (0). 18. Let Xi be an eigenvector of the m x m matrix A corresponding to the eigenvalue Ai. Let Yj be an eigenvector of the p x p matrix B corresponding to the eigenvalue () j.
(a) Show that Xi ® Yj is an eigenvector of A ® B. (b) Give an example of matrices A and B such that A ® B has an eigenvector that is not the Kronecker product of an eigenvector of A and an eigenvector of B. 19. Show that if A and B are positive definite matrices, then A ® B is also positive definite.
,.
313
PROBLEMS
ee thr the at th y rif Ve r. cto ve I x n an be y d an r cto ve 1 x 20. Let x be an m matrices xy ', y' ® x, and x ® y' are identical. y wa otw e th r fo y) '(y y) (y = E SS rs ro er d re ua sq of m 21. Compute the su . 7.5 ple .am Ex in d se us sc di n tio ac ter in th wi el od classification m o
by n ve gi n tio ac ter in t ou th wi el od m n tio ca ifi ss cla y wa o22. Consider the tw Yijk
= P. + Ti + 'Yj +
€ij k.
n. , ... 1, = k d an b, , ... 1, = j a, , ... 1, = i e er wh e us d an ', I>) ,'Y ... , 'YI , T" , ... , TI , (p. = Il r fo n tio lu so s re ua sq (a) Fi nd a least rs ro er d re ua sq of m su e th d an es lu va ted fit of r cto ve e th n this to obtai for this model. k = 11 + Y'j el od m d ce du re e th r fo rs ro er d re ua sq of m su e (b) Compute th t tha ow sh to ) Ca in ted pu m co E SS e th th wi ng alo is th e us 'Yj + Eij k an d th e su m of squares for factor A is •
a
SS A = nb

 . y).. "( y;
;= I
is B r cto fa r fo s re ua sq of m su e th at th ow sh n, io (c) In a sim ila r fash •
b
'
c.
•
> •
( Y'j
SSB = na
• • •
,
2 )  y ..
j=1
11, of ns tio nc fu le ab tim es t en nd pe de in rly ea lin y an m (d) Find a se t of as T; , an d 'Yj as possible. d re ua sq of m SU the d an (a) in ted pu m co rs ro er d re ua sq of (e) Use the sum for s re ua sq of m su e th at th ow sh to 21 lem ob Pr in d errors co m pu te by n ve gi is 21 lem ob Pr of el od m e th in n tio ac ter in ,. a
b
(Yij  Y;. _ Y'j + y.. )2
SSAB = n ;= I
j= I
23. Prove Theorem 7.13. of es siz the en wh t, tha ow Sh s. ice atr m re ua sq 24. Le t AI. A 2 , A 3 , an d A4 be , ed fin de e ar s on ati er op te ria op pr ap e th at th these matrices are such (a) (A I E9 A2) + (A3 E9 A4) = (A I + A3) E9 (A2 + A4),
314
SPECIAL MATRICES AND MATRIX OPERATORS
(b) (AI ffi A 2 )(A3 ffi A4) = AIA3 ffi A2A.t,
(C) (A I ffi A2) ® A3
=(AI ® A3) EB (A 2 ® A3)'
25. Give an example to show that, in general,
26. Complete the details of Example 7.7; that is, use Theorem 6.4 and Theorem 7.16 to prove Theorem 6.5. 27. Prove the results of Corollary 7.17.1. 28. Let px (a) (b)
A and B be m x nand n x p matrices, respectively, while e and d are 1 and n x 1 vectors. Show that ABc =(e' ® A) vec(B) =(A ® e') vec(B'), d' Be = (e' ® d') vec(B).
29. For any matrix A and any vector b, show that vec(A ® b) = vec(A) ® b 30. Let A be an m x m matrix, B be an n x n matrix, and C be an m x n matrix. Prove that •
vec(AC + CB) = {(In ® A) + (B' ® 1m )}vec( C)
31. If e,. is the ith column of the identity matrix 1m, verify that m
vec(lm) =
(ei ® ei) i= I
32. Prove property (h) of Theorem 7.18. 33. Let the 2 X 2 matrices A and B be given by •
1 2 A= 2 4 '
B=
4
1
I 3
(a) Compute A G B. (b) Which of the matrices, A, B, and AGB, are positive definite or positive semidefinite? How does this relate to Theorem 7.227
315
PROBLEMS
34. Give an example of matrices A and B such that neither is nonnegative def
inite, yet A 0 B is positive definite. 35. Let A, B, and C be m x n matrices. Show that
tr{(A' 0 B')C} = tr{A'(B 0 C)} 36. Suppose that the m x m matrix A is diagonalizable; that is, there exist a nonsingular matrix X and a diagonal matrix A = diag(AI, ... ,Am) such that I A = XAX . Show that if we define the vector of diagonal elements of A, a = (all"~" amm )' and the vector of eigenvalues of A, A. = (A I, ... , Am)', then
and
37. Let A and B be m x m nonnegative definite matrices. Show that (a) IA 0 BI :?! IAIIBI, (b) IA 0 A II:?! 1, if A is positive definite. 38. For each of the following pairs of 2 x 2 matrices, compute the smaller eigenvalue A2(A0B) and the lower bounds for this eigenvalue given by Theorem 7.26 and Theorem 7.28. Which bound is closer to the actual value?
(a) A =
(b) A =
4 0
o
I
I
o
o
I
B=
'
'
B=
2 0
o
3
2
.J2
.J2
3
.
39. Let A be an m x m positive definite matrix. Use Theorem 7.24 to show that, if B = A I then all hll :?! I. Show how this generalizes to (Ii; hii > I for i = I, ... , m.
40. Let A and B be mX m positive definite matrices and consider the inequality III
IA 0 BI + IAIIBI :?! IAI
III
h;; ;=1
au
+ IBI ;=1
316
SPECIAL MATRICES AND MATRIX OPERATORS
(a) Show that this inequality is equivalent to
where RA and R8 represent the correlation matrices computed from A and B. (b) Use Theorem 7.25 on IRA 0 CI. where C: R8  (e~R8Ie,)'e,e~, to establish the inequality given in (a). 41. Suppose that A and Bare m x m positive definite matrices. Show that A 0 R c AB if and only if both A and B are diagonal matrices.
42. Let A be an m x m positive definite matrix and B be an m x m positive semidefinite matrix with exactly r positive diagonal elements. Show that rank(A 0 B) = r.
43. Show that if A and B are singular 2x 2 matrices then AOB is also singular. 44. Let R be an m x m positive definite correlation matrix having A as its smallest eigenvalue. Show that if T is the smallest eigenvalue of ROR and R of. 1m, then T > A.
45. Consider the matrix 1/1
'lrl/l = L
e;,m(e;,m ® e;,m)',
;= 1
which we have seen satisfies 'lr meA ® B)'lr:n : A 0 B for any m x m matrices A and B. Define w(A) to be the mX 1 vector containing the diagonal elements of A; that is, w(A) : (all, . .. , a mm )'. Also let Am be the m 2x m 2 matrix given by m
m
Am: L(Eii®Eii )= L(e;,me;,m®e;,me;,m) ;01
;=1
Show that (a) 'lr~w(A): vec(A) for every diagonal matrix A, (b) 'lrmvec(A): w(A) for every matrix A, (c) 'lr m'lr~ : 1m so that 'lr~ : 'lr~, (d) 'It;" 'It m = Am,
31 7
pROBLEMS
(e ) AmNm = NmAm = Am, . A) w( B) 0 '(B )} (A {w = ) c(A ve Am B) ® (B Am (0 {vec(A)}' . 8) 98 (1 s nu ag M in d un fo be n ca n 'I'l of s tie er op pr l Ad di tio na is, at th x; tri ma n io tat lu lll pe a is n Km x tri ma n io at ut m 46. Verify th at th e co m , I",, of n m lu co ch ea d an , I"" of n m lu co a is n Km of sh ow th at ea ch co lu m n is a co lu m n of Kmn. d an n K s ice atr m n io at ut m m co e th t ou e rit W . 47
K2 4.
rco at th ow Sh 3. 7.3 m re eo Th in n ve gi re we m 48 . Th e eig en va lu es of Km . e/ ® e/ rm fo the of rs cto ve e th by n ve gi e re sp on di ng eig en ve cto rs ar (el ® ek) + (ek ® el), an d (el ® ek)  (ek ® el ). as d se es pr ex be n ca ", K x tri ma n io at ut n m m co e th at 49 . Sh ow th m
i =1
is A if at th ow sh to is th e Us . 1m of n m lu co ith e th is ej e wh er m x l, Y is an ar bi tra ry vector, th en
II X III, X
is
K~II(x ® A ® y' ) = A ® xy '
o er nz no e th be Ar , ... I, A let d an r nk ra th wi ix atr m 50. Le t A be an m x n eig en va lu es of A'A. If we de fin e •
, ,
•
P = Kmn(A' ® A) ,
· ,·"·. ,
f
i,
,.
sh ow th at (a) P is sy m m etr ic, (b ) rank(P) = r2, (c) tr( P) = tr(A'A), (d ) p2 = (A A' ) ® (A ' A) , . <j i all r fo 2 )1/ A} A, ±( d an ,A ,." AI e ar P of es r lu va en eig (e ) th e no nz er o
51. Prove the results of Theorem 7.35. en th s, ce tri ma m X m e ar B d an A if at th ow Sh . 52
SP EC IA L MATRICES AND MATRIX OPERATORS
318
) A ® B + B ® (A Nm = Nm A) ® B B+ ® (A = Nm Nm(A ® B + B ®A) = 2Nm(A ® B) Nm
53. Write out the matrices N2 and N3. e th be to j ui r cto ve 1 x /2 1) + (m m e th e fin de i, , ... 1, 54. Fo r i = 1, ... , m, j = s ro ze d an n io sit po h }t /2 1) j j( i + m 1) j vector with one in its {( of ns m lu co e th e ar rs cto ve e es th at th ied rif ve y elsewhere. It can be easil the identity matrix of or de r m (m + 1) /2 ; that is,
•
•
e th in e on a is t en m ele o er nz no ly on e os wh ix atr m m x Let Eij be the m (i, j)t h position, and define E' j + Eji, E a,
if i J.j, if i = j
at th y rif ve is, at th ;j; }u ) (T ec {v j i~ L. ij = Dm at th ow Sh
.> . ,)
•
where A is an arbitrary m x m symmetric matrix. 55. Prove the results of Th eo re m 7.40. 56. If A is an m x m matrix show that (a) Dm D;;' (A ® A)Dm = (A ® A) Dm, er. eg int e iv sit po y an is i e er wh , )D Ai ® " (A' ;' D: m (b) {D~, (A ® A)Dm}i = at th ow sh , 54 lem ob Pr in as ed fin de e ar j Ei 57. If uij and L.'~j{vec(Eij)}U;j; that is, verify that .
{vec(Eij )}u ;j v(A) = vec(A),
. ix atr m lar gu an tri r we lo m x m ry tra bi ar an is A where 58. Prove Theorem 7.41.
L~

319
PROBLEMS
59. For i = 2, ... , m, j = 1, ... , iI, define the m(m  1)/2 x 1 vector Uij to be
the vector with one in its {(j  l)m + i  j(j + 1)/2}th position and zeros elsewhere. It can be easily verified that these vectors are the columns of the identity matrix of order m(m  1)/2; that is,
Show that L~ = Li>j{vec(Eij)}u;j; that is, verify that {vec(Eij) }u;j v(A) = vec(A), i>j
where A is an arbitrary m x m strictly lower triangular matrix. 60. Prove the results of Theorem 7.42. 61. Find a 2x 2 nonnegative matrix A which has its spectral radius equal to 1, yet Ak does not converge to anything as k 700. 62. Show that if A is a nonnegative matrix and, for some positive integer k, Ak is a positive matrix, then p(A) > O.
•
63. It can be shown [see, for example, Horn and Johnson (1985)] that if A is an m x m nonnegative matrix, then p(A) is an eigenvalue of A and there exists a nonnegative eigenvector x corresponding to the eigenvalue p(A). This result is weaker than the result for irreducible nonnegative matrices. For each of the following, find a 2x 2 nonnull reducible matrix A such that the stated condition holds. (a) p(A) = O. (b) x is not positive for any x satisfying Ax = p(A)x. (c) p(A) is a multiple eigenvalue. 64. Verify that the absolute value of each of the eigenvalues of the 2 x 2 irreducible matrix
A=
0 1 1 0
is equal to p(A) . •
65. Let A be an m x m irreducible nonnegative matrix. (a) Show that p(Irn + A) = 1 + p(A).
320
SPECIAL MATRICES AND MATRIX OPERATORS
(b) Show that if Ak > (0) for some positive integer k, then peA) is a simple eigenvalue of A. (c) Apply part (b) on the matrix (1", + A) to prove Theorem 7.52; that is, prove that for any irreducible nonnegative matrix A, peA) must be a simple eigenvalue. 66. Consider the homogeneous Markov chain that has three states and the matrix of transition probabilities given by 0.50 0.50
p=
°
0.25 0.50 0.25
° 0.25 0.75
(a) Show that P is primitive. (b) Determine the equilibrium distribution; that is, find . I
Im/>~p
(I)
'iT
such that
= 'iT.
67. Let A be the m x m circulant matrix circ(al, ... , am). (a) Find the trace of A. (b) Find the determinant of A. 68. Show that the conjugate transpose of the matrix F given in Theorem 7.59 •
IS
F*=
1
..;m
1 1 1 •
1
1
• • •
1
0 1
0 2
• • •
o(mI)
0 2
0 4
• • •
02(ml)
•
•
• •
• •
• •
1
o(mI)
02(ml)
•
• • •
o(mI)(mI)
• •
Then use the geometric series partial sum forilluia
to prove that F 1 = F*. 69. Let F be defined as in Theorem 7.59 and let Show that
F2 = r, 4 (b) F =Im , (a)
(e) F3 = F*.
r •
321
PROBLEMS
at th ow Sh 0. 7.1 on cti Se in ed fin de ix atr m t lan cu cir 70. Let nm be the n I' (a) nm I = in m
(b) (c)
n:;: =1m , n;::nH = n:;., for any integers na nd r.
of es lu va en eig e th d fin ), bin , ... , b c( cir = B d l 71. If A = cir c( al , ... , am). an A + B an d AB. A ix atr m t lan cu cir the of es lu va en eig e th d 72. Us e Th eo re m 7. 60 to fin circ(1, ... , 1).
se ro en P re oo M its en th , ix atr m t lan cu cir lar gu sin a is A 73. Sh ow that if inverse, A+, is also a circulant matrix. t no are B d an A at th ch su r de or e m sa e th of B d an 74. Find square matrices A x. tri ma t lan cu cir a is AB t uc od pr eir th t ye s ice atr circulant m x tri ma III x III an at th ow Sh . O) m( J ix atr m k oc bl 75. Let B be the m x m Jordan llll fO e th in en itt wr be n ca it if ly on d an if ix atr m z lit A is a Toep m I
L
A = aolm +
(a }B ) + a_ }B '})
}= I
76. Co ns id er the m x m Toeplitz matrix
A=
a
2
1 a 2 a
b 1 a
b b 1
• • •
• • •
•
m I
a
• • • •
• •
•
•
•
•
•
a
m 3
,
• •
•
1112
bill  I bm 2 bln  3
• • •
1
by n ve gi is A of rse ve in e th at th on ati lic tip ul where ab,j. 1. Verify by m b e (ab + l)e a e
0 b e (a b+ l)e
• • •
• • •
• •
0 0
0
0
•
• •
0
0
•
•
e a e I
A = •
0
• • • • • •
• • •
0 0 0
0 0 0
•
• • •
•
•
•
•
(ab + l)e Q C
I. = ab if lar gu sin is A at th ow Sh . )I ab (1 = e where
b e C
,
SP EC IA L MA TR IC ES AN D MA TR IX OP ER AT OR S
322
77. Suppose that z" . .. mean 0 and variance ith co m po ne nt
are independent ra nd om variables each having its as s ha at th or ct ve om nd ra 1 x m e th be x t Le
,Zm + I
1.
Xj
= Zj + I
 PZ j,
z Iit ep To a is x of ix atr m e nc ria va co e th at th ow Sh t. where p is a constan . al d an ao of es lu va e th d fin d an ), .29 (7 in n matrix of the form give 78. Find a Hadamard matrix of or de r 8. of ce en ist ex e th g tin tra us ill y eb er th , 12 r de or of ix atr 79. Give a Hadamard m n n. r ge te in e iv sit po y an r fo 2 ,j. m e er wh m, r de or of a Hadamard matrix d un bo r pe up e th s ain att ix atr m d ar m da Ha a of t an in ln tel 80. Show that the de of the Hadamard inequality given in Corollary 7.23.1. 81. Let A, B, e, and D be m X + 1 and 1 , and define H as
H=
m matrices with all of their elements equal to
D
A
C D
C
D
A
B
D
C
B
A
A
B
B
e
Show that if
M ' + BE' + ec ' + DD' = 4m1m and
X y' = YX ' is H en th D, d an C, B, A, m fro en os ch Y, d an for every pair of matrices X a Hadamard matrix of or de r 4m. d an if lar gu in ns no is ) .32 (7 in n ve gi A ix atr m de on ill er nd 82. Sh ow that the Va only if the m elements of the second row are distinct. •
e er th if at th e ov Pr ). .32 (7 in n ve gi ix atr m e nd llo en nd Va m 83. Let A be the m X r. = ) (A nk ra en th }, am '" l,. {a t se e th in es lu va t nc are r disti is A if at th ow Sh d. ,e ... t. _ em , (em ix atr m al on og 84. Let P be the m X m orth s. ice atr m z lit ep To e ar 'P M d an ' M P en th , ix atr m an m x m Vandell110nde
CHAPTER EIGHT
Matrix Derivatives and Related Topics
1. INTRODUCTION Differential calculus has widespread applications in statistics. For example, estimation procedures such as the maximum likelihood method and the method of least squares utilize the optimization properties of derivatives, whereas the socalled delta method for obtaining the asymptotic distribution of a function of random variables uses the first derivative to obtain a firstorder Taylor series approximation. These and other applications of differential calculus often involve vectors or matrices. In this chapter, we obtain some of the most commonly encountered matrix derivatives.
2. MULTIVARIABLE DIFFERENTIAL CALCULUS We will begin with a brief review of some of the basic notation, concepts, and results of elementary and multivariable differential calculus. Throughout this section, we will assume differentiability or multiple differentiability of the functions we discuss. For more details on the conditions for differentiability see Magnus and Neudecker (1988). Iff is a realvalued function of one variable, x, then its derivative at x, if it exists, is given by f(l)(x) =J'(x) = d f(x) = lim f(x + u)  f(x) dx u>O U •
Equivalently,J '(x) is the quantity that gives the firstorder Taylor forillula for f(x + u). In other words, (
f(x + u) =f(x) + uf'(x) + rl(u,x),
(8.1 )
where the remainder rl(u,x) is a function of u and x satisfying
323
MATRIX DERIVATIVES AND RELATED TOPICS
324 lim
u>o
,
Th e qu an tit y
,
,"
•
(8 .2)
dl l/( x) = uf ' (x)
is, Th u. t en em cr in th wi x at I of ial nt re ffe di st fir e th d ap pe ar in g in (8.1) is ca lle is, at th u, of e ac pl in dx e us ll wi we r te La x. of ial nt re in cr em en t .U is th e di ffe ial nt re ffe di e th is u at th ct fa e th ize as ph em to , u) x+ I( write I( x + dx) in ste ad of in n ve gi ial nt re ffe di e th te no de ten of ll wi we , ce ien of x. Fo r no tat io na l co nv en er gh hi g in tak by d ne tai ob be n ca .1) (8 of s on ati liz (8.2) sim pl y by d/. Ge ne ra as ed fin de x at I of ve ati riv de ith e th th wi is, at th s; ve ati or de re d de riv 

1) (x + u)  I (i  1) (x ) I=(i_ 'c '_ ' " _ _'' ,
•
d'
dxi I( x) =
lim
11>0
u
we have the kt h or de r Ta yl or forilluia k
I( x + u) = /(x ) +
L
.I
I .
i=1
k
=/ (X ) +
d~/(x)
L
.I
i=1
.
.
.
'.
••
+ rk(u, x) ,
, •.
I .
wh er e rk (u ,x) is a fu nc tio n of u an d x sa tis fy in g
rk (u ,x )
lim  ,k  = 0,
11 )0
U
and

d^i f = d_u^i f(x) = u^i f^(i)(x)

is the ith differential of f at x with increment u.
The chain rule is a useful formula for calculating the derivative of a composite function. If y, g, and f are functions such that y(x) = g(f(x)), then

y'(x) = g'(f(x)) f'(x).          (8.3)
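Formulas (8.1)-(8.3) are easy to check numerically. The following sketch uses f(x) = e^x, an example of my own choosing (not from the text) whose derivatives are known exactly, to verify that the kth-order Taylor remainder r_k(u, x) vanishes faster than u^k:

```python
import math

# Assumed example: f(x) = exp(x), so f^(i)(x) = exp(x) for every i and the
# kth-order Taylor remainder r_k(u, x) can be computed exactly.
def remainder(x, u, k):
    # r_k(u, x) = f(x + u) - sum_{i=0}^k (u^i / i!) f^(i)(x)
    taylor = sum(u ** i / math.factorial(i) * math.exp(x) for i in range(k + 1))
    return math.exp(x + u) - taylor

x, k = 0.5, 3
ratio_large = abs(remainder(x, 1e-1, k)) / (1e-1) ** k
ratio_small = abs(remainder(x, 1e-3, k)) / (1e-3) ** k   # should be much smaller
```

Shrinking u by a factor of 100 shrinks r_k(u, x)/u^k by roughly the same factor, consistent with lim_{u→0} r_k(u, x)/u^k = 0.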
If f is a real-valued function of the n × 1 vector x = (x_1, ..., x_n)', then its derivative at x, if it exists, is given by the 1 × n row vector

(∂/∂x') f(x) = [(∂/∂x_1) f(x), ..., (∂/∂x_n) f(x)],

where

(∂/∂x_i) f(x) = lim_{u_i→0} [f(x + u_i e_i) - f(x)]/u_i

is the partial derivative of f with respect to x_i, and e_i is the ith column of I_n. The first-order Taylor formula analogous to (8.1) is given by

f(x + u) = f(x) + [(∂/∂x') f(x)] u + r_1(u, x),          (8.4)

where the remainder, r_1(u, x), satisfies

lim_{u→0} r_1(u, x)/(u'u)^{1/2} = 0.
The second term on the right-hand side of (8.4) is the first differential of f at x with incremental vector u; that is,

df = d_u f(x) = [(∂/∂x') f(x)] u = Σ_{i=1}^n u_i (∂/∂x_i) f(x).
It is important to note the relationship between the first differential and the first derivative; the first differential of f at x in u is the first derivative of f at x times u. The higher-order differentials of f at x in the vector u are given by

d^i f = d_u^i f(x) = Σ_{j_1=1}^n ··· Σ_{j_i=1}^n u_{j_1} ··· u_{j_i} [∂^i/(∂x_{j_1} ··· ∂x_{j_i})] f(x),

and these appear in the kth-order Taylor formula

f(x + u) = f(x) + Σ_{i=1}^k (1/i!) d^i f + r_k(u, x),
where the remainder r_k(u, x) satisfies

lim_{u→0} r_k(u, x)/(u'u)^{k/2} = 0.
The second differential, d^2 f, can be written as a quadratic form in the vector u; that is,

d^2 f = u'H_f u,

where H_f, called the Hessian matrix, is the n × n matrix of second-order partial derivatives given by

H_f = [ ∂^2 f(x)/∂x_1^2         ∂^2 f(x)/(∂x_1 ∂x_2)   ···   ∂^2 f(x)/(∂x_1 ∂x_n)
        ∂^2 f(x)/(∂x_2 ∂x_1)    ∂^2 f(x)/∂x_2^2        ···   ∂^2 f(x)/(∂x_2 ∂x_n)
        ···                     ···                     ···   ···
        ∂^2 f(x)/(∂x_n ∂x_1)    ∂^2 f(x)/(∂x_n ∂x_2)   ···   ∂^2 f(x)/∂x_n^2 ].
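A small sketch of the quadratic-form identity d^2 f = u'H_f u, using an assumed example f(x) = x'Ax (for which H_f = 2A, so the second central difference is exact):

```python
import numpy as np

# Assumed quadratic example: f(x) = x'Ax with symmetric A, whose Hessian is 2A,
# so d²f = u'H_f u matches f(x+u) - 2f(x) + f(x-u) exactly (no remainder).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
f = lambda v: v @ A @ v
H_f = 2 * A

x = np.array([0.7, -1.2])
u = np.array([0.05, 0.02])
d2f = u @ H_f @ u
central = f(x + u) - 2 * f(x) + f(x - u)
```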
3. VECTOR AND MATRIX FUNCTIONS

Suppose now that f_1, ..., f_m each is a function of the same n × 1 vector x = (x_1, ..., x_n)'. These m functions can be conveniently expressed as components of the vector function

f(x) = (f_1(x), ..., f_m(x))'.

The function f is differentiable at x if and only if each component function f_i is differentiable at x. The Taylor formulas from the previous section can be applied componentwise to f. For instance, the first-order Taylor formula is given by

f(x + u) = f(x) + [(∂/∂x') f(x)] u + r_1(u, x) = f(x) + df(x) + r_1(u, x),
where the vector remainder, r_1(u, x), satisfies

lim_{u→0} r_1(u, x)/(u'u)^{1/2} = 0,
and the first derivative of f at x is given by

(∂/∂x') f(x) = [ ∂f_1(x)/∂x_1   ∂f_1(x)/∂x_2   ···   ∂f_1(x)/∂x_n
                 ∂f_2(x)/∂x_1   ∂f_2(x)/∂x_2   ···   ∂f_2(x)/∂x_n
                 ···            ···             ···   ···
                 ∂f_m(x)/∂x_1   ∂f_m(x)/∂x_2   ···   ∂f_m(x)/∂x_n ].
This matrix of partial derivatives is sometimes referred to as the Jacobian matrix of f at x. Again, it is crucial to understand the relationship between the first differential and the first derivative. If we obtain the first differential of f at x in u and write it in the form

df = Bu,

then the m × n matrix B must be the derivative of f at x. If y and g are real-valued functions satisfying y(x) = g(f(x)), then the generalization of the chain rule given in (8.3) is
(∂/∂x_i) y(x) = Σ_{j=1}^m [(∂/∂f_j) g(f)] [(∂/∂x_i) f_j(x)]

for i = 1, ..., n, or simply

(∂/∂x') y(x) = [(∂/∂f') g(f)] [(∂/∂x') f(x)].
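The vector chain rule can be spot-checked numerically; here f and g are assumed toy functions (my own example, not from the text):

```python
import numpy as np

# Assumed example: f(x) = (x1 + x2², x1 x2)' and g(f) = f1 f2, so the composite
# is y(x) = (x1 + x2²) x1 x2 = x1²x2 + x1 x2³.
x1, x2 = 1.5, -0.4
f = np.array([x1 + x2 ** 2, x1 * x2])
J_f = np.array([[1.0, 2 * x2], [x2, x1]])      # ∂f/∂x'
grad_g = np.array([f[1], f[0]])                # ∂g/∂f'
chain = grad_g @ J_f                           # chain-rule derivative ∂y/∂x'

# direct differentiation of y = x1²x2 + x1 x2³
direct = np.array([2 * x1 * x2 + x2 ** 3, x1 ** 2 + 3 * x1 * x2 ** 2])
```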
In some applications the f_i's or the x_i's are arranged in a matrix instead of a vector. Thus, the most general case involves the p × q matrix function

F(X) = [ f_11(X)   f_12(X)   ···   f_1q(X)
         f_21(X)   f_22(X)   ···   f_2q(X)
         ···       ···        ···   ···
         f_p1(X)   f_p2(X)   ···   f_pq(X) ]

of the m × n matrix X. Results for the vector function f(x) can be easily extended to the matrix function F(X) by utilizing the vec operator; that is, let f be the pq × 1 vector function such that f(vec(X)) = vec(F(X)). Then, for instance, the Jacobian matrix of F at X is given by the pq × mn matrix

[∂/∂vec(X)'] f(vec(X)) = [∂/∂vec(X)'] vec(F(X)),

which has as its (i, j)th element the partial derivative of the ith element of vec(F(X)) with respect to the jth element of vec(X). This could then be used to obtain the first-order Taylor formula for vec(F(X + U)). The differentials of the matrix function F(X) are defined by the equations

vec(d^i F) = vec(d_U^i F(X)) = d^i f = d^i_{vec(U)} f(vec(X));

that is, d^i F, the ith-order differential of F at X in the incremental matrix U, is defined to be the p × q matrix obtained by unstacking the ith-order differential of f at vec(X) in the incremental vector vec(U).
Basic properties of vector and matrix differentials follow in a fairly straightforward fashion from the corresponding properties of scalar differentials. We will summarize some of these properties here. If x and y are functions and c is a constant, then the differential operator, d, satisfies
(a) dc = 0,
(b) d(cx) = c dx,
(c) d(x + y) = dx + dy,
(d) d(xy) = (dx)y + x(dy),
(e) d x^a = a x^{a-1} dx,
(f) d e^x = e^x dx,
(g) d log(x) = x^{-1} dx.

For instance, to illustrate property (d), note that

(x + dx)(y + dy) = xy + x(dy) + (dx)y + (dx)(dy),

and d(xy) will be given by the first-degree term in dx and dy, which is (dx)y + x(dy), as required. Using the properties above and the definition of a matrix differential, it is easily shown that if X and Y are matrix functions and A is a matrix of constants, then
(h) dA = (0),
(i) d(cX) = c dX,
(j) d(X') = (dX)',
(k) d(X + Y) = dX + dY,
(l) d(XY) = (dX)Y + X(dY).

We will verify property (l). Thus, we must show that the (i, j)th element of the matrix on the left-hand side of the equation, (d(XY))_{ij}, is the same as the (i, j)th element on the right-hand side, ((dX)Y)_{ij} + (X(dY))_{ij}, where X is m × n and Y is n × m. Using properties (c) and (d), we find that

(d(XY))_{ij} = d{(X)_{i.}(Y)_{.j}} = d( Σ_{k=1}^n x_{ik} y_{kj} ) = Σ_{k=1}^n {(dx_{ik}) y_{kj} + x_{ik}(dy_{kj})} = ((dX)Y)_{ij} + (X(dY))_{ij},

and so (l) is proven.
We illustrate the use of some of these properties by first finding the derivatives of some simple scalar functions of a vector x, and then by finding the derivatives of some simple matrix functions of a matrix X.
Example 8.1. Let x be an m × 1 vector of unrelated variables and define the functions

f(x) = a'x,

where a is an m × 1 vector of constants, and

g(x) = x'Ax,

where A is an m × m symmetric matrix of constants. The differential of the first function is
df = d(a'x) = a' dx.

Since this differential and the derivative are related through the equation

df = [(∂/∂x') f] dx,

we immediately observe that the derivative is given by

(∂/∂x') f = a'.

The differential and derivative of our second function are given by

dg = d(x'Ax) = d(x')Ax + x' d(Ax) = (dx)'Ax + x'A dx = {(dx)'Ax}' + x'A dx = x'A' dx + x'A dx = 2x'A dx,

and thus

(∂/∂x') g = 2x'A.
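The two derivatives of Example 8.1 can be confirmed with central finite differences; this is a numerical sketch with arbitrary random data, not part of the text's development:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
a = rng.standard_normal(m)
A = rng.standard_normal((m, m)); A = (A + A.T) / 2    # symmetric, as in the text
x = rng.standard_normal(m)

eps = 1e-6
def num_grad(fun, v):
    # central-difference approximation to the derivative ∂f/∂x'
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v); e[i] = eps
        g[i] = (fun(v + e) - fun(v - e)) / (2 * eps)
    return g

grad_f = num_grad(lambda v: a @ v, x)        # should equal a'
grad_g = num_grad(lambda v: v @ A @ v, x)    # should equal 2x'A
```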
Example 8.2. Let X be an m × n matrix of unrelated variables and define the functions

F(X) = AX,

where A is a p × m matrix of constants, and

G(X) = (X - C)'B(X - C),

where B is an m × m symmetric matrix of constants and C is an m × n matrix of constants. We will find the Jacobian matrices by first obtaining the differentials of these functions. For our first function, we find that

dF = d(AX) = A dX,

so that
d vec(F) = vec(dF) = vec(A dX) = (I_n ⊗ A) vec(dX) = (I_n ⊗ A) d vec(X).

Thus, we must have

[∂/∂vec(X)'] vec(F) = I_n ⊗ A.

The differential of our second function is

dG = d{(X - C)'B(X - C)} = {d(X' - C')}B(X - C) + (X - C)'B{d(X - C)} = (dX)'B(X - C) + (X - C)'B dX.

From this we obtain

d vec(G) = {(X - C)'B ⊗ I_n} vec(dX') + {I_n ⊗ (X - C)'B} vec(dX)
         = {(X - C)'B ⊗ I_n} K_{mn} vec(dX) + {I_n ⊗ (X - C)'B} vec(dX)
         = K_{nn}{I_n ⊗ (X - C)'B} vec(dX) + {I_n ⊗ (X - C)'B} vec(dX)
         = (I_{n^2} + K_{nn}){I_n ⊗ (X - C)'B} vec(dX)
         = 2N_n {I_n ⊗ (X - C)'B} d vec(X),

where we have used properties of the vec operator and the commutation matrix. Consequently, we have

[∂/∂vec(X)'] vec(G) = 2N_n {I_n ⊗ (X - C)'B}.
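The Jacobian of Example 8.2's second function can be verified numerically. The sketch below (the dimensions, seed, and the explicit construction of the commutation matrix K_nn are my own scaffolding) compares a finite-difference Jacobian of vec(G) with the formula 2N_n{I_n ⊗ (X - C)'B}:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 2
B = rng.standard_normal((m, m)); B = (B + B.T) / 2    # symmetric
C = rng.standard_normal((m, n))
X = rng.standard_normal((m, n))

vec = lambda M: M.reshape(-1, order="F")              # column-stacking vec
G = lambda X: (X - C).T @ B @ (X - C)

# commutation matrix K_nn: K vec(M) = vec(M') for any n x n matrix M
K = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        K[j * n + i, i * n + j] = 1.0
N = (np.eye(n * n) + K) / 2
formula = 2 * N @ np.kron(np.eye(n), (X - C).T @ B)   # n² x mn Jacobian

# finite-difference Jacobian of vec(G(X)) with respect to vec(X)
eps = 1e-6
J = np.zeros((n * n, m * n))
for k in range(m * n):
    E = np.zeros(m * n); E[k] = eps
    dX = E.reshape(m, n, order="F")
    J[:, k] = vec(G(X + dX) - G(X - dX)) / (2 * eps)
```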
In our next example, we show how the Jacobian matrix of the simple transformation z = c + Ax can be used to obtain the multivariate normal density function given in (1.13).
Example 8.3. Suppose that z is an m × 1 random vector with density function f_1(z) that is positive for all z ∈ S_1 ⊆ R^m. Let the m × 1 vector x = x(z) represent a one-to-one transformation of S_1 onto S_2 ⊆ R^m, so that the inverse transformation z = z(x), x ∈ S_2, is unique. Denote the Jacobian matrix of z at x as

J = (∂/∂x') z(x).

If the partial derivatives in J exist and are continuous functions on the set S_2, then the density of x is given by

h(x) = f_1(z(x)) |J|,

where |J| here denotes the absolute value of the determinant of J.
We will use the formula above to obtain the multivariate normal density, given in (1.13), from the standard normal density. Now recall that, by definition,
x ~ N_m(μ, Ω) if x can be expressed as x = μ + Tz, where TT' = Ω and the components of z, z_1, ..., z_m, are independently distributed, each as N(0, 1). Thus, the density function of z is given by

f_1(z) = Π_{i=1}^m (2π)^{-1/2} exp(-z_i^2/2) = (2π)^{-m/2} exp(-z'z/2).

The differential of the inverse transformation z = T^{-1}(x - μ) is dz = T^{-1} dx, and so the necessary Jacobian matrix is J = T^{-1}. Consequently, we find that the density of x is given by

h(x) = (2π)^{-m/2} |J| exp{-(x - μ)'(TT')^{-1}(x - μ)/2} = (2π)^{-m/2} |Ω|^{-1/2} exp{-(x - μ)'Ω^{-1}(x - μ)/2},

which is the multivariate normal density given in (1.13).
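The change-of-variables computation can be checked numerically: pushing the standard normal density through x = μ + Tz with Jacobian matrix J = T^{-1} reproduces the N_m(μ, Ω) density with Ω = TT'. (The particular μ, T, and evaluation point below are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(3)
m = 3
mu = rng.standard_normal(m)
T = np.tril(rng.standard_normal((m, m))) + 2 * np.eye(m)   # nonsingular
Omega = T @ T.T
x = rng.standard_normal(m)

z = np.linalg.solve(T, x - mu)                     # inverse transformation z(x)
f1_z = (2 * np.pi) ** (-m / 2) * np.exp(-0.5 * z @ z)
density_via_jacobian = f1_z * abs(np.linalg.det(np.linalg.inv(T)))

q = (x - mu) @ np.linalg.solve(Omega, x - mu)
density_direct = (2 * np.pi) ** (-m / 2) * np.linalg.det(Omega) ** (-0.5) * np.exp(-0.5 * q)
```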
4. SOME USEFUL MATRIX DERIVATIVES

In this section we will obtain the differentials and the corresponding derivatives of some important scalar functions and matrix functions of matrices. Throughout this section, when dealing with functions of the form f(X) or F(X), we will assume that the m × n matrix X is composed of mn unrelated variables; that is, X is assumed not to have any particular structure such as symmetry, triangularity, and so on. We begin with some scalar functions of X.
Theorem 8.1. Let X be an m × m matrix. Then

(a) d{tr(X)} = vec(I_m)' d vec(X);   [∂/∂vec(X)'] tr(X) = vec(I_m)';
(b) d|X| = tr(X# dX) = |X| tr(X^{-1} dX);   [∂/∂vec(X)'] |X| = vec(X#')',

where X# is the adjoint matrix of X.
Proof. Part (a) follows directly from the fact that

d tr(X) = tr(dX) = tr(I_m dX) = vec(I_m)' vec(dX) = vec(I_m)' d vec(X),
with the third equality following from Theorem 7.15. Since X# is the transpose of the matrix of cofactors of X, to obtain the derivative in (b), we simply need to show that

(∂/∂x_ij) |X| = X_ij,

where X_ij is the cofactor of x_ij. By using the cofactor expansion formula on the ith row of X, we can write the determinant of X as

|X| = Σ_{k=1}^m x_ik X_ik.

Note that for each k, X_ik is a determinant computed after deleting the ith row of X, so that each X_ik does not involve the element x_ij. Consequently, we have

(∂/∂x_ij) |X| = Σ_{k=1}^m X_ik (∂/∂x_ij) x_ik = X_ij.

Using the relationship between the first differential and derivative and the fact that X^{-1} = |X|^{-1} X#, we also get

d|X| = {vec(X#')}' vec(dX) = tr(X# dX) = |X| tr(X^{-1} dX).
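Part (b) of Theorem 8.1 lends itself to a quick numerical check (the random X and dX below are arbitrary test data): the first-order change in |X| should match |X| tr(X^{-1} dX).

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # comfortably nonsingular
dX = 1e-7 * rng.standard_normal((4, 4))

change = np.linalg.det(X + dX) - np.linalg.det(X)
differential = np.linalg.det(X) * np.trace(np.linalg.solve(X, dX))
```

The discrepancy between the two quantities is second order in dX.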
1HE PERTURBATION METHOD
that is, U is the perturbation matrix of X, and λ_l(Z + W) is an eigenvalue of (X + U) corresponding to the eigenvector Q'γ_l(Z + W). Thus, if we use the elements of U = QWQ' in place of those of U in the formulas in Theorem 8.5, we will obtain expansions for λ_l(Z + W) and Q'γ_l(Z + W). For instance, first-order approximations of λ_l(Z + W) and γ_l(Z + W) are given by

λ_l(Z + W) ≈ x_l + q_l'Wq_l,
γ_l(Z + W) ≈ Q{e_l - (X - x_l I_m)^+ (Q'WQ) e_l} = q_l - (Z - x_l I_m)^+ W q_l.
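A numerical sketch of the first-order eigenvalue expansion (the spectrum and perturbation below are assumed test data): for a distinct eigenvalue x_l with unit eigenvector q_l, we expect λ_l(Z + W) ≈ x_l + q_l'Wq_l.

```python
import numpy as np

rng = np.random.default_rng(5)
m = 4
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))    # orthogonal eigenvectors
x = np.array([5.0, 3.0, 1.0, -2.0])                 # distinct eigenvalues
Z = Q @ np.diag(x) @ Q.T
W = 1e-5 * rng.standard_normal((m, m)); W = (W + W.T) / 2   # small symmetric

q_l = Q[:, 0]                                       # eigenvector for x_l = 5
approx = x[0] + q_l @ W @ q_l                       # first-order approximation
exact = np.linalg.eigvalsh(Z + W)[-1]               # largest eigenvalue of Z + W
```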
The following is an immediate consequence of the firstorder Taylor expansion formulas given above.
Theorem 8.6. Let λ_l(Z) be the eigenvalue defined on m × m symmetric matrices Z, and let γ_l(Z) be a corresponding normalized eigenvector. If the matrix Z is such that the eigenvalue λ_l(Z) is distinct, then differentials and derivatives at that matrix Z are given by

dλ_l = γ_l'(dZ)γ_l,   [∂/∂v(Z)'] λ_l(Z) = (γ_l' ⊗ γ_l')D_m,
dγ_l = -(Z - λ_l I_m)^+ (dZ)γ_l,   [∂/∂v(Z)'] γ_l(Z) = -{γ_l' ⊗ (Z - λ_l I_m)^+}D_m.
The expansions given in and immediately following Theorem 8.5 do not hold when the eigenvalue x_l is not distinct. Suppose, for instance, that again x_1 ≥ ··· ≥ x_m, but now x_l = x_{l+1} = ··· = x_{l+r-1}, so that the value x_l is repeated as an eigenvalue of Z = QXQ', r times. In this case, we can get expansions for λ̄_{l,l+r-1}(Z + W), the average of the perturbed eigenvalues λ_l(Z + W), ..., λ_{l+r-1}(Z + W), and the total eigenprojection P̄_l associated with this collection of eigenvalues; if P_{Z+W}{λ_{l+i-1}(Z + W)} represents the eigenprojection of Z + W associated with the eigenvalue λ_{l+i-1}(Z + W), then this total eigenprojection is given by

P̄_l = Σ_{i=1}^r P_{Z+W}{λ_{l+i-1}(Z + W)} = Σ_{i=1}^r γ_{l+i-1}(Z + W)(γ_{l+i-1}(Z + W))'.

These expansions are summarized below. The proof, which is similar to that of Theorem 8.5, is left to the reader.
Theorem 8.7. Let Z be an m × m symmetric matrix with eigenvalues x_1 ≥ ··· ≥ x_{l-1} > x_l = x_{l+1} = ··· = x_{l+r-1} > x_{l+r} ≥ ··· ≥ x_m, so that x_l is an eigenvalue with multiplicity r. Suppose that W is an m × m symmetric matrix and let λ_1 ≥ λ_2 ≥ ··· ≥ λ_m be the eigenvalues of Z + W, while λ̄_{l,l+r-1} = r^{-1}(λ_l + ··· + λ_{l+r-1}). Denote the eigenprojection of Z corresponding to the repeated eigenvalue x_l by P_l, and denote the total eigenprojection of Z + W corresponding to the collection of eigenvalues λ_l, ..., λ_{l+r-1} by P̄_l. Define Y = (Z - x_l I_m)^+. Then the third-order Taylor approximations

λ̄_{l,l+r-1} ≈ x_l + a_1 + a_2 + a_3,
P̄_l ≈ P_l + B_1 + B_2 + B_3,

have

a_1 = (1/r) tr(WP_l),
a_2 = -(1/r) tr(WYWP_l),
a_3 = (1/r){tr(YWYWP_lW) - tr(Y^2 WP_lWP_lW)},
B_1 = -YWP_l - P_lWY,
B_2 = YWP_lWY + YWYWP_l - Y^2 WP_lWP_l + P_lWYWY - P_lWP_lWY^2 - P_lWY^2 WP_l,
B_3 = Y^2 WP_lWYWP_l + P_lWYWP_lWY^2 + Y^2 WP_lWP_lWY + YWP_lWP_lWY^2 + Y^2 WYWP_lWP_l - Y^3 WP_lWP_lWP_l + P_lWP_lWYWY^2 + YWY^2 WP_lWP_l - P_lWP_lWP_lWY^3 + P_lWP_lWY^2 WY - YWYWP_lWY^2 - Y^2 WP_lWYWY - YWYWYWP_l - P_lWYWYWY + YWP_lWY^2 WP_l + P_lWY^2 WP_lWY + P_lWY^2 WYWP_l + P_lWYWY^2 WP_l - P_lWY^3 WP_lWP_l - P_lWP_lWY^3 WP_l.
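The leading term of Theorem 8.7 can be sanity-checked numerically (the repeated spectrum and small W below are assumed test data): the average of the r perturbed eigenvalues should move by roughly a_1 = tr(WP_l)/r.

```python
import numpy as np

rng = np.random.default_rng(9)
m, r = 4, 2
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
x = np.array([3.0, 3.0, 1.0, -1.0])        # x_l = 3 with multiplicity r = 2
Z = Q @ np.diag(x) @ Q.T
P_l = Q[:, :r] @ Q[:, :r].T                # eigenprojection for x_l
W = 1e-5 * rng.standard_normal((m, m)); W = (W + W.T) / 2

lam = np.sort(np.linalg.eigvalsh(Z + W))[::-1]   # eigenvalues, descending
avg_exact = lam[:r].mean()
avg_approx = 3.0 + np.trace(W @ P_l) / r         # first-order approximation
```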
7. MAXIMA AND MINIMA

One important application of derivatives involves finding the maxima or minima of a function. A function f has a local maximum at an n × 1 point a if for some δ > 0, f(a) ≥ f(a + x) whenever x'x < δ. This function has an absolute maximum at a if f(a) ≥ f(x) for all x for which f is defined. Similar definitions hold for a local minimum and an absolute minimum; in fact, if f has a local minimum at a point a, then -f has a local maximum at a, and if f has an absolute minimum at a, then -f has an absolute maximum at a. For this reason, we will at times
confine our discussion to only the case of a maximum. In this section and the next section, we state some results that are helpful in finding local maxima and minima. For proofs of these results the reader is referred to Khuri (1993) or Magnus and Neudecker (1988). Our first result gives a necessary condition for a function f to have a local maximum at a.

Theorem 8.8. Suppose the function f(x) is defined for all n × 1 vectors x ∈ S, where S is some subset of R^n. Let a be an interior point of S; that is, there exists a δ > 0 such that a + u ∈ S for all u'u < δ. If f has a local maximum at a and f is differentiable at a, then

(∂/∂a') f(a) = 0'.          (8.29)
Any point a satisfying (8.29) is called a stationary point of f. While Theorem 8.8 indicates that any point at which a local maximum or local minimum occurs must be a stationary point, the converse does not hold. A stationary point that does not correspond to a local maximum or a local minimum is called a saddle point. Our next result is helpful in determining whether a particular stationary point is a local maximum or minimum in those situations in which the function f is twice differentiable.

Theorem 8.9. Suppose the function f(x) is defined for all n × 1 vectors x ∈ S, where S is some subset of R^n. Suppose also that f is twice differentiable at the interior point a of S. If a is a stationary point of f and H_f is the Hessian matrix of f at a, then

(a) f has a local minimum at a if H_f is positive definite,
(b) f has a local maximum at a if H_f is negative definite,
(c) f has a saddle point at a if H_f is nonsingular but not positive definite or negative definite,
(d) f may have a local minimum, a local maximum, or a saddle point at a if H_f is singular.
Example 8.4. On several occasions, we have discussed the problem of finding a least squares solution β̂ to the inconsistent system of equations

y = Xβ,

where y is an N × 1 vector of constants, X is an N × (k + 1) matrix of constants, and β is a (k + 1) × 1 vector of variables. A solution was obtained in Chapter 2 by using the geometrical properties of least squares regression, while in Chapter 6 we utilized the results developed on least squares generalized inverses. In this example, we will show how the methods of this section may be used to obtain
a solution. We will assume that rank(X) = k + 1; that is, the matrix X has full column rank. Recall that a least squares solution is any vector β̂ which minimizes the sum of squared errors given by

f(β) = (y - Xβ)'(y - Xβ).

The differential of f(β) is

df = d{(y - Xβ)'}(y - Xβ) + (y - Xβ)' d(y - Xβ) = -(dβ)'X'(y - Xβ) - (y - Xβ)'X dβ = -2(y - Xβ)'X dβ,

so that

(∂/∂β') f(β) = -2(y - Xβ)'X.

Thus, upon setting this first derivative equal to 0' and rearranging, we find that the stationary values are given by the solutions β̂ to the system of equations

X'X β̂ = X'y.          (8.30)
Since X has full column rank, X'X is nonsingular, and so the unique solution to (8.30) is

β̂ = (X'X)^{-1}X'y.          (8.31)

In order to verify that this solution minimizes the sum of squared errors, we need to obtain the Hessian matrix H_f. The second differential of f(β) is given by

d^2 f = d(df) = d{-2(y - Xβ)'X dβ} = -2{d(y - Xβ)'}X dβ = 2(dβ)'X'X dβ,

so that

H_f = 2X'X.

Since this matrix is positive definite, it follows from Theorem 8.9 that the solution given in (8.31) minimizes f(β).
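The normal-equations solution (8.31) is easy to confirm against a library least squares solver; the data below are simulated (any full-column-rank X would do):

```python
import numpy as np

rng = np.random.default_rng(6)
N, k = 20, 3
X = np.column_stack([np.ones(N), rng.standard_normal((N, k))])   # N x (k+1), full rank
y = rng.standard_normal(N)

beta_normal_eq = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y via (8.30)
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)   # library least squares
```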
Example 8.5. One of the most popular ways of obtaining estimators of unknown parameters is by a method known as maximum likelihood estimation. If we have a random sample of vectors x_1, ..., x_n from a population having density function f(x; θ), where θ is a vector of parameters, then the likelihood function of θ is defined to be the joint density function of x_1, ..., x_n, viewed as a function of θ; that is, this likelihood function is given by

L(θ) = Π_{i=1}^n f(x_i; θ).

The method of maximum likelihood estimates θ by the vector θ̂, which maximizes L(θ). In this example, we will use this method to obtain estimates of μ and Ω when our sample comes from the normal distribution, N_m(μ, Ω). Thus, μ is an m × 1 vector, Ω is an m × m positive definite matrix, and the required density function, f(x; μ, Ω), is given in (1.13). In deriving the estimates μ̂ and Ω̂, we will find it a little bit easier to maximize the function log(L(μ, Ω)), which is, of course, maximized at the same solution as L(μ, Ω). After omitting terms from log(L(μ, Ω)) that do not involve μ or Ω, we find that we must maximize the function

g(μ, Ω) = -(n/2) log|Ω| - (1/2) tr(Ω^{-1}U),

where

U = Σ_{i=1}^n (x_i - μ)(x_i - μ)'.
The first differential of g is given by

dg = -(n/2) tr(Ω^{-1} dΩ) - (1/2) tr{d(Ω^{-1})U} - (1/2) tr(Ω^{-1} dU)
   = -(n/2) tr(Ω^{-1} dΩ) + (1/2) tr{Ω^{-1}(dΩ)Ω^{-1}U} + Σ_{i=1}^n (x_i - μ)'Ω^{-1} dμ
   = (1/2) tr{(dΩ)Ω^{-1}(U - nΩ)Ω^{-1}} + n(x̄ - μ)'Ω^{-1} dμ
   = (1/2) vec(dΩ)'(Ω^{-1} ⊗ Ω^{-1}) vec(U - nΩ) + n(x̄ - μ)'Ω^{-1} dμ,
where we have used Corollary 8.1.1, Theorem 8.2, and Theorem 7.17. Since Ω is symmetric, vec(dΩ) = d vec(Ω) = D_m d v(Ω), and so the differential may be re-expressed as

dg = (1/2) d v(Ω)' D_m'(Ω^{-1} ⊗ Ω^{-1}) vec(U - nΩ) + n(x̄ - μ)'Ω^{-1} dμ,          (8.32)

and thus,

(∂/∂μ') g = n(x̄ - μ)'Ω^{-1},
(∂/∂v(Ω)') g = (1/2) {vec(U - nΩ)}'(Ω^{-1} ⊗ Ω^{-1})D_m.

Upon equating these first derivatives to null vectors, we obtain the equations

nΩ^{-1}(x̄ - μ) = 0,   D_m'(Ω^{-1} ⊗ Ω^{-1}) vec(U - nΩ) = 0.

From the first of these two equations, we obtain the solution for μ, μ̂ = x̄, while the second can be rewritten as
D_m'(Ω^{-1} ⊗ Ω^{-1})D_m v(U - nΩ) = 0,

since the symmetry of (U - nΩ) implies that vec(U - nΩ) = D_m v(U - nΩ). Premultiplying this equation by D_m^+(Ω ⊗ Ω)D_m^{+'} and using Theorem 7.38, we find that

v(U - nΩ) = 0.

Since (U - nΩ) is symmetric, this implies that (U - nΩ) = (0), and so the solution for Ω is Ω̂ = n^{-1}U. All that remains is to show that the solution (μ̂, Ω̂) yields a maximum. By differentiating (8.32), we find that
the second differential of g, when evaluated at μ = x̄ and Ω = n^{-1}U, has all terms involving (x̄ - μ) and (U - nΩ) vanish, leaving

d^2 g = -n dμ'Ω̂^{-1} dμ - (n/2) d v(Ω)' D_m'(Ω̂^{-1} ⊗ Ω̂^{-1})D_m d v(Ω);

that is, the Hessian matrix of g at this solution is

H_g = [ -nΩ̂^{-1}                              (0)
        (0)        -(n/2) D_m'(Ω̂^{-1} ⊗ Ω̂^{-1})D_m ].

Clearly, H_g is negative definite since Ω̂^{-1} and D_m'(Ω̂^{-1} ⊗ Ω̂^{-1})D_m are positive definite matrices. This then establishes that the solution (μ̂, Ω̂) = (x̄, n^{-1}U) yields a maximum.
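A numerical spot-check of Example 8.5 with simulated data: the function g(μ, Ω) should be largest at μ̂ = x̄, Ω̂ = n^{-1}U, so any perturbation of that solution should decrease g. (The sample size, dimension, and perturbations are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 50, 3
xs = rng.standard_normal((n, m))        # simulated sample x_1, ..., x_n

def g(mu, Omega):
    # g(μ, Ω) = -(n/2) log|Ω| - (1/2) tr(Ω^{-1} U)
    U = sum(np.outer(xi - mu, xi - mu) for xi in xs)
    sign, logdet = np.linalg.slogdet(Omega)
    return -0.5 * n * logdet - 0.5 * np.trace(np.linalg.solve(Omega, U))

mu_hat = xs.mean(axis=0)
U_hat = sum(np.outer(xi - mu_hat, xi - mu_hat) for xi in xs)
Omega_hat = U_hat / n

g_mle = g(mu_hat, Omega_hat)
g_mu = g(mu_hat + 0.1, Omega_hat)               # perturb the mean
g_om = g(mu_hat, Omega_hat + 0.1 * np.eye(m))   # perturb the covariance
```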
8. CONVEX AND CONCAVE FUNCTIONS

In Section 2.10, we discussed convex sets. Here we will extend the concept of convexity to functions and obtain some special results that apply to this class of functions.

Definition 8.1. Let f(x) be a real-valued function defined for all x ∈ S, where S is a convex subset of R^m. Then f(x) is a convex function on S if

f(cx_1 + (1 - c)x_2) ≤ cf(x_1) + (1 - c)f(x_2)

for all x_1 ∈ S, x_2 ∈ S, and 0 ≤ c ≤ 1. If -f(x) is a convex function, then f(x) is said to be a concave function.
If f(x) is a convex function, then it is easily verified that the set defined by

T = {z = (x', y)': x ∈ S, y ≥ f(x)}

is a convex subset of R^{m+1}. For instance, if m = 1, then T will be a convex subset of R^2. In this case, for any a ∈ S, the point (a, f(a)) will be a boundary point of the set T. Now from the supporting hyperplane theorem, Theorem 2.27, we know that there is a line passing through the point (a, f(a)) such that the function f(x) is never below this line. Since this line passes through the point (a, f(a)), it can be written in the form g(x) = f(a) + t(x - a), where t is the slope of the line, and thus, for all x ∈ S, we have

f(x) ≥ f(a) + t(x - a).          (8.33)

The generalization of this result to arbitrary m is given below.
. low be n ve gi is m ry tra bi ar to lt su re is th of on ati The generaliz all r fo ed fin de n tio nc fu ex nv co ed lu va alre a Theorem 8.10. Let f(x ) be r io ter in ch ea to ng di on sp rre co , en Th . Rm of et X E S , wh er e S is a co nv ex su bs at th ch su t r cto ve 1 X m an s ist ex e er th S, E a int po j( x) ~j(a) + t'( x  a)
for all
(8.34)
XE S.
S, the po in t z* =(a' ,j( a) )' is a boundary point of the
For any a E e er th at th 7 2.2 m re eo Th m fro ws llo fo it so d an e, ov ab convex set T defined all r fo z* b' ~ z b' ich wh r fo O :/. d+ b , (b~ = b r cto m exists an (m + I) x I ve e lu va e th se ea cr in y ril tra bi ar n ca we T, E )' ',y (x = z Z E T. Clearly, for any . be ot nn ca + b at th e se we , on m 1 as re is th r Fo T. in t in of y and get another po d, an all sm y ril tra bi ar z b' e ak m to le ab be d ul wo we negative since if it were, y an r fo w No O. or e iv sit po er th ei is 1 + b , us Th m in particular, less than b'z*. *, b'z ~ z b' y lit ua eq in e th in z of ce oi ch is th r fo X E S , (X ',/ (X »' E T an d so we get
Proof
nn fo e th to ed ng ra ar re be ay m e ov ab y lit ua eq in e th en th If b m + 1 is positive, ~ z b' en th 0, = +1 bm , nd ha r he ot e th on If, l' lb '~ ;; b given in (8.34) with t = b' z* reduces to
0 . ete pl m co is f oo pr e th , us Th S. of t in po ry da un which implies that a is a bo
If f is a differentiable function, then the hyperplane given on the right-hand side of (8.34) will be given by the tangent hyperplane to f(x) at x = a.
Theorem 8.11. Let f(x) be a real-valued convex function defined for all x ∈ S, where S is an open convex subset of R^m. If f(x) is differentiable and a ∈ S, then

f(x) ≥ f(a) + [(∂/∂a') f(a)](x - a)

for all x ∈ S.
Proof. Suppose that x ∈ S and a ∈ S, and let y = a - x so that a = x + y. Since S is convex, the point

ca + (1 - c)x = c(x + y) + (1 - c)x = x + cy

is in S for 0 ≤ c ≤ 1. Thus, due to the convexity of f, we have

f(x + cy) ≤ cf(x + y) + (1 - c)f(x) = f(x) + c{f(x + y) - f(x)},

or, equivalently,

f(x + y) ≥ f(x) + c^{-1}{f(x + cy) - f(x)}.          (8.35)

Now since f is differentiable, we also have the Taylor formula

f(x + cy) = f(x) + [(∂/∂x') f(x)] cy + r_1(cy, x),          (8.36)

where the remainder satisfies lim c^{-1} r_1(cy, x) = 0 as c → 0. Using (8.36) in (8.35), we get

f(x + y) ≥ f(x) + [(∂/∂x') f(x)] y + c^{-1} r_1(cy, x),

and so the result follows by letting c → 0.
The previous theorem can easily be used to show that a stationary point of a convex function will actually be an absolute minimum. Equivalently, a stationary point of a concave function will be an absolute maximum of that function.

Theorem 8.12. Let f(x) be a real-valued convex function defined for all x ∈ S, where S is an open convex subset of R^m. If f(x) is differentiable and a ∈ S is a stationary point of f, then f has an absolute minimum at a.

Proof. If a is a stationary point of f, then

(∂/∂a') f(a) = 0'.

Using this in the inequality of Theorem 8.11, we get f(x) ≥ f(a) for all x ∈ S, and so the result follows.
The inequality given in (8.34) can be used to prove a very useful inequality involving the moments of a random vector x. This inequality is known as Jensen's inequality. But before we can prove this result, we will need the following.

Theorem 8.13. Suppose that S is a convex subset of R^m and y is an m × 1 random vector with finite first moments. If P(y ∈ S) = 1, then E(y) ∈ S.

Proof. We will prove the result by induction. Clearly, the result holds if m = 1, since in this case S is an interval, and it is easily demonstrated that a random variable y satisfying P(a ≤ y ≤ b) = 1 for some constants a and b will have a ≤ E(y) ≤ b. Now assuming that the result holds for dimension m - 1, we will show that it must then hold for m. Define the convex set

S* = {x: x = u - E(y), u ∈ S},

so that the proof will be complete if we show that 0 ∈ S*. Now if 0 ∉ S*, it follows from Theorem 2.27 that there exists an m × 1 vector a ≠ 0 such that a'x ≥ 0 for all x ∈ S*. Consequently, since P(y ∈ S) = P(w ∈ S*) = 1, where the random vector w = y - E(y), we have a'w ≥ 0 with probability 1, yet E(a'w) = 0. This is possible only if a'w = 0, in which case w is on the hyperplane defined by {x: a'x = 0}, with probability one. But since P(w ∈ S*) = 1 as well, we must have P(w ∈ S_0) = 1, where S_0 = S* ∩ {x: a'x = 0}. Now it follows from Theorem 2.23 that S_0 is a convex set, and it is contained within an (m - 1)-dimensional vector space since {x: a'x = 0} is an (m - 1)-dimensional vector space. Thus, since our result holds for (m - 1)-dimensional spaces, we must have E(w) = 0 ∈ S_0. This leads to the contradiction 0 ∈ S*, since S_0 ⊆ S*, and so the proof is complete.
We now prove Jensen's inequality.
Theorem 8.14. Let f(x) be a real-valued convex function defined for all x ∈ S, where S is a convex subset of R^m. If y is an m × 1 random vector with finite first moments and satisfying P(y ∈ S) = 1, then

E(f(y)) ≥ f(E(y)).
Proof. The previous theorem guarantees that E(y) ∈ S. We first prove the result for m = 1. If E(y) is an interior point of S, the result follows by taking the expected value of both sides of (8.33) when x = y and a = E(y). Since when m = 1, S is an interval, E(y) can be a boundary point of S only if S is closed and P(y = c) = 1, where c is an endpoint of the interval. In this case, the result is trivial since the terms on the two sides of the inequality above are equal. We will complete the proof by showing that if the result holds for m - 1, then it must hold for m. If the m × 1 vector E(y) is an interior point of S, the result follows by taking the expected value of both sides of (8.34) with x = y and a = E(y). If E(y) is a boundary point of S, then we know from the supporting hyperplane theorem that there exists an m × 1 unit vector b such that w = b'y ≥ b'E(y) = μ with probability one. But since we also have E(w) = b'E(y) = μ, it follows that b'y = μ with probability one. Let P be any m × m orthogonal matrix with its last column given by b, so that the vector u = P'y has the form u = (u_1', μ)', where u_1 is an (m - 1) × 1 vector. Define the function g(u_1) as

g(u_1) = f(P(u_1', μ)') = f(y),

for all u_1 ∈ S* = {x: x = P_1'y, y ∈ S}, where P_1 is the matrix obtained from P by deleting its last column. The convexity of S* and g follow from the convexity of S and f, and so, since u_1 is (m - 1) × 1, our result applies to g(u_1). Thus, we have

E(f(y)) = E(g(u_1)) ≥ g(E(u_1)) = f(P(E(u_1)', μ)') = f(E(y)).
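An empirical illustration of Jensen's inequality with an assumed convex function f(x) = x'x and simulated data; for this f, the gap E(f(y)) - f(E(y)) is exactly the total sample variance, so the inequality holds in every sample:

```python
import numpy as np

rng = np.random.default_rng(8)
y = rng.standard_normal((10000, 2)) + np.array([1.0, -0.5])   # random vectors

f = lambda v: v @ v                      # convex function f(x) = x'x
E_f_y = np.mean(np.sum(y ** 2, axis=1))  # sample version of E(f(y))
f_E_y = f(y.mean(axis=0))                # f evaluated at the sample mean
```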
9. THE METHOD OF LAGRANGE MULTIPLIERS

In some situations we may need to find a local maximum of a function f(x), where f is defined for all x ∈ S, while the desired maximum is over all x in T, a subset of S. The method of Lagrange multipliers is useful in those situations in which the set T can be expressed in terms of a number of equality constraints; that is, there exist functions g_1, ..., g_m such that

T = {x: x ∈ R^n, g(x) = 0},

where g(x) is the m × 1 function given by (g_1(x), ..., g_m(x))'. The method of Lagrange multipliers involves the maximization of the
Lagrange function

L(x, λ) = f(x) - λ'g(x),

where the components of the m × 1 vector λ, λ_1, ..., λ_m, are called the Lagrange multipliers. The stationary values of L(x, λ) are the solutions (x, λ) satisfying

(∂/∂x') L(x, λ) = (∂/∂x') f(x) - λ'[(∂/∂x') g(x)] = 0',          (8.37)
(∂/∂λ') L(x, λ) = -g(x)' = 0'.

The second equation above is simply the equality constraints

g(x) = 0          (8.38)

that determine the set T. Under certain conditions, the local maximum of the function f(x), subject to x ∈ T, will be given by a vector x that, for some λ, satisfies equations (8.37) and (8.38). We will present a procedure for determining whether a particular solution vector x is a local maximum. This procedure is based on the following result, a proof of which can be found in Magnus and Neudecker (1988).

Theorem 8.15. Suppose the function f(x) is defined for all n × 1 vectors x ∈ S, where S is some subset of R^n, and g(x) is an m × 1 vector function defined for all x ∈ S, where m < n. Let a be an interior point of S and suppose that the following conditions hold.

(a) f and g are twice differentiable at a.
(b) The first derivative of g at a, (∂/∂a')g(a), has full rank m.
(c) g(a) = 0.
(d) (∂/∂a')L(a, λ) = 0', where L(x, λ) = f(x) - λ'g(x) and λ is m × 1.

Let H_f and H_{g_j} be the Hessian matrices of the functions f(x) and g_j(x) evaluated at x = a, and define

A = H_f - Σ_{j=1}^m λ_j H_{g_j},   B = (∂/∂a') g(a).
355
THE METHOD OF LAGRANGE MULTIPLIERS
Then f(x) has a local maximum at x = a, subject to g(x) = 0, if

x'Ax < 0

for all x ≠ 0 for which Bx = 0. A similar result holds for a local minimum, with the inequality x'Ax > 0 replacing x'Ax < 0.

Our next result provides a method for determining whether x'Ax < 0 or x'Ax > 0 holds for all x ≠ 0 satisfying Bx = 0. Again, a proof can be found in Magnus and Neudecker (1988).

Theorem 8.16. Let A be an n × n symmetric matrix and B be an m × n matrix. For r = 1, ..., n, let A_rr be the r × r matrix obtained by deleting the last n - r rows and columns of A, and let B_r be the m × r matrix obtained by deleting the last n - r columns of B. For r = 1, ..., n, define the (m + r) × (m + r) matrix Δ_r as

Δ_r = [ (0)   B_r
        B_r'  A_rr ].

Then, if B_m is nonsingular, x'Ax > 0 holds for all x ≠ 0 satisfying Bx = 0 if and only if

(-1)^m |Δ_r| > 0

for r = m + 1, ..., n, and x'Ax < 0 holds for all x ≠ 0 satisfying Bx = 0 if and only if

(-1)^r |Δ_r| > 0

for r = m + 1, ..., n.
Example 8.6. We will find the solutions x = (x1, x2, x3)', which maximize and minimize the function

    f(x) = x1 + x2 + x3,

subject to the constraints

    x1² + x2² = 1,            (8.39)
    x3 - x1 - x2 = 1.         (8.40)
MATRIX DERIVATIVES AND RELATED TOPICS
Setting the first derivative of the Lagrange function

    L(x, λ) = x1 + x2 + x3 - λ1(x1² + x2² - 1) - λ2(x3 - x1 - x2 - 1)

with respect to x, equal to 0', we obtain the equations

    1 - 2λ1x1 + λ2 = 0,    1 - 2λ1x2 + λ2 = 0,    1 - λ2 = 0.

The third equation gives λ2 = 1, and when this is substituted in the first two equations, we find that we must have

    x1 = x2 = 1/λ1.

Using this in (8.39), we find that λ1 = ±√2, and so we have the stationary points

    (x1, x2, x3) = (1/√2, 1/√2, 1 + √2),     when λ1 = √2,
    (x1, x2, x3) = (-1/√2, -1/√2, 1 - √2),   when λ1 = -√2.
To determine whether either of these solutions yields a maximum or minimum we use Theorems 8.15 and 8.16. Thus, since m = 2 and n = 3, we only need the determinant of the matrix

    Δ3 = [  0     0    2x1    2x2    0 ]
         [  0     0    -1     -1     1 ]
         [ 2x1   -1   -2λ1     0     0 ]
         [ 2x2   -1     0    -2λ1    0 ]
         [  0     1     0      0     0 ]

By using the cofactor expansion formula for a determinant, it is fairly straightforward to show that

    |Δ3| = -8λ1(x1² + x2²).
Thus, when (x1, x2, x3, λ1, λ2) = (1/√2, 1/√2, 1 + √2, √2, 1), we have |Δ3| < 0, and so the solution (x1, x2, x3) = (1/√2, 1/√2, 1 + √2) yields a constrained maximum. On the other hand, when (x1, x2, x3, λ1, λ2) = (-1/√2, -1/√2, 1 - √2, -√2, 1), we have |Δ3| > 0, and so the solution (x1, x2, x3) = (-1/√2, -1/√2, 1 - √2) yields a constrained minimum.
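As a numerical cross-check (a sketch, not part of the text), the constraint set of Example 8.6 can be parametrized by a single angle, reducing the constrained problem to a one-dimensional search:

```python
import math

# Sketch only: on the constraints x1^2 + x2^2 = 1 and x3 - x1 - x2 = 1,
# set x1 = cos(t), x2 = sin(t), x3 = 1 + cos(t) + sin(t), so that
# f(x) = x1 + x2 + x3 = 1 + 2(cos(t) + sin(t)).
def f_on_constraint(t):
    x1, x2 = math.cos(t), math.sin(t)
    x3 = 1 + x1 + x2
    return x1 + x2 + x3

ts = [2 * math.pi * k / 10000 for k in range(10000)]
vals = [f_on_constraint(t) for t in ts]
t_max = max(ts, key=f_on_constraint)

# Maximum 1 + 2*sqrt(2) near (1/sqrt(2), 1/sqrt(2), 1 + sqrt(2));
# minimum 1 - 2*sqrt(2) at the sign-flipped stationary point.
print(max(vals), min(vals), math.cos(t_max), math.sin(t_max))
```

The grid maximum and minimum agree with the two stationary points found above to the grid resolution.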
In some situations, in the process of obtaining the stationary values of L(x, λ), it becomes apparent which solution yields a maximum and which solution yields a minimum. Thus, in this case, there will be no need to compute the determinants of Theorem 8.16.

Example 8.7. Let A be an m x m symmetric matrix and x be an m x 1 vector. We saw in Section 3.6 that

    x'Ax / x'x            (8.41)

has a maximum value of λ1(A) and a minimum value of λm(A), where λ1(A) ≥ ... ≥ λm(A) are the eigenvalues of A. We will prove this result again, this time using Lagrange's method. Note that since z = (x'x)^{-1/2}x is a unit vector, it follows that maximizing or minimizing (8.41) over all x ≠ 0 is equivalent to maximizing or minimizing the function f(z) = z'Az, subject to the constraint

    z'z = 1.              (8.42)

Thus, the Lagrange function is

    L(z, λ) = z'Az - λ(z'z - 1).
Setting its first derivative, with respect to z, equal to 0', we obtain the equation
    2Az - 2λz = 0,

or, equivalently,

    Az = λz,              (8.43)

which is the eigenvalue-eigenvector equation. Thus, the Lagrange multiplier λ is an eigenvalue of A. Further, premultiplying (8.43) by z' and using (8.42), we find that λ = z'Az;
that is, if z is a stationary point of L(z, λ), then z'Az must be an eigenvalue of A. Consequently, the maximum value of z'Az, subject to z'z = 1, is λ1(A), which is attained when z is equal to any unit eigenvector corresponding to λ1(A). Similarly, the minimum value of z'Az, subject to z'z = 1, is λm(A), and this is attained at any unit eigenvector associated with λm(A). In our final example, we obtain the best quadratic unbiased estimator of σ² in the ordinary least squares regression model.
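The eigenvalue characterization can be illustrated numerically. The sketch below (with an assumed 2 x 2 matrix, not from the text) samples unit vectors and checks that z'Az stays between the extreme eigenvalues:

```python
import math
import random

# Sketch: for A = [[2, 1], [1, 2]] (eigenvalues 3 and 1), z'Az over unit
# vectors z = (cos t, sin t)' equals 2 + sin(2t), so it ranges over [1, 3].
A = [[2.0, 1.0], [1.0, 2.0]]

def quad_form(z):
    return sum(z[i] * A[i][j] * z[j] for i in range(2) for j in range(2))

random.seed(0)
vals = [quad_form([math.cos(t), math.sin(t)])
        for t in (random.uniform(0, 2 * math.pi) for _ in range(20000))]

# max z'Az approaches lambda_1(A) = 3; min z'Az approaches lambda_2(A) = 1.
print(max(vals), min(vals))
```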
Example 8.8. Consider the multiple regression model y = Xβ + ε, where ε ~ N_N(0, σ²I_N). A quadratic estimator of σ² is any estimator σ̂² that takes the form σ̂² = y'Ay, where A is a symmetric matrix of constants. We wish to find the choice of A that minimizes var(σ̂²) over all choices of A for which σ̂² is unbiased. Now since E(ε) = 0 and E(εε') = σ²I_N, we have

    E(y'Ay) = E{(Xβ + ε)'A(Xβ + ε)}
            = E{β'X'AXβ + 2β'X'Aε + ε'Aε}
            = β'X'AXβ + tr{A E(εε')}
            = β'X'AXβ + σ² tr(A),

and so σ̂² = y'Ay is unbiased regardless of the value of β if and only if

    X'AX = (0)            (8.44)

and

    tr(A) = 1.            (8.45)

Using the fact that the components of ε are independently distributed and the first four moments of each component are 0, σ², 0, 3σ⁴, it is easily verified that

    var(y'Ay) = 2σ⁴ tr(A²) + 4σ²β'X'A²Xβ.
Thus, the required Lagrange function is

    L(A, λ, Λ) = 2σ⁴ tr(A²) + 4σ²β'X'A²Xβ - tr(ΛX'AX) - λ{tr(A) - 1},

where the Lagrange multipliers are given by λ and the components of the matrix Λ, which is symmetric since X'AX is symmetric. Differentiation with respect to A yields

    dL = 2σ⁴ tr{(dA)A + A dA} + 4σ²β'X'{(dA)A + A dA}Xβ - tr{ΛX'(dA)X} - λ tr(dA)
       = tr[{4σ⁴A + 4σ²(AXββ'X' + Xββ'X'A) - XΛX' - λI_N} dA].

Thus, we must solve

    4σ⁴A + 4σ²(AXββ'X' + Xββ'X'A) - XΛX' - λI_N = (0)        (8.46)

along with (8.44) and (8.45) for A. Premultiplying and postmultiplying (8.46) by XX⁺ and using (8.44) and the fact that X⁺ = (X'X)⁺X', we find that

    XΛX' = -λXX⁺.

Substituting this back into (8.46), we get

    4σ⁴A + 4σ²H = λ(I_N - XX⁺),                              (8.47)

where H = Aγγ' + γγ'A and γ = Xβ. Putting (8.47) back into the expression for H and simplifying, we obtain

    σ²H + Hγγ' + γγ'H = (0).                                 (8.48)

By postmultiplying (8.48) by γ, we find that γ must be an eigenvector of H, and in light of equation (8.48), this can be true only if H is of the form H = cγγ' for some scalar c. Further, when we put H = cγγ' in (8.48), we find that we must have c = 0; thus H = (0). In addition, if we take the trace of both sides of (8.47) and use (8.45), we see that

    λ = 4σ⁴/(N - r),

where r is the rank of X. Consequently, we have shown that (8.47) simplifies
to

    A = (1/(N - r))(I_N - XX⁺),                              (8.49)

so that σ̂² = y'Ay = SSE/(N - r) is the familiar residual variance estimate. We can easily demonstrate that (8.49) yields an absolute minimum by writing an arbitrary symmetric matrix satisfying (8.44) and (8.45) as A* = A + B, where B must then satisfy tr(B) = 0 and X'BX = (0). Then, since tr(AB) = 0 and AX = (0), we have

    var(y'A*y) = 2σ⁴ tr(A*²) + 4σ²β'X'A*²Xβ
               = 2σ⁴{tr(A²) + tr(B²) + 2 tr(AB)} + 4σ²β'X'(A² + B² + AB + BA)Xβ
               = 2σ⁴{tr(A²) + tr(B²)} + 4σ²β'X'B²Xβ
               ≥ 2σ⁴ tr(A²) = var(y'Ay).
PROBLEMS

1. Consider the natural log function, f(x) = log(x).
   (a) Obtain the kth-order Taylor formula for f(1 + u) in powers of u.
   (b) Use the formula in part (a) with k = 5 to approximate log(1.1).

2. Suppose the function f of the 2 x 1 vector x is given by

   Give the second-order Taylor formula for f(0 + u) in powers of u1 and u2.

3. Suppose the 2 x 1 function f of the 3 x 1 vector x is given by

       f(x) =

   and the 2 x 1 function g of the 2 x 1 vector z is given by

       g(z) = (z2/z1, z1z2)'.
   Use the chain rule to compute (∂/∂x')y(x), where y(x) is the composite function defined by y(x) = g(f(x)).
4. Let A and B be m x m symmetric matrices of constants and x be an m x 1 vector of variables. Find the differential and first derivative of the function

       f(x) = x'Ax / x'Bx.

5. Let A and B be m x n matrices of constants and X be an n x m matrix of variables. Find the differential and derivative of
   (a) tr(AX),
   (b) tr(AXBX).

6. Let X be an m x m nonsingular matrix and A be an m x m matrix of constants. Find the differential and derivative of
   (a) |X|²,
   (b) tr(AX⁻¹).

7. Let X be an m x n matrix with rank(X) = n. Show that

8. Let X be an m x m matrix and n be a positive integer. Show that

9. Let A and B be n x m and m x n matrices of constants, respectively. If X is an m x m nonsingular matrix, find the derivatives of
   (a) vec(AXB),
   (b) vec(AX⁻¹B).

10. Show that if X is an m x m nonsingular matrix and X# is its adjoint matrix, then
11. Prove Corollary 8.1.1.

12. Let X be an m x m symmetric matrix of variables. For each of the following functions, find the Jacobian matrix

        ∂ vec(F) / ∂ v(X)'.

    (a) F(X) = AXA', where A is an m x m matrix of constants.
    (b) F(X) = XBX, where B is an m x m symmetric matrix of constants.

13. Let X be an m x m matrix having correlation structure; that is, X is a symmetric matrix of variables except that each of its diagonal elements is equal to one. Show that, if X is nonsingular, then

        ∂ v(X⁻¹) / ∂ v(X)' = -2 L̃_m (X⁻¹ ⊗ X⁻¹) L̃'_m.

14. Suppose that Y is an m x m symmetric matrix and ε is a scalar such that (I_m + εY)⁻¹ exists. Let (I_m + εY)^{1/2} be the symmetric square root of (I_m + εY), so that (I_m + εY)^{1/2}(I_m + εY)^{1/2} = I_m + εY. Using perturbation methods, show that

        (I_m + εY)^{1/2} = I_m + Σ_{i≥1} ε^i B_i,

    where

        B1 = Y/2,   B2 = -Y²/8,   B3 = Y³/16,   and   B4 = -5Y⁴/128.

15. Let S be an m x m sample covariance matrix, and suppose that the corresponding population covariance matrix, Ω, has each of its diagonal elements equal to one. Define Δ to be the difference between these two matrices; that is, Δ = S - Ω, so that S = Ω + Δ. Note that the population correlation matrix is then also Ω, while the sample correlation matrix is given by R = D_s^{-1/2} S D_s^{-1/2}, where D_s = diag(s11, ..., smm). Show that D_s^{-1/2} S D_s^{-1/2} has the
approximation R = Ω + C1 + C2 + C3, accurate up through third-order terms in the elements of Δ, where D_Δ = diag(δ11, ..., δmm).
16. Derive the results given in Theorem 8.7. First obtain expressions for B1, B2, and B3, and then obtain expressions for a1, a2, and a3.

17. Let X = diag(x1, ..., xm), where x1 ≥ ··· ≥ xm, and suppose that the lth diagonal element is distinct, so that x_i ≠ x_l if i ≠ l. Let λ1 ≥ ··· ≥ λm and γ1, ..., γm be the eigenvalues and corresponding eigenvectors of (X + U, I_m + V); that is, for each i,

        (X + U)γ_i = λ_i(I_m + V)γ_i,

    where U and V are m x m symmetric matrices. The purpose of this exercise is to obtain the first-order approximations λ_l = x_l + a_l and γ_l = c(e_l + b_l), where e_l is the lth column of I_m and c is a scale constant. Higher-order approximations can be found in Sugiura (1976). These approximations can be determined by using the eigenvalue-eigenvector equation just given along with the appropriate scale constraint on γ_l.

    (a) Show that a_l = u_ll - x_l v_ll.
    (b) Show that if c = 1 and γ_l'γ_l = 1, then

            b_il = -(u_li - x_l v_li)/(x_i - x_l),    for all i ≠ l,

        where b_il is the ith component of the vector b_l.
    (c) Show that if c = 1 and γ_l'(I_m + V)γ_l = 1, then

            b_il = -(u_li - x_l v_li)/(x_i - x_l),    for all i ≠ l.
    (d) Show that if c = x_l^{1/2} and γ_l'(I_m + V)γ_l = λ_l, then

            b_il = -x_l^{1/2}(u_li - x_l v_li)/(x_i - x_l),    for all i ≠ l.
18. Consider the function f of the 2 x 1 vector x given by

        f(x) = 2x1³ + x2³ - 6x1 - 27x2.

    (a) Determine the stationary points of f.
    (b) Identify each of the points in part (a) as a maximum, minimum, or saddle point.

19. For each of the following functions determine any local maxima or minima.
    (a) x1² + (1/2)x2² - 2x1x2 + x1 - 2x2 + 1.
    (b) x1² + (1/3)x1³ + x2² - 6x1 - 2x2.
    (c) x1² + 2x2² + x3² + 2x1x3 - 3x2 - x3.

20. Let a be an m x 1 vector and B be an m x m symmetric matrix, each containing constants. Let x be an m x 1 vector of variables.
    (a) Show that the function

        f(x) = x'Bx + a'x

    has stationary solutions given by

        x = -(1/2)B⁺a + (I_m - B⁺B)y,

    where y is an arbitrary m x 1 vector.
    (b) Show that if B is nonsingular, then there is only one stationary solution. When will this solution yield a maximum or a minimum?

21. If the Hessian matrix H_f of a function f is singular at a stationary point x, then we must take a closer look at the behavior of this function in the neighborhood of the point x to determine whether the point is a maximum, minimum, or a saddle point. For each of the functions below, show that 0 is a stationary point and that the Hessian matrix is singular at 0. In each case, determine whether 0 yields a maximum, minimum, or a saddle point.
    (a) x1⁴ + x2⁴.
    (b) x1²x2².
    (c) x1² - x2⁴.
22. Suppose that we have independent random samples from each of k multivariate normal distributions, with the ith distribution being N_m(μ_i, Ω). Thus, these distributions have possibly different mean vectors but identical covariance matrices. If the ith sample is denoted by x_i1, ..., x_in_i, show that the maximum likelihood estimators of μ_i and Ω are given by

        μ̂_i = x̄_i = n_i⁻¹ Σ_{j=1}^{n_i} x_ij,
        Ω̂ = n⁻¹ Σ_{i=1}^k Σ_{j=1}^{n_i} (x_ij - x̄_i)(x_ij - x̄_i)',

    where n = n1 + ··· + nk.

23. Consider the multiple regression model

        y = Xβ + ε,

    where y is N x 1, X is N x m, β is m x 1, and ε is N x 1. Suppose that rank(X) = m and ε ~ N_N(0, σ²I_N), so that y ~ N_N(Xβ, σ²I_N). Find the maximum likelihood estimates of β and σ².

24. Let f(x) be a real-valued convex function defined for all x ∈ S, where S is a convex subset of R^m. Show that the set T = {z = (x', y)': x ∈ S, y ≥ f(x)} is convex.

25. Suppose that f(x) and g(x) are convex functions both defined on the convex set S ⊆ R^m. Show that the function af(x) + bg(x) is convex if a and b are nonnegative scalars.

26. Prove the converse of Theorem 8.11; that is, show that if f(x) is defined and differentiable on the open convex set S and

        f(x) ≥ f(a) + (∂/∂a')f(a)(x - a)

    for all x ∈ S and a ∈ S, then f(x) is a convex function.

27. Let f(x) be a real-valued function defined for all x ∈ S, where S is an open convex subset of R^m, and suppose that f(x) is a twice differentiable function on S. Show that f(x) is a convex function if and only if the Hessian matrix H_f is nonnegative definite at each x ∈ S.

28. Let x be a 2 x 1 vector and consider the function f(x) = x1^a x2^{1-a} for all x ∈ S, where 0 < a < 1 and S = {x: x1 > 0, x2 > 0}.
    (a) Use the previous exercise to show that f(x) is a concave function.
    (b) Show that if y is a 2 x 1 random vector with finite first moments and satisfying P(y ∈ S) = 1, then

        E(y1^a y2^{1-a}) ≤ {E(y1)}^a {E(y2)}^{1-a}.
..., x_k, where x_i counts the number of times that outcome i occurs in the n trials. Then the random vector x = (x1, ..., xk)' has the multinomial distribution with probability function given by

        P(x1 = n1, ..., xk = nk) = {n!/(n1! ··· nk!)} p1^{n1} ··· pk^{nk},
where n1, ..., nk are nonnegative integers satisfying n1 + ··· + nk = n. Find the maximum likelihood estimate of p = (p1, ..., pk)'.

38. Suppose that the m x m positive definite covariance matrix Ω is partitioned in the form

        Ω = [ Ω11  Ω12 ]
            [ Ω21  Ω22 ],

    where Ω11 is m1 x m1, Ω22 is m2 x m2, and m1 + m2 = m. Suppose also that the m x 1 random vector x has covariance matrix Ω and is partitioned as x = (x1', x2')', where x1 is m1 x 1 and x2 is m2 x 1. If the m1 x 1 vector a and m2 x 1 vector b are vectors of constants, then the square of the correlation between the random variables u = a'x1 and v = b'x2 is given by

        f(a, b) = (a'Ω12b)² / {(a'Ω11a)(b'Ω22b)}.

    Show that the maximum value of f(a, b), that is, the maximum squared correlation between u and v, subject to the constraints

        a'Ω11a = 1,    b'Ω22b = 1,

    is the largest eigenvalue of Ω11⁻¹Ω12Ω22⁻¹Ω21 or, equivalently, the largest eigenvalue of Ω22⁻¹Ω21Ω11⁻¹Ω12. What are the vectors a and b that yield this maximum?
39. Consider the function f(P) = tr(PXP'D), where P is an m x m orthogonal matrix, and both X and D are m x m positive definite matrices. Further, suppose that D is diagonal with distinct, descending, positive diagonal elements; that is, D = diag(d1, ..., dm) with d1 > ··· > dm > 0.
    (a) By working with the Lagrange function

        L(P, Λ) = tr(PXP'D) + tr{Λ(PP' - I_m)},

    where Λ is a symmetric matrix of Lagrange multipliers, show that the stationary points of f(P) occur when PXP' is diagonal.
    (b) Use part (a) to show that

        max_{P: PP'=I} f(P) = Σ_{i=1}^m d_i λ_i(X)

    and

        min_{P: PP'=I} f(P) = Σ_{i=1}^m d_{m+1-i} λ_i(X),

    where λ1(X) ≥ ··· ≥ λm(X) > 0 are the eigenvalues of X.
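A numerical sketch of the bound in part (b), with assumed test matrices (not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
M = rng.standard_normal((m, m))
X = M @ M.T + m * np.eye(m)                  # positive definite
d = np.array([4.0, 3.0, 2.0, 1.0])           # d1 > ... > dm > 0
D = np.diag(d)

lam = np.sort(np.linalg.eigvalsh(X))[::-1]   # lambda_1 >= ... >= lambda_m
upper = float(d @ lam)

# Random orthogonal P never beat the bound sum_i d_i * lambda_i(X) ...
best = -np.inf
for _ in range(200):
    Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
    best = max(best, float(np.trace(Q @ X @ Q.T @ D)))
print(best <= upper + 1e-8)

# ... and the bound is attained when the rows of P are eigenvectors of X
# ordered so that PXP' is diagonal with descending diagonal elements.
w, V = np.linalg.eigh(X)
P = V[:, ::-1].T
print(np.isclose(np.trace(P @ X @ P.T @ D), upper))
```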
CHAPTER NINE

Some Special Topics Related to Quadratic Forms

1. INTRODUCTION

We have seen that if A is an m x m symmetric matrix and x is an m x 1 vector, then the function of x, x'Ax, is called a quadratic form in x. In many statistical applications, x is a random vector, while A is a matrix of constants. The most common situation is one in which x has as its distribution, or its asymptotic distribution, the multivariate normal distribution. In this chapter, we investigate some of the distributional properties of x'Ax in this setting. In particular, we are most interested in determining conditions under which x'Ax will have a chi-squared distribution.

2. SOME RESULTS ON IDEMPOTENT MATRICES

We have noted earlier that an m x m matrix A is said to be idempotent if A² = A. We will see in the next section that idempotent matrices play an essential role in the discussion of conditions under which a quadratic form in normal variates has a chi-squared distribution. Consequently, this section is devoted to establishing some of the basic results regarding idempotent matrices.
Theorem 9.1. Let A be an m x m idempotent matrix. Then

(a) I_m - A is also idempotent,
(b) each eigenvalue of A is 0 or 1,
(c) A is diagonalizable,
(d) rank(A) = tr(A).
Proof. Since A² = A, we have

    (I_m - A)² = I_m - 2A + A² = I_m - A,

and so (a) holds. Let λ be an eigenvalue of A corresponding to the eigenvector x, so that Ax = λx. Then since A² = A, we find that

    λx = Ax = A²x = A(λx) = λ²x,

which implies that

    λ(λ - 1)x = 0.

Since eigenvectors are nonnull vectors, we must have λ(λ - 1) = 0, and so (b) follows. Let r be the number of eigenvalues of A equal to one, so that m - r is the number of eigenvalues of A equal to zero. As a result, A - I_m must have r eigenvalues equal to zero and m - r eigenvalues equal to -1. By Theorem 4.8, (c) will follow if we can show that

    rank(A) = r,    rank(A - I_m) = m - r.        (9.1)

Now from Theorem 4.10, we know that the rank of any square matrix is at least as large as the number of its nonzero eigenvalues, so we must have

    rank(A) ≥ r,    rank(A - I_m) ≥ m - r.        (9.2)

But Corollary 2.12.1 gives

    rank(A) + rank(I_m - A) ≤ rank{A(I_m - A)} + m = rank{(0)} + m = m,

which together with (9.2) implies (9.1), so (c) is proven. Finally, (d) is an immediate consequence of (b) and (c). □

Since any matrix with at least one 0 eigenvalue has to be a singular matrix, a nonsingular idempotent matrix has all of its eigenvalues equal to 1. But the only diagonalizable matrix with all of its eigenvalues equal to 1 is the identity matrix; that is, the only nonsingular m x m idempotent matrix is I_m. If A is a diagonal matrix, that is, A = diag(a1, ..., am), then A² = diag(a1², ..., am²). Equating A and A², we find that a diagonal matrix is idempotent if and only if each diagonal element is 0 or 1.
Example 9.1. Although an idempotent matrix has each of its eigenvalues equal to 1 or 0, the converse is not true; that is, a matrix having only eigenvalues
of 1 and 0 need not be an idempotent matrix. For instance, it is easily verified that the matrix

    A = [ 0  1  0 ]
        [ 0  0  0 ]
        [ 0  0  1 ]

has eigenvalues 0 and 1 with multiplicities 2 and 1, respectively. However,

    A² = [ 0  0  0 ]
         [ 0  0  0 ]
         [ 0  0  1 ],

so that A is not idempotent.
The matrix A in the example above is not idempotent because it is not diagonalizable. In other words, an m x m matrix A is idempotent if and only if each of its eigenvalues is 0 or 1 and it is diagonalizable. In fact, we have the following special case.
Theorem 9.2. Let A be an m x m symmetric matrix. Then A is idempotent if and only if each eigenvalue of A is 0 or 1.

Proof. Let A = XΛX' be the spectral decomposition of A, so that X is an orthogonal matrix and Λ is diagonal. Then

    A² = XΛX'XΛX' = XΛ²X'.

Clearly, this equals A if and only if each diagonal element of Λ, that is, each eigenvalue of A, is 0 or 1. □

Our next result gives some conditions for the sum of two idempotent matrices and the product of two idempotent matrices to be idempotent.
Theorem 9.3. Let A and B be m x m idempotent matrices. Then

(a) A + B is idempotent if and only if AB = BA = (0),
(b) AB is idempotent if AB = BA.

Proof. Since A and B are idempotent, we have

    (A + B)² = A² + B² + AB + BA = A + B + AB + BA,

so that A + B will be idempotent if and only if

    AB = -BA.            (9.3)

Premultiplication of (9.3) by B and postmultiplication by A yields the identity

    (BA)² = -BA,         (9.4)

since A and B are idempotent. Similarly, premultiplying (9.3) by A and postmultiplying by B, we also find that

    (AB)² = -AB.         (9.5)

Thus, it follows from (9.4) and (9.5) that both -BA and -AB are idempotent matrices, and due to (9.3), so then is BA. Part (a) now follows since the null matrix is the only idempotent matrix whose negative is also idempotent. To prove (b), note that if A and B commute under multiplication, then

    (AB)² = ABAB = A(BA)B = A(AB)B = A²B² = AB,

and so the result follows. □
Example 9.2. The conditions given for (A + B) to be idempotent are necessary and sufficient, while the condition given for AB to be idempotent is only sufficient. We can illustrate that this second condition is not necessary through a simple example. Let A and B be defined as

    A = [ 1  1 ],    B = [ 0  0 ],
        [ 0  0 ]         [ 1  1 ]

and observe that A² = A and B² = B, so that A and B are idempotent. In addition, AB = A, so that AB is also idempotent. However, AB ≠ BA, since BA = B.
Most of the statistical applications involving idempotent matrices deal with symmetric idempotent matrices. For this reason, we end this section with some results for this special class of matrices. The first result gives some restrictions on the elements of a symmetric idempotent matrix.
Theorem 9.4. Suppose A is an m x m symmetric idempotent matrix. Then

(a) a_ii ≥ 0 for i = 1, ..., m,
(b) a_ii ≤ 1 for i = 1, ..., m,
(c) a_ij = a_ji = 0, for all j ≠ i, if a_ii = 0 or a_ii = 1.
Proof. Since A is idempotent and symmetric, it follows that

    a_ii = (A)_ii = (A²)_ii = (A'A)_ii = (A')_i·(A)_·i = Σ_{j=1}^m a_ji²,        (9.6)

which clearly must be nonnegative. In addition, from (9.6) we have

    a_ii ≥ a_ii²,

and thus (b) must hold. If a_ii = 0 or a_ii = 1, then a_ii = a_ii², and so we must have

    Σ_{j≠i} a_ji² = 0,

which, along with the symmetry of A, establishes (c). □

The following theorem is useful in those situations in which it is easier to verify an identity such as A^{i+1} = A^i than the identity A² = A.

Theorem 9.5. Suppose that for some positive integer i, the m x m symmetric matrix A satisfies A^{i+1} = A^i. Then A is an idempotent matrix.

Proof. If λ1, ..., λm are the eigenvalues of A, then λ1^{i+1}, ..., λm^{i+1} and λ1^i, ..., λm^i are the eigenvalues of A^{i+1} and A^i, respectively. But the identity A^{i+1} = A^i implies that λ_j^{i+1} = λ_j^i for j = 1, ..., m, so each λ_j must be either 0 or 1. The result now follows from Theorem 9.2. □
3. COCHRAN'S THEOREM

The following result, sometimes referred to as Cochran's Theorem [Cochran (1934)], will be very useful in establishing the independence of several different quadratic forms in the same normal variables.

Theorem 9.6. Let each of the m x m matrices A1, ..., Ak be symmetric and idempotent, and suppose that A1 + ··· + Ak = I_m. Then A_iA_j = (0) whenever i ≠ j.

Proof. Select any one of the matrices, say A_h, and denote its rank by r. Since A_h is symmetric and idempotent, there exists an orthogonal matrix P such that

    P'A_hP = diag(I_r, (0)).
For j ≠ h, define B_j = P'A_jP, and note that

    I_m = P'I_mP = P'(A_h + Σ_{j≠h} A_j)P = diag(I_r, (0)) + Σ_{j≠h} B_j,

or, equivalently,

    Σ_{j≠h} B_j = diag((0), I_{m-r}).

In particular, for l = 1, ..., r,

    Σ_{j≠h} (B_j)_ll = 0.

But clearly, B_j is symmetric and idempotent since A_j is, and so, from Theorem 9.4(a), its diagonal elements are nonnegative. Thus, we must have (B_j)_ll = 0 for each l = 1, ..., r, and this, along with Theorem 9.4(c), implies that B_j must be of the form

    B_j = diag((0), C_j),

where C_j is an (m - r) x (m - r) symmetric idempotent matrix. Now, for any j ≠ h,

    P'A_hA_jP = (P'A_hP)(P'A_jP) = diag(I_r, (0)) diag((0), C_j) = (0),

which can be true only if A_hA_j = (0), since P is nonsingular. Our proof is now complete, since h was arbitrary. □

Our next result is an extension of Cochran's Theorem.
Theorem 9.7. Let A1, ..., Ak be m x m symmetric matrices and define A = A1 + ··· + Ak. Consider the following statements.
(a) A_i is idempotent for i = 1, ..., k.
(b) A is idempotent.
(c) A_iA_j = (0), for all i ≠ j.
Then if any two of these conditions hold, the third condition must also hold.
Proof. First we show that (a) and (b) imply (c). Since A is symmetric and idempotent, there exists an orthogonal matrix P such that

    P'AP = P'(A1 + ··· + Ak)P = diag(I_r, (0)),        (9.7)

where r = rank(A). Let B_i = P'A_iP for i = 1, ..., k, and note that B_i is symmetric and idempotent. Thus, it follows from (9.7) and Theorem 9.4 that B_i must be of the form diag(C_i, (0)), where the r x r matrix C_i also must be symmetric and idempotent. But (9.7) also implies that

    C1 + ··· + Ck = I_r.

Consequently, C1, ..., Ck satisfy the conditions of Theorem 9.6, and so C_iC_j = (0) for every i ≠ j. From this we get B_iB_j = (0) and, hence, A_iA_j = (0) for every i ≠ j, as is required. That (a) and (c) imply (b) follows immediately, since

    A² = (Σ_{i=1}^k A_i)² = Σ_{i=1}^k A_i² = Σ_{i=1}^k A_i = A.

Finally, we must prove that (b) and (c) imply (a). If (c) holds, then A_iA_j = A_jA_i for all i ≠ j, and so by Theorem 4.16, the matrices A1, ..., Ak can be simultaneously diagonalized; that is, there exists an orthogonal matrix Q such that Q'A_iQ = D_i, where each of the matrices D1, ..., Dk is diagonal. Further,

    D_iD_j = Q'A_iQQ'A_jQ = Q'A_iA_jQ = (0)            (9.8)
for every i ≠ j. Now since A is symmetric and idempotent, so also is the diagonal matrix

    Q'AQ = D1 + ··· + Dk.

As a result, each diagonal element of Q'AQ must be either 0 or 1, and due to (9.8) the same can be said of the diagonal elements of D1, ..., Dk. Thus, for each i, D_i is symmetric and idempotent, and hence so is A_i = QD_iQ'. This completes the proof. □

Suppose that the three conditions given in Theorem 9.7 hold. Then (a) implies that tr(A_i) = rank(A_i), and (b) implies that

    rank(A) = tr(A) = tr(Σ_{i=1}^k A_i) = Σ_{i=1}^k tr(A_i) = Σ_{i=1}^k rank(A_i).

Thus, we have shown that the conditions in Theorem 9.7 imply the fourth condition

    (d) rank(A) = Σ_{i=1}^k rank(A_i).
Conversely, suppose that conditions (b) and (d) hold. We will show that these imply (a) and (c). Let H = diag(A1, ..., Ak) and F = 1_k ⊗ I_m, so that A = F'HF. Then (d) can be written rank(F'HF) = rank(H), and so it follows from Theorem 5.24 that F(F'HF)⁻F' is a generalized inverse of H for any generalized inverse (F'HF)⁻ of F'HF. But since A is idempotent, AI_mA = A, and hence I_m is a generalized inverse of A = F'HF. Thus, FF' is a generalized inverse of H, yielding the equation HFF'H = H, which in partitioned form is

    [ A1²    A1A2   ···  A1Ak ]     [ A1   (0)  ···  (0) ]
    [ A2A1   A2²    ···  A2Ak ]  =  [ (0)  A2   ···  (0) ]
    [  ⋮      ⋮            ⋮  ]     [  ⋮    ⋮         ⋮  ]
    [ AkA1   AkA2   ···  Ak²  ]     [ (0)  (0)  ···  Ak  ]

This immediately gives conditions (a) and (c). The following result summarizes the relationship among these four conditions.
Corollary 9.7.1. Let A1, ..., Ak be m x m symmetric matrices and define A = A1 + ··· + Ak. Consider the following statements.

(a) A_i is idempotent for i = 1, ..., k.
(b) A is idempotent.
(c) A_iA_j = (0), for all i ≠ j.
(d) rank(A) = Σ_{i=1}^k rank(A_i).

All four of the conditions hold if any two of (a), (b), and (c) hold, or if (b) and (d) hold.
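A small numeric illustration (not from the text): the familiar analysis-of-variance decomposition I_n = J_n/n + (I_n - J_n/n), where J_n is the matrix of ones, satisfies all four conditions:

```python
import numpy as np

n = 5
A1 = np.full((n, n), 1.0 / n)       # projection onto the equiangular line
A2 = np.eye(n) - A1                 # projection onto deviations from the mean

print(np.allclose(A1 @ A1, A1), np.allclose(A2 @ A2, A2))   # (a): idempotent
print(np.allclose(A1 + A2, np.eye(n)))                      # (b): sum is I_n
print(np.allclose(A1 @ A2, 0))                              # (c): A1 A2 = (0)
ranks = np.linalg.matrix_rank(A1) + np.linalg.matrix_rank(A2)
print(ranks == n)                                           # (d): 1 + (n-1) = n
```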
4. DISTRIBUTION OF QUADRATIC FORMS IN NORMAL VARIATES

The relationship between the normal and chi-squared distributions is fundamental in obtaining the distribution of a quadratic form in normal random variables. Recall that if z1, ..., zr are independent random variables with z_i ~ N(0, 1) for each i, then

    Σ_{i=1}^r z_i² ~ χ²_r.

This is used in our first theorem to determine when the quadratic form x'Ax has a chi-squared distribution if the components of x are independently distributed, each having the N(0, 1) distribution.
Theorem 9.8. Let x ~ N_m(0, I_m), and suppose that the m x m matrix A is symmetric, idempotent, and has rank r. Then x'Ax ~ χ²_r.

Proof. Since A is symmetric and idempotent, there exists an orthogonal matrix P such that

    A = PDP',

where D = diag(I_r, (0)). Let z = P'x and note that since x ~ N_m(0, I_m),

    E(z) = E(P'x) = P'E(x) = P'0 = 0,
    var(z) = var(P'x) = P'{var(x)}P = P'I_mP = P'P = I_m,

and so z ~ N_m(0, I_m); that is, the components of z are, like the components of x, independent standard normal random variables. Now, due to the form of D, we find that

    x'Ax = x'PDP'x = z'Dz = Σ_{i=1}^r z_i²,

and the result then follows. □
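A Monte Carlo sketch of Theorem 9.8 (not from the text): for a symmetric idempotent A of rank r, the sample mean and variance of x'Ax should be near r and 2r, the chi-squared moments.

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 5, 2

# Build a symmetric idempotent A of rank r from an orthonormal basis Q.
Q, _ = np.linalg.qr(rng.standard_normal((m, r)))
A = Q @ Q.T

x = rng.standard_normal((100_000, m))        # rows are N_m(0, I_m) draws
q = np.einsum('ni,ij,nj->n', x, A, x)        # x'Ax for each draw

print(q.mean(), q.var())                     # ~ r = 2 and ~ 2r = 4
```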
o
lti mu the ich wh in m re eo th xt ne e th of se ca Th e re su lt ab ov e is a special x. tri ma e nc ria va co lar gu in ns no l ra ne ge a s ha n io ut ib str variate nOllnal di d an x, tri ma ite fin de e iv sit po a is 0 e er wh , O} , Theorem 9.9. Le t x  Nm(O r, = } AO k( ran d an t ten po em id is AO If x. tri ma let A be an m X m sy m m etr ic then x' Ax  x~. tsa T ix atr m lar gu in ns no a s ist ex e er th , ite fin de e Pr oo f Si nc e 0 is positiv d an 0, = } (x 1E T= z} E( en th , 1x T= z e fin de isfying 0 = TT '. If we
z of s ln tel in en itt wr be n ca Ax x' lln fO tic ra ad qu e so th at z  Nm(O, 1m}. Th •
SI nc e
us io ev pr e th of s on iti nd co the s fie tis sa AT T' at th ow sh to All th at re m ain s is ce sin t ten po em id d an is, A ce sin ic, etr m m sy is theorem. Clearly, T' A T
(T 'A T} 2 = T' AT T' AT = T' AO AT = T' AT , wh er e th e qu en ce of T' A T and
ens co a is ich wh A, = A AO y tit en id the m fro ws llo fo last equality ce sin , lly na Fi . lar gu in ns no is 0 d an t ten po em id is AO the fa ct th at AO are idempotent, we have
r, = O} (A nk ra = } AO tr( = '} TT A tr( = } AT T' tr( = ) AT ' (T ra nk
and so the pr oo f is co m pl ete .
o
gu sin a s ha at th r cto ve a in lln fO tic ra ad qu a ve ha It is no t un co m m on to us io ev pr e th es liz ra ne ge lt su re xt ne r Ou n. io ut lar m ul tiv ar iat e nOllnal distrib theorem to th is situation.
Theorem 9.10. Let x ~ N_m(0, Ω), where Ω is positive semidefinite, and suppose that A is an m x m symmetric matrix. If ΩAΩAΩ = ΩAΩ and tr(AΩ) = r, then x'Ax ~ χ²_r.

Proof. Let n = rank(Ω), where n < m. Then there exists an m x m orthogonal matrix P = [P1 P2] such that

    P'ΩP = [ Λ    (0) ]
           [ (0)  (0) ],

where P1 is m x n and Λ is an n x n nonsingular diagonal matrix. Define

    z = P'x,

and note that since P'0 = 0 and P'ΩP = diag(Λ, (0)), z ~ N_m(0, diag(Λ, (0))). But this means that z = (z1', 0')', where z1 has the nonsingular distribution N_n(0, Λ). Now

    x'Ax = x'PP'APP'x = z'P'APz = z1'P1'AP1z1,

and so the proof will be complete if we can show that the symmetric matrix P1'AP1 satisfies the conditions of the previous theorem, namely, that P1'AP1Λ is idempotent and rank(P1'AP1Λ) = r. Since ΩAΩAΩ = ΩAΩ, we have

    (Λ^{1/2}P1'AP1Λ^{1/2})³ = Λ^{1/2}P1'A(P1ΛP1')A(P1ΛP1')AP1Λ^{1/2}
                            = Λ^{1/2}P1'AΩAΩAP1Λ^{1/2}
                            = Λ^{1/2}P1'AΩAP1Λ^{1/2}
                            = Λ^{1/2}P1'A(P1ΛP1')AP1Λ^{1/2}
                            = (Λ^{1/2}P1'AP1Λ^{1/2})²,

and so the idempotency of Λ^{1/2}P1'AP1Λ^{1/2} follows from Theorem 9.5. However, this also establishes the idempotency of P1'AP1Λ, since Λ is nonsingular. Its rank is r, since

    rank(P1'AP1Λ) = tr(P1'AP1Λ) = tr(AP1ΛP1') = tr(AΩ) = r.
To this point, all of our results have dealt with normal distributions having the zero mean vector. In some applications, such as the determination of nonnull distributions in hypothesis testing situations, we encounter quadratic forms in normal vectors having nonzero means. The next two theorems are helpful
381
DISTRIBUTION OF QUADRATIC FORMS IN NORMAL VARIATES
in determining whether such a quadratic form has a chi-squared distribution. The proof of the first of these two theorems, which is very similar to that of Theorem 9.9, is left to the reader. It utilizes the relationship between the normal distribution and the noncentral chi-squared distribution; that is, if y1, ..., y_r are independently distributed with y_i ~ N(μ_i, 1), then

Σ_{i=1}^{r} y_i² ~ χ²_r(λ),

where the noncentrality parameter of this noncentral chi-squared distribution is given by

λ = ½ Σ_{i=1}^{r} μ_i².
Theorem 9.11. Let x ~ N_m(μ, Ω), where Ω is a positive definite matrix, and let A be an m × m symmetric matrix. If AΩ is idempotent and rank(AΩ) = r, then x'Ax ~ χ²_r(λ), where λ = ½μ'Aμ.
Theorem 9.12. Let x ~ N_m(μ, Ω), where Ω is positive semidefinite of rank n, and suppose that A is an m × m symmetric matrix. Then x'Ax ~ χ²_r(λ), where λ = ½μ'Aμ, if

(a) ΩAΩAΩ = ΩAΩ,
(b) μ'AΩAΩ = μ'AΩ,
(c) μ'AΩAμ = μ'Aμ,
(d) tr(AΩ) = r.
Proof. Let P1, P2, and Λ be defined as in the proof of Theorem 9.10, so that Ω = P1ΛP1'. Put C = [P1Λ^{-1/2} P2] and note that

z = (z1', z2')' = C'x ~ N_m( (Λ^{-1/2}P1'μ, P2'μ)', diag(I_n, (0)) ).

In other words, z1 ~ N_n(Λ^{-1/2}P1'μ, I_n), while z2 = P2'μ with probability one. Since C^{-1} = (P1Λ^{1/2}, P2)', we have

x'Ax = x'CC^{-1}A(C')^{-1}C'x = z'C^{-1}A(C')^{-1}z

= [z1'  μ'P2] [ Λ^{1/2}P1'AP1Λ^{1/2}   Λ^{1/2}P1'AP2 ; P2'AP1Λ^{1/2}   P2'AP2 ] [z1 ; P2'μ]

= z1'Λ^{1/2}P1'AP1Λ^{1/2}z1 + μ'P2P2'AP2P2'μ + 2μ'P2P2'AP1Λ^{1/2}z1.  (9.9)
But conditions (a)-(c) imply the identities

(i) P1'AΩAP1 = P1'AP1,
(ii) μ'P2P2'AΩAP1 = μ'P2P2'AP1,
(iii) μ'P2P2'AΩAΩAP2P2'μ = μ'P2P2'AΩAP2P2'μ = μ'P2P2'AP2P2'μ;

in particular, (a) implies (i), (b) and (i) imply (ii), while (iii) follows from (c), (i), and (ii). Utilizing these identities in (9.9), we obtain
x'Ax = z1'Λ^{1/2}P1'AP1Λ^{1/2}z1 + μ'P2P2'AΩAΩAP2P2'μ + 2μ'P2P2'AΩAP1Λ^{1/2}z1
= (z1 + Λ^{1/2}P1'AP2P2'μ)'Λ^{1/2}P1'AP1Λ^{1/2}(z1 + Λ^{1/2}P1'AP2P2'μ)
= w'A*w,

where w = z1 + Λ^{1/2}P1'AP2P2'μ ~ N_n(Λ^{-1/2}P1'μ + Λ^{1/2}P1'AP2P2'μ, I_n), and since A* = Λ^{1/2}P1'AP1Λ^{1/2} is idempotent, a consequence of (i), we may apply Theorem 9.11; that is, w'A*w ~ χ²_r(λ), where

λ = ½(Λ^{-1/2}P1'μ + Λ^{1/2}P1'AP2P2'μ)'Λ^{1/2}P1'AP1Λ^{1/2}(Λ^{-1/2}P1'μ + Λ^{1/2}P1'AP2P2'μ)
= ½(μ'P1P1'AP1P1'μ + μ'P2P2'AΩAΩAP2P2'μ + 2μ'P1P1'AΩAP2P2'μ)
= ½(μ'P1P1'AP1P1'μ + μ'P2P2'AP2P2'μ + 2μ'P1P1'AP2P2'μ) = ½μ'Aμ.

This completes the proof. □
A matrix A satisfying conditions (a), (b), and (c) of Theorem 9.12 is Ω⁺, the Moore-Penrose inverse of Ω. That is, if x ~ N_m(μ, Ω), then x'Ω⁺x will have a chi-squared distribution, since the identity Ω⁺ΩΩ⁺ = Ω⁺ ensures that conditions (a), (b), and (c) hold. The degrees of freedom r = rank(Ω) since rank(Ω⁺Ω) = rank(Ω). All of the theorems presented in this section give sufficient conditions for a quadratic form to have a chi-squared distribution. Actually, in each case, the stated conditions are necessary conditions as well. This is most easily proven using moment generating functions. For details on this, the interested reader is referred to Mathai and Provost (1992) or Searle (1971).
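Since Ω⁺ΩΩ⁺ = Ω⁺, conditions (a)-(c) of Theorem 9.12 can be confirmed numerically for the choice A = Ω⁺; the sketch below (assuming NumPy is available; the dimensions and seed are arbitrary) does so for a randomly generated singular Ω using `numpy.linalg.pinv`.

```python
import numpy as np

rng = np.random.default_rng(5)
m, rank = 5, 3
L = rng.normal(size=(m, rank))
Omega = L @ L.T                    # positive semidefinite of rank 3
A = np.linalg.pinv(Omega)          # Moore-Penrose inverse Omega^+
mu = rng.normal(size=m)

# Conditions (a)-(c) of Theorem 9.12 hold for A = Omega^+
assert np.allclose(Omega @ A @ Omega @ A @ Omega, Omega @ A @ Omega)   # (a)
assert np.allclose(mu @ A @ Omega @ A @ Omega, mu @ A @ Omega)         # (b)
assert np.allclose(mu @ A @ Omega @ A @ mu, mu @ A @ mu)               # (c)
# and the degrees of freedom: tr(Omega^+ Omega) = rank(Omega)
assert np.isclose(np.trace(A @ Omega), rank)
```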
Example 9.3. Let x1, ..., xn be a random sample from a normal distribution with mean μ and variance σ²; that is, the x_i's are independent random variables, each having the distribution N(μ, σ²). The sample variance s² is given by

s² = (n − 1)^{-1} Σ_{i=1}^{n} (x_i − x̄)².

We will use the results of this section to show that

t = (n − 1)s²/σ² ~ χ²_{n−1}.

Define the n × 1 vector x = (x1, ..., xn)' so that x ~ N_n(μ1_n, σ²I_n). Note that if the n × n matrix A = (I_n − n^{-1}1_n1_n')/σ², then

x'Ax = {x'x − n^{-1}(1_n'x)²}/σ² = σ^{-2}{Σ_{i=1}^{n} x_i² − n^{-1}(Σ_{i=1}^{n} x_i)²} = σ^{-2} Σ_{i=1}^{n} (x_i − x̄)² = (n − 1)s²/σ² = t,

and so t is a quadratic form in the random vector x. The matrix A(σ²I_n) = σ²A is idempotent since

(σ²A)² = (I_n − n^{-1}1_n1_n')² = I_n − 2n^{-1}1_n1_n' + n^{-2}1_n1_n'1_n1_n' = I_n − n^{-1}1_n1_n' = σ²A,

and so by Theorem 9.11, t has a chi-squared distribution. This chi-squared distribution has n − 1 degrees of freedom since

tr(σ²A) = tr(I_n) − n^{-1}tr(1_n1_n') = n − 1,

and the noncentrality parameter is given by

λ = ½(μ1_n)'A(μ1_n) = ½(μ²/σ²)1_n'(I_n − n^{-1}1_n1_n')1_n = 0.

Thus, we have shown that t ~ χ²_{n−1}.
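The algebra of this example is easy to confirm numerically. The sketch below (assuming NumPy is available; the sample values and parameters are arbitrary) checks that x'Ax reproduces (n − 1)s²/σ² and that σ²A is idempotent with trace n − 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma2 = 8, 2.0, 4.0
x = rng.normal(mu, np.sqrt(sigma2), size=n)

# A = (I_n - n^{-1} 1_n 1_n') / sigma^2 as in Example 9.3
one = np.ones((n, 1))
A = (np.eye(n) - one @ one.T / n) / sigma2

t = x @ A @ x                      # the quadratic form x'Ax
s2 = x.var(ddof=1)                 # sample variance
assert np.isclose(t, (n - 1) * s2 / sigma2)

B = sigma2 * A                     # A(sigma^2 I_n) = sigma^2 A
assert np.allclose(B @ B, B)       # idempotent
assert np.isclose(np.trace(B), n - 1)   # degrees of freedom n - 1
```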
5. INDEPENDENCE OF QUADRATIC FORMS

We now consider the situation in which we have several different quadratic forms, each a function of the same multivariate normal vector. In some settings, it is important to be able to determine whether or not these quadratic forms are distributed independently of one another. For instance, this is useful in the partitioning of chi-squared random variables as well as in the formation of ratios having an F distribution. We begin with the following basic result regarding the statistical independence of two quadratic forms in the same normal vector.

Theorem 9.13. Let x ~ N_m(μ, Ω), where Ω is positive definite, and suppose that A and B are m × m symmetric matrices. If AΩB = (0), then x'Ax and x'Bx are independently distributed.
Proof. Since Ω is positive definite, there exists a nonsingular matrix T such that Ω = TT'. Define G = T'AT and H = T'BT, and note that if AΩB = (0), then

GH = (T'AT)(T'BT) = T'AΩBT = T'(0)T = (0).  (9.10)

Consequently, due to the symmetry of G and H, we also have
(0) = (0)' = (GH)' = H'G' = HG,

and so we have established that GH = HG. From Theorem 4.15, we know that there exists an orthogonal matrix P that simultaneously diagonalizes G and H; that is, for some diagonal matrices C and D,

P'GP = P'T'ATP = C,   P'HP = P'T'BTP = D.  (9.11)
But using (9.10) and (9.11), we find that

(0) = GH = PCP'PDP' = PCDP',

which can only be true if CD = (0). Since C and D are diagonal matrices, this means that if the ith diagonal element of one of these matrices is nonzero, the ith diagonal element of the other must be zero. As a result, by choosing P appropriately, we may obtain C and D in the form C = diag(c1, ..., c_{m1}, 0, ..., 0) and D = diag(0, ..., 0, d_{m1+1}, ..., d_m) for some integer m1. If we let y = P'T^{-1}x, then our two quadratic forms simplify as

x'Ax = x'T'^{-1}PP'T'ATPP'T^{-1}x = y'Cy = Σ_{i=1}^{m1} c_i y_i²,

and

x'Bx = x'T'^{-1}PP'T'BTPP'T^{-1}x = y'Dy = Σ_{i=m1+1}^{m} d_i y_i²;
that is, the first quadratic form is a function only of y1, ..., y_{m1}, while the second quadratic form is a function of y_{m1+1}, ..., y_m. The result now follows from the independence of y1, ..., y_m, a consequence of the fact that y is normal and

var(y) = var(P'T^{-1}x) = P'T^{-1}ΩT'^{-1}P = P'T^{-1}TT'T'^{-1}P = P'P = I_m. □
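The mechanics of this proof can be illustrated numerically: starting from diagonal G and H with complementary supports, the matrices A = T'^{-1}GT^{-1} and B = T'^{-1}HT^{-1} satisfy AΩB = (0), and the two quadratic forms depend on disjoint coordinates of y = P'T^{-1}x (here P = I since G and H are already diagonal). A sketch, assuming NumPy; all numeric choices are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 4
T = rng.normal(size=(m, m))        # nonsingular with probability one
Omega = T @ T.T                    # positive definite

# Diagonal G, H with complementary supports, mapped back through T:
G = np.diag([1.0, 2.0, 0.0, 0.0])
H = np.diag([0.0, 0.0, 3.0, 4.0])
Tinv = np.linalg.inv(T)
A = Tinv.T @ G @ Tinv
B = Tinv.T @ H @ Tinv

assert np.allclose(A @ Omega @ B, 0)   # hypothesis of Theorem 9.13

# x'Ax depends only on y1, y2 and x'Bx only on y3, y4, where y = T^{-1}x
x = rng.normal(size=m)
y = Tinv @ x
assert np.isclose(x @ A @ x, y[:2] @ np.diag([1.0, 2.0]) @ y[:2])
assert np.isclose(x @ B @ x, y[2:] @ np.diag([3.0, 4.0]) @ y[2:])
```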
Example 9.4. Suppose that x1, ..., x_k are independently distributed with x_i = (x_{i1}, ..., x_{in})' ~ N_n(μ1_n, σ²I_n) for each i. Let t1 and t2 be the random quantities defined by

t1 = n Σ_{i=1}^{k} (x̄_i − x̄)²,   t2 = Σ_{i=1}^{k} Σ_{j=1}^{n} (x_{ij} − x̄_i)²,
where

x̄_i = n^{-1} Σ_{j=1}^{n} x_{ij},   x̄ = k^{-1} Σ_{i=1}^{k} x̄_i.
Note that t1 and t2 are the formulas for the sum of squares for treatments and the sum of squares for error in a balanced one-way classification model (Example 7.4). Now t1 can be expressed as

t1 = n{Σ_{i=1}^{k} x̄_i² − k^{-1}(Σ_{i=1}^{k} x̄_i)²},

where, if we define x = (x1', ..., x_k')', then x ~ N_{kn}(μ, Ω) with μ = 1_k ⊗ μ1_n = μ1_{kn} and Ω = I_k ⊗ σ²I_n = σ²I_{kn}, and the vector of sample means satisfies x̃ = (x̄1, ..., x̄_k)' = n^{-1}(I_k ⊗ 1_n')x. It follows that

t1 = n x̃'(I_k − k^{-1}1_k1_k')x̃ = n^{-1}x'{(I_k ⊗ 1_n)(I_k − k^{-1}1_k1_k')(I_k ⊗ 1_n')}x = n^{-1}x'{(I_k − k^{-1}1_k1_k') ⊗ 1_n1_n'}x = x'A1x,

where A1 = n^{-1}{(I_k − k^{-1}1_k1_k') ⊗ 1_n1_n'}. Since (1_n1_n')² = n1_n1_n' and (I_k − k^{-1}1_k1_k')² = I_k − k^{-1}1_k1_k', we find that A1 is idempotent and hence so is (A1/σ²)Ω. Thus, by Theorem 9.11, x'(A1/σ²)x = t1/σ² has a chi-squared distribution. This distribution is central since λ = ½μ'A1μ/σ² = 0, which follows from the fact that
A1μ = n^{-1}μ{(I_k − k^{-1}1_k1_k') ⊗ 1_n1_n'}(1_k ⊗ 1_n) = n^{-1}μ{(I_k − k^{-1}1_k1_k')1_k ⊗ 1_n1_n'1_n} = μ{(1_k − 1_k) ⊗ 1_n} = 0,

while its degrees of freedom is given by

tr{(A1/σ²)Ω} = tr(A1) = n^{-1}tr(I_k − k^{-1}1_k1_k')tr(1_n1_n') = n^{-1}(k − 1)n = k − 1.
Turning to t2, observe that it can be written as

t2 = Σ_{i=1}^{k} Σ_{j=1}^{n} x_{ij}² − n^{-1} Σ_{i=1}^{k} (Σ_{j=1}^{n} x_{ij})² = Σ_{i=1}^{k} x_i'(I_n − n^{-1}1_n1_n')x_i = x'{I_k ⊗ (I_n − n^{-1}1_n1_n')}x = x'A2x,

where A2 = I_k ⊗ (I_n − n^{-1}1_n1_n'). Clearly, A2 is idempotent since (I_n − n^{-1}1_n1_n') is idempotent. Thus, (A2/σ²)Ω is idempotent and so x'(A2/σ²)x = t2/σ² also has a chi-squared distribution. In particular, t2/σ² ~ χ²_{k(n−1)} since

tr{(A2/σ²)Ω} = tr(A2) = tr{I_k ⊗ (I_n − n^{-1}1_n1_n')} = tr(I_k)tr(I_n − n^{-1}1_n1_n') = k(n − 1),
and

A2μ = {I_k ⊗ (I_n − n^{-1}1_n1_n')}(1_k ⊗ μ1_n) = 1_k ⊗ μ(I_n − n^{-1}1_n1_n')1_n = 1_k ⊗ μ(1_n − 1_n) = 0,

thereby guaranteeing that λ = ½μ'A2μ/σ² = 0. Finally, we establish the independence of t1 and t2 by using Theorem 9.13. This simply involves verifying that (A1/σ²)Ω(A2/σ²) = A1A2/σ² = (0), which is an immediate consequence of the fact that

1_n1_n'(I_n − n^{-1}1_n1_n') = (0).
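The Kronecker-product construction of A1 and A2 can be verified directly; the sketch below (assuming NumPy is available; k and n are arbitrary) checks idempotency, the degrees of freedom, the condition A1A2 = (0), and that the two quadratic forms reproduce the treatment and error sums of squares.

```python
import numpy as np

k, n = 3, 5
Ik, In = np.eye(k), np.eye(n)
Jk = np.ones((k, k)) / k           # k^{-1} 1_k 1_k'
Jn = np.ones((n, n)) / n           # n^{-1} 1_n 1_n'

A1 = np.kron(Ik - Jk, np.ones((n, n))) / n
A2 = np.kron(Ik, In - Jn)

assert np.allclose(A1 @ A1, A1)               # A1 idempotent
assert np.allclose(A2 @ A2, A2)               # A2 idempotent
assert np.isclose(np.trace(A1), k - 1)        # df of t1/sigma^2
assert np.isclose(np.trace(A2), k * (n - 1))  # df of t2/sigma^2
assert np.allclose(A1 @ A2, 0)                # independence (Theorem 9.13)

# x'A1x and x'A2x match the treatment and error sums of squares
rng = np.random.default_rng(2)
x = rng.normal(size=(k, n))                   # row i holds x_i
xbar_i, xbar = x.mean(axis=1), x.mean()
t1 = n * np.sum((xbar_i - xbar) ** 2)
t2 = np.sum((x - xbar_i[:, None]) ** 2)
xv = x.reshape(-1)                            # x = (x_1', ..., x_k')'
assert np.isclose(xv @ A1 @ xv, t1)
assert np.isclose(xv @ A2 @ xv, t2)
```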
Example 9.5. Let us return to the general regression model

y = Xβ + ε,

where y and ε are N × 1, X is N × m, and β is m × 1. Suppose that β and X are partitioned as β = (β1', β2')' and X = (X1 X2), where β1 is m1 × 1, β2 is m2 × 1, and we wish to test the hypothesis that β2 = 0. We will assume that each component of β2 is estimable, since this test would not be meaningful otherwise. It is easily shown that this then implies that rank(X1) = r − m2, where r = rank(X). A test of β2 = 0 can be constructed by comparing the sum of squared errors for the reduced model y = X1β1 + ε, which is

t1 = y'(I_N − X1(X1'X1)⁻X1')y,

to the sum of squared errors for the complete model, which is given by

t2 = y'(I_N − X(X'X)⁻X')y.

Now if ε ~ N_N(0, σ²I_N), then y ~ N_N(Xβ, σ²I_N). Thus, by applying Theorem 9.11 and using the fact that X(X'X)⁻X'X1 = X1, we find that (t1 − t2)/σ² is chi-squared since
{(X(X'X)⁻X' − X1(X1'X1)⁻X1')/σ²}(σ²I_N){(X(X'X)⁻X' − X1(X1'X1)⁻X1')/σ²} = (X(X'X)⁻X' − X1(X1'X1)⁻X1')/σ².

In particular, if β2 = 0, (t1 − t2)/σ² ~ χ²_{m2}, since

tr{X(X'X)⁻X' − X1(X1'X1)⁻X1'} = tr{X(X'X)⁻X'} − tr{X1(X1'X1)⁻X1'} = r − (r − m2) = m2,

and

(X(X'X)⁻X' − X1(X1'X1)⁻X1')X1β1/σ² = (X1β1 − X1β1)/σ² = 0.

By a similar application of Theorem 9.11, we observe that t2/σ² ~ χ²_{N−r}. In addition, it follows from Theorem 9.13 that (t1 − t2)/σ² and t2/σ² are independently distributed since

{(X(X'X)⁻X' − X1(X1'X1)⁻X1')/σ²}(σ²I_N){(I_N − X(X'X)⁻X')/σ²} = (0).
This then permits the construction of an F statistic for testing β2 = 0; that is, if β2 = 0, then the statistic

F = {(t1 − t2)/m2} / {t2/(N − r)}

has the F distribution with m2 and N − r degrees of freedom.

The proof of the next result, which is very similar to the proof of Theorem 9.13, is left to the reader as an exercise.
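The projection-matrix identities used in Example 9.5 are easy to confirm numerically. In the sketch below (assuming NumPy; the design matrix is randomly generated and full rank, so the generalized inverses reduce to ordinary inverses), X(X'X)⁻X' − X1(X1'X1)⁻X1' is checked to be idempotent with trace m2 and orthogonal to I_N − X(X'X)⁻X'.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m1, m2 = 20, 2, 2
X1 = rng.normal(size=(N, m1))
X2 = rng.normal(size=(N, m2))
X = np.hstack([X1, X2])
r = np.linalg.matrix_rank(X)                 # = m1 + m2 here

P  = X  @ np.linalg.pinv(X.T  @ X)  @ X.T    # X(X'X)^- X'
P1 = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T   # X1(X1'X1)^- X1'
Dm = P - P1

assert np.allclose(Dm @ Dm, Dm)              # idempotent
assert np.isclose(np.trace(Dm), m2)          # df = m2
assert np.allclose(Dm @ (np.eye(N) - P), 0)  # independence with t2

beta1 = rng.normal(size=m1)
y = X1 @ beta1 + rng.normal(size=N)          # model with beta2 = 0
t1 = y @ (np.eye(N) - P1) @ y
t2 = y @ (np.eye(N) - P)  @ y
F = ((t1 - t2) / m2) / (t2 / (N - r))        # ~ F(m2, N - r) when beta2 = 0
assert F >= 0
```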
Theorem 9.14. Let x ~ N_m(μ, Ω), where Ω is positive definite, and suppose that A is an m × m symmetric matrix while B is an n × m matrix. If BΩA = (0), then x'Ax and Bx are independently distributed.

Example 9.6. Suppose that we have a random sample x1, ..., xn from a normal distribution with mean μ and variance σ². In Example 9.3, it was shown
that (n − 1)s²/σ² ~ χ²_{n−1}, where s², the sample variance, is given by

s² = (n − 1)^{-1} Σ_{i=1}^{n} (x_i − x̄)².

We will now use Theorem 9.14 to show that the sample mean

x̄ = n^{-1} Σ_{i=1}^{n} x_i

is independently distributed of s². In Example 9.3, we saw that s² is a scalar multiple of the quadratic form x'Ax, where A = (I_n − n^{-1}1_n1_n')/σ² and x = (x1, ..., xn)' ~ N_n(μ1_n, σ²I_n). On the other hand, x̄ can be expressed as x̄ = Bx, where B = n^{-1}1_n'. Consequently, the independence of x̄ and s² follows from the fact that

BΩA = n^{-1}1_n'(σ²I_n){(I_n − n^{-1}1_n1_n')/σ²} = n^{-1}1_n'(I_n − n^{-1}1_n1_n') = 0'.
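A short numeric check of the condition BΩA = (0) used in this example (assuming NumPy; n and σ² are arbitrary):

```python
import numpy as np

n, sigma2 = 6, 2.5
one = np.ones((n, 1))
A = (np.eye(n) - one @ one.T / n) / sigma2   # s^2 is a multiple of x'Ax
B = one.T / n                                # xbar = Bx
Omega = sigma2 * np.eye(n)

# B Omega A = (0), the hypothesis of Theorem 9.14
assert np.allclose(B @ Omega @ A, 0)
```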
When Ω is positive semidefinite, the condition AΩB = (0), given in Theorem 9.13, will still guarantee that the two quadratic forms x'Ax and x'Bx are independently distributed. Likewise, when Ω is positive semidefinite, the condition BΩA = (0), given in Theorem 9.14, will still guarantee that x'Ax and Bx are independently distributed. However, in these situations a weaker set of conditions will guarantee independence. These conditions are given in the following two theorems. The proofs are left as exercises.

Theorem 9.15. Let x ~ N_m(μ, Ω), where Ω is positive semidefinite, and suppose that A and B are m × m symmetric matrices. Then x'Ax and x'Bx are independently distributed if

(a) ΩAΩBΩ = (0),
(b) ΩAΩBμ = 0,
(c) ΩBΩAμ = 0,
(d) μ'AΩBμ = 0.

Theorem 9.16. Let x ~ N_m(μ, Ω), where Ω is positive semidefinite, and suppose that A is an m × m symmetric matrix while B is an n × m matrix. If BΩAΩ = (0) and BΩAμ = 0, then x'Ax and Bx are independently distributed.
Our final result can be helpful in establishing that several quadratic forms in the same normal random vector are independently distributed, each having a chi-squared distribution.

Theorem 9.17. Let x ~ N_m(μ, Ω), where Ω is positive definite. Suppose that A_i is an m × m symmetric matrix of rank r_i, for i = 1, ..., k, and A = A1 + ... + A_k is of rank r. Consider the conditions

(a) A_iΩ is idempotent for each i,
(b) AΩ is idempotent,
(c) A_iΩA_j = (0), for all i ≠ j,
(d) r = Σ_{i=1}^{k} r_i.

If any two of (a), (b), and (c) hold, or if (b) and (d) hold, then
(i) x'A_ix ~ χ²_{r_i}(½μ'A_iμ),
(ii) x'Ax ~ χ²_r(½μ'Aμ),
(iii) x'A1x, ..., x'A_kx are independently distributed.

Proof. Since Ω is positive definite, there exists a nonsingular matrix T satisfying Ω = TT', and the conditions (a)-(d) can be equivalently expressed as

(a) T'A_iT is idempotent for each i,
(b) T'AT is idempotent,
(c) (T'A_iT)(T'A_jT) = (0), for all i ≠ j,
(d) rank(T'AT) = Σ_{i=1}^{k} rank(T'A_iT).

Since T'A1T, ..., T'A_kT and T'AT satisfy the conditions of Corollary 9.7.1, we are ensured that if any two of (a), (b), and (c) hold, or if (b) and (d) hold, then all four of the conditions (a)-(d) above hold. Now using Theorem 9.11, (a) implies (i) and (b) implies (ii), while Theorem 9.13, along with (c), guarantees that (iii) holds. □
6. EXPECTED VALUES OF QUADRATIC FORMS

When a quadratic form satisfies the conditions given in the theorems of Section 9.4, then its moments can be obtained directly from the appropriate chi-squared distribution. In this section, we derive formulas for means, variances, and covariances of quadratic forms that will be useful when this is not the case. We will start with the most general case in which the random vector x has an arbitrary distribution. The expressions we obtain involve the matrix of second moments of x, E(xx'), and the matrix of fourth moments, E(xx' ⊗ xx').

Theorem 9.18. Let x be an m × 1 random vector having finite fourth moments, so that both E(xx') and E(xx' ⊗ xx') exist. Denote the mean vector and covariance matrix of x by μ and Ω. If A and B are m × m symmetric matrices, then

(a) E(x'Ax) = tr{AE(xx')} = tr(AΩ) + μ'Aμ,
(b) var(x'Ax) = tr{(A ⊗ A)E(xx' ⊗ xx')} − [tr(AΩ) + μ'Aμ]²,
(c) cov(x'Ax, x'Bx) = tr{(A ⊗ B)E(xx' ⊗ xx')} − [tr(AΩ) + μ'Aμ][tr(BΩ) + μ'Bμ].
Proof. The covariance matrix Ω is defined by

Ω = E{(x − μ)(x − μ)'} = E(xx') − μμ',

so that E(xx') = Ω + μμ'. Since x'Ax is a scalar, we have

E(x'Ax) = E{tr(x'Ax)} = E{tr(Axx')} = tr{AE(xx')} = tr{A(Ω + μμ')} = tr(AΩ) + tr(Aμμ') = tr(AΩ) + μ'Aμ,

and so (a) holds. Part (b) will follow from (c) by taking B = A. To prove (c), note that

E(x'Ax x'Bx) = E[tr{(x' ⊗ x')(A ⊗ B)(x ⊗ x)}] = E[tr{(A ⊗ B)(x ⊗ x)(x' ⊗ x')}] = tr{(A ⊗ B)E(xx' ⊗ xx')}.

The use of this, along with part (a), in the equation

cov(x'Ax, x'Bx) = E(x'Ax x'Bx) − E(x'Ax)E(x'Bx)

then yields (c). □
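Because Theorem 9.18 makes no normality assumption, it can be verified exactly for a small discrete distribution by enumerating the support. The sketch below (assuming NumPy; the support points, probabilities, and the matrices A and B are arbitrary choices) checks parts (a) and (c).

```python
import numpy as np
from itertools import product

# A bivariate discrete distribution with independent coordinates
vals = np.array([-1.0, 0.0, 2.0])
probs = np.array([0.3, 0.5, 0.2])
support = list(product(range(3), repeat=2))

def expect(f):
    # exact expectation over the product distribution
    return sum(probs[i] * probs[j] * f(np.array([vals[i], vals[j]]))
               for i, j in support)

mu = expect(lambda x: x)
M2 = expect(lambda x: np.outer(x, x))                       # E(xx')
Omega = M2 - np.outer(mu, mu)
M4 = expect(lambda x: np.kron(np.outer(x, x), np.outer(x, x)))  # E(xx' (x) xx')

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, -1.0], [-1.0, 2.0]])

# part (a)
lhs_a = expect(lambda x: x @ A @ x)
assert np.isclose(lhs_a, np.trace(A @ Omega) + mu @ A @ mu)

# part (c)
lhs_c = (expect(lambda x: (x @ A @ x) * (x @ B @ x))
         - expect(lambda x: x @ A @ x) * expect(lambda x: x @ B @ x))
rhs_c = (np.trace(np.kron(A, B) @ M4)
         - (np.trace(A @ Omega) + mu @ A @ mu)
         * (np.trace(B @ Omega) + mu @ B @ mu))
assert np.isclose(lhs_c, rhs_c)
```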
When x has a normal distribution, the expressions for variances and covariances, as well as higher moments, simplify somewhat. This is a consequence of the special structure of the moments of the multivariate normal distribution. The commutation matrix K_mm, discussed in Chapter 7, plays a crucial role in obtaining some of these matrix expressions. We will also make use of the m × m matrix T_ij defined by

T_ij = e_ie_j' + e_je_i';

that is, all of the elements of T_ij are equal to 0 except for the (i, j)th and (j, i)th elements, which equal 1, unless i = j, in which case the only nonzero element is a 2 in the (i, i)th position. Before obtaining expressions for the variance and covariance of quadratic forms in normal variates, we will need the following result.
Theorem 9.19.
If z ~ N_m(0, I_m) and c is a vector of constants, then

(a) E(z ⊗ z) = vec(I_m),
(b) E(cz' ⊗ zz') = (0), E(zc' ⊗ zz') = (0), E(zz' ⊗ cz') = (0), E(zz' ⊗ zc') = (0),
(c) E(zz' ⊗ zz') = 2N_m + vec(I_m){vec(I_m)}',
(d) var(z ⊗ z) = 2N_m.
Proof. Since E(z) = 0, I_m = var(z) = E(zz'), and so

E(z ⊗ z) = E{vec(zz')} = vec{E(zz')} = vec(I_m).

It is easily verified using the standard normal moment generating function that

E(z_j³) = 0,   E(z_j⁴) = 3.  (9.12)

Each element of the matrices of expected values in (b) will be of the form c_iE(z_jz_kz_l). Since the components of z are independent, we get

E(z_jz_kz_l) = E(z_j)E(z_k)E(z_l) = 0

when the three subscripts are distinct,

E(z_j²z_l) = E(z_j²)E(z_l) = 0

when j = k ≠ l, and similarly for j = l ≠ k and l = k ≠ j, and

E(z_j³) = 0

when j = k = l. This proves (b). Next we consider terms of the form E(z_iz_jz_kz_l). These equal 1 if i = j ≠ l = k, i = k ≠ j = l, or i = l ≠ j = k, equal 3 if i = j = k = l, and equal zero otherwise. This leads to

E(z_iz_j zz') = T_ij + δ_ij I_m,
where δ_ij is the (i, j)th element of I_m. Thus,

E(zz' ⊗ zz') = E{(Σ_{i=1}^{m} Σ_{j=1}^{m} E_ij z_iz_j) ⊗ zz'} = Σ_{i=1}^{m} Σ_{j=1}^{m} {E_ij ⊗ E(z_iz_j zz')}
= Σ_{i=1}^{m} Σ_{j=1}^{m} {E_ij ⊗ (T_ij + δ_ij I_m)}
= Σ_{i=1}^{m} Σ_{j=1}^{m} (E_ij ⊗ T_ij) + Σ_{i=1}^{m} Σ_{j=1}^{m} (δ_ij E_ij ⊗ I_m).
The third result now follows since

Σ_{i=1}^{m} Σ_{j=1}^{m} (E_ij ⊗ T_ij) = Σ_{i=1}^{m} Σ_{j=1}^{m} (E_ij ⊗ E_ji) + Σ_{i=1}^{m} Σ_{j=1}^{m} (E_ij ⊗ E_ij)
= K_mm + {Σ_{i=1}^{m} (e_i ⊗ e_i)}{Σ_{j=1}^{m} (e_j ⊗ e_j)}'
= K_mm + {Σ_{i=1}^{m} vec(e_ie_i')}{Σ_{j=1}^{m} vec(e_je_j')}'
= K_mm + vec(I_m){vec(I_m)}',

and

Σ_{i=1}^{m} Σ_{j=1}^{m} (δ_ij E_ij ⊗ I_m) = (Σ_{i=1}^{m} E_ii) ⊗ I_m = I_{m²},

so that E(zz' ⊗ zz') = I_{m²} + K_mm + vec(I_m){vec(I_m)}' = 2N_m + vec(I_m){vec(I_m)}'. Finally, (d) is an immediate consequence of (a) and (c). □
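Part (c) can be confirmed by building E(zz' ⊗ zz') elementwise from the fourth-moment rule E(z_iz_jz_kz_l) = δ_ijδ_kl + δ_ikδ_jl + δ_ilδ_jk and comparing with 2N_m + vec(I_m){vec(I_m)}'. A sketch, assuming NumPy; m is arbitrary:

```python
import numpy as np

m = 3
I = np.eye(m)

# Exact fourth moments of z ~ N_m(0, I_m):
# E(z_i z_j z_k z_l) = d_ij d_kl + d_ik d_jl + d_il d_jk
M4 = np.zeros((m * m, m * m))
for i in range(m):
    for j in range(m):
        for k in range(m):
            for l in range(m):
                # (zz' (x) zz') holds z_i z_j z_k z_l in row i*m+k, col j*m+l
                M4[i * m + k, j * m + l] = (I[i, j] * I[k, l]
                                            + I[i, k] * I[j, l]
                                            + I[i, l] * I[j, k])

# Commutation matrix K_mm and N_m = (I + K_mm)/2
K = np.zeros((m * m, m * m))
for i in range(m):
    for j in range(m):
        K[i * m + j, j * m + i] = 1.0
N = (np.eye(m * m) + K) / 2

v = I.reshape(-1, 1)               # vec(I_m); row/column stacking agree here
assert np.allclose(M4, 2 * N + v @ v.T)
```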
The next result generalizes the results of Theorem 9.19 to a multivariate normal distribution having a general positive definite covariance matrix.
Theorem 9.20. Let x ~ N_m(0, Ω), where Ω is positive definite, and let c be an m × 1 vector of constants. Then

(a) E(x ⊗ x) = vec(Ω),
(b) E(cx' ⊗ xx') = (0), E(xc' ⊗ xx') = (0), E(xx' ⊗ cx') = (0), E(xx' ⊗ xc') = (0),
(c) E(xx' ⊗ xx') = 2N_m(Ω ⊗ Ω) + vec(Ω){vec(Ω)}',
(d) var(x ⊗ x) = 2N_m(Ω ⊗ Ω).
Proof. Let T be any nonsingular matrix satisfying Ω = TT', so that z = T^{-1}x and x = Tz, where z ~ N_m(0, I_m). Then the results above are consequences of Theorem 9.19 since

E(x ⊗ x) = (T ⊗ T)E(z ⊗ z) = (T ⊗ T)vec(I_m) = vec(TT') = vec(Ω),

E(cx' ⊗ xx') = (I_m ⊗ T)E(cz' ⊗ zz')(T' ⊗ T') = (0),

and

E(xx' ⊗ xx') = (T ⊗ T)E(zz' ⊗ zz')(T' ⊗ T')
= (T ⊗ T)(2N_m + vec(I_m){vec(I_m)}')(T' ⊗ T')
= 2(T ⊗ T)N_m(T' ⊗ T') + (T ⊗ T)vec(I_m){vec(I_m)}'(T' ⊗ T')
= 2N_m(T ⊗ T)(T' ⊗ T') + vec(TT'){vec(TT')}'
= 2N_m(Ω ⊗ Ω) + vec(Ω){vec(Ω)}'. □
We are now ready to obtain simplified expressions for the variance and covariance of quadratic forms in normal variates.

Theorem 9.21. Let A and B be m × m symmetric matrices and suppose that x ~ N_m(0, Ω), where Ω is positive definite. Then

(a) E{x'Ax x'Bx} = tr(AΩ)tr(BΩ) + 2 tr(AΩBΩ),
(b) cov(x'Ax, x'Bx) = 2 tr(AΩBΩ),
(c) var(x'Ax) = 2 tr{(AΩ)²}.
Proof. Since (c) is the special case of (b) in which B = A, we only need to prove (a) and (b). Note that by making use of Theorem 9.20, we find that

E{x'Ax x'Bx} = E{(x' ⊗ x')(A ⊗ B)(x ⊗ x)} = E[tr{(A ⊗ B)(xx' ⊗ xx')}] = tr{(A ⊗ B)E(xx' ⊗ xx')}
= tr{(A ⊗ B)(2N_m(Ω ⊗ Ω) + vec(Ω){vec(Ω)}')}
= tr{(A ⊗ B)((I_{m²} + K_mm)(Ω ⊗ Ω) + vec(Ω){vec(Ω)}')}
= tr{(A ⊗ B)(Ω ⊗ Ω)} + tr{(A ⊗ B)K_mm(Ω ⊗ Ω)} + tr[(A ⊗ B)vec(Ω){vec(Ω)}'].

Now

tr{(A ⊗ B)(Ω ⊗ Ω)} = tr(AΩ ⊗ BΩ) = tr(AΩ)tr(BΩ)

follows directly from Theorem 7.8, while

tr{(A ⊗ B)K_mm(Ω ⊗ Ω)} = tr{(AΩ ⊗ BΩ)K_mm} = tr(AΩBΩ)

follows from Theorem 7.31. Using the symmetry of A and Ω along with Theorems 7.15 and 7.16, the last term in E{x'Ax x'Bx} simplifies as

tr[(A ⊗ B)vec(Ω){vec(Ω)}'] = {vec(Ω)}'(A ⊗ B)vec(Ω) = {vec(Ω)}'vec(BΩA) = tr(AΩBΩ).

This then proves (a). To prove (b), we use the definition of covariance and Theorem 9.18(a) to get

cov(x'Ax, x'Bx) = E{x'Ax x'Bx} − E(x'Ax)E(x'Bx) = 2 tr(AΩBΩ). □
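Part (a) can be checked exactly against the Gaussian fourth-moment identity E(x_ix_jx_kx_l) = Ω_ijΩ_kl + Ω_ikΩ_jl + Ω_ilΩ_jk (Isserlis' theorem). The sketch below (assuming NumPy; Ω, A, and B are randomly generated) compares the two computations.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 3
Lfac = rng.normal(size=(m, m))
Omega = Lfac @ Lfac.T + m * np.eye(m)       # positive definite
A = rng.normal(size=(m, m)); A = (A + A.T) / 2
B = rng.normal(size=(m, m)); B = (B + B.T) / 2

# E{x'Ax x'Bx} = sum_{ijkl} A_ij B_kl E(x_i x_j x_k x_l), computed
# term by term from the fourth-moment identity above:
E_ABO = (np.einsum('ij,kl,ij,kl->', A, B, Omega, Omega)
         + np.einsum('ij,kl,ik,jl->', A, B, Omega, Omega)
         + np.einsum('ij,kl,il,jk->', A, B, Omega, Omega))

rhs = (np.trace(A @ Omega) * np.trace(B @ Omega)
       + 2 * np.trace(A @ Omega @ B @ Omega))
assert np.isclose(E_ABO, rhs)
```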
The formulas given in the previous theorem become somewhat more complicated when the normal distribution has a nonnull mean vector. These formulas are given in the following theorem.

Theorem 9.22. Let A and B be symmetric m × m matrices and suppose that x ~ N_m(μ, Ω), where Ω is positive definite. Then

(a) E{x'Ax x'Bx} = tr(AΩ)tr(BΩ) + 2 tr(AΩBΩ) + tr(AΩ)μ'Bμ + 4μ'AΩBμ + μ'Aμ tr(BΩ) + μ'Aμ μ'Bμ,
(b) cov(x'Ax, x'Bx) = 2 tr(AΩBΩ) + 4μ'AΩBμ,
(c) var(x'Ax) = 2 tr{(AΩ)²} + 4μ'AΩAμ.
Again (c) is a special case of (b), so we only need to prove (a) and (b). We can write x = y + ...., where y  Nm(O, 0) and, consequently, E{x'Ax x'Bx} = E{(y + ....)' A(y + ....)(y + ....)' B(y + Jl.)}
. = E{(y'Ay+ 2 ....'Ay+ ....'A .... )(y'By + 2 ....'By+ ....'B ....)} = E{y'Ayy'By} + 2E{y'Ay ....'By} + E(y'Ay) ....'B ....
+ 2E{ ....' Ay y' By} + 4E( ....' Ay ....' By) + 2E( ....' Ay) ....' B .... + ....'A .... E(y'By) + 2 ....'A .... E( ....'By) + ....'A ........'B ....
The sixth and eighth terms in this last expression are zero since E(y) = 0, while it follows from Theorem 9.20(b) that the second and fourth terms are zero. To simplify the fifth telIll note that
396
SOME SPECIAL TOPICS RELATED TO QUADRATIC FORMS
E(J.l'Ay J.l' By) = E{ (J.l' A ® J.l' B)(y ® y)} = (A J.l ® BJ.l)'E{(y ® y)} = {vec(BJ.lJ.l'A)}, vec(n) = tr{(BJ.lJ.l' A)'n}
= tr(AJ.lJ.l'Bn) = J.l'AnBf.l
Thus, using this and Theorems 9.18(a) and 9.21(a), we find that E {x' Ax x' Bx} = tr(An) tr(Bn) + 2 tr(An Bn) + tr(An )J.l' BJ.l + 4J.l'An BJ.l
+ J.l'AJ.l tr(Bn) + J.l' AJ.l J.l' BJ.l, thereby proving (a); (b) then follows immediately from the definition of covariance and Theorem 9.18(a). 0 Example 9.7.
Let us return to the subject of Example 9.4, where we defined

t1 = n Σ_{i=1}^{k} (x̄_i − x̄)²   and   t2 = Σ_{i=1}^{k} Σ_{j=1}^{n} (x_{ij} − x̄_i)².

It was shown that if x = (x1', ..., x_k')' ~ N_{kn}(μ, Ω) with μ = 1_k ⊗ μ1_n and Ω = I_k ⊗ σ²I_n, then t1/σ² = x'(A1/σ²)x ~ χ²_{k−1} and t2/σ² = x'(A2/σ²)x ~ χ²_{k(n−1)}, independently. Since the mean of a chi-squared random variable equals its degrees of freedom, while the variance is two times the degrees of freedom, we can easily calculate the mean and variance of t1 and t2 without employing the results of this section; in particular, we have

E(t1) = σ²(k − 1),   var(t1) = 2σ⁴(k − 1),
E(t2) = σ²k(n − 1),   var(t2) = 2σ⁴k(n − 1).

Suppose now that x_i ~ N_n(μ1_n, σ_i²I_n), so that Ω = var(x) = D ⊗ I_n, where D = diag(σ1², ..., σ_k²). It can be easily verified that, in this case, t1/σ² and t2/σ² no longer satisfy the conditions of Theorem 9.11 for chi-squaredness, but are still independently distributed. The mean and variance of t1 and t2 can be computed by using Theorems 9.18 and 9.22. For instance, the mean of t2 is given by

E(t2) = E(x'A2x) = tr(A2Ω) + μ'A2μ
= tr({I_k ⊗ (I_n − n^{-1}1_n1_n')}(D ⊗ I_n)) + μ²(1_k' ⊗ 1_n'){I_k ⊗ (I_n − n^{-1}1_n1_n')}(1_k ⊗ 1_n)
= tr(D)tr(I_n − n^{-1}1_n1_n') + μ²(1_k'1_k){1_n'(I_n − n^{-1}1_n1_n')1_n} = (n − 1) Σ_{i=1}^{k} σ_i²,

while its variance is

var(t2) = var(x'A2x) = 2 tr{(A2Ω)²} + 4μ'A2ΩA2μ
= 2 tr{D² ⊗ (I_n − n^{-1}1_n1_n')} + 4μ²(1_k' ⊗ 1_n'){D ⊗ (I_n − n^{-1}1_n1_n')}(1_k ⊗ 1_n)
= 2 tr(D²)tr(I_n − n^{-1}1_n1_n') + 4μ²(1_k'D1_k){1_n'(I_n − n^{-1}1_n1_n')1_n}
= 2(n − 1) Σ_{i=1}^{k} σ_i⁴.
We will leave it to the reader to verify that

E(t1) = (1 − k^{-1}) Σ_{i=1}^{k} σ_i²,

var(t1) = 2{(1 − 2k^{-1}) Σ_{i=1}^{k} σ_i⁴ + k^{-2}(Σ_{i=1}^{k} σ_i²)²}.
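The reader's verification can be assisted numerically. Since A1μ = 0, only the trace terms of Theorems 9.18 and 9.22 survive, and the sketch below (assuming NumPy; k, n, and the σ_i² are arbitrary) reproduces the stated E(t1) and var(t1).

```python
import numpy as np

k, n = 4, 6
sig2 = np.array([0.5, 1.0, 2.0, 4.0])      # sigma_i^2
D = np.diag(sig2)
A1 = np.kron(np.eye(k) - np.ones((k, k)) / k, np.ones((n, n))) / n
Omega = np.kron(D, np.eye(n))

# mu = 1_k (x) mu 1_n satisfies A1 mu = 0, so only trace terms remain
E_t1 = np.trace(A1 @ Omega)
V_t1 = 2 * np.trace(A1 @ Omega @ A1 @ Omega)

S1, S2 = sig2.sum(), (sig2 ** 2).sum()
assert np.isclose(E_t1, (1 - 1 / k) * S1)
assert np.isclose(V_t1, 2 * ((1 - 2 / k) * S2 + S1 ** 2 / k ** 2))
```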
So far we have considered the expectation of a quadratic form as well as the expectation of a product of two quadratic forms. A more general situation is one in which we need the expected value of the product of n quadratic forms. This expectation becomes more tedious to compute as n increases. For example, if A, B, and C are m × m symmetric matrices and x ~ N_m(0, Ω), the expected value E(x'Ax x'Bx x'Cx) can be obtained by first computing E(xx' ⊗ xx' ⊗ xx') and then using this in the identity

E(x'Ax x'Bx x'Cx) = tr{(A ⊗ B ⊗ C)E(xx' ⊗ xx' ⊗ xx')}.

The details of this derivation are left as an exercise. Magnus (1978) used an alternative method, utilizing the cumulants of a distribution and their relationship to the moments of a distribution, to obtain the expectation of the product of an arbitrary number of quadratic forms. The results for a product of three and four quadratic forms are summarized below.
Theorem 9.23. Let A, B, C, and D be m × m symmetric matrices and suppose that x ~ N_m(0, I_m). Then

(a) E(x'Ax x'Bx x'Cx) = tr(A)tr(B)tr(C) + 2{tr(A)tr(BC) + tr(B)tr(AC) + tr(C)tr(AB)} + 8 tr(ABC),

(b) E(x'Ax x'Bx x'Cx x'Dx)
= tr(A)tr(B)tr(C)tr(D) + 8{tr(A)tr(BCD) + tr(B)tr(ACD) + tr(C)tr(ABD) + tr(D)tr(ABC)}
+ 4{tr(AB)tr(CD) + tr(AC)tr(BD) + tr(AD)tr(BC)}
+ 2{tr(A)tr(B)tr(CD) + tr(A)tr(C)tr(BD) + tr(A)tr(D)tr(BC) + tr(B)tr(C)tr(AD) + tr(B)tr(D)tr(AC) + tr(C)tr(D)tr(AB)}
+ 16{tr(ABCD) + tr(ABDC) + tr(ACBD)}.
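Formula (b) is easy to sanity-check in low dimensions, where the left-hand side can be evaluated from univariate normal moments: with m = 1 and A = B = C = D = [1] it must return E(z⁸) = 105, and with m = 2 and A = diag(1, 2) it must return E{(z1² + 2z2²)⁴} = 2601 by direct expansion. A sketch, assuming NumPy:

```python
import numpy as np

def fourth_moment(A, B, C, D):
    """E(x'Ax x'Bx x'Cx x'Dx) for x ~ N_m(0, I_m), per Theorem 9.23(b)."""
    t = np.trace
    return (t(A) * t(B) * t(C) * t(D)
            + 8 * (t(A) * t(B @ C @ D) + t(B) * t(A @ C @ D)
                   + t(C) * t(A @ B @ D) + t(D) * t(A @ B @ C))
            + 4 * (t(A @ B) * t(C @ D) + t(A @ C) * t(B @ D)
                   + t(A @ D) * t(B @ C))
            + 2 * (t(A) * t(B) * t(C @ D) + t(A) * t(C) * t(B @ D)
                   + t(A) * t(D) * t(B @ C) + t(B) * t(C) * t(A @ D)
                   + t(B) * t(D) * t(A @ C) + t(C) * t(D) * t(A @ B))
            + 16 * (t(A @ B @ C @ D) + t(A @ B @ D @ C) + t(A @ C @ B @ D)))

# m = 1 with A = B = C = D = [1]: x'Ax = z^2, and E(z^8) = 105
one = np.eye(1)
assert np.isclose(fourth_moment(one, one, one, one), 105.0)

# m = 2 with A = diag(1, 2): E{(z1^2 + 2 z2^2)^4} = 2601
A = np.diag([1.0, 2.0])
assert np.isclose(fourth_moment(A, A, A, A), 2601.0)
```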
If x ~ N_m(0, Ω), where Ω is positive definite, then A, B, C, and D appearing in the right-hand side of the equations in Theorem 9.23 are replaced by AΩ, BΩ, CΩ, and DΩ. An alternative approach to the calculation of moments of quadratic forms utilizes tensor methods. This approach may be particularly appealing in those situations in which higher-order moments are needed or the random vector x does not have a multivariate normal distribution. A detailed discussion of these tensor methods can be found in McCullagh (1987).
7. THE WISHART DISTRIBUTION

When x1, ..., xn are independently distributed, with x_i ~ N(0, σ²) for every i, then

x'x = Σ_{i=1}^{n} x_i²,

where x' = (x1, ..., xn); that is, x'x/σ² has a chi-squared distribution with n degrees of freedom. A natural matrix generalization of this situation, one which has important applications in multivariate analysis, involves the distribution of

X'X = Σ_{i=1}^{n} x_ix_i',

where X' = (x1, ..., xn) is an m × n matrix such that x1, ..., xn are independent and x_i ~ N_m(0, Ω) for each i. Thus, the components of the jth column of X are independently distributed, each as N(0, ω_jj), where ω_jj is the jth diagonal element of Ω, so that the jth diagonal element of X'X has the distribution ω_jjχ²_n. The joint distribution of the elements of the m × m matrix X'X is called the Wishart distribution with scale matrix Ω and degrees of freedom n, and will be denoted by W_m(Ω, n). This Wishart distribution, like the chi-squared distribution χ²_n, is said to be central. More generally, if x1, ..., xn are independent and x_i ~ N_m(μ_i, Ω), then X'X has the noncentral Wishart distribution with noncentrality matrix Φ = ½M'M, where M' is the m × n matrix given by M' = (μ1, ..., μn). We will denote this noncentral Wishart distribution as W_m(Ω, n, Φ). Additional information regarding the Wishart distribution, such as the form of its density function, can be found in texts on multivariate analysis such as Srivastava and Khatri (1979) and Muirhead (1982).

If A is an n × n symmetric matrix and X' is an m × n matrix, then the matrix X'AX is sometimes called a generalized quadratic form. The following theorem gives some generalizations of the results obtained in Sections 9.4 and 9.5 regarding quadratic forms to these generalized quadratic forms.

Theorem 9.24. Let X' be an m × n matrix whose columns are independently distributed, with the ith column having the N_m(μ_i, Ω) distribution, where Ω is positive definite. Suppose that A and B are n × n symmetric matrices while C is k × n. Let M' = (μ1, ..., μn), Φ = ½M'AM, and r = rank(A). Then

(a) X'AX ~ W_m(Ω, r, Φ), if A is idempotent,
(b) X'AX and X'BX are independently distributed if AB = (0),
(c) X'AX and CX are independently distributed if CA = (0).

Proof. The proof of (a) will be complete if we can show that there exists an m × r matrix Y' such that X'AX = Y'Y, where the columns of Y' are independently distributed, each having a normal distribution with the same covariance matrix Ω, and ½E(Y')E(Y) = Φ. Since the columns of X' are independently distributed, it follows that

vec(X') ~ N_{nm}(vec(M'), I_n ⊗ Ω).
r x n an ist ex t us m e er th r, nk ra s ha d an t, ten po Si nc e A is sy m m etr ic, id em e er wh Y, Y' = AX X' , ly nt ue eq ns Co . I = P P' d an ' r m atr ix P sa tis fy in g A = pP th e m x r m atr ix Y' = X' P so th at ve c( Y' ) = ve c( X' P) = (P ' ® 1m )v ec (X ')
 Nmr«P' ® III/)vec(M'), (P' ® 1".)(\" ® [} )(P ® 1m»  Nmr(vec(M' P), (lr ® [}»
But this means that the columns of Y' are independently and normally distributed, each with covariance matrix Ω. Further,

½E(Y')E(Y) = ½M'PP'M = ½M'AM = Φ,

and so (a) follows. To prove (b), note that since A and B are symmetric, AB = (0) implies that AB = BA, so A and B are diagonalized by the same orthogonal matrix; that is, there exist diagonal matrices C and D and an orthogonal matrix Q such that Q'AQ = C and Q'BQ = D. Further, AB = (0) implies that CD = (0), so that by appropriately choosing Q we will have C = diag(c1, ..., c_h, 0, ..., 0) and D = diag(0, ..., 0, d_{h+1}, ..., d_n) for some h. Thus, if we let U = Q'X, we find that

X'AX = U'CU = Σ_{j=1}^{h} c_ju_ju_j',   X'BX = U'DU = Σ_{j=h+1}^{n} d_ju_ju_j',

where u_j is the jth column of U'. Since vec(U') ~ N_{nm}(vec(M'Q), I_n ⊗ Ω), these columns are independently distributed and so (b) follows. The proof of (c) is similar to that of (b). □

If the columns of the m × n matrix X' are independent and identically distributed as N_m(0, Ω) and M' is an m × n matrix of constants, then V = (X + M)'(X + M) has the Wishart distribution W_m(Ω, n, ½M'M). A more general situation is one in which the columns of X' are independent and identically distributed having zero mean vector and some nonnormal multivariate distribution. In this case, the distribution of V = (X + M)'(X + M), which may be very complicated, will depend on the specific nonnormal distribution. In particular, the moments of V are directly related to the moments of the columns of X'. Our next result gives expressions for the first two moments of V when M = (0). Since V is a matrix and joint distributions are more conveniently handled in the form of vectors, we will vectorize V; that is, for instance, variances and covariances of the elements of V can be obtained from the matrix var{vec(V)}.

Theorem 9.25. Let the columns of the m × n matrix X' = (x1, ..., xn) be independently and identically distributed with E(x_i) = 0, var(x_i) = Ω, and E(x_ix_i' ⊗ x_ix_i') = Υ. If V = X'X, then
(a) E(V) = n!l, (b)
var{vec(V)}
= n{i'
 vec(!l )vec(!l)'}.
THE WISHART DISTRIBUTION
Proof. Since E(x_i) = 0, we have Ω = E(x_i x_i'), and so

E(V) = E(X'X) = Σ_{i=1}^n E(x_i x_i') = nΩ.

In addition, since x_1, ..., x_n are independent, we have

var{vec(V)} = var{vec(Σ_{i=1}^n x_i x_i')} = Σ_{i=1}^n var{vec(x_i x_i')} = Σ_{i=1}^n var(x_i ⊗ x_i)
= Σ_{i=1}^n {E(x_i x_i' ⊗ x_i x_i') − E(x_i ⊗ x_i)E(x_i ⊗ x_i)'}
= Σ_{i=1}^n {Υ − vec(Ω)vec(Ω)'} = n{Υ − vec(Ω)vec(Ω)'}. □
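Theorem 9.25's mean formula is easy to probe by simulation. The sketch below is my own NumPy code, not from the text; normal columns are just one distribution satisfying the theorem's assumptions, and all names and constants are illustrative.

```python
import numpy as np

# Monte Carlo sketch of Theorem 9.25(a): for V = X'X with the columns of X'
# i.i.d. with mean 0 and covariance Omega, E(V) = n * Omega.
rng = np.random.default_rng(0)
m, n, reps = 3, 10, 20000

A = rng.standard_normal((m, m))
Omega = A @ A.T + np.eye(m)            # an arbitrary positive definite covariance
L = np.linalg.cholesky(Omega)

# each replication: rows of X are the vectors x_i' ~ N_m(0, Omega)
X = rng.standard_normal((reps, n, m)) @ L.T
V = np.einsum('rni,rnj->rij', X, X)    # V = X'X for each replication
V_bar = V.mean(axis=0)

rel_err = np.abs(V_bar - n * Omega).max() / np.abs(n * Omega).max()
```

Part (b) can be checked the same way by comparing the sample covariance of the vectorized replicates with n{Υ − vec(Ω)vec(Ω)'}.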
The expression for var{vec(V)} simplifies when V has a Wishart distribution, due to the special structure of the fourth moments of the normal distribution. This simplified expression is given in our next theorem. Note that although this theorem is stated for normally distributed columns, the first result given applies to the general case as well.

Theorem 9.26. Let the columns of the m × n matrix X' be independently and identically distributed as N_m(0, Ω). Define V = (X + M)'(X + M), where M' = (μ_1, ..., μ_n) is an m × n matrix of constants, so that V ~ W_m(Ω, n, ½M'M). Then

(a) E(V) = nΩ + M'M,
(b) var{vec(V)} = 2N_m{n(Ω ⊗ Ω) + Ω ⊗ M'M + M'M ⊗ Ω}.
Proof. Since E(X) = (0) and E(X'X) = nΩ from the previous theorem, it follows that

E(V) = E(X'X + X'M + M'X + M'M) = E(X'X) + M'M = nΩ + M'M.

Proceeding as in the proof of Theorem 9.25, we obtain
var{vec(V)} = Σ_{i=1}^n var{(x_i + μ_i) ⊗ (x_i + μ_i)}.  (9.13)
But

(x_i + μ_i) ⊗ (x_i + μ_i) = x_i ⊗ x_i + x_i ⊗ μ_i + μ_i ⊗ x_i + μ_i ⊗ μ_i
= x_i ⊗ x_i + (I_{m²} + K_mm)(x_i ⊗ μ_i) + μ_i ⊗ μ_i
= x_i ⊗ x_i + 2N_m(I_m ⊗ μ_i)x_i + μ_i ⊗ μ_i.

Since all first- and third-order moments of x_i are equal to 0, x_i ⊗ x_i and x_i are uncorrelated, and so using Theorem 9.20 and Problem 7.52, we find that

var{(x_i + μ_i) ⊗ (x_i + μ_i)} = var(x_i ⊗ x_i) + var{2N_m(I_m ⊗ μ_i)x_i}
= 2N_m(Ω ⊗ Ω) + 4N_m(I_m ⊗ μ_i)Ω(I_m ⊗ μ_i)'N_m
= 2N_m(Ω ⊗ Ω) + 4N_m(Ω ⊗ μ_iμ_i')N_m
= 2N_m(Ω ⊗ Ω + Ω ⊗ μ_iμ_i' + μ_iμ_i' ⊗ Ω).  (9.14)
Now substituting (9.14) into (9.13) and simplifying, we obtain (b). □
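Because Theorem 9.26 involves N_m = ½(I_{m²} + K_mm), a numerical check also exercises the commutation matrix. The sketch below is illustrative only: Ω = I_m for simplicity, M is an arbitrary constant matrix, and the helper name is my own.

```python
import numpy as np

def commutation_matrix(m):
    # K_mm satisfies K_mm vec(A) = vec(A') for a square m x m matrix A
    K = np.zeros((m * m, m * m))
    for i in range(m):
        for j in range(m):
            K[i * m + j, j * m + i] = 1.0
    return K

rng = np.random.default_rng(1)
m, n, reps = 2, 5, 40000
Omega = np.eye(m)                                    # simplest covariance for the check
M = np.arange(1.0, n * m + 1).reshape(n, m) / 10.0   # arbitrary constant matrix

# each replication: rows are the vectors (x_i + mu_i)'
X = rng.standard_normal((reps, n, m)) + M
V = np.einsum('rni,rnj->rij', X, X)                  # V = (X + M)'(X + M)
# V is symmetric, so this row-major flatten agrees with column-major vec(V)
vecV = V.reshape(reps, m * m)

Nm = (np.eye(m * m) + commutation_matrix(m)) / 2
MtM = M.T @ M
EV_theory = n * Omega + MtM                          # Theorem 9.26(a)
CV_theory = 2 * Nm @ (n * np.kron(Omega, Omega)
                      + np.kron(Omega, MtM) + np.kron(MtM, Omega))  # 9.26(b)

EV_err = np.abs(V.mean(axis=0) - EV_theory).max() / np.abs(EV_theory).max()
CV_err = np.abs(np.cov(vecV, rowvar=False) - CV_theory).max() / np.abs(CV_theory).max()
```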
Example 9.8. In Examples 9.3 and 9.6 it was shown that, when sampling from a normal distribution, a constant multiple of the sample variance s² has a chi-squared distribution, and it is distributed independently of the sample mean x̄. In this example, we consider the multivariate version of this problem involving x̄ and S; that is, suppose that x_1, ..., x_n are independently distributed with x_i ~ N_m(μ, Ω) for each i, and define X' to be the m × n matrix (x_1, ..., x_n). Then the sample mean vector and sample covariance matrix can be expressed as

x̄ = (1/n) Σ_{i=1}^n x_i = (1/n) X'1_n

and

S = {1/(n − 1)} Σ_{i=1}^n (x_i − x̄)(x_i − x̄)' = {1/(n − 1)} (Σ_{i=1}^n x_i x_i' − n x̄ x̄').
Since A = (I_n − n^{-1}1_n1_n') is idempotent and rank(A) = tr(A) = n − 1, it follows from Theorem 9.24(a) that (n − 1)S = X'AX has a Wishart distribution. To find its noncentrality matrix, note that M' = (μ, ..., μ) = μ1_n', so that

½M'AM = ½μ1_n'(I_n − n^{-1}1_n1_n')1_nμ' = (0).

Thus, (n − 1)S has the central Wishart distribution W_m(Ω, n − 1). Further, using Theorem 9.24(c), we see that S and x̄ are independently distributed since

1_n'(I_n − n^{-1}1_n1_n') = (1_n' − 1_n') = 0'.
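The algebraic facts used here, that the centering matrix A = I_n − n^{-1}1_n1_n' is idempotent with rank n − 1 and is annihilated by 1_n', and that X'AX = (n − 1)S, are quick to confirm numerically. A minimal sketch (constants and data are arbitrary):

```python
import numpy as np

n = 7
ones = np.ones((n, 1))
A = np.eye(n) - ones @ ones.T / n       # centering matrix I_n - (1/n) 1_n 1_n'

idempotent = np.allclose(A @ A, A)
rank_A = np.linalg.matrix_rank(A)       # should equal tr(A) = n - 1
kills_ones = np.allclose(ones.T @ A, 0.0)

X = np.random.default_rng(2).standard_normal((n, 3))
S = np.cov(X, rowvar=False)             # sample covariance (divisor n - 1)
wishart_form = np.allclose(X.T @ A @ X, (n - 1) * S)
```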
In addition, it follows from Theorem 9.26 that E(S) = Ω and

var{vec(S)} = {2/(n − 1)}N_m(Ω ⊗ Ω) = {2/(n − 1)}N_m(Ω ⊗ Ω)N_m.

The redundant elements in vec(S) can be eliminated by utilizing v(S). Since v(S) = D_m^+ vec(S), where D_m is the duplication matrix discussed in Section 7.8, and D_m^+N_m = D_m^+, we have

var{v(S)} = {2/(n − 1)}D_m^+(Ω ⊗ Ω)D_m^+'.
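A small sketch of the duplication-matrix mechanics; the helper below is my own construction of D_m for the column-major vec convention, not code from the text:

```python
import numpy as np

def duplication_matrix(m):
    # D_m maps v(S), the m(m+1)/2 distinct elements of a symmetric S taken
    # column by column from the lower triangle, to vec(S)
    pairs = [(i, j) for j in range(m) for i in range(j, m)]
    D = np.zeros((m * m, len(pairs)))
    for k, (i, j) in enumerate(pairs):
        D[j * m + i, k] = 1.0          # position of S[i, j] in vec(S)
        D[i * m + j, k] = 1.0          # symmetric position of S[j, i]
    return D

m = 3
rng = np.random.default_rng(3)
B = rng.standard_normal((m, m))
S = B + B.T                            # an arbitrary symmetric matrix

D = duplication_matrix(m)
Dplus = np.linalg.pinv(D)              # Moore-Penrose inverse D_m^+

vecS = S.flatten(order='F')            # column-stacking vec
vS = np.array([S[i, j] for j in range(m) for i in range(j, m)])
```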
In some situations, we may be interested only in the sample variances and not the sample covariances; that is, the random vector of interest here is the m × 1 vector s = (s_11, ..., s_mm)'. Expressions for the mean vector and covariance matrix of s are easily obtained from the formulas in this example since s = Ψ_m vec(S), as seen in Problem 7.45, where

Ψ_m = Σ_{i=1}^m e_{i,m}(e_{i,m} ⊗ e_{i,m})'.

Thus, using the properties of Ψ_m obtained in Problem 7.45, we find that

E(s) = Ψ_m vec{E(S)} = Ψ_m vec(Ω),

which is the vector of diagonal elements of Ω, and

var(s) = Ψ_m var{vec(S)}Ψ_m' = {2/(n − 1)}Ψ_m N_m(Ω ⊗ Ω)N_m Ψ_m' = {2/(n − 1)}(Ω ⊙ Ω),

where ⊙ is the Hadamard product.
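The identity behind this last step, Ψ_m N_m(Ω ⊗ Ω)N_m Ψ_m' = Ω ⊙ Ω, holds exactly and can be checked directly. A sketch with Ψ_m and N_m built explicitly (all helper names are my own):

```python
import numpy as np

def commutation_matrix(m):
    # K_mm vec(A) = vec(A') for a square m x m matrix A
    K = np.zeros((m * m, m * m))
    for i in range(m):
        for j in range(m):
            K[i * m + j, j * m + i] = 1.0
    return K

m = 3
I = np.eye(m)
# Psi_m = sum_i e_i (e_i ⊗ e_i)': picks the diagonal of S out of vec(S)
Psi = np.array([np.kron(I[i], I[i]) for i in range(m)])
Nm = (np.eye(m * m) + commutation_matrix(m)) / 2

rng = np.random.default_rng(4)
B = rng.standard_normal((m, m))
Omega = B @ B.T + m * np.eye(m)        # an arbitrary positive definite matrix

lhs = Psi @ Nm @ np.kron(Omega, Omega) @ Nm @ Psi.T
rhs = Omega * Omega                    # Hadamard product
```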
Example 9.9. The perturbation formulas for eigenvalues and eigenvectors of a symmetric matrix obtained in Section 8.6 can be used to approximate the distributions of an eigenvalue or an eigenvector of a matrix having a Wishart distribution. One important application in statistics that utilizes these asymptotic distributions is principal components analysis, an analysis involving the eigenvalues and eigenvectors of the m × m sample covariance matrix S. The exact distributions of an eigenvalue and an eigenvector of S are rather complicated, while their asymptotic distributions follow in a fairly straightforward manner from the asymptotic distribution of S. Now it can be shown by using the central limit theorem [see Muirhead (1982)] that √(n − 1) vec(S) has an asymptotic normal distribution. In particular, using results from Example 9.8, we have, asymptotically,
√(n − 1){vec(S) − vec(Ω)} ~ N_{m²}(0, 2N_m(Ω ⊗ Ω)),
where Ω is the population covariance matrix. Let W = S − Ω and W* = √(n − 1)W, so that vec(W*) has the asymptotic normal distribution indicated above. Suppose that γ_i is a normalized eigenvector of S = Ω + W corresponding to its ith largest eigenvalue λ̂_i, while q_i is a normalized eigenvector of Ω corresponding to its ith largest eigenvalue λ_i. Now if λ_i is a distinct eigenvalue of Ω, then we have the first-order approximations from Section 8.6

λ̂_i = λ_i + q_i'Wq_i = λ_i + (q_i' ⊗ q_i')vec(W),
γ_i = q_i − (Ω − λ_iI_m)^+Wq_i = q_i − {q_i' ⊗ (Ω − λ_iI_m)^+}vec(W).  (9.15)
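The approximations in (9.15) can be illustrated deterministically: perturb a matrix with known eigenstructure by a small symmetric W and compare the exact and first-order eigenvalue and eigenvector. A sketch (the example matrix, perturbation size, and tolerances are my own choices):

```python
import numpy as np

rng = np.random.default_rng(5)
m = 4
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
lam = np.array([5.0, 3.0, 2.0, 1.0])           # distinct population eigenvalues
Omega = Q @ np.diag(lam) @ Q.T

eps = 1e-4
G = rng.standard_normal((m, m))
W = eps * (G + G.T) / 2                        # a small symmetric perturbation

S = Omega + W
lam_hat = np.linalg.eigvalsh(S)[::-1]          # eigenvalues, descending order
q1 = Q[:, 0]

# first-order approximations of (9.15) for the largest eigenvalue/eigenvector
lam1_approx = lam[0] + q1 @ W @ q1
gamma1_approx = q1 - np.linalg.pinv(Omega - lam[0] * np.eye(m)) @ W @ q1

gamma1 = np.linalg.eigh(S)[1][:, -1]           # eigenvector of largest eigenvalue
if gamma1 @ q1 < 0:
    gamma1 = -gamma1                           # fix the arbitrary sign

err_val = abs(lam_hat[0] - lam1_approx)        # both errors are O(eps^2)
err_vec = np.linalg.norm(gamma1 - gamma1_approx)
```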
Thus, the asymptotic normality of a_i = √(n − 1)(λ̂_i − λ_i) follows from the asymptotic normality of vec(W*). Further, we have, asymptotically,

E(a_i) = (q_i' ⊗ q_i')E{vec(W*)} = (q_i' ⊗ q_i')0 = 0,
var(a_i) = (q_i' ⊗ q_i')(var{vec(W*)})(q_i ⊗ q_i) = (q_i' ⊗ q_i'){2N_m(Ω ⊗ Ω)}(q_i ⊗ q_i)
= 2(q_i'Ωq_i ⊗ q_i'Ωq_i) = 2λ_i²;
that is, for large n, λ̂_i ~ N(λ_i, 2λ_i²/(n − 1)), approximately. Similarly, h_i = √(n − 1)(γ_i − q_i) is asymptotically normal with mean vector 0 and covariance matrix
Φ_i = {q_i' ⊗ (Ω − λ_iI_m)^+}{2N_m(Ω ⊗ Ω)}{q_i ⊗ (Ω − λ_iI_m)^+}
= {(Ω − λ_iI_m)^+ ⊗ q_i' + q_i' ⊗ (Ω − λ_iI_m)^+}(Ω ⊗ Ω){q_i ⊗ (Ω − λ_iI_m)^+},

and so for large n, γ_i ~ N_m(q_i, (n − 1)^{-1}Φ_i), approximately.
While the first-order approximations in (9.15) can be used to obtain the asymptotic distributions, higher-order approximations, such as those given in Theorem 8.5, can be used to further improve the performance of these asymptotic distributions. The most common application of this process involves asymptotic chi-squared distributions, so we will illustrate the basic idea with the statistic

t = (n − 1)(λ̂_i − λ_i)²/(2λ_i²),

which, due to the asymptotic normality of a_i, is asymptotically chi-squared with one degree of freedom. The mean of this chi-squared distribution is 1, while the exact mean of t is of the form

E(t) = 1 + Σ_{j=1}^∞ c_j/(n − 1)^{(j+1)/2},

where the c_j's are constants. The higher-order approximations of λ̂_i can be used to determine the first constant c_1, and this then may be used to compute the adjusted statistic

t* = {1 − c_1/(n − 1)}t.
The mean of this adjusted statistic is

E(t*) = {1 − c_1/(n − 1)}E(t) = {1 − c_1/(n − 1)}{1 + Σ_{j=1}^∞ c_j/(n − 1)^{(j+1)/2}}
= 1 + Σ_{j=2}^∞ d_j/(n − 1)^{(j+1)/2},

where the d_j's are constants that are functions of the c_j's. Note that the mean of t* converges to 1 at a faster rate than does E(t). For this reason, the chi-squared
distribution with one degree of freedom should approximate the distribution of this adjusted statistic better than it would approximate the distribution of t. This type of adjustment of asymptotically chi-squared statistics is commonly referred to as a Bartlett adjustment [Bartlett (1937, 1947)]. Some discussion of Bartlett adjustments can be found in Barndorff-Nielsen and Cox (1994).
Some of the inequalities for eigenvalues developed in Chapter 3 have important applications regarding the distributions of eigenvalues of certain functions of Wishart matrices. One such application is illustrated in our next example.
Example 9.10. A multivariate analysis of variance, such as the multivariate one-way classification model discussed in Example 3.14, utilizes the eigenvalues of BW^{-1}, where the m × m matrices B and W are independently distributed with B ~ W_m(I_m, b, Φ) and W ~ W_m(I_m, w) (Problem 9.30). We will show that if the rank of the noncentrality matrix Φ is r ≤ m and V_1 and V_2 are independently distributed with V_1 ~ W_{m−r}(I_{m−r}, b − r) and V_2 ~ W_{m−r}(I_{m−r}, w), then

P{λ_{r+i}(BW^{-1}) > c} ≤ P{λ_i(V_1V_2^{-1}) > c}.

Since rank(Φ) = r, there exists an r × m matrix T such that Φ = ½T'T, and so if we define the m × b matrix M' = (T' (0)), then Φ = ½M'M. Since B ~ W_m(I_m, b, Φ), it follows that B can be expressed as B = X'X, where X' is an m × b matrix for which vec(X') ~ N_{bm}(vec(M'), I_b ⊗ I_m). Partitioning X' as X' = (X_1', X_2'), where X_1' is m × r, we find that

B = X_1'X_1 + X_2'X_2 = B_1 + B_2,

where B_1 ~ W_m(I_m, r, Φ) and B_2 ~ W_m(I_m, b − r), since vec(X_1') ~ N_{rm}(vec(T'), I_r ⊗ I_m) and vec(X_2') ~ N_{(b−r)m}((0), I_{b−r} ⊗ I_m). Now for each fixed B_1, let F be any m × (m − r) matrix satisfying F'B_1F = (0) and F'F = I_{m−r}, and define the sets

S_1(B_1) = {B_2, W : λ_{r+i}(BW^{-1}) > c},
S_2(B_1) = {B_2, W : λ_i{(F'B_2F)(F'WF)^{-1}} > c}.

It follows from Problem 3.32(a) that

λ_{r+i}(BW^{-1}) ≤ λ_i{(F'B_2F)(F'WF)^{-1}},

so for each fixed B_1, S_1(B_1) ⊆ S_2(B_1), and it can be easily verified that V_1 = F'B_2F ~ W_{m−r}(I_{m−r}, b − r) and V_2 = F'WF ~ W_{m−r}(I_{m−r}, w). Consequently, if g(W), f_1(B_1), and f_2(B_2) are the density functions for W, B_1, and B_2, respectively, then

∫∫_{S_1(B_1)} g(W)f_2(B_2) dW dB_2 ≤ ∫∫_{S_2(B_1)} g(W)f_2(B_2) dW dB_2 = P{λ_i(V_1V_2^{-1}) > c}.

If we also define the sets

C_1 = {B_1, B_2, W : λ_{r+i}(BW^{-1}) > c},
C_2 = {B_1 : B_1 positive definite},

then the desired result follows since

P{λ_{r+i}(BW^{-1}) > c} = ∫∫∫_{C_1} g(W)f_1(B_1)f_2(B_2) dW dB_1 dB_2
= ∫_{C_2} {∫∫_{S_1(B_1)} g(W)f_2(B_2) dW dB_2} f_1(B_1) dB_1
≤ ∫_{C_2} P{λ_i(V_1V_2^{-1}) > c} f_1(B_1) dB_1 = P{λ_i(V_1V_2^{-1}) > c}.
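A rough Monte Carlo sketch of this eigenvalue inequality, as reconstructed here; the dimensions, the rank-one noncentrality, and the threshold c are all my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)
m, r, b, w, reps, c = 3, 1, 6, 12, 4000, 0.6

T = np.zeros((r, m))
T[0, 0] = 2.0                                    # Phi = T'T/2 has rank r = 1
M = np.vstack([T, np.zeros((b - r, m))])

hits_left = hits_right = 0
for _ in range(reps):
    X = rng.standard_normal((b, m)) + M          # B = X'X ~ W_m(I_m, b, Phi)
    B = X.T @ X
    G = rng.standard_normal((w, m))
    W = G.T @ G                                  # W ~ W_m(I_m, w)
    # eigenvalues of B W^{-1} (same as those of W^{-1} B)
    lam = np.sort(np.linalg.eigvals(np.linalg.solve(W, B)).real)
    hits_left += lam[-(r + 1)] > c               # lambda_{r+1}(B W^{-1})

    V1 = rng.standard_normal((b - r, m - r))
    V2 = rng.standard_normal((w, m - r))
    lam2 = np.sort(np.linalg.eigvals(
        np.linalg.solve(V2.T @ V2, V1.T @ V1)).real)
    hits_right += lam2[-1] > c                   # lambda_1(V1 V2^{-1})

p_left, p_right = hits_left / reps, hits_right / reps
```

With these settings the empirical P{λ_{r+1}(BW^{-1}) > c} should fall well below the empirical P{λ_1(V_1V_2^{-1}) > c}.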
The relationship between the sample correlation and covariance matrices and the expression for var{vec(S)} given in Example 9.8 can be used to obtain an expression for the asymptotic covariance matrix of vec(R). This is the subject of our final example.

Example 9.11. As in Example 9.8, let x_1, ..., x_n be independently distributed with x_i ~ N_m(μ, Ω) for each i, and let S and R be the sample covariance and correlation matrices computed from this sample. Thus, if we use the notation D_X = diag(x_11, ..., x_mm), where X is an m × m matrix, then the sample correlation matrix can be expressed as

R = D_S^{-1/2} S D_S^{-1/2},

while the population correlation matrix is given by
P = D_Ω^{-1/2} Ω D_Ω^{-1/2}.

Note that if we define y_i = D_Ω^{-1/2}x_i, then y_1, ..., y_n are independently distributed with y_i ~ N_m(D_Ω^{-1/2}μ, P). If S_* is the sample covariance matrix computed from the y_i's, then S_* = D_Ω^{-1/2}SD_Ω^{-1/2} and D_{S_*} = D_Ω^{-1/2}D_SD_Ω^{-1/2}, so that

D_{S_*}^{-1/2} S_* D_{S_*}^{-1/2} = D_S^{-1/2} S D_S^{-1/2} = R;

that is, the sample correlation matrix computed from the y_i's is the same as that computed from the x_i's. If A = S_* − P, then the first-order approximation for R is given by (see Problem 8.15)

R = P + A − ½(PD_A + D_AP),
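The scale-invariance just used, that the correlation matrix computed from the y_i's equals the one computed from the x_i's, is easy to see numerically. A sketch (data and scale factors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 50, 3
X = rng.standard_normal((n, m)) @ rng.standard_normal((m, m)) + 1.0

def corr_from_cov(S):
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)          # D_S^{-1/2} S D_S^{-1/2}

S = np.cov(X, rowvar=False)
R = corr_from_cov(S)

# rescaling each variable by positive constants (the analogue of
# y_i = D_Omega^{-1/2} x_i) leaves the correlation matrix unchanged
scales = np.array([0.5, 2.0, 7.0])
R_scaled = corr_from_cov(np.cov(X * scales, rowvar=False))

same = np.allclose(R, R_scaled)
matches_numpy = np.allclose(R, np.corrcoef(X, rowvar=False))
```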
and so

vec(R) = vec(P) + vec(A) − ½{vec(PD_A) + vec(D_AP)}
= vec(P) + vec(A) − ½{(I_m ⊗ P) + (P ⊗ I_m)}vec(D_A)
= vec(P) + [I_{m²} − ½{(I_m ⊗ P) + (P ⊗ I_m)}Λ_m]vec(A),  (9.16)

where

Λ_m = Σ_{i=1}^m (E_ii ⊗ E_ii).

Thus, since

var{vec(A)} = var{vec(S_*)} = {2/(n − 1)}N_m(P ⊗ P)N_m,

we get the first-order approximation

var{vec(R)} = {2/(n − 1)}HN_m(P ⊗ P)N_mH',
where the matrix H = I_{m²} − ½{(I_m ⊗ P) + (P ⊗ I_m)}Λ_m is the premultiplier of vec(A) in the last expression given in (9.16). Simplification (see Problem 9.33) leads to

var{vec(R)} = {2/(n − 1)}N_mΦN_m,  (9.17)

where

Φ = {I_{m²} − (I_m ⊗ P)Λ_m}(P ⊗ P){I_{m²} − Λ_m(I_m ⊗ P)}.
Since R is symmetric and has each diagonal element equal to one, its redundant and nonrandom elements can be eliminated by utilizing v(R). Since v(R) = L_m vec(R), where L_m is the matrix discussed in Section 7.8, we find that the asymptotic covariance matrix of v(R) is given by

var{v(R)} = {2/(n − 1)}L_mN_mΦN_mL_m'.

The Hadamard product and its associated properties can be useful in analyses involving the manipulation of Φ since

Φ = P ⊗ P − (I_m ⊗ P)Λ_m(P ⊗ P) − (P ⊗ P)Λ_m(I_m ⊗ P) + (I_m ⊗ P)Λ_m(P ⊗ P)Λ_m(I_m ⊗ P),

and the last term on the right-hand side of this equation can be expressed as

(I_m ⊗ P)Λ_m(P ⊗ P)Λ_m(I_m ⊗ P) = (I_m ⊗ P)Ψ_m'(P ⊙ P)Ψ_m(I_m ⊗ P).
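This last identity rests on two facts, Λ_m = Ψ_m'Ψ_m and Ψ_m(P ⊗ P)Ψ_m' = P ⊙ P, and both can be checked directly. A sketch with all matrices built explicitly (names are my own):

```python
import numpy as np

m = 3
I = np.eye(m)
# Lambda_m = sum_i (E_ii ⊗ E_ii) and Psi_m = sum_i e_i (e_i ⊗ e_i)'
Lam = sum(np.kron(np.outer(I[i], I[i]), np.outer(I[i], I[i])) for i in range(m))
Psi = np.array([np.kron(I[i], I[i]) for i in range(m)])

rng = np.random.default_rng(8)
A = rng.standard_normal((m, m))
S = A @ A.T + m * np.eye(m)
d = np.sqrt(np.diag(S))
P = S / np.outer(d, d)                  # a valid correlation matrix

lhs = np.kron(I, P) @ Lam @ np.kron(P, P) @ Lam @ np.kron(I, P)
rhs = np.kron(I, P) @ Psi.T @ (P * P) @ Psi @ np.kron(I, P)
```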
PROBLEMS

1. We saw in the proof of Theorem 9.1 that if A is an m × m idempotent matrix, then rank(A) + rank(I_m − A) = m. Prove the converse; that is, show that if A is an m × m matrix satisfying rank(A) + rank(I_m − A) = m, then A is idempotent.

2. Suppose that A is an m × m idempotent matrix. Show that each of the following matrices is also idempotent.
(a) A'.
(b) BAB^{-1}, where B is any m × m nonsingular matrix.
(c) A^n, where n is a positive integer.

3. Let A be an m × n matrix. Show that each of the following matrices is idempotent.
(a) A⁻A.
(b) AA⁻.
(c) A(A'A)⁻A'.

4. Determine the class of m × 1 vectors {x} for which xx' is idempotent.

5. Determine constants a, b, and c so that each of the following is an idempotent matrix.
(a) a1_m1_m'.
(b) bI_m + c1_m1_m'.

6. Let A be an m × n matrix with rank(A) = m. Show that A'(AA')^{-1}A is symmetric and idempotent and find its rank.

7. Let A and B be m × m matrices. Show that if B is nonsingular and AB is idempotent, then BA is also idempotent.

8. Show that if A is an m × m matrix and A² = αA for some scalar α, then tr(A) = α rank(A).
9. Give an example of a collection of matrices A_1, ..., A_k that satisfies conditions (a) and (d) of Corollary 9.7.1, but does not satisfy conditions (b) and (c). Similarly, find a collection of matrices that satisfies conditions (c) and (d) but does not satisfy conditions (a) and (b).

10. Prove Theorem 9.11.

11. Let A be an m × m symmetric matrix with r = rank(A) and suppose that x ~ N_m(0, I_m). Show that the distribution of x'Ax can be expressed as a linear combination of r independent chi-squared random variables, each with 1 degree of freedom. What are the coefficients in this linear combination when A is idempotent?

12. Extend the result of Problem 11 to the situation in which x ~ N_m(0, Ω), where Ω is nonnegative definite; that is, show that if A is a symmetric matrix, then x'Ax can be expressed as a linear combination of independent chi-squared random variables each having one degree of freedom. How many chi-squared random variables are in this linear combination?
13. Let x_1, ..., x_n be a random sample from a normal distribution with mean μ and variance σ², and let x̄ be the sample mean. Write

t = Σ_{i=1}^n (x_i − x̄)²/σ²

as a quadratic form in the vector (x − x̄1_n), where x = (x_1, ..., x_n)'. What is the distribution of t?
14. Suppose that x ~ N_n(μ, Ω), where Ω is positive definite. Partition x, μ, and Ω as

x = (x_1', x_2')',   μ = (μ_1', μ_2')',   Ω = ( Ω_11  Ω_12 ; Ω_21  Ω_22 ),

where x_1 is r × 1 and x_2 is (n − r) × 1. Show that
(a) t_1 = (x_1 − μ_1)'Ω_11^{-1}(x_1 − μ_1) ~ χ²_r,
(b) t_2 = (x − μ)'Ω^{-1}(x − μ) − (x_1 − μ_1)'Ω_11^{-1}(x_1 − μ_1) ~ χ²_{n−r},
(c) t_1 and t_2 are independently distributed.
15. Prove Theorem 9.14.

16. Pearson's chi-squared statistic is given by

t = Σ_{j=1}^m n(x_j − δ_j)²/δ_j,

where n is a positive integer, the x_j's are random variables, and the δ_j's are positive constants satisfying δ_1 + ··· + δ_m = 1. Let x = (x_1, ..., x_m)', μ = (δ_1, ..., δ_m)', and Ω = D − μμ', where D = diag(δ_1, ..., δ_m).
(a) Show that Ω is a singular matrix.
(b) Show that if √n(x − μ) ~ N_m(0, Ω), then t ~ χ²_{m−1}.
17. Suppose that x ~ N_4(0, I_4) and consider three functions t_1, t_2, and t_3 of the components of x.
(a) Write t_1, t_2, and t_3 as quadratic forms in x.
(b) Which of these statistics have chi-squared distributions?
(c) Which of the pairs t_1 and t_2, t_1 and t_3, and t_2 and t_3 are independently distributed?

18. Suppose that x ~ N_4(μ, Ω), where μ = (1, 1, 1, 1)' and Ω = I_4 + 1_41_4'. Define the statistics t_1 and t_2.
(a) Does t_1 or t_2 have a chi-squared distribution? If so, identify the parameters of the distribution.
(b) Are t_1 and t_2 independently distributed?
19. Prove Theorem 9.15.

20. Prove Theorem 9.16.
21. The purpose of this exercise is to generalize the results of Example 9.5 to a test of the hypothesis that Hβ = c, where H is an m_2 × m matrix having rank m_2 and c is an m_2 × 1 vector; Example 9.5 dealt with the special case in which H = ((0), I_{m_2}) and c = 0. Let G be an (m − m_2) × m matrix having rank m − m_2 and satisfying HG' = (0). Show that the reduced model may be written as

y_* = X_*β_* + ε,

where y_* = y − XH'(HH')^{-1}c, X_* = XG'(GG')^{-1}, and β_* = Gβ. Use the sum of squared errors for this reduced model and the sum of squared errors for the complete model to construct the appropriate F statistic.

22. Suppose that x ~ N_m(0, Ω), where r = rank(Ω) < m. If T is any m × r matrix satisfying TT' = Ω, and z ~ N_r(0, I_r), then x is distributed the same as Tz. Use this to show that the formulas given in Theorem 9.21 for positive definite Ω also hold when Ω is positive semidefinite.
23. Let x ~ N_m(0, I_m). Use the fact that the first six moments of the standard normal distribution are 0, 1, 0, 3, 0, and 15 to show that

E(xx' ⊗ xx' ⊗ xx') = I_{m³} + ½ Σ_{i=1}^m Σ_{j=1}^m (I_m ⊗ T_ij ⊗ T_ij + T_ij ⊗ I_m ⊗ T_ij + T_ij ⊗ T_ij ⊗ I_m) + Σ_{i=1}^m Σ_{j=1}^m Σ_{k=1}^m (T_ij ⊗ T_ik ⊗ T_jk),

where T_ij = E_ij + E_ji.

24. Let A, B, and C be m × m symmetric matrices and suppose that x ~ N_m(0, I_m).
(a) Show that E(x'Ax x'Bx x'Cx) = tr{(A ⊗ B ⊗ C)E(xx' ⊗ xx' ⊗ xx')}.
(b) Use part (a) and the result of the previous exercise to derive the formula for E(x'Ax x'Bx x'Cx) given in Theorem 9.23.

25. Let x ~ N_m(μ, Ω), where Ω is positive definite.
(a) Using Theorem 9.20, show that var(x ⊗ x) = 2N_m(Ω ⊗ Ω + Ω ⊗ μμ' + μμ' ⊗ Ω).
(b) Show that the matrix (Ω ⊗ Ω + Ω ⊗ μμ' + μμ' ⊗ Ω) is nonsingular.
(c) Determine the eigenvalues of N_m. Use these along with part (b) to show that rank{var(x ⊗ x)} = m(m + 1)/2.

26. Suppose that the m × 1 vector x and the n × 1 vector y are independently distributed with E(x) = μ_1, E(y) = μ_2, E(xx') = V_1, and E(yy') = V_2. Show that
(a) E(xy' ⊗ xy') = vec(V_1){vec(V_2)}',
(b) E(xy' ⊗ yx') = (V_1 ⊗ V_2)K_mn = K_mn(V_2 ⊗ V_1),
(c) E(x ⊗ x ⊗ y ⊗ y) = vec(V_1) ⊗ vec(V_2),
(d) E(x ⊗ y ⊗ x ⊗ y) = (I_m ⊗ K_nm ⊗ I_n){vec(V_1) ⊗ vec(V_2)},
(e) var(x ⊗ y) = V_1 ⊗ V_2 − μ_1μ_1' ⊗ μ_2μ_2'.

27. Let A, B, and C be m × m symmetric matrices, and let a and b be vectors of constants. If x ~ N_m(0, Ω), show that
(a) E(x'Aa x'Bb) = a'AΩBb,
(b) E(x'Aa x'Bb x'Cx) = a'AΩBb tr(ΩC) + 2a'AΩCΩBb.
28. Suppose that x ~ N_4(μ, Ω), where μ = 1_4 and Ω = I_4 + 1_41_4'. Let the random variables t_1 and t_2 be defined by

t_1 = (x_1 + x_2 − 2x_3)² + (x_3 − x_4)²,
t_2 = (x_1 − x_2 − x_3)² + (x_1 + x_2 − x_4)².

Use Theorem 9.22 to find
(a) var(t_1),
(b) var(t_2),
(c) cov(t_1, t_2).
29. Verify the formulas given at the end of Example 9.7 for E(t_1) and var(t_1).

30. Suppose that the m × 1 vectors {y_ij}, 1 ≤ i ≤ k, 1 ≤ j ≤ n_i, are independently distributed with y_ij ~ N_m(μ_i, Ω). A multivariate analysis of variance utilizes the matrices (Example 3.14)

B = Σ_{i=1}^k n_i(ȳ_i − ȳ)(ȳ_i − ȳ)',   W = Σ_{i=1}^k Σ_{j=1}^{n_i} (y_ij − ȳ_i)(y_ij − ȳ_i)',

where

ȳ_i = Σ_{j=1}^{n_i} y_ij/n_i,   ȳ = Σ_{i=1}^k n_iȳ_i/n,   n = Σ_{i=1}^k n_i.

Use Theorem 9.24 to show that B and W are independently distributed, with W ~ W_m(Ω, w) and B ~ W_m(Ω, b, Φ), where w = n − k, b = k − 1, and

Φ = ½ Σ_{i=1}^k n_i(μ_i − μ̄)(μ_i − μ̄)',   μ̄ = Σ_{i=1}^k n_iμ_i/n.

31. Let X' = (x_1, ..., x_n) be an m × n matrix, where x_1, ..., x_n are independent and x_i ~ N_m(0, Ω) for each i. Show that

E(X ⊗ X ⊗ X ⊗ X) = {vec(I_n) ⊗ vec(I_n)}{vec(Ω) ⊗ vec(Ω)}' + vec(I_n ⊗ I_n){vec(Ω ⊗ Ω)}' + vec(K_nn)[vec{K_mm(Ω ⊗ Ω)}]'.
32. Suppose that the columns of X' = (x_1, ..., x_n) are independently distributed with x_i ~ N_m(μ_i, Ω). Let A be an n × n symmetric matrix, and let M' = (μ_1, ..., μ_n). Use the spectral decomposition of A to show that
(a) E(X'AX) = tr(A)Ω + M'AM,
(b) var{vec(X'AX)} = 2N_m{tr(A²)(Ω ⊗ Ω) + Ω ⊗ M'A²M + M'A²M ⊗ Ω}.

33. Use the results of Problems 7.45(e) and 7.52 to verify the simplified formula for var{vec(R)} given in (9.17).
References
Agaian, S. S. (1985). Hadamard Matrices and Their Applications. Springer-Verlag, Berlin.
Anderson, T. W. (1955). The integral of a symmetric unimodal function over a symmetric convex set and some probability inequalities. Proceedings of the American Mathematical Society, 6, 170-176.
Barndorff-Nielsen, O. E. and Cox, D. R. (1994). Inference and Asymptotics. Chapman and Hall, London.
Bartlett, M. S. (1937). Properties of sufficiency and statistical tests. Proceedings of the Royal Society of London, Ser. A, 160, 268-282.
Bartlett, M. S. (1947). Multivariate analysis. Journal of the Royal Statistical Society Supplement, Ser. B, 9, 176-197.
Basilevsky, A. (1983). Applied Matrix Algebra in the Statistical Sciences. North-Holland, New York.
Bellman, R. (1970). Introduction to Matrix Analysis. McGraw-Hill, New York.
Ben-Israel, A. (1966). A note on an iterative method for generalized inversion of matrices. Mathematics of Computation, 20, 439-440.
Ben-Israel, A. and Greville, T. N. E. (1974). Generalized Inverses: Theory and Applications. John Wiley, New York.
Berman, A. and Plemmons, R. J. (1994). Nonnegative Matrices in the Mathematical Sciences. Society for Industrial and Applied Mathematics, Philadelphia.
Bhattacharya, R. N. and Waymire, E. C. (1990). Stochastic Processes with Applications. John Wiley, New York.
Boullion, T. L. and Odell, P. L. (1971). Generalized Inverse Matrices. John Wiley, New York.
Campbell, S. L. and Meyer, C. D. (1979). Generalized Inverses of Linear Transformations. Pitman, London.
Casella, G. and Berger, R. L. (1990). Statistical Inference. Wadsworth & Brooks/Cole, Pacific Grove, CA.
Cline, R. E. (1964a). Note on the generalized inverse of the product of matrices. SIAM Review, 6, 57-58.
Cline, R. E. (1964b). Representations for the generalized inverse of a partitioned matrix. SIAM Journal of Applied Mathematics, 12, 588-600.
Cline, R. E. (1965). Representations for the generalized inverse of sums of matrices. SIAM Journal of Numerical Analysis, 2, 99-114.
Cochran, W. G. (1934). The distribution of quadratic forms in a normal system with applications to the analysis of variance. Proceedings of the Cambridge Philosophical Society, 30, 178-191.
Davis, P. J. (1979). Circulant Matrices. John Wiley, New York.
Duff, I. S., Erisman, A. M., and Reid, J. K. (1986). Direct Methods for Sparse Matrices. Oxford University Press, Oxford.
Elsner, L. (1982). On the variation of the spectra of matrices. Linear Algebra and Its Applications, 47, 127-138.
Eubank, R. L. and Webster, J. T. (1985). The singular-value decomposition as a tool for solving estimability problems. American Statistician, 39, 64-66.
Fan, K. (1949). On a theorem of Weyl concerning eigenvalues of linear transformations. I. Proceedings of the National Academy of Sciences of the USA, 35, 652-655.
Ferguson, T. S. (1967). Mathematical Statistics: A Decision Theoretic Approach. Academic Press, New York.
Gantmacher, F. R. (1959). The Theory of Matrices, Volumes I and II. Chelsea, New York.
Golub, G. H. and Van Loan, C. F. (1989). Matrix Computations. Johns Hopkins University Press, Baltimore.
Graybill, F. A. (1983). Matrices with Applications in Statistics, 2nd ed. Wadsworth, Belmont, CA.
Grenander, U. and Szegő, G. (1984). Toeplitz Forms and Their Applications. Chelsea, New York.
Greville, T. N. E. (1960). Some applications of the pseudoinverse of a matrix. SIAM Review, 2, 15-22.
Greville, T. N. E. (1966). Note on the generalized inverse of a matrix product. SIAM Review, 8, 518-521.
Hageman, L. A. and Young, D. M. (1981). Applied Iterative Methods. Academic Press, New York.
Hammarling, S. J. (1970). Latent Roots and Latent Vectors. University of Toronto Press, Toronto.
Healy, M. J. R. (1986). Matrices for Statistics. Clarendon Press, Oxford.
Hedayat, A. and Wallis, W. D. (1978). Hadamard matrices and their applications. Annals of Statistics, 6, 1184-1238.
Heinig, G. and Rost, K. (1984). Algebraic Methods for Toeplitz-like Matrices and Operators. Birkhäuser, Basel.
Henderson, H. V. and Searle, S. R. (1979). Vec and vech operators for matrices, with some uses in Jacobians and multivariate statistics. Canadian Journal of Statistics, 7, 65-81.
Hinch, E. J. (1991). Perturbation Methods. Cambridge University Press, Cambridge.
Horn, R. A. and Johnson, C. R. (1985). Matrix Analysis. Cambridge University Press, Cambridge.
Horn, R. A. and Johnson, C. R. (1991). Topics in Matrix Analysis. Cambridge University Press, Cambridge.
Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24, 417-441, 498-520.
Huberty, C. J. (1994). Applied Discriminant Analysis. John Wiley, New York.
Jackson, J. E. (1991). A User's Guide to Principal Components. John Wiley, New York.
Jolliffe, I. T. (1986). Principal Component Analysis. Springer-Verlag, New York.
Kato, T. (1982). A Short Introduction to Perturbation Theory for Linear Operators. Springer-Verlag, New York.
Kelly, P. J. and Weiss, M. L. (1979). Geometry and Convexity. John Wiley, New York.
Khuri, A. (1993). Advanced Calculus with Applications in Statistics. John Wiley, New York.
Krzanowski, W. J. (1988). Principles of Multivariate Analysis: A User's Perspective. Clarendon Press, Oxford.
Lanczos, C. (1950). An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. Journal of Research of the National Bureau of Standards, 45, 255-282.
Lay, S. R. (1982). Convex Sets and Their Applications. John Wiley, New York.
Lindgren, B. W. (1993). Statistical Theory, 4th ed. Chapman and Hall, New York.
Magnus, J. R. (1978). The moments of products of quadratic forms in normal variables. Statistica Neerlandica, 32, 201-210.
Magnus, J. R. (1988). Linear Structures. Charles Griffin, London.
Magnus, J. R. and Neudecker, H. (1979). The commutation matrix: some properties and applications. Annals of Statistics, 7, 381-394.
Magnus, J. R. and Neudecker, H. (1988). Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley, New York.
Mandel, J. (1982). Use of the singular value decomposition in regression analysis. American Statistician, 36, 15-24.
Mardia, K. V., Kent, J. T., and Bibby, J. M. (1979). Multivariate Analysis. Academic Press, New York.
Mathai, A. M. and Provost, S. B. (1992). Quadratic Forms in Random Variables. Marcel Dekker, New York.
McCullagh, P. (1987). Tensor Methods in Statistics. Chapman and Hall, London.
McLachlan, G. J. (1992). Discriminant Analysis and Statistical Pattern Recognition. John Wiley, New York.
Medhi, J. (1994). Stochastic Processes. John Wiley, New York.
Miller, R. G., Jr. (1981). Simultaneous Statistical Inference, 2nd ed. Springer-Verlag, New York.
Minc, H. (1988). Nonnegative Matrices. John Wiley, New York.
Moore, E. H. (1920). On the reciprocal of the general algebraic matrix (Abstract). Bulletin of the American Mathematical Society, 26, 394-395.
Moore, E. H. (1935). General analysis. Memoirs of the American Philosophical Society, 1.
Morrison, D. F. (1990). Multivariate Statistical Methods. McGraw-Hill, New York.
Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. John Wiley, New York.
Nayfeh, A. H. (1981). Introduction to Perturbation Techniques. John Wiley, New York.
Nel, D. G. (1980). On matrix differentiation in statistics. South African Statistical Journal, 14, 137-193.
Nelder, J. A. (1985). An alternative interpretation of the singular-value decomposition in regression. American Statistician, 39, 63-64.
Neter, J., Wasserman, W., and Kutner, M. H. (1985). Applied Linear Statistical Models: Regression, Analysis of Variance, and Experimental Design. Irwin, Homewood, IL.
Olkin, I. and Tomsky, J. L. (1981). A new class of multivariate tests based on the union-intersection principle. Annals of Statistics, 9, 792-802.
Ostrowski, A. M. (1973). Solution of Equations in Euclidean and Banach Spaces. Academic Press, New York.
Penrose, R. (1955). A generalized inverse for matrices. Proceedings of the Cambridge Philosophical Society, 51, 406-413.