Methods of MATRIX ALGEBRA

Stanford Research Institute
Menlo Park, California

New York, 1965
© 1965

United Kingdom Edition published in London, W.1

Library of Congress Catalog Card Number 65-19017
FOREWORD

M. C. PEASE
Symbols and Conventions

x*, A*      the complex conjugate of a vector or matrix
xT, AT      the transpose
x†, A†      the hermitian conjugate (conjugate transpose)
A⁻¹         the inverse of A
[A, B]      the commutator, (AB - BA), p. 279
CHAPTER I

Vectors and Matrices

1. VECTORS

Definition. A vector is a set of n numbers arranged in a definite order.

The n numbers, which we take from a field F, are called the components of the vector, and n is its dimensionality. The elements of F are called scalars.
A vector with components x1, x2, ..., xn is written as the column

x = col(x1 x2 ... xn)     (1)

or, using the superscript T for the transpose, as the row

xT = (x1 x2 ... xn)     (1')

As an example of how such a set of components arises, consider the 2-port network, the "black box," of Fig. 1.
FIG. 1. A 2-port network.

At terminal pair #1 the voltage and current are E1 and I1; at terminal pair #2 they are E2 and I2. The conditions at the two ports are then described by the vectors

x1 = col(E1 I1),     x2 = col(E2 I2)

whose components are the E's and I's.
4
AND
on
x1 of the I
#1 by n,
N, (N by
(N -
n
vector
by
+') by
on up
1. on,
2.
5
on of
2
?
3 do
by
2. ADDITION OF VECTORS AND SCALAR MULTIPLICATION

Definition. The sum of two n-dimensional vectors x and y is the vector whose components are the sums of the components of x and y. That is, if

x = col(x1 x2 ... xn),     y = col(y1 y2 ... yn)

then

x + y = col(x1 + y1, x2 + y2, ..., xn + yn)     (7)

where the additions are carried out in the field F.
Definition. The product of a scalar a, in a field F, and a vector x, whose components are in F, is the vector whose components are a times the components of x. If

x = col(x1 x2 ... xn)     (8)

then

ax = col(ax1 ax2 ... axn)

These operations have the following properties:
1. Vector addition is commutative and associative:

x + y = y + x,     x + (y + z) = (x + y) + z

2. The set Sn of n-dimensional vectors over F contains a null vector 0 = col(0 0 ... 0) such that, for any x in Sn,

x + 0 = x

3. For any x in Sn there is a negative, (-x), such that

x + (-x) = 0

4. Scalar multiplication is associative:

a(bx) = (ab)x     (13)

5. Scalar multiplication is distributive:

(a + b)x = ax + bx,     a(x + y) = ax + ay

6. Multiplication by the unit scalar of F leaves any vector unchanged:

1 · x = x

7. Scalar multiplication is commutative:

ax = xa     (16)

These properties are immediate consequences of the corresponding properties of the field F. They are the properties that are abstracted in the concept of a linear vector space, which we consider next.
3. LINEAR VECTOR SPACES
Definition. A set S of n-dimensional vectors is a linear vector space over the field F if the sum of any two vectors in S is in the set and if the product of any vector in S times a scalar in F is in S .
8
I.
S,
y
x
p
a
F,
(ax
+ fly)
S.
F,
F, , F2. (ax
+ by)
a
b book,
p,
p
characteristic p ) .
p binary field, do
F F book,
4. DIMENSIONALITY A N D BASES
not
=
2,
4.
9
DIMENSIONALITY AND BASES
Definition. The set of k n-dimensional vectors x1, x2, ..., xk are said to be linearly independent in the field F if and only if there exists no set of scalars c1, c2, ..., ck of F, not all zero, such that

c1x1 + c2x2 + ... + ckxk = 0     (17)

Definition. A linear vector space S has the dimension k if S contains at least one set of k vectors, none of which are the null vector, that are linearly independent, but does not contain any set of (k + 1) linearly independent vectors.
Definition. If S is k-dimensional, then a set of vectors in S, x1, x2, ..., xk, that are linearly independent is called a basis for S.

The space S is said to be generated or spanned by the basis; it is the linear envelope of the basis vectors over the field F.
If the xi are a basis of S, then any vector y in S can be written as a linear combination of them,

y = c1x1 + c2x2 + ... + ckxk     (18)

for suitable scalars ci; if this were not possible for some y, that y together with the xi would form a set of more than k linearly independent vectors, contrary to the dimensionality of S. A set of vectors xi in terms of which every vector of S can be so expanded is said to be complete.
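The definition of linear independence, and the expansion of Eq. (18), are easy to check numerically. The following is a minimal sketch using numpy (not part of the original text); the vectors are arbitrary example data, and the independence test is done through the rank of the matrix whose columns are the vectors.

    import numpy as np

    # Columns are the candidate basis vectors x1, x2, x3 (arbitrary example data).
    X = np.array([[1.0, 0.0, 1.0],
                  [2.0, 1.0, 0.0],
                  [0.0, 4.0, 1.0]])

    k = X.shape[1]                    # number of vectors
    rank = np.linalg.matrix_rank(X)   # dimension of the space they span
    print("linearly independent:", rank == k)

    # Expansion of a vector y in the basis, Eq. (18): solve X c = y for the c_i.
    y = np.array([4.0, 4.0, 11.0])
    c = np.linalg.solve(X, y)         # works only if the x_i form a basis
    print("coefficients c_i:", c)
    print("reconstruction ok:", np.allclose(X @ c, y))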
10
I.
xi
y expansion
xi.
y on
n
k
n
k.
0 < k < n,
the whole space. proper subspace no
ux
subspace.
null space,
+ by + cz = 0
(19)
on
on linear mani-
fold
no on
5. L I N E A R H O M O G E N E O U S SYSTEMS-MATRICES
do
5.
11
LINEAR HOMOGENEOUS SYSTEMS-MATRICES
1.
1
x2, x2 .
by
x1
x2
x2.
on
do
As
box
1,
no
(21) by
x1
linear
x2
a,
f(ax2) = ax,
by
good
12
I . VECTORS A N D MATRICES
?
homogeneity.
x, #1
x,
x,
x1 1
x1 El I,
+ BIZ = CE, + nr,
=
AE,
A, B, E,
E,
I,.
I,. x, by maps
x, .
mapped onto
x, maps S , into
, Eq.
x,
onto S , .
.
x, S,
,
5.
y1
= %lJl
y2
=
E D,
4-a12x2 t
a,,%,
ym = %lXl
A, B,
13
LINEAR HOMOGENEOUS SYSTEMS-MATRICES
+ a2pxz 4 +
arn2X2
"'
"'
t n,,,x,1 a2pn
+ ... +
GITlxn
E a,, , do
?
do
E,
I,,
xl,
..., x, ,
any any do
any
matrix A
=
A
(c
B
D)
= (aii)
(26)
14
I . VECTORS A N D MATRICES
on
main diagonal
A.
aii . on
diagonal. on
by A
, u P 2 ,..., unn)
=
(27)
on
dimensionality
by
m x n
A A on
n
(x,
, ..., xn)
“ m by n”)
, ..., y J .
m
A A 2 x 2 A
B
A
B
A , B
...
on
by
2 x 2
x2
on
S,
x1
S, I
x1
x,
.
6.
15
PARTITIONED MATRICES
by
6. PARTITIONED MATRICES
on up
A
=
N
&I m x m
A
(m
M R N)
(s
+ n) x ( m + n).
R,
n x n m x n,
S n x m.
no
partitioned
A
A
A, A
quasidiagonal.
(29)
on
A
= quasidiag(A, , A ? ,
-*a,
AP)
V on
so
by no
(30)
16
I. VECTORS AND MATRICES
m x n m x 1 A
=
(xi xz
9
. *)
(31)
xn)
A,
xi on
A
A =
i"i YmT
on
A
7. ADDITION O F MATRICES A N D SCALAR MULTIPLICATION
Definition. The sum of A = ( a i j )and B = (bij)of the same dimensions is the matrix whose terms are the term-by-term sum of the terms of the separate matrices:
(33) Definition.
The product of A = ( a i j ) times a scalar a is the matrix
aA = ( a a i j ) : (34)
17
8.
do
by 8. MULTIPLICATION O F A MATRIX T I M E S A VECTOR
(24) (22) ?
1 maps
x2’sonto
xl’s.
xl”
x2
(22) (24)
operating on x2
y = 2x,
y
by
by
x
x,
xl,
y’s
x’s
by x1 = NIX,
(35)
x2. ?
y
=
AX
y
A by
by
x
by
18
VECTORS
MATRICES
x. y. no
by AX)
=
(orA)~= A((Yx)= AX)^
(40)
no
by
3’2
“.
= (XI
x2
XI?) x
“.
i:“ ::j n,,
yT
==
1:
a2n
’.’
= %,a,,
amn
(41)
xTAT
yT xT by yz
I;
+
x2a2.2
+ ”. +
by
by
w 4 n
(42)
(41) by on a,, 1s
, aZ2,
aji .
transposes
A
on m x n
n x m
n
m m
n
uij ,
(41)
8.
19
MATRIX MULTIPLICATION
x 1 x n
by
xT
n x 1
Eq. Eq. (37) Eq. (41)
AX)^ = x’A”
(43)
9. MATRIX MULTIPLICATION
by on on z.
y,
,to z
x.
x.
z product
y
=
AX
z
=
By
x.
20
I . VECTORS A N D MATRICES
p A B. 1
I, up
A
2, B.
of y
(47) (4), R R
(45)
2 by
(47),
x.
z z
(45)
=
By
=
B(Ax) = (BA)x
(47)
(48)
9.
21
MATRIX MULTIPLICATION
The (i, j) term of the product BA is found from the ith row of B and the jth column of A:

(BA)ij = Σk bik akj     (52)

As an example, let

B = ( 2  3 ),     A = ( 6  7 )
    ( 4  5 )          ( 8  9 )

Then

C = BA = ( 2·6 + 3·8    2·7 + 3·9 ) = ( 36  41 )
         ( 4·6 + 5·8    4·7 + 5·9 )   ( 64  73 )     (54)

(A numerical check of this product is given following the list of properties below.)

If A is n × p and B is m × n, the product BA is defined and is m × p. Matrix multiplication, where it is defined, has the following properties:

1. (A + B)C = AC + BC
2. A(B + C) = AB + AC
3. A(BC) = (AB)C
22
I. VECTORS A N D MATRICES
4. There is a null matrix, 0, all of whose terms are zero, such that for any A of appropriate dimensions,

0 · A = A · 0 = 0

5. There is an identity matrix, I, such that for any square A of the same order,

IA = AI = A

The identity matrix has ones along the main diagonal and zeros elsewhere:

I = ( 1 0 0 ... )
    ( 0 1 0 ... ) = (δij)     (55)
    ( 0 0 1 ... )
    ( ...       )

where

δij = 1 if i = j,     δij = 0 if i ≠ j
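The 2 × 2 product worked out in Eq. (54), and the role of the identity and null matrices, can be checked directly. This is a small illustrative numpy sketch, not part of the original text.

    import numpy as np

    B = np.array([[2, 3],
                  [4, 5]])
    A = np.array([[6, 7],
                  [8, 9]])

    print(B @ A)            # [[36 41]
                            #  [64 73]]  -- agrees with Eq. (54)

    I2 = np.eye(2, dtype=int)
    Z  = np.zeros((2, 2), dtype=int)

    print(np.array_equal(I2 @ A, A) and np.array_equal(A @ I2, A))  # True: IA = AI = A
    print(np.array_equal(Z @ A, Z) and np.array_equal(A @ Z, Z))    # True: 0A = A0 = 0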
Also,
10. AN ALGEBRA
n x n algebra product relation, has
of * A ring
(b)
no
1 1.
23
COMMUTATIVITY
n x n
n x n
by
A
n2
n x n
B F n x n
cy
/3
(cuA + PB)
F,
F, of
A
B
AB.
associative algebra.
x(yz) # (xy)~.
n x n
of
1
Eij
A A
A
=
=
(aij)
r)~ijEij ii
on
Eij no
cu
C C ~ , ,=E0~ ~ ij
11. COMM UTATlVlTY
F
;i n x n (57)
24
I . VECTORS AND MATRICES
In general, AB ≠ BA. Using the same A and B as before,

AB = ( 6·2 + 7·4    6·3 + 7·5 ) = ( 40  53 )
     ( 8·2 + 9·4    8·3 + 9·5 )   ( 52  69 )

which is not equal to the BA found in Eq. (54). Two matrices can never be assumed to commute without proof.
do n x n commutative subspace
n x n n x n on
2 of
up no
As
12.
25
OF
A(B + C)
AB
AC,
(A + B).
As
(A
+ B)' = (A + B)(A + B) = A' + BA + AB + B2
(59)
no
BA
AB a
12. DIVISORS O F Z E R O
by 1 0 0 0 0 0 (0 O N 1 0 ) = (0 0)
by
nilpotent.
X). A
Ax
not
A
= 0,
x Ax
=
Axi
=0
0
any
xf
A
i,
A
26
I.
13. A M A T R I X AS A R E P R E S E N T A T I O N O F A N ABSTRACT OPERATOR
4
on
#I
1,
#2. abstract operator.
by arepresentation of the abstract operator. 1
El ABCD matrix
I, ,
E,
I,
E's
transmission matrix,
Eq. (4),
E-I basis.
on
I's
wave matrix
on
wave basis.
E
+ RI R.
E
-
RI
3,
"-i
-u1
FIG. 3.
s
Waves at the terminals
-:-I a 2-port network.
14.
Eq. on
27
OTHER PRODUCT RELATIONS
scattering matrix scattering basis.
E’s I’s. impedance matrix. admittance
E2,
representations ABCD
El
I,
E,
I1
I,
I, , El
abstract operator.
V. 14. O T H E R P R O D U C T R E L A T I O N S
2.
4.
28
AND
-
-
#2
I 1 J
S
(61)
on
on
S (61)
by
A,
v2
u2
A, A1
S S,
(62),
=
aid,
-
blcl ,
A,
=
a,d, - b,c,
(65)
(62),
star product of
S, ,
s
=
s, * s,
(66)
9
15.
29
T H E INVERSE OF A MATRIX
on
A
15. T H E I N V E R S E O F A M A T R I X
1. #2,
x1
x2 A:
x2
x1 =
x2 on xl. x2 =
A-l
inverse
A.
(68), x2 =
x1 =
=
=
=
=
x2, = M-1 =
I
1
(55). A-'
E,
I, y l , ...,y,
El
I,, m
n
n
x,
, ..., x,, ,
=
n. xl,
..., x, .
30
I . VECTORS A N D MATRICES
y l , ...,yn x l , ..., x , i f and
(23) only ;f a11
(A\=
a12
a1n
"'
: Qn1
: #O an2
'.'
(72)
ann
A.
Aij
A-l, by
+1
i
+j
by
ij
A-1
=
by A I. A-'
-1
i
+j
odd, by
(74)
Definition. A square matrix A is singular i f its determinant is zero. I t is nonsingular if its determinant is not zero.
Theorem. Given a nonsingular square matrix A, its inverse exists uniquely. Furthermore, its left inverse is the same as its right.
Also, if A = (aij) and B = (bij) are square matrices of the same order, the determinant of the product is the product of the determinants:

|AB| = |A| · |B|     (76)

Hence, by Eq. (76), the product AB is nonsingular if and only if both A and B are nonsingular.
Theorem. If A, B, C, etc., are n × n square matrices, all of which are nonsingular, then the inverse of the product (ABC ...) is the product of the inverses in reverse order.

To prove this, let

X = (ABC ...)⁻¹

so that

(ABC ...)X = I     (77)

Multiplying on the left by A⁻¹, then by B⁻¹, then by C⁻¹, and so on, gives

X = (ABC ...)⁻¹ = ... C⁻¹B⁻¹A⁻¹     (79)
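Both this theorem and the equality of left and right inverses are easy to confirm numerically. A minimal numpy sketch (the matrices are arbitrary nonsingular examples, not from the text):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))   # almost surely nonsingular

    ABC = A @ B @ C
    inv_direct  = np.linalg.inv(ABC)
    inv_reverse = np.linalg.inv(C) @ np.linalg.inv(B) @ np.linalg.inv(A)
    print(np.allclose(inv_direct, inv_reverse))      # True: (ABC)^-1 = C^-1 B^-1 A^-1

    Ainv = np.linalg.inv(A)
    print(np.allclose(Ainv @ A, np.eye(4)),          # left inverse
          np.allclose(A @ Ainv, np.eye(4)))          # equals right inverse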
16. RANK O F A MATRIX
n x n
A.
rank k x k,
A (n - k )
k
no
A
32
I . VECTORS A N D MATRICES
k x k
so
by
n x n
A
A
n. (n - k)
n,
17. GAUSS'S ALGORITHM³
15
A by
A,
Ax x,
(80)
=y
y.
Eq.
a,, = 0, x1 # 0.
a,, # 0 x1
(i # l ) ,
x,
(ail/all)
b’s
so y.‘
, z
= y2
-
yi’
by
a tlYl/all .
³ An algorithm is a procedure whereby a desired result can be obtained in a finite number of computable steps. The term is usually restricted, however, to procedures that are practical for at least computer calculation.
17.
33
GAUSS’S ALGORITHM
b,,
x,
x’s
x2
b’s
b,, # 0.
x,
allXl
+ + hz,x, + hzsx, + ... + bznxn + ... +
i -n l $ 2
a13x3
”‘
1alrrXn = y1
c3nxn
c33x3
= yz’
,,
= y3
i“!;j !j(i)011
a12
a13
:::...
-
...
..
Xn
fnn
The transformed matrix, Eq. (83), is upper triangular. The last equation involves only xn and gives

xn = yn⁽ⁿ⁻¹⁾/fnn

provided fnn ≠ 0. With xn known, the next-to-last equation gives xn-1, and so on, working back up through the set.
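The elimination and back-substitution procedure just described can be written out compactly. The following is a minimal sketch (not the author's code), and it assumes the regular case in which no pivoting is needed, i.e., that the leading terms such as a11, b22, ... never vanish.

    import numpy as np

    def gauss_solve(A, y):
        """Solve Ax = y: forward elimination to upper triangular form,
        then back substitution.  No row interchanges (assumes nonzero pivots)."""
        A = A.astype(float).copy()
        y = y.astype(float).copy()
        n = len(y)
        # Forward elimination: subtract (a_il / a_ll) times row l from row i.
        for l in range(n - 1):
            for i in range(l + 1, n):
                m = A[i, l] / A[l, l]
                A[i, l:] -= m * A[l, l:]
                y[i]     -= m * y[l]
        # Back substitution: x_n = y_n / f_nn, then work upward.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (y[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[2.0, 1.0, 1.0],
                  [4.0, 3.0, 3.0],
                  [8.0, 7.0, 9.0]])
    y = np.array([1.0, 2.0, 5.0])
    x = gauss_solve(A, y)
    print(x, np.allclose(A @ x, y))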
34
I . VECTORS AND MATRICES
do
on
y. A-l. y1
xi
AX^ i
1
n.
=
(84)
y+
A-l
xi. up
on do,
x
(85),
y. y; = y&”= 0.
do
A
on
A x
book.
y
18.
35
2-PORT NETWORKS
18. 2-PORT N E T W O R K S
by
5.
FIG.5.
Partition
a ladder network.
up
2. a
Ei
E, Ii = I , =
+ RI,
36
I. VECTORS A N D MATRICES
TABLE I
TRANSMISSION MATRICES OF BASIC 2-PORT ELEMENTS

Element                              Matrix
Series impedance
i:,4)
Shunt admittance
Transmission line
cos p
i
jZsin p
-cp-
(isin p
cos
Characteristic impedance Z Electrical length p
cosh
Waveguide below cutoff Characteristic impedance jX Electrical length jr
r
-sinh
Transformer
M
1.
by
jX sinh
r
cosh
r
r
i
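The overall transmission (ABCD) matrix of a ladder network, partitioned into sections as in Fig. 5, is the ordered product of the section matrices. The sketch below cascades a series impedance, a shunt admittance, and a lossless line, using the standard ABCD forms (a series element Z has matrix [[1, Z], [0, 1]] and a shunt element Y has [[1, 0], [Y, 1]]); the numerical values are arbitrary and the code is illustrative only, not taken from the text.

    import numpy as np

    def series_impedance(Z):
        return np.array([[1.0, Z], [0.0, 1.0]], dtype=complex)

    def shunt_admittance(Y):
        return np.array([[1.0, 0.0], [Y, 1.0]], dtype=complex)

    def line(beta, Z0):
        # Lossless transmission line of electrical length beta, impedance Z0.
        return np.array([[np.cos(beta),           1j * Z0 * np.sin(beta)],
                         [1j * np.sin(beta) / Z0, np.cos(beta)]])

    # Cascade: the overall ABCD matrix is the ordered product of the sections.
    M = series_impedance(10.0) @ shunt_admittance(0.02j) @ line(np.pi / 4, 50.0)

    print(M)
    print("det M =", np.linalg.det(M))   # each section has unit determinant, so det M = 1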
19.
37
EXAMPLE
2. on A B E, ( C D)( I ,
El
=
I1 =
+
AE, BIZ CE, $- DI, -
E,
=
1,
=
BC)
DE, - BI, -CE, AI,
+
I,
I,
1):
A
=
D.
3. A
I.
AB. (+{El*
+ E*I})
19. EXAMPLE
As 2,
B,
38 by
2,
=
2,.
by
Y,
Z,,
2, , 2, ,
Y
Exercises
1.
8,
2, ( c ) coI(1,
I,
3,
0,
2, 4, - I ,
2.
col(0, - 1 , 0,
1,
E, ?
on on x1 , x2 , ..., x, x1
+ + ..' + x,
=0
XI
+ x, + ... + x,
=
x2
1
,
39
EXERCISES
3. of
x
= C O ~ ( U 6, , C, a,
6, C, a, ...)
a, b, c, a, 6 , c
4.
S, ,
S,
sum
S,
S,
S,
S,,
S,
union
S, , S,
S, S,
intersection
.
S, , S, .
S, ?
5. A =
AB
I B I.
(';'
B = (1 - J
'j.), 1 -J
I AB I
BA.
'
-j
I BA I
IAI
6.
7. A = ( O0 1
8. A
=
(;
by
1)
1)
1 0 A=f
a
0 0
40
I . VECTORS AND MATRICES
9.
of 1
A=(:
10. 11.
1 -1 2 -3
-1
0 -1
: ; 1;)
Eq. (64)
Ya L
==C by
12.
a A=(-b
b a)
A (AB).
a
(a
of
+ jb).
B b
by semigroup have the group property.
of
a a
group,
a = 1, b
=
0.)
13.
A=(
AB # BA
-c
a f j b jd
+
a
-
jb
b,
b
41
EXERCISES
14.
An
A
by
Bn,
B n
n
(AB)" 15.
A
d
a
Pn
16.
= ad - bc
(n
+
17.
D
n.
x, dldx. x, x=,..., xn.
D
A, B, C,
n x n
(A
X
D on
+ B) Y
AX+BY=C BXtAY = D
X 2n x 2n
18.
A,
C,
M-1
D
=
Y. M,
n x n
(A - BD-'C)-' (B - AC-'D)-'
of
(D - CA-lB):'
(A -
42 19.
M
2n x 2n
n x n
D)
=
D
(Hint:
D
by
X
C
CHAPTER II

The Inner Product
(3) 2 3
?
do
by topological space).
1. UNITARY INNER PRODUCT
(9
x =
xi7
x
3
no
A
by to
43
44
11. THE INNER PRODUCT
by 11 x 11
x”, no a
11 x 11
I xi 1
xi
xi
< E,
< E. by
by a.
a,
on
As
x
y
=
y,
by
x,
!I x (1 (3)
=
(x, X)l’Z
inner product
(4)
x
y.
Many authors use Σi xiyi*. We prefer the form given above since it somewhat simplifies symbology that we shall introduce in the next section. The distinction is trivial, however, providing one is careful to note which form is being used.
1.
45
UNITARY I N N E R PRODUCT
()
Eq.
x x y
x
FIG. 1. The unitary inner product of two real vectors.

Consider, as in Fig. 1, two real two-dimensional vectors x and y making angles α and β with the #1 axis. Their components are

x1 = x cos α,     y1 = y cos β
x2 = x sin α,     y2 = y sin β     (5)

so that

(x, y) = x1y1 + x2y2 = xy(cos α cos β + sin α sin β) = xy cos(α − β)     (6)
(x, x on y,
y*,
y on x, y. on
x*.
x
by
unitary inner product relation unitary space U-space
v,
x
y by
Eq.
U,
.
46
11. THE INNER PRODUCT
(x,yj
cp
x
y
cp
As
Eq. (3). (x,x> 2 0
(9)
x
Also
<x, Y> =
(Y, x>*
(10)
(x, ay> = 4x9 Y>
u
x
x
u
v, (x,u
(x, au
+ v> = (x,u> + <x,v>
+ bv)
=
a(x, u)
+ b(x, v)
by
(1 1)
linear in the second
factor.
( 1 1)
(au
+ bv, x) = a*(u, x) + b*(v, by
x) antilinear
The inner product is also called the scalar product.

Theorem (Cauchy-Schwartz inequality). Given the inner product relation of Eq. (3), then for any x ≠ 0 and y ≠ 0,

|(x, y)|² ≤ (x, x)(y, y)     (15)

The proof follows by expanding (ax + by, ax + by) ≥ 0 and making a suitable choice of the scalars a and b; equality holds only if x and y are proportional.
Theorem (Triangle inequality). Given the inner product relation of Eq. (3), then for any x and y,

(x + y, x + y)^(1/2) ≤ (x, x)^(1/2) + (y, y)^(1/2)     (17)

The proof follows from the Cauchy-Schwartz inequality by expanding (x + y, x + y) and bounding the cross terms.
2. ALTERNATIVE REPRESENTATION OF UNITARY INNER PRODUCT

The transpose of the column vector x is the row vector

xT = (x1 x2 ... xn)     (18)

In this notation the unitary inner product of Eq. (3) is

(x, y) = Σi xi*yi = x*Ty     (19)

We define the hermitian conjugate of a vector as the transpose of its complex conjugate,

x† = x*T = (x1* x2* ... xn*)     (20)

and likewise the hermitian conjugate A† of a matrix A as the transpose of its complex conjugate, so that the (i, j) term of A† is aji*     (21)

The unitary inner product of Eq. (3) can then be written

(x, y) = x†y     (23)
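In numerical work the unitary inner product x†y of Eq. (23) is the conjugated dot product. A small numpy sketch (illustrative only) showing that the conjugate goes on the first factor, so that (x, y) = (y, x)*:

    import numpy as np

    x = np.array([1 + 1j, 2 - 1j])
    y = np.array([3 + 0j, 1 + 4j])

    ip_xy = np.vdot(x, y)      # conjugates its first argument: x†y
    ip_yx = np.vdot(y, x)      # y†x

    print(ip_xy, ip_yx)
    print(np.isclose(ip_xy, np.conj(ip_yx)))   # (x, y) = (y, x)*
    print(np.vdot(x, x).real >= 0)             # (x, x) is real and nonnegative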
3.
GENERAL (PROPER)
49
INNER PRODUCT
on
xty,
3. G E N E R A L (PROPER) I N N E R P R O D U C T
(x, x>
(A)
(B)
(x, x) (x, x)
x =
0
x
=
0
(x, y)
y
(x, Y>
=
(x, ay
+ pz> = a(x, y) + p(x, z )
x,
(Y, x > *
44).
I). (3)
K I, xtKy
K
=
kij ,
(ytKx)*
8). (25)
50
11. THE INNER PRODUCT
on
i,j
x
y,
kij
K
=
=
k,*,
Kt
Definition. A square matrix K such that it equals its hermitian conjugate (K = K†) is called hermitian.
(x, x)
x. good
Definition. A square hermitian matrix K is positive definite if, for all x except the null vector,

x†Kx > 0     (30)

It is positive semidefinite, or nonnegative definite, if, for all x except the null vector,

x†Kx ≥ 0     (31)

Reversing the inequalities gives the definitions of negative definite and negative semidefinite (nonpositive definite) matrices. If x†Kx can take either sign, K is called indefinite.
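For a hermitian K, positive definiteness can be tested numerically through its eigenvalues, which must all be positive. A minimal sketch with an arbitrary example matrix (not from the text):

    import numpy as np

    K = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 1.0],
                  [0.0, 1.0, 2.0]])          # real symmetric, hence hermitian

    eigvals = np.linalg.eigvalsh(K)          # real eigenvalues of a hermitian matrix
    print(eigvals)
    print("positive definite:    ", np.all(eigvals > 0))
    print("positive semidefinite:", np.all(eigvals >= 0))

    # Spot check of the quadratic form x†Kx for a few random complex x.
    rng = np.random.default_rng(1)
    for _ in range(3):
        x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
        print((x.conj() @ K @ x).real > 0)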
x (of
(24) K no
K
(24)
4.
51
EUCLIDEAN I N N E R PRODUCT
K proper
(23),
K K
by
=
a
I, metric on
on on
(9),
(1
Hilbert space.
s(x) = xtsx
=
z
xi*xjsij
(32)
i3
quadrat,icform
xi
. (St = S ) xi,
S s(x) hermitian quadratic form.
4. E U C L I D E A N I N N E R P R O D U C T
(3) <x,Y>
=
z i
xiyi
Y'X
z=
(33)
52
11. T H E INNER PRODUCT
En.
a
(24) (x, y)
(34)
= x%y
S ST = S ,
S
=
sii
(sii),
=
sii
.
S
x
(A’)
y
(34)
S,
x.
(x, x)
(x,x) (x, (D’) (x,
=
=
x
0
+ bz)
x) =
a<x,
=
0.
+ b<x,
2).
(D’)
(3) (24)
(33) (A)-(D),
As
(33), x, y,
(34)
(3) (34)
(24)
(A’)-(D’)
5. a
53
SKEW AXES
b
a
b. of Eq. (33)
Eqs. (3)
(33)
x+y = 2(a,a,
+ bib,)
x’y = 2(a,a,
-
b,b,)
K xTy,
K.) do
5. SKEW A X E S
K
x
a
2. on
x1
x2, a
6, a = XI
b
+ x2 cos y
= x,siny
FIG. 2. Skew axes in two-dimensional euclidean space.
54
PRODUCT
11.
x, 1 I"
by =
a2
+ b2 = xI2 + 2x,x,
cos 'p
+ xi2
c0s7
K = (cosy
1
K.
K
by
on by
by do. no
a
(3) no a priori no
priori.
do odd on
As
(3),
E-I
5.
55
SKEW AXES
by rotational
1
K = -1( 0 1 2 1 0
(37)
K
f, . by by fn = fo
parametric harmonics
fo ,
+ nfp ,
n
=
on
0, fn
... .
. by
En
In fn
e = col(... E-,
i
P
=
=
.
by
, E, , El , ...)
col( ...I-, , I , , 1, , ...)
diag( ... l/f-l, l/f,,
...)
56
11.
THE INNER PRODUCT
x
=
=
O P (P 0 )
(43)
by z,
s
=x
O P e t ~= x (et, it)(P O i
)( )
(44)
K,
by
6. ORTHOGONALITY
Definition. Given an inner product relation defined by the metric K, two vectors x and y are orthogonal, or precisely, K-orthogonal, when

(x, y) = x†Ky = 0

so that, in the real euclidean case, the angle between them is 90°.

Definition. Given an inner product relation defined by the metric K, the set of vectors ui is said to be orthogonal, or better, K-orthogonal, when

(ui, uj) = ui†Kuj = 0,     i ≠ j     (45)

so that each pair of vectors is K-orthogonal.
Suppose the n vectors ui form a K-orthogonal basis of the space. Any vector x can then be expanded as

x = Σi ai ui     (46)

To find the n coefficients ai, multiply Eq. (46) on the left by uk†K, for k = 1, ..., n:

uk†Kx = Σi ai uk†Kui     (47)

Since the ui are K-orthogonal, only the i = k term of the sum survives, so that

ak = (uk†Kx)/(uk†Kuk)     (48)
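Equation (48) translates directly into code. The sketch below is not from the text: the metric and the starting vectors are arbitrary examples, and a Gram-Schmidt step in the K metric is used simply to manufacture a K-orthogonal basis for the demonstration.

    import numpy as np

    def k_inner(K, a, b):
        return a.conj() @ K @ b

    # Hermitian positive definite metric K (example).
    K = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 1.0],
                  [0.0, 1.0, 2.0]])

    # Build a K-orthogonal basis from arbitrary independent vectors.
    raw = [np.array([1.0, 0.0, 0.0]),
           np.array([1.0, 1.0, 0.0]),
           np.array([1.0, 1.0, 1.0])]
    basis = []
    for v in raw:
        for u in basis:
            v = v - (k_inner(K, u, v) / k_inner(K, u, u)) * u
        basis.append(v)

    # Expansion coefficients of an arbitrary x, Eq. (48): a_k = (u_k† K x)/(u_k† K u_k).
    x = np.array([3.0, -1.0, 2.0])
    coeffs = [k_inner(K, u, x) / k_inner(K, u, u) for u in basis]

    x_rebuilt = sum(a * u for a, u in zip(coeffs, basis))
    print(np.allclose(x_rebuilt, x))        # True: x = sum a_i u_i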
7. NORMALIZATION

Each vector ui of a K-orthogonal set can be rescaled by a scalar αi, replacing ui by vi = αi ui with

vi†Kvi = αi*αi ui†Kui = |αi|² ui†Kui = 1     (50)

so that

αi = 1/(ui†Kui)^(1/2)     (52)

This is possible provided

ui†Kui ≠ 0     (53)

Definition. A vector x so scaled is said to be normalized, or more precisely K-normalized. A set of vectors vi satisfying

vi†Kvj = δij = 1 if i = j, 0 if i ≠ j

is said to be orthonormal.
=0
(29)
V U.
(au
+ bv)
u
V
V,
v
V.
Theorem 1 . If the inner product relation is proper, a subspace U and its orthogonal complement V are disjoint-i.e. have no vector in common, except the null vector.
U
by
XI
, x2 , ..., xk .
U.
U
by (y, y)
V.
(y, xi) y
U
V
6.
105
STRUCTURE OF NORMAL MATRICES
Theorem 2. If U is a proper subspace and V its orthogonal complement under a proper inner product, then V is nonvoid.
U U. U, U,
projection on U ,
U,
V.
V A
U
A (Ax).
U
x
invariant
Theorem 3. If U is invariant for A, then V, its orthogonal complement under a given inner product, is invariant for A#, the adjoint to A under the given inner product.
x
A, Ax
U
U, by
V.
y
=
(xi, Ix,)
xi>
- 1 ) ( X i , Xi) = 0
i
==
(xixi)# 0 by
j,
hi i2
=
1,
10. GENERAL (PROPER) INNER PRODUCT

These results generalize directly to any proper inner product

(x, y) = x†Ky     (52)

where K is hermitian and positive definite. Two vectors x and y are K-orthogonal when

(x, y) = x†Ky = 0     (53)

For any x and y we can write

(x, Ay) = x†KAy = (A#x)†Ky = x†A#†Ky     (54, 55)

where A#, the K-adjoint of A, is defined by

KA = A#†K     (56)

or

A# = K⁻¹A†K     (57)

Definition. A matrix A is K-normal if it commutes with its K-adjoint:

A(K⁻¹A†K) = (K⁻¹A†K)A     (58)

Definition. A matrix A is K-hermitian if it equals its K-adjoint:

A = K⁻¹A†K,   i.e.,   KA = A†K     (59)

Definition. A matrix A is K-skew-hermitian if A equals the negative of its K-adjoint:

A = -K⁻¹A†K     (60)

Definition. A matrix A is K-unitary if its K-adjoint equals its reciprocal:

K⁻¹A†K = A⁻¹,   i.e.,   A†KA = K     (61, 62)

Theorem 8. A K-normal matrix A has a complete set of eigenvectors that can be chosen to be K-orthogonal in pairs, and conversely.

Theorem 9. The eigenvalues of a K-hermitian matrix are real, and those of a K-unitary matrix are of unit magnitude.
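Theorem 9 is easy to check numerically: a K-hermitian matrix can be manufactured as A = K⁻¹H with H hermitian (then KA = H = A†K), and its spectrum examined. This is a rough sketch with arbitrary example matrices, not part of the original text.

    import numpy as np

    rng = np.random.default_rng(2)

    # A positive definite metric K and an ordinary hermitian H.
    M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    K = M.conj().T @ M + 4 * np.eye(4)          # hermitian, positive definite
    H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    H = H + H.conj().T                          # hermitian

    A = np.linalg.solve(K, H)                   # A = K^-1 H, so KA = H = A†K: A is K-hermitian

    print(np.allclose(K @ A, A.conj().T @ K))   # True: defining relation (59)
    eigvals = np.linalg.eigvals(A)
    print(np.max(np.abs(eigvals.imag)))         # ~ 0: the eigenvalues are real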
on so.
112
IV. HERMITIAN, UNITARY, A N D NORMAL MATRICES
As to
K
of
Exercises
1.
a
a
b
2.
2
0
2,
3. (A,
+ jA,)
n x n
A,
A,
A
b,
113
A,
4.
AAt
AtA
A ?
A
A
K
5.
6.
A
7.
A
n x n
II Ax I1 x,
/3
01
(aA + @At)
II x II
=
=
I1 Atx I1
(x, X
Y 2
A ?
8.
A A
(A2 = A), (A2 = A
A
9.
10.
A
At
by
11. A B (AB + BA)
(AB - BA) A
B ?
A
12.
x 13.
A2x = 0, A
Ax
=
0.
B
B A-ABI = O
A
114
I V . HERMITIAN, UNITARY, A N D NORMAL MATRICES
by
p = - xtAx xtBx
p
T2= I
14. A
15. A
?
involution.
At
16.
A
A U
=
U
+ I)(A
-
I)-'
(U - I) A
A (Hint: (A + I) (U I). ?)
+
(A
=
(U - I)-'(U
Cuyley transforms
U
w = (z
(A
-
+ I)
I)-l
+
-
do (U - I)-l
CHAPTER V

Change of Basis, Diagonalization, and the Jordan Canonical Form
by
by
do
no
by similarity transformation.
change of basis.
1. CHANGE OF BASIS AND SIMILARITY TRANSFORMATIONS
representation of the abstract vector. As
by
so 115
116
CHANGE OF BASIS, DIAGONALIZATION
by
A on by
? on
Let x be the representation of a vector in the original basis, and u its representation in the new basis. The two are related by a nonsingular matrix S:

x = Su     (1)

so that

u = S⁻¹x     (2)

Suppose now that, in the original basis, an operator A carries x into y:

y = Ax     (3)

In the new basis, x and y are represented by u and v, where

x = Su,     y = Sv     (4)

Substituting in Eq. (3) and multiplying on the left by S⁻¹,

v = (S⁻¹AS)u = A'u     (5)

where

A' = S⁻¹AS     (6)

Thus A' is the representation, in the new basis, of the same abstract operator that A represents in the old basis. The transformation from A to A' effected by S is called a similarity transformation, and A and A' are said to be similar. Such a transformation is also sometimes called a collineatory transformation.
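A similarity transformation changes the representation but not the operator; in particular, as developed in Sections 6 and 7 below, the eigenvalues and the trace are unchanged. A short numerical sketch with arbitrary example matrices (not from the text):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 4))
    S = rng.standard_normal((4, 4))             # almost surely nonsingular

    A_prime = np.linalg.inv(S) @ A @ S          # representation in the new basis

    print(np.allclose(np.sort(np.linalg.eigvals(A)),
                      np.sort(np.linalg.eigvals(A_prime))))   # same eigenvalues
    print(np.isclose(np.trace(A), np.trace(A_prime)))         # same trace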
2. EQUIVALENCE TRANSFORMATIONS
x (3),
y,
118
CHANGE OF BASIS, DIAGONALIZATION
x
y
A
do do
by x
=
u
su,
=
s-lx
(7)
v = T-’y
y = Tv,
T
S
A equivalence transformation. equivaZent.l
3. CONGRUENT A N D CONJUNCTIVE TRANSFORMATIONS
of
A
B
by B
(’)
=
STAS
A
B
=
STAS
(9)
congruent.
by
B
(10)
conjunctive.
There is some variation in the literature on names for the various types of transformations and on their definitions. We are here following the usage of G. A. Korn M. Korn, “Mathematical Handbook for Scientists and Engineers,” McGraw-Hill, and New York, 1961.
4.
x
119
EXAMPLE
y,
K
by
(x, Y>
x
(1 1)
= xtKy
y,
x on y,
y. (6),
by
(6) (x, y) = (Su)tK(Sv)= uWKSV
(u,v)
x
(12)
(x, y)
y. (u, V)
K
K,
=~
=
K ’ v
(13)
StKS
by by on by
A
by
(3) (1 4. EXAMPLE
As
120
V. CHANGE OF BASIS, DIAGONALIZATION
111, Eq. (36), on
2
111,
-f)
M'
=
S-'MS
= (cos a,
=
a,
cos a, O
;
a,)
(18)
diag(ej7, e-je)
on
5,
11,
by
)
K = - 1( 0 1 2 1 0
K,
Eq. (15) MtKM
K',
=K
(14):
K
= StKS = Z
1 (0 -- 1
(20)
6.
121
INVARIANCE OF THE EIGENVALUES
parity
by on
on u wave basis. on by
5. GAUGE INVARIANCE
by by
by by
no
by
invariant
principle of gauge invariance.
6. INVARIANCE O F T H E EIGENVALUES UNDER A CHANGE O F BASIS
A by A:
x
AX = AX
122
CHANGE
BASIS, DIAGONALIZATION
(4),
A
(4)
by S-l,
u
=
A u =xu A'
Sx
(23) A.
on h
(4)
by
S
by
k
h on
x,
X
x,
2 AX, = AX,
A,
x2
:
+ x1
(24)
(4), XI
=
su, ,
A'u,
x2 =
= h,
+
su,
Ui
A, u,
x1 A'
u,
2
u, .
A' A A'.
7. INVARIANCE O F T H E TRACE
111,
tr A =
12,
Aii
8.
123
VARIATION OF THE EIGENVALUES
S, Bij =
B
z
(27)
(S-'),p4khShj
kh
S-l
z(s-l)iisik = &,
(28)
i
B
of
k 2
3.
8. VARIATION O F T H E EIGENVALUES UNDER A C O NJ UNCTlVE TRANSFORMATION
A of
K (x,x)
x.
K',
K K'
Kt
=
StKS
=
(StKS)t
z=
StKS
K'
=
=K
StKt(St)t
K,
124
CHANGE
BASIS, DIAGONALIZATION
by
K
S
Sylvester's law of inertia.
szgnature
K' on
K
parity matrix
9. DIAGONALIZATION

Suppose A is semisimple, with eigenvalues λ1, ..., λn and a complete set of eigenvectors x1, ..., xn:

Axi = λi xi

Form the matrix whose columns are the eigenvectors,

S = (x1 x2 ... xn)     (33)

Since the xi are linearly independent, S is nonsingular and S⁻¹ exists. As in Chapter III, Section 15, let the wi be the reciprocal vectors, so that

wi†xj = δij     (34)

Then S⁻¹ is the matrix whose rows are the wi†. Multiplying A by S on the right,

AS = A(x1 x2 ... xn) = (λ1x1 λ2x2 ... λnxn)     (37)

and multiplying on the left by S⁻¹,

S⁻¹AS = diag(λ1, λ2, ..., λn)     (38)

The matrix A has been reduced to diagonal form; the diagonal terms are its eigenvalues.
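Numerically, np.linalg.eig returns exactly the ingredients of Eq. (38): the eigenvalues and a matrix S whose columns are eigenvectors. A minimal sketch (the example matrix is chosen arbitrarily and is assumed semisimple); it is not part of the original text.

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    lams, S = np.linalg.eig(A)            # columns of S are the eigenvectors x_i

    D = np.linalg.inv(S) @ A @ S          # S^-1 A S
    print(np.allclose(D, np.diag(lams)))  # True: A is reduced to diagonal form
    print(np.allclose(S @ np.diag(lams) @ np.linalg.inv(S), A))   # and back again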
A
A (33)
S-'AS by
S
(33) by S-l
(35),
S A, A,
S x1
=
A,,
(33)
x, w1
x2.
x,
w2, (38),
A
10. DIAGONALIZATION O F NORMAL MATRICES
xi 2
- 8.. 13
S (39)
(39)
3 -
(33)
xi
(34), S-l,
by
(39,
11.
127
THE HERMITIAN MATRIX
by
s:
s-1
=
S A
S
S A
by
=
S
, A,, ...)S-l
=
S
, A,, ...)St
(40)
by
K by
S
S by
11. CONJUNCTIVE TRANSFORMATION O F A HERM I T l A N MATRlX
K by K
K'by S hi
K'. K":
on
K
K'
K" = DtKD sgnAi
A,
=
=
--1 -
K"
K'
=
1
,
A,, ...)
Xi > 0 Ais-l
=
(14)
up
fN(A)
Xi A
***I
=
mi x mi
J,,,i(hi)
Jn
JtJAJ, ...>
=
J,,, ,
JZl(X,)
(15)
by
-+ 00,
k hi. f(x)
off(x)
152
OF A MATRIX
Eq. (1 Eq. Eq. x,, x,
f(x)
f(x) admissable for
by by f(A) =
S
Eq. .-.)>S-l
(16)
by
Eq. f(A)
JnijAi)}
f{Jm,(hl)), f{Jm,(hz)),
=S
**
.)>S-l
mi x mi
\ etc.
/
V.
(4)
4. TRANSMISSION LINE

As an example, consider again the transmission line equation, Eq. (3), with

R = ( 0      Zβ )
    ( β/Z    0  )

so that R² = β²I. The exponential can then be summed directly:

e^(-jRz) = I - jRz - (1/2!)R²z² + j(1/3!)R³z³ + ...
         = I(1 - (1/2!)β²z² + ...) - j(R/β)(βz - (1/3!)β³z³ + ...)
         = I cos βz - j(R/β) sin βz     (19)

or, written out,

e^(-jRz) = (  cos βz          -jZ sin βz )
           ( -(j/Z) sin βz     cos βz    )

Applied to the vector col(E(0), I(0)), this gives

E(z) = E(0) cos βz - jZ I(0) sin βz
I(z) = -(j/Z) E(0) sin βz + I(0) cos βz     (21)

which are the familiar transmission line equations, referred to the conditions at z = 0.
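The closed form of Eq. (19) can be checked against a direct evaluation of the matrix exponential. A small sketch with arbitrary numerical values; scipy's expm is used only to provide the reference value, and none of this is from the original text.

    import numpy as np
    from scipy.linalg import expm

    Z, beta, z = 50.0, 2.0, 0.3
    R = np.array([[0.0,      Z * beta],
                  [beta / Z, 0.0     ]])

    direct = expm(-1j * R * z)

    closed = (np.eye(2) * np.cos(beta * z)
              - 1j * (R / beta) * np.sin(beta * z))

    print(np.allclose(direct, closed))   # True: e^{-jRz} = I cos(beta z) - j (R/beta) sin(beta z)
    print(closed)                        # [[cos bz, -jZ sin bz], [-(j/Z) sin bz, cos bz]]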
5. SQUARE ROOT FUNCTION
As
us
z
(22)
154
OF
5. H12= AAt
(23)
H,2
(24)
= AtA
H, ,
H,
of
H,,.
H12
H12
H,,
A x = 0.
of
H12
H,,
x, xtHlzx = x ~ A A ~=x (Atx)t(Atx) = yty
x
y
...,
A?,
A,, A,,
(25)
of H12
...
H12
H12= S{diag(X12,,A,,
...)} S-l
(26)
S{diag(A, , A,, ...)} S-l
(27)
H, H,
=
A, , A,,
S
...
H, go A,, A,,
of
... .
H,
H, H,
U2
A by
A
H,
H, U,
by
(26)
7.
155
EIGENVECTORS
no
A
IV,
H,
H, XII.
6. UNITARY MATRICES AS EXPONENTIALS
As
U
(28)
=
H a
IV)
U
hi
U
=
H
=
ejA2,
...)} S-l
(29)
H ,A 2 , ...)} S-l
(30)
H 10,
S-l
Ht
S
=
,A , , ...)}
=
, A , , ...)} S-l
=
=H
H
hi H
by
A
7. EIGENVECTORS
f(A), A. (4)
156
VI. FUNCTIONS OF A MATRIX
7)
A,
f(hi)
hi
8) f(A).
Theorem 1. If f(x) is an admissable function of a matrix A, and xi is an eigenvector of A with eigenvalue λi, then xi is also an eigenvector of f(A), but with eigenvalue f(λi).
A A
A f’(A)
A,
1 Theorem 2. If A and B are semisimple, and any eigenvector of A, regardless of how any degeneracies in A are resolved, is also an eigenvector of B, then B can be expressed as an admissable function of A.
A x1
B, A
x2
axl
B
+ /lx2
A,
B A
xi
A
B.
B
hi
/3.
a
Xi.
hi . pi.
pi,
h
pi
hi,
i = i.
A.
=
Xi.
8.
SPECTRUM xk
157
A MATRIX
k # i,
A.
i = k,
B xk
p(A)
A
by Eq.
Lagrange-Sylvester interpola-
tion polynomial
A
B.
A do
go
A, A Eqs.
p’(A,)
p(A,),
=
(dp(A)/dA)+,
p”(A,), ...,pm*-l( Ai).
8. SPECTRUM O F A MATRIX
Definition. If A is a semisimple operator whose distinct eigenvalues are λ1, λ2, ..., λk, and if the values of a function f(λ) are specified for λ = λ1, λ = λ2, ..., λ = λk, then f(λ) is said to be defined on the spectrum of A.
A A A
158
VI. FUNCTIONS OF A MATRIX
Definition. If A is an operator whose distinct eigenvalues are λ1, λ2, ..., λk, and mi is the length of the longest chain with eigenvalue λi, and if f(λ) is a function such that for all λi the set f(λi), f'(λi), ..., f^(mi-1)(λi) have specified values, then f(λ) is said to be defined on the spectrum of A.
g(A) do
A,
on
(35)
f(A)
f(A),
of on
(A)
A.
Eq. A.
on
A Theorem 3. Given a matrix A whose minimum polynomial is of rank m = Erni, there exists a unique polynomial of rank less than m which assumes a specifid nontrivial set of values on the spectrum of A. This polynomial is the Lagrange-Sylvester interpolation polynomial.
#(A), A.
on
~(h), A on m,
?(A)
+(A) #(A) = X
x(A) T(A)
r(A)
(M4
A
A. ~(h),
+ 44
(36)
r(A) m. on
of A, on
,y(A)y(A).
A.
9.
159
EXAMPLE
r(X)
m {r(h) - $(A)} on ~(h)
s(X),
on
m of m
9. EXAMPLE
B As
R,
Eq. (3).
=
e-jRz
(A
B
C D
) -jflx),
x1
x2
+ B = Ze-iBz C Z + D = e-3Bz
AZ
AZ - B
= ZejBz
CZ - D
= -&Bz
B,
D,
A B
=C
C D
=
-jZsinPz -(j/Z)sinPz
=
COSPZ
=
O S ~ Z
160
VI. FUNCTIONS OF A MATRIX
(32) by
of ?
10. C O MM UTATlV lT Y
A =
A
f(A), go
= f ( A ) ,f
(x) A
A A
A
A. Eqs. (17)
I I I.
Theorem 4. If A and B are semisimple, then they commute if and only if they have a complete set of eigenvectors in common.
10.
161
COMMUTATIVITY
2
any
A
B
4
some
xi, y
on
y =
zaixi
Axi
= hixi
B x ~= pixi
AB
y,
=
BA by
A do S-lAS
S-'BS.
on on
A'
=
B
A',
quasidiag(X,I, , h,I, , ...)
=
S-lAS
(41)
hi.
I,
B' B'
=
S-lBS
=
quasidiag(B, , B, , ...)
Bi
Ic.
A"
B" on T-'A'T
by
A"
=
= (ST)-lA(ST)
B"
= T-'B'T = (ST)-'B(ST)
(42)
162
FUNCTIONS
A MATRIX
=
, T, , ...)
T T
Ti
B,.
Ii , ...)
=
,
=
T,, T,,
...
=
A
T, , T, , ...
A'
A"
(ST)
(43)
...)
(44)
B, , B, , ...
B , , ..., A B
a
A on
A
B
11. F U N C T I O N A L RELATIONS
f(x)
A f(x), f(y).
+y)
1 1.
163
eX+Y
= eXeY
(45)
(x)
on by
x
x
providing
do
Theorem 5. If A and B commute, then, for any positive integer n,

(A + B)^n = Σ (k = 0 to n) C(n, k) A^(n-k) B^k     (46)

where C(n, k) = n!/(k!(n - k)!) is the binomial coefficient.
n = 1.
(A
n.
+ BY+' = (A + B) 2 (3 k-0
by
A
(47)
164
k
+1
k,
+
2
"
(k - 1)
An-k+lBk
+ Bn+l
2 (" 1') so(" 1'1
= An+l
+
An-k+lBk
+ Bn+l
k= 1
n+l
=
+ 1,
n n
=
2
A
An+l-kBk
by
B do (46):
Theorem 6.
If A and B commute, then e^(A+B) = e^A e^B.
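Theorem 6, and its failure for noncommuting matrices, can be seen numerically. A short sketch using scipy's expm with arbitrary example matrices (not from the text):

    import numpy as np
    from scipy.linalg import expm

    # Commuting pair: any two polynomials in the same matrix commute.
    M = np.array([[1.0, 2.0],
                  [0.0, 3.0]])
    A = 2 * M + np.eye(2)
    B = M @ M

    print(np.allclose(A @ B, B @ A))                       # True
    print(np.allclose(expm(A + B), expm(A) @ expm(B)))     # True: e^{A+B} = e^A e^B

    # Noncommuting pair: the identity generally fails.
    C = np.array([[0.0, 1.0], [0.0, 0.0]])
    D = np.array([[0.0, 0.0], [1.0, 0.0]])
    print(np.allclose(C @ D, D @ C))                       # False
    print(np.allclose(expm(C + D), expm(C) @ expm(D)))     # False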
on on
go
k 0
=
n
n = (r r
+ s), n
go
0
03,
k
13. R
165
NOT CONSTANT
A
(48)
do
12. DERIVATIVE OF e^(-jRz), R CONSTANT
(4) x(O),
=
-jRe-iRz
(52)
6.
by
+ O(W)
e-jR*2 = I - jR 6z
O(Gz2)
Gz2
1
= lim -{ -jRGz 82-10
6Z
+ 0(6z2)}e-iR2
- -jRe-iRZ
(53) (4)
(52)
13. R NOT CONSTANT
R x(z) = exp (-j
s'
0
R dz) x(0)
(54)
on
166
+ 6z)
z.
by P(z
+ Sz) = d dz
- ep =
+ sz d P ( z ) + O(622) dz 1 {edP/d2Sz - I} 8 6z
81-0
- p deP p dz
dPjdz
J:R dz
R. z.
R
z,
R X.)
R on z R (54).
14. DERIVATIVE O F A P O W E R
of
z,
14.
DERIVATIVE
167
A POWER
by
U”,
Theorem 7. If U and V are two n × n matrix functions of z, which may be the same, then

d(UV)/dz = (dU/dz)V + U(dV/dz)     (57)
d -(uv> = dz
+ 62)
1
62-0
V(z U(z V(z
dz
=
+ Gz)V(z + 6z) - U(z)V(z) 6z
+ 6x)
+ Sz) = U(z) + 6z U’(z) + ... + 6z) = V(z) + sz V(z) + ...
d
- (UV) =
U(z
6Z+O
1 {(U + 6z U‘)(V+ 6z V’) - UV) 6z
uv + U’V
d -uu” = u u + UU’ dz
only
2UU’, U
U‘.
(59)
U
n dun - - U’Un-1 + UU‘Un-2 +
dz
... + Un-lU‘
(60)
168
VI. FUNCTIONS OF A MATRIX
n
U‘,
nUn-lU’,
n
by
unu-n = 1 dun dz
+ un-dU-” dz
-u-n
= U’
=0
1 1 +(UU’ + U’U) + -(UW’ + UU’U + U’UZ)+ ... 2! 3!
U’
= U’
t
I
+ u + 21!u2+
15. EXAMPLE
by (54) by
Z 0 = fpdz 0
oz) o/z 0
rRdz=( 0 0
-*/
(62)
169
EXERCISES
R. X(2) =
cos
-IZcos
((-jjz)
B
by
(pz). 2,
on
z,
Exercises
1.
d2x dx u-+b-+cx=O dzL dz - d2x + - - + ( 1 -1- dx )=O n2 dz2 z d z 22 d2x dx 22 - - a(a (c) (9- 1) dz2 dz d2x dz2 (b - h2 COS’ Z)X = 0
+
+
+
d
dx
+
*.
&, (P &) + (4
=0
ddX d2x dx ~ + P ~ + q - dz + Y X = O 2. A
3.
=
(:
1 u)
A
=
=0
(54)
170
VI. FUNCTIONS OF A MATRIX
4.
(I
+ A)-'
=I -A
+ A2 - Ad +
?
* * a
of
B=( 5. =
0,
n x n nilpotent.
idempotent.
= =
I,
involution.
6.
X2+ &BX
+ XB)+ C = 0
?
X2+ 5X
+ 41 = 0
7.
3 = y 2 + *Y dz
+q by
1 du Y=-;&
on dY dz
-Y2+PY+Q ?
8. dM _ -- eZAM, dz
M(0) = I
171
EXERCISES
A
X(z)
9.
z
X(u u
+ v) = X(u)X(v) X(z)
w. --
dz
z,
- Ax@)
A (Hint: separately.) (Comment:
u
Polya's equation.
A
10.
X, = ?
n ?
w
CHAPTER VII
The Matricant
We return to the differential equation

dx(z)/dz = -jRx(z)     (1)

or, writing S = -jR,

dx/dz = Sx

where R and S may now be functions of z. When R is constant, the solution is

x(z) = exp(-jRz) x(0)     (2)

We now ask how to proceed when R varies with z.
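When R varies with z, the solution can no longer be written as a single exponential, but it can be approximated by chaining exponentials over small steps; the ordered product converges to the matricant as the step size goes to zero. The following is a rough numerical sketch of that idea, with an arbitrarily chosen R(z); it is not the author's construction.

    import numpy as np
    from scipy.linalg import expm

    def R_of_z(z):
        # Arbitrary example of a z-dependent coefficient matrix.
        return np.array([[0.0,              2.0 + z],
                         [1.0 / (2.0 + z),  0.0    ]])

    def matricant(z_end, steps=2000):
        """Approximate M(z_end), where x(z_end) = M x(0), for dx/dz = -j R(z) x."""
        M = np.eye(2, dtype=complex)
        dz = z_end / steps
        for k in range(steps):
            zk = (k + 0.5) * dz                   # midpoint of the step
            M = expm(-1j * R_of_z(zk) * dz) @ M   # left-multiply: later sections act last
        return M

    x0 = np.array([1.0, 0.0], dtype=complex)
    M = matricant(1.0)
    print(M @ x0)
    print(abs(np.linalg.det(M)))   # stays 1 here, since tr R = 0 along the path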
by
1. INTEGRAL MATRIX
by 172
R(z)
1.
a
173
INTEGRAL MATRIX
= 0
Theorem. If the given inner product relation is a proper one, then a subspace S1 and its orthogonal complement S2 give a decomposition of the whole space.
. S, no
S,
u
,u, ,
v: u=u,+v
(v, v)
= (u - U 1 , U - Ul)
ul u on
, by
- u,,)
u, . u,
= uo
+ ciw
>0
,
(23)
u,
.
,
u,
(v, v)
288
XII. SINGULAR A N D RECTANGULAR OPERATORS
w
S,
w (u - u,)
a
u, (u
-
uo - orw,u - uo - aw) = (u - uo ,u
-
uo) - or*(W,u - uo)
- 4 u - uo , w)
3 (u
S, ,
+ I a I2(w,w)
- 110 9 u - UO)
a,
(w,w) # 0. -Kw,u - uo)12 2 0 (w,w> (w,u - uo>= 0
u
- u,
w
S,
.
u
u u
+ (u - uo)
(u - uo) S , .
uoE S ,
Theorem.
= uo
If u
of
w u
of
T, of
ifu
T.
(u,W)
to
u
w,
v.
= (Tv, W) =
(v, T#w)
T#w
w
by
T#
T
u
Tv,
(v, T#w)= 0,
K T#w = 0,
w.
u of
(u,w)
=0
(28)
5.
T#w = 0. u
u
Ty,
-
u,
=
+
= u1
289
EXAMPLE
T
S,
112,
u1 E s, ,
u2 E
S,
s, T,
x
u2
y (U - 111,
Ty)
=
(T#(u - uI), y)
==
0
y T#(u - ~
u
-
u, u
u1
-
S,
u
-
ul
u
(29)
u
(u,u - Ul) = (u - u, , u - Ul)
ul
0
1 = )
+ (Ul , u - Ul)
u1 = u2
S,
(u -u] , u -ul)
=0
u
-
=0
. (30)
u
T
u,.
u
T, on u
by
T#
T#. u.
5. EXAMPLE
T n x n
0 1 0 ...
'
0 0 0 0 0 0
.*.
290
X I I . SINGULAR A N D RECTANGULAR OPERATORS
u=(
0 0 0
T~u =
.**
1 0
('i m
TtT v
=
diag(0, 1, I ,
Ttu,
...,
S, :
by on
6. CONCLUSIONS
T
u = Tv or
u
29 1
EXERCISES
by
Exercises
1.
0
-j
-j 0
-1 j
-1
-1
1
A=(-!
-J
( 1; ) 1
Ax
=
2j - 1
2.
a,
AX = U
(a)
a
= col(1,
-1, -j,j)
a = col(1, 1 , - j , j )
(c) a = col(1 + j , 1 - j , 1 - j , -1
-j)
(d) a.= col(1, 1 - j , -j, 1 - j ) 3. (Caution:
UV~,
?
292
XII. SINGULAR A N D RECTANGULAR OPERATORS
5,
4.
=
uvtK,
K
EX = y x
y
y
5.
Px ?
=a
?
u.
CHAPTER Xlll
The Commutator Operator dW - -- [S,w] = sw - ws dz
S z.
by dx _ - sx dz
x
K,
S W
x
(3)
= xytK
(2),
y dz
do
= @ytK
dz
=
+x
K
SxytK - xytKS
=
=
SxytK
+xyWK
[S, w]
(4)
W’s,
W
293
294
XIII. THE COMMUTATOR OPERATOR
W
.
(l), dM _ -- SM, dz
S
M(0) = I
(5)
z,
W,
W (6), by
W,
W. (5),
S
S
by dSdz
[A,Sl
(7)
A A
Lie book.
1.
295
LIE GROUPS
1. LIE GROUPS
3)
(A, B,...) 1.
I
2.
X
3.
X-l,
not distinct jinite dimensionality.
X
x = X(ff,, a l ,..., a,
f(zn+l)
= f(zn
f(zn-1)
= f(zn - dn) = f ( z n ) - &f’(zn>
dn+l)
f(zn) =
(6)
(7)
f n
n n
d,+,
(5)
=
1
=
fo
N,
fN+l d,
= x1
-
xo
by
f , = f N = 0,
f,’
1
=f2/d,
fl’ = d,u - 2 -fd
f N ’ = -fN-,/dN.
362
XVII. STURM-LIOUVILLE SYSTEMS
where dN+lis taken as appropriate to the boundary condition. We write T
>
fl f 2
fN-1
_- 1 dN+l
f N
1
I
which can be written as
f+' = D+f where
1
=( F ) ( - a i i 7t1
Similarly, from Eq.
+
8i.j-l)
we have 1
fl' = -dl f1
f;
1
=
-(f* -fJ 4 1
fN' =
which we write as -
1 -
d,( f N - f N - l )
0
0
fl f 2
f,
fN
I
\ I
(14)
363
2.
DD+PDf - Qf
-
hRf
D-PD+f - Qf
- hRf = 0
=0
by R-l, Hf
(18)
= hf
H
= R-'(D+PD- - Q)
H
= R-'(D-PD+ - Q)
D-
H. 2. MODIFIED STURM-LIOUVILLE E Q U A T I O N
by g
= p1My1I4f
(21)
Y
w. w
V(w)
w
364
XVII. STURM-LIOUVILLE SYSTEMS
D+D-
d2/dw2
D-D, w, (w7‘ =
1 + h) = gn + hgn’ + 2! h2&’ +
gn+1
=
gn-l
= g(w, - h) = gn - hg,’
(
-
D,
=
h2 I
+ --2!1
h2,”
by 2 1 1 - 2
0 1
-!
A -;
0 0
-2
. Hg
H = D,
= Ag
+ diag(Vl , V, , ...) v,-2 1 0
1 v,-2 1
0 1
v3-2
+
)
(26)
3.
365
THE CHARACTERISTIC EQUATION OF H
no (24).
3. THE CHARACTERISTIC EQUATION OF H

The eigenvalues of H are the roots of the characteristic equation

det(H - λI) = | V1-2-λ   1        0       ...        |
              | 1        V2-2-λ   1       ...        | = 0     (29)
              | 0        1        V3-2-λ  ...        |
              | ...                    1   Vn-2-λ    |

where the determinant is n × n. Let pk(λ) be the determinant of the k × k matrix formed from the first k rows and columns. Expanding by the last row and column, we obtain the recursion

pk(λ) = (Vk - 2 - λ) pk-1(λ) - pk-2(λ)     (30)

with

p1(λ) = V1 - 2 - λ,     p2(λ) = (V1 - 2 - λ)(V2 - 2 - λ) - 1
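The recursion (30), together with the sign-change property developed in the next section, gives a practical way of counting how many eigenvalues of the tridiagonal H lie below a given λ, and hence of locating them by bisection. A minimal sketch (illustrative only; the values Vk are an arbitrary example, and the simple regular case with no pk exactly zero is assumed):

    import numpy as np

    V = np.array([1.0, 0.5, 2.0, 1.5, 0.8])     # example "potential" values V_k
    diag = V - 2.0                              # diagonal terms V_k - 2 of H

    def count_below(lmbda):
        """Number of eigenvalues of H below lmbda, from the sign changes in the
        chain p_0, p_1, ..., p_n generated by Eq. (30)."""
        p_prev, p = 1.0, diag[0] - lmbda
        changes = int(p < 0)
        for k in range(1, len(diag)):
            p_prev, p = p, (diag[k] - lmbda) * p - p_prev
            if p * p_prev < 0:
                changes += 1
        return changes

    # Compare with a direct eigenvalue computation of the tridiagonal H.
    H = np.diag(diag) + np.diag(np.ones(len(V) - 1), 1) + np.diag(np.ones(len(V) - 1), -1)
    eigs = np.linalg.eigvalsh(H)

    for lam in (-2.0, -1.0, 0.0, 1.0):
        print(lam, count_below(lam), np.sum(eigs < lam))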
366
XVII. STURM-LIOUVILLE SYSTEMS
(30)
A tridiagonal matrix of this form, with the Vk along the diagonal, is called a Jacobi matrix.
(30) do
V, do by 4. STURM CHAINS V k
p,(h)
p,(X) n,
chain h
pk(h)
=
0,
(a
pk+l(h)
< h < b)
k,
pk-l(h)
k,
po(X)
p,(X),
no
p,;(h)
(30)
by
by
Po(4
k pk(h)
k.
2
pk+l(h)
(30),
h,
p,(h) (30),
A.
p’s Pk
=
=1
pk-l(h)
=
0
h,
A.
pk-2
R(x) (a,b), I s R(h), - co
h
0
n(u) n.
R(x)
+ co,
+ co a
b, pk(a)
k
- 00,
4.
367
STURM CHAINS
pk(h)
pn-Jpn p, pi
b
piPl i # n,
n(b),
i
pnPl(b) > 0,
=
p,(b)
n,
+
0
-
b
by
n(b)
by
p71-l
by
p, , pk(h) pk(A) n(a) = 0. As h +
n(b)
+
up
(30),
by
h -+ - 00,
$),(A)
00,
k
b
odd.
P,~(A)
pn-l/pn P,-~
n. -n.
p,
on
k
n n
k
0 odd,
pk ,
pk-1 .
pk pkPl
A,
$1 n(a)
A
n(b)
V(x) n
(n -
n. V(x)
368
XVII. STURM-LIOUVILLE SYSTEMS
5. R E C U R S I O N FORMULA
(30),
on good
xn = anxn-i
+
n.
b,
a,
(33)
bnxn-z
do do Yn
(34)
= Xn-1
(33) x n = anxn-1
(34)
+
(35)
bnYn-1
(35)
P, = P,
P by
(n -
P,
(36) n. a,
n.
b,
(36) x,
n
.
yn by
odd t i n = Pnpn-1un-2
(38)
5.
369
RECURSION FORMULA
P,P,-, P,
a,,
b,
. (36),
by x,
on x,
x,-,
y , by on n.
1).
XV,
(34),
As Yn = knxn-1
k,
(36)
bn = kanan-,
k
k,
b, ,
a,
.
a,
(33), x, = anxnPl
nna,
=
+ kanan-,xn-,
P,
x,
(44) W,
= wn-1
+ kwn-2 P,
(43).
3 70
XVII.
do
do
Exercises
n x n
n
1.
Jn =
a, b, Jn
c
. (Hint:
3
J, .
(Hint:
a
( I 1)
dt
2.
d,
D, ?
R,
3. R4
+ aR3 + bR2 + CR+ dI = 0 Rn.
(Hint:
R” = fnR3 + gnR2+ h,R
f, , g, , h, , k,
+ k,I
n a =
b
= c =
0, d
=
-1.
CHAPTER X V l l l
Markoff Matrices and Probability Theory
body do up, by
Markofprocess do
by
by
by
N(t). 37 1
372
XVIII. MARKOFF MATRICES AND ‘PROBABILITY THEORY
+ $No
t
=
$N(t)
+ 1).
t,
0, go
Problem of the Gambler’s Ruin.
?
1.
Markoff
random walk
373
STATE VECTOR
drunkard’s walk
1. S T A T E V E C T O R
up
(M
$0
+1
$M
$1
$M, $(M
on,
-
$0. no
on
no
pure state. do
$N
p N,
mixed state.
374
x
probability vector.
on
61.
11 ax 11
=
0111
x 11.
go by
!I x t- y II
= I/ x II
+ II Y II
(3)
2. TRANSITION O R MARKOFF M A T R I X
$n.
$m
pmn
pm,
up
.
m n.
P,~, #m,
by #n
2.
375
TRANSITION OR MARKOFF MATRIX
P,,~,
transition probability
n
m. $n.
do x, $n
pmnx, .
$m ym
Summing over all the states n from which state m can arise, we have

ym = Σn pmn xn     (4)

or, in matrix form,

y = Px     (5)

where P = (pmn) is called the transition matrix, or Markoff matrix. If the state vector after s steps is written x(s), then x(s + 1) = Px(s), so that x(s) = P^s x(0).
There is some confusion in the literature as to just what is called a Markoff matrix. Sometimes pmn is taken as the probability that state m arose from state n, and the Markoff matrix is the matrix of these probability terms. In this case, each row must be a probability vector, since state m must have arisen from some state.
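As a concrete illustration of Eq. (5) and of the gambler's-ruin problem described above, the sketch below builds a transition matrix for a small game (arbitrary stake sizes and win probability p), in the convention used here where each column is a probability vector, and iterates x(s + 1) = Px(s). The numbers are illustrative only and are not taken from the text.

    import numpy as np

    # States: fortune of 0, 100, 200, 300, 400 dollars; 0 and 400 are absorbing.
    p = 0.5                       # probability of winning each $100 bet
    n = 5
    P = np.zeros((n, n))
    P[0, 0] = 1.0                 # ruined: stays ruined
    P[n - 1, n - 1] = 1.0         # reached the goal: stays there
    for k in range(1, n - 1):     # column k: probabilities of where state k goes
        P[k + 1, k] = p           # win:  k -> k+1
        P[k - 1, k] = 1.0 - p     # lose: k -> k-1

    print(P.sum(axis=0))          # each column sums to 1 (columns are probability vectors)

    x = np.zeros(n)
    x[2] = 1.0                    # start with $200 for certain

    for _ in range(200):          # iterate x(s+1) = P x(s)
        x = P @ x

    print(x)                      # essentially all probability in the absorbing states
    # With p = 0.5 and a start at $200, ruin and success come out equally likely.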
316
XVIII. MARKOFF MATRICES A N D PROBABILITY THEORY
3. EIGENVECTORS O F A MARKOFF MATRIX
xi x(s) x(s)
Eq. ( 5 )
= xi
PXi = xixi x(s
x(s
+ x(s
x(s)
+ 1)
(7)
+ 1) = xixi hi
P no
$M
..., 0)
(8)
xz = col(O,O, ...)0, 1 )
(9)
x1
= col(l,O, 0,
on by
x3,..., x,
up
on ~ ( 0= )
2 i
U a. Xa. - alX1
+ uzx, +
***
(10)
3.
377
m x(m)
=
Cui~imxi
11 x(m) 11 11 xiII
=
1
m.
Xi # 1
=0
(13)
hi 1 hi I > 1,
xi
A’s
m
of x(m)
x(m)
P
I hi I
1, then the matrix can, by a permutation, be put into the f o r m
where the 0’s along the main diagonal are scalars or square matrices.
on no
$0
x
38 1
EXERCISES
1 x 1
(23) on
(n n x 1
x (n -
A A,
by A,
do do
6. CONCLUSIONS
on
do do on
Exercises
$200 $400.
1.
on
p $200 on
$400 ?
$100
(p >
?
$50 ?
?
( p < 8) ?
382
XVIII. MARKOFF MATRICES A N D PROBABILITY THEORY
2.
l b O O O
f !H
A=
'be..
l)
O c a b O O
O
c
a
O
a, b, c
a+b+c=l
P
3.
Q X = a P + ( l -u)Q
0
< < 1.
Q
4.
PQ.
u
5.
uvT, v A
(1 x1 11
Al =
A,, A,, 1, 11 x2 11 = (1 xg I(
=
1
x1 , x2 , ..., x, ,
..., A,, =
.*.= 11 x, 11
=
0,
A
CHAPTER XIX
Stability
by
of
t ?
x(t) = 0,
stable.
x(t) unstable.
x(t)
x(t)
by
admissable. of by
do no
by by
of
x(0). z z
z
383
384
STABILITY
P P
P x ( t ) = exp(Pt)x(O)
(2)
100 do
1. THE BASIC THEOREM FOR STABILITY
Theorem. A system described by Eq. (1) with P constant is stable for all initial conditions if and only if all the eigenvalues of P have negative real parts, i.e., are in the left-hand half of the λ plane.
r(t>= s w S
(3)
(1)
_ du - (S-1PS)u = P’u dt
P P‘
by
(4)
S
P u(t)
M’(t), go
= diag(eAlt,eAzt, ...)u(0) = M(t)u(O)
(5)
1.
THE BASIC THEOREM FOR STABILITY
(4) A 1 0 0
O h 1 0
PI=(.
...
O
385
)
p!. = za pr!, z.+ l = 1
p! . = 0 t.3
(4) U(t) = M(t)u(O)
1
t
0 1 0 0
=o
t2/2! t3/3! t t2/2! 1 t
*")
(7)
j < i
(7')
j
=O
(8)
by
(PM),, .
(5). n t
h
tneat
386
XIX. STABILITY
Definition. P is said to be stable, or a stability matrix if all of its eigenvalues have negative real parts.
2. ROUTH-HURWITZ METHOD
P, P n
h
n
ak = 0 6,
a,
=0
k > 8.
k > *(n
- 1)
b,
p(h)
no Hurwitz matrix
H=
n x n
2.
387
ROUTH-HURWITZ METHOD
by
a,/b,
(bo/co) Routh matrix R :
R
=
b, b2 ...
b, 0
i
c0
C,
0 0 do
...
Theorem. The number of roots of Eq. (9) in the right-hand plane, i.e., with positive real part, is equal to the number of changes of sign in the sequence a0, b0, c0, d0, ...

Theorem. A polynomial, Eq. (9), has all of its roots in the left-hand plane, i.e., with negative real part, if all of the terms a0, b0, c0, ... are nonzero and have the same sign.
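The Routh array and the sign count of the theorems above can be generated mechanically. The sketch below (not the author's code) builds the array for a polynomial with known roots and counts the sign changes in the first column; it handles only the regular case, with no zero appearing at the head of a row.

    import numpy as np

    def routh_first_column(coeffs):
        """First column of the Routh array for a polynomial with the given
        coefficients (highest power first).  Regular case only."""
        n = len(coeffs) - 1
        width = (n // 2) + 1
        rows = [np.zeros(width), np.zeros(width)]
        rows[0][: len(coeffs[0::2])] = coeffs[0::2]     # a0, a2, a4, ...
        rows[1][: len(coeffs[1::2])] = coeffs[1::2]     # a1, a3, a5, ...
        for _ in range(n - 1):
            prev, last = rows[-2], rows[-1]
            new = np.zeros(width)
            for j in range(width - 1):
                new[j] = (last[0] * prev[j + 1] - prev[0] * last[j + 1]) / last[0]
            rows.append(new)
        return np.array([r[0] for r in rows])

    # Example: roots at -1, -3, +2  ->  one root in the right-hand plane.
    coeffs = np.poly([-1.0, -3.0, 2.0])        # [1, 2, -5, -6]
    col = routh_first_column(coeffs)
    sign_changes = np.sum(np.diff(np.sign(col)) != 0)
    print(col, "sign changes:", sign_changes)  # 1 change -> 1 unstable root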
b, , c, , d o , by regular
by
E
no (A2
+h +
+ h - 1) = A4 + u3 + 2 0
0
0 1
1 - 1
H=(i
A2 -
1
=0
388
XIX. STABILITY
1, 2,
1, 2,
-1
of of A4
+
a 3
- h2 -
u- 3 =
(A2
+ h - 3)(h2 + h + 1) = 0
2-2
co 2-2
0
0
0
2-2
0
0
0 a-3
2 -2
(+
=
0
0 -2
O
O
+ - - -).
+ 6/a -3
B
B,
0.
co = E 2 -2
0
1, 2, c
0
0
0 0
+ b / B , -3 (+ + + + -).
-2
0
0 -2 0 0
+ 6/a -3
4.
CRITERION OF LIENARD A N D CHIPART
389
on do by
3. H U R W I T Z D E T E R M I N A N T S
dk,
k
n): A,
= b,
A,
=
Ai>O,
bo bl b, a, U , a2 10 b, b,l
i 0, > 0,
> 0, > 0,
a,-,,-, an--2k-l
> 0, > 0,
A,,-, > 0, A,, > 0, A 2 k - l > 0, A,, > 0,
no
on 5. LYAPUNOV’S SECOND METHOD
by
< k < n/2 1 < k < n/2 1 < k < n/2 1 X,>O xtx
L
A,
xtMx xtx