Algebra through practice
Book 4: Linear algebra
Algebra through practice: A collection of problems in algebra with solutions

Book 4: Linear algebra

T. S. BLYTH and E. F. ROBERTSON
University of St Andrews
The right of the University of Cambridge to print and sell all manner of books was granted by Henry VIII in 1534. The University has printed and published continuously since 1584.
CAMBRIDGE UNIVERSITY PRESS Cambridge
London New York New Rochelle Melbourne Sydney
CAMBRIDGE UNIVERSITY PRESS Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, Sao Paulo
Cambridge University Press The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York www.cambridge.org Information on this title: www.cambridge.org/9780521272896
© Cambridge University Press 1985
This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published 1985 Re-issued in this digitally printed version 2008
A catalogue record for this publication is available from the British Library
Library of Congress Catalogue Card Number: 83-24013 ISBN 978-0-521-27289-6 paperback
Contents
Preface
Background reference material
1: Direct sums and Jordan forms
2: Duality and normal transformations
Solutions to Chapter 1
Solutions to Chapter 2
Test paper 1
Test paper 2
Test paper 3
Test paper 4
Preface
The aim of this series of problem-solvers is to provide a selection of worked examples in algebra designed to supplement undergraduate algebra courses. We have attempted, mainly with the average student in mind, to produce a varied selection of exercises while incorporating a few of a more challenging nature. Although complete solutions are included, it is intended that these should be consulted by readers only after they have attempted the questions. In this way, it is hoped that the student will gain confidence in his or her approach to the art of problem-solving which, after all, is what mathematics is all about.
The problems, although arranged in chapters, have not been `graded' within each chapter, so if readers cannot do problem n this should not discourage them from attempting problem n + 1. A great many of the ideas involved in these problems have been used in examination papers of one sort or another. Some test papers (without solutions) are included at the end of each book; these contain questions based on the topics covered.
TSB, EFR St Andrews
Background reference material
Courses on abstract algebra can be very different in style and content. Likewise, textbooks recommended for these courses can vary enormously, not only in notation and exposition but also in their level of sophistication. Here is a list of some major texts that are widely used and to which the reader may refer for background material. The subject matter of these texts covers all six of the present volumes, and in some cases a great deal more. For the convenience of the reader there is given overleaf an indication of which parts of which of these texts are most relevant to the appropriate sections of this volume.
[1] I. T. Adamson, Introduction to Field Theory, Cambridge
University Press, 1982. [2] F. Ayres, Jr, Modern Algebra, Schaum's Outline Series, McGraw-Hill, 1965. [3] D. Burton, A first course in rings and ideals, Addison-Wesley, 1970.
[4] P. M. Cohn, Algebra Vol. I, Wiley, 1982. [5] D. T. Finkbeiner II, Introduction to Matrices and Linear Transformations, Freeman, 1978. [6] R. Godement, Algebra, Kershaw, 1983. [7] J. A. Green, Sets and Groups, Routledge and Kegan Paul, 1965. [8] I. N. Herstein, Topics in Algebra, Wiley, 1977.
[9] K. Hoffman and R. Kunze, Linear Algebra, Prentice Hall, 1971.
[10] S. Lang, Introduction to Linear Algebra, Addison-Wesley, 1970. [11] S. Lipschutz, Linear Algebra, Schaum's Outline Series, McGraw-Hill, 1974.
[12] I. D. Macdonald, The Theory of Groups, Oxford University Press, 1968. [13] S. MacLane and G. Birkhoff, Algebra, Macmillan, 1968. [14] N. H. McCoy, Introduction to Modern Algebra, Allyn and Bacon, 1975. [15] J. J. Rotman, The Theory of Groups: An Introduction, Allyn and Bacon, 1973. [16] I. Stewart, Galois Theory, Chapman and Hall, 1975. [17] I. Stewart and D. Tall, The Foundations of Mathematics, Oxford University Press, 1977.
References useful for Book 4 1: Direct sums and Jordan forms [4, Sections 11.1-11.4], [5, Chapter 7], [8, Sections 6.1-6.6], [9, Chapters 6, 7], [11, Chapter 10].
2: Duality and normal transformations [4, Chapter 8, Section 11.4], [5, Chapter 9], [8, Sections 4.3, 6.8, 6.10], [9, Chapters 8, 9], [11, Chapters 11, 12]. In [4] and [6] some ring theory is assumed, and some elementary results are proved for modules. In [5] the author uses `characteristic value' where we use `eigenvalue'.
1: Direct sums and Jordan forms
In this chapter we take as a central theme the notion of the direct sum A ⊕ B of subspaces A, B of a vector space V. Recall that V = A ⊕ B if and only if every x ∈ V can be expressed uniquely in the form a + b where a ∈ A and b ∈ B; equivalently, if V = A + B and A ∩ B = {0}. For every subspace A of V there is a subspace B of V such that V = A ⊕ B. In the case where V is of finite dimension this is easily seen: take a basis {v₁, …, v_k} of A, extend it to a basis {v₁, …, vₙ} of V, then note that {v_{k+1}, …, vₙ} spans a subspace B such that V = A ⊕ B.
If f : V → V is a linear transformation then a subspace W of V is said to be f-invariant if f maps W into itself. If W is f-invariant then there is an ordered basis of V with respect to which the matrix of f is of the form

    [ M  X ]
    [ 0  N ]

where M is of size dim W × dim W.
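The block form can be checked numerically. The matrix and subspace below are our own illustrative choices, not from the text: W = span{w₁, w₂} is f-invariant, and rewriting f in a basis adapted to W produces the zero block under the dim W × dim W corner.

```python
import numpy as np

# Hypothetical example: f on R^3 with the f-invariant plane W = span{w1, w2}.
A  = np.array([[ 0.0, 1.0, 1.0],
               [ 1.0, 5.0, 2.0],
               [-1.0, 1.0, 2.0]])
w1 = np.array([1.0, 0.0, 1.0])
w2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([0.0, 0.0, 1.0])    # extends {w1, w2} to a basis of V

# Columns of P form the ordered basis (w1, w2, v3).
P = np.column_stack([w1, w2, v3])
B = np.linalg.inv(P) @ A @ P      # matrix of f in the adapted basis

# The block below the dim W x dim W corner vanishes.
assert np.allclose(B[2, :2], 0.0)
print(B.round(6))
```

The top-left 2 × 2 block of B is the matrix M of the restriction of f to W.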
If f : V → V is such that f ∘ f = f then f is called a projection. For such a linear transformation we have V = Im f ⊕ Ker f, where the subspace Im f is f-invariant (and the subspace Ker f is trivially so). A vector space V is the direct sum of subspaces W₁, …, W_k if and only if there are non-zero projections p₁, …, p_k : V → V such that

    Σ_{i=1}^{k} pᵢ = id_V   and   pᵢ ∘ pⱼ = 0 for i ≠ j.

In this case Wᵢ = Im pᵢ for each i, and relative to given ordered bases of W₁, …, W_k the matrix of a transformation f leaving each Wᵢ invariant is of the diagonal block form

    [ M₁              ]
    [     M₂          ]
    [         ⋱       ]
    [             M_k ]

Of particular importance is the situation where each Mᵢ is of the form

    [ λ  1            ]
    [    λ  1         ]
    [       ⋱  ⋱      ]
    [           λ  1  ]
    [              λ  ]

in which case the diagonal block matrix is called a Jordan matrix. The Cayley–Hamilton theorem says that a linear transformation f is a zero of its characteristic polynomial. The minimum polynomial of f is the monic polynomial of least degree of which f is a zero. When the minimum polynomial of f factorises into a product of linear polynomials then there is a basis of V with respect to which the matrix of f is a Jordan matrix. This matrix is unique (up to the sequence of the diagonal blocks), the diagonal entries λ above are the eigenvalues of f, and the number of blocks Mᵢ associated with a given λ is the geometric multiplicity of λ. The corresponding basis is called a Jordan basis.
We mention here that, for space considerations in the solutions, we shall often write an eigenvector

    [ x₁ ]
    [ x₂ ]
    [ ⋮  ]
    [ xₙ ]

as [x₁, x₂, …, xₙ].

1.1
Which of the following statements are true? For those that are false, give a counter-example.
(i) If {a₁, a₂, a₃} is a basis for ℝ³ and b is a non-zero vector in ℝ³ then {b + a₁, a₂, a₃} is also a basis for ℝ³.
(ii) If A is a finite set of linearly independent vectors then the dimension of the subspace spanned by A is equal to the number of vectors in A.
(iii) The subspace {(x, x, x) | x ∈ ℝ} of ℝ³ has dimension 3.
(iv) If A is a linearly dependent set of vectors in ℝⁿ then there are more than n vectors in A.
(v) If A is a linearly dependent subset of ℝⁿ then the dimension of the subspace spanned by A is strictly less than the number of vectors in A.
(vi) If A is a subset of ℝⁿ and the subspace spanned by A is ℝⁿ itself then A contains exactly n vectors.
(vii) If A and B are subspaces of ℝⁿ then we can find a basis of ℝⁿ which contains a basis of A and a basis of B.
(viii) An n-dimensional vector space contains only finitely many subspaces.
(ix) If A is an n × n matrix over ℤ₂ with A³ = I then A is non-singular.
(x) If A is an n × n matrix over ℚ with A³ = I then A is non-singular.
(xi) An isomorphism between two vector spaces can always be represented by a square singular matrix.
(xii) Any two n-dimensional vector spaces are isomorphic.
(xiii) If A is an n × n matrix such that A² = I then A = I.
(xiv) If A, B and C are non-zero matrices such that AC = BC then A = B.
(xv) The identity map on ℝⁿ is represented by the identity matrix with respect to any basis of ℝⁿ.
(xvi) Given any two bases of ℝⁿ there is an isomorphism from ℝⁿ to itself that maps one basis onto the other.
(xvii) If A and B represent linear transformations f, g : ℝⁿ → ℝⁿ with respect to the same basis then there is a non-singular matrix P such that P⁻¹AP = B.
(xviii) There is a bijection between the set of linear transformations from ℝⁿ to itself and the set of n × n matrices over ℝ.
(xix) The map t : ℝ² → ℝ² given by t(x, y) = (y, x + y) can be represented by the matrix

with respect to some basis of ℝ².
(xx) There is a non-singular matrix P such that P⁻¹AP is diagonal for any non-singular matrix A.
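Statements (ix) and (x) invite a quick computation. From A³ = I we get A · A² = I, so A² is an inverse of A; since no division is involved, the argument works over any field. A sketch with an illustrative integer matrix of our own choosing:

```python
import numpy as np

# Illustrative matrix (our choice, not from the text) with A^3 = I.
A = np.array([[0, -1],
              [1, -1]])
assert np.array_equal(A @ A @ A, np.eye(2, dtype=int))

# A . A^2 = I, so A^2 is a two-sided inverse of A; no division is used,
# so the same reasoning applies over Z2, over Q, or over any other field.
A_inv = A @ A
assert np.array_equal(A @ A_inv, np.eye(2, dtype=int))
assert np.array_equal(A_inv @ A, np.eye(2, dtype=int))
print("A is non-singular with inverse A^2")
```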
1.2
Let t₁, t₂, t₃, t₄ ∈ 𝓛(ℝ³, ℝ³) be given by

    t₁(a, b, c) = (a + b, b + c, c + a);
    t₂(a, b, c) = (a − b, b − c, 0);
    t₃(a, b, c) = (−b, a, c);
    t₄(a, b, c) = (a, b, b).

Find Ker tᵢ and Im tᵢ for i = 1, 2, 3, 4. Is it true that ℝ³ = Ker tᵢ ⊕ Im tᵢ for any of i = 1, 2, 3, 4? Is Im t₂ t₃-invariant? Is Ker t₂ t₃-invariant?
Find t₃ ∘ t₄ and t₄ ∘ t₃. Compute the images and kernels of these composites.
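For experimenting with 1.2, the maps can be encoded as matrices with respect to the standard basis (the transcription below is ours) and the dimensions of image and kernel read off by rank-nullity:

```python
import numpy as np

# Matrices of t1..t4 with respect to the standard basis of R^3
# (our transcription of the defining formulas above).
T = {
    1: np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]]),    # t1(a,b,c) = (a+b, b+c, c+a)
    2: np.array([[1, -1, 0], [0, 1, -1], [0, 0, 0]]),  # t2(a,b,c) = (a-b, b-c, 0)
    3: np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]),   # t3(a,b,c) = (-b, a, c)
    4: np.array([[1, 0, 0], [0, 1, 0], [0, 1, 0]]),    # t4(a,b,c) = (a, b, b)
}

for i, M in T.items():
    r = int(np.linalg.matrix_rank(M))
    print(f"t{i}: dim Im = {r}, dim Ker = {3 - r}")    # rank-nullity

# Composites for the last part of the question:
print(T[3] @ T[4])   # matrix of t3 o t4
print(T[4] @ T[3])   # matrix of t4 o t3
```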
1.3
Let V be a vector space of dimension 3 over a field F and let t ∈ 𝓛(V, V) be represented by the matrix

    [  3  −1   1 ]
    [ −1   5  −1 ]
    [  1  −1   3 ]

with respect to some basis of V. Find dim Ker t and dim Im t when
(i) F = ℝ;
(ii) F = ℤ₂;
(iii) F = ℤ₃.
Is V = Ker t ⊕ Im t in any of cases (i), (ii) or (iii)?
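The point of (i)-(iii) is that rank depends on the field. This can be explored numerically; the helper `rank_mod_p` below is our own sketch of Gaussian elimination over ℤ_p, not anything from the book:

```python
import numpy as np

def rank_mod_p(M, p):
    """Rank of an integer matrix over Z_p (p prime), by Gaussian elimination."""
    A = [[x % p for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    rank = 0
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r][col]), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        inv = pow(A[rank][col], p - 2, p)          # Fermat inverse mod p
        A[rank] = [(x * inv) % p for x in A[rank]]
        for r in range(rows):
            if r != rank and A[r][col]:
                factor = A[r][col]
                A[r] = [(a - factor * b) % p for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

M = [[3, -1, 1], [-1, 5, -1], [1, -1, 3]]
print("rank over R :", int(np.linalg.matrix_rank(np.array(M, dtype=float))))
print("rank over Z2:", rank_mod_p(M, 2))
print("rank over Z3:", rank_mod_p(M, 3))
```

Reducing mod 2 makes every entry of the matrix equal to 1, which is why the rank collapses there.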
1.4
Let V be a finite-dimensional vector space and let s, t ∈ 𝓛(V, V) be such that s ∘ t = id_V. Prove that t ∘ s = id_V. Prove also that a subspace of V is t-invariant if and only if it is s-invariant. Are these results true when V is infinite-dimensional?

1.5
Let Vₙ be the vector space of polynomials of degree less than n over the field ℝ. If D ∈ 𝓛(Vₙ, Vₙ) is the differentiation map, find Im D and Ker D. Prove that Im D ≅ V_{n−1} and that Ker D ≅ ℝ. Is it true that Vₙ = Im D ⊕ Ker D?
Do the same results hold if the ground field ℝ is replaced by the field ℤ₂?
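A coefficient-list model of Vₙ makes the behaviour of D easy to probe. The representation and helper below are our own sketch; note how reducing mod 2 kills the derivative of X²:

```python
def diff_poly(coeffs, p=None):
    """Differentiate a0 + a1*X + a2*X^2 + ... given as [a0, a1, a2, ...];
    reduce coefficients mod p when a prime p is supplied."""
    d = [k * a for k, a in enumerate(coeffs)][1:]   # drop the constant term
    return [a % p for a in d] if p else d

# Over R: D(1 + X + X^2) = 1 + 2X
print(diff_poly([1, 1, 1]))        # [1, 2]

# Over Z2: D(X^2) = 2X = 0, so X^2 lies in Ker D as well as the constants
print(diff_poly([0, 0, 1], p=2))   # [0, 0]
```

Over ℝ the kernel of D is just the constants; the second call shows why the ℤ₂ question has a different flavour.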
1.6
Let V be a finite-dimensional vector space and let t ∈ 𝓛(V, V). Establish the chains

    V ⊇ Im t ⊇ Im t² ⊇ ⋯ ⊇ Im tⁿ ⊇ Im tⁿ⁺¹ ⊇ ⋯
    {0} ⊆ Ker t ⊆ Ker t² ⊆ ⋯ ⊆ Ker tⁿ ⊆ Ker tⁿ⁺¹ ⊆ ⋯

Show that there is a positive integer p such that Im t^p = Im t^{p+1} and deduce that

    (∀k ≥ 1)   Im t^p = Im t^{p+k}  and  Ker t^p = Ker t^{p+k}.

Show also that

    V = Im t^p ⊕ Ker t^p

and that the subspaces Im t^p and Ker t^p are t-invariant.
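The stabilisation of the two chains is easy to watch on an example. The matrix below is ours: a nilpotent 2 × 2 block next to an invertible 1 × 1 block, so the images shrink for two steps and then stay fixed:

```python
import numpy as np

# Our example (not from the text): nilpotent 2x2 block + invertible 1x1 block,
# so the chains stabilise at p = 2.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])

ranks = [int(np.linalg.matrix_rank(np.linalg.matrix_power(A, k)))
         for k in range(1, 5)]
print("dim Im t^k for k = 1..4:", ranks)
# Once Im t^p = Im t^(p+1), all later terms agree, as the problem predicts.
```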
Let V be a vector space of dimension n over a field F and let f : V → V be a non-zero linear transformation such that f ∘ f = 0. Show that if Im f is of dimension r then 2r ≤ n. Suppose now that W is a subspace of V such that V = Ker f ⊕ W. Show that W is of dimension r and that if {w₁, …, w_r} is a basis of W then {f(w₁), …, f(w_r)} is a linearly independent subset of Ker f. Deduce that n − 2r elements x₁, …, x_{n−2r} can be chosen in Ker f such that

    {w₁, …, w_r, f(w₁), …, f(w_r), x₁, …, x_{n−2r}}

is a basis of V. Hence show that a non-zero n × n matrix A over F is such that A² = 0 if and only if A is similar to a matrix of the form

    [ 0    0  0 ]
    [ I_r  0  0 ]
    [ 0    0  0 ]

1.8
Let V be a vector space of dimension 4 over ℝ. Let a basis of V be B = {b₁, b₂, b₃, b₄}. Writing each x ∈ V as x = Σ_{i=1}^{4} xᵢbᵢ, let

    V₁ = {x ∈ V | x₃ = x₂ and x₄ = x₁},
    V₂ = {x ∈ V | x₃ = −x₂ and x₄ = −x₁}.

Show that
(1) V₁ and V₂ are subspaces of V;
(2) {b₁ + b₄, b₂ + b₃} is a basis of V₁ and {b₁ − b₄, b₂ − b₃} is a basis of V₂;
(3) V = V₁ ⊕ V₂;
(4) with respect to the basis B and the basis

    C = {b₁ + b₄, b₂ + b₃, b₂ − b₃, b₁ − b₄}

the matrix of id_V is

    [ 1/2   0     0    1/2 ]
    [  0   1/2   1/2    0  ]
    [  0   1/2  −1/2    0  ]
    [ 1/2   0     0   −1/2 ]
A 4 × 4 matrix M over ℝ is said to be centro-symmetric if m_{ij} = m_{5−i,5−j} for all i, j. If M is centro-symmetric, show that M is similar to a matrix of the form

    [ α  β  0  0 ]
    [ γ  δ  0  0 ]
    [ 0  0  ε  ζ ]
    [ 0  0  η  θ ]

1.9
Let V be a vector space of dimension n over a field F. Suppose first that F is not of characteristic 2 (i.e. that 1_F + 1_F ≠ 0_F). If f : V → V is a linear transformation such that f ∘ f = id_V, prove that

    V = Im(id_V + f) ⊕ Im(id_V − f).

Deduce that an n × n matrix A over F is such that A² = Iₙ if and only if A is similar to a matrix of the form

    [ I_p      0     ]
    [ 0    −I_{n−p}  ]

Suppose now that F is of characteristic 2 and that f ∘ f = id_V. If g = id_V + f, show that

    x ∈ Ker g  ⟺  x = f(x),

and that g ∘ g = 0. Deduce that an n × n matrix A over F is such that A² = Iₙ if and only if A is similar to the diagonal block matrix

    diag( I_{n−2p}, J, …, J )   with p blocks   J = [ 1  1 ]
                                                   [ 0  1 ]

[Hint. Observe that Im g ⊆ Ker g. Let {g(c₁), …, g(c_p)} be a basis of Im g and extend this to a basis {b₁, …, b_{n−2p}, g(c₁), …, g(c_p)} of Ker g. Show that

    {b₁, …, b_{n−2p}, g(c₁), c₁, …, g(c_p), c_p}

is a basis of V.]
1.10 Let V be a finite-dimensional vector space and let t ∈ 𝓛(V, V) be such that t ≠ id_V and t ≠ 0. Is it possible to have Im t ∩ Ker t ≠ {0}? Is it possible to have Im t = Ker t? Is it possible to have Im t ⊆ Ker t? Is it possible to have Ker t ⊆ Im t? Which of these are possible if t is a projection?

1.11 Is it possible to have projections e, f ∈ 𝓛(V, V) with Ker e = Ker f and Im e ≠ Im f? Is it possible to have Im e = Im f and Ker e ≠ Ker f? Is it possible to have projections e, f with e ∘ f = 0 but f ∘ e ≠ 0?

1.12 Let V be a vector space over a field of characteristic not equal to 2. Let e₁, e₂ ∈ 𝓛(V, V) be projections. Prove that e₁ + e₂ is a projection if and only if e₁ ∘ e₂ = e₂ ∘ e₁ = 0. If e₁ + e₂ is a projection, find Im(e₁ + e₂) and Ker(e₁ + e₂) in terms of the images and kernels of e₁, e₂.

1.13 Let V be the subspace of ℝ³ given by V = {(a, a, 0) | a ∈ ℝ}.
Find a subspace U of ℝ³ such that ℝ³ = V ⊕ U. Is U unique? Find a projection e ∈ 𝓛(ℝ³, ℝ³) such that Im e = V and Ker e = U. Find also a projection f ∈ 𝓛(ℝ³, ℝ³) such that Im f = U and Ker f = V.
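For 1.13 one possible complement (ours; the exercise asks you to find your own and decide whether it is unique) is U = {(0, b, c)}. The matrices of the two complementary projections can then be verified directly:

```python
import numpy as np

# Hypothetical choice of complement: U = {(0, b, c)}.
# e projects onto V = {(a, a, 0)} along U; f = id - e projects onto U along V.
e = np.array([[1, 0, 0],
              [1, 0, 0],
              [0, 0, 0]])
f = np.eye(3, dtype=int) - e

assert np.array_equal(e @ e, e)     # e o e = e: a projection
assert np.array_equal(f @ f, f)     # the complementary map is one too
assert np.array_equal(e @ f, np.zeros((3, 3), dtype=int))
print(e @ np.array([7, 4, 9]))      # lands in V: both first coordinates equal
```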
1.14 If V is a finite-dimensional vector space over a field F and e, f ∈ 𝓛(V, V) are projections, prove that Im e = Im f if and only if e ∘ f = f and f ∘ e = e.
Suppose that e₁, …, e_k ∈ 𝓛(V, V) are projections with the same image. Let λ₁, λ₂, …, λ_k ∈ F be such that Σ_{i=1}^{k} λᵢ = 1. Prove that

    e = λ₁e₁ + λ₂e₂ + ⋯ + λ_ke_k

is a projection with Im e = Im eᵢ. Is it necessarily true that if f₁, …, f_k ∈ 𝓛(V, V) are projections and Σ_{i=1}^{k} λᵢ = 1 then Σ_{i=1}^{k} λᵢfᵢ is also a projection?

1.15 A net over the interval [0, 1] of ℝ is a finite sequence (aᵢ)₀

(2) p(X) and q(X) are divisible by X;
(3) q satisfies Xp(X) = 0, and p satisfies Xq(X) = 0.
Deduce that one of the following holds:
(i) p(X) = q(X); (ii) p(X) = Xq(X); (iii) q(X) = Xp(X).

1.29 A 3 × 3 complex matrix M is said to be magic if every row sum, every column sum, and both diagonal sums are equal to some δ ∈ ℂ. If M is magic, prove that δ = 3m₂₂. Deduce that, given α, β, γ ∈ ℂ, there is a unique magic matrix M(α, β, γ) such that

    m₂₂ = α,   m₁₁ = α + β,   m₃₁ = α + γ.

Show that {M(α, β, γ) | α, β, γ ∈ ℂ} is a subspace of Mat_{3×3}(ℂ) and that B = {M(1, 0, 0), M(0, 1, 0), M(0, 0, 1)} is a basis of this subspace. If f : ℂ³ → ℂ³ represents M(α, β, γ) relative to the canonical basis {e₁, e₂, e₃}, show that e₁ + e₂ + e₃ is an eigenvector of f. Determine the matrix of f relative to the basis {e₁ + e₂ + e₃, e₂, e₃}. Hence find the eigenvalues of M(α, β, γ).
1.30 Let V be a vector space of dimension n over a field F. A linear transformation f : V → V (respectively, an n × n matrix A over F) is said to be nilpotent of index p if there is an integer p ≥ 1 such that f^{p−1} ≠ 0 and f^p = 0 (respectively, A^{p−1} ≠ 0 and A^p = 0).
Show that if f is nilpotent of index p and x ∈ V∖{0} is such that f^{p−1}(x) ≠ 0 then

    {x, f(x), …, f^{p−1}(x)}

is a linearly independent subset of V. Hence show that f is nilpotent of index n if and only if there is an ordered basis of V with respect to which the matrix of f is

    [ 0        0 ]
    [ I_{n−1}  0 ]

Deduce that an n × n matrix A over F is nilpotent of index n if and only if A is similar to this matrix.

1.31 Let V be a finite-dimensional vector space over ℝ and let f : V → V be a linear transformation such that f ∘ f = −id_V. Extend the external law ℝ × V → V to an external law ℂ × V → V by defining, for all x ∈ V and all α + iβ ∈ ℂ,

    (α + iβ)x = αx − βf(x).

Show that in this way V becomes a vector space over ℂ. Use the identity

    Σ_{t=1}^{r} (λ_t − iμ_t)v_t = Σ_{t=1}^{r} λ_t v_t + Σ_{t=1}^{r} μ_t f(v_t)

to show that if {v₁, …, v_r} is a linearly independent subset of the ℂ-vector space V then {v₁, …, v_r, f(v₁), …, f(v_r)} is a linearly independent subset of the ℝ-vector space V. Deduce that the dimension of V as a vector space over ℂ is finite, n say, and that dim_ℝ V = 2n. Hence show that a 2n × 2n matrix A over ℝ is such that A² = −I₂ₙ if and only if A is similar to the matrix

    [ 0    −Iₙ ]
    [ Iₙ    0  ]
1.32 Let A be a real skew-symmetric matrix with eigenvalue λ. Prove that the real part of λ is zero, and that λ̄ is also an eigenvalue.
If (A − λI)²Z = 0 and Y = (A − λI)Z, show, by evaluating ȲᵗY, that Y = 0. Hence prove that A satisfies a polynomial equation without repeated roots, and deduce that A is similar to a diagonal matrix.
If x is an eigenvector corresponding to the eigenvalue λ = ia and if

    u = x + x̄,   v = i(x − x̄),

show that Au = av and Av = −au.
Hence show that A is similar to a diagonal block matrix

    diag( 0, A₁, A₂, …, A_k )

where each Aᵢ is real and of the form

    [ 0   −aᵢ ]
    [ aᵢ    0 ]
1.33 Let V be a vector space of dimension 3 over ℝ and let t ∈ 𝓛(V, V) have eigenvalues −2, 1, 2. Use the Cayley–Hamilton theorem to express t²ⁿ as a real quadratic polynomial in t.

1.34 Let V be a vector space of dimension n over a field F and let f ∈ 𝓛(V, V) be such that all the zeros of the characteristic polynomial of f lie in F. Let λ₁ be an eigenvalue of f and let b₁ be an associated eigenvector. Let W be such that V = Fb₁ ⊕ W and let (bᵢ′)_{i≥2}

2: Duality and normal transformations

If V is of finite dimension and B = {v₁, …, vₙ} is a basis of V then the basis that is dual to B is B^d = {v₁^d, …, vₙ^d} where each vᵢ^d : V → F is given by

    vᵢ^d(vⱼ) = 1 if i = j,  and  vᵢ^d(vⱼ) = 0 if i ≠ j.

For every x ∈ V we have

    x = v₁^d(x)v₁ + v₂^d(x)v₂ + ⋯ + vₙ^d(x)vₙ

and for every f ∈ V^d we have

    f = f(v₁)v₁^d + f(v₂)v₂^d + ⋯ + f(vₙ)vₙ^d.
If (vᵢ)ₙ, (wᵢ)ₙ are ordered bases of V and (vᵢ^d)ₙ, (wᵢ^d)ₙ the corresponding dual bases then the transition matrix from (vᵢ^d)ₙ to (wᵢ^d)ₙ is (P⁻¹)ᵗ where P is the transition matrix from (vᵢ)ₙ to (wᵢ)ₙ. In particular, consider V = ℝⁿ. Note that if

    B = {(a₁₁, …, a₁ₙ), (a₂₁, …, a₂ₙ), …, (aₙ₁, …, aₙₙ)}

is a basis of ℝⁿ then the transition matrix from B to the canonical basis (eᵢ)ₙ of ℝⁿ is M = [mᵢⱼ]ₙₓₙ where mᵢⱼ = aⱼᵢ. The transition matrix from B^d to (eᵢ^d)ₙ is given by (M⁻¹)ᵗ. We can therefore usefully denote the dual basis by

    B^d = {[a₁₁, …, a₁ₙ], [a₂₁, …, a₂ₙ], …, [aₙ₁, …, aₙₙ]}

where [aᵢ₁, …, aᵢₙ] denotes the ith row of M⁻¹, so that

    [aᵢ₁, …, aᵢₙ](x₁, …, xₙ) = aᵢ₁x₁ + ⋯ + aᵢₙxₙ.

The bidual of an element x is x^ : V^d → F where x^(y^d) = y^d(x). It is common practice to write y^d(x) as ⟨x, y^d⟩ and to say that y^d annihilates x if ⟨x, y^d⟩ = 0. For every subspace W of V the set

    W^⊥ = {y^d ∈ V^d | (∀x ∈ W) ⟨x, y^d⟩ = 0}

is a subspace of V^d, and dim W + dim W^⊥ = dim V.
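The row description of the dual basis is easy to verify numerically. In this sketch the basis of ℝ³ is our own choice: stacking its vectors as the rows of a matrix A gives M = Aᵗ, and the rows of M⁻¹ act as the dual functionals:

```python
import numpy as np

# Our choice of basis of R^3, stacked as the rows of A.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

M = A.T                     # transition matrix from B to the canonical basis
D = np.linalg.inv(M)        # row i of M^{-1} is the functional v_i^d

# Duality: row i of D applied to basis vector j gives 1 if i == j, else 0.
assert np.allclose(D @ A.T, np.eye(3))

# The expansion x = v_1^d(x) v_1 + ... + v_n^d(x) v_n, checked on a sample x:
x = np.array([2.0, 3.0, 5.0])
coords = D @ x              # the values v_i^d(x)
assert np.allclose(coords @ A, x)
print(coords)               # -> [0. 3. 2.]
```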
The transpose of a linear transformation f : V → W is the linear mapping fᵗ : W^d → V^d described by y^d ↦ y^d ∘ f. When V is of finite dimension we can identify V and its bidual (V^d)^d, in which case we have that (fᵗ)ᵗ = f. Moreover, if f : V → W is represented relative to fixed ordered bases by the matrix A then fᵗ : W^d → V^d is represented relative to the corresponding dual bases by the transpose Aᵗ of A.
If V is a finite-dimensional inner product space then the mapping ϑ_V : x ↦ x^d describes a conjugate isomorphism from V to V^d, by which we mean that (x + y)^d = x^d + y^d and (λx)^d = λ̄x^d. The adjoint f* : W → V of f : V → W is defined by

    f* = ϑ_V⁻¹ ∘ fᵗ ∘ ϑ_W

and is the unique linear transformation such that

    (∀x ∈ V)(∀y ∈ W)   (f(x) | y) = (x | f*(y)).

We say that f is normal if it commutes with its adjoint. If the matrix of f relative to a given ordered basis is A then that of f* is Āᵗ. We say that A is normal if it commutes with Āᵗ. A matrix is normal if and only if it is unitarily similar to a diagonal matrix, i.e. if there is a matrix U with U⁻¹ = Ūᵗ such that U⁻¹AU is diagonal. A particularly important type of normal transformation occurs when the vector space in question is a real inner product space, and topics dealt with in this section reach as far as the orthogonal reduction of real symmetric matrices and its application to finding the rank and signature of quadratic forms.
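As a numerical illustration of the last paragraph (the matrix is our example): a real symmetric matrix is normal, and an orthonormal eigenbasis supplies the unitary (here orthogonal) U with U⁻¹AU diagonal:

```python
import numpy as np

# Our example: a real symmetric (hence normal) matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # A commutes with its adjoint

w, U = np.linalg.eigh(A)        # eigenvalues ascending; columns of U orthonormal
assert np.allclose(U.conj().T @ U, np.eye(2))        # U^{-1} equals the conjugate transpose
assert np.allclose(U.conj().T @ A @ U, np.diag(w))   # unitarily diagonalised
print(w)   # -> [1. 3.]
```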
2.1
Determine which of the following mappings are linear functionals on the vector space ℝ₃[X] of all real polynomials of degree less than or equal to 2:
(a) f ↦ f′;
(b) f ↦ ∫₀¹ f;
(c) f ↦ f(2);
(d) f ↦ f′(2);
(e) f ↦ ∫₀¹ f².

2.2
Let C[0, 1] be the vector space of continuous functions f : [0, 1] → ℝ. If f₀ is a fixed element of C[0, 1], prove that φ : C[0, 1] → ℝ given by

    φ(f) = ∫₀¹ f₀(t)f(t) dt

is a linear functional.
2.3
Determine the basis of (ℝ³)^d that is dual to the basis {(1, 0, −1), (−1, 1, 0), (0, 1, 1)} of ℝ³.

2.4
Let A = {x₁, x₂} be a basis of a vector space V of dimension 2 and let A^d = {φ₁, φ₂} be the corresponding dual basis of V^d. Find, in terms of φ₁, φ₂, the basis of V^d that is dual to the basis A′ = {x₁ + 2x₂, 3x₁ + 4x₂} of V.
2.5
Which of the following bases of (ℝ²)^d is dual to the basis {(−1, 2), (0, 1)} of ℝ²?
(a) {[−1, 2], [0, 1]};
(b) {[−1, 0], [2, 1]};
(c) {[−1, 0], [−2, 1]};
(d) {[1, 0], [2, −1]}.

2.6
(i) Find a basis that is dual to the basis {(4, 5, −2, 11), (3, 4, −2, 6), (2, 3, −1, 4), (1, 1, −1, 3)} of ℝ⁴.
(ii) Find a basis of ℝ⁴ whose dual basis is

    {[2, −1, 1, 0], [−1, 0, −2, 0], [−2, 2, 1, 0], [−8, 3, −3, 1]}.

2.7
Show that if V is a finite-dimensional vector space over a field F and if A, B are subspaces of V such that V = A ⊕ B then V^d = A^⊥ ⊕ B^⊥. Is it true that if V = A ⊕ B then V^d = A^d ⊕ B^d?
2.8
Let ℝ₃[X] be the vector space of polynomials over ℝ of degree less than or equal to 2. Let t₁, t₂, t₃ be three distinct real numbers and for i = 1, 2, 3 define mappings fᵢ : ℝ₃[X] → ℝ by

    fᵢ(p(X)) = p(tᵢ).

Show that B^d = {f₁, f₂, f₃} is a basis for the dual space (ℝ₃[X])^d and determine a basis B = {p₁(X), p₂(X), p₃(X)} of ℝ₃[X] of which B^d is the dual.
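The basis B asked for in 2.8 consists of polynomials with pᵢ(tⱼ) equal to 1 when i = j and 0 otherwise, i.e. Lagrange-style interpolation polynomials. A sketch with sample nodes of our own choosing (t = 0, 1, 2):

```python
import numpy as np
from numpy.polynomial import Polynomial

t = [0.0, 1.0, 2.0]   # our sample of three distinct reals

def lagrange(i, nodes):
    """p_i with p_i(t_j) = 1 if i == j, else 0."""
    p = Polynomial([1.0])
    for j, tj in enumerate(nodes):
        if j != i:
            p *= Polynomial([-tj, 1.0]) / (nodes[i] - tj)
    return p

P = [lagrange(i, t) for i in range(3)]
for i in range(3):
    for j in range(3):
        assert np.isclose(P[i](t[j]), 1.0 if i == j else 0.0)
print([p.coef.round(3).tolist() for p in P])   # coefficients a0, a1, a2 of each p_i
```

Evaluating the three functionals fᵢ on these polynomials reproduces the identity matrix, which is exactly the duality condition.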
2.9
Let α = (1, 2) and β = (5, 6) be elements of ℝ² and let φ = [3, 4] be an element of (ℝ²)^d. Determine
(a) α^(φ);
(b) β^(φ);
(c) (2α + 3β)^(φ);
(d) (2α + 3β)^([a, b]).

2.10 Prove that if S is a subspace of a finite-dimensional vector space V then

    dim S + dim S^⊥ = dim V.

If t ∈ 𝓛(U, V) and t^d ∈ 𝓛(V^d, U^d) is the dual of t, prove that

    Ker t^d = (Im t)^⊥.

Deduce that if v ∈ V then one of the following holds:
(i) there exists u ∈ U such that t(u) = v;
(ii) there exists φ ∈ V^d such that t^d(φ) = 0 and φ(v) ≠ 0.

… q(x) ≥ 0 for all x ∈ ℝⁿ with q(x) = 0 only when x = 0 if and only if the rank and signature of q are each n.

2.35 With respect to the standard basis for ℝ³, a quadratic form q is represented by the matrix
    A = [  1   1  −1 ]
        [  1   1   0 ]
        [ −1   0  −1 ]

Is q positive definite? Is q positive semi-definite? Find a basis of ℝ³ with respect to which the matrix representing q is in normal form.

2.36 Let f be the bilinear form on ℝ² × ℝ² given by

    f((x₁, x₂), (y₁, y₂)) = x₁y₁ + x₁y₂ + 2x₂y₁ + x₂y₂.

Find a symmetric bilinear form g and a skew-symmetric bilinear form h such that f = g + h.
Let q be the quadratic form given by q(x) = f(x, x) where x ∈ ℝ². Find the matrix of q with respect to the standard basis. Find also the rank and signature of q. Is q positive definite? Is q positive semi-definite?
2.37 Write the quadratic form

    4x² + 4y² + 4z² − 2yz + 2xz − 2xy

in matrix notation and show that there is an orthogonal transformation (x, y, z) ↦ (u, v, w) which transforms the quadratic form to

    3u² + 3v² + 6w².

Deduce that the original form is positive definite.
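A numerical cross-check of 2.37 (the symmetric matrix below encodes the form, with off-diagonal entries equal to half the cross-term coefficients): the eigenvalues of the matrix are the coefficients of the reduced form, and the orthogonal eigenvector matrix realises the change of variables:

```python
import numpy as np

# Symmetric matrix of 4x^2 + 4y^2 + 4z^2 - 2yz + 2xz - 2xy:
# diagonal = coefficients of x^2, y^2, z^2; off-diagonal = half cross-terms.
A = np.array([[ 4.0, -1.0,  1.0],
              [-1.0,  4.0, -1.0],
              [ 1.0, -1.0,  4.0]])

w, Q = np.linalg.eigh(A)        # Q orthogonal: the change of variables
print(w)                        # -> [3. 3. 6.]
assert np.allclose(Q.T @ A @ Q, np.diag(w))
assert (w > 0).all()            # all eigenvalues positive: the form is positive definite
```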
2.38 By completing squares, find the rank and signature of the following quadratic forms:
(1) 2y² − z² + xy + xz;
(2) 2xy − xz − yz;
(3) yz + xz + xy + xt + yt + zt.

2.39 For each of the following quadratic forms write down the symmetric matrix A for which the form is expressible as xᵗAx. Diagonalise each of the forms and in each case find a real non-singular matrix P for which the matrix PᵗAP is diagonal with entries in {1, −1, 0}.
(1) x² + 2y² + 9z² − 2xy + 4xz − 6yz;
(2) 4xy + 2yz;
(3) x² + 4y² + z² − 4t² + 2xy − 2xt + 6yz − 8yt − 14zt.

2.40 Find the rank and signature of the quadratic form

    Q(x₁, …, xₙ) = Σ_{r<s} (x_r − x_s)².

Solutions to Chapter 1

… Then

    qⁿ⁺¹ = qⁿq = pⁿ⁻¹q² = pⁿ⁻¹pq = pⁿq,

which shows that it holds for n + 1. The second equality is established in a similar way.
(2) If r is non-singular then r⁻¹ exists and consequently from rtr = 0 we obtain the contradiction t = 0. Hence r is singular, so both p and q are singular and hence have 0 as an eigenvalue. Consequently we see that p(X) and q(X) are divisible by X.
(3) Let q(X) = a₁X + a₂X² + ⋯ + a_rX^r. Then a₁q + a₂q² + ⋯ + a_rq^r = 0 and so (a₁q + ⋯ + a_rq^r)p = 0, i.e. a₁p² + ⋯ + a_rp^{r+1} = 0, which shows that p satisfies Xq(X) = 0. Similarly, q satisfies Xp(X) = 0.
By (3), p(X) divides Xq(X), and q(X) divides Xp(X), so we have
    Xq(X) = p₁(X)p(X),   Xp(X) = q₁(X)q(X)

for monic polynomials p₁(X), q₁(X). Now

    X²q(X) = Xp₁(X)p(X) = p₁(X)q₁(X)q(X)

so, since q(X) ≠ 0, we have p₁(X)q₁(X) = X². Consequently, either (i) p₁(X) = q₁(X) = X, or (ii) q₁(X) = X², or (iii) p₁(X) = X².

1.29
Clearly, adding together the elements in the middle row, the middle column, and both diagonals, we obtain

    Σ_{i,j} m_{ij} + 3m₂₂ = 4δ,

so that 3δ + 3m₂₂ = 4δ and hence δ = 3m₂₂. If m₂₂ = α, m₁₁ = α + β and m₃₁ = α + γ then

    m₂₁ = 3α − m₁₁ − m₃₁ = 3α − α − β − α − γ = α − β − γ,
    m₂₃ = 3α − m₂₁ − m₂₂ = 3α − α + β + γ − α = α + β + γ,

and so on, and we obtain

    M(α, β, γ) = [ α + β      α − β + γ   α − γ     ]
                 [ α − β − γ  α           α + β + γ ]
                 [ α + γ      α + β − γ   α − β     ]
It is readily seen that sums and scalar multiples of magic matrices are also magic. Hence the magic matrices constitute a subspace of Mat_{3×3}(ℂ). Also,

    M(α, β, γ) = αM(1, 0, 0) + βM(0, 1, 0) + γM(0, 0, 1),

so that B generates this subspace. Since M(α, β, γ) = 0 if and only if α = β = γ = 0, it follows that B is a basis for this subspace.
That e₁ + e₂ + e₃ is an eigenvector of f follows from the fact that

    M(α, β, γ) [1, 1, 1]ᵗ = [3α, 3α, 3α]ᵗ = 3α [1, 1, 1]ᵗ.

To compute the matrix of f relative to the basis {e₁ + e₂ + e₃, e₂, e₃} we observe that, by the above,

    f(e₁ + e₂ + e₃) = 3α(e₁ + e₂ + e₃);
and that

    f(e₂) = (α − β + γ)e₁ + αe₂ + (α + β − γ)e₃
          = (α − β + γ)(e₁ + e₂ + e₃ − e₂ − e₃) + αe₂ + (α + β − γ)e₃
          = (α − β + γ)(e₁ + e₂ + e₃) + (β − γ)e₂ + (2β − 2γ)e₃;

    f(e₃) = (α − γ)e₁ + (α + β + γ)e₂ + (α − β)e₃
          = (α − γ)(e₁ + e₂ + e₃ − e₂ − e₃) + (α + β + γ)e₂ + (α − β)e₃
          = (α − γ)(e₁ + e₂ + e₃) + (β + 2γ)e₂ + (γ − β)e₃.

The matrix of f relative to {e₁ + e₂ + e₃, e₂, e₃} is then

    L = [ 3α   α − β + γ   α − γ  ]
        [ 0    β − γ       β + 2γ ]
        [ 0    2β − 2γ     γ − β  ]

Since L and M(α, β, γ) represent the same linear mapping they are similar and therefore have the same eigenvalues. It is readily seen that

    det(L − λI₃) = (3α − λ)(λ² − 3β² + 3γ²),

so the eigenvalues are 3α and ±√(3(β² − γ²)).

1.30
If f is nilpotent of index p then f^p = 0 and f^{p−1} ≠ 0. Let x ∈ V be such that f^{p−1}(x) ≠ 0 and consider the set

    B_p = {x, f(x), …, f^{p−1}(x)}.

Suppose that

    (*)   λ₀x + λ₁f(x) + ⋯ + λ_{p−1}f^{p−1}(x) = 0.

On applying f^{p−1} to (*) and using the fact that f^p = 0, we see that λ₀f^{p−1}(x) = 0, whence we deduce that λ₀ = 0. Deleting the first term in (*) and applying f^{p−2} to the remainder, we obtain similarly λ₁ = 0. Repeating this argument, we see that each λᵢ = 0 and hence that B_p is linearly independent.
It follows from the above that if f is nilpotent of index n = dim V then

    B_n = {x, f(x), …, f^{n−1}(x)}

is a basis of V. The matrix of f relative to B_n is readily seen to be

    I* = [ 0        0 ]
         [ I_{n−1}  0 ]

Consequently, if A is an n × n matrix over F that is nilpotent of index n then A is similar to I*. Conversely, if A is similar to I* then there is an invertible matrix P such that P⁻¹AP = I*, so that A = PI*P⁻¹. Computing the powers of A we see that (i) Aⁿ = 0; (ii) [Aⁿ⁻¹]_{n1} = 1, so Aⁿ⁻¹ ≠ 0. Hence A is nilpotent of index n.

1.31
To see that V is a ℂ-vector space it suffices to check the axioms concerning the external law. For example,

    (α + iβ)[(γ + iδ)x] = (α + iβ)[γx − δf(x)]
        = α[γx − δf(x)] − βf[γx − δf(x)]
        = αγx − αδf(x) − βγf(x) − βδx
        = (αγ − βδ)x − (αδ + βγ)f(x)
        = [(α + iβ)(γ + iδ)]x.

Suppose now that {v₁, …, v_r} is a linearly independent subset of the ℂ-vector space V and that in the ℝ-vector space V we have

    Σⱼ λⱼvⱼ + Σⱼ μⱼf(vⱼ) = 0.

Using the given identity, we can rewrite this as the following equation in the ℂ-vector space V:

    Σⱼ (λⱼ − iμⱼ)vⱼ = 0.

It follows that λⱼ − iμⱼ = 0 for every j, so that λⱼ = 0 = μⱼ. Consequently,

    {v₁, …, v_r, f(v₁), …, f(v_r)}

is linearly independent in the ℝ-vector space V. Since V is of finite dimension over ℝ it must then be so over ℂ. The given identity shows that every complex linear combination of {v₁, …, vₙ} can be written as a real linear combination of v₁, …, vₙ, f(v₁), …, f(vₙ). If dim_ℂ V = n it then follows that dim_ℝ V = 2n.
By considering a basis of V (over ℝ) of the form {v₁, …, vₙ, f(v₁), …, f(vₙ)} we deduce immediately from the fact that f ∘ f = −id_V that the matrix of f relative to this basis is

    Γ = [ 0    −Iₙ ]
        [ Iₙ    0  ]

Clearly, it follows from the above that if A is a 2n × 2n matrix over ℝ such that A² = −I₂ₙ then A is similar to Γ. Conversely, if A is similar to Γ then there is an invertible matrix P such that P⁻¹AP = Γ and hence

    A² = (PΓP⁻¹)² = PΓ²P⁻¹ = P(−I₂ₙ)P⁻¹ = −I₂ₙ.

1.32
Let x be an eigenvector corresponding to λ. Then from Ax = λx we have x̄ᵗAᵗ = λ̄x̄ᵗ. Since Ā = A and Aᵗ = −A we deduce that −x̄ᵗA = λ̄x̄ᵗ. Thus −x̄ᵗAx = λ̄x̄ᵗx. But we also have x̄ᵗAx = λx̄ᵗx. It follows that λ̄ = −λ, so the real part of λ is zero. We also deduce from Ax = λx that Āx̄ = λ̄x̄, i.e. that Ax̄ = λ̄x̄, so λ̄ is also an eigenvalue.
Y = (A − λI)Z gives Ȳᵗ = Z̄ᵗ(Āᵗ − λ̄I) = −Z̄ᵗ(A + λ̄I) = −Z̄ᵗ(A − λI). Consequently,

    ȲᵗY = −Z̄ᵗ(A − λI)(A − λI)Z = 0

since it is given that (A − λI)²Z = 0. Now

    ȲᵗY = (a − ib)(a + ib) + ⋯ + (x − iy)(x + iy) = a² + b² + ⋯ + x² + y²,

and a sum of squares is zero if and only if each summand is zero. Hence we see that Y = 0.
The minimum polynomial of A cannot have repeated roots. For, if this were of the form m(X) = (X − a)²p(X) then from (A − aI)²p(A) = 0 we would have, by the above applied to each column of p(A) in turn, (A − aI)p(A) = 0 and m(X) would not be the minimum polynomial. Thus the minimum polynomial has simple roots and so A is similar to a diagonal matrix.
Suppose now that Ax = iax. Then Ax̄ = −iax̄ and

    Au = A(x + x̄) = iax − iax̄ = ia(x − x̄) = av,
    Av = Ai(x − x̄) = i(iax + iax̄) = −a(x + x̄) = −au.

These equalities can be written in the form

    A[u  v] = [u  v] [ 0  −a ]
                     [ a   0 ]

The last part follows by choosing ia₁, …, ia_k to be the non-zero eigenvalues of A.
1.33
Since t satisfies its characteristic equation we have

(t - id)(t + 2id)(t - 2id) = 0,

which gives t³ = t² + 4t - 4id. It is now readily seen that

t⁴ = t² + 4(t² - id);
t⁶ = t² + 4(1 + 4)(t² - id);
t⁸ = t² + 4(1 + 4 + 4²)(t² - id).

This suggests that in general

t^{2p} = t² + 4(1 + 4 + 4² + ... + 4^{p-2})(t² - id),

and it is easy to see by induction that this is indeed the case. Thus we see that

t^{2n} = t² + 4(1 + 4 + ... + 4^{n-2})(t² - id) = t² + (4/3)(4^{n-1} - 1)(t² - id).
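The closed form can be checked symbolically on any transformation with the same characteristic equation; the sketch below uses the diagonal example t = diag(1, -2, 2), which is an assumption made purely for the check.

```python
from sympy import diag, eye, Rational

# Sample transformation satisfying (t - id)(t + 2 id)(t - 2 id) = 0.
t = diag(1, -2, 2)
for n in range(1, 6):
    rhs = t**2 + Rational(4, 3) * (4**(n - 1) - 1) * (t**2 - eye(3))
    assert t**(2 * n) == rhs   # the closed form for t^(2n) holds
```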
1.34 We have that f(b1) = λ1 b1 and, for i ≥ 2,

f(bi) = μ_{1i} b1 + Σ_{j≥2} μ_{ji} bj.

Thus the matrix of f relative to the basis {b1, b2, ..., bn} is of the form

A = [ λ1  μ_{12} ... μ_{1n} ]
    [ 0                     ]
    [ :          M          ]
    [ 0                     ].

If w ∈ W, say w = w1 b1 + Σ_{i≥2} wi bi, then

g(w) = π[f(w)] = π(w1 λ1 b1 + Σ_{i≥2} wi f(bi)) = Σ_{i≥2} wi π[f(bi)] ∈ W,

since b1 ∈ Ker π and π acts as the identity on Im π = W. Thus W is g-invariant.

It is clear that the matrix of g′ = g|_W relative to {b2, ..., bn} is M. Also, the characteristic equation of f is given by det(A - XI_n) = 0, i.e. by

(λ1 - X) det(M - XI_{n-1}) = 0.

So the eigenvalues of g′ are precisely those of f with the algebraic multiplicity of λ1 reduced by 1. Since all the eigenvalues of f belong to F by hypothesis, so then do all those of g′. The last part follows from the above by a simple inductive argument; if the result holds for (n-1) × (n-1) matrices then it holds for M and hence for A.

1.35 The eigenvalues of t are 0, 1, 1. The minimum polynomial is either X(X - 1) or X(X - 1)². But t² - t ≠ 0, so the minimum polynomial is
X(X - 1)². We have that V = Ker t ⊕ Ker(t - id_V)². We must find a basis {w1, w2, w3} with

t(w1) = 0,  (t - id_V)(w2) = 0,  (t - id_V)(w3) = w2.

A suitable basis is {(-1, 2, 0), (1, -1, 0), (1, 1, 1)}, with respect to which the matrix of t is

[ 0 0 0 ]
[ 0 1 1 ]
[ 0 0 1 ].
1.36 We have that

t(1) = -5 - 8X - 5X²,
t²(1) = -5(-5 - 8X - 5X²) - 8(1 + X + X²) - 5(4 + 7X + 4X²),
t³(1) = 0.

Similarly we have that t³(X) = 0 and t³(X²) = 0. Consequently t³ = 0 and so t is nilpotent.

Take v1 = 1 + X + X². Then we have t(v1) = 0. Now take v2 = 5 + 8X + 5X². Then we have

t(v2) = 3(1 + X + X²) ∈ span{v1}.

Finally, take v3 = 1 and observe that

t(1) = -5 - 8X - 5X² ∈ span{v1, v2}.

It is now clear that {1 + X + X², 5 + 8X + 5X², 1} is a basis with respect to which the matrix of t is upper triangular.
1.37 (a) The characteristic polynomial is X² + 2X + 1, so the eigenvalues are -1 (twice). A corresponding eigenvector satisfies

[ 40 -64 ] [ x ]   [ 0 ]
[ 25 -40 ] [ y ] = [ 0 ],

so -1 has geometric multiplicity 1 with [8, 5] as an associated eigenvector. Hence the Jordan normal form is

[ -1  1 ]
[  0 -1 ].

A Jordan basis can be found by solving

(A + I2)v1 = 0,  (A + I2)v2 = v1.

Take v1 = [8, 5]. Then a possible solution for v2 is [5, 3], giving

P = [ 8 5 ]
    [ 5 3 ].
(b) The characteristic polynomial is (X + 1)². The eigenvalues are -1 (twice) with geometric multiplicity 1, and a corresponding eigenvector is [1, 0]. The Jordan normal form is

[ -1  1 ]
[  0 -1 ].

A Jordan basis satisfies

(A + I2)v1 = 0,  (A + I2)v2 = v1.

Take v1 = [1, 0] and v2 = [0, -1]; then

P = [ 1  0 ]
    [ 0 -1 ].

(Any Jordan basis is of the form {[c, 0], [d, -c]} with

P = [ c  d ]
    [ 0 -c ].)
(c) The characteristic polynomial is (X - 1)³, so the only eigenvalue is 1. It has geometric multiplicity 2, with {[1, 0, 0], [0, 2, 3]} as a basis for the eigenspace. The Jordan normal form is then

[ 1 1 0 ]
[ 0 1 0 ]
[ 0 0 1 ].

A Jordan basis satisfies

(A - I3)v1 = 0,  (A - I3)v2 = v1,  (A - I3)v3 = 0.

Now (A - I3)² = 0, so choose v2 to be any vector not in span{[1, 0, 0], [0, 2, 3]}, for example v2 = [0, 1, 0]. Then v1 = (A - I3)v2 = [3, 6, 9]. For v3 choose any vector in span{[1, 0, 0], [0, 2, 3]} that is independent of [3, 6, 9], for example v3 = [1, 0, 0]. This gives

P = [ 3 0 1 ]
    [ 6 1 0 ]
    [ 9 0 0 ].
(d) The Jordan normal form is

[ 3 1 0 ]
[ 0 3 0 ]
[ 0 0 3 ].

A Jordan basis satisfies

(A - 3I3)v1 = 0,  (A - 3I3)v2 = v1,  (A - 3I3)v3 = 0.

Choose v2 = [0, 0, 1]. Then v1 = [1, 0, 0] and a suitable choice for v3 is [0, 1, 0]. Thus
P = [ 1 0 0 ]
    [ 0 0 1 ]
    [ 0 1 0 ].
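The Jordan computations in 1.37 are easy to verify with a computer algebra system. The matrix A of part (a) is not printed in this copy; below it is reconstructed from the data in the solution (A + I has kernel spanned by [8, 5] and sends [5, 3] to [8, 5]), so treat it as an assumption.

```python
from sympy import Matrix

# Reconstructed matrix for 1.37(a): A + I = [[40, -64], [25, -40]].
A = Matrix([[39, -64], [25, -41]])
P, J = A.jordan_form()
assert J == Matrix([[-1, 1], [0, -1]])   # the Jordan form found above
# the Jordan basis {[8, 5], [5, 3]} from the text also works:
Pt = Matrix([[8, 5], [5, 3]])
assert Pt.inv() * A * Pt == J
```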
1.38 The characteristic polynomial of A is (X - 1)³(X - 2)². For the eigenvalue 2 we solve (A - 2I5)v = 0, i.e.

[ 0  1  1  1  0 ] [ x ]   [ 0 ]
[ 0  0  0  0  0 ] [ y ]   [ 0 ]
[ 0  0  0  1  0 ] [ z ] = [ 0 ]
[ 0  0  0 -1  1 ] [ t ]   [ 0 ]
[ 0 -1 -1 -1 -2 ] [ w ]   [ 0 ],

to obtain w = t = 0, y + z = 0. Thus the general eigenvector associated with the eigenvalue 2 is [x, y, -y, 0, 0] with x, y not both zero. The Jordan block associated with the eigenvalue 2 is

[ 2 0 ]
[ 0 2 ].
For the eigenvalue 1 we solve (A - I5)v = 0, i.e.

[ 1  1  1  1  0 ] [ x ]   [ 0 ]
[ 0  1  0  0  0 ] [ y ]   [ 0 ]
[ 0  0  1  1  0 ] [ z ] = [ 0 ]
[ 0  0  0  0  1 ] [ t ]   [ 0 ]
[ 0 -1 -1 -1 -1 ] [ w ]   [ 0 ],

to obtain w = y = x = 0, z + t = 0. Thus the general eigenvector associated with the eigenvalue 1 is [0, 0, z, -z, 0] with z ≠ 0. The Jordan block associated with the eigenvalue 1 is

[ 1 1 0 ]
[ 0 1 1 ]
[ 0 0 1 ].
The Jordan normal form of A is therefore

[ 2 0 0 0 0 ]
[ 0 2 0 0 0 ]
[ 0 0 1 1 0 ]
[ 0 0 0 1 1 ]
[ 0 0 0 0 1 ].

Take [0, 0, 1, -1, 0] as an eigenvector associated with the eigenvalue 1. Then we solve (A - I5)v = [0, 0, 1, -1, 0], i.e.

[ 1  1  1  1  0 ] [ x ]   [  0 ]
[ 0  1  0  0  0 ] [ y ]   [  0 ]
[ 0  0  1  1  0 ] [ z ] = [  1 ]
[ 0  0  0  0  1 ] [ t ]   [ -1 ]
[ 0 -1 -1 -1 -1 ] [ w ]   [  0 ],
to obtain y = 0, w = -1, z + t = 1, x = -1, so we take [-1, 0, 0, 1, -1]. Next we solve (A - I5)v = [-1, 0, 0, 1, -1], i.e.

[ 1  1  1  1  0 ] [ x ]   [ -1 ]
[ 0  1  0  0  0 ] [ y ]   [  0 ]
[ 0  0  1  1  0 ] [ z ] = [  0 ]
[ 0  0  0  0  1 ] [ t ]   [  1 ]
[ 0 -1 -1 -1 -1 ] [ w ]   [ -1 ],

to obtain y = 0, t + z = 0, w = 1, x = -1, so we take [-1, 0, 0, 0, 1]. A Jordan basis is therefore

{[1, 0, 0, 0, 0], [0, 1, -1, 0, 0], [0, 0, 1, -1, 0], [-1, 0, 0, 1, -1], [-1, 0, 0, 0, 1]}

and a suitable matrix is

P = [ 1  0  0 -1 -1 ]
    [ 0  1  0  0  0 ]
    [ 0 -1  1  0  0 ]
    [ 0  0 -1  1  0 ]
    [ 0  0  0 -1  1 ].
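The whole computation can be verified at once. The matrix A below is reconstructed from the linear systems displayed in this solution (the problem statement itself is not reproduced here), so it is an assumption; granting it, P⁻¹AP is the Jordan form found above.

```python
import numpy as np

# A reconstructed from the systems (A - 2I)v = 0 and (A - I)v = 0 above.
A = np.array([[2, 1, 1, 1, 0],
              [0, 2, 0, 0, 0],
              [0, 0, 2, 1, 0],
              [0, 0, 0, 1, 1],
              [0, -1, -1, -1, 0]])
P = np.array([[1, 0, 0, -1, -1],
              [0, 1, 0, 0, 0],
              [0, -1, 1, 0, 0],
              [0, 0, -1, 1, 0],
              [0, 0, 0, -1, 1]])
J = np.array([[2, 0, 0, 0, 0],
              [0, 2, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1]])
assert np.allclose(np.linalg.inv(P) @ A @ P, J)
```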
1.39 (a) The Jordan form and a suitable (non-unique) matrix P are

J = [ 2 1 0 ]      P = [ 2 -5 -3 ]
    [ 0 2 0 ],         [ 3  8  2 ]
    [ 0 0 2 ]          [ 5  8 -7 ].
(b) The Jordan form and a suitable P are

J = [ 2 0 0 0 ]      P = [  4  3  2 0 ]
    [ 0 1 1 0 ]          [  5  4  3 0 ]
    [ 0 0 1 1 ],         [ -2 -2 -1 0 ]
    [ 0 0 0 1 ]          [  6  4  1 1 ].

1.40 The Jordan normal form is
[ 2 0 0 0 0 ]
[ 0 2 1 0 0 ]
[ 0 0 2 0 0 ]
[ 0 0 0 3 1 ]
[ 0 0 0 0 3 ].

A Jordan basis is

{[2, 1, 0, 0, 1], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0], [-1, 0, 0, 1, 0], [2, 0, 0, 0, 1]}.
1.41 The minimum polynomial is (X - 2)³. There are two possibilities for the Jordan normal form, namely

J1 = [ 2 1 0 0 0 ]      J2 = [ 2 1 0 0 0 ]
     [ 0 2 1 0 0 ]           [ 0 2 1 0 0 ]
     [ 0 0 2 0 0 ]           [ 0 0 2 0 0 ]
     [ 0 0 0 2 0 ],          [ 0 0 0 2 1 ]
     [ 0 0 0 0 2 ]           [ 0 0 0 0 2 ].
Each of these has (X - 2)³ as minimum polynomial. There are two linearly independent eigenvectors, e.g. [0, -1, 1, 1, 0] and [0, 1, 0, 0, 1]. The number of linearly independent eigenvectors does not determine the Jordan form. For example, the matrix J2 above and the matrix

[ 2 0 0 0 0 ]
[ 0 2 1 0 0 ]
[ 0 0 2 1 0 ]
[ 0 0 0 2 1 ]
[ 0 0 0 0 2 ]

both have two linearly independent eigenvectors. Both pieces of information are required in order to determine the Jordan form. For the given matrix this is J2.

1.42 A basis for ℝ4[X] is {1, X, X², X³} and D(1) = 0, D(X) = 1, D(X²) = 2X, D(X³) = 3X². Hence, relative to the above basis, D is represented by the matrix
[ 0 1 0 0 ]
[ 0 0 2 0 ]
[ 0 0 0 3 ]
[ 0 0 0 0 ].

The characteristic polynomial of this matrix is X⁴, the only (quadruple) eigenvalue is 0, and the eigenspace of 0 is of dimension 1 with basis {1}. So the Jordan normal form is

[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
[ 0 0 0 0 ].

A Jordan basis is {f1, f2, f3, f4} where
Df1 = 0, Df2 = f1, Df3 = f2, Df4 = f3. Choose f1 = 1; then f2 = X, f3 = X²/2, f4 = X³/6, so (clearing denominators) a Jordan basis is {6, 6X, 3X², X³}.
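This computation for the differentiation operator can be checked directly:

```python
from sympy import Matrix

# Matrix of D on the basis {1, X, X^2, X^3} of R4[X].
D = Matrix([[0, 1, 0, 0],
            [0, 0, 2, 0],
            [0, 0, 0, 3],
            [0, 0, 0, 0]])
P, J = D.jordan_form()
assert J == Matrix([[0, 1, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]])   # a single nilpotent 4x4 block
# Columns: coordinates of 6, 6X, 3X^2, X^3 in the basis {1, X, X^2, X^3}.
B = Matrix([[6, 0, 0, 0],
            [0, 6, 0, 0],
            [0, 0, 3, 0],
            [0, 0, 0, 1]])
assert B.inv() * D * B == J          # {6, 6X, 3X^2, X^3} is a Jordan basis
```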
1.43 The possible Jordan forms are

[ 3 0 0 ]   [ 3 1 0 ]   [ 3 1 0 ]   [ 3 0 0 ]
[ 0 3 0 ],  [ 0 3 1 ],  [ 0 3 0 ],  [ 0 3 1 ].
[ 0 0 3 ]   [ 0 0 3 ]   [ 0 0 3 ]   [ 0 0 3 ]

The last two are similar.
1.44 (i) and (ii) are true: use the fact that AB and BA = A⁻¹(AB)A are similar. (iii) and (iv) are false; for example, the products

[ 0 1 ][ 0 0 ]        [ 0 0 ][ 0 1 ]
[ 0 0 ][ 1 0 ]  and   [ 1 0 ][ 0 0 ]

clearly have the same Jordan normal form.
1.45 V decomposes into a direct sum of t-invariant subspaces, say V = V1 ⊕ ... ⊕ Vr, and each summand is associated with one and only one eigenvalue of t. Without loss of generality we can assume that t has a single eigenvalue λ. Consider an i × i Jordan block. Corresponding to this block there are i basis elements of V, say v1, ..., vi, with

(t - λ id_V)v1 = 0;
(t - λ id_V)²v2 = (t - λ id_V)v1 = 0;
...
(t - λ id_V)ⁱvi = (t - λ id_V)^{i-1}v_{i-1} = ... = 0.

Thus there is one eigenvector associated with each block, and so there are dim Ker(t - λ id_V) blocks.

Consider Ker(t - λ id_V)ʲ. For every 1 × 1 block there corresponds a single basis element which is an eigenvector in Ker(t - λ id_V)ʲ. For every 2 × 2 block there correspond two basis elements in Ker(t - λ id_V)ʲ if j ≥ 2 and one basis element if j < 2. In general, to each i × i block there correspond i basis elements in Ker(t - λ id_V)ʲ if j ≥ i, and j basis elements if j < i. It follows that

dj = n1 + 2n2 + ... + (j - 1)n_{j-1} + j(nj + n_{j+1} + ...),

and a simple calculation shows that 2dj - d_{j-1} - d_{j+1} = nj.

1.46 The characteristic polynomial of A is (X - 2)⁴, and the minimum polynomial is (X - 2)². A has a single eigenvalue and is not diagonalisable. The possible Jordan normal forms are
[ 2 1 0 0 ]        [ 2 1 0 0 ]
[ 0 2 0 0 ]        [ 0 2 0 0 ]
[ 0 0 2 1 ]  and   [ 0 0 2 0 ]
[ 0 0 0 2 ]        [ 0 0 0 2 ].

Now dim Im(A - 2I4) = 1, so dim Ker(A - 2I4) = 3, and so the Jordan form is

[ 2 1 0 0 ]
[ 0 2 0 0 ]
[ 0 0 2 0 ]
[ 0 0 0 2 ].
Now Ker(A - 2I4) = {[x, y, z, t] : 2x - y + t = 0}, and we must choose v2 such that (A - 2I4)²v2 = 0 but v2 ∉ Ker(A - 2I4). So we take v2 = [1, 0, 0, 0], and then v1 = (A - 2I4)v2 = [-2, -2, -2, 2]. We now wish to choose v3 and v4 such that {v1, v3, v4} is a basis for Ker(A - 2I4). So we take v3 = [0, 1, 0, 1] and v4 = [0, 0, 1, 0]. Then we have

P = [ -2 1 0 0 ]
    [ -2 0 1 0 ]
    [ -2 0 0 1 ]
    [  2 0 1 0 ].
To solve the system X′ = AX we first solve the system Y′ = JY, namely

y1′ = 2y1 + y2,  y2′ = 2y2,  y3′ = 2y3,  y4′ = 2y4.

The solution to this is clearly

y1 = c2 t e^{2t} + c1 e^{2t},  y2 = c2 e^{2t},  y3 = c3 e^{2t},  y4 = c4 e^{2t}.

Since now

X = PY = [ -2 1 0 0 ] [ c2 t e^{2t} + c1 e^{2t} ]
         [ -2 0 1 0 ] [ c2 e^{2t}               ]
         [ -2 0 0 1 ] [ c3 e^{2t}               ]
         [  2 0 1 0 ] [ c4 e^{2t}               ]

we deduce that

x1 = -2c2 t e^{2t} - 2c1 e^{2t} + c2 e^{2t}
x2 = -2c2 t e^{2t} - 2c1 e^{2t} + c3 e^{2t}
x3 = -2c2 t e^{2t} - 2c1 e^{2t} + c4 e^{2t}
x4 = 2c2 t e^{2t} + 2c1 e^{2t} + c3 e^{2t}.
1.47 (a) The system is X′ = AX where

A = [  5 4 ]
    [ -1 0 ].

The characteristic polynomial is (X - 1)(X - 4). The eigenvalues are therefore 1 and 4, and associated eigenvectors are E1 = [1, -1] and E4 = [4, -1]. The solution is aE1 eᵗ + bE4 e^{4t}, i.e.

x1 = a eᵗ + 4b e^{4t}
x2 = -a eᵗ - b e^{4t}.
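Only the second row of A survives legibly in this copy; the full matrix above is reconstructed from the eigen-data in the solution, so the following check should be read under that assumption.

```python
import numpy as np

# Reconstructed matrix for 1.47(a); verify its eigen-data.
A = np.array([[5.0, 4.0], [-1.0, 0.0]])
E1 = np.array([1.0, -1.0])
E4 = np.array([4.0, -1.0])
assert np.allclose(A @ E1, 1 * E1)   # eigenvalue 1
assert np.allclose(A @ E4, 4 * E4)   # eigenvalue 4
```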
(b) The system is X′ = AX where

A = [ 4 -1 -1 ]
    [ 1  2 -1 ]
    [ 1 -1  2 ].

The characteristic polynomial is (X - 3)²(X - 2). The eigenvalues are therefore 3 and 2. An eigenvector associated with 2 is [1, 1, 1], so take E2 = [1, 1, 1]. The eigenvalue 3 has geometric multiplicity 2, and [1, 1, 0], [1, 0, 1] are linearly independent vectors in the eigenspace of 3. The general solution vector is therefore

a[1, 1, 1]e^{2t} + b[1, 1, 0]e^{3t} + c[1, 0, 1]e^{3t},

so that

x1 = a e^{2t} + (b + c)e^{3t}
x2 = a e^{2t} + b e^{3t}
x3 = a e^{2t} + c e^{3t}.
(c) The system is X′ = AX where

A = [  5 -6 -6 ]
    [ -1  4  2 ]
    [  3 -6 -4 ].

The characteristic polynomial is (X - 1)(X - 2)². The eigenvalues are therefore 1 and 2. An eigenvector associated with 1 is E1 = [3, -1, 3], and independent eigenvectors associated with 2 are E2 = [2, 1, 0] and E2′ = [2, 0, 1]. The solution space is then spanned by

{[3eᵗ, -eᵗ, 3eᵗ], [2e^{2t}, e^{2t}, 0], [2e^{2t}, 0, e^{2t}]}.
(d) The system is X′ = AX where

A = [ 1 3 -2 ]
    [ 0 7 -4 ]
    [ 0 9 -5 ].

Now A has Jordan normal form

J = [ 1 1 0 ]
    [ 0 1 0 ]
    [ 0 0 1 ]

and an invertible matrix P such that P⁻¹AP = J is

P = [ 3 0 1 ]
    [ 6 1 0 ]
    [ 9 0 0 ].

First we solve Y′ = JY to obtain y1′ = y1 + y2, y2′ = y2, y3′ = y3, and hence

y3 = c eᵗ,  y2 = b eᵗ,  y1 = b t eᵗ + a eᵗ.

Thus X′ = AX has the general solution

X = PY = a[3, 6, 9]eᵗ + b([3, 6, 9]t eᵗ + [0, 1, 0]eᵗ) + c[1, 0, 0]eᵗ.
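The Jordan data used in part (d) can be confirmed directly:

```python
import numpy as np

# Check of 1.47(d): P^(-1) A P equals the Jordan form J found above.
A = np.array([[1, 3, -2], [0, 7, -4], [0, 9, -5]])
P = np.array([[3, 0, 1], [6, 1, 0], [9, 0, 0]])
J = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
assert np.allclose(np.linalg.inv(P) @ A @ P, J)
```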
1.48 The system is AY′ = Y where

A = [ 1 -3 2 ]
    [ 0 -5 4 ]
    [ 0 -9 7 ].

Now A is invertible with

A⁻¹ = [ 1 3 -2 ]
      [ 0 7 -4 ]
      [ 0 9 -5 ],

and the system Y′ = A⁻¹Y is that of question 1.47(d).
1.49 The system is X′ = AX where

A = [ 1 1 ]
    [ 2 3 ].

The eigenvalues are 2 + √3 and 2 - √3, with associated eigenvectors [-1, -1 - √3] and [-1, -1 + √3]. The general solution is

a[-1, -1 - √3]e^{(2+√3)t} + b[-1, -1 + √3]e^{(2-√3)t}.

Since x1(0) = 0 and x2(0) = 1 we have a + b = 0 and -(a + b) - √3(a - b) = 1, giving a = -1/(2√3) and b = 1/(2√3), so the solution is

(e^{2t}/(2√3)) ([1, 1 + √3]e^{√3 t} + [-1, -1 + √3]e^{-√3 t}).
1.50 Put x1 = x, x2 = x1′ = x′, x3 = x2′ = x″, so that x3′ = x‴ = 2x3 + 4x2 - 8x1. Then the system can be written in the form X′ = AX where

A = [  0 1 0 ]
    [  0 0 1 ]
    [ -8 4 2 ].

The characteristic polynomial is (X - 2)(X² - 4), so the eigenvalues are 2 (twice) and -2. The Jordan normal form is

J = [ 2 1  0 ]
    [ 0 2  0 ]
    [ 0 0 -2 ].

A Jordan basis {v1, v2, v3} satisfies

(A - 2I3)v1 = 0,  (A - 2I3)v2 = v1,  (A + 2I3)v3 = 0.

Take v1 = [1, 2, 4] and v3 = [1, -2, 4]. Then v2 = [0, 1, 4]. Hence an invertible matrix P such that P⁻¹AP = J is

P = [ 1 0  1 ]
    [ 2 1 -2 ]
    [ 4 4  4 ].
Now solve the system Y′ = JY to get y1′ = 2y1 + y2, y2′ = 2y2, y3′ = -2y3, so that y2 = c2 e^{2t}, y3 = c3 e^{-2t}, and hence y1 = c1 e^{2t} + c2 t e^{2t}. Now observe that

X = PY = [ 1 0  1 ] [ c1 e^{2t} + c2 t e^{2t} ]
         [ 2 1 -2 ] [ c2 e^{2t}               ]
         [ 4 4  4 ] [ c3 e^{-2t}              ],

which gives x = x1 = c1 e^{2t} + c2 t e^{2t} + c3 e^{-2t}. Now apply the initial conditions to obtain

x = (4t - 1)e^{2t} + e^{-2t}.
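The particular solution just found can be checked symbolically. (The initial conditions are stated in the problem, which is not reproduced here; the checks below only confirm properties that the displayed formula actually has.)

```python
from sympy import symbols, exp, diff, simplify

t = symbols('t')
x = (4*t - 1)*exp(2*t) + exp(-2*t)
# x satisfies the third-order equation x''' = 2x'' + 4x' - 8x
assert simplify(diff(x, t, 3) - 2*diff(x, t, 2) - 4*diff(x, t) + 8*x) == 0
# and the values it takes at t = 0:
assert x.subs(t, 0) == 0
assert diff(x, t).subs(t, 0) == 0
```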
Solutions to Chapter 2

2.1 (a) f ↦ f′ does not define a linear functional since f′ ∉ ℝ in general.
(b), (c), (d) These are linear functionals.
(e) ϑ : f ↦ ∫₀¹ f² is not a linear mapping; for example, we have 0 = ϑ[f + (-f)] whereas in general

ϑ(f) + ϑ(-f) = 2 ∫₀¹ f² ≠ 0.

2.2 That φ is linear follows from the fact that [...]

[...] so if A ≠ V we have that A^d is not a subset of V^d. What is true is: if V = A ⊕ B then V^d = A′ ⊕ B′ where A′, B′ are subspaces of V^d with A′ ≅ A^d and B′ ≅ B^d. To see this, let f ∈ A^d and define f̄ : V → F by f̄(v) = f(a) where v = a + b. Then ϑ : A^d → V^d given by ϑ(f) = f̄ is an injective linear transformation and ϑ(A^d) is a subspace of V^d that is isomorphic to A^d. Define similarly μ : B^d → V^d by μ(g) = ḡ where ḡ(v) = g(b). Then we have V^d = ϑ(A^d) ⊕ μ(B^d).
2.8 {f1, f2, f3} is linearly independent. For, if λ1 f1 + λ2 f2 + λ3 f3 = 0 then we have

0 = (λ1 f1 + λ2 f2 + λ3 f3)(1) = λ1 + λ2 + λ3;
0 = (λ1 f1 + λ2 f2 + λ3 f3)(X) = λ1 t1 + λ2 t2 + λ3 t3;
0 = (λ1 f1 + λ2 f2 + λ3 f3)(X²) = λ1 t1² + λ2 t2² + λ3 t3².

Since the coefficient matrix is the Vandermonde matrix and since the ti are given to be distinct, the only solution is λ1 = λ2 = λ3 = 0. Hence {f1, f2, f3} is linearly independent and so forms a basis for (ℝ3[X])^d.

If {p1, p2, p3} is a basis of V of which {f1, f2, f3} is the dual then we must have fi(pj) = δij, i.e. pj(ti) = δij. It is now easily seen that

p1(X) = (X - t2)(X - t3) / (t1 - t2)(t1 - t3),
p2(X) = (X - t1)(X - t3) / (t2 - t1)(t2 - t3),
p3(X) = (X - t1)(X - t2) / (t3 - t1)(t3 - t2).

2.9 (a) α̂(φ) = φ(α) = [3 4](1, 2)ᵗ = 11;
(b) β̂(φ) = φ(β) = [3 4](5, 6)ᵗ = 39;
(c) (2α + 3β)^(φ) = φ(2α + 3β) = [3 4](2(1, 2)ᵗ + 3(5, 6)ᵗ) = 139;
(d) (2α + 3β)^([a b]) = [a b](2(1, 2)ᵗ + 3(5, 6)ᵗ) = 17a + 22b.
2.10 Let V be of dimension n and S of dimension k. Take a basis {v1, ..., vk} of S and extend it to a basis {v1, ..., vk, v_{k+1}, ..., vn} of V. Let {φ1, ..., φn} be the basis of V^d dual to {v1, ..., vn}. Given φ ∈ V^d we have φ = a1 φ1 + ... + an φn. Since φ(vi) = ai we see that [...]

[...] Since A is normal we have A*A = AA*, and (A⁻¹)* = (A*)⁻¹. It follows that

A⁻¹(A⁻¹)* = A⁻¹(A*)⁻¹ = (A*A)⁻¹ = (AA*)⁻¹ = (A*)⁻¹A⁻¹ = (A⁻¹)*A⁻¹

and so A⁻¹ is normal.
If A* = a0 I + a1 A + ... + an Aⁿ then clearly AA* = A*A. Suppose conversely that A is normal. Then there is a unitary matrix P and a diagonal matrix D such that

A = P⁻¹DP,  A* = P⁻¹D*P.

Let λ1, ..., λr be the distinct elements of D. Consider the equations

λ̄1 = a0 + a1 λ1 + a2 λ1² + ... + a_{r-1} λ1^{r-1}
λ̄2 = a0 + a1 λ2 + a2 λ2² + ... + a_{r-1} λ2^{r-1}
...
λ̄r = a0 + a1 λr + a2 λr² + ... + a_{r-1} λr^{r-1}.

Since λ1, ..., λr are distinct, the (Vandermonde) coefficient matrix has non-zero determinant and so the system has a unique solution. We then have

D* = a0 I + a1 D + a2 D² + ... + a_{r-1} D^{r-1}

and consequently

A* = P⁻¹D*P = P⁻¹(a0 I + a1 D + ... + a_{r-1} D^{r-1})P = a0 I + a1 A + ... + a_{r-1} A^{r-1}.
2.31 Suppose that A is normal and let B = g(A). There is a unitary matrix P and a diagonal matrix D such that A = P⁻¹DP. Consequently we have

B = g(A) = P⁻¹g(D)P,  B* = P⁻¹g(D)*P,

and so

B*B = P⁻¹g(D)*g(D)P  and similarly  BB* = P⁻¹g(D)g(D)*P.

Since g(D) and g(D)* are diagonal matrices, it follows that B*B = BB* and so B is normal.
2.32 We have that

(A + Bi)*(A + Bi) = (A* - B*i)(A + Bi) = (A - Bi)(A + Bi) = A² - (BA - AB)i + B²,

and similarly (A + Bi)(A + Bi)* = A² - (AB - BA)i + B². It follows that A + Bi is normal if and only if AB = BA.
2.33 To get -A, multiply each row of A by -1. Then clearly det(-A) = (-1)ⁿ det A. If n is odd then

det A = det(Aᵗ) = det(-A) = (-1)ⁿ det A = -det A

and so det A = 0.

Since xᵗAx is a 1 × 1 matrix we have

xᵗAx = (xᵗAx)ᵗ = xᵗAᵗx = -xᵗAx

and so xᵗAx = 0.

Let Ax = λx and let stars denote transposes of complex conjugates. Then we have x*Ax = λx*x. Taking the star of each side and using A* = Aᵗ = -A, we obtain

λ̄x*x = (x*Ax)* = x*A*x = -x*Ax = -λx*x.

Since x*x ≠ 0 it follows that λ̄ = -λ. Thus λ = iμ where μ ∈ ℝ \ {0}.

If x = y + iz then from Ax = iμx we obtain A(y + iz) = iμ(y + iz) and so, equating real and imaginary parts, Ay = -μz, Az = μy. Now

μ yᵗy = yᵗAz = (yᵗAz)ᵗ = zᵗAᵗy = -zᵗAy = μ zᵗz

and so yᵗy = zᵗz. Also, μ yᵗz = -yᵗAy = 0 (by the first part of the question). If, therefore, Au = 0 then

μ uᵗy = uᵗAz = -(Au)ᵗz = 0

and similarly

μ uᵗz = -uᵗAy = (Au)ᵗy = 0.

For the last part, we have

det(A - λI) = det [ -λ  2 -2 ]
                  [ -2 -λ -1 ]  = -λ(λ² + 9),
                  [  2  1 -λ ]

so the eigenvalues are 0 and ±3i. A normalised eigenvector corresponding to 0 is

u = (1/3)[-1, 2, 2].

To find y, z as above, choose y perpendicular to u, say y = (1/√2)[0, -1, 1]. Then we have
-3z = Ay = (1/√2)[-4, -1, -1],

which gives

z = (1/(3√2))[4, 1, 1].

Relative to the basis {u, y, z} the representing matrix is now

[ 0  0 0 ]
[ 0  0 3 ]
[ 0 -3 0 ].

The required orthogonal matrix P is then

P = [ -1/3   0     4/(3√2) ]
    [  2/3 -1/√2   1/(3√2) ]
    [  2/3  1/√2   1/(3√2) ].
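The last part of this solution can be verified numerically. The skew-symmetric matrix A below is reconstructed from det(A - λI) = -λ(λ² + 9) and Au = 0 (it is not printed in full in this copy), so the check is made under that assumption.

```python
import numpy as np

A = np.array([[0.0, 2.0, -2.0],
              [-2.0, 0.0, -1.0],
              [2.0, 1.0, 0.0]])
r2 = np.sqrt(2.0)
P = np.array([[-1/3, 0.0, 4/(3*r2)],
              [2/3, -1/r2, 1/(3*r2)],
              [2/3, 1/r2, 1/(3*r2)]])
assert np.allclose(P.T @ P, np.eye(3))   # P is orthogonal
B = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 3.0],
              [0.0, -3.0, 0.0]])
assert np.allclose(P.T @ A @ P, B)       # the real block form found above
```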
2.34 Let Q be the matrix that represents the change to a new basis with respect to which q is in normal form. Then xᵗAx becomes yᵗBy where x = Qy and B = QᵗAQ. Now

q(x) = q̂(y) = y1² + ... + yp² - y_{p+1}² - ... - y_{p+m}²

where p - m is the signature of q and p + m is the rank of q. Notice that the rank of q is equal to the rank of the matrix B, which is in turn equal to the rank of the matrix A (since Q is non-singular), and

y = (y1, ..., yp, y_{p+1}, ..., y_{p+m}, y_{p+m+1}, ..., yn)ᵗ.

Now if q has rank equal to signature then clearly m = 0. Hence yᵗBy ≥ 0 for all y ∈ ℝⁿ since it is a sum of squares. Consequently xᵗAx ≥ 0 for all x ∈ ℝⁿ. Conversely, if xᵗAx ≥ 0 for all x ∈ ℝⁿ then yᵗBy ≥ 0 for all y ∈ ℝⁿ. Choose y = (0, ..., 0, yj, 0, ..., 0). Now the coefficient of yj² must be 0 or 1, but not -1. Therefore there are no terms of the form -yj², so m = 0 and q has rank equal to signature.
If the rank and signature are both equal to n then m = 0 and p = n. Hence

yᵗBy = y1² + ... + yn².

But a sum of squares is zero if and only if each term is zero, so xᵗAx ≥ 0 and is equal to 0 only when x = 0.

Conversely, if xᵗAx ≥ 0 for all x ∈ ℝⁿ then yᵗBy ≥ 0 for all y ∈ ℝⁿ, so m = 0, for otherwise we can choose y = (0, ..., 0, y_{p+1}, 0, ..., 0) with y_{p+1} = 1 to obtain yᵗBy < 0. Also, xᵗAx = 0 only for x = 0 gives yᵗBy = 0 only for y = 0. If p < n then, since we have m = 0, choose y = (0, ..., 0, 1) to get yᵗBy = 0 with y ≠ 0. Hence p = n as required.

2.35 The quadratic form q can be reduced to normal form either by completing squares or by row and column operations. We solve the problem by completing squares. We have

q(x) = x1² + 2x1x2 + x2² - 2x1x3 - x3² = (x1 + x2)² + x1² - (x1 + x3)²
and so the normal form of q is

[ 1 0  0 ]
[ 0 1  0 ]
[ 0 0 -1 ].

Since the rank of q is 3 and its signature is 1, q is neither positive definite nor positive semi-definite.

Coordinates (x1, x2, x3) with respect to the standard basis become (x1 + x2, x1, x1 + x3) in the new basis. Therefore the new basis elements can be taken as the columns of the inverse of

[ 1 1 0 ]
[ 1 0 0 ]
[ 1 0 1 ],

i.e. {(0, 1, 0), (1, -1, -1), (0, 0, 1)}.
2.36 Take

g((x1, x2), (y1, y2)) = (1/2)[f((x1, x2), (y1, y2)) + f((y1, y2), (x1, x2))] = x1y1 + (3/2)(x1y2 + x2y1) + x2y2

and

h((x1, x2), (y1, y2)) = (1/2)[f((x1, x2), (y1, y2)) - f((y1, y2), (x1, x2))] = -(1/2)x1y2 + (1/2)x2y1.

We have q(x1, x2) = f((x1, x2), (x1, x2)) = x1² + 3x1x2 + x2², and so the matrix of q relative to the standard basis is

[  1  3/2 ]
[ 3/2  1  ].

Completing squares gives (x1 + (3/2)x2)² - (5/4)x2². The signature is then 0 and the rank is 2. The form is neither positive definite nor positive semi-definite.
2.37 In matrix notation, the quadratic form is

xᵗAx = [x y z] [  4 -1  1 ] [ x ]
               [ -1  4 -1 ] [ y ]
               [  1 -1  4 ] [ z ].

It is readily seen that the eigenvalues of A are 3 (of algebraic multiplicity 2) and 6. An orthogonal matrix P such that PᵗAP is diagonal is

P = [ 1/√2   1/√6   1/√3 ]
    [ 1/√2  -1/√6  -1/√3 ]
    [  0    -2/√6   1/√3 ].

Changing coordinates by setting

[ u ]        [ x ]
[ v ] = Pᵗ [ y ]
[ w ]        [ z ]

transforms the original quadratic form to

3u² + 3v² + 6w²,

which is positive definite.
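The eigenvalue claim is immediate to check numerically:

```python
import numpy as np

# Check of 2.37: the eigenvalues of A are 3, 3, 6, so the form is
# positive definite.
A = np.array([[4.0, -1.0, 1.0],
              [-1.0, 4.0, -1.0],
              [1.0, -1.0, 4.0]])
lam = np.linalg.eigvalsh(A)   # ascending order
assert np.allclose(lam, [3.0, 3.0, 6.0])
```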
2.38 (1) We have

2y² - z² + xy + xz = 2(y + x/4)² - x²/8 + xz - z² = 2(y + x/4)² - (1/8)(x - 4z)² + z².

Thus the rank is 3 and the signature is 1.

(2) In 2xy - xz - yz put x = X + Y, y = X - Y, z = Z to obtain

2(X² - Y²) - (X + Y)Z - (X - Y)Z = 2X² - 2Y² - 2XZ = 2(X - Z/2)² - Z²/2 - 2Y².

Thus the rank is 3 and the signature is -1.

(3) In yz + xz + xy + xt + yt + zt put x = X + Y, y = X - Y, z = Z, t = T. Then we obtain

(X² - Y²) + (X - Y)Z + (X + Y)Z + (X + Y)T + (X - Y)T + ZT
= X² - Y² + 2XZ + 2XT + ZT
= (X + Z + T)² - Y² - Z² - T² - ZT
= (X + Z + T)² - (T + Z/2)² - (3/4)Z² - Y².

Thus the rank is 4 and the signature is -2.
2.39 (1) The matrix in question is

A = [  1 -1  2 ]
    [ -1  2 -3 ]
    [  2 -3  9 ].

Now

x² + 2y² + 9z² - 2xy + 4xz - 6yz = (x - y + 2z)² + y² + 5z² - 2yz
                                 = (x - y + 2z)² + (y - z)² + 4z²
                                 = ξ² + η² + ζ²

where ξ = x - y + 2z, η = y - z, ζ = 2z. Then

x = ξ + η - ζ/2,  y = η + ζ/2,  z = ζ/2,

so

P = [ 1 1 -1/2 ]
    [ 0 1  1/2 ]
    [ 0 0  1/2 ]
and PᵗAP = diag{1, 1, 1}.

(2) Here the matrix is

A = [ 0 2 0 ]
    [ 2 0 1 ]
    [ 0 1 0 ].

Now

4xy + 2yz = (x + y)² - (x - y)² + 2yz
          = X² - Y² + (X - Y)z       [X = x + y, Y = x - y]
          = (X + z/2)² - Y² - Yz - z²/4
          = (X + z/2)² - (Y + z/2)²
          = ξ² - η²,

where ξ = x + y + z/2, η = x - y + z/2 and ζ = z, say. Then

x = (1/2)(ξ + η - ζ),  y = (1/2)(ξ - η),  z = ζ,

so if we let

P = [ 1/2  1/2 -1/2 ]
    [ 1/2 -1/2   0  ]
    [  0    0    1  ]

then PᵗAP = diag{1, -1, 0}.

(3) Here we have
A = [  1  1  0 -1 ]
    [  1  4  3 -4 ]
    [  0  3  1 -7 ]
    [ -1 -4 -7 -4 ].

The quadratic form is

x² + 4y² + z² - 4t² + 2xy - 2xt + 6yz - 8yt - 14zt
= (x + y - t)² + 3y² + z² - 5t² + 6yz - 6yt - 14zt
= (x + y - t)² + 3(y + z - t)² - 2z² - 8t² - 8zt
= (x + y - t)² + 3(y + z - t)² - 2(z + 2t)²
= ξ² + η² - ζ²,

where ξ = x + y - t, η = √3(y + z - t), ζ = √2(z + 2t) and τ = t, say. Then

x = ξ - (1/√3)η + (1/√2)ζ - 2τ
y = (1/√3)η - (1/√2)ζ + 3τ
z = (1/√2)ζ - 2τ
t = τ,

and so

P = [ 1 -1/√3  1/√2 -2 ]
    [ 0  1/√3 -1/√2  3 ]
    [ 0   0    1/√2 -2 ]
    [ 0   0     0    1 ]

gives

[ x ]       [ ξ ]
[ y ] = P [ η ]
[ z ]       [ ζ ]
[ t ]       [ τ ]

and PᵗAP = diag{1, 1, -1, 0}.
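The congruence reductions in 2.39 are mechanical to verify; here are parts (1) and (2):

```python
import numpy as np

# 2.39(1): P^t A P = diag{1, 1, 1}.
A1 = np.array([[1.0, -1.0, 2.0],
               [-1.0, 2.0, -3.0],
               [2.0, -3.0, 9.0]])
P1 = np.array([[1.0, 1.0, -0.5],
               [0.0, 1.0, 0.5],
               [0.0, 0.0, 0.5]])
assert np.allclose(P1.T @ A1 @ P1, np.eye(3))

# 2.39(2): P^t A P = diag{1, -1, 0}.
A2 = np.array([[0.0, 2.0, 0.0],
               [2.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])
P2 = np.array([[0.5, 0.5, -0.5],
               [0.5, -0.5, 0.0],
               [0.0, 0.0, 1.0]])
assert np.allclose(P2.T @ A2 @ P2, np.diag([1.0, -1.0, 0.0]))
```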
2.40 Here we have

f(Σ_i xi ui, Σ_i yi ui) = Σ_i xi yi

and, for some real symmetric matrix B = [bij]_{n×n},

g(x, y) = Σ_i Σ_j bij xi yj = xᵗBy

where x = (x1, ..., xn)ᵗ and y = (y1, ..., yn)ᵗ. Now we know that there is an orthogonal matrix P such that PᵗBP = diag{λ1, ..., λn}, i.e. that there is an ordered basis {v1, ..., vn} that is orthonormal (relative to the inner product determined by f) and consists of eigenvectors of B. Let x = Σ_i ξi vi and y = Σ_i ηi vi. Then, relative to this orthonormal basis, we have

f(x, y) = Σ_i ξi ηi,  g(x, y) = Σ_i λi ξi ηi.

Consequently,

Q_f(x) = f(x, x) = Σ_i ξi²,  Q_g(x) = g(x, x) = Σ_i λi ξi².

Observe now that

g - λf degenerate ⟺ (∃x ≠ 0)(∀y) (g - λf)(x, y) = 0 ⟺ (∃x ≠ 0)(∀y) g(x, y) - λf(x, y) = 0.
Since, from the above expressions for f(x, y) and g(x, y) computed relative to the orthonormal basis {v1, ..., vn},

g(x, y) - λf(x, y) = Σ_i (λi - λ) ξi ηi,

it follows that g - λf is degenerate if and only if λ = λi for some i. Suppose now that A, B are the matrices of f, g respectively with respect to some ordered basis of ℝⁿ. If x = (x1, ..., xn)ᵗ and y = (y1, ..., yn)ᵗ are the coordinate vectors of x, y relative to this basis then we have

g(x, y) - λf(x, y) = xᵗ(B - λA)y.

Thus we see that g - λf is degenerate if and only if λ is a root of the equation det(B - λA) = 0.

For the last part, observe that the matrices of 2xy + 2yz and x² - y² + 2xz relative to the canonical basis of ℝ³ are respectively

A = [ 0 1 0 ]        B = [ 1  0 1 ]
    [ 1 0 1 ]  and       [ 0 -1 0 ]
    [ 0 1 0 ]            [ 1  0 0 ].

Since

det(B - λA) = det [  1 -λ  1 ]
                  [ -λ -1 -λ ]  = λ² + 1,
                  [  1 -λ  0 ]

the equation det(B - λA) = 0 has no real solutions. But, as observed above, if a simultaneous reduction to sums of squares were possible, such solutions would be the coefficients in one of these sums of squares. Since neither of the given forms is positive definite, the conclusion follows.
2.45 The exponent is -xᵗAx where

A = [  1  1/2 1/2 ]
    [ 1/2  1  1/2 ]
    [ 1/2 1/2  1  ].

The quadratic form xᵗAx is positive definite since

x² + y² + z² + xy + xz + yz = (x + y/2 + z/2)² + (3/4)y² + (3/4)z² + (1/2)yz
                            = (x + y/2 + z/2)² + (3/4)(y + z/3)² + (2/3)z²,

which is greater than 0 for all x ≠ 0. So the integral converges to π^{3/2}/√(det A), i.e. to √2 π^{3/2}.
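The determinant and the resulting value are quick to confirm:

```python
import numpy as np

# Check of 2.45: det A = 1/2, so the Gaussian integral over R^3
# equals pi^(3/2) / sqrt(det A) = sqrt(2) * pi^(3/2).
A = np.array([[1.0, 0.5, 0.5],
              [0.5, 1.0, 0.5],
              [0.5, 0.5, 1.0]])
assert np.isclose(np.linalg.det(A), 0.5)
value = np.pi**1.5 / np.sqrt(np.linalg.det(A))
assert np.isclose(value, np.sqrt(2.0) * np.pi**1.5)
```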
Test paper 1
Time allowed : 3 hours (Allocate 20 marks for each question)
1 Let V be a finite-dimensional vector space. Prove that if f ∈ L(V, V) then
(a) dim V = dim Im f + dim Ker f;
(b) the properties (i) f is surjective, (ii) f is injective, are equivalent;
(c) V = Im f ⊕ Ker f if and only if Im f = Im f²;
(d) Im f = Ker f if and only if the following properties are satisfied:
(i) f² = 0, (ii) dim V = n is even, (iii) dim Im f = n/2.
2 Suppose that t ∈ L(ℚ⁵, ℚ⁵) is represented with respect to the basis

{(1, 0, 0, 0, 0), (1, 1, 0, 0, 0), (1, 1, 1, 0, 0), (1, 1, 1, 1, 0), (1, 1, 1, 1, 1)}

by the matrix

[ 1  8  6  4  1 ]
[ 0  1  0  0  0 ]
[ 0  1  2  1  0 ]
[ 0 -1 -1  0  1 ]
[ 0 -5 -4 -3 -2 ].

Find a basis of ℚ⁵ with respect to which the matrix of t is in Jordan normal form.
3 Let φ1, ..., φn ∈ (ℝⁿ)^d. Prove that the solution set C of the linear inequalities

φ1(x) ≥ 0, φ2(x) ≥ 0, ..., φn(x) ≥ 0

satisfies
(a) α, β ∈ C ⟹ α + β ∈ C;
(b) α ∈ C, t ∈ ℝ, t ≥ 0 ⟹ tα ∈ C.
Show that if φ1, ..., φn form a basis of (ℝⁿ)^d then

C = {t1 a1 + ... + tn an : ti ∈ ℝ, ti ≥ 0}

where {a1, ..., an} is the basis of ℝⁿ dual to the basis {φ1, ..., φn}. Hence write down the solution of the system of inequalities

φ1(x) ≥ 0, φ2(x) ≥ 0, φ3(x) ≥ 0, φ4(x) ≥ 0

where φ1 = [4, 5, -2, 11], φ2 = [3, 4, -2, 6], φ3 = [2, 3, -1, 4] and φ4 = [0, 0, 0, 1].
4 Let A be a real orthogonal matrix. If (A - λI)²x = 0 and y = (A - λI)x, show, by considering ȳᵗy, that y = 0. Hence prove that an orthogonal matrix satisfies an equation without repeated roots. Prove that a real orthogonal matrix with all its eigenvalues real is necessarily symmetric.
5 Prove that if a real quadratic form in n variables is reduced by a real non-singular linear transformation to a form having p positive, q negative, and n - p - q zero coefficients, then p and q do not depend on the choice of transformation. For the form

a1 x1x2 + a2 x2x3 + ... + a_{n-1} x_{n-1}xn

in which each ai ≠ 0, show that p = q; and for the form

a1 x1x2 + a2 x2x3 + ... + a_{n-1} x_{n-1}xn + an xnx1

in which each ai ≠ 0, show that

|p - q| = 0 if n is even,  |p - q| = 1 if n is odd.
Test paper 2

Time allowed : 3 hours (Allocate 20 marks for each question)

1 Let V be a finite-dimensional vector space and let e ∈ L(V, V) be a projection. Prove that Ker e = Im(id_V - e). If t ∈ L(V, V), show that Im e is t-invariant if and only if e ∘ t ∘ e = t ∘ e; and that Ker e is t-invariant if and only if e ∘ t ∘ e = e ∘ t. Deduce that Im e and Ker e are t-invariant if and only if e and t commute.
2 If U = [u_rs] ∈ Mat_{n×n}(ℂ) is given by

u_rs = 1 if s = r + 1,  u_rs = 0 otherwise,

and J = [j_rs] ∈ Mat_{n×n}(ℂ) is given by

j_rs = 1 if r + s = n + 1,  j_rs = 0 otherwise,

show that Uᵗ = JUJ. Deduce that if A ∈ Mat_{n×n}(ℂ) then there is an invertible matrix P such that P⁻¹AP = Aᵗ. Find such a matrix P when A is the matrix

[  0  4  4 ]
[  2  2  1 ]
[ -3 -6 -5 ].

3
Let V be a vector space of dimension n over a field F. Suppose that W is a subspace of V with dim W = m. Show that
(a) dim W^⊥ = n - m;
(b) (W^⊥)^⊥ = W.
If f, g ∈ V^d are such that there is no λ ∈ F \ {0} with f = λg, show that Ker f ∩ Ker g is of dimension n - 2.
f2(x) = 0
and deduce that the minimum polynomial of f has no repeated roots. If e : V - V is a projection, show that the following statements are equivalent
(a) e is normal; (b) e is self-adjoint; (c) e is the orthogonal projection of V onto Im e. Show finally that a linear transformation h : V V is normal if and only if there are complex scalars Al, ... , Ak and self-adjoint projections
el,...,ek on V such that (1) f = A1e1 + (2) ides = e1 + b
+ Akek; + ek;
(3) (i34 j) eioej=0. (a) Show that the quadratic form xtAx is positive definite if and only if there exists a real non-singular matrix P such that A = PP. Show also that if Ei j_1 bijxixj > 0 for all non-zero vectors x then bijpixipjxj > 0 for all x. Hence show that if xtAx and xtBx are both positive definite then so is n
E at,bijxtx,'
i,j=1
(b) For what values of k is the quadratic form n
xr+k>xixj i<j
r=1
positive definite? 99
Test paper 3

Time allowed : 3 hours (Allocate 20 marks for each question)

1 If U, W are subspaces of a finite-dimensional vector space V, prove that

dim U + dim W = dim(U + W) + dim(U ∩ W).

Suppose now that V = U ⊕ W. If S is any subspace of V, prove that

2 dim S - dim V