Network Theory and Applications Volume 3

Matrices in Combinatorics and Graph Theory

by

Bolian Liu
Department of Mathematics, South China Normal University, Guangzhou, P. R. China

and

Hong-Jian Lai
Department of Mathematics, West Virginia University, Morgantown, West Virginia, U.S.A.
KLUWER ACADEMIC PUBLISHERS DORDRECHT/BOSTON/LONDON
A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN 0-7923-6469-4
Published by Kluwer Academic Publishers, P.O. Box 17, 3300 AA Dordrecht, The Netherlands. Sold and distributed in North, Central and South America by Kluwer Academic Publishers, 101 Philip Drive, Norwell, MA 02061, U.S.A. In all other countries, sold and distributed by Kluwer Academic Publishers, P.O. Box 322, 3300 AH Dordrecht, The Netherlands.

Printed on acid-free paper

All Rights Reserved
© 2000 Kluwer Academic Publishers
No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without written permission from the copyright owner.
Printed in the Netherlands.
Contents

Foreword  ix
Preface  xi

1 Matrices and Graphs  1
1.1 The Basics  1
1.2 The Spectrum of a Graph  5
1.3 Estimating the Eigenvalues of a Graph  12
1.4 Line Graphs and Total Graphs  17
1.5 Cospectral Graphs  20
1.6 Spectral Radius  24
1.7 Exercises  38
1.8 Hints for Exercises  41

2 Combinatorial Properties of Matrices  47
2.1 Irreducible and Fully Indecomposable Matrices  48
2.2 Standard Forms  50
2.3 Nearly Reducible Matrices  54
2.4 Nearly Decomposable Matrices  58
2.5 Permanent  61
2.6 The Class U(r, s)  71
2.7 Stochastic Matrices and Doubly Stochastic Matrices  77
2.8 Birkhoff Type Theorems  81
2.9 Exercises  90
2.10 Hints for Exercises  92

3 Powers of Nonnegative Matrices  97
3.1 The Frobenius Diophantine Problem  97
3.2 The Period and The Index of a Boolean Matrix  101
3.3 The Primitive Exponent  107
3.4 The Index of Convergence of Irreducible Nonprimitive Matrices  114
3.5 The Index of Convergence of Reducible Nonprimitive Matrices  120
3.6 Index of Density  125
3.7 Generalized Exponents of Primitive Matrices  129
3.8 Fully Indecomposable Exponents and Hall Exponents  136
3.9 Primitive Exponent and Other Parameters  146
3.10 Exercises  150
3.11 Hints for Exercises  153

4 Matrices in Combinatorial Problems  161
4.1 Matrix Solutions for Difference Equations  161
4.2 Matrices in Some Combinatorial Configurations  166
4.3 Decomposition of Graphs  171
4.4 Matrix-Tree Theorems  176
4.5 Shannon Capacity  180
4.6 Strongly Regular Graphs  190
4.7 Eulerian Problems  196
4.8 The Chromatic Number  204
4.9 Exercises  210
4.10 Hints for Exercises  212

5 Combinatorial Analysis in Matrices  217
5.1 Combinatorial Representation of Matrices and Determinants  217
5.2 Combinatorial Proofs in Linear Algebra  221
5.3 Generalized Inverse of a Boolean Matrix  224
5.4 Maximum Determinant of a (0,1) Matrix  230
5.5 Rearrangement of (0,1) Matrices  236
5.6 Perfect Elimination Scheme  245
5.7 Completions of Partial Hermitian Matrices  249
5.8 Estimation of the Eigenvalues of a Matrix  253
5.9 M-matrices  261
5.10 Exercises  271
5.11 Hints for Exercises  272

6 Appendix  275
6.1 Linear Algebra and Matrices  275
6.2 The Term Rank and the Line Rank of a Matrix  277
6.3 Graph Theory  279

Bibliography
Index
FOREWORD

Combinatorics and Matrix Theory have a symbiotic, or mutually beneficial, relationship. This relationship is discussed in my paper "The symbiotic relationship of combinatorics and matrix theory" [1], where I attempted to justify this description. One could say that a more detailed justification was given in my book with H. J. Ryser entitled Combinatorial Matrix Theory [2], where an attempt was made to give a broad picture of the use of combinatorial ideas in matrix theory and the use of matrix theory in proving theorems which, at least on the surface, are combinatorial in nature. In the book by Liu and Lai, this picture is enlarged and expanded to include recent developments and contributions of Chinese mathematicians, many of which have not been readily available to those of us who are unfamiliar with Chinese journals. Necessarily, there is some overlap with the book Combinatorial Matrix Theory. Some of the additional topics include: spectra of graphs, eulerian graph problems, Shannon capacity, generalized inverses of Boolean matrices, matrix rearrangements, and matrix completions. A topic to which many Chinese mathematicians have made substantial contributions is the combinatorial analysis of powers of nonnegative matrices, and a large chapter is devoted to this topic. This book should be a valuable resource for mathematicians working in the area of combinatorial matrix theory.

Richard A. Brualdi
University of Wisconsin - Madison

[1] Linear Algebra and its Applications, vols. 162-164, 1992, 65-105.
[2] Cambridge University Press, 1991.
PREFACE

Over the last two decades or so, work in combinatorics and graph theory with matrix and linear algebra techniques, and applications of graph theory and combinatorics to linear algebra, have developed rapidly. In 1973, H. J. Ryser first introduced the concept of "combinatorial matrix theory". In 1991, Brualdi and Ryser published "Combinatorial Matrix Theory", the first expository monograph on this subject. By now, numerous exciting results and problems, and interesting new techniques and applications, have emerged and are still developing. Quite a few remarkable achievements in this area have been made by Chinese researchers, adding their contributions to the enrichment and development of this new theory.

The purpose of this book is to present connections among combinatorics, graph theory and matrix theory, with an emphasis on an exposition of the contributions made by Chinese scholars. Prerequisites for an understanding of the text have been kept to a minimum. It is essential, however, to be familiar with elementary set notation and to have basic knowledge of linear algebra, graph theory and combinatorics. For referential convenience, three sections on the basics of these areas are included in the Appendix, supplementing the brief introductions in the text. The exercises which appear at the ends of chapters often supplement, extend or motivate the material of the text. For this reason, outlines of solutions are invariably included.

We wish to make special acknowledgment to Professor Herbert John Ryser, who can be rightfully considered the father of Combinatorial Matrix Theory, and to Professor Richard Brualdi, who has made enormous contributions to the development of the theory. There are many people to thank for their contributions to the organization and content of this book and an earlier version of it. In particular, we would like to express our sincere thanks to Professors Lizhi Hsu, Dingzhu Du, Ji Zhong, Qiao Li, Jongsheng Li, Jiayu Shao, Mingyiao Hsu, Fuji Zhang, Keming Zhang, Jingzhong Mao, and Maocheng Zhang, for their wonderful comments and suggestions. We would also like to thank Bo Zhou and Hoifung Poon for proofreading the manuscript.

Bolian Liu would like to give his special appreciation to his wife, Mo Hui, and his favorite daughters, Christy and Jolene. Hong-Jian Lai would like to give his special thanks to his wife, Ying Wu, and to his parents, Jie-Ying Li and Han-Si Lai. Without our families' forbearance and support we would never have been able to complete this project.
Chapter 1

Matrices and Graphs

1.1 The Basics

Definition 1.1.1 For a digraph D with vertices V(D) = {v_1, v_2, ..., v_n}, let m(v_i, v_j) denote the number of arcs in D oriented from v_i to v_j. The adjacency matrix of D is an n by n matrix A(D) = (a_{ij}), given by a_{ij} = m(v_i, v_j).

We can view a graph G as a digraph by replacing each edge of G by a pair of arcs with opposite directions. Denote the resulting digraph by D_G. With this viewpoint, we define the adjacency matrix of G by A(G) = A(D_G), the adjacency matrix of the digraph D_G. Note that A(G) is a symmetric matrix.

Note that the adjacency matrix of a simple digraph is a (0,1)-matrix, and so there is a one-to-one correspondence between the set of simple digraphs D(V, E) with V = {v_1, ..., v_n} and B_n, the set of all (0,1) square matrices of order n: for each (0,1) square matrix A = (a_{ij})_{n×n}, define an arc set E on the vertex set V by (v_i, v_j) ∈ E if and only if a_{ij} = 1. Then we obtain a digraph D(A), called the associated digraph of A. Proposition 1.1.1 follows from the definitions immediately.

Proposition 1.1.1 For any square (0,1) matrix A, A(D(A)) = A.

Definition 1.1.2 A matrix A ∈ B_n is a permutation matrix if each row and each column have exactly one 1-entry. Two matrices A and B are permutation equivalent if there exist permutation matrices P and Q such that A = PBQ; A and B are permutation similar if for some permutation matrix P, A = PBP^{-1}. Let A = (a_{ij}) and B = (b_{ij}) be two matrices in M_{m,n}. Write A ≤ B if for each i and j, a_{ij} ≤ b_{ij}.
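As a small illustration of Definition 1.1.1 and Proposition 1.1.1, the sketch below (plain Python; the 3-vertex digraph is a made-up example, not one from the text) builds A(D) from an arc list and then recovers the arc set of the associated digraph D(A).

```python
# Build the adjacency matrix A(D) of a small digraph from its arc list
# (Definition 1.1.1: a_ij = number of arcs from v_i to v_j), then recover
# the arcs of the associated digraph D(A), as in Proposition 1.1.1.
# The 3-vertex digraph used here is a hypothetical example.

def adjacency_matrix(n, arcs):
    """a_ij counts the arcs oriented from vertex i to vertex j."""
    A = [[0] * n for _ in range(n)]
    for i, j in arcs:
        A[i][j] += 1
    return A

def associated_digraph(A):
    """Arc set of D(A) for a (0,1) square matrix A."""
    n = len(A)
    return [(i, j) for i in range(n) for j in range(n) if A[i][j] == 1]

arcs = [(0, 1), (1, 2), (2, 0), (0, 2)]
A = adjacency_matrix(3, arcs)
print(A)                                   # [[0, 1, 1], [0, 0, 1], [1, 0, 0]]
print(sorted(associated_digraph(A)) == sorted(arcs))   # True: A(D(A)) = A
```

For a simple digraph the two maps are mutually inverse, which is exactly the one-to-one correspondence with B_n described above.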
Proposition 1.1.2 can be easily verified.

Proposition 1.1.2 Let A be a square (0,1) matrix, and let D = D(A) be the associated digraph of A. Each of the following holds.
(i) Each row sum (column sum, respectively) of A is a constant r if and only if d^+(v) = r (d^-(v) = r, respectively) for every v ∈ V(D).
(ii) Let A, B ∈ B_n. Then A ≤ B if and only if D(A) is a spanning subgraph of D(B).
(iii) There is a permutation matrix P such that PAP^T = B if and only if the vertices of D(A) can be relabeled to obtain D(B).
(iv) A is symmetric with tr(A) = 0 if and only if D(A) = D_G for some simple graph G; or equivalently, if and only if A is the adjacency matrix of a simple graph G.
(v) There is a permutation matrix P such that

    PAP^T = ( A_1   0  )
            (  0   A_2 ),

where A_1 and A_2 are square (0,1) matrices, if and only if D(A) is not connected.
(vi) There is a permutation matrix P such that

    PAP^T = (  0    B )
            ( B^T   0 ),

where B is a (0,1) matrix, if and only if D(A) is a bipartite graph. (In this case, the matrix B is called the reduced adjacency matrix of D(A), and D(A) is called the reduced associated bipartite graph of B.)
(vii) The (i,j)th entry of A^l, the lth power of A, is positive if and only if there is a directed (v_i, v_j)-walk in D(A) of length l.
(viii) A_1 is a principal square submatrix of A if and only if A_1 = A(H) is the adjacency matrix of the subgraph H induced by the vertices corresponding to the columns of A_1.

Definition 1.1.3 Let G = (V, E) be a graph with vertex set V = {v_1, v_2, ..., v_n} and edge set E = {e_1, e_2, ..., e_q}, with loops and parallel edges allowed. The incidence matrix of G is B(G) = (b_{ij})_{n×q}, whose entries are defined by

    b_{ij} = { 1, if v_i is incident with e_j
             { 0, otherwise

Let diag(r_1, r_2, ..., r_n) denote the diagonal matrix with diagonal entries r_1, r_2, ..., r_n. For a digraph D = (V, E) with vertex set V = {v_1, v_2, ..., v_n} and arc set E = {e_1, e_2, ..., e_q}, with loops and parallel edges allowed, the oriented incidence matrix of the digraph D is B(D) = (b_{ij})_{n×q}, whose entries are defined by

    b_{ij} = {  1, if e_j is an out-arc of v_i
             { -1, if e_j is an in-arc of v_i
             {  0, otherwise
Given a digraph D with oriented incidence matrix B, the matrix BB^T is called the Laplace matrix (or admittance matrix) of D. As shown below, the Laplace matrix is independent of the orientation of the digraph D. Therefore, we can also talk about the Laplace matrix of a graph G, meaning the Laplace matrix of any orientation D of the graph G. Theorems 1.1.1 and 1.1.2 below follow from these definitions and so are left as exercises.
Theorem 1.1.1 Let G be a loopless graph with V(G) = {v_1, v_2, ..., v_n}, and let d_i be the degree of v_i in G, for each i with 1 ≤ i ≤ n. Let D be an orientation of G, let A be the adjacency matrix of G, and let C = diag(d_1, d_2, ..., d_n). Each of the following holds.
(i) If B is the incidence matrix of G, then BB^T = C + A.
(ii) If B is the oriented incidence matrix of D, then BB^T = C - A.
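Theorem 1.1.1 is easy to check by hand on small graphs. The sketch below (plain Python; the 4-cycle C_4 and its orientation are assumed examples, not taken from the text) forms both incidence matrices and verifies BB^T = C - A and BB^T = C + A.

```python
# Verify Theorem 1.1.1 on the 4-cycle C_4: for the oriented incidence
# matrix B of an orientation D of G, BB^T = C - A, and for the unoriented
# incidence matrix M, MM^T = C + A.  Plain Python, no libraries.

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # the 4-cycle C_4
n, q = 4, len(edges)

A = [[0] * n for _ in range(n)]              # adjacency matrix
for u, v in edges:
    A[u][v] = A[v][u] = 1
C = [[(2 if i == j else 0) for j in range(n)] for i in range(n)]  # degree matrix

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Oriented incidence matrix of the orientation u -> v for each edge (u, v).
B = [[0] * q for _ in range(n)]
M = [[0] * q for _ in range(n)]              # unoriented incidence matrix
for j, (u, v) in enumerate(edges):
    B[u][j], B[v][j] = 1, -1
    M[u][j] = M[v][j] = 1

BBt = matmul(B, [list(r) for r in zip(*B)])
MMt = matmul(M, [list(r) for r in zip(*M)])
CmA = [[C[i][j] - A[i][j] for j in range(n)] for i in range(n)]
CpA = [[C[i][j] + A[i][j] for j in range(n)] for i in range(n)]
print(BBt == CmA)   # True: BB^T = C - A
print(MMt == CpA)   # True: MM^T = C + A
```

Repeating the computation with a different orientation of C_4 leaves BB^T unchanged, which is the orientation-independence of the Laplace matrix noted above.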
Theorem 1.1.2 Let G be a graph with t components and with n vertices. Let D be an orientation of G. Each of the following holds.
(i) The rank of the oriented incidence matrix B is n - t.
(ii) Let B_1 be obtained from B by removing t rows, each corresponding to a vertex from one of the t components of G. Then the rank of B_1 is n - t.

Definition 1.1.4 An integral matrix A = (a_{ij}) is totally unimodular if the determinant of every square submatrix is in {0, 1, -1}.

Theorem 1.1.3 (Hoffman and Kruskal [127]) Let A be an m × n matrix such that the rows of A can be partitioned into two sets R_1 and R_2 and such that each of the following holds:
(i) each entry of A is in {0, 1, -1};
(ii) each column of A has at most two nonzero entries;
(iii) if some column of A has two nonzero entries with the same sign, then one of the rows corresponding to these two nonzero entries must be in R_1 and the other in R_2; and
(iv) if some column of A has two nonzero entries with different signs, then either both rows corresponding to these two nonzero entries are in R_1 or both are in R_2.
Then A is totally unimodular.

Proof Note that if a matrix A satisfies Theorem 1.1.3 (i)-(iv), then so does any submatrix of A. Therefore, we may assume that A is an n × n matrix and prove that det(A), the determinant of A, is in {0, 1, -1}. We argue by induction on n. The theorem follows trivially from Theorem 1.1.3(i) when n = 1. Assume that n ≥ 2 and that Theorem 1.1.3 holds for square matrices of smaller size. If each column of A has exactly two nonzero entries, then by (iii) and (iv), the sum of the rows in R_1 is equal to that of the rows in R_2, and consequently det(A) = 0. If A has an all-zero column, then det(A) = 0. Therefore by (ii), we may assume that there is a column which has exactly one nonzero entry. Expand det(A) along this column; by induction, we conclude that det(A) ∈ {0, 1, -1}. □
Corollary 1.1.3A (Poincaré [213]) The oriented incidence matrix of a graph is totally unimodular.

Corollary 1.1.3B Let G be a loopless graph (multiple edges allowed) and let B be the incidence matrix of G. Then G is bipartite if and only if B is totally unimodular.

Definition 1.1.5 For a graph G, the characteristic polynomial of G is the characteristic polynomial of the matrix A(G), denoted by

    χ_G(λ) = det(λI - A(G)).

The spectrum of G is the set of numbers which are eigenvalues of the matrix A(G), together with their multiplicities. If the distinct eigenvalues of A(G) are λ_1 > λ_2 > ··· > λ_s, and their multiplicities are m_1, m_2, ..., m_s, respectively, then we write

    spec(G) = ( λ_1  λ_2  ···  λ_s )
              ( m_1  m_2  ···  m_s ).

Since A(G) is symmetric, all the eigenvalues λ_i are real. When no confusion arises, we write det G or det(G) for det(A(G)).

Theorem 1.1.4 Let S(G, H) denote the number of subgraphs of G isomorphic to H. If

    χ_G(λ) = Σ_{i=0}^{n} C_i λ^{n-i},

then C_0 = 1, and

    C_i = (-1)^i Σ_H det(H) S(G, H),    i = 1, 2, ..., n,

where the summation is taken over all non-isomorphic subgraphs H of G on i vertices.

Sketch of Proof Since χ_G(λ) = det(λI - A) = λ^n + ···, we have C_0 = 1, and for each i = 1, 2, ..., n, C_i = (-1)^i Σ det(A_i), where the summation is over all the ith order principal submatrices A_i of A. Note that A_i is the adjacency matrix of the subgraph H of G induced by the vertices corresponding to the rows of A_i (Proposition 1.1.2(viii)), and that the number of such subgraphs of G is S(G, H). □
Further discussion on the coefficients of χ_G(λ) can be found in [226].
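Two consequences of Theorem 1.1.4 for a simple graph are that C_2 = -|E| (the only 2-vertex subgraph with nonzero determinant is K_2, with det = -1) and C_3 = -2·(number of triangles) (the triangle has det = 2). The sketch below checks this on K_3 using numpy, an assumed dependency, not part of the text.

```python
# Illustrate Theorem 1.1.4 numerically on K_3: the characteristic
# polynomial det(lambda*I - A) has C_0 = 1, C_2 = -|E| and
# C_3 = -2 * (number of triangles).
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])           # adjacency matrix of K_3
coeffs = np.poly(A)                 # coefficients of det(lambda*I - A)
coeffs = np.round(coeffs).astype(int)
print(list(coeffs))                 # [1, 0, -3, -2], i.e. x^3 - 3x - 2

edges, triangles = 3, 1
assert coeffs[0] == 1               # C_0 = 1
assert coeffs[2] == -edges          # C_2 = -|E|
assert coeffs[3] == -2 * triangles  # C_3 = -2 * number of triangles
```

Indeed K_3 has eigenvalues 2, -1, -1, so χ(λ) = (λ - 2)(λ + 1)² = λ³ - 3λ - 2, matching the computed coefficients.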
1.2 The Spectrum of a Graph

In this section, we present certain techniques using the matrix A(G) and spec(G) to study the properties of a graph G. Theorem 1.2.1 collects some facts from linear algebra and from Definition 1.1.5.

Theorem 1.2.1 Let G be a graph with n vertices, and let A = A(G) be the adjacency matrix of G. Each of the following holds:
(i) If G_1, G_2, ..., G_k are the components of G, then χ_G(λ) = Π_{i=1}^{k} χ_{G_i}(λ).
(ii) The spectrum of G is the disjoint union of the spectra of the G_i, where i = 1, 2, ..., k.
(iii) If f(x) is a polynomial, and if λ is an eigenvalue of A, then f(λ) is an eigenvalue of f(A).

Theorem 1.2.2 Let G be a connected graph with n vertices and with diameter d. If G has s distinct eigenvalues, then n ≥ s ≥ d + 1.

Proof Since n is the degree of χ_G(λ), n ≥ s. Let A = A(G) and suppose that s ≤ d. Then G has two vertices v_i and v_j such that the distance between v_i and v_j is s. Let m_A(λ) be the minimal polynomial of A. Then m_A(A) = 0. Since A is symmetric, the degree of m_A(λ) is exactly s and m_A(λ) = λ^s + ···. By Proposition 1.1.2(vii), the (i,j)-entry of A^s is positive, and the (i,j)-entry of A^l is zero for each l with 1 ≤ l ≤ s - 1. It follows that it is impossible to have m_A(A) = 0, a contradiction. □
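A quick numerical check of Theorem 1.2.2 on the path P_5 (an assumed example; numpy is an assumed dependency, not part of the text):

```python
# Theorem 1.2.2: a connected graph with diameter d has at least d + 1
# distinct eigenvalues.  The path P_5 attains the bound with equality:
# its 5 eigenvalues (2cos(k*pi/6), k = 1..5) are all distinct.
import numpy as np

n = 5
A = np.zeros((n, n), dtype=int)
for i in range(n - 1):                  # path v_0 - v_1 - ... - v_4
    A[i, i + 1] = A[i + 1, i] = 1

eig = np.linalg.eigvalsh(A)
distinct = len({round(x, 8) for x in eig})
diameter = n - 1                        # diameter of the path P_n
print(distinct)                         # 5
print(n >= distinct >= diameter + 1)    # True
```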
For integers r, g > 1, an r-regular graph with girth g is called an (r, g)-graph. Sachs [225] showed that for any r, g > 1, an (r, g)-graph exists. Let f(r, g) denote the smallest order of an (r, g)-graph. An (r, g)-graph G with |V(G)| = f(r, g) is called a Moore graph.
Theorem 1.2.3 (Erdős and Sachs, [84]) r² + 1 ≤ f(r, 5) ≤ 4(r - 1)(r² - r + 1).

Theorem 1.2.4 When g = 5, a Moore (r, 5)-graph exists only if r = 2, 3, 7, or 57.

Proof Let G be a Moore (r, 5)-graph. Then |V(G)| = r² + 1. Let V(G) = {v_1, v_2, ..., v_{r²+1}}, let A = (a_{ij}) denote the adjacency matrix of G, and write A² = (a_{ij}^{(2)}). Since g = 5, G has no 3-cycles or 4-cycles, and so when a_{ij} = 1, a_{ij}^{(2)} = 0; and when a_{ij} = 0 and i ≠ j, a_{ij}^{(2)} = 1. It follows that

    A² + A = J + (r - 1)I.

Add to the first row of A² + A - λI all the other rows to get det(A² + A - λI) = (r² + r - λ)(r - 1 - λ)^{r²}. Therefore,

    spec(A² + A) = ( r² + r   r - 1 )
                   (   1       r²  ).

Let λ_i, 1 ≤ i ≤ r² + 1, be the eigenvalues of A. Then by Theorem 1.2.1(iii), λ_i² + λ_i is an eigenvalue of A² + A, and so we may assume that λ_1² + λ_1 = r² + r, and λ_i² + λ_i = r - 1, for each i = 2, 3, ..., r² + 1.

Hence we may assume λ_1 = r, and for some k with 2 ≤ k ≤ r² + 1,

    λ_2 = λ_3 = ··· = λ_{k+1} = (-1 + √(4r - 3))/2  and  λ_{k+2} = ··· = λ_{r²+1} = (-1 - √(4r - 3))/2.

Since the sum of all eigenvalues of A is zero, solving

    r + k · (-1 + √(4r - 3))/2 + (r² - k) · (-1 - √(4r - 3))/2 = 0,

we get

    2k = r² + (r² - 2r)/√(4r - 3).

Since k ≥ 0 is an integer and since r ≥ 2, either r = 2 or, for some positive integer m, 4r - 3 = (2m + 1)² is the square of an odd integer. Thus if r > 2, then r = m² + m + 1, and substituting this into the expression for 2k shows that 2m + 1 must divide 15 (indeed 16(r² - 2r) ≡ -15 (mod 2m + 1), and 16 is relatively prime to 2m + 1), and so m ∈ {1, 2, 7}. It follows that r ∈ {3, 7, 57}, and together with r = 2 this gives r ∈ {2, 3, 7, 57}. □

Moore (r, 5)-graphs with r ∈ {2, 3, 7} have been constructed. The existence of a Moore (57, 5)-graph is still unknown. (See [9].)

Theorem 1.2.5 (Brown [18]) There is no r-regular graph with girth 5 and order n = r² + 2.
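The Petersen graph realizes the case r = 3 of Theorem 1.2.4: it is 3-regular with girth 5 on 3² + 1 = 10 vertices. The sketch below (plain Python; the Kneser-graph construction is a standard one, not taken from the text) verifies the identity A² + A = J + (r - 1)I used in the proof.

```python
# The Petersen graph, built as the Kneser graph K(5,2): vertices are the
# 2-subsets of {0,...,4}, adjacent exactly when disjoint.  It is the
# Moore (3,5)-graph, so it must satisfy A^2 + A = J + (r-1)I.
from itertools import combinations

V = list(combinations(range(5), 2))          # 10 vertices
n, r = len(V), 3
A = [[1 if set(V[i]).isdisjoint(V[j]) else 0 for j in range(n)]
     for i in range(n)]

A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
ok = all(A2[i][j] + A[i][j] == 1 + (r - 1) * (i == j)
         for i in range(n) for j in range(n))
print(ok)    # True: A^2 + A = J + (r-1)I
```

Entrywise this is exactly the argument in the proof: adjacent vertices have no common neighbor (no 3- or 4-cycles), and non-adjacent vertices have exactly one.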
Sketch of Proof By contradiction, assume that there exists an (r, 5)-graph G with order n = r² + 2. For each v ∈ V(G), let N(v) denote the set of vertices adjacent to v in G. Let v_1 ∈ V(G) and let N(v_1) = {v_2, ..., v_{r+1}}. Since the girth of G is 5, N(v_i) ∩ N(v_j) = {v_1} for all 2 ≤ i < j ≤ r + 1, and N(v_1) is an independent set. It follows that

    |∪_{i=1}^{r+1} (N(v_i) - {v_1})| = Σ_{i=1}^{r+1} |N(v_i) - {v_1}| = r + r(r - 1) = r².    (1.1)

Since |V(G)| = r² + 2, it follows by (1.1) that for any v ∈ V(G), there is exactly one v* ∈ V(G) which cannot be reached from v by a path of length at most 2. Note that (v*)* = v.
Let A = A(G) be the adjacency matrix of G. Since G has girth 5, it follows that A² + A - (r - 1)I = J - B, where B is a permutation matrix in which each entry on the main diagonal is zero. Therefore we can relabel the vertices of G so that B is the direct sum of n/2 copies of

    ( 0  1 )
    ( 1  0 ).

It follows that both n and r are even. By Theorem 1.2.1(iii), if k is an eigenvalue of A, then k² + k - (r - 1) is an eigenvalue of A² + A - (r - 1)I. Direct computation yields (Exercise 1.7(vi))

    spec(A² + A - (r - 1)I) = ( n - 1    1       -1    )
                              (   1     n/2   n/2 - 1  ).

Since A is a real symmetric matrix, A has n/2 real eigenvalues k satisfying k² + k - (r - 1) = 1, that is k = (-1 ± s)/2, where s = √(4r + 1); and A has n/2 - 1 real eigenvalues k satisfying k² + k - (r - 1) = -1, that is k = (-1 ± t)/2, where t = √(4r - 7). Consider the following cases.

Case 1 Both s and t are rational numbers. Then s and t must be the two odd integers 3 and 1, respectively. It follows that r = 2 and so G is a cycle of length 6.

Case 2 Both s and t are irrational. If there is a prime a which is a common factor of both s² and t² but a does not divide s or a does not divide t, then a divides s² - t² = 8, and so a = 2. But since s² and t² are both odd numbers, neither of them can have an even factor, a contradiction. Therefore, no such a exists. It follows that if (-1 + s)/2 is an eigenvalue of A, then (-1 - s)/2 is also an eigenvalue of A. In other words, the number of eigenvalues of A of the form (-1 ± s)/2 is even. Similarly, the number of eigenvalues of A of the form (-1 ± t)/2 is even. But this is impossible, since one of n/2 and n/2 - 1 must be odd.

Case 3 s is irrational and t is rational. Then t is an odd integer and so -1 ± t is even. Since s is irrational, the eigenvalues of the form (-1 + s)/2 and those of the form (-1 - s)/2 are equal in number, namely n/4 each, and the sum of all such eigenvalues is 2 · (n/4) · (-1/2) = -n/4. Since -1 ± t is even, the sum of all eigenvalues of the form (-1 ± t)/2 is an integer, and so the sum of the eigenvalues of the form (-1 ± s)/2 must also be an integer. It follows that r² + 2 = n ≡ 0 (mod 4), that is r² ≡ 2 (mod 4), which is impossible.

Case 4 s is rational and t is irrational. Since t is irrational, the eigenvalues of the form (-1 ± t)/2 must appear in pairs, and so the sum of all such eigenvalues is the integer -(1/2)(n/2 - 1). Let m denote the multiplicity of (-1 + s)/2. Since G is simple, the trace of A is zero. By Proposition 1.1.2(vii) and by g = 5,

    0 = tr A = r + m · (-1 + s)/2 + (n/2 - m) · (-1 - s)/2 - (1/2)(n/2 - 1).    (1.2)

Since n = r² + 2 and r = (s² - 1)/4, substituting these into (1.2) gives

    s⁵ + 2s⁴ - 2s³ - 20s² + (33 - 64m)s + 50 = 0.    (1.3)

Any positive rational solution s of (1.3) must be a factor of 50, and so s ∈ {1, 2, 5, 10, 25, 50}. Among these numbers only s = 1, 5, and 25 lead to integral solutions:

    s = 1, m = 1, r = 0;    s = 5, m = 12, r = 6;    s = 25, m = 6565, r = 156.

Since n = r² + 2, we can exclude the solution with s = 1. We outline the proof that s cannot be 5 or 25, as follows. Since g = 5, the diagonal entries of A³ must all be zero. By Theorem 1.2.1(iii), if λ is an eigenvalue of A, then λ³ is an eigenvalue of A³. It follows that tr(A³) = 0. But whether s = 5 or s = 25, tr(A³) ≠ 0, and so s = 5 and s = 25 are also impossible. □
Theorem 1.2.6 Let G be a graph with n vertices and let v ∈ V(G). If G has eigenvalues λ_1 ≥ λ_2 ≥ ··· ≥ λ_n, and if G - v has eigenvalues μ_1 ≥ μ_2 ≥ ··· ≥ μ_{n-1}, then

    λ_1 ≥ μ_1 ≥ λ_2 ≥ μ_2 ≥ ··· ≥ μ_{n-1} ≥ λ_n.
Proof Let A_1 = A(G - v) and A = A(G), where v corresponds to the first row of A. Then there is a column vector u such that

    A = ( 0    u^T )
        ( u    A_1 ).

Since A is symmetric, A has a set of n orthonormal eigenvectors x_1, x_2, ..., x_n. Let e_i denote the n-dimensional vector whose ith component is one and whose other components are zeros. Divide the eigenvalues λ_1 ≥ ··· ≥ λ_n into two groups: Group 1 and Group 2. A λ_i is in Group 1 if λ_i has an eigenvector x_i such that e_1^T x_i = 0; a λ_i is in Group 2 if it is not in Group 1. Note that if λ_i is in Group 2, then e_1^T x_i ≠ 0 (for all eigenvectors x_i of λ_i).

Suppose λ_i is in Group 1 and it has an eigenvector x_i such that e_1^T x_i = 0. Let x_i' denote the (n-1)-dimensional vector obtained from x_i by removing the first component of x_i. Then as Ax_i = λ_i x_i, we have A_1 x_i' = λ_i x_i'. It follows that each such λ_i is also an eigenvalue of A_1.

Rename the eigenvalues in Group 2 so that they are λ̄_1 ≥ λ̄_2 ≥ ··· ≥ λ̄_k. For notational convenience, let x̄_i ∈ {x_1, x_2, ..., x_n} be the eigenvector corresponding to λ̄_i, 1 ≤ i ≤ k. Let y' be an eigenvector of A_1 corresponding to an eigenvalue not arising from Group 1, and let y be the n-dimensional vector obtained from y' by adding a zero component as the first component. Since the x_i's are orthonormal and by the definition of Group 1, both y = Σ_{i=1}^{k} b_i x̄_i and e_1 = Σ_{i=1}^{k} c_i x̄_i, where c_i = e_1^T x̄_i.

Since y' is an eigenvector of A_1 corresponding to an eigenvalue μ (say), A_1 y' = μ y'. It follows that

    Σ_{i=1}^{k} b_i λ̄_i x̄_i = A Σ_{i=1}^{k} b_i x̄_i = Ay = (u^T y') e_1 + μ y = (u^T y') Σ_{i=1}^{k} c_i x̄_i + μ Σ_{i=1}^{k} b_i x̄_i.

Therefore, for each j, the coefficients of x̄_j on both sides must be the same, and so

    b_j = (u^T y') c_j / (λ̄_j - μ).

By the definition of y,

    Σ_{i=1}^{k} (u^T y') c_i² / (λ̄_i - μ) = Σ_{i=1}^{k} b_i c_i = e_1^T y = 0.    (1.4)

Viewed as a function of μ, the left side of (1.4) has k vertical asymptotes at μ = λ̄_i, 1 ≤ i ≤ k. It follows that (1.4) has a solution μ in each of the k - 1 open intervals (λ̄_{i+1}, λ̄_i).
This, together with the conclusion on the eigenvalues in Group 1, asserts the conclusion of the theorem. □

Corollary 1.2.6A Let G be a graph with n > k ≥ 0 vertices. Let V' ⊆ V(G) be a vertex subset of G with |V'| = k. If G has eigenvalues λ_1 ≥ λ_2 ≥ ··· ≥ λ_n, then

    λ_i ≥ λ_i(G - V') ≥ λ_{i+k}.
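The interlacing of Theorem 1.2.6 can be observed numerically. The sketch below (numpy is an assumed dependency, and the 5-cycle C_5 is an assumed example, neither taken from the text) deletes one vertex and checks λ_i ≥ μ_i ≥ λ_{i+1}.

```python
# Numerical illustration of Theorem 1.2.6 (eigenvalue interlacing):
# deleting a vertex v from G yields eigenvalues mu_i interlacing the
# lambda_i of G.  Here G = C_5, and G - v is the path P_4.
import numpy as np

n = 5
A = np.zeros((n, n))
for i in range(n):                       # adjacency matrix of the cycle C_5
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

lam = sorted(np.linalg.eigvalsh(A), reverse=True)          # lambda_1 >= ... >= lambda_5
mu = sorted(np.linalg.eigvalsh(A[1:, 1:]), reverse=True)   # spectrum of G - v_0
interlaces = all(lam[i] >= mu[i] - 1e-9 and mu[i] >= lam[i + 1] - 1e-9
                 for i in range(n - 1))
print(interlaces)    # True
```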
Corollary 1.2.6B Let G be a graph with n ≥ 2 vertices. If G is not a complete graph, then λ_2(G) ≥ 0.

Proof Corollary 1.2.6A follows from Theorem 1.2.6 immediately. For Corollary 1.2.6B, assume that G has two nonadjacent vertices u and v. Let V' = V(G) - {u, v}. Then by Corollary 1.2.6A, λ_2(G) ≥ λ_2(G - V') = 0. □

Lemma 1.2.1 (J. H. Smith [71]) Let G be a connected graph with n ≥ 2 vertices. The following are equivalent.
(i) G has exactly one positive eigenvalue.
(ii) G is a complete k-partite graph, where 2 ≤ k ≤ n - 1.
Theorem 1.2.7 (Cao and Hong [43]) Let G be a simple graph with n vertices and without isolated vertices. Each of the following holds.
(i) λ_2(G) = -1 if and only if G is a complete graph with n ≥ 2 vertices.
(ii) λ_2(G) = 0 if and only if G ≠ K_2 and G is a complete k-partite graph, where 2 ≤ k ≤ n - 1.
(iii) There exists no graph G such that -1 < λ_2(G) < 0.

Proof Direct computation yields λ_2(K_n) = -1 (Exercise 1.7(i)). Thus Theorem 1.2.7(i) follows from Corollary 1.2.6B. Suppose that G ≠ K_2 and that G is a complete k-partite graph with 2 ≤ k ≤ n - 1. Then by Lemma 1.2.1 and Corollary 1.2.6B, λ_2(G) = 0. Conversely, assume that λ_2(G) = 0 (so G ≠ K_2) and, by way of contradiction, that G is not a complete k-partite graph. Since G has no isolated vertices, G must contain one of the following as an induced subgraph: 2K_2, P_4, or K_1 ∨ (K_1 ∪ K_2). However, each of these graphs has positive second eigenvalue, and so by Corollary 1.2.6A, λ_2(G) > 0, contrary to the assumption that λ_2(G) = 0. Hence G must be a complete k-partite graph. This proves Theorem 1.2.7(ii). Theorem 1.2.7(iii) follows from Theorem 1.2.7(i) and (ii), and from Corollary 1.2.6B.
□

The proofs of the following lemmas are left as exercises.

Lemma 1.2.2 (Wolk, [277]) If G has no isolated vertices and if G^c is connected, then G has an induced subgraph isomorphic to 2K_2 or P_4.

Lemma 1.2.3 (Smith, [71]) Let H_1, H_2, H_3 and H_4 be given in Figure 1.2.1. Then for each i with 1 ≤ i ≤ 4, λ_2(H_i) > 1/3.

Figure 1.2.1

Lemma 1.2.4 (Cao and Hong [43]) Let H_5 = K_{n-3} ∨ (K_1 ∪ K_2). Then each of the following holds.
(i) χ_{H_5}(λ) = (λ³ - λ² - 3(n - 3)λ + n - 3) λ^{n-4} (λ + 1).
(ii) λ_2(H_5) < 1/3.
Theorem 1.2.8 (Cao and Hong [43]) Let G be a graph with n vertices and without isolated vertices. Each of the following holds.
(i) 0 < λ_2(G) < 1/3 if and only if G ≅ H_5.
(ii) If 1 > λ_2(G) > 1/3, then G contains an induced subgraph H isomorphic to a member of {2K_2, P_4} ∪ {H_i : 1 ≤ i ≤ 4}, where the H_i's are defined in Lemma 1.2.3.
(iii) If λ(n) = λ_2(H_5), where H_5 has n vertices, then λ(n) increases as n increases, and lim_{n→∞} λ(n) = 1/3.
(iv) There exists no sequence of graphs G_k such that λ_2(G_k) > 1/3 and lim_{k→∞} λ_2(G_k) = 1/3.

Proof Part (i). By Lemma 1.2.4(ii), it suffices to prove the necessity. Suppose 0 < λ_2(G) < 1/3. If G has an induced subgraph H isomorphic to a member of {2K_2, P_4} ∪ {H_i : 1 ≤ i ≤ 4}, where the H_i's are defined in Lemma 1.2.3, then by Corollary 1.2.6A, λ_2(G) ≥ λ_2(H) > 1/3, a contradiction. Hence we have the following:
Claim 1 G does not have an induced subgraph isomorphic to a member of {2K_2, P_4} ∪ {H_i : 1 ≤ i ≤ 4}.
By Claim 1 and Lemma 1.2.2, G^c is not connected. Hence G^c has connected components G_1, ..., G_k with k ≥ 2. But then G = G_1^c ∨ G_2^c ∨ ··· ∨ G_k^c. We have the following claims.
Claim 2 For each i with 1 ≤ i ≤ k, G_i^c has an isolated vertex. If not, then by Lemma 1.2.2, G_i^c has an induced subgraph isomorphic to 2K_2 or P_4, contrary to Claim 1.
Claim 3 For some i, G_i^c ≅ K_1 ∪ K_2. Since λ_2(G) > 0, by Theorem 1.2.7(ii), G is not a complete k-partite graph, and so some G_i^c has at least one edge. By Claim 2, G_i^c contains K_1 ∪ K_2 as an induced subgraph. If |V(G_i^c)| > 3, then G_i^c must have an induced subgraph H isomorphic to one of 2K_1 ∪ K_2, K_1 ∪ P_3 and K_1 ∪ K_3. Since k ≥ 2, there exists a vertex u ∈ V(G) - V(G_i^c). It follows that V(H) ∪ {u} induces a subgraph of G isomorphic to one of the H_i's in Lemma 1.2.3, contrary to Claim 1.
Claim 4 k = 2 and G_2^c ≅ K_{n-3}. By Claims 2 and 3, we may assume that G_1^c = K_1 ∪ K_2. Assume further that either k ≥ 3, or k = 2 and G_2 has two nonadjacent vertices. Let u ∈ V(G_2) and v ∈ V(G_3) if k ≥ 3, and let u, v ∈ V(G_2) be two nonadjacent vertices otherwise. Then V(G_1) ∪ {u, v} induces a subgraph of G isomorphic to H_4 (defined in Lemma 1.2.3), contrary to Claim 1. Thus k = 2 and G_2^c ≅ K_{n-3}.
From Claims 3 and 4, we conclude that G ≅ H_5.

Part (ii). It follows from the same arguments used in Part (i) and is left as an exercise.

Part (iii). By Corollary 1.2.6A, λ(n) is increasing. Thus lim_{n→∞} λ(n) = L exists. Let f(λ) = λ³ - λ² - 3(n - 3)λ + n - 3. Then by Lemma 1.2.4(i), f(λ(n)) = 0, and so

    λ(n) = 1/3 + (λ(n))² (λ(n) - 1) / (3(n - 3)).

It follows that L = 1/3.

Part (iv). By contradiction, assume such a sequence exists. We may assume that for all k, 1 > λ_2(G_k) > 1/3. By Theorem 1.2.8(ii), each such G_k contains an induced subgraph H isomorphic to a member of {2K_2, P_4} ∪ {H_i : 1 ≤ i ≤ 4}. By Corollary 1.2.6A, λ_2(G_k) ≥ λ_2(H) > 0.334; therefore, λ_2(G_k) cannot have 1/3 as a limit. □
Corollary 1.2.8A Let G be a graph with n vertices and without isolated vertices. If G is not a complete kpartite graph for some k with 2 ~ k ~ n 1, then .X2(G) ~ .X2(H5 ), where equality holds if and only if G !:!!! H 5 • In 1982, CvetkQvic (69] posed a problem of characterizing graphs with the property 0 < A2(G) ~ 1. In 1993, P. Miroslav (200] characterized all graphs with the property A2(G) ~ v'2 1. In 1995, D. Cvetkovic and S. Simic [72] obtained some important 1 • Cao, Hong and Miroslav explicitly properties of the graph satisfying .X2(G) ~ 1. It remains difficult to solve the displayed all the graphs with the property .X2(G) ~ Cvetkovic's problem completely.
v'\
1.3
v;
Estimating the Eigenvalues of a Graph
We start with some basics from the classical PerronFrobenius theory on nonnegative square matrices. As the adjacency matrix of a connected graph is a special case of such matrices, some of these results can be stated in terms of connected graphs as follows. Theorem 1.3.1 (PerronFrobenius, (211), (93]) Let G be a connected graph witlljV{G)I 2. Each of the following holds. (i) .X1(G) is positive, and is a simple root of xo(A). (ii) If x 1 is an eigenvector of .X1 (G), then x 1 > 0. (iii) For any e e E(G), .X1(G e)< .X1(G). Definition 1.3.1 Let G be a connected graph with IV(G)I the spectrol radiw of the graph G.
~
~
2. The value A1(G) is called
Theorem 1.3.2 (Varga (265)) Let G be a connected graph with A= A(G), let X1 denote an eigenvector of .X1 (G). and let y be a positive vector. Then (Ay)Ty ~2 ~ ~... and so
= cs = · · · =
Cn
= 0 or ~1 = ~2 = · ·· = ~...
(Ay)Ty ~
IYI2 if and only if y
Again by y

1
= c1x 1 • This proves the lower bound. = E?=1 CiX; and by Ax; = ~;x;, (Ay)Te; _ ~~; _ '· T _2  "• y e; Ci
0 for each i with 1 ~ i ~ n. Since the x.'s are orthonormal and by y 2:?:. 1 CiX.,
=
~1
= Ct~l = (Ay)Txt = E?;? b,(Ay)Te; < Ct
yTx1
:Ei=l
max b;(Ay)T e; < ~~. b;(yTxi)  1:5JSn bj(yTx;) 
where equality holds if and only if for each j, (Ay)T e; y = c1x1. This proves the upper bound. O
= ~1 (yTe1), and if and only if
Corollary 1.3.2A Let G be a connected graph with q Exactly one of the following holds.
= IE(G)I and n =
(i) 2q
(ii)
IV(G)I
~
2.
< ~~ < a(G).
~qn = ~1 = a(G), G is regular and (1, 1, ... '1)T is an eigenvector.
Corollary 1.8.2B(Hoffman, Walfe and Hofmeister [128], Zhou and Lin [289]) Let G be a connected graph with q IE(G)I and n IV(G)I ~ 2. Let d1, ... ,d,. be the degree sequence of G. Then
=
1
q L
ijEE(G)
=
1
~~~1 ~~~~d·  
1
L
ijEE(G)
~
equalities hold if and only if there is a constant integer r such that ~di = r 2 for any ij E E(G). Proof Setting Zj = ..,(dj {1 ~ j ~ n) in Theorem 1.3.2, we obtain the inequalities in Corollary 1.3.2B.
14
Matrices and Graphs
Suppose that there exists a constant integer r such that didJ = r 2 for any ij e E(G). For i = 1, · · • , n, let d be the common degree of all the vertices j adjacent to i. Then
I:
.flj = ~.jd = r./d,
j:~jEE(G}
so that r is an eigenvalue of A= A(G) with (Jdi", .. · ,..;;I;,}t as a corresponding eigenvector. Hence A1 = r. Conversely, if the equality holds in Corollary 1.3.2B, we have
At Jd; =
I: ..fdi j:ijEE(G}
for all i. If G is regular, we are done. Otherwise let o and fl. be respectively the minimal and maximal degrees of G. Choose u and v such that the degrees of u and v are o and fl. Assume that there exists a vertex w with uw e E(G) and that the degree of w is less than fl. Then we have
On the other hand,
a contradiction. Assuming existence of w where vw e E(G) and the degree of w is greater than oleads to an analogous contradiction. We have proved that whenever ij e E( G), then the degrees of i and j are oand fl. or vice versa, and that At = ../fK. D The study of the spectrum of a graph has been intensive. We present the following results with most of the proofs omitted. More results can be found in Exercises 1.11 through 1.13, and in [71). In [131}, Hong studied bounds of Ak(Tn) for general k (Theorem 1.3.7). Hong's results are recently refined by Shao [244) and Wu [278). These refinements will be presented in Theorem 1.3.8. Theorem 1.3.3 (Hong, [130]) Let n ~ 3 be an integer, let G be a unicyclic graph (a graph with exactly one cycle) with n vertices, and let S! denote the graph obtained from K 1 ,nt by adding a new edge joining two vertices of degree one in Kt,nl· Then each of the following holds. (i) 2 = At(Cn) S At(G) S At(S!), where the lower bound holds if and only if G Cn; and where the upper bound holds if and only if G = S!. (ii) 2 S A1 (G) S .,fii, where the upper bound holds if and only if G = S:.
=
Matrices and Graphs
15
Theorem 1.3.4 (Hofemeister, (121)) Let Tn denote a tree with n vertices. Then either
Jn;
n
3 , or = 2s + 2 and Tn can be obtained from two disjoint K1,.'s by .X2(Tn) ::5 adding a new edge joining the two vertices of degree s. Theorem 1.3.5 (Collatz and Sinogowitz, (62)) Let G be a connected graph with n IV(G)j. Then
=
2cos (n: 1) ::5 .X1 (G) :5 n 1, where the lower bound holds if and only if G = Pn, a path of n vertices; and where the upper bound holds if and only if G is a complete graph Kn. Theorem 1.3.6 (Smith, (255]) Let G be a connected graph. Each of the following holds. (i) If .X1(G) = 2, then either G E {K1,4, Cn}, or G is one of the following graphs:
>< Figure 1.3.1. (ii) If .X1(G) < 2, then G is a subgraph of one of these graphs. Theorem 1.3. 7 (Hong, (131]) Let Tn denote a tree on n
0::5 .Xr.(Tn) :5 Moreover, if n
J[ n; 2],
~
4 vertices. Then
for each k with 2 :5
[i].
= 1 mod k, this upper bound of .Xr.(Tn) is best possible.
Theorem 1.3.8 Let Tn denote a tree on n Then each of the following holds. (i) (Shao, (244)) .Xr.(Tn) :5
~
4 vertices and Fn a forest on n
~
4 vertices.
J[~] 1, for each k with 2 ::5 (i]. Moreover, when n ~ 1
Matrices and Graphs
16
(mod k), this upper bound is best possible. (ii) (Shao, [244]) When n 1 (mod k), strict inequality holds in the upper bound in
=
Theorem 1.3.8(i). However, there exists no f for each k with
> 0 such that A~o(Tn)
~ J[~]  1 f,
[i] .
(iii) (Shao, (244])
.\~:(Fn) ~ J[~] 1, for each k with 1 ~ [i]· This upper bound is
best possible. (iv) (Wu, [278]) Let G denote a unicyclic graph with n ~ 4 vertices, then .\~:(G) ~
Vrrnl3 LiJ 4 + .!.2' for each k with 2 n, by Exercise 1.20(v), zero is an eigenvalue of BTB, and so by Exercise 1.20(ii), ). = 2 is an eigenvalue of L(G). O One of the main stream problems in this area is the relationships between xa(.X) and XL(G)(.X), and between xa(.X) and XT(G)(.X). The next two theorems by Sachs [227] and by Cvetkovic, Doob and Sachs [71) settled these problems for the regular graph cases. Theorem 1.4.2 (Sachs, [227]) Let G be a kregular graph on n vertices and q edges. Then
Proof Let B = B(G) denote the incidence matrix of G. Define two n matrices as follows
u=(
>.In B ) V 0 Iq ,
=(
In BT
B ) . >Jq
+q
by n
+q
Matrices and Grapbs
18 By det(UV)
= det(VU), we have A11 det(M,. BBT)
= A"det(Al11 
BT B).
Note that G is kregular, and so the matrix C in Exercise 1.28(i) is kl,.. These, together with Exercise 1.28(i) and (ii), yield det(M11  A(L(G)))
XL(G)(A)
=
det((A + 2)111
=
A11"det((A + 2)1,. BBT)

BTB)
= .\9 "det((A + 2 k)I,. A(G)) = (A+ 2) 11"xo(A + 2 k). This proves Theorem 1.4.2.
0
Example 1.4.1 It follows from Theorem 1.4.2 that if spec(G)= ( k
At
A2
···
1 m1
m2
·••
then
specL(G)
=(
2k  1 k  2 + A1 1
k2+A.
2 ) qn
m1
In particular, specL(K,.)
=(
2n 4
n 4
2
1
n11
n(n3) 2
) •
Theorem 1.4.3 (Cretkovic, [71]) Let G beakregular graph with n vertices and q edges, .and let the eigenvalues of G be A1 ~ .\2 ~ · · · ~ A,.. Then n
XT(G)(A)
= (A+ 2) 11n II(A2 
(2>.i + k 2)A + ~ + (k 3)~ k).
i=l
Sketch of Proof Let A = A(G), B = B(G) denote the adjacency matrix and the incidence matrix of G, respectively; and let L = A(L(G)) denote the adjacency matrix of L(G). Then by the definitions and by the fact that G is kregular,
BBT=A+kl,BTB=L+2l, andA(T(G))= [ :T
~].
19
Matrices aDd Graphs It follows that
=
I
B (>.+k)IBBT (>.+2)1BTB BT
I
I
B (>.+k)IBBT (>.+k+1)BT+BTBBT (>.+2)1
= =
=
(>.+2)9 det
I
(AI A+ >.! 2(A+kl)(A (>.+ 1}I))
(>. + 2)Hn det(A2

(2>. k + 3)A + (A2
(k 2)>. k)I)

n
(>. + 2)q+n
II(A1 (2A k + 3)Aa + >.
2 
(k 2)>. k).
i=l
This proves the theorem.
In (235], the relationships between xa(>.) and XL(G}(>.), and between xa(>.) and XT(G)(A) were listed as unsolved problems. These are solved by Lin and Zhang (165]. Definition 1.4.3 Let D be a digraph with n vertices and m arcs, such that D may have loops and parallel arcs. The entries of the inincidence matriz B 1 = (b1;) of D are defined as follows:
b~. = { '3
1 0
if vertex vi is the head of arc e1 otherwise
The entries of the outincidence matriz Bo b'!.
={ 1
'3
0
= (bf;) of D are defined as follows:
if vertex v, is the tail of arc e; otherwise
Immediately from the definitions, we have
(1.5)
A(D) = BoBT, and A(L(D)) = BT Bo, and
A(T(D))
= [ A(D) BT
] Bo A(L(D))
(1.6)
Theorem 1.4.4 (Lin and Zhang, (165]) Let D be a digraph with n vertices and m arcs, and let AL = A(L(D)). Then
(1.7) Proof Let
U
= [ >.In 0
Bo ] and W
lm
=[
1';, B 0 Bi >.Im
]
•
20
Matrices and Graphs
Note that det(UW)
= det(WU), and so
This, together with (1.5), implies that
XL(D) (A)
and so (1.7) follows.
=
det(Alm  AL)
=
Amn det(Al,.  BoB'f)
= det(Alm 
B'f Bo)
= Amn det(Al,. A(D)),
0
Theorem 1.4.5 Let D be a digraph with n vertices and m arcs, and let AL and AT = A(T(D)). Then
XT(Dj(A) = Amn det((Al,. A) 2

= A(L(D))
A).
(1.8)
Proof By (1.6),
XT(Dj(A)
=
I I
AI,.  A
B'f
 Bo Aim AL
I= I
Bo AI,.  BoB'f B'f Aim B'fBo
AI,. BoB'f Bo B'f Bf(AI,. BoB'f> Aim
I
=
I
= =
Amn det(A2 I,.  AA n(1 + A)A + A2 ),
which implies (1.8).
I
AI,.  BoBT + l Bo( (1 + A)B'f + B'f BoBTJ o (1 + A)Bf + B'f BoB'f> Aim Amn det(A2 I,. ABoB'f (1+A)BoB'f +BoB'fBoB'f)
0
Hoffman in [123] and (122] considered graphs each of whose eigenvalues is at least 2, and introduced the generalized line graphs. Results on generalized line graphs and related topics can be found in (123], [122] and [70].
1.5
Cospectral Graphs
Can the spectruin of a graph uniquely determine the graph? The answer is no in general as there are non isomorphic graphs which have the same spectrum. Such graphs are call cospectral graphs. Harary, King and Read (115] found that K 1 , 4 and K 1 U 0 4 are the smallest pair of cospectral graphs. Hoffman (125] constructed cospectral regular bipartite
Matrices and Graphs
21
graphs on 16 vertices. In fact, Hoffman (Theorem 1.5.1 below) found a construction for cospectral graphs of arbitrary size. The proof here is given by Mowshowitz [206]. Lemma 1.5.1 (Cvetkovic, [65]) Let G be a graph on n vertices with complement G•, and let Ha(t) = E~..o N~ot" be the generating function of the number N,. of walks of length k in G. Then (1.9)
Sketch of Proof For ann by n matrix M = (mi;), let M* denote the matrix (det(M)M 1 )T (if M1 exists), and let IIMII = E?=1 Ej= 1 mii· It is routine to verify that, for any real number :z:, det(M + :z:Jn) = det(M) + :z:IIM*II Let A= A(G). By Proposition 1.1.2(vii), N,. = IIA,.II. Note that when t < max{\ 1 }, co
EA"t" =(I tA) 1
:::
(det(I tA)) 1 (I tA)*,
k=O
and so Ha(t)
=
f
N~ot" =
I p(A').
Proof This follows from direct computation.
D
Theorem 1.6.2 (Brualdi and Hoffman, (30]) For each A E 4}(n, e), p(A) ~ f*(n, e), where equality holds if and only if there exists a permutation matrix P such that P AP1 E 4/*(n, e).
Matrices and Graphs
26
=
Proof Choose A E w(n,e) such that p(A) j(n,e). By Theorem 1.6.1(i), there exists a non negative vector x such that p(A)x =Ax and lxl = 1. By Lemma 1.6.1, we may assume that x = (xi.z2,··· ,zn)T such that z 1 ~ x2 ~ .•. z;. ~ 0. Argue by contradiction, we assume that A¢ w*(n,e) and consider these two cases.
=
Case 1 There exist integers p and q with p < q, apq 0 and ap,q+l = 1. Let B be obtained from A by permuting apq and ap,q+l and aq+l,p and aqp. Note that B A has only four entries that are not equal to zero: the (p, q) and the (q,p) entries are both equal to 1, and the (p, q + 1) and the (q + 1, p) entries are both equal to 1. It follows by Xq
~
Zq+l that
= (··· ,z, xq+l···· ,xp,Zp,··· )x
=
2zp(Zq 
Xq+t) ~
0.
Since B is a real symmetric matrix and by jxj = 1, p(B) = xTBx. By Ax= f:'(A)x and by lxl = 1, we have p(B) p(A) ~ 2zp{z 9  z 9H) ~ 0. As p(A) = f(n,e), and as BE w(n,e), it must be Zp(Xq Zq+t) = 0, and so p(B) = xTBx = p(A). Zq+l, and so xTBx  xT Ax 0. It follows that H Zp :# 0, then
z, =
=
(Ax),
= (Bx) 9 = (Ax) 9 + (x) 9 ,
=
and so x 9 0, contrary to the assumption that z 9 > 0. = Xn. Since Zs+l Therefore, x 9 = 0, and so for some 8 $ p  1, Z 8 > 0 p(A) j(n,e), A has an 8 by 8 principal submatrix A'= J  I at the upper left comer and p(A) = p(A'). But then, by Lemma 1.6.3, f(n, e) ~ p(A") > p(A') = p(A), contrary to the assumption that p(A) j(n,e). This completes the proof for Case 1.
=
=
= ···
=
Case 2 Case 1 does not hold. Then there exist p and q such that apq = 0 and aP+ 1,9 = 1. Let B be obtained from A by permuting apq and ~l,q and a,p and aq,p+l· As in Case 1, we have
and so it follows that Bx = p(A)x and z 9 (zp  xP+I) = 0. We may assume that Zp = xP+l· Then (Ax)p p(A)zp (Bx)p (Ax)p +x,, which forces that x 9 = · · · = Zn 0. Therefore, a contradiction will be obtain as in Case 1, by applying Lemma 1.6.3. 0
=
=
=
=
Lemma 1.6.4 For integers n ~ r > 0, let F 11 be the r by r matrix whose (1,1)entry is r1 and each ofwhoseotherentries is zero; let D = (d;;)nxn be a nonnegative matrix such
Matrices and Graphs
27
that the left upper comer r x r principal square submatrix is Fu; and let a; = JL.j= 1 ~i, where i = r + 1, · • · , n. Let F 12 be the r by n  r matrix whose jth column vector has O!r+J in the first component and zero in all other components, 1 ~ j ~ n  r. Define F= [
F~
F12]. 0
F12 Then p(F)
~
p(D).
Proof Let x
=
=
(x1, · · · , Xn)T be such that Jxl
1 and such that p(A)x Then Jyf = 1, and
y = h/x~ + x~ + .. · +x~,o, ... ,O,xr+l>'" ,xn)T. xTDx
=
n
(rl)x~+2
=
Ax. Let
r
L
L:x;do;x;
j=r+li=l
=
(r
i'f.l
1)x~ + 2
s Hlxl+2
x;
(~x;d;;)
£:' (~t,z~~t,dl,)
~ (rl)x~+2 1t 1 x;~~x1a; $
(r 1)
=
yTFy
t,z:
+2
,&.:;~t,zl•;
and so Lemma 1.6.4 follows from the Rayleigh Principle (see Theorem 6.1.3 in the Appendix). O Lemma 1.6.5 Let F be the matrix defined in Lemma 1.6.4. Eacq of the following holds. (i) p(F) is the larger root of the equation n
x2

(r 1)x
L
~
= 0.
i=r+l
(ii) p(F) ~ k  1. (iii) p(F) = k  1 is and only if r
=k 
1.
Sketch of Proof By directly computing XF(A), we obtain (i). By the definition of the a; 's, we have
Ea~=e(r)· 2
i=r+l
28
Matrices and Graphs
It follows by (i) and by r
k  1 that
r  1 + ...}2k2  r2 2k + 1
r 1 + .j(k 1) 2 + k2 r2
= 2 ~ k  1. By algebraic manipulation, we have p(F) = k 1 if and only if r = k 1, and so (iii) p(F)
follows.
=
~
2
O
Theorem 1.6.3 (Brualdi and Hoffman, [30]) Let k > 0 be an integer with e
= ( : ).
Then f(n,e) = k 1. Moreover, a matrix A E ~(n,e) satisfies p(A) = k 1 if and only if A is permutation similar to
(1.12) Proof Let A= (ai;) E ~·(n,e). By Theorem 1.6.2, it suffices to show that p(A) ~ k1, and that p(A) = k 1 if and only if A is similar to the matrix in the theorem. Since A E ~*(n,e), we may assume that
A= (
z
~t),
where r < k 1 and where all the entries in the first column of A 1 are 1's. Since ~ is symmetric, there is an orthonormal matrix U such that UT ~U is a diagonalized matrix. Let V be the direct sum of U and Inro and let R be the r by r matrix whose (1, 1) entry is an rand whose other entries are zero. Then B
= V AVT = ( U
oT
= (
R I,. (UAt)T
O
Inr
) (
~
Af
At) ( UT 0
oT
0 ) Inr
U At ) . 0
Obtain a new n by n matrix C from B by changing all the 1 's in the main diagonal of B to zeros. Note that for each nonnegative vector x = (x 11 • .. , xn)T, xT Bx xTCx 1 1 z1 ~ xTCx. By Lemma 1.6.2, p(B) ~ p(C). Obtain a new matrix D = (di;) from C by changing every entry in UA1 and in (U A 1 )T into its absolute value. Then p(C) :5 p(D). Since D is nonnegative, p(D) has a nonnegative eigenvector x (zt, · · · , Xn)T such that lxl 1. Let O!i = Jl:j= 1 4i, where i = r + 1, ·· · ,n. and let F11 be the r by r matrix whose (1,1)entry is r 1 and each of w~ose other entries is zero; F12 the r by n  r matrix whose jth column vector has a,.+i
=
l:::
=
=
Matrices and Graphs
29
in the first component and zero in all other components, 1 ~ j
F= [
~
n r. Define
ii ~2 ]. =
By Lemma 1.6.4, p(F) ;::: p(D) ;::: p(A). By Lemma 1.6.5, J(n, e) p(A) ~ p(F) ~ k 1. When p(A) k  1, by Lemma 1.6.5(iii) and by r ~ k  1, we must have r k  1 and so by e k(k + 1}/2, A must be similar to the matrix in (1.12).
=
=
=
D
Theorem 1.6.4 (Stanley [256]) For any A E ~(n,e), p
and equality holds if and only if e
(A)< 1+~ 
2
'
=( k2 ) and A is permutation similar to
(1 ~ )· =
Proof Let A= (a,;), let Ti be the ith row sum and let x (x1,··· ,xn)T be a unit eigenvector of A corresponding to p(A). Since Ax= p(A)x, we have p(A)x1 = L:; a1;x;. Hence, by CauchySchwarz inequality,
p(A) 2
= (La.;x;)2 ~ r, 2:a,;x~ ~ ri(1 x0) 2 • i
;
Sum up over i to get
p(A) 2
~ 2e 
=
2e 
=
2e 
L r1x~ = 2e  L a,;x~ i,; L (xl + ~) ~ 2e  L i 1. These imply r = R and 8 = S. On the other hand, if r = R, 8 =Sand ar = b8, then it is easy to verify that
where x; = .;b and Y; =
../mar+Sr.
..fii for all 1 :$ i :$ a, 1 :$ j
:$ b. By Theorem 1.6.1, p
= Fa =
0
Recently, Ellingham and Zha obtained a new result on p(G) for a simple planar graph of order n. Theorem 1.6.7 (Ellingham and Zha, [81]) let G be a simple planar graph with n vertices. Then p(G) :$ 2 + ../2n 6.
An analogue study has also been conducted for nonsymmetric matrices in B ... Definition 1.6.2 Let M(n,d) be the collection of n by n (0,1) matrices each of which has exactly d entries being one; and let M*(n,d) be the subset of M(n,d) such that A= (a;;) E ~·if and only if A E M(n,d) and for each i with 1 :$ i :$ n, au;;:: a2i;;:: ···;;::a,.; and au ;;:: a;2 ;;:: • • • ;;:: a;... Denote
g(n, e)
= max{p(A) :A E M(n, e)} and g*(n, e) = max{p(A) : A E M*(n, e)}.
Matrices and Graphs
34
Example 1.6.2 The following matrices are members in M(3, 7) achieving the maximum spectral radius g(3, 7) = 1 + .;2. Only the first one is in M*(3, 7). 1 1 1) ( 1 1 1) ( 1 1 1) ( 111,110,101. 100 101 110 The value of g( n, d) has yet to be completely determined. Most of the results on g( n, d) and g*(n, d) are done by Brualdi and Hoffman. Theorem 1.6.8 (Schwarz, (231]) g(n, d)
= g*(n, d).
Theorem 1.6.9 (Brualdi and Hoffman (30]) Let k > 0 be an integer. Then g(n, k2 ) = k. Moreover, for A E M(n, k2 ), p(A) = k if and only if there exists a permutation matrix P such that
Let k
> 2 be an integer. Define 1 10] ,Zt= [ 0 ], OOO 11 000 0 0 0 0 0 0 0 0 0
where there is a single 1entry.somewhere in the asterisked region of Z~c. Theorem 1.6.10 (BrualdiandHoffman (30]) Let k > 0 bean integer. Theg(n,k2 +1) = k. Moreover, for A E M(n, k 2 + 1), p(A) =kif and only if there is a permutation matrix P such that plAP = z,.. Friedland (92) and Rowlinson [221) also proved different upper bounds p(A), under various conditions. Theorem 1.6.11 below extends Theorem 1.6.9 when s 0; and Theorem 1.6.13 below was conjectured by Brualdi and Hoffman in (30].
=
Theorem 1.6.11 (Friedland [92]) Let e = ( : ) + s, where s < k. Then for any
AE
~(n,e),
p(A) $
k 1 + ..j(k 1)2 + 4s 2 .
Theorem 1.6.12 (Friedland [92]) Let e
=( : ) +k 
1, where k
~ 2.
Then for any
Matrices and Graphs AE
35
~(n,e),
(A)< k2+vka+4k4 2 ' where equality holds if and only if there exists a permutation matrix A such that pt AP = .H2+1 + 0, where p
1
1
1
1
.121
H2+1=
1
1
1
1
0
Theorem 1.6.13 (Rawlinson [221]) Let e
=( : )
with k >
8
~ 0. If A E ~(n, e) such
=
that p(A) t/>(n,e), then G(A) can be obtained from a K,. by adding n k additional vertices v1 , • • • , Vn1: such that v1 is adjacent to exactly 8 vertices in K,., and such that Va, • • • , Vnk are isolated vertices in G(A). Theorem 1.6.14 (Friedland [92]) Ford= k2 + t where 1 ::;; t ::;; 2k, and for each A E M(n,e),
p(A)::;;k+~. Moreover, equality holds if and only if t = 2k and there is a permutation matrix P such that pl AP
where d = k2
= ( Ed 0
0 )
0
'
+ 2k, and Ed= (
Theorem 1.6.15 (Friedland [92]) For d
:~ ~).
= k2 + 2k 
3 where k
> 1, and for
each A E
M(n,e),
p
(A)
g(n,r). Thus we may assume that there is a number j such that the matrix F = AJ+l has the form (1.16) and k = r, but the matrix E = A; does not have the form (1.16) for any k. Without loss of generality, we assume, by Theorem 1.6.16, that
l
=
=
E [ Jr C' ] F _ [ Jr Jnr,r D ' Jnr,r
C ] Jnr '
where C' is obtained from C by replacing a 0 at the (r, t) position of C by a 1, for some t with 1 ~ t ~ n r, and where D is obtained from Jnr by replacing a 1 at the (1, t) position by a 0. Denote E = (e•;) and let x = (Xt,'" ,xn)T with unit length be an eigenvector of E corresponding to the eigenvalue p(E). By the choice of E, E does not take the form of (1.16) for any k. In fact, fork= r+1, there isO in the first column ofC'. Let ei = 1ei,r+l• then L:;?:,1 ei > 0. Since the (r + 2) row of E does not have a 0, we can deduce, from the r + 1 and the r + 2 rows of Ex= p(E)x, that 0 ~ Xr+l < Xr+2 = Xr+3 = · · · = Xn, and
Summing up for i yields p(E)
=n (~(ri 
e,)xn +
~ eiXr+l) > n rxn,
and
lp(E)I 2

np(E) + r
> o.
It follows that p(A)
which completes the proof.
1.7
~ p(E) > ~(n + .../n2 4r) = g(n, r),
0
Exercises
Exercise 1.1 Prove Proposition 1.1.2.
Matrices and Graphs
39
Exercise 1.2 Prove Theorem 1.1.1 and Theorem 1.1.2. Exercise 1.3 Let A= A(K,.) be the adjacency matrix of the complete graph of order n. Show that
Exercise 1.4 Prove Corollary 1.1.3A and Corollary 1.1.3B. Exercise 1.5 The number trAI: is the number of closed walks of length kin G. Exercise 1.6 Let J,. denote the n x n matrix whose entries are all ones and let s, t be two numbers. Show that the matrix sJ,.  tl,. has eigenvalues t (with multiplicity n 1) and ns+t. Exercise 1. 7 Let K,., C,. and P,. denote the complete graph, the cycle, the path with n vertices, respectively, and let Kr,• denote the complete bipartite graph with r vertices on one side and s vertices on the other. Let m = 2k be an even number, J be the m x m matrix in which each entry is a 1, and B is the direct sum of k matrices each of which is (
~ ~ ) • Verify each of the following: (i) spec(K,.)
(.. 11)
= ( 1
n 1) .
n1
SpeC (Kr,B )
=(
1
O
r+s2
= {2cos
(iv) spec(P,.)
= {2cos (n ~ 1 )
k
vrs ). 1
= 0, 1,· · · ,n 1}.
: j
k:1 ~ ~\ !
(v)spec(JB1)= (
=(
1
c:;) :;
(iii) spec(C,.)
(vi) spec(J B)
..;rB
= 1, · · · ,n}· m;2 )·
m; 1 ).
Exercise 1.8 Let G be a simple graph with n vertices and q edges, and let number of 3cycles of G. Show that xa(~)
= ~..  q~n2 
2m(~)~n3
m(~)
be the
+ ....
Exercise 1.9 Prove Lemmas 1.2.2, 1.2.3, and 1.2.4. Exercise 1.10 Prove Corollary 1.3.2A and Corollary 1.3.2B, using Theorem 1.3.2.
40
Matrices and Graphs
Exercise 1.11 Let G be a graph with n 2:. 3 vertices and let 11t. 112 has degree 1 in G and such that 111 112 e E(G). Show that
e V(G)
Xa(.X)
such that 111
=.Xxav, (.X) XG{v ,v2}(.X). 1
Exercise 1.12 Show that is G is a connected graph with n = IV(G)I 2:, 2 vertices and with a degree sequence d1 ::;; ~ ::;; • • • ::;; d,. Then each of the following holds. (i)
).1
(G) ::;;
(1~~.. :E
d;) l
  iiEE(G)
(ii) (H. S. Wilf, [275]) Let q
= IE(G)I. Then .X1 (G)::;;
J
2q(nn 1).
Exercise 1.13 Let T,. denote a tree with n 2:, 2. Then
2cos (n: 1) : ; >. (G)::;; Jn 1, 1
where the lower bound holds if and only if T,. = P,., the path with n vertices; and where the upper bound holds if and only T,. = Kt,n1· Exercise 1.14 LetT denote a tree on n 2:, 3 vertices. For any k with 2 ::;; k ::;; n 1, there exists a vertex v e V(T) such that the components ofT 11 can be labeled as G11G2, · ·· ,Gc so that either (i) IV(G,)I ::;; [ n ~ 2 ] + 1, for all i with 1::;; i::;; c; or (ii)
IV(G:~)I :5
[n ; 2] +1, for all i with 1:5 i :5 c1, and IV(Gc)l :5 n2 [n~ 2 ].
Exercise 1.15 Prove Lemma 1.3.1. Exercise 1.16 Let G be a graph on n 2:, 2 vertices. Prove each of the following. (i) (Lemma 1.3.2) H G is bipartite, then its spectrum is symmetric with respect to the origin. In other words,
).,;(G)= >.n+H(G), for each i with 1 :5 i :5
LiJ·
(ii) H G is connected, then G is bipartite if and only if .X1 (G) is an eigenvalue of G. (iii) H the spectrum of G is symmetric with respect to the origin, then G is bipartite.
Exercise 1.17 Let p(A) be the maximum absolute value of an eigenvalue of matrix A. Show that for any matrix M e B,.,
p(M) :5 P ( [
Exercise 1.18 H [
~T ~ ]
( ;
)
~T ~
]) •
= ). ( ; ) , then llxll = IIYII whenever ). :/: 0.
Matrices and Graphs
41
Exercise 1.19 Let G be a graph on n vertices and ac be the complement of G. Then
p(G) + p(Gc) :5; p(G)p(Gc)
0. (i) Assume that G is bipartite with a bipartition {s+,s}. Let x = (x1o··· ,xn)T be such that A;X = Ax. Define, for each ; 1, 2, · · · , n,
=
Wj
={
x· 3 x;
ifv; E s+ ifv; E
s
and let w = (w1, · · · , wn)T. Then we can routinely verify that Aw = A;w. (x1,··· ,xn)T. Let (ii) Suppose A1 is an eigenvalue. Assume Ax= A1x where x
=
w
= (lztl, · · · , lxnDT. Then lxT Axl
= 1 A1xTxl = A1xTx = A1wTw.
Thus AIWTW
=
xTAx
:::; I  xTAxl = I
L: L
a;;x;x;l
i
:::; LLa;;lx;llxil =wTAw :::;
i AlWTW.
Matrices and Graphs
45
(Here we used the fact that ..\1 (G} =max{ uiTAl2u}. See Theorem 6.1.3 in the Appendix) It lul¢0
U
follows that ..\1 w = Aw, By Theorem 1.3.1(ii), and by the assumption that Gis connected, lz•l > 0, for all i with 1 $ i $ n. It also follows that all inequalities above must be equalities, and so all nonzero terms in
E:~::>ijZiZJ i
= ..\1lxl 2
i
must have the same sign. Therefore, all summands must be negative, which implies that if a,; f: 0, then ZiZj < 0. Lets+= {vi: > 0} and s ={vi: Zi < 0}. Then {S+,s} is a bipartition of V(G}, and so G is bipartite. The other direction follows from (i). (iii) Apply (ii) and argue componentwise.
z,
Exercise 1.17 Suppose Mz p(M)xT and
= p(M)x for
some nonzero vector x. Then xTMT =
Exercise 1.18 Since My= ..\x and MTx = ..\y, we have xTM xTMy = ..\yTy.
= ,\yT and ..\xTx =
Exercise 1.19 Note that G or G• must be connected, and so we may assume that G is connected. By Theorem 1.6.5 and its corollaries, p(G)
+ p(G"}
$
J2en+ 1 +[~ +
12 + '2)
$
(n 1)2 +

(p(G}p(G)
J~ +n(n 1} 2e],
~ + 2v(2e n + 1}}~ + n(n 1) 
2e)
< 2(n 1}2 + ~Exercise 1.20 Apply definitions and Theorem 1.1.1.
=
Exercise 1.21 A formal proof for L(KnH) T(Kn) can be found in (5], but we can get the main idea by working on L(K4) T(Ks).
=
Exercise 1.22 For a simple planar graph G, IE(G)I $ 3jV(G)I 6. Exercise 1.23 Note that L:;~ 1 ..\1
= 2e. Then it follows from Corollary 1.6.5D.
Matrices and Graphs
46
Exercise 1.24 Note that if P and Q are permutation matrices, then
Hence we may assume that
Then ATA= xTx + yTy, and AAT
= [ ~~; ~~ ] , XXT = BBT +GeT.
Since yyT abd ccr are semidefinite positive, >..2 (AT A)+ >..3 (AT A)
~
.\2 (XTX) + ).s(XTX)
=
.\2(BBT) + .\3 (BBT).
It follows by p(A) 2 ::5 .\1 (AT A)
=.\2 (XXT) + ).s(XXT)
= tr(AT A)  2:?=2 At( ATA) that
p(A) 2 ::5 tr(AT A)  >..2(BBT)  ).s(BBT).
Chapter 2
Combinatorial Properties of Matrices Let F be a subset of a number field, and let Mm,n(F) denote the set of all m x n matrices with entries in F, and let Mn(F) = Mn,n(F). Note that Bn = Mn({O, 1}). We write M% = Mn({r ~ Ol r is real}) and M: = Mn({r > 01 r is real}). When the set F is not specified, we write Mm,n and Mn for Mm,n(F) and Mn(F), respectively. Let A E Mm,n, and let k and r be integers with 1 :S k :S m and 1 :S l :S n. For 1 :S i1 < i2 < · · · < ir. :S m and 1 :S j 1 < }2 < · · · < j, S n, write a= (i11 i2, • · • ,ir.:) and {J Ut.i2, · · · ,j,). Let A[it, i2, · · · ,i~:li1>i2, · · · ,ji] A[alfJJ denote the submatrix of A whose (p, q)th entry is as,;,. We also say that A[ai{J] is obtained from A by deleting the rows not indexed in a and columns not indexed in {J.
=
=
Similarly, A[i1 , i2, · · · , i~:IJt,}2, · · · ,j,) = A[alfJ) denotes the matrix obtained from A by deleting the rows not indexed in a and columns indexed in {J; A(alfJJ denotes the matrix obtained from A by deleting the rows indexed in a and columns not indexed in {J; and A(alfJ) the matrix obtained from A by deleting the rows indexed in a and columns indexed in {J. Nonnegative matrices can be classified as reducible matrices and irreducible matrices by the relation ~p, or as partly decomposable matrices and fully indecomposable matrices by the relation "'p·
47
48
2.1
Combinatorial Properties of Matrices
Irreducible and Fully Indecomposable Matrices
Definition 2.1.1 A matrix A E Mn is reducible if A
!:::!p
A1, where
and where B is an l by l matrix and Dis an (nl) by (nl) matrix, for some 1::;; l::;; n1. The matrix A is ineducible if it is not reducible. The next theorem follows from the definitions. Theorem 2.1.1 Let A= (a1;) E M;t for some n > 1, and let m denote the degree of the minimal polynomial of A. The following are equivalent. (i) A is irreducible. (ii) There exist no indices 1 ::;; it < i2 < · · · < ir :S: n with 1 ::;; l < n such that
A(it, · · ·irliv · · ,ir) = 0. (iii) AT is irreducible. (iv) (I+ A)n 1 > 0. (v) There is a polynomial f(:z:) over the complex numbers such that f(A) > 0. (vi) 1+A+···+Am 1 >0. (vii) (I+ A)mt > 0. (viii) For each cell (i,j), there is an integer k > 0 such that a~;>, the (i,j)th entry of A", is positive. (ix) D(A) is strongly connected. Sketch of Proof By definitions, (i) (ii) (iii), (iv) ==> (v), (vi) ::;::::::> (vii) ==> (viii) ==> (i). To see that (v) ==> (vi), let f(x) be a polynomial such that j(A) > 0. Let mA(x) denote the minimum polynomial of A. Then j(x) = g(x)mA(x) + r(x), where the degree of r(x) is less than m, the degree of mA(x). Since mA (A) = 0, r(A) = f(A) > 0, and this proves (vi). To see that (i) ==> (ix), suppose that D(A) has strongly connected components D1, D2, · · · ,D for some k > 1. Then we may assume that D has not arcs from V(D 1 ) to a vertex not in V(D1). Let ito i2, · · · , ir, where 1 ::;; it < i 2 · .. < i; :S: n, be integers representing the vertices of V{Dt)· Then A[i1 · · · i;li 1 · · · i;) = 0, contrary to (i). It remains t? show that (ix) ==> (vi), which follows from Proposition 1.1.1 (vii). O Definition 2.1.2 A matrix A E Mn is partly decomposable if A
"'p
A 11 where
Combinatorial Properties of Matrices
49
and where B is an l by l matrix and Dis an (nl) by (nl) matrix, for some 1 :5 l :5 n1. The matrix A is fully indecomposable if it is not partly decomposable.
=
Theorem 2.1.2 Let A (a;j) e M;t for some n > 1. The following are equivalent. (i) A is fully indecomposable. (ii) For any r with 1 :5 r < n, A does not have an r x (n r) submatrix which equals Orx(nr)•
(iii) Fbr any 0 =I X c V(D(A)), IN(X)I > lXI, where N(X) that D(A) has an arc (uv) from a vertex u eX to v}.
= {v e V(D(A))
Proof By definition, (i) .A :5p+(nq) = n (qp), as the first prows and the lastnq column of A' contain all positive entries of A'. Since A has total support, n = PA =>.A, and sop= q. It follows that A' has Op,nq as a submatrix, and so A is not fully indecomposable.
2.3
Nearly Reducible Matrices
By Theorem 2.1.1, a matrix A E Mt is irreducible if and only if D(A) is strong. A minimally strong digraph corresponds to a nearly reducible matrix. Definition 2.3.1 A digraph Dis a minimally strong digraph if Dis strongly connected, but for any arc e E E(D}, De is not strongly connected. For convenience, the graph K1 is regarded as a minimally strong digraph. A matrix A e Mt is nearly reducible if D(A) is a minimally strong digraph.
Combinatorial Properties of Matrices
55
As an example, a directed cycle is a minimally strong digraph. Some of the properties of minimally strong digraphs are listed in Proposition 2.3.1 and Proposition 2.3.2 below; they follow immediately from the definition.

Proposition 2.3.1 Let D be a minimally strong digraph. Each of the following holds.
(i) D has no loops and no parallel arcs.
(ii) Any directed cycle of D with length at least 4 has no chord. In other words, if C = v_1v_2···v_kv_1 is a directed cycle of D, then (v_i, v_j) ∉ E(D) − E(C), for any v_i, v_j ∈ V(C).
(iii) If |V(D)| ≥ 2, then D contains at least one vertex v such that d⁺(v) = d⁻(v) = 1. (Such a vertex is called a cyclic vertex of D.)
(iv) If D has a cut vertex v, then each v-component of D is minimally strong.

Definition 2.3.2 Let D be a digraph, and H a subgraph of D. The contraction D/H is the digraph obtained from D by identifying all the vertices in V(H) into a new single vertex, and by deleting all the arcs in E(H). If W ⊆ V(D) is a vertex subset, then write D/W = D/D[W].

Proposition 2.3.2 Let D be a minimally strong digraph and let W ⊆ V(D). If D[W] is strong, then both D[W] and D/W are minimally strong.

Proposition 2.3.3 If D is a minimally strong digraph with n = |V(D)| ≥ 2, then D must have at least two cyclic vertices.

Proof Since D is strong with n ≥ 2, D must have a directed cycle, and so the proposition holds when n = 2. By Proposition 2.3.1(iv) and by induction, we may assume that D has no cut vertices and that n ≥ 3. If every directed cycle of D has length two, then since D is minimally strong and n ≥ 3, at least one of the two vertices in a directed cycle of length two is a cut vertex of D. Therefore, D must have a directed cycle C of length m ≥ 3. We may assume that D ≠ C, and so n − m ≥ 1. Thus D/C has n − m + 1 ≥ 2 vertices, and so by induction, D/C has at least two cyclic vertices, v_1 and v_2 (say). If both v_1, v_2 ∈ V(D) − V(C), then they are both cyclic vertices of D. Thus we assume that v_1 ∈ V(D) − V(C) and v_2 is the contraction image of C in D/C. Then D has exactly two arcs between V(D) − V(C) and V(C). Since |V(C)| = m ≥ 3, C must contain a cyclic vertex of D. □

Definition 2.3.3 Let D = (V, E) be a digraph. A directed path P = v_0v_1···v_m is a branch of D if each of the following holds.
(B1) Neither v_0 nor v_m is a cyclic vertex of D;
(B2) each vertex in P⁰ = {v_1, v_2, ..., v_{m−1}} is a cyclic vertex of D (the vertices in P⁰ are called the internal vertices of the branch); and
(B3) D[V − P⁰] is strong.
The number m is the length of the branch. Note that P⁰ = ∅ or v_0 = v_m is possible.
Proposition 2.3.4 Let D be a minimally strong digraph with n = |V(D)| ≥ 3. Then either D is a directed cycle, or D has a branch with length at least 2.
Proof Assume that D is not a directed cycle. Let U = {u ∈ V(D) | u is not cyclic}. Then U ≠ ∅. Define a new digraph D′ = (U, E′) such that for any u, u′ ∈ U, (u, u′) ∈ E′ if and only if D has a directed (u, u′)-path. Since D is strong, D′ is also strong and has no cyclic vertices. By Proposition 2.3.3, D′ is not minimally strong, and so there must be an arc e′ ∈ E′ such that D′ − e′ is also strong. Since D is minimally strong, and since D′ − e′ is strong, e′ must correspond to a branch in D, which completes the proof. □

Theorem 2.3.5 (Hartfiel, [118]) For integers n > m ≥ 1, let F_1 ∈ B_{n−m,m} be such that F_1 = E_{1,s} for some s with 1 ≤ s ≤ m, let F_2 ∈ B_{m,n−m} be such that F_2 = E_{t,n−m} for some t with 1 ≤ t ≤ m, and let A_0 ∈ M_{n−m}^+ be the (n − m) × (n − m) matrix with 1's on the subdiagonal and 0's elsewhere:

A_0 = [ 0 0 0 ··· 0 0 ]
      [ 1 0 0 ··· 0 0 ]
      [ 0 1 0 ··· 0 0 ]
      [ ⋮         ⋱   ]
      [ 0 0 0 ··· 1 0 ].

Then every nearly reducible matrix A ∈ B_n is permutation similar to a matrix B ∈ M_n^+ with the form

B = [ A_0  F_1 ]
    [ F_2  A_1 ],                                                    (2.3)

where A_1 = (a′_{ij}) ∈ M_m^+ is nearly reducible with a′_{ts} = 0.

Proof This is trivial if D(A) is a directed cycle (m = 1 and A_1 = A(K_1)). Hence assume n ≥ 3; by Proposition 2.3.4, D(A) has a branch, and so there is a permutation matrix P such that PAP^T has the form of B in (2.3). □

Theorem 2.3.6 (Brualdi and Hedrick, [29]) Let D be a minimally strong digraph with n = |V(D)| ≥ 2 vertices. Then

n ≤ |E(D)| ≤ 2(n − 1).

Moreover, |E(D)| = n if and only if D is a directed cycle; and |E(D)| = 2(n − 1) if and only if D is obtained from a tree T by replacing each edge of T by a pair of arcs with opposite directions.
Proof Since D is strong, d⁺(v) ≥ 1 for each v ∈ V(D). Thus |E(D)| ≥ |V(D)| = n, with equality if and only if every vertex of D is cyclic, in which case D must be a directed cycle. It remains to prove the upper bound.

The upper bound holds trivially for n = 1 and n = 2. Assume n ≥ 3. By Proposition 2.3.4, D has a branch P = v_0v_1···v_t with t ≥ 2 such that D′ = D − P⁰ is strong. By induction, |E(D′)| ≤ 2(|V(D′)| − 1). Since |E(D)| = |E(D′)| + t and |V(D)| = |V(D′)| + t − 1,

|E(D)| = |E(D′)| + t ≤ 2(|V(D′)| − 1) + t = 2(n − 1) − (t − 2) ≤ 2(n − 1).

Assume |E(D)| = 2(n − 1). Then t = 2 and, by induction, D′ is obtained from a tree T′ with n − 1 vertices by replacing each edge of T′ by a pair of oppositely oriented arcs. If v_0 ≠ v_t, then there is a directed (v_0, v_t)-path P′ in D′ with length at least one. Since P is a (v_0, v_t)-path, all the arcs in P′ may be deleted from D, and the resulting digraph is still strong, contrary to the assumption that D is minimally strong. Hence v_0 = v_t, and so the theorem follows by induction. □
Corollary 2.3.6 Let n ≥ 2 and k > 0 be integers. Then there exists a minimally strong digraph D with |V(D)| = n and |E(D)| = k if and only if n ≤ k ≤ 2(n − 1).

Proof The only if part follows from Theorem 2.3.6. Assume n ≤ k ≤ 2(n − 1). Construct a digraph D on the vertex set V = {v_1, v_2, ..., v_n} with these arcs:

E = {(v_1, v_{i+1}), (v_{i+1}, v_1) | 1 ≤ i ≤ k − n} ∪ {(v_{k−n+j}, v_{k−n+j+1}) | 1 ≤ j ≤ 2n − k − 1} ∪ {(v_n, v_{k−n+1})}.

It is routine to check that D is a minimally strong digraph with n vertices and k arcs. □
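The arc set used in the proof of Corollary 2.3.6, as we have recovered it from the text, is easy to generate mechanically. The following sketch (our illustration; vertex v_i is represented by the integer i) builds the digraph and confirms the arc count.

```python
def corollary_digraph(n, k):
    # digons v1 <-> v_{i+1} for 1 <= i <= k-n, a directed path
    # v_{k-n+1} -> ... -> v_n, and the closing arc v_n -> v_{k-n+1}
    assert n <= k <= 2 * (n - 1)
    arcs = []
    for i in range(1, k - n + 1):
        arcs += [(1, i + 1), (i + 1, 1)]
    for j in range(k - n + 1, n):
        arcs.append((j, j + 1))
    arcs.append((n, k - n + 1))
    return arcs
```

For k = n the construction degenerates to the directed cycle v_1 → v_2 → ··· → v_n → v_1, and for k = 2(n − 1) it gives a doubled tree, matching the two extremal cases of Theorem 2.3.6.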
Definition 2.3.4 For a matrix A ∈ M_n, the density of A, denoted by ||A||, is the sum of all entries of A. Note that when A ∈ B_n, ||A|| is also the number of positive entries of A.

Theorem 2.3.6 and Corollary 2.3.6 can be stated in terms of (0,1)-matrices. The proof is straightforward and is omitted.

Theorem 2.3.7 Let A ∈ B_n be a nearly reducible matrix with n ≥ 2. Then each of the following holds.
(i) n ≤ ||A|| ≤ 2(n − 1).
(ii) ||A|| = n if and only if A = A(D) for a directed cycle D.
(iii) ||A|| = 2(n − 1) if and only if A = A(G) for a tree G.
2.4
Nearly Decomposable Matrices
In order to better understand the behavior of fully indecomposable matrices, we investigate the properties of those matrices that are fully indecomposable but for which replacing any positive entry by a zero entry results in a partly decomposable matrix. For notational convenience, we let E_{ij} denote the matrix whose (i,j)-entry is one and whose other entries are zero.

Definition 2.4.1 A matrix A = (a_{ij}) ∈ B_n is nearly decomposable if A is fully indecomposable and for each a_{pq} > 0, A − a_{pq}E_{pq} is partly decomposable.

Theorem 2.4.1 Let A ∈ B_n be a nearly decomposable matrix. Each of the following holds.
(i) There exist permutation matrices P and Q such that PAQ ≥ I.
(ii) For each pair of permutation matrices P and Q such that PAQ ≥ I, PAQ − I is nearly reducible.

Proof By Theorem 2.1.3(v), there exist permutation matrices P and Q such that PAQ ≥ I. For the sake of simplicity, we may assume that A ≥ I. By Theorem 2.1.3(iv), A − I is irreducible. If C ∈ B_n is irreducible with C ≤ A − I, then by Theorem 2.1.3(iv), C + I ≤ A is fully indecomposable. Since A is nearly decomposable, it must be that C = A − I, and so A − I is nearly reducible. □

Example 2.4.1 If B is a nearly reducible matrix, then by Theorem 2.1.3(iv), B + I is fully indecomposable. But B + I may not be nearly decomposable. Let

B = [ 0 1 1 ]            C = [ 0 1 1 ]
    [ 1 0 0 ]    and         [ 1 1 0 ]
    [ 1 0 0 ]                [ 1 0 1 ].

Then B is nearly reducible and C is fully indecomposable. Note that C ≤ B + I and C ≠ B + I. Hence B + I is not nearly decomposable.
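Near decomposability can be tested directly from Definition 2.4.1 for small matrices. In the Python sketch below (ours, not the book's), full indecomposability is tested via the zero-submatrix criterion of Theorem 2.1.2(ii).

```python
from itertools import combinations

def fully_indecomposable(A):
    # no r x (n-r) all-zero submatrix for any 1 <= r < n
    n = len(A)
    return all(
        any(A[i][j] for i in rows for j in cols)
        for r in range(1, n)
        for rows in combinations(range(n), r)
        for cols in combinations(range(n), n - r))

def nearly_decomposable(A):
    # fully indecomposable, and flipping any 1-entry to 0 destroys that
    if not fully_indecomposable(A):
        return False
    n = len(A)
    for p in range(n):
        for q in range(n):
            if A[p][q]:
                B = [row[:] for row in A]
                B[p][q] = 0
                if fully_indecomposable(B):
                    return False
    return True
```

For instance, J_2 is nearly decomposable (each line has exactly two 1's, so any deletion leaves a line with a single 1), while J_3 is not.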
Theorem 2.3.5 can be applied to obtain a recursive standard form for nearly decomposable matrices. The following notation for A_0 will be used throughout this section. For integers n > m ≥ 1, let A_0 ∈ M_{n−m}^+ be the (n − m) × (n − m) matrix with 1's on the main diagonal and the subdiagonal, and 0's elsewhere:

A_0 = [ 1 0 0 ··· 0 0 ]
      [ 1 1 0 ··· 0 0 ]
      [ 0 1 1 ··· 0 0 ]
      [ ⋮         ⋱   ]
      [ 0 0 0 ··· 1 1 ].
Combinatorial Properties of Matrices
59
Lemma 2.4.1 follows from Theorem 2.2.5, and so its proof is left as an exercise.

Lemma 2.4.1 Let B be a matrix of the form

B = [ A_0   F_1′ ]
    [ F_2′  B_1  ].                                                  (2.4)

If B_1 = (b_{ij}) ∈ M_m^+ is a fully indecomposable matrix, if F_1′ has a 1-entry in the first row, and if F_2′ has a 1-entry in the last column, then B is fully indecomposable.

Lemma 2.4.2 Let B_1 = (b_{ij}) ∈ M_m^+ denote the submatrix in (2.4). If B in (2.4) is nearly decomposable, then each of the following holds.
(i) B_1 is nearly decomposable.
(ii) F_1′ = E_{1,s} for some s with 1 ≤ s ≤ m, and F_2′ = E_{t,n−m} for some t with 1 ≤ t ≤ m.
(iii) If m ≥ 2, then b_{ts} = 0.

Proof If some 1-entry of B_1 can be replaced by a 0-entry so as to result in a fully indecomposable matrix, then by Lemma 2.4.1, B is not nearly decomposable, contrary to the assumption of Lemma 2.4.2. Therefore, B_1 must be nearly decomposable. Similarly, by the assumption that B is nearly decomposable, Lemma 2.4.2(ii) must hold.

It remains to show Lemma 2.4.2(iii). Suppose m ≥ 2. Since m ≥ 2 and since B_1 is fully indecomposable, B_1 ≠ O. Given B in the form of (2.4), every nonzero diagonal of B_1 can be extended to a nonzero diagonal of B. If b_{ts} = 1, then let B′ be the matrix obtained from B by replacing the 1-entry b_{ts} by a 0-entry. Since B_1 is fully indecomposable, b_{ts} lies in a nonzero diagonal L of B_1. Removing b_{ts} from L, adding the only 1-entry in each of F_1′ and F_2′, and utilizing the non-main-diagonal 1-entries of A_0, we obtain a nonzero diagonal of B′. Therefore, B′ has total support. It is easy to check that the reduced associated bipartite graph of B′ is connected, and so by Theorem 2.2.5, B′ is fully indecomposable, contrary to the assumption that B is nearly decomposable. Thus b_{ts} = 0. □
Theorem 2.4.2 (Hartfiel, [118]) For integers n > m ≥ 1, let F_1 ∈ B_{n−m,m} be such that F_1 = E_{1,s} for some s with 1 ≤ s ≤ m, and let F_2 ∈ B_{m,n−m} be such that F_2 = E_{t,n−m} for some t with 1 ≤ t ≤ m. Then every nearly decomposable matrix A ∈ B_n is permutation equivalent to a matrix B ∈ M_n^+ with the form

B = [ A_0  F_1 ]
    [ F_2  A_1 ],                                                    (2.5)

where either m = 1 and A_1 = 0, or m ≥ 3 and A_1 = (a′_{ij}) ∈ M_m(0,1) is nearly decomposable with a′_{ts} = 0.

Proof By Theorem 2.1.3(v), A ~_p A′ with A′ ≥ I. By Theorem 2.4.1(ii), A′ − I is nearly reducible. By Theorem 2.3.5, there is a permutation matrix P such that P(A′ − I)P^T = PA′P^T − I has the form in (2.3), and so PA′P^T has the form in (2.5). Assume that m ≥ 2; then by Lemma 2.4.2, A_1 is nearly decomposable with at least one 0-entry. Since a nearly decomposable matrix in M_2 has no 0-entries, m ≠ 2, and so m ≥ 3. □

Example 2.4.2 A matrix A ∈ B_n with the form in Theorem 2.4.2 may not be nearly decomposable. Consider
a nearly decomposable matrix A_1 ∈ B_4 together with a matrix B ∈ B_5 of the form (2.5) built from A_1. Then A_1 is nearly decomposable. However, B − E_{5,2} is fully indecomposable, and so B, having the form in Theorem 2.4.2, is not nearly decomposable.

Theorem 2.4.3 (Minc, [194]) Let A ∈ B_n be a nearly decomposable matrix. Then

||A|| ≥ n if n = 1 and ||A|| ≥ 2n if n ≥ 2;  ||A|| ≤ 3n − 2 if n ≤ 2 and ||A|| ≤ 3(n − 1) if n ≥ 3.    (2.6)

Proof The upper bound is trivial if n ≤ 2, and so we assume that n ≥ 3. By Theorem 2.1.3(iv) and by Theorem 2.3.7, ||A − I|| ≤ 2(n − 1), with equality if and only if there is a tree T on n vertices such that A − I = A(T). Note that if n = 2, then T has two pendant vertices (vertices of degree one); and if n ≥ 3, then T has at most n − 1 pendant vertices. It follows that

||A − I|| ≤ 2(n − 1) if n = 2, and ||A − I|| ≤ 2(n − 1) − 1 if n ≥ 3,

which implies the upper bound in (2.6). The lower bound in (2.6) is trivial if n = 1. When n ≥ 2, note that each row of a fully indecomposable matrix has at least two 1-entries, and so the lower bound in (2.6) follows. □
D Theorem 2.4.4 Let n 2! 3 and k > 0 be integers such that 2n ::::; k ::::; 3(n 1). Then there exists A e B,. such that A is nearly decomposable and IIAII = k. Sketch of Proof Write k = 2(n  1) + s for some s with 2 ::::; s ::::; n  1. Let T• denote a tree on n vertices with exactly s vertices of degree one. (For example, T• can be obtained by subdividing edges in a K 1 , 8 .) LetT: denote the graph obtained from T' by attaching a loop at each vertex of degree one of T•. Then A(T:) is nearly decomposable and IIA(T:)II = k. D
2.5
Permanent
Definition 2.5.1 Let n ≥ m ≥ 1 be integers and let A = (a_{ij}) ∈ M_{m,n}. The permanent of A is

Per(A) = Σ_{σ ∈ P_m^n} a_{1σ(1)} a_{2σ(2)} ··· a_{mσ(m)},

where P_m^n is the set of all m-permutations of the integers 1, 2, ..., n.

Both Proposition 2.5.1 and Proposition 2.5.2 follow directly from the related definitions.
Proposition 2.5.1 Let D be a directed graph with |V(D)| = n ≥ 1 vertices and without parallel arcs, and let G be a simple graph on n = 2m ≥ 2 vertices.
(i) Let A = A(D). Each term in Per(A) is a one, which corresponds to a spanning subgraph of D consisting of disjoint directed cycles. Thus Per(A) counts the number of such subgraphs of D.
(ii) Let A = A(G). If G is a bipartite graph with

A = [ O    B ]
    [ B^T  O ]

for some B ∈ M_m, then Per(B) counts the number of perfect matchings of G.

Proposition 2.5.2 Let A = (a_{ij}) ∈ M_{m,n} with n ≥ m ≥ 1. Each of the following holds.
(i) Let c be a scalar and let A′ be obtained from A by multiplying the ith row of A by c. Then Per(A′) = c Per(A).
(ii) Fix i with 1 ≤ i ≤ m. Suppose for each j with 1 ≤ j ≤ n, a_{ij} = a′_{ij} + a″_{ij}. Let A′ and A″ be obtained from A by replacing the (i,j)th entry of A by a′_{ij} and a″_{ij}, respectively. Then Per(A) = Per(A′) + Per(A″).
(iii) If A ~_p B, then Per(A) = Per(B).
(iv) If m = n, then Per(A) = Per(A^T).
(v) If D_1 ∈ M_m and D_2 ∈ M_n are diagonal matrices, then Per(D_1AD_2) = Per(D_1) Per(A) Per(D_2).
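Definition 2.5.1 translates directly into code. The brute-force sketch below (our illustration only — it is exponential in m) sums over all m-permutations of the n column indices.

```python
from itertools import permutations
from math import prod

def per(A):
    # permanent of an m x n matrix (n >= m), straight from Definition 2.5.1
    m, n = len(A), len(A[0])
    return sum(prod(A[i][s[i]] for i in range(m))
               for s in permutations(range(n), m))
```

For A = J_2 this gives Per(A) = 2, and for a 2 × 3 all-ones matrix it gives 6, the number of 2-permutations of three columns.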
The following examples indicate that permanent and determinant behave quite differently.

Example 2.5.1 The permanent as a scalar function is not multiplicative: in general Per(AB) ≠ Per(A) Per(B). For instance, Per(J_2) = 2, while Per(J_2J_2) = Per(2J_2) = 8 ≠ Per(J_2) Per(J_2) = 4.

Example 2.5.2 We can also verify the following:

Per [ a_11  a_12 ]  =  det [  a_11  a_12 ]
    [ a_21  a_22 ]         [ −a_21  a_22 ].

However, Polya in 1913 indicated that one cannot in general compute Per(A) by computing det(A′), where A′ is obtained from A by changing the signs of some entries of A. Consider A_3 = (a_{ij}) ∈ M_3. Then Per(A_3) and det(A_3) each consist of the six terms a_{1i}a_{2j}a_{3k} taken over the permutations (i, j, k) of (1, 2, 3); in det(A_3) the three terms corresponding to odd permutations carry negative signs. Assume that the signs of some entries can be changed so that the last three terms in det(A_3) become positive. Then an odd number of sign changes is needed to make the last three terms positive. On the other hand, to keep the first three terms unchanged in sign, an even number of sign changes would be needed. A contradiction obtains. For examples in M_n with n > 3, we can consider the direct sum of A_3 and I_{n−3} and repeat the argument above.

Theorem 2.5.1 (Laplace Expansion) Let A = (a_{ij}) ∈ M_{m,n} with n ≥ m ≥ 1. Let α = (i_1, i_2, ..., i_k) be a fixed k-tuple of indices with 1 ≤ i_1 < ··· < i_k ≤ m, and let β = (j_1, j_2, ..., j_k) denote a general k-tuple of indices with 1 ≤ j_1 < ··· < j_k ≤ n. Then

Per(A) = Σ_{all possible β} Per A[α|β] · Per A(α|β).

In particular, we have the formula of the expansion of Per(A) by the ith row:

Per(A) = Σ_{j=1}^n a_{ij} Per A(i|j).

Sketch of Proof Each term in Per A[α|β] multiplied by each term in Per A(α|β) is one term in Per(A). For a fixed β, there are k! terms in Per A[α|β] and C(n−k, m−k)(m − k)! terms in Per A(α|β). There are C(n, k) different choices of β. Therefore, there are

C(n, k) k! C(n−k, m−k) (m − k)! = C(n, m) m!

terms in the right-hand side expression, which equals the number of all terms in Per(A). □
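The row-expansion case of Theorem 2.5.1 gives a simple recursion; the following sketch (ours, not the book's) expands along the first row and also works for rectangular matrices with n ≥ m.

```python
def per_expand(A):
    # Per(A) = sum_j a_{1j} * Per(A(1|j)), with a 1 x n base case
    if len(A) == 1:
        return sum(A[0])
    total = 0
    for j, a in enumerate(A[0]):
        if a:
            minor = [row[:j] + row[j + 1:] for row in A[1:]]
            total += a * per_expand(minor)
    return total
```

The `if a:` test skips zero entries, which makes the recursion considerably faster on sparse (0,1)-matrices.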
To introduce Ryser's formula for computing Per(A), we need a weighted version of the Principle of Inclusion and Exclusion (Theorem 2.5.2 below). Let S be a set with |S| = n ≥ 1, let F be a field and let W : S → F be a function (called the weight function). Let P_1, P_2, ..., P_N be N properties involving elements of S and denote P = {P_1, ..., P_N}. For any subset {P_{i_1}, P_{i_2}, ..., P_{i_r}} ⊆ P, write

W(P_{i_1}, P_{i_2}, ..., P_{i_r}) = Σ_{s having all of P_{i_1}, ..., P_{i_r}} W(s)    (2.7)

and

W(r) = Σ_{all possible {P_{i_1}, ..., P_{i_r}} ⊆ P} W(P_{i_1}, P_{i_2}, ..., P_{i_r}),  with W(0) = Σ_{s∈S} W(s).

The following identities are needed in the proof of the next theorem:

C(m+j, m) C(t, m+j) = C(t, m) C(t−m, j),    (2.8)

Σ_{j=0}^{t−m} (−1)^j C(t−m, j) = (1 − 1)^{t−m} = 0  for t > m.    (2.9)
Theorem 2.5.2 For an integer m with 1 ≤ m ≤ N, let

E(m) = Σ { W(s) | s ∈ S and s has exactly m properties out of P }.

Then

E(m) = Σ_{j=0}^{N−m} (−1)^j C(m+j, m) W(m+j).

Sketch of Proof Assume that an s ∈ S satisfies exactly t properties out of P. Consider the contribution of s to the right-hand side of the equality. If t < m, then s makes no contribution; if t = m, then the contribution of s is W(s); and if t > m, then the contribution of s is

[ Σ_{j=0}^{t−m} (−1)^j C(m+j, m) C(t, m+j) ] W(s) = C(t, m) [ Σ_{j=0}^{t−m} (−1)^j C(t−m, j) ] W(s) = C(t, m) · 0 · W(s) = 0.

The theorem then follows by (2.8) and (2.9). □
Corollary 2.5.2 With the same notation as in Theorem 2.5.2, we can write

E(0) = W(0) − W(1) + W(2) − ··· + (−1)^N W(N).
Theorem 2.5.3 (Ryser, [197]) Let A ∈ M_{m,n} with n ≥ m ≥ 1. For each r with 1 ≤ r ≤ n, let A_r denote a matrix obtained from A by replacing some r columns of A by all-zero columns, let S(A_r) denote the product of the m row sums of A_r, and let ΣS(A_r) denote the sum of the S(A_r)'s with the summation taken over all possible A_r's. Then

Per(A) = Σ_{j=0}^{m−1} (−1)^j C(n−m+j, j) ΣS(A_{n−m+j}).

Proof Let S be the set of all m-tuples (j_1, j_2, ..., j_m) of column labels with 1 ≤ j_i ≤ n for each i. Define

W((j_1, j_2, ..., j_m)) = a_{1j_1} a_{2j_2} ··· a_{mj_m}

and P_i = {(j_1, j_2, ..., j_m) ∈ S | i ∉ {j_1, j_2, ..., j_m}}, 1 ≤ i ≤ n. Then W(r) = ΣS(A_r), and Per(A) = E(n−m) is the total weight of the tuples that omit exactly n − m column labels (that is, the tuples with distinct entries), and so Theorem 2.5.3 follows from Theorem 2.5.2. □
Corollary 2.5.3 With the same notation as in Theorem 2.5.3, when m = n we can write

Per(A) = S(A) − ΣS(A_1) + ΣS(A_2) − ··· + (−1)^{n−1} ΣS(A_{n−1}).
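Corollary 2.5.3 can be implemented verbatim. The sketch below (our code) evaluates, for a square matrix, the alternating sum of products of row sums over all choices of zeroed columns.

```python
from itertools import combinations
from math import prod

def per_ryser(A):
    # Per(A) = S(A) - sum S(A_1) + sum S(A_2) - ... (Corollary 2.5.3)
    n = len(A)
    total = 0
    for r in range(n):                      # r = number of zeroed columns
        for T in combinations(range(n), r):
            keep = [j for j in range(n) if j not in T]
            total += (-1) ** r * prod(sum(row[j] for j in keep) for row in A)
    return total
```

For J_3 this evaluates to 27 − 24 + 3 = 6 = 3!, agreeing with the brute-force definition while using 2^n rather than n! products.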
Both Theorem 2.5.1 and Theorem 2.5.3 are not easy to apply in actual computation. Therefore, estimating upper and lower bounds of Per(A) becomes important. The proofs of the following lemmas are left as exercises.

Lemma 2.5.1 Let A = (a_{ij}) ∈ B_{m,n} with n ≥ m ≥ 1. If for each i, Σ_{j=1}^n a_{ij} ≥ m, then Per(A) > 0.

Lemma 2.5.2 Let A = (a_{ij}) ∈ B_{m,n} with n ≥ m ≥ 1. If for each k with 1 ≤ k ≤ m − 1, every k × n submatrix of A has at least k + 1 nonzero columns, then every (m−1) × (n−1) submatrix A′ of A satisfies Per(A′) > 0.

Theorem 2.5.4 (Hall–Mann–Ryser, [116]) Let A ∈ B_{m,n} with n ≥ m ≥ 1 be such that each row of A has at least t 1-entries. Then each of the following holds.
(i) If t ≥ m, then Per(A) ≥ t!/(t − m)!.
(ii) If t ≤ m and if Per(A) > 0, then Per(A) ≥ t!.

Sketch of Proof By Lemma 2.5.1, we may assume that Per(A) > 0 for each t with 1 ≤ t ≤ n. Argue by induction on m. The theorem holds trivially when m = 1. Assume m > 1.

Case 1: For some 1 ≤ h ≤ m − 1 and B ∈ M_h, A ~_p A_1, where

A_1 = [ B  O ]
      [ C  D ].
Then each row of B must contain all of its t positive entries, and so t ≤ h ≤ m − 1. Moreover, Per(A) = Per(B) Per(D) > 0. By induction, Per(B) ≥ t!, and so Per(A) ≥ t!.

Case 2: Case 1 does not occur. Then for each k with 1 ≤ k ≤ m − 1, every k × n submatrix of A must have at least k + 1 nonzero columns. By Lemma 2.5.2, each submatrix A(i|j), obtained from A by deleting the ith row and the jth column, satisfies Per(A(i|j)) > 0. Note that each row of A(1|j) has at least t − 1 positive entries. By induction,

Per(A(1|j)) ≥ (t−1)!  if t − 1 ≤ m − 1,  and  Per(A(1|j)) ≥ (t−1)!/(t−m)!  if t − 1 ≥ m − 1.

It follows from Σ_{j=1}^n a_{1j} ≥ t that when t ≤ m,

Per(A) = Σ_{j=1}^n a_{1j} Per(A(1|j)) ≥ Σ_{j=1}^n a_{1j} (t−1)! ≥ t!;

and when t ≥ m,

Per(A) = Σ_{j=1}^n a_{1j} Per(A(1|j)) ≥ Σ_{j=1}^n a_{1j} (t−1)!/(t−m)! ≥ t!/(t−m)!.

The theorem now follows by induction. □
Theorem 2.5.5 (Minc, [195]) Let A ∈ B_n be fully indecomposable. Then

Per(A) ≥ ||A|| − 2n + 2.    (2.10)

An improvement of Theorem 2.5.5 can be found in Exercise 2.14. Gibson [99] gave another improvement.

Theorem 2.5.6 (Gibson, [99]) Let A ∈ B_n be fully indecomposable such that each row of A has at least t positive entries. Then

Per(A) ≥ ||A|| − 2n + 2 + Σ_{i=1}^{t−1} (i! − 1).

An important upper bound on Per(A) was conjectured by Minc in [196] and proved by Bregman [16]. The proof below needs two lemmas and is due to Schrijver [230].
Lemma 2.5.3 Let A ∈ B_n with Per(A) > 0. Let S denote the set of permutations σ on n elements such that σ ∈ S if and only if Π_{i=1}^n a_{iσ(i)} = 1. Then each of the following holds.

(i) Π_{i=1}^n Π_{k=1}^n (Per A(i|k))^{a_{ik} Per A(i|k)} = Π_{σ∈S} Π_{i=1}^n Per A(i|σ(i)).

(ii) If r_1, ..., r_n are the row sums of A, then Π_{i=1}^n r_i^{Per(A)} = Π_{σ∈S} Π_{i=1}^n r_i.

Sketch of Proof For fixed i and k, the number of Per A(i|k) factors on the left-hand side of (i) is Per A(i|k) when a_{ik} = 1 and 0 otherwise; and the number of Per A(i|k) factors on the right-hand side of (i) is the number of permutations σ ∈ S such that σ(i) = k, which is Per A(i|k) when a_{ik} = 1 and 0 otherwise. For (ii), the number of factors r_i on either side equals Per(A), since |S| = Per(A). □

Lemma 2.5.4 Assume that 0^0 = 1. If t_1, t_2, ..., t_r are non-negative real numbers, then

( (t_1 + t_2 + ··· + t_r)/r )^{t_1 + t_2 + ··· + t_r} ≤ t_1^{t_1} t_2^{t_2} ··· t_r^{t_r}.

Sketch of Proof By the convexity of the function x log x,

( (Σ_k t_k)/r ) log( (Σ_k t_k)/r ) ≤ (1/r) Σ_k t_k log t_k,

and so the lemma follows. □
Theorem 2.5.7 (Minc–Bregman, [16]) Let A ∈ B_n be a matrix with row sums r_1, r_2, ..., r_n. Then

Per(A) ≤ Π_{i=1}^n (r_i!)^{1/r_i}.

Proof Argue by induction on n. Expanding Per(A) along row i as a sum of r_i terms and applying Lemma 2.5.4,

(Per(A))^{n Per(A)} = Π_{i=1}^n (Per(A))^{Per(A)} ≤ Π_{i=1}^n [ r_i^{Per(A)} Π_{k=1}^n (Per A(i|k))^{a_{ik} Per A(i|k)} ].

By Lemma 2.5.3,

(Per(A))^{n Per(A)} ≤ Π_{σ∈S} [ ( Π_{i=1}^n r_i ) Π_{i=1}^n Per A(i|σ(i)) ].
Apply induction to each Per A(i|σ(i)) to get

Π_{i=1}^n Per A(i|σ(i)) ≤ Π_{i=1}^n [ ( Π_{j≠i, a_{jσ(i)}=0} (r_j!)^{1/r_j} ) ( Π_{j≠i, a_{jσ(i)}=1} ((r_j − 1)!)^{1/(r_j−1)} ) ]
                        = Π_{j=1}^n [ (r_j!)^{(n−r_j)/r_j} (r_j − 1)! ],

since for each fixed j and each σ ∈ S, a_{jσ(i)} = 1 for exactly r_j − 1 indices i ≠ j, and a_{jσ(i)} = 0 for the remaining n − r_j indices i ≠ j. Combining the last two displays and using r_j (r_j − 1)! = r_j!,

(Per(A))^{n Per(A)} ≤ Π_{σ∈S} Π_{j=1}^n (r_j!)^{n/r_j} = Π_{j=1}^n (r_j!)^{(n/r_j) Per(A)},

and taking n Per(A)-th roots yields Per(A) ≤ Π_{j=1}^n (r_j!)^{1/r_j}. □

For matrices with a prescribed number of 0-entries, let U(n, τ) denote the set of matrices in B_n with exactly τ entries equal to 0, and let μ(n, τ) denote the maximum of Per(A) over A ∈ U(n, τ). If τ > n² − n, then some row of every A ∈ U(n, τ) is all zeros, and so μ(n, τ) = 0.
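The Minc–Bregman bound of Theorem 2.5.7 is easy to check numerically on small matrices. The sketch below (ours) compares a brute-force permanent with the bound Π_i (r_i!)^{1/r_i}, assuming every row sum is positive.

```python
from itertools import permutations
from math import factorial, prod

def per(A):
    # brute-force permanent of a square matrix
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def bregman_bound(A):
    # prod over rows of (r_i!)^(1/r_i); rows are assumed nonzero
    return prod(factorial(sum(row)) ** (1.0 / sum(row)) for row in A)
```

For A = J_n the bound is attained: both sides equal n!, which also illustrates the equality case.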
Theorem 2.5.11 (Brualdi, Goldwasser and Michael, [28]) Let A ∈ U(n, τ) with τ ≤ n² − n. Let q = n² − τ and r = ⌊q/n⌋. Then

Per(A) ≤ (r!)^{((r+1)n − q)/r} ((r+1)!)^{(q − rn)/(r+1)}.    (2.11)
Sketch of Proof By Theorem 2.5.7, we need to estimate Π_{i=1}^n (r_i!)^{1/r_i} under the conditions that the r_i are positive integers and Σ_{i=1}^n r_i = q. To do that, we first establish the following inequality: for integers m, t with m ≥ 2 and t ≥ 1,

((m + t − 1)!)^{1/(m+t−1)} (m!)^{1/m} > ((m + t)!)^{1/(m+t)} ((m − 1)!)^{1/(m−1)}.    (2.12)

For any integer k ≥ 2, we have k² > (k − 1)(k + 1), and so

k^{2k(k−1)} > [(k − 1)(k + 1)]^{k(k−1)}.

Multiplying inequalities of this type and cancelling the common factors from both the numerators and the denominators, one obtains

(s!)^{2/s} > ((s − 1)!)^{1/(s−1)} ((s + 1)!)^{1/(s+1)},    (2.13)

and hence

Π_{s=m}^{m+t−1} (s!)^{2/s} > Π_{s=m}^{m+t−1} ((s − 1)!)^{1/(s−1)} ((s + 1)!)^{1/(s+1)}.    (2.14)

Now (2.12) follows from (2.14) after cancellation. By (2.12), the product Π_{i=1}^n (r_i!)^{1/r_i} subject to Σ r_i = q is largest when the row sums are as equal as possible, that is, when each r_i equals r or r + 1 with r = ⌊q/n⌋. Since exactly q − rn of the r_i then equal r + 1 and the remaining (r + 1)n − q of them equal r, the bound (2.11) follows.
The vector s^(1) is obtained from s′ and s″ by decreasing the μth component of s′ by 1 and increasing the νth component by 1, according to the two different cases as follows:

(2.6.3A) when s′_μ − s″_μ ≥ s′_ν − s″_ν > 0, and
(2.6.3B) when s′_ν − s″_ν ≥ s′_μ − s″_μ > 0,

s^(1) = (s′_1, ..., s′_{μ−1}, s′_μ − 1, s′_{μ+1}, ..., s′_{ν−1}, s′_ν + 1, s′_{ν+1}, ..., s′_n)^T,

with the roles of μ and ν interchanged in case (2.6.3B).
We can routinely verify that s″ ≤ s^(1) < s′ (Exercise 2.19). Moreover, there exist an integer k ≥ 1 and non-negative n-dimensional vectors s^(1), s^(2), ..., s^(k) such that

s″ = s^(k) < ··· < s^(2) < s^(1) < s′.

For example, one such chain contains the vectors (9, 8, 4, 4, 4, 3, 1)^T, (9, 7, 4, 4, 4, 4, 1)^T, (9, 6, 4, 4, 4, 4, 2)^T and (7, 6, 4, 4, 4, 4, 4)^T, with each factor W(s^(i), s^(i+1)) a product of binomial coefficients. Therefore,

|U(r, s)| ≥ W(s^(1), s^(2)) W(s^(2), s^(3)) ··· W(s^(k−1), s^(k)) = 12600.
In [268], Wan refined the concept of total chains and improved the lower bound in Theorem 2.6.2. An application of Theorem 2.6.2 can be found in Exercise 2.20.

Example 2.6.4 (Ryser, [223]) Let

A_1 = [ 1 0 ]            A_2 = [ 0 1 ]
      [ 0 1 ]    and           [ 1 0 ].

Suppose A ∈ U(r, s) contains a submatrix A_1. Let A′ be obtained from A by changing this submatrix A_1 into A_2, while keeping all other entries of A unchanged. Then A′ ∈ U(r, s). The operation of replacing the submatrix A_1 by A_2 to get A′ from A is called an interchange, and we say that A′ is obtained from A by performing an interchange on A_1.

Theorem 2.6.3 (Ryser, [223]) For A, A′ ∈ U(r, s), A′ can be obtained from A by performing a finite sequence of interchanges.

Proof Without loss of generality, we assume that both r = (r_1, ..., r_m)^T and s = (s_1, ..., s_n)^T are monotone. Argue by induction on n. Note that the theorem holds trivially if n = 1, and so we assume that n ≥ 2, and that the theorem holds for smaller values of n.
Denote A = (a_{ij}) and A′ = (a′_{ij}). Consider the m × 2 matrix

M = [ a_{1n}  a′_{1n} ]
    [ a_{2n}  a′_{2n} ]
    [   ⋮       ⋮     ]
    [ a_{mn}  a′_{mn} ].

The rows of M have 4 types: (1,1), (1,0), (0,1) and (0,0). Since A, A′ ∈ U(r, s), A and A′ have the same column sums, and so rows of Type (1,0) and (0,1) must occur in pairs. If M does not have a row of Type (1,0), then M can only have rows of Type (1,1) or (0,0), which implies that the two columns of M are identical. The theorem obtains by applying induction to the two submatrices consisting of the first n − 1 columns of A and A′.

Therefore we assume that M has some rows of Type (1,0) and (0,1). Let j = j(A, A′) be the smallest row label such that Row j of M is either (1,0) or (0,1). Without loss of generality, assume that (a_{jn}, a′_{jn}) = (0, 1). By the minimality of j, there exists an integer k with j + 1 ≤ k ≤ m such that a_{kn} = 1 and a′_{kn} = 0. Since a′_{jn} = 1 and since A and A′ have the same row sums, A has at least one 1-entry in Row j. Let a_{ji_1}, ..., a_{ji_l} be the 1-entries of A in the jth row. Then 1 ≤ l ≤ r_j. Since r is monotone, r_j ≥ r_k. As a_{jn} = 0 and a_{kn} = 1, we may assume that a_{kt} = 0 for some t ∈ {i_1, i_2, ..., i_l}. Thus A has a submatrix

B = [ a_{jt}  a_{jn} ] = [ 1 0 ]
    [ a_{kt}  a_{kn} ]   [ 0 1 ].

Let A_1 be the matrix obtained from A by performing an interchange on B. Note that j(A_1, A′) ≥ j(A, A′) + 1, which means that we can perform at most m interchanges to transform A into a matrix A″ ∈ U(r, s) such that the last column of A″ is identical with the last column of A′, and so the theorem can be proved by induction. □

Definition 2.6.4 If both r and s are monotone, then U(r, s) is a normalized class. For a matrix A = (a_{ij}) in a normalized class U(r, s), a 1-entry a_{ij} is called an invariant 1 if no sequence of interchanges applied to A can change a_{ij} into a 0-entry.
Consider a decomposition

A = [ W  X ]
    [ Y  Z ]

in which e + f is maximized, where W ∈ M_{e,f}(0,1). We must show that W = J and Z = 0. If W ≠ J, then by the monotonicity of r and s, at most two interchanges applied to A would change a_{ef} into a 0-entry, contrary to the assumption that a_{ef} = 1 is invariant. Therefore, W = J and every entry in W is an invariant 1.

Note that since e + f is maximized, we may assume that a_{l,f+1} = 0 for some l with 1 ≤ l ≤ e; for otherwise a_{e,f+1} would be an invariant 1, contrary to the maximality of e + f.

Now if Z has a 1-entry in Row t (say), where e + 1 ≤ t ≤ m, then we may assume that a_{t,f+1} = 1; for otherwise, by the monotonicity of r and s, an interchange can be applied to get a_{t,f+1} = 1. If for some j with 1 ≤ j ≤ f, a_{t,j} = 0, then an interchange on the entries a_{l,j}, a_{l,f+1}, a_{t,j} and a_{t,f+1} would make a_{l,j} = 0, contrary to the fact that every entry in W is an invariant 1. Therefore, every a_{t,j} is also an invariant 1, 1 ≤ j ≤ f, contrary to the maximality of e + f. It follows that Z = 0, as desired. □
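The interchange of Example 2.6.4 is a four-entry flip. The sketch below (our code, assuming the selected 2 × 2 submatrix equals A_1 or A_2) applies it and confirms that row and column sums — hence membership in U(r, s) — are preserved.

```python
def interchange(A, i1, i2, j1, j2):
    # flip the 2x2 submatrix on rows i1,i2 and columns j1,j2; this swaps
    # the patterns [[1,0],[0,1]] and [[0,1],[1,0]] into one another
    B = [row[:] for row in A]
    for i, j in ((i1, j1), (i1, j2), (i2, j1), (i2, j2)):
        B[i][j] = 1 - B[i][j]
    return B

def line_sums(A):
    # (row sums, column sums) of a 0-1 matrix
    return [sum(r) for r in A], [sum(c) for c in zip(*A)]
```

Each flip adds and removes exactly one 1 in every affected row and column, which is why the line sums are invariant.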
In [31], Theorem 2.6.4 was applied to study the lattice structure of the subspaces linearly spanned by matrices in U(r, s):

L(r, s) = { Σ_{i=1}^t c_iA_i : A_i ∈ U(r, s) and c_i ∈ Z, 1 ≤ i ≤ t }.

In [268], Wan studied the distribution and the counting problems of invariant 1's in a class U(r, s). Using binary numbers, Wang [269] gave a formula for |U(r, s)|. For an integer k > 0, let I(k) denote the number of 1's in the binary expression of the integer k − 1; let p_i be the number of components of r that are equal to i, 1 ≤ i ≤ n; and for s = (s_1, ..., s_n)^T, let q_j = s_j − p_n, 1 ≤ j ≤ m.

Theorem 2.6.5 (Wang, [269])

|U(r, s)| = Σ Π_{j=1}^{n−1} Π_{k=1}^{2^{j−1}} C(n_{ijk}, m_{ijk}),

where the sum is taken over all admissible choices of the t_{ijk},
77
where ns;/c and m;;1c are functions of Ph tJi and t;;1c, given by the recurrence definition: for 1 S i S n  1, 1 S j S n  1 and 1 $; k $; 21 1 , n;u
=
Pi
__
{
m;;~c
+j
t0;;~c
if i
n;;/c
ifl(k)
 n $ I(k)
< i,
(i and k cannot be both equal to 1),
if I(k) ~ i,
< i+j n. if k is odd andj
> 1,
if k is even, n121l m1;1
=
q;
2: 2: m,Jic· i=2 1:=1
2.7
Stochastic Matrices and Doubly Stochastic Matrices
Definition 2.7.1 Given a matrix A = (a_{ij}) ∈ M_n^+(R), A is a stochastic matrix if for each i with 1 ≤ i ≤ n, Σ_{j=1}^n a_{ij} = 1; and A is a doubly stochastic matrix if for each i with 1 ≤ i ≤ n, both Σ_{j=1}^n a_{ji} = 1 and Σ_{j=1}^n a_{ij} = 1. Define Ω_n = {P ∈ M_n^+(R) | P is doubly stochastic}.

Theorem 2.7.1 Let A ∈ M_n^+(R). Each of the following holds.
(i) A is stochastic if and only if AJ = J, where J = J_n.
(ii) A is stochastic if and only if 1 is an eigenvalue of A and the vector e = J_{n×1} is an eigenvector corresponding to the eigenvalue 1 of A.
(iii) A ∈ Ω_n if and only if AJ = JA = J, where J = J_n.
(iv) If A is stochastic and if A ~_p B, then B is also stochastic.
(v) If A ∈ Ω_n and if A ~_p B, then B is also doubly stochastic.
(vi) Suppose A is a direct sum of two matrices A_1 and A_2. Then A is stochastic if and only if both A_1 and A_2 are stochastic.
(vii) Suppose A is a direct sum of two matrices A_1 and A_2. Then A ∈ Ω_n if and only if both A_1 and A_2 are doubly stochastic.

Proof (i)–(iii) follow directly from Definition 2.7.1; (iv) and (vi) follow from (i); (v) and (vii) follow from (iii), respectively. □

Theorem 2.7.2 If A ∈ Ω_n, then Per(A) > 0.
Proof Suppose that A ∈ Ω_n and Per(A) = 0. Then by Theorem 6.2.1 in the Appendix, A is permutation similar to

B = [ X  O_{p×q} ]
    [ Y     Z    ],

where p + q = n + 1. It follows that

n = ||B|| ≥ ||X|| + ||Z|| = p + q = n + 1,

a contradiction. □
The following Theorem 2.7.3 was conjectured by van der Waerden [266], and was independently proved in 1980 by Falikman [85] and Egorychev [79]. A simpler proof of Theorem 2.7.3 can be found in [199] and [163].

Theorem 2.7.3 (Van der Waerden–Falikman–Egorychev, [266], [85], [79]) If A ∈ Ω_n, then

Per(A) ≥ n!/n^n.

Example 2.7.1 Let A = (1/n)J_n. Then Per(A) = n!/n^n.

Theorem 2.7.4 If A ∈ Ω_n, then A ~_p B, where B is a direct sum of doubly stochastic irreducible matrices.
Proof Argue by induction on n. It suffices to show that if A is a doubly stochastic matrix, then A ~_p B, where B is a direct sum of doubly stochastic matrices. Nothing needs a proof if A is irreducible. Assume that a doubly stochastic matrix A is reducible. By Definition 2.1.1, A ~_p B for some B ∈ M_n of the form

B = [ X  Y ]
    [ O  Z ],

where X ∈ M_k and Z ∈ M_{n−k} for some integer k > 0. By Theorem 2.7.1(v), B is doubly stochastic, and so both ||X|| = k and ||Z|| = n − k. It follows that n = ||B|| = ||X|| + ||Y|| + ||Z|| = k + ||Y|| + (n − k), and so ||Y|| = 0, which implies that Y = O. By Theorem 2.7.1(vii), both X and Z are doubly stochastic. □
Theorem 2.7.5 (Birkhoff, [8]) Let A ∈ M_n^+. Then A ∈ Ω_n if and only if there exist an integer t > 0 and positive numbers c_1, c_2, ..., c_t with c_1 + c_2 + ··· + c_t = 1, and permutation matrices P_1, ..., P_t, such that

A = c_1P_1 + c_2P_2 + ··· + c_tP_t.

Proof If A = Σ_{i=1}^t c_iP_i, then AJ = Σ_{i=1}^t c_iP_iJ = Σ_{i=1}^t c_iJ = J. Similarly, JA = J. Therefore A ∈ Ω_n by Theorem 2.7.1(iii).

Assume now that A ∈ Ω_n. The necessity will be proved by induction on p(A), the number of positive entries of A. By Theorem 2.7.2, p(A) ≥ n. If p(A) = n, then A is itself a permutation matrix, and so t = 1, c_1 = 1 and P_1 = A. The theorem holds.

Assume p(A) > n. By Theorem 2.7.2, Per(A) > 0, and so there must be a permutation π on n elements such that the product a_{1π(1)}a_{2π(2)} ··· a_{nπ(n)} > 0. Let c_1 = min{a_{1π(1)}, a_{2π(2)}, ..., a_{nπ(n)}} and let P_1 = (p_{ij}) denote the permutation matrix such that p_{ij} = 1 if and only if (i, j) = (i, π(i)), for 1 ≤ i ≤ n. Since A is doubly stochastic, 0 < c_1 ≤ 1. If c_1 = 1, then a_{iπ(i)} = 1 for all 1 ≤ i ≤ n, and so by Theorem 2.7.1(iii), A = P_1. Therefore p(A) = n, contrary to the assumption that p(A) > n. Therefore c_1 < 1. Let

A_1 = (1/(1 − c_1))(A − c_1P_1).

Then A_1J = JA_1 = J, and so A_1 is also doubly stochastic with p(A_1) ≤ p(A) − 1. By induction, there exist an integer t > 0, positive numbers d_2, ..., d_t with d_2 + ··· + d_t = 1, and permutation matrices P_2, ..., P_t such that

A_1 = Σ_{i=2}^t d_iP_i.

It follows that

A = (1 − c_1)A_1 + c_1P_1 = c_1P_1 + (1 − c_1)(d_2P_2 + ··· + d_tP_t).

Since c_1 + (1 − c_1)(d_2 + ··· + d_t) = 1, the theorem follows by induction. □
Definition 2.7.2 A matrix A ∈ B_n is of doubly stochastic type if A can be obtained from a doubly stochastic matrix B by changing every positive entry of B into a 1. Let Ω_n denote the set of all n × n doubly stochastic matrices. A digraph D is k-cyclic if D is a disjoint union of k directed cycles.

Example 2.7.2 The matrix

A = [ 1 1 ]
    [ 0 1 ]

is not of doubly stochastic type (Exercise 2.25).
Theorem 2.7.6 Let A ∈ B_n. Then A is a matrix of doubly stochastic type if and only if, for some integers t, k_1, ..., k_t > 0, D(A) is a disjoint union of t spanning subgraphs D_1, D_2, ..., D_t of D(A), where D_i is k_i-cyclic, 1 ≤ i ≤ t.
Combinatorial Properties of Matrices
80
Sketch of Proof Note that P E M,. is a permutation matrix if and only if D(P) is a kcyclic digraph on n vertices, for some integer k depending only on P. Thus Theorem 2.7.6 follows from Theorem 2.7.5. D Definition 2.7.3 Let D be a digraph on n vertices {v1.··· ,v,.}, and let lt,l2 ,··· ,l,. be non negative integers. Then D(l1 , 12 , • • • , l,.) denote the digraph obtained from D by attaching l 1 loops at vertex v;, 1 5 i 5 n. A digraph Dis Eulerian if for every vertex v
e V(D), di'(v) =ct(v).
Corollary 2. 7.6 A digraph D is Eulerian if and only if for some integers 11 , • • • , l,.. ;;:: 0, the adjacency matrix A(D(l1 , • • • , l,.)) is of doubly stochastic type. Sketch of Proof Note that D is Eulerian if and only if D is the disjoint union of directed cycles C1 , C2 , • • • , Ct of D. Since each C, can be made a spanning subgraph by adding a loop at each vertex not inc,, and so the corollary follows from Theorem 2.7.6. D Definition 2.7.4 Let r,s > 0 be real numbers and n ;;:: m > 0 be integers satisfying rm = sn. Let ro denote an mdimensional vector each of whose component is r, and let s 0 denote an ndimensional vector each of whose component is s. In general, for an integer k > 0, let s = ko denote a vector each of whose component is k; and define U,.(k) = U(k0 ,k0 ) n B,.. Thus every matrix A e U,(k) has row sum k and column sum k, for every row and column of A. In particular, U,.(l) = n,.. Theorem 2.7.7 Let A e Bm,n for some n;;:: m > 0. Then A e U(ro,so) if and only if there exist integers t, ClJ • • • , Ct > 0 and permutation matrices P 1 , • • • , Pt. such that
Sketch of Proof H n = m, then r = s and so :A is doubly stochastic. Therefore, Theorem 2.7.7 follows from Theorem 2.7.5. Assume that m < n. Then consider
e U,.(j), and so Theorem 2.7.7 follows by applying Theorem 2.7.5 to A'. D Theorem 2.7.8 Let A e B,.. Then A e U,.(k) if and only if there exist permutation Then A'
matrices P1o • •· , P" such that
Proof It follows from Theorem 2.7.5.
D
Combinatorial Properties of Matrices
81
Example 2.7.3 Let G beakregular bipartite graph with bipartite sets X andY. If lXI !YI = n, then the reduced adjacency matrix of A is in U(k), and so by Theorem 2. 7.8, E( G) can be partitioned into k perfect matchings.
=
Theorem 2.7.9 If A
e U,.(k), then n'k" "' (k)n Per(A) ~ ·/2m. n"
e
O Another lower bound of Per(A) for A e U,.(k)
Proof This follows from Theorem 2.7.3.
can be found in Exercise 2.26. When k = 3, the lower bound in Theorem 2.7.9 has been improved to ([198]) Per(A) ~ 6
(l4)n3 .
However, the problem of determining the exact lower bound of Per(A) for A remains open.
2.8
e Un(k)
Birkhoff Type Theorems
Definition 2.8.1 Recall that 'Pn denotes the set of all n by n permutation matrices, and that On denotes the set of all n by n doubly stochastic matrices. For two matrices A, B E Mn, define 'P(A, B)
=
'P(A,B)
= {~ctPi : ~Co= 1 and PiE 'P(A,B)}.
{P E 'Pn : AP = PB}
Also define
On(A,B) = {P E 0,. : AP
=PB}.
When A== B, write 'P(A), 'P(A) and On(A) for 'P(A,A), 'P(A,A) and On(A,A), respectively. Note that when A= A(G) is the adjacency matrix of a graph G, 'P(A) is the set of all automorphisms of G. Example 2.8.1 Let G be the vertex disjoint union of a 3cycle and a 4cycle, let A = A(G) and B = +J7. Then AB = BA and soB E 07(A). However, as there is no automorphism of G that maps a vertex in the 3cycle to a vertex in the 4cycle, B ¢ 'P(A). Definition 2.8.2 A graph G is compact if 'P(A(G))
= On(A(G)).
Combinatorial Properties of Matrices
82
Example 2.8.2 Note that G(1,.) is the graph with n vertices and n loop edges such that a loop is attached at each vertex of G(I,.). Note that 1'(1,.) 1',.. Thus BirkhoffTheorem (Theorem 2.7.5) may be restated as
=
0,.(1,.)
= 0,. ='Pn = 1'(1,.),
which is equivalent to saying that G(1,.) is compact. Tinhofer (260] indicated that theorems on compact graph families may be viewed as Birkhof£ type theorems. Definition 2.8.3 Let G be a graph with n vertices and without multiple edges. Let G*
=
be the graph obtained from G by attaching a loop at each vertex of G. Thus K,'; G(J,.) is the graph obtained from the complete graph K,. by attaching a loop at each vertex of
K,.. The graph G can be viewed as a subgraph of K,';. Moreover, if G is loopless, then G can be viewed as a subgraph of K,.. The full completement of G is Glc = K,'; E(G). If G is loopless, then the completement of G is ac = K,. E(G). The proof of the following Theorem 2.8.1 is straightforward. Theorem 2.8.1 Each of the following holds. (i) H G is compact, then Glc is also compact.
(ii) H a loopless graph G is compact, then G* is also compact. {iii) H a loopless graph G is compact, then ac is also compact. {iv) K,., K,';, K~ are compact graphs. Theorem 2.8.2 (Tinhofer, (260]) A tree is compact. Theorem 2.8.3 (Tinhofer, (260]) For n Proof Let V(C,.)
~
3, thencycle C,. is compact.
= Z,., the integers modulo n, and denote A= A(C,.) =(a,;), where au= { 1 0
=
if j i ± 1 (mod n) otherwise.
It suffices to show O,.(A) ~ 'P(A). Argue by contradiction, assume that there is an X= (x1J) E O,.(A) \ P(C,.) such that p(X), the number of positive entries of X, is minimized. To find a contradiction, we shall show the existence of a real number t: with 1 > t: > 0, matrices Y E 0,. and P E 'P(A) such that X (1  e)Y + t:P and such that p(Y) < p(X). Since XA =AX,
=
Xi+t,j
+ Xil,j = XiJ1 + Xi,j+lo
It follows that for all i,j with 1 =::; i,j :5 n,
for all i,j with i,j E Z,..
Combinatorial Properties of Matrices
83
Note that the right hand sides are independent of i. In each of the cases below, a matrix P (piA:) e 'P(A) is found with the property that
=
=
Let E min{z;t!Pit = 1}. Ife = 1, the X= P e 'P(A), contrary to that assumption that X is a counterexample. Hence 0 < e < 1. Let Y = 1_:. (X eP). Then p(Y) < p(X) and by XA = AX,PA = AP and by Theorem 2.7.1(iii), Y e On(A), and so a contradiction obtains. Zilr:
> OwheneverPit > 0.
Case 1 Z1,;  Zn,j1 > 0, for some fixed j. Define P {p;t) as follows:
=
=
if k j + 1 i (mod n) otherwise.
=
X1,J ZnJ1, Z;t > 0 whenever Pit > 0. Note that Pis the reflection a.l;lout the axis through the positions and and soP e 'P(A).
As Xi+1,j1  Zi.Ji1
n±4+1,
.1¥
Case 2 Z1,; Xn,; 1 < 0, for some fixed j. In this case, define P to be the reflection about the axis through the positions
'*t 1 • The proof is similar to that for Case 1.
.ijl and
Case 3 ZlJ ZnJH > 0, for some fixed j. Define P {p;~:) as follows:
=
=
ifk j + i 1 (mod n) otherwise. Xi+t.i+i  Zi,JH+l = X1.J  Zn,;+t > 0, z;t clockwise rotation of Cn, and soP e 'P(A).
As
> 0 whenever Pit > 0.
Note that P is a
Case 4 z1,;  Xn,j+t < 0, for some fixed j. In this case, define P to be an anticlockwise rotation, similar to that in Case 3.
=
Case 5 X1,; = Xn,i+l Xn,j1. for all j with 1 :S: j :S: n. Then, Zi+tJ x;,;+1 Z;1,; z;,;1. for all i,j with 1 :S: i,j :S: n. It follows that there exist matrices U, V and numbers a,/3 ~ 0, such that X= U + V and such that
=
=
=
=
if i  j 0( mod 2) otherwise,
and l't;
= { {3 0
=
if i  j 1( mod 2) otherwise.
Note that if a > 0, then U ~ the sum of some reflections; if {3 > 0, then V some reflections. Thus we can proceed as in Case 14 to express
X= (1 e)Y + t:P, for some P
e 'P(A)
and Y
e On(A).
If a= {3
= 0, then X= 0.
~
the sum of
Combinatorial Properties of Matrices
84
Therefore, in any case, if X ::f; 0, a desired P can be found and so this completes the proof. D Definition 2.8.4 Let A
e Mn be a matrix. Cone(A)
P(A)
=
Define
{B E M; : BA = AB}
= {P~A) cpP
: cp
~ 0} .
It follows immediately from definitions that
P(A) ~ Cone(A). Let G be a graph with A= A(G). If P(A) Theorem 2.8.4 Let
Pn = {
E
= Cone(A), then G is a supercompact graph.
cpP : cp
~ o} be a set of n x n matrices. Let G be
Pe'P(A)
a graph on n vertices with A= A(G). Each of the following holds. (i) If Y E Pn, then G(Y) is a regular graph. (In other words, Y E Un(q) for some number q.) (ii) If Y e P(A) and if Y ::1 0, then there exists a number q > 0 such that ~ Y e 'P(A). Moreover, q = 1 if and only if Y EOn. (iii) (Brualdi, [19)} If G is supercompact, then G is compact and regular.
=
Sketch of Proof (i) follows from Theorem 2.7.7. For (ii}, note that Y EpcpP. Therefore let q EP cp and apply Birkhoff Theorem (Theorem 2. 7.5) to conclude (ii). For (iii}, by definitions, 'P(A) ~ On(A). It suffices to show the other direction of containment when G is supercompact. Let A= A(G) and assume that P(A) = Cone(A). By (i}, G G(A) is regular. By definitions and by (ii}, On(A) ~ Cone(A) = P(A) ~ 'P(A).
=
=
D Example 2.8.3 There exist compact graphs that are not supercompact. Let G be a tree with n ~ 3 vertices. By Theorem 2.8.3, G is compact. Since n ~ 3, G is not regular, and so G is not supercompact, by Theorem 2.8.4(i). Example 2.8.4 A compact regular graph may not be supercompact. Let G be the disjoint union of two K 2 's. With appropriate labeling,
A= A(G)
~~
01 01 0 0 0 0 1 0 0
=[1
l .
85
Oombina.torla.l Properties of Matrices
Let
X=[i ~
~ ~]·
We can easily check that XA = XA and so X E Cone(A). But by Theorem 2.8.4(i), X¢ P(A), and so G is not supercompact. ~
Theorem 2.8.5 (Brualdi, [19]) For n
1, the ncycle Cn is supercompact.
Theorem 2.8.6 (Brualdi, [19]) Let n, k > 0 be integers such ihat n = kl, let H be a supercompact graph on k vertices, and let G be the disjoint union of l copies of H. Then G is compact. Proof Let B = A(H) and A= A(G). Then A is the direct sum of l matrices each of which equals B. By contradiction, assume that G is not compact, and so there exists a matrix X E On(A) 'P(A) with p(X), the number of nonzero entries of X, is as small as possible. Write
X=
[ : l ~~~. ~~:. .
.
.
~::.
Xn
X12
Xu
.
.
.
,
where each Xi; e Mt. Since XA = AX, for all 1 ~ i,j ~ l, Xo;B = BXii, and so Xi; E P(B), by the assumption that His supercompact. By Theorem 2.8.4(i), there is a number Qii ~ 0 such that Xi; e U~:(Qi;). Let Q = (qi;) denote the l x l matrix. Since X EOn, Q E 0,. By Theorem 2.7.5, Q = L:PeP, cpP, where L:;cp = 1. Therefore, there exists a P = (pi;) e 'P, which corresponds to a permutation t1 on {1, 2, · · · , Z}, such that q8 ,cr(s) > 0 for all 1 ~ 8 ~ Z. Fix an 8 = 1, 2, ... , l. Since x ..... (•) e P(B), x ..... (•) = L:PEP,(B) cpP, where Cp ~ 0. Hence there exists a P. e 'P~o(B), which corresponds to an automorphism t18 of H, such that for all1 ~ u ~ k, the (u,a.(u))entry of x.,cr(•) is positive. Construct a new matrix R = (r,;) eM., as follows. Write R12
R=
[ Ru ~1
R22
R2l Ru :
Ru
.R,2
Ru
.
l
, where Rt.;
P.
={ 0
=
ifi 8 andj =t1(8) otherswise.
Combinatorial PropertieS of Matrices
86
Since P8 B = BP., RA = AR. Since Ps e P,., ReP,.. Thus R is an automorphism of G and so R E P(A). Moreover, z,; > 0 whenever r;; = 1, for all 1 $ i,j $ n. Let E= min{Zij : r;; = 1}. HE= 1, then since X,R En,., X= R E P(A) ~ P(A), contrary to the choice of X. Therefore, 0 < e < 1. Let Y = 1 ER). Then by Theorem 2.7.1(iii), and by X,R E O,.(A), Y E S"l,.(A) with p(Y) < p(X). By the minimality of X, Y E P(A), and so X = (1  e)Y + ER E P(A), contrary to the choice of X. 0
.:.(x
Corollary 2.8.6 H G is the disjoint union of C~c (cycles of length k), then G is compact.
=
When k 1, Corollary 2.8.6 yields Birkhoff Theorem. Therefore, Corollary 2.8.6 is an extension of Theorem 2.7.5. See Exercise 2.30 for other applications of Theorem 2.8.6.
=
Definition 2.8.5 Let k, m, n > 0 be integers with n km. A graph G on n vertices is a complete kequipartite graph, (a C.k e graph for short), if V(G) = ~ 1 is a disjoint union with IV.I = k, for all1 ~ i $ m, such that two vertices in V(G) are joined by an edge e E E(G) if and only if these two vertices belong to different Yo's. Let G be a C.k  e graph on n vertices. H k = 1, then G = K,.; if 2k = n, then G = Kt.t· Theorem 2.8.6 can be applied to show that C.k e graphs are also compact graphs (Exercise 2.30). Theorem 2.8.7 (Brualdi, (19]) Let n =2m ~ 2 and let M ~ E(Km,m) be a perfect matching of Km,m· Then Km,m  M is compact. Proof We may assume that m ~ 3. Write V(Km,m  M)
and M
= {e; I eo joins v; to Vm+i• where 1 $ A = A(K  M) m,m
= V1 U V2, where
i $ m}. Hence we may assume that
= [ JmIm 0
Jm  Im ] . 0
By contradiction, there exists an X E S"l,.(A) P(A) with p(X), the number of positive entries of X, minimized. Write
Then by AX= XA,
Xt(Jm Im) X4(Jm  Im)
(Jm Im)X4
=
(Jm  Im)Xt
and so (Xt + X4)Jm = Jm(Xt + X4). It follows that there exists a number a ~ 0 such that a is the common value of each row sum and each column sum of X1 + X4. Sintilarly,
87
Combinatorial Properties of Matrices
there exists a number b 2:: 0 such that b is the common value of each row sum and each column sum of X2 + Xs. Let rto · · · , Tm denote the row sums of X 1 and let Bt. • • • , Bm denote the column sums of X1. Then the row sums and column sums of x, are respectively a r 1, · · · , a Tm and
,bsm.
bSto•••
denote the (i,j)entry of Xt. Since xl x, = XtJm JmX4, then (i,j)entry of X, is zt; +atis;. By the definition of a, Jm(Xt +X,)= aJm, and so for fixed j, Let
Zij
m
a= E 2 that, for each j
+ma (rt +r2 + ··· +rm) ms;. with 1 ::;; j ::;; m,
s;=
m~ 2 [<m1)a ~r•]. = ··· = Tm·
It follows that Bt = s2 = · · · = Bm· Similarly, Tt :;: r2 a11 a, 2:: 0 such that a= a1 + a4, X1 E U(at), X, E U(a,), and
Hence there exist
Compare the row sums and column sums of the matrices both sides to see a  a1 + m(a 2at). and so
= a, =
a1
(m 1)(a 2a1 )
= 0.
Since m > 2, a= 2at and so X 1 =X,. Similarly, X 2 = Xs, and so
=
where Xt E U(at), X2 E U(aa), and a1 + a2 1. Suppose first that a1 ::f; 0. Then t,X1 E Om and so by Theorem 2.7.5, there is a Q = (q,;) E 'Pm such that Zii > 0 whenever q 0; :;: 1. LetT denote the permutation on {1, 2, · · · , m} corresponding to Q, and let P E Cl2m be the direct sum of Q and Q. Then P corresponds to the automorphism of Km,m  M which maps Vi to Vr(i) and Vm+i to Vm+r(i)o and so p E P(A). Let e min{zt; : qii = 1}. If e = 1, then X= P, contrary to the assumption that X ft P(A). Therefore, 0 < f < 1. Let Y:;: 1!.,(X eP). Then by Theorem 2.71.(iii) and by X, p E n,.(A), y E n,.(A) with p(Y) ::;; p(X) 2. By the choice of X I y E P(A). But then X = (1 e)Y + EP E P(A), contrary to the assumption that X ¢ P(A).
=
88
Combinatorial Properties of Matrices
The proof for the case when f12 ¢ 0 is similar. This completes the proof.
0
It is not difficult to see that Kmm  M (m ~ 3) is also supercompact. Theorem 2.8.8 (Liu and Zhou, [187]) Let G be the !regular graph on n vertices. Then G is not supercompact.
= 2m ;::: 4
Proof Let A= A(G) and consider these two cases. Case 1 n
=0 (mod 4). Then A can be written as
Define
X=
[~ ~ ::: ..
0
0
Then X E Bn and AX= XA, and so X E Cone(A). However, as the row sums of X are not a constant, X fJ P(A) by Theorem 2.8.4(i), and so G is not compact. Case2 n
=2 (mod 4). Let p(l,l)
=[~
~]andY=[~~]
Then A can be written as 0 0
A=
0 0
0 0 0 12 12 0
0 0
p(l,l)
0 0
0 12 12 0 0 0 0
0 0 0
Combinatorial Properties of Matrices
89
Define y
0
0
y
0 0 0
0 0
0 0
0 0
0 0
12 0 0
0 y
0 0
0
y
X=
Then X
e Cone(A) \ P(A)
0
as shown in Case 1, and so G is not compact.
D
Example 2.8.5 The complement of a supercompact graph may not be supercompact. Let G = C4 denote the 4cycle. Then G is compact. But is a !regular graph on 4 vertices, which is not supercompact, by Theorem 2.8.8. Since is not supercompact, it follows by Definition 2.8.4 that Gfc is not supercompact either.
ac ac
Definition 2.8.6 Let k,m > 0 be integers. A graph G is called an (m,k)cycle if V(G) can be partitioned into Vi U V2 U · · · U Vm with lllil = k, (1 ~ i ~ m) such that an edge e e E(G) if and only if for some i, one end of e is in Vi and the other in Vi+l, where i 1,2,··· ,m (mod m).
=
=
=
Example 2.8.6 An (m, 1)cycle is an mcycle. When m 2 or m 4, an (m, k)cycle is a complete bipartite graph K!!l},111}· H A(Cm) B, then an (m,k)cycle has adjacency matrix B ® Jm· For example, the adjacency matrix of the (4,2)cycle is
=
[ ~ J. ~ J.]. J2
0
J2
0
Theorem 2.8.9 (Brualdi, [19]) Let m, k > 0 be integers. An (m, k)cycle is supercompact 1, or k ~ 2 and m 4, or k ~ 2 and m 'if= 0 (mod 4).
if either k
=
=
Example 2.8. 7 The (8, 2)cycle is not compact, and so it is not supercompact. To see this, let A= A(C8 ) ® J 2 be the adjacency matrix of the (8, 2)cycle, and let
X·· {
[
~ ~]
., [~ ~]
if j  i
=0, 1 (mod 4)
if j  i
= 2, 3 (mod 4)
Let X= (X;j) denote the matrix in M 16 which is formed by putting each of the blocks X;j, 1, ~ i,j ~ 2 in the ijth position of a 2 x 2 matrix. Then X e On(A) \ P(A). (Exercise 2.31).
90
Combinatorial Properties of Matrices
Open Problems (i) Find new compact graph families. (ii) Find new techniques to construct compact graphs from supercompact graphs. (iii) Construct new supercompact graphs. It is known that when k 1, a C.k  e graph is supercompact; and when k 2, a C.k  e graph is compact. What can we say
=
=
fork2:3? (iv) Is there another kind of graphs whose relationship with supercompact graphs is similar to that between supercompact graphs and compact graphs?
2.9
Exercise
Exercise 2.1 (This is needed in the proof of Theorem 2.2.1) Let D be a directed graph. Prove each of the following. (i) H D has no directed cycles, then G has a vertex v with out degree zero.
(ii) D has no directed cycles if and only if the vertices of G can be so labeled v1, v2, · · · , v,. that (v1v;) E E(D) only if i < j. (Such a labeling is called a topological ordering.) (iii) H D~o D 2 , • • • , D,. are the strongly connected components of D, then there is some D 1 such that G has no arc from a vertex in V(D1) to a vertex V(D)  V(D1). (In this case we say tliat D 1 is a source component, and write o+(D,) = 0. ) (iv) The strong components of D can be labeled as D1,D2,··· ,D,. such that D has an arc from a vertex in V(Di) to a vertex in V(D;) only if i < j. Exercise 2.2 Prove Lemma 2.2.1. Exercise 2.3 Prove Lemma 2.2.2. Exercise 2.4 Let A E M,. be a matrix with the form A1
0
0
*
A2
0
0 0
A=
*
*
*
*
where each Ai is a square matrix, 1 S i S k. Show that A has a nonzero diagonal if and only if each Ai has a nonzero diagonal.
=
Exercise 2.5 Let A (aiJ) E M;t. Then A is nearly reducible if and only if A is irreducible and for each apq > 0, A  apqEpg is reducible. Exercise 2.6 Prove Proposition 2.3.1. Exercise 2.7 Let D be a digraph, let W ~ V(D) be a vertex subset and let H be a subgraph of D. Prove each of the following.
91
Combinatorial Properties of Matrices (i) H D is minimally strong and if D[W) is strong, then D[W] is minimally strong. (ii) H D is strong, then D / H is also strong. (iii) H both H and D / H are strong, then D is strong. Exercise 2.8 A permutation (at,~.··· , a,.) of 1, 2, · · · , n is an nderangement if a; fur each i with 1 ~ i ~ n. Show each of the following. (i) Per(J,.) = n!. ~ (1)/c (ii) Per(J,. I,.) n! 4J "'""k!'
:f i,
=
lc=O
Exercise 2.9 For n ~ 3, let A= (a;;) EM,. be a matrix with a;; = 0 for each i and j with 1 ~ i ~ n 1 and i + 2 ~ j ~ n, (such a matrix is called a Hessenberg matrix). Show that the signs of some entries of A can be changed so that Per(A) = det(A). (Hint: change the sign of each a;,i+to 1 ~ i ~ n 1.) Exercise 2.10 A matrix A = (a;;) E Mr,n with n ~ r ~ 1 is a r x n normalized Latin rectangle if au i, 1 ~ i ~ n, if each row of A is a permutation of 1, 2, · · · , n and if each column of A is an rpermutation of 1, 2, · · · , n. Let K(r, n) denote the number of r x n normalized Latin rectangles. Show that K(2,n) Per(J,. I,.).
=
=
Exercise 2.11 Prove Lemma 2.5.1. Exercise 2.12 Prove Lemma 2.5.2. Exercise 2.13 Prove Theorem 2.5.5. Exercise 2.14 Let A E M;t be fully indecomposable. Then Par (A)
~
IIAII  2n + 2.
Exercise 2.15 Prove Lemma 2.5.5. Exercise 2.16 Prove Lemma 2.5.6. Exercise 2.17 Prove Lemma 2.5.7. Exercise 2.18 Prove Theorem 2.5.9. Exercise 2.19 Using the notation in Definition 2.6.3, Show that each of the following holds. (i) s" < s< 1> < s'. (ii) there exist an integer k ~ 1 and k non negative ndimensional vectors s, s, ··· ,s(k) such that s" Exercise 2.20 Let r
= s 0 and let (Ax),=
(i) H A is nonnegative and irreducible, then min (Ax). l!>i:Sn
(ii) H A is nonnegative, then min (Ax)i o:t>O
Xi
Xi
= p(A) =::; m~ (Ax),. l:Ss:Sn Xi
= p(A) =::;max (Ax)i. o:t>O
Xi
Exercise 2.22 Show that if A is a stochastic matrix, then p(A)
= 1.
Exercise 2.23 Let A, B e M;t. Prove each o the following: (i) H both A and B are stochastic, show that AB is also stochastic. (ii) H both A and B are doubly stochastic, show that AB is also doubly stochastic. Exercise 2.24 H A e M;t is doubly stochastic, then A....., B, where B is a direct sum of doubly stochastic fully indecomposable matrices. Exercise 2.25 Show that the matrix A in Example 2.7.2 is not of doubly stochastic type. Exercise 2.26 Show that if A e Un(k), then Per(A)~ k!. Exercise 2.27 Prove Theorem 2.8.1. Exercise 2.28 Without turning to Theorem 2.8.2, prove the star K 1,nl is compact. Exercise 2.29 Prove Theorem 2.8.5 by imitate the proof of Theorem 2.8.3. Exercise 2.30 Show that G is compact if (i) G is a disjoint union of Km 's. (ii) G is a disjoint union of K:, 's. (iii) G is a disjoint union ofTm 's, where Tm is a tree on m vertices. (iv) G is a C.k  e graph. Exercise 2.31 Show that X
2.10
e !!(A)\ 'P(A) in Example 2.8.7.
Hints for Exercises
Exercise 2.1 For (i), if no such v (a source) exists, then walking randomly through the arcs, we can find a directed cycle. For (ii), use (i) to find a source. Label the source with the largest available number, then delete the source and go on by induction.
93
Combinatorial Properties of Matrices
For (iii), we can contract each strong component into a vertex to apply the result in
(i). Exercise 2.2 Suppose that A= (a,;) and A'= (a~i). Then ao. =a~ and au =a~•• for each i with 1 5 i 5 n. For each au > 0, there are Bi• arcs from 11i to 11• in D(A). Since a~ = ao., there are ao. ares from 110 to 11t in D(A'). Thus all the arcs getting into 118 in D(A) will be redirected to 11t in D(A'). Similarly, all the arcs getting into 11t in D(A) will be redirected to 118 in D(A'). Exercise 2.3 We only show that case when k = 2. The general case can be proved similarly. Let u E V(D') = V(D). We shall show that D' has a directed (u2,u)path and a (u, u2)path. H u E V(D2), then since D 2 is a strong component, D2 has a ('1£2, u)path which is still in D'. Also, D 2 has a (u,u')path for some vertex u' with (u'u2) E E(D2). Note that in D', (u''l£2) e E(D'). Since u 1 is not a source and by (*), there is a vertex v e V(D1 ) such that (vu1) e E(D). It follows that D' has a (u, u2)path that goes from u, through u' and 11 with the last arc (11,u2). H u e V(D1), since u 1 is not a sink and by (*), there is a u" e V(D1) such that (u"ul) e E(D), whence D' has a (u, u 2)path that contains a (u, u")path in D 1 with the last arc (u"'l£2). Also, since u 2 is no a source, there is a vertex 11 E V(D2) such that (w2) E E(D). Hence D' has a ('1£2, u)path that contains a ('1£2, 11)path in D2, the arc (v,u1), and a (u1,u)path in D 1. Exercise 2.4 Induction on k. By the definition of a diagonal, we can see that A~: must have a nonzero diagonal. Argue by induction to the submatrix by deleting the rows and columns containing entries in A~:. Exercise 2.5 Apply Definition 2.3.1. Exercise 2.6 Apply definitions only. Exercise 2. 7 (i) H D[W] has an arc a such that D[W]  a is strong, then D  a is also strong, and so D is not minimal. Hence D[W) is minimally strong. (ii) and (iii): definitions. Exercise 2.8 (i) Show that Per( Jn) counts the number of permutations on n elements. (ii) Show that Per(Jn In) counts the number of n derangements. 
Exercise 2.9 If Per(A) = 0, then A has a zero submatrix H E M•,n•+1 (0), which has at least n  m + 1 columns.
=
Exercise 2.11 H A E Bm,n and if Per(A) 0, then A has Osx(nB+l) as a submatrix, for somes > 0 (Theorem 6.2.1 in the Appendix); and this submatrix has at least n m + 1
Combinatorial Properties of Matrices
94
columns. Exercise 2.12 In this case, each kx (n1) submatrixof A has at least k nonzero columns, and so byTheorem6.2.1 in the Appendix, Par(A') > 0 for each (m1) x (n1) submatrix A'. Exercise 2.13 It suffices to prove Theorem 2.5.5 for nearly indecomposable matrices. Argue by induction on n. When n = 1, (2.10) is trivial. By Theorem 2.4.2, we may assume that
where B is nearly indecomposable. By induction, Per(A) $ Per(B) + 1 ~ IIBII  2m + 2 + 1. Exercise 2.14 Let A= (ao;). If a0; $ 1, then this is Theorem 2.5.5. Assume that some a,.8 > 0. Let B = A  a,.8 Er8 • By induction on II All, we have Per(A)
Exercise 2.15 Since n that an > 0. Hence
~
= =
Per(B) + perA(rls)
~
liB II  2n + 1
IIAII2n+2.
2 and since A is fully indecomposable, there exists a t Is such n
Per(A)
=
Lark Per(A(rlk)) 1:=1
~
ar• Per(A(rls)) +
~
2 Per(A(rls)) + 1
tlrt
Per(A(rlt))
Exercise 2.16 Argue by induction on n ~ 3. Assume that (n 1)! < 2(nl)(ns). Then the theorem will follow if n $ :z2(n2). Consider the function f(x) = 2(x2)lo~x. Note that /(3) = 2log2 3 > 2log2 4 = 0, and f'(x) > 0. Hence f(x) > 0 and so 3$ n $ 22 (n 2>. Exercise 2.17 Let r1, r 2 , .. • , r,. denote the row sums of A. If ri Theorem 2.5.7 and by Lemma 2.5.6, Per( A) $
n
n
i=l
i=l
~
3 for each i, then by
II(r,!) 'k < II 2r,2 = 2IIAU2n.
Exercise 2.18 Argue by induction on t
~
1 and apply Theorem 2.5.8.
Combinatorial Properties of Matrices
95
Exercise 2.19 (i). By Definition 2.6.2, the first p. 1 components of s > 8~. comparing the sum of the first k components and considering the cases when k ~ p.  1 and k ~ p., we also conclude that s" < s 0 for all i. Let E = min{zi.. (i), i = 1, 2, · · · , n}. As P E 1'(A), (X  eP) E Cone(A), and X eP has one more 0entry than
X. Then argue by induction to show that X e P(A). Exercise 2.30 (i)(iii) follows directly from Theorem 2.8.6. For (iv), note that ale is the disjoint union of K;'s.
Chapter 3
Powers of Nonnegative Matrices Powers of nonnegative matrices are of great interests since many combinatorial properties of nonnegative matrices have been discovered in the study of the powers of nonnegative matrices and the indices associated with these powers. A standard technique in this area is to study the associated (0,1)matrix of a nonnegative matrix. Given a nonnegative matrix A, we associate A with a matrix A' E Bn obtained by replacing any nonzero entry of A with a 1entry of A'. Many of the combinatorial properties of a nonnegative matrix can be obtained by investigating the associated (0,1)matrix and by treating this associated (0,1)matrix as a Boolean matrix (to be defined in Section 3.2). Quite a few of the problems in the study of powers will lead to the Frobenius Diophantine Problem. Thus we start with a brief introductory section on this problem.
3.1
The Frobenius Diophantine Problem
Certain Diophantine equations will be encountered in the study of powers of nonnegative matrices. This section presents some results and methods on this topic. As usual, for integers a1o a2, · · · ,a., let gcd(a1, · · · ,a.) denote that greatest common divisor of a1, · · • ,a., and let lcm(alt · · · , a.) denotes that least common multiple of a1o • • • , a,.
=
Theorem 3.1.1 Let a 1,G2 > 0 be integers with gcd(a1oa2) 1. Define ,P(a1oa2) = (a1 1)(G2 1). Each of the following holds. (i) For any integer n ;?:: tf>(at.G2), the equation a 1:z: 1 + G2:Z:2 = n has a nonnegative integral solution :z:1 ;?:: 0 and :z:2 ;?:: 0. 97
Powers of Nonnegative Matrices
98
(ii) The equation a1x1 + f12X 2 = cfl(alt a2)  1 does not have any nonnegative integral solution. Proof Let n;:::: (a1 1)(ll2 1). Note from number theory that any solution x 1 and :1:2 of a1x1 + t12X2 = n can be presented by
= x~ +a2t =~ a1t
where x~, x~ are a pair of integers satisfying the equation, and where t can be any integer. :5 al t +at 1. Since al ~ 1 and since X~ is an integer' we can choose t so that al t :5 Therefore x2 = x~ a1t;:::: 0. Since n > a 1a 2  a 1  ~ and by this choice oft,
ra
x1a1
= >
= n (x~ a1t)a2 a1a2 a1 ~(a1 1)a2 = a1 (x~ + a2t)a1
and so x 1 = x~ + ~t ;:::: 0. This proves (i). Argue by contradiction to prove (ii). Assume that there exist nonnegative integers X1J x2 so that a1x1 + fi2X2 a1a2  a1  a2, which can be written as a 1a2 (x1 + 1)at + (x2 + 1)a2. It follows by gcd(a1,t12) = 1 that a1j(x2 + 1) and t12l(xt + 1). Therefore X2 + 1 ;:::: a1 and Xt + 1 ;:::: f12, and so a1a2 = (x1 + 1)at + (x2 + 1)a2 ;:::: 2at~. a contradiction obtains. 0
=
=
Theorem 3.1.2 Let 8,n,a1,· ··,as bepositiveintegerswith 8 > 2such that gcd(a1, ···,a,)= 1. There exists an integer N N(a11 ···,a,), such that the equation
=
has nonnegative integral solution x1
;::::
O,x2 ;:::: 0, · ·· ,x.;:::: 0 whenever n > N(a1,· ··,an)·
Proof When 8 = 2, Theorem 3.1.2 follows from Theorem 3.1.1 with N(a1,t12) =(at1)(~ 1) 1. Assume that 8 ;:::: 3 and argue by induction on 8. Let d denote gcd(a1, • · · , a 8 _ 1 ). Then gcd(d,a,) = 1, and so there is an integer d. with 0 :5 b. :5 d 1 such that a,b, n (mod d). Write a, = a~d, 1 :5 i :5 s 1. Then the equation a1x1 + · · · + a1 x 1 n becomes
=
=
(3.1) By induction, there exists an integer N(a~, ~. · · · ,a~_ 1 ) such that the equation (3.1) has nonnegative integral solution XI = b1, · • · , Xs1 = ba1, whenever
Powers of NoDIJegative Matrices
99
Let N(a1,· · · ,a,)= a,(d1)+c1N(a~, · · · ,a~_ 1 ). Then a1x1 +·· ·a,x, = n has nonnegative integral solution x1 = bl> · • · x, = b, whenever n > N(a1, · • · , a,), and so the theorem is proved by induction. 0 Definition 3.1.1 Let 8, a1. · · · , a, be positive integers with s > 2 such that gcd(a1, · · · , a,) = 1. By Theore111 3.1.2, there exists a smallest positive integer tfl(a1 , ···,a.) such that any integer n ;:: t/l(ai. · · · , a,) can be expressed as n = a1x 1 + · · · a.x, for some nonnegative integers z1.··· ,x •. This number t;(a1.··· ,a,) is called the l'robenius number. The .lirobenius problem is to determine the exact value of tfl(al> • · • , a,). Theore111 3.1.1 solves the Erobenius problem when 8 = 2. This problem re111ains open when 8;:: 3. Theorem 3.1.3 (Ke, (143]) Let a1oa:z,a3 > 0 be integers with gcd(a1,t12,a3 ) = 1. Then
t/l(al,tl2,as)~
3
:t/12 )+asgcd(ai>Il2)}:al+l. gc al> ll2
(3.2)
i"'l
Moreover, equality holds when a1a2 aa > 7( gc""'d;;(a=1=,a27))~2
(3.3)
Note that a1>112,as can be permuted in both (3.2) and (3.3).
Proof Let d = gcd(a1,a2) and write a 1 = a~d and a2 =~d. Let u1>u2, xo,yo,zo be integers satisfying a~u1 +~u2 = 1 and a1zo+a2Yo+aszo = n respectively. We can easily see (Exercise 3.1) that any integral solution of a1x + ll2Y + asz = n can be expressed as
=
x xo + a~t1 u1ast2 { Y =Yo a1t1 u2ast2 z = zo+dt2, where t1. t 2 can be any integers. Choose t2 so that dt2 ~ zo ~ dt2 + d  1, and then choose t1 so that ~t1 ~ x 0  u 1a 3 t2 ~ ~t 1 +a~ 1. Note that these choices of t1 and t2 make x ~ 0 and z;:: 0. Let n ~ ccd(!:~ao) +as gcd(a1>a2)1 a1 + 1. By the choices of t1. t2 ad n,
'E!.
a2y
= = =
tl2(Yo a~ t1  u2ast2)
nalxasz;:: na~(~ 1) as(d1) n
da1112 
asd + a1 +as > 02.
Thus y ;:: 0. This proves (3.2). Now assume (3.3). We shall show that (3.4)
100
Powers of Nonnegative Matrices
has no nonnegative integral solutions, and so equality must hold in (3.2). By contradiction, 112 assume that there exist nonnegative integers :z:, y, z satisfying (3.4). Note that gcd(,.,,,.s l da~ a~ and that asgcd(at. a2) = asd, and so We have
=
s
da~~ +aad
Elli = a1:z: +a2y+ aaz. i=l
It follows that d(a~~ +as)= daHz+ 1) +~(y+1) +as(z+ 1).
Since gcd(d,as) = 1, we must have dl(z + 1), and so z + 1 = dk for some integer k Cancel d both sides to get
> 0.
a~a2 = aH:z: + 1) + ~(y + 1) + as(k 1).
H k > 1, then a~~~ a~ +~+as, contrary to (3.3). Thus k = 1. Then by gcd(aLa~) a~I(Y + 1) and ~l(:z: + 1), which lead to a contradiction a~ a~~ 2a~~ D
=1,
Some of the major results in this area are listed below. In each of these theorems, it is assumed that 8 ~ 2 and that a1, · · · , a. are integers with a1 > as > · · · > a8 > 0 and with gcd(a1 ,··· ,a.)= 1. Exercises 3.2 and 3.3 are parts of the proof for Theorem 3.1.7. Theorem 3.1.4 (Schur, [158])
tf>(at. · · · , a 8 ) ~ (a1  l)(a8 Theorem 3.1.5 (Brauer and Seflbinder, [15}) Let Then
When
8

1).
c4 = gcd(a1,as, ···,a,), 1:::;; i:::;; 81.
= 3, Theorem 3.1.5 gives inequality (3.2).
Theorem 3.1.6 (Roberts, (217]) Let a ~ 2 and d a1 =a+ jd, (0:::;; j =:;; s), then
> 0 be integers with d ;1a.
a2 ) t/>(ao,at. ···,a.):::;; ( L8 J + 1 a+ (d 1)(a 1). Theorem 3.1.7 (Lewin, [158])
Let
Powers of Nonnegative Matrices
101
Theorem 3.1.8 (Lewin, [156)) H s ;?: 3, then
A.(a ••• a ) < L(a1 2)(112 1)J
'I'
1•
'
•

2
.
Theorem 3.1.9 (Vitek, [267]) Let i be the largest integer such that i; is not an integer. One of the following holds. (i) H there is an a;, such that there exist for all choices of nonnegative integers p. and 7, a1 '# p.a, + ya,, then cfJ(at. • ..
,a,)~ L~ J(at 2).
(ii) H no such a; exists, then
Theorem 3.1.10 (Vitek, [267]) H s ;?: 3, then
 2)J . .,A.(at, .. • ,a,) < l(as1 1)(al 2
3.2
The Period and The Index of a Boolean Matrix
Definition 3.2.1 A matrix A E Bn can be viewed as a Boolean matri:J:. The Boolean matrix multiplication and addition of (0,1) matrices can be done as they were real matrices except that the addition of entries in Boolean matrices follows the Boolean way:
a+b=max{a,b}, wherea,be {0,1}. Unless otherwise stated, the addition and multiplication of all (0, 1) matrices in this section will be Boolean. Theorem 3.2.1 Let A E Bn. There exist integers p > 0 and k > 0 such that each of the following holds. (i) Hn;?: k and n k = sp+r, where 0 ~ r ~p1, then An= Ak+r. (ii) {I,A,A2 ,··· ,Ar., ... ,Ak+P1 } with the Boolean matrix multiplication forms a semigroup. (iii) {Ar., · · · ,Ak+P 1 } with the Boolean matrix multiplication forms a cyclic group with identity Ae lind generator Ae+l, for some e E {k, k + 1, .. · , k + p 1}. Proof It suffices to show (i) since (ii) and (iii) follow from (i) immediately. Since IBnl 2n• is a finite number, the infinite sequence I,A,A2 ,A3 ,··· must have repeated members. Let k be the smallest positive integers such that there exist a smallest integer p > 0 satisfying Ar. = A"+P.
=
Powers of Nonnegative Matrices
102
Then for any integers > 0, Ak+•P = Ak+P+(• 1}P = Ak+P A(•l)p = Ai:+(•l}p Ak, and so (i) obtains.
= ... =
D
Definition 3.2.2 For a matrix A E Bn, the smallest positive integers p and k satisfying Theorem 3.2.1 are called the period of CIJnvergence of A and the index of CIJnvergence of A, denoted by p = p(A) and k = k(A), respectively. Very often p(A) and k(A) are just called the period and the index of A, respectively. Definition 3.2.3 A irreducible matrix A e Mn is primitive if there exists an integer k such that Ak > 0. An irreducible matrix A is imprimitive if A is not primitive. Example 3.2.1 Let D be a directed ncycle for some integer n ~ 2 and let A Then since Dis strong, A is irreducible. However, A is not primitive.
>0
= A(D).
Definition 3.2.4 Let D be a strong digraph. Let l(D) = {l > 0: D has a directed cycle of length I}= {l1, l2, · · • , l8 }. The index of imprimitivity of Dis d(D) = gcd(l1,l2, · · · ,l,). Theorem 3.2.2 Let A of the following holds: (i) D is strong, and (ii) d(D) 1.
e M;t and let D = D(A).
Then A is primitive if and only if both
=
Proof Suppose that A is primitive. Then A is irreducible and so (i) holds. Moreover, there is an integer k > 0 such that Ak > 0. Note that if Ak > 0, then Ak+1 > 0 also. It follows by Proposition 1.1.2(vii) and by Exercise 3.4, that d(D) must be a divisor for both k and k + 1, and so d(D) 1. Conversely, assume that A satisfies both (i) and (ii). Then A is irreducible. Let l(D) = {h,l2 , ... ,1,}. By (ii), gcd(h,l2, ... ,l,) = 1. By Theorem 3.1.2, 4>(l1, ... ,l,) exists. {1,2,··· ,n}. For each pair i,j e V(D), by (i), D has a spanning Denote V(D) directed trail T(i,j) from ito j. Let d(i,j) = IE(T(i,j))j. Define
=
=
k = max d(i,j) + tf>(l1 , • .. , l,). iJEV(D}
Then for each fixed pair i,j e V(D), k = d(i,j) +a, where a ~ ¢(l~o · · · , l.). By the definition of tf>(h, · · · , l,) and by Proposition 1.1.2{viii), D has a closed trail L of length a. Since T(i,j) is spanning, T(i,j) and L together form a directed (i,j)trial of D. By Proposition 1.1.2(vii), Ak > 0 and so A is primitive. D Definition 3.2.5 Let d ~ 2 be an integer. A digraph Dis cyclically dpartite if V(D) can be partitioned into d sets l/i, l/2, · · · , Vc~ such that D has an arc (u, v) e E(D) only if u E V, and v E Vq such that q p = 1 or q p = 1 d. Lemma 3.2.1 Let D be a strong graph with V(D)
= {v1,v2, ... ,vn}·
For each i with
Powers of Nonnegative Matrices
103
1 ~ i ~ n, let ~ be the greatest common divisor of the lengths of all closed trails of D containing v;, and let d d(D). The each of the following holds. (i) dt ~ = ... = dn = d. (ii) For each pair of vertices v;, v; E V(D), if P1 and P 2 are two (v;, v;)walks of D,
=
=
=
then IE(Pt)l IE(P2)I (mod d). (iii) V (D) can be partitioned into Vi U V2 U · · · U Vd such that any (v;, v; )trial T;,; with v; E Vi and v; E Vj has length IE(T;,;)I j  i (mod d). (iv) H d ~ 2, then D is cyclically dpartite.
=
Sketch of Proof (i). Fix v~ov; E V(D). Assume that D has a (v1 ,v;)trail T;,; oflength s, a (v;, v1)trail T;,; of length t, and a (v;, v; )trail of length t;. Then both s + t and s + t + t; are lengths of closed trails containing v1, and so ~l(s + t) and ~l(s + t + t;). It
=
=
follows that ~lh;, and so d;ld;. Since i,j are arbitrary, d1 ~ = ·· · = dn d', and did'. Let l be a length of a directed cycle C of D. Then C contains a vertex v; (say), and so d'll. It follows that d'ld. (ii). Let Q be a (v;, v1)trail. Then each P; U Q is a closed trail containing v1• By (i), IE(P1)I
+ IE(Q)I = !E(P2)I + IE(Q)I
(mod d).
(iii). Fix v 1 • For 1 ~ i ~ d, let Vi = {v; E V : any directed (v~o v1)path has length i mod d}. H v; E Vi and v; E Vj, then D has a directed (v~ov;)path of length l', and a directed (v1, v;)path of length l. Thus l' i mod d and l + l' j mod d, and sol= j  i
=
mod d. (iv) follows from (iii).
=
O
By direct matrix computation, we obtain Lemma 3.2.2 below. Lemma 3.2.2 Let A E M;t be a matrix such that for some positive integers n1,na,· · • ,nd matrices A; E Mn,,n•+• with each A; having no zero row nor zero column,
(3.5)
Then each of the following holds.
=
(i) Ad diag(Bt. · · · , Bd), where B; counted modulo d.
(ii) If for someintegerm square matrix, then dim.
= Tit!J1 A;
and where the subscripts are
> 0, Am= diag(J1,J2 ,· • • ,Jd), such that each J; is anonzero
Theorem 3.2.3 Let A E M;t be an irreducible matrix with d the following holds.
= d(D(A)) > 1.
Each of
104
Powers of Nonnegative Matrices
(i) There exist positive integers n1, n2, · · · , n4 and matrices A; E Mn,,n•+•• and a permutation matrix P such that P AP 1 has the form of (3.5). (ii) Each A; in (i) has no zero row nor zero column, 1 :::; i :::; d. (iii)
Ilt=t A; is primitive.
=
Proof (i). Let D D(A). By Lemma 3.2.1(iii), V(D) has a partition Vi u V2 U · · · U V4 satisfying Lemma 3.2.1(iii). Let n; IV; I, (1 :::; i :::; d). By Lemma 3.2.1(iii), any arc of D
=
=
is directed from a vertex in V; to a vertex in Vi+I. i 1,2, · · · , d (mod d). With such a labeling, D has an adjacency matrix of the form in (i). (ii) follows from t!le assumption that D is irreducible. (iii). Let l(D) = {l1.l2,··· ,l,}. Then
gcd(~.• ~.··· .~)
= 1, and so by Theorem
. 3.1.2, ko q, d' · · · , d) extsts. Choose u,v e Vj. Since Dis strong, there exists a directed closed walk W(u,v) from u to v. By Lemma 3.2.1(iii), d divides IE(W(u, v))l. Let ( ll
=
l,
_ { IE(W(u,v))l t max ,.,.,ev. d
+
ko} .
=
Then t ~ ko, and so for any u,v E V1 , td IE(W(u,v))l +kd for some integer k ~ k 0 • By Theorem 3.1.2, and since V(W(u,v)) V(D), for any pair ofvertices u,v E Vi, D has a directed (u, v)walk of length td. It follows by Proposition 1.1.2(vii) that every entry of the n 1 x n 1 submatrix in the upper left conner of (PAP 1 )td is positive, and so by Lemma
=
3.2.2(i), Bf
> 0.
This proves (iii).
D
The corollaries below follow from Theorem 3.2.3. Corollary 3.2.3A Let A E M;t be irreducible with d positive integer such that D(A) is cyclically dpartite.
= d(D(A)).
Then d is the largest
=
Corollary 3.2.3B Let A E M;t be an irreducible matrix with d d(D(A)). Then each of the following holds. (i) If A has the form (3.5) and satisfies Theorem 3.2.3(ii), then for each j with 1 :::; j :::; d, 1 A; is primitive, where the subscripts are counted modulo d. BJ (ii) (Dulmage and Mendelsohn, [77)) There is a permutation matrix Q (called a canonical transformer of A) such that
= IIt.!J
Q 1 A 4 Q = diag(Bt,B2,··· ,Bd), where each B; is primitive. (iii) (Dulmage and Mendelsohn, [77]) Let Q be a canonical transformer of A. The number d d(D(A)) is thesmallestpowerofQ 1 AQ whichhastheformofdiag(Bt.B2 ,· • • ,B,l), where each B; is primitive.
=
Powers of Nonnegative Matrices
105
Corollary S.2.SC Let A E Mt be a irreducible matrix with d(D(A)) d(D(A)).
> 1. Then p(A)
=
Corollary S.2.3D Let A E B,.. Each of the following holds. (i) p(A) 1 if and only if A is primitive. (ii) If p p(A) > 1, then A is similar to
=
=
(3.6)
such that each Ao is primitive. (The form (3.6) is called the imprimitive standard form of the matrix A, and the integer p is the index of imprimitivity of A.) Lemma 3.2.3 Let A E B,. be a matrix having the form (3.5) and satisfy Theorem 3.2.3(ii).
If
m'.. Ao is irreducible, then A is also irreducible and djp(A). 1
Sketch of Proof By Lemma 3.2.2(i),
Ad= diag(Bt.B2,··· ,Bd),
=
where Bt fi~1 Ai is irreducible. By Theorem 2.1.1(v), there is a polynomial f(x) such that f(Bl) > 0. Let g(x) xf(x). Then for each i 1, 2, · · · , d,
=
=
g(Bi)
= Bd(Bi) = (A.Ai+l · · · Ad)f(B1)(A1A2 ···Ail)·
Since A satisfies Theorem 3.2.3(ii), each Ai has no zero rows nor zero columns. It follows by f(Bl) > 0 and by the operations of Boolean matrices that g(Bi) > 0. Direct computation leads to
and so A is irreducible by Theorem 2.1.l{vi). Let p = p(A). By Corollary 3.2.3C, p = d(D(A)). Let m > 0 be a length of a closed trail of D(A). Then Am has a diagonal1entry. By Lemma 3.2.2(ii), djm, and so djp(A).
0 Theorem 3.2.4 Let B E B,. such that B ~, A for some A such that A has the form > 1 and satisfies Theorem 3.2.3(ii) and 'theorem 3.2.3(iii). Then B is inlprimitive and d p(B).
in (3.5) with d
=
Proof Since B ~, A, p(B) = p(A) and B is primitive exactly when A is primitive. Thus it suffices to show that A is imprimitive and p(A) =d.
Powers of Nonnegative Matrices
106
By Lemma 3.2.3 and since A satisfies Theorem 3.2.3(ii) and Theorem 3.2.3(iii), A is irreducible and dlp(A). Thus p(A) > 1 and so by Corollary 3.2.3D, A is imprimitive. It remains to show that p(A)Id. By Corollary 3.2.3D(ii) and Lemma 3.2.2, A4 = diag(B~. B 2 , .. • , B.,), where the B.'s are defined in Lemma 3.2.2(ii). Since A satisfies Theorem 3.2.3(iii), B1 is primitive, and so for some integer k > 0, both Bf > 0 and B~+l > 0. It follows by Proposition 1.1.2(vii) that D(A) has closed walks of length kd and (k + 1)d, and so d(D(A))Id. By Corollary 3.2.3C, p(A) = d(D(A)), and so p(A)Id, as desired.
D
Example 3.2.11£ A :::!p B, then D(A) and D(B) are isomorphic, and so by Corollary 3.2.3C and Corollary 3.2.3D that A is primitive if and only if B is primitive. However, if A .....,. B, then that A is primitive may not imply that B is primitive. Consider
A=
0 0 1 1 1 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0
0 0 0 1 0
andB=
0 0 1 0 1
1 0 0 0 0
0 1 0 0 0
1 0 0 0 0
0 0 0 1 0
We can show (Exercise 3.11) that A is primitive, B is imprimitive and A .....,. B. Theorem 3.2.5 (Shao, (246]) Let A E B,. The there exists a Q E P,. such that AQ is primitive if and only if each of the following holds: (i) Each row and each column of A has at least one 1entry; (ii) A (iii)
ft P,.; and
Theorem 3.2.6 (Moon and Moser, (204]) Almost all (0,1)matrices are primitive. In other words, if P,. denote the set of all primitive matrices in B,., then lim IPnl
n+oo IBnl
=1.
Powers of Nonnegative Matrices
3.3
107
The Primitive Exponent
Definition 3.3.1 A digraph D is primitive digraph if A(D) is primitive. Let D be a primitive digraph with V = V(D). For u.,v E V, let y(u.,v)
=
y(D)
=
min{k : D has an (u,v)walk oflength k} and max y(u, v).
u,vEV(D)
Let A be a primitive matrix. The primitive exponent of A is y(A)
=y(D(A)).
Propositions 3.3.1 and 3.3.2 below provide an equivalent definition and some elementary properties of y(A). Proposition 3.3.1 Let A be a primitive matrix and let D = D(A). Each of the following holds. (i) y(A) = min{k : A• > 0}. (ii) If y(A) = k, then for any u, v E V(D), D has an (u, v)walk of length at least k. (iii) For u, v E V(D), let dv(u., v) be the shortest length of an (u, v)walk in D, and let l(D)={Zt,h,··· ,18 }. Then y(u.,v)::;; dv(u.,v)
+ 0, Am is primitive. (ii) Given any u E V(D), there exists a smallest integer hu > 0 such that for every v E V(D), D has an (u, v)walk of length h.,. (This number hu is called the reach of the vertex
u..)
(iii) Fix an u E V(D). For any integer h ;::: hu, D has an (u., v)walk of length h for every v E V(D). (iv) IfV(D) = {1,2,··· ,n}, theny(D) =max{ht,h2,··· ,h,.}. Sketch of Proof (i) and (ii) follow from Definitions 3.2.3 and 3.3.1, and (iv) follows from (iii). (iii) can be proved by induction on h' = h  hu. When h' > 0, since D is strong, there exists wE V(D) such that (w,v) E E(D). By induction D has an (u,w)walk of length hu + h'  1 = h  1. Theorem 3.3.1 (Dulmage and Mendelsohn, [77)) Let D be a primitive digraph with IV(D)I = n, and lets be the length of a directed cycle of D. Then y(D) ::;; n
+ s(n 
2).
108
Powers of Nonnegative Matrices
Proof Let C8 be a directed cycle of D with length 8. Note that D(A•) has a loop at each vertex in V(C,). Let u, v E V(D) = V(D(A•)) be two vertices. If u E V(C,), then since D(A') is primitive (Proposition 3.3.2), and since D(A•) has a loop at u, D(A') has an (u, v)walk of length n 1. By Proposition 1.1.2(vii), the (u, v)entry of A•(nl) is positive. Hence D has a (u, v)walk of length 8(n 1). If u ¢ V(C,), then since D is strong, D has an (u, w)walk of length at most n s for somew E V(C,), and soD has an (u,v)walkoflength at mostn8+8(n1) = n+8(n2). It follows by Proposition 3.3.2 that y(D) ::; n + 8(n 2). D Corollary 3.3.1A (Wielandt, (274]) Let D be a primitive digraph with n vertices, then
y(D) ::; (n 1)2

1.
Sketch of Proof By induction on n. When n;::: 2, D bas a cycle of length 8 > 1, since D 8 ::; n  1, and so Corollary 3.3.1A follows from Theorem 3.3.1. 0
is strong. Since D is primitive,
Proposition 3.3.3 Fix i E {1, 2} and let D be a primitive digraph with n;::: 3+i vertices. Each of the following holds.
(i) If y(D)
= (n 1) 2 + 2 i, then D is isomorphic to D;.
(ii) There is no primitive digraph on n vertices such that
Proof (i). Let 8 be the length of a shortest directed cycle of D. By Theorem 3.3.1, (n 1) 2 + 2 i y(D) ::; n + 8(n 2). It follows that 8 n 1. Build D from this n 1 directed cycle to see that D must be D 1 or D 2 • (ii). By (i), 8 ::; n 2, and so by Theorem 3.3.1, y(D) ::; n 2  3n + 4. D
=
=
Example 3.3.1 Let C,. = v1 v2 • • • , VnV! be a directed cycle with n ;::: 4 vertices. Let D1 be the digraph obtained from C,. by adding an arc (vnt.vl). Then by Proposition 3.3.2 and Theorem 3.1.2,
y(D1)
= max{hv,} = hv. = n + t/l(n, n 1) = (n 1) + 1. 2
Thus the bound in Corollary 3.3.1A is best possible. Example 3.3~2 (Continuation of Example 3.3.1) Assume that n;::: 5. Let D2 be obtained from D 1 by adding an arc (v,.,V2). Note that y(D2 ) = (n 1) 2 • Proposition 3.3.3 indicates that D 1 is the only digraph, up to isomorphism, that have ~(D 1 ) = (n 1) 2 + 2  i, (1 ::; i ::; 2). Moreover, there will some integers k such that 1 ::; k::; (n 1) 2 + 1 but no primitive digraph D satisfies y(D) = k.
Powers of Noll1legative Matrices
109
Example 3.3.3 (Holladay and Varga, (129]) Let n ~ d > 0 be integers. H A e Mt is irreducible and has d positive diagonal entries, then A is primitive and y(A) :5 2n d 1 (Proposition 3.3.4(ii) below). Let A(n,d) E Mn be the following matrix 1
1
0 1
A(n,d) =
0 1
0
1
0
where A(n, d) has exactly d positive diagonal entries. Then y(A(n, d)) =2nd 1.
Proposition 3.3.4 Let D be a strong digraph with n = IV(D)I and let d > 0 be an integer. Then each of the following holds. (i) H D has a loop, then D is primitive. (ii) H D has loops at d distinct vertices, then y(D) :5 2n  d  1. (iii) The bound in (ii) is best possible. Sketch of Proof (i) follows from Theorem 3.2.1. (ii). By (i), D is primitive. For any u, v E V(D), D has an (u, w)walk of length n d for some vertex w with a loop, and a (w, v )walk of length at most n  1. Thus hu :5 2n  d 1, and (ii) follows from Proposition 3.3.2(iv). (iii). Compute y(A(n,d)) for the graph A(n,d) in Example 3.3.3 to see y(A(n,d)) = 2nd1.0 Definition 3.3.2 For integers b > a > 0 and n > 0, let (a, W
=
{k : k is an integer and a :5 k :5 b}
En
=
{k : there exists a primitive matrix A E Mt such that y(A) = k}.
By Theorem 3.3.1 and by Proposition 3.3.3, En c (1, (n 1) 2
+ 1)0.
Theorem 3.3.2 (Liu, (168]) Let n1 ~ d ~ 1 be integers let Pn(d) be the set of primitives matrix in Mt with d > 0 positive diagonal entries. H k e {2,3,··· ,2nd 1}, then there exists a matrix A e Pn(d) such that y(A) = k. Sketch of Proof For any integer k e {2, 3, · · · , n} we construct a digraph D whose adjacency matrix A satisfying the requirements. If 1 :5 d < k :5 n, the consider the adjacency matrix of the digraph D in Figure 3.3.1.
Powers of Nonnegative Matrices
110
k+l
k+2
n1 n
Figure 3.3.1 Note that y(i,j) { ; :
ifi =j = 1, otherwise.
Thus y(A) = k in this case. Digraphs needed to prove the other cases can be constructed similarly, and their constructions are left as an exercise. D
Theorem 3.3.3 (Shao, [245]) Let A E Bn be symmetric and irreducible. (i) A is primitive if and only if D(A) has a directed cycle of odd length. (ii) If A is primitive, then y(A) :::;; 2n  2, where equality holds if and only if
Powers of Nonnegative Matrices
111
Proof Let D = D(A). Since A is reduced and symmetric, Dis strong and every arc of D lies in a directed 2cycle. Thus by Theorem 3.2.2, D is primitive if and only if D has a
directed odd cycle. Assume that A is primitive. Then A2 is also primitive by Proposition 3.3.2. Since A is symmetric, a loop is attached at each vertex of V(D(A 2 )) = V(D). Thus by Proposition 3.3.4(ii), y(A2 ) ~ n 1, and so A2(nl) > 0. Hence y(A) ~ 2n 2. Assume further that y(A) = 2n 2. Then in D(A2 ), there exist a pair of vertices u, v such that the shortest length of a (u,v)path in D(A2 ) is n 1. It follows that D(A2 ) must be an (u, v )path with a loop at every vertex and with each arc in a 2cycle. If D has a vertex adjacent to three distinct vertices u',v',w' E V(D), then u'v'w'u' is a directed 3cycle in D(A2 ), contrary to the fact that D(A2 ) is an (u, v)path with a loop attaching at each vertex. It follows that D(A) is a path of n vertices and has at least one loop. By 7(A) 2n 2 again, D has exactly one loop which is attached at one end of the path. D
=
Theorem 3.3.4 (Shao, (245]) For all n ;;::: 1, E,.
s;; .En+1 •
Moreover, if n ;;::: 4, then
E,.CEn+l·
Proof Lett E E,., and let A= (a0;) E B,. be a primitive matrix with y(A) = t. Construct a matrix B = (b,3 ) E Bn+l as follows. The n x n upper left corner submatrix of B is a..,n· Let A, for 1 ~ i ~ n, bo,n+l =a;,,., for 1 ~ j ~ n, bn+l,j =a,.,;, and bn+l,n+l {1, 2, · · · , n }. Then D(B) is the digraph obtained from D(A) by adding a V(D(A)) new vertex n+ 1 such that (i,n+ 1) E E(D(B)) if and only if (i,n) E E(D(A)), such that (n+l,j) E E(D(B)) ifandonlyif(n,j) E E(D(A)), and such that (n+l,n+l) E E(D(B)) if and only if (n,n) E E(D(A)). By Theorem 3.2.2, B is also primitive. By Definition 3.3.1 (or by Exercise 3.13(iv)), y(B) = y(A), and sot E En+l· If n;;::: 4, then by Example 3.3.1, n 2 + 1 E En+l E,., and so the containment must be proper. D
=
=
Ever since 1950, when Wielandt published his paper [274) giving a best possible upper bound of y(A), the study of y(A) has been focusing on the problems described below. Let A denote a class of primitive matrices. (MI) The Maximum Index problem: estimate upper bounds of y(A) for A E A. (IS) The Set of Indices problem: determine the exponent set
E,.(A)
= {m
: m > 0 is an integer such that for some A E A, y(A)
= m}.
(EM) The Extremal Matri:l: problem: determine the matrices with maximum exponent in a given class A. That is, the set
EM(A) ={A E A : y(A) = max{"Y(A') : A' E A}}.
Powers of Nonnegative Matrices
112
(MS) The Set of Matrices problem: for ay0 E u,.E,.(A), determine the set of matrices MS(A,yo) ={A E A : y(A} ='YO}· In fact, Problem EM is a special case of Problem MS.
We are to present a brief survey on these problems, which indicates the progresses made in each of these problems by far. First, let us recall and name some classes of matrices.
Some Classes of Primitive Matrices Notation P,. P,.(d)
T,. F,. DS,.
CP,. NR,
s... s~
Definition n x n primitive matrices in B,. matrices in P,. with d positive diagonal entries matrices A E P,. such that D(A) is a tournament fully indecomposable matrices in P,.
P,.nn,. circulant matrices in P,. nearly reducible matrices in P,. symmetric matrices in P,. matrices in S,. with zero trace
Problem MI This area seems to be the one that has been studied most thoroughly. Results Notation Authots and References y(A) $; (n 1} 2 + 1 P,. Wielandt, [274] Dulmage and Mendelsohn, [77] y(A) $; n + s(n 2) y(A) $; 2n  d 1 Holladay and Varga, [129] P,.(d) y(A) $; n+ 2 T,. Moon and Pullman, [205] y(A) $; n 1 F,. Schwarz, [232] n2 L4+1J ifn 5,6, or y(A) $; DS,. Lewin, [159] n 0 (mod4} 2
=
=
L~J 4
CP,. NR,
s.. son
KimButler and Krabill [144] Brualdi and Ross, [36] Sha.o, [245] Liu et al, [177]
y(A) y(A} y(A) y(A}
otherwise $; n 1 $; n 2  4n + 6 $; 2n  2 $; 2n  4
Powers of Nonnegative Matrices
113
Problem IS Let Wn = (n  1) 2 + 1. Wielandt (1950, [274]) showed that En ~ [1, wn] 0 ; Dulmage and Mendelsohn (1964, [77]) showed that En C [1, wn] 0 • In 1981, Lewin and Vitek [157] found all gaps (numbers in [1, wn] 0 but not in En) in [l ~n j + 1, Wn and conjectured that [1, l W; j has no gaps. Shao (1985, [247]) proved that this LevinVitek Conjecture is valid for sufficiently large nand that [ 1, L~n J+ 1) 0 has no gaps. However, when n 11, 48 ¢ E 11 and so the conjecture has one counterexample. Zhang continued and complete the work. He proved (1987, (282]) that the LevinVitek Conjecture holds for all n except n 11. Thus the set En for the class Pn is completely
r
r
=
=
determined. Results concerning the exponent set in special classes of matrices are listed below. Notation Pn(n) Pn(d)
Authors and References Guo, [110] Liu, [168]
Results [1,nW [2,2nd W
Tn Fn DSn CPn NR,.
Moon and Pullman, [205] Pan, [209]
[3,n+2] 0 [1,n W Unsolved Unsolved Characterized [1,2n 2] 0 \ S S = {m is an odd integer and n ~ m ~ 2n  3} [2, 2n  4] 0 \ 81 m is an odd integer 81
1~d 1, define
mn,p ={A E Bn : p(A) Note that IBn,l
=p}.
= Pn, the set of all n by n primitive matrices. Denote k(n,p)
=max{k(A)
: A e mn,p}·
Theorem 3.4.1 (Heap and Lynn, (119]) Write n = pr + 8 for integers r and 8 such that 058 p,
there exist j and q (1 ~ j ~ t, 0 ~ q ~ p t) such that some i; A;; Al+q· It follows that for each l = 1, 2, · · · ,p,
=
A,(k)
Therefore k(A)
~
= l + q (mod p), and so
=
A,(q)A,+q(k q)
=
A,(q)A,+q(ph)A,+q+ph(k ph q)
= =
A,(q)A;,; (ph)AI+q+ph(k ph q) A,(q)JA,+q+ph(p t  q)
k follows by Lemma 3.4.3.
= J.
O
Lemma 3.4.5 Let A= (n1 ,A1 ,~,A 2 , ... ,A11 ,n 1) E B,. be an irreducible matrix with p(A) = p, and let m = min{nt.n2 , ... ,np}. Then k(A) ~ p(m2

2m+ 3)  1.
Proof It follows from Lemma 3.4.4 and Corollary 3.3.1A. O Proof of Theorem 3.4.2 Since k(A) = k(PAPt) for any permutation matrix P, we may assume that A= (nt,At,n 2 ,A2, ... ,n,.,A,.,nt), where nt +n2 + ·· · +n,. = n. Let m min{n1. .. · ,np}· Since n rp+ s where 0 ~ s < p, m ~ r.
=
=
Powers of Nonnegative Matrices
117
Case 1 m 5 r  1. Then r ;?:: m + 1 ;?:: 2, and so by Lemma 3.4.5,
k(A)
5
p(m2
" · · ,Ai+s1 is a J, since these matrices has no zero rows nor zero columns. It follows that Ai(s) = J, i 1, 2, · · · ,p, and so k(A) 5 s, by Lemma 3.4.3. 0
=
Schwarz indicated in [233] that in Theorem 3.4.2, the upper bound can be reached when n = 7 and p 2. Can the upper bound be reached for general n and p? Shao and Li [252] completely answered this question. Two of their results are presented below. Further details can be found in [252].
=
=
Theorem 3.4.3 (Shao and Li, [252]) Let A E IBn,p with p = 2 and n 2r + 1, r > 1. Then k(A) k(n,p) if and only if there is a permutation matrix P such that P AP 1 e {M1, Ma, M3 }, where
=
M
1
=[
0 H1 ] M YiO ' 2
= [ Y20 0 H1 ]
'
M
3
=[
0 H2 ] Y10'
and where
0
1
0
0
0 1
H1 = 1 0
0 1
1
0 0
1
; Ha =
0 0
1 1
(r+l)xr
0 1
0
0
0
0
and
Y,
~ [I
l 1
1
, Ya= rx(r+l)
[I
1
:L~,,
(r+l)xr
Powers of Nonnegative Matrices
118
Theorem 3.4.4 (Shao and Li, (252]) When r > 1, r = 1 and 8 > 0, or r k(n,p) can be partitioned into 2' + the matrices A e ffin,p with k(A) and 1 equivalence classes, respectively, under the relation ~P·
=
= 1 and 8 = 0, 8 •
2•1, 2' 1,
Theorem 3.4.2 can be viewed as a Wielandt type upper bound of the index of convergence of irreducible nonprimitive matrices. In order to obtain a DulmageMendelsohn type upper bound, we need a few more notions. Definition 3.4.3 Let A E ffin,p· For 1 ~ i,j ~ n, let kA(i,j) be the smallest integer (A1);.J, for all l ;?:: k; and define mA(i,j) be the smallest k ;?:: 0 such that (A1+P)i,j integer m ;?:: 0 such that (Am+"P}iJ = 1 for all a ;?:: 0.
=
Example 3.4.1 Let A E IBn,p and let D = D(A) with V(D) = {v1,t12,··· ,vn}· By Proposition 1.1.2(vii), kA(i,j) is the smallest integer k;?:: 0 such that for each l ;?:: k, D has a directed (v1, VJ )walk oflength l + p if and only if D has a directed (v;, VJ )walk of length l; and mA(i,j) is the smallest integer m ;?:: 0 such that for any a ;?:: 0, D has a directed (v;, VJ )walk of length m + ap. Definition 3.4.4 Let A E ffin,p and D = D(A) with V(D) = {v1o··· ,vn}, l(A) = {l1,l2,··· ,l,} denote the set of lengths of directed cycles in D(A), and let d,(A)(i,j) denote the shortest length of a directed (v1,vJ)walk in D that intersects directed cycles in D of lengths in Z(,P). Also, let p = gcd(h, 12 , • • • ,l.), and define

l1 z2
z.
3.5
+ '"2 i!: (r + S)p pd(Plo pt) = P
The Index of Convergence of Reducible Nonprimitive Matrices
In 1970, Schwarz presented the first upper bound of k(A) for reducible matrices in B,.. Little had been done on this subject until Shao obtained a DulmageMendelsohn type upper bound in 1990.
Powers of Nonnegative Matrices
121
Theorem 3.5.1 (Schwarz, [233]) For each A is reducible, then k(A) ~ (n 1) 2 •
e Bn, k(A)
~
(n 1) 2 + 1. Moreover, if A
Theorem 3.5.2 (Shao, [243]) Let A E Bn, D = D(A), and D11D2,··· ,De the strong components of D. Let no = m&XtSiSc{IV(Di)l}, and let so be the maximum of shortest lengths of directed cycles in eacll Di, 1 ~ i ~ c, if D has a directed cycle, or 0 if D is acyclic. Then k(A)
~ n + so ( d~) 
2) .
Theorem 3.5.2 implies Theorem 3.5.1 and Theorem 3.3.1 (Exercise 3.17). In Theorem 3.5.3 below, Shao (243] applied Theorem 3.5.2 to estimate the upperbound of k(A) for reducible matrices A. Lemma 3.5.1 Let X E Bn have the following form:
B 0] X=[ XT a
(3.8)
where BE Bn 1 and a E {0, 1}. Each of the following holds. (i) H a = 0, then k(B) ~ k(X) ~ k(B) + 1. (ii) H a= 1, then k(B) ~ k(X) ~ max{k(B), n 1}. Proof The relationship between Xil+1 and Bi:+ 1 can be seen in Exercise 3.19. Thus by Definition 3.2.2, k(B) ~ k(X). When a= 0, Lemma 3.5.1(i) follows immediately from direct matrix computation (Exercise 3.19(i)). Assume that a= 1. Since B E Bn1> for any k ~ n 2, I+ B + ···B" = I+ B + ·· · + nn2 , by Proposition 1.1.2(vii). By matrix computation (Exercise 3.19(ii)), if and so k(X) ~ max{k(B),n1}. 0 m~ max{k(B),n1}, then xm
=Xm+p(B),
Lemma 3.5.2 Let n ~ 3 be an integer and let A E Bn be a reducible matrix. H k(A) > n 2  5n + 9, then there exists a matrix X E Bn with the form in (3.8) such that B E Bn1 is primitive, such that D(B) has a shortest directed cycle with length n  1, and such that either A ~P X or AT ~P X. Proof Let D = D(A). H every strong component of D is a single vertex, then An = An+l = 0, and so k(A) ~ n < n 2  5n + 9, a contradiction. Hence D must have a strong component D 1 with IV(D1 )1 > 1. By Theorem 3.5.2, k(A>
~ n +so ( dC~> + 2) ,
(3.9)
where so and no are defined in Theorem 3.5.2. Since A is reducible, so~no~n1.
(3.10)
Powers of Nonnegative Matrices
122
If so $ n 3, then by (3.9) and (3.10), k(A) $ n+ (n 3)(n 1 2} = n 2  5n +9, a contradiction. If s0 = n 2 and no= n 2, then by (3.9), k(A) $ n+ (n 2)(n 4) < n 2  5n+9, another contradiction. If so = n 1, then by (3.10), no n 1. Then D has a strong component D 1 which is a directed cycle of n 1 vertices. Therefore we may assume that A or AT has the form (3.8), and that the submatrix Bin (3.8) must be a permutation matrix. It follows that ED In1 = Bnl, and so k(B) = 0. By Lemma 3.5.1, k(A) $ ma.x{k(B) + 1, n 1} ~ n  1 < n 2  5n + 9, a contradiction. Therefore it must bethecasethat s0 = n2and n 0 = n1, and so we may assume that A or AT has the form (3.8) and that the associate directed graph D(B) of the submatrix B in (3.8) has shortest directed cycle length n 2. It remains to show that B is primitive. If not, then by Theorem 3.2.2, every directed cycle of D(B) has length exactly n 2. Since D(B) is strong, B Bnl, and so k(B) 0, which implies by Lemma 3.5.1 that k(A) $ n 2  5n + 9, a contradiction. Therefore, B must be primitive. 0
=
=
=
=
Theorem 3.5.3 (Shao, [243]) Let A E Bn be a reducible matrix. Then k(A) $ (n 2) 2 + 2.
(3.11)
Moreover, when n 2!: 4, equality holds in (3.11) if and only if A where 0
1
0
0
1
1
1
0
::::!p
Rn
or AT
::::!p
R,.,
0 1
0
Rn= 1 0 0 0 0 0 0 0 0
Proof Note that (3.11) holds trivially when n E {1, 2}. Assume that n > 3. Since n 2  5n + 9 $ (n  2) 2 + 2, we may assume that k(A) > n 2  5n + 9. By Lemma 3.5.2, we may assume that A or AT has the form (3.8). By Theorem 3.5.1 or Theorem 3.5.2, k(B) $ (n 1) 2 + 1 and so (3.11) follows by Lemma 3.5.1. Now assume that n 2!: 4 and that A ::::!p Rn or AT ::::!p .R,.. Then direct computation
Powers of Nonnegative Matrices
123
yields
.Ri"2)2 +1
=[ 0 1
=
:l'
1 0
=
and so k(.R,.) (n 2) 2 + 2. Thus k(A) = k(.R,.) (n 2) 2 + 2. Conversely, assume that n ~ 4 and k(A) (n  2) 2 + 2 > n 2  5n + 9. By Lemma 3.5.2, we may assume that A~, X or AT~, X, where X is of the form (3.8). Note that k(B) $; (n 1)2 + 1. H a= 1 in (3.8), then by Lentma 3.5.1,
=
= k(X) $; max{k(B), n 1} $; (n 2) 2 + 1, contrary to the assumption that k(A) = (n 2) 2 + 2. Therefore we must have a= 0, and k(A)
so by (nk(B)
2? + 2 = k(A) = k(X) $; k(B) + 1 $; (n 2) 2 + 2,
= (n 2) + 1. By Proposition 3.3.1 and Proposition 3.3.3, D(B) must be the digraph 2
in Example 3.3.1 with n 1 vertices, and so we may assume that B is the (n 1) x (n 1) upper left comer submatrix of .R,.. It remains to show that the vector in the last row of A in (3.8) is xT (1, 0, 0, · • · , 0). By direct computation,
=
B(n2) 2
=[
0 J1x(n2)
Since k(X)
Jcn2)x1 ] • Jn2
= k(A) = (n2) 2+2, x nJs(n 1) +
l! and gcd(s, n) > 1.
Theorem 3.5.7 has been improved by Zhou [285]. An important case of Theorem 3.5.7 is when s
= 1.
Theorem 3.5.8 (Liu and Shao, [183], Liu and Li, [179]) Let n ~ d ~ 1 be integers. Suppose that A E B,. has d positive diagonal entries. Then (n  d 1) 2 + 1 k(A) < { 2nd 1 
=
if 1 < d
+;)
J,.(d)
={
if 1 < d 
< 2n3y'4H 
2
{1, 2, .. · , 2nd 1}
Theorem 3.5.10 (Liu, Shao and Wu, (184]) H A E 0,., then
k(A) 5 {
= 5, 6 or n = 0 (mod 4)
r~· + 11
if n
fztl
otherwise.
Moreover, these bounds are best possible. The extremal matrices of k(A) in Theorem 3.5.8 and Theorem 3.5.10 have been characterized by Zhou and Liu ([288] and [287]).
3.6
Index of Density
Definition 3.6.1 For a matrix A E B,., the mazimum density of A is
and the indez of ma:rimum density of A is
h(A) = min{m > 0 : IIAmll = JS(A)}. For matrices in m,.,,, define h(n,p)
= max{h(A)
: A E ffin,p}.
126
Powers of Nonnegative Matrices
Example 3.6.1 Let A e B,. be a primitive matrix. Then p(A) = n 2 and h(A) = "Y(A). Thus the study of the index of density will be mainly on imprimitive matrices. For a generic matrix A E B,. with p(A) > 1, p(A) < n 2 and h(A) $ k(A) +p1 (Exercise 3.23).
Jn1xn1
0
0
J1>2xn2
0
0
Bo = [
0 0
Jn,xn,
0
0
Jn,xna
0
0 0
0 0
0 0
B1= Jn.,xn,
and
B, = Bi, 1 $ i $
J,..,_,xn.,
0
p  1. Then
k(A) =min{m
>0
: Am =B;,j
=m (modp),O $ j
$p1}.
Proof Let m 0 =min{m
>0
: Am= B;,j ::m (modp),O $ j $p1}.
Let k = k(A) and write k = rp + j, where 0 $ j < p. Since each A,(p) is primitive, "Y(A,(p)) exists. Let e = max1SI~{"Y(A,(p))}. Then A""= Bo, and so
However, as A""= Bo, A(r+e)p = Bo also, and so Ale
== Alc+ep = A(r+•)P+i = B;. Thus
mo$k. On the other hand, write m 0 = lp + j with 0 $ j A""'+"= B;A" B;, and so k $mo. 0
< p. Then
A""' = B;, and so
=
Corollary 3.6.1 Let A= (n1,A11n2,A2,··· ,n,,Ap,n1) Em,.,, and let "Yi Then for each i = 1,2,··· ,p,
p("Yi  1)
< k(A) < P("Y; + 1).
Proof Note that A0 =I, for each i, by the definition of "Yh
(A1(p))7 • 1 < J ==> APh• 1> < B 0 ==> k(A) > p('y,  1).
= "Y(Aa(p)).
Powers of NODDegative Matrices
127
To show k(A) < P("Yi + 1), for each j with 1 $ j $ p, it suffices to show that A;(p("Yi)1) = J. Writei :aj+t (modp), whereO $ t < p. Then A,= A;+t and so (AJ+t(p))r• = J. It follows
A;(p(y.+ 1)  1)
=
A;(py, + p 1)
= =
(A;·· ·A;+i · · ·AJ+p1)7 ' A;·: ·AJ+p2 A;(t)(AJ+t(p)) 7 ' AJ+t(p 1 t)
= J.
D Definition 3.6.2 Let aT= (a1 ,G2, •· · ,a,) be a vector. The circular period of a, denoted by r(a) or r(at, a2, • • · , a,), is the smallest positive integer m such that
With this definition, r(a11 a2,··· ,a,)IP· Heap and Lynn [119] started that investigation of h(A) and (n,p). Sbao and Li [251] gave an explicit expression of h(A) in terms of the circular period of a vector, and completely determined ii(n,p). Theorem 3.6.2 (Heap and Lynn, [119], Shao and Li, [251]) Let A E ffin,p with the form A= (nt,At,··· ,n,,A,,nl), and letT =r(n11 n 2 ,··· ,n,). Each of the following holds. p
(i) p(A)
= L: n~, i=l
(ii) h(A)
= min{m
: m?: k(A), rim}= rlk~) J.
Sketch of Proof Let m?: 0 be an integer with m = j (mod p), such that 0?: j < p. Define n; = n;' whenever j j' (mod p). By Theorem 3.6.1,
=
p
m?: k(A) ~Am =B; ~ IIAmll
=Enini+i• i=l
and p
m
< k(A) ~Am< B; ~ IIAmll < Enini+i· i=l
p
L: n
p
=
1 p
2  E n,n,+i 2 E 1, 0 < 8 < p if r 1, 0 < 8 < p . ifr 1,8 = 0.
= =
= k(n,p), then h(A) = ii.(n,p).
Sketch of Proof For each A E 1Bn,p 1 A~, (nt, At.··· , n,, A,, n1).
LetT= tau(n1. n2, · · · , n,). By Theorem 3.6.2, k(A)
= rl k(A) J ~ Plk(A) J ~ Plk(n,p) J. p
T
p
Assume that for some A~, (n1, At.··· , n,, A,, n 1) E IBn,,, k(A) = k(n,p). By Theorem 3.4.2 and Theorem 3.4.3, wemayassumethat (n1,n2, · · · ,n,) (r+l, · · · ,r+1,r, · · · ,r),
=
and so h(A) =plk(n,p) J. p
O
Definition 3.6.3 The index set for h(A) is H(n,p)
= {h(A)
: A E IBn,,}.
Thus H(n, 1) =En. Theorem 3.6.4 (Shao and Li, [251]) For integers n ;?: p ;?: 1, write n = rp + 8, where 0 ~ 8 < p. Each of the following holds. (i) H k ¢ Er and if k1 ~ k ~ ~. then for each integer m with pk1 < m ~ p~, m¢ H(n,p). (ii) H r is odd and if r ;?: 5, then (p(r2  3r + 5) + 1,p(r2  2r}j0 n H(n,p) = 0. (iii) H r is even and if r;?: 4, then [p(r2  4r + 7) + 1,p(r2  2r})O n H(n,p) = 0. Definition 3.6.4 For integers n ;?: p ;?: 1, let SIBn,p denote the set of all symmetric imprimitive irreducible matrices. Example 3.6.2 Let A E S1Bn,2 and let D the diameter of Dis k(A) + 1.
= D(A). Then D(A) is a bipartite graph and
Powers of Nonnegative Matrices
129
=
_ { k(A) if n1 n2 h (A)1: A 2l¥J ifn1 #n:~. Proof This follows from Theorem 3.6.2.
O
Example 3.6.3 Define
SKn,2 = {k(A) : A
e SIBn.2} and SHn,2 =
{h(A) : A
e SIB,..2 }.
For integers n ;':::: k+2 ;':::: 3, let G(n,k) to be the graph obtained from Knlc,l by replacing an edge of Knlc,l by a path kedges. Then we can show that k(A(G)) = k, and so
SKn,2 = {1,2, · · · ,n 2}. The same technique can be used to show the following Theorem 3.6.6. Theorem 3.6.6 (Shao and Li, [251]) Let n ;':::: 2 be an integer. Each of the following holds. (i) H n is even, then SH,., 2 = [1, n  2] 0 • (ii) H n is odd, then SH,.,2 consists of all even integers in [2, n W. For tournaments, Zhang et al completely determined the index set of maximum density. Theorem 3.6.7 (Zhang, Wang and Hong, [284]) Let ST,. = {h(A) : A
{1} ST,.=
3. 7
{1,9} {1,4,6,7,9} {1,2,···,8,9}\{2} {1, 2, · · · , n + 2} \ {2} {1,2,··· ,n+2}
e T,.}.
Then
if n = 1,2,3 ifn=4 ifn=5 ifn=6 ifn=7,8,··· ,15 ifn;::: 16.
Generalized Exponents of Primitive Matrices
The main purpose of this section is to study that generalized exponents ezp(n, k), f(n, k) and F(n,k), to be defined in Definitions 3.7.1 and 3.7.2, and to estimate their bounds. Definition 3.7.1 For a primitive digraphD with V(D) = {vt,t/:a, · · · ,v,.}, and for v;,v; E V(D), define exJ>D(v;,vi) to be the smallest positive integer p such that for each integer t ;': : p, D has a directed (v;,v;)walk oflength t. By Proposition 3.3.1, this integer exists. For each i = 1, 2, · · · , n, define
Powers of Nonnegative Matrices
130
For convenience, we assume that the vertices of D are so labeled that
With this convention, we define, for integers n;;::: k;;::: 1, exp(n,k) =
max D Ia primitive a.ncl
IV(DII•
Let D be a primitive digraph with IV(D)I =nand let X ~ V(D) with lXI = k. Define expv(X) to be the smallest positive integer p such that for each u E V(D), there exists a v E X such that D has a directed (u, v )walk of length at least p; and define the kth lower multiexponent of D and the kth upper multiexponent of D as f(D,k)
=
min {expv(X)} and F(D,k) =
X~V(D)
max {expv(X)},
X~V(D)
respectively. We further define, for integers n ;;::: k ;;::: 1, f(n,k)
=
{f(D, k)} and
max IV(DII=•
F(n,k)
=
{F(D,k)}.
max D
1• primltl.e aad.
IV(D)I=•
These parameters exp(n,k), f(n,k) and F(n,k) can be viewed as generalized exponents of primitive matrices (Exercise 3.25). Example 3.7.1 Denote e:z:p(n) e:z:p(n) = (n 1) 2 + 1.
= e:z:p(n,n).
By Corollary 3.3.1A and Example 3.3.1,
Definition 3.7.2 Let D,. denote the digraph obtained from reversing every arc in the digraph D 1 in Example 3.3.1, and write V(D,.) = {v1 ,v2 , ... ,v,.}. For convenience, for j > n, define VJ =Vi if and only if j = i (mod n). For each Vi E V(D,.) and integer t :2:: 0, let Rt (i) be the set of vertices in D,. that can be reached by a directed walk in D of length t.
E V(D,.), lett :2::0 be an integer. Write t = p(n 1) +r, where r ~ n  1. Each of the following holds. (i) H t :2:: (n 2)(n 1) + 1, then Rt(1) V(D,.). (ii) H t ~ (n 2)(n 1) + 1, then Rt(l) {vr, Vtr, .. · , Vpr, Vpr+l}· (iii) H t;;::: (n 2)(n 1) + m, then Rt(m) = V(D,.).
Lemma 3.7.1 Let
Vm
p, r :2:: 0 are integers such that 0 ~
= =
Powers of Nonnegative Matrices
131
(iv) H 0 ~ t ~ m 1, then Rt(m) = {vmt}· (v) H m 1 ~ t ~ (n 2)(n 1) + m, then Rt(m)
=Rtm+1(1).
Proof (i) and (ii) follows directly from the structure of D,.. (iii), (iv) and (v) follows from (i) and (ii), and the fact that in D,., there is exactly one arc from vr. to v6 _ 1, 2 ~ k ~ n. Theorem 3. 7.1 Let n ;:: k ;:: 1 be integers. Each of the following holds. (i) ezpv,. (k) = n2  3n + k + 2. n1 n1 (ii) j(D,., k) = 1 + (2n k 2)lkJ  klkj 2 • Proof By Lemma 3.7.1, ezpv,.(k) = (n 2)(n1) +k = n 2 3n+k+2.
=
=
1, (ii) follows by Example 3.3.1. Assume that k < n. Write n  1 qk+8, where 0 ~ 8 < k. Then the right hand side of (ii) becomes (q1)(n1)+1+8(q+l). We construct two subsets X andY in V(D,.) as follows. Let X = { Vi 1 , vi.,· · · , v,.} such that i 1 = 1, and such that, for j ;:: 2,
Note that when k
.
z;
Let Y
= {vi1+q1
:
= { i;1 . +q+ 1 Zj1 +q
if2~j~8+1 if8+2~j~k.
1 ~ j ~ 8} andY= V(D,.) \ Y. We make these claims.
Claim 1 H X*= {u1,u2 , • • • ,u~:} ~ V(D,.), then
expv,. (X*) ;:: (q 1)(n 1) + 1 + 8(q + 1). Note that from any vertex v E X*, v can reach at most n 8 vertices by using directed walks of length (n 1)(q 1) + 1. H a vertex v~, where 1 ~ l < 8(q + 1)  1, cannot be reached from X* by directed walks of length (n 1)(q 1) + 1, then adding a directed walk of length 8(q + 1)  1 cannot reach the vertex v,.. Thus Claim 1 follows. Claim 2 Every vertex in Y can be reached from a vertex in X by a directed walk of length 1 + (n 1)(q 1) in D,.. In fact, by Lemma 3.7.1, Rl+(n1)(q1)(va.)
{vn1 1 V,.,V1,V2,··· ,Vi1 +q2}
Rl+(n1)(q1)(Vi2 )
=
{vio1 1 Vi2 , ·
Rl+(n1)(q1)(vi~:)
=
{vi.1,
••
,Vi2+q2}
v,., ···,
Vi•+q2}·
Thus Claim 2 becomes clear since (i;  1)  (ij1
+ q 2) =i; 
i;1  q + 1
={ ~
if2~j~8+1
if 8+2
~j ~
k.
Powers of Nonnegative Matrices
132
Claim 3 Every vertex ViE V(D,.) can be reached by a directed walk from a vertex in Y of length s(q + 1); but not every vertex can be reached by a directed walk from a vertex in Y of length s(q + 1) 1. It suffices to indicate that v,. cannot be reached by a directed walk from a vertex in Y of length s(q + 1)  1. In fact, since s(q + 1)  1 = i. + q 1, if v,. can be reached by a directed walk in D,. of length s(q + 1) 1, then the initial vertex of the walk must be E Y. By Claims 2 and 3, f(Dn,k) :5 expv,.(X) :5 (q1)(n1)+1+s(q+1). This, together with Claim 1, implies {ii). D
V•(q+l)I
Theorem 3. 7.2 Let n ;?:: k ;?:: 1 be integers. Then F(D,., k)
= (n 1)(n k) + 1.
Proof Let X'= {v1.v2,··· ,v~:I,v,.}. By Lemma 3.7.1, Dn has no directed walk of length (n 1)(n k) from a vertex in X' to v,.. Thus F(D,., k) ;?:: (n 1)(n k) + 1. On the other hand, by Lemma 3.7.1 again, for any vertex Vie V(D,.), the end vertices of directed walks from v, oflength (n 1) (n k) + 1 consists of n k + 1 consecutive vertices in a section of the directed cycle VI V2 • • • Vn v1 • Since any k distinct such sections must coverall vertices of Dn, for any X~ V(Dn) with lXI =k, expv.. (X) :5 (n1)(nk)+l. This proves the theorem. D Lemma 3.7.2 Let n;?:: k;?:: 1 be integers and let D be a primitive digraph with V(D,.) {v1,v2, · ·· ,v,.}. H D has a loop at v,, 1:5 i :5 r, then expv(k) :5 {
n1
ifk:5r
n1+kr
if k;?:: r.
=
Proof Assume that D has a loop at VI. V2, • • • , vr. Then expv(vi) :5 n 1, 1 :5 i :5 r. Thus if k :5 r, then expv(k) :5 n 1. Assume k >rand L ={vi,'" ,vr}· Since Dis strong, V(D) has a subset X with lXI = k  r such that any vertex in X can reach a vertex in L with a directed walk of length at most k  r, and any vertex in L can reach a vertex in X with a directed walk of length at most k r. Thus expv(v) :5 (n 1) + (k r), Vv e XU L. D Lemma 3. 7.3 Let n;?:: k ;?:: 2 be integers and let D be a primitive digraph with jV(D)I Then
=n.
expv(k) :5 expv(k 1) + 1.
Proof Assume that expv(vi) such that (vi,v) E E(D). D
= expv(i), 1 :5 i :5 n. Since Dis strong, D has a vertex v
Powers of Nonnegative Matrices
133
Theorem 3. 7.3 (Brualdi and Liu, [33]) Let n ~ k ~ 1 be integers and let D be a primitive digraph with IV(D)I = n. H 8 is the shortest length of a directed cycle of D, then (k)
ifk~8
< { 8(n 1)
expv
8(n1+k8)
ifk>8
Sketch of Proof Given D, construct a new digraph D such that V(D) = V(D), where (x, y) e E(D) if and only if D has a directed (x, y)walk of length 8. Then D' has at least s vertices attached with loops, and so Theorem 3.7.3 follows from Lemma 3.7.2.
0 Theorem 3. 7.4 (Brualdi and Liu, [33]) Let n exp(n,k)
= n2 
~
k
~
1 be integers. Then
3n+ k + 2.
Proof Let D be a primitive digraph with IV(D)I = n. By Lemma 3.7.3, expv(k) ~ expv(1) + (k 1). Let 8 denote the shortest length of directed cycles in D. If 8 ~ n 2, then by Theorem 3.7.3, expv(l) ~ n 2  3n + 2 and so the theorem obtains. Since D is primitive, by Theorem 3.2.2, n ~ n 1. Assume now 8 = n 1. Since D is strong, D must have a directed cycle of length nand soD has D,. (see Definition 3.7.2) as a spanning subgraph. Theorem 3.7.4 follows from Theorem 3.7.1(i) and Theorem 3.7.3.
0 Shao et al [253] proved that the extremal matrix of exp(n, k) is the adjacency matrix of D,.. In [185] and [241], the exponent set for expv(k) was partially determined. Lemma 3.7.4 Let n ~ k > 8 > 0 be integers and let D be a primitive digraph with IV(D)I =nand with 8 the shortest length of a directed cycle of D. Then f(D, k) ~ n k.
=
Proof Let Y c V(D) be the set of vertices of a directed cycle of D with IYI s. Since Dis strong, V(D) has a subset X such that Y c X~ V(D) and such that every vertex in X \ Y can be reached from a vertex in Y by a directed walk with all vertices in X. Thus any vertex in V(D) can be reached from a vertex in X by a directed walk of length exactly n  k. D Lemma 3. 7.5 Let n s, then
~ 8
> k ~ 1 be integers. f(D,k)
~
H D has a shortest directed cycle of length
1+8(nk1).
Proof Let C, = x 1 x2 • • ·x.x1 be a directed cycle in D. Siilce Dis strong, we may assume that there exists z E V(D) \ V(C.) such that (x1, z) E E(D).
Powers of Nonnegative Matrices
134
Let X= {x1,x2, · · · ,x,}, and let Y be the set of vertices in D that can be reached from vertices in X by a directed path of length 1. Then {z, x2, · · · , XHt} ~ Y. V(D), where (u, v) E E(D) if and Construct a new digraph D(•) with V(D) only if D has a directed (u, v)walk of length 8. Then D' has a loop at each of the vertices x2, · · · x1c+1, and (x2, z) E E(D). Thus, each vertex in D(•} can be reached from a vertex in Y by a directed walk of length exactly n k 1, and so every vertex in D can be reached from a vertex in X by a directed walk of length exactly 1 + 8(n k 1). O
=
Lemma 3.7.6 can be proved in a way similar to the proof for Lentma 3.7.5, and so its proof is left as an exercise. Lemma 3. 7.6 Let n > 8 ;::: k ;::: 1 be integers such that kl8. Let D be a primitive digraph with IV(D)I =nand with a directed cycle of length 8. Then /(D,k) :$;
1+ 8(n:1).
Theorem 3.7.5 (Brualdi and Liu, [33]) Let n
f(n,k) :$; n 2

> k;::: 1 be integers.
Then
(k+2)n+k+2.
Sketch of Proof Any primitive digraph on n vertices must have a directed cycle of length 8 :$; n 1, by Theorem 3.2.2. Thus Theorem 3.7.5 follows from Lemmas 3.7.4 and 3.7.5.
0
Theorem 3.7.6 Let n > k ;::: 1 be integers such that kl(n 1). Let f*(n,k) = max{/(D,k) : D is a primitive digraph on n vertices with a directed cycle of length 8 and kl8}. Then
f*(n,k)
= n2 
(k 2)n+2k+ 1_ k
Proof This follows by combining Lemma 3.7.6 and Theorem 3.7.1(ii).
0
=
Lemma 3.7.7 Let D be a primitive digraph with IV(D)I n, and let 8 and t denote the shortest length and longest length of directed cycles in D, respectively. Then
F(D,n 1) :$; max{n 8 1 t}. Proof Pick X c V(D) with lXI = n  1. H V(C) ~ X for some directed cycle C of length p, where 8 5 p 5 t, then any vertex in D can be reached by a directed walk from a vertex in V (C) of length n  p. Hence we assume tllat no directed cycle of D is contained in X. Let u denote tile only vertex in V (D) \ X. Then every directed cycle of D contains u.
Powers of Nonnegative Matrices
135
Let C1 be a directed cycle of length t in D. Then u E V (C1). Since D is strong, every vertex lies in a directed cycle of length at most t, and so every vertex in X can be reached from a vertex in X by a directed walk of length exactly t. Since D is primitive, and by Theorem 3.2.2, D has a directed cycle 0 2 of length q with 0 < q < t. Lett= mq + r with 0 < r ~ q. let v E V(Cl) be the (t r)th vertex from u. Then C1 has a directed (v, u)path. By repeating C2 m times, D has a directed (v, u)walk of length t. Hence expv(X) ~ max{n s, t}. D Theorem 3.7.7 F(n,n 1)
= n.
Proof By Theorem 3.7.2, F(n,n 1) ~ F(Dn,n 1) = n. By Lemma 3.7.7, for any primitive digraph D with IV(D)I =n, F(D,n1) ~ max{ns,t} ~ max{nl,n} = n.
0 Lemma3.7.8 Let n ~ m ~ 1 beintegersandletD be a primitive digraph with IV(D)I such that D has loops at m vertices. Then for any integer k with n ~ k ~ 1, F(D,k)
~{
=n
ifk>nm
n 1
2nmk
ifk~nm.
Proof Let X!;;; V(D) with lXI = k. Assume first that D has a loop at a vertex vEX. Then every vertex of D can be reached from v by a directed walk of length exactly n 1, and so F(D, k) ~ n 1. Note that when k > n m, X must have such a vertex v. Assume then k ~ n  m and no loops is attached to any vertex of X. Then X has a vertexx such that D has a directed (x,w)path of length at most nmk+1, for some vertex w E V(D) at which a loop of D is attached. Thus any vertex in D can be reached from a vertex in X by a directed walk of length exactly 2n  m  k. O Theorem 3.7.8 (Brualdi and Liu, [33]) Let n ~ k ~ 1 and 8 > 0 be integers. H a primitive digraph D with IV(D)I = n has a directed cycle of length 8, then F(D k) < { 8(n 1) ' 8(2n 8 k) Sketch of Proof Apply Lemma 3.7.8 to
ifk>n8 if k ~ ns.
n. D
Theorem 3.7.9 (Liu and Li, [179]) Let n ~ k ~ 1 be integers, and let D be a primitive digraph with IV(D)) = n and with shortest directed cycle length s. Then F(D,k) ~ (n k)s + (n s). Proof It suffices to prove the theorem when n > k ~ 1. Let c. be a directed cycle of length s and let X !;;; V(D) be a subset with lXI = k < n. Let v e V(D) and let
Powers of Nonnegative Matrices
136
+ (n  s). We want to find a vertex x E X such that D has a directed (x, v)walk of length exactly t. Fix v E V(D). Then there is a vertex x' EX such that D has a directed (x',v)walk of length d::;; n s. Since is a directed cycle, then for any h;::: d, there exists a vertex x" E V(C.) such that D has a directed (x'',v)walk of length h. Note that in n k;::: 1 be integers. Then
F(n,k)
= (n l)(n k) + 1.
Proof Let D be a primitive digraph with IV(D)I =nand let s denote the shortest length of a directed cycle of D. Since Dis primitive, s;::: n 1. Thus by Theorem 3.7.9,
F(D,k)
= =
s(nk)+ns=s(nk1)+n ~~~k~+n=~~~~+1.
Theorem 3.7.10 proves a conjecture in [33]. By Theorem 3.7.2, the bound in Theorem 3.7.10 is best possible. The extremal matrices for F(D, k) have been completely determined by Liu and Zhou [186]. The determination of f(n, k) for general values of nand k remains unsolved. Conjecture Let n ;::: k
+ 2 ;::: 4 be integers.
f(n,k)
3.8
Show that
n1 n1 = 1 + (2n k 2)LkJLkJ
2
k.
Fully indecomposable exponents and Hall expo
nents Definition 3.8.1 For integer n > 0, let F,. denote the collection of fully indecomposable matrices in B,., and P,. the collection of primitive matrices in B,.. For a matrix A E P,., define /(A), the fully indecomposable exponent of A, to be the smallest integer k > 0 such that AI: E F ,.. For an integer n > 0, define
f,. =max{/(A) : A E P,.}.
Powers of Nonnegative Matrices
137
The Proposition 3.8.1 follows from the definitions. Proposition 3.8.1 Let n
> 0 be an integer. Then
Pn ={A : A E Bn and for some integer k
> O,A11
E Fn}·
Schwarz [232] posed the problem to determine fn, and he conjectured that In However, Chao [53] presented a counterexample.
S n.
Example 3.8.1 Let
Ms=
0 0 0 1 1
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 1 0 0 0
Then we can compute M~, fori= 2,3,4,5 to see that f(Ms) ~ 6. In fact, Chao in [53] showed that for every integer n ~ 5, there exists an A E P n such that f(A) > n. However, Chao and Zhang [54] showed that if trA > 0, then /n S n. Example 3.8.2 For a matrix A E P n and an integer k Ak+I E Fn. Let
A=
1 0 0 0 0 1 0 1 0
0 0 0 0 1
0 1 0 0 0 0 0
> 1, that Ak E F n does not imply
0 0 0 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0
Then we can verify that A8 ,A9 E F 7, A 10 ,A11 ¢ F7, and Ai E F7 for all i ~ 12. Definition 3.8.2 For a matrix A E Pn, define f*(A), the strict fully indecomposable e:xponent of A, to be the smallest integer k > 0 such that for every i ~ k, A' E F n· Define f~ = max{f*(A) :
Thus in Example 3.8.2, f(A)
A E Pn}·
= 8, f*(A) = 12.
Proposition 3.8.2 Let A E Pn. Then !(A)
S J*(A) S y(A).
138
Powers of Nonnegative Matrices
Proof Let k =!(A). Then AI: E Fn, and so f(A) :::;; f*(A). By Proposition 3.3.1 1 for any and k' ~ y(A), we have AI:' > 0, and so AI:' E F n· Therefore, /*(A) :::;; y(A). O Lemma 3.8.1 Let A E Bn and D = D(A) with V(D) = {vt,v2,··· ,vn}· For a subset X ~ V(D), and for an integer t > 0, let .Rt(X) denote the set of vertices in D which can be reached from a vertex in X by a directed walk of length t, and let Ro(X) =X. Then for an integer k > 0, the following are equivalent. (i) A" E Fn. (ii) For every non empty subset X ~ V(D), IR~:(X)I > lXI. Proof This is a restatement of Theorem 2.1.2.
O
=
Lemma 3.8.2 Let D be a strong digraph with V(D) {VI. v2 , • • • , vn} and let W = {v,.,v12,··· ,v,.} ~ V(D) be vertices of D at which D has a loop. Then for each integer t>O,
IRt(W)I
~min{8+t,n}.
Proof Suppose that .Rt(W) ::f; V(D). Since D is strong, there exist u E Rt(W) and v E V(D) \ Rt(W) such that (u,v) E E(D). Since v ¢ .Rt(W), we may assume that the distance in D from v 01 to u is t, and the distance in D from v;; to u is at least t, for any j with 1 :::;; j :::;; 8. As the directed (v,.,u) in Rt(W) contains no vertices in W {tit,}, IRt(W)I ~ (8 1) + (t + 1) = 8 + t. 0 Theorem 3.8.1 (Brualdi and Liu, [31]} let n has 8 positive diagonal entries. Then
1 be integers. Suppose that A EP,.
~ 8 ~
f*(A) :::;; n s + 1.
Proof Let D = D(A) and let W denote the set of vertices of D at which a loop is attached. Let XC V(D) be a subset with n > lXI = k > 0. By Lemma 3.8.1, it suffices to show that
IRt(X)I ~ lXI + 1, for each t
~
n s + 1.
(3.13)
Since n > k, we may assume that IRt(X)I < n. If X n W ::f; 0, then by Lemma 3.8.2 and sincet~ns+1
IRt(X)I~
IRt(XnW)I ~ IXnWI +t ~ IXnWI +n s+ 1 ~ lXI + 1.
Thus we assume that X n W = 0. Let x• E X and w• E W such that the distance d from x• to w• is minimized among all x E X and w E W. By the minimality of d, d :::;; n + 1 lXI IWI
=n  + 1 8
k
< t.
Powers of NODllegative Matrices
139
Since w* E W, x• can reach every vertex in R~:(w*) by a directed walk in D of length exactly t. By Lemma 3.8.2,1Rt(X)I ~ I.Rt({w*})l ~ IR~:(w*)l ~ k+ 1, and so (3.13) holds also. D Corollary 3.8.1A (Chao and Zhang, (54]) Suppose A E P,. with tr(A) > 0. Then !(A)~ /*(A)~
n.
Corollary 3.8.1B Let A E P,. such that D(A) has a directed cycle of length r and such that D(A) has 8 vertices lying in directed cycles of length r. Then f(A) ~ r(n 8 + 1). In particular, if D has a Hamilton directed cycle, then f(A) ~ n. Corollary 3.8.1C Let A E P,. such that the diameter of D(A) is d. Then /(A)
~
2d(n d).
Corollary 3.8.1D Let A E P,. be a symmetric matrix with tr(A)
= 0. Then /(A) ~ 2.
Proof Corollary 3.8.1A follows from Theorem 3.8.1. For Corollary 3.8.1B, argue by Theorem 3.8.1 that (Ar)n• 1 E F,.. If the diameter of Dis d, then D has a directed cycle of length r ~ 2d, and D has at least d + 1 vertices lying in directed cycles of length r. Thus the other corollaries follow from Corollary 3.8.1B. D Theorem 3.8.2 (Brualdi and Liu, (31]) For n
~
1,
f,. ~ r 0. Let 8 be the number of vertices in D lying in directed cycles of length r. By Corollary 3.8.1B, f(A) ~ r(n 8
+ 1) ~ r(n r + 1).
When n is odd, since D is primitive, D must have a directed closed walk of length different from (n + 1)/2. Since r(n r + 1) is a quadratic function in r, we have n 2 t2n
/(A) ~ { n•t:n3
which completes the proof.
ifn is even ifn is odd,
D
=
Conjecture 3.8.1 (Brualdi and Liu, [31]) For n ~ 5, /n 2n 4. Example 3.8.1 can be extended for large values of nand so we can conclude that f,. ~ 2n 4. Liu (170] proved Conjecture 3.8.1 for primitive matrices with symmetric 1entries.
Powers of Nollllegative Matrices
140
Example 3.8.3 Let n ~ 5 and k ~ 2 be integers with n ~ k+3. Let D be the digraph with V(D) = {VlJ v:z, ••• , vn} and with E(D) = {(vi, Vi+l) : 1 $ i $ n k} U {(vn1:+1. 'Ill)} U {(vnkl>v;),(v;,vl) : nk+2 $j $ n}. LetA= A(D) and let X~;= {vl:1:+1•""" ,vn}· Then wecanseethat foreachi = 1,2, · · • ,k, I.R.cn1:)l(X~:)I i, and so f*(A) ~ k(nk). (See Exercise 3.25 for more discussion of this example.)
=
Lemma 3.8.3 Let D be a strong digraph with IV(D)I of D of length r > 0. (i) H X s;; V(C.), then R.r+;(X)
= n, and let Cr be a directed cycle
s;; R(i+l)r+;(X), (i ~ 0, 0 $ j $
(ii) H X= V(Cr), then Ri(X)
r  1).
s;; R(i+l)(X), for each i ~ 0.
Proof (i). Let z E Rir+;(X) and x E X. Since x E V(Cr), any direct (x,z)walk of length ir + j can be extended to a direct (x,z)walk of length (i + 1)r + j by taking an additional tour of (ii). Let z E R.(X) and x E X = V(C.). Let x' E V(Cr) be the vertex such that (x',x) E E(Cr)· Then D has a directed (x',z)walk oflength i + 1. D
c•.
Lemma 3.8.4 Let r > 8 > 0 be two coprime integers, and let D be a digraph consists of exactly two directed cycles Cr and c., of length rand 8, respectively, such that V(C.) n V(C,) :/: 0. H 0 :/:X s;; V(C.), then IR.(X)I ~ min{n, lXI + l}, i ~ lr and l
> 1.
(3.14)
Proof Let x denote the vertices in V(Cr) that can be reached from vertices in X by a directed walk in Cr of length i. Thus if i j (mod r), x< •> = X. Assume first that r $ i < 2r. H X= V(Cr), then since i ~ r, IR.(X)I ~ min{n, lXI + i}, and so (3.14) holds. Thus we assume X:/: V(Cr) and l = 1. H R.(X) g; V(C.), then IR.(X)I ~ lXI + 1. Assume then Ri(X) s;; V(C.). H (3.14) does not hold when l = 1, then IR.(X)I = lXI, and so R.(X) = x. Since Ri(X) s;; V(C.), we have R.s(X) s;; R.(X), whicll implies that x(i•) = x l  1, Rtr+;(X) = R(ll)r+;(X). Since D is primitive, for t large enough, we have IRcll)r+;(X)I = I.Rtr+;(X)I = n, a contradiction. Hence by Lemma 3.8.3, R(ll)r+;(X) = Ri)r+;(X), for each j with
141
Powers of Nonnegative Matrices
0 ::; j
~
r  1. It follows
IRzr+;(X)I
which completes the proof.
~
IRczl)r+i(X)I + 1
~
min{n, lXI + (l 1)} + 1
~
min{n, lXI + l}
D
Theorem 3.8.3 (Brualdi and Liu, [31]) Let A E Pn· H D(A) has exactly 2 different lengths of directed cycles,then
Proof Let D(A) has directed cycles Cr and Cr, of lengths rands, respectively, such that rands are coprime and such that V(Cr) n V(C.) # 0. Let D* denote the subgraph of D induced by E(Cr) U E(C.). Let Y ~ V(D) be a subset, where 1 ~ k = IYI ~ n1. First assume that !YnV(Cr)l ~ p ~ 1. By Lemma 3.8.4, I.R;(Y)I ~ k + 1, (i ~ (k p + 1)r), and so by r ~ n (k p), it follows that
(k p+ 1)r ~
L41 (n+ 1) 2J.
Now assume that Y n V(Cr) = 0. Then r ~ n k and D has a directed (y, x)walk from a vertex y E Y to a vertex x E V( Cr) of length t, where t ~ n  r  k + 1. By lemma 3.8.4, j.R;({x})l ~ k + 1, i ~ kr. Therefore j.R;(Y)I ~ k + 1, fori~ kr + n r k + 1. It follows that
Hence for all Y 0 # Y ~ X, I.R;(Y)I which completes the proof.
~ IYI + 1, i ~ l
(n:
1)2 J,
D
From the discussions above on f~ (see also Exercise 3.25), we can see that the order of
1: will fall between O(n2 /4) and O(n2 /2). It was conjectured that f~ ~ l(n + 1) 2 /4J
and this conjecture has been proved by Liu and Li (178].
142
Powers of Nonnegative Matrices
Definition 3.8.3 A matrix A E Bn is called a Hall matriz if there exists a permutation matrix Q such that Q ~A. Let Hn denote the collection of all Hall matrices in Pn. Hn ={A E Bn : A" E Hn for some integer k}. For an matrix A E Hn, h(A), the Hall e:q,onent of A, is the smallest integer k that A 11 € Hn. Define
> 0 such
hn = max{h(A) : A E Hn n IBn}, where ffin is the collection of irreducible matrices in Bn. Similarly, for an matrix A E Hn, h*(A), the strict Hall e:q,onentof A, is the smallest integer k > 0 slich that A' E H,., for all integer i ;?: k. Define H~ = {A E Bn h*(A) exists as a finite number}, and h~ = max{h*(A) : A E Hn n IBn},
Example 3.8.4 In general, Pn C Hn. When n tr(P) = 0, then P E Hn \Pn.
> 1, if Pis
a permutation matrix with
Example 3.8.5 Let
A=
Then we can verify that A
0 0 0 0 0 0
0 0 0 0 0 0 1 1
e Pr \Hr.
1 1 0 0 0 0 1
0 0 1 0 0 0 1
0 0 1 0 0 0 1
0 0 1 0 0 0 1
0 0 0 1 1 1 0
A2 E Hr but A3 ¢ Hr, and A' E Hr, for all i ;?: 4.
Proposition 3.8.3 follows from Hall's Theorem for the existence of a system of distinct representatives (Theorem 1.1 in (222]); and the other proposition is obtained from the definitions and Corollary 3.3.1A. Proposition 3.8.3 Let A E Bn and let D = D(A). Each of the following holds. (i) A is Hall if and only if for any integers r > 0 and s > 0 with r + s > n, A does not have an O,.x• as a submatrix. (ii) For some integer k > 0, A 11 E Hn if and onlyifforeachnonemptysubset X~ V(D), IR~:(X)I ;::: lXI. Proposition 3.8.4 Each of the following holds: (i) If A E H~, then h(A) ~ h*(A)
< y(A)
~ n 2  2n + 2.
Powers of Nonnegative Matrices
143
(ii) If A e P,., then h(A)::; j(A) and h*(A)::; J*(A). (iii) For each n > 1, F n k H,.. Example 3.8. 7 Let
A~ ~ [
Then A11
0 0 0 1
1 1 0 0
1
e ~ if and only if 4lk, and so A e H,. \ H:.
Example 3.8.8 It is possible that h*(A) > f(A). Let
A=
0 0 0 0 0 0 0 0 1 1
0 0 0 0 0 0 0 0 1 1 1 1
0 0 0 0 0 0 0 0
Then A f. HIO, A2 e Fto, (and so A br any k ~ 4. Therefore, h*(A) Definition 3.8.4 Let A
1 1 1 0 0 0 0 0 1
0 0 0 1 0 0 0 0
0 0 0 1 0 0 0 0 1 1 1 1 1
0 0 0 1 0 0 0 0 1 1
0 0 0 1 0 0 0 0 1 1
0 0 0 0 1 1
0 0 0 0 1 1 1 1
1 1 0 0 0 0
e PIO and A2 e Hio), A 3 f. Hto but Ale e Fto k Hio.
= f*(A) = 4 > 2 = f(A) = h(A).
e B,. and let An A12 [
A,.I lethe Frobenius normal form (Theorem 2.2.1) of A. By Theorem 2.2.1, each A" is meducible, and will be called an irY"educible block of A, i 1, 2, · • · ,p. A block ~i is a .nrial block of A if Att 01x1·
=
=
By definition, we can see that if A has a trivial block, then A 1.8.5 obtains.
f. H,.,
and so Lemma
144
Powers of Nozmegative Matrices
Lemma 3.8.5 Let A E B,.. Then A E H,. if and only if every irreducible block of A is a Hall matrix. Theorem 3.8.4 (Brualdi and Liu, [33]) Let A e B,.. Then A e fl,. if and only if the Frobenius standard form of A does not have a trivial irreducible block. Proof We may assume that A is in the standard form. H A has a trivial irreducible block. Then for any k, A" also has a trivial irreducible block, and soAk f. H,.. Assume then that A has no trivial irreducible block. Then each vertex Vi e V(D(A)} lies in a directed cycle of length m 1, (1 5 i 5 n). Let p = lcm(m1, ma, · · · , m,.). Then each diagonal entry of AP is positive, and so A e fl,.. D Definition 3.8.5 Recall that if A 0 0
e B,. is irreducible, then A is permutation similar to B1 0
0 Ba
0 0 (3.15)
0 B,.
0 0
0 0
Bh1 0
where B, E M~:1 x~:,+, (1 5 i 5 h) and k~a+l = k1. These integers k,'s are the imprimitive parameters of A. Let P e B,. be a permutation matrix and let Y11 Y2, · · · , Y,. e B~: be h matrices. Then P(Yi, Y2 , • • • , Y,.) denotes a matrix in B~:h obtained by replacing the only 1entry of the ith row of P by Yo, and every 0entry of P by a Okxl:, (1 5 i 5 h). Theorem 3.8.5 (Brualdi and Liu, [33]) Let A imprimitive parameters are identical.
e IB,..
Then A
e H~ if and only if all the
= B 1Ba · • ·Bh, Xa = BaBs · · · B~aB1, · · ·, X,. = B,.Bl · · · B1a1· Suppose first that k = k1 = ka = ·· · k,.. Then the matrices X 1 , X 2 , • • ·, X,. are in P~:, and so there exists an integer e > 0 sucll that Xf = J~;, for any integer p ~ e and 15i$h. Let q ~ eh be an integer and write q = lh + r, where I ~ e and where 0 5 r 0 A+A2 +· .. Am> 0
={vt,V2,"' ,vn}, then ifi=j if i =1 j.
Proof (i) follows from the graphical meaning of y(A). To prove (ii), consider the graph Each vertex of a shortest directed cycle is a loop vertex in D(•), from which any vertex can be reached by a directed walk of length at most n  1. Thus in D, any vertex u can reach a vertex in c. by a directed walk of length at most d, and any vertex in V(C,) to any other vertex v by a directed walk of length at most s(n 1). It follows that y(A) :5 d + s(n 1) :5 d + d(n 1). D
nC•>.
c.
Problem 3.9.1 By examine the graph in Example 3.3.1, it may be natural to conjecture that if A e P,., and if d is·the diameter of D(A), then
y(A) :5 ~ + 1. Note that the degree m of the minimal polynomial of A and d are related by m A weaker conjecture will be
y(A) :5 (m 1) 2 + 1.
(3.16) ~
d + 1.
(3.17}
Powers of Nonnegative Matrices
147
Hartwig and Neumann proved (3.17) conditionally. Lemma 3.9.1 below follows from Proposition 3.9.1 and Proposition 1.1.2(vii).
Lemma 3.9.1 (Hartwig and Neumann, [117]) Let A e P,., D Suppose V(D) = {VJ.,V2,··· ,v,.}. (i) H v~; is a loop vertex of D, then Am 1e 11 > 0. (ii) H each vertex of Dis a loop vertex, then Aml > 0.
= D(A)
and m
= m(A).
Theorem 3.9.1 (Hartwig and Neumann, [117]) Let A ∈ Pn, D = D(A) and m = m(A). Then γ(A) ≤ (m − 1)² + 1, if one of the following holds for each vertex v ∈ V(D):
(i) v lies in a directed cycle of length at most m − 1,
(ii) v can be reached from a vertex lying in a directed cycle of length at most m − 1 by a directed walk of length one, or
(iii) v can reach a vertex lying in a directed cycle of length at most m − 1 by a directed walk of length one.

Sketch of Proof Let V(D) = {v1, v2, ···, vn} and assume that vk lies in a directed cycle of length jk ≤ m − 1. Then vk is a loop vertex in D(A^{jk}). Since A^{jk} ∈ Pn with m(A^{jk}) ≤ m(A) = m, it follows by Lemma 3.9.1 that (A^{jk})^{m−1} e_k > 0, and so

    A^{(m−1)²} e_k = A^{((m−1)−jk)(m−1)} [(A^{jk})^{m−1} e_k] > 0.

Thus (i) implies the conclusion by Lemma 3.9.1. Assume then that vk can be reached from a vertex lying in a directed cycle of length at most m − 1 by a directed walk of length one; then argue similarly to see that

    A^{(m−1)²+1} e_k = A^{(m−1)²} (A e_k) > 0,

and so (ii) implies the conclusion by Lemma 3.9.1 also. That (iii) implies the conclusion can be proved similarly by considering A^T instead of A, and so the proof is left as an exercise. □
Theorem 3.9.2 (Hartwig and Neumann, [117]) Let A ∈ Pn with m = m(A). Then

    γ(A) ≤ m(m − 1).

Proof Let D = D(A) with V(D) = {v1, v2, ···, vn}. By Proposition 3.9.1(iii), for each vk ∈ V(D), there is an integer jk with 1 ≤ jk ≤ m such that vk is a loop vertex of D(A^{jk}). By Lemma 3.9.1, (A^{jk})^{m−1} e_k > 0. It follows that

    A^{m(m−1)} e_k = A^{(m−jk)(m−1)} [(A^{jk})^{m−1} e_k] > 0,

and so A^{m(m−1)} > 0, by Lemma 3.9.1(ii). □
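Theorem 3.9.2 can be checked mechanically on small examples. The sketch below computes m(A) by exact row reduction over the rationals (the degree of the first power of A that depends linearly on lower powers) and γ(A) by boolean powers; the Wielandt digraph used is an assumed test case, whose minimal polynomial is x⁴ − x − 1.

```python
from fractions import Fraction

def min_poly_degree(A):
    """Degree m(A) of the minimal polynomial: the smallest m such that A^m is
    a linear combination of I, A, ..., A^(m-1), by exact row reduction."""
    n = len(A)
    rows = {}  # pivot index -> stored reduced (flattened) power of A
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # A^0 = I
    m = 0
    while True:
        v = [P[i][j] for i in range(n) for j in range(n)]
        dependent = True
        for i in range(len(v)):
            if v[i] == 0:
                continue
            if i in rows:
                f = v[i] / rows[i][i]
                v = [a - f * b for a, b in zip(v, rows[i])]
            else:
                rows[i] = v
                dependent = False
                break
        if dependent:
            return m
        m += 1
        P = [[sum(P[i][k] * Fraction(A[k][j]) for k in range(n))
              for j in range(n)] for i in range(n)]

def exponent(A):
    """gamma(A): smallest e with A^e entrywise positive (boolean powers)."""
    n, P, e = len(A), A, 1
    while not all(all(row) for row in P):
        P = [[int(any(P[i][k] and A[k][j] for k in range(n)))
              for j in range(n)] for i in range(n)]
        e += 1
    return e

# Wielandt's primitive digraph on 4 vertices (cycle plus one extra arc)
W = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 1, 0, 0]]
m, g = min_poly_degree(W), exponent(W)
```

For this matrix m = 4 and γ = 10 ≤ m(m − 1) = 12, consistent with Theorem 3.9.2.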
Theorem 3.9.3 (Hartwig and Neumann, [117]) Let A ∈ Pn be symmetric with m the degree of the minimal polynomial of A. Then γ(A) ≤ 2(m − 1).

Sketch of Proof As A is symmetric, every vertex of D(A²) is a loop vertex. Then apply Lemma 3.9.1 to see (A²)^{m−1} > 0. □

Theorem 3.9.4 (Hartwig and Neumann, [117]) Let A ∈ Pn such that D(A) has a directed cycle of length k > 0, and let m and m_{A^k} be the degree of the minimal polynomial of A and that of A^k, respectively. Then γ(A) ≤ (m − 1) + k(m_{A^k} − 1).

Proof Let Ck denote a directed cycle of length k. Then the vertices of V(Ck) are loop vertices in D(A^k). Any vertex in D(A^k) can be reached from a vertex in V(Ck) by a directed walk of length at most m_{A^k} − 1, and so in D(A), any vertex can reach another by a directed walk (via vertices in V(Ck)) of length at most k(m_{A^k} − 1) + (m − 1). □

Theorem 3.9.5 (Hartwig and Neumann, [117]) Let A ∈ Pn such that A has r distinct eigenvalues. Then D(A) contains a directed cycle of length at most r.

Proof If ρ(A), the spectral radius of A, is zero, then A is nilpotent. Thus r = 1 and, by Proposition 1.1.2(vii), D(A) has no directed cycles. Assume that ρ(A) > 0 and that

    Spec(A) = ( λ1 λ2 ··· λr ; l1 l2 ··· lr ).

Argue by contradiction, and assume that every directed cycle of D(A) has length longer than r. Then for each k with 1 ≤ k ≤ r, tr(A^k) = 0, by Proposition 1.1.2(vii). Thus

    [ λ1   λ2   ···  λr  ]   [ l1 ]   [ 0 ]
    [ λ1²  λ2²  ···  λr² ]   [ l2 ] = [ 0 ]    (3.18)
    [ ···  ···  ···  ··· ]   [ ·· ]   [ · ]
    [ λ1^r λ2^r ···  λr^r]   [ lr ]   [ 0 ]

Note that (3.18) is equivalent to the homogeneous system

    [ 1        1        ···  1       ]   [ λ1 l1 ]   [ 0 ]
    [ λ1       λ2       ···  λr      ]   [ λ2 l2 ] = [ 0 ]    (3.19)
    [ ···      ···      ···  ···     ]   [ ····  ]   [ · ]
    [ λ1^{r−1} λ2^{r−1} ···  λr^{r−1}]   [ λr lr ]   [ 0 ]

The determinant of the coefficient matrix in (3.19) is a Vandermonde determinant with λi ≠ λj whenever i ≠ j. Thus the system in (3.19) can only have the zero solution λ1 l1 = λ2 l2 = ··· = λr lr = 0, a contradiction. □
Corollary 3.9.5A ([117]) Let A ∈ Pn with m = m(A). If A has at most m − 2 distinct eigenvalues, then γ(A) ≤ (m − 1)².
The conjecture (3.17) remains unsolved in [117]. In 1996, Shen proved the stronger form (3.16), and therefore also proved (3.17). For a simple graph G, Delorme and Solé [73] proved that γ(G) can have a much smaller upper bound.

Theorem 3.9.6 (Delorme and Solé, [73]) Let G be a connected simple graph with diameter d. If every vertex of G lies in a closed walk of an odd length of at most 2g + 1, then γ(G) ≤ d + g. In particular, if G is not bipartite, then γ(G) ≤ 2d.
Example 3.9.1 The equality γ(G) = 2d may be reached. Consider these examples: G is the cycle of length 2k + 1 (d = k and γ = 2k); G = Kn with n > 2 (d = 1 and γ = 2); and G is the Petersen graph (d = 2 and γ = 4).

The relationship between γ(A) and the eigenvalues of A is not yet clear. Chung obtained some upper bounds of γ(A) in terms of eigenvalues of A. For convenience, we extend the definition of γ(A) and define γ(A) = ∞ when A is imprimitive.
Theorem 3.9.7 (Chung, [58]) Let G be a k-regular graph with eigenvalues λi so labeled that |λ1| ≥ |λ2| ≥ ··· ≥ |λn|. Then

    γ(A) ≤ ⌈ log(n − 1) / (log k − log |λ2|) ⌉.

Proof Let u1, u2, ···, un be orthonormal eigenvectors corresponding to λ1, λ2, ···, λn, respectively, such that u1 = (1/√n) J_{n×1} and λ1 = k. Thus if (k/|λ2|)^m > n − 1, then

    (A^m)_{r,s} = Σ_{i=1}^{n} λi^m (ui ui^T)_{r,s}
                ≥ k^m/n − |λ2|^m Σ_{i>1} |(ui)_r (ui)_s|
                ≥ k^m/n − |λ2|^m {Σ_{i>1} |(ui)_r|²}^{1/2} {Σ_{i>1} |(ui)_s|²}^{1/2}
                = k^m/n − |λ2|^m (1 − 1/n)
                > 0.

Therefore, if m > ⌊ log(n − 1) / (log k − log |λ2|) ⌋, then A^m > 0. □
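Chung's bound is easy to evaluate on a concrete graph. A sketch using the Petersen graph (its spectrum 3, 1 with multiplicity 5, and −2 with multiplicity 4 is taken as known, so |λ2| = 2; the vertex labeling below is an assumption):

```python
from math import ceil, log

def petersen():
    """Adjacency matrix of the Petersen graph: outer 5-cycle 0..4,
    inner pentagram 5..9, spokes i -- i+5."""
    A = [[0] * 10 for _ in range(10)]
    def add(u, v):
        A[u][v] = A[v][u] = 1
    for i in range(5):
        add(i, (i + 1) % 5)            # outer cycle
        add(5 + i, 5 + (i + 2) % 5)    # inner pentagram
        add(i, 5 + i)                  # spokes
    return A

def exponent(A):
    """gamma(A): smallest e with A^e entrywise positive (boolean powers)."""
    n, P, e = len(A), A, 1
    while not all(all(row) for row in P):
        P = [[int(any(P[i][k] and A[k][j] for k in range(n)))
              for j in range(n)] for i in range(n)]
        e += 1
    return e

A = petersen()
k, n, lam2 = 3, 10, 2
bound = ceil(log(n - 1) / (log(k) - log(lam2)))
g = exponent(A)
```

The computed exponent is γ = 4 (as in Example 3.9.1) while Theorem 3.9.7 gives the bound ⌈log 9 / (log 3 − log 2)⌉ = 6, so the eigenvalue bound holds but is not tight here.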
With similar techniques, Chung also obtained analogous bounds for non regular graphs and digraphs. Theorem 3.9.8 (Chung, [58]) Let G be a simple graph with eigenvalues A& so labeled that IXII ~ I.X2I ~ · ·· ~ fA,. I. Let ui be an eigenvector corresponding to AI, let w = min,{l(ui)il} and let d(G) denote the diameter of G. Then d(G)
0 be integers with gcd(a1ta2,a3 ) = 1. Let d = gcd(ai,Il2) and write ai a~d and a2 a~d. Let ui,u2, zo,!fo,ZQ be integers satisfying a~ui +
=
=
Powers of Nonnegative Matrices
14112
151
= 1 and a1xo + ll2Yo + aszo = n respectively.
OJX + ll2Y + asz
Show that all integral solutions of
= n can be presented as x {
=xo + a~t1 u1aat2
y =Yo a1t1  u2ast2
z
= zo +dt2,
where h, t2 can be any integers. Exercise 3.2 Let 8 ~ 2 be an integer and suppose rl> r2, · · · , r, are real numbers such that r 1 ~ r 2 ~ • • • ~ r. ~ 1. Show that
Exercise 3.3 Assume that Theorem 3.1.7 holds for induction on 8.
8
= 3.
Prove Theorem 3.1.7 by
Exercise 3.4 Let D be a strong digraph. Let d'(D) denote the g.c.d. of directed closed trail lengths of D. Show that d'(D) = d(D). Exercise 3.5 Let D be a cyclically k partite directed graph. Show each of the following. (i) If D has a directed cycle of length m, then kim. (ii) If hlk, then D is also cyclically hpa.rtite. Exercise 3.6 Show that if A e M;t is irreducible with d = d(D(A)) > 1, and if hid, then, there exists a permutation matrix P such that P Ah pI = diag(A1 , A2 , • • • , A,.). Exercise 3. 7 Prove Corollary 3.2.3A. Exercise 3.8 Prove Corollary.3.2.3B. For (i), imitate the proof for Theorem 3.2.3(iii). Exercise 3.9 Prove Corollary 3.2.3C. Exercise 3.10 Prove Corollary 3.2.3D. Exercise 3.11 Show that in Example 3.2.1, A is primitive, B is imprimitive and A ....., B. Exercise 3.12 Let D1,D2 be the graphs in Examples 3.3.1.and 3.3.2. Show that 'Y(D;) = (n 1) 2 + 2 i. Exercise 3.13 Complete the proof of Theorem 3.3.2. Exercise 3.14 Prove Lemma 3.4.1. Exercise 3.15 Prove Lemma 3.4.2. Exercise 3.16 Prove Lemma 3.4.3. Exercise 3.17 Prove Lemma 3.4.5.
Exercise 3.18 Let A ∈ Bn and let n0, s0 be defined as in Theorem 3.5.2. Apply Theorem 3.5.2 to prove each of the following.
(i) If A ∈ IBn,s, then k(A) ≤ n + s0(n0 − 2).
(ii) Wielandt's Theorem (Corollary 3.3.1A).
(iii) Theorem 3.5.1.

Exercise 3.19 Let X be a matrix with the form in Lemma 3.5.1. Show each of the following.
(i) If a = 0, then the corresponding expression for the powers of X holds.
(ii) If a = 1, then the corresponding expression for the powers of X holds.

Exercise 3.20 Suppose that A ∈ Bn with p(A) > 1. Show that p(A) < n² and h(A) ≤ k(A) + p − 1.

Exercise 3.21 Let n > 0 denote an integer, and let D be a primitive digraph with V(D) = {v1, ···, vn} such that expD(v1) ≤ expD(v2) ≤ ··· ≤ expD(vn). Show that
(i) F(D, 1) = expD(vn) = γ(D) and f(D, 1) = expD(v1).
(ii) f(n, n) = 0, f(n, 1) = exp(n, 1), and F(n, 1) = exp(n).

Exercise 3.22 Suppose that r is the largest outdegree of the vertices of a shortest cycle of length s in D. Show that expD(1) ≤ s(n − r) + 1.

Exercise 3.23 Let A ∈ Bn be a primitive matrix and let D = D(A). For each positive k ≤ n, show each of the following.
(i) expD(k) is the smallest integer p > 0 such that A^p has k all-one rows. (That is, J_{k×n} is a submatrix of A^p.)
(ii) f(D, k) is the smallest integer p > 0 such that A^p has a k × n submatrix which does not have a zero column.
(iii) F(D, k) is the smallest integer p > 0 such that A^p does not have a k × n submatrix which has a zero column.

Exercise 3.24 Let n ≥ k ≥ 1. Then

    f(Dn, k) = 1 + (n − k − 1)(n − 1) if n − 1 ≡ 0 (mod k), and
    f(Dn, k) = k²(n − k) − 1 if n/2 ≤ k < n − 1.

Exercise 3.25 Show that f(n, n − 1) = 1 and f(n, 1) = n² − 3n + 3.

Exercise 3.26 Prove Lemma 3.7.6.

Exercise 3.27 Let D be the digraph of Example 3.8.3.
(i) Show that f*(A) = k(n − k).
(ii) Show that the corresponding bound holds for n ≥ 5.

Exercise 3.28 Let A ∈ Pn with m = m(A). If D(A) has a directed cycle of length at most m − 2, then γ(A) ≤ (m − 1)².

Exercise 3.29 Let A ∈ Pn with m = m(A) ≥ 4. If every eigenvalue of A is real, then

    γ(A) ≤ 3(m − 1) ≤ (m − 1)².

Exercise 3.30 Prove Corollary 3.9.5A.

Exercise 3.31 Let A ∈ Pn with m = m(A). If A has a real eigenvalue with multiplicity at least 3, or if A has a non-real eigenvalue of multiplicity at least 2, then γ(A) ≤ (m − 1)².

Exercise 3.32 Prove Theorem 3.9.10.

Exercise 3.33 Prove Theorem 3.9.11.
3.11 Hints for Exercises
Exercise 3.1 First, we can routinely verify that for integers t1, t2,

    x = x0 + a2′t1 − u1a3t2
    y = y0 − a1′t1 − u2a3t2
    z = z0 + dt2

satisfy the equation a1x + a2y + a3z = n. Conversely, let x, y, z be an integral solution of the equation a1x + a2y + a3z = n. Since a1(x − x0) + a2(y − y0) + a3(z − z0) = 0, we derive that

    d(a1′(x − x0) + a2′(y − y0)) = −a3(z − z0).

Since gcd(a3, d) = 1, there exists an integer t2 such that z = z0 + dt2. It follows that a1′(x − x0) + a2′(y − y0) = −a3t2.
It follows that there exists an integer t1 such that x = x0 + a2′t1 − u1a3t2 and y = y0 − a1′t1 − u2a3t2.
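The parametrization can be sanity-checked numerically; the concrete instance a1, a2, a3 = 6, 10, 15 below is hypothetical, chosen so that gcd(a1, a2, a3) = 1 while gcd(a1, a2) = 2.

```python
from math import gcd
from itertools import product

a1, a2, a3 = 6, 10, 15              # hypothetical instance, gcd(6, 10, 15) = 1
d = gcd(a1, a2)                     # d = 2
a1p, a2p = a1 // d, a2 // d         # a1' = 3, a2' = 5
u1, u2 = 2, -1                      # 3*2 + 5*(-1) = 1
n = 1
x0, y0, z0 = -4, 1, 1               # 6*(-4) + 10*1 + 15*1 = 1

def solution(t1, t2):
    """The claimed family of solutions of a1*x + a2*y + a3*z = n."""
    x = x0 + a2p * t1 - u1 * a3 * t2
    y = y0 - a1p * t1 - u2 * a3 * t2
    z = z0 + d * t2
    return x, y, z

checks = [a1 * x + a2 * y + a3 * z == n
          for t1, t2 in product(range(-5, 6), repeat=2)
          for x, y, z in [solution(t1, t2)]]
```

Every (t1, t2) in the grid yields a valid solution, as the exercise asserts.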
Exercise 3.2 Argue by induction on s ≥ 2.

Exercise 3.3 First prove a fact on real number sequences for two terms u1 ≥ u2 ≥ 1; this can then be applied to show the corresponding inequality when u1 ≥ u2 ≥ ··· ≥ us ≥ 1.
For numbers a1, a2, ···, as satisfying Theorem 3.1.7, let gcd(a1, a2) = d1, gcd(a1, a2, a3) = d2, ···, gcd(a1, ···, a_{s−1}) = d_{s−2}, where s − 1 > 2. Then

    φ(a1, ···, as) ≤ a1a2/d1 + a3d1/d2 + ··· + a_{s−1}d_{s−3}/d_{s−2} + a_s d_{s−2} − Σ_{i=1}^{s} a_i + 1.

If d1 were a prime power p^a, then d_{s−2} = p^b for some integer b with 0 < b < a. Hence we would have gcd(a1, a2, ···, as) = p^δ for some integer δ > 0, a contradiction. Therefore, d1 must have at least two prime factors. Thus d1 ≥ 6, a2 ≤ a1 − d1, a3 ≤ a2 − 2 ≤ n − 8. As d1 | a2, we have d1 ≤ n/2, and so
As n(n − d1 − 2)/d1 is a decreasing function of d1, and as d1 ≥ 6, we have both n(n − d1 − 2)/d1 ≤ (n − 2)²/4 and (n − d1 − 2)d1 ≤ (n − 2)²/4.
Exercise 3.4 Note that d′|d since cycles are closed trails. By Euler, a closed trail is an edge-disjoint union of cycles, and so d|d′.

Exercise 3.5 Apply Definition 3.2.5 and combine the partite sets.

Exercise 3.6 By Corollary 3.2.3A, argue similarly to the proof of Lemma 3.2.2(i).

Exercise 3.7 By the definition of d(D) (Definition 3.2.4).

Exercise 3.8 For (i), imitate the proof for Theorem 3.2.3(iii). (ii) is obtained by direct computation.

Exercise 3.9 By Lemma 3.2.2(i), PA^dP^{−1} = diag(B1, ···, Bd). By Corollary 3.2.3B, each Bi is primitive. Therefore, Bi^{mi} > 0 for some smallest integer mi > 0. Let m = max_i {mi}. Then Bi^m > 0 and Bi^{m+1} > 0, and so p(A) = d, by definition.

Exercise 3.10 (i) follows from the definition immediately. Assume that p > 1. Then by Corollary 3.2.3C, p = d = d(D(A)). Then argue by Theorem 3.2.3.

Exercise 3.11 One can determine whether A is primitive by directly computing the sequence A, A², ···. An alternative way is to apply Theorem 3.2.2. The digraph D(A) has a 3-cycle and a 4-cycle, and so d(D(A)) = 1, and A is primitive. Do the same for D(B) to see d(D(B)) = 3. Move Column 1 of A to the place between Column 3 and Column 4 of A to get B, and so A ∼p B.
Exercise 3.12 Direct computation gives γ(D1) = γ(vn, vn) and γ(D2) = γ(v1, vn).

Exercise 3.13 Complete the proof of Theorem 3.3.2. The following takes care of the unfinished cases. If k ∈ {2, 3, ···, n} and k ≤ d ≤ n − 1, then write d = k + l for some integer l with 0 ≤ l ≤ n − k − 1. Consider the adjacency matrix of the digraph D in the figure below.

Figure: when k ∈ {2, 3, ···, n} and k ≤ d ≤ n − 1

Again, we have

    γ(i, j) = k if i = j = 1, and γ(i, j) ≤ k otherwise,

and so γ(A) = k in this case also. Now assume that k ∈ {n+1, n+2, ···, 2n−d−1}. Note that we must have d < n − 1 in this case. Write k = 2n − l for some integer l with d + 1 ≤ l ≤ n − 1. Consider the adjacency matrix of the digraph D in the figure below.

Figure: when k ∈ {n+1, n+2, ···, 2n−d−1}; note that in this case, d < n − 1.

Thus

    γ(i, j) = 2n − l = k if i = j and j = n, and γ(i, j) ≤ 2n − l = k otherwise,

and so γ(A) = k, as desired.

Exercise 3.14 Apply Definition 3.4.2.

Exercise 3.15 (i) follows from Definition 3.2.3. (ii) Use (BA)^{l+1} = B(AB)^l A and UJV = J. (iii) Apply (ii).

Exercise 3.16 Let k be an integer such that Ai(k) = J for all i = 1, 2, ···, p. Then by Lemma 3.4.1, A^k = (A1(k), ···, Ap(k))_k and A^{k+p} = (A1(k+p), ···, Ap(k+p))_{k+p}. Thus Ai(k) = Ai(k+p) for all i, and k + p ≡ k (mod p), and so A^k = A^{k+p}. It follows that k(A) ≤ k. Conversely, assume that for some j, Aj(k−1) ≠ J. Note that A^{k−1} = (A1(k−1), ···, Ap(k−1))_{k−1} and A^{k−1+p} = (A1(k−1+p), ···, Ap(k−1+p))_{k−1+p}. Since Aj(k−1+p) = J ≠ Aj(k−1), A^{k−1} ≠ A^{k−1+p}, and so k(A) > k − 1.

Exercise 3.17 Let m = ni for some i. Then Ai(p) ∈ Mm,m is primitive, and so by Corollary 3.3.1A, γ(Ai(p)) ≤ m² − 2m + 2. Apply Lemma 3.4.4 with t = 1 and i1 = i to get the answer.

Exercise 3.18 (i) When A is irreducible, n = n0 and p = d(D). (ii) Theorem 3.3.1 follows from (i) with p = 1. (iii) When A is reducible, apply a decomposition.

Exercise 3.19 Argue by induction on k.
Exercise 3.20 By the definition of a primitive matrix, A is primitive if and only if p(A) = 1. The inequality for h(A) follows from the definitions of p(A) and k(A).

Exercise 3.21 Apply the definitions of expD, f and F directly.

Exercise 3.22 Let w ∈ V(Cs) with d⁺(w) = r. Let V1 = {v | (w, v) ∈ E(D)}. Then |V1| = r. Denote V(Cs) ∩ V1 = {w1}. Then D has a directed path of length s from w1 to a vertex in V1. In D^s, there is at most one vertex, say x, which cannot be reached from the loop vertex w1 by a walk of length n − r. Thus a path of length n − r + 1 from w1 to x must pass through some vertex z (say) of V1, and so there is a path of length n − r from z to x. It follows that there is a walk of length s(n − r) + 1 from w to x in D(A).

Exercise 3.23 Apply definitions.

Exercise 3.24 Apply Theorem 3.7.1.

Exercise 3.25 By Theorem 3.7.5, f(n, n−1) ≤ 1 and f(n, 1) ≤ n² − 3n + 3. By Theorem 3.7.1(ii), f(Dn, 1) = n² − 3n + 3.

Exercise 3.26 Let Cs denote a directed cycle of length s. Pick X = {x1, x2, ···, xk} ⊆ V(Cs) such that Cs has a directed (xj, xj+1)-path of length t, where xj = xj′ whenever j ≡ j′ (mod k). Since D is primitive and since n > s, we may assume that (x1, z) ∈ E(D) for some z ∈ V(D) \ V(Cs). Let Y be the set of vertices that can be reached from vertices in X by a directed path of length 1. Then {x_{i1}, ···, x_{ik}, z} ⊆ Y. Construct D′ as in the proof of Lemma 3.7.5. Note that x_{i1} ··· x_{ik} x_{i1} is a directed cycle of D′(t) of length k and (x1, z) ∈ E(D′). Thus any vertex in D′(t) can be reached from a vertex in Y by a directed walk of length exactly n − k − 1.

Exercise 3.27 First use Example 3.8.3 to show that f*(A) ≤ k(n − k). As a quadratic function in k, k(n − k) has a maximum when k = n/2. The other inequality of (ii) comes from Proposition 3.8.2 and Wielandt's Theorem (Corollary 3.3.1A).

Exercise 3.28 Apply Theorem 3.9.4 with m_{A^k} ≤ m and k ≤ m − 2.

Exercise 3.29 Since ρ(A) > 0 and every eigenvalue of A is real, tr(A²) > 0, so D(A) must have a directed cycle of length 2 ≤ m − 2.

Exercise 3.30 Apply Theorem 3.9.5 and then Exercise 3.28.

Exercise 3.31 In either case, A has at most m − 2 distinct eigenvalues. Apply Corollary 3.9.5A.
Exercise 3.32 Suppose that A = X_{n×b1} Y_{b1×n}. By Lemma 3.4.2, γ(A) ≤ γ(YX) + 1 ≤ (b1)² + 2.

Exercise 3.33 Apply Lemma 3.7.3 and Exercise 3.22.
Chapter 4

Matrices in Combinatorial Problems

4.1 Matrix Solutions for Difference Equations

Consider the difference equation (also called recurrence relation) with given boundary conditions

    u_{n+k} = a1 u_{n+k−1} + a2 u_{n+k−2} + ··· + ak u_n + b_n,    (4.1)
    u_l = c_l, 0 ≤ l ≤ k − 1,    (4.2)

where the constants a1, ···, ak, c0, ···, c_{k−1} and the sequence (b_n) are given. A solution to this equation is a sequence (u_n) satisfying (4.1) and (4.2). If b_n = 0 for all n, then the resulting equation is the corresponding homogeneous equation to (4.1).
Definition 4.1.1 The equation

    λ^k = a1 λ^{k−1} + a2 λ^{k−2} + ··· + ak    (4.3)

is called the characteristic equation of the difference equation in (4.1), and the matrix

    A = [ 0    1        0        ···  0  ]
        [ 0    0        1        ···  0  ]
        [ ···  ···      ···      ···  ···]    (4.4)
        [ 0    0        0        ···  1  ]
        [ ak   a_{k−1}  a_{k−2}  ···  a1 ]

is called the companion matrix of equation (4.3). Note that by the Hamilton-Cayley Theorem,

    A^k − a1 A^{k−1} − a2 A^{k−2} − ··· − ak I = 0.
A usual way to solve (4.1) and (4.2) is to solve the characteristic equation of the difference equation, to obtain the homogeneous solution, which satisfies the difference equation (4.1) when the constant b_n on the right hand side is set to 0, and the particular solution, which satisfies the difference equation with b_n on the right hand side. The homogeneous solution is usually obtained by solving the characteristic equation (4.3). However, when k is large, (4.3) is difficult to solve. The purpose of this section is to introduce an alternative way of solving (4.1), via matrix techniques.
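The matrix approach can be sketched as an iteration: the state vector (u_n, ···, u_{n+k−1}) is advanced by the companion matrix of (4.4), with the forcing term b_n added to the last coordinate. A minimal illustration (the order-2 test recurrence is an assumption, not from the text):

```python
def solve_by_companion(a, c, b, N):
    """Iterate u_{n+k} = a[0]*u_{n+k-1} + ... + a[k-1]*u_n + b(n) via the
    companion matrix acting on the state (u_n, ..., u_{n+k-1})."""
    k = len(a)
    # companion matrix of (4.4): a shift, with the coefficients in the last row
    A = [[0] * k for _ in range(k)]
    for i in range(k - 1):
        A[i][i + 1] = 1
    A[k - 1] = list(reversed(a))       # bottom row: a_k, a_{k-1}, ..., a_1
    state = list(c)                    # initial state (u_0, ..., u_{k-1})
    out = list(c)
    for nn in range(N - k + 1):
        state = [sum(A[i][j] * state[j] for j in range(k)) for i in range(k)]
        state[k - 1] += b(nn)          # forcing term enters the last slot
        out.append(state[k - 1])
    return out[:N + 1]

# hypothetical example: u_{n+2} = u_{n+1} + 2*u_n + n, u_0 = 0, u_1 = 1
vals = solve_by_companion([1, 2], [0, 1], lambda nn: nn, 10)
direct = [0, 1]
for nn in range(9):
    direct.append(direct[nn + 1] + 2 * direct[nn] + nn)
```

The matrix iteration reproduces the sequence obtained by unrolling the recurrence directly.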
Theorem 4.1.1 (Liu, [169]) Let A be the companion matrix in (4.4), and let

    C = (c0, c1, ···, c_{k−1})^T,
    Bj = (0, 0, ···, 0, bj)^T, j = 0, 1, 2, ···,

and let

    A^m C + A^{m−1} B0 + A^{m−2} B1 + ··· + A^{k−1} B_{m−k} = (α^{(m)}, ···)^T.    (4.5)

Then (α^{(m)}) is a solution to (4.1).

Proof By (4.4),

    A^m C = Σ_{i=1}^{k} ai A^{m−i} C, and
    A^{m−j−1} Bj = Σ_{i=1}^{k} ai A^{m−i−j−1} Bj, j = 1, 2, ···.
Thus by (4.5), ( a
= 1,2,··· ,k, j
(m) _ "
ajj

.
L..J G/c•+1
/(m/c+i1)
(4.7)
i=l
Proof By Definition 4.1.2, D has these directed (k, k)-walks:

    Type  Walk                      Length  Weight
    C1    k → k                     1       a1
    C2    k → k−1 → k               2       a2
    C3    k → k−2 → k−1 → k         3       a3
    ···   ···                       ···     ···
    Ck    k → 1 → 2 → ··· → k       k       ak

Therefore, any directed (k, k)-walk of length m must have s1 walks of Type C1, s2 of Type C2, ···, sk of Type Ck. For any j with 1 ≤ j ≤ k − 1, D has these directed (j, j)-walks:

    Type  Walk
    C1′   j → ··· → k → ··· → k → 1 → 2 → ··· → j
    C2′   j → ··· → k → ··· → k → 2 → 3 → ··· → j
    C3′   j → ··· → k → ··· → k → 3 → 4 → ··· → j
    ···   ···

For each i with 1 ≤ i ≤ j, the first directed (j, k)-walk of length k − j and the last directed (k, j)-walk of length j − i + 1 of Ci′ form a directed closed walk of length k − i + 1. Thus, for each j with 1 ≤ j ≤ k,

    a_{jj}^{(m)} = Σ_{i=1}^{j} a_{k−i+1} · [ the total weight of the (k, k)-walks with s1 + 2s2 + ··· + ksk = m − k + i − 1, st ≥ 0 (t = 1, 2, ···, k) ].

Therefore the lemma follows by the definition of f(m). □
Theorem 4.1.2 (Liu, [169]) The solution for (4.1) and (4.2) is

    u_m = c_{k−1} f(m−k+1) + Σ_{j=1}^{k−1} c_{j−1} Σ_{i=1}^{j} a_{k−i+1} f(m−k−j+i) + Σ_{j=1}^{m−k+1} b_{j−1} f(m−k−j+1).

Proof This follows from Theorem 4.1.1, (4.6) and (4.7). □

Corollary 4.1.2A Another way to express u_m is

    u_m = Σ_{j=1}^{k} c_{j−1} Σ_{i=1}^{j} a_{k−i+1} f(m−k−j+i) + Σ_{j=1}^{m−k+1} b_{j−1} f(m−k−j+1).
Corollary 4.1.2B (Tu, [261]) Let k and r be integers with 1 ≤ r ≤ k − 1. The difference equation

    u_{n+k} = a u_{n+r} + b u_n + b_n,
    u_0 = c0, u_1 = c1, ···, u_{k−1} = c_{k−1}

has solution

    u_m = Σ_{j=0}^{r−1} cj b f(m−k−j) + Σ_{j=r}^{k−1} cj f(m−j) + Σ_{j=1}^{m−k+1} b_{j−1} f(m−k−j+1),

where

    f(m) = Σ_{(k−r)u + kv = m, u ≥ 0, v ≥ 0} C(u+v, v) a^u b^v.

Proof Let ak = b, a_{k−r} = a and all other ai = 0. Then Corollary 4.1.2B follows from Theorem 4.1.2. □
Corollary 4.1.2C Letting k = 2, a = b = 1, b_n = 0, r = 1, and c0 = c1 = 1 in Corollary 4.1.2B, we obtain the Fibonacci sequence

    u_m = f(m) = Σ_{2x+y=m, x,y ≥ 0} C(x+y, x).
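The weighted-walk expression for f(m) is easy to verify against the usual recursive definition of the Fibonacci numbers; a quick sketch:

```python
from math import comb

def f(m):
    """f(m) = sum over 2x + y = m of C(x+y, x); with 2x + y = m this is
    C(m - x, x), summed over x = 0, ..., floor(m/2)."""
    if m < 0:
        return 0
    return sum(comb(m - x, x) for x in range(m // 2 + 1))

# Fibonacci numbers with the indexing f(0) = f(1) = 1
fib = [1, 1]
for _ in range(20):
    fib.append(fib[-1] + fib[-2])
```

The two sequences agree term by term.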
Example 4.1.1 Solve the difference equation

    F_{n+5} = 2F_{n+4} + 3F_n + (2n − 1),
    F_0 = 1, F_1 = 0, F_2 = 1, F_3 = 2, F_4 = 3.

In this case,

    k = 5, r = 4, a = 2, b = 3, b_n = 2n − 1,
    c0 = 1, c1 = 0, c2 = 1, c3 = 2, c4 = 3,

and so

    F_n = 3 Σ_{x=0}^{⌊(n−5)/5⌋} C(n−4x−5, x) 3^x 2^{n−5x−5}
        + 3 Σ_{x=0}^{⌊(n−7)/5⌋} C(n−4x−7, x) 3^x 2^{n−5x−7}
        + 6 Σ_{x=0}^{⌊(n−8)/5⌋} C(n−4x−8, x) 3^x 2^{n−5x−8}
        + 3 Σ_{x=0}^{⌊(n−4)/5⌋} C(n−4x−4, x) 3^x 2^{n−5x−4}
        + Σ_{j=1}^{n−4} (2j − 3) Σ_{x=0}^{⌊(n−4−j)/5⌋} C(n−4x−4−j, x) 3^x 2^{n−5x−4−j}.
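The closed form can be checked against direct iteration of the recurrence; in the sketch below the helper f implements the weight function of Corollary 4.1.2B for a = 2, b = 3, k = 5, r = 4:

```python
from math import comb

def f(m):
    """f(m) for u_{n+5} = 2 u_{n+4} + 3 u_n: steps of length 1 (weight 2)
    and length 5 (weight 3)."""
    if m < 0:
        return 0
    return sum(comb(m - 4 * x, x) * 3**x * 2**(m - 5 * x)
               for x in range(m // 5 + 1))

def F_closed(n):
    """The five-part closed form of Example 4.1.1 (valid for n >= 5)."""
    hom = 3 * f(n - 5) + 3 * f(n - 7) + 6 * f(n - 8) + 3 * f(n - 4)
    forced = sum((2 * j - 3) * f(n - 4 - j) for j in range(1, n - 3))
    return hom + forced

# direct iteration of the recurrence
F = [1, 0, 1, 2, 3]
for nn in range(15):
    F.append(2 * F[nn + 4] + 3 * F[nn] + (2 * nn - 1))
```

Both computations agree, e.g. F_5 = 8 and F_6 = 17.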
4.2 Matrices in Some Combinatorial Configurations

The incidence matrix is a very useful tool in the study of some combinatorial configurations. In this section, we describe how incidence matrices can be applied to investigate the properties of systems of distinct representatives, of bipartite graph coverings, and of certain incomplete block designs.

Definition 4.2.1 Let X = {x1, x2, ···, xn} be a set and let A = {X1, X2, ···, Xm} denote a family of subsets of X. (Members in a family may not be distinct.)
The incidence matrix of A is the matrix A = (aij) ∈ Bm,n satisfying

    aij = 1 if xj ∈ Xi, and aij = 0 if xj ∉ Xi,

where 1 ≤ i ≤ m and 1 ≤ j ≤ n.

Example 4.2.1 The incidence of elements of X in members of A can also be represented by a bipartite graph G with vertex partite sets X = {x1, x2, ···, xn} and Y = {y1, y2, ···, ym} such that xi yj ∈ E(G) if and only if xi ∈ Xj, for each 1 ≤ i ≤ n and 1 ≤ j ≤ m. Let A be the incidence matrix of A. Note that a set of k mutually independent entries of A (entries that are not lying in the same row or same column; see Section 6.2 in the Appendix) corresponds to k edges in E(G) that are mutually disjoint (called a matching in graph theory). In a graph H, a vertex and an edge are said to cover each other if they are incident. A set of vertices covering all the edges of H is called a vertex cover of H. A line of the incidence matrix A of A corresponds to either an element in X or a member of A, either of which is a vertex of G. Therefore, Theorem 6.2.2 in the Appendix says that in a bipartite graph, the number of edges in a maximum matching is equal to the number of vertices in a minimum vertex cover.

Definition 4.2.2 A family of elements (xi : i ∈ I) in S is a system of representatives (SR) of A if xi ∈ Xi for each i ∈ I. An SR (xi : i ∈ I) is a system of distinct representatives (SDR) of A if for each i, j ∈ I with i ≠ j, xi ≠ xj.

Example 4.2.2 Let X = {1, 2, 3, 4, 5}, X1 = X2 = {1, 2, 4}, X3 = {2, 3, 5} and X4 = {1, 2, 4, 5}. Then both D1 = {1, 2, 3, 4} and D2 = {4, 2, 5, 1} are SDRs for the same family
X. However, for the same ground set X, if we redefine X1 = {1, 2}, X2 = {2, 4}, X3 = {1, 2, 4} and X4 = {1, 4}, then this family {X1, X2, X3, X4} does not have an SDR.
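Both claims in Example 4.2.2 can be confirmed by brute force, and P. Hall's condition (Theorem 4.2.1) diagnoses the failure; a small sketch:

```python
from itertools import permutations, combinations

def has_sdr(family):
    """Brute-force search for a system of distinct representatives."""
    elems = sorted(set().union(*family))
    m = len(family)
    return any(all(p[i] in family[i] for i in range(m))
               for p in permutations(elems, m))

def hall_condition(family):
    """P. Hall's condition: every union of k members has at least k elements."""
    m = len(family)
    return all(len(set().union(*(family[i] for i in J))) >= len(J)
               for k in range(1, m + 1)
               for J in combinations(range(m), k))

fam1 = [{1, 2, 4}, {1, 2, 4}, {2, 3, 5}, {1, 2, 4, 5}]   # has SDRs
fam2 = [{1, 2}, {2, 4}, {1, 2, 4}, {1, 4}]               # no SDR
```

For fam2, the union of all four members is {1, 2, 4}, of size 3 < 4, so Hall's condition fails and no SDR exists.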
Example 4.2.3 Let A be the incidence matrix of A. By Definitions 4.2.1 and 4.2.2, a set of k mutually independent entries of A corresponds to a subset of k distinct elements of X such that for some k members X_{i1}, X_{i2}, ···, X_{ik} of A, we have x_{ij} ∈ X_{ij} for each 1 ≤ j ≤ k (called a partial transversal of A). Thus a partial transversal of |A| elements is just an SDR of A. Several major results concerning transversals are given below. Proposition 4.2.1 is straightforward, while the proofs for Theorems 4.2.1, 4.2.2 and 4.2.3 can be found in [113].

Proposition 4.2.1 Let X = {x1, x2, ···, xn} be a set and let A = {X1, X2, ···, Xm} denote a family of subsets of X. Let A ∈ Bm,n denote the incidence matrix of A. Each of the following holds.
(i) The family A has an SDR if and only if ρ(A), the term rank of A, is equal to m.
(ii) The number of SDRs of A is equal to per(A).
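Proposition 4.2.1(ii) can be illustrated on the family of Example 4.2.2: the permanent of its 4 × 5 incidence matrix (for a rectangular 0-1 matrix, the sum over injective choices of one column per row) counts the SDRs. A sketch:

```python
def permanent(A):
    """Permanent of an m x n 0-1 matrix (m <= n): the number of ways to pick
    pairwise-distinct columns, one 1 from each row -- i.e. the SDR count."""
    m, n = len(A), len(A[0])
    def rec(i, used):
        if i == m:
            return 1
        return sum(rec(i + 1, used | {j})
                   for j in range(n) if A[i][j] and j not in used)
    return rec(0, frozenset())

# incidence matrix of the family of Example 4.2.2 over X = {1, ..., 5}
fam = [{1, 2, 4}, {1, 2, 4}, {2, 3, 5}, {1, 2, 4, 5}]
A = [[int(x in S) for x in range(1, 6)] for S in fam]
count = permanent(A)
```

This family has 20 SDRs (among them the D1 and D2 of Example 4.2.2).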
= {Xi I i E I} has an SDR if and only if for each
Given a family X= {X1,X2, · · · ,Xm}, N(X) of SDRs of the family.
= N(X1. · · · ,Xm) denotes the number
Theorem 4.2.2 (M. Hall) Suppose the family X= {X1,X2, ·· · ,Xm} has an SDR. Hfor each i, IX•I ~ k, then N(X) ~ {
k'
.lc! (lcm)l
ifk=s;m > m.
if k
Van Lint obtains a better lower bound in [162]. For integers m > 0 and n 11 • • • , nm, let
Theorem 4.2.3 (Van Lint, [162]} Suppose the family X SDR. H for each i, IXol ~ n,, then
= {X1.X2 ,··· ,Xm} has an
N(X) ~ Fm(n1,n2,··· ,nm)·
Definition 4.2.3Let S be a set. A partition of Sis a collection of subsets {A1 ,A2, · · · ,A,.} such that (i) S = U~ 1 Ai, and (ii) A 1 n A; = 0, whenever i ::f; j. Suppose that S has two partitions {A11 A2 ,··· ,Am} and {B1,B2,··· ,Bm}· A subset E !;;; S is a system of common representatives (abbreviated as SCR) if for each i,j E {1,2,··· ,m}, En A.
::10 and EnnB; ::10.
Theorem 4.2.4 Suppose that S has two partitions {A1,A2, · · · ,Am} and {B~tB2, · · · ,Bm}· Then these two partitions have an SCR if and only if for any integer k with 1 :::;; k :::;; m, and for any k subsets A1, Ai2 , • • • , A.., the union U'=I A,1 contains at most k distinct members in {Bt.B2,··· ,Bm}· Interested readers can find the proofs for Theorems 4.2.14.2.4 in (222] and (162]. Definition 4.2.4 For integers k,t,>. ~ 0, a family {X1 ,X2 , • • • ,X6 } of subsets (called the blocks of aground set X= {xt.x2, ... ,x.,} is atdesign, denoted by S>.(t,k,v), if
(i) |Xi| = k, and
(ii) for each t-element subset T of X, there are exactly λ blocks that contain T.
An S_λ(2, k, v) is also called a balanced incomplete block design (abbreviated BIBD, or a (v, k, λ)-BIBD). A BIBD with b = v is a symmetric balanced incomplete block design (abbreviated SBIBD, or a (v, k, λ)-SBIBD). An S1(2, k, v) is a Steiner system.

Example 4.2.4 The incidence matrix of an S1(2, 3, 7) (a Steiner triple system), with blocks {i, i+1, i+3} (mod 7):

    A = [ 1 1 0 1 0 0 0 ]
        [ 0 1 1 0 1 0 0 ]
        [ 0 0 1 1 0 1 0 ]
        [ 0 0 0 1 1 0 1 ]
        [ 1 0 0 0 1 1 0 ]
        [ 0 1 0 0 0 1 1 ]
        [ 1 0 1 0 0 0 1 ]
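The defining property of this design can be checked by computing A^T A, which previews the identity of Theorem 4.2.5 below; the cyclic blocks used here may be a relabeling of the matrix displayed above:

```python
# Cyclic S_1(2,3,7): blocks {i, i+1, i+3} (mod 7), a Steiner triple system
blocks = [{i % 7, (i + 1) % 7, (i + 3) % 7} for i in range(7)]
A = [[int(j in B) for j in range(7)] for B in blocks]

# For a BIBD with r = 3 and lambda = 1, A^T A = (r - lambda) I + lambda J:
# 3 on the diagonal (each point lies in r = 3 blocks) and 1 off the diagonal
# (each pair of points lies in exactly lambda = 1 block).
AtA = [[sum(A[k][i] * A[k][j] for k in range(7)) for j in range(7)]
       for i in range(7)]
```

Here A^T A = 2I + J, exactly as the design parameters predict.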
Proposition 4.2.2 The five parameters of a BIBD are not independent. They are related by these equalities:
(i) rv = bk.
(ii) λ(v − 1) = r(k − 1).

Theorem 4.2.5 Let A = (aij) ∈ Bb,v denote the incidence matrix of a BIBD S_λ(2, k, v). If v > k, then
(i) A^T A is nonsingular, and

    (A^T A)^{−1} = ((r − λ)Iv + λJv)^{−1} = (1/(r − λ)) (Iv − (λ/(rk)) Jv).

(ii) (Fisher inequality) b ≥ v.

Proof By the definition of an S_λ(2, k, v), we have the following:

    r(k − 1) = λ(v − 1),
    A J_{v,b} = k J_b,
    J_{v,b} A = r J_v,
    A^T A = (r − λ) Iv + λ Jv.

If v > k, then r > λ, and so the matrix (r − λ)Iv + λJv has eigenvalues r + (v − 1)λ (with multiplicity 1) and r − λ (with multiplicity v − 1). Since all v eigenvalues of A^T A are nonzero, A^T A is nonsingular, and the rank of A is v. The formula for (A^T A)^{−1} follows by direct computation. Note that A ∈ Bb,v and the rank of A is v. The Fisher inequality b ≥ v now follows. □
Theorem 4.2.6 (Bruck-Ryser-Chowla, [222]) If a (v, k, λ)-SBIBD exists, and if v is odd, then the equation

    z² = (k − λ)x² + (−1)^{(v−1)/2} λ y²

has a solution in x, y, and z, not all zero.

We omit the proof of Theorem 4.2.6, which can be found in [222]. In the following, we shall apply linear algebra and matrix techniques to derive the Connor inequalities.

(Thus Gi is the subgraph of G − {v1, ···, v_{i−1}} induced by the edges incident with vi in G − {v1, ···, v_{i−1}}.) Then {G1, G2, ···, G_{n−1}} is a bipartite decomposition of Kn. What is the smallest number r such that Kn has a bipartite decomposition of r subgraphs? This was first answered by Graham and Pollak. Different proofs were later given by Tverberg and by Peck.

Theorem 4.3.1 (Graham and Pollak, [105]; Tverberg, [263]; Peck, [210]) If {G1, G2, ···, Gr} is a bipartite decomposition of Kn, then r ≥ n − 1.

Theorem 4.3.2 (Graham and Pollak, [105]) Let G be a multigraph with n vertices with a bipartite decomposition {G1, G2, ···, Gr}. Let A = A(G) = (aij) be the adjacency matrix of G, and let n+, n− denote the number of positive eigenvalues and the number of negative eigenvalues of A, respectively. Then r ≥ max{n+, n−}.

Proof A complete bipartite subgraph Gi of G with vertex partite sets Xi and Yi can be obtained by selecting two disjoint nonempty subsets Xi and Yi from V(G). Therefore we write (Xi, Yi) for Gi, 1 ≤ i ≤ r. Let z1, ···, zn be n unknowns, let z = (z1, z2, ···, zn)^T, and let

    q(z) = z^T A z = 2 Σ_{1≤i<j≤n} aij zi zj.

For each (Xi, Yi),

    qi(z) = (Σ_{l∈Xi} zl)(Σ_{l∈Yi} zl).
Since {G1, G2, ···, Gr} is a bipartite decomposition of G,

    q(z) = z^T A z = 2 Σ_{i=1}^{r} qi(z).

By the identity 4ab = (a + b)² − (a − b)²,

    q(z) = Σ_{i=1}^{r} (li′(z))² − Σ_{i=1}^{r} (li″(z))²,

where li′(z) and li″(z) are linear combinations of z1, z2, ···, zn. Note that l1′, l2′, ···, lr′ will take zero values over a vector subspace W of dimension at least n − r. Therefore, q(z) is negative semidefinite over W. Let E+ denote the vector subspace spanned by the eigenvectors of A corresponding to the positive eigenvalues. Then E+ has dimension n+ and q(z) is positive definite over E+. It follows that (n − r) + n+ ≤ n, and so r ≥ n+. Similarly, r ≥ n−. □
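The star decomposition {G1, ···, G_{n−1}} described before Theorem 4.3.1 shows that the bound r ≥ n − 1 is sharp; a quick check (n = 6 is an assumed test size):

```python
from itertools import combinations

n = 6
edges = set(combinations(range(n), 2))

# G_i: the star joining v_i to all later vertices -- a complete bipartite K_{1, n-i-1}
stars = [set((i, j) for j in range(i + 1, n)) for i in range(n - 1)]
covered = set().union(*stars)
sizes = [len(s) for s in stars]
```

The n − 1 stars are pairwise edge-disjoint and together cover all n(n−1)/2 edges of Kn, so they form a bipartite decomposition of the minimum possible size.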
Note that A(Kn) = Jn − In has n − 1 negative eigenvalues. Thus Theorem 4.3.2 implies Theorem 4.3.1.

Definition 4.3.2 A bipartite decomposition {G1, G2, ···, Gr} of a graph G may be viewed as an edge coloring of E(G) with r colors, such that the edges colored with color i induce the complete bipartite subgraph Gi. A subgraph H of G is multicolored if no two edges of H receive the same color. Theorem 4.3.2 has been extended by Alon, Brualdi and Shader in [3]. We refer the interested readers to [3] for a proof.

Theorem 4.3.3 (Alon, Brualdi and Shader, [3]) Let G be a graph with n vertices such that A = A(G) has n+ positive eigenvalues and n− negative eigenvalues. Then in any bipartite decomposition {G1, G2, ···, Gr}, there is a multicolored forest with at least max{n+, n−} edges. In particular, any bipartite decomposition of Kn contains a multicolored tree.

Theorem 4.3.4 (Caen and Gregory, [44]) Let n ≥ 2 and let K′_{n,n} denote the graph obtained from K_{n,n} by deleting the edges in a perfect matching.
(i) If K′_{n,n} has a bipartite decomposition {G1, ···, Gr}, then r ≥ n.
(ii) If r = n, then there exist integers p > 0 and q > 0 such that pq = n − 1 and each Gi is isomorphic to K_{p,q}.

Proof Let {X, Y} denote the bipartition of V(K′_{n,n}), where X = {x1, x2, ···, xn} and Y = {y1, y2, ···, yn}. For each i with 1 ≤ i ≤ r, Gi has bipartition {Xi, Yi}. Let Gi′ be the subgraph induced by the edge set E(Gi), and let Ai = A(Gi′) be the adjacency matrix of Gi′. Note that

    Σ_{i=1}^{r} Ai = A(K′_{n,n}).    (4.8)
For each i with 1 ≤ i ≤ r, let

    xi = (x1^{(i)}, x2^{(i)}, ···, xn^{(i)})^T and yi = (y1^{(i)}, y2^{(i)}, ···, yn^{(i)})^T

be such that for k = 1, 2, ···, n,

    xk^{(i)} = 1 if xk ∈ Xi and 0 otherwise, and yk^{(i)} = 1 if yk ∈ Yi and 0 otherwise.

Therefore, the (X, Y)-block of Ai is xi yi^T, 1 ≤ i ≤ r. Let Ax = (x1, x2, ···, xr) ∈ Bn,r and Ay = (y1, y2, ···, yr)^T ∈ Br,n. Then by (4.8),

    J − I = Ax Ay.    (4.9)

By (4.9), n = rank(J − I) ≤ r, and so Theorem 4.3.4(i) follows. By (4.9), for each i with 1 ≤ i ≤ r, yi^T xi = 0. For integers i, j with 1 ≤ i, j ≤ n, define U ∈ Bn,n−1 to be the matrix obtained from Ax by deleting Column i and Column j from Ax and by adding an all-1 column as the first column of U; and define V ∈ Bn−1,n to be the matrix obtained from Ay by deleting Row i and Row j from Ay and by adding an all-1 row as the first row of V. Then UV is a singular matrix. Writing UV in terms of J − I leads to matrices U1 and V1 for which it follows that

    0 = det(UV) = det(In + U1V1) = det(I2 + V1U1) = 1 − (yi^T xj)(yj^T xi),

and so for each i, j,

    yi^T xj = yj^T xi = 1.

If r = n, then by (4.9), we must have

    Ay Ax = J − I,    (4.10)

and so by (4.9) and (4.10),

    Ax J = J Ax, and Ay J = J Ay,
and so all the rows of Ax have the same row sum and all the columns of Ax have the same column sum, so there exists an integer p ≥ 0 such that Ax J = J Ax = pJ. Similarly, there exists an integer q ≥ 0 such that Ay J = J Ay = qJ. It follows that

    (n − 1)J = (J − I)J = (Ax Ay)J = Ax(Ay J) = Ax(qJ) = (pq)J,

and so Theorem 4.3.4(ii) obtains. □
Definition 4.3.3 Let m1, m2, ···, mt be positive integers, and let G be a graph. A complete (m1, m2, ···, mt)-decomposition of G is a collection of edge-disjoint subgraphs {G1, ···, Gt} such that E(G) = ∪_{i=1}^{t} E(Gi) and such that each Gi is a complete mi-partite graph. Another extension of Theorem 4.3.1 is the next theorem, which was first proposed by Hsu [134].

Theorem 4.3.5 (Liu, [173]) If Kn has a complete (m1, m2, ···, mt)-decomposition, then

    n ≤ Σ_{i=1}^{t} (mi − 1) + 1.
Proof Suppose that {G1, ···, Gt} is a complete (m1, ···, mt)-decomposition of Kn, where Gi is a complete mi-partite graph with partite sets A_{i,1}, A_{i,2}, ···, A_{i,mi} (1 ≤ i ≤ t). Let x1, x2, ···, xn be real variables and for i = 1, 2, ···, t and j = 1, 2, ···, mi, let

    L_{i,j} = Σ_{k ∈ A_{i,j}} xk.

Note that

    Σ_{i=1}^{t} Σ_{1≤j<k≤mi} L_{i,j} L_{i,k} = Σ_{1≤i<j≤n} xi xj.

Θ(G) = lim_{k→+∞} (α(G^k))^{1/k}. Since for each k > 0, α(G^k) ≤ θ(G^k) ≤ (θ(G))^k, by Definition 4.5.2, Theorem 4.5.1 below obtains.

Theorem 4.5.1 (Lovász, [189]) Θ(G) ≤ θ(G).
Proposition 4.5.2 Let u1, u2, ..., un be an orthonormal representation of G and let v1, v2, ..., vn be an orthonormal representation of G^c, the complement of G. Each of the following holds.
(i) If c and d are two vectors, then

Σ_{i=1}^n ((u_i ⊗ v_i)^T (c ⊗ d))² = Σ_{i=1}^n (u_i^T c)²(v_i^T d)² ≤ |c|²|d|².

(ii) If d is a unit vector, then

ϑ(G) ≥ Σ_{i=1}^n (d^T v_i)².
Note that for h = (1, 0, ..., 0)^T, the corresponding ā satisfies ā^T ā ≤ a, and so y ≤ a. On the other hand, ā^T z > a implies ϑ(G)y > a, and so ϑ(G)y > a ≥ y. By Proposition 4.5.1(iv), ϑ(G) ≥ 1, and so a ≥ y > 0. We may assume that y = 1, and so ϑ(G) > a. Now define A = (a_ij) with

a_ij = ā_k + 1 if {i, j} = {i_k, j_k}, and a_ij = 1 otherwise;

then ā^T ā ≤ a can be written as

h^T A h = Σ_{i=1}^n Σ_{j=1}^n a_ij h_i h_j ≤ a.

Since λ_max(A) = max{x^T A x : |x| = 1}, this implies that λ_max(A) ≤ a. However, A ∈ 𝒜, and so by Theorem 4.5.2(i), ϑ(G) ≤ a, a contradiction. This proves the claim. Therefore (4.18) and (4.19) hold. Set
h_p = (h_{p,1}, h_{p,2}, ..., h_{p,n})^T,    b_ij = Σ_{p=1}^N c_p h_{p,i} h_{p,j},    B = (b_ij).
Then B is symmetric and positive semidefinite. By (4.18), tr(B) = 1. By (4.19),

b_{i_k j_k} = 0, (1 ≤ k ≤ m), and tr(BJ) = ϑ(G).

Therefore B ∈ 𝓑, and so ϑ(G) ≤ max_{B∈𝓑} tr(BJ). This completes the proof of Theorem 4.5.2(ii). □

Corollary 4.5.2 There exists an optimal orthonormal representation {u1, ..., un} such that ϑ(G) = (c^T u_i)^{−2}, 1 ≤ i ≤ n.
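The quantity Σ_{i=1}^n (d^T v_i)² appearing in Theorem 4.5.3 below can be illustrated concretely. The sketch below (illustrative code, not from the text) builds the classical "umbrella" orthonormal representation for G = C5, whose complement is again C5; the handle d = (0, 0, 1) and five ribs give Σ (d^T v_i)² = √5, the value ϑ(C5). The specific angles are an assumption of this example, chosen so that C5-adjacent ribs are orthogonal.

```python
import math

# Lovász umbrella: five unit "rib" vectors around the handle d = (0,0,1).
# Opening the umbrella so ribs of C5-adjacent vertices are orthogonal
# forces cos^2(phi) = 1/sqrt(5).
cos2_phi = 1 / math.sqrt(5.0)
cos_phi = math.sqrt(cos2_phi)
sin_phi = math.sqrt(1 - cos2_phi)

def rib(i):
    # azimuth step 4*pi/5 spreads consecutive ribs maximally apart
    theta = 4 * math.pi * i / 5
    return (sin_phi * math.cos(theta), sin_phi * math.sin(theta), cos_phi)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v = [rib(i) for i in range(5)]
d = (0.0, 0.0, 1.0)

# v is an orthonormal representation of C5^c = C5: unit vectors,
# orthogonal whenever i and j are adjacent on the 5-cycle
for i in range(5):
    assert abs(dot(v[i], v[i]) - 1) < 1e-9
    assert abs(dot(v[i], v[(i + 1) % 5])) < 1e-9

total = sum(dot(d, vi) ** 2 for vi in v)
print(total, math.sqrt(5.0))   # both approximately 2.2360679...
```

The computed value √5 matches Exercise 4.10(iii) at the end of this chapter.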
Theorem 4.5.3 (Lovász, [189]) Let v1, v2, ..., vn range over all orthonormal representations of G^c, and d over all unit vectors. Then

ϑ(G) = max Σ_{i=1}^n (d^T v_i)².

Proof By Proposition 4.5.2(ii), it suffices to show that for some v_i's and some d,

ϑ(G) ≤ Σ_{i=1}^n (d^T v_i)².
Pick a matrix B = (b_ij) ∈ 𝓑 such that tr(BJ) = ϑ(G). Since B is positive semidefinite, there exist vectors w1, w2, ..., wn such that

b_ij = w_i^T w_j,  1 ≤ i, j ≤ n.

Since B ∈ 𝓑,

tr(B) = Σ_{i=1}^n |w_i|² = 1, and |Σ_{i=1}^n w_i|² = tr(BJ) = ϑ(G).
Set v_i = w_i/|w_i|, (1 ≤ i ≤ n), and d = (Σ_{i=1}^n w_i)/|Σ_{i=1}^n w_i|. Then the v_i's form an orthonormal representation of G^c, and so by the Cauchy–Schwarz inequality,

(Σ_{i=1}^n |w_i|²)(Σ_{i=1}^n (d^T v_i)²) ≥ (Σ_{i=1}^n |w_i|(d^T v_i))² = (d^T Σ_{i=1}^n w_i)² = |Σ_{i=1}^n w_i|² = ϑ(G).

Since Σ_{i=1}^n |w_i|² = 1, it follows that Σ_{i=1}^n (d^T v_i)² ≥ ϑ(G). This proves the theorem. □

... F2 is a k-friendship graph (called a fish in [149]).

Proposition 4.6.3 Let D be a k-friendship graph with n vertices, and let A = A(D). Then each of the following holds.
(i) tr(A) = 0.
(ii) A^k = J − I.
(iii) For some integer c ≥ 0, AJ = JA = cJ. (Therefore, A has constant row and column sums c, and the digraph D has in-degree and out-degree c at each vertex.)
(iv) The integer c in (iii) satisfies

n = c^k + 1.    (4.26)
Proof Since D is loopless, tr(A) = 0. By Proposition 1.1.2(vii) and Definition 4.6.2, A^k = J − I. Multiply both sides of A^k = J − I by A to get

A^{k+1} = JA − A = AJ − A,

which implies that AJ = JA = cJ for some integer c. Multiply both sides of A^k = J − I by J, and apply (iii) to get

c^k J = A^k J = J² − J = nJ − J = (n − 1)J,

and so c^k = n − 1. □
Theorem 4.6.5 (Lam and Van Lint, [149]) If k is even, no k-friendship graph exists.

Proof Let k = 2l. Assume that there exists a k-friendship graph D with n vertices. Let A = A(D) and let A1 = A^l. By Proposition 4.6.3(ii), A1² = J − I. Therefore by Proposition 4.6.3, n must satisfy n = c² + 1 for some integer c. The eigenvalues of A1 must then be c with multiplicity 1, and i and −i (where i is the complex number satisfying i² = −1) with equal multiplicities. This implies tr(A1) = c, contrary to Proposition 4.6.3(i). □
For k odd, a solution of A^k = J − I is obtained for each n = c^k + 1, where c > 0 is an integer. Consider the integers mod n. For each integer v with 1 ≤ v ≤ c, define the permutation matrix P_v = (p_{ij}^{(v)}) by

p_{ij}^{(v)} = 1 if j ≡ v − ci (mod n), and p_{ij}^{(v)} = 0 otherwise, (0 ≤ i, j ≤ n − 1),
and define

A = P_1 + P_2 + ··· + P_c.    (4.27)

In fact, the matrix A of order n has as its first row (0, 1, 1, ..., 1, 0, ..., 0), where there are c ones after the initial 0. Subsequent rows of A are obtained by shifting c positions to the left at each step. The matrix A^k is the sum of all the matrix products of the form

P_{a_1} P_{a_2} ··· P_{a_k},

where (a_1, a_2, ..., a_k) runs through {1, 2, ..., c}^k, the set of all k-tuples of elements of {1, 2, ..., c}. The matrix P_{a_1} P_{a_2} ··· P_{a_k} is a permutation matrix corresponding to the permutation

x ↦ (−c)^k x + Σ_{i=1}^k a_i (−c)^{k−i} (mod n).    (4.28)
Theorem 4.6.6 (Lam and Van Lint, [149]) The matrix A defined in (4.27) satisfies A^k = J − I for each odd integer k ≥ 1.

Proof Note that if (a_1, a_2, ..., a_k) ∈ {1, 2, ..., c}^k, then

1 ≤ |Σ_{i=1}^k a_i (−c)^{k−i}| ≤ n − 1

(which is obtained by letting the a_i's alternate between 1 and c). Similarly, if (β_1, β_2, ..., β_k) ∈ {1, 2, ..., c}^k also, then by (4.26), c^k = n − 1, and so the two sums differ in absolute value by less than n. It follows that

Σ_{i=1}^k a_i (−c)^{k−i} ≡ Σ_{i=1}^k β_i (−c)^{k−i} (mod n)

implies that the two sums are equal, which is possible only if (a_1, a_2, ..., a_k) = (β_1, β_2, ..., β_k). Therefore as (a_1, a_2, ..., a_k) runs through {1, 2, ..., c}^k, the permutations in (4.28) form the set of the permutations of the form x ↦ x + γ, (1 ≤ γ < n). This proves the theorem. □
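The construction in (4.27)–(4.28) can be checked directly for small odd k. The sketch below (illustrative code, not from the text) builds A row by row from the rule j ≡ v − ci (mod n) and verifies A^k = J − I over the integers for two small cases:

```python
def friendship_matrix(c, k):
    # n = c^k + 1; row i has ones in columns j with j = v - c*i (mod n), 1 <= v <= c
    n = c ** k + 1
    return [[1 if any((v - c * i - j) % n == 0 for v in range(1, c + 1)) else 0
             for j in range(n)] for i in range(n)]

def mat_mult(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(X, k):
    n = len(X)
    R = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(k):
        R = mat_mult(R, X)
    return R

# For odd k, A^k should equal J - I: exactly one directed walk of length k
# between any two distinct vertices, and no closed walks of length k.
for c, k in [(2, 3), (3, 3)]:
    A = friendship_matrix(c, k)
    n = len(A)
    P = mat_pow(A, k)
    assert all(P[i][j] == (0 if i == j else 1) for i in range(n) for j in range(n))
print("A^k = J - I verified for (c,k) = (2,3) and (3,3)")
```

For (c, k) = (2, 3) this reproduces the n = 9 case discussed after Theorem 4.6.7.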
Theorem 4.6.7 (Lam and Van Lint, [149]) Let A be defined in (4.27) and let D = D(A). Then the dihedral group of order 2(c + 1) is a group of automorphisms of the graph D.

Proof In the permutation x ↦ (−c)x + v which defines P_v, substitute x = y + λ(c^k + 1)/(c + 1). The result is the permutation y ↦ (−c)y + v. Hence for λ = 0, 1, 2, ..., c, we find a permutation which leaves A invariant. These substitutions form a cyclic group of order c + 1. In the same way it can be found that the substitution

x = 1 − y + λ(c^k + 1)/(c + 1)

maps P_v to the permutation y ↦ (−c)y + (c + 1 − v) and therefore leaves A invariant. This, together with the cyclic group of order c + 1 above, yields a dihedral group acting on D. □
It is not known whether the solution of A^k = J − I is unique or not. In [149], it was shown that when k = 3 and n = 9, the dihedral group of Theorem 4.6.7 is the full automorphism group of the friendship graph D. However, whether the dihedral group in Theorem 4.6.7 is the full automorphism group of D in general remains to be determined.

We conclude this section by presenting two important theorems in the field. Let T(m) = L(Km) denote the line graph of the complete graph Km, and let L2(m) = L(Km,m) denote the line graph of the complete bipartite graph Km,m. Note that T(m) is an (m(m−1)/2, 2(m−2), m−2, 4)-strongly regular graph, and L2(m) is an (m², 2(m−1), m−2, 2)-strongly regular graph.

Theorem 4.6.8 (Chang, [51] and [52], and Hoffman, [126]) Let m ≥ 4 be an integer. Let G be an (m(m−1)/2, 2(m−2), m−2, 4)-strongly regular graph. If m ≠ 8, then G is isomorphic to T(m); and if m = 8, then G is isomorphic to one of four graphs, one of which is T(8).

Theorem 4.6.9 (Shrikhande, [254]) Let m ≥ 2 be an integer and let G be an (m², 2(m−1), m−2, 2)-strongly regular graph. If m ≠ 4, then G is isomorphic to L2(m); and if m = 4, then G is isomorphic to one of two graphs, one of which is L2(4).
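The strongly regular parameters quoted for T(m) can be checked mechanically on a small case. The sketch below (illustrative code, not from the text) builds T(5) = L(K5), the triangular graph, and verifies that it is a (10, 6, 3, 4)-strongly regular graph:

```python
from itertools import combinations

# vertices of T(5) = L(K5): the 2-element subsets of {0,...,4};
# two vertices are adjacent iff the subsets intersect
verts = [frozenset(p) for p in combinations(range(5), 2)]
adj = {v: {w for w in verts if w != v and v & w} for v in verts}

n = len(verts)                          # m(m-1)/2 = 10
degrees = {len(adj[v]) for v in verts}  # 2(m-2) = 6

# common neighbours: lambda = m-2 = 3 on edges, mu = 4 on non-edges
lam = {len(adj[v] & adj[w]) for v in verts for w in adj[v]}
mu = {len(adj[v] & adj[w]) for v in verts for w in verts
      if w != v and w not in adj[v]}

print(n, degrees, lam, mu)   # 10 {6} {3} {4}
```

The same check with m = 4 on L(K4,4) would confirm the (16, 6, 2, 2) parameters relevant to Theorem 4.6.9.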
4.7
Eulerian Problems
In this section, linear algebra and systems of linear equations will be applied to the study of certain graph theory problems. Most of the discussions in this section will be over GF(2), the field of 2 elements. Let V(m,2) denote the m-dimensional vector space over GF(2). Let B*_{n,m} denote the set of matrices B ∈ B_{n,m} such that all the column sums of B are positive and even. For subspaces V and W of V(m,2), V + W is the subspace spanned by the vectors in V ∪ W. Let E = {e1, e2, ..., em} be a set. For each vector x = (x1, x2, ..., xm)^T ∈ V(m,2), the map

x ↦ E_x = {e_i : x_i = 1, 1 ≤ i ≤ m}    (4.29)
yields a bijection between the subsets of E and the vectors in V(m,2). Thus we also use V(E,2) for V(m,2), especially when we want to indicate that the vectors in the vector space are indexed by the elements in E. Therefore, for a subset E' ⊆ E, it makes sense to use V(E',2) to denote the subspace of V(E,2) consisting of all the vectors whose ith component is always 0 whenever e_i ∈ E − E', 1 ≤ i ≤ m. For two matrices B1, B2, write B1 ⊑ B2 to mean that B1 is a submatrix of B2. Throughout this section, j = (1, 1, ..., 1)^T denotes the m-dimensional vector each of whose components is 1.
Definition 4.7.1 A matrix B ∈ B*_{n,m} is separable if B is permutation similar to

[ B11  0
  0    B22 ],

where B11 ∈ B_{n1,m1} and B22 ∈ B_{n2,m2} with n = n1 + n2 and m = m1 + m2, for some positive integers n1, n2, m1, m2. A matrix B is nonseparable if it is not separable. For a matrix B ∈ B*_{n,m} with rank(B) ≥ n − 1, a submatrix B' of B is spanning in B if rank(B') ≥ n − 1. Note that for every B ∈ B*_{n,m}, each column sum of B is equal to zero modulo 2. Thus if B ∈ B*_{n,m} is nonseparable, then over GF(2), rank(B) = n − 1. A matrix B ∈ B*_{n,m} is even if Bj = 0 (mod 2); and B is eulerian if B is both nonseparable and even. A matrix B ∈ B_{n,m} is simple if it has no repeated columns and does not contain a zero column. In other words, in a simple matrix, the columns are mutually distinct, and no column is a zero column.
Example 4.7.1 When G is a graph and B = B(G) is the incidence matrix of G, G is connected if and only if B is nonseparable; every vertex of G has even degree if and only if B is even; G is a simple graph if and only if B is simple; and G is eulerian (that is, both even and connected) if and only if B is eulerian.

Proposition 4.7.1 Each of the following holds.
(i) If B1, B2 ∈ B_{n,m} are two permutation similar matrices, then B1 is nonseparable if and only if B2 is nonseparable.
(ii) Suppose that B1 ∈ B_{n,m} and B2 ∈ B_{n,m'} are matrices such that B1 ⊑ B2. If B1 is nonseparable, then B2 is nonseparable.
(iii) Suppose that B1 ∈ B_{n,m} and B2 ∈ B_{n,m'} are matrices such that B1 ⊑ B2. If B1 has a submatrix B which is spanning in B1, then B is also spanning in B2.
(iv) Let B = B(G) ∈ B_{n,m} be an incidence matrix of a graph G. If B is nonseparable, then rank(B) = n − 1.

Proof The first three claims follow from Definition 4.7.1. If B = B(G) is nonseparable, then G is connected with n vertices, and so G has a spanning tree T with n − 1 edges.
The submatrix of B consisting of the n − 1 columns corresponding to the edges of T will be a matrix of rank n − 1. □

Definition 4.7.2 Let A, B ∈ B*_{n,m} be matrices such that A ⊑ B. We say that A is cyclable in B if there exists an even matrix A' such that A ⊑ A' ⊑ B; and that A is subeulerian in B if there exists an eulerian matrix A' such that A ⊑ A' ⊑ B. A matrix B ∈ B*_{n,m} is supereulerian if there exists a matrix B'' ∈ B*_{n,m''} for some integer m'' ≤ m such that B'' ⊑ B and such that B'' is eulerian. Let G be a graph, and let B = B(G) be the incidence matrix of G. Then G is subeulerian (supereulerian, respectively) if and only if B(G) is subeulerian (supereulerian, respectively).

Example 4.7.2 Let G be a graph and B = B(G) be the incidence matrix of G. Then G is subeulerian if and only if G is a spanning subgraph of an eulerian graph; and G is supereulerian if and only if G contains a spanning eulerian subgraph.

What graphs are subeulerian? What graphs are supereulerian? These are questions proposed by Boesch, Suffel and Tindell in [12]. The same questions can also be asked for matrices. It has been noted that the subeulerian problem should be restricted to simple matrices, for otherwise we can always construct an eulerian matrix B' with B ⊑ B' by adding additional columns, including a copy of each column of B. The subeulerian problem is completely solved in [12] and in [138]; Jaeger's elegant proof will be presented later. However, as pointed out in [12], the supereulerian problem seems very difficult, even just for graphs. In fact, Pulleyblank [214] showed that the problem of determining whether a graph is supereulerian is NP-complete. Catlin's survey [48] and its update [57] are good sources for the literature on supereulerian graphs and related problems.

Definition 4.7.3 Let B ∈ B_{n,m} and let E(B) denote the set of the labeled columns of B. We shall use V(B,2) to denote V(E(B),2). For a vector x ∈ V(E(B),2), let E_x denote the subset of E(B) defined in (4.29). We shall write V(B − x, 2) for V(E(B) − E_x, 2), and write V(x,2) for V(E_x,2). Therefore, V(B − x, 2) is the subspace of V(B,2) consisting of all the vectors whose ith component is always 0 whenever the ith component of x is 1, (1 ≤ i ≤ m), while V(x,2) is the subspace consisting of all the vectors whose ith component is always 0 whenever the ith component of x is 0, (1 ≤ i ≤ m). For a matrix B ∈ B_{n,m} and a vector x ∈ V(B,2), we say that Column i of B is chosen by x if and only if the ith component of x is 1. Let B_x denote the submatrix of B consisting of all the columns chosen by x. If E' ⊆ E(B), then by the bijection (4.29), there is a vector x ∈ V(B,2) such that E_x = E'. Define B_{E'} = B_x.
Conversely, for each
submatrix A ~ B with A E Bn,m 1 there exists a unique vector x E V(B,2) such that B. = A. Then denote this vector x as XA. A vector x E V(B,2) is a cycle (of B) if B. is even, and is eulerian (with respect to B) if B. is eulerian. Note that the set of all cycles, together with the zero vector 0, form a vector subspace C, called the cycle space of B; c.L, the maximal subspace in V(B,2) orthogonal to C, is the cocycle space of B. For x,y E V(m, 2), write x ~ y if y x ~ 0, and in this case we say that y contains x. Let BE Bn,m be a matrix and let x E V(B,2) be a vector. The vector xis cyclable in B if there exists an cycle y E V(B,2) such that x ~ y. Denote the number of non zero components of a vector x E V(n,2) by llxllo· Theorem 4.7.1 Let B E B!'.,m· A vector x is cyclable in B if and only if x does not contain a vector z in the cocycle space of B such that llzllo is odd. Proof Let C and c.L denote the cycle space and the cocycle space of B, respectively. Let x E V(B,2). Then, by the definitions, the following are equivalent. (A) x is cyclable in B. (B) there exists ayE C such that x ~ y. (C) there exists ayE C such that x = y + (y + x) E C + V(B x,2). (D) x E C+ V(B x,2). Therefore, xis cyclable if and only if x E C + V(B x,2). Note that
[C + V(B x, 2)].L
=c.L n V(B x,2).L = c.L n V(x, 2).
It follows that x is cyclable if and only if x is orthogonal to every vector in the subspace c.L n V(x, 2). Since x contains every vector in V(x, 2), and since for every nonzero vector z E c.L, llzllo is odd, we conclude that x is cycla.ble if and only if x does not contain a vector z in the cocycle space of B such that llzllo is odd. D Theorem 4.7.2 (Jaeger, [138]) Let A,B E B:.,m be matrices such that A~ B. Each of the following holds. (i) A is cyclable in B if and only if there exists no vector z in the cocycle space of B such that Uzflo is odd. (ii) If, in addition, A is a nonseparable. Then A is subeulerian in B if and only if there exists no vector z in the cocycle space of B such that llzllo is odd. Proof By Definition 4.7.2 and since A is nonseparable, A is subeulerian in B if and only if the vector XA is cycla.ble in B. Therefore, Theorem 4.7.2(ii) follows from Theorem 4.7.2(i). By Theorem 4.7.1, XA is cyclable in B if and only if XA does not contain a vector z in the cocycle space of B such that llzllo is odd. This proves Theorem 4.7.1(i). O
Matrices in Combinatorial Problems
200
Theorem 4.7.3 (Boesch, Suffey and Tindell, [12], and Jaeger, [138]) A connected simple graph on n vertices is subeulerian if and only if G is not spanned by a complete bipartite with an odd number of edges. Proof Let G be a connected simple graph on n vertices. Theorem 4.7.3 obtains by applying Theorem 4.7.2 with A= B(G) and B = A(Kn) (Exercise 4.12). D
=
Definition 4.7.4 A vector be V(n,2) is an even vector if llbll 0 (mod 2). A matrix HE Bn,m is collapsible if for any even vector bE V(n, 2), the system Hx=b, has a solution x such that Hx is nonseparable and is spanning in H, (such a solution xis called a bsolution). Let n, m, n 1 , m1o m2 be integers with n 2! n 1 2! 0, and m 2! m1o m2 2! 0. Let Bu E Bn,,m.,Bl2 E Bn1 ,m2 ,B22 E Bnn.,m2 and H E Bnn1 ,m(m1 +m2 )· Let Sf denote the column sum of the ith column of B22, 1 ~ i ~ m2, and let BE Bn,m with the following form B
= [ Bu 0
B12 B22
0 ] .
H
(4.30)
Define B / H E Bnno+l,mm2 to be the matrix of the following fonn
B/H
= [ B~t !~
],
where vJ; E V(m, 2} such that v'l; = (vmt+l> Vm 1 +2• · .. , Vm 1 +m2 ) and such that Vm 1 +i (mod 2), (1 ~ i ~ m2).
=
Sf
Proposition 4.7.2 Let B1oB2 E Bn,m such that B 1 is pennutation similar to B2. Then each of the following holds. (i) B 1 is collapsible if and only if B2 is collapsible. (ii) B 1 is supereulerian if and only if B 2 is supereulerian. Proof Suppose that B 1 = PB2 Q. Let bE V(n,2) be an even vector. Then since b is even, b' p 1 b e V(n,2) is also even. Since B 1 is collapsible, B 1 y = b' has solution y E V(m, 2) such that (Bt),.. is nonseparable. Let x = Q1 y. Then
=
and so Bx
= b has a solution x = Qy.
Matrices in Combinatorial Problems
201
=
=
Note that p 1(B1),. (B2Q),. (B2)•. Therefore, B. is nonseparable, by Proposition 4.7.1 and by the fact that (B1),. is nonseparable. This proves Proposition 4.7.2(i). Proposition 4.7.2(ii) follows from Proposition 4.7.2(i) by letting b 0 in the proof
=
above.
O
Proposition 4.7.3 H HE B!,m is collapsible, then His nonseparable, and rank(H) n1.
=
=
Proof By Definition 4.7.4, the system Hx 0 has a 0solution x, and so H. is nonseparable and spanning in H, and so Proposition 4.7.3 follows from Proposition 4.7.1.
0 Theorem 4.7.4 Let H be a oollapsible matrix and let B be a nonseparable matrix of the form in (4.30). Each of the following holds. (i) If B/H is collapsible, then B is collapsible. (ii) H B / H is supereulerian, then B is supereulerian. Proof We adopt the notation in Definition 4.7.4. Let b be an even vector, and oonsider the system of linear equations
[
B~, !: ! l (:) ~ (~) ~b,
(4.31)
where b1 E V(n1,2),b2 E V(n n1.2), X1 E V(mlJ2),x2 E V(m2,2) and xs E V(m
ifb1 is even otherwise Then b' is an even vector. Since B/H is collapsible,
(B/N)x12 = [
B~ 1 !~ ]( =~ )= ( i
),
has a b'solution x 12 • Therefore (G/H).12 is nonseparable and spanning in G/H. Since b is even, by the definition of 6, the system Hxs
(4.32)
h2  B 22 x 2 is also even. Since H is collapsible,
= b2 B22X2
has a (b2 B22x2)solution xs. Therefore, Hq is nonseparable and spanning in H.
(4.33)
202
Matrices in Combinatorial Problems
Now let
We have
Thus, to see that x is a bsolution for equation (4.31}, it remains to show that B,. is nonseparable and is spanning in B. Claim 1 B,. is nonseparable. Suppose that B,. is separable. We may assume that
B,.
H [
~]
= [ ~ ~ ] , where X =F 0 andY =F 0.
has a submatrix [
submatrix [
~H ]
that is a submatrix of [
! ].
(4.34)
and if [
~]
has a
:H ] that is a submatrix of [ ! ],then
By (4.33}, H,. 8 is nonseparable, and so we must have B has no zero columns, and so
Xn = 0.
By the definition of B~.m•
the columns chosen by xs are in the last m (m1 + m2) columns of B. By(4.34}and(4.35}, [
~] hasasubmatrix [
Xs;n]
thatisasubmatrixof[
(4.35}
B~t
!: ]•
and [ 0 ] has a submatrix [ 0 ] that is a submatrix of [ Bu B 12 ] • By (4.31) y ~~ 0 ~ and (4.33), XB/H =F 0. H YB/H =F 0, then (B/H)x12 is separable, contrary to (4.32}. Therefore, YB/H 0, and so
=
the columns chosen by x 12 are in the first (m 1 + m 2 } columns of B.
=
(4.36}
By (4.34), (4.35} and (4.36), Y Hxa and X is the first n1 rows of B/H,.12 • This implies that the last row of B/H,.12 is a zero row, and so B/H,.12 is separable, contrary to (4.31). This proves Claim 1.
Matrices in Combinatorial Problems
203
Claim 2 B,. is spanning in B.
It suffices to show that ra.nk(B,.) = n 1. Since (B/H)x12 spans in B/H, there exist It column vectors Vt. v2, • · · , vh in the first m1 columns of B, and 12 column vectors w1, w2, · · · , W!2 in the middle rna columns of B such that l 1 + h n 1 and such that v11 v2, • • • , v,., Wt, w2, · • · , W12 are linearly independent over GF(2). Since H,.,. spans in H, there exist column vectors u1, · • · , Uno1 in the last m  (m1 + m2) columns of B such that u 11 • • • , Unn,1 are linearly independent over GF(2). It remains to show that Vt, v 2 , • • • , v,., Wt. w2, · · · , w,, and u 11 · • · , Unn 1  1 are linearly independent. If not, there exist constants c1, · · · , c,,, ~ · · · ,C:., t!i, · · · , ~I such that
=
h !2 nn,1 :ECiv; + :E~w, + :E dtno = 0. i=l
i=l
(4.37)
i=1
Consider the first n1 equations in (4.37), and since v1, v2, · · · , v,., w1, w2, · · · , w,. are linearly independent over GF(2), we must have c1 ch 0 and cl_ C:. 0. This , together with the fact that u 1 , • • • , u,._,.,_ 1 are linearly independent, implies that
=··· = =
= ··· = =
d{. =··· = ~,., 1
=0. Therefore ra.nk(Bx) =n 1, as expected. 0
Definition 4.7.5 Let B E B~,m· Let T(B) denote the largest possible number k such that E(B) can be partitioned into k subsets E 11 Ea,· · · ,E,. such that each BE, is both nonseparating and spanning in B, 1 s; is; k. Example 4.7.3 Let G be a graph and let B = B(G). Then T(B) is the spanning tree packing number of G, which is the maximum number of edgedisjoint spanning trees in G. Proposition 4.7.4 Let BE B~,m be a matrix with T(B) bE V(n,2), the system Bx = b has a solution.
~
1. Then for any even vector
Proof We may assume that B E B~,nl and ra.nk(B) = n 1, for otherwise by tau(B) ~ 1, we can pick a submatrix B' of B such that B' is nonseparating and spanning in B to replace B. Since b is even and since every columns sum of A is even, it follows that the rank([B, b]) = n 1 = ra.nk(B) also. Therefore, Bx = b has a solution. O Theorem 4. 7.5 Let B E B~.m be a matrix with T(B)
~
2. Then B is collapsible.
Proof We need to show that for every even vector b E V(n, 2), Bx = b has absolution. Since T(B)?: 2, we may assume that for some B1 E B~1 ,,. and B2 E B~n,,m, B = [B11 B 2]suchthateachB,isnonseparableandspanninginB. Letx1 (1,1, ... ,1,0,··· ,O)T E V(n, 2) such that the first n1 components of x 1 are 1, and all the other components of x 1 areO. Write B = [B~, B 2] = [B~, 0] + [0, B2 ]. Since Bt E B~ 1 ,,., [B11 O]xt is even, and so the vector b [Bt,O]x1 E V(n,2) is also even.
=
204
Matrices in Combinatorial Problems
Since r([O,B,]) ~ 1, by Proposition 4.7.4, the system [O,B2)x2 solution x2, such that the first n 1 components of x 2 are 0.
=
Let x x1 + x2. Then since the last components of x2 are 0, we have
= b [B1oO]x1 has a
n n 1 components of x 1 are 0 and the first n 1
By the definition of Xtt B,. contains B1 as a submatrix, and so B .. is both spanning in B and nonseparable, by Proposition 4.7.1.
0
Theorem 4. 7.6 (Catlin, (47] and Jaeger, [138]) If a graph G has two edgedisjoint spanning trees, then G is collapsible, and supereulerian. We close this section by mentioning a completely different definition of E ulerian matrix in the literature. For a square matrix A whose entries are in {0, 1, 1}, Camion (46) called the matrix A Eulerian if the row sums and the column sums of A are even integers, and he proved the following result.
Theorem 4.7.7 (Camion [46]) An m x n (0, 1, 1) matrix A is totally unimodular if and only if the sum of all the elements of each Eulerian submatrix of A is a multiple of 4.
4.8
The Chromatic Number
Graphs considered in this section are simple, and groups considered in this section are all finite Abelian groups. The focus of this section is certain linear algebra approach in the study of graph coloring problems. Let r denote a group and let p
> 0 be an integer. Denote by V(p, r) the set of ptuples
(91,92•'" ,g,)T such that each g, e r, (1 $ i $ p). Given g = (gz,92,··· ,g,)T and h = (h1oh 2, ... ,h,)T, wewriteg~h to mean thatg, =F h;. foreveryiwith 1$ i $p. For notational convenience, we assume that the binary operation of r is addition and that 0 We also adopt the convention that for integers 1, 1,0, denotes the additive identity of and for an element g E r, the multiplication (1)(g) g, (O)(g) 0, the additive identity of r, and (1)g = g, the additive inverse of gin r.
r.
=
=
Let G be a graph, k ~ 1 be an integer, and let O(k) = {1,2, ... ,k} be a set of k distinct elements (referred as colors in this section). A function c : V(G) 1+ O(k) is a proper vertex kcoloring if f(u) =F f(v) whenever uv e E(G). Elements in the set O(k) are referred as colors. A graph G is kcolorable if G has a proper kcoloring. Note that a graph G has a proper kcoloring if and only if V(G) can be partitioned into k independent sets, each of which is called a color class. The smallest integer k such that G is kcolorable
Matrices in Combinatorial Problems
205
isx(G), the chromatic number of G. Hfor every vertex v e V(G), x(G v) < x(G) then G is kcritical or just critical.
= k,
Let G be a graph with n vertices and m edges. We can use elements in r as colors. D(G), and let Arbitrarily assign orientations to the edges of G to get a digraph D B B(D) be the incidence matrix of D. Then a proper lflcoloring is an element c e V(n,r) such that
=
=
where 0 e V(m,r). Viewing the problem in the nonhomogeneous way, for an element b E V(m,r), an element c e V(n,r) is a (r, b)coloring if (4.38) Definition 4.8.1 Let r denote a group. A graph G is fcolomble if for any bE V(m,r), there is always a (r, b)coloring c satisfying (4.38). Vectors in V(IE(G)I,f) can be viewed as functions from E(G) into r, and vectors in V(IV(G)I,f) can be viewed as functions from V(G) into r. With this in mind, for any be V(IE(G)I,f) and e E E(G), b(e) denotes the component in b labeled with element e. Similarly, for any c E V(IV(G)I,f) and v E V(G), c(v) denotes the component inc labeled with element v. Therefore, we can equivalently state that for a function be V(m,r), a (r, b)coloring is a function c E V(n,r) such that for each oriented edge e = (z,y) E E(G), c(z) c(y) ¢ b(e); and that a graph G is fcolorable if, under some fixed orientation of G, for ant function bE V(m,r), G always has a (r, b)coloring. Proposition 4.8.1 H for one orientation D orientation of G, G is also r colorable.
= D(G), that G is fcolorable, then for any
Proof Let D 1 and D 2 be two orientations of G, and assume that G is rcolorable under D1 • It suffices to show that when D2 is obtained from D 1 by reversing the direction of exactly one edge, G is r colorable under D 2 • Let B 1 B(D 1 ) and B 2 B(D2 ). We may assume that B 1 and B 2 differ only in Row 1, where the first row of B 2 equals the first row of B 1 multiplied by (1). Let b (b 11 ~,··· ,bm)T E V(m,r). Then b' = (b11 b2 ,··· ,bm)T E V(m,r) also. Since G is r colorable under D~, there exists a (r' b')coloring c' = (ell C2' .•• ' Cn) T E V(n,r). Note that c = (c1,C2,··· ,c,.)T e V(n,r) is a (r,b)coloring, and so G is also rcolorable under D2. 0
=
=
=
206
Matrices in Combinatorial Problems
Definition 4.8.2 Let G be a simple graph. The define the group chromatic number x1(G) to be the smallest integer k such that whenever r is a group with 1r1 ~ k, G is rcolorable. Example 4.8.1 (Lai and Zhang, [152]) For any positive integers m and k, let G be a graph with (2m+ k) + (m + k)m+k 1 vertices formed from a complete subgraph Km on m vertices and a complete bipartite subgraph K,..,f"2 with r 1 = m+k and r 2 (m+k)m+k such that
=
We can routinely verify that x(G) = m and Xl(G) = m + k (Exercises 4.19). Im:r,nediately from the definition of x1 (G), we have x(G) ~ Xl(G).
(4.39}
Xl(G') ~ Xl(G), if G' is a subgraph of G.
(4.40)
and
More properties on the group chromatic number x1 (G) can be found in the exercises. We now present the Brooks coloring theorem for the group chromatic number.
Theorem 4.8.1 (Lai and Zhang, [152]) Let G be a connected graph with maximum degree d(G}. Then Xl(G) ~~(G)+ 1,
where equality holds if and only if G complete graph on n vertices.
= C,. is the cycle on n vertices, or G = K,. is the
The proof of Theorem 4.8.1 has been divided into several exercises at the end of this chapter. Modifying a method of Wilf [275], we can applied Theorem 4.8.1 to prove an improvement of Theorem 4.8.1, yielding a better upper bound of Xl(G) in terms of .X1(G), the largest eigenvalue of G. Lemma 4.8.1 Let G be a graph with x1 (G} G' such that x1 (G) k and o(G') ~ k 1.
=
= k. Then G contains a connected subgraph
Proof By Definition 4.8.1, G is rcolorable if and only if each component of G is rcolorable. Therefore, we may assume that G is connected. By (4.40), G contains a connected subgraph G' such that X1 (G) = k but for any proper subgraph G" of G', Xl(G") < k. Let n = jV(G')I and m = IE(G')I.
Matrices in Combinatorial Problems
207
H 6(G') < k1, then G' has a vertex v of degree at most d ~ k2 in G'. Note that by the choice ofG', Xt(G'v) ~ k1. By Proposition 4.8.1, we assume that G' is a digraph such that all the edges incident with v are directed from v, and such that v corresponds to the last row of B = B(G'). Let v1o 112, • • • , v11 be the vertices adjacent to v in G', and correspond to the first d rows of B, respectively; and let e1 (v,v,), (1 ~ i ~d) denote the edges incident with v in G', and correspond to the first d columns of B, respectively. Let r be a group with 1r1 k  1. For any b = (blo 11:!, ••• , b11, bll+h · · • , bm)T E V(m,r). Let b' (b"+l• · · · ,bm)T =E V(m d,r) Since x1 (G' v) ~ k 1 = 1r1, there exists a (r, b')coloring c' = (c1 , c,, · · · ,c,._I)T E V(n 1,r). Note that 1r  {bt + c1. ~ + c,, · · · , b11 + c11}1 ~ (k  1)  d > 0, there exists a c,. E r  {bt +Ct.~+ ca,··· ,bd + C 0, then choose E so that 2 E~=l a;,lo+l > e(k 1), which results in .X1(G) > k 1, contrary to (4.43). Therefore, E;=l a;,l:+l = 0, and so a;,l:+l = 0 for each j with 1 :S j :S k. Repeating this process yields that A12 = 0, and so A21 = A?; 0, contrary to the assumption that G is connected. Therefore, we must have n k, and so G = G' Kn. With a similar argument, we can also show that when k = 2, G G' = On is a cycle.
=
=
=
=
0 Corollary 4.8.2 Let G be a connected graph with n vertices and m edges. Then Xl(G) :S 1 +~2m(: 1).
(4.44)
Equality holds if and only if G is a complete graph.
Proof Let .X1, .X2, · • · , .Xn denote the eigenvalues of G. By Schwarz inequality, by E~ 1 ~ 0 and E~ 1 ~ =2m,
=
.x~ = c.x1)2 = (~.x.f :Sen 1> ~~=en 1)(2m .xn. Therefore, (4.44) follows from that (4.41). Suppose equality holds in (4.44). Then by Theorem 4.8.2(i), G must be a complete graph or a cycle. But in this case, we must also have ).2 .\n, and so G must 3 = ··· be a Kn. (see Exercise 1.7 for the spectrums of Kn and On.) 0
= ).
=
The following result unifies Brooks coloring theorem and Wilf coloring theorem. Theorem 4.8.3 (Szekers and Wilf, [258), Cao, (42]) Let /(G) be a real function on a grapp G satisfying the properties (P1} and (P2) below: (P1) H His an induced subgraph of G, then /(H) :S /(G). (P2) /(G) ;::: 6(G) with equality if and only if G is regular. Then x(G) :S /(G)+ 1 with equality if and only if G is an odd cycle or a complete graph.
Matrices in Combinatorial Problems
209
We now turn to the lower bounds of x(G). For a graph G, w(G), the clique number of G, is the m&Jdmum k such that G has K,. as a subgraph. We immediately have a trivial lower bound for x(G): X1(G) 2:: x(G) 2:: w(G).
Theorem 4.8.4 below present a lower bound obtained by investigating A(G), the adjacency matrix of G. A better bound (Theorem 4.8.5 below) was obtained by Hoffman [123) by working on the eigenvalues of G. Theorem 4.8.4 H G has n vertices and m edges, then x(G) 2::
n2
Ln2 2m J.
Proof Note that G has a proper kcoloring if and only if V(G) can be partitioned into k independent subsets Vi, V2, · · · , V,.. Let = !Vii, (1 ~ i ~ k). we may assume that the adjacency matrix A(G) has the form
n,
A(G)
=[
~:: ~~
:::
~::
A~:2
···
A~o~:
Akl
l,
(4.45)
where the rows in [Au, A.2 , .. • , Ail,) correspond to the vertices in Vi, 1 ~ i ~ k. Since Vi is independent, A.• = 0 e B,.10 (1 ~ i ~ k), and so
2m= IIA(G)II ~
" n En~. 2 
(4.46)
It follows by Schwarz inequality that
" 2:: ("En,)2 = na.., kEn: i=l
i=l
and so Theorem 4.8.4 follows by (4.46).
D
Lemma 4.8.2 (Hoffman, [123]) Let A be a real symmetric matrix with the form (4.45) such that each A.. are square matrices. Then
"
~max(A) + (k 1)~min(A) ~ E~max(A.;). i=l
Theorem 4.8.5 (Hoffman, [123)} Let G be a graph with n vertices and m > 0 edges, and with eigenvalues ~ 1 ;?!: .\2 2:: • • ·;?!: ~... Then ~1
x(G);;::: 1 ~·
(4.47)
210
Matrices in Combinatorial Problems
Proof Let k = x(G). Then V(G) can be partitioned into k independent subsets V1, • · • , Vk, and so we may assume that A(G) has the form in (4.45), where At; = 0, 1 :5 i :5 k. By Lemma 4.8.2, we have
..\1 + (k1)..\,. :50. However, since m
4.9
> 0, we have..\,.< 0, and so (4.47) follows. O
Exercises
Exercise 4.1 Solve the difference equation {
Fn+S
= 2Fn+l + Fn + n, = 1.
Fo = 1,Fl = O,Fs
Exercise 4.2 Let S>.(t,k,v) beatdesign. Show that
Exercise 4.3 Show that for i s,.,(i,k,v), where
= 0, 1, · · · , t,
a tdesign S,.(t, k, v) is also an idesign
Ai=..\(v~)/(k~)· tl tl Exercise 4.4 Prove that bk = vr in BIDD. Exercise 4.5 Prove Theorem 4.3.8. Exercise 4.6 Let A= (lli;) and B = (b;;) be the adjacency matrix and the incidence matrix of digraph D(V, E) respectively. Show that
JVI lEI
JVI JVI
L L b;; = 0 and L La;; = lEI. i=l j=l
Exercise 4.7 For graphs G and H, show that 8(G *H)= 8(G)8(H). Exercise 4.8 If G has an orthonormal representation in dimension d, then 8(G) :5 d. Exercise 4.9 Let G be a graph on n vertices. (i) If the automorphism group r of G ~vertextransitive, then both 8(G)8(Gc) and 8(G)8(Gc) :5 n. (ii) Find an example to show that it is necessary for r to be vertextransitive.
=n
Matrices in Combinatorial Problems
211
Exercise 4.10 If the automorphism group r of G is vertextransitive, each of the following holds. (i) E>(G * ac) = IV(G)I. (ii) if, in addition, that G is selfcomplementary, then 9(G) JIV(G)j. (iii) e(Cs)
=
=.;s.
Exercise 4.11 Prove Proposition 4.6.2. Exercise 4.12 Prove Theorem 4.7.3. Exercise 4.13 (Cao, [42]) Let v be a vertex of a graph G. The kdegree of vis defined v. Let .6.~:(G) be the maximum kdegree of vertices in G. Show that (i) .6.~:(G) is the maximum row sum of A"(G). (ii) For a connected graph G,
to be the number of walks of length k from
where equality holds if and only if G is an odd cycle or a complete graph. Exercise 4.14 Let r be an Abelian group. Then a graph G is rcolorable if and only if every block of G is r colorable. Exercise 4.15 (Lai and Zhang, (152)) Let H be a subgraph of a graph G, and r be an Abelian group. Then (G,H) is said to be rextendible if for any bE V(IE(G)I,r), and for any (r, bl)coloring c1 of H, where b 1 is the restriction of bin E(H) (as a function), there is a (r, b)coloring c of G such that the restriction of c in V (H) is c1 (as a function). Show that if (G, H) is r extendible and H is r colorable, then G is r colorable. Exercise 4.16 (Lai and Zhang, [152)) Let G be a graph and suppose that V(G) can be linearly ordered as Vt.tJa,··· ,vn such that da,(vi) :$ k (i 1,2, ... ,n), where G, G[{v~ot~a, .. • ,v;}] is the subgraph of G induced by {VI,t/a, · · · ,v;}. Then for any Abelian group rwith 1r1 ~ k+ 1, (G;+l,Ga) (i 1,2, ... ,n1) is rextendible. In particular, G is r colorable.
=
=
=
Exercise 4.17 (Lai and Zhang, [152)) Let G be a graph. Then
Exercise 4.18 (Lai and Zhang, [152]) For any complete bipartite graph Km,n with n mm, Xt(Km,n) =m+ 1.
~
Exercise 4.19 (Lai and Zhang, [152]) For any positive integers m and k, there exists a graph G such that x(G) = m and x 1 (G) = m + k.
212
Matrices in Combinatorial Problems
Exercise 4.20 Let G be a graph. Show that (i) H G is a cycle on n;::: 3 vertices, then Xl(G) (ii) Xl(G) :S 2 if and only if G is a forest.
= 3.
Exercise 4.21 Prove Theorem 4.8.1.
Hints for Exercises
4.10
Exercise 4.1 Apply Corollary 4.1.2B to obtain k = 3,r
= 1,a = 2,/J =
1,bn = n,
Co= 1,c1 = 0,1:2 = 1. Exercise 4.2 Count the nun1ber oftsubsets in S,.(t,k,v) in two different ways. Exercise 4.3 For any subset S subsets containing
s from X
~
X with lSI
is ( v
~)
tz
I
= i, the nun1ber of = ways of taking t
while each tsubset belongs to ). of the x.·s in
an S.A(t,k,v). Thus the nun1ber of x.·s containing Sis). (
v~ )·
tz
On the other hand, the number of ways of taking tsubset containing S from X is
( k
~)
ta
).i (
where S belongs to .X. of the Xi's. Hence the number of Xi's containing Sis
:~ii).
Exercise 4.4 Count the repeated number of v elements in two different ways. Exercise 4.5 Let m = l + 1 and b = t 1 and apply Theorem 4.3.7. Then there exists a complete graph Kn with n = b(m1)+m (b1) (t1)l+ (l+1) (t2) = tlt+3, which can be decomposed into t complete (l + 1)partite subgraphs F1, F2, · · · , Ft. Clearly d~:(Fi) :S 2 :S d, and so (4.13) follows.
=
Exercise 4.6 In the incidence matrix, every row has exactly a +1 and a 1, and so the first double SUDl is 0. The second double sum is the sum of the indegrees of all vertices, and so the sum is IE(D)I. Exercise 4. 7 By Proposition 4.5.1, it suffices to show that 8(G *H) :S 8(G)8(H). Let Vl, ••. 'Vn, and Wit· •• ' Wm be orthonormal representations of and H, respectively, and let c and d be unit vectors such that
ac
n
~)vf c) 2 i=l
m
= 8(G), }:Cwfd) 2 =8(H). j=l
ac
ac
Then the Vj ® w;'s form an orthonormal representation of *H. Since *H ~ G * H, the vi® w;'s form an orthonormal representation of G *H. Note that c ®dis a unit
Matrices in Combinatorial Problems
213
vector. Hence n
9(G•H)
m
?: LL((v, ®w;)T(c®d))2 i=l i=l
n
m
i=l
i=l
= :E1J2, ... ,vn} such that (u,,v;) is an arc if and only if a,; > 0. Conversely, given a bipartite digraph B(R,., Sn) whose arcs are all directed from a vertex in Rm to a vertex in S,., there is a matrix A e Bm,n• denoted by M(B(R,., S,.)), whose bipartite digraph representation is B(R,., S .. ). Note that in our notation, the arcs in a bipartite digraph B(V1 , l/2) are always directed from Vi to l/2. Thus B(R,., Sn) and B(S,., R,.) have identical vertex sets but the arcs are directed oppositely. H A e Bm,n has a bipartite digraph representation B(R,.,S,.) and if M(B(S,.,R,.)) is the bipartite digraph representation of a generalized inverse of A, then B(S,., R,.) is called the ginverse graph of A (or of B(R,.,S.. )). HG = B(R,.,Sn)UB(S,.,R,.), then G is called the combined graph of B(Rm,Sn) and B(S11 ,R,.).
=
Example 5.3.1 The bipartite digraph representation of the matrix
A= [ is the bipartite graph in Figure 5.3.1.
~~~
l
Combinatorial Analysis in Matrices
225
Sa
B(~,Sa)
Figure 5.3.1 Example 5.3.2 (Combined graph) Let A be the matrix in Example 5.3.1 with representation B(14, Ss) and G = B(~, Sa) U B(Sa, 14). (See Figure 5.3.2). We can verify that Bt = M(Sa,l4) is a ginverse of A, where
B1
=[
1 0 0 0 00 01 0 1 0 1
l
226
Combinatorial Analysis in Matrices
Ss
Figure 5.3.2 Example 5.3.3 (Nonuniqueness of ginverses) Each of the following is a ginverse of the matrix A in Example 5.3.1.
1000] [1 0 0 0 ] B2= [ 0 0 0 0 ,Bs= 0 0 0 0 , B4 0 1 0 0 0 101
000] = [1 0 0 0 1 0000
Definition 5.3.2 The set of all ginverses of A is denoted by A. A matrix B e A is a ma:rimum ginverse of A and is denoted by max: A, if B has the maximum number of nonzero entries among all the matrices in A; a matrix BE A is a minimum ginverse of A and is denoted by min A, if B has the minimum number of nonzero entries among all the matrices in A.
Combinatorial Analysis in Matrices
227
Note that both max A and min A are sets of matrices. For notational convenience, we also use max A to denote a matrix in max A, and use min A to denote a matrix in min A.
=
Example 5.3.4 Let A be the matrix in Example 5.3.1. Then max A B 1 and both B 2 and B• are min A. In fact, Zhou [291] and Liu [167] proved that while min A may not be unique, max A is unique as long as A ::f: 0. Theorem 5.3.1 Let A E Bm,n with a bipartite digraph representation B(Rm, Sn)· For a graph B(Sn,Rm), the following are equivalent. {i} B(Sn, Rm) is a ginverse graph of A. (ii) In the combined graph G B(Rm,Sn) U B(Sn,Rm), for each pair of vertices (u.,v;) with uo E Rm and v; E Sn, (u,,v;) is an arc of G if and only if G has a directed (u,, v; )walk of length 3.
=
Proof Denote M =(Yo;)= M(B(S,.,Rm)) and A= (ao;). Suppose first that B(S.. , Rm) is a ginverse graph of A. Then AM A each i,j,
L a;,g, aq; =a;;.
= A, and so for (5.5)
9
p,q
If (u;,v;) is an arcinG with u; E Rm and v; E s.. , then a.;= 1, and so by (5.5), there must be a pair (p, q) such that a;pgpqaq; a;; = 1. Hence a;, = g,9 = a9; = 1, which implies that G has a directed (u,, v; )walk of length 3. If (u;,v;) is not an arcinG, then by (5.5), a;,g,qaq; 0 for all choices of (p,q), and so G does not have any directed (u;, v;)walk of length 3, and so (ii) must hold. Conversely, assume that (ii) holds. Then AMA =A follows immediately by (5.5), and so (i) follows. O
=
=
=
Definition 5.3.3 Let G B(Rm,S,.) UB(S.. ,Rm) be a combined graph. For each pair of vertices (u,v), if (u,v) E E(G) only ifG has a directed (u,v}walkoflength 3, then we say that the pair (u,v) has the (13} property; similarly, if G has a directed (u,v)walk of length 3 only if (u, v) E E(G), then we say that the pair (u, v) has the (31} property. For a vertex u E Rm, if for each v E Sn, (u, v) has the (13) property {or (31) property, respectively), then we say that u is a vertex with the (13} property (or {31) property, respectively).
If each vertex in Rm has the (13) property (or (31) property, respectively), then we say that G has the (13) property (or (31) property, respectively). With these terminology, Theorem 5.3.1 can be restated as follows. Theorem 5.3.1' Let A E Bm,n and let B(Rm, Sn) be the bipartite digraph representation of A. Then a bipartite digraph B(Sn, Rm) is a ginverse graph of A if and only if the
Combinatorial Analysis in Matrices
228
combined graph G = B(R,, Sn} U B(Sn, R,} has both the (13} property and the (31} property. Corollary 5.3.1A Suppose that B(S.. ,R,} is a ginverse graph of B(R,,S.. ). Then for every pair of vertices u e Rm and v e Sn in the combined graph G = B(Rm,Sn} u B(S.. ,R,}, either d(u,v} = 1 or d(u,v) = oo.
< oo, then k must be an odd integer. By Theorem 5.3.1, if k > 1, then k > 3. Take a shortest directed (u, v)path P = tloVt · · · in G, where vo = u and""= v. Then by Theorem 5.3.1, d(v0 ,v3} 1, contrary to the assumption that Pis a shortest path. 0
Proof Note that if d(u, v} = k
v,.
=
Corollary 5.3.1B Suppose that B(S.. , R,} is a_ginverse graph of B(R,, Sn}· Then for vertices "'1•"'2 e R, and Vt,U2 e in the combined graph G = B(R,, UB(S.. ,R,}, if (u1,v1}, (u2, v2} E E(G}, and if (u1 , t12} ¢ E(G). then (Vt."'2} ¢ E(G).
s..}
s..
Lemma 5.3.1 In thedigraphG = B(R,,S.. )UB(S.. ,R,), if for some u e R, andv e Sn, t:L+(u) 0 or tr(v) 0, then the pair {u,v} has (13) property and (31) property.
=
=
Proof G does not have a directed (u, v )path of length 1 or 3.
O
Lemma 5.3.2 In the digraph G = B(R,, S.. ) U B(Sn, R,.,.), For any u e R,, if for any v E Sn, at least one vertex in {u, v} lies in a directed cycle of length 2, then u has (13) property. Proof By Definition 5.3.3, we may assume that (u, v) E E(G). Hone of u or v lies in a directed 2cycle, then G has a directed (u, v)walk of length 3. O
=
Given a combined graph G B(Rm,Sn) UB(Sn,R.n), when (ut.v1 ), ("'2,V2) E E(G) and (ult !J2) ¢ E(G), we say that the pair {vt,u2} is a forbidden pair. An arc (u, v) E E(G} with u e R, and v e is called a single arc if neither u not v lies in a directed cycle of length 2.
s..
We now present an algorithm to determine if a matrix A E Bm,n has a ginverse, and construct one if it does. The validity of this algorithm will be proved in Theorem 5.3.2, which follows the algorithm. Algorithm 5.3.1 (Step 1) For a given A E Bm,n, construct the bipartite digraph representation of A, denote it by B(R,, S,.). (Step 2) Construct a bipartite digraph Bo = B(S.. , R,) as follows: For each pair of vertices u E R, and v E S,., an arc (v, u) e E(Bo) if and only if { v, u} is a not a forbidden pair. (Step 3) Let G B(R,, Sn) l,J Bo.
=
229
Combinatorial Analysis in Matrices
H for each single arc (u,v) of G, the pair {u, v} has (13) property, then Bo is a ginverse graph of B(R,.,S,.), and M(B0 ) is the maximum ginverse of A; otherwise A does not have a ginverse.
Example 5.3.5
~~~] ~ ~ ~
'
MB( o) 
1 0 0
[01000] 00001 1 1 0 0 0 0 01 00
Since G has no single arcs, M(Bo) is the maximum ginverse of A. Lemma 5.3.3 The graph G (31) property.
= B(R,., S,.) U B0 produced in Step 3 of Algorithm 5.3.1 has
Proof Pick u e R,. and v e S,.. Suppose that G has a directed (u,v)walk uu'v'v of length 3, but (u,v) ¢ E(G). Then {u',v'} is a forbidden pair, and so by Step 2, (tl,u') ¢ E(G), a contradiction. Lemma 5.3.4 Let B'(S,.,R,.) beaginverse graph of A, then B'(S,.,R,.) is a subgraph of Bo. Proof Let G = B(R,.,S,.) UB0 (S,.,R,.), and G'
= B(R,.,S,.) u B'(S,.,R,.).
Let u e R,. and v e S,.. H (u,v) ¢ E(G), then {u,v} is a forbidden pair, by Step 2 of Algorithm 5.3.1. By Corollary 5.3.1B, and since B'(S,., R,.) is a ginverse of B(R,., S,.), we have (u, v} ¢ E(G'). 0 Theorem 5.3.2 Let A e Bm,n with a bipartite digraph representation B(R,.,S,.), and let B 0 = B 0 (S,., R,.) be the bipartite digraph produced by Algorithm 5.3.1. The following are equivalent. (i) A has a ginverse. (ii) In the combined graph G = B(R,.,S,.) UB0 , if for some u E R,.,v E S,., (u,v) E E(G) is a single arc, then the pair {u,v} has (13) property. Proof Assume first that B' = B' (S,., R,.) is a ginverse of A, and let G' = B(R,., S,.)UB'. If Part(ii) is violated, then there exists an arc (u,v) E E(G') with u E R,. and v E S,. such that G' has not (u, v )trail of length 3. By Lemma 5.3.2, (u, v) must be a single arc. It follows that G' does not have a (13) property, and so by Theorem 5.3.1', B' is not a ginverse of A, a contradiction.
230
Combinatorial Analysis in Matrices
Conversely, by Theorem 5.3.2(ii) and by Lemma 5.3.2, B 0 has (13) property. By Theorem 5.3.1', by Lemma 5.3.3 and by the fact that Bo has (13) property, Bo is a ginverse of A. 0 Now we consider properties of a minimum ginverse of A. We have the following observation, stated as Proposition 5.3.1. The straightforward proof for this proposition is left as an exercise. Proposition 5.3.1 Let A
e Bm,n·
Each of the following holds.
{i) Let BE Bn,m be a matrix. If for some min A, we have min A ~ B ~max A+, then B is also a ginverse of A. (ii) If Bo(S,.,R,.) is a maxA+, then a minium ginverse B*(S,.,R,.) of A can be obtained from B 0 (S,., R,.) by deleting arcs such that G B(R,., S,.) U B* (S,., R,.) has (13) property and such that B*(S,.,R,.) has the minimum number of arcs among all ginverses of A.
=
(iii) Let (11,u) with 11 e S,. and u e R,. be an arc in B0 (S,.,R,.). If cl+(u) = 0 or = 0 in G = B(R,.,S,.) U B0 (S,.,R,.), then the arc (11,u) can be deleted and the resulting graph B0  (11,u) is also a ginverse of A.
tr(11)
Theorem 5.3.3 Let B(R,.,S,.) be the bipartite representation of A, let B*(S,.,R.,.) be a minimum ginverse of B{R.,.,S,.), and let G* B(R,., S,.) U B*(S,.,R,.). If (u,11) is a single arc of G*, then G* must have a directed K 2,2 as a subgraph whose arcs are all directed from R,. to S,., and such that {u,v) is an arc of this directed K2,2•
=
Proof Since B*(S,.,R,.) is a ginverse, {u,11) has {13) property. Since (u,11) is a single arc, G* has a directed (u,v)path uv1u111 with u,u1 e R,. and 11,111 e S,.. If (u 11 111) is in G*, then G* has a desirable K2, 2. If (Ut, 111) is not an arc in G*, then as (u, 11 1) has (13) property, G* has either a directed (u, 111 )path u112u2111 with u 2 =I u, or a directed 2cycle 111'1.£2111. In either case, since (u2,11) has (13) property, G* contains (u2,v), and so a desirable K2,2 must exist. 0 By Theorem 5.3.3, we can first apply Algorithm 5.3.1 to construct a ginverse of B0 (S,.,R,.), and then obtain a minimum ginverse by deleting arcs from B0 • Interested readers are referred to [174] for details.
5.4
Maximum Determinant of a (0,1) Matrix
What is the maximum value of ldet(A)I, if A ranges over all matrices in B,.? What is the least upper bound of ldet(A)I, if A ranges over all matrices in M,.? The Hadamard inequality (Theorem 5.4.1) gives an upper bound of ldet(A)I, but determining the least upper bound seems very difficult. This section will be devoted to the discussion of this
231
Combinatorial Analysis in Matrices problem. Theorem 5.4.1 (Hadamard, [111]) Let A= (a;;) EM,... Then
,..
ldet (AW
~
fi 2>~;·
j==li==l
Moreover, if each a;; E {1, 1}, then ldet (A)I
~
fl /'fa~;~ ni,
(5.6)
where equality in (5.6) holds if and only if AAT = nl,... Definition 5.4.1 Let M,..(1, 1) denote the collection of all n x n matrices whose entries are 1 or 1. A matrix HE M,..(1,1) is a Hadamard matrizif HHT = nl,... Proposition 5.4.1 Suppose that there exists ann x n Hadamard matrix. Each of the following holds.
(i) a,. (ii) n
= ..;nn.
=1,2 or n s
0 (mod 4).
Proof (i) follows by Theorem 5.4.1. Suppose that H (hj;) is a Hadamard matrix. By the definition of a Hadamard matrix, H HT HT H nl,... Hence, for n > 2,
=
= =
,..
,..
~)h1; + h,;)(h1; + hs;) i=l
= }:h~; = n. i=l
=
Since h 1; + h 2; ±2, 0 and h 1; divisible by 4, and so (ii) holds.
+ h3; = ±2, 0, the left hand side of the equality above is D
It has been conjectured that a Hadamard matrix exists if and only if n (mod 4). See [237]. For each integer n ~ 1, define
a,.
=
max{ det(H) : HE M,..(1,1)},
fj,..
=
max{ det(B) : B E B,..}.
= 1, 2, or n =0
When n ¢ 0 (mod 4), the value of a,. is determined by Ehlich [80]. Williamson (276] showed that for n ~ 2, a,.= 2"' 1{3,... Therefore, we can study Pn in order to determine When A belongs to some special classes of (0,1) matrices, the studies of the least upper bound of ldet(A)I were conducted by Ryser with an algebraic approach and by Brualdi and Solheid with a graphical approach.
232
Combinatorial Analysis in Matrices
Example 5.4.1 Let A E Bn be the incidence matrix of a symmetric 2design S>.(2,k,n). Then
AAT =ATA= (k .\)In + AJn· It follows that
= k(k .\) !!jl. Note that the parameters satisfy .\(n 1) = k(k 1). jdet(A)I
Ryser [224] found that the incidence matrix of symmetric 2design s,.(2,k,n) yields in fact the extremal value of ldet(A)j, among a class of matrices in Bn with interesting properties. Theorem 5.4.2 (Ryser, [224]) Let Q = (q,;) E Bn with integers such that .\(n 1) = k(k 1). H
IIQII = t.
Let k ;::: A ;::: 0 be
t ~ kn and A ~ k  A, or if t ;::: kn and k  A ~ A,
(5.7)
then jdet (Q)I ~ k(k .\)~. Proof For a matrix E E Bn, let E(x,y) denote the matrix obtained from E by replacing each 1entry of E by an x, and each 0entry of E by a y. With this notation, set k.\ Ql p= A,
=Q(p,1)
and
7J = [
p z ] , zT Ql
(5.8)
where zT = (..,JP, ..;p, ·· · ,..;p). LetS,= L:j=1 ql;• for each i with 1 ~ i ~ n. By Theorem 5.4.1, n
·ldet('lJ)I :S
.../# + np IT ..jp + S,.
(5.9)
i=l n
Note that
Es• = tp
2
+ (n 2 
t)
=t(J} 1) + n 2 , and that
i=l
...2 P
_
+ np P
(k.\+.\n) _ k 2 (k.\) .\

,\2
•
n
It follows by (5.7) that
Es• :S kn(p 1) +n 2
2•
For each i, let 8, be a quantity such that
i=l
s,;::: s,, (1 ~ i :S n),
n
and
Es• ~ kn(p2 
1) + n 2 •
i=l
Thus
kn(p2
=
J....(p
n"'P

1) + n 2
+ np = n ( kr + .\n.\k+k.\) .\
) _ n(k  .\)k2 A2 •
+1 
Combinatorial Analysis in Matrices
233
It follows
!1(p+3'.)~ (~~(p+:s'·)r ~ (Ck;_;>Jc2r.
(5.10)
Combine (5.9) and (5.10) to get ldet(Q)I
~ k~ IT Vp+S, i=l
~ k~ e~r = e~r+l. By (5.8), we can multiply the first row of 7J by 1/.,;fJ and add to the other rows of 7J to get ldet(7J)I Note that ldet(Q(k/~,0))1
=pldet(Q(k/~,0))1 ~ (k~) n+l.
(5.11)
= (k/~)nl det(Q)I. It follows that
)n+l .
k ( pldet(Q)I ~'X .../k ~
Therefore the theorem obtains.
O
Theorem 5.4.3 (Ryser, [224]) Let Q = (q,i) E Bn be a matrix. H ldet(Q)I then Q is the incidence matrix of a symmetric 2design S.A(2, k, n). Proof H ldet (Q)I
= k(k.X)¥,
= k(k .X)¥, then
.x . I (x,ok )I = (k~)n+l
P detQ
Define 7J as in (5.8) and employ the notations in Theorem 5.4.2. By (5.11), ldet(7J)I
= (k~) n+l,
and so equality must hold in (5.10), which implies p
It follows that 7J7JT

+ s, =
(k.\)k 2
.\2
,
1 ~ i ~ n.
= k2(k .X)/~2 In+l• and so k2
Q1Qf = ,x2 (k .\)In pJn.
(5.12)
234
Combinatorial Analysis in Matrices
For each i, let r;
= :Ej=1 qii· By (5.12), p'lr;+ (nr;) (p2 l)r,
=
1l'
,X2(k.\) p
k2
= .\2 (k 
.\)  p n.
=
Hence r; k, for each i with 1 ~ i ~ n. For i ::1 ; with 1 ~ i,; ~ n, let f denote the dot product of the ith row and the jth row of Q. By (5.12),
fp 2

2(k  f)p + n  2k + f
= p
l(p2 + 2p+ 1) =2kp p+ 2k n. It follows that lk2 /.X2 = k 2 /).. and so f = .\. Therefore, Q is the incidence matrix of a symmetric 2design 8>..(2, k, n).
O
The following theorem of Ryser follows by combining Theorems 5.4.2 and 5.4.3. Theorem 5.4.4 (Ryser, (224]) Letn Let
QE Bn with IIQII =t = kn.
> k >A> Obeintegerssuch that.\(n1) = k(k1}.
Then ldet(Q)I ~ k(k A) n;;',
where equality holds if and only if A is the incidence matrix of a 2design 8>..(2, k,n). Definition 5.4.2 Let A E Bn. Define two bipartite graphs Go(A) and Gt(A) as follows. Both Go(A) and Gt(A) have vertex partite sets U {u1 ,112,··· ,un} and V =
=
{v~o t/2, • • • , Vn}·
lJi;
An edge u;vi E E(Go(A)) (u;v; E E(G1 (A)), respectively) if and only if
= 0 (a;;= 1, respectively). Note that Go(A) = Gt(Jn A).
A matrix A is acyclic if G 1 (A) is acyclic, and A is complementary acyclic if G0 (A) is acyclic. A matrix A E Bn is complementary triangular if A has only 1 's above the its main diagonal. For example, Jn  A is a triangular matrix. For each integer n ~ 1, define
In= max{l
det(A)I : A E Bn and A is complementary acyclic}.
Example 5.4.2 Since an acyclic graph has at most one perfect matching, if A is acyclic, then det(A} E {0,1,1}. Example 5.4.3 Suppose A is complementary acyclic, and let B be the matrix obtained from A by permuting two rows of A. Then B is also complementary acyclic with det(B) =  det(A). Therefore,
In= max{ det(A)
: A E Bn and A is complementary acyclic}.
Combinatorial Analysis in Matrices
235
Brualdi and Solheid successfully obtained the least upper bound of ldet(A)I, where A ranges in some subsets of B,. with the complementary acyclic property. Their results are
presented below. Interested readers are referred to [39] for proofs. Theorem 5.4.5 (Brualdi and Solheid, [40]) Let n ~ 3 be an integer and let A E B,. be a complementary acyclic matrix such that A has a row or column of all ones. Then ldet(A)I::; n 2.
(5.13)
For n ~ 4, equality in (5.13) holds if and only if A or AT is permutation equivalent to
L,.=[O~. 1
J,._lln1
1]·
(5.14)
For n = 3, equality in (5.13) holds if and only if A or AT is permutation equivalent to one of these matrices
Definition 5.4.3 For a matrix A E B,., the complementary term rank of A, PJA, is the term rank of J,.  A. Theorem 5.4.6 (Brualdi and Solheid, [40]) Let n ~ 3 be an integer and let A complementary acyclic matrix with PJA = n  1. Then ldet(A)I ::;
{
n2
if3::;n::;8
l n;s Jrn;sl
ifn
~8.
e B,. be a (5.15)
For n ~ 4, equality holds in (5.15) if and only if A or AT is permutation equivalent to L,. as defined in (5.14) (when 4 ::; n ::; 8), or JL.;:!J [
oT
j 1
z
0
(n ~ 8),
where Z has at most one 0. Theorem 5.4.7 (Brualdi and Solheid, (40]) Let n ~ 2 be an integer and let A E B,. be a complementary acyclic matrix with PJA = n. Then ldet(A)I ::;
{
n2
ifn::; 5
l n;t Jrn;tl
ifn~5.
(5.16)
Combinatorial Analysis in Matrices
236
Equality holds in (5.16) if and only if A or AT is permutation equivalent to Jn In, or
JL~J IL~J j [
or
o
J
0
More details can be found in [39). For most of the matrix classes, the determination of the maximum determinant of matrices in a given class is still open.
5.5
Rearrangement of (0,1) Matrices
The rearrangement problem of an ntuple was first studied by Hardy, Littlewood and Polya [114]. Schwarz (231) extended the concept to square matrices.
Definition 5.5.1 Let (a)= (a1.a2,··· ,an) be anntuple and let 11' be a permutation on the set {1, 2, .. · ,n}. Then (a,..)= (a.r(t),a,.(2), .. ·a..cn>) is a reammgement of(a). A matrix A e Mn can be viewed as an n 2 tuple, and so we can define the rearrangement of a square matrix in a similar way. Let
A,.
11'
be a permutation on {1,2,··· ,n}, and let A
= (BiJ)
E Mn. The matrix
= (aij) is called a permutation of A if al; = a,.(i),~rCi)• for all 1 :5 i,j :5 n.
Clearly a permutation or a transposition of a matrix A is a rearrangement of A. We call a rearrangement trivial if it is a permutation, or a transposition, or a combination of permutations and transpositions. Two matrices At, A2 rearrangement of At.
e Mn are essentially different if A2 is a nontrivial
=
For each A (Bi;) e Mn, define IIAII follows immediately from the definitions.
= E~t E.i=t Gij·
Proposition 5.5.1 below
Proposition 5.5.1 For matrices A 1 ,A2 e Mn, each of the following holds. (i) H A1 is a rearrangement of A2, then IIAtll = IIA2II· (ii) HAt is a trivial rearrangement of A:~, then IIA~II = IIA~II· Definition 5.5.2 Let
N = ma.x{IIA211 : A e Mn},
u
=
a=
and
N =min{IIA211 :
=
A e Mn}·
c
Let ={A e Mn : IIA2 11 N} and {A e Mn : IIA2 II N}, let denote the set of matrices A= (Bi1 ) e Mn such that for each i, a;i ~ Bi;• whenever;< j', and such that for each j, ao; ~ Bi•; whenever i < i'; and let Cdenote the set of matrices A a;i) E Mn such that for each i, a;; ~ aw whenever; < j', and such that for each;, Bi; :5 G;•; whenever i < i'.
=(
Combinatorial Analysis in Matrices
237
Theorem 5.5.1 (Schwarz, [231])
un c¢ 0, and un c¢ 0. Definition 5.5.3 For a matrix A e Mn, let A1 (A) and An(A) denote the maximum and the minimum eigenvalues of A, respectively. Let
=
max{A1 (A) : A
X=
minp.n(A) : A
X
e Mn}, e Mn};
and
and let
iJ = {A e Mn : A1 (A) =X} and
B
=
{A e Mn : An{A)
= X}.
Theorem 5.5.2 (Schwarz, [231])
iJnc ¢0,
and
8nc ¢0.
Definition 5.5.4 For integer n ~ 1 and u with 1 SuS n 2 , let Un(u) IIAII
= u}. Let =
max{IIA2 11 : A min{IIA2 11 : A
e Un(u}}, e Un(u)}.
= {A e Bn
and
Let Un(u) denote the set of matrices A= (a,1 ) E Un(u) such that for each i, a0; ~a,;, whenever j < j', and such that for each j, a,; ~ at•; whenever i < i'; and let Un(u) denote the set of matrices A= (a,1) e Un(u) such that for each i, a0; ~ a0;• whenever j < j', and such that for each j, as; S at•; whenever i < i'. Proposition 5.5.2 follows from Theorem 5.5.1 and the definitions. Proposition 5.5.2 For integers n ~ 1 and u with 1 S u S n 2 , each of the following holds. (i) N n(u) = max{IIA2 11 : A e Un(u)}, and Nn(u) = min{IIA2 II : A e Un(u)}. (ii) Let A = (a,;) e Un(u), let s, :Ej=1 a,; and r, = E~ 1 a;• denote the ith row sum and the ith column sum of A, respectively. Then
=
Example 5.5.1
[ 1~
011 1] ~
e Us{6)
and [1~
238
Combinatorial Analysis in Matrices
Aharoni discovered the following relationship between Nn(u) and Nn(n2 between Nn(u) and Nn(n 2  u). Theorem 5.5.3 (Aharoni, [1]) Let nand u be integers with n A e Un(u), then
~

u), and
1 and 1 SuS n 2 • H
(i) IIA2 II = 2un n 3 + II(Jn A) 2 11. (ii) N n(u) 2un n 3 + Nn(n2  u). (iii) Nn(u) 2un n 3 + Nn(n2  u).
= =
Parts (ii) and (iii) of Theorem 5.5.3 follow from Part (i) of Theorem 5.5.3 and the observation that if A e Un(u), then Jn A e Un(n2  u). In [1), Aharoni constructed four types of matrices for any 1 S u S n 2 , and proved that among these four matrices, there must be one A such that Nn(u) = IIA2 11. Theorem 5.5.3(ii) and (iii) indicate that to study Nn(u) and Nn(u), it suffices to consider the case when u ~ n 2 f2. The next result of Katz, points out that an extremal matrix reaching Nn(u) would have all its 1entries in the upper left corner principal submatrix. Theorem 5.5.4 (Katz, [142]) Let n, k be integers with n 2 ~ k2 ~ n 2 /2 > 0. Then
Corollary 5.5.4 Let n, k be integers with n 2 ~ k2 ~ n 2 /2 > 0. Then Nn(n2

k2 )
=k
3 
2k2 n + n 3 •
To study Nn(u), we introduce the square bipartite digraph of a matrix, which plays a
useful role in the study of IIA2 11. Definition 5.5.5 For a A = (~;) e Bn, let K(A) be a directed bipartite graph with Vertex partite sets (Vi, l/2), where Vt Ut, t£2, ••• , Un} and Vt, t12, ••• , Vn}, representing the row labels and the column labels of A, respectively. An arc (uh v;) is in E(K) if and only if aii = 1. Let Kt and K2 be two copies of K(A) with vertex partite sets (Vi, V2) and Wt, V~), respectively, where V: {u~,t4, · · · ,u~} and V~ {vLv~, · · · ,v~}, and where (u~,vj) E E(K2) if and only if as;= 1. The square bipartite digraph of A, denoted by SB(A),_is the digraph obtained from K1 and K2 by identifying "• with u~, for each i = 1, 2, · · · , n. The next proposition follows from the definitions.
={
=
v2 ={
=
Proposition 5.5.3 Let A e Bn. Each of the following holds. (i) IIA2 11 is the total number of directed paths of length 2 from a vertex in Vi to a vertex in v~.
Combinatorial Analysis in Matrices
239
(ii) For each t1; e 1'2 in SB(A), d(v1) ith columnnsum of A. n
= s; is the ith row sum of A, and ti+(v;) = r;
is the
(iii) IIA2 11
= L:d(t~;)~(t1;) = L:r;s;. i=l
i=l
Example 5.5.2 The square bipartite graph of the matrix At in Example 5.5.1 is as follows:
1
1
1
Figure 5.5.1 The graph in Example 5.5.1 Example 5.5.3 The value IIM2 11 may not be preserved under taking rearrangements. Consider the matrices 100] [11 At= [ 0 1 0 andA2= 10 0 0 1 0 0
~ ]·
Then At and A2 are essentially different and IIA~II =FIIA~II Theorem 5.5.5 (Brualdi and Solheid, [38]) H t1;:: n2

LiJf!J1, then
Moreover, for A E Un(u), IIA2 11 = Nn(u) if and only if A is a permutation similar to [ J, J,,,
X] J,
'
(5.17)
240
Combinatorial Analysis in Matrices
where X E M,.,, is an arbitrary matrix, and where k k+l =n.
~
0 and l
~
0 are integers such that
=
Sketch of Proof Construct a square bipartite digraph D SB(A1 ) as follows: every vertex in {uh··· ,u,} is directed to every vertex in {vl+l•"' ,vn}, where Z > 0 is an integer at most n. By Proposition 5.5.3(i), IIA~II = 0. Let A= Jn A 1 • By Theorem 5.5.3(i) and by IIA~II = o,
IIA2 II = 2on n3 = Nn(u),
=
where u IIAII = IIJn A1ll ~ n 2  LJJf!l· By Theorem 5.5.3(i), if A E Un(u) satisfies IIA2 11 = 2on n 3 , then II(Jn A) 2 11 = 0, and so by Proposition 5.5.3(i), SB(Jn A) must be some digraph as a subgraph of the one constructed above, (renaming the vertices if needed). Therefore, A must be permutation similar to a matrix of the form in (5.17). O Proposition 5.5.4 Let A e ii..(u) and let D notations in Definition 5.5.5. (i) H for some i < nand j > 1, (u;,v;) (uHl, v;) e E(D).
(ii) H u
~(
; ) , and if IIA2 11
=SB(A) with vertex set Vi UV2 uv;, using e E(D), then both {u1,v;1 ) E E(D) and
= Nn(u), then in D, that i > j
implies that (uo, v;) E
E(D).
{iii) H u
~(
; ) , and if A
e Un(u)
and
IIA2 11 = Nn(u), then every entry under the
main diagonal in A is a 1entry. Proof Part (i) follows from the Definition 5.5.4 and Definition 5.5.5. Part (iii) follows from Part {ii) immediately. To prove Part (ii), we argue by contradiction. Assume that there is a pair p and q such that p > q but (u,, v11 ) f. E(D). Since u ~ n(n 1)/2 and by Proposition 5.5.4(i), there must be ani such that (u;,vi) E E(D). Obtain a new bipartite digraph D1 = SB(Ao) from D by deleting (u;,v;),(v1,vD and then by adding (u,,v11 ) and (v,.,v~). Note that
where the degrees are counted in D. By Proposition 5.5.4(i) again, a(v;) ~ n (i 1), ~(v1 ) ~ i, d(v,.) ~ n p, and ~(v11 ) ~ q 1.
It follows by (5.18) and (5.19) that
IIA2 UU.Agll ~pq+ 1 ~ 2,
(5.19)
Combinatorial Analysis in Matrices
241
contrary to the assumption that IIA2 II
=Nn(u). O
Theorem 5.5.6 (Brualdi and Solheid, [38]) H u
= ( ; ) , then
Moreover, if A E U,.(u) and IIA2 } = N,.(u), then A is permutation similar to Ln, the matrix in Bn each of whose 1entry is below the main diagonal. Proof This follows from Proposition 5.5.4(iii). To investigate the case when ( ; )
O
< u < n2
LiJril, we establish a few lemmas.
Lemma 5.5.1 (Lin, [175]) Let A = (tli;) E Un(u), let A(u + epg) denote the matrix obtained from A by replacing a 0entry apq by a 1entry, and lets, and r 9 denote the pth column sum and the qth row sum, respectively. Then ifp;lq ifp=q
Proof Note that SB(A(σ + e_{pq})) is obtained from SB(A) by adding the arcs (u_p, v_q) and (v_q, v_q′). If p ≠ q, the number of newly created directed paths of length 2 in SB(A(σ + e_{pq})) is d⁺(v_q) + d⁻(v_p) = r_q + s_p; if p = q, an additional path u_p v_p v_p′ is also created. So Lemma 5.5.1 follows from Proposition 5.5.3(i). □

Lemma 5.5.2 (Liu, [175]) Let L_n be the matrix in B_n each of whose 1-entries lies below the main diagonal. An (i, j)-entry in L_n is called an upper entry if i ≤ j. Let A denote the matrix obtained from L_n by changing the upper entries at (i_t, j_t), where 1 ≤ t ≤ r, from 0 to 1. If all the i_t's are distinct and all the j_t's are distinct, then

‖A²‖ = ‖L_n²‖ + Σ_{t=1}^{r} Δ(i_t, j_t),

where

Δ(i_t, j_t) = (n − 1) − i_t + j_t if i_t < j_t, and Δ(i_t, j_t) = n − i_t + j_t if i_t = j_t.
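The increment formula of Lemma 5.5.1 can be validated directly: add a single 1-entry to a random 0-1 matrix and compare path counts before and after. A small sketch (again reading ‖A²‖ as the entry sum of A²; function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def norm2(A):
    # ||A^2||: sum of the entries of A^2, i.e. the number of
    # directed paths of length 2 in the digraph of the 0-1 matrix A
    return int((A @ A).sum())

for _ in range(200):
    n = int(rng.integers(2, 8))
    A = (rng.random((n, n)) < 0.4).astype(int)
    zeros = np.argwhere(A == 0)
    if len(zeros) == 0:
        continue
    p, q = zeros[rng.integers(len(zeros))]
    B = A.copy()
    B[p, q] = 1                 # A(sigma + e_pq)
    s_p = A[:, p].sum()         # p-th column sum of A
    r_q = A[q, :].sum()         # q-th row sum of A
    expected = r_q + s_p + (1 if p == q else 0)
    assert norm2(B) - norm2(A) == expected
```

The extra 1 when p = q accounts for the new path p → p → p created by the added loop.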
Lemma 5.5.6 If ‖T_r‖ = Σ_{i=1}^{t} ‖T_{r_i}‖ for some t > 1, then d(T_r) > Σ_{i=1}^{t} d(T_{r_i}).
Proof By assumption and since r > r_i for each i,

\binom{r+1}{2}(r + 2) > Σ_{i=1}^{t} \binom{r_i + 1}{2}(r_i + 2).

It follows by Lemma 5.5.5 that d(T_r) > Σ_{i=1}^{t} d(T_{r_i}). □
The next two lemmas can be proved similarly.

Lemma 5.5.7 d(S_r) = r(n + 3) − 1.

Lemma 5.5.8 If ‖S_r‖ = Σ_{i=1}^{t} ‖S_{r_i}‖ for some t > 1, then d(S_r) > Σ_{i=1}^{t} d(S_{r_i}).
Theorem 5.11 Suppose that

σ = \binom{n+1}{2} + k,  where ⌊(n−1)/2⌋ < k ≤ 2⌊(n−1)/2⌋.

Then

N_n(σ) = \binom{n+2}{3} + k(n + 3) − ⌊(n−1)/2⌋.
Proof Assume that A ∈ U_n(σ) with ‖A²‖ = N_n(σ). By Proposition 5.5.2, we may assume that L′_n, the matrix in B_n whose 1-entries lie on or below the main diagonal, is a submatrix of A. By Lemma 5.5.8, the minimum of ‖A²‖ can be obtained by putting k − ⌊(n−1)/2⌋ copies of S₂ and 2⌊(n−1)/2⌋ − k copies of S₁ above the main diagonal of L′_n. Therefore,

N_n(σ) = ‖(L′_n)²‖ + Σ_{i=1}^{k−⌊(n−1)/2⌋} d(S₂) + Σ_{i=1}^{2⌊(n−1)/2⌋−k} d(S₁).

This completes the proof. □
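The bookkeeping at the end of this proof is elementary algebra: with d(S_r) = r(n+3) − 1 from Lemma 5.5.7 and m = ⌊(n−1)/2⌋, the contributions of the S₂'s and the S₁'s combine to k(n+3) − m. A short script confirming the identity over a range of parameters (variable names are ours):

```python
def d(r, n):
    # d(S_r) = r(n+3) - 1, as in Lemma 5.5.7
    return r * (n + 3) - 1

for n in range(3, 30):
    m = (n - 1) // 2
    for k in range(m + 1, 2 * m + 1):   # the range of k in Theorem 5.11
        total = (k - m) * d(2, n) + (2 * m - k) * d(1, n)
        assert total == k * (n + 3) - m
```

Expanding by hand: (k − m)(2n + 5) + (2m − k)(n + 2) = k(n + 3) − m, which is exactly the correction term in the closed formula.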
5.6 Perfect Elimination Scheme
This section is devoted to perfect elimination schemes of matrices, and to how graph theory techniques can be applied in their study.

Definition 5.6.1 Let A = (a_{ij}) ∈ M_n be a nonsingular matrix. The following process converting A into I is called the Gauss elimination process. For each t = 1, 2, ⋯, n:
(1) select a nonzero entry at some (i_t, j_t)-cell (called a pivot);
(2) apply row and column operations to convert this entry into 1, and to convert the other entries in Row i_t and Column j_t into zero.
The resulting matrix can then be converted to the identity matrix I by row permutations only. The sequence (i₁, j₁), (i₂, j₂), ⋯, (i_n, j_n) is called a pivot sequence. A perfect elimination scheme is a pivot sequence such that no zero entry of A becomes a nonzero entry in the Gauss elimination process.

Example 5.6.1 For the matrix
[i i t
J
The pivot sequence (1, 1), (2, 2), (3, 3), (4, 4) is not a perfect elimination scheme, since the 0-entry at (3, 2) becomes a nonzero entry in the process. On the other hand, the pivot sequence (4, 4), (3, 3), (2, 2), (1, 1) is a perfect elimination scheme.

Example 5.6.2 There exist matrices that do not have a perfect elimination scheme. Consider
A =
1 1 0 0 1
1 1 1 0 0
0 1 1 1 0
0 0 1 1 1
1 0 0 1 1
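Whether a pivot sequence is a perfect elimination scheme can be tested mechanically: carry out the elimination and stop as soon as a cell that is zero in A fills in. A sketch in exact rational arithmetic (0-indexed pivots; the function name is ours), illustrated on a symmetric 0-1 pattern whose graph is a chordless 5-cycle:

```python
from fractions import Fraction

def is_perfect_elimination(A, pivots):
    """Run the Gauss elimination process of Definition 5.6.1 with the
    given pivot sequence (0-indexed); return False as soon as a cell
    that is zero in A fills in, or a chosen pivot is zero."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    zero_cells = {(i, j) for i in range(n) for j in range(n) if A[i][j] == 0}
    done_rows, done_cols = set(), set()
    for (p, q) in pivots:
        if M[p][q] == 0:
            return False                    # not a valid pivot
        piv = M[p][q]
        for i in range(n):
            if i == p or i in done_rows:
                continue
            factor = M[i][q] / piv
            if factor == 0:
                continue
            for j in range(n):
                if j in done_cols:
                    continue
                M[i][j] -= factor * M[p][j]
                if (i, j) in zero_cells and M[i][j] != 0:
                    return False            # fill-in at a zero cell of A
        done_rows.add(p)
        done_cols.add(q)
    return True

# The digraph of this pattern is a chordless 5-cycle (plus loops);
# the diagonal pivot order hits fill-in immediately, at cell (1, 4).
C5 = [[1, 1, 0, 0, 1],
      [1, 1, 1, 0, 0],
      [0, 1, 1, 1, 0],
      [0, 0, 1, 1, 1],
      [1, 0, 0, 1, 1]]
assert not is_perfect_elimination(C5, [(i, i) for i in range(5)])
```

Using `Fraction` avoids floating-point cancellation, so "becomes nonzero" is decided exactly.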
Note that if, for a given (i, j), there exist s and t such that a_{is} ≠ 0, a_{tj} ≠ 0 and a_{ts} ≠ 0, then the 0-entry at (i, j) may become nonzero when (t, s) is used as a pivot.

Since z_i ≠ 0, among the vertices in N⁺(v_i) there must be at least one v_j ∈ N⁺(v_i) such that z_j ≠ 0. Let v_{i₁} = v_i and let v_{i₂} ∈ N⁺(v_{i₁}) be such that for any v ∈ N⁺(v_{i₁}), v ≤ v_{i₂}. Note that z_{i₂} ≠ 0. Inductively, assume that a walk v_{i₁} v_{i₂} ⋯ v_{i_{p−1}} v_{i_p} satisfying (B) and (C) in Claim 1 has been constructed. Since z_{i_p} ≠ 0, we can repeat the above to find v_{i_{p+1}} ∈ N⁺(v_{i_p}) such that for any v ∈ N⁺(v_{i_p}), v ≤ v_{i_{p+1}}. Since D(A) has only finitely many vertices, a closed walk satisfying (A), (B) and (C) of Claim 1 must exist. This proves Claim 2, as well as the theorem. □

Theorem 5.8.6 (Brualdi, [23]) Let A = (a_{ij}) be an n × n irreducible matrix and let λ be a complex number. If λ lies in the boundary of the region (5.26), then for any W ∈ C(A), λ also lies in the boundary of each region
{z ∈ ℂ : ∏_{v_i ∈ W} |z − a_{ii}| ≤ ∏_{v_i ∈ W} R_i(A)}.  (5.31)
Proof Note that an irreducible matrix is also weakly irreducible, so all the arguments in the proof of the previous theorem remain valid here; we use the same notation as in that proof. Note also that Claim 2 in the proof of Theorem 5.8.5 remains valid here.
Since R_i > 0 for each i, if λ = a_{ii} for some i, then λ cannot be in the boundary of (5.26). Hence λ ≠ a_{ii}, 1 ≤ i ≤ n. Fix a W ∈ C(A) that satisfies (A), (B) and (C) of Claim 1 in the proof of Theorem 5.8.5. Since λ lies in the boundary of the region (5.26), for each v_i ∈ V(W) we have |λ − a_{ii}| ≤ R_i, and so

∏_{v_i ∈ W} |λ − a_{ii}| ≤ ∏_{v_i ∈ W} R_i(A).  (5.32)
By (5.30), we must have equality in (5.32), and so λ lies in the boundary of (5.31) for this W. Note that when equality holds in (5.32), we must have, for any j = 1, 2, ⋯, k, that equalities hold everywhere in (5.27). Therefore, for any closed walk in C(A) satisfying (A), (B) and (C) of Claim 1 in the proof of Theorem 5.8.5, for any v_{i_j} ∈ V(W) and for any v_m ∈ N⁺(v_{i_j}),

|z_m| is a constant independent of the choice of v_m ∈ N⁺(v_{i_j}).  (5.33)

Define K = {v_j ∈ V(D(A)) : |z_m| = c_j, a constant, for any v_m ∈ N⁺(v_j)}.

By Claim 2 in the proof of Theorem 5.8.5, K ≠ ∅. If we can show that K = V(D(A)), then every closed walk W ∈ C(A) satisfies (A), (B) and (C) of Claim 1 in the proof of Theorem 5.8.5, and so λ will be on the boundary of the region (5.31) for each such W. Suppose, by contradiction, that some v_q ∈ V(D(A)) − K. Since D(A) is strongly connected, D(A) has a shortest directed walk from a vertex in K to v_q. Since it is shortest, the first arc of this walk goes from a vertex in K to a vertex v_f not in K. Adopting the same preorder of D(A) as in the proof of Claim 2 of Theorem 5.8.5, we can similarly construct a walk by letting v_{i₁} = v_f and choosing v_{i₂} from N⁺(v_{i₁}) so that for any v ∈ N⁺(v_{i₁}), v ≤ v_{i₂}. Since D(A) is strong, N⁺(v_i) ≠ ∅ for every v_i ∈ V(D(A)). Once again, such a walk satisfies (B) and (C) of Claim 1 in the proof of Theorem 5.8.5. In each step, to find the next v_{i_j}, we choose v_{i_j} ∉ K whenever possible; if we must choose v_{i_j} ∈ K, then we choose it so that v_{i_j} lies on a shortest directed walk from a vertex in K to a vertex not in K. Since |V(D(A)) − K| is finite, a vertex v not in K appears more than once in this walk, and so a closed walk W′ ∈ C(A) is found satisfying (A), (B) and (C) of Claim 1 in the proof of Theorem 5.8.5 and containing v. But then, by (5.33), every vertex in W′ must be in K, contrary to the assumption that v ∉ K. Hence V(D(A)) = K. This completes the proof. □
Corollary 5.8.6 Let A = (a_{ij}) be an n × n matrix. Then A is nonsingular if one of the following holds.
(i) A is weakly irreducible and

∏_{v_i ∈ W} |a_{ii}| > ∏_{v_i ∈ W} R_i,  for any W ∈ C(A).

(ii) A is irreducible and

∏_{v_i ∈ W} |a_{ii}| ≥ ∏_{v_i ∈ W} R_i,  for any W ∈ C(A),

and strict inequality holds for at least one W ∈ C(A).

Proof In either case, by Theorem 5.8.5 or 5.8.6, the region (5.26) does not contain 0; and when A is irreducible, the boundary of the region (5.26) does not contain 0 either. □
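Condition (i) of Corollary 5.8.6 can be evaluated directly for small matrices: enumerate the circuits of D(A) and compare diagonal products with products of the deleted row sums. A brute-force sketch — we take C(A) to be the set of simple directed cycles of D(A) and R_i = Σ_{j≠i} |a_{ij}|, both readings being our assumptions from the surrounding text, and all function names being ours:

```python
import numpy as np
from itertools import permutations

def deleted_row_sums(A):
    # R_i = sum of |a_ij| over j != i
    return np.abs(A).sum(axis=1) - np.abs(np.diag(A))

def circuits(A, tol=1e-12):
    """Simple directed cycles of D(A), where (i, j) with i != j is an
    arc iff a_ij != 0.  Brute force; fine for small matrices."""
    n = A.shape[0]
    for r in range(2, n + 1):
        for p in permutations(range(n), r):
            if p[0] != min(p):
                continue                    # one rotation per cycle
            arcs = zip(p, p[1:] + (p[0],))
            if all(abs(A[i, j]) > tol for i, j in arcs):
                yield p

def brualdi_condition(A):
    """Sufficient condition (i) of Corollary 5.8.6: the product of
    |a_ii| strictly exceeds the product of R_i along every circuit."""
    A = np.asarray(A, dtype=complex)
    R = deleted_row_sums(A)
    return all(
        np.prod([abs(A[i, i]) for i in W]) > np.prod([R[i] for i in W])
        for W in circuits(A)
    )

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0]])
assert brualdi_condition(A)       # one circuit (0,1,2): 8 > 1
assert abs(np.linalg.det(A)) > 0  # and A is indeed nonsingular
```

Note that the corollary also requires A to be weakly irreducible; the sketch only evaluates the circuit inequalities, which is the computational part of the test.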
5.9 M-matrices

In this section, we will once again restrict our discussion to matrices over the real numbers, and in particular, to nonnegative matrices. Throughout this section, for an integer p > 0, denote ⟨p⟩ = {1, 2, ⋯, p}. For the convenience of discussion, a matrix A ∈ M_n is often written in the form

    [ A_{11}  A_{12}  ⋯  A_{1q} ]
A = [ A_{21}  A_{22}  ⋯  A_{2q} ]          (5.34)
    [   ⋮       ⋮            ⋮   ]
    [ A_{p1}  A_{p2}  ⋯  A_{pq} ]

where each block A_{ij} ∈ M_{m_i, n_j} and where m₁ + m₂ + ⋯ + m_p = n = n₁ + n₂ + ⋯ + n_q. In this case, we write A = (A_{ij}), i = 1, 2, ⋯, p and j = 1, 2, ⋯, q. A vector x = (x₁ᵀ, x₂ᵀ, ⋯, x_pᵀ)ᵀ is said to agree with the blocks of the matrix (5.34) if x_i is an m_i-dimensional vector, 1 ≤ i ≤ p. When x = (x₁ᵀ, x₂ᵀ, ⋯, x_pᵀ)ᵀ, x_i is also called the ith component of x, for convenience.

Definition 5.9.1 Recall that if A ≥ 0 and A ∈ M_n, then A is permutation similar to its
Frobenius Standard form (Theorem 2.2.1):

    [ A_{11}      0        ⋯      0           0           ⋯       0    ]
    [   0       A_{22}     ⋯      0           0           ⋯       0    ]
    [   ⋮                  ⋱                                           ]
B = [   0         0        ⋯    A_{gg}        0           ⋯       0    ]      (5.35)
    [ A_{g+1,1}  ⋯             A_{g+1,g}  A_{g+1,g+1}     ⋯       0    ]
    [ A_{g+2,1}  ⋯             A_{g+2,g}  A_{g+2,g+1}  A_{g+2,g+2} ⋯ 0 ]
    [   ⋮                                                              ]
    [ A_{k,1}    ⋯              A_{k,g}    A_{k,g+1}      ⋯    A_{kk}  ]
By Theorem 2.1.1, each irreducible diagonal block A_{ii}, 1 ≤ i ≤ k, corresponds to a strong component of D(A). Throughout this section, let ρ_i = ρ(A_{ii}), the spectral radius of A_{ii}, for each i with 1 ≤ i ≤ k. Label the strong components of D(A) (diagonal blocks in (5.35)) with elements in ⟨k⟩, and define a partial order ⪯ on ⟨k⟩ as follows: for i, j ∈ ⟨k⟩, i ⪯ j if and only if in D(A) there is a directed path from a vertex in the jth strong component to a vertex in the ith strong component; and i ≺ j means i ⪯ j but i ≠ j. The partial order ⪯ yields a digraph, called the reduced graph R(A) of A, which has vertex set ⟨k⟩, where (i, j) ∈ E(R(A)) if and only if i ≺ j. Note that by the definition of strong components, R(A) has no directed cycles. If a matrix A has the form (5.35), then denote ρ_i = ρ(A_{ii}), 1 ≤ i ≤ k. Let M = λI − A be an M-matrix. Then the ith vertex in R(M) (that is, the vertex corresponding to A_{ii} in R(A)) is a singular vertex if λ = ρ_i. The singular vertices of the reduced graph R(M) are also called the singular vertices of the matrix M. For a matrix of the form (5.34), define

γ_{ij} = 1 if i = j or A_{ij} ≠ 0, and γ_{ij} = 0 otherwise.

Also, let

R_{ij} = max γ_{i h₁} γ_{h₁ h₂} ⋯ γ_{h_t j},

where the maximum is taken over all possible sequences {i, h₁, ⋯, h_t, j}.

Proposition 5.9.1 With the notation above, each of the following holds.
(i) If A is in a Frobenius Standard form (5.35), then with the partial order ⪯, we can equivalently write

R_{ij} = 1 if j ⪯ i, and R_{ij} = 0 otherwise.
(ii) For 1 ≤ l ≤ p,

R_{il} R_{lj} ≤ R_{ij} ≤ Σ_{h=1}^{p} R_{ih} R_{hj}.

(iii) If i ≠ j, then for each h,

γ_{ih} R_{hj} ≤ R_{ij} ≤ Σ_{h=1, h≠i}^{p} γ_{ih} R_{hj}.  (5.36)
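The inequalities of Proposition 5.9.1 can be sanity-checked by computing R_{ij} as the reflexive-transitive closure of γ (Warshall's algorithm) on random block patterns. A sketch (names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def closure(gamma):
    """R_ij = 1 iff some sequence i, h1, ..., ht, j has every
    consecutive gamma entry equal to 1 (Warshall's algorithm)."""
    R = gamma.astype(bool)
    for h in range(R.shape[0]):
        R = R | np.outer(R[:, h], R[h, :])
    return R.astype(int)

for _ in range(100):
    p = 6
    gamma = (rng.random((p, p)) < 0.3).astype(int)
    np.fill_diagonal(gamma, 1)              # gamma_ii = 1 by definition
    R = closure(gamma)
    for i in range(p):
        for j in range(p):
            for l in range(p):
                assert R[i, l] * R[l, j] <= R[i, j]                 # (ii), lower
            assert R[i, j] <= sum(R[i, h] * R[h, j] for h in range(p))  # (ii), upper
            if i != j:
                for h in range(p):
                    assert gamma[i, h] * R[h, j] <= R[i, j]         # (5.36), lower
                assert R[i, j] <= sum(gamma[i, h] * R[h, j]
                                      for h in range(p) if h != i)  # (5.36), upper
```

The lower bounds express transitivity of reachability; the upper bounds hold because any path from i to j ≠ i begins with some first step h ≠ i.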
Example 5.9.1 Let
I,
0 0 0 0 a 1 1 1 0 0 b c 1 1
A=
where a, b, c are nonnegative real numbers. Then R(A) is the graph in Figure 5.9.1.
Figure 5.9.1 The reduced graph R(A) on the vertices 1, 2, 3, 4, 5; singular vertices are drawn as open circles and nonsingular vertices as filled circles.

Definition 5.9.2 A matrix B = (b_{ij}) ∈ M_n is an M-matrix if each of the following holds.
(5.9.2A) b_{ii} ≥ 0, 1 ≤ i ≤ n.
(5.9.2B) b_{ij} ≤ 0, for i ≠ j and 1 ≤ i, j ≤ n.
(5.9.2C) If λ ≠ 0 is an eigenvalue of B, then λ has a positive real part.

Proposition 5.9.2 summarizes some observations made in [229] and [216].

Proposition 5.9.2 (Schneider, [229], and Richman and Schneider, [216]) Each of the following holds.
(i) A is an M-matrix if and only if there exist a nonnegative matrix P ≥ 0 and a number ρ ≥ ρ(P) such that A = ρI − P; and A is a singular M-matrix if and only if A = ρ(P)I − P.
(ii) If A = (A_{ij}) is an M-matrix in the Frobenius standard form (5.35), then the diagonal blocks A_{ii}, 1 ≤ i ≤ k, are irreducible M-matrices.
(iii) The negatives of the blocks below the main diagonal, −A_{ij}, 1 ≤ j < i ≤ k, are nonnegative. In other words, A_{ij} ≤ 0.

Lemma 5.9.1 Let A = (A_{ij}), i, j = 1, 2, ⋯, k, be an M-matrix in a Frobenius Standard form (5.35), and let x = (x₁ᵀ, x₂ᵀ, ⋯, x_kᵀ)ᵀ agree with the blocks of A. For an a ∈ ⟨k⟩ and for an h > a, let
x_i = 0 if R_{ia} = 0, and x_i ≫ 0 if R_{ia} = 1,  i = 1, 2, ⋯, h − 1.  (5.37)

If

y₁ = 0 and y_i = −Σ_{j=1}^{i−1} A_{ij} x_j,  i = 2, ⋯, k,  (5.38)

then

y_h = 0 if R_{ha} = 0, and y_h > 0 if R_{ha} = 1.  (5.39)
Proof Since −A_{hj} x_j ≥ 0 for each j, we have y_h ≥ 0, and y_h = 0 if and only if A_{hj} x_j = 0 for j = 1, 2, ⋯, h − 1. By (5.37), y_h = 0 if and only if

γ_{hj} R_{ja} = 0,  j = 1, 2, ⋯, h − 1.  (5.40)

Since γ_{hj} = 0 whenever h < j, we also have

Σ_{j=1}^{h−1} γ_{hj} R_{ja} = Σ_{j=1, j≠h}^{k} γ_{hj} R_{ja}.

As h ≠ a, it follows by (5.36) that y_h = 0 if and only if R_{ha} = 0. This proves (5.39). □
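Proposition 5.9.2(i) also gives a convenient recipe for producing examples: take any P ≥ 0 and set A = ρI − P with ρ ≥ ρ(P); the choice ρ = ρ(P) yields a singular M-matrix. A numpy sketch that generates such matrices and checks the three conditions of Definition 5.9.2 (the function name is ours, and the checks are numerical, with a small tolerance):

```python
import numpy as np

rng = np.random.default_rng(3)

def is_m_matrix(B, tol=1e-9):
    """Numerically check Definition 5.9.2: nonnegative diagonal,
    nonpositive off-diagonal entries, and every nonzero eigenvalue
    with positive real part."""
    B = np.asarray(B, dtype=float)
    if (np.diag(B) < -tol).any():
        return False                        # (5.9.2A)
    off = B - np.diag(np.diag(B))
    if (off > tol).any():
        return False                        # (5.9.2B)
    return all(abs(z) <= tol or z.real > tol
               for z in np.linalg.eigvals(B))   # (5.9.2C)

for _ in range(50):
    P = rng.random((5, 5))                      # P >= 0
    rho = np.abs(np.linalg.eigvals(P)).max()    # rho(P)
    assert is_m_matrix(rho * np.eye(5) - P)          # singular M-matrix
    assert is_m_matrix((rho + 1.0) * np.eye(5) - P)  # nonsingular M-matrix
```

The diagonal of ρI − P is nonnegative because ρ(P) ≥ p_{ii} for every i when P ≥ 0, and the eigenvalues of ρI − P are ρ − λ with λ an eigenvalue of P, so their real parts are nonnegative and vanish only at λ = ρ(P).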
a_{ij} ≠ 0 for some 1 ≤ i, j ≤ m. Let A(i|j) denote the matrix obtained from A by deleting the row and column of A containing a_{ij}. Then

ρ_{A(i|j)} < m − 1.

By induction, A(i|j) has a O_{p₁×q₁} as a submatrix, for some p₁, q₁ with p₁ + q₁ = m and 1 ≤ p₁, q₁ ≤ m − 1. By Proposition 6.2.1(i), we may assume that

A = [ X  0 ]
    [ Z  Y ],

where X ∈ B_{p₁,p₁}, Y ∈ B_{m−p₁,m−p₁} and Z ∈ B_{m−p₁,p₁}. Since ρ_A < m, either ρ_X
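The induction step above is the heart of the Frobenius–König theorem: an m × m 0-1 matrix has term rank less than m if and only if it contains a p × q all-zero submatrix with p + q = m + 1. For small matrices both sides can be checked by brute force. A sketch — we read ρ_A here as the term rank, which is our assumption from the surrounding text, and the function names are ours:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

def term_rank(A):
    """Maximum number of nonzero entries of A, no two in the same row
    or column (maximum matching in the bipartite graph of A)."""
    m, n = A.shape
    match = [-1] * n                      # match[j] = row matched to column j

    def augment(i, seen):
        for j in range(n):
            if A[i, j] and not seen[j]:
                seen[j] = True
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    return sum(augment(i, [False] * n) for i in range(m))

def has_zero_submatrix(A, p, q):
    m, n = A.shape
    return any(
        not A[np.ix_(rows, cols)].any()
        for rows in combinations(range(m), p)
        for cols in combinations(range(n), q)
    )

# Frobenius-Konig: term rank < m  iff  some p x q zero submatrix
# exists with p + q = m + 1.
for _ in range(100):
    m = 4
    A = (rng.random((m, m)) < 0.5).astype(int)
    deficient = term_rank(A) < m
    konig = any(has_zero_submatrix(A, p, m + 1 - p) for p in range(1, m + 1))
    assert deficient == konig
```

The matching routine is the standard Hungarian augmenting-path search; the zero-submatrix search is exponential but harmless at this size.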