CAMBRIDGE TRACTS IN MATHEMATICS
GENERAL EDITORS
H. BASS, J. F. C. KINGMAN, F. SMITHIES, J. A. TODD & C. T. C. WALL

67. Algebraic graph theory

NORMAN BIGGS
Lecturer in Mathematics, Royal Holloway College, University of London

Algebraic Graph Theory

CAMBRIDGE UNIVERSITY PRESS

Published by the Syndics of the Cambridge University Press
Bentley House, 200 Euston Road, London NW1 2DB
American Branch: 32 East 57th Street, New York, N.Y. 10022
© Cambridge University Press 1974
Library of Congress Catalogue Card Number: 73-86042
ISBN: 0 521 20335 X
First published 1974
Printed in Great Britain at the University Printing House, Cambridge (Brooke Crutchley, University Printer)
Contents

Preface
1  Introduction

PART ONE - LINEAR ALGEBRA IN GRAPH THEORY
2  The spectrum of a graph
3  Regular graphs and line graphs
4  The homology of graphs
5  Spanning trees and associated structures
6  Complexity
7  Determinant expansions

PART TWO - COLOURING PROBLEMS
8  Vertex-colourings and the spectrum
9  The chromatic polynomial
10  Edge-subgraph expansions
11  The logarithmic transformation
12  The vertex-subgraph expansion
13  The Tutte polynomial
14  The chromatic polynomial and spanning trees

PART THREE - SYMMETRY AND REGULARITY OF GRAPHS
15  General properties of graph automorphisms
16  Vertex-transitive graphs
17  Symmetric graphs
18  Trivalent symmetric graphs
19  The covering-graph construction
20  Distance-transitive graphs
21  The feasibility of intersection arrays
22  Primitivity and imprimitivity
23  Minimal regular graphs with given girth

Bibliography
Index
Preface
It is a pleasure for me to acknowledge the help which I have received during the preparation of this book. A preliminary draft of the manuscript was read by Dr R. J. Wilson, and his detailed comments resulted in substantial changes and improvements. I was then fortunate to be able to rely upon the expert assistance of my wife for the production of a typescript. Ideas and helpful criticisms were offered by several friends and colleagues, among them G. de Barra, R. M. Damerell, A. D. Gardiner, R. K. Guy, P. McMullen and J. W. Moon. The general editor of the Cambridge Mathematical Tracts, Professor C. T. C. Wall, was swift and perceptive in his appraisal, and his comments were much appreciated. The staff of the Cambridge University Press maintained their usual high standard of courtesy and efficiency throughout the process of publication. During the months January-April 1973, when the final stages of the writing were completed, I held a visiting appointment at the University of Waterloo, and my thanks are due to Professor W. T. Tutte for arranging this. In addition, I owe a mathematical debt to Professor Tutte, for he is the author of the two results, Theorems 13.9 and 18.6, which I regard as the most important in the book. I venture the opinion that, were it not for his pioneering work, these results would still be unknown to this day.

NORMAN BIGGS
Waterloo, Canada March 1973
1. Introduction
This book is concerned with the use of algebraic techniques in the study of graphs. We aim to translate properties of graphs into algebraic properties and then, using the results and methods of algebra, to deduce theorems about graphs. The exposition which we shall give is not part of the modern functorial approach to topology, despite the claims of those who hold that, since graphs are one-dimensional spaces, graph theory is merely one-dimensional topology. By that definition, algebraic graph theory would consist only of the homology of 1-complexes. But the problems dealt with in graph theory are more delicate than those which form the substance of algebraic topology, and even if these problems can be generalized to dimensions greater than one, there is usually no hope of a general solution at the present time. Consequently, the algebra used in algebraic graph theory is largely unrelated to the subject which has come to be known as homological algebra.

This book is not an introduction to graph theory. It would be to the reader's advantage if he were familiar with the basic concepts of the subject, for example, as they are set out in the book by R. J. Wilson entitled Introduction to graph theory. However, for the convenience of those readers who do not have this background, we give brief explanations of important standard terms. These explanations are usually accompanied by a reference to Wilson's book (in the form [W, p. 99]), where further details may be found. In the same way, some concepts from permutation-group theory are accompanied by a reference [B, p. 99] to the author's book Finite groups of automorphisms. Both these books are described fully at the end of this chapter. A few other books are also referred to for results which may be unfamiliar to some readers. In such cases, the result required is necessary for an understanding of the topic under discussion, so that the reference is given in full, enclosed in square brackets,
where it is needed. Other references, of a supplementary nature, are given in parentheses in the form (Smith 1971) or Smith (1971). In such cases, the full reference may be found in the bibliography at the end of the book.

The tract is in three parts, each of which is further subdivided into a number of short chapters. Within each chapter, the major definitions and results are labelled using the decimal system.

The first part deals with the applications of linear algebra and matrix theory to the study of graphs. We begin by introducing the adjacency matrix of a graph; this matrix completely determines the graph, and its spectral properties are shown to be related to properties of the graph. For example, if a graph is regular, then the eigenvalues of its adjacency matrix are bounded in absolute value by the valency of the graph. In the case of a line graph, there is a strong lower bound for the eigenvalues. Another matrix which completely describes a graph is the incidence matrix of the graph. This matrix represents a linear mapping which, in modern language, determines the homology of the graph; however, the sophistication of this language obscures the underlying simplicity of the situation. The problem of choosing a basis for the homology of a graph is just that of finding a fundamental system of circuits, and we solve this problem by using a spanning tree in the graph. At the same time we study the cutsets of the graph. These ideas are then applied to the systematic solution of network equations, a topic which supplied the stimulus for the original theoretical development. We then investigate various formulae for the number of spanning trees in a graph, and apply these formulae to several well-known families of graphs. The first part of the book ends with results which are derived from the expansion of certain determinants, and which illuminate the relationship between a graph and the characteristic polynomial of its adjacency matrix.

The second part of the book deals with the problem of colouring the vertices of a graph in such a way that adjacent vertices have different colours. The least number of colours for which such a colouring is possible is called the chromatic number of the graph, and we begin by investigating some connections between this
number and the eigenvalues of the adjacency matrix of the graph. The algebraic technique for counting the colourings of a graph is founded on a polynomial known as the chromatic polynomial. We first discuss some simple ways of calculating this polynomial, and show how these can be applied in several important cases. Many important properties of the chromatic polynomial of a graph stem from its connection with the family of subgraphs of the graph, and we show how the chromatic polynomial can be expanded in terms of subgraphs. From our first (additive) expansion another (multiplicative) expansion can be derived, and the latter depends upon a very restricted class of subgraphs. This leads to efficient methods for approximating the chromatic polynomials of large graphs. A completely different kind of expansion relates the chromatic polynomial to the spanning trees of a graph; this expansion has several remarkable features and leads to new ways of looking at the colouring problem, and some new properties of chromatic polynomials.

The third part of the book is concerned with symmetry and regularity properties. A symmetry property of a graph is related to the existence of automorphisms, that is, permutations of the vertices which preserve adjacency. A regularity property is defined in purely numerical terms. Consequently, symmetry properties induce regularity properties, but the converse is not necessarily true. We first study the elementary properties of automorphisms, and explain the connection between the automorphisms of a graph and the eigenvalues of its adjacency matrix. We then introduce a hierarchy of symmetry conditions which can be imposed on a graph, and proceed to investigate their consequences. The condition that all vertices be alike (under the action of the group of automorphisms) turns out to be rather a weak one, but a slight strengthening of it leads to highly non-trivial conclusions. In fact, under certain conditions, there is an absolute bound to the level of symmetry which a graph can possess. A new kind of symmetry property, called distance-transitivity, and the consequent regularity property, called distance-
regularity, are then introduced. We return to the methods of linear algebra to derive strong constraints upon the existence of graphs with these properties. Finally, these constraints are applied to the problem of finding minimal regular graphs whose valency and girth are given.

At the end of each chapter there are some supplementary results and examples, labelled by the number of the chapter and a letter (as, for example, 9A). The reader is warned that these results are variable in difficulty and in kind. Their presence allows the inclusion of a great deal of material which would otherwise have interrupted the mainstream of the exposition, or would have had to be omitted altogether.

We end this introductory chapter by describing the few ways in which we differ from the terminology of Wilson's book. In this book, a general graph Γ consists of three things: a finite set VΓ of vertices, a finite set EΓ of edges, and an incidence relation between vertices and edges. If v is a vertex, e is an edge, and (v, e) is a pair in the incidence relation, then we say that v is incident with e, and e is incident with v. Each edge is incident with either one vertex (in which case it is a loop) or two vertices. If each edge is incident with two vertices, and no two edges are incident with the same pair of vertices, then we say that Γ is a simple graph or, briefly, a graph. In this case, EΓ can be identified with a subset of the set of (unordered) pairs of vertices, and we shall always assume that this identification has been made. We shall deal mainly with graphs (that is, simple graphs), except in Part Two, where it is sometimes essential to consider general graphs. If v and w are vertices of a graph Γ, and e = {v, w} is an edge of Γ, then we say that e joins v and w, and that v and w are the ends of e. The number of edges of which v is an end is called the valency of v.

We consider two kinds of subgraph of a general graph Γ. An edge-subgraph of Γ is constructed by taking a subset S of EΓ together with all vertices of Γ incident in Γ with some edge belonging to S. A vertex-subgraph of Γ is constructed by taking a subset U of VΓ together with all edges of Γ which are incident
in Γ only with vertices belonging to U. In both cases the incidence relation in the subgraph is inherited from the incidence relation in Γ. We shall use the notation ⟨S⟩_Γ, ⟨U⟩_Γ for these subgraphs, and usually, when the context is clear, the subscript reference to Γ will be omitted. Further new terminology and notation will be defined when it is required.

Basic references
R. J. Wilson. Introduction to graph theory (Oliver and Boyd, Edinburgh, 1972).
N. L. Biggs. Finite groups of automorphisms, London Math. Society Lecture Notes Series, No. 6 (Cambridge University Press, 1971).
PART ONE
Linear algebra in graph theory
2. The spectrum of a graph
We begin by defining a matrix which will play an important role in many parts of this book. We shall suppose that Γ is a graph whose vertex-set VΓ is the set {v_1, v_2, ..., v_n}; as explained in Chapter 1, we shall take EΓ to be a subset of the set of unordered pairs of elements of VΓ. If {v_i, v_j} is an edge, then we say that v_i and v_j are adjacent.

DEFINITION 2.1 The adjacency matrix of Γ is the n×n matrix A = A(Γ), over the complex field, whose entries a_ij are given by

    a_ij = 1 if v_i and v_j are adjacent;
    a_ij = 0 otherwise.

It follows directly from the definition that A is a real symmetric matrix, and that the trace of A is zero. Since the rows and columns of A correspond to an arbitrary labelling of the vertices of Γ, it is clear that we shall be interested primarily in those properties of the adjacency matrix which are invariant under permutations of the rows and columns. Foremost among such properties are the spectral properties of A.

Suppose that λ is an eigenvalue of A. Then, since A is real and symmetric, λ is real, and the multiplicity of λ as a root of the characteristic equation det(λI − A) = 0 is equal to the dimension of the space of eigenvectors corresponding to λ.

DEFINITION 2.2 The spectrum of a graph Γ is the set of numbers which are eigenvalues of A(Γ), together with their multiplicities as eigenvalues of A(Γ). If the distinct eigenvalues of A(Γ) are λ_0 > λ_1 > ... > λ_{s-1}, and their multiplicities are m(λ_0), m(λ_1), ..., m(λ_{s-1}), then we shall write
    Spec Γ = ( λ_0      λ_1     ...  λ_{s-1}
               m(λ_0)   m(λ_1)  ...  m(λ_{s-1}) )
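The Spec notation is easy to reproduce by machine. The short sketch below is an editorial illustration, not part of the original text; it assumes Python with numpy, and it groups the numerically computed eigenvalues of a chosen example, the circuit graph C_4, with their multiplicities.

```python
import numpy as np

def spectrum(A, decimals=8):
    """Distinct eigenvalues of a real symmetric matrix A, in decreasing order,
    paired with their multiplicities."""
    eigenvalues = np.linalg.eigvalsh(A)           # real eigenvalues, ascending
    rounded = np.round(eigenvalues, decimals)     # merge numerically equal values
    values, counts = np.unique(rounded, return_counts=True)
    return list(zip(values[::-1], counts[::-1]))  # decreasing order

# Adjacency matrix of the circuit graph C_4 (a 4-cycle), chosen as an example.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])

print(spectrum(C4))   # [(2.0, 1), (0.0, 2), (-2.0, 1)], i.e. Spec C_4
```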
For example, the complete graph K_n has n vertices, and each distinct pair of vertices is adjacent [W, p. 16]. Thus, the graph K_4 has adjacency matrix
    A = ( 0 1 1 1
          1 0 1 1
          1 1 0 1
          1 1 1 0 )
and an easy calculation shows that the spectrum of K_4 is:
    Spec K_4 = ( 3   -1
                 1    3 )

We shall often refer to the eigenvalues of A(Γ) as the eigenvalues of Γ. Also, the characteristic polynomial of A(Γ) will be denoted by χ(Γ; λ), and referred to as the characteristic polynomial of Γ. Let us suppose that the characteristic polynomial of Γ is

    χ(Γ; λ) = λ^n + c_1 λ^{n-1} + c_2 λ^{n-2} + c_3 λ^{n-3} + ... + c_n.
Then the coefficients c_i can be interpreted as sums of principal minors of A, and this leads to the following simple result.

PROPOSITION 2.3 Using the notation given above, we have:
(1) c_1 = 0;
(2) -c_2 is the number of edges of Γ;
(3) -c_3 is twice the number of triangles in Γ.

Proof For each i ∈ {1, 2, ..., n}, the number (-1)^i c_i is the sum of those principal minors of A which have i rows and columns. Thus:
(1) Since the diagonal elements of A are all zero, c_1 = 0.
(2) A principal minor with two rows and columns, and which has a non-zero entry, must be of the form

    | 0  1 |
    | 1  0 |

There is one such minor for each pair of adjacent vertices of Γ, and each has value -1. Hence (-1)^2 c_2 = -|EΓ|, giving the result.
(3) There are essentially three possibilities for non-trivial principal minors with three rows and columns:

    | 0 1 0 |     | 0 1 1 |     | 0 1 1 |
    | 1 0 0 |  ,  | 1 0 0 |  ,  | 1 0 1 |
    | 0 0 0 |     | 1 0 0 |     | 1 1 0 |
and, of these, the only non-zero one is the last (whose value is 2). This principal minor corresponds to three mutually adjacent vertices in Γ, and so we have the required description of c_3. □

These elementary results indicate that the characteristic polynomial of a graph is a typical object of the kind one considers in algebraic graph theory: it is an algebraic construction which contains graphical information. Proposition 2.3 is just a pointer, and we shall obtain a more comprehensive result on the coefficients of the characteristic polynomial in Chapter 7.

Suppose A is the adjacency matrix of a graph Γ. Then the set of polynomials in A, with complex coefficients, forms an algebra under the usual matrix operations. This algebra has finite dimension as a complex vector space.

DEFINITION 2.4 The adjacency algebra of a graph Γ is the algebra of polynomials in the adjacency matrix A = A(Γ). We shall denote the adjacency algebra of Γ by 𝒜(Γ).
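As a quick numerical check of Proposition 2.3, the following editorial sketch (Python/numpy; the graph is an arbitrary choice) compares the coefficients of the characteristic polynomial with direct counts of edges and triangles.

```python
import numpy as np

# Adjacency matrix of K_4 with one edge removed: 5 edges and 2 triangles.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]])

coeffs = np.poly(A)                  # [1, c_1, c_2, ..., c_n] of det(lambda*I - A)
c1, c2, c3 = coeffs[1], coeffs[2], coeffs[3]

edges = A.sum() / 2                  # each edge is counted twice in A
triangles = np.trace(A @ A @ A) / 6  # each triangle is counted six times in A^3

print(abs(round(c1, 9)))             # 0.0          (Proposition 2.3(1))
print(round(-c2), edges)             # 5  5.0       (Proposition 2.3(2))
print(round(-c3), 2 * triangles)     # 4  4.0       (Proposition 2.3(3))
```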
Since every element of the adjacency algebra is a linear combination of powers of A, we can obtain results about 𝒜(Γ) from a study of these powers. We define a walk of length l in Γ, joining v_i to v_j, to be a finite sequence of vertices of Γ,

    v_i = u_0, u_1, ..., u_l = v_j,

such that u_{t-1} and u_t are adjacent for 1 ≤ t ≤ l. (If u_{t-1} and u_{t+1} are distinct, 1 ≤ t ≤ l-1, then we say that the walk is a path.)

LEMMA 2.5 The number of walks of length l in Γ, joining v_i to v_j, is the entry in position (i, j) of the matrix A^l.
Proof The result is true for l = 0 (since A^0 = I) and for l = 1 (since A^1 = A is the adjacency matrix). Suppose that the result is true for l = L. From the identity

    (A^{L+1})_{ij} = Σ_{h=1}^{n} (A^L)_{ih} a_{hj}

we deduce that (A^{L+1})_{ij} is the number of walks of length L+1 joining v_i to v_j, whence the result for all l follows by induction. □

In a connected graph every pair of vertices may be joined by a walk. The number of edges traversed in the shortest walk joining v_i and v_j is called the distance in Γ between v_i and v_j, and is denoted by d(v_i, v_j) [W, p. 31]. The maximum value of the distance function in Γ is called the diameter of Γ.

PROPOSITION 2.6 Let Γ be a connected graph with adjacency algebra 𝒜(Γ) and diameter d. Then the dimension of 𝒜(Γ) is at least d+1.

Proof Let x and y be vertices of Γ such that d(x, y) = d, and suppose

    x = w_0, w_1, ..., w_d = y
is a path of length d. Then, for each i ∈ {1, 2, ..., d}, there is at least one walk of length i, but no shorter walk, joining w_0 to w_i. Consequently, A^i has a non-zero entry in a position where the corresponding entries of I, A, A^2, ..., A^{i-1} are zero; and it follows that A^i is not linearly dependent on {I, A, ..., A^{i-1}}. We deduce that {I, A, ..., A^d} is a linearly independent set in 𝒜(Γ), and the proposition is proved. □

There is a close connection between the adjacency algebra and the spectrum of Γ. If the adjacency matrix has s distinct eigenvalues, then (since it is a real symmetric matrix) its minimum polynomial has degree s, and consequently the dimension of the adjacency algebra is equal to s. We remark that there is no real loss of generality in restricting our attention to connected graphs, since the spectrum of a disconnected graph is the union of the spectra of its connected components. We have obtained the following bounds for the number of distinct eigenvalues.
COROLLARY 2.7 A connected graph with n vertices and diameter d has at least d+1, and at most n, distinct eigenvalues. □
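Lemma 2.5 and Corollary 2.7 can likewise be illustrated by a small computation. The sketch below is an editorial illustration in Python/numpy, using the path on four vertices as an assumed example; the diameter routine assumes the graph is connected.

```python
import numpy as np

# Adjacency matrix of the path v_1 - v_2 - v_3 - v_4 (diameter d = 3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
n = A.shape[0]

# Lemma 2.5: (A^l)_{ij} is the number of walks of length l joining v_i to v_j.
A3 = np.linalg.matrix_power(A, 3)
print(A3[0, 3])          # 1 -- the single walk of length 3 from v_1 to v_4

def diameter(adj):
    """Largest distance between two vertices; assumes the graph is connected."""
    m = adj.shape[0]
    seen = np.eye(m, dtype=bool)    # pairs already joined by some walk
    power = np.eye(m, dtype=int)
    d = 0
    while not seen.all():
        power = power @ adj
        seen |= power > 0
        d += 1
    return d

d = diameter(A)
distinct = len(np.unique(np.round(np.linalg.eigvalsh(A), 8)))
print(d + 1 <= distinct <= n)   # True (Corollary 2.7): here d + 1 = distinct = n = 4
```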
One of the major topics of the last part of this book is the study of graphs which have the minimal number of distinct eigenvalues. These graphs have startling regularity properties.
2A A criterion for connectedness A graph Γ with n vertices is connected if and only if (A + I)^{n-1} has no zero entries, where A is the adjacency matrix of Γ.

2B Cospectral graphs Two non-isomorphic graphs are said to be cospectral if they have the same eigenvalues with the same multiplicities. Many examples of this phenomenon are cited in the survey by Cvetkovic (1971). In particular, there are two cospectral connected graphs (shown in Fig. 1) with 6 vertices; their characteristic polynomial is λ^6 - 7λ^4 - 4λ^3 + 7λ^2 + 4λ - 1. (See also Chapter 7.)
Fig. 1
2C The walk generating function Let Γ be a graph with vertex-set {v_1, v_2, ..., v_n}, and let v_ij(r) denote the number of walks of length r in Γ joining v_i to v_j. If we write N for the matrix whose entries are

    n_ij = Σ_{r≥0} v_ij(r) t^r,

then N = (I - tA)^{-1}, where A is the adjacency matrix of Γ.

2D A bound for the eigenvalues Let Γ be a graph with n vertices and m edges, and suppose the eigenvalues of Γ are λ_1 ≥ λ_2 ≥ ... ≥ λ_n. Then Σλ_i = 0 and Σλ_i^2 = 2m. Applying the Cauchy-Schwarz inequality to (λ_2, ..., λ_n) and (1, 1, ..., 1) leads to the bound

    λ_1 ≤ (2m(n-1)/n)^{1/2}.
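The bound in 2D is easily tested by machine. The following editorial sketch (Python/numpy, with a randomly generated graph as the assumed example) checks the two identities and the bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random simple graph on n = 8 vertices.
n = 8
upper = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = upper + upper.T

m = int(A.sum() // 2)                          # number of edges
eig = np.sort(np.linalg.eigvalsh(A))[::-1]     # lambda_1 >= ... >= lambda_n

print(abs(eig.sum()) < 1e-9)                   # sum of eigenvalues = trace(A) = 0
print(abs((eig**2).sum() - 2*m) < 1e-9)        # sum of squares = 2m
print(eig[0] <= np.sqrt(2*m*(n-1)/n) + 1e-9)   # the bound of 2D
```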
3. Regular graphs and line graphs
In this chapter we discuss graphs which possess some kinds of combinatorial regularity, and whose spectra, in consequence, have distinctive features. A graph is said to be regular of valency k (or k-valent) if each of its vertices has valency k. This is the most obvious kind of combinatorial regularity, and it has interesting consequences.

PROPOSITION 3.1 Let Γ be a regular graph of valency k. Then:
(1) k is an eigenvalue of Γ.
(2) If Γ is connected, then the multiplicity of k is one.
(3) For any eigenvalue λ of Γ, we have |λ| ≤ k.

Proof (1) Let u = [1, 1, ..., 1]^t; then if A is the adjacency matrix of Γ we have Au = ku, so that k is an eigenvalue of Γ.
(2) Let x = [x_1, x_2, ..., x_n]^t denote any non-zero vector for which Ax = kx, and suppose x_j is an entry of x having the largest absolute value. Since (Ax)_j = kx_j, we have

    Σ' x_i = k x_j,

where the summation is over those k vertices v_i which are adjacent to v_j. By the maximal property of x_j, it follows that x_i = x_j for all these vertices. If Γ is connected we may proceed successively in this way, eventually showing that all entries of x are equal. Thus x is a multiple of u, and the space of eigenvectors associated with the eigenvalue k has dimension one.
(3) Suppose that Ay = λy, y ≠ 0, and let y_j denote an entry of y which is largest in absolute value. By the same argument as in (2), we have Σ' y_i = λ y_j, and so

    |λ| |y_j| = |Σ' y_i| ≤ Σ' |y_i| ≤ k |y_j|.

Thus |λ| ≤ k, as required. □
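The three statements of Proposition 3.1 can be observed in a small example. The sketch below is an editorial illustration (Python/numpy); the 3-valent graph K_{3,3} is an arbitrary choice.

```python
import numpy as np

# Adjacency matrix of the complete bipartite graph K_{3,3}, regular of valency 3.
J3 = np.ones((3, 3), dtype=int)
Z3 = np.zeros((3, 3), dtype=int)
A = np.block([[Z3, J3], [J3, Z3]])
k = int(A.sum(axis=1)[0])                 # the valency, here 3

eig = np.round(np.linalg.eigvalsh(A), 8)
print(k in eig)                           # (1) k is an eigenvalue
print(np.count_nonzero(eig == k))         # (2) multiplicity one, since the graph is connected
print(bool(np.all(np.abs(eig) <= k)))     # (3) every eigenvalue satisfies |lambda| <= k
```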
The adjacency algebra of a regular connected graph also has a distinctive property, related to the results of Proposition 3.1. Let J denote the matrix each of whose entries is +1. Then, if A is the adjacency matrix of a regular graph of valency k, we have AJ = JA = kJ. This is the point of departure for the following result.

PROPOSITION 3.2 (Hoffman 1963) The matrix J belongs to the adjacency algebra 𝒜(Γ) if and only if Γ is a regular connected graph.

Proof Suppose J ∈ 𝒜(Γ). Then J is, by definition of 𝒜(Γ), a polynomial in A; consequently AJ = JA. Now if k^(i) denotes the valency of the vertex v_i, then (AJ)_{ij} = k^(i) and (JA)_{ij} = k^(j), so that all the valencies are equal and Γ is regular. Further, if Γ were disconnected we could find two vertices with no walks in Γ joining them, so that the corresponding entry of A^l would be zero for all l ≥ 0. Then every polynomial in A would have a zero entry, contradicting the fact that J ∈ 𝒜(Γ). Thus Γ is connected.

Conversely, suppose Γ is connected and regular of valency k. Then, by part (1) of Proposition 3.1, k is an eigenvalue of Γ, and so the minimum polynomial of A is of the form (λ - k) q(λ). This means that Aq(A) = kq(A), that is, each column of q(A) is an eigenvector of A corresponding to the eigenvalue k. By part (2) of Proposition 3.1, this means that each column of q(A) is a multiple of u, and since q(A) is a symmetric matrix, it is a multiple of J. Thus J is a polynomial in A. □

COROLLARY 3.3 Let Γ be a regular connected graph with n vertices, and let the distinct eigenvalues of Γ be k > λ_1 > ... > λ_{s-1}. Then if q(λ) = Π (λ - λ_i), where the product is over the range 1 ≤ i ≤ s-1, we have

    q(A) = (q(k)/n) J.
Proof It follows from the proof of Proposition 3.2 that q(A) = αJ, for some constant α. Now the eigenvalues of q(A) are q(k) and q(λ_i) (1 ≤ i ≤ s-1), and all these except q(k) are zero. The only non-zero eigenvalue of αJ is αn, hence α = q(k)/n. □
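Corollary 3.3 is also easy to verify numerically. The following editorial sketch (Python/numpy) forms q(A) for the circuit graph C_5, a connected 2-valent graph chosen for the illustration, and compares it with (q(k)/n) J.

```python
import numpy as np

# The circuit graph C_5: connected and regular of valency k = 2.
n, k = 5, 2
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

distinct = np.unique(np.round(np.linalg.eigvalsh(A), 8))
others = [lam for lam in distinct if not np.isclose(lam, k)]   # eigenvalues other than k

# q(A) = product over the other distinct eigenvalues of (A - lambda*I)
qA = np.eye(n)
for lam in others:
    qA = qA @ (A - lam * np.eye(n))

qk = np.prod([k - lam for lam in others])        # q(k)
J = np.ones((n, n))
print(np.allclose(qA, (qk / n) * J))             # True, as Corollary 3.3 predicts
```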
There is a special class of regular graphs whose spectra can be found by means of an old technique in matrix theory. As this class contains several well-known families of graphs, we shall briefly review the relevant theory. An n×n matrix S is said to be a circulant matrix if its entries satisfy s_ij = s_{1,j-i+1}, where the subscripts are reduced modulo n and lie in the set {1, 2, ..., n}. In other words, row i of S is obtained from the first row of S by a cyclic shift of i-1 steps, and so any circulant matrix is determined by its first row. Let W denote the circulant matrix whose first row is [0, 1, 0, ..., 0], and let S denote a general circulant matrix whose first row is [s_1, s_2, ..., s_n]. Then a straightforward calculation shows that

    S = Σ_{j=1}^{n} s_j W^{j-1}.

Since the eigenvalues of W are 1, ω, ω^2, ..., ω^{n-1}, where ω = exp(2πi/n), it follows that the eigenvalues of S are

    λ_r = Σ_{j=1}^{n} s_j ω^{(j-1)r},   r = 0, 1, ..., n-1.

DEFINITION 3.4 A circulant graph is a graph Γ whose adjacency matrix A(Γ) is a circulant matrix.

It follows, from the fact that the adjacency matrix is a symmetric matrix with zero entries on the main diagonal, that, if the first row of the adjacency matrix of a circulant graph is [a_1, a_2, ..., a_n], then a_1 = 0 and a_i = a_{n-i+2} (2 ≤ i ≤ n).

PROPOSITION 3.5 Suppose [0, a_2, ..., a_n] is the first row of the adjacency matrix of a circulant graph Γ. Then the eigenvalues of Γ are

    λ_r = Σ_{j=1}^{n} a_j ω^{(j-1)r},   r = 0, 1, ..., n-1.
Proof This result follows directly from the expression for the eigenvalues of a circulant matrix. □

We notice that the condition a_i = a_{n-i+2} ensures that the eigenvalues of a circulant graph are real, and that the n eigenvalues given by the formula of Proposition 3.5 are not necessarily all distinct.
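Proposition 3.5 translates directly into a short computation. The sketch below is an editorial illustration (Python/numpy): it builds a circulant adjacency matrix from its first row and compares the eigenvalues given by the formula with those obtained numerically, using C_6 as the assumed example.

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix whose row i is the first row shifted cyclically by i steps."""
    n = len(first_row)
    return np.array([[first_row[(j - i) % n] for j in range(n)] for i in range(n)])

# First row of the adjacency matrix of the circuit graph C_6.
a = [0, 1, 0, 0, 0, 1]
n = len(a)
A = circulant(a)

omega = np.exp(2j * np.pi / n)
formula = [sum(a[j] * omega**(j * r) for j in range(n)) for r in range(n)]

print(np.allclose(sorted(np.real_if_close(formula)), np.linalg.eigvalsh(A)))  # True
```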
We give three important examples of this technique. First, the complete graph K_n is a circulant graph, and the first row of its adjacency matrix is [0, 1, 1, ..., 1]. Since 1 + ω^r + ... + ω^{(n-1)r} = 0 for r ∈ {1, 2, ..., n-1}, it follows from Proposition 3.5 that the spectrum of K_n is:

    Spec K_n = ( n-1   -1
                  1    n-1 )
Our second example is the circuit graph C_n, whose adjacency matrix is a circulant matrix with first row [0, 1, 0, ..., 0, 1]. In the notation of Proposition 3.5, the eigenvalues are λ_r = 2 cos(2πr/n), but these numbers are not all distinct and the complete description of the spectrum is:

    (if n is odd)   Spec C_n = ( 2   2cos(2π/n)   ...   2cos((n-1)π/n)
                                 1       2        ...         2        )

    (if n is even)  Spec C_n = ( 2   2cos(2π/n)   ...   2cos((n-2)π/n)   -2
                                 1       2        ...         2           1 )
A third family of circulant graphs which we shall encounter is the family H_s of hyperoctahedral graphs [B, p. 101]. The graph H_s may be constructed by removing s disjoint edges from K_{2s}; consequently H_s is the complete multipartite graph K_{2,2,...,2}, as defined on p. 37. It is also a circulant graph, and the first row of its adjacency matrix is [a_1, ..., a_{2s}], where each entry is 1, except for a_1 and a_{s+1} which are zero. It follows that the eigenvalues of H_s are λ_0 = 2s-2,
    λ_r = -1 - ω^{rs}   (1 ≤ r ≤ 2s-1).

4. The homology of graphs

Any function ξ from EΓ to C can be represented by a column vector x = [x_1, x_2, ..., x_m]^t such that x_i = ξ(e_i) (1 ≤ i ≤ m). The bases {ω_1, ω_2, ..., ω_n} and {e_1, e_2, ..., e_m} will be referred to as the standard bases for C_0(Γ) and C_1(Γ).

We now introduce a useful device. For each edge e_α = {v_σ, v_τ} of Γ, we shall choose one of v_σ, v_τ to be the positive end of e_α, and the other one to be the negative end. We refer to this procedure by saying that Γ has been given an orientation. It will appear that, although this device is employed in the proofs of several results, the results themselves are independent of it.

DEFINITION 4.2 The incidence matrix D of Γ, with respect to a given orientation of Γ, is the n×m matrix (d_ij) whose entries are

    d_ij = +1 if v_i is the positive end of e_j,
           -1 if v_i is the negative end of e_j,
            0 otherwise.
We remark that D is the representation, with respect to the standard bases, of a linear mapping from C_1(Γ) to C_0(Γ). This mapping will be called the incidence mapping, and will be denoted by D. For each ξ: EΓ → C the function Dξ: VΓ → C is defined by

    (Dξ)(v_i) = Σ_{j=1}^{m} d_ij ξ(e_j).
The rows of the incidence matrix correspond to the vertices of Γ, and its columns correspond to the edges of Γ; each column contains just two non-zero entries, +1 and -1, representing the positive and negative ends of the corresponding edge. For the remainder of this chapter we shall let c denote the number of connected components of Γ.

PROPOSITION 4.3 The incidence matrix D of Γ has rank n - c.

Proof The incidence matrix can be written in the partitioned form

    D = ( D^(1)    0     ...    0
           0     D^(2)   ...    0
          ...     ...    ...   ...
           0       0     ...  D^(c) )
by a suitable labelling of the vertices and edges of Γ. The matrix D^(i) (1 ≤ i ≤ c) is the incidence matrix of a component Γ^(i) of Γ, and we shall show that the rank of D^(i) is n_i - 1, where n_i = |VΓ^(i)|. The required result then follows, by addition.

Let d_j denote the row of D^(i) corresponding to the vertex v_j of Γ^(i). Since there is just one +1 and just one -1 in each column of D^(i), it follows that the sum of the rows of D^(i) is the zero row vector, and the rank of D^(i) is at most n_i - 1. Suppose we have a linear relation Σ α_j d_j = 0, where the summation is over all rows of D^(i), and not all the coefficients α_j are zero. Choose a row d_k for which α_k ≠ 0; this row has non-zero entries in those columns corresponding to the edges incident with v_k. For each such column, there is just one other row d_l with a non-zero entry in that column, and in order that the given linear relation should hold, we must have α_l = α_k. Thus, if α_k ≠ 0, then α_l = α_k for all vertices v_l adjacent to v_k. Since Γ^(i) is connected, it follows that all coefficients α_j are equal, and so the given linear relation is just a multiple of Σ d_j = 0. Consequently, the rank of D^(i) is n_i - 1. □

The following definition, which is for a general graph Γ with n vertices, m edges, and c components, will be applied (in this part of the book) to simple graphs only.

DEFINITION 4.4 The rank of Γ and the co-rank of Γ are, respectively,

    r(Γ) = n - c;   s(Γ) = m - n + c.
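Proposition 4.3 and Definition 4.4 can be checked by machine. The sketch below is an editorial illustration (Python/numpy); the orientation rule and the two-component example graph are assumptions made for the illustration only.

```python
import numpy as np

def incidence_matrix(n, edges):
    """Incidence matrix for the orientation in which each edge (u, v) is directed from u to v."""
    D = np.zeros((n, len(edges)), dtype=int)
    for j, (u, v) in enumerate(edges):
        D[u, j] = 1      # u is taken as the positive end of e_j
        D[v, j] = -1     # v is taken as the negative end of e_j
    return D

# A graph on n = 6 vertices with c = 2 components: a triangle and a path with two edges.
n, c = 6, 2
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5)]
m = len(edges)
D = incidence_matrix(n, edges)

rank = np.linalg.matrix_rank(D)
print(rank == n - c)            # True: rank r(Gamma) = n - c      (Proposition 4.3)
print(m - rank == m - n + c)    # True: nullity of D = co-rank s(Gamma)
```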
We now investigate the kernel of the incidence mapping D, and its relationship with graph-theoretical properties of Γ. Let Q be a set of edges such that the edge-subgraph ⟨Q⟩ is a circuit graph. We say that Q is a circuit in Γ; the two possible cyclic orderings of the vertices of ⟨Q⟩ induce two possible circuit-orientations of the edges of Q. Let us choose one of these circuit-orientations, and define a function ξ_Q in C_1(Γ) as follows. We put ξ_Q(e) = +1 if e belongs to Q and its circuit-orientation coincides with its orientation in Γ, ξ_Q(e) = -1 if e belongs to Q and its circuit-orientation is the reverse of its orientation in Γ, while if e is not in Q we put ξ_Q(e) = 0.
THEOREM 4.5 The kernel of the incidence mapping D of Γ is a vector space whose dimension is equal to the co-rank of Γ. If Q is a circuit in Γ, then ξ_Q belongs to the kernel of D.

Proof Since the rank of D is n - c, and the dimension of C_1(Γ) is m, it follows that the kernel of D has dimension m - n + c = s(Γ). With respect to the standard bases for C_1(Γ) and C_0(Γ), we may take D to be the incidence matrix, and ξ_Q to be represented by a column vector x_Q. Now (Dx_Q)_i is the inner product of the row d_i of D and the vector x_Q. If v_i is not incident with any edges of Q, then this inner product is zero; if v_i is incident with some edges of Q, then it is incident with precisely two edges, and the choice of signs in the definition of ξ_Q implies that the inner product is again zero. Thus Dx_Q = 0, and ξ_Q belongs to the kernel of D. □
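Theorem 4.5 admits the same kind of numerical illustration (again an editorial sketch in Python/numpy, with an arbitrarily chosen graph and orientation): the vector ξ_Q attached to a circuit lies in the kernel of D, and the kernel has dimension m - n + c.

```python
import numpy as np

# Same incidence-matrix convention as before: each edge (u, v) is directed from u to v.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a 4-cycle with one chord
m, c = len(edges), 1
D = np.zeros((n, m), dtype=int)
for j, (u, v) in enumerate(edges):
    D[u, j], D[v, j] = 1, -1

# The circuit Q traversed as v_0 -> v_1 -> v_2 -> v_0: edges 0 and 1 are used with
# their orientation, while edge 4 (= (0, 2)) is used against it.
x_Q = np.array([1, 1, 0, 0, -1])

print(bool(np.all(D @ x_Q == 0)))                  # True: xi_Q lies in the kernel of D
kernel_dim = m - np.linalg.matrix_rank(D)
print(kernel_dim == m - n + c)                     # True: dimension equals the co-rank s(Gamma)
```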
If ρ and σ are two elements of the edge-space of Γ (that is, functions from EΓ to C), then we may define the inner product (ρ, σ).

[...] (DD^t)_ii is the inner product of d_i with itself, and, since the number of entries ±1 in d_i is equal to the valency of v_i, the result follows. □
4A Planar graphs and duality Let Γ be a planar connected graph and Γ* its geometric dual [W, p. 72]. If Γ is given an orientation and D is the incidence matrix of Γ, then Γ* can be given an orientation so that its incidence matrix D* satisfies: (a) rank(D) + rank(D*) = |EΓ|; (b) D(D*)^t = 0.

4B The co-boundary mapping The linear mapping from C_0(Γ) to C_1(Γ) defined (with respect to the standard bases) by x ↦ D^t x is sometimes called the co-boundary mapping for Γ. The kernel of the co-boundary mapping is a vector space of dimension c, and the image of the co-boundary mapping is the cutset-subspace of Γ.

4C Flows An element φ of the circuit-subspace of Γ is often called a flow on Γ. The support of φ [...] φ(e) > 0 for e in S(φ). (a) For a given integral flow φ there is a primitive flow which conforms to φ. (b) Any integral flow φ is the sum of integer multiples of primitive flows, each of which conforms to φ (Tutte 1956).

4E Modular flows Suppose the entries 0, 1, -1 of D are taken to be elements of the ring Z_u of residue classes of integers modulo u. A flow mod u on Γ is a vector x with components in Z_u for which Dx = 0, where 0 is the zero vector over Z_u. The results of §4D imply that if x is a given flow mod u, then there is an integral flow y, each of whose components y_i satisfies (a) y_i ∈ x_i, (b) -u