Progress in Theoretical Computer Science

Series Editor: Ronald V. Book, University of California

Editorial Board:
Erwin Engeler, ETH Zentrum, Zürich, Switzerland
Gérard Huet, INRIA, Le Chesnay, France
Jean-Pierre Jouannaud, Université de Paris-Sud, Orsay, France
Robin Milner, University of Edinburgh, Edinburgh, Scotland
Maurice Nivat, Université de Paris VII, Paris, France
Martin Wirsing, Universität Passau, Passau, Germany

Algorithms for Random Generation and Counting: A Markov Chain Approach

Alistair Sinclair
Department of Computer Science
University of Edinburgh
Edinburgh, Scotland

Birkhäuser Boston · Basel · Berlin

Library of Congress Cataloging-in-Publication Data
Sinclair, Alistair, 1960-
Algorithms for random generation and counting : a Markov chain approach / Alistair Sinclair.
p. cm. -- (Progress in theoretical computer science)
Includes bibliographical references and index.
ISBN 0-8176-3658-7 (acid-free)
1. Markov processes. 2. Combinatorial set theory. 3. Algorithms. I. Title. II. Series.
QA274.7.S56 1992
519.2'33--dc20
92-34616 CIP

© Birkhäuser Boston 1993. Printed on acid-free paper.

Portions of Chapters 1, 2 and 4 are reprinted with permission from Information and Computation, vol. 82, pp. 93-133. © 1989 Academic Press. Portions of Chapter 3 are reprinted with permission from the SIAM Journal on Computing, vol. 18, no. 6, pp. 1149-1178. © 1989 by the Society for Industrial and Applied Mathematics. Copyright is not claimed for works of U.S. Government employees.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior permission of the copyright owner.

Permission to photocopy for internal or personal use of specific clients is granted by Birkhäuser Boston for libraries and other users registered with the Copyright Clearance Center (CCC), provided that the base fee of $6.00 per copy, plus $0.20 per page is paid directly to CCC, 21 Congress Street, Salem, MA 01970, U.S.A. Special requests should be addressed directly to Birkhäuser Boston, 675 Massachusetts Avenue, Cambridge, MA 02139, U.S.A.

ISBN 0-8176-3658-7
ISBN 3-7643-3658-7

Camera-ready copy prepared by the Author in TeX.
Printed and bound by Quinn-Woodbine, Woodbine, NJ.
Printed in the U.S.A.

9 8 7 6 5 4 3 2 1

Contents

Synopsis  1
1 Preliminaries  7
  1.1 Some basic definitions  7
  1.2 Notions of tractability  12
  1.3 An extended model  18
  1.4 Counting, generation and self-reducibility  28
  1.5 An interesting class of relations  36
2 Markov chains and rapid mixing  42
  2.1 The Markov chain approach to generation problems  43
  2.2 Conductance and the rate of convergence  46
  2.3 A characterisation of rapid mixing  56
3 Direct Applications  63
  3.1 Some simple examples  63
  3.2 Approximating the permanent  70
  3.3 Monomer-dimer systems  85
  3.4 Concluding remarks  96
4 Indirect Applications  101
  4.1 A robust notion of approximate counting  102
  4.2 Self-embeddable relations  112
  4.3 Graphs with specified degrees  114
Appendix: Recent developments  126
Bibliography  136
Index  144
Foreword

This monograph is a slightly revised version of my PhD thesis [86], completed in the Department of Computer Science at the University of Edinburgh in June 1988, with an additional chapter summarising more recent developments. Some of the material has appeared in the form of papers [50, 88].

The underlying theme of the monograph is the study of two classical problems: counting the elements of a finite set of combinatorial structures, and generating them uniformly at random. In their exact form, these problems appear to be intractable for many important structures, so interest has focused on finding efficient randomised algorithms that solve them approximately, with a small probability of error. For most natural structures the two problems are intimately connected at this level of approximation, so it is natural to study them together.

At the heart of the monograph is a single algorithmic paradigm: simulate a Markov chain whose states are combinatorial structures and which converges to a known probability distribution over them. This technique has applications not only in combinatorial counting and generation, but also in several other areas such as statistical physics and combinatorial optimisation. The efficiency of the technique in any application depends crucially on the rate of convergence of the Markov chain. Since the number of states is typically extremely large, the chain should reach equilibrium after exploring only a tiny fraction of its state space; chains with this property are called rapidly mixing. A major portion of the monograph is devoted to developing new methods for analysing the rate of convergence, which are of independent mathematical interest. The methods are quite general and lay the foundations for the analysis of the time-dependent behaviour of complex Markov chains arising in the above applications.

The power of these methods is illustrated by using them to analyse several interesting Markov chains. As a result, polynomial time approximation algorithms are obtained for counting and generation problems associated with various natural structures, including matchings in a graph and labelled graphs with specified vertex degrees. Important consequences are the first known polynomial time approximation algorithms for computing the permanent of dense 0-1 matrices and the partition function of monomer-dimer systems in statistical physics. Markov chain techniques are also used in the monograph to provide further insight into the relationship between counting and generation problems in general, and to establish a pleasing robustness result for a suitable notion of approximate counting.
Since the thesis appeared in the Summer of 1988, there has been much interest in the quantitative analysis of Markov chains and their computational applications. The analytical methods developed here have been refined and applied to many important new examples, and the subject is rapidly becoming a research area in its own right. Were I to embark on writing a new monograph from scratch today, I would take as the starting point the notion of a rapidly mixing Markov chain and proceed to the applications from there. In recognition of these developments, I have attempted to summarise in an Appendix the most significant activity in this field up to early 1992. References are given to the relevant papers so that the interested reader may follow up the details. I hope that the monograph will provide a useful introduction to what promises to be an exciting and fruitful research area.

I wish to record my warmest thanks to Mark Jerrum, my PhD supervisor, for numerous suggestions and discussions during the development of this work and for reading everything so critically. I would also like to thank Clemens Lautemann, Kyriakos Kalorkoti and Gordon Brebner for helpful discussions, and my friends and colleagues at the Department of Computer Science at Edinburgh for providing a pleasant and stimulating research environment.

Rutgers University, January 1992

Synopsis

This monograph is concerned with two classes of problems involving a finite set S of combinatorial structures: counting them and generating them at random from a uniform distribution. The set S will always have a concise description x: for example, x may be a connected graph and S its set of spanning trees, or x a positive integer and S its set of partitions. We also consider the more general setting in which each structure in S has an associated positive weight: the task is then to compute the weighted sum of all elements of S, or to generate them randomly with probabilities proportional to their weights.

Combinatorial counting problems have a long and distinguished history which we will not attempt to summarise here. In addition to their intrinsic interest, they arise naturally from investigations in numerous other branches of mathematics and the natural sciences and have given rise to a rich and beautiful theory.

Generation problems are less well studied but have a number of computational applications. For example, uniform generation can be seen as a way of exploring a large set of combinatorial structures and constructing typical representatives of it. These may be used to formulate conjectures about the set, or perhaps as test data for the empirical analysis of some heuristic algorithm which takes inputs from the set. Non-uniform generation occurs in the mathematical modelling of physical systems where the structures are valid system configurations each of which has a weight which depends on its energy. Many important properties of the model can be deduced from estimates of the expectation, under the weighted distribution, of certain operators on configurations. An analogous idea lies at the heart of widely used stochastic techniques for combinatorial optimisation, such as simulated annealing. Here, low cost solutions are assigned low "energy" and thus large weight: sampling from the weighted distribution therefore tends to favour them.
The central question we shall address is that of the existence of computationally efficient counting and generation procedures for a given class of structures. In accordance with accepted practice in theoretical computer science, "efficient" is taken to mean having a runtime which grows only polynomially with the size of the problem description x. Of course, for most structures this rules out the naïve approach of exhaustively enumerating the elements of S since there will in general be far too many of them.

Our treatment is motivated by the empirical observation that efficient exact counting procedures exist only for a relatively small number of interesting structures. For many others, the apparent hardness of counting is supported by strong theoretical evidence to the effect that no efficient counting procedure exists unless P = NP. However, there is no a priori reason to suppose that some of these structures cannot be counted approximately in some suitable sense. Specifically, it is interesting to relax the requirements of exact counting in two directions: randomisation and approximation. Thus we allow our counting procedures to flip coins, and demand that they produce an answer which is correct to within some small specified error with high probability. In the case of generation, we settle for a distribution which is almost uniform (or, more generally, close to some weighted distribution) with some small specified bias. This gives us well-defined notions of approximate counting and almost uniform generation: a fundamental assumption implicit in our approach is that they correspond to reasonable notions of effective tractability.

The principal aim of this monograph is to go some way towards classifying naturally occurring counting and generation problems according to these notions of tractability. We shall concentrate mainly on techniques for proving positive results, though it is worth pointing out that for certain structures the problems remain hard even in approximate form. Our investigation will have essentially two strands: the classification of particular structures, and the nature of the classification process itself.

The reader may be wondering why we have chosen to investigate two apparently very disparate classes of problems at the same time. The reason for this is that, from a computational point of view, the problems of approximate counting and almost uniform generation are very closely related. More precisely, for most natural structures a polynomial time procedure for approximate counting can be used to construct a polynomial time almost uniform generation algorithm, and vice versa. The only assumption we need to make is that the structures are self-reducible, which essentially means that they possess a simple inductive construction in terms of similar structures of a smaller size. This connection has frequently been observed in specific cases, and was formalised in a general setting by Jerrum, Valiant and Vazirani [53], whose work initially inspired the author to embark on the research presented here. The upshot is that counting and generation problems may profitably be studied in tandem: techniques which naturally suggest themselves for one class of problems can be used to solve the other indirectly.

Traditionally the cross-fertilisation between these problems has tended to be rather one-sided: exact or approximate analytic results on counting enable one to generate structures uniformly or almost uniformly. This monograph concentrates chiefly on the opposite direction. We analyse a powerful algorithmic paradigm for generation, which by virtue of self-reducibility constitutes an attack on counting problems as well.

The paradigm in question, which has been in use for some years in statistical physics under the name of the Monte Carlo method [12], involves constructing a dynamic stochastic process (a finite Markov chain) whose states correspond to the set S of structures of interest. The process is able to move around the state space by means of random local perturbations of the structures. Moreover, the process is ergodic, i.e., if it is allowed to evolve in time the distribution of the final state tends asymptotically to a unique stationary distribution π over S which is independent of the initial state. By simulating the process for sufficiently many steps and outputting the final state, we are therefore able to generate elements of S from a distribution which is arbitrarily close to π.

The construction and simulation of an ergodic Markov chain with the desired stationary distribution is usually straightforward. What is not at all obvious, however, is how to determine the number of simulation steps which are necessary in order to achieve some specified degree of accuracy. In most application areas, such as Monte Carlo simulations in physics, ad hoc arguments are typically used to estimate this number. There is clearly a need for rigorous analytic bounds if we are to have confidence in the results produced by such methods.

According to our efficiency criterion, the crucial consideration is that the number of steps required for the chain to be close to stationarity should grow only polynomially with the size of the problem description x. We refer to chains which have this property as rapidly mixing. Since the number of states will in general be exponentially large, rapid mixing is a strong property: it demands that the process should be close to equilibrium after visiting only a tiny fraction of its state space.

The classical theory of Markov chains is concerned almost exclusively with asymptotic behaviour and provides little useful information about the rate of convergence. More recently, several authors have proposed various methods of analysis but these have so far been able to handle only simple chains (such as random walks on cubes) which possess a highly symmetrical structure. The first major contribution of this monograph is to
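The Monte Carlo paradigm just described can be made concrete in a few lines. The following Python fragment is an illustrative toy, not a construction from the monograph: it runs a Markov chain on the matchings of a small graph, where each step picks an edge uniformly at random and removes it from, or adds it to, the current matching whenever the move again yields a matching. The transitions are symmetric, so the stationary distribution is uniform over all matchings; the particular graph, step count and brute-force enumeration are arbitrary choices made for the demonstration.

```python
import random
from itertools import combinations

def matchings(edges):
    """Enumerate all matchings (independent edge sets) by brute force."""
    result = []
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            verts = [v for e in sub for v in e]
            if len(verts) == len(set(verts)):  # no vertex used twice
                result.append(frozenset(sub))
    return result

def chain_step(M, edges, rng):
    """One random local perturbation: pick an edge uniformly; remove it
    if present, add it if both endpoints are free, otherwise stay put."""
    e = rng.choice(edges)
    if e in M:
        return M - {e}
    u, v = e
    if all(u not in f and v not in f for f in M):
        return M | {e}
    return M

rng = random.Random(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle: 7 matchings in all
counts = {M: 0 for M in matchings(edges)}
M = frozenset()
steps = 200000
for _ in range(steps):
    M = chain_step(M, edges, rng)
    counts[M] += 1
# the empirical distribution approaches uniform over the 7 matchings
freqs = [c / steps for c in counts.values()]
```

After many steps each matching is visited with frequency close to 1/7, illustrating convergence to the uniform stationary distribution; nothing here, of course, certifies *how fast* that convergence is, which is exactly the question the monograph addresses.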
establish a useful characterisation of the rapid mixing property for a wide class of Markov chains. The characterisation is based on a structural property, called the conductance, of the weighted graph underlying the chain. The conductance is a global measure of connectedness, or communication strength, of the process and is a natural analogue in this context of the more familiar graph-theoretic concept of expansion. Informally, our characterisation says that a (reversible) Markov chain is rapidly mixing if and only if the conductance is large. This result represents a generalisation of recent work by Alon and Milman [4, 5] on the connection between the eigenvalues of a graph and its structural properties, whose relevance to the analysis of certain Markov chains has been observed by Aldous [2] and others.

More significant than the characterisation itself is the fact that it allows the rate of convergence of non-trivial Markov chains to be analysed for the first time. In order to do this, we develop a general methodology for deriving lower bounds on the conductance of the underlying graphs. The methodology is based on constructing paths between states in the underlying graph in such a way that no edge carries too many paths. A novel injective mapping technique is introduced to bound the number of paths traversing any edge.

With this machinery in place, we show how to infer the rapid mixing property for natural Markov chains whose states are matchings (independent sets of edges) in a graph. (Chains of this kind were first considered by Broder [16].) We are therefore able to generate almost uniformly in polynomial time the following classes of structures: perfect matchings in a very large class of graphs, including all sufficiently dense graphs, and matchings of all sizes in arbitrary graphs. As a result, we establish for the first time the existence of efficient approximation algorithms for counting these structures. Both counting problems are highly significant, and are known to be hard in exact form. Counting perfect matchings in a bipartite graph is equivalent to computing the permanent of a 0-1 matrix, a problem for which mathematicians have been seeking an efficient computational procedure for over a century. Counting and generating matchings is of interest in the context of monomer-dimer systems in statistical physics. In the latter case, we are also able to handle natural weighted versions of the problems, and so to approximate the partition function of a monomer-dimer system.

We conjecture that the methods described above point the way towards a powerful general approach for analysing the efficiency of algorithms based on finite Markov chains. Potential future application areas include further positive results on approximate counting, rigorous performance guarantees for Monte Carlo simulations in statistical physics, and the demystification of certain stochastic optimisation heuristics such as simulated annealing. Indeed, some substantial progress in these directions has already been made and is reported in the Appendix.

The final theme of the monograph is concerned more with the general notions of approximate counting and almost uniform generation than with particular structures. For approximate counting, we are able to establish a pleasing robustness result: for any self-reducible structures, approximate counting within a factor of the form 1 + n^β is possible in polynomial time either for all real constants β or for no β. Thus we are justified in calling a counting problem tractable if it can be approximated within some such factor by a polynomial time randomised algorithm. This provides a natural way of classifying hard counting problems with respect to a well-defined notion of approximability.

The mechanism used in the above proof is of independent interest. The key step is showing how to generate structures almost uniformly given only very crude counting information for them. This is achieved by means of a Markov chain suggested by the self-reducibility inherent in the problem, which can again be analysed using the characterisation described earlier. This construction can help us to exploit a much wider range of analytic results on approximate counting than has hitherto been possible in this context, even for structures which are not self-reducible.

To illustrate this point, we consider the problem of generating almost uniformly labelled graphs with given vertex degrees and a given excluded subgraph. Using an asymptotic result of McKay [69] on the number of such graphs, we are able to generate them in polynomial time from a distribution which is very close to uniform, provided that the vertex degrees are not too large. The range of degrees we can handle is considerably larger than for previous methods. Moreover, we show the existence of polynomial time algorithms for approximating the number of graphs with a much smaller error than that of available analytic estimates.
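For a chain small enough to write down explicitly, the conductance can be computed directly from its usual definition, Φ = min over subsets S with π(S) ≤ 1/2 of the flow Q(S, S̄)/π(S), where Q(i, j) = π_i p_ij. The brute-force sketch below (our own illustration; exhaustive subset search is obviously only feasible for tiny chains) evaluates Φ for a lazy random walk on a 4-vertex cycle.

```python
from itertools import combinations

def conductance(P, pi):
    """Brute-force conductance of a finite chain: the minimum over
    subsets S with pi(S) <= 1/2 of the probability flow out of S,
    sum over i in S, j not in S of pi[i] * P[i][j], divided by pi(S)."""
    n = len(P)
    states = range(n)
    best = float("inf")
    for r in range(1, n):
        for S in combinations(states, r):
            piS = sum(pi[i] for i in S)
            if piS > 0.5:
                continue
            flow = sum(pi[i] * P[i][j]
                       for i in S for j in states if j not in S)
            best = min(best, flow / piS)
    return best

# lazy random walk on a 4-cycle: hold with prob 1/2, else move to a
# uniform neighbour; the stationary distribution is uniform
n = 4
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][i] = 0.5
    P[i][(i + 1) % n] += 0.25
    P[i][(i - 1) % n] += 0.25
pi = [1.0 / n] * n
phi = conductance(P, pi)   # worst cut: two adjacent vertices
```

For this walk the minimising cut splits the cycle into two adjacent pairs, giving Φ = 0.25; a chain with many such "bottleneck" cuts of small conductance is precisely one that mixes slowly.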
The monograph is organised as follows. In Chapter 1 we present some fundamental definitions and summarise earlier work which is relevant to our exposition. The general results on Markov chains are discussed in Chapter 2, culminating in the characterisation of the rapid mixing property. The next two chapters are devoted to applications of the characterisation. In Chapter 3 we consider Markov chains for generating specific structures, and establish positive results for the permanent and monomer-dimer systems. In Chapter 4 we discuss the robustness of our notions of approximate counting and uniform generation, and apply some of our new general machinery to the degree-constrained graph problem mentioned above. Finally, in the Appendix we summarise recent developments in the theory and applications of rapidly mixing Markov chains that apply and extend the techniques presented in the monograph.
Chapter 1

Preliminaries
The aims of this introductory chapter are twofold: to set down and motivate precise definitions of some fundamental concepts, and to briefly summarise relevant previous knowledge. Accordingly, much of this material is not new, but is included in the interests of orienting the reader.

Three concepts defined here will play a central rôle in all that follows: self-reducibility, approximate counting and almost uniform generation. We trust that the reader will gain an intuitive feel for them through the mostly elementary discussions of this chapter. Almost all the algorithms we shall present in this monograph are probabilistic or randomised. We therefore devote some effort at this stage to a discussion of the model of randomised computation so that technical details may be avoided in the sequel.
The starting point for our later investigations is the work of Jerrum, Valiant and Vazirani [53], in which the intimate computational connection between counting and generation was first formalised. Since this connection underlies most of our later work, we feel it is appropriate to spell it out in some detail in Section 1.4. Finally, we set the scene for the rest of the monograph by identifying a class of combinatorial structures for which the questions of approximate counting and uniform generation are particularly relevant.
1.1 Some basic definitions

We begin by introducing a framework within which a number of natural classes of combinatorial problems, including counting and generation, may be formulated and their computational complexities compared. Common to all classes is the idea of computing certain information about a finite but usually very large set of combinatorial structures given a concise implicit description of the set. Typically, the description takes the form of some other combinatorial entity drawn from a family of problem instances, together with a relation R which associates with each instance x in the family a finite set R(x) of structures called solutions, as in the following examples:
1. Problem instances: Boolean formulae B in disjunctive normal form (DNF); Solution set R(B): all satisfying assignments of B.

2. Problem instances: undirected graphs G; Solution set R(G): all 1-factors (perfect matchings) of G.

3. Problem instances: positive integers n; Solution set R(n): all partitions of n.
More formally, we fix a finite alphabet Σ ⊇ {0, 1} over which instances and solutions are to be encoded, and let R ⊆ Σ* × Σ* be a binary relation over Σ. For any string x ∈ Σ*, the corresponding solution set with respect to R is just

    R(x) = { y ∈ Σ* : (x, y) ∈ R }.
Note that no distinction is made between strings which do not encode a "valid" problem instance and those which encode a problem instance with empty solution set. The formal counterpart of example 1 above is then

    R = { (x, y) : x ∈ Σ* encodes a Boolean formula B in DNF; y ∈ Σ* encodes a satisfying assignment of B }.

Throughout this monograph we shall move freely between the formal and informal problem descriptions without explicitly saying so, assuming always that the encoding scheme used is "reasonable" in the sense of Garey and Johnson [38]. The relational framework used here is taken from Jerrum, Valiant and Vazirani [53], and has also appeared elsewhere, notably in the form of the "search functions" of Valiant [95] and the "string relations" of Garey and Johnson [38].
Our main concern is with classes of problems which involve computing the cardinality of a finite set of combinatorial structures or generating elements of the set uniformly at random. In fact, most of what we will say applies in the more general setting where each structure has an associated positive weight, and the task is to compute the weighted sum of the structures or to generate them randomly with probabilities proportional to their weights. For the sake of simplicity we shall work with unweighted problems until Chapter 3, where weighted structures will arise naturally in one of our applications.

Formally, the counting problem for a relation R over Σ involves computing the function #R : Σ* → ℕ defined by #R(x) = |R(x)|. In the (uniform) generation problem for R, on input x ∈ Σ*, we are asked to select an element of R(x) at random in such a way that each solution has equal a priori probability of being chosen. We shall say in the next section precisely what we mean by effective solutions to these problems.

For the purposes of comparison, we mention some other important problem classes which fit naturally into this framework and which have been extensively studied. Foremost among these is the existence problem (or decision problem) for a relation R, which on input x ∈ Σ* asks whether the solution set R(x) is non-empty: in example 1 above, this means asking whether a given DNF formula is satisfiable. The construction problem is similar, but requires in addition that some solution y ∈ R(x) be output if one exists. If some cost function is defined on solutions, then the optimisation problem seeks a solution of minimum cost. Finally, the enumeration problem for R involves listing all elements of the solution set R(x).

We shall restrict our attention throughout to relations whose solutions are easy to check, i.e., for which membership of a candidate object in a given solution set can be tested efficiently. Let |x| denote the length of the string x. Borrowing terminology from [53], we say that R is a p-relation if

(pr1) There exists a polynomial p such that, for all strings x, y ∈ Σ*, (x, y) ∈ R ⇒ |y| ≤ p(|x|).

(pr2) The predicate (x, y) ∈ R can be tested in time polynomial in |x| + |y|.

The class of existence problems associated with p-relations may be identified with the more familiar class NP of languages accepted by polynomially time-bounded nondeterministic Turing machines (NP-machines). Specifically, if M is a nondeterministic Turing machine with at most |Σ| possible moves from each configuration, then any halting computation of M may be encoded in a standard way as a string over Σ. It should be clear that R is a p-relation if and only if there exists an NP-machine M such that, under such an encoding,

    R = { (x, y) : y encodes an accepting computation of M on input x }.

In precisely the same manner, Valiant's class #P [96] may be viewed as the class of counting problems associated with p-relations. (A function f : Σ* → ℕ belongs to #P iff there exists an NP-machine M such that, for all x ∈ Σ*, the number of accepting computations of M on input x is f(x).)
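To make the counting and uniform generation problems concrete, here is a sketch for example 3, partitions of a positive integer n. The recurrence counts partitions by largest part; the generator then chooses each successive largest part with probability proportional to the number of completions, so its output is exactly uniform. The function names and representation are our own; the construction also previews the counting-to-generation connection exploited later in the monograph.

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def count(n, k):
    """Number of partitions of n into parts of size at most k."""
    if n == 0:
        return 1
    if k == 0:
        return 0
    # either every part is at most k-1, or there is a part equal to k
    return count(n, k - 1) + (count(n - k, k) if k <= n else 0)

def generate(n, rng):
    """Uniform random partition of n: each choice of the next (largest)
    part is weighted by the number of ways to complete the partition."""
    parts, k = [], n
    while n > 0:
        r = rng.randrange(count(n, k))
        for part in range(k, 0, -1):
            ways = count(n - part, part)   # completions with this part first
            if r < ways:
                parts.append(part)
                n, k = n - part, part
                break
            r -= ways
    return parts

rng = random.Random(0)
sample = generate(6, rng)   # one of the 11 partitions of 6, uniformly
```

Because the counts are exact, the sampler is exactly uniform; when only approximate counts are available, the same scheme is the starting point for the almost uniform generation techniques studied later.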
For many naturally occurring p-relations, the structures in each solution set have a simple inductive construction from the solution sets of a few smaler instances of the same relation. All of the above examples possess
this property: in the case of DNF satisfiabilty, for instance, there is an obvious (I-I)-correspondence between the satisfying assignments of Band those of the reduced formulae BT and B F, which are obtained from B by
i I I
11
R(x) = ø or R(x) = -(Al. Note that this, together with condition (sr3), implies that we can test in time polynomial in ixl + lyl whether a candidate solution y E E* belongs to the solution set R(x). In view of condition (srI), any self-reducible relation is therefore automatically a p-relation.
its vaiables the vaues TRUE and FALSE respectively.
Example 1.1 Consider the relation MATCH which associates with an undirected graph G all matchings (independent sets of edges) of G. Then MATCH
This property is generally known as self-reducibility and was first studied
is easily seen to be self-reducible since, for an arbitrary edge e = (u, v) of G,
substituting for one of
by Schnorr ¡83J. Formaly, we say that a relation Rover E is (polynomial
MATCH(G) = MATCH(G-) U t M U -(el: M E MATCH(G+) ì,
time) self-reducible iff
(srI) There exists a polynomial time computable length function lR E* -- N such that lR(x) = O(lxlkR) for some constant kR, and
Y E R(x) :: lyl = lR(x)
Vx,y E E*.
where G- is the graph obtained by deleting the edge e from G, and G+ the graph obtained by deleting the vertices u and v together with all their
incident edges. D From our point of view, the crucial feature of the above definition is that it alows the counting or uniform generation problem for any instance x to
(sr2) For all x ∈ Σ* with l_R(x) = 0, the predicate λ ∈ R(x) can be tested in polynomial time. (λ denotes the empty string over Σ.)

(sr3) There exist polynomial time computable functions ψ : Σ* × Σ* → Σ* and σ : Σ* → N satisfying

    σ(x) = O(lg |x|);
    l_R(x) > 0 ⇒ σ(x) > 0,   ∀x ∈ Σ*;
    |ψ(x, w)| ≤ |x|,   ∀x, w ∈ Σ*;
    l_R(ψ(x, w)) = max{l_R(x) − |w|, 0},   ∀x, w ∈ Σ*,

and such that each solution set can be expressed in the form

    R(x) = ⋃_{w ∈ Σ^{σ(x)}} { wy : y ∈ R(ψ(x, w)) }.

Condition (sr3) provides an inductive construction of the solution sets as follows: if the solution length l_R(x) is greater than zero, R(x) can be partitioned into classes according to the (small) initial segment w of length σ(x), and each class can then be expressed as the solution set of another instance ψ(x, w), concatenated with w. The counting and generation problems for an instance x can thus be formulated in terms of the same problem for smaller instances ψ(x, w). The partitioning of satisfying assignments of a DNF formula indicated above is easily seen to be of the required form (under some natural encoding). An atom is an instance x ∈ Σ* with solution length l_R(x) = 0: in the above example, these would include (encodings of) the constants TRUE and FALSE, viewed as DNF formulae. Condition (sr2) says that, for atoms x, we can test in polynomial time whether x has a solution. We shall exploit this fact later in establishing a close relationship between these two problems for a given self-reducible relation.

The following two properties of a self-reducible relation R are easily checked and already well known: if the existence problem for R lies in P, then (i) the construction problem for R can be solved in polynomial time; and (ii) the enumeration problem for R can be solved in time #R(x)p(|x|), where x is the input and p is a polynomial. In fact, these relationships follow from weaker notions of self-reducibility, in which the strict (1-1)-correspondence of condition (sr3) is replaced by a requirement to the effect that a solution to the existence problem for any instance can be derived easily from solutions for certain subinstances. Such notions have been explored by various other authors (see, e.g., [73, 84]). Despite the fact that our definition appears rather strong, we have to work quite hard to find natural relations which cannot easily be formulated so as to be self-reducible. An outstanding candidate is the relation which associates with a positive integer n the set of factors of n.

It is conceptually helpful to capture the inductive construction of solutions of a self-reducible relation explicitly in a tree structure. Suppose that the relation R over Σ is self-reducible. For each x ∈ Σ* with R(x) ≠ ∅, the tree of derivations T_R(x) is a rooted tree in which each vertex v bears both a problem instance label inst(v) ∈ Σ* and a partial solution label sol(v) ∈ Σ*, defined inductively as follows:

(t1) The root u has labels inst(u) = x and sol(u) = λ.
(t2) For any vertex v in T_R(x), if the problem instance z = inst(v) is an atom then v is a leaf. Otherwise, define

    W(v) = { w ∈ Σ^{σ(z)} : R(ψ(z, w)) ≠ ∅ }.

(Note that W(v) is non-empty.) Then v has a child v_w for each w ∈ W(v), with labels inst(v_w) = ψ(z, w) and sol(v_w) = sol(v)·w.

Note that the labels sol(v) are distinct, while the inst(v) are in general not. It should be clear that the labels sol(v) for leaves v are precisely the elements of R(x), so there is a (1-1)-correspondence between leaves and solutions. More generally, for any vertex v of T_R(x) there is a (1-1)-correspondence between the solution set R(inst(v)) and the leaves of the maximal subtree rooted at v. The bounds on σ and ψ in the definition of self-reducibility ensure that the depth of T_R(x) is at most l_R(x), and hence bounded by a polynomial in |x|, and that its vertex degree is also polynomially bounded.

The definition of W(v) above makes it clear that, in order to infer the structure of the tree of derivations, it is necessary to solve the existence problem for the relation in question. Since we will not always be able to do this with certainty, it is useful to define the self-reducibility tree T̂_R(x) as for T_R(x) above except that the restriction R(ψ(z, w)) ≠ ∅ in the definition of W(v) is removed. Clearly, T̂_R(x) contains T_R(x) as a subgraph and their labels agree. All solutions in R(x) still occur precisely once as labels of leaves of T̂_R(x), but there may be other leaves whose partial solution labels are not in R(x). The depth and vertex degree of T̂_R(x) remain polynomially bounded as before.

Let us now consider solutions to the counting problem for p-relations. Our approach here is motivated by the observation that efficient exact solutions to this problem are the exception rather than the rule. By an efficient solution we mean a deterministic algorithm which computes #R(x) in time polynomial in the input size |x|. Note that the naïve method of explicitly enumerating candidate solutions is unacceptable since in general their number grows exponentially with the size of the input. Indeed, the class #P of counting problems associated with p-relations is not known to lie within any level of the Meyer-Stockmeyer polynomial time hierarchy [90]. More strikingly, there are many natural p-relations (among them examples 1 and 2 of the previous section) whose counting problem is complete for #P, i.e., in a precise sense as hard as any problem in the class, but whose construction problem is solvable in polynomial time: thus while finding a perfect matching in a graph is easy, counting these structures is apparently intractable. A detailed discussion and further examples of this phenomenon can be found in [97].

1.2 Notions of tractability

The aim of this section is to define precisely what is meant by an effective solution to the counting and generation problems introduced earlier. First we must agree on a model of computation: as we will chiefly be studying probabilistic algorithms, some source of randomisation has to be included. It is clearly desirable to formulate definitions in terms of the simplest available model, so we shall use an elementary computing device with access only to a stream of truly random bits. Possible extensions of this model will be considered in the next section.

A probabilistic Turing machine (PTM), as defined by Gill [39], is a standard deterministic Turing machine equipped with the additional ability to make decisions according to the fall of an unbiased coin. More formally, a PTM is a Turing machine with output tape [45, Chapter 7] which has certain distinguished "coin-tossing" states. The machine is deterministic except that in coin-tossing states precisely two possible transitions are specified, the choice of transition being determined by the toss of an unbiased coin. If the machine halts, its output is the contents of the output tape; otherwise, its output is undefined. A PTM is deterministic if it has no coin-tossing states.

The above definition induces a probability distribution on the set of computations of a PTM M on a given input. We define the random variables M(x) and Comp_M(x) to denote the output of M on input x and the computation of M on input x respectively, with the proviso that M(x) takes the special value ⊥ when the output is undefined. We restrict attention to PTMs with alphabet Σ ∪ {?}, where ? ∉ Σ is a distinguished symbol which may appear only on the output tape and is to be interpreted as rejection or failure. A computation of a PTM is accepting if it halts with some output other than ?. According to the application at hand, various definitions for the time complexity of a PTM are possible. We define the runtime of a PTM M on input x to be the length of a longest accepting computation of M on x.
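To make the trees T_R(x) and T̂_R(x) defined earlier concrete, here is a small illustration of my own (not from the text): a toy self-reducible relation in which R((n, p)) is the set of length-n bit strings whose number of 1s has parity p, with σ ≡ 1 and ψ stripping the leading bit. The sketch enumerates the leaf labels of the self-reducibility tree, in which no existence test is performed:

```python
# Illustrative sketch (my own, not from the text): the self-reducibility
# tree of a toy self-reducible relation, where R((n, p)) is the set of
# length-n bit strings whose number of 1s has parity p. Here sigma(x) = 1
# and psi strips the leading bit.

def is_atom(x):
    n, p = x
    return n == 0            # solution length zero

def atom_solved(x):          # (sr2): test whether lambda is a solution
    return x[1] == 0

def psi(x, w):               # (sr3): instance left after fixing first bit w
    n, p = x
    return (n - 1, p ^ w)

def leaf_labels(x, sol=""):
    # Leaves of the self-reducibility tree: every w is tried, with no
    # existence test, so some leaves carry labels that are not solutions.
    if is_atom(x):
        return [(sol, atom_solved(x))]
    out = []
    for w in (0, 1):
        out += leaf_labels(psi(x, w), sol + str(w))
    return out

labels = leaf_labels((3, 0))
solutions = sorted(s for s, ok in labels if ok)
```

Pruning the children whose solution sets are empty (here, exactly the atoms of odd parity) yields the tree of derivations, whose leaves are in (1-1)-correspondence with R(x).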
For functional evaluation problems which have resisted attempts at an exact solution, it is natural to seek efficient approximation algorithms which estimate the value of the function within a specified factor. This notion is familiar from combinatorial optimisation (see, e.g., [38, Chapter 6]) and asymptotic analysis [10]. (A less conventional, and much more severe definition of approximation is studied by Cai and Hemachandra [18].) A second approach towards reducing the complexity of apparently intractable problems is to allow some element of randomisation in the algorithm, requiring only that it delivers a reliable solution with high probability. While rigorous proofs that randomisation helps are available only in rather restricted circumstances, probabilistic algorithms which run faster than their best known deterministic counterparts have been discovered for a variety of problems (see [98] for a survey), and there has also been considerable interest in randomised techniques for combinatorial optimisation [60].

Following Stockmeyer [91] and Karp and Luby [56], we use these ideas to establish a weaker notion of tractability for counting problems. Evidence that apparently hard counting problems may be tractable in this sense is provided by the latter authors, who exhibit a fully-polynomial randomised approximate counter (see below) for satisfying assignments of a DNF formula (example 1 of the previous section).

If a, â and r are non-negative real numbers with r ≥ 1, we say that â approximates a within ratio r if âr⁻¹ ≤ a ≤ âr. Let g be a non-negative integer-valued function on Σ* and ρ a real-valued function on N such that ρ(n) ≥ 1 for all n ∈ N. A randomised approximation algorithm for g within ratio ρ is a PTM C which on inputs x ∈ Σ* always halts with non-negative real-valued output and satisfies

    Pr(C(x) approximates g(x) within ratio ρ(|x|)) ≥ 3/4.

If C is in fact deterministic, then it is an approximation algorithm for g within ratio ρ. In the case that g is the counting function #R for some relation R over Σ, we call C a (randomised) approximate counter for R within ratio ρ; if C is deterministic and ρ(n) = 1 for all n ∈ N then C is an exact counter for R. In all cases, C is polynomially time-bounded if its runtime is bounded by p(|x|) for some polynomial p and all inputs x ∈ Σ*.

Ideally, we would like to be able to specify the factor in the approximation as part of the input, so that arbitrary accuracy can be achieved (usually at the cost of increased computation time). We say that a PTM C is a randomised approximation scheme for g if on inputs (x, ε) ∈ Σ* × ℝ+ it always halts with non-negative real-valued output and satisfies

    Pr(C(x, ε) approximates g(x) within ratio 1 + ε) ≥ 3/4.

If C is deterministic, it is an approximation scheme for g. If g = #R, we speak of a (randomised) approximate counter for R. C is fully-polynomial (f.p.) if its runtime is bounded by a polynomial in |x| and ε⁻¹. (In practice, we can think of the value ε⁻¹ as being specified on the input tape as an integer in unary notation.)

The significance of the lower bound of 3/4 in the above definitions lies in the fact that it allows the counters to be "powered" so that the probability of producing a bad estimate becomes very small in polynomial time. (This would still hold if 3/4 were replaced by any fixed constant strictly greater than 1/2.) More precisely, we have

Proposition 1.2 If there exists a polynomially time-bounded randomised approximate counter C for R within ratio ρ, then there exists a PTM C′ which on inputs (x, δ) ∈ Σ* × ℝ+ always halts with non-negative real-valued output and satisfies

    Pr(C′(x, δ) approximates #R(x) within ratio ρ(|x|)) ≥ 1 − δ,

and whose runtime is bounded by a polynomial in |x| and lg δ⁻¹.

Proof: The required procedure C′ makes p(lg δ⁻¹) calls to C, with input x, for a suitable polynomial p, and returns the median of the values obtained. For the details, see Lemma 6.1 of [53]. □

Clearly, the powering operation of Proposition 1.2 may be applied in an identical manner to an f.p. randomised approximate counter, which will constitute our primary notion of tractability for counting problems.

We now turn our attention to uniform generation problems, taking our notions of tractability from Jerrum, Valiant and Vazirani [53]. It is tempting to demand that an exactly uniform generation algorithm should always halt and output some element of the solution set, assuming the latter is non-empty. However, this would not be reasonable within our model since any halting computation of a PTM is performed with probability 1/2^t for some t ∈ N: such a machine would thus not be capable of uniformly generating (say) the elements of a solution set of cardinality 3 in finite time. We therefore allow generators to fail (i.e., produce no meaningful output) with bounded probability. Once the possibility of failure is admitted, a number of attractive and natural generation paradigms become available, some of which we shall describe in due course.

As in the case of counting, we also introduce a relaxation of the original problem by allowing the output distribution to deviate from uniformity by a specified amount, called the tolerance. The strict uniformity requirement is seldom of practical importance, and if the deviation is small enough it will be effectively undetectable. Furthermore, our notion of "almost uniform" generation arises naturally in a number of ways: as a problem which is polynomial time equivalent to randomised approximate counting for self-reducible relations (Section 1.4); as a generalisation of uniform generation which is robust with respect to extensions of the randomised model (Section 1.3); and, most importantly, from dynamic stochastic algorithms for generation problems (Chapter 2 onwards).
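The cardinality-3 obstacle mentioned above can be made concrete in a few lines. The following sketch is my own illustration, not a construction from the text: with two fair coin tosses we obtain a value uniform over four outcomes, and the spare outcome must become the failure symbol ?.

```python
from itertools import product

# Sketch (my own illustration): a uniform generator over a 3-element
# solution set in the fair-coin model. Every halting computation has
# probability 2^-t, so exact uniformity over 3 outcomes forces a
# failure outcome "?" (represented here by None).
def gen3(coins):
    j = 2 * next(coins) + next(coins)   # j uniform over {0, 1, 2, 3}
    return j if j < 3 else None         # fail with probability 1/4

# Enumerate all four equally likely coin sequences.
outcomes = [gen3(iter(c)) for c in product((0, 1), repeat=2)]
```

Each solution appears in exactly one of the four computations, so conditional on acceptance the output is exactly uniform, and the failure probability of 1/4 can be driven down rapidly by repeated trials.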
Let R be a relation over Σ, and ε : N → ℝ+ ∪ {0}. A PTM Q is an almost uniform generator for R within tolerance ε iff its output on input x ∈ Σ*, when defined, belongs to the set Σ* ∪ {?} and satisfies

(g1) If R(x) ≠ ∅ then Pr(Q(x) ∈ Σ*) ≥ 1/2.

(g2) There exists a function φ : Σ* → (0, 1) such that, for all y ∈ Σ*,

    y ∉ R(x) ⇒ Pr(Q(x) = y) = 0;
    y ∈ R(x) ⇒ (1 + ε(|x|))⁻¹ φ(x) ≤ Pr(Q(x) = y) ≤ (1 + ε(|x|)) φ(x).

Note that the generator may only output valid solutions or the failure symbol ?, and that it either fails or runs forever when the solution set is empty. Q is a uniform generator for R if ε(n) = 0 for all n ∈ N. In either case, Q is polynomially time-bounded if its runtime is bounded by a polynomial in |x|.¹ If Q is polynomially time-bounded, we may attach a clock to it and force all non-accepting computations to halt within the time bound with output ?. We will assume that all such generators have been adapted so as to have this property.

¹ In many contexts, an appropriate complexity measure for a PTM is its expected runtime. In the case of generation, however, a bound on the expected runtime leaves open the undesirable possibility that certain solutions only ever appear after a very long time. For approximate counters, either measure will do.

By analogy with counters, it should be clear how to define an almost uniform generator Q for R whose tolerance ε > 0 can be preset as part of the input. Specifically, (g2) above now becomes

(g2′) There exists a function φ : Σ* × ℝ+ → (0, 1) such that, for all y ∈ Σ*,

    y ∉ R(x) ⇒ Pr(Q(x, ε) = y) = 0;
    y ∈ R(x) ⇒ (1 + ε)⁻¹ φ(x, ε) ≤ Pr(Q(x, ε) = y) ≤ (1 + ε) φ(x, ε),

where the input (x, ε) ∈ Σ* × ℝ+. Q is then fully-polynomial if its runtime is bounded by a polynomial in |x| and lg ε⁻¹. Note that the definition of f.p. here differs from that for approximate counters in the presence of the logarithm: the errors tolerated in generation are exponentially small. (The value ε⁻¹ may be thought of as being specified on the input tape as an integer in binary notation.) As before, f.p. generators are assumed always to halt within their time bound.

Clearly, if R(x) ≠ ∅ the probability of failure of any of the time-bounded generators defined above may be reduced rapidly by repeated trials. More precisely, lg δ⁻¹ iterations of the generator suffice to ensure that the failure probability does not exceed δ. By the same token, we could replace 1/2 in the definition by 1/p(|x|) for an arbitrary polynomial p. Furthermore, if the construction problem for a relation R is solvable deterministically in polynomial time, then an f.p. almost uniform generator for R may be modified so as never to fail when R(x) is non-empty.

We shall regard the existence of an f.p. generator as an effective criterion for tractability of generation problems. We should point out that the gap between exact and approximate notions here is apparently considerably smaller than in the case of counting. For example, viewed as black boxes, almost uniform and uniform generators will be effectively indistinguishable under any reasonable definition of a statistical test involving polynomially many observations. Moreover, we know of no structures which can be generated almost uniformly in an efficient manner but for which exactly uniform generation is (in some appropriate sense) hard. As we shall see in Section 1.5, such a dichotomy does arise in the case of approximate counting.

For future reference, we also note here a standard randomised definition of tractability for existence problems. The existence problem for a relation R ⊆ Σ* × Σ* belongs to the class RP ("randomised polynomial time") if there exists a PTM M with polynomially bounded runtime which behaves as follows: for all inputs x ∈ Σ*,

    R(x) ≠ ∅ ⇒ Pr(M accepts x) ≥ 1/2;
    R(x) = ∅ ⇒ Pr(M accepts x) = 0.

The probability of M making an error is therefore one-sided, and can be made to decay very rapidly by the standard device of repeated trials. The class RP (also called VPP by Gill [39]) replaces P as the class of tractable decision problems for randomised computation. Note that P ⊆ RP ⊆ NP, and, since problems in RP are intuitively tractable, the second inclusion is widely conjectured to be strict. The first is also thought to be strict for different reasons.
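The "standard device of repeated trials" for one-sided error admits a short sketch. This is my own illustration with hypothetical stand-in machines, not a construction from the text:

```python
import random

# Sketch (my own): amplifying the one-sided error of an RP-style
# machine. If R(x) is non-empty, each run accepts with probability at
# least 1/2, so t independent runs all reject with probability at most
# 2^-t; if R(x) is empty, no run ever accepts, so there are no false
# positives.
def amplified(machine, x, t, rng):
    return any(machine(x, rng) for _ in range(t))

# Stand-in machines (hypothetical): one for a yes-instance, accepting
# with probability exactly 1/2, and one for a no-instance.
def yes_machine(x, rng):
    return rng.random() < 0.5

def no_machine(x, rng):
    return False
```

With t = lg δ⁻¹ trials the error on a yes-instance is at most δ, mirroring the failure-probability reduction for generators noted above.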
1.3 An extended model

Using j successive tosses of its fair coin, a PTM can clearly simulate just those branching probabilities which are of the form i2^{-j}, for i ∈ {0, 1, ..., 2^j}. The inability of a PTM to branch with more general probabilities often makes the design and justification of algorithms needlessly complicated. Indeed, randomised algorithms in the literature are typically expressed in terms of more general branching probabilities involving the ratio of two previously computed integers. Accordingly, in this section we introduce an extension of the PTM model which can easily simulate this kind of branching behaviour. We go on to establish sufficient conditions for counting and generation algorithms in this model to be efficiently implementable by a PTM. This will allow us in the sequel to work with the extended model while retaining our basic definitions of tractability. The proof details in this section are not central to the main development and may safely be skipped.

An oracle coin machine (OCM) is a generalisation of a PTM in which the unbiased coin is replaced by a coin whose bias can be set and varied by the machine itself. The bias of the coin is determined by the contents of a special bias tape: if in a coin-tossing state this tape contains the encoding of a rational number q = r/s, where r, s are coprime natural numbers in binary with r ≤ s and s > 0, then the two possible transitions are chosen with probabilities q and 1 − q respectively; otherwise, the machine halts immediately with distinguished output ?. In all other respects, an OCM is identical to a PTM. The reason for rejecting such a machine as the primary randomised model is that an elementary branching step could not reasonably be implemented by a physical device in constant time.

As is customary, Turing machine algorithms will be expressed in a dialect of Pidgin Algol. The OCM model allows this language to be augmented with the probabilistic branching commands defined below, whose semantics should be obvious. In what follows, q, {q_i} denote rationals in the interval [0, 1], n, m natural numbers, and P, {P_i} programs.

1. (Two-way branching)
       either (with prob q) P_0
       or (with prob 1 − q) P_1

2. (Probabilistic execution)
       with prob q do P

3. (m-way branching)
       either (with prob q_0) P_0
       or (with prob q_1) P_1
       ...
       or (with prob q_{m−1}) P_{m−1}
   where m > 1 and Σ_i q_i = 1.

4. (Weighted selection from an m-set)
       select j ∈ {0, 1, ..., m − 1} with prob q_j / Σ_i q_i
   where m > 0. (If Σ_i q_i = 0, the machine should halt with output ?.)

5. (Uniform selection from an ordered n-set)
       select j ∈ {0, 1, ..., n − 1} u.a.r.
   where n > 0.

Note that the integer elements in 4 and 5 may be viewed as indices, thus allowing selection from general ordered sets.

Proposition 1.3 Each of the above branching operations can be performed by an OCM in runtime bounded by a polynomial in m and the sizes of the binary representations of q, {q_i} and n, plus the maximum runtime of the programs P, {P_i}.

Proof:

1. This is the basic OCM branching operation. The time required, in addition to that for P_0 and P_1, is a single branching step plus the time to write the encoding of q on the bias tape.

2. A special case of 1 in which P_1 is null.
3. By performing appropriate tests and relabelling if necessary, we may assume w.l.o.g. that all the q_i are greater than 0. The OCM first computes the quantities q′_i = q_i/(1 − Σ_{j=0}^{i−1} q_j) for 0 < i < m − 1, and q′_0 = q_0. It then executes the following sequence of two-way branchings:

       either (with prob q′_0) P_0
       or (with prob 1 − q′_0)
           either (with prob q′_1) P_1
           or (with prob 1 − q′_1)
               ...
               either (with prob q′_{m−2}) P_{m−2}
               or (with prob 1 − q′_{m−2}) P_{m−1}

   Clearly, both the arithmetic and the branching can be performed in time bounded by a polynomial in m and the sizes of the representations of the q_i.

4. This is essentially equivalent to 3.

5. Consider the following recursive procedure, which uses only two-way branchings:

       procedure select(a, b);
       begin
           l := b − a + 1;
           if l = 1 then j := a
           else begin
               m := (a + b) div 2;  l′ := m − a + 1;
               either (with prob l′/l) select(a, m)
               or (with prob 1 − l′/l) select(m + 1, b)
           end
       end

   The call select(0, n − 1) achieves the desired effect and leads to at most O(log n) levels of recursion, while the arithmetic is trivial. The total runtime is therefore polynomial in the size of the representation of n. □

The next issue we have to address is the effective implementation of OCM algorithms in the more restricted PTM model. We shall do this with reference to the notions of tractability for counting and generation problems introduced earlier. (Analogous questions for existence problems with respect to a similar extended model have been studied by Lautemann [61].) In the following, we refer to OCMs which serve as counters or generators for a relation R over Σ in the various senses already introduced: the definitions differ from those of the previous section only in the model of computation.

First let us observe that, as far as our most important approximate notions of counting and generation are concerned, the class of tractable problems is robust with respect to the change of model.

Proposition 1.4 If there exists an f.p. OCM randomised almost uniform generator (respectively, an f.p. OCM randomised approximate counter) for R, then there exists an f.p. almost uniform generator (respectively, an f.p. randomised approximate counter) for R.

Proof: Let G be the postulated OCM generator for R, and let p be a polynomial bounding its runtime. We will describe a PTM G′ which on input (x, ε) ∈ Σ* × ℝ+ approximately simulates the computation process of G on input (x, ε/3). Assume w.l.o.g. that ε ≤ 1, and set m = max{p(|x|, lg(3/ε)), 4}: then m bounds the runtime of G on input (x, ε/3), and all non-trivial branching probabilities of G must lie in the range [2^{-m}, 1 − 2^{-m}]. The idea is that, for any such branching probability q, G′ will compute a good approximation to q of the form i_q 2^{-j}, where i_q ∈ N and j = 2m + lg(3/ε) + 1. Specifically, if we set i_q = ⌊q2^j⌋, then the rationals i_q 2^{-j} and 1 − i_q 2^{-j} approximate q and 1 − q respectively within ratio 1 + 2^{-m}ε/3, as may easily be verified.

The simulation proceeds as follows. Deterministic steps of G are simulated directly by G′, and if G halts then G′ halts, necessarily with the same output. To simulate a non-trivial branching step of G of the form

       either (with prob q) P_0
       or (with prob 1 − q) P_1                                (1.1)

G′ tosses its fair coin j times and performs the branching

       either (with prob i_q 2^{-j}) simulate P_0
       or (with prob 1 − i_q 2^{-j}) simulate P_1

The time required to compute i_q and to do the coin tossing is clearly bounded by a polynomial in m and lg ε⁻¹. Since there are at most m steps of G to simulate, the runtime of G′ is bounded by a polynomial in |x| and lg ε⁻¹, as required.

Now suppose that G performs some computation c with probability p(c). By our earlier remarks, the probability that G′ simulates c, yielding the same output, approximates p(c) within ratio

    (1 + 2^{-m}ε/3)^m ≤ 1 + m² 2^{-m}ε/3 ≤ 1 + ε/3,

where we have used the binomial theorem and the bounds m ≥ 4 and ε ≤ 1. Thus the tolerance in the output distribution of G′ is at most (1 + ε/3)² ≤ 1 + ε. Applying the same argument to non-accepting computations, we see that the failure probability of G′ exceeds that of G by at most a factor of 1 + ε/3 ≤ 4/3, and so is bounded above by 2/3. Thus G′, modified to allow a single repeated trial on failure, is an f.p. almost uniform generator for R. The proof for the counter is identical, except that the final modification makes use of the powering operation of Proposition 1.2. □

In very similar fashion, we can show the following.
Proposition 1.5 If there exists a polynomially time-bounded OCM almost uniform generator for R within tolerance ε(n) (respectively, a polynomially time-bounded OCM randomised approximate counter for R within ratio ρ(n)), then there exists a polynomially time-bounded almost uniform generator for R within tolerance ε(n) + 2^{-p(n)} for any polynomial p (respectively, a polynomially time-bounded randomised approximate counter for R within ratio ρ(n)). □

Exact generators differ from the above devices in that their output distribution is prescribed precisely up to the probability of failure, so the straightforward approach of the previous proof is not enough here. Before dealing with this problem, it will be useful to formalise the concept of simulation in the present context.

Let M be an OCM and T(x) its runtime on input x. A PTM M′ is said to efficiently simulate M iff

(s1) For all inputs x for M and all strings y ≠ ?,

    Pr(M′(x) = y) = ψ(x) Pr(M(x) = y),

where ψ(x) ≥ 1/p_1(|x|) for some polynomial p_1.

(s2) For all inputs x for M, the runtime of M′ on x is at most p_2(|x|)T(x) for some polynomial p_2.

(Note that the approximate simulation in the proof of Proposition 1.4 could also be formalised in this way, though ψ(x) would have to be replaced by a small range of values.) Of course, this is a minimal requirement since it demands only that the output distribution of M (conditional on acceptance) be preserved. The simulation is efficient in the sense that the runtime of M′ is not much greater than that of M, and that if the failure probability of M is bounded away from 1 then that of M′ may be similarly bounded using only a polynomial number of repeated trials. The following observation is immediate from the above definition.

Proposition 1.6 Suppose that M is a polynomially time-bounded OCM uniform generator for R, and that the PTM M′ efficiently simulates M. Then M′, suitably powered to reduce its failure probability, is a polynomially time-bounded uniform generator for R. □

Next we derive a simple sufficient condition for efficient simulation by a PTM to be possible. The condition is expressed in terms of the probability distribution over the accepting computations of the OCM.

Theorem 1.7 Let M be an OCM with polynomially bounded runtime. Suppose there exists a polynomial time computable function N : Σ* → N+ with the property that, for all inputs x ∈ Σ* and all accepting computations c of M on input x, the quantity

    N(x) Pr(Comp_M(x) = c)

is integral. Then M can be efficiently simulated by a PTM.

Proof: Let p be a polynomial bounding the runtime of M: since we are only required to simulate accepting computations, we may assume that all non-accepting computations of M halt within this time bound with output ?. Let x be an input for M, and set m = p(|x|). Clearly, all non-trivial branching probabilities of M on this input must lie in the range [2^{-m}, 1 − 2^{-m}]. The required PTM M′ operates in two consecutive phases: a simulation phase and a correction phase. The first of these is very similar to the simulation in the proof of Proposition 1.4, except that M′ must be careful to select its branching probabilities from a small set so that subsequent correction is possible. Specifically, M′ works with the set

    A = { ⌈2^{i/2m}⌉ 2^{-2m} : 1 ≤ i ≤ 4m² }

of cardinality 4m². It is not hard to see [53, Lemma 3.2] that for each real q ∈ [2^{-m}, 1 − 2^{-m}] there exists a_q ∈ A such that

    a_q ≤ q ≤ (1 + 1/m) a_q,                                  (1.2)

so that in particular a_q approximates q within ratio 1 + 1/m.

As before, M′ simulates deterministic steps of M directly. However, each non-trivial branching of the form (1.1) is now simulated by M′ using the branching

    either (with prob a_q) simulate P_0
    or (with prob a_{1−q}) simulate P_1
    or (with prob 1 − a_q − a_{1−q}) halt with output ?

From the definition of A, this can be realised by M′ using only 2m tosses of its fair coin. Assuming no failure, the simulation phase continues until M reaches a halting configuration. If the corresponding output of M is ? then M′ also halts with output ?; otherwise, M′ enters its correction phase, to be described shortly. The total time required for the simulation phase is clearly bounded by a polynomial in |x|.

Upon entry to the correction phase, M′ has simulated some accepting computation c of M. By maintaining two running products, we may assume that M′ has computed both the probability p(c) = Pr(Comp_M(x) = c) (i.e., the product of the branching probabilities it has simulated), and the probability p′(c) = Pr(Comp_{M′}(x) simulates c) (i.e., the product of the branching probabilities actually used). By (1.2), each term in the second product approximates the corresponding term in the first within ratio 1 + 1/m, and we have

    p′(c) ≤ p(c) ≤ (1 + 1/m)^m p′(c) ≤ e p′(c).               (1.3)

Furthermore, M′ is able to compute in polynomial time a natural number N(x) such that N(x)p(c) is integral, as stipulated in the statement of the theorem.

Denote by π(A) the product ∏_{a∈A} a^m. The correction phase of M′ now consists of a single branching of the form

    q := π(A) N(x) 2^k p(c)/p′(c);
    either (with prob q) halt with output y
    or (with prob 1 − q) halt with output ?

where y is the output of the simulated computation c and k is an integer independent of c to be specified below: in particular, k will be chosen so that q ≤ 1. It should be clear from the form of q that the above branching can be achieved by M′ using its fair coin. Furthermore, the probability that M′ simulates c and, after correction, accepts with the same output is

    (π(A) N(x) 2^k) p(c) = ψ(x) p(c),

where ψ(x) is independent of c. For the simulation to be efficient, we require a lower bound on ψ, which we get by choosing k appropriately. Recall that the only constraint on the choice of k is that q ≤ 1. But from (1.3) we have

    q = (p(c)/p′(c)) ψ(x) ≤ e ψ(x),

so we may select k to be maximal such that ψ(x) = π(A) N(x) 2^k ≤ 1/e. It then follows that ψ(x) > 1/2e, so ψ is bounded below as required.

The above discussion indicates that M′ satisfies condition (s1) of the definition. For condition (s2) we need only make the additional observation that the arithmetic of the correction phase can be performed in polynomial time, since the cardinality of A is only 4m². Hence the runtime of M′ is polynomially bounded as required. □

The sufficient condition of Theorem 1.7 may at first sight appear cumbersome to apply in practice. However, most known generation algorithms can readily be seen to satisfy it. One typical case is an OCM uniform generator each of whose accepting computations yields a different solution as output. For a fixed input x, any such computation must therefore be executed with probability r_x/s_x for constants r_x, s_x which are known a posteriori, and we may take N(x) = s_x. A rather less trivial example is provided by the following algorithm, due to Nijenhuis and Wilf [78], for uniformly generating partitions of a positive integer.

Example 1.8 A partition of a positive integer n is a representation π of the form

    n = r_1 + ... + r_l

for integers r_i with r_1 ≥ r_2 ≥ ... ≥ r_l > 0. We shall identify partitions of n with multisets of the form {μ_1·r_1, ..., μ_k·r_k}, where μ_i, r_i ∈ N+, the r_i are distinct and Σ_i μ_i r_i = n. For convenience, n is allowed to be zero, in which case the only partition is the empty set. Define the relation Π which associates with each n ∈ N the set of partitions of n, and consider the recursive procedure Genpart of Figure 1.1. Here π + j·r denotes the operation of adjoining j copies of r to the multiset π. It is assumed in line (2) that the counting function #Π for Π is computed by some other procedure (using, e.g., the recurrence relation implicit in Genpart itself), and that #Π(m) = 0 for m < 0.
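One way to supply the counting function #Π assumed in line (2) is the recurrence implicit in Genpart itself. The following is a sketch of my own (the function name is mine), using the classical identity that the line-(2) weights r · #Π(n − jr) sum to n · #Π(n):

```python
from functools import lru_cache

# Sketch (my own): computing #Pi via the recurrence implicit in
# Genpart. Summing the line-(2) weights q(r, j) = r * #Pi(n - j*r) over
# all pairs (r, j) with j*r <= n gives n * #Pi(n), a classical partition
# identity, so #Pi can be evaluated recursively with memoisation.
@lru_cache(maxsize=None)
def count_partitions(n):
    if n == 0:
        return 1             # the empty partition
    total = sum(r * count_partitions(n - j * r)
                for r in range(1, n + 1)
                for j in range(1, n // r + 1))
    assert total % n == 0    # the recurrence divides exactly
    return total // n
```

Tabulating the values bottom-up in this way takes time polynomial in n, so line (2) of Genpart is efficiently computable.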
We claim that the call Genpart(n, ∅) uniformly generates partitions of n. More generally, we claim, by induction on n, that the call Genpart(n, π) uniformly generates partitions of n and adjoins them to π. The base case n = 0 is immediate from line (1). Suppose then that n > 0 and consider the partition π′ of n given by

    π′ = {μ₁ · r₁, …, μ_k · r_k}.

Examination of the algorithm reveals that π′ may be adjoined to π in precisely μ₁ + ⋯ + μ_k ways as follows: in line (3) select r = rᵢ for some 1 ≤ i ≤ k, and j in the range 1 ≤ j ≤ μᵢ; finally, adjoin to π + j · rᵢ the partition

    {μ₁ · r₁, …, (μᵢ − j) · rᵢ, …, μ_k · r_k}

of n − jrᵢ via the recursive call in line (4). By the inductive hypothesis, the recursive call generates partitions uniformly. In view of lines (2) and (3), the probability that π′ is appended to π is therefore

    Σ_{i=1}^{k} Σ_{j=1}^{μᵢ} (q(rᵢ, j)/Q(n)) · (1/#Π(n − jrᵢ)) = (1/Q(n)) Σ_{i=1}^{k} μᵢrᵢ = n/Q(n),

where Q(n) = Σ q(r′, j′) as in line (3) and depends only on n. Since π′ was chosen arbitrarily, we conclude that the generation is uniform and also that Q(n) = n #Π(n). For a combinatorial derivation of this algorithm, the reader is referred to [78].

    procedure Genpart(n : natural number; π : partition);
    begin
    (1)  if n = 0 then halt with output π
         else begin
    (2)    for (r, j) ∈ [1, n] × [1, n] do q(r, j) := r #Π(n − jr);
    (3)    select (r, j) ∈ [1, n] × [1, n] with prob q(r, j)/Σ q(r′, j′);
    (4)    Genpart(n − jr, π + j · r)
         end
    end

Figure 1.1: Algorithm for uniformly generating partitions

It should be clear that Genpart can be implemented by a polynomially time-bounded OCM. (Note that the evaluation of #Π at the appropriate points can be performed efficiently.) Now let c be any computation of such a machine on input (n, ∅) with n > 0. The form of the selection probabilities in line (3), together with the fact that Q(m) = m #Π(m) for all m, implies that c is executed with probability

    p(c) = ∏_{i=1}^{l} tᵢ #Π(mᵢ₊₁)/(mᵢ #Π(mᵢ)) = (∏_{i=1}^{l} tᵢ/mᵢ) · 1/#Π(n)

for some tᵢ, mᵢ, l ∈ ℕ⁺, with n = m₁ > m₂ > ⋯ > m_l > 0 and m_{l+1} = 0. Since the mᵢ are distinct and at most n, their product divides n!, so the product n! #Π(n) p(c) is always integral. We may therefore take N(n) = n! #Π(n) in Theorem 1.7 and deduce that the algorithm has a PTM implementation. □

To conclude this discussion of models, we further motivate Theorem 1.7 by showing that efficient simulation by a PTM is not possible for arbitrary polynomially time-bounded OCMs. In other words, the extended model is strictly more powerful than the PTM model in terms of its ability to realise a given output distribution exactly in polynomial time.

Theorem 1.9 There exists a polynomially time-bounded OCM which cannot be efficiently simulated by any PTM.

Proof: Consider the OCM M which, on any input x of length n, executes the following:

    select j ∈ {1, …, 2ⁿ} u.a.r.;
    either (with prob 1/j) halt with output ?
    or (with prob 1 − 1/j) halt with output j

By Proposition 1.3, the runtime of M is bounded by a polynomial in n. Furthermore, for each j in the range 1 ≤ j ≤ 2ⁿ, M outputs j with probability (j − 1)(2ⁿj)⁻¹. Note incidentally that M is effective in the sense that its failure probability on inputs of length n is H_{2ⁿ}/2ⁿ, which tends to zero exponentially fast with n. (Here H_k denotes the kth harmonic number.)

Now suppose there exists a PTM M′ which efficiently simulates M. Since M is polynomially time-bounded, M′ must be also, by condition (s2). So let p be a polynomial bounding its runtime, and set m = p(n). Then clearly, for any input x of length n and integer j, we must have

    Pr(M′(x) = j) = i(j) 2⁻ᵐ

for some integer i(j) in the range [0, 2ᵐ]. Given the output distribution of M, condition (s1) of the definition therefore demands that

    i(j) = η(x) (j − 1)/j    (1.4)
for 1 ≤ j ≤ 2ⁿ and some constant η(x) > 0 independent of j. Setting j = 2 in (1.4) implies that η(x) = 2i(2), so η(x) is integral and bounded above by 2^{m+1}. Furthermore, since i(j) itself is integral, (1.4) also implies that j | η(x) for 1 ≤ j ≤ 2ⁿ. However, it is well known that, for infinitely many natural numbers r, lcm(1, …, r) ≥ λeʳ for some constant λ > 0 (see, e.g., [32, page 34]). We must therefore have

    η(x) ≥ lcm(1, …, 2ⁿ) ≥ λe^{2ⁿ}

infinitely often, which is a contradiction since η(x) is bounded above by 2^{m+1} = 2^{p(n)+1}, and p is a polynomial. We must therefore conclude that the integers i(j) do not always exist as claimed, so M′ does not efficiently simulate M. □

    procedure GenR(v : vertex);
    begin
    (1)  z := inst(v);
    (2)  if l_R(z) = 0 then halt with output sol(v)
         else begin
    (3)    select w ∈ Σ^{σ(z)} with prob C(ψ(z, w))/Σ_{w′} C(ψ(z, w′));
    (4)    inst(u) := ψ(z, w); sol(u) := sol(v) · w;
    (5)    GenR(u)
         end
    end

Figure 1.2: Reduction from generation to counting
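The reduction of Figure 1.2 might be rendered as follows for one concrete self-reducible relation, chosen here purely for illustration: binary strings of length n with no two consecutive 1s, with an exact counter playing the role of the oracle C. The relation and all function names are assumptions of this sketch, not from the text.

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def count_completions(n, prev):
    """The exact counter C: the number of length-n binary strings with no
    two consecutive 1s, given that the previous bit was `prev`."""
    if n == 0:
        return 1
    total = count_completions(n - 1, 0)          # a 0 is always allowed
    if prev == 0:
        total += count_completions(n - 1, 1)     # a 1 only after a 0
    return total

def gen_r(n, rng=random):
    """GenR-style uniform generation: extend the partial solution one symbol
    at a time, choosing each symbol with probability proportional to the
    number of solutions in the corresponding subtree (line (3))."""
    sol, prev = "", 0
    for i in range(n):
        remaining = n - i - 1
        bits = [0] if prev == 1 else [0, 1]
        weights = [count_completions(remaining, b) for b in bits]
        b = rng.choices(bits, weights=weights)[0]
        sol += str(b)                            # sol(u) := sol(v) . w
        prev = b
    return sol
```

The branching probabilities telescope, so every valid string is output with probability exactly 1/count_completions(n, 0), i.e. uniformly, just as in the general argument.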
It could be argued that the above counterexample exploits the special properties of a bizarre output distribution, and that a similar result would not necessarily hold if attention were restricted to (say) uniform generators. However, we claim that our weak definition of simulation makes it entirely reasonable to consider general distributions. This is because we are really only using outputs to encode distinct computations of the OCM. According to our definition, the algorithm of the simulating machine need bear no resemblance to that of the original OCM. A more realistic definition of simulation would require that each individual computation step be simulated explicitly. (Note that the simulations of Proposition 1.4 and Theorem 1.7 have this property.) Under this definition, the proof of Theorem 1.9 still holds even when the OCM M is modified so as to have a trivial output distribution.

The question of whether there exists a relation which can be uniformly generated in polynomial time by an OCM but by no PTM is open, and seems difficult. In view of Theorems 1.7 and 1.9, this is not inconceivable, but any efficient OCM generator for such a relation must necessarily induce a fairly complex distribution on its set of accepting computations.
1.4 Counting, generation and self-reducibility

In this section we describe two reductions due to Jerrum, Valiant and Vazirani which together establish a close relationship between the complexities of approximate counting and almost uniform generation for most interesting p-relations. In each case, the additional property which is needed to make the reduction work is self-reducibility. Similar results in a less general setting have been proved by Broder [16].
Most known uniform generation algorithms for combinatorial structures may be viewed as instances of the following generic reduction to the corresponding counting problem. Suppose that the structures have a simple inductive construction, or more formally that they may be described in terms of a self-reducible relation R, and let x be a problem instance with non-empty solution set. Now select a random path from the root of the tree of derivations T_R(x) to a leaf (solution), choosing the next edge at each stage with probability proportional to the number of solutions in the maximal subtree rooted at its lower end: this information may be obtained by evaluating the function #R for appropriate problem instance labels in the tree. It is then easy to see that the distribution over solutions is uniform. A formal specification of the algorithm is provided by the recursive procedure GenR shown in Figure 1.2, in which σ, ψ, sol and inst have the meanings ascribed to them in the earlier definitions and C is an exact counter for R, viewed as an oracle. Elements of R(x) are generated using the call GenR(u), where u represents the root of T_R(x), i.e., inst(u) = x and sol(u) = Λ. Since the depth and vertex degree of the tree are polynomially bounded, the algorithm can be implemented on a polynomially time-bounded OCM equipped with the oracle C for #R. Furthermore, as each computation yields a different output, Theorem 1.7 ensures that a PTM implementation exists.

The above procedure can actually still be made to work when the counting information supplied by the oracle C is slightly inaccurate, specifically if it is within ratio 1 + O(n^{−k_R}), where k_R > 0 is a constant satisfying l_R(x) = O(|x|^{k_R}) (recall condition (sr1) in the definition of self-reducibility). This is done by appending a correction step prior to the output in line (2), much as in the proof of Theorem 1.7. To see how this works, view the uncorrected procedure as an approximate simulation of the ideal procedure with exact oracle: given the constraint on the counting estimates, each branching probability in line (3) is simulated within ratio (1 + c/l_R(ψ(z, w)))² for some fixed constant c ∈ ℕ⁺. Taking the product of such factors along any path from root to leaf, and bearing in mind that the length function l_R decreases strictly along the path, we find that the generation probability p(y) of each solution y ∈ R(x) approximates the ideal value 1/#R(x) within ratio

    ∏_{l=1}^{l_R(x)−1} (1 + c/l)² ≤ ∏_{l=1}^{l_R(x)−1} (1 + 1/l)^{2c} = l_R(x)^{2c}.    (1.5)

Writing m = l_R(x), this implies that p(y) is bounded below by

    p(y) ≥ m^{−2c}/#R(x) ≥ m^{−2c}/((1 + c/m)C(x)) = ĝ(x),    (1.6)

where C(x) is the value returned by the oracle at the root. The correction step is now simply a matter of outputting y with probability ĝ(x)/p(y), which by (1.6) is ≤ 1, and failing otherwise. Each solution is thus generated with the uniform probability ĝ(x), and the probability that no failure occurs is

    #R(x) ĝ(x) ≥ m^{−2c}/(1 + c/m)²,    (1.7)

which may be boosted to 1/2 as usual using only polynomially many repeated trials. Implementation in the PTM model follows as before.

Finally, consider what happens if we use the same procedure when C is a randomised approximate counter within the above ratio. Assuming initially that all values returned by C are within this ratio, the previous argument still holds and solutions are generated with some uniform probability ĝ(x) which is again bounded as in (1.7). Unfortunately, we can say nothing about the output distribution when some value returned by C happens to be very inaccurate: however, by appealing to Proposition 1.2, we may ensure that C behaves badly with such small probability δ that the resulting procedure (with C as oracle) is a f.p. almost uniform generator for R.

To see how to choose δ, note first that the total number of oracle calls is bounded by q(|x|), where q is a polynomial which depends on the depth and vertex degree of the tree. The probability that any returned value falls outside the acceptable range is therefore at most δq(|x|), and the effect of this event on the output probability p(y) of any solution y is thus an additive term ±δq(|x|), i.e.,

    ĝ(x) − δq(|x|) ≤ p(y) ≤ ĝ(x) + δq(|x|).    (1.8)

(Note also that, since the oracle no longer decides with certainty whether a given solution set is non-empty, the process is not now confined to the tree of derivations but may occasionally fall into other parts of the self-reducibility tree. As a result, some non-solution leaves may be reached with non-zero probability: however, since R is a p-relation the condition (x, y) ∈ R may always be checked prior to output.) Now observe that, since solutions are strings of length m over Σ, their total number #R(x) cannot exceed |Σ|ᵐ. Combining this with the bound (1.7) on ĝ(x), we see that by choosing

    δ = (ε/2) m^{−2c}/((1 + c/m)² |Σ|ᵐ q(|x|)),

so that δq(|x|) ≤ (ε/2) ĝ(x), we can write the additive approximation (1.8) of ĝ(x) by p(y) as a relative approximation within ratio 1 + ε (assuming 0 < ε ≤ 1), so that the resulting generator is almost uniform within tolerance ε. Since the powering operation requires time polynomial in lg δ⁻¹, the runtime of the procedure with C as oracle is polynomial in lg ε⁻¹ and |x|, as required. Implementation on a PTM follows directly from Proposition 1.4. If C itself is polynomially time-bounded, then the entire procedure constitutes a f.p. almost uniform generator for R. The above discussion is summarised in the following theorem.

Theorem 1.10 (Jerrum, Valiant and Vazirani) Let R be a self-reducible relation over Σ and k_R a constant such that, for all pairs (x, y) ∈ R, |y| = O(|x|^{k_R}). If there exists a polynomially time-bounded randomised approximate counter for R within ratio 1 + O(n^{−k_R}) then there exists a f.p. almost uniform generator for R. Moreover, if the counter is deterministic then there exists a polynomially time-bounded uniform generator for R. □

Note that the ratio 1 + O(n^{−k_R}) appears to be a threshold value for this simple reduction technique: under a weaker hypothesis, there seems to be no polynomial bound on the product (1.5), indicating that the cumulative errors may then become too large for effective correction to be possible.

As we have already mentioned, numerous generation algorithms may readily be derived from the above reduction to exact counting. Several
simple examples appear in the book by Nijenhuis and Wilf [78], who also present an alternative formulation of the generic reduction in this case. (Note that the algorithm of Example 1.8 can be recast in this form by modifying the underlying relation and biasing the generation probabilities appropriately.) Other examples in the literature include generating spanning trees in a graph [40], terminal strings of given length in an unambiguous context-free grammar [44], unlabelled trees [99] and labelled connected graphs [79]. Typically, the counting information is derived from a recurrence relation which also defines a self-reducibility for the structures in question: occasionally, however, a deeper result is involved (e.g., in the case of spanning trees, Kirchhoff's matrix-tree theorem). We give one more example below, and thereafter in this monograph concentrate on cases where exact counting is apparently not possible.²

Generation algorithms which make use of approximate counting information and the correction technique are much less common: a notable exception is the elegant method of Bach [8] for uniformly generating factored integers in a given interval. The approach can also be used to refine the scheme proposed by Dixon and Wilf [26] for generating unlabelled graphs. (Their algorithm has polynomially bounded expected runtime, but the distribution induced by any polynomial time truncation has very large bias. The refinement guarantees a uniform distribution in polynomial time.) We will not expand on this example here but instead refer the reader to a more elegant attack on the same problem by Wormald [101].

The following little example is chosen partly because it illustrates the reduction to exact counting and partly because of its superficial resemblance to the problem of generating labelled graphs with given vertex degrees, which we shall discuss at length in Chapter 4.
Example 1.11 Consider the relation TREES which associates with each sequence g = (gᵢ)_{i∈I} of non-negative integers the set of labelled trees with vertex set I in which vertex i has degree gᵢ. Clearly, if TREES(g) is non-empty then gᵢ > 0 for all i ∈ I and Σ_{i∈I} gᵢ = 2|I| − 2: we call g valid in this case. If g is valid then g_{i₀} = 1 for at least one i₀ ∈ I, which suggests a natural self-reducibility for the relation TREES. By considering all possible neighbours of i₀, we may write

    TREES(g) = ⋃_{j∈I∖{i₀}} { T ∪ {(i₀, j)} : T ∈ TREES(g⁽ʲ⁾) },    (1.9)

where g⁽ʲ⁾ is the degree sequence with vertex set I ∖ {i₀} satisfying

    g⁽ʲ⁾ᵢ = gᵢ − 1, if i = j;
    g⁽ʲ⁾ᵢ = gᵢ,     otherwise,

and we have identified trees with their edge sets. To generate trees on g uniformly, it is therefore sufficient by Theorem 1.10 to find some efficient means of counting them. This is provided by the following formula, in which g is assumed valid:

    #TREES(g) = (|I| − 2)! / ∏_{i∈I} (gᵢ − 1)!.

The formula is easily verified by induction on |I| using the recurrence implicit in (1.9) (see, e.g., [64, Problem 4.1]). □

The converse reduction from counting to generation, while perhaps less familiar, is equally intuitive. Clearly, a counting procedure based on random generators cannot be deterministic, so we aim for an efficient randomised approximate counter. Again we make the assumption that R is self-reducible and work with the tree of derivations for a problem instance x with R(x) ≠ ∅. The task at hand is therefore to estimate the number of leaves in a rooted tree T given an almost uniform generator for the leaves in any maximal subtree. For such a subtree S, let L(S) denote the number of leaves in S. The idea is to generate leaves of T and compute the proportion s of this sample which belongs to some subtree S rooted at a child of the root of T. Assuming that s is a reasonable estimate of the ratio L(S)/L(T), we get an approximation for L(T) by recursively estimating L(S) and multiplying the result by s⁻¹.

The quality of the estimate s at each stage depends on the size of the sample and the way in which the subtree S is selected. Clearly, higher accuracy will be achieved if the ratio L(S)/L(T) is large, so we adopt the policy of choosing S so as to maximise the estimate s for the given sample. The following straightforward piece of statistics tells us how large a sample is required in order to achieve a specified accuracy.

Proposition 1.12 With the above notation, suppose that T has maximum degree d and elements of the sample are generated almost uniformly within tolerance e ∈ (0, 1). Then for any δ ∈ (0, 1), the sample size t required to ensure that

    Pr(s approximates L(S)/L(T) within ratio 1 + 5e) ≥ 1 − δ

is at most 9d³(eδ)⁻¹. □

²Uniform generation can sometimes be streamlined if the structures can be placed in 1-1-correspondence with a set of integers, typically {1, …, N}, in such a way that integers can be efficiently mapped to structures. (This operation is called unranking.) Then it is enough to generate an integer u.a.r. and output the corresponding structure. As a non-trivial example, [21] develops this approach for spanning trees in a graph.
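Returning briefly to Example 1.11, the counting formula there can be checked directly on small instances. The following sketch is an illustrative assumption, not from the text: it compares the closed form against brute-force enumeration of labelled trees with a prescribed degree sequence.

```python
from itertools import combinations
from math import factorial

def count_trees_formula(g):
    """#TREES(g) = (|I| - 2)! / prod (g_i - 1)!  for a valid degree sequence g."""
    n = len(g)
    assert all(gi > 0 for gi in g) and sum(g) == 2 * n - 2
    result = factorial(n - 2)
    for gi in g:
        result //= factorial(gi - 1)
    return result

def count_trees_brute(g):
    """Count labelled trees on {0,...,n-1} with degree sequence g by
    enumerating all (n-1)-edge subsets of K_n and testing connectivity."""
    n = len(g)
    all_edges = list(combinations(range(n), 2))
    count = 0
    for edges in combinations(all_edges, n - 1):
        deg = [0] * n
        adj = {u: [] for u in range(n)}
        for (u, v) in edges:
            deg[u] += 1; deg[v] += 1
            adj[u].append(v); adj[v].append(u)
        if deg != list(g):
            continue
        # n - 1 edges and connected together imply a tree
        seen, frontier = {0}, [0]
        while frontier:
            u = frontier.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w); frontier.append(w)
        if len(seen) == n:
            count += 1
    return count
```

For instance, the star on five vertices admits exactly one tree, while the degree sequence (2, 2, 2, 1, 1) admits 3! = 6, matching the formula.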
A minor problem arises from the fact that the generator may fail to supply sufficiently many meaningful outputs within a reasonable number of trials. However, if failure occurs in any one trial with probability no more than 1/2, it is easy to see that

Proposition 1.13 For any t, the probability that 3t trials fail to yield at least t outputs distinct from ? is at most 3/t. □

We therefore perform 3t trials at each level of recursion and examine the outputs of the first t successful ones: in the unlikely event that insufficient successful outputs are generated, we abort the counting procedure with default value 0.

Suppose now that we wish to approximate L(T) within ratio 1 + ε using the above method. Let the depth of T be m and assume for the moment that the procedure is not aborted because of generator failure. Setting e = ε/10m and δ = 1/8m in Proposition 1.12, we can fix t so that at each level of the tree the estimate s is accurate within ratio 1 + ε/2m with probability at least 1 − 1/8m. Since there are at most m levels, with probability (1 − 1/8m)ᵐ ≥ 7/8 the product of the factors s⁻¹ therefore approximates L(T) within ratio (1 + ε/2m)ᵐ ≤ 1 + ε (assuming ε ≤ 1). Finally, to dispense with generator failure, we assume further that t ≥ 24m and deduce from Proposition 1.13 that the procedure runs to completion with probability at least (1 − 1/8m)ᵐ ≥ 7/8. The procedure therefore approximates L(T) within ratio 1 + ε with probability at least (7/8)² ≥ 3/4, as required. Given an oracle which generates leaves almost uniformly within tolerance ε/10m, the runtime of the procedure is bounded by 3mt, which in turn by Proposition 1.12 is bounded by a polynomial in d, m and ε⁻¹.

Returning to the case where T = T_R(x) as above, note first that the depth m and degree d of the tree are both polynomially bounded in |x|. The generation of leaves in any maximal subtree may be accomplished by an almost uniform generator for R within tolerance ε/10m. If the generator is f.p., its runtime on each trial will then be bounded by a polynomial in |x| and ε⁻¹, and thus the runtime of the entire procedure will be similarly bounded. The algorithm for estimating #R(x) is given in detail in Figure 1.3, in which it is assumed that Q is an almost uniform generator for R.

    Π := 1;
    while l_R(x) > 0 do begin
        make 3t calls of the form Q(x, ε/10m);
        if at least t of these yield a solution
            then let Y = {y₁, …, y_t} be the first t solutions
            else halt with output 0;
        for w ∈ Σ^{σ(x)} do s(w) := |{y ∈ Y : w is a prefix of y}|/|Y|;
        let w be such that s(w) = max_{w′} s(w′);
        Π := Π/s(w); x := ψ(x, w)
    end;
    halt with output Π

Figure 1.3: Procedure for estimating #R(x)

A partial converse to Theorem 1.10 may now be stated as follows.

Theorem 1.14 (Jerrum, Valiant and Vazirani) Let R be a self-reducible relation over Σ. If there exists a f.p. almost uniform generator for R then there exists a f.p. randomised approximate counter for R. □

Combining Theorems 1.10 and 1.14, we arrive at

Corollary 1.15 For a self-reducible relation R over Σ, the following are equivalent:

(i) There exists a f.p. almost uniform generator for R.

(ii) There exists a f.p. randomised approximate counter for R.

(iii) There exists a polynomially time-bounded randomised approximate counter for R within ratio 1 + O(n^{−k_R}), where k_R is a constant as above. □

The implication (iii) ⇒ (ii) in Corollary 1.15 indicates that the pair of reductions presented here actually yields a method for bootstrapping a polynomially time-bounded randomised approximate counter for R within the threshold ratio 1 + O(n^{−k_R}) to one within ratio 1 + n^{−β} for any desired real β. In view of the hypothesis of Theorem 1.10, however, bootstrapping of less accurate counters is apparently not possible using these techniques.
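The estimation procedure of Figure 1.3 can be sketched on a toy instance. In the sketch below (an illustrative assumption, not from the text) the tree of derivations is the complete binary tree of depth n, so L(T) = 2ⁿ, and the generator samples leaves exactly uniformly; at each level we estimate the proportion s(w) of the sample extending the current prefix by each bit w, descend into the child maximising s, and accumulate the product of the factors s⁻¹.

```python
import random

def estimate_leaf_count(n, trials_per_level=2000, rng=random):
    """Estimate the number of leaves (2^n) of the complete binary tree of
    depth n by the sampling scheme of Figure 1.3, using an exactly uniform
    leaf generator for each subtree."""
    prefix = ""
    product = 1.0
    while len(prefix) < n:
        # sample uniform leaves of the subtree rooted at `prefix`
        sample = ["".join(rng.choice("01") for _ in range(n - len(prefix)))
                  for _ in range(trials_per_level)]
        # empirical proportion of the sample in each child subtree
        s = {w: sum(1 for y in sample if y.startswith(w)) / len(sample)
             for w in "01"}
        w = max(s, key=s.get)      # choose the child maximising the estimate
        product /= s[w]            # multiply the running estimate by s^{-1}
        prefix += w
    return product                 # estimate of L(T) = 2^n
```

Since the chosen s is always the larger of two proportions summing to 1, each factor s⁻¹ is at most 2, so the estimate never exceeds the true value 2ⁿ; with a few thousand samples per level it lands close to it.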
Unfortunately, the reductions as they stand do not seem to yield a corresponding improvement mechanism for generators. This stems from the fact that the second reduction, unlike the first, demands of its oracle at each level an accuracy which is not simply a function of the local problem instance, but which depends on the global quantities ε and m: there is no analogue of the correction process employed in generation which would allow larger errors to be handled. Consequently, we have to resort to a generator whose tolerance may be varied.³ For future reference, however, we state a weaker version of Theorem 1.14 which presupposes only the existence of a generator with fixed tolerance.

Theorem 1.16 For R and k_R as above, if there exists a polynomially time-bounded almost uniform generator for R within tolerance O(n^{−k_R}) then there exists a polynomially time-bounded randomised approximate counter for R within ratio O(n^β) for some constant β.

Proof: Immediate from (1.5) and the proof of Theorem 1.14, setting e = c′|x|^{−k_R} for some constant c′. □

We shall have a lot more to say about bootstrapping both counters and generators in Chapter 4, where we present a significantly improved version of Theorem 1.10.

1.5 An interesting class of relations

In Section 1.2 it was argued that the counting problem for a p-relation R may be regarded as effectively tractable if there exists a f.p. randomised approximate counter for R. Similarly, we may view the existence of a f.p. almost uniform generator for R as evidence for the tractability of its uniform generation problem. For the purposes of this discussion, let us call a relation "approximable" if at least one of these problems for it is tractable. In most cases the relation will be self-reducible, so in the light of Section 1.4 if one of the problems is tractable then so is the other.

In the remainder of this monograph, we will make some contribution towards classifying natural relations according to the criterion of approximability, concentrating primarily on techniques for proving positive results. Before embarking on this investigation, however, we should be a little clearer about the kind of relations for which this question is interesting.

Obviously, we are not particularly interested in relations, such as those mentioned in the previous section, whose counting problem is known to be solvable exactly in polynomial time. If the relation is self-reducible, this also implies the existence of a polynomially time-bounded uniform generator for it and the status of both problems is therefore fully resolved. Ideally, we would like some positive evidence of intractability for the exact counting problem, which usually means #P-completeness. (Note that there are difficult counting problems for which #P-completeness is not an appropriate notion of hardness. This is the case for most of the classical graphical enumeration problems of the form: given an integer n, compute the number of graphs of size n having a certain property [42]. We refer to these as problems of unary type since they have only one input of each size. A more natural hardness concept here is that of #P₁-completeness [97], which unfortunately has proved rather hard to work with in practice.)

On the other hand, the question of approximability is also trivial if the existence problem for the relation in question is suspected to be hard, in particular if it is NP-complete. To see this, observe that a polynomially time-bounded almost uniform generator for R within any tolerance ε ≥ 0 immediately yields an efficient solution to the corresponding existence problem in the RP sense defined earlier. If this latter problem were known to be NP-complete, then we could deduce that RP = NP, i.e., that the existence problem for any p-relation is tractable: this is widely held to be almost as unlikely as the assertion that P = NP. The existence of a polynomially time-bounded randomised approximate counter for such a relation within any ratio ρ ≥ 1 would also imply that RP = NP. By Theorem 1.10, this is immediate if the relation is self-reducible. If not, we could still use the counter to construct a polynomial time randomised algorithm with bounded two-sided error probability for an NP-complete existence problem associated with a self-reducible relation, such as SAT.⁴ The error may, however, be made one-sided in polynomial time by exploiting the self-reducibility of SAT: before outputting a "yes" answer, it is first verified by explicit construction of a solution. This would imply that the existence problem for SAT is in RP.

In view of these points, we shall focus on p-relations whose counting problem appears to be hard and whose existence problem is tractable. We have already mentioned that many naturally occurring relations fall into this category: as well as perfect matchings in a graph and satisfying assignments of a DNF formula, examples include matchings (independent sets of edges of any size) in a graph [97], directed trees and s-t paths in a directed graph [97], and connected spanning subgraphs of a graph [81]. For all these relations, the existence problem lies in P and the counting problem is #P-complete.

Remark: We should mention by means of an example a further case in which uniform generation may become trivial. Consider the problem of uniformly generating labelled connected graphs with a given number of vertices. The following naïve procedure provides an efficient solution: simply select a random graph of the required size and output it if it is connected, failing otherwise. The method works since almost every graph of each size is connected (see, e.g., [42]). (A non-trivial "exact" solution to this problem, based on the reduction to counting, also exists and can be found in [79].) □

Having identified a potentially interesting class of relations, we should check that the question of approximability is a genuine issue for the class: in other words, we should be able to exhibit a natural relation in it which is approximable and another for which even approximate counting and generation are hard. A candidate for the first of these criteria has already been mentioned, namely the DNF satisfiability relation. That this is approximable is shown by Karp and Luby [56], who give a f.p. randomised approximate counter for it. In fact, Jerrum, Valiant and Vazirani [53] prove the slightly stronger result that the relation has a polynomially time-bounded uniform generator. We do not describe these algorithms in detail here; the reader will find an alternative approach to this problem in Example 4.4 of Chapter 4.

Conversely, it is possible to find natural relations with easy existence and construction problems which are nevertheless hard to approximate. This complexity gap presumably arises from an implicit requirement in counting and generation that all solutions be "equally accessible". As an example, consider the relation INDSETS which associates with an undirected graph G all independent sets of vertices of G. Note that the construction problem is trivial here since the empty set is always independent. We now proceed to show

Theorem 1.17 If there exists a polynomially time-bounded uniform generator for INDSETS then RP = NP.

Proof: Let IS be the relation which associates with each graph G = (V, E) and positive integer k all independent sets of G of size at least k. The existence problem for IS is a standard NP-complete problem [38]: we present a polynomial time reduction from it to the uniform generation problem for INDSETS.

The reduction proceeds by replacing each vertex v of G by a cluster C_v of r independent vertices, where r will be specified below. For each edge (u, v) of G, we add the set of edges {(u′, v′) : u′ ∈ C_u, v′ ∈ C_v}, and denote the resulting graph G′. Now let S be an independent set in G. An independent set S′ in G′ is a witness for S iff it satisfies

    {v ∈ V : C_v ∩ S′ ≠ ∅} = S.

Note that every independent set in G′ is a witness for a unique independent set in G. The proof hinges on the fact that, if r is chosen suitably, large independent sets have many more witnesses than small ones. To see this, partition the independent sets of G′ into two classes, Large(G, k) and Small(G, k), according to whether they are witnesses for sets in G of size ≥ k or < k respectively. Clearly, |Large(G, k)| > 0 iff G contains an independent set of size k. So assume now that this holds, and for 0 ≤ l ≤ k let N_l denote the number of witnesses for each independent set of size l in G. Then we have, from the construction of G′,

    N_{l+1} ≥ 2^{r−1} N_l   for 0 ≤ l < k.

Let n be the number of vertices in G. Then G can contain at most (n choose l) independent sets of size l, so

    |Small(G, k)| ≤ Σ_{l=0}^{k−1} (n choose l) N_l ≤ 2ⁿ N_{k−1},

since N_l increases with l. Combining the above two inequalities gives

    |Large(G, k)| ≥ N_k ≥ 2^{r−1} N_{k−1} ≥ 2^{r−1−n} |Small(G, k)|.

Hence by choosing r = n + 1 we may ensure that at least half of the independent sets in G′ are witnesses for independent sets in G of size k or more. It follows that, if independent sets in G′ can be generated uniformly in polynomial time, the existence problem for IS lies in RP, as required. □

³In [53] it is informally claimed that a generator satisfying the hypothesis of Theorem 1.16 can be bootstrapped to a f.p. almost uniform generator. This turns out to be true, as shown in Theorem 4.8, but the proof apparently does not follow from the results of [53] and relies on the machinery developed in Chapter 4.

⁴More precisely, we could locate the existence problem for SAT in the class BPP [39]. (SAT is the relation which associates Boolean formulae in conjunctive normal form with their satisfying assignments.)
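The amplification in the proof can be checked by brute force on a small instance. The sketch below is an illustrative assumption, not from the text: it blows up the path a-b-c (so n = 3) with r = n + 1 = 4 and k = 2, and classifies every independent set of G′ as a witness for a large or a small independent set of G.

```python
from itertools import combinations

def blow_up(vertices, edges, r):
    """Build G' from G: replace each vertex v by a cluster of r independent
    vertices and join two clusters completely whenever (u, v) is an edge."""
    clusters = {v: [(v, i) for i in range(r)] for v in vertices}
    new_edges = {frozenset((a, b))
                 for (u, v) in edges
                 for a in clusters[u] for b in clusters[v]}
    return clusters, new_edges

def independent_sets(vertices, edges):
    """Brute-force enumeration of all independent sets (small graphs only)."""
    vs = list(vertices)
    for bits in range(1 << len(vs)):
        s = {vs[i] for i in range(len(vs)) if bits >> i & 1}
        if all(frozenset((a, b)) not in edges
               for a, b in combinations(s, 2)):
            yield s

# G = the path a - b - c; its only independent set of size >= 2 is {a, c}.
V = "abc"
E = {frozenset(p) for p in [("a", "b"), ("b", "c")]}
n, k, r = len(V), 2, len(V) + 1            # r = n + 1 as in the proof
clusters, E2 = blow_up(V, [("a", "b"), ("b", "c")], r)
V2 = [w for c in clusters.values() for w in c]

large = small = 0
for s in independent_sets(V2, E2):
    witnessed = {v for v in V if any(w in s for w in clusters[v])}
    if len(witnessed) >= k:
        large += 1
    else:
        small += 1
```

Here the witnesses of {a, c} number (2⁴ − 1)² = 225, while all small witnesses together number only 46, so more than half of the independent sets of G′ already certify an independent set of size k in G.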
40
1.5 AN INTERESTING CLASS OF RELATIONS
1.5 AN INTERESTING CLASS OF RELATIONS
Closer examination of the above proof reveals that a polynomially time-bounded almost uniform generator for INDSETS with very large tolerance (any polynomial function of the input size) would also be sufficient. Furthermore, approximate counting even within a large ratio is easily seen to be hard for the same reason.

The above technique of boosting the proportion of solutions which are hard to detect was initially used by Jerrum, Valiant and Vazirani [53] to prove analogous hardness results for the relation of cycles in a directed graph; these transfer immediately to undirected graphs and s-t paths (i.e., simple paths between a specified pair of vertices). By parsimonious (or almost parsimonious) reductions from INDSETS, the following relations are also seen to be hard to approximate: all vertex covers and all cliques in a graph, satisfying assignments of a monotone Boolean formula in 2-CNF, and 3-colourings of dense⁵ graphs. The proof of Theorem 1.17, with minor modifications, shows that the same holds for maximal (i.e., non-extendable) independent sets, and thus also for minimal vertex covers and maximal cliques [97].

⁵The minimum vertex degree must be at least $cn$, where $n$ is the number of vertices and $0 < c < 1/2$ is fixed. Note that the corresponding existence problem is in P, and that the counting problem is solvable exactly in polynomial time if $c > 1/2$ [31].

Remark: The proof of Theorem 1.17 illustrates a feature of approximate counting of which the reader should be aware. We may view the above reduction as a proof that the exact counting problem for INDSETS is #P-hard: for, with a suitable choice of $r$, by evaluating #INDSETS for the transformed graph $G'$ and looking only at the most significant figures in the output, #IS$(G,k)$ can be computed exactly. The latter counting problem is itself trivially #P-complete by parsimonious reduction from SAT. Because only the most significant figures are relevant, the reduction also holds for approximate counting. Similarly robust reductions exist for the other relations listed above. The reductions employed in many proofs of #P-hardness, however, appeal to less trivial arithmetic such as polynomial interpolation (see, e.g., [97]) and consequently say nothing about the complexity of approximate counting. □

We conclude our introductory material by quoting some results on the complexity of approximate counting and generation problems for p-relations as a class. From our earlier discussion, it might be expected that these problems are in general intermediate in difficulty between exact counting and existence; it turns out that this can be formalised as follows. Recall that NP is the class of existence problems for p-relations and lies within the first level of the polynomial time hierarchy [90]. Furthermore, the corresponding class #P of exact counting problems is not known to lie within any fixed level of the hierarchy (generalised to Turing machines with output). The following result, which is an application of work by Sipser [89] on universal hash functions, indicates that the counting problem for any p-relation may be solved approximately by a machine in the second level of the hierarchy, generalised to include randomisation.

Theorem 1.18 (Stockmeyer [91]) Let $R \subseteq \Sigma^* \times \Sigma^*$ be a p-relation. Then there exists a PTM equipped with an NP oracle which is a polynomially time-bounded randomised approximate counter for $R$. Furthermore, there exists a deterministic Turing machine equipped with a $\Sigma_2^p$ oracle which is a polynomially time-bounded approximate counter for $R$. □

As noted in [53], this immediately yields a similar characterisation for generation problems.

Corollary 1.19 For any p-relation $R \subseteq \Sigma^* \times \Sigma^*$, there exists a PTM with an NP oracle which is a polynomially time-bounded almost uniform generator for $R$, and a PTM with a $\Sigma_2^p$ oracle which is a polynomially time-bounded uniform generator for $R$.

Proof: Observe that the generic p-relation NCOMP defined by

\[ \mathrm{NCOMP}(M, x, w) = \{ y : M \text{ encodes an NP-machine and } wy \text{ an accepting computation of } M \text{ on input } x \} \]

is self-reducible, and apply Theorem 1.18 and the reduction of Theorem 1.10. □

Note that the class of (exactly) uniform generation problems for p-relations lies within the third level of the hierarchy. By contrast, recall that the corresponding class #P of exact counting problems is not known to lie within any fixed level of the hierarchy.
Chapter 2

Markov chains and rapid mixing

This monograph is primarily concerned with positive results about counting and generating structures described by natural p-relations. With the exception of certain relations of unary type, where there is a considerable body of work on analytic (i.e., closed form) counting estimates, such results are rare in the kinds of interesting cases identified in Section 1.5. Ideally, we would like to have available some algorithmic paradigms with reasonably wide applicability. To this end, we investigate here a very general approach to generation problems. The approach is based on simulating a simple dynamic stochastic process, namely a finite Markov chain, which moves around a space containing the structures of interest and converges to some desired distribution on them. Since the relations we consider will almost always be self-reducible, the results we obtain carry over directly to the corresponding counting problems by virtue of the observations of Section 1.4.

Techniques of this kind have been in use for some time, particularly in the physical sciences; however, until recently the mathematical tools available for analysing the non-asymptotic behaviour of Markov chains were apparently not sensitive enough to provide useful performance guarantees for the resulting algorithms. In this chapter, a method of analysis for Markov chains is developed which is related to recent work in graph theory on the connection between the subdominant eigenvalues of a graph and its expansion properties. As well as being of interest in its own right, this will enable us in later chapters to demonstrate for the first time the existence of efficient approximation algorithms for a number of important counting and generation problems. It will also afford a deeper insight into the nature of the approximations themselves.

2.1 The Markov chain approach to generation problems

Consider the following dynamic stochastic technique for generating objects uniformly at random from a finite set $S$, assumed very large. Suppose it is possible to construct a Markov chain whose states may be identified with the elements of $S$. Suppose also that the chain is ergodic with uniform stationary distribution; in other words, if the chain is allowed to evolve for $t$ steps, the distribution of the final state approaches the uniform distribution as $t \to \infty$, irrespective of the initial state. Then an almost uniform generation procedure for $S$ is obtained by simulating the Markov chain for sufficiently many steps, from some arbitrary initial state, and outputting the element of $S$ corresponding to the final state.

In the following chapters we shall exploit this approach to obtain efficient almost uniform generators for several natural relations as discussed in Chapter 1. More specifically, if $R$ is a relation and $x$ a problem instance with non-empty solution set $R(x)$, the idea is to construct a Markov chain MC($x$) some or all of whose states correspond to the structures in $R(x)$. Transitions in the chain will correspond to simple local perturbations of the structures themselves. This technique was suggested by Andrei Broder [16] as a means of generating perfect matchings in dense bipartite graphs: in Chapter 3 we will look at this problem in detail and show for the first time that the technique works. The same approach can be used to sample the structures from more general probability distributions by adjusting the stationary distribution of the Markov chain accordingly. Problems of this nature arise frequently in Monte Carlo investigations of physical systems [12], where the states correspond to configurations of the system and appropriate functions of the stationary process to physical constants or parameters. They also lie at the heart of stochastic optimisation methods, such as simulated annealing [60], in which low cost configurations are associated with large weights in the stationary distribution. We shall discuss these applications in more detail in Chapter 3.

Assuming that the local structure of the Markov chain is simple, so that individual transitions can be simulated at small cost, the efficiency of the above procedure depends crucially on the rate of convergence of the chain. Specifically, the chain MC($x$) should be rapidly mixing in the sense that it is close to stationarity after only $p(|x|)$ simulation steps, for some polynomial $p$. Since the number of states is in general exponentially large, rapid mixing demands, informally, that the chain should lose its memory after visiting only a small fraction of its state space. In applying this technique, we are therefore faced with the problem of deriving a priori bounds on the number of simulation steps required to achieve a distribution which is sufficiently close to the limit for the purpose at hand.
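To make the procedure concrete, here is a minimal sketch (our own toy example, not from the text): a lazy random walk on the $n$-cycle, whose stationary distribution is uniform, simulated for $t$ steps from a fixed initial state; the final state serves as an approximate uniform sample.

```python
import random
from collections import Counter

def lazy_cycle_step(state, n, rng):
    """One transition: hold with probability 1/2, else move to a uniform neighbour."""
    u = rng.random()
    if u < 0.5:
        return state
    return (state + 1) % n if u < 0.75 else (state - 1) % n

def approximate_sample(n, t, rng):
    """Simulate t steps of the chain from state 0 and output the final state."""
    state = 0
    for _ in range(t):
        state = lazy_cycle_step(state, n, rng)
    return state

rng = random.Random(42)
n, t = 8, 200                  # t is generous for so small a chain
draws = Counter(approximate_sample(n, t, rng) for _ in range(4000))
print(sorted(draws.items()))   # each state appears roughly 4000/8 = 500 times
```

For the chains MC($x$) of interest the state space is exponentially large, so the substance of this chapter is precisely how large $t$ must be; here $t$ was simply chosen much larger than necessary for an 8-state chain.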
In the traditional theory of Markov chains, the question of rate of convergence has received relatively little attention. In recent years, however, it has emerged as an important research area, and a number of analytic techniques have been explored by various authors. These include Fourier analysis and group representation theory [23], and stochastic approaches such as coupling [1] and stopping times [3]. Although these methods are elegant and yield tight bounds for simple chains which possess a highly symmetrical structure, the analysis involved appears to become extremely difficult in more complex examples. In this monograph we develop a simple characterisation of rapid mixing in terms of a structural property of the underlying graph of the chain. The characterisation is based on the classical relationship between rate of convergence and the eigenvalues of the transition matrix. As we shall see later, it turns out to be a powerful tool for obtaining useful analytic bounds for a number of interesting chains.

In our development, we will assume that the reader has a passing familiarity with the elementary theory of finite Markov chains in discrete time: a detailed introduction can be found in, for example, [34, Chapter XV]. In the remainder of this section, we establish some terminology and notation and quote without proof some basic facts.

Let the sequence of random variables $(X_t)_{t=0}^{\infty}$ be a time-homogeneous Markov chain on a finite state space $[N] = \{0, 1, \ldots, N-1\}$, $N \ge 1$, with transition matrix $P = (p_{ij})_{i,j=0}^{N-1}$. (Unless otherwise stated, all Markov chains in this monograph will be assumed to be of this form.) Thus for any ordered pair $i, j$ of states the quantity $p_{ij} = \Pr(X_{t+1} = j \mid X_t = i)$ is the transition probability from state $i$ to state $j$ and is independent of $t$. The matrix $P$ is non-negative and stochastic, i.e., its row sums are all unity. For $s \in \mathbb{N}$, the $s$-step transition matrix is simply the power $P^s = (p_{ij}^{(s)})$; thus $p_{ij}^{(s)} = \Pr(X_{t+s} = j \mid X_t = i)$, independent of $t$. We denote the distribution of $X_t$ by the row vector $\pi^{(t)} = (\pi_i^{(t)})_{i=0}^{N-1}$, so that $\pi_i^{(t)} = \Pr(X_t = i)$. Here $\pi^{(0)}$ denotes the initial distribution, and $\pi^{(t)} = \pi^{(0)} P^t$ for all $t \in \mathbb{N}$. Usually we will have $\pi_i^{(0)} = 1$ for some $i \in [N]$ (and 0 elsewhere); $i$ is then called the initial state. The chain is ergodic if there exists a distribution $\pi = (\pi_i) > 0$ over $[N]$ such that

\[ \lim_{s \to \infty} p_{ij}^{(s)} = \pi_j \qquad \forall i, j \in [N] . \]

In this case, we have that $\pi^{(t)} = \pi^{(0)} P^t \to \pi$ pointwise as $t \to \infty$, and the limit is independent of $\pi^{(0)}$. The stationary distribution $\pi$ is the unique vector satisfying $\pi P = \pi$, $\sum_i \pi_i = 1$, i.e., the unique normalised left eigenvector of $P$ with eigenvalue 1. Necessary and sufficient conditions for ergodicity are that the chain should be (a) irreducible, i.e., for each pair of states $i, j \in [N]$, there is an $s \in \mathbb{N}$ such that $p_{ij}^{(s)} > 0$ ($j$ can be reached from $i$ in a finite number of steps); and (b) aperiodic, i.e., $\gcd\{ s : p_{ij}^{(s)} > 0 \} = 1$ for all $i, j \in [N]$.

Consider now the problem discussed earlier of sampling elements of the state space, assumed very large, according to the stationary distribution $\pi$. The desired distribution can be realised by picking an arbitrary initial state and simulating the transitions of the Markov chain according to the probabilities $p_{ij}$, which we assume can be computed locally as required. As the number $t$ of simulation steps increases, the distribution of the random variable $X_t$ will approach $\pi$. In order to investigate the rate of approach to stationarity, we define the following time-dependent measure of deviation from the limit: for any non-empty subset $U \subseteq [N]$, the relative pointwise distance (r.p.d.) over $U$ after $t$ steps is given by

\[ \Delta_U(t) = \max_{i,j \in U} \frac{\bigl| p_{ij}^{(t)} - \pi_j \bigr|}{\pi_j} . \]

Thus $\Delta_U(t)$ is just the largest relative difference between $\pi^{(t)}$ and $\pi$ at any state $j \in U$, maximised over all possible initial states $i \in U$.¹ The inclusion of the parameter $U$ merely allows us to specify that certain portions of the state space are not relevant in the sampling process, as will prove helpful later. In the case that $U = [N]$, we shall omit the subscript and write simply $\Delta$ in place of $\Delta_{[N]}$. The aim of this chapter is to obtain useful bounds on $\Delta_U$ as a function of $t$. In particular, we want to investigate conditions under which the chains MC($x$) are rapidly mixing in the sense that $\Delta(t)$ becomes very small in polynomial time.

An ergodic Markov chain is said to be (time-)reversible iff either (and hence both) of the following equivalent conditions holds:

(tr1) For all $i, j \in [N]$, $p_{ij} \pi_i = p_{ji} \pi_j$.

(tr2) The matrix $D^{1/2} P D^{-1/2}$ is symmetric, where $D^{1/2}$ is the diagonal matrix $\mathrm{diag}(\pi_0^{1/2}, \ldots, \pi_{N-1}^{1/2})$ and $D^{-1/2}$ is its inverse.

¹We have chosen this measure by analogy with our definition of almost uniform generation in Chapter 1. Other measures, such as the variation distance, are also possible. For most interesting chains, this choice makes no essential difference to the rapid mixing criterion we are about to develop. We shall return to this point towards the end of this chapter.
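These definitions are easy to verify mechanically for a small chain. The sketch below is our own illustration, not from the text: it builds a reversible 3-state chain by the standard Metropolis rule for a chosen target distribution $\pi$, checks condition (tr1), and confirms that the rows of $P^t$ converge to $\pi$.

```python
import numpy as np

pi = np.array([0.5, 0.3, 0.2])        # target stationary distribution (our choice)
N = len(pi)

# Metropolis-style chain: propose one of the other states uniformly
# (probability 1/(N-1) each) and accept with probability min(1, pi_j/pi_i);
# the leftover mass becomes the self-loop probability.
P = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            P[i, j] = min(1.0, pi[j] / pi[i]) / (N - 1)
    P[i, i] = 1.0 - P[i].sum()

assert np.allclose(P.sum(axis=1), 1.0)                    # P is stochastic
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)  # (tr1): pi_i p_ij = pi_j p_ji
assert np.allclose(pi @ P, pi)                            # pi is stationary

# Ergodicity: every row of P^t approaches pi.
Pt = np.linalg.matrix_power(P, 60)
print(np.abs(Pt - pi).max())
assert np.abs(Pt - pi).max() < 1e-8
```

The detailed balance check here is exactly the observation above that $\pi$ satisfying (tr1) together with normalisation forces $\pi$ to be the stationary distribution.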
Condition (tr1) says that, in the stationary distribution, the expected numbers of transitions per unit time from state $i$ to state $j$ and from state $j$ to state $i$ are equal, and is often referred to as the "detailed balance" property. It is easy to verify that, for any ergodic chain, if $\pi$ is any positive vector satisfying (tr1) and the normalisation condition $\sum_i \pi_i = 1$, then the chain is reversible and $\pi$ is its stationary distribution.

In our development we shall work exclusively with reversible chains. As we shall see, this assumption simplifies the analysis of the time-dependent behaviour. Moreover, since most chains arising in the computational applications of interest to us have this property, it does not represent a significant restriction. For a brief discussion of non-reversible chains, see the Appendix.

It is illuminating to identify an ergodic reversible chain with a weighted undirected graph (containing self-loops) as follows. The vertex set of the graph is the state space $[N]$ of the chain, and for each pair of states $i, j$ (which need not be distinct) the edge $(i,j)$ has weight $w_{ij} = \pi_i p_{ij} = \pi_j p_{ji}$. By detailed balance, this definition is consistent. Thus there is an edge of non-zero weight between $i$ and $j$ iff $p_{ij} > 0$. We call this graph the underlying graph of the chain. It should be clear that such a chain is uniquely specified by its underlying graph.

2.2 Conductance and the rate of convergence

In this section, we establish an intimate relationship between the rate of convergence of an ergodic reversible chain and a certain structural property, called the conductance, of its underlying graph. Essentially, such a chain will turn out to converge fast if and only if the conductance is not too small. The crucial step in the proof is a connection between the conductance and the second eigenvalue of the transition matrix of the chain. Similar relationships between subdominant eigenvalues of a graph and a more familiar structural property, known as the expansion or magnification, have been established in a different context by Alon [4] and Alon and Milman [5], and their relevance for the rate of convergence of certain Markov chains noted by Aldous [2].

As we have already stated, the stationary distribution $\pi$ of an ergodic chain is a left eigenvector of $P$ with associated eigenvalue $\lambda_0 = 1$. Let $\{\lambda_i : 1 \le i \le N-1\}$, with $\lambda_i \in \mathbb{C}$, be the remaining eigenvalues (not necessarily distinct) of $P$. By standard Perron-Frobenius theory for non-negative matrices [85], these satisfy $|\lambda_i| < 1$ for $1 \le i \le N-1$. Furthermore, the transient behaviour of the chain, and hence its rate of convergence, is governed by the magnitude of the eigenvalues $\lambda_i$. In the reversible case, condition (tr2) of the definition implies that the eigenvalues of $P$ are just those of the similar symmetric matrix $D^{1/2} P D^{-1/2}$, and so are all real. This fact leads to a clean formulation of the above dependence, expressed in the following pair of propositions.

Proposition 2.1 Let $P$ be the transition matrix of an ergodic reversible Markov chain, $\pi$ its stationary distribution and $\{\lambda_i : 0 \le i \le N-1\}$ its (necessarily real) eigenvalues, with $\lambda_0 = 1$. Then for any non-empty subset $U \subseteq [N]$ and all $t \in \mathbb{N}$, the relative pointwise distance $\Delta_U(t)$ satisfies

\[ \Delta_U(t) \le \frac{\lambda_{\max}^t}{\min_{i \in U} \pi_i} , \]

where $\lambda_{\max} = \max\{ |\lambda_i| : 1 \le i \le N-1 \}$.

Proof: Let $D^{1/2}$ and $D^{-1/2}$ be as in the definition of reversibility, so that the matrix $A = D^{1/2} P D^{-1/2}$ is symmetric with the same eigenvalues as $P$, and these are real. This implies that we can select an orthonormal basis $\{ e^{(i)} : 0 \le i \le N-1 \}$ for $\mathbb{R}^N$ consisting of left eigenvectors of $A$, where $e^{(i)} = (e_j^{(i)})$ has associated eigenvalue $\lambda_i$ and $e_j^{(0)} = \pi_j^{1/2}$ for $j \in [N]$. Following [59], $A$ has the spectral representation

\[ A = \sum_{i=0}^{N-1} \lambda_i \, e^{(i)} e^{(i)\prime} = \sum_{i=0}^{N-1} \lambda_i E^{(i)} , \]

where $E^{(i)} = e^{(i)} e^{(i)\prime}$ is a dyad (i.e., has rank 1) with $E^{(i)} E^{(j)} = 0$ for $i \ne j$, and $E^{(i)2} = E^{(i)}$. It follows that, for any $t \in \mathbb{N}$, $A^t = \sum_i \lambda_i^t E^{(i)}$, and hence

\[ P^t = D^{-1/2} A^t D^{1/2} = \sum_{i=0}^{N-1} \lambda_i^t (D^{-1/2} e^{(i)})(e^{(i)\prime} D^{1/2}) = \mathbf{1}_N \pi + \sum_{i=1}^{N-1} \lambda_i^t (D^{-1/2} e^{(i)})(e^{(i)\prime} D^{1/2}) , \]

where $\mathbf{1}_N$ is the $N$-vector all of whose entries are 1; in component form,

\[ p_{jk}^{(t)} = \pi_k + \sqrt{\frac{\pi_k}{\pi_j}} \sum_{i=1}^{N-1} \lambda_i^t e_j^{(i)} e_k^{(i)} . \tag{2.1} \]

By definition, the r.p.d. $\Delta_U$ is therefore given by

\[ \Delta_U(t) = \max_{j,k \in U} \frac{\bigl| \sum_{i=1}^{N-1} \lambda_i^t e_j^{(i)} e_k^{(i)} \bigr|}{\sqrt{\pi_j \pi_k}} \le \lambda_{\max}^t \, \max_{j,k \in U} \frac{\sum_{i=1}^{N-1} |e_j^{(i)}| \, |e_k^{(i)}|}{\sqrt{\pi_j \pi_k}} \le \frac{\lambda_{\max}^t}{\min_{j \in U} \pi_j} , \]

where the second inequality follows from the Cauchy-Schwarz inequality and the orthonormality of the $e^{(i)}$. □

Proposition 2.2 With the notation and assumptions of Proposition 2.1, the relative pointwise distance $\Delta(t)$ over $[N]$ satisfies

\[ \Delta(t) \ge \lambda_{\max}^t \]

for all even $t \in \mathbb{N}$. Moreover, if all eigenvalues of $P$ are non-negative, the bound holds for all $t \in \mathbb{N}$.

Proof: The equality (2.1) in the proof of Proposition 2.1 still holds. Setting $U = [N]$ and $k = j$, we see that

\[ \Delta(t) = \Delta_{[N]}(t) \ge \max_{j \in [N]} \frac{\bigl| \sum_{i=1}^{N-1} \lambda_i^t e_j^{(i)2} \bigr|}{\pi_j} \ge \lambda_{\max}^t \, \max_{j \in [N]} \frac{e_j^{(i_0)2}}{\pi_j} \]

for all (even) natural numbers $t$, where $e^{(i_0)}$ is an eigenvector corresponding to an eigenvalue of modulus $\lambda_{\max}$. By orthonormality of $e^{(0)}$ and $e^{(i_0)}$ and the form of $e^{(0)}$, this latter quantity is bounded below by $\lambda_{\max}^t$ as required. □

Propositions 2.1 and 2.2 say that, provided $\pi$ is not extremely small in any state of interest, the convergence of a reversible chain will be rapid in the sense indicated earlier if and only if $\lambda_{\max}$ is suitably bounded away from 1. The first of these conditions can be checked immediately from our knowledge of $\pi$, and is rarely violated in practice. We therefore focus our attention on the second condition, which is not so easily handled. (Recall that $P$ is assumed to be a large matrix, so that direct numerical evaluation of the eigenvalues is not feasible.)

Suppose the eigenvalues of $P$ are ordered so that $1 = \lambda_0 > \lambda_1 \ge \cdots \ge \lambda_{N-1} > -1$. Then $\lambda_{\max} = \max\{ \lambda_1, |\lambda_{N-1}| \}$, and the value of $\lambda_{N-1}$ is significant only if some of the eigenvalues are negative. Negative eigenvalues correspond to oscillatory, or "near-periodic", behaviour and cannot occur if each state is equipped with a sufficiently large self-loop probability. Specifically, it is enough to have $\min_j p_{jj} \ge 1/2$. To see this, let $I_N$ denote the $N \times N$ identity matrix and consider the non-negative matrix $2P - I_N$, whose eigenvalues are $\mu_i = 2\lambda_i - 1$. By Perron-Frobenius, $\mu_i \ge -1$ for all $i \in [N]$, which implies that $\lambda_{N-1} \ge 0$.

In fact, negative eigenvalues never present an obstacle to rapid mixing because any chain can be modified in a simple way so that the above condition holds without risk of slowing down the convergence too much. All we need to do is increase the self-loop probability of every state by 1/2, as can easily be checked:

Proposition 2.3 With the notation of Proposition 2.1, suppose also that the eigenvalues of $P$ are ordered so that $1 = \lambda_0 > \lambda_1 \ge \cdots \ge \lambda_{N-1} > -1$. Then the modified chain with transition matrix $P' = \frac{1}{2}(I_N + P)$, where $I_N$ is the $N \times N$ identity matrix, is also ergodic and reversible with the same stationary distribution, and its eigenvalues $\{\lambda_i'\}$, similarly ordered, satisfy $\lambda_{N-1}' > 0$ and $\lambda_{\max}' = \lambda_1' = \frac{1}{2}(1 + \lambda_1)$. □

Remarks: (a) For the sake of simplicity, we have chosen the value 1/2 for the additional self-loop probability. A less crude approach would be to make this choice depend on the minimum diagonal element of $P$.

(b) Of course, in a practical simulation of the modified chain, the waiting time at each state would be controlled by a separate random process. □

We turn now to the much more substantial problem of bounding the second eigenvalue $\lambda_1$ away from 1. We shall do this by relating $\lambda_1$ to a more accessible structural property of the underlying graph. Intuitively, we would expect an ergodic chain to converge rapidly if it is unlikely to "get stuck" in any subset $S$ of the state space whose total stationary probability is fairly small. We can formalise this idea by considering the cut edges which separate $S$ from the rest of the space in the underlying graph, and stipulating that these must be capable of supporting a sufficiently large "flow" in the graph, viewed as a network. With this in mind, for any non-empty subset $S$ of states with non-empty complement $\bar{S}$ in $[N]$ we define the quantity $\Phi_S = F_S / C_S$, where

\[ C_S = \sum_{i \in S} \pi_i \qquad \text{(the capacity of } S\text{);} \]
\[ F_S = \sum_{\substack{i \in S \\ j \in \bar{S}}} p_{ij} \pi_i \qquad \text{(the ergodic flow out of } S\text{).} \]

Note that $0 < F_S \le C_S < 1$. $\Phi_S$ may be visualised as the conditional probability that the stationary process crosses the cut from $S$ to $\bar{S}$ in a single step, given that it starts in $S$. Finally, we define the conductance of the chain by

\[ \Phi = \min_{\substack{0 < |S| < N \\ C_S \le 1/2}} \Phi_S . \]

It is easy to see that $F_S = F_{\bar{S}}$ for all such sets $S$. This implies that $\Phi_{\bar{S}} = \Phi_S \bigl( C_S / (1 - C_S) \bigr)$, so we may equivalently write

\[ \Phi = \min_{0 < |S| < N} \max\{ \Phi_S, \Phi_{\bar{S}} \} . \]

Now suppose that the chain is reversible, and let $G$ be its underlying graph. Then for all $S$ as above we have

\[ F_S = F_{\bar{S}} = \sum_{\substack{i \in S \\ j \in \bar{S}}} w_{ij} , \]

a function of the edge weights of $G$. The conductance $\Phi = \Phi(G)$ may then be viewed as a structural property of the weighted graph $G$. In view of the above remarks, we might hope that $\Phi(G)$, which in some sense measures the minimum relative connection strength between "small" subsets $S$ and the rest of the space, is related to the rate of convergence of the chain. This relationship is manifested via two separate bounds on the second eigenvalue $\lambda_1$.

Lemma 2.4 For an ergodic reversible Markov chain with underlying graph $G$, the second eigenvalue $\lambda_1$ of the transition matrix satisfies

\[ \lambda_1 \le 1 - \frac{\Phi(G)^2}{2} . \]

Proof: Let $e' = (e_i)_{i=0}^{N-1}$ be an eigenvector of $P$ with associated eigenvalue $\lambda < 1$, and define the matrix $Q = I_N - P$ (the "Laplace operator" associated with $P$). Then clearly

\[ e' Q = (1 - \lambda) e' . \tag{2.2} \]

Define the subset of states $S = \{ i \in [N] : e_i > 0 \}$. Since $P$ is stochastic and $\lambda < 1$, it follows that $\sum_i e_i = 0$. Hence $0 < |S| < N$, and we may assume without loss of generality that $C_S = \sum_{i \in S} \pi_i \le 1/2$. Now let $\hat{e}' = (\hat{e}_i)$ be the vector defined by

\[ \hat{e}_i = \begin{cases} e_i / \pi_i , & \text{for } i \in S ; \\ 0 , & \text{otherwise.} \end{cases} \]

Renumbering states as necessary, we shall assume that $\hat{e}_0 \ge \hat{e}_1 \ge \cdots \ge \hat{e}_{N-1}$, which implies also that $S = \{0, 1, \ldots, r\}$ for some $r$ with $0 \le r < N - 1$. Taking the inner product of (2.2) with $\hat{e}'$ gives

\[ (e' Q, \hat{e}') = (1 - \lambda)(e', \hat{e}') . \tag{2.3} \]

The right-hand side of (2.3) is just

\[ (1 - \lambda) \sum_{i \in S} \pi_i \hat{e}_i^2 . \tag{2.4} \]

Note that if $Q = (q_{ij})$ then $q_{ij} = -p_{ij}$ for $i \ne j$, and $q_{ii} = 1 - p_{ii} = \sum_{j \ne i} p_{ij}$, so we can expand the left-hand side of (2.3) as

\[ \sum_{i \in S} \sum_{j \in [N]} \hat{e}_i q_{ji} e_j \ge \sum_{i \in S} \sum_{j \in S} \hat{e}_i q_{ji} e_j = -\sum_{\substack{i, j \in S \\ j \ne i}} w_{ij} \hat{e}_i \hat{e}_j + \sum_{i \in S} \sum_{j \ne i} w_{ij} \hat{e}_i^2 = -\sum_{i < j} 2 w_{ij} \hat{e}_i \hat{e}_j + \sum_{i < j} w_{ij} (\hat{e}_i^2 + \hat{e}_j^2) = \sum_{i < j} w_{ij} (\hat{e}_i - \hat{e}_j)^2 , \tag{2.5} \]

where the inequality follows from the fact that all contributions with $j \notin S$ are positive. Using (2.4) and (2.5), equation (2.3) therefore yields

\[ 1 - \lambda \ge \frac{\sum_{i < j} w_{ij} (\hat{e}_i - \hat{e}_j)^2}{\sum_{i \in S} \pi_i \hat{e}_i^2} . \tag{2.6} \]
Now consider the sum

\[ \sum_{i < j} w_{ij} (\hat{e}_i + \hat{e}_j)^2 \le 2 \sum_{i < j} w_{ij} (\hat{e}_i^2 + \hat{e}_j^2) \le 2 \sum_{i \in S} \pi_i \hat{e}_i^2 . \]

Combining this with (2.6) gives

\[ 1 - \lambda \ge \frac{\sum_{i < j} w_{ij} (\hat{e}_i - \hat{e}_j)^2}{\sum_{i \in S} \pi_i \hat{e}_i^2} \cdot \frac{\sum_{i < j} w_{ij} (\hat{e}_i + \hat{e}_j)^2}{2 \sum_{i \in S} \pi_i \hat{e}_i^2} \ge \frac{\bigl( \sum_{i < j} w_{ij} (\hat{e}_i^2 - \hat{e}_j^2) \bigr)^2}{2 \bigl( \sum_{i \in S} \pi_i \hat{e}_i^2 \bigr)^2} , \tag{2.7} \]

where we have used the Cauchy-Schwarz inequality. To complete the proof, we need to relate the quotient in (2.7) to the quantity $\Phi(G)$.

To do this, consider the increasing sequence $(S_k)_{k=0}^{r}$ of subsets of $S$ with $S_k = \{0, \ldots, k\}$. The numerator of the quotient in (2.7) may be expressed in terms of ergodic flows across the boundaries between successive sets $S_k$ as follows:

\[ \sum_{i < j} w_{ij} (\hat{e}_i^2 - \hat{e}_j^2) = \sum_{i < j} w_{ij} \sum_{i \le k < j} (\hat{e}_k^2 - \hat{e}_{k+1}^2) = \sum_{k=0}^{r} (\hat{e}_k^2 - \hat{e}_{k+1}^2) \sum_{\substack{i \in S_k \\ j \notin S_k}} w_{ij} = \sum_{k=0}^{r} (\hat{e}_k^2 - \hat{e}_{k+1}^2) F_{S_k} . \tag{2.8} \]

Now the capacities of the $S_k$ satisfy $C_{S_k} \le C_S \le 1/2$ for $0 \le k \le r$, and hence by definition of $\Phi$, $F_{S_k} \ge \Phi(G) \, C_{S_k}$. We therefore get from (2.8)

\[ \sum_{i < j} w_{ij} (\hat{e}_i^2 - \hat{e}_j^2) \ge \Phi(G) \sum_{k=0}^{r} (\hat{e}_k^2 - \hat{e}_{k+1}^2) C_{S_k} = \Phi(G) \sum_{k=0}^{r} (\hat{e}_k^2 - \hat{e}_{k+1}^2) \sum_{i=0}^{k} \pi_i = \Phi(G) \sum_{i=0}^{r} \pi_i \sum_{k=i}^{r} (\hat{e}_k^2 - \hat{e}_{k+1}^2) = \Phi(G) \sum_{i \in S} \pi_i \hat{e}_i^2 . \]

This inequality ensures that the ratio $\sum_{i<j} w_{ij} (\hat{e}_i^2 - \hat{e}_j^2) \big/ \sum_{i \in S} \pi_i \hat{e}_i^2$ is bounded below by $\Phi(G)$, so that the quotient in (2.7) is bounded below by $\Phi(G)^2/2$ and finally

\[ 1 - \lambda \ge \frac{\Phi(G)^2}{2} , \]

as required. □

Combining Proposition 2.1 and Lemma 2.4, we arrive at the first major result of this section, namely an upper bound on the distance from stationarity of a reversible chain in terms of the conductance of its underlying graph.

Theorem 2.5 Let $G$ be the underlying graph of an ergodic reversible Markov chain all of whose eigenvalues are non-negative, and $\pi$ its stationary distribution. Then for any non-empty subset $U \subseteq [N]$ and all $t \in \mathbb{N}$, the relative pointwise distance $\Delta_U(t)$ satisfies

\[ \Delta_U(t) \le \frac{\bigl( 1 - \Phi(G)^2/2 \bigr)^t}{\min_{i \in U} \pi_i} . \qquad \Box \]

Remarks: (a) In the interests of simplicity, Theorem 2.5 is stated only for chains with non-negative eigenvalues. Of course, Proposition 2.3 tells us that any chain can be modified in a crude way so that this condition holds: the effect of this operation on the conductance is to reduce it by a factor of 1/2. In practice it may often be possible to reason about negative eigenvalues on an ad hoc basis for the chain at hand. Proposition 2.1 and Lemma 2.4 may then be used directly to get a bound on $\Delta_U(t)$.

(b) Theorem 2.5 says that $\Delta(t)$ is bounded above by a function of the form $\exp(-(t - t_0)/\beta)$, where $\beta$ and $t_0$ are determined by $\Phi(G)$ and $\pi$. Thus $\Delta(t)$ is guaranteed to decrease at an exponential rate after an initial "delay" of $t_0 = 2\Phi(G)^{-2} \ln \pi_{\min}^{-1}$ time units, where $\pi_{\min} = \min_i \pi_i$. Once this regime sets in, the actual exponential rate of convergence is governed by the "time constant" $\beta = 2\Phi(G)^{-2}$. The number of steps required to ensure an r.p.d. of $\epsilon$ is at most $t_0 + \beta \ln \epsilon^{-1}$. We shall distil the essential features of this behaviour for our purposes into Corollary 2.8 of the next section. □

Our next aim is to derive a partial converse of Theorem 2.5. First we require a bound on $\lambda_1$ complementary to that of Lemma 2.4.

Lemma 2.6 For an ergodic reversible Markov chain with underlying graph $G$, the second eigenvalue $\lambda_1$ of the transition matrix satisfies

\[ \lambda_1 \ge 1 - 2\Phi(G) . \]
Proof: As in the proof of Lemma 2.4, we work with the matrix $Q = I_N - P$, whose eigenvalues are $\{1 - \lambda_i\}$ for $\lambda_i$ an eigenvalue of $P$. First we get a general bound on $\lambda_1$ using a variational principle.

Define $B = D^{1/2} Q D^{-1/2}$; $B$ is symmetric by virtue of reversibility, and its eigenvalues are those of $Q$. Furthermore, if $\pi = (\pi_i)$ is the stationary distribution of $P$, then the vector $e' = (e_i)$ with $e_i = \pi_i^{1/2}$ is a left eigenvector of $B$ with eigenvalue 0. Now let $g' = (g_i)$ be any vector orthogonal to $e'$. Since $B$ is symmetric with second smallest eigenvalue $1 - \lambda_1$, the classical calculus of variations tells us that

\[ (1 - \lambda_1)(g', g') \le (g' B, g') . \tag{2.9} \]

To relate this to $Q$, introduce the vectors $f' = (f_i) = g' D^{1/2}$ and $\hat{f}' = (\hat{f}_i) = f' D^{-1}$. Then (2.9) becomes

\[ (1 - \lambda_1)(f', \hat{f}') \le (f' Q, \hat{f}') , \tag{2.10} \]

and this holds for all $f'$ with $\sum_i f_i = 0$. The right-hand side of (2.10) can be rewritten along similar lines to the proof of Lemma 2.4: it is a simple matter to check that

\[ (f' Q, \hat{f}') = \sum_{i < j} w_{ij} (\hat{f}_i - \hat{f}_j)^2 , \tag{2.11} \]

where as usual $w_{ij} = \pi_i p_{ij} = \pi_j p_{ji}$ denotes the weight of the edge $(i,j)$ in $G$.

Now let $S$ be any subset of states for which $C_S \le 1/2$. The idea is to select a particular vector $f'$ for which (2.10) yields a good bound on $\Phi_S$. With this in mind, set

\[ f_i = \begin{cases} \pi_i / C_S , & \text{for } i \in S ; \\ -\pi_i / (1 - C_S) , & \text{for } i \notin S . \end{cases} \]

Then clearly $\sum_i f_i = 0$, so (2.10) holds for $f'$. Furthermore, we have

\[ (f', \hat{f}') = \sum_{i \in [N]} \frac{f_i^2}{\pi_i} = \sum_{i \in S} \frac{\pi_i}{C_S^2} + \sum_{i \notin S} \frac{\pi_i}{(1 - C_S)^2} = \frac{1}{C_S} + \frac{1}{1 - C_S} , \]

and from (2.11), since $\hat{f}'$ is constant on $S$ and $\bar{S}$,

\[ (f' Q, \hat{f}') = \sum_{\substack{i \in S \\ j \notin S}} w_{ij} \left( \frac{1}{C_S} + \frac{1}{1 - C_S} \right)^2 = F_S \left( \frac{1}{C_S} + \frac{1}{1 - C_S} \right)^2 . \]

Inequality (2.10) therefore yields

\[ 1 - \lambda_1 \le F_S \left( \frac{1}{C_S} + \frac{1}{1 - C_S} \right) \le 2 \, \frac{F_S}{C_S} = 2 \Phi_S . \]

Since $S$ was chosen arbitrarily, we have the bound

\[ \lambda_1 \ge 1 - 2\Phi(G) , \]

which completes the proof. □

Proposition 2.2 and Lemma 2.6 together yield a converse of Theorem 2.5.

Theorem 2.7 Let $G$ be the underlying graph of an ergodic reversible Markov chain all of whose eigenvalues are non-negative, and suppose that $\Phi(G) \le 1/2$. Then the relative pointwise distance $\Delta(t)$ over $[N]$ satisfies

\[ \Delta(t) \ge (1 - 2\Phi(G))^t \]

for all $t \in \mathbb{N}$. □

Remarks: (a) For simplicity, and in parallel with Theorem 2.5, we have stated Theorem 2.7 only for chains with non-negative eigenvalues (and conductance at most 1/2). If the modification procedure of Proposition 2.3 is applied to an arbitrary chain then both of these conditions are guaranteed to hold. For chains with negative eigenvalues (and $\Phi(G) \le 1/2$), the bound of Theorem 2.7 holds for all even $t \in \mathbb{N}$, by Proposition 2.2.

(b) Lemma 2.4 and its proof parallel an earlier continuous result of Cheeger [19] for Riemannian manifolds. In the discrete setting, the lemma and its converse, Lemma 2.6, are closely related to recent work of Alon [4] and Alon and Milman [5] (see also Dodziuk [27]) in which a relationship between a similar structural property of simple, unweighted graphs and the second eigenvalue of the adjacency matrix is established. This property, called the "magnification" in [4, 5], measures the minimum number of vertices adjacent to a small subset $S$ as a fraction of $|S|$, and is a generalisation of the widely-studied concept of expansion for bipartite graphs. Our conductance $\Phi$ is a weighted edge analogue of magnification, and is the natural quantity to study in the present application.

(c) The significance of Alon's result [4] as a sufficient condition for rapid convergence of certain Markov chains has been observed by various authors (see, e.g., [2, 17, 80]). In particular, Aldous [2] states a restricted form of Theorem 2.5 for random walks on regular simple graphs.
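For a chain small enough to enumerate subsets, the two-sided relationship $1 - 2\Phi(G) \le \lambda_1 \le 1 - \Phi(G)^2/2$ given by Lemmas 2.4 and 2.6 can be checked directly. The sketch below is our own illustration (the chain is an arbitrary reversible Metropolis-style chain with random target weights, not an example from the text): it computes the conductance by brute force over all subsets with $C_S \le 1/2$ and compares it with the second eigenvalue.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 6
pi = rng.random(N)
pi /= pi.sum()

# Reversible chain: uniform proposal over the other N-1 states, Metropolis acceptance.
P = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            P[i, j] = min(1.0, pi[j] / pi[i]) / (N - 1)
    P[i, i] = 1.0 - P[i].sum()

def conductance(P, pi):
    """Phi = min over subsets S with capacity C_S <= 1/2 of F_S / C_S (brute force)."""
    n = len(pi)
    best = np.inf
    for r in range(1, n):
        for S in map(list, itertools.combinations(range(n), r)):
            C = pi[S].sum()
            if C <= 0.5:
                Sbar = [j for j in range(n) if j not in S]
                F = (pi[S][:, None] * P[np.ix_(S, Sbar)]).sum()  # ergodic flow out of S
                best = min(best, F / C)
    return best

phi = conductance(P, pi)
# By (tr2), the eigenvalues of P are those of the symmetric matrix D^{1/2} P D^{-1/2}.
sym = np.diag(np.sqrt(pi)) @ P @ np.diag(1.0 / np.sqrt(pi))
lam1 = np.sort(np.linalg.eigvalsh(sym))[-2]       # second-largest eigenvalue of P
print(phi, lam1)
assert 1 - 2 * phi - 1e-9 <= lam1 <= 1 - phi**2 / 2 + 1e-9
```

Applying Proposition 2.3 first, i.e. replacing $P$ by $\frac{1}{2}(I_N + P)$, halves every ergodic flow and hence $\Phi$, and maps $\lambda_1$ to $\frac{1}{2}(1 + \lambda_1)$, so the same sandwich holds for the lazy chain as well.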
2.3 A CHARACTERISATION OF RAPID MIXING
Assume now that all chains in some family are reversible and are known to have non-negative eigenvalues; recall that the latter can always be arranged by boosting self-loop probabilities as in Proposition 2.3. Furthermore, suppose that the minimum probability π_min^(x) assigned to any state in the stationary distribution of MC(x) satisfies

lg (π_min^(x))^(-1) ≤ q'(|x|)   (2.12)

for all x ∈ Σ* and some polynomial q'. Then the results of the previous section give us the following elegant characterisation of the rapid mixing property.

Corollary 2.8 Let {MC(x) : x ∈ Σ*} be a family of ergodic reversible Markov chains, and let G(x) be the underlying graph of MC(x). Under the above assumptions, the family is rapidly mixing if and only if

Φ(G(x)) ≥ 1/p(|x|)

for all x ∈ Σ* and some polynomial p.

Proof: We may reformulate the results of the previous section in terms of τ(x) as follows:

((1 − 2Φ(G(x)))/(2Φ(G(x)))) ln ε^(-1) ≤ τ(x)(ε) ≤ 2 Φ(G(x))^(-2) (ln ε^(-1) + ln π_min^(-1)).   (2.13)

The upper bound on τ(x) is a direct consequence of Theorem 2.5. For the lower bound, note from Proposition 2.2 that τ(x)(ε) ≥ (ln ε^(-1))/(ln λ_max^(-1)) ≥ (ln ε^(-1)) λ_max/(1 − λ_max), using the fact that ln z^(-1) ≤ (1 − z)/z for z ∈ (0,1), and then apply Lemma 2.6. The corollary follows immediately from (2.13). □

Corollary 2.8 provides a partial characterisation of the rapid mixing property in terms of the conductance of the underlying graphs of the family. The characterisation is incomplete because of the non-negativity assumption on the eigenvalues and the lower bound (2.12) imposed on π_min^(x). We can justify the restriction to chains having non-negative eigenvalues on the grounds that our ultimate aim is to construct efficient sampling procedures for a given family of distributions. The modification procedure of Proposition 2.3 is effective and cannot destroy the rapid mixing property of the original family: it is therefore reasonable to incorporate it into the chains we consider, and we will always do this. For the sake of completeness, the following example confirms that, for ergodic chains with negative eigenvalues, rapid mixing is not guaranteed by a polynomial lower bound on the conductance.

Example 2.9 Consider the family of chains MC(n) parameterised on natural numbers n ∈ N+, in which MC(n) has state space V1 ∪ V2 for disjoint V1, V2 with |V1| = |V2| = N/2 and N = 2^n. Define the matrix P = (p_ij) by

p_ij = 2/N if i ∈ V1, j ∈ V2 or i ∈ V2, j ∈ V1; p_ij = 0 otherwise,

and let the transition matrix of MC(n) be (1 − α)P + αI_N, where I_N is the N × N identity matrix and the value of α will be specified below. The chain MC(n) is obviously ergodic for 0 < α < 1; since it is symmetric, it is also reversible with uniform stationary distribution.

To calculate Φ(G(n)), consider any non-empty subset S = A ∪ B of states, with A ⊆ V1, B ⊆ V2 and |S| ≤ N/2. Writing a = |A|/N and b = |B|/N, we have C_S = a + b and

F_S = (2(1 − α)/N)(a(N/2 − bN) + b(N/2 − aN)).

It follows that

Φ_S = F_S/C_S = (1 − α)(1 − 4ab/(a + b)).

The latter quantity is minimised under the constraint a + b ≤ 1/2 at the values a = b = 1/4, whence

Φ(G(n)) = min_S Φ_S = (1 − α)/2.   (2.14)

Now consider the probability that the chain, when started from some initial state in V1, will be found in V1 at time t = 2m + 1 for m ∈ N. This can happen only if at least one self-loop is traversed during this period, so the probability is at most 1 − (1 − α)^t. If we set α = N^(-1) (say), then if t is permitted to grow only polynomially with n this probability tends to 0 as n → ∞. It follows that the family is not rapidly mixing. Equality (2.14), however, ensures that the conductance is large.

Note that the above chains fail to converge fast because they are "almost periodic". P is the transition matrix of the random walk on the complete bipartite graph with vertex set V1 ∪ V2, which is periodic with period 2. MC(n) is obtained from P by adding a self-loop probability α to each state. The effect is to increase the smallest eigenvalue λ_(N−1) from −1 to −1 + 2α, and the second eigenvalue λ_1 from 0 to α. For α = N^(-1), we therefore have
λ_max = 1 − 2N^(-1). On the other hand, taking α = 1/2 corresponds precisely to the modification procedure of Proposition 2.3. In this case, of course, λ_max = 1/2 and the chains become rapidly mixing. □

Next we consider the effect of the stationary distribution on the characterisation of Corollary 2.8. The following example demonstrates that the assumption of a lower bound of the form (2.12) on π_min^(x) is necessary.

Example 2.10 For n ∈ N+, let MC(n) be a one-dimensional geometric random walk on state space [N], where N = 2^n, with transition probabilities p_i(i+1) = 1/3, p_i(i−1) = 2/3 for 1 ≤ i ≤ N − 2, and p_00 = p_(N−1)(N−1) = 2/3, p_01 = p_(N−1)(N−2) = 1/3. (Thus states 0 and N − 1 form reflecting barriers.) Then MC(n) is ergodic and reversible with stationary distribution π_i = 2^(−i−1) for 0 ≤ i ≤ N − 2 and π_(N−1) = 2^(−N+1). Furthermore, the conductance Φ(G(n)) is exactly 1/3, as may readily be verified. Hence, if the chain is modified as in Proposition 2.3 to eliminate negative eigenvalues, the conductance is still 1/6. However, this family is not rapidly mixing since the number of states reachable from any initial state in t steps is at most 2t + 1. This does not contradict Corollary 2.8 since lg π_min = −2^n + 1. □

Fortunately, pathological cases such as the one above rarely occur in our applications. Moreover, states with extremely small weight in the stationary distribution are typically not relevant to the sampling process, and their effect can be eliminated by working with Δ_U(t) for some suitably chosen subset U of states. Hence for most practical purposes the conductance may be taken as a reliable characterisation of rapid mixing for families of reversible Markov chains.

In the remainder of this monograph, extensive use will be made of the positive part of the above characterisation to show that certain natural families are rapidly mixing, thus providing efficient sampling schemes for the associated stationary distributions. We shall see that, for several chains with a rather complex structure, the conductance may be quite accessible while the rate of convergence is apparently not easily investigated by existing methods. Thus we contend that the characterisation of rapid mixing presented in this chapter is a potentially powerful tool for analysing the time-dependent behaviour of a much wider class of Markov chains than has hitherto been possible.

Remark: Our definition of rapid mixing is rather a strict one because it is based on the r.p.d., which is a very severe measure for two reasons. Firstly, it demands that the distribution of the chain be close to the stationary distribution at every point; and secondly it demands that convergence be rapid from every initial state. We can formulate a less strict definition as follows. Define the variation distance from initial state i by

Δ_i^var(t) = (1/2) Σ_j |p_ij^(t) − π_j|,

and let τ_i^var(ε) = min{t ∈ N : Δ_i^var(t') ≤ ε for all t' ≥ t}. By relating Δ_i^var(t) to λ_max in similar fashion to Propositions 2.1 and 2.2, the following bounds on τ^var may be obtained (see, e.g., (87)):

(i) τ_i^var(ε) ≤ (1 − λ_max)^(-1) (ln π_i^(-1) + ln ε^(-1));

(ii) max_i τ_i^var(ε) ≥ (1/2) λ_max (1 − λ_max)^(-1) ln(2ε)^(-1).

Using the bounds on λ_1 from Lemmas 2.4 and 2.6, we therefore get upper and lower bounds on τ^var in terms of the conductance Φ that are very similar to those of (2.13) in the proof of Corollary 2.8. Thus we see that Φ also characterises the rapid mixing property when formulated in terms of variation distance. Note, however, that in this formulation the initial state plays a significant role. In particular, the upper bound (i) depends only on the stationary probability of the initial state, rather than on π_min. This is of practical value in cases where the stationary distribution contains states of very small probability (so π_min is tiny) but we are able to choose an initial state whose stationary probability is quite large. On the other hand, the lower bound (ii) does not rule out the possibility that the chain converges fast in variation distance from certain initial states even when the conductance is small; such examples exist.

We could in fact have developed the theory in this chapter, and in much of the rest of the monograph, in terms of variation distance. Indeed, for many purposes, including the reduction from counting to generation of Section 1.4, it is enough to generate structures from a distribution whose variation distance from the stationary distribution is small. However, we have chosen to work with r.p.d. here for consistency with our rather strict definition of almost uniform generation in Chapter 1. It should be clear from the foregoing remarks that this choice makes little essential difference to our main results. □

We close this chapter with an observation which is frequently useful in applications. Many natural Markov chains can be viewed as a simple random walk on a graph H = (V, E) in which transitions are made from any vertex v to an adjacent vertex with probability β/d, where d is the
maximum vertex degree of H and β ≤ 1 is a positive constant. In addition, v has self-loop probability 1 − β deg(v)/d, where deg(v) is the degree of v in H. If H is connected and β < 1 then the chain is certainly ergodic; symmetry ensures that it is reversible with uniform stationary distribution. Note that in the underlying graph all edges of H have equal non-zero weight, while all remaining edges other than self-loops have weight zero.

For any subset S ⊆ V, let Γ(S) denote the cut set in H defined by S, i.e., the set of edges in E with one endpoint in S and one endpoint in V − S, and define the (edge) magnification μ(H) of H by

μ(H) = min_{0 < |S| ≤ |V|/2} |Γ(S)|/|S|.

Clearly, 0 < μ(H) ≤ d. The following equivalence is immediate from the definition of conductance.

Proposition 2.11 Let G be the underlying graph of an ergodic random walk on a graph H with maximum degree d and transition probabilities β/d between distinct adjacent states. Then the conductance of G is given by

Φ(G) = βμ(H)/d. □

In a typical family of random walks MC(x), the degree d is fairly small (i.e., bounded by a polynomial in |x|), so the rapid mixing criterion boils down to finding polynomial lower bounds on the magnification. This view simplifies the analysis of families of this kind. As the following example illustrates, however, such a bound on the magnification is of no significance when the degree is large.

Example 2.12 For n ∈ N+, define the graph H(n) as follows. Let H1, H2 be two copies of the complete graph K_(N/2), where N = 2^n. Then H(n) consists of the disjoint union of H1 and H2 together with a perfect matching between vertices of H1 and vertices of H2. Let MC(n) be a random walk on H(n) as above. Such a chain is clearly ergodic, and it is not hard to verify that μ(H(n)) = 1, with the minimum value of |Γ(S)|/|S| attained when S is the vertex set of one of the H_i. However, the degree of H(n) is N/2 = 2^(n−1), which precludes the possibility of rapid mixing regardless of the value of β. An obstacle to rapid convergence is presented by the edges between the vertex sets of H1 and H2, which constitute a constriction to flow in the underlying graph. □

Chapter 3

Direct Applications

We have seen in Chapter 2 that the Markov chain simulation paradigm provides an elegant general approach to generation problems, and have developed some theoretical machinery for analysing the efficiency of the resulting algorithms. The purpose of this chapter is to demonstrate the utility of the approach by applying it to some concrete and non-trivial examples. We shall show how to generate various combinatorial structures by constructing suitable ergodic Markov chains having the structures as states and transitions corresponding to simple local perturbations of the structures. The rate of convergence will be investigated using the techniques of Chapter 2, and in particular the rapid mixing characterisation of Corollary 2.8. In each case, the detailed structure of the Markov chain will enable us to estimate the conductance of its underlying graph, and we develop a useful general methodology for doing this. Our results constitute apparently the first demonstrations of rapid mixing for Markov chains with genuinely complex structure. As corollaries, we deduce the existence of efficient approximation algorithms for two significant #P-complete counting problems.

3.1 Some simple examples

Before tackling some more substantial examples, let us first apply the techniques of Chapter 2 to construct natural Markov chain generators for a few very simple structures. The generation problems considered in this section are not particularly interesting from a computational point of view as a number of efficient exact methods exist for their solution. Moreover, the associated counting problems are trivial. However, our analysis will serve to illustrate what is involved in practical applications of the rapid mixing characterisation of Chapter 2. It will also allow us to develop some additional technology which will play a central rôle in later proofs.
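Proposition 2.11 can be sanity-checked by brute force on a tiny instance. The sketch below (the 5-vertex graph and the choice β = 1/2 are illustrative assumptions, not taken from the text) computes the magnification μ(H) from its definition, computes the conductance of the corresponding lazy random walk directly as the minimum of F_S/C_S, and confirms the relation Φ(G) = βμ(H)/d:

```python
from itertools import combinations

# A small undirected graph H = (V, E): a 5-cycle with one chord (arbitrary example).
V = list(range(5))
E = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)}

d = max(sum(1 for e in E if v in e) for v in V)  # maximum degree
beta = 0.5                                       # lazy walk, as in the text

def cut(S):
    """|Gamma(S)|: number of edges with exactly one endpoint in S."""
    return sum(1 for u, v in E if (u in S) != (v in S))

# Edge magnification mu(H) = min over 0 < |S| <= |V|/2 of |Gamma(S)| / |S|.
mu = min(cut(set(S)) / len(S)
         for r in range(1, len(V) // 2 + 1)
         for S in combinations(V, r))

# Conductance of the underlying graph G: the stationary distribution is
# uniform, and each oriented cut edge carries weight pi_u * P(u -> v)
# = (1/N) * beta/d, so F_S = |Gamma(S)| * beta / (d * N) and C_S = |S| / N.
N = len(V)
def conductance_S(S):
    return (cut(S) * beta / (d * N)) / (len(S) / N)

phi = min(conductance_S(set(S))
          for r in range(1, N // 2 + 1)
          for S in combinations(V, r))

assert abs(phi - beta * mu / d) < 1e-12   # Proposition 2.11: Phi(G) = beta * mu(H) / d
print(phi, beta * mu / d)
```

The assertion holds for any choice of graph and β, since every ratio F_S/C_S is exactly (β/d)·|Γ(S)|/|S|.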
Consider first the relation B which associates with each natural number n the set B(n) = {0,1}^n of bit vectors of length n. We proceed to construct a family of Markov chains MC(n) which can be used as an almost uniform generator for B following the paradigm of Figure 2.1. The most natural process to look at here is one which moves around the state space B(n) by flipping a single random bit on each transition. Thus we can view MC(n) as a random walk on the n-regular graph H(n) with vertex set B(n) and edge set

{(u, v) ∈ B(n) × B(n) : D(u, v) = 1},

where D denotes Hamming distance. Of course, H(n) is just the n-dimensional hypercube. To avoid problems of periodicity, we add a self-loop probability of 1/2 to each state (i.e., β = 1/2 in the terminology of Proposition 2.11); note that this also dispenses with the problem of negative eigenvalues as in Proposition 2.3. This gives us an ergodic reversible Markov chain with uniform stationary distribution.

Turning now to the question of efficiency, it is clear that conditions (mc1)-(mc3) of Section 2.3 hold in this case: we may select 0^n as initial state and simulate individual steps in O(log n) time on an OCM (see Proposition 1.3). The efficiency of the generation procedure is therefore governed by the rate of convergence of the chain. The results of Section 2.3 in turn imply that this depends on the magnification of H(n). Fortunately, a suitable bound on this quantity is not too hard to come by.

Theorem 3.1 The magnification of the n-dimensional hypercube satisfies μ(H(n)) ≥ 1.

Before proving Theorem 3.1, let us first confirm that the resulting generation procedure is efficient.

Corollary 3.2 Simulation of the above family of Markov chains yields a f.p. almost uniform generator for the bit vector relation B.

Proof: By the preceding discussion, it is enough to check the rapid mixing condition (mc4). Theorem 3.1 and Proposition 2.11 imply that the conductance of the underlying graph of MC(n) is bounded below by 1/2n. Clearly, the minimum stationary probability satisfies lg π_min = −n. Since the conditions of Corollary 2.8 are satisfied, we conclude that the family of Markov chains is rapidly mixing as required. □

Remark: Inspection of the proof of Corollary 2.8 reveals that the actual number of simulation steps performed by the generator on input (n, ε) is O(n²(n + lg ε^(-1))). □

We turn now to the proof of Theorem 3.1, which illustrates a technique which will be employed throughout this chapter.

Proof of Theorem 3.1: Let N = 2^n be the number of states of MC(n). Our argument hinges on the following observation. Suppose it is possible to specify a canonical simple path in H(n) between each ordered pair of distinct states in such a way that no oriented edge of H(n) is contained in more than bN of the paths. If S is any subset of states with 0 < |S| ≤ N/2, then the number of paths which cross the cut from S to its complement is clearly

|S|(N − |S|) ≥ |S|N/2.

Hence for any such S the number of cut edges |Γ(S)| is bounded below by |S|N/(2bN) = |S|/2b, whence

μ(H(n)) = min_S |Γ(S)|/|S| ≥ 1/2b.   (3.1)

The problem of bounding μ(H(n)) below can therefore be reduced to one of defining a collection of canonical paths in H(n) which are "sufficiently edge disjoint", as measured by the bottleneck parameter b.

We now proceed to define a suitable set of paths. Let u = (u_i)_{i=0}^{n−1} and v = (v_i)_{i=0}^{n−1} be distinct elements of B(n), and i_1 < ··· < i_l be the positions in which u and v differ. Then for 1 ≤ j ≤ l, the jth edge of the canonical path from u to v corresponds to a transition in which the i_jth bit is flipped from u_{i_j} to v_{i_j}.

Consider now an arbitrary transition t of MC(n) (or, equivalently, an oriented edge of H(n)); our aim is to bound the number of paths which contain t. Suppose that t takes state w = (w_i) to state w' = (w'_i) by flipping the value of w_k, and let P(t) denote the set of paths containing t, viewed as ordered pairs of states. Rather than counting elements of P(t) directly, we will set up an injective mapping from P(t) into the state space B(n); this will yield an upper bound on the bottleneck parameter b appearing in (3.1). The mapping σ_t : P(t) → B(n) is defined as follows: given an ordered pair (u, v) ∈ P(t), set σ_t(u, v) = (s_i), where

s_i = u_i for 0 ≤ i ≤ k, and s_i = v_i for k < i < n.
Thus σ_t(u, v) agrees with u on the first k + 1 bits and with v on the remainder. Note that we can express this definition more succinctly as σ_t(u, v) = u ⊕ v ⊕ w', where ⊕ denotes bitwise exclusive-or. We claim that σ_t(u, v) is an unambiguous encoding of the endpoints u and v, so that σ_t is indeed injective. To see this, simply note that

u_i = s_i for 0 ≤ i ≤ k, and u_i = w_i for k < i < n;

v_i = w'_i for 0 ≤ i ≤ k, and v_i = s_i for k < i < n.

Hence u and v may be recovered from knowledge of t and σ_t(u, v), so σ_t is injective. It follows immediately that |P(t)| ≤ N; in fact, since all vectors (s_i) in the range of σ_t satisfy s_k = w_k, we have the stronger result |P(t)| ≤ |B(n)|/2 = N/2. Since t was chosen arbitrarily, the number of paths traversing any oriented edge cannot exceed N/2. Setting b = 1/2, inequality (3.1) now yields the desired bound on the magnification μ(H(n)). □

Remark: The bound of Theorem 3.1 is tight. To see this, let S be the subset of B(n) consisting of all vectors with first bit 0 and note that |Γ(S)|/|S| = 1. Hence μ(H(n)) = 1 for the n-dimensional hypercube. However, the bound on the second eigenvalue provided by Lemma 2.4, namely λ_1 ≤ 1 − 1/8n², is not tight: the exact value of λ_1 is 1 − c/n for a constant c. □

Some observations on the above proof are in order here. Passing from the conductance (a "cut" quantity) to paths (a "flow-like" quantity) is analogous to transforming the problem into its dual: this is advantageous because we need only exhibit some good collection of paths rather than argue about all possible cuts. (This analogy is made explicit in the Appendix.) Similar ideas have been used before in the literature to investigate the connectivity properties of various graphs in other contexts (94). The principal novelty of our proof lies in the use of the injective mapping technique to bound the number of paths which traverse an edge. This is not actually necessary in this simple example as the paths could have been counted explicitly. The point is that in more complex cases the states of the chain will be less trivial structures, such as matchings in a graph, and we will have no useful information about their number - indeed, this is what we will ultimately be trying to compute. It is then crucial to be able to bound the maximum number of paths through any edge in terms of the number of states without explicit knowledge of these quantities. This is precisely what the injective mapping technique achieves. As we shall see presently, it turns out to be rather generally applicable.

Other simple Markov chains may be analysed in a similar fashion. Examples of rapidly mixing families include random walks on n-dimensional cubes of side d and a host of "card-shuffling" processes whose state space is the set of permutations of n objects and whose transitions correspond to some natural shuffling scheme (see, e.g., (23)). Here we content ourselves with two further examples to illustrate the generality of our approach before passing on to more interesting problems.

Example 3.3 Let us analyse a simple card-shuffling process based on random transpositions, as studied in (24). For a natural number n, let S_n denote the set of permutations of the set [n] = {0, ..., n − 1}. Consider a deck of n cards labelled with the elements of [n], and identify p = (p_0, ..., p_{n−1}) ∈ S_n with the ordering of the deck in which the ith card from the top is p_i. We define a Markov chain MC(n) with state space S_n in which transitions are made by picking a pair of cards at random (without replacement) and interchanging them. As usual, we incorporate a self-loop
probability of 1/2 for each state. Here again we have an ergodic random walk on a regular graph of degree n(n − 1)/2 and we need to look at its magnification.

A canonical path between permutations u, v of the deck can be described as follows: for successive values k = 0, ..., n − 1, move the card v_k into position k (if it is not there already) by interchanging it with the current kth card. Consider now some transition t which interchanges the cards in positions j and k of the permutation w, with j > k, and as before let P(t) be the set of paths containing t. The injective mapping σ_t : P(t) → S_n is a little more subtle here. For (u, v) ∈ P(t), we refer to the positions of a given card in u, v, w respectively as its initial, final and current positions. The permutation σ_t(u, v) is then defined as follows:

(i) place the cards w_0, ..., w_{k−1} in their initial positions;

(ii) place the remaining n − k cards in the vacant positions in the order in which they appear in the final permutation v.

Let us now check that σ_t is injective. Given t and σ_t(u, v) we can uniquely recover v as follows: the final positions of cards w_0, ..., w_{k−1} are the same as their current positions, and the final order of the remaining cards may be read off from σ_t(u, v). To recover u, note that the initial positions of w_0, ..., w_{k−1} are just as in σ_t(u, v); but these positions together determine,
for every i, the current position of the card initially in position i, since each previous transition on the path involved moving one of w_0, ..., w_{k−1} into its final position. Hence we can deduce the initial positions of all cards, and so recover u.

Thus σ_t is injective, and we have |P(t)| ≤ N, where N = |S_n| is the total number of states. It follows from (3.1), with b = 1, and Proposition 2.11 that the conductance of MC(n) is at least 1/2n(n − 1), so this family of card-shuffling processes is rapidly mixing. The number of simulation steps required to achieve tolerance ε is seen from the proof of Corollary 2.8 and an application of Stirling's approximation to be O(n⁴(n lg n + lg ε^(-1))). □

Our final example illustrates a simple but useful generalisation of our path counting technique.
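The injective-mapping argument in the proof of Theorem 3.1 is easy to verify exhaustively for small n. The following sketch (an illustrative check, not part of the text) builds every canonical path in the hypercube H(4), groups the ordered pairs (u, v) by the oriented edges their paths use, and confirms that σ_t(u, v) = u ⊕ v ⊕ w' is injective on each P(t), with |P(t)| ≤ N/2:

```python
from itertools import product

n = 4
N = 2 ** n
states = list(product((0, 1), repeat=n))

def canonical_path(u, v):
    """Oriented edges of the canonical path: flip differing bits in increasing order."""
    w, edges = list(u), []
    for i in range(n):
        if w[i] != v[i]:
            w2 = list(w)
            w2[i] = v[i]
            edges.append((tuple(w), tuple(w2)))
            w = w2
    return edges

# Group ordered pairs (u, v) by the transitions t their canonical paths contain.
paths_through = {}
for u in states:
    for v in states:
        if u != v:
            for t in canonical_path(u, v):
                paths_through.setdefault(t, []).append((u, v))

for (w, w2), pairs in paths_through.items():
    # sigma_t(u, v) = u XOR v XOR w', computed bitwise
    images = [tuple(ui ^ vi ^ wi for ui, vi, wi in zip(u, v, w2))
              for u, v in pairs]
    assert len(set(images)) == len(pairs)   # sigma_t is injective on P(t)
    assert len(pairs) <= N // 2             # hence |P(t)| <= N/2
print("checked", len(paths_through), "oriented edges")
```

Every oriented edge of H(4) occurs as some P(t) (take u = w, v = w'), so the loop checks the bound b = 1/2 on all n·2^n transitions.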
P(t) denote the set of paths that contain edge t. We define the mapping at: P(t) ~ H(n,k) by at(u,v) = uEBvEBw', just as in the hypercube
example of Theorem 3.1. However, note that at is not necessarily injective here; what we must do is bound the expected number of paths with a given image under at. Note that at( U, v) EB w' = u EB v, so al paths with the same
image z have the same length m and the same probabilty m! -2 of being chosen, where 2m = IZ EB w'l. Moreover, the total number of paths which, given that they are chosen, have image z under at is
m-1 ~ m; 1( r!)2 2(m _ r _ I)! 2 = m(m _ I)! 2.
Example 3.4 We wil show how to construct an effcient Markov chain
(Here r corresponds to the distance along the path from u to w.) Thus the expected number of paths with image z is m!-2 m(m - I)! 2 = m-1. Finaly, summing over images, and noting that the range of at consists of
generator for subsets of an n-set of cardinality k. Equivaently, let SUBS
al subsets that contain j and do not contain i, we see that the expected
be the relation which associates with pais of natural numbers n, k all bit vectors of length n containg precisely k 1 's. For each pair n, k with o -: k -: n define the Markov chain MC(n, k) with state space sUBs(n, k)
number of paths containing t is
and transitions as follows: randomly select a pai of positions, one of which contains a 0 and the other a 1, and interchange their vaues. Adding a self-
loop probabilty of 1j2 to each state, we may view MC(n, k) as an ergodic random wal on a graph H(n, k) of degree k(n - k). This Markov chain corresponds essentialy to the classical Bernoul-Laplace diffsion modeL. It seems diffcult to
get a usefu bound on tt(H(n, k)) using canonical paths
as in the above examples. However, the following simple generalsation
of the path counting technique, suggested by Dagum et al [22], does give a good bound. Suppose we define a set of paths (rather than a single canonical path) between each pair of states in the underlying graph, and then choose one path from the set uniformly at random. If the sets of paths are defined in such a way that the expected number of paths containing any given oriented edge is at most bN, where N is the number of states, then we can deduce that μ ≥ 1/2b, exactly as before. (Clearly the same result holds if the paths are chosen from any probability distribution.) For our present example, let u, v ∈ SUBS(n,k) and take as the set of paths from u to v all shortest paths in H(n,k) from u to v; thus if |u ⊕ v| = 2m, there are (m!)² such paths, each of length m. (Each path corresponds to an ordering of the m elements of u − v and an ordering of the m elements of v − u.) Now let t be a transition that takes state w to w' by interchanging the values w_i = 0 and w_j = 1. For a particular choice of random paths, the expected number of paths containing t is

   Σ_{m=1}^{min(k,n−k)} (1/m) (k−1 choose m−1) (n−k−1 choose m−1) ≤ N/n,

where N = (n choose k) is the total number of subsets. Appealing to (3.1) with b = n⁻¹ now yields μ(H(n,k)) ≥ n/2. The conductance of MC(n,k) is therefore at least n/4k(n−k), and we have rapid mixing. Note that the bound on the second eigenvalue given by Lemma 2.4 is λ₁ ≤ 1 − n²/32k²(n−k)², which in the classical case n = 2k becomes λ₁ ≤ 1 − 1/2n². Again this is not tight: the exact value here is λ₁ = 1 − c/n for a constant c. □

The Markov chain families mentioned in this section all possess a highly symmetrical structure which makes them particularly easy to analyse. These and similar processes have also been studied using other methods such as coupling, stopping times and group representation theory: see [1, 3, 23, 24] for a variety of examples. The time bounds obtained by these methods are generally rather tighter than ours and can often be shown to be optimal. However, the full power of our approach will become apparent in the remainder of this monograph, where it will permit the analysis of highly non-symmetric chains with only a little additional effort. Most significantly, such chains have so far not proved amenable to analysis by any of the other established methods.
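The congestion bound used above is easy to check numerically. The following Python sketch (illustrative only, not part of the text) evaluates the sum over m for small n and k and compares it with N/n, confirming that the bottleneck parameter b may be taken to be 1/n:

```python
from math import comb

def expected_congestion_bound(n, k):
    """Expected number of random shortest paths through a fixed
    transition of the chain on k-subsets of [n], as bounded in the text."""
    return sum(comb(k - 1, m - 1) * comb(n - k - 1, m - 1) / m
               for m in range(1, min(k, n - k) + 1))

# The claim is that this sum is at most N/n, where N = C(n, k).
for n in range(4, 12):
    for k in range(2, n - 1):
        assert expected_congestion_bound(n, k) <= comb(n, k) / n + 1e-9
```

In fact the small cases suggest the bound holds with equality, so nothing is lost in taking b = 1/n here.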
3.2 Approximating the permanent

In this section we treat our first major example: the groundwork of Chapter 2 will begin to bear fruit in the form of a significant and unexpected approximability result. The permanent of an n × n matrix A with 0-1 entries a_ij is defined by

   per(A) = Σ_σ Π_{i=0}^{n−1} a_{iσ(i)},

where the sum is over all permutations σ of the set [n]. Evaluating per(A) is equivalent to counting perfect matchings (1-factors) in the bipartite graph G = (V₁, V₂, E), where V₁ = {x_0, ..., x_{n−1}}, V₂ = {y_0, ..., y_{n−1}} and (x_i, y_j) ∈ E iff a_ij = 1. The permanent function arises naturally in a number of fields, including algebra, combinatorial enumeration and the physical sciences, and has been an object of study by mathematicians since first appearing in 1812 in the work of Cauchy and Binet. We shall mention an application in statistical physics in the next section; for further background information see [·].
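To make the definition concrete, here is a small Python sketch (illustrative, not part of the text) that evaluates per(A) directly from the defining sum and checks it against a brute-force count of perfect matchings of the associated bipartite graph; both procedures take time proportional to n · n! and are only feasible for tiny n, which is precisely the point of what follows:

```python
from itertools import permutations

def permanent(A):
    """per(A) = sum over permutations sigma of prod_i A[i][sigma(i)]."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

def count_perfect_matchings(A):
    """Perfect matchings of the bipartite graph with (x_i, y_j) an edge
    iff A[i][j] = 1; each matching assigns a distinct y to every x."""
    n = len(A)
    return sum(all(A[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
assert permanent(A) == count_perfect_matchings(A) == 2
```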
Despite considerable effort, and in sharp contrast with the syntactically very similar determinant, which can readily be evaluated, no efficient procedure for computing the permanent is known. Convincing evidence for its inherent intractability was provided in the late 1970s by Valiant [96], who demonstrated that the problem of counting perfect matchings in a bipartite graph is #P-complete. On the other hand, it is well known that the corresponding construction problem is solvable in polynomial time for arbitrary graphs [30]. The perfect matchings relation therefore belongs to the class discussed in Section 1.5, and the question of approximability is pertinent.

Until recently, little tangible progress had been made in the area of efficient approximation algorithms for the permanent: to date, the best known algorithm is due to Karmarkar et al [55] and has a runtime which grows exponentially with the input size.¹ In 1985, Broder [16] proposed a Markov chain approach for almost uniformly generating perfect matchings, which in view of the relationships of Section 1.4 could be used to count them. However, Broder was unable to demonstrate that his algorithm is efficient. In this section, we fill this gap by showing that the method does indeed yield a f.p. randomised approximate counter for perfect matchings in a large class of graphs, including all graphs which are sufficiently dense. The existence of an efficient randomised approximation algorithm for the dense permanent is therefore established for the first time. The crucial step is to show that the appropriate family of Markov chains on matchings is rapidly mixing. In the next section, we will present and analyse a more natural algorithm based on an alternative Markov chain.

Let G = (V₁, V₂, E) be a bipartite graph with |V₁| = |V₂| = n, and for k ∈ N let M_k(G) denote the set of matchings of size k in G. We assume throughout that G has a perfect matching, i.e., that M_n(G) is non-empty. This assumption constitutes no real restriction since it can be tested in polynomial time as noted above. We view elements of E as unordered pairs of vertices, and matchings in G as subsets of E. If A, B ⊆ E and e ∈ E then A ⊕ B denotes the symmetric difference of A and B, while A + e and A − e denote the sets A ∪ {e}, A \ {e} respectively.

Following Broder [16], we proceed to define a Markov chain MC_pm(G) with state space N = M_n(G) ∪ M_{n−1}(G). Note that N includes auxiliary states, namely "near-perfect" matchings in G, which will permit free movement of the process between perfect matchings. Transitions in the chain are specified as follows: in any state M ∈ N, choose an edge e = (u,v) ∈ E uniformly at random, and then

(i) if M ∈ M_n(G) and e ∈ M, move to state M' = M − e (Type 1 transition);

(ii) if M ∈ M_{n−1}(G) and u, v are unmatched in M, move to M' = M + e (Type 2 transition);

(iii) if M ∈ M_{n−1}(G), u is matched to w in M, and v is unmatched in M, move to M' = (M + e) − (u,w) (Type 0 transition);

(iv) in all other cases, do nothing.

For the sake of convenience, we introduce an additional self-loop probability of 1/2 for each state; i.e., with probability 1/2 the process does not select a random edge as above but simply remains at M. MC_pm(G) may be viewed as a random walk on an appropriate graph H of maximum degree n. It is not hard to see that H is connected, so MC_pm(G) is ergodic and reversible with uniform stationary distribution.
¹The performance of this algorithm on an appropriate class of random instances, however, has recently been shown to be dramatically better [48].
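The transition rule is simple to simulate. The following Python sketch (illustrative, with an assumed edge-list representation of G; vertex names are arbitrary hashables) implements one step of MC_pm(G), including the 1/2 self-loop:

```python
import random

def mc_pm_step(M, edges, n, rng=random):
    """One transition of the chain MC_pm(G).

    M     : current matching, a set of edges; each edge is a frozenset {u, v}
    edges : list of all edges of G
    n     : size of a perfect matching of G
    """
    if rng.random() < 0.5:                      # self-loop with probability 1/2
        return M
    e = rng.choice(edges)                       # uniformly random edge of G
    u, v = tuple(e)
    partner = {}                                # matched vertex -> its partner
    for f in M:
        a, b = tuple(f)
        partner[a], partner[b] = b, a
    if len(M) == n and e in M:
        return M - {e}                          # Type 1: delete an edge
    if len(M) == n - 1 and u not in partner and v not in partner:
        return M | {e}                          # Type 2: add an edge
    if len(M) == n - 1 and (u in partner) != (v in partner):
        x = u if u in partner else v            # Type 0: slide an edge
        return (M - {frozenset({x, partner[x]})}) | {e}
    return M                                    # all other cases: do nothing
```

Each step visibly preserves membership of the state in N = M_n(G) ∪ M_{n−1}(G), which is the invariant the analysis below relies on.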
We now consider using the algorithm of Figure 2.1 in conjunction with the family of chains MC_pm(G) as an almost uniform generator for perfect matchings. Stepwise simulation of transitions is readily performed by an OCM in time O(|E|), and we have already noted that the construction problem for perfect matchings can be solved in polynomial time. Hence conditions (mc1) and (mc2) hold. Condition (mc3), however, presents a problem, since G may in general contain many more near-perfect than perfect matchings. Let us call G dense if its minimum vertex degree is at least n/2. It is not hard to check (see below) that, if G is dense, |M_n(G)|/|N| ≥ 1/n², so that (mc3) holds. Remarkably, under this assumption it is also possible to prove condition (mc4), i.e., that the family of Markov chains is rapidly mixing. This is a consequence of the following theorem.

Theorem 3.5 For dense bipartite graphs G, the conductance of the underlying graph of the Markov chain MC_pm(G) is at least 1/12n⁶.

Proof: It is sufficient to show that the graph H defining the random walk performed by MC_pm(G) has magnification

   μ(H) ≥ 1/6n⁴,                                              (3.2)

for then the theorem follows from Proposition 2.11 with β/d = 1/2|E| ≥ 1/2n². To show (3.2) we proceed as in the proofs of the previous section by defining a set of canonical paths in H. If no transition occurs in more than b|N| of these, (3.1) gives us a bound on the magnification in terms of the bottleneck parameter b.

We begin by specifying, for each M ∈ N, canonical paths to and from a unique closest perfect matching M̂ ∈ M_n(G) as follows, where u, v denote the unmatched vertices (if any) of M:

(i) if M ∈ M_n(G) then M̂ = M and the path is empty;

(ii) if M ∈ M_{n−1}(G) and e = (u,v) ∈ E, then M̂ = M + e and the path consists of a single Type 2 transition;

(iii) if M ∈ M_{n−1}(G) and (u,v) ∉ E, fix some (u',v') ∈ M such that (u,v'), (u',v) ∈ E: note that at least one such edge must exist by the density assumption on G. Then M̂ = (M − (u',v')) + (u,v') + (u',v), and we specify one of the two possible paths of length 2 from M to M̂, involving a Type 0 transition followed by a Type 2 transition.

The canonical path from M̂ to M consists of the same edges of H traversed in the opposite direction. For future reference, we observe that no perfect matching is involved in too many canonical paths of the above form: for M ∈ M_n(G), define the set

   K(M) = { M' ∈ N : M̂' = M }.

Then, since each matching in K(M) has at least n − 2 edges in common with M, it is easy to see that |K(M)| ≤ n². Note that the sets K(M) partition N. This implies that |N| ≤ n²|M_n(G)|, thus verifying our earlier claim that the near-perfect matchings are not too numerous. It is also worth noting that this is the only point in the proof at which the bipartite structure of G is used: we shall have more to say about this later.

Next we define a canonical path in H between an ordered pair I, F of perfect matchings (refer to Figure 3.1(a)). To do this, we first assume a fixed ordering of all even cycles of G, and distinguish a start vertex in each cycle. Now consider the symmetric difference I ⊕ F; we may write this as a sequence C_1, ..., C_r of disjoint even cycles, each of length at least 4, where the indices respect the above ordering. The path from I to F involves unwinding each of the cycles C_1, ..., C_r in turn in the following way. Suppose the cycle C_i has start vertex u_0 and consists of the sequence of distinct vertices (u_0, v_0, u_1, v_1, ..., u_l, v_l), where (u_j, v_j) ∈ I for 0 ≤ j ≤ l and the remaining edges are in F. Then the first step in the unwinding of C_i is a Type 1 transition which removes the edge (u_0, v_0). This is followed by a sequence of l Type 0 transitions, the jth of which replaces the edge (u_j, v_j) by (u_j, v_{j−1}). The unwinding is completed by a Type 2 transition which adds the edge (u_0, v_l).

The canonical path between any pair of matchings I, F ∈ N is now defined as the concatenation of three segments as follows:

initial segment: follow the canonical path from I to Î;
main segment: follow the canonical path from Î to F̂;
final segment: follow the canonical path from F̂ to F.

Now consider an arbitrary oriented edge of H, corresponding to a transition t in the Markov chain. We aim to establish an upper bound of the form b|N| on the number of canonical paths which contain this transition. Suppose first that t occurs in the initial segment of a path from I to F, where I, F ∈ N. Then it is clear from the definition of initial segment that the perfect matching Î is uniquely determined by t. But we have already seen that |K(Î)| ≤ n². Since I ∈ K(Î), the number of paths which contain t in their initial segment is thus at most n²|N|. A symmetrical argument shows that the number of paths containing t in their final segment is similarly bounded.

To handle the main segments of the paths, we make use of the injective mapping technique seen earlier. This will obviate the need for any explicit counting of structures in N, which is crucial here. Let t be a transition from M to M', where M, M' ∈ N are distinct, and denote by P(t) the
set of ordered pairs (I, F) of perfect matchings such that t is contained in the canonical path from I to F. We proceed to define, for each pair (I, F) ∈ P(t), an encoding σ_t(I,F) ∈ N from which I and F can be uniquely reconstructed. The intention is that, if C_1, ..., C_r is the ordered sequence of cycles in I ⊕ F, and t is traversed during the unwinding of C_i, then the encoding should agree with I on C_1, ..., C_{i−1} and on that portion of C_i which has already been unwound, and with F elsewhere.

[Figure 3.1(a): A transition t on the canonical path from I to F. The cycles C_1, ..., C_{i−1}, C_i, C_{i+1}, ..., C_r of I ⊕ F are shown, with the start vertex of the cycle C_i currently being unwound marked.]

With this in mind, consider the set S = I ⊕ F ⊕ (M ∪ M'). Since I ∩ F ⊆ M ∪ M' ⊆ I ∪ F and |I| = |F| = |M ∪ M'| = n, elementary set theory tells us that |S| = n. Furthermore, suppose that some vertex u has degree greater than 1 in S, i.e., we have (u,v₁), (u,v₂) ∈ S for distinct vertices v₁, v₂. Then necessarily (u,v₁) and (u,v₂) both lie in I ⊕ F, which in turn implies that neither edge lies in M ∪ M'. Hence the vertex u must be unmatched in M ∪ M'. From the form of the transitions, however, it is clear that M ∪ M' contains at most one such vertex u = u_t; moreover, this is the case iff t is a Type 0 transition, and u_t must then be the start vertex of the cycle currently being unwound. In this case, we denote by e_{I,t} the edge of I incident with u_t.

We are now in a position to define the encoding:

   σ_t(I,F) = (I ⊕ F ⊕ (M ∪ M')) − e_{I,t},   if t is Type 0;
   σ_t(I,F) = I ⊕ F ⊕ (M ∪ M'),               otherwise.

[Figure 3.1(b): The corresponding encoding σ_t(I,F).]

Figure 3.1(b) illustrates this definition for a Type 0 transition. In view of the above discussion, σ_t(I,F) is always a matching of cardinality at least n − 1, and hence an element of N. It remains for us to show that I and F can be recovered from it.

First observe that I ⊕ F can be recovered immediately using the relation

   I ⊕ F = (σ_t(I,F) ⊕ (M ∪ M')) + e_{I,t},   if t is Type 0;
   I ⊕ F = σ_t(I,F) ⊕ (M ∪ M'),               otherwise.

(Note that e_{I,t} is the unique edge that must be added to σ_t(I,F) ⊕ (M ∪ M') to ensure that I ⊕ F is a union of disjoint cycles.) Thus we may infer the ordered sequence C_1, ..., C_r of cycles to be unwound on the path from I to F. The cycle C_i which is currently being unwound, together with its parity with respect to I and F, is then determined by the transition t. The parity of all remaining cycles may be deduced from M and the cycle ordering. Finally, the remaining portions of I and F may be recovered using the fact that I ∩ F = M \ (I ⊕ F). Hence σ_t(I,F) uniquely determines the pair (I, F), so σ_t is an injective mapping from P(t) to N.

The existence of σ_t ensures that |P(t)| ≤ |N| for any transition t. Since also |K(M)| ≤ n² for any perfect matching M, we see that t is contained in
the main segment of at most n⁴|N| paths. Combining this with the results for initial and final segments derived earlier, we deduce that the maximum total number of paths which contain t is bounded by

   (n² + n² + n⁴)|N| ≤ 3n⁴|N|.

Taking b = 3n⁴ in (3.1) yields (3.2), which completes the proof. □

The characterisation of Corollary 2.8 now ensures that the Markov chains MC_pm(G) constitute a rapidly mixing family. (Note that the number of perfect matchings in G is at most n!, so the minimum stationary probability π_min of MC_pm(G) satisfies lg π_min ≥ −cn lg n for some constant c.) In the light of the discussion preceding Theorem 3.5 we therefore have

Corollary 3.6 There exists a f.p. almost uniform generator for perfect matchings in dense bipartite graphs. □

Remarks: (a) In his original paper [16], Broder claimed that the above rapid mixing property holds under the same density assumption. However, his proof based on coupling ideas is both complex and fundamentally flawed. The problem is that the "coupling" defined in [16] is not in fact a coupling, because one of the two processes involved is not a faithful copy of MC_pm(G): this is explained in detail by Mihail [74]. As a result Broder has withdrawn his proof (see Erratum to [16]). We see this as an indication that coupling and related methods may not be well suited to the analysis of Markov chains which lack a high degree of symmetry.

(b) A chain with a rather better conductance bound is obtained by modifying MC_pm(G) slightly so that transitions are effected by selecting a random vertex in V₂ rather than a random edge. This gives us a random walk with transition probabilities 1/2n rather than 1/2|E|, and the same bound on the magnification.

(c) In the case that G is the complete bipartite graph K_{n,n}, the Markov chain in (b) may be viewed as a scheme for shuffling a deck of n + 1 cards in which the top card is repeatedly interchanged with another card selected at random. Of course, MC_pm(K_{n,n}) itself provides a generator for all permutations of n objects, albeit rather indirectly. By appropriate choice of G, we can also generate in polynomial time various natural restricted classes of permutations which satisfy the above density condition, such as displacements or ménage arrangements. □

Let us now return to the approximation of the permanent. Unfortunately, the introduction of the density assumption means that we are no longer working with a self-reducible relation, so we cannot apply the reduction of Theorem 1.14 to get an approximate counter for perfect matchings. However, the same result is achieved by a more specialised construction sketched by Broder [16], which we spell out in detail below. A more natural reduction will be presented in the next section.

Corollary 3.7 There exists a f.p. randomised approximate counter for perfect matchings in dense bipartite graphs, and hence a f.p. randomised approximation scheme for the permanent of dense square 0-1 matrices.

To simplify the proof, we first derive an elementary statistical fact which says that, for a finite set S and a subset U ⊆ S, the ratio |U|/|S| can be estimated efficiently by almost uniform sampling from S provided the ratio is not too small.

Proposition 3.8 Let S be a finite set and U a subset of S. Write p = |U|/|S|. Suppose that t elements of S are selected independently and with replacement from an almost uniform distribution within tolerance ε ∈ (0,1), and let X denote the proportion of the sample which belong to U. Then for any δ ∈ (0,1), the sample size t required to ensure that

   Pr(X approximates p within ratio 1 + 5ε) ≥ 1 − δ

is at most (54/εp) ln(2/δ).

Proof: Writing p' for the expectation of X we have, since p' approximates p within ratio (1 + ε)² ≤ 1 + 3ε,

   Pr(X approximates p within ratio 1 + 5ε)
      ≥ Pr(X approximates p' within ratio 1 + ε/2)
      ≥ Pr(|X − p'| ≤ εp'/3)
      ≥ 1 − 2 exp(−εp't/27),

where the last inequality is derived from Chernoff's bound on tails of the binomial distribution [6, Proposition 2.4]. This latter expression certainly exceeds 1 − δ provided t ≥ (54/εp) ln(2/δ). □
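The estimator of Proposition 3.8 is easy to exercise empirically. The Python sketch below is illustrative only: it samples exactly uniformly, so the tolerance ε enters only through the sample-size formula, and it uses the true p to size the sample, whereas in applications one would substitute a lower bound on p:

```python
import math
import random

def estimate_ratio(S, U, eps, delta, rng=random):
    """Estimate p = |U|/|S| by uniform sampling with replacement,
    using the sample size t <= (54/(eps*p)) ln(2/delta) of Prop. 3.8."""
    p = len(U) / len(S)                       # stand-in for a known lower bound on p
    t = math.ceil((54 / (eps * p)) * math.log(2 / delta))
    hits = sum(rng.choice(S) in U for _ in range(t))
    return hits / t

rng = random.Random(0)
S = list(range(1000))
U = set(range(100))                           # p = 0.1
x = estimate_ratio(S, U, eps=0.1, delta=0.1, rng=rng)
assert 0.1 / 1.5 <= x <= 0.1 * 1.5            # well within ratio 1 + 5*eps
```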
Proof of Corollary 3.7: Let G = (V₁, V₂, E) with |V₁| = |V₂| = n be dense. For l = 0, ..., n−1, consider the graph G_l obtained from G by appending l vertices to each part of the bipartition, each new vertex being connected to all vertices of G in the opposite part. More precisely, G_l is the graph (V₁', V₂', E'), where V₁' = V₁ ∪ {x_0, ..., x_{l−1}}, V₂' = V₂ ∪ {y_0, ..., y_{l−1}} and

   E' = E ∪ {(u, y_i) : u ∈ V₁, i ∈ [l]} ∪ {(u, x_i) : u ∈ V₂, i ∈ [l]}.

Clearly G_l is dense. Now consider the set N = M_{n+l}(G_l) ∪ M_{n+l−1}(G_l). By setting up explicit bijections, it is readily seen that

   |M_{n+l}(G_l)|   = (l!)² |M_{n−l}(G)|                        (3.3)
   |M_{n+l−1}(G_l)| = (l!)² ( 2l |M_{n−l}(G)| + |M_{n−l+1}(G)| + (l+1)² |M_{n−l−1}(G)| ).

This suggests a procedure for estimating the ratio |M_{n−l}(G)|/|M_{n−l−1}(G)|: define p₁ = |M_{n−l}(G)|/|N|, p₂ = |M_{n−l−1}(G)|/|N|. By simulating the Markov chain MC_pm(G_l) repeatedly from the same initial state, generate almost uniformly, within small tolerance ε̄, some number of elements of N, and let s₁, s₂ be the proportions of the sample which correspond to matchings in G of size n−l and n−l−1 respectively. Then the quantity (l+1)²s₁/(2l+1)s₂ is an estimator of the desired ratio p₁/p₂. Provided this is sufficiently accurate for each l, the product of the estimated ratios gives a good approximation to |M_n(G)|. More precisely, for any specified accuracy ε ∈ (0,1) we can arrange for s₁, s₂ to approximate p₁, p₂ respectively within ratio 1 + ε/4n with probability at least 1 − 1/8n. Repeating this for each l, the final estimate approximates |M_n(G)| within ratio (1 + ε/4n)^{2n} ≤ 1 + ε with probability (1 − 1/8n)^{2n} ≥ 3/4.

To see that the necessary accuracy can be achieved in polynomial time, note first that

   1/n² ≤ |M_k(G)|/|M_{k+1}(G)| ≤ n²    for 0 ≤ k < n.         (3.4)

The lower bound is trivial, while the upper bound follows from the density assumption in the same manner as the bound on |K(M)| in the proof of Theorem 3.5. Hence from (3.3) the proportion of matchings in N corresponding to (n−l)-matchings in G is at least

   (l!)² (2l+1) |M_{n−l}(G)| / |N| ≥ (2l+1) / ( n²((l+1)² + 1) + 2l + 1 ) ≥ 1/3n³.

A similar bound holds for the proportion corresponding to (n−l−1)-matchings in G. Thus p₁ and p₂ are not too small. Putting the tolerance ε̄ = ε/20n and δ = 1/8n in Proposition 3.8 now ensures that the sample size required to achieve the specified accuracy is polynomially bounded in n and ε⁻¹. □

The reader may be wondering at this stage whether the problem of counting perfect matchings remains hard when restricted to dense graphs: if not, of course, the approximation results of this section would not be very exciting. The following result, which is proved in [16], serves to justify our approach.

Theorem 3.9 (Broder) The problem of counting perfect matchings in dense bipartite graphs is #P-complete. □

So far in this section we have concentrated exclusively on bipartite graphs because of their connection with the permanent. The Markov chain MC_pm(G) can be applied without essential modification to arbitrary graphs G. In fact, the only point at which we have relied on the bipartite structure of G is in the definition of the sets K(M) in the proof of Theorem 3.5 and the bound on their size. Let G = (V, E) be an arbitrary graph with |V| = 2n. As before, we assume that G contains a perfect matching. Call G dense if its minimum vertex degree is at least n. This ensures that K(M) for M ∈ M_n(G) is still well defined, and that |K(M)| ≤ 2n². The rest of the proof carries through as before, yielding b = 8n⁴ and consequently μ(H) ≥ 1/16n⁴. The conductance is therefore bounded below by 1/64n⁶. (This can again be improved if transitions are implemented by random vertex selection.) Since a construction analogous to that of Corollary 3.7 holds for general dense graphs, we have

Corollary 3.10 There exists a f.p. almost uniform generator and a f.p. randomised approximate counter for perfect matchings in arbitrary dense graphs. □
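The bijections behind (3.3) can be checked by brute force on a tiny instance. The sketch below (illustrative Python, not part of the text) counts matchings of each size by exhaustive recursion and verifies both identities of (3.3) for G = K_{2,2} with l = 1:

```python
from collections import Counter

def matchings_by_size(edges):
    """Return a Counter mapping size -> number of matchings of that size,
    by deciding for each edge in turn whether it is in the matching."""
    counts = Counter()
    def rec(i, used, size):
        if i == len(edges):
            counts[size] += 1
            return
        rec(i + 1, used, size)                    # edge i not in matching
        u, v = edges[i]
        if u not in used and v not in used:
            rec(i + 1, used | {u, v}, size + 1)   # edge i in matching
    rec(0, frozenset(), 0)
    return counts

# G = K_{2,2} (so n = 2), augmented to G_1 (l = 1) as in the proof:
# a new vertex x0 joined to all of V2, and y0 joined to all of V1.
G  = [("a0", "b0"), ("a0", "b1"), ("a1", "b0"), ("a1", "b1")]
G1 = G + [("a0", "y0"), ("a1", "y0"), ("x0", "b0"), ("x0", "b1")]

m, m1, n, l = matchings_by_size(G), matchings_by_size(G1), 2, 1
f = 1  # (l!)**2 for l = 1
assert m1[n + l] == f * m[n - l]
assert m1[n + l - 1] == f * (2*l*m[n-l] + m[n-l+1] + (l+1)**2 * m[n-l-1])
```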
We conclude this section by examining the rôle played in our results by the density assumption. In the reduction of Corollary 3.7, we used it to prove the polynomial upper bound (3.4) on the ratios |M_k(G)|/|M_{k+1}(G)|. The proof of Theorem 3.5 makes use of an even stronger property of dense graphs, namely that M_k(G) can be partitioned into classes of polynomially bounded size, one for each element of M_{k+1}(G), such that all matchings in a given class are "close to" the corresponding element of M_{k+1}(G). In fact, it will turn out that everything works under the considerably weaker
assumption that

   |M_{n−1}(G)| ≤ q(n) |M_n(G)|                                 (3.5)

for some fixed polynomial q, where 2n is the number of vertices in G. First we show that this condition implies a similar bound on all the ratios |M_k(G)|/|M_{k+1}(G)|, generalising (3.4). This follows from the surprising and beautiful fact that the sequence |M_k(G)| is log-concave, as we now demonstrate.

Lemma 3.11 For any graph G and positive integer k,

   |M_{k+1}(G)| |M_{k−1}(G)| ≤ |M_k(G)|².

Proof: A proof which relies on machinery from complex analysis can be found in [43, Theorem 7.1] (see also [65, Exercise 8.5.10]). We present an elementary combinatorial proof which uses ideas seen elsewhere in this chapter. Since log-concavity results in combinatorics tend to be rather hard to come by, we believe the simpler proof to be of independent interest.

Let k ∈ N⁺. We may assume that |M_{k+1}(G)| > 0, since the inequality is trivially true otherwise. Define the sets A = M_{k+1}(G) × M_{k−1}(G) and B = M_k(G) × M_k(G). Our aim is to show that |A| ≤ |B|.

Note first that, for any two matchings M, M' in G, the symmetric difference M ⊕ M' consists of a set of disjoint simple paths (possibly closed) in G. Let us call such a path an M-path if it contains one more edge of M than of M'; an M'-path is defined similarly. Clearly, all other paths in M ⊕ M' contain equal numbers of edges from M and M'. Now for any pair (M, M') ∈ A, the number of M-paths in M ⊕ M' must exceed the number of M'-paths by precisely 2. We may therefore partition A into disjoint classes {A_r : 0 < r ≤ k}, where

   A_r = { (M, M') ∈ A : M ⊕ M' contains r+1 M-paths and r−1 M'-paths }.

Similarly, the sets {B_r : 0 ≤ r ≤ k} with

   B_r = { (M, M') ∈ B : M ⊕ M' contains r M-paths and r M'-paths }

partition B. The lemma will follow from the fact that |A_r| ≤ |B_r| for each r > 0.

Let us call a pair (L, L') ∈ B_r reachable from (M, M') ∈ A_r if L ⊕ L' = M ⊕ M' and L is obtained from M by taking some M-path of M ⊕ M' and flipping the parity of all its edges with respect to M and M'. (This is analogous to unwinding the path in the manner of the proof of Theorem 3.5.) Clearly, the number of elements of B_r reachable from a given (M, M') ∈ A_r is just the number of M-paths in M ⊕ M', namely r + 1. Conversely, any given element of B_r is reachable from precisely r elements of A_r. Hence if |A_r| > 0 we have

   |B_r| / |A_r| ≥ (r+1)/r > 1,

which completes the proof of the lemma. □

Remark: In [43], the tight inequality

   |M_k(G)|² ≥ ( (k+1)(m−k+1) / k(m−k) ) |M_{k+1}(G)| |M_{k−1}(G)|

is proved, where m = ⌈n/2⌉ and n is the number of vertices in G. The bound in our proof can also be improved with a little more care, but we will not labour this point here as simple log-concavity is quite adequate for our purposes. □

Next we show that (3.5) is sufficient to ensure rapid mixing for the family of Markov chains MC_pm(G).

Theorem 3.12 For any graph G = (V, E) with |V| = 2n and |M_n(G)| > 0, the conductance of the underlying graph of MC_pm(G) is bounded below by

   (1/16|E|) ( |M_n(G)| / |M_{n−1}(G)| )².

Proof: The proof is similar to that of Theorem 3.5 with one or two additional technicalities. As before, let H be the graph defining the random walk performed by MC_pm(G) and N its state space. We will need a variant of the canonical path counting argument which deals only with certain types of paths. For a subset S of states with |S| ≤ |N|/2, let S̄ = N \ S and define

   S₁ = S ∩ M_n(G);       S̄₁ = S̄ ∩ M_n(G);
   S₂ = S ∩ M_{n−1}(G);   S̄₂ = S̄ ∩ M_{n−1}(G).

We claim that, when counting paths crossing the cut from S to S̄, it is enough to consider paths from S₁ to S̄₂ and from S₂ to S̄₁. To make this precise, write r = |M_n(G)|/|M_{n−1}(G)| and Q = |S₁|/|S₂|, and let P₁ = |S₁||S̄₂| and P₂ = |S₂||S̄₁| be the numbers of paths from S₁ to S̄₂ and from S₂ to S̄₁ respectively. Clearly, P₁ and P₂ are respectively increasing and decreasing functions of Q (for fixed r, |S|, |N|); moreover, it is easy
to check that they are equal when Q = r. Hence max{P₁, P₂} is minimised when Q = r, and the minimum value is

   r|S|(|N| − |S|) / (1+r)² ≥ r|S||N| / 8,

since clearly r ≤ 1.

Now all we need do is define canonical paths in H between elements of M_n(G) (perfect matchings) and elements of M_{n−1}(G) (near-perfect matchings). If no more than b|N| of these paths use any oriented edge of H, by analogy with (3.1) we will get the bound

   μ(H) ≥ (1/8b) ( |M_n(G)| / |M_{n−1}(G)| ).                   (3.6)

Henceforth we consider only paths from perfect to near-perfect matchings, the complementary case being symmetrical. As in the proof of Theorem 3.5, the canonical path from I ∈ M_n(G) to F ∈ M_{n−1}(G) is determined by the symmetric difference I ⊕ F, which now consists of an ordered sequence C_1, ..., C_r of disjoint cycles together with a single open path O both of whose endpoints are matched in I but not in F. The canonical path from I to F proceeds by unwinding first the C_i as before and then O in the obvious way, with one endpoint nominated as start vertex.

Now let t be a transition in MC_pm(G) from M to M' and P(t) ⊆ M_n(G) × M_{n−1}(G) the set of canonical paths which contain t. For (I, F) ∈ P(t), we define the encoding σ_t(I,F) exactly as in the proof of Theorem 3.5. A moment's reflection should convince the reader that σ_t(I,F) ∈ M_{n−1}(G) ∪ M_{n−2}(G), since we now get an additional pair of unmatched vertices arising from the open path O. (Note that this takes us outside the state space, but Lemma 3.11 will take care of this.) Recovery of I and F from t and σ_t(I,F) works essentially as before. Hence σ_t is an injective mapping and we have, using Lemma 3.11,

   |P(t)| ≤ |M_{n−1}(G)| + |M_{n−2}(G)|
          ≤ |M_{n−1}(G)| ( 1 + |M_{n−1}(G)| / |M_n(G)| )
          = |M_{n−1}(G)| |N| / |M_n(G)|.

Thus we may take b = |M_{n−1}(G)|/|M_n(G)| in (3.6) to get

   μ(H) ≥ (1/8) ( |M_n(G)| / |M_{n−1}(G)| )².

The final result is obtained via Proposition 2.11, the transition probabilities in MC_pm(G) being 1/2|E|. (As before, this can be improved using a more intelligent implementation.) □

Corollary 3.13 There exists a f.p. almost uniform generator and a f.p. randomised approximate counter for perfect matchings in any family of graphs satisfying (3.5) for some fixed polynomial q.

Proof: The generator is immediate from the foregoing theorem. The counter follows via the reduction of Corollary 3.7, once we have noted from (3.3) that the graphs G_l inherit from G a bound of the form (3.5) (for a different polynomial q). □

Remark: As observed by Dagum et al [22], the reduction from counting to generation described in Corollary 3.7 may be replaced by the following mechanism, with a small increase in efficiency. In analogous fashion to MC_pm(G), we may define for 1 ≤ k ≤ n a Markov chain MC_k(G) whose states are k- and (k−1)-matchings in G. (Thus MC_n(G) is just MC_pm(G).) By a straightforward extension of the proof of Theorem 3.12, whereby multiple rather than unique canonical paths between states are allowed in the manner of Example 3.4, it can be shown that each of the chains MC_k(G) is rapidly mixing under the same condition (3.5) on G. This allows the ratios |M_k(G)|/|M_{k−1}(G)| to be estimated directly for each k in turn. We do not dwell on this point here as we will present a more natural algorithm for the permanent in the next section. □

Our earlier results for dense graphs are now seen to be a special case of Corollary 3.13.² We might well want to ask whether other natural classes of graphs exist which satisfy (3.5) for a suitable polynomial q and for which membership of a given graph in the class can be tested easily. One observation which is often useful in this connection is the following. Let G be bipartite with 2n vertices. An alternating path for a matching M ∈ M_{n−1}(G) is a path whose edges lie alternately inside and outside M. Suppose that G has maximum vertex degree d, and that any M ∈ M_{n−1}(G) can be extended to a perfect matching in G by augmentation along an alternating path of length at most 2l+1. Then the ratio |M_{n−1}(G)|/|M_n(G)| is clearly bounded above by nd^l. (Note that this is essentially the mechanism we used for dense graphs, with l = 1.) Thus G satisfies (3.5) for some fixed polynomial q(n) if l = O(log_d n). This observation enables one to show

²Note that the density bound quoted is tight in the sense that it is possible to construct, for any fixed δ > 0, a sequence of (bipartite) graphs (G_n) with 2n vertices and minimum vertex degree ≥ n/(2+δ) such that the ratio |M_{n−1}(G_n)|/|M_n(G_n)| is exponentially large.
3.3 MONOMER-DIMER SYSTEMS
85
that almost all graphs in fact satisfy (3.5). The following strong statement of this fact is due to Mark Jerrum.

Theorem 3.14 (Jerrum) Let G = (V_1, V_2, E) be a random bipartite graph with |V_1| = |V_2| = n, where each edge is selected independently with probability p ≥ (360 log n)/n. Then, with probability 1 − O(n^{−1}), G satisfies |M_{n−1}(G)| ≤ n^{4+o(1)} |M_n(G)|. □

The crux of the proof is that a.e. such graph has maximum degree O(pn) and augmenting paths of length O(log_{pn} n). Combining Theorem 3.14 and Corollary 3.13, we immediately have the following result.

Corollary 3.15 There exists a f.p. almost uniform generator and a f.p. randomised approximate counter for perfect matchings in a.e. random bipartite graph as in Theorem 3.14. □

Of course, Corollary 3.15 implies the existence of a f.p. randomised approximation scheme for the permanent of a.e. random 0-1 matrix in the analogous model.

Remark: The bound on the density p in Theorem 3.14 is, up to a constant factor, equal to the threshold value for the existence of a perfect matching in such graphs [14], and hence almost best possible. By a more careful application of the same ideas, Corollary 3.15 may be extended to random graphs of arbitrary density. Specifically, if G = (V_1, V_2, E) is a random bipartite graph with |V_1| = |V_2| = n and any edge density p, then with probability tending to 1 as n tends to infinity, either G contains no perfect matching or |M_{n−1}(G)| ≤ n^{10} |M_n(G)|. (For the details, see [50].) Since the existence of a perfect matching can be tested in polynomial time, we conclude that Corollary 3.15 holds for random bipartite graphs of any density. □

Other classes of graphs can often be tackled using the same approach. For example, Dagum et al [22] have shown the existence of augmenting paths of constant length for all εn-regular bipartite graphs for any fixed ε > 0. Other simple deterministic criteria ensuring (3.5) doubtless exist.

Of perhaps greater interest, however, is the fact that the condition (3.5) can be tested for an arbitrary graph in randomised polynomial time, as we shall see in the next section. Thus we are able not only to approximate the permanent in almost all cases, but also to reliably identify the difficult instances. This property gives our algorithm a pleasing robustness that is often not exhibited by algorithms that work for random instances.

This concludes our discussion of perfect matchings for now. An alternative view of these results will emerge as a by-product of our work on a different problem in the next section.

3.3 Monomer-dimer systems

This section is concerned with counting and generating all matchings (independent sets of edges) in a graph. Apart from their inherent interest, these problems arise in the theory of statistical physics, which is a rich source of combinatorial counting and generation problems.

A monomer-dimer system consists of a graph G = (V, E), which is usually some form of regular lattice, whose vertices represent physical sites; adjacent pairs of sites may be occupied by diatomic molecules, or dimers. Configurations of the system correspond to arrangements of dimers on the lattice in which no two dimers overlap. In a configuration consisting of k ≤ |V|/2 dimers, unoccupied sites are referred to as monomers, and the ratio (|V| − 2k)/|V| is the monomer density. Monomer-dimer systems have been extensively studied as models of physical systems involving diatomic molecules. In the two-dimensional case, the system models the adsorption of dimers on the surface of a crystal. Three-dimensional systems occur in the theory of mixtures of molecules of different sizes and in the cell-cluster theory of the liquid state. For further information, see [43] and the references therein.
Most thermodynamic properties of such a system can be deduced from knowledge of the number of possible configurations, which is just the number of matchings in G. More generally, each edge e of G has an associated weight c(e) ∈ ℝ+ which represents the relative probability of occupation by a dimer. This will depend on the contribution of such a dimer to the global energy of the system. The quantity of interest is the partition function
    Z(G) = Σ_{M ∈ MATCH(G)} W(G, M),    (3.7)

where MATCH(G) is the set of matchings in G and W(G, M) = ∏_{e∈M} c(e) is the weight of M. Counting matchings, i.e., the special case of (3.7) in which all edge weights are unity, is a #P-complete problem even when restricted to planar graphs [46, 97]. The main result of this section is that the more general sum (3.7) can in fact be approximated efficiently for any weighted graph G.

We shall again proceed via a related generation problem for matchings. Since the sum in (3.7) is weighted, however, matchings should be generated not uniformly but with probabilities proportional to their weights. In fact, sampling monomer-dimer configurations from the weighted distribution is an interesting problem in its own right as a means of estimating the expectation of various physical operators. We begin by generalising the concepts of Chapter 1 a little to handle weighted structures.
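For very small graphs, the sum (3.7) can be evaluated directly by enumerating all subsets of edges. The following exponential-time sketch is an illustration of the definition only (it is not one of the algorithms developed in this section, and the function names are our own):

```python
from itertools import combinations

def partition_function(edges, weight):
    """Brute-force evaluation of Z(G): sum over all matchings M of
    prod_{e in M} c(e), by enumerating every subset of edges.
    Exponential in |E| -- for tiny illustrative graphs only."""
    total = 0.0
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            # a subset is a matching iff no vertex is covered twice
            covered = [v for e in subset for v in e]
            if len(covered) == len(set(covered)):
                prod = 1.0
                for e in subset:
                    prod *= weight[e]
                total += prod
    return total

# 4-cycle with unit weights: the matchings are the empty set,
# the 4 single edges, and the 2 perfect matchings, so Z = 7.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
w = {e: 1.0 for e in edges}
print(partition_function(edges, w))  # 7.0
```

With unit weights this is just the total number of matchings; non-trivial weights simply bias each term by the product of its edge weights.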
Let R be a relation over Σ*, and W : Σ* × Σ* → ℝ an arbitrary function. For any problem instance x ∈ Σ* and solution y ∈ R(x), we call W(x, y) the weight of y. We shall always assume that the weights are simple in the sense that W is computable in polynomial time. The weighted sum Σ_{y∈R(x)} W(x, y) will be denoted by #_W R(x). The concept of self-reducibility can be extended to the weighted case as follows. We call the pair (R, W) self-reducible if R satisfies conditions (sr1)-(sr3) of Section 1.1 and in addition (with the same notation):

(sr4) There exists a polynomial time computable function g : Σ* × Σ* × ℝ → ℝ such that, for all x ∈ Σ*,

    #_W R(x) = Σ_{w ∈ Σ^{σ(x)}} g(x, w, #_W R(ψ(x, w))).

This definition merely generalises the idea that #R(x) can be computed easily given the values of #R for a few smaller problem instances. The weighted counting problem for the pair (R, W) involves computing the function #_W R. By analogy with the unweighted case, this problem will be regarded as tractable if there exists a f.p. randomised approximation scheme for #_W R, as defined in Section 1.2.

The monomer-dimer partition function (3.7) can be expressed as the weighted counting problem for a self-reducible pair as follows. Let MATCH be the relation which associates with a (weighted) graph G all matchings in G; this relation was seen to be self-reducible in Example 1.1. Let c(e) denote the weight of the edge e in G, and for M ∈ MATCH(G) define W(G, M) = ∏_{e∈M} c(e) as in (3.7). Then the pair (MATCH, W) is self-reducible since

    #_W MATCH(G) = #_W MATCH(G−) + c(e) #_W MATCH(G+),

where e is any edge of G and G−, G+ are as in Example 1.1.

In the case that all solution weights are positive, a corresponding weighted generation problem for (R, W) may be defined: this requires that each solution y ∈ R(x) be output with probability proportional to its weight W(x, y). The various definitions of generators given in Section 1.2 carry over in an obvious way to the non-uniform case. In particular, our main notion of tractability is an almost W-generator g for R, which is defined exactly as for an almost uniform generator except that, for all inputs (x, ε) and solutions y ∈ R(x), the output probabilities must satisfy

    (1 + ε)^{−1} φ(x, ε) W(x, y) ≤ Pr[g(x, ε) = y] ≤ (1 + ε) φ(x, ε) W(x, y)

for some function φ : Σ* × ℝ+ → (0, 1]. As before, g is fully-polynomial iff its runtime is bounded by a polynomial in |x| and lg ε^{−1}.

The results of Section 1.3 transfer directly to the weighted case: thus we can work freely in the OCM model when considering approximation algorithms. Furthermore, it is easy to verify that, with minor modifications, the reductions of Theorems 1.10 and 1.14 also hold for weighted problems, giving the following generalisation of Corollary 1.15.

Corollary 3.16 Let R ⊆ Σ* × Σ* and W : Σ* × Σ* → ℝ+. If the pair (R, W) is self-reducible, then the following are equivalent:

(i) There exists a f.p. almost W-generator for R.

(ii) There exists a f.p. randomised approximation scheme for #_W R. □

Hence we will get a f.p. randomised approximation scheme for the monomer-dimer partition function provided we can generate matchings with probabilities roughly proportional to their weights. This we achieve using a suitable Markov chain simulation.

Given a graph G = (V, E) with positive edge weights {c(e) : e ∈ E}, we consider the Markov chain MC_md(G) with state space N = MATCH(G) and transitions as follows: in any state M ∈ N, choose an edge e = (u, v) ∈ E uniformly at random and then

(i) if e ∈ M, move to M − e with probability 1/(1 + c(e)) (Type 1 transition);

(ii) if u, v are both unmatched in M, move to M + e with probability c(e)/(1 + c(e)) (Type 2 transition);
(iii) if e′ = (u, w) ∈ M for some w, and v is unmatched in M, move to (M + e) − e′ with probability c(e)/(c(e) + c(e′)) (Type 0 transition);

(iv) in all other cases, do nothing.

As always, we simplify matters by adding a self-loop probability of 1/2 to each state. It is then readily checked that MC_md(G) is irreducible and aperiodic, and hence ergodic. Moreover, by considering the detailed balance equations (tr1) on page 45, it is easy to see that the chain is reversible and that the stationary probability π_M of M ∈ MATCH(G) is proportional to its weight W(G, M) = ∏_{e∈M} c(e). Thus the simulation procedure of Figure 2.1 will yield a f.p. almost W-generator for MATCH provided MC_md(G) is rapidly mixing. Applying the analysis of Chapter 2, the crucial fact is the following.

Theorem 3.17 For a graph G = (V, E) with positive edge weights {c(e) : e ∈ E}, the conductance of the underlying graph of the Markov chain MC_md(G) is at least 1/(8|E|c_max²), where c_max = max{1, max_{e∈E} c(e)}.

Proof: Let H be the underlying graph of MC_md(G). The first step is to establish a weighted version of the path counting argument which led to the bound (3.1). Suppose that between each ordered pair (I, F) of distinct states we have a canonical path in H consisting only of edges of non-zero weight (corresponding to valid transitions in the chain). Furthermore, let us associate a weight π_I π_F with the path from I to F. If S is any non-empty subset of states with capacity C_S = Σ_{M∈S} π_M ≤ 1/2, the aggregated weight of all paths crossing the cut from S to S̄ = N \ S satisfies

    Σ_{I∈S, F∈S̄} π_I π_F = C_S C_S̄ ≥ C_S / 2.    (3.8)

Now let t be a transition from a state M to a state M′ ≠ M, and as usual denote by P(t) the set of all ordered pairs (I, F) whose canonical path contains t. Suppose it is known that, for any such transition t, the aggregated weight of paths containing t satisfies

    Σ_{(I,F)∈P(t)} π_I π_F ≤ b w_t,    (3.9)

where w_t = π_M p_{MM′} = π_{M′} p_{M′M} is the weight of the edge in H corresponding to t. (p_{MM′} is the transition probability from M to M′.) Note that b here is precisely a weighted analogue of the bottleneck parameter we have been using in the previous two sections. Taking (3.8) and (3.9) together, we have the following bound on the ergodic flow out of S, where cut(S) denotes the set of transitions crossing the cut from S to S̄:

    F_S = Σ_{t∈cut(S)} w_t ≥ b^{−1} Σ_{t∈cut(S)} Σ_{(I,F)∈P(t)} π_I π_F ≥ b^{−1} Σ_{I∈S, F∈S̄} π_I π_F ≥ C_S / 2b.

By definition, the conductance of H therefore satisfies

    Φ(H) ≥ 1/2b.    (3.10)

Our aim is thus to define a set of paths that give a suitably small value for the bottleneck parameter b in (3.9).

To do this we generalise the proof of Theorem 3.5. Suppose there is an underlying order on all simple paths in G and designate in each of them a start vertex, which must be an endpoint if the path is not a cycle but is arbitrary otherwise. For distinct I, F ∈ N, we can write the symmetric difference I ⊕ F as a sequence Q_1, . . . , Q_r of disjoint paths which respects the ordering. The canonical path from I to F involves unwinding each of the Q_i in turn as follows. There are two cases to consider:

(i) Q_i is not a cycle. Let Q_i consist of the sequence (v_0, v_1, . . . , v_l) of vertices, with v_0 the start vertex. If (v_0, v_1) ∈ F, perform a sequence of Type 0 transitions replacing (v_{2j+1}, v_{2j+2}) by (v_{2j}, v_{2j+1}) for j = 0, 1, . . . , and finish with a single Type 2 transition if l is odd. If on the other hand (v_0, v_1) ∈ I, begin with a Type 1 transition removing (v_0, v_1) and proceed as before for the reduced path (v_1, . . . , v_l).

(ii) Q_i is a cycle. Let Q_i consist of the sequence (v_0, v_1, . . . , v_{2l+1}) of vertices, where l ≥ 1, v_0 is the start vertex, and (v_{2j}, v_{2j+1}) ∈ I for 0 ≤ j ≤ l, the remaining edges belonging to F. Then the unwinding begins with a Type 1 transition to remove (v_0, v_1). We are left with an open path O with endpoints v_0, v_1, one of which must be the start vertex of O. Suppose v_k, k ∈ {0, 1}, is not the start vertex. Then we unwind O as in (i) above but treating v_k as the start vertex. This trick serves to distinguish paths from cycles, as will prove convenient shortly.
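The structure underlying these canonical paths — the symmetric difference I ⊕ F of two matchings splits into vertex-disjoint alternating paths and cycles, since every vertex meets at most one edge of each matching — can be sketched as follows (an illustrative aid with names of our own choosing, not part of the proof):

```python
def symmetric_difference_components(I, F):
    """Decompose I ^ F (I, F matchings given as sets of vertex-pair
    tuples) into its connected components, each a simple path or
    cycle whose edges alternate between I and F."""
    diff = set(I) ^ set(F)
    adj = {}
    for (u, v) in diff:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, components = set(), []
    for start in adj:                 # open paths: start at an endpoint
        if start not in seen and len(adj[start]) == 1:
            components.append(_walk(start, adj, seen))
    for start in adj:                 # remaining vertices lie on cycles
        if start not in seen:
            components.append(_walk(start, adj, seen))
    return components

def _walk(start, adj, seen):
    """Follow unvisited neighbours from `start`, collecting vertices."""
    path, cur = [start], start
    seen.add(start)
    while True:
        nxt = [w for w in adj[cur] if w not in seen]
        if not nxt:
            return path
        cur = nxt[0]
        seen.add(cur)
        path.append(cur)

# Two perfect matchings of a 4-cycle differ in a single alternating cycle:
comps = symmetric_difference_components({(0, 1), (2, 3)}, {(1, 2), (3, 0)})
```

Because the maximum degree in I ⊕ F is two, every component found this way is a simple path or cycle, exactly the pieces Q_1, . . . , Q_r unwound above.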
Now let t be a transition from M to M′ ≠ M. The next step is to define our injective mapping σ_t : P(t) → N. As in the proof of Theorem 3.5, we set σ_t(I, F) equal to I ⊕ F ⊕ (M ∪ M′), and remove the edge e_{I,t} of I adjacent to the start vertex of the path currently being unwound if necessary: this is so iff the path is a cycle and t is Type 0. It is now easily seen that σ_t(I, F) consists of independent edges, and so is an element of N. The difference I ⊕ F can be recovered from σ_t(I, F) using the relation

    I ⊕ F = { (σ_t(I, F) ⊕ (M ∪ M′)) + e_{I,t},   if t is Type 0 and the current path is a cycle;
            { σ_t(I, F) ⊕ (M ∪ M′),               otherwise.

Note that we can tell whether the current path is a cycle from the sense of unwinding. Recovery of I and F themselves now follows as before from the path ordering. Hence σ_t is injective.

Moreover, it should be clear that σ_t(I, F) is very nearly the complement of M in the union of I and F viewed as a multiset, so that the product π_I π_F is approximately equal to π_M π_{σ_t(I,F)}, giving us a handle on the bottleneck parameter b in (3.9). We now make this precise.

Claim For any (I, F) ∈ P(t), we have

    π_I π_F ≤ 4|E| c_max² w_t π_{σ_t(I,F)}.    (3.11)

The Claim will be proved in a moment. First note that it immediately yields the desired bound b in (3.9), since for any transition t we have

    Σ_{(I,F)∈P(t)} π_I π_F ≤ 4|E| c_max² w_t Σ_{(I,F)∈P(t)} π_{σ_t(I,F)} ≤ 4|E| c_max² w_t,

where the second inequality follows from the fact that σ_t is injective. We may therefore take b = 4|E| c_max², which in the light of (3.10) gives the conductance bound stated in the theorem.

It remains only for us to prove the Claim. We distinguish three cases:

(i) t is a Type 1 transition. Suppose M′ = M − e. Then σ_t(I, F) = I ⊕ F ⊕ M, so, viewed as multisets, M ∪ σ_t(I, F) and I ∪ F are equal. Hence we have

    π_I π_F = π_M π_{σ_t(I,F)} = (w_t / p_{MM′}) π_{σ_t(I,F)} = 2|E| (1 + c(e)) w_t π_{σ_t(I,F)},

from which (3.11) follows.

(ii) t is a Type 2 transition. This is handled by a symmetrical argument to (i) above, with M replaced by M′.

(iii) t is a Type 0 transition. Suppose M′ = (M + e) − e′, and consider the multiset σ_t(I, F) ∪ M. This is equal to the multiset I ∪ F except that the edge e, and possibly also the edge e_{I,t}, are absent from it. Assuming e_{I,t} is absent, which happens precisely when the current path is a cycle, we have

    π_I π_F = c(e_{I,t}) c(e) π_M π_{σ_t(I,F)} = c(e_{I,t}) c(e) (w_t / p_{MM′}) π_{σ_t(I,F)} = 2|E| c(e_{I,t}) (c(e) + c(e′)) w_t π_{σ_t(I,F)},

which again satisfies (3.11). If e_{I,t} is not absent, the argument is identical with the factor c(e_{I,t}) omitted. This concludes the proof of the Claim and the theorem. □

Corollary 3.18 There exists a f.p. almost W-generator for matchings in arbitrary weighted graphs, where W is the weighting function for matchings defined above, provided the edge weights are positive and presented in unary.

Proof: Define c_min = min{1, min_{e∈E} c(e)}. Then the minimum stationary state probability in MC_md(G) is at least c_min^n 2^{−|E|} c_max^{−n}, where n = |V|. The logarithm of this quantity is at least −p(|x|), where |x| is the size of the input description and p is a polynomial. Hence by Theorem 3.17 and Corollary 2.8 the Markov chain family is rapidly mixing. Simulation of MC_md(G) by an OCM is a simple matter, and we may take the empty matching as initial state. □

In view of Corollary 3.16, we may now state the main result of this section.

Corollary 3.19 There exists a f.p. randomised approximation scheme for the monomer-dimer partition function of arbitrary weighted graphs with edge weights presented in unary. □
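The transition rules (i)-(iv) of MC_md(G) defined above are simple enough to sketch directly in code. This is a minimal illustration under assumed data representations (matchings as sets of vertex-pair tuples), not the simulation procedure of Figure 2.1:

```python
import random

def mcmd_step(M, edges, c):
    """Perform one transition of the monomer-dimer chain MC_md(G).
    M: current matching (set of edges (u, v)); edges: list of all
    edges of G; c: dict of positive edge weights.  A self-loop
    probability of 1/2 is applied first, as in the text."""
    if random.random() < 0.5:                      # self-loop
        return M
    e = random.choice(edges)                       # uniform random edge
    u, v = e
    matched = {x for f in M for x in f}
    if e in M:                                     # Type 1: remove e
        if random.random() < 1.0 / (1.0 + c[e]):
            return M - {e}
    elif u not in matched and v not in matched:    # Type 2: add e
        if random.random() < c[e] / (1.0 + c[e]):
            return M | {e}
    elif (u in matched) != (v in matched):         # Type 0: slide e
        x = u if u in matched else v
        e2 = next(f for f in M if x in f)          # the edge covering x
        if random.random() < c[e] / (c[e] + c[e2]):
            return (M - {e2}) | {e}
    return M                                       # otherwise do nothing

# Short run on a 4-cycle with unit weights; every state is a matching.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
c = {e: 1.0 for e in edges}
M = set()
for _ in range(1000):
    M = mcmd_step(M, edges, c)
```

With unit weights the stationary distribution is uniform over all matchings; general weights bias each matching M by ∏_{e∈M} c(e), as shown via detailed balance above.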
Remark: The special case in which all edge weights are unity corresponds to unweighted counting of matchings. Corollaries 3.18 and 3.19 therefore imply the existence of a f.p. almost uniform generator and a f.p. randomised approximate counter for all matchings in arbitrary graphs. □

As we have already mentioned, the monomer-dimer chain provides some further insight into the results of the previous section. In particular, it yields alternative and arguably more natural algorithms for generating and counting perfect matchings in families of graphs satisfying (3.5), and hence a new proof of Corollary 3.13. The key to these algorithms is the introduction of carefully chosen edge weights.

Let F be a family of graphs satisfying (3.5) for some fixed polynomial q, and G = (V, E) be a member of F with |V| = 2n and |M_n(G)| > 0. We introduce the notation G(c) to stand for the weighted graph obtained by assigning to each edge of G the weight c. To generate perfect matchings in G almost uniformly, we proceed as follows: for some suitable c > 1, use the Markov chain MC_md(G(c)) as above to generate matchings in G(c) (or equivalently in G) from the distribution determined by the edge weights, and fail if the output is not a perfect matching. For any value of c, the induced distribution on perfect matchings is clearly uniform. Now suppose we choose c = 2q(n); since all edge weights are polynomially bounded, Corollary 3.18 ensures that the generator is f.p. It remains only to check that the failure probability is not too large.

Writing m_k = |M_k(G)|, the log-concavity of the m_k (Lemma 3.11) implies that

    m_k / m_n = ∏_{i=k}^{n−1} m_i / m_{i+1} ≤ q(n)^{n−k}

for 0 ≤ k ≤ n. In the stationary distribution of MC_md(G(c)), the probability of being at a perfect matching is therefore

    m_n (2q(n))^n / Σ_{k=0}^{n} m_k (2q(n))^k ≥ 2^n / Σ_{k=0}^{n} 2^k > 1/2.

This confirms that the method works. Notice how the condition (3.5) again arises naturally here: in the absence of a polynomial bound on the ratio m_{n−1}/m_n, very large edge weights would be required to ensure that perfect matchings appear frequently enough.

Now let us review the problem of counting perfect matchings. The monomer-dimer chain suggests a more natural reduction to generation than that of Corollary 3.7. For a graph G = (V, E) as above, we will estimate the ratios m_{k+1}/m_k in turn in a sequence of n stages, for k = 0, . . . , n − 1. Since m_0 = 1, an approximation to m_n is obtained as the product of the estimated ratios. The main idea is to compute a sequence c_1, . . . , c_{n−1} of edge weights with the property that, in the stationary distribution of the Markov chain MC_md(G(c_k)), the probability of being at a k-matching is quite large, allowing the ratio m_{k+1}/m_k to be estimated statistically. That such a sequence of weights exists is a consequence of the log-concavity of the m_k. Ideally, we want the weight c_k to be the inverse ratio m_{k−1}/m_k; however, it will suffice to substitute the estimate of this quantity obtained in the previous stage.

Figure 3.2 shows the approximate counter for perfect matchings, where G is the input graph and ε ≤ 1 the relative error specified for the final approximation. The algorithm begins by testing whether G contains a perfect matching, for which purpose any standard polynomial time algorithm may be used. We therefore assume in the following discussion that |M_n(G)| > 0. In line (4), Q is the almost W-generator for matchings described earlier, i.e., the call Q(G(c), ·) invokes a simulation of the Markov chain MC_md(G(c)). Line (2) and the iterations of the for-loop correspond to the n stages of the computation mentioned above. Let c_{k+1} be the value of the weight parameter c at the end of the kth stage: we claim that, for each k, c_{k+1} approximates m_k/m_{k+1} within ratio 1 + ε/2n with high probability, provided the value t in line (4) is suitably chosen. This will imply that the product Π output in line (8) approximates m_n within ratio (1 + ε/2n)^n ≤ 1 + ε with high probability, as required.

The claim is proved by induction on k. The base case is trivial since, from line (2), c_1 = |E|^{−1} = m_0/m_1. Now assume inductively that c_k approximates m_{k−1}/m_k within ratio 1 + ε/2n. In the stationary distribution of the chain MC_md(G(c_k)), let p_i, 0 ≤ i ≤ n, denote the probability of being at an i-matching. Then clearly m_k/m_{k+1} = c_k p_k/p_{k+1}. Inspection of line (7) reveals that c_{k+1} = c_k p̂_k/p̂_{k+1}, where p̂_k, p̂_{k+1} are estimates of p_k, p_{k+1} computed in lines (4) and (5) by observing the proportions of k-matchings and (k+1)-matchings in a sample produced by the generator. Since the tolerance is ε′ = ε/30n, by making the sample size t large enough we can ensure that these estimates are within ratio 1 + 5ε′ = 1 + ε/6n with high probability. Then c_{k+1} approximates m_k/m_{k+1} within ratio (1 + ε/6n)² ≤ 1 + ε/2n with high probability, completing the inductive step of the proof. Note that the pathological cases of lines (3) and (6) can occur only in the unlikely event that some estimate c_k is out of range.

Finally, we need to investigate the runtime of the procedure. With the exception of lines (4) and (5), this is evidently bounded by a polynomial
in n. Moreover, the bound on the edge weights in line (3) ensures that each call to Q is polynomially bounded in n and ε^{−1}. We just have to check that t need only be similarly bounded in order to give good estimates p̂_k, p̂_{k+1} with high probability. To see this, assume that c_k is a good estimate of m_{k−1}/m_k and note that, for i > k,

    p_k / p_i = m_k c_k^k / (m_i c_k^i) = c_k^{k−i} ∏_{j=k}^{i−1} m_j / m_{j+1}.    (3.12)

By log-concavity of the m_i, each ratio in the product is bounded below by m_{k−1}/m_k. Also, c_k^{k−i} approximates (m_k/m_{k−1})^{i−k} within ratio (1 + ε/2n)^{i−k} ≤ (1 + 1/n)^n ≤ e. It follows from (3.12) that p_k/p_i ≥ 1/e for i > k. Exactly the same bound holds for i < k. Since Σ_i p_i = 1, we conclude that p_k ≥ (en + 1)^{−1} and so is bounded below by an inverse polynomial in n. Moreover, we have

    p_{k+1} / p_k = c_k m_{k+1} / m_k ≥ (1/2) (m_0/m_1) (m_n/m_{n−1}) ≥ 1/(2|E| q(n))

using log-concavity and (3.5), so that p_{k+1} is similarly bounded below. These lower bounds imply by Proposition 3.8 (generalised to the weighted case) that the sample size t required to ensure that p̂_k, p̂_{k+1} approximate p_k, p_{k+1} within ratio 1 + ε/6n with high probability is polynomially bounded in n and ε^{−1}. Hence the procedure of Figure 3.2 constitutes a f.p. randomised approximate counter for perfect matchings in families of graphs satisfying (3.5).

(1)  if |M_n(G)| = 0 then halt with output 0
     else begin
(2)    c := |E|^{−1}; Π := |E|;
       for k := 1 to n − 1 do begin
(3)      if c > 2q(n) or c < (2|E|)^{−1} then halt with output 0
         else begin
(4)        make t calls of the form Q(G(c), ε/30n) and let Y be the set of outputs;
(5)        p̂_k := |Y ∩ M_k(G)|/t;  p̂_{k+1} := |Y ∩ M_{k+1}(G)|/t;
(6)        if p̂_k = 0 or p̂_{k+1} = 0 then halt with output 0
(7)        else begin c := c p̂_k/p̂_{k+1}; Π := Π/c end
         end
       end;
(8)    halt with output Π
     end

Figure 3.2: Approximate counter for perfect matchings

Recall from Theorem 3.14 of the previous section that almost all graphs satisfy (3.5) for a fixed polynomial q. However, it is not at all clear how to decide whether a given graph satisfies the bound. The above techniques suggest a simple randomised procedure for doing this. More precisely, let q be an arbitrary polynomial and suppose we wish to design an efficient algorithm A which, when presented with an arbitrary graph G containing a perfect matching, behaves as follows:

(i) if m_{n−1}/m_n ≤ q(n), A accepts with high probability;

(ii) if m_{n−1}/m_n > 8q(n), A rejects with high probability.

For intermediate values of the ratio, we do not care whether A accepts or rejects. (The value 8 here is used for illustrative purposes only and can be replaced by any fixed constant greater than 1.) Of course, as usual A can also incorporate an efficient test for the existence of a perfect matching in G.

Such an algorithm is of significant practical value, since we can be almost certain that any graph accepted satisfies m_{n−1}/m_n ≤ 8q(n), allowing us to count and generate perfect matchings in it. Moreover, all graphs with m_{n−1}/m_n ≤ q(n) will almost always be accepted. Taken in conjunction with Theorem 3.14, and with q(n) = n^5 (say), this implies that we can not only handle almost all graphs in some reasonable model, but also reliably identify the pathological cases which are hard to deal with. As we observed earlier, this lends much greater significance to our results about random instances.

To obtain an algorithm A with the specified behaviour, all we need do is generate some number t of matchings in G using the chain MC_md(G(2q(n))) with some small fixed tolerance ε, and note the proportion s of these which are perfect. We accept iff s ≥ 3/8. As we have already seen, in the case m_{n−1}/m_n ≤ q(n) the expected value of s will be ≥ 1/(2(1 + ε)²), so taking t = C ln δ^{−1}, for a suitable constant C and any desired δ > 0, ensures that Pr[s < 3/8] < δ. Similarly, if m_{n−1}/m_n > 8q(n) the expected value of s is less than (1 + ε)² · 2q(n)/(8q(n)) = (1 + ε)²/4, and Pr[s ≥ 3/8] can again be made arbitrarily small as above.
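The acceptance test for algorithm A reduces to a few lines, given any sampler for the chain MC_md(G(2q(n))). The sampler interface below is assumed for illustration; the stand-in sampler is not the Markov chain itself:

```python
from itertools import cycle

def accepts(sample_matching, n, t):
    """Randomised test A: draw t matchings from the monomer-dimer
    chain with uniform edge weight c = 2q(n) (hidden inside the
    supplied sampler) and accept iff the observed proportion s of
    perfect matchings (size n) satisfies s >= 3/8."""
    perfect = sum(1 for _ in range(t) if len(sample_matching()) == n)
    return perfect / t >= 3.0 / 8.0

# Toy check with a stand-in sampler that returns a perfect matching
# half the time, mimicking the good case m_{n-1}/m_n <= q(n):
outputs = cycle([{(0, 1)}, set()])             # perfect, then empty
print(accepts(lambda: next(outputs), 1, 100))  # True (s = 1/2 >= 3/8)
```

The analysis above shows why the fixed threshold 3/8 separates the two cases: the expected proportion is above 1/2(1 + ε)² when the ratio is small and below (1 + ε)²/4 when it is large.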
3.4 Concluding remarks
As well as presenting several positive results, this chapter has left open a number of interesting questions which we now briefly mention.
The first question which suggests itself concerns the approximability of the unrestricted permanent. Though based on different Markov chains, both approximation algorithms we have employed here give rise naturally to condition (3.5), that the ratio of the number of near-perfect to perfect matchings must be polynomially bounded. On the other hand, there seems to be no a priori reason why counting perfect matchings in graphs which violate this condition should be particularly hard. Perhaps a radically different approach can be used to handle graphs of this kind. In any case, the task of devising an efficient universally valid approximate counter for perfect matchings, or alternatively showing that the problem is hard to approximate in general, in the manner of Theorem 1.17, seems to be a thorny issue which may require other techniques for its resolution.

As mentioned earlier, it would be of interest to know whether other easily characterised families of graphs, apart from dense graphs, satisfy (3.5) for some suitable polynomial q. For example, this question is important in the case of certain regular lattice graphs, in which physicists are keen to count dimer coverings (monomer-dimer configurations with zero monomer density). Much effort has been expended on this problem, and an elegant exact solution obtained for planar lattices (or indeed arbitrary planar graphs) [58]. The three-dimensional case, however, remains open. Some small-scale experimentation suggests that the ratio of near-perfect to perfect matchings is polynomially bounded for three-dimensional rectangular lattices, allowing the counting problem to be efficiently approximated, but we have not been able to confirm this analytically. (Note that the straightforward augmenting path approach is of no help for such graphs due to their large diameter.) Another quantity of interest is the number of monomer-dimer configurations with given monomer density, i.e., the number of matchings of given cardinality. Both algorithms we have presented for counting perfect matchings can obviously be adapted to compute |M_k(G)| for any given k, but we are left with a similar condition on G, namely that the ratio |M_{k−1}(G)|/|M_k(G)| be polynomially bounded.

From a practical point of view, it would be interesting to know whether the conductance bounds we have derived can be significantly improved. We make no claim of optimality here, preferring to concentrate on giving a clear exposition of the rapid mixing property. The practical utility of our algorithms, however, is likely to depend on rather tighter bounds being available. Some improvements have already been achieved and are reported in the Appendix.

Similar considerations apply to our methods for estimating the expectation of a random variable under the stationary distribution of a Markov chain. We have chosen to view the chain as a generator of independent samples, partly to simplify the statistical concepts and partly because the random generation problems are of interest in their own right. In contrast, Aldous [2] considers estimates derived by observing a Markov chain continuously, and in fact formulates a definition of rapid mixing directly in terms of the variance of such an estimate. This approach may lead to increased efficiency.

Another important question is whether the techniques of this chapter can be applied to other interesting combinatorial structures. In many cases, a reversible Markov chain on the structures with the desired stationary distribution suggests itself naturally. We give a few examples as an illustration.

Example 3.20 For a connected graph G, consider the Markov chain MC(G) whose states are the spanning trees of G, with transitions as follows: given a spanning tree T, select an edge e of G uniformly at random. If e belongs to T, do nothing; otherwise, add e to T to form a graph with a single cycle, and remove a randomly chosen edge of the cycle. Then MC(G) is symmetric, so its stationary distribution is uniform over the spanning trees of G. (As noted in Chapter 1, efficient exact methods do exist for counting and generating spanning trees. However, the simplicity of this Markov chain, together with the fact that it has certain features in common with a number of other natural chains, makes it an interesting object of study in its own right.) □

Example 3.21 Let n be a positive integer and g = (g_0, . . . , g_{n−1}) a degree sequence on [n], i.e., a sequence of natural numbers g_i with Σ g_i even. Let GRAPHS(g) denote the set of all simple graphs on vertex set [n] in which vertex i has degree g_i, and consider the Markov chain MC(g) defined as follows. The states of MC(g) are the elements of GRAPHS(g), together with all simple graphs on [n] whose degree sequence g′ satisfies g′_i ≤ g_i for all i and Σ_i |g_i − g′_i| = 2. (I.e., the degree sequence g′ is obtained from g by a small perturbation.) To make a transition from state G, select an ordered pair i, j of vertices uniformly at random and

(i) if G ∈ GRAPHS(g) and (i, j) is an edge of G, delete (i, j) from G;

(ii) if G ∉ GRAPHS(g), the degree of i in G is less than g_i, and (i, j) is
98
3.4 CONCLUDING REMARKS
not an edge of G, add (i,j) to G; if this causes the degree of j to exceed gj, select an edge (j, k) uniformly
at random and delete it.
3.4 CONCLUDING REMARKS
99
Since alost al natural chains of this kind are reversible, the character-
isation of Chapter 2 can in principle be applied to obtain rigorous bounds on the number of simulation steps required to achieve any specified level of
)nce again, the stationary distribution is uniform over GRAPHS(g). (We
accuracy. We conjecture that this is possible for some of these chains, and
Iiall attack the generation problem for the relation GRAPHS by a completely
that the path counting technique developed in this chapter is a promising
ifferent method in the next chapter.) 0
approach for obtaining upper bounds. Indeed, as reported in the Appendix, progress has recently been made in the analysis of a Markov chain associ-
lxample 3.22 Let n be a positive integer and w a partial order on the
ated with the Ising model (52J.
~t (nJ. Let LE be the relation which associates with the pair (n,w) the
set of all linear extensions of ω. A Markov chain MC(n,ω) with state space LE(n,ω) and uniform stationary distribution can be defined as follows: given a linear extension λ of ω, select an element of [n] uniformly at random and interchange it with the next largest element in λ, provided the resulting order is still consistent with ω. (If it is not consistent, do nothing.) □

Example 3.23 Let G = (V,E) be a connected undirected graph, and CONN(G) the set of all connected spanning subgraphs of G, i.e., connected graphs (V,E′) where E′ ⊆ E. Consider the Markov chain MC(G) whose state space is CONN(G) and which makes random transitions from H ∈ CONN(G) as follows: select an edge e ∈ E uniformly at random; if e does not belong to H, add e to H; if e belongs to H, remove e from H provided the resulting graph remains connected. This chain is easily seen to have uniform stationary distribution. If it could be shown to be rapidly mixing, it would yield an efficient randomised approximate counter for connected spanning subgraphs. This #P-complete counting problem has applications in the theory of network reliability [81]. □

The first three chains above have in fact recently been analysed and shown to be rapidly mixing, using the techniques developed in the last two chapters plus some additional ideas. A brief discussion and references are given in the Appendix. The question of rapid mixing for the chain of Example 3.23, or any other chain that gives useful computational information about the relation CONN, remains open.

Similar (usually non-uniform) Markov chains have been the subject of extensive experimental investigation in statistical physics: examples include self-avoiding walks [11] and Ising configurations [12] on various lattices. Typically, non-rigorous ad hoc arguments are given to justify the number of simulation steps performed in order to approximate the desired distribution on the states. Conclusions drawn from such experiments therefore often demand from the reader a certain act of faith.

Finally, let us briefly mention the widely used stochastic heuristic for combinatorial optimisation known as simulated annealing [60]. The basic idea is that a Markov chain explores a space of configurations (feasible solutions), each of which has an associated cost or "energy". In the stationary distribution of the chain, low cost solutions have large weight, so the chain tends to favour them asymptotically. By progressively reducing a "temperature" parameter, the weights are scaled so as to accentuate the depths of the energy wells. (Thus the process is not in general time-homogeneous.) While such a process is known to converge asymptotically under fairly general conditions (see, e.g., [41, 67]), very little is known about its rate of convergence, and hence its efficiency, when applied to non-trivial problems. Since the question is related to that of rapid mixing for an appropriate sequence of Markov chains, the methods described here potentially provide a means of analysis.
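As an illustration, a single transition of the chain of Example 3.23 can be sketched as follows. This is our own minimal Python rendering, not code from the text: the subgraph H is represented as a set of edges over a fixed vertex set, and connectivity is checked with a naive depth-first search.

```python
import random

def connected(vertices, edges):
    """Check by depth-first search that (vertices, edges) is connected."""
    if not vertices:
        return True
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

def conn_step(V, E, H, rng):
    """One transition of MC(G) on CONN(G): toggle a uniformly random
    edge, refusing a deletion that would disconnect the subgraph."""
    e = rng.choice(E)
    if e not in H:
        return H | {e}
    if connected(V, H - {e}):
        return H - {e}
    return H  # deletion would disconnect: do nothing

# demo on the triangle K3, for which CONN(G) has exactly 4 elements
V = {0, 1, 2}
E = [(0, 1), (1, 2), (0, 2)]
rng = random.Random(0)
H = set(E)
visited = set()
for _ in range(2000):
    H = conn_step(V, E, H, rng)
    visited.add(frozenset(H))
```

Since the proposal (toggle a uniform edge) is symmetric and rejected moves leave the state unchanged, the resulting chain has uniform stationary distribution over CONN(G), as claimed in the example.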
Actually, the results of Section 3.3 do enable us to analyse a restricted form of simulated annealing algorithm for the problem of finding a maximum cardinality matching in a graph G = (V,E). (Although polynomial time deterministic algorithms exist for this problem, they are far from simple.) The algorithm uses the chain MCmd(G(c)) for a suitably chosen (fixed) value of the edge weight c. (In the above terminology, c is inversely related to temperature.) Clearly, for c ≥ 1 larger matchings have greater weight in the stationary distribution, and this effect is magnified as c is increased. However, as we can see from Theorem 3.17, we pay a price for increasing c in the form of a slower approach to stationarity. The idea is therefore to choose a value of c that is both small enough to ensure rapid convergence and large enough so that the stationary distribution is concentrated on matchings of nearly maximum size.

To make this precise, suppose we are aiming to find a matching whose size is within a factor 1 − ε of the size of a largest matching in G, for some constant ε ∈ (0,1). We claim that, if we take c = 2|E|^((1−ε)/ε), then the probability in the stationary distribution of being at a matching of at least this size is bounded below by 1/2. Since the value of c is polynomial in the size of G, we conclude from Theorem 3.17 that a polynomial time-bounded simulation of MCmd(G(c)) will find a matching of the desired size with probability at least (say) 1/4. This can be boosted to 1 − δ by repeating the experiment O(lg δ⁻¹) times.

To justify the above claim, let k₀ be the size of a largest matching in G, and define k = ⌈(1−ε)k₀⌉. Writing as usual m_i for |M_i(G)|, the log-concavity property of matchings (Lemma 3.11), together with the fact that m_{k₀} ≥ 1, implies that

    m_{k−1} = m_{k₀} ∏_{j=k}^{k₀} (m_{j−1}/m_j) ≥ (m_{k−1}/m_k)^{k₀−k+1}.    (3.13)

But since j-matchings in G are subsets of E of size j, there is also the crude upper bound m_{k−1} ≤ |E|^{k−1}. Hence from (3.13) we conclude that

    m_{k−1}/m_k ≤ |E|^{(1−ε)/ε} = c/2,    (3.14)

since (k−1)/(k₀−k+1) ≤ (1−ε)k₀/(εk₀) = (1−ε)/ε by the choice of k. A further application of the log-concavity property now shows that m_i/m_k ≤ (c/2)^{k−i} for 0 ≤ i ≤ k, so the aggregated weight of matchings of size less than k is

    ∑_{i=0}^{k−1} m_i c^i ≤ (∑_{i=0}^{k−1} 2^{i−k}) m_k c^k < m_k c^k.

This ensures that the probability of being at a matching of size k or more is at least 1/2, as claimed.

A result similar to the above but with a much more complex proof is given by Sasaki and Hajek [82]. These authors also prove an interesting negative result to the effect that no simulated annealing algorithm in this or a fairly large related class will find a matching of maximum cardinality in polynomial time with high probability. There is undoubtedly considerable scope for further positive and negative results of these kinds; another interesting example can be found in [47].

Chapter 4

Indirect Applications

In this chapter, we return to the general framework of Chapter 1 and re-examine the notions of approximate counting and almost uniform generation in the light of our work on Markov chains. Our main result is a dramatic improvement of the reduction from generation to counting for self-reducible relations presented in Theorem 1.10, which allows much larger errors in the counter to be handled. The reduction is achieved by constructing an ergodic Markov chain based on the tree of derivations. As always, the crucial feature of the chain from our point of view is that it converges rapidly to its stationary distribution. The machinery developed in Chapter 2 will enable us to establish this property painlessly.

This result has two major consequences. Firstly, it gives us a notion of approximate counting which is robust with respect to polynomial time computation in the following sense: for a self-reducible relation R, if randomised approximate counting within ratio 1 + O(n^β) is possible in polynomial time for some real constant β, however large, then there exists a f.p. randomised approximate counter for R. In other words, a very crude counter can always be effectively bootstrapped to achieve arbitrarily good asymptotic behaviour. Secondly, it suggests a new technique for generating combinatorial structures when some asymptotic information about their number is available (to within a constant factor, say).

As a concrete example of this technique in action, we consider the problem of generating labelled graphs with given vertex degrees and a given excluded subgraph. Using a result of McKay [69] which provides analytic counting estimates for these graphs, we show that it is possible to generate them in polynomial time from a distribution which is very close to uniform provided only that the maximum degree grows no faster than O(m^{1/4}), where m is the number of edges. In spite of the fact that the problem is apparently not self-reducible under this restriction, our technique can still be applied with a little extra work. This result represents a considerable
improvement on hitherto known methods [13, 100]. It also implies the existence of polynomial time randomised algorithms for counting such graphs to within a factor of 1 + m^{−β}, for any real β.

4.1 A robust notion of approximate counting

The main aim of this section is to show how to construct a f.p. almost uniform generator for a self-reducible relation given only very approximate counting information for the structures concerned.

Let us first briefly review the straightforward reduction from generation to counting used to establish Theorem 1.10. This involves choosing a random path from the root of the tree of derivations to a leaf (solution): at each stage, the next edge is selected with probability proportional to the number of solutions in the subtree rooted at its lower end. By appending a correction process based on the a posteriori probability of the path, we saw that this procedure can be made to work even if the counter is randomised and slightly inaccurate, specifically if it is within ratio 1 + O(n^{−k_R}), where k_R ≥ 0 is a constant depending on R. However, when only rather cruder counting information is available (to within a constant factor, say) the basic "one-pass" technique breaks down owing to the accumulation of errors which are too large to be corrected.

One possible approach to coping with cruder counting estimates is to prevent the accumulation of large errors by applying a correction process at frequent intervals as we move down the tree. The effect of such a modification is to introduce an element of backtracking into the algorithm, since the correction process works by throwing away, with some appropriate probability, some final segment of the path already chosen. This suggests a more flexible dynamic approach, based on a Markov chain, which we now describe. The credit for proposing this approach goes to Mark Jerrum. As we shall see, the results of Chapter 2 enable us to analyse it quite easily and show, perhaps surprisingly, that it is efficient.

Consider again the reduction of Theorem 1.10. If the counter is exact, we may view it as assigning to each edge of the tree of derivations an integer weight equal to the number of leaves in the subtree rooted at its lower end; the process can then be seen as an absorbing Markov chain in which the transition probabilities from any vertex (state) to its children are proportional to the corresponding edge weights. Suppose now that the process is no longer constrained to move downwards, but may also backtrack from any vertex to its parent, the transition probabilities to all adjacent vertices being proportional to the edge weights: thus we get a dynamic process in which upward and downward movements are equally likely (except at the root and the leaves). To eliminate periodicity, we also add to each state a self-loop probability of 1/2. Viewing this process as a symmetric random walk with reflecting barriers on the levels of the tree, it is easy to see that it converges rapidly (essentially in time polynomial in the depth of the tree) to a stationary distribution which is uniform over levels and also uniform over leaves. Hence a short simulation of the chain generates solutions almost uniformly, and the probability of failure can be made small by repeated trials. Now suppose that we have available only an approximate counter for the structures in question, so that the edge weights in the tree are no longer accurate. Then we have grounds for optimism that this procedure might still work efficiently: the hope is that, since each edge weight influences transitions in both directions, the process will actually be self-correcting.

We now proceed with the details of the above construction. Let R be a self-reducible relation over the alphabet Σ, and suppose that we are given a polynomially time-bounded approximate counter C for R within ratio ρ(n) = 1 + O(n^β) for some β ∈ ℝ. Thus the error ratio in C need not even be constant, but may increase polynomially with the problem size. Since R is self-reducible, C can always be modified so as to give an exact answer (which will be either 0 or 1) when its input is an atom; also, its output may always be rounded up to the nearest integer at the cost of adding at most 1 to ρ(n). We shall assume throughout that C incorporates these modifications. We may also assume without loss of generality that ρ is monotonically increasing. To begin with, we shall consider the case where C is deterministic; the additional technical problems posed by randomised counters will be dealt with later.

We adopt the terminology and notation for self-reducible relations introduced in Section 1.1. Let x ∈ Σ* be a problem instance with R(x) ≠ ∅. Our aim is to set up an ergodic Markov chain MC(x) whose states are the vertices of the tree of derivations T_R(x) and whose stationary distribution is uniform over the leaves of the tree. Let V, E be the vertex and edge sets respectively of T_R(x), and set m = l_R(x), r = ρ(|x|). Note that both m and r are polynomially bounded in |x|, and that the depth of the tree is at most m. For each edge (u,v) ∈ E, define the quantity

    f(u,v) = C(inst(u)), if v is the parent of u;
             C(inst(v)), otherwise.    (4.1)

(Recall that inst(·) gives the problem instance associated with any vertex
in the tree.) Since C is deterministic, f : E → ℕ is a well-defined function on E. The crucial property to bear in mind is that for any edge e ∈ E, f(e) approximates within ratio r the number of leaves in the maximal subtree below e.

Next we define for each vertex v ∈ V a degree

    d(v) = ∑_{u : (u,v) ∈ E} f(u,v).    (4.2)

Note that d(v) ≥ 1 for all v ∈ V, and that d(v) = 1 if v is a leaf because C is exact for atoms. For each ordered pair v, u of vertices, the transition probability p_vu from v to u is then defined to be

    p_vu = f(u,v)/2d(v), if (u,v) ∈ E;
           1/2,          if u = v;
           0,            otherwise.    (4.3)

Thus there is a non-zero transition probability between two states iff they are adjacent in the tree. The self-loop probability 1/2 ensures that the chain is aperiodic. It is clearly also irreducible, and hence ergodic, and the detailed balance equations (tr1) of page 45 show that it is reversible¹ with stationary distribution π = (π_v)_{v∈V} that is proportional to the degrees, i.e.,

    π_v = d(v)/D    ∀v ∈ V,    (4.4)

where D = ∑_{v∈V} d(v). Since d(v) = 1 for all leaves v, we immediately see that π is uniform over them. Identifying leaves with solutions, we can therefore construct an almost uniform generator for R by applying the simulation paradigm of Figure 2.1 to the family of Markov chains MC(x). The generator will be fully-polynomial provided we can verify conditions (mc1)-(mc4).

The first two conditions are easy: a single call to the counter will tell us whether R(x) = ∅, and we can always start the simulation at the root of T_R(x). Individual steps can be simulated by calls to the counter, which allow both the local structure of the tree and the transition probabilities (4.3) to be inferred, in similar fashion to the algorithm of Figure 1.2. Since the size of problem instance labels in the tree never exceeds |x|, all calls will be polynomially bounded.

Turning our attention to condition (mc3), since non-leaf states correspond to failure of the generator we need to check that the stationary process will be found at a leaf with reasonably high probability. This is confirmed by the following lemma.

Lemma 4.1 In the stationary distribution of MC(x), the probability of being at a leaf is at least 1/2rm.

Proof: Note that the degree sum D over the tree T_R(x) may be written

    D = ∑_{v∈V} d(v) = 2 ∑_{e∈E} f(e).

Now consider the collection of edges at some fixed level of the tree. By (4.1) the weight of each such edge approximates within ratio r the number of leaves in the maximal subtree rooted at its lower end. Since these subtrees are disjoint, the aggregated weight of all edges at this level is at most r#R(x). Summing over all levels of the tree yields the bound

    D = 2 ∑_{e∈E} f(e) ≤ 2rm#R(x).

Since π_v = 1/D for each leaf v, the stationary probability of being at a leaf is

    ∑_{leaves v} π_v = #R(x)/D ≥ 1/2rm,

as required. □

Recall that m and r are each polynomially bounded in |x|, so the probability of being at a leaf can be boosted to 1/2 by repeating the entire experiment only polynomially many times. This verifies (mc3).

We now address the trickier question of rate of convergence. Specifically, we want the family of chains MC(x) to be rapidly mixing, as stipulated in condition (mc4). Let us see whether the characterisation of Chapter 2 helps here. We have seen that the chain MC(x) is reversible. Moreover, by (4.4) the minimum stationary state probability π_min^(x) satisfies

    π_min^(x) = 1/D ≥ 1/(2rm#R(x)).

Since solutions are strings of length m over the fixed alphabet Σ, we have #R(x) ≤ |Σ|^m. This implies a polynomial bound of the form (2.12) on lg(π_min^(x))⁻¹. By Corollary 2.8, rapid mixing for the family MC(x) will therefore follow from a suitable lower bound on the conductance of the underlying graphs.

¹In fact, it is easy to see that the reversibility property holds for any tree process, i.e., any ergodic Markov chain in which the edges of non-zero weight in the underlying graph form a tree.
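The defining properties (4.3) and (4.4) are easy to check mechanically. The following sketch (a toy tree of our own choosing with arbitrary illustrative edge weights f, not an example from the text) builds the transition probabilities and verifies exactly, using rational arithmetic, that the rows sum to 1 and that π_v ∝ d(v) satisfies detailed balance:

```python
from fractions import Fraction

# toy tree: vertex 0 is the root; (child, parent) -> edge weight f(u,v)
f = {(1, 0): 3, (2, 0): 2, (3, 1): 2, (4, 1): 1}
sym = {}
for (u, v), w in f.items():
    sym[(u, v)] = sym[(v, u)] = Fraction(w)  # f is symmetric in its endpoints

vertices = range(5)
# degree d(v) = sum of incident edge weights, as in (4.2)
d = {v: sum(w for (a, b), w in sym.items() if b == v) for v in vertices}

def p(v, u):
    """Transition probability (4.3): f(u,v)/2d(v) across a tree edge,
    self-loop 1/2, zero between non-adjacent vertices."""
    if u == v:
        return Fraction(1, 2)
    return sym.get((u, v), Fraction(0)) / (2 * d[v])

D = sum(d.values())
pi = {v: d[v] / D for v in vertices}  # stationary distribution (4.4)
```

The check π_v p_vu = π_u p_uv reduces to f(u,v)/2D on both sides of every tree edge, which is the reversibility argument of the text in miniature. (In the real construction the leaves would additionally have d(v) = 1, which this toy example does not enforce.)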
Lemma 4.2 Let G be the underlying graph of the Markov chain MC(x) defined above. Then the conductance of G is bounded below by

    Φ(G) ≥ (4r²m)⁻¹.

Proof: Note first that in G each edge (u,v) ∈ E has weight w_uv = f(u,v)/2D, while the loop at v has weight d(v)/2D and all other edges have weight zero. In what follows, we will identify subsets of V with the subgraphs of T_R(x) which they induce. If S ⊆ V is a subtree (connected subgraph) of T_R(x), we let root(S) denote the vertex of S at minimum distance from root(V), the root of T_R(x), and L(S) the number of leaves in S.

In order to bound the conductance of G, we claim that it suffices to consider flows out of all subtrees S with root(S) ≠ root(V).² (Informally, the process will converge fast because it is quite likely to emerge from any such subtree, travelling upwards, within a small number of steps.) To see this, note first that Φ(G) ≥ min Φ_S, where the minimisation is over all non-empty subsets S ⊆ V with root(V) ∉ S. But we may write any such S as the union T₁ ∪ ... ∪ T_l of disjoint subtrees no pair of which is connected by an edge in T_R(x), and we have

    Φ_S = F_S/C_S = (∑_i F_{T_i})/(∑_i C_{T_i}) ≥ min_i F_{T_i}/C_{T_i} = min_i Φ_{T_i}.

Hence it is clear that Φ(G) ≥ min Φ_S, where the minimisation is now over all subtrees S of T_R(x) with root(S) ≠ root(V) as claimed.

A lower bound on Φ_S for such subtrees is readily obtained. We may assume without loss of generality that S is maximal. Then the flow out of S is just F_S = f(cut(S))/2D, where cut(S) is the cut edge connecting S to the rest of the tree. But since f(cut(S)) approximates the number of leaves L(S) within ratio r, the flow is bounded below by

    F_S ≥ L(S)/2rD.    (4.5)

On the other hand, summing edge weights in the subtree S as in the proof of Lemma 4.1, we easily derive the bound

    ∑_{v∈S} d(v) = f(cut(S)) + 2 ∑_{e∈E(S)} f(e) ≤ 2rmL(S),    (4.6)

where E(S) is the set of edges in S. Since C_S = ∑_{v∈S} d(v)/D, putting (4.5) and (4.6) together yields

    Φ_S = F_S/C_S ≥ 1/4r²m,

which completes the proof of the lemma. □

Lemma 4.2 ensures that the Markov chains MC(x) are rapidly mixing, verifying condition (mc4). Hence the simulation paradigm yields a f.p. almost uniform generator for R. We are now in a position to state the first major result of this chapter.

Theorem 4.3 Let R be a self-reducible relation over Σ. If there exists a polynomially time-bounded (deterministic) approximate counter for R within ratio ρ(n) = 1 + O(n^β) for some real constant β, then there exists a f.p. almost uniform generator for R. □

The following example is a very simple application of Theorem 4.3. We shall come to a more significant application later.

Example 4.4 Consider the relation DNFSAT mentioned in Chapter 1, which associates with a Boolean formula in disjunctive normal form its set of satisfying assignments. The following argument provides a crude approximate counter for DNFSAT. Let φ = D₁ ∨ ... ∨ D_l be a formula over n variables, where each D_i is a conjunction of literals, and for 1 ≤ i ≤ l let s_i be the number of assignments to the variables of φ which satisfy D_i. Note that the s_i are trivial to compute: if D_i contains k distinct literals and no contradictions then s_i = 2^{n−k}. But it is immediate that

    #DNFSAT(φ) ≤ ∑_{i=1}^{l} s_i ≤ l·#DNFSAT(φ),

so that ∑ s_i approximates the number of satisfying assignments of φ within the polynomial ratio l. By Theorem 4.3, this gives us a f.p. almost uniform generator, and by Theorem 1.14 a f.p. randomised approximate counter for DNFSAT. (As mentioned in Chapter 1, such algorithms have been constructed directly by Jerrum, Valiant and Vazirani [53] and Karp and Luby [56] respectively.) □

If the approximate counter in Theorem 4.3 is randomised, so that it may occasionally produce arbitrarily bad results, a similar reduction still goes through but at the cost of some tiresome technicalities. We summarise the proof in this case.

²In fact, it is true for any reversible chain that, when computing Φ(G), it is sufficient to consider Φ_S only for sets S that form connected subgraphs of the underlying graph G (with edges of weight zero removed).
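The crude counter of Example 4.4 is simple enough to sketch directly. In the following Python rendering (our own encoding: a clause is a set of non-zero integers, where literal +j or −j stands for variable j unnegated or negated), the brute-force count is included purely to check the sandwich bound on a small formula:

```python
from itertools import product

def crude_dnf_count(clauses, n):
    """Sum of the s_i from Example 4.4: s_i = 2^(n-k) for a clause on k
    distinct variables with no contradiction, and s_i = 0 otherwise."""
    total = 0
    for c in clauses:
        if any(-lit in c for lit in c):   # contradictory conjunction
            continue
        total += 2 ** (n - len({abs(lit) for lit in c}))
    return total

def exact_dnf_count(clauses, n):
    """Brute force over all 2^n assignments (for checking only)."""
    def sat(c, a):
        return all(a[abs(l) - 1] == (l > 0) for l in c)
    return sum(any(sat(c, a) for c in clauses)
               for a in product([False, True], repeat=n))
```

For example, for φ = (x₁ ∧ x₂) ∨ (¬x₂ ∧ x₃) over n = 3 variables, both counts equal 4, and in general the crude estimate lies within the ratio l of the truth, exactly as the display above asserts.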
Theorem 4.5 The result of Theorem 4.3 still holds even if the approximate counter for R is randomised.

Proof (sketch): Let x be a problem instance for which R(x) ≠ ∅. As before, assume ρ is monotonic and set m = l_R(x), r = ρ(|x|). We begin by considering the intermediate case where the counter C is randomised but always produces estimates which are within ratio r of their correct values. We again define a Markov chain MC(x) on the tree T_R(x), whose transition probabilities are now determined as follows. Suppose the process is currently at vertex v, and let U be the set of children of v. For each u ∈ U ∪ {v}, make a call C(inst(u)) to the counter and denote the result c(u); then make a further independent set of calls C(inst(u)) for the same vertices u and denote their sum d(v). Finally, make a transition to an adjacent vertex u with probability

    p_vu = c(u)/4r²d(v), if u is a child of v;
           c(v)/4r²d(v), if u is the parent of v,    (4.7)

and remain at v otherwise. (Note that the factor 1/4r² ensures that these transitions are always well-defined, and that there is a self-loop probability of at least 1/2 in each state; we have used 1/4 rather than 1/2 for consistency with the second part of the proof.) Clearly, if C is deterministic this reduces (except for a uniform factor of 1/2r²) to the original chain. In the randomised case, it is easy to see that the transition probability p_vu from v to u is actually the expectation

    E( f(u,v)/4r²d(v) ) = (1/4r²) E(f(u,v)) E(d(v)⁻¹),

where the random variable f(u,v) is defined as in (4.1) and is independent of d(v). The stationary distribution π therefore satisfies

    π_v ∝ 1/E(d(v)⁻¹)    ∀v ∈ V,

and the fact that C is exact for atoms implies that d(v) = 1 with probability 1 for leaves v. The chain is clearly still reversible, and the rest of the proof goes through essentially as in the deterministic case, with d(v) and f(u,v) replaced by 1/E(d(v)⁻¹) and E(f(u,v)) respectively.

Now suppose that the counter may in addition produce arbitrarily bad results with some small probability δ: by Lemma 1.2 we may assume that δ ≤ 2^{−p(|x|)} for all problem instances in the tree, where p is any desired polynomial. Since we are no longer able to infer the structure of T_R(x) with certainty, we must now work in the larger self-reducibility tree T̂_R(x) (see Section 1.1). We let V, V̂ denote the vertex sets of T_R(x) and T̂_R(x) respectively. Note that V̂ \ V consists of a union of disjoint maximal subtrees of T̂_R(x). Some modifications to the transition probabilities are also necessary. At vertex v ∈ V̂, we compute values c(u), u ∈ U ∪ {v}, and d(v) as before, where now U is the set of children of v in T̂_R(x). If d(v) = 0 then we make a transition to the parent of v (if it exists) with probability 1/4, and remain at v with probability 3/4. Otherwise, we test whether ∑_u c(u) ≥ 4r²d(v): if so, we remain at v; if not, we make a transition to a neighbouring vertex with probabilities as in (4.7). (Note that the self-loop probability in each state is at least 1/2.) Once again, the leaves of T̂_R(x) are treated as a special case.

This chain is clearly ergodic on some subset of V̂ containing V, namely those states which communicate with the root. Henceforth we redefine V̂ to include only such states. The chain is also still reversible because it is a tree process. Let us first observe that the new vertices in V̂ \ V have negligible effect. All transitions from V to V̂ \ V occur with at most tiny probability δ, so if started in V the process is unlikely to leave V during the course of the simulation. Should it enter a subtree in V̂ \ V, however, the random variable d(v) at the root vertex v of that subtree will take the value 0 with probability very close to 1, thus causing the chain to leave the subtree rapidly. In fact, it is not hard to see that the stationary probability π_v of a vertex v ∈ V̂ \ V is at most O(δ^k), where k is the distance of v from V in T̂_R(x). As a result, the total weight of V̂ \ V in the stationary distribution is small. Furthermore, the large exit probability from subtrees S ⊆ V̂ \ V ensures a lower bound on Φ_S similar to that in the proof of Lemma 4.2.

Examination of the transition probabilities within V reveals that we can view this portion of the chain as a chain of the restricted kind described in the first part of the proof whose transition probabilities have been perturbed by a factor in the range (1 ± δ′), where δ′ depends on δ and can be made exponentially small in |x|. It is then easy to see that the stationary probabilities of states in V undergo similarly small perturbations in the range (1 ± δ′)^m. As a result, a lower bound as in the proof of Lemma 4.2 also holds for subtrees S with root(S) ∈ V, and so for all subtrees, which again implies that the conductance Φ(G) is suitably bounded below. Assuming that the simulation starts at the root, we therefore get rapid convergence over the subset V of the state space,³ which is sufficient since V includes all

³More precisely, we are using the r.p.d. Δ_V(t) over V here, as defined in Chapter 2. Note that Theorem 2.5 implies a sufficient condition for rapid mixing with respect to this measure also.
leaves of T_R(x). A test applied to leaf labels ensures that no non-solutions are output. □

Remark: There is actually a simpler way to prove Theorem 4.5, though the resulting algorithm is rather less natural and the process is no longer strictly a Markov chain. Note that the simulation of Theorem 4.3 can still be performed using a randomised counter if we arrange to remember the outputs of the counter on all previous calls so that each edge weight is computed at most once. Provided all values returned by the counter are accurate within the given ratio, we are effectively in the situation of Theorem 4.3 and our earlier analysis applies. By powering the counter, we can ensure that this condition fails to hold with vanishingly small probability, so the effect on the overall process will be negligible. □

A direct consequence of Theorem 4.5 is a powerful bootstrapping theorem for approximate counters. Recall from Theorem 1.14 that a f.p. almost uniform generator for a self-reducible relation R can be used to construct a f.p. randomised approximate counter for R. Thus, starting from an approximate counter within ratio 1 + O(n^β), and proceeding via the reduction described in this section, we arrive at a counter with arbitrarily good asymptotic behaviour.

Theorem 4.6 Let R be a self-reducible relation over Σ. If there exists a polynomially time-bounded randomised approximate counter for R within ratio 1 + O(n^β) for some real constant β, then there exists a f.p. randomised approximate counter for R. □

The chief significance of Theorem 4.6 is that it establishes a notion of approximate counting which is robust with respect to polynomial time computation, at least for the large class of self-reducible relations. This is evident from the following.

Corollary 4.7 Let R ⊆ Σ* × Σ* be self-reducible. Then there exists a polynomially time-bounded randomised approximate counter for R within ratio 1 + O(n^β) either for all β ∈ ℝ or for no β ∈ ℝ. □

We are therefore justified in calling the counting problem for a self-reducible relation R tractable iff there is a polynomial time randomised procedure which with high probability estimates #R(x) to within a factor of the form 1 + O(|x|^β), for some fixed real β. This provides a particularly convenient means of classifying #P-complete counting problems according to a simple criterion of approximability. We conjecture that this notion will prove useful in the future classification of counting problems.

It is natural to ask whether a similar state of affairs holds in the case of generation problems. Unfortunately, we know of no analogue of Theorem 4.6 for improving almost uniform generators with very large (polynomial) bias. Nevertheless, Theorem 4.5 can be used to obtain a weaker bootstrapping theorem for generators.

Theorem 4.8 Let R ⊆ Σ* × Σ* be self-reducible and k_R be a constant as in Section 1.1 such that, for all pairs (x,y) ∈ R, |y| = O(|x|^{k_R}). If there exists a polynomially time-bounded almost uniform generator for R within tolerance O(n^{−k_R}) then there exists a f.p. almost uniform generator for R.

Proof: We know from Theorem 1.16 that a generator satisfying the above hypothesis can be used to construct a polynomially time-bounded randomised approximate counter for R within ratio O(n^β) for some constant β. An application of Theorem 4.5 now yields the result. □

On reflection, Theorem 4.8 is quite remarkable: it says that, by observing only polynomially many outputs of a black box generator whose bias is a fixed function of the input size, we are effectively able to reduce this bias. Indeed, the reduction is huge, from O(n^{−k_R}) to the exponential factor exp(−n^β) for any desired real β. This seems counter-intuitive since such an experiment can apparently yield only very local information about the output distribution. The non-trivial reduction of Theorem 4.5 seemingly plays a crucial rôle here.

We close this section with an updated version of Corollary 1.15 which summarises what we know about approximate counting and almost uniform generation for self-reducible relations.

Corollary 4.9 For a self-reducible relation R over Σ, the following are equivalent:

(i) There exists a f.p. almost uniform generator for R.

(ii) There exists a f.p. randomised approximate counter for R.

(iii) There exists a polynomially time-bounded randomised approximate counter for R within ratio 1 + O(n^β) for some real constant β.

(iv) There exists a polynomially time-bounded almost uniform generator for R within tolerance O(n^{−k_R}), where k_R is a constant as above. □
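The "powering" of a randomised counter invoked in the remark after Theorem 4.5 (and in Lemma 1.2) amounts to taking the median of independent runs, which drives the failure probability down exponentially in the number of runs. The following sketch illustrates the idea; the particular noisy counter here is a stand-in of our own for testing, not an algorithm from the text:

```python
import random
import statistics

def powered(counter, t, rng):
    """Median of t independent calls to a randomised counter.  If each
    call is within the required ratio with probability greater than 1/2,
    the median fails only with probability exponentially small in t."""
    return statistics.median(counter(rng) for _ in range(t))

def noisy_counter(rng, truth=1000.0, ratio=1.1, fail=0.3):
    """Hypothetical stand-in counter: within `ratio` of `truth` with
    probability 1 - fail, and arbitrarily wrong otherwise."""
    if rng.random() < fail:
        return rng.choice([0.0, 1e9])   # arbitrarily bad output
    return truth * rng.uniform(1 / ratio, ratio)
```

Here a counter that errs badly 30% of the time is converted, by a median of 101 calls, into one whose estimate lies within the ratio except with probability of order 10⁻⁴; this is exactly the kind of amplification that lets the analysis of Theorem 4.3 absorb occasional wild outputs.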
4.2 Self-embeddable relations

This short section is in the nature of a caveat to the bootstrapping results we have just derived. It turns out that similar results hold for a rather trivial reason if the relation R in question has a property which we might term self-embeddability. Informally, R is self-embeddable if there exists a polynomial time computable function ⊕ which takes a pair x₁, x₂ of problem instances and "embeds" them in an instance ⊕(x₁,x₂), whose size is at most linear in |x₁| and |x₂| and whose solution set is in (1-1)-correspondence with the product set R(x₁) × R(x₂). We also demand that, from each solution for ⊕(x₁,x₂), the corresponding pair of solutions for x₁ and x₂ can be recovered easily. The relation MATCH of Example 1.1 is clearly self-embeddable: for any pair G₁, G₂ of graphs we may take ⊕(G₁,G₂) to be the disjoint union of G₁ and G₂. A slightly less trivial example is the following.

Example 4.10 Consider the relation which associates with a directed graph G its set of (directed) Hamiltonian paths. Then given a pair G₁, G₂ of graphs, the required embedding function ⊕ adds to their disjoint union a new vertex v, together with edges from v to all vertices of G₁ and from all vertices of G₂ to v. □

For self-embeddable relations, there is a simple bootstrapping mechanism for approximate counters. (This observation is due to Keith Edwards. It is also implicit in the work of several other authors, such as Stockmeyer [91].)

Theorem 4.11 Let R ⊆ Σ* × Σ* be self-embeddable. If there exists a polynomially time-bounded (randomised) approximate counter for R within ratio 1 + O(n^β) for some real constant β, then there exists a f.p. (randomised) approximate counter for R.

Proof: Let C be a counter for R as above within ratio 1 + ρ′(n), where ρ′(n) = cn^β and c, β > 0. Given a problem instance x ∈ Σ* and a positive real ε ≤ 1, apply the embedding construction q = ⌈lg(2cn^β/ε)⌉ times to obtain an instance z with #R(z) = #R(x)^{2^q}. From the form of q, |z| is bounded by a polynomial in |x| and ε⁻¹. Now use C to approximate #R(z), and take the 2^q-th root of the result. The approximation ratio for the final estimate is then at most (1 + cn^β)^{1/2^q} ≤ 1 + ε, as required. □

Theorem 4.11 does not undermine the contribution of the previous section, for several reasons. Firstly, although many natural relations are self-embeddable, there seem to be a number of significant exceptions among self-reducible relations, including DNF-satisfiability and natural restricted versions of familiar relations, such as Hamiltonian paths in planar graphs. Moreover, the Markov chain reduction technique presented earlier is quite natural and general and can sometimes be applied even in the absence of self-reducibility. Evidence for this is provided by the relation GRAPHS discussed in the next section, which is apparently neither self-embeddable nor self-reducible under the degree restrictions imposed there.

How does self-embeddability impact upon almost uniform generation? Currently, the best answer we can give to this question is that, in the presence of both self-reducibility and self-embeddability, a very strong bootstrapping mechanism for almost uniform generators is available: this is the subject of our next theorem. However, we know of no analogous result when the self-reducibility assumption is dropped.

Theorem 4.12 Let R ⊆ Σ* × Σ* be both self-reducible and self-embeddable. If there exists a polynomially time-bounded almost uniform generator for R within tolerance O(n^β) for some real constant β, then there exists a f.p. almost uniform generator for R.

Proof: Suppose the given generator has tolerance at most cn^β − 1 for c, β > 0. We show how to construct a f.p. randomised approximate counter for R: the result will then follow from Theorem 1.10.

The construction is essentially that of Theorem 1.14 (see Figure 1.3 and the accompanying discussion; we shall adopt the notation used there in what follows). This shows how to estimate the number of leaves in the tree of derivations given a means of sampling them with small bias. We need therefore only show how to use our very biased generator to produce such a sample.

Given a problem instance x, the trick is again to exploit the self-embeddability of R in creating a new problem instance z with #R(z) = #R(x)^t for some suitably large t. We may then regard solutions to z as t-samples of solutions to x. The key fact is that, for large enough t, almost all such samples are good enough for the procedure of Figure 1.3, so that even the very large bias in generating a sample has negligible effect.

To make this precise, let us call a t-sample good for the algorithm of Figure 1.3 if it yields an estimate of the proportion of leaves in the corresponding subtree within ratio 1 + ε/2m. The correct operation of the algorithm depends on samples being good with probability at least 1 − 1/8m. Now by Proposition 1.12, setting ξ = ε/4m and δ = (8c²n^{2β}m)⁻¹, we see that
4.3 GRAPHS WITH SPECIFIED DEGREES
14
f t 2: 9d3/eö then a uniformly selected t-sample wil be good with prob.bilty at least 1 - ö. However, we are only able to select samples almost
miformly within tolerance crl,ß, so we have
4.3 GRAPHS WITH SPECIFIED DEGREES
115
Wormald ¡lOa) gives effcient algorithms for uniformly generating labelled cubic and degree-4 graphs on n vertices. However, these are based on specifc recurrence relations and do not generalse easily to higher de-
grees. A simpler method proposed in ¡lOa), and aleady implicit in the
Pr(sample is not good) ~ c2n2ßö = 118m,
work of Bollobás ¡I3), uniformly generates labelled reguar graphs of arbi-
~ required. The bound on t means that a sample size which is polynomialy iounded in ixl and c1 wil do. Generator failure is handled as before using
trary degree k, but the probabilty of faiure remains polynomialy bounded only if k = O((ogn)1/2). When the degree is permitted to increase more rapidly with n, it seems diffcult to generate the graphs with anything ap-
)roposition 1.13. 0
proaching equal probabilties. (In the work of Tinhofer ¡93), for example,
the probabilties associated with dierent graphs may va widely.) Our
1.3 Graphs with specified degrees
method, which appears to rely on the fu power of the reduction to counting developed in Section 4.1, requies only that k = O(n1/3) and achieves a distribution over the graphs which is asymptoticaly very close to unform.
)0 far in this chapter we have been concerned with the general implications if the reduction from generation to counting of Section 4.1. In this section, i¡e ilustrate its potential practical vaue by applying it to a specific prob-
which describes the graphs of interest. A (labelled) degree sequence on
em which is of independent interest. The application wil also indicate
negative integers such that ¿i gi = 2e(g) is even, and a graph on g is a
iow the requirement of self-reducibilty may be relaxed in practice. We imphasise that this material is included mainly as a non-trivial example of ;he techniques of Section 4.1 in action. An alternative, and arguably more
In keeping with our general approach, we begin by defining a relation vertex set ¡n) = to,...,n - IJ is a sequence g = (go,... ,gn-l) of non-
graph with vertex set (n) in which vertex i has degree gi, a ~ i ~ n - 1.
(All graphs here are assumed to be simple and undirected.) If the vertex set is understood, we shal identify a graph with its edge set. As mentioned
ilegant approach to this specific problem, based on the Markov chain of
earlier, we alo alow a set of forbidden edges to be specifed. Accordingly,
~xample 3.21, can be found in ¡51); see also the Appendix.
we define the relation GRAPHS which associates with each problem instance
We shal be concerned with the following question: given a sequence ~ = (go,. .. , gn-i) of non-negative integers, is it possible to effciently genirate labelled graphs with vertex set to, 1, . . . ,n - 1 J in which vertex i has
of the form (g, X), where g is a degree sequence on (n) and X is a labelled graph with vertex set ¡n), the solution set GRAPHS(g, X) = t G : G is a graph on g having no edge in X J .
legree gi, a ~ i ~ n - 1, such that each graph occurs with roughly equal
)robabilty? We also allow a set X of excluded edges to be specified, i.e., ;he graphs must al be subgraphs of some given graph. Using the ideas of
3ection 4.1, we are able to answer this question afrmatively provided that ;he maxmum degree in the problem does not grow too rapidly with the lUmber of vertices n.
The special case of this problem in which X is empty and the graphs are has been particular interest and
reguar, i.e., gi = k for al i and some k, is of
~onsidered by several authors. Reguar graphs are a natural class to study
in their own right, and have become an important model in the theory of random graphs ¡I4). A generation procedure for the above problem would
provide a means of examining "typical" regular graphs with a given numbcl of vertices and given degree and investigating their properties, about many of which little is known. Fuhermore, it has been shown by Wormald ¡LOI) that generation techniques for labelled graphs with a given degree sequence can be used in the uniform generation of unlabelled reguar graphs.
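To make the relation GRAPHS concrete, here is a minimal membership check together with a brute-force enumeration of a tiny instance. This is our own illustrative sketch, not code from the text; the function name and edge representation are ours.

```python
from itertools import combinations

def in_graphs(g, X, G):
    """Return True iff G lies in GRAPHS(g, X): G is a simple graph on
    vertex set {0,...,n-1} in which vertex i has degree g[i], and no
    edge of G lies in the excluded graph X."""
    n = len(g)
    edges = {frozenset(e) for e in G}
    if len(edges) != len(G):                    # repeated edge: not simple
        return False
    if any(len(e) != 2 or not e <= set(range(n)) for e in edges):
        return False                            # loop or out-of-range vertex
    if edges & {frozenset(e) for e in X}:       # hits an excluded edge
        return False
    deg = [0] * n
    for e in edges:
        for v in e:
            deg[v] += 1
    return deg == list(g)

# Tiny instance: degree sequence (1,1,1,1), excluded edge {0,1}.
# GRAPHS(g, X) consists of the perfect matchings on 4 vertices
# that avoid the edge {0,1} -- there are exactly 2 of them.
g, X = (1, 1, 1, 1), [(0, 1)]
possible = list(combinations(range(4), 2))
solutions = [G for r in range(len(possible) + 1)
             for G in combinations(possible, r) if in_graphs(g, X, G)]
```

For this instance the only solutions are {(0,2),(1,3)} and {(0,3),(1,2)}, illustrating how the excluded graph X cuts down the solution set.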
We refer to X as an excluded graph for g. Although this relation is self-reducible as it stands, we get a more symmetrical structure using the relation R defined by

R(g, X) = { (G, ω) : G ∈ GRAPHS(g, X) and ω is an edge-ordering of G }.

Clearly, we can move freely between these relations, since any solution set R(g, X) contains precisely e(g)! ordered copies of each element of GRAPHS(g, X).
Next we specify a self-reducibility on R by defining the tree of derivations T_R(g, X), assuming that R(g, X) ≠ ∅. In this tree, the object (G, ω) will be derived by successively adding the edges of G in the order determined by ω. More precisely, the partial solution labels of the tree are in (1-1)-correspondence with pairs (H, ω) in which H is a graph with vertex set [n] which can be extended to at least one graph in GRAPHS(g, X), and ω is an edge-ordering of H. The root has label (∅, ∅), while the children of the vertex with label (H, ω) have labels of the form (H ∪ {(i, j)}, ω + (i, j)) for some edge (i, j), where ω + (i, j) denotes the extension of ω in which (i, j) is the largest element. The problem instance label of a vertex v is determined by its partial solution label (H, ω) as follows. Let ĥ = (ĥ₀, ..., ĥ_{n−1}) be the degree sequence of H, and define h = g − ĥ, where the subtraction is pointwise. Also, let Y be the subgraph of X ∪ H obtained by deleting all edges (i, j) for which either ĥ_i = g_i or ĥ_j = g_j. Then the problem instance label of v is (h, Y).

Note that the deletion of redundant constraints from X ∪ H is not necessary for the consistency of the tree, but it will prove useful later—in the proof of Lemma 4.16—that Y represents only the essential excluded graph. From now on, we will in fact assume without loss of generality that all problem instances (g, X) have had redundant constraints removed. In particular, this means that the problem instance label of the root of the tree is just (g, X). It also justifies our use of e(g) as a measure of input size for this problem when stating approximation results in the sequel.

Now that we have a tree of derivations for R, the reduction of Section 4.1 will give us an efficient almost uniform generator for R, and hence for GRAPHS, provided that we can count these structures with sufficient accuracy. The counting problem for GRAPHS has received much attention over a number of years, where the aim has chiefly been to extend the validity of asymptotic estimates to a wider range of degrees (see [69] for a brief survey). We quote below a result of McKay, which is useful for degrees up to about e(g)^{1/4}.

Given a degree sequence g on [n] and an excluded graph X for g, let x = (x₀, ..., x_{n−1}) be the degree sequence of X, and define γ(g, X) = max{g_max², g_max x_max}, where g_max = max_i g_i and x_max = max_i x_i. We shall use γ to express bounds on the degrees involved in the problem. Furthermore, if g_max > 0, set

λ(g) = (1/4e(g)) Σ_{i=0}^{n−1} g_i(g_i − 1);

μ(g, X) = (1/2e(g)) Σ_{(i,j)∈X} g_i g_j.

Theorem 4.13 (McKay [69]) There exists a positive constant γ₀ with the property that, for any problem instance (g, X) with g_max > 0 and γ(g, X) ≤ e(g)/10, the quantity

(2e(g))! / (e(g)! 2^{e(g)} ∏_{i=0}^{n−1} g_i!) × exp(−λ(g) − λ(g)² − μ(g, X))   (4.8)

approximates #GRAPHS(g, X) within ratio exp(γ₀ γ(g, X)²/e(g)). □

Remarks: (a) Actually, McKay's result is slightly stronger than this: we have stated it in a simplified form which is adequate for our purposes.

(b) The estimate in Theorem 4.13 immediately leads to a simple method, suggested by Wormald [100] and implicit in the earlier work of Bollobás [13], for generating graphs whose degrees grow slowly with the number of edges: make g_i copies of vertex i for each i, generate a pairing (i.e., a perfect matching in the complete graph on these vertices) uniformly at random, and then collapse the copies to a single vertex again. The result will be a multigraph on g, and the distribution over GRAPHS(g, X) is uniform, but the procedure may fail since not all the graphs generated in this way will be simple or avoid X. The exponential factor in (4.8) can be interpreted as approximating the probability that a randomly chosen pairing yields an element of GRAPHS(g, X). It is then clear from the definitions of λ and μ that, provided γ(g, X) = O(log e(g)), this probability is polynomially bounded below, so that the method is effective in this range. For regular graphs, this implies a degree bound of O((log n)^{1/2}). □

Let us now restate Theorem 4.13 in a more convenient form.

Corollary 4.14 Let Q, B be fixed real numbers with Q > 0 and B ≥ 100Q⁴. Then for all problem instances (g, X) for which either e(g) ≤ B or γ(g, X) ≤ Q²e(g)^{1/2}, the quantity #R(g, X) can be approximated in polynomial time within a constant ratio.

Proof: We have already observed that #R(g, X) = e(g)! #GRAPHS(g, X), so we need only estimate the latter. Note that when e(g) > B the bound on γ ensures also that γ(g, X) ≤ e(g)/10, so we may appeal to Theorem 4.13. The expression in (4.8) can clearly be evaluated in polynomial time, and yields an approximation within the constant ratio exp(γ₀Q⁴) in all relevant cases, except when g_max = 0 or possibly when e(g) ≤ B. The first case is trivial; to handle the second, note that for fixed B there are only a constant number of instances, up to relabelling of the vertices, for which e(g) ≤ B, so all counting in this range may be done exactly by explicit enumeration. (Alternatively, in practice any convenient approximation method may be used, subject to the proviso that it yields the answer 0 iff #GRAPHS(g, X) = 0: this property can be tested in polynomial time using matching techniques.) □
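The pairing method of Remark (b) is easy to sketch. The code below is our own illustration (function names are ours), in the simple rejection form with an empty excluded graph X:

```python
import random

def random_pairing_graph(g, max_tries=100000):
    """Pairing method: make g[i] copies ('points') of vertex i, pick a
    uniformly random perfect matching on the points, collapse copies back
    to vertices, and reject unless the resulting multigraph is simple.
    Accepted graphs are uniform over the graphs on g (here X is empty)."""
    points = [i for i, gi in enumerate(g) for _ in range(gi)]
    assert len(points) % 2 == 0, "sum of degrees must be even"
    for _ in range(max_tries):
        random.shuffle(points)                  # uniform random pairing
        edges = [tuple(sorted(points[k:k + 2]))
                 for k in range(0, len(points), 2)]
        no_loops = all(i != j for i, j in edges)
        no_multiple = len(set(edges)) == len(edges)
        if no_loops and no_multiple:            # multigraph is simple
            return edges
    raise RuntimeError("too many rejections for this degree sequence")

random.seed(1)                                  # for reproducibility
G = random_pairing_graph([2, 2, 2, 2])          # random 2-regular graph on 4 vertices
```

The acceptance probability is roughly exp(−λ(g) − λ(g)²), which for k-regular graphs decays like exp(−Ω(k²)); this is exactly why the method is effective only for k = O((log n)^{1/2}), as noted in Remark (b).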
Now let us see whether Corollary 4.14 is powerful enough to allow us to construct a generation algorithm for GRAPHS via the reduction to counting embodied in Theorem 4.3. Ideally, we might hope to handle instances for which γ(g, X) grows as O(e(g)^{1/2}). However, this does not follow immediately since the relation R is no longer self-reducible when restricted in this way. In other words, even if g_max and x_max are suitably bounded, the tree T_R(g, X) will in general contain vertices whose problem instances (h, Y) are unbalanced in the sense that the degrees are rather large compared to the number of edges e(h), so that we cannot guarantee reasonable counting estimates over the whole tree. We will overcome this problem by naïvely pruning the tree in such a way as to leave only problem instances which do fall within the bounds of Corollary 4.14, though we will have to do a little work to check that the effects of this are not too drastic.

For any pair Q, B of real numbers with Q > 0 and B ≥ 100Q⁴, we call a problem instance (g, X) (Q, B)-balanced if either e(g) ≤ B or γ(g, X) ≤ Q²e(g)^{1/2}. If (g, X) is (Q, B)-balanced and R(g, X) ≠ ∅, then the pruned tree T_R^{(Q,B)}(g, X) with respect to Q, B is obtained by deleting from T_R(g, X) each vertex whose problem instance label is not (Q, B)-balanced, together with the entire subtree rooted at the vertex.

Now consider defining a reversible Markov chain MC(g, X) on the pruned tree in precisely the same manner as in Section 4.1, using the counting estimates of Corollary 4.14. Our first claim is that the conductance bound of Lemma 4.2 still holds, so that MC(g, X) is rapidly mixing. To see this, imagine a corresponding chain on the complete tree T_R(g, X) in which all counting estimates are within the constant ratio of Corollary 4.14: clearly, in this case the conductance is bounded as in Lemma 4.2. But MC(g, X) is obtained from this chain simply by deleting some subtrees and, as the reader may readily verify, the removal of extremal portions of a Markov chain cannot decrease its conductance. Hence the bound of Lemma 4.2 applies to MC(g, X) also. We may therefore state the following fact.

Lemma 4.15 For any fixed Q, B as above, and all (Q, B)-balanced problem instances (g, X) with GRAPHS(g, X) ≠ ∅, the family of Markov chains MC(g, X) is rapidly mixing. □

We turn now to the effect of the pruning operation on the stationary distribution. As before, the distribution will be proportional to the "degrees" d(v) defined as in (4.2), and can be made uniform over leaves by counting exactly at this level. (When we speak of "leaves" of the pruned tree, we shall always mean those vertices which are also leaves of the original tree T_R(g, X).) However, since we have lost some leaves by pruning, it is by no means obvious that the induced distribution on GRAPHS(g, X) obtained by forgetting the edge orderings is even close to uniform, or that the failure probability is still bounded. Both these facts will follow from the lemma below, which says that in the pruning process we lose at most a small fraction of the leaves corresponding to any graph in GRAPHS(g, X), provided that the constants Q, B are suitably chosen.

Lemma 4.16 Let F be a family of problem instances (g, X) satisfying the bound max{g_max, x_max} = O(e(g)^{1/4}), and β a real constant. Then there exists a pair of real numbers Q, B as above (which depend on F and β) such that, for each instance (g, X) ∈ F with GRAPHS(g, X) ≠ ∅, and each G ∈ GRAPHS(g, X), the pruned tree T_R^{(Q,B)}(g, X) contains at least e(g)!(1 − e(g)^{−β}/4) leaves with solution label G.

We postpone the rather technical proof of this lemma until we have examined its consequences, which constitute the central results of this section.

Theorem 4.17 For any fixed real β, there exists a polynomially time-bounded almost uniform generator for GRAPHS within tolerance e(g)^{−β}, provided that the degrees involved are bounded as max{g_max, x_max} = O(e(g)^{1/4}).

Proof: We assume without loss of generality that β ≥ 0 and that e(g) > 0. Let Q, B be real numbers satisfying the conditions of Lemma 4.16 for the given value of β. Assuming that GRAPHS(g, X) ≠ ∅, simulate the Markov chain MC(g, X) as defined above. By Lemma 4.15, a polynomially bounded simulation is sufficient to ensure a r.p.d. of at most e(g)^{−β}/4. But by Lemma 4.16, the stationary distribution of the chain induces a distribution over GRAPHS(g, X) which is almost uniform within tolerance e(g)^{−β}/4, since e(g) ≥ 1 and β ≥ 0. The overall tolerance is then at most e(g)^{−β}, as required. Finally, again by Lemma 4.16, the stationary probability of being at a leaf is bounded below as in Lemma 4.1, except for an additional factor due to pruning of 1 − e(g)^{−β}/4 ≥ 3/4. □

Corollary 4.18 For any fixed real β, there exists a polynomially time-bounded almost uniform generator for labelled k-regular graphs on n vertices within tolerance n^{−β}, provided that the degree is bounded as k = O(n^{1/3}). □
We could of course allow the tolerance in the above algorithm to be specified as part of the input. However, there is no reason to suppose that the resulting generator would be fully-polynomial, since we can say nothing useful about the behaviour of the counter in Corollary 4.14 for "small" instances as Q and B vary. Thus the polynomial bias claimed here is apparently the best we can achieve in polynomial time. Note that the source of the bias is essentially just the pruning operation on the tree: the effect of the truncation of the Markov chain is exponentially small as in Theorem 4.3, and thus negligible by comparison.

It remains now for us to prove Lemma 4.16. The proof itself is rather technical and not particularly enlightening and may safely be skipped. It makes use of the following elementary combinatorial result.

Proposition 4.19 Let Z be a random variable denoting the number of green objects in a random sample (without replacement) of size s > 0 from a population of size m ≥ 2s made up of g green and b = m − g blue objects, and let μ = E(Z) = sg/m. Then for any real α > 0,

Pr(Z ≥ αμ) ≤ s(2e/α)^{αμ}.

Proof: Note that Z is distributed hypergeometrically with mean E(Z) = μ as claimed. Now set r = αμ. If r < sg/(m − s) then the right-hand side of the above inequality is greater than 1 and there is nothing to prove. Assume therefore that r ≥ sg/(m − s). For each i, 1 ≤ i ≤ s, the probability that the i-th choice yields a green object, conditional on the preceding choices, certainly cannot exceed g/(m − s), since there are always at least m − s elements remaining in the pool. Thus for any r' ∈ ℕ with r' ≥ r we have

Pr(Z = r') ≤ a_{r'} = C(s, r') (g/(m − s))^{r'}.

But we have also

C(s, r') = s!/(r'!(s − r')!) ≤ s^{r'}/r'! ≤ (es/r')^{r'}

by Stirling's approximation, so that

a_{r'} ≤ (esg/(r'(m − s)))^{r'}.

Now the function f(x) = (c/x)^x (c ∈ ℝ₊) is monotonically decreasing for c/x ≤ e; hence, since r' ≥ r ≥ sg/(m − s), we have the bound

a_{r'} ≤ (esg/(r(m − s)))^r ≤ (2e/α)^{αμ}   for all r' ≥ r,

and consequently

Pr(Z ≥ r) ≤ Σ_{r'=⌈r⌉}^{s} Pr(Z = r') ≤ s(2e/α)^{αμ},

as required. □

Proof of Lemma 4.16: By virtue of the asymptotic bounds on g_max and x_max, we may choose Q > 0 such that max{g_max, x_max} ≤ (Q/4)e(g)^{1/4} for all instances in the family. This implies a lower bound on B of 100Q⁴; further constraints on B will be introduced below. Note that all instances in the family are certainly (Q, B)-balanced.

For problem instances with e(g) ≤ B there is nothing to prove, as no pruning takes place in the tree T_R(g, X). So let (g, X) be an instance in the family with m = e(g) > B, and G be any graph in GRAPHS(g, X), assumed non-empty. In order to estimate the proportion of all m! derivations of G present in the pruned tree T_R^{(Q,B)}(g, X), we estimate the probability that a randomly chosen derivation of G is present. More precisely, consider the random process (H^{(t)})_{t=0}^{m}, where H^{(0)} = ∅ and, for t ≥ 1, H^{(t)} is a subgraph of G having precisely t edges which is obtained from H^{(t−1)} by adding a single edge, all unused edges of G being equiprobable. If we identify H^{(t)} with a problem instance label (h^{(t)}, Y^{(t)}) in the tree of derivations as before, then a random derivation (H^{(t)}) is still present after pruning if (h^{(t)}, Y^{(t)}) is (Q, B)-balanced for 0 ≤ t ≤ m. The proportion of all m! derivations of G which are present after pruning is therefore just

Pr( ⋀_{t=0}^{m} ( (h^{(t)}, Y^{(t)}) is (Q, B)-balanced ) ).   (4.9)

We proceed to obtain a lower bound on (4.9) by showing that, for each t separately, (h^{(t)}, Y^{(t)}) is almost surely (Q, B)-balanced, provided we make B large enough. Note that this is just the event that the problem instance corresponding to a randomly chosen t-edge subgraph of G is (Q, B)-balanced. The proof divides into four stages, corresponding to various ranges of values of t.

(i) If 0 ≤ t ≤ m/2, then Pr((h^{(t)}, Y^{(t)}) is (Q, B)-balanced) = 1.

For any t in this range, we must have h^{(t)}_max ≤ g_max and y^{(t)}_max ≤ x_max + g_max, so that γ(h^{(t)}, Y^{(t)}) ≤ 2γ(g, X). Furthermore, e(h^{(t)}) ≥ e(g)/2. From our initial choice of Q, we conclude that (h^{(t)}, Y^{(t)}) is (Q, B)-balanced for all such t.
(ii) If m/2 ≤ t ≤ m − m^{5/8}, then Pr((h^{(t)}, Y^{(t)}) is (Q, B)-balanced) ≥ 1 − m^{−β−1}/4.

Recall that H^{(t)} can be viewed as a randomly chosen t-edge subgraph of G, or equivalently, its complement H̄^{(t)} in G as a randomly chosen s-edge subgraph of G, where s = m − t. Now we have

γ(h^{(t)}, Y^{(t)}) ≤ h^{(t)}_max (g_max + x_max) ≤ h^{(t)}_max Q e(g)^{1/4}.

So (h^{(t)}, Y^{(t)}) is certainly (Q, B)-balanced if h^{(t)}_max ≤ Q e(h^{(t)})^{1/2}/e(g)^{1/4}, i.e., if the maximum vertex degree h^{(t)}_max of the random s-edge subgraph H̄^{(t)} does not exceed Qs^{1/2}/m^{1/4}. We can estimate the probability of this event using Proposition 4.19 as follows: let j ∈ [n] be any vertex with g_j > 0. Then if we colour green all g_j edges of G adjacent to j, and all other edges of G blue, the random variable h_j^{(t)} is distributed as the number of green edges in a random sample (without replacement) of s edges of G. We are therefore in the situation of Proposition 4.19, with Z = h_j^{(t)}, g = g_j, and tail value αμ = Qs^{1/2}/m^{1/4}, where μ = sg_j/m is the mean of h_j^{(t)}. The factor α is quite large, viz.,

α = Qm^{3/4}/(s^{1/2} g_j) ≥ √2 Q m^{1/4}/g_max ≥ 4√2,

where we have used the facts that s ≤ m/2 and g_max ≤ (Q/4)m^{1/4}. The tail value itself satisfies

αμ = Qs^{1/2}/m^{1/4} ≥ Qm^{1/16},

since also s ≥ m^{5/8}. Proposition 4.19 therefore yields

Pr(h_j^{(t)} > αμ) ≤ s(2e/α)^{αμ} ≤ s c^{−m^{1/16}},

where c = (2√2/e)^Q > 1. Thus the probability that any vertex degree h_j^{(t)} exceeds the bound is at most m² c^{−m^{1/16}}, which is less than m^{−β−1}/4 for all m > B, provided B is chosen large enough.

(iii) If m − m^{5/8} ≤ t ≤ m − B, then Pr((h^{(t)}, Y^{(t)}) is (Q, B)-balanced) ≥ 1 − m^{−β−1}/4.

As in (ii) above, let s = m − t and view H̄^{(t)} as a randomly chosen s-edge subgraph of G. In view of (i), we may assume that s ≤ m/2. By definition of γ, (h^{(t)}, Y^{(t)}) will be (Q, B)-balanced if h^{(t)}_max and y^{(t)}_max are each bounded above by Q e(h^{(t)})^{1/4}. In the case of h^{(t)}_max we proceed via Proposition 4.19 precisely as in (ii), only this time with tail value αμ = Qs^{1/4}. We find that

α ≥ Qm/(s^{3/4} g_max) ≥ 4m^{9/32},

since now s ≤ m^{5/8}. Further, αμ = Qs^{1/4} ≥ QB^{1/4}, so we get the tail estimate

Pr(h_j^{(t)} > αμ) ≤ s (e/(2m^{9/32}))^{QB^{1/4}}.

Thus the probability that any vertex degree h_j^{(t)} exceeds Qs^{1/4} is at most m²(cm)^{−β'}, where c > 0 is fixed and β' can be made arbitrarily large by suitable choice of B. By setting B appropriately, we can clearly make this less than m^{−β−1}/8 for all m > B.

A similar argument can be used to handle y^{(t)}_max: for a vertex j ∈ [n] with g_j > 0, let Γ(j) be the set of vertices adjacent to j in G. At this point we make use of the fact (refer to the definition of problem instance labels in the tree) that Y^{(t)} includes only essential excluded edges, i.e., edges (i, k) for which both h_i^{(t)} > 0 and h_k^{(t)} > 0. From this it is clear that

y_j^{(t)} ≤ |{ i ∈ Γ(j) : h_i^{(t)} > 0 }|.   (4.10)

Now colour green all edges of G with an endpoint in Γ(j), and the remainder blue, and again view H̄^{(t)} as a random sample of size s from the edge set of G. Each time a green edge is selected, it contributes at most two to the right-hand side of (4.10). Thus y_j^{(t)} ≤ 2Z, where the random variable Z is the number of green edges in the sample, so the required tail probability may be estimated from Proposition 4.19 with g = Σ_{i∈Γ(j)} g_i ≤ g_max², and αμ = Qs^{1/4}/2. The bounds on s in this range imply

α ≥ Qm/(2s^{3/4} g_max²) ≥ (8/Q) m^{1/32},

and αμ ≥ QB^{1/4}/2, so that

Pr(y_j^{(t)} > 2αμ) ≤ Pr(Z > αμ) ≤ s (eQ/(4m^{1/32}))^{QB^{1/4}/2}.

Exactly as above, this ensures that the probability that any vertex degree y_j^{(t)} exceeds Qs^{1/4} is at most m^{−β−1}/8 for all m > B, provided we make B large enough. Combining the bounds for h^{(t)}_max and y^{(t)}_max, we arrive at (iii).
(iv) If t ≥ m − B, then Pr((h^{(t)}, Y^{(t)}) is (Q, B)-balanced) = 1.

This is true by definition, since e(h^{(t)}) = m − t ≤ B.

In view of (i)-(iv), the probability of the conjunction in (4.9) is now easily seen to be bounded below by 1 − m^{−β}/4, as claimed in the lemma. □

We conclude our discussion of graphs with specified degrees with some remarks on the counting problem. Recall again the procedure of Figure 1.3, which provides a way of estimating the number of leaves in a rooted tree given an almost uniform generator for leaves in the subtree rooted at any vertex. Consider the situation of Theorem 4.17: can we apply this technique to estimate the number of leaves in the pruned trees T_R^{(Q,B)}(g, X)? Note first that an almost uniform generator for the leaves in any maximal subtree S is available, since we may simulate just this portion of the Markov chain MC(g, X), transitions out of S being censored. Moreover, the reduced chains clearly inherit the rapid mixing property, so the generator will be efficient provided only that the subtree has sufficiently many leaves. It is not hard to see that, by modifying slightly the method of selecting a subtree for the recursion, we can ensure that this condition always holds with high probability. Thus we get a f.p. randomised approximate counter for the leaves of such subtrees. We may use this to approximate the number of leaves in T_R^{(Q,B)}(g, X) within ratio 1 + m^{−β}/4 in polynomial time. But by Lemma 4.16 this number itself approximates m! #GRAPHS(g, X) within ratio 1 + m^{−β}/2, so we are in fact able to approximate #GRAPHS(g, X) within ratio 1 + m^{−β} in polynomial time. We summarise this discussion in the following theorem.

Theorem 4.20 For any fixed real β, there exists a polynomially time-bounded randomised approximate counter for GRAPHS within ratio 1 + e(g)^{−β}, provided that the degrees are bounded as max{g_max, x_max} = O(e(g)^{1/4}). □

Theorem 4.20 implies the existence of polynomial time algorithmic methods for computing the number of labelled graphs with specified degrees (assuming that these are not too large) with a relative error which is smaller than any desired power of the number of edges. The asymptotic behaviour of such a counter thus compares very favourably with available analytic estimates, such as Theorem 4.13. While this is a remarkable theoretical result, we suspect that the various powers and constants accumulated in the reductions will render the method impractical if a high degree of accuracy is required.

Finally, we should observe that the counting problem for GRAPHS is apparently hard to solve exactly even under the degree restrictions imposed in this section, so that the approximation approach pursued here is justified.

Theorem 4.21 The problem of evaluating #GRAPHS for instances (g, X) whose degrees are bounded as max{g_max, x_max} = O(e(g)^{1/4}) is #P-complete.

Proof: Note first that there is a simple reduction to the unrestricted version of this problem from the problem of counting perfect matchings in a graph G, under which the excluded graph X is the complement of G and the degree sequence is (1, 1, ..., 1). The #P-completeness of the restricted version follows from the fact that the former problem remains #P-complete even for very dense graphs G, i.e., specifically when G has minimum vertex degree n − O(n^{1/4}). This can be shown via the same reduction which is used to establish Theorem 3.9. □
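The reduction used in the proof above is easy to check on a small example. The following brute-force sketch (our own illustrative code, not from the text) confirms that #GRAPHS(g, X), with g = (1, ..., 1) and X the complement of G, equals the number of perfect matchings of G:

```python
from itertools import combinations

def count_perfect_matchings(n, edges):
    """Brute-force count of perfect matchings of a graph on [n]."""
    E = [tuple(sorted(e)) for e in edges]
    return sum(1 for M in combinations(E, n // 2)
               if len({v for e in M for v in e}) == n)

def count_graphs_instance(n, edges):
    """#GRAPHS(g, X) for g = (1,...,1) and X = complement of G:
    graphs in which every vertex has degree 1 (perfect matchings on [n])
    that avoid all non-edges of G, i.e. perfect matchings of G itself."""
    G = {frozenset(e) for e in edges}
    all_edges = [frozenset(e) for e in combinations(range(n), 2)]
    X = [e for e in all_edges if e not in G]       # complement of G
    count = 0
    for M in combinations(all_edges, n // 2):
        degs_ok = len({v for e in M for v in e}) == n   # every degree is 1
        avoids_X = not any(e in X for e in M)           # no excluded edge
        count += degs_ok and avoids_X
    return count

C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]    # 4-cycle: 2 perfect matchings
```

For the 4-cycle, both counts are 2, as the reduction predicts.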
APPENDIX: RECENT DEVELOPMENTS
Appendix: Recent Developments
bution. As usual, the meat of the result lies in the analysis of the random wal, specifically the demonstration that it is rapidly mixng. Dyer, Frieze and Kannan use a geometric isoperimetric inequality to show that the conductance of
,ince the thesis on which this monograph is based appeared in the Summer if 1988, there has been increasing interest in the quantitative analysis of Markov chains and their computational applications. This interest has
irought with it a number of major new advances, which wil be briefly
127
the random walk is bounded below by 1jpoly(n,c1), where as
usual € is the accuracy input. They then appeal to the characterisation of rapid mixng in Chapter 2 of this monograph to conclude that the Markov chain is indeed rapidly mixng. Various improvements of the above result have since appeared, notably by Lovász and Simonovits ¡66) and Applegate and Kannan ¡7): the main aim
mmmarised here; references to the relevat papers are given for the reader mshing to follow up the details.
has been to push down the exponent in the polynomial bounding the run-
Advances have been made in two distinct directions. On the one hand,
survey of the history of volume computation, and of the above develop-
¡everal new Markov chains based on particular combinatorial structures
iave been studied and shown to be rapidly mixng, thus yielding the fist
APPENDIX: RECENT DEVELOPMENTS

polynomial time approximation algorithms for various hard combinatorial enumeration problems via the (by now familiar) reduction to random sampling. On the other hand, progress has also been made with refinements of the tools described in this monograph for analysing the rate of convergence of Markov chains. The refinements both extend the applicability of the existing tools and improve them so as to yield sharper bounds for many examples.

We deal first with Markov chains for particular structures. An important breakthrough was achieved in 1989 by Dyer, Frieze and Kannan [29], who devised the first fully-polynomial randomised approximation scheme for computing the volume of a convex body K ⊆ ℝ^n, specified by a membership oracle. This result is particularly interesting since it is known that, in this model, no deterministic algorithm can compute even a very weak approximation to the volume in polynomial time [9]. Hence this is a rare instance where randomisation provably makes the difference between polynomial and exponential time.

The algorithm of Dyer, Frieze and Kannan works by reducing the volume computation to the problem of generating a point almost uniformly at random from a convex body, discretised into an array of regularly spaced points. If this is done for a suitable sequence of bodies, namely the intersections of K with an increasing sequence of balls centred in K, the volume of K can be deduced in a fashion reminiscent of the reduction from counting to generation described in Theorem 1.14. The generation problem is solved by performing a simple nearest-neighbour random walk on the points in the body, which is ergodic and converges to the uniform stationary distribution. The running time of this algorithm has subsequently been improved, using sharper isoperimetric inequalities and other ideas. A detailed account of these developments may be found in [28]; here it is shown that the volume of a convex body in n dimensions can be computed to specified accuracy with high probability in time O(n^8 log n) (measured by the number of calls to the membership oracle).

Another problem which has been solved using rapidly mixing Markov chains is that of counting the number of linear extensions of a given partial order (see Example 3.22 at the end of Chapter 3). This problem is actually a special case of computing the volume of a convex body, so the above result of Dyer, Frieze and Kannan implies the existence of a f.p. randomised approximate counter for linear extensions. (A refinement of this idea was developed by Matthews [68].) However, a much more direct and efficient method, based on the Markov chain described in Example 3.22, was subsequently employed by Karzanov and Khachiyan [57]. Recall that the states of the chain are linear extensions of the given partial order w on n elements, and transitions are made by selecting an element uniformly at random and interchanging it with the next largest element in the current linear order, provided the resulting order is still consistent with w. Using a similar geometric isoperimetric inequality to that in [29], Karzanov and Khachiyan prove that this Markov chain has large conductance (in fact, Ω(n^{-5/2})), and hence is rapidly mixing. They deduce the existence of a f.p. almost uniform generator and a f.p. randomised approximate counter for linear extensions.
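The dynamics just described are easy to simulate. The following sketch is our illustration, not code from [57]; the function name and data representation are ours. It assumes the precedence relation is supplied as a transitively closed set of ordered pairs.

```python
import random

def linear_extension_chain(order, precedes, steps, rng=random):
    """Simulate the adjacent-transposition chain on linear extensions.

    `order` is a list of n elements forming some valid linear extension
    of the partial order; `precedes` is the set of ordered pairs (a, b)
    with a required to appear before b (assumed transitively closed).
    Each step picks an element uniformly at random and swaps it with its
    successor in the current order, unless the swap would violate a
    precedence constraint (in which case the chain stays put).
    """
    order = list(order)
    n = len(order)
    for _ in range(steps):
        i = rng.randrange(n)              # choose an element uniformly
        if i == n - 1:
            continue                      # last element has no successor: self-loop
        a, b = order[i], order[i + 1]
        if (a, b) not in precedes:        # swap keeps the order consistent
            order[i], order[i + 1] = b, a
    return order
```

Since each swap is proposed with the same probability in both directions, the chain is symmetric, so its stationary distribution is uniform over the linear extensions.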
We should point out that, though it had been conjectured for some time, the problem of counting linear extensions has only recently been shown to be #P-complete. This result is due to Brightwell and Winkler [15], who also give a survey of applications and a refinement of the algorithm of Karzanov and Khachiyan which improves the runtime somewhat.
Another significant recent application of rapidly mixing Markov chains is to the Ising model [52]. This has been the focus of much study in statistical physics since the 1920s, initially as a means of explaining the phenomenon of ferromagnetism but more recently in a wider context, and has generated a vast body of literature; a very readable survey is given by Cipra [20].

The problem is easily stated. Consider a collection of sites [n] = {0, 1, ..., n-1}, each pair i, j of which has an associated interaction energy V_ij ≥ 0. In most cases of physical interest, the set E of pairs with non-zero interaction energies forms a regular lattice graph ([n], E). A configuration is an assignment of positive (σ_i = +1) and negative (σ_i = -1) spins to each site i ∈ [n]. The energy of a configuration σ = (σ_i) is given by the Hamiltonian

    H(σ) = - Σ_{{i,j}∈E} V_ij σ_i σ_j - B Σ_{k∈[n]} σ_k,

where B is an external field. The central problem is to compute the partition function

    Z = Z(V_ij, B, β) = Σ_σ exp(-βH(σ)),        (A.1)

where β > 0 is related to the temperature and the sum is over all possible configurations σ. Almost all the physical properties of the system can be computed from knowledge of Z and its derivatives. The efficient computation of Z for planar interaction graphs in zero field (B = 0) is a celebrated algorithmic result [36, 58]; however, it is not hard to see that the problem becomes #P-hard when either of these restrictions is removed [52].

The main contribution of [52] is a f.p. randomised approximation scheme for Z for arbitrary Ising systems. This is based on the Markov chain approach, but with an interesting twist. It seems difficult to find a Markov chain whose states are spin configurations σ and which is provably rapidly mixing. On the other hand, a reformulation of the expression (A.1), known as the high-temperature expansion, allows Z to be written as a weighted sum

    Z = Σ_{X⊆E} w(X)

over subgraphs of the interaction graph. The weight w(X) of a subgraph X is positive and depends on the number of edges and the number of vertices of odd degree in X. Now a simple Markov chain can be constructed whose states are subgraphs and whose transitions correspond to the addition and deletion of individual edges. Moreover, with appropriate choice of transition probabilities the chain can be made ergodic with stationary distribution proportional to the weights of the subgraphs. The central result of [52] shows that this Markov chain is rapidly mixing, apparently the first such rigorous demonstration for a Markov chain related to the Ising model. The proof uses conductance and the combinatorial encoding technique for counting canonical paths, in similar fashion to the proof of Theorem 3.17. This means that subgraphs can be generated in polynomial time with probabilities approximately proportional to their weights. As a result, we get a f.p. randomised approximation scheme for the partition function Z of an arbitrary Ising system. Other quantities associated with the model, such as the mean energy and the mean magnetic moment, can also be efficiently approximated using the same Markov chain [52]. It is intriguing that this approach works with combinatorial structures other than the spin configurations appearing in the definition of the problem. These structures apparently have no physical significance, and are introduced solely in order to obtain a rapidly mixing Markov chain. It is an open problem to find a Markov chain which works directly with spin configurations and which can be proved to be rapidly mixing at all temperatures; a leading candidate is the elegant Swendsen-Wang process [92], which is known to work extremely well in practice but has so far eluded rigorous analysis.

Several algorithms for counting new classes of combinatorial structures have been obtained by a direct application of the results of Chapter 3 of this monograph. These include graphs with a given degree sequence (as studied in Section 4.3) and Eulerian orientations of a graph. Recall from Corollary 3.13 of Chapter 3 that we have a f.p. randomised approximate counter for perfect matchings in any family of graphs in which the number of near-perfect matchings exceeds the number of perfect matchings by at most a polynomial factor (condition (3.5) on page 80).

In [51], the problem of counting graphs with given degree sequence g is reduced to that of counting perfect matchings in a specially constructed bipartite graph. This graph satisfies condition (3.5), for a suitable polynomial q, provided the degree sequence g belongs to a so-called P-stable class. (Essentially, this means that the number of graphs with degree sequence g does not change radically under small perturbations of g.) Many classes of degree sequences turn out to be P-stable, including the class of all regular sequences, and all n-vertex sequences with maximum degree at most √(n/2). (As in Section 4.3 of Chapter 4, excluded subgraphs are also allowed provided their degrees obey similar constraints. A more detailed investigation of P-stability can be found in [49].) As a result, we can deduce the existence of a f.p. almost uniform generator and a f.p. randomised approximate counter for graphs on any P-stable class of degree sequences. Thus the range of sequences we can handle is much wider than for other known methods, including those of Chapter 4.
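The subgraph chain described above can be realised with Metropolis-style acceptance probabilities. The sketch below is our illustration, not taken from [52]: it works with an arbitrary positive weight function, and the example weight merely has the qualitative form described in the text (depending only on the number of edges and of odd-degree vertices); the actual high-temperature-expansion weights are not reproduced here.

```python
import random

def odd_degree_vertices(subgraph):
    """Count vertices of odd degree in an edge set."""
    deg = {}
    for u, v in subgraph:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(1 for d in deg.values() if d % 2 == 1)

def subgraph_chain_step(edges, current, weight, rng=random):
    """One step of the edge add/delete chain on subgraphs X of the
    interaction graph.

    A uniformly random edge is proposed for addition or deletion, and the
    move is accepted with probability min(1, w(X')/w(X)); by the standard
    Metropolis argument the chain is then reversible with stationary
    distribution proportional to the positive weight function w.
    """
    e = rng.choice(edges)
    proposal = set(current)
    if e in proposal:
        proposal.remove(e)
    else:
        proposal.add(e)
    if rng.random() < min(1.0, weight(proposal) / weight(current)):
        return proposal
    return current

# Illustrative weight of the qualitative form described in the text:
# it depends only on the number of edges and of odd-degree vertices.
def example_weight(subgraph, lam=0.5, mu=0.3):
    return (lam ** len(subgraph)) * (mu ** odd_degree_vertices(subgraph))
```

Simulating many steps from, say, the empty subgraph yields samples whose distribution approaches the weighted one; estimates of Z then follow by the counting-to-generation reductions discussed earlier.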
Other recent progress on counting and generating graphs with given degrees, using very different methods, can be found in [37, 70, 71, 72].

Recently, Mihail and Winkler [76] have also used a reduction to counting perfect matchings in order to deduce the existence of a f.p. randomised approximate counter for Eulerian orientations of an arbitrary Eulerian graph. Again the key to the result is to show that the graphs arising in the reduction satisfy condition (3.5) relating the number of perfect and near-perfect matchings, for some polynomial q. The proof of this fact is interesting because it does not rely on the idea of short augmenting paths discussed in Section 3.2. A nice application of Mihail and Winkler's result is to the approximation of the partition function of the ice model, another important problem in statistical physics.

Both of the above results can actually be obtained more directly, without reference to perfect matchings. In the case of graphs with given degree sequence, one simulates the Markov chain defined in Example 3.21; this chain in fact turns out to be rapidly mixing, as can be verified using the path-counting and encoding techniques of Chapter 3. In the case of Eulerian orientations, one simulates a chain whose states are Eulerian orientations and "near-Eulerian" orientations.

The results of Chapter 3 have also been employed very recently in the design of an improved randomised approximation algorithm for the permanent of an arbitrary 0-1 matrix. The improved algorithm, due to Jerrum and Vazirani [54], has a worst-case runtime of the form exp(O(n^{1/2} log^2 n)) which, though still exponential, has an appreciably smaller exponent than the exp(Θ(n)) of the best existing algorithm of Karmarkar et al. [55] mentioned in Section 3.2. Briefly, the algorithm proceeds as follows (in terms of counting perfect matchings in a bipartite graph G = (V1, V2, E)). If G is a sufficiently good expander, then any near-perfect matching can be transformed into a perfect matching by augmentation along a short alternating path; as in Section 3.2, this ensures that G satisfies a bound of the form (3.5). Hence the algorithm of Chapter 3 can be used to count perfect matchings in G. On the other hand, if G fails to have good expansion properties then it is possible to find a "constriction" in G, i.e., a subset of vertices U ⊆ V1 which is adjacent to rather few vertices in V2. This allows a relatively efficient decomposition of G into subproblems that may be solved recursively. For the details, see [54].

Finally, we mention that the Markov chain on spanning trees of a graph G, defined in Example 3.20, has recently been analysed by Feder and Mihail [33] and shown to have conductance at least 1/(2mn), where n and m are the numbers of vertices and edges respectively in G. Actually the result is rather more general and applies to the analogous Markov chain defined on bases of a certain class of matroids. (Spanning trees are precisely the bases of a graphic matroid.) The class of matroids that can be handled, known as "balanced matroids", is a little larger than the class of graphic matroids, and includes all regular matroids. However, the question of rapid mixing for this or a related Markov chain on bases of arbitrary matroids remains, intriguingly, open.

We turn now to new results concerning general tools for the analysis of Markov chains arising in computational applications. Recall that the analysis of several complex chains in this monograph has proceeded as follows: first, canonical paths have been constructed between each pair of states in the chain, in such a way that no oriented edge of the underlying graph (i.e., transition) is overloaded by paths; this implies a lower bound on the conductance Φ, which in turn implies that the chain is rapidly mixing by the characterisation of Chapter 2. Recent work by Diaconis and Stroock [25] shows that this analysis can be streamlined somewhat by bypassing the conductance: the existence of a good set of canonical paths leads directly to an upper bound on the second eigenvalue λ1, and hence to rapid mixing. The advantage of this more direct approach is that it leads to sharper bounds on λ1 in many cases of interest.

We state below a modified version of Diaconis and Stroock's result, taken from [87], which is more useful in our applications. Consider an ergodic, reversible Markov chain with stationary distribution π, described by an underlying graph H in which edge (i, j) has weight w_ij = π_i p_ij, where p_ij is the transition probability from state i to state j. Suppose we have constructed a collection of canonical paths, one between each ordered pair of distinct states, consisting of edges of non-zero weight in H (i.e., transitions in the chain). For a transition t from state i to state j, let w_t = w_ij and let P(t) denote the set of paths which contain t. The key quantity is

    b = max_t w_t^{-1} Σ_{(i,j)∈P(t)} π_i π_j .        (A.2)

Note that b is precisely the bottleneck parameter appearing in inequality (3.9) in the proof of Theorem 3.17. It measures the maximum loading of any edge by paths, and as such is a measure of constrictions in the chain. As we have seen, b arises very naturally out of the injective mapping technique of Chapter 3.

In [87] we prove the following direct bound on the second eigenvalue λ1
in terms of b:

    λ1 ≤ 1 - 1/(bℓ),        (A.3)

where ℓ is the length of a longest canonical path. This bound should be contrasted with the conductance-based bound used in Chapter 3, namely

    λ1 ≤ 1 - 1/(8b²),        (A.4)

which follows from Lemma 2.4 and the fact that Φ ≥ 1/2b (see (3.10)). In cases where ℓ is small compared with b, (A.3) will give a sharper bound than (A.4). We can exploit this fact to obtain improved bounds on the second eigenvalue of several important chains; of course, these translate directly to improved upper bounds on the runtime of algorithms which use the chains. This is done in [87] for the monomer-dimer chain MCmd(G) of Section 3.3, the perfect matchings chain MCpm(G) of Section 3.2, the chain of Section 4.1 based on the tree of derivations, and the chain mentioned above in connection with the Ising model. Diaconis and Stroock [25, Proposition 1] prove a bound similar to (A.3) using a rather different bottleneck measure b, and use it to obtain improved estimates for λ1 for a number of Markov chains with a symmetric structure, and for the perfect matchings chain MCpm(G). For a more detailed discussion and comparison of these bounds, see [87].

To get an idea of the improvements achieved, consider the monomer-dimer chain MCmd(G), whose states are the matchings in a graph G = (V, E). The proof of Theorem 3.17 shows that b ≤ 4|E|c²max for the canonical paths defined there, so the conductance bound (A.4) yields λ1 ≤ 1 - 1/(128|E|²c⁴max). However, it is clear that the maximum length of a canonical path is at most |V|, so (A.3) gives the sharper bound λ1 ≤ 1 - 1/(4|V||E|c²max). This leads to a factor 32|E|c²max/|V| improvement in the runtime bound of the algorithms based on this chain. For example, in the application to approximating the permanent discussed at the end of Section 3.3, the largest value required for cmax is the ratio |Mn-1(G)|/|Mn(G)| of the number of near-perfect matchings to the number of perfect matchings in G, where |V| = 2n. This quantity is at least n, and can be quite large in interesting cases: for random graphs of low density, the best known bound is n^10 (see the Remark following Corollary 3.15). As another example, recall the Markov chain of Section 4.1 based on the tree of derivations. Here (A.4) and the conductance bound of Lemma 4.2 give the bound λ1 ≤ 1 - 1/(32r⁴m²) on the second eigenvalue, where r is the ratio tolerated in the counting estimates and m is the depth of the tree. Using (the only possible) canonical paths, however, it is easy to check that b ≤ 8r²m and ℓ = 2m, so (A.3) yields the better estimate λ1 ≤ 1 - 1/(16r²m²). The improvement is a factor 2r².

Interestingly, the bottleneck measure on paths b, generalised slightly, provides a characterisation of the rapid mixing property analogous to the characterisation in terms of conductance presented in Chapter 2. The generalisation makes use of a multicommodity flow problem in the underlying graph H, viewed as a network, in which oriented edge t has capacity w_t and a quantity π_i π_j of commodity (i, j) has to be transported from i to j, for each ordered pair of states i and j.¹ The quality of a flow f is measured by the quantity

    max_t f(t)/w_t,        (A.5)

where f(t) is the total flow routed by f through t. Note that this is a generalisation of the definition (A.2) of b, in which multiple rather than canonical paths between states are allowed. (Compare also the notion of random paths introduced in Example 3.4 of Section 3.1.)

Now let ρ = ρ(H) denote the minimum value of the quantity (A.5) over all valid flows f; in [87], ρ is called the resistance of the Markov chain. Then it can be shown [87] that the resistance is almost the reciprocal of the conductance Φ = Φ(H), or more precisely that

    1/(2Φ) ≤ ρ ≤ O((log N)/Φ),

where N is the number of states. (This is actually a form of approximate max-flow min-cut theorem for the multicommodity flow problem, and is a straightforward generalisation of a result obtained in a different context by Leighton and Rao [63].) Thus the characterisation of rapid mixing in terms of Φ embodied in Corollary 2.8 holds also for ρ: a family of Markov chains MC(x) is rapidly mixing if and only if the resistance ρ is bounded above by a polynomial in |x|.

It is interesting to note that the problems of computing Φ and computing ρ are approximate duals of one another, and give us essentially the same information about λ1. However, for the purposes of obtaining upper bounds on λ1, ρ is much easier to work with: rather than arguing about Φ_S for all subsets S, it is sufficient to exhibit some flow which does not overload any edge. (The above max-flow min-cut theorem ensures that such a flow always exists.) Of course, this is precisely the mechanism we used to obtain our positive results in Chapter 3. A more detailed discussion of flows, and more examples, may be found in [87].

¹ Note that the meanings of the terms "flow" and "capacity" here are different from those in earlier chapters.
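The quantities in (A.2)-(A.4) are simple to evaluate for toy chains. The following sketch is our illustration (the function name and example are arbitrary): it computes the bottleneck parameter b and both eigenvalue bounds for a lazy nearest-neighbour walk on a four-vertex path, with the canonical path between two states routed straight along the line.

```python
def eigenvalue_bounds(pi, P, paths):
    """Return (b, bound_A3, bound_A4) for a reversible chain.

    pi    : stationary probabilities pi[i]
    P     : transition probabilities P[i][j]
    paths : canonical paths, paths[(i, j)] = list of transitions (u, v)
    """
    load = {}                      # sum of pi_i * pi_j over paths through t
    for (i, j), path in paths.items():
        for t in path:
            load[t] = load.get(t, 0.0) + pi[i] * pi[j]
    # b = max_t (1/w_t) * load(t), with w_t = pi_u * p_uv, as in (A.2)
    b = max(load[t] / (pi[t[0]] * P[t[0]][t[1]]) for t in load)
    ell = max(len(p) for p in paths.values())
    return b, 1 - 1 / (b * ell), 1 - 1 / (8 * b * b)

# Lazy walk on the path 0-1-2-3: uniform stationary distribution,
# canonical paths routed along the line for each ordered pair of states.
pi = [0.25] * 4
P = [[0.75, 0.25, 0.00, 0.00],
     [0.25, 0.50, 0.25, 0.00],
     [0.00, 0.25, 0.50, 0.25],
     [0.00, 0.00, 0.25, 0.75]]
paths = {}
for i in range(4):
    for j in range(4):
        if i < j:
            paths[(i, j)] = [(k, k + 1) for k in range(i, j)]
        elif i > j:
            paths[(i, j)] = [(k, k - 1) for k in range(i, j, -1)]

b, bound_a3, bound_a4 = eigenvalue_bounds(pi, P, paths)
```

Here b = 4 and ℓ = 3, so (A.3) gives the bound 1 - 1/12 while (A.4) gives only 1 - 1/128, illustrating the advantage of (A.3) when ℓ is small compared with b.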
Throughout this monograph we have worked with Markov chains which are reversible, i.e., which satisfy the detailed balance condition (tr1) on page 45. As we have seen, this condition is satisfied by most chains arising in the kinds of combinatorial applications of interest to us, including those used in the more recent results mentioned above. It is, however, interesting to ask whether the analytical tools developed here apply to more general, non-reversible chains. Note that our proof of the characterisation of rapid mixing in Chapter 2 relies heavily on reversibility to ensure the existence of an orthogonal basis of eigenvectors with real eigenvalues.

In the non-reversible case, the relation between eigenvalues and rate of convergence to stationarity is not so transparent. This question was studied by Mihail [75], who generalised the analysis of Chapter 2 to non-reversible chains using a direct combinatorial approach, dispensing with linear algebra altogether. Fill [35] gives an algebraic treatment of Mihail's result which is more readily related to our presentation. Let P be the transition matrix of an ergodic (not necessarily reversible) Markov chain with stationary distribution π. Define the multiplicative reversibilisation M(P) of P by M(P) = PP̃, where the matrix P̃ = (p̃_ij) is given by p̃_ij = π_j p_ji / π_i. Then M(P) is a reversible transition matrix, and so has real (in fact, non-negative) eigenvalues. Moreover, the rate of convergence of the original Markov chain depends geometrically on the second eigenvalue of M(P).² Since M(P) is reversible, its second eigenvalue can be bounded using the techniques of Chapters 2 and 3 and the refinements discussed above.

Fill carries through such an analysis for several non-reversible chains. The most significant example is the exclusion process corresponding to clockwise walk on the discrete circle Z_p, in which d = p/2 particles move randomly as follows. Initially the particles are distributed around the circle, with at most one particle per site, according to some probability distribution. At each time step, a randomly chosen particle attempts to move one position clockwise, but makes the move only if the new position is vacant. This Markov chain is ergodic, with uniform stationary distribution, but plainly not reversible. Fill uses the path counting techniques of Chapter 3, with the above refinements, to analyse the reversibilised chain, and hence to demonstrate for the first time that the exclusion process itself is rapidly mixing, in the sense that its distribution is close to uniform after a number of steps which is polynomial in p.

² Precisely, the variation distance between the distribution at time t and π is bounded above by λ1^{t/2}/(2π_i), where i is the initial state and λ1 is the second eigenvalue of M(P).
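Fill's construction is straightforward to carry out numerically. The sketch below is our illustration: it forms M(P) = PP̃ for a clockwise-biased walk on Z_3, a small non-reversible chain with uniform stationary distribution, and verifies that M(P) is a stochastic matrix satisfying detailed balance.

```python
def multiplicative_reversibilisation(P, pi):
    """Return M(P) = P * Pt, where Pt[i][j] = pi[j] * P[j][i] / pi[i].

    M(P) is a reversible transition matrix; its second eigenvalue
    governs the convergence rate of the original (possibly
    non-reversible) chain.
    """
    n = len(P)
    Pt = [[pi[j] * P[j][i] / pi[i] for j in range(n)] for i in range(n)]
    return [[sum(P[i][k] * Pt[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Clockwise-biased walk on Z_3: uniform stationary distribution, but
# plainly not reversible since pi_i * p_ij != pi_j * p_ji.
n = 3
pi = [1.0 / n] * n
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][(i + 1) % n] = 0.8
    P[i][(i - 1) % n] = 0.2

M = multiplicative_reversibilisation(P, pi)
```

With uniform π the detailed balance condition π_i M_ij = π_j M_ji reduces to symmetry of M, which the construction indeed produces here.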
137
BIBLIOGRAPHY
ume 5b (C. Domb and M.S. Green eds.), Academic Press, London, 1976, pp. 1-105.
Bibliography
l
¡13) Bollobás, B. A probabilstic proof of an asymptotic formula for the
i
number of labelled regular graphs. European Journal of Combinatorics 1 (1980), pp. 311-316.
¡14) Bollobás, B. Random Graphs. Academic Press, London, 1985.
¡I) Aldous, D. Random wals on finite groups and rapidly mixng
¡15) Brightwell, G. and Winker, P. Counting linear extensions. Preprint,
Markov chains. Séminaire de Probabilités XVII, 1981/82, Springer Lecture Notes in Mathematics 986, pp. 243-297.
1991. Preliminary version appeared in Proceedings of the 23rd ACM Symposium on Theory of Computing (1991), pp. 175-181.
¡2) Aldous, D. On the Markov chain simulation method for uniform combinatorial distributions and simulated annealing. Probability in the Engineering and Informational Sciences 1 (1987), pp. 33-46.
¡16) Broder, A.Z. How hard is it to marry at random? (On the approxi-
¡3) Aldous, D. and Diaconis, P. Shufing cards and stopping times.
of the 20th ACM Symposium on Theory of Computing (1988), p. 551.
American Mathematical Monthly 93 (1986), pp. 333-348.
¡4) Alon, N. Eigenvalues and expanders. Combinatorica 6 (1986), pp. 83-96.
¡5) Alon, N. and Milman, V.D. Ài, isoperimetric inequalities for graphs and superconcentrators. Journal of Combinatorial Theory Series B
38 (1985), pp. 73-88.
¡6) Angluin, D. and Valant, L.G. Fast probabilstic algorithms for Hamiltonian circuits and matchings. Journal of Computer and System Sciences 18 (1979), pp. 155-193.
¡7) Applegate, D. and Kanan, R Sampling and integration of near log-concave functions. Proceedings of the 23rd A CM Symposium on
Theory of Computing (1991), pp. 156-163.
¡8) Bach, E. How to generate random integers with known factorisation. Proceedings of the 15th ACM Symposium on Theory of Computing (1983), pp. 184-188. ¡9) Bárány, 1. and Füredi, Z. Computing the volume is diffcult. Pro-
ceedings of the 18th ACM Symposium on Theory of Computing (1986),
pp.442-447.
LO) Bender, B.A. Asymptotic methods in enumeration. SIAM Review 16 (1974), pp. 485-515.
Ll) Berretti, A. and Soka, A.D. New Monte Carlo method for the selfavoiding wal. Journal of Statistical Physics 40 (1985), pp. 483-531. L2) Binder, K. Monte Carlo investigations of phase transitions and critical phenomena. In Phase Transitions and Critical Phenomena, Vol-
mation of the permanent). Proceedings of the
18th A CM Symposium
on Theory of Computing (1986), pp. 50-58. Erratum in Proceedings
¡17) Broder, A.Z. and Shamir, E. On the second eigenvalue of random
regular graphs. Proceedings of the 28th IEEE Symposium on Foun-
dations of Computer Science (1987), pp. 286-294.
¡18) Cai, J.-y. and Hemachandra, L.A. Enumerative counting is hard. Information and Computation 82 (1989), pp. 34-43. ¡19) Cheeger, J. A lower bound for the smallest eigenvalue of the Lapla-
cian. In Problems in Analysis (RC. Gunning ed.), Princeton University Press, New Jersey, 1970, pp. 195-199.
¡20) Cipra, B. An introduction to the Ising modeL. American Mathematical Monthly 94 (1987), pp. 937-959. ¡21) Colbourn, C., Day, R and Nell, L. Unrankng and rankng spanng trees of a graph. Journal of Algorithms 10 (1989), pp. 271-286. ¡22) Dagum, P., Luby, M., Mihai, M. and Vazirani, U.V. Polytopes, per-
manents and graphs with large factors. Proceedings of the 29th IEEE
Symposium on Foundations of Computer Science (1988), pp. 412-421.
¡23) Diaconis, P. Group representations in probability and statistics. Lecture Notes Monograph Series VoL. 11, Institute of Mathematical Statistics, Hayward, Calfornia, 1988.
¡24) Diaconis, P. and Shahshahani, M. Generating a random permutation with random transpositions. Zeitschrift lÜr Wahrscheinlichkeitstheorie und verwandte Gebiete 57 (1981), pp. 159-179.
¡25) Diaconis, P. and Stroock, D. Geometric bounds for eigenvalues of Markov chains. Annals of Applied Probability 1 (1991), pp. 36-1.
i
.;
BIBLIOGRAPHY
:8
6) Dixon, J.D. and Wilf, H.S. The random selection of unlabelled graphs. Journal of Algorithms 4 (1983), pp. 205-213. 7J Dodziuk, J. Difference equations, isoperimetric inequality and tran-
sience of certain random walks. Transactions of the American Mathematical Society 284 (1984), pp. 787-794. 8) Dyer, M. and Frieze, A. Computing the volume of convex bodies: a case where randomness provably helps. In Probabilistic Combina-
torics and its Applications, Proceedings of AMS Symposia in Applied
Mathematics, Volume 44, 1991, pp. 123-170.
9) Dyer, M., Frieze, A. and Kannan, R. A random polynomial time algorithm for approximating the volume of convex bodies. Journal of the ACM 38 (1991), pp. 1-17. J) Edmonds, J. Paths, trees and flowers. Canadian Journal of Math-
BIBLIOGRAPHY
¡41) Hajek, B. Cooling schedules for optimal annealing. Mathematics of Operations Research 13 (1988), pp. 311-329. ¡42J Harary, F. and Paler, E.M. Graphical Enumeration: . Academic
Press, New York, 1973.
¡43) Heilmann, O.J. and Lieb, E.H. Theory of monomer-clmer systems. Communications in Mathematical Physics 25 (1972), pp. 190-232. ¡44) Hickey, T. and Cohen, J. Uniform random generation of strings in a context-free language. SIAM Journal on Computing 12 (1983), pp. 645-655.
¡45) Hopcroft, J.E. and Ullan, J.D. Introduction to Automata Theory, Languages and Computation. Addison-Wesley, Reading MA, 1979. ¡46) Jerrum, M.R. 2-dimensional monomer-dimer systems are computationaly intractable. Journal of Statistical Physics 48 (1987), pp. 121-
ematics 17 (1965), pp. 449-467.
1J Edwards, K. The complexity of colouring problems on dense graphs. Theoretical Computer Science 43 (1986), pp. 337-343.
134. ¡47)
combinatorial number theory. Monographie No. 28, L'Enseignement
Mathématique, Geneva, 1980.
rithms.. ¡48J
lJ Feder, T. and Mihail, M. Balanced matroids. Preprint, 1991. To
Computer Science, University of Edinburgh, June 1991. ¡49J
I) Feller, W. An introduction to probability theory and its applications, ¡50)
¡) Fisher, M.E. On the dimer solution of
¡51)
planar Ising models. Journal
of Mathematical Physics 7 (1966), pp. 1776-1781.
¡52)
Jerrum, M. R. and Sinclair, A. J. Polynomial-time approximation al-
gorithms for the Ising modeL. Technical Report CSR-I-90, Dept. of Computer Science, University of Edinburgh, February 1990. To ap-
Unpublished manuscript, Carnegie-Mellon University, 1988.
pear in SIAM Journal on Computing.
Garey, M.R. and Johnson, D.S. Computers and intractability. Free¡53)
Jerrum, M.R., Valiant, L.G. and Vazirani, V.V. Random generation of combinatorial structures from a uniform distribution. Theoretical
Gil, J. Computational complexity of probabilstic Thing machines.
Computer Science 43 (1986), pp. 169-188.
SIAM Journal on Computing 6 (1977), pp. 675-695.
Guénoche, A. Random spanning tree. Journal of Algorithms 4 (1983), pp. 214-220.
Jerrum, M. R. and Sinclair, A. J. Fast uniform generation ofreguar graphs. Theoretical Computer Science 73 (1990), pp. 91-100.
') Frieze, A. On random regular graphs with non-constant degree.
man, San Francisco, 1979.
Jerrum, M. R. and Sinclair, A. J. Approximating the permanent.
SIAM Journal on Computing 18 (1989), pp. 1149-1178.
reversible Markov chains, with an application to exclusion processes.
Annals of Applied Probability 1 (1991), pp. 62-87.
Jerrum, M.R., McKay, B.D. and Sinclair, A.J. When is a graphical sequence stable? In Random Graphs, Volume 2 (A. Frieze and T. Luczak eds.), Wiley, 1992, pp. 101-115.
Volume I (3rd ed.). John Wiley, New York, 1968.
iJ Fil, J .A. Eigenvalue bounds on convergence to stationarity for non-
Jerrum, M.R. An analysis of a Monte Carlo algorithm for estimating the permanent. Technical Report ECS-LFCS-91-164, Dept. of
the 24th ACM Symposium on Theory of Com-
puting (1992).
Jerrum, M.R. The elusiveness of cliques in a random graph. Technical Report CSR-9-90, Dept. of Computer Science, University of Edinburgh, September 1990. To appear in Random Structures & Algo-
2J Erdos, P. and Graham, R.L. Old and new problems and results in
appear in Proceedings of
139
¡54)
Jerrum, M.R. and Vazirani, U.V. A mildly exponential algorithm for the permanent. Technical Report ECS-LFCS-91-179, Dept. of
Computer Science, University of Edinburgh, October 1991.
.,
I.~
BIBLIOGRAPHY
40
)5) Karmarka, N., Karp, R.M., Lipton, R., Lovász, L. and Luby, M. A Monte Carlo algorithm for estimating the permanent. Preprint, 1988. To appear in Journal of Algorithms..
)6J Karp, R.M. and Luby, M. Monte-Carlo algorithms for enumeration and reliabilty problems. Proceedings of the 24th IEEE Symposium on Foundations of Computer Science (1983), pp. 56-4.
i7) Karzanov, A. and Khachiyan, L. On the conductance of order
Markov chains. Techncal Report DCS 268, Rutgers University, June
BIBLIOGRAPHY
¡68J Matthews, P. Generating a random linear extension of a partial order. Techncal Report, University of Marland (Baltimore County),
1989.
¡69) McKay, B.D. Asymptotics for symmetric 0-1 matrices with prescribed row sums. Ars Combinatoria 19A (1985), pp. 15-25. ¡70J McKay, B.D. and Wormald, N.C. Asymptotic enumeration by de-
gree sequence of graphs of high degree. European Journal of Combinatorics 11 (1990), pp. 565-580.
1990. ¡71) McKay, B.D. and Wormald, N.C. Uniform generation of
¡8J Kastelyn, P.W. Graph theory and crystal physics. In Graph Theory
and Theoretical Physics (F. Harary ed.), Academic Press, London, 1967, pp. 43-110.
141
random reg-
ular graphs of moderate degree. Journal of Algorithms 11 (1990), pp. 52-67.
¡72) McKay, B.D. and Wormald, N.C. Asymptotic enumeration by degree
i9) Keilson, J. Markov chain models-rarity and exponentiality. SpringerVerlag, New York, 1979.
sequence of graphs with degrees o(n1/2). Combinatorica 11 (1991),
10J Kirkpatrick, S., Gellatt, C.D. and Vecchi, M.P. Optimisation by sim-
¡73J Meyer, A.R. and Paterson, M.S. With what frequency are apparently
ulated annealng. Science 220 (May 1983), pp. 671-680.
,IJ Lautemann, C. On two computational models for probabilstic algorithms. Bericht-Nr. 82-15, Technische Universitât Berli, Fachbere-
…ich Informatik, November 1982, pp. 1-14.

[62] Lawler, G.F. and Sokal, A.D. Bounds on the L2 spectrum for Markov chains and Markov processes: a generalization of Cheeger's inequality. Transactions of the American Mathematical Society 309 (1988), pp. 557-580.

[63] Leighton, T. and Rao, S. An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms. Proceedings of the 29th IEEE Symposium on Foundations of Computer Science (1988), pp. 422-431.

[64] Lovász, L. Combinatorial Problems and Exercises. North-Holland, Amsterdam, 1979.

[65] Lovász, L. and Plummer, M.D. Matching Theory. North-Holland, Amsterdam, 1986.

[66] Lovász, L. and Simonovits, M. The mixing rate of Markov chains, an isoperimetric inequality, and computing the volume. Preprint 27/1990, Mathematical Institute of the Hungarian Academy of Sciences, March 1990.

[67] Lundy, M. and Mees, A. Convergence of an annealing algorithm. Mathematical Programming 34 (1986), pp. 111-124.

…intractable problems difficult? Report MIT/LCS/TM-126, Laboratory for Computer Science, M.I.T., February 1979.

[74] Mihail, M. On coupling and the approximation of the permanent. Information Processing Letters 30 (1989), pp. 91-95.

[75] Mihail, M. Conductance and convergence of Markov chains: a combinatorial treatment of expanders. Proceedings of the 30th IEEE Symposium on Foundations of Computer Science (1989), pp. 526-531.

[76] Mihail, M. and Winkler, P. On the number of Eulerian orientations of a graph. Proceedings of the 3rd ACM-SIAM Symposium on Discrete Algorithms (1992), pp. 138-145.

[77] Minc, H. Permanents. Addison-Wesley, Reading MA, 1978.

[78] Nijenhuis, A. and Wilf, H.S. Combinatorial Algorithms (2nd ed.). Academic Press, Orlando, 1978.

[79] Nijenhuis, A. and Wilf, H.S. The enumeration of connected graphs and linked diagrams. Journal of Combinatorial Theory Series A 27 (1979), pp. 356-359.

[80] Peleg, D. and Upfal, E. The token distribution problem. Proceedings of the 27th Symposium on Foundations of Computer Science (1987), pp. 418-427.

[81] Provan, J.S. and Ball, M.O. The complexity of counting cuts and computing the probability that a graph is connected. SIAM Journal on Computing 12 (1983), pp. 777-788.
BIBLIOGRAPHY
[82] Sasaki, G.H. and Hajek, B. The time complexity of maximum matching by simulated annealing. Journal of the ACM 35 (1988), pp. 387-403.

[83] Schnorr, C.P. Optimal algorithms for self-reducible problems. Proceedings of the 3rd International Colloquium on Automata, Languages and Programming (1976), pp. 322-337.

[84] Schnorr, C.P. On self-transformable combinatorial problems. Mathematical Programming Study 14 (1981), pp. 225-243.

[85] Seneta, E. Non-negative Matrices and Markov Chains (2nd ed.). Springer-Verlag, New York, 1981.

[86] Sinclair, A.J. Randomised algorithms for counting and generating combinatorial structures. PhD Thesis, Dept. of Computer Science, University of Edinburgh, June 1988.

[87] Sinclair, A.J. Improved bounds for mixing rates of Markov chains and multicommodity flow. Technical Report ECS-LFCS-91-178, Dept. of Computer Science, University of Edinburgh, October 1991. To appear in Combinatorics, Probability and Computing.

[88] Sinclair, A.J. and Jerrum, M.R. Approximate counting, uniform generation and rapidly mixing Markov chains. Information and Computation 82 (1989), pp. 93-133.

[89] Sipser, M. A complexity-theoretic approach to randomness. Proceedings of the 15th ACM Symposium on Theory of Computing (1983), pp. 330-335.

[90] Stockmeyer, L. The polynomial-time hierarchy. Theoretical Computer Science 3 (1977), pp. 1-22.

[91] Stockmeyer, L. The complexity of approximate counting. Proceedings of the 15th ACM Symposium on Theory of Computing (1983), pp. 118-126.

[92] Swendsen, R.H. and Wang, J.S. Non-universal critical dynamics in Monte Carlo simulations. Physical Review Letters 58 (1987), pp. 86-90.

[93] Tinhofer, G. On the generation of random graphs with given properties and known distribution. Appl. Comput. Sci., Ber. Prakt. Inf. 13 (1979), pp. 265-297.

[94] Ullman, J.D. Computational Aspects of VLSI. Computer Science Press, Rockville, 1984.

[95] Valiant, L.G. The complexity of combinatorial computations: an introduction. GI 8. Jahrestagung Informatik, Fachberichte Band 18 (1978), pp. 326-337.

[96] Valiant, L.G. The complexity of computing the permanent. Theoretical Computer Science 8 (1979), pp. 189-201.

[97] Valiant, L.G. The complexity of enumeration and reliability problems. SIAM Journal on Computing 8 (1979), pp. 410-421.

[98] Welsh, D.J.A. Randomised algorithms. Discrete Applied Mathematics 5 (1983), pp. 133-145.

[99] Wilf, H.S. The uniform selection of free trees. Journal of Algorithms 2 (1981), pp. 204-207.

[100] Wormald, N.C. Generating random regular graphs. Journal of Algorithms 5 (1984), pp. 247-280.

[101] Wormald, N.C. Generating random unlabelled graphs. SIAM Journal on Computing 16 (1987), pp. 717-727.
Index
#P, 9
almost uniform generation, 15
almost uniform generator for
  graphs with specified degrees, 119, 129
  linear extensions, 127
  matchings, 92
  perfect matchings, 76, 79, 83, 84, 92
  regular graphs, 119
almost W-generator for matchings, 91
alternating path, 83
approximable, 36
approximate counting, 14
approximation algorithm
  randomised, 14
  within ratio, 14
approximation scheme
  fully-polynomial (f.p.), 14
  randomised, 14
atom, 10
(l, B)-balanced, 118
bootstrapping
  of almost uniform generators, 111
  of approximate counters, 110
bottleneck parameter, 65, 89, 131
canonical path, 65
capacity, 50, 133
card-shuffling, 67
conductance, 50
connected spanning subgraphs, 98
construction problem, 9
counter, 14
  approximate within ratio, 14
  exact, 14
  fully-polynomial (f.p.), 14
  polynomially time-bounded, 14
  randomised approximate, 14
counting problem, 9
  weighted, 86
degree sequence, 115
dense
  bipartite graph, 72
  graph, 79
dimer, 85
DNFSAT, 107
efficient simulation, 22
eigenvalues, 47
enumeration problem, 9
ergodic flow, 50
Eulerian orientations, 129
excluded graph, 115
existence problem, 9
expansion, 55
failure, 15
flow, 133
fully-polynomial (f.p.)
  approximation scheme, 14
  counter, 14
  generator, 17
generation problem
  uniform, 9
  weighted, 87
generator
  almost uniform, 16
  almost uniform within tolerance, 16
  almost W, 87
  fully-polynomial (f.p.), 17
  polynomially time-bounded, 16
  uniform, 16
graphs with specified degrees, 97, 114, 129
hypercube, 64
independent sets, 38
injective mapping technique, 65
Ising model, 128
k-subsets of an n-set, 68
lattice, 96
linear extensions of a partial order, 98, 127
log-concavity, 80
magnification, 62
Markov chain, 44
  aperiodic, 45
  connected spanning subgraphs, 97, 130
  detailed balance, 46
  discrete circle (exclusion process), 134
  ergodic, 44
  family, 56
  graphs with specified degrees, 98
  irreducible, 45
  linear extensions, 98, 127
  matchings, 87, 132
  monomer-dimer configurations, 87, 132
  non-reversible, 134
  perfect and near-perfect matchings, 71
  permutations (card-shuffling), 67
  points in a convex body, 126
  reversible, 45
  spanning trees, 97, 130
  subgraphs (Ising model), 128
Markov chain on bit vectors, 64
Markov chain simulation paradigm, 56
matching, 11
  maximum, 99
  monomer-dimer configuration, 85
  near-perfect, 71
  perfect, 70
  weight of, 86
monomer, 85
monomer-dimer system, 85
multicommodity flow, 133
multiplicative reversibilisation, 134
NP-machine, 9
optimisation problem, 9
oracle coin machine (OCM), 18
P, 17
p-relation, 9
P-stable class, 129
pairing, 117
partial solution label, 11
partition function
  of a monomer-dimer system, 85
  of an Ising system, 128
  robustness of, 110
partitions of an integer, 25
permanent, 70
polynomial time hierarchy, 41
probabilistic Turing machine (PTM), 12
problem instance, 8
problem instance label, 11
pruned tree, 118
random graph, 84
randomised approximate counter for
  Eulerian orientations, 130
  graphs with specified degrees, 124, 129
  linear extensions, 127
  matchings, 92
  perfect matchings, 77, 79, 83, 84, 92
randomised approximation scheme for
  Ising partition function, 128
  monomer-dimer partition function, 91
  permanent, 77, 84, 92
  volume of a convex body, 126
rapid mixing, 56
  characterisation of, 58, 133
reduction
  from counting to generation, 33
  from generation to counting, 29, 102
regular graphs, 114
relation, 8
relative pointwise distance (r.p.d.), 45
  lower bound for, 55
  upper bound for, 53
resistance, 133
second eigenvalue, 49
  lower bound for, 53
  upper bound for, 50, 132
self-embeddability, 112
self-reducibility, 10
  for weighted problems, 86
self-reducibility tree, 12
simulated annealing, 99
solution, 8
  weight of, 86
solution set, 8
spanning trees, 97
stationary distribution, 45
tolerance, 16
tree of derivations, 11, 102, 132
trees with specified degrees, 32
unary problem, 37
underlying graph, 46
variation distance, 60
volume of a convex body, 126

Progress in Theoretical Computer Science

Editor
Ronald V. Book
Department of Mathematics
University of California
Santa Barbara, CA 93106

Editorial Board

Erwin Engeler
Mathematik
ETH Zentrum
CH-8092 Zurich, Switzerland

Gérard Huet
INRIA
Domaine de Voluceau-Rocquencourt
B.P. 105
78150 Le Chesnay Cedex, France

Jean-Pierre Jouannaud
Laboratoire de Recherche en Informatique
Bât. 490
Université de Paris-Sud
Centre d'Orsay
91405 Orsay Cedex, France

Robin Milner
Department of Computer Science
University of Edinburgh
Edinburgh EH9 3JZ, Scotland

Maurice Nivat
Université de Paris VII
2, place Jussieu
75251 Paris Cedex 05
France

Martin Wirsing
Fakultät für Mathematik und Informatik
Universität Passau
Postfach 2540
D-8390 Passau, Germany

Progress in Theoretical Computer Science is a series that focuses on the theoretical aspects of computer science and on the logical and mathematical foundations of computer science, as well as the applications of computer theory. It addresses itself to research workers and graduate students in computer and information science departments and research laboratories, as well as to departments of mathematics and electrical engineering where an interest in computer theory is found.

The series publishes research monographs, graduate texts, and polished lectures from seminars and lecture series. We encourage preparation of manuscripts in some form of TeX for delivery in camera-ready copy, which leads to rapid publication, or in electronic form for interfacing with laser printers or typesetters.

Proposals should be sent directly to the Editor, any member of the Editorial Board, or to: Birkhäuser Boston, 675 Massachusetts Avenue, Cambridge, MA 02139.

1. Leo Bachmair, Canonical Equational Proofs
2. Howard Karloff, Linear Programming
3. Ker-I Ko, Complexity Theory of Real Functions
4. Guo-Qiang Zhang, Logic of Domains
5. Thomas Streicher, Semantics of Type Theory: Correctness, Completeness and Independence Results
6. Julian Charles Bradfield, Verifying Temporal Properties of Systems
7. Alistair Sinclair, Algorithms for Random Generation and Counting: A Markov Chain Approach